Zhewei Yao
2020 – today
- 2024
- [j6] Amir Gholami, Zhewei Yao, Sehoon Kim, Coleman Hooper, Michael W. Mahoney, Kurt Keutzer: AI and Memory Wall. IEEE Micro 44(3): 33-39 (2024)
- [c32] Conglong Li, Zhewei Yao, Xiaoxia Wu, Minjia Zhang, Connor Holmes, Cheng Li, Yuxiong He: DeepSpeed Data Efficiency: Improving Deep Learning Model Quality and Training Efficiency via Efficient Data Sampling and Routing. AAAI 2024: 18490-18498
- [c31] Zhewei Yao, Xiaoxia Wu, Cheng Li, Stephen Youn, Yuxiong He: Exploring Post-training Quantization in LLMs from Comprehensive Study to Low Rank Compensation. AAAI 2024: 19377-19385
- [c30] Guanhua Wang, Heyang Qin, Sam Ade Jacobs, Xiaoxia Wu, Connor Holmes, Zhewei Yao, Samyam Rajbhandari, Olatunji Ruwase, Feng Yan, Lei Yang, Yuxiong He: ZeRO++: Extremely Efficient Collective Communication for Large Model Training. ICLR 2024
- [c29] Haojun Xia, Zhen Zheng, Xiaoxia Wu, Shiyang Chen, Zhewei Yao, Stephen Youn, Arash Bakhtiari, Michael Wyatt, Donglin Zhuang, Zhongzhu Zhou, Olatunji Ruwase, Yuxiong He, Shuaiwen Leon Song: Quant-LLM: Accelerating the Serving of Large Language Models via FP6-Centric Algorithm-System Co-Design on Modern GPUs. USENIX ATC 2024: 699-713
- [i50] Haojun Xia, Zhen Zheng, Xiaoxia Wu, Shiyang Chen, Zhewei Yao, Stephen Youn, Arash Bakhtiari, Michael Wyatt, Donglin Zhuang, Zhongzhu Zhou, Olatunji Ruwase, Yuxiong He, Shuaiwen Leon Song: FP6-LLM: Efficiently Serving Large Language Models Through FP6-Centric Algorithm-System Co-Design. CoRR abs/2401.14112 (2024)
- [i49] Zhenyu Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, Zhangyang Wang: Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding. CoRR abs/2403.04797 (2024)
- [i48] Amir Gholami, Zhewei Yao, Sehoon Kim, Coleman Hooper, Michael W. Mahoney, Kurt Keutzer: AI and Memory Wall. CoRR abs/2403.14123 (2024)
- [i47] Jaeseong Lee, Seung-won Hwang, Aurick Qiao, Daniel F. Campos, Zhewei Yao, Yuxiong He: STUN: Structured-Then-Unstructured Pruning for Scalable MoE Pruning. CoRR abs/2409.06211 (2024)
- [i46] Aurick Qiao, Zhewei Yao, Samyam Rajbhandari, Yuxiong He: SwiftKV: Fast Prefill-Optimized Inference with Knowledge-Preserving Model Transformation. CoRR abs/2410.03960 (2024)
- 2023
- [c28] Sheng Shen, Zhewei Yao, Chunyuan Li, Trevor Darrell, Kurt Keutzer, Yuxiong He: Scaling Vision-Language Models with Sparse Mixture of Experts. EMNLP (Findings) 2023: 11329-11344
- [c27] Syed Zawad, Cheng Li, Zhewei Yao, Elton Zheng, Yuxiong He, Feng Yan: DySR: Adaptive Super-Resolution via Algorithm and System Co-design. ICLR 2023
- [c26] Xiaoxia Wu, Cheng Li, Reza Yazdani Aminabadi, Zhewei Yao, Yuxiong He: Understanding Int4 Quantization for Language Models: Latency Speedup, Composability, and Failure Cases. ICML 2023: 37524-37539
- [i45] Xiaoxia Wu, Cheng Li, Reza Yazdani Aminabadi, Zhewei Yao, Yuxiong He: Understanding INT4 Quantization for Transformer Models: Latency Speedup, Composability, and Failure Cases. CoRR abs/2301.12017 (2023)
- [i44] Sheng Shen, Zhewei Yao, Chunyuan Li, Trevor Darrell, Kurt Keutzer, Yuxiong He: Scaling Vision-Language Models with Sparse Mixture of Experts. CoRR abs/2303.07226 (2023)
- [i43] Zhewei Yao, Cheng Li, Xiaoxia Wu, Stephen Youn, Yuxiong He: A Comprehensive Study on Post-Training Quantization for Large Language Models. CoRR abs/2303.08302 (2023)
- [i42] Pareesa Ameneh Golnari, Zhewei Yao, Yuxiong He: Selective Guidance: Are All the Denoising Steps of Guided Diffusion Important? CoRR abs/2305.09847 (2023)
- [i41] Xiaoxia Wu, Zhewei Yao, Yuxiong He: ZeroQuant-FP: A Leap Forward in LLMs Post-Training W4A8 Quantization Using Floating-Point Formats. CoRR abs/2307.09782 (2023)
- [i40] Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He: DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales. CoRR abs/2308.01320 (2023)
- [i39] Fengxiang Bie, Yibo Yang, Zhongzhu Zhou, Adam Ghanem, Minjia Zhang, Zhewei Yao, Xiaoxia Wu, Connor Holmes, Pareesa Ameneh Golnari, David A. Clifton, Yuxiong He, Dacheng Tao, Shuaiwen Leon Song: RenAIssance: A Survey into AI Text-to-Image Generation in the Era of Large Model. CoRR abs/2309.00810 (2023)
- [i38] Zhewei Yao, Xiaoxia Wu, Conglong Li, Minjia Zhang, Heyang Qin, Olatunji Ruwase, Ammar Ahmad Awan, Samyam Rajbhandari, Yuxiong He: DeepSpeed-VisualChat: Multi-Round Multi-Image Interleave Chat via Multi-Modal Causal Attention. CoRR abs/2309.14327 (2023)
- [i37] Zhewei Yao, Reza Yazdani Aminabadi, Stephen Youn, Xiaoxia Wu, Elton Zheng, Yuxiong He: ZeroQuant-HERO: Hardware-Enhanced Robust Optimized Post-Training Quantization Framework for W8A8 Transformers. CoRR abs/2310.17723 (2023)
- [i36] Xiaoxia Wu, Haojun Xia, Stephen Youn, Zhen Zheng, Shiyang Chen, Arash Bakhtiari, Michael Wyatt, Reza Yazdani Aminabadi, Yuxiong He, Olatunji Ruwase, Leon Song, Zhewei Yao: ZeroQuant(4+2): Redefining LLMs Quantization with a New FP6-Centric Strategy for Diverse Generative Tasks. CoRR abs/2312.08583 (2023)
- 2022
- [c25] Sehoon Kim, Amir Gholami, Zhewei Yao, Nicholas Lee, Patrick Wang, Aniruddha Nrusimha, Bohan Zhai, Tianren Gao, Michael W. Mahoney, Kurt Keutzer: Integer-Only Zero-Shot Quantization for Efficient Speech Recognition. ICASSP 2022: 4288-4292
- [c24] Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, Kurt Keutzer: How Much Can CLIP Benefit Vision-and-Language Tasks? ICLR 2022
- [c23] Samyam Rajbhandari, Conglong Li, Zhewei Yao, Minjia Zhang, Reza Yazdani Aminabadi, Ammar Ahmad Awan, Jeff Rasley, Yuxiong He: DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale. ICML 2022: 18332-18346
- [c22] Xiaoxia Wu, Zhewei Yao, Minjia Zhang, Conglong Li, Yuxiong He: XTC: Extreme Compression for Pre-trained Transformers Made Simple and Efficient. NeurIPS 2022
- [c21] Zhewei Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, Yuxiong He: ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers. NeurIPS 2022
- [c20] Shixing Yu, Zhewei Yao, Amir Gholami, Zhen Dong, Sehoon Kim, Michael W. Mahoney, Kurt Keutzer: Hessian-Aware Pruning and Optimal Neural Implant. WACV 2022: 3665-3676
- [i35] Samyam Rajbhandari, Conglong Li, Zhewei Yao, Minjia Zhang, Reza Yazdani Aminabadi, Ammar Ahmad Awan, Jeff Rasley, Yuxiong He: DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale. CoRR abs/2201.05596 (2022)
- [i34] Xiaoxia Wu, Zhewei Yao, Minjia Zhang, Conglong Li, Yuxiong He: Extreme Compression for Pre-trained Transformers Made Simple and Efficient. CoRR abs/2206.01859 (2022)
- [i33] Zhewei Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, Yuxiong He: ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers. CoRR abs/2206.01861 (2022)
- [i32] Yuxin Ma, Ping Gong, Jun Yi, Zhewei Yao, Minjie Wang, Cheng Li, Yuxiong He, Feng Yan: BiFeat: Supercharge GNN Training via Graph Feature Quantization. CoRR abs/2207.14696 (2022)
- [i31] Zhewei Yao, Xiaoxia Wu, Conglong Li, Connor Holmes, Minjia Zhang, Cheng Li, Yuxiong He: Random-LTD: Random and Layerwise Token Dropping Brings Efficient Training for Large-scale Transformers. CoRR abs/2211.11586 (2022)
- [i30] Conglong Li, Zhewei Yao, Xiaoxia Wu, Minjia Zhang, Yuxiong He: DeepSpeed Data Efficiency: Improving Deep Learning Model Quality and Training Efficiency via Efficient Data Sampling and Routing. CoRR abs/2212.03597 (2022)
- 2021
- [j5] Zhewei Yao, Peng Xu, Fred Roosta, Michael W. Mahoney: Inexact Nonconvex Newton-Type Methods. INFORMS J. Optim. 3(2): 154-182 (2021)
- [c19] Zhewei Yao, Amir Gholami, Sheng Shen, Mustafa Mustafa, Kurt Keutzer, Michael W. Mahoney: ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning. AAAI 2021: 10665-10673
- [c18] Zhengming Zhang, Yaoqing Yang, Zhewei Yao, Yujun Yan, Joseph E. Gonzalez, Kannan Ramchandran, Michael W. Mahoney: Improving Semi-supervised Federated Learning by Reducing the Gradient Diversity of Models. IEEE BigData 2021: 1214-1225
- [c17] Sheng Shen, Zhewei Yao, Douwe Kiela, Kurt Keutzer, Michael W. Mahoney: What's Hidden in a One-layer Randomly Weighted Transformer? EMNLP (1) 2021: 2914-2921
- [c16] Jianfei Chen, Lianmin Zheng, Zhewei Yao, Dequan Wang, Ion Stoica, Michael W. Mahoney, Joseph Gonzalez: ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training. ICML 2021: 1803-1813
- [c15] Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer: I-BERT: Integer-only BERT Quantization. ICML 2021: 5506-5518
- [c14] Zhewei Yao, Zhen Dong, Zhangcheng Zheng, Amir Gholami, Jiali Yu, Eric Tan, Leyuan Wang, Qijing Huang, Yida Wang, Michael W. Mahoney, Kurt Keutzer: HAWQ-V3: Dyadic Neural Network Quantization. ICML 2021: 11875-11886
- [i29] Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer: I-BERT: Integer-only BERT Quantization. CoRR abs/2101.01321 (2021)
- [i28] Shixing Yu, Zhewei Yao, Amir Gholami, Zhen Dong, Michael W. Mahoney, Kurt Keutzer: Hessian-Aware Pruning and Optimal Neural Implant. CoRR abs/2101.08940 (2021)
- [i27] Amir Gholami, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer: A Survey of Quantization Methods for Efficient Neural Network Inference. CoRR abs/2103.13630 (2021)
- [i26] Sehoon Kim, Amir Gholami, Zhewei Yao, Aniruddha Nrusimha, Bohan Zhai, Tianren Gao, Michael W. Mahoney, Kurt Keutzer: Q-ASR: Integer-only Zero-shot Quantization for Efficient Speech Recognition. CoRR abs/2103.16827 (2021)
- [i25] Jianfei Chen, Lianmin Zheng, Zhewei Yao, Dequan Wang, Ion Stoica, Michael W. Mahoney, Joseph E. Gonzalez: ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training. CoRR abs/2104.14129 (2021)
- [i24] Zhewei Yao, Linjian Ma, Sheng Shen, Kurt Keutzer, Michael W. Mahoney: MLPruning: A Multilevel Structured Pruning Framework for Transformer-based Models. CoRR abs/2105.14636 (2021)
- [i23] Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, Kurt Keutzer: How Much Can CLIP Benefit Vision-and-Language Tasks? CoRR abs/2107.06383 (2021)
- [i22] Sheng Shen, Zhewei Yao, Douwe Kiela, Kurt Keutzer, Michael W. Mahoney: What's Hidden in a One-layer Randomly Weighted Transformer? CoRR abs/2109.03939 (2021)
- 2020
- [c13] Linjian Ma, Gabe Montague, Jiayu Ye, Zhewei Yao, Amir Gholami, Kurt Keutzer, Michael W. Mahoney: Inefficiency of K-FAC for Large Batch Size Training. AAAI 2020: 5053-5060
- [c12] Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, Kurt Keutzer: Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT. AAAI 2020: 8815-8821
- [c11] Zhewei Yao, Amir Gholami, Kurt Keutzer, Michael W. Mahoney: PyHessian: Neural Networks Through the Lens of the Hessian. IEEE BigData 2020: 581-590
- [c10] Yaohui Cai, Zhewei Yao, Zhen Dong, Amir Gholami, Michael W. Mahoney, Kurt Keutzer: ZeroQ: A Novel Zero Shot Quantization Framework. CVPR 2020: 13166-13175
- [c9] Qinxin Wang, Hao Tan, Sheng Shen, Michael W. Mahoney, Zhewei Yao: MAF: Multimodal Alignment Framework for Weakly-Supervised Phrase Grounding. EMNLP (1) 2020: 2030-2038
- [c8] Sheng Shen, Zhewei Yao, Amir Gholami, Michael W. Mahoney, Kurt Keutzer: PowerNorm: Rethinking Batch Normalization in Transformers. ICML 2020: 8741-8751
- [c7] N. Benjamin Erichson, Zhewei Yao, Michael W. Mahoney: JumpReLU: A Retrofit Defense Strategy for Adversarial Attacks. ICPRAM 2020: 103-114
- [c6] Jianfei Chen, Yu Gai, Zhewei Yao, Michael W. Mahoney, Joseph E. Gonzalez: A Statistical Framework for Low-bitwidth Training of Deep Neural Networks. NeurIPS 2020
- [c5] Zhen Dong, Zhewei Yao, Daiyaan Arfeen, Amir Gholami, Michael W. Mahoney, Kurt Keutzer: HAWQ-V2: Hessian Aware trace-Weighted Quantization of Neural Networks. NeurIPS 2020
- [i21] Yaohui Cai, Zhewei Yao, Zhen Dong, Amir Gholami, Michael W. Mahoney, Kurt Keutzer: ZeroQ: A Novel Zero Shot Quantization Framework. CoRR abs/2001.00281 (2020)
- [i20] Sheng Shen, Zhewei Yao, Amir Gholami, Michael W. Mahoney, Kurt Keutzer: Rethinking Batch Normalization in Transformers. CoRR abs/2003.07845 (2020)
- [i19] Zhewei Yao, Amir Gholami, Sheng Shen, Kurt Keutzer, Michael W. Mahoney: ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning. CoRR abs/2006.00719 (2020)
- [i18] Zhengming Zhang, Zhewei Yao, Yaoqing Yang, Yujun Yan, Joseph E. Gonzalez, Michael W. Mahoney: Benchmarking Semi-supervised Federated Learning. CoRR abs/2008.11364 (2020)
- [i17] Qinxin Wang, Hao Tan, Sheng Shen, Michael W. Mahoney, Zhewei Yao: MAF: Multimodal Alignment Framework for Weakly-Supervised Phrase Grounding. CoRR abs/2010.05379 (2020)
- [i16] Jianfei Chen, Yu Gai, Zhewei Yao, Michael W. Mahoney, Joseph E. Gonzalez: A Statistical Framework for Low-bitwidth Training of Deep Neural Networks. CoRR abs/2010.14298 (2020)
- [i15] Zhewei Yao, Zhen Dong, Zhangcheng Zheng, Amir Gholami, Jiali Yu, Eric Tan, Leyuan Wang, Qijing Huang, Yida Wang, Michael W. Mahoney, Kurt Keutzer: HAWQV3: Dyadic Neural Network Quantization. CoRR abs/2011.10680 (2020)
2010 – 2019
- 2019
- [j4] Zhewei Yao, Feng Yang, Jing Huang, Yaqin Liu: Improved CycleGANs for Intravascular Ultrasound Image Enhancement (改进型循环生成对抗网络的血管内超声图像增强). Computer Science (计算机科学) 46(5): 221-227 (2019)
- [c4] Zhewei Yao, Amir Gholami, Peng Xu, Kurt Keutzer, Michael W. Mahoney: Trust Region Based Adversarial Attack on Neural Networks. CVPR 2019: 11350-11359
- [c3] Zhen Dong, Zhewei Yao, Amir Gholami, Michael W. Mahoney, Kurt Keutzer: HAWQ: Hessian AWare Quantization of Neural Networks With Mixed-Precision. ICCV 2019: 293-302
- [c2] Tianjun Zhang, Zhewei Yao, Amir Gholami, Joseph E. Gonzalez, Kurt Keutzer, Michael W. Mahoney, George Biros: ANODEV2: A Coupled Neural ODE Framework. NeurIPS 2019: 5152-5162
- [i14] N. Benjamin Erichson, Lionel Mathelin, Zhewei Yao, Steven L. Brunton, Michael W. Mahoney, J. Nathan Kutz: Shallow Learning for Fluid Flow Reconstruction with Limited Sensors and Limited Data. CoRR abs/1902.07358 (2019)
- [i13] Linjian Ma, Gabe Montague, Jiayu Ye, Zhewei Yao, Amir Gholami, Kurt Keutzer, Michael W. Mahoney: Inefficiency of K-FAC for Large Batch Size Training. CoRR abs/1903.06237 (2019)
- [i12] N. Benjamin Erichson, Zhewei Yao, Michael W. Mahoney: JumpReLU: A Retrofit Defense Strategy for Adversarial Attacks. CoRR abs/1904.03750 (2019)
- [i11] Zhen Dong, Zhewei Yao, Amir Gholami, Michael W. Mahoney, Kurt Keutzer: HAWQ: Hessian AWare Quantization of Neural Networks with Mixed-Precision. CoRR abs/1905.03696 (2019)
- [i10] Kai Rothauge, Zhewei Yao, Zixi Hu, Michael W. Mahoney: Residual Networks as Nonlinear Systems: Stability Analysis using Linearization. CoRR abs/1905.13386 (2019)
- [i9] Tianjun Zhang, Zhewei Yao, Amir Gholami, Kurt Keutzer, Joseph Gonzalez, George Biros, Michael W. Mahoney: ANODEV2: A Coupled Neural ODE Evolution Framework. CoRR abs/1906.04596 (2019)
- [i8] Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, Kurt Keutzer: Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT. CoRR abs/1909.05840 (2019)
- [i7] Zhen Dong, Zhewei Yao, Yaohui Cai, Daiyaan Arfeen, Amir Gholami, Michael W. Mahoney, Kurt Keutzer: HAWQ-V2: Hessian Aware trace-Weighted Quantization of Neural Networks. CoRR abs/1911.03852 (2019)
- [i6] Zhewei Yao, Amir Gholami, Kurt Keutzer, Michael W. Mahoney: PyHessian: Neural Networks Through the Lens of the Hessian. CoRR abs/1912.07145 (2019)
- 2018
- [c1] Zhewei Yao, Amir Gholami, Qi Lei, Kurt Keutzer, Michael W. Mahoney: Hessian-based Analysis of Large Batch Training and Robustness to Adversaries. NeurIPS 2018: 4954-4964
- [i5] Zhewei Yao, Amir Gholami, Qi Lei, Kurt Keutzer, Michael W. Mahoney: Hessian-based Analysis of Large Batch Training and Robustness to Adversaries. CoRR abs/1802.08241 (2018)
- [i4] Zhewei Yao, Amir Gholami, Kurt Keutzer, Michael W. Mahoney: Large batch size training of neural networks with adversarial training and second-order information. CoRR abs/1810.01021 (2018)
- [i3] Noah Golmant, Nikita Vemuri, Zhewei Yao, Vladimir Feinberg, Amir Gholami, Kai Rothauge, Michael W. Mahoney, Joseph Gonzalez: On the Computational Inefficiency of Large Batch Sizes for Stochastic Gradient Descent. CoRR abs/1811.12941 (2018)
- [i2] Norman Mu, Zhewei Yao, Amir Gholami, Kurt Keutzer, Michael W. Mahoney: Parameter Re-Initialization through Cyclical Batch Size Schedules. CoRR abs/1812.01216 (2018)
- [i1] Zhewei Yao, Amir Gholami, Peng Xu, Kurt Keutzer, Michael W. Mahoney: Trust Region Based Adversarial Attack on Neural Networks. CoRR abs/1812.06371 (2018)
- 2017
- [j3] Shujun Liang, Feng Yang, Tiexiang Wen, Zhewei Yao, Qinghua Huang, Chengke Ye: Nonlocal total variation based on symmetric Kullback-Leibler divergence for the ultrasound image despeckling. BMC Medical Imaging 17(1): 57:1-57:12 (2017)
- [j2] Zixi Hu, Zhewei Yao, Jinglai Li: On an adaptive preconditioned Crank-Nicolson MCMC algorithm for infinite dimensional Bayesian inference. J. Comput. Phys. 332: 492-503 (2017)
- [j1] Qingping Zhou, Zixi Hu, Zhewei Yao, Jinglai Li: A Hybrid Adaptive MCMC Algorithm in Function Spaces. SIAM/ASA J. Uncertain. Quantification 5(1): 621-639 (2017)