Zhuoran Yang
This is a disambiguation page and is not intended to be the bibliography of an actual person. No publication listed on this page has yet been assigned to an actual author. If you know the true author of any of the publications listed below, you are welcome to contact us.
2020 – today
- 2024
- [j20] Chenjia Bai, Lingxiao Wang, Jianye Hao, Zhuoran Yang, Bin Zhao, Zhen Wang, Xuelong Li: Pessimistic value iteration for multi-task data sharing in Offline Reinforcement Learning. Artif. Intell. 326: 104048 (2024)
- [j19] Qi Cai, Zhuoran Yang, Jason D. Lee, Zhaoran Wang: Neural Temporal Difference and Q Learning Provably Converge to Global Optima. Math. Oper. Res. 49(1): 619-651 (2024)
- [j18] Zhi-Hong Deng, Zuyue Fu, Lingxiao Wang, Zhuoran Yang, Chenjia Bai, Tianyi Zhou, Zhaoran Wang, Jing Jiang: False Correlation Reduction for Offline Reinforcement Learning. IEEE Trans. Pattern Anal. Mach. Intell. 46(2): 1199-1211 (2024)
- [c117] Siyu Chen, Heejune Sheen, Tianhao Wang, Zhuoran Yang: Training Dynamics of Multi-Head Softmax Attention for In-Context Learning: Emergence, Convergence, and Optimality (extended abstract). COLT 2024: 4573
- [c116] Jianliang He, Han Zhong, Zhuoran Yang: Sample-efficient Learning of Infinite-horizon Average-reward MDPs with General Function Approximation. ICLR 2024
- [c115] Juno Kim, Kakei Yamamoto, Kazusato Oko, Zhuoran Yang, Taiji Suzuki: Symmetric Mean-field Langevin Dynamics for Distributional Minimax Problems. ICLR 2024
- [c114] Nuoya Xiong, Zhihan Liu, Zhaoran Wang, Zhuoran Yang: Sample-Efficient Multi-Agent RL: An Optimization Perspective. ICLR 2024
- [c113] Zehao Dou, Minshuo Chen, Mengdi Wang, Zhuoran Yang: Theory of Consistency Diffusion Models: Distribution Estimation Meets Fast Sampling. ICML 2024
- [c112] Jianliang He, Siyu Chen, Fengzhuo Zhang, Zhuoran Yang: From Words to Actions: Unveiling the Theoretical Underpinnings of LLM-Driven Autonomous Systems. ICML 2024
- [c111] Han Shen, Zhuoran Yang, Tianyi Chen: Principled Penalty-based Methods for Bilevel Reinforcement Learning and RLHF. ICML 2024
- [c110] Nuoya Xiong, Zhaoran Wang, Zhuoran Yang: A General Framework for Sequential Decision-Making under Adaptivity Constraints. ICML 2024
- [c109] Kakei Yamamoto, Kazusato Oko, Zhuoran Yang, Taiji Suzuki: Mean Field Langevin Actor-Critic: Faster Convergence and Global Optimality beyond Lazy Learning. ICML 2024
- [c108] Sirui Zheng, Chenjia Bai, Zhuoran Yang, Zhaoran Wang: How Does Goal Relabeling Improve Sample Efficiency? ICML 2024
- [i130] Han Shen, Zhuoran Yang, Tianyi Chen: Principled Penalty-based Methods for Bilevel Reinforcement Learning and RLHF. CoRR abs/2402.06886 (2024)
- [i129] Zihao Li, Boyi Liu, Zhuoran Yang, Zhaoran Wang, Mengdi Wang: Double Duality: Variational Primal-Dual Policy Optimization for Constrained Reinforcement Learning. CoRR abs/2402.10810 (2024)
- [i128] Siyu Chen, Heejune Sheen, Tianhao Wang, Zhuoran Yang: Training Dynamics of Multi-Head Softmax Attention for In-Context Learning: Emergence, Convergence, and Optimality. CoRR abs/2402.19442 (2024)
- [i127] Awni Altabaa, Zhuoran Yang: On the Role of Information Structure in Reinforcement Learning for Partially-Observable Sequential Teams and Games. CoRR abs/2403.00993 (2024)
- [i126] Hengyu Fu, Zhuoran Yang, Mengdi Wang, Minshuo Chen: Unveil Conditional Diffusion Models with Classifier-free Guidance: A Sharp Statistical Theory. CoRR abs/2403.11968 (2024)
- [i125] Yuchen Zhu, Yufeng Zhang, Zhaoran Wang, Zhuoran Yang, Xiaohong Chen: A Mean-Field Analysis of Neural Gradient Descent-Ascent: Applications to Functional Conditional Moment Equations. CoRR abs/2404.12312 (2024)
- [i124] Jianliang He, Han Zhong, Zhuoran Yang: Sample-efficient Learning of Infinite-horizon Average-reward MDPs with General Function Approximation. CoRR abs/2404.12648 (2024)
- [i123] Chenjia Bai, Lingxiao Wang, Jianye Hao, Zhuoran Yang, Bin Zhao, Zhen Wang, Xuelong Li: Pessimistic Value Iteration for Multi-Task Data Sharing in Offline Reinforcement Learning. CoRR abs/2404.19346 (2024)
- [i122] Chuanhao Li, Runhan Yang, Tiankai Li, Milad Bafarassat, Kourosh Sharifi, Dirk Bergemann, Zhuoran Yang: STRIDE: A Tool-Assisted LLM Agent Framework for Strategic and Interactive Decision-Making. CoRR abs/2405.16376 (2024)
- [i121] Jianliang He, Siyu Chen, Fengzhuo Zhang, Zhuoran Yang: From Words to Actions: Unveiling the Theoretical Underpinnings of LLM-Driven Autonomous Systems. CoRR abs/2405.19883 (2024)
- [i120] Zehao Dou, Minshuo Chen, Mengdi Wang, Zhuoran Yang: Provable Statistical Rates for Consistency Diffusion Models. CoRR abs/2406.16213 (2024)
- [i119] Xinyang Hu, Fengzhuo Zhang, Siyu Chen, Zhuoran Yang: Unveiling the Statistical Foundations of Chain-of-Thought Prompting Methods. CoRR abs/2408.14511 (2024)
- [i118] Siyu Chen, Heejune Sheen, Tianhao Wang, Zhuoran Yang: Unveiling Induction Heads: Provable Training Dynamics and Feature Learning in Transformers. CoRR abs/2409.10559 (2024)
- 2023
- [j17] Zhuoran Yang, Yuan Gao, Jingfang Deng, Lixiang Lv: Partial Discharge Characteristics and Growth Stage Recognition of Electrical Tree in XLPE Insulation. IEEE Access 11: 145527-145535 (2023)
- [j16] Han Zhong, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan: Can Reinforcement Learning Find Stackelberg-Nash Equilibria in General-Sum Markov Games with Myopically Rational Followers? J. Mach. Learn. Res. 24: 35:1-35:52 (2023)
- [j15] Zihao Li, Boyi Liu, Zhuoran Yang, Zhaoran Wang, Mengdi Wang: Double Duality: Variational Primal-Dual Policy Optimization for Constrained Reinforcement Learning. J. Mach. Learn. Res. 24: 385:1-385:43 (2023)
- [j14] Qiaomin Xie, Yudong Chen, Zhaoran Wang, Zhuoran Yang: Learning Zero-Sum Simultaneous-Move Markov Games Using Function Approximation and Correlated Equilibrium. Math. Oper. Res. 48(1): 433-462 (2023)
- [j13] Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan: Provably Efficient Reinforcement Learning with Linear Function Approximation. Math. Oper. Res. 48(3): 1496-1521 (2023)
- [j12] Nikola Banovic, Zhuoran Yang, Aditya Ramesh, Alice Liu: Being Trustworthy is Not Enough: How Untrustworthy Artificial Intelligence (AI) Can Deceive the End-Users and Gain Their Trust. Proc. ACM Hum. Comput. Interact. 7(CSCW1): 1-17 (2023)
- [j11] Mingyi Hong, Hoi-To Wai, Zhaoran Wang, Zhuoran Yang: A Two-Timescale Stochastic Algorithm Framework for Bilevel Optimization: Complexity Analysis and Application to Actor-Critic. SIAM J. Optim. 33(1): 147-180 (2023)
- [c107] Ruitu Xu, Yifei Min, Tianhao Wang, Michael I. Jordan, Zhaoran Wang, Zhuoran Yang: Finding Regularized Competitive Equilibria of Heterogeneous Agent Macroeconomic Models via Reinforcement Learning. AISTATS 2023: 375-407
- [c106] Yixuan Wang, Simon Sinong Zhan, Zhilu Wang, Chao Huang, Zhaoran Wang, Zhuoran Yang, Qi Zhu: Joint Differentiable Optimization and Verification for Certified Reinforcement Learning. ICCPS 2023: 132-141
- [c105] Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang: Represent to Control Partially Observed Systems: Representation Learning with Provable Sample Efficiency. ICLR 2023
- [c104] Miao Lu, Yifei Min, Zhaoran Wang, Zhuoran Yang: Pessimism in the Face of Confounders: Provably Efficient Offline Reinforcement Learning in Partially Observable Markov Decision Processes. ICLR 2023
- [c103] Zhuoqing Song, Jason D. Lee, Zhuoran Yang: Can We Find Nash Equilibria at a Linear Rate in Markov Games? ICLR 2023
- [c102] Haoran Xu, Li Jiang, Jianxiong Li, Zhuoran Yang, Zhaoran Wang, Wai Kin Victor Chan, Xianyuan Zhan: Offline RL with No OOD Actions: In-Sample Learning via Implicit Value Regularization. ICLR 2023
- [c101] Wenhao Zhan, Jason D. Lee, Zhuoran Yang: Decentralized Optimistic Hyperpolicy Mirror Descent: Provably No-Regret Learning in Markov Games. ICLR 2023
- [c100] Sirui Zheng, Lingxiao Wang, Shuang Qiu, Zuyue Fu, Zhuoran Yang, Csaba Szepesvári, Zhaoran Wang: Optimistic Exploration with Learned Features Provably Solves Markov Decision Processes with Neural Dynamics. ICLR 2023
- [c99] Siyu Chen, Jibang Wu, Yifan Wu, Zhuoran Yang: Learning to Incentivize Information Acquisition: Proper Scoring Rules Meet Principal-Agent Model. ICML 2023: 5194-5218
- [c98] Jiacheng Guo, Zihao Li, Huazheng Wang, Mengdi Wang, Zhuoran Yang, Xuezhou Zhang: Provably Efficient Representation Learning with Tractable Planning in Low-Rank POMDP. ICML 2023: 11967-11997
- [c97] Yixuan Wang, Simon Sinong Zhan, Ruochen Jiao, Zhilu Wang, Wanxin Jin, Zhuoran Yang, Zhaoran Wang, Chao Huang, Qi Zhu: Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments. ICML 2023: 36593-36604
- [c96] Yulai Zhao, Zhuoran Yang, Zhaoran Wang, Jason D. Lee: Local Optimization Achieves Global Optimality in Multi-Agent Reinforcement Learning. ICML 2023: 42200-42226
- [c95] Dongsheng Ding, Xiaohan Wei, Zhuoran Yang, Zhaoran Wang, Mihailo R. Jovanovic: Provably Efficient Generalized Lagrangian Policy Optimization for Safe Multi-Agent Reinforcement Learning. L4DC 2023: 315-332
- [c94] Haoran He, Chenjia Bai, Kang Xu, Zhuoran Yang, Weinan Zhang, Dong Wang, Bin Zhao, Xuelong Li: Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning. NeurIPS 2023
- [c93] Zhihan Liu, Miao Lu, Wei Xiong, Han Zhong, Hao Hu, Shenao Zhang, Sirui Zheng, Zhuoran Yang, Zhaoran Wang: Maximize to Explore: One Objective Function Fusing Estimation, Planning, and Exploration. NeurIPS 2023
- [c92] Shuang Qiu, Ziyu Dai, Han Zhong, Zhaoran Wang, Zhuoran Yang, Tong Zhang: Posterior Sampling for Competitive RL: Function Approximation and Partial Observation. NeurIPS 2023
- [c91] Fengzhuo Zhang, Vincent Y. F. Tan, Zhaoran Wang, Zhuoran Yang: Learning Regularized Monotone Graphon Mean-Field Games. NeurIPS 2023
- [c90] Zihan Zhu, Ethan Fang, Zhuoran Yang: Online Performative Gradient Descent for Learning Nash Equilibria in Decision-Dependent Games. NeurIPS 2023
- [c89] Banghua Zhu, Stephen Bates, Zhuoran Yang, Yixin Wang, Jiantao Jiao, Michael I. Jordan: The Sample Complexity of Online Contract Design. EC 2023: 1188
- [i117] Zhuoqing Song, Jason D. Lee, Zhuoran Yang: Can We Find Nash Equilibria at a Linear Rate in Markov Games? CoRR abs/2303.03095 (2023)
- [i116] Ruitu Xu, Yifei Min, Tianhao Wang, Zhaoran Wang, Michael I. Jordan, Zhuoran Yang: Finding Regularized Competitive Equilibria of Heterogeneous Agent Macroeconomic Models with Reinforcement Learning. CoRR abs/2303.04833 (2023)
- [i115] Siyu Chen, Jibang Wu, Yifan Wu, Zhuoran Yang: Learning to Incentivize Information Acquisition: Proper Scoring Rules Meet Principal-Agent Model. CoRR abs/2303.08613 (2023)
- [i114] Siyu Chen, Yitan Wang, Zhaoran Wang, Zhuoran Yang: A Unified Framework of Policy Learning for Contextual Bandit with Confounding Bias and Missing Observations. CoRR abs/2303.11187 (2023)
- [i113] Haoran Xu, Li Jiang, Jianxiong Li, Zhuoran Yang, Zhaoran Wang, Wai Kin Victor Chan, Xianyuan Zhan: Offline RL with No OOD Actions: In-Sample Learning via Implicit Value Regularization. CoRR abs/2303.15810 (2023)
- [i112] Yulai Zhao, Zhuoran Yang, Zhaoran Wang, Jason D. Lee: Local Optimization Achieves Global Optimality in Multi-Agent Reinforcement Learning. CoRR abs/2305.04819 (2023)
- [i111] Zhihan Liu, Miao Lu, Wei Xiong, Han Zhong, Hao Hu, Shenao Zhang, Sirui Zheng, Zhuoran Yang, Zhaoran Wang: One Objective to Rule Them All: A Maximization Objective Fusing Estimation and Planning for Exploration. CoRR abs/2305.18258 (2023)
- [i110] Zihao Li, Zhuoran Yang, Mengdi Wang: Reinforcement Learning with Human Feedback: Learning Dynamic Choices via Pessimism. CoRR abs/2305.18438 (2023)
- [i109] Haoran He, Chenjia Bai, Kang Xu, Zhuoran Yang, Weinan Zhang, Dong Wang, Bin Zhao, Xuelong Li: Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning. CoRR abs/2305.18459 (2023)
- [i108] Yufeng Zhang, Fengzhuo Zhang, Zhuoran Yang, Zhaoran Wang: What and How does In-Context Learning Learn? Bayesian Model Averaging, Parameterization, and Generalization. CoRR abs/2305.19420 (2023)
- [i107] Dongsheng Ding, Xiaohan Wei, Zhuoran Yang, Zhaoran Wang, Mihailo R. Jovanovic: Provably Efficient Generalized Lagrangian Policy Optimization for Safe Multi-Agent Reinforcement Learning. CoRR abs/2306.00212 (2023)
- [i106] Jiacheng Guo, Zihao Li, Huazheng Wang, Mengdi Wang, Zhuoran Yang, Xuezhou Zhang: Provably Efficient Representation Learning with Tractable Planning in Low-Rank POMDP. CoRR abs/2306.12356 (2023)
- [i105] Nuoya Xiong, Zhaoran Wang, Zhuoran Yang: A General Framework for Sequential Decision-Making under Adaptivity Constraints. CoRR abs/2306.14468 (2023)
- [i104] Pangpang Liu, Zhuoran Yang, Zhaoran Wang, Will Wei Sun: Contextual Dynamic Pricing with Strategic Buyers. CoRR abs/2307.04055 (2023)
- [i103] Siyu Chen, Mengdi Wang, Zhuoran Yang: Actions Speak What You Want: Provably Sample-Efficient Reinforcement Learning of the Quantal Stackelberg Equilibrium from Strategic Feedbacks. CoRR abs/2307.14085 (2023)
- [i102] Nuoya Xiong, Zhihan Liu, Zhaoran Wang, Zhuoran Yang: Sample-Efficient Multi-Agent RL: An Optimization Perspective. CoRR abs/2310.06243 (2023)
- [i101] Fengzhuo Zhang, Vincent Y. F. Tan, Zhaoran Wang, Zhuoran Yang: Learning Regularized Monotone Graphon Mean-Field Games. CoRR abs/2310.08089 (2023)
- [i100] Fengzhuo Zhang, Vincent Y. F. Tan, Zhaoran Wang, Zhuoran Yang: Learning Regularized Graphon Mean-Field Games with Unknown Graphons. CoRR abs/2310.17531 (2023)
- [i99] Shuang Qiu, Ziyu Dai, Han Zhong, Zhaoran Wang, Zhuoran Yang, Tong Zhang: Posterior Sampling for Competitive RL: Function Approximation and Partial Observation. CoRR abs/2310.19861 (2023)
- [i98] Jianqing Fan, Zhaoran Wang, Zhuoran Yang, Chenlu Ye: Provably Efficient High-Dimensional Bandit Learning with Batched Feedbacks. CoRR abs/2311.13180 (2023)
- [i97] Yixuan Wang, Ruochen Jiao, Chengtian Lang, Simon Sinong Zhan, Chao Huang, Zhaoran Wang, Zhuoran Yang, Qi Zhu: Empowering Autonomous Driving with Large Language Models: A Safety Perspective. CoRR abs/2312.00812 (2023)
- 2022
- [c88] Zehao Dou, Zhuoran Yang, Zhaoran Wang, Simon S. Du: Gap-Dependent Bounds for Two-Player Markov Games. AISTATS 2022: 432-455
- [c87] Chenjia Bai, Lingxiao Wang, Zhuoran Yang, Zhi-Hong Deng, Animesh Garg, Peng Liu, Zhaoran Wang: Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning. ICLR 2022
- [c86] Baihe Huang, Jason D. Lee, Zhaoran Wang, Zhuoran Yang: Towards General Function Approximation in Zero-Sum Markov Games. ICLR 2022
- [c85] Zhi Zhang, Zhuoran Yang, Han Liu, Pratap Tokekar, Furong Huang: Reinforcement Learning under a Multi-agent Predictive State Representation Model: Method and Theory. ICLR 2022
- [c84] Qi Cai, Zhuoran Yang, Zhaoran Wang: Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency. ICML 2022: 2485-2522
- [c83] Siyu Chen, Donglin Yang, Jiayang Li, Senmiao Wang, Zhuoran Yang, Zhaoran Wang: Adaptive Model Design for Markov Decision Process. ICML 2022: 3679-3700
- [c82] Xiaoyu Chen, Han Zhong, Zhuoran Yang, Zhaoran Wang, Liwei Wang: Human-in-the-loop: Provably Efficient Preference-based Reinforcement Learning with General Function Approximation. ICML 2022: 3773-3793
- [c81] Hongyi Guo, Qi Cai, Yufeng Zhang, Zhuoran Yang, Zhaoran Wang: Provably Efficient Offline Reinforcement Learning for Partially Observable Markov Decision Processes. ICML 2022: 8016-8038
- [c80] Zhihan Liu, Miao Lu, Zhaoran Wang, Michael I. Jordan, Zhuoran Yang: Welfare Maximization in Competitive Equilibrium: Reinforcement Learning for Markov Exchange Economy. ICML 2022: 13870-13911
- [c79] Zhihan Liu, Yufeng Zhang, Zuyue Fu, Zhuoran Yang, Zhaoran Wang: Learning from Demonstration: Provably Efficient Adversarial Policy Imitation with Linear Function Approximation. ICML 2022: 14094-14138
- [c78] Boxiang Lyu, Zhaoran Wang, Mladen Kolar, Zhuoran Yang: Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning. ICML 2022: 14601-14638
- [c77] Shuang Qiu, Lingxiao Wang, Chenjia Bai, Zhuoran Yang, Zhaoran Wang: Contrastive UCB: Provably Efficient Contrastive Self-Supervised Learning in Online Reinforcement Learning. ICML 2022: 18168-18210
- [c76] Han Zhong, Wei Xiong, Jiyuan Tan, Liwei Wang, Tong Zhang, Zhaoran Wang, Zhuoran Yang: Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium Learning from Offline Datasets. ICML 2022: 27117-27142
- [c75] Gene Li, Junbo Li, Anmol Kabra, Nati Srebro, Zhaoran Wang, Zhuoran Yang: Exponential Family Model-Based Reinforcement Learning via Score Matching. NeurIPS 2022
- [c74] Boyi Liu, Jiayang Li, Zhuoran Yang, Hoi-To Wai, Mingyi Hong, Yu (Marco) Nie, Zhaoran Wang: Inducing Equilibria via Incentives: Simultaneous Design-and-Play Ensures Global Convergence. NeurIPS 2022
- [c73] Yifei Min, Tianhao Wang, Ruitu Xu, Zhaoran Wang, Michael I. Jordan, Zhuoran Yang: Learn to Match with No Regret: Reinforcement Learning in Markov Matching Markets. NeurIPS 2022
- [c72] Grigoris Velegkas, Zhuoran Yang, Amin Karbasi: Reinforcement Learning with Logarithmic Regret and Policy Switches. NeurIPS 2022
- [c71] Tengyu Xu, Zhuoran Yang, Zhaoran Wang, Yingbin Liang: A Unifying Framework of Off-Policy General Value Function Evaluation. NeurIPS 2022
- [c70] Fengzhuo Zhang, Boyi Liu, Kaixin Wang, Vincent Y. F. Tan, Zhuoran Yang, Zhaoran Wang: Relational Reasoning via Set Transformers: Provable Efficiency and Applications to MARL. NeurIPS 2022
- [c69] Shichao Xu, Yangyang Fu, Yixuan Wang, Zhuoran Yang, Zheng O'Neill, Zhaoran Wang, Qi Zhu: Accelerate online reinforcement learning for building HVAC control with heterogeneous expert guidances. BuildSys@SenSys 2022: 89-98
- [c68] Jibang Wu, Zixuan Zhang, Zhe Feng, Zhaoran Wang, Zhuoran Yang, Michael I. Jordan, Haifeng Xu: Sequential Information Design: Markov Persuasion Process and Its Efficient Reinforcement Learning. EC 2022: 471-472
- [i96] Yixuan Wang, Chao Huang, Zhaoran Wang, Zhuoran Yang, Qi Zhu: Joint Differentiable Optimization and Verification for Certified Reinforcement Learning. CoRR abs/2201.12243 (2022)
- [i95] Han Zhong, Wei Xiong, Jiyuan Tan, Liwei Wang, Tong Zhang, Zhaoran Wang, Zhuoran Yang: Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium Learning from Offline Datasets. CoRR abs/2202.07511 (2022)
- [i94] Jibang Wu, Zixuan Zhang, Zhe Feng, Zhaoran Wang, Zhuoran Yang, Michael I. Jordan, Haifeng Xu: Sequential Information Design: Markov Persuasion Process and Its Efficient Reinforcement Learning. CoRR abs/2202.10678 (2022)
- [i93] Chenjia Bai, Lingxiao Wang, Zhuoran Yang, Zhi-Hong Deng, Animesh Garg, Peng Liu, Zhaoran Wang: Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning. CoRR abs/2202.11566 (2022)
- [i92] Boxiang Lyu, Qinglin Meng, Shuang Qiu, Zhaoran Wang, Zhuoran Yang, Michael I. Jordan: Learning Dynamic Mechanisms in Unknown Environments: A Reinforcement Learning Approach. CoRR abs/2202.12797 (2022)
- [i91] Grigoris Velegkas, Zhuoran Yang, Amin Karbasi: The Best of Both Worlds: Reinforcement Learning with Logarithmic Regret and Policy Switches. CoRR abs/2203.01491 (2022)
- [i90] Yifei Min, Tianhao Wang, Ruitu Xu, Zhaoran Wang, Michael I. Jordan, Zhuoran Yang: Learn to Match with No Regret: Reinforcement Learning in Markov Matching Markets. CoRR abs/2203.03684 (2022)
- [i89] Qi Cai, Zhuoran Yang, Zhaoran Wang: Sample-Efficient Reinforcement Learning for POMDPs with Linear Function Approximations. CoRR abs/2204.09787 (2022)
- [i88] Boxiang Lyu, Zhaoran Wang, Mladen Kolar, Zhuoran Yang: Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning. CoRR abs/2205.02450 (2022)
- [i87] Xiaoyu Chen, Han Zhong, Zhuoran Yang, Zhaoran Wang, Liwei Wang: Human-in-the-loop: Provably Efficient Preference-based Reinforcement Learning with General Function Approximation. CoRR abs/2205.11140 (2022)
- [i86] Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang: Embed to Control Partially Observed Systems: Representation Learning with Provable Sample Efficiency. CoRR abs/2205.13476 (2022)
- [i85] Miao Lu, Yifei Min, Zhaoran Wang, Zhuoran Yang: Pessimism in the Face of Confounders: Provably Efficient Offline Reinforcement Learning in Partially Observable Markov Decision Processes. CoRR abs/2205.13589 (2022)
- [i84] Wenhao Zhan, Jason D. Lee, Zhuoran Yang: Decentralized Optimistic Hyperpolicy Mirror Descent: Provably No-Regret Learning in Markov Games. CoRR abs/2206.01588 (2022)
- [i83] Shuang Qiu, Xiaohan Wei, Jieping Ye, Zhaoran Wang, Zhuoran Yang: Provably Efficient Fictitious Play Policy Optimization for Zero-Sum Markov Games with Structured Transitions. CoRR abs/2207.12463 (2022)
- [i82] Shuang Qiu, Lingxiao Wang, Chenjia Bai, Zhuoran Yang, Zhaoran Wang: Contrastive UCB: Provably Efficient Contrastive Self-Supervised Learning in Online Reinforcement Learning. CoRR abs/2207.14800 (2022)
- [i81] Mengxin Yu, Zhuoran Yang, Jianqing Fan: Strategic Decision-Making in the Presence of Information Asymmetry: Provably Efficient RL with Algorithmic Instruments. CoRR abs/2208.11040 (2022)
- [i80] Zuyue Fu, Zhengling Qi, Zhaoran Wang, Zhuoran Yang, Yanxun Xu, Michael R. Kosorok: Offline Reinforcement Learning with Instrumental Variables in Confounded Markov Decision Processes. CoRR abs/2209.08666 (2022)
- [i79] Fengzhuo Zhang, Boyi Liu, Kaixin Wang, Vincent Y. F. Tan, Zhuoran Yang, Zhaoran Wang: Relational Reasoning via Set Transformers: Provable Efficiency and Applications to MARL. CoRR abs/2209.09845 (2022)
- [i78] Yixuan Wang, Simon Sinong Zhan, Ruochen Jiao, Zhilu Wang, Wanxin Jin, Zhuoran Yang, Zhaoran Wang, Chao Huang, Qi Zhu: Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments. CoRR abs/2209.15090 (2022)
- [i77] Rui Ai, Boxiang Lyu, Zhaoran Wang, Zhuoran Yang, Michael I. Jordan: A Reinforcement Learning Approach in Multi-Phase Second-Price Auction Design. CoRR abs/2210.10278 (2022)
- [i76] Han Zhong, Wei Xiong, Sirui Zheng, Liwei Wang, Zhaoran Wang, Zhuoran Yang, Tong Zhang: GEC: A Unified Framework for Interactive Decision Making in MDP, POMDP, and Beyond. CoRR abs/2211.01962 (2022)
- [i75] Banghua Zhu, Stephen Bates, Zhuoran Yang, Yixin Wang, Jiantao Jiao, Michael I. Jordan: The Sample Complexity of Online Contract Design. CoRR abs/2211.05732 (2022)
- [i74] Ying Jin, Zhimei Ren, Zhuoran Yang, Zhaoran Wang: Policy learning "without" overlap: Pessimism and generalized empirical Bernstein's inequality. CoRR abs/2212.09900 (2022)
- [i73] Zuyue Fu, Zhengling Qi, Zhuoran Yang, Zhaoran Wang, Lan Wang: Offline Reinforcement Learning for Human-Guided Human-Machine Interaction with Private Information. CoRR abs/2212.12167 (2022)
- [i72] Riashat Islam, Samarth Sinha, Homanga Bharadhwaj, Samin Yeasar Arnob, Zhuoran Yang, Animesh Garg, Zhaoran Wang, Lihong Li, Doina Precup: Offline Policy Optimization in RL with Variance Regularization. CoRR abs/2212.14405 (2022)
- 2021
- [j10] Liya Fu, Zhuoran Yang, Jun Zhang, Anle Long, Yan Zhou: Generalized estimating equations for analyzing multivariate survival data. Commun. Stat. Simul. Comput. 50(10): 3060-3068 (2021)
- [j9] Liya Fu, Zhuoran Yang, Fengjing Cai, You-Gan Wang: Efficient and doubly-robust methods for variable selection and parameter estimation in longitudinal data analysis. Comput. Stat. 36(2): 781-804 (2021)
- [j8] Shuang Qiu, Zhuoran Yang, Jieping Ye, Zhaoran Wang: On Finite-Time Convergence of Actor-Critic Algorithm. IEEE J. Sel. Areas Inf. Theory 2(2): 652-664 (2021)
- [j7] Kaiqing Zhang, Zhuoran Yang, Tamer Basar: Decentralized multi-agent reinforcement learning with networked agents: recent advances. Frontiers Inf. Technol. Electron. Eng. 22(6): 802-814 (2021)
- [j6] Kaiqing Zhang, Zhuoran Yang, Han Liu, Tong Zhang, Tamer Basar: Finite-Sample Analysis for Decentralized Batch Multiagent Reinforcement Learning With Networked Agents. IEEE Trans. Autom. Control. 66(12): 5925-5940 (2021)
- [c67] Jiaheng Wei, Zuyue Fu, Yang Liu, Xingyu Li, Zhuoran Yang, Zhaoran Wang: Sample Elicitation. AISTATS 2021: 2692-2700
- [c66] Yufeng Zhang, Zhuoran Yang, Zhaoran Wang: Provably Efficient Actor-Critic for Risk-Sensitive and Robust Adversarial RL: A Linear-Quadratic Case. AISTATS 2021: 2764-2772
- [c65] Dongsheng Ding, Xiaohan Wei, Zhuoran Yang, Zhaoran Wang, Mihailo R. Jovanovic: Provably Efficient Safe Exploration via Primal-Dual Policy Optimization. AISTATS 2021: 3304-3312
- [c64] Zuyue Fu, Zhuoran Yang, Zhaoran Wang: Single-Timescale Actor-Critic Provably Finds Globally Optimal Policy. ICLR 2021
- [c63] Yingjie Fei, Zhuoran Yang, Zhaoran Wang: Risk-Sensitive Reinforcement Learning with Function Approximation: A Debiasing Approach. ICML 2021: 3198-3207
- [c62] Hongyi Guo, Zuyue Fu, Zhuoran Yang, Zhaoran Wang: Decentralized Single-Timescale Actor-Critic on Zero-Sum Two-Player Stochastic Games. ICML 2021: 3899-3909
- [c61] Haque Ishfaq, Qiwen Cui, Viet Nguyen, Alex Ayoub, Zhuoran Yang, Zhaoran Wang, Doina Precup, Lin Yang: Randomized Exploration in Reinforcement Learning with General Value Function Approximation. ICML 2021: 4607-4616
- [c60] Ying Jin, Zhuoran Yang, Zhaoran Wang: Is Pessimism Provably Efficient for Offline RL? ICML 2021: 5084-5096
- [c59] Lewis Liu, Yufeng Zhang, Zhuoran Yang, Reza Babanezhad, Zhaoran Wang: Infinite-Dimensional Optimization for Zero-Sum Games via Variational Transport. ICML 2021: 7033-7044
- [c58] Shuang Qiu, Xiaohan Wei, Jieping Ye, Zhaoran Wang, Zhuoran Yang: Provably Efficient Fictitious Play Policy Optimization for Zero-Sum Markov Games with Structured Transitions. ICML 2021: 8715-8725
- [c57] Shuang Qiu, Jieping Ye, Zhaoran Wang, Zhuoran Yang: On Reward-Free RL with Kernel and Neural Function Approximations: Single-Agent MDP and Markov Game. ICML 2021: 8737-8747
- [c56] Wesley Suttle, Kaiqing Zhang, Zhuoran Yang, Ji Liu, David N. Kraemer: Reinforcement Learning for Cost-Aware Markov Decision Processes. ICML 2021: 9989-9999
- [c55] Weichen Wang, Jiequn Han, Zhuoran Yang, Zhaoran Wang: Global Convergence of Policy Gradient for Linear-Quadratic Mean-Field Control/Game in Continuous Time. ICML 2021: 10772-10782
- [c54] Qiaomin Xie, Zhuoran Yang, Zhaoran Wang, Andreea Minca: Learning While Playing in Mean-Field Games: Convergence and Optimality. ICML 2021: 11436-11447
- [c53] Tengyu Xu, Zhuoran Yang, Zhaoran Wang, Yingbin Liang: Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality. ICML 2021: 11581-11591
- [c52] Jingwei Zhang, Zhuoran Yang, Zhengyuan Zhou, Zhaoran Wang: Provably Sample Efficient Reinforcement Learning in Competitive Linear Quadratic Systems. L4DC 2021: 597-598
- [c51] Boyi Liu, Qi Cai, Zhuoran Yang, Zhaoran Wang: BooVI: Provably Efficient Bootstrapped Value Iteration. NeurIPS 2021: 7041-7053
- [c50] Yufeng Zhang, Siyu Chen, Zhuoran Yang, Michael I. Jordan, Zhaoran Wang: Wasserstein Flow Meets Replicator Dynamics: A Mean-Field Analysis of Representation Learning in Actor-Critic. NeurIPS 2021: 15993-16006
- [c49] Minshuo Chen, Yan Li, Ethan Wang, Zhuoran Yang, Zhaoran Wang, Tuo Zhao: Pessimism Meets Invariance: Provably Efficient Offline Mean-Field Multi-Agent RL. NeurIPS 2021: 17913-17926
- [c48] Yingjie Fei, Zhuoran Yang, Yudong Chen, Zhaoran Wang: Exponential Bellman Equation and Improved Regret Bounds for Risk-Sensitive Reinforcement Learning. NeurIPS 2021: 20436-20446
- [c47] Lingxiao Wang, Zhuoran Yang, Zhaoran Wang: Provably Efficient Causal Reinforcement Learning with Confounded Observational Data. NeurIPS 2021: 21164-21175
- [c46] Runzhe Wu, Yufeng Zhang, Zhuoran Yang, Zhaoran Wang: Offline Constrained Multi-Objective Reinforcement Learning via Pessimistic Dual Value Iteration. NeurIPS 2021: 25439-25451
- [c45] Prashant Khanduri, Siliang Zeng, Mingyi Hong, Hoi-To Wai, Zhaoran Wang, Zhuoran Yang: A Near-Optimal Algorithm for Stochastic Bilevel Optimization via Double-Momentum. NeurIPS 2021: 30271-30283
- [i71] Prashant Khanduri, Siliang Zeng, Mingyi Hong, Hoi-To Wai, Zhaoran Wang, Zhuoran Yang: A Momentum-Assisted Single-Timescale Stochastic Approximation Algorithm for Bilevel Optimization. CoRR abs/2102.07367 (2021)
- [i70] Luofeng Liao, Zuyue Fu, Zhuoran Yang, Mladen Kolar, Zhaoran Wang: Instrumental Variable Value Iteration for Causal Offline Reinforcement Learning. CoRR abs/2102.09907 (2021)
- [i69] Tengyu Xu, Zhuoran Yang, Zhaoran Wang, Yingbin Liang: Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality. CoRR abs/2102.11866 (2021)
- [i68] Haque Ishfaq, Qiwen Cui, Viet Nguyen, Alex Ayoub, Zhuoran Yang, Zhaoran Wang, Doina Precup, Lin F. Yang: Randomized Exploration for Reinforcement Learning with General Value Function Approximation. CoRR abs/2106.07841 (2021)
- [i67] Zehao Dou, Zhuoran Yang, Zhaoran Wang, Simon S. Du: Gap-Dependent Bounds for Two-Player Markov Games. CoRR abs/2107.00685 (2021)
- [i66] Tengyu Xu, Zhuoran Yang, Zhaoran Wang, Yingbin Liang: A Unified Off-Policy Evaluation Approach for General Value Function. CoRR abs/2107.02711 (2021)
- [i65] Baihe Huang, Jason D. Lee, Zhaoran Wang, Zhuoran Yang: Towards General Function Approximation in Zero-Sum Markov Games. CoRR abs/2107.14702 (2021)
- [i64] Pratik Ramprasad, Yuantong Li, Zhuoran Yang, Zhaoran Wang, Will Wei Sun, Guang Cheng: Online Bootstrap Inference For Policy Evaluation in Reinforcement Learning. CoRR abs/2108.03706 (2021)
- [i63] Zhihan Liu, Yufeng Zhang, Zuyue Fu, Zhuoran Yang, Zhaoran Wang: Provably Efficient Generative Adversarial Imitation Learning for Online and Offline Setting with Linear Function Approximation. CoRR abs/2108.08765 (2021)
- [i62] Boyi Liu, Jiayang Li, Zhuoran Yang, Hoi-To Wai, Mingyi Hong, Yu Marco Nie, Zhaoran Wang: Inducing Equilibria via Incentives: Simultaneous Design-and-Play Finds Global Optima. CoRR abs/2110.01212 (2021)
- [i61] Han Zhong, Zhuoran Yang, Zhaoran Wang, Csaba Szepesvári: Optimistic Policy Optimization is Provably Efficient in Non-stationary MDPs. CoRR abs/2110.08984 (2021)
- [i60] Shuang Qiu, Jieping Ye, Zhaoran Wang, Zhuoran Yang: On Reward-Free RL with Kernel and Neural Function Approximations: Single-Agent MDP and Markov Game. CoRR abs/2110.09771 (2021)
- [i59] Zhi-Hong Deng, Zuyue Fu, Lingxiao Wang, Zhuoran Yang, Chenjia Bai, Zhaoran Wang, Jing Jiang: SCORE: Spurious COrrelation REduction for Offline Reinforcement Learning. CoRR abs/2110.12468 (2021)
- [i58] Yingjie Fei, Zhuoran Yang, Yudong Chen, Zhaoran Wang: Exponential Bellman Equation and Improved Regret Bounds for Risk-Sensitive Reinforcement Learning. CoRR abs/2111.03947 (2021)
- [i57] Xiao-Yang Liu, Zechu Li, Zhuoran Yang, Jiahao Zheng, Zhaoran Wang, Anwar Walid, Jian Guo, Michael I. Jordan: ElegantRL-Podracer: Scalable and Elastic Library for Cloud-Native Deep Reinforcement Learning. CoRR abs/2112.05923 (2021)
- [i56] Han Zhong, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan: Can Reinforcement Learning Find Stackelberg-Nash Equilibria in General-Sum Markov Games with Myopic Followers? CoRR abs/2112.13521 (2021)
- [i55] Yufeng Zhang, Siyu Chen, Zhuoran Yang, Michael I. Jordan, Zhaoran Wang: Wasserstein Flow Meets Replicator Dynamics: A Mean-Field Analysis of Representation Learning in Actor-Critic. CoRR abs/2112.13530 (2021)
- [i54] Gene Li, Junbo Li, Nathan Srebro, Zhaoran Wang, Zhuoran Yang: Exponential Family Model-Based Reinforcement Learning via Score Matching. CoRR abs/2112.14195 (2021)
- 2020
- [j5] Yubo Zhang, Zhuoran Yang, Jiuchun Yang, Yuanyuan Yang, Dongyan Wang, Yucong Zhang, Fengqin Yan, Lingxue Yu, Liping Chang, Shuwen Zhang: A Novel Model Integrating Deep Learning for Land Use/Cover Change Reconstruction: A Case Study of Zhenlai County, Northeast China. Remote. Sens. 12(20): 3314 (2020)
- [c44] Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan: Provably efficient reinforcement learning with linear function approximation. COLT 2020: 2137-2143
- [c43] Qiaomin Xie, Yudong Chen, Zhaoran Wang, Zhuoran Yang: Learning Zero-Sum Simultaneous-Move Markov Games Using Function Approximation and Correlated Equilibrium. COLT 2020: 3674-3682
- [c42] Minshuo Chen, Yizhou Wang, Tianyi Liu, Zhuoran Yang, Xingguo Li, Zhaoran Wang, Tuo Zhao: On Computation and Generalization of Generative Adversarial Imitation Learning. ICLR 2020
- [c41] Zuyue Fu, Zhuoran Yang, Yongxin Chen, Zhaoran Wang: Actor-Critic Provably Finds Nash Equilibria of Linear-Quadratic Mean-Field Games. ICLR 2020
- [c40] Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang: Neural Policy Gradient Methods: Global Optimality and Rates of Convergence. ICLR 2020
- [c39] Qi Cai, Zhuoran Yang, Chi Jin, Zhaoran Wang: Provably Efficient Exploration in Policy Optimization. ICML 2020: 1283-1294
- [c38] Sen Na, Yuwei Luo, Zhuoran Yang, Zhaoran Wang, Mladen Kolar: Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees. ICML 2020: 7141-7152
- [c37] Shuang Qiu, Xiaohan Wei, Zhuoran Yang: Robust One-Bit Recovery via ReLU Generative Networks: Near-Optimal Statistical Rate and Global Landscape Analysis. ICML 2020: 7857-7866
- [c36] Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang: On the Global Optimality of Model-Agnostic Meta-Learning. ICML 2020: 9837-9846
- [c35] Lingxiao Wang, Zhuoran Yang, Zhaoran Wang: Breaking the Curse of Many Agents: Provable Mean Embedding Q-Iteration for Mean-Field Reinforcement Learning. ICML 2020: 10092-10103
- [c34] Yufeng Zhang, Qi Cai, Zhuoran Yang, Zhaoran Wang: Generative Adversarial Imitation Learning with Neural Network Parameterization: Global Optimality and Convergence Rate. ICML 2020: 11044-11054
- [c33] Jianqing Fan, Zhaoran Wang, Yuchen Xie, Zhuoran Yang: A Theoretical Analysis of Deep Q-Learning. L4DC 2020: 486-489
- [c32] Yingjie Fei, Zhuoran Yang, Yudong Chen, Zhaoran Wang, Qiaomin Xie: Risk-Sensitive Reinforcement Learning: Near-Optimal Risk-Sample Tradeoff in Regret. NeurIPS 2020
- [c31] Yingjie Fei, Zhuoran Yang, Zhaoran Wang, Qiaomin Xie: Dynamic Regret of Policy Optimization in Non-Stationary Environments. NeurIPS 2020
- [c30] Wanxin Jin, Zhaoran Wang, Zhuoran Yang, Shaoshuai Mou: Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework. NeurIPS 2020
- [c29] Luofeng Liao, You-Lin Chen, Zhuoran Yang, Bo Dai, Mladen Kolar, Zhaoran Wang:
Provably Efficient Neural Estimation of Structural Equation Models: An Adversarial Approach. NeurIPS 2020 - [c28]Shuang Qiu, Xiaohan Wei, Zhuoran Yang, Jieping Ye, Zhaoran Wang:
Upper Confidence Primal-Dual Reinforcement Learning for CMDP with Adversarial Loss. NeurIPS 2020 - [c27]Hoi-To Wai, Zhuoran Yang, Zhaoran Wang, Mingyi Hong:
Provably Efficient Neural GTD for Off-Policy Learning. NeurIPS 2020 - [c26]Zhuoran Yang, Chi Jin, Zhaoran Wang, Mengdi Wang, Michael I. Jordan:
Provably Efficient Reinforcement Learning with Kernel and Neural Function Approximations. NeurIPS 2020 - [c25]Yufeng Zhang, Qi Cai, Zhuoran Yang, Yongxin Chen, Zhaoran Wang:
Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory. NeurIPS 2020 - [i53]Minshuo Chen, Yizhou Wang, Tianyi Liu, Zhuoran Yang, Xingguo Li, Zhaoran Wang, Tuo Zhao:
On Computation and Generalization of Generative Adversarial Imitation Learning. CoRR abs/2001.02792 (2020) - [i52]Qiaomin Xie, Yudong Chen, Zhaoran Wang, Zhuoran Yang:
Learning Zero-Sum Simultaneous-Move Markov Games Using Function Approximation and Correlated Equilibrium. CoRR abs/2002.07066 (2020) - [i51]Dongsheng Ding, Xiaohan Wei, Zhuoran Yang, Zhaoran Wang, Mihailo R. Jovanovic:
Provably Efficient Safe Exploration via Primal-Dual Policy Optimization. CoRR abs/2003.00534 (2020) - [i50]Shuang Qiu, Xiaohan Wei, Zhuoran Yang, Jieping Ye, Zhaoran Wang:
Upper Confidence Primal-Dual Optimization: Stochastically Constrained Markov Decision Processes with Adversarial Losses and Unknown Transitions. CoRR abs/2003.00660 (2020) - [i49]Sen Na, Yuwei Luo, Zhuoran Yang, Zhaoran Wang, Mladen Kolar:
Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees. CoRR abs/2003.01013 (2020) - [i48]Yufeng Zhang, Qi Cai, Zhuoran Yang, Zhaoran Wang:
Generative Adversarial Imitation Learning with Neural Networks: Global Optimality and Convergence Rate. CoRR abs/2003.03709 (2020) - [i47]Yufeng Zhang, Qi Cai, Zhuoran Yang, Yongxin Chen, Zhaoran Wang:
Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory. CoRR abs/2006.04761 (2020) - [i46]Wanxin Jin, Zhaoran Wang, Zhuoran Yang, Shaoshuai Mou:
Neural Certificates for Safe Control Policies. CoRR abs/2006.08465 (2020) - [i45]Lingxiao Wang, Zhuoran Yang, Zhaoran Wang:
Breaking the Curse of Many Agents: Provable Mean Embedding Q-Iteration for Mean-Field Reinforcement Learning. CoRR abs/2006.11917 (2020) - [i44]Lingxiao Wang, Zhuoran Yang, Zhaoran Wang:
Provably Efficient Causal Reinforcement Learning with Confounded Observational Data. CoRR abs/2006.12311 (2020) - [i43]Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang:
On the Global Optimality of Model-Agnostic Meta-Learning. CoRR abs/2006.13182 (2020) - [i42]Yingjie Fei, Zhuoran Yang, Yudong Chen, Zhaoran Wang, Qiaomin Xie:
Risk-Sensitive Reinforcement Learning: Near-Optimal Risk-Sample Tradeoff in Regret. CoRR abs/2006.13827 (2020) - [i41]Yingjie Fei, Zhuoran Yang, Zhaoran Wang, Qiaomin Xie:
Dynamic Regret of Policy Optimization in Non-stationary Environments. CoRR abs/2007.00148 (2020) - [i40]Luofeng Liao, You-Lin Chen, Zhuoran Yang, Bo Dai, Zhaoran Wang, Mladen Kolar:
Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach. CoRR abs/2007.01290 (2020) - [i39]Mingyi Hong, Hoi-To Wai, Zhaoran Wang, Zhuoran Yang:
A Two-Timescale Framework for Bilevel Optimization: Complexity Analysis and Application to Actor-Critic. CoRR abs/2007.05170 (2020) - [i38]Jianqing Fan, Zhuoran Yang, Mengxin Yu:
Understanding Implicit Regularization in Over-Parameterized Nonlinear Statistical Model. CoRR abs/2007.08322 (2020) - [i37]Zuyue Fu, Zhuoran Yang, Zhaoran Wang:
Single-Timescale Actor-Critic Provably Finds Globally Optimal Policy. CoRR abs/2008.00483 (2020) - [i36]Shuang Qiu, Zhuoran Yang, Xiaohan Wei, Jieping Ye, Zhaoran Wang:
Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning. CoRR abs/2008.10103 (2020) - [i35]Qiaomin Xie, Zhuoran Yang, Zhaoran Wang, Andreea Minca:
Provable Fictitious Play for General Mean-Field Games. CoRR abs/2010.04211 (2020) - [i34]Zhuoran Yang, Chi Jin, Zhaoran Wang, Mengdi Wang, Michael I. Jordan:
Bridging Exploration and General Function Approximation in Reinforcement Learning: Provably Efficient Kernel and Neural Value Iterations. CoRR abs/2011.04622 (2020) - [i33]Zhuoran Yang, Yufeng Zhang, Yongxin Chen, Zhaoran Wang:
Variational Transport: A Convergent Particle-Based Algorithm for Distributional Optimization. CoRR abs/2012.11554 (2020) - [i32]Han Zhong, Ethan X. Fang, Zhuoran Yang, Zhaoran Wang:
Risk-Sensitive Deep RL: Variance-Constrained Actor-Critic Provably Finds Globally Optimal Policy. CoRR abs/2012.14098 (2020) - [i31]Ying Jin, Zhuoran Yang, Zhaoran Wang:
Is Pessimism Provably Efficient for Offline RL? CoRR abs/2012.15085 (2020)
2010 – 2019
- 2019
- [j4]Zhonglei Li, Zhuoran Yang, Boxue Du:
Surface Charge Transport Characteristics of ZnO/Silicone Rubber Composites Under Impulse Superimposed on DC Voltage. IEEE Access 7: 3008-3017 (2019) - [j3]Sen Na, Zhuoran Yang, Zhaoran Wang, Mladen Kolar:
High-dimensional Varying Index Coefficient Models via Stein's Identity. J. Mach. Learn. Res. 20: 152:1-152:44 (2019) - [j2]Zhuoran Yang, Lin F. Yang, Ethan X. Fang, Tuo Zhao, Zhaoran Wang, Matey Neykov:
Misspecified nonconvex statistical optimization for sparse phase retrieval. Math. Program. 176(1-2): 545-571 (2019) - [c24]Wei Zhao, Yuanyuan Sun, Zhuoran Yang, Haozhen Li:
Design of Single Channel Speech Separation System Based on Deep Clustering Model. ICIS 2019: 399-402 - [c23]Yixuan Lin, Kaiqing Zhang, Zhuoran Yang, Zhaoran Wang, Tamer Basar, Romeil Sandhu, Ji Liu:
A Communication-Efficient Multi-Agent Actor-Critic Algorithm for Distributed Reinforcement Learning. CDC 2019: 5562-5567 - [c22]Kejun Huang, Zhuoran Yang, Zhaoran Wang, Mingyi Hong:
Learning Partially Observable Markov Decision Processes Using Coupled Canonical Polyadic Decomposition. DSW 2019: 295-299 - [c21]Yi Chen, Zhuoran Yang, Zhicong Ye, Hui Liu:
Research Character Analyzation of Urban Security Based on Urban Resilience Using Big Data Method. ICBDS 2019: 371-381 - [c20]Xiaohan Wei, Zhuoran Yang, Zhaoran Wang:
On the statistical rate of nonlinear recovery in generative models with heavy-tailed data. ICML 2019: 6697-6706 - [c19]Ming Yu, Zhuoran Yang, Mladen Kolar, Zhaoran Wang:
Convergent Policy Optimization for Safe Reinforcement Learning. NeurIPS 2019: 3121-3133 - [c18]Hoi-To Wai, Mingyi Hong, Zhuoran Yang, Zhaoran Wang, Kexin Tang:
Variance Reduced Policy Evaluation with Smooth Function Approximation. NeurIPS 2019: 5776-5787 - [c17]Zhuoran Yang, Yongxin Chen, Mingyi Hong, Zhaoran Wang:
Provably Global Convergence of Actor-Critic: A Case for Linear Quadratic Regulator with Ergodic Cost. NeurIPS 2019: 8351-8363 - [c16]Lingxiao Wang, Zhuoran Yang, Zhaoran Wang:
Statistical-Computational Tradeoff in Single Index Models. NeurIPS 2019: 10419-10426 - [c15]Boyi Liu, Qi Cai, Zhuoran Yang, Zhaoran Wang:
Neural Trust Region/Proximal Policy Optimization Attains Globally Optimal Policy. NeurIPS 2019: 10564-10575 - [c14]Qi Cai, Zhuoran Yang, Jason D. Lee, Zhaoran Wang:
Neural Temporal-Difference Learning Converges to Global Optima. NeurIPS 2019: 11312-11322 - [c13]Kaiqing Zhang, Zhuoran Yang, Tamer Basar:
Policy Optimization Provably Converges to Nash Equilibria in Zero-Sum Linear Quadratic Games. NeurIPS 2019: 11598-11610 - [i30]Zhuoran Yang, Yuchen Xie, Zhaoran Wang:
A Theoretical Analysis of Deep Q-Learning. CoRR abs/1901.00137 (2019) - [i29]Wesley Suttle, Zhuoran Yang, Kaiqing Zhang, Zhaoran Wang, Tamer Basar, Ji Liu:
A Multi-Agent Off-Policy Actor-Critic Algorithm for Distributed Reinforcement Learning. CoRR abs/1903.06372 (2019) - [i28]Qi Cai, Zhuoran Yang, Jason D. Lee, Zhaoran Wang:
Neural Temporal-Difference Learning Converges to Global Optima. CoRR abs/1905.10027 (2019) - [i27]Kaiqing Zhang, Zhuoran Yang, Tamer Basar:
Policy Optimization Provably Converges to Nash Equilibria in Zero-Sum Linear Quadratic Games. CoRR abs/1906.00729 (2019) - [i26]Boyi Liu, Qi Cai, Zhuoran Yang, Zhaoran Wang:
Neural Proximal/Trust Region Policy Optimization Attains Globally Optimal Policy. CoRR abs/1906.10306 (2019) - [i25]Yixuan Lin, Kaiqing Zhang, Zhuoran Yang, Zhaoran Wang, Tamer Basar, Romeil Sandhu, Ji Liu:
A Communication-Efficient Multi-Agent Actor-Critic Algorithm for Distributed Reinforcement Learning. CoRR abs/1907.03053 (2019) - [i24]Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan:
Provably Efficient Reinforcement Learning with Linear Function Approximation. CoRR abs/1907.05388 (2019) - [i23]Wesley Suttle, Zhuoran Yang, Kaiqing Zhang, Ji Liu:
Stochastic Convergence Results for Regularized Actor-Critic Methods. CoRR abs/1907.06138 (2019) - [i22]Zhuoran Yang, Yongxin Chen, Mingyi Hong, Zhaoran Wang:
On the Global Convergence of Actor-Critic: A Case for Linear Quadratic Regulator with Ergodic Cost. CoRR abs/1907.06246 (2019) - [i21]Xinyang Yi, Zhaoran Wang, Zhuoran Yang, Constantine Caramanis, Han Liu:
More Supervision, Less Computation: Statistical-Computational Tradeoffs in Weakly Supervised Learning. CoRR abs/1907.06257 (2019) - [i20]Dongsheng Ding, Xiaohan Wei, Zhuoran Yang, Zhaoran Wang, Mihailo R. Jovanovic:
Fast Multi-Agent Temporal-Difference Learning via Homotopy Stochastic Primal-Dual Optimization. CoRR abs/1908.02805 (2019) - [i19]Shuang Qiu, Xiaohan Wei, Zhuoran Yang:
Robust One-Bit Recovery via ReLU Generative Networks: Improved Statistical Rates and Global Landscape Analysis. CoRR abs/1908.05368 (2019) - [i18]Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang:
Neural Policy Gradient Methods: Global Optimality and Rates of Convergence. CoRR abs/1909.01150 (2019) - [i17]Yang Liu, Zuyue Fu, Zhuoran Yang, Zhaoran Wang:
Credible Sample Elicitation by Deep Learning, for Deep Learning. CoRR abs/1910.03155 (2019) - [i16]Zuyue Fu, Zhuoran Yang, Yongxin Chen, Zhaoran Wang:
Actor-Critic Provably Finds Nash Equilibria of Linear-Quadratic Mean-Field Games. CoRR abs/1910.07498 (2019) - [i15]Ming Yu, Zhuoran Yang, Mladen Kolar, Zhaoran Wang:
Convergent Policy Optimization for Safe Reinforcement Learning. CoRR abs/1910.12156 (2019) - [i14]Kaiqing Zhang, Zhuoran Yang, Tamer Basar:
Multi-Agent Reinforcement Learning: A Selective Overview of Theories and Algorithms. CoRR abs/1911.10635 (2019) - [i13]Kaiqing Zhang, Zhuoran Yang, Tamer Basar:
Decentralized Multi-Agent Reinforcement Learning with Networked Agents: Recent Advances. CoRR abs/1912.03821 (2019) - [i12]Qi Cai, Zhuoran Yang, Chi Jin, Zhaoran Wang:
Provably Efficient Exploration in Policy Optimization. CoRR abs/1912.05830 (2019) - [i11]Yuwei Luo, Zhuoran Yang, Zhaoran Wang, Mladen Kolar:
Natural Actor-Critic Converges Globally for Hierarchical Linear Quadratic Regulator. CoRR abs/1912.06875 (2019) - [i10]Wanxin Jin, Zhaoran Wang, Zhuoran Yang, Shaoshuai Mou:
Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework. CoRR abs/1912.12970 (2019) - 2018
- [j1]Zhuoran Yang, Yang Ning, Han Liu:
On Semiparametric Exponential Family Graphical Models. J. Mach. Learn. Res. 19: 57:1-57:59 (2018) - [c12]Kaiqing Zhang, Zhuoran Yang, Zhaoran Wang:
Nonlinear Structured Signal Estimation in High Dimensions via Iterative Hard Thresholding. AISTATS 2018: 258-268 - [c11]Zhuoran Yang, Kaiqing Zhang, Mingyi Hong, Tamer Basar:
A Finite Sample Analysis of the Actor-Critic Algorithm. CDC 2018: 2759-2764 - [c10]Kaiqing Zhang, Zhuoran Yang, Tamer Basar:
Networked Multi-Agent Reinforcement Learning in Continuous Spaces. CDC 2018: 2771-2776 - [c9]Kaiqing Zhang, Zhuoran Yang, Han Liu, Tong Zhang, Tamer Basar:
Fully Decentralized Multi-Agent Reinforcement Learning with Networked Agents. ICML 2018: 5867-5876 - [c8]Ming Yu, Zhuoran Yang, Tuo Zhao, Mladen Kolar, Zhaoran Wang:
Provable Gaussian Embedding with One Observation. NeurIPS 2018: 6765-6775 - [c7]Hoi-To Wai, Zhuoran Yang, Zhaoran Wang, Mingyi Hong:
Multi-Agent Reinforcement Learning via Double Averaging Primal-Dual Optimization. NeurIPS 2018: 9672-9683 - [c6]Yi Chen, Zhuoran Yang, Yuchen Xie, Zhaoran Wang:
Contrastive Learning from Pairwise Measurements. NeurIPS 2018: 10932-10941 - [i9]Kaiqing Zhang, Zhuoran Yang, Han Liu, Tong Zhang, Tamer Basar:
Fully Decentralized Multi-Agent Reinforcement Learning with Networked Agents. CoRR abs/1802.08757 (2018) - [i8]Hoi-To Wai, Zhuoran Yang, Zhaoran Wang, Mingyi Hong:
Multi-Agent Reinforcement Learning via Double Averaging Primal-Dual Optimization. CoRR abs/1806.00877 (2018) - [i7]Jianqing Fan, Han Liu, Zhaoran Wang, Zhuoran Yang:
Curse of Heterogeneity: Computational Barriers in Sparse Mixture Models and Phase Retrieval. CoRR abs/1808.06996 (2018) - [i6]Jiechao Xiong, Qing Wang, Zhuoran Yang, Peng Sun, Lei Han, Yang Zheng, Haobo Fu, Tong Zhang, Ji Liu, Han Liu:
Parametrized Deep Q-Networks Learning: Reinforcement Learning with Discrete-Continuous Hybrid Action Space. CoRR abs/1810.06394 (2018) - [i5]Sen Na, Zhuoran Yang, Zhaoran Wang, Mladen Kolar:
High-dimensional Varying Index Coefficient Models via Stein's Identity. CoRR abs/1810.07128 (2018) - [i4]Ming Yu, Zhuoran Yang, Tuo Zhao, Mladen Kolar, Zhaoran Wang:
Provable Gaussian Embedding with One Observation. CoRR abs/1810.11098 (2018) - [i3]Kaiqing Zhang, Zhuoran Yang, Han Liu, Tong Zhang, Tamer Basar:
Finite-Sample Analyses for Fully Decentralized Multi-Agent Reinforcement Learning. CoRR abs/1812.02783 (2018) - 2017
- [c5]Zhuoran Yang, Krishnakumar Balasubramanian, Han Liu:
High-dimensional Non-Gaussian Single Index Models via Thresholded Score Function Estimation. ICML 2017: 3851-3860 - [c4]Zhuoran Yang, Krishnakumar Balasubramanian, Zhaoran Wang, Han Liu:
Estimating High-dimensional Non-Gaussian Multiple Index Models via Stein's Lemma. NIPS 2017: 6097-6106 - [i2]Zhuoran Yang, Lin F. Yang, Ethan X. Fang, Tuo Zhao, Zhaoran Wang, Matey Neykov:
Misspecified Nonconvex Statistical Optimization for Phase Retrieval. CoRR abs/1712.06245 (2017) - 2016
- [c3]Zhuoran Yang, Zhaoran Wang, Han Liu, Yonina C. Eldar, Tong Zhang:
Sparse Nonlinear Regression: Parameter Estimation under Nonconvexity. ICML 2016: 2472-2481 - [c2]Xinyang Yi, Zhaoran Wang, Zhuoran Yang, Constantine Caramanis, Han Liu:
More Supervision, Less Computation: Statistical-Computational Tradeoffs in Weakly Supervised Learning. NIPS 2016: 4475-4483 - 2015
- [c1]Kwang-Sung Jun, Xiaojin Zhu, Timothy T. Rogers, Zhuoran Yang, Ming Yuan:
Human Memory Search as Initial-Visit Emitting Random Walk. NIPS 2015: 1072-1080 - [i1]Zhuoran Yang, Zhaoran Wang, Han Liu, Yonina C. Eldar, Tong Zhang:
Sparse Nonlinear Regression: Parameter Estimation and Asymptotic Inference. CoRR abs/1511.04514 (2015)
last updated on 2024-11-18 20:46 CET by the dblp team
all metadata released as open data under CC0 1.0 license