Emma Brunskill
Person information
- affiliation: Stanford University, CA, USA
- affiliation (former): Carnegie Mellon University, Pittsburgh, USA
2020 – today
- 2024
- [j9]Sherry Ruan, Allen Nie, William Steenbergen, Jiayu He, J. Q. Zhang, Meng Guo, Yao Liu, Kyle Dang Nguyen, Catherine Y. Wang, Rui Ying, James A. Landay, Emma Brunskill:
Reinforcement learning tutor better supported lower performers in a math task. Mach. Learn. 113(5): 3023-3048 (2024) - [j8]Jonathan Lee, Weihao Kong, Aldo Pacchiano, Vidya Muthukumar, Emma Brunskill:
Estimating Optimal Policy Value in Linear Contextual Bandits Beyond Gaussianity. Trans. Mach. Learn. Res. 2024 (2024) - [c137]Scott L. Fleming, Alejandro Lozano, William J. Haberkorn, Jenelle A. Jindal, Eduardo Pontes Reis, Rahul Thapa, Louis Blankemeier, Julian Z. Genkins, Ethan Steinberg, Ashwin Nayak, Birju S. Patel, Chia-Chun Chiang, Alison Callahan, Zepeng Huo, Sergios Gatidis, Scott J. Adams, Oluseyi Fayanju, Shreya J. Shah, Thomas Savage, Ethan Goh, Akshay S. Chaudhari, Nima Aghaeepour, Christopher D. Sharp, Michael A. Pfeffer, Percy Liang, Jonathan H. Chen, Keith E. Morse, Emma P. Brunskill, Jason A. Fries, Nigam H. Shah:
MedAlign: A Clinician-Generated Dataset for Instruction Following with Electronic Medical Records. AAAI 2024: 22021-22030 - [c136]Joy He-Yueya, Noah D. Goodman, Emma Brunskill:
Evaluating and Optimizing Educational Content with Large Language Model Judgments. EDM 2024 - [c135]Ryan Louie, Ananjan Nandi, William Fang, Cheng Chang, Emma Brunskill, Diyi Yang:
Roleplay-doh: Enabling Domain-Experts to Create LLM-simulated Patients via Eliciting and Adhering to Principles. EMNLP 2024: 10570-10603 - [c134]Yash Chandak, Shiv Shankar, Vasilis Syrgkanis, Emma Brunskill:
Adaptive Instrument Design for Indirect Experiments. ICLR 2024 - [c133]Danielle R. Thomas, Jionghao Lin, Erin Gatz, Ashish Gurung, Shivang Gupta, Kole Norberg, Stephen E. Fancsali, Vincent Aleven, Lee G. Branstetter, Emma Brunskill, Kenneth R. Koedinger:
Improving Student Learning with Hybrid Human-AI Tutoring: A Three-Study Quasi-Experimental Investigation. LAK 2024: 404-415 - [c132]Amelia Leon, Allen Nie, Yash Chandak, Emma Brunskill:
Estimating the Causal Treatment Effect of Unproductive Persistence. LAK 2024: 843-849 - [c131]Emma Brunskill, Kole A. Norberg, Stephen Fancsali, Steven Ritter:
Examining the Use of an AI-Powered Teacher Orchestration Tool at Scale. L@S 2024: 356-360 - [c130]Alan Y. Cheng, Ellie Tanimura, Joseph Tey, Andrew C. Wu, Emma Brunskill:
Brief, Just-in-Time Teaching Tips to Support Computer Science Tutors. SIGCSE (1) 2024: 200-206 - [c129]Evan Zheran Liu, David Yuan, Ahmed Ahmed, Elyse Cornwall, Juliette Woodrow, Kaylee Burns, Allen Nie, Emma Brunskill, Chris Piech, Chelsea Finn:
A Fast and Accurate Machine Learning Autograder for the Breakout Assignment. SIGCSE (1) 2024: 736-742 - [c128]Yuchen Hu, Henry Zhu, Emma Brunskill, Stefan Wager:
Minimax-Regret Sample Selection in Randomized Experiments. EC 2024: 1209-1235 - [i85]Aldo Pacchiano, Jonathan N. Lee, Emma Brunskill:
Experiment Planning with Function Approximation. CoRR abs/2401.05193 (2024) - [i84]Joy He-Yueya, Noah D. Goodman, Emma Brunskill:
Evaluating and Optimizing Educational Content with Large Language Model Judgments. CoRR abs/2403.02795 (2024) - [i83]Sanath Kumar Krishnamurthy, Susan Athey, Emma Brunskill:
Data-driven Error Estimation: Upper Bounding Multiple Errors with No Technical Debt. CoRR abs/2405.04636 (2024) - [i82]Matthew Jörke, Shardul Sapkota, Lyndsea Warkenthien, Niklas Vainio, Paul Schmiedmayer, Emma Brunskill, James A. Landay:
Supporting Physical Activity Behavior Change with LLM-Based Conversational Agents. CoRR abs/2405.06061 (2024) - [i81]Allen Nie, Yash Chandak, Christina J. Yuan, Anirudhan Badrinath, Yannis Flet-Berliac, Emma Brunskill:
OPERA: Automatic Offline Policy Evaluation with Re-weighted Aggregates of Multiple Estimators. CoRR abs/2405.17708 (2024) - [i80]Ryan Louie, Ananjan Nandi, William Fang, Cheng Chang, Emma Brunskill, Diyi Yang:
Roleplay-doh: Enabling Domain-Experts to Create LLM-simulated Patients via Eliciting and Adhering to Principles. CoRR abs/2407.00870 (2024) - [i79]Hyunji Alex Nam, Yash Chandak, Emma Brunskill:
Short-Long Policy Evaluation with Novel Actions. CoRR abs/2407.03674 (2024) - [i78]Allen Nie, Yash Chandak, Miroslav Suzara, Ali Malik, Juliette Woodrow, Matt Peng, Mehran Sahami, Emma Brunskill, Chris Piech:
The GPT Surprise: Offering Large Language Model Chat in a Massive Coding Class Reduced Engagement but Increased Adopters Exam Performances. CoRR abs/2407.09975 (2024) - [i77]Joy He-Yueya, Wanjing Anya Ma, Kanishk Gandhi, Benjamin W. Domingue, Emma Brunskill, Noah D. Goodman:
Psychometric Alignment: Capturing Human Knowledge Distributions via Language Models. CoRR abs/2407.15645 (2024) - [i76]Ge Gao, Amelia Leon, Andrea Jetten, Jasmine Turner, Husni Almoubayyed, Stephen Fancsali, Emma Brunskill:
Predicting Long-Term Student Outcomes from Short-Term EdTech Log Data. CoRR abs/2412.15473 (2024) - [i75]Hyunji Nam, Allen Nie, Ge Gao, Vasilis Syrgkanis, Emma Brunskill:
Predicting Long Term Sequential Policy Value Using Softer Surrogates. CoRR abs/2412.20638 (2024)
- 2023
- [c127]Kefan Dong, Yannis Flet-Berliac, Allen Nie, Emma Brunskill:
Model-Based Offline Reinforcement Learning with Local Misspecification. AAAI 2023: 7423-7431 - [c126]Vincent Aleven, Richard G. Baraniuk, Emma Brunskill, Scott A. Crossley, Dora Demszky, Stephen Fancsali, Shivang Gupta, Kenneth R. Koedinger, Chris Piech, Steven Ritter, Danielle R. Thomas, Simon Woodhead, Wanli Xing:
Towards the Future of AI-Augmented Human Tutoring in Math Learning. AIED (Posters/Late Breaking Results/...) 2023: 26-31 - [c125]Allen Nie, Ann-Katrin Reuel, Emma Brunskill:
Understanding the Impact of Reinforcement Learning Personalization on Subgroups of Students in Math Tutoring. AIED (Posters/Late Breaking Results/...) 2023: 688-694 - [c124]Jonathan Lee, Annie Xie, Aldo Pacchiano, Yash Chandak, Chelsea Finn, Ofir Nachum, Emma Brunskill:
Supervised Pretraining Can Learn In-Context Reinforcement Learning. NeurIPS 2023 - [c123]Anirudhan Badrinath, Yannis Flet-Berliac, Allen Nie, Emma Brunskill:
Waypoint Transformer: Reinforcement Learning via Supervised Learning with Intermediate Targets. NeurIPS 2023 - [c122]Sanath Kumar Krishnamurthy, Ruohan Zhan, Susan Athey, Emma Brunskill:
Proportional Response: Contextual Bandits for Simple and Cumulative Regret Minimization. NeurIPS 2023 - [c121]Aldo Pacchiano, Jonathan Lee, Emma Brunskill:
Experiment Planning with Function Approximation. NeurIPS 2023 - [e1]Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, Jonathan Scarlett:
International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA. Proceedings of Machine Learning Research 202, PMLR 2023 [contents] - [i74]Yash Chandak, Shiv Shankar, Nathaniel D. Bastian, Bruno Castro da Silva, Emma Brunskill, Philip S. Thomas:
Off-Policy Evaluation for Action-Dependent Non-Stationary Environments. CoRR abs/2301.10330 (2023) - [i73]Kefan Dong, Yannis Flet-Berliac, Allen Nie, Emma Brunskill:
Model-based Offline Reinforcement Learning with Local Misspecification. CoRR abs/2301.11426 (2023) - [i72]Jonathan N. Lee, Weihao Kong, Aldo Pacchiano, Vidya Muthukumar, Emma Brunskill:
Estimating Optimal Policy Value in General Linear Contextual Bandits. CoRR abs/2302.09451 (2023) - [i71]Sherry Ruan, Allen Nie, William Steenbergen, Jiayu He, JQ Zhang, Meng Guo, Yao Liu, Kyle Dang Nguyen, Catherine Y. Wang, Rui Ying, James A. Landay, Emma Brunskill:
Reinforcement Learning Tutor Better Supported Lower Performers in a Math Task. CoRR abs/2304.04933 (2023) - [i70]Anirudhan Badrinath, Yannis Flet-Berliac, Allen Nie, Emma Brunskill:
Waypoint Transformer: Reinforcement Learning via Supervised Learning with Intermediate Targets. CoRR abs/2306.14069 (2023) - [i69]Jonathan N. Lee, Annie Xie, Aldo Pacchiano, Yash Chandak, Chelsea Finn, Ofir Nachum, Emma Brunskill:
Supervised Pretraining Can Learn In-Context Reinforcement Learning. CoRR abs/2306.14892 (2023) - [i68]Sanath Kumar Krishnamurthy, Ruohan Zhan, Susan Athey, Emma Brunskill:
Proportional Response: Contextual Bandits for Simple and Cumulative Regret Minimization. CoRR abs/2307.02108 (2023) - [i67]Scott L. Fleming, Alejandro Lozano, William J. Haberkorn, Jenelle A. Jindal, Eduardo Pontes Reis, Rahul Thapa, Louis Blankemeier, Julian Z. Genkins, Ethan Steinberg, Ashwin Nayak, Birju S. Patel, Chia-Chun Chiang, Alison Callahan, Zepeng Huo, Sergios Gatidis, Scott J. Adams, Oluseyi Fayanju, Shreya J. Shah, Thomas Savage, Ethan Goh, Akshay S. Chaudhari, Nima Aghaeepour, Christopher D. Sharp, Michael A. Pfeffer, Percy Liang, Jonathan H. Chen, Keith E. Morse, Emma P. Brunskill, Jason A. Fries, Nigam H. Shah:
MedAlign: A Clinician-Generated Dataset for Instruction Following with Electronic Medical Records. CoRR abs/2308.14089 (2023) - [i66]Aishwarya Mandyam, Matthew Jörke, Barbara E. Engelhardt, Emma Brunskill:
Adaptive Interventions with User-Defined Goals for Health Behavior Change. CoRR abs/2311.09483 (2023) - [i65]Yash Chandak, Shiv Shankar, Vasilis Syrgkanis, Emma Brunskill:
Adaptive Instrument Design for Indirect Experiments. CoRR abs/2312.02438 (2023) - [i64]Danielle R. Thomas, Jionghao Lin, Erin Gatz, Ashish Gurung, Shivang Gupta, Kole Norberg, Stephen E. Fancsali, Vincent Aleven, Lee G. Branstetter, Emma Brunskill, Kenneth R. Koedinger:
Improving Student Learning with Hybrid Human-AI Tutoring: A Three-Study Quasi-Experimental Investigation. CoRR abs/2312.11274 (2023)
- 2022
- [c120]Tong Mu, Georgios Theocharous, David Arbour, Emma Brunskill:
Constraint Sampling Reinforcement Learning: Incorporating Expertise for Faster Learning. AAAI 2022: 7841-7849 - [c119]Ramtin Keramati, Omer Gottesman, Leo Anthony Celi, Finale Doshi-Velez, Emma Brunskill:
Identification of Subgroups With Similar Benefits in Off-Policy Policy Evaluation. CHIL 2022: 397-410 - [c118]Yash Chandak, Shiv Shankar, Nathaniel D. Bastian, Bruno C. da Silva, Emma Brunskill, Philip S. Thomas:
Off-Policy Evaluation for Action-Dependent Non-stationary Environments. NeurIPS 2022 - [c117]Jonathan N. Lee, George Tucker, Ofir Nachum, Bo Dai, Emma Brunskill:
Oracle Inequalities for Model Selection in Offline Reinforcement Learning. NeurIPS 2022 - [c116]Evan Zheran Liu, Moritz Stephan, Allen Nie, Chris Piech, Emma Brunskill, Chelsea Finn:
Giving Feedback on Interactive Student Programs with Meta-Exploration. NeurIPS 2022 - [c115]Tong Mu, Yash Chandak, Tatsunori B. Hashimoto, Emma Brunskill:
Factored DRO: Factored Distributionally Robust Policies for Contextual Bandits. NeurIPS 2022 - [c114]Allen Nie, Yannis Flet-Berliac, Deon R. Jordan, William Steenbergen, Emma Brunskill:
Data-Efficient Pipeline for Offline Reinforcement Learning with Limited Data. NeurIPS 2022 - [c113]Yao Liu, Yannis Flet-Berliac, Emma Brunskill:
Offline policy optimization with eligible actions. UAI 2022: 1253-1263 - [i63]Yao Liu, Yannis Flet-Berliac, Emma Brunskill:
Offline Policy Optimization with Eligible Actions. CoRR abs/2207.00632 (2022) - [i62]Allen Nie, Yannis Flet-Berliac, Deon R. Jordan, William Steenbergen, Emma Brunskill:
Data-Efficient Pipeline for Offline Reinforcement Learning with Limited Data. CoRR abs/2210.08642 (2022) - [i61]Jonathan N. Lee, George Tucker, Ofir Nachum, Bo Dai, Emma Brunskill:
Oracle Inequalities for Model Selection in Offline Reinforcement Learning. CoRR abs/2211.02016 (2022) - [i60]Evan Zheran Liu, Moritz Stephan, Allen Nie, Chris Piech, Emma Brunskill, Chelsea Finn:
Giving Feedback on Interactive Student Programs with Meta-Exploration. CoRR abs/2211.08802 (2022)
- 2021
- [c112]Jonathan N. Lee, Aldo Pacchiano, Vidya Muthukumar, Weihao Kong, Emma Brunskill:
Online Model Selection for Reinforcement Learning with Function Approximation. AISTATS 2021: 3340-3348 - [c111]Tong Mu, Shuhan Wang, Erik Andersen, Emma Brunskill:
Automatic Adaptive Sequencing in a Webgame. ITS 2021: 430-438 - [c110]Sherry Ruan, Liwei Jiang, Qianyao Xu, Zhiyuan Liu, Glenn M. Davis, Emma Brunskill, James A. Landay:
EnglishBot: An AI-Powered Conversational System for Second Language Learning. IUI 2021: 434-444 - [c109]Jiayu Yao, Emma Brunskill, Weiwei Pan, Susan A. Murphy, Finale Doshi-Velez:
Power Constrained Bandits. MLHC 2021: 209-259 - [c108]Allen Nie, Emma Brunskill, Chris Piech:
Play to Grade: Testing Coding Games as Classifying Markov Decision Process. NeurIPS 2021: 1506-1518 - [c107]Andrea Zanette, Martin J. Wainwright, Emma Brunskill:
Provable Benefits of Actor-Critic Methods for Offline Reinforcement Learning. NeurIPS 2021: 13626-13640 - [c106]Hyunji Alex Nam, Scott L. Fleming, Emma Brunskill:
Reinforcement Learning with State Observation Costs in Action-Contingent Noiselessly Observable Markov Decision Processes. NeurIPS 2021: 15650-15666 - [c105]Andrea Zanette, Kefan Dong, Jonathan N. Lee, Emma Brunskill:
Design of Experiments for Stochastic Contextual Linear Bandits. NeurIPS 2021: 22720-22731 - [c104]Yash Chandak, Scott Niekum, Bruno C. da Silva, Erik G. Learned-Miller, Emma Brunskill, Philip S. Thomas:
Universal Off-Policy Evaluation. NeurIPS 2021: 27475-27490 - [i59]Yash Chandak, Scott Niekum, Bruno Castro da Silva, Erik G. Learned-Miller, Emma Brunskill, Philip S. Thomas:
Universal Off-Policy Evaluation. CoRR abs/2104.12820 (2021) - [i58]Andrea Zanette, Kefan Dong, Jonathan N. Lee, Emma Brunskill:
Design of Experiments for Stochastic Contextual Linear Bandits. CoRR abs/2107.09912 (2021) - [i57]Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ B. Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri S. Chatterji, Annie S. Chen, Kathleen Creel, Jared Quincy Davis, Dorottya Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren E. Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark S. Krass, Ranjay Krishna, Rohith Kuditipudi, et al.:
On the Opportunities and Risks of Foundation Models. CoRR abs/2108.07258 (2021) - [i56]Andrea Zanette, Martin J. Wainwright, Emma Brunskill:
Provable Benefits of Actor-Critic Methods for Offline Reinforcement Learning. CoRR abs/2108.08812 (2021) - [i55]Alex Chohlas-Wood, Madison Coots, Emma Brunskill, Sharad Goel:
Learning to be Fair: A Consequentialist Approach to Equitable Decision-Making. CoRR abs/2109.08792 (2021) - [i54]Allen Nie, Emma Brunskill, Chris Piech:
Play to Grade: Testing Coding Games as Classifying Markov Decision Process. CoRR abs/2110.14615 (2021) - [i53]Ramtin Keramati, Omer Gottesman, Leo Anthony Celi, Finale Doshi-Velez, Emma Brunskill:
Identification of Subgroups With Similar Benefits in Off-Policy Policy Evaluation. CoRR abs/2111.14272 (2021) - [i52]Tong Mu, Georgios Theocharous, David Arbour, Emma Brunskill:
Constraint Sampling Reinforcement Learning: Incorporating Expertise For Faster Learning. CoRR abs/2112.15221 (2021)
- 2020
- [c103]Ramtin Keramati, Christoph Dann, Alex Tamkin, Emma Brunskill:
Being Optimistic to Be Conservative: Quickly Learning a CVaR Policy. AAAI 2020: 4436-4443 - [c102]Sherry Ruan, Jiayu He, Rui Ying, Jonathan Burkle, Dunia Hakim, Anna Wang, Yufeng Yin, Lily Zhou, Qianyao Xu, Abdallah A. AbuHashem, Griffin Dietz, Elizabeth L. Murnane, Emma Brunskill, James A. Landay:
Supporting children's math learning with feedback-augmented narrative technology. IDC 2020: 567-580 - [c101]Andrea Zanette, David Brandfonbrener, Emma Brunskill, Matteo Pirotta, Alessandro Lazaric:
Frequentist Regret Bounds for Randomized Least-Squares Value Iteration. AISTATS 2020: 1954-1964 - [c100]Weihao Kong, Emma Brunskill, Gregory Valiant:
Sublinear Optimal Policy Value Estimation in Contextual Bandits. AISTATS 2020: 4377-4387 - [c99]Tong Mu, Andrea Jetten, Emma Brunskill:
Towards Suggesting Actionable Interventions for Wheel Spinning Students. EDM 2020 - [c98]Omer Gottesman, Joseph Futoma, Yao Liu, Sonali Parbhoo, Leo A. Celi, Emma Brunskill, Finale Doshi-Velez:
Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions. ICML 2020: 3658-3667 - [c97]Yao Liu, Pierre-Luc Bacon, Emma Brunskill:
Understanding the Curse of Horizon in Off-Policy Evaluation via Conditional Importance Sampling. ICML 2020: 6184-6193 - [c96]Andrea Zanette, Alessandro Lazaric, Mykel J. Kochenderfer, Emma Brunskill:
Learning Near Optimal Policies with Low Inherent Bellman Error. ICML 2020: 10978-10989 - [c95]Yao Liu, Adith Swaminathan, Alekh Agarwal, Emma Brunskill:
Provably Good Batch Off-Policy Reinforcement Learning Without Great Exploration. NeurIPS 2020 - [c94]Hongseok Namkoong, Ramtin Keramati, Steve Yadlowsky, Emma Brunskill:
Off-policy Policy Evaluation For Sequential Decisions Under Unobserved Confounding. NeurIPS 2020 - [c93]Andrea Zanette, Alessandro Lazaric, Mykel J. Kochenderfer, Emma Brunskill:
Provably Efficient Reward-Agnostic Navigation with Linear Value Iteration. NeurIPS 2020 - [i51]Omer Gottesman, Joseph Futoma, Yao Liu, Sonali Parbhoo, Leo Anthony Celi, Emma Brunskill, Finale Doshi-Velez:
Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions. CoRR abs/2002.03478 (2020) - [i50]Peter Henderson, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, Joelle Pineau:
Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning. CoRR abs/2002.05651 (2020) - [i49]Andrea Zanette, Alessandro Lazaric, Mykel J. Kochenderfer, Emma Brunskill:
Learning Near Optimal Policies with Low Inherent Bellman Error. CoRR abs/2003.00153 (2020) - [i48]Hongseok Namkoong, Ramtin Keramati, Steve Yadlowsky, Emma Brunskill:
Off-policy Policy Evaluation For Sequential Decisions Under Unobserved Confounding. CoRR abs/2003.05623 (2020) - [i47]Ramtin Keramati, Emma Brunskill:
Value Driven Representation for Human-in-the-Loop Reinforcement Learning. CoRR abs/2004.01223 (2020) - [i46]Jiayu Yao, Emma Brunskill, Weiwei Pan, Susan A. Murphy, Finale Doshi-Velez:
Power-Constrained Bandits. CoRR abs/2004.06230 (2020) - [i45]Evan Zheran Liu, Ramtin Keramati, Sudarshan Seshadri, Kelvin Guu, Panupong Pasupat, Emma Brunskill, Percy Liang:
Learning Abstract Models for Strategic Exploration and Fast Reward Transfer. CoRR abs/2007.05896 (2020) - [i44]Yao Liu, Adith Swaminathan, Alekh Agarwal, Emma Brunskill:
Provably Good Batch Reinforcement Learning Without Great Exploration. CoRR abs/2007.08202 (2020) - [i43]Andrea Zanette, Alessandro Lazaric, Mykel J. Kochenderfer, Emma Brunskill:
Provably Efficient Reward-Agnostic Navigation with Linear Value Iteration. CoRR abs/2008.07737 (2020) - [i42]Jonathan N. Lee, Aldo Pacchiano, Vidya Muthukumar, Weihao Kong, Emma Brunskill:
Online Model Selection for Reinforcement Learning with Function Approximation. CoRR abs/2011.09750 (2020)
2010 – 2019
- 2019
- [j7]Shayan Doroudi, Vincent Aleven, Emma Brunskill:
Where's the Reward? Int. J. Artif. Intell. Educ. 29(4): 568-620 (2019) - [c92]Tong Mu, Karan Goel, Emma Brunskill:
PLOTS: Procedure Learning from Observations using subTask Structure. AAMAS 2019: 1007-1015 - [c91]Sherry Ruan, Liwei Jiang, Justin Xu, Bryce Joe-Kun Tham, Zhengneng Qiu, Yeshuang Zhu, Elizabeth L. Murnane, Emma Brunskill, James A. Landay:
QuizBot: A Dialogue-based Adaptive Learning System for Factual Knowledge. CHI 2019: 357 - [c90]Shayan Doroudi, Ece Kamar, Emma Brunskill:
Not Everyone Writes Good Examples but Good Examples Can Come from Anywhere. HCOMP 2019: 12-21 - [c89]Karan Goel, Emma Brunskill:
Learning Procedural Abstractions and Evaluating Discrete Latent Temporal Structure. ICLR (Poster) 2019 - [c88]Christoph Dann, Lihong Li, Wei Wei, Emma Brunskill:
Policy Certificates: Towards Accountable Reinforcement Learning. ICML 2019: 1507-1516 - [c87]Omer Gottesman, Yao Liu, Scott Sussex, Emma Brunskill, Finale Doshi-Velez:
Combining parametric and nonparametric models for off-policy evaluation. ICML 2019: 2366-2375 - [c86]Joshua Romoff, Peter Henderson, Ahmed Touati, Yann Ollivier, Joelle Pineau, Emma Brunskill:
Separable value functions across time-scales. ICML 2019: 5468-5477 - [c85]Andrea Zanette, Emma Brunskill:
Tighter Problem-Dependent Regret Bounds in Reinforcement Learning without Domain Knowledge using Value Function Bounds. ICML 2019: 7304-7312 - [c84]Shayan Doroudi, Emma Brunskill:
Fairer but Not Fair Enough: On the Equitability of Knowledge Tracing. LAK 2019: 335-339 - [c83]Angelica Willis, Glenn M. Davis, Sherry Ruan, Lakshmi Manoharan, James A. Landay, Emma Brunskill:
Key Phrase Extraction for Generating Educational Question-Answer Pairs. L@S 2019: 20:1-20:10 - [c82]Sherry Ruan, Angelica Willis, Qianyao Xu, Glenn M. Davis, Liwei Jiang, Emma Brunskill, James A. Landay:
BookBuddy: Turning Digital Materials Into Interactive Foreign Language Lessons Through a Voice Chatbot. L@S 2019: 30:1-30:4 - [c81]Andrea Zanette, Alessandro Lazaric, Mykel J. Kochenderfer, Emma Brunskill:
Limiting Extrapolation in Linear Approximate Value Iteration. NeurIPS 2019: 5616-5625 - [c80]Andrea Zanette, Mykel J. Kochenderfer, Emma Brunskill:
Almost Horizon-Free Structure-Aware Best Policy Identification with a Generative Model. NeurIPS 2019: 5626-5635 - [c79]Blossom Metevier, Stephen Giguere, Sarah Brockman, Ari Kobren, Yuriy Brun, Emma Brunskill, Philip S. Thomas:
Offline Contextual Bandits with High Probability Fairness Guarantees. NeurIPS 2019: 14893-14904 - [c78]Jonathan Bragg, Emma Brunskill:
Fake It Till You Make It: Learning-Compatible Performance Support. UAI 2019: 915-924 - [c77]Yao Liu, Adith Swaminathan, Alekh Agarwal, Emma Brunskill:
Off-Policy Policy Gradient with Stationary Distribution Correction. UAI 2019: 1180-1190 - [c76]Ramtin Keramati, Emma Brunskill:
Value Driven Representation for Human-in-the-Loop Reinforcement Learning. UMAP 2019: 176-180 - [i41]Andrea Zanette, Emma Brunskill:
Tighter Problem-Dependent Regret Bounds in Reinforcement Learning without Domain Knowledge using Value Function Bounds. CoRR abs/1901.00210 (2019) - [i40]Joshua Romoff, Peter Henderson, Ahmed Touati, Yann Ollivier, Emma Brunskill, Joelle Pineau:
Separating value functions across time-scales. CoRR abs/1902.01883 (2019) - [i39]Yao Liu, Adith Swaminathan, Alekh Agarwal, Emma Brunskill:
Off-Policy Policy Gradient with State Distribution Correction. CoRR abs/1904.08473 (2019) - [i38]Tong Mu, Karan Goel, Emma Brunskill:
PLOTS: Procedure Learning from Observations using Subtask Structure. CoRR abs/1904.09162 (2019) - [i37]Omer Gottesman, Yao Liu, Scott Sussex, Emma Brunskill, Finale Doshi-Velez:
Combining Parametric and Nonparametric Models for Off-Policy Evaluation. CoRR abs/1905.05787 (2019) - [i36]Zhaohan Daniel Guo, Emma Brunskill:
Directed Exploration for Reinforcement Learning. CoRR abs/1906.07805 (2019) - [i35]Yao Liu, Pierre-Luc Bacon, Emma Brunskill:
Understanding the Curse of Horizon in Off-Policy Evaluation via Conditional Importance Sampling. CoRR abs/1910.06508 (2019) - [i34]Andrea Zanette, Emma Brunskill:
Problem Dependent Reinforcement Learning Bounds Which Can Identify Bandit Structure in MDPs. CoRR abs/1911.00954 (2019) - [i33]Ramtin Keramati, Christoph Dann, Alex Tamkin, Emma Brunskill:
Being Optimistic to Be Conservative: Quickly Learning a CVaR Policy. CoRR abs/1911.01546 (2019) - [i32]Scott L. Fleming, Kuhan Jeyapragasan, Tony Duan, Daisy Yi Ding, Saurabh Gombar, Nigam Shah, Emma Brunskill:
Missingness as Stability: Understanding the Structure of Missingness in Longitudinal EHR data and its Impact on Reinforcement Learning in Healthcare. CoRR abs/1911.07084 (2019) - [i31]Weihao Kong, Gregory Valiant, Emma Brunskill:
Sublinear Optimal Policy Value Estimation in Contextual Bandits. CoRR abs/1912.06111 (2019)
- 2018
- [c75]Philip S. Thomas, Christoph Dann, Emma Brunskill:
Decoupling Gradient-Like Learning Rules from Representations. ICML 2018: 4924-4932 - [c74]Andrea Zanette, Emma Brunskill:
Problem Dependent Reinforcement Learning Bounds Which Can Identify Bandit Structure in MDPs. ICML 2018: 5732-5740 - [c73]Shayan Doroudi, Philip S. Thomas, Emma Brunskill:
Importance Sampling for Fair Policy Selection. IJCAI 2018: 5239-5243 - [c72]Kamyar Azizzadenesheli, Emma Brunskill, Animashree Anandkumar:
Efficient Exploration Through Bayesian Deep Q-Networks. ITA 2018: 1-9 - [c71]Tong Mu, Shuhan Wang, Erik Andersen, Emma Brunskill:
Combining adaptivity with progression ordering for intelligent tutoring systems. L@S 2018: 15:1-15:4 - [c70]Y. Alex Kolchinski, Sherry Ruan, Dan Schwartz, Emma Brunskill:
Adaptive natural-language targeting for student feedback. L@S 2018: 26:1-26:4 - [c69]Emma Brunskill, Dawn Zimmaro, Candace Thille:
Exploring the impact of the default option on student engagement and performance in a statistics MOOC. L@S 2018: 34:1-34:4 - [c68]Yao Liu, Omer Gottesman, Aniruddh Raghu, Matthieu Komorowski, Aldo A. Faisal, Finale Doshi-Velez, Emma Brunskill:
Representation Balancing MDPs for Off-policy Policy Evaluation. NeurIPS 2018: 2649-2658 - [c67]Sharon Zhou, Tong Mu, Karan Goel, Michael S. Bernstein, Emma Brunskill:
Shared Autonomy for an Interactive AI System. UIST (Adjunct Volume) 2018: 20-22 - [i30]Kamyar Azizzadenesheli, Emma Brunskill, Animashree Anandkumar:
Efficient Exploration through Bayesian Deep Q-Networks. CoRR abs/1802.04412 (2018) - [i29]Yao Liu, Omer Gottesman, Aniruddh Raghu, Matthieu Komorowski, Aldo Faisal, Finale Doshi-Velez, Emma Brunskill:
Representation Balancing MDPs for Off-Policy Policy Evaluation. CoRR abs/1805.09044 (2018) - [i28]Yao Liu, Emma Brunskill:
When Simple Exploration is Sample Efficient: Identifying Sufficient Conditions for Random Exploration to Yield PAC RL Algorithms. CoRR abs/1805.09045 (2018) - [i27]Ramtin Keramati, Jay Whang, Patrick Cho, Emma Brunskill:
Strategic Object Oriented Reinforcement Learning. CoRR abs/1806.00175 (2018) - [i26]Kamyar Azizzadenesheli, Brandon Yang, Weitang Liu, Emma Brunskill, Zachary C. Lipton, Animashree Anandkumar:
Sample-Efficient Deep RL with Generative Adversarial Tree Search. CoRR abs/1806.05780 (2018) - [i25]Aniruddh Raghu, Omer Gottesman, Yao Liu, Matthieu Komorowski, Aldo Faisal, Finale Doshi-Velez, Emma Brunskill:
Behaviour Policy Estimation in Off-Policy Policy Evaluation: Calibration Matters. CoRR abs/1807.01066 (2018) - [i24]Christoph Dann, Lihong Li, Wei Wei, Emma Brunskill:
Policy Certificates: Towards Accountable Reinforcement Learning. CoRR abs/1811.03056 (2018) - [i23]Peter Henderson, Emma Brunskill:
Distilling Information from a Flood: A Possibility for the Use of Meta-Analysis and Systematic Review in Machine Learning Research. CoRR abs/1812.01074 (2018)
- 2017
- [c66]Travis Mandel, Yun-En Liu, Emma Brunskill, Zoran Popovic:
Where to Add Actions in Human-in-the-Loop Reinforcement Learning. AAAI 2017: 2322-2328 - [c65]Philip S. Thomas, Emma Brunskill:
Importance Sampling with Unequal Support. AAAI 2017: 2646-2652 - [c64]Philip S. Thomas, Georgios Theocharous, Mohammad Ghavamzadeh, Ishan Durugkar, Emma Brunskill:
Predictive Off-Policy Policy Evaluation for Nonstationary Decision Problems, with Applications to Digital Marketing. AAAI 2017: 4740-4745 - [c63]Akram Erraqabi, Alessandro Lazaric, Michal Valko, Emma Brunskill, Yun-En Liu:
Trading off Rewards and Errors in Multi-Armed Bandits. AISTATS 2017: 709-717 - [c62]Shayan Doroudi, Emma Brunskill:
The Misidentified Identifiability Problem of Bayesian Knowledge Tracing. EDM 2017 - [c61]Karan Goel, Christoph Dann, Emma Brunskill:
Sample Efficient Policy Search for Optimal Stopping Domains. IJCAI 2017: 1711-1717 - [c60]Shayan Doroudi, Vincent Aleven, Emma Brunskill:
Robust Evaluation Matrix: Towards a More Principled Offline Exploration of Instructional Policies. L@S 2017: 3-12 - [c59]Zhaohan Guo, Philip S. Thomas, Emma Brunskill:
Using Options and Covariance Testing for Long Horizon Off-Policy Policy Evaluation. NIPS 2017: 2492-2501 - [c58]Ronan Fruit, Matteo Pirotta, Alessandro Lazaric, Emma Brunskill:
Regret Minimization in MDPs with Options without Prior Knowledge. NIPS 2017: 3166-3176 - [c57]Christoph Dann, Tor Lattimore, Emma Brunskill:
Unifying PAC and Regret: Uniform PAC Bounds for Episodic Reinforcement Learning. NIPS 2017: 5713-5723 - [c56]Shayan Doroudi, Philip S. Thomas, Emma Brunskill:
Importance Sampling for Fair Policy Selection. UAI 2017 - [i22]Karan Goel, Christoph Dann, Emma Brunskill:
Sample Efficient Policy Search for Optimal Stopping Domains. CoRR abs/1702.06238 (2017) - [i21]Zhaohan Daniel Guo, Philip S. Thomas, Emma Brunskill:
Using Options for Long-Horizon Off-Policy Evaluation. CoRR abs/1703.03453 (2017) - [i20]Zhaohan Daniel Guo, Emma Brunskill:
Sample Efficient Feature Selection for Factored MDPs. CoRR abs/1703.03454 (2017) - [i19]Christoph Dann, Tor Lattimore, Emma Brunskill:
UBEV - A More Practical Algorithm for Episodic RL with Near-Optimal PAC and Regret Guarantees. CoRR abs/1703.07710 (2017) - [i18]Philip S. Thomas, Christoph Dann, Emma Brunskill:
Decoupling Learning Rules from Representations. CoRR abs/1706.03100 (2017) - [i17]Philip S. Thomas, Emma Brunskill:
Policy Gradient Methods for Reinforcement Learning with Function Approximation and Action-Dependent Baselines. CoRR abs/1706.06643 (2017) - [i16]Philip S. Thomas, Bruno Castro da Silva, Andrew G. Barto, Emma Brunskill:
On Ensuring that Intelligent Machines Are Well-Behaved. CoRR abs/1708.05448 (2017) - [i15]Thomas Kollar, Stefanie Tellex, Matthew R. Walter, Albert Huang, Abraham Bachrach, Sachithra Hemachandra, Emma Brunskill, Ashis Gopal Banerjee, Deb Roy, Seth J. Teller, Nicholas Roy:
Generalized Grounding Graphs: A Probabilistic Framework for Understanding Grounded Commands. CoRR abs/1712.01097 (2017)
- 2016
- [j6]Anna N. Rafferty, Emma Brunskill, Thomas L. Griffiths, Patrick Shafto:
Faster Teaching via POMDP Planning. Cogn. Sci. 40(6): 1290-1332 (2016) - [c55]Travis Mandel, Yun-En Liu, Emma Brunskill, Zoran Popovic:
Offline Evaluation of Online Reinforcement Learning Algorithms. AAAI 2016: 1926-1933 - [c54]Zhaohan Daniel Guo, Shayan Doroudi, Emma Brunskill:
A PAC RL Algorithm for Episodic POMDPs. AISTATS 2016: 510-518 - [c53]Yao Liu, Zhaohan Guo, Emma Brunskill:
PAC Continuous State Online Multitask Reinforcement Learning with Identification. AAMAS 2016: 438-446 - [c52]Shayan Doroudi, Ece Kamar, Emma Brunskill, Eric Horvitz:
Toward a Learning Science for Complex Crowdsourcing Tasks. CHI 2016: 2623-2634 - [c51]James Derek Lomas, Jodi Forlizzi, Nikhil Poonwala, Nirmal Patel, Sharan Shodhan, Kishan Patel, Kenneth R. Koedinger, Emma Brunskill:
Interface Design Optimization as a Multi-Armed Bandit Problem. CHI 2016: 4142-4153 - [c50]Shayan Doroudi, Kenneth Holstein, Vincent Aleven, Emma Brunskill:
Sequence Matters, But How Do I Discover How? Towards a Workflow for Evaluating Activity Sequences from Data. EDM (Workshops) 2016 - [c49]Shayan Doroudi, Kenneth Holstein, Vincent Aleven, Emma Brunskill:
Sequence Matters, But How Exactly? A Method for Evaluating Activity Sequences from Data. EDM 2016: 70-77 - [c48]Philip S. Thomas, Emma Brunskill:
Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning. ICML 2016: 2139-2148 - [c47]Philip S. Thomas, Bruno Castro da Silva, Christoph Dann, Emma Brunskill:
Energetic Natural Gradient Descent. ICML 2016: 2887-2895 - [c46]Travis Mandel, Yun-En Liu, Emma Brunskill, Zoran Popovic:
Efficient Bayesian Clustering for Reinforcement Learning. IJCAI 2016: 1830-1838 - [c45]Li Zhou, Emma Brunskill:
Latent Contextual Bandits and their Application to Personalized Recommendations for New Users. IJCAI 2016: 3646-3653 - [c44]Qi Guo, Chinmay Kulkarni, Aniket Kittur, Jeffrey P. Bigham, Emma Brunskill:
Questimator: Generating Knowledge Assessments for Arbitrary Topics. IJCAI 2016: 3726-3732 - [c43]Rika Antonova, Joe Runde, Min Hyung Lee, Emma Brunskill:
Automatically Learning to Teach to the Learning Objectives. L@S 2016: 317-320 - [i14]Philip S. Thomas, Emma Brunskill:
Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning. CoRR abs/1604.00923 (2016) - [i13]Li Zhou, Emma Brunskill:
Latent Contextual Bandits and their Application to Personalized Recommendations for New Users. CoRR abs/1604.06743 (2016) - [i12]Zhaohan Daniel Guo, Shayan Doroudi, Emma Brunskill:
A PAC RL Algorithm for Episodic POMDPs. CoRR abs/1605.08062 (2016) - [i11]Philip S. Thomas, Emma Brunskill:
Importance Sampling with Unequal Support. CoRR abs/1611.03451 (2016)
- 2015
- [c42]Zhaohan Guo, Emma Brunskill:
Concurrent PAC RL. AAAI 2015: 2624-2630 - [c41]Travis Mandel, Yun-En Liu, Emma Brunskill, Zoran Popovic:
The Queue Method: Handling Delay, Heuristics, Prior Data, and Evaluation in Bandits. AAAI 2015: 2849-2856 - [c40]Joseph Rollinson, Emma Brunskill:
From Predictive Models to Instructional Policies. EDM 2015: 179-186 - [c39]Shayan Doroudi, Kenneth Holstein, Vincent Aleven, Emma Brunskill:
Towards Understanding How to Leverage Sense-making, Induction/Refinement and Fluency to Improve Robust Learning. EDM 2015: 376-379 - [c38]Min Hyung Lee, Joe Runde, Warfa Jibril, Zhuoying Wang, Emma Brunskill:
Learning the Features Used To Decide How to Teach. L@S 2015: 421-424 - [c37]Christoph Dann, Emma Brunskill:
Sample Complexity of Episodic Fixed-Horizon Reinforcement Learning. NIPS 2015: 2818-2826 - [i10]Emma Brunskill, Lihong Li:
The Online Discovery Problem and Its Application to Lifelong Reinforcement Learning. CoRR abs/1506.03379 (2015) - [i9]Christoph Dann, Emma Brunskill:
Sample Complexity of Episodic Fixed-Horizon Reinforcement Learning. CoRR abs/1510.08906 (2015)
- 2014
- [c36]Vukosi Marivate, Jessica Chemali, Emma Brunskill, Michael L. Littman:
Quantifying Uncertainty in Batch Personalized Sequential Decision Making. AAAI Workshop: Modern Artificial Intelligence for Health Analytics 2014 - [c35]Travis Mandel, Yun-En Liu, Sergey Levine, Emma Brunskill, Zoran Popovic:
Offline policy evaluation across representations with applications to educational games. AAMAS 2014: 1077-1084 - [c34]Yun-En Liu, Travis Mandel, Emma Brunskill, Zoran Popovic:
Towards automatic experimentation of educational knowledge. CHI 2014: 3349-3358 - [c33]Yun-En Liu, Travis Mandel, Emma Brunskill, Zoran Popovic:
Trading Off Scientific Knowledge and User Learning with Multi-Armed Bandits. EDM 2014: 161-168 - [c32]Emma Brunskill, Lihong Li:
PAC-inspired Option Discovery in Lifelong Reinforcement Learning. ICML 2014: 316-324 - [c31]Mohammad Gheshlaghi Azar, Alessandro Lazaric, Emma Brunskill:
Online Stochastic Optimization under Correlated Bandit Feedback. ICML 2014: 1557-1565 - [i8]Ruijie He, Emma Brunskill, Nicholas Roy:
Efficient Planning under Uncertainty with Macro-actions. CoRR abs/1401.3827 (2014) - [i7]Mohammad Gheshlaghi Azar, Alessandro Lazaric, Emma Brunskill:
Stochastic Optimization of a Locally Smooth Function under Correlated Bandit Feedback. CoRR abs/1402.0562 (2014)
- 2013
- [j5]Kenneth R. Koedinger, Emma Brunskill, Ryan Shaun Joazeiro de Baker, Elizabeth A. McLaughlin, John C. Stamper:
New Potentials for Data-Driven Intelligent Tutoring System Development and Optimization. AI Mag. 34(3): 27-41 (2013) - [c30]Sukhada Palkar, Emma Brunskill:
Analysis of the impact of errors made during health data collection using mobile phones: exploring error modeling and automatic diagnosis. ACM DEV (3) 2013: 24:1-24:2 - [c29]Yun-En Liu, Travis Mandel, Eric Butler, Erik Andersen, Eleanor O'Rourke, Emma Brunskill, Zoran Popovic:
Predicting Player Moves in an Educational Game: A Hybrid Approach. EDM 2013: 106-113 - [c28]Anna N. Rafferty, Jodi L. Davenport, Emma Brunskill:
Estimating Student Knowledge from Paired Interaction Data. EDM 2013: 260-263 - [c27]Ted McCarthy, Brian DeRenzi, Joshua Evan Blumenstock, Emma Brunskill:
Towards operationalizing outlier detection in community health programs. ICTD (2) 2013: 88-91 - [c26]Siyuan Liu, Miguel Araujo, Emma Brunskill, Rosaldo J. F. Rossetti, João Barros, Ramayya Krishnan:
Understanding Sequential Decisions via Inverse Reinforcement Learning. MDM (1) 2013: 177-186 - [c25]Mohammad Gheshlaghi Azar, Alessandro Lazaric, Emma Brunskill:
Sequential Transfer in Multi-armed Bandit with Finite Set of Models. NIPS 2013: 2220-2228 - [c24]Siyuan Liu, Ramayya Krishnan, Emma Brunskill, Lionel M. Ni:
Modeling Social Information Learning among Taxi Drivers. PAKDD (2) 2013: 73-84 - [c23]Mohammad Gheshlaghi Azar, Alessandro Lazaric, Emma Brunskill:
Regret Bounds for Reinforcement Learning with Policy Advice. ECML/PKDD (1) 2013: 97-112 - [c22]Emma Brunskill, Lihong Li:
Sample Complexity of Multi-task Reinforcement Learning. UAI 2013 - [i6]Mohammad Gheshlaghi Azar, Alessandro Lazaric, Emma Brunskill:
Regret Bounds for Reinforcement Learning with Policy Advice. CoRR abs/1305.1027 (2013) - [i5]Mohammad Gheshlaghi Azar, Alessandro Lazaric, Emma Brunskill:
Sequential Transfer in Multi-armed Bandit with Finite Set of Models. CoRR abs/1307.6887 (2013) - [i4]Emma Brunskill, Lihong Li:
Sample Complexity of Multi-task Reinforcement Learning. CoRR abs/1309.6821 (2013)
- 2012
- [c21]Emma Brunskill:
Bayes-optimal reinforcement learning for discrete uncertainty domains. AAMAS 2012: 1385-1386 - [c20]Jung In Lee, Emma Brunskill:
The Impact on Individualizing Student Models on Necessary Practice Opportunities. EDM 2012: 118-125 - [c19]Michael Yudelson, Emma Brunskill:
Policy Building - An Extension To User Modeling. EDM 2012: 188-191 - [c18]Sashank Jakkam Reddi, Emma Brunskill:
Incentive Decision Processes. UAI 2012: 418-427 - [i3]Emma Brunskill, Stuart Russell:
RAPID: A Reachable Anytime Planner for Imprecisely-sensed Domains. CoRR abs/1203.3538 (2012) - [i2]Emma Brunskill, Bethany R. Leffler, Lihong Li, Michael L. Littman, Nicholas Roy:
CORL: A Continuous-state Offset-dynamics Reinforcement Learner. CoRR abs/1206.3231 (2012) - [i1]Sashank Jakkam Reddi, Emma Brunskill:
Incentive Decision Processes. CoRR abs/1210.4877 (2012)
- 2011
- [j4]Ruijie He, Emma Brunskill, Nicholas Roy:
Efficient Planning under Uncertainty with Macro-actions. J. Artif. Intell. Res. 40: 523-570 (2011) - [j3]Indrani Medhi, Somani Patnaik, Emma Brunskill, S. N. Nagasena Gautama, William Thies, Kentaro Toyama:
Designing mobile interfaces for novice and low-literacy users. ACM Trans. Comput. Hum. Interact. 18(1): 2:1-2:28 (2011) - [c17]Anna N. Rafferty, Emma Brunskill, Thomas L. Griffiths, Patrick Shafto:
Faster Teaching by POMDP Planning. AIED 2011: 280-287 - [c16]Emma Brunskill:
Estimating Prerequisite Structure From Noisy Data. EDM 2011: 217-222 - [c15]Emma Brunskill, Stuart Russell:
Partially Observable Sequential Decision Making for Problem Selection in an Intelligent Tutoring System. EDM 2011: 327-328
- 2010
- [j2]Emma Brunskill, Leslie Pack Kaelbling, Tomás Lozano-Pérez, Nicholas Roy:
Planning in partially-observable switching-mode continuous domains. Ann. Math. Artif. Intell. 58(3-4): 185-216 (2010) - [c14]Ruijie He, Emma Brunskill, Nicholas Roy:
PUMA: Planning Under Uncertainty with Macro-Actions. AAAI 2010: 1089-1095 - [c13]Emma Brunskill, Neal Lesh:
Routing for Rural Health: Optimizing Community Health Worker Visit Schedules. AAAI Spring Symposium: Artificial Intelligence for Development 2010 - [c12]Kuang Chen, Emma Brunskill, Jonathan J. Dick, Prabhjot Dhadialla:
Learning to Identify Locally Actionable Health Anomalies. AAAI Spring Symposium: Artificial Intelligence for Development 2010 - [c11]Emma Brunskill:
When Policies Can Be Trusted: Analyzing a Criteria to Identify Optimal Policies in MDPs with Unknown Model Parameters. ICAPS 2010: 218-221 - [c10]Emma Brunskill, Stuart Russell:
RAPID: A Reachable Anytime Planner for Imprecisely-sensed Domains. UAI 2010: 83-92
2000 – 2009
- 2009
- [b1]Emma Brunskill:
Compact parametric models for efficient sequential decision making in high-dimensional, uncertain domains. Massachusetts Institute of Technology, Cambridge, MA, USA, 2009 - [j1]Emma Brunskill, Bethany R. Leffler, Lihong Li, Michael L. Littman, Nicholas Roy:
Provably Efficient Learning with Typed Parametric Models. J. Mach. Learn. Res. 10: 1955-1988 (2009) - [c9]Yuan Wei, Emma Brunskill, Thomas Kollar, Nicholas Roy:
Where to go: Interpreting natural directions using global inference. ICRA 2009: 3761-3767 - [c8]Somani Patnaik, Emma Brunskill, William Thies:
Evaluating the accuracy of data collection on mobile phones: A study of forms, SMS, and voice. ICTD 2009: 74-84
- 2008
- [c7]Emma Brunskill, Leslie Pack Kaelbling, Tomás Lozano-Pérez, Nicholas Roy:
Continuous-State POMDPs with Hybrid Dynamics. ISAIM 2008 - [c6]Emma Brunskill, Bethany R. Leffler, Lihong Li, Michael L. Littman, Nicholas Roy:
CORL: A Continuous-state Offset-dynamics Reinforcement Learner. UAI 2008: 53-61
- 2007
- [c5]Emma Brunskill:
Continuous State POMDPs for Object Manipulation Tasks. AAAI 2007: 1925-1926 - [c4]Finale Doshi, Emma Brunskill, Alexander C. Shkolnik, Thomas Kollar, Khashayar Rohanimanesh, Russ Tedrake, Nicholas Roy:
Collision detection in legged locomotion using supervised learning. IROS 2007: 317-322 - [c3]Emma Brunskill, Thomas Kollar, Nicholas Roy:
Topological mapping using spectral clustering and classification. IROS 2007: 3491-3496
- 2005
- [c2]Emma Brunskill, Nicholas Roy:
SLAM using Incremental Probabilistic PCA and Dimensionality Reduction. ICRA 2005: 342-347
- 2001
- [c1]Frank Dabek, Emma Brunskill, M. Frans Kaashoek, David R. Karger, Robert Morris, Ion Stoica, Hari Balakrishnan:
Building peer-to-peer systems with Chord, a distributed lookup service. HotOS 2001: 81-86