Hado van Hasselt
Person information
- affiliation: Google DeepMind, London, UK
- affiliation (PhD 2011): Utrecht University, The Netherlands
2020 – today
- 2024
- [j3] Eduardo Pignatelli, Johan Ferret, Matthieu Geist, Thomas Mesnard, Hado van Hasselt, Laura Toni: A Survey of Temporal Credit Assignment in Deep Reinforcement Learning. Trans. Mach. Learn. Res. 2024 (2024)
- [i50] Clare Lyle, Zeyu Zheng, Khimya Khetarpal, Hado van Hasselt, Razvan Pascanu, James Martens, Will Dabney: Disentangling the Causes of Plasticity Loss in Neural Networks. CoRR abs/2402.18762 (2024)
- [i49] Clare Lyle, Zeyu Zheng, Khimya Khetarpal, James Martens, Hado van Hasselt, Razvan Pascanu, Will Dabney: Normalization and effective learning rates in reinforcement learning. CoRR abs/2407.01800 (2024)
- 2023
- [c48] Simon Schmitt, John Shawe-Taylor, Hado van Hasselt: Exploration via Epistemic Value Estimation. AAAI 2023: 9742-9751
- [c47] Steven Kapturowski, Victor Campos, Ray Jiang, Nemanja Rakicevic, Hado van Hasselt, Charles Blundell, Adrià Puigdomènech Badia: Human-level Atari 200x faster. ICLR 2023
- [c46] David Abel, André Barreto, Benjamin Van Roy, Doina Precup, Hado Philip van Hasselt, Satinder Singh: A Definition of Continual Reinforcement Learning. NeurIPS 2023
- [c45] Sebastian Flennerhag, Tom Zahavy, Brendan O'Donoghue, Hado Philip van Hasselt, András György, Satinder Singh: Optimistic Meta-Gradients. NeurIPS 2023
- [i48] Sebastian Flennerhag, Tom Zahavy, Brendan O'Donoghue, Hado van Hasselt, András György, Satinder Singh: Optimistic Meta-Gradients. CoRR abs/2301.03236 (2023)
- [i47] Chentian Jiang, Nan Rosemary Ke, Hado van Hasselt: Learning How to Infer Partial MDPs for In-Context Adaptation and Exploration. CoRR abs/2302.04250 (2023)
- [i46] Simon Schmitt, John Shawe-Taylor, Hado van Hasselt: Exploration via Epistemic Value Estimation. CoRR abs/2303.04012 (2023)
- [i45] David Abel, André Barreto, Hado van Hasselt, Benjamin Van Roy, Doina Precup, Satinder Singh: On the Convergence of Bounded Agents. CoRR abs/2307.11044 (2023)
- [i44] David Abel, André Barreto, Benjamin Van Roy, Doina Precup, Hado van Hasselt, Satinder Singh: A Definition of Continual Reinforcement Learning. CoRR abs/2307.11046 (2023)
- [i43] Eduardo Pignatelli, Johan Ferret, Matthieu Geist, Thomas Mesnard, Hado van Hasselt, Laura Toni: A Survey of Temporal Credit Assignment in Deep Reinforcement Learning. CoRR abs/2312.01072 (2023)
- 2022
- [c44] Ray Jiang, Shangtong Zhang, Veronica Chelu, Adam White, Hado van Hasselt: Learning Expected Emphatic Traces for Deep RL. AAAI 2022: 7015-7023
- [c43] Louis Kirsch, Sebastian Flennerhag, Hado van Hasselt, Abram L. Friesen, Junhyuk Oh, Yutian Chen: Introducing Symmetries to Black Box Meta Reinforcement Learning. AAAI 2022: 7202-7210
- [c42] Simon Schmitt, John Shawe-Taylor, Hado van Hasselt: Chaining Value Functions for Off-Policy Learning. AAAI 2022: 8187-8195
- [c41] Sebastian Flennerhag, Yannick Schroecker, Tom Zahavy, Hado van Hasselt, David Silver, Satinder Singh: Bootstrapped Meta-Learning. ICLR 2022
- [c40] David Silver, Anirudh Goyal, Ivo Danihelka, Matteo Hessel, Hado van Hasselt: Learning by Directional Gradient Descent. ICLR 2022
- [i42] Simon Schmitt, John Shawe-Taylor, Hado van Hasselt: Chaining Value Functions for Off-Policy Learning. CoRR abs/2201.06468 (2022)
- [i41] Veronica Chelu, Diana Borsa, Doina Precup, Hado van Hasselt: Selective Credit Assignment. CoRR abs/2202.09699 (2022)
- [i40] Steven Kapturowski, Víctor Campos, Ray Jiang, Nemanja Rakicevic, Hado van Hasselt, Charles Blundell, Adrià Puigdomènech Badia: Human-level Atari 200x faster. CoRR abs/2209.07550 (2022)
- 2021
- [c39] Hado van Hasselt, Sephora Madjiheurem, Matteo Hessel, David Silver, André Barreto, Diana Borsa: Expected Eligibility Traces. AAAI 2021: 9997-10005
- [c38] Marta Garnelo, Wojciech Marian Czarnecki, Siqi Liu, Dhruva Tirumala, Junhyuk Oh, Gauthier Gidel, Hado van Hasselt, David Balduzzi: Pick Your Battles: Interaction Graphs as Population-Level Objectives for Strategic Diversity. AAMAS 2021: 1501-1503
- [c37] Matteo Hessel, Ivo Danihelka, Fabio Viola, Arthur Guez, Simon Schmitt, Laurent Sifre, Theophane Weber, David Silver, Hado van Hasselt: Muesli: Combining Improvements in Policy Optimization. ICML 2021: 4214-4226
- [c36] Ray Jiang, Tom Zahavy, Zhongwen Xu, Adam White, Matteo Hessel, Charles Blundell, Hado van Hasselt: Emphatic Algorithms for Deep Reinforcement Learning. ICML 2021: 5023-5033
- [c35] Gregory Farquhar, Kate Baumli, Zita Marinho, Angelos Filos, Matteo Hessel, Hado Philip van Hasselt, David Silver: Self-Consistent Models and Values. NeurIPS 2021: 1111-1125
- [c34] Vivek Veeriah, Tom Zahavy, Matteo Hessel, Zhongwen Xu, Junhyuk Oh, Iurii Kemaev, Hado van Hasselt, David Silver, Satinder Singh: Discovery of Options via Meta-Learned Subgoals. NeurIPS 2021: 29861-29873
- [i39] Vivek Veeriah, Tom Zahavy, Matteo Hessel, Zhongwen Xu, Junhyuk Oh, Iurii Kemaev, Hado van Hasselt, David Silver, Satinder Singh: Discovery of Options via Meta-Learned Subgoals. CoRR abs/2102.06741 (2021)
- [i38] David Raposo, Samuel Ritter, Adam Santoro, Greg Wayne, Theophane Weber, Matt M. Botvinick, Hado van Hasselt, H. Francis Song: Synthetic Returns for Long-Term Credit Assignment. CoRR abs/2102.12425 (2021)
- [i37] Matteo Hessel, Ivo Danihelka, Fabio Viola, Arthur Guez, Simon Schmitt, Laurent Sifre, Theophane Weber, David Silver, Hado van Hasselt: Muesli: Combining Improvements in Policy Optimization. CoRR abs/2104.06159 (2021)
- [i36] Matteo Hessel, Manuel Kroiss, Aidan Clark, Iurii Kemaev, John Quan, Thomas Keck, Fabio Viola, Hado van Hasselt: Podracer architectures for scalable Reinforcement Learning. CoRR abs/2104.06272 (2021)
- [i35] Ray Jiang, Tom Zahavy, Zhongwen Xu, Adam White, Matteo Hessel, Charles Blundell, Hado van Hasselt: Emphatic Algorithms for Deep Reinforcement Learning. CoRR abs/2106.11779 (2021)
- [i34] Ray Jiang, Shangtong Zhang, Veronica Chelu, Adam White, Hado van Hasselt: Learning Expected Emphatic Traces for Deep RL. CoRR abs/2107.05405 (2021)
- [i33] Sebastian Flennerhag, Yannick Schroecker, Tom Zahavy, Hado van Hasselt, David Silver, Satinder Singh: Bootstrapped Meta-Learning. CoRR abs/2109.04504 (2021)
- [i32] Louis Kirsch, Sebastian Flennerhag, Hado van Hasselt, Abram L. Friesen, Junhyuk Oh, Yutian Chen: Introducing Symmetries to Black Box Meta Reinforcement Learning. CoRR abs/2109.10781 (2021)
- [i31] Marta Garnelo, Wojciech Marian Czarnecki, Siqi Liu, Dhruva Tirumala, Junhyuk Oh, Gauthier Gidel, Hado van Hasselt, David Balduzzi: Pick Your Battles: Interaction Graphs as Population-Level Objectives for Strategic Diversity. CoRR abs/2110.04041 (2021)
- [i30] Gregory Farquhar, Kate Baumli, Zita Marinho, Angelos Filos, Matteo Hessel, Hado van Hasselt, David Silver: Self-Consistent Models and Values. CoRR abs/2110.12840 (2021)
- 2020
- [c33] Mark Rowland, Anna Harutyunyan, Hado van Hasselt, Diana Borsa, Tom Schaul, Rémi Munos, Will Dabney: Conditional Importance Sampling for Off-Policy Learning. AISTATS 2020: 45-55
- [c32] Ian Osband, Yotam Doron, Matteo Hessel, John Aslanides, Eren Sezener, Andre Saraiva, Katrina McKinney, Tor Lattimore, Csaba Szepesvári, Satinder Singh, Benjamin Van Roy, Richard S. Sutton, David Silver, Hado van Hasselt: Behaviour Suite for Reinforcement Learning. ICLR 2020
- [c31] Zeyu Zheng, Junhyuk Oh, Matteo Hessel, Zhongwen Xu, Manuel Kroiss, Hado van Hasselt, David Silver, Satinder Singh: What Can Learned Intrinsic Rewards Capture? ICML 2020: 11436-11446
- [c30] Veronica Chelu, Doina Precup, Hado van Hasselt: Forethought and Hindsight in Credit Assignment. NeurIPS 2020
- [c29] Junhyuk Oh, Matteo Hessel, Wojciech M. Czarnecki, Zhongwen Xu, Hado van Hasselt, Satinder Singh, David Silver: Discovering Reinforcement Learning Algorithms. NeurIPS 2020
- [c28] Zhongwen Xu, Hado Philip van Hasselt, Matteo Hessel, Junhyuk Oh, Satinder Singh, David Silver: Meta-Gradient Reinforcement Learning with an Objective Discovered Online. NeurIPS 2020
- [c27] Tom Zahavy, Zhongwen Xu, Vivek Veeriah, Matteo Hessel, Junhyuk Oh, Hado van Hasselt, David Silver, Satinder Singh: A Self-Tuning Actor-Critic Algorithm. NeurIPS 2020
- [i29] Tom Zahavy, Zhongwen Xu, Vivek Veeriah, Matteo Hessel, Junhyuk Oh, Hado van Hasselt, David Silver, Satinder Singh: Self-Tuning Deep Reinforcement Learning. CoRR abs/2002.12928 (2020)
- [i28] Hado van Hasselt, Sephora Madjiheurem, Matteo Hessel, David Silver, André Barreto, Diana Borsa: Expected Eligibility Traces. CoRR abs/2007.01839 (2020)
- [i27] Zhongwen Xu, Hado van Hasselt, Matteo Hessel, Junhyuk Oh, Satinder Singh, David Silver: Meta-Gradient Reinforcement Learning with an Objective Discovered Online. CoRR abs/2007.08433 (2020)
- [i26] Junhyuk Oh, Matteo Hessel, Wojciech M. Czarnecki, Zhongwen Xu, Hado van Hasselt, Satinder Singh, David Silver: Discovering Reinforcement Learning Algorithms. CoRR abs/2007.08794 (2020)
- [i25] Veronica Chelu, Doina Precup, Hado van Hasselt: Forethought and Hindsight in Credit Assignment. CoRR abs/2010.13685 (2020)
2010 – 2019
- 2019
- [c26] Matteo Hessel, Hubert Soyer, Lasse Espeholt, Wojciech Czarnecki, Simon Schmitt, Hado van Hasselt: Multi-Task Deep Reinforcement Learning with PopArt. AAAI 2019: 3796-3803
- [c25] Diana Borsa, André Barreto, John Quan, Daniel J. Mankowitz, Hado van Hasselt, Rémi Munos, David Silver, Tom Schaul: Universal Successor Features Approximators. ICLR (Poster) 2019
- [c24] Vivek Veeriah, Matteo Hessel, Zhongwen Xu, Janarthanan Rajendran, Richard L. Lewis, Junhyuk Oh, Hado van Hasselt, David Silver, Satinder Singh: Discovery of Useful Questions as Auxiliary Tasks. NeurIPS 2019: 9306-9317
- [c23] Anna Harutyunyan, Will Dabney, Thomas Mesnard, Mohammad Gheshlaghi Azar, Bilal Piot, Nicolas Heess, Hado van Hasselt, Gregory Wayne, Satinder Singh, Doina Precup, Rémi Munos: Hindsight Credit Assignment. NeurIPS 2019: 12467-12476
- [c22] Hado van Hasselt, Matteo Hessel, John Aslanides: When to use parametric models in reinforcement learning? NeurIPS 2019: 14322-14333
- [i24] Pedro A. Ortega, Jane X. Wang, Mark Rowland, Tim Genewein, Zeb Kurth-Nelson, Razvan Pascanu, Nicolas Heess, Joel Veness, Alexander Pritzel, Pablo Sprechmann, Siddhant M. Jayakumar, Tom McGrath, Kevin J. Miller, Mohammad Gheshlaghi Azar, Ian Osband, Neil C. Rabinowitz, András György, Silvia Chiappa, Simon Osindero, Yee Whye Teh, Hado van Hasselt, Nando de Freitas, Matthew M. Botvinick, Shane Legg: Meta-learning of Sequential Strategies. CoRR abs/1905.03030 (2019)
- [i23] Hado van Hasselt, Matteo Hessel, John Aslanides: When to use parametric models in reinforcement learning? CoRR abs/1906.05243 (2019)
- [i22] Matteo Hessel, Hado van Hasselt, Joseph Modayil, David Silver: On Inductive Biases in Deep Reinforcement Learning. CoRR abs/1907.02908 (2019)
- [i21] Hado van Hasselt, John Quan, Matteo Hessel, Zhongwen Xu, Diana Borsa, André Barreto: General non-linear Bellman equations. CoRR abs/1907.03687 (2019)
- [i20] Ian Osband, Yotam Doron, Matteo Hessel, John Aslanides, Eren Sezener, Andre Saraiva, Katrina McKinney, Tor Lattimore, Csaba Szepesvári, Satinder Singh, Benjamin Van Roy, Richard S. Sutton, David Silver, Hado van Hasselt: Behaviour Suite for Reinforcement Learning. CoRR abs/1908.03568 (2019)
- [i19] Vivek Veeriah, Matteo Hessel, Zhongwen Xu, Richard L. Lewis, Janarthanan Rajendran, Junhyuk Oh, Hado van Hasselt, David Silver, Satinder Singh: Discovery of Useful Questions as Auxiliary Tasks. CoRR abs/1909.04607 (2019)
- [i18] Mark Rowland, Anna Harutyunyan, Hado van Hasselt, Diana Borsa, Tom Schaul, Rémi Munos, Will Dabney: Conditional Importance Sampling for Off-Policy Learning. CoRR abs/1910.07479 (2019)
- [i17] Anna Harutyunyan, Will Dabney, Thomas Mesnard, Mohammad Gheshlaghi Azar, Bilal Piot, Nicolas Heess, Hado van Hasselt, Greg Wayne, Satinder Singh, Doina Precup, Rémi Munos: Hindsight Credit Assignment. CoRR abs/1912.02503 (2019)
- [i16] Zeyu Zheng, Junhyuk Oh, Matteo Hessel, Zhongwen Xu, Manuel Kroiss, Hado van Hasselt, David Silver, Satinder Singh: What Can Learned Intrinsic Rewards Capture? CoRR abs/1912.05500 (2019)
- 2018
- [c21] Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Gheshlaghi Azar, David Silver: Rainbow: Combining Improvements in Deep Reinforcement Learning. AAAI 2018: 3215-3222
- [c20] Dan Horgan, John Quan, David Budden, Gabriel Barth-Maron, Matteo Hessel, Hado van Hasselt, David Silver: Distributed Prioritized Experience Replay. ICLR (Poster) 2018
- [c19] Eugenio Bargiacchi, Timothy Verstraeten, Diederik M. Roijers, Ann Nowé, Hado van Hasselt: Learning to Coordinate with Coordination Graphs in Repeated Single-Stage Multi-Agent Decision Problems. ICML 2018: 491-499
- [c18] Zhongwen Xu, Hado van Hasselt, David Silver: Meta-Gradient Reinforcement Learning. NeurIPS 2018: 2402-2413
- [i15] Daniel J. Mankowitz, Augustin Zídek, André Barreto, Dan Horgan, Matteo Hessel, John Quan, Junhyuk Oh, Hado van Hasselt, David Silver, Tom Schaul: Unicorn: Continual Learning with a Universal, Off-policy Agent. CoRR abs/1802.08294 (2018)
- [i14] Dan Horgan, John Quan, David Budden, Gabriel Barth-Maron, Matteo Hessel, Hado van Hasselt, David Silver: Distributed Prioritized Experience Replay. CoRR abs/1803.00933 (2018)
- [i13] Zhongwen Xu, Hado van Hasselt, David Silver: Meta-Gradient Reinforcement Learning. CoRR abs/1805.09801 (2018)
- [i12] Tobias Pohlen, Bilal Piot, Todd Hester, Mohammad Gheshlaghi Azar, Dan Horgan, David Budden, Gabriel Barth-Maron, Hado van Hasselt, John Quan, Mel Vecerík, Matteo Hessel, Rémi Munos, Olivier Pietquin: Observe and Look Further: Achieving Consistent Performance on Atari. CoRR abs/1805.11593 (2018)
- [i11] Matteo Hessel, Hubert Soyer, Lasse Espeholt, Wojciech Czarnecki, Simon Schmitt, Hado van Hasselt: Multi-task Deep Reinforcement Learning with PopArt. CoRR abs/1809.04474 (2018)
- [i10] Tom Schaul, Hado van Hasselt, Joseph Modayil, Martha White, Adam White, Pierre-Luc Bacon, Jean Harb, Shibl Mourad, Marc G. Bellemare, Doina Precup: The Barbados 2018 List of Open Issues in Continual Learning. CoRR abs/1811.07004 (2018)
- [i9] Hado van Hasselt, Yotam Doron, Florian Strub, Matteo Hessel, Nicolas Sonnerat, Joseph Modayil: Deep Reinforcement Learning and the Deadly Triad. CoRR abs/1812.02648 (2018)
- [i8] Diana Borsa, André Barreto, John Quan, Daniel J. Mankowitz, Rémi Munos, Hado van Hasselt, David Silver, Tom Schaul: Universal Successor Features Approximators. CoRR abs/1812.07626 (2018)
- 2017
- [c17] David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac-Arnold, David P. Reichert, Neil C. Rabinowitz, André Barreto, Thomas Degris: The Predictron: End-To-End Learning and Planning. ICML 2017: 3191-3199
- [c16] Zhongwen Xu, Joseph Modayil, Hado van Hasselt, André Barreto, David Silver, Tom Schaul: Natural Value Approximators: Learning when to Trust Past Estimates. NIPS 2017: 2120-2128
- [c15] André Barreto, Will Dabney, Rémi Munos, Jonathan J. Hunt, Tom Schaul, David Silver, Hado van Hasselt: Successor Features for Transfer in Reinforcement Learning. NIPS 2017: 4055-4065
- [i7] Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich Küttler, John P. Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy P. Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, Rodney Tsing: StarCraft II: A New Challenge for Reinforcement Learning. CoRR abs/1708.04782 (2017)
- [i6] Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Daniel Horgan, Bilal Piot, Mohammad Gheshlaghi Azar, David Silver: Rainbow: Combining Improvements in Deep Reinforcement Learning. CoRR abs/1710.02298 (2017)
- 2016
- [c14] Hado van Hasselt, Arthur Guez, David Silver: Deep Reinforcement Learning with Double Q-Learning. AAAI 2016: 2094-2100
- [c13] Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, Nando de Freitas: Dueling Network Architectures for Deep Reinforcement Learning. ICML 2016: 1995-2003
- [c12] Hado van Hasselt, Arthur Guez, Matteo Hessel, Volodymyr Mnih, David Silver: Learning values across many orders of magnitude. NIPS 2016: 4287-4295
- [i5] Hado van Hasselt, Arthur Guez, Matteo Hessel, David Silver: Learning functions across many orders of magnitudes. CoRR abs/1602.07714 (2016)
- [i4] David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac-Arnold, David P. Reichert, Neil C. Rabinowitz, André Barreto, Thomas Degris: The Predictron: End-To-End Learning and Planning. CoRR abs/1612.08810 (2016)
- 2015
- [i3] Hado van Hasselt, Richard S. Sutton: Learning to Predict Independent of Span. CoRR abs/1508.04582 (2015)
- [i2] Hado van Hasselt, Arthur Guez, David Silver: Deep Reinforcement Learning with Double Q-learning. CoRR abs/1509.06461 (2015)
- 2014
- [c11] Richard S. Sutton, Ashique Rupam Mahmood, Doina Precup, Hado van Hasselt: A new Q(λ) with interim forward view and Monte Carlo equivalence. ICML 2014: 568-576
- [c10] Ashique Rupam Mahmood, Hado van Hasselt, Richard S. Sutton: Weighted importance sampling for off-policy learning with linear function approximation. NIPS 2014: 3014-3022
- [c9] Hado van Hasselt, Ashique Rupam Mahmood, Richard S. Sutton: Off-policy TD(λ) with a true online equivalence. UAI 2014: 330-339
- 2013
- [c8] Hado van Hasselt, Han La Poutré: Stacking under uncertainty: We know how to predict, but how should we act? CIPLS 2013: 25-32
- [i1] Hado van Hasselt: Estimating the Maximum Expected Value: An Analysis of (Nested) Cross Validation and the Maximum Sample Average. CoRR abs/1302.7175 (2013)
- 2012
- [p1] Hado van Hasselt: Reinforcement Learning in Continuous State and Action Spaces. Reinforcement Learning 2012: 207-251
- 2011
- [b1] Hado Philip van Hasselt: Insights in reinforcement learning: formal analysis and empirical evaluation of temporal-difference learning algorithms. Utrecht University, Netherlands, 2011
- [j2] Harm van Seijen, Shimon Whiteson, Hado van Hasselt, Marco A. Wiering: Exploiting Best-Match Equations for Efficient Reinforcement Learning. J. Mach. Learn. Res. 12: 2045-2094 (2011)
- [c7] Marco A. Wiering, Hado van Hasselt, Auke-Dirk Pietersma, Lambert Schomaker: Reinforcement learning algorithms for solving classification problems. ADPRL 2011: 91-96
- 2010
- [c6] Hado van Hasselt: Double Q-learning. NIPS 2010: 2613-2621
2000 – 2009
- 2009
- [c5] Marco A. Wiering, Hado van Hasselt: The QV family compared to other reinforcement learning algorithms. ADPRL 2009: 101-108
- [c4] Harm van Seijen, Hado van Hasselt, Shimon Whiteson, Marco A. Wiering: A theoretical and empirical analysis of Expected Sarsa. ADPRL 2009: 177-184
- [c3] Joost Westra, Hado van Hasselt, Frank Dignum, Virginia Dignum: Adaptive Serious Games Using Agent Organizations. AGS 2009: 206-220
- [c2] Hado van Hasselt, Marco A. Wiering: Using continuous action spaces to solve discrete problems. IJCNN 2009: 1149-1156
- 2008
- [j1] Marco A. Wiering, Hado van Hasselt: Ensemble Algorithms in Reinforcement Learning. IEEE Trans. Syst. Man Cybern. Part B 38(4): 930-936 (2008)
- [c1] Joost Westra, Hado van Hasselt, Virginia Dignum, Frank Dignum: On-line adapting games using agent organizations. CIG 2008: 243-250
last updated on 2024-10-04 21:04 CEST by the dblp team
all metadata released as open data under CC0 1.0 license