Understanding end-to-end model-based reinforcement learning methods as implicit parameterization

C Gehring, K Kawaguchi, J Huang… - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
Abstract
Estimating the per-state expected cumulative rewards, however the experience is obtained, is a critical aspect of reinforcement learning approaches, but standard deep neural-network function-approximation methods are often inefficient in this setting. An alternative approach, exemplified by value iteration networks, is to learn the transition and reward models of a latent Markov decision process whose value predictions fit the data. This approach has been shown empirically to converge faster to a more robust solution in many cases, but there has been little theoretical study of the phenomenon. In this paper, we explore such implicit representations of value functions via theory and focused experimentation. We prove that, for a linear parametrization, gradient descent converges to global optima despite the non-linearity and non-convexity introduced by the implicit representation. Furthermore, we derive convergence rates for both the implicit and explicit representations, which allow us to identify conditions under which stochastic gradient descent (SGD) with the implicit representation converges substantially faster than its explicit counterpart. Finally, we provide empirical results in some simple domains that illustrate the theoretical findings.
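To make the implicit-representation idea concrete, below is a minimal NumPy sketch, not the authors' code: the value function is never stored directly but is produced by running value iteration on a small latent MDP whose reward vector r is the learnable parameter. Fixing the transition matrix keeps V linear in the parameters, matching the linear-parametrization setting of the paper's global-convergence result. All names and sizes here (n_states, gamma, n_iters, V_target) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, gamma, n_iters = 5, 0.9, 50

# Fixed latent transition matrix (row-stochastic). The learnable
# parameter is the latent reward vector r, so the implied value
# function V(r) is linear in r. These choices are illustrative.
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)
r = np.zeros(n_states)

def implicit_value(r):
    """V is represented implicitly: it is obtained by running value
    iteration on the latent MDP (P, r) rather than read off directly."""
    V = np.zeros(n_states)
    for _ in range(n_iters):
        V = r + gamma * P @ V          # Bellman backup on the latent model
    return V

# For truncated value iteration, V = J @ r with J = sum_{k<K} (gamma P)^k,
# so the gradient of the squared value-fitting loss has a closed form.
J = sum(np.linalg.matrix_power(gamma * P, k) for k in range(n_iters))

V_target = rng.normal(size=n_states)   # stand-in for observed returns
lr = 1.0 / np.linalg.norm(J, 2) ** 2   # safe step size for this quadratic

for _ in range(5000):
    residual = implicit_value(r) - V_target
    r -= lr * (J.T @ residual)         # gradient of 0.5 * ||V(r) - V_target||^2

print(np.linalg.norm(implicit_value(r) - V_target))  # shrinks toward 0
```

An explicit counterpart would parameterize V directly (e.g., as a table or network output) and fit it to the same targets; the paper's convergence rates identify conditions under which fitting through the latent model, as sketched above, is substantially faster under SGD.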