Jul 13, 2017 · We propose a new approach for joint training of multiple tasks, which we refer to as Distral (Distill & transfer learning). Instead of sharing parameters between the tasks, Distral shares a "distilled" policy that captures common behaviour across tasks. We find that the Distral algorithms learn faster and achieve better asymptotic performance, are significantly more robust to hyperparameter settings, and learn more stably.
Machine Learning Course Project, Skoltech 2018. We implemented the Distral algorithm and replicated the results of Google DeepMind's recent paper.
Apr 28, 2021 · Related reading: "Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning" (ICLR 2016), and Teh et al., "Distral: Robust Multitask Reinforcement Learning," arXiv preprint (2017).
The paper presents an approach to transfer between multiple reinforcement learning tasks by regularizing the policies of the different tasks towards a shared, distilled policy.
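As a sketch under assumptions: the joint objective has roughly the following KL-regularized form, where π_0 is the shared policy, π_i are the task-specific policies, and α, β are trade-off hyperparameters; the notation follows my reading of the paper and may differ from it in details.

```latex
% Sketch of the Distral joint objective (notation illustrative): each task
% policy \pi_i maximizes its own return, while the two log terms act, in
% expectation, as a KL penalty toward \pi_0 plus an entropy bonus.
J(\pi_0, \{\pi_i\}) \;=\; \sum_i \mathbb{E}_{\pi_i}\!\left[
  \sum_{t \ge 0} \gamma^t \Big(
      r_i(a_t, s_t)
      \;+\; \tfrac{\alpha}{\beta}\, \log \pi_0(a_t \mid s_t)
      \;-\; \tfrac{1}{\beta}\, \log \pi_i(a_t \mid s_t)
  \Big) \right]
```

Taking expectations, the two log terms decompose into −(α/β) KL(π_i ‖ π_0) plus ((1−α)/β) times the entropy of π_i, which is exactly the "regularize toward a shared policy" idea described above.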
Learn a global policy π0, and use a KL-divergence regularizer to produce the task-specific policies; knowledge learned on one task is thereby distilled into the shared policy.
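A minimal PyTorch sketch of this loss, assuming discrete actions and advantage estimates computed elsewhere; the function name, tensor shapes, and default coefficients are illustrative assumptions, not the authors' code:

```python
# Sketch (not the authors' code) of a Distral-style loss for one task:
# a policy-gradient term plus a KL penalty pulling the task policy pi_i
# toward the shared policy pi_0, plus a distillation term training pi_0.
import torch
import torch.nn.functional as F

def distral_task_loss(task_logits, central_logits, actions, advantages,
                      alpha=0.5, beta=5.0):
    """task_logits, central_logits: [B, A] action logits from pi_i and pi_0;
    actions: [B] (int64) sampled actions; advantages: [B] advantage estimates."""
    log_pi_i = F.log_softmax(task_logits, dim=-1)     # log pi_i(.|s)
    log_pi_0 = F.log_softmax(central_logits, dim=-1)  # log pi_0(.|s)
    pi_i = log_pi_i.exp()

    # Policy-gradient term for the task-specific policy.
    chosen = log_pi_i.gather(1, actions.unsqueeze(1)).squeeze(1)
    pg_loss = -(chosen * advantages.detach()).mean()

    # KL(pi_i || pi_0) keeps the task policy close to the shared policy;
    # pi_0 is detached here so the penalty only shapes pi_i.
    kl = (pi_i * (log_pi_i - log_pi_0.detach())).sum(-1).mean()

    # Entropy bonus on pi_i encourages exploration.
    entropy = -(pi_i * log_pi_i).sum(-1).mean()

    # Distillation term: pi_0 is trained to put mass where the task
    # policy does, i.e. to be a centroid of the task-specific policies.
    distill_loss = -(pi_i.detach() * log_pi_0).sum(-1).mean()

    return (pg_loss + (alpha / beta) * kl
            - ((1 - alpha) / beta) * entropy + distill_loss)
```

The full algorithms in the paper fold the regularization terms into the returns (soft advantages) rather than applying them only per batch; this sketch keeps the simpler batch-level form for clarity.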
This notebook presents and implements the framework for multitask reinforcement learning proposed in the cited paper. The key idea behind this framework is to distill common behaviour from the task-specific policies into a shared policy while keeping each task policy close to it.