Efficient Exploration in Edge-Friendly Hyperdimensional Reinforcement Learning

Y Ni, WY Chung, S Cho, Z Zou, M Imani - Proceedings of the Great …, 2024 - dl.acm.org
Integrating deep learning with Reinforcement Learning (RL) yields algorithms that achieve human-like learning in complex yet unknown environments through trial and error. Despite these advances, the computational cost of deep learning remains a major drawback. This paper proposes a revamped Q-learning algorithm powered by Hyperdimensional Computing (HDC), targeting more efficient and adaptive exploration. We introduce a solution that leverages model uncertainty to guide agent exploration. Our evaluation shows that the proposed algorithm significantly improves learning quality and efficiency over previous HDC-based algorithms, achieving more than 330× more rewards with small computational overhead. In addition, it maintains an edge over DNN-based alternatives by ensuring reduced runtime costs and improved policy learning, achieving up to 6.9× faster learning.
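The abstract describes Q-learning where HDC replaces a deep network as the value-function approximator, with model uncertainty steering exploration. A minimal sketch of these two ideas follows; the bipolar state encoding, ensemble-disagreement uncertainty bonus, and all dimensions and hyperparameters are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2048          # hypervector dimensionality (assumed)
N_STATES = 16     # toy discrete state space
N_ACTIONS = 4

# Random bipolar basis hypervector per state (one common HDC encoding).
state_hvs = rng.choice([-1.0, 1.0], size=(N_STATES, D))

# Per-action model hypervectors; Q(s, a) is a normalized dot product.
models = np.zeros((N_ACTIONS, D))

def q_values(s):
    return models @ state_hvs[s] / D

def select_action(s, ensemble, beta=1.0):
    # Uncertainty-guided exploration: score each action by its mean Q
    # plus a bonus proportional to disagreement across an ensemble of
    # perturbed models (a stand-in for the paper's model uncertainty).
    qs = np.stack([m @ state_hvs[s] / D for m in ensemble])
    return int(np.argmax(qs.mean(axis=0) + beta * qs.std(axis=0)))

def update(s, a, r, s_next, alpha=0.1, gamma=0.9):
    # Standard Q-learning TD target, applied as a bundling-style
    # update to the chosen action's model hypervector.
    td = r + gamma * q_values(s_next).max() - q_values(s)[a]
    models[a] += alpha * td * state_hvs[s]
```

The dot-product value lookup and additive update avoid backpropagation entirely, which is the kind of lightweight arithmetic that makes HDC attractive for edge deployment.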