DCIR: Dynamic Consistency Intrinsic Reward for Multi-Agent Reinforcement Learning

K Lin, Y Wang, P Chen, R Zeng, S Zhou, M Tan, C Gan
arXiv preprint arXiv:2312.05783, 2023 (arxiv.org)
Learning an optimal behavior policy for each agent in a multi-agent system is an essential yet difficult problem. Despite fruitful progress in multi-agent reinforcement learning, the question of when two agents should exhibit consistent behaviors remains under-explored. In this paper, we propose a new approach that uses intrinsic rewards to let each agent learn whether its behavior should be consistent with that of other agents while learning its optimal policy. We begin by defining behavior consistency as the divergence between the output actions of two agents when they are provided with the same observation. We then introduce the dynamic consistency intrinsic reward (DCIR), which makes agents aware of others' behaviors and lets them determine whether to be consistent. Finally, we devise a dynamic scale network (DSN) that provides learnable scale factors for each agent at every time step, dynamically deciding whether to reward consistent behavior and with what magnitude. We evaluate DCIR in multiple environments, including Multi-agent Particle, Google Research Football, and StarCraft II Micromanagement, demonstrating its efficacy.
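The core ideas in the abstract can be illustrated with a minimal sketch. The assumptions here are ours, not the paper's: we measure behavior consistency as the KL divergence between two agents' categorical action distributions under the same observation, and we model the dynamic scale network's output as a single signed scalar `scale` that converts that divergence into an intrinsic reward (positive scale rewards consistency, negative scale rewards divergence). The function names `kl_divergence` and `dcir_reward` are illustrative, not from the paper.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    """KL divergence between two categorical action distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def dcir_reward(pi_i, pi_j, scale):
    """Intrinsic reward for agent i w.r.t. agent j (illustrative sketch).

    `scale` stands in for the dynamic scale network's per-step output:
    a positive value rewards behaving consistently with agent j (low
    divergence), a negative value rewards behaving differently.
    Sign convention is an assumption, not the paper's exact formulation.
    """
    return -scale * kl_divergence(pi_i, pi_j)

# Action distributions of three agents given the same observation.
pi_a = [0.7, 0.2, 0.1]
pi_b = [0.1, 0.2, 0.7]   # disagrees with pi_a
pi_c = [0.7, 0.2, 0.1]   # identical to pi_a

print(dcir_reward(pi_a, pi_c, scale=1.0))   # identical policies: zero reward
print(dcir_reward(pi_a, pi_b, scale=1.0))   # divergence penalized (negative)
print(dcir_reward(pi_a, pi_b, scale=-1.0))  # divergence rewarded (positive)
```

In the full method the scale factor is produced per agent and per time step by the learned DSN, so the same divergence can be rewarded in situations that call for diverse behavior and penalized in situations that call for coordination.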