Paper:
Adaptive Action Selection of Body Expansion Behavior in Multi-Robot System Using Communication
Tomohisa Fujiki*, Kuniaki Kawabata**, and Hajime Asama*
*RACE, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8568, Japan
**Distributed Adaptive Robotics Research Unit, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.