
JACIII Vol.17 No.3 pp. 450-458
doi: 10.20965/jaciii.2013.p0450
(2013)

Paper:

Self-Organized Map Based Learning System for Estimating the Specific Task by Simple Instructions

Hiroyuki Masuta, Yasuto Tamura, and Hun-ok Lim

Department of Mechanical Engineering, Kanagawa University, 3-27-1 Rokkakubashi, Kanagawa-ku, Yokohama-shi, Kanagawa 221-8686, Japan

Received:
November 14, 2012
Accepted:
March 29, 2013
Published:
May 20, 2013
Keywords:
service robot, self-organized map, human interaction, decision making
Abstract
This paper discusses a learning system that enables service robots to estimate specific tasks from simple instructions given by human beings. Intelligent robots are expected to operate in human living areas, so service robots should understand specific tasks from simple human instructions. To do so, it is important to perceive environmental situations and to adapt to human preferences. We propose a learning method using the Self-Organized Map (SOM) to estimate specific tasks from both human behavior measurement and environmental measurement. Through simulation experiments, we verified that the proposed SOM-based method accounts for environmental situations associated with time variations, and we show that service robots can decide on table-clearing tasks according to human intent.
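The core technique named in the abstract is the Kohonen SOM, which clusters high-dimensional observation vectors onto a low-dimensional grid of nodes. The following is a minimal illustrative sketch of the general SOM update rule only, not the authors' implementation; the map size, input dimensionality, learning-rate schedule, and neighborhood schedule are all arbitrary assumptions.

    import numpy as np

    class SOM:
        """Minimal Kohonen Self-Organizing Map (illustrative sketch).

        Inputs would combine human-behavior and environmental
        measurements into one observation vector per sample.
        """

        def __init__(self, rows=10, cols=10, dim=6, seed=0):
            rng = np.random.default_rng(seed)
            self.weights = rng.random((rows, cols, dim))  # reference vectors
            # Grid coordinates of each node, shape (rows, cols, 2).
            self.grid = np.stack(np.mgrid[0:rows, 0:cols], axis=-1)

        def best_matching_unit(self, x):
            # Node whose reference vector is closest to the input (Euclidean).
            d = np.linalg.norm(self.weights - x, axis=-1)
            return np.unravel_index(np.argmin(d), d.shape)

        def train(self, data, epochs=100, lr0=0.5, sigma0=3.0):
            for t in range(epochs):
                # Decay learning rate and neighborhood radius over time.
                lr = lr0 * np.exp(-t / epochs)
                sigma = sigma0 * np.exp(-t / epochs)
                for x in data:
                    bmu = self.best_matching_unit(x)
                    # Gaussian neighborhood around the BMU on the map grid.
                    dist2 = np.sum((self.grid - np.array(bmu)) ** 2, axis=-1)
                    h = np.exp(-dist2 / (2 * sigma**2))[..., None]
                    self.weights += lr * h * (x - self.weights)

    # Usage sketch: train on 6-dimensional observation vectors. After
    # training, each node could be labeled with the task most often
    # observed when it is the BMU, so a new observation maps to a task.
    som = SOM()
    som.train(np.random.default_rng(1).random((200, 6)))
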
Cite this article as:
H. Masuta, Y. Tamura, and H. Lim, “Self-Organized Map Based Learning System for Estimating the Specific Task by Simple Instructions,” J. Adv. Comput. Intell. Intell. Inform., Vol.17 No.3, pp. 450-458, 2013.