
PERIYA, KASARAGOD

KERALA -671320

Seminar Report on
HUMAN ROBOT INTERACTION

Presented By
SIDHARTH M
Reg. No: 17030999

DEPARTMENT OF ELECTRICAL & ELECTRONICS ENGINEERING

2019-20
KASARAGOD
KERALA- 671320

Certificate
This is to certify that this seminar titled HUMAN ROBOT
INTERACTION has been presented by SIDHARTH M (Reg. No: 17030999), a final
year student of Electrical & Electronics Engineering, in partial fulfillment of
the requirement for the award of the Diploma in Electrical & Electronics
Engineering under the Board of Technical Education, Govt. of Kerala, during
the academic year 2019-2020.

Staff in Charge Head of the Department

Internal Examiner External Examiner


ACKNOWLEDGEMENT

It is my proud privilege and duty to acknowledge the kind help and guidance
received from several people in the preparation of this seminar report. It would not have
been possible to prepare this report in its present form without their valuable help,
cooperation and guidance.

I sincerely thank our seminar coordinator Sri. Sunilkumar S, Lecturer, Department of
Electrical and Electronics Engineering, who has guided and supported me through the
completion of this seminar.

I thank Sri. Purandara M, Head of the Department, Department of Electrical and
Electronics Engineering, who has been our source of inspiration. He has been especially
enthusiastic in giving his opinions and critical reviews.

I thank our beloved principal Sri. P. Y. Solomon for his constant help and support
throughout.

Finally, I thank all the teaching and non-teaching staff of the Electrical and Electronics
Engineering Department for the help they rendered. I am also thankful to my parents;
I will remember their contribution forever.

SIDHARTH M
Reg.No:17030999
ABSTRACT

A very important aspect in developing robots capable of Human-Robot Interaction (HRI) is
research into natural, human-like communication and, subsequently, the development of a
research platform with multiple HRI capabilities for evaluation.

Besides a flexible dialog system and speech understanding, an anthropomorphic appearance
has the potential to support intuitive usage and understanding of a robot; for example,
human-like facial expressions and deictic gestures can be both produced and understood by
the robot. To create an anthropomorphic appearance and to come close to a human-human
interaction model, the robot uses only human-like sensors, i.e., two cameras and two
microphones, in analogy to human perceptual capabilities.

Despite the challenges resulting from these limits on perception, a robust attention
system for tracking and interacting with multiple persons simultaneously in real time is
presented. The tracking approach is sufficiently generic to work on robots with varying
hardware, as long as stereo audio data and images from a video camera are available. To
easily implement different interaction capabilities such as deictic gestures, natural adaptive
dialogs, and emotion awareness on the robot, a modular integration approach utilizing
XML-based data exchange is applied. This report focuses on bringing together different
interaction concepts and perception capabilities integrated on a humanoid robot to achieve
comprehensive, human-oriented interaction.

CONTENTS

CHAPTERS

1. INTRODUCTION
2. BACKGROUND
3. DEVELOPMENT HISTORY OF ROBOTS
4. HUMAN MACHINE INTERACTION
   4.1. ERGONOMICS
   4.2. HUMAN COMPUTER INTERACTION
   4.3. HUMAN ROBOT INTERACTION
5. HUMAN COMPUTER INTERACTION
   5.1. PUNCHED CARDS
   5.2. KEYBOARD AND MOUSE
   5.3. TOUCH SCREEN
   5.4. VIRTUAL REALITY
6. HUMAN ROBOT INTERACTION
7. HUMANICS AND ROLE OF HRI
   7.1. KNOWLEDGE PERSPECTIVE
   7.2. COGNITIVE PERSPECTIVE
8. THREE LAWS OF HUMAN ROBOT INTERACTION
9. FACE DETECTION
   9.1. ANDROID'S BASIC FACIAL RECOGNITION
10. VOICE DETECTION
11. HUMAN ROBOT SOCIAL INTERACTION
    11.1. SOCIABLE ROBOTS
        11.1.1. KISMET
        11.1.2. ASIMO
        11.1.3. ICUB
        11.1.4. HYPER-REALISTIC EINSTEIN ROBOT
12. ADVANTAGES, DISADVANTAGES AND APPLICATIONS
13. CONCLUSION
14. REFERENCES

LIST OF FIGURES

FIGURES

1. Fig. 1 DEVELOPMENT HISTORY OF ROBOTS, THE PIGEON
2. Fig. 2 DEVELOPMENT HISTORY OF ROBOTS, THE LOOM
3. Fig. 3 PUNCHED CARDS
4. Fig. 4 KEYBOARD AND MOUSE
5. Fig. 5 TOUCH SCREEN
6. Fig. 6 VIRTUAL REALITY
7. Fig. 7 HUMAN ROBOT INTERACTION
8. Fig. 8 KNOWLEDGE PERSPECTIVE
9. Fig. 9 COGNITIVE PERSPECTIVE
10. Fig. 10 FACE DETECTION
11. Fig. 10.1 ANDROID'S BASIC FACIAL RECOGNITION, SAMSUNG
12. Fig. 10.2 ANDROID'S BASIC FACIAL RECOGNITION, IPHONE
13. Fig. 11.1 KISMET
14. Fig. 11.2 ASIMO
15. Fig. 11.3 ICUB
16. Fig. 11.4 HYPER-REALISTIC EINSTEIN ROBOT


INTRODUCTION

For face detection, a method originally developed by Viola and Jones for object detection is
adopted. Their approach uses a cascade of simple rectangular features that allows a very
efficient binary classification of image windows into either the face or non-face class. This
classification step is executed for different window positions and different scales to scan the
complete image for faces. We apply the idea of a classification pyramid, starting with very
fast but weak classifiers to reject image parts that are certainly not faces. With increasing
complexity of the classifiers, the number of remaining image parts decreases. The training of
the classifiers is based on the AdaBoost algorithm, which iteratively combines the weak
classifiers into stronger ones until the desired level of quality is achieved.
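As a rough illustration of this kind of cascade-based detection, the short Python sketch below runs OpenCV's pretrained frontal-face Haar cascade, which implements a Viola-Jones style detector. The image file name and the scan parameters (scale factor, minimum neighbours, minimum size) are assumptions for the example, not values taken from this report.

import cv2

# Load OpenCV's pretrained frontal-face Haar cascade (a Viola-Jones style detector).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("frame.jpg")                 # one camera frame (assumed file name)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # detection runs on grayscale images

# Scan the image at several positions and scales; each cascade stage rejects
# windows that are certainly not faces, so only a few windows reach the end.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                      minSize=(40, 40))

for (x, y, w, h) in faces:
    print(f"face at ({x}, {y}), size {w}x{h}")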

As an extension to the frontal view detection proposed by Viola and Jones, we additionally
classify the horizontal gazing direction of faces by using four instances of the classifier
pyramid described earlier, trained for faces rotated by 20°, 40°, 60°, and 80°. For classifying
left- and right-turned faces, the image is mirrored at its vertical axis, and the same four
classifiers are applied again.

The gazing direction is evaluated for activating or deactivating the speech processing, since
the robot should not react to people talking to each other in front of it, but only to
communication partners facing it. Subsequent to the face detection, face identification is
applied to the detected image region using the eigenface method to compare the detected
face with a set of trained faces. For each detected face, the size, center coordinates, horizontal
rotation, and results of the face identification are provided at a real-time capable frequency
of about 7 Hz on an Athlon64 2 GHz desktop PC with 1 GB RAM.
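The eigenface comparison step could be sketched as follows with the cv2.face module from opencv-contrib-python. The training file names, labels, and the 64x64 crop size are placeholders for illustration, since the report does not describe the actual training set.

import cv2
import numpy as np

# Eigenface recognizer (requires the opencv-contrib-python package).
recognizer = cv2.face.EigenFaceRecognizer_create()

# Training set: equally sized grayscale face crops with integer person labels.
# File names and labels here are purely illustrative.
train_files = ["person0_a.png", "person0_b.png", "person1_a.png"]
train_images = [cv2.resize(cv2.imread(f, cv2.IMREAD_GRAYSCALE), (64, 64))
                for f in train_files]
train_labels = np.array([0, 0, 1])
recognizer.train(train_images, train_labels)

# Compare a newly detected face region against the trained faces.
face_crop = cv2.imread("detected_face.png", cv2.IMREAD_GRAYSCALE)
face_crop = cv2.resize(face_crop, (64, 64))
label, distance = recognizer.predict(face_crop)
print(f"best match: person {label}, distance {distance:.1f}")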


BACKGROUND

Human-Robot Interaction (HRI) is a relatively young discipline that has attracted a lot of
attention over the past few years due to the increasing availability of complex robots and
people's exposure to such robots in their daily lives, e.g. as robotic toys or, to some extent, as
household appliances (robotic vacuum cleaners or lawn mowers). Also, robots are increasingly
being developed for real-world application areas, such as robots in rehabilitation, eldercare,
or robots used in robot-assisted therapy and other assistive or educational applications.

This chapter is not meant to be a review of HRI per se; please consult, e.g., Goodrich
and Schultz (2007) or Dautenhahn (2007a) for such surveys and discussions of the history and
origins of this field. Instead, I would like to discuss a few key issues within the domain of
HRI that often lead to misunderstandings or misinterpretations of research in this domain.
The chapter will not dwell on technical details but will focus on interdisciplinary aspects of
this research domain in order to inspire innovative new research that goes beyond traditional
boundaries of established disciplines.

Researchers may be motivated differently to join the field of HRI. Some may be roboticists,
working on developing advanced robotic systems with possible real-world applications, e.g.
service robots that should assist people in their homes or at work, and they may join this field
in order to find out how to handle situations when these robots need to interact with people,
in order to increase the robots' efficiency. Others may be psychologists or ethnologists who
take a human-centered perspective on HRI; they may use robots as tools in order to understand
fundamental issues of how humans interact socially and communicate with others and with
interactive artifacts. Artificial Intelligence and Cognitive Science researchers may join
this field with the motivation to understand and develop complex intelligent systems, using
robots as embodied instantiations and test beds of those.

Last but not least, a number of people are interested in studying the interaction of people and
robots: how people perceive different types and behaviors of robots, how they perceive social
cues or different robot embodiments, etc. The means to carry out this work is usually via 'user
studies'. Such work often has little technical content; e.g. it may use commercially available
and already fully programmed robots, or research prototypes showing few behaviors or being
controlled remotely (via the Wizard-of-Oz approach, whereby a human operator, unknown to
the participants, controls the robot), in order to create very constrained and controlled
experimental conditions. Such research strongly focuses on humans' reactions and attitudes
towards robots, and typically entails large-scale evaluations trying to find statistically
significant results. Unfortunately, this area of 'user studies', which is methodologically
heavily influenced by experimental psychology and human-computer interaction (HCI)
research, is often narrowly equated with the field of HRI. "Shall we focus on the AI and
technical development of the robot, or shall we do HRI?" is not an uncommon remark heard
in research discussions. This tendency to equate HRI with 'user studies' is, in my view, very
unfortunate, and it may in the long run sideline HRI and transform this field into a niche
domain.


DEVELOPMENT HISTORY OF ROBOTS

1. Sometime in the years between 425 B.C. and 350 B.C., the Greek mathematician
Archytas of Tarentum built a mechanical bird dubbed "The Pigeon". The Pigeon was
propelled by steam and could fly a distance of 200 meters. Although the steam-
powered bird could not take off again once it landed, it still serves as one of history's
earliest studies in the field of flight and robotics.

Archytas Pigeon

Fig.1

2. The Jacquard mechanical loom (a head that could be attached to a power loom or a
hand loom and that controlled which warp thread was raised during shedding) was
invented by Joseph Marie Jacquard in 1801. It simplified the process of manufacturing
textiles with complex patterns, where multiple shuttles could be used to control the
colors of the weave. It was a continuation of earlier inventions by the Frenchmen
Basile Bouchon (1725), who invented a way to control a loom with a perforated paper
tape, Jean Baptiste Falcon (1728), who improved Bouchon's machine, and Jacques
Vaucanson (1740), who crafted the world's first completely automated loom. The
Jacquard loom is controlled by punched cards, each row of which corresponds to
one row of the design.
Now obsolete as a recording medium, punched cards were widely used throughout the
19th century for controlling textile looms and in the late 19th and early 20th century
for operating fairground organs and related musical instruments. They were used
through the 20th century in unit record machines for input, processing, and data
storage. Even early digital computers used punched cards as the primary medium for
input of both computer programs and data. The best-known punched card formats are
Hollerith's punched card formats, IBM 80-column punched card formats and character
codes, mark sense (or electrographic) cards, aperture cards, IBM Stub (or Short)
cards, IBM Port-A-Punch, IBM 96-column cards, and Powers/Remington Rand UNIVAC
formats.

Fig.2


HUMAN MACHINE INTERACTION

1. Ergonomics (or human factors):

It is the scientific discipline concerned with the understanding of the interactions
among humans and other elements of a system, and the profession that applies
theory, principles, data and methods to design in order to optimize human well-
being and overall system performance.

2. Human Computer Interaction:

Human-computer interaction (HCI) is a multidisciplinary field of study focusing on
the design of computer technology and, in particular, the interaction between humans
(the users) and computers. While initially concerned with computers, HCI has since
expanded to cover almost all forms of information technology design.

3. Human Robot Interaction:

Human-Robot Interaction (HRI) is a field of study dedicated to understanding,
designing, and evaluating robotic systems for use by or with humans. Interaction, by
definition, requires communication between robots and humans. Communication
between a human and a robot may take several forms, but these forms are largely
influenced by whether the human and the robot are in close proximity to each other or
not.


HUMAN COMPUTER INTERACTION (HCI)

Background

HRI depends on a knowledge of (sometimes natural) human communication; many aspects
of HRI are continuations of human communication topics that are much older than
robotics.

1. Punched Cards:

Punch cards (or "punched cards"), also known as Hollerith cards or IBM cards, are
paper cards in which holes may be punched by hand or machine to represent computer
data and instructions. They were a widely-used means of inputting data into early
computers. The cards were fed into a card reader connected to a computer, which
converted the sequence of holes to digital information.

Fig.3
The Jacquard Loom is controlled by a chain of multiple cards punched with holes that
determine which cords of the fabric warp should be raised for each pass of the shuttle.
The ability to store and automatically reproduce complex operations found wide
application in textile manufacturing.


2. Keyboard and Mouse:

Most interactions with a computer involve using a keyboard and a mouse. The
keyboard allows the user to type letters and numbers and the mouse allows the user
to position the cursor, draw and execute program functions by clicking mouse
buttons.

Fig.4

A keyboard can also be used to remote-control a robot: any USB HID device with buttons,
such as a Rii mini keyboard or a tiny remote control, can have its keys mapped to drive
commands, as in the sketch below.
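A minimal sketch of such keyboard teleoperation is shown here. The Robot class, the key bindings, and the wheel-speed values are hypothetical stand-ins for whatever motor interface a real robot exposes; they are not taken from this report.

# Minimal keyboard teleoperation loop.
class Robot:
    """Stand-in for the motor-driver interface of an actual robot."""
    def drive(self, left: float, right: float) -> None:
        print(f"wheel speeds: left={left:+.1f}, right={right:+.1f}")

    def stop(self) -> None:
        print("stop")

# Map keys to (left wheel, right wheel) speed pairs; values are illustrative.
BINDINGS = {
    "w": (1.0, 1.0),    # forward
    "s": (-1.0, -1.0),  # backward
    "a": (-0.5, 0.5),   # turn left
    "d": (0.5, -0.5),   # turn right
}

robot = Robot()
while True:
    key = input("command (w/a/s/d, q to quit): ").strip().lower()
    if key == "q":
        robot.stop()
        break
    if key in BINDINGS:
        robot.drive(*BINDINGS[key])
    else:
        robot.stop()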


3. Touch Screen:

A touch screen is a computer screen that can be used by touching it with a finger or a
stylus pen, instead of using a mouse and keyboard. It can be described as a
touchpad with a screen built-in to it.
Today, all kinds of devices, both big and small, make use of touch screens. Tablet
computers and smartphones have made them popular and are the most widely known
and used products with touch screens.

Fig.5

Touchscreens are also used to control robots. A touchscreen is an electronic visual display
that the user can control through simple or multi-touch gestures by touching the screen
with one or more fingers. Some touch screens can also detect objects such as a stylus or
ordinary or specially coated gloves.


4. Virtual Reality:

Virtual Reality (VR) is the use of computer technology to create a simulated
environment. Unlike traditional user interfaces, VR places the user inside an
experience. Instead of viewing a screen in front of them, users are immersed and able
to interact with 3D worlds. By simulating as many senses as possible, such as vision,
hearing, touch, and even smell, the computer is transformed into a gatekeeper to this
artificial world. The only limits to near-real VR experiences are the availability of
content and cheap computing power.

Fig.6

Computer graphics based "virtual reality" (VR) techniques offer very valuable task
visualization aids for planning and previewing robotic systems and tasks, for
predicting robotic actions, training robotic system operators, and for visual
perception of non-visible events like contact forces in robotic tasks.


HUMAN ROBOT INTERACTION

Human-Robot Interaction (HRI) is a field of study dedicated to understanding, designing,
and evaluating robotic systems for use by or with humans. Interaction, by definition, requires
communication between robots and humans. Communication between a human and a robot
may take several forms, but these forms are largely influenced by whether the human and the
robot are in close proximity to each other or not. Thus, communication and, therefore,
interaction can be separated into two general categories:

• Remote interaction – The human and the robot are not co-located and are separated
spatially or even temporally (for example, the Mars Rovers are separated from earth
both in space and time).

• Proximate interactions – The humans and the robots are co-located (for example,
service robots may be in the same room as humans).

Within these general categories, it is useful to distinguish between applications that require
mobility, physical manipulation, or social interaction. Remote interaction with mobile robots
often is referred to as teleoperation or supervisory control, and remote interaction with a
physical manipulator is often referred to as telemanipulation. Proximate interaction with
mobile robots may take the form of a robot assistant, and proximate interaction may include
a physical interaction. Social interaction includes social, emotive, and cognitive aspects of
interaction. In social interaction, the humans and robots interact as peers or companions.
Importantly, social interactions with robots appear to be proximate rather than remote.
Because the volume of work in social interactions is vast, we present only a brief survey; a
more complete survey of this important area is left to future work.

Fig.7


HUMANICS AND ROLE OF HRI

1. Knowledge Perspective

• Data Literacy
• Technology Literacy
• Human Literacy

Fig.8

2. Cognitive Perspective

• Systems Thinking
• Entrepreneurship
• Cultural Agility
• Critical Thinking

Fig.9


THREE LAWS OF HUMAN-ROBOT INTERACTION

The origin of HRI as a discrete problem was stated by the 20th-century author Isaac Asimov
in his 1942 short story "Runaround", later collected in I, Robot. He states the Three Laws of
Robotics as:

1. A robot may not injure a human being or, through inaction, allow a human being to
come to harm.

2. A robot must obey any orders given to it by human beings, except where such orders
would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict
with the First or Second Law.

These Three Laws of Robotics frame the idea of safe interaction. The closer the human and
the robot get, and the more intricate the relationship becomes, the more the risk of a human
being injured rises. Today, manufacturers employing industrial robots often address this issue
by not letting humans and robots share the workspace at any time. This is achieved by
defining safe zones using lidar sensors or physical cages, so that the presence of humans in
the robot's workspace is completely forbidden while it is working.

Many in the field of HRI study how humans collaborate and interact and use those studies to
motivate how robots should interact with humans.


FACE DETECTION

Face detection plays an important role in developing human-robot interaction (HRI), since it
allows social robots to recognize people. This chapter introduces an intelligent vision system
that is able to detect a human face against the background and filter out all the non-face but
face-like images. The human face is detected using an AdaBoost-based Haar cascade
classifier, and real human face detection is improved using an extreme learning machine
(ELM). The proposed robot vision system for human detection is tested through real-time
experiments.
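As a rough sketch of how such an ELM could re-classify cascade detections as real faces or face-like non-faces, the following minimal implementation uses only numpy. The hidden-layer size, the 24x24 crop size, and the random training data are illustrative assumptions, not parameters from this report.

import numpy as np

# Extreme Learning Machine: one hidden layer with random, fixed input weights;
# only the output weights are learned, via a least-squares (pseudoinverse) solve.
class ELM:
    def __init__(self, n_inputs: int, n_hidden: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(n_inputs, n_hidden))  # random input weights
        self.b = rng.normal(size=n_hidden)               # random hidden biases
        self.beta = None                                  # learned output weights

    def _hidden(self, X: np.ndarray) -> np.ndarray:
        return np.tanh(X @ self.w + self.b)

    def fit(self, X: np.ndarray, y: np.ndarray) -> None:
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y                 # least-squares fit

    def predict(self, X: np.ndarray) -> np.ndarray:
        return self._hidden(X) @ self.beta

# Illustrative use: 24x24 grayscale detections flattened to 576-dim vectors,
# labelled +1 for real faces and -1 for face-like non-faces (random data here).
X_train = np.random.rand(200, 576)
y_train = np.where(np.random.rand(200) > 0.5, 1.0, -1.0)
elm = ELM(n_inputs=576, n_hidden=100)
elm.fit(X_train, y_train)
is_face = elm.predict(np.random.rand(5, 576)) > 0
print(is_face)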

Fig.10

Facial recognition technology is now a staple of smartphone security, along with the trusty
old PIN and increasingly elaborate fingerprint scanners. While not necessarily more secure
than a fingerprint scanner, biometric ideas like facial recognition tend to be faster and more
convenient to use. The following sections explore what options are out there, how they work,
and what they mean for security.


Android’s Basic Facial Recognition:

1. Almost all smartphones implement this technology today as an alternative to
unlocking the phone with a PIN or fingerprint.
Unfortunately, the way that this standard facial recognition works is not very secure.
It relies only on the front-facing camera and a 2D facial recognition algorithm, which
makes it cheap and easy to implement. These two features are all Android needs to
create a picture of your face and features. However, as this is just a 2D image, a simple
photograph of you is enough for a thief to fool the system and unlock your phone.
The speed and level of security provided by this technique vary a lot, and many Android
OEMs have worked to improve it over the years. The quality of the front camera
is a determining factor, as is the complexity of the algorithm used to extract facial
details. The use of neural network hardware can also accelerate more secure
algorithms on high-end smartphones; see Huawei's Face Unlock that shipped
with its P20 series and OnePlus's speedy unlock technology as examples. Sadly,
lower-cost models are seldom as snappy.

2. Samsung's iris scanning technology works by identifying the patterns in your irises.
Just like fingerprints, these are unique to each person, making them very difficult to
replicate. To do this, Samsung's latest flagships are equipped with an infrared diode
that illuminates your eyes regardless of the surrounding lighting conditions. This light
wavelength can't be detected by a regular front-facing camera, so a special narrow-focus
infrared camera then captures the detailed iris information. This image is then stored
and processed locally on the device; nothing is sent over the internet.

Fig.10.1


3. Apple unveiled its Face ID technology as part of its iPhone X launch, the first 3D
face scanning technology in a smartphone. Unlike the basic IR technology mentioned
previously, 3D scanning is designed to map out a user's entire face in a highly secure
manner. It doesn't just rely on the phone's familiar front-facing camera; there are
actually lots of sensors crammed onto the strip at the top.

Fig.10.2

The iPhone X comes equipped with an array of sensors designed to capture details of
your face. For starters, it uses an infrared flood light to illuminate your face, which
works regardless of the surrounding lighting conditions, as it is outside of the visible
spectrum. A 30,000-point infrared laser matrix is then beamed out on top of the flood
light. Rather than snapping a picture of this infrared light, a special infrared camera
detects subtle changes in the matrix point reflections as your face makes minute
movements, which allows the camera to capture very accurate 3D depth data.


Composition of Properties Forming Matchable Facial Features:

❖ Location and size: eyes, mouth, bridge of nose.

❖ Value: oriented gradients of pixel intensities (see the sketch after this list).

❖ For each detected face, the size, center coordinates, horizontal rotation, and results of
the face identification are provided at a real-time capable frequency of about 7 Hz on
an Athlon64 2 GHz desktop PC with 1 GB RAM.
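As a rough illustration of the "oriented gradients of pixel intensities" property, the sketch below computes a histogram-of-oriented-gradients (HOG) descriptor for a face crop with scikit-image. The image file name, the 64x64 crop size, and the cell sizes are assumptions for the example, not values from this report.

from skimage import io, transform
from skimage.feature import hog

# Load a detected face crop (assumed file name) and normalize it to a fixed size.
face = io.imread("detected_face.png", as_gray=True)
face = transform.resize(face, (64, 64))

# Histograms of oriented gradients: local gradient directions of the pixel
# intensities, pooled over small cells, give a fixed-length matchable descriptor.
features = hog(face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")
print(features.shape)  # one descriptor per face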


VOICE DETECTION

As mentioned before, the limited field of view of the cameras demands alternative detection
and tracking methods. Motivated by human perception, sound localization is applied to direct
the robot's attention. The integrated speaker localization (SPLOC) realizes both the detection
of possible communication partners outside the field of view of the camera and the
estimation of whether a person found by face detection is currently speaking. The program
continuously captures the audio data from the two microphones.

To estimate the relative direction of one or more sound sources in front of the robot, the
direction of sound arriving at the microphones is considered. Depending on the position of a
sound source in front of the robot, the run-time difference Δt results from the run times tr and
tl to the right and left microphones. SPLOC compares the recorded audio signals of the left
and the right microphone using a fixed number of samples for a cross power spectrum phase
(CSP) to calculate the temporal shift between the signals. Taking the distance between the
microphones dmic and a minimum range of 30 cm to a sound source into account, it is
possible to estimate the direction of a signal in 2-D space. For multiple sound source
detection, not only the main energy value of the CSP result is taken, but also all values
exceeding an adjustable threshold.
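A minimal numpy sketch of this CSP comparison (essentially a GCC-PHAT estimate) is given below. The sample rate, the microphone spacing dmic, and the synthetic test signal are assumed values for illustration; the report does not state the robot's actual parameters.

import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
FS = 48_000              # sample rate in Hz (assumed)
D_MIC = 0.30             # microphone spacing in metres (assumed)

def csp_direction(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate the sound direction in degrees (0 = straight ahead) from one
    block of left/right samples using the cross power spectrum phase."""
    n = len(left) + len(right)
    L = np.fft.rfft(left, n=n)
    R = np.fft.rfft(right, n=n)
    cross = L * np.conj(R)
    cross /= np.abs(cross) + 1e-12                # keep only the phase (PHAT weighting)
    corr = np.fft.irfft(cross, n=n)
    max_shift = int(FS * D_MIC / SPEED_OF_SOUND)  # largest physically possible lag
    corr = np.concatenate((corr[-max_shift:], corr[:max_shift + 1]))
    shift = np.argmax(np.abs(corr)) - max_shift   # temporal shift in samples
    tau = shift / FS                              # temporal shift in seconds
    return np.degrees(np.arcsin(np.clip(tau * SPEED_OF_SOUND / D_MIC, -1.0, 1.0)))

# Synthetic check: the right channel lags the left channel by 10 samples.
t = np.arange(2048)
left = np.sin(2 * np.pi * 440 * t / FS)
right = np.roll(left, 10)
print(f"estimated direction: {csp_direction(left, right):.1f} degrees")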

In 3-D space, the distance and height of a sound source are also needed for an exact detection.
This information can be obtained from the face detection when SPLOC is used to check
whether a found person is speaking or not. For coarsely detecting communication partners
outside the field of view, standard values are used that are sufficiently accurate to align the
camera properly and bring the person hypothesis into the field of view. The position of a
sound source (a speaker's mouth) is assumed at a height of 160 cm for an average adult. The
standard distance is adjusted to 110 cm, as observed during interactions with naive users.


HUMAN-ROBOT SOCIAL INTERACTION

Human interactions are fundamentally based on normative principles. Particularly in social
contexts, human behaviors are affected by social norms. Individuals expect certain behaviors
from other people, who are perceived to have an obligation to act according to the expected
behavior. Giving robots the ability to interact with humans, on human terms, is an open
challenge. People are more willing to accept robotic systems in daily life when the robots
engage in socially desirable behaviors with benevolent interaction styles. Furthermore,
allowing robots to reason in social situations, which involve a set of social norms generating
expectations, may improve the dynamics of human-robot interactions and the robot's
self-evaluation of its own behaviors.

Human-robot social interactions play an essential role in extending the use of robots in daily
life. It is widely argued that, to be social, robots need the ability to engage in interactions
with humans based on the same principles as humans do. The cognitive and social sciences
hold that human interactions are fundamentally based on normative principles. Most human
interactions are influenced by deep social and cultural standards, known as social norms.
Social norms are behavioral expressions of abstract social values (e.g., politeness and
honesty) that underlie the preferences of a community. They guide human behaviors
and generate expectations of compliance that are considered to be legitimate. An open
challenge is how to incorporate norm processing into robotic architectures.


SOCIABLE ROBOTS

A sociable robot is an autonomous robot that interacts and communicates with humans or
other autonomous physical agents by following social behaviors and rules attached to its role.
Like other robots, a social robot is physically embodied (avatars or on-screen synthetic social
characters are not embodied and thus distinct). Some synthetic social agents are designed with
a screen to represent the head or 'face' to dynamically communicate with users. In these cases,
the status as a social robot depends on the form of the 'body' of the social agent; if the body
has and uses some physical motors and sensor abilities, then the system could be considered
a robot.

Example for Sociable Robots:

1. KISMET (MIT)

Kismet is an expressive robotic creature with perceptual and motor modalities tailored
to natural human communication channels. To facilitate a natural infant-caretaker
interaction, the robot is equipped with visual, auditory, and proprioceptive sensory
inputs. The motor outputs include vocalizations, facial expressions, and motor
capabilities to adjust the gaze direction of the eyes and the orientation of the head.
Note that these motor systems serve to steer the visual and auditory sensors to the
source of the stimulus and can also be used to display communicative cues.

Fig.11.1


2. ASIMO (HONDA)

Honda has developed a new system that is a fundamental technology for advanced
intelligence, which comprehensively evaluates inputs from multiple sensors that are
equivalent to the visual, auditory, and tactile senses of a human being, then estimates
the situation of the surrounding environment and determines the corresponding
behavior of the robot.
ASIMO became capable of responding to the movement of people and the
surrounding situations. For instance, ASIMO will stop its current action and change
its behavior to accommodate the intention of the other party.

Fig.11.2


3. ICUB (ROBOTCUB)

iCub is a child-size humanoid robot capable of crawling, grasping objects, and
interacting with people. It is designed as an open source platform for research in
robotics, AI, and cognitive science.

Fig.11.3

4. HYPER-REALISTIC EINSTEIN ROBOT

A hyper-realistic Einstein robot at the University of California, San Diego has learned to smile
and make facial expressions through a process of self-guided learning. The UC San Diego
researchers used machine learning to "empower" their robot to learn to make realistic facial
expressions. Finely designed robotic faces can produce realistic facial expressions; however,
manually tuning the facial motor configurations costs hours of work even for experts.

Fig.11.4


ADVANTAGES, DISADVANTAGES AND APPLICATIONS

ADVANTAGES

• Used to do repetitive actions or jobs.
• Perform a variety of tasks.
• Improve quality.
• Increase production.
• Perform dangerous tasks that humans can't do.
• Robots are designed to work in harsh environments, such as space.
• Robots work 24/7 without complaining.

DISADVANTAGES

• Robots need a supply of power.
• It costs a lot of money to make or buy a robot.
• Robots also cost a lot of money in maintenance and repair.
• The software and equipment needed to use a robot cost a lot of money.
• Robots are not intelligent or sentient.

APPLICATIONS

• Search and Rescue.
• Assistive and Educational Robotics.
• Entertainment.
• Military and Police.
• Space Exploration.
• Home and Companion Robotics.
• Hospitality.


CONCLUSION

Research and design in HRI demand much greater participation by the human factors
community than has occurred in the past, except in some contexts, such as commercial
aviation and military systems, where human factors professionals have long participated.
Current technology for "self-driving" cars and drones poses huge challenges for safety and
acceptability. With regard to task dynamics and analysis, human factors professionals
definitely need to become more sophisticated in understanding dynamics and control, if only
at a basic level. We need to revisit the discussions of where humans best fit into systems as
compared with AI and computer control, and the shortcomings of AI in understanding
context need to be appreciated by system designers.

Teaching (instructing, programming) a robot is a language problem. The diversity and
burgeoning aspects of HRI reviewed earlier suggest great opportunities for human factors
involvement in researching and designing symbolic teaching. With regard to mental models,
that is, what operators are thinking, what they know, whether they misunderstand, and so on,
research is critical as systems get more complex and the stakes get higher. I believe practical
means to elicit operator mental models (both in real time and offline) and to compare these
models with what computers assess of situations will have safety and efficiency benefits in
human-robot systems.

Education (and training) has always been part of human factors, and computers have fit into
training systems for many years. The use of robots (computers that have bodies, that can
move about, that can show affect, and that can act like people) seems a natural extension of
using passive computers in education. Research in the areas relating to lifestyle, fears, and
human values is probably the most important challenge for HRI.

Human-Robot Interaction involves many exciting challenges, both with respect to the
technical issues and with respect to the human-centered aspects involved.


REFERENCES

Journals:

1. Adolphs, R. (2005). Could a robot have emotions? Theoretical perspectives from
social cognitive neuroscience. In J.-M. Fellous & M. A. Arbib (Eds.), Who needs
emotions? The brain meets the robot (chap. 2). Oxford, UK: Oxford University Press.
2. Seppelt, B. D., & Lee, J. D. (2007). Making adaptive cruise control (ACC) limits
visible. International Journal of Human–Computer Studies, 65, 183–272.
3. Skaar, S. B., & Ruoff, C. F. (1994). Progress in Astronautics and Aeronautics: Vol.
161. Teleoperation and robotics in space. Reston, VA: American Institute of
Aeronautics and Astronautics.

Websites:

1. https://journals.sagepub.com/doi/pdf/10.1177/0018720816644364
2. https://humanrobotinteraction.org/1-introduction/
3. https://www.seminarsonly.com/electronics/Human-Robot-Interaction.php
