A Survey of Methods For Safe Human-Robot Interaction
Foundations and Trends in Robotics
Vol. 5, No. 4 (2014) 261–349
© 2017 P. A. Lasota, T. Fong and J. A. Shah
DOI: 10.1561/2300000052
Przemyslaw A. Lasota
Massachusetts Institute of Technology,
USA
[email protected]
Terrence Fong
NASA Ames Research Center,
USA
[email protected]
Julie A. Shah
Massachusetts Institute of Technology,
USA
[email protected]
Contents
1 Introduction
1.1 Defining Safety in HRI
1.2 Safety Standards and Criteria
1.3 Goals and Scope
2 Safety Through Control
3 Safety Through Motion Planning
4 Safety Through Prediction
5 Safety Through Consideration of Psychological Factors
6 Future Directions
7 Conclusion
Acknowledgements
References
Abstract
The main goal of this work is to organize and summarize the large body
of research related to facilitation of safe human-robot interaction. This
survey describes the strategies and methods that have been developed
1.3.1 Method
Figure 1.1: Diagram depicting the major methods of providing safety in HRI. [Figure: Control (Pre-Collision, Post-Collision); Motion Planning (Constraints Based on Human Presence, Geometric and Task-Based Constraints); Prediction (Human Activity, Human Motion); Psychological Consideration (Robot Behavior Adaptation, Assessment).]
1.3.2 Organization
Figure 2.1: Diagram depicting the pre-collision control methods discussed in Section 2.1. [Figure: Robot Features; Non-Intrusive, Real-Time Measurement.]
One pre-collision control strategy is to plan robot trajectories in a manner that limits jerk, acceleration, and velocity. The ability to compute
new trajectories in real time is critical for applying such a planner in a
dynamic HRI setting. The planner developed by Broquere et al. meets
this need, as it constructs trajectories using polygonal chains of cubic
functions for which the parameters are computed directly, allowing for
real-time control.
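The cubic-chain construction lends itself to a short sketch (not the authors' implementation): each segment applies constant jerk over a fixed duration, so position, velocity, and acceleration integrate in closed form, which is what makes direct parameter computation and real-time use cheap. The phase durations below are made up for illustration.

```python
def integrate_jerk_profile(phases, x0=0.0, v0=0.0, a0=0.0):
    """Integrate a chain of constant-jerk (cubic) segments.

    Each (jerk, duration) pair defines one cubic polynomial segment,
    mirroring planners that build trajectories from polygonal chains
    of cubic functions. Returns the final (position, velocity,
    acceleration); bounding jerk per segment keeps all derivatives
    bounded.
    """
    x, v, a = x0, v0, a0
    for j, T in phases:
        x += v * T + 0.5 * a * T**2 + j * T**3 / 6.0
        v += a * T + 0.5 * j * T**2
        a += j * T
    return x, v, a

# Illustrative rest-to-rest move with |jerk| <= 1:
# ramp acceleration up, reverse it, then ramp back to zero.
phases = [(1.0, 1.0), (-1.0, 2.0), (1.0, 1.0)]
x, v, a = integrate_jerk_profile(phases)
```

This particular profile moves the joint two units and returns velocity and acceleration to zero, with peak velocity 1.0 at the midpoint.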
In the approach taken by Laffranchi et al. (2009), real-time ad-
justment is even more critical: Instead of planning trajectories that
might require adjustment due to changes in the positions of humans
or other objects in the environment, their method focused on real-time
tracking and limiting of the total amount of energy stored within the
system — namely, the sum of kinetic, gravitational potential, and elas-
tic potential energies. In this work, the controller was implemented on
a prototype single-joint series elastic actuator and tested in two cases:
accidental collision and free motion. The actuator was commanded to
follow a sinusoidal path in both cases, but a foam block was placed
in the actuator’s path during the former case. Laffranchi et al. found
that the energy of the system remained below a predefined thresh-
old through an online modification of the reference value of the po-
sition controller. Heinzmann and Zelinsky (2003), on the other hand,
developed a control approach that limits the potential force of im-
pact with static obstacles by imposing a safety envelope on the torque
commands of a position control algorithm. This method successfully
limited impact forces, regardless of where on the robot the collision
occurred.
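The energy-budget idea from Laffranchi et al. can be sketched for a single elastic joint: track the total stored energy and cap the commanded speed so the kinetic term never pushes the total past a threshold. This is an illustrative simplification, not the published controller, and the quantities are assumed scalars.

```python
import math

def total_energy(m, v, g, h, k, deflection):
    """Total stored energy of a single elastic joint: kinetic plus
    gravitational potential plus elastic potential."""
    return 0.5 * m * v**2 + m * g * h + 0.5 * k * deflection**2

def capped_speed(v_ref, m, potential, e_max):
    """Largest speed (up to the reference v_ref) that keeps the total
    energy within the budget e_max, given the current potential energy."""
    budget = e_max - potential          # energy left for the kinetic term
    if budget <= 0.0:
        return 0.0                      # potential energy alone exceeds budget
    return min(v_ref, math.sqrt(2.0 * budget / m))
```

In a controller loop, the reference of the position controller would be slowed whenever `capped_speed` returns less than the nominal reference, which is the online modification the authors describe.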
Finally, Haddadin et al. (2012) took the unique approach of embed-
ding injury knowledge into robot control by studying the relationships
between robot mass, velocity, and impact geometry with injury. As the
authors wrote, attempting to form a direct relation between these vari-
ous input parameters and injury is different from other approaches that
rely upon deriving relationships with robot collision outputs, such as
exerted forces or stresses. In this work, the authors identified impact
geometry primitives and performed drop tests on abdominal samples
from pigs while varying mass and speed, and used the international
medical classification system developed by the AO Foundation (AO,
2015) to analyze the injuries. Risk curves for each impact geometry
primitive were derived from the results from these tests, establishing
a relationship between impact speed and impact geometry, mass, and
the impacted body part. The authors then used these curves to scale
the velocity of the robot to ensure that injury above a certain threshold
could not occur in the event of an unexpected collision.
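As a sketch of how such derived risk curves might be consumed at run time, a controller can clamp the commanded speed to the limit for the current impact geometry and body part. The table entries below are hypothetical placeholders, not values from the drop tests.

```python
# Hypothetical speed limits (m/s) per (impact geometry, body part);
# a real system would interpolate the empirically derived risk curves.
SAFE_IMPACT_SPEED = {
    ("edge", "abdomen"): 0.3,
    ("sphere", "abdomen"): 0.8,
}

def injury_aware_speed(commanded, geometry, body_part, default=0.25):
    """Clamp the commanded speed so that an unexpected collision with
    the given geometry/body-part pairing stays below the assumed
    injury threshold; unknown pairings fall back to a conservative
    default."""
    limit = SAFE_IMPACT_SPEED.get((geometry, body_part), default)
    return min(commanded, limit)
```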
Slowing down or stopping the robot through the use of safety zones
or distance of separation is another method of preventing collision
through control. The robotics and automation company ABB has devel-
oped SafeMove, a system that utilizes programmable, complex safety
zones that can control robot speed (Kock et al., 2006). This system
allows for safer interaction between a human and an industrial robot
by using external sensing to track the presence of humans or objects
within safety zones and adjusting the robot’s speed to the zones’ pre-
defined limits. In contrast with static, predefined safety zones, Vogel
et al. (2013) developed a system that incorporates dynamically chang-
ing zones based on robot joint positions and velocities and displays
these zones on the surface around the robot via a projector. The con-
trol system detects when this virtual safety zone is entered, and stops
the robot as needed.
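A dynamically sized zone of the kind Vogel et al. describe can be approximated, very roughly, by growing a stopping-distance radius around each link based on its current speed. The braking model and margin here are illustrative assumptions, not the published system.

```python
import math

def dynamic_zone_radius(link_speed, max_decel, margin=0.1):
    """Radius (m) of a velocity-dependent safety zone around a link:
    the distance needed to brake to a stop at max_decel, plus a
    fixed margin."""
    return link_speed**2 / (2.0 * max_decel) + margin

def must_stop(human_pos, link_pos, link_speed, max_decel):
    """Stop the robot if the human is inside the link's current zone."""
    return math.dist(human_pos, link_pos) < dynamic_zone_radius(
        link_speed, max_decel)
```

Because the radius shrinks with the square of link speed, a slow robot permits much closer approach than a fast one, which is the advantage of dynamic zones over static, predefined ones.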
Lasota et al. (2014) developed a safety system for close-proximity in-
teraction with standard industrial robots that leverages accurate sens-
ing of a human’s location and the robot’s current configuration to
rapidly calculate the distance of separation between the human and
robot. This measurement is then used to gradually decrease the robot’s
speed according to a tunable function that can be adjusted via task-
dependent parameters. This approach eliminates the need for predefin-
ing conservative safety zones; however, while scaling the robot’s velocity
as a function of separation distance can be an effective method of im-
proving safety in HRI, slowing the robot can often lead to significant
decreases in the productivity of human-robot collaboration.
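The speed-scaling idea can be illustrated with a simple ramp; the actual function in Lasota et al. is tunable via task-dependent parameters, and the linear shape and distances below are assumptions.

```python
def speed_fraction(d, d_stop, d_full):
    """Fraction of nominal speed as a function of separation distance d:
    0 at or below d_stop, 1 at or beyond d_full, linear in between.
    d_stop and d_full play the role of task-dependent tuning
    parameters."""
    if d <= d_stop:
        return 0.0
    if d >= d_full:
        return 1.0
    return (d - d_stop) / (d_full - d_stop)
```

The commanded velocity would then be the nominal velocity multiplied by `speed_fraction(d, ...)`, recomputed each control cycle from the sensed separation distance.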
To address this, Zanchettin et al. (2015) developed a velocity scal-
ing approach that takes advantage of redundant degrees of freedom,
2.2 Post-Collision Methods
Figure 2.2: Diagram depicting the post-collision safety methods discussed in Section 2.2. [Figure: Contact Evaluation; Detection of Intent; Collaborative vs. Non-Collaborative contact.]
The first step toward utilizing post-collision control methods for HRI
safety is detecting whether a collision has occurred. As the use of ex-
ternal sensing is often impractical, much of the work in detection and
localization of human-robot collisions has focused on methods that in-
corporate on-board sensing. For example, De Luca et al. (2006) pre-
sented a collision detection system requiring only proprioceptive mea-
surement. This system utilizes a collision detection signal that can be
calculated using only the joint positions, velocities, and commanded
torques, and incorporates a measure of energy defined as the sum of
kinetic and gravitational potential energies. Furthermore, the system
utilizes a collision identification signal calculated from the same quan-
tities as the detection signal to provide information about which links
experienced impact and from what direction, allowing the system to
move the robot away from the collision site after impact detection.
Similar to the work by De Luca et al., Geravand et al. (2013) de-
veloped a detection and reaction system that does not require torque
sensing. One key advantage of this system is that it does not rely upon
knowledge of joint velocities, whose estimation often requires numerical
differentiation that introduces noise; rather, it makes use of motor current measure-
ments. This system, designed for industrial robots with a closed-control
architecture, also does not require a priori knowledge of the dynamic
model. It is not only capable of detecting collision, but also whether the
collision was intentional or non-intentional, in order to switch the robot
to a “collaborative mode” in which the robot accepts redirection from
the human as needed. This switch is performed through parallel use
of high- and low-pass filters on the motor currents, with the assump-
tion that non-intentional, hard impacts generate a high-frequency sig-
nal and intentional, soft impacts generate a low-frequency signal. The
system analyzes filtered signals and compares them with time-varying
thresholds: a signal exceeding a threshold after being run through the
high-pass filter indicates non-intentional impact, while intentional con-
tact is identified if at least one of the low-pass-filtered signals exceeds
a threshold and no thresholds are exceeded by signals run through the
high-pass filter.
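This decision logic lends itself to a compact sketch; the one-pole filters and fixed thresholds below are simplifications of the paper's filters and time-varying thresholds, and the numbers are illustrative.

```python
def lowpass(signal, alpha):
    """One-pole low-pass filter (alpha in (0, 1]); a stand-in for the
    low-pass filter applied to a joint's motor current samples."""
    y, out = 0.0, []
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

def classify_contact(currents, alpha, th_high, th_low):
    """Mirror the three-way logic described above: a high-frequency
    component over its threshold indicates a hard, non-intentional
    impact; otherwise a low-frequency component over its threshold
    indicates soft, intentional contact."""
    lp = lowpass(currents, alpha)
    hp = [x - y for x, y in zip(currents, lp)]   # crude high-pass
    if any(abs(v) > th_high for v in hp):
        return "non-intentional"
    if any(abs(v) > th_low for v in lp):
        return "intentional"
    return "no contact"
```

A sharp current spike passes the high-pass filter and is classified as a hard impact, while a sustained gentle push accumulates in the low-pass output and is classified as intentional contact.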
¹ In general, for continuous interaction such as that presented in work by Geravand et al. (2013), De Luca and Flacco (2012), and Erden and Tomiyama (2010),
some form of compliance control is needed. To learn more about recent advancements
and applications of compliance control in the field of HRI, we direct the reader to a
recent survey by Khan et al. (2014).
2.3 Summary
planner was also associated with greater satisfaction with the robot as
a teammate and higher perceived levels of safety and comfort among
participants, as evaluated through questionnaire responses. Although
the control-based system maintained physical safety in both modes, the
lower degree of perceived safety observed during operation in the stan-
dard mode could have a significant negative impact on psychological
safety.
The fact that, in certain scenarios, collision prevention and allevi-
ation through low-level control has been shown to lead to significantly
poorer safety and efficiency compared with human-aware motion plan-
ning provides significant motivation for utilizing motion planning as
a safety measure during HRI. This would involve the development of
motion planners that directly consider human presence and movement
when computing robot paths and motions, as well as motion planners
capable of reasoning on both geometric and task-based constraints and
supporting rapid, real-time replanning. These and other related topics
covered in this section are depicted in Figure 3.1.
3 Safety Through Motion Planning
Figure 3.1: Diagram depicting the constraints, features, and techniques discussed
in Section 3.
determined, but the results did suggest that these measures could serve
to discriminate between the motions. A motion planner developed by
Mainprice et al. (2011) considers similar HRI constraints — specifically,
human vision field, separation distance, and reachability — to drive a
cost-based, random-sampling search in order to plan safe robot motions
within cluttered environments. This planner incorporates the T-RRT
algorithm and local path optimizations to generate paths.
Approaching the problem from a different perspective, Morales
et al. (2015) evaluated safe motion planning for an autonomous ve-
hicle with a human passenger. The authors developed the Human-
Comfortable Path Planner (HCoPP), which takes human preferences
into account, such as how far from a wall people prefer to travel when
moving down a corridor, as well as visibility around corners when ap-
proaching a turn. The framework utilizes a three-layer cost map to
integrate the constraints responsible for balancing between optimizing
path length, as well as path comfort based on position and visibil-
ity. The authors assessed the effectiveness of the motion planner via
a user study in which the HCoPP yielded significantly more pleasant
and comfortable paths compared with a baseline motion planner, as
assessed through questionnaire responses.
3.3 Summary
This selection process can involve adjusting the action the robot will
take, the timing of when that action should be performed, or the mo-
tions the robot will perform in order to achieve the action. Work in
this field has incorporated a wide variety of methods, with researchers
often formulating probabilistic models and frameworks according to
which prediction could be made. Among these approaches, some rea-
son directly on low-level features derived from cameras, depth sensors,
and motion capture systems, while others reason on abstract represen-
tations of actions or task steps in order to predict future activities.
Beyond predicting the next human action, other work in this field fo-
cuses on predicting the timing of the actions, which allows a robot to
not only decide what action to take in order to maintain safety, but also
when it would be best to take that action. These topics are summarized
in Figure 4.1.
Figure 4.1: Diagram depicting the methods for human activity prediction discussed in Section 4.1. [Figure: Probabilistic Plan Recognition; Sequence Matching.]
Although the ability to predict human actions can be useful for generat-
ing safe robot motions and action plans, knowledge about what activity
will be performed or the end location a person is reaching or walking
toward does not provide information about which specific portion of a
shared human-robot workspace the human will occupy during the ex-
ecution of that predicted action. This additional information could be
leveraged to ensure safe robot motion by enabling the robot to reason
not only on the expected start and end locations, but on the entire
expected human motion.
The basis of human motion prediction can be divided into two dis-
tinct categories: goal intent and motion characteristics. For the former,
action prediction can often serve to inform motion-level prediction by
inferring humans’ goals, which, in combination with an appropriate
motion model, can be used to anticipate how a human will move as he
or she walks or reaches toward the predicted goal. In the latter cate-
gory, motion prediction is not linked to predicted goals or actions, but
instead utilizes techniques such as analysis of biomechanical predictors
or reasoning on features of natural motion. These methodologies for
human motion prediction are summarized in Figure 4.2.
Figure 4.2: Diagram depicting the methods for human motion prediction discussed in Section 4.2. [Figure: Manipulation; Navigation.]
the motion people could take while performing the predicted manip-
ulation actions. In the paper by Mainprice and Berenson (2013), the
system uses regressed motions derived via the GMR to compute swept
volumes that define human workspace occupancy during execution of
the predicted reach. In the work by Pérez-D’Arpino and Shah (2015),
on the other hand, prediction during a reaching motion is calculated
using multivariate Gaussian representations. In contrast to computing
a swept volume of the entire reaching motion, the models computed in
this work describe the mean and variance in the human’s hand posi-
tion for each of the possible goal positions at each time step, yielding
a prediction of human position as a function of time during the reach.
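A minimal version of goal inference with per-time-step Gaussian models might look as follows; the axis-aligned covariances, model layout, and numbers are assumptions for illustration, not the representation from the paper.

```python
import math

def log_likelihood(x, mean, var):
    """Log-density of point x under an axis-aligned Gaussian with the
    given per-dimension means and variances."""
    return sum(
        -0.5 * math.log(2.0 * math.pi * v) - (xi - mi) ** 2 / (2.0 * v)
        for xi, mi, v in zip(x, mean, var)
    )

def predict_goal(hand_pos, t, models):
    """models maps each candidate goal to (means, variances), indexed
    by time step; return the goal whose time-t distribution best
    explains the observed hand position."""
    return max(
        models,
        key=lambda g: log_likelihood(hand_pos, models[g][0][t], models[g][1][t]),
    )
```

With two candidate goals whose time-0 means are (0, 0) and (1, 1), an observed hand position of (0.1, 0.0) is assigned to the first goal, and the assignment can be revisited at every time step as the reach unfolds.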
In the previously mentioned work by Koppula and Saxena (2013b),
once the system predicts an action, it utilizes Bézier curves to define
potential trajectories of the human hand while performing the action.
An extension of this work by Jiang and Saxena (2014) allows for more-
detailed prediction of human motion during action execution. In this
work, a low-dimensional representation of a high-dimensional model
of human motion is computed through a Gaussian process. The low-
dimensional description of the motion is then incorporated as latent
nodes in a CRF representation to form a model called a Gaussian pro-
cess latent conditional random field. By learning a two-way mapping
between the high- and low-level representations, this approach allows
for computationally tractable prediction of high-dimensional motion
while maintaining the ability to reason on relationships between peo-
ple and objects in the given scene. In addition to being able to predict
human motions with higher fidelity, the compact representation of mo-
tion allows for reasoning on the physical plausibility of actions, thus
improving prediction.
Another manipulation motion prediction technique, developed by
Mainprice et al. (2015), was specifically designed for reaching motions
performed during known, collaborative tasks based on inverse opti-
mal control. In this work, example data of two people performing a
co-manipulation task were collected via a motion capture system. The
gathered trajectories, along with feature functions encoding smoothness
and distance relationships, were then used as inputs for the path integral approach to inverse optimal control.
are moved to new locations or removed from the scene altogether. Fur-
thermore, the presented approach uses an incremental planner that
enables real-time deployment by developing cost maps without taking
prediction of motion into account, and then iteratively plans a robot
trajectory while simulating human motion.
Figure 4.3: Diagram depicting the types of cues that can aid in prediction of robot motion, as discussed in Section 4.3. [Figure: Animation Principles; Mathematical Formulations.]
4.4 Summary
the robot’s appearance. Work in the latter category, on the other hand,
is concerned with discovering which social rules observed in human-
human interaction are important to follow during HRI, and the impact
of factors such as culture and personality traits.
Although adapting robot behavior is important for maintaining
psychological safety, it is also necessary to evaluate the effectiveness
of these adjustments in a principled way. Toward this objective, re-
searchers have developed three tools for assessing psychological safety:
questionnaires, physiological metrics, and behavioral metrics. Each of
these assessment methods possesses its own benefits and drawbacks,
making it imperative to understand which approach should be used
under what conditions.
All of these topics, including robot behavior adaptation and assess-
ment, are discussed in this section of the monograph, as depicted in
Figure 5.1.
5 Safety Through Consideration of Psychological Factors
Figure 5.1: Diagram depicting methods of assessing and potential factors influencing psychological safety, as discussed in Section 5. [Figure: assessment methods include Questionnaires, Physiological Metrics, and Behavioral Metrics.]
Although perceived safety and comfort were not the primary objec-
tives of all the works mentioned in the preceding sections, a significant
number take psychological safety into consideration by adjusting robot
behavior specifically to make interaction more comfortable for humans.
For example, the trajectory-forming strategy developed by Broquere
et al. (2008) mentioned in Section 2.1 limits the jerk, acceleration, and
velocity of the robot. The authors of that work noted that maintaining
comfortable interaction is one of the major constraints on controllers
developed for HRI.
In the work by Sisbot et al. (2010) outlined in Section 3.1, the au-
thors combined several methods to support comfortable HRI and noted
that physical safety alone is not sufficient for acceptable HRI, adding
that the robot must also avoid any actions that might induce fear, sur-
prise, or discomfort among humans. The authors expressed a similar sentiment in another paper focused on developing a safe and comfortable manipulation planner (Sisbot and Alami, 2012).
The motion planner developed by Mainprice et al. (2011), as out-
lined in Section 3.1, incorporates HRI constraints to generate robot motion
that results in comfortable reactionary human motions, but focuses
specifically on the physical comfort of the interaction — for example,
how far the human must travel from a resting position to a proposed
reach location, or how close he or she would be to exceeding biomechanical limits in doing so.
While the aforementioned authors took psychological safety into
account within their work, there also exists a body of research with the
primary focus of evaluating how robot behaviors affect psychological
safety factors, both in terms of studying and adjusting robot features,
as well as understanding the impact of social factors.
discloses to the robot). In the authors’ experiment, the robot was made
to appear either rude and selfish or pleasant and empathetic, while gaze
direction was altered to be either inclined toward or averted from the
participant’s face. They found that participants who did not like the
robot maintained greater physical and psychological distances when the
robot gazed at them. Similarly to the results of the work by Takayama
and Pantofaru, these findings indicate that proxemic behaviors should
be a function of personal characteristics, such as gender and prior expe-
rience with robots, but also that robots must be liked by their human
partners in order to enable comfortable close-proximity interaction.
While proxemic preferences are a function of personal charac-
teristics, these preferences can shift over time and can even vary
from culture to culture. In a 6-week study, Walters et al. (2011)
assessed participants’ proxemic preferences when interacting with
robots in homelike environments. They found that the majority of
adaptation of preferred interaction distances occurred within the first
few sessions, then stabilized afterwards. They also wrote that robot
malfunctions led to greater preferred distances, even if a safety concern
was not reported, and that humans approached the robot more closely
themselves than they allowed the robot to approach them when in a
physically constrained area.
Research has shown that cultural differences also play a role in
proxemic preferences. The seminal work by Hall (1966) indicated that
people of different cultures have significantly different standards for
maintaining personal space, and Joosse et al. (2014) empirically demon-
strated that such standards also manifest for HRI. In that study, the
authors observed that participants from different cultures (namely, the
United States, China, and Argentina) maintained different standards
for appropriate approach distances for a robot moving toward a small
group of people.
As proxemic preferences and other social standards can vary among
cultures and change over time, it is important for robots to be able to
reason about these conventions and adjust their behavior appropriately.
Frameworks such as the one developed by Kirby et al. (2009) can be
quite useful for this purpose: in this work, social conventions are treated as constraints in an optimization-based path planner.
5.2 Assessment of Psychological Safety
5.2.1 Questionnaires
5.3 Summary
In this section of the monograph, we discussed methods and tools for en-
suring psychological safety during human-robot interaction. In contrast
to the preceding sections, which addressed prevention of unwanted con-
tact or collisions (physical safety), the methods presented in this section
dealt with ensuring that interaction also feels safe and is not stressful.
We began by discussing how robot behavior adaptation can improve
psychological safety. First, we summarized works that investigated the
impact of features of the robot’s motion, such as speed or the degree
of distance maintained from the human, as well as the effect of the
robot’s physical appearance. Later, we highlighted the influence of so-
cial aspects on HRI, including how social conventions of human-human
interaction translate into interaction with robots, the impact that vio-
lation of social standards by robots has on psychological safety, and the
impact of personality traits, experience, and culture on these issues.
6 Future Directions
approach for minimizing harm from impact with a person’s chest may
not be the optimal approach for minimizing harm from impact with a
person’s back. Studying how to detect whether a pinch point collision
has transpired — and the best control response a robot should take in
order to minimize harm in such a scenario — is also an open research
problem. Developing a system capable of robustly detecting imminent
rigid impacts, allowing for the control response to be engaged earlier,
could be another topic for future research.
Ensuring safety during intentional human-robot contact is another
concept that warrants further research. While some works have allowed
for this interaction paradigm by, for example, detecting whether contact
is intentional or by limiting exerted forces during contact, techniques
that integrate detection of intentional contact and maintenance of safe
interaction by limiting the amount of power transmitted to the person
would be beneficial. In particular, it would be useful for newly devel-
oped methods to not rely upon engagement of a gravity-compensated
mode, as is often done currently, as switching to such a mode limits the
collaborative tasks that can be performed. If a person and robot are co-
manipulating an object, for example, and the robot’s elbow comes into
contact with the person’s side, it would be useful for the safety system
to monitor the amount of power transferred to the person and continue
the original guidance or motion until a safety threshold is reached. It is
also important to monitor the power transfer through both the contact
point and the object being manipulated.
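The power-monitoring idea reduces to a dot product at each monitored interface: instantaneous mechanical power is contact force times contact-point velocity, and the guided motion continues only while that power stays below a threshold. The function names and the scalar threshold below are hypothetical.

```python
def contact_power(force, velocity):
    """Instantaneous mechanical power (W) transferred through a
    contact point: the dot product of the contact force vector and
    the contact-point velocity vector."""
    return sum(f * v for f, v in zip(force, velocity))

def may_continue(force, velocity, p_max):
    """Allow the guided motion to continue only while the magnitude of
    the power flowing into the person stays below the safety
    threshold p_max; the same check would be applied at both the
    direct contact point and the co-manipulated object."""
    return abs(contact_power(force, velocity)) < p_max
```

For example, a 10 N contact force moving at 0.2 m/s along the same direction transfers 2 W, which a 5 W budget tolerates but a 1 W budget does not.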
While some work regarding this type of integration has already been
conducted, it has mostly applied to lightweight, back-drivable robotic
arms. It would be useful to extend this type of interaction to stiffer,
heavy-lift industrial robots; allowing for safe collaborative interaction
with such robots would enable a wide variety of useful applications
within the manufacturing domain.
tool. Injury knowledge could also be applied to direct motion away from
more vulnerable parts of the human body depending upon the tool be-
ing held by the robot (e.g., orienting a bright light source away from
a person’s eyes or keeping sharp objects away from people altogether
during motion).
Deriving the link between physiological signals and perceived safety and
comfort through signal processing methods or other analysis is also nec-
essary. Significant efforts to identify useful metrics from physiological
signals have been made outside of the field of HRI: for example, the
DARPA Augmented Cognition (AugCog) program focused on assess-
ing cognitive activity and performance in order to adapt information
systems (St. John et al., 2004). Although the results of such work may
not be directly applicable to HRI safety, the insights gained can be
useful for development of physiologically-based metrics of psycholog-
ical safety. Tools designed for collection and analysis of physiological
signals would facilitate integration of physiological measurements into
HRI safety methods, allowing for maintenance of psychological safety
across all HRI applications.
Also, greater consistency in evaluation of psychological factors
would be beneficial for the field. Regardless of whether a given work
specifically addresses the issue of safety, researchers should strive to
ensure the algorithms and systems they develop not only improve in-
teraction by some metric, but also do so without compromising psy-
chological safety. By collecting psychological safety data, researchers
can better understand the potential impact of their systems and are
better equipped to refine their work such that it is less likely to result
in harm. However, validated perceived safety and comfort metrics are
necessary in order for researchers to be able to do this effectively. While
some validated surveys for HRI do exist, such as the “Godspeed” or
“BEHAVE-II” questionnaires mentioned in Section 5.2.1, the scope of
these surveys is typically quite broad and extends far beyond assess-
ing perceived levels of safety and comfort. The relatively small number
of safety-relevant items included in surveys of this scope may not be
sufficient for comprehensive evaluation.
7 Conclusion
By applying and building upon the lessons learned from prior work,
the research community will be able to make HRI increasingly safe over
time, which will inherently decrease the risks associated with HRI.
Mitigating this risk will, in turn, lead to a more rapid transition of
HRI systems from research labs into homes, offices, and factories.
Acknowledgements
References