Article

A Trust-Assist Framework for Human–Robot Co-Carry Tasks

Department of Computer Science, Montclair State University, Montclair, NJ 07043, USA
* Author to whom correspondence should be addressed.
Robotics 2023, 12(2), 30; https://doi.org/10.3390/robotics12020030
Submission received: 2 February 2023 / Revised: 16 February 2023 / Accepted: 18 February 2023 / Published: 21 February 2023
(This article belongs to the Section Humanoid and Human Robotics)

Abstract

Robots are increasingly being employed for diverse applications where they must work and coexist with humans. Trust in human–robot collaboration (HRC) is a critical aspect of any shared-task performance for both the human and the robot. The problem of a human trusting a robot has been investigated by numerous researchers. However, a robot trusting a human, which is also a significant issue in HRC, is seldom explored in the field of robotics. Motivated by this gap, we propose a novel trust-assist framework for human–robot co-carry tasks in this study. This framework allows the robot to determine a trust level for its human co-carry partner. The calculation of this trust level is based on human motions, past interactions between the human–robot pair, and the human's current performance in the co-carry task. The trust level between the human and the robot is evaluated dynamically throughout the collaborative task, allowing the trust to change if the human performs false positive actions, which helps the robot avoid making unpredictable movements and injuring the human. Additionally, the proposed framework enables the robot to generate and perform assisting movements to follow human-carrying motions and paces when the human is considered trustworthy in the co-carry task. The results of our experiments suggest that the robot effectively assists the human in real-world collaborative tasks through the proposed trust-assist framework.

1. Introduction

Robots are being employed more and more in shared environments where they coexist with human workers. As such, they will also be completing tasks alongside and in cooperation with people [1,2]. Human–robot collaboration (HRC) will help to advance many technologies, such as assembly-line production. The goal is to remove as many of the barriers between robots and humans in these assembly facilities as possible [3]. Removing these barriers will allow humans and robots to collaborate more effectively and regularly. Robot and human behavior must also be taken into account when designing HRC tasks. Sauppé studied this by examining human behaviors, encoding those behaviors into a robot, and testing which behaviors produced the best outcomes [4]. This technique allows better communication between human and robot, which can also affect the trust that the human has in the robot. Another important goal of HRC is for the human collaborator to be able to easily predict what the robot is going to do next. One way to achieve this goal is to classify the robot's motion into three categories: functional, predictable, and legible [5]. A functional movement, by this definition, is any movement where the end effector reaches the goal without any collisions but is not necessarily easily predictable. A legible movement, on the other hand, is one where the end effector reaches the goal without collision but moves with a slightly exaggerated motion, so that the human collaborator can more easily interpret the robot's goal. Robots must also, to some extent, be able to anticipate human intention when working collaboratively [6]. By anticipating human intention, the task can be completed more quickly than if the robot simply follows a strict set of steps. This can be done by tracking human limbs and using neural networks to predict the future motion of the limb [7]. There must also be a way in which tasks are split between the robot and the human. One challenge in determining this split is human fatigue, which may alter performance. Although difficult, it has been shown that tasks can be optimally split between the human and the robot [8].
Human–robot co-carry or co-manipulation is the logical next step in the field of human–robot collaboration. The term human–robot co-carry describes the manipulation of an individual object or item by both the human and the robot simultaneously. One reason to pursue human–robot co-carry tasks is to reduce human fatigue [9]. Human fatigue in human–robot collaboration can lead to mistakes and possible injury. Allowing the robot to bear most of the force required to lift or move an object provides the human with more ergonomic carrying methods [10]. Another reason for human–robot co-carry is that there are some tasks that the robot cannot complete on its own and which require some human interaction [11]. One other situation where this type of co-carry might be useful is when an object is of a size or shape that makes it difficult for a human to carry alone [12,13]. This could mean anything that is very long or that has a shape which does not provide any effective grip locations for the human. Human–robot surgical operations are another major area of study within human–robot co-manipulation. This is discussed in a work in which human–robot co-manipulation is used in endoscopic endonasal surgery, where the surgeon needs a high level of dexterity in a small workspace, which can be improved with the use of robotic arms [14]. One other use of human–robot co-carry is through a swinging motion, in order to move an object to a higher position than would otherwise be possible [15]. This is done by moving in a pendulum-like motion to lift an object to a higher location.
Trust in human–robot collaboration is a critical aspect of any shared-task performance for both the human and the robot, according to Ososky's findings [16]. While limited research has gone into the concept of human–robot trust [17,18,19,20,21,22], there are several techniques that can be used to improve it. When a robot unavoidably encounters a failure of some kind, this will likely influence the level of the human's trust in the robot. In this work, unlike traditional studies on humans' trust in robots, and based on our preliminary findings [23], we investigate the trust between the human and the robot from a new perspective: robots' trust in humans. The contributions of this study can be summarized as follows: (1) we propose an easy-to-implement computational trust-assist (TA) framework for the robot to autonomously perceive its trust in human partners and accommodate assisting actions to ensure task efficiency in human–robot collaborative contexts; (2) the proposed trust-assist model is sensitive in real time to human false positive actions, such as an uneven or shaky motion during the human–robot co-carry process, which can enhance collaborative safety in human–robot teams.

2. Related Work

With more and more interaction between humans and robots happening in industrial and domestic environments, much research has been carried out to make these interactions as efficient and as safe as possible. Many different techniques have been used to achieve this goal [24,25,26,27,28]. One area of study in this topic is grasp planning in human–robot co-carry tasks. Tariq explored this topic and discussed a method the robot can use to grasp the co-carried object in such a way that it minimizes the force applied at that point [29]. Another research area has studied learning methods for teaching the robot the different tasks it may need to perform via human demonstrations [30,31]. In one such study, the robot learned where to grip the object and then switched to a different mode to co-carry it [32]. This could be very useful, since the robot's motion would not need to be manually coded, but demonstrated instead. There has also been research into using a nonholonomic constraint to simplify the movement of the object [33]. The researchers were able to verify that this kind of method is useful for human–robot co-carrying of large objects in 3D space. Neural networks have been used as well, to predict future human movements. One work outlined the use of motion and force with a neural network to anticipate the future motion of the human for the robot's motion planning [34]. Another work discussed the use of both force feedback and computer vision to collaboratively manipulate deformable objects such as a piece of cloth [35]. The use of both force feedback and computer vision allowed this method to work well for non-rigid objects.
Many researchers have investigated the definition of trust, since it is a subjective term. One researcher first defined a set of epistemic properties, such as sincerity, credibility, and validity, and then defined trust based on these properties. One such example is that agent A may trust agent B if agent A believes that agent B is being sincere. In addition, trust has also been described as a strong belief by one agent that another agent is being truthful or correct [36].
Several studies on human-trusting-robot in HRC have been conducted recently. Wang stated in his paper that, for a robot to perform optimally, the human operating that robot must be able to trust the robot's capability to perform the necessary task [37]. If a robot continuously fails to complete a specific task, the human counterpart will most likely begin to lose trust in the robot. Knowing this, any major robot failures should be corrected immediately, to regain the trust of the human. Billings laid out three main questions that need to be investigated when researching human–robot trust: What is being measured? How can it be measured? And, finally, when should it be measured? These are the three main questions that need to be answered when developing a good model of trust [38]. Rossi explained that trust, once broken, can be a very difficult thing to fix [39]. This would be another factor that would need to be investigated when developing human trust in robots in HRC tasks. Stormont discussed some of the reasons that humans currently do not trust robots as much as they trust other humans when working collaboratively or performing a task for one another. The first factor he mentioned was the robot's poor dependability when performing a task, meaning that the robot might fail more often than a human performing that same task would. The second issue impacting humans' trust in robots is that robots will sometimes act in unexpected ways [40]. Human trust in the robot's abilities will also affect how the robot is used. For example, if a robot is not fully trusted, it may not be used to the best of its ability or may not be used appropriately [41]. Another researcher modeled human trust in robots as a latent variable in the trust model [42]. Trust is treated as a latent variable because it takes time for humans to gain trust in something like a robot, and there will almost always be some uncertainty when interacting with a new robot. Another important concept in human trust in robots is overtrust [43]. If a human trusts the robot too much, it could lead to the robot being given tasks that it is unable to complete. This would then cause task failure and lead the human to lose trust in the robot.
However, robot-trusting-human in HRC is seldom explored in the field of robotics, even though it is an important topic that deserves further study. Robots' trust in humans in HRC means that the robot performing a task should understand and believe that the task demonstrated by the human is correct and error free. As stated in the work [42], human–robot trust is an "approximation of the interaction history" and has a bidirectional property [44]. This means that, through several human–robot interactions, humans' trust in robots and robots' trust in humans are built based on the success or failure of the tasks performed. Robots' trust in humans also plays a significant role in human–robot collaborative tasks. If the human performs incorrect actions in the task, the robot will decide not to trust the human, based on its trust level, which is built on the interaction history and the human's present performance. For example, if the human performs false positive actions but the robot still trusts him/her, the robot may execute unpredictable movements when interacting with the human and cause injury. Therefore, this trust should be measured in real time, for the safety of both the human and the robot. In addition, trust-based robot-motion planning can effectively improve task quality and reduce uncertain failures in human–robot collaborative contexts. Therefore, developing a computational model of a robot trusting a human is a necessary problem to solve in HRC.
Motivated by this problem, we propose a novel trust-assist framework for human–robot teams in collaborative tasks. By taking advantage of the developed computational robot-trusting-human (RTH) model, the robot can autonomously perceive its trust in the human in real time. The calculation of this trust level is based on human motions, past interactions between the human–robot pair, and the human's current performance in the collaborative task. A robot-assisting-human (RAH) model is then established for the robot to actively accommodate and assist its human partner, based on this trust, in human–robot collaborative contexts. If the trust level is too low, the human motion will not be followed by the robot. By enabling the robot to perform assisting movements only when the human motion is considered trustworthy, we are able to effectively achieve human–robot co-carry tasks.

3. Modeling Methodology

3.1. Overview of Trust-Assist Framework

The overview of the TA framework for human–robot co-carry tasks is shown in Figure 1. This framework is composed of three parts: data acquisition and processing, the trust-assist model, and human–robot co-carry. The overarching vision of the proposed framework is to enable robots to understand and assist humans in human–robot co-carry tasks, based on the trust level derived from our developed trust-assist model.
Within the proposed framework, human-carrying-motion data will first be acquired and processed. In human–robot collaboration, the robot should understand the intention of the human in real time. To this end, a camera is placed above the workspace to capture the entire collaborative workspace. This video feed is processed in real time, so that it can be used for robot-motion planning. The raw video is processed by the Lucas–Kanade (L–K) optical-flow algorithm [45], so that the robot can understand human-carrying intentions. Both the direction and speed of the human motion can be calculated via this algorithm.
The TA model contains two sub-models: the RTH model and the RAH model. After receiving the parameterized human-carrying motions from the L–K algorithm, the RTH model is used to evaluate the trust level of these movements. This trust level works as the confidence coefficient of the robot in its human partner and depends on whether the human's performance is trustworthy. In this study, a trustworthy motion is characterized as a smooth and continuous motion, while an untrustworthy movement is an uneven or shaky motion. If the trust level continues to increase in the human–robot co-carry process, the RAH model will be utilized for the robot to generate appropriate assisting actions to adapt to human-carrying motions.
In the human–robot co-carry stage, robot-motion planning will take place based on the captured human-motion data, evaluated trust level, and generated assisting actions in the TA model. The TA model is able to determine which precise movements by the human should be trusted and which should not be trusted. If the human-carrying motion is determined to be trustworthy, the robot path planner will be invoked to schedule exact assisting carrying motions for the robot, based on the generated assisting action information from RAH in the co-carry task. This planned path data is then used in the robot-control commands, to execute co-carrying movements.

3.2. Data Acquisition

To complete human–robot co-carry tasks, the human and the robot must be able to communicate in some way. For this to be achieved, the robot should be able to interpret the human's motion and follow it, so that the co-carried object is moved to the desired position. As shown in Figure 2, the human motion is captured through a camera mounted above the collaborative workspace. While the human and robot are both holding the co-carried object, the human can begin to move the object in the desired direction within the collaborative workspace.
This video stream is then processed using the Lucas–Kanade optical-flow algorithm, through which the human-carrying motion can be interpreted and further used by the robot. Before the data is sent to the local robot controller, it is first passed through the trust-assist model, to determine the confidence coefficient of the motion. The human-carrying motion is characterized as numeric data that give the direction and speed of the motion. This is important, since the robot must match not only the direction of movement but also the speed at which the human is moving. All of this data is then sent to the local robot controller to be further processed for robot-motion planning.

3.3. Formulation of Human-Carrying Motions

In this study, the Lucas–Kanade (L–K) optical-flow algorithm is used to parametrize human motions in the human–robot co-carry process. The algorithm works under three assumptions: (a) brightness constancy during the human–robot co-carry process; (b) temporal persistence in the human–robot co-carry task; (c) spatial coherence in the human–robot co-carry task. Brightness constancy means that, for any given pixel in a frame, the brightness intensity of that pixel will be the same in the next frame. The second assumption, temporal persistence, means that the movement from one frame to the next is slow and gradual. The third assumption, spatial coherence, means that the pixels around a point have the same motion, since they are considered to be on the same surface.
Suppose the human and the robot co-carry an object on the $xy$ plane in a shared workspace. The goal of the L–K optical-flow algorithm is to calculate the human-carrying movement of points between two frames, at time $t$ and $t + \Delta t$. Let the brightness intensity at a point be $I(x, y, t)$ and let the point move by $\Delta x$, $\Delta y$ over time $\Delta t$. Using the first assumption of brightness constancy, we can obtain the following equation:
$$I(x, y, t) = I(x + \Delta x, y + \Delta y, t + \Delta t) \tag{1}$$
By using Taylor series expansion and assuming movements are small, the following equation is derived from Equation (1)
$$I(x, y, t) + I_x \Delta x + I_y \Delta y + I_t \Delta t = I(x + \Delta x, y + \Delta y, t + \Delta t) \tag{2}$$
which can be further reduced to
$$I_x V_x + I_y V_y + I_t = 0 \tag{3}$$
where $I_x$, $I_y$, and $I_t$ represent the derivatives of the image at the point $(x, y)$ at time $t$, and $V_x$ and $V_y$ represent the $x$ and $y$ components of the velocity of $I(x, y, t)$, respectively.
Since this equation has two unknowns, it cannot be solved on its own. However, by using the third assumption of spatial coherence, the local group of pixels can be taken to have the same velocity, and the equation can be solved. By utilizing the spatial-coherence constraint, a system of equations can be set up for the pixels in the surrounding $n \times n$ region. Then, using a least-squares approach, a good estimate of the optical flow is determined. The optical-flow equations are created for all the pixels in the surrounding region, and the velocity vector $V = (V_x, V_y)$ must satisfy the following system of equations:
$$\begin{cases} I_x(x_1, y_1) V_x + I_y(x_1, y_1) V_y = -I_t(x_1, y_1) \\ I_x(x_2, y_2) V_x + I_y(x_2, y_2) V_y = -I_t(x_2, y_2) \\ \quad \vdots \\ I_x(x_n, y_n) V_x + I_y(x_n, y_n) V_y = -I_t(x_n, y_n) \end{cases} \tag{4}$$
This system of equations can then be rewritten in matrix notation:
$$\begin{bmatrix} I_x(x_1, y_1) & I_y(x_1, y_1) \\ I_x(x_2, y_2) & I_y(x_2, y_2) \\ \vdots & \vdots \\ I_x(x_n, y_n) & I_y(x_n, y_n) \end{bmatrix} \begin{bmatrix} V_x \\ V_y \end{bmatrix} = -\begin{bmatrix} I_t(x_1, y_1) \\ I_t(x_2, y_2) \\ \vdots \\ I_t(x_n, y_n) \end{bmatrix} \tag{5}$$
Since this system of equations has more equations than unknowns, it is overdetermined and can be solved by the least-squares method. Therefore, the velocity for each pixel can be calculated by the following equation:
$$V = (A^{T} A)^{-1} A^{T} b \tag{6}$$
where
$$V = \begin{bmatrix} V_x \\ V_y \end{bmatrix}, \quad A = \begin{bmatrix} I_x(x_1, y_1) & I_y(x_1, y_1) \\ I_x(x_2, y_2) & I_y(x_2, y_2) \\ \vdots & \vdots \\ I_x(x_n, y_n) & I_y(x_n, y_n) \end{bmatrix}, \quad \text{and} \quad b = -\begin{bmatrix} I_t(x_1, y_1) \\ I_t(x_2, y_2) \\ \vdots \\ I_t(x_n, y_n) \end{bmatrix}$$
By taking advantage of Equation (6), we can obtain the velocity data of human-carrying motions, which include magnitude and direction. The velocity magnitudes at time $t$ along the $x$ axis and $y$ axis are given by $V_x(t)$ and $V_y(t)$, respectively. The direction of the human motion can be characterized as an orientation-angle matrix $O(t)$ derived from
$$O(t) = \arctan \frac{V_y(t)}{V_x(t)} \tag{7}$$
where each element of $O(t) \in [-\pi, \pi]$. It can be seen that, based on the L–K algorithm, the human-carrying-motion information can be quantitatively and universally evaluated in real time during the human–robot collaboration process.
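For illustration, the following Python sketch (not the authors' implementation) uses OpenCV's pyramidal Lucas–Kanade tracker to estimate per-point velocities between consecutive frames and to derive the mean orientation angle $\bar{O}(t)$ used later in Equation (8); the camera index, frame rate, and feature-tracking parameters are assumed values.

import cv2
import numpy as np

FPS = 30.0                      # assumed camera frame rate, so dt = 1 / FPS
cap = cv2.VideoCapture(0)       # assumed index of the overhead workspace camera

feature_params = dict(maxCorners=100, qualityLevel=0.3, minDistance=7)
lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

ok, old_frame = cap.read()
old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)
# Assumes trackable corner features are found on the co-carried object.
p0 = cv2.goodFeaturesToTrack(old_gray, mask=None, **feature_params)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pyramidal L-K step: solves the windowed system of Equation (6) per point.
    p1, status, _ = cv2.calcOpticalFlowPyrLK(old_gray, gray, p0, None, **lk_params)
    good_new, good_old = p1[status == 1], p0[status == 1]

    # Per-point velocities V_x, V_y (pixels per second) and orientations O(t).
    vel = (good_new - good_old) * FPS
    vx, vy = vel[:, 0], vel[:, 1]
    orientation = np.arctan2(vy, vx)        # each element lies in [-pi, pi]
    o_bar = float(np.mean(orientation))     # arithmetic mean used in Equation (8)
    speed = float(np.mean(np.hypot(vx, vy)))
    print(f"O_bar = {o_bar:+.2f} rad, mean speed = {speed:.1f} px/s")

    old_gray, p0 = gray, good_new.reshape(-1, 1, 2)

In a deployment, the pixel velocities would additionally be mapped to the robot frame through a camera calibration, which is omitted here.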

3.4. Robot-Trusting -Human Model

Definition 1.
In human–robot interaction, the trust of robot $R$ in human $H$ is denoted as $T_{RH}$, which is a real number with $T_{RH} \in [0, 1]$. $T_{RH} = 0$ means that the human and the robot have not interacted before or that the robot completely distrusts the human in the human–robot interaction process. $T_{RH} = 1$ means that the robot definitely trusts the human in the interaction process.
Definition 2.
In human–robot co-carry tasks, the human-carrying intention, $C_H$, at time $t$ is defined as
$$C_H(t) = \begin{cases} \dfrac{\bar{O}(t)}{\pi}, & \text{human moves and human–robot interaction} \neq \varnothing \\ \text{null}, & \text{human stops or human–robot interaction} = \varnothing \end{cases} \tag{8}$$
where $C_H(t)$ represents the human's intention to move the object in a direction in human–robot co-carry tasks, and $\bar{O}(t)$ is the arithmetic mean of $O(t)$, with $\bar{O}(t)/\pi \in [-1, 1]$.
If the human and the robot have not interacted before, or the human stops during the human–robot co-carry process, the human-carrying intention $C_H = \text{null}$.
In order to have a uniform interval with $T_{RH}$ for further mathematical combination in the computational-trust model, the human-carrying intention is transformed as
$$\tilde{C}_H(t) = \begin{cases} \dfrac{1}{2}\left(\dfrac{\bar{O}(t)}{\pi} + 1\right), & \text{human moves and human–robot interaction} \neq \varnothing \\ \text{null}, & \text{human stops or human–robot interaction} = \varnothing \end{cases} \tag{9}$$
where $\tilde{C}_H(t) \in [0, 1]$, and $\tilde{C}_H(t)$ is taken as 0 in the null case.
Furthermore, the trust of the robot in the human, $T_{RH}$, is defined as a weighted function that entails the trust history and the present human performance in the human–robot interaction process, which can be expressed as
$$T_{RH}(t) = 1 - \frac{1}{w_d \left[ w_h T_{RH}(t-1) + w_p \tilde{C}_H(t) \right] + 1} \tag{10}$$
where $w_h \in [0, 1]$ denotes the weight factor for the trust history, $w_p \in [0, 1]$ denotes the weight factor for the present human performance, $w_h + w_p = 1$, and $w_d$ is the discriminant coefficient, which is defined as
$$w_d = \begin{cases} 1, & \left| \bar{O}(t) - \bar{O}(t-1) \right| \leq \dfrac{\pi}{2} \\ 0, & \left| \bar{O}(t) - \bar{O}(t-1) \right| > \dfrac{\pi}{2} \ \text{or} \ C_H(t) = \text{null} \end{cases} \tag{11}$$
In Equation (10), $w_h$ and $w_p$ can be adjusted to appropriate values to give more weight to the trust history or to the present human performance when evaluating the online robot-to-human trust, according to different task settings in human–robot collaboration. For example, if the human and robot have not worked collaboratively before, the proposed framework may give more weight to the present human performance. This is because the framework does not have any trust history for the human–robot pair and cannot make any assumptions about the trustworthiness of the human. By doing this, the framework is able to learn information about the human's movements, which can be used in future collaborations. Conversely, if the human and robot have worked on many previous collaborative tasks, the proposed framework may give more weight to the trust-history component, since the framework can make more determinations about the human's trustworthiness based on the trust history.
In addition, $w_d$ is utilized to avoid unpredictable safety issues caused by human false positive actions in human–robot collaboration. False positives may occur when the human performs carrying motions. For instance, if the human does not want to carry the object but he/she produces shaky motions or similar movements, the robot may consider this kind of motion as a carrying intention and perform corresponding co-carrying motions. In this situation, the human may be injured, or the object may be damaged by the robot. Therefore, the discriminant coefficient $w_d$ is designed to eliminate the faults caused by false positives. When the human performs smooth and continuous carrying motions, the orientation angle detected by the L–K algorithm will satisfy $|\bar{O}(t) - \bar{O}(t-1)| \leq \pi/2$, and $w_d$ in Equation (10) will automatically be set to 1. When the human's hand shakes or he/she performs similar motions, the detected orientation angle will satisfy $|\bar{O}(t) - \bar{O}(t-1)| > \pi/2$ and $w_d$ will be assigned 0, so the trust value in Equation (10) will be 0, which means that the robot distrusts the human during these false positive motions. If the human stops ($C_H(t) = \text{null}$) during the human–robot collaboration process, $w_d$ is reset as well. It can be seen that, by taking advantage of $w_d$, even a single false positive motion by the human will immediately remove the robot's trust in its human partner, which prevents the robot from executing undesired motions and ensures human–robot collaborative safety.
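The RTH update of Equations (8)–(11) reduces to a few lines of Python. The sketch below assumes the reconstructed form of Equation (10) given above, $T_{RH}(t) = 1 - 1/(w_d[w_h T_{RH}(t-1) + w_p \tilde{C}_H(t)] + 1)$, and the weight values and sample orientation angles are illustrative only.

import math

W_H, W_P = 0.5, 0.5      # illustrative weights for trust history and present performance

def carry_intention(o_bar):
    """C_H tilde of Equation (9): maps the mean orientation in [-pi, pi] to [0, 1]."""
    if o_bar is None:                      # human stops or no interaction: the null case
        return 0.0
    return 0.5 * (o_bar / math.pi + 1.0)

def discriminant(o_bar, o_bar_prev):
    """w_d of Equation (11): 0 for a false positive (shaky) motion or a stop, 1 otherwise."""
    if o_bar is None or o_bar_prev is None:
        return 0.0
    return 1.0 if abs(o_bar - o_bar_prev) <= math.pi / 2 else 0.0

def update_trust(t_prev, o_bar, o_bar_prev):
    """T_RH of Equation (10), assuming the form 1 - 1 / (w_d [w_h T + w_p C~] + 1)."""
    w_d = discriminant(o_bar, o_bar_prev)
    return 1.0 - 1.0 / (w_d * (W_H * t_prev + W_P * carry_intention(o_bar)) + 1.0)

# The first frame has no history (w_d = 0); smooth motion then builds trust,
# and a sudden orientation change larger than pi/2 drops the trust back to 0.
t_rh, prev = 0.0, None
for o in [0.10, 0.12, 0.09, 3.00, 0.10]:
    t_rh = update_trust(t_rh, o, prev)
    prev = o
    print(f"O_bar = {o:+.2f} rad -> T_RH = {t_rh:.3f}")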

3.5. Robot-Assisting-Human Model

When the human performs carrying motions in the human–robot shared workspace, the displacement of the human's hand from the start point to time $t$ can be represented by a six-element vector $(x_H(t), y_H(t), z_H(t), \phi_H(t), \theta_H(t), \psi_H(t))$, where $x$, $y$, and $z$ describe the position of the human's hand in the 3D workspace, and $\phi$, $\theta$, and $\psi$ represent the rotation angles of the human's hand. As shown in Figure 3, $\phi$ is the roll rotation about the $x$-axis, $\theta$ is the pitch rotation about the $y$-axis, and $\psi$ is the yaw rotation about the $z$-axis. Since the human and the robot co-carry an object on the $xy$ plane, the roll rotation is 0, the pitch rotation is 0, the yaw rotation can be evaluated by $\bar{O}(t)$, and $z$ is a fixed value approximately equal to the vertical distance, $h$, between the workspace floor and the human's hand. Therefore, the displacement of the human's hand at time $t$ can be evaluated by
$$\begin{cases} x_H(t) = \bar{V}_x(t)\, t \\ y_H(t) = \bar{V}_y(t)\, t \\ z_H = h \\ \phi_H(t) = 0 \\ \theta_H(t) = 0 \\ \psi_H(t) = \bar{O}(t) \end{cases} \tag{12}$$
where $\bar{V}_x(t)$ and $\bar{V}_y(t)$ denote the arithmetic means of $V_x(t)$ and $V_y(t)$, respectively.
From Equation (10), it can be observed that if false positive motions occur or the human stops moving in human–robot co-carry tasks, the trust of the robot in its human partner, $T_{RH}$, will not increase, and the robot does not need to execute any co-carrying motions. In this study, we assume the human will stop when the robot stops, if the robot no longer trusts the human in the co-carry process. When the human–robot team maintains fluent collaboration, $T_{RH}$ continues to increase; during this period, $T_{RH}$ is a monotonically increasing function. Therefore,
$$\frac{dT_{RH}(t)}{dt} > 0 \tag{13}$$
The robot should co-carry the object at the same pace as its human partner if it trusts him/her. Suppose the displacement of the human's hand between time $t$ and time $t + \Delta t$ in the carrying motions is $\Delta S_H(t)$, and the displacement of the robot's end-effector in the assisting co-carrying motions is $\Delta S_R(t)$. The robot-assisting-human problem can then be characterized as a constrained-optimization problem:
$$M_R = \arg\min_{M_R} \left\| \Delta S_R(t) - \Delta S_H(t) \right\| \quad \text{s.t.} \quad \frac{dT_{RH}(t)}{dt} > 0 \tag{14}$$
where $M_R$ represents the desired robot-assisting co-carrying motion, which includes $x$, $y$, $z$, $\phi$, $\theta$, and $\psi$. The derivative of the trust in Equation (13) works as the constraint. If Equation (14) does not meet this constraint condition, no assisting co-carrying motions will be generated for the robot. Therefore, the proposed model is able to create a safe work environment for human–robot co-carry tasks.
Furthermore, suppose that at time $t$ the initial position of the human's hand is $P_H^I(t)$ and that, after performing the carrying motion, the position of the human's hand at time $t + \Delta t$ is $P_H^P(t)$. Based on the L–K approach, the displacement of the human's hand, $\Delta S_H(t)$, can be evaluated using the location vector $P_H^P(t) - P_H^I(t)$, which can be described by a linear combination of basis vectors as
$$\Delta S_H(t) = P_H^P(t) - P_H^I(t) = \left( \Delta x_H(t),\ \Delta y_H(t),\ \Delta z_H(t),\ \Delta \phi_H(t),\ \Delta \theta_H(t),\ \Delta \psi_H(t) \right) \tag{15}$$
Similarly, the displacement of the robot's end-effector, $\Delta S_R(t)$, in the assisting co-carrying motion can be expressed by
$$\Delta S_R(t) = \left( \Delta x_{M_R}(t),\ \Delta y_{M_R}(t),\ \Delta z_{M_R}(t),\ \Delta \phi_{M_R}(t),\ \Delta \theta_{M_R}(t),\ \Delta \psi_{M_R}(t) \right) \tag{16}$$
In order to obtain the optimized result for Equation (14), the robot-assisting co-carrying motion, $M_R$, should minimize the approximation error between $\Delta S_R(t)$ and $\Delta S_H(t)$. Therefore, the waypoints on the desired robot-assisting co-carrying motion can be derived by
$$\left( \Delta x_{M_R}(t),\ \Delta y_{M_R}(t),\ \Delta z_{M_R}(t),\ \Delta \phi_{M_R}(t),\ \Delta \theta_{M_R}(t),\ \Delta \psi_{M_R}(t) \right) = \left( \Delta x_H(t),\ \Delta y_H(t),\ \Delta z_H(t),\ \Delta \phi_H(t),\ \Delta \theta_H(t),\ \Delta \psi_H(t) \right) \quad \text{s.t.} \quad \frac{dT_{RH}(t)}{dt} > 0 \tag{17}$$
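A minimal Python sketch of the RAH mapping in Equations (12)–(17) is given below. The names Displacement, human_displacement, and assisting_waypoint are hypothetical, the velocities are assumed to be already expressed in the robot frame (the pixel-to-metre calibration is omitted), and the trust constraint $dT_{RH}/dt > 0$ is approximated by comparing two consecutive trust values.

from dataclasses import dataclass

@dataclass
class Displacement:
    """Six-element displacement (x, y, z, roll, pitch, yaw) as in Equations (15) and (16)."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    roll: float = 0.0
    pitch: float = 0.0
    yaw: float = 0.0

def human_displacement(vx_bar, vy_bar, o_bar, o_bar_prev, dt):
    """Delta S_H(t) over one control interval, following Equations (12) and (15).
    Planar co-carry: z, roll, and pitch stay fixed; yaw tracks the change in O_bar."""
    return Displacement(x=vx_bar * dt, y=vy_bar * dt, yaw=o_bar - o_bar_prev)

def assisting_waypoint(ee_pose, delta_s_h, trust_now, trust_prev):
    """Equation (17): mirror the human displacement only while the trust is increasing."""
    if trust_now <= trust_prev:            # dT_RH/dt > 0 violated: no assisting motion
        return None
    x, y, z, roll, pitch, yaw = ee_pose
    return (x + delta_s_h.x, y + delta_s_h.y, z + delta_s_h.z,
            roll + delta_s_h.roll, pitch + delta_s_h.pitch, yaw + delta_s_h.yaw)

# Example: over a 0.1 s interval the hand moves at (0.05, 0.02) m/s in the robot frame.
d = human_displacement(0.05, 0.02, 0.10, 0.08, dt=0.1)
print(assisting_waypoint((0.4, 0.0, 0.3, 0.0, 0.0, 0.0), d, trust_now=0.21, trust_prev=0.18))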

3.6. Robot-Path-Planning Algorithm

After obtaining the executable waypoints based on the TA model, we compared multiple robot-path planners, such as RRT [46], PRM [47], RRT-Connect [48], FMT [49], LazyPRM [50], and EST [51], by running these waypoints, and found that the RRT-Connect path planner is the most effective one, with the least execution time. Therefore, the RRT-Connect path-planning algorithm is employed for the robot in the human–robot co-carry process. As shown in Figure 4, the RRT-Connect path planner works by incrementally constructing two rapidly exploring random trees rooted at two neighboring executable waypoints [48,52]. The trees explore the surrounding space and move towards each other via a greedy function, and the path planning between the neighboring waypoints is complete once the two trees connect. The robot path-planning algorithm for the co-carry task is illustrated in Algorithm 1, where RRT_CONNECT_PLANNER denotes the RRT-Connect path-planning function, the details of which can be found in [48].
Algorithm 1. Robot path planning
Input: The executable waypoints generated via trust-assist model
Output: Planning paths
  1.  ni ← the number of executable waypoints;
  2.  path_go ← 0;
  3.  for j in ni do
  4.    RRT_CONNECT_PLANNER($P_j$, $P_{j+1}$);
  5.    if CONNECT($P_j$, $P_{j+1}$) = true then
  6.      path_go ← PATH($P_j$, $P_{j+1}$);
  7.    else
  8.      path_go ← 0;
  9.      skip to 4;
  10.   end if
  11.   add path_go to the robot controller;
  12. end for
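In practice, Algorithm 1 can be realized on top of MoveIt's Python interface, which exposes OMPL's RRT-Connect planner. The sketch below is an assumed setup rather than the authors' code: the group name "panda_arm", the planner ID "RRTConnectkConfigDefault", the velocity scaling, and the retry limit are all assumptions, and error handling beyond the skip-back step of Algorithm 1 is omitted.

import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("co_carry_path_planner")

group = moveit_commander.MoveGroupCommander("panda_arm")   # assumed planning-group name
group.set_planner_id("RRTConnectkConfigDefault")           # OMPL RRT-Connect planner
group.set_max_velocity_scaling_factor(0.2)                 # conservative speed for co-carry

def plan_and_execute(waypoints, max_retries=3):
    """Algorithm 1: plan between neighboring executable waypoints with RRT-Connect,
    re-planning a segment (the 'skip to 4' step) when the two trees fail to connect."""
    for target in waypoints:    # target: geometry_msgs/Pose or [x, y, z, roll, pitch, yaw]
        for _ in range(max_retries):
            group.set_pose_target(target)
            if group.go(wait=True):        # plan and send path_go to the robot controller
                break
        group.stop()
        group.clear_pose_targets()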

4. Experimental Results and Analysis

4.1. Experimental Setup

In this study, as shown in Figure 5, the experimental platform involves a collaborative robot, a web camera, a workstation, a target object, and a shared workspace. The robot used in this study is the Franka Emika Panda, a 7-DoF collaborative robot with a robotic arm, a two-finger parallel gripper, a pilot user interface, and a Franka Control Interface (FCI) controller [53]. The robot can work safely with humans, much as humans cooperate with each other in collaborative tasks. A Dell desktop with an Intel Core i7-9700 and 16 GB of memory serves as the workstation for human-motion image processing, trust-assist-model development, and robot-path planning. A 1/10-scale vehicle is used as the target object in the human–robot co-carry task. As presented in Figure 5, the lateral-movement co-carry, one of the typical types of co-carry tasks in human–robot collaborative contexts, is conducted by the human–robot team. The lateral movement is a motion in the left-to-right (or right-to-left) direction of the human in the co-carry task. This type of movement allows us to primarily test the trust-assist framework, since the direction of movement does not necessarily matter with regards to trust. In addition, what we learn here about the trust-assist framework for the lateral-movement co-carry task will be a stepping stone toward more complex and large-scale applications.
The Robot Operating System (ROS) is utilized for robot system control [54,55]. ROS is an open-source framework for large-scale cross-platform maneuvering and communication, which includes more than 2000 packages and can provide specialized functionality in different application areas. In addition, the MoveIt! package running in the ROS framework is also employed. MoveIt! is a set of packages and tools for performing dynamic manipulation in ROS, and it contains state-of-the-art software for the development of the robot path-planning algorithm [56]. The algorithm of the proposed model and the robot-control programs are developed in Python. To control the robot in the human–robot co-carry task, the control commands are sent to the libfranka interface, a ROS package that allows the Panda to communicate with the FCI controller. The FCI provides the current robot states and enables the robot to be directly controlled by the commands derived from the trust-assist framework.

4.2. Human-Carrying-Motion Recognition Results

Figure 6 depicts the human-carrying-motion recognition process. Figure 6a shows the starting position of the co-carry task with no motion. In the starting position, the human holds one side of the co-carry vehicle stationary, and the robot holds the other side. Since the human is not moving the object, the robot does not need to produce any assisting actions and the object remains stationary. In Figure 6b,c, the human is moving the object from the left side to the right. This is the lateral direction of movement, since it is side to side, from the human’s perspective. Figure 6d,e present human motions in the right-to-left lateral motion. Although we only test lateral human motion in this experiment, this model can be easily extended to detect both longitudinal (forward and back) motion as well as a combination of lateral and longitudinal motions. Once the human begins moving in the desired direction, they do not stop until reaching the desired position. This ensures that the motion continues in a single direction and does not drift backward, in which case the robot may begin providing assisting motion opposite to the intended human motion. This is not detrimental to the model, but it could cause the co-carry task to take longer than intended. Each of the arrows in the images represents the direction and rate of motion of a single point on the co-carried object. Figure 6f shows the ending position of the co-carry task. Once the human reaches the desired ending location of a movement, they can either end the co-carry task, or reverse direction and continue the co-carry task. Since this could also be extended to longitudinal motion and a combination of motions, an end location could also represent a point at which the human will change from lateral to longitudinal motion or vice versa.
Through Figure 6, it can be seen that the human-carrying motions are successfully recognized using the L–K optical-flow algorithm. Each of the optical-flow arrows represents a point on the object that is being tracked. If a point is moved out of view of the camera, the algorithm will attempt to find it again, once it is back in view of the camera. By tracking these points, we are able to calculate an accurate representation of the human-carrying motions. In our implementation, using the arithmetic mean of human motions in Equation (8), we also filter out some extreme outliers that may occur due to tracking errors. This allows the proposed model to still determine the direction of human motion even if a few points are tracked incorrectly. This can be seen in Figure 6c, where the bottom-left-most arrow is pointing down instead of to the right, like the rest of the points. Since there are many points being tracked, this does not significantly affect the overall-motion recognition. These kinds of tracking errors are easily overcome and do not affect the efficacy of the proposed model. Figure 6a,f also show that the proposed model does not recognize the slight shakiness in a human hand while holding an object, since no optical-flow arrows are seen in these images. This means that any minor natural unsteadiness in a human hand while holding the co-carry object does not affect the overall performance of our model. Similarly, this also allows the proposed model to disregard any human false positive movement that may occur while the human tries to hold the co-carry object stationary.
With this experiment we are able to verify that the human motions can be accurately and consistently parameterized in the co-carry task, through the proposed approach. We also show that our model can assist in filtering out small unintentional movements. These results provide a solid base for the trust-assist model to build on.
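As a small numerical illustration (the angle values are made up, not taken from the experiments), the Python snippet below shows why averaging the per-point orientations in Equation (8) keeps the estimated carrying direction close to the true one even when a single point is mis-tracked, as with the stray arrow in Figure 6c.

import numpy as np

# Nineteen points moving roughly to the right (angles near 0 rad) and one
# mis-tracked point reported as moving downward (about -pi/2 rad).
orientations = np.full(20, 0.05)
orientations[0] = -np.pi / 2

o_bar_clean = float(np.mean(np.full(20, 0.05)))   # all points tracked correctly
o_bar_noisy = float(np.mean(orientations))        # one tracking error included

print(f"O_bar without the error: {o_bar_clean:+.3f} rad")
print(f"O_bar with one bad point: {o_bar_noisy:+.3f} rad")   # still indicates rightward motion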

4.3. Analysis of Robot-to-Human Trust

Figure 7 illustrates the trust level of the robot in the human throughout the co-carry task. Figure 7a shows the trust of the robot when the human is moving the object from left to right in a lateral motion. In Figure 7b, the human is moving the object in a lateral right-to-left motion. In both scenarios, it is shown that the trust is dynamically evaluated throughout human motions. This allows the robot to determine whether or not to trust and assist the human at any given time. The values of the trust-level graph depict times at which the human motion is at least somewhat trusted and the zero points on the graph represent times when the human motion is not trusted. This may occur if the trust model detects human false positive motion. Figure 7c shows the trust level of the robot in a human false positive motion. This false positive motion means that the human is shaking the object. Since this motion is recognized as a false positive motion by the proposed approach, according to Equation (10), the discriminant coefficient will be set to 0, leading the trust level to remain at 0. This is useful, because the robot will not assist in human motion when the trust level is 0, and this provides a safer work environment.
Figure 7a,b both show the trust level changing as the human moves the co-carry object in the desired direction. While the human moves the object, the robot’s trust is calculated dynamically, based on the human’s current performance and trust history. This can cause the trust level to fluctuate throughout the movement and enable the robot to generate assisting actions for the human. Some small unintentional shakiness in the human movement may lead the trust level to drop. Since the model is able to dynamically determine a trust level, current human motions which could potentially cause unpredictable robot motion will be decided to be untrustworthy. This provides more safety to both the human and the robot as well as to the co-carry object, which could be damaged if the robot moves unpredictably. Figure 7c depicts the human performing a false positive motion. This kind of motion is a movement that is shaky and could cause harm to the human or damage the robot if the robot attempted to provide assisting motion for the human. In this experiment, the false positive movement is accurately detected, and the trust level remains at 0 throughout. Since this motion is correctly determined to be untrustworthy, the robot does not attempt to perform assisting motion.
We perform five human–robot lateral-movement co-carry experiments, each of which includes left-to-right co-carry, right-to-left co-carry, and human false positive actions. We verify that our robot-trusting-human model is able to consistently and correctly calculate the trust level of the robot in its human partner, based on the human movement and trust history. This model proves to be useful in co-carry tasks and could potentially be extended to co-assembly and many other human–robot collaborative tasks as well. Using the proposed framework, we are able to show that the trust level of the robot in the human can be dynamically calculated throughout the course of the task. Additionally, the trust of the robot in the human can change within even a single motion. By doing this, the robot is able to provide assisting motion only in trusted parts of the human motions, and to disregard any motions that are deemed untrustworthy.

4.4. Assisting Human in Co-Carry Tasks

The robot assists the human in the co-carry task based on the proposed trust-assist framework, as shown in Figure 8. The top image shows the beginning of the co-carry task. The human and robot remain still at this stage while both hold the co-carry object, and the trust is 0 until the human begins to move. The trust is set to 0 at this point because the discriminant coefficient is always 0 if no human–robot interaction occurs. In the middle image, the human begins to move the co-carry object in the right-to-left lateral direction. As this is being performed, the trust level can be seen changing dynamically. In the bottom image, the robot begins its assisting movement to help the human co-carry the vehicle. The robot will stop assisting once the trust level is 0. This allows the robot to assist only in the precise movements by the human that have been deemed trustworthy.
In the top image of Figure 8, since the human is not moving the object in any direction, the robot does not provide any assisting motions. Although some small shaky movements may occur here, they do not affect the model's ability to assist only with trusted human movements. This shows that the proposed model effectively filters out any unintentional human movements, based on the arithmetic mean of human motions in Equation (8). The robot correctly does not attempt to provide any assisting motion in this starting position. When the human begins moving the object in the middle image of Figure 8, the trust level is immediately calculated. As the human moves with some slight inconsistencies, the trust level fluctuates up and down. Although the movement may appear to be a smooth lateral motion to the human eye, the proposed model is able to spot much more precisely any small changes in the human's movements that may affect the robot's assisting capabilities. When these inconsistencies in the human's motions are found, the trust level will drop, since the inconsistencies are treated as the human's stop actions. This is why the dynamic capabilities of the proposed model are of utmost importance for its performance. As shown in the bottom image of Figure 8, with the generated assisting motions, the robot matches both the direction and velocity of the human's movements, allowing the robot to move along with the human at the same pace. During the human-carrying process, if a movement is determined to be untrustworthy, the robot will stop the assisting motion until the motion is verified as trustworthy again. Since the proposed model calculates the trust level dynamically throughout the human–robot co-carry task, the trust level can change from trustworthy to untrustworthy at any moment. By doing this, the model helps to minimize the possibility of injury to the human or damage to the robot. The experimental demo can be found in [57].
Through this experiment, we prove that the proposed trust-assist framework is able to have the robot correctly assist the human when trustworthy, and reject human motions when untrustworthy. This means that any human false positive movement is correctly determined to be untrustworthy, and therefore no assisting movements are performed by the robot. Once a movement is determined to be trustworthy, the robot starts its assisting actions.

5. Discussions

Through the course of the experimentation, effective results were achieved in both trust levels and assisting movements. The successful evaluations of trust were determined by using smooth motions that have clear human intention, as well as mimicking false positive movements such as a shaky hand. As presented in Figure 7, for the first two scenarios with smooth motions, the robot was able to correctly trust the motions and provide assisting movements in the same direction as the human. This means that the robot was able to view the human movements as trustworthy and determined the intention of the human movements in order to provide the assistance. In the third scenario with shaky movement, the robot was able to accurately label that movement as untrustworthy, and did not provide assisting movement to the human.
The assisting motions were also calculated effectively. As shown in Figure 8, when the human movements were determined to be trustworthy, the movements calculated by the robot-motion planner accurately mimicked the human's motion. By using the L–K optical-flow algorithm, we were able to calculate an accurate representation of the human movements to be used by the motion planner. Unlike AI-based human-motion-prediction methods [58,59,60,61,62] and robot-control frameworks using neural networks [63,64,65,66,67,68,69], which rely on large amounts of training data and have high computational complexity, the model-based approach used in this study does not need to pretrain on data and is able to evaluate human motions via real-time detected information. In order to enhance the performance of our model in ways such as reducing the latency of the human–robot co-carry interactions, we will also investigate how to combine our approach with AI-based methods such as transfer learning [70].
Our experimental results also indicated that the proposed approach presented good stability for the human–robot team. The continuous-trust evaluation throughout the co-carry task provided the robot and human with a stable collaborative process. The framework also continued to work through longer co-carry tasks without running into errors or bugs. This grants the human and robot the ability to work cooperatively on co-carry tasks for long periods of time.
While the proposed framework presents distinct advantages for human–robot co-carry tasks, this study still has some limitations that motivate our future investigations. The first direction of our future work is planning the robot-assistance policy, based on different trust levels. This means that for different values of the robot’s trust in the human, different assisting motions will be planned for the robot. One example of this could be where the human is moving the object in a slightly shaky motion but in a general direction; the robot could try to smooth out the motion instead of not following it at all. This could be useful for safety reasons, so that the robot does not copy any shaky motion that the human may perform.
Another direction is to evaluate robots' trust in 3D human-carrying motions. With the current model, human-motion data is only collected through a camera mounted above the workspace. This limits the motion tracking with the optical-flow algorithm to horizontal motion only. Additionally, the out-of-plane errors of the human will influence human-motion recognition and the performance of human–robot collaboration. To investigate and solve these issues, we will add a second camera mounted next to the workspace, so that data can be collected on the vertical motion of the human as well. This will allow for more freedom of movement by the human, as well as for performing more complex carrying tasks. Such tasks could involve lifting the object and placing it on top of another platform within the workspace. Meanwhile, we can also study the impacts of out-of-plane errors on the performance of the proposed model.
Finally, we will verify the performance of the trust model with different subjects with diverse backgrounds in more complex human–robot collaborative tasks. In our current experiments, we primarily focused on moving the object in straight lines, from side to side. More complex movements could include the human moving the co-carried object along a curved path around an obstacle or moving the object to a specific location in the workspace. This also relates to evaluating the trust model in 3D carrying motions. Additionally, more subjects with diverse backgrounds (industrial workers, undergraduates, master's students, and PhD candidates) will be recruited to evaluate the performance of the proposed model. We will employ different metrics, such as accuracy, stability, time efficiency, completeness of the collaborative task, collaboration fluency, and robot-response speed, to evaluate the proposed model.

6. Conclusions

In this paper, we have proposed a novel trust-assist framework for human–robot collaborative teams. The first part of our framework is the RTH model, which allows the robot to determine its trust level in the human co-carry partner. The calculation of the trust level is based on human motions, past interactions between the human–robot pair, and the human's current performance in the co-carry task. The RTH model calculates the trust level between the human and robot throughout the collaborative task, which allows the trust level to change if the human begins moving in nonuniform ways. Then, using the trust level calculated in the previous step, the RAH model has been designed to calculate the assisting motion, based on the Lucas–Kanade optical-flow approach. If the trust level is too low, the human-carrying motion is not followed by the robot. Conversely, the proposed approach enables the robot to perform assisting movements when the human motion is considered trustworthy. The results of our real-world experiments within this framework showed that the robot effectively assisted the human in co-carry tasks through the proposed computational-trust model. We have discussed the advantages and limitations of this study and described the roadmap of the future work we will conduct.

Author Contributions

Conceptualization, W.W. and C.H.; methodology, W.W. and R.L.; software, C.H.; validation, W.W., C.H. and R.L.; writing, C.H., R.L. and W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Science Foundation under Grant CNS-2104742 and in part by the National Science Foundation under Grant CNS-2117308.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tsarouchi, P.; Makris, S.; Chryssolouris, G. Human–robot interaction review and challenges on task planning and programming. Int. J. Comput. Integr. Manuf. 2016, 29, 916–931.
  2. Maurtua, I.; Ibarguren, A.; Kildal, J.; Susperregi, L.; Sierra, B. Human–robot collaboration in industrial applications: Safety, interaction and trust. Int. J. Adv. Robot. Syst. 2017, 14, 1–10.
  3. Shi, J.; Jimmerson, G.; Pearson, T.; Menassa, R. Levels of human and robot collaboration for automotive manufacturing. In Proceedings of the Workshop on Performance Metrics for Intelligent Systems, College Park, MD, USA, 20–22 March 2012; pp. 95–100.
  4. Sauppé, A. Designing effective strategies for human-robot collaboration. In Proceedings of the Companion Publication of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, Baltimore, MD, USA, 15–19 February 2014; pp. 85–88.
  5. Dragan, A.D.; Bauman, S.; Forlizzi, J.; Srinivasa, S.S. Effects of robot motion on human-robot collaboration. In Proceedings of the 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Portland, OR, USA, 2–5 March 2015; pp. 51–58.
  6. Wang, W.; Li, R.; Chen, Y.; Jia, Y. Human Intention Prediction in Human-Robot Collaborative Tasks. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 279–280.
  7. Li, Y.; Ge, S.S. Human–robot collaboration based on motion intention estimation. IEEE/ASME Trans. Mechatron. 2013, 19, 1007–1014.
  8. Hu, B.; Chen, J. Optimal task allocation for human–machine collaborative manufacturing systems. IEEE Robot. Autom. Lett. 2017, 2, 1933–1940.
  9. Peternel, L.; Tsagarakis, N.; Caldwell, D.; Ajoudani, A. Robot adaptation to human physical fatigue in human–robot co-manipulation. Auton. Robot. 2018, 42, 1011–1021.
  10. Lynch, K.M.; Liu, C. Designing motion guides for ergonomic collaborative manipulation. In Proceedings of the 2000 ICRA, Millennium Conference, IEEE International Conference on Robotics and Automation, Symposia Proceedings (Cat. No. 00CH37065), San Francisco, CA, USA, 24–28 April 2000; pp. 2709–2715.
  11. Kana, S.; Tee, K.-P.; Campolo, D. Human–Robot co-manipulation during surface tooling: A general framework based on impedance control, haptic rendering and discrete geometry. Robot. Comput.-Integr. Manuf. 2021, 67, 102033.
  12. Mateus, J.C.; Claeys, D.; Limère, V.; Cottyn, J.; Aghezzaf, E.-H. A structured methodology for the design of a human-robot collaborative assembly workplace. Int. J. Adv. Manuf. Technol. 2019, 102, 2663–2681.
  13. Oyekan, J.O.; Hutabarat, W.; Tiwari, A.; Grech, R.; Aung, M.H.; Mariani, M.P.; López-Dávalos, L.; Ricaud, T.; Singh, S.; Dupuis, C. The effectiveness of virtual environments in developing collaborative strategies between industrial robots and humans. Robot. Comput.-Integr. Manuf. 2019, 55, 41–54.
  14. Colan, J.; Nakanishi, J.; Aoyama, T.; Hasegawa, Y. A Cooperative Human-Robot Interface for Constrained Manipulation in Robot-Assisted Endonasal Surgery. Appl. Sci. 2020, 10, 4809.
  15. Donner, P.; Christange, F.; Lu, J.; Buss, M. Cooperative dynamic manipulation of unknown flexible objects. Int. J. Soc. Robot. 2017, 9, 575–599.
  16. Ososky, S.; Schuster, D.; Phillips, E.; Jentsch, F.G. Building appropriate trust in human-robot teams. In Proceedings of the 2013 AAAI Spring Symposium, Palo Alto, CA, USA, 25–27 March 2013; pp. 60–65.
  17. Ullman, D.; Aladia, S.; Malle, B.F. Challenges and opportunities for replication science in HRI: A case study in human-robot trust. In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 8–11 March 2021; pp. 110–118.
  18. Xu, J.; Howard, A. Evaluating the impact of emotional apology on human-robot trust. In Proceedings of the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Napoli, Italy, 29 August–2 September 2022; pp. 1655–1661.
  19. Esterwood, C.; Robert, L.P. Having the Right Attitude: How Attitude Impacts Trust Repair in Human—Robot Interaction. In Proceedings of the 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Sapporo, Japan, 7–10 March 2022; pp. 332–341.
  20. Ezenyilimba, A.; Wong, M.; Hehr, A.; Demir, M.; Wolff, A. Impact of transparency and explanations on trust and situation awareness in human–robot teams. J. Cogn. Eng. Decis. Mak. 2022, 17, 15553434221136358.
  21. Malle, B.F.; Ullman, D. A multidimensional conception and measure of human-robot trust. In Trust in Human-Robot Interaction; Elsevier: Amsterdam, The Netherlands, 2021; pp. 3–25.
  22. Esterwood, C.; Robert, L.P. Three Strikes and you are out!: The impacts of multiple human-robot trust violations and repairs on robot trustworthiness. Comput. Human Behav. 2023, 142, 107658.
  23. Hannum, C.; Li, R.; Wang, W. Trust or Not?: A Computational Robot-Trusting-Human Model for Human-Robot Collaborative Tasks. In Proceedings of the 2020 IEEE International Conference on Big Data (Big Data), Atlanta, GA, USA, 10–13 December 2020; pp. 5689–5691.
  24. Hanna, A.; Larsson, S.; Götvall, P.-L.; Bengtsson, K. Deliberative safety for industrial intelligent human–robot collaboration: Regulatory challenges and solutions for taking the next step towards industry 4.0. Robot. Comput.-Integr. Manuf. 2022, 78, 102386.
  25. Gualtieri, L.; Rauch, E.; Vidoni, R. Development and validation of guidelines for safety in human-robot collaborative assembly systems. Comput. Ind. Eng. 2022, 163, 107801.
  26. Choi, S.H.; Park, K.-B.; Roh, D.H.; Lee, J.Y.; Mohammed, M.; Ghasemi, Y.; Jeong, H. An integrated mixed reality system for safety-aware human-robot collaboration using deep learning and digital twin generation. Robot. Comput.-Integr. Manuf. 2022, 73, 102258.
  27. Cai, J.; Du, A.; Liang, X.; Li, S. Prediction-Based Path Planning for Safe and Efficient Human–Robot Collaboration in Construction via Deep Reinforcement Learning. J. Comput. Civ. Eng. 2023, 37, 04022046.
  28. Faccio, M.; Granata, I.; Minto, R. Task allocation model for human-robot collaboration with variable cobot speed. J. Intell. Manuf. 2023, 474, 1–14.
  29. Tariq, U.; Muthusamy, R.; Kyrki, V. Grasp planning for load sharing in collaborative manipulation. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 6847–6854.
  30. Wang, W.; Li, R.; Chen, Y.; Diekel, Z.M.; Jia, Y. Facilitating Human–Robot Collaborative Tasks by Teaching-Learning-Collaboration From Human Demonstrations. IEEE Trans. Autom. Sci. Eng. 2018, 16, 640–653.
  31. Argall, B.D.; Chernova, S.; Veloso, M.; Browning, B. A survey of robot learning from demonstration. Robot. Auton. Syst. 2009, 57, 469–483.
  32. Gu, Y.; Thobbi, A.; Sheng, W. Human-robot collaborative manipulation through imitation and reinforcement learning. In Proceedings of the 2011 IEEE International Conference on Information and Automation, Shanghai, China, 9–13 May 2011; pp. 151–156.
  33. Arai, H.; Takubo, T.; Hayashibara, Y.; Tanie, K. Human-robot cooperative manipulation using a virtual nonholonomic constraint. In Proceedings of the 2000 ICRA, Millennium Conference, IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No. 00CH37065), San Francisco, CA, USA, 24–28 April 2000; pp. 4063–4069.
  34. Cervera, E.; del Pobil, A.P.; Marta, E.; Serna, M.A. A sensor-based approach for motion in contact in task planning. In Proceedings of the 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, Human Robot Interaction and Cooperative Robots, Pittsburgh, PA, USA, 5–9 August 1995; pp. 468–473.
  35. Kruse, D.; Radke, R.J.; Wen, J.T. Collaborative human-robot manipulation of highly deformable materials. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 3782–3787. [Google Scholar]
  36. Demolombe, R. Reasoning about trust: A formal logical framework. In Proceedings of the International Conference on Trust Management, Oxford, UK, 29 March–1 April 2004; pp. 291–303. [Google Scholar]
  37. Wang, Y.; Shi, Z.; Wang, C.; Zhang, F. Human-robot mutual trust in (semi) autonomous underwater robots. In Cooperative Robots and Sensor Networks 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 115–137. [Google Scholar]
  38. Billings, D.R.; Schaefer, K.E.; Chen, J.Y.; Hancock, P.A. Human-robot interaction: Developing trust in robots. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, Boston, MA, USA, 5–8 March 2012; pp. 109–110. [Google Scholar]
  39. Rossi, A.; Dautenhahn, K.; Koay, K.L.; Saunders, J. Investigating human perceptions of trust in robots for safe HRI in home environments. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; pp. 375–376. [Google Scholar]
  40. Stormont, D.P. Analyzing human trust of autonomous systems in hazardous environments. In Proceedings of the Human Implications of Human-Robot Interaction Workshop at AAAI, Chicago, IL, USA, 13–17 July 2008; pp. 27–32. [Google Scholar]
  41. Freedy, A.; DeVisser, E.; Weltman, G.; Coeyman, N. Measurement of trust in human-robot collaboration. In Proceedings of the 2007 International Symposium on Collaborative Technologies and Systems, Orlando, FL, USA, 21–25 May 2007; pp. 106–114. [Google Scholar]
  42. Chen, M.; Nikolaidis, S.; Soh, H.; Hsu, D.; Srinivasa, S. Planning with trust for human-robot collaboration. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 307–315. [Google Scholar]
  43. Kaniarasu, P.; Steinfeld, A.; Desai, M.; Yanco, H. Robot confidence and trust alignment. In Proceedings of the 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, 3–6 March 2013; pp. 155–156. [Google Scholar]
  44. Basili, P.; Huber, M.; Brandt, T.; Hirche, S.; Glasauer, S. Investigating human-human approach and hand-over. In Human Centered Robot Systems; Springer: Berlin/Heidelberg, Germany, 2009; pp. 151–160. [Google Scholar]
  45. Lucas, B.D.; Kanade, T. An iterative image registration technique with an application to stereo vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence, San Francisco, CA, USA, 24–28 August 1981; pp. 121–130. [Google Scholar]
  46. LaValle, S.M. Rapidly-Exploring Random Trees: A New Tool for Path Planning. The Annual Research Report. 1998, 1–4. Available online: http://msl.cs.illinois.edu/~lavalle/papers/Lav98c.pdf (accessed on 2 February 2023).
  47. Kavraki, L.E.; Švestka, P.; Latombe, J.-C.; Overmars, M.H. Probabilistic Roadmaps for Path Planning in High-Dimensional Configuration Spaces. IEEE Trans. Robot. Autom. 1996, 12, 566–580. [Google Scholar] [CrossRef]
  48. Kuffner, J.J.; LaValle, S.M. RRT-connect: An efficient approach to single-query path planning. In Proceedings of the 2000 ICRA, Millennium Conference, IEEE International Conference on Robotics and Automation, Symposia Proceedings (Cat. No. 00CH37065), San Francisco, CA, USA, 24–28 April 2000; pp. 995–1001. [Google Scholar]
  49. Janson, L.; Schmerling, E.; Clark, A.; Pavone, M. Fast marching tree: A fast marching sampling-based method for optimal motion planning in many dimensions. Int. J. Robot. Res. 2015, 34, 883–921. [Google Scholar] [CrossRef] [PubMed]
  50. Bohlin, R.; Kavraki, L.E. Path planning using lazy PRM. In Proceedings of the 2000 ICRA. Millennium Conference, IEEE International Conference on Robotics and Automation, Symposia Proceedings (Cat. No. 00CH37065), San Francisco, CA, USA, 24–28 April 2000; pp. 521–528. [Google Scholar]
  51. Hsu, D.; Latombe, J.-C.; Motwani, R. Path planning in expansive configuration spaces. Int. J. Comput. Geom. Appl. 1999, 9, 495–512. [Google Scholar] [CrossRef]
  52. Stilman, M.; Schamburek, J.-U.; Kuffner, J.; Asfour, T. Manipulation Planning among Movable Obstacles; Georgia Institute of Technology: Atlanta, GA, USA, 2007. [Google Scholar]
  53. Bier, S.; Li, R.; Wang, W. A Full-Dimensional Robot Teleoperation Platform. In Proceedings of the 2020 IEEE International Conference on Mechanical and Aerospace Engineering, Athens, Greece, 14–17 July 2020; pp. 186–191. [Google Scholar]
  54. Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An open-source Robot Operating System. In Proceedings of the ICRA Workshop on Open Source Software, Kobe, Japan, 12–17 May 2009; pp. 1–6. [Google Scholar]
  55. Diamantopoulos, H.; Wang, W. Accommodating and Assisting Human Partners in Human-Robot Collaborative Tasks through Emotion Understanding. In Proceedings of the 2021 International Conference on Mechanical and Aerospace Engineering (ICMAE), Athens, Greece, 16–19 July 2021; pp. 523–528. [Google Scholar]
  56. Chitta, S.; Sucan, I.; Cousins, S. MoveIt! IEEE Robot. Autom. Mag. 2012, 19, 18–19. [Google Scholar] [CrossRef]
  57. The Experimental Demo. Available online: https://www.youtube.com/watch?v=2sVAmwHPHOc (accessed on 2 February 2023).
  58. Bütepage, J.; Kjellström, H.; Kragic, D. Anticipating many futures: Online human motion prediction and synthesis for human-robot collaboration. arXiv 2017, arXiv:1702.08212. [Google Scholar]
  59. Ding, H.; Reißig, G.; Wijaya, K.; Bortot, D.; Bengler, K.; Stursberg, O. Human arm motion modeling and long-term prediction for safe and efficient human-robot-interaction. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 5875–5880. [Google Scholar]
  60. Kinugawa, J.; Kanazawa, A.; Arai, S.; Kosuge, K. Adaptive Task Scheduling for an Assembly Task Coworker Robot Based on Incremental Learning of Human’s Motion Patterns. IEEE Robot. Autom. Lett. 2017, 2, 856–863. [Google Scholar] [CrossRef]
  61. Elfring, J.; van de Molengraft, R.; Steinbuch, M. Learning intentions for improved human motion prediction. Robot. Auton. Syst. 2014, 62, 591–602. [Google Scholar] [CrossRef]
  62. Song, D.; Kyriazis, N.; Oikonomidis, I.; Papazov, C.; Argyros, A.; Burschka, D.; Kragic, D. Predicting human intention in visual observations of hand/object interactions. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013; pp. 1608–1615. [Google Scholar]
  63. Nguyen, H.-T.; Cheah, C.C. Analytic Deep Neural Network-Based Robot Control. IEEE/ASME Trans. Mechatron. 2022, 27, 2176–2184. [Google Scholar] [CrossRef]
  64. Khan, A.T.; Li, S.; Cao, X. Control framework for cooperative robots in smart home using bio-inspired neural network. Measurement 2021, 167, 108253. [Google Scholar] [CrossRef]
  65. Watanabe, K.; Tang, J.; Nakamura, M.; Koga, S.; Fukuda, T. A fuzzy-Gaussian neural network and its application to mobile robot control. IEEE Trans. Control Syst. Technol. 1996, 4, 193–199. [Google Scholar] [CrossRef]
  66. Thor, M.; Kulvicius, T.; Manoonpong, P. Generic neural locomotion control framework for legged robots. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 4013–4025. [Google Scholar] [CrossRef]
  67. Sun, F.; Sun, Z.; Woo, P.-Y. Neural network-based adaptive controller design of robotic manipulators with an observer. IEEE Trans. Neural Netw. 2001, 12, 54–67. [Google Scholar] [PubMed]
  68. Wang, N.; Chen, C.; Yang, C. A robot learning framework based on adaptive admittance control and generalizable motion modeling with neural network controller. Neurocomputing 2020, 390, 260–267. [Google Scholar] [CrossRef]
  69. Wang, W.; Li, R.; Diekel, Z.M.; Chen, Y.; Zhang, Z.; Jia, Y. Controlling Object Hand-Over in Human–Robot Collaboration Via Natural Wearable Sensing. IEEE Trans. Hum.-Mach. Syst. 2019, 49, 59–71. [Google Scholar] [CrossRef]
  70. Weiss, K.R.; Khoshgoftaar, T.M. An investigation of transfer learning and traditional machine learning algorithms. In Proceedings of the 2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI), San Jose, CA, USA, 6–8 November 2016; pp. 283–290. [Google Scholar]
Figure 1. Overview of trust-assist framework for human–robot co-carry tasks.
Figure 2. Human-carrying motions acquisition and processing in human–robot co-carry tasks.
Figure 3. The pose of the human’s hand in the 3D workspace in co-carry tasks.
Figure 4. The RRT-Connect path planner utilized for the robot.
Figure 5. The experimental platform and task in this study.
Figure 6. Human-carrying-motion recognition using L–K algorithms. (a) Ready to move. (b) Human-carrying motion from left side to right side in the collaborative workspace. (c) Approaching the right end. (d) Human-carrying motion from right side to left side in the collaborative workspace. (e) Approaching the left end. (f) Returning to the start station.
Figure 7. The robot’s trust in the human is dynamically evaluated in the human–robot co-carry process. (a) Trust-level evaluation when the human carries the object from left side to right side in the collaborative workspace. (b) Trust-level evaluation when the human carries the object from right side to left side in the collaborative workspace. (c) Trust-level evaluation when the human performs a false positive action.
Figure 8. The robot assists its human partner, based on trust levels in co-carry tasks.