2017 Jun 12;9(4):575–599. doi: 10.1007/s12369-017-0415-x

Cooperative Dynamic Manipulation of Unknown Flexible Objects

Joint Energy Injection Based on Simple Pendulum Fundamental Dynamics

Philine Donner 1,2,, Franz Christange 3, Jing Lu 1, Martin Buss 1,2
PMCID: PMC6961525  PMID: 32010408

Abstract

Cooperative dynamic manipulation enlarges the manipulation repertoire of human–robot teams. By means of synchronized swinging motion, a human and a robot can continuously inject energy into a bulky and flexible object in order to place it onto an elevated location and outside the partners’ workspace. Here, we design leader and follower controllers based on the fundamental dynamics of simple pendulums and show that these controllers can regulate the swing energy contained in unknown objects. We consider a complex pendulum-like object controlled via acceleration, and an “arm—flexible object—arm” system controlled via shoulder torque. The derived fundamental dynamics of the desired closed-loop simple pendulum behavior are similar for both systems. We limit the information available to the robotic agent about the state of the object and the partner’s intention to the forces measured at its interaction point. In contrast to a leader, a follower does not know the desired energy level and imitates the leader’s energy flow to actively contribute to the task. Experiments with a robotic manipulator and real objects show the efficacy of our approach for human–robot dynamic cooperative object manipulation.

Electronic supplementary material

The online version of this article (doi:10.1007/s12369-017-0415-x) contains supplementary material, which is available to authorized users.

Keywords: Physical human–robot interaction, Cooperative manipulators, Adaptive control, Dynamics, Haptics, Intention estimation

Introduction

Continuous energy injection during synchronized swinging motion enables a human and a robot to lift a bulky flexible object together onto an elevated location. This example scenario is illustrated in Fig. 1a and combines the advantages of cooperative and dynamic manipulation. Cooperative manipulation allows for the manipulation of heavier and bulkier objects than one agent could manipulate on its own. A commonly addressed physical human–robot collaboration scenario is, e.g., cooperative transport of rigid bulky objects [44]. Such object transport tasks are performed by kinematic manipulation, i.e., the rigid object is rigidly grasped by the manipulators [32]. In contrast, dynamic object manipulation makes use of the object dynamics, with the advantage of an increased manipulation repertoire: simpler end effectors can handle a greater variety of objects faster and outside the workspace of the manipulator. Dynamic manipulation examples are juggling, throwing, catching [29] as well as the manipulation of underactuated mechanisms [8], such as the flexible and the pendulum-like objects in Fig. 1a, b.

Fig. 1.

Fig. 1

Approach overview: (1) Interpretation of flexible object swinging as a combination of pendulum swinging and rigid object swinging. (2) Approximation of pendulum swinging by the t-pendulum with 1D acceleration inputs and of flexible object swinging by the afa-system with 1D torque inputs. (3) Projection of the t-pendulum and the afa-system onto the abstract cart-pendulum and abstract torque-pendulum, respectively. (4) Extraction of the closed-loop fundamental dynamics. (5) Fundamental dynamics-based natural frequency estimation and leader and follower controller design

In this article, we take a first step towards combining the advantages of cooperative and dynamic object manipulation by investigating cooperative swinging of underactuated objects. The swinging motion naturally synchronizes the motion of the cooperating agents. Energy can be injected in a favorable arm configuration for a human interaction partner (stretched arm) and task effort can be shared among the agents. Moreover, the accessible workspace of the human arm and robotic manipulator is increased by the swinging motion of the object and by a possible subsequent throwing phase. In order to approach the complex task of cooperative flexible object swinging in Fig. 1a, we split it up into its two extremes, which are swinging of pendulum-like objects which oscillate themselves (b) and swinging of rigid objects, where the agents’ arms together with the rigid object form an oscillating entity (c). In our initial work, we treated pendulum-like object swinging [13] based on the assumption that all system parameters are known. This assumption was alleviated in [14] by an adaptive approach.

The contribution of this work is three-fold: firstly, we experimentally verify the adaptive approach presented in [14]. Secondly, we combine our results from cooperative swinging of pendulum-like objects and human–human swinging of rigid objects in [15], towards cooperative swinging of flexible objects. Our third contribution lies in the unified presentation of modeling the desired oscillation of pendulum-like and flexible objects through simple pendulum abstractions of equal fundamental dynamics (see two paths in Fig. 1). In the following, we discuss the state of the art related to different aspects of our proposed control approach.

Dynamic Manipulation in Physical Human–Robot Interaction

Consideration and exploitation of the mutual influence is of great importance when designing controllers for natural human–robot interaction [45], even more so when the agents are in physical contact. Little work exists on cooperative dynamic object manipulation in general, and in the context of human–robot interaction in particular. In [25] and [30], a human and a robot perform rope turning. In both cases, a stable rope turning motion had to be established by the human before the robot was able to contribute to sustaining it. The human–robot cooperative sawing task considered in [38] requires adaptation on the motion as well as on the stiffness level in order to cope with the challenging saw-environment interaction dynamics.

In contrast, cooperative kinematic manipulation of a common object by a human and a robot has seen great interest. Kosuge et al. [26] first designed rather passive gravity compensators, which have since been developed further into robotic partners that actively contribute to the task, e.g., [33]. Active contribution comes with own plans and thus own intentions, which have to be communicated and negotiated. Whereas verbal communication allows humans to easily exchange information, human–human studies have shown that haptic coupling through an object serves as a powerful and fast haptic communication channel [21]. In this work, the robotic agent is limited to measurements of its own applied force and torque. Thus, the robot has to use the haptic communication channel to infer both the intention of the partner and the state of the object.

Cooperation of several agents allows for role allocation. Human–human studies in [40] showed that humans tend to specialize during haptic interaction tasks and motivated the design of follower and leader behavior [17]. Mörtl et al.  [34] assigned effort roles that specify how effort is shared in redundant task directions. Also, the swing-up task under consideration allows for effort sharing. In kinematic physical interaction tasks, the interaction forces are commonly used for intention recognition, e.g., counteracting forces are interpreted as disagreement [20, 34]. Furthermore, the leader’s intention is mostly reflected in a planned trajectory. For the swing-up task, on the contrary, the leader’s intention is reflected in a desired object energy, which is unknown to the follower agent. Dynamic motion as well as a reduced coupling of the agents through the flexible or even pendulum-like object prohibit a direct mapping from interaction force to intention. We propose a follower that monitors and imitates the energy flow to the object in order to actively contribute to the task.

Simple Pendulum Approximation for Modeling and Control

The pendulum-like object in Fig. 1b belongs to the group of suspended loads. Motivated by an extended workspace, mechanisms with single [8] and double [51] cable-suspensions were designed and controlled via parametric excitation to perform point to point motion and trajectory tracking. An impressive example of workspace extension is presented in [9], where a quadrotor injects energy into its suspended load such that it can pass through a narrow opening, which would be impossible with the load hanging down. The pendulum-like object in Fig. 1b is similar to the suspended loads of [50] and [51]. However, the former work focuses on oscillation damping and the latter uses one centralized controller.

In contrast to pendulum-like objects, rigid objects tightly couple the robot and the human motion. Thus, during human–robot cooperative swinging of rigid objects as illustrated in Fig. 1c, the robot needs to move “human-like” to allow for comfort on the human side. On this account, we conducted a pilot study on human–human rigid object swinging reported in [15]. The observed motion and frequency characteristics suggest that the human arm can be approximated as a torque-actuated simple pendulum with pivot point in front of the human shoulder. This result is in line with the conclusion drawn in [22] that the preferred frequency of a swinging lower human arm is dictated by the physical properties of the limb rather than the central nervous system.

Manipulation of flexible and deformable objects is a challenging research topic also at slow velocities. While the finite elements method aims at exact modeling [28], the pseudo-rigid object method offers an efficient tool to estimate deformation and natural frequency [49].

Here, instead of aiming for an accurate model, we achieve stable oscillations of unknown flexible objects by making use of the fact that the desired oscillation is simple pendulum-like. Simple pendulum approximations have been successfully used to model and control complex mechanisms, e.g., for brachiating [36] or dancing [46]. The swing-up and stabilization of simple pendulums in their unstable equilibrium point is commonly used as a benchmark for linear and nonlinear control techniques [1, 18]. Instead of a full swing-up to the inverted pendulum configuration, our goal is to reach a periodic motion of desired energy content. Desired periodic motions can also be achieved based on virtual holonomic constraints, e.g., [19]. The above controllers rely on thorough system knowledge, whereas our final goal is the manipulation of unknown flexible objects.

Adaptive Control for Periodic Motions and Leader–Follower Behavior

The cooperative sawing task in [38] is achieved via learning of individual dynamic movement primitives for motion and stiffness control with a human tutor in the loop. Frequency and phase are extracted online by adaptive frequency oscillators [39]. The applicability of learning methods such as learning from demonstration [4] or reinforcement learning [16] to nonlinear dynamics is frequently evaluated on inverted pendulum tasks. Reinforcement learning often suffers from the need for long interactions with the real system and from a high number of tuning parameters [35, 37]. Only recently, Deisenroth et al. [10] showed how Gaussian processes allow for faster autonomous reinforcement learning with few parameters. Neural networks constitute another effective tool to control nonlinear systems and have also been applied to adaptive leader–follower consensus control, e.g., in [47].

In this work, we apply model knowledge of the swinging task to design adaptive leader/follower controllers for swinging of unknown flexible objects, without the need for a learning phase. Identification of the underlying fundamental dynamics allows us to design leader and follower controllers which require only a few parameters of distinct physical meaning.

Overview of the Fundamental Dynamics-Based Approach

This section highlights the main ideas of the proposed approach and structures the article along Figs. 1 and 2. Individual variables will be introduced in subsequent sections and important variables are listed in Table 1.

Fig. 2.

Fig. 2

Implementation overview block diagram. Based on the measured force f1 and torque t1, the complex afa-system and t-pendulum are projected onto their simple pendulum variants. From the extracted FD states φ and ϑr, the natural frequency estimate ω^ is computed and leader or follower behavior is realized through the amplitude factor a1. Energy-based controllers convert the amplitude factor a1 into a desired end effector motion defined by r1 or ρ and ψ

Table 1.

Important variables and abbreviations

FD Fundamental dynamics
fi / ti Force/torque applied by agent i
ri,r˙i,r¨i Position, velocity, acceleration of agent i in x-direction with respect to its initial position
ts Torque applied at shoulder of virtual arm
θ / ψ Desired/undesired oscillation DoF
ρ Virtual arm deflection angle
ϑ Oscillation DoF of abstract simple pendulums
φ Phase angle
E, Ej Energy, energy of oscillation j
jE Amplitude of oscillation j (energy equivalent)
ϑr Phase space radius (approx. energy equivalent)
ai Amplitude factor of agent i
ω Natural frequency
ω0 / ωg Small angle/geometric mean approximation
Γi Relative energy contribution of agent i
(·)i Agent Ai
(·)F, (·)L Follower, leader agent
(·)o / (·)a Parameters of object/virtual arm
(·)ref Reference dynamics
(·) Projection of (·) onto xy-plane
(·)^ / (·)d Estimate/ desired value of (·)

In this work, we achieve cooperative energy injection into unknown flexible objects based on an understanding of the underlying desired fundamental dynamics (FD). Figure 1 illustrates the approximation steps taken that lead from human–robot flexible object swinging (a) to the FD (h). Pendulum-like objects (b) constitute the extreme end on the scale of flexible objects (a) with respect to the coupling strength between the agents. The especially weak coupling allows us to isolate the object from the agents’ end effectors and represent the agent’s influence by acceleration inputs. In the following, we refer to the isolated pendulum-like object (d) as t-pendulum due to its trapezoidal shape. In order to achieve our final goal of flexible object swinging, we consolidate our insights on pendulum and rigid object swinging (see step 2 in Fig. 1). We exploit the result that human arms behave as simple pendulums during rigid object swinging [15] and approximate the human arms by simple pendulums actuated via torque at the shoulder joints. We abbreviate the resultant “arm—flexible object—arm” system (e) as afa-system.

We do not try to extract accurate dynamical models, but make use of the fact that the desired oscillations are simple pendulum-like. The desired oscillations of the t-pendulum and the afa-system are then represented by cart-actuated (f) and torque-actuated (g) simple pendulums, respectively. We extract linear FD (h) which describes the phase and energy dynamics of the simple pendulum approximations controlled by a variant of the swing-up controller of Yoshida [48]. The FD allows for online frequency estimation (i), controlled energy injection and effort sharing among the agents (j).

The block diagram in Fig. 2 visualizes the implementation with input and output variables. The blocks will be detailed in the respective sections as indicated in Figs. 1 and 2. We would like to emphasize here that the proposed robot controllers generate desired end effector motion solely based on force and torque measurements at the robot’s interaction point.1

The remainder of the article is structured as follows. In Sect. 3 we give the problem formulation. This is followed by the FD derivations in Sect. 4, on which basis the adaptive leader and follower controllers are introduced and analyzed in Sect. 5. In Sect. 6, we apply the FD-based controllers to the two-agent t-pendulum and afa-system. We evaluate our controllers in simulation and experiments in Sects. 7 and 8, respectively. In Sect. 9, we discuss design choices, limitations and possible extensions of the presented control approach. Section 10 concludes the article.

Problem Formulation for Cooperative Object Swinging

In this section, we introduce relevant variables and parameters of the t-pendulum and afa-system of Fig. 1d, e. Thereafter, we formally state our problem. Note that we drop the explicit notation of time dependency of the system variables where clear from the context.

The t-Pendulum

Figure 3 shows the t-pendulum. Without loss of generality, we assume that agent A1 = R is the robot who cooperates with a human A2 = H. The t-pendulum has 10 degrees of freedom (DoFs), if we assume point-mass handles: the 3D positions of the two handles r1 and r2 representing the interaction points of the two agents A1 and A2 and 4 oscillation DoFs. The oscillation DoF θ describes the desired oscillation and is defined as the angle between the y-axis and the line connecting the center between the two agents and the center of mass of the pendulum object. The oscillation DoF ψ describes oscillations of the object around the y-axis and is the major undesired oscillation DoF. Experiments showed that oscillations around the object centerline and around the horizontal axis perpendicular to the connection line between the interaction partners2 play a minor role and are therefore neglected in the following.

Fig. 3.

Fig. 3

The t-pendulum (adapted from [13]): cylindrical object of mass mo, length lo and moment of inertia Io under the influence of gravity g, attached via massless ropes of length l to two handles of mass mh,i located at ri with i=1,2. The location r1 is defined with respect to the world fixed coordinate system {w}. The location r2 is defined with respect to the fixed point wp=0,0,C in {w}, where C is the initial distance between the two agents. Pairs of parallel lines at the same angle indicate parallelism

The agents influence the t-pendulum by means of handle accelerations r¨1 and r¨2. Although we assume cooperating agents, the only controllable quantity of agent A1 is its own acceleration r¨1. The acceleration r¨2 of agent A2 acts as a disturbance as it cannot be directly influenced by agent A1. We limit the motion of agent A1 to the x-direction for simplicity, which yields the one dimensional input u1=r¨1. Experiments showed that 1D motion is sufficient and does not disturb a human interaction partner in comfortable 3D motion, because the pendulum-like object only loosely couples the two agents. The forces applied at the own handle are the only measurable quantity of agent A1, i.e. measurable output y1=f1.

The afa-System

Figure 4 shows the afa-system. The cylindrical arms are actuated by shoulder torques around the z-axis, ts,1 and ts,2. For simplicity, we limit the arm of agent A1 to rotations in the xy-plane. Note that we use the same approximations for the side of agent A2 for ease of illustration, although a human interaction partner can move freely. The angle between the negative y-axis and the arm of agent A1 is the oscillation DoF ρ. The angle ψ describes the wrist orientation with respect to the arm in the xy-plane (see right angle marking in Fig. 4). Thus, position and orientation of the interaction point of A1 are defined by the angles ρ and ψ. We regard excessive and unsynchronized ψ-oscillations as undesired. The wrist joint is subject to damping dψ and stiffness kψ. The desired oscillation DoF θ is defined as the angle between the y-axis and the line connecting the center between the two agents and the center of mass of the undeformed flexible object (indicated by a cross in Fig. 4). The input to the afa-system from the perspective of agent A1 is its shoulder torque u1=ts,1. Agent A1 receives force and torque signals at its wrist: measurable output y1=[f1, t1].

Fig. 4.

Fig. 4

The afa-system: two cylindrical arms connected at their wrist joints through a flexible object of mass mo and deformation dependent moment of inertia Io under the influence of gravity g. The two cylindrical arms are of mass ma,i, moment of inertia Ia,i and length la,i with i=1,2 and have their pivot points at the origin of the world fixed coordinate system {w} and at wp=0,0,C in {w}, respectively. Pairs of parallel lines at the same angle indicate parallelism

Problem Statement

Our goal is to excite the desired oscillation θ to reach a periodic orbit of desired energy level Eθd and zero undesired oscillation Eψd=0. The desired energy Eθd is then equivalent to a desired maximum deflection angle θEd or a desired height hEd, at which the object could potentially be released. We define the energy equivalent ΘE for a general oscillation Θ:

Definition 1

The energy equivalent ΘE ∈ [0, π] is a continuous quantity which is equal to the maximum deflection angle the Θ-oscillation would reach at its turning points (Θ˙=0) in case EΘ=const.

For the rest of the article, we interchangeably use Eθ, Eψ and θE, ψE according to Definition 1 with Θ=θ,ψ to refer to the energies contained in the θ- and ψ-oscillations, respectively.

We differentiate between leader and follower agents. For a leader A1=L the control law uL is a function of the measurable output yL and the desired energy θEd. We formulate the control goal as follows

$$\left|\theta_E(t) - \theta_E^{ref}(t)\right| \le \epsilon_\theta \;\;\forall t \quad \text{with} \quad \dot\theta_E^{ref} = K_d\left(\theta_E^d - \theta_E^{ref}\right), \qquad \psi_E(t) \le \epsilon_\psi \;\;\forall t \ge T_s. \qquad (1)$$

Hence, the energy of the θ-oscillation should follow first-order reference dynamics θEref within bounds ϵθ. The reference dynamics have inverse time constant Kd and converge to the desired energy θEd. Furthermore, the energy contained in the ψ-oscillation should stay within ±ϵψ after the settling time Ts. We only consider desired energy levels of θEd < π/2 to avoid undesired phenomena such as, e.g., slack suspension ropes in the case of the pendulum-like object.

A follower A1=F does not know the desired energy level θEd. We define a desired relative energy contribution for the follower ΓFd ∈ (0, 1) based on the integrals over the energy flows of the leader θ˙E,L and the follower θ˙E,F

$$\Gamma_F = \frac{\int_0^{T_s} \dot\theta_{E,F}\, d\tau}{\int_0^{T_s} \left(\dot\theta_{E,F} + \dot\theta_{E,L}\right) d\tau}. \qquad (2)$$

Our goal is to split the energy effort among the leader and the follower such that the follower has contributed the fraction ΓFd within bounds ϵF at the settling time Ts. To this end, we formulate the follower control goal as

$$\left|\Gamma_F(T_s) - \Gamma_F^d\right| \le \epsilon_F, \qquad \psi_E(t) \le \epsilon_\psi \;\;\forall t \ge T_s. \qquad (3)$$

The energy of the undesired oscillation ψE should be kept within ±ϵψ.

Fundamental Dynamics

In this section, we introduce the abstract cart-pendulum and abstract torque-pendulum as approximations for the desired system oscillations of the t-pendulum and the afa-system (see Fig. 1d–g). This is followed by an introduction of the energy-based controller. Finally, we present the fundamental dynamics (FD) of the cart-pendulum and abstract torque-pendulum, which result from a state transformation, insertion of the energy-based controller and subsequent approximations.

The Abstract Cart-Pendulum

For the ideal case of ψE=0 and agents that move along the x-direction in synchrony r1=r2, the desired deflection angle θ is equal to the projected deflection angle θ (projection indicated by the dashed arrow in Fig. 3). This observation motivates us to approximate the desired system behavior of the pendulum-like object as a cart-pendulum with two-sided actuation (see Fig. 1f)

$$\dot{x}_c = \begin{bmatrix} \dot\vartheta \\ -\omega_0^2 \sin\vartheta \end{bmatrix} + \begin{bmatrix} 0 \\ -\frac{1}{g}\,\omega_0^2 \cos\vartheta \end{bmatrix} \frac{\ddot r_1 + \ddot r_2}{2}, \qquad (4)$$

with reduced state xc=[ϑ,ϑ˙] consisting of deflection angle ϑ and angular velocity ϑ˙ and the small angle approximation of the natural frequency ω0. We use the variable ϑ for the deflection angle of the abstract simple pendulum variants in contrast to the actual deflection angle θ of the complex objects. On the desired periodic orbit we have θ=θ=ϑ. The small angle approximation of the natural frequency ω0 = √(mϑcϑg/Iϑ) depends on gravity g and the abstract pendulum parameters: mass mϑ, distance cϑ between the pivot point and the center of mass, and the resultant moment of inertia Iϑ around the pendulum pivot point. The parameters mϑ and Iϑ represent one side of the t-pendulum, i.e. half of the mass and moment of inertia of the pendulum mass. By dividing the input accelerations by 2 in (4), we consider the complete mass and moment of inertia of the t-pendulum. We call this pendulum abstract cart-pendulum, where cart refers to the actuation through horizontal acceleration. The term abstract emphasizes the simplification we make by approximating the agents' influences as summed accelerations and neglecting ψE ≠ 0.

The Abstract Torque-Pendulum

The afa-system simplifies to the two-link pendubot [43] with oscillation DoFs ρ and ψ when projected into the xy-plane of agent A1 (see the gray dash-dotted link in Fig. 4). For ψEd=0, the pendubot further reduces to a single link pendulum actuated through the shoulder torques of agents A1 and A2 (see Fig. 1g)

$$\dot{x}_c = \begin{bmatrix} \dot\vartheta \\ -\omega_0^2 \sin\vartheta \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{I_\vartheta} \end{bmatrix} \frac{t_{s,1} + t_{s,2}}{2}. \qquad (5)$$

We call this pendulum abstract torque-pendulum. As for the abstract cart-pendulum, the parameter Iϑ represents the moment of inertia of one side of the afa-system. Similar to the t-pendulum, we define a projected deflection angle θ=ρ+ψ (see Fig. 4). On the desired periodic orbit we have θ=θ=ϑ.

Energy-Based Control for Simple Pendulums

Here, we recapitulate important simple pendulum fundamentals and introduce the energy-based controller to be applied to the abstract simple pendulums. For the following derivations, we assume zero handle velocity for the cart-pendulum r˙1=r˙2=0, which is the case for the torque-pendulum by construction. The energy contained in both abstract pendulums is then

$$E_\vartheta = I_\vartheta\, \dot\vartheta^2 + 2\, m_\vartheta g c_\vartheta\, (1 - \cos\vartheta). \qquad (6)$$

According to Definition 1, the energy equivalent ϑE is equal to the maximum deflection angle ϑ reached at the turning points for angular velocity ϑ˙=0

$$E_\vartheta = 2\, m_\vartheta g c_\vartheta\, (1 - \cos\vartheta_E). \qquad (7)$$

Setting (6) equal to (7), we can express ϑE in terms of the state xc=ϑ,ϑ˙

$$\vartheta_E = \arccos\!\left(\cos\vartheta - \frac{1}{2\omega_0^2}\,\dot\vartheta^2\right), \qquad (8)$$

with ϑE ∈ [0, π]. In contrast to the energy Eϑ, which also depends on mass and moment of inertia of the object, the amplitude ϑE only depends on the small angle approximation of the natural frequency ω0. Therefore, we will use ϑE as the preferred energy measure in the following.

Simple pendulums constitute nonlinear systems with an energy dependent natural frequency ω(ϑE). No analytic solution exists for ω, but it can be obtained numerically by ω = ω0 M(1, cos(ϑE/2)) with the arithmetic-geometric mean M(x, y) [6]. Already the first iteration of M(1, cos(ϑE/2)) yields good estimates for ω

$$\omega \approx \omega_a = \omega_0\, \frac{1 + \cos\frac{\vartheta_E}{2}}{2}, \qquad \omega \approx \omega_g = \omega_0 \sqrt{\cos\frac{\vartheta_E}{2}}, \qquad (9)$$

with relative error 0.748 % for the arithmetic mean approximation ωa and 0.746 % for the geometric mean approximation ωg at ϑE = π/2 with respect to the sixth iteration of M(1, cos(ϑE/2)). In the following, we make use of the geometric mean approximation ωg within derivations and as ground truth for comparison to the estimate ω^ in simulations and experiments.
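For readers who want to reproduce these error figures, the following short numerical check (my own sketch, not code from the paper; ω0 is an assumed value) compares both first-iteration approximations against the arithmetic-geometric mean computed to six iterations:

```python
# Numerical check of the approximations in Eq. (9); omega_0 is an assumed value.
import numpy as np

def agm(x, y, iterations=6):
    """Arithmetic-geometric mean M(x, y) after a fixed number of iterations."""
    for _ in range(iterations):
        x, y = 0.5 * (x + y), np.sqrt(x * y)
    return x

omega_0 = 4.0                        # assumed small-angle natural frequency [rad/s]
theta_E = np.pi / 2                  # energy equivalent at which the error is quoted
c = np.cos(theta_E / 2)

omega_ref = omega_0 * agm(1.0, c)            # "ground truth": six AGM iterations
omega_a = omega_0 * 0.5 * (1.0 + c)          # arithmetic mean approximation
omega_g = omega_0 * np.sqrt(c)               # geometric mean approximation

print(f"relative error, arithmetic mean: {abs(omega_a - omega_ref) / omega_ref:.3%}")
print(f"relative error, geometric mean:  {abs(omega_g - omega_ref) / omega_ref:.3%}")
```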

The pendulum nonlinearities are visualized in phase portraits on the left side of Fig. 5 for two constant energy levels ϑE=0.5π and ϑE=0.9π. The inscribed phase angle φ is

$$\varphi = \operatorname{atan2}\!\left(-\frac{\dot\vartheta}{\Omega},\, \vartheta\right), \qquad (10)$$

with normalization factor Ω. The right side of Fig. 5 displays the phase angle φ over time. The normalization factor Ω is used to partly compensate for the pendulum nonlinearities, with the result of an almost circular phase portrait and an approximately linearly rising phase angle

$$\varphi(t) \approx \omega t + \varphi(t=0). \qquad (11)$$

Figure 5 shows that normalization with the more accurate geometric mean approximation of the natural frequency Ω=ωg allows for a better compensation of the pendulum nonlinearities than a normalization with the small angle approximation Ω=ω0.

Fig. 5.

Fig. 5

Phase portrait (left) and phase angle φ over time (right) at constant energy levels ϑE=0.5π (blue) and ϑE=0.9π (red) of a lossless simple pendulum. Normalization with Ω=ωg marked via solid lines and Ω=ω0 via dashed lines. For energies up to ϑE=0.5π and a normalization with Ω=ωg, the phase space is approximately a circle with radius ϑr ≈ ϑE and the phase angle φ rises approximately linearly over time. Figure adapted from [14]

The main idea of the energy control for the abstract cart-pendulum is captured in the control law [48]

$$\ddot r_i = a_i\, \omega^2 \sin\varphi, \qquad (12)$$

where the amplitude factor ai regulates the sign and amount of energy flow contributed by agent Ai to the abstract cart-pendulum, with i=1,2. A well-timed energy injection is achieved through multiplication with sinφ, which according to (11) excites the pendulum at its natural frequency. For the abstract torque-pendulum we choose a similar control law with

$$t_{s,i} = -a_i \sin\varphi. \qquad (13)$$

Cartesian to Polar State Transformation

The abstract cart- and torque-pendulum dynamics in (4) and (5) are nonlinear with respect to the states xc=[ϑ,ϑ˙]. The index c indicates that the angle ϑ and angular velocity ϑ˙ represent the cartesian coordinates in the phase space (see left side of Fig. 5). We expect the system energy ϑE to ideally be independent of the phase angle φ, which motivates a state transformation to φ and ϑE for simple adaptive control design. Solving (10) for ϑ˙ and insertion into (8) yields

$$\cos\vartheta_E = \cos\vartheta - \frac{\Omega^2}{2\omega_0^2}\, \tan^2(\varphi)\, \vartheta^2. \qquad (14)$$

However, there is no analytic solution for ϑ(ϑE,φ) from (14). Therefore, we approximate the system energy ϑE through the phase space radius ϑr

$$\vartheta_r := \sqrt{\vartheta^2 + \left(\frac{\dot\vartheta}{\Omega}\right)^{2}}. \qquad (15)$$

From Fig. 5 we see that the phase space radius is equal to the energy equivalent, ϑr = ϑE, at the turning points (ϑ˙=0). For energies ϑE ≤ π/2 and a normalization with Ω ≈ ω, the phase space is almost circular and thus ϑr ≈ ϑE also for ϑ˙ ≠ 0.

The phase angle φ and the phase space radius ϑr span the polar state space xp=φ,ϑr, which we mark with the subscript p. The cartesian states xc written as a function of the polar states xp are

$$\vartheta = \vartheta_r \cos\varphi, \qquad \dot\vartheta = -\vartheta_r\, \Omega \sin\varphi. \qquad (16)$$
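As a minimal sketch (function names are mine, not the paper's), Eqs. (10), (15) and (16) translate directly into code:

```python
# Cartesian <-> polar phase-space transformation of Eqs. (10), (15) and (16).
import numpy as np

def to_polar(theta, theta_dot, Omega):
    """Return phase angle phi (Eq. 10) and phase-space radius theta_r (Eq. 15)."""
    phi = np.arctan2(-theta_dot / Omega, theta)
    theta_r = np.sqrt(theta**2 + (theta_dot / Omega)**2)
    return phi, theta_r

def to_cartesian(phi, theta_r, Omega):
    """Inverse transformation, Eq. (16)."""
    return theta_r * np.cos(phi), -theta_r * Omega * np.sin(phi)
```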

The Fundamental Dynamics

Theorem 1

The FD of the abstract cart- and torque-pendulums in (4) and (5) under application of the respective control laws (12) and (13) can be written in terms of the polar states xp=φ,ϑr as

$$\dot{x}_p = \begin{bmatrix} \dot\varphi \\ \dot\vartheta_r \end{bmatrix} = \begin{bmatrix} \omega \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ B \end{bmatrix} \frac{a_1 + a_2}{2}, \qquad (17)$$

with system parameter

$$B = \begin{cases} B_{\ddot r} = \dfrac{\omega^3}{2g} & \text{abstract cart-pendulum}, \\[2mm] B_t = \dfrac{1}{2\,\omega I_\vartheta} & \text{abstract torque-pendulum}, \end{cases} \qquad (18)$$

when neglecting higher harmonics, applying 3rd order Taylor approximations and making use of the geometric mean approximation of the natural frequency ωg in (9).

Proof

See “Appendix A”.

Thus, the phase φ is approximately time-linear, φ˙ ≈ ω, and the influence of the actuation a on the phase is small. The energy flow ϑ˙E ≈ ϑ˙r is approximately equal to the mean of the amplitude factors a1 and a2 times a system dependent factor B, and thus zero for no actuation (a1 = a2 = 0).
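The FD prediction can be sanity-checked numerically. The sketch below is my own (pendulum length, amplitude factors and the small-angle normalization Ω = ω0 are assumptions for illustration, not values from the paper): it integrates the abstract cart-pendulum (4) under the swing-up law (12) and compares the average growth of the phase-space radius with B(a1+a2)/2 from Eq. (18).

```python
# Sanity check of Theorem 1 for the abstract cart-pendulum (my own sketch).
import numpy as np

g = 9.81
omega_0 = np.sqrt(g / 0.6)               # assumed 0.6 m point-mass pendulum
a1 = a2 = 0.02                           # assumed constant amplitude factors [m]

theta, theta_dot = np.deg2rad(2.0), 0.0  # small initial deflection, at rest
dt, T = 1e-4, 8.0
for _ in range(int(T / dt)):
    phi = np.arctan2(-theta_dot / omega_0, theta)            # Eq. (10), Omega = omega_0
    u = 0.5 * (a1 + a2) * omega_0**2 * np.sin(phi)           # mean acceleration, Eq. (12)
    theta_ddot = -omega_0**2 * np.sin(theta) - omega_0**2 / g * np.cos(theta) * u  # Eq. (4)
    theta += dt * theta_dot
    theta_dot += dt * theta_ddot

theta_r_0 = np.deg2rad(2.0)
theta_r_T = np.sqrt(theta**2 + (theta_dot / omega_0)**2)     # Eq. (15)
B = omega_0**3 / (2.0 * g)                                   # Eq. (18), small-angle value
print(f"simulated average growth rate: {np.rad2deg((theta_r_T - theta_r_0) / T):.2f} deg/s")
print(f"FD prediction B*(a1+a2)/2:     {np.rad2deg(B * 0.5 * (a1 + a2)):.2f} deg/s")
```

The two rates agree approximately; the residual difference stems from the natural frequency dropping with growing amplitude, which the small-angle value of B does not capture.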

FD-Based Adaptive Leader–Follower Structures

In this section, we use the fundamental dynamics (FD) to design adaptive controllers that render leader and follower behavior according to (1) and (3). For the abstract cart-pendulum FD, the natural frequency ω is the only unknown system parameter. For the abstract torque-pendulum, also an estimate of the moment of inertia I^ϑ is required. Here, we first present the natural frequency estimation. In Sect. 6.3, we discuss how to obtain I^ϑ. The ω-estimate is not only needed for the computation of the system parameter B, but also for the phase angle φ, required in the control laws (12) and (13). In a second step, we design the amplitude factor a1 to render either leader or follower behavior.

Estimation of Natural Frequency

Based on the phase FD φ˙=ω, we design simple estimation dynamics for the natural frequency estimate ω^

$$\hat\omega = \frac{s}{1 + T_\omega s}\,\varphi, \qquad (19)$$

which differentiates φ while also applying a first-order low-pass filter with cut-off frequency 1/Tω.
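A possible discrete-time realization of Eq. (19) is sketched below (my own discretization; the sampling time dt is an assumption, and φ is assumed to be unwrapped so that it grows monotonically):

```python
# First-order filtered differentiation of the phase angle, Eq. (19).
class OmegaEstimator:
    def __init__(self, T_omega, omega_init, dt):
        self.T = T_omega                 # filter time constant
        self.dt = dt
        self.omega_hat = omega_init      # initialization, must be > 0 (see Proposition 1)
        self.phi_prev = None

    def update(self, phi):
        if self.phi_prev is None:        # first sample: keep the initial estimate
            self.phi_prev = phi
            return self.omega_hat
        dphi = (phi - self.phi_prev) / self.dt     # raw derivative of the unwrapped phase
        self.phi_prev = phi
        alpha = self.dt / (self.T + self.dt)       # first-order low-pass coefficient
        self.omega_hat += alpha * (dphi - self.omega_hat)
        return self.omega_hat
```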

Figure 6 shows how the ω-estimation is embedded into the controller. The feedback of the estimate ω^ for the computation of phase angle φ requires a stability analysis.

Fig. 6.

Fig. 6

Block diagram of the ω-estimation with normalization factor Ω=ω^ used for the computation of phase angle φ

Proposition 1

The natural frequency estimate ω^ converges to the true natural frequency ω when estimated according to Fig. 6 with

$$T_\omega > \max\!\left\{\frac{1}{2\hat\omega(t=0)},\, \frac{1}{2\omega}\right\} \quad \text{and} \quad \hat\omega(t=0) > 0, \qquad (20)$$

and if the system behaves according to the FD with constant natural frequency ω (ω changes only slowly w.r.t. the ω^-dynamics in (19)).

Proof

See “Appendix B”.

Condition (20) indicates that the adaptation of ω^ cannot be performed arbitrarily fast.

Amplitude Factor Based Leader/Follower Design

In the following, we design the amplitude factors for leader agents aL and follower agents aF.

Leader L

Proposition 2

For two leader agents A1=A2=L applying amplitude factors

$$a_i = k_i\left(\theta_E^d - \vartheta_r\right) \quad \text{with} \quad k_i = \frac{2\,\Gamma_i^d K_d}{B}, \qquad (21)$$

where i=1,2, Γ1d+Γ2d=1, and ϑr(t=0)=θEref(t=0), the energy ϑr of the FD in (17) converges to the desired energy θEd and tracks the desired reference dynamics in (1)

$$\dot\theta_E^{ref} = K_d\left(\theta_E^d - \theta_E^{ref}\right). \qquad (22)$$

Furthermore, each leader agent contributes with the desired relative energy contribution Γi=Γid defined in (2).

Proof

Differentiation with respect to time of the Lyapunov function

$$V = \tfrac{1}{2}\left(\theta_E^d - \vartheta_r\right)^2 \qquad (23)$$

and insertion of the FD (17) with (21) yields

$$\dot V = -\frac{B}{2}\,(k_1 + k_2)\left(\theta_E^d - \vartheta_r\right)^2. \qquad (24)$$

Thus, as long as ϑr ≠ θEd and for k1 + k2 > 0 and B > 0, the Lyapunov function has a strictly negative time derivative V˙ < 0 and, thus, the desired energy level ϑr = θEd is an asymptotically stable fixpoint.

Insertion of (21) into the FD in (17) yields

$$\dot\vartheta_r = K_d\left(\theta_E^d - \vartheta_r\right). \qquad (25)$$

Comparison of (25) and (22) shows that the reference dynamics are tracked, ϑr(t) = θEref(t), for equal initial values ϑr(t=0) = θEref(t=0). The energy contributed by one agent i according to the FD in (17) is ϑ˙r,i = (B/2) ai. Insertion of (21) yields ϑ˙r,i = Γid Kd (θEd − ϑr). With (25), the relative energy contribution of agent i according to (2) results in Γi = ∫0^Ts ϑ˙r,i dτ / ∫0^Ts ϑ˙r dτ = Γid.

Follower F

Proposition 3

A follower agent A1=F applying an amplitude factor

$$a_F = k_F\, \hat{\dot{\vartheta}}_r \quad \text{with} \quad k_F = \frac{2}{B}\,\Gamma_F^d, \qquad (26)$$

with ΓFd ∈ (0, 1) and a correct estimate of the total energy flow ϑ˙^r = ϑ˙r, contributes the desired fraction ΓF = ΓFd to the overall task effort.

Proof

Insertion of (26) into the energy flow of the follower, ϑ˙r,F = (B/2) aF, according to the FD in (17) yields ϑ˙r,F = ΓFd ϑ˙^r and ΓF = ΓFd (see proof of Proposition 2).

We obtain the total energy flow estimate through filtered differentiation ϑ˙^r = Ghp(TF) ϑr, where Ghp(TF) is a first-order high-pass filter with time constant TF. Thus, the filtered energy flow estimate is not equal to the true value, ϑ˙^r ≠ ϑ˙r. The influence of this filtering will be investigated in the next section.
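The two amplitude-factor laws and the filtered energy-flow estimate can be sketched as follows (my own discretization and class names; the first-order low-pass of the finite-difference derivative is equivalent to applying Ghp(TF) = s/(1 + TF s) to ϑr):

```python
# Leader amplitude factor, Eq. (21), and follower amplitude factor, Eq. (26),
# with a filtered differentiation of the phase-space radius as energy-flow estimate.
class Leader:
    def __init__(self, theta_E_des, Gamma_des, K_d, B_hat):
        self.theta_E_des = theta_E_des
        self.k = 2.0 * Gamma_des * K_d / B_hat          # k_i of Eq. (21)

    def amplitude_factor(self, theta_r):
        return self.k * (self.theta_E_des - theta_r)

class Follower:
    def __init__(self, Gamma_des, B_hat, T_F, dt):
        self.k = 2.0 * Gamma_des / B_hat                # k_F of Eq. (26)
        self.T_F, self.dt = T_F, dt
        self.flow_hat = 0.0                             # filtered energy-flow estimate
        self.theta_r_prev = None

    def amplitude_factor(self, theta_r):
        if self.theta_r_prev is None:                   # first sample: no derivative yet
            self.theta_r_prev = theta_r
            return 0.0
        raw_rate = (theta_r - self.theta_r_prev) / self.dt
        self.theta_r_prev = theta_r
        alpha = self.dt / (self.T_F + self.dt)          # first-order low-pass coefficient
        self.flow_hat += alpha * (raw_rate - self.flow_hat)
        return self.k * self.flow_hat
```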

Analysis of Leader–Follower Structures

Here, we analyze stability, stationary transfer behavior and the resultant follower contribution ΓF for filtered energy flow estimates ϑ˙^r and estimation errors on the follower (B − B^F ≠ 0) and leader (B − B^L ≠ 0) side. Figure 7 shows a block diagram of the fundamental energy dynamics-based control structure for a leader and a follower controller. See “Appendix C” for details on the derivations of the transfer functions.

Fig. 7.

Fig. 7

Block diagram showing the leader and follower controllers interacting with the linear fundamental energy dynamics. The leader tracks first-order reference dynamics with inverse time-constant Kd to control the energy ϑr to θEd with a desired relative energy contribution ΓLd. The follower achieves a desired relative energy contribution ΓFd by imitating an estimate of the system energy flow ϑ˙^r

The reference transfer function ϑr(s)=Gfi(s)θEd(s), which describes the closed-loop behavior resulting from the interconnection depicted in Fig. 7, results in

$$G_{fi} = \frac{\Gamma_L^d K_d \frac{B}{\hat B_L}\, s + \Gamma_L^d K_d \frac{B}{\hat B_L} \frac{1}{T_F}}{s^2 + \left(\frac{1}{T_F} - \Gamma_F^d \frac{B}{\hat B_F} \frac{1}{T_F} + \Gamma_L^d K_d \frac{B}{\hat B_L}\right) s + \Gamma_L^d K_d \frac{B}{\hat B_L} \frac{1}{T_F}}. \qquad (27)$$

Thus, ϑr(t) → θEd and we have a stationary transfer behavior equal to one for a step of height θEd in the reference variable, θEd(t) = σ(t) θEd. This result holds irrespective of estimation errors B^F/L ≠ B. Asymptotic stability of the closed-loop system is ensured for (1/TF − ΓFd (B/B^F)(1/TF) + ΓLd Kd (B/B^L)) > 0. The stability constraint implies that B^F > B is advantageous. This can be achieved by using a high initial value in the follower's ω^-estimation for the abstract cart-pendulum and a low initialization for the abstract torque-pendulum (see (18)). Factors such as estimation errors, a high desired follower contribution ΓFd and a small time constant TF can potentially destabilize the closed-loop system.

The follower transfer function GFfi from desired energy level θEd to follower energy θrF is

$$G_{F}^{fi} = \frac{\Gamma_L^d K_d \frac{B}{\hat B_L}\, \Gamma_F^d \frac{B}{\hat B_F}\, \frac{1}{T_F}}{s^2 + \left(\frac{1}{T_F} - \Gamma_F^d \frac{B}{\hat B_F} \frac{1}{T_F} + \Gamma_L^d K_d \frac{B}{\hat B_L}\right) s + \Gamma_L^d K_d \frac{B}{\hat B_L} \frac{1}{T_F}}. \qquad (28)$$

Application of the final value theorem to (28) yields ϑr,F(t) → ΓFd (B/B^F) θEd. Consequently, ΓF = ΓFd B/B^F and the follower achieves its desired relative energy contribution for a correct estimate B^F = B.
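A quick numerical check of these closed-loop properties (my own sketch; all parameter values are assumed for illustration) evaluates the denominator coefficients of Eqs. (27) and (28) and the stationary gains via the final value theorem:

```python
# Stability condition and stationary gains of the closed-loop FD, Eqs. (27)-(28).
B, B_hat_L, B_hat_F = 1.0, 1.2, 1.5     # assumed values; B_hat_F > B aids stability
Gamma_L, Gamma_F = 0.5, 0.5
K_d, T_F = 0.4, 1.0

# denominator s^2 + den1*s + den0, identical in Eqs. (27) and (28)
den1 = 1.0 / T_F - Gamma_F * B / B_hat_F / T_F + Gamma_L * K_d * B / B_hat_L
den0 = Gamma_L * K_d * B / B_hat_L / T_F
print("asymptotically stable:", den1 > 0.0 and den0 > 0.0)

# stationary gains (final value theorem, evaluate numerators at s = 0)
print("G_fi(0)  =", (Gamma_L * K_d * B / B_hat_L / T_F) / den0)                        # -> 1
print("G_Ffi(0) =", (Gamma_L * K_d * B/B_hat_L * Gamma_F * B/B_hat_F / T_F) / den0)    # -> Gamma_F*B/B_hat_F
```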

Application to Two-Agent Object Manipulation

Here, we extend the fundamental dynamics (FD)-based adaptive controllers presented in the previous section to control the t-pendulum and the afa-system. Figures 8 and 9 show block diagrams of the controller implementation for the t-pendulum controlled by a leader agent and the afa-system controlled by a follower agent, respectively. Follower and leader controllers are invariant with respect to the object types. In Sect. 6.1, we discuss modifications of the fundamental dynamics-based controllers to cope with modeling errors. The projection and energy-based controller block differs between the t-pendulum and the afa-system and will be explained in detail in Sects. 6.2 and 6.3, respectively.

Fig. 8.

Fig. 8

Block diagram of the FD-based leader applied to the t-pendulum

Fig. 9.

Fig. 9

Block diagram of the FD-based follower applied to the afa-system

FD-Based Controllers

The FD derivation is based on approximating the system energy ϑE by the phase space radius ϑr in Sect. 4.4. As visible in the phase space on the left side of Fig. 5, the phase space radius ϑr represents the system energy ϑE less accurately at higher energy levels. The effect is increased oscillations of ϑr for constant ϑE. As a consequence, unsettled follower behavior is expected even when the leading partner is trying to keep the system energy at a constant level. Furthermore, the discrepancy between ϑr and ϑE degrades the leader’s reference dynamics tracking ability.

From ϑ and ϑ˙ we can estimate ϑE based on (8). To this end, we use the geometric mean relationship in (9) with the current frequency estimate ωg = ω^ and solve it for the unknown small angle approximation, ω^0² = ω^² (cos(ϑ^E/2))⁻¹. Insertion of ω^0 into (8) results in a quadratic equation which we solve for ϑ^E

$$\hat\vartheta_E = 2\arccos\!\left(-\frac{\dot\vartheta^2}{8\hat\omega^2} + \frac{1}{4}\sqrt{\frac{\dot\vartheta^4}{4\hat\omega^4} + 8\,(\cos\vartheta + 1)}\right). \qquad (29)$$

The estimate ϑ^E can now be used instead of ϑr within the leader and follower controllers.
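A direct implementation of Eq. (29) is straightforward; the following helper (the function name is mine, and the clipping guard against numerical round-off is my addition) sketches it:

```python
# Improved energy-equivalent estimate of Eq. (29) from the Cartesian states and omega_hat.
import numpy as np

def energy_equivalent_estimate(theta, theta_dot, omega_hat):
    """Solve the quadratic in cos(theta_E/2) obtained by inserting Eq. (9) into Eq. (8)."""
    u = (-theta_dot**2 / (8.0 * omega_hat**2)
         + 0.25 * np.sqrt(theta_dot**4 / (4.0 * omega_hat**4) + 8.0 * (np.cos(theta) + 1.0)))
    return 2.0 * np.arccos(np.clip(u, -1.0, 1.0))   # clip guards against round-off
```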

Interestingly, the error caused by the phase space radius approximation has a greater influence on the abstract torque-pendulum than on the abstract cart-pendulum. Because ts,1 in (13) and ϑ˙ reach their maxima for φ = ±π/2, the torque-based actuation contributes maximum energy when the error between ϑr and ϑE has its maximum (see Fig. 5). In contrast, the acceleration-based actuation in (12) contributes most energy when the product of the velocity r˙1 and the applied force in x-direction reaches a maximum, where r˙1 has its maximum at φ = 0, π. We will show the implications of the above discussion and the usage of ϑ^E based on simulations of the abstract simple pendulums in Sect. 7.

The real pendulum-like and flexible objects do not exhibit perfect simple pendulum-like behavior. As we show with our experimental results in Sect. 8, such unmodeled dynamics have only little effect on the leader controller performance. In order to achieve calm follower behavior during constant energy phases, we use a second-order low-pass filter along with the differentiation of ϑr for the experiments instead of the first-order low-pass filter (compare Figs. 7, 9). Besides the extension by the ω-estimation, the second-order filter for the follower is the only modification we apply to the FD-based controllers in Fig. 7 for the experiments. Because we are limited to relatively small energies for the afa-system, where ϑr ≈ ϑE, use of the more accurate estimate ϑ^E is not needed.

At small energy levels, noise and offsets in the force and torque signals can lead to a phase angle φ that does not monotonically increase over time. We circumvented problems with respect to the ω-estimation by reinitializing ω^ whenever ϑr decreased below a small threshold. No modifications were needed for the amplitude factor computation.

The computation of the FD parameter B in (18) requires a moment of inertia estimate I^ϑ. For the experiments, we computed I^ϑ based on known parameters of the simple pendulum-like arm, Iϑa = Ia + ma(la/2)², and based on a point mass approximation of the flexible object, I^ϑo = (mo/2)(la + l^o)². The part of the object mass carried by the robot, mo/2, is measured with the force sensor. We furthermore assume that an estimate of the projected object length l^o is available. Alternatively, the object moment of inertia could be estimated from force measurements during manipulation (e.g., [3, 27]).

Projection and Energy-Based Controller for the t-Pendulum

Projection onto the Abstract Cart-Pendulum

The goal of what we call the projection onto the abstract cart-pendulum, is to extract the desired oscillation θ from the available force measurements f1. The projection is performed in two steps. First, the projected deflection angle θ is computed from f1

$$\theta = \arctan\!\left(\frac{-f_{o,1}^{x}}{f_{o,1}^{y}}\right), \qquad (30)$$

with fo,1 = [fo,1x, fo,1y, fo,1z] being the force exerted by agent A1 onto the pendulum-like object. We obtain fo,1 from the measured applied force f1 through dynamic compensation of the force accelerating the handle mass mh,1: fo,1 = f1 − mh,1 [r¨1, −g, 0].

The projected deflection angle θ does not only contain the desired θ-oscillation, but is superimposed by undesired oscillations, such as the ψ-oscillation in Fig. 3. In a second step, we apply a nonlinear observer to extract the states of the virtual abstract cart-pendulum

$$\dot{x}_c = \begin{bmatrix} \dot\vartheta \\ -\hat\omega_0^2 \sin\vartheta \end{bmatrix} + l\,(\theta - y), \qquad y = \begin{bmatrix} 1 & 0 \end{bmatrix} x_c, \qquad (31)$$

where l(θ − y) couples the observer to the t-pendulum through the observer gain vector l = [l1, 0]. The observer does not only filter out the undesired oscillation ψ, but also noise in the force measurement. An observer gain l1 in the range of ω proved to yield a good compromise between fast transient behavior (large l1) and noise filtering (small l1). The smooth cartesian cart-pendulum states can then be transformed into polar states according to (10) and (15). The observer represents the abstract cart-pendulum dynamics (4) without inputs. Simulations and experiments showed that it suffices to use ω^ as the estimate for the small angle approximation ω^0 needed in (31). We summarize these two steps as projection onto the abstract cart-pendulum.
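A minimal discrete-time sketch of this projection (my own Euler discretization; class and variable names are mine, and using atan2 instead of the arctan of Eq. (30) is a robustness choice of mine) could look as follows:

```python
# Projection of Eq. (30) plus the nonlinear observer of Eq. (31) for the t-pendulum.
import numpy as np

class CartPendulumObserver:
    def __init__(self, l1, omega0_hat, dt):
        self.l = np.array([l1, 0.0])        # observer gain vector l = [l1, 0]
        self.omega0_hat = omega0_hat        # small angle natural frequency estimate
        self.dt = dt
        self.x = np.zeros(2)                # observer state [theta_hat, theta_dot_hat]

    def update(self, f_o1):
        """f_o1: force applied onto the object, [fx, fy, fz], after handle compensation."""
        theta_meas = np.arctan2(-f_o1[0], f_o1[1])            # Eq. (30)
        y = self.x[0]
        x_dot = np.array([self.x[1],
                          -self.omega0_hat**2 * np.sin(self.x[0])]) \
                + self.l * (theta_meas - y)                    # Eq. (31)
        self.x = self.x + self.dt * x_dot                      # explicit Euler step
        return self.x
```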

Complete Control Law for the t-Pendulum

As suggested in [48], we do not directly command the acceleration in (12). Instead, we filter out remaining high frequency oscillations on the phase angle φ through application of a second-order filter

$$G(s) = \frac{\ddot r_1}{r_1^d} = \frac{s^2\, \hat\omega_{c_0}^2}{s^2 + 2\zeta\,\hat\omega_{c_0}\, s + \hat\omega_{c_0}^2}, \qquad (32)$$

with design parameters c0 and ζ, to the reference trajectory

$$r_1^d = -\frac{a_1}{|G(j\hat\omega)|}\,\sin\!\left(\varphi - \angle G(j\hat\omega)\right). \qquad (33)$$

The acceleration results in

$$\ddot r_1 \approx a_1 \omega^2\, \frac{|G(j\omega)|}{|G(j\hat\omega)|}\,\sin\!\left(\varphi - \angle G(j\hat\omega) + \angle G(j\omega)\right) \approx a_1 \omega^2 \sin\varphi. \qquad (34)$$

Hence, we make use of the sinusoidal shape of r¨1 by including knowledge of the expected phase shift ∠G(jω^) and amplitude scaling |G(jω^)| at ω^. Use of the position r1 as a reference for the robot low-level controller circumvents drift. Furthermore, by imposing limits on a1, the workspace of the robot can be limited [13, 48].
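As an illustration only (the exact definition of the cutoff ω^c0 is not spelled out in this extracted version; the scaling by c0 below is an assumption of mine), the pre-compensated reference of Eqs. (32) and (33) could be generated as follows:

```python
# Illustrative sketch: pre-compensated position reference of Eqs. (32)-(33).
import numpy as np

def reference_position(a1, phi, omega_hat, c0=0.9, zeta=1.2):
    w_c = c0 * omega_hat                     # assumed: cutoff scaled by c0 (not stated here)
    s = 1j * omega_hat                       # evaluate G at the estimated frequency
    G = (s**2 * w_c**2) / (s**2 + 2.0 * zeta * w_c * s + w_c**2)    # Eq. (32)
    return -a1 / np.abs(G) * np.sin(phi - np.angle(G))               # Eq. (33)
```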

Projection and Energy-Based Controller for the afa-System

Simple Pendulum-like Arm

Based on the results of [15], we model the robot end effector to behave as a cylindrical simple pendulum with human-like parameters of shoulder damping dρ, mass ma, length la and density ϱa for the experiments with a robotic manipulator in Sect. 8. The robot arm dynamics are

$$I_\rho\, \ddot\rho = -d_\rho\, \dot\rho + t_g - t_{f_1} + d_\psi\, \dot\psi + k_\psi\, \psi + t_{s,1}, \qquad (35)$$

where Iρ is the arm moment of inertia with respect to the shoulder and tg and tf1 are torques around the z-axis of coordinate system {w} caused by gravity and the applied interaction forces at the wrist f1, respectively. The wrist joint dynamics are

$$I_\psi\left(\ddot\psi + \ddot\rho\right) = -d_\psi\, \dot\psi - k_\psi\, \psi - t_1^z, \qquad (36)$$

with moment of inertia Iψ, damping dψ and stiffness kψ. The z-component t1z of applied torque t1 is measured at the interaction point with the flexible object.

Projection onto the Abstract Torque-Pendulum

We base the projection of the afa-system onto the abstract torque-pendulum on a simple summation θ=ρ+ψ and the observer with simple pendulum dynamics in (31).

Complete Control Law for the afa-System

No additional filtering is applied to the computed shoulder torque. However, the wrist damping dissipates energy injected at the shoulder. The energy flow loss due to wrist damping is E˙dψ = −dψ ψ˙². We approximate the injected energy flow at the shoulder as

$$\dot E_{t_s,d_\psi} = t_{s,d_\psi}\, \dot\rho \approx a_{d_\psi}\, \vartheta_r\, \hat\omega \sin^2\varphi \approx \tfrac{1}{2}\, a_{d_\psi}\, \vartheta_r\, \hat\omega, \qquad (37)$$

where we inserted ts,dψ = −adψ sin φ according to (13), used ρ˙ ≈ ϑ˙ ≈ −ϑr ω^ sin φ from (16) and approximated sin²φ by its mean. Setting E˙dψ + E˙ts,dψ = 0 yields the amplitude factor adψ = 2dψψ˙²/(ϑr ω^) for wrist damping compensation.

For the experiments, we add human-like shoulder damping dρ to the passive arm behavior. During active follower or leader control the shoulder damping is compensated for by an additional shoulder torque of ts,dρ=dρρ˙. The complete control law results in

$$t_{s,1} = -a_1 \sin\varphi + t_{s,d_\psi} + t_{s,d_\rho}. \qquad (38)$$
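Put together, the complete afa-system shoulder torque could be computed as in the following sketch (function and variable names are mine; the small eps guard against division by zero at rest is my addition):

```python
# Complete shoulder torque of Eq. (38) with wrist and shoulder damping compensation.
import numpy as np

def shoulder_torque(a1, phi, psi_dot, rho_dot, theta_r, omega_hat, d_psi, d_rho, eps=1e-3):
    t_energy = -a1 * np.sin(phi)                                   # Eq. (13)
    a_dpsi = 2.0 * d_psi * psi_dot**2 / max(theta_r * omega_hat, eps)
    t_wrist = -a_dpsi * np.sin(phi)                                # wrist damping compensation
    t_shoulder = d_rho * rho_dot                                   # shoulder damping compensation
    return t_energy + t_wrist + t_shoulder
```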

Evaluation in Simulation

The linear fundamental dynamics (FD) derived in Sect. 4 enabled the design of adaptive leader and follower controllers in Sect. 5. However, the FD approximates the behavior of the abstract cart- and torque-pendulums, which represent the desired oscillations of the t-pendulum and the afa-system. In this section, we analyze the FD-based controllers in interaction with the abstract cart- and torque-pendulums with respect to stability of the ω-estimation (Sect. 7.3), reference trajectory tracking (Sect. 7.4) and follower contribution (Sect. 7.5). For simplicity, we assume full state feedback xc and use the variables θE and θEd also for the abstract cart- and torque-pendulums.

Simulation Setup

The simulations were performed using MATLAB/Simulink. We modeled the cart-pendulum as a point mass mo = 10 kg attached to a massless pole of length lo = 0.6 m. The torque-pendulum consisted of two rigidly attached cylinders with uniform mass distribution. The upper cylinder was of mass, density and length comparable to a human arm: ma = 3.35 kg [7], ϱa = 1100 kg/m³ [11], la = 0.56 m [15]. The lower cylinder had the same radius, but mass mo = 10 kg and length lo = 0.4 m.

The following control gains stayed constant for all simulations: Kd = 0.4 1/s, TF = 1 s, c0 = 0.9, ζ = 1.2. We started all abstract cart- and torque-pendulum simulations with a small angle ϑ(t=0) = 2 deg and zero velocity ϑ˙(t=0) = 0 rad/s in order to avoid initialization problems, e.g., of the phase angle φ.

Measures

Analysis of Controller Performance

We analyzed the controller performance based on settling time Ts, steady state error e and overshoot o. The settling time Ts was computed as the time after which the energy θE stayed within bounds ±ϵθ=±8% around the energetic steady state value θ¯E. We defined the steady state error as e=θEd-θ¯E and the overshoot as o=maxt(θE-θ¯E).

Analysis of Effort Sharing

The energy flows to the abstract cart-pendulum were calculated based on velocities and the applied force along the motion, E˙1 = (1/2) r˙1 fx, where fx = f1x = f2x. The energy flows to the abstract torque-pendulum were calculated based on angular velocity and applied torque, E˙1 = (1/2) ϑ˙ ts,1, where ϑ˙ = ϑ˙1 = ϑ˙2. The multiplication with 1/2 reflects that the agents equally share the control over the abstract pendulums in (4) and (5).

We based the analysis of the effort sharing between the agents on the relative energy contribution of the follower ΓF. The definition in (2) is based on the time derivative of the oscillation amplitude θ˙E,F and θ˙E,L, which requires use of the simple pendulum approximations. In order not to rely on approximations, we define the relative follower contribution

$$\Gamma_{in,F} = \frac{\int_0^{T_s} \dot E_F\, d\tau}{\int_0^{T_s} \left(\dot E_F + \dot E_L\right) d\tau}. \qquad (39)$$

The above computation has the drawback that for mechanisms with high damping Γin,F<ΓFd, because the follower reacts to changes in object energy and, thus, the leader accounts for damping compensation. Therefore, we define a second relative follower contribution based on the object energy E for comparison

$$\Gamma_{obj,F} = \frac{\int_0^{T_s} \dot E_F\, d\tau}{E(T_s)}. \qquad (40)$$

For the abstract simple pendulums we use E = Eθ. Note that Γobj,F + Γobj,L ≥ 1 for a damped mechanism.
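For the offline evaluation, Eqs. (39) and (40) amount to integrating logged energy flows up to the settling time; a small helper of my own (array names are assumptions) is sketched below:

```python
# Relative follower contributions of Eqs. (39) and (40) from logged time series.
import numpy as np

def effort_sharing(t, E_dot_F, E_dot_L, E_obj, T_s):
    """t, E_dot_F, E_dot_L, E_obj: equally indexed 1D arrays; T_s: settling time."""
    mask = t <= T_s
    W_F = np.trapz(E_dot_F[mask], t[mask])       # energy injected by the follower
    W_L = np.trapz(E_dot_L[mask], t[mask])       # energy injected by the leader
    Gamma_in_F = W_F / (W_F + W_L)               # Eq. (39)
    Gamma_obj_F = W_F / E_obj[mask][-1]          # Eq. (40), object energy at T_s
    return Gamma_in_F, Gamma_obj_F
```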

Stability Limits of the ω-Estimation

The FD analysis in Sect. 5.1 revealed the theoretical stability bound (20). Here, we test its applicability to the cart- and torque-pendulums with energy dependent natural frequency ω. Both lossless pendulums were controlled by one leader with a constant amplitude factor aL = 0.04 m for the cart-pendulum and aL = 5.5 Nm for the torque-pendulum. The amplitude factors were chosen such that for both pendulums an energy level of approximately θE ≈ 60 deg was reached after 8 s. Figure 10 shows the geometric mean approximation of the natural frequency ωg(θE) and the estimate ω^ for two different time constants Tω and ω^(t=0) = 2 rad/s > 0. The results support the conservative constraint found from the Lyapunov stability analysis in Sect. 5.1.

Fig. 10.

Fig. 10

Natural frequency estimation for the (1a–b) cart-pendulum and (2a–b) torque-pendulum: (a) the estimate ω^ smoothly approaches the geometric mean approximation of the natural frequency ωg(θE) for an estimation time constant Tω = 2 s, (1b) first signs of instability occur for Tω = 0.17 s for the cart-pendulum and (2b) for Tω = 0.19 s for the torque-pendulum. Note the different time and natural frequency scales. This result is in accordance with the theoretically found conservative stability bound Tω > max{1/(2ω^(t=0)), 1/(2ω)}, which evaluates to Tω > 0.25 s for ω^(t=0) = 2 rad/s

Reference Dynamics Tracking

Here, we evaluate how well reference dynamics tracking is achieved for a single leader interacting with the cart- and torque-pendulums, thus ΓL = 1. In order to focus on the reference dynamics tracking, we used the geometric mean ωg(θE) with the exact ω0 in (9) as an accurate natural frequency estimate for the leader controller. We set Kd = 0.4 1/s and θEd = 120 deg.3 The results for the lossless pendulums are displayed in Fig. 11. The simulation results support the considerations made in Sect. 6.1.

Fig. 11.

Fig. 11

Reference dynamics tracking for the (1) cart-pendulum and (2a) torque-pendulum based on the energy equivalent ϑr with er¨ = 2.7 deg and et = 8.3 deg, respectively. Usage of the estimate ϑ^E instead of ϑr reduces the steady state error for the torque-pendulum to et = 0.5 deg (2b). Vertical dashed lines mark settling times Ts

Follower Contribution

For the follower contribution analysis, we ran simulations with a leader and a follower interacting with the abstract cart- and torque-pendulums for different desired relative follower contributions ΓFd = 0.3, 0.5, 0.7. The pendulums were slightly damped with ts,dρ = −ds ϑ˙ and ds/Iϑ = 0.01 1/s. The leader's desired energy level was θEd = 60 deg. In accordance with the stability analysis in Sect. 5.3, we initialized the ω-estimation with ω^(t=0) = 6 rad/s > ω for the abstract cart-pendulum and ω^(t=0) = 2 rad/s < ω for the abstract torque-pendulum. The follower and leader controllers for the torque-pendulum made use of the approximation ϑ^E in (29) instead of ϑr in (21) and (26).

The first three lines of Table 2 list the results for ΓFd + ΓLd = 1, including the relative follower contributions according to (39) and (40) and the overshoot o. Figure 12 shows angles and energies over time for the most challenging case of ΓFd = 0.7. The damping resulted in increased steady state errors of er¨ = 4.7 deg for the abstract cart-pendulum and eτ = 2.5 deg for the abstract torque-pendulum. The ω-estimation and the filtering for the energy flow estimate ϑ˙^r on the follower side caused a delay with respect to the reference dynamics θEref. With respect to effort sharing, higher ΓFd resulted in increased overshoot o (see Table 2). Successful effort sharing was achieved, with ΓF ≈ ΓFd.

Table 2.

Effort sharing results

ΓFd/ΓLd | Abstr. cart-pend.: o [deg], Γin,F, Γobj,F | Abstr. torque-pend.: o [deg], Γin,F, Γobj,F
0.3/0.7 | 0.9, 0.27, 0.27 | 0.1, 0.33, 0.33
0.5/0.5 | 3.2, 0.45, 0.47 | 1.1, 0.52, 0.54
0.7/0.3 | 8.7, 0.75, 0.84 | 4.9, 0.78, 0.82
0.3/0.3 | 0.1, 0.30, 0.32 | 0.1, 0.31, 0.33
0.7/0.7 | 9.6, 0.81, 0.87 | 6.5, 0.86, 0.90

Fig. 12.

Fig. 12

Simulated follower and leader interacting with the (1) abstract cart-pendulum and (2) abstract torque-pendulum for a desired relative follower contribution ΓFd=0.7: (a) angles and (b) energies. Vertical dashed lines mark settling times Ts. The FD-based controllers allow for successful effort sharing

The last two lines of Table 2 list the results for ΓFd + ΓLd ≠ 1. The results conform to the FD analysis in Sect. 5.3: Γin,F ≈ ΓFd ≈ Γobj,F, with Γin,L = 1 − Γin,F. The transient behavior is predominantly influenced by ΓLd. Low (high) values ΓFd + ΓLd < (>) 1 yield slower (faster) convergence to the desired energy level with small (increased) overshoot o. An increased o comes along with increased transient behavior that settles only after Ts. As a consequence, Γin,F and Γobj,F exceed ΓFd.

Experimental Evaluation

The simulations in Sect. 7 analyzed the presented control approach for the abstract cart- and torque-pendulum. In this section, we report on the results of real-world experiments with a t-pendulum and a flexible object, which test the controllers under realistic conditions: noisy force measurements, non-ideal object and robot behavior and a human interaction partner. Online Resources 1 and 2 contain videos of the experiments.

Experimental Setup

Hardware Setup

Figure 13 shows the experimental setups with pendulum-like and flexible objects. Due to the small load capacity of the robotic manipulator4, we used objects of relatively small mass: mo = 1.25 kg for the t-pendulum and mo = 1.61 kg for the flexible object. The flexible object was composed of an aluminum plate connected to two aluminum bars through rubber bands. Such a flexible object can be seen as especially challenging, as it only loosely couples the agents and its high elasticity can cause unwanted oscillations.

Fig. 13.

Fig. 13

Experimental setups for (a) pendulum-like and (b) flexible object swinging: One side of the objects was attached to the end effector of a KUKA LWR 4+ robotic manipulator under impedance control on joint level (joint stiffness 1500 Nm/rad and damping 0.7 Nm s/rad). The other side was attached to a handle that was either fixed to a table or held by the human interaction partner

Software Implementation

The motion capture data was recorded at 200 Hz and streamed to a MATLAB/Simulink Real-Time Target model. The Real-Time Target model ran at 1 kHz, received the force/torque data and contained the presented energy-based controller and the joint angle position controller of the robotic manipulator. For the analysis, we filtered the motion capture data and the force/torque data with a third-order Butterworth low-pass filter with cutoff frequency 4 Hz.

The following control parameters were the same for all experiments: Kd = 0.4 1/s, TF = 1 s, DF = 1, c0 = 0.9, ζ = 1.2 and l1 = 3.6 1/s. The ω-estimation used a time constant Tω = 2 s and was initialized to ω^(t=0) = 6 rad/s for the t-pendulum. For the flexible object swinging, we controlled the robot to behave as a simple pendulum (see Sect. 6.3) with the human arm parameters given in Sect. 7.1. The wrist parameters were Iψ = 0.01 kg m², dψ = 4 N m s/rad, kψ = 3 N m/rad. The projected object length estimate needed for the approximation of the abstract torque-pendulum moment of inertia I^ϑ was set to l^o = 0.64 m. The ω-estimation used a time constant Tω = 4 s and was initialized to ω^(t=0) = 2 rad/s.

Measures

We used the same measures to analyze the experiments as for the simulations in Sect. 7.2. Extensions and differences are highlighted in the following.

Analysis of the Projections onto the Abstract Cart- and Torque-Pendulums

Ideally, during steady state, the disturbance oscillation is close to zero (ψ ≈ 0), the abstract pendulum angle should be close to the actual object deflection (ϑ ≈ θ) and the energies should match (ϑr ≈ ϑ^E ≈ θE). From motion capture data we obtained θ and, for the t-pendulum, ψ. The undesired oscillation of the afa-system is the known wrist angle ψ. From θ, its numerical time derivative θ˙ and ω^0, the energy equivalent θE was computed.

Analysis of Effort Sharing

The energy flows of the agents were calculated based on E˙i = fi·r˙i + ti·Ωi with i = 1, 2, interaction point rotational velocities Ωi and ti ≈ 0 for the t-pendulum. The energy contained in the object was calculated based on the object height yo and the object twist ξ˙o = [r˙o, Ωo]

$$E = m_o\, g\, y_o + \tfrac{1}{2}\, \dot\xi_o^{\top} M_o\, \dot\xi_o. \qquad (41)$$

The mass matrix Mo ∈ R6×6 is composed of a 3×3 diagonal matrix with the object mass mo as diagonal entries and the 3×3 moment of inertia tensor Io. The t-pendulum object moment of inertia Io was approximated as that of a cylinder with uniform mass distribution of diameter do = 0.05 m. For the afa-system, we neglected the energy contained in the rubber bands and the aluminum bars attached to the force/torque sensors and computed the energy contained in the aluminum plate of mass mpl = 1.15 kg and thickness hpl = 0.012 m under the simplifying assumption of uniform mass distribution (see Fig. 13 for further dimensions). The above variables are expressed in a fixed world coordinate system translated such that yo = 0 m for θ = ψ = 0. The energy contained in undesired system oscillations ψ can be approximated as Eψ ≈ E − Eθ.
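The object energy of Eq. (41) can be evaluated directly from motion capture data; a short sketch (my own helper, with assumed argument shapes) follows:

```python
# Object energy of Eq. (41): potential term plus kinetic term of the 6D twist.
import numpy as np

def object_energy(m_o, y_o, r_dot_o, Omega_o, I_o, g=9.81):
    """r_dot_o, Omega_o: 3-vectors; I_o: 3x3 moment of inertia tensor."""
    M_o = np.block([[m_o * np.eye(3), np.zeros((3, 3))],
                    [np.zeros((3, 3)), I_o]])
    twist = np.concatenate([r_dot_o, Omega_o])
    return m_o * g * y_o + 0.5 * twist @ M_o @ twist
```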

Experimental Controller Evaluation for the t-Pendulum

We present results for three t-pendulum experiments: maximum achievable energy (Sect. 8.3.1), active follower contribution (Sect. 8.3.2) and excitation of undesired ψ-oscillation (Sect. 8.3.3).

Maximum Achievable Energy (Robot Leader and Passive Human)

The limitations of the controller with respect to the achievable energy levels were tested with a robot leader A1=R=L. A human passively held the handle of agent A2=H=P in order to avoid extreme ψ-oscillation excitation at high energy levels due to a rigidly fixed end. The t-pendulum started from rest (θE(t=0) ≈ ψE(t=0) ≈ 0). The desired energy level θEd was incrementally increased from 15° to 90°. The desired relative energy contribution of the robot was ΓRd = 1.

The robot successfully controlled the t-pendulum energy to closely follow the desired reference dynamics (see Fig. 14).

Fig. 14

Maximum achievable energies θEd for the t-pendulum: (a) deflection angles and energy equivalents, (b) energies contained in the t-pendulum and (c) contributed by the human and the robot, (d) natural frequency estimates. Vertical dashed lines mark settling times Ts. A robot leader can reach deflection angles θ > 80° in interaction with a passive human

The steady state error increased with higher desired energy due to increased damping, e.g., e = 0.4° at θEd = 15° and e = 8.2° at θEd = 90°. The energy contained in the undesired oscillation increased from ψE = 1.4° at θEd = 15° to ψE = 15.6° at θEd = 90° and was thus kept within a comparably small range. With increased ψ-oscillation, the t-pendulum behaves less like a simple pendulum, which also becomes apparent in an increased difference between ϑr and θE. The successful reference dynamics tracking, the close estimate ϑr ≈ θE for small and intermediate energy levels, and the close ω-estimation support the applicability of the fundamental dynamics (FD)-based leader controller.

Active Follower Contribution (Robot Follower and Human Leader)

A robot follower A1=R=F with ΓRd = 0.5 interacted with a human leader A2=H=L. The t-pendulum started from rest (θE(t=0) ≈ ψE(t=0) ≈ 0). The human leader was asked to first inject energy until θEd = 60° was reached, then to hold the energy constant, and finally to release the energy from the pendulum again. The desired energy limit was displayed to the human via stripes of tape on the floor, with which the pendulum mass had to be aligned at the maximum deflection angles.

The human–robot team successfully injected energy until θEd = 60° was reached with e = 3° (see Fig. 15). Similar to the simulations, the reference dynamics were tracked with a delay. The undesired oscillation increased, but did not exceed ψE = 10.4°. The object energy flow θ˙E oscillated strongly, which is in accordance with the results from human–human rigid object swinging [15]. The robot successfully detected and imitated the object energy flow. During the 20 s constant energy phase, the human compensated for the energy loss due to damping. The relative energy contributions ΓRin = 0.35 and ΓRobj = 0.57 were close to the desired ΓRd = 0.5. The follower controller strongly depends on the FD approximation. Thus, the successful energy sharing between a human leader and a robot follower further supports the efficacy of the FD-based controllers for human–robot dynamic object manipulation.

Fig. 15

Robot follower cooperatively injecting energy into the t-pendulum with a human leader: (a) deflection angles and energy equivalents, (b) energies contained in the t-pendulum and contributed by the human and the robot, (c) actual and estimated energy flows, (d) natural frequency estimates. The vertical dashed line marks settling time Ts

Excitation of Undesired ψ-Oscillation (Robot Leader and Fixed End)

The pendulum mass was manually released in a pose with high initial ψ-oscillation, ψE(t=0) = 29°, but θE(t=0) ≈ 0. A goal energy of θEd = 40° was given to the robot leader A1=R=L with ΓRd = 1, while the handle of agent A2=0 was fixed.

The robot identified the natural frequency of the ψ-oscillation and tried to inject energy to reach the desired amplitude of θEd = 40° (see Fig. 16). Thus, the robot failed to excite the desired θ-oscillation and to keep unwanted oscillations within small bounds as defined in Sect. 3. However, considering the controller implementation given in Fig. 8, this experimental result supports the correct controller operation: the ω-estimation identified the frequency of the current oscillation, here the undesired ψ-oscillation. Based on ω^, the leader controller was able to inject energy into the ψ-oscillation; not enough to reach the desired amplitude of θEd = 40°, but enough to sustain the oscillation. Note that the ψ-oscillation is highly damped, less simple pendulum-like and in general more difficult to excite than the θ-oscillation. In experiments with a controller that numerically differentiated the projected deflection angle θ instead of using the observer, the energy injection was timed less accurately. The result was a suppression of the ψ-oscillation through natural damping until the θ-oscillation dominated ω^ and θEd was reached.

Fig. 16

Strong initial ψE for robot leader and fixed end: (a) deflection angles and energy equivalents, (b) energies contained in the t-pendulum and contributed by the robot, (c) natural frequency estimates. The robot detected the natural frequency of the less simple pendulum-like ψ-oscillation and sustained it

On the one hand, this experiment supports the control approach by showing that the controller is able to excite also less simple pendulum-like oscillations. On the other hand, this experiment reveals the need for a higher-level entity to detect failures such as the excitation of the wrong oscillation (see the discussion in Sect. 9.1).

Experimental Controller Evaluation for the afa-System

Joint velocity limitations of the KUKA LWR restricted us to energies θEd ≤ 30° for the afa-system experiments. We present experiments that investigate the maximum achievable energy (Sect. 8.4.1) and active follower contribution (Sect. 8.4.2).

Maximum Achievable Energy (Robot Leader and Passive Human)

A robot leader A1=R=L interacted with a passive human A2=H=P under the same conditions as for the t-pendulum in Sect. 8.3.1. We incrementally increased θEd from 10° to 30°.

The robot leader closely followed the desired reference dynamics and achieved small steady state errors, e.g., e = −0.9° at θEd = 10° and e = −0.6° at θEd = 30° (see Fig. 17). Undesired oscillations at the wrist remained small (ψE < 4.3°). The projection of the flexible object onto the abstract torque-pendulum was performed based on the sum θ = ψ + ρ and the simple pendulum observer. From Fig. 4 it appears that the sum ψ + ρ overestimates the deflection angle at the shoulder. However, the known wrist angle ψ only reflects the orientation of the flexible object at the robot interaction point. The flexibility of the object caused greater deflection angles θ. Consequently, the abstract torque-pendulum energy equivalent ϑr closely followed the energy equivalent θE at small energies, but underestimated θ for increased energies. Nevertheless, the results are promising, as they show that a controlled swing-up was achieved based on the virtual energy ϑr of the abstract torque-pendulum.

Fig. 17

Maximum achievable energies are limited to θE = 30° for the afa-system due to joint velocity limits: (a) deflection angles and energy equivalents, (b) energies contained in the flexible object and (c) contributed by the human and the robot, (d) natural frequency estimates. Vertical dashed lines mark settling times Ts

Active Follower Contribution (Robot Follower and Human Leader)

A robot follower A1=R=F interacted with a human leader A2=H=L under the same conditions as for the t-pendulum in Sect. 8.3.2. Due to the hardware limitations we used θEd = 25°, but chose a higher and thus more challenging desired relative energy contribution of the robot follower, ΓRd = 0.65.

The robot successfully imitated the object energy flow, which led to human–robot cooperative energy injection up to θEd = 25° with a small e = −0.9° (see Fig. 18). The human first injected energy into the passive robot arm, which is equivalent to the robot initially withdrawing some energy from the object before it can detect the object energy increase. For this reason, and due to the filtering for ϑ˙^r, the follower achieved only ΓRin = 0.22 and ΓRobj = 0.34 when evaluated at Ts. However, the relative follower contribution increased and reached, e.g., ΓRin = 0.35 and ΓRobj = 0.62 at t = 11 s. Interestingly, the energy contributions of the human and the robot were of similar shape, both for a robot follower and a robot leader. Thus, the simple pendulum-like behavior of the robot end effector allows the robot to replicate human whole-arm swinging characteristics.

Fig. 18

Robot follower cooperatively injecting energy into the flexible object with a human leader: (a) deflection angles and energy equivalents, (b) energies contained in the flexible object and contributed by the human and the robot, (c) actual and estimated energy flows, (d) natural frequency estimates. The energy contributions of the robot and the human show similar characteristics. The vertical dashed line marks settling time Ts

Discussion

Embedding of Proposed Controllers in a Robotic Architecture

One of the major goals of robotics research is to design robots that are able to manipulate unknown objects in a goal-directed manner without prior model knowledge or tuning. Robot architectures are employed to manage such complex robot functionality [42]. These architectures are often organized in three layers: the lowest layer realizes behaviors, which are coordinated by an intermediate executive layer based on a plan provided by the highest layer. In this work, our focus is on the lowest layer: the behavior of cooperative energy injection into swinging motion, which is challenging in itself due to the underactuation caused by the multitude of DoFs of the pendulum-like and flexible objects. On the behavioral layer, we use high-frequency force and torque measurements to achieve continuous energy injection and robustness with respect to disturbances. The presented controllers implement the distinct roles of a leader and a follower. As known from human studies, humans tend to specialize, but do not rigidly stick to one role and continuously blend between leader and follower behaviors [40]. Role mixing or blending would be triggered by the executive layer. The executive layer would operate at a lower frequency and would have access to additional sensors, e.g., a camera that allows task execution to be monitored. Based on the additional sensor measurements, exceptions could be handled (e.g., when a wrong oscillation degree of freedom is excited as in Sect. 8.3.3), the required swinging amplitude θEd could be set, and behavior switching could be triggered (e.g., from the object swing-up behavior to an object placement behavior).

Furthermore, additional object-specific parameters could be estimated on the executive layer, e.g., damping or elastic object deformation. The fundamental dynamics (FD) approach does not model damping, and consequently ΓRobj ≈ ΓRd indicates that the controller exhibits the desired behavior. However, that also means that ΓRin < ΓRd, because the leader compensates for damping. As all realistic objects exhibit non-negligible damping, an increased robot contribution during swing-up can be achieved by increasing ΓRd. The desired relative energy contribution ΓRd could thus serve as a single parameter that could, for instance, be adjusted online by the executive layer to achieve a desired robot contribution to the swing-up. As an alternative to an executive layer, a human partner could adjust a parameter such as ΓRd online to achieve the desired robot follower behavior and could also ensure excitation of the desired oscillation.

Generalizability

The main assumption made in this work is that the desired oscillation is simple pendulum-like. Based on this assumption, the proposed approach is generalizable in the sense that it can be directly applied to the joint swing-up of unknown objects without parameter tuning⁵ (see the video with online changing flexible object parameters in Online Resource 2). We regard the case of a robotic follower interacting with a human leader as an interesting and challenging scenario and therefore presented our method from the human–robot cooperation perspective. Nevertheless, the proposed method can also be directly employed for robot–robot teams or single robot systems, such as quadrotors, and can also be used to damp oscillations instead of exciting them. The task of joint energy injection into a flexible bulky object might appear to be a rare special case. However, it is a basic dynamic manipulation skill that humans possess, and it should be investigated in order to equip robots with universal manipulation skills.

We see the main take-away message of this work for future research in the advantage of understanding the underlying FD. Based on the FD, which encodes the desired behavior, simple adaptive controllers can be designed and readily applied to complex tasks even when task parameters change drastically, e.g., when objects of different dimensions have to be manipulated.

Dependence of Robot Follower Performance on the Human Interaction Partner

Performance measures such as settling time Ts and steady state error e strongly depend on the behavior of the human partner. The robot follower is responsible for the resultant effort sharing. Ideally, the robot follower contributes the desired fraction of the current change in object energy at all times, ϑ˙r,R = ΓRd ϑ˙r. The necessary filtering and the approximations made by the FD result in a delayed follower response and a deviation from ΓRd. However, for the follower, we do not make any assumptions on how humans inject energy into the system, e.g., we do not assume that human leaders follow the desired reference dynamics that we defined for robot leaders. This is in contrast to our previous work [13], where thresholds were tuned with respect to human swing-up behavior and the follower required extensive model knowledge to compute the energy contained in the oscillation. For demonstration purposes, we aimed for a smooth energy injection by the human leader in the experiments presented in the previous section. Energy was not injected smoothly to match modeled behavior, but only to enable the use of measures such as the relative energy contribution at the settling time for the effort sharing analysis.

Alternatives to Energy-Based Swing-Up Controllers

Energy-based controllers such as [48] are known to be less efficient than, e.g., model predictive control (MPC)-based controllers [31]. MPC can improve performance with respect to the energy and time needed to reach a desired energy content. However, in this work, we do not aim for an especially efficient robot controller, but for cooperative energy injection into unknown objects. The use of MPC requires a model, including accurate mass and moment of inertia properties. The use of the energy-based controller of [48] allows us to derive the FD as an approximate model. The FD reduces the unknowns to the natural frequency ω and, for the afa-system, the moment of inertia estimate Iϑ, which can be estimated online. The design of a follower controller is only possible because the FD allows for a comparison of expectation and observation. How to formulate the expectation for an MPC-based approach is unclear and would certainly be more involved. The great advantage of the FD-based approach lies in its simplicity.

Alternative Parameter Estimation Approaches

In this work, the goal of a leader controller is to track desired reference dynamics. Such behavior could also be achieved by employing model reference adaptive control (MRAC) [2] or by employing filters that compare the applied amplitude factors a to the achieved energy increase in order to estimate the unknown FD parameter B. The disadvantage of MRAC and similar approaches is that they need to observe the system energy ϑr online to estimate the system constant B. Having more than one agent interacting with the system not only challenges the stability properties of MRAC, but also makes it impossible to design a follower, which requires B^ to differentiate between its own and external influence on ϑr.

The FD approximates the system parameter B by its mean, while the true value oscillates. The mean parameter B depends on the natural frequency ω, which can be approximated by observing the phase angle φ. Because the FD states ϑr and φ are approximately decoupled, reference dynamics tracking and energy flow imitation can be achieved for unknown objects.

The natural frequency ω could also be estimated by observing the time required for a full swing. Decreasing the observation period yields the simple continuous low-pass filter used in this article. Alternatively, the desired circularity of the phase space could be exploited by methods such as gradient descent [37] or Newton–Raphson to estimate ω. We chose the presented approach for its continuity and simplicity, as well as for its stability properties under the FD assumption.
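For illustration, a minimal discrete-time sketch of such a low-pass-filter-based estimator is given below. It assumes the phase is computed as φ = atan2(−ϑ˙/ω^, ϑ) and that the estimate follows ω^˙ = −(ω^ − φ˙)/Tω, cf. (52) and (54) in the appendix; the function name, the unwrapping step and all numerical values are illustrative.

```python
import numpy as np

def estimate_omega(theta, theta_dot, dt, T_omega=2.0, omega_init=6.0):
    """Low-pass estimate of the natural frequency from the phase derivative.

    theta, theta_dot : sampled deflection angle [rad] and its derivative [rad/s]
    dt               : sample time [s]
    T_omega          : filter time constant [s]
    omega_init       : initial estimate omega_hat(t=0) [rad/s]
    """
    omega_hat = omega_init
    phi_prev = np.arctan2(-theta_dot[0] / omega_hat, theta[0])
    estimates = [omega_hat]
    for th, thd in zip(theta[1:], theta_dot[1:]):
        phi = np.arctan2(-thd / omega_hat, th)
        dphi = np.angle(np.exp(1j * (phi - phi_prev)))        # phase increment without 2*pi jumps
        omega_hat += dt * (dphi / dt - omega_hat) / T_omega    # omega_hat_dot = -(omega_hat - phi_dot)/T_omega
        phi_prev = phi
        estimates.append(omega_hat)
    return np.array(estimates)
```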

Stability of Human–Robot Object Manipulation

We proved global stability of the presented control approach for the linear FD. Stability investigations of human–robot flexible object manipulation face several challenges. Firstly, dynamic models of the complex t-pendulum and afa-system would be required. Furthermore, the human interaction partner acts as a non-autonomous and non-reproducible system that is difficult to model and whose stability cannot be analyzed with common methods [5]. In [23], Hogan presents results indicating that the human arm exhibits the impedance of a passive object; however, this result cannot be directly applied in a passivity-based stability analysis [24] to show the stabilization of limit cycles such as the simple pendulum oscillation in this work. A stability analysis of the simpler, but nonlinear, abstract simple pendulums requires a reformulation of the system dynamics in terms of the errors Δω^ = ω − ω^ and ΔϑE = ϑEd − ϑE. The lack of analytic solutions for ω(ϑE) [6] and ϑ(ϑE, φ) (see Sect. 4.4) impedes the derivation of the above error dynamics.

As our final goal is cooperative dynamic human–robot interaction, we refrained from further stability investigations in this paper and focused on simulation- and experiment-based analyses. The simulations and human–robot experiments suggest that the domain of attraction of the presented FD-based controllers is sufficiently large to allow for cooperative energy injection into nonlinear high energy regimes.

Conclusions

This article presents a control approach for cooperative energy injection into unknown flexible objects as a first step towards human–robot cooperative dynamic object manipulation. The simple pendulum-like nature of the desired swinging motion allows the design of adaptive follower and leader controllers based on simple pendulum closed-loop fundamental dynamics (FD). We consider two different systems and show that their desired oscillations can be approximated by similar FD. The first is a pendulum-like object that is controlled via acceleration by the human and the robot. The second is an oscillating entity composed of the agents' arms and a flexible object that is controlled via torque at the agents' shoulders. The robot estimates the natural frequency of the system and controls the swing energy as a leader or follower from haptic information only. In contrast to a leader, a follower does not know the desired energy level, but actively contributes to the swing-up through imitation of the system energy flow. Experimental results showed that a robotic leader can track desired reference dynamics. Furthermore, a robot follower actively contributed to the swing-up effort in interaction with a human leader. High energy levels with swinging amplitudes greater than 80° were achieved for the pendulum-like object. Although joint velocity limits of the robotic manipulator restricted the swinging amplitudes to 30° for the “arm—flexible object—arm” system, the experimental results support the efficacy of our approach to human–robot cooperative swinging of unknown flexible objects.

In future work, we want to take a second step towards human–robot cooperative dynamic object manipulation by investigating controlled object placement as the phase following the joint energy injection. Furthermore, we are interested in applying the presented technique of approximating the desired behavior by its FD to different manipulation tasks.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Acknowledgements

The research leading to these results has received funding partly from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement no. [267877] and partly from the Technical University of Munich - Institute for Advanced Study (www.tum-ias.de), funded by the German Excellence Initiative.

Biographies

Philine Donner

received the diploma engineer degree in Mechanical Engineering in 2011 from the Technical University of Munich, Germany. She completed her diploma thesis on “Development of computational models and controllers for tendon-driven robotic fingers” at the Biomechatronics Lab at Arizona State University, USA. From October 2011 to February 2017 she worked as a researcher and Ph.D. student at the Technical University of Munich, Germany, Department of Electrical and Computer Engineering, Chair of Automatic Control Engineering. Currently she works as a research scientist at Siemens Corporate Technology. Her research interests are in the area of automatic control and robotics with a focus on control for physical human–robot interaction.

Franz Christange

graduated with B.Sc. and M.Sc. in Electrical Engineering, in the field of control theory and robotics at Technical University of Munich, Germany in 2014. Currently he works as a researcher at the Technical University of Munich, Germany, Department of Electrical and Computer Engineering, Chair of Renewable and Sustainable Energy Systems. His research focuses on intelligent control of distributed energy systems.

Jing Lu

received the Bachelor Degree in Control Engineering from University of Kaiserslautern, Germany and from Fuzhou University, China in 2014. She received the Master Degree in Automatic Control and Robotics from the Technical University of Munich, Germany in 2016.

Martin Buss

received the Diplom-Ingenieur degree in Electrical Engineering in 1990 from the Technical University Darmstadt, Germany, and the Doctor of Engineering degree in Electrical Engineering from the University of Tokyo, Japan, in 1994. In 2000 he finished his habilitation in the Department of Electrical Engineering and Information Technology, Technical University of Munich, Germany. In 1988 he was a research student at the Science University of Tokyo, Japan, for 1 year. As a postdoctoral researcher he stayed with the Department of Systems Engineering, Australian National University, Canberra, Australia, in 1994/5. From 1995 to 2000 he was a senior research assistant and lecturer at the Institute of Automatic Control Engineering, Department of Electrical Engineering and Information Technology, Technical University of Munich, Germany. He was appointed full professor, head of the control systems group, and deputy director of the Institute of Energy and Automation Technology, Faculty IV Electrical Engineering and Computer Science, Technical University Berlin, Germany, from 2000 to 2003. Since 2003 he has been full professor (chair) and director of the Institute of Automatic Control Engineering, Department of Electrical and Computer Engineering, Technical University of Munich, Germany. From 2006 to 2014 he was the coordinator of the DFG Excellence Research Cluster Cognition for Technical Systems (CoTeSys). Martin Buss is a fellow of the IEEE. He has been awarded the ERC advanced grant SHRINE. From 2014 to 2017 he was a Carl von Linde Senior Fellow with the TUM Institute for Advanced Study. His research interests include automatic control, haptics, optimization, nonlinear and hybrid discrete-continuous systems, and robotics.

Derivation of the Fundamental Dynamics

Application of the following three steps yields the dynamics of the abstract cart- and torque-pendulums (4), (5) in terms of the polar states xp:

  S1. Differentiation of (10) and (15) with respect to time

  S2. Insertion of the Cartesian state dynamics (4) and (5)

  S3. Substitution of the remaining Cartesian states through the polar states (16)

Step S1 applied to the phase angle φ requires the time derivative of the atan2-function, which is

\[
\frac{\mathrm{d}}{\mathrm{d}t}\operatorname{atan2}(y,x) = \frac{-y\,\frac{\mathrm{d}x}{\mathrm{d}t} + x\,\frac{\mathrm{d}y}{\mathrm{d}t}}{x^2 + y^2}. \tag{42}
\]

We get

\[
\dot{\varphi} \overset{\mathrm{S1}}{=} \frac{\Omega\dot{\vartheta}^2 - \Omega\vartheta\ddot{\vartheta}}{\Omega^2\vartheta^2 + \dot{\vartheta}^2}
\overset{\mathrm{S2}}{=} \frac{\Omega\dot{\vartheta}^2 + \Omega\omega_0^2\vartheta\sin\vartheta - \Omega\vartheta A}{\Omega^2\vartheta^2 + \dot{\vartheta}^2}
\overset{\mathrm{S3}}{=} \Omega\sin^2\varphi + \frac{\omega_0^2}{\Omega\vartheta_r}\cos\varphi\,\sin(\vartheta_r\cos\varphi) - \frac{1}{\Omega\vartheta_r}\cos\varphi\,A, \tag{43}
\]
\[
\dot{\vartheta}_r \overset{\mathrm{S1}}{=} \frac{\Omega^2\vartheta\dot{\vartheta} + \dot{\vartheta}\ddot{\vartheta}}{\Omega\sqrt{\Omega^2\vartheta^2 + \dot{\vartheta}^2}}
\overset{\mathrm{S2}}{=} \frac{\Omega^2\vartheta\dot{\vartheta} - \omega_0^2\dot{\vartheta}\sin\vartheta + \dot{\vartheta}A}{\Omega\sqrt{\Omega^2\vartheta^2 + \dot{\vartheta}^2}}
\overset{\mathrm{S3}}{=} -\Omega\vartheta_r\sin\varphi\cos\varphi + \frac{\omega_0^2}{\Omega}\sin\varphi\,\sin(\vartheta_r\cos\varphi) - \frac{1}{\Omega}\sin\varphi\,A, \tag{44}
\]

with actuation terms A = Ar¨ for the abstract cart-pendulum
\[
A_{\ddot{r}} \overset{\mathrm{S3}}{=} -\frac{\omega_0^2}{g}\cos(\vartheta_r\cos\varphi)\,\frac{\ddot{r}_1 + \ddot{r}_2}{2} \tag{45}
\]
and A = At for the abstract torque-pendulum
\[
A_t = \frac{1}{I_\vartheta}\,\frac{t_{s,1} + t_{s,2}}{2}. \tag{46}
\]

The resultant state space representations are control affine and coupled

\[
\dot{x}_p = f_p(x_p) + g_p(x_p)\,u, \tag{47}
\]

with control input u:=A.

Insertion of the control laws (12) and (13) into A = Ar¨ and A = At in (47) yields the state space representations with the new inputs a1 and a2 of the form

\[
\dot{x}_p = f_p(x_p) + {}^{a}g_p(x_p)\,\frac{a_1 + a_2}{2}. \tag{48}
\]

Application of the following three steps to the state space representation (48) yields the fundamental dynamics (17):

  • S4: Approximation through 3rd-order Taylor polynomials:
    \(\sin x \approx x - \frac{x^3}{3!}, \qquad \cos x \approx 1 - \frac{x^2}{2!}\)
  • S5: Use of trigonometric identities:
    \(\sin^2 x + \cos^2 x = 1, \qquad \sin(2x) = 2\sin x\cos x, \qquad \cos(2x) = \cos^2 x - \sin^2 x\)
    and, deduced from the above,
    \(\sin^2 x = \tfrac{1}{2} - \tfrac{1}{2}\cos(2x), \qquad \cos^2 x = \tfrac{1}{2} + \tfrac{1}{2}\cos(2x)\)
  • S6: Neglect of higher harmonics, e.g., \(\sin(2x) \approx 0\), \(\cos(4x) \approx 0\)

Use of the actual natural frequency for the normalization of the phase space, Ω = ω, reduces the error caused by the approximations, ϑE ≈ ϑr.
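As a quick plausibility check of the net effect of steps S4–S6 (this check is not part of the original derivation), the average over one oscillation period of the term sin²φ·cos(ϑr cos φ), which appears in (50) below, can be compared numerically with the value 0.5·cos(ϑr/2) retained by the approximation:

```python
import numpy as np

# Illustrative check: the phi-average of sin(phi)^2 * cos(theta_r*cos(phi)) should be
# close to 0.5*cos(theta_r/2), which is what steps S4-S6 effectively retain.
phi = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
for theta_r in (0.2, 0.5, 1.0):                                   # amplitudes in rad
    average = np.mean(np.sin(phi)**2 * np.cos(theta_r * np.cos(phi)))
    approx = 0.5 * np.cos(theta_r / 2.0)
    print(f"theta_r = {theta_r:.1f} rad: average = {average:.4f}, S4-S6 value = {approx:.4f}")
```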

Phase dynamics φ˙:

\[
\begin{aligned}
f_{p,1} &\overset{\Omega=\omega}{=} \omega\sin^2\varphi + \frac{\omega_0^2}{\omega\vartheta_r}\cos\varphi\,\sin(\vartheta_r\cos\varphi)
\overset{\mathrm{S4}}{\approx} \omega\sin^2\varphi + \frac{\omega_0^2}{\omega\vartheta_r}\cos\varphi\left(\vartheta_r\cos\varphi - \frac{\vartheta_r^3\cos^3\varphi}{6}\right)\\
&\overset{\mathrm{S5}}{=} \omega\left(\tfrac{1}{2} - \tfrac{1}{2}\cos(2\varphi)\right) + \frac{\omega_0^2}{\omega}\left[\tfrac{1}{2} + \tfrac{1}{2}\cos(2\varphi) - \frac{\vartheta_r^2}{6}\left(\tfrac{1}{2} + \tfrac{1}{2}\cos(2\varphi)\right)^{2}\right]\\
&\overset{\mathrm{S5,S6}}{\approx} \tfrac{1}{2}\omega + \frac{\omega_0^2}{\omega}\left[\tfrac{1}{2} - \frac{\vartheta_r^2}{6}\left(\tfrac{1}{4} + \tfrac{1}{2}\cos(2\varphi) + \tfrac{1}{4}\left(\tfrac{1}{2} + \tfrac{1}{2}\cos(4\varphi)\right)\right)\right]\\
&\overset{\mathrm{S6}}{\approx} \tfrac{1}{2}\omega + \frac{\omega_0^2}{\omega}\left(\tfrac{1}{2} - \frac{\vartheta_r^2}{6}\left(\tfrac{1}{4} + \tfrac{1}{8}\right)\right)
= \tfrac{1}{2}\omega + \frac{\omega_0^2}{2\omega}\left(1 - \tfrac{1}{2}\left(\frac{\vartheta_r}{2}\right)^{2}\right)\\
&\overset{\mathrm{S4}^{-1}}{\approx} \tfrac{1}{2}\omega + \frac{\omega_0^2}{2\omega}\cos\frac{\vartheta_r}{2}
\overset{\omega_g}{\approx} \tfrac{1}{2}\omega + \frac{\omega_g^2}{2\omega}
\overset{\omega_g\approx\omega}{\approx} \omega,
\end{aligned}\tag{49}
\]

with “S4⁻¹” indicating the application of the 3rd-order Taylor approximation in the reverse direction and the insertion of the geometric mean approximation (9) with ϑE ≈ ϑr in the last step. For ᵃgp,1, the approximation steps S4 to S6 as detailed in (49) yield ᵃgp,1 ≈ 0, independent of the actuation terms Ar¨ and At. Consequently, the phase dynamics of the abstract cart- and torque-pendulums result in φ˙ ≈ ω.

Energy dynamics ϑ˙r: Similar to ᵃgp,1, the approximation steps S4 to S6 result in fp,2 ≈ 0. The remaining term ᵃgp,2 simplifies for the abstract cart-pendulum to

\[
\begin{aligned}
{}^{a}g_{p,2,\ddot{r}} &\overset{\Omega=\omega}{=} \frac{\omega_0^2\,\omega}{g}\sin^2\varphi\,\cos(\vartheta_r\cos\varphi)
\overset{\mathrm{S4}}{\approx} \frac{\omega_0^2\,\omega}{g}\sin^2\varphi\left(1 - \tfrac{1}{2}\vartheta_r^2\cos^2\varphi\right)\\
&\overset{\mathrm{S5}}{=} \frac{\omega_0^2\,\omega}{g}\left(\tfrac{1}{2} - \tfrac{1}{2}\cos(2\varphi)\right)\left(1 - \frac{\vartheta_r^2}{2}\left(\tfrac{1}{2} + \tfrac{1}{2}\cos(2\varphi)\right)\right)\\
&= \frac{\omega_0^2\,\omega}{g}\left(\tfrac{1}{2} - \tfrac{1}{2}\cos(2\varphi) - \frac{\vartheta_r^2}{2}\left(\tfrac{1}{4} - \tfrac{1}{4}\cos^2(2\varphi)\right)\right)
\overset{\mathrm{S5,S6}}{\approx} \frac{\omega_0^2\,\omega}{g}\left(\tfrac{1}{2} - \frac{\vartheta_r^2}{2}\left(\tfrac{1}{4} - \tfrac{1}{8}\right)\right)\\
&= \frac{\omega_0^2\,\omega}{2g}\left(1 - \tfrac{1}{2}\left(\frac{\vartheta_r}{2}\right)^{2}\right)
\overset{\mathrm{S4}^{-1}}{\approx} \frac{\omega}{2g}\,\omega_0^2\cos\frac{\vartheta_r}{2}
\overset{\omega_g}{\approx} \frac{\omega}{2g}\,\omega_g^2
\overset{\omega_g\approx\omega}{\approx} \frac{1}{2g}\,\omega^{3} =: B_{\ddot{r}}.
\end{aligned}\tag{50}
\]

As for (49), we applied the 3rd-order Taylor approximation in the reverse direction (S4⁻¹) and inserted the geometric mean approximation (9) of the natural frequency ωg.

For the abstract torque-pendulum we get

\[
{}^{a}g_{p,2,t} = \frac{1}{\omega I_\vartheta}\sin^2\varphi \overset{\mathrm{S5,S6}}{\approx} \frac{1}{2\omega I_\vartheta} =: B_t. \tag{51}
\]

Thus, the fundamental energy dynamics depend linearly on the amplitude factors, ϑ˙r ≈ B (a1 + a2)/2. The result is the fundamental dynamics in (17).
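As an illustration only (not part of the original derivation), the fundamental energy dynamics of the abstract torque-pendulum can be checked in simulation. The sketch below assumes the shoulder torque law ts,i = −ai sin φ implied by (44), (46) and (51), and compares the simulated growth of ϑr with the FD prediction ϑ˙r ≈ Bt (a1 + a2)/2; all numerical values are placeholders.

```python
import numpy as np

# Abstract torque-pendulum, semi-implicit Euler integration (illustrative parameters)
I_th, omega0 = 0.5, 3.0            # moment of inertia [kg m^2], linearized natural frequency [rad/s]
a1 = a2 = 0.15                     # amplitude factors [N m]
dt, T = 1e-4, 10.0
omega = omega0                     # frequency used for the phase-space normalization

th, th_dot = 0.05, 0.0             # initial deflection [rad] and velocity [rad/s]
for _ in range(int(T / dt)):
    phi = np.arctan2(-th_dot / omega, th)
    A_t = -(a1 + a2) / (2.0 * I_th) * np.sin(phi)     # assumed control law t_s,i = -a_i sin(phi), cf. (46)
    th_ddot = -omega0**2 * np.sin(th) + A_t           # theta_ddot = -omega0^2 sin(theta) + A_t
    th_dot += dt * th_ddot
    th += dt * th_dot

theta_r_sim = np.sqrt(th**2 + (th_dot / omega)**2)    # polar amplitude after T seconds
B_t = 1.0 / (2.0 * omega * I_th)                      # fundamental dynamics gain, cf. (51)
theta_r_fd = 0.05 + B_t * (a1 + a2) / 2.0 * T         # FD prediction: theta_r_dot ~ B_t*(a1+a2)/2
print(f"simulated theta_r: {theta_r_sim:.3f} rad, FD prediction: {theta_r_fd:.3f} rad")
```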

Stability of the ω-Estimation

For an approximately constant natural frequency ω we have φ(t)=ωt, where we set φ(t=0)=0 without loss of generality (see (11)). This yields the modified state transformations ϑ=ϑrcos(ωt) and ϑ˙=-ϑrωsin(ωt) compared to (16), and the phase computation results in

\[
\varphi = \operatorname{atan2}\!\left(-\frac{\dot{\vartheta}}{\hat{\omega}},\,\vartheta\right) = \operatorname{atan2}\!\left(\frac{\omega}{\hat{\omega}}\sin(\omega t),\,\cos(\omega t)\right), \tag{52}
\]

which is independent of ϑr. Consequently, the natural frequency estimation in Fig. 6 has one input, the natural frequency ω, and one output, the estimate ω^. Note that we assume ω to be known only for the stability analysis, but not for the implementation displayed in Fig. 6.

In a next step, we derive the estimation dynamics in terms of its input ω and output ω^. Differentiation of (52) with respect to time yields

\[
\dot{\varphi} = \frac{\frac{\omega^2}{\hat{\omega}}\sin^2(\omega t) + \cos(\omega t)\left(\frac{\omega^2}{\hat{\omega}}\cos(\omega t) - \frac{\omega}{\hat{\omega}^2}\,\dot{\hat{\omega}}\sin(\omega t)\right)}{\frac{\omega^2}{\hat{\omega}^2}\sin^2(\omega t) + \cos^2(\omega t)}
\overset{\cdot\,\cos^{-2}(\omega t)}{=} \frac{\frac{\omega^2}{\hat{\omega}}\tan^2(\omega t) + \frac{\omega^2}{\hat{\omega}} - \frac{\omega}{\hat{\omega}^2}\tan(\omega t)\,\dot{\hat{\omega}}}{1 + \frac{\omega^2}{\hat{\omega}^2}\tan^2(\omega t)}. \tag{53}
\]

Transformation of (19) into time domain yields

\[
\dot{\hat{\omega}} = -\frac{1}{T_\omega}\left(\hat{\omega} - \dot{\varphi}\right). \tag{54}
\]

Insertion of (54) solved for φ˙ into (53), followed by some rearrangements yields the ω-estimation dynamics

\[
\dot{\hat{\omega}} = \frac{\hat{\omega}\,\omega^2 - \hat{\omega}^3}{T_\omega\hat{\omega}^2 + \omega\tan(\omega t) + T_\omega\omega^2\tan^2(\omega t)}. \tag{55}
\]

Because ω is bounded and constant, it suffices to show stability of the estimation error dynamics ω~˙=ω^˙-ω˙=ω^˙. As Lyapunov function we choose

\[
V = \tfrac{1}{2}\left(\hat{\omega} - \omega\right)^2 \tag{56}
\]

with time derivative

\[
\dot{V} = \frac{-\hat{\omega}\,(\hat{\omega} - \omega)^2(\hat{\omega} + \omega)}{T_\omega\omega^2\tan^2(\omega t) + \omega\tan(\omega t) + T_\omega\hat{\omega}^2}. \tag{57}
\]

For the numerator of (57) it holds that −ω^(ω^ − ω)²(ω^ + ω) ≤ 0 if sgn(ω) = sgn(ω^). The denominator is a quadratic function of tan(ωt), with −∞ < tan(ωt) < ∞. From Tωω² > 0 we deduce that the denominator, as a function of x = tan(ωt), is a convex parabola. Therefore, the denominator is positive if the discriminant D is negative, i.e.

\[
D = \omega^2 - 4\,T_\omega\omega^2\,T_\omega\hat{\omega}^2 < 0 \quad\Leftrightarrow\quad T_\omega > \frac{1}{2\hat{\omega}}. \tag{58}
\]

Condition (58) depends on the natural frequency estimate ω^, which varies over time. Because we are estimating the natural frequency of a pendulum under the influence of gravity, only positive values are physically plausible, ω > 0. For Tω > 1/(2ω^(t=0)) and ω·ω^(t=0) > 0, we have sgn(ω) = sgn(ω^(t=0)) and V˙(t=0) < 0, and ω^ initially approaches ω. If further Tω > 1/(2ω), then V˙(t ≥ 0) < 0 as long as ω ≠ ω^, and (58) can be rewritten as

\[
T_\omega > \max\left(\frac{1}{2\hat{\omega}(t=0)},\;\frac{1}{2\omega}\right). \tag{59}
\]

Thus, if (59) holds, the ω-estimation is asymptotically stable under the fundamental dynamics assumption. This proves convergence of the estimate ω^ to the true value ω for a linearly oscillating pendulum.

Transfer Functions of Leader–Follower Structures

Rearrangement of the block diagram in Fig. 7 leads to the block diagram displayed in Fig. 19. The highlighted intermediate transfer function G1fi is

\[
G_1^{fi} = \frac{\frac{1}{s}}{1 - \frac{1}{s}\,\Gamma_F\,\frac{B}{\hat{B}_F}\,\frac{s}{T_F s + 1}}. \tag{60}
\]

Based on (60) the reference input transfer function ϑr(s)=Gfi(s)θEd(s) results in (27).

Fig. 19

Rearranged block diagram for the computation of the transfer function Gfi(s): ϑr(s)=Gfi(s)θEd(s)

For the computation of the relative follower contribution ΓF, consider the block diagram rearrangement in Fig. 20. From Fig. 20 with

\[
G_2^{fi} = \frac{1}{1 - \Gamma_F^{d}\,\frac{B}{\hat{B}_F}\,\frac{1}{T_F s + 1}}, \tag{61}
\]

we can compute the transfer function which yields the amount of energy the leader contributes ϑrL(s) based on the reference input θEd(s)

\[
G_L^{fi} = \frac{\Gamma_L^{d} K_d \frac{B}{\hat{B}_L}\left(s + \frac{1}{T_F} - \Gamma_F^{d}\frac{B}{\hat{B}_F}\frac{1}{T_F}\right)}{s^2 + \left(\frac{1}{T_F} - \Gamma_F^{d}\frac{B}{\hat{B}_F}\frac{1}{T_F} + \Gamma_L^{d} K_d\frac{B}{\hat{B}_L}\right)s + \Gamma_L^{d} K_d\frac{B}{\hat{B}_L}\frac{1}{T_F}}. \tag{62}
\]

From ϑrF(s) = ϑr(s) − ϑrL(s) = Gfi(s)θEd(s) − GLfi(s)θEd(s) with (27) and (62), we get GFfi(s) = Gfi(s) − GLfi(s) in (28).
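As a quick sanity check of the reconstructed transfer function (62) (this check is not part of the original article), its steady-state gain can be evaluated symbolically. With perfect estimates B^L = B^F = B, the ratios B/B^ equal one and the DC gain reduces to 1 − ΓFd, i.e., the leader contributes its desired share ΓLd at steady state if the desired shares sum to one:

```python
import sympy as sp

s, Kd, TF = sp.symbols('s K_d T_F', positive=True)
GLd, GFd = sp.symbols('Gamma_Ld Gamma_Fd', positive=True)

# Transfer function (62) with B_hat_L = B_hat_F = B, so all B/B_hat ratios are one
num = GLd * Kd * (s + 1/TF - GFd/TF)
den = s**2 + (1/TF - GFd/TF + GLd*Kd)*s + GLd*Kd/TF
G_L = num / den

print(sp.simplify(G_L.subs(s, 0)))   # prints 1 - Gamma_Fd, i.e., Gamma_Ld if Gamma_Ld + Gamma_Fd = 1
```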

Fig. 20

Rearranged block diagram for the computation of the energy contributed by the leader ϑr,L=GLfi(s)θEd(s)

Compliance with ethical standards

Conflicts of interest

The authors P. Donner, F. Christange, J. Lu and M. Buss declare that they have no conflict of interest.

Footnotes

1

We furthermore assume that a low-level robot control has access to the end effector position.

2

These oscillations can be damped through application of the controller presented in Sect. 5 in z-direction, as shown in [12] for the non-adaptive control approach of [13].

3

In contrast to the t-pendulum and the afa-system, the simple pendulum approximations are modeled as rigid and can thus reach oscillation amplitudes beyond 90°. In order to challenge our approach, we command θEd > 90° here.

4

The KUKA LWR 4+ can handle higher loads if operated close to its singularities. However, joint velocity limits restrict the end effector velocity. As we are interested in a proof of concept of the proposed approach independent of the robotic platform used, we refrained from optimizing the robotic setup for higher loads and velocities.

5

Parameters were set once based on theoretical results (Tω, ω^(t=0)) or according to their physical meaning, i.e., they correspond to filter coefficients (TF, DF, c0, ζ, l1) or to the human-like arm dynamics of the afa-system (Iρ/ψ, dρ/ψ, kψ, ma, la), or they define the desired leader/follower behavior (Kd, ΓL/F).

Contributor Information

Philine Donner, Email: [email protected].

Franz Christange, Email: [email protected].

Jing Lu, Email: [email protected].

Martin Buss, Email: [email protected].

References

  • 1.Åström K, Furuta K. Swinging up a pendulum by energy control. Automatica. 2000;36(2):287–295. doi: 10.1016/S0005-1098(99)00140-5. [DOI] [Google Scholar]
  • 2.Åström KJ, Wittenmark B. Adaptive control. New York: Courier Corporation; 2013. [Google Scholar]
  • 3.Atkeson CG, An CH, Hollerbach JM. Estimation of inertial parameters of manipulator loads and links. Int J Robot Res. 1986;5(3):101–119. doi: 10.1177/027836498600500306. [DOI] [Google Scholar]
  • 4.Atkeson CG, Schaal S. Robot learning from demonstration. Proc Int Conf Mach Learn. 1997;97:12–20. [Google Scholar]
  • 5.Burdet E, Tee KP, Mareels I, Milner TE, Chew CM, Franklin DW, Osu R, Kawato M. Stability and motor adaptation in human arm movements. Biol Cybern. 2006;94(1):20–32. doi: 10.1007/s00422-005-0025-9. [DOI] [PubMed] [Google Scholar]
  • 6.Carvalhaes CG, Suppes P. Approximations for the period of the simple pendulum based on the arithmetic–geometric mean. Am J Phys. 2008;76(12):1150–1154. doi: 10.1119/1.2968864. [DOI] [Google Scholar]
  • 7.Chandler R, Clauser CE, McConville JT, Reynolds H, Young JW (1975) Investigation of inertial properties of the human body. Technical report, DTIC Document
  • 8.Cunningham D, Asada H (2009) The winch-bot: a cable-suspended, under-actuated robot utilizing parametric self-excitation. In: Proceedings of the IEEE international conference on robot automation, pp 1844–1850
  • 9.de Crousaz C, Farshidian F, Buchli J (2014) Aggressive optimal control for agile flight with a slung load. In: IEEE/RSJ IROS workshop on machine learning in planning and control of robot motion
  • 10.Deisenroth M, Fox D, Rasmussen C. Gaussian processes for data-efficient learning in robotics and control. IEEE Trans Pattern Anal Mach Intell. 2015;37(2):408–423. doi: 10.1109/TPAMI.2013.218. [DOI] [PubMed] [Google Scholar]
  • 11.Dempster WT (1955) Space requirements for the seated operator. Technical report, Wright Air Development Center TH-55-159, Wright-Patterson Air Force Base, Ohio (AD 85 892)
  • 12.Donner P, Buss M (2016b) Video: damping of in plane oscillations of the t-pendulum. http://www.lsr.ei.tum.de/fileadmin/w00brk/www/videos/Zdamping.mp4, Accessed 08 Mar 2017
  • 13.Donner P, Buss M. Cooperative swinging of complex pendulum-like objects: experimental evaluation. IEEE Trans Robot. 2016;32(3):744–753. doi: 10.1109/TRO.2016.2560898. [DOI] [Google Scholar]
  • 14.Donner P, Christange F, Buss M (2015) Fundamental dynamics based adaptive energy control for cooperative swinging of complex pendulum-like objects. In: Proceedings of the IEEE international conference on decision control, pp 392–399
  • 15.Donner P, Wirnshofer F, Buss M (2014) Controller synthesis for human–robot cooperative swinging of rigid objects based on human–human experiments. In: Proceedings of the IEEE international symposium in robot human interact communication, pp 586–592
  • 16.Doya K. Reinforcement learning in continuous time and space. Neural Comput. 2000;12(1):219–245. doi: 10.1162/089976600300015961. [DOI] [PubMed] [Google Scholar]
  • 17.Evrard P, Kheddar A (2009) Homotopy switching model for dyad haptic interaction in physical collaborative tasks. In: Proceedings of the World Haptics Euro Haptics, pp 45–50
  • 18.Fantoni I, Lozano R, Spong MW, et al. Energy based control of the pendubot. IEEE Trans Autom Control. 2000;45(4):725–729. doi: 10.1109/9.847110. [DOI] [Google Scholar]
  • 19.Freidovich L, Robertsson A, Shiriaev A, Johansson R. Periodic motions of the pendubot via virtual holonomic constraints: theory and experiments. Automatica. 2008;44(3):785–791. doi: 10.1016/j.automatica.2007.07.011. [DOI] [Google Scholar]
  • 20.Geravand M, Werner C, Hauer K, Peer A. An integrated decision making approach for adaptive shared control of mobility assistance robots. Int J Soc Robot. 2016;8(5):631–648. doi: 10.1007/s12369-016-0353-z. [DOI] [Google Scholar]
  • 21.Groten R, Feth D, Klatzky R, Peer A. The role of haptic feedback for the integration of intentions in shared task execution. IEEE Trans Haptics. 2013;6(1):94–105. doi: 10.1109/TOH.2012.2. [DOI] [PubMed] [Google Scholar]
  • 22.Hatsopoulos NG, Warren WH. Resonance tuning in rhythmic arm movements. J Mot Behav. 1996;28(1):3–14. doi: 10.1080/00222895.1996.9941728. [DOI] [PubMed] [Google Scholar]
  • 23.Hogan N. Controlling impedance at the man/machine interface. Proc IEEE Int Conf Robot Autom. 1989;3:1626–1631. [Google Scholar]
  • 24.Khalil HK, Grizzle J. Nonlinear systems. 3. Upper Saddle River: Prentice hall; 2002. [Google Scholar]
  • 25.Kim CH, Yonekura K, Tsujino H, Sugano S (2009) Physical control of the rotation center of an unsupported object rope turning by a humanoid robot. In: Proceedings of the IEEE-RAS international conference on humanoid robots, pp 148–153
  • 26.Kosuge K, Yoshida H, Fukuda T (1993) Dynamic control for robot-human collaboration. In: Proceedings of the IEEE international symposium in robot human interact communication, pp 398–401
  • 27.Kubus D, Kroger T, Wahl FM (2008) On-line estimation of inertial parameters using a recursive total least-squares approach. In: Proceedings of the IEEE/RSJ international conference on intelligent robots and systems, pp 3845–3852
  • 28.Lin H, Guo F, Wang F, Jia YB. Picking up a soft 3d object by feeling the grip. Int J Robot Res. 2015;34(11):1361–1384. doi: 10.1177/0278364914564232. [DOI] [Google Scholar]
  • 29.Lynch KM, Mason MT. Dynamic nonprehensile manipulation: controllability, planning, and experiments. Int J Robot Res. 1999;18(1):64–92. doi: 10.1177/027836499901800105. [DOI] [Google Scholar]
  • 30.Maeda Y, Takahashi A, Hara T, Arai T. Human–robot cooperation with mechanical interaction based on rhythm entrainment-realization of cooperative rope turning. Proc IEEE Int Conf Robot Autom. 2001;4:3477–3482. [Google Scholar]
  • 31.Magni L, Scattolini R, Åström K. Global stabilization of the inverted pendulum using model predictive control. Proc IFAC World Congr. 2002;35:141–146. [Google Scholar]
  • 32.Mason MT, Lynch K. Dynamic manipulation. Proc IEEE/RSJ Int Conf Intell Robot Syst. 1993;1:152–159. [Google Scholar]
  • 33.Medina J, Lorenz T, Hirche S. Synthesizing anticipatory robotic haptic assistance considering human behavior uncertainty. IEEE Trans Robot. 2015;31(1):180–190. doi: 10.1109/TRO.2014.2387571. [DOI] [Google Scholar]
  • 34.Mörtl A, Lawitzky M, Kucukyilmaz A, Sezgin M, Basdogan C, Hirche S. The role of roles: physical cooperation between humans and robots. Int J Robot Res. 2012;31(13):1656–1674. doi: 10.1177/0278364912455366. [DOI] [Google Scholar]
  • 35.Najafi E, Lopes G, Babuska R (2013) Reinforcement learning for sequential composition control. In: Proceedings of the IEEE conference on decision control, pp 7265–7270
  • 36.Nakanishi J, Fukuda T, Koditschek D. A brachiating robot controller. IEEE Trans Robot Autom. 2000;16(2):109–123. doi: 10.1109/70.843166. [DOI] [Google Scholar]
  • 37.Palunko I, Donner P, Buss M, Hirche S (2014) Cooperative suspended object manipulation using reinforcement learning and energy-based control. In: Proceedings of the IEEE/RSJ international conference on intelligent robotic systems, pp 885–891
  • 38.Peternel L, Petrič T, Oztop E, Babič J. Teaching robots to cooperate with humans in dynamic manipulation tasks based on multi-modal human-in-the-loop approach. Auton Robot. 2014;36(1–2):123–136. doi: 10.1007/s10514-013-9361-0. [DOI] [Google Scholar]
  • 39.Petrič T, Gams A, Ijspeert AJ, Žlajpah L. On-line frequency adaptation and movement imitation for rhythmic robotic tasks. Int J Robot Res. 2011;30(14):1775–1788. doi: 10.1177/0278364911421511. [DOI] [Google Scholar]
  • 40.Reed K, Peshkin M, Hartmann M, Patton J, Vishton P, Grabowecky M (2006) Haptic cooperation between people, and between people and machines. In: Proceedings on IEEE/RSJ international conference on intelligent robotic systems, pp 2109–2114
  • 41.Shiriaev A, Perram J, Canudas-de Wit C. Constructive tool for orbital stabilization of underactuated nonlinear systems: virtual constraints approach. IEEE Trans Autom Control. 2005;50(8):1164–1176. doi: 10.1109/TAC.2005.852568. [DOI] [Google Scholar]
  • 42.Siciliano B, Khatib O. Springer handbook of robotics. New York: Springer; 2016. [Google Scholar]
  • 43.Spong M, Block D. The pendubot: a mechatronic system for control research and education. Proc IEEE Conf Decis Control. 1995;1:555–556. [Google Scholar]
  • 44.Takubo T, Arai H, Hayashibara Y, Tanie K. Human–robot cooperative manipulation using a virtual nonholonomic constraint. Int J Robot Res. 2002;21(5–6):541–553. doi: 10.1177/027836402321261904. [DOI] [Google Scholar]
  • 45.Turnwald A, Althoff D, Wollherr D, Buss M. Understanding human avoidance behavior: interaction-aware decision making based on game theory. Int J Soc Robot. 2016;8(2):331–351. doi: 10.1007/s12369-016-0342-2. [DOI] [Google Scholar]
  • 46.Wang H, Kosuge K. Control of a robot dancer for enhancing haptic human–robot interaction in waltz. IEEE Trans Haptics. 2012;5(3):264–273. doi: 10.1109/TOH.2012.36. [DOI] [PubMed] [Google Scholar]
  • 47.Wen GX, Chen CP, Liu YJ, Liu Z. Neural-network-based adaptive leader-following consensus control for second-order non-linear multi-agent systems. IET Control Theory Appl. 2015;9:1927–1934. doi: 10.1049/iet-cta.2014.1319. [DOI] [Google Scholar]
  • 48.Yoshida K. Swing-up control of an inverted pendulum by energy-based methods. Proc Am Control Conf. 1999;6:4045–4047. [Google Scholar]
  • 49.Yu YQ, Howell LL, Lusk C, Yue Y, He MG. Dynamic modeling of compliant mechanisms based on the pseudo-rigid-body model. J Mech Des. 2005;127(4):760–765. doi: 10.1115/1.1900750. [DOI] [Google Scholar]
  • 50.Zameroski D, Starr G, Wood J, Lumia R. Rapid swing-free transport of nonlinear payloads using dynamic programming. J Dyn Syst Meas Control. 2008;130(4):041001–041011. doi: 10.1115/1.2936384. [DOI] [Google Scholar]
  • 51.Zoso N, Gosselin C (2012) Point-to-point motion planning of a parallel 3-dof underactuated cable-suspended robot. In: Proceedings of the IEEE international conference on robotic automation, pp 2325–2330
