Port-Hamiltonian Systems Theory: An Introductory Overview

Arjan van der Schaft


Johann Bernoulli Institute for Mathematics
and Computer Science
University of Groningen, the Netherlands
[email protected]

Dimitri Jeltsema
Delft Institute of Applied Mathematics
Delft University of Technology, the Netherlands
[email protected]

Boston — Delft
Foundations and Trends® in Systems and Control

Published, sold and distributed by:


now Publishers Inc.
PO Box 1024
Hanover, MA 02339
United States
Tel. +1-781-985-4510
www.nowpublishers.com
[email protected]
Outside North America:
now Publishers Inc.
PO Box 179
2600 AD Delft
The Netherlands
Tel. +31-6-51115274
The preferred citation for this publication is
A. van der Schaft and D. Jeltsema. Port-Hamiltonian Systems Theory: An Introductory
Overview. Foundations and Trends® in Systems and Control, vol. 1, no. 2-3,
pp. 173–378, 2014.
This Foundations and Trends® issue was typeset in LaTeX using a class file designed by Neal
Parikh. Printed on acid-free paper.
ISBN: 978-1-60198-786-0
© 2014 A. van der Schaft and D. Jeltsema

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system,
or transmitted in any form or by any means, mechanical, photocopying, recording or otherwise,
without prior written permission of the publishers.
Photocopying. In the USA: This journal is registered at the Copyright Clearance Center, Inc., 222
Rosewood Drive, Danvers, MA 01923. Authorization to photocopy items for internal or personal
use, or the internal or personal use of specific clients, is granted by now Publishers Inc for users
registered with the Copyright Clearance Center (CCC). The ‘services’ for users can be found on
the internet at: www.copyright.com
For those organizations that have been granted a photocopy license, a separate system of pay-
ment has been arranged. Authorization does not extend to other kinds of copying, such as that
for general distribution, for advertising or promotional purposes, for creating new collective
works, or for resale. In the rest of the world: Permission to photocopy must be obtained from
the copyright owner. Please apply to now Publishers Inc., PO Box 1024, Hanover, MA 02339,
USA; Tel. +1 781 871 0245; www.nowpublishers.com; [email protected]
now Publishers Inc. has an exclusive license to publish this material worldwide. Permission
to use this content must be obtained from the copyright license holder. Please apply to now
Publishers, PO Box 179, 2600 AD Delft, The Netherlands, www.nowpublishers.com; e-mail:
[email protected]
Foundations and Trends® in Systems and Control
Volume 1, Issue 2-3, 2014
Editorial Board

Editors-in-Chief

Panos J. Antsaklis
University of Notre Dame, United States

Alessandro Astolfi
Imperial College, United Kingdom
University of Rome “Tor Vergata”, Italy

Editors

John Baillieul, Boston University
Peter Caines, McGill University
Christos Cassandras, Boston University
Denis Dochain, UC Louvain
Magnus Egerstedt, Georgia Institute of Technology
Karl Henrik Johansson, KTH Stockholm
Miroslav Krstic, University of California, San Diego
Jan Maciejowski, Cambridge University
Dragan Nesic, University of Melbourne
Marios Polycarpou, University of Cyprus
Jörg Raisch, TU Berlin
Arjan van der Schaft, University of Groningen
M. Elena Valcher, University of Padova
Richard Vinter, Imperial College
George Weiss, Tel Aviv University
Editorial Scope

Topics

Foundations and Trends® in Systems and Control publishes survey
and tutorial articles in the following topics:

• Control of:
  – Hybrid and discrete event systems
  – Nonlinear systems
  – Network systems
  – Stochastic systems
  – Multi-agent systems
  – Distributed parameter systems
  – Delay systems

• Systems
  – Energy storage
  – Grid integration
  – Conversion technologies
  – Underpinning materials developments

• Filtering, estimation, and identification

• Optimal control

• Systems theory

• Control applications

Information for Librarians

Foundations and Trends® in Systems and Control, 2014, Volume 1, 4 issues.
ISSN paper version 2325-6818. ISSN online version 2325-6826. Also available
as a combined paper and online subscription.
Foundations and Trends® in Systems and Control
Vol. 1, No. 2-3 (2014) 173–378
© 2014 A. van der Schaft and D. Jeltsema
DOI: 10.1561/2600000002

Port-Hamiltonian Systems Theory: An Introductory Overview

Arjan van der Schaft


Johann Bernoulli Institute for Mathematics
and Computer Science
University of Groningen, the Netherlands
[email protected]
Dimitri Jeltsema
Delft Institute of Applied Mathematics
Delft University of Technology, the Netherlands
[email protected]

Dedicated to the memory of Jan C. Willems,


inspiring teacher and friend.
Contents

1 Introduction 3
1.1 Origins of port-Hamiltonian systems theory . . . . . . . 3
1.2 Summary of contents . . . . . . . . . . . . . . . . . . . 6

2 From modeling to port-Hamiltonian systems 11


2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Port-based modeling and Dirac structures . . . . . . . . 15
2.3 Energy-storing elements . . . . . . . . . . . . . . . . . . 22
2.4 Energy-dissipating (resistive) elements . . . . . . . . . . 23
2.5 External ports . . . . . . . . . . . . . . . . . . . . . . . . 25
2.6 Port-Hamiltonian dynamics . . . . . . . . . . . . . . . . 26
2.7 Port-Hamiltonian differential-algebraic equations . . . . 30
2.8 Detailed-balanced chemical reaction networks . . . . . 34

3 Port-Hamiltonian systems on manifolds 41


3.1 Modulated Dirac structures . . . . . . . . . . . . . . . . 41
3.2 Integrability . . . . . . . . . . . . . . . . . . . . . . . . . 47

4 Input-state-output port-Hamiltonian systems 53


4.1 Linear resistive structures . . . . . . . . . . . . . . . . . 53
4.2 Input-state-output port-Hamiltonian systems . . . . . . . 55
4.3 Memristive dissipation . . . . . . . . . . . . . . . . . . . 58


4.4 Relation with classical Hamiltonian systems . . . . . . . 59

5 Representations of Dirac structures 63


5.1 Kernel and image representations . . . . . . . . . . . . 64
5.2 Constrained input-output representation . . . . . . . . . 64
5.3 Hybrid input-output representation . . . . . . . . . . . . 65
5.4 Canonical coordinate representation . . . . . . . . . . . 66
5.5 Spinor representation . . . . . . . . . . . . . . . . . . . 67

6 Interconnection of port-Hamiltonian systems 69


6.1 Composition of Dirac structures . . . . . . . . . . . . . . 70
6.2 Interconnection of port-Hamiltonian systems . . . . . . 72

7 Port-Hamiltonian systems and passivity 75


7.1 Linear port-Hamiltonian systems . . . . . . . . . . . . . 77
7.2 Available and required storage . . . . . . . . . . . . . . 79
7.3 Shifted port-Hamiltonian systems and passivity . . . . . 81

8 Conserved quantities and algebraic constraints 83


8.1 Casimirs of conservative port-Hamiltonian systems . . . 84
8.2 Linear resistive structures and the dissipation obstacle . 85
8.3 Algebraic constraints . . . . . . . . . . . . . . . . . . . . 86
8.4 Elimination of algebraic constraints . . . . . . . . . . . . 87

9 Incrementally port-Hamiltonian systems 91


9.1 Incrementally port-Hamiltonian systems . . . . . . . . . 92
9.2 Connections with incremental and differential passivity . 96
9.3 Composition of maximal monotone relations . . . . . . . 98

10 Input-output Hamiltonian systems 101


10.1 Input-output Hamiltonian systems with dissipation . . . 101
10.2 Positive feedback interconnection and stability . . . . . 107

11 Pseudo-gradient representations 111


11.1 Towards the Brayton-Moser equations . . . . . . . . . . 112
11.2 Geometry of the Brayton-Moser equations . . . . . . . . 115
11.3 Interconnection of gradient systems . . . . . . . . . . . 117

11.4 Generation of power-based Lyapunov functions . . . . . 117

12 Port-Hamiltonian systems on graphs 119


12.1 Background on graphs . . . . . . . . . . . . . . . . . . . 120
12.2 Mass-spring-damper systems . . . . . . . . . . . . . . 122
12.3 Swing equations for power grids . . . . . . . . . . . . . 128
12.4 Available storage . . . . . . . . . . . . . . . . . . . . . . 129
12.5 Analysis of port-Hamiltonian systems on graphs . . . . 132
12.6 Symmetry reduction . . . . . . . . . . . . . . . . . . . . 137
12.7 The graph Dirac structures and interconnection . . . . 140
12.8 The Kirchhoff-Dirac structure . . . . . . . . . . . . . . . 141
12.9 Topological analogies . . . . . . . . . . . . . . . . . . . 145

13 Switching port-Hamiltonian systems 147


13.1 Switching port-Hamiltonian systems . . . . . . . . . . . 148
13.2 Jump rule for switching port-Hamiltonian systems . . . . 152
13.3 Charge and flux transfer in switched RLC circuits . . . . 155
13.4 The jump rule for switched mechanical systems . . . . . 159

14 Distributed-parameter systems 163


14.1 The Stokes-Dirac structure . . . . . . . . . . . . . . . . 164
14.2 Distributed-parameter port-Hamiltonian systems . . . . 166
14.3 Presence of sources and dissipation . . . . . . . . . . . 170
14.4 Conservation laws . . . . . . . . . . . . . . . . . . . . . 174
14.5 Covariant formulation of port-Hamiltonian systems . . . 176

15 Control of port-Hamiltonian systems 179


15.1 Control by interconnection . . . . . . . . . . . . . . . . . 179
15.2 Energy transfer control . . . . . . . . . . . . . . . . . . . 181
15.3 Stabilization by Casimir generation . . . . . . . . . . . . 182
15.4 The dissipation obstacle and beyond . . . . . . . . . . . 187
15.5 Passivity-based control . . . . . . . . . . . . . . . . . . 189
15.6 Energy-shaping and damping injection . . . . . . . . . . 189
15.7 Interconnection and damping assignment . . . . . . . . 192
15.8 Power-shaping control . . . . . . . . . . . . . . . . . . . 195

Appendices 199

A Proofs 201
A.1 Proof of Proposition 2.1 . . . . . . . . . . . . . . . . . . 201
A.2 Proof of Proposition 2.2 . . . . . . . . . . . . . . . . . . 202
A.3 Extension of Proposition 2.1 . . . . . . . . . . . . . . . . 202

B Physical meaning of efforts and flows 203

References 207
Abstract

An up-to-date survey of the theory of port-Hamiltonian systems


is given, emphasizing novel developments and relationships with
other formalisms. Port-Hamiltonian systems theory yields a system-
atic framework for network modeling of multi-physics systems. Ex-
amples from different areas show the range of applicability. While the
emphasis is on modeling and analysis, the last part provides a brief
introduction to control of port-Hamiltonian systems.
1 Introduction

1.1 Origins of port-Hamiltonian systems theory

The theory of port-Hamiltonian systems brings together different tra-


ditions in physical systems modeling and analysis.
Firstly, from a modeling perspective it originates in the theory
of port-based modeling as pioneered by Henry Paynter in the late
1950s Paynter (1960); Breedveld (1984, 2009). Port-based modeling
is aimed at providing a unified framework for the modeling of sys-
tems belonging to different physical domains (mechanical, electri-
cal, hydraulic, thermal, etc.). This is achieved by recognizing energy
as the ’lingua franca’ between physical domains, and by identifying
ideal system components capturing the main physical characteristics
(energy-storage, energy-dissipation, energy-routing, etc.). Historically
port-based modeling comes along with an insightful graphical nota-
tion emphasizing the structure of the physical system as a collection
of ideal components linked by edges capturing the energy-flows be-
tween them. In analogy with chemical species these edges are called
bonds, and the resulting graph is called a bond graph. Motivated by,
among others, electrical circuit theory the energy flow along the bonds
is represented by pairs of variables, whose product equals power. Typical
examples of such pairs of variables (in different physical domains)


are voltages and currents, velocities and forces, flows and pressures,
etc. A port-Hamiltonian formulation of bond graph models can be
found in Golo et al. (2003). Port-based modeling can be seen to be a
further abstraction of the theory of across and through variables (cf.
MacFarlane (1970)) in the network modeling of physical systems1.

1 ‘Abstraction’ since the theory of across and through variables emphasizes the balance laws in the system; an aspect which is usually not emphasized in port-based modeling. In Chapter 12 and in Chapter 14 we will see how port-Hamiltonian systems can be also defined starting with the basic balance laws of the system.
A second origin of port-Hamiltonian systems theory is geomet-
ric mechanics; see e.g. Arnol’d (1978); Abraham & Marsden (1994);
Marsden & Ratiu (1999); Bloch (2003); Bullo & Lewis (2004). In this
branch of mathematical physics the Hamiltonian formulation of clas-
sical mechanics is formalized in a geometric way. The basic paradigm
of geometric mechanics is to represent Hamiltonian dynamics in a
coordinate-free manner using a state space (commonly the phase
space of the system) endowed with a symplectic or Poisson struc-
ture, together with a Hamiltonian function representing energy. This
geometric approach has led to an elegant and powerful theory for
the analysis of the complicated dynamical behavior of Hamiltonian
systems, displaying their intrinsic features, such as symmetries and
conserved quantities, in a transparent way. Also infinite-dimensional
Hamiltonian systems have been successfully cast into this framework
Olver (1993).
Finally, a third pillar underlying the framework of port-
Hamiltonian systems is systems and control theory, emphasizing dy-
namical systems as being open to interaction with the environ-
ment (e.g. via inputs and outputs), and as being susceptible to con-
trol interaction. The description and analysis of physical subclasses
of control systems has roots in electrical network synthesis the-
ory. Its geometric formulation was especially pioneered in Brockett
(1977); see e.g. van der Schaft (1984, 1982a,b); Crouch (1981, 1984);
Crouch & van der Schaft (1987); Nijmeijer & van der Schaft (1990);
Maschke & van der Schaft (1992); Bloch (2003); Bullo & Lewis (2004)
for some of the main developments, especially with regard to the analysis
and control of nonlinear mechanical systems (e.g. with nonholonomic
kinematic constraints).
A main difference of port-Hamiltonian systems theory with geo-
metric mechanics lies in the fact that for port-Hamiltonian systems
the underlying geometric structure is not necessarily the symplec-
tic structure of the phase space, but in fact is determined by the in-
terconnection structure of the system. In this sense port-Hamiltonian
systems theory intrinsically merges geometry with network theory. The
appropriate geometric object appears to be the notion of a Dirac
structure, which was explored before in Weinstein (1983); Courant
(1990); Dorfman (1993) as a geometric object generalizing at the
same time symplectic and Poisson structures2 . The usefulness of
Dirac structures for a geometric theory of port-based modeling and
analysis was first recognized in van der Schaft & Maschke (1995);
Bloch & Crouch (1999); Dalsmo & van der Schaft (1999). Among oth-
ers it has led to a theory of Hamiltonian differential-algebraic equations.
Extensions to the distributed-parameter case were first explored in
van der Schaft & Maschke (2002). A key property of Dirac structures
is the fact that compositions of Dirac structures are again Dirac struc-
tures. This has the crucial consequence that the power-conserving
interconnection of port-Hamiltonian systems (through their external
ports) is again a port-Hamiltonian system; a fundamental property
for network modeling and control.
Another main extension of port-Hamiltonian systems theory with
respect to geometric mechanics is the inclusion of energy-dissipating ele-
ments, which are largely absent in classical Hamiltonian systems. This
greatly broadens the range of applicability of port-Hamiltonian sys-
tems compared to that of Hamiltonian systems in analytical dynamics.
In fact, the framework of port-based modeling and port-Hamiltonian
systems emerges as a general theory for the modeling of complex
physical systems as encountered in many areas of engineering3.
Furthermore, because of its emphasis on energy and power as the lingua


franca between different physical domains, port-Hamiltonian systems
theory is ideally suited for a systematic mathematical treatment of
multi-physics systems, i.e., systems containing subsystems from differ-
ent physical domains (mechanical, electro-magnetic, hydraulic, chem-
ical, etc.).

3 It should be added here that our emphasis in physical system modeling is on ‘modeling for control’. Since the addition of control will anyway modify the dynamical properties of the system the emphasis is on relatively simple models reflecting the main dynamical characteristics of the system.
Apart from offering a systematic and insightful framework for
modeling and analysis of multi-physics systems, port-Hamiltonian
systems theory provides a natural starting point for control. Especially
in the nonlinear case it is widely recognized that physical properties of
the system (such as balance and conservation laws and energy consid-
erations) should be exploited and/or respected in the design of control
laws which are robust and physically interpretable. Port-Hamiltonian
systems theory offers a range of concepts and tools for doing this, in-
cluding the shaping of energy-storage and energy-dissipation, as well
as the interpretation of controller systems as virtual system compo-
nents. In this sense, port-Hamiltonian theory is a natural instance of
a ’cyber-physical’ systems theory: it admits the extension of physi-
cal system models with virtual (’cyber’) system components, which
may or may not mirror physical dynamics. From a broader perspec-
tive port-Hamiltonian systems theory is also related to multi-physics4
network modeling approaches aimed at numerical simulation, such as
20-sim (based on bond graphs) and Modelica/Dymola.

4 For specific physical domains (e.g., mechanical, electrical, chemical, hydraulic, ..) there are many network modeling and simulation software packages available.

1.2 Summary of contents

In these lecture notes we want to highlight a number of directions in


port-Hamiltonian systems theory. Previous textbooks covering mate-
rial on port-Hamiltonian systems are van der Schaft (2000) (Chapter
4), and Duindam et al. (2009). Especially Duindam et al. (2009) goes
into more detail about a number of topics, and presents a wealth of
material on various application domains. The current lecture notes


present an up-to-date account of the basic theory, emphasizing novel
developments.
Chapter 2 provides the basic definition of port-Hamiltonian sys-
tems and elaborates on the concept of a Dirac structure. Chapter 3
deals with Dirac structures on manifolds, and the resulting defini-
tion of port-Hamiltonian systems on manifolds. A brief discussion
concerning integrability of Dirac structures is given, and the relation
with the theory of kinematic constraints is provided. Chapter 4 de-
tails the special, but important, subclass of input-state-output port-
Hamiltonian systems arising from the assumption of absence of alge-
braic constraints and the linearity of energy-dissipation relations. The
resulting class of port-Hamiltonian systems is often taken as the start-
ing point for the development of control theory for port-Hamiltonian
systems.
With the general definition of port-Hamiltonian systems given in a
geometric, coordinate-free, way, it is for many purposes important to
represent the resulting dynamics in suitable coordinates, and in a form
that is convenient for the system at hand. Chapter 5 shows how this
amounts to finding a suitable representation of the Dirac structure,
and how one can move from one representation to another. In Chapter
6 it is discussed how the power-conserving interconnection of port-
Hamiltonian systems again defines a port-Hamiltonian system. This
fundamental property of port-Hamiltonian system is based on the re-
sult that the composition of Dirac structures is another Dirac struc-
ture. Chapter 7 investigates the close connection of port-Hamiltonian
systems with the concept of passivity, which is a key property for
analysis and control. In Chapter 8 other structural properties of port-
Hamiltonian systems are studied, in particular the existence of con-
served quantities (Casimirs) and algebraic constraints.
Chapter 9 takes a step in a new direction by replacing the compo-
sition of the Dirac structure and the resistive structure by a general
maximal monotone relation, leading to the novel class of incremen-
tally port-Hamiltonian systems. In Chapter 10 the relation of port-
Hamiltonian systems with the older class of input-output Hamiltonian
systems is explored, and the key property of preservation of sta-


bility of input-output Hamiltonian systems under positive feedback
(in contrast with negative feedback for port-Hamiltonian and pas-
sive systems) is discussed. Finally Chapter 11 makes the connection
of port-Hamiltonian systems to another class of systems, namely the
pseudo-gradient systems extending the Brayton-Moser equations of
electrical circuits.
Chapter 12 deals with port-Hamiltonian systems on graphs, start-
ing from the basic observation that the incidence structure of the graph
defines a Poisson structure on the space of flow and effort variables
associated to the vertices and edges of the graph. This is illustrated
on a number of examples. In Chapter 13 the framework is extended
to switching port-Hamiltonian systems, including a formulation of
a jump rule generalizing the classical charge and flux conservation
principle from electrical circuits with switches. Chapter 14 deals with
the port-Hamiltonian formulation of distributed-parameter systems,
based on the formulation of the Stokes-Dirac structure expressing
the basic balance laws. Finally, Chapter 15 gives an introduction to
the control theory of port-Hamiltonian systems, exploiting their basic
properties such as passivity and existence of conserved quantities.

What is not in these lecture notes


The overview of port-Hamiltonian systems theory presented in
this article is far from being complete: a number of topics are
not treated at all, or only superficially. Notable omissions are the
theory of scattering of port-Hamiltonian systems Stramigioli et al.
(2002); van der Schaft (2009), treatment of symmetries and con-
servation laws of port-Hamiltonian systems van der Schaft (1998);
Blankenstein & van der Schaft (2001), controllability and observ-
ability for input-output Hamiltonian systems and port-Hamiltonian
systems van der Schaft (1984, 1982a,b); Maschke & van der Schaft
(1992), realization theory of input-output Hamiltonian systems
and port-Hamiltonian systems Crouch & van der Schaft (1987),
port-Hamiltonian formulation of thermodynamical systems
Eberard et al. (2007), model reduction of port-Hamiltonian sys-
tems Polyuga & van der Schaft (2011), well-posedness and stability

of distributed-parameter port-Hamiltonian systems Villegas (2007);


Jacob & Zwart (2012), and structure-preserving discretization of
distributed-parameter port-Hamiltonian systems Golo et al. (2004);
Seslija et al. (2012). Furthermore, Chapter 15 on control of port-
Hamiltonian systems only highlights a number of the developments
in this area; for further information we refer to the extensive literature
including Ortega et al. (2001a,b); Duindam et al. (2009); Ortega et al.
(2008).
2 From modeling to port-Hamiltonian systems

2.1 Introduction

This chapter will provide the basic definition of port-Hamiltonian sys-


tems, starting from port-based network modeling. In order to moti-
vate the general developments we start with a simple example, the
ubiquitous mass-spring system.

Example 2.1 (Mass-spring system). Consider a point mass with mass


m, moving in one direction, without friction, under the influence of a
spring force corresponding to a linear spring with spring constant k.
The standard way of modeling the system is to start with the con-
figuration z ∈ R of the mass, and to write down the second-order
differential equation
mz̈ = −k(z − z0 ), (2.1)
where z0 is the rest length of the spring.
Port-based network modeling (as originating in particular in the
work of Paynter, see Paynter (1960)) takes a different (but equivalent)
point of view by regarding the mass-spring system as the interconnec-
tion of two subsystems, both of which store energy, namely the spring
system storing potential energy and the mass system storing kinetic energy;
see Figure 2.1.

Figure 2.1: Mass-spring system as the interconnection of two subsystems.

For the spring system the potential energy is ex-
pressed in terms of the elongation q of the spring. In case of a linear
spring, satisfying Hooke’s law, this potential energy is ½ kq². This leads
to the system equations

    spring:   q̇ = −fk ,
              ek = d/dq ( ½ kq² ) ,
where1 −fk denotes the velocity of the endpoint of the spring (where
it is attached to the mass), and ek = kq denotes the spring force at this
endpoint.

1 The reason for the minus sign in front of fk is that we want the product fk ek to be incoming power with respect to the interconnection. This sign convention will be adopted throughout.
For the mass system we obtain similar equations using the kinetic
energy p²/(2m) expressed in terms of the momentum p of the mass

    mass:     ṗ = −fm ,
              em = d/dp ( p²/(2m) ) ,

where −fm denotes the force exerted on the mass, and em = p/m is the
velocity of the mass.
Finally, we couple the spring and the mass subsystems to each other
through the interconnection element using Newton’s third law (action
= −reaction)

    interconnection:   −fk = em ,
                        fm = ek ,

leading to the final equations for the total system


 
    [ q̇ ]   [  0  1 ] [ ∂H/∂q (q, p) ]
    [ ṗ ] = [ −1  0 ] [ ∂H/∂p (q, p) ] ,      H(q, p) = ½ kq² + p²/(2m) ,      (2.2)
which are the well-known Hamiltonian equations for the mass-spring
system. Clearly, (2.2) is equivalent to the second-order model (2.1) by
the correspondence q = z − z0 .
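
As a minimal numerical illustration, the Hamiltonian dynamics (2.2) can be integrated and the conservation of H along solutions verified (there is no dissipation and no external port here). The Python sketch below uses arbitrary illustrative values for m, k and the initial condition.

```python
# Minimal simulation of the mass-spring port-Hamiltonian model (2.2).
# Parameter values and the initial condition are arbitrary illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 2.0

def rhs(t, x):
    q, p = x
    dHdq, dHdp = k * q, p / m              # gradient of H(q, p) = ½ k q² + p²/(2m)
    return [dHdp, -dHdq]                   # (q̇, ṗ) = J grad H with J = [[0, 1], [-1, 0]]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
H = 0.5 * k * sol.y[0]**2 + sol.y[1]**2 / (2 * m)
print("max |H(t) - H(0)| =", np.max(np.abs(H - H[0])))   # ≈ 0: energy is conserved
```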

Although extremely simple, this example reflects some of the


main characteristics of port-based network modeling: the system is
regarded as the coupling of the energy-storing spring system with
the energy-storing mass system through a power-conserving inter-
connection element which is routing the power from the mass system
to the spring system and conversely. In the sequel we will see how
this extends to general physical systems, replacing the above power-
conserving interconnection element by the geometric notion of a Dirac
structure, and by adding, next to the energy-storing elements, also
energy-dissipating elements2.

2 In this example already needed if we want to consider mass-spring-damper systems.
In general, in port-based modeling the, possibly large-scale, phys-
ical system is regarded as the interconnection of three types of ideal
components3: (1) energy-storing elements, (2) energy-dissipating (resis-
tive) elements, and (3) energy-routing elements.

3 These components do not necessarily match with the actual physical components. E.g., an inductor in a circuit may have, next to its energy-storing characteristics, also an energy-dissipation feature which needs to be taken into account for the modeling. This will mean that we model the inductor by an ideal inductor only reflecting the energy-storage together with an ideal resistor, accounting for the non-negligible energy-dissipation present in the physical inductor.
Simplest examples of energy-storing elements are ideal inductors,
capacitors, masses, and springs. Examples of energy-dissipating ele-
ments are resistors and dampers, while examples of energy-routing el-
ements are transformers, gyrators and ideal constraints. Thus energy-
dissipating elements are static (no dynamics involved), and energy-
routing elements are neither energy-storing nor energy-dissipating but
only redirect the power flow in the overall system.

Figure 2.2: Port-Hamiltonian system.
For the port-Hamiltonian formulation (see also Golo et al. (2003))
the energy-storing elements will be grouped into a single object de-
noted by S (’storage’), and similarly the energy-dissipating elements
are grouped into a single object denoted by R (’resistive’). Finally, the
interconnection of all the energy-routing elements can be considered
as one energy-routing structure4 denoted by D (to be formalized by
the geometric notion of a Dirac structure).
The essence of port-Hamiltonian systems modeling is thus rep-
resented in Fig. 2.2. The energy-storing elements S and the energy-
dissipating (resistive) elements R are linked to a central interconnec-
tion (energy-routing) structure D. This linking takes place via pairs
(f, e) of equally dimensioned vectors of flow and effort variables. A
pair (f, e) of vectors of flow and effort variables is called a port, and
the total set of variables f, e is called the set of port variables. We refer
to Appendix B for the physical meaning of efforts and flows in various
physical domains.
Fig. 2.2 shows three ports: the port (fS , eS ) linking to energy-
storage, the port (fR , eR ) corresponding to energy-dissipation, and the
external port (fP , eP ), by which the system interacts with its environ-
ment (including controller action). The scalar quantities eTS fS , eTR fR ,
and eTP fP denote the instantaneous powers transmitted through the
links (the ’bonds’ in bond graph terminology).
4 Called generalized junction structure in bond graph terminology Breedveld (1984); Golo et al. (2003).

In the following sections, we will discuss the building blocks of


port-Hamiltonian systems theory in more detail. We will start with
the fundamental notion of a Dirac structure, then treat subsequently
energy-storing elements, energy-dissipating (resistive) elements, and
external ports, and finally end up with the basic geometric definition
of a port-Hamiltonian system, together with a number of basic exam-
ples.

2.2 Port-based modeling and Dirac structures

Central in the definition of a port-Hamiltonian system is the notion


of a Dirac structure, depicted in Fig. 2.2 by D. In electrical circuit par-
lance, the Dirac structure acts as a ‘printed circuit board’ (without the
energy-storing and energy-dissipating components), and provides the
‘wiring’ for the overall system.
A basic property of a Dirac structure is power conservation: the Dirac
structure links the various port (flow and effort) variables f and e
in such a way that the total power eT f is equal to zero. For the for-
mal definition of a Dirac structure, we start with an abstract finite-
dimensional linear space of flows F.5 The elements of F will be de-
noted by f ∈ F, and are called flow vectors. The space of efforts is given
by the dual6 linear space E := F ∗ , and its elements are denoted by
e ∈ E. The total space of flow and effort variables is F × E, and will be
called the space of port variables. The power on the total space of port
variables is defined by

P = < e | f >, (f, e) ∈ F × E, (2.3)

where < e | f > denotes the duality product, that is, the linear functional
e ∈ E = F ∗ acting on f ∈ F. In the usual case of F = Rk this amounts to

    < e | f > = eᵀ f,

where both f ∈ Rk and e ∈ (Rk )∗ are represented as column vectors.

5 Usually one can take F = Rk. However, there are interesting cases where the coordinate-free viewpoint is really rewarding, e.g., in rigid body dynamics the space of flows is given as the space of twists F = se(3), the Lie algebra of the matrix group SE(3), while the space of efforts is given by the space of wrenches E = se∗(3), the dual Lie algebra. We refer to Chapter 3 for some developments in this direction.

6 The definition E = F ∗ for the effort space is in some sense the minimal required structure. All definitions and results directly extend to the case that F has an inner-product structure. In this case we may take E = F with the duality product < e | f > replaced by the inner product ⟨e, f⟩.
Definition 2.1. Consider a finite-dimensional linear space F with E =
F ∗ . A subspace D ⊂ F × E is a Dirac structure if
1. < e | f >= 0, for all (f, e) ∈ D,

2. dim D = dim F.
Property (1) corresponds to power-conservation, and expresses the
fact that the total power entering (or leaving) a Dirac structure is zero.
It can be shown that the maximal dimension of any subspace D ⊂ F × E
satisfying Property (1) is equal to dim F. Instead of proving this di-
rectly, we will give an equivalent definition of a Dirac structure from
which this claim immediately follows. Furthermore, this equivalent
definition of a Dirac structure has the advantage that it generalizes to
the case of an infinite-dimensional linear space F, leading to the defini-
tion of an infinite-dimensional Dirac structure. This will be instrumen-
tal in the definition of a distributed-parameter port-Hamiltonian system
in Chapter 14.
In order to give this equivalent characterization of a Dirac struc-
ture, we look more closely at the geometric structure of the total space
of flow and effort variables F × E. Related to the definition of power,
there exists a canonically defined bilinear form ≪, ≫ on the space F ×E,
defined as
≪ (f a , ea ), (f b , eb ) ≫:=< ea | f b > + < eb | f a >, (2.4)
with (f a , ea ), (f b , eb ) ∈ F × E. Note that this bilinear form is indefi-
nite, that is, ≪ (f, e), (f, e) ≫ may be positive or negative. It is non-
degenerate in the sense that ≪ (f a , ea ), (f b , eb ) ≫= 0 for all (f b , eb )
implies that (f a , ea ) = 0.
Proposition 2.1 (Courant (1990); Dorfman (1993)). A Dirac structure
on F × E is a subspace D ⊂ F × E such that
D = D ⊥⊥ , (2.5)

where ⊥⊥ denotes the orthogonal companion7 with respect to the bi-


linear form ≪, ≫.
Alternatively, D ⊂ F × E, with F and E finite-dimensional, is a
Dirac structure if and only if it satisfies Property 1 in Definition 2.1
and has maximal dimension with respect to this property, that is, if the
subspace D ′ also satisfies Property 1 then dim D ′ ≤ dim D. This maxi-
mal dimension is equal to dim F = dim E.
For the proof we refer to Appendix A.
From a mathematical point of view, there are a number of direct
examples of Dirac structures D ⊂ F × E. We leave the simple proofs
as an exercise to the reader.
1. Let J : E → F be a skew-symmetric linear mapping, that is,
J = −J ∗ , where J ∗ : E → E ∗ = F is the adjoint mapping. Then
    graph J := { (f, e) ∈ F × E | f = Je }

is a Dirac structure.

2. Let ω : F → E be a skew-symmetric linear mapping, then


    graph ω := { (f, e) ∈ F × E | e = ωf }

is a Dirac structure.

3. Let K ⊂ F be any subspace. Define


    K⊥ = { e ∈ E | < e | f > = 0 for all f ∈ K }    (2.6)

Then K × K⊥ ⊂ F × E is a Dirac structure.


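A quick numerical sanity check of the first construction (the graph of a skew-symmetric mapping J): every pair (f, e) = (Je, e) satisfies < e | f > = 0, and the graph has dimension dim F. The sketch below uses an arbitrary skew-symmetric matrix on F = R³.

```python
# Check that graph J = {(f, e) | f = J e}, with J = -J^T, is a Dirac structure on R^3.
import numpy as np

J = np.array([[ 0.0,  1.0, -2.0],
              [-1.0,  0.0,  3.0],
              [ 2.0, -3.0,  0.0]])       # arbitrary skew-symmetric matrix

rng = np.random.default_rng(0)
for _ in range(5):
    e = rng.standard_normal(3)
    f = J @ e
    assert abs(e @ f) < 1e-12            # property 1: <e | f> = e^T J e = 0

# property 2: dim D = dim F (= 3); D is parametrized injectively by e -> (Je, e)
basis = np.vstack([np.hstack([J @ v, v]) for v in np.eye(3)])
print("dim D =", np.linalg.matrix_rank(basis))   # prints 3
```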
The last example of a Dirac structure is formalized as follows:
Definition 2.2. A Dirac structure D ⊂ F × E is separable if
< ea | fb >= 0, (2.7)
for all (fa , ea ), (fb , eb ) ∈ D.
7 A subspace D such that the bilinear form ≪, ≫ is zero restricted to this subspace
(or equivalently D ⊂ D⊥⊥ ) is sometimes called an isotropic subspace. In this terminology
a Dirac structure is a maximal isotropic subspace.

Separable Dirac structures have the following simple geometric


characterization (see Appendix A for a proof).

Proposition 2.2. Consider a separable Dirac structure D ⊂ F × E.


Then,
D = K × K⊥ , (2.8)
for some subspace K ⊂ F, where K⊥ is defined as in (2.6). Conversely,
any subspace D as in (2.8) for some subspace K ⊂ F is a separable
Dirac structure.

Note that (2.7) can be regarded as a generalized statement of Tel-


legen’s theorem for electrical circuits (with f denoting the vector of
currents, and e denoting the vector of voltages).
A typical instance of a separable Dirac structure is the following.

Proposition 2.3. Let A : V → W be a linear map between the linear


spaces V and W with adjoint mapping A∗ : W ∗ → V ∗ , that is,

< w∗ | Av >=< A∗ w∗ | v >,

for all v ∈ V, w∗ ∈ W ∗ (where, as before, < · | · > denotes the duality


product between the dual spaces W and W ∗ , respectively V and V ∗ ).
Identify (V × W)∗ = V ∗ × W ∗ . Then,
    D := { (v, w, v ∗ , w∗ ) ∈ (V × W) × (V ∗ × W ∗ ) | Av = w, v ∗ = −A∗ w∗ }

is a separable Dirac structure.

Remark 2.1. In some cases, e.g. 3D-mechanical systems, the above


notion of Dirac structures on vector spaces will turn out not to be suf-
ficient. In Chapter 3, we will discuss the extension of the definition of
constant Dirac structures on vector spaces to that of Dirac structures on
manifolds. Basically, a Dirac structure on a manifold will be the union
of Dirac structures on the product of the tangent and cotangent space
at every point of the manifold. As a result the Dirac structure will be
modulated by the state.

A crucial property of Dirac structures is the fact that the composition


of Dirac structures again defines a Dirac structure, see Chapter 6. This

has the consequence that we can interconnect all energy-routing ele-


ments to each other and that the resulting element (generalized junction
structure in bond graph parlance) will again define a Dirac structure.
Finally, we will discuss a number of physical examples of Dirac
structures.

2.2.1 Transformers, gyrators, ideal constraints, junctions


A transformer, see Paynter (1960), Breedveld (1984), is an element
linking two scalar bonds with flow and effort variables (f1 , e1 ) ∈ R2
and (f2 , e2 ) ∈ R2 by
f2 = αf1 ,
(2.9)
e1 = −αe2 ,
with α a constant, called the transformer ratio. The subspace defined
by (2.9) is easily checked to be a separable Dirac structure. Also the
vector version of (2.9)
f b = Af a ,
(2.10)
ea = −AT eb ,
with (f a , ea ) and (f b , eb ) pairs of column vectors of flow variables and
effort variables of the same dimension, and A a matrix of appropriate
dimensions, is immediately seen to define a Dirac structure.
Similarly, a gyrator is given by the relations

f1 = βe2 ,
(2.11)
βe1 = −f2 ,

which again is defining a Dirac structure (but not a separable one).


The resulting unit gyrator for β = 1 is called the symplectic gyrator
Breedveld (1984). The multi-dimensional version is given as the Dirac
structure defined by
f a = Geb ,
(2.12)
GT ea = −f b ,
where G is a matrix of appropriate dimensions.
Also ideal effort and flow constraints are examples of Dirac struc-
tures. Let (f, e) denote a (multi-dimensional) pair of flows and efforts.

Then, the ideal effort constraint


    D := { (f, e) | e = 0 }

defines a Dirac structure D, and the same holds for the ideal flow
constraint

    D := { (f, e) | f = 0 } .
Finally, the equations of a so-called k-dimensional 0-junction (termi-
nology from bond graph theory, cf. Paynter (1960); Breedveld (1984))

e1 = e2 = · · · = ek , f1 + f2 + · · · + fk = 0,

and dually of a 1-junction

f1 = f2 = · · · = fk , e1 + e2 + · · · + ek = 0,

are immediately seen to define separable Dirac structures.
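
Power conservation of the transformer (2.9) and the gyrator (2.11) is easily verified numerically: e1 f1 + e2 f2 vanishes identically on both subspaces. The constants α and β in the sketch below are arbitrary illustrative values.

```python
# Power conservation for the transformer (2.9) and gyrator (2.11) relations.
import numpy as np

alpha, beta = 3.0, 0.5                    # arbitrary transformer ratio / gyrator constant
rng = np.random.default_rng(1)

for _ in range(5):
    # transformer: f2 = alpha f1,  e1 = -alpha e2
    f1, e2 = rng.standard_normal(2)
    f2, e1 = alpha * f1, -alpha * e2
    assert abs(e1 * f1 + e2 * f2) < 1e-12

    # gyrator: f1 = beta e2,  beta e1 = -f2
    e1g, e2g = rng.standard_normal(2)
    f1g, f2g = beta * e2g, -beta * e1g
    assert abs(e1g * f1g + e2g * f2g) < 1e-12

print("total power e1*f1 + e2*f2 vanishes for both elements")
```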

2.2.2 Kirchhoff’s laws as separable Dirac structures


Consider an electrical circuit with m branches (edges) and k nodes
(vertices) where the current through the i-th branch is denoted by Ii
and the voltage across the i-th branch is Vi . Collect the currents in an
m-dimensional column vector I and the voltages in an m-dimensional
column vector V . Then Kirchhoff’s current laws can be written as

BI = 0, (2.13)

with B the k × m incidence matrix of the circuit graph. Furthermore,


Kirchhoff’s voltage laws can be written as follows. All allowed vectors
of voltages V in the circuit are given as

V = B T λ, (2.14)

with the vector λ ranging through Rk . It is immediately seen that the


total space of currents and voltages allowed by Kirchhoff’s current
and voltage laws,
    D := { (I, V ) | BI = 0, V = Bᵀλ for some λ } ,    (2.15)

defines a separable Dirac structure. Consequently,

(V a )T I b + (V b )T I a = 0,

for all pairs (I a , V a ), (I b , V b ) ∈ D. In particular, by taking V a , I b equal


to zero, we obtain (V b )T I a = 0 for all I a satisfying (2.13) and all V b
satisfying (2.14). This is Tellegen’s theorem from circuit theory.
Further theory regarding Kirchhoff’s laws and electrical circuits
can be found in Chapter 12.
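
Tellegen's theorem can also be illustrated numerically from the representation (2.15): every current vector in the kernel of B is orthogonal to every voltage vector in the image of Bᵀ. The incidence matrix in the sketch below describes a small illustrative graph (three nodes in a single loop) chosen only for demonstration.

```python
# Tellegen's theorem for the Dirac structure (2.15): V^T I = 0 whenever
# B I = 0 and V = B^T lambda.  The incidence matrix corresponds to an
# arbitrary three-node circuit graph, used purely for illustration.
import numpy as np
from scipy.linalg import null_space

B = np.array([[ 1,  0, -1],
              [-1,  1,  0],
              [ 0, -1,  1]])              # node-branch incidence matrix

rng = np.random.default_rng(2)
kerB = null_space(B)                       # currents satisfying Kirchhoff's current law
I = kerB @ rng.standard_normal(kerB.shape[1])
V = B.T @ rng.standard_normal(3)           # voltages satisfying Kirchhoff's voltage law
print("V^T I =", V @ I)                    # ≈ 0
```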

2.2.3 Kinematic pairs


The equations describing a kinematic pair (e.g., a revolute or prismatic
joint) in a three-dimensional mechanical system are, from the Dirac
structure point of view, of the same type as Kirchhoff’s laws.8

8 However, for 3D mechanical systems the matrix A will often depend on the configuration coordinates; thus defining a Dirac structure on a manifold, see Chapter 3.
Indeed, the constraint forces F generated in a (frictionless and in-
finitely stiff) kinematic pair produce no power on the velocities V al-
lowed by the kinematic pair, i.e.,

AT V = 0, F = Aλ, (2.16)

where the columns of A form a basis for the space of allowed reaction
forces, and λ is the vector of reaction force Lagrange multipliers.

2.2.4 The principle of virtual work


The principle of virtual work can be formulated as
    ∑_{i=1}^{n} Fi δqi = 0,    (2.17)

where Fi are the impressed forces, and δqi denotes the virtual displace-
ments that are compatible with the kinematic constraints of the sys-
tem. The expression ∑_{i=1}^{n} Fi δqi equals the infinitesimal work due to
the impressed forces and an infinitesimal displacement. If the kine-
matic constraints of the system are given as Aᵀ δq = 0, with δq =
(δq1 , · · · , δqn )ᵀ, then it follows that the impressed forces are given as
F = Aλ, with F = (F1 , · · · , Fn )T , as in the previous subsection; see
Chapter 3 for more details. We conclude that, like in the case of Kirch-
hoff’s laws in the electrical domain, the principle of virtual work can
be formulated as defining a separable Dirac structure on the product
of the space of virtual displacements and impressed forces.
Originally, the principle of virtual work (2.17) is formulated as an
equilibrium condition. Indeed, a system with configuration coordinates
q = (q1 , q2 , . . . , qn )T , which is subject to forces F (q), is at equilibrium
q̄ if the virtual work ∑_{i=1}^{n} Fi (q̄)δqi corresponding to any admissible
virtual displacement δq from q̄ is equal to zero. In d’Alembert’s prin-
ciple this was extended by adding the inertial forces ṗ to the impressed
forces. This can be interpreted as linking the Dirac structure to energy-
storage (in this case, kinetic energy).

2.3 Energy-storing elements

The energy-storing multi-port element S corresponds to the union of


all the energy-storing elements of the system. The port variables of
the Dirac structure associated with the energy-storing multi-port el-
ement will be denoted by (fS , eS ), where fS and eS are vectors of
equal dimension with their product eTS fS denoting the total power
flowing into the Dirac structure from the energy storing elements
(or, equivalently, minus the total power flowing into the storage ele-
ments). The total energy storage of the system is defined by a state
space X , together with a Hamiltonian function H : X → R de-
noting the energy. For now, we will assume that the state space X
is finite-dimensional (in Chapter 14 we will discuss the extension to
the infinite-dimensional case). In general, see Chapter 3, X will be a
smooth (finite-dimensional) manifold, but in the present chapter X
will be assumed to be a linear space.
The vector of flow variables of the energy-storing multi-port ele-
ment is given by the rate ẋ of the state x ∈ X . Thus for any current
state x ∈ X the flow vector ẋ will be an element of the linear space
Tx X , the tangent space of X at x ∈ X . By choosing local coordinates

x = (x1 , . . . , xn )T for X this means that the vector of flow variables is


given by the vector ẋ = (ẋ1 , . . . , ẋn )T . In the case of a linear state space
X the tangent space Tx X can be identified with X , and we can take
(global) linear coordinates for X , thus identifying X with Rn . Further-
more, the vector of effort variables of the energy-storing multi-port
element is given by the gradient vector ∂H/∂x (x) ∈ Tx∗ X , the dual space
of the tangent space Tx X . In coordinates x = (x1 , . . . , xn )ᵀ for X this
means that the vector of effort variables is given by the vector ∂H/∂x (x) of
partial derivatives of H with respect to x1 , . . . , xn (which we through-
out write as a column vector).
We obtain the following power-balance for the energy-storing
multi-port element:

    d/dt H = < ∂H/∂x (x) | ẋ > = ∂ᵀH/∂x (x) ẋ.    (2.18)

The interconnection of the energy-storing elements to the storage port
(fS , eS ) of the Dirac structure is accomplished by setting

    fS = −ẋ   and   eS = ∂H/∂x (x).    (2.19)

Hence, the power-balance (2.18) can also be written as

    d/dt H = ∂ᵀH/∂x (x) ẋ = −eSᵀ fS .    (2.20)
Remark 2.2. The minus sign in (2.19) is inserted in order to have a
consistent power flow convention: ∂ᵀH/∂x (x)ẋ is the power flowing into
the energy-storing elements, whereas eSᵀ fS is the power flowing into
the Dirac structure.

See Appendix B for details on how to set up the Hamiltonian for


energy-storing elements of various physical domains.

2.4 Energy-dissipating (resistive) elements

The second multi-port element R corresponds to internal energy dis-


sipation (due to friction, resistance, etc.), and its port variables are
denoted by (fR , eR ). These port variables are terminated on a static

energy-dissipating (resistive) relation R. In general, a resistive rela-


tion will be a subset
R ⊂ FR × ER ,
with the property that9
< eR | fR >= eTR fR ≤ 0, (2.21)
for all (fR , eR ) ∈ R. We will call the subset R an energy-dissipating
relation, or a resistive structure. Since the Dirac structure of a port-
Hamiltonian system (without external port) satisfies the power-
balance
eTS fS + eTR fR = 0, (2.22)
this leads by substitution of the equations (2.20) and (2.21) to
    d/dt H = −eSᵀ fS = eRᵀ fR ≤ 0.    (2.23)
An important special case of energy-dissipating relations occurs when
the resistive relation can be expressed as the graph of an input-output
mapping, e.g.,
fR = −F (eR ), (2.24)
with F : Rm → Rm satisfying eRᵀ F (eR ) ≥ 0, for all eR ∈ Rm (with m
denoting the number of energy-dissipating elements). Sometimes the
mapping F is derivable from a so-called Rayleigh dissipation function
DR : Rm → R, in the sense that
    F (eR ) = ∂DR /∂eR (eR ).
For linear resistive elements, (2.24) specializes to
fR = −ReR , (2.25)
for some positive semi-definite symmetric matrix R = RT ≥ 0.
9 The sign of the inequality is due to the fact that eRᵀ fR is the power flow associated
to the Dirac structure, not to the energy-dissipating elements. Another way to resolve
this sign problem would be to define an additional pair of flow and effort variables
(f̄R , ēR ) for the energy-dissipating elements satisfying ēRᵀ f̄R ≥ 0, and to interconnect
them, similarly to the case of the energy-storing elements (see Remark 2.2), to the
energy-dissipating port (fR , eR ) of the Dirac structure by setting fR = −f̄R and eR = ēR .

Example 2.2. A linear damper in a mass-spring-damper system is


modeled by an equation fR = −deR with d > 0 the damping con-
stant. An example of a nonlinear energy-dissipating relation is the cu-
bic equation fR = −d eR³. Another type of example is provided by ideal
Coulomb friction, which is modeled by the energy-dissipating relation

           −Fc   for eR > 0,
    fR =    α    for eR = 0,
           +Fc   for eR < 0,


with α ∈ [−Fc , Fc ] and Fc > 0 the Coulomb friction constant. Note


that this does not correspond anymore to a function from eR to fR , or
conversely.
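
Each of these relations satisfies the dissipation inequality (2.21), as the following sketch verifies on a grid of effort values. The constants d and Fc are arbitrary, and at eR = 0 one admissible value of the set-valued Coulomb relation is selected.

```python
# The energy-dissipating relations of Example 2.2 all satisfy e_R f_R <= 0.
import numpy as np

d, Fc = 0.7, 1.5                          # damping and Coulomb constants (arbitrary values)

def linear(e):  return -d * e
def cubic(e):   return -d * e**3
def coulomb(e):                           # set-valued at e = 0; pick one alpha in [-Fc, Fc]
    return -Fc if e > 0 else (Fc if e < 0 else 0.3 * Fc)

for e in np.linspace(-2.0, 2.0, 41):
    for f in (linear(e), cubic(e), coulomb(e)):
        assert e * f <= 1e-15             # power flowing into D is never positive
print("all sampled pairs satisfy e_R * f_R <= 0")
```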

In Chapter 4 we will further elaborate on linear energy-dissipating


relations, and their geometric treatment. See also Chapter 9 for a fur-
ther discussion on energy-dissipating and maximal monotone rela-
tions.

2.5 External ports

The external port (fP , eP ) models the interaction of the system with
its environment. This comprises different situations. One is the port
variables which are accessible for controller action. Another type of
external port variables corresponds to an interaction port. A typical ex-
ample of the latter is a controlled robotic system interacting with its
physical environment. Still another type of external port variables are
variables corresponding to sources. For example, in an electrical circuit
with voltage source the input is the voltage of the source, while the
current through the source is the (resulting) output variable.
Taking the external port into account the power-balance (2.22) ex-
tends to
eTS fS + eTR fR + eTP fP = 0, (2.26)
whereby (2.23) extends to

    d/dt H = eRᵀ fR + ePᵀ fP ≤ ePᵀ fP ,    (2.27)


since eTR fR ≤ 0. This inequality expresses the basic fact that the in-
crease of the internally stored energy (the Hamiltonian) is always less
than or equal to the externally supplied power.
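
The inequality (2.27) can be illustrated on a mass-spring-damper with a force actuation port, taking the external force as effort and the resulting velocity as the conjugate flow. This particular system, its parameters, and the input signal in the sketch below are arbitrary choices made only for the illustration.

```python
# Numerical illustration of the passivity inequality (2.27) on a mass-spring-damper
# with an external force port (effort u, flow = velocity).  All values are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

m, k, d = 1.0, 2.0, 0.3
u = lambda t: np.sin(2.0 * t)             # external force

def rhs(t, x):
    q, p = x
    return [p / m, -k * q - d * p / m + u(t)]

t = np.linspace(0.0, 10.0, 2001)
sol = solve_ivp(rhs, (t[0], t[-1]), [0.5, 0.0], t_eval=t, rtol=1e-9)
q, p = sol.y
H = 0.5 * k * q**2 + p**2 / (2 * m)
supplied = np.trapz(u(t) * p / m, t)      # externally supplied energy over [0, T]
print("increase of stored energy H(T) - H(0):", H[-1] - H[0])
print("externally supplied energy           :", supplied)   # always >= the increase of H
```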

2.6 Port-Hamiltonian dynamics

The dynamics of a port-Hamiltonian system is defined as follows.

Definition 2.3. Consider a state space X and a Hamiltonian

H : X → R,

defining energy-storage. A port-Hamiltonian system on X is defined


by a Dirac structure

D ⊂ Tx X × Tx∗ X × FR × ER × FP × EP ,

having energy-storing port (fS , eS ) ∈ Tx X ×Tx∗ X and a resistive struc-


ture
R ⊂ FR × ER ,
corresponding to an energy-dissipating port (fR , eR ) ∈ FR × ER . Its
dynamics is specified by
 
    ( −ẋ(t), ∂H/∂x (x(t)), fR (t), eR (t), fP (t), eP (t) ) ∈ D(x(t)),
                                                                        (2.28)
    ( fR (t), eR (t) ) ∈ R(x(t)),     t ∈ R.

At the beginning of this chapter, we have already seen how a mass-


spring system is modeled as a port-Hamiltonian system. The next ex-
ample concerns a simple electrical circuit.

Example 2.3 (RL-circuit). Consider an electrical circuit depicted in


Fig. 2.3. The energy-storage port of the system is described as

    ϕ̇i = −Vi ,
    Ii = dHi /dϕi (ϕi ),

for i = 1, 2, where Ii are the currents through the inductors with
flux-linkages ϕi , magnetic energies Hi (ϕi ), and −Vi the voltages across
them.

Figure 2.3: RL circuit.

The energy-dissipating relation corresponding to the (possibly


nonlinear) resistor is VR = −F (IR ), with IR F (IR ) ≥ 0. Kirchhoff’s
current and voltage laws define the Dirac structure expressed by the
equations
    I1 + I2 + IR = 0 ,     V1 = V2 = VR .

The resulting port-Hamiltonian system is given as

    ϕ̇1 = F( −dH1 /dϕ1 (ϕ1 ) − dH2 /dϕ2 (ϕ2 ) ) ,
    ϕ̇2 = F( −dH1 /dϕ1 (ϕ1 ) − dH2 /dϕ2 (ϕ2 ) ) ,

which for a linear resistor VR = −RIR , with R > 0, and linear inductors
Ii = ϕi /Li , for i = 1, 2, reduces to

    ϕ̇1 = −(R/L1 ) ϕ1 − (R/L2 ) ϕ2 ,
    ϕ̇2 = −(R/L1 ) ϕ1 − (R/L2 ) ϕ2 .
In Chapter 12, we will elaborate on port-Hamiltonian models for gen-
eral RLC circuits.
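
For the linear case the flux dynamics are readily simulated; in line with (2.23), the total magnetic energy is non-increasing along solutions. The parameter values in the sketch below are arbitrary.

```python
# Linear RL circuit of Example 2.3: H = phi1^2/(2 L1) + phi2^2/(2 L2) decreases
# along solutions, since the resistor dissipates energy.  All values are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

R, L1, L2 = 1.0, 0.5, 2.0

def rhs(t, phi):
    s = phi[0] / L1 + phi[1] / L2          # = I1 + I2 = -I_R
    return [-R * s, -R * s]                # phi_i' = -(R/L1) phi1 - (R/L2) phi2

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, -0.5], max_step=0.01)
H = sol.y[0]**2 / (2 * L1) + sol.y[1]**2 / (2 * L2)
print("largest increase of H over a step:", float(np.diff(H).max()))   # never positive
```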

The following two examples of port-Hamiltonian systems empha-


size port-based network modeling of multi-physics systems.

Example 2.4 (Levitated ball system). Consider the dynamics of an


iron ball that is levitated by the magnetic field of a controlled inductor

Figure 2.4: Magnetically levitated ball.

as schematically depicted in Fig. 2.4. The port-Hamiltonian descrip-


tion of this system (with q the height of the ball, p the vertical mo-
mentum, and ϕ the magnetic flux-linkage of the inductor) is given as
 
    [ q̇ ]   [  0   1    0 ] [ ∂H/∂q ]   [ 0 ]
    [ ṗ ] = [ −1   0    0 ] [ ∂H/∂p ] + [ 0 ] V ,
    [ ϕ̇ ]   [  0   0   −R ] [ ∂H/∂ϕ ]   [ 1 ]
                                                        (2.29)
    I = ∂H/∂ϕ .
Although at first instance the mechanical and the magnetic part of the
system look decoupled, they are actually coupled via the Hamiltonian

    H(q, p, ϕ) = mgq + p²/(2m) + ϕ²/(2L(q)),

where the inductance L(q) depends on the height q. In fact, the mag-
netic energy ϕ²/(2L(q)) depends both on the flux ϕ and the mechanical vari-
able q. As a result the right-hand side of the second equation (describ-
ing the evolution of the mechanical momentum variable p) depends
on the magnetic variable ϕ, and conversely the right-hand side of the
third equation (describing the evolution of the magnetic variable ϕ)
depends on the mechanical variable q.

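The coupled dynamics (2.29) can be simulated once an inductance profile L(q) is specified. The profile L(q) = L0 e^{q/a} used in the sketch below, as well as all numerical values (including the constant supply voltage V), are assumed purely for illustration; along the computed solution the power balance dH/dt = −R I² + V I holds up to integration error.

```python
# Simulation sketch of the magnetically levitated ball (2.29).  The inductance profile
# L(q) = L0*exp(q/a) and every numerical value (including V) are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

m, g, R, L0, a, V = 0.1, 9.81, 2.0, 0.05, 0.1, 5.0
L = lambda q: L0 * np.exp(q / a)                 # dL/dq = L(q) / a

def rhs(t, x):
    q, p, phi = x
    dHdq = m * g - phi**2 * (L(q) / a) / (2.0 * L(q)**2)
    dHdp = p / m
    I = phi / L(q)                               # = dH/dphi
    return [dHdp, -dHdq, -R * I + V]

t = np.linspace(0.0, 0.2, 2001)
sol = solve_ivp(rhs, (t[0], t[-1]), [0.05, 0.0, 0.0], t_eval=t, rtol=1e-9)
q, p, phi = sol.y
I = phi / L(q)
H = m * g * q + p**2 / (2 * m) + phi**2 / (2 * L(q))
print("energy balance residual:",
      H[-1] - H[0] - np.trapz(V * I - R * I**2, t))   # ≈ 0:  dH/dt = -R I² + V I
```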
Example 2.5 (DC motor). In the schematic model of a DC motor de-


picted in Figure 2.5, we can distinguish six interconnected subsys-
tems:10
◦ two energy-storing elements with physical states φ, p: an ideal in-
ductor L with state variable ϕ (flux-linkage), and rotational inertia J
with state variable p (angular momentum);
◦ two energy-dissipating elements: the resistor R and the friction b;
◦ a gyrator K;
◦ an ideal voltage source V .

10 In this example it is obvious that the subsystems do not correspond to actual physical subsystems, but rather model the different physical phenomena present in the DC-motor.

Figure 2.5: DC motor.
The energy-storing elements (here assumed to be linear) are given by

    Inductor:   ϕ̇ = −VL ,
                I = d/dϕ ( ϕ²/(2L) ) = ϕ/L ,

    Inertia:    ṗ = −τJ ,
                ω = d/dp ( p²/(2J) ) = p/J .
Hence, the corresponding total Hamiltonian reads H(p, ϕ) = ϕ²/(2L) + p²/(2J).
The energy-dissipating relations (also assumed to be linear) are given as
VR = −RI, τb = −bω,
with R, b > 0, where τb is a damping torque. Furthermore, the equa-
tions of the gyrator (converting magnetic power into mechanical, or
conversely) are
VK = −Kω, τ = KI.
for a certain positive constant K (the gyrator constant). Finally, the
subsystems are interconnected by the equations

VL + VR + VK + V = 0, τJ + τb + τ = 0.

Note that the Dirac structure is defined by the above interconnection


equations, together with the equations for the gyrator. Collecting all
equations, we obtain the port-Hamiltonian model

    [ ϕ̇ ]   [ −R  −K ] [ ϕ/L ]   [ 1 ]
    [ ṗ ] = [  K  −b ] [ p/J ] + [ 0 ] V ,
                                                  (2.30)
    I = [ 1   0 ] [ ϕ/L ]
                  [ p/J ] .
While in the previous example the coupling between the mechani-
cal and magnetic domain was provided by the Hamiltonian (depend-
ing in a non-separable way on the mechanical state variables and the
magnetic state variable), in this example the inter-domain coupling is
given by the Dirac structure (through the gyrator constant K).
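
A direct simulation of (2.30) with a constant supply voltage shows the motor settling at a steady state in which the electromagnetic torque K I balances the friction torque b ω; all numerical values in the sketch below are arbitrary illustrative choices.

```python
# Simulation sketch of the DC motor model (2.30); all parameter values are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

R, L, K, J, b, V = 1.0, 0.5, 0.8, 0.01, 0.1, 12.0

def rhs(t, x):
    phi, p = x
    I, omega = phi / L, p / J                  # current and angular velocity
    return [-R * I - K * omega + V, K * I - b * omega]

sol = solve_ivp(rhs, (0.0, 3.0), [0.0, 0.0], max_step=1e-3)
phi, p = sol.y[:, -1]
print("steady-state current I =", phi / L, ", angular velocity w =", p / J)
```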

2.7 Port-Hamiltonian differential-algebraic equations

A usual feature of network modeling is the fact that the obtained


model of the overall system consists of differential equations and alge-
braic equations. This stems from the fact that network modeling admits
quite arbitrary interconnection equations between the subsystems of
the overall system. The resulting set of differential and algebraic equa-
tions are commonly called differential-algebraic systems (DAEs). This is
in contrast with signal-flow diagram modeling where it is assumed


that the external variables for the subsystems can be decomposed into
inputs (free variables) and outputs (determined by the state and pos-
sibly the inputs) in such a way that the value of the inputs to one
subsystem are equal to the value of the outputs of others. However,
in physical systems modeling this is often not the case, and algebraic
constraints due to interconnection constraints between the ‘outputs’
of the subsystems commonly arise.
In order to illustrate this issue, as well as to indicate how port-
Hamiltonian systems theory yields a systematic approach of handling
algebraic constraints, we consider a simple example from the realm
of electrical circuits. The general theory of port-Hamiltonian DAEs
will be touched upon in Chapter 8. For a general treatment of port-
Hamiltonian DAEs we refer to van der Schaft (2013).

Figure 2.6: LC circuit.

Example 2.6. Consider an LC-circuit consisting of two capacitors and one inductor as shown in Fig. 2.6. Naturally this system can be seen as the interconnection of three subsystems, the two capacitors and the inductor, interconnected by Kirchhoff’s current and voltage laws. The capacitors (first assumed to be linear) are described by the following dynamical equations
$$\dot{Q}_i = -I_i, \qquad V_i = \frac{Q_i}{C_i},$$
for i = 1, 2. Here Ii and Vi are the currents through, respectively the
voltages across, the two capacitors, and Ci are their capacitances. Fur-

thermore, Qi are the charges stored in the capacitors and are regarded
as basic state variables.11 Similarly, the linear inductor is described by
the dynamical equations
$$\dot{\varphi} = -V_L, \qquad I_L = \frac{\varphi}{L},$$
where IL is the current through the inductor, and VL is the voltage
across the inductor. Here, the (magnetic) flux-linkage ϕ is taken as the
state variable of the inductor, and L denotes its inductance.
Parallel interconnection of these three subsystems by Kirchhoff’s
laws amounts to the interconnection equations

V1 = V2 = VL , I1 + I2 + IL = 0,

where the equation V1 = V2 gives rise to the algebraic constraint

$$\frac{Q_1}{C_1} = \frac{Q_2}{C_2}, \tag{2.31}$$

relating the two state variables Q1 , Q2 .


There are multiple ways to represent the dynamics of the total sys-
tem. One is to regard either I1 or I2 as a Lagrange multiplier for the
constraint (2.31). Indeed, by defining λ = I1 one may write the total
system as
      
$$\begin{bmatrix} \dot{Q}_1 \\ \dot{Q}_2 \\ \dot{\varphi} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{bmatrix} \begin{bmatrix} Q_1/C_1 \\ Q_2/C_2 \\ \varphi/L \end{bmatrix} + \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} \lambda, \qquad 0 = \begin{bmatrix} -1 & 1 & 0 \end{bmatrix} \begin{bmatrix} Q_1/C_1 \\ Q_2/C_2 \\ \varphi/L \end{bmatrix}, \tag{2.32}$$

11
In the port-Hamiltonian formulation there is a clear preference for taking the
charges Qi to be the state variables instead of the voltages Vi . This is due to the fact
that the charges satisfy a conservation law, while the voltages do not. Furthermore,
although the introduction of charge variables comes at the expense of extra variables,
it will turn out to be very advantageous from a geometric point of view as well: the
charge variables live in the ’right’ state space.

Next one may eliminate the Lagrange multiplier λ by premultiplying the first three differential equations by the matrix
$$\begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
Together with the algebraic constraint (2.31) this yields the differential-
algebraic system
     
$$\begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \dot{Q}_1 \\ \dot{Q}_2 \\ \dot{\varphi} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & -1 & 0 \\ -1 & 1 & 0 \end{bmatrix} \begin{bmatrix} Q_1/C_1 \\ Q_2/C_2 \\ \varphi/L \end{bmatrix}. \tag{2.33}$$
The two equivalent equational representations (2.32) and (2.33) result
from two different representations of the Dirac structure of the system,
namely
   
$$\mathcal{D} = \Big\{ (f,e) \in \mathbb{R}^3 \times \mathbb{R}^3 \;\Big|\; f = \begin{bmatrix} f_1 \\ f_2 \\ f_3 \end{bmatrix},\; e = \begin{bmatrix} e_1 \\ e_2 \\ e_3 \end{bmatrix},\; \begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} f + \begin{bmatrix} 0 & 0 & -1 \\ 0 & 1 & 0 \\ 1 & -1 & 0 \end{bmatrix} e = 0 \Big\},$$
and
$$\mathcal{D} = \Big\{ (f,e) \in \mathbb{R}^3 \times \mathbb{R}^3 \;\Big|\; \exists \lambda \text{ such that } -\begin{bmatrix} f_1 \\ f_2 \\ f_3 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} e_1 \\ e_2 \\ e_3 \end{bmatrix} + \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix} \lambda,\; 0 = \begin{bmatrix} 1 & -1 & 0 \end{bmatrix} \begin{bmatrix} e_1 \\ e_2 \\ e_3 \end{bmatrix} \Big\}.$$
Furthermore, the energy-storing relations are given by
       
$$f = \begin{bmatrix} f_1 \\ f_2 \\ f_3 \end{bmatrix} = -\begin{bmatrix} \dot{Q}_1 \\ \dot{Q}_2 \\ \dot{\varphi} \end{bmatrix}, \qquad e = \begin{bmatrix} e_1 \\ e_2 \\ e_3 \end{bmatrix} = \begin{bmatrix} Q_1/C_1 \\ Q_2/C_2 \\ \varphi/L \end{bmatrix},$$
where the last vector is the gradient vector of the total stored energy
$$H(Q_1, Q_2, \varphi) := \frac{Q_1^2}{2C_1} + \frac{Q_2^2}{2C_2} + \frac{\varphi^2}{2L}. \tag{2.34}$$

From a DAE perspective, it may be noted that the algebraic con-


straint (2.31) is of index one. In fact, under reasonable assumptions on
the Hamiltonian, this will turn out to be a general property of port-
Hamiltonian differential-algebraic systems; see Chapter 8.
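As a small illustration of how the index-one constraint can be handled in practice, the sketch below simulates the LC circuit by differentiating (2.31) once, which yields an explicit expression for the multiplier λ in (2.32), and then checks that both the constraint and the total energy (2.34) are preserved. The parameter values and initial condition are hypothetical, chosen only to be consistent with (2.31).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters and a constraint-consistent initial condition
C1, C2, L = 1.0, 2.0, 0.5
V0 = 1.0
x0 = [C1 * V0, C2 * V0, 0.0]              # Q1/C1 = Q2/C2 = V0, phi = 0

def f(t, x):
    Q1, Q2, phi = x
    # Differentiating Q1/C1 = Q2/C2 and using (2.32) gives lambda explicitly
    lam = -C1 * phi / (L * (C1 + C2))
    return [-lam, phi / L + lam, -Q2 / C2]

sol = solve_ivp(f, (0.0, 10.0), x0, rtol=1e-10, atol=1e-12)
Q1, Q2, phi = sol.y
H = Q1**2 / (2*C1) + Q2**2 / (2*C2) + phi**2 / (2*L)   # total energy (2.34)
print("constraint drift:", np.max(np.abs(Q1/C1 - Q2/C2)))
print("energy drift    :", np.max(np.abs(H - H[0])))
```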

2.8 Detailed-balanced chemical reaction networks

The final section of this chapter illustrates how port-Hamiltonian


modeling extends to physical systems outside the more traditional
mechanical and electrical domain. Based on van der Schaft et al.
(2013) it is shown how isothermal chemical reaction networks
governed by mass-action kinetics and satisfying a thermodynam-
ically justified assumption admit a natural port-Hamiltonian for-
mulation. This treatment merges the geometric thermodynamic ap-
proach of Oster et al. (1973) with the line of research initiated in
Horn & Jackson (1972); Feinberg (1987) based on the graph of chem-
ical complexes.
Consider an isothermal chemical reaction network (under constant
pressure) consisting of r reversible reactions involving m chemical
species specified by a vector of concentrations x ∈ Rm+ := {x ∈ Rm | xi > 0, i = 1, · · · , m}. The general form of the dynamics of the chemi-


cal reaction network (without inflows and outflows) is

$$\dot{x} = S\,v(x),$$
with S the stoichiometric matrix, and $v(x) = \begin{bmatrix} v_1(x) & \cdots & v_r(x) \end{bmatrix}^T \in \mathbb{R}^r$ the vector of reaction rates. We assume that v(x) is given by mass ac-
tion kinetics; the most basic way of modeling reaction rates. Following
van der Schaft et al. (2013) we will show how, under the assumption
of existence of a thermodynamic equilibrium, the dynamics of the re-
action network can be naturally modeled as a port-Hamiltonian sys-
tem, with Hamiltonian given by the free Gibbs’ energy.
In order to do so we first need to introduce some concepts and ter-
minology. The collection of all the different left- and right-hand sides
of the reactions are called the chemical complexes of the reaction net-
work, or briefly, the complexes. Denoting the number of complexes by

c, the expression of the complexes in terms of the chemical species


concentration vector x ∈ Rm + is formalized by an m × c matrix Z,
whose ρ-th column captures the expression of the ρ-th complex in the
m chemical species. Note that by definition all elements of the matrix
Z are non-negative integers.
The complexes can be naturally associated with the vertices of a
directed graph, with edges corresponding to the reactions. The complex
on the left-hand side of each reaction is called the substrate complex,
and the one on the right-hand side the product complex. Formally, the
reaction σ ⇋ π between the σ-th and the π-th complex defines a di-
rected edge with tail vertex being the σ-th complex and head vertex
being the π-th complex. The resulting directed graph is called the com-
plex graph, and is defined Bollobas (1998) by its c × r incidence matrix
B. It is readily verified that the stoichiometric matrix S of the chemical
reaction network is given as S = ZB.
Mass action kinetics for the reaction rate vector v(x) ∈ Rr is de-
fined as follows. Consider first, as an example, the single reaction
X1 + 2X2 ⇋ X3 ,
involving the three chemical species X1 , X2 , X3 with concentrations
x1 , x2 , x3 . In mass action kinetics the reaction is considered to be a combination of the forward reaction X1 + 2X2 ⇀ X3 with forward rate equation v+ (x1 , x2 ) = k+ x1 x2² and the reverse reaction X1 + 2X2 ↽ X3 with rate equation v− (x3 ) = k− x3 . The constants k+ , k− are called respectively the forward and the reverse reaction constants. The net reaction rate is thus
$$v(x_1, x_2, x_3) = v^+(x_1, x_2) - v^-(x_3) = k^+ x_1 x_2^2 - k^- x_3.$$
In general, the mass action reaction rate of the j-th reaction of a chem-
ical reaction network, from the substrate complex Sj to the product
complex Pj , is given as
$$v_j(x) = k_j^+ \prod_{i=1}^m x_i^{Z_{iS_j}} - k_j^- \prod_{i=1}^m x_i^{Z_{iP_j}}, \tag{2.35}$$
where Ziρ is the (i, ρ)-th element of the matrix Z, and kj+ , kj− ≥ 0 are
the forward/reverse reaction constants of the j-th reaction, respec-
tively.

Eq. (2.35) can be rewritten in the following way. Let ZSj and ZPj denote the columns of Z corresponding to the substrate complex Sj and the product complex Pj of the j-th reaction. Defining the mapping Ln : Rc+ → Rc as the component-wise natural logarithm, (2.35) takes the form
$$v_j(x) = k_j^+ \exp\big(Z_{S_j}^T \mathrm{Ln}(x)\big) - k_j^- \exp\big(Z_{P_j}^T \mathrm{Ln}(x)\big). \tag{2.36}$$

A vector of concentrations x∗ ∈ Rm+ is called a thermodynamic equilibrium if v(x∗ ) = 0. A chemical reaction network ẋ = Sv(x) is called
Necessary and sufficient conditions for the existence of a thermody-
namic equilibrium are usually referred to as the Wegscheider conditions,
generalizing the classical results of Wegscheider (1902), and can be
derived as follows Feinberg (1989); van der Schaft et al. (2013). Con-
sider the j-th reaction from substrate Sj to product Pj , described by
the mass action rate equation (2.36). Then x∗ ∈ Rm + is a thermody-
namic equilibrium if and only if
 
$$k_j^+ \exp\big(Z_{S_j}^T \mathrm{Ln}(x^*)\big) = k_j^- \exp\big(Z_{P_j}^T \mathrm{Ln}(x^*)\big), \qquad j = 1, \dots, r. \tag{2.37}$$

The equations (2.37), referred to as the detailed balance equations, can


be rewritten as follows. Define the equilibrium constant $K_j^{eq}$ of the j-th reaction as (assuming $k_j^- \neq 0$)
$$K_j^{eq} := \frac{k_j^+}{k_j^-}. \tag{2.38}$$

Then the detailed balance equations (2.37) are equivalent to


 
$$K_j^{eq} = \exp\big(Z_{P_j}^T \mathrm{Ln}(x^*) - Z_{S_j}^T \mathrm{Ln}(x^*)\big), \qquad j = 1, \dots, r. \tag{2.39}$$

Collecting all reactions, and making use of the incidence matrix B of


the complex graph, this amounts to the vector equation
   
$$K^{eq} = \mathrm{Exp}\big(B^T Z^T \mathrm{Ln}(x^*)\big) = \mathrm{Exp}\big(S^T \mathrm{Ln}(x^*)\big), \tag{2.40}$$

where K eq is the r-dimensional vector with j-th element Kjeq , j =


1, . . . , r. It follows that there exists a thermodynamic equilibrium

x∗ ∈ Rm+ if and only if kj+ > 0, kj− > 0, for all j = 1, . . . , r, and furthermore
$$\mathrm{Ln}(K^{eq}) \in \mathrm{im}\, S^T. \tag{2.41}$$
It also follows that once a thermodynamic equilibrium x∗ is given, the
set of all thermodynamic equilibria is given by

$$\mathcal{E} := \{x^{**} \in \mathbb{R}^m_+ \mid S^T \mathrm{Ln}(x^{**}) = S^T \mathrm{Ln}(x^*)\}. \tag{2.42}$$

Let now x∗ ∈ Rm + be a thermodynamic equilibrium. Consider the


rewritten form (2.39) of the detailed-balance equations, and define the
’conductance’ κj (x∗ ) > 0 of the j-th reaction as the common value of
the forward and reverse reaction rate at thermodynamic equilibrium
x∗ , i.e.,
   
$$\kappa_j(x^*) := k_j^+ \exp\big(Z_{S_j}^T \mathrm{Ln}(x^*)\big) = k_j^- \exp\big(Z_{P_j}^T \mathrm{Ln}(x^*)\big), \tag{2.43}$$

for j = 1, · · · , r. Then the mass action reaction rate (2.36) of the j-th
reaction can be rewritten as
      
$$v_j(x) = \kappa_j(x^*)\Big( \exp\big(Z_{S_j}^T \mathrm{Ln}\big(\tfrac{x}{x^*}\big)\big) - \exp\big(Z_{P_j}^T \mathrm{Ln}\big(\tfrac{x}{x^*}\big)\big) \Big),$$
where for any vectors x, z ∈ Rm the quotient vector x/z ∈ Rm is defined
element-wise. Defining the r × r diagonal matrix of conductances as

$$K := \mathrm{diag}\big(\kappa_1(x^*), \cdots, \kappa_r(x^*)\big), \tag{2.44}$$

it follows that the mass action reaction rate vector v(x) of a balanced
reaction network equals
  
$$v(x) = -K B^T \mathrm{Exp}\Big(Z^T \mathrm{Ln}\big(\tfrac{x}{x^*}\big)\Big),$$
and thus the dynamics of a balanced reaction network takes the form
  
$$\dot{x} = -Z B K B^T \mathrm{Exp}\Big(Z^T \mathrm{Ln}\big(\tfrac{x}{x^*}\big)\Big), \qquad K > 0. \tag{2.45}$$
The matrix L := BKB T in (2.45) defines a weighted Laplacian ma-
trix for the complex graph, with weights given by the conductances
κ1 (x∗ ), · · · , κr (x∗ ). Note that K, and therefore the Laplacian matrix

L = BKB T , is dependent on the choice of the thermodynamic equilib-


rium x∗ . However, this dependence is minor: for a connected complex
graph the matrix K is, up to a positive multiplicative factor, independent
of the choice of x∗ , cf. van der Schaft et al. (2013).
How does this define a port-Hamiltonian system? Define first the Hamiltonian (up to a constant the Gibbs’ free energy, cf. van der Schaft et al. (2013); Oster et al. (1973)) as
$$G(x) = x^T \mathrm{Ln}\Big(\frac{x}{x^*}\Big) + (x^* - x)^T \mathbf{1}_m,$$
where 1m denotes a vector of dimension m with all ones. It is immediately checked that ∂G/∂x(x) = Ln(x/x∗ ) = µ(x), where µ is (up to a constant) known as the vector of chemical potentials. Then the mass action reac-
tion dynamics (2.45) is obtained from considering the auxiliary port-
Hamiltonian system
$$\dot{x} = Z f_R, \qquad e_R = Z^T \frac{\partial G}{\partial x}(x), \tag{2.46}$$
with inputs fR ∈ Rc and outputs eR ∈ Rc , together with the energy-
dissipating relation

fR = −BKB T Exp (eR ). (2.47)

Indeed, by using the properties of the Laplacian matrix BKB T and the
fact that the exponential function is strictly increasing, it can be shown
that (see van der Schaft et al. (2013))

γ T BKB T Exp (γ) ≥ 0 for all γ, (2.48)

with equality if and only if B T γ = 0. Hence (2.47) defines a true energy-


dissipating relation, that is, eTR fR ≤ 0 for all eR ∈ Rc and fR ∈ Rc
satisfying (2.47). Therefore the mass action kinetics detailed-balanced
chemical reaction network is a port-Hamiltonian system with Hamil-
tonian G and energy-dissipating relation (2.47).
The consequences of the port-Hamiltonian modeling of detailed-
balanced mass action kinetics reaction networks for the analysis of the
reaction network are explored in van der Schaft et al. (2013). In par-
ticular, it follows that all equilibria are in fact thermodynamic equilib-

ria, and a Lyapunov analysis using the Gibbs’ free energy (the Hamil-
tonian) shows that starting from any initial state in the positive
orthant the system will converge to a unique thermodynamic equilib-
rium (at least under the assumption of persistence of the reaction net-
work: the vector of concentrations does not approach the boundary of
the positive orthant Rm+ ),12 cf. van der Schaft et al. (2013) for details.

12
For an extension of these results to complex-balanced mass action kinetics reaction
networks we refer to Rao et al. (2013).
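To illustrate the above formulation numerically, the following sketch builds the matrices Z, B and K for the single reaction X1 + 2X2 ⇌ X3, integrates the detailed-balanced dynamics (2.45), and verifies that the Gibbs free energy G decreases along the solution. The rate constants, the chosen thermodynamic equilibrium and the initial state are all hypothetical illustrative values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single reversible reaction X1 + 2 X2 <-> X3, hypothetical rate constants
Z = np.array([[1, 0], [2, 0], [0, 1]], dtype=float)   # complexes in the species
B = np.array([[-1], [1]], dtype=float)                # incidence matrix of the complex graph
kp, km = 2.0, 1.0
xstar = np.array([1.0, 1.0, kp / km])                 # a thermodynamic equilibrium
kappa = kp * np.exp(Z[:, 0] @ np.log(xstar))          # conductance, cf. (2.43)
K = np.array([[kappa]])

def f(t, x):
    # Detailed-balanced mass action dynamics (2.45)
    return -Z @ B @ K @ B.T @ np.exp(Z.T @ np.log(x / xstar))

def G(x):
    # Gibbs free energy (up to a constant), the Hamiltonian of the network
    return x @ np.log(x / xstar) + np.sum(xstar - x)

sol = solve_ivp(f, (0.0, 20.0), [3.0, 0.5, 0.1], rtol=1e-9, atol=1e-12)
Gs = np.array([G(x) for x in sol.y.T])
print("G is non-increasing:", np.all(np.diff(Gs) <= 1e-8))
print("final state:", sol.y[:, -1])
```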
3
Port-Hamiltonian systems on manifolds

3.1 Modulated Dirac structures

For quite a few system classes, in particular those with 3-D mechan-
ical components, the Dirac structure is modulated by the state vari-
ables. Furthermore, the state space X is no longer necessarily a linear space but instead a (differentiable1 ) manifold. As before, the flows fS = −ẋ corresponding to energy-storage are elements of the tangent space Tx X at the state x ∈ X , while the efforts eS = ∂H/∂x(x) are elements of the co-tangent space Tx∗ X . The modulation of the Dirac structure is usually intimately related to the underlying geometry of the system.

Example 3.1 (Spinning rigid body). Consider a rigid body spinning


around its center of mass in the absence of gravity. The energy vari-
ables are the three components of the body angular momentum p
along the three principal axes: p = (px , py , pz )T , and the energy is the
kinetic energy
$$H(p) = \frac{1}{2}\Big(\frac{p_x^2}{I_x} + \frac{p_y^2}{I_y} + \frac{p_z^2}{I_z}\Big),$$

1
’Manifold’ will always mean ’differentiable manifold’.


where Ix , Iy , Iz are the principal moments of inertia. Euler’s equations describing the dynamics are
$$\begin{bmatrix} \dot{p}_x \\ \dot{p}_y \\ \dot{p}_z \end{bmatrix} = \underbrace{\begin{bmatrix} 0 & -p_z & p_y \\ p_z & 0 & -p_x \\ -p_y & p_x & 0 \end{bmatrix}}_{J(p)} \begin{bmatrix} \partial H/\partial p_x \\ \partial H/\partial p_y \\ \partial H/\partial p_z \end{bmatrix}. \tag{3.1}$$

The Dirac structure is given as the graph of the skew-symmetric ma-


trix J(p), i.e., modulated by the state variables p. In this example, the
state space X is still a linear space. In fact, X = so∗ (3), the dual of the
Lie algebra so(3) of the matrix group SO(3).
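A quick numerical illustration of (3.1): integrating Euler's equations and monitoring both the kinetic energy H(p) and the quantity ½‖p‖² (a Casimir of this structure) shows that both are preserved along the flow, reflecting the skew-symmetry and the degeneracy of J(p). The inertia values and initial condition below are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

Ix, Iy, Iz = 1.0, 2.0, 3.0            # hypothetical principal moments of inertia

def J(p):
    px, py, pz = p
    return np.array([[0.0, -pz, py],
                     [pz, 0.0, -px],
                     [-py, px, 0.0]])

def f(t, p):
    dH = p / np.array([Ix, Iy, Iz])   # gradient of the kinetic energy H(p)
    return J(p) @ dH

sol = solve_ivp(f, (0.0, 50.0), [1.0, 0.2, 0.4], rtol=1e-10, atol=1e-12)
p = sol.y
H = 0.5 * (p[0]**2/Ix + p[1]**2/Iy + p[2]**2/Iz)
C = 0.5 * np.sum(p**2, axis=0)        # Casimir: squared norm of the angular momentum
print("energy drift :", np.max(np.abs(H - H[0])))
print("Casimir drift:", np.max(np.abs(C - C[0])))
```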
Modulated Dirac structures often arise as a result of ideal con-
straints imposed on the generalized velocities of the mechanical sys-
tem by its environment, called kinematic constraints. In many cases,
these constraints will be configuration dependent, yielding a Dirac
structure modulated by the configuration variables.
Consider a mechanical system with n degrees of freedom, locally described by n configuration variables q = (q1 , . . . , qn ). Expressing the kinetic energy as ½ q̇ᵀM (q)q̇, with M (q) > 0 being the generalized mass matrix, we define in the usual way the Lagrangian function L(q, q̇) as the difference of kinetic energy and potential energy U (q), i.e.,
$$L(q, \dot{q}) = \frac{1}{2}\dot{q}^T M(q)\dot{q} - U(q). \tag{3.2}$$
Suppose now that there are constraints on the generalized velocities q̇,
described as
AT (q)q̇ = 0, (3.3)
with A(q) an n × k matrix of rank k everywhere (that is, there are k in-
dependent kinematic constraints). Classically, the constraints (3.3) are
called holonomic if it is possible to find new configuration coordinates
q = (q 1 , . . . , q n ) such that the constraints are equivalently expressed
as
q̇ n−k+1 = q̇ n−k+2 = · · · = q̇n = 0, (3.4)
in which case one may eliminate the configuration variables
q n−k+1 , . . . , q n , since the kinematic constraints (3.4) are equivalent to

the geometric constraints

q n−k+1 = cn−k+1 , . . . , q n = cn , (3.5)

for certain constants cn−k+1 , . . . , cn determined by the initial condi-


tions. Then the system reduces to an unconstrained system in the (n−k)
remaining configuration coordinates (q 1 , . . . , q n−k ). If it is not possible
to find coordinates q such that (3.4) holds (that is, if we are not able to
integrate the kinematic constraints as above), then the constraints are
called nonholonomic.
The equations of motion for the mechanical system with La-
grangian L(q, q̇) and constraints (3.3) are given by the Euler-Lagrange
equations Neimark & Fufaev (1972)
 
$$\frac{d}{dt}\Big(\frac{\partial L}{\partial \dot{q}}\Big) - \frac{\partial L}{\partial q} = A(q)\lambda + B(q)u, \quad \lambda \in \mathbb{R}^k,\ u \in \mathbb{R}^m, \qquad A^T(q)\dot{q} = 0, \tag{3.6}$$

where B(q)u are the external forces (controls) applied to the system,
for some n × m matrix B(q), while A(q)λ are the constraint forces. The
Lagrange multipliers λ(t) are uniquely determined by the require-
ment that the constraints AT (q(t))q̇(t) = 0 have to be satisfied for all
times t.
Defining the generalized momenta
$$p = \frac{\partial L}{\partial \dot{q}} = M(q)\dot{q}, \tag{3.7}$$
the constrained Euler-Lagrange equations (3.6) transform into con-
strained Hamiltonian equations
$$\begin{aligned} \dot{q} &= \frac{\partial H}{\partial p}(q,p), \\ \dot{p} &= -\frac{\partial H}{\partial q}(q,p) + A(q)\lambda + B(q)u, \\ y &= B^T(q)\frac{\partial H}{\partial p}(q,p), \\ 0 &= A^T(q)\frac{\partial H}{\partial p}(q,p), \end{aligned} \tag{3.8}$$

Figure 3.1: Double pendulum.

with H(q, p) = ½ pᵀM⁻¹(q)p + U (q) the total energy. The constrained Hamiltonian equations (3.8) define a port-Hamiltonian system, with respect to the modulated Dirac structure
$$\mathcal{D} = \Big\{ (f_S, e_S, f_P, e_P) \;\Big|\; 0 = A^T(q)e_S,\; e_P = B^T(q)e_S,\; -f_S = \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix} e_S + \begin{bmatrix} 0 \\ A(q) \end{bmatrix}\lambda + \begin{bmatrix} 0 \\ B(q) \end{bmatrix} f_P,\ \lambda \in \mathbb{R}^k \Big\}. \tag{3.9}$$
Example 3.2 (The double pendulum). One way of modeling a double
pendulum as depicted in Figure 3.1 (and mechanisms in general) is to
regard the system as the interconnection of two single pendula.
Consider two ideal pendula with length li , i = 1, 2, in the vertical
plane; described by the Cartesian coordinates (xi , yi ), i = 1, 2, of their
upper end, together with an angle (with respect to the vertical axis)
φi , i = 1, 2. For simplicity assume that the masses mi of the pendula
are concentrated at the lower ends of the pendula. Then the energies
(kinetic and potential) of the pendula are given by
$$H_i(x_i, y_i, \phi_i, p_{x_i}, p_{y_i}, p_{\phi_i}) = \frac{1}{2m_i}p_{x_i}^2 + \frac{1}{2m_i}p_{y_i}^2 + \frac{1}{2I_i}p_{\phi_i}^2 + m_i g\,(y_i - l_i\cos\phi_i),$$
for i = 1, 2, where pxi := mi ẋi , pyi := mi ẏi are the Cartesian momenta,
and pφi := Ii φ̇i , Ii = mi li2 are the angular momenta, i = 1, 2. The dou-

ble pendulum system is obtained by imposing the following geometric


constraints
x1 = y1 = 0 (fixing the first pendulum at the top),
x2 = x1 + l1 sin φ1 , y2 = y1 − l1 cos φ1 (attaching 2nd pendulum to 1st).
Differentiating these geometric constraints one obtains the kinematic
constraints
ẋ1 = ẏ1 = 0, ẋ2 = l1 φ̇1 cos φ1 , ẏ2 = l1 φ̇1 sin φ1 .
The corresponding Dirac structure in this case is given as in (3.9), with
n = 6, B = 0 and A given by
 
$$A^T(\phi_1) = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & l_1\cos\phi_1 & -1 & 0 & 0 \\ 0 & 0 & l_1\sin\phi_1 & 0 & -1 & 0 \end{bmatrix}.$$
Furthermore, the total Hamiltonian H is given as H1 + H2 .
Note that by differentiating the geometric constraints to kinematic
constraints some information is lost (for example, ẋ1 = 0 only implies
x1 = constant instead of x1 = 0). It turns out that the Casimirs of the
Dirac structure D (see Chapter 8) still encode this loss of information;
in fact x1 is a Casimir of the Dirac structure.
Example 3.3 (Rolling euro). Let x, y be the Cartesian coordinates of
the point of contact of the coin with the plane; see Figure 3.2. Further-
more, ϕ denotes the heading angle, and θ the angle of King Willem-
Alexander’s head2 . With all constants set to unity, the constrained La-
grangian equations of motion are
ẍ = λ1 ,
ÿ = λ2 ,
(3.10)
θ̈ = − λ1 cos ϕ − λ2 sin ϕ + u1 ,
ϕ̈ = u2 ,
with u1 the control torque about the rolling axis, and u2 the control
torque about the vertical axis.
2
On the Dutch version of the Euro.

Figure 3.2: The geometry of the rolling Euro.

The total energy is H = ½ p²ₓ + ½ p²ᵧ + ½ p²_θ + ½ p²_ϕ. The rolling constraints are ẋ = θ̇ cos ϕ and ẏ = θ̇ sin ϕ, i.e., rolling without slipping, which can be written in the form (3.3) by defining
$$A^T(x, y, \theta, \varphi) = \begin{bmatrix} 1 & 0 & -\cos\varphi & 0 \\ 0 & 1 & -\sin\varphi & 0 \end{bmatrix}. \tag{3.11}$$

While in the previous example of the double pendulum the kinematic


constraints are derived from geometric constraints, this is not possible
in the current example: the kinematic constraints AT (q)q̇ = 0 for AT (q)
given by (3.11) cannot be integrated to geometric constraints Φ(q) = 0.
Such kinematic constraints are called non-holonomic.
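The non-holonomy can also be confirmed computationally with Frobenius' theorem in mind: the distribution of admissible velocities (the kernel of A^T(q) in (3.11)) is not involutive. The sympy sketch below checks this; the two vector fields g1, g2 spanning the admissible velocities are constructed here purely for illustration.

```python
import sympy as sp

x, y, theta, phi = sp.symbols('x y theta phi')
q = sp.Matrix([x, y, theta, phi])

# Admissible velocities (kernel of A^T(q) in (3.11)): rolling and steering
g1 = sp.Matrix([sp.cos(phi), sp.sin(phi), 1, 0])
g2 = sp.Matrix([0, 0, 0, 1])

def lie_bracket(f, g):
    return g.jacobian(q) * f - f.jacobian(q) * g

br = sp.simplify(lie_bracket(g1, g2))
# If the bracket stayed inside span{g1, g2}, the 4x3 matrix below would have rank 2.
rank = sp.Matrix.hstack(g1, g2, br).rank()
print(br.T, "rank =", rank)   # rank 3: not involutive, hence nonholonomic
```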

The foregoing motivates extending the definition of a constant Dirac structure D ⊂ F × E (with F a linear space, and E its dual) to Dirac structures on manifolds.

Definition 3.1. Let X be a manifold. A Dirac structure D on X is a


vector sub-bundle of the Whitney sum3 T X ⊕ T ∗ X such that

D(x) ⊂ Tx X × Tx∗ X

is for every x ∈ X a constant Dirac structure as before.


3
The Whitney sum of two vector bundles with the same base space is defined as
the vector bundle whose fiber above each element of this common base space is the
product of the fibers of each individual vector bundle.

Simply put, a Dirac structure on a manifold X is point-wise (that


is, for every x ∈ X ) a constant Dirac structure D(x) ⊂ Tx X × Tx∗ X .
If, next to the energy storage port, there are additional ports (such
as energy-dissipating and external ports) with total set of port vari-
ables f ∈ F and e ∈ E = F ∗ , then a modulated Dirac structure is
point-wise (i.e., for every x ∈ X ) specified by a Dirac structure
D(x) ⊂ Tx X × Tx∗ X × F × E. (3.12)
Remark 3.1. For a full geometric definition of the above
Dirac structure we refer to Dalsmo & van der Schaft (1999);
Blankenstein & van der Schaft (2001), and especially to Merker
(2009), where a formulation in terms of Courant algebroids is given.

3.2 Integrability

A key issue in the case of modulated Dirac structures is that of integra-


bility. Loosely speaking, a Dirac structure is integrable if it is possible to
find local coordinates for the state space manifold such that the Dirac
structure expressed in these coordinates is a constant Dirac structure,
that is, it is not modulated anymore by the state variables.
First let us consider modulated Dirac structures which are given
for every x ∈ X as the graph of a skew-symmetric mapping J(x) from
the co-tangent space Tx∗ X to the tangent space Tx X .
Integrability in this case means that the structure matrix J satisfies
the conditions
$$\sum_{l=1}^n \Big( J_{lj}(x)\frac{\partial J_{ik}}{\partial x_l}(x) + J_{li}(x)\frac{\partial J_{kj}}{\partial x_l}(x) + J_{lk}(x)\frac{\partial J_{ji}}{\partial x_l}(x) \Big) = 0, \tag{3.13}$$
for i, j, k = 1, . . . , n. In this case we may find, by Darboux’s theorem
(see e.g. Weinstein (1983); Marsden & Ratiu (1999); Arnol’d (1978);
Nijmeijer & van der Schaft (1990)) around any point x0 where the
rank of the matrix J(x) is constant, local coordinates x = (q, p, r) in
which the matrix J(x) becomes the constant skew-symmetric matrix
 
$$\begin{bmatrix} 0 & -I_k & 0 \\ I_k & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}. \tag{3.14}$$

Such coordinates are called canonical. A skew-symmetric matrix J(x)


satisfying (3.13) defines a Poisson bracket on X , given for every F, G :
X → R as
$$\{F, G\}(x) = \frac{\partial^T F}{\partial x}(x)\,J(x)\,\frac{\partial G}{\partial x}(x). \tag{3.15}$$
Indeed, by (3.13) the Poisson bracket satisfies the Jacobi-identity
{F, {G, K}} + {G, {K, F }} + {K, {F, G}} = 0, (3.16)
for all functions F, G, K. Conversely, satisfaction of (3.16) for all
F, G, K implies (3.13). (Take G = xi , F = xj and K = xk .)
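As a concrete check of the integrability condition (3.13) (equivalently, the Jacobi identity (3.16)), the following sympy sketch verifies it symbolically for the structure matrix J(p) of the spinning rigid body of Example 3.1; the helper function is illustrative only.

```python
import sympy as sp

px, py, pz = sp.symbols('p_x p_y p_z')
xvars = [px, py, pz]
J = sp.Matrix([[0, -pz, py],
               [pz, 0, -px],
               [-py, px, 0]])

def integrability(J, xvars):
    """Check condition (3.13) for all index triples (i, j, k)."""
    n = len(xvars)
    ok = True
    for i in range(n):
        for j in range(n):
            for k in range(n):
                s = sum(J[l, j]*sp.diff(J[i, k], xvars[l])
                        + J[l, i]*sp.diff(J[k, j], xvars[l])
                        + J[l, k]*sp.diff(J[j, i], xvars[l]) for l in range(n))
                ok = ok and sp.simplify(s) == 0
    return ok

print(integrability(J, xvars))   # True: J(p) defines a Poisson bracket
```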
The choice of coordinates x = (q, p, r) for the state space manifold
also induces a basis for Tx X and a dual basis for Tx∗ X . Denoting the
corresponding splitting for the flows by f = (fq , fp , fr ) and for the
efforts by e = (eq , ep , er ), the Dirac structure defined by J in canonical
coordinates is seen to be given by
D = {(fq , fp , fr , eq , ep , er ) | fq = −ep , fp = eq , fr = 0}. (3.17)
A similar story can be told for the case of a Dirac structure given as
the graph of a skew-symmetric mapping ω(x) from the tangent space
Tx X to the co-tangent space Tx∗ X . In this case the integrability condi-
tions take the (slightly simpler) form
$$\frac{\partial\omega_{ij}}{\partial x_k}(x) + \frac{\partial\omega_{ki}}{\partial x_j}(x) + \frac{\partial\omega_{jk}}{\partial x_i}(x) = 0, \qquad i, j, k = 1, \dots, n. \tag{3.18}$$
The skew-symmetric matrix ω(x) can be regarded as the coordinate
representation of a differential two-form ω on the manifold X , that is
$\omega = \sum_{i,j=1}^{n} \omega_{ij}(x)\, dx_i \wedge dx_j$, and the integrability condition (3.18) corre-
sponds to the closedness of this two-form (dω = 0). The differential
two-form ω is called a pre-symplectic structure, and a symplectic struc-
ture if the rank of ω(x) is equal to the dimension of X . If (3.18) holds,
then again by a version of Darboux’s theorem we may find, around
any point x0 where the rank of the matrix ω(x) is constant, local co-
ordinates x = (q, p, s) in which the matrix ω(x) becomes the constant
skew-symmetric matrix
 
$$\begin{bmatrix} 0 & I_k & 0 \\ -I_k & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}. \tag{3.19}$$

The choice of canonical coordinates x = (q, p, s) induces a basis for


Tx X and a dual basis for Tx∗ X . Denoting the corresponding splitting
for the flows by f = (fq , fp , fs ) and for the efforts by e = (eq , ep , es ),
the Dirac structure corresponding to ω in canonical coordinates is seen
to be given by

D = {(fq , fp , fs , eq , ep , es ) | fq = −ep , fp = eq , es = 0}. (3.20)

In case of a symplectic structure the variables s are absent and the


Dirac structure reduces to

D = {(fq , fp , eq , ep ) | fq = −ep , fp = eq }, (3.21)

which is the standard symplectic gyrator.


For general Dirac structures, integrability is defined in the follow-
ing way.

Definition 3.2. Dorfman (1993); Courant (1990) A Dirac structure D


on X is integrable if for arbitrary pairs of smooth vector fields and dif-
ferential one-forms (X1 , α1 ), (X2 , α2 ), (X3 , α3 ) ∈ D there holds

< LX1 α2 | X3 > + < LX2 α3 | X1 > + < LX3 α1 | X2 >= 0, (3.22)

with LXi denoting the Lie-derivative.

Remark 3.2 (Pseudo-Dirac structures). In the usual definition of


Dirac structures on manifolds (see Courant (1990); Dorfman (1993)),
the integrability condition is included in the definition. Dirac structures
that do not satisfy this integrability condition are therefore sometimes
(but not in this chapter) called pseudo-Dirac structures.

The above integrability condition for Dirac structures generalizes


properly the closedness of symplectic forms and the Jacobi identity for
Poisson brackets as discussed before. In particular, for Dirac structures
given as the graph of a symplectic or Poisson structure, the notion
of integrability is equivalent to the Jacobi-identity or closedness con-
dition as discussed above (see e.g. Courant (1990); Dorfman (1993);
Dalsmo & van der Schaft (1999) for details).
Note that a constant Dirac structure trivially satisfies the integrabil-
ity condition. Conversely, a Dirac structure satisfying the integrability

condition together with an additional constant rank condition can be


represented locally as a constant Dirac structure. The precise form of
the constant rank condition can be stated as follows. For any Dirac
structure D, we may define the distribution

GD (x) = {X ∈ Tx X | ∃α ∈ Tx∗ X s.t. (X, α) ∈ D(x)}.

Dually we may define the co-distribution

PD (x) = {α ∈ Tx∗ X | ∃X ∈ Tx X s.t. (X, α) ∈ D(x)}.

We call x0 a regular point for the Dirac structure if both the distribution
GD and the co-distribution PD have constant dimension around x0 .
If the Dirac structure is integrable and x0 is a regular point, then,
again by a version of Darboux’s theorem, we can choose local coor-
dinates x = (q, p, r, s) for X (with dim q = dim p), such that, in the
resulting bases for (fq , fp , fr , fs ) for Tx X and (eq , ep , er , es ) for Tx∗ X ,
the Dirac structure on this coordinate neighborhood is given as
$$f_q = -e_p, \qquad f_p = e_q, \qquad f_r = 0, \qquad e_s = 0. \tag{3.23}$$

Coordinates x = (q, p, r, s) as above are again called canonical. Note


that the choice of canonical coordinates for a Dirac structure satisfying
the integrability condition encompasses the choice of canonical coor-
dinates for a Poisson structure and for a (pre-)symplectic structure as
above.

Example 3.4 (Kinematic constraints). Recall that the modulated Dirac


structure corresponding to an actuated mechanical system subject to
kinematic constraints AT (q)q̇ = 0 is given by
$$\mathcal{D} = \Big\{ (f_S, e_S, f_C, e_C) \;\Big|\; 0 = \begin{bmatrix} 0 & A^T(q) \end{bmatrix} e_S,\; e_C = \begin{bmatrix} 0 & B^T(q) \end{bmatrix} e_S,\; -f_S = \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix} e_S + \begin{bmatrix} 0 \\ A(q) \end{bmatrix}\lambda + \begin{bmatrix} 0 \\ B(q) \end{bmatrix} f_C,\ \lambda \in \mathbb{R}^k \Big\}.$$

Complete necessary and sufficient conditions for integrability of


this Dirac structure have been derived in van der Schaft & Maschke

(1994); Dalsmo & van der Schaft (1999). Here we only state a slightly
simplified version of this result, detailed in Dalsmo & van der Schaft
(1999). We assume that the actuation matrix B(q) has the special form
(often encountered in examples) that its j-th column is given as
 
$$\begin{bmatrix} 0 \\[0.5ex] \dfrac{\partial C_j}{\partial q}(q) \end{bmatrix}$$
for some function Cj (q) only depending on the configuration variables
q, j = 1, . . . , m. In this case, the Dirac structure D is integrable if and
only if the kinematic constraints are holonomic. Thus the Dirac structure
corresponding to the Double pendulum example is integrable, while
the Dirac structure corresponding to the Rolling euro example is not
integrable.
4
Input-state-output port-Hamiltonian systems

An important subclass of port-Hamiltonian systems occurs if (1) there


are no algebraic constraints between the state variables, (2) the exter-
nal port variables can be split into input and output variables, and (3)
the resistive structure is linear and of input-output form. This class
of systems, in the usual input-state-output format ẋ = f (x, u), y =
h(x, u), also provides a natural starting point for the development of
control strategies; see Chapter 15.
In the present chapter we will explore the special properties of this
subclass of port-Hamiltonian systems, while in the last section we dis-
cuss the relationships with classical Hamiltonian dynamical systems.

4.1 Linear resistive structures

In quite a few cases of interest the energy-dissipating elements can be


assumed to be linear. This allows for a geometric interpretation, which
is especially interesting when combined with the geometric notion of a
Dirac structure. Linear energy-dissipation in the port-variables fR , eR
can be modeled as a subspace (possibly modulated by the state vari-


ables x ∈ X )

R(x) = {(fR , eR ) ∈ FR × ER | Rf (x)fR + Re (x)eR = 0}, (4.1)

with ER = FR∗ , satisfying the energy-dissipation property

Rf (x)ReT (x) = Re (x)RfT (x) ≥ 0, (4.2)

together with the dimensionality condition


$$\mathrm{rank}\begin{bmatrix} R_f(x) & R_e(x) \end{bmatrix} = \dim f_R, \qquad x \in \mathcal{X}. \tag{4.3}$$

Indeed, by (4.3) and the equality in (4.2) we can equivalently rewrite


the kernel representation (4.1) as an image representation

fR = ReT (x)λ, eR = −RfT (x)λ. (4.4)

That is, any pair (fR , eR ) satisfying (4.1) also satisfies (4.4) for some λ,
and conversely, every (fR , eR ) satisfying (4.4) for some λ also satisfies
(4.1). Hence by (4.2) for all (fR , eR ) satisfying (4.1)
$$e_R^T f_R = -\big(R_f^T(x)\lambda\big)^T R_e^T(x)\lambda = -\lambda^T R_f(x) R_e^T(x)\lambda \le 0,$$

thus defining a true energy-dissipating relation. We will call R a linear


resistive structure.
A linear resistive structure R defined on the state space manifold
X can be regarded as a geometric object having properties which are
somewhat opposite to those of a Dirac structure. Recall that on any
space F × E, with E = F ∗ , we can define the indefinite form, cf. (2.4),
≪ (f a , ea ), (f b , eb ) ≫=< ea | f b > + < eb | f a >, with ⊥⊥ denoting
orthogonal companion.

Proposition 4.1. Let R(x) ⊂ FR × ER defined by (4.1) be a lin-


ear resistive structure on X , that is, satisfying (4.2) and (4.3). Then
R(x)⊥⊥ = {(fR , eR ) ∈ FR × ER | Rf (x)fR − Re (x)eR = 0} =: (−R)(x).

We leave the proof as an exercise. As a direct result we obtain

Proposition 4.2. Let R(x) ⊂ FR ×ER be a linear resistive structure on


X and let D(x) ⊂ Tx X × Tx∗ X × FR × ER × FP × EP be a Dirac structure

on X . Define the composition D(x) ◦ R(x) ⊂ Tx X × Tx∗ X × FP × EP as


$$\mathcal{D}(x) \circ \mathcal{R}(x) = \big\{ (f_S, e_S, f_P, e_P) \mid \exists (f_R, e_R) \in \mathcal{R}(x) \ \text{s.t.}\ (f_S, e_S, f_R, e_R, f_P, e_P) \in \mathcal{D}(x) \big\}.$$

Then,
(D(x) ◦ R(x))⊥⊥ = D(x) ◦ (−R) (x).

In the next section we will furthermore use the following simple


result.

Proposition 4.3. Consider a Dirac structure D(x) ⊂ Tx X × Tx∗ X ×


FR × ER × FP × EP and a linear resistive structure R(x) ⊂ FR × ER .
Suppose that the composition D(x) ◦ R(x) ⊂ Tx X × Tx∗ X × FP × EP
can be written as the graph of a linear mapping from Tx∗ X × EP to
Tx X × FP given as
" # " # " #
eS f e
7→ S = K(x) S . (4.5)
eP fP eP

Then, by factorizing K(x) into its skew-symmetric part Kss (x) and its
symmetric part Ks (x) we have
$$K(x) = K_{ss}(x) + K_s(x), \qquad K_{ss}^T(x) = -K_{ss}(x), \qquad K_s^T(x) = K_s(x) \ge 0. \tag{4.6}$$

Proof. For all (fS , eS , fP , eP ) ∈ D(x) ◦ R(x) there exists (fR , eR ) ∈ R(x) such that (fS , eS , fR , eR , fP , eP ) ∈ D(x). Hence
$$\begin{bmatrix} e_S^T & e_P^T \end{bmatrix} K_s(x) \begin{bmatrix} e_S \\ e_P \end{bmatrix} = \begin{bmatrix} e_S^T & e_P^T \end{bmatrix} K(x) \begin{bmatrix} e_S \\ e_P \end{bmatrix} = e_S^T f_S + e_P^T f_P = -e_R^T f_R \ge 0,$$
implying Ks (x) ≥ 0. □

4.2 Input-state-output port-Hamiltonian systems

Consider now a port-Hamiltonian system where the composition of


the Dirac structure D and the linear resistive structure R satisfies the

conditions of Proposition 4.3, and thus is given as the graph of a map-


ping (4.5) satisfying (4.6). Write out correspondingly
" # " #
−J(x) −g(x) R(x) P (x)
Kss (x) = T , Ks (x) = , (4.7)
g (x) M (x) P T (x) S(x)

where J T (x) = −J(x), M T (x) = −M (x) and RT (x) = R(x), S T (x) =


S(x). Then it follows from Proposition 4.3 that the matrices
R(x), P (x), S(x) satisfy
" #
R(x) P (x)
≥ 0. (4.8)
P T (x) S(x)

Denoting u := eP and y := fP it follows that
$$e_S^T f_S + u^T y = \begin{bmatrix} e_S^T & u^T \end{bmatrix}\begin{bmatrix} f_S \\ y \end{bmatrix} = \begin{bmatrix} e_S^T & u^T \end{bmatrix}\begin{bmatrix} R(x) & P(x) \\ P^T(x) & S(x) \end{bmatrix}\begin{bmatrix} e_S \\ u \end{bmatrix} \ge 0.$$

Hence, together with the energy-storage relations eS = ∂H/∂x(x), fS = −ẋ, we obtain the port-Hamiltonian system

$$\begin{aligned} \dot{x} &= [J(x) - R(x)]\frac{\partial H}{\partial x}(x) + [g(x) - P(x)]\,u, \\ y &= \big[g^T(x) + P^T(x)\big]\frac{\partial H}{\partial x}(x) + [M(x) + S(x)]\,u, \end{aligned} \tag{4.9}$$
called an input-state-output port-Hamiltonian system with feedthrough
term. Along the trajectories of the system we recover the fundamen-
tal power-balance
" #" #
d h i R(x) P (x) eS
H(x) = −eTS fS = uT y − eTS uT ≤ uT y.
dt P T (x) S(x) u

Example 4.1. In case of feedthrough terms the skew-symmetric ma-


trix J may also depend on the parameters of energy-dissipation, as
the following example shows. Consider the linear electrical circuit
depicted in Figure 4.1. The dynamics of the circuit takes the port-

Figure 4.1: Circuit for Example 4.1.

Hamiltonian form
$$\begin{bmatrix} \dot{\varphi} \\ \dot{Q} \end{bmatrix} = \begin{bmatrix} -\dfrac{R_2 R_3}{R_2+R_3} & -\dfrac{R_3}{R_2+R_3} \\[1.5ex] \dfrac{R_3}{R_2+R_3} & -\dfrac{1}{R_2+R_3} \end{bmatrix} \begin{bmatrix} \dfrac{\partial H}{\partial \varphi} \\[1.5ex] \dfrac{\partial H}{\partial Q} \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix} V, \qquad I = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} \dfrac{\partial H}{\partial \varphi} \\[1.5ex] \dfrac{\partial H}{\partial Q} \end{bmatrix} + \frac{V}{R_1},$$
with H(ϕ, Q) = ½L⁻¹ϕ² + ½C⁻¹Q². This defines a port-Hamiltonian input-state-output system with feedthrough specified by
$$J = \begin{bmatrix} 0 & -\dfrac{R_3}{R_2+R_3} \\[1.5ex] \dfrac{R_3}{R_2+R_3} & 0 \end{bmatrix}, \qquad R = \begin{bmatrix} \dfrac{R_2 R_3}{R_2+R_3} & 0 \\[1.5ex] 0 & \dfrac{1}{R_2+R_3} \end{bmatrix},$$
M = 0, P = 0, and S = 1/R1 .
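For the circuit of Example 4.1 one can check numerically that the matrices above satisfy condition (4.8), and that a simulation respects the power-balance dH/dt ≤ uᵀy. The sketch below does this with hypothetical element values; it is illustrative, not part of the original example.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical element values
Lind, C, R1, R2, R3 = 0.5, 1.0, 2.0, 1.0, 3.0
Jm = np.array([[0.0, -R3/(R2+R3)], [R3/(R2+R3), 0.0]])
Rm = np.array([[R2*R3/(R2+R3), 0.0], [0.0, 1.0/(R2+R3)]])
g = np.array([1.0, 0.0])
S = 1.0 / R1
V = 1.0                                    # constant source voltage (input u)

# Condition (4.8): [[R, P], [P^T, S]] >= 0 (here P = 0)
Ks = np.block([[Rm, np.zeros((2, 1))], [np.zeros((1, 2)), np.array([[S]])]])
print("eigenvalues of Ks:", np.linalg.eigvalsh(Ks))    # all nonnegative

def f(t, x):
    e = np.array([x[0]/Lind, x[1]/C])      # dH/dx
    return (Jm - Rm) @ e + g * V

t_eval = np.linspace(0.0, 10.0, 2001)
sol = solve_ivp(f, (0.0, 10.0), [0.0, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)
e = np.vstack([sol.y[0]/Lind, sol.y[1]/C])
y = g @ e + S * V                          # output current I
H = sol.y[0]**2/(2*Lind) + sol.y[1]**2/(2*C)
dHdt = np.gradient(H, sol.t)
print("passivity check (should be <= 0):", np.max(dHdt - V * y))
```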

In the special case S(x) = 0, implying by (4.8) P (x) = 0, and addi-


tionally assuming M (x) = 0, we obtain from (4.9) the system descrip-
tion
$$\begin{aligned} \dot{x} &= [J(x) - R(x)]\frac{\partial H}{\partial x}(x) + g(x)u, \\ y &= g^T(x)\frac{\partial H}{\partial x}(x). \end{aligned} \tag{4.10}$$
This is simply called an input-state-output port-Hamiltonian system.

Alternatively, (4.10) can be obtained by first considering the port-


Hamiltonian system without energy dissipation
$$\begin{aligned} \dot{x} &= J(x)\frac{\partial H}{\partial x}(x) + g(x)u + g_R(x)f_R, \\ y &= g^T(x)\frac{\partial H}{\partial x}(x), \\ e_R &= g_R^T(x)\frac{\partial H}{\partial x}(x), \end{aligned}$$
with the (open) port fR , eR , and then terminating this port by the
energy-dissipation relation fR = −R̃(x)eR where R̃T (x) = R̃(x) ≥
0. This yields the input-state-output port-Hamiltonian system (4.10)
with R(x) = gR (x)R̃(x)gR T (x). For details we refer to van der Schaft

(2009).

4.3 Memristive dissipation

Another type of resistive relationship is given by the memristor. The


memristor, a contraction of memory and resistance that refers to a re-
sistor with memory, was postulated in the early seventies Chua (1971)
to complete the family of existing fundamental electrical circuit ele-
ments: the resistor, inductor, and capacitor.1
framework, a memristive port is described as follows. Let fM ∈ FM
and eM ∈ EM , with EM = FM ∗ , denote the flows and efforts associ-

ated to the memristive port, and let xfM ∈ XM and xeM ∈ XM ∗ the

corresponding time-integrals of fM and eM , respectively. Then, the re-


lationship xeM = −ΦM (xfM ), with ΦM some differentiable mapping
from XM to XM ∗ , constitutes a x
fM –controlled memristive port

∂ΦM
eM = −RM (xfM )fM , RM (xfM ) = (xfM ),
∂xfM
1
From a mathematical perspective, the behavior of a resistor, inductor, and a ca-
pacitor, whether linear or nonlinear, is described by relationships between two of the
four basic electrical variables: voltage, current, charge, and flux linkage. Indeed, a
resistor is described by the relationship of current and voltage; an inductor by that
of current and flux linkage, and a capacitor by that of voltage and charge. But what
about the relationship between charge and flux linkage? This missing relationship
defines the memristor.

and we define the associated memristive structure as
$$\mathcal{M} = \big\{ (f_M, e_M) \in \mathcal{F}_M \times \mathcal{E}_M \mid \dot{x}_{f_M} - f_M = 0,\; e_M + R_M(x_{f_M})\,f_M = 0 \big\}.$$

Note that the memory effect stems from the fact that the memristor
‘remembers’ the amount of flow that has passed through it via ẋfM =
fM .
Now, locally around xfM ∈ XM , the memristive structure M defines a port-Hamiltonian system with a direct feedthrough term. Indeed, let HM : XM → R be the zero function; then the dynamics on M locally take the form
$$\dot{x}_{f_M} = f_M, \qquad e_M = \frac{\partial H_M}{\partial x_{f_M}}(x_{f_M}) - R_M(x_{f_M})\,f_M. \tag{4.11}$$
The fact that HM (xfM ) = 0, for all xfM ∈ XM , together with the fact
that eM ≡ 0 whenever fM ≡ 0 regardless of the internal state xfM ,
clearly underscores the ‘no energy discharge property’ as discussed in
Chua (1971). A dual representation can be obtained starting from the
xeM –controlled relationship xfM = −Φ∗M (xeM ).
The concept of the memristor and its generalizations can be use-
ful in modelling a wide variety of phenomena, including thermistors,
Josephson junctions, discharge tubes, and even ionic systems like the
Hodgkin-Huxley model of a neuron; see Jeltsema & van der Schaft
(2010) and Jeltsema & Doria (2012) for a further discussion, some il-
lustrative examples, and the inclusion of so-called meminductors and
memcapacitors, the memory equivalents of inductors and capacitors,
in the port-Hamiltonian framework.
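As a small illustration of (4.11), the sketch below simulates a flow-controlled memristive port with a cubic (purely hypothetical) constitutive map ΦM, driven by a sinusoidal flow. The port never delivers energy back (eM fM ≤ 0), and plotting eM against fM would show the pinched hysteresis loop commonly associated with memristors.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical flow-controlled memristor: Phi_M(x) = x + x**3, so R_M(x) = 1 + 3*x**2
def R_M(x):
    return 1.0 + 3.0 * x**2

def f_M(t):
    return np.sin(2.0 * np.pi * t)            # prescribed flow through the port

# State of the memristive port: the time-integral x_{f_M} of the flow, cf. (4.11)
sol = solve_ivp(lambda t, x: [f_M(t)], (0.0, 3.0), [0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)
t = np.linspace(0.0, 3.0, 1201)
x = sol.sol(t)[0]
e_M = -R_M(x) * f_M(t)                        # effort delivered by the memristive port

print(np.max(e_M * f_M(t)))                   # <= 0: a true energy-dissipating relation
# import matplotlib.pyplot as plt; plt.plot(f_M(t), e_M)   # pinched hysteresis loop
```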

4.4 Relation with classical Hamiltonian systems

In this section we recall the classical framework of Lagrangian and


Hamiltonian differential equations as originating from analytical me-
chanics, and indicate how it naturally extends to input-state-output
port-Hamiltonian systems as dealt with in the previous section.

Recall that the Euler-Lagrange equations, see e.g. Arnol’d (1978);


Marsden & Ratiu (1999); Abraham & Marsden (1994), are given as
 
$$\frac{d}{dt}\Big(\frac{\partial L}{\partial \dot{q}}(q,\dot{q})\Big) - \frac{\partial L}{\partial q}(q,\dot{q}) = \tau, \tag{4.12}$$
where q = (q1 , . . . , qk )T are generalized configuration coordinates for
the system with k degrees of freedom, the Lagrangian L equals the
difference K − P between kinetic energy2 K(q, q̇) and potential en-
ergy P (q), and τ = (τ1 , . . . , τk )T is the vector of generalized forces
acting on the system. Furthermore, ∂L/∂q̇ denotes the column-vector of partial derivatives of L(q, q̇) with respect to the generalized velocities q̇1 , . . . , q̇k , and similarly for ∂L/∂q. In standard mechanical systems the
kinetic energy K is of the form
$$K(q, \dot{q}) = \frac{1}{2}\dot{q}^T M(q)\dot{q}, \tag{4.13}$$
where the k × k inertia (generalized mass) matrix M (q) is symmetric
and positive definite for all q. In this case the vector of generalized
momenta p = (p1 , . . . , pk )T , defined for any Lagrangian L as p = ∂L/∂q̇, is
simply given by
p = M (q)q̇, (4.14)
and by defining the state vector (q1 , . . . , qk , p1 , . . . , pk )T the k second-
order equations (4.12) transform into 2k first-order equations
$$\begin{aligned} \dot{q} &= \frac{\partial H}{\partial p}(q,p) \;\;(= M^{-1}(q)p), \\ \dot{p} &= -\frac{\partial H}{\partial q}(q,p) + \tau, \end{aligned} \tag{4.15}$$
where
$$H(q, p) = \frac{1}{2}p^T M^{-1}(q)p + P(q) \;\;\Big(= \frac{1}{2}\dot{q}^T M(q)\dot{q} + P(q)\Big) \tag{4.16}$$
is the total energy of the system. The equations (4.15) are called the
Hamiltonian equations of motion, and H is called the Hamiltonian. The
2
Strictly speaking, K(q, q̇) is the kinetic co-energy. However, whenever the kinetic
co-energy is quadratic as in (4.13) then its value is equal to the value of the correspond-
ing true kinetic energy ½ pᵀM⁻¹(q)p.

state space of (4.16) with local coordinates (q, p) is usually called the
phase space.
The following power-balance immediately follows from (4.15):
$$\frac{d}{dt}H = \frac{\partial^T H}{\partial q}(q,p)\,\dot{q} + \frac{\partial^T H}{\partial p}(q,p)\,\dot{p} = \frac{\partial^T H}{\partial p}(q,p)\,\tau = \dot{q}^T\tau, \tag{4.17}$$
expressing that the increase in energy of the system is equal to the sup-
plied work (conservation of energy). Hence by defining the input to be
u = τ and the output to be y := q̇ we obtain dH/dt = yᵀu. In particular, if
the Hamiltonian H(q, p) is assumed to be the sum of a positive kinetic
energy and a potential energy which is nonnegative, then it follows
that the system (4.15) with inputs u = τ and outputs y := q̇ is passive
(in fact, lossless) with storage function H(q, p).
System (4.15) with inputs u = τ and outputs y := q̇ is an exam-
ple of a Hamiltonian system with collocated inputs and outputs, which
more generally is given in the following form
$$\begin{aligned} \dot{q} &= \frac{\partial H}{\partial p}(q,p), \\ \dot{p} &= -\frac{\partial H}{\partial q}(q,p) + B(q)u, \quad u \in \mathbb{R}^m, \\ y &= B^T(q)\frac{\partial H}{\partial p}(q,p) \;\;(= B^T(q)\dot{q}), \quad y \in \mathbb{R}^m, \end{aligned} \tag{4.18}$$
where q = (q1 , . . . , qk )T and p = (p1 , . . . , pk )T , and B(q) is the in-
put force matrix, with B(q)u denoting the generalized forces resulting
from the control inputs u ∈ Rm . (In case m < k we speak of an un-
deractuated system. If m = k and the matrix B(q) is invertible for all q,
then the system is fully actuated.) Again we obtain the energy balance
$$\frac{dH}{dt}(q(t), p(t)) = u^T(t)\,y(t). \tag{4.19}$$
A further generalization of the class of Hamiltonian systems (4.18)
with collocated inputs and outputs consists in considering systems
which are described in local coordinates as
$$\begin{aligned} \dot{x} &= J(x)\frac{\partial H}{\partial x}(x) + g(x)u, \quad x \in \mathcal{X},\ u \in \mathbb{R}^m, \\ y &= g^T(x)\frac{\partial H}{\partial x}(x), \quad y \in \mathbb{R}^m, \end{aligned} \tag{4.20}$$

where J(x) is an n × n matrix with entries depending smoothly on x,


which is assumed to be skew-symmetric

J(x) = −J T (x). (4.21)

Indeed, because of (4.21) the energy-balance dH/dt(x(t)) = yᵀ(t)u(t)
continues to hold. The system (4.20) with J satisfying (4.21) is an
input-state-output port-Hamiltonian system with Dirac structure de-
termined by J(x) and g(x), with Hamiltonian H, and with zero resis-
tive structure. Note that (4.18) (and hence (4.15)) is a particular case
of (4.20) with x = (q, p), and J(x) being given by the constant skew-symmetric matrix $J = \begin{bmatrix} 0 & I_k \\ -I_k & 0 \end{bmatrix}$, and $g(q,p) = \begin{bmatrix} 0 \\ B(q) \end{bmatrix}$.
Finally adding linear energy-dissipation will then lead to the
input-state-output port-Hamiltonian systems as defined in the begin-
ning of this chapter.
We note that the generalization of classical Hamiltonian systems
(4.15) to systems
$$\dot{x} = J(x)\frac{\partial H}{\partial x}(x),$$
with J(x) satisfying (4.21) is common in geometric mechanics; see
e.g. Arnol’d (1978); Abraham & Marsden (1994); Marsden & Ratiu
(1999). In fact, in many situations the formulation ẋ = J(x) ∂H ∂x (x)
can be inferred from the classical Hamiltonian formulation by sym-
metry considerations. The most classical example of this are the Euler
equations for the angular momenta of a spinning rigid body, cf. eqn.
(3.1), which is derivable from the classical 6-dimensional Hamiltonian
equations for the motion of a spinning rigid body by symmetry of the
Hamiltonian (equal to the kinetic energy) under the action of the ma-
trix group SO(3). In all these cases, the matrix J(x) will satisfy, on
top of its skew-symmetry property, an integrability condition guaran-
teeing the existence of canonical coordinates, cf. Chapter 3. Note how-
ever that these integrability conditions are not essential for the defini-
tion of input-state-output port-Hamiltonian systems.
5
Representations of Dirac structures

A fundamental concept in the geometric, coordinate-free, definition


of a port-Hamiltonian system in Chapter 2 is the geometric notion of
a Dirac structure. For many purposes, including simulation and con-
trol, it is useful to obtain coordinate representations of port-Hamiltonian
systems, and to be able to convert one type of representation into an-
other. The key for doing so is to study coordinate representations of
Dirac structures. Specifically, once a basis for the space of the flows
and the dual space of efforts is taken, it is possible to give several ma-
trix representations of a Dirac structure. Once a specific representation
of the Dirac structure has been obtained the coordinate representation
of the corresponding port-Hamiltonian system, generally in the form
of a set of differential-algebraic equations, follows.
In the next sections, we discuss a number of representations of
Dirac structures and the resulting form of port-Hamiltonian systems.
For the corresponding proofs and further information, the reader is
referred to van der Schaft (2009). The chapter is concluded by rep-
resenting a Dirac structure in terms of the (pure) spinor formalism
stemming from exterior algebra.


5.1 Kernel and image representations

Every Dirac structure D ⊂ F × E, with E = F ∗ , can be represented in


kernel representation as

D = {(f, e) ∈ F × E | F f + Ee = 0}
for linear maps F : F → V and E : E → V satisfying
(i) EF∗ + F E∗ = 0,
(ii) rank(F + E) = dim F,   (5.1)
where V is a linear space with the same dimension as F, and where
F ∗ : V ∗ → E and E ∗ : V ∗ → (F ∗ )∗ = F are the adjoint maps of F and
E, respectively.
It follows from (5.1) that D can be also written in image representa-
tion as

D = {(f, e) ∈ F × E | f = E∗ λ, e = F∗ λ, λ ∈ V∗ }.
Sometimes it will be useful to relax the requirements on the linear
mappings F and E by allowing V to be a linear space of dimension
greater than the dimension of F. In this case we shall speak of relaxed
kernel and image representations.
Matrix kernel and image representations are obtained by choosing
linear coordinates for F, E and V. Indeed, take any basis f1 , · · · , fn for
F and the dual basis e1 = f1∗ , · · · , en = fn∗ for E = F ∗ , where dim
F = n. Furthermore, take any set of linear coordinates for V. Then
the linear maps F and E are represented by n × n matrices F and E
satisfying
(i) EF T + F E T = 0,
(ii) rank[F | E] = dim F.
In the case of a relaxed kernel and image representation F and E will
be n′ × n matrices with n′ ≥ n.
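In the matrix case, checking whether a given pair (F, E) of square matrices defines a Dirac structure amounts to verifying the two conditions above. The helper below is an illustrative numpy sketch (the function name and the example matrices are my own choices); it confirms that the canonical symplectic relations fq = −ep, fp = eq define a Dirac structure.

```python
import numpy as np

def is_dirac_kernel(F, E, tol=1e-10):
    """Check (i) E F^T + F E^T = 0 and (ii) rank [F | E] = dim F."""
    F, E = np.asarray(F, float), np.asarray(E, float)
    n = F.shape[0]
    skew_ok = np.allclose(E @ F.T + F @ E.T, 0.0, atol=tol)
    rank_ok = np.linalg.matrix_rank(np.hstack([F, E]), tol=tol) == n
    return skew_ok and rank_ok

# Canonical symplectic structure f_q = -e_p, f_p = e_q in kernel form
F = np.eye(2)
E = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(is_dirac_kernel(F, E))   # True
```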

5.2 Constrained input-output representation

Every Dirac structure D ⊂ F × E can be represented as



D = {(f, e) ∈ F × E | f = Je + Gλ, GT e = 0}, (5.2)

for a skew-symmetric mapping J : E → F and a linear mapping G, such that
im G = {f ∈ F | (f, 0) ∈ D},
ker J = {e ∈ E | (0, e) ∈ D}.

Conversely, for every G and skew-symmetric J equation (5.2) defines


a Dirac structure.
We have already encountered constrained input-output represen-
tations in the case of electrical circuits and mechanical systems with
kinematic constraints in Section 2.2.

5.3 Hybrid input-output representation

As we have seen, the graph of a skew-symmetric map from F to E,


or from E to F, is a Dirac structure, but not every Dirac structure
can be represented this way. On the other hand, by exchanging part
of the flow variables with effort variables, any Dirac structure can be
represented as the graph of a mapping. Specifically, let D be given in
matrix kernel representation by square matrices E and F as in (5.1).
Suppose rank F = m (≤ n). Select m independent columns of F ,
and group them into a matrix F1 . Write (possibly after permutations)
F = [F1 | F2 ], and correspondingly E = [E1 | E2 ],
" # " #
f1 e1
f= , and e = .
f2 e2

Then, cf. Bloch & Crouch (1999), the matrix [F1 | E2 ] is invertible, and
(" # " # " # " #)
f1 e1 f
1 e1
D= ∈ F, ∈E =J ,
f2 e2 e2 f2

with J := −[F1 | E2 ]−1 [F2 | E1 ] skew-symmetric.


It follows that any Dirac structure can be written as the graph of a
skew-symmetric map. The vectors e1 and f2 can be regarded as input
vectors, while the complementary vectors f1 and e2 can be seen as
output vectors1 .
1
The hybrid input-output representation of a Dirac structure is similar to the multi-port description of a passive linear circuit, where it is known that, although it is not always possible to describe the port as an admittance or as an impedance, it is possible to describe it as a hybrid admittance/impedance transfer matrix, for a suitable selection of input voltages and currents and complementary output currents and voltages Belevitch (1968).
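The hybrid construction of this section is easy to carry out numerically: select m independent columns of F, permute accordingly, and invert [F1 | E2]. The sketch below is one possible implementation (assuming the selected columns indeed make [F1 | E2] invertible); the function name and test case are illustrative.

```python
import numpy as np
from scipy.linalg import qr

def hybrid_J(F, E):
    """Return a column selection (perm) and the skew-symmetric J with [f1; e2] = J [e1; f2]."""
    F, E = np.asarray(F, float), np.asarray(E, float)
    m = np.linalg.matrix_rank(F)
    _, _, piv = qr(F, pivoting=True)          # pivoting picks m independent columns of F
    perm = np.concatenate([piv[:m], piv[m:]])
    F1, F2 = F[:, perm[:m]], F[:, perm[m:]]
    E1, E2 = E[:, perm[:m]], E[:, perm[m:]]
    J = -np.linalg.inv(np.hstack([F1, E2])) @ np.hstack([F2, E1])
    return perm, J

# Example: the canonical Dirac structure f_q = -e_p, f_p = e_q in kernel form
F = np.eye(2)
E = np.array([[0.0, 1.0], [-1.0, 0.0]])
perm, J = hybrid_J(F, E)
print(J)
print("skew-symmetric:", np.allclose(J, -J.T))
```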

5.4 Canonical coordinate representation

Consider a constant Dirac structure D ⊂ F × E. Then, cf. Courant


(1990), there exist coordinates (q, p, r, s) for F and correspond-
ing dual coordinates for E such that (f, e), when partitioned as
(fq , fp , fr , fs , eq , ep , er , es ), is contained in D if and only if
fq = −ep ,
fp = eq ,
fr = 0,
es = 0.
For a non-constant Dirac structure on a manifold (cf. Chapter 3) it is
possible to construct such canonical coordinates locally around a reg-
ular point of the Dirac structure if and only if the Dirac structure is
integrable; see Chapter 3 and Courant (1990) for details.
The representation of a Dirac structure by canonical coordinates
is very close to the classical Hamiltonian equations of motion, as re-
called in Section 4.4. Indeed, for a system without energy-dissipating
relations and without external ports, the dynamics with respect to the
canonical coordinate representation of a Dirac structure and an arbi-
trary Hamiltonian H(q, p, r, s) takes the form
$$\begin{aligned} \dot{q} &= \frac{\partial H}{\partial p}(q,p,r,s), \\ \dot{p} &= -\frac{\partial H}{\partial q}(q,p,r,s), \\ \dot{r} &= 0, \\ 0 &= \frac{\partial H}{\partial s}(q,p,r,s), \end{aligned}$$
where the third line of equations corresponds to conserved quantities (any function of r is conserved), and the fourth line of equations represents the algebraic constraints which are present in the system. Note

that if this last set of equations can be solved for s = s(q, p, r) as a


function of the remaining variables, then the system can be reduced
to the unconstrained canonical equations
$$\begin{aligned} \dot{q} &= \frac{\partial \bar{H}}{\partial p}(q,p,r), \\ \dot{p} &= -\frac{\partial \bar{H}}{\partial q}(q,p,r), \\ \dot{r} &= 0, \end{aligned}$$
for the Hamiltonian H̄(q, p, r) := H(q, p, r, s(q, p, r)). More about elim-
ination of algebraic constraints in port-Hamiltonian systems can be
found in van der Schaft (2013); see also Chapter 8.

5.5 Spinor representation

We close this chapter by briefly outlining a recently proposed spinor


representation of a Dirac structure in Maks (2010). The spinor con-
cept was discovered by Cartan in 1913 and the combined theory
of spinors and Clifford algebras has played an important role in math-
ematical physics since Pauli and Dirac presented their equations of the
electron in quantum mechanics in 1927 and 1928, respectively.
Let W be a real vector space of suitable dimension equipped with a
non-degenerate quadratic form Q. There exists a well-established the-
ory of the so-called Clifford algebra Cl(W, Q) associated to (W, Q),
which is closely connected to the existence of a spinor space S
of (W, Q). Without going into the details of the Clifford algebra
Cl(W, Q), the main idea is to represent W as a linear space of oper-
ators that act on S in the following way. Let ρ : W → End(S), with
End(S) (endomorphisms of S) denoting the set of linear operators on
S, be a linear mapping subject to the condition ρ2 (w) = Q(w)1, for
each w ∈ W.2 Then, the elements of S are called spinors.
Now, focussing on the particular structure W = F × E, with E =
F ∗ , and denoting the elements of W as (f, e), with f ∈ F and e ∈ E
2
Here ‘1’ should be understood as the identity operator. Note that ρ2 (w) = Q(w)1
is often replace by the less formal condition w2 = Q(w), where ρ(w) is simply identi-
fied with w and Q(w) is understood to mean Q(w)1.

as before, the quadratic form Q associated to the bilinear form ≪, ≫


is given by
Q((f, e)) =< f | e > .
A well-known model for a spinor space is the exterior algebra of the
dual of F, i.e., S ≅ ∧E.

Definition 5.1. The (linear) action of W = F × E on the spinor space


∧E is defined by
ρ((f, e))s = if s + e ∧ s,
for each s ∈ ∧E, i.e., by the sum of the interior product if s and the
exterior product e ∧ s by the flows f and efforts e, respectively.

Consider a port-Hamiltonian system without energy-dissipation


satisfying
$$\Big(-\dot{x},\ \frac{\partial H}{\partial x}(x),\ f_P,\ e_P\Big) \in \mathcal{D}. \tag{5.3}$$
According to Definition 5.1, the spinor representation of D is associ-
ated to the vector space F × E = Tx X × Tx∗ X × FP × EP , and naturally
translates (5.3) in the language of spinors into the algebraic identity
 
$$i_{(-\dot{x},\,f_P)}\, s + \Big(\frac{\partial H}{\partial x}(x),\ e_P\Big) \wedge s = 0, \tag{5.4}$$
The strength of the representation (5.4) is that it contains all the rel-
evant information of the port-Hamiltonian system in a single alge-

braic identity, namely the flows (−ẋ, fP ), the efforts ∂H
∂x (x), eP , and
the spinor s encoding the interconnection structure of the system. For
a more detailed exposition, references to Clifford algebra and exte-
rior calculus, and some illustrative examples, the reader is referred to
Maks (2010).
6
Interconnection of port-Hamiltonian systems

A crucial feature of network modeling is ‘interconnectivity’ or ‘compo-


sitionality’, meaning that complex, large-scale, systems can be built up
from simpler parts, and that certain properties of the complex system
can be studied in terms of its constituent parts and the way they are
interconnected. As we will see in this chapter, port-Hamiltonian sys-
tems completely fit within this paradigm, in the sense that the power-
conserving interconnection of port-Hamiltonian systems defines an-
other port-Hamiltonian system.
The theory underlying the compositionality of port-Hamiltonian
systems concerns the composition of Dirac structures. It will be shown
that the composition of Dirac structures through a number of pairs of
shared flow and effort variables leads to another Dirac structure, de-
fined in terms of the remaining flow and effort variables. Once this has
been shown, the rest of the story is easy: the Hamiltonian of the inter-
connected port-Hamiltonian system will be the sum of the Hamiltoni-
ans of its subsystems, while similarly the energy-dissipation relation
of the interconnected system is obtained as the union of the energy-
dissipating relations of the subsystems.


Figure 6.1: The composition of DA and DB .

6.1 Composition of Dirac structures

Physically it is plausible that the interconnection of a number of


power-conserving elements is again power-conserving. We will show
how this can be formalized within the framework of Dirac structures,
and how this leads to the stronger statement that the composition of
Dirac structures defines a Dirac structure.
Without loss of generality we consider the composition of two
Dirac structures with partially shared variables. Once we have shown
that the composition of two Dirac structures is again a Dirac struc-
ture, it is immediate that the power-conserving interconnection of any
number of Dirac structures is again a Dirac structure. Thus consider a
Dirac structure DA ⊂ F1 × F2 × E1 × E2 with Ei = Fi∗ , i = 1, 2, and
another Dirac structure DB ⊂ F2 × F3 × E2 × E3 , with E3 = F3∗ . The
linear space F2 is the space of shared flow variables, and E2 = F2∗ the
space of shared effort variables; see Figure 6.1.
In order to compose DA and DB , a problem arises with regard to
the sign convention for the power flow corresponding to the power
variables (f2 , e2 ) ∈ F2 × E2 . Indeed, if we take the convention1 that
< e | f > denotes incoming power, then for
(f1 , e1 , fA , eA ) ∈ DA ⊂ F1 × E1 × F2 × E2 ,
the term < eA | fA > denotes the incoming power in DA due to the
power variables (fA , eA ) ∈ F2 × E2 , while for
(fB , eB , f3 , e3 ) ∈ DB ⊂ F2 × E2 × F3 × E3 ,
the term < eB | fB > denotes the incoming power in DB . Clearly, the
incoming power in DA due to the power variables in F2 × E2 should
1. In physics it seems more common to take the opposite sign convention: positive
power is outgoing. However, the same argument still applies.

equal the outgoing power from DB . Thus we cannot simply equate the
flows fA and fB and the efforts eA and eB , but instead we define the
interconnection constraints as

fA = −fB ∈ F2 , eA = eB ∈ E2 . (6.1)

Therefore, the composition of the Dirac structures DA and DB , denoted


DA ◦ DB , is defined as
    DA ◦ DB := { (f1 , e1 , f3 , e3 ) ∈ F1 × E1 × F3 × E3 | ∃(f2 , e2 ) ∈ F2 × E2
                 s.t. (f1 , e1 , f2 , e2 ) ∈ DA and (−f2 , e2 , f3 , e3 ) ∈ DB }.

The next theorem is proved (in different ways) in Cervera et al. (2007);
van der Schaft (1999); Dalsmo & van der Schaft (1999), and Narayanan
(2002).

Theorem 6.1. Let DA ⊂ F1 × E1 × F2 × E2 and DB ⊂ F2 × E2 × F3 × E3


be Dirac structures. Then DA ◦ DB ⊂ F1 × E1 × F3 × E3 is a Dirac
structure.

Furthermore, the following explicit expression can be given for


the composition of two Dirac structures in terms of their matrix ker-
nel/image representation; see Cervera et al. (2007) for a proof.

Theorem 6.2. Let Fi , i = 1, 2, 3, be finite-dimensional linear spaces
with dim Fi = ni . Consider Dirac structures DA ⊂ F1 × E1 × F2 × E2 ,
nA = dim(F1 × F2 ) = n1 + n2 , and DB ⊂ F2 × E2 × F3 × E3 , nB =
dim(F2 × F3 ) = n2 + n3 , given by relaxed matrix kernel/image repre-
sentations (FA , EA ) = ([F1 | F2A ], [E1 | E2A ]), with FA and EA n′A × nA
matrices, n′A ≥ nA , respectively (FB , EB ) = ([F2B | F3 ], [E2B | E3 ]),
with FB and EB n′B × nB matrices, n′B ≥ nB . Define the (n′A + n′B ) × 2n2
matrix

    M = [ F2A , E2A ; −F2B , E2B ],                                (6.2)

and let LA and LB be m × n′A , respectively m × n′B , matrices (m :=
dim ker M T ), with

    L = [ LA | LB ],     ker L = im M.                             (6.3)

Then

    F = [ LA F1 | LB F3 ],     E = [ LA E1 | LB E3 ],              (6.4)

is a relaxed matrix kernel/image representation of DA ◦ DB .
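The construction in Theorem 6.2 is directly implementable. The following Python sketch (an illustration added here, not part of the original text; the toy matrices are made up) composes two Dirac structures given by relaxed kernel representations:

import numpy as np
from scipy.linalg import null_space

def compose_dirac(F1, F2A, E1, E2A, F2B, F3, E2B, E3):
    """Compose DA = ker([F1|F2A], [E1|E2A]) with DB = ker([F2B|F3], [E2B|E3])
    along the shared port, following Theorem 6.2."""
    M = np.block([[F2A, E2A], [-F2B, E2B]])        # (n'_A + n'_B) x 2 n_2
    L = null_space(M.T).T                          # rows span ker M^T, so ker L = im M
    LA, LB = L[:, :F1.shape[0]], L[:, F1.shape[0]:]
    return np.hstack([LA @ F1, LB @ F3]), np.hstack([LA @ E1, LB @ E3])

# toy example: DA = {f1 + fA = 0, e1 = eA}, DB = {fB + f3 = 0, eB = e3}
F1 = np.array([[1.], [0.]]); F2A = np.array([[1.], [0.]])
E1 = np.array([[0.], [1.]]); E2A = np.array([[0.], [-1.]])
F2B = np.array([[1.], [0.]]); F3 = np.array([[1.], [0.]])
E2B = np.array([[0.], [1.]]); E3 = np.array([[0.], [-1.]])
F, E = compose_dirac(F1, F2A, E1, E2A, F2B, F3, E2B, E3)
# the composed structure should contain (f1, f3, e1, e3) = (1, -1, 1, 1)
assert np.allclose(F @ np.array([1., -1.]) + E @ np.array([1., 1.]), 0)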
Separable Dirac structures turn out to have the following special
compositional property (van der Schaft & Maschke (2013)):
Proposition 6.1. Let DA ⊂ F1 × E1 × F2 × E2 and DB ⊂ F2 × E2 × F3 × E3
be two separable Dirac structures given as

    DA = KA × KA⊥ ,     DB = KB × KB⊥ ,

where KA ⊂ F1 × F2 and KB ⊂ F2 × F3 . Define the composition

    KA ◦ KB = { (f1 , f3 ) ∈ F1 × F3 | ∃f2 ∈ F2
                s.t. (f1 , f2 ) ∈ KA , (−f2 , f3 ) ∈ KB }.

Then the composition DA ◦ DB is the separable Dirac structure

    DA ◦ DB = (KA ◦ KB ) × (KA ◦ KB )⊥ .                           (6.5)

6.2 Interconnection of port-Hamiltonian systems

The composition theory of Dirac structures has the following con-


sequence for interconnection of port-Hamiltonian systems. Consider
k port-Hamiltonian systems with state spaces Xi , Hamiltonians Hi ,
energy-dissipating relations Ri , external port flow and effort spaces
Fi × Ei , and Dirac structures Di , i = 1, . . . , k. Furthermore, consider
an interconnection Dirac structure
DI ⊂ F1 × · · · × Fk × E1 × · · · × Ek × F × E, (6.6)
with F × E the new space of external flow and effort port variables, cf.
Figure 6.2. Obviously, the direct product D1 × · · · × Dk is again a Dirac
structure on the resulting state space
X := X1 × · · · × Xk .
Therefore by the theory of the previous section the subspace
D := (D1 × · · · × Dk ) ◦ DI

Figure 6.2: Interconnection of port-Hamiltonian systems.

is again a Dirac structure on X .


As a result, the interconnection of the k port-Hamiltonian systems
through the interconnection Dirac structure DI defines another port-
Hamiltonian system with Dirac structure D, Hamiltonian H being the
sum
H := H1 + · · · + Hk ,
and with resistive structure R being the direct product of the resistive
structures Ri , i = 1, . . . , k. This is a key result in the theory of port-
Hamiltonian systems, allowing to build up complex port-Hamiltonian
systems models from simple ones.
Finally we mention that the theory of composition of Dirac struc-
tures and the interconnection of port-Hamiltonian systems can also be
extended to infinite-dimensional Dirac structures and port-Hamiltonian
systems; see e.g. Golo (2002); Villegas (2007); Kurula et al. (2010);
Jacob & Zwart (2012). This implies that also distributed-parameter
port-Hamiltonian subsystems (cf. Chapter 14) can be included into the
overall port-Hamiltonian description.
7
Port-Hamiltonian systems and passivity

Passivity is a fundamental property that constitutes a cornerstone


for major developments in systems and control theory; see Willems
(1972a,b); Hill & Moylan (1976); van der Schaft (2000), and the ref-
erences therein. For linear systems, passivity can be characterized in
the frequency-domain by the notion of a positive-real transfer func-
tion. In the time-domain, both for linear and nonlinear systems, pas-
sivity is characterized by a dissipation inequality Willems (1972a),
which in the linear case reduces to a set of Linear Matrix Inequal-
ities (LMIs) Willems (1972b), and in the nonlinear case to a set
of (in-)equalities usually referred to as the Hill-Moylan conditions
Hill & Moylan (1976).
A system ẋ = f (x, u), y = h(x, u), where x ∈ X and u, y ∈ Rm , is
called passive if there exists a differentiable storage function S : X → R
with S(x) ≥ 0, x ∈ X , satisfying the differential dissipation inequality
    d/dt S(x(t)) ≤ uT (t)y(t),                                     (7.1)
along all solutions x(·) corresponding to input functions u(·). For
physical systems, the right-hand side uT y is usually interpreted as the
supplied power, and S(x) as the stored energy of the system when be-

75
76 Port-Hamiltonian systems and passivity

ing in state x. Furthermore, the system is called lossless if (7.1) holds


with equality. Hence, a passive system cannot store more energy than
it is supplied with, and in the lossless case the stored energy is exactly
equal to the supplied one.
Passivity is intimately related to (Lyapunov) stability. Indeed, if
S has a strict minimum at a certain state x∗ , then it follows from
(7.1) with u = 0 that x∗ is an equilibrium of the unforced dynamics
ẋ = f (x, 0) with Lyapunov function S(x); implying stability of the
equilibrium state x∗ .
The differential dissipation inequality (7.1) can be restated as

    ∂TS/∂x (x) f (x, u) ≤ uT h(x, u),                              (7.2)
for all x, u. For affine nonlinear systems ẋ = f (x) + g(x)u, y = h(x),
with g(x) an n × m matrix, this is easily seen to reduce to

    ∂TS/∂x (x) f (x) ≤ 0,     h(x) = gT (x) ∂S/∂x (x).             (7.3)
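As a simple illustration (added here, not in the original text): for the scalar
system ẋ = −x³ + u, y = x, the storage function S(x) = ½ x² satisfies (7.3),
since

    ∂S/∂x (x) f (x) = −x⁴ ≤ 0,     gT (x) ∂S/∂x (x) = x = h(x),

so that d/dt S = −x⁴ + uy ≤ uy and the system is passive.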
An integral form of the differential dissipation inequality (7.1) is pro-
vided by
    S(x(t1 )) − S(x(t0 )) ≤ ∫_{t0}^{t1} uT (t)y(t) dt,             (7.4)

for all time instants t0 ≤ t1 , all states x(t0 ), and all input functions
u : [t0 , t1 ] → Rm , where x(t1 ) denotes the state at time t1 resulting from
initial state x(t0 ) at time t0 . This integral form allows one to relax the
requirement of differentiability of S. Clearly (7.1) implies (7.4), while
if S is differentiable then conversely (7.4) can be seen to imply (7.1).
For port-Hamiltonian systems, passivity can be directly inferred
from the power-balance (2.27), and thus is a direct consequence of the
properties of the Dirac structure and the energy-dissipating relation.
Indeed, since by definition of the energy-dissipating relation, the term
eTR fR is always less than or equal to zero, the power-balance (2.27) can
be written as
    d/dt H = eTR fR + eTP fP ≤ eTP fP .                            (7.5)

Under the additional assumption that H is greater than or equal to zero1 ,
the latter inequality implies that any port-Hamiltonian system is pas-
sive with respect to the port variables fP , eP and storage function H.
Furthermore, if eTR fR = 0, the system is lossless.
On the other hand, not every passive system is necessarily a port-
Hamiltonian system, as is illustrated by the following example.
Example 7.1. Consider the nonlinear system

    [ ẋ1 ; ẋ2 ] = [ x1 ; −x2 ] + [ 0 ; 1 ] u,
    y = x1² x2 ,

which is passive (in fact, it is lossless) with storage function H(x1 , x2 ) =
½ x1² x2² . However, it is readily observed that there does not exist a 2 × 2
matrix J(x1 , x2 ) = −J T (x1 , x2 ), with entries depending smoothly on
x1 and x2 , such that

    [ x1 ; −x2 ] = J(x1 , x2 ) [ x1 x2² ; x1² x2 ].
Indeed, such a J(x1 , x2 ) will necessarily have a singularity at (0, 0).
In the next section we will show that, under an extra condition,
any passive linear system
    ẋ = Ax + Bu,
    y = Cx + Du,                                                   (7.6)
where A is an n × n matrix, B an n × m matrix, C an m × n matrix,
and D an m × m matrix, can be written as a port-Hamiltonian system.

7.1 Linear port-Hamiltonian systems

Consider the linear version of the input-state-output port-


Hamiltonian system with feedthrough term (4.9)
    ẋ = [J − R]Qx + [G − P ]u,
    y = [G + P ]T Qx + [M + S]u,                                   (7.7)

1. Note that it is sufficient to assume that H is bounded from below, i.e.,
H(x) ≥ c, for some real constant c. Then S(x) := H(x) − c defines a storage
function.

where J = −J T is an n × n matrix and M = −M T is an m × m matrix,


both reflecting the interconnection structure. The Hamiltonian of the
system is given by the quadratic function H(x) = ½ xT Qx, where Q =
QT is an n × n matrix referred to as the energy matrix. Furthermore,
R = RT is an n × n matrix and S = S T is an m × m matrix, both
reflecting the linear resistive structure, and G and P are n × m matrices,
satisfying

    [ R , P ; PT , S ] ≥ 0.                                        (7.8)
In particular, if P = 0, then the latter condition reduces to the condi-
tion that R ≥ 0 and S ≥ 0.

Theorem 7.1. The following properties hold:


1. If the system (7.6) is passive, with quadratic storage function
½ xT Qx, satisfying Q ≥ 0, and ker Q ⊂ ker A,2 then it allows a port-
Hamiltonian representation of the form (7.7).
2. If Q ≥ 0, then the port-Hamiltonian system (7.7) is passive.

Proof. Because of the condition ker Q ⊂ ker A, it follows from linear
algebra that there exists a matrix Σ such that

    [ A , B ; −C , −D ] = Σ [ Q , 0 ; 0 , I ].                     (7.9)

In fact, if Q > 0 then such a Σ is uniquely defined as

    Σ := [ AQ⁻¹ , B ; −CQ⁻¹ , −D ].

Passivity of the system (7.6), with quadratic storage function ½ xT Qx,
amounts to the differential dissipation inequality xT Qẋ ≤ uT y, for all
x and u. Substituting ẋ = Ax + Bu and y = Cx + Du, and making use
of (7.9), the differential dissipation inequality can be rewritten as

    [ x ; u ]T [ Q , 0 ; 0 , I ] Σ [ Q , 0 ; 0 , I ] [ x ; u ] ≤ 0,

for all x and u, or equivalently

    [ Q , 0 ; 0 , I ] ( Σ + ΣT ) [ Q , 0 ; 0 , I ] ≤ 0.

2. Note that the condition ker Q ⊂ ker A is automatically satisfied if Q > 0.
It follows from linear algebra that we can choose Σ satisfying (7.9) in
such a way that Σ + ΣT ≤ 0. Hence, if we write Σ = J̄ − R̄, J̄ = −J̄ T ,
and R̄ = R̄T , then R̄ ≥ 0. Now, denote

    J̄ = [ J , G ; −GT , −M ],     R̄ = [ R , P ; PT , S ],

with J = −J T , M = −M T , R = RT , and S = S T ; then (7.6) can be
written as

    [ ẋ ; −y ] = ( [ J , G ; −GT , −M ] − [ R , P ; PT , S ] ) [ Qx ; u ],
which readily can be seen to coincide with (7.7).
Secondly, to prove that linear port-Hamiltonian systems (7.7) are
passive with storage function H(x) = ½ xT Qx, with Q ≥ 0, we need
to show that, along the trajectories of (7.7), the differential dissipation
inequality (7.1) holds. Indeed, using skew-symmetry of J and M , and
the condition (7.8), we obtain

    d/dt H = −xT QRQx + y T u − uT Su − 2xT QP u
           = − [ Qx ; u ]T [ R , P ; PT , S ] [ Qx ; u ] + y T u ≤ y T u.

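The construction in the first part of the proof is easy to carry out numerically. The following Python sketch (an illustration added here, not part of the original text; the numerical data are hypothetical) computes Σ for Q > 0, splits it into its skew-symmetric and symmetric parts, and recovers the matrices J, R, G, P, M, S of (7.7).

import numpy as np

def to_port_hamiltonian(A, B, C, D, Q):
    """Given a passive linear system (A, B, C, D) with quadratic storage
    0.5*x'Qx, Q > 0, return (J, R, G, P, M, S) of the representation (7.7)."""
    n, m = B.shape
    Qinv = np.linalg.inv(Q)
    # Sigma as in the proof of Theorem 7.1 (unique for Q > 0)
    Sigma = np.block([[A @ Qinv, B], [-C @ Qinv, -D]])
    Jbar = 0.5 * (Sigma - Sigma.T)       # skew-symmetric part
    Rbar = -0.5 * (Sigma + Sigma.T)      # symmetric part, >= 0 if passive
    J, G = Jbar[:n, :n], Jbar[:n, n:]
    M = -Jbar[n:, n:]
    R, P, S = Rbar[:n, :n], Rbar[:n, n:], Rbar[n:, n:]
    return J, R, G, P, M, S

# hypothetical passive example: mass-spring-damper, H = 0.5*(k*q^2 + p^2/mass)
k, mass, d = 2.0, 1.0, 0.5
Q = np.diag([k, 1.0 / mass])
A = np.array([[0.0, 1.0 / mass], [-k, -d / mass]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0 / mass]])        # y = velocity (collocated output)
D = np.zeros((1, 1))
J, R, G, P, M, S = to_port_hamiltonian(A, B, C, D, Q)
# sanity checks: reconstruct (A, B, C, D) as in (7.7)
assert np.allclose((J - R) @ Q, A) and np.allclose(G - P, B)
assert np.allclose((G + P).T @ Q, C) and np.allclose(M + S, D)
assert np.all(np.linalg.eigvalsh(np.block([[R, P], [P.T, S]])) >= -1e-9)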

7.2 Available and required storage

Although using the Hamiltonian H as a storage function may suggest


that the storage function is unique, we know from Willems (1972a,b)
that generally a passive system admits infinitely many possible storage
functions. First of all, let us define the non-negative function
    Sa (x) = sup_{u(·), τ ≥ 0} − ∫_{0}^{τ} uT (t)y(t) dt            (7.10)

for x(0) = x. It can be shown Willems (1972a); van der Schaft (2000),
that the system is passive if and only if the function Sa is well-defined
for all x ∈ X , that is, the righthand side of (7.10) is finite for all x ∈ X .
It is easy to see, cf. van der Schaft (2000), that, whenever the system
is reachable from a certain state x∗ , the function Sa is well-defined (and
thus the system is passive) if and only if Sa (x∗ ) < ∞.
The quantity Sa (x) represents the maximal amount of energy that
can be extracted from the system starting from the initial state x(0) =
x, and is therefore called the available storage. Moreover, it is the smallest
of all possible storage functions: Sa (x) ≤ S(x), x ∈ X , for all other
storage functions S.
Furthermore, if we assume that the system is reachable from a cer-
tain state x∗ then there exists a largest storage function in the following
sense. Define the expression
    Sr (x) = inf_{u(·), τ ≥ 0} ∫_{−τ}^{0} uT (t)y(t) dt,            (7.11)

where x(−τ ) = x∗ and x(0) = x. Then it follows from the assumed


passivity of the system that there exists a constant κ > −∞ such that
Sr (x) ≥ κ, and such that the function Sr (x) − κ defines a storage func-
tion. Moreover, for any storage function S it holds that
S(x) ≤ Sr (x) + S(x∗ ),
and Sr (x) + S(x∗ ) is a storage function.
The quantity Sr (x) represents the required supply to reach x at t = 0
starting from x∗ . For lossless systems it can be shown that Sa = Sr ,
and thus that storage functions are unique up to a positive constant.
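A simple worked example (added here for illustration): for the lossless scalar
system ẋ = u, y = x with storage candidate ½ x², one has
−∫_{0}^{τ} u(t)y(t) dt = ½ x(0)² − ½ x(τ)², so Sa (x) = ½ x², attained by steering
x(τ) to zero; conversely, the supply needed to reach x from x∗ = 0 equals
½ x² along every trajectory, so Sr (x) = ½ x² = Sa (x), in line with the above
statement for lossless systems.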
Example 7.2. Consider a port-Hamiltonian input-state-output sys-
tem
    ẋ = [J(x) − R(x)] ∂H/∂x (x) + g(x)u,
    y = gT (x) ∂H/∂x (x),
with R(x) ≥ 0 and H(x) ≥ 0 for all x. The system is passive with stor-
age function given by its Hamiltonian H. Let H̃(x) be another storage
function. Then usually (see however the possible obstructions indi-
cated in Example 7.1) there exist matrices J̃(x) and R̃(x) such that

    [J(x) − R(x)] ∂H/∂x (x) = [J̃(x) − R̃(x)] ∂H̃/∂x (x).

Hence, the system is port-Hamiltonian with respect to a different
Hamiltonian and a different Dirac structure and energy-dissipating re-
lation specified by J̃ and R̃.

7.3 Shifted port-Hamiltonian systems and passivity

Consider a port-Hamiltonian system given by a Dirac structure (see


Chapter 2)

D ⊂ Tx X × Tx∗ X × FR × ER × FP × EP ,

a resistive structure R ⊂ FR ×ER , and a Hamiltonian H : X → R, with


resulting dynamics (2.28). Let us assume that both the Dirac structure
D and the resistive structure R are constant; that is, not depending
on x. Furthermore, assume that the resistive structure R is linear (see
Chapter 4).
Consider now the situation of a steady-state x∗ , corresponding to
steady-state values fR∗ , e∗R , fP∗ , e∗P , i.e.,

    ( 0, ∂H/∂x (x∗ ), fR∗ , e∗R , fP∗ , e∗P ) ∈ D,    (fR∗ , e∗R ) ∈ R.    (7.12)
Then, by using the linearity of D and R, we can subtract (7.12) from
(2.28), so as to obtain

    ( −ẋ(t), ∂H/∂x (x(t)) − ∂H/∂x (x∗ ), fR (t) − fR∗ , eR (t) − e∗R ,
      fP (t) − fP∗ , eP (t) − e∗P ) ∈ D,                              (7.13)
    ( fR (t) − fR∗ , eR (t) − e∗R ) ∈ R.

This defines a shifted port-Hamiltonian system as follows. Define as in


Jayawardhana et al. (2007) the shifted Hamiltonian corresponding to

the steady-state x∗ as3

    H̃(x) := H(x) − (x − x∗ )T ∂H/∂x (x∗ ) − H(x∗ ),                 (7.14)

then we observe that

    ∂H̃/∂x (x) = ∂H/∂x (x) − ∂H/∂x (x∗ ).
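For instance (an added illustration, not in the original text), for a quadratic
Hamiltonian H(x) = ½ xT Qx with Q = QT the shifted Hamiltonian becomes

    H̃(x) = ½ xT Qx − (x − x∗ )T Qx∗ − ½ x∗T Qx∗ = ½ (x − x∗ )T Q (x − x∗ ),

i.e., the same quadratic form, now centered at the steady state x∗.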
Theorem 7.2. Consider a port-Hamiltonian system with constant
Dirac structure D and constant linear resistive structure R. Further-
more, assume the existence of a steady-state x∗ satisfying (7.12). Then,
by defining the shifted Hamiltonian (7.14), we have obtained a port-
Hamiltonian system (7.13) with the same state space X , same Dirac
structure D and resistive structure R, but with shifted Hamiltonian H̃
and shifted external port variables fP − fP∗ , eP − e∗P .

Corollary 7.3. Assume additionally that the Hamiltonian H is convex.


Then the shifted port-Hamiltonian system (7.13) with shifted Hamil-
tonian H̃ is passive with respect to the shifted external port variables
fP − fP∗ , eP − e∗P .

Proof. By convexity of H the shifted Hamiltonian H̃ has a minimum


at x∗ , and thus the shifted port-Hamiltonian system is passive with
respect to (fP − fP∗ , eP − e∗P ). 

In Chapter 9 we will further generalize the idea of shifted port-


Hamiltonian systems and shifted passivity by exploiting the concept
of (maximal) monotone relations.

3. In thermodynamics this is called the availability function, cf. Keenan (1951). In
convex analysis it is also known as the Bregman function, cf. Bürger et al. (2013).
8
Conserved quantities and algebraic
constraints

In Chapter 7 it is shown that passivity is a key property for stability


analysis since the Hamiltonian H may serve as a Lyapunov function.
Indeed, from (7.1) it is readily observed that for an autonomous port-
Hamiltonian system, with dynamics specified by
 
    ( −ẋ, ∂H/∂x (x), fR , eR ) ∈ D                                  (8.1)

and (fR , eR ) ∈ R, the power-balance reduces to

    d/dt H = eTR fR ≤ 0,                                            (8.2)
which implies that if H(x∗ ) = 0 and H(x) > 0, for every x ≠ x∗ , then
x∗ is a stable equilibrium.1 However, the point where the Hamiltonian
is minimal (which typically coincides with the zero state) is often not
the one of practical interest for set-point regulation, in which case the
Hamiltonian alone can not be employed as a Lyapunov function.

1. The equilibrium x∗ is asymptotically stable if the dissipation term eTR fR < 0
for all x ≠ x∗ , or alternatively if a detectability condition is satisfied, guaranteeing
asymptotic stability by the use of LaSalle’s Invariance principle.


8.1 Casimirs of conservative port-Hamiltonian systems

A well-known approach in Hamiltonian systems, see e.g.


Marsden & Ratiu (1999), is to consider, next to the Hamiltonian
function, additional conserved quantities which may be present in
the system. To this end, consider functions C : X → R such that d/dt C = 0
(dynamical invariance) along the trajectories of the system. The main
idea then is to search for functions C such that V := H + C has a
minimum at the equilibrium x∗ , and, consequently, that we are able
to infer (asymptotic) stability by replacing (8.2) with
    d/dt V = eTR fR ≤ 0,
using V as a Lyapunov function.
Functions that are conserved quantities of the system for every
Hamiltonian are called Casimir functions or briefly Casimirs. Casimirs
are completely characterized by the Dirac structure D. Indeed, for
every port-Hamiltonian system without energy-dissipation and ex-

ternal ports, and with specified dynamics − ẋ, ∂H ∂x (x) ∈ D, the
function C : X → R is a Casimir if and only if the gradient vector
e = ∂C T
∂x (x) satisfies e fS = 0, for all fS for which there exists eS such
that (fS , eS ) ∈ D, or, equivalently,
d ∂T C  ∂T C 
C= x(t) ẋ(t) = − x(t) fS = −eT fS = 0. (8.3)
dt ∂x ∂x
By the property D = D ⊥⊥ of the Dirac structure D, this is readily seen
to be equivalent to the requirement that e = ∂C
∂x (x) satisfies (0, e) ∈ D.
Example 8.1. For any Hamiltonian dynamics

    ẋ = J(x) ∂H/∂x (x),

with J(x) = −J T (x), the corresponding Casimirs are solutions to the
set of PDEs

    ∂TC/∂x (x) J(x) = 0.

The well-known Casimir for the spinning rigid body of Example 3.1 is
the squared norm px² + py² + pz² of the angular momentum, whose vector
of partial derivatives is in the kernel of the matrix J(p) of (3.1).
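As a quick symbolic check (an added illustration; it assumes the standard cross-product form of the rigid-body structure matrix, which is what (3.1) amounts to up to sign conventions), the following sympy snippet verifies that the gradient of C(p) = px² + py² + pz² annihilates J(p):

import sympy as sp

px, py, pz = sp.symbols('p_x p_y p_z')
# skew-symmetric structure matrix of the rigid body (cross-product form)
J = sp.Matrix([[0, -pz, py],
               [pz, 0, -px],
               [-py, px, 0]])
C = px**2 + py**2 + pz**2
gradC = sp.Matrix([[sp.diff(C, v) for v in (px, py, pz)]])   # row vector d^T C/dp
print(gradC * J)   # -> Matrix([[0, 0, 0]]): C is indeed a Casimir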

8.2 Linear resistive structures and the dissipation obstacle

Similarly, a Casimir for a port-Hamiltonian system (8.1) with linear
resistive structure R, cf. Chapter 4, is defined as a function C : X → R
satisfying

    ( 0, ∂C/∂x (x), 0, 0 ) ∈ D.                                     (8.4)
Indeed, for every port-Hamiltonian system with the same Dirac struc-
ture this implies the identity (8.3). Although the above definition of
a Casimir function may seem to hold only for a specific linear resistive
structure Rf fR + Re eR = 0, where the square matrices Rf and Re sat-
isfy the condition Rf ReT = Re RfT ≥ 0, together with rank[Rf | Re ] =
dim fR , it can be shown Pasumarthy & van der Schaft (2007) that a
conserved quantity for one resistive relation actually is a conserved
quantity for all linear resistive structures.
The fact that a Casimir for one linear resistive structure is a
Casimir for all linear resistive structures is closely related to the so-
called dissipation obstacle for the existence of Casimir functions in the
case of input-state-output port-Hamiltonian systems. Indeed, for au-
tonomous systems of the form
    ẋ = [J(x) − R(x)] ∂H/∂x (x),

with J(x) = −J T (x), R(x) = RT (x), and R(x) ≥ 0 for all x ∈ X , the
corresponding Casimirs are solutions to the set of PDEs

    ∂TC/∂x (x) [J(x) − R(x)] = 0.

Multiplying from the right by ∂C/∂x (x), and using skew-symmetry of
J(x) and positive semi-definiteness of R(x), this is seen to be equiva-
lent to

    ∂TC/∂x (x) R(x) = 0,     ∂TC/∂x (x) J(x) = 0.

(Indeed, the quadratic term in J vanishes by skew-symmetry, so
∂TC/∂x (x) R(x) ∂C/∂x (x) = 0, which by R(x) ≥ 0 implies ∂TC/∂x (x) R(x) = 0,
and then the remaining term gives ∂TC/∂x (x) J(x) = 0.)
Hence, Casimirs are necessarily independent of those state-space co-
ordinates that are directly affected by physical damping. We come
back to the dissipation obstacle in Chapter 15.

8.3 Algebraic constraints

Algebraic constraints on the state variables are to a large extent de-


termined by the Dirac structure. Indeed, let us first consider a port-
Hamiltonian system without external and resistive ports, described
by a Dirac structure D and a Hamiltonian H. Define for every x ∈ X
the subspace

    PD (x) := { α ∈ Tx∗ X | ∃X ∈ Tx X such that (α, X) ∈ D(x) }.

(This defines a co-distribution on the manifold X .) Then the definition


of the port-Hamiltonian system implies that
    ∂H/∂x (x) ∈ PD (x).
In general, this imposes algebraic constraints on the state variables x ∈
X . For example, if the Dirac structure is given in image representation
(see Chapter 5) as

    D(x) = { (X, α) ∈ Tx X × Tx∗ X | X = E T (x)λ, α = F T (x)λ },

then it follows that


    ∂H/∂x (x) ∈ im F T (x),
which leads in general (depending on the Hamiltonian H) to algebraic
constraints on the state variables x. Similarly, if the Dirac structure is
given in constrained input-output form (5.2) then the algebraic con-
straints are given as
    GT (x) ∂H/∂x (x) = 0.
The resulting dynamics is thus a combination of differential and
algebraic equations, called port-Hamiltonian differential-algebraic
equations (DAEs).
In the case of resistive and/or external ports, the algebraic con-
straints on the state variables x may also depend on the resistive
and/or external port variables. A special case arises for resistive ports.
Consider a Dirac structure

(X, α, fR , eR ) ∈ D(x) ⊂ Tx X × Tx∗ X × FR × ER ,
8.4. Elimination of algebraic constraints 87

with the resistive flow and effort variables satisfying a relation


R(fR , eR ) = 0.
Then, the gradient of the Hamiltonian has to satisfy the condition

    ∂H/∂x (x) ∈ { α ∈ Tx∗ X | ∃(X, fR , eR ) ∈ Tx X × FR × ER
                  such that (X, α, fR , eR ) ∈ D(x), R(fR , eR ) = 0 }.
Depending on the resistive relation R(fR , eR ) = 0, this may again in-
duce algebraic constraints on the state variables x.

8.4 Elimination of algebraic constraints

An important problem concerns the possibility to solve for the alge-


braic constraints of a port-Hamiltonian differential-algebraic system.
We will focus on the case that the Dirac structure is given in con-
strained input-output form (5.2) and thus the algebraic constraints are
explicitly given as
    GT (x) ∂H/∂x (x) = 0.                                           (8.5)
The precise way this will constrain the state variables x depends on
G(x) as well as on the properties of the Hamiltonian H. For exam-
ple, if the Hamiltonian H is such that its gradient ∂H
∂x (x) happens to
T
be contained in the kernel of the matrix G (x) for all x, then the al-
gebraic constraints (8.5) are automatically satisfied, and actually the
state variables are not constrained at all.
In general, under constant rank assumptions, the set
    Xc := { x ∈ X | GT (x) ∂H/∂x (x) = 0 }
will define a submanifold of the total state space X , called the con-
strained state space. In order that this constrained state space qualifies
as the state space for a port-Hamiltonian system without further alge-
braic constraints, one needs to be able to restrict the dynamics of the
port-Hamiltonian system to the constrained state space. This is always
possible under the condition that the matrix
    GT (x) ∂²H/∂x² (x) G(x)                                         (8.6)

has full rank. Indeed, in this case the differentiated constraint equation

    0 = d/dt ( GT (x) ∂H/∂x (x) ) = ∗ + GT (x) ∂²H/∂x² (x) G(x) λ
(with ∗ denoting unspecified terms) can be uniquely solved for λ, lead-
ing to a uniquely defined dynamics on the constrained state space Xc .
Hence the set of consistent states for the port-Hamiltonian differential-
algebraic system (the set of initial conditions for which the system has
a unique ordinary solution) is equal to the constrained state space Xc .
Using terminology from the theory of DAEs, the condition that the
matrix in (8.6) has full rank ensures that the index of the DAEs speci-
fied by the port-Hamiltonian system is equal to one. This can be sum-
marized as

Proposition 8.1. Consider the port-Hamiltonian differential-


algebraic system represented as in (5.2), with algebraic constraints
GT (x) ∂H/∂x (x) = 0. Suppose that the matrix GT (x) ∂²H/∂x² (x) G(x)
has full
rank for all x ∈ Xc . Then the system has index one, and the set of
consistent states is equal to Xc .

If the matrix in (8.6) does not have full rank, then the index of
the port-Hamiltonian differential-algebraic system will be larger than
one, and it will be necessary to further constrain the space Xc by
considering, apart from the ‘primary’ algebraic constraints (8.5), also
their (repeated) time-derivatives (called secondary constraints). We re-
fer to van der Schaft (1987); Nijmeijer & van der Schaft (1990) for a
detailed treatment and conditions for reducing the port-Hamiltonian
DAE system to a system without algebraic constraints in case J(x) cor-
responds to a symplectic structure. For the linear case, and the relation
with the theory of matrix pencils see van der Schaft (2013).
A particular elegant representation of the algebraic constraints
arises from the canonical coordinate representation. We will only consider
the case of a system without energy-dissipation and external ports. If
the Dirac structure D on the state space manifold is integrable, cf. Chap-
ter 3, then there exist local coordinates x = (q, p, r, s) for X in which
the system (without energy-dissipation and external ports) takes the
form

    q̇ = ∂H/∂p (q, p, r, s)
    ṗ = − ∂H/∂q (q, p, r, s)
    ṙ = 0                                                           (8.7)
    0 = ∂H/∂s (q, p, r, s)
Hence the Casimirs are all state functions only depending on r, while
the algebraic constraints take the simple form ∂H/∂s = 0.
The condition that the matrix in (8.6) has full rank is in the canoni-
cal coordinate representation equivalent to the partial Hessian matrix
∂²H/∂s² being invertible. Solving, by the Implicit Function theorem, the
algebraic constraints ∂H/∂s = 0 for s as a function s(q, p, r) reduces the
DAEs (8.7) to the ODEs

    q̇ = ∂H̄/∂p (q, p, r)
    ṗ = − ∂H̄/∂q (q, p, r)                                           (8.8)
    ṙ = 0

where H̄(q, p, r) := H(q, p, r, s(q, p, r)).
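A hedged symbolic sketch of this elimination step (added here for illustration; the Hamiltonian used is a made-up example satisfying the index-one condition): solve ∂H/∂s = 0 for s, substitute, and form the reduced Hamiltonian H̄.

import sympy as sp

q, p, r, s = sp.symbols('q p r s')
# hypothetical Hamiltonian in canonical coordinates (q, p, r, s)
H = sp.Rational(1, 2)*p**2 + sp.Rational(1, 2)*(q - s)**2 + sp.Rational(1, 2)*s**2 + r

# index-one condition: the partial Hessian d^2 H/ds^2 must be invertible
assert sp.diff(H, s, 2) != 0

# solve the algebraic constraint dH/ds = 0 for s, build the reduced Hamiltonian
s_sol = sp.solve(sp.Eq(sp.diff(H, s), 0), s)[0]     # here: s = q/2
Hbar = sp.simplify(H.subs(s, s_sol))                # H(q, p, r, s(q, p, r))

# reduced ODEs (8.8)
q_dot = sp.diff(Hbar, p)        #  dq/dt =  dHbar/dp
p_dot = -sp.diff(Hbar, q)       #  dp/dt = -dHbar/dq
print(Hbar, q_dot, p_dot)       # -> p**2/2 + q**2/4 + r,  p,  -q/2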


9
Incrementally port-Hamiltonian systems

Recall the definition of a port-Hamiltonian system, cf. Chapter 2. It


is defined by a Dirac structure D ⊂ FS × ES × FP × EP × FR × ER ,
where the flow and effort variables (fR , eR ) ∈ FR × ER are terminated
by an energy-dissipating relation (resistive structure) R ⊂ FR × ER
satisfying the property

eTR fR ≤ 0 for all (fR , eR ) ∈ R.

It follows that the composition of the Dirac structure D with the resis-
tive structure R, defined as

    D ◦ R := {(fS , eS , fP , eP ) ∈ FS × ES × FP × EP |
              ∃(fR , eR ) ∈ R s.t. (fS , eS , fP , eP , fR , eR ) ∈ D}

satisfies the property

eTS fS + eTP fP = −eTR fR ≥ 0, (9.1)

for all (fS , eS , fP , eP ) ∈ D ◦ R.


Hence a more general viewpoint on port-Hamiltonian systems is
not to distinguish between the Dirac structure D and the resistive


structure R, but instead to start from a general (nonlinear) relation

    N ⊂ FS × ES × FP × EP

having the property that

eTS fS + eTP fP ≥ 0, (9.2)

for all (fS , eS , fP , eP ) ∈ N . Thus N combines the Dirac structure D and


the resistive structure R into a single object.
This leads to two interesting new directions. The first one is the
theory of incrementally port-Hamiltonian systems, as will be explored in
the present chapter based on Camlibel & van der Schaft (2013). The
second is the connection of port-Hamiltonian systems with pseudo-
gradient systems (generalizing the Brayton-Moser equations for elec-
trical circuits), as will be discussed in Chapter 11.

Remark 9.1. Since in the definition of N no explicit assumption re-


garding the linearity of the power-conserving interconnection struc-
ture D is made anymore, this also has a potential towards the model-
ing of systems where the assumption of linearity appears to be a stum-
bling block, as in thermodynamical systems, cf. Eberard et al. (2007).

9.1 Incrementally port-Hamiltonian systems

The basic idea of the definition of incrementally port-Hamiltonian sys-


tems is to replace the composition N of a constant Dirac structure D on
a linear state space X and a resistive structure R by a maximal monotone
relation M, cf. Rockafellar & Wets (1998).

Definition 9.1. Let F be a linear space; in the sequel always assumed


to be finite-dimensional. A relation M ⊂ F × E, with E = F ∗ , is said
to be monotone if
(v1 − v2 )T (u1 − u2 ) ≥ 0, (9.3)
for all (ui , vi ) ∈ M with i ∈ {1, 2}. It is called maximal monotone if it is
monotone and the implication

M′ is monotone and M ⊂ M′ =⇒ M = M′ (9.4)



holds.
Furthermore, consider a maximal monotone relation

M ⊂ FS × ES × FP × EP ,

and a Hamiltonian H : X → R, where X = FS is the linear state


space. Then the dynamics of the corresponding incrementally port-
Hamiltonian system is defined as
 
    ( −ẋ(t), ∂H/∂x (x(t)), fP (t), eP (t) ) ∈ M,    t ∈ R.          (9.5)
It follows that the dynamics of incrementally port-Hamiltonian
systems are characterized by the satisfaction of the inequality
    ( ∂H/∂x (x1 (t)) − ∂H/∂x (x2 (t)) )T (ẋ1 (t) − ẋ2 (t))
         ≤ (e1P (t) − e2P (t))T (fP1 (t) − fP2 (t))                   (9.6)
along all trajectories
 
    ( −ẋi (t), ∂H/∂x (xi (t)), fPi (t), eiP (t) ) ∈ M,    i ∈ {1, 2}.
What is the exact relation between port-Hamiltonian systems and
incrementally port-Hamiltonian systems? If the Dirac structure D is
a constant Dirac structure on a linear state space, and furthermore
the port-Hamiltonian system has no resistive structure (and thus is
conservative), then the system is also incrementally port-Hamiltonian.
This follows from the following proposition.

Proposition 9.1. Every constant Dirac structure D ⊂ F × E is maxi-


mal monotone.

Proof. Let (fi , ei ) ∈ D with i = 1, 2. Since eT f = 0 for all (f, e) ∈ D, we


obtain
(e1 − e2 )T (f1 − f2 ) = 0.
Therefore, D is monotone on F × E. Let D ′ be a monotone relation
on F × E such that D ⊂ D ′ . Let (f ′ , e′ ) ∈ D ′ and (f, e) ∈ D. Since
D ′ is monotone, D ⊂ D ′ , and since D is a subspace, we have 0 ≤
(e′ − αe)T (f ′ − αf ) = e′T f ′ − α(e′T f + eT f ′ ) for any α ∈ R. This means


that
e′T f + eT f ′ = 0,
and hence (f ′ , e′ ) ∈ D ⊥⊥ = D. Therefore, we get D ′ ⊂ D. Since the
reverse inclusion already holds, we obtain D ′ = D. Consequently, D is
maximal monotone. 

On the other hand, not every maximal monotone relation is a


Dirac structure, even if we restrict attention to maximal monotone re-
lations M satisfying the additional property eT f = 0 for all (f, e) ∈
M (no energy-dissipation), since maximal monotone relations need
not be linear. We conclude that in the conservative case every port-
Hamiltonian system is incrementally port-Hamiltonian, but not the
other way around.
In the non-conservative case, the relation between port-
Hamiltonian systems and incrementally port-Hamiltonian systems is
less simple, as can be seen from the following examples.
Example 9.1 (Mechanical systems with friction). Consider a mechan-
ical system with standard kinetic energy and arbitrary potential en-
ergy, subject to friction. The friction characteristic corresponds to a
constitutive relation between certain port variables fR , eR . Assume for
simplicity that fR , eR are scalar variables, i.e., consider a single fric-
tion component with velocity fR and friction force −eR . In the case
of linear friction −eR = dfR with d > 0, the resulting system is both
port-Hamiltonian and incrementally port-Hamiltonian. In the case of
a friction characteristic
−eR = F (fR ),
the system will be port-Hamiltonian if the graph of the function F is
in the first and third quadrant. On the other hand, it will be incremen-
tally port-Hamiltonian if the function F is monotonically non-decreasing.
For example, the Stribeck friction char-
acteristic defines a port-Hamiltonian system, but not an incrementally
port-Hamiltonian system.
Example 9.2 (Circuit with tunnel diode). Consider an electrical LC-
circuit (possibly with nonlinear capacitors and inductors) together

with a resistor corresponding to an electrical port fR = −I, eR = V


(current and voltage). For a linear resistor (conductor) I = GV, G >
0, the system is both port-Hamiltonian and incrementally port-
Hamiltonian. For a nonlinear conductor I = G(V ) the system is port-
Hamiltonian if the graph of the function G is in the first and third quad-
rant while incrementally port-Hamiltonian if G is monotonically non-
decreasing. For example, a tunnel diode characteristic

I = Φ(V − V0 ) + I0 ,

for certain positive constants V0 , I0 , and a function Φ(z) = γz³ − αz,
α, γ > 0, defines a system which is port-Hamiltonian but not in-
crementally port-Hamiltonian.

Example 9.3 (Sources). Physical systems with constant sources are


not port-Hamiltonian in the sense of a port-Hamiltonian system with-
out external ports, but typically are incrementally port-Hamiltonian.
Consider any nonlinear LC-circuit with passive resistors and constant
voltage and/or current sources, or any arbitrary mechanical system
with passive dampers and constant actuation: all are incrementally
port-Hamiltonian but not a port-Hamiltonian system without exter-
nal ports.
Note that physical systems with constant sources often can be
also modeled as shifted port-Hamiltonian systems, cf. Chapter 7. Fur-
thermore, in Maschke et al. (2000) it is discussed how systems with
constant sources can be represented as the interconnection of a port-
Hamiltonian system with a port-Hamiltonian source system (having a
linear Hamiltonian which is not bounded from below, and therefore
not a passive system).

Example 9.3 can be further generalized and formalized as follows.


Consider a port-Hamiltonian system of the form
    ẋ = J(x) ∂H/∂x (x) + gR (x)fR + g(x)fP ,
    eR = gRT (x) ∂H/∂x (x),                                          (9.7)
    eP = gT (x) ∂H/∂x (x),

where J(x) = −J T (x) and the port-variables (−fR , eR ) belong to a
maximal monotone relation MR , i.e., a maximal relation with the prop-
erty

    (eR1 − eR2 )T (fR1 − fR2 ) ≤ 0,                                  (9.8)

for all (fRi , eRi ) ∈ MR with i ∈ {1, 2}.


Clearly for any constant vector c the subspace

C := {(fP , eP ) | fP = c} (9.9)

is also a maximal monotone relation. It follows that the product


MR × C is again a maximal monotone relation, and that the sys-
tem (9.7) for any constant input fP = c is an incrementally port-
Hamiltonian system.

9.2 Connections with incremental and differential passivity

The notion of incrementally port-Hamiltonian systems is related to


incremental passivity and the recently introduced notion of differential
passivity. These relations are clear if the Hamiltonian is quadratic and
nonnegative, i.e., H(x) = 12 xT Qx for some symmetric positive definite
matrix Q. In this case, the inequality (9.6) is equivalent to

d 1
(x1 (t) − x2 (t))T Q(x1 (t) − x2 (t))
dt 2
≤ (e1P (t) − e2P (t))T (fP1 (t) − fP2 (t)).

This property readily implies that incrementally port-Hamiltonian


systems with nonnegative quadratic Hamiltonian as above are both
incrementally passive and differentially passive.
Indeed, recall Desoer & Vidyasagar (1975); Pavlov et al. (2004);
Angeli (2000) that a system ẋ = f (x, u), y = h(x, u) with x ∈ Rn , u, y ∈
Rm is called incrementally passive if there exists a nonnegative function
V : Rn × Rn → R such that
    d/dt V (x1 , x2 ) ≤ (u1 − u2 )T (y1 − y2 ),                      (9.10)

for all (xi , ui , yi ), i = 1, 2, satisfying ẋ = f (x, u), y = h(x, u). Taking
u = fP , y = eP , and

    V (x1 , x2 ) := ½ (x1 − x2 )T Q(x1 − x2 ),
we immediately obtain the following result.

Proposition 9.2. An incrementally port-Hamiltonian system with a


quadratic Hamiltonian H(x) = ½ xT Qx with Q ≥ 0 is incrementally
passive.

Recall furthermore from Forni & Sepulchre (2013); van der Schaft
(2013) the following definition of differential passivity.

Definition 9.2. Consider a nonlinear control system Σ with state


space X , affine in the inputs u, and with an equal number of outputs
y, given as
    ẋ = f (x) + Σ_{j=1}^{m} gj (x)uj ,
    yj = Hj (x),    j = 1, . . . , m.                                (9.11)

The variational system along any input-state-output trajectory t ∈


[0, T ] 7→ (x(t), u(t), y(t)) is given by the following time-varying sys-
tem,

    δẋ(t) = ∂f/∂x (x(t)) δx(t) + Σ_{j=1}^{m} ∂gj/∂x (x(t)) uj (t) δx(t)
             + Σ_{j=1}^{m} δuj gj (x(t)),                             (9.12)
    δyj (t) = ∂Hj/∂x (x(t)) δx(t),    j = 1, . . . , m,
with state δx ∈ Rn , where δu = (δu1 , . . . , δum ) denote the inputs of
the variational system and δy = (δy1 , . . . , δym ) the outputs. Then the
system (9.11) is called differentially passive if the system together with
all its variational systems is passive with respect to the supply rate

(δu)T δy, i.e., if there exists a function P : T X → R+ (called the differ-
ential storage function) satisfying

    d/dt P ≤ (δu)T δy,                                               (9.13)
for all x, u, δu.

The following proposition readily follows, cf. van der Schaft


(2013).

Proposition 9.3. An incrementally port-Hamiltonian system with


quadratic Hamiltonian H(x) = ½ xT Qx with Q ≥ 0 is differentially
passive.

Proof. Consider the infinitesimal version of (9.6). In fact, let (fP1 , e1P , x1 )
and (fP2 , e2P , x2 ) be two triples of system trajectories arbitrarily near
each other. Taking the limit we deduce from (9.6)

    (∂x)T ∂²H/∂x² (x) ∂ẋ ≤ (∂eP )T ∂fP ,                             (9.14)

where ∂x denotes the variational state, and ∂fP , ∂eP the variational
inputs and outputs. If the Hamiltonian H is a quadratic function
H(x) = ½ xT Qx, then the left-hand side of the inequality (9.14) is equal
to d/dt ½ (∂x)T Q ∂x, and hence (9.14) amounts to the differential dissipativity
inequality

    d/dt ½ (∂x)T Q ∂x ≤ (∂eP )T ∂fP ,                                 (9.15)

implying that the incrementally port-Hamiltonian system is differen-
tially passive, with differential storage function ½ (∂x)T Q ∂x.  □

9.3 Composition of maximal monotone relations

A cornerstone of port-Hamiltonian systems theory is the fact that the


power-conserving interconnection of port-Hamiltonian systems de-
fines again a port-Hamiltonian system, cf. Chapter 6. As we have seen
this is based on the fact that the composition of Dirac structures is
again a Dirac structure. In the present section we will show that the
same property holds for incrementally port-Hamiltonian systems, but

now based on the fact that the composition of maximal monotone re-
lations is again maximal monotone.
Consider two maximal monotone relations Ma ⊂ Fa × Fa∗ ×
Va × Va∗ , with typical element denoted by (fa , ea , va , wa ) and Mb ⊂
Fb × Fb∗ × Vb × Vb∗ , with typical element denoted by (fb , eb , vb , wb ),
where Va = Vb = V, and thus Va∗ = Vb∗ = V ∗ (shared flow and effort
variables). Define as before the composition of Ma and Mb , denoted as
Ma ◦ Mb , by
    Ma ◦ Mb := { (fa , ea , fb , eb ) ∈ Fa × Fa∗ × Fb × Fb∗ |
                 ∃v ∈ V, w ∈ V ∗ s.t. (fa , ea , v, w) ∈ Ma ,
                 (fb , eb , −v, w) ∈ Mb }.
Thus the composition of Ma and Mb is obtained by imposing on the
vectors (fa , ea , va , wa ) ∈ Ma and (fb , eb , vb , wb ) ∈ Mb the interconnec-
tion constraints
va = −vb , wa = wb , (9.16)
and looking at the resulting vectors (fa , ea , fb , eb ) ∈ Fa × Fa∗ × Fb × Fb∗ .
The main result of this section is that, whenever Ma ◦Mb satisfies a
technical condition, then the composition Ma ◦Mb is again a maximal
monotone relation. The key ingredient in the proof is the following
theorem from Rockafellar & Wets (1998) [Ex. 12.46].
Theorem 9.1. Let M ⊂ Fα × Fα∗ × Fβ × Fβ∗ be maximal monotone.
Assume that the reduced relation (with ēβ a constant vector)

    Mr := {(fα , eα ) | ∃fβ s.t. (fα , eα , fβ , ēβ ) ∈ M}            (9.17)

is such that there exists ēα for which (ēα , ēβ ) is in the relative interior
of the projection of M on the space of efforts {(eα , eβ )}. Then Mr is
maximal monotone.
This theorem can be applied to the situation at hand after applying
the following transformation. Define

    yv := (va + vb )/√2 ,    zv := (va − vb )/√2 ,    yv , zv ∈ V,
    yw := (wa + wb )/√2 ,    zw := (wa − wb )/√2 ,    yw , zw ∈ V ∗ .

By direct computation one obtains

    ⟨yw1 − yw2 | yv1 − yv2 ⟩ + ⟨zw1 − zw2 | zv1 − zv2 ⟩
         = ⟨wa1 − wa2 | va1 − va2 ⟩ + ⟨wb1 − wb2 | vb1 − vb2 ⟩.       (9.18)
Theorem 9.2. Let Ma ⊂ Fa ×Fa∗ ×Va ×Va∗ and Mb ⊂ Fb ×Fb∗ ×Vb ×Vb∗
be maximal monotone relations where Ma ◦ Mb is such that there
exist ēa , ēb for which (ēa , ēb , yv = 0, zw = 0) is in the relative interior
of the projection of the direct sum Ma ⊕Mb . Then Ma ◦Mb is maximal
monotone.
Proof. It is evident that the direct sum Ma ⊕ Mb ⊂ Fa × Fa∗ × Va ×
Va∗ × Fb × Fb∗ × Vb × Vb∗ of Ma and Mb defined by

    Ma ⊕ Mb = { (fa , ea , fb , eb , va , wa , vb , wb ) |
                (fa , ea , va , wa ) ∈ Ma , (fb , eb , vb , wb ) ∈ Mb }
is a maximal monotone relation. By (9.18) we deduce that Ma ⊕
Mb is also maximal monotone with respect to the coordinates
(fa , ea , fb , eb , yv , yw , zv , zw ). It now follows from Theorem 9.1 (with
eβ = (yv , zw )) that Ma ◦ Mb is maximal monotone. 

Remark 9.2. The same reasoning provides an alternative proof of the


fact that the composition of two Dirac structures is a Dirac structure.
Corollary 9.3. Consider two incrementally port-Hamiltonian sys-
tems Σa and Σb with external port variables respectively (fPa , eaP ) ∈
V × V ∗ and (fPb , ebP ) ∈ V × V ∗ , and with Hamiltonians respectively
Ha and Hb , interconnected by the power-conserving interconnection
constraints
fPa + fPb + fP = 0, eaP = ebP = eP .
Then the interconnected system is incrementally port-Hamiltonian
with total Hamiltonian Ha + Hb and external port variables fP , eP .
Corollary 9.4. Let the Dirac structure D be constant, and the resis-
tive structure R be maximal monotone. Then the composition D ◦ R is
maximal monotone. Hence any port-Hamiltonian system with a con-
stant Dirac structure and maximal monotone resistive structure R is
also incrementally port-Hamiltonian.
10
Input-output Hamiltonian systems

In this chapter we discuss the relationships between port-Hamiltonian


systems and input-output Hamiltonian systems, as initiated in the
groundbreaking paper Brockett (1977), and further explored in
e.g. van der Schaft (1984, 1982a,b); Crouch & van der Schaft (1987);
Nijmeijer & van der Schaft (1990). Furthermore, we will see that
in the linear case input-output Hamiltonian systems are
very close to systems with negative-imaginary transfer matrices
Lanzon & Petersen (2008, 2010) and in the nonlinear case to systems
with counterclockwise input-output dynamics Angeli (2006). This chap-
ter is partly based on van der Schaft (2011).

10.1 Input-output Hamiltonian systems with dissipation

Consider a standard input-state-output port-Hamiltonian system (for


simplicity without feedthrough term), cf. eqn. (4.10)

    ẋ = [J(x) − R(x)] ∂H/∂x (x) + g(x)u,    x ∈ X , u ∈ Rm ,
    y = gT (x) ∂H/∂x (x),    y ∈ Rm ,                                (10.1)


where the n × n matrices J(x), R(x) depend smoothly on x and satisfy

J(x) = −J T (x), R(x) = RT (x) ≥ 0. (10.2)

Now suppose that the input matrix g(x) satisfies an integrability condi-
tion1 , in the sense that there exists a mapping C : X → Rm such that2

    g(x) = −[J(x) − R(x)] ∂CT/∂x (x).                                (10.3)
Then the system equations can be rewritten as
    ẋ = [J(x) − R(x)] ( ∂H/∂x (x) − ∂CT/∂x (x) u ),
    y = ( ∂CT/∂x (x) )T [J(x) + R(x)] ∂H/∂x (x).

This suggests defining the new output vector z = C(x) ∈ Rm , leading


to the following system definition.

Definition 10.1. A system, with local coordinates x = (x1 , · · · , xn ) for


some n-dimensional state space manifold X , given by
    ẋ = [J(x) − R(x)] ( ∂H/∂x (x) − ∂CT/∂x (x) u ),    u ∈ Rm ,
    z = C(x),    z ∈ Rm ,                                            (10.4)

where the n × n matrices J(x) and R(x) depend smoothly on x and


satisfy
J(x) = −J T (x), R(x) = RT (x) ≥ 0, (10.5)
is called an affine input-output Hamiltonian system with dissipation
(briefly, affine IOHD system), with Hamiltonian H : X → R, and out-
put mapping C : X → Rm .
1. Notice that this amounts to assuming that each input vector field gj is a
Hamiltonian/gradient vector field with respect to the function Cj and the mixed
geometric structure defined by J(x) − R(x), sometimes called a Leibniz structure
Morrison (1986); Ortega & Planas-Bielsa (2004).
2. For a mapping C : Rn → Rm we denote by ∂CT/∂x (x) the n × m matrix whose
j-th column consists of the partial derivatives of the j-th component function Cj .

Remark 10.1. The definition of an IOHD system as given above is


a generalization of the definition of an affine input-output Hamilto-
nian system as originally proposed in Brockett (1977) and studied in
e.g. van der Schaft (1984, 1982a,b); Crouch & van der Schaft (1987).
In fact, it reduces to this definition in case R = 0 and J defines a sym-
plectic form (in particular, has full rank). The components of the output
mapping C : X → Rm are called interaction Hamiltonians.
The new output z can sometimes be regarded as an integrated ver-
sion of the old output y. In fact,

    ż = ( ∂CT/∂x (x) )T [J(x) − R(x)] ( ∂H/∂x (x) − ∂CT/∂x (x) u ),

and thus if additionally the following conditions are satisfied3


    ( ∂CT/∂x (x) )T R(x) = 0,     ( ∂CT/∂x (x) )T J(x) ∂CT/∂x (x) = 0,   (10.6)
then ż = y.
Even if the conditions (10.6) do not hold, and thus ż ≠ y, then ż is
still an output of a related input-state-output port-Hamiltonian sys-
tem. This follows from the following computation. Since
uT ( ∂CT/∂x (x) )T J(x) ∂CT/∂x (x) u = 0 for all u ∈ Rm by skew-symmetry
of J(x), the following identity holds along trajectories of (10.4) (leav-
ing out arguments x):

    d/dt H = uT ż − [ ∂H/∂x ; u ]T [ R , −R ∂CT/∂x ;
                     −(∂CT/∂x)T R , (∂CT/∂x)T R ∂CT/∂x ] [ ∂H/∂x ; u ].

Noting that

    [ R , −R ∂CT/∂x ; −(∂CT/∂x)T R , (∂CT/∂x)T R ∂CT/∂x ]
        = [ In ; −(∂CT/∂x)T ] R [ In , −∂CT/∂x ] ≥ 0,

this proves the following proposition.


3. Note that for m = 1 the second condition is always satisfied by skew-symmetry
of J(x).


Figure 10.1: Mass-spring system connected to a moving wall.

Proposition 10.1. The affine IOHD system (10.4) with output equa-
tion
    ỹ := ż = ( ∂CT/∂x (x) )T [J(x) − R(x)] ( ∂H/∂x (x) − ∂CT/∂x (x) u )
defines an input-state-output port-Hamiltonian system with
feedthrough term.

Example 10.1 (Mixed force and velocity inputs). Consider a mass-


spring system consisting of two masses (with masses m1 and m2 ) with
an intermediate spring (with spring constant k1 ). The right mass m2 is
linked by another spring (with spring constant k2 ) to a movable wall
with velocity v, while the left mass m1 is directly actuated by an ex-
ternal force F ; see Figure 10.1. The corresponding input-state-output
port-Hamiltonian system is given as
        
    [ q̇1 ; q̇2 ; ṗ1 ; ṗ2 ] = [ 0 , 0 , 1 , −1 ;
                               0 , 0 , 0 , 1 ;
                              −1 , 0 , 0 , 0 ;
                               1 , −1 , 0 , 0 ] [ k1 q1 ; k2 q2 ; p1 /m1 ; p2 /m2 ]
                             + [ 0 ; 1 ; 0 ; 0 ] v + [ 0 ; 0 ; 1 ; 0 ] F,
    y1 = k2 q2 ,
    y2 = p1 /m1 ,
with q1 and q2 the elongations of spring k1 and k2 respectively, and
p1 , p2 the momenta of the two masses.
This system can be integrated to an input-output Hamiltonian sys-
tem (10.4) by defining the outputs
z1 = C1 (q1 , q2 , p1 , p2 ) = −p1 − p2 ,    z2 = C2 (q1 , q2 , p1 , p2 ) = q1 + q2 .

Then the system with differentiated outputs ỹ1 = ż1 , ỹ2 = ż2 is a port-
Hamiltonian input-state-output system (with feedthrough term), but
having outputs ỹ1 , ỹ2 differing from the original outputs y1 , y2 . In fact,
    ỹ1 = ż1 = k2 q2 + F,     ỹ2 = ż2 = p1 /m1 + v.
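The integrability condition (10.3) can also be checked symbolically. The following sympy sketch (an added illustration, not part of the original text) verifies that, with R = 0 and the J matrix of the example above, the input columns for v and F are indeed of the form −J ∂CiT/∂x for the interaction Hamiltonians C1 = −p1 − p2 and C2 = q1 + q2:

import sympy as sp

q1, q2, p1, p2, m1, m2, k1, k2 = sp.symbols('q1 q2 p1 p2 m1 m2 k1 k2')
x = sp.Matrix([q1, q2, p1, p2])

J = sp.Matrix([[0, 0, 1, -1],
               [0, 0, 0, 1],
               [-1, 0, 0, 0],
               [1, -1, 0, 0]])         # skew-symmetric interconnection matrix
assert J.T == -J

C1 = -p1 - p2                           # interaction Hamiltonian for the input v
C2 = q1 + q2                            # interaction Hamiltonian for the input F
gradC1 = sp.Matrix([sp.diff(C1, xi) for xi in x])
gradC2 = sp.Matrix([sp.diff(C2, xi) for xi in x])

# condition (10.3) with R = 0: each input column equals -J * dCi^T/dx
print(-J * gradC1)    # -> (0, 1, 0, 0)^T, the input column of v
print(-J * gradC2)    # -> (0, 0, 1, 0)^T, the input column of F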
Example 10.2. Consider a linear system
    ẋ = Ax + Bu,    x ∈ Rn , u ∈ Rm ,
    y = Cx + Du,    y ∈ Rm ,                                         (10.7)
with transfer matrix G(s) = C(Is − A)−1 B + D. In Lanzon & Petersen
(2008, 2010) G(s) is called negative imaginary4 if the transfer matrix
H(s) := s(G(s) − D) is positive real and D = DT . In Angeli (2006) the
same notion (mostly for the case D = 0) was coined as counterclockwise
input-output dynamics.
In van der Schaft (2011) it has been shown that the system (10.7)
has negative imaginary transfer matrix if and only if it can be written
as
    ẋ = (J − R)(Qx − C T u),
    y = Cx + Du,    D = DT ,                                         (10.8)
for some matrices Q, J, R of appropriate dimensions satisfying
Q = QT , J = −J T , R = RT ≥ 0, (10.9)
with Q > 0. We conclude that a linear system (10.7) has negative imag-
inary transfer matrix if and only if it is a linear input-output Hamiltonian
system with dissipation (10.8) satisfying Q > 0.
A typical instance of a linear IOHD system is a linear mechani-
cal system with co-located position sensors and force actuators, repre-
sented in Hamiltonian state space form (with q denoting the position
vector and p the momentum vector) as
" # " #" #" # " #
q̇ 0n In K N q 0
= + T u,
ṗ −In 0n NT M −1 p L (10.10)
y = Lq.
4. The terminology ‘negative imaginary’ stems, similarly to ‘positive real’, from the
Nyquist plot interpretation for single-input single-output systems. For the precise
definition in the frequency domain we refer to Lanzon & Petersen (2008); Angeli
(2006).

Clearly (10.10) defines a linear IOHD system with Hamiltonian

    H(q, p) = ½ q T Kq + ½ pT M⁻¹ p + q T N p,                       (10.11)

where the first term is the total potential energy (with K the com-
pliance matrix), and the second term is the kinetic energy (with M
denoting the mass matrix). The term q T N p corresponds to possible
‘gyroscopic forces’.
The definition of an affine IOHD system suggests the following
generalization.
Definition 10.2. A (general) input-output Hamiltonian system with
dissipation (IOHD system) is defined as a system of the form
    ẋ = [J(x) − R(x)] ∂H/∂x (x, u),    u ∈ Rm ,
    z = − ∂H/∂u (x, u),    z ∈ Rm ,                                   (10.12)
for some function H(x, u), with R(x), J(x) satisfying (10.5).
Obviously, this definition reduces to Definition 10.1 by tak-
ing H(x, u) = H(x) − uT C(x). For R = 0 and J defining a
symplectic form the definition of a general IOHD system amounts
to the definition of an input-output Hamiltonian system given in
Brockett (1977), and explored in e.g. van der Schaft (1984, 1982a,b);
Nijmeijer & van der Schaft (1990).

10.1.1 Dc-gain of IOHD systems


Specializing the approach of Angeli (2007) to the Hamiltonian case
we can define the following notion of dc-gain for an IOHD system.
Consider a general IOHD system (10.12) with Hamiltonian H(x, u).
Assume that for any constant input ū there exists a unique x̄ such that
    ∂H/∂x (x̄, ū) = 0.                                                (10.13)

It follows that x̄ is an equilibrium of the system for u = ū. Define

    ȳ = − ∂H/∂u (x̄, ū).                                              (10.14)

Then, see e.g. Wall (1977), eqns. (10.13,10.14) define a Lagrangian


submanifold in the space of steady state outputs and inputs (ȳ, ū) ∈
Y × U . Assuming additionally that this Lagrangian submanifold can
be parametrized by the ū variables, then there exists (locally) a gener-
ating function K such that the relation between ū and ȳ is described
as
    ȳ = ∂K/∂ū (ū).                                                   (10.15)
We call this relation the static input-output response or dc-gain of the
IOHD system. Note that this dc-gain for IOHD systems enjoys an in-
trinsic symmetry property (reciprocity), and is solely determined by
the Hamiltonian function H(x, u).

Remark 10.2. In case of the linear IOHD system (10.8) the dc-gain
amounts to the symmetric linear map ȳ = (CQ⁻¹ C T + D) ū.
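A small numerical illustration (added here; the data are hypothetical): for a linear IOHD system (10.8) with Q > 0, the steady state for a constant input ū satisfies Qx̄ = C T ū, and the sketch below checks that the resulting dc-gain map matches CQ⁻¹C T + D and is symmetric.

import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2
# hypothetical linear IOHD data: Q > 0, J skew, R >= 0, D symmetric
Q = np.eye(n) + 0.3 * rng.standard_normal((n, n))
Q = 0.5 * (Q + Q.T) + n * np.eye(n)
A0 = rng.standard_normal((n, n)); J = 0.5 * (A0 - A0.T)
B0 = rng.standard_normal((n, n)); R = B0 @ B0.T
C = rng.standard_normal((m, n))
D = np.diag(rng.random(m))

u_bar = rng.standard_normal(m)
# steady state of (10.8): (J - R)(Q x - C^T u) = 0, here solved via Q x = C^T u
x_bar = np.linalg.solve(Q, C.T @ u_bar)
assert np.allclose((J - R) @ (Q @ x_bar - C.T @ u_bar), 0)

y_bar = C @ x_bar + D @ u_bar
dc_gain = C @ np.linalg.solve(Q, C.T) + D          # the map of Remark 10.2
assert np.allclose(y_bar, dc_gain @ u_bar)
assert np.allclose(dc_gain, dc_gain.T)             # reciprocity of the dc-gain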

10.2 Positive feedback interconnection and stability

We have seen before, cf. Chapter 6, that the basic interconnection prop-
erty of port-Hamiltonian systems is the fact that the power-conserving
interconnection of port-Hamiltonian systems again defines a port-
Hamiltonian system, with Dirac structure being the composition of
the Dirac structures of the composing port-Hamiltonian systems, and
Hamiltonian function and resistive structure being the ’sum’ of the re-
spective Hamiltonian functions and resistive structures. A particular
instance of a power-conserving interconnection is the standard nega-
tive feedback interconnection of two input-state-output systems given
as
u1 = −y2 + e1 , u2 = y1 + e2 ,

where e1 , e2 are new external inputs.


In this section we will discuss the interconnection theory for IOHD
systems, and show that IOHD systems are invariant under positive
feedback interconnection, leading to important consequences for sta-
bility analysis, cf. Section 10.2.1. First of all, consider two affine IOHD

systems
    ẋi = [Ji (xi ) − Ri (xi )] ( ∂Hi/∂xi (xi ) − ∂CiT/∂xi (xi ) ui ),    ui ∈ Rm ,
    yi = Ci (xi ),    yi ∈ Rm ,    i = 1, 2,                          (10.16)
interconnected by the positive feedback interconnection

u1 = y2 + e1 , u2 = y1 + e2 . (10.17)

The system resulting from interconnection is the affine IOHD system


" # " # " #!
ẋ1 J1 (x1 ) 0 R1 (x1 ) 0
= −
ẋ2 0 J2 (x2 ) 0 R2 (x2 )
    
∂Hint ∂C1T " #
 ∂x (x1 , x2 )   ∂x (x1 ) 0  e1 
 e  , (10.18)
×  1 − 1  
 ∂Hint   ∂C2T
(x1 , x2 ) 2
0 (x2 )
∂x2 ∂x2
" # " #
y1 C1 (x1 )
= ,
y2 C2 (x2 )
with interconnected Hamiltonian Hint given by

Hint (x1 , x2 ) := H1 (x1 ) + H2 (x2 ) − C1T (x1 )C2 (x2 ). (10.19)

We conclude that the interconnected Hamiltonian Hint results from


addition of the individual Hamiltonians H1 and H2 , together with an
additional coupling term −C1T (x1 )C2 (x2 ). On the other hand, the Pois-
son and dissipation structures of the interconnected system are just
the direct sum of the terms of the two composing subsystems. The
situation is thus opposite to negative feedback interconnection of port-
Hamiltonian systems: in this case the Hamiltonian is the sum of the
Hamiltonians while the Dirac structure is determined by the Dirac
structures of the two systems together with a coupling term.
This positive feedback interconnection property of affine IOHD
systems extends to general IOHD systems as follows. For simplicity
of notation take e1 = 0, e2 = 0 (no external inputs). The positive feed-
back interconnection of two general IOHD systems with Hamiltoni-
ans Hi (xi , ui ) results (under regularity assumptions) in a nonlinear

IOHD system, where the interconnected Hamiltonian Hint (x1 , x2 ) is


constructed as follows. The functions Hi (xi , ui ) are generating functions
for two Lagrangian submanifolds Abraham & Marsden (1994); Wall
(1977); van der Schaft (1984) defined as
    zi = ∂Hi /∂xi (xi , ui ),     yi = − ∂Hi /∂ui (xi , ui ),    i = 1, 2.
The composition of these two Lagrangian submanifolds through the
positive feedback interconnection u1 = y2 , u2 = y1 defines a sub-
set in the x1 , x2 , z1 , z2 variables, which is under a transversality con-
dition Guillemin & Sternberg (1979) again a submanifold, and in
fact Hörmander (1971); Guillemin & Sternberg (1979) is again a La-
grangian submanifold. Assuming additionally that this resulting La-
grangian submanifold can be parametrized by the x1 , x2 variables
(this corresponds to well-posedness of the interconnection), it thus
possesses (at least locally) a generating function Hint (x1 , x2 ).

10.2.1 Stability of interconnected IOHD systems


Stability analysis of the positive feedback interconnection of IOHD
systems is quite different from the stability analysis of the negative
feedback interconnection of port-Hamiltonian systems. This directly
stems from the fact that while for negative feedback interconnection
of two port-Hamiltonian systems the resulting Hamiltonian is just the
sum of the Hamiltonians of the two systems, the Hamiltonian of the
positive feedback interconnection of two IOHD systems is more com-
plicated, as explained above (Section 10.2). For clarity of exposition we
will restrict ourselves in this subsection to affine input-output Hamil-
tonian systems with dissipation.

Proposition 10.2. Consider two affine IOHD systems with equilibria x_1^*, x_2^* satisfying
\[
\frac{\partial H_1}{\partial x_1}(x_1^*) = 0, \quad \frac{\partial H_2}{\partial x_2}(x_2^*) = 0, \quad C_1(x_1^*) = 0, \quad C_2(x_2^*) = 0.
\]
Then (x_1^*, x_2^*) is a stable equilibrium of the interconnected affine IOHD system (10.18) if the interconnected Hamiltonian H_int given by (10.19) has a strict minimum at (x_1^*, x_2^*). A sufficient condition for this is that the Hessian matrices ∂^2 H_1/∂x_1^2(x_1^*) and ∂^2 H_2/∂x_2^2(x_2^*) are positive definite, and furthermore the following coupling condition holds on the linearized system:
\[
\lambda_{\max}\!\left[
\frac{\partial^T C_1}{\partial x_1}(x_1^*)\left(\frac{\partial^2 H_1}{\partial x_1^2}(x_1^*)\right)^{-1}\frac{\partial C_1^T}{\partial x_1}(x_1^*)
\times
\frac{\partial^T C_2}{\partial x_2}(x_2^*)\left(\frac{\partial^2 H_2}{\partial x_2^2}(x_2^*)\right)^{-1}\frac{\partial C_2^T}{\partial x_2}(x_2^*)
\right] < 1. \tag{10.20}
\]
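The coupling condition (10.20) is easy to evaluate numerically once the Hessians and output Jacobians at the equilibrium are available. The following sketch (with hypothetical matrices, not data from the text) computes the left-hand side of (10.20) with numpy:

```python
import numpy as np

# Hypothetical linearization data of two affine IOHD systems at (x1*, x2*):
# Hessians of the Hamiltonians (assumed positive definite) and output Jacobians dC_i/dx_i.
H1_hess = np.array([[2.0, 0.0], [0.0, 3.0]])
H2_hess = np.array([[1.5, 0.2], [0.2, 2.5]])
dC1 = np.array([[1.0, 0.0]])          # dC1/dx1, shape (m, n1)
dC2 = np.array([[0.5, 1.0]])          # dC2/dx2, shape (m, n2)

# Dc-gain-like matrices appearing in the coupling condition (10.20)
G1 = dC1 @ np.linalg.inv(H1_hess) @ dC1.T
G2 = dC2 @ np.linalg.inv(H2_hess) @ dC2.T

lam_max = np.max(np.linalg.eigvals(G1 @ G2).real)
print("lambda_max =", lam_max, "-> coupling condition satisfied:", lam_max < 1)
```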

Example 10.3. The dc-gain of the linear IOHD system (10.10) is given as LK^{-1}L^T, and thus only depends on the compliance matrix K (e.g., the spring constants) and the collocated sensor/actuator locations. Note that in this case positive feedback amounts to positive position feedback, while negative feedback of z = ẏ = LM^{-1}p = Lq̇ corresponds to negative velocity feedback; see also Lanzon & Petersen (2010).

Remark 10.3. As in (Angeli (2006), Theorem 6) the interconnected


Hamiltonian Hint (x1 , x2 ) can be also used for showing boundedness
of solutions of the interconnected system; this is e.g. guaranteed if
Hint (x1 , x2 ) is radially unbounded. Furthermore, it leads to a bifur-
cation perspective on multi-stability as follows. Consider two IOHD
systems with equilibria x∗1 , x∗2 corresponding to strict global minima of
H1 (x1 ), respectively H2 (x2 ). Then the parametrized positive feedback

u1 = ky2 , u2 = ky1 , (10.21)


for k ≥ 0 results in an interconnected Hamiltonian H^k_int(x_1, x_2), which for k small will have (by continuity) a strict minimum at (x_1^*, x_2^*), corresponding to a stable equilibrium. By increasing k the shape of H^k_int(x_1, x_2) is going to change, possibly resulting in multiple lo-
cal minima, and thus multiple stable equilibria. In a general, non-
Hamiltonian, setting this has been studied in Angeli (2007), where
conditions were derived for multi-stability of the resulting intercon-
nected system, meaning that for generic initial conditions the system
trajectories will always converge to one of those stable equilibria.
11
Pseudo-gradient representations

In this chapter, which is largely based on the work presented in


van der Schaft (2011), it will be demonstrated that a special subclass
of port-Hamiltonian systems, namely reciprocal port-Hamiltonian
systems,1 can be naturally related with systems of the form
\[
\begin{aligned}
Q(z)\dot{z} &= -\frac{\partial V}{\partial z}(z, u), \\
y &= -\frac{\partial V}{\partial u}(z, u),
\end{aligned} \tag{11.1}
\]
where z are local coordinates for some n-dimensional state space man-
ifold Z, V is a potential function, and the matrix Q(z) is a non-singular
symmetric matrix. In case Q(z) is definite, the system (11.1) defines
a gradient system and Q can be considered a Riemannian metric. In
case Q(z) is indefinite, gradient systems of the form (11.1) are com-
monly referred to as pseudo-gradient systems with respect to a pseudo-
Riemannian metric. Pseudo-gradient systems establish an important
class of nonlinear systems, especially for systems with nonlinear resis-
tive relationships. Furthermore, the pair (Q, V ) can be advantageously
1
Roughly speaking, reciprocal (port-Hamiltonian) systems are systems that do not
contain essential gyrators Breedveld (1984).


used as an alternative to generate a family of Lyapunov functions.

11.1 Towards the Brayton-Moser equations

Consider an input-state-output port-Hamiltonian system


\[
\begin{aligned}
\dot{x} &= \left[J(x) - R(x)\right]\frac{\partial H}{\partial x}(x) + g(x)u, \\
y &= g^T(x)\frac{\partial H}{\partial x}(x),
\end{aligned} \tag{11.2}
\]
with x ∈ X, interconnection structure J(x) = −J^T(x), resistive structure R(x) = R^T(x), and Hamiltonian H : X → R. Define the co-energy variables z := ∂H/∂x(x), and suppose that the mapping from the energy variables x to the co-energy variables z is invertible, such that
\[
x = \frac{\partial H^*}{\partial z}(z),
\]
where H^*(z) represents the co-Hamiltonian, defined through the Legendre transformation of H(x) given by H^*(z) = z^T x − H(x), where x is solved from z = ∂H/∂x(x). Then, the dynamics (11.2) can also be
expressed in terms of the co-energy variables z as
\[
\begin{aligned}
\frac{\partial^2 H^*}{\partial z^2}(z)\,\dot{z} &= \left[J(x) - R(x)\right]z + g(x)u, \\
y &= g^T(x)\,z.
\end{aligned} \tag{11.3}
\]
Now assume that there exist coordinates x_1 and x_2 such that
\[
J(x) = \begin{bmatrix} 0 & -B(x) \\ B^T(x) & 0 \end{bmatrix}, \quad
R(x) = \begin{bmatrix} R_1(x) & 0 \\ 0 & R_2(x) \end{bmatrix}, \quad
g(x) = \begin{bmatrix} g_1(x) \\ 0 \end{bmatrix},
\]
and that the Hamiltonian can be decomposed into
H(x1 , x2 ) = H1 (x1 ) + H2 (x2 ).
Then, the system in co-energy variables (11.3) takes the form
\[
\begin{aligned}
\begin{bmatrix} \frac{\partial^2 H_1^*}{\partial z_1^2}(z) & 0 \\[1mm] 0 & \frac{\partial^2 H_2^*}{\partial z_2^2}(z) \end{bmatrix}
\begin{bmatrix} \dot{z}_1 \\ \dot{z}_2 \end{bmatrix}
&= -\begin{bmatrix} R_1(x) & B(x) \\ -B^T(x) & R_2(x) \end{bmatrix}
\begin{bmatrix} z_1 \\ z_2 \end{bmatrix}
+ \begin{bmatrix} g_1(x) \\ 0 \end{bmatrix} u, \\
y &= g_1^T(x)\, z_1.
\end{aligned}
\]

Furthermore, assuming that there exist functions P1 (z1 ) and P2 (z2 )


such that
\[
R_1(x) z_1 = \frac{\partial P_1}{\partial z_1}(z_1), \qquad
-R_2(x) z_2 = \frac{\partial P_2}{\partial z_2}(z_2),
\]
we can define the potential function
P (z) = P1 (z1 ) + P2 (z2 ) + P12 (z1 , z2 ),
where P12 (z1 , z2 ) = z1T B(x)z2 . Consequently, the system (11.2) is
equivalent to the nonlinear pseudo-gradient system
\[
\begin{aligned}
\begin{bmatrix} \frac{\partial^2 H_1^*}{\partial z_1^2}(z) & 0 \\[1mm] 0 & -\frac{\partial^2 H_2^*}{\partial z_2^2}(z) \end{bmatrix}
\begin{bmatrix} \dot{z}_1 \\ \dot{z}_2 \end{bmatrix}
&= -\begin{bmatrix} \frac{\partial P}{\partial z_1}(z) \\[1mm] \frac{\partial P}{\partial z_2}(z) \end{bmatrix}
+ \begin{bmatrix} g_1(x) \\ 0 \end{bmatrix} u, \\
y &= g_1^T(x)\, z_1,
\end{aligned} \tag{11.4}
\]
which, if g1 is constant, is equivalent to (11.1) by noting that
V (z, u) = P (z) − z1T g1 u,
and defining the pseudo-Riemannian metric
\[
Q(z) = \begin{bmatrix} \frac{\partial^2 H_1^*}{\partial z_1^2}(z) & 0 \\[1mm] 0 & -\frac{\partial^2 H_2^*}{\partial z_2^2}(z) \end{bmatrix}. \tag{11.5}
\]
Gradient systems of the form (11.4) are known as the Brayton-
Moser equations, which were originally derived for nonlinear electrical RLC circuits in the early sixties Brayton & Moser (1964a,b); see also Smale (1972). The function P, which, in case the Hamiltonian represents the total stored energy, has the units of power, is commonly
referred to as the mixed-potential function due to the different nature of
the potentials P1 , P2 , and P12 . Indeed, decomposing the dynamics of
the system into two subsystems Σ1 and Σ2 , associated to the dynamics
of z1 and z2 , the potentials P1 and P2 represent the resistive relation-
ships in Σ1 and Σ2 , respectively, whereas the potential P12 represents

the (instantaneous) power flow from Σ1 to Σ2 . In the context of electri-


cal RLC circuits, the potential associated to the current-controlled re-
sistors is often denoted as the content, whereas the potential that is as-
sociated to the voltage-controlled resistors is denoted as the co-content
as introduced by Millar in the early fifties; see Jeltsema & Scherpen
(2009) and the references therein.
Example 11.1. Consider the DC motor of Example 2.5. Clearly the
Hamiltonian is composed of the energy storage in the electrical and
the mechanical parts of the system. Hence, application of the Legendre
transformation yields the co-energy
\[
H^*(I,\omega) = \tfrac{1}{2} L I^2 + \tfrac{1}{2} J \omega^2,
\]
with the co-energy variables I = ϕ/L and ω = p/J. Furthermore, since
\[
J = \begin{bmatrix} 0 & -K \\ K & 0 \end{bmatrix}, \quad
R = \begin{bmatrix} R & 0 \\ 0 & b \end{bmatrix}, \quad
g = \begin{bmatrix} 1 \\ 0 \end{bmatrix},
\]
we readily obtain
\[
P(I,\omega) = \tfrac{1}{2} R I^2 - \tfrac{1}{2} b \omega^2 + K \omega I,
\]
yielding the Brayton-Moser equations
\[
\begin{aligned}
\begin{bmatrix} L & 0 \\ 0 & -J \end{bmatrix}
\begin{bmatrix} \dot{I} \\ \dot{\omega} \end{bmatrix}
&= -\begin{bmatrix} R I + K\omega \\ -b\omega + K I \end{bmatrix}
+ \begin{bmatrix} 1 \\ 0 \end{bmatrix} u \qquad (u = V), \\
y &= I.
\end{aligned}
\]
(Compare with (2.30).)
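As an illustration only, here is a minimal forward-Euler simulation of these Brayton-Moser equations; all parameter values (L, J, R, b, K and the applied voltage V) are hypothetical and merely serve to show the structure Q(z)ż = −∂P/∂z(z) + g u in code:

```python
import numpy as np

# Hypothetical DC-motor parameters (not taken from the text)
L_ind, J_in, R_res, b_fric, K_emf, V_in = 0.01, 0.05, 1.0, 0.1, 0.2, 12.0

Q = np.diag([L_ind, -J_in])                    # pseudo-Riemannian metric (indefinite)

def grad_P(z):
    I, w = z                                   # co-energy variables: current, angular velocity
    return np.array([R_res * I + K_emf * w,    # dP/dI
                     -b_fric * w + K_emf * I]) # dP/dw

z = np.array([0.0, 0.0])
dt = 1e-4
for _ in range(200_000):                       # simulate 20 s
    zdot = np.linalg.solve(Q, -grad_P(z) + np.array([V_in, 0.0]))
    z = z + dt * zdot

print("steady state (I, omega):", z)
# At equilibrium: R*I + K*w = V and K*I = b*w, as read off from the mixed potential.
```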
On the other hand, if the Hamiltonian can not be decomposed as
H(x) = H1 (x1 ) + H2 (x2 ), we obtain instead of (11.4)
\[
\begin{aligned}
\begin{bmatrix} \frac{\partial^2 H^*}{\partial z_1^2}(z) & \frac{\partial^2 H^*}{\partial z_1 \partial z_2}(z) \\[1mm] -\frac{\partial^2 H^*}{\partial z_2 \partial z_1}(z) & -\frac{\partial^2 H^*}{\partial z_2^2}(z) \end{bmatrix}
\begin{bmatrix} \dot{z}_1 \\ \dot{z}_2 \end{bmatrix}
&= -\begin{bmatrix} \frac{\partial P}{\partial z_1}(z) \\[1mm] \frac{\partial P}{\partial z_2}(z) \end{bmatrix}
+ \begin{bmatrix} g_1(x) \\ 0 \end{bmatrix} u, \\
y &= g_1^T(x)\, z_1.
\end{aligned}
\]

However, in this case

\[
Q(z) = \begin{bmatrix} \frac{\partial^2 H^*}{\partial z_1^2}(z) & \frac{\partial^2 H^*}{\partial z_1 \partial z_2}(z) \\[1mm] -\frac{\partial^2 H^*}{\partial z_2 \partial z_1}(z) & -\frac{\partial^2 H^*}{\partial z_2^2}(z) \end{bmatrix}
\]

is not a symmetric matrix anymore, and therefore does not represent


a pseudo-Riemannian metric.

11.2 Geometry of the Brayton-Moser equations

Instrumental for the existence of the relationship between (11.2) and


(11.4) is that the pseudo-Riemannian metric Q is a Hessian with re-
spect to the Legendre transformation of the Hamiltonian function.

Definition 11.1. A pseudo-Riemannian metric defined by the non-


singular matrix Q(z) is said to be Hessian if there exists a function K such that the (i, j)-th element Q_ij(z) of the matrix Q(z) is given as
\[
Q_{ij}(z) = \frac{\partial^2 K}{\partial z_i \partial z_j}(z),
\]

for i, j = 1, . . . , n.

A necessary and sufficient condition for the (local) existence of


such a function K(z) is the integrability condition, cf. Duistermaat (2001),
\[
\frac{\partial Q_{jk}}{\partial z_i}(z) = \frac{\partial Q_{ik}}{\partial z_j}(z),
\]
for i, j, k = 1, . . . , n. Note that the pseudo-Riemannian metric (11.5) is
indeed Hessian with respect to the function K(z) = H1∗ (z1 ) − H2∗ (z2 ).
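As a quick sanity check, the integrability condition can be verified numerically by finite differences for a candidate metric Q(z); the co-Hamiltonians used below are hypothetical and only illustrate the test:

```python
import numpy as np

# Candidate metric for n = 2: Q(z) = diag(d^2H1*/dz1^2, -d^2H2*/dz2^2),
# with hypothetical co-Hamiltonians H1*(z1) = cosh(z1) and H2*(z2) = z2**4 / 12.
def Q(z):
    z1, z2 = z
    return np.array([[np.cosh(z1), 0.0],
                     [0.0, -z2**2]])

def integrability_residual(z, h=1e-5):
    """Max over i, j, k of | dQ_jk/dz_i - dQ_ik/dz_j | at the point z."""
    n = len(z)
    res = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                ei, ej = np.eye(n)[i], np.eye(n)[j]
                dQjk_dzi = (Q(z + h*ei)[j, k] - Q(z - h*ei)[j, k]) / (2*h)
                dQik_dzj = (Q(z + h*ej)[i, k] - Q(z - h*ej)[i, k]) / (2*h)
                res = max(res, abs(dQjk_dzi - dQik_dzj))
    return res

print(integrability_residual(np.array([0.3, -1.2])))  # ~0: this Q is Hessian, with K = H1* - H2*
```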
As for port-Hamiltonian systems, the Brayton-Moser (BM) equa-
tions (11.4) can be also represented in the formalism of Dirac
structures, using a non-canonical Dirac structure representation
Blankenstein (2005). Indeed, let us for ease of presentation assume
that g1 is constant and consider the following non-canonical Dirac

structure


\[
\mathcal{D}_{BM} = \Big\{ (f_S, e_S, f_P, e_P) \in \mathcal{F}_S \times \mathcal{F}_S^* \times \mathcal{F}_P \times \mathcal{F}_P^* \;\Big|\;
Q(z) f_S = -e_S + \begin{bmatrix} g_1 \\ 0 \end{bmatrix} f_P, \;\;
e_P = \begin{bmatrix} g_1^T & 0 \end{bmatrix} f_S \Big\},
\]
defined with respect to the bilinear form

\[
\ll (f_S^1, e_S^1, f_P^1, e_P^1), (f_S^2, e_S^2, f_P^2, e_P^2) \gg \;=\;
e_S^{1\,T} f_S^2 + e_S^{2\,T} f_S^1 + e_P^{1\,T} f_P^2 + e_P^{2\,T} f_P^1 + f_S^{1\,T} \left( Q(z) + Q^T(z) \right) f_S^2 .
\]

Then, the Brayton-Moser equations can be described as a dynamical


system with respect to the non-canonical Dirac structure DBM by set-
ting the flow variables as the rate of change of the co-energy variables
(z1 , z2 ), i.e., fS = −(ż1 , ż2 ), the effort variables as
 
\[
e_S = \left( \frac{\partial P}{\partial z_1}(z), \frac{\partial P}{\partial z_2}(z) \right),
\]
and the input port variables fP = u. Notice that the flow and effort
variables are conjugated in the sense that
\[
\frac{d}{dt} P = \frac{\partial^T P}{\partial z_1}(z)\,\dot{z}_1 + \frac{\partial^T P}{\partial z_2}(z)\,\dot{z}_2 .
\]
Furthermore, we observe that the natural output for the Brayton-
Moser equations (11.4) is given by eP = g1T z1 , and eTP fP has the units
of power. However, the port variables (e_P, f_P) are not conjugated with respect to dP/dt, which has the units of power per second. To this end, we redefine the output port variables as e′_P = g_1^T ż_1, so that the dynamics of
the Brayton-Moser equations (11.4) can equivalently be specified by
 
\[
\left( -\dot{z}_1, -\dot{z}_2, \frac{\partial P}{\partial z_1}(z), \frac{\partial P}{\partial z_2}(z), f_P, e'_P \right) \in \mathcal{D}_{BM}.
\]
Note that DBM satisfies a power-like balance equation
\[
\frac{d}{dt} P = f_P^T e'_P - \dot{z}^T \left( Q(z) + Q^T(z) \right) \dot{z}. \tag{11.6}
\]
The latter shows that, in general, since Q is indefinite, P is not con-
served.

11.3 Interconnection of gradient systems

Consider two (pseudo-)gradient systems


\[
\begin{aligned}
Q_j(z_j)\dot{z}_j &= -\frac{\partial P_j}{\partial z_j}(z_j) + \frac{\partial^T h_j}{\partial z_j}(z_j)\,u_j, \\
y_j &= h_j(z_j), \quad j = 1, 2,
\end{aligned}
\]
and interconnect them via the standard negative feedback intercon-
nection u1 = −y2 and u2 = y1 . Then, the interconnected system is
again a (pseudo-)gradient system with (pseudo-)Riemannian metric
Q1 ⊕ Q2 and mixed-potential function
P1 (z1 ) − P2 (z2 ) + hT1 (z1 )h2 (z2 ).
Example 11.2. Consider a fluid tank with capacity C1 and pressure
drop p1 . If the flow rate of the fluid flowing out of the tank is denoted
by q1 , then the fluid dynamics is described by C1 ṗ1 = q1 and y1 = p1 .
Hence, the associated mixed-potential function equals P1 = 0. Sup-
pose that a long pipe, with fluid inertia L2 and fluid resistance R2 , is
connected to the tank as shown in Figure 11.1. If the flow rate in the
pipe is denoted as q2 and the pressure at the outlet is denoted as p2 ,
the dynamics of the pipe take the form
\[
L_2 \dot{q}_2 = -\frac{\partial P_2}{\partial q_2}(q_2) + p_2, \qquad y_2 = q_2,
\]
where P_2(q_2) = ½ R_2 q_2². The overall system is described by setting q_1 = −q_2 and p_2 = p_1, yielding a mixed-potential P(p_1, q_2) = −½ R_2 q_2² + p_1 q_2, and a pseudo-Riemannian metric Q = diag(C_1, −L_2).
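A minimal simulation sketch of this interconnected pseudo-gradient system (with hypothetical values for C1, L2 and R2) is given below; it integrates Q ẋ = −∂P/∂x with Q = diag(C1, −L2):

```python
import numpy as np

# Hypothetical parameters for the tank-pipe example (values not from the text)
C1, L2, R2 = 2.0, 0.5, 1.0

Q = np.diag([C1, -L2])                 # pseudo-Riemannian metric of the interconnection

def grad_P(x):
    p1, q2 = x                         # tank pressure and pipe flow
    # mixed potential P(p1, q2) = -1/2 R2 q2^2 + p1 q2
    return np.array([q2, -R2 * q2 + p1])

x = np.array([1.0, 0.0])               # initial pressure, zero flow
dt = 1e-3
for _ in range(20_000):
    x = x + dt * np.linalg.solve(Q, -grad_P(x))

print("final (p1, q2):", x)            # pressure and flow decay as the tank empties through the pipe
```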

11.4 Generation of power-based Lyapunov functions

We have seen above that the Brayton-Moser equations (11.4)


satisfy a power-like balance equation (11.6). However, we can-
not establish a dissipation-like inequality since the matrix Q is,
in general, indefinite. Furthermore, to obtain the passivity prop-
erty an additional difficulty stems from the fact that the mixed-
potential P is also not sign definite. To overcome these difficulties,


Figure 11.1: Interconnection of two gradient systems.

in Brayton & Moser (1964a,b); Ortega et al. (2003); Jeltsema et al.


(2003); Jeltsema & Scherpen (2005) sufficient conditions have been
given under which the equations (11.4) can be equivalently written
as
\[
\tilde{Q}(z)\dot{z} = -\frac{\partial \tilde{P}}{\partial z}(z) + g(z)u, \tag{11.7}
\]
for some new admissible pair (Q̃, P̃), satisfying Q̃(z) + Q̃^T(z) ≥ 0 and P̃(z) ≥ 0 for all z. Under these conditions it is clear that d/dt P̃ ≤ u^T y′, i.e., the system defines a passive system with respect to the port variables (u, y′) and storage function P̃, with output variables y′ = g_1^T ż_1. From the stability properties of a passive system (see Chapter 7), we know that if z∗ is a local minimum of P̃, then it is a stable equilibrium of (11.7) when u ≡ 0.²

2
Note that if we would start from V (z, u) = P (z) − z1T g1 u, (constant) non-zero
inputs can naturally be taken into account in the Lyapunov analysis by generating an
admissible pair (Q̃, Ṽ ), with Ṽ satisfying Ṽ ≥ 0 for all z.
12
Port-Hamiltonian systems on graphs

In this chapter we will see how the incidence structure of a directed


graph provides a natural Poisson structure on the product of two
spaces of flow and effort variables, namely those associated to the ver-
tices (nodes) of the graph, and those associated to the edges (branches,
links). In this way, many examples of network dynamics can be natu-
rally modeled as port-Hamiltonian systems on graphs.
The Poisson structure resulting from the incidence matrix of the
graph can be interpreted as a combination of two sets of conservation
or balance laws. For example, in the case of a mass-spring-damper
system one set of conservation laws corresponds to momentum bal-
ance while the other set corresponds to a continuity equation. In this
sense the theory of port-Hamiltonian systems on graphs can be re-
garded as a discrete analog of the theory of distributed-parameter
port-Hamiltonian systems with respect to a Stokes-Dirac structure as
treated in Chapter 14.
This chapter is largely based on van der Schaft & Maschke (2013),
to which we refer for further details and extensions.


12.1 Background on graphs

A directed graph G = (V, E) consists of a finite set V of vertices and a


finite set E of directed edges, together with a mapping from E to the
set of ordered pairs of V, where no self-loops are allowed. Thus to any
edge e ∈ E there corresponds an ordered pair (v, w) ∈ V × V (with v ≠ w), representing the tail vertex v and the head vertex w of this
edge.
A directed graph is completely specified by its incidence matrix B,
which is an N × M matrix, N being the number of vertices and M
being the number of edges, with (i, j)-th element equal to 1 if the j-
th edge is an edge towards vertex i, equal to −1 if the j-th edge is
an edge originating from vertex i, and 0 otherwise. It immediately
follows that 1T B = 0 for any incidence matrix B, where 1 is the vector
consisting of all ones. A directed graph is called connected if between
any two vertices there exists a path (a sequence of undirected edges)
linking the two vertices. A directed graph is connected if and only if
ker B T = span 1; see e.g. Bollobas (1998). Since we will only consider
directed graphs in the sequel ’graph’ will throughout mean ’directed
graph’.
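As a small illustration (the three-vertex graph below is hypothetical and not taken from the text), the incidence matrix, the identity 1^T B = 0, and the connectedness test can be set up as follows:

```python
import numpy as np

# Directed graph with vertices {1,2,3} and edges e1: 1->2, e2: 2->3 (hypothetical example).
# B[i, j] = +1 if edge j points towards vertex i, -1 if it originates from vertex i, 0 otherwise.
B = np.array([[-1,  0],
              [ 1, -1],
              [ 0,  1]])

ones = np.ones(B.shape[0])
print("1^T B =", ones @ B)                       # always the zero row vector

# Connectedness: ker B^T = span{1}  <=>  rank(B) = N - 1
N = B.shape[0]
print("connected:", np.linalg.matrix_rank(B) == N - 1)
```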
Given a graph, we define its vertex space Λ0 as the vector space of
all functions from V to some linear space R. In the examples, R will
be mostly R = R in which case Λ0 can be identified with RN . Further-
more, we define the edge space Λ1 as the vector space of all functions
from E to R. Again, if R = R then Λ1 can be identified with RM .
The dual spaces of Λ_0 and Λ_1 will be denoted by Λ^0, respectively by Λ^1. The duality pairing between f ∈ Λ_0 and e ∈ Λ^0 is given as
\[
\langle f \mid e \rangle \;=\; \sum_{v \in \mathcal{V}} \langle f(v) \mid e(v) \rangle,
\]
where ⟨· | ·⟩ on the right-hand side denotes the duality pairing between R and R^*, and a similar expression holds for f ∈ Λ_1 and e ∈ Λ^1 (with summation over the edges).
The incidence matrix B of the graph induces a linear map B̂ from
the edge space to the vertex space as follows. Define B̂ : Λ1 → Λ0 as
the linear map with matrix representation B ⊗ I, where I : R → R is
the identity map and ⊗ denotes the Kronecker product. B̂ will be called
the incidence operator. For R = R the incidence operator reduces to the
linear map given by the matrix B itself, in which case we will through-
out use B both for the incidence matrix and for the incidence operator. The
adjoint map of B̂ is denoted as
B̂^∗ : Λ^0 → Λ^1,
and is called the co-incidence operator. For R = R3 the co-incidence
operator is given by B T ⊗I3 , while for R = R the co-incidence operator
is simply given by the transposed matrix B T , and we will throughout
use B T both for the co-incidence matrix and for the co-incidence operator.
An open graph G is obtained from an ordinary graph with set of
vertices V by identifying a subset Vb ⊂ V of Nb boundary vertices.
The interpretation of Vb is that these are the vertices that are open to
interconnection (i.e., with other open graphs). The remaining subset
Vi := V − Vb are the Ni internal vertices of the open graph.
The splitting of the vertices into internal and boundary vertices
induces a splitting of the vertex space and its dual, given as
Λ_0 = Λ_{0i} ⊕ Λ_{0b} , Λ^0 = Λ^{0i} ⊕ Λ^{0b} ,
where Λ0i is the vertex space corresponding to the internal vertices
and Λ0b the vertex space corresponding to the boundary vertices. Con-
sequently, the incidence operator B̂ : Λ1 → Λ0 splits as
B̂ = B̂i ⊕ B̂b ,
with B̂i : Λ1 → Λ0i and B̂b : Λ1 → Λ0b . For R = R we will simply
write
\[
B = \begin{bmatrix} B_i \\ B_b \end{bmatrix}.
\]
Furthermore, we will define the boundary space Λb as the linear space
of all functions from the set of boundary vertices Vb to the linear space
R. Note that the boundary space Λb is equal to the linear space Λ0b ,
and that the linear mapping B̂b can be also regarded as a mapping
B̂_b : Λ_1 → Λ_b, called the boundary incidence operator. The dual space of Λ_b will be denoted as Λ^b. The elements f_b ∈ Λ_b are called the boundary flows and the elements e_b ∈ Λ^b the boundary efforts.

12.2 Mass-spring-damper systems

The basic way of modeling a mass-spring-damper system as a port-


Hamiltonian system on a graph is to associate the masses to the vertices,
and the springs and dampers to the edges of the graph; see Fig. 12.1.

Figure 12.1: (a) Mass-spring-damper system; (b) the corresponding graph.

For clarity of exposition we will start with the separate treatment


of mass-spring (Section 12.2.1) and mass-damper (Section 12.2.2) sys-
tems, before their merging in Section 12.2.3.

12.2.1 Mass-spring systems


Consider a graph G with N vertices (masses) and M edges (springs),
specified by an incidence matrix B. First consider the situation that the
mass-spring system is located in one-dimensional space R = R, and
the springs are scalar. A vector in the vertex space Λ0 then corresponds
to the vector p of the scalar momenta of all N masses, i.e., p ∈ Λ0 =
RN . Furthermore, a vector in the dual edge space Λ1 will correspond

to the total vector q of elongations of all M springs, i.e., q ∈ Λ1 = RM .


The next ingredient is the definition of the Hamiltonian H : Λ1 × Λ0 → R, which typically splits into a sum of the kinetic and potential energies of each mass and spring. In the absence of boundary vertices the dynamics of the mass-spring system is then described as the port-Hamiltonian system
\[
\begin{bmatrix} \dot{q} \\ \dot{p} \end{bmatrix} =
\begin{bmatrix} 0 & B^T \\ -B & 0 \end{bmatrix}
\begin{bmatrix} \frac{\partial H}{\partial q}(q,p) \\[1mm] \frac{\partial H}{\partial p}(q,p) \end{bmatrix} \tag{12.1}
\]

defined with respect to the Poisson structure on the state space Λ1 ×Λ0
given by the skew-symmetric matrix
" #
0 BT
J := . (12.2)
−B 0

The inclusion of boundary vertices, and thereby of external interac-


tion, can be done in different ways. The first option is to associate
boundary masses to the boundary vertices. We are then led to the port-
Hamiltonian system

\[
\begin{aligned}
\dot{q} &= B^T \frac{\partial H}{\partial p}(q,p), \\
\dot{p} &= -B \frac{\partial H}{\partial q}(q,p) + E f_b, \\
e_b &= E^T \frac{\partial H}{\partial p}(q,p).
\end{aligned} \tag{12.3}
\]

Here E is a matrix with as many columns as there are boundary ver-


tices; each column consists of zeros except for exactly one 1 in the row
corresponding to the associated boundary vertex. fb ∈ Λb are the ex-
ternal forces exerted (by the environment) on the boundary masses,
and eb ∈ Λb are the velocities of these boundary masses.
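For concreteness, the following sketch integrates (12.3) for a hypothetical chain of two masses coupled by one spring, with the external force acting on the first (boundary) mass:

```python
import numpy as np

# One spring (edge) between two masses (vertices): incidence matrix of the line graph
B = np.array([[-1.0],
              [ 1.0]])
k = 10.0                      # spring constant
m = np.array([1.0, 2.0])      # masses
E = np.array([[1.0], [0.0]])  # the external force enters at vertex 1 (boundary mass)

def dH_dq(q): return k * q    # H = 1/2 k q^2 + sum_i p_i^2 / (2 m_i)
def dH_dp(p): return p / m

q = np.array([0.0]); p = np.zeros(2)
fb = np.array([1.0])          # constant external force
dt, T = 1e-3, 10.0
for _ in range(int(T / dt)):
    qdot = B.T @ dH_dp(p)
    pdot = -B @ dH_dq(q) + E @ fb
    q, p = q + dt * qdot, p + dt * pdot

eb = E.T @ dH_dp(p)           # velocity of the boundary mass
print("q =", q, " p =", p, " eb =", eb)
```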
Another possibility is to regard the boundary vertices as being
massless. In this case we obtain the port-Hamiltonian system (with pi
denoting the vector of momenta of the masses associated to the inter-

nal vertices)
\[
\begin{aligned}
\dot{q} &= B_i^T \frac{\partial H}{\partial p_i}(q, p_i) + B_b^T e_b, \\
\dot{p}_i &= -B_i \frac{\partial H}{\partial q}(q, p_i), \\
f_b &= B_b \frac{\partial H}{\partial q}(q, p_i),
\end{aligned} \tag{12.4}
\]

with eb ∈ Λb the velocities of the massless boundary vertices, and


fb ∈ Λb the forces at the boundary vertices as experienced by the en-
vironment. Note that in this latter case the external velocities eb of the
boundary vertices can be considered to be inputs to the system and
the forces fb to be outputs; in contrast to the previously considered
case (boundary vertices corresponding to boundary masses), where
the forces fb are inputs and the velocities eb the outputs of the sys-
tem1 .
The above formulation of mass-spring systems in R = R directly
extends to R = R3 by using the incidence operator B̂ = B ⊗ I3 as de-
fined before. Finally, we remark that in the above treatment we have
considered springs with arbitrary elongation vectors q ∈ Λ1 . For ordi-
nary springs the vector q of elongations is given as q = B T qc , where
qc ∈ Λ0 denotes the vector of positions of the vertices. Hence in this
case q ∈ im B T ⊂ Λ1 . Note that the subspace im B T × Λ0 ⊂ Λ1 × Λ0 is
an invariant subspace with regard to the dynamics (12.3) or (12.4). We
will return to this in Section 12.6.

12.2.2 Mass-damper systems

Replacing springs by dampers leads to mass-damper systems. In the


case of massless boundary vertices this yields the following2 equa-

1
One can also consider the hybrid case where some of the boundary vertices are
associated to masses while the remaining ones are massless.
2
Note that these equations follow from (12.4) by replacing −q̇ by e_1 and ∂H/∂q(q, p) by f_1.

tions
\[
\begin{aligned}
B_i f_1 &= -\dot{p}_i, \\
B_b f_1 &= f_b, \\
e_1 &= -B_i^T \frac{\partial H}{\partial p_i}(p_i) - B_b^T e_b,
\end{aligned} \tag{12.5}
\]
where f_1, e_1 are the flows and efforts corresponding to the dampers (damping forces, respectively, velocities). For linear dampers f_1 = −Re_1, where R is the positive diagonal matrix with the damping constants on its diagonal. Substitution into (12.5) then yields the port-Hamiltonian system
\[
\begin{aligned}
\dot{p}_i &= -B_i R B_i^T \frac{\partial H}{\partial p_i}(p_i) - B_i R B_b^T e_b, \\
f_b &= B_b R B_i^T \frac{\partial H}{\partial p_i}(p_i) + B_b R B_b^T e_b,
\end{aligned} \tag{12.6}
\]

where, as before, the inputs eb are the boundary velocities and fb are
the forces as experienced at the massless boundary vertices. Note that
the matrix
\[
L := \begin{bmatrix} B_i \\ B_b \end{bmatrix} R \begin{bmatrix} B_i^T & B_b^T \end{bmatrix} = B R B^T
\]
is the weighted Laplacian matrix of the graph G (with weights given by
the diagonal elements of R). It is well-known Bollobas (1998) that for
a connected graph the matrix L has exactly one eigenvalue 0, with
eigenvector 1, while all other eigenvalues are positive.
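This spectral property is easily verified numerically; the sketch below builds the weighted Laplacian B R B^T of a hypothetical connected graph and checks that 0 is a simple eigenvalue with eigenvector 1:

```python
import numpy as np

# Hypothetical connected graph: 3 vertices, edges 1->2, 2->3, 1->3
B = np.array([[-1,  0, -1],
              [ 1, -1,  0],
              [ 0,  1,  1]], dtype=float)
R = np.diag([0.5, 2.0, 1.0])          # positive damping constants (edge weights)

Lap = B @ R @ B.T                      # weighted Laplacian
eigvals, eigvecs = np.linalg.eigh(Lap)
print("eigenvalues:", np.round(eigvals, 6))        # one zero eigenvalue, the rest positive
print("kernel vector ~ 1:", eigvecs[:, 0] / eigvecs[0, 0])
```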

12.2.3 Mass-spring-damper systems


For a mass-spring-damper system the edges will correspond partly to
springs, and partly to dampers. Thus a mass-spring-damper system
is described by a graph G(V, Es ∪ Ed ), where the vertices in V corre-
spond to the masses, the edges in Es to the springs, and the edges in Ed
to the dampers of the system. This corresponds to an incidence matrix B = [B_s  B_d], where the columns of B_s reflect the spring edges
and the columns of Bd the damper edges. For the case without bound-
ary vertices the dynamics of such a mass-spring-damper system with

linear dampers takes the form
\[
\begin{bmatrix} \dot{q} \\ \dot{p} \end{bmatrix} =
\begin{bmatrix} 0 & B_s^T \\ -B_s & -B_d R B_d^T \end{bmatrix}
\begin{bmatrix} \frac{\partial H}{\partial q}(q,p) \\[1mm] \frac{\partial H}{\partial p}(q,p) \end{bmatrix}. \tag{12.7}
\]
In the presence of boundary vertices we may distinguish, as above,
between massless boundary vertices, with inputs being the boundary
velocities and outputs the boundary (reaction) forces, and boundary
masses, in which case the inputs are the external forces and the outputs
the velocities of the boundary masses.

Remark 12.1. The above formulation of mass-spring-damper sys-


tems with R equal to R or R3 can be extended to spatial mechanisms,
that is, networks of rigid bodies in R3 related by joints. In this case,
the linear space R is given by R := se∗ (3), the dual of the Lie alge-
bra of the Lie group SE(3) describing the position of a rigid body
in R3 . A spatial mechanism (or multibody system) is a mechanical sys-
tem consisting of rigid bodies related by joints (defined as kinematic
pairs) restricting the relative motion between the rigid bodies. See
van der Schaft & Maschke (2013) for details.

12.2.4 Hydraulic networks


A hydraulic network can be modeled as a directed graph with
edges corresponding to pipes, see e.g. Roberson & Crowe (1993);
De Persis & Kallesoe (2011). The vertices may either correspond to
connection points with fluid reservoirs (buffers), or merely to connec-
tion points of the pipes; we concentrate on the first case (the second
case corresponding to a Kirchhoff-Dirac structure, cf. Section 12.8). Let
xv be the stored fluid at vertex v and let νe be the fluid flow through
edge e. Collecting all stored fluids xv into one vector x, and all fluid
flows νe into one vector ν, the mass-balance is summarized as

ẋ = Bν, (12.8)

with B denoting the incidence matrix of the graph. In the absence of


fluid reservoirs this reduces to Kirchhoff’s current laws Bν = 0.

For incompressible fluids a standard model of the fluid flow νe


through pipe e is
Je ν̇e = Pi − Pj − λe (νe ), (12.9)
where Pi and Pj are the pressures at the tail, respectively head, vertices
of edge e. Note that this captures in fact two effects; one correspond-
ing to energy storage and one corresponding to energy dissipation.
Defining the energy variable ϕ_e := J_e ν_e, the stored energy in the pipe associated with edge e is given as ϕ_e²/(2J_e) = ½ J_e ν_e². Secondly, λ_e(ν_e) is a
In the case of fluid reservoirs at the vertices the pressures Pv at each
vertex v are functions of xv , and thus, being scalar functions, always
derivable from an energy function P_v = ∂H_v/∂x_v(x_v), v ∈ V, for some

Hamiltonian Hv (xv ) (e.g. gravitational energy). The resulting dynam-


ics (with state variables xv and ϕe ) is port-Hamiltonian with respect
to the Poisson structure (12.2). The set-up is immediately extended to
boundary vertices (either corresponding to controlled fluid reservoirs
or direct in-/outflows).

12.2.5 Single species chemical reaction networks

We have already seen in the last section of Chapter 2 how isothermal


detailed-balanced chemical reaction networks governed by mass ac-
tion kinetics give rise to a port-Hamiltonian system defined on the
graph of complexes, with respect to the Hamiltonian given by the
Gibbs’ free energy, and energy-dissipating relations determined by the
reaction constants and the thermodynamic equilibrium.
For complexes consisting of single species this can be specialized
to the following linear port-Hamiltonian system on a graph having
the same structure as a mass-damper system. In fact, for complexes
consisting of single species we have Z = Im , in which case the port-
Hamiltonian formulation of detailed-balanced mass action kinetics re-
action networks given by (2.45) reduces to
\[
\dot{x} = -B K B^T \frac{x}{x^*},
\]
where x∗ is the assumed thermodynamic equilibrium. Defining the

diagonal matrix M := diag(x∗1 , · · · , x∗m ) this can be also written as

ẋ = −BKB T M −1 x, (12.10)

which are exactly the equations of a mass-damper system on a graph,


with damping constants given by the diagonal elements of K and
Hamiltonian H(x) = ½ x^T M^{-1} x.
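For intuition, the linear dynamics (12.10) can be simulated directly. The sketch below uses hypothetical rate constants and equilibrium values; it illustrates that 1^T x is conserved while x/x^* converges to consensus:

```python
import numpy as np

# Hypothetical single-species network: 3 complexes, reactions 1->2 and 2->3
B = np.array([[-1,  0],
              [ 1, -1],
              [ 0,  1]], dtype=float)
K = np.diag([1.0, 0.5])                 # conductances of the reactions
x_star = np.array([2.0, 1.0, 4.0])      # assumed thermodynamic equilibrium
M_inv = np.diag(1.0 / x_star)

x = np.array([5.0, 1.0, 1.0])
dt = 1e-3
for _ in range(20_000):
    x = x + dt * (-B @ K @ B.T @ M_inv @ x)

print("total mass conserved:", np.isclose(x.sum(), 7.0))
print("x / x* at consensus:", x / x_star)   # all entries (approximately) equal
```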

12.3 Swing equations for power grids

Consider a power grid consisting of n buses corresponding to the ver-


tices of a graph. A standard model for the dynamics of the i-th bus is
given by (see e.g. Machovski et al. (2008); Bürger et al. (2013))

\[
\begin{aligned}
\dot{\delta}_i &= \omega_i^b - \omega^r, \quad i = 1, \dots, n, \\
M_i \dot{\omega}_i &= -a_i(\omega_i^b - \omega^r) - \sum_{j \neq i} V_i V_j S_{ij} \sin(\delta_i - \delta_j) + u_i,
\end{aligned}
\]

where the summation in the last line is over all buses j which are ad-
jacent to bus i; that is, all buses j that are directly linked to bus i by a
transmission line (defining an edge of the graph). Here δi denotes the
voltage angle, Vi the voltage amplitude, ωib the frequency, ωi := ωib − ω r
the frequency deviation, and ui the power generation/consumption;
all at bus i. Furthermore, ω r is the nominal (reference) frequency for
the network, Mi and ai are inertia and damping constants at bus i, and
Sij is the transfer susceptance of the line between bus i and j.
Define zk := δi − δj and ck := Vi Vj Sij , if the k-th edge is pointing from vertex i to vertex j. Furthermore, define the momenta pi =
Mi ωi , i = 1, · · · , n. Then the equations can be written in the vector
form
ż = B T M −1 p,
ṗ = −AM −1 p − BC Sin z + u,
where z is the m-vector with components zk , M is the diagonal matrix
with diagonal elements Mi , A is the diagonal matrix with diagonal
elements ai , and C is the diagonal matrix with elements ck . Further-
more, Sin : Rm → Rm denotes the elementwise sin function, and z ∗ is
the m-vector with k-th component δij∗.

Defining the Hamiltonian H(z, p) as


\[
H(z,p) = \frac{1}{2} p^T M^{-1} p - 1^T C \,\mathrm{Cos}\, z, \tag{12.11}
\]
the equations take the port-Hamiltonian form
\[
\begin{bmatrix} \dot{z} \\ \dot{p} \end{bmatrix} =
\begin{bmatrix} 0 & B^T \\ -B & -A \end{bmatrix}
\begin{bmatrix} \frac{\partial H}{\partial z}(z,p) \\[1mm] \frac{\partial H}{\partial p}(z,p) \end{bmatrix}
+ \begin{bmatrix} 0 \\ I \end{bmatrix} u. \tag{12.12}
\]
Note that the Hamiltonian H(z, p) is of the standard ’kinetic energy
plus potential energy’ form, with potential energy −1^T C Cos z = −Σ_k c_k cos z_k, similar to the gravitational energy of a pendulum;
whence the name ’swing equations’. Note that, as in the mass-spring
system example, the potential energy is associated to the edges of the
graph, while the kinetic energy is associated to its vertices. A differ-
ence with the mass-spring-damper system example is that in the cur-
rent example the ’damping torques’ A ∂H/∂p(z, p) are associated to the
vertices, instead of to the edges.
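The following sketch integrates (12.12) for a hypothetical three-bus network with u = 0 and monitors the Hamiltonian (12.11), which should decrease along solutions because of the damping matrix A:

```python
import numpy as np

# Hypothetical 3-bus network: lines 1-2 and 2-3
B = np.array([[-1,  0],
              [ 1, -1],
              [ 0,  1]], dtype=float)
M = np.diag([1.0, 1.5, 2.0])        # inertias
A = np.diag([0.2, 0.3, 0.1])        # damping
C = np.diag([1.0, 0.8])             # c_k = V_i V_j S_ij for each line

def H(z, p):
    return 0.5 * p @ np.linalg.solve(M, p) - np.ones(2) @ C @ np.cos(z)

z = np.array([0.3, -0.2]); p = np.array([0.1, 0.0, -0.1])
H0 = H(z, p)
dt = 1e-3
for _ in range(5000):
    dHdz = C @ np.sin(z)                        # dH/dz
    dHdp = np.linalg.solve(M, p)                # dH/dp = M^{-1} p
    z = z + dt * (B.T @ dHdp)
    p = p + dt * (-B @ dHdz - A @ dHdp)         # u = 0

print("H(0) =", H0, " H(T) =", H(z, p))        # decreases due to the damping A
```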

12.4 Available storage

Consider the simplest port-Hamiltonian system on a graph, given as

\[
\begin{aligned}
\dot{x} &= Bu, \qquad u \in \mathbb{R}^m, \; x \in \mathbb{R}^n, \\
y &= B^T \frac{\partial H}{\partial x}(x), \qquad y \in \mathbb{R}^m,
\end{aligned} \tag{12.13}
\]
where B is the incidence matrix of the graph, and H(x) = ½‖x‖² is
the Hamiltonian function. Clearly, since H is non-negative it defines a
storage function, and the system is passive. On the other hand it will
turn out that the minimal storage function for the system, called the
available storage (see Chapter 7), is different from H.
Throughout this section we will assume3 that the graph is con-
nected, or equivalently Bollobas (1998) ker B T = span 1. Based on
3
Without loss of generality, since otherwise the analysis can be repeated for every
connected component of the graph.

Chapter 7 we know that the available storage Sa is given as


\[
S_a(x) = \sup \left( -\int_0^\tau u^T(t)\, y(t)\, dt \right), \tag{12.14}
\]

where we consider the supremum over all τ ≥ 0 and all input func-
tions u : [0, τ ] → Rm , and where y : [0, τ ] → Rm is the output resulting
from the input function u : [0, τ ] → Rm and initial condition x(0) = x.
Noting that
\[
\int_0^\tau u^T(t)\, y(t)\, dt = \int_0^\tau u^T(t)\, B^T x(t)\, dt
= \int_0^\tau \dot{x}^T(t)\, x(t)\, dt = \frac{1}{2}\|x(\tau)\|^2 - \frac{1}{2}\|x(0)\|^2,
\]
we see that the available storage is equivalently given as
\[
S_a(x) = \sup \left( \frac{1}{2}\|x\|^2 - \frac{1}{2}\|x(\tau)\|^2 \right), \tag{12.15}
\]
where we take the supremum over all τ ≥ 0 and all possible states
x(τ ) resulting from input functions u : [0, τ ] → Rm . By connectedness
of the graph, we know that from x(0) = x we can reach, by choosing
the input function suitably, any state x(τ ) satisfying

1T x(τ ) = 1T x. (12.16)

Hence the available storage Sa (x) is given by (12.15) where we take


the supremum over all states x(τ ) satisfying (12.16). This corresponds
to minimizing ½‖x(τ)‖² over all x(τ) satisfying (12.16), having the solution
\[
x(\tau) = \frac{1}{n}\,(1^T x)\,1. \tag{12.17}
\]
Thus the available storage S_a is given by the explicit expression
\[
S_a(x) = \frac{1}{2}\|x\|^2 - \frac{1}{2}\left(\frac{1}{n} 1^T x\right)^2 \|1\|^2
= \frac{1}{2}\, x^T \Big( I_n - \frac{1}{n} 1 1^T \Big) x. \tag{12.18}
\]
We conclude that for all initial conditions x(0) = x which are such
that 1^T x ≠ 0 the available storage Sa(x) is strictly smaller than the Hamiltonian ½‖x‖². The reason is that, since the system ẋ = Bu is
not controllable, it is not possible to drive every initial state to the

origin; the position of zero energy. Instead, by extracting the maximal


amount of energy the system is brought from state x to a state x∗ with
x∗1 = · · · = x∗n , satisfying x∗1 + · · · + x∗n = x1 + · · · + xn .
Note that the matrix I_n − (1/n) 11^T defines a symmetric weighted
Laplacian matrix for an extended graph; namely the complete graph for
the vertices of the original graph4 .
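Formula (12.18) is easily evaluated; the short sketch below (with a hypothetical initial state) compares the available storage with the Hamiltonian ½‖x‖²:

```python
import numpy as np

x = np.array([1.0, -2.0, 4.0])          # hypothetical initial state with 1^T x != 0
n = len(x)

H = 0.5 * x @ x                          # Hamiltonian 1/2 ||x||^2
Pi = np.eye(n) - np.ones((n, n)) / n     # projector I_n - (1/n) 1 1^T
Sa = 0.5 * x @ Pi @ x                    # available storage (12.18)

print("H =", H, " Sa =", Sa, " Sa < H:", Sa < H)
# The gap H - Sa = (1^T x)^2 / (2 n) is the energy locked in the uncontrollable direction span{1}.
```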
The above analysis can be extended to any port-Hamiltonian sys-
tem (12.13) for which the Hamiltonian H is non-negative (and thus the
system (12.13) is passive). Indeed, in this case the available storage can
be seen to be
Sa (x) = H(x) − H(v ∗ (x)),
where H(v∗(x)) is the minimal value of H(v) over all v ∈ Rn
satisfying 1T v = 1T x. Equivalently, this amounts to the minimization
of
H(v) + λ(1T v − 1T x)
over v and the Lagrangian multiplier λ ∈ R, yielding the minimizer
v ∗ (x) as the solution of the equations
\[
\begin{aligned}
&\frac{\partial H}{\partial v_1}(v^*(x)) = \frac{\partial H}{\partial v_2}(v^*(x)) = \cdots = \frac{\partial H}{\partial v_n}(v^*(x)), \\
&v_1 + v_2 + \cdots + v_n = x_1 + x_2 + \cdots + x_n.
\end{aligned} \tag{12.19}
\]
The first equation of (12.19) can be interpreted as a consensus condi-
tion on the co-energy variables ∂H/∂v_1, · · · , ∂H/∂v_n. Note that, as in the case H(x) = ½‖x‖², the expression for the available storage is independent
of the graph (as long as it is connected).
Example 12.1. Consider a system of n point masses M1 , · · · , Mn in R
with state variables being the momenta p1 , · · · , pn , and with Hamilto-
nian equal to the kinetic energy
\[
H(p) = \sum_{i=1}^{n} \frac{p_i^2}{2M_i}.
\]
The available storage can be computed as
\[
S_a(p) = \frac{1}{2} \sum_{i<j} \frac{M_i M_j}{M_1 + \cdots + M_n} \left( \frac{p_i}{M_i} - \frac{p_j}{M_j} \right)^2.
\]
4
A graph is called complete if there is an edge between every pair of vertices.

This quantity was called the motion energy in Willems (2013). It


amounts to the maximal energy which can be extracted from the sys-
tem by applying forces F_1, · · · , F_n satisfying Σ_{j=1}^n F_j = 0, or equiva-
lently (since 1T B = 0)
ṗ = F = Bu,
where F is the vector with components F1 , · · · , Fn and B is the in-
cidence matrix of the complete graph with vertices corresponding to
the masses M1 , · · · , Mn . Note that as a result of extracting the maxi-
mal energy the system will end up in a consensus state v1 = · · · = vn ,
with v_i = p_i/M_i the velocities of the point masses.
The above expression for the available storage can be readily ex-
tended to point masses in R3 ; replacing the expression
\[
\left( \frac{p_i}{M_i} - \frac{p_j}{M_j} \right)^2 \quad \text{with} \quad \left\| \frac{p_i}{M_i} - \frac{p_j}{M_j} \right\|^2 .
\]

Note that contrary to the Hamiltonian function, the available stor-


age is not additive: the available storage of an interconnected port-
Hamiltonian system on a graph is not necessarily the sum of the avail-
able storages of the individual subsystems, as was already noted in
Willems (2013). A simple example is provided by the juxtaposition
of two systems each consisting of two masses. The sum of the ener-
gies which can be extracted from the two systems separately by apply-
ing for each system two external forces whose sum is zero, is strictly
smaller than the amount of energy which can be extracted from the
four masses by applying four external forces whose sum is zero.

12.5 Analysis of port-Hamiltonian systems on graphs

In this section we will investigate the dynamical properties of the


mass-spring-damper system as discussed in Section 12.2.3. As we
have seen, many other examples share the same mathematical struc-
ture, and their analysis will follow the same lines.
Thus we will consider a mass-spring-damper system as described
by a graph G(V, Es ∪ Ed ), where the vertices in V correspond to the
masses, the edges in Es to the springs, and the edges in Ed to the

dampers of the system, with incidence matrix B = [B_s  B_d], where the columns of B_s reflect the spring edges and the columns of B_d the damper edges. Without boundary vertices the dynamics takes the form (see equation (12.7) in Section 12.2.3)
\[
\begin{bmatrix} \dot{q} \\ \dot{p} \end{bmatrix} =
\begin{bmatrix} 0 & B_s^T \\ -B_s & -B_d R B_d^T \end{bmatrix}
\begin{bmatrix} \frac{\partial H}{\partial q}(q,p) \\[1mm] \frac{\partial H}{\partial p}(q,p) \end{bmatrix}. \tag{12.20}
\]
Throughout this section we make the following simplifying assump-
tion5 . The graph G(V, Es ∪ Ed ) is connected, or equivalently ker BsT ∩
ker BdT = span 1.

12.5.1 Equilibria and Casimirs


We start with the following proposition regarding the equilibria.

Proposition 12.1. The set of equilibria E of (12.20) is given as


\[
\mathcal{E} = \left\{ (q,p) \in \Lambda^1 \times \Lambda_0 \;\middle|\; \frac{\partial H}{\partial q}(q,p) \in \ker B_s, \;\; \frac{\partial H}{\partial p}(q,p) \in \operatorname{span} 1 \right\}.
\]
Proof. The state (q, p) is an equilibrium whenever
\[
B_s^T \frac{\partial H}{\partial p}(q,p) = 0, \qquad B_s \frac{\partial H}{\partial q}(q,p) + B_d R B_d^T \frac{\partial H}{\partial p}(q,p) = 0.
\]
Premultiplication of the second equation by the row vector ∂^T H/∂p(q, p), making use of the first equation, yields
\[
\frac{\partial^T H}{\partial p}(q,p)\, B_d R B_d^T \frac{\partial H}{\partial p}(q,p) = 0,
\]
or equivalently B_d^T ∂H/∂p(q, p) = 0, which implies B_s ∂H/∂q(q, p) = 0. □

In other words, for (q, p) to be an equilibrium, ∂H/∂p(q, p) should satisfy the consensus conditions corresponding to the mass-damper graph G(V, Es ∪ Ed), whereas ∂H/∂q(q, p) should be in the space of cycles
5
Again, this assumption can be made without loss of generality, since otherwise
the same analysis can be performed for each connected component.

of the mass-spring graph G(V, Es ) (corresponding to zero net spring


forces applied to the masses at the vertices).
Similarly, the Casimirs (conserved quantities independent of the
Hamiltonian H, cf. Chapter 8), are computed as follows.

Proposition 12.2. The Casimir functions are all functions C(q, p) sat-
isfying
\[
\frac{\partial C}{\partial p}(q,p) \in \operatorname{span} 1, \qquad \frac{\partial C}{\partial q}(q,p) \in \ker B_s. \tag{12.21}
\]
Proof. The function C(q, p) is a Casimir if
\[
\begin{bmatrix} \frac{\partial^T C}{\partial q}(q,p) & \frac{\partial^T C}{\partial p}(q,p) \end{bmatrix}
\begin{bmatrix} 0 & B_s^T \\ -B_s & -B_d R B_d^T \end{bmatrix} = 0,
\]
or equivalently
\[
\frac{\partial^T C}{\partial p}(q,p)\, B_s = 0, \qquad
\frac{\partial^T C}{\partial q}(q,p)\, B_s^T - \frac{\partial^T C}{\partial p}(q,p)\, B_d R B_d^T = 0.
\]
Postmultiplication of the second equation by ∂C/∂p(q, p), making use of the first equation, gives the result. □

Therefore all Casimir functions can be expressed as functions of


the linear Casimir functions

C(q, p) = 1T p, C(q, p) = kT q, k ∈ ker Bs (12.22)

This implies that starting from an arbitrary initial position (q0 , p0 ) ∈


Λ1 × Λ0 the solution of the mass-spring-damper system (12.20) will be
contained in the affine space
" # " # " #
q 0 im BsT
A(q0 ,p0) := 0 + + (12.23)
p0 ker 1T 0
i.e., for all t the difference q(t) − q0 remains in the space of co-cycles of
the spring graph, while 1T p(t) = 1T p0 .

12.5.2 Stability analysis


Under generic conditions on the Hamiltonian H(q, p), each affine
space A(q0 ,p0 ) will intersect the set of equilibria E in a single point

(q∞ , p∞ ), which will qualify as the point of asymptotic convergence


starting from (q0 , p0 ) (provided there is enough damping present). In
order to simplify the statement of the results we will throughout this
subsection consider linear mass-spring systems, corresponding to a
quadratic and decoupled Hamiltonian function
\[
H(q,p) = \frac{1}{2} q^T K q + \frac{1}{2} p^T G p, \tag{12.24}
\]
where K is the positive diagonal matrix of spring constants, and G
is the positive diagonal matrix of reciprocals of the masses. It follows
that the set of equilibria is given as E = {(q, p) ∈ Λ1 × Λ0 | Kq ∈
ker Bs , Gp ∈ span 1}, while for each (q0 , p0 ) there exists a unique point
(q∞ , p∞ ) ∈ E ∩ A(q0 ,p0 ) . In fact, q∞ is given by the spring graph co-
cycle/cycle decomposition

q0 = v0 + q∞ , v0 ∈ im BsT ⊂ Λ1 , Kq∞ ∈ ker Bs ⊂ Λ1 , (12.25)

while p∞ is uniquely determined by

Gp∞ ∈ span 1, 1 T p∞ = 1 T p0 . (12.26)

This leads to the following asymptotic stability theorem. First note


that the energy H(q, p) = ½ q^T Kq + ½ p^T Gp satisfies
\[
\frac{d}{dt} H(q,p) = -\frac{\partial^T H}{\partial p}(q,p)\, B_d R B_d^T \frac{\partial H}{\partial p}(q,p)
= -p^T G B_d R B_d^T G p \le 0, \tag{12.27}
\]

and thus qualifies as a Lyapunov function; showing at least stability.

Theorem 12.1. Consider a linear mass-spring-damper system with


H(q, p) = ½ q^T Kq + ½ p^T Gp, where K and G are diagonal positive ma-
trices. Then for every (q0 , p0 ) there exists a unique equilibrium point
(q∞ , p∞ ) ∈ E ∩ A(q0 ,p0) , determined by (12.25, 12.26). Define the spring
Laplacian matrix Ls := Bs KBsT . Then for every (q0 , p0 ) the following
holds: the trajectory starting from (q0 , p0 ) converges asymptotically to
(q∞ , p∞ ) if and only if the largest GLs -invariant subspace contained
in ker BdT is equal to span 1.
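The invariant-subspace condition of Theorem 12.1 can be tested as an unobservability computation: the largest GL_s-invariant subspace contained in ker B_d^T is the unobservable subspace of the pair (B_d^T, GL_s). The sketch below checks this for a hypothetical chain of three masses with a single damper:

```python
import numpy as np

# Hypothetical chain: 3 masses, springs on edges 1-2 and 2-3, one damper on edge 1-2
Bs = np.array([[-1,  0],
               [ 1, -1],
               [ 0,  1]], dtype=float)
Bd = np.array([[-1], [1], [0]], dtype=float)
K = np.diag([1.0, 2.0])          # spring constants
G = np.diag([1.0, 0.5, 0.25])    # reciprocal masses

A = G @ Bs @ K @ Bs.T            # G Ls with Ls = Bs K Bs^T
C = Bd.T                         # "output" map whose kernel must trap no extra invariant subspace
n = A.shape[0]

# Unobservable subspace = kernel of the observability matrix [C; CA; ...; CA^{n-1}]
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
unobs_dim = n - np.linalg.matrix_rank(O)

print("dim of largest invariant subspace in ker Bd^T:", unobs_dim)
print("pervasive damping (equals span{1}):", unobs_dim == 1)
```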

The condition that the largest GLs -invariant subspace contained in


ker BdT is equal to span 1 amounts to pervasive damping: the influence
of the dampers spreads through the whole system. Another feature of
the dynamics of the mass-spring-damper system (12.20) is its robust-
ness with regard to constant external (disturbance) forces. Indeed, con-
sider a mass-spring-damper system with boundary masses (see Sec-
tion 12.2) and general Hamiltonian H(q, p), subject to constant forces
f̄_b
\[
\begin{bmatrix} \dot{q} \\ \dot{p} \end{bmatrix} =
\begin{bmatrix} 0 & B_s^T \\ -B_s & -B_d R B_d^T \end{bmatrix}
\begin{bmatrix} \frac{\partial H}{\partial q}(q,p) \\[1mm] \frac{\partial H}{\partial p}(q,p) \end{bmatrix}
+ \begin{bmatrix} 0 \\ E \end{bmatrix} \bar{f}_b, \tag{12.28}
\]
where we assume6 the existence of a q̄ such that
\[
B_s \frac{\partial H}{\partial q}(\bar{q}, 0) = E \bar{f}_b. \tag{12.29}
\]

Then, the shifted Hamiltonian H̄(q, p) := H(q, p) − (q − q̄)^T ∂H/∂q(q̄, 0) − H(q̄, 0) introduced before, satisfies
\[
\frac{d}{dt}\bar{H}(q,p) = -\frac{\partial^T H}{\partial p}(q,p)\, B_d R B_d^T \frac{\partial H}{\partial p}(q,p) \le 0. \tag{12.30}
\]
Specializing to H(q, p) = ½ q^T Kq + ½ p^T Gp, in which case H̄(q, p) = ½ (q − q̄)^T K(q − q̄) + ½ p^T Gp, we obtain the following analog of Theorem
12.1.

Proposition 12.3. Consider a linear mass-spring-damper system


(12.28) with constant external disturbance f¯b and Hamiltonian
H(q, p) = ½ q^T Kq + ½ p^T Gp, where K and G are diagonal positive matrices, and with im E ⊂ im Bs . The set of controlled equilibria is given
by Ē = {(q, p) ∈ Λ1 × Λ0 | Bs Kq = E f¯b , Gp ∈ span 1}. For every
(q0 , p0 ) there exists a unique equilibrium point (q̄∞ , p∞ ) ∈ Ē ∩ A(q0 ,p0 ) .
Here p∞ is determined by (12.26), while q̄∞ = q̄ + q∞ , with q̄ such that
Bs K q̄ = E f¯b and q∞ the unique solution of (12.25) with q0 replaced
by q0 − q̄. Furthermore, for each (q0 , p0 ) the trajectory starting from
6
If the mapping q → ∂H/∂q(q, 0) is surjective, then there exists for every f̄b such a q̄
if and only if im E ⊂ im Bs .

(q0 , p0 ) converges asymptotically to (q̄∞ , p∞ ) if and only if the largest


GLs -invariant subspace contained in ker BdT is equal to span 1.

Note that the above proposition has a classical interpretation in


terms of the robustness of integral control with regard to constant dis-
turbances: the springs act as integral controllers which counteract the
influence of the unknown external force f¯b so that the vector of mo-
menta p will still converge to consensus.
Thanks to the systematic use of the port-Hamiltonian structure, the
stability analysis given above is readily extendable to the nonlinear
case.

12.6 Symmetry reduction

In this subsection we will show how port-Hamiltonian systems on


graphs, such as the mass-spring-damper systems, can be alternatively
obtained by symmetry reduction from a symplectic formulation, exploit-
ing the invariance of the Hamiltonian function (in particular, of the
spring potential energies).
Let us return to the formulation of a mass-spring system in Section
12.2, where the vertices correspond to the masses, and the edges to the
springs in between them. An alternative is to consider the configura-
tion vector qc ∈ Λ0 =: Qc , describing the positions of all the masses.
In fact, this is the classical starting point for Lagrangian mechanics,
where we do not start with the energy variables q and p, but instead
we start with the configuration vector qc and the corresponding veloc-
ity vector q̇c . The classical Hamiltonian formulation is then obtained
by defining the vector of momenta p ∈ Λ0 = Q∗c as p = M q̇c (with
M the diagonal mass matrix), resulting in the symplectic phase space
Qc × Q∗c = Λ0 × Λ0 . For ordinary springs the relation between qc ∈ Λ0
and the vector q ∈ Λ1 describing the elongations of the springs is given
as q = B T qc . Hence in this case the Hamiltonian can be also expressed
as a function Hc of (qc , p) by defining

Hc (qc , p) := H(B T qc , p). (12.31)

It follows that the equations of motion of the mass-spring system (with



boundary masses) are given by the canonical Hamiltonian equations


\[
\begin{aligned}
\dot{q}_c &= \frac{\partial H_c}{\partial p}(q_c, p), \\
\dot{p} &= -\frac{\partial H_c}{\partial q_c}(q_c, p) + E f_b, \\
e_b &= E^T \frac{\partial H_c}{\partial p}(q_c, p),
\end{aligned} \tag{12.32}
\]
where, as before, fb are the external forces exerted on the boundary
masses and eb are their velocities.
What is the relation with the port-Hamiltonian formulation given
in Section 12.2? It turns out that this relation is precisely given by the
standard procedure of symmetry reduction of a Hamiltonian system7 .
Indeed, since 1T B = 0 the Hamiltonian function Hc (qc , p) given in
(12.31) is invariant under the action of the group G = R acting on the
phase space Λ0 × Λ0 ≃ R2N by the symplectic group action

(qc , p) ↦ (qc + α1, p), α ∈ G = R. (12.33)

From standard reduction theory, see e.g. Marsden & Ratiu (1999);
Libermann & Marle (1987) and the references quoted therein, it fol-
lows that we may factor out the configuration space Qc := Λ0 to the
reduced configuration space

Q := Λ0 /G (12.34)

Let us assume that the graph is connected, or equivalently ker B T =


span 1. Then we have the following identification

Q := Λ0 /G ≃ B T Λ0 ⊂ Λ1 . (12.35)

Hence the reduced state space of the mass-spring system is given by


im B T × Λ0 , where im B T ⊂ Λ1 . Furthermore, under the symmetry
action the canonical Hamiltonian equations (12.32) on the symplec-
tic space Λ0 × Λ0 reduce to the port-Hamiltonian equations (12.3) on
7
This relation can be regarded as the discrete, graph-theoretic, version, of the cor-
respondence between the port-Hamiltonian formulation of the Maxwell equations
(using the Stokes-Dirac structure) and its symplectic formulation using the vector po-
tential of the magnetic field, cf. Marsden & Ratiu (1999); Vankerschaver et al. (2010).

im B T × Λ0 ⊂ Λ1 × Λ0 obtained before:
\[
\begin{aligned}
\dot{q} &= B^T \dot{q}_c = B^T \frac{\partial H_c}{\partial p}(q_c, p) = B^T \frac{\partial H}{\partial p}(q, p), \\
\dot{p} &= -\frac{\partial H_c}{\partial q_c}(q_c, p) + E f_b = -B \frac{\partial H}{\partial q}(q, p) + E f_b, \\
e_b &= E^T \frac{\partial H}{\partial p}(q, p).
\end{aligned} \tag{12.36}
\]
In case the graph is not connected, then the above symmetry reduction
can be performed for each component of the graph (i.e., the symmetry
group is RcG , with cG denoting the number of components of the graph
G), yielding again the reduced state space8 im B T × Λ0 .
For a mass-spring-damper system, although in the standard sym-
metry reduction framework not considered as a Hamiltonian system,
the same reduction procedure can still be applied. A mass-spring-
damper system in coordinates (qc , p) takes the form
\[
\begin{aligned}
\dot{q}_c &= \frac{\partial H_c}{\partial p}(q_c, p), \\
\dot{p} &= -\frac{\partial H_c}{\partial q_c}(q_c, p) - B_d R B_d^T \frac{\partial H_c}{\partial p}(q_c, p) + E f_b, \\
e_b &= E^T \frac{\partial H_c}{\partial p}(q_c, p),
\end{aligned} \tag{12.37}
\]
where Hc (qc , p) = H(BsT qc , p), with q = BsT qc the spring elonga-
tions. Here Bs and Bd denote, as before, the incidence matrices of the
spring, respectively, damper graph. Under the same symmetry action
as above this reduces to the equations (12.20) on the reduced state
space im BsT × Λ0 .
Furthermore we obtain the following corollary to Theorem 12.1
regarding ’second-order consensus’ (see also Goldin et al. (2010);
Camlibel & Zhang (2012)):

Corollary 12.2. Consider the mass-spring-damper system (12.37) in


coordinates (qc , p) where we assume the spring graph to be connected.
8
Note that in fact the subspace im B^T ⊂ Λ1 is determined by the Casimirs k^T q, Bk = 0, in the sense that im B^T = {q ∈ Λ1 | k^T q = 0, for all k ∈ ker B}. Furthermore, im B^T = Λ1 if and only if the graph does not contain cycles.



Then for all initial conditions qc (t) → span 1, p(t) → span 1 if and only if the largest GLs -invariant subspace contained in ker BdT is equal to span 1, and moreover ker Bs = 0.

12.7 The graph Dirac structures and interconnection

Starting point for the definition of port-Hamiltonian systems on


graphs in this chapter is the identification of the Poisson structure
(12.2) corresponding to the incidence matrix of the graph. Inclusion
of the boundary vertices leads to the definition of the following two
’canonical’ Dirac structures.

Definition 12.1. Consider an open graph G with vertex, edge and


boundary spaces, incidence matrix B and boundary incidence matrix
Bb . The flow-continuous graph Dirac structure Df (G) is defined as
\[
\mathcal{D}_f(\mathcal{G}) := \big\{ (f_1, e_1, f_{0i}, e_{0i}, f_b, e_b) \in \Lambda_1 \times \Lambda^1 \times \Lambda_{0i} \times \Lambda^{0i} \times \Lambda_b \times \Lambda^b \mid
B_i f_1 = f_{0i},\; B_b f_1 = f_b,\; e_1 = -B_i^T e_{0i} - B_b^T e_b \big\}. \tag{12.38}
\]

The effort-continuous graph Dirac structure De (G) is defined as

\[
\mathcal{D}_e(\mathcal{G}) := \big\{ (f_1, e_1, f_0, e_0, f_b, e_b) \in \Lambda_1 \times \Lambda^1 \times \Lambda_0 \times \Lambda^0 \times \Lambda_b \times \Lambda^b \mid
B_i f_1 = f_{0i},\; B_b f_1 = f_{0b} + f_b,\; e_1 = -B^T e_0,\; e_b = e_{0b} \big\}. \tag{12.39}
\]

By Proposition 2.3 both Df (G) and De (G) are separable Dirac struc-
tures. Note that Df (G) and De (G) only differ in the role of the bound-
ary flows and efforts, and that Df (G) = De (G) if there are no boundary
vertices.
Interconnection of two open graphs G α and G β is done by identify-
ing some of their boundary vertices, and equating (up to a minus sign)
the boundary efforts and flows corresponding to these boundary ver-
tices, resulting in a new graph. For simplicity of exposition consider
the case that the open graphs have all their boundary vertices in com-
mon, resulting in a (closed) graph with set of vertices Viα ∪ Viβ ∪ V,
where V := Vbα = Vbβ denotes the set of boundary vertices of both
graphs. The incidence operator of the interconnected (closed) graph is

obtained as follows. For simplicity of notation consider the case that


R = R. Let G j have incidence matrices
" #
j Bij
B = , j = α, β.
Bbj

The incidence operator B of the interconnected graph is then given as
\[
B = \begin{bmatrix} B_i^\alpha & 0 \\ 0 & B_i^\beta \\ B_b^\alpha & B_b^\beta \end{bmatrix}, \tag{12.40}
\]

corresponding to the interconnection constraints on the boundary po-


tentials and currents given by

ebα = ebβ , fbα + fbβ = 0. (12.41)

Comparing the interconnection of open graphs with the composi-


tion of their graph Dirac structures (see e.g. Proposition 6.1) it is
readily seen that the flow/effort-continuous graph Dirac structure of
an interconnected graph equals the composition of the flow/effort-
continuous graph Dirac structures of G α and G β .

12.8 The Kirchhoff-Dirac structure

In this section we consider a third canonical graph Dirac structure,


which results from constraining the flows at the internal vertices to zero
(and thus there is no energy-storage or dissipation associated with the
vertices for the corresponding port-Hamiltonian system).
The Kirchhoff-Dirac structure is defined as
\[
\mathcal{D}_K(\mathcal{G}) := \big\{ (f_1, e_1, f_b, e_b) \in \Lambda_1 \times \Lambda^1 \times \Lambda_b \times \Lambda^b \mid
B_i f_1 = 0,\; B_b f_1 = f_b,\; \exists\, e_{0i} \in \Lambda^{0i} \text{ s.t. } e_1 = -B_i^T e_{0i} - B_b^T e_b \big\}. \tag{12.42}
\]

Note that, in contrast to the flow/effort-continuous graph Dirac struc-


tures, the Kirchhoff-Dirac structure only involves the flow and effort
variables of the edge and boundary vertex spaces (not of the internal
vertex spaces).

Proposition 12.4. DK (G) is a separable Dirac structure.

Proof. The Kirchhoff-Dirac structure is equal to the composition of the


flow-continuous9 graph Dirac structure Df (G) with the trivial separa-
ble Dirac structure defined as

{(f_{0i} , e_{0i}) ∈ Λ_{0i} × Λ^{0i} | f_{0i} = 0}.

The result then follows from Proposition 6.1. □

Port-Hamiltonian systems with respect to the Kirchhoff-Dirac


structure are defined completely similar to the case of the flow/effort-
continuous graph Dirac structure; the difference being that energy-
storing or dissipative relations are now only defined for the flow and
effort variables corresponding to the edges.

12.8.1 Electrical circuits

The prime example of a port-Hamiltonian system10 with respect to


a Kirchhoff-Dirac structure is an electrical RLC-circuit, with circuit
graph G. In this case the elements of Λ1 and Λ1 denote the vectors of
currents through, respectively the voltages across, the edges, and the
Kirchhoff-Dirac structure amounts to Kirchhoff’s current and voltage
laws (whence its name). Furthermore, the effort variables e0 are the po-
tentials at the vertices, while the boundary flows and efforts fb , eb are
the boundary currents, respectively boundary potentials at the boundary
vertices (the terminals of the electrical circuit).
On top of Kirchhoff’s laws, the dynamics is defined by the energy-
storage relations corresponding to either capacitors or inductors, and
dissipative relations corresponding to resistors. The energy-storing re-
9
Or the composition of the effort-continuous graph Dirac structure with
{(f_0 , e_0) ∈ Λ_0 × Λ^0 | f_0 = 0}.
10
The terminology ’port-Hamiltonian’ may be confusing in this context, because
’ports’ in electrical circuits are usually defined by pairs of terminals, that is pairs of
boundary vertices with external variables being the currents through and the voltages
across an edge corresponding to each such port. See also the discussion in Willems
(2007, 2010); van der Schaft & Maschke (2009).

lations for a capacitor at edge e are given by


\[
\dot{Q}_e = -I_e, \qquad V_e = \frac{dH_{C_e}}{dQ_e}(Q_e), \tag{12.43}
\]
with Qe the charge, and HCe (Qe ) denoting the electric energy stored
in the capacitor. Alternatively, in the case of an inductor one specifies
the magnetic energy HLe (Φe ), where Φe is the magnetic flux linkage,
together with the dynamic relations
\[
\dot{\Phi}_e = V_e, \qquad -I_e = \frac{dH_{L_e}}{d\Phi_e}(\Phi_e). \tag{12.44}
\]
Finally, a resistor at edge e corresponds to a static relation between
the current Ie through and the voltage Ve across this edge, such that
Ve Ie ≤ 0. In particular, a linear (ohmic) resistor at edge e is specified
by a relation Ve = −Re Ie , with Re ≥ 0.
Alternatively, we can decompose the circuit graph G as the inter-
connection of a graph corresponding to the capacitors, a graph corre-
sponding to the inductors, and a graph corresponding to the resistors.
For simplicity let us restrict ourselves to the case of an LC-circuit with-
out boundary vertices. Define V̂ as the set of all vertices that are ad-
jacent to at least one capacitor as well as to at least one inductor. Then
split the circuit graph into an open circuit graph G C corresponding to
the capacitors and an open circuit graph G L corresponding to the in-
ductors, both with set of boundary vertices V̂. Denote the incidence
matrices of these two circuit graphs by
" # " #
C BiC L BiL
B := , B :=
BbC BbL
Assuming for simplicity that all capacitors and inductors are linear we
arrive at the following equations for the C-circuit

\[
\begin{aligned}
B_b^C \dot{Q} &= I_b^C, \\
B_i^C \dot{Q} &= 0, \\
(B_b^C)^T \psi_b^C &= C^{-1} Q - (B_i^C)^T \psi_i^C,
\end{aligned}
\]

with Q the vector of charges of the capacitors and C the diagonal ma-
trix with diagonal elements given by the capacitances of the capaci-

tors. Similarly for the L-circuit we obtain the equations

\[
\begin{aligned}
\dot{\Phi} &= (B_b^L)^T \psi_b^L + (B_i^L)^T \psi_i^L, \\
0 &= B_i^L L^{-1} \Phi, \\
I_b^L &= -B_b^L L^{-1} \Phi,
\end{aligned}
\]

with Φ the vector of fluxes and L the diagonal matrix of inductances


of all the inductors.
The equations of the LC-circuit are obtained by imposing the in-
terconnection constraints ψbC = ψbL =: ψi and IbC +IbL = 0. By eliminat-
ing the boundary currents IbC , IbL one thus arrives at the differential-
algebraic port-Hamiltonian equations11
\[
\begin{bmatrix} B_i^C & 0 \\ 0 & B_i^L \\ B_b^C & B_b^L \end{bmatrix}
\begin{bmatrix} -\dot{Q} \\ -\dot{\Phi} \end{bmatrix} = 0, \qquad
\begin{bmatrix} C^{-1} Q \\ L^{-1} \Phi \end{bmatrix} =
\begin{bmatrix} (B_i^C)^T & 0 & (B_b^C)^T \\ 0 & (B_i^L)^T & (B_b^L)^T \end{bmatrix}
\begin{bmatrix} \psi_i^C \\ \psi_i^L \\ \psi_i \end{bmatrix}.
\]

12.8.2 Boundary flows and efforts of the Kirchhoff-Dirac structure
The fact that the internal vertex flows in the definition of the
Kirchhoff-Dirac structure are all zero (and consequently no storage
or dissipation at the vertices takes place) has a number of specific
consequences for the behavior of the boundary flows and efforts (see
Willems (2010) for closely related considerations).
Assume (for simplicity of exposition) that R = R. From the def-
inition of the Kirchhoff-Dirac structure and 1ᵀB = 0 it follows that

$$0 = \mathbb{1}^T B f_1 = \mathbb{1}_b^T B_b f_1 = -\mathbb{1}_b^T f_b, \qquad (12.45)$$
with 1b denoting the vector with all ones of dimension equal to
the number of boundary vertices. Hence the boundary part of the
Kirchhoff-Dirac structure of an open graph is constrained by the fact
that the boundary flows add up to zero. Dually, we may always add
to the vector of vertex efforts e0 the vector 1 leaving invariant the edge
efforts e₁ = Bᵀe₀. Hence, to the vector of boundary efforts e_b we may always add the vector 1_b.

¹¹ For a formulation of pure R, L or C circuits, and their weighted Laplacian matrices, we refer to van der Schaft (2010).
Proposition 12.5. Consider an open graph G with Kirchhoff-Dirac structure D_K(G). Then for each (f₁, e₁, f_b, e_b) ∈ D_K(G) it holds that

$$\mathbb{1}_b^T f_b = 0,$$

while for any constant c ∈ ℝ

$$(f_1, e_1, f_b, e_b + c\,\mathbb{1}_b) \in \mathcal{D}_K(G).$$
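Both statements of Proposition 12.5 are easy to check numerically on a small example. The following Python sketch is purely illustrative: the 4-vertex, 4-edge open graph, its incidence matrix, and the split into internal and boundary vertices are all made up, and `null_space` is only used to enumerate edge currents satisfying the internal Kirchhoff current laws.

```python
# Minimal check of 1^T B = 0 and Proposition 12.5 on a made-up open circuit graph.
import numpy as np
from scipy.linalg import null_space

# Incidence matrix B: rows = vertices, columns = edges (+1 at the tail, -1 at the head).
# Vertices 0, 1 are internal; vertices 2, 3 are boundary vertices (terminals).
B = np.array([[ 1, -1,  0,  0],
              [-1,  0,  1, -1],
              [ 0,  1, -1,  0],
              [ 0,  0,  0,  1]], dtype=float)
Bi, Bb = B[:2, :], B[2:, :]

print(np.allclose(np.ones(4) @ B, 0))   # 1^T B = 0: every edge has one tail and one head

# Kirchhoff-Dirac structure: internal vertex flows vanish (Bi f1 = 0), boundary flows fb = -Bb f1.
for f1 in null_space(Bi).T:             # all edge currents obeying the internal KCL
    fb = -Bb @ f1
    print(np.isclose(fb.sum(), 0.0))    # the boundary flows add up to zero
```

Adding any constant to all vertex potentials leaves e₁ = Bᵀe₀ unchanged, which is the numerical counterpart of the second statement.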

12.9 Topological analogies

From the above formulation of an RLC-circuit in Section 12.8.1 we con-


clude that the structure of the dynamical equations of an inductor are
structurally different from that of a capacitor. In order to elucidate this
basic difference we zoom in on the description of an inductor and a
capacitor as two-terminal elements. To this end consider the elemen-
tary open graph consisting of one edge with two boundary vertices
α, β, described by the incidence matrix b = [1  −1]ᵀ. It follows that
an inductor with magnetic energy H(Φ) is described by the equations

$$\dot{\Phi} = b^T\begin{bmatrix} \psi_\alpha \\ \psi_\beta \end{bmatrix}, \qquad \begin{bmatrix} I_\alpha \\ I_\beta \end{bmatrix} = b\,\frac{dH}{d\Phi}(\Phi), \qquad (12.46)$$

whereas a capacitor with electric energy H(Q) is described as

$$b\dot{Q} = \begin{bmatrix} I_\alpha \\ I_\beta \end{bmatrix}, \qquad \frac{dH}{dQ}(Q) = b^T\begin{bmatrix} \psi_\alpha \\ \psi_\beta \end{bmatrix}. \qquad (12.47)$$

This difference stems from the fact that the energy variable Q of a
capacitor, as well as the current I, takes values in the linear space Λ1 ,
while the state variable Φ of an inductor, as well as the voltage V ,
takes values in the dual space Λ¹. Recalling from Section 12.2.1 the
description of a spring system

$$\dot{q} = b^T\begin{bmatrix} v_\alpha \\ v_\beta \end{bmatrix}, \qquad \begin{bmatrix} F_\alpha \\ F_\beta \end{bmatrix} = b\,\frac{dH}{dq}(q), \qquad (12.48)$$

with q the elongation of the spring, and H(q) its potential energy, we
conclude that there is a strict analogy between a spring and an induc-
tor12 . On the other hand, a moving mass is not a strict analog of a ca-
pacitor. Instead, it can be considered to be the analog of a grounded
capacitor, while the strict analog of a capacitor (12.47) is the so-called
inerter Smith (2002)

$$b\dot{p} = \begin{bmatrix} F_\alpha \\ F_\beta \end{bmatrix}, \qquad \frac{dH}{dp}(p) = b^T\begin{bmatrix} v_\alpha \\ v_\beta \end{bmatrix},$$

where p is the momentum of the inerter and H(p) its kinetic energy,
while Fα, Fβ and vα, vβ denote the forces, respectively, velocities, at the
two terminals of the inerter. For a further discussion on analogies, see
Appendix B.

¹² Thus we favor the so-called force-current analogy instead of the force-voltage analogy.
13
Switching port-Hamiltonian systems

In quite a few application areas (e.g., power converters, robotics, hy-


draulic networks) systems arise which operate in different modes.
Such systems are commonly called switching or multi-modal physical
systems. In many cases of interest it is appropriate to model the fast
transitions between the different modes of operation of these systems
by ideal switches. In this chapter we will investigate how multi-modal
physical systems can be approached from the port-Hamiltonian point
of view. It will turn out that the varying topology of the system cor-
responds to a varying Dirac structure, while the energy-storage and
energy-dissipation is the same for all the modes.
Another aspect of multi-modal physical systems is that often the
modes of the system may involve algebraic constraints on the state
variables, while at the moment of switching the current state does
not satisfy the algebraic constraints of the next mode. This problem
needs to be resolved by the formulation of a jump rule, stating how
the present state should be changed instantaneously in order that it
satisfies the algebraic constraints of the next mode.
A classical example of such a jump rule arises in electrical circuit
theory, and concerns the characterization of the discontinuous change

in the charges of the capacitors and/or in the magnetic fluxes of the


inductors whenever switches are instantaneously closed or opened.
This is sometimes referred to as the charge and flux conservation prin-
ciple, and is usually only discussed on the basis of examples; see
e.g. Seshu & Balabanian (1964). In this chapter we will state a jump
rule for general switching port-Hamiltonian systems, which will in-
clude the classical charge and flux conservation principle as a special
case. The discontinuous change of the state involved in the jump rule
amounts to an impulsive motion satisfying a set of conservation laws
derived from the general conservation laws of the port-Hamiltonian
system. Furthermore, if the Hamiltonian function is convex and non-
negative it follows that the switching port-Hamiltonian system with
this jump rule is passive.
This chapter is largely based on van der Schaft & Camlibel (2009),
to which we refer for some of the proofs and for further extensions.

13.1 Switching port-Hamiltonian systems

For the definition of a switching port-Hamiltonian system we need the


following ingredients. We start with an overall Dirac structure D on
the space of all flow and effort variables involved:

D ⊂ Fx × Ex × FR × ER × FP × EP × FS × ES . (13.1)

The space Fx × Ex is the space of flow and effort variables correspond-


ing to the energy-storing elements1 , the space FR ×ER denotes the space
of flow and effort variables of the energy-dissipating elements, while
FP × EP is the space of flow and effort variables corresponding to the
external ports (or sources). Finally, the linear spaces FS , respectively
ES := FS∗ , denote the flow and effort spaces of the ideal switches. Let s
be the number of switches, then every subset π ⊂ {1, 2, . . . , s} defines
a switch configuration, according to

eiS = 0, i ∈ π, fSj = 0, j 6∈ π. (13.2)


1
Note the slight change in notation with respect to other chapters: we have re-
served the notation FS and ES for the flow and effort spaces of the switches, and Fx
and Ex for the flow and effort spaces of the energy-storing elements.
We will say that in switch configuration π, for all i ∈ π the i-th switch
is closed, while for j ∉ π the j-th switch is open.
For each fixed switch configuration π this leads to the following
subspace Dπ of the restricted space of flows and efforts Fx × Ex × FR ×
ER × FP × EP :
$$\mathcal{D}_\pi = \big\{ (f_x, e_x, f_R, e_R, f_P, e_P) \mid \exists f_S \in \mathcal{F}_S,\ e_S \in \mathcal{E}_S \ \text{s.t.}\ e_S^i = 0,\ i \in \pi,\ f_S^j = 0,\ j \notin \pi,\ \text{and}\ (f_x, e_x, f_R, e_R, f_P, e_P, f_S, e_S) \in \mathcal{D} \big\}. \qquad (13.3)$$
For every π the subspace Dπ defines a Dirac structure. Indeed, every
switch configuration π given by (13.2) defines a Dirac structure on the
space of flow and effort variables fS , eS of the switches, and Dπ equals
the composition of this Dirac structure with the overall Dirac structure
D. Since the composition of any two Dirac structures is again a Dirac
structure (cf. Chapter 6) it thus follows that Dπ is a Dirac structure for
every switch configuration π.
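The construction of D_π in (13.3), closing some switch ports and eliminating the switch variables, can also be carried out numerically when D is given in kernel representation. The sketch below is purely illustrative: the matrix M, the ordering of the variables (f_x, e_x, f_S, e_S), and the helper `restricted_subspace` are all made up for a single-switch toy example.

```python
# Sketch: restricting a Dirac structure D = {z | M z = 0} to a switch configuration pi.
import numpy as np
from scipy.linalg import null_space, orth

# Variables ordered as z = (fx, ex, fS, eS); a toy overall structure D:
M = np.array([[1., 0., 1., 0.],      # fx + fS = 0
              [0., 1., 0., -1.]])    # ex - eS = 0

def restricted_subspace(M, pi, n_keep=2, n_switch=1):
    """Impose e_S^i = 0 (i in pi), f_S^j = 0 (j not in pi), then project onto (fx, ex)."""
    rows = [M]
    for i in range(n_switch):
        row = np.zeros(M.shape[1])
        row[n_keep + n_switch + i if i in pi else n_keep + i] = 1.0
        rows.append(row)
    N = null_space(np.vstack(rows))  # all admissible (fx, ex, fS, eS)
    return orth(N[:n_keep, :])       # basis of D_pi expressed in the (fx, ex) variables

print(restricted_subspace(M, pi=set()))   # switch open  (fS = 0): forces fx = 0
print(restricted_subspace(M, pi={0}))     # switch closed (eS = 0): forces ex = 0
```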
The dynamics of the switching port-Hamiltonian system is defined
by specifying as before, next to its Dirac structure D, the constitutive
relations of the energy-storing and energy-dissipating (resistive) ele-
ments. We will restrict to linear resistive structures given by the graph
of a resistive mapping (see Chapter 4).
Definition 13.1. Consider a linear state space X = Fx , a Dirac struc-
ture D given by (13.1), a Hamiltonian H : X → R, and a linear resis-
tive structure fR = −ReR with R = RT ≥ 0. Then the dynamics of the
corresponding switching port-Hamiltonian system is given as
$$\Big(-\dot{x}(t),\ \frac{\partial H}{\partial x}(x(t)),\ -Re_R(t),\ e_R(t),\ f_P(t),\ e_P(t)\Big) \in \mathcal{D}_\pi \qquad (13.4)$$
at all time instants t during which the system is in switch configura-
tion π.
It follows from the power-conservation property of Dirac struc-
tures that during the time-interval in which the system is in a fixed
switch configuration
$$\frac{d}{dt}H = -e_R^T R e_R + e_P^T f_P \leq e_P^T f_P, \qquad (13.5)$$
thus showing passivity for each fixed switch configuration if the Hamilto-
nian H is non-negative.

Figure 13.1: Bouncing pogo-stick: definition of the variables (left), flying phase (mid-
dle), contact phase (right).

Example 13.1. Consider a pogo-stick that bounces on a horizontal


plate of variable height (see Figure 13.1). It consists of a mass m and a
mass-less foot, interconnected by a linear spring (with stiffness k and
rest length x₀) and a linear damper d. The states of the system are x
(length of the spring), y (height of the bottom of the mass), and p = mẏ
(momentum of the mass). The total energy is

$$H(x, y, p) = \tfrac{1}{2}k(x - x_0)^2 + mg(y + y_0) + \frac{p^2}{2m},$$

where y₀ is the distance from the bottom of the mass to the center of
mass. This leads to the constitutive relations given as

$$f_x = -\dot{x}, \quad f_y = -\dot{y}, \quad f_p = -\dot{p}, \quad f_R = -\tfrac{1}{d}e_R, \qquad e_x = k(x - x_0), \quad e_y = mg, \quad e_p = \tfrac{p}{m},$$

where fR , eR are the flow and effort variables associated to the damper
(energy dissipation), and d is the damping constant. Furthermore, the
overall Dirac structure of the system is described by the linear equa-
tions
fy = fx − fS , fR = fx , fp = ex + ey + eR ,
eS + ex + eR = 0, ep = −fy .
Here the third equation fp = ex + ey + eR represents the total force
balance on the mass m. In the switch configuration eS = 0 (no external
force on the foot) the pogo-stick is in its flying mode, while for fS = 0
the foot is in contact with the horizontal plate. Hence the equation
eS +ex +eR = 0 expresses that for eS = 0 (flying mode) the spring force
on the mass-less foot balances the damping force, while for fS = 0
(contact mode) fy = fx and eS represents the constraint force exerted
by the ground.

The conditions (13.4) for a particular switch configuration π may


entail algebraic constraints on the state variables x. These are charac-
terized by the effort constraint subspace defined for each switch config-
uration π as follows (see also Chapter 8):
$$\mathcal{C}_\pi := \big\{ e_x \in \mathcal{E}_x \mid \exists f_x, f_R, e_R, f_P, e_P \ \text{s.t.}\ (f_x, e_x, f_R, e_R, f_P, e_P) \in \mathcal{D}_\pi,\ f_R = -Re_R \big\}. \qquad (13.6)$$

The subspace2 Cπ determines, together with H, the algebraic constraints


in each switch configuration π. Indeed, from (13.4) it follows that
$$\frac{\partial H}{\partial x}(x(t)) \in \mathcal{C}_\pi, \qquad (13.7)$$
for all time instants t during which the system is in switch configura-
tion π. Hence if Cπ 6= Ex then in general (depending on the Hamilto-
nian H) this imposes algebraic constraints on the state vector x(t).

Example 13.2 (Pogo-stick continued). In the example of the pogo-


stick the subspace Cπ is equal to Ex for any of the two switch config-
urations, and hence there are no algebraic constraints. This changes,
however, if the mass of the foot is taken into account. Indeed, by as-
suming a mass M > 0 of the foot, there is an additional state variable
2
Note that Cπ may depend on the linear resistive structure, but not on the energy
storage constitutive relation.
p_M (momentum of the foot) with kinetic energy p_M²/(2M), and the equation e_S + e_x + e_R = 0 changes into f_M = e_S + e_x + e_R with f_M = −ṗ_M, while furthermore an extra equation e_{p_M} = f_y − f_x with e_{p_M} = p_M/M is added to the equations of the overall Dirac structure.


In the contact mode π given by fS = 0, this means that epM =
fy − fx = 0, so that

Cπ = {(ex , ey , ep , epM ) | epM = 0},


implying the obvious algebraic constraint p_M/M = e_{p_M} = 0.

13.2 Jump rule for switching port-Hamiltonian systems

Next, we define for each π the jump space

Jπ := {fx | (fx , 0, 0, 0, 0, 0) ∈ Dπ }. (13.8)

The following crucial relation between the jump space Jπ and the
effort constraint subspace Cπ holds true. Recall that Jπ ⊂ Fx while
Cπ ⊂ Ex , where Ex = Fx∗ .

Theorem 13.1.
Jπ = Cπ⊥ , (13.9)
where ⊥ denotes the orthogonal complement with respect to the dual-
ity product between the dual spaces Fx and Ex .

The jump rule for a switch configuration π is now formulated as


follows.

Definition 13.2 (Jump rule). Consider the state x− of a switching


port-Hamiltonian system at a switching time where the switch con-
figuration of the system changes into π. Suppose x− is not satisfying
the algebraic constraints corresponding to π, that is
$$\frac{\partial H}{\partial x}(x^-) \notin \mathcal{C}_\pi. \qquad (13.10)$$

Then, the new state x⁺ just after the switching time satisfies

$$x^+ - x^- \in \mathcal{J}_\pi, \qquad \frac{\partial H}{\partial x}(x^+) \in \mathcal{C}_\pi. \qquad (13.11)$$
This means that at this switching time an instantaneous jump


(’state transfer’) from x− to x+ with xtransfer := x+ − x− ∈ Jπ will
take place, in such a manner that (∂H/∂x)(x⁺) ∈ Cπ.
The jump space Jπ is the space of flows in the state space X = Fx
that is compatible with zero effort ex at the energy-storing elements
and zero flows fR , fP and efforts eR , eP at the resistive elements and
external ports. Said otherwise, the jump space consists of all flow vec-
tors fx that may be added to the present flow vector corresponding to
a certain effort vector at the energy storage and certain flow and effort
vectors at the resistive elements and external ports, while remaining
in the Dirac structure Dπ , without changing these other effort and flow
vectors. Since Dπ captures the full power-conserving interconnection
structure of the system while in switch configuration π, reflecting the
underlying conservation laws of the system, the jump space Jπ thus
corresponds to a particular subset of conservation laws, and the jump
rule formulated above proclaims that the discontinuous change in the
state vector is an impulsive motion satisfying this particular set of con-
servation laws.
For physical systems one would expect that the value of the Hamil-
tonian H(x+ ) immediately after the switching time is less than or equal
to the value H(x− ) just before. This property is ensured whenever the
Hamiltonian is a convex function.
Theorem 13.2. Consider a switching port-Hamiltonian system with
H a convex function. Then for any x− and x+ satisfying the jump rule
(13.11)
H(x+ ) ≤ H(x− ). (13.12)
The proof is based on the fact that a function f : Rn → R is convex
if and only if [Rockafellar & Wets (1998)]
$$f(y) \geq f(x) + \Big\langle \frac{\partial f}{\partial x}(x) \,\Big|\, y - x \Big\rangle,$$

for all x, y. Application to H with x = x⁺ and y = x⁻ yields

$$H(x^-) \geq H(x^+) + \Big\langle \frac{\partial H}{\partial x}(x^+) \,\Big|\, x^- - x^+ \Big\rangle.$$

However, by (13.11), ⟨(∂H/∂x)(x⁺) | x⁻ − x⁺⟩ = 0, since Jπ = Cπ^⊥.
Corollary 13.3. Consider a switching port-Hamiltonian system satis-


fying the jump rule, with its Hamiltonian H being a convex function.
Then for all t2 ≥ t1
$$H(x(t_2)) \leq H(x(t_1)) + \int_{t_1}^{t_2} e_P^T(t) f_P(t)\, dt,$$
and thus the system is passive if H is nonnegative (see Chapter 7).
If the Hamiltonian H is a quadratic function H(x) = ½xᵀKx (and
thus the port-Hamiltonian system is linear), then the jump rule re-
duces to
xtransfer = x+ − x− ∈ Jπ , Kx+ ∈ Cπ . (13.13)
If K ≥ 0 then it follows from Theorem 13.2 and Corollary 13.3 that the
switching port-Hamiltonian system is passive. Furthermore, for each
x− there exists a x+ satisfying (13.13), and moreover if K > 0 this x+
(and the jump xtransfer ) is unique. Indeed, the property Jπ = Cπ⊥ implies
λT Kx = 0 for all λ ∈ Jπ and all x ∈ X with Kx ∈ Cπ , or equivalently
λT Kx = 0 for all x ∈ CπK := {x ∈ X | Kx ∈ Cπ } and all λ ∈ Jπ . Thus,
Jπ is the orthogonal complement of the subspace CπK where the inner
product on X is defined by the positive definite matrix K. Hence it
follows that the vector x+ satisfying (13.13) is unique.
The jump rule in the linear case also allows for a variational charac-
terization (see also Camlibel & Zhang (2012); Gerritsen et al. (2002)).
Theorem 13.4. Let K ≥ 0. A state x+ satisfying (13.13) is a solution
of the minimization problem (for given x− )
$$\min_{x,\ Kx \in \mathcal{C}_\pi} \ \tfrac{1}{2}(x - x^-)^T K (x - x^-), \qquad (13.14)$$

and conversely if K > 0 then the unique solution of (13.14) is the


unique solution to (13.13).
Furthermore, an application of Dorn’s duality Mangasarian (1969);
Camlibel (2001) yields (see also Camlibel (2001); Gerritsen et al.
(2002))
Theorem 13.5. Let K > 0. Then the jump λ = x+ − x− is the unique
minimum of
$$\min_{\lambda \in \mathcal{J}_\pi} \ \tfrac{1}{2}(x^- + \lambda)^T K (x^- + \lambda). \qquad (13.15)$$
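For quadratic H(x) = ½xᵀKx the jump x⁺ can be computed directly from Theorem 13.5 as a K-orthogonal projection along the jump space. The following sketch is illustrative only; the numbers, the two-capacitor interpretation, and the helper `jump` are not taken from the text.

```python
# Sketch: state jump for quadratic H(x) = 0.5 x^T K x via the minimization (13.15).
import numpy as np

def jump(K, L, x_minus):
    """x_plus = x_minus + lam*, with lam* minimizing 0.5 (x_minus + lam)^T K (x_minus + lam)
    over lam in the column space of L (a basis of the jump space J_pi)."""
    lam = -L @ np.linalg.solve(L.T @ K @ L, L.T @ K @ x_minus)
    return x_minus + lam

K = np.diag([2.0, 0.5])               # K = C^{-1} for two capacitors C1 = 0.5, C2 = 2
L = np.array([[1.0], [-1.0]])         # jump space: only an exchange of charge is allowed
x_plus = jump(K, L, np.array([1.0, 0.0]))
print(x_plus)                         # -> [0.2, 0.8]; the total charge is preserved
print(0.5 * x_plus @ K @ x_plus)      # 0.2 <= 1.0 = H(x_minus), as in Theorem 13.2
```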
13.3 Charge and flux transfer in switched RLC circuits

In this subsection we will show how the jump rule for switching
port-Hamiltonian systems, as formulated above, includes the classi-
cal charge and flux conservation principles for RLC-circuits with switches
as a special case, and in fact formalizes these principles in an insightful
way.
Consider an RLC-circuit with switches with an arbitrary topology.
It can be described as a switched port-Hamiltonian system as follows
(see also Escobar et al. (1999)). First consider the directed graph asso-
ciated with the circuit. Identify every capacitor, every inductor, every
resistor and every switch with an edge. Furthermore, associate with
every external port an edge (between the terminals of the port). De-
note the incidence matrix of this directed graph by B; cf. Section 12.1.
The incidence matrix has as many columns as there are edges, and as
many rows as there are vertices in the graph. By reordering the edges we partition the incidence matrix as B = [B_C  B_L  B_R  B_S  B_P],
where the sub-matrices BC , BL , BR , BS correspond, respectively, to
the capacitor, inductor, resistor, and switch edges, and BP corresponds
to the external ports. Then Kirchhoff’s current laws are given as

BC IC + BL IL + BR IR + BS IS + BP IP = 0, (13.16)

with IC , IL , IR , IS , IP denoting the currents through, respectively, the


capacitors, inductors, resistors, switches, and the external ports. Cor-
respondingly, Kirchhoff’s voltage laws are given as

$$V_C = B_C^T\psi, \quad V_L = B_L^T\psi, \quad V_R = B_R^T\psi, \quad V_S = B_S^T\psi, \quad V_P = B_P^T\psi, \qquad (13.17)$$

with VC , VL , VR , VS , VP denoting the voltages across the capacitors, in-


ductors, resistors, switches, and ports, respectively, and ψ being the
vector of potentials at the vertices.
Kirchhoff’s current and voltage laws define a separable Dirac


structure D on the space of flow and effort variables given as

fx = (IC , VL ),
ex = (VC , IL ),
fR = VR ,
eR = IR ,
(13.18)
fS = VS ,
eS = IS ,
fP = VP ,
eP = IP .

The constitutive relations for the energy storage are given as

$$(\dot{Q}, \dot{\Phi}) = -(I_C, V_L), \qquad (V_C, I_L) = \Big(\frac{\partial H}{\partial Q}, \frac{\partial H}{\partial \Phi}\Big), \qquad (13.19)$$

where Q is the vector of charges at the capacitors, and Φ the vector of


fluxes of the inductors. For a linear RLC-circuit

$$H(Q, \Phi) = \tfrac{1}{2}Q^T C^{-1}Q + \tfrac{1}{2}\Phi^T L^{-1}\Phi, \qquad (13.20)$$

where the diagonal elements of the diagonal matrices C and L are the
capacitances, respectively, inductances, of the capacitors and induc-
tors.
Similarly, the constitutive relations for the linear (Ohmic) resistors
are given as

VR = −RIR , (13.21)

with R denoting a diagonal matrix with diagonal elements being the


resistances of the resistors.
For every subset π ⊂ {1, · · · , s} (where s is the number of
switches) the Dirac structure Dπ is defined by the equations

$$\begin{aligned}
0 &= B_C I_C + B_L I_L + B_R I_R + B_S I_S + B_P I_P, \\
V_C &= B_C^T\psi, \quad V_L = B_L^T\psi, \quad V_R = B_R^T\psi, \quad V_S = B_S^T\psi, \quad V_P = B_P^T\psi, \\
V_S^i &= 0, \ i \in \pi, \qquad I_S^j = 0, \ j \notin \pi. \qquad (13.22)
\end{aligned}$$

Thus, all switches corresponding to the subset π are closed, while the
remaining ones are open.
The constraint subspace Cπ for each switch configuration π is given
as
$$\mathcal{C}_\pi = \{(V_C, I_L) \mid \exists I_C, V_L, V_R, I_R, V_S, I_S, V_P, I_P \ \text{such that (13.21) and (13.22) are satisfied}\}. \qquad (13.23)$$
Furthermore, the jump space Jπ is given as the set of all (IC , VL ) sat-
isfying for some ψ the equations

$$\begin{aligned}
0 &= B_C I_C + B_S I_S, \\
0 &= B_C^T\psi, \quad V_L = B_L^T\psi, \quad 0 = B_R^T\psi, \quad V_S = B_S^T\psi, \quad 0 = B_P^T\psi, \\
V_S^i &= 0, \ i \in \pi, \qquad I_S^j = 0, \ j \notin \pi. \qquad (13.24)
\end{aligned}$$

Hence the jump space can be written as the product of the space

{IC | ∃IS , ISj = 0, j 6∈ π, BC IC + BS IS = 0},

with the space

{V_L | ∃ψ such that V_L = B_L^Tψ, 0 = B_C^Tψ, 0 = B_R^Tψ, 0 = B_P^Tψ, V_S = B_S^Tψ, V_S^i = 0, i ∈ π}.


It follows that for RLC circuits with switches the jump (state trans-
fer) splits into a charge transfer Q+ − Q− = Qtransfer and a flux transfer
Φ+ − Φ− = Φtransfer . The charge transfer Qtransfer corresponding to
the switch configuration π is specified as follows. The direction of the
charge transfer Qtransfer is determined by

BC Qtransfer + BS IS = 0, ISj = 0, j 6∈ π, (13.25)

corresponding to Kirchhoff’s current laws for the circuit with switch


configuration π, where the inductors and resistors have been open-
circuited, and the currents through the external ports are all zero. This
recovers and formalizes Frasca et al. (2010) the classical charge con-
servation principle. On the other hand, the amount of charge transfer
Qtransfer is determined by

C −1 (Q− + Qtransfer ) = BCT ψ


for some ψ satisfying VS = BST ψ, VSi = 0, i ∈ π.

Furthermore, the direction of the flux transfer Φtransfer is determined


by the equations

$$0 = B_C^T\psi, \quad \Phi_{\rm transfer} = B_L^T\psi, \quad 0 = B_R^T\psi, \quad V_S = B_S^T\psi,\ V_S^i = 0,\ i \in \pi, \quad 0 = B_P^T\psi, \qquad (13.26)$$

corresponding to Kirchhoff’s voltage laws for the circuit corresponding


to the switch configuration π, where the capacitors and the resistors
have been short-circuited, and the voltages across the external ports are
all zero. This formalizes the classical flux conservation principle. On the
other hand, the amount of flux transfer is uniquely determined by the
condition
BC IC + BL L−1 (Φ− + Φtransfer )
+BR IR + BS IS + BP IP = 0,
for some IC , IR , IP , IS with ISj = 0, j 6∈ π.
Since in the case of a linear circuit the Hamiltonian H(Q, Φ) = ½QᵀC⁻¹Q + ½ΦᵀL⁻¹Φ splits as the sum of a quadratic function of the charge Q and a quadratic function of the flux Φ, the variational characterization of the jump (state transfer) rule also splits into the variational characterization of the charge transfer principle, given as the minimization of

$$\min_{Q,\ C^{-1}Q \in \mathcal{C}_\pi^V} \ \tfrac{1}{2}(Q - Q^-)^T C^{-1}(Q - Q^-), \qquad (13.27)$$

where CπV denotes the projection of the subspace Cπ on the space of


voltages VC , and the variational characterization of the flux transfer
principle, given as the minimization of
$$\min_{\Phi,\ L^{-1}\Phi \in \mathcal{C}_\pi^I} \ \tfrac{1}{2}(\Phi - \Phi^-)^T L^{-1}(\Phi - \Phi^-), \qquad (13.28)$$

where CπI denotes the projection of the subspace Cπ on the space of


currents IL .
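As a concrete instance of the charge transfer principle, consider two linear capacitors that are suddenly connected by a closing switch; the capacitances and initial charges in the sketch below are made-up values.

```python
# Sketch: classical charge transfer between two capacitors connected by a closing switch.
import numpy as np

C = np.array([1e-6, 3e-6])            # capacitances
Q_minus = np.array([5e-6, 0.0])       # charges just before switching

# Direction of the jump (13.25): the total charge is conserved.
# Amount of the jump, cf. (13.27): equal voltages after switching.
Q_plus = C * Q_minus.sum() / C.sum()

H = lambda Q: 0.5 * np.sum(Q**2 / C)
print(Q_plus / C)                     # common voltage after the switch closes
print(H(Q_minus), H(Q_plus))          # the stored energy decreases (Theorem 13.2)
```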

13.4 The jump rule for switched mechanical systems

As a second example consider mechanical systems subject to linear


damping and kinematic constraints, written in port-Hamiltonian form
as (see e.g. Chapter 2)
$$\begin{aligned}
\dot{q} &= \frac{\partial H}{\partial p}(q, p), \\
\dot{p} &= -\frac{\partial H}{\partial q}(q, p) - \bar{R}(q, p)\frac{\partial H}{\partial p}(q, p) + A(q)\lambda + B(q)F, \\
0 &= A^T(q)\frac{\partial H}{\partial p}(q, p), \\
v &= B^T(q)\frac{\partial H}{\partial p}(q, p),
\end{aligned} \qquad (13.29)$$
with q = (q1 , · · · , qn ) the vector of generalized position coordinates,
p = (p1 , · · · , pn ) the vector of generalized momenta, F ∈ Rm the
vector of external generalized forces, v ∈ Rm the vector of conju-
gated generalized velocities, and H(q, p) the total energy of the sys-
tem (which usually can be split into a kinetic and a potential energy
contribution). Furthermore, 0 = Aᵀ(q)(∂H/∂p)(q, p) = Aᵀ(q)q̇ denotes the
kinematic constraints (such as rolling without slipping) with corre-


sponding constraint forces λ ∈ Rs , where s is the number of kinematic
constraints (equal to the number of rows of the matrix AT (q)).
The damping is characterized by the n × n matrix R̄(q, p) which
is assumed to be symmetric and positive semi-definite, that is, R̄T =
R̄ ≥ 0. This implies the usual energy-balance

$$\frac{dH}{dt}(q, p) = -\frac{\partial H}{\partial p}^{T}(q, p)\,\bar{R}(q, p)\,\frac{\partial H}{\partial p}(q, p) + v^T F \leq v^T F.$$
We throughout assume that the matrix R̄(q, p) admits a factorization

R̄(q, p) = P T (q, p)RP (q, p), R = RT > 0,

for some r × n matrix P (q, p) and constant r × r matrix R.


A switching mechanical system arises if the kinematic constraints can
be turned on and off. Denoting fS := λ and replacing the kinematic
constraints in (13.29) by
$$e_S := A^T(q)\frac{\partial H}{\partial p}(q, p), \qquad (13.30)$$
this defines a switching port-Hamiltonian system as before, where
any subset π ⊂ {1, · · · , s} defines as before the switch configuration
e_S^i = 0, i ∈ π, f_S^j = 0, j ∉ π. Thus in switch configuration π each i-th
kinematic constraint, with i ∈ π, is active, while the other kinematic
constraints (corresponding to indices not in π) are inactive.
It follows that the effort constraint subspace Cπ in this case is given
as
$$\begin{aligned}
\mathcal{C}_\pi = \big\{ e_x \mid\ &\exists f_x, f_R, e_R, F, f_S \ \text{with}\ f_S^j = 0,\ j \notin \pi,\ f_R = -Re_R,\ e_R = P(q, p)\frac{\partial H}{\partial p}(q, p), \\
&-f_x = \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix} e_x + \begin{bmatrix} 0 \\ P^T(q, p)f_R + B(q)F + A(q)f_S \end{bmatrix}, \\
&e_S = A^T(q)\frac{\partial H}{\partial p}(q, p),\ e_S^i = 0,\ i \in \pi \big\}.
\end{aligned}$$
Furthermore, the jump space Jπ is given as
" #
0
Jπ = {fx | fx ∈ im }
Aπ (q)
where the matrix Aπ (q) is obtained from the matrix A(q) by leaving
out every j-th column with j 6∈ π.
Thus the jump rule in this case amounts to a jump in the momentum variables p given as

$$p_{\rm transfer} = p^+ - p^- \in \mathrm{im}\, A_\pi(q), \qquad A_\pi^T(q)\frac{\partial H}{\partial p}(q, p^+) = 0.$$
If H can be written as the sum of a kinetic and a potential energy, H(q, p) = ½pᵀM⁻¹(q)p + V(q), with M(q) > 0 denoting the generalized mass matrix, then a variational characterization of the jump rule is given by defining p⁺ to be the unique minimum of

$$\min_{p,\ A_\pi^T(q)M^{-1}(q)p = 0} \ \tfrac{1}{2}(p - p^-)^T M^{-1}(q)(p - p^-). \qquad (13.31)$$

Furthermore, since in this case the kinetic energy is a convex function


of the momenta, it follows from Theorem 13.2 and Corollary 13.3 that
the switching mechanical system is passive if the potential energy is
non-negative.
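The momentum jump (13.31) is readily computed for a concrete impact. In the sketch below a unit point mass hits the ground, activating the constraint ẏ = 0; the mass matrix, the constraint matrix and the pre-impact momentum are illustrative choices only.

```python
# Sketch: momentum jump when a kinematic constraint A_pi^T qdot = 0 becomes active.
import numpy as np

M = np.diag([1.0, 1.0])               # generalized mass matrix
A_pi = np.array([[0.0], [1.0]])       # active constraint: ydot = 0
p_minus = np.array([2.0, -3.0])       # momentum just before impact

# Minimizer of (13.31): p+ = p- + A_pi*lam, with the impulsive constraint force
# lam = -(A_pi^T M^{-1} A_pi)^{-1} A_pi^T M^{-1} p-.
Minv = np.linalg.inv(M)
lam = -np.linalg.solve(A_pi.T @ Minv @ A_pi, A_pi.T @ Minv @ p_minus)
p_plus = p_minus + A_pi @ lam

print(p_plus)                                              # -> [2., 0.]
print(0.5*p_minus@Minv@p_minus, 0.5*p_plus@Minv@p_plus)    # kinetic energy drops
```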
14
Distributed-parameter systems

The aim of this chapter is to introduce the main concepts behind


the extension of finite-dimensional port-Hamiltonian systems of the
previous chapters to distributed-parameter systems. Dynamic mod-
els of distributed-parameter systems are defined by considering not
only the time but also the space as independent parameters on which
the physical quantities are defined. They allow to model objects such
as vibrating strings or plates, transmission lines, or electromagnetic
fields and mass and heat transfer phenomena. A port-Hamiltonian
formulation of classes of distributed-parameter systems is presented,
which incorporates the energy flow through the boundary of the spa-
tial domain of the system. Instrumental for its construction is the no-
tion of an infinite-dimensional Dirac structure associated with the ex-
terior derivative and based on Stokes’ theorem. The theory is exempli-
fied using the telegrapher’s equations for an ideal transmission line,
Maxwell’s equations on a bounded domain with non-zero Poynting
vector at its boundary, and a vibrating string with traction forces at
its ends. Finally, some properties of the Stokes-Dirac structure are
reviewed, including the analysis of conservation laws. For further
details we refer to van der Schaft & Maschke (2002), on which this


chapter is largely based. A detailed treatment of linear distributed-


parameter port-Hamiltonian systems on a one-dimensional spatial
domain, including well-posedness and stability theory, can be found in
Jacob & Zwart (2012).

14.1 The Stokes-Dirac structure

We start by introducing the underlying geometric framework for the


port-Hamiltonian formulation of distributed-parameter systems on
a bounded spatial domain, with non-zero energy flow through the
boundary. The key concept is the introduction of a special type of
Dirac structure on suitable spaces of differential forms on the spatial
domain and its boundary, making use of Stokes’ theorem. Through-
out, let Z be an n-dimensional smooth manifold with smooth (n − 1)-
dimensional boundary ∂Z, representing the space of spatial variables.
Denote by Ωk (Z), for k = 0, 1, · · · , n, the space of differential k-
forms on Z, and by Ωk (∂Z), for k = 0, 1, · · · , n − 1, the space of k-
forms on ∂Z.1 Clearly, Ωk (Z) and Ωk (∂Z) are (infinite-dimensional)
linear spaces (over R). Furthermore, there is a natural pairing between
Ωk (Z) and Ωn−k (Z) given by
$$\langle \beta \,|\, \alpha \rangle := \int_Z \beta \wedge \alpha \quad (\in \mathbb{R}), \qquad (14.1)$$

with α ∈ Ωk (Z), β ∈ Ωn−k (Z), where ∧ is the usual wedge product of


differential forms yielding the n-form β ∧ α. In fact, the pairing (14.1)
is non-degenerate in the sense that if < β|α >= 0 for all α, respectively
for all β, then β = 0, respectively α = 0.
Similarly, there is a pairing between Ωk (∂Z) and Ωn−1−k (∂Z)
given by

$$\langle \beta \,|\, \alpha \rangle := \int_{\partial Z} \beta \wedge \alpha, \qquad (14.2)$$

with α ∈ Ωk (∂Z), β ∈ Ωn−1−k (∂Z). Now, let us define the linear space

Fp,q := Ωp (Z) × Ωq (Z) × Ωn−p (∂Z), (14.3)


1
Note that Ω0 (Z) and Ω0 (∂Z) are the spaces of smooth functions on Z and ∂Z,
respectively.
for any pair p, q of positive integers satisfying

p + q = n + 1, (14.4)

and, correspondingly, let us define

Ep,q := Ωn−p (Z) × Ωn−q (Z) × Ωn−q (∂Z). (14.5)

Then, the pairing (14.1) and (14.2) yields a (non-degenerate) pairing


between Fp,q and Ep,q . As for finite-dimensional systems, symmetriza-
tion of this pairing yields the bilinear form on Fp,q × Ep,q , with values
in ℝ

$$\begin{aligned}
\ll (f_p^1, f_q^1, f_b^1, e_p^1, e_q^1, e_b^1),\ (f_p^2, f_q^2, f_b^2, e_p^2, e_q^2, e_b^2) \gg \ := \ &\int_Z \big[ e_p^1 \wedge f_p^2 + e_q^1 \wedge f_q^2 + e_p^2 \wedge f_p^1 + e_q^2 \wedge f_q^1 \big] \\
&+ \int_{\partial Z} \big[ e_b^1 \wedge f_b^2 + e_b^2 \wedge f_b^1 \big], \qquad (14.6)
\end{aligned}$$

where for i = 1, 2,

$$f_p^i \in \Omega^p(Z), \quad f_q^i \in \Omega^q(Z), \quad e_p^i \in \Omega^{n-p}(Z), \quad e_q^i \in \Omega^{n-q}(Z), \quad f_b^i \in \Omega^{n-p}(\partial Z), \quad e_b^i \in \Omega^{n-q}(\partial Z).$$

The spaces of differential forms Ωp (Z) and Ωq (Z) represent the en-
ergy variables of two different physical energy domains interacting
with each other, while Ωn−p (∂Z) and Ωn−q (∂Z) will denote the bound-
ary variables whose (wedge) product represents the boundary energy
flow. The following theorem is proved in van der Schaft & Maschke
(2002), based on ’integration by parts’ and Stokes’ theorem. Recall that
d denotes the exterior derivative, mapping k-forms into k + 1-forms
(and generalizing in R3 the vector calculus operations grad, curl, div).

Theorem 14.1. Consider Fp,q and Ep,q given in (14.3) and (14.5), re-
spectively, with p, q satisfying (14.4), and bilinear form ≪, ≫ given by
(14.6). Let (·)|∂Z denote the restriction to the boundary ∂Z, then the
linear subspace

$$\begin{aligned}
\mathcal{D} = \Big\{ (f_p, f_q, f_b, e_p, e_q, e_b) \in \mathcal{F}_{p,q} \times \mathcal{E}_{p,q} \ \Big|\
&\begin{bmatrix} f_p \\ f_q \end{bmatrix} = \begin{bmatrix} 0 & (-1)^{pq+1} d \\ d & 0 \end{bmatrix}\begin{bmatrix} e_p \\ e_q \end{bmatrix}, \\
&\begin{bmatrix} f_b \\ e_b \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & -(-1)^{n-q} \end{bmatrix}\begin{bmatrix} e_{p|\partial Z} \\ e_{q|\partial Z} \end{bmatrix} \Big\} \qquad (14.7)
\end{aligned}$$

is a Dirac structure.
The subspace (14.7) is called a Stokes-Dirac structure.

14.2 Distributed-parameter port-Hamiltonian systems

The definition of a distributed-parameter Hamiltonian system with re-


spect to a Stokes-Dirac structure can now be stated as follows. Let Z be
an n-dimensional manifold with boundary ∂Z, and let D be a Stokes-
Dirac structure as in Theorem 14.1. Consider furthermore a Hamilto-
nian density (energy per volume element)

$$\mathcal{H} : \Omega^p(Z) \times \Omega^q(Z) \times Z \to \Omega^n(Z),$$

resulting in the total energy

$$H := \int_Z \mathcal{H} \ \in \mathbb{R}.$$
From (14.1), we know that there exists a non-degenerate pairing be-
tween Ωp (Z) and Ωn−p (Z), respectively between Ωq (Z) and Ωn−q (Z).
This means that Ωn−p (Z) and Ωn−q (Z) can be regarded as dual spaces
to Ωp (Z), respectively Ωq (Z) (although strictly contained in their func-
tional analytic duals). Let αp , ∂αp ∈ Ωp (Z) and αq , ∂αq ∈ Ωq (Z). Then,
under weak smoothness conditions on H, we have
$$\begin{aligned}
H(\alpha_p + \partial\alpha_p, \alpha_q + \partial\alpha_q) &= \int_Z \mathcal{H}(\alpha_p + \partial\alpha_p, \alpha_q + \partial\alpha_q, z) \\
&= \int_Z \mathcal{H}(\alpha_p, \alpha_q, z) + \int_Z \big[\delta_p H \wedge \partial\alpha_p + \delta_q H \wedge \partial\alpha_q\big] \\
&\quad + \text{higher order terms in } \partial\alpha_p, \partial\alpha_q, \qquad (14.8)
\end{aligned}$$


for certain differential forms


δp H ∈ Ωn−p (Z),
δq H ∈ Ωn−q (Z).
Furthermore, from the non-degeneracy of the pairing between Ωp (Z)
and Ωn−p (Z), respectively between Ωq (Z) and Ωn−q (Z), it immedi-
ately follows that these differential forms are uniquely determined.
This means that (δp H, δq H) ∈ Ωn−p (Z) × Ωn−q (Z) can be regarded
as the (partial) variational derivatives (see e.g. Olver (1993)) of H at
(αp , αq ) ∈ Ωp (Z) × Ωq (Z). Throughout this chapter we assume that
the Hamiltonian H admits variational derivatives satisfying (14.8).
Now, consider a time-function
(αp (t), αq (t)) ∈ Ωp (Z) × Ωq (Z), t ∈ R,
and the Hamiltonian H(αp (t), αq (t)) evaluated along this trajectory. It
follows that at any time t
$$\frac{dH}{dt} = \int_Z \Big[ \delta_p H \wedge \frac{\partial \alpha_p}{\partial t} + \delta_q H \wedge \frac{\partial \alpha_q}{\partial t} \Big]. \qquad (14.9)$$
The differential forms ∂αp/∂t, ∂αq/∂t represent the generalized velocities
of the energy variables αp, αq. In a similar fashion as for finite-
of the energy variables αp , αq . In a similar fashion as for finite-
dimensional systems, we set
$$f_p = -\frac{\partial \alpha_p}{\partial t}, \quad e_p = \delta_p H, \qquad f_q = -\frac{\partial \alpha_q}{\partial t}, \quad e_q = \delta_q H, \qquad (14.10)$$
where, as before, the minus sign is included to have a consistent en-
ergy flow description.
Definition 14.1. The distributed-parameter port-Hamiltonian sys-
tem with n-dimensional manifold of spatial variables Z, state-space
Ωp (Z) × Ωq (Z) (with p + q = n + 1), Stokes-Dirac structure D given by
(14.7), and Hamiltonian H, is given as
" # " #" #
∂ αp 0 (−1)r d δp H
− = ,
∂t αq d 0 δq H
" # " #" # (14.11)
fb 1 0 δp H|∂Z
= ,
eb 0 −(−1)n−q δq H|∂Z
with r = pq + 1.

By the power-preserving property of any Dirac structure, it im-


mediately follows that for any (fp , fq , fb , ep , eq , eb ) in the Stokes-Dirac
structure D
$$\int_Z \big[e_p \wedge f_p + e_q \wedge f_q\big] + \int_{\partial Z} e_b \wedge f_b = 0.$$

Hence, by substitution of (14.10) and using (14.9), we obtain the fol-


lowing proposition.

Proposition 14.1. Consider the port-Hamiltonian system (14.11).


Then, the associated power-balance satisfies
$$\frac{dH}{dt} = \int_{\partial Z} e_b \wedge f_b, \qquad (14.12)$$

expressing that the rate of change of energy on the domain Z is equal


to the power supplied to the system through the boundary ∂Z, i.e.,

The system (14.11) represents a (nonlinear) boundary control sys-


tem in the sense of e.g. Fattorini (1968). Indeed, we could interpret fb
as the boundary control inputs to the system, and eb as the measured
outputs (or the other way around). This feature is illustrated in the
following examples.

Example 14.1 (Telegrapher’s equations). Consider a lossless trans-


mission line with spatial domain Z = [0, 1] ⊂ R. The energy variables
are the charge density 1-form Q = Q(t, z)dz ∈ Ω1 ([0, 1]), and the flux
density 1-form ϕ = ϕ(t, z)dz ∈ Ω1 ([0, 1]); thus p = q = n = 1. The total
energy stored at time t in the transmission line is given as
" #
1 Q2 (t, z) ϕ2 (t, z)
Z 1
H(Q, ϕ) = + dz,
0 2 C(z) L(z)

with

$$\begin{bmatrix} \delta_Q H \\ \delta_\varphi H \end{bmatrix} = \begin{bmatrix} \dfrac{Q(t, z)}{C(z)} \\[6pt] \dfrac{\varphi(t, z)}{L(z)} \end{bmatrix} = \begin{bmatrix} V(t, z) \\ I(t, z) \end{bmatrix},$$
where C(z) and L(z) are the distributed capacitance and distributed
inductance of the line, respectively, whereas V (t, z) and I(t, z) rep-
resent the corresponding voltage and current. The resulting port-
Hamiltonian system is given by

$$-\frac{\partial}{\partial t}\begin{bmatrix} Q(t, z) \\ \varphi(t, z) \end{bmatrix} = \begin{bmatrix} 0 & \frac{\partial}{\partial z} \\ \frac{\partial}{\partial z} & 0 \end{bmatrix}\begin{bmatrix} V(t, z) \\ I(t, z) \end{bmatrix},$$

which represents the well-known telegrapher's equations, together with the boundary variables

$$\begin{bmatrix} f_b^0(t) \\ f_b^1(t) \end{bmatrix} = \begin{bmatrix} V(t, 0) \\ V(t, 1) \end{bmatrix}, \qquad \begin{bmatrix} e_b^0(t) \\ e_b^1(t) \end{bmatrix} = -\begin{bmatrix} I(t, 0) \\ I(t, 1) \end{bmatrix}.$$

The associated power-balance reads


$$\frac{dH}{dt} = \int_{\partial Z} e_b \wedge f_b = I(t, 0)V(t, 0) - I(t, 1)V(t, 1), \qquad (14.13)$$

which is in accordance with (14.12).
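The boundary power balance (14.13) can also be observed on a lumped approximation of the line. The sketch below uses an LC-ladder with N cells, driven by a boundary current at z = 0 and open-circuited at z = 1; the discretization, the parameter values, and the symplectic-Euler time stepping are illustrative choices, not a scheme taken from the text.

```python
# Sketch: lumped LC-ladder approximation of the lossless line; checks dH/dt ~ I(t,0)V(t,0).
import numpy as np

N, Cd, Ld, dt, T = 50, 1.0, 1.0, 1e-3, 5.0
Q, Phi = np.zeros(N), np.zeros(N - 1)           # cell charges and inter-cell fluxes
I_in = lambda t: np.sin(2*np.pi*t)              # boundary current injected at the left end

H = lambda Q, Phi: 0.5*np.sum(Q**2/Cd) + 0.5*np.sum(Phi**2/Ld)
H0, supplied = H(Q, Phi), 0.0

for k in range(int(T/dt)):
    t = k*dt
    I = Phi/Ld
    supplied += dt * I_in(t) * Q[0]/Cd          # boundary power I(t,0) V(t,0); right end open
    Q = Q + dt*(np.concatenate(([I_in(t)], I)) - np.concatenate((I, [0.0])))
    V = Q/Cd
    Phi = Phi + dt*(V[:-1] - V[1:])

print(H(Q, Phi) - H0, supplied)                 # approximately equal, cf. (14.13)
```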

Similar equations as the telegrapher’s equations hold for a vi-


brating string van der Schaft & Maschke (2002), or for a compressible
gas/fluid in a one-dimensional pipe.

Example 14.2 (Shallow water equations). The dynamical behavior of


water in an open canal with spatial domain Z = [a, b] ⊂ R can be
described by
" # " # " #
∂ h v h ∂ h
+ ,
∂t v g v ∂z v
with h(t, z) the height of the water at position z at time t, v(z, t) the
corresponding velocity, and g the gravitational constant. These shal-
low water equations can be written as a port-Hamiltonian system by
recognizing the internally stored energy
$$H(h, v) = \int_a^b \tfrac{1}{2}\big[hv^2 + gh^2\big]\, dz,$$
yielding
$$e_h = \delta_h H = \tfrac{1}{2}v^2 + gh \quad \text{(Bernoulli function)}, \qquad e_v = \delta_v H = hv \quad \text{(mass flow)}.$$

Hence, in a similar fashion as the telegrapher's equations, we obtain

$$-\frac{\partial}{\partial t}\begin{bmatrix} h(t, z) \\ v(t, z) \end{bmatrix} = \begin{bmatrix} 0 & \frac{\partial}{\partial z} \\ \frac{\partial}{\partial z} & 0 \end{bmatrix}\begin{bmatrix} \delta_h H \\ \delta_v H \end{bmatrix},$$

with boundary variables −hv|_{[a,b]} and (½v² + gh)|_{[a,b]}. The associated
power-balance is obtained by taking the time-derivative of H, i.e.,
$$\begin{aligned}
\frac{d}{dt}\int_a^b \tfrac{1}{2}\big[hv^2 + gh^2\big]\, dz &= -\Big[hv\Big(\tfrac{1}{2}v^2 + gh\Big)\Big]_{[a,b]} \\
&= -\Big[v\,\tfrac{1}{2}gh^2\Big]_{[a,b]} - \Big[v\Big(\tfrac{1}{2}hv^2 + \tfrac{1}{2}gh^2\Big)\Big]_{[a,b]},
\end{aligned}$$

which expresses that the power flow through the boundary of the
channel equals velocity × pressure + energy flux through the bound-
ary.

14.3 Presence of sources and dissipation

Energy exchange through the boundary is not the only possible way a
distributed-parameter system may interact with its environment. An
example of this is provided by Maxwell’s equations (Example 14.3),
where interaction may also take place via the current density J, which
directly affects the electric charge distribution in the domain Z. In or-
der to cope with this situation, we augment the spaces Fp,q and Ep,q as
defined in (14.3) and (14.5), respectively, to
$$\mathcal{F}_{p,q}^s := \mathcal{F}_{p,q} \times \Omega^s(S), \qquad \mathcal{E}_{p,q}^s := \mathcal{E}_{p,q} \times \Omega^{n-s}(S), \qquad (14.14)$$

for some m-dimensional manifold S and some s ∈ {0, 1, · · · , m}, with


fs ∈ Ωs (S) denoting the externally supplied distributed control flow,
and es ∈ Ωn−s (S) the conjugate distributed quantity, corresponding


to an energy exchange
$$\int_S e_s \wedge f_s. \qquad (14.15)$$
The Stokes-Dirac structure (14.7) is then extended to
" # " #" #
fp 0 (−1)r d ep
= + G(fs ),
fq d 0 eq
" # " #" #
fb 1 0 ep|∂Z
= , (14.16)
eb 0 −(−1)n−q eq|∂Z
" #
∗ ep
es = −G ,
eq

with G denoting a linear map


" #
Gp
G= : Ωs (S) → Ωp (Z) × Ωq (Z), (14.17)
Gq

with dual map (again we consider Ωn−p (Z) and Ωn−q (Z) as dual
spaces to Ωp (Z) and Ωq (Z), respectively)

G∗ = (G∗p , G∗q ) : Ωn−p (Z) × Ωn−q (Z) → Ωn−s (S),

satisfying
$$\int_Z \big[e_p \wedge G_p(f_s) + e_q \wedge G_q(f_s)\big] = \int_S \big[G_p^*(e_p) + G_q^*(e_q)\big] \wedge f_s,$$

for all ep ∈ Ωn−p (Z), eq ∈ Ωn−q (Z), and fs ∈ Ωs (S).

Proposition 14.2. Equations (14.16) determine a Dirac structure

$$\mathcal{D}^s \subset \mathcal{F}_{p,q}^s \times \mathcal{E}_{p,q}^s,$$

with respect to the augmented bilinear form on F^s_{p,q} × E^s_{p,q}, which is obtained by adding to the bilinear form (14.6) the term

$$\int_S \big[e_s^1 \wedge f_s^2 + e_s^2 \wedge f_s^1\big]. \qquad (14.18)$$
Then, substitution of (14.10) into D s given by (14.16) yields a port-


Hamiltonian system with external variables (fb , fs , eb , es ), with (fb , eb )
the boundary external variables and (fs , es ) the distributed external vari-
ables. Furthermore, the power-balance (14.12) extends to
$$\frac{dH}{dt} = \int_{\partial Z} e_b \wedge f_b + \int_S e_s \wedge f_s, \qquad (14.19)$$
with the first term on the right-hand side denoting the power flow
through the boundary, and the second term denoting the distributed
power flow. We conclude this section with the following example.
Example 14.3 (Maxwell’s equations). Let Z ⊂ R3 be a 3-dimensional
manifold with boundary ∂Z, defining the spatial domain, and con-
sider an electromagnetic field in Z. The energy variables are the mag-
netic field induction 2-form αq = B ∈ Ω2 (Z) :
$$B = \tfrac{1}{2}B_{ij}(t, z)\, dz^i \wedge dz^j,$$

and the electric field induction 2-form αp = D ∈ Ω²(Z):

$$D = \tfrac{1}{2}D_{ij}(t, z)\, dz^i \wedge dz^j.$$
Furthermore, the associated magnetic and electric field intensities are
given by H = Hi (t, z)dz i ∈ Ω1 (Z) and E = Ei (t, z)dz i ∈ Ω1 (Z), respec-
tively. These 1-forms are related to the energy variables through the
constitutive relations of the medium (or material equations) ⋆B = µH
and ⋆D = ǫE, with the scalar functions µ(t, z) and ǫ(t, z) denoting
the magnetic permeability and electric permittivity, respectively, and
⋆ denoting the Hodge star operator (corresponding to a Riemannian
metric on Z), converting 2-forms into 1-forms.
Then, the Hamiltonian H is defined as
$$H = \int_Z \tfrac{1}{2}\,(E \wedge D + H \wedge B),$$
where one readily verifies that δp H = E, δq H = H, and the corre-
sponding Stokes-Dirac structure (n = 3, p = 2, q = 2) takes the form
" # " #" # " # " #" #
fp 0 −d ep fb 1 0 ep|∂Z
= , = . (14.20)
fq d 0 eq eb 0 1 eq|∂Z
Under the assumption that the current density J in the medium


is zero, and explicitly taking into account the behavior at the bound-
ary, Maxwell’s equations are then represented as the port-Hamiltonian
system with respect to the Stokes-Dirac structure given by (14.20), as
" # " #" #
∂ D 0 −d δD H
− = , (14.21)
∂t B d 0 δB H
" # " #
fb δ H|
= D ∂Z , (14.22)
eb δB H|∂Z
and the power-balance (14.19) takes the form
Z Z Z
dH
= δB H ∧ δD H = H∧E =− E ∧ H,
dt ∂Z ∂Z ∂Z
with E ∧ H a 2-form corresponding to the Poynting vector (see
Ingarden & Jamiolkowski (1985)).
In the case of a non-zero current density, we have to modify (14.21)
to

$$-\frac{\partial}{\partial t}\begin{bmatrix} D \\ B \end{bmatrix} = \begin{bmatrix} 0 & -d \\ d & 0 \end{bmatrix}\begin{bmatrix} \delta_D H \\ \delta_B H \end{bmatrix} + \begin{bmatrix} I \\ 0 \end{bmatrix} J, \qquad (14.23)$$

with I denoting the identity operator from J ∈ Ω²(Z) to Ω²(Z). (Thus, in the notation of (14.17), fs = J, S = Z, and Ω^s(S) = Ω²(Z).) Furthermore, we add the equation

$$e_s = -\begin{bmatrix} I & 0 \end{bmatrix}\begin{bmatrix} \delta_D H \\ \delta_B H \end{bmatrix} = -E, \qquad (14.24)$$
yielding the augmented power-balance
$$\frac{dH}{dt} = -\int_{\partial Z} E \wedge H - \int_Z E \wedge J,$$
which is known as Poynting’s theorem.
Energy dissipation can be incorporated in the framework
of distributed-parameter port-Hamiltonian systems by terminating
some of the ports (boundary or distributed) with a resistive relation.
For example, for distributed dissipation, let R : Ωn−s (S) → Ωs (S) be
a map satisfying
$$\int_S e_s \wedge R(e_s) \geq 0, \qquad \forall e_s \in \Omega^{n-s}(S).$$
Then, by adding the relation

fs = −R(es )

to the port-Hamiltonian system defined with respect to the Dirac


structure D s , we obtain a port-Hamiltonian system with dissipation,
satisfying the differential dissipation inequality
$$\frac{dH}{dt} = \int_{\partial Z} e_b \wedge f_b - \int_S e_s \wedge R(e_s) \leq \int_{\partial Z} e_b \wedge f_b.$$

Example 14.4 (Maxwell’s equations (cont’d)). In order to incorporate


energy dissipation into the Maxwell equations (14.23), we decompose
the current density into J = Js + J̄, and impose Ohm’s law

⋆Js = σE,

with σ(t, z) the specific conductivity of the medium.

14.4 Conservation laws

In Chapter 8, we introduced the notion of a conserved quantity—


independent of the Hamiltonian—called a Casimir function. In this
section, this notion is extended to distributed-parameter systems, and
for non-zero boundary conditions gives rise to certain conservation
laws. For, consider the distributed-parameter port-Hamiltonian sys-
tem as defined by (14.11). Conserved quantities that are independent
from the Hamiltonian H are obtained as follows. Let

C : Ωp (Z) × Ωq (Z) × Z → R (14.25)

be a function satisfying

d(δp C) = 0,
(14.26)
d(δq C) = 0.
Then, the time-derivative of C along the trajectories of (14.11) is given


by
$$\begin{aligned}
\frac{dC}{dt} &= \int_Z \delta_p C \wedge \frac{\partial \alpha_p}{\partial t} + \int_Z \delta_q C \wedge \frac{\partial \alpha_q}{\partial t} \\
&= -\int_Z \delta_p C \wedge (-1)^r d(\delta_q H) - \int_Z \delta_q C \wedge d(\delta_p H) \\
&= -(-1)^{n-q}\int_Z d(\delta_q H \wedge \delta_p C) - (-1)^{n-q}\int_Z d(\delta_q C \wedge \delta_p H) \\
&= \int_{\partial Z} e_b \wedge f_b^C + \int_{\partial Z} e_b^C \wedge f_b,
\end{aligned}$$

where we have denoted, in analogy with (14.7),

$$f_b^C := \delta_p C_{|\partial Z}, \qquad e_b^C := -(-1)^{n-q}\,\delta_q C_{|\partial Z}.$$

In particular, if in addition to (14.26), the function C satisfies

δp C|∂Z = 0,
(14.27)
δq C|∂Z = 0,

then dC/dt = 0 along the system trajectories of (14.11) for any Hamilto-
nian H. Therefore, a function C satisfying (14.26) and (14.27) is called
a Casimir function. If C satisfies (14.26), but not (14.27), then the time-
derivative of C is determined by the boundary conditions of (14.11),
and therefore is called a conservation law for (14.11).

Example 14.5 (Telegrapher’s equations (cont’d)). In the case of the


telegrapher’s equations, the total charge
Z 1
CQ = Q(t, z)dz,
0

is a conservation law. Indeed


$$\frac{dC_Q}{dt} = -\int_0^1 \frac{\partial I}{\partial z}(t, z)\, dz = I(t, 0) - I(t, 1).$$
Similarly, differentiating the total magnetic flux
Z 1
Cϕ = ϕ(t, z)dz
0
with respect to time yields


$$\frac{dC_\varphi}{dt} = -\int_0^1 \frac{\partial V}{\partial z}(t, z)\, dz = V(t, 0) - V(t, 1).$$
For a further discussion on Casimir functions and conserva-
tion laws, also for the extended Dirac structure (14.16), we refer to
van der Schaft & Maschke (2002).

14.5 Covariant formulation of port-Hamiltonian systems

A covariant formulation that is well-known for Maxwell’s equations


(see Ingarden & Jamiolkowski (1985)) can be generalized to general
distributed-parameter port-Hamiltonian systems (14.11), defined with
respect to a general Stokes-Dirac structure D. Define on Z × R, with
space-time coordinates (z, t), the following p-form and q-form

γp := αp + (−1)r δq H ∧ dt,
γq := αq + δp H ∧ dt,

respectively. Then the first part of the equations (14.11) can be equiva-
lently stated as
$$L_{\frac{\partial}{\partial t}}\,\bar{d}\gamma_p = 0, \qquad L_{\frac{\partial}{\partial t}}\,\bar{d}\gamma_q = 0, \qquad (14.28)$$

with d̄ denoting the exterior derivative with respect to space-time (z, t). Indeed, (14.28) means that d̄γ_p and d̄γ_q do not depend on t, i.e.,

$$\bar{d}\gamma_p = \beta_p, \qquad \bar{d}\gamma_q = \beta_q, \qquad (14.29)$$
for certain (p + 1)− and (q + 1)−forms βp , respectively βq , not depend-


ing on t. Writing out (14.29) yields

$$d\alpha_p + \frac{\partial \alpha_p}{\partial t}\wedge dt + (-1)^r d(\delta_q H)\wedge dt = \beta_p, \qquad d\alpha_q + \frac{\partial \alpha_q}{\partial t}\wedge dt + d(\delta_p H)\wedge dt = \beta_q, \qquad (14.30)$$
with d denoting the exterior derivative with respect to the spatial vari-
ables z, resulting in the equations of a port-Hamiltonian system (14.11)

$$-\frac{\partial \alpha_p}{\partial t} = (-1)^r d(\delta_q H), \qquad -\frac{\partial \alpha_q}{\partial t} = d(\delta_p H), \qquad (14.31)$$
together with the conserved quantities (cf. Chapter 8) dαp = βp
and dαq = βq . Furthermore, the boundary variables of the port-
Hamiltonian system (14.11) can be re-formulated as
 

$$\Big( i_{\frac{\partial}{\partial t}}\,\gamma_q \Big)\Big|_{\partial Z} = f_b, \qquad \Big( i_{\frac{\partial}{\partial t}}\,\gamma_p \Big)\Big|_{\partial Z} = (-1)^q e_b.$$
15
Control of port-Hamiltonian systems

In the previous chapters, we have witnessed that one of the advan-


tages of the port-Hamiltonian (pH) framework is that the Hamiltonian
can be used as a basis to construct a candidate Lyapunov function,
thus providing insight into various system properties like stability,
passivity, finite L2 gain, etc.. Another key feature of pH systems is that
a power-preserving interconnection of pH systems results in another
pH system, with total Hamiltonian being the sum of the Hamiltonian
functions and with a Dirac structure defined by the composition of
the Dirac structures of the subsystems. These features have led to a
research focus on the control of port-Hamiltonian systems.

15.1 Control by interconnection

Consider a plant pH system, with state space Xp , Hamiltonian Hp :


Xp → R, resistive (energy-dissipating) port (fR , eR ), and external port
(fP , eP ), and a controller pH system, with state space Xc , Hamiltonian
Hc : Xc → R, resistive port (f¯R , ēR ), and external port (f¯P , ēP ). From
Chapter 6, we know that the composition of Dirac structures via a
power-preserving interconnection is again a Dirac structure. Indeed,


any interconnection between the plant and the controller through their
respective external ports satisfying the power-preserving property

eTP fP + ēTP f¯P = 0, (15.1)

results in a closed-loop pH system, with state space Xp × Xc , Hamilto-


nian Hp + Hc , resistive port ((fR , eR ), (f¯R , ēR )), satisfying the power-
balance Ḣp + Ḣc = eTR fR + ēTR f¯R . Using standard Lyapunov stability
theory, we immediately infer that since both eTR fR ≤ 0 and ēTR f¯R ≤ 0,
and (x∗p , x∗c ) ∈ Xp × Xc is a minimum of Hp + Hc , then (x∗p , x∗c ) will be
a stable equilibrium of the closed-loop system. Moreover, the equilib-
rium (x∗p , x∗c ) is asymptotically stable under an additional zero-state
detectability condition on the plant and/or controller system.

Example 15.1. Consider a controlled version of the spinning rigid


body of Example 3.1:

$$\begin{bmatrix} \dot{p}_x \\ \dot{p}_y \\ \dot{p}_z \end{bmatrix} = \begin{bmatrix} 0 & -p_z & p_y \\ p_z & 0 & -p_x \\ -p_y & p_x & 0 \end{bmatrix}\begin{bmatrix} \partial H/\partial p_x \\ \partial H/\partial p_y \\ \partial H/\partial p_z \end{bmatrix} + \begin{bmatrix} g_x \\ g_y \\ g_z \end{bmatrix} u,$$

with control u and natural output


$$y = g_x\frac{\partial H}{\partial p_x} + g_y\frac{\partial H}{\partial p_y} + g_z\frac{\partial H}{\partial p_z}.$$

Since for u = 0, Ḣ(p) = 0 and H(p) has its minimum at p = 0,
the equilibrium point p*_k = 0, for k ∈ {x, y, z}, is stable. To render the
closed-loop asymptotically stable, we apply the feedback
$$u = -y = -\sum_k g_k\frac{p_k}{I_k},$$

yielding convergence to the largest invariant set contained in


$$\mathcal{O} := \big\{ p \in \mathbb{R}^3 \mid \dot{H}(p) = 0 \big\} = \Big\{ p \in \mathbb{R}^3 \ \Big|\ \sum_k g_k\frac{p_k}{I_k} = 0 \Big\},$$

which actually is p = 0 if, and only if, gk ≠ 0 for all k ∈ {x, y, z}.
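The effect of the damping-injection feedback can be simulated directly; the inertia values, the input vector g = (1, 1, 1) and the initial condition in the sketch below are arbitrary illustrative choices.

```python
# Sketch: spinning rigid body with u = -y; the total energy decays along trajectories.
import numpy as np
from scipy.integrate import solve_ivp

I_ = np.array([1.0, 2.0, 3.0])        # principal moments of inertia
g  = np.array([1.0, 1.0, 1.0])        # all g_k nonzero, so the invariant set is p = 0

def f(t, p):
    w = p / I_                        # dH/dp (angular velocity)
    return np.cross(p, w) - g * (g @ w)   # pdot = p x dH/dp + g u, with u = -y = -g^T dH/dp

sol = solve_ivp(f, [0, 50], [1.0, -2.0, 1.5], rtol=1e-8)
H = 0.5 * np.sum(sol.y**2 / I_[:, None], axis=0)
print(H[0], H[-1])                    # H decreases towards zero (asymptotic stability)
```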
The above example shows that a pH system having its energetic


minimum at the origin can be asymptotically stabilized by intercon-
necting it with a static controller pH system (i.e., Hc = 0), which, in
general terms, has the form ēP = Kd f¯P , with Kd = KdT a positive
definite damping injection matrix, through a power-conserving inter-
connection fP = −ēP and eP = f¯P . Note that in the example, we set
Kd = 1, and fP = u, eP = y, f¯P = ū, and ēP = ȳ. Asymptotic stabil-
ity of the origin can be inferred provided a detectability condition is
satisfied.
Another application of control by interconnection is energy trans-
fer control.

15.2 Energy transfer control

Consider two pH systems Σi (without internal dissipation) in input-


state-output form

$$\Sigma_i : \begin{cases} \dot{x}_i = J_i(x_i)\,\dfrac{\partial H_i}{\partial x_i}(x_i) + g_i(x_i)u_i, \\[6pt] y_i = g_i^T(x_i)\,\dfrac{\partial H_i}{\partial x_i}(x_i), \end{cases} \qquad i = 1, 2,$$
both satisfying the power-balance Ḣi (xi ) = yiT ui . Suppose now that
we want to transfer the energy from the port-Hamiltonian system Σ1
to the port-Hamiltonian system Σ2 , while keeping the total energy
H1 + H2 constant. This can be done by using the following output
feedback

$$\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 0 & -y_1 y_2^T \\ y_2 y_1^T & 0 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \end{bmatrix}, \qquad (15.2)$$
which, due to the skew-symmetry property of the interconnection ma-
trix, clearly satisfies (15.1). Hence, the closed-loop system composed
of Σ1 and Σ2 is energy-preserving, that is Ḣ1 + Ḣ2 = 0. However, if
we consider the individual energies then we notice that
Ḣ1 (x) = −y1T y1 y2T y2 = −||y1 ||2 ||y2 ||2 ≤ 0, (15.3)
implying that H1 is decreasing as long as ||y1 || and ||y2 || are different
from 0. Conversely, as expected since the total energy is constant,
Ḣ2 (x) = y2T y2 y1T y1 = ||y2 ||2 ||y1 ||2 ≥ 0, (15.4)
implying that H2 is increasing at the same rate. In particular, if H1 has


a minimum at the zero equilibrium, and Σ1 is zero-state detectable,
then all the energy H1 of Σ1 will be transferred to Σ2 , provided that
||y2 || is not identically zero.
If there is internal energy dissipation, then this energy transfer
mechanism still works. However, the fact that H2 grows or not will
depend on the balance between the energy delivered by Σ1 to Σ2
and the internal loss of energy in Σ2 due to dissipation. We con-
clude that this particular scheme of power-conserving energy trans-
fer is accomplished by a skew-symmetric output feedback, which is
modulated by the values of the output vectors of both systems. Of
course this raises, among others, the question of the efficiency of
the proposed energy-transfer scheme, and the need for a systematic
quest of similar power-conserving energy-transfer schemes. We refer
to Duindam et al. (2004) for a similar energy-transfer scheme directly
motivated by the structure of the example (control of a snakeboard).
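The energy-transfer feedback (15.2) can be tried out on two simple lossless oscillators; the unit parameters, initial conditions and time horizon below are illustrative only.

```python
# Sketch: power-conserving energy transfer (15.2) between two unit mass-spring systems.
import numpy as np
from scipy.integrate import solve_ivp

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
g = np.array([0.0, 1.0])                      # force input, velocity output

def f(t, x):
    x1, x2 = x[:2], x[2:]                     # H_i(x_i) = 0.5 |x_i|^2, so dH_i/dx_i = x_i
    y1, y2 = g @ x1, g @ x2
    u1, u2 = -y1*y2*y2, y2*y1*y1              # the output-modulated skew-symmetric feedback
    return np.concatenate((J @ x1 + g*u1, J @ x2 + g*u2))

sol = solve_ivp(f, [0, 100], [1.0, 0.0, 0.1, 0.0], rtol=1e-9, atol=1e-12)
H1 = 0.5*np.sum(sol.y[:2]**2, axis=0)
H2 = 0.5*np.sum(sol.y[2:]**2, axis=0)
print(H1[0], H1[-1], H2[0], H2[-1])             # H1 decreases while H2 increases ...
print(np.max(np.abs(H1 + H2 - (H1[0]+H2[0]))))  # ... and H1 + H2 stays (numerically) constant
```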

15.3 Stabilization by Casimir generation

Stabilizing a system at the origin, which often coincides with the open-
loop minimum energy, is generally not an enticing control problem.
Of wider practical interest is to stabilize the system at a non-zero set-
point. Indeed, suppose that we want to stabilize a plant pH system
around a set-point x∗p . We know that for any controller pH system, the
closed-loop power-balance satisfies

Ḣp + Ḣc = eTR fR + ēTR f¯R ≤ 0. (15.5)

If x∗p is not a minimum of Hp , a possible strategy is to consider Casimir


functions C : Xp × Xc → R to generate a candidate Lyapunov func-
tion for the closed-loop system of the form V := Hp + Hc + C. The
design is then completed by selecting the controller in such a way that
V has a minimum at (x∗p , x∗c ). As discussed in Chapter 8, Casimirs are
conserved quantities that are completely characterized by the Dirac
structure of the system. Since we are interested in the Casimir func-
tions that are based on the closed-loop Dirac structure, this strategy
Figure 15.1: Feedback interconnection of a plant and a controller port-Hamiltonian system.

reduces to finding all the achievable closed-loop Dirac structures. A


comprehensive analysis is given in Cervera et al. (2007).
Another way to interpret the generation of Casimir functions for
the closed-loop system is the following. Let us, for ease of presen-
tation, assume that the plant can be represented by an input-state-
output pH system of the form

$$\begin{aligned}
\dot{x}_p &= \big[J_p(x_p) - R_p(x_p)\big]\frac{\partial H_p}{\partial x_p}(x_p) + g_p(x_p)u, \\
y &= g_p^T(x_p)\frac{\partial H_p}{\partial x_p}(x_p),
\end{aligned} \qquad (15.6)$$

with xp ∈ Xp and u, y ∈ Rm . Furthermore, if the controller is also an


input-state-output pH system of the form

$$\begin{aligned}
\dot{x}_c &= \big[J_c(x_c) - R_c(x_c)\big]\frac{\partial H_c}{\partial x_c}(x_c) + g_c(x_c)\bar{u}, \\
\bar{y} &= g_c^T(x_c)\frac{\partial H_c}{\partial x_c}(x_c),
\end{aligned} \qquad (15.7)$$

with xc ∈ Xc and ū, ȳ ∈ Rm , then the interconnection of the plant


system (15.6) with (15.7) via the standard power-preserving feedback
interconnection u = −ȳ and ū = y, as shown in Fig. 15.1, yields the
closed-loop system

$$\begin{bmatrix} \dot{x}_p \\ \dot{x}_c \end{bmatrix} = \begin{bmatrix} J_p(x_p) - R_p(x_p) & -g_p(x_p)g_c^T(x_c) \\ g_c(x_c)g_p^T(x_p) & J_c(x_c) - R_c(x_c) \end{bmatrix}\begin{bmatrix} \dfrac{\partial H_p}{\partial x_p}(x_p) \\[6pt] \dfrac{\partial H_c}{\partial x_c}(x_c) \end{bmatrix},$$

$$\begin{bmatrix} y \\ \bar{y} \end{bmatrix} = \begin{bmatrix} g_p^T(x_p) & 0 \\ 0 & g_c^T(x_c) \end{bmatrix}\begin{bmatrix} \dfrac{\partial H_p}{\partial x_p}(x_p) \\[6pt] \dfrac{\partial H_c}{\partial x_c}(x_c) \end{bmatrix}, \qquad (15.8)$$
which is again a pH system with Hamiltonian Hp (xp ) + Hc (xc ).
The main idea then is to design the controller system such that the
closed-loop system (15.8) has useful Casimirs. If both the plant and
the controller are lossless, i.e., Rp (xp ) = 0 and Rc (xc ) = 0, we thus
look for functions C(xp, xc) satisfying

$$\begin{bmatrix} \dfrac{\partial^T C}{\partial x_p}(x_p, x_c) & \dfrac{\partial^T C}{\partial x_c}(x_p, x_c) \end{bmatrix}\begin{bmatrix} J_p(x_p) & -g_p(x_p)g_c^T(x_c) \\ g_c(x_c)g_p^T(x_p) & J_c(x_c) \end{bmatrix} = 0, \qquad (15.9)$$
such that the candidate Lyapunov function
V (xp , xc ) = Hp (xp ) + Hc (xc ) + C(xp , xc ) (15.10)
has a minimum at (x∗p , x∗c ) for a certain x∗c . Subsequently, one may add
extra damping, directly or in the dynamics of the controller, to achieve
asymptotic stability.
Example 15.2. Consider a pendulum with normalized Hamiltonian
$$H(q, p) = \tfrac{1}{2}p^2 + 1 - \cos q$$
actuated by a torque u, with output y = p (angular velocity). Suppose
we wish to stabilize the pendulum at q ∗ 6= 0 and p∗ = 0. Apply the
nonlinear integral control
$$\dot{x}_c = \bar{u}, \qquad \bar{y} = \frac{\partial H_c}{\partial x_c}(x_c),$$
which is a port-Hamiltonian controller system with Jc = 0. After in-


terconnecting the pendulum with the controller by setting u = −ȳ
and ū = y, we proceed by searching for Casimirs C(q, p, ξ) which are
found by solving

$$\begin{bmatrix} \dfrac{\partial C}{\partial q} & \dfrac{\partial C}{\partial p} & \dfrac{\partial C}{\partial \xi} \end{bmatrix}\begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & -1 \\ 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix},$$

leading to C(q, p, ξ) = K(q − ξ), and candidate Lyapunov functions

$$V(q, p, \xi) = \tfrac{1}{2}p^2 + 1 - \cos q + H_c(\xi) + K(q - \xi),$$
with the functions Hc and K to be determined. For a local minimum,
determine K and Hc such that
• Equilibrium assignment:
$$\begin{aligned}
\frac{\partial V}{\partial q}(q^*, 0, \xi^*) &= \sin q^* + \frac{\partial K}{\partial q}(q^* - \xi^*) = 0, \\
\frac{\partial V}{\partial p}(q^*, 0, \xi^*) &= 0, \\
\frac{\partial V}{\partial \xi}(q^*, 0, \xi^*) &= \frac{\partial H_c}{\partial \xi}(\xi^*) - \frac{\partial K}{\partial \xi}(q^* - \xi^*) = 0.
\end{aligned}$$
• Minimum condition:

$$\begin{bmatrix} \cos q^* + \dfrac{\partial^2 K}{\partial q^2}(q^* - \xi^*) & 0 & -\dfrac{\partial^2 K}{\partial q\,\partial \xi}(q^* - \xi^*) \\ 0 & 1 & 0 \\ -\dfrac{\partial^2 K}{\partial \xi\,\partial q}(q^* - \xi^*) & 0 & \dfrac{\partial^2 K}{\partial \xi^2}(q^* - \xi^*) + \dfrac{\partial^2 H_c}{\partial \xi^2}(\xi^*) \end{bmatrix} > 0,$$
which provides many possibilities to accomplish the stabilization task.
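One admissible choice satisfying both sets of conditions is K(w) = −sin(q*)·w together with H_c(ξ) = −sin(q*)·ξ + (a/2)ξ² (so ξ* = 0), valid for |q*| < π/2 and a > 0. Since the closed loop preserves the Casimir C = q − ξ, the controller state has to be initialized on the corresponding level set. The sketch below, with made-up gains, this particular K and H_c, and extra damping injection added on the plant input, illustrates the resulting set-point regulation.

```python
# Sketch: pendulum stabilized at q* with the integral controller and added damping.
import numpy as np
from scipy.integrate import solve_ivp

q_star, a, d = 1.0, 2.0, 1.0          # desired angle (|q*| < pi/2), controller gain, damping

def f(t, x):
    q, p, xi = x
    ybar = -np.sin(q_star) + a*xi     # dHc/dxi for Hc(xi) = -sin(q*) xi + (a/2) xi^2
    u = -ybar - d*p                   # u = -ybar plus damping injection
    return [p, -np.sin(q) + u, p]     # qdot, pdot, xidot = ubar = y = p

q0 = 0.2
x0 = [q0, 0.0, q0 - q_star]           # initialize xi so that C = q - xi equals q* - xi*
sol = solve_ivp(f, [0, 60], x0, rtol=1e-8)
print(sol.y[0, -1], sol.y[1, -1])     # -> approximately (q*, 0)
```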
Example 15.3. A similar approach can be applied to distributed-
parameter systems. Consider for instance the shallow water equations
of Example 14.2:

$$-\frac{\partial}{\partial t}\begin{bmatrix} h(t, z) \\ v(t, z) \end{bmatrix} = \begin{bmatrix} 0 & \frac{\partial}{\partial z} \\ \frac{\partial}{\partial z} & 0 \end{bmatrix}\begin{bmatrix} \delta_h H \\ \delta_v H \end{bmatrix},$$
with boundary variables hv|_{[a,b]} and −(½v² + gh)|_{[a,b]} and Hamiltonian

$$H(h, v) = \int_a^b \tfrac{1}{2}\big[hv^2 + gh^2\big]\, dz.$$

An obvious ‘physical’ controller is to add to one side of the canal, say


the right-end b, an infinite water reservoir of height h∗ , corresponding
to the port-Hamiltonian ‘source’ system

$$\dot{x}_c = \bar{u}, \qquad \bar{y} = \frac{\partial H_c}{\partial x_c}(x_c),$$

with Hamiltonian H_c(x_c) = gh*x_c, via the feedback interconnection

$$\bar{u} = y = h(b)v(b), \qquad \bar{y} = -u = \tfrac{1}{2}v^2(b) + gh(b).$$
By mass-balance, we find that
$$\int_a^b h(z, t)\, dz + x_c + \kappa$$

is a Casimir for the closed-loop system. Thus, we may take as a candi-


date Lyapunov function
$$\begin{aligned}
V(h, v, x_c) &= \int_a^b \tfrac{1}{2}\big[hv^2 + gh^2\big]\, dz + gh^* x_c - gh^*\left(\int_a^b h(z, t)\, dz + x_c\right) + \tfrac{1}{2}g(h^*)^2(b - a) \\
&= \int_a^b \tfrac{1}{2}\big[hv^2 + g(h - h^*)^2\big]\, dz,
\end{aligned}$$

which has a minimum at the desired set-point (h∗ , v ∗ , x∗c ), with v ∗ = 0


and x∗c arbitrary.
Note that if we restrict the motion of (15.8) to the subset
\[
\Omega = \bigl\{ (x_p, x_c) \in \mathcal{X}_p \times \mathcal{X}_c \mid C(x_p, x_c) = \kappa \bigr\},
\tag{15.11}
\]
with C(xp, xc) = xc − S(xp), where S(xp) is differentiable and κ is some constant, the candidate Lyapunov function (15.10) reduces to a shaped
closed-loop Hamiltonian Hp (xp )+Hc (S(xp )+κ). This is accomplished
if, along the trajectories of (15.8), the functions S(xp) are such that
\[
\dot{C}(x_p, x_c)\big|_{\Omega} = 0.
\]
Hence the dynamic feedback reduces to a state feedback scheme, and
we are thus looking for solutions S(xp) of the partial differential equations (PDE's)
\[
\begin{bmatrix} -\dfrac{\partial^T S}{\partial x_p}(x_p) & I_{n_c} \end{bmatrix}
\begin{bmatrix} J_p(x_p) - R_p(x_p) & -g_p(x_p)\, g_c^T(x_c) \\[1mm] g_c(x_c)\, g_p^T(x_p) & J_c(x_c) - R_c(x_c) \end{bmatrix} = 0,
\tag{15.12}
\]
which, under the assumption that Rp(xp) ≥ 0 and Rc(xc) ≥ 0, are characterized by the following conditions van der Schaft (2000).
Proposition 15.1. The system of PDE’s (15.12) has a solution if and
only if
\[
\begin{aligned}
\frac{\partial^T S}{\partial x_p}(x_p)\, J_p(x_p)\, \frac{\partial S}{\partial x_p}(x_p) &= J_c(x_c), \\
R_p(x_p)\, \frac{\partial S}{\partial x_p}(x_p) &= 0, \\
R_c(x_c) &= 0, \\
J_p(x_p)\, \frac{\partial S}{\partial x_p}(x_p) &= -g_p(x_p)\, g_c^T(x_c).
\end{aligned}
\tag{15.13}
\]
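For the pendulum of Example 15.2, the Casimir C = ξ − S(q, p) with S(q, p) = q satisfies all of these conditions; the following small numerical sketch (assuming NumPy) checks them for that case.

# Check of the conditions (15.13) for Example 15.2, where S(q, p) = q.
import numpy as np

Jp = np.array([[0.0, 1.0], [-1.0, 0.0]])   # pendulum structure matrix
Rp = np.zeros((2, 2))                      # lossless plant
gp = np.array([[0.0], [1.0]])              # torque input
gc = np.array([[1.0]])                     # integrator controller, Jc = Rc = 0
dS = np.array([[1.0], [0.0]])              # gradient of S(q, p) = q

print(np.allclose(dS.T @ Jp @ dS, 0.0))    # dS^T Jp dS = Jc = 0
print(np.allclose(Rp @ dS, 0.0))           # Rp dS = 0
print(np.allclose(Jp @ dS, -gp @ gc.T))    # Jp dS = -gp gc^T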
15.4 The dissipation obstacle and beyond

Surprisingly, the presence of dissipation may pose a problem. Indeed, if Rp(xp) ≥ 0 and Rc(xc) ≥ 0, the set of PDE's (15.9) extends to
\[
\begin{bmatrix} \dfrac{\partial^T C}{\partial x_p}(x_p, x_c) & \dfrac{\partial^T C}{\partial x_c}(x_p, x_c) \end{bmatrix}
\begin{bmatrix} J_p(x_p) - R_p(x_p) & -g_p(x_p)\, g_c^T(x_c) \\[1mm] g_c(x_c)\, g_p^T(x_p) & J_c(x_c) - R_c(x_c) \end{bmatrix} = 0,
\tag{15.14}
\]
which implies
\[
-\frac{\partial^T C}{\partial x_p}(x_p, x_c)\, R_p(x_p)\, \frac{\partial C}{\partial x_p}(x_p, x_c)
= \frac{\partial^T C}{\partial x_c}(x_p, x_c)\, R_c(x_c)\, \frac{\partial C}{\partial x_c}(x_p, x_c).
\]
However, since Rp(xp) and Rc(xc) are positive semi-definite, we have
\[
R_p(x_p)\, \frac{\partial C}{\partial x_p}(x_p, x_c) = 0, \qquad R_c(x_c)\, \frac{\partial C}{\partial x_c}(x_p, x_c) = 0.
\]
The same condition also appears in (15.13) when we consider Casimirs of the form (15.11). This restriction is known as the dissipation
obstacle, which, roughly speaking, dictates that the Casimir functions
cannot depend on the coordinates that are subject to dissipation. This
means that dissipation is admissible only on the coordinates of the
closed-loop system that do not require shaping of the Hamiltonian.

Example 15.4. Consider the levitated ball system of Example 2.4 and
assume that the inductance of the coil is given by L(q) = k/(1 − q),
with k some real parameter depending on the geometry of the coil.
Then, the desired equilibrium is given by
\[
(q^*, p^*, \phi^*) = \bigl(q^*, 0, \pm\sqrt{2kmg}\bigr),
\]
which suggests that both q and ϕ need to be shaped in order to stabilize the system at the desired equilibrium. However, we cannot find
any useful Casimir since
\[
\begin{bmatrix} \dfrac{\partial C}{\partial q} & \dfrac{\partial C}{\partial p} & \dfrac{\partial C}{\partial \phi} \end{bmatrix}
\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & R \end{bmatrix}
= \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}.
\]
The dissipation obstacle stems from the assumption that both
the plant and controller dissipation structures satisfy Rp (xp ) ≥ 0
and Rc (xc ) ≥ 0. Although these properties are necessary to ensure
that the plant and the controller are both passive systems, they are
merely sufficient for passivity of the closed-loop system. Indeed, in
Koopman & Jeltsema (2012) it is shown that removing the passivity
constraint on the controller naturally resolves the dissipation obsta-
cle. Furthermore, using the Casimir relation between the plant and
controller states as in (15.11) guarantees stability of the closed loop.
Another method to circumvent the dissipation obstacle is discussed
in Section 15.7.
15.5 Passivity-based control
In the previous sections, we have considered the control of pH systems from the perspective of interconnecting a plant pH system with
a controller pH system. The aim is then to design a controller system
that generates a suitable candidate Lyapunov function to ensure sta-
bility of the closed-loop system. In the remaining sections, the control
of pH systems is considered from the notion of passivity. In the past
two decades, passivity-based control (PBC) has emerged as a model-
based non-linear control design method that respects, and success-
fully exploits, the physical structure of a system. In essence, the PBC
methodology exploits the property that physical systems satisfy the
energy-balance, i.e., stored energy equals the difference between the
supplied energy and the dissipated energy, to arrive, via control, at a
modified energy balance satisfying: desired stored energy equals the
difference between a new supplied energy and a desired dissipated
energy. Hence, PBC aims at energy shaping plus damping injection by
rendering the closed-loop passive with respect to some desired stor-
age function.
For ease of presentation, we will assume that the plant system can
be represented as an input-state-output pH system of the form
\[
\begin{aligned}
\dot{x} &= [J(x) - R(x)]\, \frac{\partial H}{\partial x}(x) + g(x)\, u, \\
y &= g^T(x)\, \frac{\partial H}{\partial x}(x),
\end{aligned}
\tag{15.15}
\]
where x ∈ X and u, y ∈ Rm .

15.6 Energy-shaping and damping injection
For a plant system (15.15), the energy-shaping and damping injection (ES-DI) objective is to obtain a target closed-loop system
\[
\dot{x} = [J(x) - R_d(x)]\, \frac{\partial H_d}{\partial x}(x),
\tag{15.16}
\]
where Rd (x) is the desired dissipation matrix given by
Rd (x) = R(x) + g(x)Kd (x)gT (x),
in terms of a damping injection matrix Kd(x). The desired closed-loop Hamiltonian Hd(x) is obtained by augmenting the open-loop Hamiltonian with an additional energy Ha(x) such that Hd(x) = H(x) + Ha(x) has a minimum at the desired equilibrium, i.e.,
\[
x^* = \arg\min\{H_d(x)\}.
\tag{15.17}
\]
The target closed-loop dynamics (15.16) can then be achieved by the
static state feedback control
u(x) = uES (x) + uDI (x),
where
\[
\begin{aligned}
u_{ES}(x) &= \bigl(g^T(x)\, g(x)\bigr)^{-1} g^T(x)\, [J(x) - R(x)]\, \frac{\partial H_a}{\partial x}(x), \\
u_{DI}(x) &= -K_d(x)\, g^T(x)\, \frac{\partial H_d}{\partial x}(x),
\end{aligned}
\]
represent the energy-shaping and damping injection components of
the control, respectively. The added energy Ha(x) is a solution to the
set of PDE’s
\[
\begin{bmatrix} g^{\perp}(x)\, [J(x) - R(x)] \\ g^T(x) \end{bmatrix} \frac{\partial H_a}{\partial x}(x) = 0,
\tag{15.18}
\]
where g⊥ (x) is a left-annihilator of g(x), i.e., g⊥ (x)g(x) = 0, of max-
imal rank. Among all possible solutions (15.18), the one satisfying
(15.17) is chosen.
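In code, the ES-DI feedback is a direct transcription of the two formulas above. The following sketch (hypothetical function and argument names; the designer supplies J, R, g, Kd and the gradients of Ha and Hd) is one possible way to evaluate it, assuming NumPy.

# Sketch of the ES-DI state feedback u = u_ES(x) + u_DI(x) (assumed interfaces).
import numpy as np

def esdi_control(x, J, R, g, Kd, grad_Ha, grad_Hd):
    Jx, Rx, gx = J(x), R(x), g(x)
    # u_ES = (g^T g)^{-1} g^T (J - R) dHa/dx
    u_es = np.linalg.solve(gx.T @ gx, gx.T @ (Jx - Rx) @ grad_Ha(x))
    # u_DI = -Kd g^T dHd/dx
    u_di = -Kd @ gx.T @ grad_Hd(x)
    return u_es + u_di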
Example 15.5. Consider a (fully actuated) mechanical system with
total energy
\[
H(q, p) = \tfrac{1}{2}\, p^T M^{-1}(q)\, p + P(q),
\]
where M (q) = M T (q) > 0 represents the generalized mass matrix.
Assume that the potential energy P (q) is bounded from below. The
associated pH equations are given by
\[
\begin{bmatrix} \dot{q} \\ \dot{p} \end{bmatrix}
= \underbrace{\begin{bmatrix} 0 & I_k \\ -I_k & 0 \end{bmatrix}}_{J = -J^T}
\begin{bmatrix} \dfrac{\partial H}{\partial q} \\[2mm] \dfrac{\partial H}{\partial p} \end{bmatrix}
+ \underbrace{\begin{bmatrix} 0 \\ B(q) \end{bmatrix}}_{g(q)} u,
\qquad
y = \begin{bmatrix} 0 & B^T(q) \end{bmatrix}
\begin{bmatrix} \dfrac{\partial H}{\partial q} \\[2mm] \dfrac{\partial H}{\partial p} \end{bmatrix}.
\]
Clearly, the system has the generalized velocities as passive outputs:
\[
\dot{H}(q, p) = u^T y = u^T B^T(q)\, \frac{\partial H}{\partial p}(q, p) = u^T B^T(q)\, M^{-1}(q)\, p = u^T B^T(q)\, \dot{q}.
\]
Considering (15.18), the simplest way to ensure that the closed-loop
energy has a minimum at (q, p) = (q ∗ , 0) is to select
\[
H_a(q) = -P(q) + \tfrac{1}{2}\, (q - q^*)^T K_p\, (q - q^*) + \kappa,
\]
where Kp = KpT > 0 is the energy-shaping gain and κ is an arbitrary
constant. This, in turn, provides the energy-shaping part of the control
\[
u_{ES}(q) = \frac{\partial P}{\partial q}(q) - K_p\, (q - q^*),
\]
and the resulting closed-loop energy
\[
H_d(q, p) = \tfrac{1}{2}\, p^T M^{-1}(q)\, p + \tfrac{1}{2}\, (q - q^*)^T K_p\, (q - q^*).
\]
To ensure that the trajectories actually converge to (q ∗ , 0), we need to
render the closed-loop asymptotically stable by adding some damp-
ing
\[
u_{DI}(p) = -K_d\, \frac{\partial H}{\partial p}(q, p) = -K_d\, \dot{q},
\]
where Kd = KdT > 0. Note that the energy-balance of the system is
now
\[
H_d[q(t), p(t)] - H_d[q(0), p(0)] = -\int_0^t \dot{q}^T(\tau)\, K_d\, \dot{q}(\tau)\, d\tau.
\]
Observe that the controller obtained in Example 15.5 is just the
classical PD plus gravity compensation controller. However, the de-
sign via energy-shaping and damping injection provides a new inter-
pretation of the controller, namely, that the closed-loop energy is (up
to a constant) equal to
\[
H_d(x) = H(x) - \int_0^t u^T(\tau)\, y(\tau)\, d\tau,
\]
i.e., the difference between the plant energy and the energy supplied by the controller. For
that reason, the ES-DI methodology is often referred to as energy-
balancing (EB) control.
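For a concrete single-degree-of-freedom instance (a pendulum with M = ml², P(q) = mgl(1 − cos q) and B = 1), the following short simulation sketch (assuming SciPy; gains and the set-point are illustrative choices) applies uES + uDI and shows convergence to (q∗, 0).

# PD plus gravity compensation as ES-DI control for a single pendulum (assumed data).
import numpy as np
from scipy.integrate import solve_ivp

m, l, grav = 1.0, 1.0, 9.81
M = m * l**2
q_star, Kp, Kd = np.pi / 3, 10.0, 2.0

def closed_loop(t, x):
    q, p = x
    qdot = p / M
    dPdq = m * grav * l * np.sin(q)
    u = dPdq - Kp * (q - q_star) - Kd * qdot      # u_ES(q) + u_DI(p)
    return [qdot, -dPdq + u]

sol = solve_ivp(closed_loop, (0.0, 10.0), [0.0, 0.0], rtol=1e-8, atol=1e-8)
q_end, p_end = sol.y[:, -1]
Hd_end = p_end**2 / (2 * M) + 0.5 * Kp * (q_end - q_star)**2
print(q_end, p_end, Hd_end)    # q -> q_star, p -> 0, H_d -> 0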
15.7 Interconnection and damping assignment
As with the conventional Casimir-based control method, a major drawback of the ES-DI approach is the solvability of the PDE's (15.18), which is also stymied by the dissipation obstacle. A generalization of the
ES-DI method that circumvents the dissipation obstacle is provided by
the interconnection and damping assignment passivity-based control
(IDA-PBC) method.
For a plant system (15.15), the interconnection and damping as-
signment passivity-based control (IDA-PBC) design objective is to ob-
tain a closed-loop system of the form
\[
\dot{x} = [J_d(x) - R_d(x)]\, \frac{\partial H_d}{\partial x}(x),
\tag{15.19}
\]
where the desired interconnection and dissipation matrices satisfy skew-symmetry and symmetric positive semi-definiteness, respectively, i.e., Jd(x) = −Jd^T(x) and Rd(x) = Rd^T(x) with Rd(x) ≥ 0. As before, the desired closed-loop Hamiltonian is defined
by Hd (x) = H(x) + Ha (x), where the added energy Ha (x) satisfies
\[
[J(x) + J_a(x) - R(x) - R_a(x)]\, \frac{\partial H_a}{\partial x}(x)
= -[J_a(x) - R_a(x)]\, \frac{\partial H}{\partial x}(x) + g(x)\, u(x),
\tag{15.20}
\]
where Ja (x) := Jd (x) − J(x) and Ra (x) := Rd (x) − R(x).
Proposition 15.2 (Ortega et al. (2001b)). Consider the system
(15.15) and a desired equilibrium x∗ to be stabilized. Assume that we
can find functions u(x) and Ha (x), and matrices Ja (x) and Ra (x) sat-
isfying (15.20) and such that the following conditions occur.
• Equilibrium assignment: at x∗ the gradient of Ha (x) verifies
\[
\frac{\partial H_a}{\partial x}(x^*) + \frac{\partial H}{\partial x}(x^*) = 0.
\tag{15.21}
\]
• Minimum condition: the Hessian of Ha (x) at x∗ satisfies
\[
\frac{\partial^2 H_a}{\partial x^2}(x^*) + \frac{\partial^2 H}{\partial x^2}(x^*) > 0.
\tag{15.22}
\]
Then, x∗ will be a (locally) stable equilibrium of the closed-loop system (15.19). It will be (locally) asymptotically stable if, in addition, the largest invariant set under the closed-loop dynamics contained in
\[
\Bigl\{\, x \in \mathcal{X} \;\Big|\; \frac{\partial^T H_d}{\partial x}(x)\, R_d(x)\, \frac{\partial H_d}{\partial x}(x) = 0 \,\Bigr\}
\]
equals {x∗ }.
Remark 15.1. Note that if the interconnection and damping matrices of the open-loop system are not changed, i.e., Jd(x) = J(x) and
Rd (x) = R(x), or equivalently, if Ja (x) = 0 and Ra (x) = 0, then the
IDA-PBC methodology reduces to the ES-DI scheme outlined in the
previous section.
Example 15.6. Consider again the levitated ball system of Example 2.4. Suppose that we first try to stabilize the system by only shaping
2.4. Suppose that we first try to stabilize the system by only shaping
the Hamiltonian without altering the interconnection and damping
matrices. Then, the PDE (15.20) reduces to
\[
\frac{\partial H_a}{\partial q} = 0, \qquad \frac{\partial H_a}{\partial p} = 0, \qquad -R\, \frac{\partial H_a}{\partial \phi} = u(x),
\]
which means that Ha can only depend on ϕ. Thus, the resulting
closed-loop Hamiltonian will be of the form
\[
H_d(q, p, \phi) = mgq + \frac{p^2}{2m} + \frac{(1 - q)}{2k}\, \phi^2 + H_a(\phi).
\]
Even though, with a suitable selection of Ha, we can satisfy the equilibrium assignment condition, the Hessian of Hd at (q∗, 0, ±√2kmg)
will never be positive definite. The source of the problem is the lack
of an effective coupling between the electrical and mechanical subsys-
tems. Indeed, the interconnection matrix J only couples position with
velocity. To overcome this problem, we propose to enforce a coupling
between the flux-linkage ϕ and the momentum p. To this end, we modify the interconnection structure to
\[
J_d = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & -\alpha \\ 0 & \alpha & 0 \end{bmatrix},
\]
where α is a constant to be defined, so that (15.20) becomes
\[
\begin{aligned}
\frac{\partial H_a}{\partial p} &= 0, \\
-\frac{\partial H_a}{\partial q} - \alpha\, \frac{\partial H_a}{\partial \phi} &= \alpha\, \frac{(1 - q)}{k}\, \phi, \\
\alpha\, \frac{\partial H_a}{\partial p} - R\, \frac{\partial H_a}{\partial \phi} &= -\alpha\, \frac{p}{m} + u(x).
\end{aligned}
\]
The third equation defines the control, whereas the second one can be
solved to obtain
\[
H_a(q, \phi) = -\frac{\phi^3}{6 k \alpha} - \frac{1}{2k}\, (1 - q)\, \phi^2 + \Phi\!\left(q - \frac{\phi}{\alpha}\right),
\]
where Φ(·) should be chosen such that (15.21) and (15.22) are satisfied.
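The following small symbolic sketch (assuming SymPy, and with Φ left as an arbitrary function) checks that this Ha indeed satisfies the two unactuated rows of the matching equation for the Jd chosen above.

# Symbolic check of the unactuated matching equations for the levitated ball.
import sympy as sp

q, p, phi, m, grav, k, R, alpha = sp.symbols('q p phi m g k R alpha', real=True)
Phi = sp.Function('Phi')                       # arbitrary function in the general solution

H  = m*grav*q + p**2/(2*m) + (1 - q)*phi**2/(2*k)
Ha = -phi**3/(6*k*alpha) - (1 - q)*phi**2/(2*k) + Phi(q - phi/alpha)

gradH  = sp.Matrix([sp.diff(H,  v) for v in (q, p, phi)])
gradHa = sp.Matrix([sp.diff(Ha, v) for v in (q, p, phi)])

J  = sp.Matrix([[0, 1, 0], [-1, 0, 0], [0, 0, 0]])
Rm = sp.diag(0, 0, R)
Jd = sp.Matrix([[0, 1, 0], [-1, 0, -alpha], [0, alpha, 0]])
Ja = Jd - J                                    # Ra = 0, so Rd = Rm

residual = (Jd - Rm)*gradHa + Ja*gradH         # rows 1 and 2 must vanish; row 3 defines u
print(sp.simplify(residual[0]), sp.simplify(residual[1]))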
In the latter example, the control is obtained from (15.20), where Jd is selected based on physical intuition and the dissipation structure is left unchanged, i.e., Ra = 0. The remaining
unknown ‘parameter’ Ha (x) is then selected such that the closed-loop
energy Hd (x) = H(x) + Ha (x) has its minimum in the desired equilib-
rium point. In general, the control that achieves the closed-loop con-
trol objective (15.19) is given by
\[
u(x) = \bigl(g^T(x)\, g(x)\bigr)^{-1} g^T(x)
\left( [J_d(x) - R_d(x)]\, \frac{\partial H_d}{\partial x}(x) - [J(x) - R(x)]\, \frac{\partial H}{\partial x}(x) \right),
\]
where the desired closed-loop Hamiltonian Hd (x) and the desired in-
terconnection and damping matrices are obtained by solving the so-
called matching condition
\[
g^{\perp}(x)\, [J(x) - R(x)]\, \frac{\partial H}{\partial x}(x) = g^{\perp}(x)\, [J_d(x) - R_d(x)]\, \frac{\partial H_d}{\partial x}(x),
\tag{15.23}
\]
such that Hd (x) has its minimum at the desired equilibrium.
Solving the matching condition (15.23) can be a tedious process.
The following observations about the choice of the desired intercon-
nection and dissipation matrices of (15.19) can be of help in the pro-
cess:
• The desired interconnection matrix Jd(x) and the dissipation matrix Rd(x) can be freely chosen provided they satisfy skew-symmetry and positive semi-definiteness, respectively.

• The left-annihilator matrix g⊥(x) can be considered as an additional degree of freedom. Hence for a particular problem it can be appropriately chosen to reduce the complexity of the matching condition (15.23).

• The desired Hamiltonian Hd(x) can be partially or completely fixed to satisfy the desired equilibrium condition (15.17).

Using combinations of the stated options, there are three major approaches to solve the PDE's of (15.23):

• Non-parameterized IDA-PBC: In this general form, Jd(x) and Rd(x) are fixed and the PDE's (15.23) are solved for the energy function Hd(x). Among the admissible solutions, the one satisfying (15.17) is chosen.

• Algebraic IDA-PBC: The desired energy function Hd(x) is fixed, thus rendering (15.23) an algebraic set of equations in terms of the unknown matrices Jd(x) and Rd(x).

• Parameterized IDA-PBC: Here, the structure of the energy function Hd(x) is fixed. This imposes constraints on the unknown matrices Jd(x) and Rd(x), which need to be satisfied by (15.23).
For a detailed discussion about these approaches and some motivating examples, we refer to Ortega et al. (2001b, 2008), and the references therein.

15.8 Power-shaping control
We conclude this chapter by briefly highlighting an appealing alternative to energy-shaping called power-shaping. Power-shaping PBC is
based on the Brayton-Moser framework presented in Chapter 11. The
idea is as follows. Recall from Section 11.4 that if Q(z) + QT (z) ≤ 0,
then Ṗ(z) ≤ uT y′. Hence, integrating the latter from 0 to τ yields the power balance inequality
\[
P\bigl(z(\tau)\bigr) - P\bigl(z(0)\bigr) \le \int_0^\tau u^T(t)\, y'(t)\, dt.
\tag{15.24}
\]
Usually, the point where the mixed-potential has a minimum is not the operating point of interest, and we would rather stabilize another admissible equilibrium point. Thus, we look for a control law such that the power supplied by the controller, say Pa, can be expressed as a function of the state z. Indeed, from (15.24) we see that the mixed-potential function P(z) is shaped with the control u(z), where
\[
g(z)\, u(z) = \nabla P_a(z).
\]
This yields the closed-loop system Q(z)ż = ∇Pd(z), with total Lyapunov function Pd(z) := P(z) + Pa(z). The equilibrium will be stable if it corresponds to the minimum of the total Lyapunov function.
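The stability mechanism can be illustrated numerically: if Q(z) + Qᵀ(z) ≤ 0, then along Q(z)ż = ∇Pd(z) one has Ṗd = ∇Pdᵀ Q⁻¹ ∇Pd ≤ 0. The sketch below (assuming NumPy/SciPy; Q and Pd are illustrative choices, not taken from the text) integrates such a gradient-like closed loop and prints the decreasing values of Pd.

# Illustrative power-shaping closed loop Q z_dot = grad(Pd), with Q + Q^T <= 0.
import numpy as np
from scipy.integrate import solve_ivp

Q = np.array([[-1.0, 0.5], [-0.5, -1.0]])     # Q + Q^T = -2 I <= 0
z_star = np.array([1.0, -0.5])                # desired equilibrium (assumption)

def Pd(z):
    return 0.5 * np.dot(z - z_star, z - z_star)

def dynamics(t, z):
    grad_Pd = z - z_star
    return np.linalg.solve(Q, grad_Pd)        # z_dot = Q^{-1} grad Pd

sol = solve_ivp(dynamics, (0.0, 10.0), [0.0, 0.0], t_eval=np.linspace(0, 10, 6))
print([round(Pd(z), 4) for z in sol.y.T])     # monotonically decreasing to 0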
So far, the method of power shaping has been shown to be applicable to a large class of nonlinear RLC circuits Ortega et al. (2003), and
is extended to a class of nonlinear systems in García-Canseco et al.
(2010); Favache and Dochain (2010). The application of power shap-
ing to switched power converters is a topic of future research. Another
control application in which the generation of passive input-output
maps is of interest is PI control. In Hernandez-Gomez et al. (2010) it
is shown that under certain conditions a passive system can be sta-
bilized by a simple PI controller. A general Brayton-Moser based PI
control method is discussed in Dirksz and Scherpen (2012).
Acknowledgements

These lecture notes are based on research collaborations with many colleagues, post-docs and PhD students. The first author thanks in
particular Bernhard Maschke for a truly inspiring collaboration over
the last 25 years. Special thanks also from both authors to Romeo Or-
tega for a fruitful collaboration during the last 15 years.
Appendices
A Proofs

A.1 Proof of Proposition 2.1

Let D satisfy (2.5). Then for every (f, e) ∈ D
\[
0 = \ll (f, e), (f, e) \gg \; = \; \langle e \mid f \rangle + \langle e \mid f \rangle = 2\, \langle e \mid f \rangle.
\]
By non-degeneracy of ≪ ·, · ≫,
\[
\dim \mathcal{D}^{\perp\!\!\perp} = \dim(\mathcal{F} \times \mathcal{E}) - \dim \mathcal{D} = 2 \dim \mathcal{F} - \dim \mathcal{D},
\]
and hence property (2.5) implies dim D = dim F. Conversely, let D
be a Dirac structure, thus satisfying Properties 1 and 2 of Definition 2.1. Let
(f a , ea ), (f b , eb ) be any vectors contained in D. Then by linearity also
(f a + f b , ea + eb ) ∈ D. Hence by Property 1
\[
\begin{aligned}
0 &= \langle e^a + e^b \mid f^a + f^b \rangle \\
&= \langle e^a \mid f^b \rangle + \langle e^b \mid f^a \rangle + \langle e^a \mid f^a \rangle + \langle e^b \mid f^b \rangle \\
&= \langle e^a \mid f^b \rangle + \langle e^b \mid f^a \rangle \; = \; \ll (f^a, e^a), (f^b, e^b) \gg,
\end{aligned}
\tag{A.1}
\]
since by another application of Property 1, < ea | f a >=< eb | f b >=0.
This implies that D ⊂ D ⊥⊥ . Furthermore, by Property 2 and dim D ⊥⊥ =
2 dim F − dim D it follows that
dim D = dim D ⊥⊥ ,
yielding D = D⊥⊥. For the alternative characterization we note that we have actually shown that Property 1 implies D ⊂ D⊥⊥. Together with
the fact that dim D ⊥⊥ = 2 dim F − dim D this implies that any subspace
D satisfying Property 1 has the property that dim D ≤ dim F. Thus,
as claimed before, a Dirac structure is a linear subspace of maximal
dimension satisfying Property 1.

A.2 Proof of Proposition 2.2

It is immediately seen that any subspace K × K⊥ satisfies (2.7), and is a Dirac structure. Conversely, let the Dirac structure D satisfy (2.7).
Define the following subspaces
\[
\begin{aligned}
\mathcal{F}_0 &= \{ f \in \mathcal{F} \mid (f, 0) \in \mathcal{D} \}, & \qquad \mathcal{F}_1 &= \{ f \in \mathcal{F} \mid \exists\, e \in \mathcal{E} \text{ s.t. } (f, e) \in \mathcal{D} \}, \\
\mathcal{E}_0 &= \{ e \in \mathcal{E} \mid (0, e) \in \mathcal{D} \}, & \qquad \mathcal{E}_1 &= \{ e \in \mathcal{E} \mid \exists\, f \in \mathcal{F} \text{ s.t. } (f, e) \in \mathcal{D} \}.
\end{aligned}
\]
It is readily seen Dalsmo & van der Schaft (1999) that for any Dirac
structure E1 = (F0 )⊥ , E0 = (F1 )⊥ . We will now show that (2.7) implies
that F0 = F1 =: K (and hence E0 = E1 =: K⊥ ). Clearly, F0 ⊂ F1 . Let
now (fa , ea ) ∈ D and thus fa ∈ F1 . Then for all (fb , eb ) ∈ D
\[
\ll (f_a, 0), (f_b, e_b) \gg \; = \; \langle e_b \mid f_a \rangle + \langle 0 \mid f_b \rangle = \langle e_b \mid f_a \rangle = 0
\]
by (2.7). Hence, also (fa, 0) ∈ D and thus fa ∈ F0. By definition F0 × E0 ⊂ D, and hence K × K⊥ ⊂ D. Finally, since the dimension of K × K⊥ equals the dimension of F, equality results.

A.3 Extension of Proposition 2.1

Finally, we mention the following adaptation of the first definition of a Dirac structure (Definition 2.1), which also applies to the infinite-
dimensional case, and is in the spirit of the definition of a maximal
monotone relation as discussed in Chapter 9.
Proposition A.1. D ⊂ F × E is a Dirac structure if and only if it satisfies Property 1 in Definition 2.1 and is maximal with respect to this
property, that is, if some subspace D ′ satisfies Property 1 while D ⊂ D ′ ,
then D = D ′ .
B Physical meaning of efforts and flows

In science and engineering, the ideas and concepts developed in one branch of science and engineering are often transferred to other
branches. One approach to transferring ideas and concepts is by the
use of analogies. Historically, the first attempt to relate mechanical and
electrical systems was due to James Clerk Maxwell and Lord Kelvin
in the 19th century by using the similarity between force and volt-
age, as is also apparent from the early use of the term electromotive
force (emf). This force-voltage analogy implies that a mechanical mass
is analogous to an electrical inductor. Once the force-voltage analogy
had been established, some scientists started to address some of its
limitations. These limitations led to the alternative force-current anal-
ogy (often referred to as the Firestone or mobility analogy), which im-
plies that a mechanical mass is analogous to an electrical capacitor.
For a further discussion, see Jeltsema & Scherpen (2009) and the ref-
erences therein.
Throughout the present work it is shown that a physical system
can be characterized by interconnections between energy storage el-
ements, energy dissipating elements, and the environment. One re-
markable feature of the storage elements in each domain is that their
structure is identical.

Figure B.1: Structure of a storage element: the flow f is integrated to the state x, and the effort e = dH/dx(x) is returned.

Indeed, an inductor and a capacitor, for example,
although being dual elements, can both be characterized by a port-
Hamiltonian representation of the form
\[
\dot{x} = f, \qquad e = \frac{dH}{dx}(x),
\]
which corresponds to the structure shown in Fig. B.1.
In contrast to common port-based modeling approaches, such as
the standard bondgraph formalism Paynter (1960) or classical energy-
and power-based approaches Jeltsema & Scherpen (2009), the port-
Hamiltonian framework uses only one type of storage, continuing
the work of Breedveld Breedveld (1984) on port-based modeling of
thermodynamics. From this perspective, it is natural to discriminate
among domains depending on the kind of energy that a certain part
of the system can store and split the usual physical domains into two
subdomains. Consequently, we do not speak of electrical, mechanical,
or hydraulic domains, but of electric and magnetic, mechanical kinetic
and mechanical potential, or hydraulic kinetic and hydraulic potential
subdomains. The chemical and thermic domains have no dual sub-
domains, which is related to the irreversible transformation of energy
in thermodynamics. This subdomain classification is based on the
generalized bond graph (GBG) framework introduced in Breedveld
(1984); Duindam et al. (2009), where f is referred to as the flow (rate of
change of state) and e as the effort (equilibrium-determining variable).
By considering only one type of storage, the state variables associated
to each subdomain are all treated at the same footing. Table B.1 shows
a complete overview.
The energy associated to each of these subdomains is then com-
puted as follows. Let e = Φ(x) represent the constitutive relationship
Table B.1: Domain classification in port-Hamiltonian framework.

physical subdomain     | flow f ∈ F        | effort e ∈ E        | storage state x ∈ X
electric               | current           | voltage             | charge
magnetic               | voltage           | current             | flux linkage
potential translation  | velocity          | force               | displacement
kinetic translation    | force             | velocity            | momentum
potential rotation     | angular velocity  | torque              | angular displacement
kinetic rotation       | torque            | angular velocity    | angular momentum
potential hydraulic    | volume flow       | pressure            | volume
kinetic hydraulic      | pressure          | volume flow         | flow tube momentum
chemical               | molar flow        | chemical potential  | number of moles
thermal                | entropy flow      | temperature         | entropy
of a one-port energy storing element. Then, the associated Hamiltonian is computed as
\[
H(x) = \int \Phi(x)\, dx,
\]
which essentially represents the area underneath the curve in the (e, x)–plane. Note that the resistive elements in each subdomain are
simply represented by relationships between e and f of the form
R(e, f ) = 0.
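As a small illustration of this computation (an assumed example, not taken from the text), the Hamiltonian of a storage element can be obtained symbolically from its constitutive relation, e.g. with SymPy.

# Hamiltonian of a one-port storage element from its constitutive relation e = Phi(x).
import sympy as sp

x, C = sp.symbols('x C', positive=True)

# Linear capacitor in the electric subdomain: state x = charge, effort e = x/C.
print(sp.integrate(x / C, x))              # x**2/(2*C)

# A saturating (nonlinear) constitutive relation e = tanh(x):
print(sp.integrate(sp.tanh(x), x))         # equals log(cosh(x)) up to a constant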
An advantage of considering only one type of storage is that the
concept of mechanical force has no unique meaning as it may play the
role of an ‘input’ (rate of change of state or flow) in the kinetic domain
or an ‘output’ (equilibrium-determining variable or effort) in the po-
tential domain. Hence, from a port-Hamiltonian perspective, the discussion of which analogy, when one exists, is superior becomes a non-issue.
Another (mathematical) advantage is that the state space manifold
X of the port-Hamiltonian system is one single object, with the flow
variables all belonging to the tangent space Tx X at a given state x
and the effort variables belonging to the co-tangent space Tx∗ X . This is
especially important if the state space is not a linear space.
Notice, however, that in the treatment of Chapter 12 the state vari-
ables may either belong to the vertex space or its dual, respectively, to
the edge space or its dual. For example, the state variable correspond-
ing to a capacitor naturally belongs to the edge space, while the state
variable corresponding to an inductor naturally belongs to the dual of
the edge space. This is very much in line with the use of through and
across variables MacFarlane (1970).
References

R.A. Abraham and J.E. Marsden. Foundations of Mechanics. Addison-Wesley, 2nd edition, 1994.
D. Angeli, A Lyapunov approach to incremental stability properties. IEEE Trans. Automatic Control, 47(3), pp. 410–421, 2002.
D. Angeli, Systems with counterclockwise input-output dynamics. IEEE
Trans. Automatic Control, 51(7), 1130–1143, 2006.
D. Angeli, Multistability in systems with counterclockwise input-output dy-
namics. IEEE Trans. Automatic Control, 52(4), 596–609, 2007.
M. Arcak. Passivity as a design tool for group coordination. Automatic Con-
trol, IEEE Transactions on, 52(8):1380 –1390, 2007.
V.I. Arnold, Mathematical Methods of Classical Mechanics, 2nd edition,
Springer, New York, 1978.
P. Bamberg, S. Sternberg, A course in mathematics for students of physics: 2, Cam-
bridge University Press, 1999.
V. Belevitch, Classical Network Theory, San Francisco: Holden-Day, 1968.
G. Blankenstein, "Geometric modeling of nonlinear RLC circuits”, IEEE
Trans. Circ. Syst., Vol. 52(2), 2005.
G. Blankenstein, A.J. van der Schaft, “Symmetry and reduction in implicit
generalized Hamiltonian systems”, Rep. Math. Phys., vol. 47, pp. 57–100,
2001.
A.M. Bloch, Nonholonomic dynamics and control, Interdisciplinary Applied
Mathematics, Springer, New York, 2003.
A.M. Bloch, P.E. Crouch. Representation of Dirac structures on vector spaces and nonlinear lcv-circuits. In H. Hermes, G. Ferraya, R. Gardner and
H.Sussman, editors, Proc. of Symposia in Pure mathematics, Differential Ge-
ometry and Control Theory, volume 64, pages 103–117, 1999.
B. Bollobas, Modern Graph Theory, Graduate Texts in Mathematics 184,
Springer, New York, 1998.
R.K. Brayton, J.K. Moser, "A theory of nonlinear networks I", Quart. Appl.
Math., 22(1):1–33, 1964.
R.K. Brayton, J.K. Moser, "A theory of nonlinear networks II", Quart. Appl.
Math., 22(2): 81–104, 1964.
P.C. Breedveld. Physical systems theory in terms of bond graphs. PhD the-
sis, Technische Hogeschool Twente, Enschede, The Netherlands, feb 1984.
ISBN 90-90005999-4.
P.C. Breedveld, "Port-based modeling of dynamic systems Systems", Chapter
1 in Modeling and Control of Complex Physical Systems; The Port-Hamiltonian
Approach, eds. V. Duindam, A. Macchelli, S. Stramigioli, H. Bruyninckx,
Springer, 2009.
R.W. Brockett, Control theory and analytical mechanics. In Geometric Control Theory (eds. C. Martin, R. Hermann), Vol. VII of Lie Groups: History, Frontiers and Applications, Math. Sci. Press, pages 1–46, 1977.
F. Bullo, A. D. Lewis, Geometric Control of Mechanical Systems, Texts in Applied
Mathematics 49, Springer-Verlag, New York-Heidelberg-Berlin, 2004.
M. Bürger, C. De Persis, S. Trip, "An internal model approach to (optimal)
frequency regulation in power grids", submitted, 2013.
M.K. Camlibel, S. Zhang, Partial consensus for heterogenous multi-agent
system with double integrator dynamics. 4th IFAC Conference on Analysis
and Design of Hybrid Systems, 2012, Eindhoven, the Netherlands.
M.K. Camlibel. Complementarity Methods in the Analysis of Piecewise Linear Dy-
namical Systems, PhD thesis, Tilburg University, 2001.
M.K. Camlibel, A.J. van der Schaft, Incrementally port-Hamiltonian systems,
52nd IEEE Conference on Decision and Control, Florence, Italy, December 10-
13, 2013.
J. Cervera, A.J. van der Schaft, A. Banos, "Interconnection of port-
Hamiltonian systems and composition of Dirac structures" Automatica, vol.
43, pp. 212–225, 2007.
L.O. Chua. "Memristor–the missing circuit element", IEEE Trans. on Circ. The-
ory, CT–18(2):507–519, September 1971.
J. Cortes, A.J. van der Schaft, P.E. Crouch, "Characterization of gradient con-
trol systems", SIAM J. Control Optim., 44(4): 1192–1214, 2005.
T.J. Courant, “Dirac manifolds”, Trans. Amer. Math. Soc., 319, pp. 631–661,
1990.
P.E. Crouch, "Geometric structures in systems theory, Proc. IEE-D, 128: 242–
252, 1981.
P.E. Crouch, "Spacecraft attitude control and stabilization: applications of ge-
ometric control theory to rigid body models", IEEE Trans. Automatic Con-
trol, AC-29(4), pp. 321-331, 1984.
P.E. Crouch, A.J. van der Schaft, Variational and Hamiltonian control systems,
Lect. Notes in Control and Information Sciences, Vol. 101, Springer-Verlag,
Berlin, 1987, p. 121.
M. Dalsmo, A.J. van der Schaft, “On representations and integrability of
mathematical structures in energy-conserving physical systems”, SIAM J.
Control and Optimization, vol.37, pp. 54–91, 1999.
C. DePersis, C.S. Kallesoe. Pressure regulation in nonlinear hydraulic net-
works by positive and quantized control. IEEE Transactions on Control Sys-
tems Technology, 19(6), pp. 1371–1383, 2011.
C.A. Desoer, M. Vidyasagar, Feedback systems: input-output properties, Siam
Classics in Applied Mathematics, SIAM, 1975.
D.A. Dirksz, J.M.A. Scherpen. Power-based control: Canonical coordinate
transformations, integral and adaptive control. Automatica, Vol. 48, pp.
1046–1056, 2012.
P.A.M. Dirac, Lectures in Quantum Mechanics, Belfer Graduate School of Sci-
ence, Yeshiva University Press, New York, 1964; Can. J. Math., 2, pp. 129–
148, 1950.
P.A.M. Dirac, "Generalized Hamiltonian Dynamics", Proc. R. Soc. A, 246, pp.
333–348, 1958.
I. Dorfman. Dirac Structures and Integrability of Nonlinear Evolution Equations.
John Wiley, Chichester, 1993.
V. Duindam, G. Blankenstein and S. Stramigioli, "Port-Based Modeling and
Analysis of Snake-board Locomotion”, Proc. 16th Int. Symp. on Mathemati-
cal Theory of Networks and Systems (MTNS2004), Leuven, 2004.
Modeling and Control of Complex Physical Systems; The Port-Hamiltonian Ap-
proach, eds. V. Duindam, A. Macchelli, S. Stramigioli, H. Bruyninckx,
Springer, 2009.
J.J. Duistermaat, "On Hessian Riemannian structures", Asian J. Math., 5(1): 79–92, 2001.
D. Eberard, B. Maschke, A.J. van der Schaft, "An extension of Hamiltonian
systems to the thermodynamic phase space: towards a geometry of non-
reversible thermodynamics", Reports on Mathematical Physics, Vol. 60, No.
2, pp. 175–198, 2007.
G. Escobar, A.J. van der Schaft, R. Ortega, “A Hamiltonian viewpoint in the
modelling of switching power converters”, Automatica, Special Issue on Hy-
brid Systems, vol.35, pp.445–452, 1999.
H.O. Fattorini, "Boundary control systems", SIAM J. Control, 6, pp. 349–385,
1968.
A. Favache and D. Dochain. Power-shaping control of reaction systems: The
CSTR case, Automatica, 46:1877-1883, 2010.
M. Feinberg, “Chemical reaction network structure and the stability of com-
plex isothermal reactors -I. The deficiency zero and deficiency one theo-
rems", Chemical Engineering Science, 43(10), pp. 2229–2268, 1987.
M. Feinberg, “Necessary and sufficient conditions for detailed balancing in
mass action systems of arbitrary complexity", Chemical Engineering Science,
44(9), pp. 1819–1827, 1989.
R. Frasca, M.K. Camlibel I.C. Goknar, L. Iannelli, and F. Vasca, "Linear pas-
sive networks with ideal switches: consistent initial conditions and state
discontinuities", IEEE Transactions on Circuits and Systems I, 2010, 57(12):
pp. 3138–3151.
F. Forni, R. Sepulchre, "On differentially dissipative dynamical systems", 9th
IFAC Symposium on Nonlinear Control Systems, Toulouse, France, September
4–6, 2013.
E. García-Canseco, D. Jeltsema, R. Ortega, and J.M.A. Scherpen. Power shap-
ing control of physical systems. Automatica, 46:127–132, 2010.
D. Goldin, S.A. Attia, J. Raisch Consensus for double integrator dynamics in
heterogeneous networks. 49th IEEE Conf. on Decision and Control (CDC),
Atlanta, USA, 2010, pp. 4504–4510.
G. Golo, Interconnection Structures in Port-Based Modelling: Tools for Analysis
and Simulation Ph.D. Thesis, University of Twente, The Netherlands, 2002.
G. Golo, V. Talasila, A.J. van der Schaft, B.M. Maschke, “Hamiltonian dis-
cretization of boundary control systems”, Automatica, vol. 40/5, pp. 757–771, 2004.
G. Golo, A. van der Schaft, P.C. Breedveld, B.M. Maschke, “Hamiltonian for-
mulation of bond graphs”, Nonlinear and Hybrid Systems in Automotive Con-
trol, Eds. R. Johansson, A. Rantzer, pp. 351–372, Springer London, 2003.
K. Gerritsen, A.J. van der Schaft, W.P.M.H. Heemels, “On switched Hamil-
tonian systems”, Proceedings 15th International Symposium on Mathematical
Theory of Networks and Systems (MTNS2002), Eds. D.S. Gilliam, J. Rosen-
thal, South Bend, August 12–16, 2002.
V. Guillemin, S. Sternberg, Some problems in integral geometry and some
related problems in micro-local analysis. American Journal of Mathematics,
101(4), 915–955, 1979.
M. Hernandez-Gomez, R. Ortega, F. Lamnabhi-Lagarrigue, and G. Esco-
bar. Adaptive PI Stabilization of Switched Power Converters, IEEE Trans,
Contr. Sys. Tech., 18(3):688–698, pp. May 2010.
D. Hill and P. Moylan. "The stability of nonlinear dissipative systems”, IEEE
Trans. Aut. Contr., 21(5):708–711, October 1976.
L. Hörmander, Fourier integral operators, I. Acta Math., 127, 79–183, 1971.
F.J.M. Horn, R. Jackson, “General mass action kinetics", Arch. Rational Mech.
Anal., 47, pp. 81–116, 1972.
R.S. Ingarden, A. Jamiolkowski, Classical Electrodynamics, PWN-Polish Sc.
Publ., Warszawa, Elsevier, 1985.
B. Jacob, H.J. Zwart, Linear port-Hamiltonian systems on infinite-dimensional
spaces, Series: Operator Theory: Advances and Applications, Vol. 223, Sub-
series: Linear Operators and Linear Systems, Birkhäuser, Basel, 2012.
B. Jayawardhana, R. Ortega, E. Garcia-Canseco, and F. Castanos, "Passivity of
nonlinear incremental systems: application to PI stabilization of nonlinear
RLC circuits”, Systems & Control Letters, Vol. 56, No. 9-10, pp. 618-622, 2007.
D. Jeltsema, R. Ortega, J.M.A. Scherpen, "On passivity and power-balance
inequalities of nonlinear RLC circuits", IEEE Transactions on Circuits and
Systems I: Fundamental Theory and Applications, 50(9): 1174–1179, 2003.
D. Jeltsema and J.M.A. Scherpen, "On Brayton and Moser's Missing Stability Theorem", IEEE Trans. Circ. and Syst.-II, Vol. 52, No. 9, 2005, pp. 550–552.
D. Jeltsema and J.M.A. Scherpen, "Multi-domain modeling of nonlinear net-
works and systems: energy- and power-based perspectives”, in August edi-
tion of IEEE Control Systems Magazine, pp. 28-59, 2009.
D. Jeltsema and A.J. van der Schaft, "Memristive Port-Hamiltonian Systems",
Mathematical and Computer Modelling of Dynamical Systems, Vol. 16, Issue 2,
pp. 75–93, April 2010.
D. Jeltsema and A. Doria-Cerezo, "Port-Hamiltonian Formulation of Systems with Memory", Proceedings of the IEEE, 100(6), pp. 1928–1937, 2012.
J.H. Keenan, "Availability and irreversible thermodynamics", Br. J. Appl.
Phys., 2, pp. 183, 1951.
J.J. Koopman and D. Jeltsema, “Casimir-based control beyond the dissipation
obstacle”, In Proc. IFAC Workshop on Lagrangian and Hamiltonian Methods for
Nonlinear Control, Bertinoro, Italy, 2012.
M. Kurula, H. Zwart, A.J. van der Schaft, J. Behrndt, "Dirac structures and
their composition on Hilbert spaces", Journal of Mathematical Analysis and
Applications, vol. 372, pp. 402–422, 2010.
A. Lanzon, I.R. Petersen, "Stability robustness of a feedback interconnection
of systems with negative imaginary frequency response", IEEE Trans. Au-
tomatic Control, 53(4), 1042–1046, 2008.
I.R. Petersen, A. Lanzon, "Feedback control of negative-imaginary systems",
IEEE Control Systems Magazine, 30(5), 54–72, 2010.
P. Libermann, C.-M. Marle. Symplectic Geometry and Analytical Mechanics. D.
Reidel Publishing Company, Dordrecht, Holland, 1987.
A.G.J. MacFarlane. Dynamical System Models. George G. Harrap & Co. Ltd.,
1970.
J. Machovski, J. Bialek, J. Bumby, Power System Dynamics: Stability and Control,
2nd ed., Wiley, 2008.
J. Maks, "A Spinor Approach to Port-Hamiltonian Systems”, in Proc. 19th In-
ternational Symposium on Mathematical Theory of Networks and Systems, Bu-
dapest, 5–9 July, 2010.
O.L. Mangasarian, Nonlinear Programming, McGraw-Hill, New York, 1969.
J.E. Marsden, T.S. Ratiu. Introduction to Mechanics and Symmetry: a basic expo-
sition to Classical Mechanics. London Mathematical Society Lecture Notes
Series. Springer, New York, 2nd edition, 1999.
B. Maschke, A.J. van der Schaft, “Port-controlled Hamiltonian systems: mod-
elling origins and system theoretic properties”, pp. 282–288 in Proceedings
2nd IFAC Symposium on Nonlinear Control Systems (NOLCOS'92), Ed. M.
Fliess, Bordeaux, France, June 1992.
B.M. Maschke, R. Ortega, A.J. van der Schaft, “Energy-based Lyapunov func-
tions for forced Hamiltonian systems with dissipation”, IEEE Transactions
on Automatic Control, 45, pp. 1498–1502, 2000.
J. Merker, ”On the geometric structure of Hamiltonian systems with ports",
J. Nonlinear Sci., 19: 717–738, 2009.
P.J. Morrison, "A paradigm for joined Hamiltonian and dissipative systems", Phys. D, 18(1–3):410–419, 1986.
H. Narayanan. Some applications of an implicit duality theorem to connec-
tions of structures of special types including Dirac and reciprocal struc-
tures. Systems & Control Letters, 45: 87–96, 2002.
J.I. Neimark, N.A. Fufaev, Dynamics of nonholonomic systems, Vol.33 of Trans-
lations of Mathematical Monographs, American Mathematical Society,
Providence, Rhode Island, 1972.
H. Nijmeijer, A.J. van der Schaft, Nonlinear Dynamical Control Systems,
Springer-Verlag, New York, 1990 (4th printing 1998), p. xiii+467.
P. J. Olver. Applications of Lie Groups to Differential Equations, volume 107 of
Graduate Texts in Mathematics. Springer, New-York, ii edition, 1993.
R. Ortega, A.J. van der Schaft, I. Mareels, & B.M. Maschke, “Putting energy
back in control”, Control Systems Magazine, vol. 21, pp. 18–33, 2001.
R. Ortega, A.J. van der Schaft, B. Maschke, G. Escobar, “Interconnection and
damping assignment passivity-based control of port-controlled Hamilto-
nian systems”, Automatica, vol. 38, pp. 585–596, 2002.
R. Ortega, A.J. van der Schaft, F. Castaños, A. Astolfi, "Control by inter-
connection and standard passivity-based control of port-Hamiltonian sys-
tems", IEEE Transactions on Automatic Control, vol. 53, pp. 2527–2542, 2008.
R. Ortega, D. Jeltsema, J.M.A. Scherpen, "Power shaping: a new paradigm for stabilization of nonlinear RLC circuits", IEEE Transactions on Automatic Control, 48(10): 1762–1767, 2003.
J.-P. Ortega, V. Planas-Bielsa, "Dynamics on Leibniz manifolds", J. Geom.
Phys., 52(1):1–27, 2004.
J.F. Oster, A.S. Perelson, A. Katchalsky, “Network dynamics: dynamic model-
ing of biophysical systems", Quarterly Reviews of Biophysics, 6(1), pp. 1-134,
1973.
R. Pasumarthy, A.J. van der Schaft, "Achievable Casimirs and its Implications
on Control of port-Hamiltonian systems", Int. J. Control, vol. 80 Issue 9, pp.
1421–1438, 2007.
A. Pavlov, A. Pogromsky, N. van de Wouw, H. Nijmeijer: Convergent dy-
namics: a tribute to Boris Pavlovich Demidovich. Systems & Control Letters
52 (2004), pp. 257–261.
H. M. Paynter, Analysis and design of engineering systems, M.I.T. Press, MA,
1960.
R. Polyuga, A.J. van der Schaft, "Structure preserving moment matching for
port-Hamiltonian systems: Arnoldi and Lanczos", IEEE Transactions on Au-
tomatic Control, 56(6), pp. 1458–1462, 2011.
S. Rao, A.J. van der Schaft, B. Jayawardhana, "A graph-theoretical approach
for the analysis and model reduction of complex-balanced chemical reac-
tion networks", J. Math. Chem., 51:2401–2422, 2013.
J.A. Roberson, C.T. Crowe. Engineering fluid mechanics. Houghton Mifflin
Company, 1993.
R.T. Rockafellar and J.-B. Wets, Variational Analysis, Series of Comprehensive
Studies in Mathematics 317. Springer, 1998.
M. Seslija, A. J. van der Schaft, and J. M.A. Scherpen, "Discrete Exterior Ge-
ometry Approach to Structure-Preserving Discretization of Distributed-
Parameter Port-Hamiltonian Systems", Journal of Geometry and Physics, Vol-
ume 62, 1509–1531, 2012.
A.J. van der Schaft, “Hamiltonian dynamics with external forces and obser-
vations”, Mathematical Systems Theory, 15, pp. 145–168, 1982.
A.J. van der Schaft, “Observability and controllability for smooth nonlinear
systems”, SIAM J. Control & Opt., 20, pp. 338–354, 1982.
A.J. van der Schaft, "Linearization of Hamiltonian and gradient systems”,
IMA J. Mathematical Control & Inf., 1:185–198, 1984.
A.J. van der Schaft, System theoretic descriptions of physical systems, CWI Tract
No. 3, CWI, Amsterdam, 1984, p. 256.
A.J. van der Schaft, “Equations of motion for Hamiltonian systems with con-
straints”, J. Physics A: Math. Gen., 20, pp. 3271–3277, 1987.
A.J. van der Schaft, “Implicit Hamiltonian systems with symmetry”, Rep.
Math. Phys., 41, pp.203–221, 1998.
A.J. van der Schaft, “Interconnection and geometry”, in The Mathematics of
Systems and Control: from Intelligent Control to Behavioral Systems, Editors
J.W. Polderman, H.L. Trentelman, Groningen, pp. 203–218, 1999.
A.J. van der Schaft, L2 -Gain and Passivity Techniques in Nonlinear Control,
Lect. Notes in Control and Information Sciences, Vol. 218, Springer-Verlag,
Berlin, 1996, p. 168, 2nd revised and enlarged edition, Springer-Verlag,
London, 2000 (Springer Communications and Control Engineering series),
p.xvi+249.
A.J. van der Schaft, "Port-Hamiltonian Systems", Chapter 2 in Modeling and
Control of Complex Physical Systems; The Port-Hamiltonian Approach, eds. V.
Duindam, A. Macchelli, S. Stramigioli, H. Bruyninckx, Springer, 2009.
A.J. van der Schaft, "Characterization and partial synthesis of the behavior
of resistive circuits at their terminals", Systems & Control Letters, vol. 59, pp.
423–428, 2010.
A.J. van der Schaft, "On the relation between port-Hamiltonian and gradient
systems", Preprints of the 18th IFAC World Congress, Milano (Italy) August
28 - September 2, pp. 3321–3326, 2011.
A.J. van der Schaft, "Positive feedback interconnection of Hamiltonian sys-
tems", 50th IEEE Conference on Decision and Control and European Control
Conference (CDC-ECC) Orlando, FL, USA, December 12-15, pp. 6510–6515,
2011.
A.J. van der Schaft, "Port-Hamiltonian differential-algebraic systems", pp.
173–226 in Surveys in Differential-Algebraic Equations I (eds. A. Ilchmann,
T. Reis), Differential-Algebraic Equations Forum, Springer, 2013.
A.J. van der Schaft, "On differential passivity", pp. 21–25 in Proc. 9th IFAC
Symposium on Nonlinear Control Systems (NOLCOS), Toulouse, France,
September 4–6, 2013.
A.J. van der Schaft, M.K. Camlibel, "A state transfer principle for switching
port-Hamiltonian systems", pp. 45–50 in Proc. 48th IEEE Conf. on Decision
and Control, Shanghai, China, December 16–18, 2009.
A.J. van der Schaft, B.M. Maschke, “On the Hamiltonian formulation of non-
holonomic mechanical systems”, Rep. Mathematical Physics, 34, pp. 225–
233, 1994
A.J. van der Schaft, B.M. Maschke, “The Hamiltonian formulation of energy conserving physical systems with external ports”, Archiv für Elektronik und Übertragungstechnik, 49: 362–371, 1995.
A.J. van der Schaft, B.M. Maschke, “Hamiltonian formulation of distributed-
parameter systems with boundary energy flow”, Journal of Geometry and
Physics, vol. 42, pp.166–194, 2002.
A.J. van der Schaft, B.M. Maschke, "Conservation Laws and Lumped System
Dynamics", in Model-Based Control; Bridging Rigorous Theory and Advanced
Technology, P.M.J. Van den Hof, C. Scherer, P.S.C. Heuberger, eds., Springer,
ISBN 978-1-4419-0894-0, pp. 31–48, 2009.
A.J. van der Schaft, B.M. Maschke, "Port-Hamiltonian systems on graphs",
SIAM J. Control Optim., 51(2), 906–937, 2013.
A.J. van der Schaft, S. Rao, B. Jayawardhana, "On the Mathematical Structure of Balanced Chemical Reaction Networks Governed by Mass Action Kinetics", SIAM J. Appl. Math., 73(2), 953–973, 2013.
S. Seshu and N. Balabanian, Linear Network Analysis, John Wiley & Sons, 1964.
S. Smale, "On the mathematical foundations of electrical circuit theory", J. Diff. Equations, 7:193–210, 1972.
M.C. Smith, Synthesis of Mechanical Networks: the Inerter. IEEE Trans. Au-
tom. Control, 47, no. 10, pp. 1648–1662, 2002.
S. Stramigioli, A.J. van der Schaft, B.M. Maschke, C. Melchiorri. Geometric
scattering in robotic telemanipulation”, IEEE Transactions on Robotics and
Automation, 18:588–596, 2002.
J. Vankerschaver, H. Yoshimura, J. E. Marsden. Stokes-Dirac structures
through reduction of infinite-dimensional Dirac structures. In Proc. 49th
IEEE Conference on Decision and Control, Atlanta, USA, December 2010.
J.A. Villegas, A Port-Hamiltonian Approach to Distributed Parameter Systems, Ph.D. Thesis, Twente University Press, 2007. Available at http://doc.utwente.nl
G.E. Wall, Geometric properties of generic differentiable manifolds, Lect. Notes in
Mathematics 597, Springer, Berlin, pp. 707–774, 1977.
R. Wegscheider, "Über simultane Gleichgewichte und die Beziehungen
zwischen Thermodynamik und Reaktionskinetik homogener Systeme",
Zetschrift für Physikalische Chemie, 39, pp. 257–303, 1902.
A. Weinstein, “The local structure of Poisson manifolds”, J. Differential Geom-
etry, 18, pp. 523-557, 1983.
J.C. Willems, "Dissipative dynamical systems. Part I: General theory", Arch.
Rat. Mech. and Analysis, 45(5): 321–351, 1972.
J.C. Willems, "Dissipative dynamical systems. Part II: Linear systems with
quadratic supply rates", Arch. Rat. Mech. and Analysis, 45(5): 352–393, 1972.
J.C. Willems, “The behavioral approach to open and interconnected sys-
tems”, Control Systems Magazine, 27, Dec. 2007, pp. 46–99.
J.C. Willems, "Terminals and ports", IEEE Circuits and Systems Magazine,
10(4):8–16, December 2010.
J.C. Willems, Power and energy as systemic properties, Part I: Electrical cir-
cuits, Part II: Mechanical systems, 52nd IEEE Conference on Decision and
Control, Florence, Italy, December 10–13, 2013.