
Stability of Vector Differential Delay Equations

Michael I. Gil’
Contents

Preface ix

1. Preliminaries 1
1.1. Banach and Hilbert spaces 1
1.2. Examples of normed spaces 3
1.3. Linear operators 4
1.4. Ordered spaces and Banach lattices 7
1.5. The abstract Gronwall lemma 8
1.6. Integral inequalities 10
1.7. Generalized norms 11
1.8. Causal mappings 12
1.9. Compact operators in a Hilbert space 14
1.10. Regularized determinants 16
1.11. Perturbations of determinants 17
1.12. Matrix functions of bounded variation 18
1.13. Comments 24

2. Some Results of the Matrix Theory 25


2.1. Notations 25
2.2. Representations of matrix functions 26
2.3. Norm estimates for resolvents 30
2.4. Spectrum perturbations 32
2.5. Norm estimates for matrix functions 36
2.6. Absolute values of entries of matrix functions 39
2.7. Diagonalizable matrices 42
2.8. Matrix exponential 48
2.9. Matrices with nonnegative off-diagonal parts 51
2.10. Comments 52

3. General Linear Systems 55


3.1. Description of the problem 55
3.2. Existence of solutions 57
3.3. Fundamental solutions 59
3.4. The generalized Bohl-Perron principle 60
3.5. Lp-version of the Bohl-Perron principle 63
3.6. Equations with infinite delay 65
3.7. Proof of Theorem 3.6.1 66
3.8. Equations with continuous infinite delay 68
3.9. Comments 70

4. Time-Invariant Linear Systems with Delay 71


4.1. Statement of the problem 71
4.2. Application of the Laplace transform 74
4.3. Norms of characteristic matrix functions 76
4.4. Norms of fundamental solutions of time-invariant systems 78
4.5. Systems with scalar delay-distributions 84
4.6. Scalar first order autonomous equations 85
4.7. Systems with one distributed delay 91
4.8. Estimates via determinants 95
4.9. Stability of diagonally dominant systems 96
4.10. Comments 97

5. Properties of Characteristic Values 99


5.1. Sums of moduli of characteristic values 99
5.2. Identities for characteristic values 103
5.3. Multiplicative representations of characteristic functions 105
5.4. Perturbations of characteristic values 107
5.5. Perturbations of characteristic determinants 109
5.6. Approximations by polynomial pencils 113
5.7. Convex functions of characteristic values 113
5.8. Comments 115

6. Equations Close to Autonomous and Ordinary Differential Ones 117


6.1. Equations "close" to ordinary differential ones 117
6.2. Equations with small delays 122
6.3. Nonautonomous systems "close" to autonomous ones 124
6.4. Equations with constant coefficients and variable delays 127
6.5. Proof of Theorem 6.4.1 129
6.6. The fundamental solution of equation (4.1) 132
6.7. Proof of Theorem 6.6.1 134
6.8. Comments 135

7. Periodic Systems 137


7.1. Preliminary results 137
7.2. The main result 139
7.3. Norm estimates for block matrices 141
7.4. Equations with one distributed delay 142
7.5. Applications of regularized determinants 144
7.6. Comments 147

8. Linear Equations with Oscillating Coefficients 149


8.1. Vector equations with oscillating coefficients 149
8.2. Proof of Theorem 8.1.1 153
8.3. Scalar equations with several delays 155
8.4. Proof of Theorem 8.3.1 160
8.5. Comments 162

9. Linear Equations with Slowly Varying Coefficients 165


9.1. The "freezing" method 165
9.2. Proof of Theorem 9.1.1 167
9.3. Perturbations of certain ordinary differential equations 168
9.4. Proof of Theorem 9.3.1 170
9.5. Comments 172

10. Nonlinear Vector Equations 173


10.1. Definitions and preliminaries 173
10.2. Stability of quasilinear equations 176
10.3. Absolute Lp -stability 179
10.4. Mappings defined in Ω(r) ∩ L2 181
10.5. Exponential stability 183
10.6. Nonlinear equations "close" to ordinary differential ones 185
10.7. Applications of the generalized norm 187
10.8. Systems with positive fundamental solutions 192
10.9. The Nicholson-type system 195
10.10. Input-to-state stability of general semilinear systems 197
10.11. Input-to-state stability of systems with one delay in linear parts 199
10.12. Comments 200

11. Scalar Nonlinear Equations 201


11.1. Preliminary results 201
11.2. Absolute stability 204
11.3. The Aizerman-Myshkis problem 207
11.4. Proof of Lemmas 11.3.2 and 11.3.4 210
11.5. First order nonlinear non-autonomous equations 213
11.6. Comparison of Green's functions to second-order equations 216
11.7. Comments 217

12. Forced Oscillations in Vector Semi-Linear Equations 219


12.1. Introduction and statement of the main result 219
12.2. Proof of Theorem 12.1.1 221
12.3. Applications of matrix functions 222
12.4. Comments 224

13. Steady States of Differential Delay Equations 225


13.1. Systems of semilinear equations 225
13.2. Essentially nonlinear systems 227
13.3. Nontrivial steady states 229
13.4. Positive steady states 231
13.5. Systems with differentiable entries 232
13.6. Comments 233

14. Multiplicative Representations of Solutions 235


14.1. Preliminary results 235
14.2. Volterra equations 238
14.3. Differential delay equations 239
14.4. Comments 241

Appendix A. The General Form of Causal Operators 243

Appendix B. Infinite Block Matrices 247

Bibliography 263

Index 271
Preface
1. The suggested book deals with the stability of linear and nonlinear vector differential delay equations. Equations with causal mappings are also considered. Explicit conditions for exponential, absolute and input-to-state stability are suggested. Moreover, solution estimates for the considered equations are established; they provide bounds for the regions of attraction of steady states. We are also interested in the existence of periodic solutions. In addition, the Hill method for ordinary differential equations with periodic coefficients is developed for the considered equations.
The main methodology presented in the book is based on a combined usage of the recent norm estimates for matrix-valued functions with the following methods and results:
a) the generalized Bohl-Perron principle and the integral version of the generalized Bohl-Perron principle;
b) the freezing method;
c) the positivity of fundamental solutions.
A significant part of the book is devoted to the solution of the Aizerman-Myshkis problem and to integrally small perturbations of the considered equations.

2. Functional differential equations naturally arise in various applications, such as control systems, viscoelasticity, mechanics, nuclear reactors, distributed networks, heat flow, neural networks, combustion, interaction of species, microbiology, learning models, epidemiology, physiology, and many others. The theory of functional differential equations has been developed in the works of V. Volterra, A.D. Myshkis, N.N. Krasovskii, B. Razumikhin, N. Minorsky, R. Bellman, A. Halanay, J. Hale and other mathematicians.
The problem of stability analysis of various equations continues to attract the attention of many specialists despite its long history. It remains one of the most pressing problems because a complete solution is still lacking. The basic method for stability analysis is the method of Lyapunov-type functionals, by which many very strong results have been obtained. We do not consider the Lyapunov functionals method here, because several excellent books cover this topic. It should be noted that finding Lyapunov-type functionals for vector equations is often connected with serious mathematical difficulties, especially for nonautonomous equations. In contrast, the stability conditions presented in this book are mainly formulated in terms of the determinants and eigenvalues of auxiliary matrices depending on a parameter. This allows us to apply well-known results of matrix theory to the stability analysis.
One of the methods considered in the book is the freezing method. That method was introduced by V.M. Alekseev in 1960 for the stability analysis of ordinary differential equations and was extended to functional differential equations by the author.
We also consider some classes of equations with causal mappings. These equations include differential, differential-delay, integro-differential and other traditional equations. The stability theory of nonlinear equations with causal mappings is at an early stage of development.
Furthermore, in 1949 M.A. Aizerman conjectured that a single-input single-output system is absolutely stable in the Hurwitz angle. That hypothesis caused great interest among specialists, but counter-examples showed that it is not, in general, true. Therefore, the following problem arose: to find the class of systems that satisfy Aizerman's hypothesis. The author has shown that any system satisfies the Aizerman hypothesis if its impulse function is non-negative. A similar result was proved for multivariable systems, distributed ones and in the input-output version.
On the other hand, in 1977 A.D. Myshkis pointed out the importance of considering the generalized Aizerman problem for retarded systems. In 2000 the author proved that a retarded system satisfies the generalized Aizerman hypothesis if its Green's function is non-negative.

3. The aim of the book is to provide new tools for specialists in the stability theory of functional differential equations, control system theory and mechanics. This is the first book that:
i) gives a systematic exposition of the approach to stability analysis of vector differential delay equations based on estimates for matrix-valued functions, allowing us to investigate various classes of equations from a unified viewpoint;
ii) contains a solution of the Aizerman-Myshkis problem;
iii) develops the Hill method for functional differential equations with periodic coefficients;
iv) presents the integral version of the generalized Bohl-Perron principle.

It also includes the freezing method for systems with delay and investigates
integrally small perturbations of differential delay equations with matrix coeffi-
cients.
The book is intended not only for specialists in stability theory, but also for anyone interested in various applications who has had at least a first-year graduate-level course in analysis.
I was very fortunate to have fruitful discussions with the late Professors M.A.
Aizerman, V.B. Kolmanovskii, M.A. Krasnosel’skii, A.D. Myshkis, A. Pokrovskii,
and A.A. Voronov, to whom I am very grateful for their interest in my investiga-
tions.
Chapter 1

Preliminaries

1.1 Banach and Hilbert spaces


In Sections 1-3 we recall very briefly some basic notions of the theory of Banach
and Hilbert spaces. More details can be found in any textbook on Banach and
Hilbert spaces (e.g. [2] and [16]).
Denote the set of complex numbers by C and the set of real numbers by R.
A linear space X over C is called a (complex) linear normed space if for any x ∈ X a non-negative number ‖x‖_X = ‖x‖ is defined, called the norm of x, having the following properties:
1. ‖x‖ = 0 iff x = 0,
2. ‖αx‖ = |α| ‖x‖,
3. ‖x + y‖ ≤ ‖x‖ + ‖y‖ for every x, y ∈ X, α ∈ C.
A sequence {h_n}_{n=1}^∞ of elements of X converges strongly (in the norm) to h ∈ X if

lim_{n→∞} ‖h_n − h‖ = 0.

A sequence {h_n} of elements of X is called a fundamental (Cauchy) sequence if

‖h_n − h_m‖ → 0 as m, n → ∞.

If any fundamental sequence converges to an element of X, then X is called a (complex) Banach space.
Let H be a linear space over C in which for all x, y ∈ H a number (x, y) is defined, such that
1. (x, x) > 0 if x ≠ 0, and (x, x) = 0 if x = 0,
2. (x, y) = \overline{(y, x)} (the bar denoting complex conjugation),
3. (x_1 + x_2, y) = (x_1, y) + (x_2, y) (x_1, x_2 ∈ H),
4. (λx, y) = λ(x, y) (λ ∈ C).

Then (·, ·) is called the scalar product. Define in H the norm by

‖x‖ = √(x, x).

If H is a Banach space with respect to this norm, then it is called a Hilbert space. The Schwarz inequality

|(x, y)| ≤ ‖x‖ ‖y‖

is valid.
If, in an infinite-dimensional Hilbert space, there is a countable set whose closure coincides with the space, then that space is said to be separable. Any separable Hilbert space H possesses an orthonormal basis. This means that there is a sequence {e_k ∈ H}_{k=1}^∞ such that

(e_k, e_j) = 0 if j ≠ k and (e_k, e_k) = 1 (j, k = 1, 2, ...),

and any h ∈ H can be represented as

h = Σ_{k=1}^∞ c_k e_k

with

c_k = (h, e_k), k = 1, 2, ... .

Moreover, the series converges strongly.
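A finite-dimensional sketch may make the expansion concrete. The basis, the vector h and the helper `dot` below are illustrative assumptions, not taken from the book; in R² the coefficients c_k = (h, e_k) rebuild h, and Parseval's identity ‖h‖² = Σ |c_k|² holds.

```python
# Expanding a vector h in an orthonormal basis {e_k} of R^2 (a sketch;
# the text concerns infinite-dimensional separable Hilbert spaces).
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

s = 1 / math.sqrt(2)
e = [(s, s), (s, -s)]          # an orthonormal basis of R^2
h = (3.0, -1.0)

c = [dot(h, ek) for ek in e]   # Fourier coefficients c_k = (h, e_k)
rebuilt = tuple(sum(c[k] * e[k][i] for k in range(2)) for i in range(2))

assert all(abs(rebuilt[i] - h[i]) < 1e-12 for i in range(2))
# Parseval: ||h||^2 equals the sum of the squared coefficients
assert abs(dot(h, h) - sum(ck * ck for ck in c)) < 1e-12
```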
Let X and Y be Banach spaces. A function f : X → Y is continuous if for any ε > 0 there is a δ > 0 such that ‖x − y‖_X ≤ δ implies ‖f(x) − f(y)‖_Y ≤ ε.
Theorem 1.1.1. (The Urysohn theorem) Let A and B be disjoint closed sets in a Banach space X. Then there is a continuous function f defined on X such that

0 ≤ f(x) ≤ 1 (x ∈ X), f(A) = 1 and f(B) = 0.

For the proof see, for instance, [16, p. 15].


Let a function x(t) be defined on a real segment [0, T] with values in X. An element x′(t_0) (t_0 ∈ (0, T)) is the derivative of x(t) at t_0 if

‖ (x(t_0 + h) − x(t_0))/h − x′(t_0) ‖ → 0 as |h| → 0.

Let x(t) be continuous at each point of [0, T]. Then one can define the Riemann integral as the limit in the norm of the integral sums:

lim_{max_k |Δt_k^{(n)}| → 0} Σ_{k=1}^n x(t_k^{(n)}) Δt_k^{(n)} = ∫_0^T x(t) dt

(0 = t_0^{(n)} < t_1^{(n)} < ... < t_n^{(n)} = T, Δt_k^{(n)} = t_k^{(n)} − t_{k−1}^{(n)}).

1.2 Examples of normed spaces


The following spaces are examples of normed spaces. For more details see [16, p. 238].
1. The complex n-dimensional Euclidean space C^n with the norm

‖x‖_n = (Σ_{k=1}^n |x_k|²)^{1/2} (x = {x_k}_{k=1}^n ∈ C^n).

2. The space B(S) is defined for an arbitrary set S and consists of all bounded scalar functions on S. The norm is given by

‖f‖ = sup_{s∈S} |f(s)|.

3. The space C(S) is defined for a topological space S and consists of all bounded continuous scalar functions on S. The norm is

‖f‖ = sup_{s∈S} |f(s)|.

4. The space L^p(S) is defined for any real number p, 1 ≤ p < ∞, and any set S having a finite Lebesgue measure. It consists of those measurable scalar functions on S for which the norm

‖f‖ = [∫_S |f(s)|^p ds]^{1/p}

is finite.
5. The space L^∞(S) is defined for any set S having a finite Lebesgue measure. It consists of all essentially bounded measurable scalar functions on S. The norm is

‖f‖ = ess sup_{s∈S} |f(s)|.

Note that the Hilbert space has been defined by a set of abstract axioms. It is noteworthy that some of the concrete spaces defined above satisfy these axioms, and hence are special cases of abstract Hilbert space. Thus, for instance, the n-dimensional space C^n is a Hilbert space, if the inner product (x, y) of two elements x = {x_1, ..., x_n} and y = {y_1, ..., y_n} is defined by the formula

(x, y) = Σ_{k=1}^n x_k \overline{y_k}.

In the same way, complex l² space is a Hilbert space if the scalar product (x, y) of the vectors x = {x_k} and y = {y_k} is defined by the formula

(x, y) = Σ_{k=1}^∞ x_k \overline{y_k}.

Also the complex space L²(S) is a Hilbert space with the scalar product

(f, g) = ∫_S f(s) \overline{g(s)} ds.

1.3 Linear operators


An operator A, acting from a Banach space X into a Banach space Y, is called a linear one if

A(αx_1 + βx_2) = αAx_1 + βAx_2

for any x_1, x_2 ∈ X and α, β ∈ C. If there is a constant a such that the inequality

‖Ah‖_Y ≤ a‖h‖_X for all h ∈ X

holds, then the operator is said to be bounded. The quantity

‖A‖_{X→Y} := sup_{h∈X, h≠0} ‖Ah‖_Y / ‖h‖_X

is called the norm of A. If X = Y we will write ‖A‖_{X→X} = ‖A‖_X or simply ‖A‖.


Under the natural definitions of addition and multiplication by a scalar, and the norm, the set B(X, Y) of all bounded linear operators acting from X into Y becomes a Banach space. If Y = X we will write B(X, X) = B(X). A sequence {A_n} of bounded linear operators from B(X, Y) converges in the uniform operator topology (in the operator norm) to an operator A if

lim_{n→∞} ‖A_n − A‖_{X→Y} = 0.

A sequence {A_n} of bounded linear operators converges strongly to an operator A if the sequence of elements {A_n h} strongly converges to Ah for every h ∈ X.
If φ is a linear operator acting from X into C, then it is called a linear functional. It is bounded (continuous) if φ(x) is defined for any x ∈ X, and there is a constant a such that the inequality

|φ(h)| ≤ a‖h‖_X for all h ∈ X

holds. The quantity

‖φ‖_X := sup_{h∈X, h≠0} |φ(h)| / ‖h‖_X

is called the norm of the functional φ. All linear bounded functionals on X form a Banach space with that norm. This space is called the space dual to X and is denoted by X*.
In the sequel I_X = I is the identity operator in X: Ih = h for any h ∈ X. The operator A^{−1} is the inverse one to A ∈ B(X, Y) if AA^{−1} = I_Y and A^{−1}A = I_X.

Let A ∈ B(X, Y) and consider a linear bounded functional f defined on Y. Then the linear bounded functional g(x) = f(Ax) is defined on X. The operator realizing the relation f → g is called the operator A* dual (adjoint) to A. By definition,

(A* f)(x) = f(Ax) (x ∈ X).

The operator A* is a bounded linear operator acting from Y* to X*.
Theorem 1.3.1. Let {A_k} be a sequence of linear operators acting from a Banach space X to a Banach space Y, and let

sup_k ‖A_k h‖_Y < ∞ for each h ∈ X.

Then the operator norms of {A_k} are uniformly bounded. Moreover, if {A_n} strongly converges to a (linear) operator A, then

‖A‖_{X→Y} ≤ sup_n ‖A_n‖_{X→Y}.

For the proof see, for example, [16, p. 66].


A point λ of the complex plane is said to be a regular point of an operator A if the operator R_λ(A) := (A − λI)^{−1} (the resolvent) exists and is bounded. The complement of the set of all regular points of A in the complex plane is the spectrum of A. The spectrum of A is denoted by σ(A).
The quantity

r_s(A) = sup_{s∈σ(A)} |s|

is the spectral radius of A. The Gel'fand formula

r_s(A) = lim_{k→∞} ‖A^k‖^{1/k}

is valid; the limit always exists. Moreover,

r_s(A) ≤ ‖A^k‖^{1/k}

for any integer k ≥ 1. In particular,

r_s(A) ≤ ‖A‖.
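The Gel'fand formula can be illustrated numerically. The 2×2 matrix, the max-row-sum (induced ∞-) norm and the helper functions below are illustrative assumptions, not computations from the book; the matrix is triangular, so its spectral radius 0.5 is known in advance.

```python
# Sketch of r_s(A) = lim ||A^k||^{1/k} and r_s(A) <= ||A|| for a 2x2 matrix.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def norm(A):                        # induced infinity-norm: max row sum
    return max(abs(a) + abs(b) for a, b in A)

A = [[0.5, 1.0], [0.0, 0.5]]        # triangular: both eigenvalues equal 0.5
r_s = 0.5

P, estimates = A, []
for k in range(1, 61):
    estimates.append(norm(P) ** (1.0 / k))   # ||A^k||^{1/k}
    P = matmul(P, A)

assert r_s <= norm(A)                                  # r_s(A) <= ||A||
assert all(r_s <= est + 1e-12 for est in estimates)    # r_s <= ||A^k||^{1/k}
assert abs(estimates[-1] - r_s) < 0.1                  # the estimates approach r_s
```

Note that ‖A‖ = 1.5 here, so the crude bound r_s(A) ≤ ‖A‖ is far from sharp, while ‖A^k‖^{1/k} improves as k grows.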
If there is a nontrivial solution e of the equation Ae = λ(A)e, where λ(A) is a number, then this number is called an eigenvalue of the operator A, and e is an eigenvector corresponding to λ(A). Any eigenvalue is a point of the spectrum. An eigenvalue λ(A) has (algebraic) multiplicity r ≤ ∞ if

dim( ∪_{k=1}^∞ ker (A − λ(A)I)^k ) = r.

In the sequel λ_k(A), k = 1, 2, ..., are the eigenvalues of A repeated according to their multiplicities.

A vector v satisfying (A − λ(A)I)^n v = 0 for some natural n is a root vector of the operator A corresponding to λ(A).
An operator V is called quasinilpotent if its spectrum consists of zero only.
Let a linear operator A be defined on a linear manifold D(A) of a Banach space X and map D(A) into a Banach space Y. Then D(A) is called the domain of A. A linear operator A is called a closed operator if x_n ∈ D(A), x_n → x_0 and Ax_n → y_0 in the norm imply that x_0 ∈ D(A) and Ax_0 = y_0.
Theorem 1.3.2. (The Closed Graph theorem) A closed linear map defined on all of a Banach space, with values in a Banach space, is continuous.
For the proof see [16, p. 57].
Theorem 1.3.3. (The Riesz-Thorin theorem) Assume T is a bounded linear operator from L^p(Ω_1) to L^p(Ω_2) and at the same time from L^q(Ω_1) to L^q(Ω_2) (1 ≤ p, q ≤ ∞). Then it is also a bounded operator from L^r(Ω_1) to L^r(Ω_2) for any r between p and q. In addition, the following inequality for the norms holds:

‖T‖_{L^r(Ω_1)→L^r(Ω_2)} ≤ max{ ‖T‖_{L^p(Ω_1)→L^p(Ω_2)}, ‖T‖_{L^q(Ω_1)→L^q(Ω_2)} }.

For the proof (in a more general situation) see [16, Section VI.10.11].
Theorem 1.3.4. Let f ∈ L¹(Ω) be a fixed integrable function and let T be the operator of convolution with f, i.e., for each function g ∈ L^p(Ω) (p ≥ 1) we have

(Tg)(t) = ∫_Ω f(t − s) g(s) ds.

Then

‖Tg‖_{L^p(Ω)} ≤ ‖f‖_{L¹(Ω)} ‖g‖_{L^p(Ω)}.

For the proof see [16, p. 528].
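A discrete analogue of Theorem 1.3.4 (Young's inequality for convolutions) is easy to check numerically: for finite sequences, ‖f ∗ g‖_p ≤ ‖f‖_1 ‖g‖_p. The sequences and helper functions below are arbitrary illustrative choices, not data from the book.

```python
# Discrete Young inequality sketch: ||f * g||_p <= ||f||_1 ||g||_p.
def convolve(f, g):
    n, m = len(f), len(g)
    return [sum(f[k] * g[t - k] for k in range(n) if 0 <= t - k < m)
            for t in range(n + m - 1)]

def lp_norm(x, p):
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

f = [0.3, -1.2, 0.5, 2.0]
g = [1.0, 0.7, -0.4, 0.1, 0.9]

for p in (1, 2, 3):
    lhs = lp_norm(convolve(f, g), p)
    rhs = lp_norm(f, 1) * lp_norm(g, p)
    assert lhs <= rhs + 1e-12      # the convolution bound holds for each p
```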
Now let us consider operators in a Hilbert space H. A bounded linear operator A* is adjoint to A if

(Af, g) = (f, A*g) for every f, g ∈ H.

The relation ‖A‖ = ‖A*‖ is true. A bounded operator A is a selfadjoint one if A = A*. A is a unitary operator if AA* = A*A = I. Here and below I ≡ I_H is the identity operator in H. A selfadjoint operator A is positive (negative) definite if

(Ah, h) ≥ 0 ((Ah, h) ≤ 0) for every h ∈ H.

A selfadjoint operator A is strongly positive (strongly negative) definite if there is a constant c > 0 such that

(Ah, h) ≥ c(h, h) ((Ah, h) ≤ −c(h, h)) for every h ∈ H.


A bounded linear operator satisfying the relation AA* = A*A is called a normal operator. It is clear that unitary and selfadjoint operators are examples of normal ones. The operator B ≡ A^{−1} is the inverse one to A if AB = BA = I. An operator P is called a projection if P² = P. If, in addition, P* = P, then it is called an orthogonal projection (an orthoprojection). The spectrum of a selfadjoint operator is real; the spectrum of a unitary operator lies on the unit circle.

1.4 Ordered spaces and Banach lattices


Following [91], let us introduce an inequality relation for normed spaces which can
be used analogously to the inequality relation for real numbers.
A non-empty set M with a relation ≤ is said to be an ordered set, whenever
the following conditions are satisfied.
i) x ≤ x for every x ∈ M,
ii) x ≤ y and y ≤ x implies that x = y and
iii) x ≤ y and y ≤ z implies that x ≤ z.
If, in addition, for any two elements x, y ∈ M either x ≤ y or y ≤ x, then M is
called a totally ordered set. Let A be a subset of an ordered set M . Then x ∈ M
is called an upper bound of A, if y ≤ x for every y ∈ A. z ∈ M is called a lower
bound of A, if y ≥ z for all y ∈ A. Moreover, if there is an upper bound of A,
then A is said to be bounded from above. If there is a lower bound of A, then A
is called bounded from below. If A is bounded from above and from below, then
we will briefly say that A is order bounded. Denote
[x, y] = {z ∈ M : x ≤ z ≤ y}.
That is, [x, y] is an order interval.
An ordered set (M, ≤) is called a lattice, if any two elements x, y ∈ M have
a least upper bound denoted by sup(x, y) and a greatest lower bound denoted by
inf (x, y). Obviously, a subset A is order bounded, if and only if it is contained in
some order interval.
Definition 1.4.1. A real vector space E which is also an ordered set is called an
ordered vector space, if the order and the vector space structure are compatible in
the following sense: if x, y ∈ E, such that x ≤ y, then x + z ≤ y + z for all z ∈ E
and ax ≤ ay for any positive number a. If, in addition, (E, ≤) is a lattice, then E
is called a Riesz space (or a vector lattice).
Let E be a Riesz space. The positive cone E_+ of E consists of all x ∈ E such that x ≥ 0. For every x ∈ E let

x_+ = sup(x, 0), x_− = sup(−x, 0), |x| = sup(x, −x)

be the positive part, the negative part and the absolute value of x, respectively.

Example 1.4.2. Let E = R^n and

R^n_+ = {(x_1, ..., x_n) ∈ R^n : x_k ≥ 0 for all k}.

Then R^n_+ is a positive cone, and for x = (x_1, ..., x_n), y = (y_1, ..., y_n) ∈ R^n we have

x ≤ y iff x_k ≤ y_k for all k,

and

|x| = (|x_1|, ..., |x_n|).
Example 1.4.3. Let X be a non-empty set and let B(X) be the collection of all
bounded real valued functions defined on X.
It is a simple and well-known fact that B(X) is a vector space ordered by
the positive cone

B(X)+ = {f ∈ B(X) : f (t) ≥ 0 for all t ∈ X}.

Thus f ≥ g holds if and only if f − g ∈ B(X)_+. Obviously, the function h_1 = sup(f, g) is given by

h_1(t) = max{f(t), g(t)},

and the function h_2 = inf(f, g) by

h_2(t) = min{f(t), g(t)},

for every t ∈ X and f, g ∈ B(X). This shows that B(X) is a Riesz space, and the absolute value of f takes the value |f(t)| at each t.
Definition 1.4.4. Let E be a Riesz space furnished with a norm ‖·‖ satisfying ‖x‖ ≤ ‖y‖ whenever |x| ≤ |y|. In addition, let the space E be complete with respect to that norm. Then E is called a Banach lattice.
The norm ‖·‖ in a Banach lattice E is said to be order continuous if

inf{‖x‖ : x ∈ A} = 0

for any downward directed set A ⊂ E such that inf{x ∈ A} = 0, cf. [91, p. 86].
The real spaces C(K), L^p(K) (K ⊆ R^n) and l^p (p ≥ 1) are examples of Banach lattices.
A bounded linear operator T in E is called a positive one if x ≥ 0 implies Tx ≥ 0.

1.5 The abstract Gronwall lemma


In this section E is a Banach lattice with the positive cone E+ .

Lemma 1.5.1. (The abstract Gronwall lemma) Let T be a bounded linear positive operator acting in E and having spectral radius

r_s(T) < 1.

Let x, f ∈ E_+. Then the inequality

x ≤ f + Tx

implies x ≤ y, where y is a solution of the equation

y = f + Ty.

Proof. Let Bx = f + Tx. Then x ≤ Bx implies x ≤ Bx ≤ B²x ≤ ... ≤ B^m x. This gives

x ≤ B^m x = Σ_{k=0}^{m−1} T^k f + T^m x → (I − T)^{−1} f = y as m → ∞,

since r_s(T) < 1 guarantees that the Neumann series converges and T^m x → 0. □
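The lemma admits a simple finite-dimensional sketch in E = R² with the componentwise order. The matrix T, the data f and the trial vector x below are illustrative assumptions, not taken from the book; T is entrywise non-negative with spectral radius < 1, so y = (I − T)^{−1} f can be obtained by iterating y ← f + Ty.

```python
# Abstract Gronwall lemma in R^2: x <= f + T x implies x <= y = (I - T)^{-1} f.
def apply(T, v):
    return [sum(T[i][j] * v[j] for j in range(2)) for i in range(2)]

T = [[0.2, 0.3], [0.1, 0.4]]    # positive operator; row sums 0.5 => r_s(T) < 1
f = [1.0, 1.0]
x = [1.2, 1.1]                   # a vector satisfying x <= f + T x componentwise

Tx = apply(T, x)
assert all(x[i] <= f[i] + Tx[i] for i in range(2))

y = [0.0, 0.0]                   # Neumann iteration for y = f + T y
for _ in range(200):
    Ty = apply(T, y)
    y = [f[i] + Ty[i] for i in range(2)]

assert all(x[i] <= y[i] + 1e-9 for i in range(2))     # conclusion: x <= y
assert all(abs(y[i] - 2.0) < 1e-9 for i in range(2))  # here y = (2, 2) exactly
```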

We will say that F : E → E is a non-decreasing mapping if v ≤ w (v, w ∈ E) implies F(v) ≤ F(w).
Lemma 1.5.2. Let F : E → E be a non-decreasing mapping with F(0) = 0. In addition, let there be a positive linear operator T in E such that the conditions

|F(v) − F(w)| ≤ T|v − w| (v, w ∈ E), (5.1)

and r_s(T) < 1 hold. Then the inequality

x ≤ F(x) + f (x, f ∈ E_+)

implies that x ≤ y, where y is a solution of the equation

y = F(y) + f.

Moreover, the inequality

z ≥ F(z) + f (z, f ∈ E_+)

implies that z ≥ y.
Proof. We have x = F(x) + h with an h ≤ f. Thanks to (5.1) and the condition r_s(T) < 1, the mappings F_f := F + f and F_h := F + h have the following properties: F_f^m and F_h^m are contracting for some integer m. So thanks to the generalized contraction mapping theorem [115], F_h^k(f) → x and F_f^k(f) → y as k → ∞. Moreover, F_f^k(f) ≥ F_h^k(f) for all k = 1, 2, ..., since F is non-decreasing and h ≤ f. This proves the inequality x ≤ y. Similarly the inequality z ≥ y can be proved. □

1.6 Integral inequalities


Let C(J, R^n) be the space of real vector-valued functions defined, bounded and continuous on a finite or infinite interval J. The inequalities below are understood in the coordinate-wise sense.
To derive various solution estimates, we essentially use the following lemma.
Lemma 1.6.1. Let K̂(t, s) be a matrix kernel with non-negative entries, such that the integral operator

(Kx)(t) = ∫_J K̂(t, s) x(s) ds

maps C(J, R^n) into itself and has spectral radius r_s(K) < 1. Then for any non-negative continuous vector function v(t) satisfying the inequality

v(t) ≤ ∫_J K̂(t, s) v(s) ds + f(t),

where f is a non-negative vector function continuous on J, the inequality v(t) ≤ u(t) (t ∈ J) is valid, where u(t) is a solution of the equation

u(t) = ∫_J K̂(t, s) u(s) ds + f(t).

Similarly, the inequality

v(t) ≥ ∫_J K̂(t, s) v(s) ds + f(t)

implies v(t) ≥ u(t) (t ∈ J).

Proof. The lemma is a particular case of the abstract Gronwall lemma. □

If J = [a, b] is an arbitrary finite interval,

(Kx)(t) = ∫_a^t K̂(t, s) x(s) ds (t ≤ b),

and the condition

sup_{t∈[a,b]} ∫_a^t ‖K̂(t, s)‖ ds < ∞

is fulfilled with an arbitrary matrix norm, then it is easy to show that r_s(K) = 0. The same equality for the spectral radius is true if

(Kx)(t) = ∫_t^b K̂(t, s) x(s) ds (t ≥ a),

provided

sup_{t∈[a,b]} ∫_t^b ‖K̂(t, s)‖ ds < ∞.
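The vanishing spectral radius of such Volterra-type operators has a transparent discrete counterpart, sketched below under illustrative assumptions (the kernel k(t, s) = 1 + ts and the grid size are arbitrary choices): with left-endpoint quadrature, the operator (Kx)(t) = ∫_a^t K̂(t, s)x(s) ds becomes a strictly lower triangular matrix, which is nilpotent, hence has spectral radius zero.

```python
# Discretized Volterra operator: strictly lower triangular => K^n = 0 => r_s = 0.
n, h = 8, 1.0 / 8

def kernel(t, s):
    return 1.0 + t * s              # an arbitrary non-negative kernel

# quadrature matrix: entry (i, j) ~ h * kernel(t_i, s_j) only for j < i
K = [[h * kernel(i * h, j * h) if j < i else 0.0 for j in range(n)]
     for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = K
for _ in range(n - 1):
    P = matmul(P, K)                # P = K^n

# every product of n strictly lower triangular factors vanishes
assert all(abs(P[i][j]) < 1e-12 for i in range(n) for j in range(n))
```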

1.7 Generalized norms


In this section nonlinear equations are considered in a space furnished with a vector (generalized) norm introduced by L. Kantorovich [108, p. 334]. Note that a vector norm enables us to use more complete information about equations than a usual (scalar) norm.
Throughout this section E is a Banach lattice with a positive cone E_+ and a norm ‖·‖_E.
Let X be an arbitrary set. Assume that in X a vector metric M(·, ·) is defined. That is, M(·, ·) maps X × X into E_+ with the usual properties: for all x, y, z ∈ X,

a) M(x, y) = 0 iff x = y; b) M(x, y) = M(y, x);

and

c) M(x, y) ≤ M(x, z) + M(y, z).

Clearly, X is a metric space with the metric m(x, y) = ‖M(x, y)‖_E. That is, a sequence {x_k ∈ X} converges to x in the metric m(·, ·) iff M(x_k, x) → 0 as k → ∞.
Lemma 1.7.1. Let X be a space with a vector metric M(·, ·) : X × X → E_+, and let F map a closed set Φ ⊆ X into itself with the property

M(F(x), F(y)) ≤ Q M(x, y) (x, y ∈ Φ), (7.1)

where Q is a positive operator in E whose spectral radius r_s(Q) is less than one: r_s(Q) < 1. Then, if X is complete in the generalized metric M(·, ·) (or, equivalently, in the metric m(·, ·)), F has a unique fixed point x ∈ Φ. Moreover, that point can be found by the method of successive approximations.
Proof. Following the usual proof of the contraction mapping theorem, we take an arbitrary x_0 ∈ Φ and define the successive approximations by the equality

x_k = F(x_{k−1}) (k = 1, 2, ...).

Hence,

M(x_{k+1}, x_k) = M(F(x_k), F(x_{k−1})) ≤ Q M(x_k, x_{k−1}) ≤ ... ≤ Q^k M(x_1, x_0).

For m > k we thus get

M(x_m, x_k) ≤ M(x_m, x_{m−1}) + M(x_{m−1}, x_k) ≤ ... ≤ Σ_{j=k}^{m−1} M(x_{j+1}, x_j) ≤ Σ_{j=k}^{m−1} Q^j M(x_1, x_0).

Inasmuch as r_s(Q) < 1,

M(x_m, x_k) ≤ Q^k (I − Q)^{−1} M(x_1, x_0) → 0 (k → ∞).
Here and below I is the unit operator in the corresponding space. Consequently, the points x_k converge in the metric M(·, ·) to an element x ∈ Φ. Since lim_{k→∞} F(x_k) = F(x), x is the fixed point due to (7.1). Thus, the existence is proved.
To prove the uniqueness, let us assume that y ≠ x is a fixed point of F as well. Then by (7.1), M(x, y) = M(F(x), F(y)) ≤ Q M(x, y), or

(I − Q) M(x, y) ≤ 0.

But I − Q is positively invertible, because r_s(Q) < 1. In this way, M(x, y) ≤ 0. This proves the result. □

Now let X be a linear space with a vector (generalized) norm M(·). That is, M(·) maps X into E_+ and is subject to the usual axioms: for all x, y ∈ X,

M(x) > 0 if x ≠ 0; M(λx) = |λ| M(x) (λ ∈ C); M(x + y) ≤ M(x) + M(y).

Following [108], we shall call E a norming lattice, and X a lattice-normed space. Clearly, X with a generalized (vector) norm M(·) : X → E_+ is a normed space with the norm

‖h‖_X = ‖M(h)‖_E (h ∈ X). (7.2)

Now the previous lemma implies
Corollary 1.7.2. Let X be a space with a generalized norm M(·) : X → E_+, and let F map a closed set Φ ⊆ X into itself with the property

M(F(x) − F(y)) ≤ Q M(x − y) (x, y ∈ Φ),

where Q is a positive operator in E with r_s(Q) < 1. Then, if X is complete in the norm defined by (7.2), F has a unique fixed point x ∈ Φ. Moreover, that point can be found by the method of successive approximations.
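A small sketch in R² shows why the vector norm carries more information than a scalar one. The matrix Q, the offset b and the linear map F(x) = Qx + b below are illustrative assumptions, not from the book: Q is entrywise non-negative with r_s(Q) ≈ 0.545 < 1, yet the max-row-sum norm of Q is 1.5 > 1, so the ordinary scalar contraction test fails while the generalized-norm test of Corollary 1.7.2 succeeds, and the successive approximations still converge.

```python
# Generalized-norm contraction: |F(x) - F(y)| <= Q|x - y| componentwise,
# with r_s(Q) < 1 although the induced infinity-norm of Q exceeds 1.
import math

Q = [[0.3, 1.2], [0.05, 0.3]]
b = [1.0, 1.0]

# spectral radius of the 2x2 matrix Q from its characteristic polynomial
tr = Q[0][0] + Q[1][1]
det = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
r_s = abs(tr / 2) + math.sqrt(max(tr * tr / 4 - det, 0.0))
assert r_s < 1.0
assert max(Q[0][0] + Q[0][1], Q[1][0] + Q[1][1]) > 1.0  # scalar test fails

def F(x):
    return [Q[i][0] * x[0] + Q[i][1] * x[1] + b[i] for i in range(2)]

x = [0.0, 0.0]
for _ in range(300):
    x = F(x)                       # successive approximations

assert all(abs(x[i] - F(x)[i]) < 1e-9 for i in range(2))  # fixed point reached
```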

1.8 Causal mappings


Let X(a, b) = X([a, b]; Y) (−∞ < a < b ≤ ∞) be a normed space of functions defined on [a, b] with values in a normed space Y, with the unit operator I. For example, X(a, b) = C([a, b], C^n) or X(a, b) = L^p([a, b], C^n).
Let P_τ (a < τ < b) be the projections defined by

(P_τ w)(t) = { w(t) if a ≤ t ≤ τ, 0 if τ < t ≤ b } (w ∈ X(a, b)),

with P_a = 0 and P_b = I.

Definition 1.8.1. Let F be a mapping in X(a, b) having the following properties:

F0 ≡ 0, (8.1)

and for all τ ∈ [a, b] the equality

P_τ F P_τ = P_τ F (8.2)

holds. Then F will be called a causal mapping (operator).
This definition is somewhat different from the definition of the causal operator suggested in [13]; in the case of linear operators our definition coincides with the one accepted in [17]. Note that if F is defined on a closed set Ω ∋ 0 of X(a, b), then due to the Urysohn theorem, F can be extended by zero to the whole space. Put X(a, τ) = P_τ X(a, b). Note that if X(a, b) = C(a, b), then P_τ f is not continuous in C(a, b) for an arbitrary f ∈ C(a, b). So P_τ is defined on the whole space C(a, b) but maps C(a, b) into the space B(a, b) ⊃ C(a, b), where B(a, b) is the space of bounded functions. However, if P_τ F f is continuous on [a, τ], that is, P_τ F f ∈ C(a, τ) for all τ ∈ (a, b], and relations (8.1) and (8.2) hold, then F is causal in C(a, b).
Let us point an example of a causal mapping. To this end consider in C(0, T )
the mapping
Z t
(F w)(t) = f (t, w(t)) + k(t, s, w(s))ds (0 ≤ t ≤ T ; w ∈ C(0, T ))
0

with a continuous kernel k, defined on [0, T ]2 × R and a continuous function


f : [0, T ] × R → R, satisfying k(t, s, 0) ≡ 0 and f (t, 0) ≡ 0.
For each τ ∈ (0, T ), we have

(Pτ F w)(t) = fτ (t, w(t)) + Pτ ∫_0^t k(t, s, w(s))ds,

where

fτ (t, w(t)) = f (t, w(t)) if 0 ≤ t ≤ τ, and fτ (t, w(t)) = 0 if τ < t ≤ T.

Clearly,

fτ (t, w(t)) = fτ (t, wτ (t)), where wτ = Pτ w.

Moreover,

Pτ ∫_0^t k(t, s, w(s))ds = Pτ ∫_0^t k(t, s, wτ (s))ds = 0 (t > τ )

and

Pτ ∫_0^t k(t, s, w(s))ds = ∫_0^t k(t, s, w(s))ds = ∫_0^t k(t, s, wτ (s))ds (t ≤ τ ).

Hence it follows that the considered mapping is causal. Note that the integral
operator

∫_0^c k(t, s, w(s))ds

with a fixed positive c ≤ T is not causal.
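The causality relation (8.2) can be checked numerically. The sketch below (a minimal discretization; the grid, the choices f(t, x) = t·x and k(t, s, x) = s·sin x, and the test function are all illustrative assumptions, chosen only so that f(t, 0) ≡ 0 and k(t, s, 0) ≡ 0 hold) compares Pτ F Pτ w with Pτ F w for the Volterra-type mapping above.

```python
import numpy as np

# Illustrative discretization of (Fw)(t) = f(t, w(t)) + int_0^t k(t, s, w(s)) ds
# on C(0, T), with the assumed choices f(t, x) = t*x and k(t, s, x) = s*sin(x).
T, m = 1.0, 201
t = np.linspace(0.0, T, m)
dt = t[1] - t[0]

def F(w):
    out = np.empty_like(w)
    for i in range(m):
        integrand = t[:i + 1] * np.sin(w[:i + 1])   # k(t_i, s, w(s)) on [0, t_i]
        out[i] = t[i] * w[i] + integrand.sum() * dt  # left Riemann sum
    return out

def P(tau, w):
    # the projection (P_tau w)(t): keep values for t <= tau, zero afterwards
    return np.where(t <= tau, w, 0.0)

w = np.cos(3.0 * t) + t**2          # an arbitrary test function
tau = 0.4
lhs = P(tau, F(P(tau, w)))          # P_tau F P_tau w
rhs = P(tau, F(w))                  # P_tau F w
causality_gap = np.max(np.abs(lhs - rhs))
print(causality_gap)                # zero up to rounding: relation (8.2) holds
```

On the grid the two sides coincide exactly, since F(w)(t) only consults w on [0, t].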

1.9 Compact operators in a Hilbert space


A linear operator A mapping a normed space X into a normed space Y is said to
be completely continuous (compact) if it is bounded and maps each bounded set
in X into a compact one in Y . The spectrum of a compact operator is either finite
or is a sequence of eigenvalues converging to zero; every nonzero eigenvalue of A
has finite multiplicity.
This section deals with completely continuous operators acting in a separable
Hilbert space H. All the results presented in this section are taken from the books
[2] and [65].
Any normal compact operator can be represented in the form

A = Σ_{k=1}^{∞} λk (A)Ek ,

where Ek are the eigenprojections of A, i.e. the projections defined by Ek h = (h, dk )dk
for all h ∈ H. Here dk are the normalized eigenvectors of A. Recall that eigenvectors
of a normal operator corresponding to distinct eigenvalues are mutually orthogonal.
A completely continuous quasinilpotent operator is sometimes called a Volterra
operator.
Let {ek } be an orthonormal basis in H, and suppose that the series

Σ_{k=1}^{∞} (Aek , ek )

converges. Then the sum of this series is called the trace of A:

Trace A = Tr A = Σ_{k=1}^{∞} (Aek , ek ).

An operator A satisfying the condition

T r (A∗ A)1/2 < ∞

is called a nuclear operator. An operator A, satisfying the relation

T r (A∗ A) < ∞

is said to be a Hilbert-Schmidt operator.


The eigenvalues λk ((A∗ A)^{1/2} ) (k = 1, 2, ...) of the operator (A∗ A)^{1/2} are
called the singular numbers (s-numbers) of A and are denoted by sk (A). That is,

sk (A) := λk ((A∗ A)^{1/2} ) (k = 1, 2, ...).

Enumerate the singular numbers of A taking into account their multiplicity and in
decreasing order. The set of completely continuous operators acting in a Hilbert
space and satisfying the condition

Np (A) := ( Σ_{k=1}^{∞} s_k^p (A) )^{1/p} < ∞

for some p ≥ 1, is called the von Neumann - Schatten ideal and is denoted by SNp .
Np (.) is called the norm of the ideal SNp . It is not hard to show that

Np (A) = ( Tr (AA∗ )^{p/2} )^{1/p} .

Thus, SN1 is the ideal of nuclear operators (the Trace class) and SN2 is the ideal of
Hilbert-Schmidt operators. N2 (A) is called the Hilbert-Schmidt norm. Sometimes
we will omit the index 2 of the Hilbert-Schmidt norm, i.e.

N (A) := N2 (A) = ( Tr (A∗ A) )^{1/2} .

For any orthonormal basis {ek } we can write

N2 (A) = ( Σ_{k=1}^{∞} ‖Aek ‖^2 )^{1/2} .

This equality is equivalent to the following one:

N2 (A) = ( Σ_{j,k=1}^{∞} |ajk |^2 )^{1/2} ,

where ajk = (Aek , ej ) (j, k = 1, 2, . . . ) are the entries of a Hilbert-Schmidt operator A
in an orthonormal basis {ek }.
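In finite dimensions the three expressions for N2 (A) above can be compared directly. The following minimal sketch (the 5 × 5 random matrix and the seed are illustrative assumptions) checks that the trace formula, the basis formula, and the entry formula agree.

```python
import numpy as np

# Three equivalent expressions for the Hilbert-Schmidt norm N2(A):
# via Tr(A*A), via the images of an orthonormal basis, and via the entries.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

n2_trace = np.sqrt(np.trace(A.conj().T @ A).real)          # (Tr A*A)^{1/2}
e = np.eye(5)                                              # orthonormal basis
n2_basis = np.sqrt(sum(np.linalg.norm(A @ e[:, k])**2 for k in range(5)))
n2_entries = np.sqrt((np.abs(A)**2).sum())                 # (sum |a_jk|^2)^{1/2}

print(n2_trace, n2_basis, n2_entries)   # all three agree up to rounding
```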
For all finite p ≥ 1, the following propositions are true (the proofs can be
found in the book [65, Section 3.7]).
If A ∈ SNp , then also A∗ ∈ SNp . If A ∈ SNp and B is a bounded linear
operator, then both AB and BA belong to SNp . Moreover,

Np (AB) ≤ Np (A)‖B‖ and Np (BA) ≤ Np (A)‖B‖.

In addition, the inequality

Σ_{j=1}^{n} |λj (A)|^p ≤ Σ_{j=1}^{n} s_j^p (A) (n = 1, 2, . . .)

is valid, cf. [65, Theorem II.3.1].



Lemma 1.9.1. If A ∈ SNp and B ∈ SNq (1 < p, q < ∞), then AB ∈ SNs with

1/s = 1/p + 1/q.

Moreover,

Ns (AB) ≤ Np (A)Nq (B).

For the proof of this lemma see [65, Section III.7]. Recall also the following
result (Lidskij’s theorem).

Theorem 1.9.2. Let A ∈ SN1 . Then

Tr A = Σ_{k=1}^{∞} λk (A).

The proof of this theorem can be found in [65, Section III.8].
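In finite dimensions every matrix is nuclear and Lidskij's identity reduces to the elementary fact that the trace equals the sum of the eigenvalues; a quick numerical check (random 6 × 6 matrix and seed are illustrative):

```python
import numpy as np

# Finite-dimensional instance of Lidskij's theorem: Tr A = sum of eigenvalues.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
trace_direct = np.trace(A)
trace_eigen = np.sum(np.linalg.eigvals(A)).real   # eigenvalues come in conjugate pairs
print(trace_direct, trace_eigen)
```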

1.10 Regularized determinants


The regularized determinant of I − A with A ∈ SNp (p = 1, 2, ...) is defined as

detp (I − A) := Π_{j=1}^{∞} Ep (λj (A)),

where λj (A) are the eigenvalues of A with their multiplicities, arranged in decreasing
order of their moduli, and

Ep (z) := (1 − z) exp [ Σ_{m=1}^{p−1} z^m /m ] (p > 1) and E1 (z) := 1 − z.

As shown below, regularized determinants are useful for the investigation
of periodic systems.
The following lemma is proved in [57].
Lemma 1.10.1. The inequality

|Ep (z)| ≤ exp [ζp |z|^p ]

is valid, where

ζp = (p − 1)/p (p ≠ 1, p ≠ 3) and ζ1 = ζ3 = 1.
From this lemma one can immediately obtain the following result.
Lemma 1.10.2. Let A ∈ SNp (p = 1, 2, ...). Then

|detp (I − A)| ≤ exp [ζp N_p^p (A)].
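For a matrix, det2 (I − A) can be evaluated directly from the eigenvalues, and the bound of Lemma 1.10.2 with ζ2 = 1/2 can be checked numerically. A minimal sketch (the random 5 × 5 matrix, the scaling 0.3, and the seed are illustrative):

```python
import numpy as np

# det_2(I - A) = prod_j (1 - lambda_j) e^{lambda_j}, since E_2(z) = (1 - z)e^z;
# Lemma 1.10.2 gives |det_2(I - A)| <= exp(zeta_2 * N_2^2(A)) with zeta_2 = 1/2.
rng = np.random.default_rng(2)
A = 0.3 * rng.standard_normal((5, 5))
lam = np.linalg.eigvals(A)
det2 = np.prod((1.0 - lam) * np.exp(lam))
n2sq = (np.abs(A)**2).sum()                 # N_2^2(A)
bound = np.exp(0.5 * n2sq)
print(abs(det2), bound)                     # |det2| stays below the bound
```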



Let us also point out the lower bound for regularized determinants which has
been established in [45]. To this end denote by L a Jordan contour connecting 0
and 1, lying in the disc {z ∈ C : |z| ≤ 1} and not containing the points 1/λj
for any eigenvalue λj of A, such that

φL (A) := inf_{s∈L; k=1,2,...} |1 − sλk | > 0.

Theorem 1.10.3. Let A ∈ SNp for an integer p ≥ 1 and 1 ∉ σ(A). Then

|detp (I − A)| ≥ exp [ −ζp N_p^p (A) / φL (A) ].

1.11 Perturbations of determinants


Now let us consider perturbations of determinants. Let X and Y be complex
normed spaces with norms k.kX and k.kY , respectively, and F be a Y -valued
function defined on X. Assume that F (C + λC̃) (λ ∈ C) is an entire function for
all C, C̃ ∈ X. That is, for any φ ∈ Y ∗ , the functional < φ, F (C + λC̃) > defined
on Y is an entire scalar valued function of λ. In [44], the following lemma has been
proved.
Lemma 1.11.1. Let F (C + λC̃) (λ ∈ C) be an entire function for all C, C̃ ∈ X
and let there be a monotone non-decreasing function G : [0, ∞) → [0, ∞), such that
‖F (C)‖Y ≤ G(‖C‖X ) (C ∈ X). Then

‖F (C) − F (C̃)‖Y ≤ ‖C − C̃‖X G(1 + (1/2)‖C + C̃‖X + (1/2)‖C − C̃‖X ) (C, C̃ ∈ X).
Lemmas 1.10.1 and 1.11.1 imply the following result.

Corollary 1.11.2. The inequality

|detp (I − A) − detp (I − B)| ≤ δp (A, B) (A, B ∈ SNp , 1 ≤ p < ∞)

is true, where

δp (A, B) := Np (A − B) exp [ ζp (1 + (1/2)(Np (A + B) + Np (A − B)))^p ]
≤ Np (A − B) exp [ (1 + Np (A) + Np (B))^p ].
Now let A and B be n × n-matrices. Then due to the inequality between the
arithmetic and geometric mean values,

|det A|^{2/n} = ( Π_{k=1}^{n} |λk (A)|^2 )^{1/n} ≤ (1/n) Σ_{k=1}^{n} |λk (A)|^2 .

Thus,

|det A| ≤ N_2^n (A) / n^{n/2} .
Moreover, |det A| ≤ kAkn for an arbitrary matrix norm. Hence, Lemma 1.11.1
implies our next result.
Corollary 1.11.3. Let A and B be n × n-matrices. Then

|det A − det B| ≤ (1/n^{n/2}) N2 (A − B) ( 1 + (1/2)N2 (A − B) + (1/2)N2 (A + B) )^n

and

|det A − det B| ≤ ‖A − B‖ ( 1 + (1/2)‖A − B‖ + (1/2)‖A + B‖ )^n

for an arbitrary matrix norm ‖.‖.
Now let us recall the well-known inequality for determinants.

Theorem 1.11.4. (Ostrowski [97]) Let A = (ajk ) be a real n × n-matrix. Then the
inequality

|det A| ≥ Π_{j=1}^{n} ( |ajj | − Σ_{m=1,m≠j}^{n} |ajm | )

is valid, provided

|ajj | > Σ_{m=1,m≠j}^{n} |ajm | (j = 1, ..., n).
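Ostrowski's lower bound is easy to exercise numerically on a strictly row diagonally dominant matrix. A minimal sketch (the matrix construction and seed are illustrative assumptions that merely enforce the dominance hypothesis):

```python
import numpy as np

# Ostrowski's inequality: |det A| >= prod_j (|a_jj| - R_j), where R_j is the
# sum of off-diagonal moduli in row j, provided |a_jj| > R_j for every j.
rng = np.random.default_rng(3)
A = rng.uniform(-0.2, 0.2, (5, 5))
np.fill_diagonal(A, 3.0 + rng.uniform(0.0, 1.0, 5))   # enforce dominance

R = np.abs(A).sum(axis=1) - np.abs(np.diag(A))        # off-diagonal row sums
lower = np.prod(np.abs(np.diag(A)) - R)
print(abs(np.linalg.det(A)), lower)                    # |det A| exceeds the bound
```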

1.12 Matrix functions of bounded variations


A scalar function g : [a, b] → R is a function of bounded variation if

var (g) = Var_{t∈[a,b]} g(t) := sup_P Σ_{i=0}^{n−1} |g(ti+1 ) − g(ti )| < ∞,

where the supremum is taken over the set of all partitions P of the interval [a, b].
Any function of bounded variation g : [a, b] → R is a difference of two bounded
nondecreasing functions. If g is differentiable and its derivative is integrable, then
its variation satisfies

var (g) ≤ ∫_a^b |g′ (s)|ds.

For more details see [16, p. 140]. Sometimes we will write

var (g) = ∫_a^b |dg(s)|.
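The relation between variation and the integral of |g′| can be illustrated numerically. A minimal sketch for the illustrative choice g(s) = sin s on [0, 2π], whose total variation is exactly 4:

```python
import numpy as np

# var(g) <= int_a^b |g'(s)| ds, with equality for smooth g; here g(s) = sin s
# on [0, 2*pi], so both quantities equal 4.
s = np.linspace(0.0, 2.0 * np.pi, 100001)
g = np.sin(s)
variation = np.abs(np.diff(g)).sum()              # variation over a fine partition
integral = (np.abs(np.cos(s)).sum()) * (s[1] - s[0])   # Riemann sum of |g'|
print(variation, integral)                         # both close to 4
```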

Let ‖x‖n be the Euclidean norm of a vector x and ‖A‖n be the spectral norm
of a matrix A. The norm of f in C([a, b], Cn ) is sup_t ‖f (t)‖n ; in Lp ([a, b], Cn ) (1 ≤
p < ∞) its norm is ( ∫_a^b ‖f (t)‖_n^p dt )^{1/p} ; in L∞ ([a, b], Cn ) its norm is vrai sup_t ‖f (t)‖n .
For a real matrix valued function R0 (s) = (rij (s))_{i,j=1}^n defined on a real finite
segment [a, b], whose entries have the bounded variations

var(rij ) = var_{s∈[a,b]} rij (s),

we can define its variation as the matrix

Var(R0 ) = Var_{s∈[a,b]} R0 (s) = (var(rij ))_{i,j=1}^n .

For a c > b and an f ∈ C([a, c], Cn ) put

E0 f (t) = ∫_a^b dR0 (s)f (t − s) (b ≤ t ≤ c).

Below we prove that E0 is bounded in the norm of Lp (p ≥ 1) on the set of
continuous functions and therefore can be extended to the whole space Lp , since
that set is dense in Lp .
Denote

var (R0 ) := ‖Var(R0 )‖n

and

ζ1 (R0 ) := Σ_{j=1}^{n} ( Σ_{k=1}^{n} (var(rjk ))^2 )^{1/2} .

So var (R0 ) is the spectral norm of the matrix Var (R0 ).


Lemma 1.12.1. Suppose all the entries rjk of the matrix function R0 defined on
[a, b] have bounded variations. Then the inequalities

‖E0 ‖_{C([a,c],Cn )→C([b,c],Cn )} ≤ √n var (R0 ), (12.1)

‖E0 ‖_{L∞ ([a,c],Cn )→L∞ ([b,c],Cn )} ≤ √n var (R0 ), (12.2)

‖E0 ‖_{L2 ([a,c],Cn )→L2 ([b,c],Cn )} ≤ var (R0 ), (12.3)

and

‖E0 ‖_{L1 ([a,c],Cn )→L1 ([b,c],Cn )} ≤ ζ1 (R0 ) (12.4)

are valid.
Proof. Let f (t) = (fk (t))_{k=1}^n ∈ C([a, c], Cn ). For each coordinate (E0 f )j (t) of
E0 f (t) we have

|(E0 f )j (t)| = | ∫_a^b Σ_{k=1}^{n} fk (t − s)drjk (s) | ≤ Σ_{k=1}^{n} ∫_a^b |drjk | max_{a≤s≤b} |fk (t − s)| =
Σ_{k=1}^{n} var(rjk ) max_{a≤s≤b} |fk (t − s)|.

Hence,

Σ_{j=1}^{n} |(E0 f )j (t)|^2 ≤ Σ_{j=1}^{n} ( Σ_{k=1}^{n} var(rjk )‖fk ‖_{C(a,c)} )^2 =
‖Var (R0 ) νC ‖_n^2 ≤ (var(R0 )‖νC ‖n )^2 (b ≤ t ≤ c),

where νC = (‖fk ‖_{C(a,c)} )_{k=1}^n and ‖.‖_{C(a,c)} = ‖.‖_{C([a,c],C)} .
But

‖νC ‖_n^2 = Σ_{k=1}^{n} ‖fk ‖_{C(a,c)}^2 ≤ n max_k ‖fk ‖_{C(a,c)}^2 ≤ n sup_t ‖f (t)‖_n^2 = n‖f ‖_{C([a,c],Cn )}^2 .

So

‖E0 f ‖_{C([b,c],Cn )} ≤ √n var (R0 )‖f ‖_{C([a,c],Cn )}

and thus inequality (12.1) is proved.
In the case of the space L∞ , by inequality (12.1) we have

‖E0 f ‖_{L∞ ([b,c],Cn )} ≤ √n var (R0 )‖f ‖_{L∞ ([a,c],Cn )}

for a continuous function f . But the set of continuous functions is dense in L∞ .
So the previous inequality is valid on the whole space L∞ . This proves (12.2).
Now consider the norm in the space L2 . We have

∫_b^c |(E0 f )j (t)|^2 dt ≤ ∫_b^c ( Σ_{k=1}^{n} ∫_a^b |fk (t − s)||drjk (s)| )^2 dt =
∫_a^b ∫_a^b Σ_{i=1}^{n} Σ_{k=1}^{n} |drjk (s)||drji (s1 )| ∫_b^c |fk (t − s)fi (t − s1 )|dt.

By the Schwarz inequality

( ∫_b^c |fk (t − s)fi (t − s1 )|dt )^2 ≤ ∫_b^c |fk (t − s)|^2 dt ∫_b^c |fi (t − s1 )|^2 dt ≤
∫_a^c |fk (t)|^2 dt ∫_a^c |fi (t)|^2 dt.

Thus

∫_b^c |(E0 f )j (t)|^2 dt ≤ Σ_{i=1}^{n} Σ_{k=1}^{n} var(rjk )var(rji )‖fk ‖_{L2 (a,c)} ‖fi ‖_{L2 (a,c)} =
( Σ_{k=1}^{n} var(rjk )‖fk ‖_{L2 (a,c)} )^2 (‖fk ‖_{L2 (a,c)} = ‖fk ‖_{L2 ([a,c],C)} )

and therefore

Σ_{j=1}^{n} ∫_b^c |(E0 f )j (t)|^2 dt ≤ Σ_{j=1}^{n} ( Σ_{k=1}^{n} var(rjk )‖fk ‖_{L2 (a,c)} )^2 =
‖Var (R0 ) ν2 ‖_n^2 ≤ (var(R0 )‖ν2 ‖n )^2 ,

where ν2 is the vector with the coordinates ‖fk ‖_{L2 (a,c)} . But ‖ν2 ‖n = ‖f ‖_{L2 ([a,c],Cn )} .
So (12.3) is also proved.
Similarly, for an f (t) = (fk (t))_{k=1}^n ∈ L1 ([a, c], Cn ) we obtain

∫_b^c |(E0 f )j (t)|dt ≤ Σ_{k=1}^{n} ∫_a^b ∫_b^c |fk (t − s)|dt |drjk (s)| ≤ Σ_{k=1}^{n} var(rjk ) ∫_a^c |fk (t)|dt.

So

‖E0 f ‖_{L1 ([b,c],Cn )} = ∫_b^c ( Σ_{j=1}^{n} |(E0 f )j (t)|^2 )^{1/2} dt ≤ ∫_b^c Σ_{j=1}^{n} |(E0 f )j (t)|dt ≤
Σ_{j=1}^{n} ∫_a^c Σ_{k=1}^{n} var(rjk )|fk (t)|dt.

Consequently, by the Schwarz inequality

‖E0 f ‖_{L1 ([b,c],Cn )} ≤ Σ_{j=1}^{n} ∫_a^c ( Σ_{k=1}^{n} (var(rjk ))^2 )^{1/2} ( Σ_{k=1}^{n} |fk (t)|^2 )^{1/2} dt.

Hence (12.4) follows. As claimed. 

The Riesz - Thorin theorem (see Section 1.3) and previous lemma imply the
following result.

Corollary 1.12.2. The inequalities

‖E0 ‖_{Lp ([a,c],Cn )→Lp ([b,c],Cn )} ≤ √n var (R0 ) (c > b; p ≥ 2)

and

‖E0 ‖_{Lp ([a,c],Cn )→Lp ([b,c],Cn )} ≤ max{ζ1 (R0 ), √n var (R0 )} (c > b; p ≥ 1)

are valid.
Let us consider the operator

Ef (t) = ∫_a^b d_s R(t, s)f (t − s) (f ∈ C([a, c], Cn ); b ≤ t ≤ c), (12.5)

where R(t, s) = (rij (t, s))_{i,j=1}^n is a real n × n-matrix-valued function defined on
[b, c] × [a, b], which is piece-wise continuous in t for each s, and

vjk := sup_{b≤t≤c} var(rjk (t, .)) < ∞ (j, k = 1, ..., n). (12.6)

Below we prove that E is also bounded in the norm of Lp (p ≥ 1) on the set of
continuous functions and therefore can be extended to the whole space Lp .
Denote

Z(R) = (vjk )_{j,k=1}^n .

Since for b ≤ t ≤ c,

| ∫_a^b fk (t − s)d_s rjk (t, s) | ≤ max_{a≤s≤b} |fk (t − s)| ∫_a^b |d_s rjk (t, s)| ≤ vjk ‖fk ‖_{C(a,c)} ,

for each coordinate (Ef )j (t) of Ef (t) we have

|(Ef )j (t)| = | ∫_a^b Σ_{k=1}^{n} fk (t − s)d_s rjk (t, s) | ≤ Σ_{k=1}^{n} vjk sup_{a≤s≤b} |fk (t − s)|.

Hence,

Σ_{j=1}^{n} |(Ef )j (t)|^2 ≤ Σ_{j=1}^{n} ( Σ_{k=1}^{n} vjk ‖fk ‖_{C(a,c)} )^2 = ‖Z(R) νC ‖_n^2 ≤ (‖Z(R)‖n ‖νC ‖n )^2 ,

where νC is the same as in the proof of the previous lemma. As was shown above,
‖νC ‖_n^2 ≤ n‖f ‖_{C([a,c],Cn )}^2 . Thus,

‖Ef ‖_{C([b,c],Cn )} ≤ √n ‖Z(R)‖n ‖f ‖_{C([a,c],Cn )} . (12.7)

Moreover,

‖Ef ‖_{L∞ ([b,c],Cn )} ≤ √n ‖Z(R)‖n ‖f ‖_{L∞ ([a,c],Cn )} (12.8)

for a continuous function f . But the set of continuous functions is dense in L∞ .
So the previous inequality is valid on the whole space. Repeating the arguments
of the proof of the previous lemma we obtain

‖Ef ‖_{L2 ([b,c],Cn )} ≤ ‖Z(R)‖n ‖f ‖_{L2 ([a,c],Cn )} . (12.9)
Now let f (t) = (fk (t)) ∈ L1 ([a, c], Cn ). Then

∫_b^c |(Ef )j (t)|dt ≤ Σ_{k=1}^{n} ∫_a^b ∫_b^c |fk (t − s)|dt |d_s rjk (t, s)| ≤ Σ_{k=1}^{n} vjk ∫_a^c |fk (t)|dt.

So

‖Ef ‖_{L1 ([b,c],Cn )} = ∫_b^c ( Σ_{j=1}^{n} |(Ef )j (t)|^2 )^{1/2} dt ≤ ∫_b^c Σ_{j=1}^{n} |(Ef )j (t)|dt ≤
Σ_{j=1}^{n} ∫_a^c Σ_{k=1}^{n} vjk |fk (t)|dt ≤ Σ_{j=1}^{n} ∫_a^c ( Σ_{k=1}^{n} v_{jk}^2 )^{1/2} ( Σ_{k=1}^{n} |fk (t)|^2 )^{1/2} dt.

Hence

‖Ef ‖_{L1 ([b,c],Cn )} ≤ V1 (R)‖f ‖_{L1 ([a,c],Cn )} , (12.10)

where

V1 (R) = Σ_{j=1}^{n} ( Σ_{k=1}^{n} v_{jk}^2 )^{1/2} .

We thus have proved the following result.


Lemma 1.12.3. Suppose the entries rjk (t, s) of the matrix function R(t, s) satisfy
condition (12.6). Then the operator E defined by (12.5) is subject to the inequalities
(12.7)-(12.10).
Now the Riesz - Thorin theorem and previous lemma imply the following
result.
Corollary 1.12.4. Let condition (12.6) hold. Then for the operator E defined by
(12.5), the inequalities

‖E‖_{Lp ([a,c],Cn )→Lp ([b,c],Cn )} ≤ √n ‖Z(R)‖n (c > b; p ≥ 2)

and

‖E‖_{Lp ([a,c],Cn )→Lp ([b,c],Cn )} ≤ V (R) (c > b; p ≥ 1)

are true, where V (R) = max {V1 (R), √n ‖Z(R)‖n }.

1.13 Comments
The chapter contains mostly well-known results. This book presupposes a knowl-
edge of basic operator theory, for which there are good introductory texts. The
books [2] and [16] are classical. In Sections 1.5 and 1.6 we followed Sections I.9
and III.2 of the book [14]. The material of Sections 1.10 and 1.11 is adapted from
the papers [57, 44] and [45]. The relevant results on regularized determinants can
be found in [64]. Lemmas 1.12.1 and 1.12.3 are probably new.
Chapter 2

Some Results of the Matrix Theory

This chapter is devoted to norm estimates for matrix-valued functions, in par-
ticular, for resolvents. These estimates will be applied in the remaining chapters
of the book.
In Section 2.1 we introduce the notation used in this chapter. In Section 2.2
we recall the well-known representations of matrix-valued functions. In Sections
2.3 and 2.4 we collect inequalities for the resolvent and present some results on
spectrum perturbations of matrices. Sections 2.5 and 2.6 are devoted to matrix
functions regular on simply-connected domains containing the spectrum. Section
2.7 deals with functions of matrices having geometrically simple eigenvalues; i.e.
so called diagonalizable matrices. In the rest of the chapter we consider particular
cases of functions and matrices.

2.1 Notations
Everywhere in this chapter ‖x‖ is the Euclidean norm of x ∈ Cn : ‖x‖ = (x, x)^{1/2}
with a scalar product (., .) = (., .)_{Cn} , and I is the unit matrix.
For a linear operator A in Cn (matrix), λk = λk (A) (k = 1, ..., n) are the
eigenvalues of A enumerated in an arbitrary order with their multiplicities, σ(A)
denotes the spectrum of A, A∗ is the adjoint to A, and A^{−1} is the inverse to A;
Rλ (A) = (A − λI)^{−1} (λ ∈ C, λ ∉ σ(A)) is the resolvent, rs (A) is the spectral
radius, ‖A‖ = sup_{x∈Cn} ‖Ax‖/‖x‖ is the (operator) spectral norm, N2 (A) is the
Hilbert-Schmidt (Frobenius) norm of A: N_2^2 (A) = Trace AA∗ , AI = (A − A∗ )/2i
is the imaginary component, AR = (A + A∗ )/2 is the real component, and

ρ(A, λ) = min_{k=1,...,n} |λ − λk (A)|

is the distance between σ(A) and a point λ ∈ C; ρ(A, C) is the Hausdorff distance
between a contour C and σ(A). co(A) denotes the closed convex hull of σ(A),
α(A) = max_k Re λk (A), β(A) = min_k Re λk (A); rl (A) is the lower spectral radius:

rl (A) = min_{k=1,...,n} |λk (A)|.

In addition, Cn×n is the set of complex n × n-matrices.


The following quantity plays an essential role in the sequel:

g(A) = ( N_2^2 (A) − Σ_{k=1}^{n} |λk (A)|^2 )^{1/2} .

It is not hard to check that

g^2 (A) ≤ N_2^2 (A) − |Trace A^2 |.

In Section 2.2 of the book [31] it is proved that

g^2 (A) ≤ 2N_2^2 (AI ) (1.1)

and

g(e^{iτ} A + zI) = g(A) (1.2)

for all τ ∈ R and z ∈ C.
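The quantity g(A) is straightforward to compute numerically. A minimal sketch (the random 6 × 6 matrix and seed are illustrative) evaluates g(A), checks inequality (1.1), and confirms that g vanishes for a normal (here: real symmetric) matrix.

```python
import numpy as np

# g(A) = (N_2^2(A) - sum_k |lambda_k(A)|^2)^{1/2}; inequality (1.1) reads
# g^2(A) <= 2 N_2^2(A_I) with A_I = (A - A*)/2i.
rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6))

def g(M):
    n2sq = (np.abs(M)**2).sum()                             # N_2^2(M)
    lamsq = (np.abs(np.linalg.eigvals(M))**2).sum()
    return np.sqrt(max(n2sq - lamsq, 0.0))

AI = (A - A.conj().T) / 2j                                  # imaginary component
print(g(A)**2, 2 * (np.abs(AI)**2).sum())                   # (1.1) holds
print(g(A + A.T))                                           # ~0: symmetric => normal
```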

2.2 Representations of matrix functions


2.2.1 Classical representations
In this subsection we recall some classical representations of functions of matrices.
For details see [74, Chapter 6] and [9].
Let A ∈ Cn×n and M ⊃ σ(A) be an open simply-connected set whose bound-
ary C consists of a finite number of rectifiable Jordan curves, oriented in the pos-
itive sense customary in the theory of complex variables. Suppose that M ∪ C is
contained in the domain of analyticity of a scalar-valued function f . Then f (A)
can be defined by the generalized integral formula of Cauchy

f (A) = − (1/2πi) ∫_C f (λ)Rλ (A)dλ. (2.1)

If an analytic function f (λ) is represented by the Taylor series

f (λ) = Σ_{k=0}^{∞} ck λ^k ( |λ| < 1 / lim sup_{k→∞} |ck |^{1/k} ),

then one can define f (A) as

f (A) = Σ_{k=0}^{∞} ck A^k ,

provided the spectral radius rs (A) of A satisfies the inequality

rs (A) lim sup_{k→∞} |ck |^{1/k} < 1.

In particular, for any matrix A,

e^A = Σ_{k=0}^{∞} A^k /k! .

Consider the n × n-Jordan block

Jn (λ0 ) =
[ λ0 1 0 . . . 0 ]
[ 0 λ0 1 . . . 0 ]
[ . . . . . . . ]
[ 0 0 . . . λ0 1 ]
[ 0 0 . . . 0 λ0 ] .

Then

f (Jn (λ0 )) =
[ f (λ0 ) f ′ (λ0 )/1! . . . f^{(n−1)} (λ0 )/(n − 1)! ]
[ 0 f (λ0 ) . . . . ]
[ . . . . . ]
[ 0 . . . f (λ0 ) f ′ (λ0 )/1! ]
[ 0 . . . 0 f (λ0 ) ] ;

that is, f (Jn (λ0 )) is upper triangular with (j, k)-entry equal to f^{(k−j)} (λ0 )/(k − j)! for k ≥ j.
Thus, if A has the Jordan block-diagonal form

A = diag (Jm1 (λ1 ), Jm2 (λ2 ), ..., Jmn0 (λn0 )),

where λk (k = 1, ..., n0 ) are the eigenvalues and mk are the orders of the corresponding
Jordan blocks, then

f (A) = diag (f (Jm1 (λ1 )), f (Jm2 (λ2 )), ..., f (Jmn0 (λn0 ))). (2.2)

Note that in (2.2) we do not require that f is regular on a neighborhood of an
open set containing σ(A); one can take an arbitrary function which has at each
λk derivatives up to order mk − 1.

In particular, if an n × n-matrix A is diagonalizable, that is, mk ≡ 1 in the above
Jordan form, then

f (A) = Σ_{k=1}^{n} f (λk )Qk , (2.3)

where Qk are the eigenprojections. In the case (2.3) it is required only that f is
defined on the spectrum.
Now let

σ(A) = ∪_{k=1}^m σk (A) (m ≤ n)

and σk (A) ⊂ Mk (k = 1, ..., m), where Mk are open disjoint simply-connected sets:
Mk ∩ Mj = ∅ (j ≠ k). Let fk be regular on Mk . Introduce on M = ∪_{k=1}^m Mk the
piece-wise analytic function by f (z) = fj (z) (z ∈ Mj ). Then

f (A) = − (1/2πi) Σ_{j=1}^{m} ∫_{Cj} f (λ)Rλ (A)dλ, (2.4)

where Cj ⊂ Mj are closed smooth contours surrounding σj (A) and the integration
is performed in the positive direction. For more details about representation (2.4)
see [110, p. 49].
For instance, let M1 and M2 be two disjoint disks, and

f (z) = sin z if z ∈ M1 , and f (z) = cos z if z ∈ M2 .

Then (2.4) holds with m = 2.

2.2.2 Multiplicative representations of the resolvent


In this subsection we present multiplicative representations of the resolvent which
lead to new representations of matrix functions.
Let A ∈ Cn×n and λk be its eigenvalues with the multiplicities taken into
account.
As is well known, there is an orthonormal basis (the Schur basis)
{ek } in which A is represented by a triangular matrix. Moreover, there is the
(maximal) chain Pk (k = 1, . . . , n) of the invariant orthogonal projections of A.
That is, APk = Pk APk (k = 1, . . . , n) and

0 = P0 Cn ⊂ P1 Cn ⊂ ... ⊂ Pn Cn = Cn .

So dim ((Pk − Pk−1 )Cn ) = 1. Besides,

A = D + V (σ(A) = σ(D)), (2.5)



where

D = Σ_{k=1}^{n} λk ΔPk (ΔPk = Pk − Pk−1 ) (2.6)

is the diagonal part of A and V is the nilpotent part of A. That is, V is a nilpotent
matrix, such that

V Pk = Pk−1 APk (k = 2, . . . , n). (2.7)
For more details see for instance, [18]. The representation (2.5) will be called the
triangular (Schur) representation.
Furthermore, for X1 , X2 , ..., Xj ∈ Cn×n denote

Π→_{1≤k≤j} Xk ≡ X1 X2 ...Xj .

That is, the arrow over the symbol of the product means that the indexes of the
co-factors increase from left to right.
Theorem 2.2.1. Let D and V be the diagonal and nilpotent parts of an A ∈ Cn×n ,
respectively. Then

Rλ (A) = Rλ (D) Π→_{2≤k≤n} ( I + V ΔPk /(λ − λk ) ) (λ ∉ σ(A)),

where Pk , k = 1, ..., n, is the maximal chain of the invariant projections of A.


For the proof see [31, Theorem 2.9.1]. Since

Rλ (D) = Σ_{j=1}^{n} ΔPj /(λj − λ),

from the previous theorem we have the following result.

Corollary 2.2.2. The equality

Rλ (A) = Σ_{j=1}^{n} ( ΔPj /(λj − λ) ) Π→_{j+1≤k≤n+1} ( I + V ΔPk /(λ − λk ) ) (λ ∉ σ(A))

is true with V ΔPn+1 = 0.


Now (2.1) implies the representation

f (A) = − (1/2πi) ∫_C f (λ) Σ_{j=1}^{n} ( ΔPj /(λj − λ) ) Π→_{j+1≤k≤n+1} ( I + V ΔPk /(λ − λk ) ) dλ.

Here one can apply the residue theorem.


Furthermore, the following result is proved in [31, Theorem 2.9.1].

Theorem 2.2.3. For any A ∈ Cn×n we have

λRλ (A) = − Π→_{1≤k≤n} ( I + AΔPk /(λ − λk ) ) (λ ∉ σ(A) ∪ {0}), (2.8)

where Pk , k = 1, ..., n, is the maximal chain of the invariant projections of A.


Let A be a normal matrix. Then

A = Σ_{k=1}^{n} λk ΔPk .

Hence, AΔPk = λk ΔPk . Since ΔPk ΔPj = 0 for j ≠ k, the previous theorem
gives us the equality

−λRλ (A) = I + Σ_{k=1}^{n} λk ΔPk /(λ − λk ) = Σ_{k=1}^{n} ( ΔPk + λk ΔPk /(λ − λk ) ).

Or

Rλ (A) = Σ_{k=1}^{n} ΔPk /(λk − λ).

So from (2.8) we have obtained the well-known spectral representation for the
resolvent of a normal matrix. Thus, the previous theorem generalizes the spectral
representation for the resolvent of a normal matrix. Now we can use (2.1) and
(2.8) to get the representation for f (A).

2.3 Norm estimates for resolvents


For a natural n ≥ 2, introduce the numbers

γn,k = ( (n − 1)(n − 2) . . . (n − k) / ((n − 1)^k k!) )^{1/2} (k = 1, 2, ..., n − 1), γn,0 = 1.

Evidently, for all n > 2,

γ_{n,k}^2 ≤ 1/k! (k = 1, 2, ..., n − 1). (3.1)
Theorem 2.3.1. Let A be a linear operator in Cn . Then its resolvent satisfies the
inequality

‖Rλ (A)‖ ≤ Σ_{k=0}^{n−1} g^k (A)γn,k / ρ^{k+1} (A, λ) (λ ∉ σ(A)),

where ρ(A, λ) = min_{k=1,...,n} |λ − λk (A)|.



To prove this theorem we again use the Schur triangular representation (2.5).
Recall that g(A) is defined in Section 2.1. As shown in [31, Section 2.1], the
relation g(U^{−1} AU ) = g(A) is true if U is a unitary matrix. Hence it follows that
g(A) = N2 (V ). The proof of the previous theorem is based on the following lemma.
Lemma 2.3.2. The inequality

‖Rλ (A) − Rλ (D)‖ ≤ Σ_{k=1}^{n−1} γn,k g^k (A) / ρ^{k+1} (A, λ) (λ ∉ σ(A))

is true.
Proof. By (2.5) we have

Rλ (A) − Rλ (D) = −Rλ (D)V Rλ (A) = −Rλ (D)V (D + V − Iλ)^{−1} .

Thus

Rλ (A) − Rλ (D) = −Rλ (D)V (I + Rλ (D)V )^{−1} Rλ (D). (3.2)

Clearly, Rλ (D)V is a nilpotent matrix. Hence,

Rλ (A) − Rλ (D) = −Rλ (D)V Σ_{k=0}^{n−1} (−Rλ (D)V )^k Rλ (D) = Σ_{k=1}^{n−1} (−Rλ (D)V )^k Rλ (D). (3.3)

Thanks to Theorem 2.5.1 [31], for any n × n nilpotent matrix V0 ,

‖V_0^k ‖ ≤ γn,k N_2^k (V0 ). (3.4)

In addition, ‖Rλ (D)‖ = ρ^{−1} (D, λ) = ρ^{−1} (A, λ). So

‖(−Rλ (D)V )^k ‖ ≤ γn,k N_2^k (Rλ (D)V ) ≤ γn,k N_2^k (V ) / ρ^k (A, λ).

Now the required result follows from (3.3). 

The assertion of Theorem 2.3.1 directly follows from the previous lemma.
Note that the just proved Lemma 2.3.2 is a slight improvement of Theorem
2.1.1 from [31].
Theorem 2.3.1 is sharp: if A is a normal matrix, then g(A) = 0 and Theorem
2.3.1 gives us the equality ‖Rλ (A)‖ = 1/ρ(A, λ). Taking into account (3.1), we get

Corollary 2.3.3. Let A ∈ Cn×n . Then

‖Rλ (A)‖ ≤ Σ_{k=0}^{n−1} g^k (A) / ( (k!)^{1/2} ρ^{k+1} (A, λ) ) for any regular λ of A.
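Corollary 2.3.3 can be exercised numerically. A minimal sketch (the random 5 × 5 matrix, the regular point λ = 1 + 2i, and the seed are illustrative assumptions) compares the spectral norm of the resolvent with the stated bound.

```python
import numpy as np
from math import factorial, sqrt

# ||R_lambda(A)|| <= sum_{k=0}^{n-1} g^k(A) / (sqrt(k!) rho^{k+1}(A, lambda)).
rng = np.random.default_rng(5)
n = 5
A = rng.standard_normal((n, n))
eigs = np.linalg.eigvals(A)
gA = sqrt(max((np.abs(A)**2).sum() - (np.abs(eigs)**2).sum(), 0.0))

lam = 1.0 + 2.0j                               # a point off the spectrum
rho = np.abs(eigs - lam).min()                 # distance to the spectrum
res_norm = np.linalg.norm(np.linalg.inv(A - lam * np.eye(n)), 2)
bound = sum(gA**k / (sqrt(factorial(k)) * rho**(k + 1)) for k in range(n))
print(res_norm, bound)                         # the bound dominates
```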

We will need the following result.



Theorem 2.3.4. Let A ∈ Cn×n . Then

‖Rλ (A) det (λI − A)‖ ≤ ( ( N_2^2 (A) − 2Re (λ̄ Trace (A)) + n|λ|^2 ) / (n − 1) )^{(n−1)/2} (λ ∉ σ(A)).

In particular, let V be a nilpotent matrix. Then

‖(Iλ − V )^{−1} ‖ ≤ (1/|λ|) [ 1 + (1/(n − 1)) ( 1 + N_2^2 (V )/|λ|^2 ) ]^{(n−1)/2} (λ ≠ 0).

The proof of this theorem can be found in [31, Section 2.11].


We also point out the following result.

Theorem 2.3.5. Let A ∈ Cn×n . Then

‖Rλ (A)‖ ≤ (1/ρ(A, λ)) [ 1 + (1/(n − 1)) ( 1 + g^2 (A)/ρ^2 (A, λ) ) ]^{(n−1)/2}

for any regular λ of A.

For the proof see [31, Theorem 2.14.1].

2.4 Spectrum perturbations


Let A and B be n × n-matrices having eigenvalues

λ1 (A), ..., λn (A) and λ1 (B), ..., λn (B),

respectively, and put q = ‖A − B‖.
The spectral variation of B with respect to A is

svA (B) := max_i min_j |λi (B) − λj (A)|,

cf. [103]. The following simple lemma is proved in [31, Section 4.1].

Lemma 2.4.1. Assume that ‖Rλ (A)‖ ≤ φ(ρ^{−1} (A, λ)) for all regular λ of A, where
φ(x) is a monotonically increasing non-negative continuous function of a non-
negative variable x, such that φ(0) = 0 and φ(∞) = ∞. Then the inequality
svA (B) ≤ z(φ, q) is true, where z(φ, q) is the unique positive root of the equation
qφ(1/z) = 1.

This lemma and Corollary 2.3.3 yield our next result.

Theorem 2.4.2. Let A and B be n × n-matrices. Then svA (B) ≤ z(q, A), where
z(q, A) is the unique nonnegative root of the algebraic equation

y^n = q Σ_{j=0}^{n−1} y^{n−j−1} g^j (A) / (j!)^{1/2} . (4.1)

Let us consider the algebraic equation

z^n = P (z) (n > 1), where P (z) = Σ_{j=0}^{n−1} cj z^{n−j−1} (4.2)

with non-negative coefficients cj (j = 0, ..., n − 1).


Lemma 2.4.3. The extreme right-hand root z0 of equation (4.2) is non-negative
and the following estimates are valid:

z0 ≤ P^{1/n} (1) if P (1) ≤ 1, (4.3)

and

1 ≤ z0 ≤ P (1) if P (1) ≥ 1. (4.4)

Proof. Since all the coefficients of P (z) are non-negative, it does not decrease as
z > 0 increases. From this it follows that if P (1) ≤ 1, then z0 ≤ 1. So z_0^n ≤ P (1),
as claimed.
Now let P (1) ≥ 1; then due to (4.2), z0 ≥ 1 because P (z) does not decrease.
It is clear that

P (z0 ) ≤ z_0^{n−1} P (1)

in this case. Substituting this inequality into (4.2), we get (4.4). 

Substituting z = ax with a positive constant a into (4.2), we obtain

x^n = Σ_{j=0}^{n−1} ( cj / a^{j+1} ) x^{n−j−1} . (4.5)

Let

a = 2 max_{j=0,...,n−1} c_j^{1/(j+1)} .

Then

Σ_{j=0}^{n−1} cj / a^{j+1} ≤ Σ_{j=0}^{n−1} 2^{−j−1} = 1 − 2^{−n} < 1.

Let x0 be the extreme right-hand root of equation (4.5); then by (4.3) we have
x0 ≤ 1. Since z0 = ax0 , we have derived the following result.

Corollary 2.4.4. The extreme right-hand root z0 of equation (4.2) is non-negative.
Moreover,

z0 ≤ 2 max_{j=0,...,n−1} c_j^{1/(j+1)} .
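Corollary 2.4.4 is easy to check numerically for a concrete equation of the form (4.2). A minimal sketch (the coefficient vector is an illustrative assumption):

```python
import numpy as np

# z^n = c0 z^{n-1} + ... + c_{n-1} with nonnegative c_j has a nonnegative
# extreme right-hand root z0, and z0 <= 2 * max_j c_j^{1/(j+1)}.
c = np.array([0.7, 0.1, 0.4, 0.05, 0.2])       # nonnegative coefficients
n = len(c)
poly = np.concatenate(([1.0], -c))             # coefficients of z^n - P(z)
roots = np.roots(poly)
z0 = max(r.real for r in roots if abs(r.imag) < 1e-9)   # extreme real root
bound = 2 * max(c[j]**(1.0 / (j + 1)) for j in range(n))
print(z0, bound)                               # z0 stays below the bound
```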

Now put y = xg(A) into (4.1). Then we obtain the equation

x^n = ( q/g(A) ) Σ_{j=0}^{n−1} x^{n−j−1} / (j!)^{1/2} .

Put

wn = Σ_{j=0}^{n−1} 1/(j!)^{1/2} .

Applying Lemma 2.4.3 we get the estimate z(q, A) ≤ δ(q), where

δ(q) := qwn if qwn ≥ g(A), and δ(q) := g^{1−1/n} (A)[qwn ]^{1/n} if qwn ≤ g(A).

Now Theorem 2.4.2 ensures the following result.
Corollary 2.4.5. One has svA (B) ≤ δ(q).
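Corollary 2.4.5 can be tested on a concrete perturbation. A minimal sketch (the random 5 × 5 matrix, the perturbation size 0.01, and the seed are illustrative assumptions) computes the spectral variation directly and compares it with δ(q).

```python
import numpy as np
from math import factorial, sqrt

# sv_A(B) = max_i min_j |lambda_i(B) - lambda_j(A)| <= delta(q), where
# q = ||A - B||, w_n = sum_j 1/sqrt(j!), and delta(q) is as in the text.
rng = np.random.default_rng(6)
n = 5
A = rng.standard_normal((n, n))
B = A + 0.01 * rng.standard_normal((n, n))

eA, eB = np.linalg.eigvals(A), np.linalg.eigvals(B)
sv = max(min(abs(mu - lam) for lam in eA) for mu in eB)
q = np.linalg.norm(A - B, 2)
gA = sqrt(max((np.abs(A)**2).sum() - (np.abs(eA)**2).sum(), 0.0))
wn = sum(1.0 / sqrt(factorial(j)) for j in range(n))
delta = q * wn if q * wn >= gA else gA**(1 - 1.0 / n) * (q * wn)**(1.0 / n)
print(sv, delta)                               # sv_A(B) <= delta(q)
```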
Furthermore, let D̃, V+ and V− be the diagonal, upper nilpotent part and
lower nilpotent part of the matrix A, respectively. Using the notation A+ = D̃ + V+ ,
we arrive at the relations

σ(A+ ) = σ(D̃), g(A+ ) = N2 (V+ ) and ‖A − A+ ‖ = ‖V− ‖.

Taking

δA := ‖V− ‖wn if ‖V− ‖wn ≥ N2 (V+ ), and δA := N_2^{1−1/n} (V+ )[‖V− ‖wn ]^{1/n} if ‖V− ‖wn ≤ N2 (V+ ),

due to the previous corollary we obtain

Corollary 2.4.6. Let A = (ajk )_{j,k=1}^n be an n × n-matrix. Then for any eigenvalue
μ of A, there is a k = 1, ..., n, such that

|μ − akk | ≤ δA ,

and therefore the (upper) spectral radius satisfies the inequality

rs (A) ≤ max_{k=1,...,n} |akk | + δA ,

and the lower spectral radius satisfies the inequality

rl (A) ≥ min_{k=1,...,n} |akk | − δA ,

provided |akk | > δA (k = 1, ..., n).



Clearly, one can exchange the roles of V+ and V− .
Let us recall the celebrated Gerschgorin theorem. To this end write

Rj = Σ_{k=1,k≠j}^{n} |ajk |.

Let Ω(b, r) be the closed disc centered at b ∈ C with a radius r.

Theorem 2.4.7. (Gerschgorin). Every eigenvalue of A lies within at least one of
the discs Ω(ajj , Rj ).

Proof. Let λ be an eigenvalue of A and let x = (xj ) be the corresponding eigen-
vector. Let i be chosen so that |xi | = max_j |xj |. Then |xi | > 0, since otherwise x = 0.
Since x is an eigenvector, Ax = λx, or equivalently,

Σ_{k=1}^{n} aik xk = λxi ,

so, splitting off the term with k = i, we get

Σ_{k=1,k≠i}^{n} aik xk = λxi − aii xi .

We may then divide both sides by xi (choosing i as explained, we can be sure
that xi ≠ 0) and take the absolute value to obtain

|λ − aii | ≤ Σ_{k=1,k≠i}^{n} |aik | |xk |/|xi | ≤ Ri ,

where the last inequality is valid because

|xk |/|xi | ≤ 1.

As claimed. 

Note that for a diagonal matrix the Gerschgorin discs Ω(ajj , Rj ) coincide with
the spectrum. Conversely, if the Gerschgorin discs coincide with the spectrum, the
matrix is diagonal.
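The Gerschgorin theorem above can be verified directly for a concrete matrix. A minimal sketch (random 6 × 6 matrix and seed are illustrative):

```python
import numpy as np

# Every eigenvalue of A lies in at least one closed disc Omega(a_jj, R_j),
# where R_j is the sum of the off-diagonal moduli in row j.
rng = np.random.default_rng(7)
A = rng.standard_normal((6, 6))
centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)

covered = all(
    any(abs(lam - centers[j]) <= radii[j] + 1e-9 for j in range(6))
    for lam in np.linalg.eigvals(A)
)
print(covered)   # every eigenvalue lies in some Gerschgorin disc
```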
The next lemma follows from the Gerschgorin theorem and gives us a simple
bound for the spectral radius.
Lemma 2.4.8. The spectral radius rs (A) of a matrix A = (ajk )_{j,k=1}^n satisfies the
inequality

rs (A) ≤ max_j Σ_{k=1}^{n} |ajk |.

About this and other estimates for the spectral radius see [80, Section 16].

2.5 Norm estimates for matrix functions


2.5.1 Estimates via the resolvent
The following result directly follows from (2.1).

Lemma 2.5.1. Let f (λ) be a scalar-valued function which is regular on a neigh-
borhood M of an open simply-connected set containing the spectrum of A ∈ Cn×n ,
and let C ⊂ M be a closed smooth contour surrounding σ(A). Then

‖f (A)‖ ≤ (1/2π) ∫_C |f (z)|‖Rz (A)‖|dz| ≤ mC (A)lC sup_{z∈C} |f (z)|,

where

mC (A) := sup_{z∈C} ‖Rz (A)‖, lC := (1/2π) ∫_C |dz|.

Now we can directly apply the estimates for the resolvent from Section 2.3.
In particular, by Corollary 2.3.3 we have

‖Rz (A)‖ ≤ p(A, 1/ρ(A, z)), (5.1)

where

p(A, x) = Σ_{k=0}^{n−1} x^{k+1} g^k (A) / (k!)^{1/2} (x > 0). (5.2)

We thus get mC (A) ≤ p(A, 1/ρ(A, C)), where ρ(A, C) is the distance between C
and σ(A), and therefore,

‖f (A)‖ ≤ lC p(A, 1/ρ(A, C)) sup_{z∈C} |f (z)|. (5.3)

2.5.2 Functions regular on the convex hull of the spectrum


Theorem 2.5.2. Let A be an n × n-matrix and f be a function holomorphic on a
neighborhood of the convex hull co(A) of σ(A). Then

‖f (A)‖ ≤ sup_{λ∈σ(A)} |f (λ)| + Σ_{k=1}^{n−1} sup_{λ∈co(A)} |f^{(k)} (λ)| γn,k g^k (A) / k! .

This theorem is proved in the next subsection. Taking into account (3.1) we
get our next result.

Corollary 2.5.3. Under the hypothesis of Theorem 2.5.2 we have

‖f (A)‖ ≤ sup_{λ∈σ(A)} |f (λ)| + Σ_{k=1}^{n−1} sup_{λ∈co(A)} |f^{(k)} (λ)| g^k (A) / (k!)^{3/2} .

Theorem 2.5.2 is sharp: if A is normal, then g(A) = 0 and

‖f (A)‖ = sup_{λ∈σ(A)} |f (λ)|.

For example,

‖exp(At)‖ ≤ e^{α(A)t} Σ_{k=0}^{n−1} γn,k g^k (A)t^k / k! ≤ e^{α(A)t} Σ_{k=0}^{n−1} g^k (A)t^k / (k!)^{3/2} (t ≥ 0),

where α(A) = max_{k=1,...,n} Re λk (A). In addition,

‖A^m ‖ ≤ Σ_{k=0}^{n−1} γn,k m! g^k (A) r_s^{m−k} (A) / ((m − k)!k!) ≤ Σ_{k=0}^{n−1} m! g^k (A) r_s^{m−k} (A) / ((m − k)!(k!)^{3/2} ) (m = 1, 2, ...),

where rs (A) is the spectral radius. Recall that 1/(m − k)! = 0 if m < k.
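The exponential bound above is easy to test numerically. A minimal sketch (the random 5 × 5 matrix, t = 0.7, the series truncation at 60 terms, and the seed are illustrative assumptions) compares ‖exp(At)‖ with the weaker of the two bounds.

```python
import numpy as np
from math import factorial, sqrt, exp

# ||exp(At)|| <= e^{alpha(A) t} * sum_{k<n} g^k(A) t^k / (k!)^{3/2};
# exp(At) is evaluated via its (everywhere convergent) Taylor series.
rng = np.random.default_rng(8)
n = 5
A = rng.standard_normal((n, n))
eigs = np.linalg.eigvals(A)
gA = sqrt(max((np.abs(A)**2).sum() - (np.abs(eigs)**2).sum(), 0.0))
alpha = eigs.real.max()

def expm(M, terms=60):
    S, P = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        P = P @ M / k          # accumulates M^k / k!
        S = S + P
    return S

t = 0.7
lhs = np.linalg.norm(expm(A * t), 2)
rhs = exp(alpha * t) * sum(gA**k * t**k / factorial(k)**1.5 for k in range(n))
print(lhs, rhs)                # the bound dominates
```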

2.5.3 Proof of Theorem 2.5.2


Let |V |e be the operator whose entries in the orthonormal basis of the triangular
representation (the Schur basis) {ek } are the absolute values of the entries of the
nilpotent part V of A with respect to this basis. That is,

|V |e = Σ_{k=1}^{n} Σ_{j=1}^{k−1} |ajk |(., ek )ej ,

where ajk = (Aek , ej ). Put

I_{j1 ...jk+1} = ( (−1)^{k+1} / 2πi ) ∫_C f (λ)dλ / ( (λj1 − λ) . . . (λjk+1 − λ) ).

We need the following result.

Lemma 2.5.4. Let A be an n × n-matrix and f be a holomorphic function on a
Jordan domain (that is, on a closed simply connected set whose boundary is a
Jordan contour) containing σ(A). Let D be the diagonal part of A. Then

‖f (A) − f (D)‖ ≤ Σ_{k=1}^{n−1} Jk ‖ |V |_e^k ‖,

where

Jk = max{|I_{j1 ...jk+1} | : 1 ≤ j1 < . . . < jk+1 ≤ n}.
38 Chapter 2. Some Results of the Matrix Theory

Proof. From (2.1) and (3.3) we deduce that
$$f(A) - f(D) = -\frac{1}{2\pi i} \int_C f(\lambda)(R_\lambda(A) - R_\lambda(D))\, d\lambda = \sum_{k=1}^{n-1} B_k, \tag{5.4}$$
where
$$B_k = \frac{(-1)^{k+1}}{2\pi i} \int_C f(\lambda)(R_\lambda(D)V)^k R_\lambda(D)\, d\lambda.$$
Since D is a diagonal matrix with respect to the Schur basis $\{e_k\}$ and its diagonal
entries are the eigenvalues of A, we have
$$R_\lambda(D) = \sum_{j=1}^{n} \frac{\Delta P_j}{\lambda_j(A) - \lambda},$$
where $\Delta P_k = (\cdot, e_k)e_k$. In addition, $\Delta P_j V \Delta P_k = 0$ for $j \ge k$. Consequently,
$$B_k = \sum_{j_1=1}^{j_2-1} \sum_{j_2=1}^{j_3-1} \cdots \sum_{j_k=1}^{j_{k+1}-1} \sum_{j_{k+1}=1}^{n} \Delta P_{j_1} V \Delta P_{j_2} V \cdots V \Delta P_{j_{k+1}}\, I_{j_1 j_2 \dots j_{k+1}}.$$
Lemma 2.8.1 from [31] gives us the estimate
$$\|B_k\| \le J_k \Big\| \sum_{j_1=1}^{j_2-1} \sum_{j_2=1}^{j_3-1} \cdots \sum_{j_{k+1}=1}^{n} \Delta P_{j_1} |V|_e \Delta P_{j_2} |V|_e \cdots |V|_e \Delta P_{j_{k+1}} \Big\| = J_k \big\| P_{n-k} |V|_e P_{n-k+1} |V|_e \cdots P_{n-1} |V|_e \big\|.$$
But
$$P_{n-k} |V|_e P_{n-k+1} |V|_e \cdots P_{n-1} |V|_e = |V|_e P_{n-k+1} |V|_e \cdots P_{n-1} |V|_e = \dots = |V|_e^k.$$
Thus
$$\|B_k\| \le J_k \big\|\, |V|_e^k\, \big\|.$$
This inequality and (5.4) imply the required inequality. □

Since $N_2(|V|_e) = N_2(V) = g(A)$, by (3.4) and the previous lemma we get
the following result.

Lemma 2.5.5. Under the hypothesis of Lemma 2.5.4 we have
$$\|f(A) - f(D)\| \le \sum_{k=1}^{n-1} J_k\, \gamma_{n,k}\, g^k(A).$$

Let f be holomorphic on a neighborhood of co(A). Thanks to Lemma 1.5.1
from [31],
$$J_k \le \frac{1}{k!} \sup_{\lambda\in co(A)} |f^{(k)}(\lambda)|.$$
Now the previous lemma implies

Corollary 2.5.6. Under the hypothesis of Theorem 2.5.2 we have the inequality
$$\|f(A) - f(D)\| \le \sum_{k=1}^{n-1} \sup_{\lambda\in co(A)} |f^{(k)}(\lambda)|\, \gamma_{n,k}\, \frac{g^k(A)}{k!}.$$
The assertion of Theorem 2.5.2 directly follows from the previous corollary.
Note that the latter corollary is a slight improvement of Theorem 2.7.1 from
[31].
Denote by $f[a_1, a_2, \dots, a_{k+1}]$ the k-th divided difference of f at the points $a_1, a_2, \dots, a_{k+1}$.
By the Hadamard representation [20, formula (54)], we have
$$I_{j_1 \dots j_{k+1}} = f[\lambda_{j_1}, \dots, \lambda_{j_{k+1}}],$$
provided the $\lambda_j$ are distinct. Now Lemma 2.5.5 implies

Corollary 2.5.7. Let all the eigenvalues of an n × n-matrix A be algebraically simple,
and f be a holomorphic function on a Jordan domain containing σ(A). Then
$$\|f(A) - f(D)\| \le \sum_{k=1}^{n-1} f_k\, \gamma_{n,k}\, g^k(A) \le \sum_{k=1}^{n-1} f_k\, \frac{g^k(A)}{\sqrt{k!}},$$
where
$$f_k = \max\{\, |f[\lambda_{j_1}(A), \dots, \lambda_{j_{k+1}}(A)]| : 1 \le j_1 < \dots < j_{k+1} \le n \,\}.$$

2.6 Absolute values of entries of matrix functions


Everywhere in the present section, $A = (a_{jk})_{j,k=1}^n$, $S = \mathrm{diag}\,[a_{11}, \dots, a_{nn}]$, and the
off-diagonal part of A is $W = A - S$. That is, the entries $v_{jk}$ of W are $v_{jk} = a_{jk}$ ($j \ne k$)
and $v_{jj} = 0$ ($j, k = 1, \dots, n$). Denote by co(S) the closed convex hull of the diagonal
entries $a_{11}, \dots, a_{nn}$. We put $|A| = (|a_{jk}|)_{j,k=1}^n$, i.e. $|A|$ is the matrix whose entries
are the absolute values of the entries of A in the standard basis. We also write $T \ge 0$
if all the entries of a matrix T are nonnegative. If T and B are two matrices, then
we write $T \ge B$ if $T - B \ge 0$.
Thanks to Lemma 2.4.8 we obtain $r_s(|W|) \le \tau_W$, where
$$\tau_W := \max_j \sum_{k=1, k\ne j}^{n} |a_{jk}|.$$

Theorem 2.6.1. Let f(λ) be holomorphic on a neighborhood of a Jordan set whose
boundary C has the property
$$|z - a_{jj}| > \sum_{k=1, k\ne j}^{n} |a_{jk}| \tag{6.1}$$
for all $z \in C$ and $j = 1, \dots, n$. Then, with the notation
$$\xi_k(A) := \sup_{z\in co(S)} \frac{|f^{(k)}(z)|}{k!} \quad (k = 1, 2, \dots),$$
the inequality
$$|f(A) - f(S)| \le \sum_{k=1}^{\infty} \xi_k(A)\, |W|^k$$
is valid, provided
$$r_s(|W|)\, \lim_{k\to\infty} \sqrt[k]{\xi_k(A)} < 1.$$

Proof. By the equality $A = S + W$ we get
$$R_\lambda(A) = (S + W - \lambda I)^{-1} = (I + R_\lambda(S)W)^{-1} R_\lambda(S) = \sum_{k=0}^{\infty} (R_\lambda(S)W)^k (-1)^k R_\lambda(S),$$
provided the spectral radius $r_0(\lambda)$ of $R_\lambda(S)W$ is less than one. The off-diagonal entries of
this matrix are
$$\frac{a_{jk}}{a_{jj} - \lambda} \quad (\lambda \ne a_{jj},\ j \ne k),$$
and its diagonal entries are zero. Thanks to Lemma 2.4.8 we have
$$r_0(\lambda) \le \max_j \sum_{k=1, k\ne j}^{n} \frac{|a_{jk}|}{|a_{jj} - \lambda|} < 1 \quad (\lambda \in C),$$
and the series
$$R_\lambda(A) - R_\lambda(S) = \sum_{k=1}^{\infty} (R_\lambda(S)W)^k (-1)^k R_\lambda(S)$$
converges. Thus
$$f(A) - f(S) = -\frac{1}{2\pi i} \int_C f(\lambda)(R_\lambda(A) - R_\lambda(S))\, d\lambda = \sum_{k=1}^{\infty} M_k, \tag{6.2}$$
where
$$M_k = \frac{(-1)^{k+1}}{2\pi i} \int_C f(\lambda)(R_\lambda(S)W)^k R_\lambda(S)\, d\lambda.$$

Since S is a diagonal matrix with respect to the standard basis $\{d_k\}$, we can write
$$R_\lambda(S) = \sum_{j=1}^{n} \frac{\hat Q_j}{b_j - \lambda} \quad (b_j = a_{jj}),$$
where $\hat Q_k = (\cdot, d_k)d_k$. We thus have
$$M_k = \sum_{j_1=1}^{n} \sum_{j_2=1}^{n} \cdots \sum_{j_{k+1}=1}^{n} \hat Q_{j_1} W \hat Q_{j_2} W \cdots W \hat Q_{j_{k+1}}\, J_{j_1 j_2 \dots j_{k+1}}. \tag{6.3}$$
Here
$$J_{j_1 \dots j_{k+1}} = \frac{(-1)^{k+1}}{2\pi i} \int_C \frac{f(\lambda)\, d\lambda}{(b_{j_1} - \lambda) \cdots (b_{j_{k+1}} - \lambda)}.$$
Lemma 1.5.1 from [31] gives us the inequalities
$$|J_{j_1 \dots j_{k+1}}| \le \xi_k(A) \quad (j_1, j_2, \dots, j_{k+1} = 1, \dots, n).$$
Hence, by (6.3),
$$|M_k| \le \xi_k(A) \sum_{j_1=1}^{n} \sum_{j_2=1}^{n} \cdots \sum_{j_{k+1}=1}^{n} \hat Q_{j_1} |W| \hat Q_{j_2} |W| \cdots |W| \hat Q_{j_{k+1}}.$$
But
$$\sum_{j_1=1}^{n} \sum_{j_2=1}^{n} \cdots \sum_{j_{k+1}=1}^{n} \hat Q_{j_1} |W| \hat Q_{j_2} |W| \cdots |W| \hat Q_{j_{k+1}} = |W|^k.$$
Thus $|M_k| \le \xi_k(A)|W|^k$. Now (6.2) implies the required result. □

Additional estimates for the entries of matrix functions can be found in [43,
38]. Under the hypothesis of the previous theorem, with the notation
$$\xi_0(A) := \max_k |f(a_{kk})|,$$
we have the inequality
$$|f(A)| \le \xi_0(A) I + \sum_{k=1}^{\infty} \xi_k(A)\, |W|^k = \sum_{k=0}^{\infty} \xi_k(A)\, |W|^k. \tag{6.4}$$
Here $|W|^0 = I$.
Let $\|A\|_l$ denote a lattice norm of A. That is, $\|A\|_l \le \|\, |A| \,\|_l$, and $\|A\|_l \le \|\tilde A\|_l$
whenever $0 \le A \le \tilde A$. Now the previous theorem implies the inequality
$$\|f(A) - f(S)\|_l \le \sum_{k=1}^{\infty} \xi_k(A)\, \big\|\, |W|^k\, \big\|_l \tag{6.5}$$
and therefore
$$\|f(A)\|_l \le \sum_{k=0}^{\infty} \xi_k(A)\, \big\|\, |W|^k\, \big\|_l.$$
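Inequality (6.4) can be illustrated numerically for $f(z) = e^z$. If the diagonal entries are real, then co(S) is a real segment $[a, b]$ and $\xi_k(A) = e^b/k!$, so (6.4) specializes to the entrywise bound $|e^A| \le e^b\, e^{|W|}$. The sketch below (an illustration under these assumptions, with an arbitrary diagonally dominant test matrix and a plain Taylor-series exponential) checks this:

```python
import numpy as np

def expm_taylor(M, terms=60):
    # plain Taylor series for e^M (adequate for small test matrices)
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# For f(z) = e^z and real diagonal entries, co(S) = [a, b] and
# xi_k(A) = e^b / k!, so (6.4) becomes |e^A| <= e^b * e^{|W|} entrywise.
A = np.array([[0.0, 0.5],
              [0.3, 1.0]])
S = np.diag(np.diag(A))
W = A - S                          # off-diagonal part
b = np.max(np.diag(A))
lhs = np.abs(expm_taylor(A))       # |f(A)| entrywise
rhs = np.exp(b) * expm_taylor(np.abs(W))
assert np.all(lhs <= rhs + 1e-9)
```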

2.7 Diagonalizable matrices


The results of this section are useful for the investigation of the forced oscillations
of equations close to ordinary differential equations (see Section 12.3).

2.7.1 A bound for similarity constants of matrices


Everywhere in this section it is assumed that the eigenvalues $\lambda_k = \lambda_k(A)$ ($k = 1, \dots, n$) of A, taken with their algebraic multiplicities, are geometrically simple;
that is, the geometric multiplicity of each eigenvalue is equal to one. As is well
known, in this case A is diagonalizable: there are biorthogonal sequences $\{u_k\}$ and
$\{v_k\}$, $(v_j, u_k) = 0$ ($j \ne k$), $(v_j, u_j) = 1$ ($j, k = 1, \dots, n$), such that
$$A = \sum_{k=1}^{n} \lambda_k Q_k, \tag{7.1}$$
where $Q_k = (\cdot, u_k)v_k$ ($k = 1, \dots, n$) are one-dimensional eigenprojections. Besides,
there are an invertible operator T and a normal operator S such that
$$TA = ST. \tag{7.2}$$
The constant (the condition number)
$$\kappa_T := \|T\|\, \|T^{-1}\|$$
is very important for various applications, cf. [103]. That constant is mainly computed numerically. In the present subsection we suggest a sharp bound for $\kappa_T$.
Applications of the obtained bound are also discussed.
Denote by $\mu_j$, $j = 1, \dots, m \le n$, the distinct eigenvalues of A, and by $p_j$ the
algebraic multiplicity of $\mu_j$. In particular, one can write
$$\mu_1 = \lambda_1 = \dots = \lambda_{p_1}, \quad \mu_2 = \lambda_{p_1+1} = \dots = \lambda_{p_1+p_2},$$
etc.
Let $\delta_j$ be the half-distance from $\mu_j$ to the other eigenvalues of A:
$$\delta_j := \min_{k=1,\dots,m;\, k\ne j} |\mu_j - \mu_k|/2$$
and
$$\delta(A) := \min_{j=1,\dots,m} \delta_j = \min_{j,k=1,\dots,m;\, k\ne j} |\mu_j - \mu_k|/2.$$
Put
$$\eta(A) := \sum_{k=1}^{n-1} \frac{g^k(A)}{\delta^k(A)\sqrt{k!}}.$$

According to (1.1),
$$\eta(A) \le \sum_{k=1}^{n-1} \frac{(\sqrt{2}\, N_2(A_I))^k}{\delta^k(A)\sqrt{k!}}.$$
In [51, Corollary 3.6], the inequality
$$\kappa_T \le \sum_{j=1}^{m} p_j \sum_{k=0}^{n-1} \frac{g^k(A)}{\delta_j^k \sqrt{k!}} \le n(1 + \eta(A)) \tag{7.3}$$
has been derived. This inequality is not sharp: if A is a normal matrix, then it
gives $\kappa_T \le n$, but $\kappa_T = 1$ in this case. In this section we improve inequality (7.3).
To this end put
$$\gamma(A) = \begin{cases} n(1 + \eta(A)) & \text{if } \eta(A) \ge 1, \\[1mm] (\eta(A) + 1)\Big[\dfrac{2\sqrt{n}\,\eta(A)}{1 - \eta(A)} + 1\Big] & \text{if } \eta(A) < 1. \end{cases}$$
Now we are in a position to formulate the main result of the present section.

Theorem 2.7.1. Let A be a diagonalizable n × n-matrix. Then $\kappa_T \le \gamma(A)$.

The proof of this theorem is presented in the next subsection. Theorem 2.7.1
is sharp: if A is normal, then g(A) = 0. Therefore $\eta(A) = 0$ and $\gamma(A) = 1$. Thus
we obtain the equality $\kappa_T = 1$.
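The quantities $\eta(A)$ and $\gamma(A)$ are straightforward to compute. The sketch below (a numerical illustration only, again assuming $g^2(A) = N_2^2(A) - \sum|\lambda_k|^2$, and using test matrices with distinct eigenvalues so that $\delta(A)$ is well defined) evaluates $\gamma(A)$ and checks the sharpness remark: for a normal matrix the bound equals one.

```python
import numpy as np
from math import factorial, sqrt

def gamma_bound(A):
    # gamma(A) from Theorem 2.7.1; g(A) as assumed above, and
    # delta(A) = half the minimal gap between distinct eigenvalues.
    lam = np.linalg.eigvals(A)
    n = A.shape[0]
    gA = sqrt(max(np.linalg.norm(A, 'fro')**2 - np.sum(np.abs(lam)**2), 0.0))
    gaps = [abs(lam[j] - lam[k]) for j in range(n) for k in range(n) if j != k]
    delta = min(gaps) / 2.0
    eta = sum(gA**k / (delta**k * sqrt(factorial(k))) for k in range(1, n))
    if eta >= 1.0:
        return n * (1.0 + eta)
    return (eta + 1.0) * (2.0 * sqrt(n) * eta / (1.0 - eta) + 1.0)

# Normal matrix: g = 0, eta = 0, gamma = 1 (the bound is attained).
assert abs(gamma_bound(np.diag([1.0, 2.0, 4.0])) - 1.0) < 1e-12
# Mild non-normality gives a bound slightly above 1.
A = np.array([[0.0, 0.1], [0.0, 2.0]])
assert gamma_bound(A) >= 1.0
```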

2.7.2 Proof of Theorem 2.7.1


We need the following lemma [51, Lemma 3.4].

Lemma 2.7.2. Let A be a diagonalizable n × n-matrix and
$$S = \sum_{k=1}^{n} \lambda_k (\cdot, d_k) d_k, \tag{7.4}$$
where $\{d_k\}$ is an orthonormal basis. Then the operator
$$T = \sum_{k=1}^{n} (\cdot, d_k) v_k \tag{7.5}$$
has the inverse given by
$$T^{-1} = \sum_{k=1}^{n} (\cdot, u_k) d_k,$$
and (7.2) holds.

Note that one can take $\|u_k\| = \|v_k\|$. This leads to the equality
$$\|T^{-1}\| = \|T\|. \tag{7.6}$$
We also need the following technical lemma.

Lemma 2.7.3. Let $L_1$ and $L_2$ be projections satisfying the condition $r := \|L_1 - L_2\| < 1$. Then for any eigenvector $f_1$ of $L_1$ with $\|f_1\| = 1$ and $L_1 f_1 = f_1$, there
exists an eigenvector $f_2$ of $L_2$ with $\|f_2\| = 1$ and $L_2 f_2 = f_2$, such that
$$\|f_1 - f_2\| \le \frac{2r}{1 - r}.$$
Proof. We have $\|L_2 f_1 - L_1 f_1\| \le r < 1$ and
$$b_0 := \|L_2 f_1\| \ge \|L_1 f_1\| - \|(L_1 - L_2)f_1\| \ge 1 - r > 0.$$
Thanks to the relation $L_2(L_2 f_1) = L_2 f_1$, we can assert that $L_2 f_1$ is an eigenvector
of $L_2$. Then
$$f_2 := \frac{1}{b_0} L_2 f_1$$
is a normed eigenvector of $L_2$. So
$$f_1 - f_2 = L_1 f_1 - \frac{1}{b_0} L_2 f_1 = f_1 - \frac{1}{b_0} f_1 + \frac{1}{b_0}(L_1 - L_2) f_1.$$
But
$$\frac{1}{b_0} \le \frac{1}{1 - r}$$
and
$$\|f_1 - f_2\| \le \Big(\frac{1}{b_0} - 1\Big)\|f_1\| + \frac{1}{b_0}\|(L_1 - L_2)f_1\| \le \frac{1}{1 - r} - 1 + \frac{r}{1 - r} = \frac{2r}{1 - r},$$
as claimed. □

In the rest of this subsection $d_k = e_k$ ($k = 1, \dots, n$), where $\{e_k\}_{k=1}^n$ is the
Schur orthonormal basis of A. Then S = D, where D is the diagonal part of A
(see Section 2.2). For a fixed index $j \le m$, let $\hat P_j$ be the eigenprojection of D,
and $\hat Q_j$ the eigenprojection of A corresponding to the same (geometrically simple)
eigenvalue $\mu_j$. So
$$A = \sum_{j=1}^{m} \mu_j \hat Q_j.$$
The following inequality is proved in [51, inequality (3.3)]:
$$\|\hat Q_j - \hat P_j\| \le \eta_j, \quad \text{where } \eta_j := \sum_{k=1}^{n-1} \frac{g^k(A)}{\delta_j^k \sqrt{k!}}. \tag{7.7}$$
Let $v_{js}$ and $e_{js}$ ($s = 1, \dots, p_j$) be the eigenvectors of $\hat Q_j$ and $\hat P_j$, respectively,
with $\|e_{js}\| = 1$. Inequality (7.7) and the previous lemma yield

Corollary 2.7.4. Assume that
$$\eta(A) < 1. \tag{7.8}$$
Then
$$\Big\| \frac{v_{js}}{\|v_{js}\|} - e_{js} \Big\| \le \psi(A) \quad (s = 1, \dots, p_j), \quad \text{where } \psi(A) = \frac{2\eta(A)}{1 - \eta(A)}.$$
Hence it follows that
$$\Big\| \frac{v_k}{\|v_k\|} - e_k \Big\| \le \psi(A) \quad (k = 1, \dots, n).$$
Taking in the previous corollary $\hat Q_j^*$ instead of $\hat Q_j$, we arrive at the similar inequality
$$\Big\| \frac{u_k}{\|u_k\|} - e_k \Big\| \le \psi(A) \quad (k = 1, \dots, n). \tag{7.9}$$

Proof of Theorem 2.7.1: By (7.5) we have
$$\|Tx\| = \Big[\sum_{k=1}^{n} |(x, u_k)|^2\Big]^{1/2} = \Big[\sum_{k=1}^{n} |(x, \|u_k\| e_k) + (x, u_k - \|u_k\| e_k)|^2\Big]^{1/2}$$
$$\le \Big[\sum_{k=1}^{n} \|u_k\|^2 \Big|\Big(x, \frac{u_k}{\|u_k\|} - e_k\Big)\Big|^2\Big]^{1/2} + \Big[\sum_{k=1}^{n} |\, \|u_k\|(x, e_k)|^2\Big]^{1/2} \quad (x \in \mathbb{C}^n).$$
Hence, under condition (7.8), inequality (7.9) implies
$$\|Tx\| \le \max_k \|u_k\|\, \|x\|\, (\psi(A)\sqrt{n} + 1) \quad (x \in \mathbb{C}^n).$$
Thanks to Corollary 3.3 from [51], $\max_k \|u_k\|^2 \le 1 + \eta(A)$. Thus,
$$\|T\|^2 \le \tilde\gamma(A) := (\eta(A) + 1)\Big[\frac{2\sqrt{n}\,\eta(A)}{1 - \eta(A)} + 1\Big].$$
According to (7.6), $\|T^{-1}\|^2 \le \tilde\gamma(A)$. So under condition (7.8), the inequality
$\kappa_T \le \tilde\gamma(A)$ is true. Combining this inequality with (7.3), we get the required
result. □

2.7.3 Applications of Theorem 2.7.1


Theorem 2.7.1 immediately implies

Corollary 2.7.5. Let A be a diagonalizable n × n-matrix and f(z) be a scalar function defined on the spectrum of A. Then $\|f(A)\| \le \gamma(A) \max_k |f(\lambda_k)|$.

In particular, we have
$$\|R_z(A)\| \le \frac{\gamma(A)}{\rho(A, z)}$$
and
$$\|e^{At}\| \le \gamma(A)\, e^{\alpha(A)t} \quad (t \ge 0).$$
Let A and $\tilde A$ be complex n × n-matrices whose eigenvalues $\lambda_k$ and $\tilde\lambda_k$, respectively, are taken with their algebraic multiplicities. Recall that
$$sv_A(\tilde A) := \max_k \min_j |\tilde\lambda_k - \lambda_j|.$$
Corollary 2.7.6. Let A be diagonalizable. Then $sv_A(\tilde A) \le \gamma(A)\|A - \tilde A\|$.

Indeed, the operator $S = TAT^{-1}$ is normal. Put $B = T\tilde A T^{-1}$. Thanks to
the well-known Corollary 3.4 from [103], $sv_S(B) \le \|S - B\|$. Now the required result is
due to Theorem 2.7.1.
Furthermore, let $\tilde D$, $V_+$ and $V_-$ be the diagonal, upper nilpotent and lower
nilpotent parts of the matrix $A = (a_{jk})$, respectively. Using the preceding corollary with $A_+ = \tilde D + V_+$, we arrive at the relations
$$\sigma(A_+) = \sigma(\tilde D) \quad \text{and} \quad \|A - A_+\| = \|V_-\|.$$
Due to the previous corollary we get

Corollary 2.7.7. Let $A = (a_{jk})_{j,k=1}^n$ be an n × n-matrix whose diagonal has the
property
$$a_{jj} \ne a_{kk} \quad (j \ne k;\ j, k = 1, \dots, n).$$
Then for any eigenvalue μ of A, there is a $k = 1, \dots, n$, such that
$$|\mu - a_{kk}| \le \gamma(A_+)\|V_-\|,$$
and therefore the (upper) spectral radius satisfies the inequality
$$r_s(A) \le \max_{k=1,\dots,n} |a_{kk}| + \gamma(A_+)\|V_-\|,$$
and the lower spectral radius satisfies the inequality
$$r_s(A) \ge \min_{k=1,\dots,n} |a_{kk}| - \gamma(A_+)\|V_-\|,$$
provided $|a_{kk}| > \delta_A$ ($k = 1, \dots, n$).

Clearly, one can interchange the roles of $V_+$ and $V_-$.

2.7.4 Additional norm estimates for functions of diagonalizable matrices

Again A is a diagonalizable n × n-matrix, and $\mu_j$ ($j = 1, \dots, m \le n$) are the distinct
eigenvalues of A: $\mu_j \ne \mu_k$ for $k \ne j$, enumerated in an arbitrary order. Thus,
(7.1) can be written as
$$A = \sum_{k=1}^{m} \mu_k \hat Q_k,$$
where $\hat Q_k$, $k \le m$, is the eigenprojection whose dimension is equal to the algebraic
multiplicity of $\mu_k$.
Besides, for any function f defined on σ(A), f(A) can be represented as
$$f(A) = \sum_{k=1}^{m} f(\mu_k) \hat Q_k.$$
Put
$$\tilde Q_j = \sum_{k=1}^{j} \hat Q_k \quad (j = 1, \dots, m).$$
Note that according to (7.2), $\tilde Q_j = T^{-1} \hat P_j T$, where $\hat P_j$ is an orthogonal projection. Thus by Theorem 2.7.1,
$$\max_{1\le k\le m} \|\tilde Q_k\| \le \kappa_T \le \gamma(A).$$

Theorem 2.7.8. Let A be diagonalizable. Then
$$\|f(A) - f(\mu_m) I\| \le \max_{1\le k\le m} \|\tilde Q_k\| \sum_{k=1}^{m-1} |f(\mu_k) - f(\mu_{k+1})|.$$
To prove this theorem, we need the following analog of the Abel transform.

Lemma 2.7.9. Let $a_k$ be numbers and $W_k$ ($k = 1, \dots, m$) be bounded linear operators
in a Banach space. Let
$$\Psi := \sum_{k=1}^{m} a_k W_k \quad \text{and} \quad B_j = \sum_{k=1}^{j} W_k.$$
Then
$$\Psi = \sum_{k=1}^{m-1} (a_k - a_{k+1}) B_k + a_m B_m.$$

Proof. Obviously,
$$\Psi = a_1 B_1 + \sum_{k=2}^{m} a_k (B_k - B_{k-1}) = \sum_{k=1}^{m} a_k B_k - \sum_{k=2}^{m} a_k B_{k-1} = \sum_{k=1}^{m} a_k B_k - \sum_{k=1}^{m-1} a_{k+1} B_k,$$
as claimed. □
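The Abel-transform identity of Lemma 2.7.9 is a finite algebraic rearrangement and can be verified directly; the sketch below checks it for randomly generated scalars $a_k$ and matrices $W_k$ (all sizes and seeds are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
a = rng.standard_normal(m)                    # scalars a_1, ..., a_m
Ws = [rng.standard_normal((n, n)) for _ in range(m)]   # operators W_1, ..., W_m
Bs = [sum(Ws[:j + 1]) for j in range(m)]      # partial sums B_j = W_1 + ... + W_j

Psi = sum(a[k] * Ws[k] for k in range(m))
abel = sum((a[k] - a[k + 1]) * Bs[k] for k in range(m - 1)) + a[m - 1] * Bs[m - 1]
assert np.allclose(Psi, abel)                 # Lemma 2.7.9 holds
```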

Proof of Theorem 2.7.8: According to Lemma 2.7.9,
$$f(A) = \sum_{k=1}^{m-1} (f(\mu_k) - f(\mu_{k+1})) \tilde Q_k + f(\mu_m) \tilde Q_m. \tag{7.10}$$
But $\tilde Q_m = I$. Hence the assertion of the theorem at once follows. □

According to the previous theorem we have

Corollary 2.7.10. Assume that f(λ) is real for all $\lambda \in \sigma(A)$, and the condition
$$f(\mu_{k+1}) \le f(\mu_k) \quad (k = 1, \dots, m-1) \tag{7.11}$$
holds. Then
$$\|f(A) - f(\mu_m) I\| \le \max_{1\le k\le m} \|\tilde Q_k\|\, [f(\mu_1) - f(\mu_m)].$$
From this corollary it follows that
$$\|f(A)\| \le \max_{1\le k\le m} \|\tilde Q_k\|\, [f(\mu_1) + |f(\mu_m)| - f(\mu_m)],$$
provided (7.11) holds.

2.8 Matrix exponential

2.8.1 Lower bounds
As was shown in Section 2.5,
$$\|e^{At}\| \le e^{\alpha(A)t} \sum_{k=0}^{n-1} \frac{\gamma_{n,k}\, g^k(A) t^k}{k!} \quad (t \ge 0),$$
where $\alpha(A) = \max_{k=1,\dots,n} \mathrm{Re}\,\lambda_k(A)$. Moreover, by (3.1),
$$\|e^{At}\| \le e^{\alpha(A)t} \sum_{k=0}^{n-1} \frac{g^k(A) t^k}{(k!)^{3/2}} \quad (t \ge 0). \tag{8.1}$$
Taking into account that the operator $e^{-At}$ is the inverse of $e^{At}$, it is
not hard to show that
$$\|e^{At} h\| \ge \frac{e^{\beta(A)t}\, \|h\|}{\sum_{k=0}^{n-1} \gamma_{n,k}\, g^k(A) t^k (k!)^{-1}} \quad (t \ge 0,\ h \in \mathbb{C}^n),$$
where $\beta(A) = \min_{k=1,\dots,n} \mathrm{Re}\,\lambda_k(A)$. Therefore by (3.1),
$$\|e^{At} h\| \ge \frac{e^{\beta(A)t}\, \|h\|}{\sum_{k=0}^{n-1} g^k(A)(k!)^{-3/2} t^k} \quad (t \ge 0). \tag{8.2}$$
Moreover, if A is a diagonalizable n × n-matrix, then due to Theorem 2.7.1 we
conclude that
$$\|e^{-At}\| \le \gamma(A)\, e^{-\beta(A)t} \quad (t \ge 0).$$
Hence,
$$\|e^{At} h\| \ge \frac{\|h\|\, e^{\beta(A)t}}{\gamma(A)} \quad (t \ge 0,\ h \in \mathbb{C}^n).$$

2.8.2 Perturbations of matrix exponentials

Let $A, \tilde A \in \mathbb{C}^{n\times n}$ and $E = \tilde A - A$. To investigate perturbations of matrix exponentials one can use the equation
$$e^{\tilde A t} - e^{At} = \int_0^t e^{\tilde A(t-s)} E e^{As}\, ds \tag{8.3}$$
and estimate (8.1). Here we investigate perturbations in the case when $\|\tilde A E - EA\|$
is small enough. We will say that A is stable (Hurwitzian) if $\alpha(A) < 0$. Assume
that A is stable and put
$$u(A) = \int_0^\infty \|e^{At}\|\, dt \quad \text{and} \quad v_A = \int_0^\infty t \|e^{At}\|\, dt.$$
Theorem 2.8.1. Let A be stable, and
$$\|\tilde A E - EA\|\, v_A < 1. \tag{8.4}$$
Then $\tilde A$ is also stable. Moreover,
$$u(\tilde A) \le \frac{u(A) + v_A \|E\|}{1 - v_A \|\tilde A E - EA\|} \tag{8.5}$$
and
$$\int_0^\infty \|e^{\tilde A t} - e^{At}\|\, dt \le \|E\| v_A + \frac{\|\tilde A E - EA\|\, v_A\, (u(A) + v_A \|E\|)}{1 - v_A \|\tilde A E - EA\|}. \tag{8.6}$$

This theorem is proved in the next subsection.
Furthermore, by (8.1) we obtain $u(A) \le u_0(A)$ and $v_A \le \hat v_A$, where
$$u_0(A) := \sum_{k=0}^{n-1} \frac{g^k(A)}{|\alpha(A)|^{k+1} (k!)^{1/2}} \quad \text{and} \quad \hat v_A := \sum_{k=0}^{n-1} \frac{(k+1)\, g^k(A)}{|\alpha(A)|^{k+2} (k!)^{1/2}}.$$
Thus, Theorem 2.8.1 implies

Corollary 2.8.2. Let A be stable and $\|\tilde A E - EA\|\, \hat v_A < 1$. Then $\tilde A$ is also stable.
Moreover,
$$u(\tilde A) \le \frac{u_0(A) + \hat v_A \|E\|}{1 - \hat v_A \|\tilde A E - EA\|}$$
and
$$\int_0^\infty \|e^{\tilde A t} - e^{At}\|\, dt \le \|E\| \hat v_A + \frac{\|\tilde A E - EA\|\, \hat v_A\, (u_0(A) + \hat v_A \|E\|)}{1 - \hat v_A \|\tilde A E - EA\|}.$$
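Corollary 2.8.2 can be tried out on a concrete pair of matrices. The sketch below (a numerical illustration only; the perturbation and the closed-form exponential of the specific triangular $\tilde A$ are hand-picked for this example, and for the normal matrix A one has $u_0(A) = 1/|\alpha(A)|$ and $\hat v_A = 1/\alpha(A)^2$ since $g(A) = 0$) checks the smallness condition and the bound on $u(\tilde A)$:

```python
import numpy as np

A  = np.diag([-1.0, -2.0])                  # stable, normal: g(A) = 0
E  = np.array([[0.0, 0.05], [0.0, 0.0]])
At = A + E                                  # the perturbed matrix \tilde A

# For this normal A: u_0(A) = 1/|alpha(A)|, \hat v_A = 1/alpha(A)^2.
alpha = np.max(np.diag(A))
u0, v_hat = 1.0/abs(alpha), 1.0/alpha**2
comm = np.linalg.norm(At @ E - E @ A, 2)    # ||\tilde A E - E A||
assert comm * v_hat < 1.0                   # hypothesis of Corollary 2.8.2
bound = (u0 + v_hat*np.linalg.norm(E, 2)) / (1.0 - v_hat*comm)

# u(\tilde A) = int_0^infty ||exp(\tilde A t)|| dt, via the closed-form
# exponential of the triangular \tilde A and trapezoidal quadrature.
def expAt(t):
    return np.array([[np.exp(-t), 0.05*(np.exp(-t) - np.exp(-2*t))],
                     [0.0, np.exp(-2*t)]])
ts = np.linspace(0.0, 40.0, 40001)
vals = np.array([np.linalg.norm(expAt(t), 2) for t in ts])
u_tilde = float(np.sum(0.5*(vals[1:] + vals[:-1])*np.diff(ts)))
assert u_tilde <= bound                     # the bound of Corollary 2.8.2 holds
```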

2.8.3 Proof of Theorem 2.8.1

We use the following result: let f(t), c(t) and h(t) be functions defined on
[0, b] ($0 < b < \infty$), with f and h differentiable and c integrable. Then
$$\int_0^b f(t)c(t)h(t)\, dt = f(b)j(b)h(b) - \int_0^b \big( f'(t)j(t)h(t) + f(t)j(t)h'(t) \big)\, dt$$
with
$$j(t) = \int_0^t c(s)\, ds.$$
For the proof see Lemma 8.2.1 below. By that result,
$$e^{\tilde A t} - e^{At} = \int_0^t e^{\tilde A(t-s)} E e^{As}\, ds = E t e^{At} + \int_0^t e^{\tilde A(t-s)} [\tilde A E - EA]\, s e^{As}\, ds.$$
Hence,
$$\int_0^\infty \|e^{\tilde A t} - e^{At}\|\, dt \le \int_0^\infty \|E t e^{At}\|\, dt + \int_0^\infty \int_0^t \|e^{\tilde A(t-s)}\|\, \|\tilde A E - EA\|\, \|s e^{As}\|\, ds\, dt.$$
But
$$\int_0^\infty \int_0^t \|e^{\tilde A(t-s)}\|\, s \|e^{As}\|\, ds\, dt = \int_0^\infty \int_s^\infty \|e^{\tilde A(t-s)}\|\, s \|e^{As}\|\, dt\, ds = \int_0^\infty s \|e^{As}\|\, ds \int_0^\infty \|e^{\tilde A t}\|\, dt = v_A u(\tilde A).$$
Thus
$$\int_0^\infty \|e^{\tilde A t} - e^{At}\|\, dt \le \|E\| v_A + \|\tilde A E - EA\|\, v_A u(\tilde A). \tag{8.7}$$
Hence,
$$u(\tilde A) \le u(A) + \|E\| v_A + \|\tilde A E - EA\|\, v_A u(\tilde A).$$
So according to (8.4), we get (8.5). Furthermore, due to (8.7) and (8.5) we get
(8.6), as claimed. □

2.9 Matrices with nonnegative off-diagonals

In this section it is assumed that $A = (a_{jk})_{j,k=1}^n$ is a real matrix with
$$a_{jk} \ge 0 \quad \text{for } j \ne k. \tag{9.1}$$
Put
$$a = \min_{j=1,\dots,n} a_{jj} \quad \text{and} \quad b = \max_{j=1,\dots,n} a_{jj}.$$
For a scalar function f(λ) denote
$$\alpha_k(f, A) := \inf_{a\le x\le b} \frac{f^{(k)}(x)}{k!} \quad \text{and} \quad \beta_k(f, A) := \sup_{a\le x\le b} \frac{f^{(k)}(x)}{k!} \quad (k = 0, 1, 2, \dots),$$
assuming that the derivatives exist.
Let $W = A - \mathrm{diag}\,(a_{jj})$ be the off-diagonal part of A.
Theorem 2.9.1. Let condition (9.1) hold and f(λ) be holomorphic on a neighborhood of a Jordan set whose boundary C has the property
$$|z - a_{jj}| > \sum_{k=1, k\ne j}^{n} a_{jk}$$
for all $z \in C$ and $j = 1, \dots, n$. In addition, let f be real on [a, b]. Then the following
inequalities are valid:
$$f(A) \ge \sum_{k=0}^{\infty} \alpha_k(f, A)\, W^k, \tag{9.2}$$
provided
$$r_s(W)\, \lim_{k\to\infty} \sqrt[k]{|\alpha_k(f, A)|} < 1,$$
and
$$f(A) \le \sum_{k=0}^{\infty} \beta_k(f, A)\, W^k, \tag{9.3}$$
provided
$$r_s(W)\, \lim_{k\to\infty} \sqrt[k]{|\beta_k(f, A)|} < 1.$$
In particular, if $\alpha_k(f, A) \ge 0$ ($k = 0, 1, \dots$), then the matrix f(A) has nonnegative
entries.
Proof. By (6.2) and (6.3),
$$f(A) = f(S) + \sum_{k=1}^{\infty} M_k,$$
where
$$M_k = \sum_{j_1=1}^{n} \sum_{j_2=1}^{n} \cdots \sum_{j_{k+1}=1}^{n} \hat Q_{j_1} W \hat Q_{j_2} W \cdots W \hat Q_{j_{k+1}}\, J_{j_1 j_2 \dots j_{k+1}}.$$
Here
$$J_{j_1 \dots j_{k+1}} = \frac{(-1)^{k+1}}{2\pi i} \int_C \frac{f(\lambda)\, d\lambda}{(b_{j_1} - \lambda) \cdots (b_{j_{k+1}} - \lambda)} \quad (b_j = a_{jj}).$$
Since S is real, Lemma 1.5.2 from [31] gives us the inequalities
$$\alpha_k(f, A) \le J_{j_1 \dots j_{k+1}} \le \beta_k(f, A).$$
Hence,
$$M_k \ge \alpha_k(f, A) \sum_{j_1=1}^{n} \sum_{j_2=1}^{n} \cdots \sum_{j_{k+1}=1}^{n} \hat Q_{j_1} W \hat Q_{j_2} W \cdots W \hat Q_{j_{k+1}} = \alpha_k(f, A)\, W^k.$$
Similarly, $M_k \le \beta_k(f, A)\, W^k$. This implies the required result. □

2.10 Comments
One of the first estimates for the norm of a regular matrix-valued function was
established by I.M. Gel’fand and G.E. Shilov [19] in connection with their inves-
tigations of partial differential equations, but that estimate is not sharp; it is not
attained for any matrix. The problem of obtaining a sharp estimate for the norm
of a matrix-valued function has been repeatedly discussed in the literature, cf. [14].
2.10. Comments 53

In the late 1970s, the author obtained a sharp estimate for a matrix-valued
function regular on the convex hull of the spectrum, cf. [21] and references therein.
It is attained in the case of normal matrices. Later, that estimate was extended
to various classes of non-selfadjoint operators, such as Hilbert-Schmidt operators,
quasi-Hermitian operators (i.e., linear operators with completely continuous imag-
inary components), quasiunitary operators (i.e., operators represented as a sum
of a unitary operator and a compact one), etc. For more details see [31, 56] and
references given therein.
The material of this chapter is taken from the papers [51, 53, 56] and the
monograph [31].
For relevant results on matrix-valued functions and perturbations of
matrices see the well-known books [9, 74] and [90].
Chapter 3

General Linear Systems

This chapter is devoted to general linear systems including the Bohl - Perron
principle.

3.1 Description of the problem


Let $\eta < \infty$ be a positive constant, and $R(t, \tau)$ be a real n × n-matrix-valued function
defined on $[0, \infty) \times [0, \eta]$, which is piecewise continuous in t for each τ, and whose
entries are left-continuous and have bounded variations in τ. From the theory of
functions of bounded variation (see for instance [73]), it is well known that for
a fixed t, $R(t, \tau)$ can be represented as $R(t, \tau) = R_1(t, \tau) + R_2(t, \tau) + R_3(t, \tau)$,
where $R_1(t, \tau)$ is a saltus function of bounded variation with at most countably
many jumps on $[0, \eta]$, $R_2(t, \tau)$ is an absolutely continuous function, and $R_3(t, \tau)$ is
either zero or a singular function of bounded variation, i.e. $R_3(t, \tau)$ is non-constant,
continuous and has a derivative in τ which is equal to zero almost everywhere
on $[0, \eta]$. If $R_3(t, \cdot)$ is not identically zero, then the expression
$$\int_0^\eta d_\tau R_3(t, \tau)\, f(t - \tau)$$
for a continuous function f cannot be transformed into a Lebesgue integral or a
series. For a concrete example see [73, p. 457]. So in the sequel it is assumed that
$R_3(t, \tau)$ is identically zero.
Consider $R_1(t, \tau)$ and, for a given $t > 0$, let $h_1(t) < h_2(t) < \dots$ be those points
in $[0, \eta)$ where at least one entry of $R_1(t, \cdot)$ has a jump. Define
$$A_k(t) = R_1(t, h_k(t) + 0) - R_1(t, h_k(t)).$$
Then
$$\int_0^\eta d_\tau R_1(t, \tau)\, f(t - \tau) = \sum_{k=1}^{\infty} A_k(t)\, f(t - h_k(t))$$

for any continuous function f .


Define A(t, s) by
$$R_2(t, \tau) = \int_0^\tau A(t, s)\, ds, \quad 0 \le \tau \le \eta.$$
Then
$$\int_0^\eta d_\tau R_2(t, \tau)\, f(t - \tau) = \int_0^\eta A(t, s)\, f(t - s)\, ds.$$
Our main object in this chapter is the following problem in $\mathbb{C}^n$:
$$\dot y(t) = \int_0^\eta d_\tau R(t, \tau)\, y(t - \tau) \quad (t \ge 0), \tag{1.1}$$
$$y(t) = \phi(t) \quad (-\eta \le t \le 0) \tag{1.2}$$
for a given vector function $\phi(t) \in C(-\eta, 0)$. Here and below $\dot y(t)$ is the right
derivative of y(t). The integral in (1.1) is understood as the Lebesgue-Stieltjes
integral.
We need the corresponding non-homogeneous equation
$$\dot x(t) = \int_0^\eta d_\tau R(t, \tau)\, x(t - \tau) + f(t) \quad (t > 0) \tag{1.3}$$
with a given locally integrable vector function f(t) and the zero initial condition
$$x(t) = 0 \quad (-\eta \le t \le 0). \tag{1.4}$$

A solution of problem (1.1), (1.2) is a continuous vector function y(t) defined on
$[-\eta, \infty)$ and satisfying
$$y(t) = \phi(0) + \int_0^t \int_0^\eta d_\tau R(s, \tau)\, y(s - \tau)\, ds \quad (t \ge 0),$$
$$y(t) = \phi(t) \quad (-\eta \le t \le 0). \tag{1.5}$$
A solution of problem (1.3), (1.4) is a continuous vector function x(t) defined on
$[0, \infty)$ and satisfying
$$x(t) = \int_0^t \Big[ f(s) + \int_0^\eta d_\tau R(s, \tau)\, x(s - \tau) \Big]\, ds \quad (t \ge 0) \tag{1.6}$$
with condition (1.4). Below we prove that problems (1.1), (1.2) and (1.3), (1.4)
have unique solutions. See also the well-known Theorem 6.1.1 from [71].
It is assumed that the variation of $R(t, \tau) = (r_{ij}(t, \tau))_{i,j=1}^n$ in τ is bounded
on $[0, \infty)$:
$$v_{jk} := \sup_{t\ge 0} \mathrm{var}(r_{jk}(t, \cdot)) < \infty. \tag{1.7}$$

For instance, consider the equation
$$\dot y(t) = \int_0^\eta A(t, s)\, y(t - s)\, ds + \sum_{k=0}^{m} A_k(t)\, y(t - \tau_k) \quad (t \ge 0;\ m < \infty), \tag{1.8}$$
where
$$0 = \tau_0 < \tau_1 < \dots < \tau_m \le \eta$$
are constants, $A_k(t)$ are piecewise continuous matrices and $A(t, \tau)$ is integrable
in τ on $[0, \eta]$. Then (1.8) can be written as (1.1). Besides, (1.7) holds, provided
$$\sup_{t\ge 0} \Big( \int_0^\eta \|A(t, s)\|_n\, ds + \sum_{k=0}^{m} \|A_k(t)\|_n \Big) < \infty. \tag{1.9}$$

In this chapter $\|z\|_n$ is the Euclidean norm of $z \in \mathbb{C}^n$ and $\|A\|_n$ is the spectral norm
of a matrix A. In addition, $C(\chi) = C(\chi, \mathbb{C}^n)$ is the space of continuous functions
defined on a set $\chi \subset \mathbb{R}$ with values in $\mathbb{C}^n$, with the norm $\|w\|_{C(\chi)} = \sup_{t\in\chi} \|w(t)\|_n$.
Recall that in a finite-dimensional space all norms are equivalent.
Let y(t) be a solution of problem (1.1), (1.2). Then (1.1) is said to be stable
if there is a constant $c_0 \ge 1$, independent of φ, such that
$$\|y(t)\|_n \le c_0 \|\phi\|_{C(-\eta, 0)} \quad (t \ge 0).$$
Equation (1.1) is said to be asymptotically stable if it is stable and $y(t) \to 0$
as $t \to \infty$. Equation (1.1) is exponentially stable if there are constants $c_0 \ge 1$ and
$\nu > 0$, independent of φ, such that
$$\|y(t)\|_n \le c_0 e^{-\nu t} \|\phi\|_{C(-\eta, 0)} \quad (t \ge 0).$$
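A simple instance of (1.8) can be simulated directly. The sketch below uses a forward-Euler scheme for the scalar equation $\dot y(t) = -a\, y(t - h)$ with constant history $\phi \equiv 1$; it is classical that this equation is exponentially stable when $ah < \pi/2$ (here $a = 1$, $h = 0.5$). The step size, horizon, and decay threshold are arbitrary illustration choices, not part of the text's theory:

```python
import numpy as np

# Forward-Euler sketch for a special case of (1.8) with one constant delay:
#   y'(t) = -a * y(t - h),   y(t) = 1 on [-h, 0].
a, h, dt, T = 1.0, 0.5, 1e-3, 20.0
steps, lag = int(T/dt), int(h/dt)
y = np.ones(steps + lag + 1)               # history buffer; y[lag] is y(0)
for i in range(lag, steps + lag):
    y[i + 1] = y[i] - dt * a * y[i - lag]  # y(t+dt) = y(t) - dt*a*y(t-h)
final = abs(y[-1])
assert final < 0.01                        # the solution has decayed
```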

3.2 Existence of solutions


Introduce the operator $E : C(-\eta, \infty) \to C(0, \infty)$ by
$$Eu(t) = \int_0^\eta d_\tau R(t, \tau)\, u(t - \tau) \quad (t \ge 0;\ u \in C(-\eta, \infty)).$$
Lemma 3.2.1. Let condition (1.7) hold. Then for any $T > 0$, there is a constant
V(R), independent of T, such that
$$\|Eu\|_{C(0,T)} \le V(R) \|u\|_{C(-\eta,T)}. \tag{2.1}$$
Proof. This result is due to Lemma 1.12.3. □

Lemma 3.2.2. If condition (1.7) holds and a vector-valued function f is integrable
on each finite segment, then problem (1.3), (1.4) has on $[0, \infty)$ a unique solution.

Proof. By (1.6),
$$x = f_1 + Wx,$$
where
$$f_1(t) = \int_0^t f(s)\, ds$$
and
$$Wx(t) = \int_0^t Ex(s)\, ds. \tag{2.2}$$
For a fixed $T < \infty$, by the previous lemma,
$$\|Wx\|_{C(0,T)} \le V(R) \int_0^T \|x\|_{C(0,s_1)}\, ds_1.$$
Hence,
$$\|W^2 x\|_{C(0,T)} \le V^2(R) \int_0^T \int_0^{s_1} \|x\|_{C(0,s_2)}\, ds_2\, ds_1.$$
Similarly,
$$\|W^k x\|_{C(0,T)} \le V^k(R) \int_0^T \int_0^{s_1} \cdots \int_0^{s_{k-1}} \|x\|_{C(0,s_k)}\, ds_k \cdots ds_2\, ds_1 \le V^k(R) \|x\|_{C(0,T)} \frac{T^k}{k!}.$$
Thus the spectral radius of W equals zero and, consequently,
$$(I - W)^{-1} = \sum_{k=0}^{\infty} W^k.$$
Therefore,
$$x = (I - W)^{-1} f_1 = \sum_{k=0}^{\infty} W^k f_1. \tag{2.3}$$
This proves the lemma. □

Lemma 3.2.3. If condition (1.7) holds, then problem (1.1), (1.2) has on $[0, \infty)$ a
unique solution for an arbitrary $\phi \in C(-\eta, 0)$.

Proof. Put $\hat\phi(t) = \phi(t)$ ($t \le 0$), $\hat\phi(t) = \phi(0)$ ($t \ge 0$), and $x(t) = y(t) - \hat\phi(t)$
($t \ge 0$). Then problem (1.1), (1.2) takes the form (1.3), (1.4) with $f = E\hat\phi$. Now the
previous lemma proves the result. □

3.3 Fundamental solutions

Let $G(t, \tau)$ ($\tau \ge 0$; $t \ge \tau - \eta$) be a matrix-valued function satisfying the equation
$$G(t, \tau) = I + \int_\tau^t \int_0^\eta d_{s_1} R(s, s_1)\, G(s - s_1, \tau)\, ds \quad (t \ge \tau), \tag{3.1}$$
with the condition
$$G(t, \tau) = 0 \quad (\tau - \eta \le t < \tau). \tag{3.2}$$
Then $G(t, \tau)$ is called the fundamental solution of equation (1.1).
Repeating the arguments of Lemma 3.2.2, we prove the existence and uniqueness of the fundamental solution.
Clearly, $G(t, \tau)$ is a matrix-valued function absolutely continuous in t, satisfying (1.1) and the conditions
$$G(\tau, \tau) = I, \quad G(t, \tau) = 0 \quad (t < \tau). \tag{3.3}$$
Any solution x(t) of problem (1.3), (1.4) can be represented as
$$x(t) = \int_0^t G(t, \tau)\, f(\tau)\, d\tau. \tag{3.4}$$
This formula can be obtained by direct differentiation. For other proofs see
[71, Section 6.2], [78]. In these books solution representations for the homogeneous problem can also be found.
Formula (3.4) is called the Variation of Constants formula.
Now we are going to derive a representation for solutions of the homogeneous
equation (1.1). To this end put
$$z(t) = \begin{cases} y(t) - G(t, 0)\phi(0) & \text{if } t \ge 0, \\ 0 & \text{if } -\eta \le t < 0, \end{cases}$$
where y(t) is a solution of problem (1.1), (1.2). Then
$$y(t) = \begin{cases} z(t) + G(t, 0)\phi(0) & \text{if } t \ge 0, \\ \phi(t) & \text{if } -\eta \le t < 0. \end{cases}$$
If we denote
$$\tilde\phi(t) = \begin{cases} 0 & \text{if } t \ge 0, \\ \phi(t) & \text{if } -\eta \le t < 0, \end{cases} \tag{3.5}$$
then we can write
$$y(t) = z(t) + G(t, 0)\phi(0) + \tilde\phi(t) \quad (t \ge -\eta). \tag{3.6}$$

Substituting this equality into (1.1), we have
$$\dot z(t) + \frac{\partial}{\partial t} G(t, 0)\phi(0) = \int_0^\eta d_\tau R(t, \tau)\big( z(t - \tau) + \tilde\phi(t - \tau) + G(t - \tau, 0)\phi(0) \big) \quad (t \ge 0).$$
But
$$\frac{\partial}{\partial t} G(t, 0)\phi(0) = \int_0^\eta d_\tau R(t, \tau)\, G(t - \tau, 0)\phi(0).$$
So
$$\dot z(t) = \int_0^\eta d_\tau R(t, \tau)\big( z(t - \tau) + \tilde\phi(t - \tau) \big) \quad (t \ge 0).$$
By the Variation of Constants formula,
$$z(t) = \int_0^t G(t, s) \int_0^\eta d_\tau R(s, \tau)\, \tilde\phi(s - \tau)\, ds.$$
Taking into account (3.6), we get the following result.

Corollary 3.3.1. For a solution y(t) of the homogeneous problem (1.1), (1.2) the
representation
$$y(t) = G(t, 0)\phi(0) + \int_0^\eta \int_0^\tau G(t, s)\, d_\tau R(s, \tau)\, \phi(s - \tau)\, ds \quad (t \ge 0) \tag{3.7}$$
is valid.

3.4 The generalized Bohl - Perron principle

Theorem 3.4.1. If for any $f \in C(0, \infty)$, problem (1.3), (1.4) has a bounded on
$[0, \infty)$ solution, and condition (1.7) holds, then equation (1.1) is exponentially
stable.

To prove this theorem we need the following lemma.

Lemma 3.4.2. If for any $f \in C(0, \infty)$ a solution of problem (1.3), (1.4) is bounded,
then any solution of problem (1.1), (1.2) is also bounded.

Proof. Let y(t) be a solution of problem (1.1), (1.2). Put
$$\hat\phi(t) = \begin{cases} \phi(0) & \text{if } t \ge 0, \\ \phi(t) & \text{if } -\eta \le t < 0, \end{cases}$$
and $x_0(t) = y(t) - \hat\phi(t)$. We can write $d\hat\phi(t)/dt = 0$ ($t \ge 0$) and
$$\dot x_0(t) = \int_0^\eta d_\tau R(t, \tau)\, x_0(t - \tau) + \psi(t) \quad (t > 0),$$
where
$$\psi(t) = \int_0^\eta d_\tau R(t, \tau)\, \hat\phi(t - \tau).$$
Besides, (1.4) holds with $x(t) = x_0(t)$. Since $V(R) < \infty$, we have $\psi \in C(-\eta, \infty)$.
Due to the hypothesis of this lemma, $x_0 \in C(0, \infty)$. Thus $y \in C(-\eta, \infty)$. As
claimed. □

Lemma 3.4.3. Let a(s) be a continuous function defined on $[0, \eta]$ and condition
(1.7) hold. Then for any $T > 0$, one has
$$\Big\| \int_0^\eta a(s)\, d_s R(t, s)\, f(t - s) \Big\|_{C(-\eta,T)} \le V(R) \max_{0\le s\le \eta} |a(s)|\; \|f\|_{C(-\eta,T)},$$
where V(R) is defined by (2.1) and is independent of T.

Proof. Let $f_k(t)$ be the k-th coordinate of f(t). Then
$$\Big| \int_0^\eta a(s)\, f_k(t - s)\, d_s r_{jk}(t, s) \Big| \le \max_{0\le s\le \eta} |a(s) f_k(t - s)| \int_0^\eta |d_s r_{jk}(t, s)| \le v_{jk} \max_{0\le s\le \eta} |a(s)| \max_{-\eta \le t \le T} |f_k(t)|.$$
Now repeating the arguments of the proof of Lemma 1.12.3, we obtain the required
result. □

Proof of Theorem 3.4.1: Substituting
$$y(t) = y_\epsilon(t) e^{-\epsilon t} \quad (\epsilon > 0) \tag{4.1}$$
with an $\epsilon > 0$ into (1.1), we obtain the equation
$$\dot y_\epsilon(t) = \epsilon y_\epsilon(t) + \int_0^\eta e^{\epsilon\tau} d_\tau R(t, \tau)\, y_\epsilon(t - \tau) \quad (t > 0). \tag{4.2}$$
Introduce in $C(0, \infty)$ the operator
$$\hat G f(t) := \int_0^t G(t, s)\, f(s)\, ds \quad (t \ge 0) \tag{4.3}$$
($f \in C(0, \infty)$). By the hypothesis of the theorem, we have
$$x = \hat G f \in C(0, \infty) \quad \text{for any } f \in C(0, \infty). \tag{4.4}$$
So $\hat G$ is defined on the whole space $C(0, \infty)$. It is closed, since problem (1.3), (1.4)
has a unique solution. Therefore $\hat G$ is bounded according to the Closed Graph
theorem (see Section 1.3).

Consider now the equation
$$\dot x_\epsilon(t) = \epsilon x_\epsilon(t) + \int_0^\eta e^{\epsilon\tau} d_\tau R(t, \tau)\, x_\epsilon(t - \tau) + f(t) \tag{4.5}$$
with the zero initial condition. For the solutions x and $x_\epsilon$ of (1.3) and (4.5), respectively, we have
$$\frac{d}{dt}(x - x_\epsilon)(t) = \int_0^\eta d_\tau R(t, \tau)\, x(t - \tau) - \epsilon x_\epsilon(t) - \int_0^\eta e^{\epsilon\tau} d_\tau R(t, \tau)\, x_\epsilon(t - \tau) = \int_0^\eta d_\tau R(t, \tau)\big( x(t - \tau) - x_\epsilon(t - \tau) \big) + f_\epsilon(t),$$
where
$$f_\epsilon(t) = -\epsilon x_\epsilon(t) + \int_0^\eta (1 - e^{\epsilon\tau})\, d_\tau R(t, \tau)\, x_\epsilon(t - \tau). \tag{4.6}$$
Consequently,
$$x - x_\epsilon = \hat G f_\epsilon. \tag{4.7}$$
For brevity, in this proof we put $\|\cdot\|_{C(0,T)} = |\cdot|_T$ for a finite $T > 0$. Then
$$|\hat G|_T \le \|\hat G\|_{C(0,\infty)}$$
and, due to Lemma 3.4.3,
$$|f_\epsilon|_T \le |x_\epsilon|_T\, \big( \epsilon + V(R)(e^{\epsilon\eta} - 1) \big).$$
So
$$|x_\epsilon|_T \le |x|_T + |x_\epsilon|_T\, \|\hat G\|_{C(0,\infty)} \big( \epsilon + V(R)(e^{\epsilon\eta} - 1) \big).$$
Thus for a sufficiently small $\epsilon$,
$$|x_\epsilon|_T \le \frac{|x|_T}{1 - \|\hat G\|_{C(0,\infty)} \big( \epsilon + V(R)(e^{\epsilon\eta} - 1) \big)} \le \frac{\|x\|_{C(0,\infty)}}{1 - \|\hat G\|_{C(0,\infty)} \big( \epsilon + V(R)(e^{\epsilon\eta} - 1) \big)} < \infty.$$
Now letting $T \to \infty$, we get $x_\epsilon \in C(0, \infty)$. Hence, by the previous lemma, a solution $y_\epsilon$ of (4.2) is bounded. Now (4.1) proves the exponential stability. □

3.5 Lp-version of the Bohl - Perron principle

In this chapter $L^p(\chi) = L^p(\chi, \mathbb{C}^n)$ ($p \ge 1$) is the space of functions defined on a
set $\chi \subset \mathbb{R}$ with values in $\mathbb{C}^n$ and the finite norm
$$\|w\|_{L^p(\chi)} = \Big[ \int_\chi \|w(t)\|_n^p\, dt \Big]^{1/p} \quad (w \in L^p(\chi);\ 1 \le p < \infty),$$
and $\|w\|_{L^\infty(\chi)} = \mathrm{vrai}\sup_{t\in\chi} \|w(t)\|_n$. Besides, $R(t, \tau)$ is the same as in Section
3.1. In particular, condition (1.7) holds.

Theorem 3.5.1. If for a $p \ge 1$ and any $f \in L^p(0, \infty)$, the non-homogeneous problem
(1.3), (1.4) has a solution $x \in L^p(0, \infty)$, and condition (1.7) holds, then equation
(1.1) is exponentially stable.

The proof of this theorem is divided into a series of lemmas presented in this
section.
Note that the existence and uniqueness of solutions of (1.3) in the considered
case is due to the above proved Lemma 3.2.2, since f is locally integrable.
Again put
$$Eu(t) = \int_0^\eta d_\tau R(t, \tau)\, u(t - \tau) \quad (t \ge 0), \tag{5.1}$$
considering that operator as one acting from $L^p(-\eta, T)$ into $L^p(0, T)$ for all
$T > 0$.

Lemma 3.5.2. For any $p \ge 1$ and all $T > 0$, there is a constant V(R), independent
of T, such that
$$\|Eu\|_{L^p(0,T)} \le V(R) \|u\|_{L^p(-\eta,T)}. \tag{5.2}$$
Proof. This result is due to Corollary 1.12.4. □

Lemma 3.5.3. For any f ∈ Lp (0, ∞) (p ≥ 1), let a solution of the nonhomoge-
neous problem (1.3), (1.4) be in Lp (0, ∞) and (1.7) hold. Then any solution of
the homogeneous problem (1.1), (1.2) is in Lp (−η , ∞).
Proof. With a μ > 0, put

e−μt φ(0) if t ≥ 0,
v(t) = .
φ(t) if −η ≤ t < 0
Then v ∈ Lp (−η , ∞) and therefore Ev ∈ Lp (0, ∞) for all p ≥ 1. Furthermore,
substitute y(t) = x(t) + v(t) into (1.1). Then we have problem (1.3), (1.4) with
f (t) = μe−μt φ(0) + (Ev)(t).
Clearly, f ∈ Lp (0, ∞). According to the assumption of this lemma, the solution
x(t) of problem (1.3), (1.4) is in Lp (0, ∞). Thus, y = x+v ∈ Lp (0, ∞). As claimed.


Lemma 3.5.4. If condition (1.7) holds and a solution y(t) of problem (1.1), (1.2)
is in Lp (0, ∞) for a p ≥ 1, then the solution is bounded on [0, ∞). Moreover, if
p < ∞, then
kykpC(0,∞) ≤ pV (R)kykpLp (−η ,∞)
where V (R) is defined by (5.2).
Proof. By (1.1) and Lemma 3.5.2,

kẏkLp (0,∞) ≤ V (R)kykLp (−η ,∞) .

For simplicity, in this proof put ky(t)kn = |y(t)|. The case p = 1 is obvious, since,
Z ∞ Z ∞
d|y(t1 )|
|y(t)| = − dt1 ≤ |ẏ(t1 )|dt1 ≤ kẏkL1 ≤ V (R)kykL1 (t ≥ 0).
t dt1 t

Assume that 1 < p < ∞. Then by the Hölder inequality


|y(t)|p = − ∫t∞ (d|y(t1 )|p /dt1 )dt1 = −p ∫t∞ |y(t1 )|p−1 (d|y(t1 )|/dt1 )dt1 ≤

p ∫t∞ |y(t1 )|p−1 |ẏ(t1 )|dt1 ≤ p [ ∫t∞ |y(t1 )|q(p−1) dt1 ]1/q [ ∫t∞ |ẏ(t1 )|p dt1 ]1/p ,

where q = p/(p − 1). Since q(p − 1) = p, we get the inequalities


|y(t)|p ≤ pkykp−1Lp (0,∞) kẏkLp (0,∞) ≤ pV (R)kykpLp (−η ,∞) (t ≥ 0).

As claimed. 

Lemma 3.5.5. Let a(s) be a continuous function defined on [0, η ] and condition
(1.7) hold. Then for any T > 0 and p ≥ 1, one has

k ∫0η a(s)ds R(t, s)f (t − s)kLp (0,T ) ≤ V (R) max0≤s≤η |a(s)| kf kLp (−η ,T ) ,

where V (R) is defined by (5.2) and independent of T .


The proof of this lemma is similar to the proof of Lemma 3.4.3.
Proof of Theorem 3.5.1: Substituting (4.1) into (1.1), we obtain equation
(4.2). Define the operator Ĝ on Lp (0, ∞) by expression (4.3). By the hypothesis
of the theorem, x = Ĝf ∈ Lp (0, ∞) for all f ∈ Lp (0, ∞). So Ĝ is defined on the
whole space Lp (0, ∞). It is closed, since problem (1.3), (1.4) has a unique solution.
Therefore Ĝ is bounded according to the above mentioned Closed Graph theorem.
Consider now equation (4.5) with the zero initial condition. According to
(1.3), we have equation (4.7), with fε defined by (4.6), where x and xε are solutions
of (1.3) and (4.5), respectively. For brevity, in this proof put k.kLp (0,T ) = |.|p,T
for a finite T > 0. Then |Ĝ|p,T ≤ kĜkLp (0,∞) . In addition, by (4.6) and Lemma
3.5.5, we can write

|fε |p,T ≤ |xε |p,T (ε + V (R)(eεη − 1)).

Hence (4.7) implies the inequality

|xε |p,T ≤ |x|p,T + kĜkLp (0,∞) |xε |p,T (ε + V (R)(eεη − 1)).

Consequently, for all sufficiently small ε,

|xε |p,T ≤ |x|p,T /(1 − kĜkLp (0,∞) (ε + V (R)(eεη − 1))) ≤ kxkLp (0,∞) /(1 − kĜkLp (0,∞) (ε + V (R)(eεη − 1))) < ∞.

Letting T → ∞, we get xε ∈ Lp (0, ∞). Consequently, by Lemmas 3.5.3 and 3.5.4,
the solution yε of (4.2) is bounded. Now (4.1) proves the exponential stability. As
claimed. 

3.6 Equations with infinite delay


Consider in Cn the equation
Z ∞
ẏ(t) = dτ R(t, τ )y(t − τ ) (t ≥ 0), (6.1)
0

where R(t, τ ) is an n × n-matrix-valued function defined on [0, ∞)2 , which is
continuous in t for each τ ≥ 0 and has bounded variation in τ for each t ≥ 0.
The integral in (6.1) is understood as the improper vector Lebesgue - Stieltjes
integral. Take the initial condition
y(t) = φ(t) (−∞ < t ≤ 0) (6.2)
for a given φ ∈ C(−∞, 0)∩L1 (−∞, 0). Consider also the nonhomogeneous equation
Z ∞
ẋ(t) = dτ R(t, τ )x(t − τ ) + f (t) (t ≥ 0) (6.3)
0

with a given vector function f (t) and the zero initial condition
x(t) = 0 (−∞ < t ≤ 0). (6.4)
In space Lp (−∞, ∞) with a p ≥ 1 introduce the operator
Z ∞
E∞ u(t) = dτ R(t, τ )u(t − τ ) (t ≥ 0; u ∈ Lp (−∞, ∞)).
0

It is assumed that the inequality

kE∞ ukLp (0,∞) ≤ V (R)kukLp (−∞,∞) (6.5)

holds.
A solution of problem (6.1), (6.2) is defined as the one of problem (1.1), (1.2)
with η = ∞; a solution of problem (6.3), (6.4) is defined as the one of problem
(1.3), (1.4). If f ∈ Lp (0, ∞), p ≥ 1, the existence and uniqueness of solutions can
be proved as in Lemma 3.2.2, since f is locally integrable.
For instance, consider the equation

ẏ(t) = ∫0∞ A(t, τ )y(t − τ )dτ + Σ∞k=0 Ak (t)y(t − τk ) (t ≥ 0), (6.6)

where 0 = τ0 < τ1 < ... are constants, Ak (t) are piece-wise continuous matrices and
A(t, τ ) is integrable in τ on [0, ∞). Then it is not hard to check that (6.5) holds, if

supt≥0 ( ∫0∞ kA(t, s)kn ds + Σ∞k=0 kAk (t)kn ) < ∞. (6.7)
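For a concrete kernel, the finiteness demanded by (6.7) can be probed numerically. The sketch below uses an assumed toy kernel A(t, s) = sin(t)e−2s M and Ak (t) = cos(t)M/3k (these data are illustrative, not taken from the text):

```python
import numpy as np

# Hypothetical kernel: A(t, s) = sin(t) e^{-2s} M, A_k(t) = cos(t) M / 3^k.
# The left side of (6.7) is then sup_t (|sin t|/2 + (3/2)|cos t|) * ||M||_n.
M = np.array([[0.0, 1.0], [-1.0, -1.0]])
norm_M = np.linalg.norm(M, 2)                   # spectral norm ||M||_n

s = np.linspace(0.0, 50.0, 500001)              # truncation of the improper integral
ds = s[1] - s[0]
base_int = np.sum(np.exp(-2.0 * s)) * ds        # ~ 1/2
series_base = sum(1.0 / 3.0**k for k in range(60))   # ~ 3/2

t = np.linspace(0.0, 20.0, 20001)
sup_bound = norm_M * np.max(np.abs(np.sin(t)) * base_int
                            + np.abs(np.cos(t)) * series_base)
print(sup_bound)   # finite, so condition (6.7) holds for this kernel
```

The supremum here is roughly 2.56, so (6.5) holds for this example.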

According to (6.4) equation (6.3) can be written as


Z t
ẋ(t) = dτ R(t, τ )x(t − τ ) + f (t) (t ≥ 0). (6.8)
0

We will say that equation (6.1) has the ε-property in Lp , if the relation

k ∫0t (eετ − 1)dτ R(t, τ )f (t − τ )kLp (0,∞) → 0 as ε → 0 (ε > 0) (6.9)

holds for any f ∈ Lp (−∞, ∞).


The exponential stability of equation (6.1) is defined as in Section 3.1 with
η = ∞.
Theorem 3.6.1. For a p ≥ 1 and any f ∈ Lp (0, ∞), let the nonhomogeneous
problem (6.3), (6.4) have a solution x ∈ Lp (0, ∞). If, in addition, equation (6.1)
has the ε-property and condition (6.5) holds, then that equation is exponentially
stable.
The proof of this theorem is divided into a series of lemmas presented in the
next section.

3.7 Proof of Theorem 3.6.1


Lemma 3.7.1. Under the hypothesis of Theorem 3.6.1, any solution of the homo-
geneous problem (6.1), (6.2) is in Lp (0, ∞).

Proof. With a μ > 0, put

v(t) = e−μt φ(0) if t ≥ 0, and v(t) = φ(t) if −∞ < t < 0.

Clearly,
kφkpLp (−∞,0) ≤ kφkL1 (−∞,0) kφkp−1C(−∞,0) .

So v ∈ Lp (−∞, ∞). By (6.5) we have E∞ v ∈ Lp (0, ∞). Substitute y = v + x into


(6.1). Then we have problem (6.3), (6.4) with

f (t) = μe−μt φ(0) + E∞ v(t).

By the condition of the lemma x ∈ Lp (0, ∞). We thus get the required result. 

Lemma 3.7.2. If a solution y(t) of problem (6.1), (6.2) is in Lp (0, ∞) (p ≥ 1),
and condition (6.5) holds, then it is bounded on [0, ∞). Moreover, if p < ∞, then
the inequalities

kykpC(0,∞) ≤ pV (R)kykp−1Lp (0,∞) kykLp (−∞,∞) ≤ pV (R)kykpLp (−∞,∞)

are valid.
The proof of this lemma is similar to the proof of Lemma 3.5.4.
Proof of Theorem 3.6.1: Substituting

yε (t) = eεt y(t) (t ≥ 0) and yε (t) = φ(t) (t ≤ 0)

with an ε > 0 into (6.1), we obtain the equation

ẏε (t) = εyε (t) + ∫0∞ eετ dτ R(t, τ )yε (t − τ ) (t > 0). (7.1)

Again use the operator Ĝ : f → x, where x is a solution of problem (6.3), (6.4).


In other words,
Z t
Ĝf (t) = G(t, s)f (s)ds (t ≥ 0; f ∈ Lp (0, ∞)),
0

where G is the fundamental solution to (6.1). By the hypothesis of the theorem,


we have
x = Ĝf ∈ Lp (0, ∞) for any f ∈ Lp (0, ∞).
So Ĝ is defined on the whole space Lp (0, ∞). It is closed, since problem (6.3), (6.4)
has a unique solution. Therefore Ĝ is bounded according to the Closed Graph
theorem (see Section 1.3).

Consider now the equation

ẋε (t) = εxε (t) + ∫0∞ eετ dτ R(t, τ )xε (t − τ ) + f (t) (7.2)

with the zero initial condition. For solutions x and xε of (6.3) and (7.2), respectively, we have

(d/dt)(x − xε )(t) = ∫0∞ dτ R(t, τ )x(t − τ ) − εxε (t) − ∫0∞ eετ dτ R(t, τ )xε (t − τ ) =

∫0∞ dτ R(t, τ )(x(t − τ ) − xε (t − τ )) + fε (t),

where
fε (t) = −εxε (t) + ∫0∞ (1 − eετ )dτ R(t, τ )xε (t − τ ). (7.3)

Consequently,
x − xε = Ĝfε . (7.4)

For brevity, in this proof, for a fixed p we put k.kLp (0,T ) = |.|T for a finite
T > 0. Then |Ĝ|T ≤ kĜkLp (0,∞) . In addition, by the ε-property,

|fε |T ≤ v(ε)|xε |T ,

where v(ε) → 0 as ε → 0. So

|x − xε |T ≤ v(ε)kĜkLp (0,∞) |xε |T .

Thus for a sufficiently small ε,

|xε |T ≤ |x|T /(1 − v(ε)kĜkLp (0,∞) ) ≤ kxkLp (0,∞) /(1 − v(ε)kĜkLp (0,∞) ) < ∞.

Now letting T → ∞, we get xε ∈ Lp (0, ∞). Hence, by the previous lemma, a solution yε of (7.1) is bounded. Now the substitution yε (t) = eεt y(t) proves the exponential stability. As claimed.


3.8 Equations with continuous infinite delay


In this section we illustrate Theorem 3.6.1 in the case p = 2. To this end consider
in Cn the equation
Z ∞ Z ∞
ẏ(t) = A(τ )y(t − τ )dτ + K(t, τ )y(t − τ )dτ (t ≥ 0), (8.1)
0 0

where A(τ ) is a piece-wise continuous matrix-valued function defined on [0, ∞)


and K(t, τ ) is a piece-wise continuous matrix-valued function defined on [0, ∞)2 .
Besides,

kA(s)kn ≤ Ce−μs and kK(t, s)kn ≤ Ce−μ(t+s) (C, μ = const > 0; t, s ≥ 0).
(8.2)
Then it is not hard to check that (8.1) has the ε-property in L2 , and the operator
K̃ : L2 (−∞, ∞) → L2 (0, ∞), defined by
Z ∞
K̃w(t) = K(t, τ )w(t − τ )dτ
0

is bounded. To apply Theorem 3.6.1, consider the equation


Z t Z t
ẋ(t) = A(τ )x(t − τ )dτ + K(t, τ )x(t − τ )dτ + f (t) (t ≥ 0) (8.3)
0 0

with f ∈ L2 (0, ∞). To estimate the solutions of the latter equation, we need the
equation

u̇(t) = ∫0t A(τ )u(t − τ )dτ + h(t) (t ≥ 0) (8.4)

with h ∈ L2 (0, ∞). Applying to (8.4) the Laplace transform, we have

z û(z) = Â(z)û(z) + ĥ(z),

where Â(z), û(z) and ĥ(z) are the Laplace transforms of A(t), u(t) and
h(t), respectively, and z is the dual variable. Then

û(z) = (zI − Â(z))−1 ĥ(z).

It is assumed that det (zI − Â(z)) is a stable function, that is all its zeros are in
the open left half plane, and
1
θ0 := sup k(iωI − Â(iω))−1 kn < . (8.5)
ω∈R kK̃kL2 (0,∞)
Note that various estimates for θ0 can be found in Section 2.3. By the Parseval
equality we have kukL2 (0,∞) ≤ θ0 khkL2 (0,∞) . By this inequality, from (8.3) we get

kxkL2 (0,∞) ≤ θ0 kf + K̃xkL2 (0,∞) ≤

θ0 kf kL2 (0,∞) + θ0 kK̃kL2 (0,∞) kxkL2 (0,∞) .


Hence (8.5) implies that x ∈ L2 (0, ∞). Now by Theorem 3.6.1 we get the following
result.
Corollary 3.8.1. Let conditions (8.2) and (8.5) hold. Then (8.1) is exponentially
stable.

3.9 Comments
The classical theory of differential delay equations is presented in many excellent
books, for instance see [67, 72, 77, 96].
Recall that the Bohl - Perron principle means that the homogeneous ordinary
differential equation (ODE) ẏ = A(t)y (t ≥ 0) with a variable n × n-matrix A(t),
bounded on [0, ∞) is exponentially stable, provided the nonhomogeneous ODE
ẋ = A(t)x + f (t) with the zero initial condition has a bounded solution for any
bounded vector valued function f , cf. [14]. In [70, Theorem 4.15] the Bohl-Perron
principle was generalized to a class of retarded systems with R(t, τ ) = r(t, τ )I,
where r(t, τ ) is a scalar function; besides the asymptotic (not exponential) stability
was proved (see also the book [4]).
Theorems 3.4.1 and 3.5.1 are proved in [58]; Theorem 3.6.1 appears in the
paper [59] (see also [61]).
Chapter 4

Time-Invariant Linear Systems with Delay
This chapter is devoted to time-invariant (autonomous) linear systems with delay.
In terms of the characteristic matrix-valued function we derive estimates for
the Lp - and C-norms of fundamental solutions. Below, these estimates are used to
obtain stability conditions for linear time-variant and nonlinear systems. Moreover,
they enable us to establish bounds for the region of attraction of the
stationary solutions of nonlinear systems.

4.1 Statement of the problem


Everywhere in this chapter R0 (τ ) = (rjk (τ ))nj,k=1 is a real left-continuous n ×
n-matrix-valued function defined on a finite segment [0, η ], whose entries have
bounded variations.
Recall that R0 can be represented as R0 = R1 +R2 +R3 , where R1 is a saltus
function of bounded variation with at most countably many jumps on [0, η ], R2
is an absolutely continuous function and R3 is either zero or a singular function
of bounded variation, i.e. R3 is non-constant, continuous and has the derivative
which is equal to zero almost everywhere on [0, η ] [73]. As it was mentioned in
Section 3.1, if R3 is not zero identically, then the expression
Z η
dR3 (τ )f (t − τ )
0

for a continuous function f , cannot be transformed to a Lebesgue integral or to a


series.
Consider R1 (t) and let h1 < h2 < ... be the points in [0, η ), where at least

one entry of R1 has a jump. Then for any continuous function f ,

∫0η dR1 (τ )f (t − τ ) = Σ∞k=1 Ak f (t − hk ),

where Ak are constant matrices. R1 being a function of bounded variation is
equivalent to the condition

Σ∞k=1 kAk kn < ∞.

Here and below in this chapter, kAkn is the spectral norm of a matrix A.
For R2 we define A(s) by
Z τ
R2 (τ ) = A(s)ds (0 ≤ τ ≤ η ).
0

Then Z η Z η
dR2 (τ )f (t − τ ) = A(s)f (t − s)ds.
0 0
Since the function R2 (τ ) is of bounded variation, we have
Z η
kA(s)kn ds < ∞.
0

Our main object in this chapter is the problem


Z η
ẏ(t) = dR0 (τ )y(t − τ ) (t ≥ 0), (1.1)
0

y(t) = φ(t) for − η ≤ t ≤ 0, (1.2)


where φ(t) is a given continuous vector-valued function defined on [−η , 0]. Recall
that ẏ(t) = dy(t)/dt, t > 0, and ẏ(0) means the right derivative of y at zero.
As it was explained above in this section, equation (1.1) can be written as
Z η m
X
ẏ(t) = A(s)y(t − s)ds + Ak y(t − hk ) (t ≥ 0; m ≤ ∞) (1.3)
0 k=1

where 0 ≤ h1 < h2 < ... < hm < η are constants, Ak are constant matrices
and A(s) is integrable on [0, η ]. For most situations it is sufficient to consider the
special case, where R0 has only a finite number of jumps: m < ∞. So in the sequel
it is assumed that R0 has a finite number of jumps.
A solution y(t) of problem (1.1), (1.2) is a continuous function y : [−η , ∞) →
Cn , such that

y(t) = φ(0) + ∫0t ∫0η dR0 (τ )y(s − τ )ds (t ≥ 0), (1.4a)

and
y(t) = φ(t) (−η ≤ t ≤ 0). (1.4b)
Recall that
n
V ar (R0 ) = (var(rij ))i,j=1
and
var (R0 ) = kV ar (R0 )kn . (1.5)
In particular, for equation (1.3), if h1 = 0 we put R1 (0) = 0 and
j
X
R1 (τ ) = A1 (0 < τ ≤ h2 ), R1 (τ ) = Ak for hj < τ ≤ hj+1
k=1

(j = 2, ..., m; hm+1 = η ). If h1 > 0 we put R1 (τ ) = 0 (0 ≤ τ ≤ h1 ) and


j
X
R1 (τ ) = Ak for hj < τ ≤ hj+1 (j = 1, ..., m).
k=1

In addition, Z τ
R0 (τ ) = A(s)ds + R1 (τ ).
0
Let ãij (s) and a(k)ij be the entries of A(s) and Ak , respectively. Then for equation
(1.3), each entry of R0 satisfies the inequality

var (rij ) ≤ ∫0η |ãij (s)|ds + Σmk=1 |a(k)ij |. (1.6)

In this chapter again C(a, b) = C([a, b]; Cn ), Lp (a, b) = Lp ([a, b]; Cn ).


Furthermore, Lemma 1.12.1 implies

sup0≤s≤t k ∫0η dR0 (τ )y(s − τ )kn ≤ √n var (R0 ) sup−η ≤s≤t ky(s)kn . (1.7)

Put
ŷ(t) := sup0≤s≤t ky(s)kn .

Then from (1.4) we deduce that

ŷ(t) ≤ kφkC(−η ,0) + √n var (R0 ) ∫0t ŷ(s)ds.

Since ky(s)kn ≤ ŷ(s), by the Gronwall lemma we arrive at the inequality

ky(t)kn ≤ kφkC(−η ,0) exp[ t√n var (R0 ) ] (t ≥ 0). (1.8)

4.2 Application of the Laplace transform


We need the non-homogeneous problem
Z η
ẋ(t) = dR0 (τ )x(t − τ ) + f (t) (t ≥ 0), (2.1)
0

with a locally integrable f and the zero initial condition

x(t) = 0 for − η ≤ t ≤ 0. (2.2)

A solution of problem (2.1), (2.2) is a continuous vector function x(t) defined on


[0, ∞) and satisfying the equation
Z tZ η Z t
x(t) = dR0 (τ )x(s − τ )ds + f (s)ds (t ≥ 0). (2.3)
0 0 0

and condition (2.2). Hence,

kxkC(0,t) ≤ ∫0t kf (s)kn ds + √n var (R0 ) ∫0t kxkC(0,s) ds.

Now the Gronwall lemma implies the inequality

kx(t)kn ≤ kxkC(0,t) ≤ ∫0t kf (s)kn ds exp[ t√n var (R0 ) ].

Assume also that f satisfies the inequality

kf (t)kn ≤ c0 (c0 = const) (2.4)

almost everywhere on [0, ∞).


Taking into account (1.8) we can assert that the Laplace transforms

ỹ(z) = ∫0∞ e−zt y(t)dt and x̃(z) = ∫0∞ e−zt x(t)dt

of solutions of problems (1.1), (1.2) and (2.1), (2.2), respectively, exist at least for
Re z > √n var (R0 ), and the integrals converge absolutely in this half-plane. In
addition, inequality (1.7) together with equation (1.1) shows that ẏ(t) also has
a Laplace transform, at least in Re z > √n var (R0 ), given by z ỹ(z) − φ(0). Taking
the Laplace transform of both sides of equation (1.1), we get
Z ∞ Z η
z ỹ(z) − φ(0) = e −zt
dR0 (τ )y(t − τ )dt =
0 0
Z η Z 0 
e−τ z dR0 (τ ) e−zt y(t)dt + ỹ(z) .
0 −τ

Interchanging the Stieltjes integration with the improper Riemann integration is


justified by the Fubini theorem; we thus get
Z η Z 0
K(z)ỹ(z) = φ(0) + e −τ z
dR0 (τ ) e−zt φ(t)dt,
0 −τ

where Z η
K(z) = zI − e−τ z dR0 (τ ). (2.5)
0

The matrix-valued function K(z) is called the characteristic matrix-valued function


to equation (1.1) and det K(z) is called the characteristic determinant of equation
(1.1). A zero of the characteristic determinant det K(z) is called a characteristic
value of K(.), and λ ∈ C is a regular value of K(.) if det K(λ) ≠ 0.
In the sequel it is assumed that all the characteristic values of K(.) are in
the open left half-plane C− .
Applying the inverse Laplace transform, we obtain

y(t) = (1/2π) ∫−∞∞ eiωt K −1 (iω)[ φ(0) + ∫0η dR0 (τ ) ∫−τ0 e−iω(s+τ ) φ(s)ds ] dω (2.6)

for t ≥ 0. Furthermore, apply the Laplace transform to problem (2.1), (2.2). Then
we easily obtain
x̃(z) = K −1 (z)f˜(z) (2.7)

for all regular z. Here f˜(z) is the Laplace transform of f . Applying the inverse
Laplace transform, we get the following equality:
Z t
x(t) = G(t − s)f (s)ds (t ≥ 0), (2.8)
0

where Z ∞
1
G(t) = eiωt K −1 (iω)dω. (2.9)
2π −∞

Clearly, the matrix-valued function G(t) satisfies (1.1). Moreover,

G(0+) = I, G(t) = 0 (t < 0). (2.10)

So G(t) is the fundamental solution of equation (1.1). Formula (2.8) is the Variation
of Constants formula for problem (2.1), (2.2). Note that for equation (1.3) we have

K(z) = zI − ∫0η e−sz A(s)ds − Σmk=1 e−hk z Ak . (2.11)
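A minimal sketch of evaluating the characteristic matrix (2.11) on the imaginary axis; the discrete-delay data below (A(s) ≡ 0, one matrix A1 with delay h1 = 0.5) are an assumed example:

```python
import numpy as np

# K(z) = zI - sum_k e^{-h_k z} A_k, the discrete-delay part of (2.11)
# (A(s) = 0 and the matrices/delays below are assumed toy data).
A_list = [np.array([[-1.0, 0.2], [0.0, -1.5]])]
h_list = [0.5]
I = np.eye(2)

def K(z):
    return z * I - sum(np.exp(-h * z) * A for A, h in zip(A_list, h_list))

# For a stable system, det K(i w) stays away from zero on the imaginary axis.
min_abs_det = min(abs(np.linalg.det(K(1j * w)))
                  for w in np.linspace(-20.0, 20.0, 4001))
print(min_abs_det)
```

Here the minimum of |det K(iω)| over the grid stays above 1, consistent with all characteristic values of K(.) lying in the open left half-plane.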

4.3 Norms of characteristic matrix functions


Let A be an n × n-matrix. Recall that the quantity

g(A) = [ N22 (A) − Σnk=1 |λk (A)|2 ]1/2

is introduced in Section 2.3. Here λk (A), k = 1, ..., n, are the eigenvalues of A,
counted with their multiplicities; N2 (A) = (Trace AA∗ )1/2 is the Frobenius
(Hilbert-Schmidt) norm of A, and A∗ is the adjoint of A. As shown in Section 2.3,
the following relations are valid:

g 2 (A) ≤ N22 (A) − |Trace A2 | and g 2 (A) ≤ N22 (A − A∗ )/2 = 2N22 (AI ), (3.1)
where AI = (A − A∗ )/2i. Moreover,

g(eit A + zI) = g(A) (t ∈ R; z ∈ C). (3.2)
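The quantity g(A) and the bounds (3.1) are easy to verify numerically; the matrix A below is an assumed sample:

```python
import numpy as np

# g(A) = (N2^2(A) - sum_k |lambda_k(A)|^2)^{1/2}; check the two bounds of (3.1).
A = np.array([[1.0, 2.0], [0.0, 3.0]])
N2_sq = np.linalg.norm(A, 'fro')**2
g = np.sqrt(N2_sq - np.sum(np.abs(np.linalg.eigvals(A))**2))

AI = (A - A.conj().T) / 2j                         # the component A_I of (3.1)
bound1 = np.sqrt(N2_sq - abs(np.trace(A @ A)))     # first bound in (3.1)
bound2 = np.sqrt(2.0) * np.linalg.norm(AI, 'fro')  # second bound in (3.1)
print(g, bound1, bound2)
```

For this triangular sample both bounds are attained with equality (g = 2); for a normal matrix g vanishes, as stated above.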

If A is a normal matrix: AA∗ = A∗ A, then g(A) = 0. Put


B(z) = ∫0η e−zτ dR0 (τ ) (z ∈ C).

In particular, for equation (1.3) we have

B(z) = ∫0η e−sz A(s)ds + Σmk=1 e−hk z Ak (3.3)

and

g(B(iω)) ≤ N2 (B(iω)) ≤ ∫0η N2 (A(s))ds + Σmk=1 N2 (Ak ) (ω ∈ R). (3.4)

Below, under various assumptions, we suggest sharper estimates for g(B(iω)).
According to Theorem 2.3.1, the inequality

kA−1 kn ≤ Σn−1k=0 g k (A)/(√(k!) d k+1 (A))

is valid for any invertible matrix A, where d(A) is the smallest modulus of the
eigenvalues of A.
Hence we arrive at the inequality

k[K(z)]−1 kn ≤ Γ(K(z)) (z ∈ C), (3.5)



where
Γ(K(z)) = Σn−1k=0 g k (B(z))/(√(k!) d k+1 (K(z)))

and d(K(z)) is the smallest modulus of the eigenvalues of the matrix K(z) for a fixed z:

d(K(z)) = mink=1,...,n |λk (K(z))|.
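The Theorem 2.3.1 bound behind (3.5) can be probed numerically on an assumed sample matrix:

```python
import numpy as np
from math import factorial, sqrt

# Probe of ||A^{-1}|| <= sum_{k=0}^{n-1} g^k(A)/(sqrt(k!) d^{k+1}(A)).
A = np.array([[2.0, 1.0], [0.0, 3.0]])
n = A.shape[0]
eigs = np.linalg.eigvals(A)
g = sqrt(max(np.linalg.norm(A, 'fro')**2 - np.sum(np.abs(eigs)**2), 0.0))
d = np.min(np.abs(eigs))                  # smallest modulus of the eigenvalues

bound = sum(g**k / (sqrt(factorial(k)) * d**(k + 1)) for k in range(n))
actual = np.linalg.norm(np.linalg.inv(A), 2)
print(actual, bound)
```

For this sample g = 1, d = 2, so the bound equals 1/2 + 1/4 = 0.75, while the true norm of the inverse is about 0.54.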

If B(z) is a normal matrix, then g(B(z)) = 0, and

k[K(z)]−1 kn ≤ 1/d(K(z)).

For example, that inequality holds if K(z) = zI − A0 e−zη , where A0 is a Hermitian
matrix.
Denote
θ(K) := sup−2 var(R0 )≤ω≤2 var(R0 ) kK −1 (iω)kn .

Lemma 4.3.1. The equality

supω∈R kK −1 (iω)kn = θ(K)

is valid.
Proof. We have
K(0) = ∫0η dR0 (s) = R0 (η ) − R0 (0).

So
kK(0)kn = kR0 (η ) − R0 (0)kn ≤ var(R0 ),
and therefore,
kK −1 (0)kn ≥ 1/var(R0 ).

Here and below in this chapter, kvkn is the Euclidean norm of v ∈ Cn . Simple
calculations show that

k ∫0η e−iωτ dR0 (τ )kn ≤ var(R0 ) (ω ∈ R)

and

kK(iω)vkn ≥ (|ω| − var (R0 ))kvkn ≥ var(R0 )kvkn (|ω| ≥ 2var(R0 ); v ∈ Cn ).

So
kK −1 (iω)kn ≤ 1/var(R0 ) ≤ kK −1 (0)kn (|ω| ≥ 2var(R0 )).

Thus the maximum of kK −1 (iω)kn is attained on the segment [−2 var(R0 ), 2 var(R0 )],
as claimed. 
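Lemma 4.3.1 can be illustrated for the scalar characteristic function K(z) = z + a e−zh (a and h below are assumed sample values, for which var (R0 ) = a):

```python
import numpy as np

# Scalar illustration of Lemma 4.3.1: K(z) = z + a e^{-z h}, var(R0) = a.
a, h = 1.0, 0.5

wide = np.linspace(-100.0, 100.0, 200001)
vals = np.abs(1j * wide + a * np.exp(-1j * wide * h))
sup_wide = (1.0 / vals).max()                              # sup over the whole axis
sup_seg = (1.0 / vals[np.abs(wide) <= 2.0 * a]).max()      # sup over [-2a, 2a]
print(sup_wide, sup_seg)   # the two suprema coincide
```

Both suprema equal 1 here and are attained at ω = 0, inside the segment [−2 var(R0 ), 2 var(R0 )].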

By (3.5) and the previous lemma we have the inequality θ(K) ≤ θ̂(K), where

θ̂(K) := sup|ω|≤2var(R0 ) Γ(K(iω)).

Denote
gB := supω∈[−2var(R0 ),2var(R0 )] g(B(iω))
and
dK := inf ω∈[−2var(R0 ),2var(R0 )] d(K(iω)).

Then we obtain the following result.


Corollary 4.3.2. The inequalities

θ(K) ≤ θ̂(K) ≤ Γ0 (K), (3.6)

are true, where
Γ0 (K) := Σn−1k=0 gBk /(√(k!) dKk+1 ).

4.4 Norms of fundamental solutions of time-invariant systems

Recall that all the characteristic values of K(.) are in the open left half-plane
C− . The following result follows directly from (2.6): if y(t) is a solution of the
time-invariant homogeneous problem (1.1), (1.2), then

y(t) = G(t)φ(0) + ∫0η ∫−τ0 G(t − τ − s)dR0 (τ )φ(s)ds (t ≥ 0). (4.1)

Lemma 4.4.1. Let x(t) be a solution of problem (2.1), (2.2) with f ∈ L2 (0, ∞).
Then
kxkL2 (0,∞) ≤ θ(K)kf kL2 (0,∞) .
Proof. The result is due to (2.8), Lemma 4.3.1 and the Parseval equality. 

The previous lemma and Theorem 3.5.1 yield the following result.
Corollary 4.4.2. Let all the characteristic values of K(.) be in C− . Then equation
(1.1) is exponentially stable.

Lemma 4.4.3. The inequalities

k ∫0η dR0 (s)f (t − s)kL2 (0,∞) ≤ var(R0 )kf kL2 (−η ,∞) (f ∈ L2 (−η , ∞)) (4.2)

and

k ∫0η dR0 (s)f (t − s)kC(0,∞) ≤ √n var (R0 )kf kC(−η ,∞) (f ∈ C(−η , ∞)) (4.3)

are true.


This result is due to Lemma 1.12.1.
Thanks to the Parseval equality,

kGk2L2 (0,∞) = (1/2π)kK −1 (iω)k2L2 (−∞,∞) := (1/2π) ∫−∞∞ kK −1 (iω)k2n dω.

Calculation of such integrals is often a difficult task. Because of this, in the next
lemma we suggest an estimate for kGkL2 (0,∞) . Denote

W (K) := (2θ(K)[var (R0 )θ(K) + 1])1/2 . (4.4)

Lemma 4.4.4. The inequality kGkL2 (0,∞) ≤ W (K) is valid.


Proof. Set w(t) = G(t) − ψ(t), where

ψ(t) = Ie−bt if t ≥ 0, and ψ(t) = 0 if −η ≤ t < 0,

with a positive constant b, which will be chosen below. Then

ẇ(t) = ∫0η dR0 (s)G(t − s) + be−bt I = ∫0η dR0 (s)w(t − s) + f (t) (t ≥ 0),

where
f (t) = ∫0η dR0 (s)ψ(t − s) + be−bt I.

By Lemma 4.4.3, we have

kf kL2 (0,∞) ≤ var (R0 )kψkL2 (−η ,∞) + bke−bt kL2 (0,∞) ≤ (var (R0 ) + b)ke−bt kL2 (0,∞) .

Take into account that
ke−bt kL2 (0,∞) = 1/√(2b).

Then due to Lemma 4.4.1, we get

kwkL2 (0,∞) ≤ θ(K)(var (R0 ) + b)/√(2b).

Hence,

kGkL2 (0,∞) ≤ kwkL2 (0,∞) + 1/√(2b) = (1 + θ(K)(var (R0 ) + b))/√(2b).

The minimum of the right-hand side is attained at

b = (1 + θ(K)var (R0 ))/θ(K).

Thus, we arrive at the inequality

kGkL2 (0,∞) ≤ (2θ(K)(θ(K)var (R0 ) + 1))1/2 .

As claimed. 

By Corollary 4.3.2 we have the inequality

W (K) ≤ Ŵ (K), where Ŵ (K) := (2θ̂(K)[var (R0 )θ̂(K) + 1])1/2 . (4.5)

In addition, Lemma 4.4.3 and (1.1) imply the following lemma.


Lemma 4.4.5. The inequality kĠkL2 (0,∞) ≤ kGkL2 (0,∞) var (R0 ) is valid.
Now (4.5) yields the inequalities
kĠkL2 (0,∞) ≤ W (K)var (R0 ) ≤ Ŵ (K)var (R0 ). (4.6)
We need also the following simple result.
Lemma 4.4.6. Let f ∈ L2 (0, ∞) and f˙ ∈ L2 (0, ∞). Then
kf k2C(0,∞) ≤ 2kf kL2 (0,∞) kf˙kL2 (0,∞) .
Proof. Obviously,

kf (t)k2n = − ∫t∞ (d/ds)kf (s)k2n ds = −2 ∫t∞ kf (s)kn (d/ds)kf (s)kn ds.

Taking into account that
|(d/ds)kf (s)kn | ≤ kf˙(s)kn ,

we get the required result due to Schwarz's inequality. 

By the previous lemma and (4.6) at once we obtain the following result.

Theorem 4.4.7. The inequality

kGk2C(0,∞) ≤ 2kGk2L2 (0,∞) var (R0 )

is valid, and therefore,


kGkC(0,∞) ≤ a0 (K), (4.7)

where
a0 (K) := (2var (R0 ))1/2 W (K) = 2(var (R0 )θ(K)[var (R0 )θ(K) + 1])1/2 .

Clearly,
a0 (K) ≤ 2(1 + var (R0 )θ(K)), (4.8)

and by (4.5) and (3.6),

a0 (K) ≤ (2var (R0 ))1/2 Ŵ (K) = 2(var (R0 )θ̂(K)[var (R0 )θ̂(K) + 1])1/2 . (4.9)

Now we are going to estimate the L1 -norm of the fundamental solution. To this
end consider a function r(s) of bounded variation. Then

r(s) = r+ (s) − r− (s),

where r+ (s), r− (s) are nondecreasing functions. For a continuous function q defined
on [0, η ], let
Z η Z η Z η
q(s)|dr(s)| := q(s)dr+ (s) + q(s)dr− (s).
0 0 0

In particular, denote
Z η
vd (r) := s|dr(s)|.
0

Furthermore, put
vd (R0 ) = k(vd (rjk ))nj,k=1 kn .
Recall that rjk are the entries of R0 . That is, vd (R0 ) is the spectral norm of the
matrix whose entries are vd (rjk ).

Lemma 4.4.8. The inequality

k ∫0η τ dR0 (τ )f (t − τ )kL2 (0,T ) ≤ vd (R0 )kf kL2 (−η ,T ) (T > 0; f ∈ L2 (−η , T ))

is true.

Proof. Put
E1 f (t) = ∫0η τ dR0 (τ )f (t − τ ) (f (t) = (fk (t))nk=1 ).

Then we obtain
kE1 f k2L2 (0,T ) = Σnj=1 ∫0T |(E1 f )j (t)|2 dt,

where (E1 f )j (t) denotes the j-th coordinate of (E1 f )(t). But

|(E1 f )j (t)|2 = | Σnk=1 ∫0η sfk (t − s)drjk (s) |2 ≤ ( Σnk=1 ∫0η s|fk (t − s)||drjk (s)| )2 =

Σnk=1 ∫0η s|fk (t − s)||drjk (s)| Σni=1 ∫0η s1 |fi (t − s1 )||drji (s1 )|.

Hence

∫0T |(E1 f )j (t)|2 dt ≤ Σni=1 Σnk=1 ∫0η ∫0η s|drjk (s)| s1 |drji (s1 )| ∫0T |fk (t − s)fi (t − s1 )|dt.

By the Schwarz inequality,

( ∫0T |fk (t − s)fi (t − s1 )|dt )2 ≤ ∫0T |fk (t − s)|2 dt ∫0T |fi (t − s1 )|2 dt ≤ ∫−ηT |fk (t)|2 dt ∫−ηT |fi (t)|2 dt.

Thus

∫0T |(E1 f )j (t)|2 dt ≤ Σni=1 Σnk=1 vd (rjk )vd (rji )kfk kL2 (−η ,T ) kfi kL2 (−η ,T ) = ( Σnk=1 vd (rjk )kfk kL2 (−η ,T ) )2 .

Hence

Σnj=1 ∫0T |(E1 f )j (t)|2 dt ≤ Σnj=1 ( Σnk=1 vd (rjk )kfk kL2 (−η ,T ) )2 = k(vd (rjk ))nj,k=1 ν2 k2n ≤ vd2 (R0 )kν2 k2n ,

where
ν2 = (kfk kL2 (−η ,T ) )nk=1 .

But kν2 kn = kf kL2 (−η ,T ) . This proves the lemma. 

Clearly,
vd (R0 ) ≤ η var(R0 ).

For equation (1.3) we easily obtain

vd (R0 ) ≤ ∫0η skA(s)kn ds + Σmk=1 hk kAk kn . (4.10)

We need the following technical lemma.


Lemma 4.4.9. Let equation (1.1) be asymptotically stable. Then the function Y (t) :=
tG(t) satisfies the inequality

kY kL2 (0,∞) ≤ θ(K)(1 + vd(R0 ))kGkL2 (0,∞) .

Proof. By (1.1),

Ẏ (t) = tĠ(t) + G(t) = t ∫0η dR0 (τ )G(t − τ ) + G(t) =

∫0η dR0 (τ )(t − τ )G(t − τ ) + ∫0η τ dR0 (τ )G(t − τ ) + G(t).

Thus,
Ẏ (t) = ∫0η dR0 (τ )Y (t − τ ) + F (t),

where
F (t) = ∫0η τ dR0 (τ )G(t − τ ) + G(t).

Hence,
Y (t) = ∫0t G(t − t1 )F (t1 )dt1 .

By Lemma 4.4.1, kY kL2 (0,∞) ≤ θ(K)kF kL2 (0,∞) . But due to the previous lemma,

kF kL2 (0,∞) ≤ kGkL2 (0,∞) (1 + vd (R0 )).

We thus have established the required result. 

Now we are in a position to formulate and prove the main result of the
section.

Theorem 4.4.10. The fundamental solution G of equation (1.1) satisfies the inequality

kGkL1 (0,∞) ≤ kGkL2 (0,∞) (πθ(K)(1 + vd (R0 )))1/2 (4.11)

and therefore,

kGkL1 (0,∞) ≤ W (K)(πθ(K)(1 + vd (R0 )))1/2 . (4.12)
Proof. Let us apply the Karlson inequality

( ∫0∞ |f (t)|dt )4 ≤ π 2 ∫0∞ f 2 (t)dt ∫0∞ t2 f 2 (t)dt

for a real scalar-valued f ∈ L2 [0, ∞) with the property tf (t) ∈ L2 [0, ∞), cf. [93,
Chapter VIII]. By this inequality,
Chapter VIII]. By this inequality

kGk4L1 (0,∞) ≤ π 2 kY k2L2 (0,∞) kGk2L2 (0,∞) .

Now the previous lemma yields the required result. 

Recall that θ(K) and W (K) can be estimated by (3.6).
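The Karlson inequality used in the proof above is itself easy to verify numerically; f (t) = e−t below is an assumed test function:

```python
import numpy as np

# Check (int_0^inf |f|)^4 <= pi^2 * (int f^2) * (int t^2 f^2) for f(t) = e^{-t}.
t = np.linspace(0.0, 60.0, 600001)
dt = t[1] - t[0]
f = np.exp(-t)

I1 = np.sum(np.abs(f)) * dt          # ~ 1
I2 = np.sum(f**2) * dt               # ~ 1/2
I3 = np.sum(t**2 * f**2) * dt        # ~ 1/4
print(I1**4, np.pi**2 * I2 * I3)
```

Here the left side is ≈ 1 and the right side is ≈ π²/8 ≈ 1.23, so the inequality holds with a visible margin.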

4.5 Systems with scalar delay-distributions


The aim of this section is to evaluate g(B(z)) for the system

ẏ(t) = Σmk=1 Ak ∫0η y(t − s)dμk (s), (5.1)

where Ak are constant matrices and μk (s) are scalar nondecreasing functions defined on [0, η ].
In this case the characteristic matrix function is

K(z) = zI − Σmk=1 Ak ∫0η e−zs dμk (s).

Simple calculations show that

var(R0 ) ≤ Σmk=1 kAk kn var (μk ) and vd (R0 ) ≤ Σmk=1 kAk kn ∫0η s dμk (s).

In addition,

g(K(iω)) = g(B(iω)) ≤ Σmk=1 N2 (Ak )var(μk ) (ω ∈ R).

For instance, the equation

ẏ(t) = Σmk=1 Ak y(t − hk ) (5.2)

can take the form (5.1). In the considered case

K(z) = zI − Σmk=1 Ak e−hk z ,

var(R0 ) = Σmk=1 kAk kn and vd (R0 ) = Σmk=1 hk kAk kn ,

and
g(B(iω)) ≤ Σmk=1 N2 (Ak ) (ω ∈ R).

Under additional conditions, the latter estimate can be improved. For example, if

K(z) = zI − A1 e−h1 z − A2 e−h2 z , (5.3)

then due to (3.1) and (3.2), for all ω ∈ R, we obtain

g(B(iω)) = g(eiωh1 B(iω)) = g(A1 + A2 e(h1 −h2 )iω ) ≤ (1/√2)N2 (A1 − A∗1 + A2 e(h1 −h2 )iω − A∗2 e−(h1 −h2 )iω )

and, consequently,

g(B(iω)) ≤ (1/√2)N2 (A1 − A∗1 ) + √2 N2 (A2 ). (5.4)

Similarly, we get

g(B(iω)) ≤ √2 N2 (A1 ) + (1/√2)N2 (A2 − A∗2 ) (ω ∈ R). (5.5)
2
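Estimate (5.4) can be compared with g(B(iω)) computed directly on a frequency grid; the matrices A1 , A2 and delays h1 , h2 below are assumed sample data:

```python
import numpy as np

# Grid check of (5.4) for B(i w) = A1 e^{-i w h1} + A2 e^{-i w h2} (assumed data).
A1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
A2 = np.array([[0.5, 0.0], [0.2, -0.3]])
h1, h2 = 0.3, 1.0

def g(M):
    return np.sqrt(max(np.linalg.norm(M, 'fro')**2
                       - np.sum(np.abs(np.linalg.eigvals(M))**2), 0.0))

rhs = (np.linalg.norm(A1 - A1.conj().T, 'fro') / np.sqrt(2.0)
       + np.sqrt(2.0) * np.linalg.norm(A2, 'fro'))
worst = max(g(A1 * np.exp(-1j * w * h1) + A2 * np.exp(-1j * w * h2))
            for w in np.linspace(-10.0, 10.0, 2001))
print(worst, rhs)
```

The grid maximum of g(B(iω)) stays below the right-hand side of (5.4), as the estimate predicts.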

4.6 Scalar first order autonomous equations


In this section we are going to estimate the fundamental solutions of some scalar
equations. These estimates give us lower bounds for the quantity dK . Recall that
this quantity is introduced in Section 4.3. The obtained estimates will be applied
in the next sections.
Consider a nondecreasing scalar function μ(s) (s ∈ [0, η ]), and put
Z η
k(z) = z + exp(−zs)dμ(s) (z ∈ C). (6.1)
0

Obviously, k(z) is the characteristic function of the scalar equation


Z η
ẏ + y(t − s)dμ(s) = 0. (6.2)
0

Lemma 4.6.1. The equality

inf −2var(μ)≤ω≤2var(μ) |k(iω)| = inf ω∈R |k(iω)|

is valid.
Proof. Clearly, k(0) = var(μ) and

|k(iω)| ≥ |ω| − var(μ) ≥ var(μ) (|ω| ≥ 2var(μ)).

This proves the lemma. 

Lemma 4.6.2. Let


η var(μ) < π/4. (6.3)
Then all the zeros of k(z) are in C− and

inf |k(iω)| ≥ dˆ > 0,


ω∈R

where Z η
dˆ := cos(2var(μ)τ )dμ(τ ). (6.4)
0
Proof. For brevity put v = 2var(μ). Clearly,

|k(iω)|2 = |iω + ∫0η e−iωτ dμ(τ )|2 = (ω − ∫0η sin (τ ω)dμ(τ ))2 + ( ∫0η cos (τ ω)dμ(τ ))2 .

Hence, by the previous lemma we obtain

|k(iω)| ≥ inf |ω|≤v |k(iω)| ≥ dˆ (ω ∈ R).

Furthermore, put

k(m, z) = m var(μ)(1 + z) + (1 − m)k(z), 0 ≤ m ≤ 1.

We have

k(0, z) = k(z), k(1, z) = var(μ)(1 + z) and k(m, 0) = var(μ).

Repeating the proof of the previous lemma we derive the equality

inf ω∈R |k(m, iω)| = inf −v≤ω≤v |k(m, iω)|.

In addition,
Re k(m, iω) = var (μ)m + (1 − m) ∫0η cos(ωτ )dμ.

Consequently, for |ω| ≤ v,

|k(m, iω)| ≥ var (μ)m + (1 − m) ∫0η cos(vτ )dμ.

Therefore,
|k(m, iω)| ≥ var (μ)m + (1 − m)dˆ > 0 (ω ∈ R). (6.5)

Furthermore, assume that k(z) has a zero in the closed right half-plane C+ . Take
into account that k(1, z) = var(μ)(1 + z) does not have zeros in C+ . So k(m0 , iω) (ω ∈ R)
should have a zero for some m0 ∈ [0, 1], according to the continuous dependence of
zeros on coefficients. But due to (6.5) this is impossible. The proof is complete. 

Remark 4.6.3. If
μ(t) − μ(0) > 0 for some t < π/4,
then
∫0η cos(πτ )dμ(τ ) > 0,
and one can replace condition (6.3) by the following one:
η var(μ) ≤ π/4.
4
Consider the scalar function

k1 (z) = z + Σmk=1 bk e−hk z (hk , bk = const ≥ 0).

The following result can be deduced from the previous lemma, but we are going
to present an independent proof.
Lemma 4.6.4. With the notation

c = 2 Σmk=1 bk ,

let
hj c < π/2 (j = 1, ..., m).

Then all the zeros of k1 (z) are in C− and

inf ω∈R |k1 (iω)| ≥ Σmk=1 bk cos (chk ) > 0.

Proof. We restrict ourselves to the case m = 2; in the general case the proof is
similar. Put h1 = v, h2 = h. Introduce the function

f (y) = |iy + b2 e−ihy + b1 e−iyv |2 .

Clearly,

f (y) = |b2 cos(hy) + b1 cos(yv) + i (y − b2 sin (hy) − b1 sin (yv))|2 =

(b2 cos(hy) + b1 cos(yv))2 + (y − b2 sin (hy) − b1 sin (yv))2 =

y 2 + b22 + b21 − 2b2 y sin (hy) − 2b1 y sin (yv) + 2b2 b1 sin (hy) sin (yv) + 2b2 b1 cos(yv) cos(hy).

So

f (y) = y 2 + b22 + b21 − 2b2 y sin (hy) − 2b1 y sin (yv) + 2b2 b1 cos y(v − h). (6.6)

But f (0) = (b2 + b1 )2 and

f (y) ≥ (y − b2 − b1 )2 ≥ (b2 + b1 )2 (|y| > 2(b2 + b1 )).

Thus, the minimum of f is attained on [−c, c] with c = 2(b2 + b1 ). Then thanks
to (6.6), f (y) ≥ w(y) (0 ≤ y ≤ c), where

w(y) = y 2 + b22 + b21 − 2b2 y sin (hc) − 2b1 y sin (vc) + 2b2 b1 cos c(h − v)

and
dw(y)/dy = 2y − 2 (b2 sin (hc) + b1 sin(vc)).

The root of dw(y)/dy = 0 is y = s := b2 sin (hc) + b1 sin(vc). Thus

miny w(y) = s2 + b22 + b21 − 2s2 + 2b2 b1 cos c(h − v) = b22 + b21 − (b2 sin (hc) + b1 sin(vc))2 + 2b2 b1 cos c(h − v).

Hence

miny w(y) = b22 + b21 − b22 sin2 (ch) − b21 sin2 (cv) + 2b2 b1 cos (ch) cos (cv) = b22 cos2 (hc) + b21 cos2 (vc) + 2b2 b1 cos(ch) cos(cv) = (b2 cos(ch) + b1 cos(cv))2 .
This proves the required inequality. To prove the stability, consider the function

K(z, s) = z + b1 e−szv + b2 e−szh (s ∈ [0, 1]).



Then all the zeros of K(z, 0) are in C− and, due to the just proved inequality,

inf ω∈R |K(iω, s)| ≥ b1 cos(csv) + b2 cos(csh) ≥ b1 cos(cv) + b2 cos(ch) > 0 (s ∈ [0, 1]).

So K(z, s) does not have zeros on the imaginary axis. This proves the lemma. 
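The lower bound of Lemma 4.6.4 can be verified on a frequency grid; the coefficients below are assumed sample values satisfying hj c < π/2:

```python
import numpy as np

# Verify inf |k1(i w)| >= sum_k b_k cos(c h_k) for assumed b, h (c = 1.4 here).
b = [0.4, 0.3]
h = [0.5, 1.0]
c = 2.0 * sum(b)                                 # h_j * c = 0.7, 1.4 < pi/2
lower = sum(bk * np.cos(c * hk) for bk, hk in zip(b, h))

ws = np.linspace(-10.0, 10.0, 100001)
vals = np.abs(1j * ws + sum(bk * np.exp(-1j * ws * hk) for bk, hk in zip(b, h)))
inf_k1 = vals.min()
print(inf_k1, lower)   # the grid infimum dominates the predicted lower bound
```

For these data the predicted lower bound is about 0.36, while the grid infimum of |k1 (iω)| is about 0.70, so the lemma's bound holds with room to spare.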

Again consider equation (6.2), whose fundamental solution is given by
\[ \zeta(t) = \frac{1}{2\pi i}\int_{a-i\infty}^{a+i\infty} e^{zt}\,\frac{dz}{k(z)} \quad (a = \mathrm{const}). \tag{6.7} \]
Hence,
\[ \frac{1}{k(z)} = \int_0^\infty e^{-zt}\zeta(t)\,dt. \]
Let
\[ e\eta\,\mathrm{var}(\mu) < 1. \tag{6.8} \]
Then it is not hard to show that (6.2) is exponentially stable and $\zeta(t) \ge 0$ $(t \ge 0)$ (see Section 11.4). Hence it easily follows that
\[ \frac{1}{|k(i\omega)|} \le \int_0^\infty \zeta(t)\,dt = \frac{1}{k(0)} \quad (\omega\in\mathbb{R}). \]
But $k(0) = \mathrm{var}(\mu)$. We thus have proved the following lemma.

Lemma 4.6.5. Let $\mu(s)$ be a nondecreasing function satisfying condition (6.8). Then
\[ \inf_{-\infty\le\omega\le\infty}\Big|i\omega + \int_0^\eta e^{-i\omega s}\,d\mu(s)\Big| = k(0) = \mathrm{var}(\mu) \]
and
\[ \int_0^\infty \zeta(t)\,dt = \frac{1}{k(0)} = \frac{1}{\mathrm{var}(\mu)}. \]
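The equality $\int_0^\infty \zeta(t)\,dt = 1/\mathrm{var}(\mu)$ can be illustrated numerically. The sketch below takes $\mu$ with a single jump of size $b$ at $s = h$ (assumed sample values satisfying (6.8)), so the equation reads $\dot\zeta(t) = -b\,\zeta(t-h)$, and integrates it by the Euler scheme.

```python
import math

# mu has one jump of size b at s = h, so (6.2) reads zeta'(t) = -b*zeta(t-h),
# zeta(0) = 1, zeta(t) = 0 for t < 0.  Values are assumptions satisfying (6.8):
# e * eta * var(mu) = e*h*b < 1.
b, h = 0.5, 0.5
assert math.e * h * b < 1

dt, T = 0.001, 60.0
d = int(round(h / dt))                 # delay in steps
z = [0.0] * d + [1.0]                  # zero history, then zeta(0) = 1
for n in range(int(T / dt)):
    z.append(z[-1] - dt * b * z[-1 - d])

integral = sum(z) * dt                 # approximates int_0^infty zeta(t) dt
print(min(z), integral, 1.0 / b)      # zeta >= 0, integral close to 1/var(mu)
```

The computed integral approaches $1/b = 1/\mathrm{var}(\mu)$, and the solution stays nonnegative, in agreement with the lemma.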
Now let $\mu_0(s)$ $(s\in[0,\eta])$ be another nondecreasing scalar function with the property
\[ e\eta\,\mathrm{var}(\mu_0) \le c_0 \quad (1 \le c_0 < 2) \tag{6.9} \]
and
\[ k_0(z) = z + \int_0^\eta e^{-zs}\,d\mu_0(s) \quad (z\in\mathbb{C}). \tag{6.10} \]
Put $\mu(s) = \mu_0(s)/c_0$. Then
\[ |k_0(i\omega) - k(i\omega)| \le \int_0^\eta d(\mu_0(s) - \mu(s)) = (c_0 - 1)\,\mathrm{var}(\mu). \]
Hence, by the previous lemma,
\[ |k_0(i\omega)| \ge |k(i\omega)| - |k_0(i\omega) - k(i\omega)| \ge \mathrm{var}(\mu) - (c_0-1)\mathrm{var}(\mu) = (2-c_0)\mathrm{var}(\mu) \quad (\omega\in\mathbb{R}). \]
We thus have proved the following lemma.

Lemma 4.6.6. Let $\mu_0(s)$ $(s\in[0,\eta])$ be a nondecreasing scalar function satisfying (6.9). Then
\[ \inf_{\omega\in\mathbb{R}}\Big|i\omega + \int_0^\eta e^{-i\omega s}\,d\mu_0(s)\Big| \ge \frac{2-c_0}{c_0}\,\mathrm{var}(\mu_0). \]

Furthermore, let $\mu_2(s)$ be a nondecreasing function defined on $[0,\eta]$ and
\[ k_2(z) = z + a e^{-hz} + \int_0^\eta e^{-zs}\,d\mu_2(s) \quad (z\in\mathbb{C}) \]
with constants $a > 0$ and $h \ge 0$. Put
\[ v_2 := 2(a + \mathrm{var}(\mu_2)). \]
By Lemma 4.6.1,
\[ \inf_{\omega\in\mathbb{R}} |k_2(i\omega)| = \inf_{-v_2\le\omega\le v_2} |k_2(i\omega)|. \]
But
\[ |k_2(i\omega)|^2 = \Big(\omega - a\sin(h\omega) - \int_0^\eta \sin(\tau\omega)\,d\mu_2(\tau)\Big)^2 + \Big(a\cos(h\omega) + \int_0^\eta \cos(\tau\omega)\,d\mu_2(\tau)\Big)^2. \]
Let
\[ v_2 h < \pi/2 \tag{6.11} \]
and
\[ d_2 := \inf_{|\omega|\le v_2}\Big(a\cos(h\omega) + \int_0^\eta \cos(\tau\omega)\,d\mu_2(\tau)\Big) > 0. \tag{6.12} \]
Then we obtain
\[ \inf_{\omega\in\mathbb{R}} |k_2(i\omega)| \ge d_2. \tag{6.13} \]
Repeating the arguments of the proof of Lemma 4.6.2, we arrive at the following result.

Lemma 4.6.7. Let conditions (6.11) and (6.12) hold. Then all the zeros of $k_2(z)$ are in $C_-$ and inequality (6.13) is fulfilled.

Let us point out one corollary of this lemma.
Corollary 4.6.8. Let condition (6.11) and the inequality
\[ a\cos(hv_2) > \mathrm{var}(\mu_2) \]
hold. Then all the zeros of $k_2(z)$ are in $C_-$ and the inequality
\[ \inf_{\omega\in\mathbb{R}} |k_2(i\omega)| \ge a\cos(hv_2) - \mathrm{var}(\mu_2) > 0 \tag{6.14} \]
is valid.

In particular, if
\[ k_2(z) = z + a + \int_0^\eta e^{-zs}\,d\mu_2(s) \quad (z\in\mathbb{C}) \]
and
\[ a > \mathrm{var}(\mu_2), \tag{6.15} \]
then condition (6.11) holds automatically and
\[ \inf_{\omega\in\mathbb{R}} |k_2(i\omega)| \ge a - \mathrm{var}(\mu_2) > 0. \tag{6.16} \]
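Inequality (6.16) can be checked on a grid. In the sketch below, $\mu_2$ has a single jump of size $b$ at $s = h_2$ (the numeric values are illustrative assumptions satisfying (6.15)), so $k_2(z) = z + a + b e^{-zh_2}$.

```python
import cmath

# Sample values (assumptions) with a > var(mu_2)
a, b, h2 = 1.0, 0.4, 1.0
assert a > b                            # condition (6.15)

k2 = lambda z: z + a + b * cmath.exp(-z * h2)
v2 = 2 * (a + b)                        # the infimum is attained for |w| <= v2

mn = min(abs(k2(1j * 0.001 * n)) for n in range(-3000, 3001))
print(mn, a - b)                        # grid minimum should dominate a - var(mu_2)
```

Here $\mathrm{Re}\,k_2(i\omega) = a + b\cos(\omega h_2) \ge a - b$, so the grid minimum never drops below $a - \mathrm{var}(\mu_2)$.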

4.7 Systems with one distributed delay

In this section we illustrate our preceding results in the case of the equation
\[ \dot y(t) + A\int_0^\eta y(t-s)\,d\mu(s) = 0 \quad (t \ge 0), \tag{7.1} \]
where $A = (a_{jk})$ is a constant Hurwitzian $n\times n$-matrix and $\mu$ is again a scalar nondecreasing function. So in the case considered we have $R_0(s) = \mu(s)A$,
\[ K(z) = zI + \int_0^\eta e^{-zs}\,d\mu(s)\,A, \tag{7.2} \]
and the eigenvalues of the matrix $K(z)$ with a fixed $z$ are
\[ \lambda_j(K(z)) = z + \int_0^\eta e^{-zs}\,d\mu(s)\,\lambda_j(A). \]
In addition,
\[ g(B(i\omega)) = g\Big(A\int_0^\eta e^{-i\omega s}\,d\mu(s)\Big) \le g(A)\,\mathrm{var}(\mu) \quad (\omega\in\mathbb{R}), \]
$\mathrm{var}(R_0) = \|A\|_n\mathrm{var}(\mu)$ and $vd(R_0) = \|A\|_n vd(\mu)$, where
\[ vd(\mu) = \int_0^\eta \tau\,d\mu(\tau). \]
According to Corollary 4.3.2, we have the inequality $\theta(K) \le \theta_A$, where
\[ \theta_A := \sum_{k=0}^{n-1} \frac{(g(A)\mathrm{var}(\mu))^k}{\sqrt{k!}\,d_K^{k+1}} \]
with
\[ d_K = \min_{j=1,\dots,n}\ \inf_{-2\mathrm{var}(\mu)\|A\|_n\le\omega\le 2\mathrm{var}(\mu)\|A\|_n}\Big|\omega i + \lambda_j(A)\int_0^\eta e^{-i\omega s}\,d\mu(s)\Big|. \]
In particular, if $A$ is a normal matrix, then $g(A) = 0$ and $\theta_A = 1/d_K$.

Now Lemma 4.4.4, (4.6), (4.7) and (4.12) imply our next result.

Theorem 4.7.1. Let $G$ be the fundamental solution of equation (7.1) and let all the characteristic values of its characteristic function be in $C_-$. Then
\[ \|G\|_{L^2(0,\infty)} \le W(A,\mu), \tag{7.3} \]
where
\[ W(A,\mu) := \sqrt{2\theta_A[\|A\|_n\mathrm{var}(\mu)\theta_A + 1]}. \]
In addition,
\[ \|\dot G\|_{L^2(0,\infty)} \le \|A\|_n\mathrm{var}(\mu)\,W(A,\mu), \tag{7.4} \]
\[ \|G\|_{C(0,\infty)} \le W(A,\mu)\sqrt{2\|A\|_n\mathrm{var}(\mu)} \tag{7.5} \]
and
\[ \|G\|_{L^1(0,\infty)} \le W(A,\mu)\sqrt{\pi\theta_A(1 + \|A\|_n vd(\mu))}. \tag{7.6} \]
Clearly, $d_K$ can be directly calculated. Moreover, by Lemma 4.6.3 we get the following result.

Lemma 4.7.2. Let all the eigenvalues of $A$ be real and positive:
\[ 0 < \lambda_1(A) \le \dots \le \lambda_n(A), \tag{7.7} \]
and let
\[ \eta\,\mathrm{var}(\mu)\lambda_n(A) < \frac{\pi}{4}. \tag{7.8} \]
Then all the characteristic values of equation (7.1) are in $C_-$ and
\[ d_K \ge d(A,\mu), \tag{7.9} \]
where
\[ d(A,\mu) := \int_0^\eta \cos(2\tau\lambda_n(A)\mathrm{var}(\mu))\,d\mu(\tau) > 0. \]
Thus
\[ \theta_A \le \theta(A,\mu), \quad\text{where}\quad \theta(A,\mu) := \sum_{k=0}^{n-1}\frac{(g(A)\mathrm{var}(\mu))^k}{\sqrt{k!}\,d^{k+1}(A,\mu)}. \]
So we have proved the following result.

Corollary 4.7.3. Let conditions (7.7) and (7.8) hold. Then all the characteristic values of equation (7.1) are in $C_-$ and inequalities (7.3)-(7.6) are true with $\theta(A,\mu)$ instead of $\theta_A$.

In particular, if (7.1) takes the form
\[ \dot y(t) + Ay(t-h) = 0 \quad (h = \mathrm{const} > 0;\ t \ge 0), \tag{7.10} \]
then the eigenvalues of $K(z)$ are $\lambda_j(K(z)) = z + e^{-zh}\lambda_j(A)$. In addition, $\mathrm{var}(\mu) = 1$ and $vd(\mu) = h$. Under condition (7.7), condition (7.8) takes the form
\[ h\lambda_n(A) < \pi/4. \tag{7.11} \]
So in this case we obtain the equality
\[ d(A,\mu) = \cos(2\lambda_n(A)h). \tag{7.12} \]
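For a scalar instance of (7.10) the bound (7.12) can be verified on a grid. The sketch below (with $\lambda_n(A) = 1$ and an assumed delay $h$ satisfying (7.11)) checks that $|i\omega + \lambda e^{-i\omega h}|$ stays above $\cos(2\lambda h)$.

```python
import cmath, math

# Scalar instance of (7.10): y'(t) + lam*y(t-h) = 0 with lam = lambda_n(A).
# Sample values (assumptions): condition (7.11) requires h*lam < pi/4.
lam, h = 1.0, 0.7
assert h * lam < math.pi / 4

k = lambda z: z + lam * cmath.exp(-z * h)     # k(z) = z + lam*e^{-zh}
bound = math.cos(2 * lam * h)                 # the bound (7.12)

# By Lemma 4.6.1 the infimum over the imaginary axis is attained for
# |omega| <= 2*lam; sample a slightly larger grid.
mn = min(abs(k(1j * 0.001 * n)) for n in range(-3000, 3001))
print(mn, bound)
```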

Furthermore, under condition (7.7), assume that, instead of condition (7.8),
\[ e\eta\,\mathrm{var}(\mu)\lambda_n(A) < 1. \tag{7.13} \]
Then, according to Lemma 4.6.5, we can write
\[ \inf_{\omega\in\mathbb{R}}|\lambda_j(K(i\omega))| = \inf_{\omega\in\mathbb{R}}\Big|i\omega + \lambda_j(A)\int_0^\eta e^{-i\omega s}\,d\mu(s)\Big| = \lambda_j(A)\mathrm{var}(\mu). \]
Hence,
\[ d_K \ge \mathrm{var}(\mu)\lambda_1(A), \tag{7.14} \]
and therefore $\theta_A \le \hat\theta_A$, where
\[ \hat\theta_A := \frac{1}{\mathrm{var}(\mu)}\sum_{k=0}^{n-1}\frac{g^k(A)}{\sqrt{k!}\,\lambda_1^{k+1}(A)}. \]
Now, taking into account Theorem 4.7.1, we arrive at our next result.

Corollary 4.7.4. Let conditions (7.7) and (7.13) hold. Then all the characteristic values of equation (7.1) are in $C_-$ and inequalities (7.3)-(7.6) are true with $\hat\theta_A$ instead of $\theta_A$.
Now let $A$ be diagonalizable. Then there are an invertible matrix $T$ and a normal matrix $S$ such that $T^{-1}AT = S$. Besides,
\[ T^{-1}K(z)T = K_S(z), \quad\text{where}\quad K_S(z) = zI + \int_0^\eta e^{-zs}\,d\mu(s)\,S, \]
and therefore
\[ \|K^{-1}(z)\|_n \le \kappa_T\|K_S^{-1}(z)\|_n, \]
where $\kappa_T = \|T^{-1}\|_n\|T\|_n$. Recall that some estimates for $\kappa_T$ are given in Section 2.7. Since $g(S) = 0$ and the eigenvalues of $K$ and $K_S$ coincide, we have $d_K = d_{K_S}$,
\[ \theta_S = \frac{1}{d_K} \quad\text{and therefore}\quad \theta_A \le \frac{\kappa_T}{d_K}. \]
Now Theorem 4.7.1 implies the following result.

Corollary 4.7.5. Suppose $A$ is diagonalizable and all the characteristic values of equation (7.1) are in $C_-$. Then inequalities (7.3)-(7.6) are true with $\kappa_T/d_K$ instead of $\theta_A$.

If, in addition, conditions (7.7) and (7.8) hold, then according to (7.9) equation (7.1) is stable and
\[ \theta_A \le \frac{\kappa_T}{d(A,\mu)}. \tag{7.15} \]
Moreover, if conditions (7.7) and (7.13) hold, then according to (7.14) equation (7.1) is also stable and
\[ \theta_A \le \frac{\kappa_T}{\lambda_1(A)\mathrm{var}(\mu)}. \tag{7.16} \]
Note also that if $A$ is Hermitian and conditions (7.7) and (7.13) hold, then, reducing (7.1) to the diagonal form and using Lemma 4.6.5, we can assert that the fundamental solution $G$ of (7.1) satisfies the inequality
\[ \|G\|_{L^1(0,\infty)} \le \frac{1}{\mathrm{var}(\mu)}\sum_{k=1}^n \frac{1}{\lambda_k(A)}. \tag{7.17} \]
So, if $A$ is diagonalizable and conditions (7.7) and (7.13) hold, then
\[ \|G\|_{L^1(0,\infty)} \le \frac{\kappa_T}{\mathrm{var}(\mu)}\sum_{k=1}^n \frac{1}{\lambda_k(A)}. \tag{7.18} \]
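The quantity $\kappa_T = \|T\|_n\|T^{-1}\|_n$ entering (7.15)-(7.18) is easy to compute. The sketch below evaluates it for an assumed $2\times 2$ diagonalizing pair (the matrices and eigenvalues are illustrative, not from the text), using the explicit $2\times 2$ singular-value formula for the spectral norm.

```python
import math

# 2x2 spectral norm via singular values: ||M|| = sqrt(max eigenvalue of M^T M).
def spec_norm(M):
    (a, b), (c, d) = M
    g11, g22, g12 = a*a + c*c, b*b + d*d, a*b + c*d   # entries of M^T M
    tr, det = g11 + g22, g11 * g22 - g12 * g12
    return math.sqrt((tr + math.sqrt(max(tr * tr - 4 * det, 0.0))) / 2)

# Assumed diagonalizing pair: A = T S T^{-1} with S = diag(1, 2) normal.
T    = [[1.0, 1.0], [0.0, 1.0]]
Tinv = [[1.0, -1.0], [0.0, 1.0]]
kappa_T = spec_norm(T) * spec_norm(Tinv)       # condition number of T

# Bound (7.18) with var(mu) = 1 and eigenvalues 1, 2 (illustrative):
bound_L1 = kappa_T * (1 / 1.0 + 1 / 2.0)
print(kappa_T, bound_L1)
```

For this $T$ one gets $\kappa_T = (3+\sqrt5)/2 \approx 2.618$, so the $L^1$-bound in (7.18) is roughly $3.9$.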
4.8 Estimates via determinants

Again consider equation (1.1). As shown in Section 2.3,
\[ \|A^{-1}\|_n \le \frac{N_2^{n-1}(A)}{(n-1)^{(n-1)/2}|\det(A)|} \quad (n \ge 2) \]
for any invertible $n\times n$-matrix $A$. The following result directly follows from this inequality.

Corollary 4.8.1. For any regular point $z$ of the characteristic function of equation (1.1), one has
\[ \|K^{-1}(z)\|_n \le \frac{N_2^{n-1}(K(z))}{(n-1)^{(n-1)/2}|\det(K(z))|} \quad (n \ge 2), \tag{8.1} \]
and thus $\theta(K) \le \theta_{det}(K)$, where
\[ \theta_{det}(K) := \sup_{-2\mathrm{var}(R_0)\le\omega\le 2\mathrm{var}(R_0)} \frac{N_2^{n-1}(K(i\omega))}{(n-1)^{(n-1)/2}|\det(K(i\omega))|}. \]
This corollary is more convenient for calculations than (3.6) if $n$ is small enough.

Recall that it is assumed that all the characteristic values of (1.1) are in the open left half-plane $C_-$. Denote
\[ W_{det}(K) := \sqrt{2\theta_{det}(K)[\mathrm{var}(R_0)\theta_{det}(K) + 1]}. \]
By Lemma 4.4.4 we arrive at the following lemma.

Lemma 4.8.2. The inequality $\|G\|_{L^2(0,\infty)} \le W_{det}(K)$ is valid.

In addition, Lemma 4.4.5 and Theorem 4.4.7 imply our next result.

Lemma 4.8.3. The inequalities
\[ \|\dot G\|_{L^2(0,\infty)} \le W_{det}(K)\mathrm{var}(R_0) \]
and
\[ \|G\|_{C(0,\infty)} \le a_{det}(K) \tag{8.2} \]
hold, where
\[ a_{det}(K) := \sqrt{2\mathrm{var}(R_0)}\,W_{det}(K). \]
Clearly,
\[ a_{det}(K) = 2\sqrt{\mathrm{var}(R_0)\theta_{det}(K)[\mathrm{var}(R_0)\theta_{det}(K) + 1]} \]
and
\[ a_{det}(K) \le 2(1 + \mathrm{var}(R_0)\theta_{det}(K)) \le 2(1 + \mathrm{var}(R_0)\hat\theta_{det}(K)). \]
To estimate the $L^1$-norm of the fundamental solution via the determinant, we use inequality (4.12) and Corollary 4.8.1, by which we get the following result.
Corollary 4.8.4. The fundamental solution $G$ of equation (1.1) satisfies the inequality
\[ \|G\|_{L^1(0,\infty)} \le W_{det}(K)\sqrt{\pi\theta_{det}(K)(1 + vd(R_0))}. \]
Furthermore, suppose the entries $r_{jk}(\tau)$ of $R_0(\tau)$ have the following properties: the $r_{jj}$ are non-increasing and $r_{jk}(\tau) \equiv 0$ $(j > k)$, that is, $R_0$ is triangular. Then clearly
\[ \det K(z) = \prod_{k=1}^n\Big(z - \int_0^\eta e^{-zs}\,dr_{kk}(s)\Big). \tag{8.3} \]
If, in addition,
\[ \eta\,\mathrm{var}(r_{jj}) \le \pi/4 \quad (j = 1, \dots, n), \tag{8.4} \]
then by Lemma 4.6.2 all the zeros of $\det K(z)$ are in $C_-$ and
\[ |\det K(i\omega)| \ge \prod_{k=1}^n \hat d_{kk} > 0, \tag{8.5} \]
where
\[ \hat d_{jj} = \int_0^\eta \cos(2\mathrm{var}(r_{jj})\tau)\,dr_{jj}(\tau). \]
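For a triangular $R_0$ with point-mass entries, the factorization (8.3) and the bound (8.5) can be checked numerically. In the sketch below the weights and delays are illustrative assumptions, the diagonal measures are taken with total variation $b_j$ concentrated at $h_j$, and $\hat d_{jj}$ is computed with $|dr_{jj}|$ as the measure (a sign convention we assume, since the $r_{jj}$ are non-increasing).

```python
import cmath, math

# Diagonal point masses: k_j(z) = z + b_j e^{-z h_j}; condition (8.4) holds.
b1, h1 = 1.0, 0.5
b2, h2 = 0.8, 0.6
assert h1 * b1 <= math.pi / 4 and h2 * b2 <= math.pi / 4

k1 = lambda z: z + b1 * cmath.exp(-z * h1)
k2 = lambda z: z + b2 * cmath.exp(-z * h2)

# \hat d_jj for a single point mass (assumed convention: |dr_jj| as measure)
dhat1 = b1 * math.cos(2 * b1 * h1)
dhat2 = b2 * math.cos(2 * b2 * h2)

# (8.5): |det K(i w)| = |k_1(i w)| * |k_2(i w)| >= dhat1 * dhat2 on a grid
mn = min(abs(k1(1j * 0.001 * n) * k2(1j * 0.001 * n))
         for n in range(-4000, 4001))
print(mn, dhat1 * dhat2)
```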

4.9 Stability of diagonally dominant systems

Again, $r_{jk}(s)$ $(j,k = 1,\dots,n)$ are the entries of $R_0(s)$; in addition, the $r_{jj}(s)$ are non-increasing. Put
\[ \xi_j = \sum_{k=1,\,k\ne j}^n \mathrm{var}(r_{jk}). \]

Lemma 4.9.1. Assume that all the zeros of the functions
\[ k_j(z) := z - \int_0^\eta e^{-zs}\,dr_{jj}(s) \quad (j = 1, \dots, n) \]
are in $C_-$ and that, in addition,
\[ |k_j(i\omega)| > \xi_j \quad (\omega\in\mathbb{R};\ |\omega| \le 2\mathrm{var}(r_{jj});\ j = 1, \dots, n). \tag{9.1} \]
Then equation (1.1) is exponentially stable. Moreover,
\[ |\det(K(i\omega))| \ge \prod_{j=1}^n(|k_j(i\omega)| - \xi_j) \quad (\omega\in\mathbb{R}). \]
Proof. Clearly,
\[ \sum_{k=1,\,k\ne j}^n\Big|\int_0^\eta e^{-i\omega s}\,dr_{jk}(s)\Big| \le \xi_j. \]
Hence the result is due to the Ostrowski theorem (see Section 1.11). □

If condition (8.4) holds, then by Lemma 4.6.2 we obtain
\[ |k_j(i\omega)| \ge \hat d_{jj} > 0, \]
where the $\hat d_{jj}$ are defined in the previous section. So, due to the previous lemma, we arrive at the following result.

Corollary 4.9.2. If condition (8.4) and the inequalities
\[ \hat d_{jj} > \xi_j \quad (j = 1, \dots, n) \]
hold, then equation (1.1) is exponentially stable. Moreover,
\[ |\det(K(i\omega))| \ge \prod_{j=1}^n(\hat d_{jj} - \xi_j) \quad (\omega\in\mathbb{R}). \]
Now we can apply the results of the previous section.
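Corollary 4.9.2 can be illustrated for a $2\times 2$ system with small off-diagonal variations. All numeric values below are assumptions chosen so that (8.4) and $\hat d_{jj} > \xi_j$ hold; $\hat d_{jj}$ is again computed with $|dr_{jj}|$ as the measure.

```python
import cmath, math

# Diagonal entries: point mass b at h (equal for j = 1, 2);
# off-diagonal entries: point masses of total variation xi each.
b, h = 1.0, 0.5
xi = 0.1
assert h * b <= math.pi / 4            # condition (8.4)

dhat = b * math.cos(2 * b * h)         # \hat d_jj
assert dhat > xi                       # hypothesis of Corollary 4.9.2

def det_K(w):
    kj = 1j * w + b * cmath.exp(-1j * w * h)    # k_1 = k_2 here
    e12 = xi * cmath.exp(-1j * w * 0.3)         # off-diagonal terms
    e21 = xi * cmath.exp(-1j * w * 0.7)
    return kj * kj - e12 * e21

mn = min(abs(det_K(0.001 * n)) for n in range(-4000, 4001))
print(mn, (dhat - xi) ** 2)            # grid minimum dominates the product bound
```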

4.10 Comments

The contents of this chapter are based on the paper [26] and on some results from Chapter 8 of the book [24].
Chapter 5

Properties of Characteristic Values

In this chapter we investigate properties of the characteristic values of autonomous systems. In particular, some identities for characteristic values and perturbation results are derived.

5.1 Sums of moduli of characteristic values

Recall that
\[ K(z) = zI - \int_0^\eta e^{-zs}\,dR_0(s) \]
is the characteristic matrix function of the autonomous equation
\[ \dot y = \int_0^\eta dR_0(s)\,y(t-s). \]
Here and below in this chapter, $R_0(\tau)$ is an $n\times n$-matrix-valued function defined on a finite segment $[0,\eta]$ whose entries have bounded variations, and $I$ is the unit matrix.

Enumerate the characteristic values $z_k(K)$ of $K$ with their multiplicities in the nondecreasing order of their absolute values: $|z_k(K)| \le |z_{k+1}(K)|$ $(k = 1, 2, \dots)$. If $K$ has $l < \infty$ zeros, we put $1/z_k(K) = 0$ $(k = l+1, l+2, \dots)$.

The aim of this section is to estimate the sums
\[ \sum_{k=1}^j \frac{1}{|z_k(K)|} \quad (j = 1, 2, \dots). \]
To this end, note that
\[ K(z) = zI - \sum_{k=0}^\infty \frac{(-1)^k z^k}{k!}\int_0^\eta s^k\,dR_0(s) = \sum_{k=0}^\infty \frac{z^k}{k!}B_k, \tag{1.1} \]
where
\[ B_0 = -\int_0^\eta dR_0(s) = -R_0(\eta), \quad B_1 = I + \int_0^\eta s\,dR_0(s), \quad B_k = (-1)^{k+1}\int_0^\eta s^k\,dR_0(s) \quad (k \ge 2). \]
In addition, it is supposed that
\[ R_0(\eta)\ \text{is invertible}. \tag{1.2} \]
Put $C_k = -R_0^{-1}(\eta)B_k$ and
\[ K_1(z) := -R_0^{-1}(\eta)K(z) = \sum_{k=0}^\infty \frac{z^k}{k!}C_k \quad (C_0 = I). \]
Since $\det K_1(z) = (-1)^n\det R_0^{-1}(\eta)\det K(z)$, all the characteristic values of $K$ and $K_1$ coincide.

For brevity, in this chapter $\|A\|$ denotes the spectral norm of a matrix $A$: $\|A\| = \|A\|_n$. Without loss of generality assume that
\[ \eta < 1. \tag{1.3} \]
If this condition is not valid, then substituting $z = aw$ with some $a$ satisfying $a\eta < 1$ into (1.1), we obtain
\[ K(aw) = awI - \int_0^\eta e^{-saw}\,dR_0(s) = a\Big(wI - \frac{1}{a}\int_0^{a\eta} e^{-\tau w}\,dR_0(\tau/a)\Big) = aK_1(w), \]
where
\[ K_1(w) = wI - \int_0^{\eta_1} e^{-\tau w}\,dR_1(\tau) \]
with $R_1(\tau) = \frac{1}{a}R_0(\tau/a)$ and $\eta_1 = a\eta$. Thus, condition (1.3) holds with $\eta = \eta_1$.

Let $r_{jk}(s)$ be the entries of $R_0(s)$. Recall that $\mathrm{var}(R_0)$ denotes the spectral norm of the matrix $(\mathrm{var}(r_{jk}))_{j,k=1}^n$:
\[ \mathrm{var}(R_0) = \|(\mathrm{var}(r_{jk}))_{j,k=1}^n\|. \]
Lemma 5.1.1. The inequality
\[ \|B_k\| = \Big\|\int_0^\eta s^k\,dR_0(s)\Big\| \le \eta^k\,\mathrm{var}(R_0) \quad (k \ge 2) \]
is true.

The proof of this lemma is similar to that of Lemma 4.4.8 and is left to the reader. From this lemma it follows that
\[ \|C_k\| \le \mathrm{var}(R_0)\|R_0^{-1}(\eta)\|\eta^k \quad (k \ge 2). \]
So under condition (1.3) we have
\[ \sum_{k=1}^\infty \|C_k\|^2 < \infty. \tag{1.4} \]
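In the scalar case ($n = 1$) the bounds of Lemma 5.1.1 are immediate to verify. The following sketch takes $R_0$ with a single jump of size $r$ at $s = h$ (all numeric values are assumptions chosen for the check) and tests the estimates for $\|B_k\|$ and $\|C_k\|$.

```python
# Scalar illustration (n = 1) of Lemma 5.1.1 with a single point mass:
# R0 jumps by r at s = h, so int_0^eta s^k dR0(s) = r*h^k and var(R0) = r.
r, h, eta = 0.7, 0.4, 0.9
assert 0 < h <= eta < 1                # eta < 1 is condition (1.3)

var_R0 = r
for k in range(2, 9):
    Bk = (-1) ** (k + 1) * r * h ** k          # B_k = (-1)^{k+1} int s^k dR0
    Ck = -(1.0 / r) * Bk                       # C_k = -R0(eta)^{-1} B_k
    assert abs(Bk) <= eta ** k * var_R0        # Lemma 5.1.1
    assert abs(Ck) <= var_R0 * (1.0 / r) * eta ** k
print("bounds verified for k = 2..8")
```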

Furthermore, denote
\[ \Psi_K := \Big[\sum_{k=1}^\infty C_kC_k^*\Big]^{1/2}. \]
So $\Psi_K$ is an $n\times n$-matrix. Set
\[ \omega_k(K) = \begin{cases}\lambda_k(\Psi_K) & \text{for } k = 1,\dots,n,\\ 0 & \text{if } k \ge n+1,\end{cases} \]
where $\lambda_k(\Psi_K)$ are the eigenvalues of the matrix $\Psi_K$ with their multiplicities, enumerated in the decreasing order: $\omega_k(K) \ge \omega_{k+1}(K)$ $(k = 1, 2, \dots)$.

Theorem 5.1.2. Let conditions (1.2) and (1.3) hold. Then the characteristic values of $K$ satisfy the inequalities
\[ \sum_{k=1}^j\frac{1}{|z_k(K)|} < \sum_{k=1}^j\Big(\omega_k(K) + \frac{n}{k+n}\Big) \quad (j = 1, 2, \dots). \]
This result is a particular case of Theorem 12.2.1 [46].

From the latter theorem it follows that
\[ \frac{j}{|z_j(K)|} < \sum_{k=1}^j\Big(\omega_k(K) + \frac{n}{k+n}\Big) \]
and thus,
\[ |z_j(K)| > \frac{j}{\sum_{k=1}^j\big[\omega_k(K) + \frac{n}{k+n}\big]} \quad (j = 1, 2, \dots). \]
Therefore, in the disc
\[ |z| \le \frac{j}{\sum_{k=1}^j\big[\omega_k(K) + \frac{n}{k+n}\big]} \quad (j = 2, 3, \dots), \]
$K$ has no more than $j-1$ characteristic values. Let $\nu_K(r)$ be the function counting the characteristic values of $K$ in the disc $|z| \le r$. We consequently get the following result.

Corollary 5.1.3. Let conditions (1.2) and (1.3) hold. Then the inequality $\nu_K(r) \le j-1$ is valid, provided
\[ r \le \frac{j}{\sum_{k=1}^j\big[\omega_k(K) + \frac{n}{k+n}\big]} \quad (j = 2, 3, \dots). \]
Moreover, $K(z)$ does not have characteristic values in the disc
\[ |z| \le \frac{1}{\lambda_1(\Psi_K) + \frac{n}{1+n}}. \]
Let us apply the Borel (Laplace) transform. Namely, put
\[ F_K(w) = \int_0^\infty e^{-wt}K_1(t)\,dt \quad (w\in\mathbb{C};\ |w| = 1). \]
Then
\[ F_K(w) = -R_0^{-1}(\eta)\int_0^\infty e^{-wt}\Big(tI - \int_0^\eta e^{-ts}\,dR_0(s)\Big)dt. \]
We can write down
\[ F_K(w) = -R_0^{-1}(\eta)\Big(\frac{1}{w^2}I - \int_0^\eta\frac{1}{s+w}\,dR_0(s)\Big). \tag{1.5} \]
On the other hand,
\[ F_K(w) = \sum_{k=0}^\infty C_k\frac{1}{w^{k+1}}, \]
and therefore
\[ \frac{1}{2\pi}\int_0^{2\pi}F_K(e^{-is})F_K^*(e^{is})\,ds = \sum_{k=0}^\infty C_kC_k^*. \]
Thus, we have proved the following result.

Lemma 5.1.4. Let condition (1.3) hold. Then
\[ \Psi_K^2 = \frac{1}{2\pi}\int_0^{2\pi}F_K(e^{-is})F_K^*(e^{is})\,ds \]
and consequently
\[ \|\Psi_K\|^2 \le \frac{1}{2\pi}\int_0^{2\pi}\|F_K(e^{is})\|^2\,ds \le \sup_{|w|=1}\|F_K(w)\|^2. \]
Simple calculations show that
\[ \Big\|\int_0^\eta\frac{1}{s+w}\,dR_0(s)\Big\| \le \frac{\mathrm{var}(R_0)}{1-\eta} \quad (|w| = 1). \]
Taking into account (1.5), we deduce that
\[ \|\Psi_K\| \le \|R_0^{-1}(\eta)\|\Big(1 + \frac{\mathrm{var}(R_0)}{1-\eta}\Big) \]
and consequently,
\[ \omega_j(K) \le \sup_{|w|=1}\|F_K(w)\| \le \alpha(F_K) \quad (j \le n), \tag{1.6} \]
where
\[ \alpha(F_K) := \|R_0^{-1}(\eta)\|\Big(1 + \frac{\mathrm{var}(R_0)}{1-\eta}\Big). \]
Note that the norm of $R_0^{-1}(\eta)$ can be estimated by the results presented in Section 2.3 above. Inequality (1.6) and Theorem 5.1.2 imply the following result.

Corollary 5.1.5. Let conditions (1.2) and (1.3) hold. Then the characteristic values of $K$ satisfy the inequalities
\[ \sum_{k=1}^j\frac{1}{|z_k(K)|} < j\alpha(F_K) + n\sum_{k=1}^j\frac{1}{k+n} \quad (j \le n) \]
and
\[ \sum_{k=1}^j\frac{1}{|z_k(K)|} < n\Big(\alpha(F_K) + \sum_{k=1}^j\frac{1}{k+n}\Big) \quad (j > n). \]

5.2 Identities for characteristic values

This section deals with the following sums containing the characteristic values $z_k(K)$ $(k = 1, 2, \dots)$:
\[ \tilde s_m(K) := \sum_{k=1}^\infty\frac{1}{z_k^m(K)} \quad (m = 2, 3, \dots), \]
\[ \sum_{k=1}^\infty\Big(\mathrm{Im}\,\frac{1}{z_k(K)}\Big)^2 \quad\text{and}\quad \sum_{k=1}^\infty\Big(\mathrm{Re}\,\frac{1}{z_k(K)}\Big)^2. \]
Again use the expansion
\[ K_1(z) := -R_0^{-1}(\eta)K(z) = \sum_{k=0}^\infty\frac{z^k}{k!}C_k, \tag{2.1} \]
where
\[ C_k = (-1)^k R_0^{-1}(\eta)\int_0^\eta s^k\,dR_0(s) \quad (k \ge 2), \]
\[ C_0 = I \quad\text{and}\quad C_1 = -R_0^{-1}(\eta)\Big(I + \int_0^\eta s\,dR_0(s)\Big). \]
To formulate our next result, for an integer $m \ge 2$ introduce the $m\times m$-block matrix
\[ \hat B_m = \begin{pmatrix} -C_1 & -C_2/2 & \dots & -C_{m-1}/(m-1)! & -C_m/m!\\ I & 0 & \dots & 0 & 0\\ 0 & I & \dots & 0 & 0\\ \cdot & \cdot & \dots & \cdot & \cdot\\ 0 & 0 & \dots & I & 0\end{pmatrix}, \]
whose entries are $n\times n$-matrices.

Theorem 5.2.1. For any integer $m \ge 2$, we have
\[ \tilde s_m(K) = \mathrm{Trace}\,\hat B_m^m. \]
This result is a particular case of Theorem 12.7.1 from [46].


Furthermore, denote

X
τ (K) := N22 (Ck ) + (ζ(2) − 1)n
k=1

and
ψ(K, t) := τ (K) + Re T race [e2it (C12 − C2 /2)] (t ∈ [0, 2π))
where

X 1
ζ(z) := (Re z > 1)
kz
k=1

is the Riemann zeta function.


Theorem 5.2.2. Let conditions (1.2) and (1.3) hold. Then for any t ∈ [0, 2π) the
relations
X∞ ∞ 
X 2
1 eit
τ (K) − = ψ(K, t) − 2 Re ≥0
|zk (K)|2 zk (K)
k=1 k=1

are valid.
This theorem is a particular case of Theorem 12.4.1 from [46].
Note that
\[ \psi(K,\pi/2) = \tau(K) - \mathrm{Re}\,\mathrm{Trace}\,\Big(C_1^2 - \frac{1}{2}C_2\Big) \]
and
\[ \psi(K,0) = \tau(K) + \mathrm{Re}\,\mathrm{Trace}\,\Big(C_1^2 - \frac{1}{2}C_2\Big). \]
Now Theorem 5.2.2 yields the following result.

Corollary 5.2.3. Let conditions (1.2) and (1.3) hold. Then
\[ \tau(K) - \sum_{k=1}^\infty\frac{1}{|z_k(K)|^2} = \psi(K,\pi/2) - 2\sum_{k=1}^\infty\Big(\mathrm{Im}\,\frac{1}{z_k(K)}\Big)^2 = \psi(K,0) - 2\sum_{k=1}^\infty\Big(\mathrm{Re}\,\frac{1}{z_k(K)}\Big)^2 \ge 0. \]
Consequently,
\[ \sum_{k=1}^\infty\frac{1}{|z_k(K)|^2} \le \tau(K), \]
\[ 2\sum_{k=1}^\infty\Big(\mathrm{Im}\,\frac{1}{z_k(K)}\Big)^2 \le \psi(K,\pi/2) \]
and
\[ 2\sum_{k=1}^\infty\Big(\mathrm{Re}\,\frac{1}{z_k(K)}\Big)^2 \le \psi(K,0). \]

5.3 Multiplicative representations of characteristic functions

Let $A$ be an $n\times n$-matrix and $E_k$ $(k = 1,\dots,n)$ be the maximal chain of the invariant orthogonal projections of $A$. That is,
\[ 0 = E_0 \subset E_1 \subset \dots \subset E_n = I \]
and
\[ AE_k = E_kAE_k \quad (k = 1,\dots,n). \tag{3.1} \]
Besides, $\Delta E_k = E_k - E_{k-1}$ $(k = 1,\dots,n)$ are one-dimensional. Again set
\[ \prod_{1\le k\le m}^{\rightarrow} X_k := X_1X_2\cdots X_m \]
for matrices $X_1, X_2, \dots, X_m$. As shown in Subsection 2.2.2,
\[ (I-A)^{-1} = \prod_{1\le k\le n}^{\rightarrow}\Big(I + \frac{A\Delta E_k}{1-\lambda_k(A)}\Big), \tag{3.2} \]
provided $I-A$ is invertible.

Furthermore, for each fixed $z\in\mathbb{C}$, $K(z)$ possesses the maximal chain of invariant orthogonal projections, which we denote by $E_k(K,z)$:
\[ 0 = E_0(K,z) \subset E_1(K,z) \subset \dots \subset E_n(K,z) = I \]
and
\[ K(z)E_k(K,z) = E_k(K,z)K(z)E_k(K,z) \quad (k = 1,\dots,n). \]
Moreover,
\[ \Delta E_k(K,z) = E_k(K,z) - E_{k-1}(K,z) \quad (k = 1,\dots,n) \]
are one-dimensional orthogonal projections.

Write $K(z) = zI - B(z)$ with
\[ B(z) = \int_0^\eta e^{-zs}\,dR_0(s). \]
Then $K(z) = z(I - \frac{1}{z}B(z))$. Now, by (3.2) with $\frac{1}{z}B(z)$ in place of $A$, we get our next result.

Theorem 5.3.1. For any regular point $z \ne 0$ of $K$, the equality
\[ K^{-1}(z) = \frac{1}{z}\prod_{1\le k\le n}^{\rightarrow}\Big(I + \frac{B(z)\Delta E_k(z)}{\lambda_k(K(z))}\Big) \]
is true.
Furthermore, let
\[ A = D + V \quad (\sigma(A) = \sigma(D)) \tag{3.3} \]
be the Schur triangular representation of $A$ (see Section 2.2). Namely, $V$ is the nilpotent part of $A$ and
\[ D = \sum_{k=1}^n\lambda_k(A)\Delta E_k \]
is its diagonal part. Besides, $VE_k = E_kVE_k$ $(k = 1,\dots,n)$. Let us use the equality
\[ A^{-1} = D^{-1}\prod_{2\le k\le n}^{\rightarrow}\Big(I - \frac{V\Delta E_k}{\lambda_k(A)}\Big), \]
valid for any non-singular matrix $A$ (see Section 2.2). Now replace $A$ by $K(z)$ and denote by $\tilde D_K(z)$ and $\tilde V_K(z)$ the diagonal and nilpotent parts of $K(z)$, respectively. Then the previous equality at once yields the following result.

Theorem 5.3.2. For any regular point $z$ of $K$, the equality
\[ K^{-1}(z) = \tilde D_K^{-1}(z)\prod_{2\le k\le n}^{\rightarrow}\Big[I - \frac{\tilde V_K(z)\Delta E_k(z)}{\lambda_k(K(z))}\Big] \]
is true.
5.4 Perturbations of characteristic values

Let
\[ K(z) = zI - \int_0^\eta e^{-zs}\,dR_0(s) \tag{4.1a} \]
and
\[ \tilde K(z) = zI - \int_0^\eta e^{-zs}\,d\tilde R(s), \tag{4.1b} \]
where $R_0$ and $\tilde R$ are $n\times n$-matrix-valued functions defined on $[0,\eta]$ whose entries have bounded variations. Enumerate the characteristic values $z_k(K)$ and $z_k(\tilde K)$ of $K$ and $\tilde K$, respectively, with their multiplicities in the nondecreasing order of their moduli.

The aim of this section is to estimate the quantity
\[ rv_K(\tilde K) = \max_{j=1,2,\dots}\ \min_{k=1,2,\dots}\Big|\frac{1}{z_k(K)} - \frac{1}{z_j(\tilde K)}\Big|, \]
which will be called the relative variation of the characteristic values of $\tilde K$ with respect to the characteristic values of $K$. Assume that
\[ R_0(\eta)\ \text{and}\ \tilde R(\eta)\ \text{are invertible}, \tag{4.2} \]
and
\[ \eta < 1. \tag{4.3} \]
As was shown in Section 5.1, the latter condition does not affect the generality. Again put
\[ K_1(z) = -R_0^{-1}(\eta)K(z) \quad\text{and, in addition,}\quad \tilde K_1(z) = -\tilde R^{-1}(\eta)\tilde K(z). \]
So
\[ K_1(z) = \sum_{k=0}^\infty\frac{z^k}{k!}C_k \quad\text{and}\quad \tilde K_1(z) = \sum_{k=0}^\infty\frac{z^k}{k!}\tilde C_k, \tag{4.4} \]
where
\[ C_k = (-1)^kR_0^{-1}(\eta)\int_0^\eta s^k\,dR_0(s) \quad\text{and}\quad \tilde C_k = (-1)^k\tilde R^{-1}(\eta)\int_0^\eta s^k\,d\tilde R(s) \quad (k \ge 2); \]
\[ C_0 = \tilde C_0 = I; \]
\[ C_1 = -R_0^{-1}(\eta)\Big(I + \int_0^\eta s\,dR_0(s)\Big) \quad\text{and}\quad \tilde C_1 = -\tilde R^{-1}(\eta)\Big(I + \int_0^\eta s\,d\tilde R(s)\Big). \]
As shown in Section 5.1, condition (4.3) implies the inequalities
\[ \sum_{k=0}^\infty\|C_k\|^2 < \infty \quad\text{and}\quad \sum_{k=0}^\infty\|\tilde C_k\|^2 < \infty. \tag{4.5} \]
Recall that
\[ \Psi_K := \Big[\sum_{k=1}^\infty C_kC_k^*\Big]^{1/2} \]
and put
\[ w(K) := 2N_2(\Psi_K) + 2[n(\zeta(2)-1)]^{1/2}, \]
where $\zeta(\cdot)$ is again the Riemann zeta function. Denote also
\[ \xi(K,s) := \frac{1}{s}\exp\Big[\frac{1}{2} + \frac{w^2(K)}{2s^2}\Big] \quad (s > 0) \tag{4.6} \]
and
\[ q = \Big[\sum_{k=1}^\infty\|\tilde C_k - C_k\|^2\Big]^{1/2}. \]

Theorem 5.4.1. Let conditions (4.2) and (4.3) hold. Then
\[ rv_K(\tilde K) \le r(K,q), \]
where $r(K,q)$ is the unique positive root of the equation
\[ q\,\xi(K,s) = 1. \tag{4.7} \]
This result is a particular case of Theorem 12.5.1 from [46]. If we substitute $y = xw(K)$ into (4.7) and apply Lemma 1.6.4 from [46], we get $r(K,q) \le \delta(K,q)$, where
\[ \delta(K,q) := \begin{cases} eq & \text{if } w(K) \le eq,\\ w(K)[\ln(w(K)/q)]^{-1/2} & \text{if } w(K) > eq.\end{cases} \]
Therefore, $rv_K(\tilde K) \le \delta(K,q)$. Put
\[ W_j = \Big\{z\in\mathbb{C}: q\,\xi\Big(K,\Big|\frac{1}{z} - \frac{1}{z_j(K)}\Big|\Big) \ge 1\Big\} \quad (j = 1, 2, \dots). \]
Since $\xi(K,y)$ is a monotone decreasing function with respect to $y > 0$, Theorem 5.4.1 yields the following result.

Corollary 5.4.2. Under the hypothesis of Theorem 5.4.1, all the characteristic values of $\tilde K$ lie in the set $\cup_{j=1}^\infty W_j$.
Let us apply the Borel (Laplace) transform. Namely, put
\[ F(u) = \int_0^\infty e^{-ut}(\tilde K_1(t) - K_1(t))\,dt \quad (u\in\mathbb{C};\ |u| = 1). \]
Then
\[ F(u) = \sum_{k=0}^\infty\frac{1}{u^{k+1}}(\tilde C_k - C_k). \]
Therefore,
\[ \frac{1}{2\pi}\int_0^{2\pi}F^*(e^{-is})F(e^{is})\,ds = \sum_{k=0}^\infty(\tilde C_k^* - C_k^*)(\tilde C_k - C_k). \]
Thus,
\[ \frac{1}{2\pi}\int_0^{2\pi}\mathrm{Trace}\,F^*(e^{-is})F(e^{is})\,ds = \sum_{k=0}^\infty\mathrm{Trace}\,(\tilde C_k^* - C_k^*)(\tilde C_k - C_k), \]
or
\[ q^2 \le \sum_{k=0}^\infty N_2^2(\tilde C_k - C_k) = \frac{1}{2\pi}\int_0^{2\pi}N_2^2(F(e^{is}))\,ds. \]
This forces
\[ q^2 \le \frac{1}{2\pi}\int_0^{2\pi}N_2^2(F(e^{is}))\,ds \le \sup_{|z|=1}N_2^2(F(z)). \]

5.5 Perturbations of characteristic determinants

In the present section we investigate perturbations of characteristic determinants. Besides, $K$ and $\tilde K$ are again defined by (4.1).

Lemma 5.5.1. The equality
\[ \inf_{-\infty\le\omega\le\infty}|\det K(i\omega)| = \inf_{-2\mathrm{var}(R_0)\le\omega\le 2\mathrm{var}(R_0)}|\det K(i\omega)| \]
is valid. Moreover,
\[ \inf_{-\infty\le\omega\le\infty}|\det K(i\omega)| \le |\det R_0(\eta)|. \]

Proof. Put $d(s) = |\det K(is)|$. Clearly, $d(0) = |\det R_0(\eta)|$ and
\[ \det K(z) = \prod_{k=1}^n\lambda_k(zI - B(z)) = \prod_{k=1}^n(z - \lambda_k(B(z))). \]
In addition,
\[ |\lambda_k(B(is))| \le \|B(is)\| \le \mathrm{var}(R_0), \]
and thus
\[ d(s) \ge ||s| - \mathrm{var}(R_0)|^n \ge (\mathrm{var}(R_0))^n \quad (|s| \ge 2\mathrm{var}(R_0)). \]
We conclude that the minimum of $d(s)$ is attained inside the segment $[-2\mathrm{var}(R_0), 2\mathrm{var}(R_0)]$, as claimed. □

Furthermore, let $A$ and $B$ be $n\times n$-matrices. As shown in Section 1.11,
\[ |\det A - \det B| \le \|A-B\|\Big(1 + \frac{1}{2}\|A-B\| + \frac{1}{2}\|A+B\|\Big)^n. \]
Hence we get the following corollary.

Corollary 5.5.2. The inequality
\[ |\det K(z) - \det\tilde K(z)| \le \|K(z)-\tilde K(z)\|\Big(1 + \frac{1}{2}\|K(z)-\tilde K(z)\| + \frac{1}{2}\|K(z)+\tilde K(z)\|\Big)^n \]
holds.
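The determinant perturbation inequality from Section 1.11 is easy to test numerically. The sketch below evaluates both sides for two fixed sample $2\times 2$ matrices (chosen arbitrarily for the check), computing the spectral norm via the eigenvalues of $M^TM$.

```python
import math

def spec_norm2(M):
    """Spectral norm of a real 2x2 matrix via eigenvalues of M^T M."""
    (a, b), (c, d) = M
    g11, g22, g12 = a*a + c*c, b*b + d*d, a*b + c*d
    tr, det = g11 + g22, g11 * g22 - g12 * g12
    return math.sqrt((tr + math.sqrt(max(tr * tr - 4 * det, 0.0))) / 2)

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Two sample matrices (an assumption chosen just for the check):
A = [[2.0, 1.0], [0.0, 1.0]]
B = [[2.0, 0.9], [0.1, 1.1]]
n = 2

diff = [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]
summ = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

lhs = abs(det2(A) - det2(B))
rhs = spec_norm2(diff) * (1 + 0.5 * spec_norm2(diff)
                          + 0.5 * spec_norm2(summ)) ** n
print(lhs, rhs)
```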
Simple calculations show that
\[ \|K(i\omega)\| \le |\omega| + \mathrm{var}(R_0), \quad \|K(i\omega)+\tilde K(i\omega)\| \le 2|\omega| + \mathrm{var}(R_0) + \mathrm{var}(\tilde R) \]
and
\[ \|K(i\omega)-\tilde K(i\omega)\| \le \mathrm{var}(R_0 - \tilde R) \quad (\omega\in\mathbb{R}). \]
We thus arrive at the following corollary.

Corollary 5.5.3. The inequality
\[ |\det K(i\omega) - \det\tilde K(i\omega)| \le \mathrm{var}(R_0-\tilde R)\Big(1 + |\omega| + \frac{1}{2}(\mathrm{var}(R_0+\tilde R) + \mathrm{var}(R_0-\tilde R))\Big)^n \]
is valid.

In particular,
\[ |\det\tilde K(i\omega)| \ge |\det K(i\omega)| - \mathrm{var}(R_0-\tilde R)\Big(1 + |\omega| + \frac{1}{2}(\mathrm{var}(R_0+\tilde R) + \mathrm{var}(R_0-\tilde R))\Big)^n. \]
Now simple calculations and Lemma 5.5.1 imply the following result.
Theorem 5.5.4. Let all the zeros of $\det K(z)$ be in $C_-$ and
\[ J_1 := \inf_{|\omega|\le 2\mathrm{var}(R_0)}|\det K(i\omega)| - \mathrm{var}(R_0-\tilde R)\Big(1 + 2\mathrm{var}(\tilde R) + \frac{1}{2}(\mathrm{var}(R_0+\tilde R) + \mathrm{var}(R_0-\tilde R))\Big)^n > 0. \]
Then all the zeros of $\det\tilde K(z)$ are also in $C_-$ and
\[ \inf_{\omega\in\mathbb{R}}|\det\tilde K(i\omega)| \ge J_1. \]
Note that
\[ J_1 \ge \inf_{|\omega|\le 2\mathrm{var}(R_0)}|\det K(i\omega)| - \mathrm{var}(R_0-\tilde R)[1 + 3\mathrm{var}(\tilde R) + \mathrm{var}(R_0)]^n. \]

Furthermore, let us establish a result similar to Theorem 5.5.4, but in terms of the norm $N_2(\cdot)$. Again let $A$ and $B$ be $n\times n$-matrices. Then, as shown in Section 1.11,
\[ |\det A - \det B| \le \frac{1}{n^{n/2}}N_2(A-B)\Big(1 + \frac{1}{2}N_2(A-B) + \frac{1}{2}N_2(A+B)\Big)^n. \]
Now at once we get our next result.

Lemma 5.5.5. The inequality
\[ |\det K(z) - \det\tilde K(z)| \le \frac{1}{n^{n/2}}N_2(K(z)-\tilde K(z))\Big(1 + \frac{1}{2}N_2(K(z)-\tilde K(z)) + \frac{1}{2}N_2(K(z)+\tilde K(z))\Big)^n \]
is valid.

Note that the previous lemma implies the inequality
\[ |\det K(z) - \det\tilde K(z)| \le \frac{1}{n^{n/2}}N_2(K(z)-\tilde K(z))\Big[1 + N_2(K(z)) + N_2(\tilde K(z))\Big]^n. \]
To illustrate the results obtained in this section, consider the following matrix-valued functions:
\[ K(z) = zI - \int_0^\eta e^{-sz}A(s)\,ds - \sum_{k=0}^m e^{-h_kz}A_k \tag{5.2a} \]
and
\[ \tilde K(z) = zI - \int_0^\eta e^{-sz}\tilde A(s)\,ds - \sum_{k=0}^m e^{-h_kz}\tilde A_k, \tag{5.2b} \]
where $A(s)$ and $\tilde A(s)$ are integrable matrix functions, $A_k$ and $\tilde A_k$ are constant matrices, and the $h_k$ are positive constants. Put
\[ Var_2(R_0) = \int_0^\eta N_2(A(s))\,ds + \sum_{k=0}^m N_2(A_k), \]
\[ Var_2(R_0\pm\tilde R) = \int_0^\eta N_2(A(s)\pm\tilde A(s))\,ds + \sum_{k=0}^m N_2(A_k\pm\tilde A_k). \]
We have
\[ N_2(K(i\omega)) \le \sqrt n\,|\omega| + Var_2(R_0), \quad N_2(K(i\omega)-\tilde K(i\omega)) \le Var_2(R_0-\tilde R) \]
and
\[ N_2(K(i\omega)+\tilde K(i\omega)) \le 2\sqrt n\,|\omega| + Var_2(R_0+\tilde R) \quad (\omega\in\mathbb{R}). \]
We thus arrive at the following corollary.

Corollary 5.5.6. Let $K$ and $\tilde K$ be defined by (5.2). Then the inequality
\[ |\det K(i\omega) - \det\tilde K(i\omega)| \le \frac{1}{n^{n/2}}Var_2(R_0-\tilde R)\Big(1 + \sqrt n\,|\omega| + \frac{1}{2}(Var_2(R_0+\tilde R) + Var_2(R_0-\tilde R))\Big)^n \]
is true.

In particular,
\[ |\det\tilde K(i\omega)| \ge |\det K(i\omega)| - \frac{1}{n^{n/2}}Var_2(R_0-\tilde R)\Big[1 + \sqrt n\,|\omega| + \frac{1}{2}(Var_2(R_0+\tilde R) + Var_2(R_0-\tilde R))\Big]^n \quad (\omega\in\mathbb{R}). \]
Hence, making use of Lemma 5.5.1 and taking into account that $\mathrm{var}(R_0) \le Var_2(R_0)$, we easily get our next result.

Corollary 5.5.7. Let $K$ and $\tilde K$ be defined by (5.2). Let all the zeros of $\det K(z)$ be in $C_-$ and
\[ J_0 := \inf_{|\omega|\le 2Var_2(R_0)}|\det K(i\omega)| - \frac{1}{n^{n/2}}Var_2(R_0-\tilde R)\Big(1 + 2\sqrt n\,Var_2(\tilde R) + \frac{1}{2}(Var_2(R_0+\tilde R) + Var_2(R_0-\tilde R))\Big)^n > 0. \]
Then all the zeros of $\det\tilde K(z)$ are also in $C_-$ and
\[ \inf_{\omega\in\mathbb{R}}|\det\tilde K(i\omega)| \ge J_0. \]
5.6 Approximations by polynomial pencils

Let (1.2) hold, and let $K$ and $K_1$ be defined as in Section 5.1. Let us consider the approximation of the function $K_1$ by the polynomial pencil
\[ h_m(\lambda) = \sum_{k=0}^m\frac{C_k\lambda^k}{k!}. \]
Put
\[ q_m(K) := \Big[\sum_{k=m+1}^\infty\|C_k\|^2\Big]^{1/2} \]
and
\[ w(h_m) = 2N_2\Big(\Big[\sum_{k=1}^m C_kC_k^*\Big]^{1/2}\Big) + 2[n(\zeta(2)-1)]^{1/2}. \]
Define $\xi(h_m,s)$ by (4.6) with $h_m$ instead of $K$. The following result follows at once from [46, Corollary 12.5.3].

Corollary 5.6.1. Let conditions (1.2) and (1.3) hold, and let $r_m(K)$ be the unique positive root of the equation
\[ q_m(K)\,\xi(h_m,y) = 1. \]
Then either, for any characteristic value $z(K)$ of $K$, there is a characteristic value $z(h_m)$ of the polynomial pencil $h_m$ such that
\[ \Big|\frac{1}{z(K)} - \frac{1}{z(h_m)}\Big| \le r_m(K), \]
or
\[ |z(K)| \ge \frac{1}{r_m(K)}. \]

5.7 Convex functions of characteristic values

We need the following classical result.

Lemma 5.7.1. Let $\phi(x)$ $(0 \le x \le \infty)$ be a convex continuous function such that $\phi(0) = 0$, and let
\[ a_j,\ b_j \quad (j = 1, 2, \dots, l \le \infty) \]
be two non-increasing sequences of real numbers such that
\[ \sum_{k=1}^j a_k \le \sum_{k=1}^j b_k \quad (j = 1, 2, \dots, l). \]
Then
\[ \sum_{k=1}^j\phi(a_k) \le \sum_{k=1}^j\phi(b_k) \quad (j = 1, 2, \dots, l). \]
For the proof see, for instance, [65, Lemma II.3.4] or [64, p. 53]. Put
\[ \chi_k = \omega_k(K) + \frac{n}{k+n} \quad (k = 1, 2, \dots). \]
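Lemma 5.7.1 is easy to check on concrete data. The sketch below uses $\phi(x) = x^2$ (convex, $\phi(0) = 0$) and two assumed non-increasing sequences whose partial sums are dominated term by term.

```python
# A quick check of Lemma 5.7.1 with phi(x) = x**2 and two sample
# non-increasing sequences (assumed values) with dominated partial sums.
a = [3.0, 2.0, 1.0]
b = [4.0, 2.5, 1.5]

phi = lambda x: x * x
for j in range(1, len(a) + 1):
    assert sum(a[:j]) <= sum(b[:j])                       # the hypothesis
    assert sum(map(phi, a[:j])) <= sum(map(phi, b[:j]))   # the conclusion
print("majorization inequality verified")
```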
The following result is due to the previous lemma and Theorem 5.1.2.

Corollary 5.7.2. Let $\phi(t)$ $(0 \le t < \infty)$ be a continuous convex scalar-valued function such that $\phi(0) = 0$, and let conditions (1.2) and (1.3) hold. Then the inequalities
\[ \sum_{k=1}^j\phi\Big(\frac{1}{|z_k(K)|}\Big) < \sum_{k=1}^j\phi(\chi_k) \quad (j = 1, 2, \dots) \]
are valid. In particular, for any $p > 1$,
\[ \sum_{k=1}^j\frac{1}{|z_k(K)|^p} < \sum_{k=1}^j\chi_k^p \]
and thus
\[ \Big[\sum_{k=1}^j\frac{1}{|z_k(K)|^p}\Big]^{1/p} < \Big[\sum_{k=1}^j\omega_k^p(K)\Big]^{1/p} + n\Big[\sum_{k=1}^j\frac{1}{(k+n)^p}\Big]^{1/p} \quad (j = 1, 2, \dots). \tag{7.1} \]
For any $p > 1$ we have
\[ \zeta_n(p) := \sum_{k=1}^\infty\frac{1}{(k+n)^p} < \infty. \]
Relation (7.1) with the notation
\[ N_p(\Psi_K) = \Big[\sum_{k=1}^n\lambda_k^p(\Psi_K)\Big]^{1/p} \]
yields our next result.

Corollary 5.7.3. Let conditions (1.2) and (1.3) hold. Then
\[ \Big(\sum_{k=1}^\infty\frac{1}{|z_k(K)|^p}\Big)^{1/p} < N_p(\Psi_K) + n\zeta_n^{1/p}(p) \quad (p > 1). \]
The next result is also well known, cf. [65, Chapter II], [64, p. 53].

Lemma 5.7.4. Let a scalar-valued function $\Phi(t_1, t_2, \dots, t_j)$ with an integer $j$ be defined on the domain
\[ 0 < t_j \le t_{j-1} \le \dots \le t_2 \le t_1 < \infty \]
and have continuous partial derivatives satisfying the condition
\[ \frac{\partial\Phi}{\partial t_1} > \frac{\partial\Phi}{\partial t_2} > \dots > \frac{\partial\Phi}{\partial t_j} > 0 \quad\text{for}\quad t_1 > t_2 > \dots > t_j, \tag{7.2} \]
and let $a_k, b_k$ $(k = 1, 2, \dots, j)$ be two nonincreasing sequences of real numbers satisfying the condition
\[ \sum_{k=1}^m a_k \le \sum_{k=1}^m b_k \quad (m = 1, 2, \dots, j). \]
Then $\Phi(a_1, \dots, a_j) \le \Phi(b_1, \dots, b_j)$.

This lemma and Theorem 5.1.2 immediately imply our next result.

Corollary 5.7.5. Under the hypothesis of Theorem 5.1.2, let condition (7.2) hold. Then
\[ \Phi\Big(\frac{1}{|z_1(K)|}, \frac{1}{|z_2(K)|}, \dots, \frac{1}{|z_j(K)|}\Big) < \Phi(\chi_1, \chi_2, \dots, \chi_j). \]
In particular, let $\{d_k\}_{k=1}^\infty$ be a decreasing sequence of non-negative numbers and take
\[ \Phi(t_1, t_2, \dots, t_j) = \sum_{k=1}^j d_kt_k. \]
Then the previous corollary yields the inequalities
\[ \sum_{k=1}^j\frac{d_k}{|z_k(K)|} < \sum_{k=1}^j\chi_kd_k = \sum_{k=1}^j d_k\Big(\omega_k(K) + \frac{n}{k+n}\Big) \quad (j = 1, 2, \dots). \]

5.8 Comments

The material of this chapter is adapted from Chapter 12 of the book [46]. Relevant results in a more general situation can be found in [32, 33] and [35].
Chapter 6

Equations Close to Autonomous and Ordinary Differential Ones

In this chapter we establish explicit stability conditions for linear time-variant systems with delay "close" to ordinary differential systems, and for systems with small delays. We also investigate perturbations of autonomous equations.

6.1 Equations "close" to ordinary differential ones

Let $R(t,\tau) = (r_{jk}(t,\tau))_{j,k=1}^n$ be an $n\times n$-matrix-valued function defined on $[0,\infty)\times[0,\eta]$ $(0 < \eta < \infty)$, which is piecewise continuous in $t$ for each $\tau$ and whose entries have bounded variations in $\tau$. Consider the equation
\[ \dot y(t) = A(t)y(t) + \int_0^\eta d_\tau R(t,\tau)\,y(t-\tau) \quad (t \ge 0), \tag{1.1} \]
where $A(t)$ is a piecewise continuous matrix-valued function. In this chapter, again, $C(\Omega) = C(\Omega,\mathbb{C}^n)$ and $L^p(\Omega) = L^p(\Omega,\mathbb{C}^n)$ $(p \ge 1)$ are the spaces of vector-valued functions.

Introduce in $L^1(-\eta,\infty)$ the operator
\[ Ew(t) = \int_0^\eta d_\tau R(t,\tau)\,w(t-\tau). \]
It is assumed that
\[ v_{jk} = \sup_{t\ge 0}\mathrm{var}(r_{jk}(t,\cdot)) < \infty. \tag{1.2} \]
By Lemma 1.12.3, there is a constant
\[ q_1 \le \Big[\sum_{j=1}^n\sum_{k=1}^n v_{jk}^2\Big]^{1/2} \]
such that
\[ \|Ew\|_{L^1(0,\infty)} \le q_1\|w\|_{L^1(-\eta,\infty)} \quad (w\in L^1(-\eta,\infty)). \tag{1.3} \]
For instance, suppose (1.1) takes the form
\[ \dot y(t) = A(t)y + \int_0^\eta B(t,s)y(t-s)\,ds + \sum_{k=0}^m B_k(t)y(t-\tau_k) \quad (t \ge 0;\ m < \infty), \tag{1.4} \]
where $0 \le \tau_0 < \tau_1 < \dots < \tau_m \le \eta$ are constants, the $B_k(t)$ are piecewise continuous matrices and $B(t,s)$ is a matrix function Lebesgue integrable in $s$ on $[0,\eta]$. Then (1.3) holds, provided
\[ \hat q_1 := \sup_{t\ge 0}\Big(\int_0^\eta\|B(t,s)\|_n\,ds + \sum_{k=0}^m\|B_k(t)\|_n\Big) < \infty. \]
Here and below in this chapter, $\|A\|_n$ is the spectral norm of an $n\times n$-matrix $A$. Moreover, we have
\[ \int_0^\infty\|Ef(t)\|_n\,dt \le \int_0^\infty\Big(\int_0^\eta\|B(t,s)f(t-s)\|_n\,ds + \sum_{k=0}^m\|B_k(t)f(t-\tau_k)\|_n\Big)dt \]
\[ \le \int_0^\eta\sup_{\tau\ge 0}\|B(\tau,s)\|_n\int_0^\infty\|f(t-s)\|_n\,dt\,ds + \sum_{k=0}^m\sup_{\tau\ge 0}\|B_k(\tau)\|_n\int_0^\infty\|f(t-\tau_k)\|_n\,dt. \]
But
\[ \int_0^\infty\|f(t-s)\|_n\,dt \le \int_{-\eta}^\infty\|f(t)\|_n\,dt \quad (0 \le s \le \eta). \]
Thus, in the case of equation (1.4), condition (1.3) holds with $q_1 = \hat q_1$.
Theorem 6.1.1. Let condition (1.3) hold and let the evolution operator $U(t,s)$ $(t \ge s \ge 0)$ of the equation
\[ \dot y = A(t)y \quad (t > 0) \tag{1.5} \]
satisfy the inequality
\[ \nu_1 := \sup_{s\ge 0}\int_s^\infty\|U(t,s)\|_n\,dt < \frac{1}{q_1}. \tag{1.6} \]
Then equation (1.1) is exponentially stable.

To prove this theorem we need the following result.

Lemma 6.1.2. Let conditions (1.3) and (1.6) hold. Then a solution $x(t)$ of the equation
\[ \dot x(t) = A(t)x(t) + \int_0^\eta d_\tau R(t,\tau)x(t-\tau) + f(t) \tag{1.7} \]
with an $f\in L^1(0,\infty)$ and the zero initial condition
\[ x(t) = 0 \quad (t \le 0) \tag{1.8} \]
satisfies the inequality
\[ \|x\|_{L^1(0,\infty)} \le \frac{\nu_1\|f\|_{L^1(0,\infty)}}{1-\nu_1q_1}. \]
Proof. Problem (1.7), (1.8) is equivalent to the equation
\[ x(t) = \int_0^t U(t,s)(Ex(s) + f(s))\,ds. \]
So
\[ \|x(t)\|_n \le \int_0^t\|U(t,s)\|_n(\|Ex(s)\|_n + \|f(s)\|_n)\,ds. \]
Integrating this inequality, we obtain
\[ \int_0^{t_0}\|x(t)\|_n\,dt \le \int_0^{t_0}\int_0^t\|U(t,s)\|_n(\|Ex(s)\|_n + \|f(s)\|_n)\,ds\,dt \quad (0 < t_0 < \infty). \]
Take into account that
\[ \int_0^{t_0}\int_0^t\|U(t,s)\|_n\|f(s)\|_n\,ds\,dt = \int_0^{t_0}\|f(s)\|_n\int_s^{t_0}\|U(t,s)\|_n\,dt\,ds \le \nu_1\|f\|_{L^1(0,\infty)}. \]
In addition,
\[ \int_0^{t_0}\int_0^t\|U(t,s)\|_n\|Ex(s)\|_n\,ds\,dt = \int_0^{t_0}\|Ex(s)\|_n\int_s^{t_0}\|U(t,s)\|_n\,dt\,ds \le q_1\nu_1\int_0^{t_0}\|x(s)\|_n\,ds. \]
Thus,
\[ \int_0^{t_0}\|x(s)\|_n\,ds \le \nu_1q_1\int_0^{t_0}\|x(s)\|_n\,ds + \nu_1\|f\|_{L^1(0,\infty)}. \]
Hence,
\[ \int_0^{t_0}\|x(s)\|_n\,ds \le \frac{\nu_1\|f\|_{L^1(0,t_0)}}{1-\nu_1q_1}. \]
Now letting $t_0 \to \infty$, we arrive at the required result. □

The assertion of Theorem 6.1.1 is due to Theorem 3.5.1 and the previous lemma.

Now consider the operator $E$ in the space $C$. By Lemma 1.12.3, under condition (1.2) there is a constant
$$q_\infty \le \sqrt{n}\,\|(v_{jk})_{j,k=1}^n\|_n,$$
such that
$$\|Ew\|_{C(0,\infty)} \le q_\infty\|w\|_{C(-\eta,\infty)} \quad (w \in C(-\eta,\infty)). \tag{1.9}$$
For instance, if (1.1) takes the form
$$\dot y(t) = A(t)y(t) + \int_0^\eta B(t,s)y(t-s)\,ds + \sum_{k=1}^m B_k(t)y(t-h_k(t)) \quad (t \ge 0;\ m < \infty), \tag{1.10}$$
where $0 \le h_1(t), \dots, h_m(t) \le \eta$ are continuous functions, and $B(t,s)$ and $B_k(t)$ are the same as above in this section, then
$$\sup_{t\ge 0}\|Ew(t)\|_n \le \sup_{t\ge 0}\left(\int_0^\eta \|B(t,s)w(t-s)\|_n\,ds + \sum_{k=1}^m \|B_k(t)w(t-h_k(t))\|_n\right).$$
Hence, under the condition $\hat q_1 < \infty$, we easily obtain inequality (1.9) with $q_\infty = \hat q_1$.

Theorem 6.1.3. Let condition (1.9) hold and let the evolution operator $U(t,s)$ $(t \ge s \ge 0)$ of equation (1.5) satisfy the inequality
$$\nu_\infty := \sup_{t\ge 0}\int_0^t \|U(t,s)\|_n\,ds < \frac{1}{q_\infty}. \tag{1.11}$$
Then equation (1.1) is exponentially stable.

The assertion of this theorem follows from Theorem 3.4.1 and the following lemma.

Lemma 6.1.4. Let conditions (1.9) and (1.11) hold. Then a solution $x(t)$ of problem (1.7), (1.8) with an $f \in C(0,\infty)$ satisfies the inequality
$$\|x\|_{C(0,\infty)} \le \frac{\nu_\infty\|f\|_{C(0,\infty)}}{1-\nu_\infty q_\infty}.$$
The proof of this lemma is similar to the proof of Lemma 6.1.2.
Assume that
$$((A(t)+A^*(t))h,h)_{\mathbb{C}^n} \le -2\alpha(t)(h,h)_{\mathbb{C}^n} \quad (h \in \mathbb{C}^n,\ t \ge 0)$$
with a positive piecewise continuous function $\alpha(t)$ having the property
$$\hat\nu_1 := \sup_{s\ge 0}\int_s^\infty e^{-\int_s^t \alpha(t_1)\,dt_1}\,dt < \infty.$$
Then we easily have the inequality
$$\|U(t,s)\|_n \le e^{-\int_s^t \alpha(t_1)\,dt_1}$$
and $\nu_1 \le \hat\nu_1$. For instance, if $\alpha_0 := \inf_t \alpha(t) > 0$, then
$$\hat\nu_1 \le \sup_{s\ge 0}\int_s^\infty e^{-\alpha_0(t-s)}\,dt = \frac{1}{\alpha_0}.$$
Let $A(t) \equiv A_0$ be a constant matrix. Then
$$U(t,s) = e^{A_0(t-s)} \quad (t \ge s \ge 0).$$
In this case
$$\nu_1(A) = \nu_\infty(A) = \|e^{A_0 t}\|_{L^1(0,\infty)}.$$
Applying Corollary 2.5.3, we have
$$\|e^{At}\|_n \le e^{\alpha(A)t}\sum_{k=0}^{n-1}\frac{g^k(A)t^k}{(k!)^{3/2}} \quad (t \ge 0).$$
Recall that
$$g(A) = \Big(N_2^2(A) - \sum_{k=1}^n |\lambda_k(A)|^2\Big)^{1/2} \le \sqrt{2}\,N_2(A_I),$$
and $\alpha(A) = \max_k \operatorname{Re}\lambda_k(A)$; here $\lambda_k(A)$ $(k=1,\dots,n)$ are the eigenvalues of $A$, $N_2(A)$ is the Hilbert-Schmidt norm of $A$ and $A_I = (A-A^*)/2i$. Thus
$$\|e^{A_0 t}\|_{L^1(0,\infty)} \le \nu_{A_0},$$
where
$$\nu_{A_0} := \sum_{k=0}^{n-1}\frac{g^k(A_0)}{\sqrt{k!}\,|\alpha(A_0)|^{k+1}}.$$
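The quantities entering this bound are directly computable. The following sketch (a hypothetical $2\times 2$ Hurwitz matrix, not taken from the text) evaluates $g(A_0)$, $\alpha(A_0)$ and $\nu_{A_0}$, and compares $\nu_{A_0}$ with a numerical value of $\|e^{A_0 t}\|_{L^1(0,\infty)}$:

```python
import numpy as np
from math import factorial, sqrt

def g_gil(A):
    # g(A) = (N_2(A)^2 - sum_k |lambda_k(A)|^2)^{1/2}
    lam = np.linalg.eigvals(A)
    return sqrt(max(np.linalg.norm(A, 'fro')**2 - np.sum(np.abs(lam)**2), 0.0))

def nu_A0(A):
    # nu_{A0} = sum_{k=0}^{n-1} g^k(A) / (sqrt(k!) |alpha(A)|^{k+1}), alpha(A) < 0
    n = A.shape[0]
    alpha = max(np.linalg.eigvals(A).real)
    g = g_gil(A)
    return sum(g**k / (sqrt(factorial(k)) * abs(alpha)**(k + 1)) for k in range(n))

A0 = np.array([[-2.0, 1.0],
               [0.0, -3.0]])   # hypothetical Hurwitz matrix: g(A0) = 1, alpha(A0) = -2
nu = nu_A0(A0)                 # = 1/2 + 1/4 = 0.75

# crude Riemann sum for int_0^inf ||exp(A0 t)||_n dt, via eigendecomposition
w, V = np.linalg.eig(A0)
Vinv = np.linalg.inv(V)
ts = np.arange(0.0, 20.0, 1e-3)
norms = [np.linalg.norm((V * np.exp(w * t)) @ Vinv, 2) for t in ts]
l1 = sum(norms) * 1e-3
```

The computed integral stays below $\nu_{A_0} = 0.75$, consistent with the estimate $\|e^{A_0 t}\|_{L^1(0,\infty)} \le \nu_{A_0}$.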

6.2 Equations with small delays


Again consider the equation
$$\dot y(t) = \int_0^\eta d_\tau R(t,\tau)y(t-\tau) \quad (t \ge 0), \tag{2.1}$$
where $R(t,\tau) = (r_{jk}(t,\tau))$ is the same as in the previous section. In particular, condition (1.2) holds.

For instance, (2.1) can take the form
$$\dot y(t) = \int_0^\eta B(t,s)y(t-s)\,ds + \sum_{k=0}^m B_k(t)y(t-\tau_k) \quad (t \ge 0;\ m < \infty), \tag{2.2}$$
where $B(t,s)$, $B_k(t)$ and $\tau_k$ are the same as in the previous section. In $C(-\eta,\infty)$ introduce the operator
$$\hat E_d f(t) := \int_0^\eta d_\tau R(t,\tau)[f(t-\tau)-f(t)].$$
Assume that there is a constant $\tilde V_d(R)$, such that
$$\|\hat E_d f\|_{C(0,T)} \le \tilde V_d(R)\|\dot f\|_{C(-\eta,T)} \quad (T > 0) \tag{2.3}$$
for any $f \in C(-\eta,T)$ with $\dot f \in C(-\eta,T)$. It is not hard to show that condition (1.2) implies (2.3).

In the case of equation (2.2) we have
$$\hat E_d f(t) = \int_0^\eta B(t,\tau)[f(t-\tau)-f(t)]\,d\tau + \sum_{k=0}^m B_k(t)[f(t-\tau_k)-f(t)].$$
Since
$$\|f(t-\tau)-f(t)\|_{C(0,T)} \le \tau\|\dot f\|_{C(-\eta,T)},$$
we easily obtain
$$\|\hat E_d f\|_{C(0,T)} \le \sup_t\left(\int_0^\eta \tau\|B(t,\tau)\|_n\,d\tau + \sum_{k=0}^m \tau_k\|B_k(t)\|_n\right)\|\dot f\|_{C(-\eta,T)}.$$
That is, for equation (2.2), condition (2.3) holds with
$$\tilde V_d(R) \le \sup_t\left(\int_0^\eta \tau\|B(t,\tau)\|_n\,d\tau + \sum_{k=0}^m \tau_k\|B_k(t)\|_n\right). \tag{2.4}$$
Put
$$A(t) = R(t,\eta) - R(t,0) \tag{2.5}$$
and assume that equation (1.5) is asymptotically stable. Recall that $U(t,s)$ is the evolution operator of the ordinary differential equation (1.5).

Theorem 6.2.1. Under conditions (1.2) and (2.3), let $A(t)$ be defined by (2.5) and
$$\psi_R := \tilde V_d(R)\left(\sup_{t\ge 0}\int_0^t \|A(t)U(t,s)\|_n\,ds + 1\right) < 1. \tag{2.6}$$
Then equation (2.1) is exponentially stable.
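For orientation, condition (2.6) is easy to evaluate. The sketch below uses a hypothetical scalar example $\dot y(t) = -a\,y(t-\tau(t))$, for which $A = -a$ by (2.5), $U(t,s) = e^{-a(t-s)}$, and $\tilde V_d(R) \le a\tau$ by (2.4); all numbers are illustrative assumptions:

```python
import numpy as np

a, tau = 1.0, 0.3                  # hypothetical data: y'(t) = -a*y(t - tau)
V_d = a * tau                      # bound (2.4) for the single-delay scalar case
# sup_{t>=0} int_0^t |A(t) U(t,s)| ds = sup_t (1 - e^{-a t}):
ts = np.linspace(0.0, 50.0, 5001)
sup_int = max(1.0 - np.exp(-a * t) for t in ts)
psi_R = V_d * (sup_int + 1.0)      # quantity in condition (2.6)
stable = psi_R < 1.0
```

With $a\tau = 0.3$ one gets $\psi_R \approx 0.6 < 1$, so Theorem 6.2.1 applies; for $a\tau \ge 1/2$ the test fails and gives no information.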


Proof. We need the non-homogeneous problem
$$\dot x(t) = Ex(t) + f(t) \quad (t \ge 0;\ f \in C(0,\infty)) \tag{2.7}$$
with the zero initial condition
$$x(t) = 0, \quad t \le 0.$$
Observe that
$$\int_0^\eta d_\tau R(t,\tau)x(t-\tau) = \int_0^\eta d_\tau R(t,\tau)x(t) + \int_0^\eta d_\tau R(t,\tau)(x(t-\tau)-x(t)) =$$
$$(R(t,\eta)-R(t,0))x(t) + \int_0^\eta d_\tau R(t,\tau)(x(t-\tau)-x(t)) = A(t)x(t) + \hat E_d x(t).$$
So we can rewrite equation (2.7) as
$$\dot x(t) = A(t)x(t) + \hat E_d x(t) + f(t) \quad (t \ge 0). \tag{2.8}$$
Consequently,
$$x(t) = \int_0^t U(t,s)\hat E_d x(s)\,ds + f_1(t), \tag{2.9}$$
where
$$f_1(t) = \int_0^t U(t,s)f(s)\,ds.$$
Differentiating (2.9), we get
$$\dot x(t) = \int_0^t A(t)U(t,s)\hat E_d x(s)\,ds + \hat E_d x(t) + A(t)f_1(t) + f(t).$$
For brevity put $|x|_T = \|x\|_{C(0,T)}$ for a finite $T$. By condition (2.3) we have
$$|\dot x|_T \le c_0 + \tilde V_d(R)|\dot x|_T\left(\sup_{t\ge 0}\int_0^t \|A(t)U(t,s)\|_n\,ds + 1\right),$$
where
$$c_0 := \|A(\cdot)f_1\|_{C(0,\infty)} + \|f\|_{C(0,\infty)}.$$
So
$$|\dot x|_T \le c_0 + \psi_R|\dot x|_T.$$
Now condition (2.6) implies
$$|\dot x|_T \le \frac{c_0}{1-\psi_R}.$$
Letting $T \to \infty$, we can assert that $\dot x \in C(0,\infty)$. Hence, due to (2.3) and (2.9) it follows that $x(t)$ is bounded, since
$$\|x\|_{C(0,\infty)} \le \|f_1\|_{C(0,\infty)} + \tilde V_d(R)\|\dot x\|_{C(0,\infty)}\sup_{t\ge 0}\int_0^t \|U(t,s)\|_n\,ds.$$
Now the required result is due to Theorem 3.4.1. $\square$

6.3 Nonautonomous systems "close" to autonomous ones

In the present section we consider the vector equation
$$\dot y(t) = \int_0^\eta dR_0(\tau)y(t-\tau) + \int_0^\eta d_\tau R(t,\tau)y(t-\tau) \quad (t \ge 0), \tag{3.1}$$
where $R(t,\tau)$ is the same as in Section 6.1, and $R_0(\tau) = (r^{(0)}_{jk}(\tau))_{j,k=1}^n$ is an $n\times n$-matrix-valued function defined on $[0,\eta]$, whose entries have bounded variations. Recall that $\mathrm{var}(R_0)$ is the spectral norm of the matrix
$$\big(\mathrm{var}(r^{(0)}_{jk})\big)_{j,k=1}^n$$
(see Section 1.12). Again use the operator $Ef(t) = \int_0^\eta d_\tau R(t,\tau)f(t-\tau)$, considering it in the space $L^2(-\eta,\infty)$.

Under condition (1.2), due to Lemma 1.12.3, there is a constant
$$q_2 \le \|(v_{jk})_{j,k=1}^n\|_n,$$
such that
$$\|Ef\|_{L^2(0,\infty)} \le q_2\|f\|_{L^2(-\eta,\infty)} \quad (f \in L^2(-\eta,\infty)). \tag{3.2}$$
Again
$$K(z) = zI - \int_0^\eta \exp(-zs)\,dR_0(s) \quad (z \in \mathbb{C})$$
is the characteristic matrix of the autonomous equation
$$\dot y(t) = \int_0^\eta dR_0(\tau)y(t-\tau) \quad (t \ge 0). \tag{3.3}$$

As above it is assumed that all the zeros of $\det K(z)$ are in the open left half-plane. Recall also that
$$\theta(K) := \sup_{-2\,\mathrm{var}(R_0)\le\omega\le 2\,\mathrm{var}(R_0)}\|K^{-1}(i\omega)\|_n,$$
and in Chapter 4, estimates for $\theta(K)$ are given.

The equation
$$\dot y(t) = \int_0^\eta A(s)y(t-s)\,ds + \sum_{k=0}^m A_k y(t-\tau_k) + \int_0^\eta B(t,s)y(t-s)\,ds + \sum_{k=0}^{\tilde m} B_k(t)y(t-h_k) \quad (t \ge 0;\ m, \tilde m < \infty) \tag{3.4}$$
is an example of equation (3.1). Here
$$0 = h_0 < h_1 < \dots < h_{\tilde m} \le \eta \quad \text{and} \quad 0 = \tau_0 < \tau_1 < \dots < \tau_m \le \eta$$
are constants, $A_k$ are constant matrices and $A(s)$ is Lebesgue integrable on $[0,\eta]$, $B_k(t)$ are piecewise continuous matrices and $B(t,s)$ is Lebesgue integrable in $s$ on $[0,\eta]$. In this case,
$$K(z) = zI - \int_0^\eta e^{-sz}A(s)\,ds - \sum_{k=0}^m e^{-\tau_k z}A_k \tag{3.5}$$
and
$$\mathrm{var}(R_0) \le \int_0^\eta \|A(s)\|_n\,ds + \sum_{k=0}^m \|A_k\|_n. \tag{3.6}$$

Theorem 6.3.1. Let the conditions (3.2) and
$$q_2\theta(K) < 1 \tag{3.7}$$
be fulfilled. Then equation (3.1) is exponentially stable.
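Condition (3.7) can be verified numerically by sampling $\|K^{-1}(i\omega)\|_n$ on the segment $|\omega| \le 2\,\mathrm{var}(R_0)$. A sketch for a hypothetical autonomous part $\dot y = A_0 y(t-h)$ with an assumed perturbation bound $q_2$ (both illustrative, not from the text):

```python
import numpy as np

A0 = np.array([[-1.0, 0.2],
               [0.0, -1.5]])               # hypothetical stable delay matrix
h = 0.2
I = np.eye(2)
v = np.linalg.norm(A0, 2)                  # upper bound for var(R0)
omegas = np.linspace(-2 * v, 2 * v, 4001)
# theta(K) for K(z) = z*I - e^{-z h} A0, sampled on the imaginary axis:
theta = max(np.linalg.norm(np.linalg.inv(1j * w * I - np.exp(-1j * w * h) * A0), 2)
            for w in omegas)
q2 = 0.3                                   # assumed constant from (3.2)
stable = q2 * theta < 1.0                  # condition (3.7)
```

A fine grid suffices here because $\|K^{-1}(i\omega)\|_n$ varies smoothly away from characteristic values.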


Proof. Consider the non-homogeneous equation
$$\dot x = E_0 x + Ex + f \tag{3.8}$$
with the zero initial condition and $f \in L^2(0,\infty)$. Here
$$E_0 x(t) = \int_0^\eta dR_0(\tau)x(t-\tau).$$
By Lemma 4.4.1 and (3.2),
$$\|x\|_{L^2(0,\infty)} \le \theta(K)\|Ex+f\|_{L^2(0,\infty)}.$$
Thus, by (3.2),
$$\|x\|_{L^2(0,\infty)} \le \theta(K)(q_2\|x\|_{L^2(0,\infty)} + \|f\|_{L^2(0,\infty)}).$$
Hence, taking into account (3.7), we get
$$\|x\|_{L^2(0,\infty)} \le \frac{\theta(K)\|f\|_{L^2(0,\infty)}}{1-\theta(K)q_2}. \tag{3.9}$$
Now Theorem 3.5.1 implies the required result. $\square$

Example 6.3.2. Consider the equation
$$\dot y(t) + A_1\int_0^\eta y(t-s)\,dr_1(s) + A_2\int_0^\eta y(t-s)\,dr_2(s) = (Ey)(t) \tag{3.10}$$
with commuting positive definite Hermitian matrices $A_1, A_2$, and scalar nondecreasing functions $r_1(s), r_2(s)$. Besides,
$$K(z) = zI + A_1\int_0^\eta e^{-zs}\,dr_1(s) + A_2\int_0^\eta e^{-zs}\,dr_2(s).$$
In addition, the eigenvalues $\lambda_j(A_1), \lambda_j(A_2)$ $(j=1,\dots,n)$ of $A_1$ and $A_2$, respectively, are positive. Moreover, since the matrices commute, one can enumerate the eigenvalues in such a way that the eigenvalues of $K(z)$ are defined by
$$\lambda_j(K(z)) = z + \lambda_j(A_1)\int_0^\eta e^{-zs}\,dr_1(s) + \lambda_j(A_2)\int_0^\eta e^{-zs}\,dr_2(s) \quad (j=1,\dots,n).$$
In addition,
$$\mathrm{var}(R_0) \le \|A_1\|_n\,\mathrm{var}(r_1) + \|A_2\|_n\,\mathrm{var}(r_2).$$
Let us use Corollary 4.3.2. In the considered case $B(z)$ is normal, so $g(B(z)) \equiv 0$ and thus the mentioned corollary implies
$$\theta(K) \le \frac{1}{\hat d(K)}$$
with
$$\hat d(K) = \min_{j=1,\dots,n}\ \inf_{|\omega|\le 2\,\mathrm{var}(R_0)}\left|i\omega + \lambda_j(A_1)\int_0^\eta e^{-i\omega s}\,dr_1(s) + \lambda_j(A_2)\int_0^\eta e^{-i\omega s}\,dr_2(s)\right|.$$
Denoting
$$\hat r_j(s) = \lambda_j(A_1)r_1(s) + \lambda_j(A_2)r_2(s),$$
we obtain
$$\mathrm{var}(\hat r_j) = \lambda_j(A_1)\,\mathrm{var}(r_1) + \lambda_j(A_2)\,\mathrm{var}(r_2)$$
and
$$\lambda_j(K(z)) = z + \int_0^\eta e^{-zs}\,d\hat r_j(s) \quad (j=1,\dots,n).$$
Assume that
$$e\eta\,\mathrm{var}(\hat r_j) < 1, \quad j=1,\dots,n. \tag{3.11}$$
Then by Lemma 4.6.5, we obtain
$$\hat d(K) = \min_j \mathrm{var}(\hat r_j).$$
Applying Theorem 6.3.1, we can assert that equation (3.10) is exponentially stable, provided the conditions (3.2), (3.11) and
$$q_2 < \min_j \mathrm{var}(\hat r_j) = \min_j\big(\lambda_j(A_1)\,\mathrm{var}(r_1) + \lambda_j(A_2)\,\mathrm{var}(r_2)\big)$$
hold.

6.4 Equations with constant coefficients and variable delays

Consider in $\mathbb{C}^n$ the equation
$$\dot x(t) = \sum_{k=1}^m A_k x(t-\tau_k(t)) + f(t) \quad (f \in L^2(0,\infty);\ t \ge 0) \tag{4.1}$$
with condition (1.8). Here $A_k$ are constant matrices, $\tau_k(t)$ are nonnegative continuous scalar functions defined on $[0,\infty)$ and satisfying the conditions
$$h_k \le \tau_k(t) \le \eta_k \quad (0 \le h_k, \eta_k \equiv \mathrm{const} \le \eta;\ k=1,\dots,m < \infty;\ t \ge 0). \tag{4.2}$$
Introduce the matrix function
$$K(z) = zI - \sum_{k=1}^m A_k e^{-zh_k} \quad (z \in \mathbb{C}).$$
As above, it is assumed that all the roots of $\det K(z)$ are in $\mathbb{C}_-$. Set
$$v_0 := \sum_{k=1}^m \|A_k\|_n, \quad \theta(K) := \max_{-2v_0\le s\le 2v_0}\|K^{-1}(is)\|_n$$
and
$$\gamma(K) := \sum_{k=1}^m (\eta_k-h_k)\|A_k\|_n.$$

Theorem 6.4.1. Let the conditions (4.2) and
$$v_0\theta(K) + \gamma(K) < 1 \tag{4.3}$$
hold. Then a solution of equation (4.1) with the zero initial condition (1.8) satisfies the inequality
$$\|x\|_{L^2(0,\infty)} \le \frac{\theta(K)\|f\|_{L^2(0,\infty)}}{1-v_0\theta(K)-\gamma(K)}. \tag{4.4}$$
This theorem is proved in the next section. Theorems 6.4.1 and 3.5.1 imply

Corollary 6.4.2. Let conditions (4.2) and (4.3) hold. Then the equation
$$\dot y(t) = \sum_{k=1}^m A_k y(t-\tau_k(t)) \tag{4.5}$$
is exponentially stable.

For instance, consider the following equation with one delay:
$$\dot y(t) = A_0 y(t-\tau(t)) \quad (t > 0), \tag{4.6}$$
where $A_0$ is a constant $n\times n$-matrix and the condition
$$h \le \tau(t) \le \eta \quad (h \equiv \mathrm{const} \ge 0;\ t \ge 0) \tag{4.7}$$
holds. In the considered case $K(z) = zI - e^{-zh}A_0$. As is shown in Section 4.3, for any regular $z$,
$$\|K^{-1}(z)\|_n \le \Gamma(K(z)) \quad (z \notin \Sigma(K)),$$
where
$$\Gamma(K(z)) = \sum_{k=0}^{n-1}\frac{g^k(B(z))}{\sqrt{k!}\,d^{k+1}(K(z))},$$
with $B(z) = e^{-zh}A_0$, and $d(K(z))$ is the smallest modulus of the eigenvalues of $K(z)$:
$$d(K(z)) = \min_{k=1,\dots,n}|\lambda_k(K(z))|.$$
Here
$$\lambda_j(K(z)) = z - e^{-zh}\lambda_j(A_0)$$
are the eigenvalues of the matrix $K(z)$, counted with their multiplicities.
For a real $\omega$ we have $g(B(i\omega)) = g(A_0)$. In addition,
$$v_0 = \|A_0\|_n \quad \text{and} \quad \gamma(K) = (\eta-h)\|A_0\|_n.$$
Thus
$$\theta(K) \le \Gamma_0(K) := \max_{|\omega|\le 2v_0}\Gamma(K(i\omega)) \le \theta_{A_0},$$
where
$$\theta_{A_0} := \sum_{k=0}^{n-1}\frac{g^k(A_0)}{\sqrt{k!}\,d^{k+1}(K)}$$
and
$$d(K) := \inf_{j=1,\dots,n;\ |y|\le 2|\lambda_j(A_0)|}|iy + \lambda_j(A_0)e^{-iyh}|.$$
In particular, if $A_0$ is a normal matrix, then $g(A_0) = 0$ and $\theta_{A_0} = 1/d(K)$. Now Theorem 6.4.1 yields the following result.

Corollary 6.4.3. Let the conditions (4.7) and
$$v_0\theta_{A_0} + (\eta-h)\|A_0\|_n < 1 \tag{4.8}$$
hold. Then equation (4.6) is exponentially stable.

6.5 Proof of Theorem 6.4.1

To prove Theorem 6.4.1 we need the following result.

Lemma 6.5.1. Let $\tau(t)$ be a nonnegative continuous scalar function defined on $[0,\infty)$ and satisfying condition (4.7). In addition, let a function $w \in L^2(-\eta,\infty)$ have the properties $\dot w \in L^2(0,\infty)$ and $w(t) = 0$ for $t < 0$. Then
$$\|w(t-\tau(t)) - w(t-h)\|_{L^2(0,\infty)} \le (\eta-h)\|\dot w\|_{L^2(0,\infty)}.$$

Proof. Put
$$u(t) = w(t-h) - w(t-\tau(t)) = \int_{t-\tau(t)}^{t-h}\dot w(s)\,ds.$$
By the Schwarz inequality and (4.7) we obtain
$$\|u\|^2_{L^2(0,\infty)} = \int_0^\infty\Big\|\int_{t-\tau(t)}^{t-h}\dot w(s)\,ds\Big\|_n^2\,dt \le (\eta-h)\int_0^\infty\int_{t-\eta}^{t-h}\|\dot w(s)\|_n^2\,ds\,dt =$$
$$(\eta-h)\int_0^\infty\int_0^{\eta-h}\|\dot w(t-h-s_1)\|_n^2\,ds_1\,dt \le (\eta-h)\int_0^{\eta-h}\int_0^\infty\|\dot w(t-s_1-h)\|_n^2\,dt\,ds_1 =$$
$$(\eta-h)\int_0^{\eta-h}\int_{-s_1-h}^\infty\|\dot w(t_1)\|_n^2\,dt_1\,ds_1 \le (\eta-h)^2\int_0^\infty\|\dot w(t_1)\|_n^2\,dt_1.$$
We thus get the required result. $\square$

Now let $w \in L^2(0,T)$ for a sufficiently large finite $T > 0$. Extend it to the whole positive half-line by
$$w(t) = w(T)(T+1-t) \quad (T < t \le T+1) \quad \text{and} \quad w(t) = 0 \quad (t > T+1).$$
Then by the previous lemma we get our next result.

Corollary 6.5.2. Under condition (4.7), let a function $w \in L^2(-\eta,T)$ $(0 < T < \infty)$ have the properties $\dot w \in L^2(0,T)$ and $w(t) = 0$ for $t < 0$. Then
$$\|w(t-\tau(t)) - w(t-h)\|_{L^2(0,T)} \le (\eta-h)\|\dot w\|_{L^2(0,T)}.$$
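The inequality of Lemma 6.5.1 can be probed numerically. The sketch below uses a hypothetical scalar $w$ and delay $\tau(t)$ (illustrative choices satisfying the hypotheses) and compares both sides by discretized $L^2$ norms:

```python
import numpy as np

h, eta = 0.1, 0.4
tau = lambda t: h + (eta - h) * (0.5 + 0.5 * np.sin(t))    # h <= tau(t) <= eta
w  = lambda t: np.where(t > 0, t * np.exp(-t), 0.0)        # w(t) = 0 for t <= 0
dw = lambda t: np.where(t > 0, (1 - t) * np.exp(-t), 0.0)  # dw/dt for t > 0

dt = 1e-4
ts = np.arange(0.0, 60.0, dt)
lhs = np.sqrt(np.sum((w(ts - tau(ts)) - w(ts - h))**2) * dt)
rhs = (eta - h) * np.sqrt(np.sum(dw(ts)**2) * dt)          # (eta-h)*||w'||_{L2}
```

Here $\|\dot w\|_{L^2(0,\infty)} = 1/2$, so the right-hand side equals $0.15$, and the computed left-hand side stays below it, as the lemma predicts.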

Proof of Theorem 6.4.1: In this proof, for brevity we put $\|\cdot\|_{L^2(0,T)} = |\cdot|_T$ $(T < \infty)$. Recall that it is supposed that the characteristic values of $K$ are in the open left half-plane.

From (4.1) it follows that
$$\dot x(t) = \sum_{k=1}^m A_k x(t-h_k) + [F_0 x](t) \quad (t \ge 0), \tag{5.1}$$
where
$$[F_0 x](t) = f(t) + \sum_{k=1}^m A_k[x(t-\tau_k(t)) - x(t-h_k)].$$
It is simple to check that
$$\Big|\sum_{k=1}^m A_k x(t-h_k)\Big|_T \le v_0|x|_T.$$
Moreover, thanks to Corollary 6.5.2,
$$\Big|\sum_{k=1}^m A_k[x(t-\tau_k(t)) - x(t-h_k)]\Big|_T \le \gamma(K)|\dot x|_T.$$
So
$$|F_0 x|_T \le \gamma(K)|\dot x|_T + |f|_T \tag{5.2}$$
and
$$|\dot x|_T \le v_0|x|_T + \gamma(K)|\dot x|_T + |f|_T.$$
By (4.3) we have $\gamma(K) < 1$. Hence,
$$|\dot x|_T \le \frac{v_0|x|_T + |f|_T}{1-\gamma(K)}. \tag{5.3}$$
Furthermore, by the Variation of Constants formula,
$$x(t) = \int_0^t G(t-s)[F_0 x](s)\,ds,$$
where $G$ is the fundamental solution of the equation
$$\dot x(t) = \sum_{k=1}^m A_k x(t-h_k).$$
So $|x|_T \le |\hat G|_T|F_0 x|_T$, where
$$\hat G f(t) = \int_0^t G(t-s)f(s)\,ds.$$
But $|\hat G|_T \le \|\hat G\|_{L^2(0,\infty)}$, and according to Lemma 4.4.1
$$\|\hat G\|_{L^2(0,\infty)} = \theta(K).$$
Therefore, from (5.1) it follows that $|x|_T \le \theta(K)|F_0 x|_T$. Hence, due to (5.2),
$$|x|_T \le \theta(K)(\gamma(K)|\dot x|_T + |f|_T).$$
Now (5.3) implies
$$|x|_T \le \theta(K)(\gamma(K)|\dot x|_T + |f|_T) \le \frac{\theta(K)(v_0|x|_T + |f|_T)}{1-\gamma(K)}.$$
By condition (4.3) we have
$$\frac{v_0\theta(K)}{1-\gamma(K)} < 1.$$
Consequently,
$$|x|_T \le \frac{\theta(K)\|f\|_{L^2(0,\infty)}}{1-v_0\theta(K)-\gamma(K)}.$$
Letting $T \to \infty$ in this inequality, we get the required result, as claimed. $\square$
6.6 The fundamental solution of equation (4.1)

Again consider equation (4.1). Denote its fundamental solution by $W(t,s)$ and put
$$\psi(K) = \frac{\theta(K)}{1-v_0\theta(K)-\gamma(K)},$$
$$A = \sum_{k=1}^m A_k \quad \text{and} \quad \chi_A := \max_{0\le t\le\eta}\|e^{-At} - I\|_n. \tag{6.1}$$

Theorem 6.6.1. Under the above notation, let $A$ be a Hurwitz matrix and let conditions (4.2) and (4.3) hold. Then for all $s \ge 0$, the following relations are valid:
$$\|W(\cdot,s)\|_{L^2(s,\infty)} \le (1+\chi_A v_0\psi(K))\|e^{At}\|_{L^2(s,\infty)}, \tag{6.2}$$
$$\|W'_t(\cdot,s)\|_{L^2(s,\infty)} \le \frac{v_0(1+v_0\chi_A\psi(K))}{1-\gamma(K)}\|e^{At}\|_{L^2(s,\infty)} \tag{6.3}$$
and
$$\|W(\cdot,s)\|_{C(s,\infty)} \le \frac{\sqrt{2v_0}\,\|e^{At}\|_{L^2(s,\infty)}}{\sqrt{1-\gamma(K)}}\,(1+\psi(K)\chi_A v_0). \tag{6.4}$$
This theorem is proved in the next section.

Let $f(z)$ be a function holomorphic on a neighborhood of the closed convex hull $\mathrm{co}(A)$ of the spectrum of an $n\times n$-matrix $A$. Then by Corollary 2.5.3 we have
$$\|f(A)\| \le \sum_{k=0}^{n-1}\sup_{\lambda\in\mathrm{co}(A)}|f^{(k)}(\lambda)|\,\frac{g^k(A)}{(k!)^{3/2}}.$$
Hence, in particular,
$$\|e^{At}\|_n \le e^{\alpha(A)t}\sum_{k=0}^{n-1}\frac{g^k(A)t^k}{(k!)^{3/2}} \quad (t \ge 0)$$
and
$$\|e^{-At} - I\|_n \le e^{-\beta(A)t}\sum_{k=1}^{n-1}\frac{g^k(A)t^k}{(k!)^{3/2}} \quad (t \ge 0), \tag{6.5}$$
where
$$\alpha(A) := \max_k \operatorname{Re}\lambda_k(A) \quad \text{and} \quad \beta(A) := \min_k \operatorname{Re}\lambda_k(A).$$
So
$$\|e^{At}\|_{L^2(0,\infty)} \le \left[\int_0^\infty e^{2\alpha(A)t}\left(\sum_{k=0}^{n-1}\frac{g^k(A)t^k}{(k!)^{3/2}}\right)^2 dt\right]^{1/2}. \tag{6.6}$$
This integral is easily calculated. Due to (6.5),
$$\chi_A \le e^{-\beta(A)\eta}\sum_{k=1}^{n-1}\frac{g^k(A)\eta^k}{(k!)^{3/2}}, \tag{6.7}$$
since $A$ is Hurwitzian.
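The integral in (6.6) can be evaluated in closed form via $\int_0^\infty t^m e^{2\alpha t}\,dt = m!/(2|\alpha|)^{m+1}$ for $\alpha < 0$. The following sketch (a hypothetical Hurwitz matrix, not from the text) computes the resulting bound term by term:

```python
import numpy as np
from math import factorial, sqrt

def expAt_L2_bound(A):
    """Right-hand side of (6.6), expanded as a double sum and integrated
    with int_0^inf t^m e^{2*alpha*t} dt = m! / (2|alpha|)^{m+1}, alpha < 0."""
    n = A.shape[0]
    lam = np.linalg.eigvals(A)
    alpha = max(lam.real)                   # alpha(A) < 0 for a Hurwitz matrix
    g = sqrt(max(np.linalg.norm(A, 'fro')**2 - np.sum(np.abs(lam)**2), 0.0))
    s = 0.0
    for j in range(n):
        for k in range(n):
            m = j + k
            s += (g**m / (factorial(j) * factorial(k))**1.5
                  * factorial(m) / (2.0 * abs(alpha))**(m + 1))
    return sqrt(s)

A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])                 # hypothetical example: g = 1, alpha = -2
bound = expAt_L2_bound(A)                   # bound^2 = 1/4 + 2/16 + 2/64 = 13/32
```

The double sum simply squares the polynomial factor in (6.6) before integrating, so no numerical quadrature is needed.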
In the rest of this section we illustrate Theorem 6.6.1 in the case of equation (4.6) with one delay. In this case
$$\gamma(K) = (\eta-h)\|A_0\|_n.$$
To estimate $\chi_{A_0}$ and $\|e^{A_0 t}\|_{L^2(s,\infty)}$ we can directly apply (6.6) and (6.7). As is shown in Section 6.4, we have the inequality $\psi(K) \le \psi_{A_0}$, where
$$\psi_{A_0} = \frac{\theta_{A_0}}{1-\|A_0\|_n\theta_{A_0}-(\eta-h)\|A_0\|_n}.$$
Now Theorem 6.6.1 implies

Corollary 6.6.2. Let $A_0$ be a Hurwitz matrix and let conditions (4.7) and (4.8) hold. Then for all $s \ge 0$, the fundamental solution $W(\cdot,\cdot)$ of equation (4.6) satisfies the inequalities
$$\|W(\cdot,s)\|_{L^2(s,\infty)} \le (1+\psi_{A_0}\|A_0\|_n\chi_{A_0})\|e^{A_0t}\|_{L^2(s,\infty)},$$
$$\|W'_t(\cdot,s)\|_{L^2(s,\infty)} \le \frac{\|A_0\|_n(1+\psi_{A_0}\|A_0\|_n\chi_{A_0})}{1-(\eta-h)\|A_0\|_n}\|e^{A_0t}\|_{L^2(s,\infty)}$$
and
$$\|W(\cdot,s)\|^2_{C(s,\infty)} \le \frac{2\|A_0\|_n}{1-(\eta-h)\|A_0\|_n}(1+\psi_{A_0}\|A_0\|_n\chi_{A_0})^2\|e^{A_0t}\|^2_{L^2(s,\infty)}.$$
Let us use Lemma 4.6.2, which asserts that for constants $h, a_0 \in (0,\infty)$ we have
$$\inf_{\omega\in\mathbb{R}}|i\omega + a_0e^{-ih\omega}| \ge a_0\cos(2ha_0) > 0,$$
provided $a_0 h < \pi/4$. From the latter result it follows that if all the eigenvalues of $A_0$ are real and negative, and
$$h|\lambda_j(A_0)| < \pi/4 \quad (j=1,\dots,n), \tag{6.8}$$
then
$$d(K) \ge \tilde d_{A_0} := \min_{j=1,\dots,n}|\lambda_j(A_0)|\cos(2h\lambda_j(A_0)).$$
So under condition (6.8) one has $d(K) \ge \tilde d_{A_0}$ and therefore
$$\theta_{A_0} < \sum_{k=0}^{n-1}\frac{g^k(A_0)}{\sqrt{k!}\,\tilde d_{A_0}^{k+1}}.$$
6.7 Proof of Theorem 6.6.1

Consider equation (4.1) with the initial condition
$$x(t) = 0 \quad (s-\eta \le t < s,\ s \ge 0).$$
Replace in (4.1) $t$ by $t-s$. Then (4.1) takes the form
$$\dot x(t-s) = \sum_{k=1}^m A_k x(t-s-\tau_k(t-s)) + f(t-s) \quad (t \ge s).$$
Applying Theorem 6.4.1 to this equation, under conditions (4.2) and (4.3), we get the inequality
$$\|x(t-s)\|_{L^2(0,\infty)} \le \psi(K)\|f(t-s)\|_{L^2(0,\infty)},$$
or
$$\|x\|_{L^2(s,\infty)} \le \psi(K)\|f\|_{L^2(s,\infty)}. \tag{7.1}$$
Now for a fixed $s$, substitute
$$z(t) = W(t,s) - e^{A(t-s)} \tag{7.2}$$
into the homogeneous equation (4.5). Then
$$\dot z(t) = \sum_{k=1}^m A_k[W(t-\tau_k(t),s) - e^{(t-s)A}] = \sum_{k=1}^m A_k z(t-\tau_k(t)) + u(t),$$
where
$$u(t) := \sum_{k=1}^m A_k[e^{A(t-s-\tau_k(t))} - e^{A(t-s)}] = e^{A(t-s)}\sum_{k=1}^m A_k[e^{-A\tau_k(t)} - I].$$
Obviously,
$$\|u\|_{L^2(s,\infty)} \le v_0\chi_A\|e^{At}\|_{L^2(s,\infty)}.$$
By (7.1),
$$\|z\|_{L^2(s,\infty)} \le \psi(K)\|u\|_{L^2(s,\infty)} \le \psi(K)v_0\chi_A\|e^{At}\|_{L^2(s,\infty)}.$$
Now the estimate (6.2) follows from (7.2).
Furthermore, rewrite equation (4.5) with $w(t) = W(t,s)$ as
$$\dot w(t) = \sum_{k=1}^m A_k w(t-h_k) + \sum_{k=1}^m A_k[w(t-\tau_k(t)) - w(t-h_k)].$$
Then due to Lemma 6.5.1 we deduce that
$$\|\dot w\|_{L^2(s,\infty)} \le v_0\|w\|_{L^2(s,\infty)} + \gamma(K)\|\dot w\|_{L^2(s,\infty)}.$$
Hence, condition (4.3) and the just obtained estimate for $\|W(\cdot,s)\|_{L^2(s,\infty)}$ imply
$$\|W'_t(\cdot,s)\|_{L^2(s,\infty)} \le \frac{v_0\|W(\cdot,s)\|_{L^2(s,\infty)}}{1-\gamma(K)} \le \frac{v_0}{1-\gamma(K)}(1+\psi(K)\chi_A v_0)\|e^{At}\|_{L^2(s,\infty)}.$$
That is, inequality (6.3) is also proved.

Moreover, by the Schwarz inequality,
$$\|f(t)\|_n^2 = -\int_t^\infty \frac{d\|f(\tau)\|_n^2}{d\tau}\,d\tau \le 2\int_t^\infty \|f(\tau)\|_n\|\dot f(\tau)\|_n\,d\tau \le 2\|f\|_{L^2(s,\infty)}\|\dot f\|_{L^2(s,\infty)} \quad (f, \dot f \in L^2(s,\infty),\ t \ge s).$$
Hence,
$$\|W(\cdot,s)\|^2_{C(s,\infty)} \le 2\|W'_t(\cdot,s)\|_{L^2(s,\infty)}\|W(\cdot,s)\|_{L^2(s,\infty)}.$$
Now (6.2) and (6.3) imply (6.4). This proves the theorem. $\square$

6.8 Comments

The results presented in Section 6.1 are probably new. The material of Sections 6.2-6.4 is taken from the papers [39, 41]. Sections 6.6 and 6.7 are based on the paper [42].
Chapter 7

Periodic Systems

This chapter deals with a class of periodic systems. Explicit stability conditions are derived. The main tool is the invertibility conditions for infinite block matrices. In the case of scalar equations we apply regularized determinants.

7.1 Preliminary results

Our main object in this chapter is the equation
$$\dot x(t) + \sum_{s=1}^m A_s(t)\int_0^\eta x(t-\tau)\,d\mu_s(\tau) = 0 \quad (0 < \eta < \infty;\ t \ge 0), \tag{1.1}$$
where $\mu_s$ are nondecreasing functions of bounded variations $\mathrm{var}(\mu_s)$, and $A_s(t)$ are $T$-periodic $n\times n$-matrix-valued functions satisfying the conditions pointed out below.

Recall that the Floquet theory for ordinary differential equations has been extended to linear periodic functional differential equations with delay, in particular, in the book [71]. Let equation (1.1) have a nonzero solution of the form
$$x(t) = p(t)e^{\lambda t}, \quad p(t) = p(t+T).$$
Then $\mu = e^{\lambda T}$ is called a characteristic multiplier of equation (1.1) (see [71, Lemma 8.1.2]). As was pointed out in [71], a complete Floquet theory for functional differential equations is impossible. However, it is possible to define characteristic multipliers and exploit the compactness of the solution operator to show that a Floquet representation exists on the generalized eigenspace of a characteristic multiplier. The characteristic multipliers of equation (1.1) are independent of the starting time.

Lemma 7.1.1. Equation (1.1) is asymptotically stable if and only if all the characteristic multipliers of equation (1.1) have moduli less than 1.
For the proof see [71, Corollary 8.1.1].

Without loss of generality take $T = 2\pi$: $A_s(t) = A_s(t+2\pi)$ $(t \in \mathbb{R};\ s=1,\dots,m)$.

Any $2\pi$-periodic vector-valued function $f$ with the property $f \in L^2(0,2\pi)$ can be represented by the Fourier series
$$f(t) = \sum_{k=-\infty}^\infty f_k e^{ikt} \quad \Big(f_k = \frac{1}{2\pi}\int_0^{2\pi} f(t)e^{-ikt}\,dt;\ k=0,\pm1,\pm2,\dots\Big).$$
Introduce the Hilbert space $PF$ of $2\pi$-periodic functions defined on the real axis $\mathbb{R}$ with values in $\mathbb{C}^n$, and the scalar product
$$(f,u)_{PF} := \sum_{k=-\infty}^\infty (f_k,u_k)_{\mathbb{C}^n} \quad (f,u \in PF),$$
where $f_k, u_k$ are the Fourier coefficients of $f$ and $u$, respectively. The norm in $PF$ is
$$\|f\|_{PF} = \sqrt{(f,f)_{PF}} = \Big(\sum_{k=-\infty}^\infty \|f_k\|_n^2\Big)^{1/2}.$$
The Parseval equality yields
$$\|f\|_{PF} = \Big[\frac{1}{2\pi}\int_a^{a+2\pi}\|f(t)\|_n^2\,dt\Big]^{1/2} \quad (f \in PF;\ a \in \mathbb{R}).$$
In addition, introduce the subspace $DPF \subset PF$ of $2\pi$-periodic functions $f$ whose Fourier coefficients satisfy the condition
$$\sum_{k=-\infty}^\infty k^2\|f_k\|_n^2 < \infty.$$
That is, $f' \in L^2(0,2\pi)$.

Furthermore, with a $\lambda \in \mathbb{C}$ substitute
$$x(t) = e^{\lambda t}v(t) \tag{1.2}$$
into (1.1). Then we have
$$\dot v(t) + \lambda v(t) + \sum_{s=1}^m A_s(t)\int_0^\eta e^{-\lambda\tau}v(t-\tau)\,d\mu_s(\tau) = 0. \tag{1.3}$$
Impose the condition
$$v(t) = v(t+2\pi). \tag{1.4}$$
Let $v_k$ and $A_{sk}$ $(k=0,\pm1,\dots)$ be the Fourier coefficients of $v(t)$ and $A_s(t)$, respectively:
$$v(t) = \sum_{k=-\infty}^\infty v_k e^{ikt} \quad \text{and} \quad A_s(t) = \sum_{k=-\infty}^\infty A_{sk}e^{ikt} \quad (s=1,\dots,m). \tag{1.5}$$
It is assumed that
$$\sum_{k=-\infty}^\infty \|A_{sk}\|_n < \infty. \tag{1.6}$$
For instance, assume that $A_s(t)$, $s=1,\dots,m$, have an integrable second derivative from $L^2(0,2\pi)$. Then
$$\sum_{k=-\infty}^\infty k^4\|A_{sk}\|_n^2 < \infty$$
and therefore $k^2\|A_{sk}\|_n \to 0$ as $k \to \infty$. So condition (1.6) is fulfilled.
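The coefficients $A_{sk}$ and the sums built from them can be approximated by the FFT. A sketch for a hypothetical smooth scalar coefficient $a(t) = 1 + 0.5\cos t$ (illustrative only; for smooth data the coefficient sum converges quickly):

```python
import numpy as np

N = 1024
t = 2.0 * np.pi * np.arange(N) / N
a = 1.0 + 0.5 * np.cos(t)                # hypothetical smooth 2*pi-periodic coefficient
coeffs = np.fft.fft(a) / N               # a_k = (1/2pi) int_0^{2pi} a(t) e^{-ikt} dt
abs_sum = float(np.sum(np.abs(coeffs)))  # approximates sum_k |a_k| in condition (1.6)
w0 = abs_sum - abs(coeffs[0])            # the sum over k != 0 used in Section 7.2
```

For this $a(t)$ only $a_0 = 1$ and $a_{\pm 1} = 0.25$ are nonzero, so the sums are $1.5$ and $0.5$ up to rounding.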

7.2 The main result

Without loss of generality assume that
$$\mathrm{var}(\mu_s) = 1, \quad s=1,\dots,m. \tag{2.1}$$
Put
$$w_0 := \sum_{s=1}^m\sum_{l\ne 0}\|A_{sl}\|_n \quad \text{and} \quad \mathrm{var}(F) = \sum_{s=1}^m \|A_{s0}\|_n.$$
Recall that the characteristic values of a matrix-valued function are the zeros of its determinant. Now we are in a position to formulate the main result of the present chapter.

Theorem 7.2.1. Let conditions (1.6) and (2.1) hold. Let all the characteristic values of the matrix function
$$F(z) := zI + \sum_{s=1}^m A_{s0}\int_0^\eta e^{-z\tau}\,d\mu_s(\tau)$$
be in the open left half-plane $\mathbb{C}_-$, and
$$w_0\sup_{-2\,\mathrm{var}(F)\le\omega\le 2\,\mathrm{var}(F)}\|F^{-1}(i\omega)\|_n < 1. \tag{2.2}$$
Then equation (1.1) is asymptotically stable.
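Condition (2.2) is directly checkable. A sketch for the hypothetical scalar case $m = n = 1$, $a(t) = 0.5 + 0.3\cos t$ and a point mass $\mu$ at $h = 0.5$ (so $A_0 = 0.5$, $w_0 = 0.3$ and $F(z) = z + 0.5e^{-zh}$; here $a_0 h < \pi/4$, which by the results of Section 4.6 keeps the zeros of $F$ in $\mathbb{C}_-$):

```python
import numpy as np

a0, h, w0 = 0.5, 0.5, 0.3              # hypothetical data (see the lead-in)
varF = a0                              # var(F) = |A_0| for a point-mass measure
omegas = np.linspace(-2 * varF, 2 * varF, 4001)
sup_Finv = max(1.0 / abs(1j * w + a0 * np.exp(-1j * w * h)) for w in omegas)
stable = w0 * sup_Finv < 1.0           # condition (2.2) of Theorem 7.2.1
```

Here $|F(i\omega)|$ attains its minimum $0.5$ at $\omega = 0$, so $w_0\sup|F^{-1}| = 0.6 < 1$ and the theorem applies.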



Proof. Substituting (1.5) into equation (1.3), we obtain
$$\sum_{j=-\infty}^\infty (ij+\lambda)v_j e^{ijt} + \sum_{k=-\infty}^\infty\sum_{r=-\infty}^\infty\sum_{s=1}^m e^{irt}A_{sr}v_k\int_0^\eta e^{-\lambda\tau}e^{ik(t-\tau)}\,d\mu_s(\tau) = 0, \tag{2.3}$$
or
$$(ij+\lambda)v_j + \sum_{k=-\infty}^\infty\sum_{s=1}^m A_{s,j-k}v_k\int_0^\eta e^{-(\lambda+ik)\tau}\,d\mu_s(\tau) = 0 \quad (j=0,\pm1,\dots).$$
Rewrite this system as
$$T(\lambda)\hat v = 0 \quad (\hat v = (v_k)_{k=-\infty}^\infty),$$
where $T(\lambda) = (T_{jk}(\lambda))_{j,k=-\infty}^\infty$ is the infinite block matrix with the blocks
$$T_{jk}(\lambda) = \sum_{s=1}^m A_{s,j-k}\int_0^\eta e^{-(\lambda+ik)\tau}\,d\mu_s(\tau) \quad (k \ne j)$$
and
$$T_{jj}(\lambda) = (ij+\lambda)I + \sum_{s=1}^m A_{s0}\int_0^\eta e^{-(\lambda+ij)\tau}\,d\mu_s(\tau).$$
By [37, Theorem 6.2] (see also Section 16.6 of Appendix B below), $T(z)$ is invertible, provided
$$\sup_j \|T_{jj}^{-1}(z)\|_n\sum_{k\ne j}\|T_{jk}(z)\|_n < 1.$$
Clearly,
$$\sum_{k\ne j}\|T_{jk}(i\omega)\|_n \le \sum_{s=1}^m\sum_{k\ne j}\|A_{s,j-k}\|_n = \sum_{s=1}^m\sum_{l\ne 0}\|A_{sl}\|_n = w_0$$
and
$$\sup_{j=0,\pm1,\dots;\ \omega\in\mathbb{R}}\|T_{jj}^{-1}(i\omega)\|_n = \sup_{j=0,\pm1,\dots;\ \omega\in\mathbb{R}}\Big\|\Big(i(j+\omega)I + \sum_{s=1}^m A_{s0}\int_0^\eta e^{-i(j+\omega)\tau}\,d\mu_s(\tau)\Big)^{-1}\Big\|_n = \sup_{y\in\mathbb{R}}\|F^{-1}(iy)\|_n.$$
But thanks to Lemma 4.3.1, the equality
$$\sup_{\omega\in\mathbb{R}}\|F^{-1}(i\omega)\|_n = \sup_{|\omega|\le 2\,\mathrm{var}(F)}\|F^{-1}(i\omega)\|_n$$
is valid.

Furthermore, for a $\xi \in (0,1]$, let us introduce the matrix $T(\xi,z) = (T_{jk}(\xi,z))$ with
$$T_{jk}(\xi,z) = \xi T_{jk}(z) \quad (k \ne j), \qquad T_{jj}(\xi,z) = T_{jj}(z).$$
Assume that $T(z)$ has a characteristic value in the closed right half-plane. Then, according to the continuity of characteristic values, for some $\xi_0 \in (0,1]$ the matrix $T(\xi_0,z)$ has a characteristic value on the imaginary axis, but according to (2.2) this is impossible. This and Lemma 7.1.1 prove the theorem. $\square$

7.3 Norm estimates for block matrices

Let $A$ be an $n\times n$-matrix. Recall that
$$g(A) = \Big[N_2^2(A) - \sum_{k=1}^n|\lambda_k(A)|^2\Big]^{1/2},$$
where $\lambda_k(A)$, $k=1,\dots,n$, are the eigenvalues of $A$, counted with their multiplicities, and $N_2(A)$ is the Hilbert-Schmidt norm of $A$ (see Section 2.3). Besides,
$$g^2(A) \le N_2^2(A) - |\mathrm{Trace}\,A^2| \quad \text{and} \quad g^2(A) \le \frac{N_2^2(A-A^*)}{2} = 2N_2^2(A_I), \tag{3.1}$$
where $A_I = (A-A^*)/2i$. Moreover,
$$g(e^{it}A + zI) = g(A) \quad (t \in (-\infty,\infty);\ z \in \mathbb{C}). \tag{3.2}$$
If $A_1$ and $A_2$ are commuting matrices, then $g(A_1+A_2) \le g(A_1)+g(A_2)$. If $A$ is a normal matrix, $AA^* = A^*A$, then $g(A) = 0$.

Let
$$B(z) = \sum_{s=1}^m A_{s0}\int_0^\eta e^{-z\tau}\,d\mu_s(\tau),$$
and let $d(F(z))$ be the smallest modulus of the eigenvalues of $F(z)$:
$$d(F(z)) = \min_{j=1,\dots,n}|\lambda_j(F(z))|.$$
Thanks to Corollary 4.3.2, the inequality
$$\sup_{|\omega|\le 2\,\mathrm{var}(F)}\|F^{-1}(i\omega)\|_n \le \Gamma_0(F)$$
is valid, where
$$\Gamma_0(F) := \sup_{|\omega|\le 2\,\mathrm{var}(F)}\sum_{k=0}^{n-1}\frac{g^k(B(i\omega))}{\sqrt{k!}\,d^{k+1}(F(i\omega))}.$$
Now Theorem 7.2.1 implies our next result.

Corollary 7.3.1. Let all the characteristic values of $F(z)$ be in $\mathbb{C}_-$ and $w_0\Gamma_0(F) < 1$. Then equation (1.1) is asymptotically stable.

Note that
$$g(B(i\omega)) \le \sum_{s=1}^m N_2(A_{s0}) \quad (\omega \in \mathbb{R}).$$
If the matrices $A_{s0}$ are mutually commuting, then
$$g(B(i\omega)) \le \sum_{s=1}^m g(A_{s0}) \quad (\omega \in \mathbb{R})$$
and one can enumerate the eigenvalues of $A_{s0}$ in such a way that
$$\lambda_j(F(z)) = z + \sum_{s=1}^m \lambda_j(A_{s0})\int_0^\eta e^{-z\tau}\,d\mu_s(\tau).$$
Moreover, if
$$B(z) = \sum_{s=1}^m A_{s0}e^{-zh_s},$$
then by (3.1),
$$g(B(i\omega)) \le \frac{1}{\sqrt 2}\sum_{s=1}^m N_2(e^{-i\omega h_s}A_{s0} - e^{i\omega h_s}A_{s0}^*) \quad (\omega \in \mathbb{R}).$$
In the next section, under some assumptions, we suggest an additional simple estimate for $\Gamma_0(F)$.

7.4 Equations with one distributed delay

To illustrate Theorem 7.2.1, consider the equation
$$\dot x(t) + A(t)\int_0^\eta x(t-\tau)\,d\mu(\tau) = 0 \quad (t \ge 0), \tag{4.1}$$
where $A(t)$ is a piecewise continuous $n\times n$-matrix function satisfying $A(t) = A(t+2\pi)$, and $\mu$ is a nondecreasing function of bounded variation.

We need the function
$$k(z) = z + \int_0^\eta \exp(-zs)\,d\mu(s) \quad (z \in \mathbb{C}).$$
As was shown in Section 4.6, the equality
$$\inf_{-2\,\mathrm{var}(\mu)\le\omega\le 2\,\mathrm{var}(\mu)}|k(i\omega)| = \inf_{\omega\in(-\infty,\infty)}|k(i\omega)|$$
is valid. Moreover, if
$$\mathrm{var}(\mu)\,\eta < \frac{\pi}{4}, \tag{4.2}$$
then all the zeros of $k(z)$ are in $\mathbb{C}_-$ and
$$\inf_{\omega\in(-\infty,\infty)}|k(i\omega)| \ge \hat d > 0, \tag{4.3}$$
where
$$\hat d := \int_0^\eta \cos(2\,\mathrm{var}(\mu)\tau)\,d\mu(\tau).$$
Now let $A_k$ $(k=0,\pm1,\dots)$ be the Fourier coefficients of $A(t)$. Without loss of generality assume that
$$\mathrm{var}(\mu) = 1. \tag{4.4}$$
Then
$$F(z) = zI + A_0\int_0^\eta e^{-z\tau}\,d\mu(\tau),$$
$$w_0 = \sum_{k\ne 0}\|A_k\|_n \quad \text{and} \quad \mathrm{var}(F) = \|A_0\|_n.$$
According to (3.2),
$$g(B(i\omega)) = g(A_0) \quad (\omega \in \mathbb{R})$$
and
$$\lambda_j(F(z)) = z + \lambda_j(A_0)\int_0^\eta \exp(-zs)\,d\mu(s) \quad (z \in \mathbb{C}).$$

Let all the eigenvalues of $A_0$ be real and positive:
$$0 < \lambda_1(A_0) \le \dots \le \lambda_n(A_0), \tag{4.5}$$
and let the conditions (4.4) and
$$\eta\lambda_n(A_0)\,\mathrm{var}(\mu) = \eta\lambda_n(A_0) < \pi/4 \tag{4.6}$$
hold. Then due to (4.3),
$$\inf_{\omega\in(-\infty,\infty)}|\lambda_j(F(i\omega))| \ge \hat d(F),$$
where
$$\hat d(F) := \int_0^\eta \cos(2\lambda_n(A_0)\tau)\,d\mu(\tau) > 0.$$
Thus
$$\Gamma_0(F) \le \hat\Gamma_0, \quad \text{where} \quad \hat\Gamma_0 := \sum_{k=0}^{n-1}\frac{g^k(A_0)}{\sqrt{k!}\,\hat d^{k+1}(F)}.$$
Now Corollary 7.3.1 implies the following result.

Corollary 7.4.1. Let the conditions (4.5), (4.6) and
$$w_0\sum_{k=0}^{n-1}\frac{g^k(A_0)}{\sqrt{k!}\,\cos^{k+1}(2\eta\lambda_n(A_0))} < 1$$
hold. Then equation (4.1) is asymptotically stable.

If $A_0 = A_0^*$ and condition (4.5) holds, then $g(A_0) = 0$ and the stability conditions are (4.6) and
$$w_0 < \cos(2\eta\lambda_n(A_0)).$$
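In the Hermitian case the whole test thus reduces to two scalar inequalities. The following sketch checks them for hypothetical data (an assumed largest eigenvalue, delay bound and oscillation sum; none of these numbers come from the text):

```python
import numpy as np

lam_max = 0.5                          # largest eigenvalue of a hypothetical A0 = A0*
eta, w0 = 0.6, 0.2                     # hypothetical eta and w0; var(mu) = 1 as in (4.4)
cond_46 = eta * lam_max < np.pi / 4.0  # condition (4.6)
cond_w0 = w0 < np.cos(2.0 * eta * lam_max)   # w0 < cos(2*eta*lambda_n(A0))
stable = cond_46 and cond_w0           # both hold => (4.1) asymptotically stable
```

Here $\eta\lambda_n = 0.3 < \pi/4$ and $\cos(0.6) \approx 0.825 > w_0$, so both conditions hold.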

7.5 Applications of regularized determinants

In this section we are going to show that regularized determinants can be useful in the investigation of periodic equations. For simplicity we restrict ourselves to the scalar equation
$$\dot x(t) + bx(t) + a(t)x(t-h) = 0 \quad (0 < h < \infty;\ t \ge 0), \tag{5.1}$$
where $a(t)$ is a real $2\pi$-periodic piecewise continuous scalar function and $b \ne 0$ is a real constant.

In the scalar case $PF$ is the Hilbert space of $2\pi$-periodic scalar functions defined on the real axis with the scalar product
$$(f,u)_{PF} := \sum_{k=-\infty}^\infty f_k\bar u_k \quad (f,u \in PF),$$
where $f_k, u_k$ are the Fourier coefficients of $f$ and $u$, respectively. The norm in $PF$ is
$$\|f\|_{PF} = \Big(\sum_{k=-\infty}^\infty |f_k|^2\Big)^{1/2}.$$
The Parseval equality yields
$$\|f\|_{PF} = \Big[\frac{1}{2\pi}\int_a^{a+2\pi}|f(t)|^2\,dt\Big]^{1/2} \quad (f \in PF;\ a \in \mathbb{R}).$$
In addition, $DPF$ is the subspace of $2\pi$-periodic functions $f$ whose Fourier coefficients satisfy the condition
$$\sum_{k=-\infty}^\infty k^2|f_k|^2 < \infty.$$
Substituting
$$x(t) = e^{\lambda t}v(t) \tag{5.2}$$
into (5.1), we have
$$\dot v(t) + \lambda v(t) + bv(t) + a(t)e^{-h\lambda}v(t-h) = 0. \tag{5.3}$$
Besides,
$$v(t) = v(t+2\pi). \tag{5.4}$$
Let $v_k$ and $a_k$, $k=0,\pm1,\dots$, be the Fourier coefficients of $v(t)$ and $a(t)$, respectively:
$$v(t) = \sum_{k=-\infty}^\infty v_k e^{ikt} \quad \text{and} \quad a(t) = \sum_{k=-\infty}^\infty a_k e^{ikt}. \tag{5.5}$$
Since $a(t)$ is real and piecewise continuous, we have
$$\sum_{j=-\infty}^\infty |a_j|^2 < \infty. \tag{5.6}$$
Substituting (5.5) into equation (5.3), we obtain
$$(ij+\lambda+b)v_j + \sum_{k=-\infty}^\infty a_{j-k}e^{-(\lambda+ik)h}v_k = 0.$$
Hence,
$$v_j + \frac{1}{ij+\lambda+b}\sum_{k=-\infty}^\infty a_{j-k}e^{-(\lambda+ik)h}v_k = 0 \quad (j=0,\pm1,\dots).$$
Rewrite this system as
$$(I+Z(\lambda))\hat v = 0 \quad (\hat v = (v_k)_{k=-\infty}^\infty), \tag{5.7}$$
where $Z(\lambda) = (Z_{jk}(\lambda))_{j,k=-\infty}^\infty$ is the infinite matrix with the entries
$$Z_{jk}(\lambda) = \frac{a_{j-k}e^{-(\lambda+ik)h}}{ij+\lambda+b}.$$
For a fixed real $\omega$ we have
$$N_2^2(Z(i\omega)) = \sum_{j=-\infty}^\infty\sum_{k=-\infty}^\infty |Z_{jk}(i\omega)|^2 = \sum_{j=-\infty}^\infty\sum_{k=-\infty}^\infty \frac{|a_{j-k}e^{-(i\omega+ik)h}|^2}{|i(j+\omega)+b|^2} \le \nu^2(\omega),$$
where
$$\nu^2(\omega) := \sum_{j=-\infty}^\infty\sum_{k=-\infty}^\infty \frac{|a_{j-k}|^2}{(j+\omega)^2+b^2}.$$
This series converges according to (5.6). Let $\lambda_k(z)$ be the eigenvalues of $Z(z)$. Then
$$\det{}_2(I+Z(z)) = \prod_{k=1}^\infty (1+\lambda_k(z))e^{-\lambda_k(z)}.$$
As is shown in Section 1.10,
$$|\det{}_2(I+Z(i\omega))| \le \exp\frac{\nu^2(\omega)}{2}.$$
We thus have established the following result.

Lemma 7.5.1. Let condition (5.6) hold and let $\det_2(I+Z(i\omega)) \ne 0$ for all real $\omega$. Then the problem (5.3), (5.4) does not have characteristic values on the imaginary axis.
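For a numerical probe one can truncate $Z(i\omega)$ to $|j|,|k| \le M$ and evaluate $\det_2$ through the eigenvalues. A sketch with hypothetical data $a(t) = 0.4\cos t$ (so $a_{\pm 1} = 0.2$), $b = 1$, $h = 0.5$:

```python
import numpy as np

b, h, omega, M = 1.0, 0.5, 0.3, 30
js = np.arange(-M, M + 1)
acoef = {1: 0.2, -1: 0.2}                    # Fourier coefficients of a(t) = 0.4*cos(t)
Z = np.zeros((2 * M + 1, 2 * M + 1), dtype=complex)
for p, j in enumerate(js):
    for q, k in enumerate(js):
        if j - k in acoef:
            Z[p, q] = (acoef[j - k] * np.exp(-(1j * omega + 1j * k) * h)
                       / (1j * j + 1j * omega + b))
lam = np.linalg.eigvals(Z)
det2 = np.prod((1.0 + lam) * np.exp(-lam))   # det_2(I + Z(i*omega)), truncated
nu2 = float(np.sum(np.abs(Z)**2))            # N_2(Z)^2, finite by (5.6)
```

Here $|\det_2|$ obeys the bound $\exp(\nu^2(\omega)/2)$ from the text and stays away from zero, consistent with Lemma 7.5.1 at this sampled $\omega$.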
Let us explore perturbations of (5.1). To this end consider the equation
$$\dot x(t) + \tilde b x(t) + \tilde a(t)x(t-\tilde h) = 0 \quad (0 < \tilde h < \infty;\ t \ge 0), \tag{5.8}$$
where $\tilde a(t)$ is a real $2\pi$-periodic piecewise continuous scalar function and $\tilde b \ne 0$ is a real constant.

Let $\tilde a_k$, $k=0,\pm1,\dots$, be the Fourier coefficients of $\tilde a(t)$. We have
$$\sum_{j=-\infty}^\infty |\tilde a_j|^2 < \infty. \tag{5.9}$$
In this case equation (5.7) takes the form
$$(I+\tilde Z(\lambda))\hat v = 0,$$
where $\tilde Z(\lambda) = (\tilde Z_{jk}(\lambda))_{j,k=-\infty}^\infty$ is the infinite matrix with the entries
$$\tilde Z_{jk}(\lambda) = \frac{\tilde a_{j-k}e^{-(\lambda+ik)\tilde h}}{ij+\lambda+\tilde b}.$$
Then
$$N_2^2(\tilde Z(i\omega)) = \sum_{j=-\infty}^\infty\sum_{k=-\infty}^\infty \big|(ij+i\omega+\tilde b)^{-1}\tilde a_{j-k}e^{-(i\omega+ik)\tilde h}\big|^2 \le \tilde\nu^2(\omega),$$
where
$$\tilde\nu^2(\omega) := \sum_{j=-\infty}^\infty\sum_{k=-\infty}^\infty \frac{|\tilde a_{j-k}|^2}{(j+\omega)^2+\tilde b^2}.$$
Besides,
$$|\det{}_2(I+\tilde Z(i\omega))| \le \exp\Big[\frac{1}{2}\tilde\nu^2(\omega)\Big].$$
Put
$$q(\omega) = N_2(Z(i\omega) - \tilde Z(i\omega)).$$
Then by Corollary 1.11.2 we arrive at the inequality
$$|\det{}_2(I+Z(i\omega)) - \det{}_2(I+\tilde Z(i\omega))| \le \delta_2(\omega),$$
where
$$\delta_2(\omega) := q(\omega)\exp[(1+\nu(\omega)+\tilde\nu(\omega))^2/2].$$
Hence,
$$|\det{}_2(I+\tilde Z(i\omega))| \ge |\det{}_2(I+Z(i\omega))| - \delta_2(\omega).$$
So we have proved the following result.

Corollary 7.5.2. If equation (5.1) is asymptotically stable and
$$|\det{}_2(I+Z(i\omega))| > \delta_2(\omega) \quad (\omega \in \mathbb{R}),$$
then equation (5.8) is also asymptotically stable.

7.6 Comments

The material of this chapter is based on the paper [62].

As a specific case, the problem of the stability investigation of linear periodic systems (LPS) with time delay is of great theoretical and practical interest. The majority of mathematical works in this area are based on investigation of the monodromy operator [69], and are mainly of theoretical nature. An application of the monodromy operator method is based on a solution of special boundary problems for ordinary differential equations and is connected with serious technical difficulties. In connection with that method, approximate approaches are used, which exploit various kinds of averaging, approximation and discretization, as well as truncation of infinite Hill determinants [11, 75]. In the interesting paper [84], devoted to a single-loop linear periodic system with time delay, a new approach is suggested. Namely, using the theory of integral Fredholm equations of the second kind, the authors construct the characteristic function whose roots are the inverses of the multipliers of the considered system. Besides, sufficient stability conditions are given, which are based on an approximate representation of the characteristic function in the form of a polynomial.

In the present chapter we describe an alternative approach to the stability problem for a multivariable LPS with distributed delay, which is based on recent results for infinite block matrices and regularized determinants.
Chapter 8

Linear Equations with Oscillating Coefficients

In the present chapter we investigate vector and scalar linear equations with
"quickly" oscillating coefficients.

8.1 Vector equations with oscillating coefficients


The present section is devoted to the following equation in Cn:

    ẋ(t) = A(t) ∫_0^η dR₀(s)x(t − s)  (t ≥ 0)    (1.1)

where A(t) is a variable piece-wise continuous n × n-matrix bounded on [0, ∞);
R₀(τ) = (rjk(τ))_{j,k=1}^n is an n × n-matrix-valued function defined on a finite
segment [0, η], whose entries have bounded variations.
In this section we do not require that the characteristic determinant

    det(zI − A(t) ∫_0^η e^{−zs} dR₀(s))

is stable for all t ≥ 0. That is, it can have zeros in the open right half-plane for
some t ≥ 0. Besides, it is assumed that

    A(t) = B + C(t),

where B is a constant matrix, such that the equation

    ẏ(t) = B ∫_0^η dR₀(s)y(t − s)    (1.2)

is exponentially stable, and C(t) is a variable matrix, satisfying the condition

    wC := sup_{t≥0} ‖∫_0^t C(τ)dτ‖n < ∞.

Recall that ‖A‖n is the spectral norm of an n × n matrix A. In addition, in this
section and in the next one L1(0, ∞) = L1([0, ∞); Cn); for a real number a and a
vector function f defined and bounded on [a, ∞) (not necessarily continuous) we
put ‖f‖C(a,∞) = sup_{t≥a} ‖f(t)‖n. Similarly, ‖A‖C(0,∞) = sup_{t≥0} ‖A(t)‖n.
Denote by FB(t) the fundamental solution to (1.2). It is not hard to check that

    FB(t) = (1/2π) ∫_{−∞}^{∞} e^{iyt} K̂B^{−1}(iy) dy,

where

    K̂B(z) = zI − B ∫_0^η e^{−zs} dR₀(s).

Put

    (E₀f)(t) := ∫_0^η dR₀(s)f(t − s).

From Lemma 1.12.1 it follows that there is a constant

    v̂(R₀) ≤ max{√n var(R₀), [∑_{j=1}^n ∑_{k=1}^n (var(rjk))²]^{1/2}},

such that

    ‖E₀f‖C(0,∞) ≤ v̂(R₀)‖f‖C(−η,∞)

and

    ‖E₀f‖L1(0,∞) ≤ v̂(R₀)‖f‖L1(−η,∞).
Now we are in a position to formulate the main result of this chapter.

Theorem 8.1.1. Assume that

    wC < 1 / (v̂(R₀)[1 + v̂(R₀)‖FB‖L1(0,∞)(‖B‖n + ‖A‖C(0,∞))]).    (1.3)

Then equation (1.1) is exponentially stable.

This theorem is proved in the next section. It is sharp. Namely, if A(t) ≡ B,
then wC = 0 and condition (1.3) is automatically fulfilled. About the estimates
for ‖FB‖L1(0,∞) see for instance Section 4.4. Furthermore, let

    A(t) = B + C₀(ωt)  (ω > 0)    (1.4)



with a piece-wise continuous matrix C₀(t), such that

    ν₀ := sup_{t≥0} ‖∫_0^t C₀(s)ds‖n < ∞.    (1.5)

Then we have

    ‖A‖C(0,∞) ≤ ‖B‖n + ‖C₀‖C(0,∞)

and

    wC = sup_{t≥0} ‖∫_0^t C₀(ωs)ds‖n = ν₀/ω.    (1.6)

For example, if C₀(t) = sin(t) C₁, with a constant matrix C₁, then ν₀ = 2‖C₁‖n.
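The value ν₀ = 2‖C₁‖n in this example can be checked numerically: for C₀(t) = sin(t)C₁ one has ∫_0^t C₀(s)ds = (1 − cos t)C₁, so the supremum of its spectral norm is 2‖C₁‖n. A minimal sketch (the matrix C₁ below is our own sample, not from the book):

```python
import numpy as np

# For C0(t) = sin(t) * C1 the integral over [0, t] is (1 - cos t) * C1,
# so nu0 = sup_t |1 - cos t| * ||C1||_n = 2 * ||C1||_n.
C1 = np.array([[0.0, 1.0], [-0.5, 0.2]])   # an illustrative constant matrix
norm_C1 = np.linalg.norm(C1, 2)            # spectral norm ||C1||_n

ts = np.linspace(0.0, 20.0, 200001)
vals = np.abs(1.0 - np.cos(ts)) * norm_C1  # ||integral of C0 over [0, t]||_n
nu0_numeric = vals.max()

assert abs(nu0_numeric - 2.0 * norm_C1) < 1e-6
print(nu0_numeric, 2.0 * norm_C1)
```

The grid is fine enough to hit the maximizer t = π of |1 − cos t| to high accuracy.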
Theorem 8.1.1 implies our next result.
Corollary 8.1.2. Assume that the conditions (1.4), (1.5) and

ω > ν0 v̂ (R0 )[1 + v̂ (R0 )kFB kL1 (0,∞) (2kBkn + kC0 kC(0,∞) )] (1.7)

hold. Then equation (1.1) is exponentially stable.


To illustrate Theorem 8.1.1, consider the system

    ẋ(t) = A(t) ∫_0^η x(t − τ)dμ(τ),    (1.8)

where μ is a scalar nondecreasing function with var(μ) < ∞. Consider the equation

    ẋ(t) = B ∫_0^η x(t − τ)dμ(τ),    (1.9)

assuming that B is a negative definite Hermitian n × n-matrix. Again C(t) =
A(t) − B. Put

    Eμ w(t) = ∫_0^η w(t − τ)dμ(τ)

for a scalar function w(t). Reduce equation (1.9) to the diagonal form:

ẋj (t) = λj (B)Eμ xj (t) (j = 1, ..., n), (1.10)

where λj (B) are the eigenvalues of B with their multiplicities. Let Xj (t) be the
fundamental solution of the scalar equation (1.10). Assume that
    Jj := ∫_0^∞ |Xj(t)|dt < ∞  (j = 1, ..., n).    (1.11)

Then the fundamental solution F̂(t) to (1.9) satisfies the equality

    ‖F̂‖L1(0,∞) = max_{j=1,...,n} Jj.

Moreover,

    ‖E₀f‖C(0,∞) ≤ var(μ)‖f‖C(−η,∞)

and

    ‖E₀f‖L1(0,∞) = ∫_0^∞ ‖∫_0^η f(t − τ)dμ(τ)‖n dt ≤ ∫_0^η ∫_0^∞ ‖f(t − τ)‖n dt dμ(τ) ≤ var(μ)‖f‖L1(−η,∞).
Now Theorem 8.1.1 implies
Theorem 8.1.3. Let B be a negative definite Hermitian matrix and the conditions
(1.11) and

    wC < 1 / (var(μ)(1 + var(μ)Jj(‖B‖n + ‖A‖C(0,∞))))  (j = 1, ..., n)

hold. Then equation (1.8) is exponentially stable.
If A(t) has the form (1.4) and condition (1.11) holds, then, clearly, Corollary
8.1.2 is valid with R₀(·) = μ(·)I and ‖F̂‖L1(0,∞) = max_j Jj.
Furthermore, assume that

    max_{k=1,...,n} |λk(B)| < 1/(e var(μ) η),    (1.12)

and put

    ρ₀(B) := min_{k=1,...,n} |λk(B)|.

Then by Lemma 4.6.5 we have

    Jj ≤ 1/(var(μ) ρ₀(B))  (1 ≤ j ≤ n).
Now Theorem 8.1.3 implies
Corollary 8.1.4. Let B be a negative definite Hermitian matrix and the conditions
(1.12) and

    wC < ρ₀(B) / (var(μ)(ρ₀(B) + ‖B‖n + ‖A‖C(0,∞)))

hold. Then equation (1.8) is exponentially stable.
hold. Then equation (1.8) is exponentially stable.
Now consider the equation

    ẋ(t) = (B + C₀(ωt)) ∫_0^η x(t − τ)dμ(τ).    (1.13)

So A(t) has the form (1.4). Then the previous corollary and (1.6) imply

Corollary 8.1.5. Assume that (1.4) and (1.5) hold and

    ρ₀(B)ω > ν₀ var(μ)(ρ₀(B) + 2‖B‖n + ‖C₀‖C(0,∞)).

Then equation (1.13) is exponentially stable.

8.2 Proof of Theorem 8.1.1


For simplicity put FB(t) = F(t) and recall that

    (E₀f)(t) = ∫_0^η dR₀(s)f(t − s).

So (1.2) can be written as

    ẏ(t) = B(E₀y)(t),  t ≥ 0.

Due to the Variation of Constants Formula, the equation

ẋ(t) = B(E0 x)(t) + f (t) (t ≥ 0)

with a given function f and the zero initial condition x(t) = 0 (t ≤ 0) is equivalent
to the equation
    x(t) = ∫_0^t F(t − s)f(s)ds.    (2.1)

Let G(t, s) (t ≥ s ≥ 0) be the fundamental solution to (1.1). Put G(t, 0) = G(t).


Subtracting (1.2) from (1.1) we have

d
(G(t) − F (t)) = B((E0 G)(t) − (E0 F )(t)) + C(t)(E0 G)(t).
dt
Now (2.1) implies
    G(t) = F(t) + ∫_0^t F(t − s)C(s)(E₀G)(s)ds.    (2.2)

We need the following simple lemma.


Lemma 8.2.1. Let f(t), u(t) and v(t) be vector functions defined on a finite segment
[a, b] of the real axis. Assume that f(t) and v(t) are boundedly differentiable and
u(t) is integrable on [a, b]. Then with the notation

    ju(t) = ∫_a^t u(s)ds  (a < t ≤ b),

the equality

    ∫_a^t f(s)u(s)v(s)ds = f(t)ju(t)v(t) − ∫_a^t [f′(s)ju(s)v(s) + f(s)ju(s)v′(s)]ds

is valid.

Proof. Clearly,

    (d/dt)[f(t)ju(t)v(t)] = f′(t)ju(t)v(t) + f(t)u(t)v(t) + f(t)ju(t)v′(t).

Integrating this equality and taking into account that ju(a) = 0, we arrive at the
required result. 
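Lemma 8.2.1 is a product-rule (integration-by-parts) identity and is easy to test numerically in the scalar case. The sketch below uses illustrative choices of our own, f(s) = cos s, u(s) = s², v(s) = e^{−s} on [0, 1], and compares both sides by the trapezoidal rule:

```python
import numpy as np

# Scalar check of Lemma 8.2.1 with sample functions (not from the book):
# f(s) = cos s, u(s) = s**2, v(s) = exp(-s) on [0, 1].
a, t, N = 0.0, 1.0, 200001
s = np.linspace(a, t, N)
ds = s[1] - s[0]
trap = lambda y: ((y[1:] + y[:-1]) / 2.0).sum() * ds   # trapezoidal rule

f, u, v = np.cos(s), s**2, np.exp(-s)
fp, vp = -np.sin(s), -np.exp(-s)                       # f', v'
# j_u(s) = integral of u over [a, s], built from cumulative trapezoids
ju = np.concatenate([[0.0], np.cumsum((u[1:] + u[:-1]) / 2.0 * ds)])

lhs = trap(f * u * v)
rhs = f[-1] * ju[-1] * v[-1] - trap(fp * ju * v + f * ju * vp)
assert abs(lhs - rhs) < 1e-8
print(lhs, rhs)
```

Both sides agree to within the quadrature error, as the identity predicts.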

Put

    J(t) := ∫_0^t C(s)ds.

By the previous lemma,

    ∫_0^t F(t − τ)C(τ)(E₀G)(τ)dτ = F(0)J(t)(E₀G)(t) − ∫_0^t [(dF(t − τ)/dτ) J(τ)(E₀G)(τ) + F(t − τ)J(τ) (d(E₀G)(τ)/dτ)] dτ.

But F(0) = I,

    d(E₀G)(τ)/dτ = (d/dτ) ∫_0^η dR₀(s)G(τ − s) = ∫_0^η dR₀(s) (d/dτ)G(τ − s) = ∫_0^η dR₀(s)A(τ − s)(E₀G)(τ − s)

and

    dF(t − τ)/dτ = −dF(t − τ)/dt = −B(E₀F)(t − τ).

Thus,

    ∫_0^t F(t − τ)C(τ)(E₀G)(τ)dτ = J(t)(E₀G)(t) + ∫_0^t [B(E₀F)(t − τ)J(τ)(E₀G)(τ) − F(t − τ)J(τ) ∫_0^η dR₀(s)A(τ − s)(E₀G)(τ − s)] dτ.
0
Now (2.2) implies

Lemma 8.2.2. The following equality is true:

    G(t) = F(t) + J(t)(E₀G)(t) + ∫_0^t [B(E₀F)(t − τ)J(τ)(E₀G)(τ) − F(t − τ)J(τ) ∫_0^η dR₀(s)A(τ − s)(E₀G)(τ − s)] dτ.
0

Take into account that

    ‖E₀G‖C(0,∞) ≤ v̂(R₀)‖G‖C(0,∞),  ‖E₀F‖C(0,∞) ≤ v̂(R₀)‖F‖C(0,∞)

and

    ‖E₀F‖L1(0,∞) ≤ v̂(R₀)‖F‖L1(0,∞).

Then from the previous lemma, the inequality

    ‖G‖C(0,∞) ≤ ‖F‖C(0,∞) + κ‖G‖C(0,∞)

follows with

    κ := wC v̂(R₀)(1 + v̂(R₀)(‖B‖n + ‖A‖C(0,∞))‖F‖L1(0,∞)).

If condition (1.3) holds, then κ < 1, and therefore,

    ‖G‖C(0,∞) = ‖G(·, 0)‖C(0,∞) ≤ ‖F‖C(0,∞) / (1 − κ).    (2.3)
Replacing zero by s, we get the same bound for ‖G(·, s)‖C(s,∞). Now Corollary
3.3.1 proves the stability of (1.1). Substituting

    x_ε(t) = e^{εt} x(t)    (2.4)

with ε > 0 into (1.1), we have the equation

    ẋ_ε(t) = εx_ε(t) + A(t) ∫_0^η e^{εs} dR₀(s)x_ε(t − s).    (2.5)

If ε > 0 is sufficiently small, then considering (2.5) as a perturbation of the equation

    ẏ(t) = εy(t) + B ∫_0^η e^{εs} dR₀(s)y(t − s)

and applying our above arguments, according to (2.3), we obtain ‖x_ε‖C(0,∞) < ∞
for any solution x_ε of (2.5). Hence (2.4) implies

    ‖x(t)‖n ≤ e^{−εt} ‖x_ε‖C(0,∞)  (t ≥ 0)

for any solution x of (1.1), as claimed. 

8.3 Scalar equations with several delays


In the present section, in the case of scalar equations, we partially generalize
the results of the previous section to equations with several delays. Namely, this
section deals with the equation

    ẋ(t) + ∑_{j=1}^m aj(t) ∫_0^h x(t − s)drj(s) = 0  (t ≥ 0; h = const > 0),    (3.1)

where rj (s) are nondecreasing functions having finite variations var(rj ), and aj (t)
are piece-wise continuous real functions bounded on [0, ∞).
In the present section we do not require that aj(t) are positive for all t ≥ 0.
So the function

    z + ∑_{j=1}^m aj(t) ∫_0^h e^{−zs} drj(s)

can have zeros in the right half-plane for some t ≥ 0.


Let

    aj(t) = bj + cj(t)  (j = 1, ..., m),

where bj are positive constants, such that all the zeros of the function

    k(z) := z + ∑_{j=1}^m bj ∫_0^h e^{−zs} drj(s)

are in the open left half-plane, and the functions cj(t) have the property

    wj := sup_{t≥0} |∫_0^t cj(s)ds| < ∞  (j = 1, ..., m).

The function

    W(t) = (1/2πi) ∫_{−i∞}^{i∞} e^{zt} dz/k(z)

is the fundamental solution to the equation

    ẏ(t) = −∑_{j=1}^m bj ∫_0^h y(t − s)drj(s).    (3.2)

Without loss of generality assume that

    var(rj) = 1  (j = 1, ..., m).

In this section and in the next one, L1 = L1(0, ∞) is the space of real scalar
functions integrable on [0, ∞). So

    ‖W‖L1 = ∫_0^∞ |W(t)|dt.

For a scalar function f defined and bounded on [0, ∞) (not necessarily continuous)
we put ‖f‖C = sup_{t≥0} |f(t)|.
Theorem 8.3.1. Let

    (∑_{j=1}^m wj) [1 + ‖W‖L1 ∑_{k=1}^m (bk + ‖ak‖C)] < 1.    (3.3)

Then equation (3.1) is exponentially stable.



This theorem is proved in the next section. It is sharp. Namely, if aj(t) ≡ bj
(j = 1, ..., m), then wj = 0 and condition (3.3) is automatically fulfilled.
Let

    r̂(s) = ∑_{j=1}^m bj rj(s).

Then (3.2) takes the form

    ẏ(t) = −∫_0^h y(t − s)dr̂(s).    (3.4)

Besides,

    var(r̂) = ∑_{j=1}^m bj var(rj) = ∑_{j=1}^m bj.

For instance, let

    eh ∑_{j=1}^m bj = eh var(r̂) < 1.    (3.5)

Then W(t) ≥ 0 and equation (3.2) is exponentially stable, cf. Section 4.6. Now,
integrating (3.2), we have

    1 = W(0) = ∫_0^∞ ∫_0^h W(t − s)dr̂(s) dt = ∫_0^h ∫_0^∞ W(t − s) dt dr̂(s) = ∫_0^h ∫_{−s}^∞ W(t) dt dr̂(s) = ∫_0^h ∫_0^∞ W(t) dt dr̂(s) = var(r̂)‖W‖L1.

So

    ‖W‖L1 = 1 / ∑_{k=1}^m bk.    (3.6)
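The identity (3.6) is easy to illustrate numerically. The sketch below (a toy example of our own: a single point delay, Ẇ(t) = −bW(t − h) with ebh < 1) integrates the fundamental solution by the explicit Euler method and compares its L1 norm with 1/b:

```python
import numpy as np

# Toy check of (3.6): W'(t) = -b*W(t - h), W(0) = 1, W(t) = 0 for t < 0,
# with e*b*h < 1, so W >= 0 and the L1 norm of W equals 1/b.
b, h = 0.2, 1.0            # e*b*h ~ 0.54 < 1, so (3.5) holds
dt, T = 1e-3, 200.0
n, d = int(T / dt), int(h / dt)
W = np.zeros(n + 1)
W[0] = 1.0
for k in range(n):
    delayed = W[k - d] if k >= d else 0.0   # W(t - h), zero prehistory
    W[k + 1] = W[k] - dt * b * delayed      # explicit Euler step
L1 = W.sum() * dt
assert W.min() > -1e-9                      # positivity of W
assert abs(L1 - 1.0 / b) < 0.05             # L1 norm ~ 1/b = 5
print(L1)
```

The tolerance absorbs both the Euler discretization error and the truncation of the time horizon.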

Thus, Theorem 8.3.1 implies


Corollary 8.3.2. Let the conditions (3.5) and

    ∑_{j=1}^m wj < (∑_{k=1}^m bk) / (∑_{k=1}^m (2bk + ‖ak‖C))    (3.7)

hold. Then equation (3.1) is exponentially stable.



Furthermore, let

    aj(t) = bj + uj(ωj t)  (ωj > 0; j = 1, ..., m)    (3.8)

with piece-wise continuous functions uj(t), such that

    νj := sup_{t} |∫_0^t uj(s)ds| < ∞.

Then we have ‖aj‖C ≤ bj + ‖uj‖C and

    wj = sup_{t} |∫_0^t uj(ωj s)ds| = νj/ωj.

For example, if uj(t) = sin(t), then νj = 2. Now Theorem 8.3.1 and (3.7) imply
our next result.
Corollary 8.3.3. Let the conditions (3.5), (3.8) and

    ∑_{j=1}^m νj/ωj < (∑_{k=1}^m bk) / (∑_{k=1}^m (3bk + ‖uk‖C))    (3.9)

hold. Then equation (3.1) is exponentially stable.


Example 8.3.4. Consider the equation

    ẋ = −∑_{j=1}^m (bj + τj sin(ωj t)) ∫_0^1 x(t − s)dj(s)ds  (τj = const > 0),    (3.10)

where dj(s) are positive and bounded on [0, 1] functions, satisfying the condition

    ∫_0^1 dj(s)ds = 1.

Assume that (3.5) holds with h = 1. Then νj = 2τj and condition (3.9) takes the
form

    ∑_{j=1}^m 2τj/ωj < (∑_{k=1}^m bk) / (∑_{k=1}^m (3bk + τk)).

So for arbitrary τj, there are ωj, such that equation (3.10) is exponentially stable.
In particular, consider the equation

    ẋ = −(b + τ₀ sin(ωt)) x(t − 1)  (b < e^{−1}; τ₀, ω = const > 0).    (3.11)

Then according to condition (3.9), for any τ₀ there is an ω, such that equation
(3.11) is exponentially stable.
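The behavior of (3.11) can be observed by direct simulation. In the sketch below the parameters b = 0.3 < e^{−1}, τ₀ = 1, ω = 50 are our own sample choice; for them condition (3.3) holds (w₁ = 2/ω = 0.04, ‖W‖L1 = 1/b, ‖a₁‖C ≤ b + τ₀), so decay is expected:

```python
import numpy as np

# Euler simulation of x'(t) = -(b + tau0*sin(omega*t)) * x(t - 1)
# with sample parameters satisfying the stability condition (3.3).
b, tau0, omega = 0.3, 1.0, 50.0
dt, T = 1e-3, 40.0
n, d = int(T / dt), int(1.0 / dt)        # d = number of steps in one delay
x = np.empty(n + 1)
x[0] = 1.0                               # constant initial history x = 1 on [-1, 0]
for k in range(n):
    t = k * dt
    delayed = x[k - d] if k >= d else 1.0       # x(t - 1), constant prehistory
    x[k + 1] = x[k] - dt * (b + tau0 * np.sin(omega * t)) * delayed
assert abs(x[-1]) < 0.1 * abs(x[0])      # the solution has decayed
print(x[-1])
```

Note that the oscillating coefficient b + sin(50t) takes negative values, so positivity-based tests do not apply here; only the averaging effect of the fast oscillation yields stability.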

Furthermore, assume that

    eh ∑_{j=1}^m bj < ξ  (1 < ξ < 2)    (3.12)

and consider the equation

    ẏ(t) + ∑_{j=1}^m b̃j ∫_0^h y(t − s)drj(s) = 0,    (3.13)

where b̃j = bj /ξ. Let W̃ be the fundamental solution to the equation (3.13).
Subtracting (3.13) from (3.2), we obtain
    (d/dt)(W(t) − W̃(t)) + ∑_{j=1}^m b̃j ∫_0^h (W(t − s) − W̃(t − s))drj(s) = −∑_{j=1}^m (bj − b̃j) ∫_0^h W(t − s)drj(s).

Due to the Variation of Constants formula,

    W(t) − W̃(t) = −∫_0^t W̃(t − τ) ∑_{j=1}^m (bj − b̃j) ∫_0^h W(τ − s)drj(s)dτ.

Hence, taking into account that var(rj) = 1, by simple calculations we get

    ‖W − W̃‖L1 ≤ ‖W̃‖L1 ‖W‖L1 ∑_{j=1}^m (bj − b̃j).

If

    ψ := ‖W̃‖L1 ∑_{j=1}^m (bj − b̃j) < 1,

then

    ‖W‖L1 ≤ ‖W̃‖L1 / (1 − ψ).

But condition (3.12) implies (3.5) with b̃k instead of bk. So according to (3.6) we
have

    ‖W̃‖L1 = 1 / ∑_{k=1}^m b̃k = ξ / ∑_{k=1}^m bk.

Consequently, ψ = ξ − 1 and

    ‖W‖L1 ≤ ‖W̃‖L1 / (2 − ξ).
Thus we have proved the following result.

Lemma 8.3.5. Let conditions (3.12) and var(rj) = 1 (j = 1, ..., m) hold. Then

    ‖W‖L1 ≤ ξ / ((2 − ξ) ∑_{k=1}^m bk).

Now we can directly apply Theorem 8.3.1.

8.4 Proof of Theorem 8.3.1


Due to the Variation of Constants Formula the equation

    ẋ(t) = −∑_{j=1}^m bj ∫_0^h x(t − s)drj(s) + f(t)  (t > 0)

with a given function f and the zero initial condition

    x(t) = 0  (t ≤ 0)

is equivalent to the equation

    x(t) = ∫_0^t W(t − τ)f(τ)dτ.    (4.1)

Recall that a function G(t, τ) (t ≥ τ ≥ 0), differentiable in t, is the fundamental
solution to (3.1) if it satisfies that equation in t and the initial conditions

    G(τ, τ) = 1,  G(t, τ) = 0  (t < τ, τ ≥ 0).

Put G(t, 0) = G(t). Subtracting (3.2) from (3.1) we have

    (d/dt)(G(t) − W(t)) = −∑_{j=1}^m bj ∫_0^h (G(t − s) − W(t − s))drj(s) − ∑_{j=1}^m cj(t) ∫_0^h G(t − s)drj(s).    (4.2)

Now (4.1) implies

    G(t) = W(t) − ∫_0^t W(t − τ) ∑_{j=1}^m cj(τ) ∫_0^h G(τ − s)drj(s) dτ.    (4.3)

Put

    Jj(t) := ∫_0^t cj(s)ds.

By Lemma 8.2.1 we obtain the equality

    ∫_0^t W(t − τ)cj(τ)G(τ − s)dτ = W(0)Jj(t)G(t − s) − ∫_0^t [(dW(t − τ)/dτ) Jj(τ)G(τ − s) + W(t − τ)Jj(τ) (dG(τ − s)/dτ)] dτ.    (4.4)

But

    dG(τ − s)/dτ = −∑_{k=1}^m ak(τ − s) ∫_0^h G(τ − s − s₁)drk(s₁)

and

    dW(t − τ)/dτ = −dW(t − τ)/dt = ∑_{j=1}^m bj ∫_0^h W(t − τ − s₁)drj(s₁) = ∫_0^h W(t − τ − s₁)dr̂(s₁).

Thus,

    ∫_0^t W(t − τ)cj(τ)G(τ − s)dτ = Zj(t, s),

where

    Zj(t, s) := Jj(t)G(t − s) + ∫_0^t Jj(τ) [−∫_0^h W(t − τ − s₁)dr̂(s₁) G(τ − s) + W(t − τ) ∑_{k=1}^m ak(τ − s) ∫_0^h G(τ − s − s₁)drk(s₁)] dτ.

Now (4.3) implies

Lemma 8.4.1. The equality

    G(t) = W(t) − ∫_0^h ∑_{j=1}^m Zj(t, s)drj(s)

is true.
We have

    sup_{t≥0} |Zj(t, s)| ≤ wj ‖G‖C [1 + ∫_0^t ∫_0^h |W(t − τ − s₁)|dr̂(s₁)dτ + ∑_{k=1}^m ‖ak‖C ∫_0^t |W(t − τ)|dτ].

But

    ∫_0^h ∫_0^t |W(t − τ − s)|dτ dr̂(s) = ∫_0^h ∫_{−s}^{t−s} |W(τ)|dτ dr̂(s) ≤ var(r̂) ∫_0^∞ |W(τ)|dτ.

Thus,

    ‖Zj(·, s)‖C ≤ wj ‖G‖C (1 + ∑_{k=1}^m (bk + ‖ak‖C)‖W‖L1).

From the previous lemma we get ‖G‖C ≤ ‖W‖C + γ‖G‖C, where

    γ := (∑_{j=1}^m wj) [1 + ∑_{k=1}^m (bk + ‖ak‖C)‖W‖L1].

Condition (3.3) means that γ < 1. We thus have proved the following result.

Lemma 8.4.2. Let condition (3.3) hold. Then

    ‖G‖C ≤ ‖W‖C / (1 − γ).    (4.5)

The previous lemma implies the stability of (3.1). Substituting

    x_ε(t) = e^{εt} x(t)    (4.6)

with ε > 0 into (3.1), we have the equation

    ẋ_ε(t) = εx_ε(t) − ∑_{k=1}^m ak(t) ∫_0^h e^{εs} x_ε(t − s)drk(s).    (4.7)

If ε > 0 is sufficiently small, then according to (4.5) we easily obtain that ‖x_ε‖C <
∞ for any solution x_ε of (4.7). Hence (4.6) implies

    |x(t)| ≤ e^{−εt} ‖x_ε‖C  (t ≥ 0)

for any solution x of (3.1), as claimed. 

8.5 Comments
This chapter is based on the papers [54] and [55].
The literature on first order scalar linear functional differential equations
is very rich, cf. [81, 89, 102, 109, 114] and references therein, but mainly the
coefficients are assumed to be positive. The papers [6, 7, 117] are devoted to
stability properties of differential equations with several (not distributed) delays
and an arbitrary number of positive and negative coefficients. In particular, the
papers [6, 7] give us explicit stability tests in the iterative and limit forms. Besides,
the main tool there is the comparison method based on the Bohl-Perron type theorem.
The sharp stability condition for the first order functional-differential equation
with one variable delay was established by A.D. Myshkis (the so-called 3/2-stability
theorem) in his celebrated paper [94]. A similar result was established by J.
Lillo [87]. The 3/2-stability theorem was generalized to nonlinear equations and
equations with unbounded delays in the papers [112, 113, 114]. As Example 8.3.4
shows, Theorem 8.3.1 improves the 3/2-stability theorem in the case of constant
delays and "quickly" oscillating coefficients.
It should be noted that the theory of vector functional differential equations
with oscillating coefficients, in contrast to that of scalar equations, is not sufficiently
developed.
Chapter 9

Linear Equations with Slowly Varying Coefficients

This chapter deals with vector differential-delay equations having slowly varying
coefficients. The main tool in this chapter is the "freezing" method.

9.1 The "freezing" method


Again consider in Cn the equation

    ẏ(t) = ∫_0^η dτ R(t, τ)y(t − τ)  (t ≥ 0),    (1.1)

where R(t, τ) = (rjk(t, τ))_{j,k=1}^n is an n × n-matrix-valued function defined on
[0, ∞) × [0, η] whose entries have uniformly bounded variations in τ. C(a, b) is the
space of continuous vector valued functions, again. It is also assumed that there
is a positive constant q, such that

    ‖∫_0^η dτ (R(t, τ) − R(s, τ))f(t − τ)‖n ≤ q|t − s| ‖f‖C(−η,t)  (f ∈ C(−η, t); t, s ≥ 0).    (1.2)
For example, consider the equation

    ẏ(t) = ∫_0^η A(t, τ)y(t − τ)dτ + ∑_{k=0}^m Ak(t)y(t − hk)  (t ≥ 0; m < ∞)    (1.3)

where 0 = h₀ < h₁ < ... < hm ≤ η are constants, and Ak(t) and A(t, τ) are matrix-
valued functions satisfying the inequality

    ∫_0^η ‖A(t, τ) − A(s, τ)‖n dτ + ∑_{k=0}^m ‖Ak(t) − Ak(s)‖n ≤ q|t − s|  (t, s ≥ 0).    (1.4)

Then we have

    ‖∫_0^η (A(t, τ) − A(s, τ))f(t − τ)dτ + ∑_{k=0}^m (Ak(t) − Ak(s))f(t − hk)‖n ≤ ‖f‖C(−η,t) (∫_0^η ‖A(t, τ) − A(s, τ)‖n dτ + ∑_{k=0}^m ‖Ak(t) − Ak(s)‖n).

So condition (1.2) holds.


To formulate the result, for a fixed s ≥ 0, consider the "frozen" equation

    ẏ(t) = ∫_0^η dτ R(s, τ)y(t − τ)  (t > 0).    (1.5)

Let Gs(t) be the fundamental solution to the autonomous equation (1.5).


Theorem 9.1.1. Let the conditions (1.2) and

    χ := sup_{s≥0} ∫_0^∞ t‖Gs(t)‖n dt < 1/q    (1.6)

hold. Then equation (1.1) is exponentially stable.


This theorem is proved in the next section. Let us establish an estimate for
χ. Recall that

    Ks(z) = zI − ∫_0^η e^{−zτ} dτ R(s, τ).

Put

    α₀ := sup_{s≥0; k=1,2,...} Re zk(Ks),

where zk(Ks) are the characteristic values of Ks; so under our assumptions α₀ < 0.
Since (Ks^{−1}(z))′ = dKs^{−1}(z)/dz is the Laplace transform of −tGs(t), for any
positive c < |α₀|, we obtain

    tGs(t) = −(1/2πi) ∫_{−i∞}^{i∞} e^{zt} (Ks^{−1}(z))′ dz = −(1/2π) ∫_{−∞}^{∞} e^{t(iω−c)} (Ks^{−1}(iω − c))′ dω.

Hence,

    t‖Gs(t)‖n ≤ (e^{−tc}/2π) ∫_{−∞}^{∞} ‖(Ks^{−1}(iω − c))′‖n dω.

But (Ks^{−1}(z))′ = −Ks^{−1}(z)Ks′(z)Ks^{−1}(z) and thus

    (Ks^{−1}(iω − c))′ = −Ks^{−1}(iω − c) (I + ∫_0^η τ e^{−(iω−c)τ} dτ R(s, τ)) Ks^{−1}(iω − c).

Therefore,

    t‖Gs(t)‖n ≤ (Mc,s e^{−tc}/2π) ∫_{−∞}^{∞} ‖Ks^{−1}(iω − c)‖n² dω,

where

    Mc,s = sup_ω ‖I + ∫_0^η τ e^{−(iω−c)τ} dτ R(s, τ)‖n ≤ 1 + e^{cη} vd(R(s, ·)) ≤ 1 + ηe^{cη} var R(s, ·).

Here vd(R(s, ·)) is the spectral norm of the matrix whose entries are ∫_0^η τ |dτ rjk(s, τ)|.
We thus obtain the following result.

Lemma 9.1.2. For any positive c < |α₀| we have

    χ ≤ sup_{s≥0} (Mc,s/2πc) ∫_{−∞}^{∞} ‖Ks^{−1}(iω − c)‖n² dω.

9.2 Proof of Theorem 9.1.1


Again consider the non-homogeneous equation

    ẋ(t) = ∫_0^η dτ R(t, τ)x(t − τ) + f(t)  (t ≥ 0)    (2.1)

with a given f ∈ C(0, ∞) and the zero initial condition

x(t) = 0 (t ≤ 0). (2.2)

For a continuous vector-valued function u defined on [−η, ∞) and a fixed s ≥ 0, put

    (E(s)u)(t) = ∫_0^η dτ R(s, τ)u(t − τ).
Then (2.1) can be written as

ẋ(t) = E(s)x(t) + [E(t) − E(s)]x(t) + f (t).

By the Variation of Constants formula

    x(t) = ∫_0^t Gs(t − t₁)[(E(t₁) − E(s))x(t₁) + f(t₁)]dt₁.    (2.3)

Condition (1.2) gives us the inequality

k[E(t) − E(s)]x(t)kn ≤ q|t − s|kxkC(0,t) (t, s ≥ 0). (2.4)

Note that for an ε > 0, we have

    ∫_0^∞ ‖Gs(t)‖n dt ≤ ∫_0^ε ‖Gs(t)‖n dt + (1/ε) ∫_ε^∞ t‖Gs(t)‖n dt ≤ c₁,

where

    c₁ = sup_{s≥0} ∫_0^ε ‖Gs(t)‖n dt + χ/ε.

Thus

    ‖∫_0^t Gs(t − t₁)f(t₁)dt₁‖n ≤ ‖f‖C(0,∞) ∫_0^t ‖Gs(t₁)‖n dt₁ ≤ c₁‖f‖C(0,∞).

Now (2.4) and (2.3) for a fixed t > 0 imply

    ‖x(t)‖n ≤ c₁‖f‖C(0,∞) + ‖x‖C(0,t) ∫_0^t ‖Gs(t − t₁)‖n q|t₁ − s|dt₁  (t ≥ 0).

Hence with s = t we obtain

    ‖x(t)‖n ≤ c₁‖f‖C(0,∞) + ‖x‖C(0,t) ∫_0^t ‖Gt(t − t₁)‖n q(t − t₁)dt₁.    (2.5)

But

    ∫_0^t ‖Gt(t − t₁)‖n (t − t₁)dt₁ = ∫_0^t ‖Gt(u)‖n u du ≤ ∫_0^∞ ‖Gt(u)‖n u du ≤ sup_{t≥0} ∫_0^∞ ‖Gt(u)‖n u du = χ.

Thus (2.5) implies

    ‖x‖C(0,T) ≤ c₁‖f‖C(0,∞) + ‖x‖C(0,T) χq

for any finite T > 0. Hence, due to condition (1.6), we arrive at the inequality

    ‖x‖C(0,T) ≤ c₁‖f‖C(0,∞) / (1 − χq).

Hence, letting T → ∞ we get

    ‖x‖C(0,∞) ≤ c₁‖f‖C(0,∞) / (1 − qχ).

So for any bounded f the solution of problem (2.1), (2.2) is uniformly bounded.
Now Theorem 3.4.1 proves the required result. 

9.3 Perturbations of certain ordinary differential equations
Now let us consider the equation

    ẋ(t) = A(t)x(t) + ∫_0^η dτ R₁(t, τ)x(t − τ)  (t ≥ 0),    (3.1)

where A(t) for any t ≥ 0 is an n × n Hurwitzian matrix satisfying the condition

    ‖A(t) − A(s)‖n ≤ q₀|t − s|  (t, s ≥ 0),    (3.2)

and R₁(t, τ) is an n × n matrix-valued function defined on [0, ∞) × [0, η] whose
entries have uniformly bounded variations. Put

    E₁u(t) = ∫_0^η dτ R₁(t, τ)u(t − τ)  (u ∈ C(−η, ∞)).

By Lemma 1.12.3, there is a constant V(R₁) such that

    ‖E₁u‖C(0,∞) ≤ V(R₁)‖u‖C(−η,∞).    (3.3)

Besides, some estimates for V(R₁) are given in Section 1.12. Let us point out a
stability criterion which is more convenient than Theorem 9.1.1 in the case of
equation (3.1).

Theorem 9.3.1. Let the conditions (3.2),

    νA := sup_{s≥0} ∫_0^∞ ‖e^{A(s)t}‖n dt < 1/V(R₁)    (3.4)

and

    χ̂₀ := sup_{s≥0} ∫_0^∞ t‖e^{A(s)t}‖n dt < (1 − νA V(R₁))/q₀    (3.5)

hold. Then equation (3.1) is exponentially stable.

This theorem is proved in the next section. For instance, consider the equation

    ẋ(t) = A(t)x(t) + ∑_{k=1}^m Bk(t) ∫_0^η x(t − τ)dμk(τ)  (t ≥ 0)    (3.6)

where μk are nondecreasing scalar functions, and Bk(t) are n × n-matrices with
the properties

    sup_{t≥0} ‖Bk(t)‖n < ∞  (k = 1, ..., m).

Simple calculations show that in the considered case

    V(R₁) ≤ ∑_{k=1}^m sup_s ‖Bk(s)‖n var(μk).

About various estimates for ‖e^{A(s)t}‖n see Sections 2.5 and 2.8.
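Once ‖e^{A(s)t}‖n is computable, the quantities νA and χ̂₀ in (3.4)-(3.5) can be approximated by quadrature. A minimal sketch (the constant Hurwitz matrix A below is our own sample; for a constant matrix q₀ = 0, so this only illustrates the computation), which also checks the scalar identities ∫_0^∞ e^{at}dt = 1/|a| and ∫_0^∞ te^{at}dt = 1/a² for a < 0:

```python
import numpy as np

# Approximate nu_A and chi_0-hat for a sample Hurwitz matrix via
# eigen-decomposition: exp(At) = V diag(exp(lam*t)) V^{-1}.
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
lam, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

dt, T = 1e-3, 30.0
ts = np.arange(0.0, T, dt)
norms = np.array([np.linalg.norm((V * np.exp(lam * t)) @ Vinv, 2) for t in ts])
nu_A = norms.sum() * dt             # ~ integral of ||exp(At)|| over [0, inf)
chi0 = (ts * norms).sum() * dt      # ~ integral of t*||exp(At)||

# Scalar sanity check: for A = (a) with a < 0, nu_A = 1/|a|, chi0 = 1/a^2.
a = -2.0
nu_s = np.exp(a * ts).sum() * dt
chi_s = (ts * np.exp(a * ts)).sum() * dt
assert abs(nu_s - 0.5) < 1e-3 and abs(chi_s - 0.25) < 1e-3
print(nu_A, chi0)
```

The truncation at T = 30 is harmless here since ‖e^{At}‖ decays exponentially.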

9.4 Proof of Theorem 9.3.1


To prove Theorem 9.3.1, consider the equation

    ẋ(t) = A(t)x(t) + ∫_0^η dτ R₁(t, τ)x(t − τ) + f(t)    (4.1)

and denote by U(t, s) (t ≥ s ≥ 0) the evolution operator of the equation

    ẏ(t) = A(t)y(t).    (4.2)

Put

    ξA := sup_{f∈C(0,∞)} (1/‖f‖C(0,∞)) sup_{t≥0} ‖∫_0^t U(t, s)f(s)ds‖n.

Lemma 9.4.1. Let the condition

ξA V (R1 ) < 1 (4.3)

hold. Then any solution of (4.1) with f ∈ C(0, ∞) and the zero initial condition
satisfies the inequality

    ‖x‖C(0,∞) ≤ ξA ‖f‖C(0,∞) / (1 − V(R₁)ξA).

Proof. Equation (4.1) is equivalent to the following one:

    x(t) = ∫_0^t U(t, s)(E₁x(s) + f(s))ds.

Hence,

    ‖x‖C(0,∞) ≤ ξA(‖E₁x‖C(0,∞) + ‖f‖C(0,∞)).

Then, by (3.3), we arrive at the inequality

    ‖x‖C(0,∞) ≤ ξA(V(R₁)‖x‖C(0,∞) + ‖f‖C(0,∞)).

Now condition (4.3) ensures the required result. 

The previous lemma and Theorem 3.4.1 imply

Corollary 9.4.2. Let condition (4.3) hold. Then equation (3.1) is exponentially
stable.

Lemma 9.4.3. Let conditions (3.2) and (3.5) hold. Then

    ξA < νA / (1 − q₀χ̂₀).

Proof. Consider the equation

ẋ(t) = A(t)x(t) + f (t) (4.4)

with the zero initial condition x(0) = 0. Rewrite it as

ẋ(t) = A(s)x(t) + (A(t) − A(s))x(t) + f (t).

Hence

    x(t) = ∫_0^t e^{A(s)(t−t₁)}[(A(t₁) − A(s))x(t₁) + f(t₁)]dt₁.

Take s = t. Then

    ‖x(t)‖n ≤ ∫_0^t ‖e^{A(t)(t−t₁)}‖ ‖(A(t₁) − A(t))x(t₁)‖dt₁ + c₀,

where

    c₀ := sup_{s,t} ∫_0^t ‖e^{A(s)(t−t₁)}‖n ‖f(t₁)‖n dt₁ ≤ ‖f‖C(0,∞) sup_s ∫_0^∞ ‖e^{A(s)t₁}‖n dt₁ ≤ νA‖f‖C(0,∞).
Thus, for any T < ∞, we get

    sup_{t≤T} ‖x(t)‖n ≤ c₀ + q₀ sup_{t≤T} ‖x(t)‖n sup_{t≤T} ∫_0^t ‖e^{A(t)(t−t₁)}‖(t − t₁)dt₁ ≤ c₀ + q₀ sup_{t≤T} ‖x(t)‖n sup_{s≥0} ∫_0^∞ ‖e^{A(s)u}‖u du = c₀ + q₀χ̂₀ sup_{t≤T} ‖x(t)‖n.

By (3.5), we have q₀χ̂₀ < 1. So

    ‖x‖C(0,T) ≤ c₀/(1 − q₀χ̂₀) ≤ νA‖f‖C(0,∞)/(1 − q₀χ̂₀).

Hence, letting T → ∞, we get

    ‖x‖C(0,∞) ≤ c₀/(1 − q₀χ̂₀) ≤ νA‖f‖C(0,∞)/(1 − q₀χ̂₀).

This proves the lemma. 

Proof of Theorem 9.3.1: The required result at once follows from Corollary
9.4.2 and the previous lemma. 

9.5 Comments
The papers [23] and [60] were essentially used in this chapter.
Theorem 9.1.1 extends the "freezing" method for ordinary differential equations,
cf. [12, 76, 107]. Nonlinear systems with delay and slowly varying coefficients
were considered in [29].
Chapter 10

Nonlinear Vector Equations

In the present chapter we investigate nonlinear systems with causal mappings.
The main tools are norm estimates for fundamental solutions. The generalized
norm is also applied. It enables us to use information about a system more
completely than the usual (number) norm.

10.1 Definitions and preliminaries


Let η < ∞ be a positive constant, and R(t, τ) = (rjk(t, τ))_{j,k=1}^n be an n × n-
matrix-valued function defined on [0, ∞) × [0, η], piece-wise continuous in t for
each τ, whose entries have uniformly bounded variations in τ:

    vjk = sup_{t≥0} var rjk(t, ·) < ∞  (j, k = 1, ..., n).

In this chapter, again C([a, b], Cn) = C(a, b) and Lp([a, b], Cn) = Lp(a, b). Our
main object in the present section is the problem

    ẋ(t) = ∫_0^η ds R(t, s)x(t − s) + [Fx](t) + f(t)  (t ≥ 0),    (1.1)

    x(t) = φ(t) ∈ C(−η, 0)  (−η ≤ t ≤ 0),    (1.2)


where f ∈ C(0, ∞) and F is a continuous causal mapping in C(−η, ∞) (see Section
1.8).
Denote

    Ω(%) := {v ∈ C(−η, ∞) : ‖v‖C(−η,∞) ≤ %}

for a positive % ≤ ∞. It is supposed that there is a constant q ≥ 0, such that

    ‖Fw‖C(0,∞) ≤ q‖w‖C(−η,∞)  (w ∈ Ω(%)).    (1.3)

Below we present examples of mappings satisfying condition (1.3).

Lemma 10.1.1. Let F be a continuous causal mapping in C(−η, ∞) and condition
(1.3) hold. Then F is a continuous mapping in C(−η, T) and

    ‖Fw‖C(0,T) ≤ q‖w‖C(−η,T)  (w ∈ Ω(%) ∩ C(−η, T))

for all T > 0.

Proof. Take w ∈ Ω(%) and put

    wT(t) = { w(t) if 0 ≤ t ≤ T;  0 if t > T }

and

    FT w(t) = { (Fw)(t) if 0 ≤ t ≤ T;  0 if t > T }.

Since F is causal, one has FT w = FT wT. Consequently,

    ‖Fw‖C(0,T) = ‖FT w‖C(0,∞) = ‖FT wT‖C(0,∞) ≤ ‖FwT‖C(0,∞) ≤ q‖wT‖C(−η,∞) = q‖w‖C(−η,T).

Furthermore, take v ∈ Ω(%) and put

    vT(t) = { v(t) if 0 ≤ t ≤ T;  0 if t > T },

and

    δ = ‖wT − vT‖C(−η,T)  and  ε = ‖FwT − FvT‖C(−η,∞).

We have ‖wT − vT‖C(−η,∞) = δ. Since F is continuous in Ω(%) and

    ‖FT wT − FT vT‖C(−η,∞) ≤ ‖FwT − FvT‖C(−η,∞),

we prove the continuity of F in Ω(%). This proves the result. 

A (mild) solution of problem (1.1), (1.2) is a continuous function x(t) defined
on [−η, ∞), such that

    x(t) = z(t) + ∫_0^t G(t, t₁)([Fx](t₁) + f(t₁))dt₁  (t ≥ 0),    (1.4a)

    x(t) = φ(t) ∈ C(−η, 0)  (−η ≤ t ≤ 0),    (1.4b)

where G(t, t₁) is the fundamental solution of the linear equation

    ż(t) = ∫_0^η ds R(t, s)z(t − s)    (1.5)

and z(t) is a solution of the problem (1.5), (1.2). Again use the operator

    Ĝf(t) = ∫_0^t G(t, t₁)f(t₁)dt₁  (f ∈ C(0, ∞)).    (1.6)

It is assumed that

    ‖z‖C(−η,∞) + ‖Ĝ‖C(0,∞)(q% + ‖f‖C(0,∞)) < %  if % < ∞,    (1.7a)

or

    q‖Ĝ‖C(0,∞) < 1,  if % = ∞.    (1.7b)
Theorem 10.1.2. Let F be a continuous causal mapping in C(−η, ∞). Let conditions
(1.3) and (1.7) hold. Then problem (1.1), (1.2) has a solution x(t) satisfying
the inequality

    ‖x‖C(−η,∞) ≤ (‖z‖C(−η,∞) + ‖Ĝ‖C(0,∞)‖f‖C(0,∞)) / (1 − q‖Ĝ‖C(0,∞)).    (1.8)

Proof. Take a finite T > 0 and define on ΩT(%) = Ω(%) ∩ C(−η, T) the mapping
Φ by

    Φw(t) = z(t) + ∫_0^t G(t, s)([Fw](s) + f(s))ds  (0 ≤ t ≤ T; w ∈ ΩT(%)),

and

    Φw(t) = φ(t)  for −η ≤ t ≤ 0.

Clearly, Φ maps Ω(%) into C(−η, T). Moreover, by (1.3) and Lemma 10.1.1, we
obtain the inequality

    ‖Φw‖C(−η,T) ≤ max{‖z‖C(0,T) + ‖Ĝ‖C(0,∞)(q‖w‖C(−η,T) + ‖f‖C(0,∞)), ‖φ‖C(−η,0)}.

But

    max{‖z‖C(0,T), ‖φ‖C(−η,0)} = ‖z‖C(−η,T).

So

    ‖Φw‖C(−η,T) ≤ ‖z‖C(−η,T) + ‖Ĝ‖C(0,∞)(q‖w‖C(−η,T) + ‖f‖C(0,∞)).

According to (1.7), Φ maps ΩT(%) into itself. Taking into account that Φ is compact,
we prove the existence of solutions.
Furthermore, we have

    ‖x‖C(−η,T) = ‖Φx‖C(−η,T) ≤ ‖z‖C(−η,T) + ‖Ĝ‖C(0,T)(q‖x‖C(−η,T) + ‖f‖C(0,∞)).


Hence we easily obtain (1.8), completing the proof. 

Note that the Lipschitz condition

    ‖Fw − Fw₁‖C(0,∞) ≤ q‖w − w₁‖C(−η,∞)  (w₁, w ∈ Ω(%))    (1.9)

together with the Contraction Mapping theorem allows us easily to prove the
existence and uniqueness of solutions. Namely, the following result is valid.

Theorem 10.1.3. Let F be a continuous causal mapping in C(−η, ∞). Let conditions
(1.7) and (1.9) hold. Then problem (1.1), (1.2) has a unique solution x ∈ Ω(%).

Note that in our considerations one can put [Fx](t) = 0 for −η ≤ t < 0.

10.2 Stability of quasilinear equations

In the rest of this chapter the uniqueness of solutions is assumed.
Consider the equation

    ẋ(t) = ∫_0^η ds R(t, s)x(t − s) + [Fx](t)  (t ≥ 0).    (2.1)

Recall that any causal mapping satisfies the condition F0 ≡ 0 (see Section 1.8).
Definition 10.2.1. Let F be a continuous causal mapping in C(−η, ∞). Then the
zero solution of (2.1) is said to be stable (in the Lyapunov sense), if for any ε > 0,
there exists a δ > 0, such that the inequality ‖φ‖C(−η,0) ≤ δ implies ‖x‖C(0,∞) ≤ ε
for any solution x(t) of problem (2.1), (1.2).
The zero solution of (2.1) is said to be asymptotically stable, if it is stable,
and there is an open set Ω̃ ⊆ C(−η, 0), such that φ ∈ Ω̃ implies x(t) → 0 as
t → ∞. Besides, Ω̃ is called the region of attraction of the zero solution. If the
zero solution of (2.1) is asymptotically stable and Ω̃ = C(−η, 0), then it is globally
asymptotically stable.
The zero solution of (2.1) is exponentially stable, if there are positive con-
stants ν, m0 and r0 , such that the condition kφkC(−η ,0) ≤ r0 implies the relation

kx(t)kn ≤ m0 kφkC(−η ,0) e−νt (t ≥ 0).


The zero solution of (2.1) is globally exponentially stable if it is exponentially
stable and the region of attraction coincides with C(−η , 0). That is, r0 = ∞.
Theorem 10.2.2. Let the conditions (1.3) and
qkĜkC(0,∞) < 1 (2.2)
hold. Then the zero solution to (2.1) is stable.

This result immediately follows from Theorem 10.1.2. Clearly,

    ‖Ĝ‖C(0,∞) ≤ sup_{t≥0} ∫_0^t ‖G(t, t₁)‖n dt₁.    (2.3)

Thus the previous theorem implies

Corollary 10.2.3. Let the conditions (1.3) and

    sup_{t≥0} ∫_0^t ‖G(t, t₁)‖dt₁ < 1/q

hold. Then the zero solution to (2.1) is stable.


By Theorem 10.1.2, under conditions (1.3) and (2.2) we have the solution estimate

    ‖x‖C(−η,∞) ≤ ‖z‖C(−η,∞) / (1 − q‖Ĝ‖C(0,∞))    (2.4)

provided

    ‖z‖C(−η,∞) < %(1 − q‖Ĝ‖C(0,∞)).

Since (1.5) is assumed to be stable, there is a constant c₀, such that

    ‖z‖C(−η,∞) ≤ c₀‖φ‖C(−η,0).    (2.5)

Thus the inequality

    c₀‖φ‖C(−η,0) ≤ %(1 − q‖Ĝ‖C(0,∞))    (2.6)

gives us a bound for the region of attraction.


Furthermore, if the condition
lim_{kwkC(−η ,∞) →0} kF wkC(0,∞) /kwkC(−η ,∞) = 0 (2.7)

holds, then equation (2.1) will be called a quasilinear equation.


Theorem 10.2.4. (Stability in the linear approximation) Let kĜkC(0,∞) < ∞ and
equation (2.1) be quasilinear. Then the zero solution to equation (2.1) is stable.
Proof. From (2.7) it follows that for any % > 0, there is a q > 0, such that (1.3)
holds, and q = q(%) → 0 as % → 0. Take % in such a way that the condition
qkĜkC(0,∞) < 1 is fulfilled. Now the required result is due to the previous
theorem. 

For instance, assume that


kF w(t)kn ≤ Σ_{k=1}^m ∫_0^η kw(t − s)kn^{pk} dμk (s) (w ∈ C(−η , ∞)), (2.8)

where μk (s) are nondecreasing functions, and pk = const > 1. Then


kF wkC(0,∞) ≤ Σ_{k=1}^m var (μk )kwk^{pk}_{C(−η ,∞)} .

So (2.7) is valid. Moreover, for any % > 0,


kF w(t)kn ≤ Σ_{k=1}^m %^{pk −1} ∫_0^η kw(t − s)kn dμk (s) (w ∈ Ω(%)).

So condition (1.3) holds with


q = q(%) = Σ_{k=1}^m %^{pk −1} var (μk ). (2.9)

Furthermore, consider the following equation with the autonomous linear part.
ẋ(t) = ∫_0^η dR0 (s)x(t − s) + [F x](t), (2.10)

where R0 (τ ) is an n × n-matrix-valued function defined on [0, η ] and having a


bounded variation. Let G0 (t) be the fundamental solution of the equation
ż(t) = ∫_0^η dR0 (s)z(t − s). (2.11)

Put
Ĝ0 f (t) = ∫_0^t G0 (t − t1 )f (t1 )dt1 , (2.12)

for f ∈ C(0, ∞). Theorem 10.2.2 implies that, if the conditions (1.3) and

qkĜ0 kC(0,∞) < 1

hold, then the zero solution to equation (2.10) is stable. Hence we arrive at the
following result.
Corollary 10.2.5. Let the conditions (1.3) and
1
kG0 kL1 (0,∞) < (2.13)
q

hold. Then the zero solution to equation (2.10) is stable.


Recall that some estimates for kG0 kC(0,∞) and kG0 kL1 (0,∞) can be found in
Sections 4.4 and 4.8 (in the general case) and Section 4.7 (in the case of systems
with one delay).

10.3 Absolute Lp -stability


Let F be a continuous causal mapping in Lp (−η , ∞) for some p ≥ 1. That is, F
maps Lp (−η , ∞) into itself continuously in the norm of Lp (−η , ∞), and for all
τ > −η we have Pτ F = Pτ F Pτ , where Pτ are the projections defined by

(Pτ w)(t) = w(t) if −η ≤ t ≤ τ, and (Pτ w)(t) = 0 if τ < t < ∞ (w ∈ Lp (−η , ∞)),

and P∞ = I. Consider equation (2.10) assuming that the inequality

kF wkLp (0,∞) ≤ qp kwkLp (−η ,∞) (w ∈ Lp (−η , ∞)) (3.1)

is fulfilled with a constant qp .


Repeating the proof of Lemma 10.1.1 we arrive at the following result.
Lemma 10.3.1. Let F be a continuous causal mapping in Lp (−η , ∞) for some
p ≥ 1. Let condition (3.1) hold. Then

kF wkLp (0,T ) ≤ qp kwkLp (−η ,T ) (w ∈ Lp (−η , T ))

for all T > 0.


Let Ĝ0 be defined in space Lp (0, ∞) by (2.12).
Theorem 10.3.2. Let F be a continuous causal mapping in Lp (−η , ∞) for some
p ≥ 1. Let the conditions (3.1) and

qp kĜ0 kLp (0,∞) < 1 (3.2)

hold. Then problem (2.10), (1.2) has a solution x(t) ∈ Lp (−η , ∞) satisfying the
inequality
kzkLp (−η ,∞)
kxkLp (−η ,∞) ≤ (3.3)
1 − qp kĜ0 kLp (0,∞)
where z(t) is a solution of the linear problem (2.11), (1.2).
The proof of this theorem is similar to the proof of Theorem 10.1.2 with the
replacement of C(0, T ) by Lp (0, T ).
The Lipschitz condition

kF w − F w1 kLp (0,∞) ≤ qp kw − w1 kLp (−η ,∞) (w1 , w ∈ Lp (0, ∞)) (3.4)

together with the Contraction Mapping theorem also allows us easily to prove the
existence and uniqueness of solutions. Namely, the following result is valid.
Theorem 10.3.3. Let F be a continuous causal mapping in Lp (−η , ∞) for some
p ≥ 1. Let conditions (3.2) and (3.4) hold. Then problem (2.10), (1.2) has a unique
(continuous) solution x ∈ Lp (−η , ∞).

Definition 10.3.4. The zero solution to equation (2.10) is said to be absolutely


Lp -stable in the class of the nonlinearities satisfying (3.1), if there is a positive
constant m0 independent of the specific form of functions F (but dependent on
qp ), such that
kxkLp (−η ,∞) ≤ m0 kφkC(−η ,0) (3.5)
for any solution x(t) of problem (2.10), (1.2).
From Theorem 10.3.2 it follows that the zero solution to equation (2.10) is
absolutely Lp -stable in the class of the nonlinearities satisfying (3.1), provided
condition (3.2) holds.
According to the well-known property of convolutions (see Section 1.3) we
have
kĜ0 kLp (0,∞) ≤ kG0 kL1 (0,∞) (3.6)
We thus arrive at the following result.
Corollary 10.3.5. The zero solution to equation (2.10) is absolutely Lp -stable in
the class of the nonlinearities satisfying (3.1) provided qp kG0 kL1 (0,∞) < 1.
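The bound used in Corollary 10.3.5 rests on Young's inequality for convolutions, kĜ0 f kLp ≤ kG0 kL1 kf kLp. A minimal numerical sketch (the kernel and input below are arbitrary choices for the illustration, not taken from the text):

```python
import numpy as np

# Arbitrary kernel G and input f chosen for the illustration.
rng = np.random.default_rng(0)
dt = 0.01
t = np.arange(0.0, 20.0, dt)

G = np.exp(-t)                      # a decaying scalar "fundamental solution"
f = rng.standard_normal(t.size)     # an arbitrary input

# (Ĝ f)(t) = ∫_0^t G(t - s) f(s) ds, discretized
conv = np.convolve(G, f)[: t.size] * dt

p = 2.0
lp = lambda v: (np.sum(np.abs(v) ** p) * dt) ** (1.0 / p)
l1_G = np.sum(np.abs(G)) * dt

# Young's inequality: ||Ĝ f||_{L^p} <= ||G||_{L^1} ||f||_{L^p}
assert lp(conv) <= l1_G * lp(f) + 1e-9
```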
Recall that
K(z) = Iz − ∫_0^η exp(−zs)dR0 (s) (z ∈ C).

It is assumed that all the characteristic values of K are in C− . So the autonomous


linear equation (2.11) is exponentially stable. According to Lemma 4.4.1 we have
the inequality
kĜ0 kL2 (0,∞) ≤ θ(K),
where
θ(K) := sup_{−2 var(R0 ) ≤ ω ≤ 2 var(R0 )} kK −1 (iω)kn .

So due to Theorem 10.3.2 we get


Corollary 10.3.6. The zero solution to equation (2.10) is absolutely L2 -stable in
the class of the nonlinearities satisfying

kF wkL2 (0,∞) ≤ q2 kwkL2 (−η ,∞) (w ∈ L2 (−η , ∞)), (3.7)

provided
q2 θ(K) < 1. (3.8)
Moreover, any its solution satisfies the inequality

kzkL2 (−η ,∞)


kxkL2 (−η ,∞) ≤ , (3.9)
1 − q2 θ(K)

where z(t) is a solution of the linear problem (2.11), (1.2).



Now we can apply the bounds for θ(K) from Sections 4.3, 4.7 and 4.8.
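The quantity θ(K) is also easy to estimate numerically. The sketch below uses a hypothetical single-delay example (R0 jumps by A0 at s = h, so K(z) = zI − e−zh A0 , and var(R0 ) is taken as kA0 k2 for the illustration); the admissible q2 in (3.8) is then 1/θ(K):

```python
import numpy as np

# Hypothetical single-delay linear part: R0 jumps by A0 at s = h,
# so K(z) = z I - e^{-z h} A0; var(R0) is taken as ||A0||_2 here.
A0 = np.array([[-2.0, 0.3], [0.1, -1.5]])
h = 0.2
var_R0 = np.linalg.norm(A0, 2)

def K(z):
    return z * np.eye(2) - np.exp(-z * h) * A0

# θ(K) = sup ||K^{-1}(iω)|| over -2 var(R0) <= ω <= 2 var(R0)
omegas = np.linspace(-2 * var_R0, 2 * var_R0, 2001)
theta = max(np.linalg.norm(np.linalg.inv(K(1j * w)), 2) for w in omegas)

q2_max = 1.0 / theta   # nonlinearities with q2 < q2_max satisfy (3.8)
assert np.isfinite(theta) and theta > 0
print(theta, q2_max)
```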
The following lemma shows that the notion of the L2 -absolute stability of
(2.10) is stronger than the notion of the asymptotic (absolute) stability.
Lemma 10.3.7. If the zero solution to equation (2.10) is absolutely L2 -stable in
the class of nonlinearities (3.7), then the zero solution to (2.10) is asymptotically
stable.
Proof. Indeed, assume that a solution x of (2.10) is in L2 (0, ∞) and note that from
(2.10) and (3.7) it follows that kẋkL2 (0,∞) ≤ (var(R0 ) + q2 )kxkL2 (−η ,∞) . Thus
kx(t)k2n = −∫_t^∞ (d/ds)kx(s)k2n ds ≤ 2 ∫_t^∞ kx(s)kn kẋ(s)kn ds ≤
2 (∫_t^∞ kx(s)k2n ds)^{1/2} (∫_t^∞ kẋ(s)k2n ds)^{1/2} → 0 as t → ∞.

As claimed. 

10.4 Mappings defined on Ω(%) ∩ L2


In this section we investigate a causal mapping F acting in space L2 (−η , ∞) =
L2 ([−η , ∞), Cn ) and satisfying the condition

kF f kL2 (0,∞) ≤ q2 kf kL2 (−η ,∞) (f ∈ Ω(%) ∩ L2 (−η , ∞)) (4.1)

with a positive constant q2 . Such a condition enables us to derive stability condi-


tions, sharper than (2.2).
For example, let there be a nondecreasing function ν(s) = ν(%, s), defined on
[0, η ], such that
k[F f ](t)kn ≤ ∫_0^η kf (t − s)kn dν(s) (t ≥ 0; f ∈ Ω(%)). (4.2)

Let us show that condition (4.2) implies the inequality

kF f kL2 (0,∞) ≤ var(ν) kf kL2 (−η ,∞) (f ∈ Ω(%) ∩ L2 (−η , ∞)). (4.3)

Indeed, introduce in the space of scalar functions L2 ([−η , ∞); C) the operator Êν
by
(Êν w)(t) = ∫_0^η w(t − τ )dν(τ ) (w ∈ L2 ([−η , ∞); C)).

Then
kÊν wk2L2 (0,∞) = ∫_0^∞ |∫_0^η w(t − τ )dν(τ )|2 dt ≤
∫_0^∞ ∫_0^η |w(t − τ )|dν(τ ) ∫_0^η |w(t − τ1 )|dν(τ1 ) dt.

Hence
kÊν wk2L2 (0,∞) ≤ ∫_0^η ∫_0^η (∫_0^∞ |w(t − τ )||w(t − τ1 )|dt) dν(τ1 ) dν(τ ).

But by the Schwarz inequality


∫_0^∞ |w(t − τ )||w(t − τ1 )|dt ≤ (∫_0^∞ |w(t − τ )|2 dt)^{1/2} (∫_0^∞ |w(t − τ1 )|2 dt)^{1/2} ≤
∫_{−η}^∞ |w(t)|2 dt.

Now simple calculations show that

kÊν wkL2 (0,∞) ≤ var(ν)kwkL2 (−η ,∞) .

Hence (4.2) with w(t) = kf (t)kn implies (4.3).


Theorem 10.4.1. Let F be a continuous causal mapping in L2 (−η , ∞). Let condi-
tions (4.1) and (3.8) hold. Then the zero solution to (2.10) is L2 -stable. Namely,
there are constants c0 ≥ 0 and r0 ∈ (0, %], such that

kxkL2 (−η ,∞) ≤ c0 kφkC(−η ,0) (4.4)

for a solution x(t) of problem (2.10), (1.2) provided

kφkC(−η ,0) < r0 . (4.5)

Proof. First let % = ∞. Since the linear equation (2.11) is L2 -stable, we can write

kzkL2 (−η ,∞) ≤ c1 kφkC(−η ,0) (c1 = const). (4.6)

By (3.9) we thus have (4.4). So in the case % = ∞, the theorem is proved.


Furthermore, recall that
(E0 f )(t) = ∫_0^η dR0 (τ )f (t − τ ).

By Lemma 1.12.1

kE0 f kL2 (0,∞) ≤ var(R0 )kf kL2 (−η ,∞) .



Now from (2.10) and (4.1) in the case % = ∞ it follows that

kẋkL2 (0,∞) ≤ (var(R0 ) + q2 )kxkL2 (−η ,∞) .

Or according to (4.4),

kẋkL2 (0,∞) ≤ (var(R0 ) + q2 )c0 kφkC(−η ,0) . (4.7)

Recall that by Lemma 4.4.6, if f ∈ L2 (0, ∞) and f˙ ∈ L2 (0, ∞), then

kf k2C(0,∞) ≤ 2kf kL2 (0,∞) kf˙kL2 (0,∞) .

This and (4.7) imply the inequality



kxkC(0,∞) ≤ c2 kφkC(−η ,0) (c2 = c0 (2(var(R0 ) + q2 ))^{1/2} ). (4.8)

Now let % < ∞. By the Urysohn theorem (see Section 1.1), there is a continuous
scalar-valued function ψ% defined on C(0, ∞), such that ψ% (f ) = 1 if kf kC(0,∞) < %,
and ψ% (f ) = 0 if kf kC(0,∞) ≥ %.

Put F% f = ψ% (f )F f . Clearly, F% satisfies (4.1) for all f ∈ L2 (−η , ∞). Consider


the equation
ẋ = E0 x + F% x. (4.9)
The solution of problem (4.9), (1.2) denote by x% . According to (4.4) and (4.8) we
have
kx% kL2 (−η ,∞) ≤ c0 kφkC(−η ,0) and kx% kC(0,∞) ≤ c2 kφkC(−η ,0) .
If we take
%
kφkC(−η ,0) < r0 = ,
c2
then x% ∈ Ω(%). So the solutions of equations (2.10) and (4.9) coincide. This proves
the theorem. 

10.5 Exponential stability


Let F̃ be a continuous causal operator in C(−η , ∞). Consider the equation

ẋ = F̃ x (5.1)

and substitute
x(t) = yε (t)e−εt (5.2)
with an ε > 0 into (5.1). Then we obtain the equation
ẏε = εyε + eεt F̃ (e−εt yε ). (5.3)



Lemma 10.5.1. For an ε > 0, let the zero solution of equation (5.3) be stable in
the Lyapunov sense. Then the zero solution of equation (5.1) is exponentially stable.
Proof. If kφkC(−η ,0) is sufficiently small, we have
kyε (t)kn ≤ m0 kφkC(−η ,0) (t ≥ 0)
for a solution of (5.3). Now (5.2) implies the result. 

Recall that Ω(%) := {v ∈ C(−η , ∞) : kvkC(−η ,∞) ≤ %} for a positive % ≤ ∞.


Let a continuous causal mapping F in C(−η , ∞) satisfy condition (1.3) for
a % ≤ ∞. Then we will say that F has the ε-property, if for any f ∈ Ω(%) we
have
lim_{ε→0} keεt F (e−εt f )kC(0,∞) ≤ qkf kC(0,∞) . (5.4)
Here
keεt F (e−εt f )kC(0,∞) = sup_{t≥0} keεt [F (e−εt f )](t)kn .

For example, if condition (4.2) is fulfilled, then
keεt [F (e−εt f )](t)kn ≤ eεt ∫_0^η e−ε(t−s) kf (t − s)kn dν(s) ≤
eεη ∫_0^η kf (t − s)kn dν(s) (f ∈ Ω(%)),
and thus condition (5.4) holds.


Theorem 10.5.2. Let conditions (1.3), (2.2) and (5.4) hold. Then the zero solution
to (2.1) is exponentially stable.
Proof. Substituting (5.2) with a sufficiently small ε > 0 into (2.1), we obtain the
equation
ẏε − εyε = Eε,R yε + Fε yε , (5.5)
where
(Eε,R f )(t) = ∫_0^η eετ dτ R(t, τ )f (t − τ )
and
[Fε f ](t) = eεt [F (e−εt f )](t).
By (5.4) we have
kFε f kC(0,∞) ≤ a(ε)kF f kC(0,∞) (5.6)
where a(ε) → l ≤ 1 as ε → 0. Let Gε be the fundamental solution of the linear
equation
ẏ − εy = Eε,R y. (5.7)

Put
Ĝε f (t) = ∫_0^t Gε (t, t1 )f (t1 )dt1 (f ∈ C(0, ∞)).
It is simple to see that Ĝε → Ĝ as ε → 0. Taking ε sufficiently small and applying
Theorem 10.2.2, according to (5.6) we can assert that equation (5.5) is stable in
the Lyapunov sense. Now Lemma 10.5.1 proves the exponential stability. 

10.6 Nonlinear equations “close” to ordinary differential ones
Consider the equation

ẏ(t) = A(t)y(t) + [F y](t) (t ≥ 0), (6.1)

where A(t) is a piece-wise continuous matrix-valued function and F is a continuous
causal mapping in C(−η , ∞). Recall that k.kn denotes the Euclidean norm for
vectors and the corresponding operator (spectral) norm for matrices.
Lemma 10.6.1. Let F be a continuous causal mapping in C(−η , ∞) satisfying
condition (1.3), and the evolution operator U (t, s) (t ≥ s ≥ 0) of the equation

ẏ = A(t)y (t > 0) (6.2)

satisfy the condition


ν∞ := sup_{t≥0} ∫_0^t kU (t, s)kn ds < 1/q . (6.3)

Then the zero solution of equation (6.1) is stable. Moreover, a solution y of problem
(6.1), (1.2) satisfies the inequality
kykC(0,∞) ≤ (sup_{t≥0} kU (t, 0)φ(0)kn + qν∞ kφkC(−η ,0) )/(1 − qν∞ )
provided
(sup_{t≥0} kU (t, 0)φ(0)kn + qν∞ kφkC(−η ,0) )/(1 − qν∞ ) < %. (6.4)
Proof. Rewrite (6.1) as
x(t) = U (t, 0)φ(0) + ∫_0^t U (t, s)(F x)(s)ds.

So
kx(t)kn ≤ kU (t, 0)φ(0)kn + ∫_0^t kU (t, s)kn kF x(s)kn ds.

According to (6.4) and continuity of solutions, there is a T > 0, such that

kx(t)kn < % (t ≤ T ).

Since F is causal, by (1.3)

kF xkC(0,T ) ≤ qkxkC(−η ,T ).

Thus
kxkC(0,T ) ≤ sup_{0≤t≤T} kU (t, 0)φ(0)kn + qkxkC(−η ,T ) sup_{0≤t≤T} ∫_0^t kU (t, s)kn ds.

Consequently,
kxkC(0,T ) ≤ sup_{t≥0} kU (t, 0)φ(0)kn + qν∞ kxkC(−η ,T ) ≤
sup_{t≥0} kU (t, 0)φ(0)kn + qν∞ (kxkC(0,T ) + kφkC(−η ,0) ).

Hence,
kxkC(0,T ) ≤ (sup_{t≥0} kU (t, 0)φ(0)kn + qν∞ kφkC(−η ,0) )/(1 − qν∞ ).
Now condition (6.4) enables us to extend this inequality to the whole half-line. As
claimed. 

Due to the latter lemma we arrive at our next result.

Corollary 10.6.2. Under the hypothesis of Lemma 10.6.1, let F have the ε-property
(5.4). Then the zero solution of equation (6.1) is exponentially stable.
Note that, if
((A(t) + A∗ (t))h, h)Cn ≤ −2α(t)(h, h)Cn (h ∈ Cn , t ≥ 0)
with a positive piece-wise continuous function α(t), then


kU (t, s)kn ≤ exp(−∫_s^t α(t1 )dt1 ).
So if α has the property
ν̂∞ := sup_{t≥0} ∫_0^t exp(−∫_s^t α(t1 )dt1 ) ds < ∞,

then ν∞ ≤ ν̂∞ . For instance, under the condition
α0 := inf_t α(t) > 0,
we deduce that
ν̂∞ ≤ sup_{t≥0} ∫_0^t e−α0 (t−s) ds = 1/α0 .

In particular, if A(t) ≡ A0 is a constant matrix, then

U (t, s) = e(t−s)A0 (t ≥ s ≥ 0).

As it is shown in Section 2.5,
keA0 t kn ≤ eα(A0 )t Σ_{k=0}^{n−1} g k (A0 ) t k /(k!)^{3/2} (t ≥ 0),
where α(A0 ) = maxk Re λk (A0 ), and therefore
ν∞ = keA0 t kL1 (0,∞) ≤ νA0 ,
where
νA0 := Σ_{k=0}^{n−1} g k (A0 )/((k!)^{1/2} |α(A0 )|^{k+1} ).
Now Lemma 10.6.1 and Corollary 10.6.2 imply the following result.
Corollary 10.6.3. Let A0 be a Hurwitzian matrix and the conditions (1.3) and

qνA0 < 1

hold. Then the zero solution of the equation

ẋ(t) = A0 x(t) + (F x)(t) (6.5)

is stable. If, in addition, F has the -property (5.4), then the zero solution of
equation (6.5) is exponentially stable.
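The bound ν∞ ≤ νA0 can be tested numerically. In the sketch below, the matrix A0 and the formula g(A0 ) = (kA0 k2F − Σ|λk (A0 )|2 )1/2 (the quantity from Section 2.5) are the assumptions of the illustration:

```python
import numpy as np
from math import factorial, sqrt

A0 = np.array([[-1.0, 0.5], [0.0, -2.0]])    # a Hurwitzian, non-normal example
n = A0.shape[0]
lam, V = np.linalg.eig(A0)
alpha = max(lam.real)                         # α(A0) = max_k Re λ_k(A0) < 0
# g(A0) = (||A0||_F^2 - Σ |λ_k(A0)|^2)^{1/2}, the quantity from Section 2.5
g = sqrt(max(np.linalg.norm(A0, 'fro') ** 2 - float(np.sum(np.abs(lam) ** 2)), 0.0))

nu_A0 = sum(g ** k / (sqrt(factorial(k)) * abs(alpha) ** (k + 1)) for k in range(n))

# ν∞ = ∫_0^∞ ||e^{A0 t}||_2 dt, approximated via the eigendecomposition
Vinv = np.linalg.inv(V)
ts = np.linspace(0.0, 40.0, 4001)
norms = np.array([np.linalg.norm((V @ np.diag(np.exp(lam * t)) @ Vinv).real, 2)
                  for t in ts])
nu_inf = float(np.sum((norms[:-1] + norms[1:]) / 2.0) * (ts[1] - ts[0]))

assert nu_inf <= nu_A0               # the explicit bound ν_{A0} majorizes ν∞
print(nu_inf, nu_A0)
```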

10.7 Applications of the generalized norm


Again consider equation (2.1). Let rjk (t, s) be the entries of R(t, s). Rewrite (2.1)
in the form of the coupled system
ẋj (t) + Σ_{k=1}^n ∫_0^η xk (t − s)ds rjk (t, s) = [F x]j (t) (t ≥ 0; j = 1, ..., n), (7.1)
where x(t) = (xk (t))nk=1 and [F w]j (t) denote the coordinates of the vector function
F w(t) with a w ∈ C([−η , ∞), Cn ).
Below the inequalities for real vectors or vector functions are understood in
the coordinate-wise sense.

Furthermore, let ρ̂ := (ρ1 , ..., ρn ) be a vector with positive coordinates ρj < ∞.
We need the following set:
Ω̃(ρ̂) := {v(t) = (vj (t)) ∈ C([−η , ∞), Cn ) : kvj kC([−η ,∞),C) ≤ ρj ; j = 1, ..., n}.
If we introduce in C([a, b], Cn ) the generalized norm as the vector
M[a,b] (v) := (kvj kC([a,b],C) )nj=1 (v(t) = (vj (t)) ∈ C([a, b], Cn ))
(see Section 1.7), then we can write down
Ω̃(ρ̂) := {v ∈ C([−η , ∞), Cn ) : M[−η ,∞) (v) ≤ ρ̂}.

It is assumed that F satisfies the following condition: there are nonnegative con-
stants νjk (j, k = 1, ..., n), such that for any

w(t) = (wj (t))nj=1 ∈ Ω̃(ρ̂),

the inequalities
k[F w]j kC([0,∞),C) ≤ Σ_{k=1}^n νjk kwk kC([−η ,∞),C) (j = 1, ..., n) (7.2)

hold. In other words,

M[0,∞) (F w) ≤ Λ(F )M[−η ,∞) (w) (w ∈ Ω̃(ρ̂)), (7.3)

where Λ(F ) is the matrix defined by

Λ(F ) = (νjk )nj,k . (7.4)

Lemma 10.7.1. Let F be a continuous causal mapping in C(−η , ∞) satisfying


condition (7.3). Then

M[0,T ] (F w) ≤ Λ(F )M[−η ,T ] (w) (w ∈ Ω̃(ρ̂) ∩ C([−η , T ]), Cn ))

for all T > 0.


Proof. Again put
wT (t) = w(t) if 0 ≤ t ≤ T, and wT (t) = 0 if t > T,
and
FT w(t) = (F w)(t) if 0 ≤ t ≤ T, and FT w(t) = 0 if t > T.
Take into account that FT w = FT wT . Then

M[0,T ] (F w) = M[0,∞) (FT w) = M[0,∞) (FT wT ) ≤



M[0,∞) (F wT ) ≤ Λ(F )M[−η ,∞) (wT ) = Λ(F )M[−η ,T ] (w).


This proves the required result. 

It is also assumed that the entries Gjk (t, s) of the fundamental solution
G(t, s) of equation (1.5) satisfy the conditions
γjk := sup_{t≥0} ∫_0^∞ |Gjk (t, s)|ds < ∞. (7.5)

Denote by γ̂ the matrix with the entries γjk :

γ̂ = (γjk )nj,k .

Theorem 10.7.2. Let the condition (7.2) and (7.5) hold. If, in addition, the spectral
radius of the matrix Q = γ̂Λ(F ) is less than one, then the zero solution of equation
(2.1) is stable. Moreover, if a solution z of the linear problem (1.5), (1.2) satisfies
the condition
M[−η ,∞) (z) + Qρ̂ ≤ ρ̂ , (7.6)
then the solution x(t) of problem (2.1), (1.2) satisfies the inequality

M[−η ,∞) (x) ≤ (I − Q)−1 M[−η ,∞) (z). (7.7)

Proof. Take a finite T > 0 and define on ΩT (ρ̂) = Ω̃(ρ̂) ∩ C(−η , T ) the mapping
Φ by
Φw(t) = z(t) + ∫_0^t G(t, t1 )[F w](t1 )dt1 (0 ≤ t ≤ T ; w ∈ ΩT (ρ̂)),

and
Φw(t) = φ(t) for − η ≤ t ≤ 0.
Then by (7.3) and Lemma 10.7.1,

M[−η ,T ] (Φw) ≤ M[−η ,T ] (z) + γ̂Λ(F )M[−η ,T ] (w).

According to (7.6) Φ maps ΩT (ρ̂) into itself. Taking into account that Φ is compact
we prove the existence of solutions. Furthermore,

M[−η ,T ] (x) = M[−η ,T ] (Φx) ≤ M[−η ,T ] (z) + QM[−η ,T ] (x).

So
M[−η ,T ] (x) ≤ (I − Q)−1 M[−η ,T ] (z).
Hence letting T → ∞, we obtain (7.7), completing the proof. 

The Lipschitz condition

M[0,∞) (F w − F w1 ) ≤ Λ(F )M[−η ,∞) (w − w1 ) (w1 , w ∈ Ω̃(ρ̂)) (7.8)

together with the Generalized Contraction Mapping theorem (see Section 1.7) also
allows us to prove the existence and uniqueness of solutions. Namely, the following
result is valid.

Theorem 10.7.3. Let conditions (7.5) and (7.8) hold. If, in addition, the spectral
radius of the matrix Q = γ̂Λ(F ) is less than one, then problem (2.1), (1.2) has a
unique solution x̃ ∈ Ω̃(ρ̂), provided z satisfies condition (7.6). Moreover, the zero
solution of equation (2.1) is stable.

The proof is left to the reader.


Note that one can use the well-known inequality
rs (Q) ≤ max_j Σ_{k=1}^n qjk , (7.9)

where qjk are the entries of Q. About this inequality, as well as about other
estimates for the matrix spectral radius see Section 2.4.
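Inequality (7.9), together with the entrywise nonnegativity of (I − Q)−1 when rs (Q) < 1 (which is what makes the coordinate-wise bound (7.7) meaningful), is easy to verify numerically for a random nonnegative matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
Q = rng.random((4, 4)) * 0.2            # a nonnegative matrix, as Q = γ̂ Λ(F) is

rs = max(abs(np.linalg.eigvals(Q)))     # spectral radius r_s(Q)
row_sum = Q.sum(axis=1).max()           # max_j Σ_k q_jk

assert rs <= row_sum + 1e-12            # inequality (7.9)

# When r_s(Q) < 1, (I - Q)^{-1} = Σ_m Q^m is entrywise nonnegative.
if rs < 1:
    assert (np.linalg.inv(np.eye(4) - Q) >= -1e-12).all()
print(rs, row_sum)
```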
Let μj (s) be defined and nondecreasing on [0, η ]. Now we are going to
investigate the stability of the following nonlinear system with the diagonal linear
part:
ẋj (t) + ∫_0^η xj (t − s)dμj (s) = [Fj x](t) (x(t) = (xk (t))nk=1 ; j = 1, ..., n; t ≥ 0). (7.10)
In this case Gjk (t, s) = Gjk (t − s) and

Gjk (t) = 0 (j 6= k; j, k = 1, ..., n; t ≥ 0).

Suppose that
η var(μj ) < 1/e (j = 1, ..., n), (7.11)
then Gjj (s) ≥ 0 (see Section 11.3). Moreover, due to Lemma 4.6.3, we obtain the
relations
γjj = ∫_0^∞ Gjj (s)ds = 1/var (μj ).
Thus
Q = (qjk )nj,k=1 with the entries qjk = νjk /var (μj ).
Now Theorem 10.7.2 and (7.9) imply our next result.

Corollary 10.7.4. If the conditions (7.2), (7.11) and
(1/var (μj )) Σ_{k=1}^n νjk < 1 (j = 1, ..., n)

hold, then the zero solution of equation (7.10) is stable.
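The two facts behind Corollary 10.7.4, positivity of Gjj under (7.11) and γjj = 1/var (μj ), can be illustrated by integrating the scalar equation ẋ(t) = −a x(t − h) (μ a point mass with var(μ) = a; the step size and the values of a, h below are assumptions of the sketch):

```python
import numpy as np

# Scalar case of (7.10): ẋ(t) = -a x(t - h), a point mass μ with var(μ) = a.
# (7.11) requires h a < 1/e; below h a = 0.3 < 1/e ≈ 0.368.
a, h, dt = 1.0, 0.3, 1e-3
N, lag = 60000, int(round(h / dt))

x = np.zeros(N)
x[0] = 1.0                        # fundamental solution: x(0) = 1, x(t) = 0, t < 0
for i in range(N - 1):            # explicit Euler scheme
    delayed = x[i - lag] if i >= lag else 0.0
    x[i + 1] = x[i] - dt * a * delayed

assert (x >= -1e-9).all()                 # G_jj(t) ≥ 0 under (7.11)
integral = x.sum() * dt
assert abs(integral - 1.0 / a) < 1e-2     # γ_jj = ∫_0^∞ G_jj(s) ds = 1/var(μ_j)
```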


Furthermore, let us apply the generalized norm to equation (6.1). To this end
we use the representation of the evolution operator by the exponential multiplicative
integral:
U (t, s) = ∫_{[s,t]}^← eA(t1 )dt1
(see [14, Section III.1.5]), where the symbol ∫_{[s,t]}^← eA(t1 )dt1 means the limit of the
products
eA(τn )δ eA(τn−1 )δ · · · eA(τ1 )δ eA(τ0 )δ (τk = τk^{(n)} = s + k(t − s)/n, δ = (t − s)/n)
as n → ∞.
Assume that A(t) = (ajk (t)) is real and its norm is bounded on the positive
half-line; so there are positive constants bjk (j 6= k) and real constants bjj , such
that
ajj (t) ≤ bjj and |ajk (t)| ≤ bjk (t ≥ 0; j 6= k; j, k = 1, ..., n). (7.12)
These inequalities imply A(t) ≤ B for all t ≥ 0, where B = (bjk ).
For a matrix C = (cjk ) let |C| mean the matrix (|cjk |). Then due to (7.12) we have
|eA(t)δ | ≤ eBδ .
Hence
|U (t, s)| = |∫_{[s,t]}^← eA(t1 )dt1 | ≤ exp [(t − s)B] (t ≥ s).
In the considered case G(t, s) = U (t, s), and the entries Gjk (t, s) of G(t, s) satisfy
|Gjk (t, s)| ≤ ujk (t, s),
where ujk (t, s) are the entries of the matrix exp [(t − s)B].
Now we can apply Theorem 10.7.2 and estimates for ujk (t, s) from Section
2.6 to establish the stability condition for equation (6.1).

10.8 Systems with positive fundamental solutions


In this section all the considered functions and matrices are assumed to be real.
Let Rn+ be the cone of vectors from Rn with nonnegative coordinates. Denote by
C+ (a, b) = C([a, b], Rn+ )
the cone of vector functions from C(a, b) = C([a, b], Rn ) with nonnegative coordinates.
The inequality v ≥ 0 for a vector v means that v ∈ Rn+ . The inequality f ≥ 0
for a function f means that f ∈ C+ (a, b). The inequality f1 ≥ f2 for functions
f1 , f2 means that f1 − f2 ∈ C+ (a, b).
Consider the problem
ẋ(t) = ∫_0^η ds R(t, s)x(t − s) + [F x](t), (8.1)
x(t) = φ(t) ∈ C+ (−η , 0) (−η ≤ t ≤ 0), (8.2)


where R(t, s) = (rjk (t, s))nj,k=1 again is an n × n-matrix-valued function defined
on [0, ∞) × [0, η ], which is piece-wise continuous in t for each s, whose entries
satisfy the condition
vjk = sup_{t≥0} var rjk (t, .) < ∞ (j, k = 1, ..., n),

and F is a continuous causal mapping in C(−η , ∞).


A solution of problem (8.1), (8.2) is understood as in Section 10.1. In this
section the existence of solutions of problem (8.1), (8.2) is assumed.
Let the fundamental solution G(t, s) of the equation
ż(t) = ∫_0^η ds R(t, s)z(t − s) (8.3)

be a matrix function nonnegative for all t ≥ s ≥ 0. Denote by Kη the subset of


C+ (−η , 0), such that

φ ∈ Kη implies z(t) ≥ 0 (t ≥ 0) (8.4)

for a solution z(t) of the linear problem (8.3), (8.2).


Recall that F 0 ≡ 0.
Definition 10.8.1. Let F be a continuous causal mapping in C(−η , ∞). Then the
zero solution of (8.1) is said to be stable (in the Lyapunov sense) with respect to
Kη , if for any  > 0, there exists a number δ > 0, such that the conditions φ ∈ Kη
and kφkC(−η,0) ≤ δ imply kxkC(0,∞) ≤  for a solution x(t) of problem (8.1), (8.2).
The zero solution of (8.1) is said to be asymptotically stable, with respect to
Kη if it is stable with respect to Kη , and there is an open set ω+ ⊆ Kη , such
that φ ∈ ω+ implies x(t) → 0 as t → ∞.

The zero solution of (8.1) is exponentially stable with respect to Kη , if there


are positive constants ν, m0 and r0 , such that the conditions

φ ∈ Kη and kφkC(−η ,0) ≤ r0

imply the relation kx(t)kn ≤ m0 kφkC(−η ,0) e−νt (t ≥ 0) for any positive solution
x. If r0 = ∞, then the zero solution of (8.1) is globally exponentially stable with
respect to Kη .
It is assumed that F satisfies the following condition: there are linear causal
non-negative bounded operators A− and A+ , such that
A− v ≤ F v ≤ A+ v for any v ∈ C+ (−η , ∞). (8.5)
In particular, A− can be the zero operator: A− v = 0 for any positive v.


Theorem 10.8.2. Let the fundamental solution G(t, s) of equation (8.3) be a matrix
function nonnegative for all t ≥ s ≥ 0 and condition (8.5) hold. Then a solution
x(t) of problem (8.1), (8.2) with φ ∈ Kη satisfies the inequalities

x− (t) ≤ x(t) ≤ x+ (t) (t ≥ 0), (8.6)

where x+ (t) is the (nonnegative continuous) solution of the equation


ẋ(t) = ∫_0^η ds R(t, s)x(t − s) + [A+ x](t) (8.7)

and x− (t) is the solution of the equation


ẋ(t) = ∫_0^η ds R(t, s)x(t − s) + [A− x](t) (8.8)

with the same initial function φ ∈ Kη .


Proof. Since φ(t) ≥ 0, we have [F x](s) ≥ 0 for small s ≥ 0. Take into account that
x(t) = z(t) + ∫_0^t G(t, s)[F x](s)ds.

So, there is a T > 0, such that

x(t) ≥ 0 (0 ≤ t ≤ T ) and therefore x(t) ≥ z(t) (0 ≤ t ≤ T ).

But z(t) ≥ 0 for all t ≥ 0. Consequently, one can extend these inequalities to the
whole positive half-line.
Now conditions (8.5) imply
x(t) ≤ z(t) + ∫_0^t G(t, s)[A+ x](s)ds

and
x(t) ≥ z(t) + ∫_0^t G(t, s)[A− x](s)ds.

Hence, due to the abstract Gronwall Lemma (see Section 1.5), omitting simple
calculations we have
x(t) ≤ x+ (t) and x(t) ≥ x− (t) (t ≥ 0),

where x+ (t) is a solution of the equation


x+ (t) = z(t) + ∫_0^t G(t, s)[A+ x+ ](s)ds

and x− (t) is a solution of the equation


x− (t) = z(t) + ∫_0^t G(t, s)[A− x− ](s)ds (t ≥ 0).

According to the Variation of Constants formula these equations are equivalent to
(8.7) and (8.8). 

Corollary 10.8.3. Under the hypothesis of Theorem 10.8.2, let equation (8.7) be
(asymptotically, exponentially) stable with respect to Kη . Then the zero solution
to (8.1) is globally (asymptotically, exponentially) stable with respect to Kη .
Conversely, let the zero solution to (8.1) be globally (asymptotically, expo-
nentially) stable with respect to Kη . Then equation (8.8) is (asymptotically, expo-
nentially) stable with respect to Kη .

Again use the generalized norm as the vector

M[a,b] (v) := (kvj kC([a,b],R) )nj=1 (v(t) = (vj (t)) ∈ C([a, b], Rn )).

Furthermore, let ρ̂ := (ρ1 , ..., ρn ) be a vector with positive coordinates ρj < ∞.


Put
Ω+ (ρ̂) := {v(t) = (vj (t)) ∈ C([−η, ∞), Rn+ ) : 0 ≤ vj (t) ≤ ρj ; t ≥ −η; j = 1, ..., n}.
So
Ω+ (ρ̂) := {v ∈ C([−η , ∞), Rn+ ) : M[−η ,∞) (v) ≤ ρ̂}.

It should be noted that the Urysohn theorem and the just proved theorem enable us,
instead of condition (8.5), to impose the following one:
A− v ≤ F v ≤ A+ v for any v ∈ Ω+ (ρ̂).



10.9 The Nicholson-type system


In this section we explore the vector equation, which models cancer cell populations
and other biological processes, cf. [8] and references given therein. Namely, we
consider the system
dx1
= r(t) [−a1 x1 (t) + b1 x2 (t) + c1 x1 (t − τ ) exp(−x1 (t − τ ))] ,
dt
(9.1)
dx2
= r(t) [−a2 x2 (t) + b2 x1 (t) + c2 x2 (t − τ ) exp(−x2 (t − τ ))] ,
dt
where ai , bi , ci , τ = const ≥ 0, and r(t), t ≥ 0, is a piece-wise continuous function
bounded on the positive half-line with the property
inf_{t≥0} r(t) > 0.

Take the initial conditions

x1 (t) = φ1 (t), x2 (t) = φ2 (t) (t ∈ [−τ, 0]) (9.2)

with continuous functions φk . Everywhere in this section it is assumed that φk


(k = 1, 2) are nonnegative.
Recall that inequalities for vector-valued functions are understood in the
coordinate-wise sense.
Rewrite (9.1) as

dx(t)
= r(t)(Ax(t) + CF1 (x(t − τ ))), (9.3)
dt
where    
−a1 b1 c1 0
A= ,C =
b2 −a2 0 c2
and
F1 (x(t − τ )) = (xk (t − τ ) exp(−xk (t − τ )))2k=1 .
Taking F x(t) = CF1 (x(t − τ )) we have

0 ≤ F v(t) ≤ Cv(t − τ ) (v(t) ≥ 0).

Equation (8.7) in the considered case takes the form

dy(t)
= r(t)[Ay(t) + Cy(t − τ )], (9.4)
dt
and equation (8.8) takes the form

dy(t)
= r(t)Ay(t). (9.5)
dt

The evolution operator U− (t, s) of (9.5) is
U− (t, s) = exp(∫_s^t r(s1 )ds1 A).

Since the off-diagonal entries of A are non-negative, the evolution operator of
(9.5) is non-negative. Applying Theorem 10.8.2 and omitting simple calculations,
we arrive at the following result.
Corollary 10.9.1. Any solution x(t) of problem (9.1), (9.2) is non-negative and
satisfies the inequalities
exp(∫_0^t r(s)ds A) φ(0) ≤ x(t) ≤ x+ (t) (t ≥ 0),
where x+ (t) is the solution of problem (9.4), (9.2). Consequently, the zero solution
to (9.1) is globally asymptotically stable with respect to the cone K = C([−τ, 0], R2+ ),
provided (9.4) is asymptotically stable.
Rewrite (9.4) as
y(t) = U− (t, 0)φ(0) + ∫_0^t U− (t, s)r(s)Cy(s − τ )ds.

Thus
kykC(0,T ) ≤ sup_t kU− (t, 0)φ(0)k2 + M0 kykC(−τ,T ) (0 < T < ∞),
where
M0 = sup_t ∫_0^t r(s)kU− (t, s)Ck2 ds.

Here k.k2 is the spectral norm in C2 . Hence it easily follows that (9.4) is stable
provided M0 < 1. Note that the eigenvalues λ1 (A) and λ2 (A) of A are easily
calculated, and with the notation
m(t, s) = ∫_s^t r(s1 )ds1 ,
we have
ke^{m(t,s) A} k2 ≤ e^{α(A)m(t,s)} (1 + g(A)m(t, s)),
where α(A) = max_{k=1,2} Re λk (A) and g(A) = |b1 − b2 | (see Section 2.8). Assume
that c1 ≤ c2 , α(A) < 0, and
r− = inf_t r(t) > 0 and r+ = sup_t r(t) < ∞.

Then M0 ≤ M1 , where
M1 := r+ c2 sup_t ∫_0^t e^{(t−s)α(A)r−} (1 + g(A)(t − s)r+ )ds =
c2 r+ ∫_0^∞ e^{uα(A)r−} (1 + g(A)ur+ )du.

This integral is easily calculated. Thus, if M1 < 1, then the zero solution to (9.1)
is globally asymptotically stable with respect to the cone C([−τ, 0], R2+ ).

10.10 Input-to-state stability of general systems


This section is devoted to the input-to-state stability of coupled systems of functional
differential equations with nonlinear causal mappings. The notion of
input-to-state stability plays an essential role in control theory.
Consider in Cn the problem
ẋ(t) = ∫_0^η dR0 (τ )x(t − τ ) + [F x](t) + u(t) (0 < η < ∞; t > 0), (10.1)
x(t) = 0 for − η ≤ t ≤ 0, (10.2)


where u(t) is the input, x(t) is the state, and R0 (τ ) (0 ≤ τ ≤ η ) is an n × n-
matrix-valued function of bounded variation, again. In addition, F is a continuous
causal mapping in Lp (−η , ∞) (p ≥ 1).
For example, the nonlinear differential delay equation
ẋ(t) = ∫_0^η dR0 (τ )x(t − τ ) + F0 (x(t − h)) + u(t) (h = const > 0; t ≥ 0), (10.3)
where F0 : Cn → Cn is a continuous function and u ∈ Lp (0, ∞), can be reduced
to the form (10.1) with
[F x](t) = F0 (x(t − h)).
Moreover, this F is causal in L∞ (−h, ∞).
Again,
K(z) = zI − ∫_0^η e−τ z dR0 (τ )
and
G(t) = (1/2π) ∫_{−∞}^∞ eiωt K −1 (iω) dω
are the characteristic matrix-valued function and the fundamental solution of the
linear equation
ż = ∫_0^η dR0 (τ )z(t − τ ) (t ≥ 0), (10.4)
respectively. All the characteristic values of K(.) are in C− .



Definition 10.10.1. We will say that system (10.1), (10.2) is input-to-state Lp -


stable, if there is a positive constant m0 ≤ ∞, such that for all u ∈ Lp satisfying
kukLp (0,∞) ≤ m0 , a solution of problem (10.1), (10.2) is in Lp (0, ∞).
Put
Ωp (%) = {w ∈ Lp (−η , ∞) : kwkLp (−η ,∞) ≤ %}
for a positive number % ≤ ∞.
It is assumed that there is a constant q = q(%) ≥ 0, such that

kF wkLp (0,∞) ≤ qkwkLp (−η ,∞) (w ∈ Ωp (%)). (10.6)

Recall that
Ĝf (t) = ∫_0^t G(t − s)f (s)ds.

Theorem 10.10.2. Let the conditions (10.6) and qkĜkLp (0,∞) < 1 hold. Then system
(10.1) is input-to-state Lp -stable.
Proof. Put l = kukLp (0,∞) for a fixed u. Repeating the arguments of the proof
of Theorem 10.1.2 with Lp instead of C, and taking into account the zero initial
conditions, we get the inequality

kxkLp (0,∞) ≤ kĜkLp (0,∞) l /(1 − qkĜkLp (0,∞) ).

This inequality proves the theorem. 

By the properties of convolutions (see Section 1.3), we have
kĜkLp (0,∞) ≤ kGkL1 (0,∞) .

Now Theorem 10.10.2 implies our next result.


Corollary 10.10.3. Let condition (10.6) hold. In addition, let qkGkL1 (0,∞) < 1.
Then equation (10.1) is input-to-state Lp -stable.
Due to Lemma 4.4.1 we have
kĜkL2 (0,∞) ≤ θ(K),
where
θ(K) := sup_{−2 var(R0 ) ≤ ω ≤ 2 var(R0 )} kK −1 (iω)kn .

Now, making use of Theorem 10.10.2, we arrive at the following result.


Corollary 10.10.4. Let condition (10.6) hold with p = 2. In addition, let qθ(K) < 1.
Then system (10.1) is input-to-state L2 -stable.

10.11 Input-to-state stability of systems


with one delay in linear parts
In this section we illustrate the results of the previous section in the case of the
systems whose linear parts have one distributed delay:
ẋ(t) + A ∫_0^η x(t − s)dμ(s) = [F x](t) + u(t), (11.1)
where μ is a scalar nondecreasing function and A is a constant positive definite
Hermitian matrix. So the eigenvalues λj (A) of A are real and positive. Then the
linear equation
ẋ(t) + A ∫_0^η x(t − s)dμ(s) = 0 (11.2)
can be written as

ẋj (t) + λj (A)Eμ xj (t) = 0, j = 1, ..., n, (11.3)

where
Eμ f (t) = ∫_0^η f (t − s)dμ(s)
for a scalar function f .
If the inequality

eη λj (A) var (μ) < 1 (j = 1, ..., n) (11.4)

holds, then by Lemma 4.6.6 the fundamental solution Gj of the scalar equation
(11.3) is positive and
∫_0^∞ Gj (t)dt = 1/(λj (A)var (μ)).
But the fundamental solution Gμ of the vector equation (11.2) satisfies the equality
Z ∞ Z ∞
kGμ (t)kn dt = max Gj (t)dt.
0 j 0

Thus Z ∞
1
kGμ (t)kn dt = .
0 minj λj (A)var (μ)
Now Theorem 10.10.2 implies our next result.
Corollary 10.11.1. Let A be a positive definite Hermitian matrix and let conditions (10.6) and (11.4) hold. If, in addition,

q < min_j λj(A) var(μ),   (11.5)

then system (11.1) is input-to-state Lp-stable.
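The hypotheses of Corollary 10.11.1 are easy to check numerically. The sketch below uses assumed data (a 2×2 positive definite Hermitian A, a delay bound η, a point-mass μ with var(μ) = 1, and a bound q for the causal term F); none of these values come from the text.

```python
import numpy as np

# Checking Corollary 10.11.1 for assumed (hypothetical) data: a positive
# definite Hermitian A, a point-mass measure mu at s = eta (so var(mu) = 1),
# and a bound q for the causal nonlinearity F.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
eta, var_mu, q = 0.1, 1.0, 0.4

lam = np.linalg.eigvalsh(A)                                        # real, positive eigenvalues
cond_11_4 = bool(all(np.e * eta * l * var_mu < 1 for l in lam))    # condition (11.4)
cond_11_5 = bool(q < lam.min() * var_mu)                           # condition (11.5)
print(cond_11_4, cond_11_5)
```

Both conditions hold for these values, so the corollary yields input-to-state Lp-stability for this example.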



10.12 Comments
This chapter is particularly based on the papers [39, 41] and [61].
The basic results on the stability of nonlinear differential-delay equations
are presented, in particular, in the well-known books [72, 77, 100]. For recent results on the absolute stability of nonlinear retarded systems, see [88, 111] and references therein.
The stability theory of nonlinear equations with causal mappings is at an
early stage of development. The basic method for the stability analysis is the
direct Lyapunov method, cf. [13, 83]. But finding the Lyapunov functionals for
equations with causal mappings is a difficult mathematical problem.
Interesting investigations of linear causal operators are presented in the books
[82, 105]. The papers [5, 15] should also be mentioned. In the paper [15], the
existence and uniqueness of local and global solutions to the Cauchy problem for
equations with causal operators in a Banach space are established. In the paper
[5] it is proved that the input-output stability of vector equations with causal
operators is equivalent to the causal invertibility of causal operators.
Chapter 11

Scalar Nonlinear Equations

In this chapter, nonlinear scalar first and higher order equations with differential-
delay linear parts and nonlinear causal mappings are considered. Explicit stability
conditions are derived.
The Aizerman - Myshkis problem is also discussed.

11.1 Preliminary results


In this chapter all the considered functions are scalar; so Lp (a, b) = Lp ([a, b], C)
and C(a, b) = C([a, b], C). In addition, either X(a, b) = Lp (a, b), p ≥ 1, or X(a, b) =
C(a, b) .
Consider the equation
x(t) = z(t) + ∫₀ᵗ k(t, t1)(Fx)(t1) dt1   (t ≥ 0),   (1.1a)

x(t) = z(t)   (−η ≤ t ≤ 0),   (1.1b)


where k : {(t, s) : 0 ≤ s ≤ t < ∞} → R is a measurable kernel and z ∈ X(−η, ∞) is given,
and F is a continuous causal mapping in X(−η , ∞) (0 ≤ η < ∞) (see Section
1.8). For instance, the mapping defined by
(Fw)(t) = ∫₀^η g(w(t − s)) dμ(s)   (t ≥ −η),   (1.2)

where μ is a nondecreasing function and g is a continuous function with g(0) = 0,


is causal in C(−η , ∞).
A solution of (1.1) is a continuous function x defined on [−η , ∞), which
satisfies (1.1).
It is assumed that there is a constant q ≥ 0, such that

‖Fv‖X(0,∞) ≤ q‖v‖X(−η,∞)   (v ∈ X(−η, ∞)).   (1.3)



Introduce the operator V : X(0, ∞) → X(0, ∞) by

(Vv)(t) = ∫₀ᵗ k(t, t1)v(t1) dt1   (t > 0; v ∈ X(0, ∞)).

Lemma 11.1.1. Let V be compact in X(0, τ ) for each finite τ , and the conditions
(1.3) and
qkV kX(0,∞) < 1 (1.4)
hold. Then equation (1.1) has a (continuous) solution x ∈ X(−η , ∞). Moreover,
that solution satisfies the inequality
‖x‖X(−η,∞) ≤ ‖z‖X(−η,∞) / (1 − q‖V‖X(0,∞)).

Proof. By Lemmas 10.1.1 and 10.1.3, if condition (1.3) holds, then for all T ≥ 0
and w ∈ X(−η , T ), we have

‖Fw‖X(0,T) ≤ q‖w‖X(−η,T).

On X(−η , T ), T < ∞, let us define the mapping Φ by

(Φw)(t) = z(t) + (V F w)(t), t ≥ 0,

and
(Φw)(t) = z(t), t < 0,
for a w ∈ X(−η , T ). Hence, according to the previous inequality, for any number
r > 0, large enough, we have

‖Φw‖X(−η,T) ≤ ‖z‖X(−η,T) + q‖V‖X(0,T)‖w‖X(−η,T) ≤ r   (‖w‖X(−η,T) ≤ r).

So Φ maps a bounded set of X(−η , T ) into itself. Now the existence of a solution
x(t) is due to the Schauder Fixed Point Theorem, since V is compact.
From (1.1) it follows that

‖x‖X(−η,T) ≤ ‖z‖X(−η,T) + q‖V‖X(0,T)‖x‖X(−η,T).

Thus (1.4) implies


‖x‖X(−η,T) ≤ ‖z‖X(−η,T) / (1 − q‖V‖X(0,T)).
Now letting T → ∞ we get the required result. 

Furthermore, let k(t, s) = Q(t − s) with a continuous Q ∈ L1(0, ∞). Then V = VQ, where the operator VQ is defined by

(VQ v)(t) = ∫₀ᵗ Q(t − t1)v(t1) dt1   (v ∈ X(0, ∞); t > 0).

For each positive T < ∞, the operator VQ is compact. Moreover, by the properties of the convolution operators we have

‖VQ‖X(0,∞) ≤ ‖Q‖L1(0,∞)

with X(0, ∞) = Lp (0, ∞) and X(0, ∞) = C(0, ∞) (see Section 1.3). Now the
previous lemma implies
Corollary 11.1.2. Assume that Q ∈ L1 (0, ∞), z ∈ X(−η , ∞), and the condition
(1.3) holds. If, in addition, qkQkL1 (0,∞) < 1, then the problem
x(t) = z(t) + ∫₀ᵗ Q(t − t1)(Fx)(t1) dt1   (t > 0),   (1.5a)

x(t) = z(t)   (−η ≤ t ≤ 0)   (1.5b)


has a solution x ∈ X(−η , ∞). Moreover, that solution satisfies the inequality

‖x‖X(−η,∞) ≤ ‖z‖X(−η,∞) / (1 − q‖Q‖L1(0,∞)).

Let

Q̃(z) := ∫₀^∞ e^{−zt} Q(t) dt   (Re z ≥ 0)

be the Laplace transform of Q. Then by the Parseval equality we easily get ‖VQ‖L2(0,∞) = ΛQ, where

ΛQ := sup_{s∈R} |Q̃(is)|.

So Lemma 11.1.1 implies our next result.


Corollary 11.1.3. Assume that Q ∈ L1 (−η , ∞), z ∈ L2 (−η , ∞), and the condition
(1.3) holds with X(−η , ∞) = L2 (−η , ∞). If, in addition, the condition qΛQ < 1 is
fulfilled, then problem (1.5) has a (continuous) solution x ∈ L2 (−η , ∞). Moreover,
that solution satisfies the inequality
‖x‖L2(−η,∞) ≤ ‖z‖L2(−η,∞) / (1 − qΛQ).

Furthermore, suppose that

Q(t) ≥ 0 (t ≥ 0). (1.6)

Then, obviously,
|Q̃(iy)| = |∫₀^∞ e^{−iyt} Q(t) dt| ≤ ∫₀^∞ Q(t) dt = Q̃(0)   (y ∈ R).   (1.7)

Now Corollary 11.1.2 implies the following result.



Corollary 11.1.4. Let Q ∈ L1 (0, ∞), z ∈ X(−η , ∞) and the conditions (1.3), (1.6)
and
q Q̃(0) < 1 (1.8)
be fulfilled. Then problem (1.5) has a solution x ∈ X(−η , ∞) and

‖x‖X(−η,∞) ≤ ‖z‖X(−η,∞) / (1 − q Q̃(0)).

To investigate the stability of the scalar differential delay equation we need


the following lemma.
Lemma 11.1.5. Let Q ∈ L1 (0, ∞) and condition (1.6) hold. Then inequality (1.8)
is valid if and only if all the zeros of the function

1/Q̃(z) − q   (1.9)

are in C− := {z ∈ C : Re z < 0}.


Proof. Let (1.8) hold. Then, thanks to (1.7), 1/|Q̃(iy)| > q for all real y. So, according to the Rouché theorem, all the roots of the function defined by (1.9) are in C−, since Q ∈ L1(0, ∞) and therefore all the roots of the function 1/Q̃(z) are in C−.
Conversely, let all the roots of the function defined by (1.9) be in C− . Then
either

1/|Q̃(iy)| > q   (1.10)

or

1/|Q̃(iy)| < q   (1.11)
for all real y. But in the case (1.11), the integral
∫_{−∞}^{∞} Q̃(iy) dy

does not converge. This proves the lemma. 
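The inequality (1.7) behind this proof can be illustrated numerically. The sketch below, for the assumed kernel Q(t) = e^{−2t} ≥ 0 (a hypothetical example, not from the text), approximates Q̃ on the imaginary axis and checks that |Q̃(iy)| is maximized at y = 0.

```python
import numpy as np

# Numerical sanity check of (1.7) for the assumed kernel Q(t) = exp(-2t) >= 0:
# |Q~(iy)| should attain its maximum at y = 0, where Q~(0) = 1/2.
t = np.linspace(0.0, 40.0, 100001)
dt = t[1] - t[0]
Q = np.exp(-2.0 * t)

def laplace_imag(y):
    # crude Riemann approximation of Q~(iy) = int_0^inf e^{-iyt} Q(t) dt
    return np.sum(np.exp(-1j * y * t) * Q) * dt

ys = np.linspace(-20.0, 20.0, 201)
vals = np.array([abs(laplace_imag(y)) for y in ys])
Q0 = laplace_imag(0.0).real
print(round(Q0, 3), bool(vals.max() <= Q0 + 1e-6))
```

For this kernel Q̃(iy) = 1/(2 + iy), so the maximum of |Q̃(iy)| is indeed 1/2, attained at y = 0.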

11.2 Absolute stability


In the rest of this chapter the uniqueness of solutions is assumed.
Let us consider the equation

x^(n)(t) + Σ_{k=0}^{n−1} ∫₀^η x^(k)(t − τ) dμk(τ) = [Fx](t)   (t > 0),   (2.1)

where μk (k = 0, ..., n − 1) are bounded nondecreasing functions defined on [0, η] and F is a causal mapping in X(−η, ∞) with X(a, b) = Lp(a, b), p ≥ 1, or X(a, b) = C(a, b), again. Impose the initial condition
x(t) = φ(t) (−η ≤ t ≤ 0) (2.2)
with a given function φ having continuous derivatives up to n − 1-th order. Let
K(.) be the characteristic function of the equation
x^(n)(t) + Σ_{k=0}^{n−1} ∫₀^η x^(k)(t − τ) dμk(τ) = 0   (t > 0).   (2.3)

That is,
K(λ) = λⁿ + Σ_{k=0}^{n−1} λᵏ ∫₀^η e^{−λτ} dμk(τ).

It is assumed that all the zeros of K(·) are in C−. Introduce the Green function of (2.3):

G(t) := (1/2π) ∫_{−∞}^{∞} e^{itω}/K(iω) dω   (t ≥ 0).
If n = 1, then the notions of the Green function and fundamental solution coincide.
It is simple to check that the equation

w^(n)(t) + Σ_{k=0}^{n−1} ∫₀^η w^(k)(t − τ) dμk(τ) = f(t)   (t ≥ 0)   (2.4)

with the zero initial condition


w(k) (t) ≡ 0 (−η ≤ t ≤ 0; k = 1, ..., n − 1) (2.5)
and the locally integrable function f satisfying
|f (t)| ≤ c0 ec1 t (c0 , c1 = const; t ≥ 0)
admits the Laplace transform. So, by the inverse Laplace transform, problem (2.4),
(2.5) has the solution
w(t) = ∫₀ᵗ G(t − t1)f(t1) dt1   (t > 0).

Hence it follows that (2.1) is equivalent to the equation


x(t) = ζ(t) + ∫₀ᵗ G(t − t1)(Fx)(t1) dt1   (t > 0),   (2.6)

where ζ is a solution of problem (2.3), (2.2).


A continuous solution of the integral equation (2.6) with condition (2.2) will
be called a (mild) solution of problem (2.1), (2.2).

Lemma 11.2.1. Let all the zeros of K(z) be in C− . Then the linear equation (2.3)
is exponentially stable.
Proof. As it is well known, if all the zeros of K(z) are in C− , then (2.3) is asymp-
totically stable, cf. [77, 78]. Now we get the required result by small perturbations
and the continuity of the zeros of K(z). 

Introduce in X(0, ∞) the operator

(Ĝw)(t) = ∫₀ᵗ G(t − t1)w(t1) dt1   (t > 0).

Thanks to the preceding lemma it is bounded, since

‖Ĝ‖X(0,∞) ≤ ‖G‖L1(0,∞) < ∞.
Moreover, since (2.3) is exponentially stable, for a solution ζ of problem (2.2), (2.3) we have

|ζ(t)| ≤ const e^{−εt} Σ_{k=0}^{n−1} ‖φ^(k)‖C[−η,0]   (ε > 0; t ≥ 0).   (2.7)

Hence, ζ ∈ X(−η , ∞). Now Lemma 11.1.1 implies the following result.
Theorem 11.2.2. Assume that condition (1.3) holds for X(−η, ∞) = Lp(−η, ∞), p ≥ 1, or X(−η, ∞) = C(−η, ∞), and all the zeros of K(z) are in C−. If, in addition,

q‖Ĝ‖X(0,∞) < 1,   (2.8)

then problem (2.1), (2.2) has a solution x ∈ X(−η, ∞) and

‖x‖X(0,∞) ≤ ‖ζ‖X(−η,∞) / (1 − q‖Ĝ‖X(0,∞)),

where ζ is a solution of problem (2.2), (2.3), and consequently,

‖x‖X(0,∞) ≤ M Σ_{k=0}^{n−1} ‖φ^(k)‖C[−η,0],   (2.9)

where the constant M does not depend on the initial conditions.


Combining this theorem with Corollary 11.1.4 we obtain the following result.
Corollary 11.2.3. Assume that condition (1.3) holds with X(−η , ∞) = Lp (−η , ∞)
for a p ≥ 1, or X(−η , ∞) = C(−η , ∞), and all the zeros of K(z) are in C− . If,
in addition, G(t) is non-negative and
K(0) > q, (2.10)
then problem (2.1), (2.2) has a solution x ∈ X(0, ∞). Moreover, that solution
satisfies inequality (2.9).

Furthermore, Corollary 11.1.3 implies


Corollary 11.2.4. Let all the zeros of K(z) be in C−. Assume that the conditions (1.3) with X(−η, ∞) = L2(−η, ∞) and

inf_{s∈R} |K(is)| > q   (2.11)

hold. Then problem (2.1), (2.2) has a solution x ∈ L2(0, ∞). Moreover, that solution satisfies inequality (2.9) with X(0, ∞) = L2(0, ∞).
Definition 11.2.5. Equation (2.1) is said to be absolutely X-stable in the class of
nonlinearities satisfying (1.3), if there is a positive constant M0 independent of
the specific form of mapping F (but dependent on q), such that (2.9) holds for
any solution x(t) of (2.1).
Let us point out the following corollary to Theorem 11.2.2.
Corollary 11.2.6. Assume that all the zeros of K(z) are in C− and condition (2.8) holds. Then (2.1) is absolutely X-stable in the class of nonlinearities (1.3) with X(−η, ∞) = Lp(−η, ∞), p ≥ 1, or X(−η, ∞) = C(−η, ∞).
If, in addition, G is positive, then condition (2.8) can be replaced by (2.10).
In the case X(−η , ∞) = L2 (−η , ∞), condition (2.8) can be replaced by
(2.11).

11.3 The Aizerman - Myshkis problem


We consider the following problem, which we call the Aizerman - Myshkis problem.
Problem 11.3.1: To separate a class of equations (2.1), such that the asymptotic stability of the linear equation

x^(n)(t) + Σ_{k=0}^{n−1} ∫₀^η x^(k)(t − τ) dμk(τ) = q̃x(t),   (3.1)

with some q̃ ∈ [0, q] provides the absolute X-stability of (2.1) in the class of nonlinearities (1.3).
Recall that X(−η , ∞) = C(−η , ∞) or X(−η , ∞) = Lp (−η , ∞).
Theorem 11.3.1. Let the Green function of (2.3) be non-negative and condition
(2.10) hold. Then equation (2.1) is absolutely L2 -stable in the class of nonlineari-
ties satisfying (1.3). Moreover, (2.1) satisfies the Aizerman - Myshkis problem in
L2 (−η , ∞) with q̃ = q.
Proof. Corollary 11.2.6 at once yields that (2.1) is absolutely stable, provided
(2.10) holds. By Lemma 11.1.5 this is equivalent to the asymptotic stability of
(3.1) with q̃ = q. This proves the theorem. 

Let us consider the first order equation

ẋ(t) + ∫₀^η x(t − s) dμ(s) = [Fx](t)   (t ≥ 0),   (3.2)

where μ is a nondecreasing function having a bounded variation var(μ). We need the corresponding linear equation

ẏ(t) + ∫₀^η y(t − s) dμ(s) = 0   (t > 0).   (3.3)

Denote by Gμ (t) the Green function (the fundamental solution) of this equation.
In the next section we prove the following result.
Lemma 11.3.2. Under the condition

e η var(μ) < 1,   (3.4)

Gμ is non-negative on the positive half-line.


Denote by Kμ the characteristic function of (3.3):

Kμ(z) = z + ∫₀^η e^{−sz} dμ(s).

So Kμ (0) is equal to the variation var (μ) of μ. By Lemma 11.1.5 Kμ (z) − q has
all its zeros in C− if and only if q < var (μ). Now by Theorem 11.3.1 we easily
get the following result.
Corollary 11.3.3. Assume that the conditions (3.4) and q < var(μ) are fulfilled. Then equation (3.2) is absolutely X-stable in the class of nonlinearities (1.3).
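For a single discrete delay, Corollary 11.3.3 reduces to two scalar inequalities. The sketch below checks them for assumed values of b, η and q (a hypothetical example): μ is a point mass of size b at s = η, so var(μ) = b and (3.2) reads ẋ(t) + b x(t − η) = [Fx](t).

```python
import math

# Corollary 11.3.3 specialized to one discrete delay (assumed data):
# mu is a point mass of size b at s = eta, so var(mu) = b.
b, eta, q = 2.0, 0.15, 1.5

cond_3_4 = math.e * eta * b < 1   # (3.4): e * eta * var(mu) < 1
cond_q = q < b                    # q < var(mu)
print(cond_3_4, cond_q)
```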
Now let us consider the second order equation

ü(t) + Au̇(t) + B u̇(t − 1) + Cu(t) + Du(t − 1) + Eu(t − 2) = [F u](t) (t > 0) (3.5)

with non-negative constants A, B, C, D, E. Introduce the functions

K2(λ) = λ² + Aλ + Bλe^{−λ} + C + De^{−λ} + Ee^{−2λ}   (3.6)

and

G2(t) := (1/2πi) ∫_{c0−i∞}^{c0+i∞} e^{tz}/K2(z) dz   (c0 = const).
Assume that

B²/4 > E,   A²/4 > C,   (3.7)

and denote

r±(A, C) = A/2 ± √(A²/4 − C)

and

r±(B, E) = B/2 ± √(B²/4 − E).

In the next section we also prove the following result.
Lemma 11.3.4. Let the conditions (3.7),

D ≤ r+(B, E)r−(A, C) + r−(B, E)r+(A, C)   (3.8)

and

r+(B, E)e^{r+(A,C)} < 1/e   (3.9)

hold. Then G2(t) is non-negative on the positive half-line.
Clearly, K2 (0) = C + D + E. Now Theorem 11.3.1 and Lemma 11.1.5 yield
the following corollary.
Corollary 11.3.5. Let the conditions (3.7)-(3.9) and q < C + D + E hold. Then equation (3.5) is absolutely X-stable in the class of nonlinearities (1.3).
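The conditions (3.7)-(3.9) of Corollary 11.3.5 can be verified mechanically. The sketch below does this for one assumed set of coefficients A, B, C, D, E and a bound q; the numbers are illustrative, not from the text.

```python
import math

# An assumed set of coefficients for (3.5), chosen to satisfy (3.7)-(3.9):
A, B, C, D, E, q = 2.0, 0.05, 0.75, 0.03, 0.0004, 0.5

def r_pm(a, c):
    d = math.sqrt(a * a / 4.0 - c)
    return a / 2.0 + d, a / 2.0 - d

rpA, rmA = r_pm(A, C)   # r±(A, C) = 1.5, 0.5
rpB, rmB = r_pm(B, E)   # r±(B, E) = 0.04, 0.01

cond_3_7 = B * B / 4 > E and A * A / 4 > C
cond_3_8 = D <= rpB * rmA + rmB * rpA
cond_3_9 = rpB * math.exp(rpA) < 1.0 / math.e
cond_q = q < C + D + E
print(cond_3_7, cond_3_8, cond_3_9, cond_q)
```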
Finally, consider the higher order equations. To this end, for a continuous function v defined on [−η, ∞), let us define an operator Sk by

(Sk v)(t) = ak v(t − hk)   (k = 1, ..., n; ak = const > 0, hk = const ≥ 0; t ≥ 0).

Besides,

h1 + ... + hn = η.

Consider the equation

∏_{k=1}^{n} (d/dt + Sk) x(t) = [Fx](t)   (t ≥ 0).   (3.10)

Put

Ŵn(z) := ∏_{k=1}^{n} (z + ak e^{−hk z})

and

Gn(t) = (1/2πi) ∫_{c0−i∞}^{c0+i∞} e^{zt}/Ŵn(z) dz.
So Gn(t) is the Green function of the linear equation

∏_{k=1}^{n} (d/dt + Sk) x(t) = 0.

Due to the properties of the convolution, we have

Gn(t) = ∫₀ᵗ Ĝ1(t − t1) ∫₀^{t1} Ĝ2(t1 − t2) ... ∫₀^{t_{n−2}} Ĝn(t_{n−1}) dt_{n−1} ... dt2 dt1,

where Ĝk(t) (k = 1, ..., n) is the Green function of the equation

(d/dt + Sk) x(t) = ẋ(t) + ak x(t − hk) = 0.
Assume that

e ak hk < 1   (k = 1, ..., n);   (3.11)
then due to Lemma 11.3.2 Ĝk (t) ≥ 0 (t ≥ 0; k = 1, ..., n). We thus have proved
the following result.
Lemma 11.3.6. Let condition (3.11) hold. Then Gn (t) is non-negative.
Clearly,

Ŵn(0) = ∏_{k=1}^{n} ak.

Now Theorem 11.3.1 and Lemma 11.1.5 yield our next result.
Corollary 11.3.7. Let the conditions (3.11) and

q < ∏_{k=1}^{n} ak

hold. Then equation (3.10) is absolutely X-stable in the class of nonlinearities (1.3).
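Conditions (3.11) and q < a1···an are again elementary to test. The following sketch uses hypothetical values of ak, hk and q (not from the text).

```python
import math

# Hypothetical coefficients for (3.10) with n = 3 factors d/dt + S_k:
a = [1.2, 0.8, 2.0]   # a_k
h = [0.2, 0.3, 0.1]   # h_k, with eta = h1 + h2 + h3 = 0.6
q = 1.5

cond_3_11 = all(math.e * ak * hk < 1 for ak, hk in zip(a, h))   # (3.11)
prod_a = math.prod(a)                                           # W_n(0) = a1*a2*a3
print(cond_3_11, q < prod_a)
```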

11.4 Proofs of Lemmas 11.3.2 and 11.3.4


With constants a ≥ 0, b > 0, let us consider the equation

u̇(t) + au(t) + bu(t − h) = 0. (4.1)

Lemma 11.4.1. Let the condition

hbeah < e−1 (4.2)

hold. Then the Green function of equation (4.1) is positive.


Proof. First, we consider the Green function Gb (t) of the equation

u̇ + bu(t − h) = 0 (b = const, t ≥ 0). (4.3)

Recall that Gb satisfies the initial conditions

Gb (0) = 1, Gb (t) = 0 (t < 0). (4.4)

Suppose that
bh < e−1 . (4.5)

Since

max_{τ≥0} τe^{−τ} = e^{−1},

there is a positive solution w0 of the equation we^{−w} = bh. Taking c = h⁻¹w0, we get a solution c of the equation c = be^{hc}. Substituting Gb(t) = e^{−ct}z(t) into (4.3), we obtain

ż(t) − cz(t) + be^{hc}z(t − h) = ż(t) + c(z(t − h) − z(t)) = 0.

But due to (4.4), z(0) = 1 and z(t) = 0 (t < 0). So the latter equation is equivalent to the following one:

z(t) = 1 + c ∫₀ᵗ [z(s) − z(s − h)] ds = 1 + c ∫₀ᵗ z(s) ds − c ∫₀^{t−h} z(s) ds.

Consequently,

z(t) = 1 + c ∫_{t−h}^{t} z(s) ds.

Due to the Neumann series it follows that z(t) and, therefore, the Green function
Gb (t) of (4.3) are positive.
Furthermore, substituting u(t) = e−at v(t) into (4.1), we have the equation

v̇(t) + beah v(t − h) = 0.

According to (4.5), condition (4.2) provides the positivity of the Green function
of the latter equation. Hence the required result follows. 
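The positivity asserted by Lemma 11.4.1 can be observed numerically. The sketch below integrates (4.1) by forward Euler (a crude discretization introduced here only for illustration) with assumed parameters satisfying (4.2), and checks that the computed fundamental solution stays positive.

```python
import math

# Forward-Euler sketch of the fundamental solution of (4.1),
#   u'(t) + a*u(t) + b*u(t - h) = 0,  u(0) = 1, u(t) = 0 for t < 0,
# for assumed parameters satisfying (4.2): h*b*e^{a*h} < 1/e.
a, b, h = 0.5, 1.0, 0.2
assert h * b * math.exp(a * h) < 1 / math.e   # condition (4.2)

dt = 1e-3
steps = int(10 / dt)
lag = int(h / dt)
u = [1.0]
for i in range(steps):
    u_delay = u[i - lag] if i >= lag else 0.0   # u(t - h), zero for t < h
    u.append(u[i] + dt * (-a * u[i] - b * u_delay))

print(min(u) > 0.0)
```

The computed solution decays monotonically and never crosses zero, in agreement with the lemma.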

Denote by G+ the Green function of equation (4.3) with b = var(μ). Due to the previous lemma, under condition (3.4), G+ is positive.
The assertion of Lemma 11.3.2 follows from the next result.

Lemma 11.4.2. If condition (3.4) holds, then the Green function Gμ of equation
(3.3) is nonnegative and satisfies the inequality

Gμ (t) ≥ G+ (t) ≥ 0 (t ≥ 0).

Proof. Indeed, according to the initial conditions, for a sufficiently small t0 > η ,

Gμ (t) ≥ 0, Ġμ (t) ≤ 0 (0 ≤ t ≤ t0 ).

Thus,
Gμ (t − η ) ≥ Gμ (t − s) (s ≤ η ; 0 ≤ t ≤ t0 ).
Hence,

var(μ) Gμ(t − η) ≥ ∫₀^η Gμ(t − s) dμ(s)   (t ≤ t0).

According to (3.3) we get

Ġμ (t) + var (μ)Gμ (t − η ) = f (t)

with

f(t) = var(μ) Gμ(t − η) − ∫₀^η Gμ(t − s) dμ(s) ≥ 0   (0 ≤ t ≤ t0).

Hence, by virtue of the variation of constants formula, we arrive at the relation

Gμ(t) = G+(t) + ∫₀ᵗ G+(t − s)f(s) ds ≥ G+(t)   (0 ≤ t ≤ t0).

Extending this inequality to the whole half-line, we get the required result.


Proof of Lemma 11.3.4: First, consider the function

K0(λ) = (λ + a1 + b1 e^{−λ})(λ + a2 + b2 e^{−λ})

with nonnegative constants ak, bk (k = 1, 2). Then

G0(t) := (1/2πi) ∫_{c0−i∞}^{c0+i∞} e^{tz}/K0(z) dz   (c0 = const)

is the Green function to the equation


  
(d/dt + S1)(d/dt + S2) x(t) = 0   (t ≥ 0),   (4.6)

where

(Sk v)(t) = ak v(t) + bk v(t − 1)   (k = 1, 2; t ≥ 0).

Due to the properties of the convolution, we have

G0(t) = ∫₀ᵗ W1(t − t1)W2(t1) dt1,

where Wk(t) (k = 1, 2) is the Green function of the equation

(d/dt + Sk) x(t) = ẋ(t) + ak x(t) + bk x(t − 1) = 0.

Assume that

e^{ak} bk < 1/e   (k = 1, 2);   (4.7)
then due to Lemma 11.4.1 G0 (t) ≥ 0 (t ≥ 0).

Now consider the function

P2(λ) = K0(λ) − me^{−λ}

with a constant m. It is the characteristic function of the equation


  
(d/dt + S1)(d/dt + S2) x(t) = mx(t − 1)   (t ≥ 0).   (4.8)

By the integral inequalities (see Section 1.6), it is not hard to show that, if G0 (t) ≥
0 and m ≥ 0, then the Green function of (4.8) is also non-negative.
Furthermore, assume that P2 (λ) = K2 (λ), where K2 (λ) is defined in Section
11.3. That is,

(λ + a1 + b1 e^{−λ})(λ + a2 + b2 e^{−λ}) − me^{−λ} = λ² + Aλ + Bλe^{−λ} + C + De^{−λ} + Ee^{−2λ}.

Then, comparing the coefficients of P2 (λ) and K2 (λ), we get the relations

a1 + a2 = A, a1 a2 = C, (4.9)

b1 + b2 = B, b1 b2 = E, (4.10)
and
a1 b2 + b1 a2 − m = D. (4.11)
Solving (4.9), we get

a1,2 = A/2 ± (A2 /4 − C)1/2 = r± (A, C).

Similarly, (4.10) implies

b1,2 = B/2 ± (B 2 /4 − E)1/2 = r± (B, E).

From the hypothesis (3.7) it follows that a1,2 , b1,2 are real. Condition (3.9) implies
(4.7). So G0 (t) ≥ 0, t ≥ 0. Moreover, (3.8) provides relation (4.11) with a positive
m.
But, as mentioned above, if G0(t) ≥ 0, then the Green function G2(t) corresponding to K2(λ) = P2(λ) is also non-negative. This proves the lemma. 

11.5 First order nonlinear non-autonomous equations


First consider the linear equation

ẋ(t) + ∫₀^η x(t − τ) dτμ(t, τ) = 0,   (5.1)

where μ(t, τ) is a function defined on [0, ∞) × [0, η], nondecreasing in τ and continuous in t. Assume that there are nondecreasing functions μ±(τ) defined on [0, η], such that

μ−(τ2) − μ−(τ1) ≤ μ(t, τ2) − μ(t, τ1) ≤ μ+(τ2) − μ+(τ1)   (η ≥ τ2 > τ1 ≥ 0).   (5.2)

We also need the autonomous equations

ẋ+(t) + ∫₀^η x+(t − τ) dμ+(τ) = 0   (5.3)

and

ẋ−(t) + ∫₀^η x−(t − τ) dμ−(τ) = 0.   (5.4)

Denote by G1 (t, s), G+ (t) and G− (t) the Green functions to (5.1), (5.3) and (5.4),
respectively.
Lemma 11.5.1. Let the conditions (5.2) and

e η var(μ+) < 1   (5.5)

hold. Then

0 ≤ G+(t − s) ≤ G1(t, s) ≤ G−(t − s)   (t ≥ s ≥ 0).   (5.6)

Proof. Due to Lemma 11.3.2, G±(t) ≥ 0 for all t ≥ 0. In addition, according to the initial conditions,

G+(0) = G1(0, 0) = G−(0) = 1,   G+(t) = G1(t, 0) = G−(t) = 0   (t < 0).

For a sufficiently small t0 > 0, we have

G1 (t, 0) ≥ 0, Ġ1 (t, 0) ≤ 0 (0 ≤ t ≤ t0 ).

From (5.1) we obtain

Ġ1(t, 0) + ∫₀^η G1(t − τ, 0) dμ+(τ) = f(t)

with

f(t) = ∫₀^η G1(t − τ, 0)(dμ+(τ) − dτμ(t, τ)).

Hence, by virtue of the variation of constants formula and (5.2), we arrive at the relation

G1(t, 0) = G+(t) + ∫₀ᵗ G+(t − s)f(s) ds ≥ G+(t)   (0 ≤ t ≤ t0).

Extending this inequality to the whole positive half-line, we get the left-hand part
of inequality (5.6) for s = 0. Similarly the right-hand part of inequality (5.6) and
the case s > 0 can be investigated. 

Note that, due to (5.6),

sup_t ∫₀ᵗ G1(t, s) ds ≤ sup_t ∫₀ᵗ G−(t − s) ds = ∫₀^∞ G−(s) ds.

But by (5.4), G−(0) = 1 and

1 = ∫₀^∞ ∫₀^η G−(t − τ) dμ−(τ) dt = ∫₀^η ∫₀^∞ G−(t − τ) dt dμ−(τ) = ∫₀^η ∫_{−τ}^∞ G−(s) ds dμ−(τ) = ∫₀^η ∫₀^∞ G−(s) ds dμ−(τ) = var(μ−) ∫₀^∞ G−(s) ds.

So we obtain the equality

∫₀^∞ G±(s) ds = 1/var(μ±).   (5.7)
Hence, we get
Corollary 11.5.2. Let conditions (5.2) and (5.5) be fulfilled. Then

G1(t, s) ≤ 1   (t ≥ s ≥ 0),

0 ≥ ∂G1(t, s)/∂t ≥ −∫₀^η dτμ(t, τ) = −var μ(t, ·)   (t ≥ s),

and

sup_t ∫₀ᵗ G1(t, s) ds ≤ 1/var(μ−).   (5.8)
Furthermore, consider the nonlinear equation

ẋ(t) + ∫₀^η x(t − τ) dτμ(t, τ) = [Fx](t)   (t > 0),   (5.9)

where F is a causal mapping in C(−η , ∞).


Denote
Ω(r) := {v ∈ C(−η , ∞) : kvkC(−η ,∞) ≤ r}
for a positive r ≤ ∞.
It is assumed that there is a constant q, such that

‖Fv‖C(−η,∞) ≤ q‖v‖C(−η,∞)   (v ∈ Ω(r)).   (5.10)

Following the arguments of the proof of Theorem 10.1.2, according to (5.8), we arrive at the main result of the present section.

Theorem 11.5.3. Let the conditions (5.2), (5.5), (5.10) and


q < var(μ− ) (5.11)
hold. Then the zero solution of equation (5.9) is stable.

11.6 Comparison of Green’s functions to second-order equations
Let us consider the equation

ü(t) + 2c0(t)u̇(t) + c1(t)u̇(t − h) + d0(t)u(t) + d1(t)u(t − h) + d2(t)u(t − 2h) = 0   (t ≥ 0),   (6.1)
where c1 (t), dj (t) (t ≥ 0; j = 0, 1, 2) are piece-wise continuous functions, and
c0 (t) (t ≥ 0) is an absolutely continuous function having a piece-wise continuous
derivative ċ0 (t).
Let G(t, s) be the Green function to equation (6.1). So it is a function defined
for t ≥ s − 2h (s ≥ 0), having the continuous first and second derivatives in t for
t > s, satisfying that equation for all t > s ≥ 0 and the conditions
G(t, s) = 0   (s − 2h ≤ t ≤ s),   ∂G(t, s)/∂t = 0   (s − 2h ≤ t < s);

and

lim_{t↓s} ∂G(t, s)/∂t = 1.

Furthermore, extend c0(t) to [−2h, ∞) by the relation

c0(t) ≡ c0(0) for −2h ≤ t ≤ 0,

and put

a1(t) = c1(t) e^{∫_{t−h}^{t} c0(s)ds}   and   a2(t) = d2(t) e^{∫_{t−2h}^{t} c0(s)ds}   (t ≥ 0).
The aim of this section is to prove the following result.
Theorem 11.6.1. Let the conditions
−ċ0 (t) + c20 (t) + d0 (t) ≤ 0 (6.2)
and
−c1 (t)c0 (t − h) + d1 (t) ≤ 0 (t ≥ 0) (6.3)
hold. Let the Green function G0 (t, s) to the equation
ü(t) + a1 (t)u̇(t − h) + a2 (t)u(t − 2h) = 0 (t ≥ 0) (6.4)
be nonnegative. Then the Green function G(t, s) to equation (6.1) is also nonneg-
ative.

Proof. Substitute the equality

u(t) = w(t) e^{−∫₀ᵗ c0(s)ds}   (6.5)

into (6.1). Then, taking into account that

(d/dt) ∫₀^{t−h} c0(s) ds = (d/dt) ∫_h^t c0(s1 − h) ds1 = c0(t − h),

we have

e^{−∫₀ᵗ c0(s)ds} [ẅ(t) − 2c0(t)ẇ(t) + w(t)(−ċ0(t) + c0²(t) + d0(t)) + 2(c0(t)ẇ(t) − c0²(t)w(t))] + c1(t) e^{−∫₀^{t−h} c0(s)ds} [−c0(t − h)w(t − h) + ẇ(t − h)] + d1(t) e^{−∫₀^{t−h} c0(s)ds} w(t − h) + d2(t) e^{−∫₀^{t−2h} c0(s)ds} w(t − 2h) = 0.

Or

ẅ(t) + a1(t)ẇ(t − h) + m0(t)w(t) + m1(t)w(t − h) + a2(t)w(t − 2h) = 0,   (6.6)

where

m0(t) := −ċ0(t) + c0²(t) + d0(t)

and

m1(t) := e^{∫_{t−h}^{t} c0(s)ds} [−c1(t)c0(t − h) + d1(t)].
According to (6.2) and (6.3), m0(t) ≤ 0 and m1(t) ≤ 0. Hence, by the integral inequalities principle (see Section 1.6), it easily follows that if the Green function to equation (6.4)
is nonnegative, then the Green function to equation (6.6) is also nonnegative. This
and (6.5) prove the theorem. 

11.7 Comments
This chapter is based on the papers [36, 49]. Theorem 11.6.1 is taken from the
paper [48].
Recall that in 1949 M. A. Aizerman conjectured the following hypothesis: let A, b, c be an n × n-matrix, a column-matrix and a row-matrix, respectively. Then for the absolute stability of the zero solution of the equation

ẋ = Ax + bf(cx)   (ẋ = dx/dt)

in the class of nonlinearities f : R → R satisfying the condition

0 ≤ f(s)/s ≤ q   (q = const > 0, s ∈ R, s ≠ 0),


it is necessary and sufficient that the linear equation ẋ = Ax + q1 bcx be asymptotically stable for any q1 ∈ [0, q] [3]. This hypothesis caused great interest among specialists. Counterexamples were constructed demonstrating that it is not, in general, true, cf. [107]. Therefore, the following problem arose: to find the class of systems that satisfy Aizerman's hypothesis. In 1983 the author showed that any system satisfies the Aizerman hypothesis if its impulse function is non-negative. A similar result was proved for multivariable systems, distributed ones and in the input-output version. For the details see [24]. On the other hand, A. D. Myshkis [95, Section 10] pointed out the importance of considering the generalized Aizerman problem for retarded systems. In [27] that problem was investigated for retarded systems whose nonlinearities have discrete constant delays; in [30], more general systems with nonlinearities acting in the space C were considered.
The positivity conditions for the fundamental solutions of the first order
scalar differential equations are well-known, cf. [1].
For recent, very interesting results on absolute stability see, for instance, [86, 116] and references therein.
Chapter 12

Forced Oscillations in Vector Semi-Linear Equations

This chapter deals with forced periodic oscillations of coupled systems of semi-
linear functional differential equations. Explicit conditions for the existence and
uniqueness of periodic solutions are derived. These conditions are formulated in
terms of the roots of characteristic matrix functions.
In addition, estimates for periodic solutions are established.

12.1 Introduction and statement of the main result


As is well known, any 1-periodic vector-valued function f with the property f ∈ L2(0, 1) can be represented by the Fourier series

f = Σ_{k=−∞}^{∞} ck ek,

where

ek(t) = e^{2πikt}   (k = 0, ±1, ±2, ...),

and

ck = ∫₀¹ f(t)ek(t) dt ∈ Cn

are the Fourier coefficients. Introduce the Hilbert space PF of 1-periodic functions defined on the real axis R with values in Cn, and the scalar product

(f, u)PF ≡ Σ_{k=−∞}^{∞} (ck, bk)Cn   (f, u ∈ PF),

where (., .)C n is the scalar product Cn , ck and bk are the Fourier coefficients of f
and u, respectively. The norm in P F is

!1/2
p X
|f |P F = (f, f )P F = 2
kck kn .
k=−∞
p
Here kckn = (c, c)C n for a vector c.
Due to the periodicity, for any real a we have

‖v‖L2([0,1],Cn) = (∫_a^{a+1} ‖v(t)‖n² dt)^{1/2}   (v ∈ PF).

The Parseval equality yields

|v|PF = ‖v‖L2([0,1],Cn) = (∫_a^{a+1} ‖v(t)‖n² dt)^{1/2}   for any real a.

Let R(τ) be an n × n-matrix-valued function of bounded variation defined on [0, 1]. Consider the system

u̇(t) − ∫₀¹ dR(τ)u(t − τ) = (Fu)(t)   (t ∈ R),   (1.1)

where F : P F → P F is a mapping satisfying the condition

|F v − F w|P F ≤ q|v − w|P F (q = const > 0; v, w ∈ P F ). (1.2)

In addition,
l := |F 0|P F > 0. (1.3)
That is, F 0 is not zero identically.
In particular, one can take F v = F̂ v + f , where F̂ is causal and f ∈ P F .
Let K(z) be the characteristic matrix of the linear term of (1.1):

K(z) = zI − ∫₀¹ e^{−zτ} dR(τ)   (z ∈ C),

where I is the unit matrix. Assume that the matrices K(2iπj) are invertible for all integer j and that the condition

M0(K) := sup_{j=0,±1,±2,...} ‖K⁻¹(2iπj)‖n < ∞

holds. Here the matrix norm is the spectral one.


An absolutely continuous function u ∈ PF satisfying equation (1.1) for almost all t ∈ R will be called a periodic solution of that equation. A solution is nontrivial if it is not identically zero. Now we are in a position to formulate the main result of the chapter.

Theorem 12.1.1. Let the conditions (1.2), (1.3) and

qM0(K) < 1   (1.4)

be fulfilled. Then equation (1.1) has a unique nontrivial periodic solution u. Moreover, it satisfies the estimate

|u|PF ≤ lM0(K) / (1 − qM0(K)).   (1.5)
The proof of this theorem is presented in the next section.
Note that

M0(K) ≤ sup_{ω∈R} ‖K⁻¹(iω)‖n.

Now Lemma 4.3.1 implies

M0(K) ≤ θ(K) := sup_{|ω|≤2var(R)} ‖K⁻¹(iω)‖n.

So one can use the estimates for θ(K) derived in Sections 4.4 and 4.5.
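For a scalar linear part, M0(K) can be computed directly from its definition. The sketch below uses an assumed dR with a single point mass −a at τ = 1, so that K(z) = z + a e^{−z} (a hypothetical example); for a = 1/2 the supremum over the lattice 2πij is attained at j = 0, where K(0) = a.

```python
import numpy as np

# M0(K) for an assumed scalar linear part: dR has the point mass -a at tau = 1,
# so K(z) = z + a*e^{-z}; take a = 0.5.
a = 0.5
js = np.arange(-50, 51)
z = 2j * np.pi * js
K = z + a * np.exp(-z)                  # K(2*pi*i*j)
M0 = float((1.0 / np.abs(K)).max())     # attained at j = 0, where K(0) = a
print(M0)
```

Here M0(K) = 1/a = 2, so condition (1.4) requires q < 1/2 for this example.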

12.2 Proof of Theorem 12.1.1


In this section, for brevity, we set M0(K) = M0. Furthermore, consider the operator T defined on PF by the equality

(Tw)(t) = ẇ(t) − ∫₀¹ dR(τ)w(t − τ)   (t ∈ R, w ∈ W(PF)).

We need the linear equation


(T w)(t) = f (t) (2.1)
with f ∈ PF, t ∈ R. For any h ∈ Cn and an integer k we have

(T(h ek))(t) = ek(t)(2iπkI − ∫₀¹ e^{−2iπkτ} dR(τ))h = ek(t)K(2πik)h.
We seek a solution u of (2.1) in the form

u = Σ_{k=−∞}^{∞} ak ek   (ak ∈ Cn).

Hence,

Tu = T Σ_{k=−∞}^{∞} ak ek = Σ_{k=−∞}^{∞} ek K(2πik)ak,
k=−∞ k=−∞

and by (2.1),

ak = K⁻¹(2πik)ck   (k = 0, ±1, ±2, ...).

Therefore,

‖ak‖n ≤ M0‖ck‖n.

Hence, |u|PF ≤ M0|f|PF. This means that |T⁻¹|PF ≤ M0.
Furthermore, equation (1.1) is equivalent to the following one:

u = Ψ(u) (2.2)

where
Ψ(u) = T −1 F u.
For any v, w ∈ PF relation (1.2) implies

|Ψ(v) − Ψ(w)|PF ≤ M0 q|v − w|PF.

Due to (1.3) Ψ maps P F into itself. So according to (1.4) and the Contraction
Mapping theorem equation (1.1) has a unique solution u ∈ P F . To prove estimate
(1.5), note that
|u|P F = |Ψ(u)|P F ≤ M0 (q|u|P F + l).
Hence (1.4) implies the required result. 

12.3 Applications of matrix functions


Consider the problem

y 0 (t) = Ay(t) + [F y](t), y(0) = y(1), (3.1)

where A is a constant invertible diagonalizable matrix (see Section 2.7) and F is


an arbitrary continuous mapping of PF into itself, satisfying

|F y|P F ≤ q|y|P F + l (q, l = const > 0) and F (0) 6= 0. (3.2)

For example, let (F y)(t) = B(t)F0 (y(t − 1)), where B(t) is a 1-periodic matrix-
function and F0 : Rn → Rn has the Lipschitz property and F0 (0) 6= 0.
First, consider the scalar problem

x0 (t) = ωx(t) + f0 (t) (x(0) = x(1)) (3.3)

with a constant ω 6= 0 and a function f0 ∈ L2 ([0, 1], C). Then a solution of (3.3)
is given by
x(t) = ∫₀¹ G(ω, t, s)f0(s) ds,   (3.4)

where

G(ω, t, s) = (1/(1 − e^ω)) e^{ω(1+t−s)} if 0 ≤ t ≤ s ≤ 1,   and   G(ω, t, s) = (1/(1 − e^ω)) e^{ω(t−s)} if 0 ≤ s < t ≤ 1,

is the Green function to the problem (3.3), cf. [70].
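A quick quadrature check of formula (3.4), with assumed test data: for ω = −1 and f0 ≡ 1, the 1-periodic solution of (3.3) is the constant −1/ω = 1, so integrating G(ω, t, ·) over [0, 1] should return 1 for every t.

```python
import numpy as np

# Numerical check of (3.4): for f0 ≡ 1 and omega = -1 the 1-periodic solution
# of x'(t) = omega*x(t) + f0(t) is the constant x ≡ -1/omega = 1.
omega = -1.0

def G(t, s):
    # the periodic Green function of problem (3.3)
    pref = 1.0 / (1.0 - np.exp(omega))
    return pref * np.where(t <= s,
                           np.exp(omega * (1.0 + t - s)),
                           np.exp(omega * (t - s)))

s = np.linspace(0.0, 1.0, 200001)
ds = s[1] - s[0]
xs = [float(np.sum(G(t, s)) * ds) for t in (0.0, 0.3, 0.7, 1.0)]
print([round(v, 3) for v in xs])
```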
Now consider the vector equation

w′(t) = Aw(t) + f(t),   w(0) = w(1),

with an f ∈ PF. Then a solution of this equation is given by

w(t) = ∫₀¹ G(A, t, s)f(s) ds.   (3.5)

Since A is a diagonalizable matrix, we have

G(A, t, s) = Σ_{k=1}^{n} G(λk(A), t, s)Pk,

where Pk are the Riesz projections and λk (A) are the eigenvalues of A (see Section
2.7).
By Corollary 2.7.5,

‖G(A, t, s)‖n ≤ γ(A) max_k |G(λk(A), t, s)|.

Consequently,

∫₀¹ ∫₀¹ ‖G(A, t, s)‖n² ds dt ≤ JA²,

where

JA = γ(A) max_k (∫₀¹ ∫₀¹ |G(λk(A), t, s)|² ds dt)^{1/2}.

So the operator Ĝ defined by

(Ĝf)(t) = ∫₀¹ G(A, t, s)f(s) ds

is a Hilbert-Schmidt operator with the Hilbert-Schmidt norm N2(Ĝ) ≤ JA. By the Parseval equality we can write |Ĝ|PF ≤ JA.
Rewrite (3.1) as
$$y(t) = \int_0^1 G(A, t, s)\, [F y](s)\, ds.$$

Hence condition (3.2) implies the inequality


|y|P F ≤ JA (q|y|P F + l).
Due to the Schauder fixed point theorem, we thus arrive at the following result.
224 Chapter 12. Forced Oscillations in Vector Semi-Linear Equations

Theorem 12.3.1. Let the conditions (3.2) and

JA q < 1

hold. Then (3.1) has a nontrivial solution y. Moreover, it satisfies the inequality

$$|y|_{PF} \le \frac{J_A\, l}{1 - J_A q}.$$

12.4 Comments
The material of this chapter is adapted from the paper [28].
Periodic solutions (forced periodic oscillations) of nonlinear functional dif-
ferential equations (FDEs) have been studied by many authors, see for instance
[10, 66, 79] and references given therein. In many cases, the problem of the ex-
istence of periodic solutions of FDEs is reduced to the solvability of the corre-
sponding operator equations. But for the solvability conditions of the operator
equations, estimates for the Green functions of linear terms of equations are often
required. In the general case, such estimates are unknown. Because of this, the
existence results were established mainly for semilinear coupled systems of FDEs.
Chapter 13

Steady States of Differential


Delay Equations

In this chapter we investigate steady states of differential delay equations. Steady


states of many differential delay equations are described by the equations of the
type
F0 (x) = 0,
where F0 : Cn → Cn is a function satisfying various conditions. For example
consider the equation
ẏ(t) = F1 (y(t), y(t − h)),
where F1 maps Cn × Cn into Cn . Then the condition y(t) ≡ x ∈ Cn yields the
equation F1 (x, x) = 0. So in this case F0 (x) = F1 (x, x).

13.1 Systems of semilinear equations


In this chapter k.k is the Euclidean norm and

$$\Omega(r; \mathbb{C}^n) := \{x \in \mathbb{C}^n : \|x\| \le r\}$$
for a positive r ≤ ∞.
Let us consider in Cn the nonlinear equation

Ax = F (x), (1.1)

where A is an invertible matrix, and F continuously maps Ω(r; Cn ) into Cn .


Assume that there are positive constants q and l, such that

kF (h)k ≤ qkhk + l (h ∈ Ω(r; Cn )). (1.2)



Lemma 13.1.1. Under condition (1.2) with r < ∞, let

kA−1 k(qr + l) ≤ r. (1.3)

Then equation (1.1) has at least one solution x ∈ Ω(r; Cn ), satisfying the inequality

$$\|x\| \le \frac{\|A^{-1}\|\, l}{1 - q \|A^{-1}\|}. \qquad (1.4)$$

Proof. Set
Ψ(y) = A−1 F (y) (y ∈ Cn ).

Hence,

kΨ(y)k ≤ kA−1 k(qkyk + l) ≤ kA−1 k(qr + l) ≤ r (y ∈ Ω(r; Cn )). (1.5)

So due to the Brouwer Fixed Point Theorem, equation (1.1) has a solution. Moreover, due to (1.3),
kA−1 kq < 1.

Now, using (1.5), we easily get (1.4). 
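A toy instance of Lemma 13.1.1 (the data below are hypothetical, chosen for the sketch): with A = diag(2, 3) and F(x) = (0.5 sin x₁ + 1, 0.5 sin x₂ + 1) one has ‖F(h)‖ ≤ q‖h‖ + l with q = 0.5, l = √2, and ‖A⁻¹‖ = 0.5, so condition (1.3) holds with r = 1. The lemma itself only needs Brouwer's theorem, but here additionally q‖A⁻¹‖ < 1, so plain iteration x → A⁻¹F(x) converges.

```python
import math

q, l = 0.5, math.sqrt(2.0)
Ainv_norm = 0.5                      # ||A^{-1}|| for A = diag(2, 3)

def F(x):
    # |F_i| <= 0.5|x_i| + 1, hence ||F(x)|| <= 0.5||x|| + sqrt(2)
    return (0.5 * math.sin(x[0]) + 1.0, 0.5 * math.sin(x[1]) + 1.0)

x = (0.0, 0.0)
for _ in range(200):
    fx = F(x)
    x = (fx[0] / 2.0, fx[1] / 3.0)   # x = A^{-1} F(x)

norm_x = math.hypot(x[0], x[1])
residual = math.hypot(2 * x[0] - F(x)[0], 3 * x[1] - F(x)[1])
assert residual < 1e-12                               # A x = F(x)
assert norm_x <= Ainv_norm * l / (1 - q * Ainv_norm)  # estimate (1.4)
```

The final assertion is exactly inequality (1.4) with the constants of this example.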

Put
$$R(A) = \sum_{k=0}^{n-1} \frac{g^k(A)}{\sqrt{k!}\; d_0^{k+1}(A)},$$

where g(A) is defined in Section 2.3, d0 (A) is the lower spectral radius. That is,
d0 (A) is the minimum of the absolute values of the eigenvalues of A:

$$d_0(A) := \min_{k=1,\dots,n} |\lambda_k(A)|.$$

Due to Corollary 2.3.3,


kA−1 k ≤ R(A).

Now the previous lemma implies

Theorem 13.1.2. Under condition (1.2), let

R(A)(qr + l) ≤ r.

Then equation (1.1) has at least one solution x ∈ Ω(r; Cn ), satisfying the inequality

$$\|x\| \le \frac{R(A)\, l}{1 - q R(A)}.$$
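The ingredient ‖A⁻¹‖ ≤ R(A) of Theorem 13.1.2 can be checked on a concrete 2×2 matrix (illustrative only). For A = [[2, 1], [0, 3]] the eigenvalues are the diagonal entries 2 and 3, so d₀(A) = 2, N₂²(A) = 14, g(A) = √(14 − 13) = 1, and R(A) = 1/d₀ + g/d₀² = 0.75.

```python
import math

a, b, c, d = 2.0, 1.0, 0.0, 3.0      # the matrix A, row by row
det = a * d - b * c

# eigenvalues of this triangular matrix are its diagonal entries
d0 = min(abs(a), abs(d))
g = math.sqrt((a*a + b*b + c*c + d*d) - (a*a + d*d))   # = |b| here
R = 1.0 / d0 + g / d0**2

# spectral norm of B = A^{-1}: largest eigenvalue of B^T B for 2x2
B = [[d / det, -b / det], [-c / det, a / det]]
t = sum(B[i][j] ** 2 for i in range(2) for j in range(2))  # trace(B^T B)
dd = (B[0][0] * B[1][1] - B[0][1] * B[1][0]) ** 2          # det(B^T B)
inv_norm = math.sqrt((t + math.sqrt(t * t - 4 * dd)) / 2)

assert inv_norm <= R + 1e-12          # ||A^{-1}|| <= R(A)
```

Here ‖A⁻¹‖ ≈ 0.543 while R(A) = 0.75, so the bound is conservative but of the right order.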

13.2 Essentially nonlinear systems


Consider the coupled system
$$\sum_{k=1}^{n} a_{jk}(x)\, x_k = f_j \qquad (j = 1, \dots, n;\; x = (x_j)_{j=1}^{n} \in \mathbb{C}^n), \qquad (2.1)$$


where
ajk : Ω(r; Cn ) → C (j, k = 1, ..., n)
are continuous functions and f = (fj ) ∈ Cn is given. We can write out system
(2.1) in the form
A(x)x = f (2.2)
with the matrix
A(z) = (ajk (z))nj,k=1 (z ∈ Ω(r; Cn )).
Theorem 13.2.1. Let

$$\inf_{z \in \Omega(r;\mathbb{C}^n)} d_0(A(z)) \equiv \inf_{z \in \Omega(r;\mathbb{C}^n)} \min_{k} |\lambda_k(A(z))| > 0$$

and
$$\theta_r := \sup_{z \in \Omega(r;\mathbb{C}^n)} \sum_{k=0}^{n-1} \frac{g^k(A(z))}{\sqrt{k!}\; d_0^{k+1}(A(z))} \le \frac{r}{\|f\|}. \qquad (2.3)$$

Then system (2.1) has at least one solution x ∈ Ω(r; Cn ), satisfying the estimate

kxk ≤ θr kf k. (2.4)

Proof. Thanks to Corollary 2.3.3,

kA−1 (z)k ≤ θr (z ∈ Ω(r; Cn )).

Rewrite (2.2) as
x = Ψ(x) ≡ A−1 (x)f. (2.5)
Due to (2.3)
kΨ(z)k ≤ θr kf k ≤ r (z ∈ Ω(r; Cn )).
So Ψ maps Ω(r; Cn ) into itself. Now the required result is due to the Brouwer
Fixed Point theorem. 
Corollary 13.2.2. Let matrix A(z) be normal:

A∗ (z)A(z) = A(z)A∗ (z) (z ∈ Ω(r; Cn )).



If, in addition,
kf k ≤ r inf d0 (A(z)), (2.6)
z∈Ω(r;Cn )

then system (2.1) has at least one solution x satisfying the estimate
$$\|x\| \le \frac{\|f\|}{\inf_{z\in\Omega(r;\mathbb{C}^n)} d_0(A(z))}.$$

Indeed, if A(z) is normal, then g(A(z)) ≡ 0 and
$$\theta_r = \frac{1}{\inf_{z\in\Omega(r;\mathbb{C}^n)} d_0(A(z))}.$$

Corollary 13.2.3. Let matrix A(z) be upper triangular:


$$\sum_{k=j}^{n} a_{jk}(x)\, x_k = f_j \qquad (j = 1, \dots, n). \qquad (2.7)$$

In addition, with the notations
$$\tau(A(z)) := \Bigl( \sum_{1 \le j < k \le n} |a_{jk}(z)|^2 \Bigr)^{1/2}$$
and
$$\tilde a(z) := \min_{j=1,\dots,n} |a_{jj}(z)|,$$

let
$$\sup_{z \in \Omega(r;\mathbb{C}^n)} \sum_{k=0}^{n-1} \frac{\tau^k(A(z))}{\sqrt{k!}\; \tilde a^{k+1}(z)} \le \frac{r}{\|f\|}. \qquad (2.8)$$
Then system (2.7) has at least one solution x ∈ Ω(r; Cn ).
Indeed, this result is due to Theorem 13.2.1, since the eigenvalues of a trian-
gular matrix are its diagonal entries, and

g(A(z)) ≤ τ (A(z)) and d0 (A(z)) = ã(z).

A similar result is true for lower triangular systems.


Note that according to the relation
$$g(A_0) \le \sqrt{1/2}\; N_2(A_0^* - A_0)$$
for any constant matrix A₀ (see Section 2.3), in the general case g(A(z)) can be replaced by the simply calculated quantity
$$v(A(z)) = \sqrt{1/2}\; N_2(A^*(z) - A(z)) = \Bigl[ \frac{1}{2} \sum_{j,k=1}^{n} |a_{jk}(z) - \overline{a_{kj}(z)}|^2 \Bigr]^{1/2}. \qquad (2.9)$$

Example 13.2.4. Let us consider the system

aj1 (x)x1 + aj2 (x)x2 = fj (x = (x1 , x2 ) ∈ C2 ), (2.10)


where fj are given numbers, and continuous scalar-valued functions

ajk (j, k = 1, 2)

are defined on
Ω(r; C2 ) ≡ {z ∈ C2 : kzk ≤ r}.
Due to (2.9)
g(A(z)) ≤ |a21 (z) − a12 (z)|.
In addition, λ1,2 (A(z)) are the roots of the polynomial

y 2 − t(z)y + b(z),

where
t(z) = T race (A(z)) = a11 (z) + a22 (z)
and
b(z) = det (A(z)) = a11 (z)a22 (z) − a12 (z)a21 (z).
Then
$$d_0(A(z)) = \min_{k=1,2} |\lambda_k(A(z))|.$$

Moreover, if inf z∈Ω(r;C2 ) d0 (A(z)) > 0, then

$$\theta_r \le \tilde\theta_r := \sup_{z\in\Omega(r;\mathbb{C}^2)} \Bigl[ \frac{1}{d_0(A(z))} + \frac{|a_{21}(z) - a_{12}(z)|}{d_0^2(A(z))} \Bigr].$$

If
kf kθ̃r ≤ r,
then due to Theorem 13.2.1, system (2.10) has a solution.
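An illustrative numerical instance of the fixed-point map (2.5), x → A(x)⁻¹f, for a 2×2 system of type (2.10). The entries of A(z) below are hypothetical; they depend only mildly on z, so d₀(A(z)) stays well away from 0 and the iteration converges to a solution of A(x)x = f.

```python
import math

f = (1.0, 1.0)

def A(x):
    # hypothetical entries: diagonal dominated, mildly x-dependent
    return [[3.0 + 0.1 * math.sin(x[0]), 0.2],
            [-0.2, 3.0 + 0.1 * math.sin(x[1])]]

def solve2(M, rhs):
    # direct 2x2 solve via Cramer's rule
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return ((M[1][1] * rhs[0] - M[0][1] * rhs[1]) / det,
            (M[0][0] * rhs[1] - M[1][0] * rhs[0]) / det)

x = (0.0, 0.0)
for _ in range(100):
    x = solve2(A(x), f)              # x = Psi(x) = A(x)^{-1} f

M = A(x)
res = math.hypot(M[0][0] * x[0] + M[0][1] * x[1] - f[0],
                 M[1][0] * x[0] + M[1][1] * x[1] - f[1])
assert res < 1e-12                   # x solves A(x) x = f
assert math.hypot(*x) < 0.5          # x stays in a small ball Omega(r)
```

Theorem 13.2.1 only asserts existence via Brouwer's theorem; the iteration above happens to converge here because A(z) varies slowly in z.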

13.3 Nontrivial steady states


Consider the system

Fj (x) = 0 (j = 1, ..., n; x = (xj )nj=1 ∈ Cn ), (3.1)

where functions Fj : Ω(r; Cn ) → C admit the representation


$$F_j(x) = \psi_j(x) \Bigl( \sum_{k=1}^{n} a_{jk}(x)\, x_k - f_j \Bigr) \qquad (j = 1, \dots, n). \qquad (3.2)$$

Here ψj : Ω(r; Cn ) → C are functions with the property ψj (0) = 0, ajk :


Ω(r; Cn ) → C are continuous functions, and fj (j, k = 1, ..., n) are given num-
bers. Again put
A(z) = (ajk (z))nj,k=1 (z ∈ Ω(r; Cn ))

assuming that it is invertible on Ω(r; Cn ).


In the rest of this section it is assumed that at least one of the numbers fj is
non-zero. Then
kA(x̂)kkx̂k ≥ kf k > 0

for any solution x̂ ∈ Ω(r; Cn ) of equation (2.2) (if it exists). So x̂ is non-trivial.


Obviously, (3.1) has the trivial (zero) solution. Moreover, if (2.2) has a solution
x̂ ∈ Ω(r; Cn ), then it is simultaneously a solution of (3.1). Now Theorem 13.2.1
implies

Theorem 13.3.1. Let condition (2.3) hold. Then system (3.1) with Fj defined by
(3.2) has at least two solutions: the trivial solution and a nontrivial one satisfying
inequality (2.4).

In addition, the previous theorem and Corollary 13.2.2 yield

Corollary 13.3.2. Let matrix A(z) be normal for any z ∈ Ω(r; Cn ) and condition
(2.6) hold. Then system (3.1) has at least two solutions: the trivial solution and a
nontrivial one belonging to Ω(r; Cn ).

Theorem 13.3.1 and Corollary 13.2.3 imply

Corollary 13.3.3. Let matrix A(z) be upper triangular for any z ∈ Ω(r; Cn ). Then
under condition (2.8), system (3.1) has at least two solutions: the trivial solution
and a nontrivial one belonging to Ω(r; Cn ).

The similar result is valid if A(z) is lower triangular.


Example 13.3.4. Let us consider the system

ψj (x1 , x2 )(aj1 (x)x1 + aj2 (x)x2 − fj ) = 0 (x = (x1 , x2 ) ∈ C2 ) (3.3)

where the functions ψj : Ω(r; C2) → C have the property ψj(0, 0) = 0. In addition, fj, ajk (j, k = 1, 2) are the same as in Example 13.2.4. Assume that at least one of the numbers f1, f2 is non-zero. Then due to Theorem 13.3.1, under condition (2.3), system (3.3) has in Ω(r; C2) at least two solutions.

13.4 Positive steady states


Consider the coupled system
$$u_j - \sum_{k=1,\, k\ne j}^{n} a_{jk}(u)\, u_k = F_j(u) \qquad (j = 1, \dots, n), \qquad (4.1)$$

where
ajk , Fj : Ω(r; Cn ) → R (j 6= k; j, k = 1, ..., n)
are continuous functions. For instance, the coupled system
$$\sum_{k=1}^{n} w_{jk}(u)\, u_k = f_j \qquad (j = 1, \dots, n), \qquad (4.2)$$

where fj are given real numbers, wjk : Ω(r; Cn ) → R are continuous functions,
can be reduced to (4.1) with

$$a_{jk}(u) \equiv -\frac{w_{jk}(u)}{w_{jj}(u)} \qquad\text{and}\qquad F_j(u) \equiv \frac{f_j}{w_{jj}(u)},$$
provided
wjj (z) 6= 0 (z ∈ Ω(r; Cn ); j = 1, ..., n). (4.3)
Put
$$c_r(F) = \sup_{z \in \Omega(r;\mathbb{C}^n)} \|F(z)\|.$$

Denote
$$V_+(z) := \begin{pmatrix} 0 & a_{12}(z) & \dots & a_{1n}(z) \\ 0 & 0 & \dots & a_{2n}(z) \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \dots & 0 \end{pmatrix}, \qquad V_-(z) := \begin{pmatrix} 0 & \dots & 0 & 0 \\ a_{21}(z) & \dots & 0 & 0 \\ \vdots & \ddots & & \vdots \\ a_{n1}(z) & \dots & a_{n,n-1}(z) & 0 \end{pmatrix}.$$

Recall that N₂(A) is the Frobenius norm of a matrix A. So
$$N_2^2(V_+(z)) = \sum_{j=1}^{n-1} \sum_{k=j+1}^{n} a_{jk}^2(z), \qquad N_2^2(V_-(z)) = \sum_{j=2}^{n} \sum_{k=1}^{j-1} a_{jk}^2(z).$$

In addition, put
$$\tilde J_{R^n}(V_\pm(z)) \equiv \sum_{k=0}^{n-1} \frac{N_2^k(V_\pm(z))}{\sqrt{k!}}.$$

Theorem 13.4.1. Let the conditions
$$\alpha_r \equiv \max\Bigl\{ \inf_{z\in\Omega_r}\Bigl( \frac{1}{\tilde J_{R^n}(V_-(z))} - \|V_+(z)\| \Bigr),\; \inf_{z\in\Omega_r}\Bigl( \frac{1}{\tilde J_{R^n}(V_+(z))} - \|V_-(z)\| \Bigr) \Bigr\} > 0$$
and
$$c_r(F) < r\, \alpha_r$$
hold. Then system (4.1) has at least one solution u ∈ Ω(r; Cn) satisfying the inequality
$$\|u\| \le \frac{c_r(F)}{\alpha_r}.$$
In addition, let ajk(z) ≥ 0 and Fj(z) ≥ 0 (j ≠ k; j, k = 1, ..., n) for all z from the ball
$$\Bigl\{ z \in \mathbb{R}^n : \|z\| \le \frac{c_r(F)}{\alpha_r} \Bigr\}.$$
Then the solution u of (4.1) is non-negative.
For the proof see [34].

13.5 Systems with differentiable entries


Consider the system

fk(y1, ..., yn) = hk ∈ C (k = 1, ..., n),

where fj(x) = fj(x1, x2, ..., xn), fj(0) = 0 (j = 1, ..., n) are scalar-valued continuously differentiable functions defined on Cn. Put F(x) = (fj(x))ⁿⱼ₌₁ and
$$F'(x) = \Bigl( \frac{\partial f_i(x)}{\partial x_j} \Bigr)_{i,j=1}^{n}.$$

That is, F 0 (x) is the Jacobian matrix. Rewrite the considered system as

F (y) = h ∈ Cn . (5.1)

For a positive number r ≤ ∞ assume that
$$\rho_0(r) \equiv \min_{x\in\Omega(r;\mathbb{C}^n)} d_0(F'(x)) = \min_{x\in\Omega(r;\mathbb{C}^n)} \min_{k} |\lambda_k(F'(x))| > 0, \qquad (5.2)$$

and
$$\tilde g_0(r) = \max_{x\in\Omega(r;\mathbb{C}^n)} g(F'(x)) < \infty. \qquad (5.3)$$

Finally put
$$p(F, r) \equiv \sum_{k=0}^{n-1} \frac{\tilde g_0^k(r)}{\sqrt{k!}\; \rho_0^{k+1}(r)}.$$

Theorem 13.5.1. Let fj (x) = fj (x1 , x2 , ..., xn ), fj (0) = 0 (j = 1, ..., n) be scalar-


valued continuously differentiable functions, defined on Ω(r; Cn). Assume that conditions (5.2) and (5.3) hold. Then for any h ∈ Cn with the property
$$\|h\| \le \frac{r}{p(F, r)},$$
there is a solution y ∈ Cn of system (5.1) which is subordinate to the inequality
$$\|y\| \le p(F, r)\, \|h\|.$$

For the proof of this result see [22].

13.6 Comments
This chapter is based on the papers [22] and [34].
Chapter 14

Multiplicative Representations
of Solutions

In this chapter we suggest a representation for solutions of differential delay equa-


tions via multiplicative operator integrals.

14.1 Preliminary results


Let X be a Banach space with the unit operator I, and A be a bounded linear
operator acting in X.
A family P (t) of projections in X defined on a finite real segment [a, b] is
called a resolution of the identity, if it satisfies the following conditions:

1) P (a) = 0, P (b) = I, 2) P (t)P (s) = P (min{t, s}) (t, s ∈ [a, b])

and 3) supt∈[a,b] kP (t)kX < ∞.


A resolution of the identity P (t) is said to be the spectral resolution of A, if

P (t)AP (t) = AP (t) (t ∈ [a, b]). (1.1)

We will say that A has a vanishing diagonal, if it has a spectral resolution P (t) (a ≤
t ≤ b), and with the notation
$$\Delta P_k = \Delta P_{k,n} = P(t_k^{(n)}) - P(t_{k-1}^{(n)}) \qquad (k = 1, \dots, n;\; a = t_0^{(n)} < t_1^{(n)} < \dots < t_n^{(n)} = b),$$

the sums
$$D_n := \sum_{k=1}^{n} \Delta P_k\, A\, \Delta P_k$$
tend to zero in the operator norm as max_k |t_k^{(n)} − t_{k−1}^{(n)}| → 0.

Lemma 14.1.1. Let a bounded linear operator A acting in X have a spectral res-
olution P (t) (a ≤ t ≤ b) and a vanishing diagonal. Then the sequence of the
operators
$$Z_n = \sum_{k=1}^{n} P(t_{k-1})\, A\, \Delta P_k$$
converges to A in the operator norm as max_k |t_k^{(n)} − t_{k−1}^{(n)}| tends to zero.
Proof. Thanks to (1.1), ΔPⱼ A ΔPₖ = 0 for j > k. So
$$A = \sum_{j=1}^{n} \sum_{k=1}^{n} \Delta P_j A \Delta P_k = \sum_{k=1}^{n} \sum_{j=1}^{k} \Delta P_j A \Delta P_k = Z_n + D_n.$$
By the assumption Dₙ → 0 as n → ∞. Thus Zₙ − A = −Dₙ → 0 as n → ∞, as


claimed. 

For bounded operators A1 , A2 , ..., Aj put



$$\overrightarrow{\prod_{1 \le k \le j}} A_k \equiv A_1 A_2 \cdots A_j.$$

That is, the arrow over the symbol of the product means that the indexes of the
co-factors increase from left to right.
Lemma 14.1.2. Let {P̃k }nk=0 (n < ∞) be an increasing chain of projections in X.
That is,
$$\{0\} = \mathrm{range}\, \tilde P_0 \subset \mathrm{range}\, \tilde P_1 \subset \dots \subset \mathrm{range}\, \tilde P_n = X,$$
i.e., P̃₀ = 0 and P̃ₙ = I.
Suppose a linear operator W in X satisfies the relation

P̃k−1 W P̃k = W P̃k for k = 1, 2, . . . , n.

Then W is a nilpotent operator and



$$(I - W)^{-1} = \overrightarrow{\prod_{2 \le k \le n}} (I + W_k)$$

with Wk = W (P̃k − P̃k−1 ).


Proof. Taking into account that P̃n = I and W P̃1 = 0, we can write down

W j = W P̃n−j+1 W P̃n−j+2 ...P̃n−1 W P̃n .

Hence
W n = 0, (1.2)

i.e. W is a nilpotent operator. Besides,


$$W \tilde P_m = \sum_{k=1}^{m} W_k \qquad (m \le n)$$
and thus
$$W^j = \sum_{2 \le k_1 < k_2 < \dots < k_j \le n} W_{k_1} W_{k_2} \cdots W_{k_j}.$$

Furthermore, we have

$$\overrightarrow{\prod_{2 \le k \le n}} (I + W_k) = I + \sum_{k=2}^{n} W_k + \sum_{2 \le k_1 < k_2 \le n} W_{k_1} W_{k_2} + \dots + W_2 W_3 \cdots W_n.$$

Simple calculations show that



$$\overrightarrow{\prod_{2 \le k \le n}} (I + W_k) = \sum_{j=0}^{n-1} W^j.$$

But thanks to (1.2) it follows that


$$(I - W)^{-1} = \sum_{j=0}^{n-1} W^j.$$

This proves the result. 
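Lemma 14.1.2 can be verified directly in a finite-dimensional model (an illustration with invented entries): take X = R⁴, P̃ₖ the orthogonal projection onto the first k coordinates, and W strictly upper triangular, so that P̃ₖ₋₁WP̃ₖ = WP̃ₖ. Then (I − W)⁻¹ should equal the ordered product (I + W₂)(I + W₃)(I + W₄) with Wₖ = W(P̃ₖ − P̃ₖ₋₁).

```python
n = 4
W = [[0.0, 1.0, 2.0, 3.0],
     [0.0, 0.0, 4.0, 5.0],
     [0.0, 0.0, 0.0, 6.0],
     [0.0, 0.0, 0.0, 0.0]]          # strictly upper triangular, W^4 = 0

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I = [[float(i == j) for j in range(n)] for i in range(n)]

def Wk(k):
    # W_k = W (P_k - P_{k-1}): keeps only column k-1 (0-based) of W
    return [[W[i][j] if j == k - 1 else 0.0 for j in range(n)] for i in range(n)]

prod = I
for k in range(2, n + 1):            # ordered product, indices increasing
    factor = [[I[i][j] + Wk(k)[i][j] for j in range(n)] for i in range(n)]
    prod = matmul(prod, factor)

IminusW = [[I[i][j] - W[i][j] for j in range(n)] for i in range(n)]
check = matmul(IminusW, prod)        # should be the identity
assert all(abs(check[i][j] - I[i][j]) < 1e-12
           for i in range(n) for j in range(n))
```

The same computation with the factors in decreasing order would fail, which makes the role of the arrow over the product visible.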

Furthermore, let P (t) be a resolution of the identity in X and let F (t) be a


function defined on [a, b], whose values are bounded operators in X and continuous
in t in the operator topology. Introduce the right multiplicative integral
$$\int_{[a,b]}^{\to} (I + F(s)\, dP(s))$$

as the limit in the operator norm (if it exists) of the sequence of the products

$$\overrightarrow{\prod_{1 \le k \le n}} (I + F(t_k)\Delta P_k),$$

as max_k |t_k^{(n)} − t_{k−1}^{(n)}| tends to zero. In particular,
$$\int_{[a,b]}^{\to} (I + A\, dP(s))$$

denotes the limit in the operator norm of the sequence of the products

$$\overrightarrow{\prod_{1 \le k \le n}} (I + A\, \Delta P_k).$$

Similarly, the left multiplicative integral is defined as the limit in the operator
norm (if it exists) of the sequence of the products

$$\overleftarrow{\prod_{1 \le k \le n}} (I + F(t_k)\Delta P_k) = (I + F(t_n)\Delta P_n)(I + F(t_{n-1})\Delta P_{n-1}) \cdots (I + F(t_1)\Delta P_1).$$


The left multiplicative integral is denoted by
$$\int_{[a,b]}^{\leftarrow} (I + F(s)\, dP(s)).$$

Lemma 14.1.3. Let a linear operator A have a continuous spectral resolution P (t)
defined on a segment [a,b] and a vanishing diagonal. Then
$$(I - A)^{-1} = \int_{[a,b]}^{\to} (I + A\, dP(t)).$$

Proof. By Lemma 14.1.1, A is the limit in the operator norm of Zn . Due to Lemma
14.1.2,
$$(I - Z_n)^{-1} = \overrightarrow{\prod_{1 \le k \le n}} (I + Z_n \Delta P_k).$$

Hence letting n → ∞, we get the required result. 

14.2 Volterra equations


Let Cn be a complex Euclidean space and C(0, T ) = C([0, T ], Cn ) (T < ∞).
Consider in Cn the equation
$$x(t) - \int_0^t K(t, s)\, x(s)\, ds = f(t) \qquad (2.1)$$

where K(t, s) is a matrix kernel, such that

$$K(t, s) = 0 \qquad (0 \le t < s \le T). \qquad (2.2)$$

In addition, K(t, s) is continuous in t and Riemann integrable in s, and f is Riemann integrable. It is not hard to check that the operator V defined by
$$(V w)(t) = \int_0^t K(t, s)\, w(s)\, ds \qquad (w \in C(0, T)) \qquad (2.3)$$
is a compact operator in C(0, T).



For each τ ∈ (0, T) we define the projection Q(τ) by
$$(Q(\tau) w)(t) := \begin{cases} w(t) & \text{if } 0 \le t \le \tau, \\ 0 & \text{if } \tau < t \le T. \end{cases}$$
In addition, Q(0) = 0 and Q(T) = I, the identity operator in C(0, T). Then,

Q(τ )V Q(τ ) = Q(τ )V, τ ∈ [0, T ],

V dQ(τ)w(t) = 0 if t ≠ τ and
$$V\, dQ(\tau)\, w(\tau) = K(\tau, \tau)\, w(\tau)\, d\tau \qquad (w \in C(0, T)).$$

If we put P̂ (τ ) = I − Q(T − τ ), then we obtain

(I − P̂ (τ ))V (I − P̂ (τ )) = (I − P̂ (τ ))V.

Or P̂ (τ )V P̂ (τ ) = V̂ P (τ ). Consequently, P̂ (t) is the spectral resolution of V . More-


over, relation (2.2) and simple calculations show that V has a vanishing diagonal
with respect to P̂ (τ ).
So we can use Lemma 14.1.3. It asserts that (I − V )−1 is the limit in the
operator norm of the products

$$\overrightarrow{\prod_{2 \le k \le m}} (I + V \Delta \hat P(t_k))$$

which are equivalent to the products



$$\overleftarrow{\prod_{1 \le k \le m-1}} (I + V \Delta Q(t_k)).$$

So we have proved the following result.

Theorem 14.2.1. One has


$$(I - V)^{-1} = \int_{[0,T]}^{\leftarrow} (I + V\, dQ(s)).$$
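A discrete illustration of equation (2.1) (the kernel and data below are chosen for the sketch): with K ≡ 1 and f ≡ 1 the solution of x(t) − ∫₀ᵗ x(s) ds = 1 is x(t) = eᵗ. On a grid the operator V becomes a strictly lower triangular matrix, so I − V is inverted by forward substitution — the discrete analogue of the ordered product in Theorem 14.2.1.

```python
import math

T, N = 1.0, 4000
h = T / N
x = [0.0] * (N + 1)
acc = 0.0                         # running left-rectangle sum of x
for i in range(N + 1):
    x[i] = 1.0 + h * acc          # x_i = 1 + h * sum_{j < i} x_j
    acc += x[i]

# discrete solution reproduces x(1) = e up to O(h)
assert abs(x[N] - math.e) < 5e-3
```

Each x[i] depends only on earlier values, so no linear system ever has to be stored or inverted; this is precisely the quasinilpotence of the Volterra operator at work.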

14.3 Differential delay equations


Consider the equation
$$\dot y(t) = \int_0^{\eta} A(t, s)\, y(t - s)\, ds + \sum_{k=0}^{m} B_k(t)\, y(t - h_k) + f(t) \qquad (t \ge 0;\; m < \infty) \qquad (3.1)$$

where f ∈ C(0, T ), and 0 ≤ h0 < h1 < ... < hm ≤ η < ∞ are constants, Bk (t)
are continuous matrices and A(t, s) is Riemann integrable in s and continuous in
t. Take the zero initial condition

y(t) = 0, t ≤ 0 (3.2)

and integrate (3.1). Then
$$y(t) = \int_0^t \Bigl[ \int_0^{\eta} A(t_1, s)\, y(t_1 - s)\, ds + \sum_{k=0}^{m} B_k(t_1)\, y(t_1 - h_k) \Bigr] dt_1 + f_1(t), \qquad (3.3)$$
where
$$f_1(t) = \int_0^t f(t_1)\, dt_1.$$

Take into account that according to (3.2),
$$\int_0^{\eta} A(t, s)\, y(t - s)\, ds = \int_0^{t} A(t, s)\, y(t - s)\, ds \qquad (t \le \eta).$$
Put
$$A_1(t, s) := \begin{cases} A(t, s) & \text{if } s \le \eta, \\ 0 & \text{if } s > \eta. \end{cases}$$
Then
$$\int_0^{\eta} A(t, s)\, y(t - s)\, ds = \int_0^{t} A_1(t, s)\, y(t - s)\, ds.$$
Hence,
$$\int_0^t\!\!\int_0^{\eta} A(t_1, s)\, y(t_1 - s)\, ds\, dt_1 = \int_0^t\!\!\int_0^{t_1} A_1(t_1, t_1 - \tau)\, y(\tau)\, d\tau\, dt_1 = \int_0^t K_1(t, \tau)\, y(\tau)\, d\tau,$$
where
$$K_1(t, \tau) = \int_{\tau}^{t} A_1(t_1, t_1 - \tau)\, dt_1.$$
Moreover,
$$\int_0^t B_k(t_1)\, y(t_1 - h_k)\, dt_1 = \int_{h_k}^t B_k(t_1)\, y(t_1 - h_k)\, dt_1 = \int_0^{t - h_k} B_k(\tau + h_k)\, y(\tau)\, d\tau = \int_0^t \hat B_k(t, \tau)\, y(\tau)\, d\tau,$$
where
$$\hat B_k(t, \tau) := \begin{cases} B_k(\tau + h_k) & \text{if } \tau \le t - h_k, \\ 0 & \text{if } \tau > t - h_k. \end{cases}$$

Thus equation (3.1) can be written as equation (2.1) with f = f₁ and
$$K(t, \tau) = K_1(t, \tau) + \sum_{k=0}^{m} \hat B_k(t, \tau).$$

Now we can directly apply Theorem 14.2.1.
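A minimal numerical illustration of the reduction above (scalar case, A ≡ 0, m = 0, with invented data): y′(t) = y(t − 1/2) + 1 with y ≡ 0 for t ≤ 0. Then B̂₀(t, τ) = 1 for τ ≤ t − 1/2 (else 0), f₁(t) = t, and the integrated Volterra equation is solved by marching forward in t. The method of steps gives the exact value y(1) = 9/8.

```python
h_delay = 0.5
T, N = 1.0, 1000
dt = T / N
t = [i * dt for i in range(N + 1)]
y = [0.0] * (N + 1)
for i in range(1, N + 1):
    # kernel K(t_i, tau_j) = 1 if tau_j <= t_i - h_delay, else 0
    s = sum(y[j] for j in range(i) if t[j] <= t[i] - h_delay)
    y[i] = dt * s + t[i]           # y(t) = int_0^t K y dtau + f1(t)

assert abs(y[N] - 1.125) < 1e-2    # exact: y(1) = 9/8
```

On [0, 1/2] the delayed term vanishes and y(t) = t exactly; on [1/2, 1] the scheme reproduces y(t) = t + (t − 1/2)²/2 up to the O(dt) error of the left-rectangle rule.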

14.4 Comments
The results of this chapter are probably new.
Appendix A. The General Form
of Causal Operators

The aim of this appendix is to establish the general form of a linear bounded
causal operator acting in space C(0, T ) of continuous real functions defined on a
finite segment [0, T ] with the sup-norm k.kC(0,T ) .
Let ΣT be the σ-algebra of the Borel subsets of [0, T ].
Lemma 15.1.1. Let A be a bounded linear operator acting in C(0, T ). Then there is
a scalar function m defined on [0, T ] × ΣT additive and having bounded variation
var m(t, .) with respect to the second argument, such that

$$m(\cdot, \Delta) \in C(0, T) \quad (\Delta \in \Sigma_T) \qquad\text{and}\qquad \sup_{t\in[0,T]} \operatorname{var} m(t, \cdot) \le \|A\|_{C(0,T)} \qquad (1)$$

and
$$[Af](t) = \int_0^T f(s)\, m(t, ds) \qquad (0 \le t \le T;\; f \in C(0, T)). \qquad (2)$$
Proof. Let

0 = t0 < t1 < ... < tn = T, Δk = tk − tk−1 (k = 1, ..., n).

Take a piece-wise constant function,


$$f_n = \sum_{k=1}^{n} c_k\, \chi(\Delta_k),$$

where χ is the characteristic function:



$$(\chi(\Delta))(t) = \begin{cases} 1 & \text{if } t \in \Delta, \\ 0 & \text{if } t \notin \Delta \end{cases}$$

for any Δ ∈ ΣT . It is clear that A can be defined on piece-wise constant functions.


Then
$$(A f_n)(t) = \sum_{k=1}^{n} c_k\, (A\chi(\Delta_k))(t) = \sum_{k=1}^{n} c_k\, m(t, \Delta_k),$$

where m(t, Δ) = (Aχ(Δ))(t). Hence letting maxk |tk − tk−1 | → 0, we get (2).
Clearly,
$$\sum_{k=1}^{n} m(t, \Delta_k) \le \|A\|_{C(0,T)} \Bigl\| \sum_{k=1}^{n} \chi(\Delta_k) \Bigr\|_{C(0,T)} = \|A\|_{C(0,T)} \qquad (t \in (0, T)).$$
So the variation of m(t, ·) is less than or equal to ‖A‖_{C(0,T)}. This completes the


proof. 

Note that the result similar to the preceding lemma is well-known [16].
Put ν(t, s) = m(t, [0, s]) = (Aχ[0, s])(t). Now (2) can be written as
$$[Af](t) = \int_0^T f(s)\, d_s \nu(t, s). \qquad (3)$$
If, in addition, A is positive then ν(t, s) is non-decreasing in s.
Let us turn now to the causal mappings. For all τ ∈ (0, T ) and w ∈ C(0, T ),
let the projections Pτ be defined by

$$(P_\tau w)(t) = \begin{cases} w(t) & \text{if } 0 \le t \le \tau, \\ 0 & \text{if } \tau < t \le T. \end{cases}$$
In addition, P_T = I, P_0 = 0.
We have Pτ C(0, T ) = C(0, τ ). Clearly, for any w ∈ C(0, T ), the function
Pτ w is in B(0, T ), where B(0, T ) is the Banach space of all bounded functions
defined on [0, T ] with the same sup-norm k.kC(0,T ) . Since C(0, T ) is embedded
into B(0, T ), the equality
Pτ APτ w = Pτ Aw (w ∈ C(0, T ); τ ∈ [0, T ]) (4)
makes sense.
Recall that a bounded linear operator A is said to be causal, if (4) is fulfilled
(see Section 1.8).
Theorem 15.1.2. Let A be a bounded causal linear operator acting in C(0, T ). Then
there is a function μ(t, s) defined on [0, T ]2 , having bounded variation with respect
to the second argument, and continuous with respect to the first argument, such
that
$$[Af](t) = \int_0^t f(s)\, d_s \mu(t, s) \qquad (f \in C(0, T),\; 0 \le t \le T).$$
Proof. According to (3) we obtain
$$[P_\tau A f](t) = \int_0^T f(s)\, d_s \nu_\tau(t, s)$$
and
$$[P_\tau A P_\tau f](t) = \int_0^\tau f(s)\, d_s \nu_\tau(t, s) \qquad (0 \le \tau < T),$$
where ν_τ(t, s) = 0 for t > τ and ν_τ(t, s) = ν(t, s) for t ≤ τ. Due to (4) we get
$$\int_0^T f(s)\, d_s \nu_\tau(t, s) = \int_0^\tau f(s)\, d_s \nu_\tau(t, s).$$
Take t = τ. Then
$$\int_0^T f(s)\, d_s \nu_\tau(\tau, s) = \int_0^\tau f(s)\, d_s \nu_\tau(\tau, s).$$
Hence ν_τ(τ, s) = 0 for s > τ. So taking μ(t, s) = ν_t(t, s) we prove the theorem. 

Theorem 15.1.2 appears in [40].


Appendix B. Infinite Block
Matrices

This appendix contains the proofs of the results applied in Chapter 7. It deals with
infinite block matrices having compact off diagonal parts. Bounds for the spectrum
are established and estimates for the norm of the resolvent are proposed. The main
tool in this appendix is the so-called π-triangular operators defined below. The ap-
pendix is organized as follows. It consists of 7 sections. In this section we define the
π-triangular operators. In Section 16.2 some properties of Volterra operators are
considered. In Section 16.3 we establish the norm estimates and multiplicative rep-
resentation for the resolvents of π-triangular operators. Section 16.4 is devoted to
perturbations of block triangular matrices. Bounds for the spectrum of an infinite
block matrix close to a triangular infinite block matrix are presented in Section
16.5. Diagonally dominant infinite block matrices are considered in Section 16.6.
Section 16.7 contains examples illustrating our results.

16.1 Definitions
Let H be a separable complex Hilbert space, with the norm k.k and unit operator
I. All the operators considered in this appendix are linear and bounded. Recall
that σ(A) and Rλ (A) = (A − λI)−1 denote the spectrum and resolvent of an
operator A, respectively.
Recall also that a linear operator V is called quasinilpotent if σ(V) = {0}. A
linear operator is called a Volterra operator, if it is compact and quasinilpotent.
In what follows
π = {Pk , k = 0, 1, 2, ...}
is an infinite chain of orthogonal projections Pk in H, such that

0 = P0 H ⊂ P1 H ⊂ P2 H ⊂ ....

and Pn → I in the strong topology as n → ∞. In addition

$$\sup_{k \ge 1} \dim \Delta P_k H < \infty, \qquad\text{where } \Delta P_k = P_k - P_{k-1}.$$

Let a linear operator A acting in H satisfy the relations

APk = Pk APk (k = 1, 2, ...). (1.1)

That is, Pk are invariant projections for A. Put



$$D := \sum_{k=1}^{\infty} \Delta P_k\, A\, \Delta P_k$$

and V = A − D. Then
A = D + V, (1.2)
and
DPk = Pk D (k = 1, 2, ...), (1.3)
and
Pk−1 V Pk = V Pk (k = 2, 3, ...); V P1 = 0. (1.4)
Definition 16.1.1. Let relations (1.1)-(1.4) hold with a compact operator V . Then
we will say that A is a π-triangular operator, D is a π- diagonal operator and V
is a π-Volterra operator.
Besides, relation (1.2) will be called the π-triangular representation of A, and
D and V will be called the π-diagonal part and π-nilpotent part of A, respectively.

16.2 Properties of π-Volterra operators


Lemma 16.2.1. Let πm = {Qk , k = 1, ..., m; m < ∞}, Qm = I be a finite chain
of projections. Then any operator V satisfying the condition Qk−1 V Qk = V Qk
(k = 2, ..., m), V Q1 = 0 is a nilpotent operator. Namely, V m = 0.
Proof. Since

V m = V m Qm = V m−1 Qm−1 V = V m−2 Qm−2 V Qm−1 V = ... =

V Q1 ...V Qm−2 V Qm−1 V,


we have V^m = 0, as claimed. 

Lemma 16.2.2. Let V be a π-Volterra operator (i.e. it is compact and satisfies


(1.4)). Then V is quasinilpotent.
Proof. Thanks to the definition of a π-Volterra operator and the previous lemma,
V is a limit of nilpotent operators in the operator norm. This and Theorem I.4.1
from [65] prove the lemma. 

Lemma 16.2.3. Let V be a π-Volterra operator and B be π-triangular. Then V B


and BV are π-Volterra operators.

Proof. It is obvious that


Pk−1 BV Pk = Pk−1 BPk−1 V Pk = BPk−1 V Pk = BV Pk .
Similarly Pk−1 V BPk = V BPk . This proves the lemma. 

Lemma 16.2.4. Let A be a π-triangular operator. Let V and D be the π-nilpotent


and π-diagonal parts of A, respectively. Then for any regular point λ of D, the
operators V Rλ (D) and Rλ (D)V are π-Volterra ones.
Proof. Since Pk Rλ (D) = Rλ (D)Pk , the previous lemma ensures the required re-
sult. 

Let Y be a norm ideal of compact linear operators in H. That is, Y is alge-


braically a two-sided ideal, which is complete in an auxiliary norm | ∙ |Y for which
|CB|Y and |BC|Y are both dominated by kCk|B|Y .
In the sequel we suppose that there are positive numbers θk (k ∈ N), with
$$\theta_k^{1/k} \to 0 \quad\text{as } k \to \infty,$$
such that
kV k k ≤ θk |V |kY (2.1)
for an arbitrary Volterra operator
V ∈ Y. (2.2)
Recall that the Schatten - von Neumann ideal SN2p (p = 1, 2, ...) is the ideal of
compact operators with the finite ideal norm
N2p (K) = [T race (K ∗ K)p ]1/2p (K ∈ SN2p ).
Let V ∈ SN2p be a Volterra operator. Then due to Corollary 6.9.4 from [31], we
get
$$\|V^j\| \le \theta_j^{(p)}\, N_{2p}^j(V) \qquad (j = 1, 2, \dots), \qquad (2.3)$$
where
$$\theta_j^{(p)} = \frac{1}{\sqrt{[j/p]!}}$$
and [x] means the integer part of a positive number x. Inequality (2.3) can be
written as
$$\|V^{pk+m}\| \le \frac{N_{2p}^{pk+m}(V)}{\sqrt{k!}} \qquad (k = 0, 1, 2, \dots;\; m = 0, \dots, p - 1). \qquad (2.4)$$
In particular, if V is a Hilbert-Schmidt operator, then
$$\|V^j\| \le \frac{N_2^j(V)}{\sqrt{j!}} \qquad (j = 0, 1, 2, \dots). \qquad (2.5)$$

16.3 Resolvents of π-triangular operators


Lemma 16.3.1. Let A be a π-triangular operator. Then σ(A) = σ(D), where D is
the π-diagonal part of A. Moreover,

$$R_\lambda(A) = R_\lambda(D) \sum_{k=0}^{\infty} (V R_\lambda(D))^k (-1)^k. \qquad (3.1)$$

Proof. Let λ be a regular point of operator D. According to the triangular repre-


sentation (1.2) we obtain
Rλ (A) = (D + V − λI)−1 = Rλ (D)(I + V Rλ (D))−1 .
Operator V Rλ (D) for a regular point λ of operator D is a Volterra one due to
Lemma 16.2.3. Therefore,

$$(I + V R_\lambda(D))^{-1} = \sum_{k=0}^{\infty} (V R_\lambda(D))^k (-1)^k$$

and the series converges in the operator norm. Hence, it follows that λ is a regular
point of A.
Conversely let λ 6∈ σ(A). According to the triangular representation (1.2) we
obtain
Rλ (D) = (A − V − λI)−1 = Rλ (A)(I − V Rλ (A))−1 .
Since V is a π-Volterra operator, for a regular point λ of A, the operator V Rλ(A) is a Volterra
one due to Lemma 16.2.4. So
X∞
(I − V Rλ (A))−1 = (V Rλ (A))k
k=0

and the series converges in the operator norm. Thus,



$$R_\lambda(D) = R_\lambda(A) \sum_{k=0}^{\infty} (V R_\lambda(A))^k.$$

Hence, it follows that λ is a regular point of D. This finishes the proof. 

With V ∈ Y , introduce the function



$$\zeta_Y(x, V) := \sum_{k=0}^{\infty} \theta_k\, |V|_Y^k\, x^{k+1} \qquad (x \ge 0).$$

Corollary 16.3.2. Let A be a π-triangular operator and let its π-nilpotent part V
belong to a norm ideal Y with the property (2.1). Then

$$\|R_\lambda(A)\| \le \zeta_Y(\|R_\lambda(D)\|, V) = \sum_{k=0}^{\infty} \theta_k\, |V|_Y^k\, \|R_\lambda(D)\|^{k+1}$$

for all regular λ of A.



Indeed, according to Lemma 16.2.3 and (2.1),

k(V Rλ (D))k k ≤ θk |V Rλ (D)|kY .

But
|V Rλ (D)|Y ≤ |V |Y kRλ (D)k.
Now the required result is due to (3.1).
Corollary 16.3.2 and inequality (2.3) yield
Corollary 16.3.3. Let A be a π-triangular operator and its π-nilpotent part V ∈
SN2p for some integer p ≥ 1. Then

$$\|R_\lambda(A)\| \le \sum_{k=0}^{\infty} \theta_k^{(p)}\, N_{2p}^k(V)\, \|R_\lambda(D)\|^{k+1} \qquad (\lambda \notin \sigma(A)),$$

where D is the π-diagonal part of A. In particular, if V is a Hilbert-Schmidt


operator, then
$$\|R_\lambda(A)\| \le \sum_{k=0}^{\infty} \frac{N_2^k(V)}{\sqrt{k!}}\, \|R_\lambda(D)\|^{k+1} \qquad (\lambda \notin \sigma(A)).$$

Note that under the condition V ∈ SN₂ₚ, p > 1, inequality (2.4) implies
$$\|R_\lambda(A)\| \le \sum_{j=0}^{p-1} \sum_{k=0}^{\infty} \frac{N_{2p}^{pk+j}(V)}{\sqrt{k!}}\, \|R_\lambda(D)\|^{pk+j+1}. \qquad (3.2)$$

Thanks to the Schwarz inequality, for all x > 0 and a ∈ (0, 1),
$$\Bigl[ \sum_{k=0}^{\infty} \frac{x^k}{\sqrt{k!}} \Bigr]^2 = \Bigl[ \sum_{k=0}^{\infty} a^k \frac{x^k}{a^k \sqrt{k!}} \Bigr]^2 \le \sum_{k=0}^{\infty} a^{2k}\; \sum_{k=0}^{\infty} \frac{x^{2k}}{a^{2k}\, k!} = (1 - a^2)^{-1}\, e^{x^2/a^2}.$$

In particular, take a² = 1/2. Then
$$\sum_{k=0}^{\infty} \frac{x^k}{\sqrt{k!}} \le \sqrt{2}\, e^{x^2}.$$

Now (3.2) implies


Corollary 16.3.4. Let A be a π-triangular operator and its π-nilpotent part V ∈
SN2p for some integer p ≥ 1. Then

kRλ (A)k ≤ ζp (kRλ (D)k, V ) (λ 6∈ σ(A)),



where
$$\zeta_p(x, V) := \sqrt{2}\, \sum_{j=0}^{p-1} N_{2p}^j(V)\, x^{j+1}\, \exp\bigl[ N_{2p}^{2p}(V)\, x^{2p} \bigr] \qquad (x > 0). \qquad (3.3)$$
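The elementary inequality Σₖ xᵏ/√k! ≤ √2·exp(x²) used in (3.3) is easy to check numerically; the sketch below evaluates the series term-by-term (the term is updated multiplicatively to avoid large factorials) for a few sample values of x.

```python
import math

for x in [0.1, 0.5, 1.0, 2.0, 3.0]:
    s, term = 0.0, 1.0               # term = x^k / sqrt(k!)
    for k in range(1, 200):
        s += term
        term *= x / math.sqrt(k)
    # partial sum of a positive series; the bound holds a fortiori
    assert s <= math.sqrt(2.0) * math.exp(x * x)
```

For large x the left-hand side grows roughly like exp(x²/2), so the bound √2·exp(x²) is far from tight but entirely sufficient for the resolvent estimates above.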

Lemma 16.3.5. Let A be a π-triangular operator, whose π-nilpotent part V belongs to a norm ideal Y with the property (2.1). Let B be a bounded operator in H. Then for any μ ∈ σ(B) either μ ∈ σ(D), or
$$\|A - B\|\, \zeta_Y(\|R_\mu(D)\|, V) \ge 1.$$
In particular, if V ∈ SN2p for some integer p ≥ 1, then this inequality holds with
ζY = ζp .
Indeed this result follows from Corollaries 16.3.2 and 16.3.4.
Let us establish the multiplicative representation for the resolvent of a π-
triangular operator. To this end, for bounded linear operators X1 , X2 , ..., Xm and
j < m, denote

$$\overrightarrow{\prod_{j \le k \le m}} X_k := X_j X_{j+1} \cdots X_m.$$
In addition,
$$\overrightarrow{\prod_{j \le k \le \infty}} X_k := \lim_{m\to\infty}\; \overrightarrow{\prod_{j \le k \le m}} X_k$$

if the limit exists in the operator norm.


Lemma 16.3.6. Let π = {Pₖ}ₖ₌₁^∞ be a chain of orthogonal projections and V a π-Volterra operator. Then
$$(I - V)^{-1} = \overrightarrow{\prod_{k=2,3,\dots}} (I + V \Delta P_k). \qquad (3.4)$$

Proof. First let π = {P1 , ..., Pm } be finite. According to Lemma 16.2.1


$$(I - V)^{-1} = \sum_{k=0}^{m-1} V^k. \qquad (3.5)$$

On the other hand,



$$\overrightarrow{\prod_{2 \le k \le m}} (I + V \Delta P_k) = I + \sum_{k=2}^{m} V_k + \sum_{2 \le k_1 < k_2 \le m} V_{k_1} V_{k_2} + \dots + V_2 V_3 \cdots V_m.$$
Here, as above, Vk = V ΔPk . However,
$$\sum_{2 \le k_1 < k_2 \le m} V_{k_1} V_{k_2} = \sum_{2 \le k_1 < k_2 \le m} V \Delta P_{k_1} V \Delta P_{k_2} = \sum_{3 \le k_2 \le m} V P_{k_2 - 1} V \Delta P_{k_2} = \sum_{3 \le k_2 \le m} V^2 \Delta P_{k_2} = V^2.$$
Similarly,
$$\sum_{2 \le k_1 < k_2 < \dots < k_j \le m} V_{k_1} V_{k_2} \cdots V_{k_j} = V^j$$

for j < m. Thus from (3.5) the relation (3.4) follows. The rest of the proof is left
to the reader. 

Theorem 16.3.7. For any π-triangular operator A and a regular λ ∈ C we have
$$R_\lambda(A) = (D - \lambda I)^{-1}\; \overrightarrow{\prod_{2 \le k \le \infty}} \bigl( I - V \Delta P_k (D - \lambda I)^{-1} \Delta P_k \bigr),$$

where D and V are the π-diagonal and π-nilpotent parts of A, respectively.


Proof. Due to Lemma 16.2.4, V Rλ (D) is π-nilpotent. Now the previous lemma
implies

$$(I + V R_\lambda(D))^{-1} = \overrightarrow{\prod_{2 \le k \le \infty}} (I - V R_\lambda(D) \Delta P_k).$$

But Rλ (D)ΔPk = ΔPk Rλ (D). This proves the result. 


Let A be a π-triangular operator and
$$\Pi(A, \lambda) := \|R_\lambda(D)\| \prod_{k=2}^{\infty} \bigl( 1 + \|R_\lambda(D)\Delta P_k\|\, \|V \Delta P_k\| \bigr) < \infty.$$
Then the previous theorem implies the inequality ‖Rλ(A)‖ ≤ Π(A, λ).

16.4 Perturbations of block triangular matrices


Let H = l²(Cⁿ) be the space of sequences h = {hₖ ∈ Cⁿ}ₖ₌₁^∞ with values in the Euclidean space Cⁿ and the norm
$$|h|_{l^2(\mathbb{C}^n)} = \Bigl[ \sum_{k=1}^{\infty} \|h_k\|_n^2 \Bigr]^{1/2},$$

where k ∙ kn is the Euclidean norm in Cn .


Consider the operator defined in l2 (Cn ) by the upper block triangular matrix
 
$$A_+ = \begin{pmatrix} A_{11} & A_{12} & A_{13} & \dots \\ 0 & A_{22} & A_{23} & \dots \\ 0 & 0 & A_{33} & \dots \\ & & & \ddots \end{pmatrix}, \qquad (4.1)$$

where Ajk are n × n-matrices.


So A₊ = D̃ + V₊, where V₊ and D̃ are the strictly upper triangular and diagonal parts of A₊, respectively:
$$V_+ = \begin{pmatrix} 0 & A_{12} & A_{13} & A_{14} & \dots \\ 0 & 0 & A_{23} & A_{24} & \dots \\ 0 & 0 & 0 & A_{34} & \dots \\ & & & & \ddots \end{pmatrix}$$
and D̃ = diag [A₁₁, A₂₂, A₃₃, ...]. Put

$$\eta_n(\lambda) := \sup_{k} \|R_\lambda(A_{kk})\|_n.$$

Lemma 16.4.1. Let A+ be the block triangular matrix defined by (4.1) and V+ be
a compact operator belonging to a norm ideal Y with the property (2.1). Then
σ(A+ ) = σ(D̃), and
|Rλ (A+ )|l2 ≤ ζY (η n (λ), V+ ) (4.2)
for all regular λ of D̃. Moreover, for any bounded operator B acting in l2 (Cn ) and
a μ ∈ σ(B), either μ ∈ σ(D̃), or

qζY (η n (μ), V+ ) ≥ 1

where q := |A+ − B|l2 (Cn ) . In particular, if

V+ ∈ SN2p (p = 1, 2, ...) (4.3)

then ζY = ζp .

Proof. Let Pj , j = 0, 1, 2, ... be projections onto the subspaces of l2 (Cn ) generated


by the first nj elements of the standard basis. Then π = {Pk } is the infinite chain
of orthogonal projections in l2 (Cn ), such that (1.1) holds and Pn → I strongly as
n → ∞. Moreover, dim ΔPk H ≡ n (k = 1, 2, ..) and

$$A_{kk} = \Delta P_k A_+ \Delta P_k \quad (k = 1, 2, \dots), \qquad \tilde D = \sum_{j=1}^{\infty} \Delta P_j A_+ \Delta P_j.$$

Hence it follows that A₊ is a π-triangular operator. Now Corollary 16.3.2 and Lemma 16.3.5 prove the result. 

Let
$$\|R_\lambda(A_{kk})\| \le \phi(\rho(A_{kk}, \lambda)) := \sum_{l=0}^{n-1} \frac{c_l}{\rho^{l+1}(A_{kk}, \lambda)} \qquad (\lambda \notin \sigma(\tilde D)),$$
16.4. Perturbations of block triangular matrices 255

where cl are nonnegative coefficients, independent of k, and ρ(A, λ) is the distance


between a complex point λ and σ(A). Then
n−1
X cl
kRλ (Akk )k ≤ φ(ρ(D̃, λ)) := (λ 6∈ σ(D̃)) (4.4)
l=0
ρ (D̃, λ)
l+1

and

ρ(D̃, λ) = inf_{k=1,2,...} min_{j=1,...,n} |λ − λj(Akk)|

is the distance between a point λ and σ(D̃), and

φ(y) = ∑_{k=0}^{n−1} c_k / y^{k+1}    (y > 0).

We have

ηn(λ) = sup_{j=1,2,...} ‖Rλ(Ajj)‖ ≤ φ(ρ(D̃, λ)).

Under (4.3), Lemma 16.4.1 gives us the inequality

|Rλ(A+)|_{l²(Cⁿ)} ≤ ζ_p(φ(ρ(D̃, λ)), V+)        (4.5)

for all regular λ of D̃, provided V+ ∈ SN_{2p}. So for a bounded operator B and μ ∈ σ(B), either μ ∈ σ(D̃) or

q ζ_p(φ(ρ(D̃, μ)), V+) ≥ 1.        (4.6)

Furthermore, let C = (cjk)_{j,k=1}^n be an n × n-matrix. Then, as is shown in Section 2.3,

‖Rλ(C)‖n ≤ ∑_{k=0}^{n−1} g^k(C) / (√(k!) ρ^{k+1}(C, λ)),

where λk(C), k = 1, ..., n, are the eigenvalues of C counted with their multiplicities, and

g(C) = ( N₂²(C) − ∑_{k=1}^n |λk(C)|² )^{1/2}.

In particular, the inequalities

g²(C) ≤ N₂²(C) − |Trace C²|  and  g²(C) ≤ (1/2) N₂²(C* − C)        (4.7)

are true. If C is a normal matrix, then g(C) = 0. Thus

‖Rλ(Ajj)‖n ≤ ∑_{k=0}^{n−1} g^k(Ajj) / (√(k!) ρ^{k+1}(Ajj, λ)).

Since D̃ is bounded, we have

g₀ := sup_{k=1,2,...} g(Akk) < ∞.

Then one can take φ(y) = φ₀(y), where

φ₀(y) = ∑_{k=0}^{n−1} g₀^k / (√(k!) y^{k+1}).

If all the diagonal matrices Akk are normal, then g(Akk) = 0 and φ₀(y) = 1/y.

Relation (4.6) yields

Lemma 16.4.2. Let A+ be defined by (4.1) and B a linear operator on l²(Cⁿ). If, in addition, condition (4.3) holds, then for any μ ∈ σ(B), there is a λ ∈ σ(D̃), such that

|λ − μ| ≤ r_p(q, V+),

where r_p(q, V+) is the unique positive root (in the unknown y) of the equation

q ζ_p(φ₀(y), V+) = 1.        (4.8)


It is simple to see that

r_p(q, V+) ≤ y_n(z_p(q, V+)),        (4.9)

where y_n(b) is the unique positive root of the equation

φ₀(y) = b    (b = const > 0),

and z_p(q, V+) is the unique positive root of the equation

q ζ_p(x, V+) = q √2 ∑_{j=0}^{p−1} N_{2p}^j(V+) x^{j+1} exp[ N_{2p}^{2p}(V+) x^{2p} ] = 1.        (4.10)

Furthermore, due to Lemma 2.4.3, y_n(b) ≤ p_n(b), where

p_n(b) = { φ₀(1)/b            if φ₀(1) ≥ b,
         { [φ₀(1)/b]^{1/n}    if φ₀(1) < b.
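The bound y_n(b) ≤ p_n(b), with p_n in the piecewise form reconstructed above, can be checked numerically by bisection. A minimal sketch in Python, with hypothetical parameters g₀ = 1 and n = 2:

```python
import math

def phi0(y, g0, n):
    # phi0(y) = sum_{k=0}^{n-1} g0^k / (sqrt(k!) * y^{k+1}); decreasing in y > 0
    return sum(g0**k / (math.sqrt(math.factorial(k)) * y**(k + 1)) for k in range(n))

def y_root(b, g0, n):
    """Bisection for the unique positive root y_n(b) of phi0(y) = b."""
    lo, hi = 1e-9, 1.0
    while phi0(hi, g0, n) > b:          # enlarge bracket until phi0(hi) <= b
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi0(mid, g0, n) > b:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def p_n(b, g0, n):
    c = phi0(1.0, g0, n)
    return c / b if c >= b else (c / b) ** (1.0 / n)

g0, n = 1.0, 2
for b in (0.5, 1.0, 4.0, 10.0):
    assert y_root(b, g0, n) <= p_n(b, g0, n) + 1e-9
```

For g₀ = 1, n = 2 the root can also be found in closed form from b y² − y − 1 = 0, which the bisection reproduces.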

We need the following

Lemma 16.4.3. The unique positive root z_a of the equation

∑_{j=0}^{p−1} z^{j+1} exp[z^{2p}] = a    (a = const > 0)        (4.11)

satisfies the inequality z_a ≥ δ_p(a), where

δ_p(a) := { a/(pe)              if a ≤ pe,
          { [ln (a/p)]^{1/2p}   if a > pe.        (4.12)

For the proof see [31, Lemma 8.3.2].

Put N_{2p}(V+) x = z in (4.10). Then we get equation (4.11) with a = N_{2p}(V+)/(q √2). The previous lemma implies

z_p(q, V+) ≥ γ_p(q, V+),

where

γ_p(q, V+) := δ_p( N_{2p}(V+)/(q √2) ) / N_{2p}(V+).

We thus get

r_p(q, V+) ≤ p_n(γ_p(q, V+)).        (4.13)

16.5 Block matrices close to triangular ones

Consider in l²(Cⁿ) the operator defined by the block matrix

          ( A11  A12  A13  ... )
   Ã  =   ( A21  A22  A23  ... )        (5.1)
          ( A31  A32  A33  ... )
          (  .    .    .   ... )

where Ajk are n × n-matrices. Clearly,

Ã = D̃ + V+ + V−,

where V+ is the strictly upper triangular part, D̃ is the diagonal part, and V− is the strictly lower triangular part of Ã:

          (  0    0    0   0  ... )
   V− =   ( A21   0    0   0  ... )
          ( A31  A32   0   0  ... )
          ( A41  A42  A43  0  ... )
          (  .    .    .   .  ... )

Now we arrive at the main result of this appendix, which follows from (4.6) with B = Ã. Recall that φ₀ is defined in the previous section and ζ_p is defined by (3.3).

Theorem 16.5.1. Let Ã be defined by (5.1) and condition (4.3) hold. Then for any μ ∈ σ(Ã), either μ ∈ σ(D̃), or there is a λ ∈ σ(D̃), such that

|V−|_{l²(Cⁿ)} ζ_p(φ₀(|λ − μ|), V+) ≥ 1.



The theorem is exact in the following sense: if V− = 0, then σ(Ã) = σ(D̃).

Moreover, Lemma 16.4.2 with B = Ã implies

Corollary 16.5.2. Let Ã be defined by (5.1) and condition (4.3) hold. Then for any μ ∈ σ(Ã), there is a λ ∈ σ(D̃), such that

|λ − μ| ≤ r_p(Ã),

where r_p(Ã) is the unique positive root of the equation

|V−|_{l²(Cⁿ)} ζ_p(φ₀(y), V+) = 1.

Moreover, (4.13) gives us a bound for r_p(Ã) if we take q = |V−|_{l²(Cⁿ)}.

Note that in Theorem 16.5.1 it is enough that V+ is compact; the operator V− can be noncompact. Clearly, one can exchange the roles of V+ and V−.

16.6 Diagonally dominant block matrices

Put

m_jk = ‖Ajk‖n    (j, k = 1, 2, ...)

and consider the matrix M = (m_jk)_{j,k=1}^∞.

Lemma 16.6.1. The spectral radius r_s(Ã) of Ã does not exceed the spectral radius of M.
Proof. Let A_jk^{(ν)} and m_jk^{(ν)} (ν = 2, 3, ...) be the entries of Ã^ν and M^ν, respectively. We have

‖A_jk^{(2)}‖n = ‖ ∑_{l=1}^∞ Ajl Alk ‖n ≤ ∑_{l=1}^∞ ‖Ajl‖n ‖Alk‖n = ∑_{l=1}^∞ m_jl m_lk = m_jk^{(2)}.

Similarly, we get ‖A_jk^{(ν)}‖n ≤ m_jk^{(ν)}. But for any h = {hk} ∈ l²(Cⁿ), we have

|Ãh|²_{l²(Cⁿ)} ≤ ∑_{j=1}^∞ ( ∑_{k=1}^∞ ‖Ajk hk‖n )² ≤ ∑_{j=1}^∞ ( ∑_{k=1}^∞ m_jk ‖hk‖n )² = |M h̃|²_{l²(R)},

where

h̃ = {‖hk‖n} ∈ l²(R¹).

Since

|h|²_{l²(Cⁿ)} = ∑_{k=1}^∞ ‖hk‖²n = |h̃|²_{l²(R)},

we obtain |Ã^ν|_{l²(Cⁿ)} ≤ |M^ν|_{l²(R)} (ν = 2, 3, ...). Now the Gel'fand formula for the spectral radius yields the required result. □
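For a finite truncation the majorant argument can be checked directly. The following minimal sketch (NumPy assumed; the blocks are hypothetical data) verifies r_s(Ã) ≤ r_s(M) for a 2 × 2 array of 2 × 2 blocks:

```python
import numpy as np

# a 2x2 array of 2x2 blocks -- a finite truncation used only for illustration
A11 = np.array([[1.0, 0.2], [0.0, 0.8]])
A12 = np.array([[0.1, 0.0], [0.0, 0.1]])
A21 = np.array([[0.05, 0.0], [0.0, 0.05]])
A22 = np.array([[0.5, 0.1], [0.0, 0.6]])
A = np.block([[A11, A12], [A21, A22]])

spec_norm = lambda B: np.linalg.norm(B, 2)      # Euclidean operator norm
M = np.array([[spec_norm(A11), spec_norm(A12)],
              [spec_norm(A21), spec_norm(A22)]])  # majorant m_jk = ||A_jk||_n

rs_A = max(abs(np.linalg.eigvals(A)))
rs_M = max(abs(np.linalg.eigvals(M)))
assert rs_A <= rs_M + 1e-12                     # r_s(A) <= r_s(M)
```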

Denote

S_j := ∑_{k=1, k≠j}^∞ ‖Ajk‖n.

Theorem 16.6.2. Let Ajj be invertible for all integer j. In addition, let

sup_j S_j < ∞        (6.1)

and let there be an ε > 0, such that

‖A_jj^{−1}‖n^{−1} − S_j ≥ ε    (j = 1, 2, ...).        (6.2)

Then Ã is invertible. Moreover, let

ψ(λ) := sup_j ‖(Ajj − λ)^{−1}‖ S_j < 1.

Then λ is a regular point of Ã, and

|(Ã − λ)^{−1}|_{l²(Cⁿ)} ≤ (1 − ψ(λ))^{−1} sup_j ‖(Ajj − λ)^{−1}‖n.

Proof. Put W = Ã − D̃ = V+ + V−, with an invertible D̃. That is, W is the off-diagonal part of Ã, and

Ã = D̃ + W = D̃(I + D̃^{−1} W).        (6.3)

Clearly,

∑_{k=1, k≠j}^∞ ‖A_jj^{−1} Ajk‖n ≤ S_j ‖A_jj^{−1}‖.

From (6.2) it follows that

1 − S_j ‖A_jj^{−1}‖n ≥ ‖A_jj^{−1}‖n ε    (j = 1, 2, ...).

Therefore

sup_j ∑_{k=1, k≠j}^∞ ‖A_jj^{−1} Ajk‖n < 1.

Then, thanks to Lemma 2.4.8 on the bound for the spectral radius and the previous lemma, the spectral radius r_s(D̃^{−1}W) of the matrix D̃^{−1}W is less than one. Therefore I + D̃^{−1}W is invertible. Now (6.3) implies that Ã is invertible. This proves the theorem. □
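The resolvent bound of Theorem 16.6.2 can be illustrated on a small finite block matrix. This is only a sketch under hypothetical data, with the l² norms replaced by finite-dimensional spectral norms:

```python
import numpy as np

spec_norm = lambda B: np.linalg.norm(B, 2)

# two diagonal blocks well separated from the test point, small couplings
A11 = np.diag([2.0, 2.5]); A22 = np.diag([3.0, 3.5])
A12 = 0.3 * np.eye(2);     A21 = 0.2 * np.eye(2)
A = np.block([[A11, A12], [A21, A22]])

lam = 0.0                                    # test point away from sigma(D~)
S = [spec_norm(A12), spec_norm(A21)]         # off-diagonal sums S_j
R = [spec_norm(np.linalg.inv(A11 - lam * np.eye(2))),
     spec_norm(np.linalg.inv(A22 - lam * np.eye(2)))]
psi = max(R[j] * S[j] for j in range(2))
assert psi < 1                               # dominance condition holds

lhs = spec_norm(np.linalg.inv(A - lam * np.eye(4)))
rhs = max(R) / (1.0 - psi)                   # (1 - psi)^{-1} sup_j ||(A_jj - lam)^{-1}||
assert lhs <= rhs + 1e-12
```

Here ψ(0) = 0.15, so the theorem applies and the computed resolvent norm indeed stays below the bound.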

It should be noted that condition (6.1) implies that the off-diagonal part W of Ã is compact, since under (6.1) the sequence of the finite dimensional operators

          (  0   A12  A13  ...  A1l )
          ( A21   0   A23  ...  A2l )
   Wl :=  ( A31  A32   0   ...  A3l )
          (  .    .    .   ...   .  )
          ( Al1  Al2  Al3  ...   0  )

converges to W in the norm of the space l²(Cⁿ) as l → ∞.

16.7 Examples

Let n = 2. Then D̃ is the orthogonal sum of the 2 × 2-matrices

   Akk = ( a_{2k−1,2k−1}  a_{2k−1,2k} )    (k = 1, 2, ...).
         ( a_{2k,2k−1}    a_{2k,2k}   )

If the Akk are real matrices, then due to the above mentioned inequality g²(C) ≤ N₂²(C* − C)/2, we have

g(Akk) ≤ |a_{2k−1,2k} − a_{2k,2k−1}|.

So one can take

φ₀(y) = (1/y)(1 + g̃₀/y)

with

g̃₀ := sup_k |a_{2k−1,2k} − a_{2k,2k−1}|.

Besides, σ(D̃) = {λ_{1,2}(Akk)}_{k=1}^∞, where

λ_{1,2}(Akk) = (1/2)( a_{2k−1,2k−1} + a_{2k,2k} ± [ (a_{2k−1,2k−1} − a_{2k,2k})² + 4 a_{2k−1,2k} a_{2k,2k−1} ]^{1/2} ).

Now we can directly apply Theorem 16.5.1 and Corollary 16.5.2.
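A minimal numerical sanity check of these 2 × 2 formulas (NumPy assumed; the block is hypothetical data):

```python
import numpy as np

Akk = np.array([[1.0, 2.0],
                [0.5, 3.0]])      # a real 2x2 diagonal block (hypothetical)
(a, b), (c, d) = Akk

# lambda_{1,2} = ((a + d) +/- [(a - d)^2 + 4 b c]^{1/2}) / 2
disc = np.sqrt((a - d) ** 2 + 4 * b * c + 0j)
lam = np.array([(a + d + disc) / 2, (a + d - disc) / 2])
ref = np.linalg.eigvals(Akk)
assert np.allclose(sorted(lam, key=abs), sorted(ref, key=abs))

# g(Akk) <= |a_{12} - a_{21}| for a real 2x2 block
g = np.sqrt(max(np.linalg.norm(Akk, 'fro')**2 - np.sum(np.abs(ref)**2), 0.0))
assert g <= abs(b - c) + 1e-12
```

For this block the bound on g is attained with equality, which is consistent with g̃₀ being the natural choice in φ₀.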
Furthermore, let L²(ω, Cⁿ) be the space of vector-valued functions defined on a bounded subset ω of R^m with the scalar product

(f, g) = ∫_ω (f(s), g(s))_{Cⁿ} ds,

where (·, ·)_{Cⁿ} is the scalar product in Cⁿ. Let us consider in L²(ω, Cⁿ) the matrix integral operator

(T f)(x) = ∫_ω K(x, s) f(s) ds

with the condition

∫_ω ∫_ω ‖K(x, s)‖²n dx ds < ∞.

That is, T is a Hilbert-Schmidt operator. Let {ek(x)} be an orthonormal basis in L²(ω, Cⁿ) and

K(x, s) = ∑_{j,k=1}^∞ Ajk ej(x) ek(s)

be the Fourier expansion of K, with the matrix coefficients Ajk. Then T is unitarily equivalent to the operator Ã defined by (5.1). Now one can apply Theorem 16.5.1, Theorem 16.6.2 and Corollary 16.5.2.

This appendix is based on the paper [37].
Bibliography

[1] Agarwal, R. Berezansky, L. Braverman, E. and A. Domoshnitsky, Nonoscilla-


tion Theory of Functional Differential Equations and Applications, Elsevier,
Amsterdam, 2012.
[2] Ahiezer, N. I. and Glazman, I. M., Theory of Linear Operators in a Hilbert
Space. Pitman Advanced Publishing Program, Boston, 1981.
[3] Aizerman, M.A., On a conjecture from absolute stability theory, Uspekhi
Matematicheskikh Nauk, 4 (4), (1949) 187-188. In Russian.
[4] Azbelev, N.V. and P.M. Simonov, Stability of Differential Equations with
Aftereffects, Stability Control Theory Methods Appl. v. 20, Taylor & Francis,
London, 2003.
[5] Bazhenova, L.S., The IO-stability of equations with operators causal with re-
spect to a cone. Mosc. Univ. Math. Bull. 57 (2002), no.3, 33-35; translation
from Vestn. Mosk. Univ., Ser. I, (2002), no.3, 54-57 .
[6] Berezansky, L. and Braverman, E. On exponential stability of linear differ-
ential equations with several delays, J. Math. Anal. Appl. 324 (2006) 1336 -
1355
[7] Berezansky, L. and Braverman, E. On stability of some linear and nonlinear
delay differential equations, J. Math. Anal. Appl., 314 (2006) 391-411.
[8] Berezansky, L. Idels, L. and Troib, L. Global dynamics of Nicholson-type delay
systems with applications, Nonlinear Analysis: Real World Applications, 12
(2011) 436-445.
[9] Bhatia, R. Matrix Analysis, Springer, New York, 1997.
[10] Burton, T.A. Stability of Periodic Solutions of Ordinary and Functional Dif-
ferential Equations, Academic Press, New York, 1985.
[11] Butcher, E.A., Ma, H., Bueler, E., Averina, V., and Szabo, Z., Stability of
linear time-periodic delay differential equations via Chebyshev polynomials,
Int. J. Numer. Meth. Eng., 59, (2004) 859 - 922.

[12] Bylov, B.F., B.M. Grobman, V.V. Nemyckii and R.E. Vinograd, The Theory
of Lyapunov Exponents, Nauka, Moscow, 1966 (In Russian).
[13] Corduneanu, C., Functional Equations with Causal Operators, Taylor and
Francis, London, 2002.
[14] Daleckii, Yu L. and Krein, M. G. Stability of Solutions of Differential Equa-
tions in Banach Space, Amer. Math. Soc., Providence, R. I., 1974.
[15] Drici, Z., McRae, F.A. and Vasundhara Devi, J. , Differential equations with
causal operators in a Banach space. Nonlinear Anal., Theory Methods Appl.
62, (2005) no.2 (A), 301-313.
[16] Dunford, N. and Schwartz, J.T., Linear Operators, part I, Interscience Pub-
lishers, Inc., New York, 1966.
[17] Feintuch, A., Saeks, R. System Theory. A Hilbert Space Approach. Ac. Press,
New York, 1982.
[18] Gantmakher, F. R. Theory of Matrices. Nauka, Moscow, 1967. In Russian.
[19] Gel’fand, I.M. and Shilov, G.E. Some Questions of Theory of Differential
Equations. Nauka, Moscow, 1958. In Russian.
[20] Gel’fond A.O. Calculus of Finite Differences, Nauka, Moscow. 1967, In
Russian.
[21] Gil’, M. I. Estimates for norms of matrix-valued functions, Linear and Mul-
tilinear Algebra, 35 (1993), 65-73.
[22] Gil’, M.I. On solvability of nonlinear equations in lattice normed spaces, Acta
Sci. Math., 62, (1996), 201-215.
[23] Gil’, M. I. The freezing method for evolution equations. Communications in
Applied Analysis, 1, (1997), no. 2, 245-256.
[24] Gil’, M.I. Stability of Finite and Infinite Dimensional Systems, Kluwer Aca-
demic Publishers, Boston, 1998.
[25] Gil’, M.I. Perturbations of simple eigenvectors of linear operators,
Manuscripta Mathematica, 100, (1999), 213-219.
[26] Gil’, M.I. On bounded input-bounded output stability of nonlinear retarded
systems, Robust and Nonlinear Control, 10, (2000), 1337-1344.
[27] Gil’, M.I. On Aizerman-Myshkis problem for systems with delay. Automatica,
36, (2000), 1669-1673.
[28] Gil’, M. I. . Existence and stability of periodic solutions of semilinear neutral
type systems, Dynamics Discrete and Continuous Systems, 7, (2001), no. 4,
809-820.

[29] Gil’, M.I. On the ”freezing” method for nonlinear nonautonomous systems
with delay, Journal of Applied Mathematics and Stochastic Analysis 14,
(2001), no. 3, 283-292.
[30] Gil’, M.I., Boundedness of solutions of nonlinear differential delay equations
with positive Green functions and the Aizerman - Myshkis problem, Nonlinear
Analysis, TMA , 49 (2002) 1065-168.
[31] Gil’, M.I. Operator Functions and Localization of Spectra, Lecture Notes In
Mathematics vol. 1830, Springer-Verlag, Berlin, 2003.
[32] Gil’, M.I. Bounds for the spectrum of analytic quasinormal operator pencils
in a Hilbert space, Contemporary Mathematics, 5, (2003), no 1, 101-118
[33] Gil’, M.I. On bounds for spectra of operator pencils in a Hilbert space, Acta
Mathematica Sinica, 19 (2003), no. 2, 313-326
[34] Gil’, M.I., On positive solutions of nonlinear equations in a Banach lattice,
Nonlinear Functional Analysis and Appl., 8 (2003), 581-593 .
[35] Gil’, M.I. Bounds for characteristic values of entire matrix pencils, Linear
Algebra Appl. 390, (2004), 311-320
[36] Gil’, M.I., The Aizerman-Myshkis problem for functional-differential equa-
tions with causal nonlinearities, Functional Differential Equations, 11, (2005)
no 1-2, 445-457
[37] Gil’, M.I. Spectrum of infinite block matrices and π-triangular operators, El.
J. of Linear Algebra, 16 (2007) 216-231
[38] Gil’, M.I. Estimates for absolute values of matrix functions, El. J. of Linear
Algebra, 16 (2007) 444-450
[39] Gil’, M.I. Explicit stability conditions for a class of semi-linear retarded sys-
tems, Int. J. of Control, 322, (2007) no. 2, 322-327.
[40] Gil’, M.I. Positive solutions of equations with nonlinear causal mappings,
Positivity, 11, (2007), no. 3, 523-535.
[41] Gil’, M.I. L2 -stability of vector equations with causal mappings, Dynamic
Systems and Applications, 17 (2008), 201-220.
[42] Gil’, M.I. Estimates for Green’s Function of a vector differential equation with
variable delays, Int. J. Applied Math. Statist, 13 (2008), 50-62.
[43] Gil’, M. I. Estimates for entries of matrix valued functions of infinite matrices.
Math. Phys. Anal. Geom. 11, (2008), no. 2, 175-186.
[44] Gil’, M.I. Inequalities of the Carleman type for Neumann-Schatten operators
Asian-European J. of Math., 1, (2008), no. 2, 203-212.
[45] Gil’, M.I. Upper and lower bounds for regularized determinants, Journal of
Inequalities in Pure and Appl. Mathematics, 9, (2008), no. 1, 1-6.

[46] Gil’, M.I. Localization and Perturbation of Zeros of Entire Functions. Lecture
Notes in Pure and Applied Mathematics, 258. CRC Press, Boca Raton, FL,
2009.
[47] Gil’, M.I., Perturbations of functions of operators in a Banach space, Math.
Phys. Anal. Geom. 13, (2009) 69-82.
[48] Gil’, M. I. Lower bounds and positivity conditions for Green’s functions to
second order differential-delay equations. Electron. J. Qual. Theory Differ.
Equ., (2009) no. 65, 1-11.
[49] Gil’, M.I. L2 -absolute and input-to-state stabilities of equations with nonlin-
ear causal mappings, J. Robust and Nonlinear systems, 19, (2009), 151-167.
[50] Gil’, M.I. Meromorphic functions of matrix arguments and applications Ap-
plicable Analysis, 88 (2009), no. 12, 1727 - 1738
[51] Gil’, M.I. Perturbations of functions of diagonalizable matrices, Electr. J. of
Linear Algebra, 20 (2010) 303-313.
[52] Gil’, M.I. Stability of delay differential equations with oscillating coefficients,
Electronic Journal of Differential Equations, 2010, (2010), no. 11, 1–5.
[53] Gil’, M.I. Norm estimates for functions of matrices with simple spectrum,
Rendiconti del Circolo Matematico di Palermo, 59, (2010) 215-226
[54] Gil’, M.I. Stability of vector functional differential equations with oscillating
coefficients, J. of Advanced Research in Dynamics and Control Systems, 3,
(2011), no. 1, 26–33.
[55] Gil’, M.I. Stability of functional differential equations with oscillating co-
efficients and distributed delays, Differential Equations and Applications, 3
(2011), no. 11, 1–19.
[56] Gil’, M.I. Estimates for functions of finite and infinite matrices. Perturbations
of matrix functions. In: Albert R. Baswell (editor) Advances in Mathematics
Research, 16, Nova Science Publishers, Inc., New York, 2011, pp. 25-90
[57] Gil’, M.I. Ideals of compact operators with the Orlicz norms Annali di Matem-
atica. Pura Appl., Published online, October, 2011.
[58] Gil’, M.I. The Lp - version of the generalized Bohl - Perron principle for vector
equations with delay, Int. J. Dynamical Systems and Differential Equations,
3, (2011) no. 4, 448-458.
[59] Gil’, M.I. The Lp-version of the generalized Bohl-Perron principle for vector
equations with infinite delay, Advances in Dynamical Systems and Applica-
tions , 6 (2011), no. 2, 177 - 184.
[60] Gil’, M.I. Stability of retarded systems with slowly varying coefficient ESAIM:
Control, Optimisation and Calculus of Variations, published online Sept.
2011.

[61] Gil’, M.I. Stability of vector functional differential equations: a survey, Quaes-
tiones Mathematicae, 35 (2012), 83-131.
[62] Gil’, M.I. Exponential stability of periodic systems with distributed delays,
Asian J. of Control, (accepted for publication)
[63] Gil’, M.I., A. Ailon and B.-H. Ahn., On absolute stability of nonlinear systems
with small delays, Mathematical Problems in Engineering, 4, (1998) 423-435.
[64] Gohberg, I., Goldberg, S. and Krupnik, N. Traces and Determinants of Linear
Operators, Birkhäuser Verlag, Basel, 2000.
[65] Gohberg, I. and Krein, M. G. Introduction to the Theory of Linear Non-
selfadjoint Operators, Trans. Mathem. Monographs, v. 18, Amer. Math. Soc.,
Providence, R. I., 1969.
[66] Gopalsamy, K. Stability and Oscillations in Delay Differential Equations of
Population Dynamics. Kluwer Academic Publishers, Dordrecht, 1992.
[67] Gu, K., V. Kharitonov and J. Chen, Stability of Time-delay Systems,
Birkhauser, Boston, 2003.
[68] Guter, P., Kudryavtsev L. and Levitan, B. Elements of the Theory of Func-
tions, Fizmatgiz, 1963. In Russian.
[69] Halanay, A., Stability Theory of Linear Periodic Systems with Delay, Rev.
Math. Pure Appl. 6(4), (1961), 633 - 653.
[70] Halanay, A. Differential Equations: Stability, Oscillation, Time Lags, Aca-
demic Press, NY, 1966.
[71] Hale, J.K. Theory of Functional Differential Equations, Springer- Verlag, New
York, 1977.
[72] Hale, J.K. and S.M.V. Lunel, Introduction to Functional Differential Equa-
tions, Springer, New York, 1993.
[73] Hewitt, E. and Stromberg, K. Real and Abstract Analysis, Springer Verlag,
Berlin 1969.
[74] Horn R.A and Johnson, C.R. Matrix Analysis, Cambridge Univ. Press, Cam-
bridge, 1985
[75] Insperger, T. and Stepan, G., Stability of the damped Mathieu equation with
time-delay, J. Dynam. Syst., Meas. Control, 125, (2003) no. 2, 166 - 171.
[76] Izobov, N.A. Linear systems of ordinary differential equations. Itogi Nauki i
Tekhniki. Mat. Analis, 12, (1974) 71-146, In Russian.
[77] Kolmanovskii, V. and A. Myshkis, Applied Theory of Functional Differential
Equations, Kluwer, Dordrecht, 1999.

[78] Kolmanovskii, V.B. and Nosov, V.R. Stability of Functional Differential Equa-
tions, Ac Press, London, 1986.
[79] Krasnosel’skii, A. M. Asymptotic of Nonlinearities and Operator Equations,
Birkhäuser Verlag, Basel, 1995.
[80] Krasnosel’skij, M.A., Lifshits, J. and Sobolev A. Positive Linear Systems. The
Method of Positive Operators, Heldermann Verlag, Berlin, 1989.
[81] Krisztin, T. On stability properties for one-dimensional functional-differential
equations, Funkcial. Ekvac. 34 (1991) 241–256.
[82] Kurbatov, V., Functional Differential Operators and Equations, Kluwer Aca-
demic Publishers, Dordrecht, 1999.
[83] Lakshmikantham, V., Leela, S., Drici, Z. and McRae, F. A. Theory of Causal
Differential Equations, Atlantis Studies in Mathematics for Engineering and
Science, 5. Atlantis Press, Paris, 2009.
[84] Lampe, B. P. and E. N. Rosenwasser, Stability investigation for linear peri-
odic time-delayed systems using Fredholm theory, Automation and Remote
Control, 72, (2011) no. 1, 38 - 60.
[85] Liao, Xiao Xin, Absolute Stability of Nonlinear Control Systems, Kluwer, Dor-
drecht, 1993.
[86] Liberzon, M.R., Essays on the absolute stability theory. Automation and Re-
mote Control, 67, (2006), no. 10, 1610-1644.
[87] Lillo, J.C. Oscillatory solutions of the equation y′(x) = m(x)y(x − n(x)). J.
Differ. Equations 6, (1969) 1-35.
[88] Liu Xinzhi, Shen Xuemin and Zhang, Yi, Absolute stability of nonlinear equa-
tions with time delays and applications to neural networks. Math. Probl. Eng.
7, (2001) no.5, 413-431.
[89] Liz, E., V. Tkachenko, and S. Trofimchuk, A global stability criterion for
scalar functional differential equations, SIAM J. Math. Anal. 35 (2003) 596–
622.
[90] Marcus, M. and Minc, H. A Survey of Matrix Theory and Matrix Inequalities.
Allyn and Bacon, Boston, 1964.
[91] Meyer-Nieberg, P. Banach Lattices, Springer - Verlag, 1991.
[92] Michiels, W. and Niculescu, S.I., Stability and Stabilization of Time-Delay
Systems. An Eigenvalue-Based Approach, SIAM, Philadelphia, 2007.
[93] Mitrinović, D. S., Pecaric, J.E. and Fink, A.M. Inequalities Involving Func-
tions and their Integrals and Derivatives, Kluwer Academic Publishers, Dor-
drecht, 1991.

[94] Myshkis, A.D. On solutions of linear homogeneous differential equations of


the first order of stable type with a retarded argument, Mat. Sb., N. Ser.
28(70), (1951) 15-54.
[95] Myshkis A. D., On some problems of theory of differential equations with
deviation argument, Uspechi Matemat. Nauk, 32 (194), (1977) no 2, 173-202.
In Russian.
[96] Niculescu, S.I. Delay Effects on Stability: a Robust Control Approach, Lecture
Notes in Control and Information Sciences, vol. 269, Springer, London, 2001.
[97] Ostrowski, A.M. Note on bounds for determinants with dominant principal
diagonals, Proc. of AMS, 3, (1952) 26-30.
[98] Ostrowski, A. M. Solution of Equations in Euclidean and Banach spaces.
Academic Press, New York - London, 1973.
[99] Pietsch, A. Eigenvalues and s-Numbers, Cambridge University Press, Cam-
bridge, 1987.
[100] Razvan, V. Absolute Stability of Equations with Delay, Nauka, Moscow, 1983.
In Russian.
[101] Richard, J.-P. Time-delay systems: an overview of some recent advances and
open problems, Automatica, 39, (2003) 1667 - 1694.
[102] So, J.W.H., Yu, J.S. and M.P. Chen, Asymptotic stability for scalar delay
differential equations, Funkcial. Ekvac. 39 (1996) 1–17.
[103] Stewart, G.W. and Sun Ji-guang. Matrix Perturbation Theory, 1990. Aca-
demic Press, New York.
[104] Tyshkevich, V. A. Some Questions of the Theory of Stability of Functional
Differential Equations, Naukova Dumka, Kiev, 1981. In Russian.
[105] Vath, M. Volterra and Integral Equations of Vector Functions, Marcel
Dekker, 2000.
[106] Vidyasagar, M., Nonlinear Systems Analysis, second edition. Prentice-Hall.
Englewood Cliffs, New Jersey, 1993.
[107] Vinograd, R. An improved estimate in the method of freezing, Proc. Amer.
Soc. 89 (1), (1983) 125-129.
[108] Vulikh, B. Z. Introduction to the Theory of Partially Ordered Spaces ,
Wolters-Noordhoff Scientific Publications LTD, Groningen, 1967.
[109] Wang, X. and L. Liao, Asymptotic behavior of solutions of neutral differ-
ential equations with positive and negative coefficients, J. Math. Anal. Appl.
279 (2003) 326–338.
[110] Yakubovich V.A. and Starzhinskii, V.M. Differential Equations with Periodic
Coefficients, John Wiley, New York, 1975.

[111] Yang, Bin and Chen, Mianyun, Delay-dependent criterion for absolute sta-
bility of Lur’e type control systems with time delay. Control Theory Appl, 18,
(2001) no. 6, 929-931.
[112] Yoneyama, Toshiaki, On the stability for the delay-differential equation
ẋ(t) = −a(t)f (x(t − r(t))). J. Math. Anal. Appl. 120, (1986) 271-275.
[113] Yoneyama, Toshiaki, On the 3/2 stability theorem for one-dimensional delay-
differential equations. J. Math. Anal. Appl. 125, (1987) 161-173.
[114] Yoneyama, Toshiaki. The 3/2 stability theorem for one-dimensional delay-
differential equations with unbounded delay. J. Math. Anal. Appl. 165, (1992)
no.1, 133-143.
[115] Zeidler, E. Nonlinear Functional Analysis and its Applications, Springer-
Verlag, New York, 1986.
[116] Zevin, A.A. and Pinsky M.A., A new approach to the Lur’e problem in
the theory of exponential stability and bounds for solutions with bounded
nonlinearities, IEEE Trans. Autom. Control, 48, (2003) no. 10, 1799-1804 .
[117] Zhang, Z. and Wang, Z. Asymptotic behavior of solutions of neutral dif-
ferential equations with positive and negative coefficients, Ann. Differential
Equations 17 (3) (2001) 295–305.
Index

‖x‖ = ‖x‖n Euclidean norm of vector x
‖A‖ = ‖A‖n spectral (operator) norm of matrix A, 25, 57

absolute stability, 204
Aizerman - Myshkis problem, 207

Banach lattice, 7
Banach space, 1
Bohl-Perron's principle, 61, 63
bounded variation of function, 19
bounded variation of matrix function, 19

causal operator, 13
characteristic matrix, 75
characteristic value, 75
Closed Graph theorem, 6
compact operator, 14
completely continuous operator, 14
convergence strong, 1
convolution, 6

diagonal part of matrix, 29
determinant regularized, 16

entire Banach-valued function, 17
estimate for norm of
    matrix-valued functions, 36-41
    resolvent of matrix, 30-32
Euclidean space, 3
Euclidean norm, 3
evolution operator, 119
exponential stability of
    linear equations, 57
    nonlinear equations, 176

freezing method, 165
Frobenius norm of matrix, 25
function of bounded variation, 18
fundamental solution, 59

g(A), 26
γ(A), 43
Γ(K), 77, 78
generalized norm, 11
Gerschgorin's theorem, 35
Green's function of
    second order equation, 216
    higher order equation, 205
Gronwall lemma, 9

Hilbert-Schmidt norm, 15
Hilbert-Schmidt operator, 15
Hilbert space, 2
Hurwitzian matrix, 49

I unit operator or matrix
input-to-state stability, 196

lower spectral radius, 26

matrix function of bounded variation, 18
multiplicative representation for
    evolution operator, 190
    resolvent of matrix, 29
    solutions of differential delay equations, 240
    solutions of Volterra equations, 239

Np(A) Schatten-von Neumann norm, 15
N2(A) Hilbert-Schmidt norm, 15
Nicholson type equation, 194
nilpotent part of matrix, 29
norm of
    operator, 4
    vector, 1
normal operator, 7

operator
    adjoint, 5
    closed, 6
    normal, 7
    negative definite, 6
    positive definite, 6
    quasinilpotent, 6
    selfadjoint, 6

projection, 7

quasinilpotent operator, 6

radius spectral (upper), 5
radius spectral lower, 26
regularized determinant, 16
relative variation of characteristic values, 107
resolvent, 5
resolution of identity, 235
Riesz space, 7
Riesz-Thorin's theorem, 6

spectral radius, 5
spectrum, 5
stability in the linear approximation, 178
stability of quasilinear equations, 178

θ(K), 77

Urysohn's theorem, 2
