Gil Delay Diff Equ finalEFB
Michael I. Gil’
Contents

Preface

1. Preliminaries
1.1. Banach and Hilbert spaces
1.2. Examples of normed spaces
1.3. Linear operators
1.4. Ordered spaces and Banach lattices
1.5. The abstract Gronwall lemma
1.6. Integral inequalities
1.7. Generalized norms
1.8. Causal mappings
1.9. Compact operators in a Hilbert space
1.10. Regularized determinants
1.11. Perturbations of determinants
1.12. Matrix functions of bounded variations
1.13. Comments

Bibliography
Index
Preface
1. This book deals with the stability of linear and nonlinear vector differential delay equations. Equations with causal mappings are also considered. Explicit conditions for the exponential, absolute and input-to-state stabilities are suggested. Moreover, solution estimates for the considered equations are established; they provide bounds for the regions of attraction of steady states. We are also interested in the existence of periodic solutions. In addition, the Hill method for ordinary differential equations with periodic coefficients is developed for the considered equations.
2. The main methodology presented in the book is based on a combined usage of recent norm estimates for matrix-valued functions with the following methods and results:
a) the generalized Bohl-Perron principle and the integral version of the gen-
eralized Bohl-Perron principle;
b) the freezing method;
c) the positivity of fundamental solutions.
A significant part of the book is devoted to the solution of the Aizerman-Myshkis problem and to integrally small perturbations of the considered equations.
3. The aim of the book is to provide new tools for specialists in the stability
theory of functional differential equations, control system theory and mechanics.
This is the first book that:
i) gives a systematic exposition of an approach to the stability analysis of vector differential delay equations based on estimates for matrix-valued functions, allowing us to investigate various classes of equations from a unified viewpoint;
ii) contains a solution of the Aizerman-Myshkis problem;
iii) develops the Hill method for functional differential equations with periodic coefficients;
iv) presents the integral version of the generalized Bohl-Perron principle.
It also includes the freezing method for systems with delay and investigates
integrally small perturbations of differential delay equations with matrix coeffi-
cients.
The book is intended not only for specialists in stability theory, but also for anyone interested in various applications who has had at least a first-year graduate course in analysis.
I was very fortunate to have fruitful discussions with the late Professors M.A.
Aizerman, V.B. Kolmanovskii, M.A. Krasnosel’skii, A.D. Myshkis, A. Pokrovskii,
and A.A. Voronov, to whom I am very grateful for their interest in my investiga-
tions.
Chapter 1
Preliminaries
$$\|h_n - h_m\| \to 0 \quad \text{as } m, n \to \infty.$$
If H is a Banach space with respect to this norm, then it is called a Hilbert space. The Schwarz inequality
$$|(x, y)| \le \|x\|\,\|y\|$$
is valid.
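As a quick numerical illustration (not part of the original text), the Schwarz inequality can be checked in the concrete Hilbert space $\mathbb{C}^3$ with NumPy; the vectors below are arbitrary choices.

```python
import numpy as np

# Numeric check of the Schwarz inequality |(x, y)| <= ||x|| ||y||
# in C^3 with the inner product (x, y) = sum_k x_k * conj(y_k).
x = np.array([1 + 2j, -1j, 3.0])
y = np.array([0.5, 2 - 1j, -1.0])

inner = np.vdot(y, x)            # (x, y) = sum_k x_k * conj(y_k)
lhs = abs(inner)
rhs = np.linalg.norm(x) * np.linalg.norm(y)
ok = lhs <= rhs + 1e-12
```

Equality holds only when x and y are proportional; for the vectors above the inequality is strict.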
If, in an infinite-dimensional Hilbert space, there is a countable set whose closure coincides with the space, then that space is said to be separable. Any separable Hilbert space H possesses an orthonormal basis. This means that there is a sequence $\{e_k\}_{k=1}^\infty \subset H$ such that any $h \in H$ admits the representation
$$h = \sum_{k=1}^\infty c_k e_k$$
with
$$c_k = (h, e_k), \quad k = 1, 2, \ldots,$$
and the series converges strongly.
Let X and Y be Banach spaces. A function $f: X \to Y$ is continuous if for any $\epsilon > 0$ there is a $\delta > 0$ such that $\|x - y\|_X \le \delta$ implies $\|f(x) - f(y)\|_Y \le \epsilon$.
Theorem 1.1.1 (The Urysohn theorem). Let A and B be disjoint closed sets in a Banach space X. Then there is a continuous function $f: X \to [0, 1]$ such that $f(x) = 0$ for $x \in A$ and $f(x) = 1$ for $x \in B$.
Let x(t) be continuous at each point of [0, T]. Then one can define the Riemann integral as the limit in the norm of the integral sums:
$$\lim_{\max_k |\Delta t_k^{(n)}| \to 0} \sum_{k=1}^{n} x(t_k^{(n)})\,\Delta t_k^{(n)} = \int_0^T x(t)\,dt.$$
2. The space B(S) is defined for an arbitrary set S and consists of all bounded scalar functions on S. The norm is given by
$$\|f\| = \sup_{s \in S} |f(s)|.$$
3. The space C(S) is defined for a topological space S and consists of all bounded continuous scalar functions on S. The norm is
$$\|f\| = \sup_{s \in S} |f(s)|.$$
4. The space $L^p(S)$ is defined for any real number p, $1 \le p < \infty$, and any set S having a finite Lebesgue measure. It consists of those measurable scalar functions on S for which the norm
$$\|f\| = \Big[\int_S |f(s)|^p\,ds\Big]^{1/p}$$
is finite.
5. The space $L^\infty(S)$ is defined for any set S having a finite Lebesgue measure. It consists of all essentially bounded measurable scalar functions on S. The norm is
$$\|f\| = \operatorname{ess\,sup}_{s \in S} |f(s)|.$$
Note that the Hilbert space has been defined by a set of abstract axioms. It is noteworthy that some of the concrete spaces defined above satisfy these axioms, and hence are special cases of abstract Hilbert space. Thus, for instance, the n-dimensional space $\mathbb{C}^n$ is a Hilbert space, if the inner product (x, y) of two elements $x = \{x_1, \ldots, x_n\}$ and $y = \{y_1, \ldots, y_n\}$ is defined by the formula
$$(x, y) = \sum_{k=1}^n x_k \bar{y}_k.$$
In the same way, the complex space $l^2$ is a Hilbert space if the scalar product (x, y) of the vectors $x = \{x_k\}$ and $y = \{y_k\}$ is defined by the formula
$$(x, y) = \sum_{k=1}^\infty x_k \bar{y}_k.$$
Also the complex space $L^2(S)$ is a Hilbert space with the scalar product
$$(f, g) = \int_S f(s)\overline{g(s)}\,ds.$$
Then the operator norms of $\{A_k\}$ are uniformly bounded. Moreover, if $\{A_n\}$ strongly converges to a (linear) operator A, then
$$\|T\|_{L^r(\Omega_1)\to L^r(\Omega_2)} \le \max\{\|T\|_{L^p(\Omega_1)\to L^p(\Omega_2)},\ \|T\|_{L^q(\Omega_1)\to L^q(\Omega_2)}\}.$$
For the proof (in a more general situation) see [16, Section VI.10.11].
Theorem 1.3.4. Let $f \in L^1(\Omega)$ be a fixed integrable function and let T be the operator of convolution with f, i.e., for each function $g \in L^p(\Omega)$ ($p \ge 1$) we have
$$(Tg)(t) = \int_\Omega f(t - s)g(s)\,ds.$$
Then
$$\|Tg\|_{L^p(\Omega)} \le \|f\|_{L^1(\Omega)}\,\|g\|_{L^p(\Omega)}.$$
For the proof see [16, p. 528].
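The convolution bound of Theorem 1.3.4 can be illustrated numerically on a uniform grid (a sketch, not part of the original text); the grid sums below approximate the integrals, and the analogous discrete Young inequality guarantees the comparison.

```python
import numpy as np

# Discrete-grid check of ||Tg||_{L^p} <= ||f||_{L^1} ||g||_{L^p} for p = 2.
h = 0.01                                   # grid step on [0, 1)
t = np.arange(0.0, 1.0, h)
f = np.exp(-3.0 * t)                       # fixed integrable kernel
g = np.sin(5.0 * t)                        # g in L^2

conv = np.convolve(f, g)[: len(t)] * h     # (Tg)(t) ~ int f(t - s) g(s) ds
lhs = np.sqrt(np.sum(np.abs(conv) ** 2) * h)                 # ||Tg||_{L^2}
rhs = np.sum(np.abs(f)) * h * np.sqrt(np.sum(np.abs(g) ** 2) * h)
ok = lhs <= rhs
```

Truncating the full convolution to the grid length only decreases the left-hand side, so the inequality is preserved.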
Now let us consider operators in a Hilbert space H. A bounded linear operator $A^*$ is adjoint to A, if
$$(Ax, y) = (x, A^*y) \quad (x, y \in H).$$
Let
$$\mathbb{R}^n_+ = \{(x_1, \ldots, x_n) \in \mathbb{R}^n : x_k \ge 0 \text{ for all } k\}.$$
Then $\mathbb{R}^n_+$ is a positive cone and for $x = (x_1, \ldots, x_n),\ y = (y_1, \ldots, y_n) \in \mathbb{R}^n$, we have $x \le y$ iff $x_k \le y_k$ for all k, and
$$|x| = (|x_1|, \ldots, |x_n|).$$
Example 1.4.3. Let X be a non-empty set and let B(X) be the collection of all bounded real-valued functions defined on X. It is a simple and well-known fact that B(X) is a vector space, ordered by the positive cone of functions that are nonnegative at every point, with the lattice operations
$$(f \vee g)(t) = \max\{f(t), g(t)\}, \qquad (f \wedge g)(t) = \min\{f(t), g(t)\}$$
for every $t \in X$ and $f, g \in B(X)$. This shows that B(X) is a Riesz space and the absolute value of f is $|f|(t) = |f(t)|$.
Definition 1.4.4. Let E be a Riesz space furnished with a norm $\|\cdot\|$ satisfying $\|x\| \le \|y\|$ whenever $|x| \le |y|$. In addition, let the space E be complete with respect to that norm. Then E is called a Banach lattice.
The norm $\|\cdot\|$ in a Banach lattice E is said to be order continuous, if
$$\inf\{\|x\| : x \in A\} = 0$$
for any downward directed set $A \subset E$ such that $\inf\{x \in A\} = 0$, cf. [91, p. 86].
The real spaces C(K), Lp (K) (K ⊆ Rn ) and lp (p ≥ 1) are examples of
Banach lattices.
A bounded linear operator T in E is called a positive one, if from x ≥ 0 it
follows that T x ≥ 0.
Lemma 1.5.1 (The abstract Gronwall lemma). Let T be a bounded linear positive operator acting in E and having the spectral radius $r_s(T) < 1$. Then the inequality
$$x \le f + Tx \quad (x, f \in E_+)$$
implies $x \le y$, where y is the solution of the equation
$$y = f + Ty.$$
An analogous comparison result holds for a non-decreasing mapping F: the inequality
$$x \le F(x) + f \quad (x, f \in E_+)$$
implies $x \le y$, where y is the solution of
$$y = F(y) + f,$$
and the inequality
$$z \ge F(z) + f \quad (z, f \in E_+)$$
implies that $z \ge y$.
Proof. We have $x = F(x) + h$ with an $h \le f$. Thanks to (5.1) and the condition $r_s(T) < 1$, the mappings $F_f := F + f$ and $F_h := F + h$ have the following property: $F_f^m$ and $F_h^m$ are contracting for some integer m. So thanks to the generalized contraction mapping theorem [115], $F_f^k(f) \to y$ and $F_h^k(f) \to x$ as $k \to \infty$. Moreover, $F_f^k(f) \ge F_h^k(f)$ for all $k = 1, 2, \ldots$, since F is non-decreasing and $h \le f$. This proves the inequality $x \le y$. Similarly the inequality $z \ge y$ can be proved.
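A finite-dimensional illustration of the abstract Gronwall lemma (an added sketch, not part of the original text): take $E = \mathbb{R}^2$ ordered componentwise, a positive matrix T with $r_s(T) < 1$, and compare a subsolution with the solution of $y = f + Ty$.

```python
import numpy as np

# Abstract Gronwall lemma in E = R^2, ordered componentwise:
# if x <= f + T x with T positive and r_s(T) < 1, then x <= y,
# where y solves y = f + T y.
T = np.array([[0.2, 0.1],
              [0.3, 0.2]])
f = np.array([1.0, 2.0])
assert max(abs(np.linalg.eigvals(T))) < 1          # r_s(T) < 1

y = np.linalg.solve(np.eye(2) - T, f)              # y = f + T y
x = np.array([0.9, 1.5])                           # a subsolution candidate
is_sub = bool(np.all(x <= f + T @ x))              # check x <= f + T x
dominated = bool(np.all(x <= y + 1e-12))           # conclusion x <= y
```

Here $(I - T)^{-1} = \sum_j T^j$ has nonnegative entries, which is exactly why the comparison holds.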
is fulfilled with an arbitrary matrix norm, then it is simple to show that $r_s(K) = 0$. The same equality for the spectral radius is true, if
$$(Kx)(t) = \int_t^b \hat{K}(t, s)x(s)\,ds \quad (t \ge a),$$
provided
$$\sup_{t \in [a, b]} \int_t^b \|\hat{K}(t, s)\|\,ds < \infty.$$
and
c) $M(x, y) \le M(x, z) + M(y, z)$.
Clearly, X is a metric space with the metric $m(x, y) = \|M(x, y)\|_E$. That is, a sequence $\{x_k\} \subset X$ converges to x in the metric $m(\cdot, \cdot)$ iff $M(x_k, x) \to 0$ as $k \to \infty$.
Lemma 1.7.1. Let X be a space with a vector metric $M(\cdot, \cdot): X \times X \to E_+$, and let F map a closed set $\Phi \subseteq X$ into itself with the property
$$M(F(x), F(y)) \le Q\,M(x, y) \quad (x, y \in \Phi),$$
where Q is a positive operator in E whose spectral radius $r_s(Q)$ is less than one: $r_s(Q) < 1$. Then, if X is complete in the generalized metric $M(\cdot, \cdot)$ (or, equivalently, in the metric $m(\cdot, \cdot)$), F has a unique fixed point $x \in \Phi$. Moreover, that point can be found by the method of successive approximations.
Proof. Following the usual proof of the contracting mapping theorem we take an
arbitrary x0 ∈ Φ and define the successive approximations by the equality
xk = F (xk−1 ) (k = 1, 2, ...).
Hence,
$$M(x_m, x_k) \le \sum_{j=k}^{m-1} Q^j M(x_1, x_0).$$
Hence $(I - Q)M(x, y) \le 0$, and since $(I - Q)^{-1} = \sum_{j=0}^\infty Q^j$ is a positive operator, $M(x, y) = 0$.
Now let X be a linear space with a vector (generalized) norm $M(\cdot)$. That is, $M(\cdot)$ maps X into $E_+$ and is subject to the usual axioms: for all $x, y \in X$
and Pa = 0, and Pb = I.
where
$$f_\tau(t, w(t)) = \begin{cases} f(t, w(t)) & \text{if } 0 \le t \le \tau, \\ 0 & \text{if } \tau < t \le T. \end{cases}$$
Clearly,
$$f_\tau(t, w(t)) = f_\tau(t, w_\tau(t)), \quad \text{where } w_\tau = P_\tau w.$$
Moreover,
$$P_\tau \int_0^t k(t, s, w(s))\,ds = P_\tau \int_0^t k(t, s, w_\tau(s))\,ds = 0, \quad t > \tau,$$
and
$$P_\tau \int_0^t k(t, s, w(s))\,ds = \int_0^t k(t, s, w(s))\,ds = \int_0^t k(t, s, w_\tau(s))\,ds, \quad t \le \tau.$$
Hence it follows that the considered mapping is causal. Note that the integral operator
$$\int_0^c k(t, s, w(s))\,ds$$
$$\mathrm{Tr}\,(A^*A)^{p/2} < \infty$$
for some $p \ge 1$, is called the von Neumann-Schatten ideal and is denoted by $SN_p$. $N_p(\cdot)$ is called the norm of the ideal $SN_p$. It is not hard to show that
$$N_p(A) = \sqrt[p]{\mathrm{Tr}\,(AA^*)^{p/2}}.$$
Thus, $SN_1$ is the ideal of nuclear operators (the trace class) and $SN_2$ is the ideal of Hilbert-Schmidt operators. $N_2(A)$ is called the Hilbert-Schmidt norm. Sometimes we will omit the index 2 of the Hilbert-Schmidt norm, i.e.,
$$N(A) := N_2(A) = \sqrt{\mathrm{Tr}\,(A^*A)}.$$
For any orthonormal basis $\{e_k\}$ we can write
$$N_2(A) = \Big(\sum_{k=1}^\infty \|Ae_k\|^2\Big)^{1/2}.$$
Lemma 1.9.1. If $A \in SN_p$ and $B \in SN_q$ ($1 < p, q < \infty$), then $AB \in SN_s$ with
$$\frac{1}{s} = \frac{1}{p} + \frac{1}{q}.$$
Moreover,
$$N_s(AB) \le N_p(A)\,N_q(B).$$
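For matrices, Lemma 1.9.1 can be tested numerically via singular values, since $N_p(A) = \big(\sum_j s_j^p(A)\big)^{1/p}$ (an added sketch; the random matrices are arbitrary):

```python
import numpy as np

# Check of Lemma 1.9.1 with p = q = 2, s = 1: N_1(AB) <= N_2(A) N_2(B),
# computing the Schatten norms from singular values.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

def schatten(M, p):
    return np.sum(np.linalg.svd(M, compute_uv=False) ** p) ** (1.0 / p)

ok = schatten(A @ B, 1) <= schatten(A, 2) * schatten(B, 2) + 1e-10
```

With p = q = 2 this is the familiar bound of the trace norm by the product of Hilbert-Schmidt norms.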
For the proof of this lemma see [65, Section III.7]. Recall also the following
result (Lidskij’s theorem).
Theorem 1.9.2. Let $A \in SN_1$. Then
$$\mathrm{Tr}\,A = \sum_{k=1}^\infty \lambda_k(A).$$
For $A \in SN_p$ the regularized determinant is defined by
$$\det{}_p(I - A) := \prod_{j=1}^\infty E_p(\lambda_j(A)),$$
where $\lambda_j(A)$ are the eigenvalues of A with their multiplicities, arranged in decreasing order, and
$$E_p(z) := (1 - z)\exp\Big[\sum_{m=1}^{p-1} \frac{z^m}{m}\Big] \quad (p > 1), \qquad E_1(z) := 1 - z.$$
As shown below, regularized determinants are useful for the investigation of periodic systems. The following lemma is proved in [57].
Lemma 1.10.1. The inequality
$$|\det{}_p(I - A)| \le \exp\,[\zeta_p N_p^p(A)] \quad (A \in SN_p)$$
is valid, where
$$\zeta_p = \frac{p - 1}{p} \quad (p \ne 1,\ p \ne 3) \qquad \text{and} \qquad \zeta_1 = \zeta_3 = 1.$$
From this lemma one can immediately obtain the following result.
Lemma 1.10.2. Let A ∈ SNp (p = 1, 2, ...). Then
Let us also point out the lower bound for regularized determinants which has been established in [45]. To this end, denote by L a Jordan contour connecting 0 and 1, lying in the disc $\{z \in \mathbb{C} : |z| \le 1\}$ and not containing the points $1/\lambda_j$ for any eigenvalue $\lambda_j$ of A. Then
$$|\det{}_p(I - A)| \ge \exp\Big[-\frac{\zeta_p N_p^p(A)}{\phi_L(A)}\Big].$$
$$\|F(C) - F(\tilde{C})\|_Y \le \|C - \tilde{C}\|_X\, G\Big(1 + \frac{1}{2}\|C + \tilde{C}\|_X + \frac{1}{2}\|C - \tilde{C}\|_X\Big) \quad (C, \tilde{C} \in X).$$
Lemmas 1.10.1 and 1.11.1 imply the following result.
Corollary 1.11.2. The inequality
$$|\det{}_p(I - A) - \det{}_p(I - B)| \le \delta_p(A, B)$$
is true, where
$$\delta_p(A, B) := N_p(A - B)\exp\Big[\zeta_p\Big(1 + \frac{1}{2}(N_p(A + B) + N_p(A - B))\Big)^p\Big] \le N_p(A - B)\exp\,[(1 + N_p(A) + N_p(B))^p].$$
Now let A and B be $n \times n$-matrices. Then due to the inequality between the arithmetic and geometric mean values,
$$|\det A|^2 = \prod_{k=1}^n |\lambda_k(A)|^2 \le \Big(\frac{1}{n}\sum_{k=1}^n |\lambda_k(A)|^2\Big)^n.$$
Thus,
$$|\det A| \le \frac{1}{n^{n/2}}\,N_2^n(A).$$
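The determinant bound via the Hilbert-Schmidt norm can be verified on a concrete matrix (an added sketch; the matrix is an arbitrary example):

```python
import numpy as np

# Check of |det A| <= N_2^n(A) / n^{n/2} (the AM-GM consequence above).
A = np.array([[2.0, 1.0],
              [0.5, 3.0]])
n = A.shape[0]
n2 = np.linalg.norm(A, 'fro')              # N_2(A)
lhs = abs(np.linalg.det(A))                # |det A| = 5.5 here
rhs = n2 ** n / n ** (n / 2.0)
ok = lhs <= rhs
```

For this matrix $|\det A| = 5.5$ while the bound evaluates to $14.25/2 = 7.125$.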
Moreover, $|\det A| \le \|A\|^n$ for an arbitrary matrix norm. Hence, Lemma 1.11.1 implies our next result.
Corollary 1.11.3. Let A and B be $n \times n$-matrices. Then
$$|\det A - \det B| \le \frac{1}{n^{n/2}}\,N_2(A - B)\Big(1 + \frac{1}{2}N_2(A - B) + \frac{1}{2}N_2(A + B)\Big)^n$$
and
$$|\det A - \det B| \le \|A - B\|\Big(1 + \frac{1}{2}\|A - B\| + \frac{1}{2}\|A + B\|\Big)^n$$
for an arbitrary matrix norm $\|\cdot\|$.
Now let us recall a well-known inequality for determinants.
Theorem 1.11.4 (Ostrowski [97]). Let $A = (a_{jk})$ be a real $n \times n$-matrix. Then the inequality
$$|\det A| \ge \prod_{j=1}^n \Big(|a_{jj}| - \sum_{m=1, m \ne j}^n |a_{jm}|\Big)$$
is valid, provided
$$|a_{jj}| > \sum_{m=1, m \ne j}^n |a_{jm}| \quad (j = 1, \ldots, n).$$
where the supremum is taken over the set of all partitions P of the interval [a, b].
Any function of bounded variation $g: [a, b] \to \mathbb{R}$ is a difference of bounded nondecreasing functions. If g is differentiable and its derivative is integrable, then its variation satisfies
$$\mathrm{var}\,(g) \le \int_a^b |g'(s)|\,ds.$$
For more details see [16, p. 140]. Sometimes we will write
$$\mathrm{var}\,(g) = \int_a^b |dg(s)|.$$
Let $\|x\|_n$ be the Euclidean norm of a vector x and $\|A\|_n$ be the spectral norm of a matrix A. The norm of f in $C([a, b], \mathbb{C}^n)$ is $\sup_t \|f(t)\|_n$; in $L^p([a, b], \mathbb{C}^n)$ ($1 \le p < \infty$) its norm is $\big(\int_a^b \|f(t)\|_n^p\,dt\big)^{1/p}$; in $L^\infty([a, b], \mathbb{C}^n)$ its norm is $\operatorname{ess\,sup}_t \|f(t)\|_n$.
For a real matrix-valued function $R_0(s) = (r_{ij}(s))_{i,j=1}^n$ defined on a real finite segment [a, b], whose entries have bounded variations,
$$\sum_{k=1}^n \int_a^b |dr_{jk}| \max_{a \le s \le b} |f_k(t - s)| = \sum_{k=1}^n \mathrm{var}(r_{jk}) \max_{a \le s \le b} |f_k(t - s)|.$$
Hence,
$$\sum_{j=1}^n |(E_0 f)_j(t)|^2 \le \sum_{j=1}^n \Big(\sum_{k=1}^n \mathrm{var}(r_{jk})\,\|f_k\|_{C(a,c)}\Big)^2.$$
So
$$\|E_0 f\|_{C([b,c],\mathbb{C}^n)} \le \sqrt{n}\,\mathrm{var}\,(R_0)\,\|f\|_{C([a,c],\mathbb{C}^n)},$$
and thus inequality (12.1) is proved.
In the case of the space $L^\infty$, by inequality (12.1) we have
$$\|E_0 f\|_{L^\infty([b,c],\mathbb{C}^n)} \le \sqrt{n}\,\mathrm{var}\,(R_0)\,\|f\|_{L^\infty([a,c],\mathbb{C}^n)}.$$
$$\int_a^b \int_a^b \sum_{i=1}^n \sum_{k=1}^n |dr_{jk}(s)|\,|dr_{ji}(s_1)| \int_b^c |f_k(t - s) f_i(t - s_1)|\,dt.$$
Thus
$$\int_b^c |(E_0 f)_j(t)|^2\,dt \le \sum_{i=1}^n \sum_{k=1}^n \mathrm{var}(r_{jk})\,\mathrm{var}(r_{ji})\,\|f_k\|_{L^2(a,c)}\|f_i\|_{L^2(a,c)} = \Big(\sum_{k=1}^n \mathrm{var}(r_{jk})\,\|f_k\|_{L^2(a,c)}\Big)^2 \qquad (\|f_k\|_{L^2(a,c)} = \|f_k\|_{L^2([a,c],\mathbb{C})}),$$
and therefore
$$\sum_{j=1}^n \int_b^c |(E_0 f)_j(t)|^2\,dt \le \sum_{j=1}^n \Big(\sum_{k=1}^n \mathrm{var}(r_{jk})\,\|f_k\|_{L^2(a,c)}\Big)^2.$$
In addition,
$$\int_b^c |(E_0 f)_j(t)|\,dt \le \sum_{k=1}^n \mathrm{var}(r_{jk}) \int_a^c |f_k(t)|\,dt.$$
So
$$\|E_0 f\|_{L^1([b,c],\mathbb{C}^n)} = \int_b^c \Big(\sum_{j=1}^n |(E_0 f)_j(t)|^2\Big)^{1/2} dt \le \int_b^c \sum_{j=1}^n |(E_0 f)_j(t)|\,dt \le \sum_{j=1}^n \sum_{k=1}^n \mathrm{var}(r_{jk}) \int_a^c |f_k(t)|\,dt.$$
The Riesz-Thorin theorem (see Section 1.3) and the previous lemma imply the following result: the inequality
$$\|E_0\|_{L^p([a,c],\mathbb{C}^n) \to L^p([b,c],\mathbb{C}^n)} \le \max\{\zeta_1(R_0),\ \sqrt{n}\,\mathrm{var}\,(R_0)\} \quad (c > b;\ p \ge 1)$$
is valid.
Let us consider the operator
$$Ef(t) = \int_a^b d_s R(t, s)\,f(t - s) \quad (f \in C([a, c], \mathbb{C}^n);\ b \le t \le c), \qquad (12.5)$$
assuming
$$v_{jk} := \sup_{b \le t \le c} \mathrm{var}(r_{jk}(t, \cdot)) < \infty \quad (j, k = 1, \ldots, n). \qquad (12.6)$$
Then
$$|(Ef)_j(t)| \le \sum_{k=1}^n v_{jk} \sup_{a \le s \le b} |f_k(t - s)|.$$
Hence,
$$\sum_{j=1}^n |(Ef)_j(t)|^2 \le \sum_{j=1}^n \Big(\sum_{k=1}^n v_{jk}\,\|f_k\|_{C(a,c)}\Big)^2.$$
Moreover,
$$\|Ef\|_{L^\infty([b,c],\mathbb{C}^n)} \le \sqrt{n}\,\|Z(R)\|_n\,\|f\|_{L^\infty([a,c],\mathbb{C}^n)} \qquad (12.8)$$
for a continuous function f. But the set of continuous functions is dense in $L^\infty$, so the previous inequality is valid on the whole space. Repeating the arguments of the proof of the previous lemma we obtain
$$\|Ef\|_{L^2([b,c],\mathbb{C}^n)} \le \|Z(R)\|_n\,\|f\|_{L^2([a,c],\mathbb{C}^n)}. \qquad (12.9)$$
Now let $f(t) = (f_k(t)) \in L^1([a, c], \mathbb{C}^n)$. Then
$$\int_b^c |(Ef)_j(t)|\,dt \le \sum_{k=1}^n \int_a^b \int_b^c |f_k(t - s)|\,dt\,|dr_{jk}(t, s)| \le \sum_{k=1}^n v_{jk} \int_a^c |f_k(t)|\,dt.$$
So
$$\|Ef\|_{L^1([b,c],\mathbb{C}^n)} = \int_b^c \Big(\sum_{j=1}^n |(Ef)_j(t)|^2\Big)^{1/2} dt \le \int_b^c \sum_{j=1}^n |(Ef)_j(t)|\,dt \le \sum_{j=1}^n \sum_{k=1}^n v_{jk} \int_a^c |f_k(t)|\,dt \le \sum_{j=1}^n \Big(\sum_{k=1}^n v_{jk}^2\Big)^{1/2} \Big(\sum_{k=1}^n \Big(\int_a^c |f_k(t)|\,dt\Big)^2\Big)^{1/2}.$$
Hence
$$\|Ef\|_{L^1([b,c],\mathbb{C}^n)} \le V_1(R)\,\|f\|_{L^1([a,c],\mathbb{C}^n)}, \qquad (12.10)$$
where
$$V_1(R) = \sum_{j=1}^n \Big(\sum_{k=1}^n v_{jk}^2\Big)^{1/2}.$$
1.13 Comments
The chapter contains mostly well-known results. This book presupposes a knowledge of basic operator theory, for which there are good introductory texts; the books [2] and [16] are classical. In Sections 1.5 and 1.6 we followed Sections I.9 and III.2 of the book [14]. The material of Sections 1.10 and 1.11 is adapted from the papers [57, 44] and [45]. The relevant results on regularized determinants can be found in [64]. Lemmas 1.12.1 and 1.12.3 are probably new.
Chapter 2

Some Results of the Matrix Theory
2.1 Notations
Everywhere in this chapter $\|x\|$ is the Euclidean norm of $x \in \mathbb{C}^n$: $\|x\| = \sqrt{(x, x)}$ with the scalar product $(\cdot, \cdot) = (\cdot, \cdot)_{\mathbb{C}^n}$; I is the unit matrix.
For a linear operator A in $\mathbb{C}^n$ (matrix), $\lambda_k = \lambda_k(A)$ ($k = 1, \ldots, n$) are the eigenvalues of A enumerated in an arbitrary order with their multiplicities, $\sigma(A)$ denotes the spectrum of A, $A^*$ is the adjoint of A, and $A^{-1}$ is the inverse of A; $R_\lambda(A) = (A - \lambda I)^{-1}$ ($\lambda \in \mathbb{C},\ \lambda \notin \sigma(A)$) is the resolvent, $r_s(A)$ is the spectral radius, $\|A\| = \sup_{x \in \mathbb{C}^n} \|Ax\|/\|x\|$ is the (operator) spectral norm, $N_2(A)$ is the Hilbert-Schmidt (Frobenius) norm of A: $N_2^2(A) = \mathrm{Tr}\,AA^*$, $A_I = (A - A^*)/2i$ is the imaginary component, $A_R = (A + A^*)/2$ is the real component,
$$\rho(A, \lambda) = \min_{k=1,\ldots,n} |\lambda - \lambda_k(A)|$$
is the distance between $\sigma(A)$ and a point $\lambda \in \mathbb{C}$; $\rho(A, C)$ is the Hausdorff distance between a contour C and $\sigma(A)$; co(A) denotes the closed convex hull of $\sigma(A)$; $\alpha(A) = \max_k \mathrm{Re}\,\lambda_k(A)$, $\beta(A) = \min_k \mathrm{Re}\,\lambda_k(A)$; $r_l(A)$ is the lower spectral radius: $r_l(A) = \min_k |\lambda_k(A)|$;
and
$$g(e^{i\tau} A + zI) = g(A) \qquad (1.2)$$
for all $\tau \in \mathbb{R}$ and $z \in \mathbb{C}$.
then
$$f(J_n(\lambda_0)) = \begin{pmatrix} f(\lambda_0) & \dfrac{f'(\lambda_0)}{1!} & \cdots & \dfrac{f^{(n-1)}(\lambda_0)}{(n-1)!} \\ 0 & f(\lambda_0) & \ddots & \vdots \\ \vdots & & \ddots & \dfrac{f'(\lambda_0)}{1!} \\ 0 & \cdots & 0 & f(\lambda_0) \end{pmatrix}.$$
Thus, if A has the Jordan block-diagonal form
where Qk are the eigenprojections. In the case (2.3) it is required only that f is
defined on the spectrum.
Now let
$$\sigma(A) = \cup_{k=1}^m \sigma_k(A) \quad (m \le n)$$
and $\sigma_k(A) \subset M_k$ ($k = 1, \ldots, m$), where $M_k$ are open disjoint simply-connected sets: $M_k \cap M_j = \emptyset$ ($j \ne k$). Let $f_k$ be regular on $M_k$. Introduce on $M = \cup_{k=1}^m M_k$ the piecewise analytic function by $f(z) = f_j(z)$ ($z \in M_j$). Then
$$f(A) = -\frac{1}{2\pi i}\sum_{j=1}^m \int_{C_j} f(\lambda) R_\lambda(A)\,d\lambda, \qquad (2.4)$$
where $C_j \subset M_j$ are closed smooth contours surrounding $\sigma(A_j)$, and the integration is performed in the positive direction. For more details about representation (2.4) see [110, p. 49].
For instance, let $M_1$ and $M_2$ be two disjoint disks, and
$$f(z) = \begin{cases} \sin z & \text{if } z \in M_1, \\ \cos z & \text{if } z \in M_2. \end{cases}$$
0 = P0 Cn ⊂ P1 Cn ⊂ ... ⊂ Pn Cn = Cn .
where
$$D = \sum_{k=1}^n \lambda_k\,\Delta P_k \quad (\Delta P_k = P_k - P_{k-1}) \qquad (2.6)$$
is the diagonal part of A and V is the nilpotent part of A. That is, V is a nilpotent matrix such that
$$V P_k = P_{k-1} A P_k \quad (k = 2, \ldots, n). \qquad (2.7)$$
For more details see, for instance, [18]. The representation (2.5) will be called the triangular (Schur) representation.
Furthermore, for $X_1, X_2, \ldots, X_j \in \mathbb{C}^{n \times n}$ denote
$$\overrightarrow{\prod_{1 \le k \le j}} X_k \equiv X_1 X_2 \cdots X_j.$$
That is, the arrow over the product symbol means that the indexes of the co-factors increase from left to right.
Theorem 2.2.1. Let D and V be the diagonal and nilpotent parts of an $A \in \mathbb{C}^{n \times n}$, respectively. Then
$$R_\lambda(A) = R_\lambda(D)\,\overrightarrow{\prod_{2 \le k \le n}}\Big(I + \frac{V \Delta P_k}{\lambda - \lambda_k}\Big) \quad (\lambda \notin \sigma(A)),$$
Hence, $A\Delta P_k = \lambda_k \Delta P_k$. Since $\Delta P_k \Delta P_j = 0$ for $j \ne k$, the previous theorem gives us the equality
$$-\lambda R_\lambda(A) = I + \sum_{k=1}^n \frac{\lambda_k \Delta P_k}{\lambda - \lambda_k} = \sum_{k=1}^n \Big(\Delta P_k + \frac{\lambda_k \Delta P_k}{\lambda - \lambda_k}\Big),$$
or
$$R_\lambda(A) = \sum_{k=1}^n \frac{\Delta P_k}{\lambda_k - \lambda}.$$
So from (2.8) we have obtained the well-known spectral representation for the resolvent of a normal matrix; thus the previous theorem generalizes that representation. Now we can use (2.1) and (2.8) to get the representation for f(A).
To prove this theorem we again use the Schur triangular representation (2.5). Recall that g(A) is defined in Section 2.1. As shown in [31, Section 2.1], the relation $g(U^{-1}AU) = g(A)$ is true if U is a unitary matrix. Hence it follows that $g(A) = N_2(V)$. The proof of the previous theorem is based on the following lemma.
Lemma 2.3.2. The inequality
$$\|R_\lambda(A) - R_\lambda(D)\| \le \sum_{k=1}^{n-1} \frac{\gamma_{n,k}\,g^k(A)}{\rho^{k+1}(A, \lambda)} \quad (\lambda \notin \sigma(A))$$
is true.
Proof. By (2.5) we have
$$R_\lambda(A) = (D + V - \lambda I)^{-1} = (I + R_\lambda(D)V)^{-1} R_\lambda(D).$$
Thus
$$R_\lambda(A) - R_\lambda(D) = -R_\lambda(D)V(I + R_\lambda(D)V)^{-1} R_\lambda(D). \qquad (3.2)$$
Clearly, $R_\lambda(D)V$ is a nilpotent matrix. Hence,
$$R_\lambda(A) - R_\lambda(D) = -R_\lambda(D)V \sum_{k=0}^{n-1} (-R_\lambda(D)V)^k R_\lambda(D) = \sum_{k=1}^{n-1} (-R_\lambda(D)V)^k R_\lambda(D). \qquad (3.3)$$
Thanks to Theorem 2.5.1 of [31], applied to the nilpotent matrix $R_\lambda(D)V$,
$$\|(-R_\lambda(D)V)^k\| \le \gamma_{n,k}\,N_2^k(R_\lambda(D)V) \le \frac{\gamma_{n,k}\,N_2^k(V)}{\rho^k(A, \lambda)}.$$
The assertion of Theorem 2.3.1 directly follows from the previous lemma.
Note that the just-proved Lemma 2.3.2 is a slight improvement of Theorem 2.1.1 from [31].
Theorem 2.3.1 is sharp: if A is a normal matrix, then g(A) = 0 and Theorem 2.3.1 gives us the equality $\|R_\lambda(A)\| = 1/\rho(A, \lambda)$. Taking into account (3.1), we get
Corollary 2.3.3. Let $A \in \mathbb{C}^{n \times n}$. Then
$$\|R_\lambda(A)\| \le \sum_{k=0}^{n-1} \frac{g^k(A)}{\sqrt{k!}\,\rho^{k+1}(A, \lambda)}$$
for any regular $\lambda$ of A.
$$\|R_\lambda(A)\| \le \frac{1}{|\det(A - \lambda I)|}\Big[\frac{N_2^2(A) - 2\,\mathrm{Re}\,(\bar{\lambda}\,\mathrm{Tr}\,(A)) + n|\lambda|^2}{n - 1}\Big]^{(n-1)/2} \quad (\lambda \notin \sigma(A)).$$
In particular, let V be a nilpotent matrix. Then
$$\|(I\lambda - V)^{-1}\| \le \frac{1}{|\lambda|}\Big[1 + \frac{1}{n-1}\Big(1 + \frac{N_2^2(V)}{|\lambda|^2}\Big)\Big]^{(n-1)/2} \quad (\lambda \ne 0),$$
cf. [103]. The following simple lemma is proved in [31, Section 4.1].
Lemma 2.4.1. Assume that $\|R_\lambda(A)\| \le \phi(\rho^{-1}(A, \lambda))$ for all regular $\lambda$ of A, where $\phi(x)$ is a monotonically increasing non-negative continuous function of a non-negative variable x, such that $\phi(0) = 0$ and $\phi(\infty) = \infty$. Then the inequality $sv_A(B) \le z(\phi, q)$ is true, where $z(\phi, q)$ is the unique positive root of the equation $q\phi(1/z) = 1$.
This lemma and Corollary 2.3.3 yield our next result.
Theorem 2.4.2. Let A and B be $n \times n$-matrices. Then $sv_A(B) \le z(q, A)$, where $z(q, A)$ is the unique nonnegative root of the algebraic equation
$$y^n = q\sum_{j=0}^{n-1} \frac{y^{n-j-1}\,g^j(A)}{\sqrt{j!}}. \qquad (4.1)$$
$$z_0^n \le P(1) \quad \text{if } P(1) \le 1, \qquad (4.3)$$
and
$$1 \le z_0 \le P(1) \quad \text{if } P(1) \ge 1. \qquad (4.4)$$
Proof. Since all the coefficients of P(z) are non-negative, it does not decrease as $z > 0$ increases. From this it follows that if $P(1) \le 1$, then $z_0 \le 1$, and so $z_0^n \le P(1)$, as claimed.
Now let $P(1) \ge 1$; then due to (4.2), $z_0 \ge 1$ because P(z) does not decrease. It is clear that
$$P(z_0) \le z_0^{n-1} P(1)$$
in this case. Substituting this inequality into (4.2), we get (4.4).
Let
$$a = 2\max_{j=0,\ldots,n-1} \sqrt[j+1]{c_j}.$$
Then
$$\sum_{j=0}^{n-1} \frac{c_j}{a^{j+1}} \le \sum_{j=0}^{n-1} 2^{-j-1} = 1 - 2^{-n} < 1.$$
Let $x_0$ be the extreme right-hand root of equation (4.5); then by (4.3) we have $x_0 \le 1$. Since $z_0 = ax_0$, we have derived the following result.
Put
$$w_n = \sum_{j=0}^{n-1} \frac{1}{\sqrt{j!}}.$$
Note that for a diagonal matrix the Gerschgorin discs Ω(ajj , Rj ) coincide with
the spectrum. Conversely, if the Gerschgorin discs coincide with the spectrum, the
matrix is diagonal.
The next lemma follows from the Gerschgorin theorem and gives us a simple bound for the spectral radius.
Lemma 2.4.8. The spectral radius $r_s(A)$ of a matrix $A = (a_{jk})_{j,k=1}^n$ satisfies the inequality
$$r_s(A) \le \max_j \sum_{k=1}^n |a_{jk}|.$$
About this and other estimates for the spectral radius see [80, Section 16].
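The bound of Lemma 2.4.8 is just the maximum absolute row sum (the matrix norm induced by the sup-norm); a quick numeric check on an arbitrary example matrix:

```python
import numpy as np

# Check of r_s(A) <= max_j sum_k |a_jk| (max absolute row sum).
A = np.array([[1.0, -2.0, 0.5],
              [0.0, 3.0, 1.0],
              [2.0, 1.0, -1.0]])
rs = max(abs(np.linalg.eigvals(A)))               # spectral radius
row_bound = np.max(np.sum(np.abs(A), axis=1))     # = ||A||_infty
ok = rs <= row_bound + 1e-12
```

For this matrix the row bound is 4, and the spectral radius is strictly smaller.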
where
$$m_C(A) := \sup_{z \in C} \|R_z(A)\|, \qquad l_C := \frac{1}{2\pi}\int_C |dz|.$$
Now we can directly apply the estimates for the resolvent from Section 2.3.
In particular, by Corollary 2.3.3 we have
where
$$p(A, x) = \sum_{k=0}^{n-1} \frac{x^{k+1}\,g^k(A)}{\sqrt{k!}} \quad (x > 0). \qquad (5.2)$$
We thus get $m_C(A) \le p(A, 1/\rho(A, C))$, where $\rho(A, C)$ is the distance between C and $\sigma(A)$, and therefore,
This theorem is proved in the next subsection. Taking into account (3.1) we
get our next result.
Corollary 2.5.3. Under the hypothesis of Theorem 2.5.2 we have
$$\|f(A)\| \le \sup_{\lambda \in \sigma(A)} |f(\lambda)| + \sum_{k=1}^{n-1} \sup_{\lambda \in co(A)} |f^{(k)}(\lambda)|\,\frac{g^k(A)}{(k!)^{3/2}}.$$
For example,
$$\|\exp(At)\| \le e^{\alpha(A)t}\sum_{k=0}^{n-1} \frac{\gamma_{n,k}\,g^k(A)\,t^k}{k!} \le e^{\alpha(A)t}\sum_{k=0}^{n-1} \frac{g^k(A)\,t^k}{(k!)^{3/2}} \quad (t \ge 0)$$
and
$$\|A^m\| \le \sum_{k=0}^{n-1} \frac{\gamma_{n,k}\,m!\,g^k(A)\,r_s^{m-k}(A)}{(m-k)!\,k!} \le \sum_{k=0}^{n-1} \frac{m!\,g^k(A)\,r_s^{m-k}(A)}{(m-k)!\,(k!)^{3/2}} \quad (m = 1, 2, \ldots),$$
where $r_s(A)$ is the spectral radius. Recall that $1/(m-k)! = 0$ if $m < k$.
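The power bound can be tested numerically (an added sketch). It assumes $g(A) = (N_2^2(A) - \sum_k |\lambda_k(A)|^2)^{1/2}$, the quantity used in [31]; that definition is not restated in this excerpt, so it is an assumption here.

```python
import numpy as np
from math import factorial

# Numeric check of the gamma-free power bound
#   ||A^m|| <= sum_{k=0}^{n-1} m! g^k(A) r_s^{m-k}(A) / ((m-k)! (k!)^{3/2}),
# with g(A) = (N_2^2(A) - sum_k |lambda_k|^2)^{1/2}  (assumed, as in [31]).
A = np.array([[0.5, 1.0],
              [0.0, 0.7]])
n = A.shape[0]
eigs = np.linalg.eigvals(A)
g = np.sqrt(max(np.linalg.norm(A, 'fro') ** 2 - np.sum(np.abs(eigs) ** 2), 0.0))
rs = max(abs(eigs))

def bound(m):
    return sum(factorial(m) * g ** k * rs ** (m - k)
               / (factorial(m - k) * factorial(k) ** 1.5)
               for k in range(min(n - 1, m) + 1))

m = 5
norm_Am = np.linalg.norm(np.linalg.matrix_power(A, m), 2)
ok = norm_Am <= bound(m) + 1e-12
```

For this triangular matrix $g(A) = 1$ and the bound at m = 5 comfortably dominates the actual spectral norm.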
$$|V|_e = \sum_{k=1}^n \sum_{j=1}^{k-1} |a_{jk}|(\cdot, e_k)e_j,$$
$$\|f(A) - f(D)\| \le \sum_{k=1}^{n-1} J_k\,\|\,|V|_e^k\,\|,$$
where
$$J_k = \max\{|I_{j_1 \ldots j_{k+1}}| : 1 \le j_1 < \cdots < j_{k+1} \le n\}.$$
where
$$B_k = \frac{(-1)^{k+1}}{2\pi i}\int_C f(\lambda)(R_\lambda(D)V)^k R_\lambda(D)\,d\lambda.$$
Since D is a diagonal matrix with respect to the Schur basis $\{e_k\}$ and its diagonal entries are the eigenvalues of A, we have
$$R_\lambda(D) = \sum_{j=1}^n \frac{\Delta P_j}{\lambda_j(A) - \lambda},$$
$$B_k = \sum_{j_1=1}^{j_2-1}\sum_{j_2=1}^{j_3-1}\cdots\sum_{j_k=1}^{j_{k+1}-1}\sum_{j_{k+1}=1}^{n} \Delta P_{j_1} V \Delta P_{j_2} V \cdots V \Delta P_{j_{k+1}}\,I_{j_1 j_2 \ldots j_{k+1}},$$
$$\|B_k\| \le J_k\,\Big\|\sum_{j_1=1}^{j_2-1}\sum_{j_2=1}^{j_3-1}\cdots\sum_{j_k=1}^{j_{k+1}-1}\sum_{j_{k+1}=1}^{n} \Delta P_{j_1}\,|V|_e\,\Delta P_{j_2}\,|V|_e \cdots |V|_e\,\Delta P_{j_{k+1}}\Big\| = J_k\,\|\,|V|_e^k\,\|.$$
The assertion of Theorem 2.5.2 directly follows from the previous corollary.
Note that the latter corollary is a slight improvement of Theorem 2.7.1 from
[31].
Denote by $f[a_1, a_2, \ldots, a_{k+1}]$ the k-th divided difference of f at the points $a_1, a_2, \ldots, a_{k+1}$. By the Hadamard representation [20, formula (54)] we obtain $J_k \le f_k$, where
$$f_k = \max\{|f[\lambda_{j_1}(A), \ldots, \lambda_{j_{k+1}}(A)]| : 1 \le j_1 < \cdots < j_{k+1} \le n\}.$$
$$\xi_k(A) := \sup_{z \in co(S)} \frac{|f^{(k)}(z)|}{k!} \quad (k = 1, 2, \ldots),$$
the inequality
$$|f(A) - f(S)| \le \sum_{k=1}^\infty \xi_k(A)\,|W|^k$$
is valid, provided
$$r_s(|W|)\,\overline{\lim_{k \to \infty}} \sqrt[k]{\xi_k(A)} < 1.$$
provided the spectral radius $r_0(\lambda)$ of $R_\lambda(S)W$ is less than one. The entries of this matrix are
$$\frac{a_{jk}}{a_{jj} - \lambda} \quad (\lambda \ne a_{jj},\ j \ne k),$$
and the diagonal entries are zero. Thanks to Lemma 2.4.8 we have
$$r_0(\lambda) \le \max_j \sum_{k=1, k \ne j}^n \frac{|a_{jk}|}{|a_{jj} - \lambda|} < 1 \quad (\lambda \in C)$$
converges. Thus
$$f(A) - f(S) = -\frac{1}{2\pi i}\int_C f(\lambda)(R_\lambda(A) - R_\lambda(S))\,d\lambda = \sum_{k=1}^\infty M_k, \qquad (6.2)$$
where
$$M_k = \frac{(-1)^{k+1}}{2\pi i}\int_C f(\lambda)(R_\lambda(S)W)^k R_\lambda(S)\,d\lambda.$$
Since S is a diagonal matrix with respect to the standard basis $\{d_k\}$, we can write out
$$R_\lambda(S) = \sum_{j=1}^n \frac{\hat{Q}_j}{b_j - \lambda} \quad (b_j = a_{jj}),$$
Here
$$J_{j_1 \ldots j_{k+1}} = \frac{(-1)^{k+1}}{2\pi i}\int_C \frac{f(\lambda)\,d\lambda}{(b_{j_1} - \lambda)\cdots(b_{j_{k+1}} - \lambda)}.$$
Lemma 1.5.1 from [31] gives us the inequalities
$$|J_{j_1 \ldots j_{k+1}}| \le \xi_k(A) \quad (j_1, j_2, \ldots, j_{k+1} = 1, \ldots, n).$$
Hence, by (6.3),
$$|M_k| \le \xi_k(A)\sum_{j_1=1}^n \hat{Q}_{j_1}|W|\sum_{j_2=1}^n \hat{Q}_{j_2}|W| \cdots |W|\sum_{j_{k+1}=1}^n \hat{Q}_{j_{k+1}}.$$
But
$$\sum_{j_1=1}^n \hat{Q}_{j_1}|W|\sum_{j_2=1}^n \hat{Q}_{j_2}|W| \cdots |W|\sum_{j_{k+1}=1}^n \hat{Q}_{j_{k+1}} = |W|^k.$$
Additional estimates for the entries of matrix functions can be found in [43, 38]. Under the hypothesis of the previous theorem, with the notation
$$\xi_0(A) := \max_k |f(a_{kk})|,$$
we have
$$|f(A)| \le \sum_{k=0}^\infty \xi_k(A)\,|W|^k.$$
Here $|W|^0 = I$.
Let $\|A\|_l$ denote a lattice norm of A; that is, $\|A\|_l \le \|\,|A|\,\|_l$, and $\|A\|_l \le \|\tilde{A}\|_l$ whenever $0 \le A \le \tilde{A}$. Now the previous theorem implies the inequality
$$\|f(A) - f(S)\|_l \le \sum_{k=1}^\infty \xi_k(A)\,\|\,|W|^k\,\|_l \qquad (6.5)$$
and therefore
$$\|f(A)\|_l \le \sum_{k=0}^\infty \xi_k(A)\,\|\,|W|^k\,\|_l.$$
$$TA = ST. \qquad (7.2)$$
The constant
$$\kappa_T := \|T\|\,\|T^{-1}\|$$
is very important for various applications, cf. [103]. That constant is mainly calculated numerically. In the present subsection we suggest a sharp bound for $\kappa_T$. Applications of the obtained bound are also discussed.
Denote by μj , j = 1, ..., m ≤ n the distinct eigenvalues of A, and by pj the
algebraic multiplicity of μj . In particular, one can write
etc.
Let $\delta_j$ be the half-distance from $\mu_j$ to the other eigenvalues of A:
$$\delta_j := \min_{k \ne j} |\mu_j - \mu_k|/2,$$
and
$$\delta(A) := \min_{j=1,\ldots,m} \delta_j = \min_{j,k=1,\ldots,m;\ k \ne j} |\mu_j - \mu_k|/2.$$
Put
$$\eta(A) := \sum_{k=1}^{n-1} \frac{g^k(A)}{\delta^k(A)\sqrt{k!}}.$$
According to (1.1),
$$\eta(A) \le \sum_{k=1}^{n-1} \frac{(\sqrt{2}\,N_2(A_I))^k}{\delta^k(A)\sqrt{k!}}.$$
In [51, Corollary 3.6], the inequality
$$\kappa_T \le \sum_{j=1}^m p_j \sum_{k=0}^{n-1} \frac{g^k(A)}{\delta_j^k\sqrt{k!}} \le n(1 + \eta(A)) \qquad (7.3)$$
has been derived. This inequality is not sharp: if A is a normal matrix, then it gives $\kappa_T \le n$, but $\kappa_T = 1$ in this case. In this section we improve inequality (7.3).
To this end put
$$\gamma(A) = \begin{cases} n(1 + \eta(A)) & \text{if } \eta(A) \ge 1, \\[4pt] (\eta(A) + 1)\Big[\dfrac{2\sqrt{n}\,\eta(A)}{1 - \eta(A)} + 1\Big] & \text{if } \eta(A) < 1. \end{cases}$$
Now we are in a position to formulate the main result of the present section.
Theorem 2.7.1. Let A be a diagonalizable n × n-matrix. Then κT ≤ γ(A).
The proof of this theorem is presented in the next subsection. Theorem 2.7.1
is sharp: if A is normal, then g(A) = 0. Therefore η (A) = 0 and γ(A) = 1. Thus
we obtain the equality κT = 1.
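A numeric sketch of Theorem 2.7.1 (not part of the original text): compare $\kappa_T$ with $\gamma(A)$ for a small diagonalizable matrix. As above, the definition $g(A) = (N_2^2(A) - \sum_k |\lambda_k(A)|^2)^{1/2}$ from [31] is assumed, since it is not restated in this excerpt.

```python
import math
import numpy as np

# Compare kappa_T = ||T|| ||T^-1|| (T = matrix of unit eigenvectors)
# with gamma(A) built from eta(A) and delta(A), as in Theorem 2.7.1.
A = np.array([[1.0, 0.3],
              [0.0, 2.0]])
n = A.shape[0]
eigs, T = np.linalg.eig(A)                    # columns of T are unit eigenvectors
kappa = np.linalg.norm(T, 2) * np.linalg.norm(np.linalg.inv(T), 2)

g = np.sqrt(max(np.linalg.norm(A, 'fro') ** 2 - np.sum(np.abs(eigs) ** 2), 0.0))
delta = min(abs(eigs[j] - eigs[k])
            for j in range(n) for k in range(n) if j != k) / 2.0
eta = sum(g ** k / (delta ** k * math.sqrt(math.factorial(k)))
          for k in range(1, n))
if eta >= 1:
    gamma = n * (1.0 + eta)
else:
    gamma = (eta + 1.0) * (2.0 * np.sqrt(n) * eta / (1.0 - eta) + 1.0)
ok = kappa <= gamma
```

For a normal matrix the off-diagonal part vanishes, $\eta(A) = 0$, and the formula collapses to $\gamma(A) = 1 = \kappa_T$, which is the sharpness claim above.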
Note that one can take $\|u_k\| = \|v_k\|$. This leads to the equality
$$\|T^{-1}\| = \|T\|. \qquad (7.6)$$
We need also the following technical lemma.
Lemma 2.7.3. Let $L_1$ and $L_2$ be projections satisfying the condition $r := \|L_1 - L_2\| < 1$. Then for any eigenvector $f_1$ of $L_1$ with $\|f_1\| = 1$ and $L_1 f_1 = f_1$, there exists an eigenvector $f_2$ of $L_2$ with $\|f_2\| = 1$ and $L_2 f_2 = f_2$, such that
$$\|f_1 - f_2\| \le \frac{2r}{1 - r}.$$
Proof. We have $\|L_2 f_1 - L_1 f_1\| \le r < 1$ and
$$b_0 := \|L_2 f_1\| \ge \|L_1 f_1\| - \|(L_1 - L_2)f_1\| \ge 1 - r > 0.$$
Thanks to the relation $L_2(L_2 f_1) = L_2 f_1$, we can assert that $L_2 f_1$ is an eigenvector of $L_2$. Then
$$f_2 := \frac{1}{b_0} L_2 f_1$$
is a normed eigenvector of $L_2$. So
$$f_1 - f_2 = f_1 - \frac{1}{b_0} L_2 f_1 = f_1 - \frac{1}{b_0} f_1 + \frac{1}{b_0}(L_1 - L_2)f_1.$$
But
$$\frac{1}{b_0} \le \frac{1}{1 - r}$$
and
$$\|f_1 - f_2\| \le \Big(\frac{1}{b_0} - 1\Big)\|f_1\| + \frac{1}{b_0}\|(L_1 - L_2)f_1\| \le \frac{1}{1 - r} - 1 + \frac{r}{1 - r} = \frac{2r}{1 - r},$$
as claimed.
Let $v_{js}$ and $e_{js}$ ($s = 1, \ldots, p_j$) be the eigenvectors of $\hat{Q}_j$ and $\hat{P}_j$, respectively, with $\|e_{js}\| = 1$. Inequality (7.7) and the previous lemma yield the following.
Taking in the previous corollary $Q_j^*$ instead of $Q_j$, we arrive at the similar inequality
$$\Big\|\frac{u_k}{\|u_k\|} - e_k\Big\| \le \psi(A) \quad (k = 1, \ldots, n). \qquad (7.9)$$
$$\Big[\sum_{k=1}^n \|u_k\|^2\,\Big|\Big(x, \frac{u_k}{\|u_k\|} - e_k\Big)\Big|^2\Big]^{1/2} + \Big[\sum_{k=1}^n |\,\|u_k\|(x, e_k)|^2\Big]^{1/2} \quad (x \in \mathbb{C}^n).$$
In particular, we have
$$\|R_\lambda(A)\| \le \frac{\gamma(A)}{\rho(A, \lambda)}$$
and
$$\|e^{At}\| \le \gamma(A)\,e^{\alpha(A)t} \quad (t \ge 0).$$
Let A and à be complex n × n-matrices whose eigenvalues λk and λ̃k , re-
spectively, are taken with their algebraic multiplicities. Recall that
Corollary 2.7.7. Let $A = (a_{jk})_{j,k=1}^n$ be an $n \times n$-matrix, whose diagonal has the property
$$a_{jj} \ne a_{kk} \quad (j \ne k;\ j, k = 1, \ldots, n).$$
Then for any eigenvalue $\mu$ of A, there is a $k = 1, \ldots, n$, such that
Put
$$\tilde{Q}_j = \sum_{k=1}^j \hat{Q}_k \quad (j = 1, \ldots, m).$$
Note that according to (7.2), $\tilde{Q}_j = T^{-1}\hat{P}_j T$, where $\hat{P}_j$ is an orthogonal projection. Thus, by Theorem 2.7.1,
To prove this theorem, we need the following analog of the Abel transform. Let
$$\Psi := \sum_{k=1}^m a_k W_k \quad \text{and} \quad B_j = \sum_{k=1}^j W_k.$$
Then
$$\Psi = \sum_{k=1}^{m-1} (a_k - a_{k+1}) B_k + a_m B_m.$$
Proof. Obviously,
$$\Psi = a_1 B_1 + \sum_{k=2}^m a_k(B_k - B_{k-1}) = \sum_{k=1}^m a_k B_k - \sum_{k=2}^m a_k B_{k-1} = \sum_{k=1}^m a_k B_k - \sum_{k=1}^{m-1} a_{k+1} B_k,$$
as claimed.
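This summation-by-parts identity is purely algebraic, so it can be confirmed directly with random data (an added sketch; scalars $a_k$ and $2\times2$ matrices $W_k$):

```python
import numpy as np

# Check of the Abel-transform identity
#   Psi = sum_k a_k W_k = sum_{k<m} (a_k - a_{k+1}) B_k + a_m B_m,
# where B_j = W_1 + ... + W_j.
rng = np.random.default_rng(1)
m = 5
a = rng.standard_normal(m)
W = rng.standard_normal((m, 2, 2))

Psi = sum(a[k] * W[k] for k in range(m))
B = np.cumsum(W, axis=0)                          # B_j = W_1 + ... + W_j
rhs = sum((a[k] - a[k + 1]) * B[k] for k in range(m - 1)) + a[m - 1] * B[m - 1]
ok = bool(np.allclose(Psi, rhs))
```

The identity holds exactly (up to floating-point round-off) for any choice of the data.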
hold. Then
$$\|f(A) - f(\mu_m) I\| \le \max_{1 \le k \le m} \|\tilde{Q}_k\|\,[f(\mu_1) - f(\mu_m)].$$
Taking into account that the operator $\exp(-At)$ is the inverse of $\exp(At)$, it is not hard to show that
$$\|\exp(At)h\| \ge \frac{e^{\beta(A)t}\,\|h\|}{\sum_{k=0}^{n-1} \gamma_{n,k}\,g^k(A)\,t^k\,(k!)^{-1}} \quad (t \ge 0,\ h \in \mathbb{C}^n)$$
and hence
$$\|\exp(At)h\| \ge \frac{e^{\beta(A)t}\,\|h\|}{\sum_{k=0}^{n-1} g^k(A)\,(k!)^{-3/2}\,t^k} \quad (t \ge 0). \qquad (8.2)$$
and estimate (8.1). Here we investigate perturbations in the case when $\|\tilde{A}E - EA\|$ is small enough. We will say that A is stable (Hurwitzian) if $\alpha(A) < 0$. Assume that A is stable and put
$$u(A) = \int_0^\infty \|e^{At}\|\,dt \quad \text{and} \quad v_A = \int_0^\infty t\,\|e^{At}\|\,dt,$$
with
$$j(t) = \int_0^t c(s)\,ds.$$
Hence,
$$\int_0^\infty \|e^{\tilde{A}t} - e^{At}\|\,dt \le \int_0^\infty \|Et\,e^{At}\|\,dt + \int_0^\infty \int_0^t \|e^{\tilde{A}(t-s)}\|\,\|\tilde{A}E - EA\|\,\|s\,e^{As}\|\,ds\,dt.$$
But
$$\int_0^\infty \int_0^t \|e^{\tilde{A}(t-s)}\|\,s\,\|e^{As}\|\,ds\,dt = \int_0^\infty \int_s^\infty \|e^{\tilde{A}(t-s)}\|\,s\,\|e^{As}\|\,dt\,ds = \int_0^\infty s\,\|e^{As}\|\,ds \int_0^\infty \|e^{\tilde{A}t}\|\,dt = v_A\,u(\tilde{A}).$$
Thus
$$\int_0^\infty \|e^{\tilde{A}t} - e^{At}\|\,dt \le \|E\|\,v_A + \|\tilde{A}E - EA\|\,v_A\,u(\tilde{A}). \qquad (8.7)$$
Hence,
$$u(\tilde{A}) \le u(A) + \|E\|\,v_A + \|\tilde{A}E - EA\|\,v_A\,u(\tilde{A}).$$
So according to (8.4), we get (8.5). Furthermore, due to (8.7) and (8.5) we get (8.6), as claimed.
Put
$$a = \min_{j=1,\ldots,n} a_{jj} \quad \text{and} \quad b = \max_{j=1,\ldots,n} a_{jj},$$
$$\alpha_k(f, A) := \inf_{a \le x \le b} \frac{f^{(k)}(x)}{k!}$$
and
$$\beta_k(f, A) := \sup_{a \le x \le b} \frac{f^{(k)}(x)}{k!} \quad (k = 0, 1, 2, \ldots),$$
assuming that the derivatives exist. Let $W = A - \mathrm{diag}\,(a_{jj})$ be the off-diagonal part of A.
Theorem 2.9.1. Let condition (9.1) hold and let $f(\lambda)$ be holomorphic on a neighborhood of a Jordan set whose boundary C has the property
$$|z - a_{jj}| > \sum_{k=1, k \ne j}^n a_{jk}$$
for all $z \in C$ and $j = 1, \ldots, n$. In addition, let f be real on [a, b]. Then the following inequalities are valid:
$$f(A) \ge \sum_{k=0}^\infty \alpha_k(f, A)\,W^k, \qquad (9.2)$$
provided
$$r_s(W)\,\overline{\lim_{k \to \infty}} \sqrt[k]{|\alpha_k(f, A)|} < 1,$$
and
$$f(A) \le \sum_{k=0}^\infty \beta_k(f, A)\,W^k, \qquad (9.3)$$
provided
$$r_s(W)\,\overline{\lim_{k \to \infty}} \sqrt[k]{|\beta_k(f, A)|} < 1.$$
where
$$M_k = \sum_{j_1=1}^n \sum_{j_2=1}^n \cdots \sum_{j_{k+1}=1}^n \hat{Q}_{j_1} W \hat{Q}_{j_2} W \cdots W \hat{Q}_{j_{k+1}}\,J_{j_1 j_2 \ldots j_{k+1}}.$$
Here
$$J_{j_1 \ldots j_{k+1}} = \frac{(-1)^{k+1}}{2\pi i}\int_C \frac{f(\lambda)\,d\lambda}{(b_{j_1} - \lambda)\cdots(b_{j_{k+1}} - \lambda)} \quad (b_j = a_{jj}).$$
Since S is real, Lemma 1.5.2 from [31] gives us the inequalities
$$J_{j_1 \ldots j_{k+1}} \ge \alpha_k(f, A).$$
Hence,
$$M_k \ge \alpha_k(f, A)\sum_{j_1=1}^n \hat{Q}_{j_1} W \sum_{j_2=1}^n \hat{Q}_{j_2} W \cdots W \sum_{j_{k+1}=1}^n \hat{Q}_{j_{k+1}} = \alpha_k(f, A)\,W^k.$$
2.10 Comments
One of the first estimates for the norm of a regular matrix-valued function was
established by I.M. Gel’fand and G.E. Shilov [19] in connection with their inves-
tigations of partial differential equations, but that estimate is not sharp; it is not
attained for any matrix. The problem of obtaining a sharp estimate for the norm
of a matrix-valued function has been repeatedly discussed in the literature, cf. [14].
In the late 1970s, the author obtained a sharp estimate for a matrix-valued function regular on the convex hull of the spectrum, cf. [21] and references therein.
It is attained in the case of normal matrices. Later, that estimate was extended
to various classes of non-selfadjoint operators, such as Hilbert-Schmidt operators,
quasi-Hermitian operators (i.e., linear operators with completely continuous imag-
inary components), quasiunitary operators (i.e., operators represented as a sum
of a unitary operator and a compact one), etc. For more details see [31, 56] and
references given therein.
The material of this chapter is taken from the papers [51, 53, 56] and the
monograph [31].
For relevant results on matrix-valued functions and perturbations of matrices, see the well-known books [9, 74] and [90].
Chapter 3
General Linear Systems
This chapter is devoted to general linear systems, including the Bohl-Perron principle.
Then
$$\int_0^\eta d_\tau R_1(t,\tau)\,f(t-\tau) = \sum_{k=1}^\infty A_k(t)\,f(t - h_k(t)).$$
Then
$$\int_0^\eta d_\tau R_2(t,\tau)\,f(t-\tau) = \int_0^\eta A(t,s)\,f(t-s)\,ds.$$
Our main object in this chapter is the following problem in $\mathbb{C}^n$:
$$\dot y(t) = \int_0^\eta d_\tau R(t,\tau)\,y(t-\tau) \quad (t \ge 0), \tag{1.1}$$
together with the corresponding nonhomogeneous problem with a given locally integrable vector function f(t) and the zero initial condition (1.4). Below we prove that problems (1.1), (1.2) and (1.3), (1.4) have unique solutions; see also the well-known Theorem 6.1.1 from [71].
It is assumed that the variation of $R(t,\tau) = (r_{ij}(t,\tau))_{i,j=1}^n$ in τ is bounded on [0, ∞):
$$v_{jk} := \sup_{t\ge 0}\,\mathrm{var}(r_{jk}(t,\cdot)) < \infty. \tag{1.7}$$
where
$$0 = \tau_0 < \tau_1 < \dots < \tau_m \le \eta$$
are constants, $A_k(t)$ are piecewise continuous matrices and $A(t,\tau)$ is integrable in τ on [0, η]. Then (1.8) can be written as (1.1). Besides, (1.7) holds, provided
$$\sup_{t\ge 0}\left(\int_0^\eta \|A(t,s)\|_n\,ds + \sum_{k=0}^m \|A_k(t)\|_n\right) < \infty. \tag{1.9}$$
In this chapter $\|z\|_n$ is the Euclidean norm of $z \in \mathbb{C}^n$ and $\|A\|_n$ is the spectral norm of a matrix A. In addition, $C(\chi) = C(\chi, \mathbb{C}^n)$ is the space of continuous functions defined on a set $\chi \subseteq \mathbb{R}$ with values in $\mathbb{C}^n$ and the norm $\|w\|_{C(\chi)} = \sup_{t\in\chi}\|w(t)\|_n$. Recall that in a finite-dimensional space all the norms are equivalent.
Let y(t) be a solution of problem (1.1), (1.2). Then (1.1) is said to be stable,
if there is a constant c0 ≥ 1, independent of φ, such that
Lemma 3.2.1. Let condition (1.7) hold. Then for any T > 0, there is a constant
V (R) independent of T , such that
Lemma 3.2.2. If condition (1.7) holds and a vector valued function f is integrable
on each finite segment, then problem (1.3), (1.4) has on [0, ∞) a unique solution.
Proof. By (1.6),
$$x(t) = \int_0^t f(s)\,ds + \int_0^t (Ex)(s)\,ds,$$
that is, $x = f_1 + Wx$, where
$$f_1(t) = \int_0^t f(s)\,ds \ \text{ and } \ Wx(t) = \int_0^t (Ex)(s)\,ds. \tag{2.2}$$
Hence,
$$\|W^2 x\|_{C(0,T)} \le V^2(R)\int_0^T\!\!\int_0^{s_1} \|x\|_{C(0,s_2)}\,ds_2\,ds_1.$$
Similarly,
$$\|W^k x\|_{C(0,T)} \le V^k(R)\int_0^T\!\!\int_0^{s_1}\!\!\cdots\int_0^{s_{k-1}} \|x\|_{C(0,s_k)}\,ds_k\cdots ds_2\,ds_1 \le V^k(R)\,\|x\|_{C(0,T)}\,\frac{T^k}{k!}.$$
Thus the spectral radius of W equals zero and, consequently,
$$(I - W)^{-1} = \sum_{k=0}^\infty W^k.$$
Therefore,
$$x = (I - W)^{-1} f_1 = \sum_{k=0}^\infty W^k f_1. \tag{2.3}$$
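The factorial decay $\|W^k\| \le V^k(R)\,T^k/k!$ — which forces the spectral radius of W to vanish and makes the Neumann series converge — can be observed on a discretized Volterra operator. A minimal Python sketch for the simplest case $(Wx)(t) = \int_0^t x(s)\,ds$ (so V(R) = 1; the grid and test function are hypothetical choices of ours):

```python
import math
import numpy as np

# Discretize (Wx)(t) = int_0^t x(s) ds on [0, T] by the left-rectangle rule.
T, N = 1.0, 400
h = T / N
W = np.tril(np.ones((N, N)), -1) * h

# The powers decay factorially, ||W^k|| <= T^k / k!, so the spectral radius is 0.
Wk = np.eye(N)
for k in range(1, 8):
    Wk = Wk @ W
    assert np.linalg.norm(Wk, 2) <= T**k / math.factorial(k) + 1e-9

# Hence (I - W)^{-1} = sum_k W^k exists for every T; solving x = f1 + Wx
# with f1 = 1 reproduces x(t) = e^t (the integrated form of x' = x, x(0) = 1).
f1 = np.ones(N)
x = np.linalg.solve(np.eye(N) - W, f1)
t = np.arange(N) * h
print(float(np.max(np.abs(x - np.exp(t)))) < 0.01)  # → True
```

The point of the construction is exactly this: convergence of the Neumann series holds for every finite T, with no smallness condition on V(R).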
Lemma 3.2.3. If condition (1.7) holds, then problem (1.1), (1.2) has on [0, ∞) a
unique solution for an arbitrary φ ∈ C(−η , 0).
Proof. Put φ̂(t) = φ(t) (t ≤ 0), φ̂(t) = φ(0) (t ≥ 0), and x(t) = y(t) − φ̂(t) (t ≥
0). Then problem (1.1), (1.2) takes the form (1.3), (1.4) with f = E φ̂. Now the
previous lemma proves the result.
This formula can be obtained by direct differentiation. For other proofs see [71, Section 6.2] and [78]; in these books solution representations for the homogeneous problem can also be found.
Formula (3.4) is called the Variation of Constants formula.
Now we are going to derive a representation for solutions of the homogeneous
equation (1.1).
To this end put
$$z(t) = \begin{cases} y(t) - G(t,0)\phi(0) & \text{if } t \ge 0,\\ 0 & \text{if } -\eta \le t < 0.\end{cases}$$
If we denote
$$\tilde\phi(t) = \begin{cases} 0 & \text{if } t \ge 0,\\ \phi(t) & \text{if } -\eta \le t < 0,\end{cases} \tag{3.5}$$
then we can write
But
$$\frac{\partial}{\partial t}\,G(t,0)\phi(0) = \int_0^\eta d_\tau R(t,\tau)\,G(t-\tau,0)\phi(0).$$
So
$$\dot z(t) = \int_0^\eta d_\tau R(t,\tau)\,(z(t-\tau) + \tilde\phi(t-\tau)) \quad (t \ge 0).$$
is valid.
where
$$\psi(t) = \int_0^\eta d_\tau R(t,\tau)\,\hat\phi(t-\tau).$$
Besides, (1.4) holds with $x(t) = x_0(t)$. Since $V(R) < \infty$, we have $\psi \in C(-\eta,\infty)$. Due to the hypothesis of this lemma, $x_0 \in C(0,\infty)$. Thus $y \in C(-\eta,\infty)$, as claimed.
Lemma 3.4.3. Let a(s) be a continuous function defined on [0, η] and condition (1.7) hold. Then for any T > 0, one has
$$\left\|\int_0^\eta a(s)\,d_s R(t,s)\,f(t-s)\right\|_{C(-\eta,T)} \le V(R)\max_{0\le s\le\eta}|a(s)|\,\|f\|_{C(-\eta,T)}.$$
Now repeating the arguments of the proof of Lemma 1.12.3, we obtain the required result.
So Ĝ is defined on the whole space C(0, ∞). It is closed, since problem (1.3), (1.4)
has a unique solution. Therefore Ĝ is bounded according to the Closed Graph
theorem (see Section 1.3).
with the zero initial condition. For solutions $x$ and $x_\epsilon$ of (1.3) and (4.5), respectively, we have
$$\frac{d}{dt}(x - x_\epsilon)(t) = \int_0^\eta d_\tau R(t,\tau)\,x(t-\tau) - \epsilon x_\epsilon(t) - \int_0^\eta e^{\epsilon\tau}\,d_\tau R(t,\tau)\,x_\epsilon(t-\tau) = \int_0^\eta d_\tau R(t,\tau)\,(x(t-\tau) - x_\epsilon(t-\tau)) + f_\epsilon(t),$$
where
$$f_\epsilon(t) = -\epsilon x_\epsilon(t) + \int_0^\eta (1 - e^{\epsilon\tau})\,d_\tau R(t,\tau)\,x_\epsilon(t-\tau). \tag{4.6}$$
Consequently,
$$x - x_\epsilon = \hat G f_\epsilon. \tag{4.7}$$
For brevity, in this proof we put $\|\cdot\|_{C(0,T)} = |\cdot|_T$ for a finite T > 0. Then
$$|\hat G|_T \le \|\hat G\|_{C(0,\infty)}.$$
So
$$|x_\epsilon|_T \le |x|_T + |x_\epsilon|_T\,\|\hat G\|_{C(0,\infty)}\,(\epsilon + V(R)(e^{\epsilon\eta} - 1)).$$
Hence, for all sufficiently small ϵ,
$$|x_\epsilon|_T \le \frac{|x|_T}{1 - \|\hat G\|_{C(0,\infty)}(\epsilon + V(R)(e^{\epsilon\eta} - 1))} \le \frac{\|x\|_{C(0,\infty)}}{1 - \|\hat G\|_{C(0,\infty)}(\epsilon + V(R)(e^{\epsilon\eta} - 1))} < \infty.$$
Now letting T → ∞, we get $x_\epsilon \in C(0,\infty)$. Hence, by the previous lemma, a solution y of (4.2) is bounded. Now (4.1) proves the exponential stability, as claimed.
and $\|w\|_{L^\infty(\chi)} = \mathrm{ess\,sup}_{t\in\chi}\|w(t)\|_n$. Besides, R(t, τ) is the same as in Section 3.1. In particular, condition (1.7) holds.
Theorem 3.5.1. If for a p ≥ 1 and any f ∈ Lp (0, ∞), the non-homogeneous problem
(1.3), (1.4) has a solution x ∈ Lp (0, ∞), and condition (1.7) holds, then equation
(1.1) is exponentially stable.
The proof of this theorem is divided into a series of lemmas presented in this
section.
Note that the existence and uniqueness of solutions of (1.3) in the considered case follow from Lemma 3.2.2 proved above, since f is locally integrable.
Again put
$$Eu(t) = \int_0^\eta d_\tau R(t,\tau)\,u(t-\tau) \quad (t \ge 0), \tag{5.1}$$
considering that operator as the one acting from $L^p(-\eta,T)$ into $L^p(0,T)$ for all T > 0.
Lemma 3.5.2. For any p ≥ 1 and all T > 0, there is a constant V(R) independent of T, such that
$$\|Eu\|_{L^p(0,T)} \le V(R)\|u\|_{L^p(-\eta,T)}. \tag{5.2}$$
Proof. This result is due to Corollary 1.12.4.
Lemma 3.5.3. For any $f \in L^p(0,\infty)$ (p ≥ 1), let a solution of the nonhomogeneous problem (1.3), (1.4) be in $L^p(0,\infty)$ and (1.7) hold. Then any solution of the homogeneous problem (1.1), (1.2) is in $L^p(-\eta,\infty)$.
Proof. With a μ > 0, put
$$v(t) = \begin{cases} e^{-\mu t}\phi(0) & \text{if } t \ge 0,\\ \phi(t) & \text{if } -\eta \le t < 0.\end{cases}$$
Then $v \in L^p(-\eta,\infty)$ and therefore $Ev \in L^p(0,\infty)$ for all p ≥ 1. Furthermore, substitute y(t) = x(t) + v(t) into (1.1). Then we have problem (1.3), (1.4) with
$$f(t) = \mu e^{-\mu t}\phi(0) + (Ev)(t).$$
Clearly, f ∈ Lp (0, ∞). According to the assumption of this lemma, the solution
x(t) of problem (1.3), (1.4) is in Lp (0, ∞). Thus, y = x+v ∈ Lp (0, ∞). As claimed.
Lemma 3.5.4. If condition (1.7) holds and a solution y(t) of problem (1.1), (1.2) is in $L^p(0,\infty)$ for a p ≥ 1, then the solution is bounded on [0, ∞). Moreover, if p < ∞, then
$$\|y\|^p_{C(0,\infty)} \le pV(R)\|y\|^p_{L^p(-\eta,\infty)},$$
where V(R) is defined by (5.2).
Proof. By (1.1) and Lemma 3.5.2,
For simplicity, in this proof put $\|y(t)\|_n = |y(t)|$. The case p = 1 is obvious, since
$$|y(t)| = -\int_t^\infty \frac{d|y(t_1)|}{dt_1}\,dt_1 \le \int_t^\infty |\dot y(t_1)|\,dt_1 \le \|\dot y\|_{L^1} \le V(R)\|y\|_{L^1} \quad (t \ge 0),$$
as claimed.
Lemma 3.5.5. Let a(s) be a continuous function defined on [0, η] and condition (1.7) hold. Then for any T > 0 and p ≥ 1, one has
$$\left\|\int_0^\eta a(s)\,d_s R(t,s)\,f(t-s)\right\|_{L^p(-\eta,T)} \le V(R)\max_{0\le s\le\eta}|a(s)|\,\|f\|_{L^p(-\eta,T)},$$
for a finite T > 0. Then $|\hat G|_{p,T} \le \|\hat G\|_{L^p(0,\infty)}$. In addition, by (4.6) and Lemma 3.5.5, we can write
$$|f_\epsilon|_{p,T} \le |x_\epsilon|_{p,T}\,(\epsilon + V(R)(e^{\epsilon\eta} - 1)).$$
Hence (4.7) implies the inequality
$$|x_\epsilon|_{p,T} \le |x|_{p,T} + \|\hat G\|_{L^p(0,\infty)}\,|x_\epsilon|_{p,T}\,(\epsilon + V(R)(e^{\epsilon\eta} - 1)).$$
Consequently, for all sufficiently small ϵ,
$$|x_\epsilon|_{p,T} \le \frac{|x|_{p,T}}{1 - \|\hat G\|_{L^p(0,\infty)}(\epsilon + V(R)(e^{\epsilon\eta} - 1))} \le \frac{\|x\|_{L^p(0,\infty)}}{1 - \|\hat G\|_{L^p(0,\infty)}(\epsilon + V(R)(e^{\epsilon\eta} - 1))} < \infty.$$
Letting T → ∞, we get $x_\epsilon \in L^p(0,\infty)$. Consequently, by Lemmas 3.5.3 and 3.5.4, the solution y of (4.2) is bounded. Now (4.1) proves the exponential stability, as claimed.
with a given vector function f(t) and the zero initial condition
$$x(t) = 0 \quad (-\infty \le t \le 0). \tag{6.4}$$
In the space $L^p(-\infty,\infty)$ with a p ≥ 1 introduce the operator
$$E_\infty u(t) = \int_0^\infty d_\tau R(t,\tau)\,u(t-\tau) \quad (t \ge 0;\ u \in L^p(-\infty,\infty)).$$
holds.
A solution of problem (6.1), (6.2) is defined as the one of problem (1.1), (1.2) with η = ∞; a solution of problem (6.3), (6.4) is defined as the one of problem (1.3), (1.4). If $f \in L^p(0,\infty)$, p ≥ 1, the existence and uniqueness of solutions can be proved as in Lemma 3.2.2, since f is locally integrable.
For instance, consider the equation
$$\dot y(t) = \int_0^\infty A(t,\tau)\,y(t-\tau)\,d\tau + \sum_{k=0}^\infty A_k(t)\,y(t-\tau_k) \quad (t \ge 0), \tag{6.6}$$
where $0 = \tau_0 < \tau_1 < \dots$ are constants, $A_k(t)$ are piecewise continuous matrices and $A(t,\tau)$ is integrable in τ on [0, ∞). Then it is not hard to check that (6.5) holds, if
$$\sup_{t\ge 0}\left(\int_0^\infty \|A(t,s)\|_n\,ds + \sum_{k=0}^\infty \|A_k(t)\|_n\right) < \infty. \tag{6.7}$$
We will say that equation (6.1) has the ϵ-property in $L^p$, if the relation
$$\left\|\int_0^t (e^{\epsilon\tau} - 1)\,d_\tau R(t,\tau)\,f(t-\tau)\right\|_{L^p(0,\infty)} \to 0 \ \text{ as } \ \epsilon \to 0 \quad (\epsilon > 0) \tag{6.9}$$
Clearly,
$$\|\phi\|^p_{L^p(-\infty,0)} \le \|\phi\|_{L^1(-\infty,0)}\,\|\phi\|^{p-1}_{C(-\infty,0)}.$$
By the condition of the lemma, $x \in L^p(0,\infty)$. We thus get the required result.
$$\|y\|^p_{C(0,\infty)} \le pV(R)\|y\|^{p-1}_{L^p(0,\infty)}\|y\|_{L^p(-\infty,\infty)} \le pV(R)\|y\|^p_{L^p(-\infty,\infty)}$$
are valid.
The proof of this lemma is similar to the proof of Lemma 3.5.4.
Proof of Theorem 3.6.1: Substituting
with the zero initial condition. For solutions $x$ and $x_\epsilon$ of (6.3) and (7.2), respectively, we have
$$\frac{d}{dt}(x - x_\epsilon)(t) = \int_0^\infty d_\tau R(t,\tau)\,x(t-\tau) - \epsilon x_\epsilon(t) - \int_0^\infty e^{\epsilon\tau}\,d_\tau R(t,\tau)\,x_\epsilon(t-\tau) = \int_0^\infty d_\tau R(t,\tau)\,(x(t-\tau) - x_\epsilon(t-\tau)) + f_\epsilon(t),$$
where
$$f_\epsilon(t) = -\epsilon x_\epsilon(t) + \int_0^\infty (1 - e^{\epsilon\tau})\,d_\tau R(t,\tau)\,x_\epsilon(t-\tau). \tag{7.3}$$
Consequently,
$$x - x_\epsilon = \hat G f_\epsilon. \tag{7.4}$$
For brevity, in this proof, for a fixed p we put $\|\cdot\|_{L^p(0,T)} = |\cdot|_T$ for a finite T > 0. Then $|\hat G|_T \le \|\hat G\|_{L^p(0,\infty)}$. In addition, by the ϵ-property,
$$|f_\epsilon|_T \le v(\epsilon)|x_\epsilon|_T,$$
where v(ϵ) → 0 as ϵ → 0. So
$$|x - x_\epsilon|_T \le v(\epsilon)\|\hat G\|_{L^p(0,\infty)}|x_\epsilon|_T.$$
Thus for a sufficiently small ϵ,
$$|x_\epsilon|_T \le \frac{|x|_T}{1 - v(\epsilon)\|\hat G\|_{L^p(0,\infty)}} \le \frac{\|x\|_{L^p(0,\infty)}}{1 - v(\epsilon)\|\hat G\|_{L^p(0,\infty)}} < \infty.$$
Now letting T → ∞, we get $x_\epsilon \in L^p(0,\infty)$. Hence, by the previous lemma, a solution y of (7.2) is bounded. Now (4.1) proves the exponential stability, as claimed.
$$\|A(s)\|_n \le Ce^{-\mu s} \ \text{ and } \ \|K(t,s)\|_n \le Ce^{-\mu(t+s)} \quad (C, \mu = \text{const} > 0;\ t, s \ge 0). \tag{8.2}$$
Then it is not hard to check that (8.1) has in $L^2$ the ϵ-property, and the operator $\tilde K : L^2(-\infty,\infty) \to L^2(0,\infty)$, defined by
$$\tilde K w(t) = \int_0^\infty K(t,\tau)\,w(t-\tau)\,d\tau,$$
with $f \in L^2(0,\infty)$. To estimate the solutions of the latter equation, we need the equation
$$\dot u(t) = \int_0^t A(\tau)\,u(t-\tau)\,d\tau + h(t) \quad (t \ge 0), \tag{8.4}$$
where $\hat A(z)$, $\hat u(z)$ and $\hat h(z)$ are the Laplace transforms of A(t), u(t) and h(t), respectively, and z is the dual variable. Then
It is assumed that $\det(zI - \hat A(z))$ is a stable function, that is, all its zeros are in the open left half-plane, and
$$\theta_0 := \sup_{\omega\in\mathbb{R}}\|(i\omega I - \hat A(i\omega))^{-1}\|_n < \frac{1}{\|\tilde K\|_{L^2(0,\infty)}}. \tag{8.5}$$
Note that various estimates for $\theta_0$ can be found in Section 2.3. By the Parseval equality we have $\|u\|_{L^2(0,\infty)} \le \theta_0\|h\|_{L^2(0,\infty)}$. By this inequality, from (8.3) we get
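The supremum defining $\theta_0$ in (8.5) can be approximated on a finite frequency grid. A sketch under assumptions of ours (not taken from the text): the kernel $A(s) = e^{-s}A_0$ with a hypothetical stable $A_0$, so that $\hat A(z) = A_0/(1+z)$.

```python
import numpy as np

A0 = np.array([[-2.0, 1.0],
               [0.0, -3.0]])   # hypothetical example matrix

def theta0(A0, wmax=200.0, npts=20001):
    # approximate sup_w ||(i w I - A_hat(i w))^{-1}||_n for A_hat(z) = A0/(1+z)
    I = np.eye(A0.shape[0])
    best = 0.0
    for w in np.linspace(-wmax, wmax, npts):
        M = 1j * w * I - A0 / (1.0 + 1j * w)
        best = max(best, np.linalg.norm(np.linalg.inv(M), 2))
    return best

print(round(theta0(A0), 3))
```

Since $\|(i\omega I - \hat A(i\omega))^{-1}\|_n \to 0$ as $|\omega| \to \infty$, truncating the grid at a moderate `wmax` is harmless here.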
3.9 Comments
The classical theory of differential delay equations is presented in many excellent
books, for instance see [67, 72, 77, 96].
Recall that the Bohl-Perron principle means that the homogeneous ordinary differential equation (ODE) ẏ = A(t)y (t ≥ 0) with a variable n × n-matrix A(t), bounded on [0, ∞), is exponentially stable, provided the nonhomogeneous ODE ẋ = A(t)x + f(t) with the zero initial condition has a bounded solution for any bounded vector-valued function f, cf. [14]. In [70, Theorem 4.15] the Bohl-Perron principle was generalized to a class of retarded systems with R(t, τ) = r(t, τ)I, where r(t, τ) is a scalar function; besides, the asymptotic (not exponential) stability was proved (see also the book [4]).
Theorems 3.4.1 and 3.5.1 are proved in [58]; Theorem 3.6.1 appears in the paper [59] (see also [61]).
Chapter 4
Time-Invariant Linear Systems with Delay
Here and below in this chapter, kAkn is the spectral norm of a matrix A.
For $R_2$ we define A(s) by
$$R_2(\tau) = \int_0^\tau A(s)\,ds \quad (0 \le \tau \le \eta).$$
Then
$$\int_0^\eta dR_2(\tau)\,f(t-\tau) = \int_0^\eta A(s)\,f(t-s)\,ds.$$
Since the function $R_2(\tau)$ is of bounded variation, we have
$$\int_0^\eta \|A(s)\|_n\,ds < \infty.$$
where 0 ≤ h1 < h2 < ... < hm < η are constants, Ak are constant matrices
and A(s) is integrable on [0, η ]. For most situations it is sufficient to consider the
special case, where R0 has only a finite number of jumps: m < ∞. So in the sequel
it is assumed that R0 has a finite number of jumps.
A solution y(t) of problem (1.1), (1.2) is a continuous function $y : [-\eta,\infty) \to \mathbb{C}^n$, such that
$$y(t) = \phi(0) + \int_0^t\!\!\int_0^\eta dR_0(\tau)\,y(s-\tau)\,ds \quad (t \ge 0) \tag{1.4a}$$
and
y(t) = φ(t) (−η ≤ t ≤ 0). (1.4b)
Recall that
$$\mathrm{Var}(R_0) = (\mathrm{var}(r_{ij}))_{i,j=1}^n$$
and
$$\mathrm{var}(R_0) = \|\mathrm{Var}(R_0)\|_n. \tag{1.5}$$
In particular, for equation (1.3), if $h_1 = 0$ we put $R_1(0) = 0$ and
$$R_1(\tau) = A_1 \ (0 < \tau \le h_2), \qquad R_1(\tau) = \sum_{k=1}^j A_k \ \text{ for } h_j < \tau \le h_{j+1}.$$
In addition,
$$R_0(\tau) = \int_0^\tau A(s)\,ds + R_1(\tau).$$
Let $\tilde a_{ij}(s)$ and $a^{(k)}_{ij}$ be the entries of A(s) and $A_k$, respectively. Then for equation (1.3), each entry of $R_0$ satisfies the inequality
$$\mathrm{var}(r_{ij}) \le \int_0^\eta |\tilde a_{ij}(s)|\,ds + \sum_{k=0}^m |a^{(k)}_{ij}|. \tag{1.6}$$
Put
$$\hat y(t) := \sup_{0\le s\le t}\|y(s)\|_n.$$
of solutions of problems (1.1), (1.2) and (2.1), (2.2), respectively, exist at least for $\mathrm{Re}\,z > \sqrt n\,\mathrm{var}(R_0)$, and the integrals converge absolutely in this half-plane. In addition, inequality (1.7) together with equation (1.1) shows that $\dot y(t)$ also has a Laplace transform at least in $\mathrm{Re}\,z > \sqrt n\,\mathrm{var}(R_0)$, given by $z\tilde y(z) - \phi(0)$. Taking the Laplace transforms of both sides of equation (1.1), we get
$$z\tilde y(z) - \phi(0) = \int_0^\infty e^{-zt}\int_0^\eta dR_0(\tau)\,y(t-\tau)\,dt = \int_0^\eta e^{-\tau z}\,dR_0(\tau)\left(\int_{-\tau}^0 e^{-zt}y(t)\,dt + \tilde y(z)\right).$$
where
$$K(z) = zI - \int_0^\eta e^{-\tau z}\,dR_0(\tau). \tag{2.5}$$
for t ≥ 0. Furthermore, apply the Laplace transform to problem (2.1), (2.2). Then we easily obtain
$$\tilde x(z) = K^{-1}(z)\tilde f(z) \tag{2.7}$$
for all regular z. Here $\tilde f(z)$ is the Laplace transform of f. Applying the inverse Laplace transform, we get the following equality:
$$x(t) = \int_0^t G(t-s)\,f(s)\,ds \quad (t \ge 0), \tag{2.8}$$
where
$$G(t) = \frac{1}{2\pi}\int_{-\infty}^\infty e^{i\omega t}K^{-1}(i\omega)\,d\omega. \tag{2.9}$$
So G(t) is the fundamental solution of equation (1.1). Formula (2.8) is the Variation of Constants formula for problem (2.1), (2.2). Note that for equation (1.3) we have
$$K(z) = zI - \int_0^\eta e^{-sz}A(s)\,ds - \sum_{k=0}^m e^{-h_k z}A_k. \tag{2.11}$$
$$g^2(A) \le N_2^2(A) - |\mathrm{Trace}\,A^2| \ \text{ and } \ g^2(A) \le \frac{N_2^2(A - A^*)}{2} = 2N_2^2(A_I), \tag{3.1}$$
where $A_I = (A - A^*)/2i$. Moreover,
and
$$g(B(i\omega)) \le N_2(B(i\omega)) \le \int_0^\eta N_2(A(s))\,ds + \sum_{k=0}^m N_2(A_k) \quad (\omega \in \mathbb{R}). \tag{3.4}$$
Below, under various assumptions, we suggest sharper estimates for $g(B(i\omega))$. According to Theorem 2.3.1, the inequality
$$\|A^{-1}\|_n \le \sum_{k=0}^{n-1}\frac{g^k(A)}{\sqrt{k!}\,d^{k+1}(A)}$$
is valid for any invertible matrix A, where d(A) is the smallest modulus of the eigenvalues of A.
Hence we arrive at the inequality
where
$$\Gamma(K(z)) = \sum_{k=0}^{n-1}\frac{g^k(B(z))}{\sqrt{k!}\,d^{k+1}(K(z))}$$
and d(K(z)) is the smallest modulus of the eigenvalues of the matrix K(z) for a fixed z.
For example, that inequality holds if $K(z) = zI - A_0 e^{-z\eta}$, where $A_0$ is a Hermitian matrix.
Denote
$$\theta(K) := \sup_{-2\,\mathrm{var}(R_0) \le \omega \le 2\,\mathrm{var}(R_0)}\|K^{-1}(i\omega)\|_n.$$
is valid.
Proof. We have
$$K(0) = -\int_0^\eta dR_0(s) = R_0(0) - R_0(\eta).$$
So
$$\|K(0)\|_n = \|R_0(\eta) - R_0(0)\|_n \le \mathrm{var}(R_0),$$
and therefore,
$$\|K^{-1}(0)\|_n \ge \frac{1}{\mathrm{var}(R_0)}.$$
Here and below in this chapter, $\|v\|_n$ is the Euclidean norm of $v \in \mathbb{C}^n$. Simple calculations show that
$$\left\|\int_0^\eta e^{-i\omega\tau}\,dR_0(\tau)\right\|_n \le \mathrm{var}(R_0) \quad (\omega \in \mathbb{R})$$
and
So
$$\|K^{-1}(i\omega)\|_n \le \frac{1}{\mathrm{var}(R_0)} \le \|K^{-1}(0)\|_n \quad (|\omega| \ge 2\,\mathrm{var}(R_0)).$$
Thus the maximum of $\|K^{-1}(i\omega)\|_n$ is attained inside the segment $[-2\,\mathrm{var}(R_0),\,2\,\mathrm{var}(R_0)]$, as claimed.
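The lemma reduces computing θ(K) to a search over the finite segment $|\omega| \le 2\,\mathrm{var}(R_0)$. A sketch for a single-delay system $\dot y(t) = A_1 y(t-h)$ with hypothetical data, whose characteristic matrix is $K(z) = zI - e^{-zh}A_1$ and $\mathrm{var}(R_0) = \|A_1\|_n$:

```python
import numpy as np

A1 = np.array([[-1.0, 0.3],
               [0.0, -1.5]])   # hypothetical example
h = 0.5
var_R0 = np.linalg.norm(A1, 2)

def invnorm(w):
    K = 1j * w * np.eye(2) - np.exp(-1j * w * h) * A1
    return np.linalg.norm(np.linalg.inv(K), 2)

# theta(K): by the lemma it suffices to scan the segment |w| <= 2 var(R0)
theta = max(invnorm(w) for w in np.linspace(-2 * var_R0, 2 * var_R0, 4001))

# outside the segment the resolvent norm stays below 1/var(R0) <= theta
outside = max(invnorm(w) for w in np.linspace(2 * var_R0, 20 * var_R0, 2001))
print(theta >= outside, theta >= 1.0 / var_R0)  # → True True
```

For this A1 the characteristic function has no zeros on the imaginary axis, so the resolvent is bounded on every grid point.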
By (3.4) and the previous lemma we have the inequality θ(K) ≤ θ̂(K), where
Denote
$$g_B := \sup_{\omega\in[-2\mathrm{var}(R_0),\,2\mathrm{var}(R_0)]} g(B(i\omega))$$
and
$$d_K := \inf_{\omega\in[-2\mathrm{var}(R_0),\,2\mathrm{var}(R_0)]} d(K(i\omega)).$$
Lemma 4.4.1. Let x(t) be a solution of problem (2.1), (2.2) with $f \in L^2(0,\infty)$. Then
$$\|x\|_{L^2(0,\infty)} \le \theta(K)\|f\|_{L^2(0,\infty)}.$$
Proof. The result is due to (2.8), Lemma 4.3.1 and the Parseval equality.
The previous lemma and Theorem 3.5.1 yield the following result.
Corollary 4.4.2. Let all the characteristic values of K(.) be in C− . Then equation
(1.1) is exponentially stable.
and
$$\left\|\int_0^\eta dR_0(s)\,f(t-s)\right\|_{C(0,\infty)} \le \sqrt n\,\mathrm{var}(R_0)\,\|f\|_{C(-\eta,\infty)} \quad (f \in C(-\eta,\infty)). \tag{4.3}$$
Calculation of such integrals is often a difficult task. Because of this, in the next lemma we suggest an estimate for $\|G\|_{L^2(0,\infty)}$. Denote
$$W(K) := \sqrt{2\theta(K)[\mathrm{var}(R_0)\theta(K) + 1]}. \tag{4.4}$$
where
$$f(t) = \int_0^\eta dR_0(s)\,\psi(t-s) + be^{-bt}I.$$
By the previous lemma and (4.6) we obtain at once the following result.
where
$$a_0(K) := \sqrt{2\,\mathrm{var}(R_0)}\,W(K) = 2\sqrt{\mathrm{var}(R_0)\theta(K)[\mathrm{var}(R_0)\theta(K) + 1]}.$$
Clearly,
$$a_0(K) \le 2(1 + \mathrm{var}(R_0)\theta(K)). \tag{4.8}$$
Now we are going to estimate the $L^1$-norm of the fundamental solution. To this end consider a scalar function r(s) of bounded variation. Then
where $r_+(s)$, $r_-(s)$ are nondecreasing functions. For a continuous function q defined on [0, η], let
$$\int_0^\eta q(s)\,|dr(s)| := \int_0^\eta q(s)\,dr_+(s) + \int_0^\eta q(s)\,dr_-(s).$$
In particular, denote
$$v_d(r) := \int_0^\eta s\,|dr(s)|.$$
Furthermore, put
$$v_d(R_0) = \|(v_d(r_{jk}))_{j,k=1}^n\|_n.$$
Recall that $r_{jk}$ are the entries of $R_0$. That is, $v_d(R_0)$ is the spectral norm of the matrix whose entries are $v_d(r_{jk})$.
is true.
Proof. Put
$$E_1 f(t) = \int_0^\eta \tau\,dR_0(\tau)\,f(t-\tau) \qquad (f(t) = (f_k(t))).$$
Then we obtain
$$\|E_1 f\|^2_{L^2(0,T)} = \sum_{j=1}^n \int_0^T |(E_1 f)_j(t)|^2\,dt,$$
and
$$|(E_1 f)_j(t)|^2 \le \left(\sum_{k=1}^n \int_0^\eta s|f_k(t-s)|\,|dr_{jk}(s)|\right)^2 = \sum_{k=1}^n \int_0^\eta \sum_{i=1}^n \int_0^\eta s|f_k(t-s)|\,|dr_{jk}(s)|\; s_1|f_i(t-s_1)|\,|dr_{ji}(s_1)|.$$
Hence
$$\int_0^T |(E_1 f)_j(t)|^2\,dt \le \int_0^\eta\!\!\int_0^\eta \sum_{i=1}^n\sum_{k=1}^n s\,|dr_{jk}(s)|\;s_1\,|dr_{ji}(s_1)|\int_0^T |f_k(t-s)f_i(t-s_1)|\,dt.$$
Thus
$$\int_0^T |(E_1 f)_j(t)|^2\,dt \le \sum_{i=1}^n\sum_{k=1}^n v_d(r_{jk})\,v_d(r_{ji})\,\|f_k\|_{L^2(-\eta,T)}\|f_i\|_{L^2(-\eta,T)} = \left(\sum_{k=1}^n v_d(r_{jk})\|f_k\|_{L^2(-\eta,T)}\right)^2.$$
Hence
$$\sum_{j=1}^n \int_0^T |(E_1 f)_j(t)|^2\,dt \le \sum_{j=1}^n \left(\sum_{k=1}^n v_d(r_{jk})\|f_k\|_{L^2(-\eta,T)}\right)^2 = \left\|(v_d(r_{jk}))_{j,k=1}^n\,\nu_2\right\|_n^2 \le v_d^2(R_0)\|\nu_2\|_n^2,$$
where $\nu_2 = \left(\|f_k\|_{L^2(-\eta,T)}\right)_{k=1}^n$. But $\|\nu_2\|_n = \|f\|_{L^2(-\eta,T)}$. This proves the lemma.
Clearly,
$$v_d(R_0) \le \eta\,\mathrm{var}(R_0).$$
For equation (1.3) we easily obtain
$$v_d(R_0) \le \int_0^\eta s\|A(s)\|_n\,ds + \sum_{k=0}^m h_k\|A_k\|_n. \tag{4.10}$$
Proof. By (1.1),
$$\dot Y(t) = t\dot G(t) + G(t) = t\int_0^\eta dR_0(\tau)\,G(t-\tau) + G(t) = \int_0^\eta dR_0(\tau)\,(t-\tau)G(t-\tau) + \int_0^\eta \tau\,dR_0(\tau)\,G(t-\tau) + G(t).$$
Thus,
$$\dot Y(t) = \int_0^\eta dR_0(\tau)\,Y(t-\tau) + F(t),$$
where
$$F(t) = \int_0^\eta \tau\,dR_0(\tau)\,G(t-\tau) + G(t).$$
Hence,
$$Y(t) = \int_0^t G(t-t_1)\,F(t_1)\,dt_1.$$
By Lemma 4.4.1, $\|Y\|_{L^2(0,\infty)} \le \theta(K)\|F\|_{L^2(0,\infty)}$. But due to the previous lemma
Now we are in a position to formulate and prove the main result of the
section.
Theorem 4.4.10. The fundamental solution G of equation (1.1) satisfies the inequality
$$\|G\|_{L^1(0,\infty)} \le \|G\|_{L^2(0,\infty)}\sqrt{\pi\theta(K)(1 + v_d(R_0))} \tag{4.11}$$
and therefore,
$$\|G\|_{L^1(0,\infty)} \le W(K)\sqrt{\pi\theta(K)(1 + v_d(R_0))}. \tag{4.12}$$
Proof. Let us apply the Carlson inequality
$$\left(\int_0^\infty |f(t)|\,dt\right)^4 \le \pi^2 \int_0^\infty f^2(t)\,dt\int_0^\infty t^2 f^2(t)\,dt$$
for a real scalar-valued $f \in L^2[0,\infty)$ with the property $tf(t) \in L^2[0,\infty)$, cf. [93, Chapter VIII]. By this inequality
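The Carlson inequality itself is easy to verify numerically. For the (hypothetical) test function $f(t) = e^{-t}$ the left-hand side equals 1, while the right-hand side is $\pi^2\cdot\tfrac12\cdot\tfrac14 = \pi^2/8 \approx 1.234$:

```python
import numpy as np

# Carlson's inequality: (int_0^inf |f|)^4 <= pi^2 (int f^2)(int t^2 f^2)
t = np.linspace(0.0, 50.0, 50001)
f = np.exp(-t)
h = t[1] - t[0]
trap = lambda g: float(np.sum((g[1:] + g[:-1]) * h / 2))  # trapezoidal rule

lhs = trap(f) ** 4                                # = 1^4
rhs = np.pi**2 * trap(f**2) * trap(t**2 * f**2)   # = pi^2 * (1/2) * (1/4)
print(round(lhs, 3), round(rhs, 3))  # → 1.0 1.234
```

The same quadrature, applied to a computed fundamental solution G, is how the factor $\|tG\|_{L^2}$ entering (4.11) would be estimated in practice.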
where $A_k$ are constant matrices and $\mu_k(s)$ are scalar nondecreasing functions defined on [0, η].
In this case the characteristic matrix function is
$$K(z) = zI - \sum_{k=1}^m A_k\int_0^\eta e^{-zs}\,d\mu_k(s).$$
In addition,
$$g(K(i\omega)) = g(B(i\omega)) \le \sum_{k=1}^m N_2(A_k)\,\mathrm{var}(\mu_k) \quad (\omega \in \mathbb{R}).$$
$$\mathrm{var}(R_0) = \sum_{k=1}^m \|A_k\|_n \ \text{ and } \ v_d(R_0) = \sum_{k=1}^m h_k\|A_k\|_n,$$
and
$$g(B(i\omega)) \le \sum_{k=1}^m N_2(A_k) \quad (\omega \in \mathbb{R}).$$
Under additional conditions, the latter estimate can be improved. For example, if
is valid.
Proof. Clearly, k(0) = var(μ) and
where
$$\hat d := \int_0^\eta \cos(2\,\mathrm{var}(\mu)\tau)\,d\mu(\tau). \tag{6.4}$$
Proof. For brevity put v = 2var(μ). Clearly,
$$|k(i\omega)|^2 = \left|i\omega + \int_0^\eta e^{-i\omega\tau}\,d\mu(\tau)\right|^2 = \left(\omega - \int_0^\eta \sin(\tau\omega)\,d\mu(\tau)\right)^2 + \left(\int_0^\eta \cos(\tau\omega)\,d\mu(\tau)\right)^2.$$
Furthermore, put
We have
In addition,
$$\mathrm{Re}\,k(m,i\omega) = \mathrm{var}(\mu)m + (1-m)\int_0^\eta \cos(\omega\tau)\,d\mu.$$
Consequently,
$$|k(m,i\omega)| \ge \mathrm{var}(\mu)m + (1-m)\int_0^\eta \cos(v\tau)\,d\mu.$$
Therefore,
$$|k(m,i\omega)| \ge \mathrm{var}(\mu)m + (1-m)\hat d > 0 \quad (\omega \in \mathbb{R}). \tag{6.5}$$
Furthermore, assume that k(z) has a zero in the closed right half-plane $C^+$. Take into account that k(1, z) = 1 + z does not have zeros in $C^+$. So $k(m_0, i\omega)$ (ω ∈ R) should have a zero for some $m_0 \in [0,1]$, according to the continuous dependence of zeros on coefficients. But due to (6.5) this is impossible. The proof is complete.
Remark 4.6.3. If
$$\mu(t) - \mu(0) > 0 \ \text{ for some } t < \frac{\pi}{4},$$
then
$$\int_0^\eta \cos(\pi\tau)\,d\mu(\tau) > 0$$
and one can replace condition (6.4) by the following one:
$$\eta\,\mathrm{var}(\mu) \le \frac{\pi}{4}.$$
Consider the scalar function
$$k_1(z) = z + \sum_{k=1}^m b_k e^{-h_k z} \quad (h_k, b_k = \text{const} \ge 0).$$
The following result can be deduced from the previous lemma, but we are going to present an independent proof.
Lemma 4.6.4. With the notation
$$c = 2\sum_{k=1}^m b_k,$$
let
$$h_j c < \pi/2 \quad (j = 1, \dots, m).$$
Then all the zeros of $k_1(z)$ are in $C_-$ and
$$\inf_{\omega\in\mathbb{R}}|k_1(i\omega)| \ge \sum_{k=1}^m b_k\cos(ch_k) > 0.$$
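The bound of Lemma 4.6.4 can be checked on a frequency grid. A sketch with hypothetical coefficients $b = (0.3, 0.2)$ and delays $h = (0.5, 1.0)$, so that $c = 1$ and $h_j c < \pi/2$:

```python
import numpy as np

b = np.array([0.3, 0.2])   # hypothetical coefficients b_k >= 0
h = np.array([0.5, 1.0])   # delays h_k
c = 2.0 * b.sum()          # c = 1.0
assert np.all(h * c < np.pi / 2)

bound = float(np.sum(b * np.cos(c * h)))   # the lemma's lower bound

# |k1(i w)| with k1(z) = z + sum_k b_k e^{-h_k z}, sampled on a grid
ws = np.linspace(-50.0, 50.0, 200001)
vals = np.abs(1j * ws + (b[:, None] * np.exp(-1j * np.outer(h, ws))).sum(axis=0))
print(float(vals.min()) >= bound, round(bound, 4))  # → True 0.3713
```

For these data the grid infimum is attained at ω = 0 (where $|k_1(0)| = \sum_k b_k = 0.5$), comfortably above the guaranteed bound 0.3713.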
Proof. We restrict ourselves to the case m = 2; in the general case the proof is similar. Put $h_1 = v$, $h_2 = h$. Introduce the function
Clearly,
$$f(y) = y^2 + b_2^2 + b_1^2 - 2b_2 y\sin(hy) - 2b_1 y\sin(vy) + 2b_2 b_1\cos((v-h)y). \tag{6.6}$$
$$b_2^2 + b_1^2 - (b_2\sin(hc) + b_1\sin(vc))^2 + 2b_2 b_1\cos((h-v)c).$$
Hence
$$\min_y w(y) = b_2^2 + b_1^2 - b_2^2\sin^2(ch) - b_1^2\sin^2(cv) + 2b_2 b_1\cos(ch)\cos(cv) =$$
Then all the zeros of K(z, 0) are in $C_-$ due to the just proved inequality,
Hence,
$$\frac{1}{k(z)} = \int_0^\infty e^{-zt}\,\zeta(t)\,dt.$$
Let
$$e\eta\,\mathrm{var}(\mu) < 1. \tag{6.8}$$
Then it is not hard to show that (6.2) is exponentially stable and ζ(t) ≥ 0 (t ≥ 0) (see Section 11.4). Hence it easily follows that
$$\frac{1}{|k(i\omega)|} \le \int_0^\infty \zeta(t)\,dt = \frac{1}{k(0)} \quad (\omega \in \mathbb{R}).$$
But k(0) = var(μ). We thus have proved the following lemma.
Lemma 4.6.5. Let μ(s) be a nondecreasing function satisfying condition (6.8). Then
$$\inf_{-\infty\le\omega\le\infty}\left|i\omega + \int_0^\eta \exp(-i\omega s)\,d\mu(s)\right| = k(0) = \mathrm{var}(\mu)$$
and
$$\int_0^\infty \zeta(t)\,dt = \frac{1}{k(0)} = \frac{1}{\mathrm{var}(\mu)}.$$
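Lemma 4.6.5 can be illustrated by simulating the fundamental solution ζ of the simplest case $\dot u(t) = -a\,u(t-h)$, i.e. dμ a point mass of size a at h, so var(μ) = k(0) = a. With the hypothetical values a = 0.5, h = 0.5 condition (6.8) holds ($e\cdot h\cdot a \approx 0.68 < 1$), and one observes ζ(t) ≥ 0 and $\int_0^\infty \zeta(t)\,dt = 1/a = 2$ (forward-Euler sketch):

```python
import numpy as np

a, hdel = 0.5, 0.5
assert np.e * hdel * a < 1.0     # condition (6.8)

dt, T = 1e-3, 60.0
n, lag = int(T / dt), int(hdel / dt)
z = np.zeros(n)
z[0] = 1.0                       # zeta(0) = 1, zeta(t) = 0 for t < 0
for i in range(n - 1):           # forward Euler for z'(t) = -a z(t - hdel)
    delayed = z[i - lag] if i >= lag else 0.0
    z[i + 1] = z[i] - dt * a * delayed

print(bool(z.min() >= 0.0), round(float(z.sum() * dt), 2))  # → True 2.0
```

The positivity of ζ is exactly what turns the pointwise bound $1/|k(i\omega)| \le \int_0^\infty \zeta$ into the equality at ω = 0.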
Now let $\mu_0(s)$ (s ∈ [0, η]) be another nondecreasing scalar function with the property
$$e\eta\,\mathrm{var}(\mu_0) \le c_0 \quad (1 \le c_0 < 2) \tag{6.9}$$
and
$$k_0(z) = z + \int_0^\eta \exp(-zs)\,d\mu_0(s) \quad (z \in \mathbb{C}). \tag{6.10}$$
$$(2 - c_0)\,\mathrm{var}(\mu) \quad (\omega \in \mathbb{R}).$$
We thus have proved the following lemma.
Lemma 4.6.6. Let $\mu_0(s)$ (s ∈ [0, η]) be a nondecreasing scalar function satisfying (6.9). Then
$$\inf_{\omega\in\mathbb{R}}\left|i\omega + \int_0^\eta \exp(-i\omega s)\,d\mu_0(s)\right| \ge \frac{2 - c_0}{c_0}\,\mathrm{var}(\mu_0).$$
By Lemma 4.6.1,
$$\inf_{\omega\in\mathbb{R}}|k_2(i\omega)| = \inf_{-v_2\le\omega\le v_2}|k_2(i\omega)|.$$
But
$$|k_2(i\omega)|^2 = \left(\omega - a\sin(h\omega) - \int_0^\eta \sin(\tau\omega)\,d\mu_2(\tau)\right)^2 + \left(a\cos(h\omega) + \int_0^\eta \cos(\tau\omega)\,d\mu_2(\tau)\right)^2.$$
Let
$$v_2 h < \pi/2 \tag{6.11}$$
and
$$d_2 := \inf_{|\omega|\le v_2}\left(a\cos(h\omega) + \int_0^\eta \cos(\tau\omega)\,d\mu_2(\tau)\right) > 0. \tag{6.12}$$
Then we obtain
$$\inf_{\omega\in\mathbb{R}}|k_2(i\omega)| \ge d_2. \tag{6.13}$$
Repeating the arguments of the proof of Lemma 4.6.2, we arrive at the following result.
Lemma 4.6.7. Let conditions (6.11) and (6.12) hold. Then all the zeros of $k_2(z)$ are in $C_-$ and inequality (6.13) is fulfilled.
Let us point out one corollary of this lemma.
hold. Then all the zeros of $k_2(z)$ are in $C_-$ and the inequality
is valid.
In particular, if
$$k_2(z) = z + a + \int_0^\eta \exp(-zs)\,d\mu_2(s) \quad (z \in \mathbb{C})$$
and
$$a > \mathrm{var}(\mu_2), \tag{6.15}$$
then condition (6.11) automatically holds and
In addition,
$$g(B(i\omega)) = g\!\left(A\int_0^\eta e^{-i\omega s}\,d\mu(s)\right) \le g(A)\,\mathrm{var}(\mu) \quad (\omega \in \mathbb{R}),$$
with
$$d_K = \min_{j=1,\dots,n}\ \inf_{-2\mathrm{var}(\mu)\|A\|_n \le \omega \le 2\mathrm{var}(\mu)\|A\|_n}\left|i\omega + \lambda_j(A)\int_0^\eta e^{-i\omega s}\,d\mu(s)\right|.$$
Theorem 4.7.1. Let G be the fundamental solution of equation (7.1) and let all the characteristic values of its characteristic function be in $C_-$. Then
where
$$W(A,\mu) := \sqrt{2\theta_A[\|A\|_n\,\mathrm{var}(\mu)\theta_A + 1]}.$$
In addition,
$$\|\dot G\|_{L^2(0,\infty)} \le \|A\|_n\,\mathrm{var}(\mu)\,W(A,\mu), \tag{7.4}$$
$$\|G\|_{C(0,\infty)} \le W(A,\mu)\sqrt{2\|A\|_n\,\mathrm{var}(\mu)} \tag{7.5}$$
and
$$\|G\|_{L^1(0,\infty)} \le W(A,\mu)\sqrt{\pi\theta_A(1 + \|A\|_n v_d(\mu))}. \tag{7.6}$$
Clearly, $d_K$ can be directly calculated. Moreover, by Remark 4.6.3 we get the following result.
and let
$$\eta\,\mathrm{var}(\mu)\lambda_n(A) < \frac{\pi}{4}. \tag{7.8}$$
Then all the characteristic values of equation (7.1) are in $C_-$ and
where
$$d(A,\mu) := \int_0^\eta \cos(2\tau\lambda_n(A)\mathrm{var}(\mu))\,d\mu(\tau) > 0.$$
Thus
$$\theta_A \le \theta(A,\mu), \ \text{ where } \ \theta(A,\mu) := \sum_{k=0}^{n-1}\frac{(g(A)\mathrm{var}(\mu))^k}{\sqrt{k!}\,d^{k+1}(A,\mu)}.$$
Corollary 4.7.3. Let conditions (7.7) and (7.8) hold. Then all the characteristic
values of equation (7.1) are in C− and inequalities (7.3)-(7.6) are true with θ(A, μ)
instead of θA .
then the eigenvalues of K(z) are λj (K(z)) = z + e−zh λj (A). In addition, var(μ) =
1, vd(μ) = h. According to (7.7), condition (7.8) takes the form
Hence,
dK ≥ var(μ)λ1 (A), (7.14)
Now taking into account Theorem 4.7.1 we arrive at our next result.
Corollary 4.7.4. Let conditions (7.7) and (7.13) hold. Then all the characteristic
values of equation (7.1) are in C− and inequalities (7.3)-(7.6) are true with θA =
θ̂A .
and therefore,
$$\|K^{-1}(z)\|_n \le \kappa_T\|K_S^{-1}(z)\|_n,$$
$$\theta_S = \frac{1}{d_K} \ \text{ and therefore } \ \theta_A \le \frac{\kappa_T}{d_K}.$$
If, in addition, conditions (7.7) and (7.8) hold, then according to (7.9) equation (7.1) is stable and
$$\theta_A \le \frac{\kappa_T}{d(A,\mu)}. \tag{7.15}$$
Moreover, if conditions (7.7) and (7.13) hold, then according to (7.14) equation (7.1) is also stable and
$$\theta_A \le \frac{\kappa_T}{\lambda_1(A)\mathrm{var}(\mu)}. \tag{7.16}$$
Note also that if A is Hermitian and conditions (7.7) and (7.13) hold, then, reducing (7.1) to the diagonal form, due to Lemma 4.6.5 we can assert that the fundamental solution G of (7.1) satisfies the inequality
$$\|G\|_{L^1(0,\infty)} \le \frac{1}{\mathrm{var}(\mu)}\sum_{k=1}^n \frac{1}{\lambda_k(A)}. \tag{7.17}$$
This corollary is more convenient for calculations than (3.6) if n is small enough.
Recall that it is assumed that all the characteristic values of (1.1) are in the open left half-plane $C_-$. Denote
$$W_{\det}(K) := \sqrt{2\theta_{\det}(K)[\mathrm{var}(R_0)\theta_{\det}(K) + 1]}.$$
By Lemma 4.4.4 we arrive at the following lemma.
Lemma 4.8.2. The inequality $\|G\|_{L^2(0,\infty)} \le W_{\det}(K)$ is valid.
In addition, Lemma 4.4.5 and Theorem 4.4.7 imply our next result.
Lemma 4.8.3. The inequalities
$$\|\dot G\|_{L^2(0,\infty)} \le W_{\det}(K)\,\mathrm{var}(R_0)$$
and
$$\|G\|_{C(0,\infty)} \le a_{\det}(K) \tag{8.2}$$
hold, where
$$a_{\det}(K) := \sqrt{2\,\mathrm{var}(R_0)}\,W_{\det}(K).$$
Clearly,
$$a_{\det}(K) = 2\sqrt{\mathrm{var}(R_0)\theta_{\det}(K)[\mathrm{var}(R_0)\theta_{\det}(K) + 1]}$$
and
$$a_{\det}(K) \le 2(1 + \mathrm{var}(R_0)\theta_{\det}(K)) \le 2(1 + \mathrm{var}(R_0)\hat\theta_{\det}(K)).$$
To estimate the L1 -norm of the fundamental solution via the determinant, we use
inequality (4.12) and Corollary 4.8.1, by which we get the following result.
Corollary 4.8.4. The fundamental solution G of equation (1.1) satisfies the inequality
$$\|G\|_{L^1(0,\infty)} \le W_{\det}(K)\sqrt{\pi\theta_{\det}(K)(1 + v_d(R_0))}.$$
If, in addition,
$$\eta\,\mathrm{var}(r_{jj}) \le \pi/4, \quad j = 1, \dots, n, \tag{8.4}$$
then by Lemma 4.6.2 all the zeros of det K(z) are in $C_-$ and
$$|\det K(i\omega)| \ge \prod_{k=1}^n \hat d_{kk} > 0, \tag{8.5}$$
where
$$\hat d_{jj} = \int_0^\eta \cos(2\,\mathrm{var}(r_{jj})\tau)\,dr_{jj}(\tau).$$
Proof. Clearly,
$$\sum_{k=1,\,k\ne j}^n\left|\int_0^\eta e^{-i\omega s}\,dr_{jk}(s)\right| \le \xi_j.$$
Hence the result is due to the Ostrowski theorem (see Section 1.11).
where dˆjj are defined in the previous section. So due to the previous lemma we
arrive at the following result.
Corollary 4.9.2. If the conditions (8.4) and
4.10 Comments
The content of this chapter is based on the paper [26] and on some results from Chapter 8 of the book [24].
Chapter 5
Properties of Characteristic
Values
where
$$B_0 = -\int_0^\eta dR_0(s) = -R_0(\eta), \qquad B_1 = I + \int_0^\eta s\,dR_0(s),$$
$$B_k = (-1)^{k+1}\int_0^\eta s^k\,dR_0(s) \quad (k \ge 2).$$
$$R_0(\eta) \ \text{ is invertible}. \tag{1.2}$$
Since $\det K_1(z) = -\det R_0^{-1}(\eta)\,\det K(z)$, all the characteristic values of K and $K_1$ coincide.
For brevity, in this chapter $\|A\|$ denotes the spectral norm of a matrix A: $\|A\| = \|A\|_n$. Without loss of generality assume that
$$\eta < 1. \tag{1.3}$$
If this condition is not valid, then by the substitution z = wa with some a > η into (1.1), we obtain
$$K(aw) = awI - \int_0^\eta \exp(-saw)\,dR_0(s) = a\left(wI - \frac{1}{a}\int_0^{a\eta}\exp(-\tau w)\,dR_0(\tau/a)\right) = aK_1(w),$$
where
$$K_1(w) = wI - \int_0^{\eta_1}\exp(-\tau w)\,dR_1(\tau)$$
is true.
The proof of this lemma is similar to the proof of Lemma 4.4.8. It is left to
the reader.
From this lemma it follows that
Furthermore, denote
$$\Psi_K := \left[\sum_{k=1}^\infty C_k C_k^*\right]^{1/2}.$$
So $\Psi_K$ is an n × n-matrix. Set
$$\omega_k(K) = \begin{cases} \lambda_k(\Psi_K) & \text{for } k = 1, \dots, n,\\ 0 & \text{if } k \ge n+1,\end{cases}$$
where $\lambda_k(\Psi_K)$ are the eigenvalues of the matrix $\Psi_K$ with their multiplicities, enumerated in decreasing order: $\omega_k(K) \ge \omega_{k+1}(K)$ (k = 1, 2, ...).
Theorem 5.1.2. Let conditions (1.2) and (1.3) hold. Then the characteristic values of K satisfy the inequalities
$$\sum_{k=1}^j \frac{1}{|z_k(K)|} < \sum_{k=1}^j \left[\omega_k(K) + \frac{n}{k+n}\right] \quad (j = 1, 2, \dots)$$
and thus,
$$|z_j(K)| > \frac{j}{\sum_{k=1}^j \left[\omega_k(K) + \frac{n}{k+n}\right]} \quad (j = 1, 2, \dots).$$
K has no more than j − 1 characteristic values. Let $\nu_K(r)$ be the function counting the characteristic values of K in the disc |z| ≤ r. We consequently get the following.
Corollary 5.1.3. Let conditions (1.2) and (1.3) hold. Then the inequality $\nu_K(r) \le j - 1$ is valid, provided
$$r \le \frac{j}{\sum_{k=1}^j \left[\omega_k(K) + \frac{n}{k+n}\right]} \quad (j = 2, 3, \dots).$$
Then
$$F_K(w) = -R_0^{-1}(\eta)\int_0^\infty e^{-wt}\left(tI - \int_0^\eta \exp(-ts)\,dR_0(s)\right)dt.$$
We can write down
$$F_K(w) = -R_0^{-1}(\eta)\left(\frac{1}{w^2}I - \int_0^\eta \frac{1}{s+w}\,dR_0(s)\right). \tag{1.5}$$
On the other hand,
$$F_K(w) = \sum_{k=0}^\infty \frac{1}{w^{k+1}}C_k$$
and therefore
$$\frac{1}{2\pi}\int_0^{2\pi} F_K(e^{-is})F_K^*(e^{is})\,ds = \sum_{k=0}^\infty C_k C_k^*.$$
Thus, we have proved the following result.
Lemma 5.1.4. Let condition (1.3) hold. Then
$$\Psi_K^2 = \frac{1}{2\pi}\int_0^{2\pi} F_K(e^{-is})F_K^*(e^{is})\,ds$$
and consequently,
$$\|\Psi_K\|^2 \le \frac{1}{2\pi}\int_0^{2\pi}\|F_K(e^{is})\|^2\,ds \le \sup_{|w|=1}\|F_K(w)\|^2.$$
and consequently,
$$\omega_j(K) \le \sup_{|w|=1}\|F_K(w)\| \le \alpha(F_K) \quad (j \le n), \tag{1.6}$$
where
$$\alpha(F_K) := \|R_0^{-1}\|\left(1 + \frac{\mathrm{var}(R_0)}{1 - \eta}\right).$$
Note that the norm of $R_0^{-1}(\eta)$ can be estimated by the results presented in Section 2.3 above. Inequality (1.6) and Theorem 5.1.2 imply the following result.
Corollary 5.1.5. Let conditions (1.2) and (1.3) hold. Then the characteristic values of K satisfy the inequalities
$$\sum_{k=1}^j \frac{1}{|z_k(K)|} < j\alpha(F_K) + n\sum_{k=1}^j \frac{1}{k+n} \quad (j \le n)$$
and
$$\sum_{k=1}^j \frac{1}{|z_k(K)|} < n\left(\alpha(F_K) + \sum_{k=1}^j \frac{1}{k+n}\right) \quad (j > n).$$
$$\sum_{k=1}^\infty\left(\mathrm{Im}\,\frac{1}{z_k(K)}\right)^2 \ \text{ and } \ \sum_{k=1}^\infty\left(\mathrm{Re}\,\frac{1}{z_k(K)}\right)^2.$$
where
$$C_k = (-1)^k R_0^{-1}(\eta)\int_0^\eta s^k\,dR_0(s) \quad (k \ge 2),$$
$$C_0 = I \ \text{ and } \ C_1 = -R_0^{-1}(\eta)\left(I + \int_0^\eta s\,dR_0(s)\right).$$
To formulate our next result, for an integer m ≥ 2, introduce the m × m-block matrix
$$\hat B_m = \begin{pmatrix} -C_1 & -C_2/2 & \dots & -C_{m-1}/(m-1)! & -C_m/m!\\ I & 0 & \dots & 0 & 0\\ 0 & I & \dots & 0 & 0\\ \vdots & \vdots & & \vdots & \vdots\\ 0 & 0 & \dots & I & 0 \end{pmatrix}$$
and
$$\psi(K,t) := \tau(K) + \frac{1}{2}\,\mathrm{Re}\,\mathrm{Trace}\,[e^{2it}(C_1^2 - C_2)] \quad (t \in [0, 2\pi)),$$
where
$$\zeta(z) := \sum_{k=1}^\infty \frac{1}{k^z} \quad (\mathrm{Re}\,z > 1)$$
are valid.
This theorem is a particular case of Theorem 12.4.1 from [46].
Note that
$$\psi(K,\pi/2) = \tau(K) - \frac{1}{2}\,\mathrm{Re}\,\mathrm{Trace}\,(C_1^2 - C_2)$$
and
$$\psi(K,0) = \tau(K) + \frac{1}{2}\,\mathrm{Re}\,\mathrm{Trace}\,(C_1^2 - C_2).$$
Now Theorem 5.2.2 yields the following result.
Corollary 5.2.3. Let conditions (1.2) and (1.3) hold. Then
$$\tau(K) - \sum_{k=1}^\infty \frac{1}{|z_k(K)|^2} = \psi(K,\pi/2) - 2\sum_{k=1}^\infty\left(\mathrm{Im}\,\frac{1}{z_k(K)}\right)^2 = \psi(K,0) - 2\sum_{k=1}^\infty\left(\mathrm{Re}\,\frac{1}{z_k(K)}\right)^2 \ge 0.$$
Consequently,
$$\sum_{k=1}^\infty \frac{1}{|z_k(K)|^2} \le \tau(K),$$
$$2\sum_{k=1}^\infty\left(\mathrm{Im}\,\frac{1}{z_k(K)}\right)^2 \le \psi(K,\pi/2)$$
and
$$2\sum_{k=1}^\infty\left(\mathrm{Re}\,\frac{1}{z_k(K)}\right)^2 \le \psi(K,0).$$
0 = E0 ⊂ E1 ⊂ ... ⊂ En = I
and
AEk = Ek AEk (k = 1, . . . , n). (3.1)
Besides, ΔEk = Ek − Ek−1 (k = 1, ..., n) are one dimensional. Again set
→
Y
Xk := X1 X2 ...Xm
1≤k≤m
provided I − A is invertible.
Furthermore, for each fixed z ∈ C, K(z) possesses the maximal chain of
invariant orthogonal projections, which we denote by Ek (K, z):
0 = E0 (K, z) ⊂ E1 (K, z) ⊂ ... ⊂ En (K, z) = I
and
K(z)Ek (K, z) = Ek (K, z)K(z)Ek (K, z) (k = 1, . . . , n).
Moreover,
ΔEk (K, z) = Ek (K, z) − Ek−1 (K, z) (k = 1, ..., n)
are one dimensional orthogonal projections.
Write K(z) = zI − B(z) with
$$B(z) = \int_0^{\eta} \exp(-zs)\,dR_0(s).$$
is true.
Furthermore, let
A = D + V (σ(A) = σ(D)). (3.3)
be the Schur triangular representation of A (see Section 2.2). Namely, V is the
nilpotent part of A and
$$D = \sum_{k=1}^{n} \lambda_k(A)\,\Delta E_k$$
is its diagonal part. Besides V Ek = Ek V Ek (k = 1, . . . , n). Let us use the equality
$$A^{-1} = D^{-1}\prod_{2\le k\le n}^{\to}\left(I - \frac{V\,\Delta E_k}{\lambda_k(A)}\right)$$
for any non-singular matrix A (see Section 2.2). Now replace A by K(z) and denote by D̃_K(z) and Ṽ_K(z) the diagonal and nilpotent parts of K(z), respectively.
Then the previous equality at once yields the following result.
Theorem 5.3.2. For any regular z of K, the equality
$$K^{-1}(z) = \tilde D_K^{-1}(z)\prod_{2\le k\le n}^{\to}\left[I - \frac{\tilde V_K(z)\,\Delta E_k(K, z)}{\lambda_k(K(z))}\right]$$
is true.
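The finite-dimensional identity behind this multiplicative representation — the inverse of a triangular matrix expressed through its diagonal part and nilpotent part — can be checked numerically. The following sketch uses an arbitrarily chosen upper-triangular matrix T (a stand-in for the Schur form of A); it is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# random invertible upper-triangular matrix: diagonal part D plus nilpotent part V
T = np.triu(rng.normal(size=(n, n)))
T[np.diag_indices(n)] = np.arange(1.0, n + 1.0)   # non-singular diagonal
D = np.diag(np.diag(T))
V = T - D                                          # strictly upper triangular

# ordered product  T^{-1} = D^{-1} * prod_{k=2..n} (I - V P_k / lambda_k),
# where P_k projects onto the k-th basis vector (Delta E_k in the text)
Tinv = np.linalg.inv(D)
for k in range(1, n):                              # k = 2, ..., n in 1-based notation
    P = np.zeros((n, n))
    P[k, k] = 1.0
    Tinv = Tinv @ (np.eye(n) - V @ P / T[k, k])

assert np.allclose(Tinv, np.linalg.inv(T))
```

The ordered product is taken left-to-right in increasing k, matching the arrow in the displayed formula.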
5.4 Perturbations of characteristic values
and
$$\tilde K(z) = zI - \int_0^{\eta} \exp(-zs)\,d\tilde R(s), \tag{4.1b}$$
which will be called the relative variation of the characteristic values of K̃ with
respect to the characteristic values of K.
Assume that
$$R_0(\eta)\ \text{and}\ \tilde R(\eta)\ \text{are invertible}, \tag{4.2}$$
and
$$\eta < 1. \tag{4.3}$$
As was shown in Section 5.1, the latter condition does not affect the generality.
Again put
So
$$K_1(z) = \sum_{k=0}^{\infty} C_k\frac{z^k}{k!} \quad \text{and} \quad \tilde K_1(z) = \sum_{k=0}^{\infty} \tilde C_k\frac{z^k}{k!}, \tag{4.4}$$
where
$$C_k = (-1)^k R_0^{-1}(\eta)\int_0^{\eta} s^k\,dR_0(s) \quad \text{and} \quad \tilde C_k = (-1)^k \tilde R^{-1}(\eta)\int_0^{\eta} s^k\,d\tilde R(s) \quad (k \ge 2);$$
$$C_0 = \tilde C_0 = I;$$
$$C_1 = -R_0^{-1}(\eta)\left(I + \int_0^{\eta} s\,dR_0(s)\right) \quad \text{and} \quad \tilde C_1 = -\tilde R^{-1}(\eta)\left(I + \int_0^{\eta} s\,d\tilde R(s)\right).$$
108 Chapter 5. Properties of Characteristic Values
and put
$$w(K) := 2N_2(\Psi_K) + 2[n(\zeta(2) - 1)]^{1/2},$$
where ζ(·) is the Riemann zeta function, again. Denote also
$$\xi(K, s) := \frac{1}{s}\exp\left(\frac{1}{2} + \frac{w^2(K)}{2s^2}\right) \quad (s > 0) \tag{4.6}$$
and
$$q = \left[\sum_{k=1}^{\infty} \|\tilde C_k - C_k\|^2\right]^{1/2}.$$
$$q\,\xi(K, s) = 1. \tag{4.7}$$
Then
$$F(u) = \sum_{k=0}^{\infty} \frac{1}{u^{k+1}}(\tilde C_k - C_k).$$
Therefore
$$\frac{1}{2\pi}\int_0^{2\pi} F^*(e^{-is})F(e^{is})\,ds = \sum_{k=0}^{\infty}(\tilde C_k - C_k)^*(\tilde C_k - C_k).$$
Thus,
$$\frac{1}{2\pi}\int_0^{2\pi} \mathrm{Trace}\,F^*(e^{-is})F(e^{is})\,ds = \mathrm{Trace}\sum_{k=0}^{\infty}(\tilde C_k - C_k)^*(\tilde C_k - C_k),$$
or
$$q^2 \le \sum_{k=0}^{\infty} N_2^2(\tilde C_k - C_k) = \frac{1}{2\pi}\int_0^{2\pi} N_2^2(F(e^{is}))\,ds.$$
This forces
$$q^2 \le \frac{1}{2\pi}\int_0^{2\pi} N_2^2(F(e^{is}))\,ds \le \sup_{|z|=1} N_2^2(F(z)).$$
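The Parseval-type identity used here — the mean of N₂²(F) over the unit circle equals the sum of the squared Hilbert–Schmidt norms of the Laurent coefficients — can be verified numerically for a truncated series. The matrices D[k] below are random stand-ins for C̃_k − C_k; the truncation length and sample count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
K, n, M = 6, 3, 64                      # truncation length, matrix size, sample points
D = rng.normal(size=(K, n, n))          # stand-ins for the coefficients C~_k - C_k

# F(e^{is}) = sum_k D_k e^{-i(k+1)s}; average N_2^2 over the unit circle
s = 2 * np.pi * np.arange(M) / M
mean_sq = 0.0
for sj in s:
    F = sum(D[k] * np.exp(-1j * (k + 1) * sj) for k in range(K))
    mean_sq += np.linalg.norm(F, 'fro')**2 / M

coeff_sq = sum(np.linalg.norm(D[k], 'fro')**2 for k in range(K))
assert np.isclose(mean_sq, coeff_sq)    # the two quantities agree
```

With M sample points exceeding the degree of the trigonometric polynomial, the discrete average reproduces the integral exactly (up to rounding).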
In addition,
$$|\lambda_k(B(is))| \le \|B(is)\| \le \mathrm{var}(R_0),$$
and thus the minimum of d(s) is attained inside the segment [−2 var(R_0), 2 var(R_0)], as claimed.
Hence we get
Corollary 5.5.2. The inequality
and
kK(iω) − K̃(iω)k ≤ var(R0 − R̃) (ω ∈ R).
We thus arrive at the following corollary.
Corollary 5.5.3. Let the inequality
$$J_1 - \mathrm{var}(R_0 - \tilde R)\left[1 + 2\,\mathrm{var}(\tilde R) + \frac{1}{2}\left(\mathrm{var}(R_0 + \tilde R) + \mathrm{var}(R_0 - \tilde R)\right)\right]^n > 0$$
hold.
Then all the zeros of det K̃(z) are also in C− and
Note that
$$J_1 \ge \inf_{|\omega|\le 2\,\mathrm{var}(R_0)}|\det K(i\omega)| - \frac{1}{n^{n/2}}\,N_2(K(z) - \tilde K(z))\left[1 + N_2(K(z)) + N_2(\tilde K(z))\right]^n.$$
To illustrate the results obtained in this section consider the following matrix-
valued functions:
$$K(z) = zI - \int_0^{\eta} e^{-sz}A(s)\,ds - \sum_{k=0}^{m} e^{-h_k z}A_k, \tag{5.2a}$$
and
$$\tilde K(z) = zI - \int_0^{\eta} e^{-sz}\tilde A(s)\,ds - \sum_{k=0}^{m} e^{-h_k z}\tilde A_k, \tag{5.2b}$$
where A(s) and Ã(s) are integrable matrix functions, Ak and Ãk are constant
matrices, hk are positive constants. Put
$$\mathrm{Var}_2(R_0) = \int_0^{\eta} N_2(A(s))\,ds + \sum_{k=0}^{m} N_2(A_k),$$
$$\mathrm{Var}_2(R_0 \pm \tilde R) = \int_0^{\eta} N_2(A(s) \pm \tilde A(s))\,ds + \sum_{k=0}^{m} N_2(A_k \pm \tilde A_k).$$
We have
$$N_2(K(i\omega)) \le \sqrt{n}\,|\omega| + \mathrm{Var}_2(R_0), \quad N_2(K(i\omega) - \tilde K(i\omega)) \le \mathrm{Var}_2(R_0 - \tilde R)$$
and
$$N_2(K(i\omega) + \tilde K(i\omega)) \le 2\sqrt{n}\,|\omega| + \mathrm{Var}_2(R_0 + \tilde R) \quad (\omega \in \mathbb{R}).$$
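The first of these bounds is easy to test numerically for the single-delay specimen K(z) = zI − e^{−zh}A₀, where Var₂(R₀) reduces to N₂(A₀); the matrix A₀ and delay h below are arbitrary test data.

```python
import numpy as np

rng = np.random.default_rng(2)
n, h = 4, 0.7
A0 = rng.normal(size=(n, n))
var2 = np.linalg.norm(A0, 'fro')        # Var_2(R_0) for a single-jump measure

def K(z):
    # characteristic matrix function of a one-delay equation
    return z * np.eye(n) - np.exp(-z * h) * A0

for w in np.linspace(-10.0, 10.0, 201):
    lhs = np.linalg.norm(K(1j * w), 'fro')
    assert lhs <= np.sqrt(n) * abs(w) + var2 + 1e-9
```

The inequality is just the triangle inequality for the Hilbert–Schmidt norm, since N₂(iωI) = √n·|ω| and |e^{−iωh}| = 1.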
We thus arrive at the following corollary.
Corollary 5.5.6. Let K and K̃ be defined by (5.2). Then the inequality
and
$$w(h_m) = 2N_2\!\left(\Bigl(\sum_{k=1}^{m} C_k C_k^*\Bigr)^{1/2}\right) + 2\left[n(\zeta(2) - 1)\right]^{1/2}.$$
Define ξ(hm , s) by (4.6) with hm instead of K. The following result at once follows
from [46, Corollary 12.5.3].
Corollary 5.6.1. Let conditions (1.2) and (1.3) hold, and rm (K) be the unique
positive root of the equation
qm (K)ξ(hm , y) = 1.
Then either, for any characteristic value z(K) of K, there is a characteristic value
z(hm ) of polynomial pencil hm , such that
$$\left|\frac{1}{z(K)} - \frac{1}{z(h_m)}\right| \le r_m(K),$$
or
$$|z(K)| \ge \frac{1}{r_m(K)}.$$
Then
$$\sum_{k=1}^{j} \phi(a_k) \le \sum_{k=1}^{j} \phi(b_k) \quad (j = 1, 2, \dots, l).$$
For the proof see, for instance, [65, Lemma II.3.4] or [64, p. 53]. Put
$$\chi_k = \omega_k(K) + \frac{n}{k+n} \quad (k = 1, 2, \dots).$$
The following result is due to the previous lemma and Theorem 5.1.2.
Corollary 5.7.2. Let φ(t) (0 ≤ t < ∞) be a continuous convex scalar-valued func-
tion, such that φ(0) = 0. Let conditions (1.2) and (1.3) hold. Then the inequalities
$$\sum_{k=1}^{j} \phi\!\left(\frac{1}{|z_k(K)|}\right) < \sum_{k=1}^{j} \phi(\chi_k) \quad (j = 1, 2, \dots)$$
and thus
$$\left[\sum_{k=1}^{j} \frac{1}{|z_k(K)|^p}\right]^{1/p} < \left[\sum_{k=1}^{j} \omega_k^p(K)\right]^{1/p} + n\left[\sum_{k=1}^{j} \frac{1}{(k+n)^p}\right]^{1/p} \quad (j = 1, 2, \dots). \tag{7.1}$$
$$\left(\sum_{k=1}^{\infty} \frac{1}{|z_k(K)|^p}\right)^{1/p} < N_p(\Psi_K) + n\,\zeta_n^{1/p}(p) \quad (p > 1).$$
The next result is also well known, cf. [65, Chapter II], [64, p. 53].
5.8 Comments
The material of this chapter is adapted from Chapter 12 of the book [46]. The
relevant results in more general situation can be found in [32, 33] and [35].
Chapter 6
Equations Close to Autonomous and Ordinary Differential Ones
In this chapter we establish explicit stability conditions for linear time variant
systems with delay "close" to ordinary differential systems and for systems with
small delays. We also investigate perturbations of autonomous equations.
where A(t) is a piece-wise continuous matrix valued function. In this chapter again
C(Ω) = C(Ω, Cn ) and Lp (Ω) = Lp (Ω, Cn ) (p ≥ 1) are the spaces of vector valued
functions.
Introduce in L¹(−η, ∞) the operator
$$(Ew)(t) = \int_0^{\eta} d_\tau R(t, \tau)\,w(t - \tau).$$
It is assumed that
$$v_{jk} = \sup_{t\ge 0} \mathrm{var}(r_{jk}(t, \cdot)) < \infty. \tag{1.2}$$
118 Chapter 6. Equations Close to Autonomous and Ordinary Differential Ones
such that
kEwkL1 (0,∞) ≤ q1 kwkL1 (−η ,∞) (w ∈ L1 (−η , ∞)). (1.3)
For instance, if (1.1) takes the form
$$\dot y(t) = A(t)y(t) + \int_0^{\eta} B(t, s)y(t - s)\,ds + \sum_{k=0}^{m} B_k(t)y(t - \tau_k) \quad (t \ge 0;\ m < \infty), \tag{1.4}$$
where 0 ≤ τ0 < τ1 , ..., < τm ≤ η are constants, Bk (t) are piece-wise continuous
matrices and B(t, s) is a matrix function Lebesgue integrable in s on [0, η ], then
(1.3) holds, provided
$$\hat q_1 := \sup_{t\ge 0}\left(\int_0^{\eta} \|B(t, s)\|_n\,ds + \sum_{k=0}^{m} \|B_k(t)\|_n\right) < \infty.$$
Here and below in this chapter kAkn is the spectral norm of an n × n-matrix A.
Moreover, we have
$$\int_0^{\infty} \|Ef(t)\|_n\,dt \le \int_0^{\infty}\left(\int_0^{\eta} \|B(t, s)f(t - s)\|_n\,ds + \sum_{k=0}^{m} \|B_k(t)f(t - \tau_k)\|_n\right)dt \le$$
$$\sup_{\tau}\int_0^{\eta} \|B(\tau, s)\|_n\left(\int_0^{\infty} \|f(t - s)\|_n\,dt\right)ds + \sum_{k=0}^{m}\sup_{\tau}\|B_k(\tau)\|_n\int_0^{\infty} \|f(t - \tau_k)\|_n\,dt.$$
Consequently,
$$\int_0^{\infty} \|Ef(t)\|_n\,dt \le \hat q_1 \max_{0\le s\le \eta}\int_0^{\infty} \|f(t - s)\|_n\,dt.$$
But
$$\max_{0\le s\le \eta}\int_0^{\infty} \|f(t - s)\|_n\,dt \le \int_{-\eta}^{\infty} \|f(t)\|_n\,dt.$$
Thus, in the case of equation (1.4), condition (1.3) holds with q1 = q̂1 .
6.1. Equations ”close” to ordinary differential ones 119
Theorem 6.1.1. Let condition (1.3) hold and the evolution operator U (t, s) (t ≥
s ≥ 0) of the equation
ẏ = A(t)y (t > 0) (1.5)
satisfy the inequality
$$\nu_1 := \sup_{s\ge 0}\int_s^{\infty} \|U(t, s)\|_n\,dt < \frac{1}{q_1}. \tag{1.6}$$
x(t) = 0 (t ≤ 0) (1.8)
So
$$\|x(t)\|_n \le \int_0^t \|U(t, s)\|_n\left(\|Ex(s)\|_n + \|f(s)\|_n\right)ds.$$
Integrating this inequality, we obtain
$$\int_0^{t_0} \|x(t)\|_n\,dt \le \int_0^{t_0}\int_0^t \|U(t, s)\|_n\left(\|Ex(s)\|_n + \|f(s)\|_n\right)ds\,dt \quad (0 < t_0 < \infty).$$
Thus,
$$\int_0^{t_0} \|x(s)\|_n\,ds \le \nu_1 q_1\int_0^{t_0} \|x(s)\|_n\,ds + \nu_1\|f\|_{L^1(0,\infty)}.$$
Hence,
$$\int_0^{t_0} \|x(s)\|_n\,ds \le \frac{\nu_1\|f\|_{L^1(0,t_0)}}{1 - \nu_1 q_1}.$$
Now letting t0 → ∞, we arrive at the required result.
The assertion of Theorem 6.1.1 is due to Theorem 3.5.1 and the previous
lemma.
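As an illustration (not from the book), take the scalar specimen ẋ(t) = −a·x(t) + b·x(t − τ) with constant a > 0. Then U(t, s) = e^{−a(t−s)}, so ν₁ = 1/a and q₁ = |b|, and condition (1.6) becomes |b| < a. A crude Euler simulation is consistent with the predicted decay; all numerical values are arbitrary test data.

```python
import numpy as np

a, b, tau, dt, T = 2.0, 1.0, 0.5, 1e-3, 20.0   # |b| < a, i.e. nu1 * q1 = b/a < 1
N, lag = int(T / dt), int(tau / dt)
x = np.ones(N + lag)                            # constant initial history = 1
for i in range(lag, N + lag - 1):
    # explicit Euler step for x'(t) = -a x(t) + b x(t - tau)
    x[i + 1] = x[i] + dt * (-a * x[i] + b * x[i - lag])

assert abs(x[-1]) < 1e-3                        # the solution has decayed
```

Here the dominant characteristic root is real and negative, so the trajectory decays monotonically after a short transient.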
Now consider the operator E in space C. By Lemma 1.12.3, under condition
(1.2) there is a constant
$$q_\infty \le \sqrt{n}\,\left\|(v_{jk})_{j,k=1}^n\right\|_n,$$
such that
kEwkC(0,∞) ≤ q∞ kwkC(−η ,∞) (w ∈ C(−η , ∞)). (1.9)
For instance, if (1.1) takes the form
$$\dot y(t) = A(t)y(t) + \int_0^{\eta} B(t, s)y(t - s)\,ds + \sum_{k=1}^{m} B_k(t)y(t - h_k(t)) \quad (t \ge 0;\ m < \infty), \tag{1.10}$$
where 0 ≤ h1 (t), h2 (t), ..., hm (t) ≤ η are continuous functions, B(t, s) and Bk (t)
are the same as above in this section. Then
$$\sup_{t\ge 0}\|Ew(t)\|_n \le \sup_{t\ge 0}\left(\int_0^{\eta} \|B(t, s)w(t - s)\|_n\,ds + \sum_{k=1}^{m} \|B_k(t)w(t - h_k(t))\|_n\right).$$
Hence, under the condition q̂1 < ∞, we easily obtain inequality (1.9) with q∞ = q̂1 .
Theorem 6.1.3. Let condition (1.9) hold and the evolution operator U (t, s) (t ≥
s ≥ 0) of equation (1.5) satisfy the inequality
$$\nu_\infty := \sup_{t\ge 0}\int_0^{t} \|U(t, s)\|_n\,ds < \frac{1}{q_\infty}. \tag{1.11}$$
The assertion of this theorem follows from Theorem 3.4.1 and the following
lemma.
Lemma 6.1.4. Let conditions (1.9) and (1.11) hold. Then a solution x(t) of problem (1.7), (1.8) with f ∈ C(0, ∞) satisfies the inequality
$$\|x\|_{C(0,\infty)} \le \frac{\nu_\infty\|f\|_{C(0,\infty)}}{1 - \nu_\infty q_\infty}.$$
The proof of this lemma is similar to the proof of Lemma 6.1.2.
Assume that
$$((A(t) + A^*(t))h, h)_{C^n} \le -2\alpha(t)(h, h)_{C^n} \quad (h \in \mathbb{C}^n,\ t \ge 0)$$
with a positive piece-wise continuous function α(t) having the property
$$\hat\nu_1 := \sup_{s\ge 0}\int_s^{\infty} e^{-\int_s^t \alpha(t_1)\,dt_1}\,dt < \infty.$$
Recall that
$$g(A) = \left(N_2^2(A) - \sum_{k=1}^{n}|\lambda_k(A)|^2\right)^{1/2} \le \sqrt{2}\,N_2(A_I),$$
and α(A) = max_k Re λ_k(A); λ_k(A) (k = 1, ..., n) are the eigenvalues of A, N_2(A) is the Hilbert-Schmidt norm of A and A_I = (A − A^*)/2i. Thus
$$\|e^{A_0 t}\|_{L^1(0,\infty)} \le \nu_{A_0},$$
where
$$\nu_{A_0} := \sum_{k=0}^{n-1} \frac{g^k(A_0)}{\sqrt{k!}\,|\alpha(A_0)|^{k+1}}.$$
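For a concrete (arbitrarily chosen) non-normal Hurwitz matrix one can compare the bound ν_{A₀} with a direct numerical value of ∫₀^∞ ‖e^{A₀t}‖ dt; the following sketch is a consistency check, not part of the book's argument.

```python
import numpy as np
from math import factorial, sqrt

A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])              # Hurwitz, non-normal test matrix
n = A.shape[0]
lam, Vm = np.linalg.eig(A)
g = sqrt(max(np.linalg.norm(A, 'fro')**2 - np.sum(np.abs(lam)**2), 0.0))
alpha = lam.real.max()                   # spectral abscissa (negative here)

nu = sum(g**k / (sqrt(factorial(k)) * abs(alpha)**(k + 1)) for k in range(n))

# direct numerical integral of the spectral norm of e^{At} (A is diagonalizable)
Vinv = np.linalg.inv(Vm)
ts = np.linspace(0.0, 40.0, 4001)
norms = np.array([np.linalg.norm(Vm @ np.diag(np.exp(lam * t)) @ Vinv, 2)
                  for t in ts])
integral = np.sum((norms[:-1] + norms[1:]) * np.diff(ts)) / 2.0   # trapezoid rule

assert integral <= nu + 1e-6             # the bound dominates the integral
```

For this matrix g(A) = 1 and α(A) = −1, so ν_{A₀} = 2, comfortably above the numerically computed integral.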
where R(t, τ ) = (rjk (t, τ )) is the same as in the previous section. In particular,
condition (1.2) holds.
For instance, (2.1) can take the form
$$\dot y(t) = \int_0^{\eta} B(t, s)y(t - s)\,ds + \sum_{k=0}^{m} B_k(t)y(t - \tau_k) \quad (t \ge 0;\ m < \infty) \tag{2.2}$$
where B(t, s), τ_k and B_k(t) are the same as in the previous section. In C(−η, ∞)
introduce the operator
$$\hat E_d f(t) := \int_0^{\eta} d_\tau R(t, \tau)\,[f(t - \tau) - f(t)].$$
Since
kf (t − τ ) − f (t)kC(0,T ) ≤ τ kf˙kC(−η ,T ) ,
we easily obtain
$$\|\hat E_d f(t)\|_{C(0,T)} \le \sup_t\left(\int_0^{\eta} \tau\,\|B(t, \tau)\|_n\,d\tau + \sum_{k=0}^{m} \|B_k(t)\|_n\,\tau_k\right)\|\dot f\|_{C(-\eta,T)}.$$
Put
A(t) = R(t, η ) − R(t, 0) (2.5)
and assume that equation (1.5) is asymptotically stable. Recall that U (t, s) is the
evolution operator of the ordinary differential equation (1.5).
6.2. Equations with small delays 123
Theorem 6.2.1. Under conditions (1.2) and (2.3), let A(t) be defined by (2.5) and
$$\psi_R := \tilde V_d(R)\left(\sup_{t\ge 0}\int_0^t \|A(t)U(t, s)\|_n\,ds + 1\right) < 1. \tag{2.6}$$
x(t) = 0, t ≤ 0.
Observe that
$$\int_0^{\eta} d_\tau R(t, \tau)x(t - \tau) = \int_0^{\eta} d_\tau R(t, \tau)x(t) + \int_0^{\eta} d_\tau R(t, \tau)(x(t - \tau) - x(t)) =$$
$$(R(t, \eta) - R(t, 0))x(t) + \int_0^{\eta} d_\tau R(t, \tau)(x(t - \tau) - x(t)) = A(t)x(t) + \hat E_d x(t).$$
So we can rewrite equation (2.7) as
Consequently,
$$x(t) = \int_0^t U(t, s)\hat E_d x(s)\,ds + f_1(t), \tag{2.9}$$
where
$$f_1(t) = \int_0^t U(t, s)f(s)\,ds.$$
Differentiating (2.9), we get
$$\dot x(t) = \int_0^t A(t)U(t, s)\hat E_d x(s)\,ds + \hat E_d x(t) + A(t)f_1(t) + f(t).$$
For brevity put |x|_T = ‖x‖_{C(0,T)} for a finite T. By condition (2.3) we have
$$|\dot x|_T \le c_0 + \tilde V_d(R)\,|\dot x|_T\left(\sup_{t\ge 0}\int_0^t \|A(t)U(t, s)\|_n\,ds + 1\right),$$
where
c0 := kA(t)f1 kC(0,∞) + kf kC(0,∞) .
So
|ẋ|T ≤ c0 + ψR |ẋ|T .
where R(t, τ) is the same as in the previous section, and R_0(τ) = (r_{jk}^{(0)}(τ))_{j,k=1}^n is an n × n matrix-valued function defined on [0, η], whose entries have bounded variations. Recall that var(R_0) is the spectral norm of the matrix
$$\left(\mathrm{var}(r_{jk}^{(0)})\right)_{j,k=1}^{n}$$
(see Section 1.12). Again use the operator $Ef(t) = \int_0^{\eta} d_\tau R(t, \tau)f(t - \tau)$, considering it in the space L²(−η, ∞).
Under condition (1.2), due to Lemma 1.12.3, there is a constant
$$q_2 \le \left\|(v_{jk})_{j,k=1}^n\right\|_n,$$
such that
kEf kL2 (0,∞) ≤ q2 kf kL2 (−η ,∞) (f ∈ L2 (−η , ∞)). (3.2)
Again
$$K(z) = zI - \int_0^{\eta} \exp(-zs)\,dR_0(s) \quad (z \in \mathbb{C}).$$
As above, it is assumed that all the zeros of det K(z) are in the open left half-plane.
Recall also that
$$\sum_{k=0}^{\tilde m} B_k(t)y(t - h_k) \quad (t \ge 0;\ m, \tilde m < \infty),$$
where
$$0 = h_0 < h_1 < \dots < h_{\tilde m} \le \eta \quad \text{and} \quad 0 = \tau_0 < \tau_1 < \dots < \tau_m \le \eta$$
are constants, Ak are constant matrices and A(s) is Lebesgue integrable on [0, η ],
Bk (t) are piece-wise continuous matrices and B(t, s) is Lebesgue integrable in s
on [0, η ]. In this case,
$$K(z) = zI - \int_0^{\eta} e^{-sz}A(s)\,ds - \sum_{k=0}^{m} e^{-h_k z}A_k \tag{3.5}$$
and
$$\mathrm{var}(R_0) \le \int_0^{\eta} \|A(s)\|_n\,ds + \sum_{k=0}^{m} \|A_k\|_n. \tag{3.6}$$
ẋ = E0 x + Ex + f, (3.8)
Thus, by (3.2),
with
$$\hat d(K) = \min_{j=1,\dots,n}\ \inf_{|\omega|\le 2\,\mathrm{var}(R_0)}\left|i\omega + \lambda_j(A_1)\int_0^{\eta} e^{-i\omega s}\,dr_1(s) + \lambda_j(A_2)\int_0^{\eta} e^{-i\omega s}\,dr_2(s)\right|.$$
Denoting
$$\hat r_j(s) = \lambda_j(A_1)r_1(s) + \lambda_j(A_2)r_2(s),$$
we obtain
$$\mathrm{var}(\hat r_j) = \lambda_j(A_1)\,\mathrm{var}(r_1) + \lambda_j(A_2)\,\mathrm{var}(r_2)$$
and
$$\lambda_j(K(z)) = z + \int_0^{\eta} e^{-zs}\,d\hat r_j(s) \quad (j = 1, \dots, n).$$
Assume that
$$e\,\eta\,\mathrm{var}(\hat r_j) < 1, \quad j = 1, \dots, n. \tag{3.11}$$
Then by Lemma 4.6.5, we obtain
$$\hat d(K) = \min_j \mathrm{var}(\hat r_j).$$
Applying Theorem 6.3.1 we can assert that equation (3.10) is exponentially stable,
provided the conditions (3.2), (3.11) and
$$q_2 < \min_j \mathrm{var}(\hat r_j) = \min_j\left(\lambda_j(A_1)\,\mathrm{var}(r_1) + \lambda_j(A_2)\,\mathrm{var}(r_2)\right)$$
hold.
with condition (1.8). Here Ak are constant matrices, τk (t) are nonnegative contin-
uous scalar functions defined on [0, ∞) and satisfying the conditions
As above, it is assumed that all the roots of det K(z) are in C− . Set
$$v_0 := \sum_{k=1}^{m} \|A_k\|_n, \qquad \theta(K) := \max_{-2v_0\le s\le 2v_0}\|K^{-1}(is)\|_n$$
and
$$\gamma(K) := \sum_{k=1}^{m} (\eta_k - h_k)\|A_k\|_n.$$
hold. Then a solution of equation (4.1) with the zero initial condition (1.8) satisfies
the inequality
$$\|x\|_{L^2(0,\infty)} \le \frac{\theta(K)\|f\|_{L^2(0,\infty)}}{1 - v_0\theta(K) - \gamma(K)}. \tag{4.4}$$
This theorem is proved in the next section.
Theorems 6.4.1 and 3.5.1 imply
Corollary 6.4.2. Let conditions (4.2) and (4.3) hold. Then the equation
$$\dot y(t) = \sum_{k=1}^{m} A_k y(t - \tau_k(t)) \tag{4.5}$$
is exponentially stable.
For instance, consider the following equation with one delay:
with B(z) = e^{−zh}A_0, where d(K(z)) denotes the smallest modulus of the eigenvalues of K(z). Here
$$\lambda_j(K(z)) = z - e^{-zh}\lambda_j(A_0)$$
are the eigenvalues of the matrix K(z), counted with their multiplicities.
Thus
$$\theta(K) \le \Gamma_0(K) := \max_{|\omega|\le 2v_0}\Gamma(K(i\omega)) \le \theta_{A_0},$$
where
$$\theta_{A_0} := \sum_{k=0}^{n-1} \frac{g^k(A_0)}{\sqrt{k!}\,d^{k+1}(K)},$$
and
$$d(K) := \inf_{j=1,\dots,n;\ |y|\le 2|\lambda_j(A_0)|}|yi + \lambda_j(A_0)e^{-iyh}|.$$
Proof. Put
$$u(t) = w(t - h) - w(t - \tau(t)) = \int_{t-\tau(t)}^{t-h}\dot w(s)\,ds.$$
t−τ (t)
Then
$$\int_0^{\infty}\|u(t)\|_n^2\,dt \le (\eta - h)\int_0^{\infty}\int_{t-\eta}^{t-h}\|\dot w(s)\|_n^2\,ds\,dt = (\eta - h)\int_0^{\infty}\int_0^{\eta-h}\|\dot w(t - h - s_1)\|_n^2\,ds_1\,dt \le$$
$$(\eta - h)\int_0^{\eta-h}\int_0^{\infty}\|\dot w(t - s_1 - h)\|_n^2\,dt\,ds_1 = (\eta - h)\int_0^{\eta-h}\int_{-s_1-h}^{\infty}\|\dot w(t_1)\|_n^2\,dt_1\,ds_1 \le (\eta - h)^2\int_0^{\infty}\|\dot w(t_1)\|_n^2\,dt_1.$$
We thus get the required result.
Now let w ∈ L²(0, T) for a sufficiently large finite T > 0. Extend it to the whole positive half-line by
Proof of Theorem 6.4.1: In this proof, for brevity, we put ‖·‖_{L²(0,T)} = |·|_T (T < ∞). Recall that it is assumed that the characteristic values of K are in the open left half-plane.
From (4.1) it follows
$$\dot x(t) = \sum_{k=1}^{m} A_k x(t - h_k) + [F_0 x](t) \quad (t \ge 0), \tag{5.1}$$
where
$$[F_0 x](t) = f(t) + \sum_{k=1}^{m} A_k[x(t - \tau_k(t)) - x(t - h_k)].$$
k=1
So
|F0 x|T ≤ γ(K)|ẋ|T + |f |T (5.2)
and
$$|\dot x|_T \le v_0|x|_T + \gamma(K)|\dot x|_T + |f|_T.$$
By (4.3) we have γ(K) < 1. Hence,
$$|\dot x|_T \le \frac{v_0|x|_T + |f|_T}{1 - \gamma(K)}. \tag{5.3}$$
Therefore, from (5.1) it follows that |x|_T ≤ θ(K)|F_0 x|_T. Hence, due to (5.2),
$$|x|_T \le \theta(K)(\gamma(K)|\dot x|_T + |f|_T) \le \frac{\theta(K)(v_0|x|_T + |f|_T)}{1 - \gamma(K)}.$$
Denote
$$\psi(K) = \frac{\theta(K)}{1 - v_0\theta(K) - \gamma(K)},$$
$$A = \sum_{k=1}^{m} A_k \quad \text{and} \quad \chi_A := \max_{0\le t\le \eta}\|e^{-At} - I\|_n. \tag{6.1}$$
Theorem 6.6.1. Under the above notation, let A be a Hurwitz matrix and conditions
(4.2) and (4.3) hold. Then for all s ≥ 0, the following relations are valid:
$$\|W_t'(\cdot, s)\|_{L^2(s,\infty)} \le \frac{v_0(1 + v_0\chi_A\psi(K))}{1 - \gamma(K)}\,\|e^{At}\|_{L^2(s,\infty)} \tag{6.3}$$
and
$$\|W(\cdot, s)\|_{C(s,\infty)} \le \frac{\sqrt{2v_0}\,\|e^{At}\|_{L^2(s,\infty)}}{\sqrt{1 - \gamma(K)}}\left(1 + \psi(K)\chi_A v_0\right). \tag{6.4}$$
This theorem is proved in the next section.
Let f(z) be a function holomorphic on a neighborhood of the closed convex hull co(A) of the spectrum of an n × n matrix A. Then by Corollary 2.5.3 we have
$$\|f(A)\| \le \sum_{k=0}^{n-1}\sup_{\lambda\in co(A)}|f^{(k)}(\lambda)|\,\frac{g^k(A)}{(k!)^{3/2}}.$$
Hence, in particular,
$$\|e^{At}\|_n \le e^{\alpha(A)t}\sum_{k=0}^{n-1}\frac{g^k(A)t^k}{(k!)^{3/2}} \quad (t \ge 0),$$
and
$$\|e^{-At} - I\|_n \le e^{-\beta(A)t}\sum_{k=1}^{n-1}\frac{g^k(A)t^k}{(k!)^{3/2}} \quad (t \ge 0), \tag{6.5}$$
where
$$\alpha(A) := \max_k \mathrm{Re}\,\lambda_k(A) \quad \text{and} \quad \beta(A) := \min_k \mathrm{Re}\,\lambda_k(A).$$
So
$$\|e^{At}\|_{L^2(0,\infty)} \le \left[\int_0^{\infty}\left(\sum_{k=0}^{n-1}\frac{g^k(A)t^k}{(k!)^{3/2}}\right)^2 e^{2\alpha(A)t}\,dt\right]^{1/2}, \tag{6.6}$$
which is finite, since A is Hurwitzian.
In the rest of this section we illustrate Theorem 6.6.1 in the case of equation
(4.6) with one delay. In this case
$$\gamma(K) = (\eta - h)\|A_0\|_n.$$
To estimate χ_{A_0} and ‖e^{A_0 t}‖_{L²(s,∞)} we can directly apply (6.6) and (6.7). As shown in Section 6.4, we have the inequality ψ(K) ≤ ψ_{A_0}, where
$$\psi_{A_0} = \frac{\theta_{A_0}}{1 - \|A_0\|_n\theta_{A_0} - (\eta - h)\|A_0\|_n}.$$
Now Theorem 6.6.1 implies
Corollary 6.6.2. Let A0 be a Hurwitz matrix and conditions (4.7), and (4.8) hold.
Then for all s ≥ 0, the fundamental solution W (., .) of equation (4.6) satisfies the
inequalities
$$\|W(\cdot, s)\|_{L^2(s,\infty)} \le (1 + \psi_{A_0}\|A_0\|_n\chi_{A_0})\,\|e^{A_0 t}\|_{L^2(s,\infty)},$$
$$\|W_t'(\cdot, s)\|_{L^2(s,\infty)} \le \frac{\|A_0\|_n(1 + \psi_{A_0}\|A_0\|_n\chi_{A_0})}{1 - (\eta - h)\|A_0\|_n}\,\|e^{A_0 t}\|_{L^2(s,\infty)}$$
and
$$\|W(\cdot, s)\|_{C(s,\infty)}^2 \le \frac{2\|A_0\|_n}{1 - (\eta - h)\|A_0\|_n}\,(1 + \psi_{A_0}\|A_0\|_n\chi_{A_0})^2\,\|e^{A_0 t}\|_{L^2(s,\infty)}^2.$$
Let us use Lemma 4.6.2, which asserts that for constants h, a_0 ∈ (0, ∞) we have
$$\inf_{\omega\in\mathbb{R}}|i\omega + a_0 e^{-ih\omega}| \ge a_0\cos(2ha_0) > 0,$$
provided a_0 h < π/4. From the latter result it follows that if all the eigenvalues of A_0 are real and negative, and
$$h|\lambda_j(A_0)| < \pi/4 \quad (j = 1, \dots, n), \tag{6.8}$$
then
$$d(K) \ge \tilde d_{A_0} := \min_{j=1,\dots,n}|\lambda_j(A_0)|\,\cos(2h\lambda_j(A_0)).$$
Applying Theorem 6.4.1 to this equation, under conditions (4.2) and (4.3), we get the inequality
$$\|x(t - s)\|_{L^2(0,\infty)} \le \psi(K)\,\|f(t - s)\|_{L^2(0,\infty)},$$
or
$$\|x\|_{L^2(s,\infty)} \le \psi(K)\,\|f\|_{L^2(s,\infty)}. \tag{7.1}$$
Now for a fixed s, substitute
$$\sum_{k=1}^{m} A_k z(t - \tau_k(t)) + u(t),$$
where
$$u(t) := \sum_{k=1}^{m} A_k\left[e^{A(t-s-\tau_k(t))} - e^{A(t-s)}\right] = \sum_{k=1}^{m} e^{A(t-s)}A_k\left[e^{-A\tau_k(t)} - I\right].$$
Obviously,
$$\|u\|_{L^2(s,\infty)} \le v_0\,\chi_A\,\|e^{At}\|_{L^2(s,\infty)}.$$
By (7.1),
Hence, condition (4.3) and the just obtained estimate for kW (., s)kL2 (s,∞) imply
6.8 Comments
The results presented in Section 6.1 are probably new. The material of Sections 6.2-6.4 is taken from the papers [39, 41]. Sections 6.6 and 6.7 are based on the paper [42].
Chapter 7
Periodic Systems
This chapter deals with a class of periodic systems. Explicit stability conditions
are derived. The main tool is the invertibility conditions for infinite block matrices.
In the case of scalar equations we apply regularized determinants.
Then μ = eλT is called the characteristic multiplier of equation (1.1) (see [71,
Lemma 8.1.2]). As was pointed out in [71], a complete Floquet theory for functional
differential equations is impossible. However, it is possible to define characteristic
multipliers and exploit the compactness of the solution operator to show that a
Floquet representation exists on the generalized eigen-space of a characteristic
multiplier. The characteristic multipliers of equation (1.1) are independent of the
starting time.
Lemma 7.1.1. Equation (1.1) is asymptotically stable if and only if all the charac-
teristic multipliers of equation (1.1) have moduli less than 1.
Introduce the Hilbert space PF of 2π-periodic functions defined on the real axis R with values in C^n, and the scalar product
$$(f, u)_{PF} := \sum_{k=-\infty}^{\infty}(f_k, u_k)_{C^n} \quad (f, u \in PF),$$
Let vk and Ask (k = 0, ±1, ...) be the Fourier coefficients of v(t) and As (t), respec-
tively:
$$v(t) = \sum_{k=-\infty}^{\infty} v_k e^{ikt} \quad \text{and} \quad A_s(t) = \sum_{k=-\infty}^{\infty} A_{sk}e^{ikt} \quad (s = 1, \dots, m). \tag{1.5}$$
It is assumed that
$$\sum_{k=-\infty}^{\infty}\|A_{sk}\|_n < \infty. \tag{1.6}$$
For instance, assume that A_s(t), s = 1, ..., m, have an integrable second derivative from L²(0, 2π). Then
$$\sum_{k=-\infty}^{\infty} k^4\|A_{sk}\|_n^2 < \infty$$
Put
$$w_0 := \sum_{s=1}^{m}\sum_{l\ne 0,\ l=-\infty}^{\infty}\|A_{sl}\|_n \quad \text{and} \quad \mathrm{var}(F) = \sum_{s=1}^{m}\|A_{s0}\|_n.$$
Recall that the characteristic values of a matrix valued function are the zeros of
its determinant.
Now we are in a position to formulate the main result of the present chapter.
Theorem 7.2.1. Let conditions (1.6) and (2.1) hold. Let all the characteristic values
of the matrix function
$$F(z) := zI + \sum_{s=1}^{m} A_{s0}\int_0^{\eta} e^{-z\tau}\,d\mu_s(\tau)$$
Or
$$(ijI + \lambda)v_j + \sum_{k=-\infty}^{\infty}\sum_{s=1}^{m} A_{s,j-k}v_k\int_0^{\eta} e^{-(\lambda+ik)\tau}\,d\mu_s(\tau) = 0 \quad (j = 0, \pm 1, \dots).$$
$$T_{jk}(\lambda) = \sum_{s=1}^{m} A_{s,j-k}\int_0^{\eta} e^{-(\lambda+ik)\tau}\,d\mu_s(\tau) \quad (k \ne j)$$
and
$$T_{jj}(\lambda) = ijI + \lambda + \sum_{s=1}^{m} A_{s,0}\int_0^{\eta} e^{-(\lambda+ij)\tau}\,d\mu_s(\tau).$$
By [37, Theorem 6.2] (see also Section 16.6 of Appendix B below), T(z) is invertible, provided
$$\sup_j\|T_{jj}^{-1}(z)\|_n\sum_{k=-\infty,\ k\ne j}^{\infty}\|T_{jk}(z)\|_n < 1.$$
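This block diagonal dominance criterion is easy to illustrate on a finite block matrix; the block sizes and entries below are arbitrary test data, chosen so that the diagonal blocks dominate.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 4, 3                               # 4x4 grid of 3x3 blocks
blocks = 0.1 * rng.normal(size=(p, p, n, n))
for j in range(p):                        # strongly dominant diagonal blocks
    blocks[j, j] = 5.0 * np.eye(n) + 0.1 * rng.normal(size=(n, n))

# criterion:  sup_j ||T_jj^{-1}|| * sum_{k != j} ||T_jk||  <  1
crit = max(
    np.linalg.norm(np.linalg.inv(blocks[j, j]), 2)
    * sum(np.linalg.norm(blocks[j, k], 2) for k in range(p) if k != j)
    for j in range(p)
)
T = np.block([[blocks[j, k] for k in range(p)] for j in range(p)])

assert crit < 1.0                         # dominance condition holds ...
assert abs(np.linalg.det(T)) > 0          # ... and T is indeed invertible
```

For the (truly infinite) matrix in the text the same criterion is applied uniformly over all block rows.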
Clearly,
$$\sum_{k=-\infty,\ k\ne j}^{\infty}\|T_{jk}(i\omega)\|_n \le \sum_{s=1}^{m}\sum_{k=-\infty,\ k\ne j}^{\infty}\|A_{s,j-k}\|_n = \sum_{s=1}^{m}\sum_{l=-\infty,\ l\ne 0}^{\infty}\|A_{s,l}\|_n = w_0$$
and
$$\sup_{j=0,\pm1,\dots;\ -\infty<\omega<\infty}\|T_{jj}^{-1}(i\omega)\|_n = \sup_{j,\,\omega}\left\|\left(i(j+\omega)I + \sum_{s=1}^{m} A_{s,0}\int_0^{\eta} e^{-i(j+\omega)\tau}\,d\mu_s(\tau)\right)^{-1}\right\|_n = \sup_{y\in(-\infty,\infty)}\|F^{-1}(iy)\|_n.$$
is valid.
Furthermore, for a ξ ∈ (0, 1], let us introduce the matrix T (ξ, z) = (Tjk (ξ, z))
with
Tjk (ξ, z) = ξTjk (z) (k 6= j), Tjj (ξ, z) = Tjj (z).
Assume that T(z) has a characteristic value in the closed right half-plane. Then, by continuity of the characteristic values, for some ξ_0 ∈ (0, 1] the matrix T(ξ_0, z) has a characteristic value on the imaginary axis; but according to (2.2) this is impossible. This, together with Lemma 7.1.1, proves the theorem.
where λk (A), k = 1, ..., n are the eigenvalues of A, counted with their multiplicities;
N_2(A) is the Hilbert-Schmidt norm of A (see Section 2.3). Besides,
$$g^2(A) \le N_2^2(A) - |\mathrm{Trace}\,A^2| \quad \text{and} \quad g^2(A) \le \frac{N_2^2(A - A^*)}{2} = 2N_2^2(A_I), \tag{3.1}$$
2
where A_I = (A − A^*)/2i. Moreover, the inequality
$$\sup_{|\omega|\le 2\,\mathrm{var}(F)}\|F^{-1}(i\omega)\|_n \le \Gamma_0(F)$$
is valid, where
$$\Gamma_0(F) := \sup_{|\omega|\le 2\,\mathrm{var}(F)}\sum_{k=0}^{n-1}\frac{g^k(B(i\omega))}{\sqrt{k!}\,d^{k+1}(F(i\omega))}.$$
Now Theorem 7.2.1 implies our next result.
Corollary 7.3.1. Let all the characteristic values of F (z) be in C− and w0 Γ0 (F ) <
1. Then equation (1.1) is asymptotically stable.
Note that
$$g(B(i\omega)) \le \sum_{s=1}^{m} N_2(A_{s0}) \quad (\omega \in \mathbb{R}),$$
and one can enumerate the eigenvalues of As0 in such a way that
$$\lambda_j(F(z)) = z + \sum_{s=1}^{m}\lambda_j(A_{s0})\int_0^{\eta} e^{-z\tau}\,d\mu_s(\tau).$$
Moreover, if
$$B(z) = \sum_{s=1}^{m} A_{s0}e^{-zh_s},$$
then by (3.1),
$$g(B(i\omega)) \le \frac{1}{\sqrt{2}}\sum_{s=1}^{m} N_2\!\left(e^{ih_s\omega}A_{s0} - e^{-ih_s\omega}A_{s0}^*\right) \quad (\omega \in \mathbb{R}).$$
is valid. Moreover, if
$$\mathrm{var}(\mu)\,\eta < \frac{\pi}{4}, \tag{4.2}$$
then all the zeros of k(z) are in C_− and
where
$$\hat d := \int_0^{\eta}\cos(2\,\mathrm{var}(\mu)\,\tau)\,d\mu(\tau).$$
Now let Ak (k = 0, ±1, ...) be the Fourier coefficients of A(t). Without loss of
generality assume that
var(μ) = 1. (4.4)
Then
$$F(z) = zI + A_0\int_0^{\eta} e^{-z\tau}\,d\mu(\tau),$$
$$w_0 = \sum_{k=-\infty,\ k\ne 0}^{\infty}\|A_k\|_n \quad \text{and} \quad \mathrm{var}(F) = \|A_0\|_n.$$
k6=0
According to (3.2),
$$g(B(i\omega)) = g(A_0) \quad (\omega \in \mathbb{R})$$
and
$$\lambda_j(F(z)) = z + \lambda_j(A_0)\int_0^{\eta}\exp(-zs)\,d\mu(s) \quad (z \in \mathbb{C}).$$
0
$$\inf_{\omega\in(-\infty,\infty)}|\lambda_j(F(i\omega))| \ge \hat d(F),$$
where
$$\hat d(F) := \int_0^{\eta}\cos(2\lambda_n(A_0)\tau)\,d\mu(\tau) > 0.$$
Thus
$$\Gamma_0(F) \le \hat\Gamma_0, \quad \text{where} \quad \hat\Gamma_0 := \sum_{k=0}^{n-1}\frac{g^k(A_0)}{\sqrt{k!}\,\hat d^{k+1}(F)}.$$
Now Corollary 7.3.1 implies the following result.
Corollary 7.4.1. Let the conditions (4.5), (4.6) and
$$w_0\sum_{k=0}^{n-1}\frac{g^k(A_0)}{\sqrt{k!}\,\cos^{k+1}(2\eta\lambda_n(A_0))} < 1$$
Substituting
x(t) = eλt v(t) (5.2)
into (5.1), we have
Besides
v(t) = v(t + 2π). (5.4)
Let vk and ak , k = 0, ±1, ... be the Fourier coefficients of v(t) and a(t), respectively:
$$v(t) = \sum_{k=-\infty}^{\infty} v_k e^{ikt} \quad \text{and} \quad a(t) = \sum_{k=-\infty}^{\infty} a_k e^{ikt}. \tag{5.5}$$
Hence,
$$v_j + \frac{1}{ij + \lambda + b}\sum_{k=-\infty}^{\infty} a_{j-k}e^{-(\lambda+ik)h}v_k = 0 \quad (j = 0, \pm 1, \dots),$$
with
$$Z_{jk}(\lambda) = \frac{a_{j-k}e^{-(\lambda+ik)h}}{ij + \lambda + b}.$$
$$N_2^2(Z(i\omega)) = \sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty}\frac{|a_{j-k}e^{-(i\omega+ik)h}|^2}{|i(j+\omega)+b|^2} \le \nu^2(\omega),$$
where
$$\nu^2(\omega) := \sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty}\frac{|a_{j-k}|^2}{(j+\omega)^2 + b^2}.$$
This series converges according to (5.6). Let λ_k(z) be the eigenvalues of Z(z). Then
$$\det{}_2(I + Z(z)) = \prod_{k=1}^{\infty}(1 + \lambda_k(z))e^{-\lambda_k(z)}.$$
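For a finite matrix the regularized determinant det₂ can be computed either from the eigenvalues or as det(I + Z)·e^{−Trace Z}; the two agree, which makes for a quick numerical cross-check (Z below is a random test matrix, not from the text).

```python
import numpy as np

rng = np.random.default_rng(4)
Z = 0.3 * rng.normal(size=(5, 5))
lam = np.linalg.eigvals(Z)

det2_eig = np.prod((1 + lam) * np.exp(-lam))                   # product over eigenvalues
det2_dir = np.linalg.det(np.eye(5) + Z) * np.exp(-np.trace(Z)) # direct formula

assert np.isclose(det2_eig, det2_dir)
```

The equality follows from det(I + Z) = ∏(1 + λ_k) and Trace Z = Σλ_k.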
$$(I + \tilde Z(\lambda))\hat v = 0,$$
where
$$\tilde Z_{jk}(\lambda) = \frac{\tilde a_{j-k}e^{-(\lambda+ik)\tilde h}}{ij + \lambda + \tilde b}.$$
Then
$$N_2^2(\tilde Z(i\omega)) = \sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty}\left|(ij + i\omega + \tilde b)^{-1}\tilde a_{j-k}e^{-(i\omega+ik)\tilde h}\right|^2 \le \tilde\nu^2(\omega),$$
where
$$\tilde\nu^2(\omega) := \sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty}\frac{|\tilde a_{j-k}|^2}{(j+\omega)^2 + \tilde b^2}.$$
Besides,
$$|\det{}_2(I + \tilde Z(i\omega))| \le \exp\left[\tfrac{1}{2}\tilde\nu^2(\omega)\right].$$
Put
q(ω) = N2 (Z(iω) − Z̃(iω)).
Then by Corollary 1.11.2 we arrive at the inequality
$$|\det{}_2(I + Z(i\omega)) - \det{}_2(I + \tilde Z(i\omega))| \le \delta_2(\omega),$$
where
$$\delta_2(\omega) := q(\omega)\exp\left[(1 + \nu(\omega) + \tilde\nu(\omega))^2/2\right].$$
Hence,
$$|\det{}_2(I + \tilde Z(i\omega))| \ge |\det{}_2(I + Z(i\omega))| - \delta_2(\omega).$$
So we have proved the following result.
Corollary 7.5.2. If equation (5.1) is asymptotically stable and
7.6 Comments
The material of this chapter is based on the paper [62].
As a specific case, the problem of stability investigation of linear periodic
systems (LPS) with time delay is of great theoretical and practical interest. The
majority of mathematical works in this area are based on investigation of the
monodromy operator [69], and are mainly of theoretical nature. An application
of the monodromy operator method is based on a solution of special boundary
problems for ordinary differential equations and is connected with serious techni-
cal difficulties. In connection with that method, approximate approaches are used,
which exploit various kind of averaging, approximation and discretization, as well
as truncation of infinite Hill determinants [11, 75]. In the interesting paper [84], devoted to a single-loop linear periodic system with time delay, a new approach is suggested. Namely, using the theory of Fredholm integral equations of the second kind,
the authors construct a characteristic function whose roots are the inverses of the multipliers of the considered system. Besides, sufficient stability conditions
are given, which are based on approximate representation of the characteristic
function in the form of a polynomial.
In the present chapter we describe an alternative approach to the stability
problem for a multivariable LPS with distributed delay, which is based on the
recent results for infinite block matrices and regularized determinants.
Chapter 8
Linear Equations with Oscillating Coefficients
In the present chapter we investigate vector and scalar linear equations with "quickly" oscillating coefficients.
is stable for all t ≥ 0. That is, it can have zeros in the open right half-plane for
some t ≥ 0. Besides, it is assumed that
A(t) = B + C(t),
Put
$$(E_0 f)(t) := \int_0^{\eta} dR_0(s)\,f(t - s).$$
such that
kE0 f kC(0,∞) ≤ v̂(R0 )kf kC(−η ,∞)
and
kE0 f kL1 (0,∞) ≤ v̂(R0 )kf kL1 (−η ,∞) .
Now we are in a position to formulate the main result of this chapter.
Then we have
kAkC(0,∞) ≤ kBkn + kC0 kC(0,∞)
and
$$w_C = \sup_t\left\|\int_0^t C_0(\omega s)\,ds\right\|_n = \frac{\nu_0}{\omega}. \tag{1.6}$$
For example, if C0 (t) = sin (t) C1 , with a constant matrix C1 , then ν0 = 2kC1 kn .
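Indeed, ∫₀ᵗ sin(ωs) ds = (1 − cos ωt)/ω, whose supremum over t is 2/ω; this is easy to confirm numerically:

```python
import numpy as np

omega = 3.0
t = np.linspace(0.0, 50.0, 200001)
dt = t[1] - t[0]
# cumulative (Riemann-sum) integral of sin(omega * s) on a fine grid
J = np.cumsum(np.sin(omega * t)) * dt

assert abs(J.max() - 2.0 / omega) < 1e-3   # sup_t |int_0^t sin(omega s) ds| = 2/omega
```

The same computation applied to C₀(t) = sin(t)·C₁ multiplies the supremum by ‖C₁‖, giving ν₀ = 2‖C₁‖ₙ.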
Theorem 8.1.1 implies our next result.
Corollary 8.1.2. Assume that the conditions (1.4), (1.5) and
ω > ν0 v̂ (R0 )[1 + v̂ (R0 )kFB kL1 (0,∞) (2kBkn + kC0 kC(0,∞) )] (1.7)
where μ is a scalar nondecreasing function with var(μ) < ∞. Consider the equation
$$\dot x(t) = B\int_0^{\eta} x(t - \tau)\,d\mu(\tau), \tag{1.9}$$
for a scalar function w(t). Reduce equation (1.9) to the diagonal form:
where λj (B) are the eigenvalues of B with their multiplicities. Let Xj (t) be the
fundamental solution of the scalar equation (1.10). Assume that
$$J_j := \int_0^{\infty}|X_j(t)|\,dt < \infty \quad (j = 1, \dots, n). \tag{1.11}$$
Moreover,
kE0 f kC(0,∞) ≤ var (μ)kf kC(−η ,∞)
and
$$\|E_0 f\|_{L^1(0,\infty)} = \int_0^{\infty}\left\|\int_0^{\eta} f(t - \tau)\,d\mu(\tau)\right\|_n dt \le \int_0^{\eta}\int_0^{\infty}\|f(t - \tau)\|_n\,dt\,d\mu(\tau) \le \mathrm{var}(\mu)\,\|f\|_{L^1(-\eta,\infty)}.$$
Now Theorem 8.1.1 implies
Theorem 8.1.3. Let B be a negative definite Hermitian matrix and the conditions
(1.11) and
$$w_C < \frac{1}{\mathrm{var}(\mu)\left(1 + \mathrm{var}(\mu)J_j(\|B\|_n + \|A\|_{C(0,\infty)})\right)} \quad (j = 1, \dots, n)$$
hold. Then equation (1.8) is exponentially stable.
If A(t) has the form (1.4) and condition (1.11) holds, then, clearly, Corollary
8.1.2 is valid with R0 (.) = μ(.)I and kF kL1 (0,∞) = maxj Jj .
Furthermore, assume that
$$\max_{k=1,\dots,n}|\lambda_k(B)| < \frac{1}{e\,\mathrm{var}(\mu)\,\eta}, \tag{1.12}$$
and put
$$\rho_0(B) := \min_{k=1,\dots,n}|\lambda_k(B)|.$$
Then by Lemma 4.6.5 we have
$$J_j \le \frac{1}{\mathrm{var}(\mu)\,\rho_0(B)} \quad (1 \le j \le n).$$
Now Theorem 8.1.3 implies
Corollary 8.1.4. Let B be a negative definite Hermitian matrix and the conditions
(1.12), and
$$w_C < \frac{\rho_0(B)}{\mathrm{var}(\mu)\left(\rho_0(B) + \|B\|_n + \|A\|_{C(0,\infty)}\right)}$$
hold. Then equation (1.8) is exponentially stable.
Now consider the equation
$$\dot x(t) = (B + C_0(\omega t))\int_0^{\eta} x(t - \tau)\,d\mu(\tau). \tag{1.13}$$
So A(t) has the form (1.4). Then the previous corollary and (1.6) imply the following.
Corollary 8.1.5. Assume that (1.4) and (1.5) hold and
ρ0 (B)ω > ν0 var(μ)(ρ0 (B) + 2kBkn + kC0 kC(0,∞) ).
Then equation (1.13) is exponentially stable.
with a given function f and the zero initial condition x(t) = 0 (t ≤ 0) is equivalent
to the equation
$$x(t) = \int_0^t F(t - s)f(s)\,ds. \tag{2.1}$$
$$\frac{d}{dt}(G(t) - F(t)) = B\left((E_0 G)(t) - (E_0 F)(t)\right) + C(t)(E_0 G)(t).$$
Now (2.1) implies
$$G(t) = F(t) + \int_0^t F(t - s)C(s)(E_0 G)(s)\,ds. \tag{2.2}$$
the equality
$$\int_a^t f(s)u(s)v(s)\,ds = f(t)j_u(t)v(t) - \int_a^t\left[f'(s)j_u(s)v(s) + f(s)j_u(s)v'(s)\right]ds$$
is valid.
154 Chapter 8. Linear Equations with Oscillating Coefficients
Proof. Clearly,
$$\frac{d}{dt}\left(f(t)j_u(t)v(t)\right) = f'(t)j_u(t)v(t) + f(t)u(t)v(t) + f(t)j_u(t)v'(t).$$
Integrating this equality and taking into account that ju (a) = 0, we arrive at the
required result.
Put
$$J(t) := \int_0^t C(s)\,ds.$$
By the previous lemma,
$$\int_0^t F(t - \tau)C(\tau)(E_0 G)(\tau)\,d\tau = F(0)J(t)(E_0 G)(t) - \int_0^t\left[\frac{dF(t-\tau)}{d\tau}J(\tau)(E_0 G)(\tau) + F(t - \tau)J(\tau)\frac{d(E_0 G)(\tau)}{d\tau}\right]d\tau.$$
But F(0) = I,
$$\frac{d(E_0 G)(\tau)}{d\tau} = \frac{d}{d\tau}\int_0^{\eta} dR_0(s)\,G(\tau - s) = \int_0^{\eta} dR_0(s)\,\frac{d}{d\tau}G(\tau - s) = \int_0^{\eta} dR_0(s)\,A(\tau - s)(E_0 G)(\tau - s)$$
and
$$\frac{dF(t-\tau)}{d\tau} = -\frac{dF(t-\tau)}{dt} = -B(E_0 F)(t - \tau).$$
Thus,
$$\int_0^t F(t - \tau)C(\tau)(E_0 G)(\tau)\,d\tau = J(t)(E_0 G)(t) + \int_0^t\Bigl[B(E_0 F)(t - \tau)J(\tau)(E_0 G)(\tau) - F(t - \tau)J(\tau)\int_0^{\eta} dR_0(s)A(\tau - s)(E_0 G)(\tau - s)\Bigr]d\tau.$$
Now (2.2) implies
Lemma 8.2.2. The following equality is true:
$$G(t) = F(t) + J(t)(E_0 G)(t) + \int_0^t\Bigl[B(E_0 F)(t - \tau)J(\tau)(E_0 G)(\tau) - F(t - \tau)J(\tau)\int_0^{\eta} dR_0(s)A(\tau - s)(E_0 G)(\tau - s)\Bigr]d\tau.$$
and applying our above arguments, according to (2.3), we obtain kx kC(0,∞) < ∞
for any solution x of (2.5). Hence (2.4) implies
kx(t)kn ≤ e−t kx kC(0,∞) (t ≥ 0)
for any solution x of (1.1), as claimed.
where rj (s) are nondecreasing functions having finite variations var(rj ), and aj (t)
are piece-wise continuous real functions bounded on [0, ∞).
In the present section we do not require that aj (t) are positive for all t ≥ 0.
So the function
$$z + \sum_{j=1}^{m} a_j(t)\int_0^{h} e^{-zs}\,dr_j(s)$$
are in the open left half-plane, and the functions c_j(t) have the property
$$w_j := \sup_{t\ge 0}\left|\int_0^t c_j(s)\,ds\right| < \infty \quad (j = 1, \dots, m).$$
The function
$$W(t) = \frac{1}{2\pi i}\int_{-i\infty}^{i\infty}\frac{e^{zt}}{k(z)}\,dz$$
is the fundamental solution to the equation
$$\dot y(t) = -\sum_{j=1}^{m} b_j\int_0^{h} y(t - s)\,dr_j(s). \tag{3.2}$$
For a scalar function f defined and bounded on [0, ∞) (not necessarily continuous)
we put kf kC = supt≥0 |f (t)|.
Theorem 8.3.1. Let
$$\sum_{j=1}^{m} w_j\left[1 + \|W\|_{L^1}\sum_{k=1}^{m}(b_k + \|a_k\|_C)\right] < 1. \tag{3.3}$$
Besides,
$$\mathrm{var}(\hat r) = \sum_{j=1}^{m} b_j\,\mathrm{var}(r_j) = \sum_{j=1}^{m} b_j.$$
Then W(t) ≥ 0 and equation (3.2) is exponentially stable, cf. Section 4.6. Now, integrating (3.2), we have
$$1 = W(0) = \int_0^{\infty}\int_0^{h} W(t - s)\,d\hat r(s)\,dt = \int_0^{h}\int_0^{\infty} W(t - s)\,dt\,d\hat r(s) = \int_0^{h}\int_{-s}^{\infty} W(t)\,dt\,d\hat r(s) = \int_0^{h}\int_0^{\infty} W(t)\,dt\,d\hat r(s) = \mathrm{var}(\hat r)\,\|W\|_{L^1}.$$
So
$$\|W\|_{L^1} = \frac{1}{\sum_{k=1}^{m} b_k}. \tag{3.6}$$
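For a single scalar delay ẏ(t) = −b·y(t − h) with bh < 1/e, the fundamental solution stays positive and (3.6) gives ‖W‖_{L¹} = 1/b; a crude Euler check, with arbitrary test values b = 1, h = 0.3:

```python
import numpy as np

b, h, dt, T = 1.0, 0.3, 1e-3, 40.0        # b*h = 0.3 < 1/e, positivity regime
lag, N = int(h / dt), int(T / dt)
W = np.zeros(N + lag)
W[lag] = 1.0                              # W(0) = 1, zero pre-history
for i in range(lag, N + lag - 1):
    # explicit Euler step for W'(t) = -b W(t - h)
    W[i + 1] = W[i] - dt * b * W[i - lag]

assert W[lag:].min() > -1e-6              # fundamental solution stays nonnegative
assert abs(np.sum(W) * dt - 1.0 / b) < 2e-2   # ||W||_{L1} = 1/b
```

The horizon T is chosen large enough that the truncated tail of the integral is negligible.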
Furthermore, let
For example, if u_j(t) = sin(t), then ν_j = 2. Now Theorem 8.3.1 and (3.7) imply our next result.
Corollary 8.3.3. Let the conditions (3.5), (3.8) and
$$\sum_{j=1}^{m}\frac{\nu_j}{\omega_j} < \frac{\sum_{k=1}^{m} b_k}{\sum_{k=1}^{m}(3b_k + \|u_k\|_C)} \tag{3.9}$$
where dj (s) are positive and bounded on [0, 1] functions, satisfying the condition
Z 1
dj (s)ds = 1.
0
Assume that (3.5) holds with h = 1. Then ν_j = 2τ_j and condition (3.9) takes the form
$$\sum_{j=1}^{m}\frac{2\tau_j}{\omega_j} < \frac{\sum_{k=1}^{m} b_k}{\sum_{k=1}^{m}(3b_k + \tau_k)}.$$
So for arbitrary τj , there are ωj , such that equation (3.10) is exponentially stable.
In particular, consider the equation
Then according to condition (3.9) for any τ0 , there is an ω, such that equation
(3.11) is exponentially stable.
where b̃_j = b_j/ξ. Let W̃ be the fundamental solution of equation (3.13). Subtracting (3.13) from (3.2), we obtain
$$\frac{d}{dt}(W(t) - \tilde W(t)) + \sum_{j=1}^{m}\tilde b_j\int_0^{h}(W(t - s) - \tilde W(t - s))\,dr_j(s) = -\sum_{j=1}^{m}(b_j - \tilde b_j)\int_0^{h} W(t - s)\,dr_j(s).$$
Hence, taking into account that var(r_j) = 1, by simple calculations we get
$$\|W - \tilde W\|_{L^1} \le \|\tilde W\|_{L^1}\,\|W\|_{L^1}\sum_{j=1}^{m}(b_j - \tilde b_j).$$
If
$$\psi := \|\tilde W\|_{L^1}\sum_{j=1}^{m}(b_j - \tilde b_j) < 1,$$
then
$$\|W\|_{L^1} \le \frac{\|\tilde W\|_{L^1}}{1 - \psi}.$$
But condition (3.12) implies (3.5) with b̃_k instead of b_k. So according to (3.6) we have
$$\|\tilde W\|_{L^1} = \frac{1}{\sum_{k=1}^{m}\tilde b_k} = \frac{\xi}{\sum_{k=1}^{m} b_k}.$$
Consequently, ψ = ξ − 1 and
$$\|W\|_{L^1} \le \frac{\|\tilde W\|_{L^1}}{2 - \xi}.$$
Thus we have proved the following result.
Lemma 8.3.5. Let conditions (3.12) and var(r_j) = 1 (j = 1, ..., m) hold. Then
$$\|W\|_{L^1} \le \frac{\xi}{(2 - \xi)\sum_{k=1}^{m} b_k}.$$
x(t) = 0 (t ≤ 0)
$$\frac{d}{dt}(G(t) - W(t)) = -\sum_{j=1}^{m} b_j\int_0^{h}(G(t - s) - W(t - s))\,dr_j(s) - \sum_{j=1}^{m} c_j(t)\int_0^{h} G(t - s)\,dr_j(s). \tag{4.2}$$
Put
$$J_j(t) := \int_0^t c_j(s)\,ds.$$
8.4. Proof of Theorem 8.3.1 161
and
$$\frac{dW(t-\tau)}{d\tau} = -\frac{dW(t-\tau)}{dt} = \sum_{j=1}^{m} b_j\int_0^{h} W(t - \tau - s_1)\,dr_j(s_1) = \int_0^{h} W(t - \tau - s_1)\,d\hat r(s_1).$$
Thus, Z t
W (t − τ )cj (τ )G(τ − s)dτ = Zj (t, s),
0
where
Z t Z h
Zj (t, s) := Jj (t)G(t − s) + Jj (τ )[− W (t − τ − s1 )dr̂(s1 )G(τ − s)+
0 0
m
X Z h
W (t − τ ) ak (τ − s) G(τ − s − s1 )drk (s1 )]dτ.
k=1 0
is true.
We have
$$\sup_{t\ge 0}|Z_j(t,s)| \le w_j\,\|G\|_C\Big[1 + \int_0^t\!\!\int_0^h |W(t-\tau-s_1)|\,d\hat r(s_1)\,d\tau + \sum_{k=1}^m \|a_k\|_C\int_0^t |W(t-\tau)|\,d\tau\Big].$$
But
$$\int_0^h\!\!\int_0^t |W(t-\tau-s)|\,d\tau\,d\hat r(s) = \int_0^h\!\!\int_{-s}^{t-s} |W(\tau)|\,d\tau\,d\hat r(s) \le \operatorname{var}(\hat r)\int_0^\infty |W(\tau)|\,d\tau.$$
Thus,
$$\|Z_j(t,s)\|_C \le w_j\,\|G\|_C\Big(1 + \sum_{k=1}^m (b_k + \|a_k\|_C)\,\|W\|_{L^1}\Big).$$
Condition (3.3) means that γ < 1. We thus have proved the following result.
Lemma 8.4.2. Let condition (3.3) hold. Then
$$\|G\|_C \le \frac{\|W\|_C}{1-\gamma}. \tag{4.5}$$
If $\epsilon > 0$ is sufficiently small, then according to (4.5) we easily obtain that $\|x_\epsilon\|_C < \infty$ for any solution $x_\epsilon$ of (4.7). Hence (4.6) implies
8.5 Comments
This chapter is based on the papers [54] and [55].
The literature on first order scalar linear functional differential equations is very rich, cf. [81, 89, 102, 109, 114] and references therein, but the coefficients are mainly assumed to be positive. The papers [6, 7, 117] are devoted to
This chapter deals with vector differential-delay equations having slowly varying
coefficients. The main tool in this chapter is the ”freezing” method.
where $0 = h_0 < h_1 < \dots < h_m \le \eta$ are constants, and $A_k(t)$ and $A(t,\tau)$ are matrix-valued functions satisfying the inequality
$$\int_0^\eta \|A(t,\tau)-A(s,\tau)\|_n\,d\tau + \sum_{k=0}^m \|A_k(t)-A_k(s)\|_n \le q\,|t-s| \quad (t,s\ge 0). \tag{1.4}$$
Then we have
$$\Big\|\int_0^\eta (A(t,\tau)-A(s,\tau))f(t-\tau)\,d\tau + \sum_{k=0}^m (A_k(t)-A_k(s))f(t-h_k)\Big\|_n$$
$$\le \|f\|_{C(-\eta,t)}\Big(\int_0^\eta \|A(t,\tau)-A(s,\tau)\|_n\,d\tau + \sum_{k=0}^m \|A_k(t)-A_k(s)\|_n\Big).$$
where $z_k(K_s)$ are the characteristic values of $K_s$; so under our assumptions $\alpha_0 < 0$. Since $(K_s^{-1}(z))' = dK_s^{-1}(z)/dz$ is the Laplace transform of $-tG_s(t)$, for any positive $c < |\alpha_0|$ we obtain
$$tG_s(t) = -\frac{1}{2\pi i}\int_{-i\infty}^{i\infty} e^{zt}\,(K_s^{-1}(z))'\,dz = -\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{t(i\omega-c)}\,(K_s^{-1}(i\omega-c))'\,d\omega.$$
Hence,
$$t\,\|G_s(t)\|_n \le \frac{e^{-tc}}{2\pi}\int_{-\infty}^{\infty}\|(K_s^{-1}(i\omega-c))'\|_n\,d\omega.$$
Therefore,
$$t\,\|G_s(t)\|_n \le \frac{M_{c,s}\,e^{-tc}}{2\pi}\int_{-\infty}^{\infty}\|K_s^{-1}(i\omega-c)\|_n^2\,d\omega,$$
where
$$M_{c,s} = \sup_\omega\Big\|I + \int_0^\eta \tau e^{-(i\omega-c)\tau}\,d_\tau R(s,\tau)\Big\|_n \le 1 + e^{c\eta}\,vd(R(s,\cdot)) \le 1 + \eta e^{c\eta}\operatorname{var}R(s,\cdot).$$
Here $vd(R(s,\cdot))$ is the spectral norm of the matrix whose entries are $\int_0^\eta \tau\,d_\tau|r_{jk}(s,\tau)|$.
We thus obtain the following result.
Lemma 9.1.2. For any positive $c < |\alpha_0|$ we have
$$\chi \le \sup_{s\ge 0}\frac{M_{c,s}}{2\pi c}\int_{-\infty}^{\infty}\|K_s^{-1}(i\omega-c)\|_n^2\,d\omega.$$
Thus
$$\Big\|\int_0^t G_s(t-t_1)f(t_1)\,dt_1\Big\|_n \le \|f\|_{C(0,\infty)}\int_0^t \|G_s(t_1)\|_n\,dt_1 \le c_1\|f\|_{C(0,\infty)}.$$
But
$$\int_0^t \|\hat G_t(t-t_1)\|_n\,(t-t_1)\,dt_1 = \int_0^t \|\hat G_t(u)\|_n\,u\,du \le \int_0^\infty \|\hat G_t(u)\|_n\,u\,du \le \sup_{t\ge 0}\int_0^\infty \|\hat G_t(u)\|_n\,u\,du = \chi.$$
Besides, some estimates for $V(R_1)$ are given in Section 1.12. Let us point out a stability criterion which is more convenient than Theorem 9.1.1 in the case of equation (3.1).
and
$$\hat\chi_0 := \sup_{s\ge 0}\int_0^\infty t\,\|e^{A(s)t}\|_n\,dt < \frac{1-\nu_A V(R_1)}{q_0}. \tag{3.5}$$
This theorem is proved in the next section. For instance, consider the equation
$$\dot x(t) = A(t)x(t) + \sum_{k=1}^m B_k(t)\int_0^\eta x(t-\tau)\,d\mu_k(\tau) \quad (t\ge 0), \tag{3.6}$$
where $\mu_k$ are nondecreasing scalar functions, and $B_k(t)$ are $n\times n$-matrices with the properties
$$\sup_{t\ge 0}\|B_k(t)\|_n < \infty \quad (k=1,\dots,m).$$
About various estimates for keA(s)t kn see Sections 2.5 and 2.8.
Put
$$\xi_A := \sup_{f\in C(0,\infty)}\frac{1}{\|f\|_{C(0,\infty)}}\sup_{t\ge 0}\Big\|\int_0^t U(t,s)f(s)\,ds\Big\|_n.$$
hold. Then any solution of (4.1) with $f\in C(0,\infty)$ and the zero initial condition satisfies the inequality
$$\|x\|_{C(0,\infty)} \le \frac{\xi_A\,\|f\|_{C(0,\infty)}}{1-V(R_1)\xi_A}.$$
Proof. Equation (4.1) is equivalent to the following one:
$$x(t) = \int_0^t U(t,s)\big(E_1x(s)+f(s)\big)\,ds.$$
Hence,
$$\|x\|_{C(0,\infty)} \le \xi_A\big(\|E_1x\|_{C(0,\infty)} + \|f\|_{C(0,\infty)}\big).$$
Hence, by (3.3) we arrive at the inequality
Corollary 9.4.2. Let condition (4.3) hold. Then equation (3.1) is exponentially
stable.
Hence
$$x(t) = \int_0^t e^{A(s)(t-t_1)}\big[(A(t_1)-A(s))x(t_1)+f(t_1)\big]\,dt_1.$$
Take $s = t$. Then
$$\|x(t)\|_n \le \int_0^t \|e^{A(t)(t-t_1)}\|_n\,\|(A(t_1)-A(t))x(t_1)\|_n\,dt_1 + c_0,$$
where
$$c_0 := \sup_{s,t}\int_0^t \|e^{A(s)(t-t_1)}\|_n\,\|f(t_1)\|_n\,dt_1 \le \|f\|_{C(0,\infty)}\sup_s\int_0^\infty \|e^{A(s)t_1}\|_n\,dt_1 \le \nu_A\,\|f\|_{C(0,\infty)}.$$
Thus, for any $T < \infty$, we get
$$\sup_{t\le T}\|x(t)\|_n \le c_0 + q_0\sup_{t\le T}\|x(t)\|_n\int_0^T \|e^{A(t)(T-t_1)}\|_n\,|t_1-T|\,dt_1 \le c_0 + q_0\sup_{t\le T}\|x(t)\|_n\int_0^T \|e^{A(t)u}\|_n\,u\,du,$$
so
$$\|x\|_{C(0,T)} \le \frac{c_0}{1-q_0\hat\chi_0} \le \frac{\nu_A\,\|f\|_{C(0,\infty)}}{1-q_0\hat\chi_0}.$$
Hence, letting $T\to\infty$, we get
$$\|x\|_{C(0,\infty)} \le \frac{c_0}{1-q_0\hat\chi_0} \le \frac{\nu_A\,\|f\|_{C(0,\infty)}}{1-q_0\hat\chi_0}.$$
This proves the lemma.
Proof of Theorem 9.3.1: The required result at once follows from Corollary
9.4.2 and the previous lemma.
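The "freezing" idea behind this proof can be illustrated numerically: if $A(t)$ varies slowly and every frozen matrix $A(s)$ is Hurwitzian, solutions of $\dot x = A(t)x$ decay. A minimal sketch with made-up matrices (not the book's example):

```python
import numpy as np

# Slowly varying family A(t): each frozen A(s) is Hurwitzian and the
# Lipschitz constant of t -> A(t) is small (all values illustrative).
def A(t):
    return np.array([[-1.0, 0.05 * np.sin(0.01 * t)],
                     [0.05 * np.cos(0.01 * t), -1.0]])

dt, T = 1e-3, 50.0
x = np.array([1.0, -1.0])
x0_norm = np.linalg.norm(x)
for i in range(int(T / dt)):
    x = x + dt * (A(i * dt) @ x)   # explicit Euler for x' = A(t)x
final_norm = np.linalg.norm(x)
print(x0_norm, final_norm)         # the norm decays strongly
```

The decay rate seen numerically matches the frozen spectra: the eigenvalues stay close to $-1$, so $\|x(t)\|$ is roughly of order $e^{-t}$.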
9.5 Comments
The papers [23] and [60] were essentially used in this chapter.
Theorem 9.1.1 extends the ”freezing” method for ordinary differential equa-
tions, cf. [12, 76, 107]. Nonlinear systems with delay and slowly varying coefficient
were considered in [29].
Chapter 10
In this chapter, again $C([a,b],\mathbb C^n) = C(a,b)$ and $L^p([a,b],\mathbb C^n) = L^p(a,b)$. Our main object in the present section is the problem
$$\dot x(t) = \int_0^\eta d_sR(t,s)\,x(t-s) + [Fx](t) + f(t) \quad (t\ge 0), \tag{1.1}$$
and
$$F_Tw(t) = \begin{cases}(Fw)(t) & \text{if } 0\le t\le T,\\ 0 & \text{if } t > T.\end{cases}$$
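The truncation operator $F_T$ and the causality identity $F_Tw = F_Tw_T$ used below can be illustrated in discrete time; the causal map $F$ here (a running sum) is a made-up example:

```python
# Discrete-time sketch of the truncation operator F_T and the causality
# identity F_T w = F_T w_T; the causal map F below is an invented example.
def F(w):
    # causal: the output at index i depends only on w[0..i]
    return [sum(w[:i + 1]) for i in range(len(w))]

def truncate(w, T):
    return [w[i] if i <= T else 0.0 for i in range(len(w))]

def F_T(w, T):
    return truncate(F(w), T)

w = [1.0, -2.0, 3.0, 0.5, -1.0]
T = 2
wT = truncate(w, T)
lhs, rhs = F_T(w, T), F_T(wT, T)
print(lhs, rhs)   # identical lists: F_T w = F_T w_T
```

Changing `w` beyond index `T` never changes `F_T(w, T)`, which is exactly the causality property exploited in the proofs.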
Since F is causal, one has FT w = FT wT . Consequently
and
$$\delta = \|w_T - v_T\|_{C(-\eta,T)} \quad\text{and}\quad \epsilon = \|Fw_T - Fv_T\|_{C(-\eta,\infty)}.$$
We have $\|w_T - v_T\|_{C(-\eta,\infty)} = \delta$. Since $F$ is continuous in $\Omega(\varrho)$ and
and $z(t)$ is a solution of the problem (1.5), (1.2). Again use the operator
$$\hat Gf(t) = \int_0^t G(t,t_1)f(t_1)\,dt_1 \quad (f\in C(0,\infty)). \tag{1.6}$$
It is assumed that
or
$$q\,\|\hat G\|_{C(0,\infty)} < 1, \quad\text{if } \varrho = \infty. \tag{1.7b}$$
Theorem 10.1.2. Let F be a continuous causal mapping in C(−η , ∞). Let condi-
tions (1.3) and (1.7) hold. Then problem (1.1), (1.2) has a solution x(t) satisfying
the inequality
Proof. Take a finite $T > 0$ and define on $\Omega_T(\varrho) = \Omega(\varrho)\cap C(-\eta,T)$ the mapping $\Phi$ by
$$\Phi w(t) = z(t) + \int_0^t G(t,s)\big([Fw](s)+f(s)\big)\,ds \quad (0\le t\le T;\ w\in\Omega_T(\varrho)),$$
and
$$\Phi w(t) = \phi(t) \quad\text{for } -\eta\le t\le 0.$$
Clearly, $\Phi$ maps $\Omega(\varrho)$ into $C(-\eta,T)$. Moreover, by (1.3) and Lemma 10.1.1, we obtain the inequality
Recall that any causal mapping satisfies the condition F 0 ≡ 0 (see Section 1.8).
Definition 10.2.1. Let $F$ be a continuous causal mapping in $C(-\eta,\infty)$. Then the zero solution of (2.1) is said to be stable (in the Lyapunov sense) if for any $\epsilon > 0$ there exists a $\delta > 0$ such that the inequality $\|\phi\|_{C(-\eta,0)}\le\delta$ implies $\|x\|_{C(0,\infty)}\le\epsilon$ for any solution $x(t)$ of problem (2.1), (1.2).
The zero solution of (2.1) is said to be asymptotically stable, if it is stable,
and there is an open set Ω̃ ⊆ C(−η, 0), such that φ ∈ Ω̃ implies x(t) → 0 as
t → ∞. Besides, Ω̃ is called the region of attraction of the zero solution. If the
zero solution of (2.1) is asymptotically stable and Ω̃ = C(−η, 0), then it is globally
asymptotically stable.
The zero solution of (2.1) is exponentially stable, if there are positive con-
stants ν, m0 and r0 , such that the condition kφkC(−η ,0) ≤ r0 implies the relation
Furthermore, consider the following equation with the autonomous linear part:
$$\dot x(t) = \int_0^\eta dR_0(s)\,x(t-s) + [Fx](t). \tag{2.10}$$
Put
$$\hat G_0f(t) = \int_0^t G_0(t-t_1)f(t_1)\,dt_1 \tag{2.12}$$
for an $f\in C(0,\infty)$. Theorem 10.2.2 implies that, if the conditions (1.3) and
hold, then the zero solution to equation (2.10) is stable. Hence we arrive at the
following result.
Corollary 10.2.5. Let the conditions (1.3) and
$$\|G_0\|_{L^1(0,\infty)} < \frac{1}{q} \tag{2.13}$$
hold. Then problem (2.10), (1.2) has a solution $x(t)\in L^p(-\eta,\infty)$ satisfying the inequality
$$\|x\|_{L^p(-\eta,\infty)} \le \frac{\|z\|_{L^p(-\eta,\infty)}}{1-q_p\,\|\hat G_0\|_{L^p(0,\infty)}}, \tag{3.3}$$
where $z(t)$ is a solution of the linear problem (2.11), (1.2).
The proof of this theorem is similar to the proof of Theorem 10.1.2 with the
replacement of C(0, T ) by Lp (0, T ).
The Lipschitz condition
together with the Contraction Mapping theorem also allows us to easily prove the existence and uniqueness of solutions. Namely, the following result is valid.
Theorem 10.3.3. Let F be a continuous causal mapping in Lp (−η , ∞) for some
p ≥ 1. Let conditions (3.2) and (3.4) hold. Then problem (2.10), (1.2) has a unique
(continuous) solution x ∈ Lp (−η , ∞).
provided
$$q_2\,\theta(K) < 1. \tag{3.8}$$
Moreover, any of its solutions satisfies the inequality
Now we can apply the bounds for $\theta(K)$ from Sections 4.3, 4.7 and 4.8.
The following lemma shows that the notion of the $L^2$-absolute stability of (2.10) is stronger than the notion of the asymptotic (absolute) stability.
Lemma 10.3.7. If the zero solution to equation (2.10) is absolutely $L^2$-stable in the class of nonlinearities (3.7), then the zero solution to (2.10) is asymptotically stable.
Proof. Indeed, assume that a solution $x$ of (2.10) is in $L^2(0,\infty)$ and note that from (2.10) and (3.7) it follows that $\|\dot x\|_{L^2(0,\infty)} \le (\operatorname{var}(R_0)+q_2)\,\|x\|_{L^2(-\eta,\infty)}$. Thus
$$\|x(t)\|_n^2 = -\int_t^\infty \frac{d}{ds}\|x(s)\|_n^2\,ds \le 2\int_t^\infty \|x(s)\|_n\,\|\dot x(s)\|_n\,ds \le 2\Big(\int_t^\infty \|x(s)\|_n^2\,ds\Big)^{1/2}\Big(\int_t^\infty \|\dot x(s)\|_n^2\,ds\Big)^{1/2} \to 0 \ \text{as}\ t\to\infty,$$
as claimed.
$$\|Ff\|_{L^2(0,\infty)} \le \operatorname{var}(\nu)\,\|f\|_{L^2(-\eta,\infty)} \quad (f\in\Omega(\varrho)\cap L^2(-\eta,\infty)). \tag{4.3}$$
Indeed, introduce in the space of scalar functions $L^2([-\eta,\infty);\mathbb C)$ the operator $\hat E_\nu$ by
$$(\hat E_\nu w)(t) = \int_0^\eta w(t-\tau)\,d\nu(\tau) \quad (w\in L^2([-\eta,\infty);\mathbb C)).$$
Then
$$\|\hat E_\nu w\|_{L^2(0,\infty)}^2 = \int_0^\infty\Big|\int_0^\eta w(t-\tau)\,d\nu(\tau)\Big|^2 dt = \int_0^\infty\Big|\int_0^\eta w(t-\tau)\,d\nu(\tau)\int_0^\eta w(t-\tau_1)\,d\nu(\tau_1)\Big|\,dt$$
$$\le \int_0^\infty\int_0^\eta |w(t-\tau)|\,d\nu(\tau)\int_0^\eta |w(t-\tau_1)|\,d\nu(\tau_1)\,dt.$$
Hence
$$\|\hat E_\nu w\|_{L^2(0,\infty)}^2 \le \int_0^\eta\int_0^\eta\int_0^\infty |w(t-\tau)|\,|w(t-\tau_1)|\,dt\,d\nu(\tau_1)\,d\nu(\tau).$$
$$\int_{-\eta}^\infty |w(t)|^2\,dt.$$
Proof. First let $\varrho = \infty$. Since the linear equation (2.11) is $L^2$-stable, we can write
By Lemma 1.12.1
Or, according to (4.4),
Now let $\varrho < \infty$. By the Urysohn theorem (see Section 1.1), there is a continuous scalar-valued function $\psi_\varrho$ defined on $C(0,\infty)$ such that
$$\psi_\varrho(f) = \begin{cases}1 & \text{if } \|f\|_{C(0,\infty)} < \varrho,\\ 0 & \text{if } \|f\|_{C(0,\infty)} \ge \varrho.\end{cases}$$
$$\dot x = \tilde Fx \tag{5.1}$$
and substitute
$$x(t) = y_\epsilon(t)e^{-\epsilon t} \tag{5.2}$$
with an $\epsilon > 0$ into (5.1). Then we obtain the equation
Lemma 10.5.1. For an $\epsilon > 0$, let the zero solution of equation (5.3) be stable in the Lyapunov sense. Then the zero solution of equation (5.1) is exponentially stable.
Proof. If $\|\phi\|_{C(-\eta,0)}$ is sufficiently small, we have
Here
$$\|e^{\epsilon t}F(e^{-\epsilon t}f)\|_{C(0,\infty)} = \sup_{t\ge 0}\|e^{\epsilon t}[F(e^{-\epsilon t}f)](t)\|_n$$
and
$$[F_\epsilon f](t) = e^{\epsilon t}[F(e^{-\epsilon t}f)](t).$$
By (5.4) we have
$$\|F_\epsilon f\|_{C(0,\infty)} \le a(\epsilon)\,\|Ff\|_{C(0,\infty)}, \tag{5.6}$$
where $a(\epsilon)\to l\le 1$ as $\epsilon\to 0$. Let $G_\epsilon$ be the fundamental solution of the linear equation
$$\dot y - \epsilon y = E_{\epsilon,R}\,y, \tag{5.7}$$
Put
$$\hat G_\epsilon f(t) = \int_0^t G_\epsilon(t,t_1)f(t_1)\,dt_1 \quad (f\in C(0,\infty)).$$
Then the zero solution of equation (6.1) is stable. Moreover, a solution $y$ of problem (6.1), (1.2) satisfies the inequality
$$\|y\|_{C(0,\infty)} \le \frac{\sup_{t\ge 0}\|U(t,0)\phi(0)\|_n + q\nu_\infty\|\phi\|_{C(-\eta,0)}}{1-q\nu_\infty}$$
provided
$$\frac{\sup_{t\ge 0}\|U(t,0)\phi(0)\|_n + q\nu_\infty\|\phi\|_{C(-\eta,0)}}{1-q\nu_\infty} < \varrho. \tag{6.4}$$
Proof. Rewrite (6.1) as
$$x(t) = U(t,0)\phi(0) + \int_0^t U(t,s)(Fx)(s)\,ds.$$
So
$$\|x(t)\|_n \le \|U(t,0)\phi(0)\|_n + \int_0^t \|U(t,s)\|_n\,\|Fx(s)\|_n\,ds.$$
$$\|x(t)\|_n < \varrho \quad (t\le T),$$
$$\|Fx\|_{C(0,T)} \le q\,\|x\|_{C(-\eta,T)}.$$
Thus
$$\|x\|_{C(0,T)} \le \sup_{0\le t\le T}\|U(t,0)\phi(0)\|_n + q\,\|x\|_{C(-\eta,T)}\sup_{0\le t\le T}\int_0^t \|U(t,s)\|_n\,ds.$$
Consequently,
$$\|x\|_{C(0,T)} \le \frac{\sup_{t\ge 0}\|U(t,0)\phi(0)\|_n + q\nu_\infty\|\phi\|_{C(-\eta,0)}}{1-q\nu_\infty}.$$
Now condition (6.4) enables us to extend this inequality to the whole half-line, as claimed.
we deduce that
$$\hat\nu_\infty \le \sup_{t\ge 0}\int_0^t e^{-\alpha_0(t-s)}\,ds = 1/\alpha_0,$$
where
$$\nu_{A_0} := \sum_{k=0}^{n-1}\frac{g^k(A_0)}{\sqrt{k!}\,|\alpha(A_0)|^{k+1}}.$$
Now Lemma 10.6.1 and Corollary 10.6.2 imply the following result.
Corollary 10.6.3. Let $A_0$ be a Hurwitzian matrix and let the conditions (1.3) and
$$q\,\nu_{A_0} < 1$$
hold. Then the zero solution of equation (6.5) is stable. If, in addition, $F$ has the $\epsilon$-property (5.4), then the zero solution of equation (6.5) is exponentially stable.
where $x(t) = (x_k(t))_{k=1}^n$ and $[Fw]_j(t)$ denote the coordinates of the vector function $Fw(t)$ with a $w\in C([-\eta,\infty),\mathbb C^n)$.
Below, inequalities for real vectors or vector functions are understood in the coordinate-wise sense.
$$\tilde\Omega(\hat\rho) := \{v(t)=(v_j(t))\in C([-\eta,\infty),\mathbb C^n):\ \|v_j\|_{C([-\eta,\infty),\mathbb C)}\le\rho_j;\ j=1,\dots,n\},$$
$$M_{[a,b]}(v) := \big(\|v_j\|_{C([a,b],\mathbb C)}\big)_{j=1}^n \quad (v(t)=(v_j(t))\in C([a,b],\mathbb C^n)).$$
It is assumed that $F$ satisfies the following condition: there are nonnegative constants $\nu_{jk}$ $(j,k=1,\dots,n)$ such that for any
the inequalities
$$\|[Fw]_j\|_{C([0,\infty),\mathbb C)} \le \sum_{k=1}^n \nu_{jk}\,\|w_k\|_{C([-\eta,\infty),\mathbb C)} \quad (j=1,\dots,n) \tag{7.2}$$
It is also assumed that the entries $G_{jk}(t,s)$ of the fundamental solution $G(t,s)$ of equation (1.5) satisfy the conditions
$$\gamma_{jk} := \sup_{t\ge 0}\int_0^\infty |G_{jk}(t,s)|\,ds < \infty. \tag{7.5}$$
Denote $\hat\gamma = (\gamma_{jk})_{j,k=1}^n$.
Theorem 10.7.2. Let conditions (7.2) and (7.5) hold. If, in addition, the spectral radius of the matrix $Q = \hat\gamma\Lambda(F)$ is less than one, then the zero solution of equation (2.1) is stable. Moreover, if a solution $z$ of the linear problem (1.5), (1.2) satisfies the condition
$$M_{[-\eta,\infty)}(z) + Q\hat\rho \le \hat\rho, \tag{7.6}$$
then the solution $x(t)$ of problem (2.1), (1.2) satisfies the inequality
Proof. Take a finite $T > 0$ and define on $\Omega_T(\hat\rho) = \tilde\Omega(\hat\rho)\cap C(-\eta,T)$ the mapping $\Phi$ by
$$\Phi w(t) = z(t) + \int_0^t G(t,t_1)[Fw](t_1)\,dt_1 \quad (0\le t\le T;\ w\in\Omega_T(\hat\rho)),$$
and
$$\Phi w(t) = \phi(t) \quad\text{for } -\eta\le t\le 0.$$
Then by (7.3) and Lemma 10.7.1,
According to (7.6), $\Phi$ maps $\Omega_T(\hat\rho)$ into itself. Taking into account that $\Phi$ is compact, we prove the existence of solutions. Furthermore,
So
$$M_{[-\eta,T]}(x) \le (I-Q)^{-1}M_{[-\eta,T]}(z).$$
Hence, letting $T\to\infty$, we obtain (7.7), completing the proof.
together with the Generalized Contraction Mapping theorem (see Section 1.7) also
allows us to prove the existence and uniqueness of solutions. Namely, the following
result is valid.
Theorem 10.7.3. Let conditions (7.5) and (7.8) hold. If, in addition, the spectral
radius of the matrix Q = γ̂Λ(F ) is less than one, then problem (2.1), (1.2) has a
unique solution x̃ ∈ Ω̃(ρ̂), provided z satisfies condition (7.6). Moreover, the zero
solution of equation (2.1) is stable.
where $q_{jk}$ are the entries of $Q$. For this inequality, as well as other estimates for the matrix spectral radius, see Section 2.4.
Let $\mu_j(s)$ be defined and nondecreasing on $[0,\eta]$. Now we are going to investigate the stability of the following nonlinear system with the diagonal linear part:
$$\dot x_j(t) + \int_0^\eta x_j(t-s)\,d\mu_j(s) = [F_jx](t) \quad (x(t)=(x_k(t))_{k=1}^n;\ j=1,\dots,n;\ t\ge 0). \tag{7.10}$$
In this case $G_{jk}(t,s) = G_{jk}(t-s)$ and
Suppose that
$$\eta\operatorname{var}(\mu_j) < \frac{1}{e} \quad (j=1,\dots,n); \tag{7.11}$$
then $G_{jj}(s)\ge 0$ (see Section 11.3). Moreover, due to Lemma 4.6.3, we obtain the relations
$$\gamma_{jj} = \int_0^\infty G_{jj}(s)\,ds = \frac{1}{\operatorname{var}(\mu_j)}.$$
Thus
$$Q = (q_{jk})_{j,k=1}^n \quad\text{with the entries}\quad q_{jk} = \frac{\nu_{jk}}{\operatorname{var}(\mu_j)}.$$
Now Theorem 10.7.2 and (7.9) imply our next result.
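The resulting stability test — the spectral radius of $Q = (\nu_{jk}/\operatorname{var}(\mu_j))$ being less than one — is easy to evaluate numerically; a short sketch with made-up constants:

```python
import numpy as np

# Made-up data: Lipschitz constants nu[j][k] and variations var(mu_j).
nu = np.array([[0.2, 0.1],
               [0.3, 0.1]])
var_mu = np.array([1.0, 0.8])
Q = nu / var_mu[:, None]            # q_jk = nu_jk / var(mu_j)
rho = max(abs(np.linalg.eigvals(Q)))
print(rho)                          # stability requires rho < 1
```

For these illustrative numbers the spectral radius is well below one, so the stability condition of Theorem 10.7.2 is satisfied.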
$$|e^{A(t)\delta}| \le e^{B\delta}.$$
Hence
$$|U(t,s)| = \Big|\overleftarrow{\exp}\int_{[s,t]} A(t_1)\,dt_1\Big| \le \exp[(t-s)B] \quad (t\ge s).$$
In the considered case $G(t,s) = U(t,s)$, and the entries $G_{jk}(t,s)$ of $G(t,s)$ satisfy
$$|G_{jk}(t,s)| \le u_{jk}(t,s),$$
where $u_{jk}(t,s)$ are the entries of the matrix $\exp[(t-s)B]$.
Now we can apply Theorem 10.7.2 and the estimates for $u_{jk}(t,s)$ from Section 2.6 to establish the stability condition for equation (6.1).
imply the relation $\|x(t)\|_n \le m_0\|\phi\|_{C(-\eta,0)}e^{-\nu t}$ $(t\ge 0)$ for any positive solution $x$. If $r_0 = \infty$, then the zero solution of (8.1) is globally exponentially stable with respect to $K_\eta$.
It is assumed that $F$ satisfies the following conditions: there are linear causal non-negative bounded operators $A_-$ and $A_+$ such that
But $z(t)\ge 0$ for all $t\ge 0$. Consequently, one can extend these inequalities to the whole positive half-line.
Now conditions (8.5) imply
$$x(t) \le z(t) + \int_0^t G(t,s)[A_+x](s)\,ds$$
and
$$x(t) \ge z(t) + \int_0^t G(t,s)[A_-x](s)\,ds.$$
Hence, due to the abstract Gronwall lemma (see Section 1.5), omitting simple calculations, we have
$$x(t) \le x_+(t) \quad\text{and}\quad x(t) \ge x_-(t) \quad (t\ge 0),$$
Corollary 10.8.3. Under the hypothesis of Theorem 10.8.2, let equation (8.7) be
(asymptotically, exponentially) stable with respect to Kη . Then the zero solution
to (8.1) is globally (asymptotically, exponentially) stable with respect to Kη .
Conversely, let the zero solution to (8.1) be globally (asymptotically, expo-
nentially) stable with respect to Kη . Then equation (8.8) is (asymptotically, expo-
nentially) stable with respect to Kη .
$$M_{[a,b]}(v) := \big(\|v_j\|_{C([a,b],\mathbb R)}\big)_{j=1}^n \quad (v(t)=(v_j(t))\in C([a,b],\mathbb R^n)).$$
So
$$\Omega_+(\hat\rho) := \{v\in C([-\eta,\infty),\mathbb R_+^n):\ M_{[-\eta,\infty)}(v)\le\hat\rho\}.$$
It should be noted that the Urysohn theorem and the theorem just proved enable us, instead of conditions (8.3), to impose the following ones:
$$\frac{dx(t)}{dt} = r(t)\big(Ax(t) + CF_1(x(t-\tau))\big), \tag{9.3}$$
where
$$A = \begin{pmatrix}-a_1 & b_1\\ b_2 & -a_2\end{pmatrix},\qquad C = \begin{pmatrix}c_1 & 0\\ 0 & c_2\end{pmatrix}$$
and
$$F_1(x(t-\tau)) = \big(x_k(t-\tau)\exp(-x_k(t-\tau))\big)_{k=1}^2.$$
Taking $Fx(t) = CF_1(x(t-\tau))$ we have
$$\frac{dy(t)}{dt} = r(t)\big[Ay(t)+Cy(t-\tau)\big], \tag{9.4}$$
and equation (8.8) takes the form
$$\frac{dy(t)}{dt} = r(t)Ay(t). \tag{9.5}$$
where $x_+(t)$ is the solution of problem (9.4), (9.2). Consequently, the zero solution to (9.1) is globally asymptotically stable with respect to the cone $K = C([-\eta,0],\mathbb R_+^2)$, provided (9.4) is asymptotically stable.
Rewrite (9.4) as
$$y(t) = U_-(t,0)\phi(0) + \int_0^t U_-(t,s)r(s)Cy(s-\tau)\,ds.$$
Thus
where
$$M_0 = \sup_t\int_0^t r(s)\,\|U_-(t,s)C\|_2\,ds.$$
Here $\|\cdot\|_2$ is the spectral norm in $\mathbb C^2$. Hence it easily follows that (9.4) is stable provided $M_0 < 1$. Note that the eigenvalues $\lambda_1(A)$ and $\lambda_2(A)$ of $A$ are simply calculated, and with the notation
$$m(t,s) = \int_s^t r(s_1)\,ds_1,$$
we have
$$\|e^{m(t,s)A}\|_2 \le e^{\alpha(A)m(t,s)}\big(1+g(A)m(t,s)\big),$$
where $\alpha(A) = \max_{k=1,2}\operatorname{Re}\lambda_k(A)$ and $g(A) = |b_1-b_2|$ (see Section 2.8). Assume that $c_1\le c_2$, $\alpha(A) < 0$, and
Then $M_0\le M_1$, where
$$M_1 := r_+c_2\sup_t\int_0^t e^{(t-s)\alpha(A)r_-}\big(1+g(A)(t-s)r_+\big)\,ds = c_2r_+\int_0^\infty e^{u\alpha(A)r_-}\big(1+g(A)ur_+\big)\,du.$$
This integral is simply calculated. Thus, if $M_1 < 1$, then the zero solution to (9.1) is globally asymptotically stable with respect to the cone $C([-\eta,0],\mathbb R_+^2)$.
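For $\alpha < 0$ the integral evaluates in closed form: $\int_0^\infty e^{u\alpha r_-}(1+g\,u\,r_+)\,du = 1/(|\alpha|r_-) + g\,r_+/(\alpha r_-)^2$. A numerical cross-check with illustrative parameter values:

```python
import numpy as np

# Closed form of M1 (all parameter values illustrative, alpha < 0).
alpha, g, r_plus, r_minus, c2 = -0.5, 0.3, 1.2, 0.9, 0.4
closed = c2 * r_plus * (1/(abs(alpha)*r_minus) + g*r_plus/(alpha*r_minus)**2)

du = 1e-4
u = np.arange(0.0, 200.0, du)
integrand = np.exp(u*alpha*r_minus) * (1 + g*u*r_plus)
numeric = c2 * r_plus * integrand.sum() * du    # Riemann sum over [0, 200]
print(closed, numeric)                          # the two values agree closely
```

The truncation at $u = 200$ is harmless here because the integrand decays like $e^{-0.45u}$.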
and
$$G(t) = \frac{1}{2\pi}\int_{-\infty}^\infty \frac{e^{i\omega t}\,d\omega}{K(i\omega)}$$
are the characteristic matrix-valued function and the fundamental solution, respectively, of the linear equation
$$\dot z = \int_0^\eta dR_0(\tau)\,z(t-\tau) \quad (t\ge 0). \tag{10.4}$$
Recall that
$$\hat Gf(t) = \int_0^t G(t,s)f(s)\,ds.$$
Theorem 10.10.2. Let the conditions (10.6) and $q\|\hat G\|_{L^p(0,\infty)} < 1$ hold. Then (10.1) is input-to-state $L^p$-stable.
Proof. Put $l = \|u\|_{L^p(0,\infty)}$ for a fixed $u$. Repeating the arguments of the proof of Theorem 10.1.2 with $L^p$ instead of $C$, and taking into account the zero initial conditions, we get the inequality
$$\|x\|_{L^p(0,\infty)} \le \frac{\|\hat G\|_{L^p(0,\infty)}\,l}{1-q\|\hat G\|_{L^p(0,\infty)}},$$
where
$$\theta(K) := \sup_{-2\operatorname{var}(R_0)\le\omega\le 2\operatorname{var}(R_0)}\|K^{-1}(i\omega)\|_n.$$
where
$$E_\mu f(t) = \int_0^\eta f(t-s)\,d\mu(s)$$
for a scalar function $f$.
If the inequality
holds, then by Lemma 4.6.6 the fundamental solution $G_j$ of the scalar equation (11.3) is positive and
$$\int_0^\infty G_j(t)\,dt = \frac{1}{\lambda_j(A)\operatorname{var}(\mu)}.$$
But the fundamental solution $G_\mu$ of the vector equation (11.2) satisfies the equality
$$\int_0^\infty \|G_\mu(t)\|_n\,dt = \max_j\int_0^\infty G_j(t)\,dt.$$
Thus
$$\int_0^\infty \|G_\mu(t)\|_n\,dt = \frac{1}{\min_j\lambda_j(A)\operatorname{var}(\mu)}.$$
Now Theorem 10.10.2 implies our next result.
Corollary 10.11.1. Let $A$ be a positive definite Hermitian matrix and let conditions (10.6) and (11.4) hold. If, in addition,
10.12 Comments
This chapter is particularly based on the papers [39, 41] and [61].
The basic results on the stability of nonlinear differential-delay equations
are presented, in particular, in the well-known books [72, 77, 100]. About the
recent results on absolute stability of nonlinear retarded systems see [88, 111] and
references therein.
The stability theory of nonlinear equations with causal mappings is at an
early stage of development. The basic method for the stability analysis is the
direct Lyapunov method, cf. [13, 83]. But finding the Lyapunov functionals for
equations with causal mappings is a difficult mathematical problem.
Interesting investigations of linear causal operators are presented in the books
[82, 105]. The papers [5, 15] should also be mentioned. In the paper [15], the
existence and uniqueness of local and global solutions to the Cauchy problem for
equations with causal operators in a Banach space are established. In the paper
[5] it is proved that the input-output stability of vector equations with causal
operators is equivalent to the causal invertibility of causal operators.
Chapter 11
In this chapter, nonlinear scalar first and higher order equations with differential-
delay linear parts and nonlinear causal mappings are considered. Explicit stability
conditions are derived.
The Aizerman - Myshkis problem is also discussed.
Lemma 11.1.1. Let $V$ be compact in $X(0,\tau)$ for each finite $\tau$, and let the conditions (1.3) and
$$q\,\|V\|_{X(0,\infty)} < 1 \tag{1.4}$$
hold. Then equation (1.1) has a (continuous) solution $x\in X(-\eta,\infty)$. Moreover, that solution satisfies the inequality
$$\|x\|_{X(-\eta,\infty)} \le \frac{\|z\|_{X(-\eta,\infty)}}{1-q\|V\|_{X(0,\infty)}}.$$
Proof. By Lemmas 10.1.1 and 10.1.3, if condition (1.3) holds, then for all $T\ge 0$ and $w\in X(-\eta,T)$ we have
$$\|Fw\|_{X(0,T)} \le q\,\|w\|_{X(-\eta,T)},$$
and
$$(\Phi w)(t) = z(t), \quad t < 0,$$
for a $w\in X(-\eta,T)$. Hence, according to the previous inequality, for any sufficiently large number $r > 0$ we have
So $\Phi$ maps a bounded set of $X(-\eta,T)$ into itself. Now the existence of a solution $x(t)$ follows from the Schauder Fixed Point Theorem, since $V$ is compact.
From (1.1) it follows that
with $X(0,\infty) = L^p(0,\infty)$ and $X(0,\infty) = C(0,\infty)$ (see Section 1.3). Now the previous lemma implies
Corollary 11.1.2. Assume that $Q\in L^1(0,\infty)$, $z\in X(-\eta,\infty)$, and condition (1.3) holds. If, in addition, $q\|Q\|_{L^1(0,\infty)} < 1$, then the problem
$$x(t) = z(t) + \int_0^t Q(t-t_1)(Fx)(t_1)\,dt_1 \quad (t > 0), \tag{1.5a}$$
$$\|x\|_{X(-\eta,\infty)} \le \frac{\|z\|_{X(-\eta,\infty)}}{1-q\|Q\|_{L^1(0,\infty)}}.$$
Let
$$\tilde Q(z) := \int_0^\infty e^{-zt}Q(t)\,dt \quad (\operatorname{Re}z\ge 0)$$
be the Laplace transform of $Q$. Then by the Parseval equality we easily get $\|V_Q\|_{L^2(0,\infty)} = \Lambda_Q$, where
$$\Lambda_Q := \sup_{s\in\mathbb R}\|\tilde Q(is)\|.$$
Then, obviously,
$$|\tilde Q(iy)| = \Big|\int_0^\infty e^{-iyt}Q(t)\,dt\Big| \le \int_0^\infty Q(t)\,dt = \tilde Q(0) \quad (y\in\mathbb R). \tag{1.7}$$
Corollary 11.1.4. Let $Q\in L^1(0,\infty)$, $z\in X(-\eta,\infty)$ and the conditions (1.3), (1.6) and
$$q\,\tilde Q(0) < 1 \tag{1.8}$$
be fulfilled. Then problem (1.5) has a solution $x\in X(-\eta,\infty)$ and
$$\|x\|_{X(-\eta,\infty)} \le \frac{\|z\|_{X(-\eta,\infty)}}{1-q\tilde Q(0)}.$$
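Inequality (1.7) — for a nonnegative kernel the modulus of the transform peaks at the origin — is easy to check on a toy kernel; $Q(t) = e^{-t}$ with $\tilde Q(z) = 1/(1+z)$ is an assumed example, not one from the book:

```python
import numpy as np

# For a nonnegative kernel Q, sup_y |Q~(iy)| is attained at y = 0 (cf. (1.7)).
# Toy kernel Q(t) = e^{-t}, whose Laplace transform is Q~(z) = 1/(1+z).
Q0 = 1.0                              # Q~(0) = integral of e^{-t} over [0, inf)
ys = np.linspace(-50, 50, 10001)      # grid includes y = 0
vals = np.abs(1.0/(1.0 + 1j*ys))      # |Q~(iy)| = 1/sqrt(1+y^2)
print(vals.max(), Q0)
```

The maximum over the grid equals $\tilde Q(0) = 1$, as (1.7) predicts.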
$$x^{(n)}(t) + \sum_{k=0}^{n-1}\int_0^\eta x^{(k)}(t-\tau)\,d\mu_k(\tau) = [Fx](t) \quad (t > 0), \tag{2.1}$$
That is,
$$K(\lambda) = \lambda^n + \sum_{k=0}^{n-1}\lambda^k\int_0^\eta e^{-\lambda\tau}\,d\mu_k(\tau).$$
It is assumed that all the zeros of $K(\cdot)$ are in $\mathbb C_-$. Introduce the Green function of (2.3):
$$G(t) := \frac{1}{2\pi}\int_{-\infty}^\infty \frac{e^{i\omega t}\,d\omega}{K(i\omega)} \quad (t\ge 0).$$
If $n = 1$, then the notions of the Green function and the fundamental solution coincide. It is simple to check that the equation
$$w^{(n)}(t) + \sum_{k=0}^{n-1}\int_0^\eta w^{(k)}(t-\tau)\,d\mu_k(\tau) = f(t) \quad (t\ge 0) \tag{2.4}$$
Lemma 11.2.1. Let all the zeros of $K(z)$ be in $\mathbb C_-$. Then the linear equation (2.3) is exponentially stable.
Proof. As is well known, if all the zeros of $K(z)$ are in $\mathbb C_-$, then (2.3) is asymptotically stable, cf. [77, 78]. Now we get the required result by small perturbations and the continuity of the zeros of $K(z)$.
Hence, $\zeta\in X(-\eta,\infty)$. Now Lemma 11.1.1 implies the following result.
Theorem 11.2.2. Assume that condition (1.3) holds for $X(-\eta,\infty) = L^p(-\eta,\infty)$, $p\ge 1$, or $X(-\eta,\infty) = C(-\eta,\infty)$, and all the zeros of $K(z)$ are in $\mathbb C_-$. If, in addition,
$$q\,\|\hat G\|_{X(0,\infty)} < 1, \tag{2.8}$$
then problem (2.1), (2.2) has a solution $x\in X(-\eta,\infty)$ and
$$\|x\|_{X(0,\infty)} \le \frac{\|\zeta\|_{X(-\eta,\infty)}}{1-q\|\hat G\|_{X(0,\infty)}},$$
where $\zeta$ is a solution of problem (2.2), (2.3), and consequently,
$$\|x\|_{X(0,\infty)} \le M\sum_{k=0}^{n-1}\|\phi^{(k)}\|_{C[-\eta,0]}, \tag{2.9}$$
hold. Then problem (2.1), (2.2) has a solution x ∈ L2 (0, ∞). Moreover, that solu-
tion satisfies the inequality (2.9) with X(0, ∞) = L2 (0, ∞).
Definition 11.2.5. Equation (2.1) is said to be absolutely X-stable in the class of
nonlinearities satisfying (1.3), if there is a positive constant M0 independent of
the specific form of mapping F (but dependent on q), such that (2.9) holds for
any solution x(t) of (2.1).
Let us point out the following corollary to Theorem 11.2.2.
Corollary 11.2.6. Assume that all the zeros of $K(z)$ are in $\mathbb C_-$ and condition (2.8) holds. Then (2.1) is absolutely $X$-stable in the class of nonlinearities (1.3) with $X(-\eta,\infty) = L^p(-\eta,\infty)$, $p\ge 1$, or $X(-\eta,\infty) = C(-\eta,\infty)$.
If, in addition, G is positive, then condition (2.8) can be replaced by (2.10).
In the case X(−η , ∞) = L2 (−η , ∞), condition (2.8) can be replaced by
(2.11).
$$x^{(n)} + \sum_{k=0}^{n-1}\int_0^\eta x^{(k)}(t-\tau)\,d\mu_k(\tau) = \tilde q\,x(t), \tag{3.1}$$
with some q̃ ∈ [0, q] provides the absolute X-stability of (2.1) in the class of non-
linearities (1.3).
Recall that X(−η , ∞) = C(−η , ∞) or X(−η , ∞) = Lp (−η , ∞).
Theorem 11.3.1. Let the Green function of (2.3) be non-negative and condition
(2.10) hold. Then equation (2.1) is absolutely L2 -stable in the class of nonlineari-
ties satisfying (1.3). Moreover, (2.1) satisfies the Aizerman - Myshkis problem in
L2 (−η , ∞) with q̃ = q.
Proof. Corollary 11.2.6 at once yields that (2.1) is absolutely stable, provided
(2.10) holds. By Lemma 11.1.5 this is equivalent to the asymptotic stability of
(3.1) with q̃ = q. This proves the theorem.
Denote by Gμ (t) the Green function (the fundamental solution) of this equation.
In the next section we prove the following result.
Lemma 11.3.2. Under the condition,
So Kμ (0) is equal to the variation var (μ) of μ. By Lemma 11.1.5 Kμ (z) − q has
all its zeros in C− if and only if q < var (μ). Now by Theorem 11.3.1 we easily
get the following result.
Corollary 11.3.3. Assume that the conditions (3.4) and q < var (μ) are fulfilled.
Then equation (3.2) is X-absolutely stable in the class of nonlinearities (1.3).
Now let us consider the second order equation
and
$$G_2(t) := \frac{1}{2\pi i}\int_{c_0-i\infty}^{c_0+i\infty}\frac{e^{tz}\,dz}{K_2(z)} \quad (c_0 = \text{const}).$$
Assume that
$$B^2/4 > E,\qquad A^2/4 > C, \tag{3.7}$$
and denote
$$r_\pm(A,C) = \frac{A}{2}\pm\sqrt{\frac{A^2}{4}-C}$$
and
$$r_\pm(B,E) = \frac{B}{2}\pm\sqrt{\frac{B^2}{4}-E}.$$
In the next section we also prove the following result.
Lemma 11.3.4. Let the conditions (3.7),
$$D \le r_+(B,E)\,r_-(A,C) + r_-(B,E)\,r_+(A,C) \tag{3.8}$$
and
$$r_+(B,E)\,e^{r_+(A,C)} < \frac{1}{e} \tag{3.9}$$
hold. Then $G_2(t)$ is non-negative on the positive half-line.
Clearly, K2 (0) = C + D + E. Now Theorem 11.3.1 and Lemma 11.1.5 yield
the following corollary.
Corollary 11.3.5. Let the conditions (3.7)-(3.9) and q < C + D + E hold. Then
equation (3.5) is X-absolutely stable in the class of nonlinearities (1.3).
Finally, consider the higher order equations. To this end, for a continuous function $v$ defined on $[-\eta,\infty)$, let us define an operator $S_k$ by
$$(S_kv)(t) = a_kv(t-h_k) \quad (k=1,\dots,n;\ a_k=\text{const}>0,\ h_k=\text{const}\ge 0;\ t\ge 0).$$
Besides,
$$h_1 + \dots + h_n = \eta.$$
Consider the equation
$$\prod_{k=1}^n\Big(\frac{d}{dt}+S_k\Big)x(t) = [Fx](t) \quad (t\ge 0). \tag{3.10}$$
Put
$$\hat W_n(z) := \prod_{k=1}^n\big(z+a_ke^{-h_kz}\big)$$
and
$$G_n(t) = \frac{1}{2\pi i}\int_{c_0-i\infty}^{c_0+i\infty}\frac{e^{zt}\,dz}{\hat W_n(z)}.$$
So $G_n(t)$ is the Green function of the linear equation
$$\prod_{k=1}^n\Big(\frac{d}{dt}+S_k\Big)x(t) = 0.$$
Now Theorem 11.3.1 and Lemma 11.1.5 yield our next result.
Corollary 11.3.7. Let the conditions (3.11) and
$$q < \prod_{k=1}^n a_k$$
Suppose that
$$bh < e^{-1}. \tag{4.5}$$
Since
$$\max_{\tau\ge 0}\tau e^{-\tau} = e^{-1},$$
But due to (4.4), $z(0) = 1$ and $z(t) = 0$ $(t < 0)$. So the latter equation is equivalent to the following one:
$$z(t) = 1 + c\int_0^t\big[z(s)-z(s-h)\big]\,ds = 1 + c\int_0^t z(s)\,ds - c\int_0^{t-h}z(s)\,ds.$$
Consequently,
$$z(t) = 1 + c\int_{t-h}^t z(s)\,ds.$$
Due to the Neumann series it follows that $z(t)$, and therefore the Green function $G_b(t)$ of (4.3), is positive.
Furthermore, substituting $u(t) = e^{-at}v(t)$ into (4.1), we have the equation
According to (4.5), condition (4.2) provides the positivity of the Green function of the latter equation. Hence the required result follows.
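The Neumann-series (successive approximation) argument for $z(t) = 1 + c\int_{t-h}^t z(s)\,ds$ can be mimicked numerically: every iterate is built from nonnegative terms, so the limit stays positive. A discretized sketch with illustrative $c$, $h$ satisfying $ch < 1$:

```python
import numpy as np

# Picard/Neumann iteration for z(t) = 1 + c*int_{t-h}^t z(s) ds, z = 0 (t < 0).
# c*h < 1 makes the map a contraction (illustrative values).
c, h = 0.3, 1.0
dt, T = 1e-2, 10.0
n, lag = int(T / dt), int(h / dt)
z = np.zeros(n)                        # start from the zero function
for _ in range(60):                    # successive approximations
    znew = np.empty(n)
    for i in range(n):
        lo = max(0, i - lag)           # integral starts at max(0, t - h)
        znew[i] = 1.0 + c * z[lo:i].sum() * dt
    z = znew
print(z.min(), z.max())
```

Each iterate is bounded below by 1 and above by $1/(1-ch)$, in line with the positivity claim for the Green function.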
Lemma 11.4.2. If condition (3.4) holds, then the Green function $G_\mu$ of equation (3.3) is nonnegative and satisfies the inequality
Proof. Indeed, according to the initial conditions, for a sufficiently small $t_0 > \eta$,
Thus,
$$G_\mu(t-\eta) \ge G_\mu(t-s) \quad (s\le\eta;\ 0\le t\le t_0).$$
Hence,
$$\operatorname{var}(\mu)\,G_\mu(t-\eta) \ge \int_0^\eta G_\mu(t-s)\,d\mu(s) \quad (t\le t_0),$$
with
$$f(t) = \operatorname{var}(\mu)\,G_\mu(t-\eta) - \int_0^\eta G_\mu(t-s)\,d\mu(s) \ge 0 \quad (0\le t\le t_0).$$
Extending this inequality to the whole half-line, we get the required result.
where
$$(S_kv)(t) = a_kv(t) + b_kv(t-1) \quad (k=1,2;\ t\ge 0).$$
Due to the properties of the convolution, we have
$$G_0(t) = \int_0^t W_1(t-t_1)W_2(t_1)\,dt_1.$$
Assume that
$$e^{a_k}b_k < \frac{1}{e} \quad (k=1,2); \tag{4.7}$$
then due to Lemma 11.4.1, $G_0(t)\ge 0$ $(t\ge 0)$.
By the integral inequalities (see Section 1.6), it is not hard to show that if $G_0(t)\ge 0$ and $m\ge 0$, then the Green function of (4.8) is also non-negative.
Furthermore, assume that $P_2(\lambda) = K_2(\lambda)$, where $K_2(\lambda)$ is defined in Section 11.3. That is,
Then, comparing the coefficients of $P_2(\lambda)$ and $K_2(\lambda)$, we get the relations
$$a_1+a_2 = A,\quad a_1a_2 = C, \tag{4.9}$$
$$b_1+b_2 = B,\quad b_1b_2 = E, \tag{4.10}$$
and
$$a_1b_2+b_1a_2-m = D. \tag{4.11}$$
Solving (4.9), we get
From the hypothesis (3.7) it follows that $a_{1,2}$ and $b_{1,2}$ are real. Condition (3.9) implies (4.7). So $G_0(t)\ge 0$, $t\ge 0$. Moreover, (3.8) provides relation (4.11) with a positive $m$.
But, as was mentioned above, if $G_0(t)\ge 0$, then the Green function $G_2(t)$ corresponding to $K_2(\lambda) = P_2(\lambda)$ is also positive. This proves the lemma.
and
$$\dot x_-(t) + \int_0^\eta x_-(t-\tau)\,d\mu_-(\tau) = 0. \tag{5.4}$$
Denote by $G_1(t,s)$, $G_+(t)$ and $G_-(t)$ the Green functions of (5.1), (5.3) and (5.4), respectively.
Lemma 11.5.1. Let the conditions (5.2) and
hold. Then
with
$$f(t) = \int_0^\eta G_1(t-\tau)\big(d\mu_+(\tau)-d_\tau\mu(t,\tau)\big).$$
Hence, by virtue of the Variation of Constants formula and (5.2), we arrive at the relation
$$G_1(t,0) = G_+(t) + \int_0^t G_+(t-s)f(s)\,ds \ge G_+(t) \quad (0\le t\le t_0).$$
Extending this inequality to the whole positive half-line, we get the left-hand part of inequality (5.6) for $s = 0$. Similarly, the right-hand part of inequality (5.6) and the case $s > 0$ can be investigated.
But by (5.4)
$$G_-(0) = 1 = \int_0^\infty\!\!\int_0^\eta G_-(t-\tau)\,d\mu_-(\tau)\,dt = \int_0^\eta\!\!\int_0^\infty G_-(t-\tau)\,dt\,d\mu_-(\tau) = \int_0^\eta\!\!\int_{-\tau}^\infty G_-(s)\,ds\,d\mu_-(\tau)$$
$$= \int_0^\eta\!\!\int_0^\infty G_-(s)\,ds\,d\mu_-(\tau) = \int_0^\infty G_-(s)\,ds\cdot\operatorname{var}(\mu_-).$$
So we obtain the equality
$$\int_0^\infty G_\pm(s)\,ds = \frac{1}{\operatorname{var}(\mu_\pm)}. \tag{5.7}$$
Hence we get
Corollary 11.5.2. Let conditions (5.2) and (5.5) be fulfilled. Then
$$G_1(t,s) \le 1 \quad (t\ge s\ge 0),$$
$$0 \ge \frac{\partial G_1(t,s)}{\partial t} \ge -\int_0^\eta d_\tau\mu(t,\tau) = -\operatorname{var}\mu(t,\cdot) \quad (t\ge s),$$
and
$$\sup_t\int_0^t G_1(t,s)\,ds \le \frac{1}{\operatorname{var}(\mu_-)}. \tag{5.8}$$
Furthermore, consider the nonlinear equation
$$\dot x(t) + \int_0^\eta x(t-\tau)\,d_\tau\mu(t,\tau) = [Fx](t) \quad (t > 0), \tag{5.9}$$
we have
$$e^{-\int_0^t c_0(s)\,ds}\Big[\ddot w(t) - 2c_0(t)\dot w(t) + w(t)\big(-\dot c_0(t)+c_0^2(t)+d_0(t)\big) + 2\big(c_0(t)\dot w(t)-c_0^2(t)w(t)\big)\Big]$$
$$+\, c_1(t)\,e^{-\int_0^{t-h}c_0(s)\,ds}\big[-c_0(t-h)w(t-h)+\dot w(t-h)\big] + d_1(t)\,e^{-\int_0^{t-h}c_0(s)\,ds}\,w(t-h) + d_2(t)\,e^{-\int_0^{t-2h}c_0(s)\,ds}\,w(t-2h) = 0.$$
Or
where
$$m_0(t) := -\dot c_0(t)+c_0^2(t)+d_0(t)$$
and
$$m_1(t) := e^{\int_{t-h}^t c_0(s)\,ds}\big[-c_1(t)c_0(t-h)+d_1(t)\big].$$
According to (6.3), $m_0(t)\le 0$ and $m_1(t)\le 0$. Hence, by the integral inequalities principle (see Section 1.6), it easily follows that if the Green function of equation (6.4) is nonnegative, then the Green function of equation (6.6) is also nonnegative. This and (6.5) prove the theorem.
11.7 Comments
This chapter is based on the papers [36, 49]. Theorem 11.6.1 is taken from the
paper [48].
Recall that in 1949 M. A. Aizerman stated the following conjecture: let A, b, c be an n × n-matrix, a column-matrix and a row-matrix, respectively. Then for the absolute stability of the zero solution of the equation
This chapter deals with forced periodic oscillations of coupled systems of semi-
linear functional differential equations. Explicit conditions for the existence and
uniqueness of periodic solutions are derived. These conditions are formulated in
terms of the roots of characteristic matrix functions.
In addition, estimates for periodic solutions are established.
where
ek (t) = e2πikt (k = 0, ±1, ±2, ....),
and Z 1
ck = f (t)ek (t)dt ∈ Cn
0
are the Fourier coefficients. Introduce the Hilbert space PF of 1-periodic functions
defined on the real axis R with values in Cn , and the scalar product
∞
X
(f, u)P F ≡ (ck , bk )C n (f, u ∈ P F ),
k=−∞
220 Chapter 12. Forced Oscillations in Vector Semi-Linear Equations
where (·, ·)_{C^n} is the scalar product in C^n, and c_k and b_k are the Fourier coefficients of f and u, respectively. The norm in PF is

|f|_{PF} = √((f, f)_{PF}) = (Σ_{k=−∞}^∞ ||c_k||_n^2)^{1/2}.

Here ||c||_n = √((c, c)_{C^n}) for a vector c.
Due to the periodicity, for any real a we have

||v||_{L^2([0,1],C^n)} = (∫_a^{a+1} ||v(t)||_n^2 dt)^{1/2} (v ∈ PF).
In addition,
l := |F 0|P F > 0. (1.3)
That is, F 0 is not zero identically.
In particular, one can take F v = F̂ v + f , where F̂ is causal and f ∈ P F .
Let K(z) be the characteristic matrix of the linear term of (1.1):
K(z) = zI − ∫_0^1 e^{−zτ} dR(τ) (z ∈ C),
where I is the unit matrix. Assume that matrices K(2iπj) are invertible for all
integer j and the condition
be fulfilled. Then equation (1.1) has a unique nontrivial periodic solution u. Moreover, it satisfies the estimate
|u|_{PF} ≤ lM_0(K)/(1 − qM_0(K)). (1.5)
The proof of this theorem is presented in the next section.
Note that

M_0(K) ≤ sup_{ω∈R} ||K^{-1}(iω)||_n.
So one can use the estimates for θ(K) derived in Sections 4.4 and 4.5.
(T e_k h)(t) = e_k(t)K(2πik)h.
We seek a solution u to (2.1) in the form
u = Σ_{k=−∞}^∞ a_k e_k (a_k ∈ C^n).
Hence,
T u = Σ_{k=−∞}^∞ T e_k a_k = Σ_{k=−∞}^∞ e_k K(2πik) a_k
and by (2.1),
a_k = K^{-1}(2πik)c_k (k = 0, ±1, ±2, ...).
Therefore,
||a_k||_n ≤ M_0 ||c_k||_n.

Hence, |u|_{PF} ≤ M_0 |f|_{PF}. This means that |T^{-1}|_{PF} ≤ M_0.
Furthermore, equation (1.1) is equivalent to the following one:
u = Ψ(u) (2.2)
where
Ψ(u) = T^{-1}F u.
For any v, w ∈ P F relation (1.2) implies
Due to (1.3) Ψ maps P F into itself. So according to (1.4) and the Contraction
Mapping theorem equation (1.1) has a unique solution u ∈ P F . To prove estimate
(1.5), note that
|u|_{PF} = |Ψ(u)|_{PF} ≤ M_0(q|u|_{PF} + l).
Hence (1.4) implies the required result.
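To make the contraction argument concrete, here is a small numerical sketch (an illustration, not from the book): a hypothetical scalar 1-periodic delay equation is solved by iterating u ← T^{-1}F u on a Fourier grid. The equation, parameter values and grid size are all arbitrary choices.

```python
import numpy as np

# Hypothetical scalar test problem (not from the book):
#   u'(t) + a*u(t - tau) = q*sin(u(t)) + cos(2*pi*t),  u 1-periodic,
# with characteristic function K(z) = z + a*exp(-z*tau).
a, tau, q = 3.0, 0.25, 0.3
N = 256
t = np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)                  # integer frequencies
K = 2j * np.pi * k + a * np.exp(-2j * np.pi * k * tau)
M0 = np.max(1.0 / np.abs(K))                      # grid proxy for sup_k ||K^{-1}(2*pi*i*k)||

f = np.cos(2 * np.pi * t)
u = np.zeros(N)
for _ in range(200):                              # fixed-point iteration u <- T^{-1} F u
    u = np.real(np.fft.ifft(np.fft.fft(q * np.sin(u) + f) / K))

# q*M0 < 1 is the contraction condition; the analogue of estimate (1.5) then holds.
l = np.sqrt(np.mean(f ** 2))
print(q * M0 < 1, np.sqrt(np.mean(u ** 2)) <= M0 * l / (1 - q * M0))
```

Since |sin u| ≤ |u| pointwise, the discrete analogue of the chain |u| ≤ M_0(q|u| + l) holds exactly for the computed fixed point.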
For example, let (F y)(t) = B(t)F0 (y(t − 1)), where B(t) is a 1-periodic matrix-
function and F_0 : R^n → R^n has the Lipschitz property and F_0(0) ≠ 0.
First, consider the scalar problem
with a constant ω 6= 0 and a function f0 ∈ L2 ([0, 1], C). Then a solution of (3.3)
is given by
x(t) = ∫_0^1 G(ω, t, s) f_0(s) ds, (3.4)
12.3. Applications of matrix functions 223
where

G(ω, t, s) = (1/(1 − e^ω)) · { e^{ω(1+t−s)} if 0 ≤ t ≤ s ≤ 1; e^{ω(t−s)} if 0 ≤ s < t ≤ 1 }

is the Green function of the problem (3.3), cf. [70].
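The formula (3.4) is easy to check numerically (a sketch, not from the book). We assume here that (3.3) is the scalar periodic problem x'(t) = ωx(t) + f_0(t), x(0) = x(1); since the equation itself is stated earlier in the chapter, this form is an assumption.

```python
import numpy as np

w = -1.5                                           # "omega" in the text, arbitrary
f0 = lambda s: np.cos(2 * np.pi * s)

def G(w, t, s):
    # e^{w(1+t-s)}/(1-e^w) if t <= s,  e^{w(t-s)}/(1-e^w) if s < t
    return np.where(t <= s, np.exp(w * (1.0 + t - s)),
                    np.exp(w * (t - s))) / (1.0 - np.exp(w))

s = (np.arange(4000) + 0.5) / 4000.0               # midpoint quadrature on [0, 1]
x = lambda t: np.mean(G(w, t, s) * f0(s))          # x(t) = int_0^1 G(w,t,s) f0(s) ds

# exact 1-periodic solution of x' = w*x + cos(2*pi*t), for comparison
x_exact = lambda t: (-w * np.cos(2 * np.pi * t)
                     + 2 * np.pi * np.sin(2 * np.pi * t)) / (4 * np.pi ** 2 + w ** 2)
err = max(abs(x(tt) - x_exact(tt)) for tt in np.linspace(0.0, 1.0, 11))
print(err)  # small quadrature error
```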
Now consider the vector equation

w'(t) = Aw(t) + f(t), w(0) = w(1),

with an f ∈ PF. Then a solution of this equation is given by

w(t) = ∫_0^1 G(A, t, s) f(s) ds, (3.5)
where Pk are the Riesz projections and λk (A) are the eigenvalues of A (see Section
2.7).
By Corollary 2.7.5,

||G(A, t, s)||_n ≤ γ(A) max_k |G(λ_k(A), t, s)|.
Consequently,

∫_0^1 ∫_0^1 ||G(A, t, s)||_n^2 ds dt ≤ J_A^2,

where

J_A = γ(A) max_k (∫_0^1 ∫_0^1 |G(λ_k(A), t, s)|^2 ds dt)^{1/2}.
Let the condition

J_A q < 1

hold. Then (3.1) has a nontrivial solution y. Moreover, it satisfies the inequality

|y|_{PF} ≤ J_A l/(1 − J_A q).
12.4 Comments
The material of this chapter is adapted from the paper [28].
Periodic solutions (forced periodic oscillations) of nonlinear functional dif-
ferential equations (FDEs) have been studied by many authors, see for instance
[10, 66, 79] and references given therein. In many cases, the problem of the ex-
istence of periodic solutions of FDEs is reduced to the solvability of the corre-
sponding operator equations. But for the solvability conditions of the operator
equations, estimates for the Green functions of linear terms of equations are often
required. In the general case, such estimates are unknown. Because of this, the
existence results were established mainly for semilinear coupled systems of FDEs.
Chapter 13
for a positive r ≤ ∞.
Let us consider in Cn the nonlinear equation
Ax = F (x), (1.1)
Then equation (1.1) has at least one solution x ∈ Ω(r; C^n), satisfying the inequality

||x|| ≤ ||A^{-1}|| l/(1 − q||A^{-1}||). (1.4)
Proof. Set
Ψ(y) = A^{-1}F(y) (y ∈ C^n).
Hence,
So due to the Brouwer Fixed Point Theorem, equation (1.1) has a solution. Moreover, due to (1.3),

||A^{-1}||q < 1.
Put

R(A) = Σ_{k=0}^{n−1} g^k(A)/(√(k!) d_0^{k+1}(A)),

where g(A) is defined in Section 2.3 and d_0(A) is the lower spectral radius; that is, d_0(A) is the minimum of the absolute values of the eigenvalues of A. Let
R(A)(qr + l) ≤ r.
Then equation (1.1) has at least one solution x ∈ Ω(r; C^n), satisfying the inequality

||x|| ≤ R(A) l/(1 − qR(A)).
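As a quick numerical sanity check (an illustration, not from the book), the quantity R(A) bounds the norm of the inverse, ||A^{-1}|| ≤ R(A), with g(A) computed as (N_2^2(A) − Σ_k |λ_k(A)|^2)^{1/2} as in Section 2.3. The test matrix below is an arbitrary choice.

```python
import numpy as np
from math import factorial, sqrt

# Arbitrary invertible 4x4 test matrix (hypothetical example)
A = np.array([[3.0, 1.0, 0.0, 0.5],
              [0.0, 2.0, 1.0, 0.0],
              [0.5, 0.0, 3.0, 1.0],
              [0.0, 0.0, 0.5, 2.0]])
n = A.shape[0]

lam = np.linalg.eigvals(A)
N2 = np.linalg.norm(A, 'fro')                         # Hilbert-Schmidt norm N_2(A)
g = sqrt(max(N2 ** 2 - np.sum(np.abs(lam) ** 2), 0.0))
d0 = np.min(np.abs(lam))                              # lower spectral radius d_0(A)

R = sum(g ** k / (sqrt(factorial(k)) * d0 ** (k + 1)) for k in range(n))
print(np.linalg.norm(np.linalg.inv(A), 2) <= R)       # True
```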
13.2. Essentially nonlinear systems 227
and

θ_r := sup_{z∈Ω(r;C^n)} Σ_{k=0}^{n−1} g^k(A(z))/(√(k!) d_0^{k+1}(A(z))) ≤ r/||f||. (2.3)
Then system (2.1) has at least one solution x ∈ Ω(r; Cn ), satisfying the estimate
||x|| ≤ θ_r ||f||. (2.4)
Rewrite (2.2) as

x = Ψ(x) ≡ A^{-1}(x)f. (2.5)

Due to (2.3),

||Ψ(z)|| ≤ θ_r ||f|| ≤ r (z ∈ Ω(r; C^n)).
So Ψ maps Ω(r; Cn ) into itself. Now the required result is due to the Brouwer
Fixed Point theorem.
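A toy numerical illustration (not from the book) of the map Ψ(x) = A^{-1}(x)f: for a hypothetical state-dependent 2 × 2 matrix A(x) whose dependence on x is weak, simple iteration of Ψ already converges to a solution of A(x)x = f inside the ball.

```python
import numpy as np

# Hypothetical state-dependent matrix; A(x) stays well conditioned on ||x|| <= 1
def A(x):
    return np.array([[2.0 + 0.1 * np.sin(x[0]), 0.2],
                     [0.1 * np.cos(x[1]),       2.0]])

f = np.array([1.0, -0.5])
x = np.zeros(2)
for _ in range(100):
    x = np.linalg.solve(A(x), f)        # x <- Psi(x) = A(x)^{-1} f

residual = np.linalg.norm(A(x) @ x - f)
print(np.linalg.norm(x) <= 1.0, residual < 1e-12)   # stays in the ball, solves (2.1)
```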
Corollary 13.2.2. Let matrix A(z) be normal:
If, in addition,

||f|| ≤ r inf_{z∈Ω(r;C^n)} d_0(A(z)), (2.6)

then system (2.1) has at least one solution x satisfying the estimate

||x|| ≤ ||f|| / inf_{z∈Ω(r;C^n)} d_0(A(z)).
and

ã(z) := min_{j=1,...,n} |a_jj(z)|.
Let

sup_{z∈Ω(r;C^n)} Σ_{k=0}^{n−1} τ^k(A(z))/(√(k!) ã^{k+1}(z)) ≤ r/||f||. (2.8)
Then system (2.7) has at least one solution x ∈ Ω(r; Cn ).
Indeed, this result is due to Theorem 13.2.1, since the eigenvalues of a trian-
gular matrix are its diagonal entries, and
for any constant matrix A_0 (see Section 2.3), in the general case g(A(z)) can be replaced by the simply calculated quantity

v(A(z)) = √(1/2) N_2(A^*(z) − A(z)) = [(1/2) Σ_{j,k=1}^n |a_jk(z) − a_kj(z)|^2]^{1/2}. (2.9)
13.3. Nontrivial steady states 229
ajk (j, k = 1, 2)
are defined on
Ω(r; C2 ) ≡ {z ∈ C2 : kzk ≤ r}.
Due to (2.9)
g(A(z)) ≤ |a21 (z) − a12 (z)|.
In addition, λ1,2 (A(z)) are the roots of the polynomial
y 2 − t(z)y + b(z),
where
t(z) = Trace(A(z)) = a_11(z) + a_22(z)
and
b(z) = det (A(z)) = a11 (z)a22 (z) − a12 (z)a21 (z).
Then
d0 (A(z)) := min |λk (A(z))|.
k=1,2
If
kf kθ̃r ≤ r,
then due to Theorem 13.2.1, system (2.10) has a solution.
Theorem 13.3.1. Let condition (2.3) hold. Then system (3.1) with Fj defined by
(3.2) has at least two solutions: the trivial solution and a nontrivial one satisfying
inequality (2.4).
Corollary 13.3.2. Let matrix A(z) be normal for any z ∈ Ω(r; Cn ) and condition
(2.6) hold. Then system (3.1) has at least two solutions: the trivial solution and a
nontrivial one belonging to Ω(r; Cn ).
Corollary 13.3.3. Let matrix A(z) be upper triangular for any z ∈ Ω(r; Cn ). Then
under condition (2.8), system (3.1) has at least two solutions: the trivial solution
and a nontrivial one belonging to Ω(r; Cn ).
where
a_jk, F_j : Ω(r; C^n) → R (j ≠ k; j, k = 1, ..., n)
are continuous functions. For instance, the coupled system
Σ_{k=1}^n w_jk(u) u_k = f_j (j = 1, ..., n), (4.2)
where fj are given real numbers, wjk : Ω(r; Cn ) → R are continuous functions,
can be reduced to (4.1) with
a_jk(u) ≡ −w_jk(u)/w_jj(u)

and

F_j(u) ≡ f_j/w_jj(u),
provided
w_jj(z) ≠ 0 (z ∈ Ω(r; C^n); j = 1, ..., n). (4.3)
Put
c_r(F) = sup_{z∈Ω(r;C^n)} ||F(z)||.
Denote

V_+(z) :=
( 0   a_12(z)   ...   a_1n(z) )
( 0   0         ...   a_2n(z) )
( .   ...       ...   .       )
( 0   0         ...   0       )

and

V_−(z) :=
( 0          ...   0               0 )
( a_21(z)    ...   0               0 )
( .          ...   ...             . )
( a_n1(z)    ...   a_{n,n−1}(z)    0 ).
In addition, put

J̃_{R^n}(V_±(z)) ≡ Σ_{k=0}^{n−1} N_2^k(V_±(z))/√(k!).
232 Chapter 13. Steady States of Differential Delay Equations
and

c_r(F) < r α_r(Ω_r)

hold. Then system (4.1) has at least one solution u ∈ Ω(r; C^n) satisfying the inequality

||u|| ≤ c_r(F)/α_r(Ω_r).
In addition, let a_jk(z) ≥ 0 and F_j(z) ≥ 0 (j ≠ k; j, k = 1, ..., n) for all z from the ball

{z ∈ R^n : ||z|| ≤ c_r(F)/α_r(Ω_r)}.

Then the solution u of (4.1) is non-negative.
For the proof see [34].
That is, F'(x) is the Jacobian matrix. Rewrite the considered system as

F(y) = h ∈ C^n. (5.1)
and

g̃_0(r) = max_{x∈Ω(r;C^n)} g(F'(x)) < ∞. (5.3)

Finally, put

p(F, r) ≡ Σ_{k=0}^{n−1} g̃_0^k(r)/(√(k!) ρ_0^{k+1}(r)).
||y|| ≤ ||h||/p(F, r).
13.6 Comments
This chapter is based on the papers [22] and [34].
Chapter 14
Multiplicative Representations
of Solutions
We will say that A has a vanishing diagonal if it has a spectral resolution P(t) (a ≤ t ≤ b), and, with the notation

ΔP_k = ΔP_{k,n} = P(t_k^{(n)}) − P(t_{k−1}^{(n)}) (k = 1, ..., n; a = t_0^{(n)} < t_1^{(n)} < ... < t_n^{(n)} = b),

the sums

D_n := Σ_{k=1}^n ΔP_k A ΔP_k

tend to zero in the operator norm as max_k |t_k^{(n)} − t_{k−1}^{(n)}| → 0.
Lemma 14.1.1. Let a bounded linear operator A acting in X have a spectral resolution P(t) (a ≤ t ≤ b) and a vanishing diagonal. Then the sequence of the operators

Z_n = Σ_{k=1}^n P(t_{k−1}) A ΔP_k

converges to A in the operator norm as max_k |t_k^{(n)} − t_{k−1}^{(n)}| tends to zero.
Proof. Thanks to (1.1), ΔP_j A ΔP_k = 0 for j > k. So

A = Σ_{j=1}^n Σ_{k=1}^n ΔP_j A ΔP_k = Σ_{k=1}^n Σ_{j=1}^k ΔP_j A ΔP_k = Z_n + D_n.
That is, the arrow over the symbol of the product means that the indexes of the
co-factors increase from left to right.
Lemma 14.1.2. Let {P̃_k}_{k=0}^n (n < ∞) be an increasing chain of projections in X. That is,

0 = range P̃_0 ⊂ range P̃_1 ⊂ ... ⊂ range P̃_n = X.
Suppose a linear operator W in X satisfies the relation
Hence
W^n = 0, (1.2)
14.1. Preliminary results 237
and thus

W^j = Σ_{2≤k_1<k_2<...<k_j≤n} W_{k_1} W_{k_2} ... W_{k_j}.

Furthermore, we have

∏→_{2≤k≤n} (I + W_k) = I + Σ_{k=2}^n W_k + Σ_{2≤k_1<k_2≤n} W_{k_1} W_{k_2} + ... + W_2 W_3 ... W_n.
as the limit in the operator norm (if it exists) of the sequence of the products

∏→_{1≤k≤n} (I + F(t_k) ΔP_k),

as max_k |t_k^{(n)} − t_{k−1}^{(n)}| tends to zero. In particular,

∫→_{[a,b]} (I + A dP(s))

denotes the limit in the operator norm of the sequence of the products

∏→_{1≤k≤n} (I + A ΔP_k).
Similarly, the left multiplicative integral is defined as the limit in the operator norm (if it exists) of the sequence of the products

∏←_{1≤k≤n} (I + F(t_k) ΔP_k).
Lemma 14.1.3. Let a linear operator A have a continuous spectral resolution P(t) defined on a segment [a, b] and a vanishing diagonal. Then

(I − A)^{-1} = ∫→_{[a,b]} (I + A dP(t)).

Proof. By Lemma 14.1.1, A is the limit in the operator norm of Z_n. Due to Lemma 14.1.2,

(I − Z_n)^{-1} = ∏→_{1≤k≤n} (I + Z_n ΔP_k).
V dP(τ)w(t) = 0 if t ≠ τ, and

(I − P̂(τ))V(I − P̂(τ)) = (I − P̂(τ))V.
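In the finite-dimensional case the multiplicative representation can be checked directly (a sketch, not from the book): for a strictly upper triangular matrix V — a nilpotent operator with a vanishing diagonal with respect to the coordinate projections P_k onto span{e_1, ..., e_k} — the ordered product ∏→(I + V ΔP_k) equals (I − V)^{-1}.

```python
import numpy as np

n = 5
rng = np.random.default_rng(1)
V = np.triu(rng.standard_normal((n, n)), k=1)    # strictly upper triangular: vanishing diagonal

I = np.eye(n)
prod = I.copy()
for k in range(n):
    dP = np.zeros((n, n))
    dP[k, k] = 1.0                               # DeltaP_k = P_k - P_{k-1}
    prod = prod @ (I + V @ dP)                   # indexes increase from left to right

err = np.linalg.norm(prod - np.linalg.inv(I - V))
print(err < 1e-10)   # True: the ordered product equals (I - V)^{-1}
```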
where f ∈ C(0, T ), and 0 ≤ h0 < h1 < ... < hm ≤ η < ∞ are constants, Bk (t)
are continuous matrices and A(t, s) is Riemann integrable in s and continuous in
t. Take the zero initial condition
y(t) = 0, t ≤ 0 (3.2)
where

f_1(t) = ∫_0^t f(t_1) dt_1.

Put

A_1(t, s) := { A(t, s) if s ≤ η; 0 if s > η }.
Then

∫_0^η A(t, s) y(t − s) ds = ∫_0^t A_1(t, s) y(t − s) ds.
Hence,

∫_0^t ∫_0^η A(t_1, s) y(t_1 − s) ds dt_1 = ∫_0^t ∫_0^{t_1} A_1(t_1, t_1 − τ) y(τ) dτ dt_1 = ∫_0^t K_1(t, τ) y(τ) dτ,

where

K_1(t, τ) = ∫_τ^t A_1(t_1, t_1 − τ) dt_1.
Moreover,

∫_0^t B_k(t_1) y(t_1 − h_k) dt_1 = ∫_{h_k}^t B_k(t_1) y(t_1 − h_k) dt_1 = ∫_0^{t−h_k} B_k(τ + h_k) y(τ) dτ = ∫_0^t B̂_k(t, τ) y(τ) dτ,

where

B̂_k(t, τ) := { B_k(τ + h_k) if τ ≤ t − h_k; 0 if τ > t − h_k }.
14.4 Comments
The results of this chapter are probably new.
Appendix A. The General Form
of Causal Operators
The aim of this appendix is to establish the general form of a linear bounded
causal operator acting in space C(0, T ) of continuous real functions defined on a
finite segment [0, T ] with the sup-norm k.kC(0,T ) .
Let ΣT be the σ-algebra of the Borel subsets of [0, T ].
Lemma 15.1.1. Let A be a bounded linear operator acting in C(0, T). Then there is a scalar function m defined on [0, T] × Σ_T, additive and having bounded variation var m(t, ·) with respect to the second argument, such that

[Af](t) = ∫_0^T f(s) m(t, ds) (0 ≤ t ≤ T; f ∈ C(0, T)). (2)
Proof. Let
where m(t, Δ) = (Aχ(Δ))(t). Hence letting maxk |tk − tk−1 | → 0, we get (2).
Clearly,
Σ_{k=1}^n m(t, Δ_k) ≤ ||A||_{C(0,T)} ||Σ_{k=1}^n χ(Δ_k)||_{C(0,T)} = ||A||_{C(0,T)} (t ∈ (0, T)).
Note that a result similar to the preceding lemma is well known [16].
Put ν(t, s) = m(t, [0, s]), so that ν(t, s) = (Aχ[0, s])(t). Now (2) can be written as

[Af](t) = ∫_0^T f(s) d_s ν(t, s). (3)
If, in addition, A is positive then ν(t, s) is non-decreasing in s.
Let us turn now to the causal mappings. For all τ ∈ (0, T ) and w ∈ C(0, T ),
let the projections P_τ be defined by

(P_τ w)(t) = { w(t) if 0 ≤ t ≤ τ; 0 if τ < t ≤ T }.
In addition, PT = I, P0 = 0.
We have Pτ C(0, T ) = C(0, τ ). Clearly, for any w ∈ C(0, T ), the function
Pτ w is in B(0, T ), where B(0, T ) is the Banach space of all bounded functions
defined on [0, T ] with the same sup-norm k.kC(0,T ) . Since C(0, T ) is embedded
into B(0, T ), the equality
Pτ APτ w = Pτ Aw (w ∈ C(0, T ); τ ∈ [0, T ]) (4)
makes sense.
Recall that a bounded linear operator A is said to be causal, if (4) is fulfilled
(see Section 1.8).
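Causality has a transparent discrete analogue (an illustration, not from the book): a Volterra integral operator discretizes to a lower triangular matrix, P_τ becomes a 0/1 diagonal truncation matrix, and condition (4) can be verified directly.

```python
import numpy as np

N = 50
t = np.arange(N) / N
kern = lambda t, s: np.exp(-(t - s))                  # hypothetical kernel
Amat = np.tril(kern(t[:, None], t[None, :])) / N      # [Af](t) ~ sum_{s<=t} kern(t,s) f(s)/N

causal = all(
    np.allclose(P @ Amat @ P, P @ Amat)
    for m in range(N + 1)
    for P in [np.diag((np.arange(N) < m).astype(float))]   # projection P_tau
)
print(causal)   # True: P_tau A P_tau = P_tau A for every truncation
```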
Theorem 15.1.2. Let A be a bounded causal linear operator acting in C(0, T). Then there is a function μ(t, s) defined on [0, T]^2, having bounded variation with respect to the second argument and continuous with respect to the first argument, such that

[Af](t) = ∫_0^t f(s) d_s μ(t, s) (f ∈ C(0, T), 0 ≤ t ≤ T).
Proof. According to (3) we obtain

[P_τ Af](t) = ∫_0^T f(s) d_s ν_τ(t, s)

and

[P_τ A P_τ f](t) = ∫_0^τ f(s) d_s ν_τ(t, s) (0 ≤ τ < T),

where ν_τ(t, s) = 0 for t > τ and ν_τ(t, s) = ν(t, s) for t ≤ τ. Due to (4) we get

∫_0^T f(s) d_s ν_τ(t, s) = ∫_0^τ f(s) d_s ν_τ(t, s).

Take t = τ. Then

∫_0^T f(s) d_s ν_τ(τ, s) = ∫_0^τ f(s) d_s ν_τ(τ, s).

Hence ν_τ(τ, s) = 0 for s > τ. So taking μ(t, s) = ν_t(t, s) we prove the theorem.
This appendix contains the proofs of the results applied in Chapter 7. It deals with
infinite block matrices having compact off diagonal parts. Bounds for the spectrum
are established and estimates for the norm of the resolvent are proposed. The main
tool in this appendix is the so-called π-triangular operators defined below. The appendix is organized as follows. It consists of 7 sections. In this section we define the
π-triangular operators. In Section 16.2 some properties of Volterra operators are
considered. In Section 16.3 we establish the norm estimates and multiplicative rep-
resentation for the resolvents of π-triangular operators. Section 16.4 is devoted to
perturbations of block triangular matrices. Bounds for the spectrum of an infinite
block matrix close to a triangular infinite block matrix are presented in Section
16.5. Diagonally dominant infinite block matrices are considered in Section 16.6.
Section 16.7 contains examples illustrating our results.
16.1 Definitions
Let H be a separable complex Hilbert space, with the norm k.k and unit operator
I. All the operators considered in this appendix are linear and bounded. Recall
that σ(A) and Rλ (A) = (A − λI)−1 denote the spectrum and resolvent of an
operator A, respectively.
Recall also that a linear operator V is called quasinilpotent if σ(V) = {0}. A linear operator is called a Volterra operator if it is compact and quasinilpotent.
In what follows
π = {Pk , k = 0, 1, 2, ...}
is an infinite chain of orthogonal projections Pk in H, such that
0 = P0 H ⊂ P1 H ⊂ P2 H ⊂ ....
and V = A − D. Then
A = D + V, (1.2)
and
DPk = Pk D (k = 1, 2, ...), (1.3)
and
Pk−1 V Pk = V Pk (k = 2, 3, ...); V P1 = 0. (1.4)
Definition 16.1.1. Let relations (1.1)-(1.4) hold with a compact operator V. Then we will say that A is a π-triangular operator, D is a π-diagonal operator and V is a π-Volterra operator.
Besides, relation (1.2) will be called the π-triangular representation of A, and
D and V will be called the π-diagonal part and π-nilpotent part of A, respectively.
and the series converges in the operator norm. Hence, it follows that λ is a regular
point of A.
Conversely let λ 6∈ σ(A). According to the triangular representation (1.2) we
obtain
R_λ(D) = (A − V − λI)^{-1} = R_λ(A)(I − V R_λ(A))^{-1}.
Since V is a π-Volterra operator, for a regular point λ of A the operator V R_λ(A) is a Volterra one due to Lemma 16.2.4. So

(I − V R_λ(A))^{-1} = Σ_{k=0}^∞ (V R_λ(A))^k
Corollary 16.3.2. Let A be a π-triangular operator and let its π-nilpotent part V belong to a norm ideal Y with the property (2.1). Then

||R_λ(A)|| ≤ ζ_Y(||R_λ(D)||, V) := Σ_{k=0}^∞ θ_k |V|_Y^k ||R_λ(D)||^{k+1}
But
|V Rλ (D)|Y ≤ |V |Y kRλ (D)k.
Now the required result is due to (3.1).
Corollary 16.3.2 and inequality (2.3) yield
Corollary 16.3.3. Let A be a π-triangular operator and its π-nilpotent part V ∈ SN_{2p} for some integer p ≥ 1. Then

||R_λ(A)|| ≤ Σ_{k=0}^∞ θ_k^{(p)} N_{2p}^k(V) ||R_λ(D)||^{k+1} (λ ∉ σ(A)),
Note that under the condition V ∈ SN_{2p}, p > 1, inequality (2.4) implies

||R_λ(A)|| ≤ Σ_{j=0}^{p−1} Σ_{k=0}^∞ (N_{2p}^{pk+j}(V)/√(k!)) ||R_λ(D)||^{pk+j+1}. (3.2)
Thanks to the Schwarz inequality, for all x > 0 and a ∈ (0, 1),

[Σ_{k=0}^∞ x^k/√(k!)]^2 = [Σ_{k=0}^∞ a^k · x^k/(a^k √(k!))]^2 ≤ Σ_{k=0}^∞ a^{2k} · Σ_{k=0}^∞ x^{2k}/(a^{2k} k!) = (1 − a^2)^{-1} e^{x^2/a^2}.

In particular, taking a^2 = 1/2, we get

Σ_{k=0}^∞ x^k/√(k!) ≤ √2 e^{x^2}.
where

ζ_p(x, V) := √2 Σ_{j=0}^{p−1} N_{2p}^j(V) x^{j+1} exp[N_{2p}^{2p}(V) x^{2p}] (x > 0). (3.3)
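The inequality obtained above from the Schwarz argument, Σ_{k≥0} x^k/√(k!) ≤ √2 e^{x^2}, is easy to probe numerically (a quick check, not from the book):

```python
from math import sqrt, exp

def lhs(x, terms=150):
    # partial sum of x^k / sqrt(k!), computed iteratively to avoid overflow
    s, term = 0.0, 1.0
    for k in range(1, terms + 1):    # term = x^{k-1}/sqrt((k-1)!)
        s += term
        term *= x / sqrt(k)
    return s

ok = all(lhs(x) <= sqrt(2) * exp(x * x) for x in [0.1, 0.5, 1.0, 2.0, 3.0])
print(ok)   # True
```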
In addition,

∏→_{j≤k≤∞} X_k := lim_{m→∞} ∏→_{j≤k≤m} X_k
+ ... + V_2 V_3 ... V_m.

Here, as above, V_k = V ΔP_k. However,

Σ_{2≤k_1<k_2≤m} V_{k_1} V_{k_2} = Σ_{2≤k_1<k_2≤m} V ΔP_{k_1} V ΔP_{k_2} = Σ_{3≤k_2≤m} V P_{k_2−1} V ΔP_{k_2} = Σ_{3≤k_2≤m} V^2 ΔP_{k_2} = V^2.

Similarly,

Σ_{2≤k_1<k_2<...<k_j≤m} V_{k_1} V_{k_2} ... V_{k_j} = V^j

for j < m. Thus from (3.5) the relation (3.4) follows. The rest of the proof is left to the reader.
Then from the previous theorem the inequality ||R_λ(A)|| ≤ Π(A, λ) follows.
Lemma 16.4.1. Let A_+ be the block triangular matrix defined by (4.1) and V_+ be a compact operator belonging to a norm ideal Y with the property (2.1). Then σ(A_+) = σ(D̃), and

|R_λ(A_+)|_{l^2} ≤ ζ_Y(η_n(λ), V_+) (4.2)

for all regular points λ of D̃. Moreover, for any bounded operator B acting in l^2(C^n) and a μ ∈ σ(B), either μ ∈ σ(D̃), or

q ζ_Y(η_n(μ), V_+) ≥ 1.

In particular, if Y = SN_{2p}, then ζ_Y = ζ_p.
Let

||R_λ(A_kk)|| ≤ φ(ρ(A_kk, λ)) := Σ_{l=0}^{n−1} c_l/ρ^{l+1}(A_kk, λ) (λ ∉ σ(D̃))
and

ρ(D̃, λ) = inf_{k=1,2,...} min_{j=1,...,n} |λ − λ_j(A_kk)|.

We have

η_n(λ) = sup_{j=1,2,...} ||R_λ(A_jj)|| ≤ φ(ρ(D̃, λ))
for all regular points λ of D̃, provided V_+ ∈ SN_{2p}. So for a bounded operator B and μ ∈ σ(B), either μ ∈ σ(D̃) or
If all the diagonal matrices Akk are normal, then g(Akk ) = 0 and φ0 (y) = 1/y.
Relation (4.6) yields
Lemma 16.4.2. Let A_+ be defined by (4.1) and B a linear operator on l^2(C^n). If, in addition, condition (4.3) holds, then for any μ ∈ σ(B), there is a λ ∈ σ(D̃), such that

|λ − μ| ≤ r_p(q, V_+),

where r_p(q, V_+) is the unique positive root of the equation

ζ_p(z, V_+) = 1/q,

and

γ_p(q, V_+) := δ_p(N_{2p}(V_+)/(q√2))/N_{2p}(V_+).

We thus get

r_p(q, V_+) ≤ p_n(γ_p(q, V_+)). (4.13)
à = D̃ + V+ + V−
where V+ is the strictly upper triangular, part, D̃ is the diagonal part and V− is
the strictly lower triangular part of Ã:
0 0 0 0 ...
A21 0 0 0 ...
V− = A31 A32 0 0 ... .
A41 A42 A43 0 ...
. . . ... .
Now we get the main result of this appendix, which is due to (4.6) with B = Ã. Recall that φ_0 is defined in the previous section and ζ_p is defined by (3.3).
Theorem 16.5.1. Let à be defined by (5.1) and condition (4.3) hold. Then for any
μ ∈ σ(Ã), either μ ∈ σ(D̃), or there is a λ ∈ σ(D̃), such that
|λ − μ| ≤ r_p(Ã).
Moreover, (4.13) gives us the bound for rp (Ã) if we take q = |V− |l2 (Cn ) .
Note that in Theorem 16.5.1 it is enough that V+ is compact. Operator V−
can be noncompact.
Clearly, one can exchange V+ and V− .
Lemma 16.6.1. The spectral radius r_s(Ã) of à is less than or equal to the spectral radius of M.
Proof. Let A_jk^{(ν)} and m_jk^{(ν)} (ν = 2, 3, ...) be the entries of Ã^ν and M^ν, respectively. We have

||A_jk^{(2)}||_n = ||Σ_{l=1}^∞ A_jl A_lk||_n ≤ Σ_{l=1}^∞ ||A_jl||_n ||A_lk||_n = Σ_{l=1}^∞ m_jl m_lk = m_jk^{(2)}.

Similarly, we get ||A_jk^{(ν)}||_n ≤ m_jk^{(ν)}.
But for any h = {h_k} ∈ l^2(C^n), we have

|Ãh|^2_{l^2(C^n)} ≤ Σ_{j=1}^∞ (Σ_{k=1}^∞ ||A_jk h_k||_n)^2 ≤ Σ_{j=1}^∞ (Σ_{k=1}^∞ m_jk ||h_k||_n)^2 = |M h̃|^2_{l^2(R)},

where

h̃ = {||h_k||_n} ∈ l^2(R).
16.6. Diagonally dominant block matrices 259
Since

|h|^2_{l^2(C^n)} = Σ_{k=1}^∞ ||h_k||_n^2 = |h̃|^2_{l^2(R)},

we obtain |Ã^ν|_{l^2(C^n)} ≤ |M^ν|_{l^2(R)} (ν = 2, 3, ...). Now the Gel'fand formula for the spectral radius yields the required result.
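A finite-dimensional sketch of the lemma (an illustration, not from the book): build a block matrix from random 2 × 2 blocks, form the majorant M of block norms, and compare spectral radii.

```python
import numpy as np

rng = np.random.default_rng(2)
nb, n = 3, 2                                   # 3x3 grid of 2x2 blocks
blocks = [[rng.standard_normal((n, n)) for _ in range(nb)] for _ in range(nb)]
A_tilde = np.block(blocks)
M = np.array([[np.linalg.norm(blocks[j][k], 2) for k in range(nb)]
              for j in range(nb)])             # majorant of block spectral norms

rs = lambda X: np.max(np.abs(np.linalg.eigvals(X)))   # spectral radius
print(rs(A_tilde) <= rs(M) + 1e-12)            # True
```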
Denote

S_j := Σ_{k=1, k≠j}^∞ ||A_jk||_n.
Theorem 16.6.2. Let A_jj be invertible for all integer j. In addition, let

||A_jj^{-1}||_n^{-1} − S_j ≥ 1 (j = 1, 2, ...). (6.2)
à = D̃ + W = D̃(I + D̃ −1 W ). (6.3)
Clearly,
∞
X
kA−1 −1
jj Ajk kn ≤ Sj kAjj k.
k=1,k6=j
1 − Sj kA−1 −1
jj kn ≥ kAjj kn (j = 1, 2, ...).
Therefore
∞
X
sup kA−1
jj Ajk kn < 1.
j
k=1,k6=j
Then, thanks to Lemma 2.4.8 on the bound for the spectral radius and the previous lemma, the spectral radius r_s(D̃^{-1}W) of the matrix D̃^{-1}W is less than one. Therefore I + D̃^{-1}W is invertible. Now (6.3) implies that à is invertible. This proves the theorem.
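A small numerical illustration of the block diagonal dominance condition (hypothetical data, not from the book): each diagonal block satisfies ||A_jj^{-1}||^{-1} − S_j > 0, and the assembled matrix is indeed invertible.

```python
import numpy as np

blocks = {(0, 0): np.array([[4.0, 1.0], [0.0, 4.0]]),
          (0, 1): np.array([[0.5, 0.0], [0.0, 0.5]]),
          (1, 0): np.array([[0.3, 0.1], [0.0, 0.3]]),
          (1, 1): np.array([[5.0, 0.0], [1.0, 5.0]])}
A_tilde = np.block([[blocks[0, 0], blocks[0, 1]],
                    [blocks[1, 0], blocks[1, 1]]])

# ||A_jj^{-1}||^{-1} - S_j > 0 for each block row j (dominance of diagonal blocks)
dominant = all(
    1.0 / np.linalg.norm(np.linalg.inv(blocks[j, j]), 2)
    - sum(np.linalg.norm(blocks[j, k], 2) for k in range(2) if k != j) > 0
    for j in range(2)
)
invertible = abs(np.linalg.det(A_tilde)) > 1e-12
print(dominant, invertible)   # True True
```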
It should be noted that condition (6.1) implies that the off-diagonal part W of à is compact, since under (6.1) the sequence of the finite dimensional operators

W_l :=
( 0      A_12   A_13   ...   A_1l )
( A_21   0      A_23   ...   A_2l )
( A_31   A_32   0      ...   A_3l )
( .      .      .      ...   .    )
( A_l1   A_l2   A_l3   ...   0    )
16.7 Examples
Let n = 2. Then D̃ is the orthogonal sum of the 2 × 2-matrices

A_kk =
( a_{2k−1,2k−1}   a_{2k−1,2k} )
( a_{2k,2k−1}     a_{2k,2k}   )
(k = 1, 2, ...).

If A_kk are real matrices, then due to the above mentioned inequality g^2(C) ≤ N_2^2(C^* − C)/2, we have

λ_{1,2}(A_kk) = (1/2)(a_{2k−1,2k−1} + a_{2k,2k} ± [(a_{2k−1,2k−1} − a_{2k,2k})^2 + 4a_{2k−1,2k}a_{2k,2k−1}]^{1/2}).

Now we can directly apply Theorem 16.5.1 and Corollary 16.5.2.
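The 2 × 2 formulas are easy to verify numerically (an illustration, not from the book), together with the bound g(A_kk) ≤ |b − c| that follows from (2.9) for a real block [[a, b], [c, d]]:

```python
import numpy as np

a, b, c, d = 1.0, 2.0, 0.5, 3.0           # arbitrary real 2x2 block [[a, b], [c, d]]
A = np.array([[a, b], [c, d]])

# eigenvalues via the quadratic formula
disc = (a - d) ** 2 + 4.0 * b * c
lam = (a + d + np.array([1.0, -1.0]) * np.sqrt(complex(disc))) / 2.0
err = np.max(np.abs(np.sort_complex(lam)
                    - np.sort_complex(np.linalg.eigvals(A).astype(complex))))

# g(A) <= |b - c| for real A, by g^2(C) <= N_2^2(C* - C)/2
N2 = np.linalg.norm(A, 'fro')
g = np.sqrt(max(N2 ** 2 - np.sum(np.abs(np.linalg.eigvals(A)) ** 2), 0.0))
print(err < 1e-10, g <= abs(b - c) + 1e-10)   # True True
```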
Furthermore, let L^2(ω, C^n) be the space of vector valued functions defined on a bounded subset ω of R^m with the scalar product

(f, g) = ∫_ω (f(s), g(s))_{C^n} ds,
where (·, ·)_{C^n} is the scalar product in C^n. Let us consider in L^2(ω, C^n) the matrix integral operator

(Tf)(x) = ∫_ω K(x, s)f(s) ds

with the condition

∫_ω ∫_ω ||K(x, s)||_n^2 dx ds < ∞.
That is, T is a Hilbert-Schmidt operator.
Let {e_k(x)} be an orthonormal basis in L^2(ω, C^n) and

K(x, s) = Σ_{j,k=1}^∞ A_jk e_j(x) e_k(s)

be the Fourier expansion of K, with the matrix coefficients A_jk. Then T is unitarily equivalent to the operator à defined by (1.1). Now one can apply Theorems 16.5.1 and 16.6.2, and Corollary 16.5.2.
This appendix is based on the paper [37].
Bibliography
[12] Bylov, B.F., B.M. Grobman, V.V. Nemyckii and R.E. Vinograd, The Theory
of Lyapunov Exponents, Nauka, Moscow, 1966 (In Russian).
[13] Corduneanu, C., Functional Equations with Causal Operators, Taylor and
Francis, London, 2002.
[14] Daleckii, Yu L. and Krein, M. G. Stability of Solutions of Differential Equa-
tions in Banach Space, Amer. Math. Soc., Providence, R. I., 1974.
[15] Drici, Z., McRae, F.A. and Vasundhara Devi, J. , Differential equations with
causal operators in a Banach space. Nonlinear Anal., Theory Methods Appl.
62, (2005) no.2 (A), 301-313.
[16] Dunford, N. and Schwartz, J.T., Linear Operators, part I, Interscience Pub-
lishers, Inc., New York, 1966.
[17] Feintuch, A., Saeks, R. System Theory. A Hilbert Space Approach. Ac. Press,
New York, 1982.
[18] Gantmakher, F. R. Theory of Matrices. Nauka, Moscow, 1967. In Russian.
[19] Gel’fand, I.M. and Shilov, G.E. Some Questions of Theory of Differential
Equations. Nauka, Moscow, 1958. In Russian.
[20] Gel’fond A.O. Calculations Of Finite Differences, Nauka, Moscow. 1967, In
Russian.
[21] Gil’, M. I. Estimates for norms of matrix-valued functions, Linear and Mul-
tilinear Algebra, 35 (1993), 65-73.
[22] Gil’, M.I. On solvability of nonlinear equations in lattice normed spaces, Acta
Sci. Math., 62, (1996), 201-215.
[23] Gil’, M. I. The freezing method for evolution equations. Communications in
Applied Analysis, 1, (1997), no. 2, 245-256.
[24] Gil’, M.I. Stability of Finite and Infinite Dimensional Systems, Kluwer Aca-
demic Publishers, Boston, 1998.
[25] Gil’, M.I. Perturbations of simple eigenvectors of linear operators,
Manuscripta Mathematica, 100, (1999), 213-219.
[26] Gil’, M.I. On bounded input-bounded output stability of nonlinear retarded
systems, Robust and Nonlinear Control, 10, (2000), 1337-1344.
[27] Gil’, M.I. On Aizerman-Myshkis problem for systems with delay. Automatica,
36, (2000), 1669-1673.
[28] Gil’, M. I. . Existence and stability of periodic solutions of semilinear neutral
type systems, Dynamics Discrete and Continuous Systems, 7, (2001), no. 4,
809-820.
[29] Gil’, M.I. On the ”freezing” method for nonlinear nonautonomous systems
with delay, Journal of Applied Mathematics and Stochastic Analysis 14,
(2001), no. 3, 283-292.
[30] Gil’, M.I., Boundedness of solutions of nonlinear differential delay equations
with positive Green functions and the Aizerman - Myshkis problem, Nonlinear
Analysis, TMA , 49 (2002) 1065-168.
[31] Gil’, M.I. Operator Functions and Localization of Spectra, Lecture Notes In
Mathematics vol. 1830, Springer-Verlag, Berlin, 2003.
[32] Gil’, M.I. Bounds for the spectrum of analytic quasinormal operator pencils
in a Hilbert space, Contemporary Mathematics, 5, (2003), no 1, 101-118
[33] Gil’, M.I. On bounds for spectra of operator pencils in a Hilbert space, Acta
Mathematica Sinica, 19 (2003), no. 2, 313-326
[34] Gil’, M.I., On positive solutions of nonlinear equations in a Banach lattice,
Nonlinear Functional Analysis and Appl., 8 (2003), 581-593 .
[35] Gil’, M.I. Bounds for characteristic values of entire matrix pencils, Linear
Algebra Appl. 390, (2004), 311-320
[36] Gil’, M.I., The Aizerman-Myshkis problem for functional-differential equa-
tions with causal nonlinearities, Functional Differential Equations, 11, (2005)
no 1-2, 445-457
[37] Gil’, M.I. Spectrum of infinite block matrices and π-triangular operators, El.
J. of Linear Algebra, 16 (2007) 216-231
[38] Gil’, M.I. Estimates for absolute values of matrix functions, El. J. of Linear
Algebra, 16 (2007) 444-450
[39] Gil’, M.I. Explicit stability conditions for a class of semi-linear retarded sys-
tems, Int. J. of Control, 322, (2007) no. 2, 322-327.
[40] Gil’, M.I. Positive solutions of equations with nonlinear causal mappings,
Positivity, 11, (2007), no. 3, 523-535.
[41] Gil’, M.I. L2 -stability of vector equations with causal mappings, Dynamic
Systems and Applications, 17 (2008), 201-220.
[42] Gil’, M.I. Estimates for Green’s Function of a vector differential equation with
variable delays, Int. J. Applied Math. Statist, 13 (2008), 50-62.
[43] Gil’, M. I. Estimates for entries of matrix valued functions of infinite matrices.
Math. Phys. Anal. Geom. 11, (2008), no. 2, 175-186.
[44] Gil’, M.I. Inequalities of the Carleman type for Neumann-Schatten operators
Asian-European J. of Math., 1, (2008), no. 2, 203-212.
[45] Gil’, M.I. Upper and lower bounds for regularized determinants, Journal of
Inequalities in Pure and Appl. Mathematics, 9, (2008), no. 1, 1-6.
[46] Gil’, M.I. Localization and Perturbation of Zeros of Entire Functions. Lecture
Notes in Pure and Applied Mathematics, 258. CRC Press, Boca Raton, FL,
2009.
[47] Gil’, M.I., Perturbations of functions of operators in a Banach space, Math.
Phys. Anal. Geom. 13, (2009) 69-82.
[48] Gil’, M. I. Lower bounds and positivity conditions for Green’s functions to
second order differential-delay equations. Electron. J. Qual. Theory Differ.
Equ., (2009) no. 65, 1-11.
[49] Gil’, M.I. L2 -absolute and input-to-state stabilities of equations with nonlin-
ear causal mappings, J. Robust and Nonlinear systems, 19, (2009), 151-167.
[50] Gil’, M.I. Meromorphic functions of matrix arguments and applications Ap-
plicable Analysis, 88 (2009), no. 12, 1727 - 1738
[51] Gil’, M.I. Perturbations of functions of diagonalizable matrices, Electr. J. of
Linear Algebra, 20 (2010) 303-313.
[52] Gil’, M.I. Stability of delay differential equations with oscillating coefficients,
Electronic Journal of Differential Equations, 2010, (2010), no. 11, 1–5.
[53] Gil’, M.I. Norm estimates for functions of matrices with simple spectrum,
Rendiconti del Circolo Matematico di Palermo, 59, (2010) 215-226
[54] Gil’, M.I. Stability of vector functional differential equations with oscillating
coefficients, J. of Advanced Research in Dynamics and Control Systems, 3,
(2011), no. 1, 26–33.
[55] Gil’, M.I. Stability of functional differential equations with oscillating co-
efficients and distributed delays, Differential Equations and Applications, 3
(2011), no. 11, 1–19.
[56] Gil’, M.I. Estimates for functions of finite and infinite matrices. Perturbations
of matrix functions. In: Albert R. Baswell (editor) Advances in Mathematics
Research, 16, Nova Science Publishers, Inc., New York, 2011, pp. 25-90
[57] Gil’, M.I. Ideals of compact operators with the Orlicz norms Annali di Matem-
atica. Pura Appl., Published online, October, 2011.
[58] Gil’, M.I. The Lp - version of the generalized Bohl - Perron principle for vector
equations with delay, Int. J. Dynamical Systems and Differential Equations,
3, (2011) no. 4, 448-458.
[59] Gil’, M.I. The Lp-version of the generalized BohlPerron principle for vector
equations with infinite delay, Advances in Dynamical Systems and Applica-
tions , 6 (2011), no. 2, 177 - 184.
[60] Gil’, M.I. Stability of retarded systems with slowly varying coefficient ESAIM:
Control, Optimisation and Calculus of Variations, published online Sept.
2011.
[61] Gil’, M.I. Stability of vector functional differential equations: a survey, Quaes-
tiones Mathematicae, 35 (2012), 83-131.
[62] Gil’, M.I. Exponential stability of periodic systems with distributed delays,
Asian J. of Control, (accepted for publication)
[63] Gil’, M.I., A. Ailon and B.-H. Ahn., On absolute stability of nonlinear systems
with small delays, Mathematical Problems in Engineering, 4, (1998) 423-435.
[64] Gohberg, I., Goldberg, S. and Krupnik, N. Traces and Determinants of Linear
Operators, Birkhäuser Verlag, Basel, 2000.
[65] Gohberg, I. and Krein, M. G. Introduction to the Theory of Linear Non-
selfadjoint Operators, Trans. Mathem. Monographs, v. 18, Amer. Math. Soc.,
Providence, R. I., 1969.
[66] Gopalsamy, K. Stability and Oscillations in Delay Differential Equations of
Population Dynamics. Kluwer Academic Publishers, Dordrecht, 1992.
[67] Gu, K., V. Kharitonov and J. Chen, Stability of Time-delay Systems,
Birkhauser, Boston, 2003.
[68] Guter, P., Kudryavtsev L. and Levitan, B. Elements of the Theory of Func-
tions, Fizmatgiz, 1963. In Russian.
[69] Halanay, A., Stability Theory of Linear Periodic Systems with Delay, Rev.
Math. Pure Appl. 6(4), (1961), 633 - 653.
[70] Halanay, A. Differential Equations: Stability, Oscillation, Time Lags, Aca-
demic Press, NY, 1966.
[71] Hale, J.K. Theory of Functional Differential Equations, Springer- Verlag, New
York, 1977.
[72] Hale, J.K. and S.M.V. Lunel, Introduction to Functional Differential Equa-
tions, Springer, New York, 1993.
[73] Hewitt, E. and Stromberg, K. Real and Abstract Analysis, Springer Verlag,
Berlin 1969.
[74] Horn R.A and Johnson, C.R. Matrix Analysis, Cambridge Univ. Press, Cam-
bridge, 1985
[75] Insperger, T. and Stepan, G., Stability of the damped Mathieu equation with
time-delay, J. Dynam. Syst., Meas. Control, 125, (2003) no. 2, 166 - 171.
[76] Izobov, N.A. Linear systems of ordinary differential equations. Itogi Nauki i
Tekhniki. Mat. Analis, 12, (1974) 71-146, In Russian.
[77] Kolmanovskii, V. and A. Myshkis, Applied Theory of Functional Differential
Equations, Kluwer, Dordrecht, 1999.
[78] Kolmanovskii, V.B. and Nosov, V.R. Stability of Functional Differential Equa-
tions, Ac Press, London, 1986.
[79] Krasnosel’skii, A. M. Asymptotic of Nonlinearities and Operator Equations,
Birkhäuser Verlag, Basel, 1995.
[80] Krasnosel’skij, M.A., Lifshits, J. and Sobolev A. Positive Linear Systems. The
Method of Positive Operators, Heldermann Verlag, Berlin, 1989.
[81] Krisztin, T. On stability properties for one-dimensional functional-differential
equations, Funkcial. Ekvac. 34 (1991) 241–256.
[82] Kurbatov, V., Functional Differential Operators and Equations, Kluwer Aca-
demic Publishers, Dordrecht, 1999.
[83] Lakshmikantham, V., Leela, S., Drici, Z. and McRae, F. A. Theory of Causal
Differential Equations, Atlantis Studies in Mathematics for Engineering and
Science, 5. Atlantis Press, Paris, 2009.
[84] Lampe, B.P. and Rosenwasser, E.N., Stability investigation for linear peri-
odic time-delayed systems using Fredholm theory, Automation and Remote
Control, 72, (2011) no. 1, 38-60.
[85] Liao, Xiao Xin, Absolute Stability of Nonlinear Control Systems, Kluwer, Dor-
drecht, 1993.
[86] Liberzon, M.R., Essays on the absolute stability theory. Automation and Re-
mote Control, 67, (2006), no. 10, 1610-1644.
[87] Lillo, J.C. Oscillatory solutions of the equation y′(x) = m(x)y(x − n(x)). J.
Differ. Equations 6, (1969) 1-35.
[88] Liu Xinzhi, Shen Xuemin and Zhang, Yi, Absolute stability of nonlinear equa-
tions with time delays and applications to neural networks. Math. Probl. Eng.
7, (2001) no.5, 413-431.
[89] Liz, E., Tkachenko, V. and Trofimchuk, S., A global stability criterion for
scalar functional differential equations, SIAM J. Math. Anal. 35 (2003) 596-
622.
[90] Marcus, M. and Minc, H. A Survey of Matrix Theory and Matrix Inequalities.
Allyn and Bacon, Boston, 1964.
[91] Meyer-Nieberg, P. Banach Lattices, Springer-Verlag, 1991.
[92] Michiels, W. and Niculescu, S.I., Stability and Stabilization of Time-Delay
Systems. An Eigenvalue-Based Approach, SIAM, Philadelphia, 2007.
[93] Mitrinović, D. S., Pecaric, J.E. and Fink, A.M. Inequalities Involving Func-
tions and their Integrals and Derivatives, Kluwer Academic Publishers, Dor-
drecht, 1991.
[111] Yang, Bin and Chen, Mianyun, Delay-dependent criterion for absolute sta-
bility of Lur’e type control systems with time delay. Control Theory Appl. 18,
(2001) no. 6, 929-931.
[112] Yoneyama, Toshiaki, On the stability for the delay-differential equation
ẋ(t) = −a(t)f(x(t − r(t))). J. Math. Anal. Appl. 120, (1986) 271-275.
[113] Yoneyama, Toshiaki, On the 3/2 stability theorem for one-dimensional delay-
differential equations. J. Math. Anal. Appl. 125, (1987) 161-173.
[114] Yoneyama, Toshiaki. The 3/2 stability theorem for one-dimensional delay-
differential equations with unbounded delay. J. Math. Anal. Appl. 165, (1992)
no.1, 133-143.
[115] Zeidler, E. Nonlinear Functional Analysis and its Applications, Springer-
Verlag, New York, 1986.
[116] Zevin, A.A. and Pinsky, M.A., A new approach to the Lur’e problem in
the theory of exponential stability and bounds for solutions with bounded
nonlinearities, IEEE Trans. Autom. Control, 48, (2003) no. 10, 1799-1804.
[117] Zhang, Z. and Wang, Z. Asymptotic behavior of solutions of neutral dif-
ferential equations with positive and negative coefficients, Ann. Differential
Equations 17 (3) (2001) 295–305.
Index
operator
adjoint, 5
closed, 6
negative definite, 6
normal, 7
positive definite, 6
quasinilpotent, 6
selfadjoint, 6
projection, 7
quasinilpotent operator, 6
spectral radius, 5
spectrum, 5
stability in the linear approximation, 178
stability of quasilinear equations, 178
θ(K), 77