Mémoire Final Menasra Amina
ENTITLED :
Presented by :
• Amina Menasra
2020/2021
Thanks
To my dear friends...
Table of contents
4.5 Properties of the pseudospectrum . . . 63
4.6 The condition number, the spectral abscissa and the spectral radius . . . 66
4.7 The pseudoprojection . . . 68
4.8 Matrix functions . . . 69
4.9 Circulant matrices (example) . . . 71
Introduction
The domain of functional analysis forms an important part of applied mathematics, covering topics such as operator equations, the spectrum of operators, and the field of values. For that reason we chose to speak about the latter in our study.
The eigenvalue was one of the most important tools for understanding and solving linear equations, and the appearance of spectral theory, with the investigation of localized vibrations of a variety of different objects, allowed many mathematical and physical problems to be solved.
Hilbert was the first to coin the term "eigenvalue", and to call the set of eigenvalues "the spectrum". His research laid the foundation of the spectral notion and of functional analysis.
But spectral objects can change under small perturbations; in addition, studying the behavior of a non-normal operator through its spectrum alone was neither sufficient nor straightforward. That is what led Trefethen, in 1990, to the concept of the "pseudospectrum", which he applied to plenty of highly interesting problems.
Our work is divided into five chapters. In the first one, we recall some basic concepts and definitions of Hilbert space, the space on which the study is based.
In the third chapter, we study the spectral notion, the resolvent equations, the spectral radius and the spectrum of various classes of operators.
The next chapter is about the pseudospectrum: we give the six definitions of the pseudospectrum, the singular values, and in addition the pseudospectrum of diagonal matrices and other properties.
Chapter 1
Hilbert space
Before defining the Hilbert space, we need some background in linear algebra and real analysis. We begin our study with some notions that are fundamental to the field of functional analysis: vector spaces, normed spaces and inner product spaces.
X × X → X, (x, y) ↦ x + y,
F × X → X, (λ, x) ↦ λx,
and the following conditions are satisfied:
1) Vector addition is commutative: ∀x, y ∈ X,
x + y = y + x,
2) Vector addition is associative: ∀x, y, z ∈ X,
(x + y) + z = x + (y + z),
3) There is a zero vector 0, and each x ∈ X has an additive inverse −x:
−x + x = 0,
4) Scalar multiplication is associative: ∀u, λ ∈ F, ∀x ∈ X,
(λu)x = λ(ux),
5) Scalar multiplication distributes over scalar addition: ∀x ∈ X, ∀u, λ ∈ F,
(λ + u)x = λx + ux,
6) The scalar multiplicative identity acts trivially: ∀x ∈ X,
1.x = x,
7) Scalar multiplication distributes over vector addition: ∀x, y ∈ X, ∀λ ∈ F,
λ(x + y) = λx + λy.
Remark 1.1.2 From now on, the study will specifically consider complex linear spaces.
a. Positivity: for all x ∈ L,
∥x∥ ≥ 0,
b. ∥x∥ = 0 if and only if x = 0,
c. Multiplicativity: for all x ∈ L and λ ∈ C,
∥λx∥ = |λ| ∥x∥,
d. Triangle inequality: for all x, y ∈ L,
∥x + y∥ ≤ ∥x∥ + ∥y∥.
⟨x, x⟩ ≥ 0,
d. Nondegenerate: ⟨x, x⟩ = 0 only if x = 0.
The mapping
⟨·, ·⟩ : X × X → F
such that
⟨x, y⟩ = Σ_{i=1}^n x_i \overline{y_i},
for x = (x₁, . . . , xₙ), y = (y₁, y₂, . . . , yₙ), is an inner product.
Remark 1.1.6 ⋆ From (a) and (b), it follows that ⟨·, ·⟩ is conjugate linear (sesquilinear), meaning that ⟨y, x⟩ equals the complex conjugate of ⟨x, y⟩.
Suppose
δ = ⟨y, x⟩ ≠ 0;
then x ≠ 0 and y ≠ 0. Set
z = δ |δ|⁻¹ y;
then
⟨z, x⟩ = |δ| = |⟨y, x⟩| ≥ 0.
Let
v = x ∥x∥⁻¹ and w = z ∥z∥⁻¹.
Then
∥w∥ = ∥v∥ = 1,
and
⟨v, w⟩ ≥ 0.
Since
∥v − w∥² = ⟨v, v⟩ + ⟨w, w⟩ − 2Re ⟨v, w⟩ = 2 − 2Re ⟨v, w⟩ ≥ 0,
it follows that
⟨v, w⟩ ≤ 1,
so
|⟨y, x⟩| ≤ ∥x∥ ∥y∥.
⋆ Using the distributive property of inner products, we see that for x, y ∈ H,
∥x + y∥² = ⟨x, x⟩ + ⟨x, y⟩ + ⟨y, x⟩ + ⟨y, y⟩,
and, according to the Schwarz inequality,
∥x + y∥² ≤ ∥x∥² + 2 ∥x∥ ∥y∥ + ∥y∥² = (∥x∥ + ∥y∥)²,
so
∥x + y∥ ≤ ∥x∥ + ∥y∥.
We conclude that ∥·∥ is a norm. □
∥x + y∥² + ∥x − y∥² = ⟨x + y, x + y⟩ + ⟨x − y, x − y⟩
= ⟨x, x⟩ + ⟨x, y⟩ + ⟨y, x⟩ + ⟨y, y⟩ + ⟨x, x⟩ − ⟨x, y⟩ − ⟨y, x⟩ + ⟨y, y⟩
= 2(⟨x, x⟩ + ⟨y, y⟩) = 2(∥x∥² + ∥y∥²),
and
(−i ∥x + iy∥² + i ∥x − iy∥²) = 4 Im ⟨y, x⟩. □
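The parallelogram identity derived above is easy to check numerically; a minimal sketch using NumPy (the sample vectors are arbitrary choices):

```python
import numpy as np

# Arbitrary sample vectors in C^3 to illustrate the parallelogram law.
x = np.array([1.0 + 2.0j, -0.5j, 3.0])
y = np.array([0.5, 1.0j, -1.0 + 1.0j])

lhs = np.linalg.norm(x + y) ** 2 + np.linalg.norm(x - y) ** 2
rhs = 2 * (np.linalg.norm(x) ** 2 + np.linalg.norm(y) ** 2)

# The two sides agree up to floating-point rounding.
print(abs(lhs - rhs) < 1e-10)  # True
```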
⟨·, ·⟩ : X × X → C,
is continuous.
proof. Since C and X × X are metric spaces, it suffices to show sequential continuity. Suppose x_n → x and y_n → y. Then, by the Schwarz inequality, ⟨x_n, y_n⟩ → ⟨x, y⟩.
3) Symmetry:
d(x, y) = d(y, x),
4) Triangle inequality:
d(x, z) ≤ d(x, y) + d(y, z).
Definition 1.2.2 Let X be a metric space and let {x_n} be a sequence of points of X.
1) We say that {x_n} is a Cauchy sequence if for every ε > 0 there exists N ∈ N so that i, j ≥ N ⟹ d(x_i, x_j) < ε.
2) We say that {x_n} converges to a point x ∈ X if
lim_{n→+∞} d(x_n, x) = 0.
Definition 1.2.3 (Complete metric space) A metric space X is said to be complete if every Cauchy sequence in X converges to a point in X.
Examples 1.2.6 ⋆ Any finite-dimensional inner product space is a Hilbert space.
⋆ L²(A), for any measurable A ⊂ Rⁿ, with inner product
⟨g, f⟩ = ∫_A g(x) \overline{f(x)} dx.
⋆ l², the space of sequences with Σ_{k=0}^∞ |x_k|² < ∞, with ⟨y, x⟩ = Σ_{k=0}^∞ \overline{y_k} x_k.
⋆ The space X of continuous functions with the mapping
⟨·, ·⟩ : X × X → F
defined by
⟨f, g⟩ = ∫ f(x) \overline{g(x)} dx
for each f, g ∈ X is not a Hilbert space.
Example 1.2.7 The space X = l^p (p ≠ 2) is not a Hilbert space; it suffices to prove that the parallelogram law fails. Let x = (1, 0, 0, 0, ...), y = (0, 1, 0, 0, ...); then ∥x∥ = ∥y∥ = 1, ∥x + y∥ = ∥x − y∥ = 2^{1/p}, and
∥x + y∥² + ∥x − y∥² = 2 · 2^{2/p} ≠ 4 = 2 ∥x∥² + 2 ∥y∥².
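The failure of the parallelogram law in l^p can be observed directly; a small sketch, assuming p = 1 and the vectors x, y of the example:

```python
import numpy as np

# Sketch: the parallelogram law fails in l^p for p != 2 (here p = 1).
p = 1.0
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

def lp_norm(v, p):
    # l^p norm: (sum |v_i|^p)^(1/p)
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

lhs = lp_norm(x + y, p) ** 2 + lp_norm(x - y, p) ** 2  # 2 * 2^(2/p) = 8 for p = 1
rhs = 2 * lp_norm(x, p) ** 2 + 2 * lp_norm(y, p) ** 2  # = 4
print(lhs, rhs)  # 8.0 4.0
```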
∥x_n∥ → d.
Now we should prove that {x_n} is a Cauchy sequence, i.e. that ∥x_n − x_m∥ → 0 when n, m → ∞.
A is a convex set, so for x_n, x_m ∈ A,
∥(x_n + x_m)/2∥ ≥ d,
then
∥x_n + x_m∥ ≥ 2d.
Then, by the parallelogram law,
∥x_n + x_m∥² + ∥x_n − x_m∥² = 2 ∥x_n∥² + 2 ∥x_m∥²,
so
∥x_n − x_m∥² = − ∥x_n + x_m∥² + 2 ∥x_n∥² + 2 ∥x_m∥² ≤ 2 ∥x_n∥² + 2 ∥x_m∥² − 4d².
Now, as ∥x_n∥ → d and ∥x_m∥ → d when m, n → ∞, it follows that ∥x_n − x_m∥ → 0. So {x_n} is Cauchy. As X is complete, there is x ∈ X such that x_n → x, and as the sequence {x_n} is in A and A is closed,
x_n → x ∈ A.
If y is another such point, then, by the same parallelogram argument,
∥x − y∥² ≥ 0 and ∥x − y∥² ≤ 2d² + 2d² − 4d² = 0,
so
∥x − y∥² = 0,
then
x = y. □
Corollary 1.2.10 Let A be a nonempty, closed and convex set in a Hilbert space, and let x₀ ∉ A. Then there is a unique element a ∈ A such that
∥x₀ − a∥ = d(x₀, A).
Definition 1.2.11 Two Hilbert spaces H₁, H₂ are called isomorphic if there exists a bijective linear mapping f : H₁ → H₂ such that ∥f(x)∥ = ∥x∥ for all x ∈ H₁.
1 → 2: By linearity and the isometry property,
∥f(x) + λf(y)∥ = ∥f(x + λy)∥ = ∥x + λy∥.
Then
∥x + λy∥² = ∥x∥² + 2Re(λ ⟨x, y⟩) + |λ|² ∥y∥²,
and
∥f(x) + λf(y)∥² = ∥f(x)∥² + 2Re(λ ⟨f(x), f(y)⟩) + |λ|² ∥f(y)∥².
As
∥f(x) + λf(y)∥² = ∥x + λy∥²,
we get
Re(λ ⟨f(x), f(y)⟩) = Re(λ ⟨x, y⟩).
If F = R, we take λ = 1; if F = C, we take λ = 1 and λ = i. We get that ⟨x, y⟩ and ⟨f(x), f(y)⟩ have the same real part and imaginary part, so
⟨f(x), f(y)⟩ = ⟨x, y⟩.
2 → 1: Let x ∈ X. Using 2,
⟨f(x), f(x)⟩ = ⟨x, x⟩ ⟹ ∥f(x)∥² = ∥x∥² ⟹ ∥f(x)∥ = ∥x∥ for all x ∈ X. □
proof. If the vector x is orthogonal to all vectors (x₁, x₂, ..., xₙ) in the Hilbert space, then it is orthogonal to any linear combination of them:
x ⊥ x_k for all k = 1, 2, ..., n means ⟨x, x_i⟩ = 0 for every i. Let
z = Σ_{i=1}^n λ_i x_i;
then ⟨x, z⟩ is a combination of the ⟨x, x_i⟩, hence ⟨x, z⟩ = 0 and x ⊥ z. □
proof. If x ⊥ y, then ∥x + y∥² = ∥x∥² + ∥y∥² (the Pythagorean theorem).
Theorem 1.3.6 Let {x_n} be an orthogonal sequence in H. Then Σ_{k=0}^∞ x_k converges if and only if Σ_{k=0}^∞ ∥x_k∥² converges.
proof. Note that the convergence of a series Σ_{k=0}^∞ x_k of elements x_i of a Hilbert space H is defined to be the limit of the partial sums Σ_{k=0}^n x_k as n → ∞. In particular, the Cauchy criterion applies, since H is complete. The series converges if and only if for every ε > 0 there exists n₀ ∈ N such that m, n ≥ n₀ imply
∥Σ_{k=m}^{n} x_k∥ ≤ ε.
By the above discussion, Σ_{k=0}^∞ x_k converges if and only if ∥Σ_{k=m}^{n} x_k∥² becomes small for sufficiently large n, m. By the Pythagorean theorem, this term equals Σ_{k=m}^{n} ∥x_k∥². Hence Σ_{k=0}^∞ x_k converges if and only if Σ_{k=0}^∞ ∥x_k∥² converges. □
Notation 1.3.7 If M, N are subspaces of a Hilbert space with M ⊥ N, then M ∩ N = {0}.
A. ∥x − y∥ = min {∥x − z∥ : z ∈ M}.
B. The point y ∈ M closest to x ∈ H is the unique element of M with the property that
(x − y) ⊥ M.
proof. Let d be the distance from x to M.
First, we prove there is a point y ∈ M at which this infimum is attained, meaning that
∥x − y∥ = d.
From the definition of d, there is a sequence of elements y_n ∈ M such that
lim_{n→∞} ∥x − y_n∥ = d,
so
∥x − y_n∥ ≤ d + ϵ, when n ≥ N.
We show that the sequence {y_n} is Cauchy. From the parallelogram law we have
∥y_m − y_n∥² + ∥2x − y_m − y_n∥² = 2 ∥x − y_n∥² + 2 ∥x − y_m∥²,
and, since (y_n + y_m)/2 ∈ M,
∥x − (y_n + y_m)/2∥ ≥ d,
so
∥y_m − y_n∥² = − ∥2x − y_m − y_n∥² + 2 ∥x − y_n∥² + 2 ∥x − y_m∥² ≤ 4(d + ϵ)² − 4d² = 4ϵ(2d + ϵ).
Therefore {y_n} is Cauchy. Since a Hilbert space is complete, there is y such that y_n → y, and since M is closed, we have y ∈ M. The norm is continuous, so
∥x − y∥ = lim_{n→∞} ∥x − y_n∥ = d.
Second, we show uniqueness. If y′ is another minimizer, the parallelogram law gives
∥y − y′∥² = 2 ∥x − y∥² + 2 ∥x − y′∥² − ∥2x − y − y′∥².
Since (y + y′)/2 ∈ M, we have ∥2x − y − y′∥² ≥ 4d²; therefore
∥y − y′∥² ≤ 2d² + 2d² − 4d² = 0, and y = y′. □
Third, we show that the unique y ∈ M found above satisfies the condition that the vector x − y is orthogonal to M. Since y minimizes the distance to x, we have for every λ ∈ C, z ∈ M,
∥x − y∥² ≤ ∥x − y − λz∥².
Suppose that
⟨x − y, z⟩ = |⟨x − y, z⟩| e^{iθ};
choosing λ = ϵe^{iθ} with ϵ > 0 and dividing by ϵ, we get
2 |⟨x − y, z⟩| ≤ ϵ ∥z∥²,
and letting ϵ → 0,
⟨x − y, z⟩ = 0,
so
(x − y) ⊥ M.
Finally, we show that y is the only element in M such that (x − y) ⊥ M. Suppose that y′ is another such element. Then y − y′ ∈ M, and for any z ∈ M we have (x − y) ⊥ z; in particular, by the Pythagorean theorem,
∥x − y′∥² = ∥x − y∥² + ∥y − y′∥²,
which forces y = y′. □
Definition 1.3.11 If M and N are closed subspaces of a Hilbert space, then we define the orthogonal direct sum, or simply sum, M ⊕ N by
M ⊕ N = {y + z : y ∈ M, z ∈ N}.
We may also define the orthogonal direct sum of two Hilbert spaces that are not subspaces of the same space. In particular,
M ⊕ M^⊥ = H.
A family {x_n} is orthonormal when
⟨x_n, x_m⟩ = 0 if n ≠ m, and 1 if n = m.
Proposition 1.4.2 Let A be an orthogonal set of nonzero vectors in a Hilbert space X. Then A is linearly independent.
proof. Let {x₁, x₂, ..., xₙ} be a finite set in A and suppose
Σ_{i=1}^n λ_i x_i = 0;
then
∥Σ_{i=1}^n λ_i x_i∥² = 0.
As
x_i ⊥ x_j (i ≠ j) ⟹ Σ_{i=1}^n |λ_i|² ∥x_i∥² = 0,
and x_i ≠ 0 (∀i), ∥x_i∥ > 0, we get λ_i = 0. □
Theorem 1.4.3 Let {x₁, x₂, ..., xₙ} be orthonormal vectors in a Hilbert space X. Then, ∀x ∈ X:
1) ∥x − Σ_{i=1}^n ⟨x, x_i⟩ x_i∥² = ∥x∥² − Σ_{i=1}^n |⟨x, x_i⟩|².
2) ∥x∥² ≥ Σ_{i=1}^n |⟨x, x_i⟩|².
3) (x − Σ_{i=1}^n ⟨x, x_i⟩ x_i) ⊥ x_j, for each j.
proof. Let {x₁, x₂, ..., xₙ} be orthonormal vectors in a Hilbert space X and let x ∈ X. We have:
1. Expand the square, with λ_i = ⟨x, x_i⟩.
2. ∥x − Σ_{i=1}^n ⟨x, x_i⟩ x_i∥² ≥ 0 ⟹ ∥x∥² ≥ Σ_{i=1}^n |⟨x, x_i⟩|².
3. ⟨x − Σ_{i=1}^n ⟨x, x_i⟩ x_i, x_j⟩ = ⟨x, x_j⟩ − Σ_{i=1}^n ⟨x, x_i⟩ ⟨x_i, x_j⟩. As
⟨x_i, x_j⟩ = 0 for i ≠ j and 1 for i = j,
the sum reduces to ⟨x, x_j⟩, so the inner product vanishes; then
(x − Σ_{i=1}^n ⟨x, x_i⟩ x_i) ⊥ x_j. □
y_n = Σ_{i=1}^n λ_i x_i.
proof. y_n = Σ_{i=1}^n λ_i x_i, y_m = Σ_{i=1}^m λ_i x_i. If m > n, write m = n + k with k ∈ Z⁺; then
y_m = y_n + Σ_{i=n+1}^{n+k} λ_i x_i,
so
y_m − y_n = Σ_{i=n+1}^{n+k} λ_i x_i,
so that
∥y_m − y_n∥² = ∥Σ_{i=n+1}^{n+k} λ_i x_i∥² = Σ_{i=n+1}^{n+k} |λ_i|².
Theorem 1.4.7 Let {x_n} be an orthonormal sequence in a pre-Hilbert space and let x = Σ_i λ_i x_i, y = Σ_i μ_i x_i. Then:
1) ⟨x, y⟩ = Σ_i λ_i \overline{μ_i}.
2) ⟨x, x_k⟩ = λ_k.
3) ∥x∥² = Σ_{i=1}^∞ |λ_i|² = Σ_{i=1}^∞ |⟨x, x_i⟩|².
proof.
2) Taking μ_k = 1 and μ_j = 0 for j ≠ k gives
⟨x, x_k⟩ = λ_k.
3) For x ∈ X,
∥x∥² = ⟨x, x⟩ = Σ_i λ_i \overline{λ_i} = Σ_i |λ_i|² = Σ_{i=1}^∞ |⟨x, x_i⟩|². □
Definition 1.5.1 (total set) Let A be a subset of a pre-Hilbert space. A is called a total set if A^⊥ = {0}; specifically, a sequence {x_n} is called a total sequence if "x ⊥ x_n for all n implies x = 0".
Definition 1.5.3 Let A be a subset of a Hilbert space X. We call A an orthonormal basis of X if A is orthonormal and total.
Chapter 2
Definition 2.1.1 Let X and Y be normed spaces and let M be a subset of X. If it is possible to associate each element x ∈ M with exactly one element y ∈ Y, then we say that we have an operator (mapping) from M to Y, and we write:
T : M → Y, x ↦ T(x).
* We call M, the set on which T is defined, the domain of T.
* If D(T) = X, in this case we write
T : X → Y, x ↦ T(x).
* We call R(T) = T(D(T)) the range of T.
Definition 2.1.2 We say that
T : D(T) → Y
is:
1) Linear if
T(λ₁x₁ + λ₂x₂) = λ₁T(x₁) + λ₂T(x₂), ∀x₁, x₂ ∈ D(T), ∀λ₁, λ₂ ∈ K;
2) Bounded if there is a constant C > 0 such that
∥T x∥ ≤ C ∥x∥ for all x ∈ D(T).
Definition 2.1.4 Let T₁ and T₂ be linear operators with domains D(T₁) and D(T₂) both contained in a linear space X, and ranges R(T₁) and R(T₂) both contained in a linear space Y. Then T₁ = T₂ if, and only if, D(T₁) = D(T₂) and T₁x = T₂x for all x ∈ D(T₁) = D(T₂). If D(T₁) ⊆ D(T₂) and T₁x = T₂x for all x ∈ D(T₁), then T₂ is called an extension of T₁ and T₁ is a restriction of T₂. We shall write T₁ ⊆ T₂.
∥T x∥_W ≤ C ∥x∥_V, (*)
for x ∈ V.
Remark 2.2.2 We can use the linearity of T and the homogeneity of the norm in W to see that
∥T(x / ∥x∥_V)∥_W = ∥T(x)∥_W / ∥x∥_V.
We see that T is bounded, satisfying (*), if and only if
sup_{∥x∥=1} ∥T(x)∥ ≤ C.
Theorem 2.2.3 Let V, W be normed vector spaces and let
T : V → W
be a linear transformation. The following statements are equivalent:
1) T is a bounded linear transformation.
2) T is continuous everywhere in V.
3) T is continuous at 0 in V.
proof (3 ⟹ 1). By continuity at 0, there is δ > 0 with ∥T(u)∥_W ≤ 1 whenever ∥u∥_V ≤ δ. For x ≠ 0,
∥δ x / (2∥x∥_V)∥_V = δ/2 < δ,
then
∥T(δ x / (2∥x∥_V))∥_W ≤ 1.
But by the linearity of T and the homogeneity of the norm we get:
1 ≥ ∥T(δ x / (2∥x∥_V))∥_W = ∥δ T(x) / (2∥x∥_V)∥_W = (δ / (2∥x∥_V)) ∥T x∥_W,
hence ∥T x∥_W ≤ (2/δ) ∥x∥_V, so T is bounded.
Notation 2.2.4 If
T : V → W
is linear, one often writes T x for T(x).
Definition 2.2.5 We denote by L(V, W) the set of all bounded linear transformations T : V → W; L(V, W) forms a vector space, where S + T is the transformation with (S + T)v = Sv + Tv. The operator norm is
∥T∥_{L(V,W)} = ∥T∥_op = sup_{v ≠ 0} ∥T v∥_W / ∥v∥_V.
v = Σ_{j=1}^n a_j v_j,
we have
∥T v∥_W = ∥Σ_{j=1}^n a_j T v_j∥_W
≤ Σ_{j=1}^n |a_j| ∥T v_j∥_W
≤ (max_{k=1,...,n} |a_k|) Σ_{j=1}^n ∥T v_j∥_W.
The expression max_{k=1,...,n} |a_k| defines a norm on V. Since all norms on V are equivalent, there is a constant C₁ such that
∥T v∥_W ≤ C ∥v∥_V
for all v ∈ V, where the constant C is given by
C = C₁ Σ_{j=1}^n ∥T v_j∥_W. □
xy = yx.
∥e∥ = 1.
A Banach algebra is a normed algebra which is complete considered as a
normed space.
Theorem 2.3.3 (B(H), ∥·∥), where ∥T∥ = sup {∥T x∥ : ∥x∥ ≤ 1}, T ∈ B(H), is a Banach algebra with identity, provided that H ≠ {0}.
proof. Since
∥ST x∥ ≤ ∥S∥ ∥T x∥ ≤ ∥S∥ ∥T∥ ∥x∥,
it follows that
∥ST∥ ≤ ∥S∥ ∥T∥,
and B(H) is a Banach space. The operator I is the identity and satisfies ∥I∥ = 1 when H ≠ {0}. □
Remark 2.3.5 There are two types of convergence: weak and strong convergence.
Proposition 2.3.9 If T ∈ B(H) and ∥I − T∥ < 1, then T is invertible and
T⁻¹ = Σ_{k=0}^∞ (I − T)^k.
proof. Write η = ∥I − T∥ < 1. For m < n,
∥Σ_{k=0}^n (I − T)^k − Σ_{k=0}^m (I − T)^k∥ = ∥Σ_{k=m+1}^n (I − T)^k∥
≤ Σ_{k=m+1}^n ∥(I − T)^k∥
≤ Σ_{k=m+1}^n η^k ≤ η^{m+1} / (1 − η).
The sequence of partial sums {Σ_{k=0}^n (I − T)^k}_{n≥1} is Cauchy. If S = Σ_{k=0}^∞ (I − T)^k, then
T S = [I − (I − T)] (Σ_{k=0}^∞ (I − T)^k)
= lim_n [I − (I − T)] (Σ_{k=0}^n (I − T)^k)
= lim_n [I − (I − T)^{n+1}] = I.
Moreover,
∥S∥ = lim_n ∥Σ_{k=0}^n (I − T)^k∥ ≤ lim_n Σ_{k=0}^n ∥I − T∥^k = 1 / (1 − ∥I − T∥). □
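The Neumann series of Proposition 2.3.9 can be illustrated numerically; a sketch, where T = I − N is a random illustrative choice with ∥I − T∥ < 1:

```python
import numpy as np

# Sketch of Proposition 2.3.9: if ||I - T|| < 1, then T^{-1} = sum_k (I - T)^k.
rng = np.random.default_rng(0)
n = 4
N = 0.05 * rng.standard_normal((n, n))  # illustrative small perturbation
T = np.eye(n) - N                       # so I - T = N
assert np.linalg.norm(N, 2) < 1         # hypothesis ||I - T|| < 1

S = np.zeros((n, n))
P = np.eye(n)            # current term (I - T)^k, starting at k = 0
for _ in range(200):     # partial sums of the Neumann series
    S = S + P
    P = P @ N

err = np.linalg.norm(S - np.linalg.inv(T), 2)
print(err < 1e-12)  # True
```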
Theorem 2.3.11 An operator T ∈ B(H) is invertible if, and only if, it is
bounded below and has dense range.
1)
B(x₁ + x₂, y) = B(x₁, y) + B(x₂, y).
2)
B(x, y₁ + y₂) = B(x, y₁) + B(x, y₂).
3)
B(αx, y) = αB(x, y).
4)
B(x, βy) = \overline{β}B(x, y),
for all x, x₁, x₂, y, y₁, y₂ in X and all scalars α, β in C. Thus, B is linear in the first argument and conjugate linear in the second argument. If X is a real vector space, then B is bilinear.
since B is nonnegative. Now let α = t be real and set β = \overline{B(x, y)} / |B(x, y)|. Then
βB(x, y) = |B(x, y)| and β\overline{β} = 1.
Hence,
∥B∥ = sup_{∥x∥=∥y∥=1} |B(x, y)| = sup_{x,y ∈ H, x,y ≠ 0} |B(x, y)| / (∥x∥ ∥y∥),
∥S∥ = ∥B∥.
B(x, y) = (z, y).
Here z is unique but, of course, depends on x ∈ H. Define the mapping S : H → H by Sx = z, x ∈ H. Then
∥B∥ = sup_{x,y ≠ 0} |B(x, y)| / (∥x∥ ∥y∥)
= sup_{x,y ≠ 0} |(Sx, y)| / (∥x∥ ∥y∥)
≤ sup_{x ≠ 0} ∥Sx∥ / ∥x∥
= ∥S∥.
∥B∥ = sup_{x,y ≠ 0} |(Sx, y)| / (∥x∥ ∥y∥)
≥ sup_{x ≠ 0, Sx ≠ 0} |(Sx, Sx)| / (∥x∥ ∥Sx∥)
= sup_{x ≠ 0} ∥Sx∥ / ∥x∥
= ∥S∥.
6)
|B(x, y)| = |B(y, x)|,
where M is a constant, x, x₁, x₂, y, y₁, y₂ are arbitrary elements of H, and α, β are scalars; then B is a bounded sesquilinear functional with ∥B∥ ≤ M.
∥B∥ = sup_{x ≠ 0} |B(x, x)| / ∥x∥²;
on the other hand,
sup_{x ≠ 0} |B(x, x)| / ∥x∥² ≤ sup_{x,y ≠ 0} |B(y, x)| / (∥x∥ ∥y∥) = ∥B∥,
so
∥B∥ = sup_{x ≠ 0} |B(x, x)| / ∥x∥².
pleasant algebraic properties. Moreover, many properties of T can be
studied through the operator T ∗ . It also helps us to study three impor-
tant classes of operators, namely self-adjoint, unitary and normal operators.
These classes have been studied extensively, because they play an important
role in various applications.
T* : H → H,
such that for all x, y ∈ H,
⟨T x, y⟩ = (x, T* y).
The Hilbert space adjoint T* of T in this definition exists, is unique, and is a bounded linear operator with norm
∥T∥ = ∥T*∥.
proof. The formula
B(y, x) = (y, T x), x, y ∈ H,
defines a bounded sesquilinear form on H × H, because the inner product is a sesquilinear form and T is a bounded linear operator. Indeed, for y₁, y₂, x in H and α, β scalars,
B(αy₁ + βy₂, x) = (αy₁ + βy₂, T x)
= α(y₁, T x) + β(y₂, T x)
= αB(y₁, x) + βB(y₂, x),
and similarly in the second argument. Moreover, B is bounded:
∥B∥ = sup_{x,y ≠ 0} |(y, T x)| / (∥y∥ ∥x∥)
≥ sup_{x ≠ 0, Tx ≠ 0} |(T x, T x)| / (∥T x∥ ∥x∥)
= ∥T∥;
since also |(y, T x)| ≤ ∥y∥ ∥T∥ ∥x∥ by the Schwarz inequality, we conclude that
∥B∥ = ∥T∥.
From the representation theorem for bounded sesquilinear forms, we have
B(y, x) = (T* y, x),
where we have written T* for the operator S of that theorem. The operator T* : H → H is a uniquely defined bounded linear operator with norm
∥B∥ = ∥T∥ = ∥T*∥.
Then we note that
(y, T x) = (T* y, x),
and we conclude
(T x, y) = (x, T* y). □
2)
(T S)∗ = S ∗ T ∗ .
3)
(S ∗ )∗ = S.
4) If S is invertible in B(H) and S⁻¹ is its inverse, then S* is invertible and
(S*)⁻¹ = (S⁻¹)*.
5)
∥S* S∥ = ∥S S*∥ = ∥S∥².
hence (ST )∗ = T ∗ S ∗
3)For x, y ∈ H
(x, I ∗ y) = (Ix, y)
= (x, y)
= (x, Iy).
∥SS ∗ ∥ = ∥S∥2
(ab)* = b* a*.
An algebra with an involution is called a *-algebra. A normed algebra with an involution is called a normed *-algebra. A Banach algebra A with an involution satisfying ∥a a*∥ = ∥a∥² is called a C*-algebra.
Theorem 2.5.5 If T ∈ B(H) is such that T and T ∗ are both bounded below,
then T is invertible.
proof. If T* is bounded below, then Ker(T*) = {0}. Using the last theorem, [ran(T)]^⊥ = Ker(T*) = {0}, which implies
\overline{ran(T)} = ([ran(T)]^⊥)^⊥ = {0}^⊥ = H.
Thus, ran(T) is dense in H, and, T being bounded below as well, T is invertible. □
1) T is self-adjoint (Hermitian) if
T = T*.
2) T is unitary if
T* = T⁻¹.
3) T is normal if
T T* = T* T.
Remark 2.6.2 In the analogy between the adjoint and the conjugate, Hermitian operators become analogues of real numbers, and unitaries are the analogues of complex numbers of absolute value 1. Normal operators are the true analogues of complex numbers. Note that
T = (T + T*)/2 + i (T − T*)/(2i),
where (T + T*)/2 and (T − T*)/(2i) are self-adjoint, and
T* = (T + T*)/2 − i (T − T*)/(2i).
The operators (T + T*)/2 and (T − T*)/(2i) are called the real and imaginary parts of T.
If T is self-adjoint or unitary, then T is normal. However, a normal operator need not be self-adjoint or unitary. First note that I, the identity operator in B(H), is self-adjoint. For the operator T = 2iI we have
T* = −2iI,
so
T T* = 4I = T* T,
but
T* ≠ T
and
T⁻¹ = −(1/2) iI ≠ T*,
so T is normal but neither self-adjoint nor unitary. Similarly, the matrix
T = ( 0 −1 ; 1 0 )
satisfies T* = −T ≠ T.
proof. Clearly, T is a bounded linear operator. It is enough to show that T* = T. Then
∥(T_n)* − T*∥ = ∥(T_n − T)*∥ = ∥T_n − T∥.
Therefore,
T* = lim_n (T_n)* = lim_n T_n = T.
B is Hermitian; hence
∥B∥ = sup {|(T x, x)| / ∥x∥² : x ∈ H, x ≠ 0}
= sup {|(T x, x)| : x ∈ H, ∥x∥ ≤ 1}.
Proposition 2.6.8 If H is a complex Hilbert space and T ∈ B(H) is such that (T x, x) = 0 for all x ∈ H, then T = 0.
proof. For all x, y ∈ H, the following (polarization) equality is easily verified:
(T x, y) = (1/4) [(T(x + y), x + y) − (T(x − y), x − y) + i(T(x + iy), x + iy) − i(T(x − iy), x − iy)].
Since (T x, x) = 0 for all x ∈ H, it follows that (T x, y) = 0 for all x, y ∈ H. Setting y = T x, we obtain ∥T x∥² = 0, that is, T x = 0 for all x ∈ H. Consequently, T = 0. □
T₁ ≤ T₂ ≤ .... ≤ αI,
where α is a real number. Then {T_n}_{n≥1} is strongly convergent.
2.7 Normal, Unitary and Isometric Opera-
tors
The true analogues of complex numbers are the normal operators. The fol-
lowing Theorem gives a characterisation of these operators.
T₁T₂ = T₂T₁, where T₁ = (T + T*)/2 and T₂ = (T − T*)/(2i).
proof. If x ∈ H, then
∥T x∥² − ∥T* x∥² = (T x, T x) − (T* x, T* x)
= (T* T x, x) − (T T* x, x)
= ((T* T − T T*) x, x).
Theorem 2.7.2 Let T ∈ B(H) satisfy
T* T = T T*.
Then
∥T∥^k = ∥T^k∥.
proof. From the definition of ρ and the definition of the norm, it follows that
∥T∥^p = ∥T^p∥ for p = 2ⁿ, n = 1, 2, 3, ...
So
∥T∥ = ∥T^p∥^{1/p} ≤ (2^p ρ(T)^p)^{1/p} (proof by induction) = 2^{1/p} ρ(T).
When p → ∞, we get
∥T∥ ≤ ρ(T);
thus, since always ρ(T) ≤ ∥T∥,
∥T∥ = ρ(T). □
(T* T x, x) = (T x, T x)
= ∥T x∥²
= ∥x∥²
= (x, x).
This implies
T* T = I.
(b) implies (c):
(T x, T y) = (T* T x, y) = (x, y).
(c) implies (a): this follows on taking
y = x.
S = U T U⁻¹ = U T U*.
Chapter 3
Spectral Theory
Examples 3.1.2 (i) Let
T = I = ( 1 0 0 ; 0 1 0 ; 0 0 1 ).
If λ ∈ σ(T), then
det(T − λI) = det ( 1−λ 0 0 ; 0 1−λ 0 ; 0 0 1−λ ) = (1 − λ)³ = 0,
then
(1 − λ)³ = 0 ⟹ 1 − λ = 0 ⟹ λ = 1.
Then
σ(T) = {1}.
(ii) For an n × n matrix T, λI − T is not invertible if and only if det(λI − T) = 0. Thus, in the finite-dimensional case, σ(T) is just the set of eigenvalues of T (since det(λI − T) is an nth-degree polynomial whose roots are the eigenvalues of T).
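This finite-dimensional characterization can be checked on a small example; the triangular matrix below is an arbitrary illustration:

```python
import numpy as np

# For an n x n matrix T, sigma(T) is the set of roots of det(lambda I - T).
# For a triangular T, these are the diagonal entries: (l - 2)(l - 3) = 0.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
eigvals = np.linalg.eigvals(T)
print(np.allclose(sorted(eigvals.real), [2.0, 3.0]))  # True
```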
Definition 3.1.4 a) The point spectrum (eigenspectrum, set of eigenvalues) of T ∈ B(H) is defined to be the set
σ_p(T) = {λ ∈ C : ker(λI − T) ≠ {0}},
i.e. there is x ∈ H (x ≠ 0) with T x = λx.
c) In the former case, λ is said to belong to the compression spectrum σ_com(T) of T, and in the latter case λ is said to belong to the approximate point spectrum σ_ap(T) of T. In other words,
σ_ap(T) = {λ ∈ C : there is a sequence {x_n}_{n≥1} such that ∥x_n∥ = 1 for every n and ∥(λI − T)x_n∥ → 0}.
σ_p(T) = σ(T).
proof. We have,
R(λ, T ) − R(µ, T ) = (λI − T )−1 − (µI − T )−1
= (λI − T )−1 [(µI − T ) − (λI − T )] (µI − T )−1
= −(λ − µ)R(λ, T )R(µ, T ).
□
Theorem 3.2.2 Let T ∈ B(H). The resolvent set ρ(T) of T is open, and the map λ → R(λ, T) = (λI − T)⁻¹ from ρ(T) ⊆ C to B(H) is strongly holomorphic in the sense of the Definition, vanishing at ∞. For each x, y ∈ H, the map λ → (R(λ, T)x, y) = ((λI − T)⁻¹x, y) ∈ C is holomorphic on ρ(T), vanishing at ∞.
Lemma 3.2.7 For T ∈ B(H), lim_{n→∞} ∥Tⁿ∥^{1/n} exists and equals inf_n ∥Tⁿ∥^{1/n}. Moreover,
0 ≤ inf_n ∥Tⁿ∥^{1/n} ≤ ∥T∥,
and
r(T) = ∥T∥
for a normal operator T.
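The equality r(T) = ∥T∥ for normal operators, and its failure for non-normal ones, can be sketched numerically (both test matrices are illustrative choices):

```python
import numpy as np

# Sketch: for a normal matrix the spectral radius equals the 2-norm;
# for a non-normal one it can be strictly smaller.
A = np.diag([1.0, -2.0, 0.5])            # normal (diagonal)
J = np.array([[0.0, 1.0], [0.0, 0.0]])   # non-normal Jordan block

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

print(np.isclose(spectral_radius(A), np.linalg.norm(A, 2)))   # True: r(A) = ||A|| = 2
print(float(spectral_radius(J)), float(np.linalg.norm(J, 2))) # 0.0 1.0, so r(J) < ||J||
```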
λ⁻¹ ∈ σ(T⁻¹) ⟺ (λ⁻¹I − T⁻¹) is not invertible
⟺ −λ⁻¹T⁻¹(λI − T) is not invertible
⟺ (λI − T) is not invertible
⟺ λ ∈ σ(T).
Then
σ(T⁻¹) = {λ⁻¹ : λ ∈ σ(T)}. □
Theorem 3.4.3 Let B(H) denote the algebra of bounded linear operators
on a complex Hilbert space H. Suppose T ∈ B(H) satisfies the equality
T T ∗ = T ∗ T, i.e. T is normal. Then,
a)
σ_p(T*) = {\overline{λ} : λ ∈ σ_p(T)}.
(b) eigenvectors corresponding to distinct eigenvalues, if any, are orthogonal.
Chapter 4
Pseudospectrum of a matrix
A = ( −9 11 −21 63 −25
70 −69 141 −421 168
−575 575 −1449 3451 −1380
3891 3891 7782 −23345 9336
1024 −1024 2084 −6144 2457 ).
The characteristic polynomial of A is
P(z) = z⁵,
so the spectrum of A reduces to 0:
σ(A) = {0}.
If we compute the eigenvalues of A with MATLAB, however, we obtain five eigenvalues
z_k = 0.0407 e^{2kπi/5}, k ∈ {0, ..., 4}.
The spectrum of A calculated with MATLAB is totally different from the theoretical spectrum of A.
Another problem posed in calculating eigenvalues is their sensitivity to perturbations. We consider the matrix B = (b_ij) defined by
B = P D P⁻¹, P = (p_ij),
where
p_ij = 1 / (i + j).
The spectrum of B is strictly included in the unit disk, i.e. ρ(B) < 1, where ρ(B) is the spectral radius of the matrix B; the matrix is therefore stable. We perturb the entry b₁₅ by inserting a relative error of 2·10⁻⁴. Computing the eigenvalues of the perturbed matrix B̃, we find that B̃ is no longer stable: we see the effect of that small perturbation on the stability of the matrix.
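The sensitivity phenomenon described above can be reproduced with a simple illustrative matrix (a nilpotent Jordan block, not the matrix A or B of the text): its exact spectrum is {0}, yet a perturbation of size 10⁻¹⁰ in one entry moves the eigenvalues onto a circle of radius about (10⁻¹⁰)^{1/5} ≈ 0.01:

```python
import numpy as np

# Illustrative 5x5 Jordan block: characteristic polynomial z^5, spectrum {0}.
n, delta = 5, 1e-10
J = np.diag(np.ones(n - 1), k=1)
Jp = J.copy()
Jp[-1, 0] = delta   # tiny perturbation in the corner entry

print(float(max(abs(np.linalg.eigvals(J)))))   # 0.0
print(float(max(abs(np.linalg.eigvals(Jp)))))  # roughly 0.01 = (1e-10)^(1/5)
```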
Definition 4.1.1 (the norm of the resolvent) Let A ∈ C^{n×n} and ϵ > 0. Then the ϵ-pseudospectrum σ_ϵ of A is
σ_ϵ = {z ∈ C : ∥(z − A)⁻¹∥ > ϵ⁻¹}.
In words, the ϵ-pseudospectrum is the open subset of the complex plane on which the norm of the resolvent exceeds ϵ⁻¹.
σ_ϵ = {z ∈ C : z ∈ σ(A + E) for some E ∈ C^{n×n} with ∥E∥ < ϵ}.
In words, the ϵ-pseudospectrum is the set of numbers that are eigenvalues of some perturbed matrix A + E with ∥E∥ < ϵ.
Remark 4.1.4 Let A ∈ C^{n×n}. If ∥·∥ = ∥·∥₂, the norm of a matrix is its largest singular value, and the norm of its inverse is the inverse of its smallest singular value. In particular,
∥(z − A)⁻¹∥₂ = [s_min(z − A)]⁻¹,
where s_min(z − A) denotes the smallest singular value of (z − A); from this we get the fourth definition.
Definition 4.1.5 (singular values) Let A ∈ C^{n×n} and ϵ > 0. Then the ϵ-pseudospectrum σ_ϵ of A is
σ_ϵ = {z ∈ C : s_min(z − A) ≤ ϵ}.
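Definition 4.1.5 gives a direct way to test whether a point z belongs to σ_ϵ(A): compute the smallest singular value of zI − A. A minimal sketch (the matrix and the probe points are illustrative):

```python
import numpy as np

# Membership test for the eps-pseudospectrum: z in sigma_eps(A) iff
# s_min(zI - A) <= eps.
A = np.diag(np.ones(4), k=1)   # 5x5 nilpotent Jordan block, sigma(A) = {0}
eps = 1e-3

def smin(z, A):
    # smallest singular value of zI - A
    return np.linalg.svd(z * np.eye(A.shape[0]) - A, compute_uv=False)[-1]

# z = 0 is an eigenvalue, so it lies in every eps-pseudospectrum...
print(smin(0.0, A) <= eps)  # True
# ...and the point z = 0.2 is still inside, even though it is far from {0}:
print(smin(0.2, A) <= eps)  # True
```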
Definition 4.2.3 (matrix norms) Let A = (a_ij) ∈ C^{n×n}. Then we have
∥A∥₁ = max_{1≤j≤n} Σ_{i=1}^n |a_ij|,
and
∥A∥_∞ = max_{1≤i≤n} Σ_{j=1}^n |a_ij|.
Definition 4.2.4 (the spectral norm (2-norm)) Let A ∈ C^{n×n}. Then the spectral norm is defined by
∥A∥₂ = sup_{x ≠ 0} ∥A x∥₂ / ∥x∥₂.
Proposition 4.2.5 The spectral norm of A is the largest singular value of A; that is, ∥A∥₂ = √α, where α is the largest eigenvalue of A A*.
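Proposition 4.2.5 can be verified numerically; a small sketch with an arbitrary example matrix:

```python
import numpy as np

# Sketch of Proposition 4.2.5: ||A||_2 = sqrt(alpha), where alpha is the
# largest eigenvalue of A A*.
A = np.array([[1.0, 2.0], [0.0, 3.0]])
alpha = max(np.linalg.eigvals(A @ A.conj().T).real)
print(np.isclose(np.sqrt(alpha), np.linalg.norm(A, 2)))  # True
```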
Lemma 4.2.6 Let A ∈ C^{n×n}, ϵ > 0, and v ∈ Cⁿ with ∥v∥ = 1. If ∥(A − zI) v∥ ≤ ϵ, then there are ϵ′ with 0 < ϵ′ ≤ ϵ and u ∈ Cⁿ with ∥u∥ = 1 such that
(A − zI) v = ϵ′ u,
with
ϵ′ = ∥w∥
and
u = w / ∥w∥,
where w = (A − zI) v. We have
∥u∥ = 1.
In addition, we have
∥(A − zI) v∥ ≤ ϵ,
then
∥ϵ′ u∥ ≤ ϵ,
and that implies
ϵ′ ∥u∥ ≤ ϵ,
then
ϵ′ ≤ ϵ. □
∥(z − A)⁻¹∥ = ∥(z − A)⁻¹ u₀∥ / ∥u₀∥, u₀ ≠ 0,
for a maximizing vector u₀. We set
w = (z − A)⁻¹ u₀;
we get
u₀ = (z − A) w,
and
ε⁻¹ ≤ ∥(z − A)⁻¹∥ = ∥(z − A)⁻¹ u₀∥ / ∥u₀∥ = ∥w∥ / ∥(z − A) w∥.
Then
∥(z − A) w∥ / ∥w∥ ≤ ε;
in addition,
∥(z − A) v∥ ≤ ε,
with
v = w / ∥w∥
and ∥v∥ = ∥w∥ / ∥w∥ = 1.
3 ⟹ 2: We suppose that z ∈ σ(A + E) with ∥E∥ ≤ ϵ. Then (A + E)v = zv, v ≠ 0; dividing by ∥v∥, we get
(A + E) (v / ∥v∥) = z (v / ∥v∥),
then
(A + E) w = z w,
where
w = v / ∥v∥,
which implies
E w = (zI − A) w, ∥w∥ = 1,
then
w = (zI − A)⁻¹ E w.
proof. The matrix A has the form A = U T U* (Schur decomposition), where T is an upper triangular matrix. Then
A A* = (U T U*)(U T U*)* = (U T U*)(U T* U*) = U T T* U*,
and
A* A = (U T U*)*(U T U*) = U T* T U*.
Then
A A* = A* A
if and only if
T T* = T* T.
A normal upper triangular T is diagonal, so A = U T U*, where U is unitary and T is diagonal. □
proof. 1) Let z ∈ σ_{ϵ₁}(A); then s_min(z − A) ≤ ϵ₁ ≤ ϵ₂, so z ∈ σ_{ϵ₂}(A).
2) A₁ ⊕ A₂ is block diagonal. The singular values of (zI − A₁ ⊕ A₂) are the singular values of the blocks (zI − A_i); using the fourth definition of the pseudospectrum, we get the result.
3) Let z ∈ σ_ϵ(A + F); then z ∈ σ(A + F + E) for some ∥E∥ ≤ ϵ. We have ∥F + E∥ ≤ ∥F∥ + ∥E∥, so z ∈ σ_{ϵ+∥F∥}(A).
4) Let z ∈ σ_{ϵ₁}(A) + σ_{ϵ₂}(A) with ϵ₁ ≤ ϵ₂; then z = z₁ + z₂, where z₁ ∈ σ_{ϵ₁}(A) and z₂ ∈ σ_{ϵ₂}(A). We suppose that u₁ is the pseudo-eigenvector of A corresponding to z₁, with ∥u₁∥ = 1. Then (A + E₁)u₁ = z₁u₁, ∥E₁∥ ≤ ϵ₁, and (A + E₂)u₁ = z₂u₁ + w₂, ∥E₂∥ ≤ ϵ₂, w₂ ∈ Cⁿ. Thus
z₁u₁ + z₂u₁ = z u₁,
and then
∥E₁ + E₂ − w₂ u₁*∥ ≤ 2ϵ₁ + 2ϵ₂ + |z₁ − z₂|,
so
σ_{ϵ₁}(A) ⊆ σ_{ϵ₂}(A).
We can take z₁ = z₂, and the result follows.
proof. Let q be the midpoint of the segment [p₁, p₂], and let z be the complex number corresponding to q. We have z₁u₁ = (A + E₁)u₁, ∥E₁∥ ≤ ϵ₁, and z₂u₂ = (A + E₂)u₂, ∥E₂∥ ≤ ϵ₂, with ∥u₁∥ = ∥u₂∥ = 1.
Then
z u₁ = ((z₁ + z₂)/2) u₁ = (1/2) [(A + E₁)u₁ + (A + E₂)u₂ + w₁].
Then
z u₁ = (A + (E₁ + E₂ + w₁u₁*)/2) u₁,
with
∥E∥ ≤ (ϵ₁ + ϵ₂ + ∥w₁∥)/2. □
∥E∥ = ∥η u φ*∥
≤ η ∥u∥ ∥φ*∥
≤ η ≤ ϵ.
And
u* E = u* (zI − A),
which implies
u* (A + E) = z u*.
Then
z ∈ σ_ϵ(A). □
Theorem 4.6.1 Let A ∈ C^{n×n}, ϵ > 0. Then:
1) If z ∈ σ_ϵ(e^{tA}), t ≥ 0, and ∥e^{tA}∥ ≤ |z| (resp. ∥e^{tA}∥ ≥ |z|), then
k(V) Σ_{n=0}^∞ e^{tnα(A)} / |z^{n+1}| ≥ ϵ⁻¹ (resp. k(V) Σ_{n=0}^∞ |zⁿ| / e^{t(n+1)α(A)} ≥ ϵ⁻¹).
2) Similarly,
k(V) Σ_{n=0}^∞ ρ(A)ⁿ / |z^{n+1}| ≥ ϵ⁻¹ (resp. k(V) Σ_{n=0}^∞ |zⁿ| / ρ(A)^{n+1} ≥ ϵ⁻¹).
3) If z ∈ σ_ϵ(f(A)) with ∥f(A)∥ ≤ |z| (resp. ∥f(A)∥ ≥ |z|), then
k(V) Σ_{n=0}^∞ ∥f∥ⁿ_{σ_ϵ(A)} / |z^{n+1}| ≥ ϵ⁻¹ (resp. k(V) Σ_{n=0}^∞ |zⁿ| / ∥f∥^{n+1}_{σ_ϵ(A)} ≥ ϵ⁻¹).
In each case one obtains
k(V) Σ_{n=0}^∞ ∥A∥ⁿ / |z^{n+1}| ≥ ϵ⁻¹ if ∥A∥ ≤ |z|, and k(V) Σ_{n=0}^∞ |zⁿ| / ∥A∥^{n+1} ≥ ϵ⁻¹ if ∥A∥ ≥ |z|,
and using the first introduction of this section we get the result. □
σ_ϵ(A) ⊆ σ_{r_ϵ}(αₙ Iₙ),
where Iₙ is the n × n identity matrix.
proof. Let z_k ∈ ∂σ_ϵ(A), k ∈ {1, 2, ..., n}, where ∂σ_ϵ(A) is the boundary of σ_ϵ(A). We choose αₙ to be the barycenter of the system {z_k}, k ∈ {1, 2, ..., n}, and
r_ϵ = sup_{z_k ∈ ∂σ_ϵ(A)} |αₙ − z_k|.
Remark 4.6.4 The role of this theorem is not to prove the existence of a disk containing the pseudospectrum, which is already assured by the compactness of the pseudospectrum, but to exhibit such a disk explicitly.
λS = {λs : s ∈ S}, λ ∈ R, S ⊂ P(C).
2)
f(λP) = λf(P).
3)
f² = f, where f² = f ∘ f.
2)
⟨λσ_ϵ(A)⟩ = D(λα, λr_ϵ) = λD(α, r_ϵ) = λ⟨σ_ϵ(A)⟩, λ ∈ C.
3)
⟨⟨σ_ϵ(A)⟩⟩ = ⟨D(α, r_ϵ)⟩ = ⟨σ_{r_ϵ}(αI)⟩ = D(α, r_ϵ) = ⟨σ_ϵ(A)⟩. □
4.8 Matrix functions
proof. We have
∥f(A)∥ = ∥(1/2πi) ∮_{∂D(α,r_ϵ)} f(z)(zI − A)⁻¹ dz∥
≤ (1/2π) ∮_{∂D(α,r_ϵ)} |f(z)| ∥(zI − A)⁻¹∥ |dz|
≤ (1/2πϵ) ∮_{∂D(α,r_ϵ)} |f(z)| |dz|
≤ (1/2πϵ) sup_{z ∈ σ_ϵ(A)} |f(z)| ∮_{∂D(α,r_ϵ)} |dz|
≤ (r_ϵ/ϵ) sup_{z ∈ σ_ϵ(A)} |f(z)|. □
Theorem 4.8.4 Let A ∈ C^{n×n}, ϵ > 0. Then
∥e^{tA}∥ ≤ (r_ϵ/ϵ) e^{t α_ϵ(A)}.
Definition 4.9.1 A circulant matrix is a square matrix in which each row vector is rotated one element to the right relative to the preceding row vector:
C = ( c₀ c₁ c₂ ... c_{n−1}
c_{n−1} c₀ c₁ ... c_{n−2}
c_{n−2} c_{n−1} c₀ ... c_{n−3}
...
c₁ c₂ ... c_{n−1} c₀ ),
where the coefficients are complex numbers. The circulant matrix is a special case of a Toeplitz matrix, and it is denoted C = circ(c₀, c₁, ..., c_{n−1}).
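The construction by row rotation, together with the standard fact that the eigenvalues of a circulant matrix are the discrete Fourier transform values of its first row, can be sketched as follows (the coefficients are arbitrary):

```python
import numpy as np

# Build circ(c0, ..., c_{n-1}) by rotating each row one step to the right,
# then compare its eigenvalues with the DFT of the first row.
c = np.array([1.0, 2.0, 3.0, 4.0])
n = len(c)
C = np.array([np.roll(c, k) for k in range(n)])  # row k = first row shifted right k times

eig = np.linalg.eigvals(C)
dft = np.fft.fft(c)
# Each DFT value matches one eigenvalue (set equality up to rounding).
ok = all(np.min(np.abs(eig - d)) < 1e-9 for d in dft)
print(ok)  # True
```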
Proposition 4.9.3 Let A and B be two circulant matrices. Then:
1) A + B is a circulant matrix.
2) AB is a circulant matrix.
3) A* is a circulant matrix.
p(z) = 1 + 2z + 3z².
Then the spectrum is the set of values p(ω_k), where the ω_k are the nth roots of unity.
Remark 4.9.5 A circulant matrix is a normal matrix, so its eigenvalues are not sensitive to perturbations.
Chapter 5
W(T) = {⟨T x, x⟩ : x ∈ H, ∥x∥ = 1}.
Let A be an n × n complex matrix. Then the numerical range of A, W(A), is defined to be
W(A) = {x* A x / x* x : x ∈ Cⁿ, x ≠ 0},
where x* denotes the conjugate transpose of the vector x.
Example 5.1.3
T = ( 0 1 ; 0 0 ).
If x = (f, g), then ∥x∥² = |f|² + |g|² = 1. We have T x = (g, 0) and ⟨T x, x⟩ = g \overline{f}.
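Sampling ⟨Tx, x⟩ over random unit vectors suggests the shape of W(T) for this example; a sketch (sample count and seed are arbitrary):

```python
import numpy as np

# Sample <Tx, x> over random complex unit vectors for T = [[0,1],[0,0]];
# all values satisfy |g f-bar| <= 1/2, consistent with W(T) being the
# disk of radius 1/2.
T = np.array([[0.0, 1.0], [0.0, 0.0]])
rng = np.random.default_rng(1)

vals = []
for _ in range(2000):
    x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    x /= np.linalg.norm(x)
    vals.append(np.vdot(x, T @ x))  # inner product, with vdot conjugating x

radii = np.abs(vals)
print(radii.max() <= 0.5 + 1e-12)  # True
```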
79
Notice that
⟨T f, f⟩ = f₁\overline{f₂} + f₂\overline{f₃} + f₃\overline{f₄} + ....
with
|f₁|² + |f₂|² + ... = 1.
Notice that, considering the minimum natural number n for which f_n ≠ 0, W(T) is contained in the open unit disk
{z : |z| < 1}.
We now show that it is in fact the whole open unit disk. Let z = re^{iθ}, 0 ≤ r < 1, be any point of this disk. Consider
f = (√(1 − r²), r√(1 − r²)e^{−iθ}, r²√(1 − r²)e^{−2iθ}, ...).
Observe that
∥f∥² = (1 − r²) + r²(1 − r²) + r⁴(1 − r²) + ... = 1.
So
⟨T f, f⟩ = re^{iθ}.
Let the transformation A : C² → C² be represented by
A = ( r b ; 0 −r ), r ∈ R, b ∈ C.
Let (f, g) be a unit vector in C², f = e^{iα} cos θ, g = e^{iβ} sin θ, α ∈ [0, π/2], β ∈ [0, 2π]. Then we have
(x − r cos φ)² + y² = (|b|²/4) sin² φ, with 0 ≤ φ ≤ π.
Eliminating φ between the last two equations, one obtains:
x² / (r² + |b|²/4) + y² / (|b|²/4) = 1.
This is an ellipse with center at 0, minor axis |b|, and major axis √(4r² + |b|²).
4)
W(A*) = {\overline{z} : z ∈ W(A)}.
5)
W(U* A U) = W(A).
x* H x = (1/2) x* (A + A*) x
= (1/2) [⟨x, Ax⟩ + ⟨x, A*x⟩]
= Re x* A x.
Then every element of W(H) has the form Re z for some z ∈ W(A). The converse is also true. In the same way, we prove that any element of W(S) has the form i Im z; W(S) = i Im W(A). □
W⁻(A) = inf λ, where λ ranges over the eigenvalues of the matrix (A + A*)/2,
and
W⁻(A) = inf_{z ∈ W(A)} Re z.
proof. Let λ be an eigenvalue of (A + A*)/2. Then
W⁻(A) = inf λ = inf_{∥x∥=1} ⟨x, ((A + A*)/2) x⟩
= inf_{∥x∥=1} Re x* A x
= inf_{z ∈ W(A)} Re z.
Theorem 5.2.7 Let A ∈ C^{n×n}. If 0 ∈ W(A), then the numerical range W(A) is closed.
5.3 The numerical radius
σ(A) ⊂ W(A).
proof. It is clear that the boundary of the spectrum is contained in the approximate point spectrum σ_app(T), and since W(T) is convex, it suffices to show that σ_app(T) ⊂ W(T).
Consider λ ∈ σ_app(T) and a sequence {f_n} of unit vectors with
∥(T − λI)f_n∥ → 0.
Then, by the Schwarz inequality,
⟨T f_n, f_n⟩ → λ,
so
λ ∈ W(T). □
σϵ(A) ⊆ W(A) + ∆ϵ,
where ∆ϵ is the closed disk of center 0 and radius ϵ.
proof. Let z ∈ σϵ(A). Then there exists E with ∥E∥ ≤ ϵ such that z ∈ σ(A + E), and hence there exists v ≠ 0 such that (A + E)v = zv. We can then write z in the form
z = v*(A + E)v / v*v = v*Av / v*v + v*Ev / v*v.
The first term belongs to W(A) and the second has modulus at most ϵ. □
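The decomposition used in this proof can be checked directly (A, E and ϵ below are arbitrary test data):

```python
import numpy as np

# For an eigenpair (z, v) of A + E with ||E|| <= eps, write
# z = v*Av/v*v + v*Ev/v*v: the first term is in W(A), the second
# has modulus at most eps.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
eps = 0.1
E = rng.standard_normal((4, 4))
E *= eps / np.linalg.norm(E, 2)            # scale so that ||E||_2 = eps

zs, V = np.linalg.eig(A + E)
ok = True
for z, v in zip(zs, V.T):
    v = v / np.linalg.norm(v)
    w_part = np.vdot(v, A @ v)             # a point of W(A)
    e_part = np.vdot(v, E @ v)             # the perturbation term
    ok = ok and abs(z - (w_part + e_part)) < 1e-10 and abs(e_part) <= eps + 1e-12
print(ok)
```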
Theorem 5.5.2 Let A ∈ C^{n×n}, u ∈ C^n with ∥u∥ = 1, and ϵ ≥ 0. If z ∈ σϵ(A), then there exist w ∈ C^n and E with ∥E∥ ≤ ϵ such that
W(A + E) ∩ W(zI + wu*) ≠ ∅.
Moreover,
W(A + E) ⊆ W(A) + ∆ϵ.
proof. Let z ∈ W(E), say z = x*Ex with ∥x∥ = 1. Then
|z| ≤ ∥E∥ ≤ ϵ,
so z ∈ ∆ϵ and hence
W(E) ⊆ ∆ϵ.
Therefore
W(A + E) ⊆ W(A) + ∆ϵ. □
5.7 Pseudo-commuting matrices
Definition 5.7.1 Let A, B ∈ C^{n×n}. We say that A and B commute if AB = BA, or equivalently if their commutator
[A, B] = AB − BA
vanishes. We say that A and B pseudo-commute (up to ϵ) if
∥[A, B]∥ ≤ ϵ.
Theorem 5.7.5 Let A ∈ C^{n×n} and u ∈ C^n with ∥u∥ = 1. If z ∈ σϵ(A), then there exist w ∈ C^n and v ∈ C^n with ∥v∥ = 1 such that
W(A + ½[A, vu*]) ∩ W(zI + wu*) ≠ ∅.
proof. The result follows by taking E = ½[A, vu*], for which
∥E∥ = ∥½[A, vu*]∥ ≤ ϵ.
The rest is clear using the last theorem. □
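A small numerical illustration of the commutator used in Theorem 5.7.5 (the vectors u, v and the matrix A are arbitrary; the bound checked is the crude estimate ∥[A, vu*]∥ ≤ 2∥A∥∥vu*∥, not the sharper one of the theorem):

```python
import numpy as np

# Commutator [A, B] = AB - BA for a rank-one B = v u* with unit u, v.
# Since ||B|| = 1, the submultiplicative bound gives ||[A, B]|| <= 2 ||A||.
rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
u = rng.standard_normal(4); u /= np.linalg.norm(u)
v = rng.standard_normal(4); v /= np.linalg.norm(v)

B = np.outer(v, u)                          # rank-one matrix v u*
comm = A @ B - B @ A                        # [A, B]
print(np.linalg.norm(comm, 2) <= 2 * np.linalg.norm(A, 2))
```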
5.8 Generalized spectrum and numerical range of matrices of the Lorentzian oscillator group of dimension four
Connected Lie groups that admit a bi-invariant Lorentzian metric were determined in [?]. Among them, those that are solvable, non-commutative, and simply connected are called oscillator groups. These groups have many properties useful in both geometry and physics.
We study here the geometry of these groups and of their lattices, i.e. their co-compact discrete subgroups. If G is an oscillator group, its lattices determine compact homogeneous Lorentz manifolds on which G acts by isometries.
Let H_{2k+1} = R × C^k be the Heisenberg group and let λ = (λ1, λ2, …, λk) be k strictly positive real numbers. Let the additive group R act on H_{2k+1} by the action
ρ(t)(u, (zj)) = (u, (e^{iλj t} zj)).
The group Gk(λ), the semi-direct product of R with H_{2k+1} along ρ, carries a bi-invariant Lorentz metric. Here is how it is built:
g = R × R × R^{2k}
is the tangent space at the origin. We extend the usual scalar product of R^{2k} to a symmetric bilinear form on g so that the plane R × R is hyperbolic and orthogonal to R^{2k}. This form defines a left-invariant Lorentz metric on Gk(λ); it is also right-invariant because the adjoint operators on g are antisymmetric [?].
The groups Gk(λ) are characterized [?] by:
Theorem 5.8.1 The groups Gk(λ) are the only simply connected, solvable, non-commutative Lie groups that admit a bi-invariant Lorentz metric.
5.8.1 Oscillator group of dimension 4
Let λ > 0 and let Gλ be the Lie group with underlying manifold R⁴ = R × C × R and product
(x1, z1, y1) · (x2, z2, y2) = (x1 + x2 + ½ Im(z̄1 e^{iy1} z2), z1 + e^{iy1} z2, y1 + y2),
where (x1, z1, y1), (x2, z2, y2) ∈ R × C × R.
Gλ is a simply connected, solvable, non-commutative Lie group of dimension 4, called the oscillator group.
The oscillator group of dimension 4 does not admit a flat invariant metric, but it is the only simply connected, solvable, non-commutative Lie group of this dimension which admits a bi-invariant Lorentzian metric (i.e. invariant under both left and right translations).
The Lie algebra of the oscillator group, denoted gλ, is called the oscillator algebra. gλ is defined as the real Lie algebra generated by four left-invariant vector fields X, Y, P, Q whose Lie brackets are given by
(5.1) [X, Y] = P, [Q, X] = λY, [Q, Y] = −λX.
The vector fields X, Y, P, Q satisfying (5.1) can be represented in matrix form as follows (see [?] and [?]), with rows separated by semicolons:
X = [0, 0, 1, 0; 0, 0, 0, 1; 0, 0, 0, 0; 0, 0, 0, 0],   Y = [0, −1, 0, 0; 0, 0, 0, 0; 0, 0, 0, 1; 0, 0, 0, 0],
P = [0, 0, 0, 2; 0, 0, 0, 0; 0, 0, 0, 0; 0, 0, 0, 0],   Q = [0, 0, 0, 0; 0, 0, −λ, 0; 0, λ, 0, 0; 0, 0, 0, 0].
Then the oscillator group of dimension 4 can be seen as a subgroup of GL(4, R) (see [?]):
Gλ = {Mλ(x1, x2, x3, x4) ∈ GL(4, R) | x1, x2, x3, x4 ∈ R},
whose elements are
Mλ(xi) = exp(x1 P) exp(x2 X) exp(x3 Y) exp(x4 Q),
that is,
Mλ(xi) = [1, x2 sin(λx4) − x3 cos(λx4), x2 cos(λx4) + x3 sin(λx4), 2x1 + x2x3;
          0, cos(λx4), −sin(λx4), x2;
          0, sin(λx4), cos(λx4), x3;
          0, 0, 0, 1].
In particular, Mλ is a diffeomorphism between Gλ and R³ × R/(2π/λ)Z.
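The closed form of Mλ(xi) and the brackets (5.1) can be verified numerically (λ and the coordinates below are arbitrary; expm_series is a plain power-series matrix exponential, adequate for these small matrices):

```python
import numpy as np

# Power-series matrix exponential; converges quickly for these matrices.
def expm_series(M, terms=60):
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

lam = 1.3
X = np.array([[0,0,1,0],[0,0,0,1],[0,0,0,0],[0,0,0,0]], float)
Y = np.array([[0,-1,0,0],[0,0,0,0],[0,0,0,1],[0,0,0,0]], float)
P = np.array([[0,0,0,2],[0,0,0,0],[0,0,0,0],[0,0,0,0]], float)
Q = np.array([[0,0,0,0],[0,0,-lam,0],[0,lam,0,0],[0,0,0,0]], float)

# the oscillator brackets (5.1)
assert np.allclose(X @ Y - Y @ X, P)
assert np.allclose(Q @ X - X @ Q, lam * Y)
assert np.allclose(Q @ Y - Y @ Q, -lam * X)

x1, x2, x3, x4 = 0.4, -1.1, 0.7, 0.9
M = (expm_series(x1 * P) @ expm_series(x2 * X)
     @ expm_series(x3 * Y) @ expm_series(x4 * Q))

c, s = np.cos(lam * x4), np.sin(lam * x4)
M_closed = np.array([
    [1, x2 * s - x3 * c, x2 * c + x3 * s, 2 * x1 + x2 * x3],
    [0, c, -s, x2],
    [0, s, c, x3],
    [0, 0, 0, 1],
])
print(np.allclose(M, M_closed))
```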
In what follows we denote by ∂i = ∂/∂xi the coordinate vector fields of the local chart (x1, x2, x3, x4); in matrix form, ∂j corresponds to ∂Mλ/∂xj (x1, x2, x3, x4).
Now let {e1, e2, e3, e4} be a basis of left-invariant vector fields on Gλ such that (ej)I = (∂j)I for j ∈ {1, 2, 3, 4}, where I = Mλ(0, 0, 0, 2kπ/λ), k ∈ N, denotes the identity matrix. Then ([?], [?])
(e1)Mλ(x1,x2,x3,x4) = [0, 0, 0, 2; 0, 0, 0, 0; 0, 0, 0, 0; 0, 0, 0, 0],
(e2)Mλ(x1,x2,x3,x4) = [0, 0, 1, x2 sin(λx4) − x3 cos(λx4); 0, 0, 0, cos(λx4); 0, 0, 0, sin(λx4); 0, 0, 0, 0],
(e3)Mλ(x1,x2,x3,x4) = [0, −1, 0, x2 cos(λx4) + x3 sin(λx4); 0, 0, 0, −sin(λx4); 0, 0, 0, cos(λx4); 0, 0, 0, 0],
(e4)Mλ(x1,x2,x3,x4) = λ [0, x2 cos(λx4) + x3 sin(λx4), −x2 sin(λx4) + x3 cos(λx4), 0; 0, −sin(λx4), −cos(λx4), 0; 0, cos(λx4), −sin(λx4), 0; 0, 0, 0, 0].
Hence the basis of left-invariant vector fields on Gλ is given by
e1 = ∂1,
e2 = −x3 cos(λx4)∂1 + cos(λx4)∂2 + sin(λx4)∂3,     (5.2)
e3 = x3 sin(λx4)∂1 − sin(λx4)∂2 + cos(λx4)∂3,
e4 = ∂4.
Using (5.2), a direct computation shows that the only non-zero Lie brackets [ei, ej] are given by
(5.3) [e2, e3] = e1, [e2, e4] = −λe3, [e3, e4] = λe2.
Comparing (5.3) with (5.1), we see that the Lie algebra of Gλ coincides with the oscillator algebra once we set X = e2, Y = e3, P = e1 and Q = e4. Hence Gλ is the oscillator group.
ga = a dx1² + 2a x3 dx1 dx2 + (1 + a x3²) dx2² + dx3² + 2 dx1 dx4 + 2x3 dx2 dx4 + a dx4²,
λ1 = 1,
λ2 = (2/3)a + (1/3)a x3² + 1/3 − (1/2)(S − C/S) − (√3/2) i (S + C/S),
λ3 = λ̄2,
λ4 = (2/3)a + (1/3)a x3² + 1/3 + S − C/S,
with
S = ∛( M + √(N − 8/27) ),
and
M = (2/9)a + (1/9)a² − (1/27)a³ + (1/6)x3² + (11/18)a x3² + (1/6)a x3⁴
    − (1/18)a²x3² − (1/18)a³x3² + (1/9)a²x3⁴ + (1/18)a³x3⁴ + (1/27)a³x3⁶,
N = (4/27)a³ − (4/27)a² − (1/27)a⁴ − (8/27)x3² − (13/108)x3⁴ − (1/27)x3⁶ − (2/9)a x3² − (1/54)a x3⁴ − (1/54)a x3⁶ + (7/27)a²x3²
    + (4/27)a³x3² + (7/36)a²x3⁴ − (1/9)a⁴x3² + (1/18)a²x3⁶ − (11/108)a⁴x3⁴ + (1/27)a³x3⁶ + (1/54)a⁵x3⁴ − (1/108)a²x3⁸
    − (1/108)a⁶x3⁴ − (1/54)a⁵x3⁶ + (1/54)a⁴x3⁸ − (1/54)a⁶x3⁶ − (1/108)a⁶x3⁸,
C = (2/9)a − (1/9)a² − (1/3)x3² − (2/9)a x3² − (1/9)a²x3² − (1/9)a²x3⁴ − 4/9.
proof. We consider the characteristic polynomial of the matrix of ga; setting
L = 1 + 2a + a x3²,
K = −a² − 2a − a²x3² + x3² + 1,
and making the change of variable
(5.4) z = λ − L/3, z ∈ C,
it reduces to the depressed cubic
z³ + pz + q = 0.
Then Cardan's method says that the 3 solutions are
zk = j^k ∛( (−q + √(−∆/27))/2 ) + j^{−k} ∛( (−q − √(−∆/27))/2 ),  0 ≤ k ≤ 2,
where
∆ = −4p³ − 27q²,
j = e^{i 2π/3}.
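A minimal sketch of Cardan's formula (the test cubic z³ − 7z + 6, with roots 1, 2, −3, is an arbitrary example; the second cube root is fixed by u·v = −p/3 so the three roots come out consistently):

```python
import numpy as np

# Cardan's formula for the depressed cubic z^3 + p z + q = 0.
def cardano(p, q):
    disc = -4 * p**3 - 27 * q**2                   # Delta
    u = ((-q + np.sqrt(complex(-disc / 27))) / 2) ** (1 / 3)
    v = -p / (3 * u) if u != 0 else 0.0            # enforce u v = -p/3
    j = np.exp(2j * np.pi / 3)
    return [j**k * u + j**(-k) * v for k in range(3)]

p, q = -7.0, 6.0                                   # z^3 - 7z + 6 = (z-1)(z-2)(z+3)
roots = sorted(cardano(p, q), key=lambda z: z.real)
print(np.allclose(roots, [-3, 1, 2]))
```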
x*Aa x = a|z1|² + a|z4|² + |z2|² + |z3|² + a x3 (z1 z̄2 + z2 z̄1) + x3 (z2 z̄4 + z4 z̄2) + (z1 z̄4 + z4 z̄1) + a|z2|² x3²,
so we have
(5.5) |zj|² / (|z1|² + |z2|² + |z3|² + |z4|²) ≤ 1, ∀j ∈ {1, …, 4},
and
(5.6) (zi z̄j + zj z̄i) / (|z1|² + |z2|² + |z3|² + |z4|²) ≤ 1, ∀i, j ∈ {1, …, 4},
hence
x*Aa x / x*x ≤ 1 + |a x3| + |x3| + |a| + a x3²,
which is what had to be proven. □
we get
(r1² + r4² − 2r1r4 cos(θ1 − θ4)) / (r1² + r2² + r3² + r4²) = 0,
and this holds for r1 = r4, θ1 − θ4 = 2kπ (k ∈ N) and r3 = r2; we then get g₀^0(x*, x)/x*x = 1, hence 1 ∈ W(A₀^0).
On the other hand, we have
−(r1² + r4² − 2r1r4 cos(θ1 − θ4)) / (r1² + r2² + r3² + r4²) ≥ −2.
To prove this, suppose the contrary; this leads to
2r2² + 2r3² + (r1 cos θ1 − r4 cos θ4)² + (r1 sin θ1 − r4 sin θ4)² ≤ 0,
which never occurs. So
g₀^0(x*, x) / x*x ≥ −1,
and for r1 = r4, θ1 − θ4 = (2k + 1)π and r2 = r3 = 0 (k ∈ N) we have g₀^0(x*, x)/x*x = −1. So W(A₀^0) = [−1, 1].
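Since A₀^0 is real symmetric, its numerical range is the interval between its extreme eigenvalues, so the conclusion W(A₀^0) = [−1, 1] can be checked in one line:

```python
import numpy as np

# A_0^0 (a = 0, x3 = 0) is real symmetric, so W(A) = [min eig, max eig].
A00 = np.array([[0, 0, 0, 1],
                [0, 1, 0, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 0]], float)
eig = np.linalg.eigvalsh(A00)
print(np.isclose(eig.min(), -1.0) and np.isclose(eig.max(), 1.0))
```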
2) For a = 0 and x3 = 0.5,
A₀^{0.5} = [0, 0, 0, 1; 0, 1, 0, 0.5; 0, 0, 1, 0; 1, 0.5, 0, 0],
so
g₀^{0.5}(x*, x)/x*x = g₀^0(x*, x)/x*x + r2r4 cos(θ2 − θ4) / (r1² + r2² + r3² + r4²).
We have
r2r4 cos(θ2 − θ4) / (r1² + r2² + r3² + r4²) ≤ 1/2.
To prove this, suppose the contrary,
r2r4 cos(θ2 − θ4) / (r1² + r2² + r3² + r4²) > 1/2;
then we get
r1² + r2² + r3² + r4² − 2r2r4 cos(θ2 − θ4) < 0,
which is impossible.
We also have
g₀^{0.5}(x*, x)/x*x ≥ −5/4.
To prove this, we show that the contrary never holds. Suppose that
g₀^{0.5}(x*, x)/x*x ≤ −5/4;
this means
r2r4 cos(θ2 − θ4) / (r1² + r2² + r3² + r4²) ≤ −9/4,
i.e.
9(r1² + r2² + r3² + r4²) + 4r2r4 cos(θ2 − θ4) ≤ 0.
But
9(r1² + r2² + r3² + r4²) + 4r2r4 cos(θ2 − θ4) = 9r1² + 9r3² + 7r2² + 7r4² + 2[(r2 cos θ2 + r4 cos θ4)² + (r2 sin θ2 + r4 sin θ4)²] ≥ 0,
with equality only for x = 0, which is impossible. So we conclude
−5/4 ≤ g₀^{0.5}(x*, x)/x*x ≤ 3/2,
but −5/4 and 3/2 do not belong to W(A₀^{0.5}). Let us prove that:
If −5/4 ∈ W(A₀^{0.5}), then
(−r1² − r4² + 2r1r4 cos(θ1 − θ4) + r2r4 cos(θ2 − θ4)) / (r1² + r2² + r3² + r4²) = −5/4.
The only condition which verifies this equation is r1 = r2 = r3 = r4 = 0, and that is not possible because x ≠ 0. Hence −5/4 ∉ W(A₀^{0.5}), and the same argument shows 3/2 ∉ W(A₀^{0.5}). So we get
W(A₀^{0.5}) ⊂ ]−5/4, 3/2[.
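The strict inclusion can be confirmed from the eigenvalues of A₀^{0.5}: the matrix is real symmetric, so W(A₀^{0.5}) is exactly the interval between its extreme eigenvalues.

```python
import numpy as np

# A_0^{0.5} (a = 0, x3 = 0.5) is real symmetric; W(A) = [min eig, max eig],
# which should lie strictly inside ]-5/4, 3/2[.
A = np.array([[0, 0, 0, 1],
              [0, 1, 0, 0.5],
              [0, 0, 1, 0],
              [1, 0.5, 0, 0]])
eig = np.linalg.eigvalsh(A)
print(eig.min() > -5 / 4 and eig.max() < 3 / 2)
```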
Conclusion
Our memoir dealt with one of the most important and newest topics in functional analysis. It focused on several aspects, such as spectral theory, which contributes to the solution of linear equations as well as to many problems in mathematics and physics. We also focused on the pseudospectrum, which plays a great role in understanding the perturbations of spectral objects.
[Concept map (30 avr. 2021): overview of the memoir's topics — Hilbert space (inner product space, orthogonality, orthonormal sets, bounded linear operators), spectral theory (the resolvent equation, the spectral mapping, spectrum of various classes of operators), the numerical range and numerical radius, and the pseudospectrum of matrices (singular values, equivalence of the definitions, properties, the pseudoprojection).]
Bibliography
[3] G. Calvaruso, J. Van der Veken: Totally geodesic and parallel hyper-
surfaces of four-dimensional oscillator groups. Results Math. 64 (2013),
135–153.
[6] R. Duran Diaz, P.M. Gadea, J.A. Oubiña: The oscillator group as a
homogeneous spacetime. Lib. Math. 19 (1999), 9–18.
[8] K.E. Gustafson and D.K.M. Rao, Numerical Range, Springer, New York,
1997.
[9] R.A. Horn and C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1991.
[10] V. Khiem Ngo, An Approach of Eigenvalue Perturbation Theory. Appl. Num. Anal. Comp. Math. 2 (2005), No. 1, pp. 108-125.
[17] T. Nomura, The Paley-Wiener theorem for the oscillator group, J. Math.
Kyoto Univ. 22 (1982/83), 71–96.
[23] C. Van Loan, A study of the matrix exponential, Numerical Analysis
Report No. 10, University of Manchester, UK, August, 1975, Reissued
as MIMS EPrint, Manchester Institute for Mathematical Sciences, The
University of Manchester, UK, November 2006.