Example of Hessenberg Reduction


Eigenvalues

In addition to solving linear equations, another important task of linear algebra is finding
eigenvalues.
Let F be some operator and x a vector. If F does not change the direction of the vector
x, x is an eigenvector of the operator, satisfying the equation

    F(x) = λx,    (1)

where λ is a real or complex number, the eigenvalue corresponding to the eigenvector.
Thus the operator will only change the length of the vector by a factor given by the
eigenvalue.
If F is a linear operator, F(ax) = aF(x) = aλx = λ(ax), and hence ax is an eigenvector, too.
Eigenvectors are not uniquely determined, since they can be multiplied by any constant.
In the following only eigenvalues of matrices are discussed. A matrix can be considered
an operator mapping a vector to another one. For eigenvectors this mapping is a mere
scaling.
Let A be a square matrix. If there is a real or complex number λ and a nonzero vector x
such that

    Ax = λx,    (2)

λ is an eigenvalue of the matrix and x an eigenvector. Equation (2) can also be written
as

    (A − λI)x = 0.

If the equation is to have a nontrivial solution, we must have

    det(A − λI) = 0.    (3)
When the determinant is expanded, we get the characteristic polynomial, the zeros of
which are the eigenvalues.
If the matrix is real and symmetric, the eigenvalues are real. Otherwise at least
some of them may be complex. For a real matrix, complex eigenvalues always appear as
complex conjugate pairs.
For example, the characteristic polynomial of the matrix

    ( 1  4 )
    ( 1  1 )

is

    det ( 1−λ    4  )  =  λ² − 2λ − 3.
        (  1   1−λ  )

The zeros of this are the eigenvalues λ₁ = 3 and λ₂ = −1.
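The result is easy to check numerically. The following is a minimal sketch using NumPy (not part of the original text): the roots of the characteristic polynomial and the eigenvalues computed directly from the matrix should agree.

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [1.0, 1.0]])

# Roots of the characteristic polynomial lambda^2 - 2*lambda - 3
print(np.roots([1.0, -2.0, -3.0]))   # [ 3. -1.]

# Eigenvalues computed directly from the matrix
print(np.linalg.eigvals(A))          # [ 3. -1.]
```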
Finding eigenvalues using the characteristic polynomial is very laborious if the matrix
is big. Even determining the coefficients of the polynomial is difficult. The method is not
suitable for numerical calculations.
To find the eigenvalues, the matrix must be transformed to a more suitable form. Gaussian
elimination would transform it into a triangular matrix, but unfortunately the eigenvalues
are not conserved in that transformation.
Another kind of transform, called a similarity transform, will not affect the eigenvalues.
Similarity transforms have the form

    A → A′ = S⁻¹AS,

where S is any nonsingular matrix. The matrices A and A′ are called similar.
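The invariance is easy to demonstrate numerically. Below is a minimal sketch (assuming NumPy; the random matrices are only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
S = rng.standard_normal((4, 4))       # a random matrix is almost surely nonsingular

A_similar = np.linalg.inv(S) @ A @ S  # similarity transform S^-1 A S
print(np.sort_complex(np.linalg.eigvals(A)))
print(np.sort_complex(np.linalg.eigvals(A_similar)))  # same spectrum up to rounding
```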


QR decomposition
A commonly used method for finding eigenvalues is known as the QR method. The method
is an iteration that repeatedly computes a decomposition of the matrix known as its
QR decomposition. The decomposition itself is obtained in a finite number of steps, and it has
some other uses, too. We'll first see how to compute this decomposition.
The QR decomposition of a matrix A is
A = QR,
where Q is an orthogonal matrix and R an upper triangular matrix. This decomposition
is possible for all matrices.
There are several methods for finding the decomposition:
1) Householder transform
2) Givens rotations
3) Gram–Schmidt orthogonalisation
In the following we discuss the first two methods with examples. They will probably be
easier to understand than the formal algorithm.
Householder transform
The idea of the Householder transform is to find a set of transforms that will make all
elements in one column below the diagonal vanish.
Assume that we have to decompose a matrix

    A = ( 3  2  1 )
        ( 1  1  2 )
        ( 2  1  3 )
We begin by taking the first column of this,

    x₁ = a(:, 1) = (3, 1, 2)ᵀ,

and compute the vector

    u₁ = x₁ − ‖x₁‖ (1, 0, 0)ᵀ = (−0.7416574, 1, 2)ᵀ.
This is used to create a Householder transformation matrix

    P₁ = I − 2 u₁u₁ᵀ / ‖u₁‖²

       = (  0.8017837   0.2672612   0.5345225 )
         (  0.2672612   0.6396433  −0.7207135 )
         (  0.5345225  −0.7207135  −0.4414270 )
It can be shown that this is an orthogonal matrix. It is easy to check by calculating the
scalar products of the columns: the product of any two different columns is zero and each
column has unit length, so the column vectors of the matrix are orthonormal.
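The same construction in code form, as a small NumPy sketch (x₁ is the first column from above):

```python
import numpy as np

x1 = np.array([3.0, 1.0, 2.0])
e1 = np.array([1.0, 0.0, 0.0])
u1 = x1 - np.linalg.norm(x1) * e1                   # u1 = x1 - ||x1|| e1
P1 = np.eye(3) - 2.0 * np.outer(u1, u1) / (u1 @ u1)

print(P1 @ P1.T)   # ~ identity matrix: P1 is orthogonal
print(P1 @ x1)     # ~ [3.7416574, 0, 0]: zeros below the first element
```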
When the original matrix is multiplied by this transform, the result is a matrix with zeros
in the first column below the diagonal:

    A₁ = P₁A = ( 3.7416574   2.4053512   2.9398737 )
               ( 0           0.4534522  −0.6155927 )
               ( 0          −0.0930955  −2.2311854 )
Then we use the second column to create a vector

    x₂ = a(2 : 3, 2) = (0.4534522, −0.0930955)ᵀ,

from which

    u₂ = x₂ − ‖x₂‖ (1, 0)ᵀ = (−0.0094578, −0.0930955)ᵀ.
This will give the second transformation matrix

    P₂ = I − 2 u₂u₂ᵀ / ‖u₂‖² = ( 1   0           0          )
                               ( 0   0.9795688  −0.2011093  )
                               ( 0  −0.2011093  −0.9795688  )
The product of A₁ and the transformation matrix will be a matrix with zeros in the
second column below the diagonal:

    A₂ = P₂A₁ = ( 3.7416574   2.4053512   2.9398737 )
                ( 0           0.4629100  −0.1543033 )
                ( 0           0           2.3094011 )
Thus the matrix has been transformed to an upper triangular matrix. If the matrix is
bigger, the same procedure is repeated for each column until all the elements below the
diagonal vanish.
Matrices of the decomposition are now obtained as

    Q = P₁P₂ = ( 0.8017837   0.1543033  −0.5773503 )
               ( 0.2672612   0.7715167   0.5773503 )
               ( 0.5345225  −0.6172134   0.5773503 )

    R = A₂ = P₂P₁A = ( 3.7416574   2.4053512   2.9398737 )
                     ( 0           0.4629100  −0.1543033 )
                     ( 0           0           2.3094011 )
The matrix R is in fact the matrix Aₖ calculated in the last transformation; thus the original
matrix A is not needed. If memory must be saved, each of the matrices Aᵢ can be stored
in the area of the previous one. Also, there is no need to keep the earlier matrices Pᵢ:
P₁ is used as the initial value of Q, and at each step Q is multiplied by the new
transformation matrix Pᵢ.
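The whole procedure can be collected into a short program. The following is a sketch in Python/NumPy of the algorithm described above (the function name householder_qr is ours; production codes also choose the sign of ‖x‖ to avoid cancellation, which is omitted here):

```python
import numpy as np

def householder_qr(A):
    """QR decomposition by Householder transforms, as described above."""
    R = A.astype(float).copy()
    n = R.shape[0]
    Q = np.eye(n)
    for k in range(n - 1):
        x = R[k:, k]
        u = x.copy()
        u[0] -= np.linalg.norm(x)            # u_k = x_k - ||x_k|| e1
        if np.allclose(u, 0.0):              # column already has the desired form
            continue
        P = np.eye(n)
        P[k:, k:] -= 2.0 * np.outer(u, u) / (u @ u)
        R = P @ R                            # A_k = P_k A_{k-1}
        Q = Q @ P                            # Q = P_1 P_2 ... P_{n-1}
    return Q, R

A = np.array([[3.0, 2.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 1.0, 3.0]])
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A))                 # True: the product restores A
```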
As a check, we can calculate the product of the factors of the decomposition to see that
we will restore the original matrix:

    QR = ( 3  2  1 )
         ( 1  1  2 )
         ( 2  1  3 )
Orthogonality of the matrix Q can be seen e.g. by calculating the product QQᵀ:

    QQᵀ = ( 1  0  0 )
          ( 0  1  0 )
          ( 0  0  1 )
In the general case the matrices of the decomposition are

    Q = P₁P₂ ⋯ Pₙ,
    R = Pₙ ⋯ P₂P₁A.
Givens rotations
Another commonly used method is based on Givens rotation matrices:

    Pₖₗ(θ) = ( 1                            )
             (     cos θ   ⋯    sin θ       )
             (       ⋮      1      ⋮        )
             (    −sin θ   ⋯    cos θ       )
             (                          1   )

where the sines and cosines lie in rows and columns k and l, and the rest of the matrix
is an identity matrix.
This is an orthogonal matrix.
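A single rotation Pₖₗ(θ) is chosen so that one element below the diagonal becomes zero. A minimal sketch of the idea in NumPy (the helper name givens is ours):

```python
import numpy as np

def givens(a, b):
    """Choose c = cos(theta), s = sin(theta) so that the rotation
    [[c, s], [-s, c]] maps the pair (a, b) to (r, 0)."""
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r

A = np.array([[3.0, 2.0],
              [1.0, 1.0]])
c, s = givens(A[0, 0], A[1, 0])
G = np.array([[c, s],
              [-s, c]])
print(G @ A)   # the element (2, 1) is zeroed
```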
Finding the eigenvalues
Eigenvalues can be found iteratively using the QR algorithm, which makes use of the
QR decomposition described above. If we started with the original matrix, the task would be
computationally very time consuming. Therefore we start by transforming the matrix to a
more suitable form.
A square matrix is in block triangular form if it is

    ( T₁₁   T₁₂   T₁₃   ⋯   T₁ₙ )
    ( 0     T₂₂   T₂₃   ⋯   T₂ₙ )
    ( 0     0      ⋱          ⋮ )
    ( 0     0     0     ⋯   Tₙₙ )

where the submatrices Tᵢᵢ on the diagonal are square. It can be shown that the eigenvalues of
such a matrix are the eigenvalues of the diagonal blocks Tᵢᵢ.
If the matrix is a diagonal or a triangular matrix, the eigenvalues are the diagonal
elements. If such a form can be found, the problem is solved. Usually, however, such a form
cannot be obtained by a finite number of similarity transformations.
If the original matrix is symmetric, it can be transformed to a tridiagonal form without
affecting its eigenvalues. In the case of a general matrix the result is a Hessenberg matrix,
which has the form

    H = ( x  x  x  x  x )
        ( x  x  x  x  x )
        ( 0  x  x  x  x )
        ( 0  0  x  x  x )
        ( 0  0  0  x  x )
The transformations required can be accomplished with Householder transforms or Givens
rotations. The method is now slightly modified so that the elements immediately
below the diagonal are not zeroed.
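In code form the reduction looks almost like the QR decomposition above, except that each transform starts one row lower and is applied from both sides. A sketch in NumPy (the function name hessenberg_reduce is ours):

```python
import numpy as np

def hessenberg_reduce(A):
    """Reduce A to Hessenberg form with Householder similarity transforms."""
    H = A.astype(float).copy()
    n = H.shape[0]
    for k in range(n - 2):
        x = H[k + 1:, k]                     # column k below the diagonal
        u = x.copy()
        u[0] -= np.linalg.norm(x)            # u = x - ||x|| e1
        if np.allclose(u, 0.0):
            continue
        P = np.eye(n)
        P[k + 1:, k + 1:] -= 2.0 * np.outer(u, u) / (u @ u)
        H = P @ H @ P                        # similarity transform; P is symmetric
    return H
```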
Transformation using Householder transforms
As a first example, consider a symmetric matrix

    A = ( 4   3   2   1 )
        ( 3   4  −1   2 )
        ( 2  −1   1  −2 )
        ( 1   2  −2   2 )
We begin to transform this using a Householder transform. First we construct a vector x₁
by taking only the elements of the first column that are below the diagonal:

    x₁ = (3, 2, 1)ᵀ
Using these, form the vector

    u₁ = x₁ − ‖x₁‖ (1, 0, 0)ᵀ = (−0.7416574, 2, 1)ᵀ
and from this the matrix

    p₁ = I − 2 u₁u₁ᵀ / ‖u₁‖² = (  0.8017837   0.5345225   0.2672612 )
                               (  0.5345225  −0.4414270  −0.7207135 )
                               (  0.2672612  −0.7207135   0.6396433 )
and finally the Householder transformation matrix

    P₁ = ( 1   0           0           0         )
         ( 0   0.8017837   0.5345225   0.2672612 )
         ( 0   0.5345225  −0.4414270  −0.7207135 )
         ( 0   0.2672612  −0.7207135   0.6396433 )
Now we can make the similarity transform of the matrix A. The transformation matrix
is symmetric, so there is no need to transpose it:

    A₁ = P₁AP₁ = ( 4           3.7416574   0           0         )
                 ( 3.7416574   2.4285714   1.2977396   2.1188066 )
                 ( 0           1.2977396   0.0349563   0.2952113 )
                 ( 0           2.1188066   0.2952113   4.5364723 )
The second column is handled in the same way. First we form the vector

    x₂ = (1.2977396, 2.1188066)ᵀ

and from this

    u₂ = x₂ − ‖x₂‖ (1, 0)ᵀ = (−1.1869072, 2.1188066)ᵀ
and

    p₂ = ( 0.5223034   0.8527597 )
         ( 0.8527597  −0.5223034 )
and the final transformation matrix

    P₂ = ( 1   0   0           0         )
         ( 0   1   0           0         )
         ( 0   0   0.5223034   0.8527597 )
         ( 0   0   0.8527597  −0.5223034 )
Making the transform we get

    A₂ = P₂A₁P₂ = ( 4           3.7416574   0           0          )
                  ( 3.7416574   2.4285714   2.4846467   0          )
                  ( 0           2.4846467   3.5714286  −1.8708287  )
                  ( 0           0          −1.8708287   1          )
Thus we obtained a tridiagonal matrix, as we should in the case of a symmetric matrix.
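As a quick numerical check (a sketch using NumPy), the eigenvalues of the tridiagonal matrix agree with those of the original A:

```python
import numpy as np

A = np.array([[4.0,  3.0,  2.0,  1.0],
              [3.0,  4.0, -1.0,  2.0],
              [2.0, -1.0,  1.0, -2.0],
              [1.0,  2.0, -2.0,  2.0]])

A2 = np.array([[4.0,       3.7416574,  0.0,        0.0],
               [3.7416574, 2.4285714,  2.4846467,  0.0],
               [0.0,       2.4846467,  3.5714286, -1.8708287],
               [0.0,       0.0,       -1.8708287,  1.0]])

print(np.linalg.eigvalsh(A))    # the two lists agree to the rounding used above
print(np.linalg.eigvalsh(A2))
```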


As another example, we take an asymmetric matrix:

    A = ( 4   2   3   1 )
        ( 3   4  −2   1 )
        ( 2  −1   1  −2 )
        ( 1   2  −2   2 )
The transformation proceeds as before, and the result is

    A₂ = ( 4           3.4743961  −1.3039935  −0.4776738 )
         ( 3.7416574   1.7857143   2.9123481   1.1216168 )
         ( 0           2.0934787   4.2422252  −1.0307759 )
         ( 0           0          −1.2980371   0.9720605 ),

which is of the Hessenberg form.
QR algorithm
We now have a simpler tridiagonal or Hessenberg matrix, which still has the same
eigenvalues as the original matrix. Let this transformed matrix be H. Then we can begin to
search for the eigenvalues. This is done by iteration.
As an initial value, take A₁ = H. Then repeat the following steps:
Find the QR decomposition QᵢRᵢ = Aᵢ.
Calculate a new matrix Aᵢ₊₁ = RᵢQᵢ.
The matrix Q of the QR decomposition is orthogonal and A = QR, and so R = QᵀA.
Therefore

    RQ = QᵀAQ

is a similarity transform that will conserve the eigenvalues.
The sequence of matrices Aᵢ converges towards an upper triangular or block triangular
matrix, from which the eigenvalues can be picked out.
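As a sketch, the whole iteration is only a few lines of NumPy (this is the plain, unshifted QR algorithm; practical implementations add shifts and deflation to speed up convergence):

```python
import numpy as np

def qr_algorithm(H, iterations=50):
    """Plain QR iteration: A_{i+1} = R_i Q_i, where Q_i R_i = A_i."""
    A = H.astype(float).copy()
    for _ in range(iterations):
        Q, R = np.linalg.qr(A)   # A_i = Q_i R_i
        A = R @ Q                # A_{i+1} = R_i Q_i = Q_i^T A_i Q_i
    return A
```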
In the last example the limiting matrix after 50 iterations is

    ( 7.0363389   0.7523758   0.7356716   0.3802631 )
    ( 4.265E−08   4.9650342   0.8892339   0.4538061 )
    ( 0           0          −1.9732687   1.3234202 )
    ( 0           0           0           0.9718955 )
The diagonal elements are now the eigenvalues. If some eigenvalues are complex, the
limiting matrix is a block triangular matrix, and the complex eigenvalues are the eigenvalues
of the 2 × 2 diagonal submatrices.
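Reading the eigenvalues off the limiting matrix can also be sketched in code (the function name and the tolerance are ours): real eigenvalues are taken from the diagonal, and each 2 × 2 diagonal block contributes a complex conjugate pair.

```python
import numpy as np

def eigenvalues_from_blocks(T, tol=1e-8):
    """Pick eigenvalues off a quasi-triangular matrix T."""
    n = T.shape[0]
    values, i = [], 0
    while i < n:
        if i + 1 < n and abs(T[i + 1, i]) > tol:
            # a 2 x 2 diagonal block: a pair of complex conjugate eigenvalues
            values.extend(np.linalg.eigvals(T[i:i + 2, i:i + 2]))
            i += 2
        else:
            values.append(T[i, i])           # an ordinary diagonal element
            i += 1
    return np.array(values)
```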
