Vector and Matrix Norm
Vector Spaces
Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars.
A Vector Space, V , over the field F is a non-empty set of objects (called vectors) on which two
binary operations, (vector) addition and (scalar) multiplication, are defined and satisfy the axioms
below.
Addition: is a rule which associates with each pair of vectors x, y ∈ V a member of V, and that member is called the sum x + y.
Scalar multiplication: is a rule which associates with each scalar λ ∈ F and each vector x ∈ V a member of V, and it is called the scalar multiple λx.
For V to be called a Vector Space the two operations above must satisfy the following axioms ∀ x, y, w ∈ V:
1. Vector addition is commutative: x + y = y + x;
2. Vector addition is associative: x + (y + w) = (x + y) + w;
3. Vector addition has an identity: ∃ a vector in V, called the zero vector 0, such that x + 0 = x;
4. Vector addition has an inverse: for each x ∈ V ∃ an inverse (or negative) element, (−x) ∈ V, such that x + (−x) = 0;
5. Distributivity holds for scalar multiplication over vector addition: λ(x + y) = λx + λy, ∀ λ ∈ F;
6. Distributivity holds for scalar multiplication over field addition: (λ + μ)x = λx + μx, ∀ λ, μ ∈ F;
7. Scalar multiplication is compatible with field multiplication: λ(μx) = (λμ)x, ∀ λ, μ ∈ F;
8. Scalar multiplication has an identity element: 1x = x, where 1 is the multiplicative identity of F.
Any collection of objects together with two operations which satisfy the above axioms is a vector space. In the context of Numerical Analysis a vector space is often called a Linear Space.
Example 1.1.1 An obvious example of a linear space is R3 with the usual definitions of addition
of 3D vectors and scalar multiplication (Fig 1.1).
Figure 1.1: A pictorial example of some vectors belonging to the linear space R3 .
Example 1.1.2 Another example of a linear space is the set of all continuous functions f(x) on (−∞, ∞) with the usual definition for the addition of functions and the scalar multiplication of functions, usually denoted by C(−∞, ∞).
If f(x) and g(x) ∈ C(−∞, ∞) then f + g and λf(x) are also continuous on (−∞, ∞) and the axioms are easily verified.
Example 1.1.3 A sub-space of C(−∞, ∞) is Pn: the space of polynomials of degree at most n. [We will meet this vector space later in the course.]
Note, it must be of degree at most n so that addition (and subtraction) produce members of Pn, e.g. (x − x²) ∈ P₂, x² ∈ P₂ and (x − x²) + x² = x ∈ P₂.
1.2
1.3 Vector Norms
1.3.1 Definition of a Norm
We require some method to measure the magnitude of a vector for error analysis. We generalise
the concept of a length of a vector in 3D to n dimensions by defining a norm.
Given a vector/linear space V, then a norm, denoted by ‖x‖ for x ∈ V, is a real number such that

‖x‖ > 0, ∀ x ≠ 0, (‖0‖ = 0)    (1.1)

‖λx‖ = |λ| ‖x‖, ∀ λ ∈ R    (1.2)

‖x + y‖ ≤ ‖x‖ + ‖y‖.    (1.3)
The norm is a measure of the size of the vector x where Equation (1.1) requires the size to be positive, Equation (1.2) requires the size to scale as the vector is scaled, and Equation (1.3) is known as the triangle inequality and has its origins in the notion of distance in R³.
Any mapping of an n-dimensional Vector Space onto a subset of R that satisfies these three requirements can be called a norm. The space together with a defined norm is called a Normed Linear Space.
Example 1.3.1 For the vector space V = Rⁿ with x ∈ V given by x = (x₁, x₂, . . . , xₙ) an obvious definition of a norm is

‖x‖ = (x₁² + x₂² + ⋯ + xₙ²)^(1/2).

This is just a generalisation of the normed linear space V = R³ with the norm defined as the magnitude of the vector x ∈ R³.
Example 1.3.2 Another norm on V = Rⁿ is

‖x‖ = max_{1≤i≤n} |xᵢ|,

and it is easy to verify that the three axioms are obeyed (see Tutorial sheet 1).
Example 1.3.3 Let V = C[a, b], the space of all continuous functions on the interval [a, b], and define

‖f‖ = { ∫ₐᵇ (f(x))² dx }^(1/2).
1.3.2 Inner Products
One of the most familiar norms is the magnitude of a vector, x = (x₁, x₂, x₃) ∈ R³ (Example 1.3.1). This norm is commonly denoted |x| and equals

|x| = (x₁² + x₂² + x₃²)^(1/2) = (x · x)^(1/2),

i.e. it is equal to the square root of the dot product of x with itself. The dot product of a vector with itself is an example of an Inner Product. Some other norms are also derived from inner products.
In 1.3.1 we presented examples of Normed Linear Spaces. Two of these can be obtained from the
alternative route of Inner Product Spaces and it is useful to observe this fact as it gives access to an
important result, namely the Cauchy-Schwarz Inequality. For such spaces
‖x‖ = {⟨x, x⟩}^(1/2)

where

⟨x, y⟩ = ∑_{i=1}^{n} xᵢ yᵢ    in Example 1.3.1

and

⟨f, g⟩ = ∫ₐᵇ f(x) g(x) dx    in Example 1.3.3.
Hence, inner products give rise to norms but not all norms can be cast in terms of inner products.
The formal definition of an Inner Product is given on Tutorial Sheet 1 and it is easy to show that
the above two examples satisfy the stated requirements. The Cauchy-Schwarz Inequality is given by
⟨x, y⟩² ≤ ⟨x, x⟩ ⟨y, y⟩
and can be used to confirm the triangle inequality for norms. For example, given

‖x‖ = {⟨x, x⟩}^(1/2),

then

‖x + y‖² = ⟨x + y, x + y⟩
         = ⟨x, x⟩ + ⟨y, y⟩ + 2⟨x, y⟩
         ≤ ⟨x, x⟩ + ⟨y, y⟩ + 2 {⟨x, x⟩⟨y, y⟩}^(1/2)    (using the C–S inequality)
         = (‖x‖ + ‖y‖)²    (1.4)
and hence the triangle inequality holds for all such norms. A particular example is given on
Tutorial Sheet 1.
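The Cauchy–Schwarz inequality and the induced triangle inequality (1.4) can also be checked numerically. The sketch below is a minimal pure-Python illustration (the helper names `inner` and `norm` are mine, not from the notes), testing both inequalities on random vectors in R⁵.

```python
import math
import random

def inner(x, y):
    # Euclidean inner product <x, y> = sum_i x_i y_i
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    # induced norm ||x|| = <x, x>^(1/2)
    return math.sqrt(inner(x, x))

random.seed(1)
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(5)]
    y = [random.uniform(-10, 10) for _ in range(5)]
    # Cauchy-Schwarz: <x, y>^2 <= <x, x><y, y> (small tolerance for rounding)
    assert inner(x, y) ** 2 <= inner(x, x) * inner(y, y) + 1e-9
    # triangle inequality (1.3), as derived in (1.4)
    s = [a + b for a, b in zip(x, y)]
    assert norm(s) <= norm(x) + norm(y) + 1e-9
print("verified for 1000 random pairs")
```

The tolerance only guards against floating-point rounding; the inequalities themselves are exact.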
1.3.3 Commonly Used Norms
In theory, any mapping of an n-dimensional space onto real numbers that satisfies (1.1)-(1.3) is a
norm. In practice we are only concerned with a small number of Normed Linear Spaces and the
most frequently used are the following:
1. For the vector space Rⁿ,

‖x‖_p = ( ∑_{i=1}^{n} |xᵢ|^p )^(1/p),  p ≥ 1,

is known as the Lp norm, and Rⁿ with this norm is known as the Lp normed linear space. The most common are the one norm, L₁, and the two norm, L₂, linear spaces where p = 1 and p = 2, respectively.
2. The other standard norm for the space Rⁿ is the infinity, or maximum, norm given by

‖x‖_∞ = max_{1≤i≤n} |xᵢ|.

The vector space Rⁿ together with the infinity norm is commonly denoted L∞.
Example 1.3.4 Consider the vector x = (3, −1, 2, 0, 4), which belongs to the vector space R⁵. Then determine its (i) one norm, (ii) two norm and (iii) infinity norm.
(i) One norm: ‖x‖₁ = |3| + |−1| + |2| + |0| + |4| = 10.
(ii) Two norm: ‖x‖₂ = (9 + 1 + 4 + 0 + 16)^(1/2) = √30 ≈ 5.48.
(iii) Infinity norm: ‖x‖_∞ = max(3, 1, 2, 0, 4) = 4.
Norms of functions are defined in an analogous way, e.g.

‖f(x)‖_p = ( ∫ |f(x)|^p dx )^(1/p),  p ≥ 1.
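The three vector norms in Example 1.3.4 are easy to compute directly. A small pure-Python sketch (the function names are mine, and x₂ = −1 is taken as in the one-norm working):

```python
import math

def one_norm(x):
    # ||x||_1: sum of absolute values
    return sum(abs(v) for v in x)

def two_norm(x):
    # ||x||_2: square root of the sum of squares
    return math.sqrt(sum(v * v for v in x))

def inf_norm(x):
    # ||x||_inf: largest absolute value
    return max(abs(v) for v in x)

x = [3, -1, 2, 0, 4]
print(one_norm(x))   # 10
print(two_norm(x))   # sqrt(30) ~ 5.477
print(inf_norm(x))   # 4
```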
1.4 Sub-ordinate Matrix Norms
We are interested in analysing methods for solving linear systems so we need to be able to measure
the size of vectors in Rn and any associated matrices in a compatible way. To achieve this end, we
define a Sub-ordinate Matrix Norm.
For the Normed Linear Space {Rⁿ, ‖x‖}, where ‖x‖ is some norm, we define the norm of the matrix A (n × n) which is sub-ordinate to the vector norm ‖x‖ as

‖A‖ = max_{x≠0} ‖Ax‖ / ‖x‖ .

Note, Ax is a vector, x ∈ Rⁿ ⇒ Ax ∈ Rⁿ, so ‖A‖ is the largest value of the vector norm of Ax normalised over all non-zero vectors x.
Not surprisingly, the three requirements of a vector norm (1.1–1.3) are properties of ‖A‖. There are two further properties which are a consequence of the definition of ‖A‖. Hence, sub-ordinate matrix norms satisfy the following five rules:

‖A‖ > 0, ∀ A ≠ 0,    (1.5)

‖λA‖ = |λ| ‖A‖,    (1.6)

‖A + B‖ ≤ ‖A‖ + ‖B‖,    (1.7)

‖Ax‖ ≤ ‖A‖ ‖x‖,    (1.8)

‖AB‖ ≤ ‖A‖ ‖B‖.    (1.9)
These five rules, together with the three for vector norms (1.1–1.3), provide the means for an analysis of a linear system. First, let us justify the above results:
Rule 1 (1.5): For a matrix A (n × n) where A ≠ 0, ∃ x ∈ Rⁿ, x ≠ 0, such that the vector Ax ≠ 0. So both ‖Ax‖ > 0 and ‖x‖ > 0, thus ‖A‖ > 0.
Also, if A ≡ 0 then Ax = 0 ∀ x and ‖A‖ = 0.
Rule 2 (1.6): Since λAx is a vector and therefore satisfies (1.2) we can show that

‖λA‖ = max_{x≠0} ‖λAx‖/‖x‖ = max_{x≠0} |λ| ‖Ax‖/‖x‖ = |λ| max_{x≠0} ‖Ax‖/‖x‖ = |λ| ‖A‖ .
Rule 3 (1.7): Let x_m denote a (non-zero) vector at which the maximum is attained, so that

‖A + B‖ = max_{x≠0} ‖(A + B)x‖/‖x‖ = ‖(A + B)x_m‖/‖x_m‖
        ≤ ( ‖Ax_m‖ + ‖Bx_m‖ ) / ‖x_m‖
        ≤ max_{x≠0} ‖Ax‖/‖x‖ + max_{x≠0} ‖Bx‖/‖x‖ .

Hence

‖A + B‖ ≤ ‖A‖ + ‖B‖ .
Rule 4 (1.8): By definition,

‖A‖ ≥ ‖Ax‖/‖x‖    for any x ≠ 0.

Hence

‖Ax‖ ≤ ‖A‖ ‖x‖    (including x ≡ 0).
Rule 5 (1.9): Finally, consider ‖AB‖, where A and B are n × n matrices. Using a similar argument to that used in Rule 3, with x_m a maximising vector, we have

‖AB‖ = max_{x≠0} ‖ABx‖/‖x‖ = ‖ABx_m‖/‖x_m‖ ≤ ‖A‖ ‖Bx_m‖/‖x_m‖ ≤ ‖A‖ ‖B‖ ‖x_m‖/‖x_m‖ = ‖A‖ ‖B‖ .

1.4.1 Calculating Matrix Norms
Clearly, the sub-ordinate matrix norm is not easy to calculate directly from its definition as in theory all vectors x ∈ Rⁿ should be considered. Here we will consider the commonly used matrix norms and consider practical ways to calculate them.
The easiest matrix norms to compute are the matrix norms sub-ordinate to the L₁ and L∞ vector norms. These are

‖A‖₁ = max_{x≠0} ( ∑_{i=1}^{n} |(Ax)ᵢ| ) / ( ∑_{i=1}^{n} |xᵢ| )

and

‖A‖_∞ = max_{x≠0} ( max_{1≤i≤n} |(Ax)ᵢ| ) / ( max_{1≤i≤n} |xᵢ| ) .

Consider the one norm and write y = Ax, so that yᵢ = ∑_{j=1}^{n} a_{ij} xⱼ and

‖Ax‖₁ = ‖y‖₁ = ∑_{i=1}^{n} |yᵢ| = ∑_{i=1}^{n} | ∑_{j=1}^{n} a_{ij} xⱼ | .
Now

| ∑_{j=1}^{n} a_{ij} xⱼ | ≤ ∑_{j=1}^{n} |a_{ij} xⱼ| = ∑_{j=1}^{n} |a_{ij}| |xⱼ| .
Thus

‖Ax‖₁ ≤ ∑_{i=1}^{n} ∑_{j=1}^{n} |a_{ij}| |xⱼ| = ∑_{j=1}^{n} ( ∑_{i=1}^{n} |a_{ij}| ) |xⱼ| ≤ S_m ∑_{j=1}^{n} |xⱼ| = S_m ‖x‖₁ ,

where

S_m = max_{1≤j≤n} ∑_{i=1}^{n} |a_{ij}| ,

and hence

‖Ax‖₁ / ‖x‖₁ ≤ S_m ,

where S_m is the maximum column sum of absolute values. This is true for all non-zero x. Hence,

‖A‖₁ ≤ S_m .
We need to determine now if S_m is the maximum value of ‖Ax‖₁/‖x‖₁. Let column m be a column whose sum of absolute values equals S_m, and choose the particular vector x with

xⱼ = 0, j ≠ m;  xⱼ = 1, j = m.

Then ‖x‖₁ = 1 and

‖Ax‖₁ = ∑_{i=1}^{n} |yᵢ| = ∑_{i=1}^{n} |a_{im}| = S_m ,

so that

‖Ax‖₁ / ‖x‖₁ = S_m / 1 = S_m .
This means that the bound can be reached with a suitable x and so

‖A‖₁ = max_{x≠0} ‖Ax‖₁ / ‖x‖₁ = S_m ,
where Sm is the maximum column sum of absolute values. Hence, Sm is not just an upper bound,
it is actually the maximum!
Example 1.4.1: Determine the matrix norm sub-ordinate to (i) the one norm and (ii) the infinity norm for 3 × 3 matrices A and B, where B is symmetric. Taking column sums of absolute values gives

‖A‖₁ = max(11, 8, 7) = 11    and    ‖B‖₁ = max(6, 5, 2) = 6 .

Note, since B is symmetric, the maximum column sum of absolute values equals the maximum row sum of absolute values, i.e. ‖B‖₁ = ‖B‖_∞ = 6.
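The column-sum and row-sum rules make the 1-norm and ∞-norm cheap to evaluate. Below is a pure-Python sketch; the matrix A is an illustrative example of my own, not one taken from the notes.

```python
def matrix_one_norm(A):
    # ||A||_1 = maximum column sum of absolute values
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def matrix_inf_norm(A):
    # ||A||_inf = maximum row sum of absolute values
    return max(sum(abs(v) for v in row) for row in A)

A = [[3, -6, 2],
     [-5, 0, 4],
     [3, 2, -7]]    # illustrative matrix (not from the notes)
print(matrix_one_norm(A))   # column sums 11, 8, 13 -> 13
print(matrix_inf_norm(A))   # row sums 11, 9, 12 -> 12
```

Note the two norms differ here; they coincide only for special cases such as symmetric matrices.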
1.5
Spectral Radius
A useful, and important, quantity associated with matrices is the Spectral Radius of a matrix. An n × n matrix A has n eigenvalues λᵢ, i = 1, . . . , n. The Spectral Radius of A, which is denoted by ρ(A), is defined as

ρ(A) = max_{1≤i≤n} |λᵢ| .
Note, the spectral radius is not a norm! This can be easily shown.
Let

A = [ 1  0 ]      and      B = [ 0  2 ]
    [ 2  0 ]                   [ 0  1 ]

which gives ρ(A) = 1 and ρ(B) = 1. However,

A + B = [ 1  2 ]      and      ρ(A + B) = 3 > ρ(A) + ρ(B) = 2 .
        [ 2  1 ]

So the spectral radius does not satisfy (1.3), rule 3 for a norm.
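This counterexample can be verified directly: for a 2 × 2 matrix the eigenvalues are the roots of the characteristic polynomial λ² − tr(M)λ + det(M) = 0. A pure-Python sketch (the helper name is mine, and it assumes real eigenvalues, which holds for these matrices):

```python
import math

def spectral_radius_2x2(M):
    # roots of lambda^2 - tr(M) lambda + det(M) = 0 (real-eigenvalue case)
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))

A = [[1, 0], [2, 0]]
B = [[0, 2], [0, 1]]
AplusB = [[1, 2], [2, 1]]
print(spectral_radius_2x2(A))        # 1.0
print(spectral_radius_2x2(B))        # 1.0
print(spectral_radius_2x2(AplusB))   # 3.0 > 1.0 + 1.0, so rule (1.3) fails
```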
1.6 Conditioning of Linear Systems
Consider the solution of the linear system

Ax = b,    where A is n × n, x is n × 1 and b is n × 1,

and suppose the elements of b are subject to small errors δb so that instead of finding x, we find x̃, where

A x̃ = b + δb .
How big is the disturbance in x in relation to the disturbance in b? (i.e. How sensitive is this system of linear equations?) We have

Ax = b    and    A x̃ = b + δb,

so,

A(x̃ − x) = δb

and hence the critical change in x is given by

(x̃ − x) = A⁻¹ δb .
We are interested in the size of the errors, so let us take the norm:

‖x̃ − x‖ = ‖A⁻¹ δb‖ ≤ ‖A⁻¹‖ ‖δb‖

for some norm. This gives a measure of the actual change in the solution. However, a more useful measure is the relative change and so we also consider,

‖b‖ = ‖Ax‖ ≤ ‖A‖ ‖x‖

so

1/‖x‖ ≤ ‖A‖ / ‖b‖ .

Hence, the relative change in x, given by

‖x̃ − x‖ / ‖x‖ ≤ ‖A‖ ‖A⁻¹‖ ‖δb‖ / ‖b‖ ,

is at worst the relative change in b times ‖A‖ ‖A⁻¹‖.
The quantity ‖A‖ ‖A⁻¹‖ = K(A) is called the condition number of A with respect to the norm chosen. The condition number gives an indication of the potential sensitivity of a linear system of equations. The smaller the condition number, K(A), the smaller the change in x. If K(A) is very large, the solution x̃ is considered unreliable.
We would like the condition number to be small, but it is easy to see that it will never be < 1: for any sub-ordinate norm ‖I‖ = 1, so 1 = ‖A A⁻¹‖ ≤ ‖A‖ ‖A⁻¹‖ = K(A). Hence,

K(A) ≥ 1 .
If K(A) is modest then we can be confident that small changes to b do not seriously affect the solution. However, if K(A) ≫ 1 (very large) then small changes to b may produce large changes in x. When this happens, we say the system is ill-conditioned.
How large is large? This all depends on the system. Remember, the condition number helps in finding the relative error:

‖x̃ − x‖ / ‖x‖ ≤ ‖A‖ ‖A⁻¹‖ ‖δb‖ / ‖b‖ .

If the relative error ‖δb‖/‖b‖ = 10⁻⁶ and we wish to know x to within a relative error of 10⁻⁴, then provided the condition number satisfies K(A) < 100 it would be considered small.
1.6.1 The Wilson Matrix
Example 1.6.1: We will consider an innocent looking set of equations that result in a large condition number.
Consider the linear system Wx = b, where W is the Wilson Matrix,

W = [ 10   7   8   7 ]
    [  7   5   6   5 ]
    [  8   6  10   9 ]
    [  7   5   9  10 ]

so that

Wx = b = (32, 23, 33, 31)ᵀ

has the exact solution x = (1, 1, 1, 1)ᵀ. Now perturb the right-hand side to

b + δb = (32.1, 22.9, 33.1, 30.9)ᵀ,
hence

δb = (+0.1, −0.1, +0.1, −0.1)ᵀ .
The inverse of the Wilson matrix is

W⁻¹ = [  25  −41   10   −6 ]
      [ −41   68  −17   10 ]
      [  10  −17    5   −3 ]
      [  −6   10   −3    2 ] .

Then from W x̃ = b + δb, we find

x̃ = W⁻¹ b + W⁻¹ δb = (1, 1, 1, 1)ᵀ + (8.2, −13.6, 3.5, −2.1)ᵀ .
It is clear that the system is sensitive to changes: a small change to b has had a very large effect on
the solution. So the Wilson Matrix is an example of an ill-conditioned matrix and we would expect
the condition number to be very large.
To evaluate the condition number, K(W) for the Wilson Matrix we need to select a particular norm.
First, we select the 1-norm (maximum column sum of absolute values) and estimate the error
‖W‖₁ = 33    and    ‖W⁻¹‖₁ = 136 ,

and hence,

K₁(W) = ‖W‖₁ ‖W⁻¹‖₁ = 33 × 136 = 4488,
which of course is considerably bigger than 1.
Remember that

‖x̃ − x‖₁ / ‖x‖₁ ≤ K₁(W) ‖δb‖₁ / ‖b‖₁

such that the

relative error in x ≤ 4488 × (|0.1| + |−0.1| + |0.1| + |−0.1|) / (|32| + |23| + |33| + |31|) = 4488 × 0.4/119 ≈ 15 .
So far, we have used the 1-norm but it is more natural to look at the ∞-norm, which deals with the maximum values.
Since W is symmetric, ‖W‖₁ = ‖W‖_∞ and ‖W⁻¹‖₁ = ‖W⁻¹‖_∞, so

‖x̃ − x‖_∞ / ‖x‖_∞ ≤ K_∞(W) ‖δb‖_∞ / ‖b‖_∞ = 4488 × max(|0.1|, |−0.1|, |0.1|, |−0.1|) / max(|32|, |23|, |33|, |31|) = 4488 × 0.1/33 = 13.6 .
This is exactly equal to the biggest error we found, so a very realistic (reliable) estimate. For this
example, the bound provides a good estimate of the scale of any change in the solution.
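The Wilson matrix calculation can be reproduced with a few lines of pure Python; `Winv` below is the exact integer inverse of W, and the helper names are mine.

```python
def mat_vec(A, x):
    # matrix-vector product Ax
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def mat_one_norm(A):
    # maximum column sum of absolute values
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

W = [[10, 7, 8, 7],
     [7, 5, 6, 5],
     [8, 6, 10, 9],
     [7, 5, 9, 10]]
Winv = [[25, -41, 10, -6],     # exact inverse of the Wilson matrix
        [-41, 68, -17, 10],
        [10, -17, 5, -3],
        [-6, 10, -3, 2]]

print(mat_one_norm(W), mat_one_norm(Winv))    # 33 136
print(mat_one_norm(W) * mat_one_norm(Winv))   # K_1(W) = 4488

db = [0.1, -0.1, 0.1, -0.1]
print(mat_vec(Winv, db))   # W^-1 db, approximately (8.2, -13.6, 3.5, -2.1)
```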
1.7 Estimating the Norm of an Inverse
1.7.1 A Bound on the Norm of (I + A)⁻¹
To estimate condition numbers of matrices we need to find norms of inverses. To find an inverse,
we have to solve a linear system and if the linear system is sensitive, how reliable is the value we
get for A1 ? Ideally, we wish to estimate kA1 k without needing to find A1 . One possible result
that might help is the following:
Suppose B is a non-singular matrix,

B B⁻¹ = I .

Write B = I + A so

(I + A)(I + A)⁻¹ = I ,

or

(I + A)⁻¹ + A(I + A)⁻¹ = I ,

so

(I + A)⁻¹ = I − A(I + A)⁻¹ .
Taking norms and, using Rules 3 & 5 (1.7) and (1.9), we find

‖(I + A)⁻¹‖ = ‖I − A(I + A)⁻¹‖ ≤ ‖I‖ + ‖A(I + A)⁻¹‖ ≤ 1 + ‖A‖ ‖(I + A)⁻¹‖ ,

so

‖(I + A)⁻¹‖ (1 − ‖A‖) ≤ 1 .

If ‖A‖ < 1 then

‖B⁻¹‖ = ‖(I + A)⁻¹‖ ≤ 1 / (1 − ‖A‖) ,

where B = I + A.
Example 1.7.1: Consider the matrix

B = [  1   1/4  1/4 ]
    [ 1/4   1   1/4 ]
    [ 1/4  1/4   1  ]

then

A = B − I = [  0   1/4  1/4 ]
            [ 1/4   0   1/4 ]
            [ 1/4  1/4   0  ] .

Using the infinity norm, ‖A‖ = 1/2 < 1, so

‖B⁻¹‖ ≤ 1 / (1 − 1/2) = 2,

and ‖B‖ = 1.5. Hence K(B) ≤ 1.5 × 2 = 3.
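One way to see the bound ‖(I + A)⁻¹‖ ≤ 1/(1 − ‖A‖) in action is through the Neumann series (I + A)⁻¹ = I − A + A² − A³ + ⋯, which converges whenever ‖A‖ < 1. The pure-Python sketch below (helper names are mine) sums the series for the matrix A of Example 1.7.1 and checks that the ∞-norm of the result stays below the bound 2.

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def inf_norm(A):
    # maximum row sum of absolute values
    return max(sum(abs(v) for v in row) for row in A)

I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
A = [[0.0, 0.25, 0.25],
     [0.25, 0.0, 0.25],
     [0.25, 0.25, 0.0]]        # A = B - I from Example 1.7.1, ||A||_inf = 0.5

negA = [[-v for v in row] for row in A]
term, S = I, I
for _ in range(60):
    term = mat_mul(term, negA)  # next Neumann-series term (-A)^k
    S = mat_add(S, term)

print(inf_norm(S))   # ~1.556, safely below the bound 1/(1 - 0.5) = 2
```

The converged norm (14/9 ≈ 1.556) shows the bound is an over-estimate here, but a cheap and safe one.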
1.7.2 Eigenvalues and the Condition Number
Every sub-ordinate matrix norm is greater than (or equal to) the absolute value of every eigenvalue of the matrix: if Ae = λe with e ≠ 0, then ‖A‖ ≥ ‖Ae‖/‖e‖ = |λ|. Suppose

|λ₁| ≥ |λ₂| ≥ ⋯ ≥ |λ_{n−1}| ≥ |λ_n| > 0,

i.e. there is a largest and a smallest eigenvalue. Then, for all sub-ordinate norms,

‖A‖ ≥ |λ₁| = max |eigenvalues| = ρ(A).
Furthermore, if the eigenvalues of A are λᵢ and A is non-singular, then from Aeᵢ = λᵢeᵢ we have

(1/λᵢ) eᵢ = A⁻¹ eᵢ ,

so the eigenvalues of A⁻¹ are 1/λᵢ (A non-singular ⇒ λᵢ ≠ 0). Hence, the spectral radius bound implies

‖A⁻¹‖ ≥ max_i |1/λᵢ| = 1 / |λ_n| .
Thus, the condition number for A,

K(A) = ‖A‖ ‖A⁻¹‖ ≥ |λ₁| / |λ_n| ,

is bounded below by the ratio of the largest to the smallest eigenvalue. This ratio gives an indication of just how big the condition number can be, i.e. if the ratio is large, then K(A) will be large (and the matrix is ill-conditioned). Also, this result illustrates another simple result, namely that scaling a matrix does not affect the condition number:

K(λA) = K(A) .
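For a symmetric 2 × 2 matrix the eigenvalues, and hence the lower bound |λ₁|/|λₙ| on K(A), can be computed in closed form. A small pure-Python sketch (the matrix is my own illustration, not from the notes), which also shows that scaling leaves the ratio unchanged:

```python
import math

def eigs_sym_2x2(a, b, d):
    # eigenvalues of the symmetric matrix [[a, b], [b, d]]
    mean = (a + d) / 2
    r = math.hypot((a - d) / 2, b)
    return mean + r, mean - r

l1, l2 = eigs_sym_2x2(2, 1, 2)      # eigenvalues 3 and 1
print(abs(l1) / abs(l2))            # 3.0: K(A) >= |l1| / |l2| = 3

s1, s2 = eigs_sym_2x2(20, 10, 20)   # the scaled matrix 10A
print(abs(s1) / abs(s2))            # still 3.0: scaling changes nothing
```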
1.8 Examples of Ill-conditioned Matrices
Example 1.8.1: Consider the matrix

A = [ 1   1 + 10⁻⁷ ]
    [ 1       1    ]

then det(A) = −10⁻⁷, which is very small. The eigenvalues of A are given by

(1 − λ)² − (1 + 10⁻⁷) = 0 ,

or, using the expansion (1 + x)^(1/2) ≈ 1 + ½x − ⅛x² + ⋯ ,

(1 − λ) = ±(1 + 10⁻⁷)^(1/2) ≈ ±(1 + ½ × 10⁻⁷) .

Hence, we find eigenvalues

λ₁ ≈ 2    and    λ₂ ≈ −½ × 10⁻⁷ ,

and hence the condition number satisfies K(A) ≥ |λ₁|/|λ₂| ≈ 4 × 10⁷, so the matrix is ill-conditioned.
Example 1.8.2: The Hilbert Matrix is notorious for being ill-conditioned.
Consider the n × n Hilbert matrix

H = [  1    1/2   1/3   ⋯    1/n      ]
    [ 1/2   1/3   1/4   ⋯             ]
    [ 1/3   1/4   1/5   ⋯             ]
    [  ⋮                      ⋮       ]
    [ 1/n    ⋯         ⋯   1/(2n − 1) ] .
The condition numbers for various H (n × n) matrices are given in the table below.

n             3       4          5          6
K_∞(H)      748     2.8 × 10⁴  9.4 × 10⁵  2.9 × 10⁷
Typical single precision, floating point accuracy on a computer is approximately 0.6 × 10⁻⁷. Apply Gaussian Elimination to a 6 × 6 Hilbert matrix on a computer with single precision arithmetic and the result is likely to be subject to significant error! Such error can be avoided with packages like Maple which allow you to perform calculations using exact arithmetic, i.e. all digits will be carried in the calculations. However, error can still occur if the matrix is not specified properly. If the elements are not generated exactly, the wrong result can still be obtained.
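The exact-arithmetic remark can be illustrated in Python with the standard-library `fractions` module, which plays the same role here as exact arithmetic in Maple. The sketch below (helper names are mine) builds the Hilbert matrix, inverts it by Gauss–Jordan elimination in rational arithmetic, and recovers the 3 × 3 condition number from the table exactly (for the symmetric H the 1-norm and ∞-norm agree).

```python
from fractions import Fraction

def hilbert(n):
    # h_ij = 1/(i + j - 1) with 1-based indices
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def inverse(A):
    # Gauss-Jordan elimination on the augmented matrix [A | I],
    # performed entirely in exact rational arithmetic
    n = len(A)
    M = [list(row) + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [row[n:] for row in M]

def one_norm(A):
    # maximum column sum of absolute values
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

H = hilbert(3)
K1 = one_norm(H) * one_norm(inverse(H))
print(K1)   # 748
```

Because every entry is a `Fraction`, no rounding error enters at any stage, matching the "all digits carried" behaviour described above.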