Practice Exercise - Chapter 2



Exercises

2.1. Let x' = [5, 1, 3] and y' = [-1, 3, 1].


(a) Graph the two vectors.
(b) Find (i) the length of x, (ii) the angle between x and y, and (iii) the projection of y on x.
(c) Since x̄ = 3 and ȳ = 1, graph [5 - 3, 1 - 3, 3 - 3] = [2, -2, 0] and
[-1 - 1, 3 - 1, 1 - 1] = [-2, 2, 0].
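A quick numerical check of part (b), sketched with NumPy using the chapter's formulas: length L_x = √(x'x), cos(angle) = x'y/(L_x L_y), and projection (x'y/x'x)x.

```python
import numpy as np

x = np.array([5.0, 1.0, 3.0])
y = np.array([-1.0, 3.0, 1.0])

length_x = np.sqrt(x @ x)                    # L_x = sqrt(x'x)
cos_angle = (x @ y) / (np.sqrt(x @ x) * np.sqrt(y @ y))
angle_deg = np.degrees(np.arccos(cos_angle)) # angle between x and y
proj_y_on_x = ((x @ y) / (x @ x)) * x        # (x'y / x'x) x

print(length_x, angle_deg, proj_y_on_x)
```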
2.2. Given the matrices

perform the indicated multiplications.


(a) 5A
(b) BA
(c) A'B'
(d) C'B
(e) Is AB defined?
2.3. Verify the following properties of the transpose when

A = [  ], B = [  ], and C = [  ]
(a) (A')' = A
(b) (C')^{-1} = (C^{-1})'
(c) (AB)' = B'A'
(d) For general A (m X k) and B (k X l), (AB)' = B'A'.
2.4. When A^{-1} and B^{-1} exist, prove each of the following.
(a) (A')^{-1} = (A^{-1})'
(b) (AB)^{-1} = B^{-1}A^{-1}
Hint: Part (a) can be proved by noting that AA^{-1} = I, I = I', and (AA^{-1})' = (A^{-1})'A'.
Part (b) follows from (B^{-1}A^{-1})AB = B^{-1}(A^{-1}A)B = B^{-1}B = I.
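A numerical sanity check of both identities (a NumPy sketch; the matrices are arbitrary invertible examples, not from the text):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # any invertible matrix works
B = np.array([[1.0, 4.0], [0.0, 2.0]])

# (a) (A')^{-1} = (A^{-1})'
print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))   # True

# (b) (AB)^{-1} = B^{-1} A^{-1}
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))      # True
```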
2.5. Check that

Q = [  5/13   12/13 ]
    [ -12/13   5/13 ]

is an orthogonal matrix.
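A NumPy sketch of the check: Q is orthogonal exactly when Q'Q = QQ' = I.

```python
import numpy as np

Q = np.array([[ 5/13, 12/13],
              [-12/13, 5/13]])

print(np.allclose(Q.T @ Q, np.eye(2)))   # True, so Q is orthogonal
print(np.linalg.det(Q))                  # +1 here (cf. Exercise 2.13: always +1 or -1)
```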
2.6. Let

A = [  9  -2 ]
    [ -2   6 ]
(a) Is A symmetric?
(b) Show that A is positive definite.
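For part (b), one sufficient check is that both eigenvalues of the symmetric matrix A are positive (a NumPy sketch):

```python
import numpy as np

A = np.array([[ 9.0, -2.0],
              [-2.0,  6.0]])

# A symmetric matrix is positive definite iff all its eigenvalues are > 0
eigenvalues = np.linalg.eigvalsh(A)
print(eigenvalues, np.all(eigenvalues > 0))   # [ 5. 10.] True
```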

2.7. Let A be as given in Exercise 2.6.


(a) Determine the eigenvalues and eigenvectors of A.
(b) Write the spectral decomposition of A.
(c) Find A^{-1}.
(d) Find the eigenvalues and eigenvectors of A^{-1}.
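A sketch of parts (a), (b), and (d) with NumPy: compute the eigenpairs of A from Exercise 2.6 and rebuild A from its spectral decomposition A = λ1 e1 e1' + λ2 e2 e2'.

```python
import numpy as np

A = np.array([[ 9.0, -2.0],
              [-2.0,  6.0]])

lam, P = np.linalg.eigh(A)       # columns of P are normalized eigenvectors

# Spectral decomposition: A = sum_i lam_i e_i e_i'
A_rebuilt = sum(lam[i] * np.outer(P[:, i], P[:, i]) for i in range(2))
print(np.allclose(A, A_rebuilt))              # True

# A^{-1} has the same eigenvectors, with eigenvalues 1/lam_i (part (d))
A_inv = sum((1 / lam[i]) * np.outer(P[:, i], P[:, i]) for i in range(2))
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```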
2.8. Given the matrix

A = [ 1   2 ]
    [ 2  -2 ]

find the eigenvalues λ1 and λ2 and the associated normalized eigenvectors e1 and e2.
Determine the spectral decomposition (2-16) of A.
2.9. Let A be as in Exercise 2.8.
(a) Find A^{-1}.
(b) Compute the eigenvalues and eigenvectors of A^{-1}.
(c) Write the spectral decomposition of A^{-1}, and compare it with that of A from
Exercise 2.8.
2.10. Consider the matrices

A = [ 4      4.001 ]      and      B = [ 4      4.001    ]
    [ 4.001  4.002 ]                   [ 4.001  4.002001 ]

These matrices are identical except for a small difference in the (2, 2) position.
Moreover, the columns of A (and B) are nearly linearly dependent. Show that
A^{-1} ≈ (-3)B^{-1}. Consequently, small changes (perhaps caused by rounding) can give
substantially different inverses.
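A quick demonstration of the instability (a NumPy sketch): the determinants of A and B are tiny and of opposite sign, so the inverses are huge and nearly proportional with factor -3.

```python
import numpy as np

A = np.array([[4.0,   4.001],
              [4.001, 4.002]])
B = np.array([[4.0,   4.001],
              [4.001, 4.002001]])

print(np.linalg.det(A), np.linalg.det(B))   # about -1e-06 and +3e-06

A_inv = np.linalg.inv(A)
B_inv = np.linalg.inv(B)
print(np.allclose(A_inv, -3 * B_inv, rtol=1e-3))   # True: A^{-1} is about -3 B^{-1}
```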
2.11. Show that the determinant of the p X p diagonal matrix A = {aij} with aij = 0, i ≠ j,
is given by the product of the diagonal elements; thus, |A| = a11 a22 ··· app.
Hint: By Definition 2A.24, |A| = a11 A11 + 0 + ··· + 0. Repeat for the submatrix
A11 obtained by deleting the first row and first column of A.
2.12. Show that the determinant of a square symmetric p X p matrix A can be expressed as
the product of its eigenvalues λ1, λ2, ..., λp; that is, |A| = Π_{i=1}^p λi.
Hint: From (2-16) and (2-20), A = PΛP' with P'P = I. From Result 2A.11(e),
|A| = |PΛP'| = |P||ΛP'| = |P||Λ||P'| = |Λ||I|, since |I| = |P'P| = |P'||P|. Apply
Exercise 2.11.
2.13. Show that |Q| = +1 or -1 if Q is a p X p orthogonal matrix.
Hint: |QQ'| = |I|. Also, from Result 2A.11, |Q||Q'| = |Q|^2. Thus, |Q|^2 = |I|. Now
use Exercise 2.11.
2.14. Show that Q'AQ and A have the same eigenvalues if Q is orthogonal (all matrices p X p).
Hint: Let λ be an eigenvalue of A. Then 0 = |A - λI|. By Exercise 2.13 and Result
2A.11(e), we can write 0 = |Q'||A - λI||Q| = |Q'AQ - λI|, since Q'Q = I.
2.15. A quadratic form x'Ax is said to be positive definite if the matrix A is positive definite.
Is the quadratic form 3x1^2 + 3x2^2 - 2x1x2 positive definite?
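One way to answer: write the form as x'Ax with symmetric A = [3 -1; -1 3] (the cross-product coefficient -2 splits evenly across the two off-diagonal positions) and inspect the eigenvalues. A NumPy sketch:

```python
import numpy as np

# 3*x1^2 + 3*x2^2 - 2*x1*x2 = x' A x with symmetric A
A = np.array([[ 3.0, -1.0],
              [-1.0,  3.0]])

print(np.linalg.eigvalsh(A))   # [2. 4.]: both positive, so the form is positive definite
```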
2.16. Consider an arbitrary n X p matrix A. Then A'A is a symmetric p X p matrix. Show
that A'A is necessarily nonnegative definite.
Hint: Set y = Ax so that y'y = x'A'Ax.

2.17. Prove that every eigenvalue of a k X k positive definite matrix A is positive.


Hint: Consider the definition of an eigenvalue, where Ae = λe. Multiply on the left by
e' so that e'Ae = λe'e.
2.18. Consider the sets of points (x1, x2) whose "distances" from the origin are given by

c^2 = 4x1^2 + 3x2^2 - 2√2 x1x2

for c^2 = 1 and for c^2 = 4. Determine the major and minor axes of the ellipses of constant
distances and their associated lengths. Sketch the ellipses of constant distances and
comment on their positions. What will happen as c^2 increases?
2.19. Let A^{1/2} (m X m) = Σ_{i=1}^m √λi ei ei' = PΛ^{1/2}P', where PP' = P'P = I. (The λi's and the ei's are
the eigenvalues and associated normalized eigenvectors of the matrix A.) Show Properties
(1)-(4) of the square-root matrix in (2-22). (Ignore this question)
2.20. Determine the square-root matrix A^{1/2}, using the matrix A in Exercise 2.3. Also, determine A^{-1/2}, and show that A^{1/2}A^{-1/2} = A^{-1/2}A^{1/2} = I.
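A sketch of the construction in 2.19-2.20 with NumPy. Since the matrix from Exercise 2.3 is not reproduced in this copy, a stand-in symmetric positive definite matrix is used; the recipe A^{1/2} = PΛ^{1/2}P' is the same for any such A.

```python
import numpy as np

A = np.array([[2.0, 1.0],      # stand-in symmetric positive definite matrix
              [1.0, 3.0]])

lam, P = np.linalg.eigh(A)
A_half = P @ np.diag(np.sqrt(lam)) @ P.T          # A^{1/2}
A_neg_half = P @ np.diag(1 / np.sqrt(lam)) @ P.T  # A^{-1/2}

print(np.allclose(A_half @ A_half, A))              # A^{1/2} A^{1/2} = A
print(np.allclose(A_half @ A_neg_half, np.eye(2)))  # A^{1/2} A^{-1/2} = I
```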
2.21. (See Result 2A.15) Using the matrix

(a) Calculate A' A and obtain its eigenvalues and eigenvectors.


(b) Calculate AA' and obtain its eigenvalues and eigenvectors. Check that the nonzero
eigenvalues are the same as those in part (a).
(c) Obtain the singular-value decomposition of A.
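A sketch of the connection in parts (a)-(c) with NumPy (the matrix here is an arbitrary 3 X 2 example, since the one from the text is not reproduced): the nonzero eigenvalues of A'A and AA' coincide, and their square roots are the singular values of A.

```python
import numpy as np

A = np.array([[1.0,  1.0],
              [2.0, -2.0],
              [2.0,  2.0]])   # arbitrary example matrix

eig_AtA = np.linalg.eigvalsh(A.T @ A)   # 2 eigenvalues
eig_AAt = np.linalg.eigvalsh(A @ A.T)   # 3 eigenvalues; one is ~0
print(np.sort(eig_AtA), np.sort(eig_AAt))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(np.sort(s**2))                        # equals the nonzero eigenvalues above
print(np.allclose(U @ np.diag(s) @ Vt, A))  # True: A = U diag(s) V'
```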
2.22. (See Result 2A.15) Using the matrix

A = [  ]
(a) Calculate AA' and obtain its eigenvalues and eigenvectors.
(b) Calculate A'A and obtain its eigenvalues and eigenvectors. Check that the nonzero
eigenvalues are the same as those in part (a).
(c) Obtain the singular-value decomposition of A.
2.23. Verify the relationships V^{1/2}ρV^{1/2} = Σ and ρ = (V^{1/2})^{-1}Σ(V^{1/2})^{-1}, where Σ is the
p X p population covariance matrix [Equation (2-32)], ρ is the p X p population correlation matrix [Equation (2-34)], and V^{1/2} is the population standard deviation matrix
[Equation (2-35)]. (Ignore this question)
2.24. Let X have covariance matrix

Find
(a) Σ^{-1}
(b) The eigenvalues and eigenvectors of Σ.
(c) The eigenvalues and eigenvectors of Σ^{-1}.

2.25. Let X have covariance matrix

Σ = [ 25  -2   4 ]
    [ -2   4   1 ]
    [  4   1   9 ]

(a) Determine ρ and V^{1/2}.
(b) Multiply your matrices to check the relation V^{1/2}ρV^{1/2} = Σ.
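A NumPy sketch of part (a): V^{1/2} holds the standard deviations √σii on its diagonal, and ρ = (V^{1/2})^{-1} Σ (V^{1/2})^{-1}.

```python
import numpy as np

Sigma = np.array([[25.0, -2.0, 4.0],
                  [-2.0,  4.0, 1.0],
                  [ 4.0,  1.0, 9.0]])

V_half = np.diag(np.sqrt(np.diag(Sigma)))          # standard deviation matrix
V_half_inv = np.linalg.inv(V_half)
rho = V_half_inv @ Sigma @ V_half_inv              # correlation matrix

print(rho)
print(np.allclose(V_half @ rho @ V_half, Sigma))   # True (part (b))
```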
2.26. Use Σ as given in Exercise 2.25.
(a) Find ρ13.
(b) Find the correlation between X1 and (1/2)X2 + (1/2)X3.
2.27. Derive expressions for the mean and variances of the following linear combinations in
terms of the means and covariances of the random variables X1, X2, and X3.
(a) X1 - 2X2
(b) -X1 + 3X2
(c) X1 + X2 + X3
(d) X1 + 2X2 - X3
(e) 3X1 - 4X2 if X1 and X2 are independent random variables.
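The general pattern behind every part: for Z = c'X, E(Z) = c'μ and Var(Z) = c'Σc. A NumPy sketch with an assumed illustrative μ and Σ (the exercise itself asks for symbolic expressions):

```python
import numpy as np

# Illustrative values only; the exercise wants answers in terms of means and covariances.
mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 9.0, 2.0],
                  [0.5, 2.0, 1.0]])

c = np.array([1.0, -2.0, 0.0])   # part (a): X1 - 2*X2
mean_Z = c @ mu                  # E(Z) = c' mu
var_Z = c @ Sigma @ c            # Var(Z) = c' Sigma c
print(mean_Z, var_Z)
```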
2.28. Show that (Ignore this question)

Cov(c1'X, c2'X) = c1' Σ_X c2

where c1' = [c11, c12, ..., c1p] and c2' = [c21, c22, ..., c2p]. This verifies the off-diagonal
elements of C Σ_X C' in (2-45), or the diagonal elements if c1 = c2.
Hint: By (2-43), Z1 - E(Z1) = c11(X1 - μ1) + ··· + c1p(Xp - μp) and
Z2 - E(Z2) = c21(X1 - μ1) + ··· + c2p(Xp - μp). So Cov(Z1, Z2) =
E[(Z1 - E(Z1))(Z2 - E(Z2))] = E[(c11(X1 - μ1) +
··· + c1p(Xp - μp))(c21(X1 - μ1) + c22(X2 - μ2) + ··· + c2p(Xp - μp))].
The product

(c11(X1 - μ1) + c12(X2 - μ2) + ···
+ c1p(Xp - μp))(c21(X1 - μ1) + c22(X2 - μ2) + ··· + c2p(Xp - μp))

= Σ_{ℓ=1}^p Σ_{m=1}^p c1ℓ c2m (Xℓ - μℓ)(Xm - μm)

has expected value

Σ_{ℓ=1}^p Σ_{m=1}^p c1ℓ c2m σℓm = c1' Σ_X c2.

Verify the last step by the definition of matrix multiplication. The same steps hold for all
elements.

2.29. Consider the arbitrary random vector X' = [X1, X2, X3, X4, X5] with mean vector
μ' = [μ1, μ2, μ3, μ4, μ5]. Partition X into

X = [ X(1) ]
    [ X(2) ]

where

X(1) = [ X1 ]   and   X(2) = [ X3 ]
       [ X2 ]                [ X4 ]
                             [ X5 ]

Let Σ be the covariance matrix of X with general element σik. Partition Σ into the
covariance matrices of X(1) and X(2) and the covariance matrix of an element of X(1)
and an element of X(2).
2.30. You are given the random vector X' = [X1, X2, X3, X4] with mean vector
μX' = [4, 3, 2, 1] and variance-covariance matrix

ΣX = [ 3  0   2   2 ]
     [ 0  1   1   0 ]
     [ 2  1   9  -2 ]
     [ 2  0  -2   4 ]
Partition X as

X = [ X(1) ]   with   X(1) = [ X1 ]   and   X(2) = [ X3 ]
    [ X(2) ]                 [ X2 ]                [ X4 ]

Let

A = [1  2]   and   B = [  ]
and consider the linear combinations AX(1) and BX(2). Find
(a) E(X(1))
(b) E(AX(1))
(c) Cov(X(1))
(d) Cov(AX(1))
(e) E(X(2))
(f) E(BX(2))
(g) Cov(X(2))
(h) Cov(BX(2))
(i) Cov(X(1), X(2))
(j) Cov(AX(1), BX(2))
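A NumPy sketch of the bookkeeping, using the ΣX above and A = [1 2]; the matrix B did not survive in this copy, so a hypothetical B = [1 -1] stands in purely for illustration. The rules used: E(AX(1)) = Aμ(1), Cov(AX(1)) = AΣ11A', and Cov(AX(1), BX(2)) = AΣ12B'.

```python
import numpy as np

mu = np.array([4.0, 3.0, 2.0, 1.0])
Sigma = np.array([[3.0, 0.0,  2.0,  2.0],
                  [0.0, 1.0,  1.0,  0.0],
                  [2.0, 1.0,  9.0, -2.0],
                  [2.0, 0.0, -2.0,  4.0]])

A = np.array([[1.0, 2.0]])
B = np.array([[1.0, -1.0]])      # hypothetical stand-in for the missing B

mu1, mu2 = mu[:2], mu[2:]
S11, S12, S22 = Sigma[:2, :2], Sigma[:2, 2:], Sigma[2:, 2:]

print(A @ mu1)          # (b) E(AX(1)) = A mu(1)
print(A @ S11 @ A.T)    # (d) Cov(AX(1)) = A Sigma11 A'
print(B @ mu2)          # (f) E(BX(2))
print(B @ S22 @ B.T)    # (h) Cov(BX(2))
print(A @ S12 @ B.T)    # (j) Cov(AX(1), BX(2)) = A Sigma12 B'
```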
2.31. Repeat Exercise 2.30, but with A and B replaced by

A = [1  -1]   and   B = [  ]

2.32. You are given the random vector X' = [X1, X2, ..., X5] with mean vector
μX' = [2, 4, -1, 3, 0] and variance-covariance matrix

ΣX = [  4    -1    1/2  -1/2    0 ]
     [ -1     3     1    -1     0 ]
     [ 1/2    1     6     1    -1 ]
     [ -1/2  -1     1     4     0 ]
     [  0     0    -1     0     2 ]
Partition X as

Let

A = [  ]   and   B = [  ]
and consider the linear combinations AX(1) and BX(2). Find
(a) E(X(1))
(b) E(AX(1))
(c) Cov(X(1))
(d) Cov(AX(1))
(e) E(X(2))
(f) E(BX(2))
(g) Cov(X(2))
(h) Cov(BX(2))
(i) Cov(X(1), X(2))
(j) Cov(AX(1), BX(2))
2.33. Repeat Exercise 2.32, but with X partitioned as

X(1) = [ X1 ]   and   X(2) = [ X4 ]
       [ X2 ]                [ X5 ]
       [ X3 ]

and with A and B replaced by

A = [ 1  -1   0 ]   and   B = [ 1   2 ]
    [ 1   3     ]             [ 1  -1 ]

2.34. Consider the vectors b' = [2, -1, 4, 0] and d' = [-1, 3, -2, 1]. Verify the Cauchy-Schwarz
inequality (b'd)^2 ≤ (b'b)(d'd).

2.35. Using the vectors b' = [-4, 3] and d' = [1, 1], verify the extended Cauchy-Schwarz
inequality (b'd)^2 ≤ (b'Bb)(d'B^{-1}d) if

B = [  2  -2 ]
    [ -2   5 ]
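A NumPy sketch checking both inequalities (2.34 and 2.35) numerically:

```python
import numpy as np

# Exercise 2.34: ordinary Cauchy-Schwarz
b = np.array([2.0, -1.0, 4.0, 0.0])
d = np.array([-1.0, 3.0, -2.0, 1.0])
print((b @ d)**2 <= (b @ b) * (d @ d))   # True: 169 <= 315

# Exercise 2.35: extended Cauchy-Schwarz with positive definite B
b = np.array([-4.0, 3.0])
d = np.array([1.0, 1.0])
B = np.array([[ 2.0, -2.0],
              [-2.0,  5.0]])
lhs = (b @ d)**2
rhs = (b @ B @ b) * (d @ np.linalg.inv(B) @ d)
print(lhs, rhs, lhs <= rhs)              # True
```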

2.36. Find the maximum and minimum values of the quadratic form 4x1^2 + 4x2^2 + 6x1x2 for
all points x' = [x1, x2] such that x'x = 1.
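For a symmetric matrix A, the maximum and minimum of x'Ax over x'x = 1 are the largest and smallest eigenvalues of A, attained at the corresponding unit eigenvectors. A NumPy sketch for 2.36, writing the form as x'Ax with the cross term 6x1x2 split into off-diagonal entries of 3:

```python
import numpy as np

# 4*x1^2 + 4*x2^2 + 6*x1*x2 = x' A x with symmetric A
A = np.array([[4.0, 3.0],
              [3.0, 4.0]])

lam = np.linalg.eigvalsh(A)
print(lam.min(), lam.max())   # minimum = 1, maximum = 7 over x'x = 1
```

The same eigenvalue bounds answer 2.37 and 2.38, since the ratio x'Ax/x'x is unchanged when x is rescaled.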
2.37. With A as given in Exercise 2.6, find the maximum value of x'Ax for x'x = 1.
2.38. Find the maximum and minimum values of the ratio x'Ax/x'x for any nonzero vectors
x' = [x1, x2, x3] if

A = [ 13  -4   2 ]
    [ -4  13  -2 ]
    [  2  -2  10 ]
2.39. Show that the product A B C, where A is r X s, B is s X t, and C is t X v, has (i, j)th entry

Σ_{ℓ=1}^s Σ_{k=1}^t aiℓ bℓk ckj

Hint: BC has (ℓ, j)th entry Σ_{k=1}^t bℓk ckj = dℓj. So A(BC) has (i, j)th element
ai1 d1j + ··· + ais dsj = Σ_{ℓ=1}^s Σ_{k=1}^t aiℓ bℓk ckj.
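A quick numerical confirmation of the entry formula (a NumPy sketch with small random integer matrices): the explicit double sum matches the (0, 0) entry of the matrix product.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(2, 3)).astype(float)   # r x s
B = rng.integers(-3, 4, size=(3, 4)).astype(float)   # s x t
C = rng.integers(-3, 4, size=(4, 2)).astype(float)   # t x v

# (i, j)th entry as the explicit double sum over l and k
entry_00 = sum(A[0, l] * B[l, k] * C[k, 0]
               for l in range(3) for k in range(4))
print(entry_00, (A @ B @ C)[0, 0])   # equal
```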
2.40. Verify (2-24): E(X + Y) = E(X) + E(Y) and E(AXB) = AE(X)B.
Hint: X + Y has Xij + Yij as its (i, j)th element. Now, E(Xij + Yij) = E(Xij) + E(Yij)
by a univariate property of expectation, and this last quantity is the (i, j)th element of
E(X) + E(Y). Next (see Exercise 2.39), AXB has (i, j)th entry Σℓ Σk aiℓ Xℓk bkj, and
by the additive property of expectation,

E(Σℓ Σk aiℓ Xℓk bkj) = Σℓ Σk aiℓ E(Xℓk) bkj

which is the (i, j)th element of AE(X)B.


2.41. You are given the random vector X' = [X1, X2, X3, X4] with mean vector
μX = [3, 2, -2, 0] and variance-covariance matrix

ΣX = [ 3  0  0  0 ]
     [ 0  3  0  0 ]
     [ 0  0  3  0 ]
     [ 0  0  0  3 ]

Let

A = [ 1  -1   0   0 ]
    [ 1   1  -2   0 ]
    [ 1   1   1  -3 ]

(a) Find E(AX), the mean of AX.
(b) Find Cov (AX), the variances and covariances of AX.
(c) Which pairs of linear combinations have zero covariances?
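A NumPy sketch for parts (a)-(c), using the mean vector and matrices as reconstructed above: E(AX) = Aμ and Cov(AX) = AΣX A'. Since the rows of A are mutually orthogonal and ΣX = 3I, every off-diagonal covariance is zero.

```python
import numpy as np

mu = np.array([3.0, 2.0, -2.0, 0.0])
Sigma = 3 * np.eye(4)
A = np.array([[1.0, -1.0,  0.0,  0.0],
              [1.0,  1.0, -2.0,  0.0],
              [1.0,  1.0,  1.0, -3.0]])

print(A @ mu)             # (a) E(AX) = A mu
cov_AX = A @ Sigma @ A.T  # (b) Cov(AX) = A Sigma A'
print(cov_AX)             # diag(6, 18, 36); off-diagonals 0, so (c): every pair
```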

2.42. Repeat Exercise 2.41, but with A replaced by

A = [  ]
