Lecture 22


Positive & Negative Definite Matrices & Singular Value Decomposition (SVD)


Definition 1. Let A be a real symmetric matrix. Then A is said to be positive (negative) definite if all
of its eigenvalues are positive (negative).
Definition 2. Let A be a real symmetric matrix. Then A is said to be positive (negative) semi-definite
if all of its eigenvalues are non-negative (non-positive).
Remark 3. 1. If A is positive definite, then det(A) > 0 and tr(A) > 0.
2. If A is a negative definite matrix of order n, then tr(A) < 0; if n is even, det(A) > 0, and if n is odd, det(A) < 0.
3. If A is positive semi-definite, then det(A) ≥ 0 and tr(A) ≥ 0.
4. If A is a negative semi-definite matrix of order n, then tr(A) ≤ 0; if n is even, det(A) ≥ 0, and if n is odd, det(A) ≤ 0.
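The sign facts in the remark can be checked numerically. The sketch below uses a hypothetical 2×2 example (not from the notes), A = [2 1; 1 2], whose eigenvalues are the roots of the characteristic polynomial t² − tr(A)t + det(A) = 0.

```python
# Sketch of Remark 3 on a hypothetical example: A = [2 1; 1 2] is
# symmetric with positive eigenvalues, so det(A) > 0 and tr(A) > 0.
A = [[2, 1], [1, 2]]

det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # 2*2 - 1*1 = 3
tr_A = A[0][0] + A[1][1]                       # 2 + 2 = 4

# For a 2x2 matrix the eigenvalues are the roots of
# t^2 - tr(A)*t + det(A) = 0.
disc = (tr_A ** 2 - 4 * det_A) ** 0.5
eig_small = (tr_A - disc) / 2  # 1.0
eig_large = (tr_A + disc) / 2  # 3.0

print(eig_small, eig_large, det_A, tr_A)  # 1.0 3.0 3 4
```

Both eigenvalues are positive, and, as the remark predicts, so are the determinant and the trace.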

Proposition 4. Let A ∈ M_n(R) be a symmetric matrix. Then

1. A is positive definite if and only if X^T A X > 0 for all 0 ≠ X ∈ R^n.
2. A is negative definite if and only if X^T A X < 0 for all 0 ≠ X ∈ R^n.

Proof. Let A be positive definite. Since A is a real symmetric matrix, A is orthogonally diagonalizable with positive eigenvalues. Therefore, A = P D P^T, where D is a diagonal matrix whose entries are the eigenvalues of A and P is an orthogonal matrix. Thus, X^T A X = X^T P D P^T X = (P^T X)^T D (P^T X) = Y^T D Y, where Y = P^T X ≠ 0. Let Y = (y_1, y_2, ..., y_n)^T. Then X^T A X = Y^T D Y = λ_1 y_1² + λ_2 y_2² + ··· + λ_n y_n² > 0, where the λ_i are the eigenvalues of A.

Conversely, let X^T A X > 0 for all 0 ≠ X ∈ R^n. Let λ ∈ R be an eigenvalue of A and X_0 an eigenvector corresponding to λ. Then X_0^T A X_0 > 0 ⇒ λ X_0^T X_0 > 0. Note that X_0^T X_0 = ‖X_0‖² > 0 as X_0 ≠ 0. Therefore, λ > 0.
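The forward direction of Proposition 4 can be sanity-checked numerically. A minimal sketch, where the matrix and the sample vectors are illustrative choices, not from the notes:

```python
# Check X^T A X > 0 for the positive definite matrix A = [2 1; 1 2]
# (eigenvalues 1 and 3) at a few nonzero sample vectors X.
A = [[2, 1], [1, 2]]

def quad_form(A, x):
    """Return X^T A X for a 2-vector x."""
    ax = [A[0][0] * x[0] + A[0][1] * x[1],
          A[1][0] * x[0] + A[1][1] * x[1]]
    return x[0] * ax[0] + x[1] * ax[1]

samples = [(1, 0), (0, 1), (1, -1), (-3, 2), (0.5, 0.5)]
values = [quad_form(A, x) for x in samples]
print(values)  # [2, 2, 2, 14, 1.5] -- all positive
```

A few positive samples of course do not prove definiteness; the proposition's eigenvalue argument is what guarantees positivity for every nonzero X.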
Proposition 5. Let A ∈ M_n(R) be a symmetric matrix. Then
1. A is positive definite if and only if A = B^T B for some invertible matrix B.
2. A is positive semi-definite if and only if A = B^T B for some matrix B.

Proof. Let A be a positive definite matrix. Since A is symmetric, by the spectral theorem there exists an orthogonal matrix P such that P^T A P = D with D = diag(λ_1, λ_2, ..., λ_n), where the λ_i are the eigenvalues of A. Here each λ_i > 0. Define √D = diag(√λ_1, √λ_2, ..., √λ_n) and set B = √D P^T. Then B is invertible and B^T B = P √D √D P^T = P D P^T = A.

Conversely, X^T A X = X^T B^T B X = (BX)^T(BX) = ‖BX‖². Since B is invertible, BX ≠ 0 whenever X ≠ 0; therefore X^T A X > 0 for X ≠ 0.
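The construction B = √D P^T can be carried out by hand for a small example. The sketch below uses the hypothetical matrix A = [2 1; 1 2], whose eigendata (eigenvalues 1 and 3, eigenvectors (1, −1)/√2 and (1, 1)/√2) is computed by hand, and verifies B^T B = A:

```python
# Sketch of A = B^T B with B = sqrt(D) P^T for the hypothetical matrix
# A = [2 1; 1 2]: eigenvalues 1 and 3, with orthonormal eigenvectors
# (1, -1)/sqrt(2) and (1, 1)/sqrt(2) as the columns of P.
s = 2 ** 0.5
P = [[1 / s, 1 / s],
     [-1 / s, 1 / s]]
sqrtD = [[1, 0], [0, 3 ** 0.5]]  # diag(sqrt(1), sqrt(3))

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

B = matmul(sqrtD, transpose(P))  # B = sqrt(D) P^T, invertible
BtB = matmul(transpose(B), B)    # should recover A = [2 1; 1 2]
print(BtB)
```

Up to floating-point rounding, the product B^T B reproduces the original matrix A.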

Let A ∈ M_n(R). The leading principal minor D_k of A of order k, 1 ≤ k ≤ n, is the determinant of the matrix obtained from A by deleting the last n − k rows and the last n − k columns of A.
Proposition 6. Let A ∈ M_n(R) be a symmetric matrix. Then
1. A is positive definite if and only if D_k > 0 for 1 ≤ k ≤ n.
2. A is negative definite if and only if (−1)^k D_k > 0 for 1 ≤ k ≤ n.
3. If A is positive semi-definite, then D_k ≥ 0 for 1 ≤ k ≤ n. (Show that the converse need not be true.)
4. If A is negative semi-definite, then (−1)^k D_k ≥ 0 for 1 ≤ k ≤ n. (Show that the converse need not be true.)

Proof. Theprove for 
this result has been omitted. To see that converse is not true in case of (3),
1 1 1  
1 1
take A = 1 1 1 . Then D1 = 1, D2 = det
  = 0 and D3 = det(A) = 0. The matrix is
1 1
1 1 1/2
symmetric and Dk ≥ 0 for k = 1, 2, 3. But X T AX = −2 for X = (1, 1, −2)T . Therefore, A is not positive
semi-definite.
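The numbers in this counterexample can be verified directly; a small sketch:

```python
# The counterexample above: every leading principal minor of A is >= 0,
# yet X^T A X = -2 for X = (1, 1, -2)^T, so A is not positive semi-definite.
A = [[1, 1, 1],
     [1, 1, 1],
     [1, 1, 0.5]]
X = [1, 1, -2]

D1 = A[0][0]
D2 = A[0][0] * A[1][1] - A[0][1] * A[1][0]
# 3x3 determinant by cofactor expansion along the first row.
D3 = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
      - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
      + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

AX = [sum(A[i][j] * X[j] for j in range(3)) for i in range(3)]
q = sum(X[i] * AX[i] for i in range(3))  # the quadratic form X^T A X
print(D1, D2, D3, q)  # 1 0 0.0 -2.0
```

All three leading principal minors are non-negative, yet the quadratic form takes a negative value, exactly as claimed.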

Exercise 1. Which of the following matrices are positive definite/negative definite/positive semi-definite/negative semi-definite?

    [1 2; 2 1],   [1 1; 0 1],   [1 1/2 1/3; 1/2 1/3 1/4; 1/3 1/4 1/5],   [1 1 1 1; 1 1 1 1; 1 1 1 1; 1 1 1 0].

Singular-Value Decomposition
Not every matrix is diagonalizable, and diagonalizability can be discussed only for square matrices. Here we discuss a decomposition of an m×n matrix which, for a positive semi-definite matrix, coincides with its known spectral decomposition.

Let A ∈ M_{m×n}. Then a decomposition of the form

A = U Σ V^T,

where U ∈ M_m(R) and V ∈ M_n(R) are orthogonal, and Σ is a rectangular diagonal matrix with non-negative real diagonal entries, is called a singular-value decomposition of A. The non-zero diagonal entries of Σ are called the singular values of A.

When A is a positive semi-definite matrix, an SVD is nothing but the spectral decomposition A = P D P^T for some orthogonal matrix P.

Theorem 7. Let A ∈ M_{m×n}(R). Then A has a singular value decomposition.

Proposition 8. Let A ∈ M_{m×n}(R). Then

1. A^T A is positive semi-definite.
2. A A^T is positive semi-definite.
3. If m ≥ n, then P^T(A^T A)P = D and P'^T(A A^T)P' = D' for some orthogonal matrices P ∈ M_n(R) and P' ∈ M_m(R) with

    D' = [ D  0_{n×(m−n)} ; 0_{(m−n)×n}  0_{(m−n)×(m−n)} ].

Proof. Note that A^T A and A A^T are symmetric matrices. We claim that X^T(A A^T)X ≥ 0 for every X ∈ R^m. Indeed, X^T A A^T X = (A^T X)^T(A^T X) = ‖A^T X‖² ≥ 0. Therefore, A A^T is positive semi-definite; similarly for A^T A. Since A^T A and A A^T are symmetric, they are orthogonally diagonalizable. Therefore, P^T(A^T A)P = D and P'^T(A A^T)P' = D' for some orthogonal matrices P ∈ M_n(R) and P' ∈ M_m(R). Recall that p_{AA^T}(x) = x^{m−n} p_{A^T A}(x), where p_{AA^T}(x) and p_{A^T A}(x) are the characteristic polynomials of A A^T and A^T A respectively. Hence, D' = [ D  0_{n×(m−n)} ; 0_{(m−n)×n}  0_{(m−n)×(m−n)} ].
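Items 1 and 2 can be illustrated numerically: X^T(A^T A)X equals ‖AX‖² and is therefore non-negative for every X. A sketch on a sample 2×4 matrix (an illustrative choice, not prescribed by the proposition):

```python
# X^T (A^T A) X = ||AX||^2 >= 0 for any A; illustrated on a sample 2x4 matrix.
A = [[1, 0, 1, 0],
     [0, 1, 0, 1]]

def quad_form_AtA(A, x):
    """Return X^T (A^T A) X, computed as ||AX||^2."""
    ax = [sum(A[i][j] * x[j] for j in range(4)) for i in range(2)]
    return sum(v * v for v in ax)

samples = [(1, 0, 0, 0), (1, -1, 2, 3), (1, 0, -1, 0), (0.5, 0.25, -2, 1)]
values = [quad_form_AtA(A, x) for x in samples]
print(values)  # [1, 13, 0, 3.8125]
```

Note that the vector (1, 0, −1, 0) gives the value 0: A^T A is only positive semi-definite, not positive definite, since A has a non-trivial null space.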

Method to find an SVD of A
Step 1: Find A A^T, which is a positive semi-definite matrix. Therefore, we can find an orthogonal matrix U ∈ M_m(R) such that

U^T(A A^T)U = D.

Note that the columns of U are (orthonormal) eigenvectors of A A^T.
Step 2: Find A^T A, which is a positive semi-definite matrix. We can find an orthogonal matrix V ∈ M_n(R) such that

V^T(A^T A)V = D'.

Note that the columns of V are (orthonormal) eigenvectors of A^T A.
Step 3: Define a rectangular diagonal matrix Σ ∈ M_{m×n} such that Σ_ii = √λ_i for i = 1, 2, ..., min(m, n), where the λ_i are the common eigenvalues of A^T A and A A^T. The non-zero diagonal entries σ_i correspond to the non-zero eigenvalues of A^T A (equivalently, of A A^T).

Step 4: Verify that U Σ V^T = A.

Remark 9. Let A ∈ M_{m×n}(R) and rank(A) = r. Let U Σ V^T be a singular value decomposition of A. Let U_1, U_2, ..., U_m be the columns of U and V_1, V_2, ..., V_n be the columns of V. Then
1. {U_1, U_2, ..., U_r} is an orthonormal basis of the column space of A.
2. {V_{r+1}, V_{r+2}, ..., V_n} is an orthonormal basis of the null space of A.
3. {V_1, V_2, ..., V_r} is an orthonormal basis of the column space of A^T, i.e., the row space of A.
4. {U_{r+1}, U_{r+2}, ..., U_m} is an orthonormal basis of the null space of A^T.

Proof. Note that AV = U Σ ⇒ AV_j = σ_j U_j for j = 1, 2, ..., r and AV_j = 0 for j = r + 1, ..., n.

Since the nullity of A is n − r, the n − r orthonormal vectors V_{r+1}, V_{r+2}, ..., V_n form an orthonormal basis of N(A). Since σ_j > 0 and AV_j = σ_j U_j, we have U_j = (1/σ_j)AV_j ∈ C(A) for j = 1, 2, ..., r; thus {U_1, U_2, ..., U_r} is an orthonormal basis of C(A). Similarly, A^T U = V Σ^T gives that the first r columns of V form a basis of the column space of A^T.
 
Example 10. Find an SVD of A = [1 0 1 0; 0 1 0 1].

Solution: A A^T = [2 0; 0 2] and

    A^T A = [ 1 0 1 0
              0 1 0 1
              1 0 1 0
              0 1 0 1 ].

Then U = [1 0; 0 1]. The non-zero eigenvalue of A^T A is 2 (as the non-zero eigenvalue of A A^T is 2) with eigenvectors (1, 0, 1, 0) and (0, 1, 0, 1), and the remaining eigenvalues of A^T A are all zero. The eigenvectors corresponding to 0 are (1, 0, −1, 0) and (0, 1, 0, −1). Thus

    V = [ 1/√2    0    1/√2    0
            0   1/√2    0    1/√2
          1/√2    0   −1/√2    0
            0   1/√2    0   −1/√2 ].

The rectangular diagonal matrix Σ = [√2 0 0 0; 0 √2 0 0]. Therefore,

    A = [1 0; 0 1] [√2 0 0 0; 0 √2 0 0] V^T.

Remark: After finding U, one can find the columns of V corresponding to the non-zero eigenvalues by using the relation V_i = (1/σ_i) A^T U_i. The other columns of V can be found by finding vectors orthogonal to V_1, V_2 and to each other.
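The decomposition found in Example 10, and the relation V_i = (1/σ_i) A^T U_i from the remark above, can be checked by direct multiplication; a small sketch:

```python
# Verify U Sigma V^T = A for Example 10, and V_1 = (1/sigma_1) A^T U_1.
s = 2 ** 0.5  # sigma_1 = sigma_2 = sqrt(2)
A = [[1, 0, 1, 0],
     [0, 1, 0, 1]]
U = [[1, 0], [0, 1]]
Sigma = [[s, 0, 0, 0], [0, s, 0, 0]]
V = [[1 / s, 0, 1 / s, 0],
     [0, 1 / s, 0, 1 / s],
     [1 / s, 0, -1 / s, 0],
     [0, 1 / s, 0, -1 / s]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

Vt = [[V[j][i] for j in range(4)] for i in range(4)]
product = matmul(matmul(U, Sigma), Vt)  # should equal A
print(product)

# Column V_1 recovered from U_1 via V_1 = (1/sigma_1) A^T U_1.
At = [[A[j][i] for j in range(2)] for i in range(4)]
U1 = [U[i][0] for i in range(2)]
V1 = [sum(At[i][j] * U1[j] for j in range(2)) / s for i in range(4)]
print(V1)  # matches the first column of V
```

Up to floating-point rounding, the product U Σ V^T reproduces A, and the recovered V_1 agrees with the first column of V.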
