
Jacobi and Gauss-Seidel Iterative Techniques

MATH 375 Numerical Analysis

J Robert Buchanan

Department of Mathematics

Spring 2022
Objectives

In this lesson we will learn to


▶ solve linear systems using Jacobi’s method,
▶ solve linear systems using the Gauss-Seidel method, and
▶ solve linear systems using general iterative methods.
Background

▶ For small linear systems, direct methods are often as efficient as (or even more efficient than) the iterative methods to be discussed today.
▶ For large linear systems, particularly those with sparse matrix representations (matrices with many zero entries), the iterative methods can be more efficient than the direct methods.
▶ Sparse linear systems often arise in applications such as ordinary and partial differential equations and circuit analysis.
Initial Approximation

Consider the linear system A x = b where A is an n × n matrix and b ∈ R^n.

Given an initial approximation x^{(0)} to the solution x of the linear system, iterative techniques generate a sequence of vectors \{x^{(k)}\}_{k=0}^{\infty} which converges to the solution x.
Jacobi’s Method

Given the linear system A x = b, if aii ̸= 0 solve the ith equation of the
system for xi .

b_i = a_{i1} x_1 + \cdots + a_{ii} x_i + \cdots + a_{in} x_n

x_i = \frac{b_i}{a_{ii}} - \sum_{j=1,\, j \neq i}^{n} \frac{a_{ij} x_j}{a_{ii}}

We will have n equations of this form (1 ≤ i ≤ n).


Given x^{(k)}, then

x_i^{(k+1)} = \frac{b_i}{a_{ii}} - \sum_{j=1,\, j \neq i}^{n} \frac{a_{ij} x_j^{(k)}}{a_{ii}}

for 1 ≤ i ≤ n. The process can be repeated until

\frac{\| x^{(k)} - x^{(k-1)} \|_\infty}{\| x^{(k)} \|_\infty} < \epsilon.
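The iteration and stopping criterion above translate directly into code. A minimal Python sketch (the function and argument names are mine, not from the slides; it assumes a nonzero iterate when testing the relative change):

```python
def jacobi(A, b, x0, eps=1e-3, max_iter=100):
    """Jacobi iteration: each sweep uses only the previous iterate x^(k-1).

    Stops when the relative infinity-norm change drops below eps.
    """
    n = len(A)
    x = list(x0)
    for _ in range(max_iter):
        # Every component is computed from the old iterate x.
        x_new = [
            b[i] / A[i][i]
            - sum(A[i][j] * x[j] for j in range(n) if j != i) / A[i][i]
            for i in range(n)
        ]
        diff = max(abs(x_new[i] - x[i]) for i in range(n))
        norm = max(abs(v) for v in x_new)  # assumed nonzero after step 1
        x = x_new
        if diff / norm < eps:
            break
    return x

# The example system from these slides; exact solution (-16/11, 16/11, -8/11).
A = [[-2.0, 1.0, 0.5], [1.0, -2.0, -0.5], [0.0, 1.0, 2.0]]
b = [4.0, -4.0, 0.0]
x = jacobi(A, b, [0.0, 0.0, 0.0])
```

Applied to the example system below, this converges to roughly (−1.4545, 1.4545, −0.7273).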
Example

Use Jacobi’s method to approximate the solution to the following


linear system. Use x^{(0)} = 0 and let \epsilon = 10^{-3}.

-2x_1 + x_2 + \tfrac{1}{2} x_3 = 4
 x_1 - 2x_2 - \tfrac{1}{2} x_3 = -4
         x_2 + 2x_3 = 0

For purposes of comparison, the exact solution is


     
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -16/11 \\ 16/11 \\ -8/11 \end{bmatrix} \approx \begin{bmatrix} -1.454545 \\ 1.454545 \\ -0.727273 \end{bmatrix}.
Solution

x_1 = -2 + \tfrac{1}{2} x_2 + \tfrac{1}{4} x_3
x_2 = 2 + \tfrac{1}{2} x_1 - \tfrac{1}{4} x_3
x_3 = -\tfrac{1}{2} x_2

 k      x_1^{(k)}    x_2^{(k)}    x_3^{(k)}
 0       0.0000       0.0000       0.0000
 1      -2.0000       2.0000       0.0000
 2      -1.0000       1.0000      -1.0000
 3      -1.2500       1.2500      -0.8750
 4      -1.5938       1.5938      -0.6250
 :          :            :            :
19      -1.4552       1.4552      -0.7268
20      -1.4541       1.4541      -0.7276
Matrix Notation for Jacobi’s Method (1 of 2)

Matrix A can be decomposed as A = D - L - U where

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}
\qquad
D = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}

L = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ -a_{21} & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ -a_{n1} & -a_{n2} & \cdots & 0 \end{bmatrix}
\qquad
U = \begin{bmatrix} 0 & -a_{12} & \cdots & -a_{1n} \\ 0 & 0 & \cdots & -a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}
Matrix Notation for Jacobi’s Method (2 of 2)

A x = b
(D - L - U) x = b
D x = (L + U) x + b
x = D^{-1} (L + U) x + D^{-1} b

assuming a_{ii} \neq 0 for 1 ≤ i ≤ n.

▶ Define T_j = D^{-1} (L + U) and c_j = D^{-1} b.
▶ The Jacobi method can be expressed in matrix notation as

x^{(k)} = T_j x^{(k-1)} + c_j.


Example

Express the following linear system in the Jacobi matrix notation.

-2x_1 + x_2 + \tfrac{1}{2} x_3 = 4
 x_1 - 2x_2 - \tfrac{1}{2} x_3 = -4
         x_2 + 2x_3 = 0

Solution

Let A = \begin{bmatrix} -2 & 1 & 1/2 \\ 1 & -2 & -1/2 \\ 0 & 1 & 2 \end{bmatrix} and b = \begin{bmatrix} 4 \\ -4 \\ 0 \end{bmatrix}.
Solution

 
D = \begin{bmatrix} -2 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 2 \end{bmatrix}

L + U = \begin{bmatrix} 0 & -1 & -1/2 \\ -1 & 0 & 1/2 \\ 0 & -1 & 0 \end{bmatrix}

T_j = D^{-1} (L + U) = \begin{bmatrix} 0 & 1/2 & 1/4 \\ 1/2 & 0 & -1/4 \\ 0 & -1/2 & 0 \end{bmatrix}

c_j = D^{-1} b = \begin{bmatrix} -2 \\ 2 \\ 0 \end{bmatrix}

\begin{bmatrix} x_1^{(k)} \\ x_2^{(k)} \\ x_3^{(k)} \end{bmatrix} = \begin{bmatrix} 0 & 1/2 & 1/4 \\ 1/2 & 0 & -1/4 \\ 0 & -1/2 & 0 \end{bmatrix} \begin{bmatrix} x_1^{(k-1)} \\ x_2^{(k-1)} \\ x_3^{(k-1)} \end{bmatrix} + \begin{bmatrix} -2 \\ 2 \\ 0 \end{bmatrix}
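The matrices T_j and c_j can be checked numerically. Since D is diagonal, D^{-1}(L + U) simply divides each negated off-diagonal entry by the diagonal entry of its row. A short sketch (variable names are mine):

```python
# Build T_j = D^{-1}(L + U) and c_j = D^{-1} b for the example system.
A = [[-2.0, 1.0, 0.5], [1.0, -2.0, -0.5], [0.0, 1.0, 2.0]]
b = [4.0, -4.0, 0.0]
n = 3

# D is the diagonal of A; L + U holds the negated off-diagonal entries,
# so entry (i, j) of T_j is -a_ij / a_ii for j != i, and 0 on the diagonal.
Tj = [[0.0 if i == j else -A[i][j] / A[i][i] for j in range(n)]
      for i in range(n)]
cj = [b[i] / A[i][i] for i in range(n)]
```

The result matches the matrices on this slide: T_j has rows (0, 1/2, 1/4), (1/2, 0, −1/4), (0, −1/2, 0) and c_j = (−2, 2, 0).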
Improving the Jacobi Method

Recall that in the Jacobi method,

x_i^{(k)} = \frac{1}{a_{ii}} \Bigl( b_i - \sum_{j=1,\, j \neq i}^{n} a_{ij} x_j^{(k-1)} \Bigr).

▶ As designed, all the components of x^{(k-1)} are used to calculate x_i^{(k)}.
▶ When i > 1, the components x_j^{(k)} for 1 ≤ j < i have already been calculated and should be more accurate than the components x_j^{(k-1)} for 1 ≤ j < i.
▶ We can modify the Jacobi method to use x_j^{(k)} for 1 ≤ j < i in place of x_j^{(k-1)} to improve the convergence of the algorithm. This modification is known as the Gauss-Seidel iterative technique.
Gauss-Seidel Method

 
x_i^{(k)} = \frac{1}{a_{ii}} \Bigl( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \Bigr).
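A minimal Python sketch of this update (names mine, not from the slides). The only change from a Jacobi implementation is that each component is overwritten in place, so later components in the same sweep automatically use the new values:

```python
def gauss_seidel(A, b, x0, eps=1e-3, max_iter=100):
    """Gauss-Seidel iteration: components already computed in the current
    sweep (j < i) are used immediately in place of the old values."""
    n = len(A)
    x = list(x0)
    for _ in range(max_iter):
        x_old = list(x)
        for i in range(n):
            # x[j] for j < i holds iterate k; x[j] for j > i still holds k-1.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        diff = max(abs(x[i] - x_old[i]) for i in range(n))
        if diff / max(abs(v) for v in x) < eps:
            break
    return x

# The running example; exact solution (-16/11, 16/11, -8/11).
A = [[-2.0, 1.0, 0.5], [1.0, -2.0, -0.5], [0.0, 1.0, 2.0]]
b = [4.0, -4.0, 0.0]
x = gauss_seidel(A, b, [0.0, 0.0, 0.0])
```

On the running example this reaches the stopping tolerance in roughly a third as many sweeps as the Jacobi iteration.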
Example

Use the Gauss-Seidel method to approximate the solution to the


following linear system. Use x^{(0)} = 0 and let \epsilon = 10^{-3}.

-2x_1 + x_2 + \tfrac{1}{2} x_3 = 4
 x_1 - 2x_2 - \tfrac{1}{2} x_3 = -4
         x_2 + 2x_3 = 0
Solution

x_1^{(k)} = -2 + \tfrac{1}{2} x_2^{(k-1)} + \tfrac{1}{4} x_3^{(k-1)}
x_2^{(k)} = 2 + \tfrac{1}{2} x_1^{(k)} - \tfrac{1}{4} x_3^{(k-1)}
x_3^{(k)} = -\tfrac{1}{2} x_2^{(k)}

 k      x_1^{(k)}    x_2^{(k)}    x_3^{(k)}
 0       0.0000       0.0000       0.0000
 1      -2.0000       1.0000      -0.5000
 2      -1.6250       1.3125      -0.6563
 3      -1.5078       1.4102      -0.7051
 4      -1.4712       1.4407      -0.7203
 5      -1.4598       1.4502      -0.7251
 6      -1.4562       1.4532      -0.7266
 7      -1.4551       1.4541      -0.7271
Gauss-Seidel Method in Matrix Form (1 of 2)

 
x_i^{(k)} = \frac{1}{a_{ii}} \Bigl( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \Bigr)

a_{ii} x_i^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)}

a_{ii} x_i^{(k)} + \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} = b_i - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)}
Gauss-Seidel Method in Matrix Form (2 of 2)
Since for i = 1, 2, \ldots, n,

a_{ii} x_i^{(k)} + \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} = b_i - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)},

we can express the linear system as follows:

a_{11} x_1^{(k)} = b_1 - a_{12} x_2^{(k-1)} - a_{13} x_3^{(k-1)} - \cdots - a_{1n} x_n^{(k-1)}
a_{21} x_1^{(k)} + a_{22} x_2^{(k)} = b_2 - a_{23} x_3^{(k-1)} - a_{24} x_4^{(k-1)} - \cdots - a_{2n} x_n^{(k-1)}
\vdots
a_{n1} x_1^{(k)} + a_{n2} x_2^{(k)} + \cdots + a_{nn} x_n^{(k)} = b_n

This is equivalent to the matrix form

(D - L) x^{(k)} = b + U x^{(k-1)}
x^{(k)} = (D - L)^{-1} b + (D - L)^{-1} U x^{(k-1)}
x^{(k)} = c_g + T_g x^{(k-1)}.
Example

Express the following linear system in the Gauss-Seidel matrix


notation.
-2x_1 + x_2 + \tfrac{1}{2} x_3 = 4
 x_1 - 2x_2 - \tfrac{1}{2} x_3 = -4
         x_2 + 2x_3 = 0

Solution

Let A = \begin{bmatrix} -2 & 1 & 1/2 \\ 1 & -2 & -1/2 \\ 0 & 1 & 2 \end{bmatrix} and b = \begin{bmatrix} 4 \\ -4 \\ 0 \end{bmatrix}.
Solution

 
D - L = \begin{bmatrix} -2 & 0 & 0 \\ 1 & -2 & 0 \\ 0 & 1 & 2 \end{bmatrix}

(D - L)^{-1} = \begin{bmatrix} -1/2 & 0 & 0 \\ -1/4 & -1/2 & 0 \\ 1/8 & 1/4 & 1/2 \end{bmatrix}

T_g = (D - L)^{-1} U = \begin{bmatrix} 0 & 1/2 & 1/4 \\ 0 & 1/4 & -1/8 \\ 0 & -1/8 & 1/16 \end{bmatrix}

c_g = (D - L)^{-1} b = \begin{bmatrix} -2 \\ 1 \\ -1/2 \end{bmatrix}

\begin{bmatrix} x_1^{(k)} \\ x_2^{(k)} \\ x_3^{(k)} \end{bmatrix} = \begin{bmatrix} 0 & 1/2 & 1/4 \\ 0 & 1/4 & -1/8 \\ 0 & -1/8 & 1/16 \end{bmatrix} \begin{bmatrix} x_1^{(k-1)} \\ x_2^{(k-1)} \\ x_3^{(k-1)} \end{bmatrix} + \begin{bmatrix} -2 \\ 1 \\ -1/2 \end{bmatrix}
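Because D − L is lower triangular, T_g = (D − L)^{-1}U and c_g = (D − L)^{-1}b can be computed column by column with forward substitution rather than by forming the inverse. A sketch for the example system (the helper name is mine):

```python
def forward_solve(M, rhs):
    """Solve M y = rhs for lower-triangular M by forward substitution."""
    n = len(M)
    y = [0.0] * n
    for i in range(n):
        y[i] = (rhs[i] - sum(M[i][j] * y[j] for j in range(i))) / M[i][i]
    return y

A = [[-2.0, 1.0, 0.5], [1.0, -2.0, -0.5], [0.0, 1.0, 2.0]]
b = [4.0, -4.0, 0.0]
n = 3

# D - L is the lower triangle of A (diagonal included); U is the negated
# strict upper triangle.
DL = [[A[i][j] if j <= i else 0.0 for j in range(n)] for i in range(n)]
U = [[-A[i][j] if j > i else 0.0 for j in range(n)] for i in range(n)]

# Column c of T_g solves (D - L) t = U[:, c]; c_g solves (D - L) y = b.
Tg_cols = [forward_solve(DL, [U[i][c] for i in range(n)]) for c in range(n)]
Tg = [[Tg_cols[c][i] for c in range(n)] for i in range(n)]
cg = forward_solve(DL, b)
```

The computed T_g and c_g agree with the matrices on this slide.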
General Iteration Methods

We have seen that we can express an iterative method for the


solution of a linear system in the form:

x^{(k)} = T x^{(k-1)} + c

for k = 1, 2, \ldots where x^{(0)} is arbitrary.


We must now establish conditions under which this iterative method
will converge to the unique solution of the system A x = b.
Important Lemma

Lemma
If \rho(T) < 1, then (I - T)^{-1} exists and

(I - T)^{-1} = I + T + T^2 + \cdots = \sum_{j=0}^{\infty} T^j.
Proof (1 of 2)

T x = \lambda x
x - T x = x - \lambda x
(I - T) x = (1 - \lambda) x

▶ Thus \lambda is an eigenvalue of T if and only if 1 - \lambda is an eigenvalue of I - T.
▶ If \rho(T) < 1, then for any eigenvalue \lambda of T, |\lambda| < 1, therefore \lambda \neq 1.
▶ Hence 1 - 1 = 0 cannot be an eigenvalue of I - T, which implies I - T is nonsingular.
Proof (2 of 2)

Define S_m = I + T + T^2 + \cdots + T^m for m = 1, 2, \ldots.

(I - T) S_m = (I + T + T^2 + \cdots + T^m) - (T + T^2 + \cdots + T^{m+1})
            = I - T^{m+1}
\lim_{m \to \infty} (I - T) S_m = \lim_{m \to \infty} (I - T^{m+1})
(I - T) \lim_{m \to \infty} S_m = I \quad \text{(since } T \text{ is convergent)}

Consequently (I - T)^{-1} = \sum_{j=0}^{\infty} T^j.
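The lemma can be sanity-checked numerically by comparing partial sums of the series against a directly computed inverse. A small sketch with a 2 × 2 matrix of my own choosing, for which \rho(T) = 1/2:

```python
# T has eigenvalues +-1/2, so rho(T) = 1/2 < 1 and the series converges.
T = [[0.0, 0.5], [0.5, 0.0]]

def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Accumulate the partial sum S_m = I + T + ... + T^m.
S = [[1.0, 0.0], [0.0, 1.0]]  # S_0 = I
P = [[1.0, 0.0], [0.0, 1.0]]  # current power T^j
for _ in range(60):
    P = matmul(P, T)
    S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]

# I - T = [[1, -1/2], [-1/2, 1]] has determinant 3/4, so its inverse is
# (4/3) * [[1, 1/2], [1/2, 1]] = [[4/3, 2/3], [2/3, 4/3]].
inv = [[4 / 3, 2 / 3], [2 / 3, 4 / 3]]
```

After 60 terms the partial sum agrees with the inverse to well beyond ten digits, since the remainder shrinks like (1/2)^m.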
Convergence of Iterative Methods

Theorem
For any x^{(0)} \in \mathbb{R}^n, the sequence \{x^{(k)}\}_{k=0}^{\infty} defined by

x^{(k)} = T x^{(k-1)} + c \quad \text{for } k = 1, 2, \ldots

converges to the unique solution of x = T x + c if and only if \rho(T) < 1.


Proof (1 of 4)

Suppose \rho(T) < 1; then by assumption

x^{(k)} = T x^{(k-1)} + c
        = T (T x^{(k-2)} + c) + c
        = T^2 x^{(k-2)} + (I + T) c
        \vdots
x^{(k)} = T^k x^{(0)} + (I + T + \cdots + T^{k-1}) c

\lim_{k \to \infty} x^{(k)} = \lim_{k \to \infty} \bigl[ T^k x^{(0)} + (I + T + \cdots + T^{k-1}) c \bigr]
= \lim_{k \to \infty} T^k x^{(0)} + \Bigl( \sum_{j=0}^{\infty} T^j \Bigr) c
= 0 + (I - T)^{-1} c \quad \text{(since } T \text{ is convergent)}
x = (I - T)^{-1} c.
Proof (2 of 4)

So far we know the sequence \{x^{(k)}\}_{k=0}^{\infty} converges to

x = (I - T)^{-1} c
(I - T) x = c
x = T x + c.

Hence x is a solution to the linear system.


Proof (3 of 4)

▶ To prove the converse, let z be any vector in \mathbb{R}^n.
▶ Let x be the unique solution to the linear system x = T x + c.
▶ Define x^{(0)} = x - z and for k ≥ 1 define x^{(k)} = T x^{(k-1)} + c.
▶ By assumption \lim_{k \to \infty} x^{(k)} = x.
Proof (4 of 4)

x - x^{(k)} = (T x + c) - (T x^{(k-1)} + c)
            = T (x - x^{(k-1)})
            = T \bigl( T (x - x^{(k-2)}) \bigr)
            \vdots
x - x^{(k)} = T^k (x - x^{(0)}) = T^k z

\lim_{k \to \infty} (x - x^{(k)}) = \lim_{k \to \infty} T^k z
0 = \lim_{k \to \infty} T^k z

Since z is arbitrary, T is a convergent matrix and hence \rho(T) < 1.


Error Bounds

Corollary
If \|T\| < 1 for any natural matrix norm and c is a fixed vector, then the sequence \{x^{(k)}\}_{k=0}^{\infty} defined by x^{(k)} = T x^{(k-1)} + c converges for any x^{(0)} to a vector x \in \mathbb{R}^n with x = T x + c, with the following error bounds.

1. \|x - x^{(k)}\| \leq \|T\|^k \|x^{(0)} - x\|
2. \|x - x^{(k)}\| \leq \frac{\|T\|^k}{1 - \|T\|} \|x^{(1)} - x^{(0)}\|

Remark: if we can show that \rho(T_j) < 1 and \rho(T_g) < 1, then the Jacobi and Gauss-Seidel methods will always converge to the unique solution of the linear system.
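For the running example, \|T_j\|_\infty = 3/4 < 1, so the corollary applies to the Jacobi iteration. A sketch that checks bound 2 against the true error at each iterate (variable names are mine):

```python
# Check error bound 2 for the Jacobi iteration on the example system,
# measuring everything in the infinity norm.
Tj = [[0.0, 0.5, 0.25], [0.5, 0.0, -0.25], [0.0, -0.5, 0.0]]
cj = [-2.0, 2.0, 0.0]
exact = [-16 / 11, 16 / 11, -8 / 11]

# Natural infinity norm of Tj: maximum absolute row sum.
norm_T = max(sum(abs(v) for v in row) for row in Tj)  # 3/4 < 1

# Generate iterates x^(0), ..., x^(20) from x^(0) = 0.
history = [[0.0, 0.0, 0.0]]
for k in range(20):
    prev = history[-1]
    history.append([cj[i] + sum(Tj[i][j] * prev[j] for j in range(3))
                    for i in range(3)])

first_step = max(abs(history[1][i] - history[0][i]) for i in range(3))
bounds_hold = all(
    max(abs(history[k][i] - exact[i]) for i in range(3))
    <= norm_T ** k / (1 - norm_T) * first_step
    for k in range(1, 21)
)
```

The bound is quite loose here (at k = 20 it is roughly 0.025 against a true error near 4 × 10⁻⁴), but it holds at every step, as the corollary guarantees.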
Diagonal Dominance

Theorem
If matrix A is strictly diagonally dominant, then for any choice of x^{(0)}, both the Jacobi and Gauss-Seidel methods produce sequences \{x^{(k)}\}_{k=0}^{\infty} that converge to the unique solution of A x = b.
Stein-Rosenberg Theorem

Theorem (Stein-Rosenberg)
If a_{ij} \leq 0 for each i \neq j and a_{ii} > 0 for i = 1, 2, \ldots, n, then exactly one of the following statements is true.
1. 0 \leq \rho(T_g) < \rho(T_j) < 1
2. 0 = \rho(T_g) = \rho(T_j)
3. 1 < \rho(T_j) < \rho(T_g)
4. 1 = \rho(T_g) = \rho(T_j)
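A 2 × 2 illustration of case 1, using a matrix of my own choosing that satisfies the sign conditions:

```python
# A = [[2, -1], [-1, 2]]: off-diagonal entries <= 0, diagonal entries > 0.
# Jacobi matrix: T_j = D^{-1}(L + U) = [[0, 1/2], [1/2, 0]],
# with eigenvalues +-1/2, so rho(T_j) = 1/2.
rho_Tj = 0.5

# Gauss-Seidel matrix: T_g = (D - L)^{-1} U, where
# (D - L)^{-1} = [[1/2, 0], [1/4, 1/2]] and U = [[0, 1], [0, 0]].
DL_inv = [[0.5, 0.0], [0.25, 0.5]]
U = [[0.0, 1.0], [0.0, 0.0]]
Tg = [[sum(DL_inv[i][k] * U[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]

# T_g is upper triangular, so its eigenvalues sit on the diagonal.
rho_Tg = max(abs(Tg[0][0]), abs(Tg[1][1]))

case_one = 0 <= rho_Tg < rho_Tj < 1
```

Here \rho(T_g) = 1/4 < \rho(T_j) = 1/2 < 1, so Gauss-Seidel converges faster than Jacobi, which is case 1 of the theorem.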
Homework

▶ Read Section 7.3.


▶ Exercises: 1ac, 3ac, 5ac, 7ac
