Jacobi
J Robert Buchanan
Department of Mathematics
Spring 2022
Objectives
▶ For small linear systems, direct methods are often as efficient as (or even more efficient than) the iterative methods to be discussed today.
▶ For large linear systems, particularly those with sparse matrix representations (matrices with many zero entries), iterative methods can be more efficient than direct methods.
▶ Sparse linear systems often arise in applications such as ordinary and partial differential equations and circuit analysis.
Initial Approximation
Given the linear system A x = b, if a_ii ≠ 0, solve the ith equation of the system for x_i.
−2x1 + x2 + (1/2)x3 = 4
x1 − 2x2 − (1/2)x3 = −4
x2 + 2x3 = 0

x1 = −2 + (1/2)x2 + (1/4)x3
x2 = 2 + (1/2)x1 − (1/4)x3
x3 = −(1/2)x2
Solution
The Jacobi iteration computes each component of x^(k) from the previous iterate x^(k−1):

x1^(k) = −2 + (1/2)x2^(k−1) + (1/4)x3^(k−1)
x2^(k) = 2 + (1/2)x1^(k−1) − (1/4)x3^(k−1)
x3^(k) = −(1/2)x2^(k−1)

Starting from x^(0) = (0, 0, 0):

 k     x1^(k)     x2^(k)     x3^(k)
 0     0.0000     0.0000     0.0000
 1    −2.0000     2.0000     0.0000
 2    −1.0000     1.0000    −1.0000
 3    −1.2500     1.2500    −0.8750
 4    −1.5938     1.5938    −0.6250
 ⋮        ⋮          ⋮          ⋮
19    −1.4552     1.4552    −0.7268
20    −1.4541     1.4541    −0.7276
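As a sanity check, the component-wise Jacobi iteration above is only a few lines of code. The following Python sketch (not part of the original slides; the function name is my own) runs twenty sweeps from the zero vector:

```python
# One Jacobi sweep for the example system: every new component
# is computed from the PREVIOUS iterate only.
def jacobi_step(x1, x2, x3):
    return (-2 + x2 / 2 + x3 / 4,
             2 + x1 / 2 - x3 / 4,
            -x2 / 2)

x1 = x2 = x3 = 0.0            # x^(0) = (0, 0, 0)
for k in range(20):
    x1, x2, x3 = jacobi_step(x1, x2, x3)

# The iterates approach the true solution (-16/11, 16/11, -8/11).
print(x1, x2, x3)
```

Note that the three updates happen simultaneously via tuple assignment, which is exactly the Jacobi rule: no component sees a value from the current sweep.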
Matrix Notation for Jacobi’s Method (1 of 2)
A x = b
(D − L − U)x = b
D x = (L + U)x + b
x = D^{−1}(L + U)x + D^{−1}b
▶ Define Tj = D^{−1}(L + U) and cj = D^{−1}b.
▶ The Jacobi method can be expressed in matrix notation as

    x^(k) = Tj x^(k−1) + cj.
Example
−2x1 + x2 + (1/2)x3 = 4
x1 − 2x2 − (1/2)x3 = −4
x2 + 2x3 = 0
Solution
Let
    A = [ −2    1   1/2 ]            [  4 ]
        [  1   −2  −1/2 ]   and b =  [ −4 ].
        [  0    1    2  ]            [  0 ]
Solution
D = [ −2   0   0 ]
    [  0  −2   0 ]
    [  0   0   2 ]

L + U = [  0  −1  −1/2 ]
        [ −1   0   1/2 ]
        [  0  −1    0  ]

Tj = D^{−1}(L + U) = [  0   1/2   1/4 ]
                     [ 1/2   0   −1/4 ]
                     [  0  −1/2    0  ]

cj = D^{−1}b = [ −2 ]
               [  2 ]
               [  0 ]
[ x1^(k) ]   [  0   1/2   1/4 ] [ x1^(k−1) ]   [ −2 ]
[ x2^(k) ] = [ 1/2   0   −1/4 ] [ x2^(k−1) ] + [  2 ]
[ x3^(k) ]   [  0  −1/2    0  ] [ x3^(k−1) ]   [  0 ]
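The matrices Tj and cj above can be reproduced in a few lines of NumPy (my own illustration, not from the slides):

```python
import numpy as np

A = np.array([[-2.0,  1.0,  0.5],
              [ 1.0, -2.0, -0.5],
              [ 0.0,  1.0,  2.0]])
b = np.array([4.0, -4.0, 0.0])

D  = np.diag(np.diag(A))        # diagonal part of A
LU = D - A                      # L + U (negated off-diagonal part of A)
Tj = np.linalg.solve(D, LU)     # T_j = D^{-1}(L + U)
cj = np.linalg.solve(D, b)      # c_j = D^{-1} b

x = np.zeros(3)
for _ in range(50):
    x = Tj @ x + cj             # x^(k) = T_j x^(k-1) + c_j

print(Tj)                       # [[0, 1/2, 1/4], [1/2, 0, -1/4], [0, -1/2, 0]]
print(cj)                       # [-2, 2, 0]
print(x)                        # converges toward (-16/11, 16/11, -8/11)
```

Using `np.linalg.solve(D, ·)` instead of forming D^{−1} explicitly is the usual numerical practice; for a diagonal D the two are equivalent.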
Improving the Jacobi Method
Gauss-Seidel idea: when computing x_i^(k), use the components x_1^(k), ..., x_{i−1}^(k) already updated in the current iteration:

x_i^(k) = (1/a_ii) [ b_i − Σ_{j=1}^{i−1} a_ij x_j^(k) − Σ_{j=i+1}^{n} a_ij x_j^(k−1) ].
Example
−2x1 + x2 + (1/2)x3 = 4
x1 − 2x2 − (1/2)x3 = −4
x2 + 2x3 = 0
Solution
x1^(k) = −2 + (1/2)x2^(k−1) + (1/4)x3^(k−1)
x2^(k) = 2 + (1/2)x1^(k) − (1/4)x3^(k−1)
x3^(k) = −(1/2)x2^(k)
Starting from x^(0) = (0, 0, 0):

 k     x1^(k)     x2^(k)     x3^(k)
 0     0.0000     0.0000     0.0000
 1    −2.0000     1.0000    −0.5000
 2    −1.6250     1.3125    −0.6563
 3    −1.5078     1.4102    −0.7051
 4    −1.4712     1.4407    −0.7203
 5    −1.4598     1.4502    −0.7251
 6    −1.4562     1.4532    −0.7266
 7    −1.4551     1.4541    −0.7271
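In code, the only change from the Jacobi sweep is that each assignment overwrites its variable immediately, so later components see the freshly updated values. A Python sketch (my own, not part of the slides):

```python
# Gauss-Seidel sweep: x1 is updated first, and the new value is
# used right away when computing x2; the new x2 feeds into x3.
def gauss_seidel_step(x1, x2, x3):
    x1 = -2 + x2 / 2 + x3 / 4     # old x2, old x3
    x2 =  2 + x1 / 2 - x3 / 4     # NEW x1, old x3
    x3 = -x2 / 2                  # NEW x2
    return x1, x2, x3

x1 = x2 = x3 = 0.0                # x^(0) = (0, 0, 0)
for k in range(7):
    x1, x2, x3 = gauss_seidel_step(x1, x2, x3)

print(round(x1, 4), round(x2, 4), round(x3, 4))
# -1.4551 1.4541 -0.7271, matching the k = 7 row of the table
```

Seven sweeps here reach roughly the accuracy that plain Jacobi needs about twenty sweeps to achieve on this system.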
Gauss-Seidel Method in Matrix Form (1 of 2)
x_i^(k) = (1/a_ii) [ b_i − Σ_{j=1}^{i−1} a_ij x_j^(k) − Σ_{j=i+1}^{n} a_ij x_j^(k−1) ]

a_ii x_i^(k) = b_i − Σ_{j=1}^{i−1} a_ij x_j^(k) − Σ_{j=i+1}^{n} a_ij x_j^(k−1)

a_ii x_i^(k) + Σ_{j=1}^{i−1} a_ij x_j^(k) = b_i − Σ_{j=i+1}^{n} a_ij x_j^(k−1)
Gauss-Seidel Method in Matrix Form (2 of 2)
Since for i = 1, 2, . . . , n,

a_ii x_i^(k) + Σ_{j=1}^{i−1} a_ij x_j^(k) = b_i − Σ_{j=i+1}^{n} a_ij x_j^(k−1),

collecting these n equations in matrix form gives

(D − L)x^(k) = U x^(k−1) + b
x^(k) = (D − L)^{−1}U x^(k−1) + (D − L)^{−1}b.

▶ Define Tg = (D − L)^{−1}U and cg = (D − L)^{−1}b.
Solution
Let
    A = [ −2    1   1/2 ]            [  4 ]
        [  1   −2  −1/2 ]   and b =  [ −4 ].
        [  0    1    2  ]            [  0 ]
Solution
D − L = [ −2   0   0 ]
        [  1  −2   0 ]
        [  0   1   2 ]

(D − L)^{−1} = [ −1/2    0    0  ]
               [ −1/4  −1/2   0  ]
               [  1/8   1/4  1/2 ]

Tg = (D − L)^{−1}U = [ 0   1/2    1/4  ]
                     [ 0   1/4   −1/8  ]
                     [ 0  −1/8   1/16  ]

cg = (D − L)^{−1}b = [  −2  ]
                     [   1  ]
                     [ −1/2 ]
[ x1^(k) ]   [ 0   1/2    1/4  ] [ x1^(k−1) ]   [  −2  ]
[ x2^(k) ] = [ 0   1/4   −1/8  ] [ x2^(k−1) ] + [   1  ]
[ x3^(k) ]   [ 0  −1/8   1/16  ] [ x3^(k−1) ]   [ −1/2 ]
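The Gauss-Seidel matrices can likewise be checked numerically (my own sketch; `np.tril` extracts the lower triangle of A, which here is exactly D − L):

```python
import numpy as np

A = np.array([[-2.0,  1.0,  0.5],
              [ 1.0, -2.0, -0.5],
              [ 0.0,  1.0,  2.0]])
b = np.array([4.0, -4.0, 0.0])

DL = np.tril(A)                 # D - L: lower triangle, diagonal included
U  = DL - A                     # negated strictly upper part of A
Tg = np.linalg.solve(DL, U)     # T_g = (D - L)^{-1} U
cg = np.linalg.solve(DL, b)     # c_g = (D - L)^{-1} b

x = np.zeros(3)
for _ in range(25):
    x = Tg @ x + cg             # x^(k) = T_g x^(k-1) + c_g

print(Tg)                       # [[0, 1/2, 1/4], [0, 1/4, -1/8], [0, -1/8, 1/16]]
print(cg)                       # [-2, 1, -1/2]
print(x)                        # converges toward (-16/11, 16/11, -8/11)
```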
General Iteration Methods
Lemma
If ρ(T) < 1 then (I − T)^{−1} exists and

(I − T)^{−1} = I + T + T^2 + · · · = Σ_{j=0}^{∞} T^j.
Proof (1 of 2)
Suppose T x = λx with x ≠ 0. Then

x − T x = x − λx
(I − T)x = (1 − λ)x.

Since ρ(T) < 1, λ = 1 is not an eigenvalue of T, so (I − T)x = 0 only when x = 0, and therefore (I − T)^{−1} exists.

Define S_m = I + T + T^2 + · · · + T^m for m = 1, 2, . . .. Then

(I − T)S_m = (I + T + T^2 + · · · + T^m) − (T + T^2 + · · · + T^{m+1})
           = I − T^{m+1}

lim_{m→∞} (I − T)S_m = lim_{m→∞} (I − T^{m+1})
(I − T) lim_{m→∞} S_m = I     (since T is convergent)

Consequently (I − T)^{−1} = Σ_{j=0}^{∞} T^j.
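The lemma can be illustrated numerically: for the Jacobi matrix Tj of the running example, ρ(T) < 1, and the partial sums S_m of the geometric series converge to (I − T)^{−1}. (This check is my own, not part of the slides.)

```python
import numpy as np

T = np.array([[0.0,  0.5,  0.25],
              [0.5,  0.0, -0.25],
              [0.0, -0.5,  0.0]])
assert max(abs(np.linalg.eigvals(T))) < 1   # rho(T) < 1

# Partial sum S_m = I + T + T^2 + ... + T^m, built term by term
S = np.zeros((3, 3))
P = np.eye(3)                               # current power T^j
for _ in range(200):
    S += P
    P = P @ T

exact = np.linalg.inv(np.eye(3) - T)
print(np.max(np.abs(S - exact)))            # essentially zero
```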
Convergence of Iterative Methods
Theorem
For any x^(0) ∈ R^n, the sequence {x^(k)}_{k=0}^{∞} defined by

x^(k) = T x^(k−1) + c,   for k ≥ 1,

converges to the unique solution of x = T x + c if and only if ρ(T) < 1.

Proof (2 of 4)
By the lemma, if ρ(T) < 1 then I − T is invertible, and x = (I − T)^{−1}c satisfies

x = (I − T)^{−1} c
(I − T)x = c
x = T x + c.

Since T is convergent, for any vector z,

0 = lim_{k→∞} T^k z;

applying this with z = x^(0) − x shows x^(k) − x = T^k (x^(0) − x) → 0.
Error Bounds
Corollary
If ∥T∥ < 1 for any natural matrix norm and c is a given vector, then the sequence {x^(k)}_{k=0}^{∞} defined by x^(k) = T x^(k−1) + c converges, for any x^(0), to a vector x ∈ R^n with x = T x + c, and the following error bounds hold:

1. ∥x − x^(k)∥ ≤ ∥T∥^k ∥x^(0) − x∥
2. ∥x − x^(k)∥ ≤ (∥T∥^k / (1 − ∥T∥)) ∥x^(1) − x^(0)∥
Remark: if we can show that ρ(Tj ) < 1 and ρ(Tg ) < 1 then the Jacobi
and Gauss-Seidel methods will always converge to the unique
solution of the linear system.
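For the running example this is easy to check (my own computation): the spectral radii of Tj and Tg come out to (1 + √3)/4 ≈ 0.683 and 5/16 = 0.3125, both below 1, which is consistent with Gauss-Seidel's faster convergence in the tables above.

```python
import numpy as np

Tj = np.array([[0.0,  0.5,  0.25],
               [0.5,  0.0, -0.25],
               [0.0, -0.5,  0.0]])
Tg = np.array([[0.0,  0.5,    0.25],
               [0.0,  0.25,  -0.125],
               [0.0, -0.125,  0.0625]])

def rho(T):
    """Spectral radius: largest eigenvalue magnitude."""
    return max(abs(np.linalg.eigvals(T)))

print(rho(Tj))   # 0.6830... = (1 + sqrt(3))/4
print(rho(Tg))   # 0.3125   = 5/16
```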
Diagonal Dominance
Theorem
If matrix A is strictly diagonally dominant, then for any choice of x^(0), both the Jacobi and Gauss-Seidel methods produce sequences {x^(k)}_{k=0}^{∞} that converge to the unique solution of A x = b.
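The example matrix satisfies this hypothesis: in every row, |a_ii| = 2 exceeds the sum of the magnitudes of the other entries (1.5, 1.5, and 1). A quick check (my own sketch):

```python
import numpy as np

A = np.array([[-2.0,  1.0,  0.5],
              [ 1.0, -2.0, -0.5],
              [ 0.0,  1.0,  2.0]])

diag = np.abs(np.diag(A))                  # |a_ii|
off  = np.sum(np.abs(A), axis=1) - diag    # sum of |a_ij| over j != i
print(diag > off)   # [ True  True  True]: strictly diagonally dominant
```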
Stein-Rosenberg Theorem
Theorem (Stein-Rosenberg)
If a_ij ≤ 0 for each i ≠ j and a_ii > 0 for i = 1, 2, . . . , n, then exactly one of the following statements is true.
1. 0 ≤ ρ(Tg ) < ρ(Tj ) < 1
2. 0 = ρ(Tg ) = ρ(Tj )
3. 1 < ρ(Tj ) < ρ(Tg )
4. 1 = ρ(Tg ) < ρ(Tj )
Homework