[Figure: graphs of y = 1 + 0.5 sin x and of y = 3 + 2 sin x, each plotted together with the line y = x; the intersection point x = α is the root being sought.]
Consider solving the two equations

E1: x = 1 + 0.5 sin x
E2: x = 3 + 2 sin x

by the fixed point iteration x_{n+1} = g(x_n).
 n    x_n (E1)              x_n (E2)
 0    0.00000000000000      3.00000000000000
 1    1.00000000000000      3.28224001611973
 2    1.42073549240395      2.71963177181556
 3    1.49438099256432      3.81910025488514
 4    1.49854088439917      1.74629389651652
 5    1.49869535552190      4.96927957214762
 6    1.49870092540704      1.06563065299216
 7    1.49870112602244      4.75018861639465
 8    1.49870113324789      1.00142864236516
 9    1.49870113350813      4.68448404916097
10    1.49870113351750      1.00077863465869
The above iterations can be written symbolically as

E1: x_{n+1} = 1 + 0.5 sin x_n
E2: x_{n+1} = 3 + 2 sin x_n

for n = 0, 1, 2, ... Why does one of these iterations converge, but not the other? The graphs show similar behaviour, so why the difference?
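To reproduce the table, here is a minimal Python sketch (the code and function names are mine, not the text's), running both iterations from the same starting values x_0 = 0 and x_0 = 3 used above:

import math

def fixed_point(g, x0, n_steps):
    """Return the iterates x_0, x_1, ..., x_{n_steps} of x_{n+1} = g(x_n)."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(g(xs[-1]))
    return xs

g1 = lambda x: 1 + 0.5 * math.sin(x)   # E1
g2 = lambda x: 3 + 2.0 * math.sin(x)   # E2

for n, (a, b) in enumerate(zip(fixed_point(g1, 0.0, 10),
                               fixed_point(g2, 3.0, 10))):
    print(f"{n:2d}   {a:.14f}   {b:.14f}")

Running this prints the two columns of the table: E1 settles down rapidly, while E2 wanders without converging.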
THEOREM: Assume g(x) and g'(x) are continuous on [a, b], and assume

a ≤ x ≤ b  ⇒  a ≤ g(x) ≤ b        (#)
λ ≡ max_{a ≤ x ≤ b} |g'(x)| < 1

Then:
S1: The equation x = g(x) has a unique solution α in [a, b].
S2: For any initial guess x_0 in [a, b], the iterates x_{n+1} = g(x_n), n = 0, 1, 2, ... all remain in [a, b] and converge to α.
S3: |α − x_n| ≤ (λ^n/(1 − λ)) |x_1 − x_0|,  n ≥ 0
S4: lim_{n→∞} (α − x_{n+1})/(α − x_n) = g'(α), so for x_n near α,

α − x_{n+1} ≈ g'(α)(α − x_n)
The proof is given in the text, and I go over only a portion of it here. For S2, note that from (#), if x_0 is in [a, b], then

x_1 = g(x_0)

is also in [a, b]. Repeat the argument to show that

x_2 = g(x_1)

belongs to [a, b]. This can be continued by induction to show that every x_n belongs to [a, b].
By the mean value theorem,

g(w) − g(z) = g'(c)(w − z)

for some unknown point c between w and z. Therefore,

|g(w) − g(z)| ≤ λ |w − z|

for any a ≤ w, z ≤ b.
For S3, subtract x_{n+1} = g(x_n) from α = g(α) to get

α − x_{n+1} = g(α) − g(x_n)
            = g'(c_n)(α − x_n)        ($)
|α − x_{n+1}| ≤ λ |α − x_n|          (*)

with c_n between α and x_n. From (*), we have that the error is guaranteed to decrease by a factor of λ with each iteration. This leads to

|α − x_n| ≤ λ^n |α − x_0|,  n ≥ 0

With some extra manipulation, we can obtain the error bound in S3.
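As a numerical check on this bound (a sketch of my own, not from the text): for E1 we may take λ = 1/2, since |g'(x)| = |0.5 cos x| ≤ 1/2 for all x, and compare λ^n |α − x_0| with the actual errors.

import math

g = lambda x: 1 + 0.5 * math.sin(x)   # E1
lam = 0.5                             # max |g'(x)| = max |0.5 cos x| <= 1/2

alpha = 0.0
for _ in range(100):                  # iterate well past the table for a reference root
    alpha = g(alpha)

x0, x = 0.0, 0.0
for n in range(11):
    print(f"n={n:2d}   error={abs(alpha - x):.3e}   bound={lam**n * abs(alpha - x0):.3e}")
    x = g(x)

The printed errors decrease much faster than the bound, which is expected: λ = 1/2 is a worst-case factor over the whole interval, while near the root the actual factor is |g'(α)|, which is much smaller here.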
Statement S4,

α − x_{n+1} ≈ g'(α)(α − x_n)

tells us that when near to the root α, the errors will decrease by a constant factor of g'(α). If this is negative, then the errors will oscillate between positive and negative, and the iterates will be approaching α from both sides. When g'(α) is positive, the iterates will approach α from only one side.
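To see S4 in action (again a sketch of my own, not the text's code): for E1 the ratio (α − x_{n+1})/(α − x_n) settles toward g'(α) = 0.5 cos α ≈ 0.036, and since this is positive, the iterates approach α from one side only, as the table shows.

import math

g  = lambda x: 1 + 0.5 * math.sin(x)
dg = lambda x: 0.5 * math.cos(x)

alpha = 0.0
for _ in range(100):          # well-converged reference value for the root
    alpha = g(alpha)

x = 0.0
for n in range(8):
    x_next = g(x)
    print(f"n={n}   ratio={(alpha - x_next) / (alpha - x):+.6f}")
    x = x_next

print(f"g'(alpha) = {dg(alpha):+.6f}")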
The statements

α − x_{n+1} = g'(c_n)(α − x_n)
α − x_{n+1} ≈ g'(α)(α − x_n)

also tell us a bit more of what happens when

|g'(α)| > 1

Then the errors will increase, rather than decrease in size, as we iterate near the root.
Look at the earlier examples

E1: x = 1 + 0.5 sin x
E2: x = 3 + 2 sin x

In the first case E1,

g(x) = 1 + 0.5 sin x
g'(x) = 0.5 cos x
|g'(x)| ≤ 1/2 for all x

Therefore the fixed point iteration

x_{n+1} = 1 + 0.5 sin x_n

will converge for E1. In contrast, for E2,

g(x) = 3 + 2 sin x
g'(x) = 2 cos x

and at the root α ≈ 3.09 (near π), g'(α) = 2 cos α ≈ −2, so |g'(α)| > 1. Therefore the fixed point iteration

x_{n+1} = 3 + 2 sin x_n

will diverge for E2.
Corollary: Assume x = g(x) has a solution α, and further assume that both g(x) and g'(x) are continuous for all x in some interval about α. In addition, assume

|g'(α)| < 1        (**)

Then for any sufficiently small number ε > 0, the interval [a, b] = [α − ε, α + ε] will satisfy the hypotheses of the preceding theorem.
AITKEN EXTRAPOLATION

The quantity

x_n + (λ/(1 − λ)) [x_n − x_{n−1}]

is said to be an extrapolation of the numbers x_{n−1} and x_n; adding the correction term to x_n amounts to the error estimate α − x_n ≈ (λ/(1 − λ)) [x_n − x_{n−1}]. But what is λ?
From

lim_{n→∞} (α − x_n)/(α − x_{n−1}) = g'(α)

we have, for large n,

(α − x_n)/(α − x_{n−1}) ≈ λ ≡ g'(α)

Unfortunately this also involves the unknown root α which we seek, and we must find some other way of estimating λ.
This suggests using

α − x_n ≈ λ(α − x_{n−1}),   α − x_{n−1} ≈ λ(α − x_{n−2})

Subtracting, x_n − x_{n−1} ≈ λ(x_{n−1} − x_{n−2}), and therefore

λ ≈ (x_n − x_{n−1})/(x_{n−1} − x_{n−2}) ≡ λ_n

If this is accurate, then so are the above extrapolation and error estimation formulas.
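As a quick check (my own sketch, not from the text), for E1 the estimates λ_n do settle toward g'(α) ≈ 0.036:

import math

g = lambda x: 1 + 0.5 * math.sin(x)   # E1
xs = [0.0]
for _ in range(10):
    xs.append(g(xs[-1]))

for n in range(2, 9):
    lam_n = (xs[n] - xs[n-1]) / (xs[n-1] - xs[n-2])
    print(f"lambda_{n} = {lam_n:+.6f}")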
EXAMPLE

Step 1: Select x_0.
Step 2: Calculate
x_1 = g(x_0),   x_2 = g(x_1)
Step 3: Calculate
x_3 = x_2 + (λ_2/(1 − λ_2)) [x_2 − x_1],   λ_2 = (x_2 − x_1)/(x_1 − x_0)
Step 4: Calculate
x_4 = g(x_3),   x_5 = g(x_4)
and calculate x_6 as the extrapolate of {x_3, x_4, x_5}. Continue this procedure, ad infinitum.
α − x_3 = 7.98 × 10^−4,   α − x_6 = 2.27 × 10^−6

In comparison, the original iteration had

α − x_6 = 1.23 × 10^−2
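Steps 1-4 are straightforward to code. The sketch below is mine and is applied to E1 purely for illustration; the errors quoted above come from the text's own example, whose g(x) is not shown here.

import math

def aitken_step(g, x0):
    """One sweep of Steps 2-3: two plain iterates, then one extrapolation."""
    x1 = g(x0)
    x2 = g(x1)
    lam = (x2 - x1) / (x1 - x0)        # lambda_n, the estimate of g'(alpha)
    return x2 + lam / (1.0 - lam) * (x2 - x1)

g = lambda x: 1 + 0.5 * math.sin(x)    # E1, used here only for illustration
x = 0.0
for sweep in range(3):
    x = aitken_step(g, x)
    print(f"after sweep {sweep + 1}: x = {x:.14f}")

Each sweep costs two evaluations of g plus one extrapolation; the first sweep can overshoot while λ_n is still a rough estimate of g'(α), but later sweeps home in on the root rapidly.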
GENERAL COMMENTS

Aitken extrapolation accelerates the convergence of the basic iteration

x_{n+1} = g(x_n)

This shows the power of understanding the behaviour of the error in a numerical process. From that understanding, we can often improve the accuracy, through extrapolation or some other procedure.