Unit 3 Solution of Non-Linear Equations
2.0 INTRODUCTION
In this unit we will discuss one of the most basic problems in numerical analysis. The problem is called a root-finding problem and consists of finding values of the (real) variable x that satisfy the equation f(x) = 0 for a given function f. Let f be a real-valued function of a real variable. Any real number α for which f(α) = 0 is called a root of that equation, or a zero of f. We shall confine our discussion to locating only the real roots of f(x) = 0; locating non-real complex roots will not be discussed. This is one of the oldest numerical approximation problems. The procedures we will discuss range from the classical Newton-Raphson method, developed primarily by Isaac Newton over 300 years ago, to methods established in the recent past.
A myriad of methods is available for locating zeros of functions. In the first section we discuss the bisection method and the fixed-point method. In the second section, the chord methods for finding roots will be discussed; more specifically, we will take up the regula-falsi method (or method of false position), the Newton-Raphson method, and the secant method. In Section 3, we will discuss error analysis for iterative methods, i.e., convergence analysis of iterative methods.
We shall consider the problem of numerical computation of the real roots of a given equation
f(x) = 0,
which may be algebraic or transcendental. It will be assumed that the function f(x) is continuously differentiable a sufficient number of times. Mostly we shall confine ourselves to simple roots, and we indicate the iteration function for multiple roots in the case of the Newton-Raphson method.
All the methods for numerical solution of equations discussed here will consist of two steps. The first step is the location of the roots: rough approximate values of the roots are obtained as initial approximations to a root. The second step consists of methods which improve the rough value of each root.
A method for improvement of the value of a root at the second step usually involves a process of successive approximation, or iteration. In such a process a sequence {x_n}, n = 0, 1, 2, …, is generated by the method used, starting with the initial approximation x_0 of the root α obtained in the first step, such that the sequence {x_n} converges to α as n → ∞. This x_n is called the nth approximation, or nth iterate, and it gives a sufficiently accurate value of the root α.
For the first step we need the following theorem:
Theorem 1: If f(x) is continuous in the closed interval [a, b] and f(a) and f(b) are of opposite signs, then there is at least one real root α of the equation f(x) = 0 such that a < α < b.
If, further, f(x) is differentiable in the open interval (a, b) and either f′(x) < 0 or f′(x) > 0 in (a, b), then f(x) is strictly monotonic in [a, b] and the root α is unique.
We shall not discuss the case of complex roots, roots of simultaneous equations nor shall we take up
cases when all roots are targeted at the same time, in this unit.
2.1 OBJECTIVES
After going through this unit, you should be able to:
where x_n denotes the nth iterated value, which is known, and x_{n+1} denotes the (n+1)th approximated value, which is to be computed. Now, f(x) = 0 can be expressed in the form x = φ(x) in many ways, but the corresponding iteration may not converge in all cases to the true value; rather, it may diverge and start giving absurd values. It can be proved that a sufficient condition for convergence of the scheme is that the modulus of the first derivative of φ(x), i.e. |φ′(x)|, at the exact root should be less than 1, i.e. if α is the exact root then |φ′(α)| < 1. But since we do not know the exact root, which is to be computed, we test the condition for convergence at the initial approximation, i.e. |φ′(x_0)| < 1. Hence it is necessary to take the initial approximation quite close to the exact root and to test the condition before starting the iteration. This method is also known as the ‘fixed point’ method, since the mapping x → φ(x) maps the root to itself: φ(α) = α, i.e. α remains unchanged (fixed) under the mapping x → φ(x).
Example
Find the positive root of x³ − 2x − 8 = 0 by the method of successive substitution, correct up to two places of decimal.
Solution
f(x) = x³ − 2x − 8
To find the approximate location of the (positive) root we evaluate the function at different values of x and tabulate as follows:

x        0    1    2    3    x > 3
f(x)    −8   −9   −4   13    +ive
Sign     −    −    −    +     +

The root lies between 2 and 3. Let us choose the initial approximation as x_0 = 2.5.
Let us express f(x) = 0 as x = φ(x) in the following forms and check whether |φ′(x)| < 1 for x = 2.5.
(i)  x = x³ − x − 8

(ii)  x = (x³ − 8)/2

(iii)  x = (2x + 8)^{1/3}

We see that in cases (i) and (ii) |φ′(x)| > 1 at x = 2.5; hence we should discard these representations. The third case satisfies the condition, since
|φ′(x)| = 2/[3(2x + 8)^{2/3}] < 1 for x = 2.5,
so we have the iteration scheme x_{n+1} = (2x_n + 8)^{1/3}.
Starting from x_0 = 2.5, we get the successive iterates shown in the table below:

n      0      1      2      3
x_n    2.5    2.35   2.33   2.33

Hence the root correct to two decimal places is 2.33.
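The successive-substitution scheme above can be sketched in code. This is our illustrative sketch, not part of the unit; the helper name `fixed_point` and the stopping tolerance are our own choices.

```python
# Successive substitution (fixed-point iteration) for x = (2x + 8)^(1/3),
# the convergent rearrangement (iii) of x^3 - 2x - 8 = 0 chosen above.

def fixed_point(phi, x0, tol=1e-4, max_iter=50):
    """Iterate x_{n+1} = phi(x_n) until two successive values agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

phi = lambda x: (2 * x + 8) ** (1 / 3)
root = fixed_point(phi, 2.5)
print(round(root, 2))  # agrees with the tabulated value 2.33
```

The same routine diverges if handed rearrangement (i) or (ii), which is exactly why the |φ′| < 1 test matters.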
[Figure 1: The regula-falsi (chord) method — the chord through two points on the curve y = f(x) on opposite sides of the root cuts the x-axis at successive approximations x_1, x_2, ….]
Example
Find the positive root of x³ + 4x² − 10 = 0 by the regula-falsi method. Compute up to two decimal places only.
Solution
It is the same equation, x³ + 4x² − 10 = 0, as treated by other methods in this unit. We start by taking x_1 = 1 and x_2 = 2. We have y = x³ + 4x² − 10; y_1 = −5 and y_2 = 14. The points on the curve are (1, −5) and (2, 14). The point on the x-axis where the line joining these two points cuts it is given by

I-Iteration
x = [x_1 y_2 − x_2 y_1]/(y_2 − y_1) = [1 × 14 − 2 × (−5)]/(14 + 5) = 24/19 = 1.26
y = f(x) = −1.65

II-Iteration
Take the points (1.26, −1.65) and (2, 14):
x = [1.26 × 14 − 2 × (−1.65)]/(14 + 1.65) = 1.34
y = f(x) = −0.41

III-Iteration
Take the two points (1.34, −0.41) and (2, 14):
x = [1.34 × 14 − 2 × (−0.41)]/(14 + 0.41) = 1.36
y = f(x) = −0.086

IV-Iteration
Take the two points (1.36, −0.086) and (2, 14):
x = [1.36 × 14 − 2 × (−0.086)]/(14 + 0.086) = 1.36

Since the value of x repeats, we take the root as x = 1.36.
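The steps above can be carried out by a small routine. This is our sketch, for illustration only; the function name and iteration count are our own.

```python
# Regula-falsi: the chord through (a, f(a)) and (b, f(b)) cuts the x-axis at
# x = (a*f(b) - b*f(a)) / (f(b) - f(a)); the sub-interval that keeps the
# sign change is retained, exactly as in the iterations worked out above.

def regula_falsi(f, a, b, n_iter):
    fa, fb = f(a), f(b)
    x = a
    for _ in range(n_iter):
        x = (a * fb - b * fa) / (fb - fa)
        fx = f(x)
        if fa * fx < 0:          # root in (a, x): move the right end
            b, fb = x, fx
        else:                    # root in (x, b): move the left end
            a, fa = x, fx
    return x

f = lambda x: x**3 + 4 * x**2 - 10
x4 = regula_falsi(f, 1.0, 2.0, 4)
print(round(x4, 2))  # 1.36, as in iteration IV
```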
Secant Method
As in the regula-falsi method, two values of x, say x_1 and x_2, are chosen in the neighbourhood of the actual root, but they may lie on the same side of the root or on opposite sides. A straight line is drawn through (x_1, y_1) and (x_2, y_2), and the position x where it intersects the x-axis is found. We then take the points (x, y) and (x_1, y_1) (or (x_2, y_2)), draw the straight line through them, find its point of intersection with the x-axis, and so on. See Figure 2.
[Figure 2: The secant method — the secant through (x_1, y_1) and (x_2, y_2) cuts the x-axis at the next approximation.]
Example
Show four iterations of the secant method for finding the root of the equation x³ + 4x² − 10 = 0, starting near x = 0 and x = 1. Compute up to two decimal places only.
Solution
f(x) = x³ + 4x² − 10 = 0; y = f(x) = x³ + 4x² − 10
y_1 = f(0) = −10; y_2 = f(1) = −5
We have two points on the curve, (0, −10) and (1, −5), and can draw a secant passing through these points. The point where it cuts the x-axis is given by

I-Iteration
x = [x_1 y_2 − x_2 y_1]/(y_2 − y_1) = [0 × (−5) − 1 × (−10)]/(−5 + 10) = 2.0
y = f(x) = 14

II-Iteration
Take the two points (1, −5) and (2, 14):
x = [1 × 14 − 2 × (−5)]/(14 + 5) = 1.26
y = f(x) = −1.65

III-Iteration
Take the two points (1, −5) and (1.26, −1.65):
x = [1 × (−1.65) − 1.26 × (−5)]/(−1.65 + 5) = 1.39
y = f(x) = 0.41

IV-Iteration
Take the points (1.26, −1.65) and (1.39, 0.41):
x = [1.26 × 0.41 − 1.39 × (−1.65)]/(0.41 + 1.65) = 1.36
y = f(x) = −0.086
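A sketch of the secant update follows. Note one hedge: the standard secant method always uses the two most recent iterates, whereas iteration III above reused the older point (1, −5); both variants reach the same root here. The code below is our illustration of the standard form.

```python
def secant(f, x0, x1, n_iter):
    """Secant iterations using the two most recent approximations."""
    for _ in range(n_iter):
        f0, f1 = f(x0), f(x1)
        x0, x1 = x1, (x0 * f1 - x1 * f0) / (f1 - f0)
    return x1

f = lambda x: x**3 + 4 * x**2 - 10
approx = secant(f, 0.0, 1.0, 6)
print(round(approx, 4))  # 1.3652
```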
Newton-Raphson (N-R) Method
The Newton-Raphson method, commonly known as the N-R method, is the most popular method for finding the roots of an equation. Its approach differs from all the methods discussed earlier in that it uses only one value of x in the neighbourhood of the root instead of two. We can explain the method geometrically as follows:
Suppose we want to find a root of the equation f(x) = 0, where y = f(x) represents a curve and we are interested in the point where it cuts the x-axis. Let x = x_0 be an initial approximate value of the root, close to the actual root. We evaluate y(x_0) = f(x_0) = y_0 (say). Then the point (x_0, y_0) lies on the curve y = f(x). We find dy/dx = f′(x) for x = x_0, say f′(x_0). Then we may draw the tangent at (x_0, y_0), given by
y − y_0 = f′(x_0)(x − x_0).
The point where the tangent cuts the x-axis (y = 0) is taken as the next estimate x = x_1 for the root, i.e.
x_1 = x_0 − f(x_0)/f′(x_0).
In general,
x_{n+1} = x_n − f(x_n)/f′(x_n);  see Figure 3.
[Figure 3: The Newton-Raphson method — successive tangents to y = f(x) cut the x-axis at x_1, x_2, ….]
Here the iteration function is
φ(x) = x − f(x)/f′(x).
To compute the qth root of N, take
f(x) = x^q − N;  f′(x) = q x^{q−1}.
The N-R iterative scheme may be written
x_{n+1} = x_n − (x_n^q − N)/(q x_n^{q−1}) = [(q − 1) x_n^q + N]/(q x_n^{q−1}).
For the cube root of 10 we have N = 10, q = 3. Hence
x_{n+1} = (2 x_n³ + 10)/(3 x_n²).
Taking x_0 = 2.0 we get the following iterated values:
x_1 = 26/12 = 2.167;  x_2 = 30.3520/14.0877 = 2.154;  x_3 = 29.9879/13.9191 = 2.154.
Hence we get 10^{1/3} ≈ 2.154.
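The qth-root scheme just derived can be sketched directly (our code; the names are illustrative):

```python
def nth_root(N, q, x0, n_iter):
    """Newton-Raphson for f(x) = x**q - N:
    x_{k+1} = ((q - 1) * x_k**q + N) / (q * x_k**(q - 1))."""
    x = x0
    for _ in range(n_iter):
        x = ((q - 1) * x**q + N) / (q * x**(q - 1))
    return x

cube_root_10 = nth_root(10, 3, 2.0, 4)
print(round(cube_root_10, 3))  # 2.154, as in the table above
```

Because the convergence is quadratic, four iterations already give far more than the three decimals shown.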
Example
Using the N-R method, find the root of the equation x − cos x = 0 correct up to two places of decimal only. Take the starting value as x_0 = π/4 (π = 3.1416; π radians = 180°).
Solution
f(x) = x − cos x;  f′(x) = 1 + sin x
The N-R scheme is given by
x_{n+1} = x_n − (x_n − cos x_n)/(1 + sin x_n) = (x_n sin x_n + cos x_n)/(1 + sin x_n).
Taking x_0 = π/4,
x_1 = [(π/4)(1/√2) + 1/√2]/(1 + 1/√2) = (π/4 + 1)/(√2 + 1) = 1.7854/2.4142 = 0.7395.
A further iteration gives x_2 = 0.7391, so the root correct to two decimal places is 0.74.
[Figure: The curves y = x and y = cos x intersect between 0 and π/2 at the root of x − cos x = 0.]
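A general Newton-Raphson routine applied to f(x) = x − cos x (our sketch; the stopping rule is our own choice):

```python
import math

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n), stopping when the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x - math.cos(x), lambda x: 1 + math.sin(x), math.pi / 4)
print(round(root, 2))  # 0.74
```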
Supplementary material
In other words, we want to locate zeros of the function f(x). The function f(x) may be a polynomial in x or a transcendental function. Rarely is it possible to obtain the exact roots of f(x) = 0. In general, we aim to obtain only approximate solutions using some computational techniques. However, it should be borne in mind that the roots can be computed as close to the exact roots as we wish through these methods. We say x* satisfies f(x) = 0 approximately when f(x*) is small, or when x* is close to a solution of f(x) = 0 in some sense, like |x* − α| < ε where α is a root of f(x) = 0.
To find an initial approximation of the root, we use the tabulation method or the graphical method, which gives an interval containing the root. In this section we discuss two iterative methods: (i) the bisection method and (ii) the fixed-point method. In a later section we shall discuss the rate of convergence of these methods.
The method calls for a repeated halving of subintervals of [a, b] and, at each step, locating the “half” containing α. To start with, a_1 = a and b_1 = b, and let α_1 be the midpoint of [a, b], that is, α_1 = (a_1 + b_1)/2. If f(α_1) = 0, then α = α_1. If not, then f(α_1) has the same sign as either f(a_1) or f(b_1). If f(a_1) f(α_1) < 0, then the root lies in (a_1, α_1); otherwise the root lies in (α_1, b_1). In the first case we set a_2 = a_1 and b_2 = α_1, and in the latter case we set a_2 = α_1 and b_2 = b_1. Now we reapply the process to the interval (a_2, b_2). Repeat the procedure until the interval width is as small as we desire. At each step, bisection halves the length of the preceding interval. After n steps, the original interval length enclosing the root will be reduced by a factor 1/2^n.
[Figure: Bisection — the interval [a, b] with f(a) < 0 < f(b) is repeatedly halved; the midpoints u_1, u_2, u_3, … approach the root.]
We now mention some stopping procedures that could be applied to terminate the algorithm. Select a tolerance ε > 0 and generate α_1, α_2, …, α_n until one of the following conditions is met:

(i)  |α_n − α_{n−1}| < ε,  (2.2.1)

(ii)  |α_n − α_{n−1}|/|α_n| < ε,  α_n ≠ 0, or  (2.2.2)

(iii)  |f(α_n)| < ε.  (2.2.3)
While applying the bisection method we repeatedly apply a fixed sequence of steps. Such a method is called an iteration method.
However, it is pertinent to mention that difficulties can arise using any of these stopping criteria. For example, there exist sequences {α_n} with the property that the differences α_n − α_{n−1} converge to zero while the sequence itself diverges. Also, it is possible for f(α_n) to be close to zero while α_n differs significantly from α. The criterion given by (2.2.2) is the best stopping criterion to apply, since it tests the relative error.
Though the bisection algorithm is conceptually clear, it has significant drawbacks: it is very slow in converging. But the method will always converge to a solution, and for this reason it is often used to obtain a first approximation for the more efficient methods that are going to be discussed.
Theorem 3: Let f ∈ C[a, b] and suppose f(a) · f(b) < 0. The bisection procedure generates a sequence {α_n} approximating α with the property
|α_n − α| ≤ (b − a)/2^n,  n ≥ 1.
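Theorem 3's error bound is easy to check numerically. The sketch below (ours, not the unit's) runs bisection on the f(x) of Example 1 and verifies |α_n − α| ≤ (b − a)/2^n; the reference root value is an assumption taken from a high-precision solve.

```python
def bisect(f, a, b, n):
    """n bisection steps; keeps the half interval where f changes sign."""
    for _ in range(n):
        mid = (a + b) / 2
        if f(a) * f(mid) < 0:
            b = mid
        else:
            a = mid
    return (a + b) / 2

f = lambda x: x**3 + 4 * x**2 - 10
n = 20
approx = bisect(f, 1.0, 2.0, n)
bound = (2.0 - 1.0) / 2**n          # Theorem 3's guarantee after n steps
print(abs(approx - 1.3652300134) <= bound)  # True
```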
Example 1
f(x) = x³ + 4x² − 10 = 0.
Check that f(x) has only one root in the interval in which its least positive root lies. Find α_4 by the bisection algorithm.
Solution

x       0     1     2
f(x)  −10    −5    14

The least positive root lies in (1, 2), where f is increasing, so the root there is unique.

n    a_n      b_n      α_n        f(α_n)
1    1        2        1.5         2.375
2    1        1.5      1.25       −1.79687
3    1.25     1.5      1.375       0.16211
4    1.25     1.375    1.3125     −0.84839
5    1.3125   1.375    1.34375    −0.35098

After four iterations we have α_4 = 1.3125 approximating the root α with error
|α − α_4| ≤ b_5 − a_5 = 1.375 − 1.3125 = 0.0625, and 1.3125 < α.
Also
|α − α_4|/|α| ≤ (b_5 − a_5)/|a_5| = 0.0625/1.3125 < (1/2) × 10^{−1}.
Remarks 1: Generally, the first-stage methods for location of the roots of f(x) = 0 are (i) the tabulation method and (ii) the graphical method. The method of tabulation is very crude and laborious; we used it in the above example, to some extent, in locating the least positive root of f(x) = 0. In the graphical method we plot the curve y = f(x) on graph paper; the points where the curve crosses the x-axis give approximate values of the roots.
For example, the equation x³ − x − 1 = 0 can be rewritten as

x = (1 + x)^{1/3},
or  x = (1 + x)/x²,
or  x = [(1 + x)/x]^{1/2}.

Each such g(x) given above is called an iteration function. In fact, there are an infinite number of ways in which the original equation f(x) = 0 can be written as x = g(x). Which one of all these functions is to be selected will be discussed in the following analysis.
Definition 1: A number α is called a fixed point of g(x) if g(α) = α, and g is called the iteration function.
Our problem is now to find the fixed point(s) of g(x). Graphically, solving x = g(x) is equivalent to finding the intersection of y = x and y = g(x).
[Figure: A fixed point of g — the intersection of y = x and y = g(x) in [a, b].]
Once an iteration function is chosen, to solve x = g(x) we start with some suitable value x_0 close to the root (how to choose this will be explained) and calculate x_1 = g(x_0) (the first approximation), then x_2 = g(x_1) (the second approximation), and so on. In general,
x_{n+1} = g(x_n),  n = 0, 1, 2, …
The sequence {x_n} converges (under some suitable conditions on g) to a number α (say). If g is continuous, then this gives α = g(α); that is, α is a fixed point of g(x).
Concerning the existence and uniqueness of a fixed point and the convergence of the sequence, we state a theorem below:
Theorem 4 (Fixed Point Theorem): Let the iteration function g(x) be defined and continuous on a closed interval I = [a, b]. Suppose further that g(x) satisfies the following:
(i) g(x) ∈ I for all x ∈ I,
(ii) g(x) is differentiable on I = [a, b], and there exists a non-negative number k < 1 such that |g′(x)| ≤ k < 1 for all x ∈ I.
Then
(a) g(x) has a fixed point α,
(b) the fixed point is unique, and
(c) the sequence {x_n} generated from the rule x_{n+1} = g(x_n) converges to α, the fixed point of g(x), for any x_0 ∈ [a, b].
Proof: (a) Existence: If g(a) = a or g(b) = b, then there is nothing to prove. So suppose g(a) ≠ a and g(b) ≠ b. Then g(a) > a and g(b) < b, since g(x) ∈ I for all x ∈ I. Define h(x) = g(x) − x; then h(a) > 0 and h(b) < 0. Also, h(x) is continuous on I since g(x) is so. Hence, by the Intermediate Value Theorem, there exists a number α, a < α < b, such that h(α) = 0, i.e.
g(α) − α = 0, i.e. g(α) = α.
(b) Uniqueness: h′(x) = g′(x) − 1 ≤ k − 1 < 0 on I. Therefore h(x) is a decreasing function and it crosses the x-axis only once, i.e. h(x) vanishes only once in I. Therefore g(x) − x = 0 for only one value of x in (a, b). Hence uniqueness.
(c) Convergence: Let e_{n+1} = α − x_{n+1} = g(α) − g(x_n) = g′(η_n)(α − x_n), where η_n lies between x_n and α; that is, e_{n+1} = g′(η_n) e_n. Hence
|e_{n+1}| ≤ k |e_n| ≤ … ≤ k^{n+1} |e_0| → 0 as n → ∞,
i.e. lim_{n→∞} (α − x_n) = 0, so {x_n} converges to α.
Example 2
For x³ − x − 1 = 0, find a positive root by the fixed point method. Find the minimum number of iterations so that the nth approximation x_n is correct to 4 decimal places.
Solution
Write x = (1 + x)^{1/3} = g(x).
Also g(1) − 1 = 2^{1/3} − 1 > 0
and g(2) − 2 = 3^{1/3} − 2 < 0,
so take I = [a, b] = [1, 2].
g′(x) = 1/[3(1 + x)^{2/3}] is a decreasing function, and
max_{x∈I} |g′(x)| = g′(1) = 1/(3 · 2^{2/3}) = k < 1.
Since g′(x) > 0, g(x) is increasing:
max_{x∈I} g(x) = g(2) = 3^{1/3} = 1.442 < 2,
min_{x∈I} g(x) = g(1) = 2^{1/3} > 1.
Hence g(x) ∈ I for all x ∈ I.
Therefore x_{n+1} = (1 + x_n)^{1/3} generates a sequence of numbers which converges to the fixed point of g(x), starting with any x_0 ∈ I.
We have k = 1/(3 · 2^{2/3}) < 1 and
|e_n| ≤ k^n |e_0|, with |e_0| ≤ 1. Hence, for the desired accuracy, we need
[1/(3 · 2^{2/3})]^n < (1/2) × 10^{−4},  which gives n = 7.
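The predicted iteration count can be checked with a few lines (our sketch; the reference root value is an assumption taken from a high-precision solve):

```python
# Fixed-point iteration x_{n+1} = (1 + x_n)**(1/3) for x**3 - x - 1 = 0.
# The bound |e_n| <= k**n |e_0| with k = 1/(3 * 2**(2/3)) gave n = 7 above.

g = lambda x: (1 + x) ** (1 / 3)

x = 1.0                      # x0 in I = [1, 2], so |e0| <= 1
for _ in range(7):
    x = g(x)

true_root = 1.3247179572     # root of x**3 - x - 1 = 0
print(abs(x - true_root) < 0.5e-4)  # True: 4-decimal accuracy reached
```

In practice the iteration does even better than the worst-case bound, since |g′| is much smaller than k near the root.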
Remark 2: In the following figures we observe the importance of |g′(x)| in the neighbourhood of a fixed point α.
[Figure 3: Two cases with |g′(x)| > 1 near α — the iterates x_0, x_1, x_2, … move away from the fixed point.]
In the neighbourhood of α, |g′(x)| > 1: the sequences diverge in these cases (Fig. 3).
In the neighbourhood of α, |g′(x)| < 1: the sequences converge in these two cases (Fig. 4).
[Figure 4: Two cases with |g′(x)| < 1 near α — the iterates spiral or staircase into the fixed point.]
Remark 3: In numerical problems, one may follow the following procedure to find an interval [a, b]. (In practice, one often needs only to check that |g′(x)| < 1 at a point in the neighbourhood of the root, so determining an interval I exactly is not always necessary.)
Choose an interval [a, b] by some trial and check the following:
(i) a − g(a) and b − g(b) must be of opposite signs (with b − g(b) > 0),
(ii) |g′(x)| ≤ k < 1 for x ∈ [a, b],
(iii) g(x) is continuous on [a, b].
If the above conditions are not satisfied, try a smaller interval, and so on.
Example 3
Find the smallest positive root of e^{−x} − cos x = 0 by the fixed point method.
Solution
[Figures 5 and 6: The graphs of y = e^{−x} and y = cos x; their first positive intersection lies between 0 and π/2.]
The figure shows that the desired root lies between 0 and π/2, i.e. in (0, π/2).
Now let us try x = cos^{−1}(e^{−x}) = g(x). Then
g′(x) = 1/√(e^{2x} − 1).
To make this less than 1 we must have e^{2x} − 1 > 1, that is, e^{2x} > 2. This gives x > (1/2) ln 2. This suggests the interval ((1/2) ln 2, π/2); to have a closed interval, we consider I = [ln 2, π/2].
Now suppose further that we want to find the minimum number of iterations required to get 4 decimal place accuracy. Let n be the minimum number of iterations required. We need
|e_n| ≤ k^n |e_0| < (1/2) × 10^{−4},
where k = max_{x∈I} |g′(x)| = 1/√3 and |e_0| ≤ π/2 − ln 2 < 1. Thus the condition is satisfied if
(1/√3)^n < (1/2) × 10^{−4}. That is,
n (1/2) log₁₀ 3 ≥ 4 + log₁₀ 2, i.e.
n ≥ 4.301/0.238 ≈ 18.07, i.e. n = 19.
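In code (our sketch), the iteration x_{n+1} = cos⁻¹(e^{−x_n}) indeed settles onto the root; the iteration count and tolerance below are our own choices, generous relative to the bound just derived.

```python
import math

g = lambda x: math.acos(math.exp(-x))   # iteration function for e^{-x} - cos x = 0

x = 1.0                                  # any starting point in [ln 2, pi/2]
for _ in range(40):
    x = g(x)

residual = math.exp(-x) - math.cos(x)
print(abs(residual) < 1e-8)  # True
```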
Example 4: Find an iteration function and an interval I = [a, b] which satisfy the conditions of the fixed point theorem for finding the smallest positive root of
x = tan x.
Solution:
For x ∈ (π/2, 3π/2), tan x takes every real value exactly once, so we may invert on this branch and write g(x) = π + tan^{−1} x, which satisfies g(x) ∈ (π/2, 3π/2).
Also
max_{x∈I} |g′(x)| = max_{x∈I} 1/(1 + x²) = 1/(1 + π²/4) = 4/(4 + π²) < 1
(since |g′(x)| is a decreasing function, the maximum is attained at the left end point x = π/2).
Hence, for any x_0 ∈ I = [π/2, 3π/2], the sequence generated by the fixed-point iteration method will converge.
Remark 5: If ξ is a fixed point of g(x) lying in an open interval (c, d) on which |g′(x)| > 1, then the sequence {x_n} generated with g(x) as iteration function will not converge to ξ, however close x_0 is taken to ξ, except accidentally. (Consider the root ξ = 2 of f(x) = x² − x − 2 = 0 with g(x) = x² − 2.)
Remark 6: If ξ is a fixed point of g(x) such that |g′(ξ)| = 1, then the iteration with g(x) may or may not converge to ξ. However, if |g′(x)| < 1 in some deleted neighbourhood of ξ, then it will converge to ξ, with x_0 taken sufficiently close to ξ. If |g′(x)| > 1 in some deleted neighbourhood of ξ, then the sequence will not converge to ξ.
Remark 7: The conditions mentioned in the fixed-point theorem are sufficient but not necessary.
We now discuss one example, which is very simple but conveys the fact that if a function f(x) has several zeros, i.e. f(x) = 0 has more than one real root, then we may have to consider a different g(x) for different roots.
Example 5: Find an iteration function g(x) and a corresponding interval to obtain each of the two roots, 1 and 2, of the equation
x² − 3x + 2 = 0
by the fixed point iteration method.
Solution:
For the root 1, one choice is g₁(x) = (x² + 2)/3, with g₁′(x) = 2x/3. On I = [0, 1.35] we have
max_{x∈I} |g₁′(x)| = 2(1.35)/3 = 9/10 < 1,
and g₁(I) ⊆ I. Hence all the conditions for the fixed point theorem are satisfied, and the iteration converges to the root 1.
This g₁ fails for the root 2, since |g₁′(2)| = 4/3 > 1; near 2 a different iteration function, such as g₂(x) = 3 − 2/x (for which |g₂′(2)| = 1/2 < 1), must be used.
In the following two examples, we use the corollary to the fixed point theorem (Theorem 4).
Example 6: The equation f(x) = x⁴ − x − 10 = 0 has a root in the interval [1, 2]. Derive a suitable iteration function φ(x) such that the sequence of iterates obtained from the method x_{k+1} = φ(x_k), k = 0, 1, 2, …, converges to the root of f(x) = 0. Using this method and the initial approximation x_0 = 1.8, iterate thrice.
Solution: Choose φ(x) = (x + 10)^{1/4}, I = [1, 2].
Then φ′(x) = (1/4)(x + 10)^{−3/4}, and
max_{x∈I} |φ′(x)| = 1/(4 · 11^{3/4}) < 1,
i.e. |φ′(x)| < 1 for x ∈ [1, 2].
Given x_0 = 1.8:
x_1 = (1.8 + 10)^{1/4} = 1.8534
x_2 = (1.8534 + 10)^{1/4} = 1.8555
x_3 = (1.8555 + 10)^{1/4} = 1.8556 ≈ 1.86
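Three iterations of φ(x) = (x + 10)^{1/4} reproduce the values above (our sketch, for illustration):

```python
phi = lambda x: (x + 10) ** 0.25    # iteration function for x**4 - x - 10 = 0

x = 1.8
iterates = []
for _ in range(3):
    x = phi(x)
    iterates.append(round(x, 4))

print(iterates)  # [1.8534, 1.8555, 1.8556]
```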
Example 7: The equation f(x) = x³ − 5x + 1 = 0 has a root in the interval [0, 1]. Derive a suitable iteration function φ(x) such that the sequence of iterates obtained from the formula x_{k+1} = φ(x_k), k = 0, 1, 2, …, converges to the root of f(x) = 0. Using this formula and the initial approximation x_0 = 0.5, iterate thrice.
Solution: φ(x) = (x³ + 1)/5 is chosen, since then φ′(x) = 3x²/5 and max_{0 ≤ x ≤ 1} |φ′(x)| = 3/5 < 1.
With x_0 = 0.5: x_1 = 0.225 ≈ 0.23, x_2 = 0.202, x_3 = 0.202.
What about choosing φ(x) = (5x − 1)^{1/3}?
What is max_{0 ≤ x ≤ 1} |φ′(x)| in that case?
Let us consider the line joining (x_0, f(x_0)) and (x_1, f(x_1)). This line cuts the x-axis at some point, say x_2. We find f(x_2). If f(x_2) f(x_0) < 0, then we replace x_1 by x_2 and draw the straight line connecting (x_2, f(x_2)) and (x_0, f(x_0)). If instead f(x_2) f(x_0) > 0, then x_0 is replaced by x_2 and we draw the straight line connecting (x_1, f(x_1)) and (x_2, f(x_2)). The point where the new straight line crosses the x-axis gives x_3. In both cases the new interval obtained is smaller than the initial interval. We repeat the above procedure; ultimately the sequence is guaranteed to converge to the desired root.
[Figure 7: The regula-falsi method — successive chords of y = f(x) over a shrinking bracket cut the x-axis at x_2, x_3, x_4, ….]
The equation of the chord PQ is
y − f(x_0) = [(f(x_1) − f(x_0))/(x_1 − x_0)] (x − x_0).
This cuts the x-axis at the point x_2 given by
0 − f(x_0) = [(f(x_1) − f(x_0))/(x_1 − x_0)] (x_2 − x_0),
i.e.
x_2 = [x_0 f(x_1) − x_1 f(x_0)]/[f(x_1) − f(x_0)].
In general,
x_{r+1} = [x_{r−1} f(x_r) − x_r f(x_{r−1})]/[f(x_r) − f(x_{r−1})],  r = 1, 2, 3, ….
If f(x_2) = 0, then x_2 is the required root. If f(x_2) ≠ 0 and f(x_0) f(x_2) < 0, then the next approximation lies in (x_0, x_2); otherwise it lies in (x_2, x_1). Repeat the process till |x_{i+1} − x_i| < ε.
Example 8: The equation 2x³ + 5x² + 5x + 3 = 0 has a root in the interval [−2, −1]. Starting with x_0 = −2.0 and x_1 = −1.0 as initial approximations, perform three iterations of the regula-falsi method.
Solution:
f(−2) = −16 + 20 − 10 + 3 = −3,
f(−1) = −2 + 5 − 5 + 3 = 1, and f(−2) f(−1) < 0.
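The three requested iterations can be carried out mechanically. The sketch below is ours (the variable names are our own); it confirms that the iterates approach the exact root −1.5 of this cubic.

```python
f = lambda x: 2 * x**3 + 5 * x**2 + 5 * x + 3

a, b = -2.0, -1.0                    # f(-2) = -3 < 0 < 1 = f(-1)
for _ in range(3):
    x = (a * f(b) - b * f(a)) / (f(b) - f(a))   # chord meets the x-axis
    if f(a) * f(x) < 0:
        b = x                        # root bracketed in (a, x)
    else:
        a = x                        # root bracketed in (x, b)

print(round(x, 4))  # -1.4495
```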
The N-R method can also be introduced by basing it on Taylor's expansion, as explained below. Let x_0 be an initial approximation, assume that x_0 is close to the exact root α, and f′(x_0) ≠ 0. Let α = x_0 + h, where h is a quantity small in magnitude. Let f(x) satisfy all the conditions of Taylor's theorem. Then
0 = f(α) = f(x_0 + h) = f(x_0) + h f′(x_0) + (h²/2) f″(x_0) + ….
The method is derived by assuming that the term involving h² is negligible, so that
f(x_0) + h f′(x_0) = 0, i.e. f(x_0) + (α − x_0) f′(x_0) ≈ 0,
i.e. α ≈ x_0 − f(x_0)/f′(x_0),
i.e. x_1 = x_0 − f(x_0)/f′(x_0).
Geometrically, the next approximation x_1 is the abscissa of the point of intersection of the tangent PT and the x-axis in Figure 8.
[Figure 8: Newton-Raphson — the tangent to y = f(x) at (x_0, f(x_0)) cuts the x-axis at x_1, the next tangent at x_2, and so on.]
Example 9: Derive a Newton-Raphson scheme for computing x = R^{1/p}, i.e. for solving x^p − R = 0.
Solution: Here f(x) = x^p − R and f′(x) = p x^{p−1}. Substituting in
x_{k+1} = x_k − f(x_k)/f′(x_k)
gives
x_{k+1} = x_k − (x_k^p − R)/(p x_k^{p−1}).
On simplification we get
x_{k+1} = [(p − 1)/p] x_k + R/(p x_k^{p−1}),  k = 0, 1, 2, ….
For R = 16, p = 3, x_0 = 2, we get x_1 = 8/3 = 2.67, x_2 = 91/36 = 2.53, …
Example 10: The quartic equation x⁴ − 4x² + 4 = 0 has a double root. Starting with x_0 = 1.5, compute two iterations by the Newton-Raphson method.
Solution: For an m-times repeated root of f(x) = 0, the Newton-Raphson iteration scheme is obtained by replacing f(x_k)/f′(x_k) by m f(x_k)/f′(x_k) in x_{k+1} = x_k − f(x_k)/f′(x_k), giving
x_{k+1} = x_k − m f(x_k)/f′(x_k),  k = 0, 1, 2, ….
In this case we have
x_{k+1} = x_k − 2(x_k⁴ − 4x_k² + 4)/(4x_k³ − 8x_k)  (since m = 2 and f(x) = x⁴ − 4x² + 4).
With x_0 = 1.5, we have
x_1 = 1.5 − 2(0.0625)/1.5 = 17/12 = 1.4167 ≈ 1.41,
x_2 ≈ 1.41 (to two decimals),
already close to the double root √2 = 1.4142.
Example 11: Perform two iterations of the Newton-Raphson method to find an approximate value of 1/15, starting with the initial approximation x_0 = 0.02.
Solution: Suppose we want to find the reciprocal of the number N.
Let f(x) = 1/x − N.
Then f′(x) = −1/x², and the iteration scheme is
x_{k+1} = x_k − (1/x_k − N)/(−1/x_k²) = 2x_k − N x_k²,  k = 0, 1, 2, ….
We compute, with N = 15 and x_0 = 0.02:
x_1 = 2(0.02) − 15(0.02)² = 0.034,
x_2 = 2(0.034) − 15(0.034)² = 0.0507.
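The scheme x_{k+1} = 2x_k − N x_k² involves no division, which is the classical appeal of this iteration for computing reciprocals. A short sketch (ours, with our own names):

```python
def reciprocal(N, x0, n_iter):
    """Division-free Newton-Raphson for 1/N: x_{k+1} = x_k * (2 - N * x_k)."""
    x = x0
    for _ in range(n_iter):
        x = x * (2 - N * x)
    return x

print(round(reciprocal(15, 0.02, 1), 4))    # 0.034  (x1 above)
print(round(reciprocal(15, 0.02, 2), 4))    # 0.0507 (x2 above)
print(round(reciprocal(15, 0.02, 10), 10))  # ~ 1/15 = 0.0666666667
```

The iteration converges for any x_0 with 0 < x_0 < 2/N.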
For the secant method, the next approximation is obtained from the two current points as
x_2 = x_1 − f(x_1) / [(f(x_1) − f(x_0))/(x_1 − x_0)] = [x_0 f(x_1) − x_1 f(x_0)]/[f(x_1) − f(x_0)].
[Figure 9: The secant method — the line through the two most recent points on y = f(x) cuts the x-axis at the next iterate x_3.]
Example 12: Apply the secant method to find a root of the equation.
Exercises
E1) Using the Newton-Raphson method, find the square root of 10 with initial approximation x_0 = 3.

E3) Do three iterations of the secant method to find an approximate root of the equation
3x³ − 4x² + 3x − 4 = 0,
starting with the initial approximations x_0 = 0 and x_1 = 1.
E4) Do three iterations of the fixed point iteration method to find the smallest positive root of x² − 3x + 1 = 0, by choosing a suitable iteration function that converges. Start with x_0 = 0.5.

E5) Obtain the smallest positive root of the equation x³ − 5x + 1 = 0 by using 3 iterations of the bisection method.

E6) Starting with x_0 = 0, perform two iterations to find an approximate root of the equation x³ − 4x + 1 = 0, using the Newton-Raphson method.

E8) Apply the bisection method to find an approximation to the positive root of the equation
2x − 3 sin x − 5 = 0,
rounded off to three decimal places.

E9) It is known that the equation x³ + 7x² + 9 = 0 has a root between −8 and −7. Use the regula-falsi method to obtain the root rounded off to 3 decimal places. Stop the iteration when |x_{i+1} − x_i| < 10^{−4}.
Let x_0, x_1, x_2, … be a sequence generated by some iterative method, converging to a root α, and let e_n = α − x_n. If
lim_{n→∞} |e_{n+1}|/|e_n|^p = C  (C does not depend on n),
then p is called the order of convergence of the method and C is called the asymptotic error constant. An iterative method with order of convergence higher than 1 is expected to converge rapidly. If p = 1, 2, 3, …, then the convergence is called linear, quadratic, cubic, … respectively.
(i) For the Fixed Point Iteration method the order of convergence is generally 1, that is, it is of
first order (convergence is linear).
(ii) For the Newton-Raphson method, with xo near the root, the order of convergence is 2, that is,
of second order (convergence is quadratic).
(iii) For the Secant Method order of convergence is 1.618 1.62 but it is not guaranteed to
converge.
The bisection method is guaranteed to converge, but convergence is slow. Regula-falsi method is
guaranteed to converge. However, it is slow and order of convergence is 1.
Theorem 5: If g′(x) is continuous in some neighbourhood of the fixed point ξ of g, then the fixed point method converges linearly, provided g′(ξ) ≠ 0.
Proof: As before, e_{n+1} = g′(η_n) e_n with η_n between x_n and ξ. On taking n → ∞, we have
lim_{n→∞} |e_{n+1}|/|e_n| = |g′(ξ)| = C ≠ 0.
Hence p = 1, and the fixed point method converges linearly.
Theorem 6: If g″(x) is continuous in some neighbourhood of the fixed point ξ of g, then the fixed point method converges quadratically, provided g′(ξ) = 0 and g″(ξ) ≠ 0.
Proof: We have
e_{n+1} = ξ − x_{n+1} = g(ξ) − g(x_n) = g(ξ) − g(ξ − e_n).
By using Taylor's series expansion, we have
e_{n+1} = g(ξ) − {g(ξ) − e_n g′(ξ) + (e_n²/2) g″(η_n)}.
Since g′(ξ) = 0,
e_{n+1} = −(e_n²/2) g″(ξ + h_n), and since g″(x) is continuous, h_n → 0 as n → ∞.
Thus
lim_{n→∞} |e_{n+1}|/|e_n|² = |g″(ξ)|/2 = C ≠ 0,
so p = 2 and the convergence is quadratic.
N-R Method
Theorem 7: Suppose we are to solve f(x) = 0, where f′(x) ≠ 0 and f″(x) is continuous on the closed finite interval [a, b], and let the following conditions be satisfied:
(i) f(a) f(b) < 0,
(ii) f″(x) does not change sign on [a, b],
(iii) |f(a)/f′(a)| < b − a and |f(b)/f′(b)| < b − a.
Then Newton's method converges to the unique solution of f(x) = 0 in [a, b] for any choice of x_0 ∈ [a, b].
Theorem 8: Let f(x) be twice continuously differentiable in an open interval containing a simple root ξ of f(x) = 0. Then Newton's method converges quadratically.
Proof: g(x) = x − f(x)/f′(x) is continuously differentiable in some open neighbourhood of ξ. On differentiating g(x), we get
g′(x) = 1 − [f′(x)² − f(x) f″(x)]/f′(x)² = f(x) f″(x)/f′(x)².
Hence g′(ξ) = 0, since f(ξ) = 0. Differentiating again,
g″(x) = [f″(x) f′(x)² + f(x) f′(x) f‴(x) − 2 f(x) f″(x)²]/f′(x)³,
so that
g″(ξ) = f″(ξ)/f′(ξ) ≠ 0 (provided f″(ξ) ≠ 0).
Hence Newton's method converges quadratically if x_0 is chosen sufficiently close to ξ, where ξ is a simple root of f(x) = 0.
[Figure 10: The secant method — the chord AB through (x_{n−1}, f(x_{n−1})) and (x_n, f(x_n)) intersects the x-axis at C, giving x_{n+1}.]
Let the chord AB through the points A(x_{n−1}, f(x_{n−1})) and B(x_n, f(x_n)) be drawn. Suppose it intersects the x-axis at C; denote this value of x by x_{n+1}. The chord is
y − f(x_{n−1}) = [(f(x_n) − f(x_{n−1}))/(x_n − x_{n−1})] (x − x_{n−1}).
Setting y = 0, x = x_{n+1}, we get
x_{n+1} = x_{n−1} − f(x_{n−1})(x_n − x_{n−1})/[f(x_n) − f(x_{n−1})]
        = [x_{n−1} f(x_n) − x_n f(x_{n−1})]/[f(x_n) − f(x_{n−1})].
This is known as the secant method. The sequence {x_n} is generated with starting points x_0, x_1. We get x_2, reject x_0, and use x_1, x_2 to get x_3, and so on.
Writing x_n = ξ − e_n and substituting, we get
e_{n+1} = [e_{n−1} f(ξ − e_n) − e_n f(ξ − e_{n−1})]/[f(ξ − e_n) − f(ξ − e_{n−1})].
Expanding in Taylor series about ξ and using f(ξ) = 0,
f(ξ − e_n) = −e_n f′(ξ) + (e_n²/2) f″(ξ) + higher order terms,
the numerator becomes (1/2) f″(ξ) e_n e_{n−1} (e_n − e_{n−1}) + higher order terms, while the denominator becomes (e_{n−1} − e_n) f′(ξ) + higher order terms. Hence
e_{n+1} ≈ −(1/2) [f″(ξ)/f′(ξ)] e_n e_{n−1} for sufficiently large n.
Let δ = |(1/2) f″(ξ)/f′(ξ)| (a fixed constant). Then
|e_{n+1}| = δ |e_n| |e_{n−1}|.
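The error relation just derived can be verified numerically. The sketch below is ours, not part of the unit; it runs the secant method on f(x) = x³ − x − 1 (the equation of exercise E14) and compares the ratios |e_{n+1}|/(|e_n||e_{n−1}|) against δ. The high-precision root value is an assumption taken from a separate solve.

```python
# Numerical check of |e_{n+1}| ≈ δ|e_n||e_{n-1}| for the secant method.

f = lambda x: x**3 - x - 1
xi = 1.3247179572447460          # the root, used only to measure errors

xs = [1.0, 2.0]                  # x0, x1
for _ in range(5):
    x0, x1 = xs[-2], xs[-1]
    xs.append((x0 * f(x1) - x1 * f(x0)) / (f(x1) - f(x0)))

errors = [abs(xi - x) for x in xs]
delta = abs(6 * xi / (2 * (3 * xi**2 - 1)))     # |f''(xi) / (2 f'(xi))|
ratios = [errors[n + 1] / (errors[n] * errors[n - 1]) for n in range(3, 6)]
print(ratios)   # the ratios tend to delta ≈ 0.93
```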
Now, comparing the rates of convergence of the fixed point method, the N-R method and the secant method, we find that the secant method is faster than the fixed point method, and the N-R method is faster than the secant method. For further qualitative comparison, refer to the books mentioned.
Apart from the rate of convergence, the amount of computational effort required per iteration and the sensitivity of the method to the starting value and the intermediate values are the two main bases for comparison of the various iterative methods discussed here. In the case of Newton's method, if f′(x) is near zero at any time during the iterative cycle, it may diverge. Furthermore, the amount of computational effort needed to compute both f(x) and f′(x) is considerable and time consuming, whereas the fixed point method is easy to program.
E11) Let M denote the length of the initial interval [a_0, b_0]. Let x_0, x_1, x_2, … represent the successive midpoints generated by the bisection method. Show that
|x_{i+1} − x_i| = M/2^{i+2}.
Also show that the number n of iterations required to guarantee an approximation to a root to an accuracy ε is given by
n ≥ [log(M/ε)/log 2] − 2.
E13) The quartic equation x⁴ − 4x² + 4 = 0 has a double root at √2. Starting with x_0 = 1.5, compute three successive approximations to the root by the Newton-Raphson method. Do this with
g₁(x) = x − f(x)/f′(x) and g₂(x) = x − 2f(x)/f′(x),
and comment on the order of convergence from your results.

E14) The following are the successive iterations obtained by the secant method to find the real positive root of the equation x³ − x − 1 = 0, starting with x_0 = 1.0 and x_1 = 2.0:

n      2          3          4          5          6          7
x_n    1.1666667  1.2531120  1.3372064  1.3238501  1.3247079  1.3247180

Compute the errors e_n = ξ − x_n and the ratios |e_{n+1}|/(|e_n||e_{n−1}|), and compare the latter with
|f″(ξ)/(2f′(ξ))|,
assuming the value of ξ, correct to the decimal places shown, as ξ = 1.324718.
E15) If a_0 = 0 and b_0 = 1.0, how many steps of the bisection method are needed to determine the root with an error of at most 10^{−5}?
2.5 SUMMARY
In this unit we have covered the following points:
The methods for finding an approximate solution of equation in one variable involve two steps:
(i) Find an initial approximation to a root.
(ii) Improve the initial approximation to get more accurate value of the root.
2.6 SOLUTIONS/ANSWERS
E1) x = √10, i.e. x² = 10. Take f(x) = x² − 10. Then
x_{n+1} = x_n − (x_n² − 10)/(2x_n) = (x_n² + 10)/(2x_n),  n = 0, 1, 2, …
x_0 = 3, x_1 = 19/6 = 3.167, x_2 = [(3.167)² + 10]/6.334 = 3.162.
E2) Here, with the chosen iteration function φ(x), we have |φ′(x_0)| < 1 at x_0 = −0.5, and the successive iterates are
x_1 = −5/12 = −0.417, x_2 = −169/432 = −0.391, and x_3 = −0.384.
E5) f(0) > 0 and f(1) < 0, so the smallest positive root lies in ]0, 1[. At each step it is enough to check the sign of f at the midpoint — the value itself need not be calculated. The successive midpoints are x_1 = 0.5, x_2 = 0.25, x_3 = 0.125, and the root lies in ]0.125, 0.25[.

E6) With f(x) = x³ − 4x + 1 and x_0 = 0, the Newton-Raphson method gives
x_1 = 0 − f(0)/f′(0) = 1/4 = 0.25, x_2 = 0.254095 ≈ 0.2541.
E7) f(x) = x³ + x − 6, x_0 = 1, x_1 = 2. Use
x_{n+1} = [x_{n−1} f(x_n) − x_n f(x_{n−1})]/[f(x_n) − f(x_{n−1})],  n = 1, 2, 3, …

E9) With x_0 = −8 and x_1 = −7 we get x_2 = −7.140625 and, similarly, x_3 = −7.168174. Complete the table of iterated values: you will find that the difference between the 5th and 6th iterated values is
|−7.1748226 − (−7.1747855)| = 0.0000371 < 10^{−4},
signalling a stop to the iteration. We conclude that −7.175 is an approximate root rounded to three decimal places.
x_2 = [x_0 f(x_1) − x_1 f(x_0)]/[f(x_1) − f(x_0)] = 0.3146653378
x_3 = [x_1 f(x_2) − x_2 f(x_1)]/[f(x_2) − f(x_1)] = 0.4467281466
E11) Starting the bisection method with the initial interval [a_0, b_0] (recall that in each step the interval width is reduced by a factor 1/2), we have
b_1 − a_1 = (b_0 − a_0)/2 = M/2,
b_2 − a_2 = (b_1 − a_1)/2 = (b_0 − a_0)/2²,
and finally b_n − a_n = (b_0 − a_0)/2^n.
Let x_n = (a_n + b_n)/2. Then
x_{n+1} − x_n = (a_{n+1} + b_{n+1})/2 − (a_n + b_n)/2.
We have either
a_{n+1} = (a_n + b_n)/2 and b_{n+1} = b_n,
or
a_{n+1} = a_n and b_{n+1} = (a_n + b_n)/2.
In either case,
|x_{n+1} − x_n| = (b_n − a_n)/2² = (b_0 − a_0)/2^{n+2} = M/2^{n+2}.
We want |x_{n+1} − x_n| = M/2^{n+2} ≤ ε. This is satisfied if
log M − (n + 2) log 2 ≤ log ε,
i.e.
n ≥ [log(M/ε)/log 2] − 2.
E13) For a simple root we have
g₁′(ξ) = f(ξ) f″(ξ)/f′(ξ)² = 0, since f(ξ) = 0 (and f′(ξ) ≠ 0).
But here it is given that f(ξ) = 0 = f′(ξ) and f″(ξ) ≠ 0. In this case
lim_{x→ξ} f(x) f″(x)/f′(x)² = [lim_{x→ξ} f(x)/f′(x)²] · [lim_{x→ξ} f″(x)],
and by L'Hospital's rule
lim_{x→ξ} f(x)/f′(x)² = lim_{x→ξ} f′(x)/[2 f′(x) f″(x)] = lim_{x→ξ} 1/[2 f″(x)],
so that
g₁′(ξ) = 1/2 ≠ 0.
Hence the iteration with g₁ does not converge quadratically.
In the case
x_{n+1} = x_n − 2 f(x_n)/f′(x_n), where g₂(x) = x − 2 f(x)/f′(x),
we have
g₂′(x) = 1 − 2 [f′(x)² − f(x) f″(x)]/f′(x)² = −1 + 2 f(x) f″(x)/f′(x)²,
and
lim_{x→ξ} g₂′(x) = −1 + 2 · (1/2) = 0.
With g₁(x):
x_1 = 1.5 − 0.0625/1.5 ≈ 1.5 − 0.04 = 1.46
f(1.46) = 4.543 − 8.52 + 4 = 0.02
f′(1.46) = 12.45 − 11.68 = 0.77
x_2 = 1.46 − 0.02/0.77 ≈ 1.44
With g₂(x):
x_1 = 1.5 − 2 × 0.0625/1.5 ≈ 1.5 − 0.08 = 1.42
f(1.42) = 4.065 − 8.065 + 4 = 0
x_2 = 1.42
The actual root is √2 = 1.4142. Hence convergence is faster with g₂(x) in two-decimal-digit arithmetic.
E14) With ξ = 1.324718, the errors e_n = ξ − x_n and the ratios are:

n                            2           3           4           5           6
|e_n|                        0.1580513   0.0716060   0.0124884   0.0008679   0.0000101
|e_{n+1}|/(|e_n||e_{n−1}|)               1.1034669   0.9705400   0.9318475

The ratios approach |f″(ξ)/(2f′(ξ))| = 6ξ/[2(3ξ² − 1)] ≈ 0.93, in agreement with |e_{n+1}| = δ|e_n||e_{n−1}|.
E15) Here M = b_0 − a_0 = 1 − 0 = 1 and ε = 10^{−5}. So
n ≥ [log(M/ε)/log 2] − 2 = [log 10⁵/log 2] − 2 = 5 log 10/log 2 − 2
  = 5 × 3.322 − 2 = 16.61 − 2 = 14.61.
Hence n = 15.