Numerical Methods For ODE and PDE
References:
Bradie, B., A Friendly Introduction to Numerical Analysis, Pearson Education, 2007.
Burden, R. L., Faires, J. D., Numerical Analysis, Cengage Learning, 2007.
Chapra, S. C., Canale, R. P., Numerical Methods for Engineers, Tata McGraw Hill, 2003.
Gerald, C. F., Wheatley, P. O., Applied Numerical Analysis, Addison Wesley, 1998.
Lecture 1
Keywords: Initial Value Problem, Approximate solution, Picard method, Taylor series
Solution of first order ordinary differential equations
Consider y(t) to be a function of a variable t. A first order ordinary differential equation is an equation relating t, y and the first derivative y′. The most general form is
F(t, y(t), y′(t)) = 0    (1.1)
A function y(t) is a solution of (1.1) if
i) y(t) is differentiable, and
ii) substitution of y(t) and y′(t) in (1.1) satisfies the differential equation identically.
Examples:
y′ + 2y = 0
y′ + sin y = exp(t)
The first of these equations represents the exponential decay of radioactive material, where y represents the amount of material at any given time and k = 2 is the rate of decay.
It may be noted that y(t) = c exp(−2t) is a solution of the differential equation, as it
identically satisfies the given differential equation for an arbitrarily chosen constant c. This
means that the differential equation has infinitely many solutions for different choices of
c. In other words, the mathematical model admits infinitely many solutions, whereas the real-world
problem has only one. An initial condition must therefore be specified to find the unique solution of
the problem:
y(0) = A
That is, the amount of radioactive material present at time t = 0 is A. When this initial
condition is imposed on the solution, the constant c is evaluated as A and the solution
y(t) = A exp(−2t) is now unique. The expression can now be used for computing the
amount of material at any given time.
The solution with arbitrary constant is known as the general solution of the differential
equation. The solution obtained using the initial condition is a particular solution.
A first order Initial Value Problem (IVP) is defined as a first order differential equation together with a specified initial condition at t = t0:
y′(t) = f(t, y(t)), t0 ≤ t ≤ b;  y(t0) = y0    (1.2)
There exist several methods for finding solutions of differential equations. However, all
differential equations are not solvable. The following well known theorem from theory of
differential equations establishes the existence and uniqueness of solution of the IVP:
Let f(t, y) be continuous in a domain D = {(t, y): t0 ≤ t ≤ b, c ≤ y ≤ d} ⊂ R². If f satisfies a
Lipschitz condition in the variable y on D and (t0, y0) is in D, then the IVP has a unique solution
y = y(t) on some interval t0 ≤ t ≤ t0 + δ.
{The function f satisfies a Lipschitz condition in y means that there exists a positive constant L
such that |f(t, y) − f(t, w)| ≤ L |y − w| for all (t, y), (t, w) in D.}
The theorem gives conditions on function f(t, y) for existence and uniqueness of the
solution. But the solution has to be obtained by available methods. It may not be
possible to obtain analytical solution (in closed form) of a given first order differential
equation by known methods even when the above theorem guarantees its existence.
Sometimes it is very difficult to obtain the solution. In such cases, the approximate
solution of given differential equation can be obtained.
Approximate Solution
Picard method: Starting from the initial approximation y0(t) = y0, successive approximations to the solution of the IVP are generated by
y_{k+1}(t) = y0 + ∫_{t0}^{t} f(x, y_k(x)) dx,  k = 0, 1, 2, ...    (1.3)
From the theory of differential equations, it can be proved that this sequence of
approximations converges to the exact solution of the IVP.
Example 1.1: Obtain the approximate solution of the IVP using the Picard method. Also obtain its
exact solution.
y′ = 1 + t y;  y(0) = 1
Solution: Given that y0 = 1. Using (1.3) gives
y_{k+1}(t) = 1 + ∫_0^t f(x, y_k) dx,  f(x, y_k) = 1 + x y_k
y_{k+1}(t) = 1 + ∫_0^t (1 + x y_k) dx = 1 + t + ∫_0^t x y_k dx
y_1(t) = 1 + t + t²/2
y_2(t) = 1 + t + ∫_0^t x (1 + x + x²/2) dx = 1 + t + t²/2 + t³/3 + t⁴/8
y_3(t) = 1 + t + ∫_0^t x (1 + x + x²/2 + x³/3 + x⁴/8) dx = 1 + t + t²/2 + t³/3 + t⁴/8 + t⁵/15 + t⁶/48
and so on.
The differential equation in example 1.1 is a linear first order equation. Its exact solution
can be obtained as
y(t) = exp(t²/2) [1 + ∫_0^t exp(−x²/2) dx]
The closed form solution of differential equation in this case is possible. But the
expression involving an integral is difficult to analyze. The sequence of polynomials as
obtained by Picard method gives only approximate solution, but for many practical
problems this form of solution is preferred.
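The Picard iterations of Example 1.1 can be generated symbolically. The following is a minimal sketch in Python using sympy; the function name picard and the number of iterations are illustrative choices, not part of the original notes.

```python
import sympy as sp

t, x = sp.symbols('t x')

def picard(f, y0, t0=0, iterations=3):
    """Successive Picard approximations y_{k+1}(t) = y0 + int_{t0}^{t} f(x, y_k(x)) dx."""
    yk = sp.sympify(y0)                      # y_0(t) = y0
    approximations = [yk]
    for _ in range(iterations):
        integrand = f(x, yk.subs(t, x))      # f(x, y_k(x))
        yk = sp.sympify(y0) + sp.integrate(integrand, (x, t0, t))
        approximations.append(sp.expand(yk))
    return approximations

# IVP of Example 1.1: y' = 1 + t*y, y(0) = 1
for k, approx in enumerate(picard(lambda x_, y_: 1 + x_ * y_, 1)):
    print(f"y_{k}(t) =", approx)
```

Running this reproduces the polynomials y_1, y_2, y_3 obtained above.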
Taylor Series method:
The IVP gives the solution y0 at the initial point t = t0. For a given step size h, the solution at
t1 = t0 + h can be computed from the Taylor series as
y(t1) = y(t0 + h) = y(t0) + h y′(t0) + (h²/2) y″(t0) + (h³/6) y‴(t0) + ...    (1.4)
The derivatives are obtained from the differential equation y′ = f(t, y):
y′(t0) = f(t0, y0)
y″(t0) = [∂f/∂t + y′ ∂f/∂y] at t = t0
y‴(t0) = [∂²f/∂t² + 2y′ ∂²f/∂t∂y + (y′)² ∂²f/∂y² + y″ ∂f/∂y] at t = t0, and so on.
Substituting these derivatives and truncating the series (1.4) gives the approximate
solution at t1.
Example 1.2: Obtain the approximate solution y(t) of the IVP using the Taylor series method.
Obtain the approximate solution at t = 0.1 correct to 4 decimal places.
y′ = 1 + t y;  y(0) = 1
Solution: Given that y′ = 1 + t y = f(t, y).
Repeated differentiations yield
y″ = y + t y′
y‴ = 2y′ + t y″
y⁗ = 3y″ + t y‴, and so on.
Observe that the Picard method involves integration while Taylor series method
involves differentiation of the function f. Depending on the ease of operation, one can
select the appropriate method for finding the approximate solution. The number of
iterations in Picard method depends upon the accuracy requirement. The step size h
can be chosen sufficiently small to meet the accuracy requirement in case of Taylor
series method. For fixed h, more terms have to be included in the solution when more
accuracy is desired.
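A minimal Python sketch of the Taylor series method for Example 1.2, using the derivatives y″ = y + t y′, y‴ = 2y′ + t y″ and y⁗ = 3y″ + t y‴ obtained above (the function name is illustrative):

```python
def taylor_step(t0, y0, h, terms=5):
    """One Taylor-series step for y' = 1 + t*y (Example 1.2), keeping `terms` terms of (1.4)."""
    d1 = 1 + t0 * y0            # y'
    d2 = y0 + t0 * d1           # y''
    d3 = 2 * d1 + t0 * d2       # y'''
    d4 = 3 * d2 + t0 * d3       # y''''
    derivatives = [d1, d2, d3, d4][:terms - 1]
    y, factorial = y0, 1
    for n, d in enumerate(derivatives, start=1):
        factorial *= n
        y += d * h**n / factorial
    return y

print(round(taylor_step(0.0, 1.0, 0.1), 4))   # approximate y(0.1), 4 decimal places
```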
In the category of methods that include Picard method and Taylor series method, the
approximate solution is given in the form of a mathematical expression.
Module1: Numerical Solution of Ordinary Differential Equations
Lecture 2
Keywords: Numerical solution, grid points, local truncation error, rounding error
Numerical Solution
Numerical methods for solving ordinary differential equations are more popular due to
several reasons:
More computational effort is involved in the Picard and Taylor series methods for
complex real life applications.
The numerical methods can still be applied in cases where a closed form
expression for the function is not available, but the values of the function f are known
at finitely many discrete points; the analytical methods are not applicable there.
For example, the velocity of a particle is measured at given points and one is interested
in predicting the position of the particle at some future times. In such cases the analytical
methods cannot be applied and one has to obtain the solution by numerical methods.
In this lecture a very basic method known as Euler method is being discussed. The
method is illustrated with an example.
Euler method:
When the initial value problem (1.2) is solved numerically, the numeric values of the
solution y = y(t) are obtained at finitely many (say n) discrete points in the interval of
interest. Let these n points be equi-spaced in the interval [t0, b] as t1, t2, ..., tn such that
t_k = t0 + kh, k = 1, 2, ..., n. These points are known as grid points. Here the step size h is
computed as h = (b − t0)/n. The numeric value of the solution is known at t = t0. The
approximate numeric value y_k of the solution at the kth grid point t = t_k is an approximation to
the exact solution y(t_k) of the IVP. The Euler method specifies the formula for computing the
solution:
y_{k+1} = y_k + h f(t_k, y_k);  k = 0, 1, 2, ..., n−1    (1.5)
[Fig. 1.1: Computational grid showing the Euler approximations y1, y2, ..., yn at the grid points t1, t2, ..., tn]
The Euler formula (1.5) is a one step difference formula. The solution obtained by this
formula is shown on the computational grid in figure 1.1.
Observe that f(t0, y0), with y(t0) = y0, is the slope of the solution curve at t = t0. The solution is
approximated as a straight line passing through y(t0) = y0 having slope f(t0, y(t0)). The
actual solution y(t) (shown in blue) may not be a straight line and y(t1) may be different
from y1 computed by the formula (1.5). It is only an approximation to the exact solution.
Starting from this approximation y1 at t1, the solution at next grid point t2 can be
approximated as y2 using (1.5). This is further continued for other grid points.
The actual solution curve may lie above or below the approximated solution.
The algorithm for computing the solution using the Euler method is given below.
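A minimal sketch of this algorithm in Python (the function name euler and its arguments are illustrative, not from the original notes):

```python
def euler(f, t0, y0, h, n):
    """Euler method (1.5): y_{k+1} = y_k + h*f(t_k, y_k), k = 0, ..., n-1."""
    t, y = t0, y0
    history = [(t, y)]
    for _ in range(n):
        y = y + h * f(t, y)      # advance the solution
        t = t + h                # advance the grid point
        history.append((t, y))
    return history

# Example 1.3 below: 3y' = t - y, i.e. f(t, y) = (t - y)/3, y(0) = 1, h = 1
for t, y in euler(lambda t, y: (t - y) / 3.0, 0.0, 1.0, 1.0, 3):
    print(t, y)
```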
Example 1.3: Solve the initial value problem using the above algorithm
3y′ = t − y;  y(0) = 1
The IVP is solved first for step size h=1. The solution is obtained at t=1, 2 and 3.
The computations are performed using MS-Excel diff-euler1.xls [See columns B and C.
The column D gives the truncation error.] Note that the equation can be solved exactly.
Its exact solution is y(t) = 4 exp(−t/3) + t − 3, so y(3.0) = 1.471518.
Next, the same problem is solved with step size h=0.5 up to t=3. The solution is
obtained successively at t=0.5, 1.0, 1.5, 2.0, 2.5, 3.0. [See columns E and F of the same
MS-Excel sheet. The column G gives the error.]
Comparing y computed at t=3.0 by two different step sizes, it is observed that solution
with smaller step size is closer to exact solution.
The computations are repeated with step size h=0.25 and 0.125 also. [See the excel
sheet columns H to M. The column O of the sheet gives exact solution at grid points
with h=0.125].
Table 1.1 shows the application of the Euler method for h = 0.125. The attached graph
shows the difference between the exact solution and the solution obtained by the Euler
method with h = 0.125. It is again observed that reducing the step size brings the computed
solution closer to the exact solution.
For the derivation of the Euler formula, consider the finite difference approximation of the
derivative
dy/dt |_{t=t_k} ≈ (y_{k+1} − y_k)/h
Substituting this into the differential equation and approximating f(t, y(t)) by f(t_k, y_k) gives
y_{k+1} = y_k + h f(t_k, y_k)
When k=0, the right side of the formula can be computed from known initial value y0.
Once y1 is computed, other yk, k=2, 3, 4,… can be computed successively in the similar
manner.
Analysis :
From Taylor's theorem,
y(t_k + h) = y(t_k) + h y′(t_k) + (h²/2) y″(ξ),  ξ ∈ (t_k, t_k + h)
Substituting the derivative from the differential equation and neglecting second order
terms of the Taylor theorem gives Euler formula which is an approximation of the
solution at next grid point:
Starting from the initial condition at t0, the approximate solution y1 at t1 computed by the
Euler method has an error arising from the truncation of the Taylor series. The local truncation
error T_{k+1} at the (k+1)th step is the difference between the exact solution and the approximate
solution as obtained by the numerical method, assuming the solution is exact at the kth step:
T_{k+1} = y(t_k + h) − y_{k+1}
T_{k+1} = y(t_k) + h y′(t_k) + (h²/2) y″(ξ) − [y_k + h f(t_k, y_k)]
T_{k+1} = y(t_k) − y_k + h [f(t_k, y(t_k)) − f(t_k, y_k)] + (h²/2) y″(ξ)
Writing T_k = y(t_k) − y_k, and using the Lipschitz condition on f and a bound M for |y″|, this gives
|T_{k+1}| ≤ (1 + hL) |T_k| + (h²/2) M
For y1, the initial condition y0 is assumed to be the correct solution; hence the local
truncation error is of order h2.
However, the solution at t2 has one more source of error and that is approximate value
of the solution y1 at t1 as computed in earlier step. This error is further accumulated as
solution is advanced to more grid points tk, k=3, 4,… . The accumulation of error is
evident from the fig. 1.1 also. The Final Global Error (F.G.E) in computing the final value
at the end of the interval (a,b); a= t0, b=to+Mh is accumulated error in M steps and is of
order h. This means that the error E(y(b),h) in computing y(b) using step size h is
approximated as
E( y(B), h) Ch
Accordingly, E( y (B), h / 2) Ch / 2
Therefore, halving the step size will half the FGE. FGE gives an estimate of
computational effort required to obtain an approximation to exact solution.
Mh
1 hL 1
k 1
Tk 1
2L
Mh (k 1)hL
Tk 1 e 1
2L
Apart from the Euler method, there are numerous numerical methods for solving IVP. In
the next couple of lectures more methods are discussed. These are not exhaustive. The
selection of methods is based on the fact that these are generally used by scientists and
engineers in solving IVP because of their simplicity. Also, more complex techniques are
the combination of one or more of these and their development is on similar lines.
Module1: Numerical Solution of Ordinary Differential Equations
Lecture 3
Modified Euler Method: A better estimate of the solution than the Euler method is expected
if the average slope over the interval (t0, t1) is used instead of the slope at a single point. This is
what the modified Euler method does. The solution is approximated as a straight line in the
interval (t0, t1) whose slope is the arithmetic average of the slopes at the beginning and end
points of the interval.
[Fig. 1.3: Modified Euler method: the Euler prediction y1p and the corrected value y1c at t1, starting from y0 at t0]
Accordingly, y1 is approximated as
y1 = y0 + (h/2) [f(t0, y0) + f(t1, y(t1))]    (1.6)
However, the value of y(t1) appearing on the right hand side is not known. To handle this, the
value y1p is first predicted by the Euler method and the predicted value is then used in
(1.6) to obtain a better approximation y1c to y1:
y1p = y0 + h f(t0, y0);  y1c = y0 + (h/2) [f(t0, y0) + f(t1, y1p)]
In the fig (1.3), observe that black dotted line indicates the slope f(t0,y(t0)) of the solution
curve at t0, red line indicates the slope f(t1,y(t1)), at the end point t1. Since the solution at
end point y(t1) is not known at the moment, its approximation y1p as obtained from Euler
method is used. The blue line indicates the average slope. Accordingly, y1 is a better
estimate than y1p. The method is also known as Heun’s Method.
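A minimal Python sketch of the modified Euler (Heun) method; the function names are illustrative, and as a usage example it is applied to the IVP y′ = 1 + t y, y(0) = 1 of Example 1.2:

```python
def modified_euler(f, t0, y0, h, n):
    """Heun / modified Euler: predict with Euler, correct with the average slope."""
    t, y = t0, y0
    values = [(t, y)]
    for _ in range(n):
        y_pred = y + h * f(t, y)                          # predictor (Euler step)
        y = y + h * (f(t, y) + f(t + h, y_pred)) / 2.0    # corrector (average slope)
        t += h
        values.append((t, y))
    return values

# y' = 1 + t*y, y(0) = 1, ten steps of h = 0.01
print(modified_euler(lambda t, y: 1 + t * y, 0.0, 1.0, 0.01, 10)[-1])
```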
Example 1.4: Solve the IVP in the interval (0.0, 2.0) using Modified Euler method with
step size h=0.2
dy/dt = y − 2t² + 1;  y(0) = 0.5
To analyze the local truncation error, expand the exact solution by Taylor's theorem:
y(t_k + h) = y(t_k) + h y′(t_k) + (h²/2) y″(t_k) + (h³/6) y‴(ξ),  ξ ∈ (t_k, t_k + h)
Replacing y″(t_k) by the forward difference (y′(t_{k+1}) − y′(t_k))/h gives
y(t_k + h) = y(t_k) + h y′(t_k) + (h/2) (y′(t_{k+1}) − y′(t_k)) + O(h³)
Further simplification gives the local truncation error of the modified Euler formula as O(h³):
y(t_k + h) = y(t_k) + (h/2) (y′(t_k) + y′(t_{k+1})) + O(h³)
The FGE in this method is of order h². This means that halving the step size reduces the
error to about one quarter of its previous value.
The Euler and modified Euler methods are explicit single step methods, as they need the
solution only at a single previous step. It may be observed that the Euler method is
derived by replacing the derivative by the forward difference
dy/dt |_{t=t_k} = (y_{k+1} − y_k)/h + O(h)
The backward and central difference approximations can also be used to generate single step
methods:
dy/dt |_{t=t_k} = (y_k − y_{k−1})/h + O(h)   or   dy/dt |_{t=t_k} = (y_{k+1} − y_{k−1})/(2h) + O(h²)
Module1
Lecture 4
Even if the expressions for the higher derivatives are obtained, a lot of computational effort may
still be required in their numerical evaluation.
It is possible to develop one step algorithms which require the evaluation of only the first derivative,
as in the Euler method, yet yield accuracy of higher order, as in the Taylor series. These
methods require evaluations of f(t, y(t)) at more than one point in the interval
[t_k, t_{k+1}]. This category of methods is known as Runge-Kutta methods of order 2, 3 and
more, depending upon the order of accuracy. A general Runge-Kutta algorithm is given
as
y_{k+1} = y_k + h Φ(t_k, y_k, h)    (1.7)
The function Φ is termed the increment function. The mth order Runge-Kutta method
gives accuracy of order h^m. The function Φ is chosen in such a way that, when
expanded, the right hand side of (1.7) matches the Taylor series up to the desired order.
This means that for a second order Runge-Kutta method the right side of (1.7) matches
up to the second order terms of the Taylor series.
The Second order Runge Kutta methods are known as RK2 methods. For the derivation
of second order Runge Kutta methods, it is assumed that phi is the weighted average of
two functional evaluations at suitable points in the interval [tk,tk+1]:
Φ(t_k, y_k, h) = w1 K1 + w2 K2
K1 = f(t_k, y_k)    (1.8)
K2 = f(t_k + p h, y_k + q h K1);  0 ≤ p, q ≤ 1
Here, four constants w1, w2, p and q are introduced. These are to be chosen in such a
way that the expansion matches with the Taylor series up to second order terms.
For this,
K2 = f(t_k + p h, y_k + q h K1)
   = f(t_k, y_k) + p h f_t(t_k, y_k) + q h K1 f_y(t_k, y_k) + O(h²)    (1.9)
   = f(t_k, y_k) + p h f_t(t_k, y_k) + q h f(t_k, y_k) f_y(t_k, y_k) + O(h²)
Substituting in (1.7),
y_{k+1} = y_k + h [w1 f(t_k, y_k) + w2 {f(t_k, y_k) + p h f_t(t_k, y_k) + q h f(t_k, y_k) f_y(t_k, y_k) + O(h²)}]
or
y_{k+1} = y_k + h [w1 + w2] f(t_k, y_k) + h² w2 [p f_t(t_k, y_k) + q f(t_k, y_k) f_y(t_k, y_k)] + O(h³)    (1.10)
On the other hand, the Taylor expansion of the exact solution is
y(t_k + h) = y(t_k) + h f(t_k, y(t_k)) + (h²/2) f′(t_k, y(t_k)) + (h³/6) f″(ξ, y(ξ)),  t_k < ξ < t_{k+1}
with f′(t_k, y(t_k)) = f_t(t_k, y(t_k)) + f(t_k, y(t_k)) f_y(t_k, y(t_k)), so that
y(t_k + h) = y(t_k) + h f(t_k, y(t_k)) + (h²/2) [f_t(t_k, y(t_k)) + f(t_k, y(t_k)) f_y(t_k, y(t_k))] + O(h³)    (1.11)
Comparing (1.10) and (1.11) term by term gives
w1 + w2 = 1,  w2 p = 1/2  and  w2 q = 1/2    (1.12)
Observe that four unknowns are to be evaluated from three equations. Accordingly,
many solutions of (1.12) are possible. Let us choose the arbitrary value q = 1; then
w1 = w2 = 1/2,  p = 1  and  q = 1
Accordingly, the second order Runge-Kutta can be written as
y_k^p = y_k + h f(t_k, y_k)
y_{k+1} = y_k + (h/2) [f(t_k, y_k) + f(t_k + h, y_k^p)]    (1.13)
This is the same as the modified Euler method. It may be noted that the method reduces to
a quadrature formula [the trapezoidal rule] when f(t, y) is independent of y:
y_{k+1} = y_k + (h/2) [f(t_k) + f(t_k + h)]
For convenience, q is chosen between 0 and 1 such that one of the weights w in the
method is zero. For example, choosing q = 1/2 makes w1 = 0 and (1.12) yields
w1 = 0,  w2 = 1,  p = q = 1/2
giving the midpoint method
y_k* = y_k + (h/2) f(t_k, y_k)
y_{k+1} = y_k + h f(t_k + h/2, y_k*)    (1.14)
The choice
w1 = 1/4,  w2 = 3/4,  p = q = 2/3
gives another second order Runge-Kutta method, known as the optimal RK2 method:
y_k* = y_k + (2h/3) f(t_k, y_k)
y_{k+1} = y_k + (h/4) f(t_k, y_k) + (3h/4) f(t_k + 2h/3, y_k*)    (1.15)
Example 1.5: Solve the IVP in 1 < t < 2 with h = 0.1 using the optimal Runge-Kutta method (1.15):
y′ = y/t − (y/t)²;  y(1) = 1
[Ref modified-euler.xlsx/sheet3]
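A minimal Python sketch of the optimal RK2 method (1.15) applied to Example 1.5 (function names are illustrative):

```python
def optimal_rk2(f, t0, y0, h, n):
    """Optimal RK2 (1.15): w1 = 1/4, w2 = 3/4, p = q = 2/3."""
    t, y = t0, y0
    out = [(t, y)]
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + 2*h/3, y + 2*h/3 * k1)
        y = y + h * (k1 + 3*k2) / 4.0
        t += h
        out.append((t, y))
    return out

# Example 1.5: y' = y/t - (y/t)**2, y(1) = 1, h = 0.1 on 1 < t < 2
for t, y in optimal_rk2(lambda t, y: y/t - (y/t)**2, 1.0, 1.0, 0.1, 10):
    print(f"t = {t:.1f}  y = {y:.6f}")
```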
Module1: Numerical Solution of Ordinary Differential Equations
Lecture 5
All the fourth order Runge Kutta Methods are of the following general form:
y_{k+1} = y_k + h Φ(t_k, y_k, h)
Φ = w1 K1 + w2 K2 + w3 K3 + w4 K4
K1 = f(t_k, y_k)
K2 = f(t_k + p1 h, y_k + a21 h K1)
K3 = f(t_k + p2 h, y_k + a31 h K1 + a32 h K2)
K4 = f(t_k + p3 h, y_k + a41 h K1 + a42 h K2 + a43 h K3)
The thirteen unknowns in the method have to be determined. The Taylor series expansions
of the solution and of K_i, i = 1, 2, 3, 4 are obtained and substituted in the first equation
above. For a fourth order Runge-Kutta method, comparing the terms up to h⁴ on the two
sides gives 11 equations in the thirteen unknowns; one particular solution is
p1 = p2 = 1/2,  p3 = 1,  w2 = w3 = 1/3
w1 = w4 = 1/6,  a21 = a32 = 1/2
a31 = a41 = a42 = 0,  a43 = 1
Accordingly the classical fourth order Runge-Kutta method is obtained as
y_{k+1} = y_k + (h/6) [K1 + 2 K2 + 2 K3 + K4]
K1 = f(t_k, y_k)
K2 = f(t_k + h/2, y_k + (h/2) K1)    (1.16)
K3 = f(t_k + h/2, y_k + (h/2) K2)
K4 = f(t_k + h, y_k + h K3)
It may be observed that RK4 uses four functional evaluations in the interval [t_k, t_{k+1}].
Example 1.6: Solve the IVP using the classical RK4 method (1.16):
dy/dt = 1/2 + y/t;  y(1) = 0
Solution: The solution of IVP by RK4 classical method is shown in the following table:
Various entries of the table in a row are computed by the RK4 classical formula (1.16).
For computations in the table the user is referred to the excel sheet
R4_CLASSICAL.xlsx/sheet1
The solutions are compared with the exact solution and the absolute error is given in the
last column.
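A minimal Python sketch of the classical RK4 method (1.16) applied to the IVP of Example 1.6; the function names are illustrative, and the exact solution y = (t/2) ln t used below for comparison is consistent with the exact values quoted in Table 1.5:

```python
import math

def rk4_classical(f, t0, y0, h, n):
    """Classical fourth order Runge-Kutta method (1.16)."""
    t, y = t0, y0
    out = [(t, y)]
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2)
        k4 = f(t + h,   y + h * k3)
        y = y + h * (k1 + 2*k2 + 2*k3 + k4) / 6.0
        t += h
        out.append((t, y))
    return out

# Example 1.6: dy/dt = 1/2 + y/t, y(1) = 0
for t, y in rk4_classical(lambda t, y: 0.5 + y/t, 1.0, 0.0, 1.0, 3):
    print(f"t = {t:.0f}  y = {y:.6f}  exact = {(t/2)*math.log(t):.6f}")
```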
The other RK4 formulae are obtained as
y_{k+1} = y_k + (h/8) [K1 + 3 K2 + 3 K3 + K4]
K1 = f(t_k, y_k)
K2 = f(t_k + h/3, y_k + (h/3) K1)    (1.17)
K3 = f(t_k + 2h/3, y_k − (h/3) K1 + h K2)
K4 = f(t_k + h, y_k + h K1 − h K2 + h K3)
and
y_{k+1} = y_k + (h/6) [K1 + 2(1 − 1/√2) K2 + 2(1 + 1/√2) K3 + K4]
K1 = f(t_k, y_k)
K2 = f(t_k + h/2, y_k + (h/2) K1)    (1.18)
K3 = f(t_k + h/2, y_k + (−1/2 + 1/√2) h K1 + (1 − 1/√2) h K2)
K4 = f(t_k + h, y_k − (1/√2) h K2 + (1 + 1/√2) h K3)
Example 1.7: Find the solution of the IVP using the fourth order Runge-Kutta method
given in (1.17) with h = 1:
dy/dt = 1/2 + y/t;  y(1) = 0
Solution: In the following table the computations are shown for the solution of the IVP using (1.17).
Although both methods are of the same order, (1.17) gives a slightly more accurate result
than the classical method (1.16).
h   k   tk   yk   k1   k2   k3   k4   yk+1   exact sol   Abs error
1 0 1 0 0.5 0.625 0.775 0.825 0.690625 0.693147 0.00252
1 1 2 0.69063 0.84531 0.91674 0.9971 1.038765 1.643824 1.647918 0.00409
1 2 3 1.64382 1.04794 1.09794 1.15249 1.186578 2.76705 2.772589 0.00554
1 3 4 2.76705 1.19176 1.23022 1.27143 1.300004 4.016642 4.023595 0.00695
1 4 5 4.01664 1.30333 1.33458 1.36767 1.392176 5.366922 5.375278 0.00836
1 5 6 5.36692 1.39449 1.4208 1.44843 1.469863 6.80093 6.810686 0.00976
1 6 7 6.80093 1.47156 1.49429 1.518 1.537026 8.306613 8.317766 0.01115
1 7 8 8.30661 1.53833 1.55833 1.5791 1.59619 9.874961 9.887511 0.01255
1 8 9 9.87496 1.59722 1.61508 1.63355 1.649065 11.49898 11.51293 0.01395
1 9 10 11.499 1.6499 1.66603 1.68266 1.696865 13.17308 13.18842 0.01534
1 10 11 13.1731 1.69755 1.71226 1.72738 1.74048 14.8927 14.90944 0.01674
Table 1.5: Solution by the RK4 formula (1.17) for Example 1.7
[Reference R4_CLASSICAL.xlsx/sheet2]
Let T_{k+1}(h) be the local truncation error at the (k+1)th step of a one step method with
step size h, assuming that no error was made in the previous steps. It is obtained as
T_{k+1}(h) = y(t_{k+1}) − y(t_k) − h Φ(t_k, y(t_k), h)
A one step method is said to be consistent if
lim_{h→0} T_{k+1}(h)/h = 0
It is now easy to verify that the Euler, modified Euler and Runge-Kutta methods are
consistent.
A one step method is convergent when the difference between the exact solution and
the solution of the difference equation at the kth step satisfies the condition
lim_{h→0} max_{1≤k≤N} |y(t_k) − y_k| = 0
Using the bound for Tk=y(tk)-yk proves the convergence of Euler method.
Stability of a numerical method ensures that small changes in the initial conditions
should not lead to large changes in the solution. This is particularly important as the
initial conditions may not be given exactly. The approximate solution computed with
errors in initial condition is further used as the initial condition for computing solution at
the next grid point. This accounts for large deviation in the solution started with small
initial errors. Also round off errors in computations may also affect the accuracy of the
solution at a grid point. Euler method is found to be stable.
According to the Lax equivalence theorem, given a properly posed initial value problem and
a finite-difference approximation to it that satisfies the consistency condition, stability is
the necessary and sufficient condition for convergence.
Module1: Numerical Solution of Ordinary Differential Equations
Lecture 6
We can further improve the efficiency by employing still higher order Runge-Kutta
methods. The higher order methods, say of order 5 and 6 are developed on the same
lines. These are more efficient as the higher accuracy is achieved with less
computational effort as compared to lower order methods.
Since the lower order method of the pair is of order four, the step size adjustment factor s can be
computed as
s = 0.84 (ε h / |T_{i+1}|)^{1/4}
Here ε is the accuracy requirement and T_{i+1} = |ŷ_{i+1} − y_{i+1}| is the estimate of the truncation
error, ŷ_{i+1} and y_{i+1} being the fifth and fourth order estimates respectively.
If the desired accuracy is not achieved the solution is iterated taking new value of h.
Depending upon error requirement the step size h can be increased or decreased. The
solution yk+1 of desired accuracy is obtained at tk+1=tk+sh. The method is known as
RKF45. To implement the method, the user specifies the allowable smallest step size
hmin , largest step size hmax and the maximum allowable local truncation error ε. The
following algorithm is used to solve IVP using RKF45 formulae with self adjusting
variable step sizes:
Algorithm RKF45
[Step 6] if (R > ε) flag=0
[Step 7] go to step 2
[Step 8] t=t+h, y=yk+1, k++; flag=1
[Step 9] if( t<=b) goto step 2
[Step 10] stop
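A minimal Python sketch of the step-size control loop only; rkf45_step is a hypothetical helper (not shown here) that is assumed to return the fourth and fifth order estimates of the RKF45 embedded pair:

```python
def adaptive_solve(f, rkf45_step, t0, y0, b, h, eps, hmin, hmax):
    """Drive an embedded pair with the adjustment s = 0.84*(eps*h/|y5 - y4|)**0.25."""
    t, y = t0, y0
    solution = [(t, y)]
    while t < b:
        y4, y5 = rkf45_step(f, t, y, h)        # lower and higher order estimates
        err = abs(y5 - y4)                      # local truncation error estimate
        if err <= eps or h <= hmin:             # accept (forced at the minimum step size)
            t, y = t + h, y5
            solution.append((t, y))
        # adjust the step size for the next attempt, whether accepted or rejected
        s = 0.84 * (eps * h / err) ** 0.25 if err > 0 else 2.0
        h = min(max(s * h, hmin), hmax)
        if t < b:
            h = min(h, b - t)                   # do not step past the end point
    return solution
```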
Example 1.9: Solve the IVP using the RKF45 algorithm:
dy/dt = y − t² + 1;  y(0) = 0.5
Solution: The detailed solution of the problem is worked out in the excel sheet rkf45.xls
h   t   y   k1   k2   k3   k4   k5   k6   y5
0.25 0 0.5 1.5 1.589844 1.638153 1.833988 1.863381 1.681916 0.92048705
0.188324 0 0.5 1.5 1.568405 1.604568 1.753716 1.775389 1.638313 0.80850123
0.25 0.188324 0.808501 1.773035 1.856403 1.901019 2.079976 2.10662 1.94126 1.2937217
0.192067 0.188324 0.808501 1.773035 1.83778 1.87192 2.011491 2.031676 1.903627 1.17405097
0.20392 0.380391 1.174051 2.029354 2.091427 2.124074 2.255372 2.274263 2.154073 1.61316482
0.206565 0.584311 1.613165 2.271745 2.326045 2.354349 2.465632 2.481366 2.380102 2.10457288
0.214212 0.790876 2.104573 2.479088 2.524275 2.54744 2.634325 2.646171 2.568076 2.6543036
0.225464 1.005088 2.654304 2.644102 2.676656 2.692615 2.74499 2.751256 2.706084 3.26380128
0.24762 1.230552 3.263801 2.749544 2.763568 2.768682 2.767586 2.764947 2.771299 3.94887292
0.25 1.478171 3.948873 2.763882 2.747947 2.735929 2.656062 2.640557 2.722156 4.62774423
0.25 1.728171 4.627744 2.641168 2.586313 2.552099 2.370792 2.338767 2.517156 5.25474897
0.206501 1.728171 4.627744 2.641168 2.596419 2.569447 2.430547 2.406828 2.541928 5.15141426
0.215663 1.934673 5.151414 2.408456 2.326784 2.278814 2.042876 2.003464 2.231007 5.63074392
Table 1.7 Details of Solution of example 1.9
Exercise 1
1.1 Apply Euler's method to solve the following initial value problems in the interval (0, 1]:
y′ = 2y + 3t,  y(0) = 1
y′ = 2t y²,  y(0) = 1
y′ = x/y,  y(0) = 1
Take h = 0.1 and compare with the exact solutions.
Reduce h to 0.05 and again solve the IVPs.
1.2 Apply the modified Euler method to the IVPs in Ex. 1.1 with h = 0.1. Compare the effort
and accuracy achieved in the two methods.
1.3 Can Euler's method be applied to solve the following IVP in the interval (0, 2)?
y′(t) = 4 + y²,  y(0) = 0
1.4 Show that Euler's method and the modified Euler method fail to approximate the
solution y(t) = 8 t^{3/2} of the IVP
Course: Numerical Solution of Ordinary Differential Equations
Module 2
Lecture 1
Consider the IVP
y′ = f(t, y(t)),  t0 ≤ t ≤ b;  y(t0) = y0    (2.1)
The increment function Φ depends on solution yj at previous grid point tj and step size h.
If yj+1 can be determined simply by evaluating right hand side then the method is explicit
method. The methods developed in the module 1 are one step methods. These
methods might use additional functional evaluations at number of points between tj and
tj+1. These functional evaluations are not used in further computations at advanced grid
points. In these methods step size can be changed according to the requirement.
It may be reasonable to develop methods that use more information about the solution
(functional values and derivatives) at previously known values while computing solution
at the next grid point. Such methods using information at more than one previous grid
points are known as multi-step methods and are expected to give better results than
one step methods.
To determine the solution y_{j+1}, a multi-step method (k-step method) uses the values of y(t) and
f(t, y(t)) at k previous grid points t_{j−i}, i = 0, 1, 2, ..., k−1. Here y_j is called the initial point while
the remaining y_{j−i} are the starting points. The starting points are computed using some suitable one step
method. Thus multi-step methods are not self starting.
Integrating the differential equation from t_{j−k} to t_{j+1} and replacing f(t, y(t)) by an
interpolating polynomial of degree r gives
y_{j+1} = y_{j−k} + ∫_{t_{j−k}}^{t_{j+1}} f(t, y(t)) dt ≈ y_{j−k} + ∫_{t_{j−k}}^{t_{j+1}} (a0 + a1 t + ... + a_r t^r) dt    (2.2)
The method may be explicit or implicit. An implicit method involves the unknown y_{j+1} on
both sides of the formula. First an explicit formula, known as the predictor formula, is used to predict y_{j+1}.
Then another formula, known as corrector formula, is used to improve the predicted
value of yj+1. The predictor-corrector methods form a large class of general methods for
numerical integration of ordinary differential equations. A popular predictor-corrector
scheme is known as the Milne-Simpson method.
Milne-Simpson method
Its predictor is based on integration of f (t, y(t)) over the interval [tj−3, tj+1] with k=3 and
r=3. The interpolating polynomial is considered to match the function at three points tj−2,
tj−1, and tj and the function is extrapolated at both the ends in the interval [tj−3, tj-2] and [tj,
tj+1] as shown in the Fig 2.2(a). Since the end points are not used, an open integration
formula is used for the integral in (2.2):
p_{j+1} = y_{j+1} = y_{j−3} + (4h/3) [2 f(t_j, y_j) − f(t_{j−1}, y_{j−1}) + 2 f(t_{j−2}, y_{j−2})] + (14/45) h⁵ f⁽⁴⁾(ξ),  ξ ∈ (t_{j−3}, t_{j+1})    (2.3)
The explicit predictor formula is of O(h4) and requires starting values. These starting
values should also be of same order of accuracy. Accordingly, if the initial point is y0
then the starting values y1, y2 and y3 are computed by fourth order Runge kutta method.
Then predictor formula (2.3) predicts the approximate solution y4 as p4 at next grid point.
The predictor formula (2.3) is found to be unstable (proof not included) and the solution
so obtained may grow exponentially.
The predicted value is then improved using a corrector formula. The corrector formula is
developed similarly. For this, a second polynomial for f (t, y(t)) is constructed, which is
based on the points (tj−1, fj−1), (tj, fj) and the predicted point (tj+1, fj+1). The closed
integration of the interpolating polynomial over the interval [tj, tj+1] is carried out [See Fig
2.2 (b)]. The result is the familiar Simpson’s rule:
y_{j+1} = y_{j−1} + (h/3) [f(t_{j+1}, y_{j+1}) + 4 f(t_j, y_j) + f(t_{j−1}, y_{j−1})] − (1/90) h⁵ f⁽⁴⁾(ξ),  ξ ∈ (t_{j−1}, t_{j+1})    (2.4)
Fig 2.2: (a) Open integration scheme for the predictor; (b) closed integration for the corrector
In the corrector formula fj+1 is computed from the predicted value pj+1 as obtained from
(2.3).
Denoting f_j = f(t_j, y_j), equations (2.3) and (2.4) give the following predictor and corrector
formulae, respectively, for solving the IVP (2.1) at the equi-spaced discrete points t4, t5, ...:
p_{j+1} = y_{j+1} = y_{j−3} + (4h/3) [2 f_j − f_{j−1} + 2 f_{j−2}]
y_{j+1} = y_{j−1} + (h/3) [f_{j+1} + 4 f_j + f_{j−1}]    (2.5)
The solution at initial point t0 is given in the initial condition and t1, t2 and t3 are the
starting points where solution is to be computed using some other suitable method such
as Runge Kutta method. This is illustrated in the example 2.1
Example 2.1: Solve the IVP y′ = y + 3t − t² with y(0) = 1 using Milne's predictor-corrector method;
take h = 0.1.
Solution: Table 2.1 gives the starting values computed by the fourth order Runge-Kutta
method with h = 0.1:
y(0.1) = 1.1203415,  y(0.2) = 1.2338409,  y(0.3) = 1.3763387
Table 2.1: Starting values for Example 2.1 computed by classical RK4
Using initial value and starting values at t=0, 0.1, 0.2 and 0.3, the predictor formula
predicts the solution at t=0.4 as 1.7199359. It is used in corrector formula to give the
corrected value. The solution is continued at advanced grid points [see table 2.2].
k   t     y (Milne predictor)   f(t, y)     corrector      f(t, corrector)
0   0     1                     1           (initial value)
1   0.1   1.1203415             1.410341    (RK4 starting value)
2   0.2   1.2338409             1.793841    (RK4 starting value)
3   0.3   1.3763387             2.186339    (RK4 starting value)
4   0.4   1.7199359             2.759936    1.67714525     2.717145
5   0.5   2.0317593             3.281759    1.920894708    3.170895
Table 2.2: Example 2.1 using the Milne predictor-corrector method with h = 0.1
The exact solution is possible in this example; however it may not be possible for other
equations. Table 2.3 compares the solution with the exact solution of given equation.
Clearly the accuracy is better in predictor corrector method than the Runge-Kutta
method.
Table 2.4: Example 2.1 using the predictor-corrector method with h = 0.05
The computation of Example 2.1 is repeated with h = 0.05 in table 2.4. The comparison below (table 2.5)
clearly indicates that better accuracy is achieved with h = 0.05:
t      computed    exact
0.05   1.055042    1.055042
0.1    1.120342    1.120342
0.15   1.196169    1.196168
0.2    1.282805    1.282806
0.25   1.380551    1.380551
Predictor-corrector methods are preferred over Runge-Kutta methods as they require only two
functional evaluations per integration step, while the corresponding fourth order Runge-Kutta
method requires four evaluations. The need for starting points is the weakness of
predictor-corrector methods. In Runge-Kutta methods the step size can also be changed easily.
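A minimal Python sketch of the Milne-Simpson predictor-corrector scheme (2.5), using RK4 starting values as in Example 2.1 (function names are illustrative):

```python
def milne_simpson(f, t0, y0, h, n, start_values):
    """Milne predictor with Simpson corrector (2.5); start_values = [y1, y2, y3]."""
    t = [t0 + i*h for i in range(n + 1)]
    y = [y0] + list(start_values) + [0.0] * (n - 3)
    fv = [f(t[i], y[i]) for i in range(4)]          # f values at the starting points
    for j in range(3, n):
        p = y[j-3] + 4*h/3 * (2*fv[j] - fv[j-1] + 2*fv[j-2])     # predictor
        fp = f(t[j+1], p)
        y[j+1] = y[j-1] + h/3 * (fp + 4*fv[j] + fv[j-1])         # corrector
        fv.append(f(t[j+1], y[j+1]))
    return list(zip(t, y))

# Example 2.1: y' = y + 3t - t**2, y(0) = 1, h = 0.1, RK4 starting values from Table 2.1
f = lambda t, y: y + 3*t - t**2
starts = [1.1203415, 1.2338409, 1.3763387]
for t_, y_ in milne_simpson(f, 0.0, 1.0, 0.1, 5, starts):
    print(f"t = {t_:.1f}  y = {y_:.7f}")
```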
Module 2
Lecture 2
Contd…
A predictor-corrector method refers to the use of the predictor equation with one
subsequent application of the corrector equation and the value so obtained is the final
solution at the grid point. This approach is used in example 2.1.
The predicted and corrected values are compared to obtain an estimate of the
truncation error associated with the integration step. The corrected values are accepted
if this error estimate does not exceed a specified maximum value. Otherwise, the
corrected values are rejected and the interval of integration is reduced starting from the
last accepted point. Likewise, if the error estimate becomes unnecessarily small, the
interval of integration may be increased. The predictor formula is more influential in the
stability properties of the predictor-corrector algorithm.
In another more commonly used approach, a predictor formula is used to get a first
estimate of the solution at next grid point and then the corrector formula is applied
iteratively until convergence is obtained. This is an iterative approach and corrector
formula is used iteratively. The number of derivative evaluations required is one greater
than the number of iterations of the corrector and it is clear that this number may in fact
exceed the number required by a Runge-Kutta algorithm. In this case, the stability
properties of the algorithm are completely determined by the corrector equation alone
and the predictor equation only influences the number of iterations required. The step
size is chosen sufficiently small to converge to the solution in one or two iterations. The
step size can be estimated from the error term in (2.4).
Example 2.2: Apply the iterative approach to solve the IVP y′ = y + 3t − t² with y(0) = 1 and h = 0.1.
Solution: With h=0.1 the computations are arranged in the table 2.6
Note that the corrector formula converges fast but is not converging to the solution of
the equation. It converges to the fixed point of difference scheme given by the corrector
formula. If h=0.05 then the solution converges to the exact solution in just two iterations.
h k t y f(t,y)
Milne Predictor corrector for t=0.4 with h=0.1
0.1 0 0 1 1
0.1 1 0.1 1.120341 1.410341 starting point1
0.1 2 0.2 1.233841 1.793841 starting point2
0.1 3 0.3 1.376339 2.186339 starting point3
0.1 4 0.4 1.719936 2.759936 predictor
0.1 5 0.4 1.677145 2.717145 corrector1
0.1 6 0.4 1.675719 2.715719 corrector2
0.1 7 0.4 1.675671 2.715671 corrector3
0.1 8 0.4 1.67567 2.71567 corrector4
0.1 9 0.4 1.67567 2.71567 corrector5
Milne Predictor corrector at t=0.5 with h=0.1
0.1 1 0.1 1.120341 1.410341
0.1 2 0.2 1.233841 1.793841 Starting values
0.1 3 0.3 1.376339 2.186339
0.1 4 0.4 1.67567 2.71567
0.1 5 0.5 1.840277 3.090277 predictor
0.1 6 0.5 1.914315 3.164315 corrector1
0.1 7 0.5 1.916783 3.166783 corrector2
0.1 8 0.5 1.916865 3.166865 corrector3
0.1 9 0.5 1.916868 3.166868 corrector4
Table 2.6a iterative Milnes predictor corrector method example 2.2 with h=0.1
Error estimates
The local truncation errors of the predictor and corrector are
y(t_{j+1}) − p_{j+1} = (28/90) h⁵ f⁽⁴⁾(ξ₁),  ξ₁ ∈ (t_{j−3}, t_{j+1})
y(t_{j+1}) − y_{j+1} = −(1/90) h⁵ f⁽⁴⁾(ξ₂),  ξ₂ ∈ (t_{j−1}, t_{j+1})
It is assumed that the fourth derivative is constant over the interval [t_{j−3}, t_{j+1}]. Then simplification
yields an error estimate based on the predicted and corrected values:
y(t_{j+1}) − p_{j+1} ≈ (28/29) [y_{j+1} − p_{j+1}]    (2.6)
Further, assume that the difference between predicted and corrected values changes slowly from step
to step. Accordingly, p_j and y_j can be substituted for p_{j+1} and y_{j+1} in (2.6), which gives a
modifier q_{j+1}:
q_{j+1} = p_{j+1} + (28/29) [y_j − p_j]
The modified Milne predictor-corrector scheme is then
p_{j+1} = y_{j+1} = y_{j−3} + (4h/3) [2 f_j − f_{j−1} + 2 f_{j−2}]
q_{j+1} = p_{j+1} + (28/29) [y_j − p_j];  f_{j+1} = f(t_{j+1}, q_{j+1})    (2.7)
y_{j+1} = y_{j−1} + (h/3) [f_{j+1} + 4 f_j + f_{j−1}]
Another problem associated with Milne’s predictor corrector method is the instability
problem in certain cases. This means that error does not tend to zero as h tends to
zero. This is illustrated analytically for a simple IVP
y′ = A y,  y(0) = y0
Its exact solution is y(t) = y0 exp(A(t − t0)). Substituting y′ = A y in the corrector formula gives the
difference equation
y_{j+1} = y_{j−1} + (h/3) [A y_{j+1} + 4 A y_j + A y_{j−1}]
or  (1 − hA/3) y_{j+1} − (4hA/3) y_j − (1 + hA/3) y_{j−1} = 0    (2.8)
Its characteristic equation, obtained by substituting y_j = Z^j, is
(1 − hA/3) Z² − (4hA/3) Z − (1 + hA/3) = 0
so that the general solution of (2.8) is
y_j = C1 Z1^j + C2 Z2^j,  with  Z_{1,2} = (2r ± √(3r² + 1))/(1 − r),  r = hA/3
Expanding in powers of r,
Z1 = (2r + √(3r² + 1))/(1 − r) = 1 + 3r + O(r²) = 1 + Ah + O(h²)
Z2 = (2r − √(3r² + 1))/(1 − r) = −1 + r + O(r²) = −(1 − Ah/3) + O(h²)
The first term C1 Z1^j behaves like the true solution e^{Ajh}. When A > 0 the second term dies out
while the first grows, as the true solution does. However, when A < 0 the first term dies out while
|Z2| > 1, so the second (parasitic) term grows in magnitude and oscillates in sign as j increases,
irrespective of how small h is taken. This establishes the instability of the solution.
Module 2
Lecture 3
The general k-step multistep method can be written as
y_{j+1} = a1 y_j + a2 y_{j−1} + ... + a_k y_{j+1−k} + h [b0 f_{j+1} + b1 f_j + ... + b_k f_{j+1−k}]    (2.9)
When b0 = 0, the method is explicit and y_{j+1} is explicitly determined from the initial value y0
and the starting values y_i, i = 1, 2, ..., k−1. When b0 is nonzero, the method is implicit.
The Milne predictor and corrector formulae are special cases of (2.9):
Predictor formula: k = 4, a1 = a2 = a3 = 0, a4 = 1, b0 = 0, b1 = 8/3, b2 = −4/3, b3 = 8/3, b4 = 0
Corrector formula: k = 2, a1 = 0, a2 = 1, b0 = 1/3, b1 = 4/3, b2 = 1/3
Another category of multistep methods, known as Adams methods, is obtained from
y_{j+1} = y_j + ∫_{t_j}^{t_{j+1}} f(t, y(t)) dt ≈ y_j + ∫_{t_j}^{t_{j+1}} (a0 + a1 t + ... + a_r t^r) dt
Here the integration is carried out only on the last panel, while many function and
derivative values at equi-spaced points are considered for the interpolating polynomial.
Both open and closed integration are considered giving two types of formulas. The
integration scheme is shown in the fig. 2.3
Fig. 2.3: Schematic diagram for the open and closed Adams integration formulas
The open integration of Adams formula gives Adams Bashforth formula while
closed integration gives Adams Moulton formula. Different degrees of interpolating
polynomials depending upon the number r of interpolating points give rise to formulae of
different order. Although these formulae can be derived in many different ways, here
backward application of Taylor series expansion is used for the derivation of second
order Adams Bashforth open formula.
For the second order formula, the values f_j and f_{j−1} at the points t_j and t_{j−1} are used in
y_{j+1} = y_j + ∫_{t_j}^{t_{j+1}} f(t, y(t)) dt
Approximating f′_j by the backward difference,
f′_j = (f_j − f_{j−1})/h + (h/2) f″ + O(h²),
so that f(t) ≈ f_j + (t − t_j) f′_j on the last panel, and integrating gives
y_{j+1} = y_j + h [(3/2) f_j − (1/2) f_{j−1}] + (5/12) h³ f″(ξ)    (2.11)
A fourth order Adams-Bashforth formula can be derived along similar lines and is written
as
y_{j+1} = y_j + h [(55/24) f_j − (59/24) f_{j−1} + (37/24) f_{j−2} − (9/24) f_{j−3}] + (251/720) h⁵ f⁽⁴⁾(ξ)    (2.12)
Example 2.3: Apply the Adams-Bashforth method (2.12) to solve the IVP of Example 2.1,
y′ = y + 3t − t², y(0) = 1.
Solution: With h = 0.05 the computations are arranged in table 2.7.
h      k    t      y           f(t,y)       exact solution
0.05 0 0 1 1
0.05 1 0.05 1.0550422 1.202542 Starting
0.05 2 0.1 1.1203418 1.410342
values
0.05 3 0.15 1.1961685 1.623669
0.05 4 0.2 1.2828053 1.842805 1.2828055
Lecture 4
The Adams-Moulton formulae are obtained from the same relation,
y_{j+1} = y_j + ∫_{t_j}^{t_{j+1}} f(t, y(t)) dt
but using closed integration, i.e. the interpolating polynomial includes the point t_{j+1}. Using the
backward difference
f′_{j+1} = (f_{j+1} − f_j)/h + (h/2) f″ + O(h²),
the second order Adams-Moulton formula is
y_{j+1} = y_j + h [(1/2) f_{j+1} + (1/2) f_j] − (1/12) h³ f″(ξ)    (2.14)
and the fourth order Adams-Moulton formula is
y_{j+1} = y_j + h [(9/24) f_{j+1} + (19/24) f_j − (5/24) f_{j−1} + (1/24) f_{j−2}] − (19/720) h⁵ f⁽⁴⁾(ξ)    (2.15)
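The fourth order Adams formulae (2.12) and (2.15) combine into a predictor-corrector scheme. A minimal Python sketch follows; the function and variable names are illustrative, and the number of corrector iterations is a choice, not prescribed by the text:

```python
def adams_pc4(f, t0, y0, h, n, start_values, corrector_iterations=2):
    """Adams-Bashforth predictor (2.12) with Adams-Moulton corrector (2.15)."""
    t = [t0 + i*h for i in range(n + 1)]
    y = [y0] + list(start_values) + [0.0] * (n - 3)
    fv = [f(t[i], y[i]) for i in range(4)]
    for j in range(3, n):
        # predictor (2.12)
        yj1 = y[j] + h/24 * (55*fv[j] - 59*fv[j-1] + 37*fv[j-2] - 9*fv[j-3])
        for _ in range(corrector_iterations):    # corrector (2.15), applied iteratively
            yj1 = y[j] + h/24 * (9*f(t[j+1], yj1) + 19*fv[j] - 5*fv[j-1] + fv[j-2])
        y[j+1] = yj1
        fv.append(f(t[j+1], yj1))
    return list(zip(t, y))

# IVP of Example 2.3 with h = 0.05 and the RK4 starting values listed in Table 2.8
f = lambda t, y: y + 3*t - t**2
starts = [1.0550422, 1.1203418, 1.1961685]
for t_, y_ in adams_pc4(f, 0.0, 1.0, 0.05, 6, starts):
    print(f"t = {t_:.2f}  y = {y_:.7f}")
```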
Example 2.4 Solve IVP of example 2.3 by Adams predictor corrector Method
h k t y f(t,y)
Solution At t=0.20
0.05 0 0 1 1
0.05 1 0.05 1.0550422 1.202542
Starting
0.05 2 0.1 1.1203418 1.410342
values
0.05 3 0.15 1.1961685 1.623669
0.05 4 0.2 1.2828053 1.842805 predictor
0.05 5 0.2 1.2828055 1.842806 corrector
0.05 6 0.2 1.2828056 1.842806 corrector
Solution At t=0.25
0.05 1 0.05 1.0550422 1.202542
0.05 2 0.1 1.1203418 1.410342
Starting
0.05 3 0.15 1.1961685 1.623669
values
0.05 4 0.2 1.2828056 1.842806
0.05 5 0.25 1.3805506 2.068051 predictor
0.05 6 0.25 1.3805509 2.068051 corrector
0.05 7 0.25 1.3805509 2.068051 corrector
Table 2.8 Solution of IVP of Example 2.3 by Adams predictor corrector Method
Both the predictor and the corrector formula have local truncation errors of order O(h⁵):
y(t_{k+1}) − p_{k+1} = (251/720) y⁽⁵⁾(ξ₁) h⁵
y(t_{k+1}) − y_{k+1} = −(19/720) y⁽⁵⁾(ξ₂) h⁵
If the fifth derivative is nearly constant and h is small, an error estimate is obtained by
eliminating the derivative and simplifying:
y(t_{k+1}) − y_{k+1} ≈ −(19/270) [y_{k+1} − p_{k+1}]
Exercise 2
2.1 Consider the IVP
y′ = e^{−t} − y,  y(0) = 1
Compute the solution at times t = 0.05, 0.1 and 0.15 using RK4 by taking h = 0.05. Use
these values to compute the solution at t = 0.2, 0.25 and 0.3 using the Milne-Simpson
method. Compare the solution with the exact solution y = (t + 1) e^{−t}.
2.2 Consider the IVP
y′ = y + t²,  y(0) = 1
Compute the solution at times t = 0.2, 0.4 and 0.6 using RK4 by taking h = 0.2. Apply
the Adams-Bashforth method to compute the solution at t = 0.8 and 1.0.
2.3 Solve IVP of exercise 2.2 by fourth order Adams predictor corrector Method.
Course: Numerical Solution of Ordinary Differential Equations
Module 3
In (3.1), t is the independent variable and the m dependent variables are y1, y2, y3, ..., ym. Introducing
the column vectors Y = (y1, y2, y3, ..., ym)ᵀ, F = (f1, f2, f3, ..., fm)ᵀ and Y0 = (y01, y02, y03, ..., y0m)ᵀ, the
system can be written compactly as
Y′ = F(t, Y);  Y(t0) = Y0    (3.2)
The form (3.2) is similar to the IVP (1.2) with the scalars replaced by vectors.
Let the interval (a, b) be divided into N subintervals of width h = (b − a)/N such that the grid
points are t_j = a + jh and t_{j+1} = t_j + h. Let y_{i,j}, i = 1, 2, ..., m and j = 1, 2, ..., N denote the
approximation of the ith dependent variable y_i(t_j) at t = t_j = t0 + jh.
The Euler method for the system of equations can be written as
y_{i,j+1} = y_{i,j} + h f_i(t_j, y_{1,j}, y_{2,j}, ..., y_{m,j});  i = 1, 2, ..., m and j = 0, 1, 2, ..., N    (3.3)
Example 3.1: Solve the system of differential equations with the given initial conditions
using the Euler method in the interval (0, 1), taking h = 0.02:
x′ = x + 2y,  x(0) = 6
y′ = 3x + 2y,  y(0) = 4
Solution: The Euler method given in (3.3) is used for solving the given system of two
equations (m = 2):
f1(x, y) = x + 2y,  x(0) = 6
f2(x, y) = 3x + 2y,  y(0) = 4
The computations are shown in the table 3.1.
j h t xj yj f1 f2
0 0.02 0 6 4 14 26
1 0.02 0.02 6.28 4.52 15.32 27.88
2 0.02 0.04 6.5864 5.0776 16.7416 29.9144
3 0.02 0.06 6.921232 5.675888 18.273008 32.115472
4 0.02 0.08 7.28669216 6.31819744 19.923087 34.49647136
5 0.02 0.1 7.685153901 7.00812687 21.7014076 37.07171544
6 0.02 0.12 8.119182054 7.74956118 23.6183044 39.85666851
7 0.02 0.14 8.591548142 8.54669455 25.6849372 42.86803352
8 0.02 0.16 9.105246886 9.40405522 27.9133573 46.12385109
9 0.02 0.18 9.663514033 10.3265322 30.3165785 49.64360657
10 0.02 0.2 10.2698456 11.3194044 32.9086543 53.44834555
11 0.02 0.22 10.92801869 12.3883713 35.7047613 57.56079863
Table 3.1 Solution of Example 3.1
Refer to euler-system of equations.xlsx
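A minimal Python sketch of the Euler method (3.3) for systems, applied to Example 3.1 (function names are illustrative):

```python
def euler_system(fs, t0, y0, h, n):
    """Euler method (3.3) for a system: y_{i,j+1} = y_{i,j} + h*f_i(t_j, y_1, ..., y_m)."""
    t, y = t0, list(y0)
    rows = [(t, tuple(y))]
    for _ in range(n):
        y = [yi + h * fi(t, *y) for yi, fi in zip(y, fs)]
        t += h
        rows.append((t, tuple(y)))
    return rows

# Example 3.1: x' = x + 2y, y' = 3x + 2y, x(0) = 6, y(0) = 4, h = 0.02
f1 = lambda t, x, y: x + 2*y
f2 = lambda t, x, y: 3*x + 2*y
for t, (x, y) in euler_system([f1, f2], 0.0, [6.0, 4.0], 0.02, 50):
    print(f"t = {t:.2f}  x = {x:.6f}  y = {y:.6f}")
```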
Example 3.2: Solve the following system of differential equations with the given initial
conditions using the fourth order Runge-Kutta method in the interval (0, 2), taking h = 0.5:
x′ = x − x y,  x(0) = 4
y′ = −y + x y,  y(0) = 1
t1 x y k11=f1(x,y) k12=f2(x,y)
0.5 2.384262085 3.252456665 -5.37044702 4.50225244
t1+h/2 x+h*k11/2 y+h*k12/2 k21=f1(x,y) k22=f2(x,y)
0.75 1.041650329 4.378019776 -3.51871541 0.18234596
t1+h/2 x+h*k21/2 y+h*k22/2 k31=f1(x,y) k32=f2(x,y)
0.75 1.504583232 3.298043156 -3.4575972 1.66413728
t1+h x+h*k31 y+h*k32 k41=f1(x,y) k42=f2(x,y)
1 0.655463485 4.084525303 -2.02179371 -1.40726811
phi1 phi2
-1.77873883 0.56566257
t2 x+phi1 y+phi2
1 0.605523256 3.818119233
t2 x y k11=f1(x,y) k12=f2(x,y)
1 0.605523256 3.818119233 -1.70643673 -1.50615924
t2+h/2 x+h*k11/2 y+h*k12/2 k21=f1(x,y) k22=f2(x,y)
1.25 0.178914073 3.441579422 -0.43683292 -2.82583243
t2+h/2 x+h*k21/2 y+h*k22/2 k31=f1(x,y) k32=f2(x,y)
1.25 0.496315026 3.111661125 -1.04804915 -1.56729695
t2+h x+h*k31 y+h*k32 k41=f1(x,y) k42=f2(x,y)
1.5 0.081498682 3.034470757 -0.16580669 -2.78716539
phi1 phi2
-0.40350063 -1.08996528
t3 x+phi1 y+phi2
1.5 0.202022627 2.728153949
t3 x y k11=f1(x,y) k12=f2(x,y)
1.5 0.202022627 2.728153949 -0.3491262 -2.17700512
t3+h/2 x+h*k11/2 y+h*k12/2 k21=f1(x,y) k22=f2(x,y)
1.75 0.114741077 2.183902669 -0.13584227 -1.93331933
t3+h/2 x+h*k21/2 y+h*k22/2 k31=f1(x,y) k32=f2(x,y)
1.75 0.16806206 2.244824118 -0.20920771 -1.86755435
t3+h x+h*k31 y+h*k32 k41=f1(x,y) k42=f2(x,y)
2 0.097418774 1.794376773 -0.07738721 -1.61957079
2 phi1 x+phi1 phi2 y+phi2
-0.09305111 0.108971514 -0.94986027 1.778293677
Table 3.2: Solution of the system of equations in Example 3.2
Accordingly, the first part of the table gives x(0.5)=2.384262085, y(0.5)=
3.252456665.The solution at t=1.0,1.5 and 2.0 are computed in the subsequent parts of
the table
Module 3
Lecture 2
Numerical Solution of Higher Order Ordinary Differential Equations
F(y″, y′, y, t) = 0    (3.6)
Writing z = y′, the second order equation (3.6) reduces to a system of two coupled first order
equations in the two unknowns y and z:
z = y′,  F(z′, z, y, t) = 0
or
y′ = z
z′ = f(z, y, t)    (3.7)
The associated initial conditions get transformed to
y(0) = y0 and z(0) = y1    (3.8)
The second order differential equation is thus reduced to a system of two first order
differential equations in the two unknowns y and y′. Similarly, an nth order differential equation
can be reduced to a system of n first order differential equations in the n unknowns y,
y′, ..., y^{(n−1)}. Once the higher order differential equation is converted into a system of first
order differential equations, one of the methods already discussed can be applied.
Example 3.3: Solve the following initial value problem using the fourth order Runge-Kutta
method in the interval (1, 2), taking h = 0.2:
2x (d²x/dt²) + (dx/dt)² − 1 = 0,  x(1) = 1,  dx/dt|_{t=1} = 0
Solution: Writing y = dx/dt, the given differential equation becomes
dy/dt = (1 − y²)/(2x)
Therefore, the equivalent system of equations is
dx/dt = y = f1(x, y),  dy/dt = (1 − y²)/(2x) = f2(x, y)
The system is subject to the initial conditions x(1) = 1, y(1) = 0. The following table shows
the detailed computational steps for the solution at t = 1.2 with h = 0.2:
j h t x y k11=f1(x,y) k12=f2(x,y)
0 0.2 1 1 0 0 -0.5
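The remaining steps follow the same pattern. A minimal Python sketch of RK4 applied to the equivalent first order system of Example 3.3 (function names are illustrative):

```python
def rk4_system(fs, t0, y0, h, n):
    """Classical RK4 applied componentwise to a first order system."""
    t, y = t0, list(y0)
    out = [(t, tuple(y))]
    for _ in range(n):
        k1 = [f(t, *y) for f in fs]
        k2 = [f(t + h/2, *[yi + h/2*ki for yi, ki in zip(y, k1)]) for f in fs]
        k3 = [f(t + h/2, *[yi + h/2*ki for yi, ki in zip(y, k2)]) for f in fs]
        k4 = [f(t + h,   *[yi + h*ki  for yi, ki in zip(y, k3)]) for f in fs]
        y = [yi + h*(a + 2*b + 2*c + d)/6 for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
        out.append((t, tuple(y)))
    return out

# Example 3.3 as the system x' = y, y' = (1 - y**2)/(2x); x(1) = 1, y(1) = 0, h = 0.2
f1 = lambda t, x, y: y
f2 = lambda t, x, y: (1 - y**2) / (2*x)
for t, (x, y) in rk4_system([f1, f2], 1.0, [1.0, 0.0], 0.2, 5):
    print(f"t = {t:.1f}  x = {x:.6f}  dx/dt = {y:.6f}")
```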
keywords: Euler method, explicit method, implicit method, stiff equations, stability
Example 3.4: Solve the IVP x′ = −15x, x(0) = 1 numerically in the interval (0, 2).
Solution: Solving the differential equation numerically in the interval (0, 2) for c = 15 by
Euler's method with h = 0.5, it is observed that the solution explodes very soon. [See the
computations in the Excel sheet Stiff equ.xls, sheets 1 and 2.] When h = 0.25, the solution is
oscillatory, in contrast to the exact solution. The step size h has to be decreased drastically to
obtain a solution close to the exact one; however, this increases the computational effort.
t        h=0.5       h=0.25      h=0.1       h=0.05      h=0.01      h=0.005     exact
0.5 42.25 7.5625 -0.3125 9.54 E-07 0.000296 0.000411 0.00053
1 -274.625 57.19141 0.000977 9.09E-13 3.06E-07
1.5 1785.0625 432.51 -3.10E-05 1.69E-10
2 -11602.91 3270.857 9.50E-07 9.36E-14
Table 3.5: Comparison of solution with different values of h
To solve the stiff equation (3.10), it is desirable to use implicit methods. The explicit Euler
method is modified for the equation x′ = −c x as follows:
(x(t + Δt) − x(t))/Δt = −c x(t + Δt)
or  x(t + Δt) = x(t)/(1 + c Δt)    (3.11)
The application of this Euler implicit scheme for the equation (3.10) will not misbehave.
[See the computations in the Excel sheet Stiff equ.xls/ sheet 3]
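A minimal Python sketch of the implicit scheme (3.11) for x′ = −c x (function names are illustrative):

```python
def implicit_euler_linear(c, x0, h, n):
    """Implicit (backward) Euler for x' = -c*x, i.e. (3.11): x_{k+1} = x_k / (1 + c*h)."""
    x, xs = x0, [x0]
    for _ in range(n):
        x = x / (1.0 + c * h)
        xs.append(x)
    return xs

# Example 3.4: x' = -15x, x(0) = 1, h = 0.5 on (0, 2); the explicit Euler solution
# explodes for this step size, while the implicit scheme decays monotonically.
print(implicit_euler_linear(15.0, 1.0, 0.5, 4))
```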
In the case of a general IVP x′ = f(x, t), x(0) = x0, the implicit (backward) Euler scheme is
x_{k+1} = x_k + h f(x_{k+1}, t_{k+1})    (3.12)
Another implicit scheme which can be used to solve stiff equations is the trapezoidal
scheme, or two stage Adams-Moulton scheme:
x_{k+1} = x_k + h [f(x_k, t_k) + f(x_{k+1}, t_{k+1})]/2    (3.13)
The stiff differential equations generally results from the phenomenon that involves
widely differing time scales. Consider the differential equation
x″ = 100 x
Its most general solution is
x(t) = A e^{10t} + B e^{−10t}
There are two different time scales in the solution. In another example,
x′ = −1000x − e^{−t};  x(0) = 1
the exact solution also involves two time scales, 1 versus 1/1000:
x(t) = (1000 e^{−1000t} − e^{−t})/999
In general, the presence of vastly different evolutionary time scales gives stiff dynamic
differential equations. In such cases, care should be taken in solving them.
Example 3.5: Using the implicit Euler scheme, solve the IVP over the interval (0, 3.0):
x′ = −1000x − e^{−t};  x(0) = 1. Take h = 0.05.
Solution: Applying scheme (3.12) to the given equation gives
x_{k+1} = (x_k − h exp(−t_{k+1}))/(1 + 1000h)
The numerical details are worked out in Excel file Stiff equ.xls/ sheet 4
Explicit Euler method:
h      tk     xk            f(xk, tk)       xk+1           tk+1    exact
0.05 0 1 -1001 -49.05 0.05 -0.000952182
0.05 0.05 -49.05 49049.0488 2403.40244 0.1 -0.000905743
0.05 0.1 2403.40244 -2403403.3 -117766.765 0.15 -0.00086157
0.05 0.15 -117766.765 117766764 5770571.43 0.2 -0.00081955
0.05 0.2 5770571.43 -5.771E+09 -282758000 0.25 -0.00077958
0.05 0.25 -282758000 2.8276E+11 1.3855E+10 0.3 -0.00074156
0.05 0.3 1.3855E+10 -1.386E+13 -6.789E+11 0.35 -0.000705393
Table 3.6a: Explicit Euler method applied to the stiff equation of Example 3.5
Exercise 3
3.1 Solve the following systems of equations using the Runge-Kutta method of
order 4 in the interval 0 < t ≤ 1.0:
(a) x′ = x y + t, x(0) = 1;  y′ = t y + x, y(0) = 1;  taking h = 0.1
(b) x′ = x + 4y, x(0) = 2;  y′ = y + x, y(0) = 3;  taking h = 0.5
(c) x′ = y² − x², x(0) = 2;  y′ = 2 x y, y(0) = 0.1;  taking h = 0.5
Finite difference method for two point linear boundary value problems with Dirichlet type
conditions
Consider the linear two point boundary value problem
y″(x) + c(x) y′(x) − d(x) y(x) = e(x);  a < x < b
y(a) = α1;  y(b) = α2    (4.3)
To apply the finite difference method, first discretize the domain a ≤ x ≤ b into N−1
internal computational grid points x_i, i = 1, 2, ..., N−1 and the two boundary points x0 and xN, such that
a = x0 < x1 < x2 < ... < x_{N−1} < xN = b
The grid points are equi-spaced and computed as
x_i = x0 + i h;  h = (b − a)/N
The step size h is a critical parameter for the stability and convergence of the numerical
scheme. The differential equation is now written at each internal grid point
x_i, i = 1, 2, ..., N−1. For this, the derivatives are replaced by the corresponding finite differences:
y″(x_i) = (y_{i+1} − 2y_i + y_{i−1})/h² + O(h²)
y′(x_i) = (y_{i+1} − y_{i−1})/(2h) + O(h²)
that is,
(y_{i+1} − 2y_i + y_{i−1})/h² + c(x_i) (y_{i+1} − y_{i−1})/(2h) − d(x_i) y_i = e(x_i)
or  (1 − (h/2) c_i) y_{i−1} − (2 + h² d_i) y_i + (1 + (h/2) c_i) y_{i+1} = h² e_i;  i = 1, 2, ..., N−1    (4.4)
The unknown y_i's are on the left side and the known quantities on the right side of
the equation for i = 2, 3, ..., N−2. Using the boundary conditions, for i = 1 and i = N−1 one gets
−(2 + h² d1) y1 + (1 + (h/2) c1) y2 = h² e1 − (1 − (h/2) c1) α1    (4.6a)
(1 − (h/2) c_{N−1}) y_{N−2} − (2 + h² d_{N−1}) y_{N−1} = h² e_{N−1} − (1 + (h/2) c_{N−1}) α2    (4.6b)
This reduces the boundary value problem to a linear system of N−1 algebraic
equations which can be written in the matrix form A X = B, where A is the (N−1)×(N−1)
tri-diagonal matrix

        | λ1   u1                             |
        | l2   λ2   u2                        |
A =     |       .    .     .                  |
        |           l_{N−2}  λ_{N−2}  u_{N−2} |
        |                    l_{N−1}  λ_{N−1} |

with entries
λ_i = −(2 + h² d_i),  l_i = 1 − (h/2) c_i,  u_i = 1 + (h/2) c_i
X = (y1, y2, ..., y_{N−1})ᵀ,
B = (h²e1 − (1 − (h/2)c1) α1,  h²e2, ...,  h²e_{N−2},  h²e_{N−1} − (1 + (h/2)c_{N−1}) α2)ᵀ    (4.7)
The system of equations must admit a unique solution, for which a sufficient condition is
the diagonal dominance of the matrix A. Suppose d(x) takes positive values in the domain
and c(x) is continuous. Let L be an upper bound of |c(x)| over the domain; then a
step size h smaller than 2/L guarantees the uniqueness of the solution.
Example 4.1: Solve the boundary value problem using N = 4:
y″ − 12y = 16;  y(0) = y(2) = 5
Solution: Here c(x) = 0, d(x) = 12 > 0 and e(x) = 16. For N = 4 the BVP reduces
to a system of three algebraic equations with step size h = 2/4 = 0.5. The equivalent
system is given below:
−5y1 + y2 = −1
y1 − 5y2 + y3 = 4
y2 − 5y3 = −1
Since d(x) > 0, the system of algebraic equations corresponding to the BVP has a unique solution
irrespective of the step size h. The solution of the system is
y1 = 1/23,  y2 = −18/23,  y3 = 1/23
Example 4.2: Solve the boundary value problem using N = 4.
The coefficients of the tridiagonal system are computed from the expressions
d_i = 2 + ((x_i + 1)/x_i) h²,  l_i = 1 + ((2x_i + 1)/x_i) (h/2),  u_i = 1 − ((2x_i + 1)/x_i) (h/2)
Applying finite differences gives the following system of equations:
2.41667y1-0.33333y 2 =9.060939
-1.625 y1+2.375 y 2 -0.375 y3 =0
-1.6 y 2 +2.35 y3 =80.34215
The coefficient matrix is diagonally dominant. The system of equations can be solved
using Gauss-Seidel iterative scheme with initial guess as (0,0,0).
For numeric computations refer to NPTEL-II\BVP-I.xls
The final solution is obtained as
y1=5.125596, y 2 =9.977751, y 3 =40.98151
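A minimal Python sketch of this finite difference procedure for Dirichlet conditions, using the sign convention of (4.3)-(4.4) as written above and the special tridiagonal elimination; as a usage example it is applied to Example 4.1 (function names are illustrative):

```python
def solve_tridiagonal(lower, diag, upper, rhs):
    """Tridiagonal elimination with back substitution (lower[0], upper[-1] unused)."""
    n = len(diag)
    d, b = list(diag), list(rhs)
    for i in range(1, n):                      # forward elimination
        m = lower[i] / d[i-1]
        d[i] -= m * upper[i-1]
        b[i] -= m * b[i-1]
    x = [0.0] * n
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = (b[i] - upper[i] * x[i+1]) / d[i]
    return x

def linear_bvp_dirichlet(c, d_fun, e, a, b, alpha1, alpha2, N):
    """Scheme (4.4) for y'' + c(x)y' - d(x)y = e(x), y(a)=alpha1, y(b)=alpha2."""
    h = (b - a) / N
    x = [a + i*h for i in range(N + 1)]
    lo, di, up, rhs = [0.0]*(N-1), [0.0]*(N-1), [0.0]*(N-1), [0.0]*(N-1)
    for i in range(1, N):
        lo[i-1] = 1 - h/2 * c(x[i])
        di[i-1] = -(2 + h*h * d_fun(x[i]))
        up[i-1] = 1 + h/2 * c(x[i])
        rhs[i-1] = h*h * e(x[i])
    rhs[0]  -= (1 - h/2 * c(x[1]))   * alpha1   # boundary terms (4.6a), (4.6b)
    rhs[-1] -= (1 + h/2 * c(x[N-1])) * alpha2
    return x, [alpha1] + solve_tridiagonal(lo, di, up, rhs) + [alpha2]

# Example 4.1: y'' - 12y = 16, y(0) = y(2) = 5, N = 4
x, y = linear_bvp_dirichlet(lambda x: 0.0, lambda x: 12.0, lambda x: 16.0,
                            0.0, 2.0, 5.0, 5.0, 4)
print(list(zip(x, y)))
```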
Module 4
Linear Boundary value problems
Lecture 2
Finite Difference Methods: Mixed boundary condition
In the case of Dirichlet boundary conditions the solution is known at the boundary points x0 and xN and
is to be computed at the N−1 internal grid points. In the case of non-Dirichlet boundary
conditions the solution is also to be computed at the two boundary points, so the number of
unknowns is N+1. The discretization of the differential equation is written at the
N+1 points, including the boundary points i = 0 and i = N, as in (4.4):
(1 − (h/2) c_i) y_{i−1} − (2 + h² d_i) y_i + (1 + (h/2) c_i) y_{i+1} = h² e_i;  i = 0, 1, 2, ..., N    (4.9)
The boundary conditions are of the mixed type
β1 y(x0) + γ1 y′(x0) = δ1,  β2 y(xN) + γ2 y′(xN) = δ2
They are discretized using central differences at i = 0 and i = N, corresponding to x0 and xN
respectively:
β1 y0 + γ1 (y(x0 + h) − y(x0 − h))/(2h) = δ1
β2 yN + γ2 (y(xN + h) − y(xN − h))/(2h) = δ2
or
2h β1 y0 + γ1 (y1 − y_{−1}) = 2h δ1
2h β2 yN + γ2 (y_{N+1} − y_{N−1}) = 2h δ2
Here y_{−1} and y_{N+1} are fictitious values at points outside the domain; they are eliminated
using (4.9) written at i = 0 and i = N. The resulting system of N+1 equations in the unknowns
Y = (y0, y1, ..., yN)ᵀ has right hand side B = (b0, h²e1, ..., h²e_{N−1}, bN)ᵀ and is tri-diagonal with
λ_i = −(2 + h² d_i),  l_i = 1 − (h/2) c_i,  u_i = 1 + (h/2) c_i,  b_i = h² e_i;  i = 1, 2, ..., N−1
and, after the elimination of the fictitious values, first and last rows given by
a_{0,0} = λ0 + 2h l0 β1/γ1,  a_{0,1} = 2,  b0 = h² e0 + 2h l0 δ1/γ1
a_{N,N} = λN − 2h uN β2/γ2,  a_{N,N−1} = 2,  bN = h² eN − 2h uN δ2/γ2
The finite difference method thus reduces the problem of solving a linear boundary value
problem to the solution of a system of linear simultaneous algebraic equations, and the approximate
solution is obtained at finitely many equi-spaced discrete points. If the number of
equations n is small, the Gauss elimination method is used. If n is large, an iterative
method is used, provided the system is diagonally dominant, which is required for the convergence
of the iterative method. Further, it may be observed that the system of equations forms a tri-diagonal
system, for which a special form of Gaussian elimination may be used even for large n:
elementary row transformations reduce the tri-diagonal matrix T and the right hand side B to an
upper triangular (bidiagonal) form, from which the solution is obtained by back substitution.
Example 4.3: Solve a second order differential equation with a Dirichlet condition at one
end and a mixed boundary condition at the other end using N = 4.
The second derivative is replaced at the grid points i = 1, 2, 3, 4, where the solution is desired, by
d²y/dx²|_i ≈ (y_{i+1} − 2y_i + y_{i−1})/h²,  i = 1, 2, 3, 4
The solution is already known at the boundary point i = 0. At the other boundary
point i = N = 4, the central difference formula is used for the derivative appearing in the mixed
boundary condition:
dy/dx|_N ≈ (y_{N+1} − y_{N−1})/(2h)
Writing the discretized equation at i = 1 with the known value y0 substituted, at the interior points
i = 2, 3, and at i = 4 with the fictitious value y5 eliminated using the boundary condition, gives a
system of four algebraic equations. Substituting the value of h gives a tridiagonal system T y = B.
The system of equations is solved using the Gauss-Seidel method. The convergence is slow and more
iterations are required for a better solution. Gaussian elimination gives the solution directly.
Lecture 3
Shooting Method
The shooting method is an iterative method for solving boundary value problems. It first
reduces the boundary value problem to an initial value problem. The missing initial
conditions to the given differential equation are guessed and solution of equivalent initial
value problem is obtained. The solution so obtained is compared with the boundary
conditions and new guess is obtained accordingly for the initial conditions. The
procedure is repeated till the boundary conditions are satisfied. It is a trial and error
method.
To illustrate this, let us consider a second order linear differential equation with Dirichlet
type of boundary conditions:
y″ = F(x, y, y′) = p(x) y′ + q(x) y + r(x);  a < x < b
y(a) = α1;  y(b) = α2    (4.14)
The following initial value problem is associated with the problem (4.14):
y″ = F(x, y, y′);  a < x < b
y(a) = α1;  y′(a) = σ1    (4.15)
The initial value problem is solved for a ≤ x ≤ b with any of the known methods. Let the
solution curve so obtained be σ1, with y(b) = k1 ≠ α2 [see fig 4.2]. Now the initial value
problem (4.15) is solved with another initial guess y′(a) = σ2. Let the new solution
curve be σ2, with y(b) = k2 ≠ α2. A new initial slope y′(a) = σ3 is then chosen, for example by
interpolation between σ1 and σ2, so that the corresponding end value k3 lies between k1 and k2 and
closer to α2, and the process is repeated.
Example 4.3: Consider steady state heat transfer in a rod of length 5 meters with
heat transfer coefficient k = 0.01 m⁻². The two ends of the rod are maintained at the fixed
temperatures 10 and 100 °C. The rod is not insulated, and heat transfer is allowed from
(to) the surface to (from) the surrounding air at temperature u0 by convection.
Solution: According to the statement of the problem, the boundary value problem is
d²u/dx² + k(u0 − u) = 0;  u(0) = 10,  u(5) = 100
u0 = 5,  k = 0.01 m⁻²
[Fig. 4.2: Shooting method: solution curves σ1 and σ2 with end values k1 and k2 compared with the target value α2 at x = b]
The second order equation is written as a system of two first order equations:
du/dx = z,  dz/dx = k(u0 − u)
The following initial conditions are associated with the above system:
u(0) = 10,  z(0) = σ (the initial slope, to be guessed)
To apply the shooting method, let us first assume z(0) = 20. The system is solved using the
classical Runge-Kutta method of order 4 with h = 1. The solution is obtained as [bvp-
shooting.xls/sheet1]
z(0)= 20
u(0) u(1) u(2) u(3) u(4) u(5)
10 29.94173 49.66786 69.16518 87.95444 106.2631
Observe that u(5) is computed as 106.2631 overshooting the given u(5)= 100.
Let us now assume a lower estimate z(0)=18. Again applying RK4 gives the solution
[bvp-shooting.xls/sheet2] :
z(0)= 18
u(0) u(1) u(2) u(3) u(4) u(5)
10 27.94673 45.69777 63.13577 80.14587 96.61643
This time the lower value is obtained for u(5). Thus the actual value of z(0) will lie
between 20 and 18. Since the bvp is linear, a linear interpolation is carried out to get the
better estimate of z(0) which may hit the target u(5)=100. That means a straight line
passing through (20,106.26) and (18, 96.6) is obtained then the point (z(0),100) lying on
the line is estimated. Accordingly, the linear interpolation gives z(0) = 18.703. The
solution is now computed for z(0)=18.7 [bvp-shooting.xls/sheet3] :
z(0)= 18.7
u(0) u(1) u(2) u(3) u(4) u(5)
10 28.64498 47.0873 65.20505 82.87887 99.99277
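A minimal Python sketch of the linear shooting procedure of Example 4.3, assuming the first order system du/dx = z, dz/dx = k(u0 − u) as written above (function names are illustrative):

```python
def rk4_system(fs, t0, y0, h, n):
    """Classical RK4 for a first order system; returns the state after n steps."""
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = [f(t, *y) for f in fs]
        k2 = [f(t + h/2, *[yi + h/2*k for yi, k in zip(y, k1)]) for f in fs]
        k3 = [f(t + h/2, *[yi + h/2*k for yi, k in zip(y, k2)]) for f in fs]
        k4 = [f(t + h,   *[yi + h*k  for yi, k in zip(y, k3)]) for f in fs]
        y = [yi + h*(a + 2*b + 2*c + d)/6 for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

k_coef, u0_air = 0.01, 5.0
fs = [lambda x, u, z: z, lambda x, u, z: k_coef * (u0_air - u)]

def end_value(slope):
    """u(5) obtained by shooting with the initial slope z(0) = slope (RK4, h = 1)."""
    return rk4_system(fs, 0.0, [10.0, slope], 1.0, 5)[0]

s1, s2 = 20.0, 18.0
u1, u2 = end_value(s1), end_value(s2)
# linear interpolation for the slope that hits the target u(5) = 100
s3 = s1 + (100.0 - u1) * (s2 - s1) / (u2 - u1)
print(s3, end_value(s3))
```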
Lecture 4
Shooting Method Contd….
For a linear boundary value problem the iteration is not necessary. If u1(x) and u2(x) are the solutions
of the two initial value problems (4.17a) and (4.17b), then u(x) = u1(x) + c u2(x) satisfies the
differential equation and the condition at x = a, and imposing u(b) = α2 gives
c = (α2 − u1(b)) / u2(b)    (4.18)
Shooting Method for Non-Dirichlet boundary conditions
Now consider the boundary value problem with a Dirichlet condition at x = a and a
Robin (mixed) boundary condition at x = b:
y″ = p(x) y′ + q(x) y + r(x);  a < x < b
y(a) = α;  β1 y(b) + β2 y′(b) = β3    (4.19)
Here again, the solutions u1(b) and u2(b) of the two initial value problems (4.17a) and (4.17b) are obtained. The differential equation and the condition at x = a are satisfied by u(x) = u1(x) + c·u2(x). The arbitrary constant c is computed as follows:
β1[u1(b) + c·u2(b)] + β2[u1'(b) + c·u2'(b)] = β3

or  c = (β3 − β1·u1(b) − β2·u1'(b)) / (β1·u2(b) + β2·u2'(b))    (4.20)
In particular, for a pure Neumann condition at x = b (β1 = 0, β2 = 1, i.e. y'(b) = β3), this reduces to

c = (β3 − u1'(b)) / u2'(b)    (4.21)
Example 4.4: Solve the boundary value problem by the shooting method:

y'' + 4y = sin x;  0 ≤ x ≤ π/2
y(0) = 1;  y(π/2) + y'(π/2) = 1

Solution: The problem has a Robin type mixed boundary condition at x = π/2. The following two initial value problems are solved:

y'' + 4y = sin x;  0 ≤ x ≤ π/2;  y(0) = 1, y'(0) = 0
y'' + 4y = 0;  0 ≤ x ≤ π/2;  y(0) = 0, y'(0) = 1
Let the solution of the first problem be u1 and that of the second be u2. Each differential equation is reduced to a system of two first order differential equations and then solved numerically using the classical Runge-Kutta method with step size h = π/8. The detailed solution of the first problem is given in sheet 1 of the xls file BVP-shooting-robin.xls, while the solution details of the second problem are given in its sheet 2. The numerical solutions of the two problems are tabulated in these sheets.
The mixed boundary condition at x = π/2 is now imposed through (4.20) with β1 = β2 = β3 = 1,

c = (β3 − β1·u1(b) − β2·u1'(b)) / (β1·u2(b) + β2·u2'(b)),

to obtain the solution u1(x) + c·u2(x) of the given problem. The value of the constant is computed as c = −1.3195718. This gives the final solution of the given problem as
x        0          0.39269908    0.78539816    1.17809725    1.57079633
u_final  1          0.25253778    -0.6146839    -1.0135542    -0.6884501
z_final  -1.3196    -2.270626     -1.8074759    -0.1190162    1.68845015
This numerical solution is only approximate. The exact solution of the problem can be obtained analytically and compared with the computed values at the nodes x = 0, 0.39269908, 0.78539816, 1.17809725, 1.57079633.
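A minimal sketch of this superposition calculation is given below, under the assumption that the equation is y'' + 4y = sin x with y(0) = 1 and y(π/2) + y'(π/2) = 1 as reconstructed above; u1 and u2 are integrated by RK4 with h = π/8 and c is evaluated from (4.20) with β1 = β2 = β3 = 1.

```python
import numpy as np

# Superposition form of the shooting method for Example 4.4 (assumed equation
# y'' + 4y = sin x).  u1 solves the inhomogeneous IVP with y(0) = 1, y'(0) = 0;
# u2 solves the homogeneous IVP with y(0) = 0, y'(0) = 1.
def rk4(rhs, y0, a, b, n):
    h = (b - a) / n
    x, y = a, np.array(y0, dtype=float)
    for _ in range(n):
        k1 = rhs(x, y)
        k2 = rhs(x + h / 2, y + h / 2 * k1)
        k3 = rhs(x + h / 2, y + h / 2 * k2)
        k4 = rhs(x + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return y                                    # [y(b), y'(b)]

f1 = lambda x, y: np.array([y[1], np.sin(x) - 4 * y[0]])   # y'' = sin x - 4y
f2 = lambda x, y: np.array([y[1], -4 * y[0]])               # y'' = -4y

a, b, n = 0.0, np.pi / 2, 4                                 # step size h = pi/8
u1b, du1b = rk4(f1, [1.0, 0.0], a, b, n)
u2b, du2b = rk4(f2, [0.0, 1.0], a, b, n)

beta1 = beta2 = beta3 = 1.0
c = (beta3 - beta1 * u1b - beta2 * du1b) / (beta1 * u2b + beta2 * du2b)
print("c =", c)        # compare with the value -1.3195718 quoted above
```

The value of c printed by the sketch should be close to (though not necessarily identical with) the quoted −1.3195718; small differences reflect rounding and details of the spreadsheet implementation.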
Exercise 4
4.1 Solve the BVP using the finite difference method:
(a) y'' + x y' + y = x²,  y(0) = 0,  y(1) = 1;  take h = 0.2
(b) y'' − 2y' + y = x e^x − x,  y(0) = 2,  y(2) = 4;  take h = 0.25
Lecture 1
Y = [y0, y1, y2, ..., y_{N-1}, y_N]^T

Due to the Dirichlet boundary conditions, y0 = α and y_N = β, so that

g0(Y) = y0 − α;  g_N(Y) = y_N − β;
g_i(Y) = (y_{i+1} − 2y_i + y_{i−1}) − h²·f(x_i, y_i, (y_{i+1} − y_{i−1})/(2h));  i = 1, 2, ..., N−1,  h = (b − a)/N    (5.3)
The system (5.2) is a nonlinear system of N+1 equations in N+1 unknowns. Newton's iterative method can be used to solve the system numerically. This method is a generalization of the Newton-Raphson iterative scheme for finding roots of a single nonlinear equation g(y) = 0 (the case N = 1). According to Newton's method, the sequence of iterates is given as:
y^(n+1) = y^(n) − g(y^(n)) / g'(y^(n)) = y^(n) + v^(n),

where  g'(y^(n))·v^(n) = −g(y^(n)),  n = 0, 1, 2, 3, ...
For the system of equations, the derivative is replaced by the Jacobian matrix

J(Y) = [ ∂g_i/∂y_j ],  i, j = 0, 1, ..., N,

whose i-th row contains the partial derivatives of g_i with respect to y0, y1, ..., y_N.
According to Newton's method for solving (5.2), the sequence of iterates is given by

Y^(n+1) = Y^(n) + V^(n),  where  J(Y^(n))·V^(n) = −G(Y^(n)),  n = 0, 1, 2, ...

For i = 1, 2, ..., N−1 the non-zero partial derivatives of g_i are

∂g_i/∂y_{i+1} = 1 − (h/2)·(∂f/∂y')(x_i, y_i, (y_{i+1} − y_{i−1})/(2h)),
∂g_i/∂y_i = −2 − h²·(∂f/∂y)(x_i, y_i, (y_{i+1} − y_{i−1})/(2h)),
∂g_i/∂y_{i−1} = 1 + (h/2)·(∂f/∂y')(x_i, y_i, (y_{i+1} − y_{i−1})/(2h)).

Also, ∂g0/∂y0 = 1 and ∂g_N/∂y_N = 1.
Since g_i involves only y_{i−1}, y_i and y_{i+1}, each row of J(Y) has at most three non-zero entries (only one in the first and last rows), and the Jacobian J is tri-diagonal:

J(Y) =
| 1     0     0     .     .       .        0        |
| l1    d1    u1    0     .       .        0        |
| 0     l2    d2    u2    .       .        0        |
| .     .     .     .     .       .        .        |
| 0     .     .     .     l_{N-1} d_{N-1}  u_{N-1}  |
| 0     .     .     .     .       0        1        |

of order (N+1) × (N+1),
where

d_i = −2 − h²·(∂f/∂y)(x_i, y_i, (y_{i+1} − y_{i−1})/(2h)),   d0 = 1,
u_i = 1 − (h/2)·(∂f/∂y')(x_i, y_i, (y_{i+1} − y_{i−1})/(2h)),   u0 = l_N = 0,    (5.6)
l_i = 1 + (h/2)·(∂f/∂y')(x_i, y_i, (y_{i+1} − y_{i−1})/(2h)),   d_N = 1.
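The text does not prescribe a particular linear solver for the correction V^(n); one standard way to exploit the tri-diagonal structure of J(Y) is the Thomas algorithm (forward elimination followed by back substitution), sketched below as an illustration.

```python
import numpy as np

# Thomas algorithm sketch for a tri-diagonal system with sub-, main- and
# super-diagonals l, d, u and right-hand side r.  This is one standard way to
# exploit the tri-diagonal structure of J(Y) at each Newton step; it is not
# prescribed by the text itself.
def thomas(l, d, u, r):
    n = len(d)
    d, r = np.array(d, dtype=float), np.array(r, dtype=float)
    for i in range(1, n):
        m = l[i] / d[i - 1]          # eliminate the sub-diagonal entry
        d[i] -= m * u[i - 1]
        r[i] -= m * r[i - 1]
    x = np.empty(n)
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = (r[i] - u[i] * x[i + 1]) / d[i]
    return x

# small check on a 4x4 tri-diagonal system; the exact solution is [1, 1, 1, 1]
l = np.array([0.0, -1.0, -1.0, -1.0])   # l[0] is unused
d = np.array([2.0, 2.0, 2.0, 2.0])
u = np.array([-1.0, -1.0, -1.0, 0.0])   # u[-1] is unused
r = np.array([1.0, 0.0, 0.0, 1.0])
print(thomas(l, d, u, r))
```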
The initial guess for the solution is obtained from the straight line passing through the points (a, α) and (b, β):

y_i^(0) = α + ((β − α)/(b − a))·(x_i − a);  x_i = a + i·h,  h = (b − a)/N,

or  y_i^(0) = α + i·(β − α)/N,  i = 0, 1, 2, ..., N.    (5.7)
The method is illustrated for Dirichlet boundary conditions in the example.
Example 5.1: Solve the second order ordinary differential equation with Dirichlet type boundary conditions:

y·y'' + (y')² + 1 = 0,  y(0) = 1,  y(1) = 2

The equation is written as

y'' = f(x, y, y') = −(1 + (y')²) / y,

so that

(∂f/∂y)_i = (1 + (y_i')²) / y_i²;  (∂f/∂y')_i = −2y_i'/y_i.
Substituting these into (5.6) gives

d_i = −2 − (h² + (1/4)(y_{i+1} − y_{i−1})²) / y_i²,   d0 = 1,
u_i = 1 + (y_{i+1} − y_{i−1}) / (2y_i),   u0 = l_N = 0,    (A)
l_i = 1 − (y_{i+1} − y_{i−1}) / (2y_i),   d_N = 1.
With f(x, y, y') = −(1 + (y')²)/y, the difference equations (5.3) become

g_i(Y) = (y_{i+1} − 2y_i + y_{i−1}) − h²·f(x_i, y_i, (y_{i+1} − y_{i−1})/(2h)),  i = 1, 2, ..., N−1,  h = (b − a)/N,

that is,

G_i(Y) = (y_{i+1} − 2y_i + y_{i−1}) + h²·(1 + ((y_{i+1} − y_{i−1})/(2h))²) / y_i
       = (y_{i+1} − 2y_i + y_{i−1}) + (h² + (1/4)(y_{i+1} − y_{i−1})²) / y_i = 0.
The coefficients of the resulting tri-diagonal system and the corresponding right-hand sides, as assembled in the spreadsheet, are:

 1           0           0           1
-0.83333     2.066667   -1.16667    -0.06667
-0.83333     2.057143   -1.14286    -0.05714
-0.85714     2.05       -1.125      -0.05
-0.875       2.044444   -1.11111    -0.04444
                         1           2

The system of equations is now solved using the iterative method. The detailed solution is given in the Excel file nonlinear BVPnewton.xls. The final solution of the nonlinear BVP is tabulated there; it converges to the exact solution of the problem.
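A compact sketch of this Newton iteration is given below, using the reconstruction of Example 5.1 above (y·y'' + (y')² + 1 = 0 with y(0) = 1, y(1) = 2); because the original statement is only partly legible, the right-hand side f and the boundary values should be treated as assumptions. For brevity the Jacobian is assembled and solved as a dense matrix instead of exploiting its tri-diagonal structure.

```python
import numpy as np

# Newton's method for the finite-difference system (5.3)-(5.6), sketched for
# the reconstructed Example 5.1:  y*y'' + (y')^2 + 1 = 0, i.e. y'' = f(x, y, y')
# with f = -(1 + (y')^2)/y, and y(0) = 1, y(1) = 2 (assumed values).
a, b, alpha, beta, N = 0.0, 1.0, 1.0, 2.0, 5
h = (b - a) / N
x = a + h * np.arange(N + 1)

f    = lambda x, y, yp: -(1.0 + yp**2) / y       # y'' = f(x, y, y')
f_y  = lambda x, y, yp: (1.0 + yp**2) / y**2     # df/dy
f_yp = lambda x, y, yp: -2.0 * yp / y            # df/dy'

def G(Y):
    g = np.empty(N + 1)
    g[0], g[N] = Y[0] - alpha, Y[N] - beta
    for i in range(1, N):
        yp = (Y[i + 1] - Y[i - 1]) / (2 * h)
        g[i] = (Y[i + 1] - 2 * Y[i] + Y[i - 1]) - h**2 * f(x[i], Y[i], yp)
    return g

def J(Y):
    Jm = np.zeros((N + 1, N + 1))
    Jm[0, 0] = Jm[N, N] = 1.0
    for i in range(1, N):
        yp = (Y[i + 1] - Y[i - 1]) / (2 * h)
        Jm[i, i - 1] = 1.0 + (h / 2) * f_yp(x[i], Y[i], yp)   # l_i
        Jm[i, i]     = -2.0 - h**2 * f_y(x[i], Y[i], yp)      # d_i
        Jm[i, i + 1] = 1.0 - (h / 2) * f_yp(x[i], Y[i], yp)   # u_i
    return Jm

Y = alpha + (beta - alpha) * (x - a) / (b - a)    # straight-line initial guess (5.7)
for _ in range(10):
    v = np.linalg.solve(J(Y), -G(Y))              # Newton correction
    Y = Y + v
    if np.max(np.abs(v)) < 1e-10:
        break
# for the assumed equation, Y converges to values close to sqrt(1 + 4x - x^2)
print(Y)
```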
Lecture 2
Shooting Method
The missing initial slope y'(a) = p is guessed and the corresponding initial value problem is solved; p is then adjusted iteratively to find the value p* such that

y(b, p*) = α2,

that is, to find the root of F(p) = y(b, p) − α2 = 0 (5.10).
For this, the secant method is used for the following reasons:
Convergence of the bisection method is slow.
The Newton-Raphson method requires derivative evaluations.
The secant method has faster convergence without derivative evaluation.
According to the secant method, if the first two iterates for the root of equation (5.10) are p_i and p_{i+1} (i = 0), then

p_{i+2} = p_{i+1} − F(p_{i+1})·(p_{i+1} − p_i) / (F(p_{i+1}) − F(p_i)),  i = 0, 1, 2, ...    (5.11)
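Combining (5.11) with an IVP solver gives the nonlinear shooting algorithm. The sketch below is generic: the differential equation, interval and boundary values are hypothetical placeholders chosen only so the sketch runs, but the structure — guess p, integrate the IVP, apply the secant update to F(p) = y(b, p) − α2 — follows the discussion above.

```python
import numpy as np

# Generic nonlinear shooting with the secant update (5.11).
# The ODE, interval and boundary values below are hypothetical placeholders.
def fun(x, y):
    # y = [y, y'];  placeholder ODE  y'' = -sin(y)
    return np.array([y[1], -np.sin(y[0])])

a, b, alpha1, alpha2 = 0.0, 1.0, 1.0, 2.0   # y(a) = alpha1, target y(b) = alpha2

def rk4_y_at_b(p, n=20):
    """Solve the IVP with y(a) = alpha1, y'(a) = p and return y(b, p)."""
    h = (b - a) / n
    x, y = a, np.array([alpha1, p])
    for _ in range(n):
        k1 = fun(x, y)
        k2 = fun(x + h / 2, y + h / 2 * k1)
        k3 = fun(x + h / 2, y + h / 2 * k2)
        k4 = fun(x + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return y[0]

F = lambda p: rk4_y_at_b(p) - alpha2        # the required slope is the root of F

p0, p1 = 0.0, 1.0                            # two initial guesses
for _ in range(20):
    f0, f1 = F(p0), F(p1)
    p2 = p1 - f1 * (p1 - p0) / (f1 - f0)     # secant update (5.11)
    p0, p1 = p1, p2
    if abs(F(p2)) < 1e-8:
        break
print("slope y'(a) =", p1, "  y(b) =", rk4_y_at_b(p1))
```

The two starting guesses play the same role as the trial values of p used in the examples that follow.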
Example 5.2: Solve the second order ordinary differential equation with Dirichlet type boundary conditions using the shooting method. The iteration scheme (5.11) is used to obtain successively better approximations of p for the associated IVP; the computed solution is shown in the table:
t y y’
1 1 1.433544
1.2 1.260739 0.726185
1.4 1.48214 1.030026
1.6 1.67537 0.907326
1.8 1.846659 0.808776
2 1.999935 0.726185
Lecture 3
Shooting Method Contd.
Solve the IVP (5.13) to obtain the solution at x = b as y(b, p) and its derivative as y'(b, p). The constant p is now to be chosen so that

α2 − y(b, p) − y'(b, p) = 0    (5.14)
This means that the BVP (5.12) is solved when IVP (5.13) is solved subject to
the constraint (5.14). That is essentially the root finding problem for (5.14).
Accordingly, (5.13) is solved for arbitrarily chosen p1 and p2, and then the secant method (5.11) is used to estimate the root of (5.14).
The initial value problem (5.15) is solved with p = 1 and p = 0.1 using the classical RK4 method in sheets 1 and 2 of the Excel file nonlinear bvp-shooting method5.3.xls. The solution so obtained is used to compute the expression F(p) = 3 − y(1) − z(1) at p = 1.0 and p = 0.1. The iteration scheme (5.11) is then used to obtain a better estimate for p that satisfies the mixed boundary condition at x = 1.0. The IVP is solved again with this new estimate for p. The following tables show the solution for p = 1 and p = 0.1 respectively:
p = 1.0:
x     y        z
0     1        0.25
0.2   1.179    1.61777
0.4   1.711    4.017089
0.6   3.02     10.34542
0.8   6.959    10.34542
1     28.27    323.9563

p = 0.1:
x     y          z
0     0.1        0.25
0.2   0.151656   0.269173
0.4   0.208982   0.308245
0.6   0.277028   0.378895
0.8   0.363928   0.378895
1     0.483478   0.71466

Table 5.4 Solution of the IVP with p = 1.0 and p = 0.1 respectively
p y(1,p) z(1,p) F(p,y,z)
1 28.27 323.9563 -349.2
0.1 0.483 0.71466 1.8019
0.1046 0.495 0.737053 1.768
0.3455 1.532 3.785332 -2.318
0.2089 0.825 1.509149 0.6654
0.2393 0.951 1.858759 0.19
0.2515 1.006 2.019784 -0.026
0.25 0.999 1.999293 0.0014
0.2501 1 2.000653 -5E-04
Table 5.5 Secant iterations for p
x y z
0 0.2501 0.25
0.2 0.30874 0.343004
0.4 0.390726 0.48844
0.6 0.510304 0.729153
0.8 0.694501 0.729153
1 0.999806 2.000653
Table 5.6 Solution of the BVP of Example 5.3
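Each new value of p in Table 5.5 follows from the secant update (5.11) applied to the two previous rows; a minimal check using the first two tabulated rows is shown below.

```python
# Secant update (5.11) applied to the first two rows of Table 5.5.
def secant_update(p0, F0, p1, F1):
    # next estimate of the root of F(p) = 0 from two previous iterates
    return p1 - F1 * (p1 - p0) / (F1 - F0)

# rows (p, F) = (1, -349.2) and (0.1, 1.8019) of Table 5.5
p_next = secant_update(1.0, -349.2, 0.1, 1.8019)
print(round(p_next, 4))   # approximately 0.1046, the next entry of Table 5.5
```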
In case the mixed boundary condition is applied at the x = 0 end, the transformation x → 1 − x is used first, and then the above algorithm is applied to the transformed boundary value problem.
Exercise 5
5.1 Solve the nonlinear BVP using finite differences
y'' = 1 + (y')²,  y(0) = 1,  y(1) = 2,  taking h = 0.25