Study On Different Numerical Methods For Solving Differential Equations
Department of Mathematics,
University of Chittagong,
Chittagong-4331, Bangladesh
December 2011
Certification
This thesis, entitled “Study on Different Numerical Methods for Solving Differential Equations”, submitted by Mahtab Uddin in partial fulfillment of the requirements for the degree of Master of Science in Mathematics to the Department of Mathematics, University of Chittagong, Bangladesh, has been carried out by him under my supervision.
Dedication
Acknowledgments
First of all, I give all praise to Almighty Allah for enabling me to complete
this thesis work.
With great pleasure I would like to express my heartiest gratitude, cordial thanks,
deepest sense of respect and appreciation to my reverend teacher and thesis supervisor
Dr. Munshi Nazrul Islam, Professor, Department of Mathematics, University of
Chittagong for his indispensable guidance, sympathetic encouragement, valuable
suggestions and generous help during the course of study and the progress of this thesis
work.
I also acknowledge my gratefulness to Dr. Musleh Uddin Ahmed and Dr. Nil
Raton Bhattacharjee, Professor and Ex-Chairman, Department of Mathematics,
University of Chittagong for their valuable suggestions and kind inspiration in carrying
out this thesis work.
I would like to express my deep sense of gratitude to Dr. Abul Kalam Azad,
Professor & Chairman, Department of Mathematics, University of Chittagong, and all
of my honorable teachers of this department for their fruitful advice and
encouragement.
Chittagong AUTHOR
December 2011.
Abstract
This thesis is mainly an analytical and comparative study of various numerical
methods for solving differential equations, but Chapter-6 contains two proposed
numerical methods based on (i) the predictor-corrector formula, for solving ordinary
differential equations of first order and first degree, and (ii) the finite-difference
approximation formula, for solving partial differential equations of elliptic type.
Two types of problems are discussed in detail in this thesis work, namely
ordinary differential equations in Chapter-2 & Chapter-3 and partial differential
equations in Chapter-4. Also, Chapter-5 highlights boundary value problems.
In it, a comparison between Taylor’s series method and Picard’s method
of successive approximation is given. Moreover, the advantages and disadvantages of
these methods are narrated.
The law of the rate of nuclear decay is solved in this chapter by means of the
standard fourth-order Runge-Kutta method, and the obtained solution is then compared
with the exact solution; this is an application of the numerical method to nuclear
physics. A comparison between the predictor-corrector method and the Runge-Kutta
method is discussed in detail, along with the advantages and disadvantages of these two
methods.
the methods of iteration and relaxation for elliptic equations are discussed. Schmidt's
method and the Crank-Nicolson method are discussed for solving parabolic equations.
The solution of the vibrations of a stretched string is presented as an application of the
solution of hyperbolic equations.
Chapter-5 deals with the solution of boundary value problems in both
ordinary differential equations and partial differential equations. It provides a brief
discussion of the finite-difference approximation method and the shooting method with
their applications. The application of Green’s functions to solving boundary value
problems is also discussed in detail.
Moreover, the B-spline method for solving two-point boundary value problems
of order four is introduced in this chapter at length. Derivations of cubic B-splines are
presented. Cubic B-spline solutions of special linear fourth-order boundary value
problems, the general case of the boundary value problem, and the treatment of non-linear
and singular problems are discussed here.
Chapter-6 contains proposals for the modification of two numerical methods.
One of them is a proposed modification of Milne’s predictor-corrector formula for
solving ordinary differential equations of the first order and first degree, namely
Milne’s (modified) predictor-corrector formula. One more step-length and one more
term in Newton’s interpolation formula are used in deriving the predictor and
corrector formulae of Milne’s (modified) predictor-corrector formula. Also, a modified
formula for solving elliptic equations by finite-difference approximation is proposed,
namely the surrounding 9-point formula. This formula is obtained by combining the
standard 5-point formula and the diagonal 5-point formula, and it is more effective in
finding the mesh points of a given domain in a certain region.
Finally, recommendations for future research and a list of a few further works
are mentioned.
Contents
ARTICLES PAGES
Certification…………………………………………………………………………….. i
Dedication………..……………………………………………………………………... ii
Acknowledgments…………………………………………………………………….... iii
Abstract…………………………………………………………………………………. iv-v
Contents……………………………………………………………………………….. vi-x
2.1 Introduction……………………………………………………….................... 12
2.2 Taylor’s series method-
i. Derivation………………………………………………................. 12
ii. Truncation error……………………………………………........... 14
2.3 Application of Taylor’s series method-
i. Approximation by Taylor’s series method…………………........... 14
ii. Exact result……………………………………………………....... 16
2.4 Derivation of Picard’s method of successive approximation……………......... 17
2.5 Application of the Picard’s method of successive approximation-
i. Approximation by Picard’s method of successive approximation... 18
ii. Exact result……………………………………………………....... 21
iii. Graphical representation…………………………………….......... 22
2.6 Comparison between Taylor’s series method and Picard’s method of
successive approximation……………………………………………….......... 23
2.7 Euler’s method-
i. Derivation………………………………………………................. 23
ii. Truncation error……………………………………………........... 24
2.8 Physical application of Euler’s method-
i. Approximation by Euler’s method………………………..…......... 25
ii. Graphical representation of the application………..…………....... 28
2.9 Modification of Euler’s method-
i. Derivation………………………………………………................. 29
ii. Truncation error……………………………………………........... 31
2.10 Application of modified Euler’s method-
i. Approximation by modified Euler’s method …………….............. 32
ii. Exact result……………………………………………………....... 34
i. Derivation of Milne’s predictor formula..……………………........ 37
ii. Derivation of Milne’s corrector formula………………….…......... 38
iii. Local truncation error…………..…………………......................... 39
3.4 Application of Milne’s predictor-corrector method………………………....... 39
3.5 Adams-Moulton predictor-corrector method-
i. Derivation of Adams-Moulton predictor formula…………............ 41
ii. Derivation of Adams-Moulton corrector formula……………........ 42
iii. Local truncation error…………..…………………......................... 43
3.6 Application of Adams-Moulton predictor-corrector method……………......... 45
3.7 Comments on predictor-corrector methods………………………………........ 47
3.8 Runge-Kutta method-
i. Derivation of Runge-Kutta formulae………………………........... 48
ii. Error estimation in Runge-Kutta formulae………………….......... 52
3.9 Physical application of Runge-Kutta method-
i. Approximation by Runge-Kutta method…………………….......... 53
ii. Exact result……………………………………………………....... 58
3.10 Extensions of Runge-Kutta formulae……………………………………......... 59
3.11 Generalized formula for Runge-Kutta method………………………….......... 62
3.12 Comparison between the predictor-corrector method and Runge-Kutta method. 62
4.7 Application of solving parabolic equation……………………………............. 79
4.8 Solution of hyperbolic equations (wave equation) ………………………....... 81
4.9 Application of solving hyperbolic equation………………………………....... 82
4.10 Comparison between iterative method and relaxation method………….......... 84
4.11 The Rayleigh-Ritz method-
i. Introduction…………………………………………….................. 85
ii. Vibration of a rectangular membrane……………………….......... 85
4.12 Comparative discussion of the Rayleigh-Ritz method with the iterative method
and relaxation method…………………………………………........................ 90
ii. Derivation of Milne’s (modified) corrector formula…………........ 109
6.3 Application of Milne’s (modified) predictor-corrector method-
i. Approximation by Milne’s (modified) predictor-corrector
formulae……………………………………….………………...... 110
ii. Exact result……………………………………….……………….. 111
iii. Comment………………………………………………………….. 112
6.4 Surrounding 9-point formula-
i. Derivation of Surrounding 9-point formula…………………….… 112
ii. Algorithm………………….....………………………………….... 114
6.5 Application of surrounding 9-point formula-
i. Approximation by surrounding 9-point formula……….......……... 114
ii. Comment………………………………………………………….. 117
6.6 Advantages of proposed methods over previous methods………………......... 117
REFERENCES……………………………………………………………….……….... 121-122
CHAPTER-1
1.1 INTRODUCTION
$$a_0(x)\frac{d^n y}{dx^n} + a_1(x)\frac{d^{n-1} y}{dx^{n-1}} + \cdots + a_{n-1}(x)\frac{dy}{dx} + a_n(x)\,y = f(x) \tag{1.2.1}$$

$$x\frac{dy}{dx} = 3y \tag{1.3.1}$$

$$x\frac{d^2 y}{dx^2} + 6\frac{dy}{dx} + 3y = 0 \tag{1.3.2}$$

$$\sqrt{\frac{dy}{dx}} + y = x \tag{1.3.3}$$

$$\left(\frac{d^2 y}{dx^2}\right)^{3} + \left(\frac{dy}{dx}\right)^{2} = x \tag{1.3.4}$$
The order [22] of a differential equation is the order of the highest order
derivative appearing in the equation.
For example, orders of the differential equations (1.3.1), (1.3.2), (1.3.3) and
(1.3.4) are 1,2,1 & 2 respectively.
The degree of a differential equation is the degree of the highest order derivative
involved in it when the derivatives are free from radicals and fractions, i.e., when the
differential equation is written as a polynomial in the derivatives.
For example, degrees of the differential equations (1.3.1), (1.3.2), (1.3.3) and
(1.3.4) are 1,1,1 & 3 respectively.
Because the derivative is the rate of change, such an equation states how a
function changes but does not specify the function itself. Given sufficient initial
conditions, however, such as a specific function value, the function can be found by
various methods, most based on integration.
To distinguish the above case from this one, an equation of the form
$$m\frac{d^2 x(t)}{dt^2} = F(x(t)) \tag{1.4.3}$$
In general, the force 𝐹 depends upon the position 𝑥(𝑡) of the particle at time 𝑡,
and thus the unknown function 𝑥(𝑡) appears on both sides of (1.4.3), as indicated in the
notation 𝐹(𝑥(𝑡)).
Partial differential equations are used to formulate, and thus aid the solution of,
problems involving several variables [10], such as the propagation of sound or heat,
electrostatics, electrodynamics, fluid flow, and electricity. Seemingly distinct physical
phenomena may have identical mathematical formulations and thus be governed by the
same underlying dynamic. They find their generalization in stochastic partial
differential equations.
$$F\left(x_1, x_2, x_3, \ldots, x_n, \frac{\partial u}{\partial x_1}, \frac{\partial u}{\partial x_2}, \ldots, \frac{\partial u}{\partial x_n}, \frac{\partial^2 u}{\partial x_1 \partial x_2}, \frac{\partial^2 u}{\partial x_2 \partial x_3}, \ldots\right) = 0 \tag{1.4.4}$$
$$\frac{\partial^2 \varphi}{\partial x^2} + \frac{\partial^2 \varphi}{\partial y^2} + \frac{\partial^2 \varphi}{\partial z^2} = \frac{1}{v^2}\frac{\partial^2 \varphi}{\partial t^2} \tag{1.4.5}$$
$$y_1' = y_2$$
$$y_2' = y_3$$
$$y_3' = y_4$$
$$\vdots$$
$$y_{n-1}' = y_n$$
$$y_n' = F\left(x, y, y^{(1)}, y^{(2)}, y^{(3)}, \ldots, y^{(n-1)}\right) \tag{1.5.3}$$
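A reduction of this kind is easy to carry out numerically. The sketch below (our own illustration in Python, not part of the thesis) rewrites the second-order equation $y'' = -y$ as the system $y_1' = y_2$, $y_2' = -y_1$ and integrates it with the simple Euler step discussed in Chapter-2; with $y(0) = 1$, $y'(0) = 0$ the exact solution is $\cos t$, so the result at $t = 1$ should lie close to $\cos 1$.

```python
import math

def euler_system(F, y0, t0, t1, h):
    """Integrate the first-order system y' = F(t, y) by Euler's method."""
    t, y = t0, list(y0)
    while t < t1 - 1e-12:
        dy = F(t, y)
        y = [yi + h * di for yi, di in zip(y, dy)]
        t += h
    return y

# y'' = -y  ->  y1' = y2, y2' = -y1, with y(0) = 1, y'(0) = 0
F = lambda t, y: [y[1], -y[0]]
y = euler_system(F, [1.0, 0.0], 0.0, 1.0, 0.001)
# The exact solution is y = cos t, so y(1) should be close to cos(1).
print(y[0], math.cos(1.0))
```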
Then we get
∇2 𝜑 = 0 (1.6.1)
$$\frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2} = 0 \tag{1.6.2}$$
in cartesian coordinates.
$$\frac{1}{\rho}\frac{\partial}{\partial \rho}\left(\rho\frac{\partial f}{\partial \rho}\right) + \frac{1}{\rho^2}\frac{\partial^2 f}{\partial \phi^2} + \frac{\partial^2 f}{\partial z^2} = 0 \tag{1.6.3}$$
in cylindrical coordinates.
$$\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial f}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta\frac{\partial f}{\partial \theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2 f}{\partial \phi^2} = 0 \tag{1.6.4}$$
in spherical coordinates.
Now, differentiating (1.6.5) with respect to $t$ and remembering that $\frac{dQ}{dt} = I$, we have
a second-order differential equation

$$L\frac{d^2 I}{dt^2} + R\frac{dI}{dt} + \frac{I}{C} = \frac{dv}{dt} \tag{1.6.6}$$
𝐿𝑦 = 𝑓 (1.7.1)
$$L_n(y) = A_0(t)\frac{d^n y}{dt^n} + A_1(t)\frac{d^{n-1} y}{dt^{n-1}} + \cdots + A_{n-1}(t)\frac{dy}{dt} + A_n(t)\,y \tag{1.7.3}$$
The linearity condition on $L$ rules out operations such as taking the square of the
derivatives of $y$. It is convenient to rewrite the above equation in an operator form
such as
Such an equation is said to have order 𝑛, the index of the highest order
derivative of 𝑦 that is involved.
$$\frac{dN}{dt} = -KN \tag{1.7.5}$$
𝑦 ′′′ − 2𝑦 ′′ + 4𝑦 ′ = 𝑒 𝑥 (1.7.6)
𝑦 ′′ − 3𝑦 ′ + 2𝑦 = 0 (1.7.7)
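A linear equation such as (1.7.7) can be checked symbolically. The following sketch uses the SymPy library (our own illustrative tool, not one used in the thesis) to solve $y'' - 3y' + 2y = 0$ and to verify that the returned general solution actually satisfies the equation.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = y(x).diff(x, 2) - 3*y(x).diff(x) + 2*y(x)

# dsolve returns the general solution with arbitrary constants C1, C2
sol = sp.dsolve(ode, y(x))

# Substitute the solution back into the left-hand side; it must vanish.
residual = ode.subs(y(x), sol.rhs).doit()
print(sp.simplify(residual))  # -> 0
```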
called a non-linear differential equation; i.e., a differential equation that does not satisfy
(1.7.1), (1.7.2), (1.7.3) and (1.7.4) is called a non-linear differential equation.
$$x\frac{d^2 y}{dx^2} + x\left(\frac{dy}{dx}\right)^{2} + y = 0 \tag{1.7.8}$$
$$2x\frac{d^3 y}{dx^3}\cdot\frac{d^2 y}{dx^2} = 3\frac{dy}{dx} + 4 \tag{1.7.9}$$
For example,
For the function 𝑔 (𝑢, 𝑣), when the boundary conditions are linear, then for
some constant 𝑘 we get, for some square matrices 𝐵𝑎 and 𝐵𝑏
In general, for both linear and non-linear boundary conditions, we can define
$$B_a = \frac{\partial g}{\partial u} \quad \& \quad B_b = \frac{\partial g}{\partial v} \tag{1.9.4}$$
of analysis that act on continuous functions are replaced by algebraic operations on the
function values in the table.
$$\frac{d^2 x}{dt^2} = f(t, x, x') \tag{1.11.1}$$

Integrating once and then again with respect to $t$,

$$\frac{dx}{dt} = f_1(t, x) + c_1 \tag{1.11.2}$$

$$x = f_2(t, x) + c_1 t + c_2 \tag{1.11.3}$$
Here $\frac{dx}{dt}$ is the velocity and $x$ is the displacement of the body at time $t$. The
functions $f_1(t, x)$ and $f_2(t, x)$ are the first and second integrals of $f(t, x, x')$
with respect to $t$. Also, $c_1$ & $c_2$ are arbitrary constants, which are to be determined.
Then (1.11.3) is called the general solution of (1.11.1). For particular values
of $c_1$ & $c_2$, (1.11.3) represents a curve.
Now, if along with (1.11.1) we also give the conditions $x(t_0) = x_0$ and $x'(t_0) = x_0'$ at
a particular value $t = t_0$ of the time, then from (1.11.2) and (1.11.3) respectively we can
find the values of $c_1$ & $c_2$. These extra conditions are called the initial or boundary
conditions. Then (1.11.1) becomes an initial value problem:
$$\frac{d^2 x}{dt^2} = f(t, x, x')\ ;\quad x(t_0) = x_0\ ;\quad x'(t_0) = x_0' \tag{1.11.4}$$
With different values of $c_1$ & $c_2$, we can find curves of the family given by
(1.11.3) that pass through certain points specified by the given initial conditions.
The overall goal of numerical methods is the design and analysis of techniques
that give approximate but accurate solutions to hard problems, a variety of which are
given below.
c. Car companies can improve the crash safety of their vehicles by using computer
simulations of car crashes. Such simulations essentially consist of solving
partial differential equations numerically.
d. Hedge funds (private investment funds) use tools from all fields of numerical
analysis to calculate the value of stocks and derivatives more precisely than
other market participants.
f. Insurance companies use numerical programs for actuarial analysis.
All of the above require better techniques that minimize the computation error.
There are several methods for solving differential equations having numerical
coefficients with initial or boundary conditions. Some well-known ones will be
discussed in the next chapters.
SOLUTION OF DIFFERENTIAL
EQUATIONS OF FIRST ORDER AND FIRST
DEGREE BY NUMERICAL METHODS OF
EARLY STAGE
2.1 INTRODUCTION
(i) A power series in 𝑥 for 𝑦, from which the values of 𝑦 can be obtained by
direct substitution.
$$y' = \frac{dy}{dx} = f(x, y)\ ;\quad y(x_0) = y_0 \tag{2.2.1}$$
Let 𝑦 = 𝑦 (𝑥) be the exact solution of (2.2.1) such that 𝑦(𝑥0 ) ≠ 0. Now
expanding (2.2.1) by Taylor’s series [12] about the point 𝑥 = 𝑥0 , we get
$$y = y(x) = y_0 + (x - x_0)\,y_0' + \frac{(x - x_0)^2}{2!}\,y_0'' + \frac{(x - x_0)^3}{3!}\,y_0''' + \cdots \tag{2.2.2}$$
In the expression (2.2.2), the derivatives $y_0'$, $y_0''$, $y_0'''$, … are not explicitly
known. However, if $f(x, y)$ is differentiable several times, they can be expressed in
terms of $f(x, y)$ and its partial derivatives as follows:
𝑦 ′ = 𝑓(𝑥, 𝑦) = 𝑓
𝑦 ′′ = 𝑓 ′ (𝑥, 𝑦) = 𝑓𝑥 + 𝑦 ′ 𝑓𝑦 = 𝑓𝑥 + 𝑓𝑓𝑦
𝑦 ′′′ = 𝑓 ′′ (𝑥, 𝑦) = 𝑓𝑥𝑥 + 2𝑓𝑓𝑥𝑦 + 𝑓𝑦𝑦 𝑓 2 + 𝑓𝑥 𝑓𝑦 + 𝑓𝑦2 𝑓
Now, for a suitably small step length $h = x_i - x_{i-1}$, the function $y = y(x)$ is
evaluated at $x_1 = x_0 + h$. Then the Taylor expansion (2.2.2) becomes
$$y(x_0 + h) = y(x_0) + h\,y'(x_0) + \frac{h^2}{2!}\,y''(x_0) + \frac{h^3}{3!}\,y'''(x_0) + \cdots$$

$$or,\quad y_1 = y_0 + h\,y_0' + \frac{h^2}{2}\,y_0'' + \frac{h^3}{6}\,y_0''' + \cdots \tag{2.2.3}$$
$$y(x_0 + 2h) = y(x_0 + h) + h\,y'(x_0 + h) + \frac{h^2}{2!}\,y''(x_0 + h) + \frac{h^3}{3!}\,y'''(x_0 + h) + \cdots$$

$$or,\quad y_2 = y_1 + h\,y_1' + \frac{h^2}{2}\,y_1'' + \frac{h^3}{6}\,y_1''' + \cdots \tag{2.2.4}$$
$$y_{n+1} = y_n + h\,y_n' + \frac{h^2}{2}\,y_n'' + \frac{h^3}{6}\,y_n''' + \cdots \tag{2.2.5}$$
This equation can be used to obtain $y_{n+1}$, the approximate value of the actual
solution $y = y(x)$ at $x_{n+1} = x_0 + (n+1)h$.
$$y_{n+1} = y_n + h\,y_n' + \frac{h^2}{2}\,y_n'' + O(h^3) \tag{2.2.6}$$
Here 𝑂(ℎ3 ) denotes all the remaining terms which contain the third and higher
powers of ℎ.
Now we may omit the terms $O(h^3)$, which introduces an approximation error into
(2.2.6). For some constant $k$, the local truncation error in this approximation of
$y_{n+1}$ is $kh^3$. Then, for a better approximation of $y_{n+1}$ we may keep the terms up
to $h^3$ or $h^4$, so we obtain the following.
𝑦𝑛′ = 𝑓(𝑥𝑛 , 𝑦𝑛 ) = 𝑓
𝑦𝑛′′ = 𝑓 ′ (𝑥𝑛 , 𝑦𝑛 ) = 𝑓𝑥 + 𝑓𝑓𝑦
𝑦𝑛′′′ = 𝑓 ′′ (𝑥𝑛 , 𝑦𝑛 ) = 𝑓𝑥𝑥 + 2𝑓𝑓𝑥𝑦 + 𝑓𝑦𝑦 𝑓 2 + 𝑓𝑥 𝑓𝑦 + 𝑓𝑦2 𝑓
$$y_{n+1} = y_n + h f + \frac{h^2}{2}\left(f_x + f f_y\right) + O(h^3) \tag{2.2.7}$$
Again, for a better approximation with smaller truncation error, we utilize
higher-order derivatives. Then, with truncation error $O(h^4)$, (2.2.6) becomes

$$y_{n+1} = y_n + h f + \frac{h^2}{2}\left(f_x + f f_y\right) + \frac{h^3}{3!}\left(f_{xx} + 2f f_{xy} + f^2 f_{yy} + f_x f_y + f f_y^2\right) + O(h^4) \tag{2.2.9}$$
Thus, from Taylor's theorem, considering the remainder term, the truncation
error of a method of order $O(h^k)$ is given as

$$E_T = \frac{h^{k+1}}{(k+1)!}\,y^{(k+1)}(\theta)\ ;\quad x_n < \theta < x_{n+1} \tag{2.2.10}$$
Apply Taylor’s series method to solve $\frac{dy}{dx} = x - y$ with the initial condition
$y(0) = 1$, up to $x = 0.4$ with $h = 0.2$.
Solution: Given that $y' = \frac{dy}{dx} = x - y = f(x, y)$.
Also $y_0 = 1$ when $x_0 = 0$ and $h = 0.2$, so that

$$y' = x - y,\qquad y'' = 1 - y',\qquad y''' = -y'',\qquad y^{(4)} = -y''',\qquad y^{(5)} = -y^{(4)},\qquad y^{(6)} = -y^{(5)}$$
$$y_1 = y_0 + h\,y_0' + \frac{h^2}{2}\,y_0'' + \frac{h^3}{6}\,y_0''' + \frac{h^4}{24}\,y_0^{(4)} + \frac{h^5}{120}\,y_0^{(5)} + \frac{h^6}{720}\,y_0^{(6)} + \cdots$$

$$= 1 + (0.2)(-1) + \frac{(0.2)^2}{2}(2) + \frac{(0.2)^3}{6}(-2) + \frac{(0.2)^4}{24}(2) + \frac{(0.2)^5}{120}(-2) + \frac{(0.2)^6}{720}(2)$$

$$= 0.837461511\ (\text{app.})$$

$i.e.$ $y_1 = 0.837461511$ and $x_1 = x_0 + h = 0.0 + 0.2 = 0.2$.
$$y_2 = y_1 + h\,y_1' + \frac{h^2}{2}\,y_1'' + \frac{h^3}{6}\,y_1''' + \frac{h^4}{24}\,y_1^{(4)} + \frac{h^5}{120}\,y_1^{(5)} + \frac{h^6}{720}\,y_1^{(6)} + \cdots$$

$$= 0.837461511 + (0.2)(-0.637461511) + \frac{(0.2)^2}{2}(1.637461511) + \frac{(0.2)^3}{6}(-1.637461511)$$
$$\qquad + \frac{(0.2)^4}{24}(1.637461511) + \frac{(0.2)^5}{120}(-1.637461511) + \frac{(0.2)^6}{720}(1.637461511)$$

$$= 0.740640099\ (\text{app.})$$

$i.e.$ $y_2 = 0.740640099$ and $x_2 = x_1 + h = 0.2 + 0.2 = 0.4$.
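The two steps above can be reproduced with a short Python sketch (our own illustration). For this particular equation the recursion $y'' = 1 - y'$, $y''' = -y''$, … makes every higher derivative available from the first, so each Taylor step simply sums the series through the $h^6$ term, exactly as in the hand computation.

```python
from math import factorial

def taylor_step(x, y, h, terms=7):
    """One Taylor-series step for y' = x - y, using y'' = 1 - y',
    y''' = -y'', ... to generate the higher derivatives."""
    d = x - y              # y'
    derivs = [d]
    d = 1 - d              # y''
    derivs.append(d)
    for _ in range(terms - 3):
        d = -d             # y''' = -y'', y'''' = -y''', ...
        derivs.append(d)
    return y + sum(h**(k + 1) / factorial(k + 1) * derivs[k]
                   for k in range(len(derivs)))

h = 0.2
y1 = taylor_step(0.0, 1.0, h)
y2 = taylor_step(0.2, y1, h)
print(y1, y2)
```

Running it gives $y_1 \approx 0.837461511$ and $y_2 \approx 0.740640099$, matching the values obtained above.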
Exact result: rewriting the given equation as $\frac{dy}{dx} + y = x$, the integrating factor is

$$IF = e^{\int dx} = e^{x}$$

Multiplying through by $e^{x}$ and integrating,

$$or,\quad y e^{x} = \int x e^{x}\,dx + c\ ;\quad c \in \mathbf{R}$$

$$or,\quad y e^{x} = e^{x}(x - 1) + c$$

$$or,\quad y = x - 1 + c e^{-x}$$

Using $y(0) = 1$: $1 = 0 - 1 + c e^{0} \Rightarrow c = 2$, so the exact solution is

$$y = x - 1 + 2e^{-x}$$
Disadvantages of Taylor’s series method: In practice it is of limited importance
because it needs partial derivatives, which are complex to compute. Also, $h$ should be
small enough that successive terms of the series diminish quite rapidly. Again, the
evaluation of additional terms becomes increasingly difficult. The most significant
disadvantage of this method is the requirement of evaluating the higher-order
derivatives frequently. Being a time-consuming process, it is disliked for
computation.
$$y' = \frac{dy}{dx} = f(x, y)\ ;\quad y(x_0) = y_0 \tag{2.4.1}$$
$$or,\quad [y]_{y_0}^{y} = \int_{x_0}^{x} f(x, y)\,dx$$

$$or,\quad y - y_0 = \int_{x_0}^{x} f(x, y)\,dx$$

$$or,\quad y = y_0 + \int_{x_0}^{x} f(x, y)\,dx \tag{2.4.3}$$
Here the integral term in the right-hand side represents the increment in 𝑦
produced by an increment 𝑥– 𝑥0 in 𝑥. The equation is complicated by the presence of
𝑦 in (2.4.3) under the integral sign as well as outside it. An equation of this kind is
The integral is now a function of 𝑥 alone and the indicated integration can be
performed at least for one time. Having first approximation to 𝑦, substitute it for 𝑦 in
the integrand in (2.4.3) and by integrating again we get the second approximation of 𝑦
as following
$$y^{(2)} = y_0 + \int_{x_0}^{x} f\left(x, y^{(1)}\right) dx \tag{2.4.5}$$
Proceeding in this way we obtain $y^{(3)}$, $y^{(4)}$, $y^{(5)}$, and so on. Thus, the
$n$th approximation is given by the following equation:
$$y^{(n)} = y_0 + \int_{x_0}^{x} f\left(x, y^{(n-1)}\right) dx \tag{2.4.6}$$
Then, putting $y^{(n)}$ for $y$ in (2.4.3), we get the next approximation as

$$y^{(n+1)} = y_0 + \int_{x_0}^{x} f\left(x, y^{(n)}\right) dx \tag{2.4.7}$$
This process is repeated as many times as necessary or desirable, and is
terminated when two consecutive values of $y$ agree to the desired degree of accuracy.
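The iteration (2.4.6)-(2.4.7) is mechanical enough to be delegated to a computer algebra system. The sketch below (using the SymPy library, our own choice for illustration) generates six successive Picard approximations for the problem solved next, $y' = x - y$ with $y(0) = 1$.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Integer(1)          # y^(0): the initial approximation y = y0 = 1

approximations = []
for _ in range(6):
    # y^(n+1) = y0 + integral from 0 to x of f(x, y^(n)) dx
    y = 1 + sp.integrate(x - y, (x, 0, x))
    approximations.append(sp.expand(y))

# y^(6) evaluated at x = 0.2
print(float(approximations[-1].subs(x, sp.Rational(1, 5))))
```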
Apply Picard’s method of successive approximation to solve $\frac{dy}{dx} = x - y$ with
the initial condition $y(0) = 1$.
Solution: Given that $\frac{dy}{dx} = x - y$ ; $y(0) = 1$.
$$i.e.\quad y^{(n+1)} = 1 + \int_0^{x}\left(x - y^{(n)}\right) dx \tag{2.5.1}$$
$$y^{(5)} = 1 + \int_0^{x}\left(x - y^{(4)}\right) dx$$

$$= 1 + \int_0^{x}\left(x - 1 + x - x^2 + \frac{x^3}{3} - \frac{x^4}{12} + \frac{x^5}{120}\right) dx$$

$$= 1 + \int_0^{x}\left(2x - 1 - x^2 + \frac{x^3}{3} - \frac{x^4}{12} + \frac{x^5}{120}\right) dx$$

$$= 1 + \left[x^2 - x - \frac{x^3}{3} + \frac{x^4}{12} - \frac{x^5}{60} + \frac{x^6}{720}\right]_0^{x}$$

$$= 1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{12} - \frac{x^5}{60} + \frac{x^6}{720}$$
At $x = 0.2$, we get

$$y^{(1)} = 1 - x + \frac{x^2}{2} = 1 - (0.2) + \frac{(0.2)^2}{2} = 0.82$$

$$y^{(2)} = 1 - x + x^2 - \frac{x^3}{6} = 1 - (0.2) + (0.2)^2 - \frac{(0.2)^3}{6} = 0.8387$$

$$y^{(3)} = 1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{24} = 1 - (0.2) + (0.2)^2 - \frac{(0.2)^3}{3} + \frac{(0.2)^4}{24} = 0.837400$$

$$y^{(4)} = 1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{12} - \frac{x^5}{120} = \cdots = 0.83746400$$

$$y^{(5)} = 1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{12} - \frac{x^5}{60} + \frac{x^6}{720} = \cdots = 0.8374614222$$

$$y^{(6)} = 1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{12} - \frac{x^5}{60} + \frac{x^6}{360} - \frac{x^7}{5040} = \cdots = 0.837461508571$$
Exact result: We have $\frac{dy}{dx} = x - y$ ; $y(0) = 1$. From the analytical solution in
section 2.3, the solution of the above differential equation is obtained as

$$y = x - 1 + 2e^{-x}$$
$$y = x - 1 + 2e^{-x} = x - 1 + 2\left[1 - x + \frac{x^2}{2!} - \frac{x^3}{3!} + \frac{x^4}{4!} - \frac{x^5}{5!} + \frac{x^6}{6!} - \frac{x^7}{7!} + \cdots\right]$$

$$= x - 1 + \left[2 - 2x + x^2 - \frac{x^3}{3} + \frac{x^4}{12} - \frac{x^5}{60} + \frac{x^6}{360} - \frac{x^7}{2520} + \cdots\right]$$

$$= 1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{12} - \frac{x^5}{60} + \frac{x^6}{360} - \frac{x^7}{2520} + \cdots$$
3
Now the sixth approximation is correct up to the first seven terms of the series;
thus the truncation error is obtained as

$$E_T = y - y^{(6)}$$

$$= \left[1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{12} - \frac{x^5}{60} + \frac{x^6}{360} - \frac{x^7}{2520}\right] - \left[1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{12} - \frac{x^5}{60} + \frac{x^6}{360} - \frac{x^7}{5040}\right]$$

$$= -\frac{x^7}{2520} + \frac{x^7}{5040} = -\frac{x^7}{5040}$$
Now, putting $x = 0.2$ in the analytical solution, we get the exact value of $y$,
listed with the successive approximations below. Thus we can conclude that, in this
case, the approximation in Picard's method is correct to eight decimal places.
$$y_{0.2}^{(1)} = \left[1 - x + \frac{x^2}{2}\right]_{x=0.2} = 0.82$$

$$y_{0.2}^{(2)} = \left[1 - x + x^2 - \frac{x^3}{6}\right]_{x=0.2} = 0.8387$$

$$y_{0.2}^{(3)} = \left[1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{24}\right]_{x=0.2} = 0.837400$$

$$y_{0.2}^{(4)} = \left[1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{12} - \frac{x^5}{120}\right]_{x=0.2} = 0.83746400$$

$$y_{0.2}^{(5)} = \left[1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{12} - \frac{x^5}{60} + \frac{x^6}{720}\right]_{x=0.2} = 0.8374614222$$

$$y_{0.2}^{(6)} = \left[1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{12} - \frac{x^5}{60} + \frac{x^6}{360} - \frac{x^7}{5040}\right]_{x=0.2} = 0.837461508571$$

$$y_{0.2} = \left[1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{12} - \frac{x^5}{60} + \frac{x^6}{360} - \frac{x^7}{2520} + \cdots\right]_{x=0.2} = 0.837461506160$$
Now, plotting the above values in a graph [11], we get the following figure.

Figure – (2.1)

Figure (2.1) shows that the approximating curves approach the curve $y = y(x)$ more
closely with each successive approximation, coinciding with it at the sixth
approximation.
The successive approximations 𝑦 (1) , 𝑦 (2) , 𝑦 (3) , 𝑦 (4) , 𝑦 (5) & 𝑦 (6) have the same
terms as in the exact infinite series truncated after 2,3,4,5,6 & 7 terms respectively.
Both Taylor’s series method and Picard’s method involve analytic
operations [1].

Moreover, the truncation errors in the above two methods show that Taylor’s
series method gives accuracy correct to seven decimal places after the sixth step,
whereas Picard’s method gives accuracy correct to eight decimal places.

Thus we can conclude that Picard’s method is better than Taylor’s series method
in real-life practice.
$$y' = \frac{dy}{dx} = f(x, y)\ ;\quad y(x_0) = y_0 \tag{2.7.1}$$
We know that if the function $f$ is continuous in the open interval $a < x < b$
containing $x = x_0$, there exists a unique solution [11] of equation (2.7.1). The
solution is valid throughout the interval $a < x < b$. We wish to determine the
approximate values $y_r$ of the exact solution $y = y(x)$ in the given interval at the
values $x = x_r = x_0 + rh$ ; $r = 1, 2, 3, \ldots$
Figure-(2.2)
Now we will derive a tangent-line equation for (2.7.1). From the above figure,

$$\frac{\Delta y}{\Delta x} \approx \tan\theta$$

$$or,\quad \Delta y \approx \Delta x\,(\tan\theta)$$

$$or,\quad y_1 - y_0 \approx h\left(\frac{dy}{dx}\right)_0$$

$$or,\quad y_1 \approx y_0 + h\left(\frac{dy}{dx}\right)_0$$

Similarly,

$$y_2 \approx y_1 + h f(x_1, y_1)\ \text{at}\ x = x_2$$

$$y_3 \approx y_2 + h f(x_2, y_2)\ \text{at}\ x = x_3,\ \text{and so on.}$$
$$y_{n+1} = y_n + h\,y_n' + \frac{h^2}{2!}\,y''(k)\ ;\quad x_n \le k \le x_{n+1} \tag{2.7.6}$$

The local truncation error is therefore

$$T_{n+1} = y_{n+1} - y_n - h\,y_n' = \frac{h^2}{2}\,y''(k) \tag{2.7.7}$$

If $|y''(x)| \le M$ on the interval, then

$$|T_{n+1}| < \frac{M h^2}{2} \tag{2.7.8}$$
Here the right-hand side is an upper bound of the truncation error. The absolute
value of 𝑇𝑛+1 is taken for the magnitude of the error only.
Suppose a bob of mass 𝑚 is suspended from a fixed point with a thin, light and
inextensible string of length 𝑙. When the bob is shifted from its equilibrium position
and released, it will execute a simple harmonic motion [20,21]. The motion is described
by the equation
$$m l^2 \frac{d^2\theta}{dt^2} = -m g l \sin\theta \tag{2.8.1}$$
where $\theta$ is the angle between the string and the vertical, $\frac{d^2\theta}{dt^2}$ is the angular
acceleration of the bob, and $g$ is the constant acceleration due to gravity.
$$\frac{d^2\theta}{dt^2} + \frac{g}{l}\sin\theta = 0 \tag{2.8.2}$$
Figure-(2.3)
If the oscillation is taken to be very small, then (2.8.2) can be reduced by approximating
$\sin\theta \approx \theta$ as $\theta \to 0$. This approximation reduces equation (2.8.2) to a form with an
analytical solution, given below:

$$\frac{d^2\theta}{dt^2} + \frac{g}{l}\theta = 0 \tag{2.8.3}$$
If, however, $\theta$ is not small, then (2.8.3) cannot be used. Now, multiplying both
sides of (2.8.2) by $2\left(\frac{d\theta}{dt}\right)$, we get

$$2\left(\frac{d\theta}{dt}\right)\frac{d^2\theta}{dt^2} + 2\left(\frac{d\theta}{dt}\right)\frac{g}{l}\sin\theta = 0 \tag{2.8.4}$$

Integrating with respect to $t$,

$$\left(\frac{d\theta}{dt}\right)^{2} - \frac{2g}{l}\cos\theta = c\ ;\quad c \in \mathbf{R} \tag{2.8.5}$$

If $\alpha$ is the amplitude of the oscillation, then $\frac{d\theta}{dt} = 0$ when $\theta = \alpha$, giving
$c = -\frac{2g}{l}\cos\alpha$, so that

$$\left(\frac{d\theta}{dt}\right)^{2} - \frac{2g}{l}\cos\theta = -\frac{2g}{l}\cos\alpha$$

$$or,\quad \left(\frac{d\theta}{dt}\right)^{2} = \frac{2g}{l}(\cos\theta - \cos\alpha)$$

$$or,\quad \frac{d\theta}{dt} = \sqrt{\frac{2g}{l}(\cos\theta - \cos\alpha)} \tag{2.8.6}$$
The differential equation is of first order but not in linear form, so the analytical
method is not fruitful for it. Thus we apply a numerical method, i.e. Euler's method,
to solve (2.8.6). For $g = 980\ \text{cm s}^{-2}$, $l = 50\ \text{cm}$, $\alpha = 4°$, it becomes

$$\frac{d\theta}{dt} = \sqrt{\frac{2 \times 980}{50}\left(\cos\theta - \cos 4°\right)} = \sqrt{39.2\left(\cos\theta - \cos 4°\right)}$$

$$i.e.\quad f(t, \theta) = \frac{d\theta}{dt} = \sqrt{39.2\left(\cos\theta - \cos 4°\right)} \tag{2.8.7}$$
Euler's formula then gives

$$\theta_{n+1} = \theta_n + h f(t_n, \theta_n)$$

$$or,\quad \theta_{n+1} = \theta_n + h\sqrt{39.2\left(\cos\theta_n - \cos 4°\right)} \tag{2.8.8}$$

Putting $n = 0$,

$$\theta_1 = 0° + (0.20)\sqrt{39.2\left(\cos 0° - \cos 4°\right)} = 0.0618026630$$

$i.e.$ $\theta_1 = 0.0618026630$ and $t_1 = t_0 + h = 0.00 + 0.20 = 0.20$.
For the second approximation we proceed as follows: putting $n = 1$ in (2.8.8), we get

$$\theta_2 = \theta_1 + (0.20)\sqrt{39.2\left(\cos\theta_1 - \cos 4°\right)}$$
Now the solution of the simple pendulum problem is shown in the following
table:

Approximation No.     t       θ
00                  0.00    0°
01                  0.20    0.0618026630
02                  0.40    0.1235979460
03                  0.60    0.1853710860
04                  0.80    0.2471073220
05                  1.00    0.3087918930
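The tabulated values can be reproduced with a few lines of Python (our own sketch); following the computation above, $\theta$ is kept in degrees inside the cosine while the increment $h f(t_n, \theta_n)$ is added directly.

```python
import math

def f(t, theta):
    # f(t, theta) = sqrt(39.2 (cos theta - cos 4 deg)), theta in degrees
    return math.sqrt(39.2 * (math.cos(math.radians(theta))
                             - math.cos(math.radians(4.0))))

h, t, theta = 0.20, 0.0, 0.0
for n in range(5):
    theta = theta + h * f(t, theta)   # Euler step (2.8.8)
    t = t + h
    print(f"{t:4.2f}  {theta:.10f}")
```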
Figure-(2.4)
𝐴1 𝐵1 = ℎ𝑓(𝑡0 , 𝜃0 )
𝜃1 = 𝜃0 + ℎ𝑓(𝑡0 , 𝜃0 )
This is obtained by considering the constant slope of the line segment 𝑃𝐵1 .
𝐴2 𝐵2 = ℎ𝑓(𝑡0 + ℎ, 𝜃1 )
Generalizing this procedure, we obtain the equation (2.8.9), which gives the
ordinate $\theta_n$ of $B_n$, the approximate value of $\theta$ at $t_n = t_0 + nh$.
Disadvantages of Euler's method: In Euler's method, when $\frac{dy}{dx}$ changes rapidly over
an interval, its value at the beginning of the interval is a poor approximation to its
average value over the interval. So the calculated value of $y$ in this method carries
much more error than the exact value, and this error increases in the succeeding
intervals, so that the final value of $y$ differs on a large scale from the exact value.
Euler's method needs a small value of $h$; because of this restriction the
method is unsuitable for practical use and can be applied only for tabulating the values
of the dependent variable over a limited range. Moreover, if $h$ is not small enough, this
method is too inaccurate. In Euler's method, the actual solution curve is approximated
by a sequence of short straight lines, which sometimes deviates from the solution
curve significantly.
Due to the above considerations, we can say that the computed values of $y$ will
deviate further and further from the actual values of $y$ as long as the curvature of the
graph does not change. This encourages a modification of Euler's method
[11,18].
$$y_1^{(1)} \approx y_0 + h\left(\frac{dy}{dx}\right)_0 = y_0 + h f(x_0, y_0)$$

Here $y_1^{(1)}$ is the first approximation of $y$ at $x = x_1$.
The corresponding first approximation of the derivative is

$$\left(\frac{dy}{dx}\right)_1^{(1)} = f\left(x_1, y_1^{(1)}\right)$$

A better estimate of the increment in $y$ is then the average-slope form

$$\Delta y \approx \frac{h}{2}\left[\left(\frac{dy}{dx}\right)_0 + \left(\frac{dy}{dx}\right)_1^{(1)}\right]$$

$$or,\quad \Delta y \approx \frac{h}{2}\left[f(x_0, y_0) + f\left(x_1, y_1^{(1)}\right)\right]$$

in place of the Euler increment $h\left(\frac{dy}{dx}\right)_0 = h f(x_0, y_0)$, which gives

$$y_1^{(2)} \approx y_0 + \frac{h}{2}\left[f(x_0, y_0) + f\left(x_1, y_1^{(1)}\right)\right]$$
By substituting this improved value $y_1^{(2)}$, we get the second approximate value of
$\frac{dy}{dx} = f(x, y)$ as follows:

$$\left(\frac{dy}{dx}\right)_1^{(2)} = f\left(x_1, y_1^{(2)}\right)$$

$$y_1^{(3)} \approx y_0 + \frac{h}{2}\left[f(x_0, y_0) + f\left(x_1, y_1^{(2)}\right)\right]$$

Continuing in this way,

$$y_1^{(n)} \approx y_0 + \frac{h}{2}\left[f(x_0, y_0) + f\left(x_1, y_1^{(n-1)}\right)\right] \tag{2.9.1}$$

$$y_1^{(n+1)} \approx y_0 + \frac{h}{2}\left[f(x_0, y_0) + f\left(x_1, y_1^{(n)}\right)\right] \tag{2.9.2}$$

and, in general, for the step from $x_m$ to $x_{m+1}$,

$$y_{m+1}^{(n+1)} \approx y_m + \frac{h}{2}\left[f(x_m, y_m) + f\left(x_{m+1}, y_{m+1}^{(n)}\right)\right] \tag{2.9.3}$$
Truncation error of modified Euler’s method: First we will improve the modified
Euler’s formula for better accuracy and then find the truncation error with the help of
improved formula [11].
$$y_{n+1} = y_n + h\left(\frac{dy}{dx}\right)_n = y_n + h\,y_n' \tag{2.9.4}$$
But as soon as two consecutive values of 𝑦 are known, the first approximation
to the succeeding values of 𝑦 can be found more accurately from the formula
$$y_{n+1} = y(x_n) + h\,y'(x_n) + \frac{h^2}{2}\,y''(x_n) + \frac{h^3}{6}\,y'''(x_n) + \frac{h^4}{24}\,y^{(4)}(x_n) + \cdots \tag{2.9.6}$$

$$y_{n-1} = y(x_n) - h\,y'(x_n) + \frac{h^2}{2}\,y''(x_n) - \frac{h^3}{6}\,y'''(x_n) + \frac{h^4}{24}\,y^{(4)}(x_n) - \cdots \tag{2.9.7}$$

Subtracting,

$$y_{n+1} - y_{n-1} = 2h\,y'(x_n) + \frac{h^3}{3}\,y'''(x_n) + \frac{h^5}{60}\,y^{(5)}(x_n) + \cdots$$

$$or,\quad y_{n+1} = y_{n-1} + 2h\,y'(x_n) + \frac{h^3}{3}\,y'''(x_n) + \frac{h^5}{60}\,y^{(5)}(x_n) + \cdots \tag{2.9.8}$$
When $h$ is very small and only the first two terms on the right-hand sides
of (2.9.6) and (2.9.8) are used, the truncation errors are $\frac{h^2}{2} y_n''$ and $\frac{h^3}{3} y_n'''$
respectively; the latter is much smaller than the former. Thus, (2.9.8) gives a more
accurate value of $y_{n+1}$.
Since the increment in 𝑦 for each step is obtained from the formula
$$\Delta y = \frac{h}{2}\left(y_n' + y_{n+1}'\right) \tag{2.9.9}$$
The right-hand member of (2.9.9) has the form of the first group of terms in
Euler’s method; the principal part of the error in ∆𝑦 (by mean value theorem) is
obtained as
$$-\frac{h^2}{12}\left(y_{n+1}'' - y_n''\right) = -\frac{h^2}{12}\left[y''(x_{n+1}) - y''(x_n)\right] = -\frac{h^3}{12}\,y'''(\tau)\ ;\quad x_n < \tau < x_{n+1} \tag{2.9.10}$$
This shows that the error involves only terms in $h^3$ and higher order; it follows
that the error is of order $h^3$. Since in Euler's method the error is of order $h^2$, it is
clear that the modified Euler's method is more accurate than Euler's method.
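This order comparison can be verified empirically. The sketch below (our own test problem, $y' = -y$ with $y(0) = 1$ and exact solution $e^{-t}$) halves the step length and measures how the error at $t = 1$ shrinks: since the per-step errors of order $h^2$ and $h^3$ accumulate to global errors of order $h$ and $h^2$, the error should drop by roughly 2 for Euler's method and by roughly 4 for the modified method.

```python
import math

def euler(h):
    y = 1.0
    for _ in range(round(1.0 / h)):
        y += h * (-y)
    return y

def modified_euler(h):
    y = 1.0
    for _ in range(round(1.0 / h)):
        y_pred = y + h * (-y)                 # Euler predictor
        for _ in range(50):                   # iterate the corrector (2.9.3)
            y_pred = y + (h / 2) * (-y - y_pred)
        y = y_pred
    return y

exact = math.exp(-1.0)
r_euler = abs(euler(0.1) - exact) / abs(euler(0.05) - exact)
r_mod = abs(modified_euler(0.1) - exact) / abs(modified_euler(0.05) - exact)
print(r_euler, r_mod)   # roughly 2 and roughly 4
```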
Solve the initial value problem $\frac{dy}{dx} = x + y$ ; $y(0) = 1$ by the modified Euler's
method.
Solution: Given that $y' = \frac{dy}{dx} = x + y = f(x, y)$ with $y_0 = 1$ at $x_0 = 0$.
$$y_1^{(3)} = y_0 + \frac{h}{2}\left[f(x_0, y_0) + f\left(x_1, y_1^{(2)}\right)\right] = 1 + \frac{0.05}{2}\left[1 + 1.1025\right] = 1.05256$$
So, $f\left(x_1, y_1^{(3)}\right) = x_1 + y_1^{(3)} = 0.05 + 1.05256 = 1.10256$.
Since $y_1^{(3)}$ & $y_1^{(4)}$ are the same, we get no further change in $y$ by continuing the
iteration process; therefore we take $y_1 = 1.05256$.

So, $f\left(x_2, y_2^{(4)}\right) = x_2 + y_2^{(4)} = 0.10 + 1.11038 = 1.21038$.

Since $y_2^{(3)}$ & $y_2^{(4)}$ are the same, we get no further change in $y$ by continuing the
iteration process; therefore we take $y_2 = 1.11038$.
x      y        f(x, y)
0.00   1.00000  1.00000
0.05   1.05256  1.10256
0.10   1.11038  1.21038
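The table can be reproduced by iterating the corrector (2.9.3) until two successive values agree; a minimal Python sketch of this example follows (our own illustration). Note that iterating to full convergence gives 1.11039 at $x = 0.10$, one unit in the fifth decimal above the tabulated 1.11038, because the hand computation stopped as soon as two iterates agreed.

```python
f = lambda x, y: x + y        # right-hand side of the example

h, x, y = 0.05, 0.0, 1.0
table = [(x, y)]
for _ in range(2):            # two steps: x = 0.05 and x = 0.10
    y_next = y + h * f(x, y)              # first (Euler) approximation
    while True:                           # corrector iteration (2.9.3)
        y_new = y + (h / 2) * (f(x, y) + f(x + h, y_next))
        if abs(y_new - y_next) < 1e-6:
            break
        y_next = y_new
    x, y = x + h, y_new
    table.append((x, y))

for xi, yi in table:
    print(f"{xi:4.2f}  {yi:.5f}")
```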
Exact result: We have $\frac{dy}{dx} = x + y$

$$or,\quad \frac{dy}{dx} - y = x$$

$$IF = e^{\int -dx} = e^{-x}$$

$$e^{-x}\frac{dy}{dx} - y e^{-x} = x e^{-x}$$

$$or,\quad \frac{d}{dx}\left(y e^{-x}\right) = x e^{-x}$$

$$or,\quad y e^{-x} = \int x e^{-x}\,dx + c\ ;\quad c \in \mathbf{R}$$

$$or,\quad y e^{-x} = c - e^{-x}(x + 1)$$

$$or,\quad y = c e^{x} - x - 1$$

Using $y(0) = 1$: $1 = c e^{0} - 0 - 1 \Rightarrow c = 2$, so

$$y = 2e^{x} - x - 1$$
Comparing with the table obtained from the approximation, this shows that the
method can be improved by taking a smaller value of $h$, since the difference between
the approximate values and the exact solution values increases step by step.
3.1 INTRODUCTION
Now we describe the numerical methods mentioned above in detail with
applications, and then compare them.
yᵢ₊₁ = yᵢ + (h/2)[f(xᵢ, yᵢ) + f(xᵢ₊₁, yᵢ₊₁)]    (3.2.2)
The value of yᵢ₊₁ is first estimated by (3.2.1), and then (3.2.2) gives
a better approximation of yᵢ₊₁. This value of yᵢ₊₁ is again substituted in (3.2.2) to find
a still better approximation of yᵢ₊₁. This procedure is repeated until two consecutive
iterated values of yᵢ₊₁ agree. This technique of refining an initially crude estimate of
yᵢ₊₁ by means of a more accurate formula is known as the predictor-corrector method.
The equation (3.2.1) is taken as the predictor, while (3.2.2) serves as the corrector of yᵢ₊₁.
Chapter-3: Solution of Differential Equations of First Order and First Degree by
Predictor-Corrector Method and Runge-Kutta Method
+⋯⋯⋯ (3.3.1)
Here u = (x − x₀)/h
or, x = x₀ + uh ⇒ dx = h du
= y₀ + h[(4 − 8 + 20/3 − 8/3)y₀′ + (8 − 40/3 + 8)Ey₀′ + (20/3 − 8)E²y₀′
+ (8/3)E³y₀′ + (28/90)∆⁴y₀′]
= y₀ + h[(0)y₀′ + (8/3)Ey₀′ − (4/3)E²y₀′ + (8/3)E³y₀′ + (28/90)∆⁴y₀′]
= y₀ + (4h/3)[2y₁′ − y₂′ + 2y₃′] + (28/90)h∆⁴y₀′
yᵖₙ₊₁ = yₙ₋₃ + (4h/3)[2y′ₙ₋₂ − y′ₙ₋₁ + 2y′ₙ]    (3.3.6)
yᶜₙ₊₁ = yₙ₋₁ + (h/3)[y′ₙ₋₁ + 4y′ₙ + y′ₙ₊₁]    (3.3.7)
Here the indices p & c indicate the predicted and corrected values of yₙ₊₁
respectively at x = xₙ₊₁.
[yₙ₊₁]Exact = yᵖₙ₊₁ + (28/90)h∆⁴y′    (3.3.8)
[yₙ₊₁]Exact = yᶜₙ₊₁ − (1/90)h∆⁴y′    (3.3.9)
Subtracting,
yᵖₙ₊₁ − yᶜₙ₊₁ = −(28/90)h∆⁴y′ − (1/90)h∆⁴y′
= −(29/90)h∆⁴y′ = 29(−(1/90)h∆⁴y′) = 29E_T
Here E_T denotes the principal part of the error in equation (3.3.7). From this, we
get the following
29E_T = yᵖₙ₊₁ − yᶜₙ₊₁
or, E_T = (1/29)(yᵖₙ₊₁ − yᶜₙ₊₁)    (3.3.10)
Thus we can conclude that the error in (3.3.9) is (1/29)th of the difference
between the predicted and corrected values of y at x = xₙ₊₁.
Solve the differential equation dy/dx = 2eˣ − y at x = 0.4 with the initial values
y(0.0) = 2.000, y(0.1) = 2.010, y(0.2) = 2.040, y(0.3) = 2.090
Solution: Given that y′ = dy/dx = 2eˣ − y = f(x, y)
Taking the step length h = 0.1, we get from the initial conditions
f₀ = 2e^0.0 − 2.000 = 0.000000, f₁ = 2e^0.1 − 2.010 = 0.200342,
f₂ = 2e^0.2 − 2.040 = 0.402806, f₃ = 2e^0.3 − 2.090 = 0.609718
By the predictor (3.3.6),
y₄ = y₀ + (4h/3)[2y₁′ − y₂′ + 2y₃′]
= y₀ + (4h/3)[2f₁ − f₂ + 2f₃]
= 2.000000 + (4(0.1)/3)[2(0.200342) − 0.402806 + 2(0.609718)]
= 2.162309
Then f₄ = 2e^x₄ − y₄ = 2e^0.4 − 2.162309 = 0.821340
By the corrector (3.3.7),
y₄ = y₂ + (h/3)[y₂′ + 4y₃′ + y₄′]
= y₂ + (h/3)[f₂ + 4f₃ + f₄]
First iteration:
y₄⁽¹⁾ = y₂ + (h/3)[f₂ + 4f₃ + f₄]
= 2.040 + (0.1/3)[0.402806 + 4(0.609718) + 0.821340]
= 2.162101
Then f₄ = 2e^x₄ − y₄ = 2e^0.4 − 2.162101 = 0.821548
Second iteration:
y₄⁽²⁾ = y₂ + (h/3)[f₂ + 4f₃ + f₄]
= 2.040 + (0.1/3)[0.402806 + 4(0.609718) + 0.821548]
= 2.162108
Then f₄ = 2e^x₄ − y₄ = 2e^0.4 − 2.162108 = 0.821541
Third iteration:
y₄⁽³⁾ = y₂ + (h/3)[f₂ + 4f₃ + f₄]
= 2.040 + (0.1/3)[0.402806 + 4(0.609718) + 0.821541]
= 2.162107
Then f₄ = 2e^x₄ − y₄ = 2e^0.4 − 2.162107 = 0.821542
Fourth iteration:
y₄⁽⁴⁾ = y₂ + (h/3)[f₂ + 4f₃ + f₄]
= 2.040 + (0.1/3)[0.402806 + 4(0.609718) + 0.821542]
= 2.162107
Then f₄ = 2e^x₄ − y₄ = 2e^0.4 − 2.162107 = 0.821542
𝑦4 = 2.162107 at 𝑥4 = 0.4
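The Milne step worked out above can be sketched in code (our own helper names; a sketch, not the thesis's own program):

```python
from math import exp

def milne_step(f, xs, ys, h, tol=1e-6):
    """One Milne step: predictor (3.3.6) from y_{n-3}, then the Simpson-type
    corrector (3.3.7) iterated until two consecutive values agree."""
    fs = [f(x, y) for x, y in zip(xs, ys)]
    yp = ys[-4] + (4 * h / 3) * (2 * fs[-3] - fs[-2] + 2 * fs[-1])  # predictor
    x_next = xs[-1] + h
    yc = yp
    while True:                                       # corrector iteration
        y_new = ys[-2] + (h / 3) * (fs[-2] + 4 * fs[-1] + f(x_next, yc))
        if abs(y_new - yc) < tol:
            return x_next, y_new
        yc = y_new

# dy/dx = 2e^x - y with the four given starting values
f = lambda x, y: 2 * exp(x) - y
xs = [0.0, 0.1, 0.2, 0.3]
ys = [2.000, 2.010, 2.040, 2.090]
x4, y4 = milne_step(f, xs, ys, 0.1)   # y4 converges to about 2.162107
```

The converged value agrees with the hand iteration above to about five decimal places.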
The above equation contains seven unknowns. Suppose it holds for polynomials
up to degree four. Hence we take 𝑦(𝑥) = 𝑥 𝑛 ; 𝑛 = 0,1,2,3,4. Let the space between the
consecutive values of 𝑥 be unity. i.e. taking ℎ = 1.
1 = 𝐴0 + 𝐴1 + 𝐴2
1 = −𝐴1 − 2𝐴2 + 𝐵0 + 𝐵1 + 𝐵2 + 𝐵3
1 = 𝐴1 + 4𝐴2 − 2𝐵1 − 4𝐵2 − 6𝐵3
1 = −𝐴1 − 8𝐴2 + 3𝐵1 + 12𝐵2 + 27𝐵3
1 = 𝐴1 + 16𝐴2 − 4𝐵1 − 32𝐵2 − 108𝐵3 (3.5.2)
A₀ = 1 − A₁ − A₂
B₀ = (1/24)[55 + 9A₁ + 8A₂]
B₁ = (1/24)[−59 + 19A₁ + 32A₂]
B₂ = (1/24)[37 − 5A₁ + 8A₂]
B₃ = (1/24)[−9 + A₁]    (3.5.3)
Since A₁ & A₂ are arbitrary, we choose A₁ = 0 = A₂. Then we obtain from (3.5.3) the
following
A₀ = 1, B₀ = 55/24, B₁ = −59/24, B₂ = 37/24, B₃ = −9/24
yᵖₙ₊₁ = (1)yₙ + h[(55/24)y′ₙ + (−59/24)y′ₙ₋₁ + (37/24)y′ₙ₋₂ + (−9/24)y′ₙ₋₃]
or, yᵖₙ₊₁ = yₙ + (h/24)[55y′ₙ − 59y′ₙ₋₁ + 37y′ₙ₋₂ − 9y′ₙ₋₃]    (3.5.4)
The above equation contains seven unknowns. Suppose it holds for polynomials
up to degree four. Hence we take 𝑦(𝑥) = 𝑥 𝑛 ; 𝑛 = 0,1,2,3,4. Let the space between the
consecutive values of 𝑥 be unity. i.e. taking ℎ = 1.
1 = a₀ + a₁ + a₂
9 = a₁ + 24b₋₁
a₀ = 1 − a₁ − a₂
b₋₁ = (1/24)[9 − a₁]
b₀ = (1/24)[19 + 13a₁ + 8a₂]
b₁ = (1/24)[−5 + 13a₁ + 32a₂]
b₂ = (1/24)[1 − a₁ + 8a₂]    (3.5.7)
Since a₁ & a₂ are arbitrary, we choose a₁ = 0 = a₂. Then we obtain from (3.5.7) the
following
a₀ = 1, b₋₁ = 9/24, b₀ = 19/24, b₁ = −5/24, b₂ = 1/24
yᶜₙ₊₁ = (1)yₙ + h[(9/24)y′ₙ₊₁ + (19/24)y′ₙ + (−5/24)y′ₙ₋₁ + (1/24)y′ₙ₋₂]
or, yᶜₙ₊₁ = yₙ + (h/24)[9y′ₙ₊₁ + 19y′ₙ − 5y′ₙ₋₁ + y′ₙ₋₂]    (3.5.8)
We can find more predictor & corrector formulae by using suitable new values of
A₁ & A₂ and a₁ & a₂ and solving the systems of equations (3.5.3) and (3.5.7).
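The choice A₁ = 0 = A₂ can be checked directly: the system (3.5.2) then reduces to four linear equations for B₀, …, B₃, which can be solved in exact rational arithmetic. A small sketch (the solver is our own helper, not part of the thesis):

```python
from fractions import Fraction as F

def solve(A, b):
    """Gaussian elimination with exact rational arithmetic."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)   # find a pivot row
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]                 # normalise pivot row
        for r in range(n):
            if r != c and M[r][c] != 0:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

# System (3.5.2) with A1 = A2 = 0: unknowns B0, B1, B2, B3
A = [[1, 1, 1, 1],          # 1 = B0 + B1 + B2 + B3
     [0, -2, -4, -6],       # 1 = -2B1 - 4B2 - 6B3
     [0, 3, 12, 27],        # 1 = 3B1 + 12B2 + 27B3
     [0, -4, -32, -108]]    # 1 = -4B1 - 32B2 - 108B3
B = solve([[F(x) for x in row] for row in A], [F(1)] * 4)
# B recovers the weights 55/24, -59/24, 37/24, -9/24 of (3.5.4)
```

The same solver applied to the corrector system yields 9/24, 19/24, −5/24, 1/24, as in (3.5.8).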
yₖ = y₀ + khy₀′ + (k²h²/2)y₀″ + (k³h³/6)y₀‴ + (k⁴h⁴/24)y₀⁗ + (k⁵h⁵/120)y₀⁽ᵛ⁾ + ⋯    (3.5.9)
yₖ′ = y₀′ + khy₀″ + (k²h²/2)y₀‴ + (k³h³/6)y₀⁗ + (k⁴h⁴/24)y₀⁽ᵛ⁾ + (k⁵h⁵/120)y₀⁽ᵛⁱ⁾ + ⋯    (3.5.10)
Now, substituting all these values of yₙ₊₁, yₙ, y′ₙ, y′ₙ₋₁, y′ₙ₋₂ & y′ₙ₋₃ in (3.5.4),
we get
(yₙ₊₁ − yₙ) − (h/24)[55y′ₙ − 59y′ₙ₋₁ + 37y′ₙ₋₂ − 9y′ₙ₋₃] = (251/720)h⁵y₀⁽ᵛ⁾ + ⋯    (3.5.11)
Here the truncation error is (251/720)h⁵y₀⁽ᵛ⁾ + ⋯. Using the first term of the above
error as an estimate, the local truncation error of the Adams-Moulton predictor
formula is (251/720)h⁵y₀⁽ᵛ⁾.
Again, substituting all these values of yₙ₊₁, yₙ, y′ₙ₊₁, y′ₙ, y′ₙ₋₁ & y′ₙ₋₂ in (3.5.8),
we get
(yₙ₊₁ − yₙ) − (h/24)[9y′ₙ₊₁ + 19y′ₙ − 5y′ₙ₋₁ + y′ₙ₋₂] = −(19/720)h⁵y₀⁽ᵛ⁾ + ⋯    (3.5.11)
Here the truncation error is −(19/720)h⁵y₀⁽ᵛ⁾ + ⋯. Using the first term of the above
error as an estimate, the local truncation error of the Adams-Moulton corrector
formula is −(19/720)h⁵y₀⁽ᵛ⁾.
Since (251/720)h⁵y⁽ᵛ⁾ & −(19/720)h⁵y⁽ᵛ⁾ are taken as the principal parts of the errors, we
may write
[yₙ₊₁]Exact = yᵖₙ₊₁ + (251/720)h⁵y⁽ᵛ⁾    (3.5.12)
[yₙ₊₁]Exact = yᶜₙ₊₁ − (19/720)h⁵y⁽ᵛ⁾    (3.5.13)
Subtracting,
yᵖₙ₊₁ − yᶜₙ₊₁ = −(251/720)h⁵y⁽ᵛ⁾ − (19/720)h⁵y⁽ᵛ⁾
= −(270/720)h⁵y⁽ᵛ⁾ = (270/19)(−(19/720)h⁵y⁽ᵛ⁾) = (270/19)E_T
Here E_T denotes the principal part of the error in equation (3.5.13). From this,
we get the following
(270/19)E_T = yᵖₙ₊₁ − yᶜₙ₊₁
or, E_T = (19/270)(yᵖₙ₊₁ − yᶜₙ₊₁)    (3.5.14)
Thus we can conclude that the error in (3.5.13) is (19/270)th of the difference
between the predicted and corrected values of y at x = xₙ₊₁.
Solve the differential equation dy/dx = 1 + xy² at x = 0.4 with the initial values
y(0.0) = 1.000, y(0.1) = 1.105, y(0.2) = 1.223, y(0.3) = 1.355
Solution: Given that y′ = dy/dx = 1 + xy² = f(x, y)
Taking the step length h = 0.1, we get from the initial conditions
𝑥0 = 0.0, 𝑦0 = 1.000 ⇒ 𝑓0 = 1 + 𝑥0 𝑦02 = 1 + (0.0)(1.000)2 = 1.000000
𝑥1 = 0.1, 𝑦1 = 1.105 ⇒ 𝑓1 = 1 + 𝑥1 𝑦12 = 1 + (0.1)(1.105)2 = 1.122103
𝑥2 = 0.2, 𝑦2 = 1.223 ⇒ 𝑓2 = 1 + 𝑥2 𝑦22 = 1 + (0.2)(1.223)2 = 1.299146
𝑥3 = 0.3, 𝑦3 = 1.355 ⇒ 𝑓3 = 1 + 𝑥3 𝑦32 = 1 + (0.3)(1.355)2 = 1.550808
Finally, 𝑥4 = 𝑥3 + ℎ = 0.3 + 0.1 = 0.4
By the predictor (3.5.4),
y₄ = y₃ + (h/24)[55y₃′ − 59y₂′ + 37y₁′ − 9y₀′]
= y₃ + (h/24)[55f₃ − 59f₂ + 37f₁ − 9f₀]
= 1.355 + (0.1/24)[55(1.550808) − 59(1.299146) + 37(1.122103)
− 9(1.000000)]
= 1.526511
Then f₄ = 1 + x₄y₄² = 1 + (0.4)(1.526511)² = 1.932094
By the corrector (3.5.8),
y₄ = y₃ + (h/24)[9y₄′ + 19y₃′ − 5y₂′ + y₁′]
= y₃ + (h/24)[9f₄ + 19f₃ − 5f₂ + f₁]
First iteration:
y₄⁽¹⁾ = y₃ + (h/24)[9f₄ + 19f₃ − 5f₂ + f₁]
= 1.355 + (0.1/24)[9(1.932094) + 19(1.550808) − 5(1.299146) + 1.122103]
= 1.527836
Then f₄ = 1 + x₄y₄² = 1 + (0.4)(1.527836)² = 1.933713
Second iteration:
y₄⁽²⁾ = y₃ + (h/24)[9f₄ + 19f₃ − 5f₂ + f₁]
= 1.355 + (0.1/24)[9(1.933713) + 19(1.550808) − 5(1.299146) + 1.122103]
= 1.527896
Then f₄ = 1 + x₄y₄² = 1 + (0.4)(1.527896)² = 1.933786
Third iteration:
y₄⁽³⁾ = y₃ + (h/24)[9f₄ + 19f₃ − 5f₂ + f₁]
= 1.355 + (0.1/24)[9(1.933786) + 19(1.550808) − 5(1.299146) + 1.122103]
= 1.527899
Then f₄ = 1 + x₄y₄² = 1 + (0.4)(1.527899)² = 1.933790
Fourth iteration:
y₄⁽⁴⁾ = y₃ + (h/24)[9f₄ + 19f₃ − 5f₂ + f₁]
= 1.355 + (0.1/24)[9(1.933790) + 19(1.550808) − 5(1.299146) + 1.122103]
= 1.527899
Then f₄ = 1 + x₄y₄² = 1 + (0.4)(1.527899)² = 1.933790
𝑦4 = 1.527899 at 𝑥4 = 0.4
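One Adams-Bashforth-Moulton step, as carried out above, can be sketched in code (an illustrative sketch with our own function names):

```python
def abm_step(f, xs, ys, h, tol=1e-6):
    """One step of the Adams predictor (3.5.4) followed by the
    corrector (3.5.8), iterated to convergence."""
    fs = [f(x, y) for x, y in zip(xs[-4:], ys[-4:])]
    yp = ys[-1] + (h / 24) * (55 * fs[-1] - 59 * fs[-2]
                              + 37 * fs[-3] - 9 * fs[-4])   # predictor
    x_next = xs[-1] + h
    yc = yp
    while True:                                             # corrector iteration
        y_new = ys[-1] + (h / 24) * (9 * f(x_next, yc) + 19 * fs[-1]
                                     - 5 * fs[-2] + fs[-3])
        if abs(y_new - yc) < tol:
            return x_next, y_new
        yc = y_new

f = lambda x, y: 1 + x * y * y
xs = [0.0, 0.1, 0.2, 0.3]
ys = [1.000, 1.105, 1.223, 1.355]
x4, y4 = abm_step(f, xs, ys, 0.1)   # y4 converges to about 1.527899
```

The converged corrector value reproduces the hand computation above.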
Disadvantages: For each step forward in the equation using the shortest step-length,
up to (4n + 4) different coefficients would have to be calculated, and the time taken
for this might be a significant proportion of the total computing time. Also, up to 4n
quantities representing the previous three step-lengths and the current step-length in each
equation must be stored, and this, together with a longer program, would represent a
considerable increase in the required storage space compared with other systems. Moreover,
being a multi-step system, it takes a huge amount of time, which is highly disliked in the
modern fast world.
There are many different schemes for solving ordinary differential equations
numerically, and we have already introduced some of them. Many of the more advanced
techniques are complex to derive and analyze. One of the standard workhorses
for solving ordinary differential equations is the Runge-Kutta method.
It is to be noted that numerical methods were subsequently improved to a considerable
degree. This development led to the method known as the Runge-Kutta method,
which is particularly suitable in cases when the computation of higher-order
derivatives is complicated.
y′ = dy/dx = f(x, y) ; y(x₀) = y₀    (3.8.1)
y(x + h) = y(x) + hy′(x) + (h²/2!)y″(x) + (h³/3!)y‴(x) + (h⁴/4!)y⁗(x) + ⋯
or, y(x + h) − y(x) = hy′(x) + (h²/2!)y″(x) + (h³/3!)y‴(x) + (h⁴/4!)y⁗(x) + ⋯    (3.8.2)
𝑦 ′ = 𝑓(𝑥, 𝑦) = 𝑓
𝑦 ′′ = 𝑓 ′ (𝑥, 𝑦) = 𝑓𝑥 + 𝑓𝑓𝑦
𝑦 ′′′ = 𝑓 ′′ (𝑥, 𝑦) = 𝑓𝑥𝑥 + 2𝑓𝑓𝑥𝑦 + 𝑓 2 𝑓𝑦𝑦 + 𝑓𝑥 𝑓𝑦 + 𝑓𝑓𝑦2
𝐹1 = 𝑓𝑥 + 𝑓𝑓𝑦
𝐹2 = 𝑓𝑥𝑥 + 2𝑓𝑓𝑥𝑦 + 𝑓 2 𝑓𝑦𝑦
𝐹3 = 𝑓𝑥𝑥𝑥 + 3𝑓𝑓𝑥𝑥𝑦 + 𝑓 2 𝑓𝑥𝑦𝑦 + 𝑓 3 𝑓𝑦𝑦𝑦
Then we get as
𝑦′ = 𝑓
𝑦 ′′ = 𝐹1
𝑦 ′′′ = 𝐹2 + 𝑓𝑦 𝐹1
y⁗ = F₃ + fyF₂ + 3F₁(fxy + f fyy) + F₁fy²
Substituting these in (3.8.2), we get
y(x + h) − y(x) = hf + (h²/2!)F₁ + (h³/3!)(F₂ + fyF₁)
+ (h⁴/4!){F₃ + fyF₂ + 3F₁(fxy + f fyy) + F₁fy²} + ⋯
𝑘1 = ℎ𝑓(𝑥, 𝑦) = ℎ𝑓
𝑘2 = ℎ𝑓(𝑥 + 𝑚ℎ, 𝑦 + 𝑚𝑘1 )
𝑘3 = ℎ𝑓(𝑥 + 𝑛ℎ, 𝑦 + 𝑛𝑘2 )
𝑘4 = ℎ𝑓(𝑥 + 𝑝ℎ, 𝑦 + 𝑝𝑘3 ) (3.8.4)
Now, we may use Taylor's series expansion for two variables as follows
k₁ = hf
k₂ = h[f + mhF₁ + (1/2)m²h²F₂ + (1/6)m³h³F₃ + ⋯]
k₃ = h[f + nhF₁ + (1/2)h²(n²F₂ + 2mn fyF₁) + (1/6)h³{n³F₃ + 3m²n fyF₂ + ⋯}]
a + b + c + d = 1
bm + cn + dp = 1/2
bm² + cn² + dp² = 1/3
bm³ + cn³ + dp³ = 1/4
cmn + dnp = 1/6
cm²n + dn²p = 1/12
cmn² + dnp² = 1/8
dmnp = 1/24
Now, putting these values in (3.8.4) and (3.8.5), we get the fourth-order Runge-
Kutta formulae as follows
k₁ = hf(x, y) = hf
k₂ = hf(x + h/2, y + k₁/2)
k₃ = hf(x + h/2, y + k₂/2)
k₄ = hf(x + h, y + k₃)
∆y = y(x + h) − y(x) = (1/6)(k₁ + 2k₂ + 2k₃ + k₄)
When the initial values are (x₀, y₀), the first increment in y is computed from
the formulae given below
k₁ = hf(x₀, y₀)
k₂ = hf(x₀ + h/2, y₀ + k₁/2)
k₃ = hf(x₀ + h/2, y₀ + k₂/2)
k₄ = hf(x₀ + h, y₀ + k₃)
∆y = (1/6)(k₁ + 2k₂ + 2k₃ + k₄)
or, y(x₀ + h) − y(x₀) = (1/6)(k₁ + 2k₂ + 2k₃ + k₄)
or, y(x₀ + h) = y(x₀) + (1/6)(k₁ + 2k₂ + 2k₃ + k₄)
or, y₁ = y₀ + (1/6)(k₁ + 2k₂ + 2k₃ + k₄)
Thus, the general fourth-order Runge-Kutta formulae for the nth interval are
given by the following
k₁ = hf(xₙ, yₙ)
k₂ = hf(xₙ + h/2, yₙ + k₁/2)
k₃ = hf(xₙ + h/2, yₙ + k₂/2)
k₄ = hf(xₙ + h, yₙ + k₃)
yₙ₊₁ = yₙ + (1/6)(k₁ + 2k₂ + 2k₃ + k₄)    (3.8.7)
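The formulae (3.8.7) translate directly into a short routine (a sketch with our own function name), tried here on the earlier problem dy/dx = x + y, y(0) = 1, whose exact solution y = 2eˣ − x − 1 was found above:

```python
from math import exp

def rk4(f, x0, y0, h, steps):
    """Standard fourth-order Runge-Kutta method (3.8.7)."""
    x, y = x0, y0
    for _ in range(steps):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

y_rk4 = rk4(lambda x, y: x + y, 0.0, 1.0, 0.1, 10)   # y(1.0)
y_exact = 2 * exp(1.0) - 1.0 - 1
```

With h = 0.1 the two values agree to about six significant figures, in keeping with the O(h⁴) accuracy of the method.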
The above formulae are called the standard fourth-order Runge-Kutta formulae.
In a similar manner, we can derive the second- and third-order Runge-Kutta
formulae given as follows
k₁ = hf(xₙ, yₙ)
k₂ = hf(xₙ + h, yₙ + k₁)
k₁ = hf(xₙ, yₙ)
k₂ = hf(xₙ + h/2, yₙ + k₁/2)
k₃ = hf(xₙ + h, yₙ − k₁ + 2k₂)
k₁ = hf(xₙ, yₙ)
k₂ = hf(xₙ + h, yₙ + k₁)    (3.8.8)
y(xₙ₊₁) = y(xₙ + h) = y(xₙ) + hy′(xₙ) + (h²/2)y″(xₙ) + (h³/6)y‴(xₙ)
+ (h⁴/24)y⁗(xₙ) + ⋯
or, y(xₙ₊₁) = yₙ + hy′ₙ + (h²/2)y″ₙ + (h³/6)y‴ₙ + (h⁴/24)y⁗ₙ + ⋯
= yₙ + hf + (h²/2)(fx + f fy) + (h³/6)(fxx + 2f fxy + f²fyy + fxfy + f fy²) + ⋯
k₁ = hf
k₂ = h[f + h(fx + f fy) + (h²/2)(fxx + 2f fxy + f²fyy) + ⋯]
= hf + h²(fx + f fy) + (h³/2)(fxx + 2f fxy + f²fyy) + ⋯
yₙ₊₁ = yₙ + (1/2)[hf + hf + h²(fx + f fy) + (h³/2)(fxx + 2f fxy + f²fyy) + ⋯]
or, yₙ₊₁ = yₙ + hf + (h²/2)(fx + f fy) + (h³/4)(fxx + 2f fxy + f²fyy) + ⋯    (3.8.11)
E_T = [yₙ + hf + (h²/2)(fx + f fy) + (h³/6)(fxx + 2f fxy + f²fyy + fxfy + f fy²) + ⋯]
− [yₙ + hf + (h²/2)(fx + f fy) + (h³/4)(fxx + 2f fxy + f²fyy) + ⋯]
= (h³/6 − h³/4)(fxx + 2f fxy + f²fyy) + (h³/6)(fxfy + f fy²) + ⋯
= −(h³/12)(fxx + 2f fxy + f²fyy) + (h³/6)(fxfy + f fy²) + ⋯
= −(h³/12)(fxx + 2f fxy + f²fyy − 2fxfy − 2f fy²) + ⋯    (3.8.12)
12
dN/dt = −λN    (3.9.1)
Here N is the number of nuclei and λ is the decay constant. If the half-life of
the radioactive nuclei is T₁/₂, then λ can be expressed as
λ = 0.693/T₁/₂    (3.9.2)
Solution: Given that N′ = dN/dt = −λN = f(t, N)    (3.9.3)
Thus (3.9.2) gives λ = 0.693/T₁/₂ = (0.693/4) d⁻¹ = 0.17325 d⁻¹
Now, taking the step-length h = 0.25 day and according to the fourth-order
Runge-Kutta method, we can write
Nₙ₊₁ = Nₙ + (1/6)(k₁ + 2k₂ + 2k₃ + k₄)    (3.9.5)
with k₁ = hf(tₙ, Nₙ)
k₂ = hf(tₙ + h/2, Nₙ + k₁/2)
k₃ = hf(tₙ + h/2, Nₙ + k₂/2)
k₄ = hf(tₙ + h, Nₙ + k₃)    (3.9.6)
To find N₁ we proceed as follows: For the first interval, putting n = 0 in (3.9.5) &
(3.9.6), from (3.9.4) we get
k₁ = hf(t₀, N₀)
= (0.25)f(0, 10⁴)
= (0.25)(−0.17325)(10⁴)
= −433.125
k₂ = hf(t₀ + h/2, N₀ + k₁/2)
= (0.25)f(0 + 0.25/2, 10⁴ + (−433.125)/2)
= (0.25)f(0.125, 9783.438)
= (0.25)(−0.17325)(9783.438)
= −423.745
k₃ = hf(t₀ + h/2, N₀ + k₂/2)
= (0.25)f(0 + 0.25/2, 10⁴ + (−423.745)/2)
= (0.25)f(0.125, 9788.127)
= (0.25)(−0.17325)(9788.127)
= −423.948
k₄ = hf(t₀ + h, N₀ + k₃)
= (0.25)f(0 + 0.25, 10⁴ + (−423.948))
= (0.25)f(0.25, 9576.052)
= (0.25)(−0.17325)(9576.052)
= −414.763
So that N₁ = N₀ + (1/6)(k₁ + 2k₂ + 2k₃ + k₄)
= 10⁴ + (1/6)[(−433.125) + 2(−423.745) + 2(−423.948) + (−414.763)]
= 9576.121
i.e. N₁ = 9576.121 ; t₁ = t₀ + h = 0.00 + 0.25 = 0.25
To find 𝑵𝟐 we proceed as follows: For the second interval, putting 𝑛 = 1 in (3.9.5) &
(3.9.6), from (3.9.4) we get as follows
k₁ = hf(t₁, N₁)
= (0.25)f(0.25, 9576.121)
= (0.25)(−0.17325)(9576.121)
= −414.766
k₂ = hf(t₁ + h/2, N₁ + k₁/2)
= (0.25)f(0.25 + 0.25/2, 9576.121 + (−414.766)/2)
= (0.25)f(0.375, 9369.113)
= (0.25)(−0.17325)(9369.113)
= −405.800
k₃ = hf(t₁ + h/2, N₁ + k₂/2)
= (0.25)f(0.25 + 0.25/2, 9576.121 + (−405.800)/2)
= (0.25)f(0.375, 9373.221)
= (0.25)(−0.17325)(9373.221)
= −405.978
k₄ = hf(t₁ + h, N₁ + k₃)
= (0.25)f(0.25 + 0.25, 9576.121 + (−405.978))
= (0.25)f(0.50, 9170.143)
= (0.25)(−0.17325)(9170.143)
= −397.182
So that N₂ = N₁ + (1/6)(k₁ + 2k₂ + 2k₃ + k₄)
= 9576.121 + (1/6)[(−414.766) + 2(−405.800) + 2(−405.978) + (−397.182)]
= 9170.204
i.e. N₂ = 9170.204 ; t₂ = t₁ + h = 0.25 + 0.25 = 0.50
To find 𝑵𝟑 we proceed as follows: For the third interval, putting 𝑛 = 2 in (3.9.5) &
(3.9.6), from (3.9.4) we get as follows
k₁ = hf(t₂, N₂)
= (0.25)f(0.50, 9170.204)
= (0.25)(−0.17325)(9170.204)
= −397.185
k₂ = hf(t₂ + h/2, N₂ + k₁/2)
= (0.25)f(0.50 + 0.25/2, 9170.204 + (−397.185)/2)
= (0.25)f(0.625, 8971.612)
= (0.25)(−0.17325)(8971.612)
= −388.583
k₃ = hf(t₂ + h/2, N₂ + k₂/2)
= (0.25)f(0.50 + 0.25/2, 9170.204 + (−388.583)/2)
= (0.25)f(0.625, 8975.913)
= (0.25)(−0.17325)(8975.913)
= −388.769
k₄ = hf(t₂ + h, N₂ + k₃)
= (0.25)f(0.50 + 0.25, 9170.204 + (−388.769))
= (0.25)f(0.75, 8781.435)
= (0.25)(−0.17325)(8781.435)
= −380.346
So that N₃ = N₂ + (1/6)(k₁ + 2k₂ + 2k₃ + k₄)
= 9170.204 + (1/6)[(−397.185) + 2(−388.583) + 2(−388.769) + (−380.346)]
= 8781.498
i.e. N₃ = 8781.498 ; t₃ = t₂ + h = 0.50 + 0.25 = 0.75
To find 𝑵𝟒 we proceed as follows: For the fourth interval, putting 𝑛 = 3 in (3.9.5) &
(3.9.6), from (3.9.4) we get as follows
k₁ = hf(t₃, N₃)
= (0.25)f(0.75, 8781.498)
= (0.25)(−0.17325)(8781.498)
= −380.349
k₂ = hf(t₃ + h/2, N₃ + k₁/2)
= (0.25)f(0.75 + 0.25/2, 8781.498 + (−380.349)/2)
= (0.25)f(0.875, 8591.324)
= (0.25)(−0.17325)(8591.324)
= −372.112
k₃ = hf(t₃ + h/2, N₃ + k₂/2)
= (0.25)f(0.75 + 0.25/2, 8781.498 + (−372.112)/2)
= (0.25)f(0.875, 8595.442)
= (0.25)(−0.17325)(8595.442)
= −372.290
k₄ = hf(t₃ + h, N₃ + k₃)
= (0.25)f(0.75 + 0.25, 8781.498 + (−372.290))
= (0.25)f(1.00, 8409.208)
= (0.25)(−0.17325)(8409.208)
= −364.224
So that N₄ = N₃ + (1/6)(k₁ + 2k₂ + 2k₃ + k₄)
= 8781.498 + (1/6)[(−380.349) + 2(−372.112) + 2(−372.290) + (−364.224)]
= 8409.268
i.e. N₄ = 8409.268 ; t₄ = t₃ + h = 0.75 + 0.25 = 1.00
Exact solution: We have dN/dt = −λN
or, dN/N = −λdt    (3.9.7)
Integrating between the limits, ln(N) − ln(N₀) = −λ(t − 0)
or, ln(N/N₀) = −λt
or, N/N₀ = e^(−λt), i.e. N = N₀e^(−λt)
At t = 1.00 this gives N = 10⁴e^(−0.17325) = 8409.274
E_T = N_Exact − N_Approximate
= 8409.274 − 8409.268 = 0.006
E_R = |E_T|/N_Exact
= (0.006/8409.274) × 100% = 0.00007%
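The four hand-computed steps can be repeated by machine (a sketch; note that full floating-point arithmetic avoids the three-decimal rounding of the hand computation, so the last digits agree with the exact value slightly better than 8409.268 does):

```python
from math import exp

LAM, H, N0 = 0.17325, 0.25, 1.0e4      # decay constant (per day), step, N(0)
f = lambda t, N: -LAM * N

N, t = N0, 0.0
for _ in range(4):                      # four RK4 steps up to t = 1 day
    k1 = H * f(t, N)
    k2 = H * f(t + H / 2, N + k1 / 2)
    k3 = H * f(t + H / 2, N + k2 / 2)
    k4 = H * f(t + H, N + k3)
    N += (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += H

N_exact = N0 * exp(-LAM * t)            # N = N0 e^(-lambda t), the exact solution
```

The RK4 value and the exact value differ by far less than one nucleus here, illustrating why the relative error reported above is so small.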
k₁ = hf(xₙ, yₙ)
k₂ = hf(xₙ + α₂h, yₙ + β₂k₁)
k₃ = hf(xₙ + α₃h, yₙ + β₃k₁ + γ₃k₂)
k₄ = hf(xₙ + α₄h, yₙ + β₄k₁ + γ₄k₂ + δ₄k₃)
α₂ = α₃ = β₂ = 1/2, α₄ = 1, β₃ = (√2 − 1)/2, β₄ = 0, γ₃ = (√2 − 1)/√2, γ₄ = −1/√2,
δ₄ = (2 + √2)/2, a = d = 1/6, b = (1/3)(1 − 1/√2), c = (1/3)(1 + 1/√2)
y′ = f(x, y) ; y(xₙ) = yₙ
The next value of y is given by
yₙ₊₁ = yₙ + (1/6)[k₁ + (2 − √2)k₂ + (2 + √2)k₃ + k₄]
k₁ = hf(xₙ, yₙ)
k₂ = hf(xₙ + h/2, yₙ + k₁/2)
k₃ = hf(xₙ + h/2, yₙ + ((√2 − 1)/2)k₁ + ((√2 − 1)/√2)k₂)
k₄ = hf(xₙ + h, yₙ − (1/√2)k₂ + ((2 + √2)/2)k₃)
k₁ = hf(xₙ, yₙ)
k₂ = hf(xₙ + h/3, yₙ + k₁/3)
k₃ = hf(xₙ + h/3, yₙ + k₁/6 + k₂/6)
k₄ = hf(xₙ + h/2, yₙ + k₁/8 + 3k₃/8)
k₅ = hf(xₙ + h, yₙ + k₁/2 − 3k₃/2 + 2k₄)
k₁ = hf(xₙ, yₙ)
k₂ = hf(xₙ + h/4, yₙ + k₁/4)
k₃ = hf(xₙ + h/4, yₙ + k₁/8 + k₂/8)
k₄ = hf(xₙ + h/2, yₙ − k₂/2 + k₃)
k₅ = hf(xₙ + 3h/4, yₙ + 3k₁/16 + 9k₄/16)
k₆ = hf(xₙ + h, yₙ − 3k₁/7 + 2k₂/7 + 12k₃/7 − 12k₄/7 + 8k₅/7)
Kutta-Nystrom method: This is a fifth-order method, with an error term of order h⁶,
which involves six functional values. This method yields the following form
yₙ₊₁ = yₙ + (1/192)[23k₁ + 125k₂ − 81k₅ + 125k₆]
k₁ = hf(xₙ, yₙ)
k₂ = hf(xₙ + h/3, yₙ + k₁/3)
k₃ = hf(xₙ + 2h/5, yₙ + 4k₁/25 + 6k₂/25)
k₄ = hf(xₙ + h, yₙ + k₁/4 − 3k₂ + 15k₃/4)
k₅ = hf(xₙ + 2h/3, yₙ + 2k₁/27 + 10k₂/9 − 50k₃/81 + 8k₄/81)
k₆ = hf(xₙ + 4h/5, yₙ + 2k₁/25 + 12k₂/25 + 2k₃/15 + 8k₄/75)
k₁ = hf(xₙ, yₙ)
k₂ = hf(xₙ + h/4, yₙ + k₁/4)
k₃ = hf(xₙ + 3h/8, yₙ + 3k₁/32 + 9k₂/32)
k₄ = hf(xₙ + 12h/13, yₙ + 1932k₁/2197 − 7200k₂/2197 + 7296k₃/2197)
k₅ = hf(xₙ + h, yₙ + 439k₁/216 − 8k₂ + 3680k₃/513 − 845k₄/4104)
k₆ = hf(xₙ + h/2, yₙ − 8k₁/27 + 2k₂ − 3544k₃/2565 + 1859k₄/4104 − 11k₅/40)
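The six k's above support an embedded error estimate: the standard Fehlberg weights for the fourth- and fifth-order combinations (quoted here from the literature, not from the text above) give two answers whose difference estimates the local truncation error. A sketch of one step:

```python
from math import exp

def rkf45_step(f, x, y, h):
    """One Runge-Kutta-Fehlberg step: returns the 4th- and 5th-order
    estimates of y(x + h); their difference estimates the local error."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 4, y + k1 / 4)
    k3 = h * f(x + 3 * h / 8, y + 3 * k1 / 32 + 9 * k2 / 32)
    k4 = h * f(x + 12 * h / 13,
               y + 1932 * k1 / 2197 - 7200 * k2 / 2197 + 7296 * k3 / 2197)
    k5 = h * f(x + h,
               y + 439 * k1 / 216 - 8 * k2 + 3680 * k3 / 513 - 845 * k4 / 4104)
    k6 = h * f(x + h / 2,
               y - 8 * k1 / 27 + 2 * k2 - 3544 * k3 / 2565
               + 1859 * k4 / 4104 - 11 * k5 / 40)
    y4 = y + 25 * k1 / 216 + 1408 * k3 / 2565 + 2197 * k4 / 4104 - k5 / 5
    y5 = y + (16 * k1 / 135 + 6656 * k3 / 12825 + 28561 * k4 / 56430
              - 9 * k5 / 50 + 2 * k6 / 55)
    return y4, y5

# Test problem y' = y, y(0) = 1, one step h = 0.1: both estimates are near e^0.1
y4, y5 = rkf45_step(lambda x, y: y, 0.0, 1.0, 0.1)
exact = exp(0.1)
```

In adaptive codes, |y5 − y4| is compared against a tolerance to accept the step and to choose the next step size.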
The general pth-order Runge-Kutta method can be expressed [3] in the following form
yₙ₊₁ = yₙ + ∑ₘ₌₁ᵖ bₘkₘ
kₘ = hf(xₙ + cₘh, yₙ + ∑ₗ₌₁ᵐ⁻¹ aₘ,ₗkₗ)
Figure-(3.1)
5. Finally, monitoring the local truncation error does not involve any additional
function evaluations when using predictor-corrector methods, whereas it is quite
expensive for Runge-Kutta methods.
Therefore, for the above purpose, the minimum error bound Runge-Kutta
methods should be used.
4.1 INTRODUCTION
The general second-order linear partial differential equation [23] is of the form
A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D ∂u/∂x + E ∂u/∂y + Fu = G
Here A, B, C, D, E, F & G are all functions of x & y. The above equation (4.2.1)
can be classified with respect to the sign of the discriminant ∆S = B² − 4AC in the
following way
1. The wave equation in three spatial variables (x, y, z) and the time t is
∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = ∂²u/∂t²
The function 𝑢 represents the displacement at the time 𝑡 of the particle whose
position at the rest is (𝑥, 𝑦, 𝑧). With appropriate boundary conditions, this equation
governs vibrations of a three-dimensional elastic body.
2. The heat equation is
∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = ∂u/∂t
The function u represents the temperature at the time t of a particle whose position
co-ordinates are (x, y, z).
3. Laplace's equation is
∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = 0
Also, there are two special cases of problems depending upon the boundary
conditions given with partial differential equations.
∂²u/∂x² + ∂²u/∂y² = 0 ; u(x, y) = f
1. We have Cauchy's problem for t > 0 with arbitrary functions f(x) & g(x) as
follows
∂²u/∂t² − ∂²u/∂x² = 0 ; u(x, 0) = f(x), [∂u/∂t]ₜ₌₀ = g(x)
Figure-(4.1)
The points of intersection of these families of lines are called mesh points,
lattice points or grid points.
In this section [12] we will study various techniques for solving Laplace's and
Poisson's equations, which are elliptic in nature. Various physical phenomena are
governed by these well-known equations. Some of them, frequently encountered in
physical and engineering applications, are the steady heat equation, seepage through
porous media, the rotational flow of an ideal fluid, the distribution of potential, steady
viscous flow, equilibrium stresses in elastic structures, etc.
∂²u/∂x² + ∂²u/∂y² = 0    (4.4.1)
Figure-(4.2)
Equation (4.4.2) shows that the value of u at any interior mesh point is the
average of its values at the four neighboring points adjacent to it. The equation (4.4.2)
is known as the standard 5-point formula, exhibited in figure-(4.3).
We know that Laplace's equation remains invariant when the co-ordinate axes
are rotated through an angle of 45°. Then the formula (4.4.2) can be re-written as
uᵢ,ⱼ = (1/4)[uᵢ₋₁,ⱼ₋₁ + uᵢ₊₁,ⱼ₋₁ + uᵢ₊₁,ⱼ₊₁ + uᵢ₋₁,ⱼ₊₁]    (4.4.3)
This is similar to (4.4.2), and shows that the value of u at any interior mesh
point is the average of its values at the four neighboring diagonal mesh points. The
equation (4.4.3) is known as the diagonal 5-point formula, exhibited in figure-(4.4).
Figure-(4.3) Figure-(4.4)
Although (4.4.3) is less accurate than (4.4.2), it serves as a reasonably good
approximation for obtaining starting values at the mesh points. We use (4.4.3) to find
the initial values of u at the interior mesh points and compute the following mesh points
u₃,₃ = (1/4)[b₁,₅ + b₅,₁ + b₅,₅ + b₁,₁]
u₂,₄ = (1/4)[b₁,₅ + u₃,₃ + b₃,₅ + b₁,₃]
u₄,₄ = (1/4)[b₃,₅ + b₅,₃ + u₃,₃ + b₅,₅]
u₄,₂ = (1/4)[u₃,₃ + b₅,₁ + b₃,₁ + b₅,₃]
u₂,₂ = (1/4)[b₁,₃ + b₃,₁ + u₃,₃ + b₁,₁]
The values of 𝑢 at the remaining interior mesh points are computed by (4.4.2),
in the following way
u₂,₃ = (1/4)[b₁,₃ + u₃,₃ + u₂,₄ + u₂,₂]
u₃,₄ = (1/4)[u₂,₄ + u₄,₄ + b₃,₅ + u₃,₃]
u₄,₃ = (1/4)[u₃,₃ + b₅,₃ + u₄,₄ + u₄,₂]
u₃,₂ = (1/4)[u₂,₂ + u₄,₂ + u₃,₃ + b₃,₁]
After determining the uᵢ,ⱼ once, their accuracy is improved by using either Jacobi's
iterative method or the Gauss-Seidel iterative method. The process is repeated until
two consecutive iterations become very close, i.e. the difference between two
consecutive iterations becomes negligibly small, in order to achieve the desired level of
accuracy. The iterative formulae in the case of Jacobi's method and the Gauss-Seidel
method are given below.
(𝑛+1)
Here 𝑢𝑖,𝑗 denotes the (𝑛 + 1)𝑡ℎ iterative value of 𝑢𝑖,𝑗 and gives us the
improved values of 𝑢𝑖,𝑗 at the interior mesh points.
The Gauss-Seidel iteration formula uses the latest iterative values available and scans
the mesh points systematically from left to right along successive rows. The Gauss-
Seidel method is simple and well suited to computer calculation. Jacobi's iteration
formula, being slow, works in the same way but is lengthier. However, it can be shown
that the Gauss-Seidel scheme converges twice as fast as Jacobi's method.
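The contrast between the two iterations can be seen on a toy grid (a sketch with our own helper names; the 5×5 grid with boundary u = x²y² echoes Application-2 below):

```python
def solve_laplace(u, method="gauss-seidel", tol=1e-6, max_sweeps=10000):
    """Iterate the standard 5-point formula (4.4.2) on a square grid u
    (a list of lists with boundary values filled in); returns sweep count."""
    n = len(u)
    for sweep in range(1, max_sweeps + 1):
        old = [row[:] for row in u]
        src = u if method == "gauss-seidel" else old   # Jacobi reads old values
        change = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new = 0.25 * (src[i - 1][j] + src[i + 1][j]
                              + src[i][j - 1] + src[i][j + 1])
                change = max(change, abs(new - u[i][j]))
                u[i][j] = new
        if change < tol:
            return sweep
    return max_sweeps

def grid(n=5):
    """Zero interior with boundary u = x^2 y^2 (h = 1)."""
    u = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i in (0, n - 1) or j in (0, n - 1):
                u[i][j] = float((i * j) ** 2)
    return u

u_gs = grid(); sweeps_gs = solve_laplace(u_gs)
u_j = grid();  sweeps_j = solve_laplace(u_j, method="jacobi")
# Gauss-Seidel reaches the tolerance in roughly half as many sweeps as Jacobi
```

Both iterations converge to the same discrete solution; only the sweep counts differ, illustrating the factor-of-two remark above.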
∂²u/∂x² + ∂²u/∂y² = f(x, y)    (4.4.6)
Solution of the elliptic equation by the relaxation method: Let us consider Laplace's
equation in two dimensions as follows
∂²u/∂x² + ∂²u/∂y² = 0    (4.4.8)
We take a square region and divide it into a square net of mesh size ℎ. Let the
value of 𝑢 at 𝐴 be 𝑢0 and its values at four adjacent points to be
𝑢1 , 𝑢2 , 𝑢3 & 𝑢4 respectively as shown as figure-(4.5)
Figure-(4.5)
Then ∂²u/∂x² ≈ (u₁ + u₃ − 2u₀)/h²
and ∂²u/∂y² ≈ (u₂ + u₄ − 2u₀)/h²
or, u₁ + u₂ + u₃ + u₄ − 4u₀ ≈ 0
Let 𝑟0 & 𝑟1 be the residuals at the mesh point 𝐴 & 𝐵, then we have
𝑟0 = 𝑢1 + 𝑢2 + 𝑢3 + 𝑢4 − 4𝑢0 (4.4.9)
𝑟1 = 𝑢0 + 𝑢5 + 𝑢6 + 𝑢7 − 4𝑢1 (4.4.10)
This is a continuous process. The main aim of the method is to reduce all the
residuals to zero, by making them as small as possible step by step. Thus, we try to
adjust the value of u at an internal mesh point so as to make the residual there zero.
When the value of u is changed at a mesh point, the values of the residuals at the
neighboring interior points will also change. If u₀ is given an increment 1, then (i)
equation (4.4.9) shows that r₀ is changed by −4 and (ii) equation (4.4.10) shows that
r₁ is changed by 1. The relaxation pattern is shown in figure-(4.6).
Figure-(4.6)
This shows that (1/4)Rᵢ,ⱼ is the change in the value of uᵢ,ⱼ for Gauss-Seidel iteration.
In the successive over-relaxation method, larger changes than this are given to u⁽ⁿ⁾ᵢ,ⱼ and
the iteration formula is written as
u⁽ⁿ⁺¹⁾ᵢ,ⱼ = u⁽ⁿ⁾ᵢ,ⱼ + (w/4)Rᵢ,ⱼ
or, u⁽ⁿ⁺¹⁾ᵢ,ⱼ = (w/4)[u⁽ⁿ⁺¹⁾ᵢ₋₁,ⱼ + u⁽ⁿ⁾ᵢ₊₁,ⱼ + u⁽ⁿ⁺¹⁾ᵢ,ⱼ₊₁ + u⁽ⁿ⁾ᵢ,ⱼ₋₁] + (1 − w)u⁽ⁿ⁾ᵢ,ⱼ    (4.4.12)
1. Write down by trial values, the initial values of 𝑢 at the interior mesh points
by (4.4.3).
3. Write the residuals at a mesh point on the right of this point and the value of
𝑢 on its left.
4. Obtain the solution by reducing the residuals to zero one by one, by giving
suitable increments to u and using figure-(4.7). At each step, we reduce the
numerically largest residual to zero and record the increment of u on the left
and the modified residual on the right.
6. Stop the relaxation process, when the current values of the residuals are quite
small. The current value of 𝑢 at each of nodes gives us the solution.
Application-1: Given the values of 𝑢(𝑥, 𝑦) on the boundary of the square in the
following figure-(4.8), evaluate the function 𝑢(𝑥, 𝑦) satisfying the Laplace’s equation
𝑢𝑥𝑥 + 𝑢𝑦𝑦 = 0 at the pivotal points of the figure-(4.7) by (i) Jacobi’s method (ii)
Gauss-Seidel method.
Figure-(4.7)
Solution: We know that the standard 5-point formula and the diagonal 5-point formula are
uᵢ,ⱼ = (1/4)[uᵢ₋₁,ⱼ + uᵢ₊₁,ⱼ + uᵢ,ⱼ₋₁ + uᵢ,ⱼ₊₁]    (4.5.1)
uᵢ,ⱼ = (1/4)[uᵢ₋₁,ⱼ₋₁ + uᵢ₊₁,ⱼ₋₁ + uᵢ₊₁,ⱼ₊₁ + uᵢ₋₁,ⱼ₊₁]    (4.5.2)
Since the eighth iteration is very close to the seventh iteration, we can conclude
Since the sixth iteration is very close to the fifth iteration, we can conclude
Application-2: Apply the relaxation method to solve Laplace's equation u_xx + u_yy = 0
inside the square bounded by x = 0, x = 4, y = 0, y = 4. Here, it is given that u = x²y² on
the boundary.
Figure-(4.8)
𝑟𝐴 = 0 + 16 + 16 + 𝑢4 − 4 × 24 = −8
𝑟𝐵 = 𝑟𝐷 = 𝑟𝐹 = 𝑟𝐺 = 𝑟𝐻 = 0, 𝑟𝐶 = −16, 𝑟𝐸 = 16, 𝑟𝐼 = −8
∂u/∂t = c² ∂²u/∂x²    (4.6.1)
Here c² = k/(ρσ) cm²s⁻¹ is the diffusivity of the substance, where k is the thermal
conductivity, ρ is the density and σ is the specific heat of the substance.
∂u/∂t = (uᵢ,ⱼ₊₁ − uᵢ,ⱼ)/k
∂²u/∂x² = (uᵢ₋₁,ⱼ − 2uᵢ,ⱼ + uᵢ₊₁,ⱼ)/h²
or, uᵢ,ⱼ₊₁ − uᵢ,ⱼ = r[uᵢ₋₁,ⱼ − 2uᵢ,ⱼ + uᵢ₊₁,ⱼ] ; r = kc²/h²
The relation (4.6.2) is a relation between the function values at the two time
levels j and (j + 1) and is hence called a 2-level formula. This formula enables us to
determine the value of u at the (i, j + 1)th mesh point in terms of the known function
values at the points xᵢ₋₁, xᵢ & xᵢ₊₁ at the instant tⱼ. The schematic form of (4.6.2) is
shown in figure-(4.9).
Figure-(4.9)
Hence the formula (4.6.2) is called the Schmidt explicit formula, which is valid only
for 0 < r ≤ ½. In particular, when r = ½, equation (4.6.2) reduces to
This shows that the value of u at xᵢ at time tⱼ₊₁ is the mean of the values
of u at xᵢ₋₁ and xᵢ₊₁ at time tⱼ. The relation (4.6.3) is known as the Bender-Schmidt
recurrence relation, which gives the value of u at the internal points with the help of
the boundary conditions.
∂u/∂t = (uᵢ,ⱼ₊₁ − uᵢ,ⱼ)/k
∂²u/∂x² = (1/2)[(uᵢ₋₁,ⱼ − 2uᵢ,ⱼ + uᵢ₊₁,ⱼ)/h² + (uᵢ₋₁,ⱼ₊₁ − 2uᵢ,ⱼ₊₁ + uᵢ₊₁,ⱼ₊₁)/h²]
On the left-hand side of (4.6.4), we have three unknowns and on the right-hand
side, all three quantities are known. Thus, the implicit scheme (4.6.4) is called the
Crank-Nicholson formula and it is convergent for all values of 𝑟. If there are 𝑁 internal
mesh points on each row, then formula (4.6.4) gives 𝑁 simultaneous equations for 𝑁
unknowns in terms of the given boundary values. Similarly, the internal mesh points on
all rows can be calculated. The computational model of this method is given below in
figure-(4.10)
Figure-(4.10)
In (4.6.5), the unknowns are uᵢ₋₁,ⱼ₊₁, uᵢ,ⱼ₊₁ & uᵢ₊₁,ⱼ₊₁, and all the others are
known, since they have already been computed at the jth step. Hence, dropping all the
j's and setting
cᵢ = uᵢ,ⱼ + (r/2)[uᵢ₋₁,ⱼ − 2uᵢ,ⱼ + uᵢ₊₁,ⱼ]    (4.6.6)
This expresses the (n + 1)th iteration in terms of the nth iteration and is known as
Jacobi's iteration formula. It can be seen from (4.6.8) that at the time of
computing u⁽ⁿ⁺¹⁾ᵢ, the latest value of uᵢ₋₁, namely u⁽ⁿ⁺¹⁾ᵢ₋₁, is already available. Hence,
the convergence of Jacobi's iteration formula can be improved by replacing u⁽ⁿ⁾ᵢ₋₁ in
(4.6.8) by its latest available value, namely u⁽ⁿ⁺¹⁾ᵢ₋₁. Accordingly, we obtain the Gauss-
Seidel iteration formula.
It can be shown that the Gauss-Seidel scheme (4.6.9) converges for all finite
values of r, and that it converges twice as fast as Jacobi's scheme (4.6.8).
Solve ∂u/∂t = ∂²u/∂x² by using (i) the Schmidt method (ii) the Crank-Nicholson method.
Subject to the conditions 𝑢(𝑥, 0) = 𝑠𝑖𝑛𝜋𝑥 ; 0 ≤ 𝑥 ≤ 1 and 𝑢(0, 𝑡) = 0 = 𝑢(1, 𝑡). Carry
out computations for the two levels, taking ℎ = 1/3 and 𝑘 = 1/36.
Thus, r = kc²/h² = (1/36)/(1/9) = 1/4
u₁,₀ = sin(π/3) = √3/2 & u₂,₀ = sin(2π/3) = √3/2
Figure-(4.11)
u₂,₁ = (1/4)[u₁,₀ + 2u₂,₀ + u₃,₀]
= (1/4)[√3/2 + √3 + 0] = 0.65
or, −(1/4)u₁,₁ + (5/2)u₂,₁ − (1/4)(0) = (1/4)(0.67) + (3/2)(0.67) + (1/4)(0)
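The Schmidt part of this computation can be checked with a few lines of code (a sketch; with r = 1/4 the explicit formula (4.6.2) becomes uᵢ,ⱼ₊₁ = r·uᵢ₋₁,ⱼ + (1 − 2r)·uᵢ,ⱼ + r·uᵢ₊₁,ⱼ):

```python
from math import sin, pi

h, k = 1.0 / 3.0, 1.0 / 36.0
r = k / h ** 2                                  # = 0.25, as above

def schmidt(u, r):
    """One explicit Schmidt time level with u = 0 at both boundaries."""
    return ([0.0]
            + [r * u[i - 1] + (1 - 2 * r) * u[i] + r * u[i + 1]
               for i in range(1, len(u) - 1)]
            + [0.0])

u0 = [sin(pi * i * h) for i in range(4)]        # u(x, 0) = sin(pi x)
u1 = schmidt(u0, r)   # u1[1] = u1[2] = 0.6495..., i.e. the 0.65 found above
u2 = schmidt(u1, r)   # second time level
```

The Crank-Nicholson level would instead require solving the small tridiagonal system written above.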
The wave equation ∂²u/∂t² = ∂²u/∂x² is the simplest example of the hyperbolic
partial differential equations. Its solution is the displacement function u(x, t), defined
for values 0 ≤ x ≤ L and 0 ≤ t ≤ ∞, satisfying the initial and boundary conditions
[12]. Such equations arise from convective types of problems in vibrations, wave
mechanics, gas dynamics, elasticity, electromagnetics and seismology.
∂²u/∂t² = c² ∂²u/∂x² ; 0 ≤ x ≤ L & 0 ≤ t ≤ ∞    (4.8.1)
u(x, 0) = f(x) & ∂u/∂t(x, 0) = g(x) ; 0 ≤ x ≤ L    (4.8.2)
The formula (4.8.4) shows that the function values at 𝑗 𝑡ℎ and (𝑗 − 1)𝑡ℎ levels
are required in order to determine those at the (𝑗 + 1)𝑡ℎ level. Such difference schemes
are called three-level difference schemes compared to the two-level difference schemes
derived in the parabolic equation case. By expanding the terms in (4.8.4) as Taylor’s
series and simplifying, it can be shown that the truncation error in (4.8.4) is of
𝑂(ℎ2 + 𝑘 2 ) and the formula (4.8.4) holds well if 𝑟 < 1, which is the condition for
stability.
There also exist implicit finite difference schemes for (4.8.1). Two such schemes,
which hold well for all values of r = ck/h, are obtained by taking the average of the
finite difference approximations on different rows; one of them is given by
(1/k²)[uᵢ,ⱼ₋₁ − 2uᵢ,ⱼ + uᵢ,ⱼ₊₁] = (c²/2h²)[(uᵢ₋₁,ⱼ₊₁ − 2uᵢ,ⱼ₊₁ + uᵢ₊₁,ⱼ₊₁)
+ (uᵢ₋₁,ⱼ₋₁ − 2uᵢ,ⱼ₋₁ + uᵢ₊₁,ⱼ₋₁)]
Solve ∂²u/∂t² = 16 ∂²u/∂x², taking ∆x = 1, up to t = 1.25, subject to the initial and
boundary conditions u(0, t) = 0 = u(5, t), ∂u/∂t(x, 0) = 0 & u(x, 0) = x²(5 − x).
Solution: Here c² = 16; therefore, with r = k/h, the difference equation (4.8.4) takes its
simplest form when
1 − 16r² = 0
or, r² = k²/h² = 1/16
or, k = 1/4
Also u(x, 0) = x²(5 − x)
Finally, ∂u/∂t(x, 0) = 0
or, (uᵢ,ⱼ₊₁ − uᵢ,ⱼ)/k = 0 ; t = 0
Thus the required values of uᵢ,ⱼ can be shown in the following table
j↓ \ i→   0    1    2    3    4    5
0 0 4 12 18 16 0
1 0 4 12 18 16 0
2 0 8 10 10 2 0
3 0 6 6 -6 -6 0
4 0 -2 -10 -10 -8 0
5 0 -16 -18 -12 -4 0
The method of iteration and the method of relaxation are both methods for
solving partial differential equations with given boundary values [11]. Although they
reach the desired solution by different processes, both methods are of the same inherent
accuracy. Their points of similarity and dissimilarity are given below.
1. Both methods require that the bounded region be divided into a network of
squares or other similar polygons.
2. Both methods require that the boundary values are written down and that
rough values of the function are computed, estimated or assumed for all
interior points of the network.
4. The iteration process is slow, sure and frequently long. The relaxation process
is more rapid, less certain and usually reasonably short. The convergence is
rapid by both methods at first but becomes slow with both methods long
before the end is reached.
5. The arithmetic operations are easier and shorter with the method of relaxation.
The mental effort necessary to avoid mistakes however is much greater than
with the iteration method.
6. The greatest drawback to the method of iteration is its length and the greatest
drawback to the method of relaxation is its liability to errors of the
computation. Such errors can be kept out only by extreme care and unceasing
vigilance on the part of the computer.
After a problem has been formulated as the definite integral of the algebraic sum
of two or more homogeneous, positive and definite quadratic forms or as the quotient
of two such integrals, the desired unknown function is replaced in the integrals by the
assumed linear combination. Then the integral or quotient of the integrals is minimized
with respect to each of the arbitrary constants occurring in the linear combination.
This method is direct and short if only approximate results are desired, but if
results of high accuracy are required, the method is quite laborious and the labor cannot
be appreciably lessened by mechanical aids. The labor involved is mostly in long and
tedious algebraic manipulations.
A special and simple form of the Rayleigh-Ritz method was first used by Lord Rayleigh for finding the fundamental vibration period of an elastic body. It was later extended and generalized, and its convergence proved, by W. Ritz.
Figure-(4.12)
Consider a thin membrane, such as a sheet of rubber, and assume that the membrane is made fast at the edge while tightly stretched [11].
Take a set of three mutually perpendicular axes, with the 𝑥𝑦 − 𝑝𝑙𝑎𝑛𝑒 coinciding
with the membrane and the 𝑧 − 𝑎𝑥𝑖𝑠 perpendicular to it. Then if an interior region of
the membrane be pulled or pushed in a direction at right angles to its plane of
equilibrium (the 𝑥𝑦 − 𝑝𝑙𝑎𝑛𝑒), it becomes distorted into a curved surface, the area of
which is
S = ∫0^a ∫0^b √(1 + (∂z/∂x)² + (∂z/∂y)²) dy dx
≈ ∫0^a ∫0^b [1 + ½(∂z/∂x)² + ½(∂z/∂y)²] dy dx
Since the distortion is small, the increase in the area of the membrane due to the distortion is therefore
∆S = ∫0^a ∫0^b [1 + ½(∂z/∂x)² + ½(∂z/∂y)²] dy dx − ab
= ½ ∫0^a ∫0^b [(∂z/∂x)² + (∂z/∂y)²] dy dx
Let 𝑇 denote the tension on a unit length of the boundary of the membrane, the
direction of 𝑇 being perpendicular to the edge of the boundary. Then the work-done in
deflecting the membrane until its area is unchanged by an amount ∆𝑆 is as in the
following. Consider a rectangular region of dimensions 𝑢 & 𝑣 as in the figure-(4.13).
Figure-(4.13)
First let the side 𝐴𝐵 be fixed and let the membrane be pulled to the right with
the force 𝑇 pounds per unit of width, or 𝑇𝑣 for the whole side. The force 𝑇𝑣 will stretch
the membrane an amount ∆𝑢 and do 𝑇𝑣. ∆𝑢 units of work in doing so.
Now let the side 𝐵𝐶 be fixed and let the membrane be pulled in the direction of
the side 𝐴𝐵 by a force of 𝑇 pounds per unit length of the border, or 𝑇(𝑢 + ∆𝑢) for the
whole side. The force 𝑇(𝑢 + ∆𝑢) will stretch the membrane by an amount ∆𝑣 in that
direction and do T(u + ∆u)∆v units of work in doing so. Hence the total work-done is
W = Tv∆u + T(u + ∆u)∆v = T∆(uv) = T∆S
Now the potential energy in the deflected position is equal to the work done in producing the deflection. Since the deflection is small, the tension T remains practically constant. Hence the potential energy of the membrane in a deflected position is
PE = (T/2) ∫0^a ∫0^b [(∂z/∂x)² + (∂z/∂y)²] dy dx
Assuming the membrane vibrates harmonically, so that z = Z(x, y) sin ωt, this becomes
PE = ((T/2) ∫0^a ∫0^b [(∂Z/∂x)² + (∂Z/∂y)²] dy dx) sin²ωt
(PE)max = (T/2) ∫0^a ∫0^b [(∂Z/∂x)² + (∂Z/∂y)²] dy dx
For the elementary mass dm = ρ dy dx, the kinetic energy of the membrane is
½ρ dy dx (∂z/∂t)² = ½ρ dy dx Z²ω² cos²ωt
Here ρ denotes the mass of the unit area of the membrane. The kinetic energy of the entire vibrating membrane is therefore
KE = ½ω²ρ cos²ωt ∫0^a ∫0^b Z² dy dx
(KE)max = ½ω²ρ ∫0^a ∫0^b Z² dy dx
Equating the maximum kinetic and potential energies, we get
½ω²ρ ∫0^a ∫0^b Z² dy dx = (T/2) ∫0^a ∫0^b [(∂Z/∂x)² + (∂Z/∂y)²] dy dx
or, ω² = T ∫0^a ∫0^b [(∂Z/∂x)² + (∂Z/∂y)²] dy dx / (ρ ∫0^a ∫0^b Z² dy dx)    (4.11.1)
We must now assume for Z a linear combination of simple functions that will satisfy the boundary conditions of the problem. Such a function is
Assuming that Z in (4.11.1) has been replaced by (4.11.2) and (4.11.3) above, we must determine all the a_i so as to make ω² a minimum. Hence the derivative of the right member of (4.11.1) with respect to each a_i must be zero. Then by the rule for differentiating a quotient we have
∫0^a ∫0^b Z² dy dx · (∂/∂a_i)(∫0^a ∫0^b [(∂Z/∂x)² + (∂Z/∂y)²] dy dx)
− ∫0^a ∫0^b [(∂Z/∂x)² + (∂Z/∂y)²] dy dx · (∂/∂a_i)(∫0^a ∫0^b Z² dy dx) = 0    (4.11.4)
From (4.11.1) we have
∫0^a ∫0^b [(∂Z/∂x)² + (∂Z/∂y)²] dy dx = (ω²ρ/T) ∫0^a ∫0^b Z² dy dx    (4.11.5)
Substituting (4.11.5) into (4.11.4),
∫0^a ∫0^b Z² dy dx · (∂/∂a_i)(∫0^a ∫0^b [(∂Z/∂x)² + (∂Z/∂y)²] dy dx)
− (ω²ρ/T) ∫0^a ∫0^b Z² dy dx · (∂/∂a_i)(∫0^a ∫0^b Z² dy dx) = 0    (4.11.6)
Now taking out the common factor ∫0^a ∫0^b Z² dy dx and putting ω²ρ/T = k, we get for i = 1, 2, 3, ……, n
(∂/∂a_i){∫0^a ∫0^b [(∂Z/∂x)² + (∂Z/∂y)²] dy dx − k ∫0^a ∫0^b Z² dy dx} = 0    (4.11.7)
Let Z = a₁(p² − x²)(q² − y²). Then
∂Z/∂x = −2a₁x(q² − y²)
∂Z/∂y = −2a₁y(p² − x²)
Hence ∫∫ [(∂Z/∂x)² + (∂Z/∂y)²] dy dx
= 4a₁² ∫_{−p}^{p} ∫_{−q}^{q} [x²(q² − y²)² + y²(p² − x²)²] dy dx
= (128/45) a₁² p³q³(p² + q²)
And ∫∫ Z² dy dx
= ∫_{−p}^{p} ∫_{−q}^{q} a₁²(q² − y²)²(p² − x²)² dy dx
= (256/225) a₁² p⁵q⁵
On substituting these in (4.11.7) and putting p = a/2, q = b/2 and k = ω²ρ/T, we get as follows
(∂/∂a₁){(128/45) a₁² p³q³(p² + q²) − k (256/225) a₁² p⁵q⁵} = 0
or, (p² + q²) − (2/5) k p²q² = 0
or, k = 5(p² + q²)/(2p²q²)
or, k = (5/2)(1/p² + 1/q²)
or, k = 10(1/a² + 1/b²)
or, ω²ρ/T = 10(1/a² + 1/b²)
or, ω² = (10T/ρ)(1/a² + 1/b²)
or, ω = √((10T/ρ)(1/a² + 1/b²))
f = ω/2π = (1/2π)√((10T/ρ)(1/a² + 1/b²))
This is a natural vibration frequency of the membrane. Since √10/(2π) ≈ 1/2, this agrees closely with the result of the classical method of separating the variables, by which the vibration frequencies are given by the formula
f_{m,n} = (1/2)√((T/ρ)(m²/a² + n²/b²))
and the fundamental frequency is
f = (1/2)√((T/ρ)(1/a² + 1/b²))
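The closeness of the two results can be checked numerically; the ratio of the Rayleigh-Ritz estimate to the classical fundamental frequency is √10/π regardless of a, b, T and ρ (a short sketch with illustrative values):

```python
from math import pi, sqrt

# Rayleigh-Ritz estimate omega = sqrt(10 T/rho (1/a^2 + 1/b^2)) versus the
# classical fundamental frequency omega = pi sqrt(T/rho (1/a^2 + 1/b^2)).
T, rho, a, b = 1.0, 1.0, 2.0, 3.0              # illustrative values only
omega_rr = sqrt(10 * T / rho * (1 / a**2 + 1 / b**2))
omega_exact = pi * sqrt(T / rho * (1 / a**2 + 1 / b**2))
print(omega_rr / omega_exact)                  # sqrt(10)/pi, about 0.66% high
```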
Three numerical methods [11] for solving partial differential equations with
certain conditions in two dimensions have been considered in the current chapter. Each
method has its advantages and disadvantages.
The iteration method is slow, self-correcting and well adapted to use with an
automatic sequence-controlled calculating machine. The arithmetical operations are
short and simple.
The relaxation method is faster and more flexible than the iteration method. The arithmetical operations are simple, but mistakes are easy to make and are not self-correcting. It requires constant vigilance and alertness on the part of the computer, and it is not adapted to use by an automatic calculating machine.
Finally, it must be realized that not all three methods may be applicable to the
given problem. To use the iteration method and relaxation method, a physical problem
must first be set up as a partial differential equation and this must then be converted to
partial-difference equations. The Rayleigh-Ritz method will give an approximate
solution of a problem without setting up a partial differential equation, as was done in
the case of the vibrating membrane. In problems where all these three methods are
applicable, the Rayleigh-Ritz method would probably be the third choice.
5.1 INTRODUCTION
Simple examples of two-point linear boundary value problems [23] are
There exist many numerical methods for solving such boundary value problems.
Among them, we will discuss only the finite-difference method and the shooting
method. Also, we will discuss the applications of Green’s function and Laplace’s
equation for solving boundary value problems.
To keep this chapter compact, we will omit the estimation of the truncation errors of the methods mentioned.
Let us consider a linear differential equation of order greater than one, with
conditions specified at the endpoints of an interval [𝑎, 𝑏]. We divide the interval [𝑎, 𝑏]
into N equal parts of width h [20]. We set x₀ = a and x_N = b, the interior mesh points being x_n = x₀ + nh; n = 1, 2, ……, N − 1. The corresponding values of y are denoted by y_n = y(x_n) = y(x₀ + nh).
Chapter-5: Solution of the Boundary Value Problems with Applications
We shall sometimes have to deal with points outside the interval [𝑎, 𝑏]. These
will be called the exterior mesh points, those to the left of the 𝑥0 being denoted by
𝑥−1 = 𝑥0 − ℎ, 𝑥−2 = 𝑥0 − 2ℎ, 𝑥−3 = 𝑥0 − 3ℎ and so on, and those to right of the 𝑥𝑁
being denoted by 𝑥𝑁+1 = 𝑥𝑁 + ℎ, 𝑥𝑁+2 = 𝑥𝑁 + 2ℎ, 𝑥𝑁+3 = 𝑥𝑁 + 3ℎ and so on. The
corresponding values of 𝑦 at the exterior mesh points are denoted in the obvious way
as 𝑦−1 , 𝑦−2 , 𝑦−3 , … … … … … & 𝑦𝑁+1 , 𝑦𝑁+2 , 𝑦𝑁+3 , … … … … … respectively.
The finite-difference method for the solution of the boundary value problem consists in replacing the derivatives occurring in the differential equation, and in the boundary conditions as well, by their finite-difference approximations, and then solving the resulting linear system of equations by a standard procedure [23].
In order to obtain the appropriate finite-difference approximation to the
derivatives, we proceed as follows.
y(x + h) = y(x) + h y′(x) + (h²/2) y″(x) + (h³/6) y‴(x) + ……    (5.2.1)
so that
y′(x) = [y(x + h) − y(x)]/h − ((h/2) y″(x) + (h²/6) y‴(x) + ……)
or, y′(x) = [y(x + h) − y(x)]/h + O(h)    (5.2.2)
Similarly,
y(x − h) = y(x) − h y′(x) + (h²/2) y″(x) − (h³/6) y‴(x) + ……    (5.2.3)
y′(x) = [y(x) − y(x − h)]/h + ((h/2) y″(x) − (h²/6) y‴(x) + ……)
or, y′(x) = [y(x) − y(x − h)]/h + O(h)    (5.2.4)
Subtracting (5.2.3) from (5.2.1) gives the central difference approximation for y′_n as
y′(x) = [y(x + h) − y(x − h)]/2h + O(h²)
or, y′(x) ≈ [y(x + h) − y(x − h)]/2h
or, y′(x_n) ≈ [y(x_n + h) − y(x_n − h)]/2h
i.e. y′_n ≈ (y_{n+1} − y_{n−1})/2h    (5.2.5)
Again, by adding (5.2.1) and (5.2.3), we get the central difference approximation for y″_n as
y″(x) = [y(x + h) − 2y(x) + y(x − h)]/h² + O(h²)
or, y″(x) ≈ [y(x + h) − 2y(x) + y(x − h)]/h²
or, y″(x_n) ≈ [y(x_n + h) − 2y(x_n) + y(x_n − h)]/h²
i.e. y″_n ≈ (y_{n+1} − 2y_n + y_{n−1})/h²    (5.2.6)
Similarly, the central difference approximations for y‴_n and y_n^(iv) are given as follows:
y‴_n ≈ (y_{n+2} − 2y_{n+1} + 2y_{n−1} − y_{n−2})/2h³    (5.2.7)
y_n^(iv) ≈ (y_{n+2} − 4y_{n+1} + 6y_n − 4y_{n−1} + y_{n−2})/h⁴    (5.2.8)
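These approximations can be checked numerically on a smooth test function; the sketch below (not part of the text) uses y = sin x, taking the third-derivative formula in its standard central form with coefficients ±2:

```python
from math import sin, cos

# Central-difference formulas (5.2.5)-(5.2.8) tested on y = sin x,
# whose derivatives at x are cos x, -sin x, -cos x, sin x.
x, h = 1.0, 0.01
y = lambda t: sin(t)
d1 = (y(x + h) - y(x - h)) / (2 * h)                                      # (5.2.5)
d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2                              # (5.2.6)
d3 = (y(x + 2*h) - 2*y(x + h) + 2*y(x - h) - y(x - 2*h)) / (2 * h**3)     # (5.2.7)
d4 = (y(x + 2*h) - 4*y(x + h) + 6*y(x) - 4*y(x - h) + y(x - 2*h)) / h**4  # (5.2.8)
print(abs(d1 - cos(x)), abs(d2 + sin(x)), abs(d3 + cos(x)), abs(d4 - sin(x)))
```

All four errors are O(h²), i.e. of the order 10⁻⁵ here.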
Now taking the values of y′_n and y″_n from (5.2.5) and (5.2.6) respectively and substituting them in (5.1.1), we get at the points x = x_n
(1 − (h/2)f₁)a + (−2 + h²g₁)y₁ + (1 + (h/2)f₁)y₂ = h²r₁
(1 − (h/2)f₂)y₁ + (−2 + h²g₂)y₂ + (1 + (h/2)f₂)y₃ = h²r₂
(1 − (h/2)f₃)y₂ + (−2 + h²g₃)y₃ + (1 + (h/2)f₃)y₄ = h²r₃
…………………………………………………………………
(1 − (h/2)f_{N−2})y_{N−3} + (−2 + h²g_{N−2})y_{N−2} + (1 + (h/2)f_{N−2})y_{N−1} = h²r_{N−2}
(1 − (h/2)f_{N−1})y_{N−2} + (−2 + h²g_{N−1})y_{N−1} + (1 + (h/2)f_{N−1})b = h²r_{N−1}
This system can be written in the matrix form
A y = b    (5.2.10)
where A is the tridiagonal matrix
A = [ d_1   c_1
      a_2   d_2   c_2
            a_3   d_3   c_3
                  …     …     …
                        a_{N-2}  d_{N-2}  c_{N-2}
                                 a_{N-1}  d_{N-1} ]
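Systems of the form (5.2.10) are usually solved with the tridiagonal (Thomas) algorithm rather than full elimination. The following minimal sketch is illustrative and not from the text; the sample system is the one arising from the model problem y″ = 2, y(0) = y(1) = 0 with h = 1/4, for which the finite-difference solution coincides with the exact quadratic y = x² − x.

```python
# Thomas algorithm for a tridiagonal system A y = b, with sub-, main and
# super-diagonals a, d, c as in (5.2.10); a[0] and c[-1] are unused.
def thomas(a, d, c, b):
    n = len(d)
    d, b = d[:], b[:]                 # work on copies
    for i in range(1, n):             # forward elimination
        m = a[i] / d[i - 1]
        d[i] -= m * c[i - 1]
        b[i] -= m * b[i - 1]
    y = [0.0] * n
    y[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):    # back substitution
        y[i] = (b[i] - c[i] * y[i + 1]) / d[i]
    return y

# sample system: y'' = 2, y(0) = y(1) = 0, h = 1/4 gives
#   y_{n-1} - 2 y_n + y_{n+1} = 2 h^2 at the three interior points
print(thomas([0, 1, 1], [-2, -2, -2], [1, 1, 0], [2/16, 2/16, 2/16]))
# interior values of y = x^2 - x: approximately [-0.1875, -0.25, -0.1875]
```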
The deflection of a beam is governed by the equation d⁴y/dx⁴ + 81y = φ(x) with the boundary conditions y(0) = y′(0) = y″(1) = y‴(1) = 0. Here φ(x) is given by

x       1/3    2/3     1
φ(x)     81    162    243

Evaluate the deflection at the pivot points of the beam using three sub-intervals by the finite-difference approximation method.
Solution: Here h = 1/3 and the pivot points are x₀ = 0, x₁ = 1/3, x₂ = 2/3 and x₃ = 1. The corresponding value of y at x₀ is y₀ = y(x₀) = 0, and y₁, y₂ and y₃ are to be determined. Using (5.2.8) in the given boundary value problem at x = x_n, we get (putting h = 1/3, so that 1/h⁴ = 81)
(y_{n+2} − 4y_{n+1} + 6y_n − 4y_{n−1} + y_{n−2})/h⁴ + 81y_n = φ(x_n)
The boundary condition y′(0) = 0 gives
y₁ − y_{−1} = 0
or, y₁ = y_{−1}    (5.3.3)
and the condition y″(1) = 0 gives
y₄ − 2y₃ + y₂ = 0
or, y₄ = 2y₃ − y₂    (5.3.4)
y₃ − 4y₂ + 8y₁ = 1
−2y₃ + 6y₂ − 4y₁ = 2
3y₃ − 4y₂ + 2y₁ = 3
Solving, y(1/3) = y₁ = 0.6154, y(2/3) = y₂ = 1.6923, y(1) = y₃ = 2.8462
The shooting method requires good initial guesses for the slope and can be
applied to both linear and non-linear problems [23]. The main advantage of this method
is its easy applicability. We discuss this method with reference to the second-order
boundary value problem defined by
To apply any initial value method in (5.4.1), we must know the values of 𝑦(0)
and 𝑦 ′ (0). Since 𝑦 ′ (0) is not given, we consider it as an unknown parameter m (say),
which must be determined so that the resulting solution yields the given value 𝑦(1) to
some desired level of accuracy. We thus guess at the initial slope and set up an iterative
procedure for converging to the correct slope. Let 𝑚0 and 𝑚1 be two guesses at the
initial slope 𝑦 ′ (0) and let 𝑦(𝑚0 , 1) and 𝑦(𝑚1 , 1) be the values of 𝑦 at 𝑥 = 1 obtained
from integrating the differential equation. Graphically, the solution may be represented
by as in figure-(5.1) and figure-(5.2).
Figure-(5.1) Figure-(5.2)
In figure-(5.1) the solutions of the initial value problems are drawn, while in
figure-(5.2), y(m, 1) is plotted as a function of m. A generally better approximation to m can be obtained by linear interpolation. The intersection of the line joining p₀ to p₁ with the horizontal line through y(1) has its m co-ordinate given by
(m₂ − m₀)/[y(1) − y(m₀, 1)] = (m₁ − m₀)/[y(m₁, 1) − y(m₀, 1)]
or, m₂ = m₀ + (m₁ − m₀)[y(1) − y(m₀, 1)]/[y(m₁, 1) − y(m₀, 1)]    (5.4.2)
Then we obtain 𝑦(𝑚2 , 1). Again use linear interpolation with (𝑚1 , 𝑦(𝑚1 , 1))
and (𝑚2 , 𝑦(𝑚2 , 1)) to obtain a better approximation 𝑚 = 𝑚3 and so on. This process
is repeated until convergence is obtained. i.e. until the value of 𝑦(𝑚𝑖 , 1) agrees with
𝑦(1) to the desired level of accuracy. The speed of convergence depends upon how
good the initial guesses were.
This method will be tedious to apply for higher-order boundary value problems
and in the case of the non-linear problems, linear interpolation yields unsatisfactory
results.
Apply the shooting method, taking m₀ = 0.8 and m₁ = 0.9, to solve the boundary value problem
y″ = y ; y(0) = 0, y(1) = 1.1752
Solution: The series solution of the corresponding initial value problem is
y(x) = y′(0)[x + x³/6 + x⁵/120 + x⁷/5040 + x⁹/362880 + ……]
or, y(1) = y′(0)[1 + 1/6 + 1/120 + 1/5040 + 1/362880 + ……] = 1.1752 y′(0)
Hence y(m₀, 1) = 0.9402 and y(m₁, 1) = 1.0577, and (5.4.2) gives
m₂ = 0.8 + (0.9 − 0.8)[(1.1752 − 0.9402)/(1.0577 − 0.9402)] = 1
This shows that y′(0) = 1 = m₂. We can now solve the resulting initial value problem (5.5.3) by any standard method mentioned in the previous chapters.
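The computation can be reproduced by integrating each trial slope numerically. This sketch assumes the boundary value problem behind the series above is y″ = y with y(0) = 0 and y(1) = 1.1752 (the series is that of y′(0) sinh x); classical RK4 is used for the integrations.

```python
from math import sinh

# Shooting method: integrate y'' = y from x = 0 to 1 for a trial slope m,
# then improve m by the linear interpolation (5.4.2).
def integrate(m, n=100):
    h = 1.0 / n
    y, yp = 0.0, m                     # y(0) = 0, y'(0) = m
    for _ in range(n):                 # RK4 for the system y' = yp, yp' = y
        k1 = (yp, y)
        k2 = (yp + h/2 * k1[1], y + h/2 * k1[0])
        k3 = (yp + h/2 * k2[1], y + h/2 * k2[0])
        k4 = (yp + h * k3[1], y + h * k3[0])
        y  += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        yp += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y

target = 1.1752                        # = sinh(1) to four decimals
m0, m1 = 0.8, 0.9
y0, y1 = integrate(m0), integrate(m1)
m2 = m0 + (m1 - m0) * (target - y0) / (y1 - y0)    # interpolation (5.4.2)
print(round(m2, 4))                    # close to 1.0, so y'(0) = 1
```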
d²u/dx² + k²u = −f(x) ; 0 < x < L    (5.6.1)
If the ends of the string are kept fixed, then this equation must be solved for 𝑢
subject to the boundary conditions
u(x) = (cos kx/k) ∫_{c₁}^{x} f(y) sin ky dy − (sin kx/k) ∫_{c₂}^{x} f(y) cos ky dy    (5.6.7)
Here, 𝑐1 and 𝑐2 are constants which must be so chosen as to ensure that the
boundary conditions (5.6.2) are satisfied. Inserting the condition 𝑢(0) = 0 into (5.6.7),
we find that we must choose 𝑐1 such that
∫_{c₁}^{0} f(y) sin ky dy = 0    (5.6.8)
The condition 𝑢(𝐿) = 0, when inserted into (5.6.7), will require that
u(L) = (cos kL/k) ∫_{c₁}^{L} f(y) sin ky dy − (sin kL/k) ∫_{c₂}^{L} f(y) cos ky dy = 0    (5.6.9)
which may be rearranged as
−(sin kL/k) ∫_{c₂}^{0} f(y) cos ky dy + (1/k) ∫_{0}^{L} f(y) sin k(y − L) dy = 0    (5.6.10)
Combining the results (5.6.8) and (5.6.10), we see that the solution (5.6.7) can
now be written in the form
u(x) = (1/k) ∫_{0}^{x} f(y) sin k(y − x) dy − (sin kx)/(k sin kL) ∫_{0}^{L} f(y) sin k(y − L) dy    (5.6.11)
= ∫_{0}^{x} f(y) [sin ky sin k(L − x)]/(k sin kL) dy + ∫_{x}^{L} f(y) [sin kx sin k(L − y)]/(k sin kL) dy
= ∫_{0}^{L} f(y) G(x, y) dy    (5.6.12)
Here
G(x, y) = sin ky sin k(L − x)/(k sin kL) ; 0 ≤ y ≤ x
G(x, y) = sin kx sin k(L − y)/(k sin kL) ; x ≤ y ≤ L    (5.6.13)
This function G(x, y) is a two-point function of position, known as the Green's function for the equation (5.6.1) with the boundary conditions (5.6.2).
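As a numerical check of (5.6.12)-(5.6.13) (a sketch, not part of the text), take f(y) = 1, k = 1 and L = 1. The integral can then also be evaluated by hand, giving u(x) = [sin(L − x) + sin x]/sin L − 1, which satisfies u″ + u = −1 with u(0) = u(L) = 0; the quadrature of the Green's-function integral should reproduce this closed form.

```python
from math import sin

k, L = 1.0, 1.0

def G(x, y):                                  # Green's function (5.6.13)
    lo, hi = min(x, y), max(x, y)
    return sin(k * lo) * sin(k * (L - hi)) / (k * sin(k * L))

def u_quad(x, n=2000):                        # trapezoidal rule for the integral
    h = L / n                                 # of G(x, y) * f(y) with f = 1
    s = 0.5 * (G(x, 0.0) + G(x, L)) + sum(G(x, i * h) for i in range(1, n))
    return s * h

u_exact = lambda x: (sin(L - x) + sin(x)) / sin(L) - 1.0
max_err = max(abs(u_quad(j / 10) - u_exact(j / 10)) for j in range(11))
print(max_err)                                # small: quadrature error only
```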
𝑟 2 = 𝑎2
𝑜𝑟, 𝑟 = ±𝑎
So that 𝐵 𝑠𝑖𝑛ℎ(𝑎) = 0
𝑜𝑟, 𝐵 = 0; 𝑎 ≠ 0
Here we have utilized the boundary conditions in writing down these functions.
Figure-(5.3)
The graphs of these functions will evidently intersect at some point x = s, at which the functions are equal but their first derivatives are unequal, as shown in figure-(5.3). Hence for x = s, we have
A = B sinh(a(1 − s))/sinh(as)    (5.7.7)
B = sinh(as)/(a sinh(a))    (5.7.8)
A = sinh(a(1 − s))/(a sinh(a))    (5.7.9)
Study on Different Numerical Methods for Solving Differential Equations Page | 100
The function G(s, x) is called the Green's function for this problem. It is a function of the two independent variables x and s in the interval [0,1] and is evidently symmetrical in these variables.
The Green’s function given by (5.7.12) is thus the solution of the boundary
value problem defined in (5.7.1) and (5.7.2).
d⁴w/dx⁴ + (k/D)w = D⁻¹q(x) ; w(0) = w(l) = w′(0) = w′(l) = 0    (5.8.1)
Here 𝐷 is the flexural rigidity of the beam, and 𝑘 is the spring constant of the
elastic foundation, and the load 𝑞(𝑥) acts vertically downwards per unit length of the
beam [17]. Mathematically, the system (5.8.1) belongs to a general class of boundary value problems of the form
d⁴y/dx⁴ + f(x)y(x) = g(x) ; a < x < b    (5.8.2)
Here 𝑓(𝑥) and 𝑔(𝑥) are continuous on [𝑎, 𝑏] and 𝐴𝑖 , 𝐵𝑖 are finite real arbitrary
constants. The analytical solution of (5.8.2) for arbitrary choices of 𝑓(𝑥) and 𝑔(𝑥)
cannot be determined. So, numerical methods are developed to overcome this
limitation. A simple condition has been formulated that guarantees the uniqueness [19] of the solution of the problem (5.8.2) and (5.8.3).
Derivations for the cubic B-spline: The given range of the independent variable is [a, b]. For this range we choose the equidistant points of the partition π: {a = x₀, x₁, ……, x_n = b}, i.e. x_i = x₀ + ih with h = (b − a)/n.
Let us define S₃(π) = {p(t) ∈ C[a, b]} such that p(t) reduces to a cubic polynomial on each sub-interval [x_i, x_{i+1}]. The basis function B_i(x) on the different intervals is defined as
B_i(x) = (1/6h³)(x − x_{i−2})³ ; x_{i−2} ≤ x ≤ x_{i−1}
B_i(x) = (1/6h³){h³ + 3h²(x − x_{i−1}) + 3h(x − x_{i−1})² − 3(x − x_{i−1})³} ; x_{i−1} ≤ x ≤ x_i
B_i(x) = (1/6h³){h³ + 3h²(x_{i+1} − x) + 3h(x_{i+1} − x)² − 3(x_{i+1} − x)³} ; x_i ≤ x ≤ x_{i+1}
B_i(x) = (1/6h³)(x_{i+2} − x)³ ; x_{i+1} ≤ x ≤ x_{i+2}
B_i(x) = 0 ; otherwise    (5.8.4)
Let us introduce four additional knots 𝑥−2 < 𝑥−1 < 𝑥0 ; 𝑥𝑛+2 > 𝑥𝑛+1 > 𝑥𝑛 .
From the above expression, it is obvious that each B_i(x) ∈ C[a, b]. The calculated values of B_i(x), B_i′(x) and B_i″(x) at the nodal points are given in the following table-1.
Since each 𝐵𝑖 (𝑥) is also a piecewise cubic polynomial with knots at the 𝜋,
each 𝐵𝑖 (𝑥) ∈ 𝑆3 (𝜋). Let 𝑢 = {𝐵−1 , 𝐵0 , 𝐵1 , . . . . . , 𝐵𝑛 , 𝐵𝑛+1 } and let 𝐵3 (𝜋) = 𝑠𝑝𝑎𝑛(𝑢).
The functions in 𝑢 are linearly independent on [𝑎, 𝑏], thus 𝐵3 (𝜋) is (𝑛 + 3)-
dimensional. Also, we have 𝐵3 (𝜋) = 𝑆3 (𝜋). Let 𝑠(𝑥) be the B-spline interpolating
function [17] at the nodal points and 𝑠(𝑥) ∈ 𝐵3 (𝜋).Then 𝑠(𝑥) can be written as
s(x) = Σ_{i=−1}^{n+1} c_i B_i(x)    (5.8.5)
Therefore, for a given function 𝑦(𝑥), there exists a unique cubic spline (5.8.5)
satisfying the interpolating conditions
We will find the values of y(x_i), y′(x_i), y″(x_i), y‴(x_i), y^(iv)(x_i) using table-1 in the system of equations in (5.8.4) and applying the above equations (5.8.9), (5.8.10) and (5.8.11) as
y(x_i) = s(x_i) = (c_{i−1} + 4c_i + c_{i+1})/6    (5.8.12)
y′(x_i) = s′(x_i) = (c_{i+1} − c_{i−1})/2h    (5.8.13)
y″(x_i) = s″(x_i) = (c_{i−1} − 2c_i + c_{i+1})/h²    (5.8.14)
y‴(x_i) = s‴(x_i) = (c_{i+2} − 2c_{i+1} + 2c_{i−1} − c_{i−2})/2h³    (5.8.15)
y^(iv)(x_i) = s^(iv)(x_i) = (c_{i+2} − 4c_{i+1} + 6c_i − 4c_{i−1} + c_{i−2})/h⁴    (5.8.16)
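The nodal values behind these identities can be verified directly from the basis (5.8.4). The sketch below (not from the text) centres one basis function at x_i = 0; with this standard basis B_i(x_i) = 2/3 and B_i(x_{i±1}) = 1/6, so only three basis functions contribute to s at a node, which yields (5.8.12).

```python
# Cubic B-spline basis (5.8.4) centred at x_i = 0, written in the scaled
# variable t = x/h so every piece lives on an integer interval of t.
h = 0.5

def B(x):
    t = x / h
    if -2 <= t <= -1:
        return (t + 2) ** 3 / 6
    if -1 <= t <= 0:
        return (1 + 3*(t + 1) + 3*(t + 1)**2 - 3*(t + 1)**3) / 6
    if 0 <= t <= 1:
        return (1 + 3*(1 - t) + 3*(1 - t)**2 - 3*(1 - t)**3) / 6
    if 1 <= t <= 2:
        return (2 - t) ** 3 / 6
    return 0.0

print(B(-h), B(0.0), B(h))            # nodal values 1/6, 2/3, 1/6

# At a node x_i only B_{i-1}, B_i, B_{i+1} are non-zero, so
#   s(x_i) = (c_{i-1} + 4 c_i + c_{i+1}) / 6     -- identity (5.8.12)
c = {-1: 2.0, 0: 5.0, 1: 3.0}         # arbitrary illustrative coefficients
s_at_node = sum(c[j] * B(0.0 - j * h) for j in c)
print(s_at_node, (c[-1] + 4 * c[0] + c[1]) / 6)   # both 25/6
```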
Solution of special case fourth-order boundary value problem: Let us consider the
boundary value problem
6(c_{i+2} − 4c_{i+1} + 6c_i − 4c_{i−1} + c_{i−2}) + f_i h⁴(c_{i−1} + 4c_i + c_{i+1}) = 6h⁴g_i    (5.8.21)
𝑦 𝑖𝑣 (𝑥) + 𝑝(𝑥)𝑦 ′′′ (𝑥) + 𝑞(𝑥)𝑦 ′′ (𝑥)+𝑟(𝑥)𝑦 ′ (𝑥) + 𝑡(𝑥)𝑦(𝑥) = 𝑢(𝑥) (5.8.26)
6(c_{i+2} − 4c_{i+1} + 6c_i − 4c_{i−1} + c_{i−2}) + 3hp_i(c_{i+2} − 2c_{i+1} + 2c_{i−1} − c_{i−2})
+ 6h²q_i(c_{i−1} − 2c_i + c_{i+1}) + 3h³r_i(c_{i+1} − c_{i−1}) + h⁴t_i(c_{i−1} + 4c_i + c_{i+1})
= 6h⁴u_i    (5.8.29)
Let (5.8.18) be the approximate solution of the boundary value problem. It must
satisfy the boundary value problem at knots. So, we have
This equation (5.8.32) together with equation (5.8.22) to (5.8.25) gives a non-
linear system of equations, which is solved to get the required solution of the boundary
value problem.
(c_{i+2} − 4c_{i+1} + 6c_i − 4c_{i−1} + c_{i−2})/h⁴ + p_i (c_{i+2} − 2c_{i+1} + 2c_{i−1} − c_{i−2})/2h³ = r(x_i, (c_{i−1} + 4c_i + c_{i+1})/6)    (5.8.39)
This equation (5.8.39) together with equations (5.8.40) to (5.8.43) gives a non-
linear system of equations, which is solved to get the required solution of boundary
value problem (5.8.33).
CHAPTER-6
6.1 INTRODUCTION
Also, a modified formula for solving the elliptic equation by finite-difference approximations will be offered here, in which we establish a combined finite-difference formula by means of the standard 5-point formula and the diagonal 5-point formula, and then improve the approximated values of the mesh points with the help of the Gauss-Seidel iteration formula.
Here u = (x − x₀)/h
or, x = x₀ + uh ⇒ dx = h du
Then
[y]_{x₀}^{x₀+5h} = h[u y₀′ + (u²/2)∆y₀′ + (1/2)(u³/3 − u²/2)∆²y₀′ + (1/6)(u⁴/4 − u³ + u²)∆³y₀′
+ (1/24)(u⁵/5 − 3u⁴/2 + 11u³/3 − 3u²)∆⁴y₀′ + (1/120)(u⁶/6 − 2u⁵ + 35u⁴/4 − 50u³/3 + 12u²)∆⁵y₀′ + ⋯⋯]_{u=0}^{u=5}
or, y(x₀ + 5h) − y(x₀) = h[5y₀′ + (25/2)∆y₀′ + (1/2)(125/3 − 25/2)∆²y₀′
+ (1/6)(625/4 − 125 + 25)∆³y₀′ + (1/24)(625 − 1875/2 + 1375/3 − 75)∆⁴y₀′
+ (1/120)(15625/6 − 6250 + 21875/4 − 6250/3 + 300)∆⁵y₀′ + ⋯⋯]
or, y₅ − y₀ = h[5y₀′ + (25/2)∆y₀′ + (175/12)∆²y₀′ + (75/8)∆³y₀′ + (425/144)∆⁴y₀′ + (95/288)∆⁵y₀′ + ⋯⋯]
2 12 8 144 288
Chapter-6: Two Proposed Methods for Solving Differential Equations
Similarly,
[y]_{x₀}^{x₀+4h} = h[u y₀′ + (u²/2)∆y₀′ + (1/2)(u³/3 − u²/2)∆²y₀′ + (1/6)(u⁴/4 − u³ + u²)∆³y₀′
+ (1/24)(u⁵/5 − 3u⁴/2 + 11u³/3 − 3u²)∆⁴y₀′ + (1/120)(u⁶/6 − 2u⁵ + 35u⁴/4 − 50u³/3 + 12u²)∆⁵y₀′ + ⋯⋯]_{u=0}^{u=4}
or, y(x₀ + 4h) − y(x₀) = h[4y₀′ + 8∆y₀′ + (1/2)(64/3 − 8)∆²y₀′
+ (1/6)(64 − 64 + 16)∆³y₀′ + (1/24)(1024/5 − 384 + 704/3 − 48)∆⁴y₀′
+ (1/120)(2048/3 − 2048 + 2240 − 3200/3 + 192)∆⁵y₀′ + ⋯⋯]
or, y₄ − y₀ = h[4y₀′ + 8∆y₀′ + (20/3)∆²y₀′ + (8/3)∆³y₀′ + (14/45)∆⁴y₀′ + (0)∆⁵y₀′ + ⋯⋯]
y_{n+1}^p = y_{n−4} + (5h/144)[19y′_{n−4} − 10y′_{n−3} + 120y′_{n−2} − 70y′_{n−1} + 85y′_n]    (6.2.6)
y_{n+1}^c = y_{n−3} + (2h/45)[7y′_{n−3} + 32y′_{n−2} + 12y′_{n−1} + 32y′_n + 7y′_{n+1}]    (6.2.7)
Here the indices p and c indicate the predicted and corrected values of y_{n+1} respectively at x = x_{n+1}.
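The corrector (6.2.7) is recognizable as Boole's rule over [x_{n−3}, x_{n+1}], and its weights can be checked to integrate polynomials of degree up to five exactly (a quick sketch using exact rational arithmetic):

```python
from fractions import Fraction as F

# Check that the corrector weights (2h/45)[7, 32, 12, 32, 7] integrate y' = x^p
# exactly over [0, 4] (h = 1) for p = 0..5, i.e. give 4^(p+1)/(p+1).
w = [F(7), F(32), F(12), F(32), F(7)]          # weights at x = 0, 1, 2, 3, 4
results = []
for p in range(6):
    approx = F(2, 45) * sum(wi * F(x) ** p for wi, x in zip(w, range(5)))
    exact = F(4) ** (p + 1) / (p + 1)
    results.append(approx == exact)
print(results)   # [True, True, True, True, True, True]
```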
Solve the differential equation dy/dx = (x + y)/2 at x = 2.5 with the initial values
y(0.0) = 2.000, y(0.5) = 2.636, y(1.0) = 3.595, y(1.5) = 4.968, y(2.0) = 6.873
Solution: Given y′ = dy/dx = (x + y)/2 = f(x, y). Taking the step length h = 0.5, from the initial values we get
x₀ = 0.0, y₀ = 2.000 ⇒ f₀ = ½(x₀ + y₀) = ½(0.0 + 2.000) = 1.0000
x₁ = 0.5, y₁ = 2.636 ⇒ f₁ = ½(x₁ + y₁) = ½(0.5 + 2.636) = 1.5680
x₂ = 1.0, y₂ = 3.595 ⇒ f₂ = ½(x₂ + y₂) = ½(1.0 + 3.595) = 2.2975
x₃ = 1.5, y₃ = 4.968 ⇒ f₃ = ½(x₃ + y₃) = ½(1.5 + 4.968) = 3.2340
x₄ = 2.0, y₄ = 6.873 ⇒ f₄ = ½(x₄ + y₄) = ½(2.0 + 6.873) = 4.4365
By the predictor (6.2.6),
y₅ = y₀ + (5h/144)[19y₀′ − 10y₁′ + 120y₂′ − 70y₃′ + 85y₄′]
= y₀ + (5h/144)[19f₀ − 10f₁ + 120f₂ − 70f₃ + 85f₄]
= 2.000 + (5(0.5)/144)[19(1.0000) − 10(1.5680) + 120(2.2975) − 70(3.2340) + 85(4.4365)]
= 9.4608
Then f₅ = ½(x₅ + y₅) = ½(2.5 + 9.4608) = 5.9804
First iteration:
y₅⁽¹⁾ = y₁ + (2h/45)[7f₁ + 32f₂ + 12f₃ + 32f₄ + 7f₅]
= 2.636 + (2(0.5)/45)[7(1.5680) + 32(2.2975) + 12(3.2340) + 32(4.4365) + 7(5.9804)]
= 9.4612
Then f₅ = ½(x₅ + y₅) = ½(2.5 + 9.4612) = 5.9806
Second iteration:
y₅⁽²⁾ = y₁ + (2h/45)[7f₁ + 32f₂ + 12f₃ + 32f₄ + 7f₅]
= 2.636 + (2(0.5)/45)[7(1.5680) + 32(2.2975) + 12(3.2340) + 32(4.4365) + 7(5.9806)]
= 9.4613
Then f₅ = ½(x₅ + y₅) = ½(2.5 + 9.4613) = 5.9807
Third iteration:
y₅⁽³⁾ = y₁ + (2h/45)[7f₁ + 32f₂ + 12f₃ + 32f₄ + 7f₅]
= 2.636 + (2(0.5)/45)[7(1.5680) + 32(2.2975) + 12(3.2340) + 32(4.4365) + 7(5.9807)]
= 9.4613
𝑦5 = 9.4613 at 𝑥5 = 2.5
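The whole worked example can be reproduced programmatically. This is a sketch of the proposed pair (6.2.6)-(6.2.7), iterating the corrector until two successive values agree to four decimal places:

```python
# Proposed predictor-corrector pair applied to dy/dx = (x + y)/2
# with the tabulated starting values.
f = lambda x, y: (x + y) / 2
h = 0.5
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.000, 2.636, 3.595, 4.968, 6.873]
fs = [f(x, y) for x, y in zip(xs, ys)]

# predictor (6.2.6)
y5 = ys[0] + 5*h/144 * (19*fs[0] - 10*fs[1] + 120*fs[2] - 70*fs[3] + 85*fs[4])
# corrector (6.2.7), iterated to convergence
x5 = 2.5
for _ in range(10):
    f5 = f(x5, y5)
    y5_new = ys[1] + 2*h/45 * (7*fs[1] + 32*fs[2] + 12*fs[3] + 32*fs[4] + 7*f5)
    if abs(y5_new - y5) < 5e-5:
        y5 = y5_new
        break
    y5 = y5_new

print(round(y5, 4))        # 9.4613; the exact value 4e^1.25 - 4.5 is about 9.4614
```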
Exact result: We have dy/dx = ½(x + y)
or, dy/dx − y/2 = x/2
This linear equation has the general solution y = ce^{x/2} − x − 2. Applying y(0) = 2,
2 = ce⁰ − 0 − 2 ⇒ c = 4
Then the above solution becomes y = 4e^{x/2} − x − 2, which gives y(2.5) = 4e^{1.25} − 4.5 ≈ 9.4614.
This is called the difference equation of Laplace's equation, which shows that the value of u at any point is the mean of its values at the four neighboring points. Equation (6.4.4) is called the standard 5-point formula, exhibited in figure-(6.1).
Figure-(6.1) Figure-(6.2)
We know that Laplace’s equation remains invariant when the co-ordinate axes
are rotated through an angle of 45°. Then the formula (6.4.4) can be re-written as
u_{i,j} = ¼[u_{i−1,j−1} + u_{i−1,j+1} + u_{i+1,j+1} + u_{i+1,j−1}]    (6.4.5)
This is called diagonal 5-point formula, which shows the value of 𝑢 at any point
is the mean of its values at the four diagonal points. The formula given by (6.4.5) is
represented in figure-(6.2).
Thus, a newer form for u_{i,j} has been proposed in (6.4.6), by which we can find the value of u at different mesh points by taking the mean of the values at all the points surrounding it. So, we call the proposed formula the surrounding 9-point formula. Figure-(6.3) represents (6.4.6).
Figure-(6.3)
Algorithm: Now we are to discuss the algorithm for obtaining mesh points of a given
domain under the formula (6.4.6).
1. At first, we consider the boundary values of the given domain.
2. Take the unknown non-boundary points as zero wherever they are needed as surrounding points in the evaluation of a mesh point; this is continued until all the mesh points have been approximated once.
3. Once a mesh point has been evaluated (with some surrounding points taken as zero), its current value is used as a surrounding point in the approximation of the next mesh point, if needed.
4. The first approximation values of the mesh points are then improved by the Gauss-Seidel iteration method.
5. Finally, we take the (n + 1)th approximation values as the required mesh points if the (n + 1)th approximation values are very close to the nth approximation values within the desired scale of accuracy.
Solve Laplace's equation u_xx + u_yy = 0 for the square mesh with the boundary values given in figure-(6.4) below, by the finite-difference method.
Figure-(6.4) Figure-(6.5)
By (6.4.6) we have
u_{i,j} = ⅛[u_{i−1,j} + u_{i+1,j} + u_{i,j−1} + u_{i,j+1} + u_{i−1,j−1} + u_{i−1,j+1} + u_{i+1,j−1} + u_{i+1,j+1}]
Now applying the above formula for the following approximations with the initial substitution u₂⁽⁰⁾ = u₃⁽⁰⁾ = u₄⁽⁰⁾ = 0.
First approximation:
u₁⁽¹⁾ = ⅛[0 + 10 + 20 + 0 + 0 + 0 + 40 + 20] = 11.25
u₂⁽¹⁾ = ⅛[10 + 20 + 30 + 40 + 50 + 0 + 0 + 11.25] = 20.16
u₃⁽¹⁾ = ⅛[11.25 + 20.16 + 40 + 50 + 60 + 60 + 60 + 0] = 37.68
u₄⁽¹⁾ = ⅛[20 + 11.25 + 20.16 + 37.68 + 60 + 60 + 60 + 40] = 38.64
Second approximation:
u₁⁽²⁾ = ⅛[0 + 10 + 20 + 20.16 + 37.68 + 38.64 + 40 + 20] = 23.31
u₂⁽²⁾ = ⅛[10 + 20 + 30 + 40 + 50 + 37.68 + 38.64 + 23.31] = 31.20
u₃⁽²⁾ = ⅛[23.31 + 31.20 + 40 + 50 + 60 + 60 + 60 + 38.64] = 45.39
u₄⁽²⁾ = ⅛[20 + 23.31 + 31.20 + 45.39 + 60 + 60 + 60 + 40] = 42.49
Third approximation:
u₁⁽³⁾ = ⅛[0 + 10 + 20 + 31.20 + 45.39 + 42.49 + 40 + 20] = 26.14
u₂⁽³⁾ = ⅛[10 + 20 + 30 + 40 + 50 + 45.39 + 42.49 + 26.14] = 33.00
u₃⁽³⁾ = ⅛[26.14 + 33.00 + 40 + 50 + 60 + 60 + 60 + 42.49] = 46.45
u₄⁽³⁾ = ⅛[20 + 26.14 + 33.00 + 46.45 + 60 + 60 + 60 + 40] = 43.20
Fourth approximation:
u₁⁽⁴⁾ = ⅛[0 + 10 + 20 + 33.00 + 46.45 + 43.20 + 40 + 20] = 26.58
u₂⁽⁴⁾ = ⅛[10 + 20 + 30 + 40 + 50 + 46.45 + 43.20 + 26.58] = 33.28
u₃⁽⁴⁾ = ⅛[26.58 + 33.28 + 40 + 50 + 60 + 60 + 60 + 43.20] = 46.63
u₄⁽⁴⁾ = ⅛[20 + 26.58 + 33.28 + 46.63 + 60 + 60 + 60 + 40] = 43.31
Fifth approximation:
u₁⁽⁵⁾ = ⅛[0 + 10 + 20 + 33.28 + 46.63 + 43.31 + 40 + 20] = 26.65
u₂⁽⁵⁾ = ⅛[10 + 20 + 30 + 40 + 50 + 46.63 + 43.31 + 26.65] = 33.32
u₃⁽⁵⁾ = ⅛[26.65 + 33.32 + 40 + 50 + 60 + 60 + 60 + 43.31] = 46.66
u₄⁽⁵⁾ = ⅛[20 + 26.65 + 33.32 + 46.66 + 60 + 60 + 60 + 40] = 43.33
Sixth approximation:
u₁⁽⁶⁾ = ⅛[0 + 10 + 20 + 33.32 + 46.66 + 43.33 + 40 + 20] = 26.66
u₂⁽⁶⁾ = ⅛[10 + 20 + 30 + 40 + 50 + 46.66 + 43.33 + 26.66] = 33.33
u₃⁽⁶⁾ = ⅛[26.66 + 33.33 + 40 + 50 + 60 + 60 + 60 + 43.33] = 46.67
u₄⁽⁶⁾ = ⅛[20 + 26.66 + 33.33 + 46.67 + 60 + 60 + 60 + 40] = 43.33
Seventh approximation:
u₁⁽⁷⁾ = ⅛[0 + 10 + 20 + 33.33 + 46.67 + 43.33 + 40 + 20] = 26.67
u₂⁽⁷⁾ = ⅛[10 + 20 + 30 + 40 + 50 + 46.67 + 43.33 + 26.67] = 33.33
u₃⁽⁷⁾ = ⅛[26.67 + 33.33 + 40 + 50 + 60 + 60 + 60 + 43.33] = 46.67
u₄⁽⁷⁾ = ⅛[20 + 26.67 + 33.33 + 46.67 + 60 + 60 + 60 + 40] = 43.33
Since the sixth and seventh approximations are so close for all the mesh points, we can choose
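The iteration above can be sketched in a few lines. The eight surrounding values of each of u1, …, u4 (boundary values from figure-(6.4) plus the other unknowns) are read off from the worked substitutions:

```python
# Surrounding 9-point Gauss-Seidel iteration for the four interior mesh points.
def solve(tol=1e-6, max_iter=200):
    u = {1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}     # initial zero substitution
    # fixed boundary neighbours of each point, read off the worked example:
    fixed = {1: [0, 10, 20, 40, 20], 2: [10, 20, 30, 40, 50],
             3: [40, 50, 60, 60, 60], 4: [20, 60, 60, 60, 40]}
    others = {1: [2, 3, 4], 2: [3, 4, 1], 3: [1, 2, 4], 4: [1, 2, 3]}
    for _ in range(max_iter):
        old = dict(u)
        for i in (1, 2, 3, 4):                # Gauss-Seidel: newest values used
            u[i] = (sum(fixed[i]) + sum(u[j] for j in others[i])) / 8
        if all(abs(u[i] - old[i]) < tol for i in u):
            break
    return u

print({i: round(v, 2) for i, v in solve().items()})
# converges to {1: 26.67, 2: 33.33, 3: 46.67, 4: 43.33}
```

The fixed point can also be found by hand: the four mean-value equations give u1 = 240/9, u2 = 300/9, u3 = 420/9, u4 = 390/9, confirming the tabulated values.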
Comment: The mesh points of the given domain in figure-(6.4) can also be obtained by using the standard 5-point and diagonal 5-point formulae; the values of the mesh points obtained by these formulae are given below.
1. Since the surrounding 9-point formula depends upon all the mesh points around a point to determine it, it is more contributive and logical, which gives better accuracy.
2. The initial zero substitution (taking as zero any unknown mesh points that surround a required mesh point) enables us to solve a bigger domain in which most of the mesh points are absent, i.e. are to be estimated.
3. Using the Gauss-Seidel iteration formula gives the method a quick ending, which saves estimation time.
CHAPTER-7
CONCLUSION
In this thesis paper, we have discussed some numerical methods for the solution
of ordinary differential equations (in Chapter-2 & Chapter-3), partial differential
equations (in Chapter-4) and boundary value problems (in chapter-5). Also, we have
proposed two modified numerical methods (in Chapter-6) in this thesis paper.
Also, from Section-2.8 it can be said that the computed values of y deviate rapidly in Euler's method, and this disturbance is resolved in Section-2.9 by the modified Euler's method.
Thus, we can conclude that for practical purposes the Runge-Kutta method is to
be chosen for better accuracy.
Thus, we can choose the iteration method as the best among the three methods, while the Rayleigh-Ritz method would probably be the third one in practice.
Chapter-7: Conclusion
Also, from Section-5.7, we have seen that the Green's function is applicable for solving a two-point boundary value problem numerically.
1. Since the surrounding 9-point formula depends upon all mesh points around it
to determine any mesh point, it is more contributive and logical, which may give
better accuracy.
2. The initial zero substitution may enable us to solve a bigger domain in which most of the mesh points are absent.
3. Using the Gauss-Seidel iteration formula may give the method a quick ending, which will save estimation time.
1. The surrounding 9-point formula is not applicable to the domains having less
than nine mesh (grid) points.
REFERENCES
[08]. F. LANG, X.-P. XU, 2011. A new cubic B-spline method for linear fifth-order boundary value problems. Journal of Applied Mathematics and Computing 36 (2011), p. 101.
[15]. M. KUMAR, Y. GUPTA, 2010. Methods for solving singular boundary value problems using splines: a review. Journal of Applied Mathematics and Computing 32 (2010), p. 265.
[17]. P. M. PRENTER, 1989. Splines and Variational Methods. John Wiley & Sons, New York.