Study On Different Numerical Methods For Solving Differential Equations


Study on different numerical methods for solving

differential equations

M.S (Thesis) in Pure Mathematics

A thesis submitted for the partial fulfillment of the requirement of the


degree of Master of Science in mathematics

Submitted by: Mahtab Uddin


Exam Roll No: 2009/08
Class Roll No: 10736
Registration No: 10781
Session: 2008-2009
M. S (Final) Exam: 2009 (Held in 2010-11)

Thesis supervisor: Dr. Munshi Nazrul Islam


Professor of Mathematics,
University of Chittagong.

Department of Mathematics,
University of Chittagong,
Chittagong-4331, Bangladesh
December 2011
Certification
This thesis, entitled “Study on Different Numerical Methods for Solving
Differential Equations” and submitted by Mahtab Uddin in partial fulfillment of the
requirements for the degree of Master of Science in Mathematics to the Department of
Mathematics, University of Chittagong, Bangladesh, was carried out by him under my
supervision.

Dr. Munshi Nazrul Islam


Professor, Department of Mathematics
University of Chittagong
Chittagong-4331, Bangladesh

Dedication

This thesis paper is dedicated to my beloved grandfather


for his cordial inspiration in my childhood

Acknowledgments
First of all, I give all praise to almighty Allah for enabling me to complete
this thesis work.

With great pleasure I would like to express my heartiest gratitude, cordial thanks,
deepest sense of respect and appreciation to my reverend teacher and thesis supervisor
Dr. Munshi Nazrul Islam, Professor, Department of Mathematics, University of
Chittagong, for his indispensable guidance, sympathetic encouragement, valuable
suggestions and generous help during the course of study and the progress of this thesis
work.

I also acknowledge my gratefulness to Dr. Musleh Uddin Ahmed and Dr. Nil
Raton Bhattacharjee, Professor and Ex-Chairman, Department of Mathematics,
University of Chittagong, for their valuable suggestions and kind inspiration in carrying
out this thesis work.

I am indebted to my respected teacher Mr. Milon Kanti Dhar, Professor,
Department of Mathematics, University of Chittagong, for his help and discussion
throughout the progress of this thesis and his generous help throughout my university life.

I express my profuse thanks to Dr. Talky Bhattacharjee, Professor, Department
of Mathematics, Dr. Ganesh Chandra Roy, Professor, Department of Mathematics, and Mr.
Forkan Uddin, Assistant Professor, Department of Mathematics, University of
Chittagong, for their positive support in preparing my thesis paper.

I would like to express my deep sense of gratitude to Dr. Abul Kalam Azad,
Professor & Chairman, Department of Mathematics, University of Chittagong, and all
of my honorable teachers of this department for their fruitful advice and
encouragement.

Cordial thanks are also extended to all my classmates, especially to Mustafij,
Khondoker, Forhad, Sumon, Thowhid, Uttam, Khorshed, Masud and Maksud. Also
thanks to the office staff and seminar man of the Department of Mathematics, University
of Chittagong for their co-operation and assistance during the study time.

A special note of appreciation goes to Moniruzzaman Khan, Major, B.M.A,
Army Education Corps, and Mr. Khalilur Rahman, Lab Assistant, B.M.A, Army Education
Corps, for their kind advice and inspiration during the thesis work.

Finally, I am highly grateful to my immediate senior brothers Md. Shahidul
Islam, Dewan Ferdous Wahid, and Md. Rashedul Islam for their indispensable
guidance and for their academic and other support throughout my thesis work and my
university life.

Chittagong AUTHOR
December 2011.

Abstract
This thesis paper is mainly analytic and comparative among various numerical
methods for solving differential equations, but Chapter-6 contains two proposed
numerical methods based on (i) a predictor-corrector formula for solving ordinary
differential equations of first order and first degree and (ii) a finite-difference
approximation formula for solving partial differential equations of elliptic type.

Two types of problems are discussed in detail in this thesis work, namely
ordinary differential equations in Chapter-2 & Chapter-3 and partial differential
equations in Chapter-4. Also, Chapter-5 highlights the boundary value problems.

The various chapters of this thesis paper are organized as follows

Chapter-1 of the thesis is an overview of differential equations and their


solutions by numerical methods.

Chapter-2 deals with the solution of ordinary differential equations by Taylor’s
series method, Picard’s method of successive approximation and Euler’s method.
The derivation of Taylor’s series method, with its truncation error and an application, is
discussed here. The solution of ordinary differential equations by Picard’s method of
successive approximations and its application is discussed in detail. Euler’s method is
defined, and the simple pendulum problem is solved to demonstrate it. Error
estimations and geometrical representations of Euler’s method and the improved
Euler’s method are mentioned; the improved method is stated in predictor-corrector
form, which is discussed further in Chapter-3.

Also, the comparison between Taylor’s series method and Picard’s method
of successive approximation is given, and the advantages and disadvantages of
these three methods are narrated.

Chapter-3 provides a complete idea of the Predictor-Corrector method.


Derivations of Milne’s predictor-corrector formulae and the Adams-Moulton
predictor-corrector formulae, with their local truncation errors and applications, are
discussed here. Solutions of ordinary differential equations by the Runge-Kutta method
with error estimation are studied in this chapter. Some improved extensions of the
Runge-Kutta method are explained. Also, the general form of the Runge-Kutta method
is given here.

The law of the rate of nuclear decay is solved in this chapter by means of the
standard fourth-order Runge-Kutta method, and the obtained solution is compared
with the exact solution, an application of numerical methods to nuclear physics.
The comparison between the predictor-corrector method and the Runge-Kutta method
is discussed in detail, along with the advantages and disadvantages of these two
methods.

Chapter-4 gives a review of the solution of partial differential equations. Three
types of partial differential equations, namely elliptic equations, parabolic equations and
hyperbolic equations, with methods for their solution, are discussed at length. To solve
elliptic equations, the methods of iteration and relaxation are discussed. Schmidt’s
method and the Crank-Nicolson method are discussed for solving parabolic equations.
The solution of the vibrations of a stretched string is mentioned as a method of solution
of hyperbolic equations.

The solution of the vibrations of a rectangular membrane by the Rayleigh-Ritz
method is given here. A comparison between the iterative method and the relaxation
method is highlighted, and then a full discussion of the Rayleigh-Ritz method together
with the methods of iteration and relaxation is reviewed in this chapter.

Chapter-5 deals with the solution of boundary value problems in both
ordinary differential equations and partial differential equations. It provides a brief
discussion of the finite-difference approximation method and the shooting method with
their applications. Also, the application of Green’s functions to solve boundary value
problems is discussed in detail.

Moreover, the B-spline method for solving two-point boundary value problems
of order four is introduced in this chapter at length. Derivations of cubic B-splines are
presented. Cubic B-spline solutions of the special linear fourth-order boundary value
problems, the general case of the boundary value problem, and the treatment of
non-linear problems and singular problems are discussed here.

Chapter-6 contains proposals for the modification of two numerical methods.
The first is a modification of Milne’s predictor-corrector formulae for
solving ordinary differential equations of first order and first degree, namely
Milne’s (modified) predictor-corrector formulae. One more step-length and one more
term of Newton’s interpolation formula are calculated in deriving the predictor and
corrector formulae of Milne’s (modified) predictor-corrector method. Also, a modified
formula for solving elliptic equations by finite-difference approximation is proposed,
namely the surrounding 9-point formula. This formula is obtained by combining the
standard 5-point formula and the diagonal 5-point formula, and is more effective for
finding the mesh points of a given domain in a certain region.

Moreover, the advantages of proposed methods over previous methods are


mentioned at the end of this chapter.

Chapter-7 provides the conclusions of this thesis paper. In this chapter, we
have chosen the better methods in every chapter by comparing them. Also, the
advantages and limitations of Milne’s (modified) predictor-corrector formulae and the
surrounding 9-point formula are given here.

Finally, recommendations for future research and a list of a few further works
are mentioned.

Contents

ARTICLES PAGES

Certification…………………………………………………………………………….. i
Dedication………..……………………………………………………………………... ii
Acknowledgments…………………………………………………………………….... iii
Abstract…………………………………………………………………………………. iv-v
Contents……………………………………………………………………………….. vi-x

CHAPTER-1: BASIC CONCEPTS OF DIFFERENTIAL EQUATIONS AND


NUMERICAL METHODS……………………………………………………………. 01-11
1.1 Introduction……………………………………………………………............ 01
1.2 Definition of differential equation………………………………………......... 01
1.3 Order and degree of differential equations………………………………........ 02
1.4 Classification of differential equations-
i. Ordinary differential equations........................................................ 02
ii. Partial differential equations………..………….............................. 03
1.5 Reduction of a differential equation to the first-order system……..……......... 04
1.6 Physical examples of differential equations-
i. Laplace’s equation…………..………………………………......... 04
ii. Electrical circuit………..……………………………………......... 05
1.7 Linearity of differential equations-
i. Linear differential equations…………………………………........ 05
ii. Non-linear differential equations……………………………......... 06
1.8 Initial value problems……………………………………………………......... 07
1.9 Boundary value problems……………………………………………….......... 07
1.10 Numerical methods………………………………………………………........ 08
1.11 Why numerical methods are preferable? ……………………………….......... 09
1.12 Contributions of numerical methods…………………………………….......... 10

CHAPTER-2: SOLUTION OF DIFFERENTIAL EQUATIONS OF FIRST
ORDER AND FIRST DEGREE BY NUMERICAL METHODS OF EARLY
STAGE.............................................................................................................................. 12-35

2.1 Introduction……………………………………………………….................... 12
2.2 Taylor’s series method-
i. Derivation………………………………………………................. 12
ii. Truncation error……………………………………………........... 14
2.3 Application of Taylor’s series method-
i. Approximation by Taylor’s series method…………………........... 14
ii. Exact result……………………………………………………....... 16
2.4 Derivation of Picard’s method of successive approximation……………......... 17
2.5 Application of the Picard’s method of successive approximation-
i. Approximation by Picard’s method of successive approximation... 18
ii. Exact result……………………………………………………....... 21
iii. Graphical representation…………………………………….......... 22
2.6 Comparison between Taylor’s series method and Picard’s method of
successive approximation……………………………………………….......... 23
2.7 Euler’s method-
i. Derivation………………………………………………................. 23
ii. Truncation error……………………………………………........... 24
2.8 Physical application of Euler’s method-
i. Approximation by Euler’s method………………………..…......... 25
ii. Graphical representation of the application………..…………....... 28
2.9 Modification of Euler’s method-
i. Derivation………………………………………………................. 29
ii. Truncation error……………………………………………........... 31
2.10 Application of modified Euler’s method-
i. Approximation by modified Euler’s method …………….............. 32
ii. Exact result……………………………………………………....... 34

CHAPTER-3: SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS OF
FIRST ORDER AND FIRST DEGREE BY PREDICTOR-CORRECTOR
METHOD AND RUNGE-KUTTA METHOD................................................................ 36-63
3.1 Introduction………………………………………………………………........ 36
3.2 Definition of predictor-corrector method…………………………………....... 36
3.3 Milne’s predictor-corrector method-

i. Derivation of Milne’s predictor formula..……………………........ 37
ii. Derivation of Milne’s corrector formula………………….…......... 38
iii. Local truncation error…………..…………………......................... 39
3.4 Application of Milne’s predictor-corrector method………………………....... 39
3.5 Adams-Moulton predictor-corrector method-
i. Derivation of Adams-Moulton predictor formula…………............ 41
ii. Derivation of Adams-Moulton corrector formula……………........ 42
iii. Local truncation error…………..…………………......................... 43
3.6 Application of Adams-Moulton predictor-corrector method……………......... 45
3.7 Comments on predictor-corrector methods………………………………........ 47
3.8 Runge-Kutta method-
i. Derivation of Runge-Kutta formulae………………………........... 48
ii. Error estimation in Runge-Kutta formulae………………….......... 52
3.9 Physical application of Runge-Kutta method-
i. Approximation by Runge-Kutta method…………………….......... 53
ii. Exact result……………………………………………………....... 58
3.10 Extensions of Runge-Kutta formulae……………………………………......... 59
3.11 Generalized formula for Runge-Kutta method………………………….......... 62
3.12 Comparison between the predictor-corrector method and Runge-Kutta method. 62

CHAPTER-4: SOLUTION OF PARTIAL DIFFERENTIAL EQUATIONS............ 64-90


4.1 Introduction………………………………………………………………........ 64
4.2 Classification of partial differential equation……………………………......... 64
4.3 Finite-difference approximations to partial derivatives…………………......... 64
4.4 Solution of elliptic equations-
i. Solution of Laplace’s equation………………………………........ 66
ii. Solution of Poisson’s equation………………………………......... 68
iii. Solution by relaxation method……………………………........…. 69
4.5 Application of solving elliptic equations………………………….......…….... 71
4.6 Solution of parabolic equations (one-dimensional heat equation)-
i. Schmidt method………………………………………….….......... 76
ii. Crank-Nicolson method……………………………………......... 77
iii. Iterative method………………………………………………....... 78

4.7 Application of solving parabolic equation……………………………............. 79
4.8 Solution of hyperbolic equations (wave equation) ………………………....... 81
4.9 Application of solving hyperbolic equation………………………………....... 82
4.10 Comparison between iterative method and relaxation method………….......... 84
4.11 The Rayleigh-Ritz method-
i. Introduction…………………………………………….................. 85
ii. Vibration of a rectangular membrane……………………….......... 85
4.12 Comparative discussion of the Rayleigh-Ritz method with the iterative method
and relaxation method…………………………………………........................ 90

CHAPTER-5: SOLUTION OF THE BOUNDARY VALUE PROBLEM WITH
APPLICATIONS………………………………………………………………........... 91-106
5.1 Introduction………………………………………………………………........ 91
5.2 Finite-difference method………………………………………………............ 91
5.3 Application of finite-difference method…………………………………........ 94
5.4 Shooting method…………………………………………………………........ 96
5.5 Application of shooting method…………………………………………......... 97
5.6 Green’s function to solve boundary value problem………………………....... 98
5.7 Application of Green’s function……………………………………................ 99
5.8 Cubic B-Spline method for solving two-point boundary value problems of
order four-
i. Introduction………………………………….............…………..... 101
ii. Derivations for Cubic B-spline………………………………........ 102
iii. Solution of special case fourth-order boundary value problem....... 103
iv. General linear fourth-order boundary value problem....................... 104
v. Non-linear fourth-order boundary value problem………………… 104
vi. Singular fourth-order boundary value problem…………………… 105

CHAPTER-6: TWO PROPOSED METHODS FOR SOLVING DIFFERENTIAL


EQUATIONS…………………………………………………….................................... 107-117
6.1 Introduction………………………………………………………………........ 107
6.2 Milne’s (modified) predictor-corrector method-
i. Derivation of Milne’s (modified) predictor formula……………… 107

ii. Derivation of Milne’s (modified) corrector formula…………........ 109
6.3 Application of Milne’s (modified) predictor-corrector method-
i. Approximation by Milne’s (modified) predictor-corrector
formulae……………………………………….………………...... 110
ii. Exact result……………………………………….……………….. 111
iii. Comment………………………………………………………….. 112
6.4 Surrounding 9-point formula-
i. Derivation of Surrounding 9-point formula…………………….… 112
ii. Algorithm………………….....………………………………….... 114
6.5 Application of surrounding 9-point formula-
i. Approximation by surrounding 9-point formula……….......……... 114
ii. Comment………………………………………………………….. 117
6.6 Advantages of proposed methods over previous methods………………......... 117

CHAPTER-7: CONCLUSION……………………………………………………… 118-120

REFERENCES……………………………………………………………….……….... 121-122

CHAPTER-1

BASIC CONCEPTS OF DIFFERENTIAL


EQUATIONS AND NUMERICAL METHODS

1.1 INTRODUCTION

Differential equations arise in many areas of science and technology,
specifically whenever a deterministic relation involving some continuously varying
quantities and their rates of change in space and/or time (expressed as derivatives)
is known or postulated. This is illustrated in classical mechanics, where the motion of
a body is described by its position and velocity as time varies. Newton’s laws allow
one to relate the position, velocity, acceleration and the various forces acting on a body,
and to state the relation as a differential equation for the unknown position of the body
as a function of time.

An example of modeling a real-world problem using differential equations is
the determination of the velocity of a ball falling through the air, considering only
gravity and air resistance. The ball’s acceleration towards the ground is the acceleration
due to gravity minus the deceleration due to air resistance. Gravity is constant,
but air resistance may be modeled as proportional to the velocity of the ball. This means
that the acceleration of the ball, which is the derivative of its velocity, depends on the
velocity. Finding the velocity as a function of time involves solving a differential
equation.
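The falling-ball model above amounts to the first-order equation v′ = g − (k/m)v. As a rough illustrative sketch (not part of the thesis), the following Python code steps this equation with Euler’s method, the simplest scheme treated later in Chapter-2, and compares the result with the closed-form solution; the constants g, k, m and the initial condition v(0) = 0 are assumed purely for illustration.

```python
import math

# Hypothetical constants: gravitational acceleration g, drag coefficient k, mass m.
g, k, m = 9.81, 0.5, 1.0

def v_exact(t):
    # Closed-form solution of v' = g - (k/m)*v with v(0) = 0.
    return (m * g / k) * (1.0 - math.exp(-k * t / m))

# Euler's method: v_{n+1} = v_n + h * v'_n with step size h.
h, v, t = 0.001, 0.0, 0.0
for _ in range(5000):            # integrate up to t = 5
    v += h * (g - (k / m) * v)
    t += h

print(v, v_exact(t))             # the two values agree closely
```

Both curves approach the terminal velocity mg/k; halving h roughly halves Euler’s error, which previews the truncation-error discussions of Chapter-2.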

The study of differential equations is a wide field in pure and applied
mathematics, physics, meteorology, and engineering. All of these disciplines are
concerned with the properties of differential equations of various types. Pure
mathematics focuses on the existence and uniqueness of solutions, while applied
mathematics emphasizes the rigorous justification of methods for approximating
solutions. Differential equations play an important role in modeling virtually every
physical, technical, or biological process, from celestial motion, to bridge design, to
interactions between neurons. Differential equations arising from real-life problems
may not be directly solvable, i.e. they may not have closed-form solutions. Instead,
solutions can be approximated using numerical methods.

1.2 DEFINITION OF DIFFERENTIAL EQUATION

A differential equation is a mathematical equation for an unknown function of


one or more variables that relates the values of the function itself and its derivatives of
various orders.

The general form of a differential equation is [5] as follows

a₀(x) dⁿy/dxⁿ + a₁(x) dⁿ⁻¹y/dxⁿ⁻¹ + ⋯ + aₙ₋₁(x) dy/dx + aₙ(x)y = f(x)   (1.2.1)

Here aᵢ(x); i = 0, 1, 2, …, n and f(x) are given functions of x, and y = y(x)
is an unknown function of x.

1.3 ORDER & DEGREE OF DIFFERENTIAL EQUATIONS

Let us consider the following differential equations

x dy/dx = 3y   (1.3.1)

x d²y/dx² + 6 dy/dx + 3y = 0   (1.3.2)

√(dy/dx) + y = x   (1.3.3)

(d²y/dx²)³ + (dy/dx)² = x   (1.3.4)

The order [22] of a differential equation is the order of the highest order
derivative appearing in the equation.

For example, orders of the differential equations (1.3.1), (1.3.2), (1.3.3) and
(1.3.4) are 1,2,1 & 2 respectively.

The degree of a differential equation is the degree of the highest order derivative
involved in it when the derivatives are free from radicals and fractions, i.e., when the
differential equation is written as a polynomial in the derivatives.

For example, degrees of the differential equations (1.3.1), (1.3.2), (1.3.3) and
(1.3.4) are 1,1,1 & 3 respectively.
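The order of each example can also be checked mechanically. The sketch below assumes SymPy is available and applies its ode_order helper (from sympy.solvers.deutils) to equations (1.3.1)-(1.3.4) rewritten as expressions equal to zero; the degree still has to be read off by hand after clearing radicals and fractions.

```python
from sympy import Function, symbols, sqrt
from sympy.solvers.deutils import ode_order

x = symbols('x')
y = Function('y')

# Equations (1.3.1)-(1.3.4), each moved to the form "expression = 0".
eqs = [
    x * y(x).diff(x) - 3 * y(x),                         # (1.3.1)
    x * y(x).diff(x, 2) + 6 * y(x).diff(x) + 3 * y(x),   # (1.3.2)
    sqrt(y(x).diff(x)) + y(x) - x,                       # (1.3.3)
    y(x).diff(x, 2)**3 + y(x).diff(x)**2 - x,            # (1.3.4)
]

print([ode_order(e, y(x)) for e in eqs])   # orders 1, 2, 1, 2 as stated above
```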

1.4 CLASSIFICATION OF DIFFERENTIAL EQUATIONS

Depending on the number of independent variables, differential equations
can be classified into two categories.

Ordinary differential equation: In mathematics, an ordinary differential equation is a


relation that contains functions of only one independent variable and one or more of
their derivatives with respect to that variable.

Because the derivative is the rate of change, such an equation states how a
function changes but does not specify the function itself. Given sufficient initial
conditions, however, such as a specific function value, the function can be found by
various methods, most based on integration.

An implicit ordinary differential equation [25] of order n depending on y⁽ⁿ⁾ has
the form

F(x, y, y⁽¹⁾, y⁽²⁾, y⁽³⁾, …, y⁽ⁿ⁻¹⁾, y⁽ⁿ⁾) = 0   (1.4.1)

To distinguish it from the above case, an equation of the form

F(x, y, y⁽¹⁾, y⁽²⁾, y⁽³⁾, …, y⁽ⁿ⁻¹⁾) = y⁽ⁿ⁾   (1.4.2)

is called an explicit ordinary differential equation.

A simple example of an ordinary differential equation is Newton’s second law


of motion of the form

m d²x(t)/dt² = F(x(t))   (1.4.3)

for the motion of a particle of constant mass 𝑚.

In general, the force 𝐹 depends upon the position 𝑥(𝑡) of the particle at time 𝑡,
and thus the unknown function 𝑥(𝑡) appears on both sides of (1.4.3), as indicated in the
notation 𝐹(𝑥(𝑡)).

Ordinary differential equations arise in many different contexts including


geometry, mechanics, astronomy and population modeling.

Partial differential equation: In mathematics, partial differential equations are


relations involving unknown functions of several independent variables and their partial
derivatives with respect to those variables.

Partial differential equations are used to formulate, and thus aid the solution of,
problems involving functions of several variables [10], such as the propagation of sound
or heat, electrostatics, electrodynamics, fluid flow, and electricity. Seemingly distinct
physical phenomena may have identical mathematical formulations and thus be governed
by the same underlying dynamics. Partial differential equations find their generalization
in stochastic partial differential equations.

A partial differential equation for the function u = u(x₁, x₂, x₃, …, xₙ) is
of the form

F(x₁, x₂, x₃, …, xₙ, ∂u/∂x₁, ∂u/∂x₂, …, ∂u/∂xₙ, ∂²u/∂x₁∂x₂, ∂²u/∂x₂∂x₃, …) = 0   (1.4.4)

where F is a function of u and its partial derivatives.

As an example of a partial differential equation, for a scalar function φ = φ(x, y, z)
and velocity v of the wave at any time t, the wave equation in cartesian coordinates
can be mentioned as follows

∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z² = (1/v²) ∂²φ/∂t²   (1.4.5)


1.5 REDUCTION OF A DIFFERENTIAL EQUATION TO THE FIRST ORDER


SYSTEM OF EQUATIONS

Any differential equation of order n can be written as a system of n first-order
differential equations. Given an explicit ordinary differential equation of order n with
dimension one [24] as follows

y⁽ⁿ⁾ = F(x, y, y⁽¹⁾, y⁽²⁾, y⁽³⁾, …, y⁽ⁿ⁻¹⁾)   (1.5.1)

define a new family of unknown functions

yᵢ = y⁽ⁱ⁻¹⁾ ;  i = 1, 2, 3, …, n   (1.5.2)

The original differential equation can then be re-written as a system of differential
equations of order one and dimension n, given by

y₁′ = y₂
y₂′ = y₃
y₃′ = y₄
......................................
yₙ₋₁′ = yₙ
yₙ′ = F(x, y₁, y₂, y₃, …, yₙ)   (1.5.3)

This can be written concisely in vector notation as

y′ = F(x, y)  with  y = (y₁, y₂, y₃, …, yₙ)   (1.5.4)

where the vector field F is given componentwise by

F(x, y) = (y₂, y₃, …, yₙ, F(x, y₁, y₂, …, yₙ))   (1.5.5)
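As an illustrative sketch of this reduction (with the equation and step size chosen only for the example), the second-order equation y″ = −y, whose solution with y(0) = 1, y′(0) = 0 is cos x, becomes the system y₁′ = y₂, y₂′ = −y₁, which can then be stepped by any first-order method such as Euler’s:

```python
import math

# y'' = -y rewritten via y1 = y, y2 = y' as the first-order system
#   y1' = y2,  y2' = -y1,
# following the substitution y_i = y^(i-1) of this section.
def F(x, Y):
    y1, y2 = Y
    return (y2, -y1)

h = 1e-4
x, Y = 0.0, (1.0, 0.0)            # initial conditions y(0) = 1, y'(0) = 0
while x < 1.0 - 1e-12:
    f1, f2 = F(x, Y)
    Y = (Y[0] + h * f1, Y[1] + h * f2)
    x += h

print(Y[0], math.cos(1.0))        # Euler approximation vs exact cos(1)
```

The same reduction is what makes the single-equation methods of Chapters 2 and 3 applicable to higher-order problems.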

1.6 PHYSICAL EXAMPLES OF DIFFERENTIAL EQUATIONS

Laplace’s equation: In mathematics, Laplace’s equation is a second-order partial
differential equation as follows

∇²φ = 0   (1.6.1)

Here ∇² is the Laplace operator and φ is a scalar function.

Laplace’s equation is the simplest example of an elliptic partial differential
equation. Solutions of Laplace’s equation are the harmonic functions, which are
important in many fields of science, notably electromagnetism, astronomy, and fluid
dynamics, since they can be used to accurately describe the behavior of electric,
gravitational, and fluid potentials. In the study of heat conduction, Laplace’s
equation is the steady-state heat equation. Laplace’s equation has several forms, as
follows

∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z² = 0   (1.6.2)

in cartesian coordinates;

(1/ρ) ∂/∂ρ (ρ ∂f/∂ρ) + (1/ρ²) ∂²f/∂φ² + ∂²f/∂z² = 0   (1.6.3)

in cylindrical coordinates;

(1/r²) ∂/∂r (r² ∂f/∂r) + (1/(r² sin θ)) ∂/∂θ (sin θ ∂f/∂θ) + (1/(r² sin²θ)) ∂²f/∂φ² = 0   (1.6.4)

in spherical coordinates.
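A minimal numerical illustration, anticipating the finite-difference treatment of Chapter-4: the standard 5-point approximation replaces each interior value by the average of its four neighbours, and Jacobi iteration sweeps this update until the values settle. The grid size, boundary values, and sweep count below are assumptions made for the sketch, not values from the thesis.

```python
# Jacobi iteration for the 2-D Laplace equation on a unit square:
# each interior value becomes the average of its four neighbours,
# which is the standard 5-point finite-difference approximation.
n = 21                       # grid points per side (assumed)
u = [[0.0] * n for _ in range(n)]
for j in range(n):           # assumed boundary condition: u = 1 on the top edge
    u[0][j] = 1.0

for _ in range(2000):        # fixed number of sweeps, enough to converge here
    v = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            v[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
    u = v

print(u[n // 2][n // 2])     # centre value; 0.25 by the four-fold symmetry
```

With u = 1 on one edge and 0 on the other three, superposing the four rotated problems gives u ≡ 1, so the centre value of each must be exactly 1/4, a handy check on the iteration.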

Electrical circuit: In an electrical circuit containing resistance, inductance, and
capacitance, the voltage drop across the resistance is IR (I is the current in amperes, R
is the resistance in ohms), across the inductance is L dI/dt (L is the inductance in henries),
and across the capacitance is Q/C (Q is the charge on the capacitor in coulombs, C is the
capacitance in farads). For the voltage difference [4] between points A & B we can write

V_AB = L dI/dt + RI + Q/C   (1.6.5)

Now, differentiating (1.6.5) with respect to t and remembering dQ/dt = I, we have
the second-order differential equation

L d²I/dt² + R dI/dt + I/C = dV/dt   (1.6.6)

If the voltage V_AB is suddenly brought to an upper level by connecting a battery
across the terminals and maintained steadily at that upper level, current will flow
through the circuit; then by (1.6.6) we can determine how the current varies over a
given range of time.
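A brief numerical sketch of this situation (with assumed component values and a constant applied voltage, purely for illustration): rather than differentiating to (1.6.6), one can integrate (1.6.5) directly as the first-order system dQ/dt = I, dI/dt = (V − RI − Q/C)/L with Euler steps and watch the capacitor charge toward Q = CV:

```python
# Series RLC circuit of (1.6.5) as a first-order system:
#   dQ/dt = I,   dI/dt = (V - R*I - Q/C) / L
# Component values below are assumed for the sketch.
L_, R, C, V = 1.0, 2.0, 0.25, 10.0   # henries, ohms, farads, volts

h, t = 1e-4, 0.0
Q, I = 0.0, 0.0                      # uncharged capacitor, no initial current
while t < 10.0 - 1e-12:
    dQ = I
    dI = (V - R * I - Q / C) / L_
    Q, I = Q + h * dQ, I + h * dI
    t += h

print(Q, I)    # Q tends to C*V = 2.5 coulombs and I tends to 0
```

For these assumed values the characteristic roots are complex, so the current oscillates while decaying before the circuit settles.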

1.7 LINEARITY OF DIFFERENTIAL EQUATIONS

Linear differential equation: In mathematics, a linear differential equation [22] is of
the form

Ly = f   (1.7.1)

Here the differential operator L is a linear operator, y is an unknown function,
and the right-hand side f is a given function of the same nature as y.

For a function which is dependent on time, we may write the equation more
expressively as

Ly(t) = f(t)   (1.7.2)

The linear operator L may be considered to be of the form

Lₙ(y) = A₀(t) dⁿy/dtⁿ + A₁(t) dⁿ⁻¹y/dtⁿ⁻¹ + ⋯ + Aₙ₋₁(t) dy/dt + Aₙ(t)y   (1.7.3)

The linearity condition on L rules out operations such as taking the square of the
derivatives of y. It is convenient to rewrite the above equation in operator form as

Lₙ(y) = [A₀(t)Dⁿ + A₁(t)Dⁿ⁻¹ + ⋯ + Aₙ₋₁(t)D + Aₙ(t)]y   (1.7.4)

Here D is the differential operator and Aᵢ(t); i = 0, 1, 2, …, n are given
functions.

Such an equation is said to have order n, the index of the highest order
derivative of y that is involved.

A typical simple example of a linear differential equation is the one used to model
radioactive decay. Let N(t) denote the amount of radioactive material at time t. Then for
some constant K ≥ 0, the decay of the material can be modeled by the following
equation

dN/dt = −KN   (1.7.5)

If y is assumed to be a function of only one variable in (1.7.1), it is called an
ordinary linear differential equation. Otherwise, it is called a partial linear differential
equation, which involves derivatives with respect to several variables.

If Aᵢ(t); i = 0, 1, 2, …, n are all constants, then (1.7.1) is called a linear
differential equation with constant coefficients, where f is any function of the given
variable or variables. For example

y‴ − 2y″ + 4y′ = eˣ   (1.7.6)

y″ − 3y′ + 2y = 0   (1.7.7)

Again, if f = 0, then (1.7.1) is called a homogeneous linear differential equation;
such an equation is shown in (1.7.7). But (1.7.1) is called a non-homogeneous linear
differential equation if f ≠ 0, as shown in (1.7.6).

Non-linear differential equation: In mathematics, a differential equation in which
the dependent variable or its derivatives occur in terms of degree more than one is
called a non-linear differential equation. That is, a differential equation which cannot
be put in the forms (1.7.1)-(1.7.4) is called a non-linear differential equation.

In other words, a non-linear differential equation is an equation in which the
variable (or variables) to be solved for cannot be written as a linear combination of
themselves and their derivatives.

Furthermore, a differential equation whose terms are each of degree one but in
which two or more derivatives appear multiplied together is also considered a non-linear
differential equation. For example

x d²y/dx² + x (dy/dx)² + y = 0   (1.7.8)

2x (d³y/dx³)(d²y/dx²) = 3 dy/dx + 4   (1.7.9)

1.8 INITIAL VALUE PROBLEMS

In the field of differential equations, an initial value problem is an ordinary
differential equation together with a specified value of the unknown function at a given
point in the domain of the solution, called the initial condition.

In scientific fields, modeling a system frequently amounts to solving an initial
value problem. An initial value problem is a differential equation such as

y′(t) = f(t, y(t))   (1.8.1)

together with initial conditions of the form

y(t₀) = y₀ ;  y′(t₀) = y₀′   (1.8.2)

Here t₀ ∈ I, for some open interval I ⊆ R.

For example,

y″ − y′ sin x + xy = eˣ ;  y(0) = 1, y′(0) = −1   (1.8.3)
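The initial value problem (1.8.3) has no obvious closed-form solution, but after the reduction of Section 1.5 it becomes the system y₁′ = y₂, y₂′ = eˣ + y₂ sin x − x y₁ and can be stepped numerically; the Euler scheme and step size below are assumptions made only for this sketch:

```python
import math

# (1.8.3): y'' - y'*sin(x) + x*y = exp(x), y(0) = 1, y'(0) = -1,
# reduced to the system y1' = y2, y2' = exp(x) + y2*sin(x) - x*y1.
h = 1e-4
x, y1, y2 = 0.0, 1.0, -1.0
while x < 1.0 - 1e-12:
    d1 = y2
    d2 = math.exp(x) + y2 * math.sin(x) - x * y1
    y1, y2 = y1 + h * d1, y2 + h * d2
    x += h

print(y1, y2)   # approximate y(1) and y'(1)
```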

1.9 BOUNDARY VALUE PROBLEMS

In the field of differential equations, a boundary value problem is a
differential equation together with a set of additional constraints, called the boundary
conditions. A solution to a boundary value problem is a solution to the differential
equation which also satisfies the given boundary conditions.

The basic two-point boundary value problem is given by

y′(t) = f(t, y(t))   (1.9.1)

with g(y(a), y(b)) = 0   (1.9.2)


When the boundary conditions are linear in the function 𝑔(𝑢, 𝑣), we can write, for some constant 𝑘 and some square matrices 𝐵𝑎 and 𝐵𝑏,

𝐵𝑎 𝑦(𝑎) + 𝐵𝑏 𝑦(𝑏) = 𝑘 (1.9.3)

In general, for both linear and non-linear boundary conditions, we can define

𝐵𝑎 = ∂𝑔/∂𝑢 and 𝐵𝑏 = ∂𝑔/∂𝑣                    (1.9.4)

Boundary value problems arise in many branches of physics, since any physical differential equation will have them. Problems involving the wave equation, such as the determination of normal modes, are often stated as boundary value problems. A very important class of boundary value problems is the Sturm-Liouville problems.

For example, if a string is stretched between two points 𝑥 = 0 and 𝑥 = 𝐿 and 𝑈 denotes the amplitude of the displacement of the string, then 𝑈 satisfies the one-dimensional wave equation in the region 0 ≤ 𝑥 ≤ 𝐿 for 𝑡 > 0. Since the string is tied down at the ends, 𝑈 must satisfy the boundary conditions

𝑈 (𝑡, 0) = 𝑈 (𝑡, 𝐿) = 0 (1.9.5)

The method of separation of variables for the wave equation

𝑈𝑡𝑡 = 𝑐²𝑈𝑥𝑥                    (1.9.6)


leads to solutions of the form

𝑈 (𝑡, 𝑥) = 𝑇(𝑡) 𝑋(𝑥) (1.9.7)


where 𝑇′′ + 𝑘²𝑐²𝑇 = 0 = 𝑋′′ + 𝑘²𝑋                    (1.9.8)

The constant 𝑐 is given and 𝑘 must be determined. The boundary conditions then imply that 𝑋 is a multiple of sin 𝑘𝑥, so 𝑘 must have the form

𝑘 = 𝑛𝜋/𝐿 ; 𝑛 ∈ 𝒁                    (1.9.9)

Each value of 𝑛 in (1.9.9) corresponds to a mode of vibration of the string.
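The admissible wavenumbers (1.9.9) for a given string are easy to tabulate; the short sketch below assumes illustrative values 𝐿 = 1 and 𝑐 = 1 (both are assumptions for demonstration, not values from the text):

```python
import math

L = 1.0   # string length (assumed for illustration)
c = 1.0   # wave speed (assumed for illustration)

# Wavenumbers k = n*pi/L from (1.9.9) and the corresponding
# angular frequencies omega = k*c of the normal modes.
for n in range(1, 5):
    k = n * math.pi / L
    print(f"mode n={n}: k = {k:.4f}, omega = {k * c:.4f}")
```

Each line corresponds to one normal mode of the stretched string.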

1.10 NUMERICAL METHODS


In mathematics, numerical methods solve mathematical problems through the performance of a finite number of elementary operations on numbers.
The elementary operations used are arithmetic operations, generally carried out approximately, and subsidiary operations such as recording intermediate results and extracting information from tables. Numbers are expressed by a limited set of digits in some positional numeration system. The number line is thus replaced by a discrete system of numbers, called a net. A function of a continuous variable is accordingly replaced by a table of its values in this discrete system of numbers, and operations


of analysis that act on continuous functions are replaced by algebraic operations on the
function values in the table.

Numerical methods reduce the solution of mathematical problems to


computations that can be performed manually or by means of calculating machines.
The development of new numerical methods and their use on computers have led to the rise of computer mathematics.

Numerical methods are designed for the constructive solution of mathematical problems requiring particular numerical results, usually on a computer. A numerical method is a complete and unambiguous set of procedures for the solution of a problem, together with a computable error estimate. The study and implementation of such methods is the province of numerical analysis.

Numerical methods continue a long tradition of practical mathematical calculation. Modern numerical methods do not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of the field is concerned with obtaining approximate solutions while maintaining reasonable bounds on the errors.

Numerical methods naturally find applications in all fields of engineering and


physical sciences, but in the 21st century the life sciences and even the arts have adopted elements of scientific computation. Ordinary differential equations appear in the movement of heavenly bodies (planets, stars, and galaxies); optimization occurs in portfolio management; numerical linear algebra is important for data analysis; and stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.

1.11 WHY ARE NUMERICAL METHODS PREFERABLE?

Many problems in science and engineering can be reduced to the problem of


solving differential equations under certain conditions [2,6]. The analytical methods of solution can be applied to only a selected class of differential equations. The equations that govern physical systems do not, in general, possess closed-form solutions, and hence recourse must be made to numerical methods for solving such differential equations.

The analytical methods are limited to certain special forms of equations; elementary courses normally treat only linear equations with constant coefficients of degree not higher than the first. Numerical methods have no such limitations.

Let us consider a second-order differential equation of the form

𝑑²𝑥/𝑑𝑡² = 𝑓(𝑡, 𝑥, 𝑥′)                    (1.11.1)

This represents the acceleration of a body at time 𝑡.


Sometimes a differential equation cannot be solved analytically at all, or its solutions are very difficult to obtain. For solving such differential equations, numerical methods are required. In numerical methods, we do not need to know the relationship between the variables; we need only the numerical values of the dependent variable for certain values of the independent variable or variables.

Now, solving (1.11.1) by an analytical method, we get the following:

𝑑𝑥/𝑑𝑡 = 𝑓1(𝑡, 𝑥) + 𝑐1                    (1.11.2)

𝑥 = 𝑓2(𝑡, 𝑥) + 𝑐1𝑡 + 𝑐2                    (1.11.3)

Here 𝑑𝑥/𝑑𝑡 is the velocity and 𝑥 is the displacement of the body at time 𝑡. The functions 𝑓1(𝑡, 𝑥) and 𝑓2(𝑡, 𝑥) are the first and second integrals of 𝑓(𝑡, 𝑥, 𝑥′) with respect to 𝑡, and 𝑐1 and 𝑐2 are arbitrary constants, which are to be determined.

Then (1.11.3) is called the general solution of (1.11.1). For particular values of 𝑐1 and 𝑐2, (1.11.3) represents a curve.

Now, suppose that with (1.11.1) we are also given the conditions 𝑥(𝑡0) = 𝑥0 and 𝑥′(𝑡0) = 𝑥0′ at a particular value 𝑡 = 𝑡0 of the time 𝑡. Substituting these in (1.11.2) and (1.11.3) respectively, we can find the values of 𝑐1 and 𝑐2. These extra conditions are called the initial or boundary conditions. Then (1.11.1) becomes the initial value problem

𝑑²𝑥/𝑑𝑡² = 𝑓(𝑡, 𝑥, 𝑥′) ; 𝑥(𝑡0) = 𝑥0 ; 𝑥′(𝑡0) = 𝑥0′                    (1.11.4)

We can then find several points on the particular curve of the family (1.11.3) that passes through the points determined by the given initial conditions and the corresponding values of 𝑐1 and 𝑐2.

Such a solution is called a numerical solution of a differential equation having numerical coefficients and given initial conditions, by which we can find the solution to any desired degree of accuracy.

1.12 CONTRIBUTIONS OF NUMERICAL METHODS

The overall goal of numerical methods is the design and analysis of techniques that give approximate but accurate solutions to hard problems, a variety of which are given below.

a. Advanced numerical methods are essential in making numerical weather


prediction feasible.

b. Computing the trajectory of a spacecraft requires the accurate numerical


solution of a system of ordinary differential equations.


c. Car companies can improve the crash safety of their vehicles by using computer
simulations of car crashes. Such simulations essentially consist of solving
partial differential equations numerically.

d. Hedge funds (private investment funds) use tools from all fields of numerical
analysis to calculate the value of stocks and derivatives more precisely than
other market participants.

e. Airlines use sophisticated optimization algorithms to decide ticket prices,


airplane and crew assignments, and fuel needs. This field is also called
operations research.

f. Insurance companies use numerical programs for actuarial analysis. All of the above require better techniques that minimize the computational error.

There are several methods for solving differential equations having numerical coefficients with initial or boundary conditions. Some well-known ones will be discussed in the next chapters.



CHAPTER-2

SOLUTION OF DIFFERENTIAL
EQUATIONS OF FIRST ORDER AND FIRST
DEGREE BY NUMERICAL METHODS OF
EARLY STAGE

2.1 INTRODUCTION

To solve an ordinary differential equation means to find an explicit expression for the dependent variable 𝑦 in terms of a finite number of elementary functions of 𝑥. Such a solution of the differential equation is called a closed or finite form of the solution. In most numerical methods we replace the differential equation by a difference equation and then solve it. The methods developed and applied to solve ordinary differential equations of the first order and first degree will yield the solution [23] in one of the following forms:

(i) A power series in 𝑥 for 𝑦, from which the values of 𝑦 can be obtained by
direct substitution.

(ii) A set of tabulated values of 𝑥 and 𝑦.

In single-step methods such as Taylor's series method and Picard's approximation method, the information about the curve represented by the differential equation at one point is utilized and the solution is not iterated. The methods of Euler, Milne, Adams-Moulton and Runge-Kutta belong to the step-by-step, or marching, methods. In these methods, the next point on the curve is evaluated in short steps ahead, over equal intervals of width ℎ of the independent variable, by performing iterations until the desired level of accuracy is achieved.

In this chapter, we will discuss Taylor's series method, Picard's approximation method and Euler's method (with its modified form), which are considered the numerical methods of the early stage.

2.2 TAYLOR’S SERIES METHOD

Derivation: Let us consider the initial value problem

𝑦′ = 𝑑𝑦/𝑑𝑥 = 𝑓(𝑥, 𝑦) ; 𝑦(𝑥0) = 𝑦0                    (2.2.1)

Let 𝑦 = 𝑦(𝑥) be the exact solution of (2.2.1) such that 𝑦(𝑥0) ≠ 0. Now, expanding 𝑦(𝑥) by Taylor's series [12] about the point 𝑥 = 𝑥0, we get

𝑦 = 𝑦(𝑥) = 𝑦0 + (𝑥 − 𝑥0)𝑦0′ + (𝑥 − 𝑥0)²/2! 𝑦0′′ + (𝑥 − 𝑥0)³/3! 𝑦0′′′ + ⋯                    (2.2.2)

In the expression (2.2.2), the derivatives 𝑦0′, 𝑦0′′, 𝑦0′′′, … are not explicitly known. However, if 𝑓(𝑥, 𝑦) is differentiable several times, they can be expressed in terms of 𝑓(𝑥, 𝑦) and its partial derivatives as follows:

𝑦 ′ = 𝑓(𝑥, 𝑦) = 𝑓
𝑦 ′′ = 𝑓 ′ (𝑥, 𝑦) = 𝑓𝑥 + 𝑦 ′ 𝑓𝑦 = 𝑓𝑥 + 𝑓𝑓𝑦
𝑦 ′′′ = 𝑓 ′′ (𝑥, 𝑦) = 𝑓𝑥𝑥 + 2𝑓𝑓𝑥𝑦 + 𝑓𝑦𝑦 𝑓 2 + 𝑓𝑥 𝑓𝑦 + 𝑓𝑦2 𝑓

In a similar manner, a derivative of any order of 𝑦 can be expressed in terms of


𝑓(𝑥, 𝑦) and its partial derivatives.

As the evaluation of higher-order total derivatives becomes computationally hard, to overcome this problem we truncate Taylor's expansion to the first few convenient terms of the series. This truncation leads to a restriction on the values of 𝑥 for which the expansion is a reasonable approximation.

Now, for a suitably small step length ℎ = 𝑥𝑖 − 𝑥𝑖−1, the function 𝑦 = 𝑦(𝑥) is evaluated at 𝑥1 = 𝑥0 + ℎ. Then Taylor's expansion (2.2.2) becomes

𝑦(𝑥0 + ℎ) = 𝑦(𝑥0) + ℎ𝑦′(𝑥0) + ℎ²/2! 𝑦′′(𝑥0) + ℎ³/3! 𝑦′′′(𝑥0) + ⋯

𝑜𝑟, 𝑦1 = 𝑦0 + ℎ𝑦0′ + ℎ²/2 𝑦0′′ + ℎ³/6 𝑦0′′′ + ⋯                    (2.2.3)
2 6

The derivatives 𝑦1′, 𝑦1′′, 𝑦1′′′, … are evaluated at 𝑥1 = 𝑥0 + ℎ and then substituted in the expansion about 𝑥1 to obtain the value of 𝑦 at 𝑥2 = 𝑥0 + 2ℎ, given by

𝑦(𝑥0 + 2ℎ) = 𝑦(𝑥0 + ℎ) + ℎ𝑦′(𝑥0 + ℎ) + ℎ²/2! 𝑦′′(𝑥0 + ℎ) + ℎ³/3! 𝑦′′′(𝑥0 + ℎ) + ⋯

𝑜𝑟, 𝑦2 = 𝑦1 + ℎ𝑦1′ + ℎ²/2 𝑦1′′ + ℎ³/6 𝑦1′′′ + ⋯                    (2.2.4)
2 6

In a similar manner, we get

𝑦3 = 𝑦2 + ℎ𝑦2′ + ℎ²/2 𝑦2′′ + ℎ³/6 𝑦2′′′ + ⋯

𝑦4 = 𝑦3 + ℎ𝑦3′ + ℎ²/2 𝑦3′′ + ℎ³/6 𝑦3′′′ + ⋯

Thus the general form is obtained as

𝑦𝑛+1 = 𝑦𝑛 + ℎ𝑦𝑛′ + ℎ²/2 𝑦𝑛′′ + ℎ³/6 𝑦𝑛′′′ + ⋯                    (2.2.5)


This equation can be used to obtain 𝑦𝑛+1, which approximates the actual value of 𝑦 = 𝑦(𝑥) at 𝑥𝑛+1 = 𝑥0 + (𝑛 + 1)ℎ.

Truncation error: Equation (2.2.5) can be written as

𝑦𝑛+1 = 𝑦𝑛 + ℎ𝑦𝑛′ + ℎ²/2 𝑦𝑛′′ + 𝑂(ℎ³)                    (2.2.6)

Here 𝑂(ℎ³) denotes all the remaining terms, which contain the third and higher powers of ℎ.

Now we can omit the terms 𝑂(ℎ³), which introduces the approximation error of (2.2.6); for some constant 𝑘, the local truncation error in this approximation of 𝑦𝑛+1 is 𝑘ℎ³. Then, for a better approximation of 𝑦𝑛+1 we may keep the terms up to ℎ³ or ℎ⁴, where

𝑦𝑛′ = 𝑓(𝑥𝑛 , 𝑦𝑛 ) = 𝑓
𝑦𝑛′′ = 𝑓 ′ (𝑥𝑛 , 𝑦𝑛 ) = 𝑓𝑥 + 𝑓𝑓𝑦
𝑦𝑛′′′ = 𝑓 ′′ (𝑥𝑛 , 𝑦𝑛 ) = 𝑓𝑥𝑥 + 2𝑓𝑓𝑥𝑦 + 𝑓𝑦𝑦 𝑓 2 + 𝑓𝑥 𝑓𝑦 + 𝑓𝑦2 𝑓

Now (2.2.6) becomes

𝑦𝑛+1 = 𝑦𝑛 + ℎ𝑓 + ℎ²/2 (𝑓𝑥 + 𝑓𝑓𝑦) + 𝑂(ℎ³)                    (2.2.7)
2

Again, for a better approximation with less truncation error, we utilize higher-order derivatives. Then, with truncation error 𝑂(ℎ⁴), (2.2.6) becomes

𝑦𝑛+1 = 𝑦𝑛 + ℎ𝑓 + ℎ²/2 (𝑓𝑥 + 𝑓𝑓𝑦) + ℎ³/3! (𝑓𝑥𝑥 + 2𝑓𝑓𝑥𝑦 + 𝑓𝑦𝑦𝑓² + 𝑓𝑥𝑓𝑦 + 𝑓𝑦²𝑓) + 𝑂(ℎ⁴)                    (2.2.9)

Thus, from Taylor's theorem, considering the remainder term, the truncation error of the method of order 𝑂(ℎᵏ) is given by

𝐸𝑇 = ℎ^(𝑘+1)/(𝑘+1)! ⋅ 𝑦^(𝑘+1)(𝜃) ; 𝑥𝑛 < 𝜃 < 𝑥𝑛+1                    (2.2.10)

2.3 APPLICATION OF THE TAYLOR’S SERIES METHOD

Apply Taylor's series method to solve 𝑑𝑦/𝑑𝑥 = 𝑥 − 𝑦 with the initial condition 𝑦(0) = 1 up to 𝑥 = 0.4, where ℎ = 0.2.

Solution: Given that 𝑦′ = 𝑑𝑦/𝑑𝑥 = 𝑥 − 𝑦 = 𝑓(𝑥, 𝑦). Also 𝑦0 = 1 when 𝑥0 = 0 and ℎ = 0.2, so that

𝑦′ = 𝑥 − 𝑦


𝑦′′ = 1 − 𝑦′
𝑦′′′ = −𝑦′′
𝑦ⁱᵛ = −𝑦′′′
𝑦ᵛ = −𝑦ⁱᵛ
𝑦ᵛⁱ = −𝑦ᵛ

To find 𝒚𝟏 we are to proceed as follows:


𝑦0′ = 𝑥0 − 𝑦0 = −1
𝑦0′′ = 1 − 𝑦0′ = 2
𝑦0′′′ = −𝑦0′′ = −2
𝑦0ⁱᵛ = −𝑦0′′′ = 2
𝑦0ᵛ = −𝑦0ⁱᵛ = −2
𝑦0ᵛⁱ = −𝑦0ᵛ = 2

From (2.2.3) we get

𝑦1 = 𝑦0 + ℎ𝑦0′ + ℎ²/2 𝑦0′′ + ℎ³/6 𝑦0′′′ + ℎ⁴/24 𝑦0ⁱᵛ + ℎ⁵/120 𝑦0ᵛ + ℎ⁶/720 𝑦0ᵛⁱ + ⋯

Neglecting the terms containing ℎ⁷ and higher powers and substituting the values of 𝑦0′, 𝑦0′′, 𝑦0′′′, 𝑦0ⁱᵛ, 𝑦0ᵛ and 𝑦0ᵛⁱ, we get

𝑦1 = 𝑦0 + ℎ𝑦0′ + ℎ²/2 𝑦0′′ + ℎ³/6 𝑦0′′′ + ℎ⁴/24 𝑦0ⁱᵛ + ℎ⁵/120 𝑦0ᵛ + ℎ⁶/720 𝑦0ᵛⁱ

= 1 + (0.2)(−1) + (0.2)²/2 (2) + (0.2)³/6 (−2) + (0.2)⁴/24 (2) + (0.2)⁵/120 (−2) + (0.2)⁶/720 (2)

= 0.837461511 (approx.)

𝑖.𝑒. 𝑦1 = 0.837461511 and 𝑥1 = 𝑥0 + ℎ = 0.0 + 0.2 = 0.2

To find 𝒚𝟐 we are to proceed as follows:


𝑦1′ = 𝑥1 − 𝑦1 = −0.637461511
𝑦1′′ = 1 − 𝑦1′ = 1.637461511
𝑦1′′′ = −𝑦1′′ = −1.637461511
𝑦1ⁱᵛ = −𝑦1′′′ = 1.637461511
𝑦1ᵛ = −𝑦1ⁱᵛ = −1.637461511
𝑦1ᵛⁱ = −𝑦1ᵛ = 1.637461511


From (2.2.4) we get

𝑦2 = 𝑦1 + ℎ𝑦1′ + ℎ²/2 𝑦1′′ + ℎ³/6 𝑦1′′′ + ℎ⁴/24 𝑦1ⁱᵛ + ℎ⁵/120 𝑦1ᵛ + ℎ⁶/720 𝑦1ᵛⁱ + ⋯

Neglecting the terms containing ℎ⁷ and higher powers and substituting the values of 𝑦1′, 𝑦1′′, 𝑦1′′′, 𝑦1ⁱᵛ, 𝑦1ᵛ and 𝑦1ᵛⁱ, we get

𝑦2 = 𝑦1 + ℎ𝑦1′ + ℎ²/2 𝑦1′′ + ℎ³/6 𝑦1′′′ + ℎ⁴/24 𝑦1ⁱᵛ + ℎ⁵/120 𝑦1ᵛ + ℎ⁶/720 𝑦1ᵛⁱ

= 0.837461511 + (0.2)(−0.637461511) + (0.2)²/2 (1.637461511) + (0.2)³/6 (−1.637461511) + (0.2)⁴/24 (1.637461511) + (0.2)⁵/120 (−1.637461511) + (0.2)⁶/720 (1.637461511)

= 0.740640099 (approx.)

𝑖.𝑒. 𝑦2 = 0.740640099 and 𝑥2 = 𝑥1 + ℎ = 0.2 + 0.2 = 0.4

Thus we get 𝑦1 = 0.837461511 & 𝑦2 = 0.740640099


Exact result: We have 𝑑𝑦/𝑑𝑥 = 𝑥 − 𝑦

𝑜𝑟, 𝑑𝑦/𝑑𝑥 + 𝑦 = 𝑥

This is a linear differential equation in 𝑦 whose integrating factor is

𝐼𝐹 = 𝑒^∫𝑑𝑥 = 𝑒^𝑥

Multiplying the above differential equation by 𝑒^𝑥, it becomes


𝑒^𝑥 𝑑𝑦/𝑑𝑥 + 𝑦𝑒^𝑥 = 𝑥𝑒^𝑥

𝑜𝑟, 𝑑/𝑑𝑥 (𝑦𝑒^𝑥) = 𝑥𝑒^𝑥

𝑜𝑟, 𝑦𝑒^𝑥 = ∫ 𝑥𝑒^𝑥 𝑑𝑥 + 𝑐 ; 𝑐 ∈ 𝑹

𝑜𝑟, 𝑦𝑒^𝑥 = 𝑒^𝑥(𝑥 − 1) + 𝑐

𝑜𝑟, 𝑦 = 𝑥 − 1 + 𝑐𝑒^−𝑥

From the initial condition 𝑦 = 1 at 𝑥 = 0, we get

1 = 0 − 1 + 𝑐𝑒⁰ ⇒ 𝑐 = 2

Then the above solution becomes


𝑦 = 𝑥 − 1 + 2𝑒^−𝑥

At 𝑥 = 0.4 we get 𝑦 = 0.4 − 1 + 2𝑒^−0.4 = 0.740640092

So, the truncation error is

𝐸𝑇 = 𝐸𝑥𝑎𝑐𝑡 𝑉𝑎𝑙𝑢𝑒– 𝐴𝑝𝑝𝑟𝑜𝑥𝑖𝑚𝑎𝑡𝑒 𝑣𝑎𝑙𝑢𝑒


= 0.740640092 − 0.740640099 = −0.000000007

Hence, in this case, the approximation by Taylor's series method is correct to eight decimal places.
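As a cross-check on the arithmetic above, the six-term Taylor step can be sketched in a few lines of Python. The derivative pattern 𝑦′′ = 1 − 𝑦′, 𝑦′′′ = −𝑦′′, … is specific to this example 𝑦′ = 𝑥 − 𝑦, and the function name `taylor_step` is illustrative, not from the thesis:

```python
import math

def taylor_step(x, y, h, terms=6):
    """One Taylor-series step for the example ODE y' = x - y."""
    d = [x - y]              # y'
    d.append(1.0 - d[0])     # y'' = 1 - y'
    while len(d) < terms:
        d.append(-d[-1])     # y''' = -y'', y'''' = -y''', ...
    fact, result = 1.0, y
    for k, dk in enumerate(d, start=1):
        fact *= k            # k!
        result += dk * h ** k / fact
    return result

x, y, h = 0.0, 1.0, 0.2
for _ in range(2):           # two steps, reaching x = 0.4
    y = taylor_step(x, y, h)
    x += h

exact = x - 1 + 2 * math.exp(-x)   # analytical solution y = x - 1 + 2e^(-x)
print(y, exact)
```

Running this reproduces 𝑦1 ≈ 0.837461511 after the first step and 𝑦2 ≈ 0.740640099 after the second, agreeing with the exact value to about eight decimal places, as found above.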

Advantages of Taylor's series method: This is a single-step method and works well as long as the successive derivatives can be calculated easily. Also, if a problem can be written in variable-separable form, it gives a correct solution to a significant number of digits of accuracy.

Disadvantages of Taylor's series method: In practice it is of limited importance, as it requires partial derivatives that are complex to compute. Also, ℎ should be small enough that the successive terms of the series diminish quite rapidly, and the evaluation of the additional terms becomes increasingly difficult. The most significant disadvantage of this method is the frequent evaluation of higher-order derivatives it requires; being such a time-consuming process, it is highly disliked for computation.

2.4 PICARD’S METHOD OF SUCCESSIVE APPROXIMATION

Derivation: Let us consider the initial value problem

𝑦′ = 𝑑𝑦/𝑑𝑥 = 𝑓(𝑥, 𝑦) ; 𝑦(𝑥0) = 𝑦0                    (2.4.1)

We have 𝑑𝑦 = 𝑓(𝑥, 𝑦)𝑑𝑥 (2.4.2)

Integrating (2.4.2) between the corresponding limits 𝑥0 to 𝑥 and 𝑦0 to 𝑦, (2.4.2) gives the following:
∫𝑦0^𝑦 𝑑𝑦 = ∫𝑥0^𝑥 𝑓(𝑥, 𝑦)𝑑𝑥

𝑜𝑟, [𝑦]𝑦0^𝑦 = ∫𝑥0^𝑥 𝑓(𝑥, 𝑦)𝑑𝑥

𝑜𝑟, 𝑦 − 𝑦0 = ∫𝑥0^𝑥 𝑓(𝑥, 𝑦)𝑑𝑥

𝑜𝑟, 𝑦 = 𝑦0 + ∫𝑥0^𝑥 𝑓(𝑥, 𝑦)𝑑𝑥                    (2.4.3)

Here the integral term on the right-hand side represents the increment in 𝑦 produced by an increment 𝑥 − 𝑥0 in 𝑥. The equation is complicated by the presence of 𝑦 in (2.4.3) under the integral sign as well as outside it. An equation of this kind is

called an integral equation and can be solved by a process of successive approximation


or iteration if the indicated integrations can be performed in the successive steps [11].

To solve (2.4.1) by Picard's method of successive approximation, we get a first approximation to 𝑦 by putting 𝑦 = 𝑦0 = 𝑦⁽⁰⁾ in (2.4.3); then
𝑦⁽¹⁾ = 𝑦0 + ∫𝑥0^𝑥 𝑓(𝑥, 𝑦⁽⁰⁾)𝑑𝑥                    (2.4.4)

The integral is now a function of 𝑥 alone and the indicated integration can be performed, at least once. Having the first approximation to 𝑦, we substitute it for 𝑦 in the integrand of (2.4.3), and by integrating again we get the second approximation of 𝑦 as follows:
𝑦⁽²⁾ = 𝑦0 + ∫𝑥0^𝑥 𝑓(𝑥, 𝑦⁽¹⁾)𝑑𝑥                    (2.4.5)

Proceeding in this way we obtain 𝑦⁽³⁾, 𝑦⁽⁴⁾, 𝑦⁽⁵⁾ and so on. Thus the 𝑛th approximation is given by

𝑦⁽ⁿ⁾ = 𝑦0 + ∫𝑥0^𝑥 𝑓(𝑥, 𝑦⁽ⁿ⁻¹⁾)𝑑𝑥                    (2.4.6)

Then, putting 𝑦⁽ⁿ⁾ for 𝑦 in (2.4.3), we get the next approximation as

𝑦⁽ⁿ⁺¹⁾ = 𝑦0 + ∫𝑥0^𝑥 𝑓(𝑥, 𝑦⁽ⁿ⁾)𝑑𝑥                    (2.4.7)

This process is repeated as many times as necessary or desirable, and is terminated when two consecutive values of 𝑦 agree to the desired degree of accuracy.
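Since the iterates for the example 𝑓(𝑥, 𝑦) = 𝑥 − 𝑦 with 𝑦0 = 1 (treated in the next section) are polynomials, the successive-approximation loop can be sketched with plain coefficient lists; the helper names below are illustrative, not from the thesis:

```python
def integrate(p):
    """Antiderivative of a polynomial [a0, a1, ...], zero constant term."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(p)]

def picard_step(y_poly):
    """One iterate y_(n+1)(x) = 1 + integral_0^x (t - y_n(t)) dt."""
    p = [-c for c in y_poly]       # -y_n
    while len(p) < 2:
        p.append(0.0)
    p[1] += 1.0                    # the integrand is x - y_n
    y_next = integrate(p)
    y_next[0] += 1.0               # add the initial value y0 = 1
    return y_next

def horner(p, x):
    """Evaluate a coefficient-list polynomial at x."""
    acc = 0.0
    for c in reversed(p):
        acc = acc * x + c
    return acc

y = [1.0]                          # y^(0)(x) = y0 = 1
for _ in range(6):                 # six successive approximations
    y = picard_step(y)

print(horner(y, 0.2))              # approximately 0.837461509
```

Each pass through `picard_step` reproduces one of the hand-computed iterates: after one pass the coefficients are those of 1 − 𝑥 + 𝑥²/2, after two passes those of 1 − 𝑥 + 𝑥² − 𝑥³/6, and so on.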

2.5 APPLICATION OF THE PICARD’S METHOD OF SUCCESSIVE


APPROXIMATION

Apply Picard's method of successive approximation to solve 𝑑𝑦/𝑑𝑥 = 𝑥 − 𝑦 with the initial condition 𝑦(0) = 1.

Solution: Given that 𝑑𝑦/𝑑𝑥 = 𝑥 − 𝑦 ; 𝑦(0) = 1.

Now, integrating the given equation between the corresponding limits 𝑥0 = 0 to 𝑥 and 𝑦0 = 1 to 𝑦, it becomes
∫1^𝑦 𝑑𝑦 = ∫0^𝑥 𝑓(𝑥, 𝑦)𝑑𝑥

𝑜𝑟, [𝑦]1^𝑦 = ∫0^𝑥 (𝑥 − 𝑦)𝑑𝑥

𝑜𝑟, 𝑦 − 1 = ∫0^𝑥 (𝑥 − 𝑦)𝑑𝑥

𝑜𝑟, 𝑦 = 1 + ∫0^𝑥 (𝑥 − 𝑦)𝑑𝑥


𝑖.𝑒. 𝑦⁽ⁿ⁺¹⁾ = 1 + ∫0^𝑥 (𝑥 − 𝑦⁽ⁿ⁾)𝑑𝑥                    (2.5.1)

For the first approximation we proceed as follows: putting 𝑛 = 0 in (2.5.1), we get

𝑦⁽¹⁾ = 1 + ∫0^𝑥 (𝑥 − 𝑦⁽⁰⁾)𝑑𝑥
= 1 + ∫0^𝑥 (𝑥 − 1)𝑑𝑥
= 1 + [𝑥²/2 − 𝑥]0^𝑥
= 1 − 𝑥 + 𝑥²/2

For the second approximation we proceed as follows: putting 𝑛 = 1 in (2.5.1), we get

𝑦⁽²⁾ = 1 + ∫0^𝑥 (𝑥 − 𝑦⁽¹⁾)𝑑𝑥
= 1 + ∫0^𝑥 (𝑥 − 1 + 𝑥 − 𝑥²/2)𝑑𝑥
= 1 + ∫0^𝑥 (2𝑥 − 1 − 𝑥²/2)𝑑𝑥
= 1 + [𝑥² − 𝑥 − 𝑥³/6]0^𝑥
= 1 − 𝑥 + 𝑥² − 𝑥³/6

For the third approximation we proceed as follows: putting 𝑛 = 2 in (2.5.1), we get

𝑦⁽³⁾ = 1 + ∫0^𝑥 (𝑥 − 𝑦⁽²⁾)𝑑𝑥
= 1 + ∫0^𝑥 (𝑥 − 1 + 𝑥 − 𝑥² + 𝑥³/6)𝑑𝑥
= 1 + ∫0^𝑥 (2𝑥 − 1 − 𝑥² + 𝑥³/6)𝑑𝑥
= 1 + [𝑥² − 𝑥 − 𝑥³/3 + 𝑥⁴/24]0^𝑥
= 1 − 𝑥 + 𝑥² − 𝑥³/3 + 𝑥⁴/24

For the fourth approximation we proceed as follows: putting 𝑛 = 3 in (2.5.1), we get

𝑦⁽⁴⁾ = 1 + ∫0^𝑥 (𝑥 − 𝑦⁽³⁾)𝑑𝑥
= 1 + ∫0^𝑥 (𝑥 − 1 + 𝑥 − 𝑥² + 𝑥³/3 − 𝑥⁴/24)𝑑𝑥
= 1 + ∫0^𝑥 (2𝑥 − 1 − 𝑥² + 𝑥³/3 − 𝑥⁴/24)𝑑𝑥
= 1 + [𝑥² − 𝑥 − 𝑥³/3 + 𝑥⁴/12 − 𝑥⁵/120]0^𝑥
= 1 − 𝑥 + 𝑥² − 𝑥³/3 + 𝑥⁴/12 − 𝑥⁵/120

For the fifth approximation we proceed as follows: putting 𝑛 = 4 in (2.5.1), we get

𝑦⁽⁵⁾ = 1 + ∫0^𝑥 (𝑥 − 𝑦⁽⁴⁾)𝑑𝑥
= 1 + ∫0^𝑥 (𝑥 − 1 + 𝑥 − 𝑥² + 𝑥³/3 − 𝑥⁴/12 + 𝑥⁵/120)𝑑𝑥
= 1 + ∫0^𝑥 (2𝑥 − 1 − 𝑥² + 𝑥³/3 − 𝑥⁴/12 + 𝑥⁵/120)𝑑𝑥
= 1 + [𝑥² − 𝑥 − 𝑥³/3 + 𝑥⁴/12 − 𝑥⁵/60 + 𝑥⁶/720]0^𝑥
= 1 − 𝑥 + 𝑥² − 𝑥³/3 + 𝑥⁴/12 − 𝑥⁵/60 + 𝑥⁶/720

For the sixth approximation we proceed as follows: putting 𝑛 = 5 in (2.5.1), we get

𝑦⁽⁶⁾ = 1 + ∫0^𝑥 (𝑥 − 𝑦⁽⁵⁾)𝑑𝑥
= 1 + ∫0^𝑥 (𝑥 − 1 + 𝑥 − 𝑥² + 𝑥³/3 − 𝑥⁴/12 + 𝑥⁵/60 − 𝑥⁶/720)𝑑𝑥
= 1 + ∫0^𝑥 (2𝑥 − 1 − 𝑥² + 𝑥³/3 − 𝑥⁴/12 + 𝑥⁵/60 − 𝑥⁶/720)𝑑𝑥
= 1 + [𝑥² − 𝑥 − 𝑥³/3 + 𝑥⁴/12 − 𝑥⁵/60 + 𝑥⁶/360 − 𝑥⁷/5040]0^𝑥
= 1 − 𝑥 + 𝑥² − 𝑥³/3 + 𝑥⁴/12 − 𝑥⁵/60 + 𝑥⁶/360 − 𝑥⁷/5040

At 𝑥 = 0.2, we get

𝑦⁽¹⁾ = 1 − 𝑥 + 𝑥²/2
= 1 − 0.2 + (0.2)²/2
= 0.82

𝑦⁽²⁾ = 1 − 𝑥 + 𝑥² − 𝑥³/6
= 1 − 0.2 + (0.2)² − (0.2)³/6
= 0.8387

𝑦⁽³⁾ = 1 − 𝑥 + 𝑥² − 𝑥³/3 + 𝑥⁴/24
= 1 − 0.2 + (0.2)² − (0.2)³/3 + (0.2)⁴/24
= 0.837400

𝑦⁽⁴⁾ = 1 − 𝑥 + 𝑥² − 𝑥³/3 + 𝑥⁴/12 − 𝑥⁵/120
= 1 − 0.2 + (0.2)² − (0.2)³/3 + (0.2)⁴/12 − (0.2)⁵/120
= 0.83746400


𝑦⁽⁵⁾ = 1 − 𝑥 + 𝑥² − 𝑥³/3 + 𝑥⁴/12 − 𝑥⁵/60 + 𝑥⁶/720
= 1 − 0.2 + (0.2)² − (0.2)³/3 + (0.2)⁴/12 − (0.2)⁵/60 + (0.2)⁶/720
= 0.8374614222

𝑦⁽⁶⁾ = 1 − 𝑥 + 𝑥² − 𝑥³/3 + 𝑥⁴/12 − 𝑥⁵/60 + 𝑥⁶/360 − 𝑥⁷/5040
= 1 − 0.2 + (0.2)² − (0.2)³/3 + (0.2)⁴/12 − (0.2)⁵/60 + (0.2)⁶/360 − (0.2)⁷/5040
= 0.837461508571

Exact result: We have 𝑑𝑦/𝑑𝑥 = 𝑥 − 𝑦 ; 𝑦(0) = 1.

From the analytical solution in section-2.3, the solution of the above differential
equation is obtained as

𝑦 = 𝑥 − 1 + 2𝑒^−𝑥

So the particular solution becomes

𝑦 = 𝑥 − 1 + 2𝑒^−𝑥
= 𝑥 − 1 + 2[1 − 𝑥 + 𝑥²/2! − 𝑥³/3! + 𝑥⁴/4! − 𝑥⁵/5! + 𝑥⁶/6! − 𝑥⁷/7! + ⋯]
= 𝑥 − 1 + [2 − 2𝑥 + 𝑥² − 𝑥³/3 + 𝑥⁴/12 − 𝑥⁵/60 + 𝑥⁶/360 − 𝑥⁷/2520 + ⋯]
= 1 − 𝑥 + 𝑥² − 𝑥³/3 + 𝑥⁴/12 − 𝑥⁵/60 + 𝑥⁶/360 − 𝑥⁷/2520 + ⋯

Now the sixth approximation agrees with the first seven terms of this series, so the truncation error is obtained as

𝐸𝑇 = 𝑦 − 𝑦⁽⁶⁾
= [1 − 𝑥 + 𝑥² − 𝑥³/3 + 𝑥⁴/12 − 𝑥⁵/60 + 𝑥⁶/360 − 𝑥⁷/2520]
− [1 − 𝑥 + 𝑥² − 𝑥³/3 + 𝑥⁴/12 − 𝑥⁵/60 + 𝑥⁶/360 − 𝑥⁷/5040]
= −𝑥⁷/2520 + 𝑥⁷/5040 = −𝑥⁷/5040

Now, putting 𝑥 = 0.2 in the analytical solution, we get the exact value of 𝑦 as

𝑦 = 0.2 − 1 + 2𝑒^−0.2 = 0.837461506160

So, 𝐸𝑇 = 𝐸𝑥𝑎𝑐𝑡 𝑉𝑎𝑙𝑢𝑒 – 𝐴𝑝𝑝𝑟𝑜𝑥𝑖𝑚𝑎𝑡𝑒 𝑣𝑎𝑙𝑢𝑒


= 0.837461506160 − 0.837461508571 = − 0.000000002011

Thus, we can conclude that in this case, the approximation in Picard’s method
is correct to eight decimal places.

Graphical representation of the above approximations and the exact result: The approximations to 𝑦 are obtained as

𝑦⁽¹⁾ = [1 − 𝑥 + 𝑥²/2]𝑥=0.2 = 0.82
𝑦⁽²⁾ = [1 − 𝑥 + 𝑥² − 𝑥³/6]𝑥=0.2 = 0.8387
𝑦⁽³⁾ = [1 − 𝑥 + 𝑥² − 𝑥³/3 + 𝑥⁴/24]𝑥=0.2 = 0.837400
𝑦⁽⁴⁾ = [1 − 𝑥 + 𝑥² − 𝑥³/3 + 𝑥⁴/12 − 𝑥⁵/120]𝑥=0.2 = 0.83746400
𝑦⁽⁵⁾ = [1 − 𝑥 + 𝑥² − 𝑥³/3 + 𝑥⁴/12 − 𝑥⁵/60 + 𝑥⁶/720]𝑥=0.2 = 0.8374614222
𝑦⁽⁶⁾ = [1 − 𝑥 + 𝑥² − 𝑥³/3 + 𝑥⁴/12 − 𝑥⁵/60 + 𝑥⁶/360 − 𝑥⁷/5040]𝑥=0.2 = 0.837461508571

And the actual value of 𝑦 is

𝑦 = [1 − 𝑥 + 𝑥² − 𝑥³/3 + 𝑥⁴/12 − 𝑥⁵/60 + 𝑥⁶/360 − 𝑥⁷/2520 + ⋯]𝑥=0.2 = 0.837461506160

Now, by plotting the above values on a graph [11], we get the following figure.

Figure-(2.1)

Figure-(2.1) shows that the approximating curves approach the curve 𝑦 = 𝑦(𝑥) more closely with each successive approximation, practically coinciding with it at the sixth approximation.


The successive approximations 𝑦⁽¹⁾, 𝑦⁽²⁾, 𝑦⁽³⁾, 𝑦⁽⁴⁾, 𝑦⁽⁵⁾ and 𝑦⁽⁶⁾ have the same terms as the exact infinite series truncated after 2, 3, 4, 5, 6 and 7 terms respectively.

Advantages of Picard's method: The iteration process is quite easy to implement in a computer algebra system and will sometimes yield useful accuracy in the numerical solution. The speed of the calculation is another advantage of this method. Also, it gives a better approximation of the desired solution than the previous method, i.e. one correct to one more decimal place.

Disadvantages of Picard's method: In practice, it is unsatisfactory because of the difficulties that arise in performing the necessary integrations. The integrations become more and more difficult as we proceed to higher-order iterations. Adopting a numerical technique for the integrations in this method consumes computation time besides affecting the accuracy of the result.

2.6 COMPARISON BETWEEN TAYLOR’S SERIES METHOD & PICARD’S


METHOD OF SUCCESSIVE APPROXIMATION.

Both Taylor's series method and Picard's method involve analytic operations [1].

Taylor's series method involves only analytic differentiation and can be mechanized quite readily on a digital computer. In fact, Taylor's series method has been proposed as a general-purpose numerical integration method, and programs exist to solve systems of differential equations by the analytic continuation method.

On the other hand, Picard's method involves indefinite integrations. While programs have been written to mechanize this process, they do not always work, since even an integrand that can be expressed in terms of elementary functions may have an indefinite integral that cannot be so expressed.

Moreover, the truncation errors in the above two methods show that Taylor’s
series method gives accuracy correct to seven decimal places after the sixth step
whereas Picard’s method gives accuracy correct to eight decimal places.

Thus we can conclude that Picard’s method is better than Taylor’s series method
in real-life practice.

2.7 EULER’S METHOD

Derivation: Let us consider the initial value problem

𝑦′ = 𝑑𝑦/𝑑𝑥 = 𝑓(𝑥, 𝑦) ; 𝑦(𝑥0) = 𝑦0                    (2.7.1)

We know that if the function 𝑓 is continuous in the open interval 𝑎 < 𝑥 < 𝑏
containing 𝑥 = 𝑥0 , there exists a unique solution [11] of the equation (2.7.1) as

𝑦𝑟 = 𝑦(𝑥𝑟 ) ; 𝑟 = 1,2,3, … … … (2.7.2)


The solution is valid throughout the interval 𝑎 < 𝑥 < 𝑏. We wish to determine the approximate values 𝑦𝑟 of the exact solution 𝑦 = 𝑦(𝑥) in the given interval at the values 𝑥 = 𝑥𝑟 = 𝑥0 + 𝑟ℎ; 𝑟 = 1, 2, 3, …

Figure-(2.2)

Now we will derive a tangent-line approximation for (2.7.1). From the above figure,

Δ𝑦/Δ𝑥 ≈ tan 𝜃

𝑜𝑟, Δ𝑦 ≈ Δ𝑥 (tan 𝜃)

𝑜𝑟, 𝑦1 − 𝑦0 ≈ ℎ(𝑑𝑦/𝑑𝑥)0

𝑜𝑟, 𝑦1 ≈ 𝑦0 + ℎ(𝑑𝑦/𝑑𝑥)0

𝑜𝑟, 𝑦1 ≈ 𝑦0 + ℎ𝑓(𝑥0, 𝑦0)                    (2.7.3)

This is the first approximation for 𝑦1 = 𝑦(𝑥1 ) at 𝑥 = 𝑥1

Similarly, we get the next approximations as

𝑦2 ≈ 𝑦1 + ℎ𝑓(𝑥1 , 𝑦1 ) at 𝑥 = 𝑥2
𝑦3 ≈ 𝑦2 + ℎ𝑓(𝑥2 , 𝑦2 ) at 𝑥 = 𝑥3 and so on.

In general, the (𝑛 + 1)th approximation at 𝑥 = 𝑥𝑛+1 is given by

𝑦𝑛+1 ≈ 𝑦𝑛 + ℎ𝑓(𝑥𝑛, 𝑦𝑛) ; 𝑛 = 0, 1, 2, …                    (2.7.4)

Truncation error: Let the exact solution of (2.7.1) at 𝑥𝑛 = 𝑥0 + 𝑛ℎ be 𝑦𝑛 = 𝑦(𝑥𝑛), while the approximate solution is given by (2.7.4). Then we get

𝑦𝑛+1 = 𝑦𝑛 + ℎ𝑓(𝑥𝑛, 𝑦𝑛) = 𝑦𝑛 + ℎ𝑦𝑛′                    (2.7.5)

Assuming the existence of higher-order derivatives, 𝑦𝑛+1 = 𝑦(𝑥𝑛+1) can be expanded in a Taylor series about 𝑥 = 𝑥𝑛, and we obtain


𝑦𝑛+1 = 𝑦𝑛 + ℎ𝑦𝑛′ + ℎ²/2! 𝑦′′(𝑘) ; 𝑥𝑛 ≤ 𝑘 ≤ 𝑥𝑛+1                    (2.7.6)
2!

Therefore the truncation error 𝑇𝑛+1, the difference between the exact value (2.7.6) and the approximation (2.7.5), is given by

𝑇𝑛+1 = 𝑦(𝑥𝑛+1) − (𝑦𝑛 + ℎ𝑦𝑛′) = ℎ²/2 𝑦′′(𝑘)                    (2.7.7)
2

Thus the truncation error is 𝑂(ℎ²) as ℎ → 0, i.e. the truncation error is proportional to ℎ². By diminishing the size of ℎ, the error can be minimized. If 𝑀 is a positive constant such that |𝑦′′(𝑥)| < 𝑀, then

|𝑇𝑛+1| < 𝑀ℎ²/2                    (2.7.8)
2

Here the right-hand side is an upper bound on the truncation error. The absolute value of 𝑇𝑛+1 is taken because only the magnitude of the error is of interest.
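The recurrence (2.7.4) and the error behaviour just described can be sketched in Python, using the same test equation 𝑦′ = 𝑥 − 𝑦, 𝑦(0) = 1 as in the earlier sections; the function names are illustrative:

```python
import math

def euler(f, x0, y0, h, n):
    """n forward-Euler steps y_(k+1) = y_k + h f(x_k, y_k)."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: x - y                      # test equation y' = x - y
exact = lambda x: x - 1 + 2 * math.exp(-x)  # its analytical solution

errors = []
for h in (0.2, 0.1, 0.05):
    approx = euler(f, 0.0, 1.0, h, round(0.4 / h))
    errors.append(abs(approx - exact(0.4)))
print(errors)
```

Halving ℎ roughly halves the error at the fixed point 𝑥 = 0.4, which is the global first-order behaviour implied by the per-step 𝑂(ℎ²) truncation error.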

2.8 PHYSICAL APPLICATION OF EULER’S METHOD

Suppose a bob of mass 𝑚 is suspended from a fixed point by a thin, light and inextensible string of length 𝑙. When the bob is shifted from its equilibrium position and released, it will execute an oscillatory motion [20,21], described by the equation

𝑚𝑙² 𝑑²𝜃/𝑑𝑡² = −𝑚𝑔𝑙 sin 𝜃                    (2.8.1)

where 𝜃 is the angle between the string and the vertical, 𝑑²𝜃/𝑑𝑡² is the angular acceleration of the bob and 𝑔 is the constant acceleration due to gravity.

Then equation (2.8.1) takes the form

𝑑²𝜃/𝑑𝑡² + (𝑔/𝑙) sin 𝜃 = 0                    (2.8.2)

Figure-(2.3)

If the oscillation is taken to be very small, then (2.8.2) can be simplified by approximating sin 𝜃 ≈ 𝜃 as 𝜃 → 0. This approximation reduces equation (2.8.2) to the analytically solvable form


𝑑²𝜃/𝑑𝑡² + (𝑔/𝑙)𝜃 = 0                    (2.8.3)

However, if 𝜃 is not small, this reduction cannot be carried out. Multiplying both sides of (2.8.2) by 2(𝑑𝜃/𝑑𝑡), we get

2(𝑑𝜃/𝑑𝑡)(𝑑²𝜃/𝑑𝑡²) + 2(𝑑𝜃/𝑑𝑡)(𝑔/𝑙) sin 𝜃 = 0                    (2.8.4)

Now, integrating both sides of (2.8.4) with respect to 𝑡, we get

(𝑑𝜃/𝑑𝑡)² − (2𝑔/𝑙) cos 𝜃 = 𝑐 ; 𝑐 ∈ 𝑹                    (2.8.5)

By assuming a suitable initial condition we can determine the value of 𝑐. Suppose the initial condition is 𝑑𝜃/𝑑𝑡 = 0 when the angular displacement 𝜃 is maximum and equal to 𝛼; we thus obtain

𝑐 = −(2𝑔/𝑙) cos 𝛼

Then (2.8.5) becomes

(𝑑𝜃/𝑑𝑡)² − (2𝑔/𝑙) cos 𝜃 = −(2𝑔/𝑙) cos 𝛼

𝑜𝑟, (𝑑𝜃/𝑑𝑡)² = (2𝑔/𝑙)(cos 𝜃 − cos 𝛼)

𝑜𝑟, 𝑑𝜃/𝑑𝑡 = √[(2𝑔/𝑙)(cos 𝜃 − cos 𝛼)]                    (2.8.6)

This differential equation is of first order but not linear, so analytical methods are not fruitful for it, and we apply a numerical method to solve (2.8.6). Applying Euler's method with 𝑔 = 980 cm s⁻², 𝑙 = 50 cm and 𝛼 = 4°, it becomes

𝑑𝜃/𝑑𝑡 = √[(2 × 980/50)(cos 𝜃 − cos 4°)]
= √[39.2(cos 𝜃 − cos 4°)]

𝑖.𝑒. 𝑓(𝑡, 𝜃) = 𝑑𝜃/𝑑𝑡 = √[39.2(cos 𝜃 − cos 4°)]
𝑑𝑡

Now, assume the initial condition 𝜃 = 0° when 𝑡 = 0. Then an initial value problem arises as follows:

𝑓(𝑡, 𝜃) = 𝑑𝜃/𝑑𝑡 = √[39.2(cos 𝜃 − cos 4°)]                    (2.8.7)

with 𝜃 = 0° when 𝑡 = 0. For ℎ = 0.2, Euler's method gives


𝜃𝑛+1 = 𝜃𝑛 + ℎ𝑓(𝑡𝑛, 𝜃𝑛)

𝑜𝑟, 𝜃𝑛+1 = 𝜃𝑛 + ℎ√[39.2(cos 𝜃𝑛 − cos 4°)]

𝑜𝑟, 𝜃𝑛+1 = 𝜃𝑛 + (0.2)√[39.2(cos 𝜃𝑛 − cos 4°)]                    (2.8.8)

For the first approximation we proceed as follows: putting 𝑛 = 0 in (2.8.8), we get

𝜃1 = 𝜃0 + (0.2)√[39.2(cos 𝜃0 − cos 4°)]
= 0° + (0.2)√[39.2(cos 0° − cos 4°)]
= 0.061802663°

𝑖.𝑒. 𝜃1 = 0.061802663° and 𝑡1 = 𝑡0 + ℎ = 0.0 + 0.2 = 0.2

For the second approximation we proceed as follows: putting 𝑛 = 1 in (2.8.8), we get

𝜃2 = 𝜃1 + (0.2)√[39.2(cos 𝜃1 − cos 4°)]
= 0.061802663° + (0.2)√[39.2(cos 0.061802663° − cos 4°)]
= 0.123597946°

𝑖.𝑒. 𝜃2 = 0.123597946° and 𝑡2 = 𝑡1 + ℎ = 0.2 + 0.2 = 0.4

For the third approximation we proceed as follows: putting 𝑛 = 2 in (2.8.8), we get

𝜃3 = 𝜃2 + (0.2)√[39.2(cos 𝜃2 − cos 4°)]
= 0.123597946° + (0.2)√[39.2(cos 0.123597946° − cos 4°)]
= 0.185371086°

𝑖.𝑒. 𝜃3 = 0.185371086° and 𝑡3 = 𝑡2 + ℎ = 0.4 + 0.2 = 0.6

For the fourth approximation we proceed as follows: putting 𝑛 = 3 in (2.8.8), we get

𝜃4 = 𝜃3 + (0.2)√[39.2(cos 𝜃3 − cos 4°)]
= 0.185371086° + (0.2)√[39.2(cos 0.185371086° − cos 4°)]
= 0.247107322°

𝑖.𝑒. 𝜃4 = 0.247107322° and 𝑡4 = 𝑡3 + ℎ = 0.6 + 0.2 = 0.8

For the fifth approximation we proceed as follows: putting 𝑛 = 4 in (2.8.8), we get

𝜃5 = 𝜃4 + (0.2)√[39.2(cos 𝜃4 − cos 4°)]
= 0.247107322° + (0.2)√[39.2(cos 0.247107322° − cos 4°)]
= 0.308791893°

𝑖.𝑒. 𝜃5 = 0.308791893° and 𝑡5 = 𝑡4 + ℎ = 0.8 + 0.2 = 1.0


Now the solution of the simple pendulum problem is shown in the following table

Approximation No.    t       θ
00                   0.00    0°
01                   0.20    0.061802663°
02                   0.40    0.123597946°
03                   0.60    0.185371086°
04                   0.80    0.247107322°
05                   1.00    0.308791893°
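The table can be reproduced with a few lines of code. The script below is an illustrative sketch (not part of the thesis); following the computation above, it treats the angles inside the cosine terms as degrees, and the function name euler_pendulum is introduced here.

```python
import math

def euler_pendulum(h=0.20, steps=5, alpha_deg=4.0):
    """Euler's method for dθ/dt = √(39.2(cosθ − cosα)), θ(0) = 0°.

    Angles are handled in degrees, matching the worked example."""
    cos_alpha = math.cos(math.radians(alpha_deg))
    t, theta = 0.0, 0.0
    history = [(t, theta)]
    for _ in range(steps):
        slope = math.sqrt(39.2 * (math.cos(math.radians(theta)) - cos_alpha))
        theta += h * slope          # θ_{n+1} = θ_n + h f(t_n, θ_n)
        t += h
        history.append((t, theta))
    return history

for t, theta in euler_pendulum():
    print(f"t = {t:.2f}, theta = {theta:.9f}")
```

Running the loop reproduces the tabulated values to the precision shown above.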

Graphical representation of the application: The actual solution θ = θ(t) of the differential equation (2.8.1) can be represented as in the following figure, located by the dotted curve PQ [22].

Figure-(2.4)

In this method, the actual curve PQ is approximated at each point by a constant slope over a small interval of length h. Successive sub-intervals [a + (i−1)h, a + ih]; i = 1, 2, ……, n, each of length h, are considered, where the initial point is a = t₀.

The extension of Euler's algorithm up to the point t = a + nh yields the approximate value θ = θₙ, given by

θₙ = θ₀ + hf(t₀, θ₀) + hf(t₀ + h, θ₁) + hf(t₀ + 2h, θ₂) + ⋯ ⋯ + hf(t₀ + (n−1)h, θₙ₋₁)        (2.8.9)

Here θᵢ is the value of θ at t = t₀ + ih; i = 1, 2, 3, ……, n−1. Now from the above figure

𝐴1 𝐵1 = ℎ𝑓(𝑡0 , 𝜃0 )

So, the ordinate of 𝐵1 is given by

𝜃1 = 𝜃0 + ℎ𝑓(𝑡0 , 𝜃0 )

This is obtained by considering the constant slope of the line segment 𝑃𝐵1 .


The next slope is computed at the point B₁ ≡ (t₀ + h, θ₁) as f(t₀ + h, θ₁). Proceeding with this slope, we reach the point B₂, where

𝐴2 𝐵2 = ℎ𝑓(𝑡0 + ℎ, 𝜃1 )

Here the ordinate θ₂ of B₂ is the sum of the ordinate of B₁ and A₂B₂, so we get

𝜃2 = 𝜃0 + ℎ𝑓(𝑡0 , 𝜃0 ) + ℎ𝑓(𝑡0 + ℎ, 𝜃1 ) and so on.

Generalizing this procedure, we obtain the equation (2.8.9), which gives the ordinate θₙ of Bₙ, the approximate value of θ at t = t₀ + nh.

Thus the error in this solution is given by 𝐴𝑛 𝐵𝑛 .

Advantages of Euler's method: Since no integration appears in the calculation, this method is easier for practical purposes than the previous two methods of this chapter. As each approximation uses the result of the previous approximation, the accuracy of the solution improves step by step, and the method is also less time-consuming. Moreover, a problem that cannot be solved by analytical methods, or that is hard to treat by Taylor's series method or Picard's method, can be solved successfully by Euler's method because of its recursive nature.

Disadvantages of Euler's method: If dy/dx changes rapidly over an interval, its value at the beginning of the interval gives a poor approximation in comparison with its average value over the interval. So the calculated value of y in this method contains a larger error than the exact value, and this error reasonably increases in the succeeding intervals, so that the final value of y may differ on a large scale from the exact value.

Euler's method needs a smaller value of h; because of this restriction the method is unsuitable for practical use and can be applied for tabulating the values of the dependent variable over a limited range only. Moreover, if h is not small enough this method is too inaccurate. In Euler's method, the actual solution curve is approximated by a sequence of short straight lines, which sometimes deviates from the solution curve significantly.

2.9 MODIFICATION OF EULER’S METHOD

Due to the above considerations, we can say that the computed values of y will deviate further and further from the actual values of y so long as the curvature of the graph does not change. This motivates a modification of Euler's method [11,18].

Derivation: Starting with the initial value y₀, an approximate value of y₁ is computed from the relation given by

y₁⁽¹⁾ ≈ y₀ + h(dy/dx)₀ = y₀ + hf(x₀, y₀)

Here y₁⁽¹⁾ is the first approximation of y at x = x₁.

Substituting this approximate value of y₁ in (2.7.1), we get an approximate value of dy/dx = f(x, y) at the end of the first interval as

(dy/dx)₁⁽¹⁾ = f(x₁, y₁⁽¹⁾)

Now the improved value of ∆y is obtained by using the trapezoidal rule as

∆y ≈ (h/2)[(dy/dx)₀ + (dy/dx)₁⁽¹⁾]
or, ∆y ≈ (h/2)[f(x₀, y₀) + f(x₁, y₁⁽¹⁾)]

This value of ∆y is more accurate than the value

h(dy/dx)₀ = hf(x₀, y₀)

Then the second approximation for y₁ is now

y₁⁽²⁾ ≈ y₀ + (h/2)[f(x₀, y₀) + f(x₁, y₁⁽¹⁾)]

By substituting this improved value y₁⁽²⁾, we get the second approximate value of dy/dx = f(x, y) as follows

(dy/dx)₁⁽²⁾ = f(x₁, y₁⁽²⁾)

Then the third approximation for y₁ is now

y₁⁽³⁾ ≈ y₀ + (h/2)[f(x₀, y₀) + f(x₁, y₁⁽²⁾)]

Continuing this process, we can find

y₁⁽ⁿ⁾ ≈ y₀ + (h/2)[f(x₀, y₀) + f(x₁, y₁⁽ⁿ⁻¹⁾)]        (2.9.1)

Then the next approximation for y₁ is obtained as

y₁⁽ⁿ⁺¹⁾ ≈ y₀ + (h/2)[f(x₀, y₀) + f(x₁, y₁⁽ⁿ⁾)]        (2.9.2)

This process is applied repeatedly until no significant change is produced in two


consecutive values of 𝑦1 .
The above process is applied for the first interval and the same manner can be
held for the next intervals also. Then the general formula for the modified Euler’s
method takes the following form


yₘ₊₁⁽ⁿ⁺¹⁾ ≈ yₘ + (h/2)[f(xₘ, yₘ) + f(xₘ₊₁, yₘ₊₁⁽ⁿ⁾)]        (2.9.3)

and yₘ₊₁ ≈ yₘ + hf(xₘ, yₘ)        (2.9.4)

Truncation error of modified Euler's method: First we will improve the modified Euler's formula for better accuracy, and then find the truncation error with the help of the improved formula [11].

From Euler's method, the first approximations to y₂, y₃, …………… can be found by means of the formula

yₙ₊₁ = yₙ + h(dy/dx)ₙ = yₙ + hyₙ′        (2.9.4)

But as soon as two consecutive values of 𝑦 are known, the first approximation
to the succeeding values of 𝑦 can be found more accurately from the formula

𝑦𝑛+1 = 𝑦𝑛−1 + 2ℎ𝑦𝑛′ (2.9.5)

To derive this formula, let the function y = y(x) be represented in the neighborhood of xₙ by Taylor's series as follows

yₙ₊₁ = y(xₙ) + hy′(xₙ) + (h²/2)y″(xₙ) + (h³/6)y‴(xₙ) + (h⁴/24)yⁱᵛ(xₙ) + ⋯ ⋯        (2.9.6)
yₙ₋₁ = y(xₙ) − hy′(xₙ) + (h²/2)y″(xₙ) − (h³/6)y‴(xₙ) + (h⁴/24)yⁱᵛ(xₙ) − ⋯ ⋯        (2.9.7)

Now by subtracting (2.9.7) from (2.9.6), we get

yₙ₊₁ − yₙ₋₁ = 2hy′(xₙ) + (h³/3)y‴(xₙ) + (h⁵/60)yᵛ(xₙ) + ⋯ ⋯ ⋯
or, yₙ₊₁ = yₙ₋₁ + 2hy′(xₙ) + (h³/3)y‴(xₙ) + (h⁵/60)yᵛ(xₙ) + ⋯ ⋯ ⋯        (2.9.8)

When h is very small and only the first two terms in the right-hand members of (2.9.6) and (2.9.8) are used, the truncation errors are (h²/2)yₙ″ and (h³/3)yₙ‴ respectively; the latter is much smaller than the former. Thus, (2.9.8) gives a more accurate value of yₙ₊₁.

The first approximation of 𝑦 formed from (2.9.5) is to be corrected and improved


by the averaging process described above. The principal part of the error in the final
value of 𝑦 can be found as follows.

Since the increment in y for each step is obtained from the formula

∆y = (h/2)(yₙ′ + yₙ₊₁′)        (2.9.9)


The right-hand member of (2.9.9) has the form of the trapezoidal rule; the principal part of the error in ∆y (by the mean value theorem) is obtained as

−(h²/12)(yₙ₊₁″ − yₙ″) = −(h²/12)[f′(xₙ₊₁) − f′(xₙ)]
                      = −(h³/12)f″(τ) = −(h³/12)y‴(τ);  xₙ < τ < xₙ₊₁        (2.9.10)

This shows that the error involves only terms in h³ and higher orders; hence the error is of order h³. Since in the case of Euler's method the error is of order h², it is clear that the modified Euler's method is more accurate than Euler's method.
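These orders can be checked numerically. Per step the errors are of order h² (Euler) and h³ (modified Euler), so over a fixed interval of 1/h steps the accumulated errors are of order h and h² respectively: halving h should roughly halve the global error of Euler's method and quarter that of the modified method. The sketch below is illustrative only and uses the test equation dy/dx = y, y(0) = 1 (not taken from the thesis).

```python
import math

def euler(f, x0, y0, x_end, h):
    """Plain Euler steps: y_{n+1} = y_n + h f(x_n, y_n)."""
    n = round((x_end - x0) / h)
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

def modified_euler(f, x0, y0, x_end, h, tol=1e-12):
    """Euler predictor followed by the iterated trapezoidal corrector."""
    n = round((x_end - x0) / h)
    x, y = x0, y0
    for _ in range(n):
        y_next = y + h * f(x, y)                                # predictor
        while True:
            y_new = y + (h / 2) * (f(x, y) + f(x + h, y_next))  # corrector
            if abs(y_new - y_next) < tol:
                break
            y_next = y_new
        x, y = x + h, y_new
    return y

f = lambda x, y: y        # test problem y' = y, exact solution e^x
exact = math.e
for method in (euler, modified_euler):
    e1 = abs(method(f, 0.0, 1.0, 1.0, 0.01) - exact)
    e2 = abs(method(f, 0.0, 1.0, 1.0, 0.005) - exact)
    print(method.__name__, "error ratio when h is halved:", round(e1 / e2, 2))
```

The observed ratios come out close to 2 and 4 respectively, consistent with global orders h and h².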

2.10 APPLICATION OF MODIFIED EULER’S METHOD

Solve the initial value problem dy/dx = x + y; y(0) = 1 by the modified Euler's method.

Solution: Given that y′ = dy/dx = x + y = f(x, y) with y₀ = 1 at x₀ = 0

We know from Euler’s and modified Euler’s formulae as

yₘ₊₁ = yₘ + hf(xₘ, yₘ)        (2.10.1)

yₘ₊₁⁽ⁿ⁺¹⁾ = yₘ + (h/2)[f(xₘ, yₘ) + f(xₘ₊₁, yₘ₊₁⁽ⁿ⁾)]        (2.10.2)

Here, taking h = 0.05 and f(x₀, y₀) = x₀ + y₀ = 0 + 1 = 1

First approximation for y₁: putting m = 0 in (2.10.1), we get

y₁⁽¹⁾ = y₁ = y₀ + hf(x₀, y₀)
      = 1 + (0.05)(1) = 1.05
x₁ = x₀ + h = 0.00 + 0.05 = 0.05
So, f(x₁, y₁⁽¹⁾) = x₁ + y₁⁽¹⁾ = 0.05 + 1.05 = 1.10

Second approximation for y₁: putting n = 1, m = 0 in (2.10.2), we get

y₁⁽²⁾ = y₀ + (h/2)[f(x₀, y₀) + f(x₁, y₁⁽¹⁾)]
      = 1 + (0.05/2)[1 + 1.10] = 1.0525
So, f(x₁, y₁⁽²⁾) = x₁ + y₁⁽²⁾ = 0.05 + 1.0525 = 1.1025

Third approximation for y₁: putting n = 2, m = 0 in (2.10.2), we get

y₁⁽³⁾ = y₀ + (h/2)[f(x₀, y₀) + f(x₁, y₁⁽²⁾)]
      = 1 + (0.05/2)[1 + 1.1025] = 1.05256
So, f(x₁, y₁⁽³⁾) = x₁ + y₁⁽³⁾ = 0.05 + 1.05256 = 1.10256

Fourth approximation for y₁: putting n = 3, m = 0 in (2.10.2), we get

y₁⁽⁴⁾ = y₀ + (h/2)[f(x₀, y₀) + f(x₁, y₁⁽³⁾)]
      = 1 + (0.05/2)[1 + 1.10256] = 1.05256
So, f(x₁, y₁⁽⁴⁾) = x₁ + y₁⁽⁴⁾ = 0.05 + 1.05256 = 1.10256

Since y₁⁽³⁾ and y₁⁽⁴⁾ are the same, continuing the iteration process produces no further change in y₁. Therefore we take

y₁ = 1.05256 and f(x₁, y₁) = 1.10256

First approximation for y₂: putting m = 1 in (2.10.1), we get

y₂⁽¹⁾ = y₂ = y₁ + hf(x₁, y₁)
      = 1.05256 + (0.05)(1.10256) = 1.10769
x₂ = x₁ + h = 0.05 + 0.05 = 0.10
So, f(x₂, y₂⁽¹⁾) = x₂ + y₂⁽¹⁾ = 0.10 + 1.10769 = 1.20769

Second approximation for y₂: putting n = 1, m = 1 in (2.10.2), we get

y₂⁽²⁾ = y₁ + (h/2)[f(x₁, y₁) + f(x₂, y₂⁽¹⁾)]
      = 1.05256 + (0.05/2)[1.10256 + 1.20769] = 1.11032
So, f(x₂, y₂⁽²⁾) = x₂ + y₂⁽²⁾ = 0.10 + 1.11032 = 1.21032

Third approximation for y₂: putting n = 2, m = 1 in (2.10.2), we get

y₂⁽³⁾ = y₁ + (h/2)[f(x₁, y₁) + f(x₂, y₂⁽²⁾)]
      = 1.05256 + (0.05/2)[1.10256 + 1.21032] = 1.11038
So, f(x₂, y₂⁽³⁾) = x₂ + y₂⁽³⁾ = 0.10 + 1.11038 = 1.21038

Fourth approximation for y₂: putting n = 3, m = 1 in (2.10.2), we get

y₂⁽⁴⁾ = y₁ + (h/2)[f(x₁, y₁) + f(x₂, y₂⁽³⁾)]
      = 1.05256 + (0.05/2)[1.10256 + 1.21038] = 1.11038
So, f(x₂, y₂⁽⁴⁾) = x₂ + y₂⁽⁴⁾ = 0.10 + 1.11038 = 1.21038

Since y₂⁽³⁾ and y₂⁽⁴⁾ are the same, continuing the iteration process produces no further change in y₂. Therefore we take

y₂ = 1.11038 and f(x₂, y₂) = 1.21038

Collecting our results in tabular form, we have the following table

𝑥 𝑦 𝑓(𝑥, 𝑦)
0.00 1.00000 1.00000
0.05 1.05256 1.10256
0.10 1.11038 1.21038

Exact result: We have dy/dx = x + y
or, dy/dx − y = x

This is a linear differential equation in y whose integrating factor is

IF = e^(∫−dx) = e⁻ˣ

Multiplying the above differential equation by e⁻ˣ, it becomes

e⁻ˣ(dy/dx) − ye⁻ˣ = xe⁻ˣ
or, d/dx(ye⁻ˣ) = xe⁻ˣ
or, ye⁻ˣ = ∫xe⁻ˣ dx + c;  c ∈ 𝑹
or, ye⁻ˣ = c − e⁻ˣ(x + 1)
or, y = ceˣ − x − 1

From the initial condition y = 1 at x = 0, we get

1 = ce⁰ − 0 − 1 ⇒ c = 2

Then the above solution becomes

y = 2eˣ − x − 1

So we obtain the required solutions as follows

y = 1.00000 at x = 0.00


y = 1.05254 at x = 0.05

y = 1.11034 at x = 0.10

Comparing with the table obtained from the approximations, the difference between the approximate values and the exact values increases step by step, so the method can be improved by taking a smaller value of h.
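The computation above can be sketched in a few lines of code. The script below is an illustrative sketch (not part of the thesis; the function name modified_euler_step is introduced here). It iterates the corrector (2.10.2) until two consecutive values agree, and compares the result with the exact solution y = 2eˣ − x − 1; the values agree with the table above to about four decimal places.

```python
import math

def modified_euler_step(f, x, y, h, tol=1e-6):
    """One step of the modified Euler method: Euler predictor,
    then iterate the trapezoidal corrector until convergence."""
    y_next = y + h * f(x, y)                                 # predictor (2.10.1)
    while True:
        y_new = y + (h / 2) * (f(x, y) + f(x + h, y_next))   # corrector (2.10.2)
        if abs(y_new - y_next) < tol:
            return y_new
        y_next = y_new

f = lambda x, y: x + y            # the example dy/dx = x + y, y(0) = 1
x, y, h = 0.0, 1.0, 0.05
for _ in range(2):
    y = modified_euler_step(f, x, y, h)
    x += h
    exact = 2 * math.exp(x) - x - 1
    print(f"x = {x:.2f}, y ≈ {y:.5f}, exact = {exact:.5f}")
```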



CHAPTER-3

SOLUTION OF ORDINARY DIFFERENTIAL


EQUATIONS OF FIRST ORDER AND FIRST
DEGREE BY PREDICTOR-CORRECTOR
METHOD AND RUNGE-KUTTA METHOD

3.1 INTRODUCTION

In the previous chapter, we discussed three early-stage numerical methods for solving ordinary differential equations. Now, in this chapter, we will discuss two modern numerical methods for solving ordinary differential equations, known as the predictor-corrector method and the Runge-Kutta method respectively. It is to be noted that one predictor-corrector based method has already been mentioned in the previous chapter, namely the modified Euler's method.

Now we describe the numerical methods mentioned above in detail with applications, and then compare them.

3.2 DEFINITION OF PREDICTOR-CORRECTOR METHOD

In the methods described so far for solving an ordinary differential equation over an interval, only the value of y at the beginning of the interval was required. In the predictor-corrector methods, four prior values are needed for finding the value of y at a given value of x [2,6]. Though these methods are slightly complex, they have the advantage of giving an estimate of the error from successive approximations of yᵢ, with xᵢ₊₁ = xᵢ + h.

Then from Euler’s formula, we have

𝑦𝑖+1 = 𝑦𝑖 + ℎ𝑓(𝑥𝑖 , 𝑦𝑖 ) (3.2.1)

Also from modified Euler's formula, we have

yᵢ₊₁ = yᵢ + (h/2)[f(xᵢ, yᵢ) + f(xᵢ₊₁, yᵢ₊₁)]        (3.2.2)

A first estimate of yᵢ₊₁ is obtained by (3.2.1), and then a better approximation of yᵢ₊₁ is obtained by using (3.2.2). This value of yᵢ₊₁ is again substituted in (3.2.2) to find a still better approximation of yᵢ₊₁. This procedure is repeated until two consecutive iterated values of yᵢ₊₁ agree. This technique of refining an initially crude estimate of yᵢ₊₁ by means of a more accurate formula is known as the predictor-corrector method. The equation (3.2.1) is taken as the predictor, while (3.2.2) serves as the corrector of yᵢ₊₁.

In this section, we will describe two predictor-corrector methods: Milne's predictor-corrector formula and the Adams-Moulton predictor-corrector formula.

3.3 MILNE’S PREDICTOR-CORRECTOR METHOD


Milne's method is a simple and reasonably accurate method of solving ordinary differential equations numerically. To solve the differential equation y′ = f(x, y) by this method, first we approximate the value of yᵢ₊₁ by the predictor formula at x = xᵢ₊₁, and then improve this value of yᵢ₊₁ by using the corrector formula. These formulae are derived from Newton's formula of interpolation [6,9].
Derivation of Milne's predictor formula: We know that Newton's formula for forward interpolation in terms of y′ and u is given by

y′ = y₀′ + u∆y₀′ + (u(u−1)/2!)∆²y₀′ + (u(u−1)(u−2)/3!)∆³y₀′ + (u(u−1)(u−2)(u−3)/4!)∆⁴y₀′ + ⋯ ⋯
or, y′ = y₀′ + u∆y₀′ + ((u²−u)/2)∆²y₀′ + ((u³−3u²+2u)/6)∆³y₀′ + ((u⁴−6u³+11u²−6u)/24)∆⁴y₀′ + ⋯ ⋯ ⋯        (3.3.1)

Here u = (x − x₀)/h
or, x = x₀ + uh ⇒ dx = h du

Now, integrating (3.3.1) over the interval x₀ to x₀ + 4h, i.e., u = 0 to u = 4, we get

∫ₓ₀^(x₀+4h) y′ dx = h∫₀⁴ [y₀′ + u∆y₀′ + ((u²−u)/2)∆²y₀′ + ((u³−3u²+2u)/6)∆³y₀′ + ((u⁴−6u³+11u²−6u)/24)∆⁴y₀′ + ⋯ ⋯ ⋯] du
or, [y] from x₀ to x₀+4h = h[uy₀′ + (u²/2)∆y₀′ + (1/2)(u³/3 − u²/2)∆²y₀′ + (1/6)(u⁴/4 − u³ + u²)∆³y₀′ + (1/24)(u⁵/5 − 3u⁴/2 + 11u³/3 − 3u²)∆⁴y₀′ + ⋯ ⋯ ⋯] evaluated from u = 0 to u = 4
or, y(x₀ + 4h) − y(x₀) = h[4y₀′ + 8∆y₀′ + (20/3)∆²y₀′ + (8/3)∆³y₀′ + (28/90)∆⁴y₀′ + ⋯ ⋯]
or, y₄ − y₀ = h[4y₀′ + 8∆y₀′ + (20/3)∆²y₀′ + (8/3)∆³y₀′ + (28/90)∆⁴y₀′ + ⋯ ⋯]        (3.3.2)
3 90

After neglecting the terms containing ∆⁵ and higher orders and substituting ∆ ≡ E − 1, from (3.3.2) we get Milne's predictor formula as follows

y₄ − y₀ = h[4y₀′ + 8(E − 1)y₀′ + (20/3)(E − 1)²y₀′ + (8/3)(E − 1)³y₀′ + (28/90)∆⁴y₀′]
or, y₄ = y₀ + h[4y₀′ + 8(E − 1)y₀′ + (20/3)(E² − 2E + 1)y₀′ + (8/3)(E³ − 3E² + 3E − 1)y₀′ + (28/90)∆⁴y₀′]
       = y₀ + h[(4 − 8 + 20/3 − 8/3)y₀′ + (8 − 40/3 + 8)Ey₀′ + (20/3 − 8)E²y₀′ + (8/3)E³y₀′ + (28/90)∆⁴y₀′]
       = y₀ + h[(0)y₀′ + (8/3)Ey₀′ − (4/3)E²y₀′ + (8/3)E³y₀′ + (28/90)∆⁴y₀′]
       = y₀ + (4h/3)[2y₁′ − y₂′ + 2y₃′] + (28/90)h∆⁴y₀′

[Ey₀′ = y₁′, E²y₀′ = y₂′, E³y₀′ = y₃′]

i.e. y₄ = y₀ + (4h/3)[2y₁′ − y₂′ + 2y₃′] + (28/90)h∆⁴y₀′        (3.3.3)
3

Derivation of Milne's corrector formula: To obtain the corrector formula, we integrate (3.3.1) over the interval x₀ to x₀ + 2h, i.e., u = 0 to u = 2; then we get

∫ₓ₀^(x₀+2h) y′ dx = h∫₀² [y₀′ + u∆y₀′ + ((u²−u)/2)∆²y₀′ + ((u³−3u²+2u)/6)∆³y₀′ + ((u⁴−6u³+11u²−6u)/24)∆⁴y₀′ + ⋯ ⋯ ⋯] du
or, [y] from x₀ to x₀+2h = h[uy₀′ + (u²/2)∆y₀′ + (1/2)(u³/3 − u²/2)∆²y₀′ + (1/6)(u⁴/4 − u³ + u²)∆³y₀′ + (1/24)(u⁵/5 − 3u⁴/2 + 11u³/3 − 3u²)∆⁴y₀′ + ⋯ ⋯ ⋯] evaluated from u = 0 to u = 2
or, y(x₀ + 2h) − y(x₀) = h[2y₀′ + 2∆y₀′ + (1/3)∆²y₀′ + (0)∆³y₀′ − (1/90)∆⁴y₀′ + ⋯ ⋯]
or, y₂ − y₀ = h[2y₀′ + 2∆y₀′ + (1/3)∆²y₀′ − (1/90)∆⁴y₀′ + ⋯ ⋯]        (3.3.4)

After neglecting the terms containing ∆⁵ and higher orders and substituting ∆ ≡ E − 1, from (3.3.4) we get Milne's corrector formula as follows

y₂ − y₀ = h[2y₀′ + 2(E − 1)y₀′ + (1/3)(E − 1)²y₀′ − (1/90)∆⁴y₀′]
or, y₂ = y₀ + h[2y₀′ + 2(E − 1)y₀′ + (1/3)(E² − 2E + 1)y₀′ − (1/90)∆⁴y₀′]
       = y₀ + h[(2 − 2 + 1/3)y₀′ + (2 − 2/3)Ey₀′ + (1/3)E²y₀′ − (1/90)∆⁴y₀′]
       = y₀ + h[(1/3)y₀′ + (4/3)Ey₀′ + (1/3)E²y₀′ − (1/90)∆⁴y₀′]
       = y₀ + (h/3)[y₀′ + 4y₁′ + y₂′] − (1/90)h∆⁴y₀′

[Ey₀′ = y₁′, E²y₀′ = y₂′]

i.e. y₂ = y₀ + (h/3)[y₀′ + 4y₁′ + y₂′] − (1/90)h∆⁴y₀′        (3.3.5)
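Since the neglected terms involve ∆⁴y₀′, formulae (3.3.3) and (3.3.5) are exact whenever y is a polynomial of degree at most four. A quick illustrative check (the script is not part of the thesis) with y = x⁴:

```python
# Check that Milne's formulae (3.3.3) and (3.3.5) are exact for y = x^4
# on the nodes x = 0, 1, 2, 3, 4 with h = 1 (so y' = 4x^3).
h = 1.0
y = [x**4 for x in range(5)]
dy = [4 * x**3 for x in range(5)]

# Predictor (3.3.3): y4 = y0 + (4h/3)[2y1' - y2' + 2y3']
y4_pred = y[0] + (4 * h / 3) * (2 * dy[1] - dy[2] + 2 * dy[3])
# Corrector (3.3.5): y2 = y0 + (h/3)[y0' + 4y1' + y2']
y2_corr = y[0] + (h / 3) * (dy[0] + 4 * dy[1] + dy[2])

print(y4_pred, y[4])   # both equal 256
print(y2_corr, y[2])   # both equal 16
```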
90

Generalization of Milne's predictor-corrector formulae: We can write the general form of Milne's predictor and corrector formulae according to (3.3.3) and (3.3.5) as follows

yᵖₙ₊₁ = yₙ₋₃ + (4h/3)[2y′ₙ₋₂ − y′ₙ₋₁ + 2yₙ′]        (3.3.6)
yᶜₙ₊₁ = yₙ₋₁ + (h/3)[y′ₙ₋₁ + 4yₙ′ + y′ₙ₊₁]        (3.3.7)

Here the indices p and c indicate the predicted and corrected values of yₙ₊₁ respectively at x = xₙ₊₁.

Local truncation error: The terms involving ∆⁴y′ were omitted in the above formulae, whereas they indicate the principal parts of the errors in the values of yₙ₊₁ as computed from (3.3.6) and (3.3.7). It is to be noticed that the errors occurring in (3.3.6) and (3.3.7) are of opposite sign with very small magnitudes.

Since (28/90)h∆⁴y′ and −(1/90)h∆⁴y′ are taken as the principal parts of the errors, we may write

[yₙ₊₁]Exact = yᵖₙ₊₁ + (28/90)h∆⁴y′        (3.3.8)
[yₙ₊₁]Exact = yᶜₙ₊₁ − (1/90)h∆⁴y′        (3.3.9)

Now subtracting (3.3.9) from (3.3.8), we get

yᵖₙ₊₁ − yᶜₙ₊₁ = −(28/90)h∆⁴y′ − (1/90)h∆⁴y′
             = −(29/90)h∆⁴y′ = 29(−(1/90)h∆⁴y′) = 29E_T

Here E_T denotes the principal part of the error in equation (3.3.7). From this, we get the following

29E_T = yᵖₙ₊₁ − yᶜₙ₊₁
or, E_T = (1/29)(yᵖₙ₊₁ − yᶜₙ₊₁)        (3.3.10)

Thus we can conclude that the error in (3.3.9) is (1/29)th of the difference between the predicted and corrected values of y at x = xₙ₊₁.

3.4 APPLICATION OF MILNE’S PREDICTOR-CORRECTOR METHOD

Solve the differential equation dy/dx = 2eˣ − y at x = 0.4 with the initial values
y(0.0) = 2.000, y(0.1) = 2.010, y(0.2) = 2.040, y(0.3) = 2.090

Solution: Given y′ = dy/dx = 2eˣ − y = f(x, y)

By taking the step length h = 0.1, from the initial conditions we get


𝑥0 = 0.0, 𝑦0 = 2.000 ⇒ 𝑓0 = 2𝑒 𝑥0 − 𝑦0 = 2𝑒 0.0 − 2.000 = 0.000000


𝑥1 = 0.1, 𝑦1 = 2.010 ⇒ 𝑓1 = 2𝑒 𝑥1 − 𝑦1 = 2𝑒 0.1 − 2.010 = 0.200342
𝑥2 = 0.2, 𝑦2 = 2.040 ⇒ 𝑓2 = 2𝑒 𝑥2 − 𝑦2 = 2𝑒 0.2 − 2.040 = 0.402806
𝑥3 = 0.3, 𝑦3 = 2.090 ⇒ 𝑓3 = 2𝑒 𝑥3 − 𝑦3 = 2𝑒 0.3 − 2.090 = 0.609718
Finally, 𝑥4 = 𝑥3 + ℎ = 0.3 + 0.1 = 0.4

Now, putting n = 3 in (3.3.6), we get Milne's predictor formula for y₄ as follows

y₄ = y₀ + (4h/3)[2y₁′ − y₂′ + 2y₃′]
   = y₀ + (4h/3)[2f₁ − f₂ + 2f₃]
   = 2.000000 + (4(0.1)/3)[2(0.200342) − 0.402806 + 2(0.609718)]
   = 2.162309
Then f₄ = 2e^x₄ − y₄ = 2e^0.4 − 2.162309 = 0.821340

Now, putting n = 3 in (3.3.7), we get Milne's corrector formula for y₄ as follows

y₄ = y₂ + (h/3)[y₂′ + 4y₃′ + y₄′]
   = y₂ + (h/3)[f₂ + 4f₃ + f₄]

Then we get the approximations of y₄ by the above formula as follows

First iteration:
y₄⁽¹⁾ = y₂ + (h/3)[f₂ + 4f₃ + f₄]
      = 2.040 + (0.1/3)[0.402806 + 4(0.609718) + 0.821340]
      = 2.162101
Then f₄ = 2e^0.4 − 2.162101 = 0.821548

Second iteration:
y₄⁽²⁾ = y₂ + (h/3)[f₂ + 4f₃ + f₄]
      = 2.040 + (0.1/3)[0.402806 + 4(0.609718) + 0.821548]
      = 2.162108
Then f₄ = 2e^0.4 − 2.162108 = 0.821541


Third iteration:
y₄⁽³⁾ = y₂ + (h/3)[f₂ + 4f₃ + f₄]
      = 2.040 + (0.1/3)[0.402806 + 4(0.609718) + 0.821541]
      = 2.162107
Then f₄ = 2e^0.4 − 2.162107 = 0.821542

Fourth iteration:
y₄⁽⁴⁾ = y₂ + (h/3)[f₂ + 4f₃ + f₄]
      = 2.040 + (0.1/3)[0.402806 + 4(0.609718) + 0.821542]
      = 2.162107
Then f₄ = 2e^0.4 − 2.162107 = 0.821542

Since the fourth approximation for y₄ is the same as the third approximation, we can choose the following approximate values

y₄ = 2.162107 at x₄ = 0.4

Now the local truncation error, from (3.3.10), is given by

E_T = (1/29)(yᵖₙ₊₁ − yᶜₙ₊₁) = (1/29)(2.162309 − 2.162107) = 0.000007
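The worked example can be condensed into a short script. This is an illustrative sketch, not part of the thesis (the function name milne_step is introduced here); it applies the predictor (3.3.6) once and then iterates the corrector (3.3.7) until two consecutive values agree.

```python
import math

def milne_step(f, xs, ys, h, tol=1e-6):
    """Advance one step with Milne's predictor-corrector method.
    xs, ys hold the four known points x_{n-3}..x_n, y_{n-3}..y_n."""
    fs = [f(x, y) for x, y in zip(xs, ys)]
    x_next = xs[-1] + h
    # Predictor (3.3.6): y_{n+1} = y_{n-3} + (4h/3)[2f_{n-2} - f_{n-1} + 2f_n]
    y_next = ys[0] + (4 * h / 3) * (2 * fs[1] - fs[2] + 2 * fs[3])
    while True:
        # Corrector (3.3.7): y_{n+1} = y_{n-1} + (h/3)[f_{n-1} + 4f_n + f_{n+1}]
        y_new = ys[2] + (h / 3) * (fs[2] + 4 * fs[3] + f(x_next, y_next))
        if abs(y_new - y_next) < tol:
            return x_next, y_new
        y_next = y_new

f = lambda x, y: 2 * math.exp(x) - y
xs = [0.0, 0.1, 0.2, 0.3]
ys = [2.000, 2.010, 2.040, 2.090]
x4, y4 = milne_step(f, xs, ys, 0.1)
print(f"y({x4:.1f}) ≈ {y4:.6f}")   # ≈ 2.162107
```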

3.5 ADAMS-MOULTON PREDICTOR-CORRECTOR METHOD

The Adams-Moulton method is a general approach to the predictor-corrector formula, developed to use the information about a function y(x) and its first derivative y′ = f(x, y) at the past three points, together with one more old value of the derivative [2,9].

Derivation of Adams-Moulton predictor formula: The most general linear predictor formula which involves the information about the function at the past three points, together with its derivative at those points and at one additional earlier point, is

yₙ₊₁ = A₀yₙ + A₁yₙ₋₁ + A₂yₙ₋₂ + h[B₀yₙ′ + B₁y′ₙ₋₁ + B₂y′ₙ₋₂ + B₃y′ₙ₋₃]        (3.5.1)

The above equation contains seven unknowns. Suppose it holds for polynomials up to degree four; hence we take y(x) = xⁿ; n = 0, 1, 2, 3, 4. Let the spacing between consecutive values of x be unity, i.e., take h = 1.

Now putting h = 1 and y(x) = 1, x, x², x³, x⁴ successively in (3.5.1), we get

1 = A₀ + A₁ + A₂
1 = −A₁ − 2A₂ + B₀ + B₁ + B₂ + B₃
1 = A₁ + 4A₂ − 2B₁ − 4B₂ − 6B₃
1 = −A₁ − 8A₂ + 3B₁ + 12B₂ + 27B₃
1 = A₁ + 16A₂ − 4B₁ − 32B₂ − 108B₃        (3.5.2)

We have five equations with seven unknowns. Taking A₁ and A₂ as parameters and solving the equations in (3.5.2), we get

A₀ = 1 − A₁ − A₂
B₀ = (1/24)[55 + 9A₁ + 8A₂]
B₁ = (1/24)[−59 + 19A₁ + 32A₂]
B₂ = (1/24)[37 − 5A₁ + 8A₂]
B₃ = (1/24)[−9 + A₁]        (3.5.3)

Since A₁ and A₂ are arbitrary, we choose A₁ = 0 = A₂. Then we obtain from (3.5.3) the following

A₀ = 1, B₀ = 55/24, B₁ = −59/24, B₂ = 37/24, B₃ = −9/24
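With A₁ = A₂ = 0, the system (3.5.2) reduces to four linear equations in B₀, B₁, B₂, B₃, which can be solved exactly with rational arithmetic. The following sketch (illustrative, not part of the thesis; the helper solve_linear is introduced here) recovers the coefficients above.

```python
from fractions import Fraction

def solve_linear(A, b):
    """Gauss-Jordan elimination with exact rational arithmetic."""
    n = len(b)
    M = [[Fraction(v) for v in row] + [Fraction(rhs)]
         for row, rhs in zip(A, b)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col] / M[col][col]
                M[r] = [a - factor * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# System (3.5.2) with A1 = A2 = 0; unknowns B0, B1, B2, B3
A = [[1,  1,   1,    1],
     [0, -2,  -4,   -6],
     [0,  3,  12,   27],
     [0, -4, -32, -108]]
b = [1, 1, 1, 1]
print(solve_linear(A, b))   # Fractions 55/24, -59/24, 37/24, -9/24
```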

Substituting these values in (3.5.1), we get the Adams-Moulton predictor formula as follows

yᵖₙ₊₁ = (1)yₙ + h[(55/24)yₙ′ + (−59/24)y′ₙ₋₁ + (37/24)y′ₙ₋₂ + (−9/24)y′ₙ₋₃]
or, yᵖₙ₊₁ = yₙ + (h/24)[55yₙ′ − 59y′ₙ₋₁ + 37y′ₙ₋₂ − 9y′ₙ₋₃]        (3.5.4)

Derivation of Adams-Moulton corrector formula: The most general linear corrector formula which involves the information about the function and its derivative at the past three points, together with the value of the derivative at the point being computed, is

yₙ₊₁ = a₀yₙ + a₁yₙ₋₁ + a₂yₙ₋₂ + h[b₋₁y′ₙ₊₁ + b₀yₙ′ + b₁y′ₙ₋₁ + b₂y′ₙ₋₂]        (3.5.5)

The above equation contains seven unknowns. Suppose it holds for polynomials up to degree four; hence we take y(x) = xⁿ; n = 0, 1, 2, 3, 4. Let the spacing between consecutive values of x be unity, i.e., take h = 1.

Now putting h = 1 and y(x) = 1, x, x², x³, x⁴ successively in (3.5.5), we get

1 = a₀ + a₁ + a₂
9 = a₁ + 24b₋₁
−19 = 13a₁ + 8a₂ − 24b₀
5 = 13a₁ + 32a₂ − 24b₁
1 = a₁ − 8a₂ + 24b₂        (3.5.6)

We have five equations with seven unknowns. Taking a₁ and a₂ as parameters and solving the equations in (3.5.6), we get

a₀ = 1 − a₁ − a₂
b₋₁ = (1/24)[9 − a₁]
b₀ = (1/24)[19 + 13a₁ + 8a₂]
b₁ = (1/24)[−5 + 13a₁ + 32a₂]
b₂ = (1/24)[1 − a₁ + 8a₂]        (3.5.7)

Since a₁ and a₂ are arbitrary, we choose a₁ = 0 = a₂. Then we obtain from (3.5.7) the following

a₀ = 1, b₋₁ = 9/24, b₀ = 19/24, b₁ = −5/24, b₂ = 1/24

Substituting these values in (3.5.5), we get the Adams-Moulton corrector formula as follows

yᶜₙ₊₁ = (1)yₙ + h[(9/24)y′ₙ₊₁ + (19/24)yₙ′ + (−5/24)y′ₙ₋₁ + (1/24)y′ₙ₋₂]
or, yᶜₙ₊₁ = yₙ + (h/24)[9y′ₙ₊₁ + 19yₙ′ − 5y′ₙ₋₁ + y′ₙ₋₂]        (3.5.8)

We can find more predictor and corrector formulae by choosing suitable new values of A₁, A₂ and a₁, a₂ and solving the systems of equations (3.5.3) and (3.5.7).

Local truncation error of the Adams-Moulton predictor-corrector formulae: We have from Taylor's series expansion

y_k = y₀ + khy₀′ + (k²h²/2)y₀″ + (k³h³/6)y₀‴ + (k⁴h⁴/24)y₀ⁱᵛ + (k⁵h⁵/120)y₀ᵛ + ⋯ ⋯ ⋯        (3.5.9)

Putting n + 1 and n for k, we get

yₙ₊₁ = y₀ + (n+1)hy₀′ + ((n+1)²h²/2)y₀″ + ((n+1)³h³/6)y₀‴ + ((n+1)⁴h⁴/24)y₀ⁱᵛ + ((n+1)⁵h⁵/120)y₀ᵛ + ⋯ ⋯ ⋯
yₙ = y₀ + nhy₀′ + (n²h²/2)y₀″ + (n³h³/6)y₀‴ + (n⁴h⁴/24)y₀ⁱᵛ + (n⁵h⁵/120)y₀ᵛ + ⋯ ⋯ ⋯


Equation (3.5.9) can also be written for the derivative as

y_k′ = y₀′ + khy₀″ + (k²h²/2)y₀‴ + (k³h³/6)y₀ⁱᵛ + (k⁴h⁴/24)y₀ᵛ + (k⁵h⁵/120)y₀ᵛ′ + ⋯ ⋯ ⋯        (3.5.10)

Putting n + 1, n, n − 1, n − 2 and n − 3 for k, we get

y′ₙ₊₁ = y₀′ + (n+1)hy₀″ + ((n+1)²h²/2)y₀‴ + ((n+1)³h³/6)y₀ⁱᵛ + ((n+1)⁴h⁴/24)y₀ᵛ + ((n+1)⁵h⁵/120)y₀ᵛ′ + ⋯ ⋯ ⋯
yₙ′ = y₀′ + nhy₀″ + (n²h²/2)y₀‴ + (n³h³/6)y₀ⁱᵛ + (n⁴h⁴/24)y₀ᵛ + (n⁵h⁵/120)y₀ᵛ′ + ⋯ ⋯ ⋯
y′ₙ₋₁ = y₀′ + (n−1)hy₀″ + ((n−1)²h²/2)y₀‴ + ((n−1)³h³/6)y₀ⁱᵛ + ((n−1)⁴h⁴/24)y₀ᵛ + ((n−1)⁵h⁵/120)y₀ᵛ′ + ⋯ ⋯ ⋯
y′ₙ₋₂ = y₀′ + (n−2)hy₀″ + ((n−2)²h²/2)y₀‴ + ((n−2)³h³/6)y₀ⁱᵛ + ((n−2)⁴h⁴/24)y₀ᵛ + ((n−2)⁵h⁵/120)y₀ᵛ′ + ⋯ ⋯ ⋯
y′ₙ₋₃ = y₀′ + (n−3)hy₀″ + ((n−3)²h²/2)y₀‴ + ((n−3)³h³/6)y₀ⁱᵛ + ((n−3)⁴h⁴/24)y₀ᵛ + ((n−3)⁵h⁵/120)y₀ᵛ′ + ⋯ ⋯ ⋯

Now, substituting all these values of yₙ₊₁, yₙ, yₙ′, y′ₙ₋₁, y′ₙ₋₂ and y′ₙ₋₃ in (3.5.4), we get

(yₙ₊₁ − yₙ) − (h/24)[55yₙ′ − 59y′ₙ₋₁ + 37y′ₙ₋₂ − 9y′ₙ₋₃] = (251/720)h⁵y₀ᵛ + ⋯ ⋯ ⋯        (3.5.11)

Here the truncation error is (251/720)h⁵y₀ᵛ + ⋯ ⋯ ⋯. Using its first term as an estimate, the local truncation error of the Adams-Moulton predictor formula is (251/720)h⁵y₀ᵛ.

Again, substituting all these values of yₙ₊₁, yₙ, y′ₙ₊₁, yₙ′, y′ₙ₋₁ and y′ₙ₋₂ in (3.5.8), we get

(yₙ₊₁ − yₙ) − (h/24)[9y′ₙ₊₁ + 19yₙ′ − 5y′ₙ₋₁ + y′ₙ₋₂] = −(19/720)h⁵y₀ᵛ + ⋯ ⋯ ⋯

Here the truncation error is −(19/720)h⁵y₀ᵛ + ⋯ ⋯ ⋯. Using its first term as an estimate, the local truncation error of the Adams-Moulton corrector formula is −(19/720)h⁵y₀ᵛ.


Since (251/720)h⁵yᵛ and −(19/720)h⁵yᵛ are taken as the principal parts of the errors, we may write

[yₙ₊₁]Exact = yᵖₙ₊₁ + (251/720)h⁵yᵛ        (3.5.12)
[yₙ₊₁]Exact = yᶜₙ₊₁ − (19/720)h⁵yᵛ        (3.5.13)

Now subtracting (3.5.13) from (3.5.12), we get

yᵖₙ₊₁ − yᶜₙ₊₁ = −(251/720)h⁵yᵛ − (19/720)h⁵yᵛ
             = −(270/720)h⁵yᵛ = (270/19)(−(19/720)h⁵yᵛ) = (270/19)E_T

Here E_T denotes the principal part of the error in equation (3.5.13). From this, we get the following

(270/19)E_T = yᵖₙ₊₁ − yᶜₙ₊₁
or, E_T = (19/270)(yᵖₙ₊₁ − yᶜₙ₊₁)        (3.5.14)

Thus we can conclude that the error in (3.5.13) is (19/270)th of the difference between the predicted and corrected values of y at x = xₙ₊₁.

3.6 APPLICATION OF ADAMS-MOULTON PREDICTOR-CORRECTOR METHOD

Solve the differential equation dy/dx = 1 + xy² at x = 0.4 with the initial values
y(0.0) = 1.000, y(0.1) = 1.105, y(0.2) = 1.223, y(0.3) = 1.355

Solution: Given y′ = dy/dx = 1 + xy² = f(x, y)

By taking the step length h = 0.1, from the initial conditions we get
𝑥0 = 0.0, 𝑦0 = 1.000 ⇒ 𝑓0 = 1 + 𝑥0 𝑦02 = 1 + (0.0)(1.000)2 = 1.000000
𝑥1 = 0.1, 𝑦1 = 1.105 ⇒ 𝑓1 = 1 + 𝑥1 𝑦12 = 1 + (0.1)(1.105)2 = 1.122103
𝑥2 = 0.2, 𝑦2 = 1.223 ⇒ 𝑓2 = 1 + 𝑥2 𝑦22 = 1 + (0.2)(1.223)2 = 1.299146
𝑥3 = 0.3, 𝑦3 = 1.355 ⇒ 𝑓3 = 1 + 𝑥3 𝑦32 = 1 + (0.3)(1.355)2 = 1.550808
Finally, 𝑥4 = 𝑥3 + ℎ = 0.3 + 0.1 = 0.4

Now, putting n = 3 in (3.5.4), we get the Adams-Moulton predictor formula for y₄ as follows


𝑦4 = 𝑦3 + 24 [55𝑦3′ − 59𝑦2′ + 37𝑦2′ − 9𝑦0′ ]

= 𝑦3 + 24 [55𝑓3 − 59𝑓2 + 37𝑓1 − 9𝑓0 ]
0.1
= 1.355 + 24 [55(1.550808) − 59(1.299146) + 37(1.122103)

−9(1.000000)]
= 1.526511
Then 𝑓4 = 1 + 𝑥4 𝑦42 = 1 + (0.4)(1.526511)2 = 1.932094

Now, putting n = 3 in (3.5.8), we get the Adams-Moulton corrector formula for y₄ as follows

y₄ = y₃ + (h/24)[9y₄′ + 19y₃′ − 5y₂′ + y₁′]
   = y₃ + (h/24)[9f₄ + 19f₃ − 5f₂ + f₁]

Then we get the approximations of y₄ by the above formula as follows

First iteration:
y₄⁽¹⁾ = y₃ + (h/24)[9f₄ + 19f₃ − 5f₂ + f₁]
      = 1.355 + (0.1/24)[9(1.932094) + 19(1.550808) − 5(1.299146) + 1.122103]
      = 1.527836
Then f₄ = 1 + x₄y₄² = 1 + (0.4)(1.527836)² = 1.933713

Second iteration:
y₄⁽²⁾ = y₃ + (h/24)[9f₄ + 19f₃ − 5f₂ + f₁]
      = 1.355 + (0.1/24)[9(1.933713) + 19(1.550808) − 5(1.299146) + 1.122103]
      = 1.527896
Then f₄ = 1 + x₄y₄² = 1 + (0.4)(1.527896)² = 1.933786

Third iteration:
y4(3) = y3 + (h/24)[9f4 + 19f3 - 5f2 + f1]
      = 1.355 + (0.1/24)[9(1.933786) + 19(1.550808) - 5(1.299146) + 1.122103]
      = 1.527899
Then f4 = 1 + x4 y4^2 = 1 + (0.4)(1.527899)^2 = 1.933790
Fourth iteration:
y4(4) = y3 + (h/24)[9f4 + 19f3 - 5f2 + f1]
      = 1.355 + (0.1/24)[9(1.933790) + 19(1.550808) - 5(1.299146) + 1.122103]
      = 1.527899
Then f4 = 1 + x4 y4^2 = 1 + (0.4)(1.527899)^2 = 1.933790

Since the fourth approximation for y4 is the same as the third approximation, we accept the value

y4 = 1.527899 at x4 = 0.4

Now the local truncation error from (3.5.14) is given by

E_T = (19/270)(y4^c - y4^p) = (19/270)(1.527899 - 1.526511) = 0.000098
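The worked example above can be checked with a short script. The following is a sketch, not part of the thesis: the variable names and the fixed number of corrector sweeps are our own choices.

```python
# Adams-Moulton predictor-corrector for y' = 1 + x*y^2, step h = 0.1,
# reproducing the hand computation of y(0.4) above.
h = 0.1
xs = [0.0, 0.1, 0.2, 0.3]
ys = [1.000, 1.105, 1.223, 1.355]

def f(x, y):
    return 1 + x * y ** 2

f0, f1, f2, f3 = (f(x, y) for x, y in zip(xs, ys))
x4 = xs[3] + h

# Predictor: y4 = y3 + (h/24)[55 f3 - 59 f2 + 37 f1 - 9 f0]
y4 = ys[3] + h / 24 * (55 * f3 - 59 * f2 + 37 * f1 - 9 * f0)

# Corrector: y4 = y3 + (h/24)[9 f4 + 19 f3 - 5 f2 + f1], iterated until it settles
for _ in range(10):
    y4 = ys[3] + h / 24 * (9 * f(x4, y4) + 19 * f3 - 5 * f2 + f1)

print(round(y4, 6))  # → 1.527899, matching the hand computation
```

Because the corrector is a strong contraction here, a handful of sweeps already reproduces the converged value.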

3.7 COMMENTS ON PREDICTOR-CORRECTOR METHODS

Advantages: Predictor-corrector methods allow a different step-length to be used in each evaluation and arbitrary increments in the independent variable. The main advantage of these methods is that, at each step, the right-hand side has to be computed at only one new grid point. They are successful methods for solving stiff systems of ordinary differential equations. Since, after predicting a value of the dependent variable by the predictor formula, the corrector formula is applied several times to improve the value up to the desired level of accuracy, they are suitable methods for sophisticated problems. For the multi-step system, truncation errors and round-off errors are minimized step by step in these methods.

Disadvantages: For each step forward in the equation using the shortest step-length, up to (4n + 4) different co-efficients would have to be calculated, and the time taken for this might be a significant proportion of the total computing time. Also, up to 4n quantities representing the previous three step-lengths and the current step-length in each equation must be stored, and this, together with a longer program, would represent a considerable increase in the required storage space compared with other systems.

In order to begin computation with these methods, one first has to calculate additional initial values; because of this, these methods are not self-starting, and the first few values must be computed using other formulae. Moreover, for the multi-step system, the computation takes a large amount of time, which is highly undesirable in the modern fast world.

3.8 RUNGE-KUTTA METHOD

There are many different schemes for solving ordinary differential equations numerically, and we have already introduced some of them. Many of the more advanced techniques are complex to derive and analyze. One of the standard workhorses for solving ordinary differential equations is the Runge-Kutta method. It is to be noted that numerical methods were subsequently improved to a considerable degree; this development led to the method known as the Runge-Kutta method, which is particularly suitable in cases when the computation of higher-order derivatives is complicated.

In the Runge-Kutta method, the increment in the function is calculated once for all by means of a definite set of formulae. The calculation for any increment is exactly the same as for the first increment. The improved values of the independent and dependent variables are substituted into a set of recursive formulae.

Derivation of Runge-Kutta formulae: We will derive the formulae of the Runge-Kutta method to obtain an approximate numerical solution of the first-order differential equation y' = f(x, y) with the initial condition y(x0) = y0, where it is assumed that x0 is not a singular point. Also, the errors are assumed to be so small that they can be neglected [1,22].

Let us take the first-order differential equation

y' = dy/dx = f(x, y) ; y(x0) = y0    (3.8.1)

Let h be the interval between two equidistant values of x, so that h = x1 - x0. From Taylor's series expansion, we have

y(x + h) = y(x) + h y'(x) + (h^2/2!) y''(x) + (h^3/3!) y'''(x) + (h^4/4!) y^(iv)(x) + ...

or, y(x + h) - y(x) = h y'(x) + (h^2/2!) y''(x) + (h^3/3!) y'''(x) + (h^4/4!) y^(iv)(x) + ...    (3.8.2)
2! 3! 4!

Differentiating (3.8.1) successively with respect to x, using the partial derivatives with respect to x and y, we get

y' = f(x, y) = f
y'' = f'(x, y) = f_x + f f_y
y''' = f''(x, y) = f_xx + 2f f_xy + f^2 f_yy + f_x f_y + f f_y^2
y^(iv) = f'''(x, y) = (f_xxx + 3f f_xxy + 3f^2 f_xyy + f^3 f_yyy) + f_y (f_xx + 2f f_xy + f^2 f_yy) + 3(f_x + f f_y)(f_xy + f f_yy) + f_y^2 (f_x + f f_y)


Let us introduce the following convenient notation

F1 = f_x + f f_y
F2 = f_xx + 2f f_xy + f^2 f_yy
F3 = f_xxx + 3f f_xxy + 3f^2 f_xyy + f^3 f_yyy

Then we get

y' = f
y'' = F1
y''' = F2 + f_y F1
y^(iv) = F3 + f_y F2 + 3F1 (f_xy + f f_yy) + F1 f_y^2

Now putting them in (3.8.2), we obtain

y(x + h) - y(x) = h f + (h^2/2!) F1 + (h^3/3!)(F2 + f_y F1) + (h^4/4!){F3 + f_y F2 + 3F1 (f_xy + f f_yy) + F1 f_y^2} + ...    (3.8.3)

Now we shall develop a fourth-order formula. In order to develop the Runge-Kutta formulae, we have to find the co-efficients a, b, c, d, m, n and p from the following

k1 = h f(x, y) = h f
k2 = h f(x + mh, y + m k1)
k3 = h f(x + nh, y + n k2)
k4 = h f(x + ph, y + p k3)    (3.8.4)

Our aim is then to express Δy in the form

Δy = y(x + h) - y(x) = a k1 + b k2 + c k3 + d k4    (3.8.5)

Now, we may use Taylor's series expansion for two variables as follows

k1 = h f
k2 = h[f + m h F1 + (1/2)m^2 h^2 F2 + (1/6)m^3 h^3 F3 + ...]
k3 = h[f + n h F1 + (1/2)h^2 (n^2 F2 + 2mn f_y F1) + (1/6)h^3 {n^3 F3 + 3m^2 n f_y F2 + 6mn^2 F1 (f_xy + f f_yy)} + ...]
k4 = h[f + p h F1 + (1/2)h^2 (p^2 F2 + 2np f_y F1) + (1/6)h^3 {p^3 F3 + 3n^2 p f_y F2 + 6np^2 F1 (f_xy + f f_yy) + 6mnp F1 f_y^2} + ...]

Putting the above values of k1, k2, k3 and k4 in (3.8.5), we get

y(x + h) - y(x) = a h f + b h[f + m h F1 + (1/2)m^2 h^2 F2 + (1/6)m^3 h^3 F3 + ...] + c h[f + n h F1 + (1/2)h^2 (n^2 F2 + 2mn f_y F1) + (1/6)h^3 {n^3 F3 + 3m^2 n f_y F2 + 6mn^2 F1 (f_xy + f f_yy)} + ...] + d h[f + p h F1 + (1/2)h^2 (p^2 F2 + 2np f_y F1) + (1/6)h^3 {p^3 F3 + 3n^2 p f_y F2 + 6np^2 F1 (f_xy + f f_yy) + 6mnp F1 f_y^2} + ...]

or, y(x + h) - y(x) = (a + b + c + d) h f + (bm + cn + dp) h^2 F1 + (bm^2 + cn^2 + dp^2)(h^3/2) F2 + (bm^3 + cn^3 + dp^3)(h^4/6) F3 + (cmn + dnp) h^3 f_y F1 + (1/2)(cm^2 n + dn^2 p) h^4 f_y F2 + (cmn^2 + dnp^2) h^4 F1 (f_xy + f f_yy) + dmnp h^4 f_y^2 F1 + ...    (3.8.6)

Now comparing (3.8.3) and (3.8.6), we get

a + b + c + d = 1
bm + cn + dp = 1/2
bm^2 + cn^2 + dp^2 = 1/3
bm^3 + cn^3 + dp^3 = 1/4
cmn + dnp = 1/6
cm^2 n + dn^2 p = 1/12
cmn^2 + dnp^2 = 1/8
dmnp = 1/24

Solving the above system of equations, we obtain

m = n = 1/2, p = 1, a = d = 1/6, b = c = 1/3

Now, putting these values in (3.8.4) and (3.8.5), we get the fourth-order Runge-Kutta formulae as follows

k1 = h f(x, y) = h f
k2 = h f(x + h/2, y + k1/2)
k3 = h f(x + h/2, y + k2/2)
k4 = h f(x + h, y + k3)

Δy = y(x + h) - y(x) = (1/6)(k1 + 2k2 + 2k3 + k4)

When the initial values are (𝑥0 , 𝑦0 ), the first increment in 𝑦 is computed from
the formulae given below

k1 = h f(x0, y0)
k2 = h f(x0 + h/2, y0 + k1/2)
k3 = h f(x0 + h/2, y0 + k2/2)
k4 = h f(x0 + h, y0 + k3)

Δy = (1/6)(k1 + 2k2 + 2k3 + k4)
or, y(x0 + h) - y(x0) = (1/6)(k1 + 2k2 + 2k3 + k4)
or, y(x0 + h) = y(x0) + (1/6)(k1 + 2k2 + 2k3 + k4)
or, y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)

Thus, the general fourth-order Runge-Kutta formulae for the nth interval are given by the following

k1 = h f(x_n, y_n)
k2 = h f(x_n + h/2, y_n + k1/2)
k3 = h f(x_n + h/2, y_n + k2/2)
k4 = h f(x_n + h, y_n + k3)

y_{n+1} = y_n + (1/6)(k1 + 2k2 + 2k3 + k4)    (3.8.7)

The above formulae are called the standard fourth-order Runge-Kutta formulae.
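The recursion (3.8.7) translates directly into a single-step routine. The sketch below is our own transcription; the test equation y' = y is our choice, not from the text.

```python
import math

# One step of the standard fourth-order Runge-Kutta formulae (3.8.7)
def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# For y' = y, one step reproduces the Taylor polynomial 1 + h + h^2/2 + h^3/6 + h^4/24
y1 = rk4_step(lambda x, y: y, 0.0, 1.0, 0.1)
print(abs(y1 - math.exp(0.1)))  # under 1e-7, consistent with the O(h^5) local error
```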

In a similar manner, we can derive the second- and third-order Runge-Kutta formulae, given as follows

Second-order Runge-Kutta formulae:

y_{n+1} = y_n + (1/2)(k1 + k2)

k1 = h f(x_n, y_n)
k2 = h f(x_n + h, y_n + k1)

Third-order Runge-Kutta formulae:

y_{n+1} = y_n + (1/6)(k1 + 4k2 + k3)

k1 = h f(x_n, y_n)
k2 = h f(x_n + h/2, y_n + k1/2)
k3 = h f(x_n + h, y_n - k1 + 2k2)

Error estimation in Runge-Kutta formulae: The direct estimation of the error of the higher-order Runge-Kutta formulae is very complicated and time-consuming. Although it is possible to compute the errors in a laborious way, the computations are very hard, involving higher-order partial derivatives. Thus we will first estimate the error in the second-order Runge-Kutta formulae; the errors for higher orders can be obtained by generalizing the computed error.

We have the second-order Runge-Kutta formulae as follows

y_{n+1} = y_n + (1/2)(k1 + k2)

k1 = h f(x_n, y_n)
k2 = h f(x_n + h, y_n + k1)    (3.8.8)

Now, the truncation error is given by the following formula

𝐸𝑇 = 𝑦(𝑥𝑛+1 ) − 𝑦𝑛+1 (3.8.9)

Now expanding 𝑦(𝑥𝑛+1 ) by Taylor’s series expansion, we get

y(x_{n+1}) = y(x_n + h) = y(x_n) + h y'(x_n) + (h^2/2) y''(x_n) + (h^3/6) y'''(x_n) + (h^4/24) y^(iv)(x_n) + ...

or, y(x_{n+1}) = y_n + h y_n' + (h^2/2) y_n'' + (h^3/6) y_n''' + (h^4/24) y_n^(iv) + ...
             = y_n + h f + (h^2/2)(f_x + f f_y) + (h^3/6)(f_xx + 2f f_xy + f^2 f_yy + f_x f_y + f f_y^2) + o(h^4)    (3.8.10)

Using Taylor's series expansion in (3.8.8), we get

k1 = h f
k2 = h[f + h(f_x + f f_y) + (h^2/2)(f_xx + 2f f_xy + f^2 f_yy) + ...]
   = h f + h^2 (f_x + f f_y) + (h^3/2)(f_xx + 2f f_xy + f^2 f_yy) + ...

y_{n+1} = y_n + (1/2)[h f + h f + h^2 (f_x + f f_y) + (h^3/2)(f_xx + 2f f_xy + f^2 f_yy) + ...]
or, y_{n+1} = y_n + h f + (h^2/2)(f_x + f f_y) + (h^3/4)(f_xx + 2f f_xy + f^2 f_yy) + ...    (3.8.11)

Now, using (3.8.10) and (3.8.11) in (3.8.9), we get

E_T = [y_n + h f + (h^2/2)(f_x + f f_y) + (h^3/6)(f_xx + 2f f_xy + f^2 f_yy + f_x f_y + f f_y^2) + ...] - [y_n + h f + (h^2/2)(f_x + f f_y) + (h^3/4)(f_xx + 2f f_xy + f^2 f_yy) + ...]
    = (h^3/6 - h^3/4)(f_xx + 2f f_xy + f^2 f_yy) + (h^3/6)(f_x f_y + f f_y^2) + ...
    = -(h^3/12)(f_xx + 2f f_xy + f^2 f_yy) + (h^3/6)(f_x f_y + f f_y^2) + ...
    = -(h^3/12)(f_xx + 2f f_xy + f^2 f_yy - 2f_x f_y - 2f f_y^2) + ...    (3.8.12)
12

Thus, (3.8.12) shows that the truncation error of the second-order Runge-Kutta formula is of order h^3. Similarly, we can show that the truncation errors of the third- and fourth-order Runge-Kutta formulae are of order h^4 and h^5 respectively.

Thus, by applying Taylor's series expansion in the above manner, we get that the truncation error of the nth-order Runge-Kutta formulae is of order h^(n+1), as follows

E_T = c h^(n+1) y^(n+1)    (3.8.13)
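This order statement can be checked empirically: if the global error of an nth-order formula behaves like h^n, halving the step-length should divide the error by about 2^n. The test problem y' = y, y(0) = 1 on [0, 1], and the tolerances, are our own choices, not part of the thesis.

```python
import math

def rk_step(f, x, y, h, order):
    # Second-, third- and fourth-order Runge-Kutta steps as given in the text
    k1 = h * f(x, y)
    if order == 2:
        k2 = h * f(x + h, y + k1)
        return y + (k1 + k2) / 2
    k2 = h * f(x + h / 2, y + k1 / 2)
    if order == 3:
        k3 = h * f(x + h, y - k1 + 2 * k2)
        return y + (k1 + 4 * k2 + k3) / 6
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

def global_error(order, h):
    # Integrate y' = y from 0 to 1 and compare with the exact value e
    y, x = 1.0, 0.0
    for _ in range(round(1 / h)):
        y = rk_step(lambda x, y: y, x, y, h, order)
        x += h
    return abs(y - math.e)

for order in (2, 3, 4):
    ratio = global_error(order, 0.1) / global_error(order, 0.05)
    print(order, round(ratio, 1))  # ratios near 2^2 = 4, 2^3 = 8, 2^4 = 16
```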

3.9 PHYSICAL APPLICATION OF RUNGE-KUTTA METHOD

Consider a large number of radioactive nuclei. Although the number of nuclei is discrete, we can often treat this number as a continuous variable. Using this approach, the fundamental law of radioactive decay is that the rate of decay is proportional to the number of nuclei present at the time of decay. Thus we can write

dN/dt = -λN    (3.9.1)

Here N is the number of nuclei and λ is the decay constant. If the half-life of the radioactive nuclei is T_{1/2}, then λ can be expressed as

λ = 0.693 / T_{1/2}    (3.9.2)


For a practical observation, let the initial number of nuclei be N0 = 10^4; we have to find how many nuclei remain after t = 1 day, given that the half-life is T_{1/2} = 4 days.

Solution: Given that N' = dN/dt = -λN = f(t, N)    (3.9.3)

Here the given conditions are N0 = 10^4, t0 = 0 and T_{1/2} = 4 d.

Thus (3.9.2) gives λ = 0.693/T_{1/2} = (0.693/4) d^(-1) = 0.17325 d^(-1)

Then (3.9.3) becomes

N' = f(t, N) = -0.17325 N    (3.9.4)

Now, taking the step-length h = 0.25 day and according to the fourth-order Runge-Kutta method, we can write

N_{n+1} = N_n + (1/6)(k1 + 2k2 + 2k3 + k4)    (3.9.5)

with k1 = h f(t_n, N_n)
k2 = h f(t_n + h/2, N_n + k1/2)
k3 = h f(t_n + h/2, N_n + k2/2)
k4 = h f(t_n + h, N_n + k3)    (3.9.6)

To find 𝑵𝟏 we proceed as follows: For the first interval, putting 𝑛 = 0 in (3.9.5) &
(3.9.6), from (3.9.4) we get as follows

k1 = h f(t0, N0)
   = (0.25) f(0, 10^4)
   = (0.25)(-0.17325)(10^4)
   = -433.125
k2 = h f(t0 + h/2, N0 + k1/2)
   = (0.25) f(0 + 0.25/2, 10^4 + (-433.125)/2)
   = (0.25) f(0.125, 9783.438)
   = (0.25)(-0.17325)(9783.438)
   = -423.745


k3 = h f(t0 + h/2, N0 + k2/2)
   = (0.25) f(0 + 0.25/2, 10^4 + (-423.745)/2)
   = (0.25) f(0.125, 9788.127)
   = (0.25)(-0.17325)(9788.127)
   = -423.948
k4 = h f(t0 + h, N0 + k3)
   = (0.25) f(0 + 0.25, 10^4 + (-423.948))
   = (0.25) f(0.25, 9576.052)
   = (0.25)(-0.17325)(9576.052)
   = -414.763

So that N1 = N0 + (1/6)(k1 + 2k2 + 2k3 + k4)
          = 10^4 + (1/6)[(-433.125) + 2(-423.745) + 2(-423.948) + (-414.763)]
          = 9576.121
i.e. N1 = 9576.121 ; t1 = t0 + h = 0.00 + 0.25 = 0.25

To find 𝑵𝟐 we proceed as follows: For the second interval, putting 𝑛 = 1 in (3.9.5) &
(3.9.6), from (3.9.4) we get as follows

𝑘1 = ℎ𝑓(𝑡1 , 𝑁1 )
= (0.25)𝑓(0.25,9576.121)
= (0.25)(−0.17325)(9576.121)
= −414.766
k2 = h f(t1 + h/2, N1 + k1/2)
   = (0.25) f(0.25 + 0.25/2, 9576.121 + (-414.766)/2)
   = (0.25) f(0.375, 9369.113)
   = (0.25)(-0.17325)(9369.113)
   = -405.800
k3 = h f(t1 + h/2, N1 + k2/2)
   = (0.25) f(0.25 + 0.25/2, 9576.121 + (-405.800)/2)
   = (0.25) f(0.375, 9373.221)
   = (0.25)(-0.17325)(9373.221)
   = -405.978
𝑘4 = ℎ𝑓(𝑡1 + ℎ, 𝑁1 + 𝑘3 )
= (0.25)𝑓(0.25 + 0.25,9576.121 + (−405.978))
= (0.25)𝑓(0.50,9170.143)
= (0.25)(−0.17325)(9170.143)
= −397.182

So that N2 = N1 + (1/6)(k1 + 2k2 + 2k3 + k4)
          = 9576.121 + (1/6)[(-414.766) + 2(-405.800) + 2(-405.978) + (-397.182)]
          = 9170.204
i.e. N2 = 9170.204 ; t2 = t1 + h = 0.25 + 0.25 = 0.50

To find 𝑵𝟑 we proceed as follows: For the third interval, putting 𝑛 = 2 in (3.9.5) &
(3.9.6), from (3.9.4) we get as follows

𝑘1 = ℎ𝑓(𝑡2 , 𝑁2 )
= (0.25)𝑓(0.50,9170.204)
= (0.25)(−0.17325)(9170.204 )
= −397.185
k2 = h f(t2 + h/2, N2 + k1/2)
   = (0.25) f(0.50 + 0.25/2, 9170.204 + (-397.185)/2)
   = (0.25) f(0.625, 8971.612)
   = (0.25)(-0.17325)(8971.612)
   = -388.583
k3 = h f(t2 + h/2, N2 + k2/2)
   = (0.25) f(0.50 + 0.25/2, 9170.204 + (-388.583)/2)
   = (0.25) f(0.625, 8975.913)
   = (0.25)(-0.17325)(8975.913)
   = -388.769
k4 = h f(t2 + h, N2 + k3)
   = (0.25) f(0.50 + 0.25, 9170.204 + (-388.769))
   = (0.25) f(0.75, 8781.435)
   = (0.25)(-0.17325)(8781.435)
   = -380.346

So that N3 = N2 + (1/6)(k1 + 2k2 + 2k3 + k4)
          = 9170.204 + (1/6)[(-397.185) + 2(-388.583) + 2(-388.769) + (-380.346)]
          = 8781.498
i.e. N3 = 8781.498 ; t3 = t2 + h = 0.50 + 0.25 = 0.75

To find 𝑵𝟒 we proceed as follows: For the fourth interval, putting 𝑛 = 3 in (3.9.5) &
(3.9.6), from (3.9.4) we get as follows

𝑘1 = ℎ𝑓(𝑡3 , 𝑁3 )
= (0.25)𝑓(0.75,8781.498)
= (0.25)(−0.17325)(8781.498)
= − 380.349
k2 = h f(t3 + h/2, N3 + k1/2)
   = (0.25) f(0.75 + 0.25/2, 8781.498 + (-380.349)/2)
   = (0.25) f(0.875, 8591.324)
   = (0.25)(-0.17325)(8591.324)
   = -372.112
k3 = h f(t3 + h/2, N3 + k2/2)
   = (0.25) f(0.75 + 0.25/2, 8781.498 + (-372.112)/2)
   = (0.25) f(0.875, 8595.442)
   = (0.25)(-0.17325)(8595.442)
   = -372.290
𝑘4 = ℎ𝑓(𝑡3 + ℎ, 𝑁3 + 𝑘3 )
= (0.25)𝑓(0.75 + 0.25,8781.498 + (−372.290))
= (0.25)𝑓(1.00,8409.208)
= (0.25)(−0.17325)(8409.208)
= −364.224


So that N4 = N3 + (1/6)(k1 + 2k2 + 2k3 + k4)
          = 8781.498 + (1/6)[(-380.349) + 2(-372.112) + 2(-372.290) + (-364.224)]
          = 8409.268
i.e. N4 = 8409.268 ; t4 = t3 + h = 0.75 + 0.25 = 1.00

Exact solution: We have dN/dt = -λN
or, dN/N = -λ dt    (3.9.7)

Now integrating (3.9.7) from N = N0 to N = N as t goes from t0 = 0 to t, we get

∫ dN/N (from N0 to N) = -λ ∫ dt (from t0 to t)
or, [ln N] (from N0 to N) = -λ[t] (from t0 to t)
or, ln N - ln N0 = -λ(t - 0)
or, ln(N/N0) = -λt
or, N/N0 = e^(-λt)
or, N = N0 e^(-λt)    (3.9.8)

Now applying the initial conditions 𝑁0 = 104 , 𝑡0 = 0 & λ = 0.17325𝑑 −1

Thus (3.9.8) takes the following form

𝑁 = 104 𝑒 −0.17325t (3.9.9)

When 𝑡 = 1 day, from (3.9.9) we obtain

𝑁 = 104 𝑒 −(0.17325)(1) = 8409.274

Hence the truncation error and relative error are

E_T = N_Exact - N_Approximate = 8409.274 - 8409.268 = 0.006

E_R = |E_T| / N_Exact = (0.006 / 8409.274) × 100% = 0.00007%
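The computation above can be repeated in code. One point worth noting: carried in full floating-point precision instead of rounding each k to three decimals, the fourth-order method agrees with the exact value to about four decimal places, so most of the 0.006 found above reflects rounding of the intermediate values rather than the truncation error of the method itself. The script below is a sketch; the variable names are our own.

```python
import math

lam, h = 0.17325, 0.25        # decay constant (per day) and step-length from the text

def f(t, N):                  # equation (3.9.4)
    return -lam * N

t, N = 0.0, 1e4
for _ in range(4):            # four steps of h = 0.25 day, i.e. up to t = 1 day
    k1 = h * f(t, N)
    k2 = h * f(t + h / 2, N + k1 / 2)
    k3 = h * f(t + h / 2, N + k2 / 2)
    k4 = h * f(t + h, N + k3)
    N += (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += h

exact = 1e4 * math.exp(-lam)  # equation (3.9.9) at t = 1 day
print(abs(N - exact))         # a few times 1e-5: far smaller than the 0.006 above
```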

Advantages of Runge-Kutta method: The Runge-Kutta method is the most widely used numerical tool, since it gives reliable starting values and is particularly suitable when the computation of higher-order derivatives is complicated. It scores over the earlier methods in obtaining greater accuracy of the solution while avoiding the need for higher-order derivatives. Also, it possesses the advantage of requiring only the function values at some selected points of the sub-intervals.

Moreover, it is easy to change the step-length for greater accuracy as needed, and no special procedure is necessary for starting, which minimizes the computing time.

Disadvantages of the Runge-Kutta method: Though the Runge-Kutta method is very useful, it is also very laborious. It is a lengthy process and requires checking back the values computed earlier. Also, the inherent error in the Runge-Kutta method is hard to estimate.

Moreover, it has its limitations in solving certain types of differential equations only, and the step-length is the key factor of the computation.

3.10 EXTENSIONS OF RUNGE-KUTTA FORMULAE

We have already discussed second-order, third-order and standard fourth-order


Runge-Kutta methods. Now we will give a brief discussion about the modifications of
the Runge-Kutta method [22], as given below.

Runge-Kutta-Gill method: This is a fourth-order step-by-step iteration method, a modification of the standard fourth-order Runge-Kutta method. The fourth-order modified Runge-Kutta formula is of the following form

𝑦𝑛+1 = 𝑦𝑛 + 𝑎𝑘1 + 𝑏𝑘2 + 𝑐𝑘3 + 𝑑𝑘4

𝑘1 = ℎ𝑓(𝑥𝑛 , 𝑦𝑛 )
𝑘2 = ℎ𝑓(𝑥𝑛 + 𝛼2 ℎ, 𝑦𝑛 + 𝛽2 𝑘1 )
𝑘3 = ℎ𝑓(𝑥𝑛 + 𝛼3 ℎ, 𝑦𝑛 + 𝛽3 𝑘1 + 𝛾3 𝑘2 )
𝑘4 = ℎ𝑓(𝑥𝑛 + 𝛼4 ℎ, 𝑦𝑛 + 𝛽4 𝑘1 + 𝛾4 𝑘2 + 𝛿4 𝑘3 )

Here the constants have to be determined by expanding both sides in Taylor's series and equating the co-efficients of the powers of h up to degree four. Here again, because of the degrees of freedom, several solutions are possible. Of these solutions, the popularly used fourth-order Runge-Kutta-Gill method is obtained by the following choice of the co-efficients

α2 = α3 = β2 = 1/2, α4 = 1, β3 = (√2 - 1)/2, β4 = 0, γ3 = (√2 - 1)/√2, γ4 = -1/√2,
δ4 = (2 + √2)/2, a = d = 1/6, b = (1/3)(1 - 1/√2), c = (1/3)(1 + 1/√2)

Then, putting these values, we obtain the Runge-Kutta-Gill method. This modification was introduced by Gill with the above choice for the equation

y' = f(x, y) ; y(x_n) = y_n

The next value of y is given by

y_{n+1} = y_n + (1/6)[k1 + (2 - √2)k2 + (2 + √2)k3 + k4]

k1 = h f(x_n, y_n)
k2 = h f(x_n + h/2, y_n + k1/2)
k3 = h f(x_n + h/2, y_n + ((√2 - 1)/2)k1 + ((√2 - 1)/√2)k2)
k4 = h f(x_n + h, y_n - (1/√2)k2 + ((2 + √2)/2)k3)
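Since Gill's formulae, like the classical ones, form a four-stage fourth-order scheme, on the linear test equation y' = y (our own choice) both produce exactly the same step polynomial 1 + h + h^2/2 + h^3/6 + h^4/24. The sketch below transcribes the formulae above and checks this.

```python
import math

def gill_step(f, x, y, h):
    # Runge-Kutta-Gill formulae as given in the text
    r = math.sqrt(2)
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + (r - 1) / 2 * k1 + (r - 1) / r * k2)
    k4 = h * f(x + h, y - k2 / r + (2 + r) / 2 * k3)
    return y + (k1 + (2 - r) * k2 + (2 + r) * k3 + k4) / 6

h = 0.1
taylor4 = sum(h ** j / math.factorial(j) for j in range(5))  # classical RK4 value for y' = y
y1 = gill_step(lambda x, y: y, 0.0, 1.0, h)
print(abs(y1 - taylor4))  # agrees to round-off
```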

Runge-Kutta-Merson method: This is also a fourth-order method; it involves an additional derivative calculation and provides an error estimate. The error estimate is exact if the derivative function is linear in x and y. In fact, this method even provides a technique for automatic adjustment of the step-length h to ensure good convergence of the solution. The Runge-Kutta-Merson formulae are given below

y_{n+1} = y_n + (1/6)[k1 + 4k4 + k5]

k1 = h f(x_n, y_n)
k2 = h f(x_n + h/3, y_n + k1/3)
k3 = h f(x_n + h/3, y_n + k1/6 + k2/6)
k4 = h f(x_n + h/2, y_n + k1/8 + 3k3/8)
k5 = h f(x_n + h, y_n + k1/2 - 3k3/2 + 2k4)

Runge-Kutta-Butcher method: J. C. Butcher enhanced the order to five, so that the error term is now of order h^6. This method requires six functional values. Butcher showed that this method involves minimum computing time and at the same time ensures greater accuracy. We get this method in the following form

y_{n+1} = y_n + (1/90)[7k1 + 32k3 + 12k4 + 32k5 + 7k6]

k1 = h f(x_n, y_n)
k2 = h f(x_n + h/4, y_n + k1/4)
k3 = h f(x_n + h/4, y_n + k1/8 + k2/8)
k4 = h f(x_n + h/2, y_n - k2/2 + k3)
k5 = h f(x_n + 3h/4, y_n + 3k1/16 + 9k4/16)
k6 = h f(x_n + h, y_n - 3k1/7 + 2k2/7 + 12k3/7 - 12k4/7 + 8k5/7)

Kutta-Nystrom method: This is a fifth-order method, with the error term of order h^6, which involves six functional values. This method yields the following form

y_{n+1} = y_n + (1/192)[23k1 + 125k2 - 81k5 + 125k6]

k1 = h f(x_n, y_n)
k2 = h f(x_n + h/3, y_n + k1/3)
k3 = h f(x_n + 2h/5, y_n + 4k1/25 + 6k2/25)
k4 = h f(x_n + h, y_n + k1/4 - 3k2 + 15k3/4)
k5 = h f(x_n + 2h/3, y_n + 2k1/27 + 10k2/9 - 50k3/81 + 8k4/81)
k6 = h f(x_n + 4h/5, y_n + 2k1/25 + 12k2/25 + 2k3/15 + 8k4/75)

Runge-Kutta-Fehlberg method: The Runge-Kutta-Fehlberg method is now one of the most popular modifications of the Runge-Kutta methods. Only six functional evaluations are required; it has the following form

y_{n+1} = y_n + (16/135)k1 + (6656/12825)k3 + (28561/56430)k4 - (9/50)k5 + (2/55)k6

k1 = h f(x_n, y_n)
k2 = h f(x_n + h/4, y_n + k1/4)
k3 = h f(x_n + 3h/8, y_n + 3k1/32 + 9k2/32)
k4 = h f(x_n + 12h/13, y_n + 1932k1/2197 - 7200k2/2197 + 7296k3/2197)
k5 = h f(x_n + h, y_n + 439k1/216 - 8k2 + 3680k3/513 - 845k4/4104)
k6 = h f(x_n + h/2, y_n - 8k1/27 + 2k2 - 3544k3/2565 + 1859k4/4104 - 11k5/40)
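Transcribing the formulae above gives a fifth-order step; applied to y' = y on [0, 1] (our own test case, not from the text), the error is far below that of the fourth-order methods at the same step-length.

```python
import math

def rkf5_step(f, x, y, h):
    # Fifth-order Runge-Kutta-Fehlberg formulae as listed in the text
    k1 = h * f(x, y)
    k2 = h * f(x + h / 4, y + k1 / 4)
    k3 = h * f(x + 3 * h / 8, y + 3 * k1 / 32 + 9 * k2 / 32)
    k4 = h * f(x + 12 * h / 13, y + 1932 * k1 / 2197 - 7200 * k2 / 2197 + 7296 * k3 / 2197)
    k5 = h * f(x + h, y + 439 * k1 / 216 - 8 * k2 + 3680 * k3 / 513 - 845 * k4 / 4104)
    k6 = h * f(x + h / 2, y - 8 * k1 / 27 + 2 * k2 - 3544 * k3 / 2565
                           + 1859 * k4 / 4104 - 11 * k5 / 40)
    return (y + 16 * k1 / 135 + 6656 * k3 / 12825 + 28561 * k4 / 56430
              - 9 * k5 / 50 + 2 * k6 / 55)

y, x, h = 1.0, 0.0, 0.1
for _ in range(10):               # integrate y' = y from 0 to 1
    y = rkf5_step(lambda x, y: y, x, y, h)
    x += h
print(abs(y - math.e))            # of the order 1e-8
```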


3.11 GENERALIZED FORMULA FOR RUNGE-KUTTA METHODS

The general pth-order Runge-Kutta method is expressed [3] in the following form

y_{n+1} = y_n + Σ_{m=1}^{p} b_m k_m
k_m = h f(x_n + c_m h, y_n + Σ_{l=1}^{m-1} a_{m,l} k_l)

Here c1 = 0. The co-efficients a_{m,l} are collectively referred to as the Runge-Kutta matrix. The quantities b_m are called Runge-Kutta weights and the c_m are called Runge-Kutta nodes. The Runge-Kutta matrix, weights and nodes are often displayed graphically as the following Runge-Kutta table.

Figure-(3.1)
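The general formula can be implemented once and then driven by any such table. The sketch below (the helper names are our own) uses the matrix, weights and nodes of the classical fourth-order method corresponding to (3.8.7), and reproduces the standard RK4 step.

```python
def rk_tableau_step(f, x, y, h, A, b, c):
    # k_m = h f(x_n + c_m h, y_n + sum_{l<m} a_{m,l} k_l);  y_{n+1} = y_n + sum b_m k_m
    k = []
    for m in range(len(b)):
        k.append(h * f(x + c[m] * h, y + sum(A[m][l] * k[l] for l in range(m))))
    return y + sum(bm * km for bm, km in zip(b, k))

# Classical fourth-order data: matrix A (strictly lower-triangular), weights b, nodes c
A = [[], [0.5], [0, 0.5], [0, 0, 1]]
b = [1 / 6, 1 / 3, 1 / 3, 1 / 6]
c = [0, 0.5, 0.5, 1]

y1 = rk_tableau_step(lambda x, y: y, 0.0, 1.0, 0.1, A, b, c)
print(y1)  # one RK4 step for y' = y, y(0) = 1
```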

3.12 COMPARISON BETWEEN PREDICTOR-CORRECTOR METHOD AND RUNGE-KUTTA METHOD

To compare the predictor-corrector method and the Runge-Kutta method, we discuss the following points [1].

1. Runge-Kutta methods are self-starting, the interval between steps may be changed at will and, in general, they are particularly straightforward to apply on a digital computer.

2. They are comparable in accuracy to corresponding-order predictor-corrector methods. However, if we do not monitor the per-step error by using additional function evaluations, we shall generally be required to choose the step size h conservatively, i.e. smaller than is actually necessary to achieve the desired accuracy.

3. Further, they require a number of evaluations of f(x, y) at each step at least equal to the order of the method. As we have seen, predictor-corrector methods generally require only two evaluations per step, and the evaluation of f(x, y) is usually the time-consuming part of solving the initial value problem y' = f(x, y) ; y(x0) = y0. This means that predictor-corrector methods are generally faster than the Runge-Kutta method. For example, fourth-order predictor-corrector methods are nearly twice as fast as fourth-order Runge-Kutta methods.

4. Naturally, predictor-corrector methods have the advantage that the ingredients for estimating local errors are already at hand when needed. With Runge-Kutta, a separate application of the formulae must be made, as just outlined. This almost doubles the number of times that f(x, y) has to be evaluated, and since this is where the major computing effort is involved, running time may be almost doubled. On the other hand, as said before, whenever the step size is changed it will be necessary to assist a predictor-corrector method in making a restart. This means extra programming, and if frequent changes are anticipated, it may be just as well to use the Runge-Kutta method throughout.

5. Finally, monitoring the local truncation error does not involve any additional
function evaluations using predictor-corrector methods, whereas it is quite
expensive for the Runge-Kutta method.

Thus the self-starting characteristic of Runge-Kutta methods makes them an ideal adjunct to the usual predictor-corrector methods for starting the solution. Since they will be used for only a few steps of the computation, truncation error and instability are the key considerations.

Therefore, for the above purpose, minimum-error-bound Runge-Kutta methods should be used.



CHAPTER-4

SOLUTION OF PARTIAL DIFFERENTIAL


EQUATIONS

4.1 INTRODUCTION

Partial differential equations occur in many branches of applied mathematics, such as hydrodynamics, electricity, quantum mechanics and electromagnetic theory. The analytical treatment of these equations is a rather involved process and requires the application of advanced mathematical methods. On the other hand, it is generally easier to produce sufficiently approximate solutions with simple and efficient numerical methods. Several numerical methods have been proposed for the solution of partial differential equations. Among those methods, we will discuss only those related to the solution of elliptic, parabolic and hyperbolic partial differential equations; i.e. in this chapter, we will solve elliptic, parabolic and hyperbolic partial differential equations only.

4.2 CLASSIFICATION OF PARTIAL DIFFERENTIAL EQUATIONS.

The general second-order linear partial differential equation [23] is of the form

A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D ∂u/∂x + E ∂u/∂y + Fu = G

or, A u_xx + B u_xy + C u_yy + D u_x + E u_y + Fu = G    (4.2.1)

Here A, B, C, D, E, F and G are all functions of x and y. The above equation (4.2.1) can be classified with respect to the sign of the discriminant ΔS = B² - 4AC in the following way:

If ΔS < 0, ΔS = 0 or ΔS > 0 at a point in the xy-plane, then (4.2.1) is said to be of elliptic, parabolic or hyperbolic type respectively.

Many physical phenomena can be modeled mathematically by differential equations. When the function being studied involves two or more independent variables, the differential equation will usually be a partial differential equation. Since functions of several variables are intrinsically more complicated than those of a single variable, partial differential equations can lead to some of the most challenging numerical problems. In fact, their numerical solution is one type of scientific calculation in which the resources of the biggest and fastest computing systems easily become taxed. We shall see later why this is so.

Some important partial differential equations and the physical phenomena that they govern are listed below.

1. The wave equation in three spatial variables (x, y, z) and the time t is

∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = ∂²u/∂t²

The function u represents the displacement at the time t of the particle whose position at rest is (x, y, z). With appropriate boundary conditions, this equation governs the vibrations of a three-dimensional elastic body.

2. The heat equation is

∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = ∂u/∂t

The function u represents the temperature at the time t of a particle whose co-ordinates are (x, y, z).

3. Laplace's equation is

∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = 0

It governs the steady-state distribution of heat or electric charge in a body. Laplace's equation also governs gravitational, electric and magnetic potentials, and velocity potentials in irrotational flows of incompressible fluids. In section 1.6 some special forms of Laplace's equation have been mentioned.

Also, there are two special cases of problems, depending upon the boundary conditions, associated with partial differential equations.

1. In the Dirichlet problem, given a continuous function f on the boundary C of a region R, we are to find a function u satisfying Laplace's equation in R, i.e. to find u such that

∂²u/∂x² + ∂²u/∂y² = 0 ; u(x, y) = f on C

2. We have Cauchy's problem, for t > 0 and arbitrary functions f(x) and g(x), as follows

∂²u/∂t² - ∂²u/∂x² = 0 ; u(x, 0) = f(x), [∂u/∂t]_{t=0} = g(x)

4.3 FINITE DIFFERENCE APPROXIMATIONS TO PARTIAL DERIVATIVES

Let the xy-plane be divided into a network of rectangles of sides Δx = h and Δy = k by drawing the sets of lines x = ih and y = jk; i, j = 0, 1, 2, ..., as shown in figure-(4.1)


Figure-(4.1)

The points of intersection of these families of lines are called mesh points, lattice points or grid points.

For u_{i,j} = u(ih, jk) = u(x, y), we have the following approximations


u_x = (u_{i+1,j} - u_{i,j})/h + o(h)    (4.3.1)
    = (u_{i,j} - u_{i-1,j})/h + o(h)    (4.3.2)
    = (u_{i+1,j} - u_{i-1,j})/(2h) + o(h²)    (4.3.3)

u_xx = (u_{i-1,j} - 2u_{i,j} + u_{i+1,j})/h² + o(h²)    (4.3.4)

u_y = (u_{i,j+1} - u_{i,j})/k + o(k)    (4.3.5)
    = (u_{i,j} - u_{i,j-1})/k + o(k)    (4.3.6)
    = (u_{i,j+1} - u_{i,j-1})/(2k) + o(k²)    (4.3.7)

u_yy = (u_{i,j-1} - 2u_{i,j} + u_{i,j+1})/k² + o(k²)    (4.3.8)

Replacing the derivatives in any partial differential equation by their corresponding difference approximations (4.3.1) to (4.3.8), we obtain the finite-difference analogue of the given equation.
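The orders quoted in (4.3.1)-(4.3.8) can be confirmed numerically: for a smooth function the one-sided difference error shrinks like h, while the central difference error shrinks like h². The test function u = sin x is our own choice, not from the text.

```python
import math

u, du = math.sin, math.cos   # test function and its exact derivative
x = 1.0

def forward(h):
    # (4.3.1)-type one-sided approximation, error o(h)
    return (u(x + h) - u(x)) / h

def central(h):
    # (4.3.3)-type central approximation, error o(h^2)
    return (u(x + h) - u(x - h)) / (2 * h)

for h in (0.1, 0.05, 0.025):
    print(h, abs(forward(h) - du(x)), abs(central(h) - du(x)))
# Halving h roughly halves the forward error and quarters the central error.
```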

4.4 SOLUTION OF ELLIPTIC EQUATIONS

In this section [12] we will study various techniques for solving Laplace's and Poisson's equations, which are elliptic in nature. Various physical phenomena are governed by these well-known equations. Some of them, frequently encountered in physical and engineering applications, are the steady heat equation, seepage through porous media, the irrotational flow of an ideal fluid, the distribution of potential, steady viscous flow, equilibrium stresses in elastic structures, etc.

Solution of Laplace’s equation: We consider Laplace’s equation in two
dimensions as follows


∂²u/∂x² + ∂²u/∂y² = 0                      (4.4.1)

We take a rectangular region R for which u(x, y) is known on the boundary.
Now, assuming that an exact sub-division of R is possible, we divide this region into a
network of square meshes of side h, as shown in figure-(4.2).

Figure-(4.2)

Replacing the derivatives in (4.4.1) by their finite-difference approximations
(4.3.4) and (4.3.8), taking k = h, we get

(u_{i−1,j} − 2u_{i,j} + u_{i+1,j})/h² + (u_{i,j−1} − 2u_{i,j} + u_{i,j+1})/h² = 0

or, u_{i,j} = ¼[u_{i−1,j} + u_{i+1,j} + u_{i,j−1} + u_{i,j+1}]                  (4.4.2)

Equation (4.4.2) shows that the value of u at any interior mesh point is the
average of its values at the four adjacent mesh points. Equation (4.4.2) is
known as the standard 5-point formula, exhibited in figure-(4.3).

We know that Laplace’s equation remains invariant when the co-ordinate axes
are rotated through an angle of 45°. The formula (4.4.2) can then be re-written as

u_{i,j} = ¼[u_{i−1,j−1} + u_{i+1,j−1} + u_{i+1,j+1} + u_{i−1,j+1}]              (4.4.3)

This is similar to (4.4.2), which shows that the value of 𝑢 at any interior mesh
point is the average of its values at four neighboring diagonal mesh points. The equation
(4.4.3) is known as the diagonal 5-point formula exhibited in figure-(4.4).

Figure-(4.3) Figure-(4.4)

Although (4.4.3) is less accurate than (4.4.2), it serves as a reasonably good
approximation for obtaining starting values at the mesh points. We use (4.4.3) to find
the initial values of u at the interior mesh points, computing the following mesh points


u_{3,3} = ¼[b_{1,5} + b_{5,1} + b_{5,5} + b_{1,1}]
u_{2,4} = ¼[b_{1,5} + u_{3,3} + b_{3,5} + b_{1,3}]
u_{4,4} = ¼[b_{3,5} + b_{5,3} + u_{3,3} + b_{5,5}]
u_{4,2} = ¼[u_{3,3} + b_{5,1} + b_{3,1} + b_{5,3}]
u_{2,2} = ¼[b_{1,3} + b_{3,1} + u_{3,3} + b_{1,1}]

The values of u at the remaining interior mesh points are computed by (4.4.2)
in the following way:

u_{2,3} = ¼[b_{1,3} + u_{3,3} + u_{2,4} + u_{2,2}]
u_{3,4} = ¼[u_{2,4} + u_{4,4} + b_{3,5} + u_{3,3}]
u_{4,3} = ¼[u_{3,3} + b_{5,3} + u_{4,4} + u_{4,2}]
u_{3,2} = ¼[u_{2,2} + u_{4,2} + u_{3,3} + b_{3,1}]

After the u_{i,j} are determined once, their accuracy is improved by either Jacobi’s
iterative method or the Gauss-Seidel iterative method. The process is repeated until
two consecutive iterations agree closely, i.e. until the difference between two
consecutive iterates becomes negligibly small, in order to achieve the desired level of
accuracy. The iterative formulas for Jacobi’s method and the Gauss-Seidel
method are given below.

Jacobi’s iteration formula & Gauss-Seidel iteration formula are given by

u_{i,j}^{(n+1)} = ¼[u_{i−1,j}^{(n)} + u_{i+1,j}^{(n)} + u_{i,j+1}^{(n)} + u_{i,j−1}^{(n)}]            (4.4.4)

u_{i,j}^{(n+1)} = ¼[u_{i−1,j}^{(n+1)} + u_{i+1,j}^{(n)} + u_{i,j+1}^{(n+1)} + u_{i,j−1}^{(n)}]        (4.4.5)

Here u_{i,j}^{(n+1)} denotes the (n+1)th iterative value of u_{i,j}, and gives the
improved values of u_{i,j} at the interior mesh points.

The Gauss-Seidel formula uses the latest iterative values available and scans the
mesh points systematically from left to right along successive rows. The Gauss-Seidel
method is simple and well suited to computer calculation. Jacobi’s formula involves the
same working but, being slower, is lengthier. In fact, it can be shown that
the Gauss-Seidel scheme converges twice as fast as Jacobi’s method.
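The two sweeps can be sketched in a few lines of Python. The 5×5 grid and the boundary data (u = 100 on one edge, 0 elsewhere) are an assumed test case, not a problem from this chapter; the point is only the structural difference between (4.4.4) and (4.4.5):

```python
import copy

def jacobi_sweep(u):
    # Jacobi (4.4.4): every update uses only values from the previous iterate
    old = copy.deepcopy(u)
    for i in range(1, len(u) - 1):
        for j in range(1, len(u[0]) - 1):
            u[i][j] = 0.25 * (old[i - 1][j] + old[i + 1][j]
                              + old[i][j + 1] + old[i][j - 1])

def gauss_seidel_sweep(u):
    # Gauss-Seidel (4.4.5): updates are made in place, so the latest
    # values are used as soon as they become available
    for i in range(1, len(u) - 1):
        for j in range(1, len(u[0]) - 1):
            u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                              + u[i][j + 1] + u[i][j - 1])

def solve(sweep, sweeps=200):
    # 5x5 grid with assumed boundary data: u = 100 on the top edge
    u = [[0.0] * 5 for _ in range(5)]
    u[0] = [100.0] * 5
    for _ in range(sweeps):
        sweep(u)
    return u

uj = solve(jacobi_sweep)
ug = solve(gauss_seidel_sweep)
print(round(uj[2][2], 6), round(ug[2][2], 6))
```

Both sweeps converge to the same discrete solution, in which every interior value equals the average of its four neighbours; Gauss-Seidel simply reaches it in fewer sweeps because each update reuses the freshest neighbouring values.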

Solution of Poisson’s equation: We consider Poisson’s equation in two dimensions
as follows


∂²u/∂x² + ∂²u/∂y² = f(x, y)                      (4.4.6)

The method of solving (4.4.6) is similar to that of Laplace’s equation (4.4.1).


Here the standard 5-point formula for (4.4.6) takes the form

𝑢𝑖−1,𝑗 + 𝑢𝑖+1,𝑗 + 𝑢𝑖,𝑗−1 + 𝑢𝑖,𝑗+1 − 4𝑢𝑖,𝑗 = ℎ2 𝑓(𝑖ℎ, 𝑗𝑘) (4.4.7)

Using (4.4.7) at each interior mesh point, we arrive at a system of linear


equations in the nodal values 𝑢𝑖,𝑗 , which can be solved by the Gauss-Seidel method.
The error in replacing 𝑢𝑥𝑥 by the finite-difference approximation is of the order ℎ2 .
Since 𝑘 = ℎ, the error in replacing 𝑢𝑦𝑦 by the finite-difference approximation is also of
the order ℎ2 . Thus the error in solving Laplace’s equation and Poisson’s equation by
finite difference method is of order ℎ2 .

Solution of the elliptic equation by the relaxation method: Let us consider Laplace’s
equation in two dimensions as follows

∂²u/∂x² + ∂²u/∂y² = 0                      (4.4.8)

We take a square region and divide it into a square net of mesh size h. Let the
value of u at A be u_0 and its values at the four adjacent points be
u_1, u_2, u_3 & u_4 respectively, as shown in figure-(4.5).

Figure-(4.5)

Then ∂²u/∂x² ≈ (u_1 + u_3 − 2u_0)/h²

and ∂²u/∂y² ≈ (u_2 + u_4 − 2u_0)/h²

If (4.4.8) is satisfied at 𝐴, then we have


(u_1 + u_3 − 2u_0)/h² + (u_2 + u_4 − 2u_0)/h² ≈ 0

𝑜𝑟, 𝑢1 + 𝑢2 + 𝑢3 + 𝑢4 − 4𝑢0 ≈ 0

Let r_0 & r_1 be the residuals at the mesh points A & B; then we have

𝑟0 = 𝑢1 + 𝑢2 + 𝑢3 + 𝑢4 − 4𝑢0 (4.4.9)
𝑟1 = 𝑢0 + 𝑢5 + 𝑢6 + 𝑢7 − 4𝑢1 (4.4.10)


This is a continuing process. The main aim of the method is to reduce all the
residuals to zero by making them as small as possible step by step. Thus, we
adjust the value of u at an internal mesh point so as to make the residual there zero.
When the value of u is changed at a mesh point, the values of the residuals at the
neighboring interior points will also change. If u_0 is given an increment 1, then (i)
equation (4.4.9) shows that r_0 is changed by −4, and (ii) equation (4.4.10) shows that
r_1 is changed by 1. The relaxation pattern is shown in figure-(4.6).

Figure-(4.6)

In general, equation (4.4.5) of the Gauss-Seidel formula can be written as

u_{i,j}^{(n+1)} = u_{i,j}^{(n)} + ¼[u_{i−1,j}^{(n+1)} + u_{i+1,j}^{(n)} + u_{i,j+1}^{(n+1)} + u_{i,j−1}^{(n)} − 4u_{i,j}^{(n)}]

or, u_{i,j}^{(n+1)} = u_{i,j}^{(n)} + ¼R_{i,j}                     (4.4.11)

This shows that ¼R_{i,j} is the change in the value of u_{i,j} for a Gauss-Seidel iteration.
In the successive over-relaxation method, larger changes than this are given to u_{i,j}^{(n)},
and the iteration formula is written as

u_{i,j}^{(n+1)} = u_{i,j}^{(n)} + ¼wR_{i,j}

or, u_{i,j}^{(n+1)} = ¼w[u_{i−1,j}^{(n+1)} + u_{i+1,j}^{(n)} + u_{i,j+1}^{(n+1)} + u_{i,j−1}^{(n)}] + (1 − w)u_{i,j}^{(n)}      (4.4.12)

The rate of convergence of (4.4.12) depends on the choice of 𝑤, which is called


the accelerating factor and lies between 1 & 2. In general, it is difficult to estimate the
best value of 𝑤.
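A minimal sketch of (4.4.12), again on an assumed small grid with illustrative boundary data (u = 100 on one edge); w = 1 recovers plain Gauss-Seidel, while a moderate over-relaxation factor reduces the residuals faster:

```python
def sor_sweep(u, w):
    # One SOR sweep (4.4.12), written in the residual form (4.4.11):
    # the Gauss-Seidel correction R/4 is scaled by the factor w
    for i in range(1, len(u) - 1):
        for j in range(1, len(u[0]) - 1):
            r = (u[i - 1][j] + u[i + 1][j] + u[i][j + 1] + u[i][j - 1]
                 - 4.0 * u[i][j])        # residual R_ij at this point
            u[i][j] += 0.25 * w * r      # w = 1 gives plain Gauss-Seidel

def max_residual(u):
    # Largest residual over the interior mesh points
    return max(abs(u[i - 1][j] + u[i + 1][j] + u[i][j + 1] + u[i][j - 1]
                   - 4.0 * u[i][j])
               for i in range(1, len(u) - 1) for j in range(1, len(u[0]) - 1))

def run(w, sweeps):
    # 6x6 grid with assumed boundary data (u = 100 on the top edge)
    u = [[0.0] * 6 for _ in range(6)]
    u[0] = [100.0] * 6
    for _ in range(sweeps):
        sor_sweep(u, w)
    return max_residual(u)

print(run(1.0, 30), run(1.3, 30))
```

After the same number of sweeps, the residual left by w = 1.3 (close to the optimal factor for this small grid) is far smaller than that left by plain Gauss-Seidel.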

To solve an elliptic equation by the relaxation method, we follow the
algorithm below:

1. Write down, as trial values, the initial values of u at the interior mesh points
obtained by (4.4.3).

2. Calculate the residuals at each of these points by (4.4.9). When this formula
is applied at a point near the boundary, one or more end terms are chopped off,
since there are no residuals at the boundary.


3. Write the residuals at a mesh point on the right of this point and the value of
𝑢 on its left.

4. Obtain the solution by reducing the residuals to zero one by one, giving
suitable increments to u and using the relaxation pattern of figure-(4.6). At each
step, we reduce the numerically largest residual to zero and record the increment
of u on the left and the modified residual on the right.

5. When a round of relaxation is completed, the value of u and its increments
are added at each point. Using these values, all the residuals are calculated
afresh. If some of the recalculated residuals are large, we liquidate these again.

6. Stop the relaxation process when the current values of the residuals are quite
small. The current value of u at each of the nodes then gives the solution.

4.5 APPLICATIONS OF SOLVING ELLIPTIC EQUATIONS

Application-1: Given the values of u(x, y) on the boundary of the square in the
following figure-(4.7), evaluate the function u(x, y) satisfying Laplace’s equation
u_xx + u_yy = 0 at the pivotal points of the figure by (i) Jacobi’s method (ii) the
Gauss-Seidel method.

Figure-(4.7)

Solution: We know the standard 5-point formula and the diagonal 5-point formula are

u_{i,j} = ¼[u_{i−1,j} + u_{i+1,j} + u_{i,j−1} + u_{i,j+1}]                  (4.5.1)

u_{i,j} = ¼[u_{i−1,j−1} + u_{i+1,j−1} + u_{i+1,j+1} + u_{i−1,j+1}]          (4.5.2)

Using the above formulae and assuming u_4 = 0, we get the following:

u_1 = ¼[1000 + 0 + 1000 + 2000] = 1000
u_2 = ¼[1000 + 0 + 500 + 1000] = 625
u_3 = ¼[2000 + 0 + 1000 + 500] = 875
u_4 = ¼[875 + 0 + 625 + 0] = 375
4


Here u_1 has been determined by using (4.5.2), and u_2, u_3 & u_4 by
using (4.5.1).

(i) Using Jacobi’s formula:

u_1^{(n+1)} = ¼[2000 + u_2^{(n)} + 1000 + u_3^{(n)}]
u_2^{(n+1)} = ¼[u_1^{(n)} + 500 + 1000 + u_4^{(n)}]
u_3^{(n+1)} = ¼[u_1^{(n)} + 2000 + 500 + u_4^{(n)}]
u_4^{(n+1)} = ¼[u_2^{(n)} + 0 + u_3^{(n)} + 0]

First iteration: Putting n = 0, we obtain

u_1^{(1)} = ¼[2000 + 625 + 1000 + 875] = 1125
u_2^{(1)} = ¼[1000 + 500 + 1000 + 375] = 718.75
u_3^{(1)} = ¼[1000 + 2000 + 500 + 375] = 968.75
u_4^{(1)} = ¼[875 + 0 + 625 + 0] = 375

Second iteration: Putting n = 1, we obtain

u_1^{(2)} = ¼[2000 + 718.75 + 1000 + 968.75] = 1171.88
u_2^{(2)} = ¼[1125 + 500 + 1000 + 375] = 750
u_3^{(2)} = ¼[1125 + 2000 + 500 + 375] = 1000
u_4^{(2)} = ¼[968.75 + 0 + 718.75 + 0] = 421.88

Third iteration: Putting n = 2, we obtain

u_1^{(3)} = ¼[2000 + 750 + 1000 + 1000] = 1187.50
u_2^{(3)} = ¼[1171.88 + 500 + 1000 + 421.88] = 773.44
u_3^{(3)} = ¼[1171.88 + 2000 + 500 + 421.88] = 1023.44
u_4^{(3)} = ¼[1000 + 0 + 750 + 0] = 437.50

Fourth iteration: Putting n = 3, we obtain

u_1^{(4)} = ¼[2000 + 773.44 + 1000 + 1023.44] = 1199.22
u_2^{(4)} = ¼[1187.50 + 500 + 1000 + 437.50] = 781.25
u_3^{(4)} = ¼[1187.50 + 2000 + 500 + 437.50] = 1031.25
u_4^{(4)} = ¼[1023.44 + 0 + 773.44 + 0] = 449.22
4


Fifth iteration: Putting n = 4, we obtain

u_1^{(5)} = ¼[2000 + 781.25 + 1000 + 1031.25] = 1203.13
u_2^{(5)} = ¼[1199.22 + 500 + 1000 + 449.22] = 787.11
u_3^{(5)} = ¼[1199.22 + 2000 + 500 + 449.22] = 1037.11
u_4^{(5)} = ¼[1031.25 + 0 + 781.25 + 0] = 453.13

Sixth iteration: Putting n = 5, we obtain

u_1^{(6)} = ¼[2000 + 787.11 + 1000 + 1037.11] = 1206.06
u_2^{(6)} = ¼[1203.13 + 500 + 1000 + 453.13] = 789.07
u_3^{(6)} = ¼[1203.13 + 2000 + 500 + 453.13] = 1039.07
u_4^{(6)} = ¼[1037.11 + 0 + 787.11 + 0] = 456.06

Seventh iteration: Putting n = 6, we obtain

u_1^{(7)} = ¼[2000 + 789.07 + 1000 + 1039.07] = 1207.04
u_2^{(7)} = ¼[1206.06 + 500 + 1000 + 456.06] = 790.53
u_3^{(7)} = ¼[1206.06 + 2000 + 500 + 456.06] = 1040.53
u_4^{(7)} = ¼[1039.07 + 0 + 789.07 + 0] = 457.04

Eighth iteration: Putting n = 7, we obtain

u_1^{(8)} = ¼[2000 + 790.53 + 1000 + 1040.53] = 1207.77
u_2^{(8)} = ¼[1207.04 + 500 + 1000 + 457.04] = 791.02
u_3^{(8)} = ¼[1207.04 + 2000 + 500 + 457.04] = 1041.02
u_4^{(8)} = ¼[1040.53 + 0 + 790.53 + 0] = 457.77
4

Since the eighth iteration is very close to the seventh iteration, we can conclude

𝑢1 = 1207.77 ≈ 1208, 𝑢2 = 791.02 ≈ 791


𝑢3 = 1041.02 ≈ 1041, 𝑢4 = 457.77 ≈ 458

(ii) Using the Gauss-Seidel formula:

u_1^{(n+1)} = ¼[2000 + u_2^{(n)} + 1000 + u_3^{(n)}]
u_2^{(n+1)} = ¼[u_1^{(n+1)} + 500 + 1000 + u_4^{(n)}]


u_3^{(n+1)} = ¼[u_1^{(n+1)} + 2000 + 500 + u_4^{(n)}]
u_4^{(n+1)} = ¼[u_2^{(n+1)} + 0 + u_3^{(n+1)} + 0]

First iteration: Putting n = 0, we obtain

u_1^{(1)} = ¼[2000 + 625 + 1000 + 875] = 1125
u_2^{(1)} = ¼[1125 + 500 + 1000 + 375] = 750
u_3^{(1)} = ¼[1125 + 2000 + 500 + 375] = 1000
u_4^{(1)} = ¼[1000 + 0 + 750 + 0] = 437.50

Second iteration: Putting n = 1, we obtain

u_1^{(2)} = ¼[2000 + 750 + 1000 + 1000] = 1187.50
u_2^{(2)} = ¼[1187.50 + 500 + 1000 + 437.50] = 781.25
u_3^{(2)} = ¼[1187.50 + 2000 + 500 + 437.50] = 1031.25
u_4^{(2)} = ¼[1031.25 + 0 + 781.25 + 0] = 453.13

Third iteration: Putting n = 2, we obtain

u_1^{(3)} = ¼[2000 + 781.25 + 1000 + 1031.25] = 1203.13
u_2^{(3)} = ¼[1203.13 + 500 + 1000 + 453.13] = 789.07
u_3^{(3)} = ¼[1203.13 + 2000 + 500 + 453.13] = 1039.07
u_4^{(3)} = ¼[1039.07 + 0 + 789.07 + 0] = 457.04

Fourth iteration: Putting n = 3, we obtain

u_1^{(4)} = ¼[2000 + 789.07 + 1000 + 1039.07] = 1207.04
u_2^{(4)} = ¼[1207.04 + 500 + 1000 + 457.04] = 791.02
u_3^{(4)} = ¼[1207.04 + 2000 + 500 + 457.04] = 1041.02
u_4^{(4)} = ¼[1041.02 + 0 + 791.02 + 0] = 458.01

Fifth iteration: Putting n = 4, we obtain

u_1^{(5)} = ¼[2000 + 791.02 + 1000 + 1041.02] = 1208.01
u_2^{(5)} = ¼[1208.01 + 500 + 1000 + 458.01] = 791.51
u_3^{(5)} = ¼[1208.01 + 2000 + 500 + 458.01] = 1041.51
u_4^{(5)} = ¼[1041.51 + 0 + 791.51 + 0] = 458.26


Sixth iteration: Putting n = 5, we obtain

u_1^{(6)} = ¼[2000 + 791.51 + 1000 + 1041.51] = 1208.26
u_2^{(6)} = ¼[1208.26 + 500 + 1000 + 458.26] = 791.63
u_3^{(6)} = ¼[1208.26 + 2000 + 500 + 458.26] = 1041.63
u_4^{(6)} = ¼[1041.63 + 0 + 791.63 + 0] = 458.32

Since the sixth iteration is very close to the fifth iteration, we can conclude

𝑢1 = 1208.26 ≈ 1208, 𝑢2 = 791.63 ≈ 792


𝑢3 = 1041.63 ≈ 1042, 𝑢4 = 458.32 ≈ 458
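The Gauss-Seidel computation of part (ii) can be reproduced with a short loop; the iteration count of 50 is an arbitrary safe choice, and the starting values are deliberately rough to show that the method still converges to the same answers:

```python
# Gauss-Seidel iteration for the four interior values of Application-1(ii);
# the constants 2000, 1000, 500, 0 are the boundary values from the formulas above
u1 = u2 = u3 = u4 = 0.0          # even rough starting values converge
for _ in range(50):
    u1 = 0.25 * (2000 + u2 + 1000 + u3)
    u2 = 0.25 * (u1 + 500 + 1000 + u4)
    u3 = 0.25 * (u1 + 2000 + 500 + u4)
    u4 = 0.25 * (u2 + 0 + u3 + 0)
print(round(u1), round(u2), round(u3), round(u4))   # → 1208 792 1042 458
```

The limits agree with the hand computation: the exact fixed point is u1 = 3625/3, u2 = 2375/3, u3 = 3125/3, u4 = 1375/3.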

Application-2: Apply the relaxation method to solve Laplace’s equation u_xx + u_yy = 0
inside the square bounded by x = 0, x = 4, y = 0, y = 4, given that u = x²y² on
the boundary.

Figure-(4.8)

Therefore, for the residual at A, using (4.4.9), we get

r_A = 0 + 16 + 16 + 56 − 4 × 24 = −8

By a similar manner, we obtain

𝑟𝐵 = 𝑟𝐷 = 𝑟𝐹 = 𝑟𝐺 = 𝑟𝐻 = 0, 𝑟𝐶 = −16, 𝑟𝐸 = 16, 𝑟𝐼 = −8

Now, we liquidate the residuals in the following way:

1. The numerically largest residual is r_E = 16. To liquidate it, we increase u at E
by 4, so that this residual becomes 0 and the residuals at the neighboring nodes
are increased by 4.


2. The next numerically largest residual is r_C = −16. To reduce it to 0, we
increase u at C by −4, so that the residuals at the adjacent nodes are increased
by −4.

3. Again, the numerically largest residual is r_A = −8. To reduce it to 0, we
increase u at A by −2, so that the residuals at the adjacent nodes are increased
by −2.

4. The numerically largest current residual is now 2, so we stop the
relaxation process.

Thus the final values of 𝑢 at different points are obtained as

𝑢𝐴 = 22, 𝑢𝐵 = 56, 𝑢𝐶 = 100, 𝑢𝐷 = 16, 𝑢𝐸 = 16


𝑢𝐹 = 56, 𝑢𝐺 = 8, 𝑢𝐻 = 16, 𝑢𝐼 = 22

4.6 SOLUTION OF PARABOLIC EQUATIONS

In this section [12] we will consider a model problem of modest scope to


introduce some of the essential ideas. For technical reasons, the problem is said to be
of parabolic type.

Solution of one-dimensional heat equation: Let us consider the one-dimensional heat


equation

∂u/∂t = c² ∂²u/∂x²                      (4.6.1)

Here c² = k/(ρσ) cm²s⁻¹ is the diffusivity of the substance, where k is the thermal
conductivity, ρ is the density and σ is the specific heat of the substance.

We can solve (4.6.1) by the Schmidt method, Crank-Nicholson method, and


iterative method. These methods are described below.

Schmidt method: We consider a rectangular mesh in the xt-plane with spacing h along
the x-direction and k along the t-direction. Denoting the mesh point (x, t) =
(ih, jk) simply by (i, j), we have

∂u/∂t = (u_{i,j+1} − u_{i,j})/k

∂²u/∂x² = (u_{i−1,j} − 2u_{i,j} + u_{i+1,j})/h²

Using these in (4.6.1), we obtain

(u_{i,j+1} − u_{i,j})/k = c²(u_{i−1,j} − 2u_{i,j} + u_{i+1,j})/h²

or, u_{i,j+1} − u_{i,j} = (kc²/h²)[u_{i−1,j} − 2u_{i,j} + u_{i+1,j}]


or, u_{i,j+1} − u_{i,j} = r[u_{i−1,j} − 2u_{i,j} + u_{i+1,j}] ;  r = kc²/h²

or, u_{i,j+1} = u_{i,j} + r[u_{i−1,j} − 2u_{i,j} + u_{i+1,j}]

or, u_{i,j+1} = r u_{i−1,j} + (1 − 2r)u_{i,j} + r u_{i+1,j}                  (4.6.2)

The relation (4.6.2) connects the function values at the two time
levels j and (j + 1), and is hence called a 2-level formula. This formula enables us to
determine the value of u at the (i, j + 1)th mesh point in terms of the known function
values at the points x_{i−1}, x_i & x_{i+1} at the instant t_j. The schematic form of (4.6.2) is
shown in figure-(4.9).

Figure-(4.9)

Hence the formula (4.6.2) is called the Schmidt explicit formula, which is valid only
for 0 < r ≤ ½. In particular, when r = ½, equation (4.6.2) reduces to

u_{i,j+1} = ½u_{i−1,j} + (1 − 1)u_{i,j} + ½u_{i+1,j}

or, u_{i,j+1} = ½[u_{i−1,j} + u_{i+1,j}]                  (4.6.3)

This shows that the value of u at x_i at time t_{j+1} is the mean of the values
of u at x_{i−1} and x_{i+1} at time t_j. The relation (4.6.3), known as the Bender-Schmidt
recurrence relation, gives the value of u at the internal points with the help of the
boundary conditions.

Crank-Nicholson method: Crank and Nicholson proposed a method according to
which ∂²u/∂x² is replaced by the average of its finite-difference approximations on the
jth and (j + 1)th rows. Thus, we have

∂u/∂t = (u_{i,j+1} − u_{i,j})/k

∂²u/∂x² = ½[(u_{i−1,j} − 2u_{i,j} + u_{i+1,j})/h² + (u_{i−1,j+1} − 2u_{i,j+1} + u_{i+1,j+1})/h²]

Hence, (4.6.1) reduces to

(u_{i,j+1} − u_{i,j})/k = (c²/2)[(u_{i−1,j} − 2u_{i,j} + u_{i+1,j})/h² + (u_{i−1,j+1} − 2u_{i,j+1} + u_{i+1,j+1})/h²]

or, u_{i,j+1} − u_{i,j} = (kc²/2h²)[u_{i−1,j} − 2u_{i,j} + u_{i+1,j}
                                      + u_{i−1,j+1} − 2u_{i,j+1} + u_{i+1,j+1}]


or, 2[u_{i,j+1} − u_{i,j}] = r[u_{i−1,j} − 2u_{i,j} + u_{i+1,j}
                                + u_{i−1,j+1} − 2u_{i,j+1} + u_{i+1,j+1}] ;  r = kc²/h²

or, −r u_{i−1,j+1} + (2 + 2r)u_{i,j+1} − r u_{i+1,j+1}
                = r u_{i−1,j} + (2 − 2r)u_{i,j} + r u_{i+1,j}                  (4.6.4)

On the left-hand side of (4.6.4) we have three unknowns, while on the right-hand
side all three quantities are known. The implicit scheme (4.6.4) is called the
Crank-Nicholson formula, and it is convergent for all values of r. If there are N internal
mesh points on each row, then formula (4.6.4) gives N simultaneous equations for the N
unknowns in terms of the given boundary values. Similarly, the internal mesh points on
all rows can be calculated. The computational model of this method is given below in
figure-(4.10).

Figure-(4.10)

Iterative method: We now develop an iterative method for solving (4.6.1) by means
of the Crank-Nicholson scheme. In the Crank-Nicholson method, the
partial differential equation (4.6.1) is replaced by the finite-difference equation

(1 + r)u_{i,j+1} = u_{i,j} + (r/2)[u_{i−1,j} − 2u_{i,j} + u_{i+1,j}
                                     + u_{i−1,j+1} + u_{i+1,j+1}]                  (4.6.5)

In (4.6.5) the unknowns are u_{i−1,j+1}, u_{i,j+1} & u_{i+1,j+1}, and all the others are
known, since they were already computed at the jth step. Hence, dropping the
subscripts j + 1 and setting

c_i = u_{i,j} + (r/2)[u_{i−1,j} − 2u_{i,j} + u_{i+1,j}]                  (4.6.6)

we can write (4.6.5) as

u_i = r/(2(1 + r)) [u_{i−1} + u_{i+1}] + c_i/(1 + r)                  (4.6.7)

From (4.6.7), we obtain the iteration formula

u_i^{(n+1)} = r/(2(1 + r)) [u_{i−1}^{(n)} + u_{i+1}^{(n)}] + c_i/(1 + r)                  (4.6.8)


This expresses the (n + 1)th iterate in terms of the nth iterate, and is known as
Jacobi’s iteration formula. It can be seen from (4.6.8) that at the time of
computing u_i^{(n+1)}, the latest value of u_{i−1}, namely u_{i−1}^{(n+1)}, is already available.
Hence, the convergence of Jacobi’s iteration formula can be improved by replacing
u_{i−1}^{(n)} in (4.6.8) by this latest available value. Accordingly, we obtain the
Gauss-Seidel iteration formula

u_i^{(n+1)} = r/(2(1 + r)) [u_{i−1}^{(n+1)} + u_{i+1}^{(n)}] + c_i/(1 + r)                  (4.6.9)

It can be shown that the Gauss-Seidel scheme (4.6.9) converges for all finite
values of r, and that it converges twice as fast as Jacobi’s scheme (4.6.8).

4.7 APPLICATION OF SOLVING PARABOLIC EQUATION

Solve ∂u/∂t = ∂²u/∂x² by using (i) the Schmidt method (ii) the Crank-Nicholson
method, subject to the conditions u(x, 0) = sin πx, 0 ≤ x ≤ 1, and u(0, t) = 0 = u(1, t).
Carry out the computations for two levels, taking h = 1/3 and k = 1/36.

Solution: Here c² = 1, h = 1/3 & k = 1/36.

Thus, r = kc²/h² = ¼

u_{1,0} = sin(π/3) = √3/2  &  u_{2,0} = sin(2π/3) = √3/2
2 3 2

Also, all boundary values are zero as shown in figure-(4.11) below

Figure-(4.11)

(i) The Schmidt formula (4.6.2) in this case becomes

u_{i,j+1} = ¼[u_{i−1,j} + 2u_{i,j} + u_{i+1,j}]

For i = 1, 2 & j = 0, we get

u_{1,1} = ¼[u_{0,0} + 2u_{1,0} + u_{2,0}] = ¼[0 + 2(√3/2) + √3/2] = 0.65


u_{2,1} = ¼[u_{1,0} + 2u_{2,0} + u_{3,0}] = ¼[√3/2 + 2(√3/2) + 0] = 0.65

For i = 1, 2 & j = 1, we get

u_{1,2} = ¼[u_{0,1} + 2u_{1,1} + u_{2,1}] = ¼[0 + 2(0.65) + 0.65] = 0.49

u_{2,2} = ¼[u_{1,1} + 2u_{2,1} + u_{3,1}] = ¼[0.65 + 2(0.65) + 0] = 0.49

Thus, by Schmidt scheme, we get the following mesh points

𝑢1,1 = 0.65 = 𝑢2,1 , 𝑢1,2 = 0.49 = 𝑢2,2
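The Schmidt computation of part (i) can be reproduced in a few lines of Python (the list layout, with boundary zeros in positions 0 and 3, is just a convenient representation):

```python
import math

# Schmidt formula with r = 1/4: u_{i,j+1} = (u_{i-1,j} + 2u_{i,j} + u_{i+1,j})/4
u = [0.0, math.sin(math.pi / 3), math.sin(2 * math.pi / 3), 0.0]   # level j = 0
for j in range(2):                                                  # advance two levels
    u = [0.0] + [0.25 * (u[i - 1] + 2 * u[i] + u[i + 1]) for i in (1, 2)] + [0.0]
    print([round(v, 2) for v in u])
```

Printed to two decimals, the two levels match the hand computation: 0.65, 0.65 at the first level and 0.49, 0.49 at the second.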

(ii) The Crank-Nicholson formula (4.6.4) in this case becomes

−¼u_{i−1,j+1} + (5/2)u_{i,j+1} − ¼u_{i+1,j+1} = ¼u_{i−1,j} + (3/2)u_{i,j} + ¼u_{i+1,j}

For i = 1, 2 & j = 0, we get

−¼u_{0,1} + (5/2)u_{1,1} − ¼u_{2,1} = ¼u_{0,0} + (3/2)u_{1,0} + ¼u_{2,0}

or, −¼(0) + (5/2)u_{1,1} − ¼u_{2,1} = ¼(0) + (3/2)(√3/2) + ¼(√3/2)

or, 10u_{1,1} − u_{2,1} = 7√3/2                                      (4.7.1)

−¼u_{1,1} + (5/2)u_{2,1} − ¼u_{3,1} = ¼u_{1,0} + (3/2)u_{2,0} + ¼u_{3,0}

or, −¼u_{1,1} + (5/2)u_{2,1} − ¼(0) = ¼(√3/2) + (3/2)(√3/2) + ¼(0)

or, −u_{1,1} + 10u_{2,1} = 7√3/2                                      (4.7.2)

Now solving (4.7.1) and (4.7.2), we get u_{1,1} = u_{2,1} = 0.67

For i = 1, 2 & j = 1, we get

−¼u_{0,2} + (5/2)u_{1,2} − ¼u_{2,2} = ¼u_{0,1} + (3/2)u_{1,1} + ¼u_{2,1}

or, −¼(0) + (5/2)u_{1,2} − ¼u_{2,2} = ¼(0) + (3/2)(0.67) + ¼(0.67)

or, 10u_{1,2} − u_{2,2} = 4.69                                      (4.7.3)

−¼u_{1,2} + (5/2)u_{2,2} − ¼u_{3,2} = ¼u_{1,1} + (3/2)u_{2,1} + ¼u_{3,1}


or, −¼u_{1,2} + (5/2)u_{2,2} − ¼(0) = ¼(0.67) + (3/2)(0.67) + ¼(0)

or, −u_{1,2} + 10u_{2,2} = 4.69                                      (4.7.4)

Now solving (4.7.3) and (4.7.4), we get 𝑢1,2 = 𝑢2,2 = 0.52

Thus, by Crank-Nicholson scheme we get the following mesh points

𝑢1,1 = 0.67 = 𝑢2,1 , 𝑢1,2 = 0.52 = 𝑢2,2
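The Crank-Nicholson computation can also be sketched in Python. Multiplying the scheme by 4 gives, at each time level, the 2×2 system 10a − b = s1, −a + 10b = s2 (cf. (4.7.1)-(4.7.2)); the closed-form solution below follows from Cramer's rule and is a sketch for this particular problem only:

```python
import math

def cn_step(u):
    # One Crank-Nicholson time step for this problem. u = [u0, u1, u2, u3]
    # at level j, with u0 = u3 = 0 on the boundary; a, b are the new
    # interior values at level j + 1.
    s1 = u[0] + 6 * u[1] + u[2]      # RHS of the first equation, scaled by 4
    s2 = u[1] + 6 * u[2] + u[3]      # RHS of the second equation, scaled by 4
    a = (10 * s1 + s2) / 99          # solves 10a - b = s1, -a + 10b = s2
    b = (s1 + 10 * s2) / 99
    return [0.0, a, b, 0.0]

u = [0.0, math.sin(math.pi / 3), math.sin(2 * math.pi / 3), 0.0]  # level 0
u = cn_step(u)
level1 = [round(v, 2) for v in u]
u = cn_step(u)
level2 = [round(v, 2) for v in u]
print(level1, level2)
```

To two decimals this reproduces the values above, 0.67 at the first level and 0.52 at the second, without the intermediate rounding used in the hand computation.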

4.8 SOLUTION OF HYPERBOLIC EQUATIONS

The wave equation ∂²u/∂t² = ∂²u/∂x² is the simplest example of the hyperbolic
partial differential equations. Its solution is the displacement function u(x, t), defined
for values 0 ≤ x ≤ L and 0 ≤ t < ∞, satisfying the initial and boundary conditions
[12]. Such equations arise from convective types of problems in vibrations, wave
mechanics, gas dynamics, elasticity, electromagnetics and seismology.

Solution of the wave equation (vibration of a stretched string): We consider the
boundary value problem that models the transverse vibrations of a stretched string, as
follows:

∂²u/∂t² = c² ∂²u/∂x² ;  0 ≤ x ≤ L & 0 ≤ t < ∞                  (4.8.1)

Subject to the conditions

u(x, 0) = f(x)  &  (∂u/∂t)(x, 0) = g(x) ;  0 ≤ x ≤ L                  (4.8.2)
𝜕𝑡

𝑢(0, 𝑡) = 𝜑(𝑡) = 𝑢(𝐿, 𝑡) ; 0 ≤ 𝑡 ≤ 𝑇 (4.8.3)

We have the finite difference approximations for the derivatives, as follows

∂²u/∂x² = (u_{i−1,j} − 2u_{i,j} + u_{i+1,j})/h²

∂²u/∂t² = (u_{i,j−1} − 2u_{i,j} + u_{i,j+1})/k²
𝜕𝑡 2 𝑘2

Now, putting these in (4.8.1), we get

(u_{i,j−1} − 2u_{i,j} + u_{i,j+1})/k² = c²(u_{i−1,j} − 2u_{i,j} + u_{i+1,j})/h²

or, u_{i,j−1} − 2u_{i,j} + u_{i,j+1} = (k²c²/h²)[u_{i−1,j} − 2u_{i,j} + u_{i+1,j}]

or, u_{i,j−1} − 2u_{i,j} + u_{i,j+1} = r²[u_{i−1,j} − 2u_{i,j} + u_{i+1,j}] ;  r = ck/h

or, u_{i,j+1} = −u_{i,j−1} + r²[u_{i−1,j} + u_{i+1,j}] + 2(1 − r²)u_{i,j}                  (4.8.4)


The formula (4.8.4) shows that the function values at the jth and (j − 1)th levels
are required in order to determine those at the (j + 1)th level. Such difference schemes
are called three-level difference schemes, compared to the two-level difference schemes
derived in the parabolic case. By expanding the terms in (4.8.4) as Taylor’s
series and simplifying, it can be shown that the truncation error in (4.8.4) is of
O(h² + k²), and the formula (4.8.4) holds well if r ≤ 1, which is the condition for
stability.

There also exist implicit finite-difference schemes for (4.8.1). Two such schemes,
which hold well for all values of r = ck/h and are obtained by taking the average of the
finite-difference approximations on different rows, are given by

(1/k²)[u_{i,j−1} − 2u_{i,j} + u_{i,j+1}] = (c²/2h²)[(u_{i−1,j+1} − 2u_{i,j+1} + u_{i+1,j+1})
                + (u_{i−1,j−1} − 2u_{i,j−1} + u_{i+1,j−1})]                  (4.8.5)

(1/k²)[u_{i,j−1} − 2u_{i,j} + u_{i,j+1}] = (c²/4h²)[(u_{i−1,j+1} − 2u_{i,j+1} + u_{i+1,j+1})
                + 2(u_{i−1,j} − 2u_{i,j} + u_{i+1,j})
                + (u_{i−1,j−1} − 2u_{i,j−1} + u_{i+1,j−1})]                  (4.8.6)

4.9 APPLICATION OF SOLVING HYPERBOLIC EQUATION

Solve ∂²u/∂t² = 16 ∂²u/∂x², taking Δx = 1 and computing up to t = 1.25, subject to
the initial and boundary conditions u(0, t) = 0 = u(5, t), (∂u/∂t)(x, 0) = 0 &
u(x, 0) = x²(5 − x).

Solution: Here c² = 16; therefore, with r = k/h, the difference equation (4.8.4) reduces to

u_{i,j+1} = 2(1 − 16r²)u_{i,j} + 16r²[u_{i−1,j} + u_{i+1,j}] − u_{i,j−1}                  (4.9.1)

Given h = Δx = 1; for the coefficient of u_{i,j} to vanish, we choose k such that

1 − 16r² = 0

or, r² = k²/h² = 1/16

or, k = ¼

For 𝑟 = 𝑘/ℎ = ¼ < 1, (4.9.1) gives a convenient solution as

𝑢𝑖,𝑗+1 = 𝑢𝑖−1,𝑗 + 𝑢𝑖+1,𝑗 −𝑢𝑖,𝑗−1 (4.9.2)

Given 𝑢(0, 𝑡) = 0 = 𝑢(5, 𝑡)


𝑜𝑟, 𝑢0,𝑗 = 0 = 𝑢5,𝑗 ; ∀ 𝑗

Also 𝑢(𝑥, 0) = 𝑥 2 (5 − 𝑥)


𝑜𝑟, 𝑢𝑖,0 = 𝑖 2 (5 − 𝑖) = 4,12,18,16 ; 𝑖 = 1,2,3,4 & 𝑡 = 0

Finally, (∂u/∂t)(x, 0) = 0

or, (u_{i,j+1} − u_{i,j})/k = 0 ;  t = 0

or, u_{i,j+1} = u_{i,j}

Putting 𝑗 = 1 & 𝑖 = 1,2,3,4 successively in (4.9.2), we get

𝑢1,2 = 𝑢0,1 + 𝑢2,1 −𝑢1,0 = 0 + 12 − 4 = 8


𝑢2,2 = 𝑢1,1 + 𝑢3,1 −𝑢2,0 = 4 + 18 − 12 = 10
𝑢3,2 = 𝑢2,1 + 𝑢4,1 −𝑢3,0 = 12 + 16 − 18 = 10
𝑢4,2 = 𝑢3,1 + 𝑢5,1 −𝑢4,0 = 18 + 0 − 16 = 2

Putting 𝑗 = 2 & 𝑖 = 1,2,3,4 successively in (4.9.2), we get

𝑢1,3 = 𝑢0,2 + 𝑢2,2 −𝑢1,1 = 0 + 10 − 4 = 6


𝑢2,3 = 𝑢1,2 + 𝑢3,2 −𝑢2,1 = 8 + 10 − 12 = 6
𝑢3,3 = 𝑢2,2 + 𝑢4,2 −𝑢3,1 = 10 + 2 − 18 = −6
𝑢4,3 = 𝑢3,2 + 𝑢5,2 −𝑢4,1 = 10 + 0 − 16 = −6

Putting 𝑗 = 3 & 𝑖 = 1,2,3,4 successively in (4.9.2), we get

𝑢1,4 = 𝑢0,3 + 𝑢2,3 −𝑢1,2 = 0 + 6 − 8 = −2


𝑢2,4 = 𝑢1,3 + 𝑢3,3 −𝑢2,2 = 6 − 6 − 10 = −10
𝑢3,4 = 𝑢2,3 + 𝑢4,3 −𝑢3,2 = 6 − 6 − 10 = −10
𝑢4,4 = 𝑢3,3 + 𝑢5,3 −𝑢4,2 = −6 + 0 − 2 = −8

Putting 𝑗 = 4 & 𝑖 = 1,2,3,4 successively in (4.9.2), we get

𝑢1,5 = 𝑢0,4 + 𝑢2,4 −𝑢1,3 = 0 − 10 − 6 = −16


𝑢2,5 = 𝑢1,4 + 𝑢3,4 −𝑢2,3 = −2 − 10 − 6 = −18
𝑢3,5 = 𝑢2,4 + 𝑢4,4 −𝑢3,3 = −10 − 8 + 6 = −12
𝑢4,5 = 𝑢3,4 + 𝑢5,4 −𝑢4,3 = −10 + 0 + 6 = −4

Thus the required values of u_{i,j} can be shown in the following table:


j↓ \ i→     0      1      2      3      4      5
0           0      4      12     18     16     0
1           0      4      12     18     16     0
2           0      8      10     10     2      0
3           0      6      6      −6     −6     0
4           0      −2     −10    −10    −8     0
5           0      −16    −18    −12    −4     0

4.10 COMPARISON BETWEEN ITERATIVE METHOD AND RELAXATION


METHOD

The method of iteration and the method of relaxation are both methods for
solving partial differential equations with given boundary values [11]. Although they
reach the desired solution by different processes, both methods are of the same inherent
accuracy. Their points of similarity and dissimilarity are given below.

1. Both methods require that the bounded region be divided into a network of
squares or other similar polygons.

2. Both methods require that the boundary values are written down and that
rough values of the function are computed, estimated or assumed for all
interior points of the network.

3. In order to start a computation, the iteration method assumes that a functional


value at any mesh point satisfies the given difference equation and thereby
derives the relation which must exist between that functional value and
adjacent functional values. The process of iteration is then applied until the
required relation is satisfied. On the other hand, the relaxation method
recognizes at the start that an assumed functional value at any mesh point
will not satisfy the given difference equation, but that there will be a residual
at that point. The residuals are computed for all points before the relaxation
process is started.

4. The iteration process is slow, sure and frequently long. The relaxation process
is more rapid, less certain and usually reasonably short. The convergence is
rapid by both methods at first but becomes slow with both methods long
before the end is reached.

5. The arithmetic operations are easier and shorter with the method of relaxation.
The mental effort necessary to avoid mistakes, however, is much greater than
with the iteration method.

6. The greatest drawback to the method of iteration is its length and the greatest
drawback to the method of relaxation is its liability to errors of the
computation. Such errors can be kept out only by extreme care and unceasing
vigilance on the part of the computer.


7. Computational errors in the method of iteration are immediately evident
and are self-correcting. In the method of relaxation, any errors in the
functional values remain hidden and can be brought to light only by the
application of formula (4.4.9). For this reason, all the interior net-point values
should be checked by (4.4.9) several times during a long computation. Such
checking takes time and keeps the relaxation process from being as short as
it might at first appear.

8. In the iteration process, attention is always fixed on the functional values at


the lattice points, whereas in the relaxation process, attention is always
centered on the residuals at those points.

Thus, if anyone solves a problem of moderate length by both the iteration method
and the relaxation method, he can decide for himself which method is preferable in
his case.

4.11 THE RAYLEIGH-RITZ METHOD

Introduction: The Rayleigh-Ritz method of solving a boundary value problem is


entirely different from either of the methods considered in previous sections [11]. It is
not based on difference equations and does not employ them. In finding the solution of
a physical problem by this method, one assumes that the solution can be represented by
a linear combination of simple and easily calculated functions each of which satisfies
the given boundary conditions.

After a problem has been formulated as the definite integral of the algebraic sum
of two or more homogeneous, positive and definite quadratic forms or as the quotient
of two such integrals, the desired unknown function is replaced in the integrals by the
assumed linear combination. Then the integral or quotient of the integrals is minimized
with respect to each of the arbitrary constants occurring in the linear combination.

This method is direct and short if only approximate results are desired, but if
results of high accuracy are required, the method is quite laborious and the labor cannot
be appreciably lessened by mechanical aids. The labor involved is mostly in long and
tedious algebraic manipulations.

A special and simple form of the Rayleigh-Ritz method was first used by Lord
Rayleigh for finding the fundamental vibration period of an elastic body. It was later
extended and generalized, and its convergence proved, by W. Ritz.

Figure-(4.12)

Vibration of a rectangular membrane: Consider a thin elastic membrane of
rectangular form with sides a & b, as shown in figure-(4.12), such as a very thin sheet of

Study on Different Numerical Methods for Solving Differential Equations Page | 85


Chapter-4: Solution of Partial Differential Equations

rubber, and assume that the membrane is made fast at the edges while tightly
stretched [11].

Take a set of three mutually perpendicular axes, with the xy-plane coinciding
with the membrane and the z-axis perpendicular to it. Then, if an interior region of
the membrane is pulled or pushed in a direction at right angles to its plane of
equilibrium (the xy-plane), it becomes distorted into a curved surface, the area of
which is

S = ∫₀ᵃ∫₀ᵇ √(1 + (∂z/∂x)² + (∂z/∂y)²) dy dx

  ≈ ∫₀ᵃ∫₀ᵇ [1 + (1/2)(∂z/∂x)² + (1/2)(∂z/∂y)²] dy dx    (approximately, for small slopes)

Since the distortion is small, the increase in the area of the membrane due to the
distortion is therefore

ΔS = ∫₀ᵃ∫₀ᵇ [1 + (1/2)(∂z/∂x)² + (1/2)(∂z/∂y)²] dy dx − ab

   = (1/2) ∫₀ᵃ∫₀ᵇ [(∂z/∂x)² + (∂z/∂y)²] dy dx

Let T denote the tension on a unit length of the boundary of the membrane, the
direction of T being perpendicular to the edge of the boundary. The work done in
deflecting the membrane until its area is increased by an amount ΔS is found as
follows. Consider a rectangular region of dimensions u and v, as in figure-(4.13).

Figure-(4.13)

First let the side AB be fixed and let the membrane be pulled to the right with
a force of T pounds per unit of width, or Tv for the whole side. The force Tv will stretch
the membrane an amount Δu and do Tv·Δu units of work in doing so.
Now let the side BC be fixed and let the membrane be pulled in the direction of
the side AB by a force of T pounds per unit length of the border, or T(u + Δu) for the
whole side. The force T(u + Δu) will stretch the membrane by an amount Δv in that
direction and do T(u + Δu)·Δv units of work in doing so. Hence the total work done is

Tv·Δu + T(u + Δu)·Δv = T(v·Δu + u·Δv + Δu·Δv)

= T(area of the shaded border)
= T(increase in area of the membrane)
= T·ΔS

Study on Different Numerical Methods for Solving Differential Equations Page | 86


Chapter-4: Solution of Partial Differential Equations

Now the potential energy in the deflected position is equal to the work done in
producing the deflection. Since the deflection is small, the tension 𝑇 remains practically
constant. Hence the potential energy of the membrane in a deflected position is

PE = (T/2) ∫₀ᵃ∫₀ᵇ [(∂z/∂x)² + (∂z/∂y)²] dy dx

Because of the elasticity of the membrane, the deflection at any point is


proportional to the force applied, and the motion is thus simple harmonic. Hence the
deflection is a periodic function of the time, or 𝑧 = 𝑍(𝑥, 𝑦)𝑠𝑖𝑛𝜔𝑡. On substituting this
value of 𝑧 in the above expression for the potential energy, we get

PE = ((T/2) ∫₀ᵃ∫₀ᵇ [(∂Z/∂x)² + (∂Z/∂y)²] dy dx) sin²ωt

Its maximum value is attained when sin²ωt = 1, so that

(PE)max = (T/2) ∫₀ᵃ∫₀ᵇ [(∂Z/∂x)² + (∂Z/∂y)²] dy dx

For the elementary mass 𝑑𝑚 = 𝜌𝑑𝑦𝑑𝑥, the kinetic energy of the membrane is

(1/2) ρ dy dx (dz/dt)² = (1/2) ρ dy dx Z²ω² cos²ωt

Here 𝜌 denotes the mass of the unit area of the membrane. The kinetic energy
of the entire vibrating membrane is therefore

KE = (1/2) ω²ρ cos²ωt ∫₀ᵃ∫₀ᵇ Z² dy dx

Its maximum value is attained when cos²ωt = 1, so that

(KE)max = (1/2) ω²ρ ∫₀ᵃ∫₀ᵇ Z² dy dx

Since there is assumed to be no loss of energy due to vibration, the maximum


potential energy is equal to the maximum kinetic energy and thus we have

(1/2) ω²ρ ∫₀ᵃ∫₀ᵇ Z² dy dx = (T/2) ∫₀ᵃ∫₀ᵇ [(∂Z/∂x)² + (∂Z/∂y)²] dy dx

or, ω² = T ∫₀ᵃ∫₀ᵇ [(∂Z/∂x)² + (∂Z/∂y)²] dy dx / (ρ ∫₀ᵃ∫₀ᵇ Z² dy dx)   (4.11.1)

We must now assume for Z a linear combination of simple, easily calculated
functions that satisfies the boundary conditions of the problem (Z must vanish on all
four edges). Such a function is

Z = xy(a − x)(b − y)(a₁ + a₂x + a₃y + a₄x² + a₅y² + a₆xy + ⋯)   (4.11.2)

Study on Different Numerical Methods for Solving Differential Equations Page | 87


Chapter-4: Solution of Partial Differential Equations

In order to make the convergence as rapid as possible, however, we move the
origin to the center of the rectangle. Then, because of the symmetry, on taking
p = a/2 and q = b/2 we may write

Z = (p² − x²)(q² − y²)(a₁ + a₂x² + a₃y² + a₄x²y² + ⋯)   (4.11.3)

Assuming that Z in (4.11.1) has been replaced by (4.11.2) or (4.11.3) above,
we must determine the aᵢ so as to make ω² a minimum. Hence the derivative of the
right member of (4.11.1) with respect to each aᵢ must be zero. Then, by the rule
for differentiating a quotient, we have

∫₀ᵃ∫₀ᵇ Z² dy dx · ∂/∂aᵢ (∫₀ᵃ∫₀ᵇ [(∂Z/∂x)² + (∂Z/∂y)²] dy dx)

− ∫₀ᵃ∫₀ᵇ [(∂Z/∂x)² + (∂Z/∂y)²] dy dx · ∂/∂aᵢ (∫₀ᵃ∫₀ᵇ Z² dy dx) = 0   (4.11.4)

From (4.11.1) we get

∫₀ᵃ∫₀ᵇ [(∂Z/∂x)² + (∂Z/∂y)²] dy dx = (ω²ρ/T) ∫₀ᵃ∫₀ᵇ Z² dy dx   (4.11.5)

Substituting (4.11.5) in (4.11.4), we get

∫₀ᵃ∫₀ᵇ Z² dy dx · ∂/∂aᵢ (∫₀ᵃ∫₀ᵇ [(∂Z/∂x)² + (∂Z/∂y)²] dy dx)

− (ω²ρ/T) ∫₀ᵃ∫₀ᵇ Z² dy dx · ∂/∂aᵢ (∫₀ᵃ∫₀ᵇ Z² dy dx) = 0   (4.11.6)

Now, dividing out the common factor ∫₀ᵃ∫₀ᵇ Z² dy dx and putting ω²ρ/T = k, we get,
for i = 1, 2, 3, …, n,

∂/∂aᵢ (∫₀ᵃ∫₀ᵇ [(∂Z/∂x)² + (∂Z/∂y)²] dy dx) − k ∂/∂aᵢ (∫₀ᵃ∫₀ᵇ Z² dy dx) = 0

or, ∂/∂aᵢ {∫₀ᵃ∫₀ᵇ [(∂Z/∂x)² + (∂Z/∂y)²] dy dx − k ∫₀ᵃ∫₀ᵇ Z² dy dx} = 0   (4.11.7)

The formula (4.11.7) gives n homogeneous equations for determining the n
values of k. If the form (4.11.3) is used for Z, the limits of integration in (4.11.7)
become −p ≤ x ≤ p and −q ≤ y ≤ q.

To get a first approximation to the vibration frequency of the membrane, we take
only the first term of the parenthetic polynomial in (4.11.3); then we get

Z = a₁(p² − x²)(q² − y²)

∂Z/∂x = −2a₁x(q² − y²)

Study on Different Numerical Methods for Solving Differential Equations Page | 88


Chapter-4: Solution of Partial Differential Equations

∂Z/∂y = −2a₁y(p² − x²)

Hence ∫₀ᵃ∫₀ᵇ [(∂Z/∂x)² + (∂Z/∂y)²] dy dx
= 4a₁² ∫∫ [x²(q² − y²)² + y²(p² − x²)²] dy dx    (over −p ≤ x ≤ p, −q ≤ y ≤ q)
= (128/45) a₁² p³q³(p² + q²)

And ∫₀ᵃ∫₀ᵇ Z² dy dx
= ∫∫ a₁²(q² − y²)²(p² − x²)² dy dx
= (256/225) a₁² p⁵q⁵

On substituting these in (4.11.7) and putting k = ω²ρ/T, we get

∂/∂a₁ {(128/45) a₁² p³q³(p² + q²) − k (256/225) a₁² p⁵q⁵} = 0
or, (p² + q²) − (2/5) k p²q² = 0
or, k = 5(p² + q²)/(2p²q²)
or, k = (5/2)(1/p² + 1/q²)
or, k = 10(1/a² + 1/b²)    [since p = a/2 and q = b/2]
or, ω²ρ/T = 10(1/a² + 1/b²)
or, ω² = (10T/ρ)(1/a² + 1/b²)

or, ω = √((10T/ρ)(1/a² + 1/b²))

The frequency is therefore

f = ω/2π = (1/2π) √((10T/ρ)(1/a² + 1/b²))

This is a natural vibration frequency of the membrane. Since √10/(2π) ≈ 1/2, it
may be compared with the vibration frequencies found by the classical method of
separating the variables, which are given by the formula

f_{m,n} = (1/2) √((T/ρ)(m²/a² + n²/b²))

Study on Different Numerical Methods for Solving Differential Equations Page | 89


Chapter-4: Solution of Partial Differential Equations

For m = 1 and n = 1, this formula becomes

f = (1/2) √((T/ρ)(1/a² + 1/b²))

The above formula is very nearly the frequency found for the membrane by the
analytical method. Thus we can conclude that the Rayleigh-Ritz method gives a close
approximation to the exact value.
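As a numerical check, the first Rayleigh-Ritz approximation above can be compared directly with the classical fundamental frequency; the two differ only by the constant factor √10/π ≈ 1.0066, independent of the membrane data. The sketch below uses illustrative values of T, ρ, a, b (assumed, not taken from the thesis).

```python
import math

def rayleigh_ritz_frequency(T, rho, a, b):
    # First Rayleigh-Ritz approximation: f = (1/2*pi) * sqrt((10T/rho)(1/a^2 + 1/b^2))
    return (1.0 / (2.0 * math.pi)) * math.sqrt((10.0 * T / rho) * (1.0 / a**2 + 1.0 / b**2))

def classical_frequency(T, rho, a, b, m=1, n=1):
    # Separation of variables: f_mn = (1/2) * sqrt((T/rho)(m^2/a^2 + n^2/b^2))
    return 0.5 * math.sqrt((T / rho) * (m**2 / a**2 + n**2 / b**2))

# Illustrative membrane data (assumed)
T, rho, a, b = 50.0, 0.3, 1.2, 0.8
f_rr = rayleigh_ritz_frequency(T, rho, a, b)
f_cl = classical_frequency(T, rho, a, b)
print(f_rr, f_cl, f_rr / f_cl)  # ratio = sqrt(10)/pi, about 1.0066
```

The ratio confirms that the one-term Ritz approximation overestimates the fundamental frequency by less than one percent.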

4.12 COMPARATIVE DISCUSSION OF THE RAYLEIGH-RITZ METHOD WITH THE

ITERATION METHOD AND THE RELAXATION METHOD

Three numerical methods [11] for solving partial differential equations with
certain conditions in two dimensions have been considered in the current chapter. Each
method has its advantages and disadvantages.

The iteration method is slow, self-correcting and well adapted to use with an
automatic sequence-controlled calculating machine. The arithmetical operations are
short and simple.

The relaxation method is faster and more flexible than the iteration method. The
arithmetical operations are simple, but mistakes are easy to make and are not self-
correcting. The method requires constant vigilance and alertness on the part of the
computer, and it is not adapted to use by an automatic calculating machine.

The Rayleigh-Ritz method is of considerable value in handling problems of
equilibrium and elastic vibrations. It does not require a partial differential equation to
start with, but it requires that a physical problem be reduced to the definite integral
of the sum, difference or quotient of two or more homogeneous, positive definite
quadratic forms. The method furnishes a short and easy way of finding a good
approximation to the natural vibration period of an elastic body, the deflection of a
membrane, etc. The chief disadvantage of this method is the laborious algebra involved
in getting results of high accuracy.

It is an easy matter to estimate the accuracy of results obtained by the iteration


method and relaxation method, but this is not the case with the Rayleigh-Ritz method.
No simple and useful formula for estimating the inherent error involved in this method
has yet been devised.

Finally, it must be realized that not all three methods may be applicable to a
given problem. To use the iteration method and the relaxation method, a physical
problem must first be set up as a partial differential equation, and this must then be
converted to partial-difference equations. The Rayleigh-Ritz method will give an
approximate solution of a problem without setting up a partial differential equation, as
was done in the case of the vibrating membrane. In problems where all three methods
are applicable, the Rayleigh-Ritz method would probably be the third choice.



CHAPTER-5

SOLUTION OF THE BOUNDARY VALUE


PROBLEMS WITH APPLICATIONS

5.1 INTRODUCTION

In the previous chapters, we have discussed some well-known methods for


solving differential equations satisfying certain initial conditions, which are called
initial value problems. In such problems, initial conditions are given at a single point.
In this chapter, we will discuss the problems in which the conditions are satisfied at
more than one point, which are known as boundary value problems. We will discuss
some methods for the solution of boundary value problems.

The simple examples of two-point linear boundary value problems [23] are

y″(x) + f(x)y′(x) + g(x)y(x) = r(x)   (5.1.1)

with the boundary conditions

y(x₀) = a = y′(x₀) ; y(x_N) = b = y′(x_N)   (5.1.2)

and   y′(x) + p(x)y(x) = q(x)   (5.1.3)

with the boundary conditions

y(x₀) = a ; y(x_N) = b   (5.1.4)

There exist many numerical methods for solving such boundary value problems.
Among them, we will discuss only the finite-difference method and the shooting
method. Also, we will discuss the applications of Green’s function and Laplace’s
equation for solving boundary value problems.

To keep this chapter compact, we will omit the estimation of the truncation
errors of the methods mentioned.

5.2 FINITE-DIFFERENCE METHOD

Let us consider a linear differential equation of order greater than one, with
conditions specified at the endpoints of an interval [a, b]. We divide the interval [a, b]
into N equal parts of width h [20]. We set x₀ = a and x_N = b, the mesh points being
xₙ = x₀ + nh; n = 0, 1, 2, …, N, of which x₁, x₂, …, x_{N−1} are the interior mesh
points. The corresponding values are denoted by yₙ = y(xₙ) = y(x₀ + nh).

We shall sometimes have to deal with points outside the interval [𝑎, 𝑏]. These
will be called the exterior mesh points, those to the left of the 𝑥0 being denoted by
𝑥−1 = 𝑥0 − ℎ, 𝑥−2 = 𝑥0 − 2ℎ, 𝑥−3 = 𝑥0 − 3ℎ and so on, and those to right of the 𝑥𝑁
being denoted by 𝑥𝑁+1 = 𝑥𝑁 + ℎ, 𝑥𝑁+2 = 𝑥𝑁 + 2ℎ, 𝑥𝑁+3 = 𝑥𝑁 + 3ℎ and so on. The
corresponding values of 𝑦 at the exterior mesh points are denoted in the obvious way
as 𝑦−1 , 𝑦−2 , 𝑦−3 , … … … … … & 𝑦𝑁+1 , 𝑦𝑁+2 , 𝑦𝑁+3 , … … … … … respectively.

The finite-difference method for the solution of the boundary value problem
consists in replacing the derivatives occurring in the differential equation and in the
boundary conditions as well as by means of their finite-difference approximations and
then solving the resulting linear system of equations by a standard procedure [23].
In order to obtain the appropriate finite-difference approximation to the
derivatives, we proceed as follows.

Expanding y(x + h) in a Taylor series, we get

y(x + h) = y(x) + h y′(x) + (h²/2) y″(x) + (h³/6) y‴(x) + ⋯   (5.2.1)

From this we can write the forward difference approximation for y′(x) as

y′(x) = (y(x + h) − y(x))/h − ((h/2) y″(x) + (h²/6) y‴(x) + ⋯)

or, y′(x) = (y(x + h) − y(x))/h + O(h)   (5.2.2)

Now, expanding y(x − h) in a Taylor series, we get

y(x − h) = y(x) − h y′(x) + (h²/2) y″(x) − (h³/6) y‴(x) + ⋯   (5.2.3)

From this we can write the backward difference approximation for y′(x) as

y′(x) = (y(x) − y(x − h))/h + ((h/2) y″(x) − (h²/6) y‴(x) + ⋯)

or, y′(x) = (y(x) − y(x − h))/h + O(h)   (5.2.4)

A central difference approximation for y′(x) can be obtained by subtracting
(5.2.3) from (5.2.1); then we get the central difference approximation for yₙ′ as

y′(x) = (y(x + h) − y(x − h))/(2h) + O(h²)

or, y′(x) ≈ (y(x + h) − y(x − h))/(2h)

or, y′(xₙ) ≈ (y(xₙ + h) − y(xₙ − h))/(2h)

i.e., yₙ′ ≈ (yₙ₊₁ − yₙ₋₁)/(2h)   (5.2.5)


Again, by adding (5.2.1) and (5.2.3), we get the central difference approximation
for yₙ″ as

y″(x) = (y(x + h) − 2y(x) + y(x − h))/h² + O(h²)

or, y″(x) ≈ (y(x + h) − 2y(x) + y(x − h))/h²

or, y″(xₙ) ≈ (y(xₙ + h) − 2y(xₙ) + y(xₙ − h))/h²

i.e., yₙ″ ≈ (yₙ₊₁ − 2yₙ + yₙ₋₁)/h²   (5.2.6)

Similarly, the central difference approximations for yₙ‴ and yₙ⁽ⁱᵛ⁾ are given as
follows:

yₙ‴ ≈ (yₙ₊₂ − 2yₙ₊₁ + 2yₙ₋₁ − yₙ₋₂)/(2h³)   (5.2.7)

yₙ⁽ⁱᵛ⁾ ≈ (yₙ₊₂ − 4yₙ₊₁ + 6yₙ − 4yₙ₋₁ + yₙ₋₂)/h⁴   (5.2.8)
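These approximations are easy to check numerically; the sketch below applies the stencils (5.2.5)-(5.2.8) to the test function y = sin x, whose derivatives are known exactly.

```python
import math

h, x = 0.05, 1.0
y = lambda t: math.sin(t)

# Central difference approximations (5.2.5)-(5.2.8)
d1 = (y(x + h) - y(x - h)) / (2 * h)
d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
d3 = (y(x + 2*h) - 2*y(x + h) + 2*y(x - h) - y(x - 2*h)) / (2 * h**3)
d4 = (y(x + 2*h) - 4*y(x + h) + 6*y(x) - 4*y(x - h) + y(x - 2*h)) / h**4

# Exact derivatives of sin x at x = 1: cos, -sin, -cos, sin
print(d1 - math.cos(x))
print(d2 + math.sin(x))
print(d3 + math.cos(x))
print(d4 - math.sin(x))
```

All four differences agree with the exact derivatives to O(h²), i.e. to roughly three or four decimal places for h = 0.05.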

In a similar manner, it is possible to derive finite-difference approximations to


higher-order derivatives. In order to explain the procedure, we consider the boundary
value problem defined by (5.1.1) and (5.1.2).

To solve the problem by the finite-difference method, subdivide the range
[x₀, x_N] into N equal sub-intervals of width h, so that xₙ = x₀ + nh. Then
yₙ = y(xₙ) = y(x₀ + nh) are the corresponding values of y at these points.

Now, taking the values of yₙ′ and yₙ″ from (5.2.5) and (5.2.6) respectively and
substituting them in (5.1.1), we get at the point x = xₙ

(yₙ₊₁ − 2yₙ + yₙ₋₁)/h² + f(xₙ)(yₙ₊₁ − yₙ₋₁)/(2h) + g(xₙ)yₙ = r(xₙ)

or, yₙ₊₁ − 2yₙ + yₙ₋₁ + (h/2) f(xₙ)(yₙ₊₁ − yₙ₋₁) + h² g(xₙ)yₙ = h² r(xₙ)

or, (1 − (h/2)fₙ) yₙ₋₁ + (−2 + h²gₙ) yₙ + (1 + (h/2)fₙ) yₙ₊₁ = h²rₙ   (5.2.9)

Since y₀ and y_N are specified by the conditions (5.1.2), (5.2.9) is a general
representation of a linear system of (N − 1) equations in the (N − 1) unknowns
yₙ; n = 1, 2, …, N − 1. Writing out (5.2.9) and taking y₀ = a and y_N = b, the
system takes the form
system takes the form

(1 − (h/2)f₁) a + (−2 + h²g₁) y₁ + (1 + (h/2)f₁) y₂ = h²r₁
(1 − (h/2)f₂) y₁ + (−2 + h²g₂) y₂ + (1 + (h/2)f₂) y₃ = h²r₂
(1 − (h/2)f₃) y₂ + (−2 + h²g₃) y₃ + (1 + (h/2)f₃) y₄ = h²r₃


…………………………………………………………………
(1 − (h/2)f_{N−2}) y_{N−3} + (−2 + h²g_{N−2}) y_{N−2} + (1 + (h/2)f_{N−2}) y_{N−1} = h²r_{N−2}
(1 − (h/2)f_{N−1}) y_{N−2} + (−2 + h²g_{N−1}) y_{N−1} + (1 + (h/2)f_{N−1}) b = h²r_{N−1}

The coefficients in the above system of linear equations can, of course, be
computed, since f(x), g(x) and r(x) are known functions of x. We can write the above
system in matrix form as follows

𝐴𝑦 = 𝑏 (5.2.10)

Here y = (y₁, y₂, y₃, …, y_{N−2}, y_{N−1})ᵀ represents the vector of unknown
quantities, and b represents the vector of known quantities on the right-hand side of
(5.2.10). Also, A is the matrix of coefficients, in this case tri-diagonal of order
(N − 1). The matrix A has the special form

A = [ d₁   c₁
      a₂   d₂   c₂
           a₃   d₃   c₃
                ⋱    ⋱    ⋱
                a_{N−2}  d_{N−2}  c_{N−2}
                         a_{N−1}  d_{N−1} ]

The solution of the system Ay = b constitutes an approximate solution of the
boundary value problem defined by (5.1.1) and (5.1.2).
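Because A is tri-diagonal, the system can be solved in O(N) operations by forward elimination and back substitution (the Thomas algorithm, a standard technique not described in the thesis) instead of general Gaussian elimination. A minimal sketch, tested on the discretization of y″ = 2, y(0) = 0, y(1) = 1, whose exact solution y = x² the scheme reproduces exactly:

```python
def thomas(a, d, c, b):
    """Solve a tri-diagonal system: a = sub-diagonal (a[0] unused),
    d = diagonal, c = super-diagonal (c[-1] unused), b = right-hand side."""
    n = len(d)
    cp, bp = [0.0] * n, [0.0] * n
    cp[0], bp[0] = c[0] / d[0], b[0] / d[0]
    for i in range(1, n):
        m = d[i] - a[i] * cp[i - 1]       # eliminate the sub-diagonal entry
        cp[i] = c[i] / m if i < n - 1 else 0.0
        bp[i] = (b[i] - a[i] * bp[i - 1]) / m
    x = [0.0] * n
    x[-1] = bp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = bp[i] - cp[i] * x[i + 1]
    return x

# y'' = 2 with y(0) = 0, y(1) = 1, h = 1/4: interior unknowns y1, y2, y3
h = 0.25
rhs = [2 * h**2, 2 * h**2, 2 * h**2 - 1.0]   # boundary values moved to the right side
y = thomas([0.0, 1.0, 1.0], [-2.0, -2.0, -2.0], [1.0, 1.0, 0.0], rhs)
print(y)  # exact y = x^2 gives [0.0625, 0.25, 0.5625]
```

Note that here the known boundary values have been moved to the right-hand side, which is the usual way of writing the system before solving it.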

5.3 APPLICATION OF THE FINITE-DIFFERENCE METHOD

The deflection of a beam is governed by the equation d⁴y/dx⁴ + 81y = φ(x) with
the boundary conditions y(0) = y′(0) = y″(1) = y‴(1) = 0. Here φ(x) is given by

    x    : 1/3   2/3    1
    φ(x) : 81    162   243

Evaluate the deflection of the pivot points of the beam using three sub-intervals
by the finite-difference approximation method.
Solution: Here h = 1/3 and the pivot points are x₀ = 0, x₁ = 1/3, x₂ = 2/3 and x₃ = 1.
The corresponding value of y at x₀ is y₀ = y(x₀) = 0, and y₁, y₂, y₃ are to be
determined. Using (5.2.8) in the given boundary value problem at x = xₙ, we get

(yₙ₊₂ − 4yₙ₊₁ + 6yₙ − 4yₙ₋₁ + yₙ₋₂)/h⁴ + 81yₙ = φ(xₙ)


or, 81(yₙ₊₂ − 4yₙ₊₁ + 6yₙ − 4yₙ₋₁ + yₙ₋₂) + 81yₙ = φ(xₙ)    [since 1/h⁴ = 81]

or, yₙ₊₂ − 4yₙ₊₁ + 7yₙ − 4yₙ₋₁ + yₙ₋₂ = (1/81) φ(xₙ)   (5.3.1)

Now, putting n = 1, 2, 3 successively in (5.3.1) and using the values
φ(x₁) = 81, φ(x₂) = 162, φ(x₃) = 243, after simplification we get

𝑦3 − 4𝑦2 + 7𝑦1 − 4𝑦0 + 𝑦−1 = 1


𝑦4 − 4𝑦3 + 7𝑦2 − 4𝑦1 + 𝑦0 = 2
𝑦5 − 4𝑦4 + 7𝑦3 − 4𝑦2 + 𝑦1 = 3 (5.3.2)

Again, applying the given boundary condition in (5.2.5) for n = 0, we get

y₀′ = (y₁ − y₋₁)/(2h) = 0

or, y₁ − y₋₁ = 0
or, y₁ = y₋₁   (5.3.3)

Again, applying the given boundary condition in (5.2.6) for n = 3, we get

y₃″ = (y₄ − 2y₃ + y₂)/h² = 0

or, y₄ − 2y₃ + y₂ = 0
or, y₄ = 2y₃ − y₂   (5.3.4)

Finally, applying the given boundary condition in (5.2.7) for n = 3, we get

y₃‴ = (y₅ − 2y₄ + 2y₂ − y₁)/(2h³) = 0

or, y₅ − 2y₄ + 2y₂ − y₁ = 0

or, y₅ = 2y₄ − 2y₂ + y₁   (5.3.5)

Using (5.3.3), (5.3.4) and (5.3.5) in (5.3.2), we get

y₃ − 4y₂ + 8y₁ = 1
−y₃ + 3y₂ − 2y₁ = 1
3y₃ − 4y₂ + 2y₁ = 3

Then, solving the above system of linear equations by the Gauss-Seidel
iteration method, we get

y₁ = 8/13, y₂ = 22/13, y₃ = 37/13

Hence the required solution (correct to four decimal places) is


y(1/3) = y₁ = 0.6154,  y(2/3) = y₂ = 1.6923,  y(1) = y₃ = 2.8462
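The small reduced system can be checked directly; the sketch below solves it with exact rational arithmetic (the equations are written with the unknowns ordered y₁, y₂, y₃) and recovers the fractions quoted above.

```python
from fractions import Fraction as F

# The reduced system in the unknowns (y1, y2, y3):
#  8*y1 - 4*y2 + 1*y3 = 1
# -2*y1 + 3*y2 - 1*y3 = 1
#  2*y1 - 4*y2 + 3*y3 = 3
A = [[F(8), F(-4), F(1)],
     [F(-2), F(3), F(-1)],
     [F(2), F(-4), F(3)]]
b = [F(1), F(1), F(3)]

# Gaussian elimination with exact fractions
n = 3
for k in range(n):
    for i in range(k + 1, n):
        m = A[i][k] / A[k][k]
        for j in range(k, n):
            A[i][j] -= m * A[k][j]
        b[i] -= m * b[k]
y = [F(0)] * n
for i in range(n - 1, -1, -1):
    y[i] = (b[i] - sum(A[i][j] * y[j] for j in range(i + 1, n))) / A[i][i]

print(y)  # y1 = 8/13, y2 = 22/13, y3 = 37/13
```

Converting to decimals gives 0.6154, 1.6923 and 2.8462, in agreement with the deflections stated above.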

5.4 SHOOTING METHOD

The shooting method requires good initial guesses for the slope and can be
applied to both linear and non-linear problems [23]. The main advantage of this method
is its easy applicability. We discuss this method with reference to the second-order
boundary value problem defined by

𝑦 ′′ (𝑥) = 𝑓(𝑥) ; 𝑦(0) = 0, 𝑦(1) = 1 (5.4.1)

The main steps involved in this method are

1. Transformation of the boundary value problem into an initial value problem.


2. The solution of the initial value problem by any standard method as
highlighted in previous chapters.

3. Finally, the solution to the given boundary value problem.

To apply any initial value method to (5.4.1), we must know the values of y(0)
and y′(0). Since y′(0) is not given, we consider it as an unknown parameter m (say),
which must be determined so that the resulting solution yields the given value y(1) to
some desired level of accuracy. We thus guess at the initial slope and set up an iterative
procedure for converging to the correct slope. Let m₀ and m₁ be two guesses at the
initial slope y′(0), and let y(m₀, 1) and y(m₁, 1) be the values of y at x = 1 obtained
by integrating the differential equation. Graphically, the solution may be represented
as in figure-(5.1) and figure-(5.2).

Figure-(5.1) Figure-(5.2)

In figure-(5.1) the solutions of the initial value problems are drawn, while in
figure-(5.2), y(m, 1) is plotted as a function of m. A generally better approximation of
m can be obtained by linear interpolation. The intersection of the line joining p₀ to p₁
with the line y(1) = 1 has its m co-ordinate given by
(m₂ − m₀)/(y(1) − y(m₀, 1)) = (m₁ − m₀)/(y(m₁, 1) − y(m₀, 1))

or, m₂ = m₀ + (m₁ − m₀)[(y(1) − y(m₀, 1))/(y(m₁, 1) − y(m₀, 1))]   (5.4.2)


We now solve the initial value problem

y″(x) = f(x) ; y(0) = 0, y′(0) = m₂   (5.4.3)

Then we obtain y(m₂, 1). Again we use linear interpolation with (m₁, y(m₁, 1))
and (m₂, y(m₂, 1)) to obtain a better approximation m = m₃, and so on. This process
is repeated until convergence is obtained, i.e. until the value of y(mᵢ, 1) agrees with
y(1) to the desired level of accuracy. The speed of convergence depends upon how
good the initial guesses were.
This method becomes tedious for higher-order boundary value problems,
and in the case of non-linear problems, linear interpolation may yield unsatisfactory
results.

5.5 APPLICATION OF SHOOTING METHOD

Apply the shooting method with taking 𝑚0 = 0.8 & 𝑚1 = 0.9 to solve the
boundary value problem

𝑦 ′′ (𝑥) = 𝑦(𝑥) ; 𝑦(0) = 0, 𝑦(1) = 1.1752 (5.5.1)

Solution: Applying Taylor's series method, we obtain

y(x) = y′(0)[x + x³/6 + x⁵/120 + x⁷/5040 + x⁹/362880 + ⋯]

or, y(1) = y′(0)[1 + 1/6 + 1/120 + 1/5040 + 1/362880 + ⋯]

= 1.1752 y′(0)   (5.5.2)

Now, for m₀ = 0.8 and m₁ = 0.9, (5.5.2) gives

y(m₀, 1) = (1.1752)(0.8) = 0.9402

y(m₁, 1) = (1.1752)(0.9) = 1.0577

Then, using (5.4.2), we get

m₂ = 0.8 + (0.9 − 0.8)[(1.1752 − 0.9402)/(1.0577 − 0.9402)] = 1

This shows that y′(0) = m₂ = 1; we now solve the initial value problem

y″(x) = y(x) ; y(0) = 0, y′(0) = 1   (5.5.3)

Since m₂ = 1, Taylor's series method gives y(m₂, 1) = 1.1752, which is the
same as y(1). Thus, for this problem, the shooting method converges to the exact solution.

The initial value problem (5.5.3) can now be solved by any other standard method
mentioned in previous chapters.
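The same computation can be carried out with a general-purpose integrator in place of the Taylor series. The sketch below integrates y″ = y with a classical fourth-order Runge-Kutta scheme for each trial slope and then applies the interpolation formula (5.4.2); the helper name `integrate` is illustrative, not from the thesis.

```python
def integrate(m, n=200):
    """RK4 integration of y'' = y on [0, 1] with y(0) = 0, y'(0) = m; returns y(1)."""
    h = 1.0 / n
    y, v = 0.0, m                      # v = y'
    f = lambda y, v: (v, y)            # (y', y'') for the first-order system
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + h/2*k1[0], v + h/2*k1[1])
        k3 = f(y + h/2*k2[0], v + h/2*k2[1])
        k4 = f(y + h*k3[0], v + h*k3[1])
        y += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y

target = 1.1752                         # required value y(1)
m0, m1 = 0.8, 0.9
y0, y1 = integrate(m0), integrate(m1)
# Linear interpolation (5.4.2) for the corrected slope
m2 = m0 + (m1 - m0) * (target - y0) / (y1 - y0)
print(m2, integrate(m2))
```

Because the problem is linear in y, y(m, 1) = m·sinh(1), so the interpolated slope m₂ comes out as 1 (to the accuracy of the data) in a single step, exactly as in the hand computation.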


5.6 GREEN’S FUNCTION TO SOLVE BOUNDARY VALUE PROBLEMS

Boundary value problems are an almost inevitable consequence of using


mathematics to study problems arising in the real world and it is not at all surprising
that their solution has been a concern to many mathematicians. In this section, we will
examine in detail a particular method which requires the construction of an auxiliary
function known as Green’s function. To show how such functions arise and to initiate
a further study of the method, we will first solve, by fairly elementary methods, a typical
one-dimensional boundary value problem [11,14].

Consider the problem of forced transverse vibrations of a taut string of length L.
If the time-dependent parts of the solution are first removed by the usual variable
separation technique, we obtain the following differential equation, with the
transverse displacement u of the string as the unknown

d²u/dx² + k²u = −f(x) ; 0 < x < L   (5.6.1)

If the ends of the string are kept fixed, then this equation must be solved for 𝑢
subject to the boundary conditions

𝑢(0) = 0 = 𝑢(𝐿) (5.6.2)

To solve the boundary value problem posed by the ordinary second-order


differential equation (5.6.1) and associated boundary conditions (5.6.2), we will employ
the method of variation of parameters. i.e. we will assume that a solution to the problem
actually exists and that, furthermore, it has the precise form

𝑢(𝑥 ) = 𝐴(𝑥) 𝑐𝑜𝑠𝑘𝑥 + 𝐵(𝑥) 𝑠𝑖𝑛𝑘𝑥 (5.6.3)

If we differentiate (5.6.3) twice with respect to x and, in passing, assume that

A′(x) cos kx + B′(x) sin kx = 0   (5.6.4)

Then, we can find that (5.6.3) constitutes a solution provided that

−𝑘𝐴′ (𝑥) 𝑠𝑖𝑛𝑘𝑥 + 𝑘𝐵 ′ (𝑥) 𝑐𝑜𝑠𝑘𝑥 = −𝑓(𝑥) (5.6.5)

Although assumption (5.6.4) was introduced primarily to ease the ensuing
algebra, equations (5.6.4) and (5.6.5) are two linear algebraic equations in the
unknowns A′(x) and B′(x). Solving these equations, we readily find that

A′(x) = (1/k) f(x) sin kx   and   B′(x) = −(1/k) f(x) cos kx   (5.6.6)

Thus, formally, we can write the solution of (5.6.1) in the form

u(x) = (cos kx/k) ∫_{c₁}^{x} f(y) sin ky dy − (sin kx/k) ∫_{c₂}^{x} f(y) cos ky dy   (5.6.7)


Here, c₁ and c₂ are constants which must be so chosen as to ensure that the
boundary conditions (5.6.2) are satisfied. Inserting the condition u(0) = 0 into (5.6.7),
we find that we must choose c₁ such that

∫_{c₁}^{0} f(y) sin ky dy = 0   (5.6.8)

Since 𝑓(𝑦) is assumed arbitrary, this implies that we must choose 𝑐1 = 0.

The condition u(L) = 0, when inserted into (5.6.7), requires that

u(L) = (cos kL/k) ∫₀ᴸ f(y) sin ky dy − (sin kL/k) ∫_{c₂}^{L} f(y) cos ky dy = 0   (5.6.9)

After slight manipulation, we can re-write (5.6.9) in the form

−(sin kL/k) ∫_{c₂}^{0} f(y) cos ky dy + (1/k) ∫₀ᴸ f(y) sin k(y − L) dy = 0   (5.6.10)

Combining the results (5.6.8) and (5.6.10), we see that the solution (5.6.7) can
now be written in the form

u(x) = (1/k) ∫₀ˣ f(y) sin k(y − x) dy − (sin kx/(k sin kL)) ∫₀ᴸ f(y) sin k(y − L) dy   (5.6.11)

= ∫₀ˣ f(y) [sin ky sin k(L − x)/(k sin kL)] dy + ∫ₓᴸ f(y) [sin kx sin k(L − y)/(k sin kL)] dy

= ∫₀ᴸ f(y) G(x, y) dy   (5.6.12)

Here G(x, y) = sin ky sin k(L − x)/(k sin kL) ; 0 ≤ y ≤ x
     G(x, y) = sin kx sin k(L − y)/(k sin kL) ; x ≤ y ≤ L   (5.6.13)

This function G(x, y) is a two-point function of position, known as the
Green's function for the equation (5.6.1) and boundary conditions (5.6.2).

5.7 APPLICATION OF GREEN’S FUNCTION

Apply Green’s function to solve the differential equation 𝑦 ′′ − 𝑎2 𝑦 = 0 with


the boundary conditions 𝑦(0) = 0 = 𝑦(1).

Solution: By the usual elementary method of solving such equations, we have

𝑟 2 = 𝑎2
𝑜𝑟, 𝑟 = ±𝑎

Hence the general solution is 𝑦 = 𝑐1 𝑒 𝑎𝑥 + 𝑐2 𝑒 −𝑎𝑥


𝑜𝑟, 𝑦 = 𝐴 𝑐𝑜𝑠ℎ(𝑎𝑥) + 𝐵 𝑠𝑖𝑛ℎ(𝑎𝑥) (5.7.1)


Substituting the boundary values y = 0 at x = 0 and x = 1 respectively in
(5.7.1), we get

0 = A   and   0 = A cosh(a) + B sinh(a)

so that B sinh(a) = 0
or, B = 0 ; a ≠ 0

Hence (5.7.1) reduces to the trivial solution y = 0.

To get a worthwhile solution of the problem, we assume a function of the form
(5.7.2) below on each part of the interval from x = 0 to x = 1. However, since
cosh θ ≠ 0, it is plain that the assumed functions need not contain cosh(ax). Hence we
take

y₁ = A sinh(ax)   and   y₂ = B sinh(a(1 − x))   (5.7.2)

Here we have utilized the boundary conditions in writing down these functions.

Figure-(5.3)

The graphs of these functions will evidently intersect at some point x = s, and
at that point the functions will be equal while their first derivatives will be unequal,
as shown in figure-(5.3). Hence, for x = s, we have

A sinh(as) = B sinh(a(1 − s))   (5.7.5)

dy₁/dx − dy₂/dx = Aa cosh(as) + Ba cosh(a(1 − s)) = 1   (5.7.6)

From (5.7.5), we have

A = B sinh(a(1 − s))/sinh(as)   (5.7.7)

Substituting this value of A in (5.7.6), we find

B = sinh(as)/(a sinh(a))   (5.7.8)

Hence from (5.7.7), we get

A = sinh(a(1 − s))/(a sinh(a))   (5.7.9)


Substituting (5.7.8) and (5.7.9) in (5.7.2), we get

y₁ = sinh(a(1 − s)) sinh(ax)/(a sinh(a))   and   y₂ = sinh(as) sinh(a(1 − x))/(a sinh(a))   (5.7.10)

These can be written as a single solution in the form y = G(s, x), where

G(s, x) = sinh(a(1 − s)) sinh(ax)/(a sinh(a)) ; 0 ≤ x ≤ s

G(s, x) = sinh(as) sinh(a(1 − x))/(a sinh(a)) ; s ≤ x ≤ 1   (5.7.11)

The function G(s, x) is called the Green's function for this problem. It is a
function of the two independent variables x and s in the interval [0, 1] and is evidently
symmetrical in these variables.

The Green's function given by (5.7.11) thus furnishes the solution of the boundary
value problem posed by the given differential equation and boundary conditions.
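The defining properties of this Green's function are easy to verify numerically. The sketch below (with a = 1) checks the symmetry G(s, x) = G(x, s), the zero boundary values, and the fact that u(x) = ∫₀¹ G(s, x) f(s) ds reproduces the exact solution of u″ − a²u = −f for the test load f ≡ 1 (the test load and Simpson quadrature are assumptions chosen for checking, not part of the thesis).

```python
import math

a = 1.0

def G(s, x):
    """Green's function (5.7.11) for y'' - a^2 y with y(0) = y(1) = 0."""
    if x <= s:
        return math.sinh(a * (1 - s)) * math.sinh(a * x) / (a * math.sinh(a))
    return math.sinh(a * s) * math.sinh(a * (1 - x)) / (a * math.sinh(a))

def simpson(f, lo, hi, n=200):
    # Composite Simpson's rule with an even number n of sub-intervals
    h = (hi - lo) / n
    total = f(lo) + f(hi)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(lo + i * h)
    return total * h / 3

# Symmetry and boundary values
print(G(0.3, 0.7), G(0.7, 0.3))          # equal
print(G(0.4, 0.0), G(0.4, 1.0))          # both zero

# u(x) = integral of G(s, x) over [0, 1] solves u'' - a^2 u = -1; its exact
# solution is u = (1 - (sinh(a(1-x)) + sinh(ax))/sinh(a)) / a^2.
x = 0.35
u = simpson(lambda s: G(s, x), 0.0, x) + simpson(lambda s: G(s, x), x, 1.0)
u_exact = (1 - (math.sinh(a * (1 - x)) + math.sinh(a * x)) / math.sinh(a)) / a**2
print(u, u_exact)
```

The quadrature is split at s = x because G has a corner there; each piece is then smooth and Simpson's rule converges rapidly.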

5.8 CUBIC B-SPLINE METHOD FOR SOLVING TWO-POINT BOUNDARY


VALUE PROBLEMS OF ORDER FOUR

Introduction: Two-point and multi-point boundary value problems for fourth-order


ordinary differential equations have attracted a lot of attention [25] recently. Many
authors have studied the beam equation under various boundary conditions and by
different approaches. Consider a smooth approximation to the problem of bending a
rectangular clamped beam of length l resting on an elastic foundation. The vertical
deflection w of the beam satisfies the system

d⁴w/dx⁴ + (k/D)w = D⁻¹q(x) ; w(0) = w(l) = w′(0) = w′(l) = 0   (5.8.1)

Here 𝐷 is the flexural rigidity of the beam, and 𝑘 is the spring constant of the
elastic foundation, and the load 𝑞(𝑥) acts vertically downwards per unit length of the
beam [17]. Mathematically, the system (5.8.1) belongs to a general class of boundary
problems of the form

d⁴y/dx⁴ + f(x)y(x) = g(x) ; a < x < b   (5.8.2)

y(a) = A₁, y(b) = A₂, y′(a) = B₁, y′(b) = B₂   (5.8.3)

Here f(x) and g(x) are continuous on [a, b] and Aᵢ, Bᵢ are finite real arbitrary
constants. The analytical solution of (5.8.2) for arbitrary choices of f(x) and g(x)
cannot, in general, be determined, so numerical methods are developed to overcome
this limitation. A simple condition that guarantees the uniqueness [19] of the solution
of the problem (5.8.2)-(5.8.3) has been formulated.


Among the many numerical methods enumerated above, spline methods have
been widely applied to the approximate solution of boundary value problems,
including fourth-order boundary value problems. In particular, the cubic B-spline has
been used to solve boundary value problems and systems of boundary value problems
[13,16], singular boundary value problems [15], and also second-order perturbation
problems.

Derivations for the cubic B-spline: The given range of the independent variable is [a, b].
On this range we choose the equidistant knots π: {a = x₀, x₁, …, xₙ = b},
i.e. xᵢ = x₀ + ih with h = (b − a)/n.

Let us define S₃(π) = {p(t) ∈ C²[a, b]} such that p(t) reduces to a cubic
polynomial on each sub-interval [xᵢ, xᵢ₊₁]. The basis function Bᵢ(x) on the different
intervals is defined as

Bᵢ(x) = (1/6h³) ×
  (x − xᵢ₋₂)³ ; xᵢ₋₂ ≤ x ≤ xᵢ₋₁
  h³ + 3h²(x − xᵢ₋₁) + 3h(x − xᵢ₋₁)² − 3(x − xᵢ₋₁)³ ; xᵢ₋₁ ≤ x ≤ xᵢ
  h³ + 3h²(xᵢ₊₁ − x) + 3h(xᵢ₊₁ − x)² − 3(xᵢ₊₁ − x)³ ; xᵢ ≤ x ≤ xᵢ₊₁
  (xᵢ₊₂ − x)³ ; xᵢ₊₁ ≤ x ≤ xᵢ₊₂
  0 ; otherwise   (5.8.4)

Let us introduce four additional knots x₋₂ < x₋₁ < x₀ and xₙ₊₂ > xₙ₊₁ > xₙ.
From the above expression, it is obvious that each Bᵢ(x) ∈ C²[a, b]. The values of
Bᵢ(x), Bᵢ′(x) and Bᵢ″(x) at the nodal points are given in the following table-1.

          Bᵢ(x)    Bᵢ′(x)     Bᵢ″(x)
  xᵢ₋₂    0        0          0
  xᵢ₋₁    1/6      1/(2h)     1/h²
  xᵢ      4/6      0          −2/h²
  xᵢ₊₁    1/6      −1/(2h)    1/h²
  xᵢ₊₂    0        0          0
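The entries of table-1 follow directly from (5.8.4); the sketch below evaluates the basis function at the knots, and uses small central differences as an independent check on the derivative entries (the knot location xi and spacing h are arbitrary illustrative choices).

```python
h = 0.5
xi = 2.0  # an arbitrary knot; the basis looks the same around every interior knot

def B(x):
    """Cubic B-spline basis (5.8.4) centred at xi, knot spacing h."""
    if xi - 2*h <= x <= xi - h:
        return (x - (xi - 2*h))**3 / (6*h**3)
    if xi - h <= x <= xi:
        t = x - (xi - h)
        return (h**3 + 3*h**2*t + 3*h*t**2 - 3*t**3) / (6*h**3)
    if xi <= x <= xi + h:
        t = (xi + h) - x
        return (h**3 + 3*h**2*t + 3*h*t**2 - 3*t**3) / (6*h**3)
    if xi + h <= x <= xi + 2*h:
        return ((xi + 2*h) - x)**3 / (6*h**3)
    return 0.0

# Values at the five knots: 0, 1/6, 4/6, 1/6, 0
vals = [B(xi + j*h) for j in (-2, -1, 0, 1, 2)]
print(vals)

# Difference-quotient check of B'(x_{i-1}) = 1/(2h) and B''(x_{i-1}) = 1/h^2
eps = 1e-5
d1 = (B(xi - h + eps) - B(xi - h - eps)) / (2*eps)
d2 = (B(xi - h + eps) - 2*B(xi - h) + B(xi - h - eps)) / eps**2
print(d1, 1/(2*h), d2, 1/h**2)
```

Because Bᵢ is C² across the knots, the difference quotients reproduce the tabulated derivative values even though the cubic pieces change there.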

Since each Bᵢ(x) is also a piecewise cubic polynomial with knots at the points
of π, each Bᵢ(x) ∈ S₃(π). Let u = {B₋₁, B₀, B₁, …, Bₙ, Bₙ₊₁} and let B₃(π) = span(u).

The functions in u are linearly independent on [a, b]; thus B₃(π) is (n + 3)-
dimensional. Also, we have B₃(π) = S₃(π). Let s(x) be the B-spline interpolating
function [17] at the nodal points, with s(x) ∈ B₃(π). Then s(x) can be written as

s(x) = Σᵢ₌₋₁ⁿ⁺¹ cᵢBᵢ(x)   (5.8.5)

Therefore, for a given function 𝑦(𝑥), there exists a unique cubic spline (5.8.5)
satisfying the interpolating conditions

𝑠(𝑥𝑖 ) = 𝑦(𝑥𝑖 ) ; 𝑠 ′ (𝑎) = 𝑦 ′ (𝑎) ; 𝑠 ′ (𝑏) = 𝑦 ′ (𝑏) (5.8.6)


Let mᵢ = s′(xᵢ) and Mᵢ = s″(xᵢ) [8]; we have

mᵢ = s′(xᵢ) ≅ y′(xᵢ) − (1/180)h⁴y⁽ᵛ⁾(xᵢ)   (5.8.7)

Mᵢ = s″(xᵢ) ≅ y″(xᵢ) − (1/12)h²y⁽ⁱᵛ⁾(xᵢ) + (1/360)h⁴y⁽ᵛⁱ⁾(xᵢ)   (5.8.8)

The Mᵢ can be applied to construct numerical difference formulae for
y‴(xᵢ), y⁽ᵛ⁾(xᵢ); i = 1, 2, 3, …, (n − 1), as follows:

(Mᵢ₊₁ − Mᵢ₋₁)/(2h) = (s‴(xᵢ₋₁) + s‴(xᵢ₊₁))/2 ≅ y‴(xᵢ) + (1/12)h²y⁽ᵛ⁾(xᵢ)   (5.8.9)

(Mᵢ₊₁ − 2Mᵢ + Mᵢ₋₁)/h² = (s‴(xᵢ₋₁) − s‴(xᵢ₊₁))/2 ≅ y⁽ⁱᵛ⁾(xᵢ) + (1/720)h⁴y⁽ᵛⁱ⁾(xᵢ)   (5.8.10)

(Mᵢ₊₂ − 2Mᵢ₊₁ + 2Mᵢ₋₁ − Mᵢ₋₂)/(2h³)
= [(Mᵢ₊₂ − Mᵢ)/(2h) − 2(Mᵢ₊₁ − Mᵢ₋₁)/(2h) + (Mᵢ − Mᵢ₋₂)/(2h)]/h² ≅ y⁽ᵛ⁾(xᵢ)   (5.8.11)

We will find the values of 𝑦(𝑥𝑖 ), 𝑦 ′ (𝑥𝑖 ), 𝑦 ′′ (𝑥𝑖 ), 𝑦 ′′′ (𝑥𝑖 ), 𝑦 𝑖𝑣 (𝑥𝑖 ) using
Table-1 in the expansion (5.8.5) and applying the above equations (5.8.9), (5.8.10) & (5.8.11) as

𝑦(𝑥𝑖 ) = 𝑠(𝑥𝑖 ) = (𝑐𝑖−1 + 4𝑐𝑖 + 𝑐𝑖+1 )/6    (5.8.12)
𝑦 ′ (𝑥𝑖 ) = 𝑠 ′ (𝑥𝑖 ) = (𝑐𝑖−1 − 𝑐𝑖+1 )/2ℎ    (5.8.13)
𝑦 ′′ (𝑥𝑖 ) = 𝑠 ′′ (𝑥𝑖 ) = (𝑐𝑖−1 − 2𝑐𝑖 + 𝑐𝑖+1 )/ℎ2    (5.8.14)
𝑦 ′′′ (𝑥𝑖 ) = 𝑠 ′′′ (𝑥𝑖 ) = (𝑐𝑖+2 − 2𝑐𝑖+1 + 2𝑐𝑖−1 − 2𝑐𝑖−2 )/2ℎ3    (5.8.15)
𝑦 𝑖𝑣 (𝑥𝑖 ) = 𝑠 𝑖𝑣 (𝑥𝑖 ) = (𝑐𝑖+2 − 4𝑐𝑖+1 + 6𝑐𝑖 − 4𝑐𝑖−1 + 2𝑐𝑖−2 )/ℎ4    (5.8.16)
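The relations (5.8.12)–(5.8.16) are purely algebraic in a window of five consecutive coefficients, so they translate directly into code. A minimal Python sketch (the helper name nodal_values is ours; the signs and the coefficient of 𝑐𝑖−2 reproduce the thesis's formulas exactly as printed):

```python
def nodal_values(w, h):
    """Map a window w = [c_{i-2}, c_{i-1}, c_i, c_{i+1}, c_{i+2}] of B-spline
    coefficients to (y, y', y'', y''', y'''') at the node x_i,
    following relations (5.8.12)-(5.8.16) as printed in the text."""
    cm2, cm1, c0, cp1, cp2 = w
    y    = (cm1 + 4 * c0 + cp1) / 6                                # (5.8.12)
    yp   = (cm1 - cp1) / (2 * h)                                   # (5.8.13)
    ypp  = (cm1 - 2 * c0 + cp1) / h ** 2                           # (5.8.14)
    yppp = (cp2 - 2 * cp1 + 2 * cm1 - 2 * cm2) / (2 * h ** 3)      # (5.8.15)
    y4   = (cp2 - 4 * cp1 + 6 * c0 - 4 * cm1 + 2 * cm2) / h ** 4   # (5.8.16)
    return y, yp, ypp, yppp, y4
```

With all coefficients equal to a constant k, (5.8.12)–(5.8.14) give y = k and y′ = y″ = 0, as expected for a spline interpolating a constant function.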

Solution of special case fourth-order boundary value problem: Let us consider the
boundary value problem

𝑦 𝑖𝑣 (𝑥) + 𝑓(𝑥)𝑦(𝑥) = 𝑔(𝑥)    (5.8.17)

Let the approximate solution of (5.8.17) be

𝑦(𝑥) = 𝑠(𝑥) = ∑_{𝑖=−1}^{𝑛+1} 𝑐𝑖 𝐵𝑖 (𝑥)    (5.8.18)

Discretizing the boundary value problem at the knots, we get

𝑦 𝑖𝑣 (𝑥𝑖 ) + 𝑓(𝑥𝑖 )𝑦(𝑥𝑖 ) = 𝑔(𝑥𝑖 ) ; 𝑖 = 1, 2, … … … , (𝑛 − 1)    (5.8.19)

Putting the values in terms of the 𝑐𝑖 ′𝑠 using equations (5.8.12) and (5.8.16), and taking
𝑓𝑖 = 𝑓(𝑥𝑖 ) & 𝑔𝑖 = 𝑔(𝑥𝑖 ), we get

(𝑐𝑖+2 − 4𝑐𝑖+1 + 6𝑐𝑖 − 4𝑐𝑖−1 + 2𝑐𝑖−2 )/ℎ4 + 𝑓𝑖 (𝑐𝑖−1 + 4𝑐𝑖 + 𝑐𝑖+1 )/6 = 𝑔𝑖    (5.8.20)

Study on Different Numerical Methods for Solving Differential Equations Page | 103
Chapter-5: Solution of the Boundary Value Problems with Applications

On simplification, (5.8.20) becomes

6(𝑐𝑖+2 − 4𝑐𝑖+1 + 6𝑐𝑖 − 4𝑐𝑖−1 + 2𝑐𝑖−2 ) + 𝑓𝑖 ℎ4 (𝑐𝑖−1 + 4𝑐𝑖 + 𝑐𝑖+1 ) = 6ℎ4 𝑔𝑖 (5.8.21)

This gives a system of (𝑛 − 1) linear equations for 𝑖 = 1, 2, … , (𝑛 − 1) in the
(𝑛 + 3) unknowns 𝑐𝑖 ; 𝑖 = −1, 0, … , 𝑛, (𝑛 + 1). The remaining four equations will
be obtained using the boundary conditions as follows

𝑦(𝑎) = 𝐴1 ⇒ 𝑐−1 + 4𝑐0 + 𝑐1 = 6𝐴1 (5.8.22)


𝑦(𝑏) = 𝐴2 ⇒ 𝑐𝑛−1 + 4𝑐𝑛 + 𝑐𝑛+1 = 6𝐴2 (5.8.23)
𝑦 ′ (𝑎) = 𝐵1 ⇒ −𝑐−1 + 𝑐1 = 2ℎ𝐵1 (5.8.24)
𝑦 ′ (𝑏) = 𝐵2 ⇒ −𝑐𝑛−1 + 𝑐𝑛+1 = 2ℎ𝐵2 (5.8.25)

The solution (5.8.18) is obtained by solving the above system of (𝑛 + 3) linear
equations in (𝑛 + 3) unknowns given by equations (5.8.21) and (5.8.22) to (5.8.25).
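The assembly just described can be sketched in a few dozen lines of Python. This is a minimal illustration under our own naming (gauss_solve and solve_special_bvp are hypothetical helpers; the interior rows reproduce (5.8.21) exactly as printed). A production code would exploit the banded structure instead of dense elimination:

```python
def gauss_solve(A, b):
    """Dense Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]      # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            t = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= t * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

def solve_special_bvp(f, g, a, b, n, A1, A2, B1, B2):
    """Assemble and solve the (n+3)x(n+3) system (5.8.21)-(5.8.25).
    Column k corresponds to coefficient c_{k-1}; returns y at the n+1 nodes."""
    h = (b - a) / n
    m = n + 3
    A = [[0.0] * m for _ in range(m)]
    rhs = [0.0] * m
    # boundary rows (5.8.22)-(5.8.25)
    A[0][0], A[0][1], A[0][2] = 1.0, 4.0, 1.0; rhs[0] = 6.0 * A1        # y(a) = A1
    A[1][0], A[1][2] = -1.0, 1.0;              rhs[1] = 2.0 * h * B1    # y'(a) = B1
    A[m-2][m-3], A[m-2][m-1] = -1.0, 1.0;      rhs[m-2] = 2.0 * h * B2  # y'(b) = B2
    A[m-1][m-3], A[m-1][m-2], A[m-1][m-1] = 1.0, 4.0, 1.0
    rhs[m-1] = 6.0 * A2                                                  # y(b) = A2
    # interior rows from (5.8.21), i = 1..n-1
    for i in range(1, n):
        x = a + i * h
        fi, k = f(x), i + 1
        A[k][k-2] += 12.0
        A[k][k-1] += -24.0 + fi * h**4
        A[k][k]   +=  36.0 + 4.0 * fi * h**4
        A[k][k+1] += -24.0 + fi * h**4
        A[k][k+2] += 6.0
        rhs[k] = 6.0 * h**4 * g(x)
    c = gauss_solve(A, rhs)
    return [(c[i] + 4.0 * c[i+1] + c[i+2]) / 6.0 for i in range(n + 1)]  # (5.8.12)
```

For the homogeneous problem with zero boundary data the routine returns the zero solution, which is a quick structural check of the assembly.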

General linear fourth-order boundary value problem: Subject to the boundary
conditions given by (5.8.3), consider the following boundary value problem

𝑦 𝑖𝑣 (𝑥) + 𝑝(𝑥)𝑦 ′′′ (𝑥) + 𝑞(𝑥)𝑦 ′′ (𝑥)+𝑟(𝑥)𝑦 ′ (𝑥) + 𝑡(𝑥)𝑦(𝑥) = 𝑢(𝑥) (5.8.26)

Let (5.8.18) be the approximate solution of the boundary value problem. Taking
𝑝(𝑥𝑖 ) = 𝑝𝑖 , 𝑞(𝑥𝑖 ) = 𝑞𝑖 , 𝑟(𝑥𝑖 ) = 𝑟𝑖 , 𝑡(𝑥𝑖 ) = 𝑡𝑖 & 𝑢(𝑥𝑖 ) = 𝑢𝑖 and discretizing at the
knots, we get

𝑦 𝑖𝑣 (𝑥𝑖 ) + 𝑝𝑖 𝑦 ′′′ (𝑥𝑖 ) + 𝑞𝑖 𝑦 ′′ (𝑥𝑖 ) + 𝑟𝑖 𝑦 ′ (𝑥𝑖 ) + 𝑡𝑖 𝑦(𝑥𝑖 ) = 𝑢𝑖 (5.8.27)

Putting the values of the derivatives using (5.8.12) to (5.8.16), we get

(𝑐𝑖+2 − 4𝑐𝑖+1 + 6𝑐𝑖 − 4𝑐𝑖−1 + 2𝑐𝑖−2 )/ℎ4 + 𝑝𝑖 (𝑐𝑖+2 − 2𝑐𝑖+1 + 2𝑐𝑖−1 − 2𝑐𝑖−2 )/2ℎ3 + 𝑞𝑖 (𝑐𝑖−1 − 2𝑐𝑖 + 𝑐𝑖+1 )/ℎ2
    + 𝑟𝑖 (𝑐𝑖−1 − 𝑐𝑖+1 )/2ℎ + 𝑡𝑖 (𝑐𝑖−1 + 4𝑐𝑖 + 𝑐𝑖+1 )/6 = 𝑢𝑖    (5.8.28)
2ℎ 6

On simplification (5.8.28) becomes

6(𝑐𝑖+2 − 4𝑐𝑖+1 + 6𝑐𝑖 − 4𝑐𝑖−1 + 2𝑐𝑖−2 ) + 3ℎ𝑝𝑖 (𝑐𝑖+2 − 2𝑐𝑖+1 + 2𝑐𝑖−1 − 2𝑐𝑖−2 )
+6ℎ2 𝑞𝑖 (𝑐𝑖−1 − 2𝑐𝑖 + 𝑐𝑖+1 ) + 3ℎ3 𝑟𝑖 (𝑐𝑖−1 − 𝑐𝑖+1 ) + ℎ4 𝑡𝑖 (𝑐𝑖−1 + 4𝑐𝑖 + 𝑐𝑖+1 )
= 6ℎ4 𝑢𝑖 (5.8.29)

Now, the approximate solution is obtained by solving the system given by


(5.8.29) and (5.8.22) to (5.8.25).

Non-linear fourth-order boundary value problem: Subject to the boundary conditions
given in (5.8.3), consider a non-linear fourth-order boundary value problem of the form


𝑦 𝑖𝑣 (𝑥) = 𝑓(𝑥, 𝑦(𝑥), 𝑦 ′ (𝑥), 𝑦 ′′ (𝑥), 𝑦 ′′′ (𝑥)) (5.8.30)

Let (5.8.18) be the approximate solution of the boundary value problem. It must
satisfy the boundary value problem at knots. So, we have

𝑦 𝑖𝑣 (𝑥𝑖 ) = 𝑓(𝑥𝑖 , 𝑦(𝑥𝑖 ), 𝑦 ′ (𝑥𝑖 ), 𝑦 ′′ (𝑥𝑖 ), 𝑦 ′′′ (𝑥𝑖 )) (5.8.31)

Using (5.8.12) to (5.8.16), we get


(𝑐𝑖+2 − 4𝑐𝑖+1 + 6𝑐𝑖 − 4𝑐𝑖−1 + 2𝑐𝑖−2 )/ℎ4 =
    𝑓(𝑥𝑖 , (𝑐𝑖−1 + 4𝑐𝑖 + 𝑐𝑖+1 )/6 , (𝑐𝑖−1 − 𝑐𝑖+1 )/2ℎ , (𝑐𝑖−1 − 2𝑐𝑖 + 𝑐𝑖+1 )/ℎ2 , (𝑐𝑖+2 − 2𝑐𝑖+1 + 2𝑐𝑖−1 − 2𝑐𝑖−2 )/2ℎ3 )    (5.8.32)

This equation (5.8.32) together with equation (5.8.22) to (5.8.25) gives a non-
linear system of equations, which is solved to get the required solution of the boundary
value problem.

Singular fourth-order boundary value problem: Consider a singular fourth-order
boundary value problem of the form

𝑦 𝑖𝑣 (𝑥) + (𝛾/𝑥) 𝑦 ′′′ (𝑥) = 𝑓(𝑥, 𝑦(𝑥)) ; 0 ≤ 𝑥 ≤ 1    (5.8.33)

With 𝑦(0) = 𝐴1 , 𝑦 ′ (1) = 𝐴2 , 𝑦 ′′ (1) = 𝐵1 , 𝑦 ′′′ (0) = 𝐵2 (5.8.34)

Since 𝑥 = 0 is a singular point of equation (5.8.33), we first modify it at 𝑥 = 0
to get the transformed problem

𝑦 𝑖𝑣 (𝑥) + 𝑝(𝑥)𝑦 ′′′ (𝑥) = 𝑟(𝑥, 𝑦)    (5.8.35)

Here 𝑝(𝑥) = 0 ; 𝑥 = 0 & 𝑝(𝑥) = 𝛾/𝑥 ; 𝑥 ≠ 0    (5.8.36)
𝑟(𝑥, 𝑦) = 𝑓(0, 𝑦)/(𝛾 − 1) ; 𝑥 = 0 & 𝑟(𝑥, 𝑦) = 𝑓(𝑥, 𝑦) ; 𝑥 ≠ 0    (5.8.37)

Now, as in the previous sections, let (5.8.18) be an approximate solution of the
boundary value problem. Taking 𝑝(𝑥𝑖 ) = 𝑝𝑖 , 𝑟(𝑥𝑖 ) = 𝑟𝑖 and discretizing at the knots, we get

𝑦 𝑖𝑣 (𝑥𝑖 ) + 𝑝𝑖 𝑦 ′′′ (𝑥𝑖 ) = 𝑟𝑖 (𝑥𝑖 , 𝑦(𝑥𝑖 ))    (5.8.38)

Putting the values of the derivatives using (5.8.12) to (5.8.16), we get

(𝑐𝑖+2 − 4𝑐𝑖+1 + 6𝑐𝑖 − 4𝑐𝑖−1 + 2𝑐𝑖−2 )/ℎ4 + 𝑝𝑖 (𝑐𝑖+2 − 2𝑐𝑖+1 + 2𝑐𝑖−1 − 2𝑐𝑖−2 )/2ℎ3 = 𝑟𝑖 (𝑥𝑖 , (𝑐𝑖−1 + 4𝑐𝑖 + 𝑐𝑖+1 )/6)    (5.8.39)

Finally, boundary conditions provide

𝑦(0) = 𝐴1 ⇒ 𝑐−1 + 4𝑐0 + 𝑐1 = 6𝐴1 (5.8.40)


𝑦 ′ (1) = 𝐴2 ⇒ −𝑐𝑛−1 + 𝑐𝑛+1 = 2ℎ𝐴2 (5.8.41)


𝑦 ′′ (1) = 𝐵1 ⇒ 𝑐𝑛−1 − 2𝑐𝑛 + 𝑐𝑛+1 = ℎ2 𝐵1 (5.8.42)


𝑦 ′′′ (0) = 𝐵2 ⇒ 𝑐2 − 2𝑐1 + 2𝑐−1 − 2𝑐−2 = 2ℎ3 𝐵2 (5.8.43)

This equation (5.8.39) together with equations (5.8.40) to (5.8.43) gives a non-
linear system of equations, which is solved to get the required solution of boundary
value problem (5.8.33).

CHAPTER-6

TWO PROPOSED METHODS FOR SOLVING DIFFERENTIAL EQUATIONS

6.1 INTRODUCTION

In the previous chapters, some well-known numerical methods for solving
differential equations have been discussed, along with their limitations. The demands
of the modern technical sphere require upgrading the present numerical methods. This
requirement inspired us to propose modifications of the present methods and to
introduce newer numerical methods.

In this chapter, we propose a modified form of Milne's predictor-corrector
formulae for solving ordinary differential equations of the first order and first degree.
Here we approximate the value of the dependent variable under five initial conditions
and then improve this value by proper substitution in the formulae. This process is
iterated until we obtain a proper level of accuracy.

Also, a modified formula for solving the elliptic equation by finite-difference
approximations will be offered here, in which we establish a combined finite-difference
formula by means of the standard 5-point formula and the diagonal 5-point formula,
and then improve the approximated values of the mesh points with the help of the
Gauss-Seidel iteration formula.

6.2 MILNE’S (MODIFIED) PREDICTOR-CORRECTOR METHOD

To solve the differential equation 𝑦 ′ = 𝑓(𝑥, 𝑦) by this method, we first
approximate the value of 𝑦𝑛+1 by the predictor formula at 𝑥 = 𝑥𝑛+1 , then improve this
value of 𝑦𝑛+1 by using the corrector formula after proper substitution. These formulae
will be derived from Newton's formula for forward interpolation.

Derivation of Milne's (modified) Predictor formula: We know that Newton's
formula for forward interpolation in terms of 𝑦 ′ and 𝑢 is given by

𝑦 ′ = 𝑦0′ + 𝑢∆𝑦0′ + [𝑢(𝑢 − 1)/2!]∆2 𝑦0′ + [𝑢(𝑢 − 1)(𝑢 − 2)/3!]∆3 𝑦0′ + [𝑢(𝑢 − 1)(𝑢 − 2)(𝑢 − 3)/4!]∆4 𝑦0′
    + [𝑢(𝑢 − 1)(𝑢 − 2)(𝑢 − 3)(𝑢 − 4)/5!]∆5 𝑦0′ + ⋯ ⋯
  = 𝑦0′ + 𝑢∆𝑦0′ + [(𝑢2 − 𝑢)/2]∆2 𝑦0′ + [(𝑢3 − 3𝑢2 + 2𝑢)/6]∆3 𝑦0′ + [(𝑢4 − 6𝑢3 + 11𝑢2 − 6𝑢)/24]∆4 𝑦0′
    + [(𝑢5 − 10𝑢4 + 35𝑢3 − 50𝑢2 + 24𝑢)/120]∆5 𝑦0′ + ⋯ ⋯    (6.2.1)

Here 𝑢 = (𝑥 − 𝑥0 )/ℎ, or 𝑥 = 𝑥0 + 𝑢ℎ ⇒ 𝑑𝑥 = ℎ𝑑𝑢

Now, integrating (6.2.1) over the interval 𝑥0 to 𝑥0 + 5ℎ, i.e., 𝑢 = 0 to 𝑢 = 5,
we get

∫[𝑥0 , 𝑥0 +5ℎ] 𝑦 ′ 𝑑𝑥 = ℎ ∫[0, 5] [𝑦0′ + 𝑢∆𝑦0′ + [(𝑢2 − 𝑢)/2]∆2 𝑦0′ + [(𝑢3 − 3𝑢2 + 2𝑢)/6]∆3 𝑦0′
    + [(𝑢4 − 6𝑢3 + 11𝑢2 − 6𝑢)/24]∆4 𝑦0′ + [(𝑢5 − 10𝑢4 + 35𝑢3 − 50𝑢2 + 24𝑢)/120]∆5 𝑦0′ + ⋯ ⋯ ]𝑑𝑢

𝑜𝑟, [𝑦] from 𝑥0 to 𝑥0 + 5ℎ = ℎ[𝑢𝑦0′ + (𝑢2 /2)∆𝑦0′ + (1/2)(𝑢3 /3 − 𝑢2 /2)∆2 𝑦0′ + (1/6)(𝑢4 /4 − 𝑢3 + 𝑢2 )∆3 𝑦0′
    + (1/24)(𝑢5 /5 − 3𝑢4 /2 + 11𝑢3 /3 − 3𝑢2 )∆4 𝑦0′ + (1/120)(𝑢6 /6 − 2𝑢5 + 35𝑢4 /4 − 50𝑢3 /3 + 12𝑢2 )∆5 𝑦0′ + ⋯ ⋯ ] for 𝑢 = 0 to 𝑢 = 5

𝑜𝑟, 𝑦(𝑥0 + 5ℎ) − 𝑦(𝑥0 ) = ℎ[5𝑦0′ + (25/2)∆𝑦0′ + (1/2)(125/3 − 25/2)∆2 𝑦0′ + (1/6)(625/4 − 125 + 25)∆3 𝑦0′
    + (1/24)(625 − 1875/2 + 1375/3 − 75)∆4 𝑦0′ + (1/120)(15625/6 − 6250 + 21875/4 − 6250/3 + 300)∆5 𝑦0′ + ⋯ ⋯ ]

𝑜𝑟, 𝑦5 − 𝑦0 = ℎ[5𝑦0′ + (25/2)∆𝑦0′ + (175/12)∆2 𝑦0′ + (75/8)∆3 𝑦0′ + (425/144)∆4 𝑦0′ + (95/288)∆5 𝑦0′ + ⋯ ⋯ ]

After neglecting the terms containing ∆6 and higher orders and substituting
∆ ≡ 𝐸 − 1, we get Milne's (modified) predictor formula as follows

𝑦5 − 𝑦0 = ℎ[5𝑦0′ + (25/2)(𝐸 − 1)𝑦0′ + (175/12)(𝐸 − 1)2 𝑦0′ + (75/8)(𝐸 − 1)3 𝑦0′
    + (425/144)(𝐸 − 1)4 𝑦0′ + (95/288)∆5 𝑦0′ ]

𝑜𝑟, 𝑦5 = 𝑦0 + ℎ[5𝑦0′ + (25/2)(𝐸 − 1)𝑦0′ + (175/12)(𝐸 2 − 2𝐸 + 1)𝑦0′
    + (75/8)(𝐸 3 − 3𝐸 2 + 3𝐸 − 1)𝑦0′ + (425/144)(𝐸 4 − 4𝐸 3 + 6𝐸 2 − 4𝐸 + 1)𝑦0′ + (95/288)∆5 𝑦0′ ]

  = 𝑦0 + ℎ[(5 − 25/2 + 175/12 − 75/8 + 425/144)𝑦0′ + (25/2 − 175/6 + 225/8 − 425/36)𝐸𝑦0′
    + (175/12 − 225/8 + 425/24)𝐸 2 𝑦0′ + (75/8 − 425/36)𝐸 3 𝑦0′ + (425/144)𝐸 4 𝑦0′ + (95/288)∆5 𝑦0′ ]

  = 𝑦0 + ℎ[(95/144)𝑦0′ − (25/72)𝐸𝑦0′ + (25/6)𝐸 2 𝑦0′ − (175/72)𝐸 3 𝑦0′ + (425/144)𝐸 4 𝑦0′ + (95/288)∆5 𝑦0′ ]

  = 𝑦0 + (5ℎ/144)[19𝑦0′ − 10𝑦1′ + 120𝑦2′ − 70𝑦3′ + 85𝑦4′ ] + (95/288)ℎ∆5 𝑦0′

[∵ 𝐸𝑦0′ = 𝑦1′ , 𝐸 2 𝑦0′ = 𝑦2′ , 𝐸 3 𝑦0′ = 𝑦3′ , 𝐸 4 𝑦0′ = 𝑦4′ ]

𝑖. 𝑒. 𝑦5 = 𝑦0 + (5ℎ/144)[19𝑦0′ − 10𝑦1′ + 120𝑦2′ − 70𝑦3′ + 85𝑦4′ ] + (95/288)ℎ∆5 𝑦0′    (6.2.2)


Derivation of Milne's (modified) Corrector formula: To obtain the corrector
formula, we integrate (6.2.1) over the interval 𝑥0 to 𝑥0 + 4ℎ, i.e., 𝑢 = 0 to 𝑢 = 4;
then we get

∫[𝑥0 , 𝑥0 +4ℎ] 𝑦 ′ 𝑑𝑥 = ℎ ∫[0, 4] [𝑦0′ + 𝑢∆𝑦0′ + [(𝑢2 − 𝑢)/2]∆2 𝑦0′ + [(𝑢3 − 3𝑢2 + 2𝑢)/6]∆3 𝑦0′
    + [(𝑢4 − 6𝑢3 + 11𝑢2 − 6𝑢)/24]∆4 𝑦0′ + [(𝑢5 − 10𝑢4 + 35𝑢3 − 50𝑢2 + 24𝑢)/120]∆5 𝑦0′ + ⋯ ⋯ ]𝑑𝑢

𝑜𝑟, [𝑦] from 𝑥0 to 𝑥0 + 4ℎ = ℎ[𝑢𝑦0′ + (𝑢2 /2)∆𝑦0′ + (1/2)(𝑢3 /3 − 𝑢2 /2)∆2 𝑦0′ + (1/6)(𝑢4 /4 − 𝑢3 + 𝑢2 )∆3 𝑦0′
    + (1/24)(𝑢5 /5 − 3𝑢4 /2 + 11𝑢3 /3 − 3𝑢2 )∆4 𝑦0′ + (1/120)(𝑢6 /6 − 2𝑢5 + 35𝑢4 /4 − 50𝑢3 /3 + 12𝑢2 )∆5 𝑦0′ + ⋯ ⋯ ] for 𝑢 = 0 to 𝑢 = 4

𝑜𝑟, 𝑦(𝑥0 + 4ℎ) − 𝑦(𝑥0 ) = ℎ[4𝑦0′ + 8∆𝑦0′ + (1/2)(64/3 − 8)∆2 𝑦0′ + (1/6)(64 − 64 + 16)∆3 𝑦0′
    + (1/24)(1024/5 − 384 + 704/3 − 48)∆4 𝑦0′ + (1/120)(2048/3 − 2048 + 2240 − 3200/3 + 192)∆5 𝑦0′ + ⋯ ⋯ ]

𝑜𝑟, 𝑦4 − 𝑦0 = ℎ[4𝑦0′ + 8∆𝑦0′ + (20/3)∆2 𝑦0′ + (8/3)∆3 𝑦0′ + (14/45)∆4 𝑦0′ + (0)∆5 𝑦0′ + ⋯ ⋯ ]

After neglecting the terms containing ∆6 and higher orders and substituting
∆ ≡ 𝐸 − 1, we get Milne's (modified) corrector formula as follows

𝑦4 − 𝑦0 = ℎ[4𝑦0′ + 8(𝐸 − 1)𝑦0′ + (20/3)(𝐸 − 1)2 𝑦0′ + (8/3)(𝐸 − 1)3 𝑦0′
    + (14/45)(𝐸 − 1)4 𝑦0′ + (0)∆5 𝑦0′ ]

𝑜𝑟, 𝑦4 = 𝑦0 + ℎ[4𝑦0′ + 8(𝐸 − 1)𝑦0′ + (20/3)(𝐸 2 − 2𝐸 + 1)𝑦0′
    + (8/3)(𝐸 3 − 3𝐸 2 + 3𝐸 − 1)𝑦0′ + (14/45)(𝐸 4 − 4𝐸 3 + 6𝐸 2 − 4𝐸 + 1)𝑦0′ + (0)∆5 𝑦0′ ]

  = 𝑦0 + ℎ[(4 − 8 + 20/3 − 8/3 + 14/45)𝑦0′ + (8 − 40/3 + 8 − 56/45)𝐸𝑦0′
    + (20/3 − 8 + 28/15)𝐸 2 𝑦0′ + (8/3 − 56/45)𝐸 3 𝑦0′ + (14/45)𝐸 4 𝑦0′ + (0)∆5 𝑦0′ ]

  = 𝑦0 + ℎ[(14/45)𝑦0′ + (64/45)𝐸𝑦0′ + (24/45)𝐸 2 𝑦0′ + (64/45)𝐸 3 𝑦0′ + (14/45)𝐸 4 𝑦0′ + (0)∆5 𝑦0′ ]

  = 𝑦0 + (2ℎ/45)[7𝑦0′ + 32𝑦1′ + 12𝑦2′ + 32𝑦3′ + 7𝑦4′ ] + (0)ℎ∆5 𝑦0′

[∵ 𝐸𝑦0′ = 𝑦1′ , 𝐸 2 𝑦0′ = 𝑦2′ , 𝐸 3 𝑦0′ = 𝑦3′ , 𝐸 4 𝑦0′ = 𝑦4′ ]

𝑖. 𝑒. 𝑦4 = 𝑦0 + (2ℎ/45)[7𝑦0′ + 32𝑦1′ + 12𝑦2′ + 32𝑦3′ + 7𝑦4′ ] + (0)ℎ∆5 𝑦0′    (6.2.3)
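The coefficients in (6.2.2) and (6.2.3) can be sanity-checked numerically: with the ∆5 remainders dropped, the predictor weights must integrate any polynomial 𝑦 ′ of degree ≤ 4 exactly over 𝑢 ∈ [0, 5], and the corrector weights over 𝑢 ∈ [0, 4]. A short Python check (the helper names are ours, not part of the thesis):

```python
def weighted_sum(weights, scale, p):
    """Apply quadrature weights to the samples p(0), p(1), ..., then scale."""
    return scale * sum(w * p(u) for u, w in enumerate(weights))

def predictor_quad(p):
    """Weights of (6.2.2); should equal the integral of p over [0, 5] (h = 1)."""
    return weighted_sum((19, -10, 120, -70, 85), 5.0 / 144.0, p)

def corrector_quad(p):
    """Weights of (6.2.3); should equal the integral of p over [0, 4] (h = 1)."""
    return weighted_sum((7, 32, 12, 32, 7), 2.0 / 45.0, p)
```

For instance, predictor_quad applied to 𝑢4 gives 625 = 5^5/5, and corrector_quad applied to 𝑢4 gives 1024/5 = 4^5/5, confirming both sets of weights through degree four.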

Generalization of Milne's (modified) Predictor-Corrector formulae: We can write
the general form [7] of Milne's (modified) predictor and corrector formulae, according
to (6.2.2) and (6.2.3), as follows


𝑦𝑛+1 𝑝 = 𝑦𝑛−4 + (5ℎ/144)[19𝑦𝑛−4 ′ − 10𝑦𝑛−3 ′ + 120𝑦𝑛−2 ′ − 70𝑦𝑛−1 ′ + 85𝑦𝑛′ ]    (6.2.6)
𝑦𝑛+1 𝑐 = 𝑦𝑛−3 + (2ℎ/45)[7𝑦𝑛−3 ′ + 32𝑦𝑛−2 ′ + 12𝑦𝑛−1 ′ + 32𝑦𝑛′ + 7𝑦𝑛+1 ′ ]    (6.2.7)

Here the indices 𝑝 & 𝑐 indicate the predicted and corrected values of 𝑦𝑛+1
respectively at 𝑥 = 𝑥𝑛+1 .
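The pair (6.2.6)–(6.2.7) can be put into a short routine. The following Python sketch (the function name milne_modified_step is ours) advances one step from five known points and iterates the corrector until two successive values agree, as in the worked example of the next section:

```python
def milne_modified_step(f, xs, ys, h, tol=1e-4, max_iter=25):
    """One step of the modified predictor-corrector (6.2.6)-(6.2.7).
    xs, ys hold the last five points (y_{n-4},...,y_n); returns (x_{n+1}, y_{n+1})."""
    d = [f(x, y) for x, y in zip(xs, ys)]          # y'_{n-4}, ..., y'_n
    x_new = xs[-1] + h
    # predictor (6.2.6)
    y_new = ys[0] + 5 * h / 144 * (19 * d[0] - 10 * d[1] + 120 * d[2]
                                   - 70 * d[3] + 85 * d[4])
    # corrector (6.2.7), iterated by re-substituting y'_{n+1}
    for _ in range(max_iter):
        d_new = f(x_new, y_new)
        y_corr = ys[1] + 2 * h / 45 * (7 * d[1] + 32 * d[2] + 12 * d[3]
                                       + 32 * d[4] + 7 * d_new)
        if abs(y_corr - y_new) < tol:
            return x_new, y_corr
        y_new = y_corr
    return x_new, y_new
```

Applied to 𝑦 ′ = (𝑥 + 𝑦)/2 with the five starting values of Section-6.3 and ℎ = 0.5, this returns 𝑦5 ≈ 9.4613 at 𝑥5 = 2.5, agreeing with the hand computation.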

6.3 APPLICATION OF THE MILNE'S (MODIFIED) PREDICTOR-CORRECTOR METHOD

Solve the differential equation 𝑑𝑦/𝑑𝑥 = (𝑥 + 𝑦)/2 at 𝑥 = 2.5 with the initial values
𝑦(0.0) = 2.000, 𝑦(0.5) = 2.636, 𝑦(1.0) = 3.595, 𝑦(1.5) = 4.968, 𝑦(2.0) = 6.873

Solution: Given 𝑦 ′ = 𝑑𝑦/𝑑𝑥 = (𝑥 + 𝑦)/2 = 𝑓(𝑥, 𝑦)

Taking the step length ℎ = 0.5, from the initial conditions we get

𝑥0 = 0.0, 𝑦0 = 2.000 ⇒ 𝑓0 = (𝑥0 + 𝑦0 )/2 = (0.0 + 2.000)/2 = 1.0000
𝑥1 = 0.5, 𝑦1 = 2.636 ⇒ 𝑓1 = (𝑥1 + 𝑦1 )/2 = (0.5 + 2.636)/2 = 1.5680
𝑥2 = 1.0, 𝑦2 = 3.595 ⇒ 𝑓2 = (𝑥2 + 𝑦2 )/2 = (1.0 + 3.595)/2 = 2.2975
𝑥3 = 1.5, 𝑦3 = 4.968 ⇒ 𝑓3 = (𝑥3 + 𝑦3 )/2 = (1.5 + 4.968)/2 = 3.2340
𝑥4 = 2.0, 𝑦4 = 6.873 ⇒ 𝑓4 = (𝑥4 + 𝑦4 )/2 = (2.0 + 6.873)/2 = 4.4365

Finally, 𝑥5 = 𝑥4 + ℎ = 2.0 + 0.5 = 2.5

Now, putting 𝑛 = 4 in (6.2.6), we get Milne's (modified) predictor formula for
𝑦5 as follows

𝑦5 = 𝑦0 + (5ℎ/144)[19𝑦0′ − 10𝑦1′ + 120𝑦2′ − 70𝑦3′ + 85𝑦4′ ]
   = 𝑦0 + (5ℎ/144)[19𝑓0 − 10𝑓1 + 120𝑓2 − 70𝑓3 + 85𝑓4 ]
   = 2.000 + (5(0.5)/144)[19(1.0000) − 10(1.5680) + 120(2.2975) − 70(3.2340) + 85(4.4365)]
   = 9.4608

Then 𝑓5 = (𝑥5 + 𝑦5 )/2 = (2.5 + 9.4608)/2 = 5.9804


Now, putting 𝑛 = 4 in (6.2.7), we get Milne's (modified) corrector formula for
𝑦5 as follows

𝑦5 = 𝑦1 + (2ℎ/45)[7𝑦1′ + 32𝑦2′ + 12𝑦3′ + 32𝑦4′ + 7𝑦5′ ]
   = 𝑦1 + (2ℎ/45)[7𝑓1 + 32𝑓2 + 12𝑓3 + 32𝑓4 + 7𝑓5 ]

Then we get the approximations of 𝑦5 by the above formula as follows

First iteration:
𝑦5 (1) = 𝑦1 + (2ℎ/45)[7𝑓1 + 32𝑓2 + 12𝑓3 + 32𝑓4 + 7𝑓5 ]
      = 2.636 + (2(0.5)/45)[7(1.5680) + 32(2.2975) + 12(3.2340) + 32(4.4365) + 7(5.9804)]
      = 9.4612
Then 𝑓5 = (𝑥5 + 𝑦5 )/2 = (2.5 + 9.4612)/2 = 5.9806

Second iteration:
𝑦5 (2) = 𝑦1 + (2ℎ/45)[7𝑓1 + 32𝑓2 + 12𝑓3 + 32𝑓4 + 7𝑓5 ]
      = 2.636 + (2(0.5)/45)[7(1.5680) + 32(2.2975) + 12(3.2340) + 32(4.4365) + 7(5.9806)]
      = 9.4613
Then 𝑓5 = (𝑥5 + 𝑦5 )/2 = (2.5 + 9.4613)/2 = 5.9807

Third iteration:
𝑦5 (3) = 𝑦1 + (2ℎ/45)[7𝑓1 + 32𝑓2 + 12𝑓3 + 32𝑓4 + 7𝑓5 ]
      = 2.636 + (2(0.5)/45)[7(1.5680) + 32(2.2975) + 12(3.2340) + 32(4.4365) + 7(5.9807)]
      = 9.4613
Then 𝑓5 = (𝑥5 + 𝑦5 )/2 = (2.5 + 9.4613)/2 = 5.9807

Since the third approximation for 𝑦5 is the same as the second approximation, we
can choose the approximate value

𝑦5 = 9.4613 at 𝑥5 = 2.5
Exact result: We have 𝑑𝑦/𝑑𝑥 = (𝑥 + 𝑦)/2

𝑜𝑟, 𝑑𝑦/𝑑𝑥 − 𝑦/2 = 𝑥/2


This is a linear differential equation in 𝑦 whose integrating factor is

𝐼𝐹 = 𝑒 ∫(−1/2)𝑑𝑥 = 𝑒 −𝑥/2

Multiplying the above differential equation by 𝑒 −𝑥/2 , it becomes

𝑒 −𝑥/2 (𝑑𝑦/𝑑𝑥) − (𝑦/2)𝑒 −𝑥/2 = (𝑥/2)𝑒 −𝑥/2
𝑜𝑟, (𝑑/𝑑𝑥)(𝑦𝑒 −𝑥/2 ) = (𝑥/2)𝑒 −𝑥/2
𝑜𝑟, 𝑦𝑒 −𝑥/2 = ∫(𝑥/2)𝑒 −𝑥/2 𝑑𝑥 + 𝑐 ; 𝑐 ∈ 𝑹
𝑜𝑟, 𝑦𝑒 −𝑥/2 = 𝑐 − 𝑒 −𝑥/2 (𝑥 + 2)
𝑜𝑟, 𝑦 = 𝑐𝑒 𝑥/2 − 𝑥 − 2

From the initial condition 𝑦 = 2 at 𝑥 = 0, we get

2 = 𝑐𝑒 0 − 0 − 2 ⇒ 𝑐 = 4

Then the above solution becomes 𝑦 = 4𝑒 𝑥/2 − 𝑥 − 2

Now, the value of 𝑦 at 𝑥 = 2.5 is

𝑦 = 4𝑒 2.5/2 − 2.5 − 2 = 9.4614

Comment: We have observed that

𝑦𝐸𝑥𝑎𝑐𝑡 − 𝑦𝐴𝑝𝑝𝑟𝑜𝑥𝑖𝑚𝑎𝑡𝑒 = 9.4614 − 9.4613 = 0.0001

Thus we can conclude that the value of 𝑦 at 𝑥 = 2.5 obtained by Milne's
(modified) predictor-corrector method is very close to the exact value.
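The closed-form solution above is easy to verify mechanically. A small Python check (the function name exact_y is ours):

```python
import math

def exact_y(x):
    """Closed-form solution y = 4 e^{x/2} - x - 2 of dy/dx = (x + y)/2, y(0) = 2."""
    return 4.0 * math.exp(x / 2.0) - x - 2.0
```

It satisfies the initial condition exactly, its derivative matches (x + y)/2 to numerical precision, and exact_y(2.5) rounds to 9.4614 as quoted above.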

6.4 SURROUNDING 9-POINT FORMULA

Laplace's equation is an elliptic partial differential equation which can be solved
by finite-difference approximations. That solution involves finding the values at the
mesh points of a given domain under certain boundary values. In this section, we
propose a formula, namely the surrounding nine-point formula, to find the mesh points
of Laplace's equation on a given domain.
Derivation of Surrounding Nine-point formula: Let us consider Laplace's
equation in the 𝑥𝑦-plane as follows

𝑢𝑥𝑥 + 𝑢𝑦𝑦 = 0 (6.4.1)


We can now obtain the finite-difference analogue of the partial differential
equation by replacing the derivatives in the above equation by their corresponding
difference approximations as

𝑢𝑥𝑥 ≈ (𝑢𝑖−1,𝑗 − 2𝑢𝑖,𝑗 + 𝑢𝑖+1,𝑗 )/ℎ2    (6.4.2)
𝑢𝑦𝑦 ≈ (𝑢𝑖,𝑗−1 − 2𝑢𝑖,𝑗 + 𝑢𝑖,𝑗+1 )/𝑘 2    (6.4.3)

Replacing the derivatives in (6.4.1) by their finite-difference approximations
from (6.4.2) and (6.4.3), with 𝑘 = ℎ, we get

(𝑢𝑖−1,𝑗 − 2𝑢𝑖,𝑗 + 𝑢𝑖+1,𝑗 )/ℎ2 + (𝑢𝑖,𝑗−1 − 2𝑢𝑖,𝑗 + 𝑢𝑖,𝑗+1 )/ℎ2 = 0

𝑜𝑟, 𝑢𝑖,𝑗 = (1/4)[𝑢𝑖−1,𝑗 + 𝑢𝑖+1,𝑗 + 𝑢𝑖,𝑗−1 + 𝑢𝑖,𝑗+1 ]    (6.4.4)

This is called the difference equation of Laplace’s equation, which shows the
value of 𝑢 at any point is the mean of its values at the four neighboring points. Equation
(6.4.4) is called the standard 5-point formula exhibited in figure-(6.1)

Figure-(6.1) Figure-(6.2)

We know that Laplace's equation remains invariant when the co-ordinate axes
are rotated through an angle of 45°. Then the formula (6.4.4) can be re-written as

𝑢𝑖,𝑗 = (1/4)[𝑢𝑖−1,𝑗−1 + 𝑢𝑖−1,𝑗+1 + 𝑢𝑖+1,𝑗−1 + 𝑢𝑖+1,𝑗+1 ]    (6.4.5)

This is called diagonal 5-point formula, which shows the value of 𝑢 at any point
is the mean of its values at the four diagonal points. The formula given by (6.4.5) is
represented in figure-(6.2).

Now, by taking the average of (6.4.4) and (6.4.5), we get

𝑢𝑖,𝑗 = (1/8)[𝑢𝑖−1,𝑗 + 𝑢𝑖+1,𝑗 + 𝑢𝑖,𝑗−1 + 𝑢𝑖,𝑗+1
    + 𝑢𝑖−1,𝑗−1 + 𝑢𝑖−1,𝑗+1 + 𝑢𝑖+1,𝑗−1 + 𝑢𝑖+1,𝑗+1 ]    (6.4.6)

Thus, a newer formula for 𝑢𝑖,𝑗 has been proposed in (6.4.6), by which we can find
the value of 𝑢 at a mesh point by taking the mean of its values at all the points
surrounding it. So, we call the proposed formula the surrounding 9-point formula.
Figure-(6.3) represents (6.4.6).

Figure-(6.3)

Algorithm: Now we discuss the algorithm for obtaining the mesh points of a given
domain under the formula (6.4.6).
1. At first, we consider the boundary values of the given domain.
2. Take the non-boundary points as zero when they occur as surrounding points
   in the evaluation of a mesh point, and continue until all the mesh points
   have been approximated once.
3. Once a mesh point has been evaluated (with some surrounding points taken
   as zero), its current value is used as a surrounding point in the
   approximation of the next mesh point, if needed.
4. The first approximation values of the mesh points are improved by the
   Gauss-Seidel iteration method.
5. Finally, we take the (𝑛 + 1)𝑡ℎ approximation values as the required mesh
   points if they are very close to the 𝑛𝑡ℎ approximation values within a
   given scale of accuracy.
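The steps above can be sketched directly in Python. This is a minimal illustration under our own naming (gauss_seidel_9pt is a hypothetical helper): boundary entries of the grid hold the given values, interior entries start at zero per step 2, and sweeps continue until successive approximations agree within the chosen accuracy:

```python
def gauss_seidel_9pt(grid, tol=1e-4, max_sweeps=500):
    """Gauss-Seidel iteration with the surrounding 9-point formula (6.4.6).
    `grid` is a rectangular list of lists; interior points are updated in place
    as the mean of their eight surrounding points."""
    rows, cols = len(grid), len(grid[0])
    for _ in range(max_sweeps):
        change = 0.0
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                new = (grid[i-1][j] + grid[i+1][j] + grid[i][j-1] + grid[i][j+1]
                       + grid[i-1][j-1] + grid[i-1][j+1]
                       + grid[i+1][j-1] + grid[i+1][j+1]) / 8.0
                change = max(change, abs(new - grid[i][j]))
                grid[i][j] = new
        if change < tol:
            break
    return grid
```

With the boundary values of the worked example in Section-6.5 laid out on a 4×4 grid (one consistent mapping of the c-labelling onto the border, interior started at zero), the interior entries converge to approximately 26.67, 33.33, 46.67 and 43.33, matching the seventh approximation computed by hand below.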

6.5 APPLICATION OF SURROUNDING NINE-POINT FORMULA

Solve the Laplace's equation 𝑢𝑥𝑥 + 𝑢𝑦𝑦 = 0 for the square mesh with the
boundary values given in figure-(6.4) below, by the finite-difference method.

Figure-(6.4) Figure-(6.5)

Solution: We consider figure-(6.5); by comparing figure-(6.4) with figure-(6.5),
we obtain

𝑐1 = 0, 𝑐2 = 10, 𝑐3 = 𝑐12 = 20, 𝑐4 = 30, 𝑐5 = 𝑐11 = 40


𝑐6 = 50, 𝑐7 = 𝑐8 = 𝑐9 = 𝑐10 = 60

From (6.4.6), we have


𝑢𝑖,𝑗 = (1/8)[𝑢𝑖−1,𝑗 + 𝑢𝑖+1,𝑗 + 𝑢𝑖,𝑗−1 + 𝑢𝑖,𝑗+1
    + 𝑢𝑖−1,𝑗−1 + 𝑢𝑖−1,𝑗+1 + 𝑢𝑖+1,𝑗−1 + 𝑢𝑖+1,𝑗+1 ]

Then by applying the above formula to figure-(6.5), we get

𝑢1 = (1/8)[𝑐1 + 𝑐2 + 𝑐3 + 𝑢2 + 𝑢3 + 𝑢4 + 𝑐11 + 𝑐12 ]
𝑢2 = (1/8)[𝑐2 + 𝑐3 + 𝑐4 + 𝑐5 + 𝑐6 + 𝑢3 + 𝑢4 + 𝑢1 ]
𝑢3 = (1/8)[𝑢1 + 𝑢2 + 𝑐5 + 𝑐6 + 𝑐7 + 𝑐8 + 𝑐9 + 𝑢4 ]
𝑢4 = (1/8)[𝑐12 + 𝑢1 + 𝑢2 + 𝑢3 + 𝑐8 + 𝑐9 + 𝑐10 + 𝑐11 ]

The above system of equations can be re-written in the Gauss-Seidel iterative
form as follows

𝑢1 (𝑛+1) = (1/8)[𝑐1 + 𝑐2 + 𝑐3 + 𝑢2 (𝑛) + 𝑢3 (𝑛) + 𝑢4 (𝑛) + 𝑐11 + 𝑐12 ]
𝑢2 (𝑛+1) = (1/8)[𝑐2 + 𝑐3 + 𝑐4 + 𝑐5 + 𝑐6 + 𝑢3 (𝑛) + 𝑢4 (𝑛) + 𝑢1 (𝑛+1) ]
𝑢3 (𝑛+1) = (1/8)[𝑢1 (𝑛+1) + 𝑢2 (𝑛+1) + 𝑐5 + 𝑐6 + 𝑐7 + 𝑐8 + 𝑐9 + 𝑢4 (𝑛) ]
𝑢4 (𝑛+1) = (1/8)[𝑐12 + 𝑢1 (𝑛+1) + 𝑢2 (𝑛+1) + 𝑢3 (𝑛+1) + 𝑐8 + 𝑐9 + 𝑐10 + 𝑐11 ]

Now we apply the above system for the following approximations, with the
initial substitution 𝑢2 (0) = 𝑢3 (0) = 𝑢4 (0) = 0.

First approximation:
𝑢1 (1) = (1/8)[0 + 10 + 20 + 0 + 0 + 0 + 40 + 20] = 11.25
𝑢2 (1) = (1/8)[10 + 20 + 30 + 40 + 50 + 0 + 0 + 11.25] = 20.16
𝑢3 (1) = (1/8)[11.25 + 20.16 + 40 + 50 + 60 + 60 + 60 + 0] = 37.68
𝑢4 (1) = (1/8)[20 + 11.25 + 20.16 + 37.68 + 60 + 60 + 60 + 40] = 38.64

Second approximation:
𝑢1 (2) = (1/8)[0 + 10 + 20 + 20.16 + 37.68 + 38.64 + 40 + 20] = 23.31
𝑢2 (2) = (1/8)[10 + 20 + 30 + 40 + 50 + 37.68 + 38.64 + 23.31] = 31.20
𝑢3 (2) = (1/8)[23.31 + 31.20 + 40 + 50 + 60 + 60 + 60 + 38.64] = 45.39
𝑢4 (2) = (1/8)[20 + 23.31 + 31.20 + 45.39 + 60 + 60 + 60 + 40] = 42.49


Third approximation:
𝑢1 (3) = (1/8)[0 + 10 + 20 + 31.20 + 45.39 + 42.49 + 40 + 20] = 26.14
𝑢2 (3) = (1/8)[10 + 20 + 30 + 40 + 50 + 45.39 + 42.49 + 26.14] = 33.00
𝑢3 (3) = (1/8)[26.14 + 33.00 + 40 + 50 + 60 + 60 + 60 + 42.49] = 46.45
𝑢4 (3) = (1/8)[20 + 26.14 + 33.00 + 46.45 + 60 + 60 + 60 + 40] = 43.20

Fourth approximation:
𝑢1 (4) = (1/8)[0 + 10 + 20 + 33.00 + 46.45 + 43.20 + 40 + 20] = 26.58
𝑢2 (4) = (1/8)[10 + 20 + 30 + 40 + 50 + 46.45 + 43.20 + 26.58] = 33.28
𝑢3 (4) = (1/8)[26.58 + 33.28 + 40 + 50 + 60 + 60 + 60 + 43.20] = 46.63
𝑢4 (4) = (1/8)[20 + 26.58 + 33.28 + 46.63 + 60 + 60 + 60 + 40] = 43.31

Fifth approximation:
𝑢1 (5) = (1/8)[0 + 10 + 20 + 33.28 + 46.63 + 43.31 + 40 + 20] = 26.65
𝑢2 (5) = (1/8)[10 + 20 + 30 + 40 + 50 + 46.63 + 43.31 + 26.65] = 33.32
𝑢3 (5) = (1/8)[26.65 + 33.32 + 40 + 50 + 60 + 60 + 60 + 43.31] = 46.66
𝑢4 (5) = (1/8)[20 + 26.65 + 33.32 + 46.66 + 60 + 60 + 60 + 40] = 43.33

Sixth approximation:
𝑢1 (6) = (1/8)[0 + 10 + 20 + 33.32 + 46.66 + 43.33 + 40 + 20] = 26.66
𝑢2 (6) = (1/8)[10 + 20 + 30 + 40 + 50 + 46.66 + 43.33 + 26.66] = 33.33
𝑢3 (6) = (1/8)[26.66 + 33.33 + 40 + 50 + 60 + 60 + 60 + 43.33] = 46.67
𝑢4 (6) = (1/8)[20 + 26.66 + 33.33 + 46.67 + 60 + 60 + 60 + 40] = 43.33

Seventh approximation:
𝑢1 (7) = (1/8)[0 + 10 + 20 + 33.33 + 46.67 + 43.33 + 40 + 20] = 26.67
𝑢2 (7) = (1/8)[10 + 20 + 30 + 40 + 50 + 46.67 + 43.33 + 26.67] = 33.33
𝑢3 (7) = (1/8)[26.67 + 33.33 + 40 + 50 + 60 + 60 + 60 + 43.33] = 46.67
𝑢4 (7) = (1/8)[20 + 26.67 + 33.33 + 46.67 + 60 + 60 + 60 + 40] = 43.33

Since the sixth and seventh approximations are very close in the values of the
mesh points, we can choose


𝑢1 = 26.67, 𝑢2 = 33.33, 𝑢3 = 46.67, 𝑢4 = 43.33

Comment: The mesh points of the given domain in figure-(6.4) can also be obtained
by using the standard 5-point and diagonal 5-point formulae; the values of the mesh
points obtained by those formulae are given below

𝑢1 = 26.66, 𝑢2 = 33.33, 𝑢3 = 46.66, 𝑢4 = 43.33

Hence, after comparison, we conclude that the surrounding 9-point formula is
usable for obtaining the mesh points of any given domain with the desired level of
accuracy.

6.6 ADVANTAGES OF PROPOSED METHODS OVER PREVIOUS METHODS

Milne's (modified) predictor-corrector method: Though Milne's (modified)
predictor-corrector formulae seem to make a lengthy process of solving ordinary
differential equations, they have the following advantages over the previous methods

1. The previous methods estimate the value of 𝑦 for a given value of 𝑥 by means
   of four initial conditions, whereas Milne's (modified) predictor-corrector
   formulae estimate it by means of five initial conditions, which is more
   logical.

2. To obtain the value of 𝑦 at any value of 𝑥, the previous methods need
   Newton's formula for forward interpolation calculated up to the fourth
   order, but Milne's (modified) predictor-corrector formulae need it
   calculated up to the fifth order, which gives better accuracy.

3. In Milne's (modified) corrector formula the co-efficient of 𝛥5 is zero, so
   the truncation error of that order vanishes; this upgrades the level of
   accuracy of the method.

Surrounding 9-point formula: Obtaining mesh points by means of the surrounding
9-point formula seems to be a time-consuming process, but it has the following
advantages over the previous methods.

1. Since the surrounding 9-point formula depends upon all the mesh points
   around a point to determine it, the formula is more contributive and
   logical, which gives better accuracy.

2. The initial zero substitution (taking as zero the unknown mesh points that
   surround a required mesh point) enables us to solve a bigger domain in
   which most of the mesh points are absent, i.e. are to be estimated.

3. Using the Gauss-Seidel iteration formula gives the method a quick ending,
   which saves estimation time.

CHAPTER-7

CONCLUSION

In this thesis paper, we have discussed some numerical methods for the solution
of ordinary differential equations (in Chapter-2 & Chapter-3), partial differential
equations (in Chapter-4) and boundary value problems (in chapter-5). Also, we have
proposed two modified numerical methods (in Chapter-6) in this thesis paper.

The conclusions of these discussions are coming next here in brief.

In Chapter-2, we see from Section-2.3 and Section-2.5 that both Taylor's series
method and Picard's method of successive approximations agree to eight decimal
places with the exact solution for the given initial value problem. But from the
comparative discussion of them in Section-2.6, we can conclude that Picard's method
of successive approximations is better than Taylor's series method in this case.

Also, from Section-2.8 it can be said that the computed values of 𝑦 deviate rapidly
in Euler's method, and this disturbance is resolved in Section-2.9 by the modified
Euler's method.

In Chapter-3, from the comparison between the predictor-corrector method and
the Runge-Kutta method in Section-3.12, we have seen that finding the local truncation
error in the Runge-Kutta method is more laborious than in the predictor-corrector
method, but the self-starting characteristic of the Runge-Kutta method makes it more
favorable than the predictor-corrector method. Also, the Runge-Kutta method can be
used for a wider range of solutions and is stable for a suitable step size.

Thus, we can conclude that for practical purposes the Runge-Kutta method is to
be chosen for better accuracy.

In Chapter-4, from the comparison between the iteration method and the relaxation
method in Section-4.10, we have seen that the iteration method is a slow, sure and
lengthy process whereas the relaxation method is a rapid, less certain and short process
to get a solution of partial differential equations under certain conditions. Also, the
iteration method is self-correcting and has a smaller error bound than the relaxation
method.

Moreover, from Section-4.12, we have seen that to solve a physical problem by the
iteration method or the relaxation method, it needs to be formulated as a partial
differential equation, whereas the Rayleigh-Ritz method will give an approximate
solution without any such formulation. It is to be noted here that the Rayleigh-Ritz
method is quite long and involves complexity in the calculation.

Thus, we can choose the iteration method as the best among the three methods,
and the Rayleigh-Ritz method would probably be the third one in practice.

In Chapter-5, from Section-5.3 and Section-5.5, we have seen that a two-point
boundary value problem can be solved directly by the finite-difference method and no
other method is needed for its assistance, but the shooting method needs the help of
one of the other standard methods (i.e. Euler's method, the predictor-corrector method
or the Runge-Kutta method) after the primary formulation. Thus, we can take the
finite-difference method as the better method of the above two.

Also, from the Section-5.7, we have seen that Green’s function is applicable to
solve a two-point boundary value problem numerically.

Moreover, from Section-5.8, we can conclude that multi-order (fourth-order)
two-point boundary value problems of various cases can be solved numerically with
the help of the cubic B-spline method [25] with good accuracy.

Finally, in Chapter-6, we have proposed a modified form of Milne's predictor-
corrector method for solving an ordinary differential equation of first order and first
degree. Also, a combined form of the standard 5-point formula and the diagonal
5-point formula for solving partial differential equations of elliptic type has been
offered here.

Now, the advantages, limitations, and recommendations for future research
regarding the above two proposed methods are given below.

Advantages of the Milne’s (modified) predictor-corrector formulae:

1. Milne's (modified) predictor-corrector formulae estimate the value of 𝑦 for a
   given value of 𝑥 by means of five initial conditions, which is more
   contributive and logical.

2. Milne's (modified) predictor-corrector formulae need Newton's formula for
   forward interpolation calculated up to the fifth order, which will give
   better accuracy.

3. In Milne's (modified) corrector formula the co-efficient of ∆5 is zero, so
   the truncation error of that order vanishes; this will upgrade the level of
   accuracy of the method.

Advantages of the surrounding 9-point formula:

1. Since the surrounding 9-point formula depends upon all mesh points around it
to determine any mesh point, it is more contributive and logical, which may give
better accuracy.

2. The initial zero substitution may enable us to solve a bigger domain at which
most of the mesh points are absent.

3. Using the Gauss-Seidel iteration formula may give the method a quick ending,
   which will save estimation time.


Limitations of the Milne’s (modified) predictor-corrector formulae:

1. In Milne's (modified) predictor-corrector formulae it is required to use one
   more initial condition than in the previous formulae.

2. It needs a little more calculation time than the previous formulae.

Limitations of the surrounding 9-point formula:

1. The surrounding 9-point formula is not applicable to the domains having less
than nine mesh (grid) points.

2. It can be used to solve partial differential equations of elliptic type only.

Recommendations for future research:

We can establish the advantages mentioned above by suitable applications and
comparisons. Due to the limited length of the thesis paper, we have omitted these
demonstrations; in Section-6.3 and Section-6.5 we have only shown some applications
of these methods with comments comparing them with exact solutions.

Therefore, in the future, these proofs are to be tried.

Further work can be done:

1. To measure the efficiency of Milne's (modified) predictor-corrector formulae
   and the surrounding 9-point formula by comparing them with all the previous
   methods.

2. To construct a generalized predictor-corrector formula for solving ordinary
   differential equations of the first order and first degree. Also, formulae
   similar to the surrounding 9-point formula for solving partial differential
   equations of parabolic and hyperbolic types are to be constructed.

3. To implement Milne's (modified) predictor-corrector formulae and the
   surrounding 9-point formula in real-world problems.

REFERENCES

[01]. ANTHONY RALSTON, PHILIP RABINOWITZ, 1988. A first course in
numerical analysis (McGraw-Hill Book Company.). P.196

[02]. R. VASISTHA, VIPIN VASISTHA, 1999. Numerical analysis (Kedar Nath-


Ram Nath, Meerut.). P.265

[03]. BRIAN BRADIE, 2007. A friendly introduction to numerical analysis


(Pearson Prentice Hall, New Delhi.). P.588

[04]. CURTIS F. GERALD, PATRICK O. WHEATLEY, 1970. Applied


numerical analysis (Addison-Wesley publishing company.). P.340

[05]. Dr. B. D. SHARMA, 2006. Differential equations (Kedar Nath-Ram Nath,


Meerut.). P.01

[06]. Dr. B. S. GOEL, Dr. S. K. MITTAL, 1995. Numerical analysis (Pragati


Prakashan, India.). P.518

[07]. E. L. REISS, A. J. CALLEGARI, D. S. AHLUWALIA, 1776. Ordinary


Differential Equation with Applications, Holt, Rinehart and Winston, New Cork.

[08]. F. LANG, XIAO-PING XU, 2011. A new cubic B-spline method for linear
fifth order boundary value problems (Journal of Applied Mathematics and
Computing 36 (2011).) P.101

[09]. FRANCIS SCHELD, Ph.D., 1988. Numerical analysis (Schaum’s Outline


Series McGraw-Hill.). P.471

[10]. IAN N. SNEDDON, 1957. Elementary of Partial differential equations


(McGraw-Hill Book Company, INC.). P.327

[11]. JAMES B. SCARBOROUGH, Ph.D., 1966. Numerical mathematical


analysis (Oxford and IBM Publishing Co. Pvt. Ltd.). P.310

[12]. J. N. SHARMA, 2004. Numerical methods for Engineers and Scientists


(Narosa Publishing House, New Delhi.). P.222

[13]. M. DEHGHAN, M. LAKESTANI, 2008. Numerical solution of nonlinear


system of second-order boundary value problems using cubic B-spline scaling
functions (International Journal of Computer Mathematics, 85(9).). P.1455

[14]. M. D. RAISINGHANIA, S. CHAND, 2007. Integral equations and


Boundary value problems (S. Chand and Company Ltd.). P.11.5
References

[15]. M. KUMAR, Y. GUPTA, 2010. Methods for solving singular boundary value
problems using splines (A review, Journal of Applied Mathematics and Computing
32(2010).). P.265

[16]. N. CAGLAR, H. CAGLAR, K. ELFAITURI, 2006. B-spline interpolation


compared with finite difference, finite element and finite volume methods which
applied to two point boundary value problems, Applied Mathematics and
Computation 175 (2006). P.72

[17]. P. M. PRENTER, 1989. Splines and variation methods, John Wiley & sons,
New York.

[18]. P. N. CHATTERJI, 1999. Numerical analysis (Rajhans Prakashan Mandir,


Meerut.). P.528

[19]. R. A. USMANI, 1978. Discrete methods for boundary-value problems with


Engineering application, Mathematics of Computation, 32 (1978). P.1087

[20]. SAMUEL D. CONTE, CARL DE BOOR, 1980. Elementary numerical


analysis (McGraw-Hill Book Company.). P.432

[21]. STEVEN C. CHAPRA, Ph.D., RAYMOND P. CANDLE, Ph.D., 1990.


Numerical methods for Engineers (McGraw-Hill Book Company.). P.812

[22]. S. BALACHANDRA RAO, C. K. SHANTHA, 2000. Numerical methods


(Universities Press India Ltd.). P.359

[23]. S. S. SASTRY, 2002. Introductory methods of numerical analysis (Prentic-


Hall of India Private Limited.). P.267

[24]. WARD CHENEY, DAVID KINCAID, 1980. Numerical mathematics and


computing (Books/Cole Publishing Company Monterey, California.). P.362

[25]. Y. GUPTA, P. K. SRIVASTAVA, 2011, International Journal of Computer


Technology and Application, Vol 2(5). P.1426
