MATHEMATICS-III
MATH F211
Dr. Suresh Kumar
Note: Some concepts of Differential Equations are briefly described here just to help the students.
Therefore, the following study material is expected to be useful but not exhaustive for the Mathematics-III
course. For detailed study, the students are advised to attend the lecture/tutorial classes regularly, and
consult the textbook prescribed in the handout of the course.
3 Second Order DE
3.1 Second Order LDE
3.2 Use of known solution to find another
3.3 Homogeneous LDE with Constant Coefficients
3.4 Method of Undetermined Coefficients
3.5 Method of Variation of Parameters
3.6 Operator Methods

6 Fourier Series
6.1 Introduction
6.2 Dirichlet's conditions for convergence
6.3 Fourier series for even and odd functions
6.4 Fourier series on arbitrary intervals

9 Laplace Transforms
9.1 Definitions of Laplace and inverse Laplace transforms
9.2 Laplace transforms of some elementary functions
9.3 Sufficient conditions for the existence of Laplace transform
9.4 Some more Laplace transform formulas
9.4.1 Laplace transform of a function multiplied by e^{ax}
9.4.2 Laplace transform of derivatives of a function
9.4.3 Laplace transform of integral of a function
9.4.4 Laplace transform of a function multiplied by x
9.4.5 Laplace transform of a function divided by x
9.5 Solution of DE using Laplace transform
9.6 Solution of integral equations
9.7 Heaviside or Unit Step Function
9.8 Dirac Delta Function or Unit Impulse Function
Chapter 1
If S = 4πr^2 is the surface area of a sphere of radius r(t), then differentiating with respect to t gives

dS/dt = 8πr (dr/dt).
Formally, we define a differential equation as follows: Any equation (non-identity) involving derivatives
of dependent variable(s) with respect to independent variable(s) is called a differential equation(DE).
Hereafter, we shall use the abbreviation DE for the phrase ‘differential equation’ and its plural ‘dif-
ferential equations’ as well.
4
Mathematics-III 5
Ordinary DE
A DE involving derivatives with respect to one independent variable is called an ordinary DE.
An ordinary DE of order n, in general, can be expressed in the form

f(x, y, y', y'', ..., y^(n)) = 0.

For example, first and second order ordinary DE can be written as

f(x, y, y') = 0,

f(x, y, y', y'') = 0.
Partial DE
A DE involving partial derivatives with respect to two or more independent variables is called a partial DE.
For example, the well known Laplace equation
∂^2u/∂x^2 + ∂^2u/∂y^2 = 0,
is a partial DE, which carries the second order partial derivatives of the dependent variable u(x, y) with
respect to the independent variables x and y.
Note:
Hereafter, we shall talk about ordinary DE only. So DE shall mean ordinary DE unless otherwise stated.
Linear DE
A DE is said to be linear if the dependent variable and its derivatives occur in first degree and are not
multiplied together.
A linear DE (LDE) of order n can be expressed in the form

a0(x) y^(n) + a1(x) y^(n-1) + ... + an(x) y = r(x),

where a0(x), a1(x), ..., an(x) and r(x) are functions of x alone.
Non-linear DE
If a DE is not linear, then it is said to be non-linear.
1.2 Solutions of DE
Consider the nth order DE

f(x, y, y', ..., y^(n)) = 0. (1.1)

The solution y = -x^2/4 of the DE y = xy' + (y')^2 can not be obtained from its general solution y = cx + c^2 for
any choice of the arbitrary constant c. Hence, y = -x^2/4 is a singular solution of the DE y = xy' + (y')^2.
Note: Considering the types of solutions discussed above, we can say that a solution of (1.1) is any
relation, explicit or implicit, between x and y that does not involve derivatives and satisfies (1.1) identically.
If all n conditions are specified at a single point, say x0, then the DE (1.1) with these n conditions is said to
be an initial value problem (IVP). On the other hand, if k conditions are specified at one point, say x0, while
the remaining n - k conditions are specified at some other point, say x1, then the DE (1.1) with the given
conditions at two different points is said to be a boundary value problem (BVP).
g(x, y, y') = 0. (2.1)

Sometimes, it is possible to write the first order DE (2.1) in the canonical form

y' = f(x, y). (2.2)

If f(x, y) factors into a function of x times a function of y, the DE is said to be in variable separable form:

y' = F(x)G(y), (2.3)

where F(x) is a function of x alone, and G(y) is a function of y alone. Equation (2.3) can be rewritten as

dy/G(y) = F(x) dx,

which, on integration, yields the solution

∫ dy/G(y) = ∫ F(x) dx + C,

where C is a constant of integration.
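As a quick numerical sanity check of the recipe, take the illustrative separable DE y' = xy (my example, not from the notes): here F(x) = x and G(y) = y, and separating gives ln|y| = x^2/2 + C, i.e. y = A e^{x^2/2}. The candidate solution should satisfy the DE at every point:

```python
import math

# Sketch: verify the separable-DE recipe on the illustrative example y' = x*y,
# whose separated-and-integrated solution is y = A*exp(x**2/2).

def y(x, A=2.0):          # explicit solution for one choice of the constant
    return A * math.exp(x * x / 2)

def dy_dx(x, h=1e-6):     # central finite difference
    return (y(x + h) - y(x - h)) / (2 * h)

# The solution should satisfy y' = x*y at every point.
for x in [0.0, 0.5, 1.0, 1.5]:
    assert abs(dy_dx(x) - x * y(x)) < 1e-4
```

The same pattern (differentiate the candidate solution numerically, compare with the right-hand side) is a cheap way to check any solved example in this chapter.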
Homogeneous DE
A function h(x, y) is said to be a homogeneous function of degree n if h(tx, ty) = t^n h(x, y). A DE of the form
M(x, y)dx + N(x, y)dy = 0 is said to be homogeneous if M(x, y) and N(x, y) are homogeneous functions
of the same degree. The DE M(x, y)dx + N(x, y)dy = 0 can be rewritten as y' = -M(x, y)/N(x, y) = f(x, y)
(say). Therefore, a DE expressed in the form y' = f(x, y) is homogeneous if f(tx, ty) = f(x, y); choosing t = 1/x gives f(x, y) = f(1, y/x).
To solve the homogeneous DE, we use the transformation y = vx, where v is a function of x. This gives
y' = v + xv'. So the DE y' = f(x, y) transforms to

v + xv' = f(1, v),

which can be rearranged in the variable separable form

dv/(f(1, v) - v) = dx/x.

Integrating both sides, we get the general solution

∫ dv/(f(1, v) - v) = ln x + C.
Ex. 2.1.3. Solve y' = (x + y)/(x - y).

Sol. 2.1.3. tan^{-1}(y/x) = log √(x^2 + y^2) + c.
Consider the DE y' = (ax + by + c)/(px + qy + r), where a, b, c, p, q and r are constants. In case a/p = b/q = m (say), we have ax + by = m(px + qy). Then
the transformation px + qy = t transforms the given DE into variable separable form. Now, consider
that a/p ≠ b/q. In this case, we use the transformations x = X + h and y = Y + k, where h and k are
constants to be determined from the equations ah + bk + c = 0 and ph + qk + r = 0. The equation
y' = (ax + by + c)/(px + qy + r) then transforms to

dY/dX = (aX + bY)/(pX + qY),

which is a homogeneous DE in X and Y.
Ex. 2.1.4. Solve y' = (x + y + 4)/(x - y - 6).

Sol. 2.1.4. tan^{-1}((y + 5)/(x - 1)) = log √((x - 1)^2 + (y + 5)^2) + c.
2.2 Exact DE
The first order DE dy/dx = f(x, y) can be written in the canonical form M(x, y)dx + N(x, y)dy = 0, where
f(x, y) = -M(x, y)/N(x, y). It is said to be an exact DE if M dx + N dy is an exact differential of some
function, say F(x, y), that is, M dx + N dy = dF.
The following theorem provides the necessary and sufficient condition for a DE to be exact.

Necessary and sufficient condition for exact DE: If M(x, y) and N(x, y) possess continuous first
order partial derivatives, then the DE M(x, y)dx + N(x, y)dy = 0 is exact if and only if ∂M/∂y = ∂N/∂x.
Proof. First assume that the DE M(x, y)dx + N(x, y)dy = 0 is exact. Then by definition, there exists
some function F(x, y) such that

M dx + N dy = dF. (2.4)

Also F(x, y) is a function of x and y. So from the theory of partial differentiation, we have

(∂F/∂x)dx + (∂F/∂y)dy = dF. (2.5)

From (2.4) and (2.5), we obtain

M = ∂F/∂x, N = ∂F/∂y. (2.6)

⇒ ∂M/∂y = ∂^2F/∂y∂x, ∂N/∂x = ∂^2F/∂x∂y. (2.7)

Given that M(x, y) and N(x, y) possess continuous first order partial derivatives, the mixed partials
∂^2F/∂y∂x and ∂^2F/∂x∂y are continuous functions, which in turn implies that ∂^2F/∂y∂x = ∂^2F/∂x∂y. Hence, (2.7) gives

∂M/∂y = ∂N/∂x. (2.8)
Conversely, assume that the condition (2.8) is satisfied. We shall prove that there exists a function
F(x, y) such that equation (2.4), and hence (2.6), is satisfied. Integrating the first of the equations in (2.6)
w.r.t. x, we get

F = ∫ M dx + g(y). (2.9)

⇒ ∂F/∂y = ∂/∂y ∫ M dx + g'(y).

⇒ N = ∂/∂y ∫ M dx + g'(y).

⇒ g(y) = ∫ [N - ∂/∂y ∫ M dx] dy. (2.10)
For (2.10) to define g as a function of y alone, we need:

The integrand N - ∂/∂y ∫ M dx is a function of y only.

⇒ ∂/∂x [N - ∂/∂y ∫ M dx] = 0.

⇒ ∂N/∂x - ∂^2/∂x∂y ∫ M dx = 0.

⇒ ∂N/∂x - ∂/∂y (∂/∂x ∫ M dx) = 0.

⇒ ∂N/∂x - ∂M/∂y = 0,

which is true in view of (2.8). This completes the proof.
Note. If the DE M dx + N dy = 0 is exact, then in view of (2.9) and (2.10) the solution F(x, y) = c reads as

∫ M dx + ∫ [N - ∂/∂y ∫ M dx] dy = c.
Ex. Test the equation e^y dx + (xe^y + 2y)dy = 0 for exactness and solve it if it is exact.

Sol. Comparing the given equation with M dx + N dy = 0, we get

M = e^y, N = xe^y + 2y.

⇒ ∂M/∂y = e^y = ∂N/∂x.

This shows that the given DE is exact, and therefore its solution is given by

∫ M dx + ∫ [N - ∂/∂y ∫ M dx] dy = c.

⇒ ∫ e^y dx + ∫ [xe^y + 2y - ∂/∂y ∫ e^y dx] dy = c.

⇒ xe^y + ∫ [xe^y + 2y - ∂/∂y (xe^y)] dy = c.

⇒ xe^y + ∫ (xe^y + 2y - xe^y) dy = c.

⇒ xe^y + y^2 = c.
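Both steps of this example can be checked numerically: the exactness test ∂M/∂y = ∂N/∂x, and the defining property of the solution, dF = M dx + N dy for F(x, y) = xe^y + y^2. A minimal sketch (sample points are mine):

```python
import math

# Sketch: numerical check of the worked example e^y dx + (x e^y + 2y) dy = 0.
M = lambda x, y: math.exp(y)
N = lambda x, y: x * math.exp(y) + 2 * y

h = 1e-6
for x, y in [(0.5, 0.2), (1.0, -1.0), (2.0, 0.7)]:
    My = (M(x, y + h) - M(x, y - h)) / (2 * h)   # dM/dy
    Nx = (N(x + h, y) - N(x - h, y)) / (2 * h)   # dN/dx
    assert abs(My - Nx) < 1e-6                   # exactness test

# The solution F(x, y) = x e^y + y^2 should satisfy F_x = M and F_y = N.
F = lambda x, y: x * math.exp(y) + y * y
for x, y in [(0.5, 0.2), (2.0, 0.7)]:
    Fx = (F(x + h, y) - F(x - h, y)) / (2 * h)
    Fy = (F(x, y + h) - F(x, y - h)) / (2 * h)
    assert abs(Fx - M(x, y)) < 1e-5 and abs(Fy - N(x, y)) < 1e-5
```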
A function μ(x, y) is an integrating factor (IF) of M dx + N dy = 0 if μM dx + μN dy = 0 is exact, that is, ∂(μM)/∂y = ∂(μN)/∂x.

⇒ μ ∂M/∂y + M ∂μ/∂y = μ ∂N/∂x + N ∂μ/∂x.

⇒ (1/μ)[N ∂μ/∂x - M ∂μ/∂y] = ∂M/∂y - ∂N/∂x. (2.11)

We can not determine μ in general from (2.11). If μ happens to be a function of x only, then (2.11)
reduces to

(1/μ)(dμ/dx) = (1/N)[∂M/∂y - ∂N/∂x] = h(x) (say).

⇒ dμ/μ = h(x)dx.

⇒ μ = e^{∫h(x)dx}.

Thus, if (1/N)[∂M/∂y - ∂N/∂x] = h(x) is a function of x only, then the IF is μ = e^{∫h(x)dx}.
Similarly, if (1/M)[∂N/∂x - ∂M/∂y] = h(y) is a function of y only, then the IF is μ = e^{∫h(y)dy}.
The first order LDE y' + p(x)y = q(x) can be written as

(p(x)y - q(x))dx + dy = 0.

Here M = p(x)y - q(x) and N = 1, so (1/N)[∂M/∂y - ∂N/∂x] = p(x), and the IF is e^{∫p(x)dx}. Multiplying the LDE by this IF, we get

d/dx [y e^{∫p(x)dx}] = q(x) e^{∫p(x)dx}.

⇒ y e^{∫p(x)dx} = ∫ q(x) e^{∫p(x)dx} dx + c.
2.3.3 Bernoulli's DE

A non-linear DE of the form y' + p(x)y = q(x)y^n is called Bernoulli's DE, which can be reduced to a LDE
by dividing it by y^n and then substituting y^{1-n} = z.

Ex. Solve y' + xy = x^3 y^3.

Sol. y^{-2} = 1 + x^2 + c e^{x^2}.
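The Bernoulli answer can be verified numerically: with c = 1 (my choice, so that 1 + x^2 + c e^{x^2} > 0 everywhere), y(x) = (1 + x^2 + e^{x^2})^{-1/2} should satisfy y' + xy = x^3 y^3:

```python
import math

# Sketch: verify y^{-2} = 1 + x^2 + c e^{x^2} against y' + x y = x^3 y^3
# for the particular choice c = 1.

def y(x, c=1.0):
    return (1 + x * x + c * math.exp(x * x)) ** -0.5

def dy(x, h=1e-6):
    return (y(x + h) - y(x - h)) / (2 * h)

for x in [0.0, 0.4, 1.0, 1.5]:
    residual = dy(x) + x * y(x) - x ** 3 * y(x) ** 3
    assert abs(residual) < 1e-5
```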
2.3.4 IF of homogeneous DE

If M(x, y)dx + N(x, y)dy = 0 is a homogeneous DE, then its IF is 1/(Mx + Ny) provided Mx + Ny ≠ 0.
In case Mx + Ny = 0, the IF is 1/x^2 or 1/y^2 or 1/(xy).
2.4 Clairaut's DE

A Clairaut's DE is of the form

y = xy' + f(y'), (2.13)

which can be solved for x as

x = y/p - f(p)/p, (p = y'). (2.14)
So the parametric equations x = -f'(t) and y = f(t) - tf'(t) define another solution of (2.13). It is
called the singular solution of (2.13).
It should be noted that the straight lines given by the general solution y = cx + f(c) are tangential
to the curve given by the singular solution x = -f'(t), y = f(t) - tf'(t). Hence, the singular solution
is an envelope of the family of straight lines of the general solution, as illustrated in the following example.
Note: In general, a given DE need not possess a solution. For example, |y'| + |y| + 1 = 0 has no
solution. The DE |y'| + |y| = 0 has only one solution, y = 0.

Existence and Uniqueness Theorem: Let f(x, y) be continuous in a closed rectangular region R = {(x, y) : |x - x0| ≤
a, |y - y0| ≤ b}, and let there exist some constant M > 0 such that |f(x, y)| ≤ M for all (x, y) ∈ R. Suppose
f(x, y) satisfies the Lipschitz condition in R with respect to y, that is, there exists a constant L such that
|f(x, y1) - f(x, y2)| ≤ L|y1 - y2| for all (x, y1), (x, y2) ∈ R. Then there exists a unique solution of the
IVP y' = f(x, y), y(x0) = y0 in the interval [x0 - h, x0 + h], where h = min{a, b/M}.

Picard's Theorem: Let f(x, y) and ∂f/∂y be continuous in a closed rectangular region R. If (x0, y0) is
any point in R, then there exists some constant h > 0 such that the IVP y' = f(x, y), y(x0) = y0 has a
unique solution in the interval [x0 - h, x0 + h].
Chapter 3
Second Order DE
A second order DE, in general, can be expressed in the form f(x, y, y', y'') = 0. We shall mainly study the second order LDE

y'' + p(x)y' + q(x)y = r(x). (3.1)

If r(x) = 0, then it is called homogeneous, otherwise non-homogeneous. The following theorem guarantees the existence and uniqueness of solution of (3.1).
Theorem 3.1.1. (Existence and Uniqueness of Solution): If p(x), q(x) and r(x) are continuous
functions on [a, b] and x0 is any point in [a, b], then the IVP y'' + p(x)y' + q(x)y = r(x), y(x0) = y0,
y'(x0) = y0' has a unique solution on [a, b].

Theorem 3.1.2. If p(x) and q(x) are continuous functions on [a, b] and x0 is any point in [a, b], then the
IVP y'' + p(x)y' + q(x)y = 0, y(x0) = 0, y'(x0) = 0 has only the trivial solution y = 0 on [a, b].
Proof. We find that y(x) = 0 satisfies the homogeneous DE y'' + p(x)y' + q(x)y = 0 along with the initial
conditions y(x0) = 0 and y'(x0) = 0. So the required result follows from the previous Theorem 3.1.1.
Theorem 3.1.3. (Linearity Principle) If y1 and y2 are any two solutions of the homogeneous LDE
y'' + p(x)y' + q(x)y = 0, then c1y1 + c2y2 is also a solution for any constants c1 and c2.

Proof. Since y1 and y2 are solutions, y1'' + p(x)y1' + q(x)y1 = 0 and y2'' + p(x)y2' + q(x)y2 = 0. Now
substituting c1y1 + c2y2 for y into the left hand side of the given homogeneous LDE, we obtain

(c1y1 + c2y2)'' + p(x)(c1y1 + c2y2)' + q(x)(c1y1 + c2y2) = c1(y1'' + p(x)y1' + q(x)y1) + c2(y2'' + p(x)y2' + q(x)y2) = 0.

Thus, c1y1 + c2y2, the linear combination of the solutions y1 and y2, is also a solution of the homogeneous
LDE.
Remark 3.1.1. The above result need not be true for a non-homogeneous or non-linear DE.
Definition 3.1.1. (Linearly Independent and Linearly Dependent Functions) Two functions
f(x) and g(x) are said to be linearly independent (LI) on [a, b] if neither is a constant multiple of the other
on [a, b]. Functions which are not LI are known as linearly dependent (LD) functions.
For example, the functions x + 1 and x^2 are LI on [1, 5], while the functions x^2 + 1 and 3x^2 + 3 are LD
on [1, 5]. The functions sin x and cos x are LI on any interval.
Definition 3.1.2. (Wronskian): The Wronskian of two functions y1(x) and y2(x) is defined as the determinant

| y1   y2  |
| y1'  y2' |

and is denoted by W(y1, y2).

∴ W(y1, y2) = y1y2' - y2y1'.
Lemma 3.1.1. (Wronskian of Solutions of Homogeneous LDE) The Wronskian of two solutions
y1(x) and y2(x), defined on [a, b], of a homogeneous LDE y'' + p(x)y' + q(x)y = 0 is either identically zero
or never zero.

Proof. Since y1 and y2 are solutions of y'' + p(x)y' + q(x)y = 0, we have

y1(y2'' + p(x)y2' + q(x)y2) - y2(y1'' + p(x)y1' + q(x)y1) = 0.

⇒ dW/dx + p(x)W = 0. (∵ W = y1y2' - y2y1' and dW/dx = y1y2'' - y2y1'')

⇒ W = c e^{-∫p(x)dx}, where c is a constant of integration.

Since the exponential factor never vanishes, W is identically zero if c = 0 and never zero if c ≠ 0.
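The formula W = c e^{-∫p(x)dx} can be checked on a concrete equation (my example, not from the notes): y'' + 3y' + 2y = 0 has solutions e^{-x} and e^{-2x}, so W should equal c e^{-3x} for some constant c; a direct computation gives c = -1:

```python
import math

# Sketch: check W = c * exp(-∫p dx) on y'' + 3y' + 2y = 0 (p = 3),
# with solutions y1 = e^{-x}, y2 = e^{-2x}. Then W = y1*y2' - y2*y1' = -e^{-3x}.

def W(x):
    y1, d1 = math.exp(-x), -math.exp(-x)
    y2, d2 = math.exp(-2 * x), -2 * math.exp(-2 * x)
    return y1 * d2 - y2 * d1

for x in [0.0, 0.5, 1.3]:
    assert abs(W(x) - (-1.0) * math.exp(-3 * x)) < 1e-12
```

Note that this W is never zero, consistent with the lemma and with e^{-x}, e^{-2x} being LI.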
Lemma 3.1.2. (Wronskian of LD Solutions) Two solutions y1 and y2, defined on [a, b], of a homogeneous
LDE y'' + p(x)y' + q(x)y = 0 are LD if and only if W(y1, y2) = 0 for all x ∈ [a, b].

Proof. If y1 and y2 are LD, then there exists some constant c such that y1(x) = cy2(x) for all x ∈ [a, b].
It follows that W(y1, y2) = y1y2' - y2y1' = cy2y2' - cy2y2' = 0 for all x ∈ [a, b].
Conversely, let W(y1, y2) = y1y2' - y2y1' = 0 for all x ∈ [a, b]. Now, there are two possibilities for
y1. First, y1 = 0 for all x ∈ [a, b]. In this case, we have y1 = 0 = 0·y2 for all x ∈ [a, b], and consequently
y1 and y2 are LD. Next, if y1 is not identically 0 in [a, b] and x0 is any point in [a, b] such that y1(x0) ≠ 0,
then continuity of y1 ensures the existence of a subinterval [c, d] containing x0 in [a, b] such that y1 ≠ 0
for all x ∈ [c, d]. Dividing W(y1, y2) = y1y2' - y2y1' = 0 by y1^2, we get (y1y2' - y2y1')/y1^2 = (y2/y1)' = 0. So
we have y2/y1 = k for all x ∈ [c, d], where k is some constant. Thus the solution y2 - ky1 vanishes identically
on [c, d]; in particular, it and its derivative vanish at x0, so by Theorem 3.1.2, y2 - ky1 = 0 on all of [a, b].
Hence y1 and y2 are LD on [a, b]. This completes the proof.
Theorem 3.1.4. (General Solution of Homogeneous LDE) If y1(x) and y2(x) are two LI solutions
of a homogeneous LDE y'' + p(x)y' + q(x)y = 0 on [a, b], then c1y1(x) + c2y2(x), where c1 and c2 are
arbitrary constants, is the general solution of the homogeneous LDE.

Proof. Let y(x) be any solution of y'' + p(x)y' + q(x)y = 0. We shall prove that there exist unique
constants c1 and c2 such that y = c1y1 + c2y2. By Theorem 3.1.1, it suffices to choose c1 and c2 so that, at some point x0 ∈ [a, b],

c1y1(x0) + c2y2(x0) = y(x0), (3.4)

c1y1'(x0) + c2y2'(x0) = y'(x0). (3.5)

Given that y1(x) and y2(x) are two LI solutions of the given homogeneous LDE on [a, b], the determinant
of this system, W(y1(x0), y2(x0)) = y1(x0)y2'(x0) - y2(x0)y1'(x0), is non-zero. This in turn implies that the system of
equations (3.4) and (3.5) has a unique solution (c1, c2). This completes the proof.

For example, y'' + y = 0 has two LI solutions y1 = cos x and y2 = sin x. So its general solution is
c1 cos x + c2 sin x.
If y1 is a known (non-vanishing) solution of y'' + p(x)y' + q(x)y = 0, we seek a second solution of the form y2 = vy1. Substituting y = vy1 into the DE, and using the fact that y1 is a solution, gives

v''y1 + v'(2y1' + p(x)y1) = 0.

⇒ v''/v' = -2y1'/y1 - p(x).

Integrating,

log v' = -2 log y1 - ∫p(x)dx.

⇒ v' = (1/y1^2) e^{-∫p(x)dx}.

⇒ v = ∫ (1/y1^2) e^{-∫p(x)dx} dx.

∴ y2 = y1 ∫ (1/y1^2) e^{-∫p(x)dx} dx.
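The reduction-of-order formula is easy to exercise on a concrete equation (my example): for y'' - 2y' + y = 0, with p(x) = -2 and known solution y1 = e^x, we get e^{-∫p dx} = e^{2x} and 1/y1^2 = e^{-2x}, so the integrand is 1 and y2 = x e^x. The resulting y2 should satisfy the DE:

```python
import math

# Sketch: reduction of order on y'' - 2y' + y = 0 with y1 = e^x
# gives y2 = x e^x; check it satisfies the DE by finite differences.

def y2(x):
    return x * math.exp(x)

def residual(x, h=1e-4):
    d1 = (y2(x + h) - y2(x - h)) / (2 * h)
    d2 = (y2(x + h) - 2 * y2(x) + y2(x - h)) / (h * h)
    return d2 - 2 * d1 + y2(x)

for x in [0.0, 0.7, 1.5]:
    assert abs(residual(x)) < 1e-5
```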
y = c1x + c2x^{-1}.
y'' + py' + qy = 0, (3.9)

where p and q are constants. Let y = e^{mx} be a solution of (3.9). Then, we have

(m^2 + pm + q)e^{mx} = 0.

⇒ m^2 + pm + q = 0, (3.10)

since e^{mx} ≠ 0.
Equation (3.10) is called the auxiliary equation (AE) and its roots are

m1 = (-p + √(p^2 - 4q))/2 and m2 = (-p - √(p^2 - 4q))/2.

Now three different cases arise depending on the nature of the roots of the AE.
(i) If p^2 - 4q > 0, then m1 and m2 are real and distinct. So e^{m1x} and e^{m2x} are two particular solutions of
(3.9). Also these are LI, being not constant multiples of each other. Therefore, the general solution of (3.9) is

y = c1e^{m1x} + c2e^{m2x}.
(ii) If p^2 - 4q < 0, then m1 and m2 are conjugate complex numbers. Let m1 = a + ib and m2 = a - ib.
Then we get the following solutions of (3.9):

e^{(a+ib)x} = e^{ax}(cos bx + i sin bx), (3.11)

e^{(a-ib)x} = e^{ax}(cos bx - i sin bx). (3.12)

As we are interested in real solutions of (3.9), adding (3.11) and (3.12) and then dividing by 2, we get a
real solution e^{ax} cos bx. Similarly, subtracting (3.12) from (3.11) and then dividing by 2i, we get another
real solution of (3.9), e^{ax} sin bx.
Now, we see that the particular solutions e^{ax} cos bx and e^{ax} sin bx are LI. So the general solution of (3.9) is

y = e^{ax}(c1 cos bx + c2 sin bx).
(iii) If p^2 - 4q = 0, then m1 and m2 are real and equal with m1 = m2 = -p/2. Therefore, one solution of
(3.9) is y1 = e^{-px/2}. Another LI solution of (3.9) is given by

y2 = y1 ∫ (1/y1^2) e^{-∫p dx} dx = e^{-px/2} ∫ e^{px} e^{-px} dx = e^{-px/2} ∫ dx = x e^{-px/2}.

So the general solution of (3.9) in this case is y = (c1 + c2x)e^{-px/2}.
Ex. 3.3.4. Show that a DE of the form x^2y'' + pxy' + qy = 0, where p, q are constants, reduces to a
homogeneous LDE with constant coefficients with the transformation x = e^z. Hence, solve the equation
x^2y'' + 2xy' - 6y = 0.

Sol. 3.3.4. We have x = e^z. So z = log x and dz/dx = 1/x. Therefore,

xy' = x (dy/dz)(dz/dx) = dy/dz.

⇒ x^2y'' = x^2 (d/dx)[(1/x)(dy/dz)] = x^2 [(1/x^2)(d^2y/dz^2) - (1/x^2)(dy/dz)] = d^2y/dz^2 - dy/dz.

Thus, the equation x^2y'' + pxy' + qy = 0 becomes

d^2y/dz^2 + (p - 1)(dy/dz) + qy = 0,

which is a homogeneous LDE with constant coefficients.
Hence, with x = e^z, the DE x^2y'' + 2xy' - 6y = 0 reduces to

d^2y/dz^2 + dy/dz - 6y = 0.

Its AE is m^2 + m - 6 = 0 with the roots m = -3, 2. So its solution is

y = c1e^{-3z} + c2e^{2z}.

⇒ y = c1x^{-3} + c2x^2.
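The answer to Ex. 3.3.4 can be verified directly against the original equation, without going through the substitution (constants c1 = 1, c2 = 2 are my arbitrary choice):

```python
# Sketch: check that y = c1 x^{-3} + c2 x^2 solves x^2 y'' + 2x y' - 6y = 0
# at a few points x > 0, using finite differences.

def y(x, c1=1.0, c2=2.0):
    return c1 * x ** -3 + c2 * x ** 2

def residual(x, h=1e-4):
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    return x * x * d2 + 2 * x * d1 - 6 * y(x)

for x in [0.8, 1.0, 2.0]:
    assert abs(residual(x)) < 1e-4
```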
Remark 3.3.1. The DE of the form x^2y'' + pxy' + qy = 0 is called Euler's or Cauchy's equidimensional
equation. If we denote dy/dz by Dz y, then xy' = Dz y and x^2y'' = Dz(Dz - 1)y. It can also be shown that
x^3y''' = Dz(Dz - 1)(Dz - 2)y, and in general x^n y^(n) = Dz(Dz - 1)...(Dz - n + 1)y. Thus, every Euler or
Cauchy equidimensional equation reduces to a homogeneous LDE with constant coefficients under the
transformation x = e^z.

Ex. 3.3.5. Solve x^2y'' + 3xy' + 10y = 0.

Sol. 3.3.5. y = x^{-1}(c1 cos(3 log x) + c2 sin(3 log x)).
Ex. 3.3.6. Show that the general homogeneous LDE y'' + p(x)y' + q(x)y = 0 is reducible to a homogeneous
LDE with constant coefficients if and only if (q' + 2pq)/q^{3/2} is constant, provided that z = ∫√(q(x)) dx.

Sol. 3.3.6. We have z = ∫√(q(x)) dx and dz/dx = √(q(x)). Therefore,

y' = (dy/dz)(dz/dx) = (dy/dz)√q.

⇒ y'' = q (d^2y/dz^2) + [q'/(2√q)](dy/dz).

Plugging the values of y' and y'' into y'' + p(x)y' + q(x)y = 0 and dividing by q, we obtain

d^2y/dz^2 + [(q' + 2pq)/(2q^{3/2})](dy/dz) + y = 0.

This is a homogeneous LDE with constant coefficients if and only if (q' + 2pq)/q^{3/2} is constant.
Ex. 3.3.7. Reduce xy'' + (x^2 - 1)y' + x^3y = 0 to a homogeneous LDE with constant coefficients and
hence solve it.

Sol. 3.3.7. Dividing by x, the DE becomes y'' + (x - 1/x)y' + x^2y = 0, so that p(x) = x - 1/x and q(x) = x^2. Then

(q' + 2pq)/q^{3/2} = [2x + 2(x - 1/x)x^2]/x^3 = 2.

This shows that the given DE is reducible to a homogeneous LDE with constant coefficients given by

d^2y/dz^2 + [(q' + 2pq)/(2q^{3/2})](dy/dz) + y = 0,

where z = ∫√(q(x)) dx = ∫x dx = x^2/2.

⇒ d^2y/dz^2 + dy/dz + y = 0.

Its AE is m^2 + m + 1 = 0 with roots m = -1/2 ± (√3/2)i. So the solution reads as

y = e^{-z/2}[c1 cos(√3 z/2) + c2 sin(√3 z/2)].

Substituting z = x^2/2, we have

y = e^{-x^2/4}[c1 cos(√3 x^2/4) + c2 sin(√3 x^2/4)].
Proof. Let y be any solution of y'' + p(x)y' + q(x)y = r(x). Then y - yp is a solution of the homogeneous
LDE y'' + p(x)y' + q(x)y = 0, since

(y - yp)'' + p(x)(y - yp)' + q(x)(y - yp) = (y'' + p(x)y' + q(x)y) - (yp'' + p(x)yp' + q(x)yp) = r(x) - r(x) = 0.

But yh = c1y1 + c2y2 is the general solution of y'' + p(x)y' + q(x)y = 0. So there exist suitable constants
c1 and c2 such that

y - yp = c1y1 + c2y2 = yh, or y = yh + yp.

This completes the proof.
In the next three sections, we shall learn some methods to find the particular solution yp of the
non-homogeneous LDE, namely the method of undetermined coefficients, the method of variation of
parameters, and the operator methods.
Ex. Find a particular solution of y'' - y' - 2y = 4x^2. Also determine the general solution.

Sol. Comparing the given equation with y'' + py' + qy = r(x), we get r(x) = 4x^2. Therefore, the possible
non-zero derivatives of r(x) are 8x and 8. Let yp = Ax^2 + Bx + C be a particular solution. Substituting
yp for y into the given DE, we obtain

2A - (2Ax + B) - 2(Ax^2 + Bx + C) = 4x^2. (3.14)

Equating coefficients of x^2, x and x^0 on both sides of (3.14), we have

-2A = 4, -2A - 2B = 0, 2A - B - 2C = 0.

⇒ A = -2, B = 2, C = -3.

Thus, the particular solution is

yp = -2x^2 + 2x - 3.

Next, we find the general solution yh of the corresponding homogeneous DE y'' - y' - 2y = 0. Here
the AE is m^2 - m - 2 = 0 with roots m = 2, -1. Therefore, yh = c1e^{2x} + c2e^{-x}, and the general solution is y = yh + yp = c1e^{2x} + c2e^{-x} - 2x^2 + 2x - 3.
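The full general solution can be checked numerically for an arbitrary choice of the constants (c1 = 1, c2 = -1 below are mine):

```python
import math

# Sketch: check y = c1 e^{2x} + c2 e^{-x} - 2x^2 + 2x - 3
# against y'' - y' - 2y = 4x^2, by finite differences.

def y(x, c1=1.0, c2=-1.0):
    return c1 * math.exp(2 * x) + c2 * math.exp(-x) - 2 * x * x + 2 * x - 3

def residual(x, h=1e-4):
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    return d2 - d1 - 2 * y(x) - 4 * x * x

for x in [0.0, 0.5, 1.0]:
    assert abs(residual(x)) < 1e-4
```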
Ex. Find a particular solution of y'' + y = sin x.

Sol. Comparing the given equation with y'' + py' + qy = r(x), we get r(x) = sin x. The only new function
arising from the derivatives of r(x) is cos x. Let yp = A sin x + B cos x be a trial particular solution. But
yp'' + yp = 0, that is, the assumed yp satisfies the corresponding homogeneous DE y'' + y = 0. Therefore,
we assume the revised particular solution yp = x(A sin x + B cos x). Substituting it for y into the given DE,
we obtain

2A cos x - 2B sin x = sin x. (3.15)

Equating coefficients of sin x and cos x on both sides, we get

A = 0, B = -1/2. (3.16)

Thus, the particular solution is

yp = -(1/2) x cos x.
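The resonant particular solution can be checked the same way (test points are mine):

```python
import math

# Sketch: check the particular solution yp = -(1/2) x cos x of y'' + y = sin x.

def yp(x):
    return -0.5 * x * math.cos(x)

def residual(x, h=1e-4):
    d2 = (yp(x + h) - 2 * yp(x) + yp(x - h)) / (h * h)
    return d2 + yp(x) - math.sin(x)

for x in [0.0, 1.0, 2.5]:
    assert abs(residual(x)) < 1e-6
```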
Let

y = c1y1 + c2y2 (3.18)

be the general solution of the corresponding homogeneous DE y'' + p(x)y' + q(x)y = 0. We replace the constants
by unknown functions v1(x) and v2(x), and attempt to determine these functions such that

y = v1y1 + v2y2 (3.19)

is a solution of y'' + p(x)y' + q(x)y = r(x). Since two conditions are needed to fix the two functions v1 and v2, we impose the auxiliary condition

v1'y1 + v2'y2 = 0. (3.20)

Substituting (3.19) into the non-homogeneous DE then yields

v1(y1'' + p(x)y1' + q(x)y1) + v2(y2'' + p(x)y2' + q(x)y2) + p(x)(v1'y1 + v2'y2) + v1'y1' + v2'y2' = r(x). (3.21)

Since y1 and y2 solve the homogeneous DE and (3.20) holds, (3.21) reduces to v1'y1' + v2'y2' = r(x). Solving this together with (3.20) for v1' and v2' (the determinant of the system is W(y1, y2)) and integrating, we obtain the particular solution

y = y1 ∫ [-y2 r(x)/W(y1, y2)] dx + y2 ∫ [y1 r(x)/W(y1, y2)] dx.
Ex. Solve y'' + y = csc x.

Sol. Comparing the given equation with y'' + p(x)y' + q(x)y = r(x), we get r(x) = csc x. The general
solution of the corresponding homogeneous equation y'' + y = 0 is y = c1 cos x + c2 sin x. Let y1 = cos x
and y2 = sin x. Then W(y1, y2) = 1, and hence by the method of variation of parameters, the particular
solution is obtained as

y = y1 ∫ [-y2 r(x)/W(y1, y2)] dx + y2 ∫ [y1 r(x)/W(y1, y2)] dx
  = cos x ∫ (-sin x csc x) dx + sin x ∫ cos x csc x dx
  = -x cos x + sin x log(sin x).
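This particular solution can be verified on (0, π), where sin x > 0 (test points are mine):

```python
import math

# Sketch: check y = -x cos x + sin x * log(sin x) against y'' + y = csc x
# on (0, pi), where sin x > 0 and the log is defined.

def y(x):
    return -x * math.cos(x) + math.sin(x) * math.log(math.sin(x))

def residual(x, h=1e-4):
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    return d2 + y(x) - 1.0 / math.sin(x)

for x in [0.5, 1.0, 2.0, 3.0]:
    assert abs(residual(x)) < 1e-4
```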
Writing the non-homogeneous LDE with constant coefficients as f(D)y = r(x), where D = d/dx, we call

y = [1/f(D)] r(x)

a particular solution of the DE. We can not operate 1/f(D) on r(x) in general. It depends on the forms of f(D)
and r(x). So we discuss the following cases.
(i) If a is a constant and f(D) = D - a, then the particular solution is given by

y = [1/(D - a)] r(x).

Operating D - a on both sides, we get

(D - a)y = r(x)

⇒ dy/dx - ay = r(x),

which is a LDE with IF = e^{-ax} and solution

y = e^{ax} ∫ r(x)e^{-ax} dx.

Thus, [1/(D - a)] r(x) = e^{ax} ∫ r(x)e^{-ax} dx.

If a = 0, then (1/D) r(x) = ∫ r(x)dx. This shows that 1/D stands for the integral operator. Hence, the inverse
of the differential operator is the integral operator.
Ex. Find a particular solution of y'' - y = e^{-x}.

Sol. We have (D^2 - 1)y = e^{-x}.

⇒ y = [1/((D - 1)(D + 1))] e^{-x}

⇒ y = [1/(D - 1)] [1/(D + 1)] e^{-x}

⇒ y = [1/(D - 1)] [e^{-x} ∫ e^{x} e^{-x} dx]

⇒ y = [1/(D - 1)] (xe^{-x})

⇒ y = e^{x} ∫ e^{-x} (xe^{-x}) dx

⇒ y = e^{-x} (-x/2 - 1/4).
Remark: In the above example, we applied the operators 1/(D + 1) and 1/(D - 1) successively. We could, however,
also apply the operators after making partial fractions, as illustrated in the following.
We have

y = [1/((D - 1)(D + 1))] e^{-x}
  = (1/2)[1/(D - 1) - 1/(D + 1)] e^{-x}
  = (1/2){[1/(D - 1)] e^{-x} - [1/(D + 1)] e^{-x}}
  = (1/2)[e^{x} ∫ e^{-x} e^{-x} dx - e^{-x} ∫ e^{x} e^{-x} dx]
  = e^{-x}(-1/4 - x/2).
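Both operator computations produced the same particular solution of y'' - y = e^{-x}, namely e^{-x}(-x/2 - 1/4); a finite-difference check confirms it:

```python
import math

# Sketch: check the particular solution y = e^{-x}(-x/2 - 1/4) of y'' - y = e^{-x}.

def yp(x):
    return math.exp(-x) * (-x / 2 - 0.25)

def residual(x, h=1e-4):
    d2 = (yp(x + h) - 2 * yp(x) + yp(x - h)) / (h * h)
    return d2 - yp(x) - math.exp(-x)

for x in [0.0, 1.0, 2.0]:
    assert abs(residual(x)) < 1e-6
```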
(ii) If r(x) is some polynomial in x, then we write the series expansion of 1/f(D) in ascending powers of D, as
illustrated in the following example. Consider

(D^2 + 1)y = x^2 + x + 3.

⇒ y = [1/(D^2 + 1)] (x^2 + x + 3)

⇒ y = (1 - D^2 + D^4 - ...)(x^2 + x + 3)

⇒ y = x^2 + x + 3 - 2 + 0 - ...

⇒ y = x^2 + x + 1,

which is the required particular solution.
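This one can be verified exactly, since y'' = 2 for a quadratic:

```python
# Sketch: check that y = x^2 + x + 1 satisfies (D^2 + 1)y = x^2 + x + 3 exactly:
# y'' = 2, so y'' + y = x^2 + x + 3.

def y(x):
    return x * x + x + 1

for x in [0.0, 1.0, -2.5]:
    assert 2 + y(x) == x * x + x + 3
```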
(iii) If r(x) = e^{kx}g(x), we use the exponential shift rule. Note that D(e^{kx}g(x)) = e^{kx}(D + k)g(x), and hence

D^2(e^{kx}g(x)) = D(e^{kx}(D + k)g(x)) = e^{kx}D(D + k)g(x) + ke^{kx}(D + k)g(x) = e^{kx}(D + k)^2 g(x).

In general, f(D)(e^{kx}g(x)) = e^{kx}f(D + k)g(x). Since one can express 1/f(D) in powers of D, we have in general

[1/f(D)](e^{kx}g(x)) = e^{kx}[1/f(D + k)]g(x).
Ex. Find a particular solution of y'' - 3y' + 2y = xe^{x}.

Sol. We have

y = [1/(D^2 - 3D + 2)] (xe^{x})

⇒ y = e^{x} [1/((D + 1)^2 - 3(D + 1) + 2)] x

⇒ y = e^{x} [1/(D^2 - D)] x

⇒ y = -e^{x} [1/(1 - D)] (1/D) x

⇒ y = -e^{x} (1 + D + D^2 + ...)(x^2/2)

⇒ y = -e^{x} (x^2/2 + x + 1).
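The shifted-operator result can again be confirmed by finite differences:

```python
import math

# Sketch: check the particular solution y = -e^x (x^2/2 + x + 1)
# of y'' - 3y' + 2y = x e^x.

def yp(x):
    return -math.exp(x) * (x * x / 2 + x + 1)

def residual(x, h=1e-4):
    d1 = (yp(x + h) - yp(x - h)) / (2 * h)
    d2 = (yp(x + h) - 2 * yp(x) + yp(x - h)) / (h * h)
    return d2 - 3 * d1 + 2 * yp(x) - x * math.exp(x)

for x in [0.0, 0.5, 1.0]:
    assert abs(residual(x)) < 1e-5
```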
Chapter 4
In this chapter, we discuss the qualitative behaviour of the solutions of the second order homogeneous LDE
given by y'' + p(x)y' + q(x)y = 0.
Theorem 4.1.1. (Sturm Separation Theorem) If y1(x) and y2(x) are two LI solutions of y'' + p(x)y' + q(x)y = 0, then their zeros are distinct and occur alternately: y1 vanishes exactly once between any two successive zeros of y2, and conversely.

Proof. Denoting the Wronskian W(y1, y2) by W(x), we have W(x) = y1(x)y2'(x) - y2(x)y1'(x). Since y1(x)
and y2(x) are LI, W(x) does not vanish. Let x1 and x2 be any two successive zeros of y2. We shall prove
that y1 vanishes exactly once between x1 and x2. Now x1 and x2 being zeros of y2, we have

W(x1) = y1(x1)y2'(x1), W(x2) = y1(x2)y2'(x2), (4.1)

which implies that y1(x1), y2'(x1), y1(x2), y2'(x2) are all non-zero, since W(x) does not vanish. Now y2
is continuous and has successive zeros x1 and x2. Therefore, if y2 is increasing at x1, then it must be
decreasing at x2, and vice versa. Mathematically speaking, y2'(x1) and y2'(x2) are of opposite sign. Also,
W(x), being a non-vanishing and continuous function, retains the same sign. So in view of (4.1), it is easy
to conclude that y1(x1) and y1(x2) must be of opposite sign. Therefore, y1 vanishes at least once between
x1 and x2. Further, y1 can not vanish more than once between x1 and x2. For if it does, then applying
the same argument as above, it can be proved that y2 has at least one zero between two zeros of y1 lying
between x1 and x2. But this would contradict the assumption that x1 and x2 are successive zeros of y2.
This completes the proof.

Ex. Two LI solutions of y'' + y = 0 are sin x and cos x. Also, between any two successive zeros of sin x,
there is exactly one zero of cos x, and vice versa.
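The sin/cos interlacing can be demonstrated by counting sign changes of cos x on each interval between successive zeros nπ of sin x (the grid-based counting scheme is mine):

```python
import math

# Sketch: between successive zeros n*pi of sin x, cos x should vanish exactly once.
# Count sign changes of cos on a fine grid just inside each interval.

def sign_changes(f, a, b, steps=10000):
    count, prev = 0, f(a)
    for i in range(1, steps + 1):
        cur = f(a + (b - a) * i / steps)
        if prev * cur < 0:
            count += 1
        prev = cur
    return count

for n in range(5):  # intervals (n*pi, (n+1)*pi)
    a, b = n * math.pi + 1e-6, (n + 1) * math.pi - 1e-6
    assert sign_changes(math.cos, a, b) == 1
```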
On setting the coefficient of u' equal to 0 and solving, we get v = e^{-(1/2)∫p(x)dx}. Then (4.3) reduces to

u'' + h(x)u = 0, (4.4)

where h(x) = q(x) - (1/4)p(x)^2 - (1/2)p'(x). The DE (4.4) is referred to as the normal form of the DE (4.2).

Remark: Since v = e^{-(1/2)∫p(x)dx} does not vanish and y = u(x)v(x), it follows that the solution y(x) of
(4.2) and the solution u(x) of (4.4) have the same zeros.
Theorem 4.2.1. If h(x) < 0, and if u(x) is a non-trivial solution of u'' + h(x)u = 0, then u(x) has at
most one zero.

Proof. Let x0 be a zero of u(x), so that u(x0) = 0. Then u'(x0) must be non-zero, otherwise u(x) would
be the trivial solution of u'' + h(x)u = 0 by Theorem 3.1.2. Suppose u'(x0) > 0. Then by continuity, u'(x)
is positive in some interval to the right of x0. So u(x) is an increasing function in that interval to the
right of x0. We claim that u(x) does not vanish anywhere to the right of x0. For if u(x) vanishes at
some point, say x2, to the right of x0, then u'(x) must vanish at some point x1 with x0 < x1 < x2.
Notice that x1 is a point of maximum of u(x), so u''(x1) ≤ 0 by the second derivative test for maxima.
But u''(x1) = -h(x1)u(x1) > 0 since h(x1) < 0 and u(x1) > 0, a contradiction. So u(x) can not vanish to the right of
x0. Likewise, we can show that u(x) does not vanish to the left of x0. A similar argument holds when
u'(x0) < 0. Hence, u(x) has at most one zero.
Theorem 4.2.2. If h(x) > 0 for all x > 0, and u(x) is a non-trivial solution of u'' + h(x)u = 0 such that
∫_1^∞ h(x)dx = ∞, then u(x) has infinitely many zeros on the positive x-axis.

Proof. Suppose u(x) has only a finite number of zeros on the positive x-axis, and let x0 > 1 be any number
greater than the largest zero of u(x). Without loss of generality, assume that u(x) > 0 for all x > x0. Let
g(x) = -u'(x)/u(x), so that
Ex. 4.2.1. Show that the zeros of the functions a sin x + b cos x and c sin x + d cos x are distinct and
occur alternately whenever ad - bc ≠ 0.

Sol. 4.2.1. The functions a sin x + b cos x and c sin x + d cos x are solutions of the DE y'' + y = 0. Also, the
Wronskian of a sin x + b cos x and c sin x + d cos x is non-zero if ad - bc ≠ 0, which in turn implies that the
two solutions are LI. Thus, by Theorem 4.1.1, the zeros of these functions occur alternately whenever
ad - bc ≠ 0.
Ex. 4.2.2. Find the normal form of Bessel's equation x^2y'' + xy' + (x^2 - p^2)y = 0, and use it to show
that every non-trivial solution has infinitely many positive zeros.

Sol. 4.2.2. Comparing the Bessel equation with y'' + p(x)y' + q(x)y = 0, we obtain p(x) = 1/x and
q(x) = (x^2 - p^2)/x^2. Next, we evaluate

h(x) = q(x) - (1/4)p(x)^2 - (1/2)p'(x) = 1 + (1 - 4p^2)/(4x^2).

Therefore, the normal form of Bessel's equation reads as

u'' + h(x)u = 0, or u'' + [1 + (1 - 4p^2)/(4x^2)]u = 0. (4.5)

Now we shall prove that every non-trivial solution u(x) of (4.5) has infinitely many positive zeros.

Case (i): 0 ≤ p ≤ 1/2. Then

h(x) = 1 + (1 - 4p^2)/(4x^2) = 1 + (1/x^2)(1/2 + p)(1/2 - p) ≥ 1 > 0 for all x > 0.

Also, we have

∫_1^∞ h(x)dx = ∫_1^∞ [1 + (1 - 4p^2)/(4x^2)] dx = ∞.

So by Theorem 4.2.2, every non-trivial solution u(x) has infinitely many positive zeros.
Case (ii): p > 1/2. Here 1 - 4p^2 < 0, but h(x) → 1 as x → ∞, so we can choose x0 > 0 such that h(x) > 0 for all x > x0. The substitution x = t + x0 transforms (4.5) into

d^2u/dt^2 + h1(t)u = 0, (4.6)

where h1(t) = 1 + (1 - 4p^2)/(4(t + x0)^2). We see that h1(t) > 0 for all t > 0, and

∫_1^∞ h1(t)dt = ∫_1^∞ [1 + (1 - 4p^2)/(4(t + x0)^2)] dt = ∞.

So by Theorem 4.2.2, every non-trivial solution u(t) of (4.6) has infinitely many positive zeros. Since
x = t + x0, the zeros of solutions of (4.5) and (4.6) differ only by x0, and x0 is a positive number. Therefore, every
non-trivial solution u(x) of (4.5) has infinitely many positive zeros.
From case (i) and case (ii), we conclude that every non-trivial solution u(x) of (4.5) has infinitely many
positive zeros.
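This oscillation can be observed numerically by integrating the normal form (4.5) on a long interval and counting sign changes of u (the crude integrator and the thresholds below are mine; the exact zero counts depend on the initial conditions, so only a generous lower bound is asserted):

```python
# Sketch: integrate u'' + (1 + (1-4p^2)/(4x^2)) u = 0 on [1, 60] with a
# symplectic Euler scheme and count sign changes of u, for p = 0 and p = 3.
# Zeros appear roughly pi apart for large x, so many should show up.

def count_zeros(p, a=1.0, b=60.0, n=200000):
    h = (b - a) / n
    u, v = 1.0, 0.0          # arbitrary non-trivial initial conditions at x = a
    zeros, x = 0, a
    for _ in range(n):
        hx = 1 + (1 - 4 * p * p) / (4 * x * x)
        v += h * (-hx * u)   # v approximates u'
        u_new = u + h * v
        if u * u_new < 0:    # sign change of u => a zero was crossed
            zeros += 1
        u, x = u_new, x + h
    return zeros

assert count_zeros(0) > 10
assert count_zeros(3) > 10
```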
Ex. 4.2.3. The hypothesis of Theorem 4.2.2 is not satisfied by the Euler equation x^2y'' + ky = 0, but the
conclusion is sometimes true and sometimes false, depending on the magnitude of the constant k. Show
that every non-trivial solution has infinitely many positive zeros if k > 1/4, and only a finite number if
k ≤ 1/4.
Sol. 4.2.3. Writing the equation as y'' + (k/x^2)y = 0 and comparing with u'' + h(x)u = 0, we get h(x) = k/x^2. Therefore,

∫_1^∞ h(x)dx = [-k/x]_{x=1}^{x=∞} = k, which is a finite number,

so the hypothesis of Theorem 4.2.2 fails. To solve the equation, use the transformation x = e^z (see Ex. 3.3.4), which gives

d^2y/dz^2 - dy/dz + ky = 0.

Its AE is

m^2 - m + k = 0 with the roots m1 = 1/2 + √(1/4 - k), m2 = 1/2 - √(1/4 - k).

Now three cases arise depending on the values of k:

(i) k < 1/4: m1, m2 are real and distinct, and y = c1x^{m1} + c2x^{m2}.

(ii) k = 1/4: m1 = m2 = 1/2, and y = √x (c1 + c2 log x).

(iii) k > 1/4: m = 1/2 ± i√(k - 1/4), and y = √x [c1 cos(√(k - 1/4) log x) + c2 sin(√(k - 1/4) log x)].

In each case, c1 and c2 are not both zero. In case (i) and case (ii), the solutions are non-oscillatory
and therefore possess at most a finite number of positive zeros. In case (iii), the solutions, being oscillatory in
log x, possess infinitely many positive zeros.
Theorem 4.2.3. If u(x) is a non-trivial solution of u'' + h(x)u = 0 on a closed interval [a, b], then u(x)
has at most a finite number of zeros in this interval.

Proof. Assume that u(x) has infinitely many zeros in the interval [a, b]. Then the infinite set of zeros of
u(x) is bounded. So by the Bolzano-Weierstrass theorem of advanced calculus, there exist some x0 in [a, b]
and a sequence {xn ≠ x0} of zeros of u(x) such that xn → x0 as n → ∞. Since u(x) is continuous and
differentiable, we have u(x0) = lim u(xn) = 0 and

u'(x0) = lim_{n→∞} [u(xn) - u(x0)]/(xn - x0) = 0.

By Theorem 3.1.2, it follows that u(x) is the trivial solution of u'' + h(x)u = 0, which is not true as per the
given hypothesis. Hence, u(x) can not have infinitely many zeros in the interval [a, b].
Theorem 4.2.4. (Sturm Comparison Theorem) If y(x) and z(x) are non-trivial solutions of y'' + q(x)y = 0 and z'' + r(x)z = 0 respectively, where q(x) and r(x) are positive functions such that q(x) > r(x), then y(x) vanishes at least once between any two successive zeros of z(x).
Proof. Let x1 and x2 be two successive zeros of z(x) with x1 < x2 . Let us assume that y(x) does not
vanish on the interval (x1 , x2 ). We shall prove the theorem by deducing a contradiction. Without loss of
generality, we assume that y(x) and z(x) both are positive on (x1 , x2 ), for either function can be replaced
by its negative if necessary. Now, denoting the Wronskian W (y, z) by W (x), we have
W(x) = y(x)z'(x) − z(x)y'(x). (4.7)
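As a numerical illustration of the theorem (not from the text), take r(x) = 1 with z = sin x, and q(x) = 4 > r(x) with y = sin 2x; the sketch below locates the zero of y between the successive zeros π and 2π of z. The bisection helper is an assumption of this sketch.

```python
import math

def zeros_on(f, a, b, n=10000):
    """Approximate the zeros of f on (a, b): scan for sign changes, then bisect."""
    roots = []
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    for u, v in zip(xs, xs[1:]):
        if f(u) * f(v) < 0:
            lo, hi = u, v
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots

# z'' + z = 0 has z = sin x with successive zeros at pi and 2*pi;
# y'' + 4y = 0 has y = sin 2x, and q = 4 > r = 1.
y_zeros = zeros_on(lambda x: math.sin(2 * x), math.pi + 1e-9, 2 * math.pi - 1e-9)
print(y_zeros)  # y vanishes at 3*pi/2, strictly between the zeros of z
```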
Then it can be proved that f(x) possesses derivatives of all orders in |x − x0| < R. Also, the series can be differentiated termwise in the sense that
f'(x) = Σ_{n=1}^∞ n an (x − x0)^{n−1} = a1 + 2a2(x − x0) + 3a3(x − x0)^2 + ........,
f''(x) = Σ_{n=2}^∞ n(n − 1) an (x − x0)^{n−2} = 2a2 + 3·2a3(x − x0) + ........,
and so on, and each of the resulting series converges for |x − x0| < R. The successive differentiated series suggest that an = f^(n)(x0)/n!. Also, the power series (5.2) can be integrated termwise provided the limits of integration lie inside the interval of convergence.
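Termwise differentiation and integration act directly on the coefficient list of a power series. The following sketch (an illustration, not from the text) applies them to the coefficients an = 1/n! of e^x about x0 = 0, for which both operations should reproduce the same coefficients (up to the shifted constant term for integration).

```python
from math import factorial

def diff_series(coeffs):
    """Termwise derivative: sum a_n x^n  ->  sum n*a_n x^(n-1)."""
    return [n * a for n, a in enumerate(coeffs)][1:]

def integrate_series(coeffs):
    """Termwise antiderivative with zero constant term."""
    return [0.0] + [a / (n + 1) for n, a in enumerate(coeffs)]

exp_coeffs = [1 / factorial(n) for n in range(12)]

# (e^x)' = e^x: the differentiated coefficient list matches the original.
print(max(abs(a - b) for a, b in zip(diff_series(exp_coeffs), exp_coeffs)))
# Integral of e^x is e^x - 1: same coefficients except the constant term.
print(max(abs(a - b) for a, b in zip(integrate_series(exp_coeffs)[1:], exp_coeffs[1:])))
```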
If we have another power series Σ_{n=0}^∞ bn(x − x0)^n converging to g(x) for |x − x0| < R, that is,
g(x) = Σ_{n=0}^∞ bn(x − x0)^n = b0 + b1(x − x0) + b2(x − x0)^2 + b3(x − x0)^3 + ........, (5.3)
then (5.2) and (5.3) can be added or subtracted termwise, that is,
f(x) ± g(x) = Σ_{n=0}^∞ (an ± bn)(x − x0)^n = (a0 ± b0) + (a1 ± b1)(x − x0) + (a2 ± b2)(x − x0)^2 + ........
where Rn = [f^(n+1)(ξ)/(n + 1)!] (x − x0)^{n+1}, and ξ is some number between x0 and x. Obviously, the power series Σ_{n=0}^∞ [f^(n)(x0)/n!] (x − x0)^n converges to f(x) for those values of x ∈ (x0 − R, x0 + R) for which Rn → 0 as n → ∞. Thus, for a given function f(x), Taylor's formula enables us to find the power series that converges to f(x). On the other hand, if a convergent power series is given, then it is not always possible to find/recognize its sum function. In fact, very few power series have sums that are elementary functions.
If the power series Σ_{n=0}^∞ [f^(n)(x0)/n!] (x − x0)^n converges to f(x) for all values of x in some neighbourhood of x0 (open interval containing x0), then f(x) is said to be analytic at x0, and the power series is called the Taylor series of f(x) at x0. Notice that f(x) is analytic at each point in the interval of convergence (x0 − R, x0 + R) of the power series.
Σ_{n=0}^∞ n an x^{n−1} − Σ_{n=0}^∞ an x^n = 0, (5.6)
which must be an identity in x since (5.4) is, by assumption, a solution of the given DE. So the coefficients of all powers of x must be zero. In particular, equating to 0 the coefficient of x^{n−1}, we obtain
n an − a_{n−1} = 0 or an = (1/n) a_{n−1}.
Substituting n = 1, 2, 3, ...., we get
a1 = a0,
a2 = (1/2)a1 = (1/2!)a0,
a3 = (1/3)a2 = (1/3!)a0,
and so on. Plugging the values of a1, a2, ..... into (5.4), we get
y = a0 + a0 x + (1/2!)a0 x^2 + (1/3!)a0 x^3 + ........
= a0 [1 + x + x^2/2! + x^3/3! + ..............].
Let us examine the validity of this solution. We know that the power series 1 + x + x^2/2! + x^3/3! + .............. converges for all x. It implies that the term by term differentiation carried out in (5.5) is valid for all x. Similarly, the difference of the two series (5.4) and (5.5) considered in (5.6) is valid for all x. It follows that y = a0 [1 + x + x^2/2! + x^3/3! + ..............] is a valid solution of the given DE for all x. Also, we know that e^x = 1 + x + x^2/2! + x^3/3! + ............... So y = a0 e^x is the general solution of the DE y' − y = 0, as expected.
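The recurrence an = a_{n−1}/n can be iterated numerically; the sketch below (an illustration, not from the text) sums the resulting series with a0 = 1 and compares it with e^x.

```python
import math

def series_solution(x, terms=40):
    """Sum a_n x^n where a_0 = 1 and a_n = a_{n-1}/n (so a_n = 1/n!)."""
    a, total = 1.0, 0.0
    for n in range(terms):
        total += a * x ** n
        a /= (n + 1)   # a_{n+1} = a_n / (n + 1)
    return total

print(series_solution(1.0), math.e)
print(series_solution(-2.0), math.exp(-2.0))
```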
If the functions p(x) and q(x) are analytic at x = x0, then x0 is called an ordinary point of the DE (5.7). If p(x) and/or q(x) fail to be analytic at x0, but (x − x0)p(x) and (x − x0)^2 q(x) are analytic at x0, then we say that x0 is a regular singular point of (5.7); otherwise we call x0 an irregular singular point of (5.7). For example, x = 0 is a regular singular point of the DE x^2 y'' + xy' + 2y = 0, and every non-zero real number is an ordinary point of the same DE. Also, x = 0 is an irregular singular point of the DE x^3 y'' + xy' + y = 0. The following theorem gives a criterion for the existence of a power series solution near an ordinary point.
Theorem 5.2.1. If a0, a1 are arbitrary constants, and x0 is an ordinary point of a DE y'' + p(x)y' + q(x)y = 0, then there exists a unique solution y(x) of the DE that is analytic at x0 such that y(x0) = a0 and y'(x0) = a1. Furthermore, the power series expansion of y(x) is valid in |x − x0| < R provided the power series expansions of p(x) and q(x) are valid in this interval.
The above theorem asserts that there exists a unique power series solution of the form
y(x) = Σ_{n=0}^∞ an(x − x0)^n = a0 + a1(x − x0) + a2(x − x0)^2 + a3(x − x0)^3 + ........,
about the ordinary point x0 satisfying the initial conditions y(x0) = a0 and y'(x0) = a1. The constants a2, a3 and so on are determined in terms of a0 and a1 as illustrated in the following examples.
Sol. 5.2.2. Here p(x) = 0 and q(x) = −1, both analytic at x = 0. So x = 0 is an ordinary point of the given DE, and there exists a power series solution
y = Σ_{n=0}^∞ an x^n = a0 + a1 x + a2 x^2 + a3 x^3 + ........, (5.8)
a5 = (1/(5·4)) a3 = (1/5!) a1,
and so on. Plugging the values of a2, a3, a4, a5, ..... into (5.8), we get
y = a0 + a1 x + (1/2!)a0 x^2 + (1/3!)a1 x^3 + (1/4!)a0 x^4 + (1/5!)a1 x^5 + ........
= a0 [1 + x^2/2! + x^4/4! + ..............] + a1 [x + x^3/3! + x^5/5! + ..............],
the required power series solution of the given DE. We know that (e^x + e^{−x})/2 = 1 + x^2/2! + x^4/4! + .............. and (e^x − e^{−x})/2 = x + x^3/3! + x^5/5! + ................ So the power series solution becomes y = c1 e^x + c2 e^{−x}, where c1 = (a0 + a1)/2 and c2 = (a0 − a1)/2, which is the same solution of y'' − y = 0 as we obtain by the exact method.
Ex. 5.2.3. Find power series solution of (1 + x^2)y'' + xy' − y = 0 about x = 0.
Sol. 5.2.3. Here x = 0 is an ordinary point of the given DE. So there exists a power series solution
y = Σ_{n=0}^∞ an x^n = a0 + a1 x + a2 x^2 + a3 x^3 + ......... (5.9)
Substituting the power series solution (5.9) into the given DE, we get
(1 + x^2) Σ_{n=2}^∞ an n(n − 1)x^{n−2} + x Σ_{n=1}^∞ an n x^{n−1} − Σ_{n=0}^∞ an x^n = 0.
=⇒ Σ_{n=2}^∞ an n(n − 1)x^{n−2} + Σ_{n=0}^∞ an [n(n − 1) + n − 1]x^n = 0.
=⇒ Σ_{n=2}^∞ an n(n − 1)x^{n−2} + Σ_{n=0}^∞ an (n − 1)(n + 1)x^n = 0.
Equating to 0 the coefficient of x^{n−2}, we obtain
n(n − 1)an + (n − 3)(n − 1)a_{n−2} = 0 =⇒ an = [(3 − n)/n] a_{n−2}, provided n ≠ 1.
Substituting n = 2, 3, ...., we get
a2 = (1/2)a0,
a3 = 0,
a4 = −(1/4)a2 = −(1/(4·2))a0,
a5 = 0,
a6 = −(3/6)a4 = (3/(6·4·2))a0,
and so on.
Plugging the values of a2, a3, a4, a5, a6 and so on into (5.9), we get
y = a0 + a1 x + (1/2)a0 x^2 + 0·x^3 − (1/(4·2))a0 x^4 + 0·x^5 + (3/(6·4·2))a0 x^6 + ........
= a0 [1 + (1/2)x^2 − (1/(4·2))x^4 + (3/(6·4·2))x^6 − ..............] + a1 x,
the required power series solution of the given differential equation.
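The even series above can be recognized as the binomial expansion of √(1 + x^2), a closed form not stated in the text; the sketch below iterates the recurrence an = [(3 − n)/n] a_{n−2} and compares the partial sum with √(1 + x^2) for |x| < 1, where the series converges.

```python
import math

def even_part(x, terms=40):
    """Sum the even-index terms: a_0 = 1 and a_n = (3 - n)/n * a_{n-2}."""
    a, total = 1.0, 0.0
    for n in range(0, 2 * terms, 2):
        total += a * x ** n
        a *= (3 - (n + 2)) / (n + 2)   # step the recurrence from a_n to a_{n+2}
    return total

x = 0.3
print(even_part(x), math.sqrt(1 + x * x))
```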
The following theorem by Frobenius gives a criterion for the existence of a power series solution near a regular singular point.
Theorem 5.2.2. If x0 is a regular singular point of a DE y'' + p(x)y' + q(x)y = 0, then there exists at least one power series solution of the form y = Σ_{n=0}^∞ an(x − x0)^{n+r} (a0 ≠ 0), where r is some root of the quadratic equation (known as the indicial equation) obtained by equating to zero the coefficient of the lowest degree term in x of the equation that arises on substituting y = Σ_{n=0}^∞ an(x − x0)^{n+r} into the given DE.
Remark 5.2.1. The above theorem by Frobenius guarantees at least one power series solution of the form Σ_{n=0}^∞ an(x − x0)^{n+r} (a0 ≠ 0) of the DE y'' + p(x)y' + q(x)y = 0, which we call a Frobenius solution. If the roots of the indicial equation do not differ by an integer, we get two LI Frobenius solutions. In case there exists only one Frobenius solution, it corresponds to the larger root of the indicial equation. The other LI solution depends on the nature of the roots of the indicial equation, as illustrated in the following examples.
Ex. 5.2.4. Find power series solutions of 2x^2 y'' + xy' − (x^2 + 1)y = 0 about x = 0.
Sol. 5.2.4. Here x = 0 is a regular singular point of the given DE. So there exists at least one Frobenius
solution of the form
y = Σ_{n=0}^∞ an x^{n+r} = x^r (a0 + a1 x + a2 x^2 + a3 x^3 + ........). (5.10)
a0 (r − 1)(2r + 1) = 0 or (r − 1)(2r + 1) = 0.
Therefore, the roots of the indicial equation are r = 1, −1/2, which do not differ by an integer. So we shall get two LI Frobenius solutions.
Next equating to 0 the coefficient of xr+1 , we find
y1 = a0 x [1 + x^2/(2·7) + x^4/(2·7·4·11) + .......],
y2 = a0 x^{−1/2} [1 + x^2/(2·1) + x^4/(2·1·4·5) + .......].
Ex. 5.2.5. Find power series solutions of xy'' + y' − xy = 0 about x = 0.
Sol. 5.2.5. Here x = 0 is a regular singular point of the given DE. So there exists at least one Frobenius
solution of the form
y = Σ_{n=0}^∞ an x^{n+r} = x^r (a0 + a1 x + a2 x^2 + a3 x^3 + ........). (5.12)
a0 r^2 = 0 or r^2 = 0.
Therefore, the roots of the indicial equation are r = 0, 0, which are equal. So we shall get only one Frobenius series solution.
Next, equating to 0 the coefficient of x^r, we find
a1(r + 1)^2 = 0 or a1 = 0 for r = 0.
where n = 2, 3, 4....
Therefore, we have
a2 = [1/(r + 2)^2] a0, a3 = [1/(r + 3)^2] a1 = 0, a4 = [1/(r + 4)^2] a2 = [1/((r + 2)^2 (r + 4)^2)] a0, .......
y = a0 x^r [1 + x^2/(r + 2)^2 + x^4/((r + 2)^2 (r + 4)^2) + .......]. (5.14)
For r = 0, we get the Frobenius solution
y1 = a0 [1 + x^2/2^2 + x^4/(2^2·4^2) + .......].
To get another LI solution, we substitute (5.14) into the given DE. Then we have
(xD^2 + D − x)y = a0 r^2 x^{r−1}. (5.15)
Note that substitution of (5.14) into the given DE gives only the lowest degree term in x. Obviously, (y)_{r=0} = y1 satisfies (5.15) and hence the given DE. Now, differentiating (5.15) partially w.r.t. r, we obtain
(xD^2 + D − x) ∂y/∂r = a0 (2r x^{r−1} + r^2 x^{r−1} ln x). (5.16)
This shows that (∂y/∂r)_{r=0} is a solution of the given DE. Thus, the second LI solution of the given DE is
y2 = (∂y/∂r)_{r=0} = y1 ln x − a0 [x^2/4 + (3/128)x^4 + ......].
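At the level of coefficient lists, one can verify directly that the truncated series y1 annihilates xy'' + y' − xy except for a single highest-order leftover term; the polynomial helpers below are assumptions of this sketch, not from the text.

```python
def poly_diff(p):
    """Derivative of a polynomial given as a coefficient list (p[k] <-> x^k)."""
    return [k * c for k, c in enumerate(p)][1:]

def shift_up(p):
    """Multiply a polynomial by x."""
    return [0.0] + p

# Coefficients of y1 from a_0 = 1, a_n = a_{n-2}/n^2 (odd coefficients vanish).
N = 20
a = [0.0] * (N + 1)
a[0] = 1.0
for n in range(2, N + 1, 2):
    a[n] = a[n - 2] / n ** 2

# Residual of x*y'' + y' - x*y, collected as one coefficient list.
yp, ypp = poly_diff(a), poly_diff(poly_diff(a))
res = [0.0] * (N + 2)
for k, c in enumerate(shift_up(ypp)):
    res[k] += c
for k, c in enumerate(yp):
    res[k] += c
for k, c in enumerate(shift_up(a)):
    res[k] -= c
print(max(abs(c) for c in res[:N + 1]))  # every coefficient below x^(N+1) cancels
```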
Ex. 5.2.6. Find power series solutions of x(1 + x)y'' + 3xy' + y = 0 about x = 0.
Sol. 5.2.6. Here x = 0 is a regular singular point of the given DE. So there exists at least one Frobenius
solution of the form
y = Σ_{n=0}^∞ an x^{n+r} = x^r (a0 + a1 x + a2 x^2 + a3 x^3 + ........). (5.17)
a0 r(r − 1) = 0 or r(r − 1) = 0.
Therefore, the roots of the indicial equation are r = 0, 1, which differ by an integer. So we shall get only one Frobenius solution, and that corresponds to the larger root r = 1.
Now, equating to 0 the coefficient of x^{n+r−1}, we have the recurrence relation
an(n + r − 1) + a_{n−1}(n + r) = 0 or an = −[(n + r)/(n + r − 1)] a_{n−1},
where n = 1, 2, 3, 4....
Therefore, we have
a1 = −[(r + 1)/r] a0, a2 = [(r + 2)/r] a0, a3 = −[(r + 3)/r] a0, .......
For r = 1, we get a1 = −2a0, a2 = 3a0, a3 = −4a0, ... So the Frobenius series solution is y1 = a0 x(1 − 2x + 3x^2 − 4x^3 + ....).
Now, we find the other LI solution. Since a1, a2, ...... are not defined at r = 0, we replace a0 by b0 r in (5.17). Thus, the modified series solution reads
y = x^r (b0 r + a1 x + a2 x^2 + a3 x^3 + ........),
Obviously (y)r=0 and (y)r=1 satisfy the given DE. But we find that the solutions
r = 2, −1,
y1 = a0 x^2 [1 − (3/10)x + (3/56)x^2 − ............],
y2 = a0 x^{−1}.
Ex. 5.2.8. Find power series solutions of x^2 y'' + 6xy' + (x^2 + 6)y = 0 about x = 0.
Sol. 5.2.8. Here the roots of the indicial equation are r = −2, −3, and the recurrence relation is
an = −[1/((n + r + 2)(n + r + 3))] a_{n−2}.
For r = −3, we find that a1 is arbitrary. In this case, r = −3 provides the general solution y = a0 y1 +a1 y2 ,
where
y1 = x^{−3} [1 − (1/2!)x^2 + (1/4!)x^4 − ............],
y2 = x^{−3} [x − (1/3!)x^3 + (1/5!)x^5 − ............].
Note that corresponding to the larger root r = −2, you will get a Frobenius solution that is a constant multiple of y2. (Find and see!)
where a, b and c are constants, is called the Hypergeometric Equation. We observe that x = 0 is a regular singular point of (5.21). So there exists at least one Frobenius solution of the form
y = Σ_{n=0}^∞ an x^{n+r} = x^r (a0 + a1 x + a2 x^2 + a3 x^3 + ........). (5.22)
a0 r(c + r − 1) = 0 or r(c + r − 1) = 0.
Therefore, the roots of the indicial equation are r = 0, 1 − c. Now, comparing the coefficient of x^{n+r−1}, we have the recurrence relation
an (n + r)(c + n + r − 1) − a_{n−1}(n − 1 + r + a)(n − 1 + r + b) = 0 or an = [(a + n − 1 + r)(b + n − 1 + r)/((n + r)(c + n − 1 + r))] a_{n−1},
where n = 1, 2, 3, 4, ....
For r = 0, we have
an = [(a + n − 1)(b + n − 1)/(n(c + n − 1))] a_{n−1}, a1 = [ab/(1·c)] a0, a2 = [(a + 1)(b + 1)/(2(c + 1))] a1 = [a(a + 1)b(b + 1)/(1·2·c(c + 1))] a0, .......
So the Frobenius solution corresponding to r = 0 reads
y = a0 [1 + (ab/(1·c))x + (a(a + 1)b(b + 1)/(1·2·c(c + 1)))x^2 + ..........].
This series with a0 = 1 is called the hypergeometric series and is denoted by F(a, b, c, x). Thus,
F(a, b, c, x) = 1 + Σ_{n=1}^∞ [a(a + 1)...(a + n − 1) b(b + 1)...(b + n − 1)/(n! c(c + 1)...(c + n − 1))] x^n.
In particular,
F(1, b, b, x) = 1 + x + x^2 + .........,
the familiar geometric series. Thus, F(a, b, c, x) generalizes the geometric series; that is why it is named the hypergeometric series. Further, we find
lim_{n→∞} |a_{n+1}/an| |x| = lim_{n→∞} |(a + n)(b + n)/((n + 1)(c + n))| |x| = |x|,
provided c is not zero or a negative integer. Therefore, F(a, b, c, x) is an analytic function, called the hypergeometric function, on the interval |x| < 1. It is the simplest particular solution of the hypergeometric equation.
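A partial-sum evaluation of F(a, b, c, x) makes both properties above easy to check; the function below is a sketch, and the particular parameter values fed to it are arbitrary.

```python
def F(a, b, c, x, terms=200):
    """Partial sum of the hypergeometric series, valid for |x| < 1."""
    term, total = 1.0, 0.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((n + 1) * (c + n)) * x  # the ratio a_{n+1}/a_n
    return total

# F(1, b, b, x) collapses to the geometric series 1/(1 - x):
print(F(1.0, 2.5, 2.5, 0.5), 1 / (1 - 0.5))
```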
Next we find the series solution corresponding to the indicial root r = 1 − c. The series solution in
this case is given by
where the constants a1 , a2 and so on can be determined using the recurrence relation. Alternatively, we
substitute y = x1−c z into the given DE (5.21) and obtain
which is the hypergeometric equation with the constants a, b and c replaced by a − c + 1, b − c + 1 and
2 − c. Therefore, (5.24) has the power series solution
z = F(a − c + 1, b − c + 1, 2 − c, x).
Hence, the corresponding solution of (5.21) is y = x^{1−c} F(a − c + 1, b − c + 1, 2 − c, x).
Fourier Series
6.1 Introduction
We are familiar with the power series representation of a function f (x). The representation of f (x) in the
form of a trigonometric series given by
f(x) = a0/2 + Σ_{n=1}^∞ (an cos nx + bn sin nx), (6.1)
is required in the treatment of many physical problems such as heat conduction, electromagnetic waves, mechanical vibrations, etc. An important advantage of the series (6.1) over a usual power series in x is that it can represent f(x) even if f(x) possesses many discontinuities (e.g., the discontinuous impulse functions of electrical engineering). On the other hand, a power series can represent f(x) only when f(x) is continuous and possesses derivatives of all orders.
Let m and n be positive integers such that m ≠ n. Then we have
∫_{−π}^{π} cos nx dx = 0, ∫_{−π}^{π} sin nx dx = 0, ∫_{−π}^{π} cos mx sin nx dx = 0,
∫_{−π}^{π} cos mx cos nx dx = 0, ∫_{−π}^{π} sin mx sin nx dx = 0.
Further, ∫_{−π}^{π} cos^2 nx dx = π = ∫_{−π}^{π} sin^2 nx dx.
Now, we do some classical calculations that were first done by Euler. We assume that the function f (x)
in (6.1) is defined on [−π, π]. Also, we assume that the series in (6.1) is uniformly convergent so that
term by term integration is possible.
Integrating both sides of (6.1) over [−π, π], we get
a0 = (1/π) ∫_{−π}^{π} f(x)dx. (6.2)
Multiplying both sides of (6.1) by cos nx, and then integrating over [−π, π], we get
an = (1/π) ∫_{−π}^{π} f(x) cos nx dx. (6.3)
Note that this formula, for n = 0, gives the value of a0 as given in (6.2). That is why, a0 is divided by 2
in (6.1).
Next, multiplying both sides of (6.1) by sin nx, and then integrating over [−π, π], we get
bn = (1/π) ∫_{−π}^{π} f(x) sin nx dx. (6.4)
These calculations show that the coefficients an and bn can be obtained from the sum f (x) in (6.1)
by means of the formulas (6.3) and (6.4) provided the series (6.1) is uniformly convergent. However,
this situation is too restricted to be of much practical use because first we have to ensure that the given
function f (x) admits an expansion as a uniformly convergent trigonometric series. For this reason, we
set aside the idea of finding the coefficients an and bn in the expansion (6.1) that may or may not exist.
Instead we use formulas (6.3) and (6.4) to define some numbers an and bn . Then we use these to construct
a series of the form (6.1). When we follow this approach, the numbers an and bn are called the Fourier
coefficients of the function f (x) and the series (6.1) is called Fourier series of f (x). Obviously, the function
f (x) must be integrable in order to construct its Fourier series. Note that a discontinuous function may
be integrable.
We hope that the Fourier series of f (x) will converge to f (x) so that (6.1) is a valid representation or
expansion of f (x). However, this is not always true. There exist integrable functions whose Fourier series
diverge at one or more points. That is why some advanced texts on Fourier series write (6.1) in the form
f(x) ∼ a0/2 + Σ_{n=1}^∞ (an cos nx + bn sin nx), (6.5)
where the sign ∼ is used in order to emphasize that the Fourier series on the right is not necessarily convergent to f(x).
Just as a Fourier series need not converge to its parent function, a convergent trigonometric series need not be the Fourier series of any function. For example, it is known that the trigonometric series
Σ_{n=1}^∞ sin nx / ln(1 + n)
converges for all x. But it is not a Fourier series, since 1/ln(1 + n) cannot be obtained from formula (6.4) for any choice of integrable function f(x). In fact, this series fails to be a Fourier series because it violates a remarkable theorem, which states that the term by term integral of any Fourier series (whether convergent or not) must converge for all x.
Thus, the fundamental problem of the subject of Fourier series is to discover the properties of an integrable function that guarantee that its Fourier series not only converges but also converges to the function. Before taking this up, let us see some examples.
Ex. 6.1.1. Find Fourier series of the function f (x) = x, −π ≤ x ≤ π.
Sol. 6.1.1. We find
a0 = (1/π) ∫_{−π}^{π} f(x)dx = 0,
an = (1/π) ∫_{−π}^{π} f(x) cos nx dx = 0,
bn = (1/π) ∫_{−π}^{π} f(x) sin nx dx = (−2/n)(−1)^n.
So the Fourier series of f(x) = x reads
x = 2 [sin x − (1/2) sin 2x + (1/3) sin 3x − ........]. (6.6)
Here the equals sign is an expression of hope rather than definite knowledge. It can be proved that the Fourier series in (6.6) converges to x in −π < x < π. At x = π or x = −π, the Fourier series converges to 0, and hence does not converge to f(x) = x at x = π or x = −π. Further, each term on the right hand side in (6.6) has period 2π, so the entire expression on the right hand side of (6.6) has period 2π. It follows that the Fourier series in (6.6) does not converge to f(x) = x outside the interval −π < x < π. But if f(x) = x is given to be a periodic function of period 2π, then the Fourier series in (6.6) converges to f(x) = x for all real values of x except x = kπ, where k is any non-zero integer. In the left panel of Figure 6.1, we show the plots of x (Black line), 2 sin x (Green curve), 2 sin x − sin 2x (Red curve) and 2 sin x − sin 2x + (2/3) sin 3x (Blue curve) in the range −π < x < π or −3.14 < x < 3.14. We see that as we consider more and more terms of the Fourier series in (6.6), it approximates the function f(x) = x better and better, as expected.
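Formulas (6.3)-(6.4) can also be checked numerically; the sketch below approximates bn for f(x) = x with a composite trapezoid rule (the grid size is an arbitrary choice) and compares it with 2(−1)^{n+1}/n.

```python
import math

def fourier_bn(f, n, m=20000):
    """Trapezoid approximation of b_n = (1/pi) * int_{-pi}^{pi} f(x) sin(nx) dx."""
    h = 2 * math.pi / m
    s = 0.5 * (f(-math.pi) * math.sin(-n * math.pi) + f(math.pi) * math.sin(n * math.pi))
    for i in range(1, m):
        x = -math.pi + i * h
        s += f(x) * math.sin(n * x)
    return s * h / math.pi

for n in range(1, 5):
    print(n, fourier_bn(lambda x: x, n), 2 * (-1) ** (n + 1) / n)
```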
Figure 6.1: Left Panel: Plots of x (Black line), 2 sin x (Green curve), 2 sin x − sin 2x (Red curve) and 2 sin x − sin 2x + (2/3) sin 3x (Blue curve) in the range −π < x < π or −3.14 < x < 3.14.
Right Panel: Plots of f(x) (Black lines), π/2 (Green line), π/2 + 2 sin x (Red curve), π/2 + 2 sin x + (2/3) sin 3x (Blue curve) and π/2 + 2 sin x + (2/3) sin 3x + (2/5) sin 5x (Purple curve) in the range −π < x < π or −3.14 < x < 3.14.
Hence show that π^2/8 = 1 + 1/3^2 + 1/5^2 + 1/7^2 + ........
We find
an = (1/π) ∫_{−π}^{π} f(x) cos nx dx = [1/(πn^2)][(−1)^n − 1],
bn = (1/π) ∫_{−π}^{π} f(x) sin nx dx = (−1)^{n+1}/n.
So the Fourier series of f(x) is
f(x) = π/4 + Σ_{n=1}^∞ [1/(πn^2)][(−1)^n − 1] cos nx + Σ_{n=1}^∞ [(−1)^{n+1}/n] sin nx. (6.8)
Putting x = 0, we get
π^2/8 = 1 + 1/3^2 + 1/5^2 + 1/7^2 + .........
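A direct partial sum (a quick numerical check, not part of the text) confirms the value:

```python
import math

partial = sum(1.0 / (2 * k + 1) ** 2 for k in range(200000))
print(partial, math.pi ** 2 / 8)  # the tail beyond K terms is below 1/(4K)
```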
If f(x) is odd, then its Fourier series carries only sine terms, and the Fourier coefficients are given by
an = 0, bn = (2/π) ∫_0^π f(x) sin nx dx.
For example, the Fourier coefficients of the odd function f(x) = x, −π ≤ x ≤ π, are an = 0 and bn = 2(−1)^{n−1}/n. So the Fourier series of x is given by
x = 2 [sin x − (1/2) sin 2x + (1/3) sin 3x − ...........]. (6.9)
Note that the Fourier series converges to x for −π < x < π and not at the end points x = ±π.
Similarly, the Fourier coefficients of the even function f(x) = |x|, −π ≤ x ≤ π, are a0 = π, an = [2/(πn^2)][(−1)^n − 1] and bn = 0. So we have
|x| = π/2 − (4/π) [cos x + (1/3^2) cos 3x + (1/5^2) cos 5x + ...........]. (6.10)
It is interesting to observe that the two series (6.9) and (6.10) both represent the same function
f (x) = x on 0 ≤ x ≤ π since |x| = x for x ≥ 0. The series (6.9) is called Fourier sine series of x, and
the series (6.10) is called the Fourier cosine series of x. Similarly, any function f(x) satisfying the Dirichlet conditions on 0 ≤ x ≤ π can be expanded in both a sine series and a cosine series on this interval, except that the sine series does not converge to f(x) at the end points x = 0 and x = π unless f(x) = 0 at these points. Thus, to obtain the sine series of a function, we redefine the function (if necessary) to have the value 0 at x = 0, and then extend it over the interval −π ≤ x < 0 such that f(−x) = −f(x) for all x lying in −π ≤ x ≤ π. This is called the odd extension of f(x). Similarly, the even extension of f(x) can be carried out in order to obtain the Fourier cosine series.
Ex. 6.3.1. Find Fourier sine and cosine series of f (x) = cos x, 0 ≤ x ≤ π.
Sol. 6.3.1. We find
bn = (2/π) ∫_0^π f(x) sin nx dx = (2/π) ∫_0^π cos x sin nx dx = (2n/π)·[1 + (−1)^n]/(n^2 − 1), n ≠ 1,
b1 = (2/π) ∫_0^π cos x sin x dx = 0.
So the Fourier sine series of cos x is given by
cos x = Σ_{n=2}^∞ (2n/π)·[1 + (−1)^n]/(n^2 − 1) · sin nx.
Next,
an = (2/π) ∫_0^π f(x) cos nx dx = (2/π) ∫_0^π cos x cos nx dx = 0, n ≠ 1,
a1 = (2/π) ∫_0^π cos x cos x dx = 1.
So Fourier cosine series of cos x is given by
cos x = cos x.
In this chapter, we shall discuss the solution of some boundary value problems.
y(0, t) = 0, (7.2)
since the left end of the string is tied at (0, 0) for all the time, and hence it cannot have displacement along the y-axis.
The second condition is
y(π, t) = 0, (7.3)
since the right end of the string is tied at (π, 0) for all the time, and hence it cannot have displacement along the y-axis.
The third condition is
∂y/∂t = 0 at t = 0, (7.4)
since the string is at rest at t = 0.
The fourth condition is
y(x, 0) = f(x), (7.5)
the initial shape of the string. Once the string is released from the initial shape y(x, 0) = f(x), we are interested in finding the distance or displacement of the string from the x-axis at any time t. Equivalently, we wish to solve (7.1) for y(x, t) subject to the four conditions (7.2)-(7.5).
where u(x) and v(t) are to be determined. Plugging (7.6) into (7.1), we get
Now, let us first solve (7.8). Later, we shall look for the solution of (7.9). Considering (7.6), the condition
y(0, t) = 0 in (7.2) gives u(0)v(t) = 0 or u(0) = 0. Similarly, y(π, t) = 0 in (7.3) gives u(π) = 0. Further,
we see that the nature of solution of (7.8) depends on the values of λ.
(i) When λ > 0, the solution reads u(x) = c1 e^{√λ x} + c2 e^{−√λ x}. Using the conditions u(0) = 0 and u(π) = 0, we get c1 = 0 = c2, and hence u(x) = 0. This leads to the trivial solution y(x, t) = u(x)v(t) = 0, which is not of our interest.
(ii) When λ = 0, the solution reads as u(x) = c1 x + c2 . Again, using the conditions u(0) = 0 and
u(π) = 0, we get c1 = 0 = c2 , which leads to the trivial solution y(x, t) = u(x)v(t) = 0.
(iii) When λ < 0, say, λ = −n2 , the solution reads as u(x) = c1 sin nx + c2 cos nx. Applying the
condition u(0) = 0, we get c2 = 0. The condition u(π) = 0 then implies that c1 sin nπ = 0. Obviously,
for a non-trivial solution we must have c1 6= 0. Then the condition c1 sin nπ = 0 forces n to be a positive
integer. Thus,
Now, the solution of (7.9) with λ = −n^2 reads v(t) = c1 sin nat + c2 cos nat. The condition in (7.4) leads to u(x)v'(0) = 0 or v'(0) = 0, which in turn gives c1 = 0. So
is also a solution of (7.1). To determine bn , we use the fourth condition y(x, 0) = f (x) given in (7.5).
Then (7.13) gives
f(x) = Σ_{n=1}^∞ bn sin nx. (7.14)
Notice that the series on right hand side in (7.14) is the Fourier sine series of f (x) in the interval [0, π].
So we have
bn = (2/π) ∫_0^π f(x) sin nx dx. (7.15)
Hence,
y(x, t) = Σ_{n=1}^∞ bn sin nx cos nat, (7.16)
with bn from (7.15) is the solution of (7.1) subject to the four conditions (7.2)-(7.5).
∂^2w/∂x^2 = (1/a^2) ∂w/∂t, (7.17)
where a is some positive constant. The heat equation is subject to the following three conditions.
The first condition is
w(0, t) = 0, (7.18)
since the left end of the rod is kept at zero temperature for all t.
The second condition is
w(π, t) = 0 (7.19)
since the right end of the rod is kept at zero temperature for all t.
The third condition is
w(x, 0) = f(x), (7.20)
the initial temperature of the rod. Having known the temperature of the rod at t = 0, we are interested in finding the temperature of the rod at any time t. Equivalently, we wish to solve (7.17) for w(x, t) subject to the three conditions (7.18)-(7.20).
Assume that (7.17) possesses a solution of the form
where u(x) and v(t) are to be determined. Plugging (7.21) into (7.17), we get
Following the strategy discussed in the previous section, the non-trivial solution of (7.23) subject to the
conditions (7.18) and (7.19), reads as
is also a solution of (7.17). To determine bn , we use the third condition w(x, 0) = f (x) given in (7.20).
Then (7.28) gives
f(x) = Σ_{n=1}^∞ bn sin nx. (7.29)
Notice that the series on right hand side in (7.29) is the Fourier sine series of f (x) in the interval [0, π].
So we have
bn = (2/π) ∫_0^π f(x) sin nx dx. (7.30)
Hence,
w(x, t) = Σ_{n=1}^∞ bn sin nx e^{−n^2 a^2 t}, (7.31)
with bn from (7.30), is the solution of (7.17) subject to the three conditions (7.18)-(7.20).
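The recipe (7.30)-(7.31) can be exercised numerically. In the sketch below, the initial profile f(x) = x(π − x) and the grid/truncation sizes are arbitrary choices; bn is computed by a trapezoid rule, and a partial sum of (7.31) is evaluated at t = 0 (where it should reproduce f) and at a later time (where it should have decayed).

```python
import math

def bn(f, n, m=4000):
    """Trapezoid approximation of b_n = (2/pi) * int_0^pi f(x) sin(nx) dx."""
    h = math.pi / m
    s = 0.5 * f(math.pi) * math.sin(n * math.pi)   # f(0)*sin(0) term is 0
    for i in range(1, m):
        x = i * h
        s += f(x) * math.sin(n * x)
    return 2.0 * s * h / math.pi

def w(x, t, f, a=1.0, terms=40):
    """Partial sum of (7.31): sum b_n sin(nx) exp(-n^2 a^2 t)."""
    return sum(bn(f, n) * math.sin(n * x) * math.exp(-n * n * a * a * t)
               for n in range(1, terms + 1))

f = lambda x: x * (math.pi - x)   # an assumed initial temperature profile
print(w(1.0, 0.0, f), f(1.0))     # initial condition is reproduced
print(w(1.0, 1.0, f))             # temperature has decayed toward 0
```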
∂^2w/∂x^2 + ∂^2w/∂y^2 = 0, (7.32)
known as the Laplace equation. With the transformations x = r cos θ and y = r sin θ, the polar form of (7.32) reads
∂^2w/∂r^2 + (1/r) ∂w/∂r + (1/r^2) ∂^2w/∂θ^2 = 0. (7.33)
For,
∂w/∂r = (∂w/∂x)(∂x/∂r) + (∂w/∂y)(∂y/∂r) = cos θ (∂w/∂x) + sin θ (∂w/∂y),
∂^2w/∂r^2 = cos^2 θ (∂^2w/∂x^2) + 2 sin θ cos θ (∂^2w/∂x∂y) + sin^2 θ (∂^2w/∂y^2),
∂w/∂θ = (∂w/∂x)(∂x/∂θ) + (∂w/∂y)(∂y/∂θ) = −r sin θ (∂w/∂x) + r cos θ (∂w/∂y),
∂^2w/∂θ^2 = r^2 sin^2 θ (∂^2w/∂x^2) − 2r^2 sin θ cos θ (∂^2w/∂x∂y) + r^2 cos^2 θ (∂^2w/∂y^2) − r cos θ (∂w/∂x) − r sin θ (∂w/∂y).
Substituting the values of ∂^2w/∂r^2, ∂w/∂r and ∂^2w/∂θ^2 into (7.33), we get (7.32).
Suppose the steady state temperature is given on the boundary of the unit circle r = 1, say w(1, θ) = f(θ). Then the problem of finding the temperature at any point (r, θ) inside the circle is a Dirichlet problem for the circle. Now we shall solve (7.33) subject to the condition
w(1, θ) = f(θ). (7.34)
Assume that (7.33) possesses a solution of the form
w(r, θ) = u(r)v(θ), (7.35)
where u(r) and v(θ) are to be determined. Plugging (7.35) into (7.33), we get
where λ = n^2, and an, bn are constants such that the two terms on the right hand side of (7.41) do not vanish together for n = 1, 2, 3, ....... Let a0/2 be the solution corresponding to n = 0.
d^2u/dz^2 − n^2 u = 0, (7.40)
where r = e^z. Solutions of this equation are
u(z) = c1 + c2 z for n = 0, and u(z) = c1 e^{nz} + c2 e^{−nz} for n = 1, 2, 3, ...... In terms of r, these read
u(r) = c1 + c2 ln r for n = 0, and u(r) = c1 r^n + c2 r^{−n} for n = 1, 2, 3, ......
Since we are interested in solutions which are well defined inside the circle r = 1, we discard the term carrying ln r, which is not finite at r = 0. Similarly, in the second solution we discard the term carrying r^{−n}. Thus, the solutions of our interest are constant multiples of r^n (n = 0, 1, 2, ......).
Then each w_n(r, θ) = r^n (an cos nθ + bn sin nθ), n = 1, 2, 3, ....., is also a solution of (7.33). Since a0/2 is also a solution of (7.33), so
w(r, θ) = a0/2 + Σ_{n=1}^∞ r^n (an cos nθ + bn sin nθ), (7.44)
Notice that the series on the right hand side in (7.45) is the Fourier series of f(θ) on the interval [−π, π]. So we have
an = (1/π) ∫_{−π}^{π} f(φ) cos nφ dφ, (n = 0, 1, 2, ....) (7.46)
and
bn = (1/π) ∫_{−π}^{π} f(φ) sin nφ dφ, (n = 1, 2, 3, ......). (7.47)
Thus, (7.44) with an from (7.46) and bn from (7.47) is the solution of (7.33) subject to the condition (7.34). This solves the Dirichlet problem for the unit circle.
Now, substituting an from (7.46) and bn from (7.47) into (7.44), we get
w(r, θ) = (1/π) ∫_{−π}^{π} f(φ) [1/2 + Σ_{n=1}^∞ r^n cos n(θ − φ)] dφ. (7.48)
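The bracketed kernel in (7.48) has a well-known closed form, the Poisson kernel (1 − r^2)/(2(1 − 2r cos(θ − φ) + r^2)), a standard fact not derived in the text. The sketch below compares a partial sum of the kernel with this closed form.

```python
import math

def kernel_sum(r, t, terms=2000):
    """Partial sum of 1/2 + sum_{n>=1} r^n cos(nt), for 0 <= r < 1."""
    return 0.5 + sum(r ** n * math.cos(n * t) for n in range(1, terms + 1))

def poisson_kernel(r, t):
    """Closed form (1 - r^2) / (2 (1 - 2 r cos t + r^2))."""
    return (1 - r * r) / (2 * (1 - 2 * r * math.cos(t) + r * r))

r, t = 0.7, 1.2
print(kernel_sum(r, t), poisson_kernel(r, t))
```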
A DE of the form
(p(x)y')' + (λq(x) + r(x))y = 0, (7.51)
defined on an interval [a, b], together with the boundary conditions
c1 y(a) + c2 y'(a) = 0, (7.52)
d1 y(b) + d2 y'(b) = 0, (7.53)
where neither both c1 and c2 nor both d1 and d2 are zero, is called a SLBVP. We see that y = 0 is a trivial solution of (7.51). The values of λ for which (7.51) has non-trivial solutions are known as its eigen values, while the corresponding non-trivial solutions are known as eigen functions.
Ex. 7.4.1. Find eigen values and eigen functions of the SLBVP
y'' + λy = 0, y(0) = 0, y(π) = 0.
In other words, any two distinct eigen functions ym and yn of the SLBVP are orthogonal with respect to
the weight function q(x). Let us prove this result.
Since ym and yn are eigen functions corresponding to the eigen values λm and λn, we have
(p ym')' + (λm q + r)ym = 0 (7.54)
and
(p yn')' + (λn q + r)yn = 0. (7.55)
Multiplying (7.54) by yn and (7.55) by ym, subtracting, and then integrating from a to b, we have
(λm − λn) ∫_a^b q ym yn dx = ∫_a^b ym (p yn')' dx − ∫_a^b yn (p ym')' dx
= [ym (p yn')]_a^b − ∫_a^b ym' (p yn') dx − [yn (p ym')]_a^b + ∫_a^b yn' (p ym') dx
= p(b)[ym(b)yn'(b) − yn(b)ym'(b)] − p(a)[ym(a)yn'(a) − yn(a)ym'(a)]
= p(b)W(b) − p(a)W(a), (7.57)
where W(x) = ym(x)yn'(x) − yn(x)ym'(x).
Notice that the eigen functions ym and yn are particular solutions of the SLBVP given by (7.51), (7.52) and (7.53). So we have
c1 ym(a) + c2 ym'(a) = 0, (7.58)
c1 yn(a) + c2 yn'(a) = 0, (7.59)
d1 ym(b) + d2 ym'(b) = 0, (7.60)
d1 yn(b) + d2 yn'(b) = 0. (7.61)
Now, by the given hypothesis, c1 and c2 are not both zero. So the homogeneous system given by (7.58) and (7.59) has a non-trivial solution. It follows that ym(a)yn'(a) − yn(a)ym'(a) = W(a) must be zero. Likewise, (7.60) and (7.61) lead to ym(b)yn'(b) − yn(b)ym'(b) = W(b) = 0. So (7.57) becomes
(λm − λn) ∫_a^b q ym yn dx = 0. (7.62)
Also, λm ≠ λn. So we get
∫_a^b q ym yn dx = 0, (7.63)
Remark 7.4.1. The orthogonality property of eigen functions can be used to write a given function as a series expansion of eigen functions.
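For the SLBVP of Ex. 7.4.1 (with p = q = 1 and r = 0), the eigen functions are sin nx and the orthogonality relation reduces to ∫_0^π sin mx sin nx dx = 0 for m ≠ n; the sketch below checks this with a trapezoid rule (grid size arbitrary).

```python
import math

def inner(m, n, pts=20000):
    """Trapezoid approximation of int_0^pi sin(mx) sin(nx) dx."""
    h = math.pi / pts
    s = 0.0
    for i in range(1, pts):
        x = i * h
        s += math.sin(m * x) * math.sin(n * x)
    return s * h   # endpoint contributions vanish since sin(0) = sin(k*pi) = 0

print(inner(2, 5))                 # distinct eigenfunctions: ~ 0
print(inner(3, 3), math.pi / 2)    # same eigenfunction: pi / 2
```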
where n is a constant, is called Legendre's Equation. We observe that x = 0 is an ordinary point of (8.1). So there exists a series solution of the form
y = Σ_{k=0}^∞ ak x^k = a0 + a1 x + a2 x^2 + a3 x^3 + ......... (8.2)
∴ a2 = −[n(n + 1)/2!] a0, a3 = −[(n − 1)(n + 2)/3!] a1, a4 = [(n − 2)n(n + 1)(n + 3)/4!] a0, a5 = [(n − 3)(n − 1)(n + 2)(n + 4)/5!] a1, .......
Substituting these values into (8.2), we obtain the general solution of (8.1) as
y = c1 y1 + c2 y2
where
y1 = a0 [1 − (n(n + 1)/2!) x^2 + ((n − 2)n(n + 1)(n + 3)/4!) x^4 − .........],
y2 = a1 [x − ((n − 1)(n + 2)/3!) x^3 + ((n − 3)(n − 1)(n + 2)(n + 4)/5!) x^5 − .........].
We observe that y1 and y2 are LI solutions of the Legendre equation (8.1), and these are analytic in the
range −1 < x < 1. However, the solutions most useful in the applications are those bounded near x = 1.
Notice that x = 1 is a regular singular point of the Legendre equation (8.1). We use the transformation
t = (1 − x)/2 so that x = 1 corresponds to t = 0, and (8.1) transforms to the hypergeometric DE
where the prime denotes the derivative with respect to t. Here, a = −n, b = n + 1 and c = 1. So the solution of (8.4) in the neighbourhood of t = 0 is given by
However, this solution is not bounded near t = 0. So any solution of (8.4) bounded near t = 0 is a constant
multiple of y1 . Consequently, the constant multiples of F (−n, n + 1, 1, (1 − x)/2) are the solutions of (8.1),
which are bounded near x = 1.
If n is a non-negative integer, then F (−n, n + 1, 1, (1 − x)/2) defines a polynomial of degree n known
as Legendre polynomial, denoted by Pn (x). Therefore,
Notice that Pn (1) = 1 for all n. Next, after a sequence of algebraic manipulations, we can obtain
Pn(x) = [1/(2ⁿ n!)] dⁿ/dxⁿ [(x² − 1)ⁿ],
known as Rodrigues' formula. The following theorem provides an alternative approach to obtain Rodrigues' formula.
Theorem. Prove that Pn(x) = [1/(2ⁿ n!)] dⁿ/dxⁿ [(x² − 1)ⁿ].
Proof. Let v = (x² − 1)ⁿ. Then we have

v1 = 2nx(x² − 1)^{n−1}, where v1 = dv/dx.

⟹ (x² − 1)v1 = 2nx(x² − 1)ⁿ = 2nxv.

⟹ (1 − x²)v1 + 2nxv = 0.
Differentiating it n + 1 times with respect to x using the Leibniz theorem, we get

(1 − x²)v_{n+2} + (n + 1)(−2x)v_{n+1} + [(n + 1)n/2!](−2)v_n + 2n[x v_{n+1} + (n + 1)v_n] = 0.

⟹ (1 − x²)v_n'' − 2x v_n' + n(n + 1)v_n = 0.
This shows that cvn (c is an arbitrary constant) is a solution of the Legendre’s equation (8.1). Also cvn
is a polynomial of degree n. But we know that the nth degree polynomial Pn (x) is a solution of the
Legendre’s equation. It follows that
Pn(x) = c v_n = c dⁿ/dxⁿ [(x² − 1)ⁿ]. (8.7)
To find c, we put x = 1 into (8.7) to get
Pn(1) = c [dⁿ/dxⁿ (x² − 1)ⁿ]_{x=1}.
⟹ 1 = c [dⁿ/dxⁿ ((x − 1)ⁿ(x + 1)ⁿ)]_{x=1} = c [n!(x + 1)ⁿ + terms containing the factor (x − 1)]_{x=1}.

⟹ 1 = c · n! · 2ⁿ, or c = 1/(n! 2ⁿ).
Thus, (8.7) becomes
Pn(x) = [1/(2ⁿ n!)] dⁿ/dxⁿ [(x² − 1)ⁿ].
This completes the proof.
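As a sanity check on Rodrigues' formula, the sketch below (plain Python, no external libraries; the helper names are my own) differentiates (x² − 1)ⁿ exactly as a rational coefficient list and compares the result with Bonnet's recurrence nPn = (2n − 1)xP_{n−1} − (n − 1)P_{n−2}, which the theory says must give the same polynomial.

```python
from fractions import Fraction
from math import factorial

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists [c0, c1, ...]."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_diff(a):
    """Differentiate a coefficient list once."""
    return [i * a[i] for i in range(1, len(a))] or [Fraction(0)]

def legendre_rodrigues(n):
    """P_n via Rodrigues' formula: (1/(2^n n!)) d^n/dx^n (x^2 - 1)^n."""
    v = [Fraction(1)]
    for _ in range(n):
        v = poly_mul(v, [Fraction(-1), Fraction(0), Fraction(1)])  # times (x^2 - 1)
    for _ in range(n):
        v = poly_diff(v)
    return [c / (2**n * factorial(n)) for c in v]

def legendre_bonnet(n):
    """P_n via Bonnet's recurrence: n P_n = (2n-1) x P_{n-1} - (n-1) P_{n-2}."""
    p_prev, p_curr = [Fraction(1)], [Fraction(0), Fraction(1)]
    if n == 0:
        return p_prev
    for k in range(2, n + 1):
        a = [Fraction(0)] + [Fraction(2 * k - 1, k) * c for c in p_curr]
        b = [Fraction(k - 1, k) * c for c in p_prev] + [Fraction(0)] * 2
        p_prev, p_curr = p_curr, [x - y for x, y in zip(a, b)]
    return p_curr

for n in range(7):
    assert legendre_rodrigues(n) == legendre_bonnet(n)
    assert sum(legendre_rodrigues(n)) == 1  # P_n(1) = 1, as noted above
```

The exact arithmetic (Fraction) avoids any floating-point doubt; the final assertion also confirms Pn(1) = 1 for each n tested.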
x⁴ + 3x³ − x² + 5x − 2 = (8/35)P4(x) + (6/5)P3(x) − (2/21)P2(x) + (34/5)P1(x) − (224/105)P0(x).
Also, m ≠ n. So it gives

∫_{−1}^{1} Pm(x)Pn(x) dx = 0.
Legendre Series
Let f (x) be a function defined from x = −1 to x = 1. Then we can write,
f(x) = Σ_{n=0}^∞ c_n Pn(x), (8.11)
where cn ’s are constants to be determined. Multiplying both sides of (8.11) by Pn (x) and integrating
from −1 to 1, we get
∫_{−1}^{1} f(x)Pn(x) dx = c_n ∫_{−1}^{1} Pn²(x) dx = c_n · 2/(2n + 1).

⟹ c_n = [(2n + 1)/2] ∫_{−1}^{1} f(x)Pn(x) dx.
Using the values of cn into (8.11), we get the expansion of f (x) in terms of Legendre polynomials, known
as the Legendre series of f (x).
Ex. 8.1.4. If f(x) = x for 0 < x < 1 and f(x) = 0 otherwise, then show that f(x) = (1/4)P0(x) + (1/2)P1(x) + (5/16)P2(x) + ...

Sol. 8.1.4. c0 = (1/2) ∫_{−1}^{1} f(x)P0(x) dx = (1/2) ∫_0^1 x·1 dx = 1/4, etc.
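The coefficients in Ex. 8.1.4 can be checked exactly with rational arithmetic. The sketch below (helper names are my own) builds Pn(x) from Bonnet's recurrence and integrates x·Pn(x) over [0, 1] term by term, since f vanishes on (−1, 0).

```python
from fractions import Fraction

def legendre(n):
    """Coefficient list of P_n(x) from Bonnet's recurrence."""
    p_prev, p_curr = [Fraction(1)], [Fraction(0), Fraction(1)]
    if n == 0:
        return p_prev
    for k in range(2, n + 1):
        a = [Fraction(0)] + [Fraction(2 * k - 1, k) * c for c in p_curr]
        b = [Fraction(k - 1, k) * c for c in p_prev] + [Fraction(0)] * 2
        p_prev, p_curr = p_curr, [x - y for x, y in zip(a, b)]
    return p_curr

def legendre_coeff(n):
    """c_n = (2n+1)/2 * integral_0^1 x * P_n(x) dx for f(x) = x on (0, 1)."""
    pn = legendre(n)
    # integral of x * P_n(x) = sum_k c_k x^{k+1} over [0, 1] is sum_k c_k/(k+2)
    integral = sum(c / (k + 2) for k, c in enumerate(pn))
    return Fraction(2 * n + 1, 2) * integral

assert legendre_coeff(0) == Fraction(1, 4)
assert legendre_coeff(1) == Fraction(1, 2)
assert legendre_coeff(2) == Fraction(5, 16)
```

This reproduces the first three terms (1/4)P0 + (1/2)P1 + (5/16)P2 of the Legendre series stated in the exercise.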
Ex. 8.1.5. Prove that (1 − 2xt + t²)^{−1/2} = Σ_{n=0}^∞ Pn(x) tⁿ, and hence prove the recurrence relation nPn(x) = (2n − 1)xP_{n−1}(x) − (n − 1)P_{n−2}(x).
Note: The function (1 − 2xt + t²)^{−1/2} is called the generating function of the Legendre polynomials. Note that the Legendre polynomials Pn(x) appear as coefficients of tⁿ in the expansion of the function (1 − 2xt + t²)^{−1/2}.
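The generating-function identity of Ex. 8.1.5 is easy to probe numerically: for a sample point with |t| < 1, the partial sums of Σ Pn(x)tⁿ should converge to (1 − 2xt + t²)^{−1/2}. A minimal sketch (sample values x = 0.3, t = 0.2 are my own choice):

```python
def legendre_vals(x, nmax):
    """P_0(x), ..., P_nmax(x) via Bonnet's recurrence (floats)."""
    vals = [1.0, x]
    for n in range(2, nmax + 1):
        vals.append(((2 * n - 1) * x * vals[-1] - (n - 1) * vals[-2]) / n)
    return vals[: nmax + 1]

x, t = 0.3, 0.2
closed = (1 - 2 * x * t + t * t) ** -0.5
series = sum(p * t**n for n, p in enumerate(legendre_vals(x, 30)))
assert abs(closed - series) < 1e-12  # partial sum matches the closed form
```

Thirty terms suffice here because the series converges geometrically in t.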
The condition n > 0 is necessary in order to guarantee the convergence of the integral.

Note that Γ(1) = ∫_0^∞ e^{−x} dx = 1.
Next, we have

Γ(n + 1) = ∫_0^∞ e^{−x} xⁿ dx = [xⁿ e^{−x}(−1)]_0^∞ − ∫_0^∞ n x^{n−1} e^{−x}(−1) dx = n ∫_0^∞ e^{−x} x^{n−1} dx.

∴ Γ(n + 1) = nΓ(n).
It is the recurrence relation for gamma function. Using this relation recursively, we have
Γ(2) = 1·Γ(1) = 1,

[Γ(1/2)]² = [2 ∫_0^∞ e^{−x²} dx] [2 ∫_0^∞ e^{−y²} dy]
= 4 ∫_0^∞ ∫_0^∞ e^{−(x²+y²)} dx dy
= ∫_0^{2π} ∫_0^∞ e^{−r²} r dr dθ   (x = r cos θ, y = r sin θ)
= π.

∴ Γ(1/2) = √π.
Having known the precise value of Γ(1/2), we can calculate the values of the gamma function at positive half-integers, for example,

Γ(7/2) = (5/2)·(3/2)·(1/2)·Γ(1/2) = (5/2)·(3/2)·(1/2)·√π.

For values of the gamma function at positive fractions with denominator different from 2, we have to rely upon numerically approximated values of the integral defining the gamma function.
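The recurrence Γ(n + 1) = nΓ(n), the value Γ(1/2) = √π, and the factorial connection can all be checked against Python's built-in gamma function; a minimal sketch:

```python
import math

# Recurrence Γ(n+1) = nΓ(n) at a few sample arguments
for n in [0.5, 1.0, 2.5, 7.0]:
    assert math.isclose(math.gamma(n + 1), n * math.gamma(n))

# Γ(1/2) = √π, hence Γ(7/2) = (5/2)(3/2)(1/2)√π as computed above
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi))
assert math.isclose(math.gamma(3.5), (5/2) * (3/2) * (1/2) * math.sqrt(math.pi))

# Factorial connection: n! = Γ(n+1)
assert math.isclose(math.gamma(6), math.factorial(5))
```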
Note that Γ(n) given by (8.12) is not defined for n ≤ 0. We extend the definition of gamma function
by the relation
Γ(n) = Γ(n + 1)/n. (8.13)

Then Γ(n) is defined for all n except when n is a non-positive integer. If we agree that Γ(n) is ∞ for non-positive integer values of n, then 1/Γ(n) is defined for all n. Such an agreement is useful while dealing with Bessel functions. The gamma function is, thus, defined as

Γ(n) = ∫_0^∞ e^{−x} x^{n−1} dx,   n > 0,
Γ(n) = Γ(n + 1)/n,   n < 0 but not an integer,
Γ(n) = ∞,   n = 0, −1, −2, ...
Note that the gamma function generalizes the concept of factorial from non-negative integers to any
real number via the formula
n! = Γ(n + 1).
x²y'' + xy' + (x² − p²)y = 0, (8.14)
where p is a non-negative constant, is called Bessel’s DE. We see that x = 0 is a regular singular point of
(8.14). So there exists at least one Frobenius series solution of the form
y = Σ_{n=0}^∞ a_n x^{n+r}, (a0 ≠ 0). (8.15)
a0(r² − p²) = 0 or r² − p² = 0,
which is well defined for all real values of p in accordance with the definition of gamma function.
Figure 8.1: Plots of J0(x) (blue curve) and J1(x) (red curve).
From the applications point of view, the most useful Bessel functions are those of order 0 and 1, given by

J0(x) = 1 − x²/2² + x⁴/(2²·4²) − x⁶/(2²·4²·6²) + ...,

J1(x) = x/2 − (1/(1!2!))(x/2)³ + (1/(2!3!))(x/2)⁵ − ...
Plots of J0(x) (blue curve) and J1(x) (red curve) are shown in Figure 8.1. It may be seen that J0(x) and J1(x) vanish alternately and have infinitely many zeros on the positive x-axis, as expected, since J0(x) and J1(x) are two particular LI solutions of the Bessel DE (8.14). Later, we shall show that J0'(x) = −J1(x). Thus, J0(x) and J1(x) behave just like cos x and sin x. This analogy may also be observed from the fact that the normal form of the Bessel DE (8.14), given by

u'' + [1 + (1 − 4p²)/(4x²)] u = 0,

behaves as

u'' + u = 0

for large values of x, with solutions cos x and sin x. It means J0(x) and J1(x) behave more and more like cos x and sin x for larger values of x.
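The series for J0 and J1 above, the relation J0'(x) = −J1(x), and the location of the first zero of J0 (near x ≈ 2.405) can be probed directly from the series definition. A minimal sketch (pure Python; the truncation at 40 terms and the finite-difference step are my own choices):

```python
import math

def besselj(p, x, terms=40):
    """J_p(x) for integer p >= 0 from the series
    J_p(x) = sum_{k>=0} (-1)^k / (k! (k+p)!) * (x/2)^(2k+p)."""
    s = 0.0
    for k in range(terms):
        s += (-1) ** k / (math.factorial(k) * math.factorial(k + p)) * (x / 2) ** (2 * k + p)
    return s

# J0'(x) = -J1(x), checked with a central difference
h = 1e-6
for x in [0.5, 1.0, 3.0, 5.0]:
    deriv = (besselj(0, x + h) - besselj(0, x - h)) / (2 * h)
    assert abs(deriv + besselj(1, x)) < 1e-8

# J0 changes sign near its first zero x ≈ 2.405
assert besselj(0, 2.40) > 0 > besselj(0, 2.41)
```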
exists, and it is taken as the Bessel function of the second kind. Thus, it follows that (8.23) is the general solution of the Bessel equation (8.14) in all cases. It is found that Yp(x) is not bounded near x = 0 for p ≥ 0. Accordingly, if we are interested in solutions of the Bessel equation near x = 0, which is often the case in applications, then we must take c2 = 0 in (8.23).
y'' + (1/x)y' + (1 − p²/x²)y = 0,
it follows that u(x) = Jp (λm x) and v(x) = Jp (λn x) satisfy the equations
u'' + (1/x)u' + (λm² − p²/x²)u = 0, (8.24)

v'' + (1/x)v' + (λn² − p²/x²)v = 0. (8.25)
Multiplying (8.24) by v and (8.25) by u, and subtracting the resulting equations, we obtain

d/dx (u'v − v'u) + (1/x)(u'v − v'u) = (λn² − λm²)uv.

After multiplication by x, it becomes

d/dx [x(u'v − v'u)] = (λn² − λm²)xuv.

Now, integrating with respect to x from 0 to 1, we have

(λn² − λm²) ∫_0^1 x uv dx = [x(u'v − v'u)]_0^1 = 0,
where f (x) is defined on the interval 0 ≤ x ≤ 1 and λn are positive zeros of some fixed Bessel function
Jp (x) with p ≥ 0. Now multiplying (8.26) by xJp (λn x) and integrating from x = 0 to x = 1, we get
∫_0^1 x f(x) Jp(λn x) dx = (a_n/2) J_{p+1}²(λn),
which gives
a_n = [2/J_{p+1}²(λn)] ∫_0^1 x f(x) Jp(λn x) dx.
Laplace Transforms
The function K(p, x) is called kernel of T . In particular, if a = 0, b = ∞ and K(p, x) = e−px , then (9.1)
is called Laplace transform of f (x) and is denoted by L[f (x)].
∴ L[f(x)] = ∫_0^∞ e^{−px} f(x) dx = F(p).
(2) L[e^{ax}] = ∫_0^∞ e^{ax} e^{−px} dx = 1/(p − a), (p > a). L^{−1}[1/(p − a)] = e^{ax}.

(3) L[xⁿ] = ∫_0^∞ e^{−px} xⁿ dx = Γ(n + 1)/p^{n+1}, (p > 0). L^{−1}[1/p^{n+1}] = xⁿ/Γ(n + 1).

(4) L[sin ax] = L[(e^{iax} − e^{−iax})/(2i)] = a/(p² + a²). L^{−1}[1/(p² + a²)] = (1/a) sin ax.

(5) L[cos ax] = L[(e^{iax} + e^{−iax})/2] = p/(p² + a²). L^{−1}[p/(p² + a²)] = cos ax.

(6) L[sinh ax] = L[(e^{ax} − e^{−ax})/2] = a/(p² − a²). L^{−1}[1/(p² − a²)] = (1/a) sinh ax.

(7) L[cosh ax] = L[(e^{ax} + e^{−ax})/2] = p/(p² − a²). L^{−1}[p/(p² − a²)] = cosh ax.
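The table entries can be verified by evaluating the defining integral numerically. The sketch below is a crude check only: it truncates the integral at a cutoff where e^{−px} is negligible and uses the trapezoidal rule (the cutoff, step count, and sample values p = 2, a = 3 are my own choices).

```python
import math

def laplace_numeric(f, p, upper=60.0, n=200000):
    """Approximate integral_0^upper e^{-p x} f(x) dx with the composite
    trapezoidal rule (adequate for a sanity check, not for precision work)."""
    h = upper / n
    total = 0.5 * (f(0.0) + math.exp(-p * upper) * f(upper))
    for k in range(1, n):
        x = k * h
        total += math.exp(-p * x) * f(x)
    return total * h

p, a = 2.0, 3.0
assert abs(laplace_numeric(lambda x: math.sin(a * x), p) - a / (p**2 + a**2)) < 1e-6
assert abs(laplace_numeric(lambda x: math.cos(a * x), p) - p / (p**2 + a**2)) < 1e-6
assert abs(laplace_numeric(lambda x: math.exp(-x), p) - 1 / (p + 1)) < 1e-6
```

The three assertions confirm entries (4), (5) and (2) of the table at the chosen sample point.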
Sol. 9.2.1.

L[sin²x] = L[(1 − cos 2x)/2] = (1/2)[1/p − p/(p² + 4)].

L[4 sin x cos x + e^{−x}] = L[2 sin 2x + e^{−x}] = 4/(p² + 4) + 1/(p + 1).
Ex. 9.2.2. Find L^{−1}[1/(p² + 2)] and L^{−1}[1/(p⁴ + p²)].
Sol. 9.2.2.

L^{−1}[1/(p² + 2)] = (1/√2) sin √2 x.

L^{−1}[1/(p⁴ + p²)] = L^{−1}[1/p² − 1/(p² + 1)] = x − sin x.
The above conditions are not necessary. Consider the function f(x) = x^{−1/2}. This function is not piecewise continuous on [0, b] for any positive real number b since it has an infinite discontinuity at x = 0. But L[x^{−1/2}] = Γ(1/2)/p^{1/2} = √(π/p) exists for p > 0.
Further, from (9.2), we see that lim_{p→∞} F(p) = 0. It is true even if the function is not piecewise continuous or of exponential order. So if lim_{p→∞} φ(p) ≠ 0, then φ(p) cannot be the Laplace transform of any function. For example, L^{−1}[p], L^{−1}[cos p], L^{−1}[log p], etc. do not exist.
Sol. 9.4.1. Since L[sin x] = 1/(p² + 1), by the shifting formula

L[e^{2x} sin x] = 1/((p − 2)² + 1).
For, L[f'(x)] = ∫_0^∞ e^{−px} f'(x) dx = [f(x)e^{−px}]_0^∞ + p ∫_0^∞ e^{−px} f(x) dx = pF(p) − f(0).
Likewise, we can show that
In general,

L[f^{(n)}(x)] = pⁿ F(p) − p^{n−1} f(0) − p^{n−2} f'(0) − ... − f^{(n−1)}(0).
Ex. 9.4.2. Find the Laplace transform of cos x, using the fact that it is the derivative of sin x.
Sol. 9.4.2. Here f(x) = sin x and F(p) = L[sin x] = 1/(p² + 1).

∴ L[cos x] = pF(p) − f(0) = p/(p² + 1).
In general,
Now,
∫_0^∞ e^{−px} (sin x / x) dx = L[sin x / x] = π/2 − tan^{−1} p.

Choosing p = 0, we get ∫_0^∞ (sin x / x) dx = π/2.
Ex. 9.4.6. Show that L[cos x / x] does not exist.
Sol. 9.4.6. Please try yourself.
Ex. 9.4.7. Find L^{−1}[(p + 7)/(p² + 2p + 5)].
Sol. 9.4.7. Please try yourself by making perfect square in the denominator.
Ex. 9.4.8. Find L^{−1}[(2p² − 6p + 5)/(p³ − 6p² + 11p − 6)].
Sol. 9.4.8. Please try yourself by making partial fractions.
Ex. 9.4.9. Find L^{−1}[log((p + 1)/(p − 1))].
Sol. 9.4.9. Please try yourself by letting

L[f(x)] = log((p + 1)/(p − 1))

so that

L[xf(x)] = 2/(p² − 1).
Ex. 9.4.10. Show that L^{−1}[p/(p² − a²)²] = (1/(2a)) x sinh ax.

Sol. 9.4.10. Please try yourself.
Ex. 9.4.11. Find L^{−1}[p/(p⁴ + p² + 1)].

Sol. 9.4.11. Please try yourself by using

p/(p⁴ + p² + 1) = (1/2)[1/(p² − p + 1) − 1/(p² + p + 1)].
y = c1 cos x + c2 sin x.
y = e2x − e−x .
d(L[y])/L[y] = [−p/(p² + 1)] dp,

L[y] = c(p² + 1)^{−1/2} = (c/p)(1 + 1/p²)^{−1/2} = c[1/p − (1/2)(1/p³) + (1/2!)(1/2)(3/2)(1/p⁵) − ...],

y = c[1 − x²/2² + x⁴/(2²·4²) − ...] = cJ0(x).

y = J0(x).
Remark 9.5.1. From the above example, notice that L[J0(x)] = 1/√(p² + 1).
Theorem 9.5.1. (Convolution Theorem) Prove that L[f(x)]·L[g(x)] = L[∫_0^x f(x − t)g(t) dt].
Proof. We have

L[f(x)]·L[g(x)] = ∫_0^∞ e^{−ps} f(s) ds · ∫_0^∞ e^{−pt} g(t) dt
= ∫_0^∞ ∫_0^∞ e^{−p(s+t)} f(s)g(t) ds dt
= ∫_0^∞ ∫_t^∞ e^{−px} f(x − t)g(t) dx dt   (s + t = x)
= ∫_0^∞ ∫_0^x e^{−px} f(x − t)g(t) dt dx   (change of order of integration)
= ∫_0^∞ e^{−px} [∫_0^x f(x − t)g(t) dt] dx
= L[∫_0^x f(x − t)g(t) dt].
Remark 9.5.2. If L[f(x)] = F(p) and L[g(x)] = G(p), then by the convolution theorem

L^{−1}[F(p)G(p)] = ∫_0^x f(x − t)g(t) dt.
Ex. 9.5.5. Use the convolution theorem to find L^{−1}[1/(p²(p² + 1))].

Sol. 9.5.5. We know that

L^{−1}[1/p²] = x,   L^{−1}[1/(p² + 1)] = sin x.

So by the convolution theorem,

L^{−1}[1/(p²(p² + 1))] = ∫_0^x (x − t) sin t dt = x − sin x.
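The convolution-theorem computation in Ex. 9.5.5 can be cross-checked numerically: the convolution of x with sin x should agree with x − sin x. A small sketch (trapezoidal rule; step count is my own choice):

```python
import math

def convolution(f, g, x, n=4000):
    """Numerically evaluate integral_0^x f(x - t) g(t) dt (trapezoidal rule)."""
    h = x / n
    vals = [f(x - k * h) * g(k * h) for k in range(n + 1)]
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

# L^{-1}[1/p^2] = x and L^{-1}[1/(p^2+1)] = sin x, so by the convolution
# theorem L^{-1}[1/(p^2 (p^2+1))] is the convolution of x with sin x,
# which should equal x - sin x.
for x in [0.5, 1.0, 2.0]:
    conv = convolution(lambda t: t, math.sin, x)
    assert abs(conv - (x - math.sin(x))) < 1e-6
```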
where the unknown function y(x) appears under the integral sign, is called an integral equation. Taking
Laplace transform of both sides of (9.3), we get
L[f (x)] = L[y(x)] + L[K(x)]L[y(x)].
So we have

L[y(x)] = L[f(x)]/(1 + L[K(x)]).
Ex. 9.6.1. Solve y(x) = x³ + ∫_0^x sin(x − t) y(t) dt.
Sol. 9.6.1. Taking Laplace transform of both sides, we get
L[y(x)] = L[x3 ] + L[sin x]L[y(x)].
So we have

L[y(x)] = L[x³]/(1 − L[sin x]) = (6/p⁴)·(p² + 1)/p² = 6/p⁴ + 6/p⁶.
Taking inverse Laplace transform, we have

y(x) = x³ + x⁵/20.
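The solution of Ex. 9.6.1 can be checked by substituting y(x) = x³ + x⁵/20 back into the integral equation and evaluating the integral numerically; a minimal sketch (trapezoidal rule, sample points my own):

```python
import math

def y(x):
    """Claimed solution of the integral equation."""
    return x**3 + x**5 / 20

def rhs(x, n=4000):
    """Right-hand side x^3 + integral_0^x sin(x - t) y(t) dt (trapezoidal rule)."""
    h = x / n
    vals = [math.sin(x - k * h) * y(k * h) for k in range(n + 1)]
    integral = h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
    return x**3 + integral

for x in [0.5, 1.0, 1.5]:
    assert abs(y(x) - rhs(x)) < 1e-6  # y satisfies the equation
```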
as ε → 0+ defines the Dirac delta function, which is denoted by δ(t). So lim_{ε→0+} f(t) = δ(t), and we may interpret that δ(t) = 0 for t ≠ 0 and δ(t) = ∞ at t = 0. The delta function can be made to act at any point, say a ≥ 0. Then we define

δa(t) = 0 for t ≠ a, and δa(t) = ∞ for t = a.
The function f(t) can be written in terms of the unit step function as

f(t) = (1/ε)[u(t) − u_ε(t)].

It implies that

δ(t) = u'(t).
Here it should be noted that the ordinary derivative of u(t) does not exist at t = 0, u(t) being discontinuous at t = 0. So it is to be understood as a generalized function or quasi function. Similarly, the function

f(t) = 0 for t < a,   f(t) = 1/ε for a ≤ t ≤ a + ε,   f(t) = 0 for t ≥ a + ε, (9.4)

can be written as

f(t) = (1/ε)[u_a(t) − u_{a+ε}(t)].

∴ δa(t) = u'_a(t).
Now, let g(t) be any continuous function for t ≥ 0. Then using (9.4), we have

∫_0^∞ g(t)f(t) dt = (1/ε) ∫_a^{a+ε} g(t) dt = g(t0),

where a < t0 < a + ε, by the mean value theorem of integral calculus. So in the limit ε → 0, we get

∫_0^∞ g(t)δa(t) dt = g(a).
It means
L[δa(t)] = e^{−pa} and L[δ(t)] = 1.

∴ L^{−1}[e^{−pa}] = δa(t) and L^{−1}[1] = δ(t).
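The limit L[δa(t)] = e^{−pa} can be seen concretely: the pulse f(t) = 1/ε on [a, a + ε] has the exact transform e^{−pa}(1 − e^{−pε})/(pε), which tends to e^{−pa} as ε → 0+. A small sketch (sample values p = 2, a = 1.5 are my own):

```python
import math

def laplace_pulse(p, a, eps):
    """Exact Laplace transform of the pulse f(t) = 1/eps on [a, a+eps]:
    integral_a^{a+eps} e^{-pt}/eps dt = e^{-pa} (1 - e^{-p*eps})/(p*eps)."""
    return math.exp(-p * a) * (1 - math.exp(-p * eps)) / (p * eps)

p, a = 2.0, 1.5
for eps in [1e-2, 1e-4, 1e-6]:
    # the transform of the pulse approaches e^{-pa} as eps shrinks
    assert abs(laplace_pulse(p, a, eps) - math.exp(-p * a)) < p * eps
```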
Examples
Suppose the LDE
y 00 + ay 0 + by = f (t), y(0) = y 0 (0) = 0, (9.5)
describes a mechanical or electrical system at rest in its state of equilibrium. Here f (t) can be an impressed
external force F or an electromotive force E that begins to act at t = 0. If A(t) is solution (output or
indicial response) for the input f (t) = u(t) (the unit step function), then
A'' + aA' + bA = u(t).
Taking Laplace transform of both sides, we get

p²L[A] − pA(0) − A'(0) + a(pL[A] − A(0)) + bL[A] = 1/p.
Using A(0) = A0 (0) = 0 and solving for L[A], we get
L[A] = 1/(p(p² + ap + b)) = 1/(pZ(p)), (9.6)

where Z(p) = p² + ap + b.
Similarly, taking Laplace transform of (9.5), we get
L[y] = L[f(t)]/Z(p) = pL[A]·L[f(t)] = pL[∫_0^t A(t − τ)f(τ) dτ] = L[(d/dt) ∫_0^t A(t − τ)f(τ) dτ]. (9.7)
Taking inverse Laplace Transform, we have
⟹ y(t) = (d/dt) ∫_0^t A(t − τ)f(τ) dτ = ∫_0^t A'(t − τ)f(τ) dτ (∵ A(0) = 0). (9.8)
Thus, finally the solution of (9.5) for the general input f (t) is given by the following two formulas:
y(t) = ∫_0^t A'(t − τ)f(τ) dτ, (9.9)

y(t) = ∫_0^t f'(t − σ)A(σ) dσ + f(0)A(t). (9.10)
0
In case the input is f(t) = δ(t), the unit impulse function, let us denote the solution (output or impulsive response) of (9.5) by h(t), so that L[h(t)] = 1/Z(p) and

L[A(t)] = 1/(pZ(p)) = L[h(t)]/p.
So A'(t) = h(t) and formula (9.9) becomes
y(t) = ∫_0^t h(t − τ)f(τ) dτ. (9.11)
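Formula (9.11) can be illustrated on a concrete system. Take a = 0, b = 1 in (9.5), so Z(p) = p² + 1 and the impulsive response is h(t) = L^{−1}[1/(p² + 1)] = sin t. For the unit-step input f(t) = 1, the exact solution of y'' + y = 1 with y(0) = y'(0) = 0 is y = 1 − cos t, and the convolution (9.11) should reproduce it. A sketch (the specific system and the trapezoidal evaluation are my own choices):

```python
import math

def h(t):
    """Impulsive response of y'' + y = f: L[h] = 1/Z(p) = 1/(p^2+1), so h = sin t."""
    return math.sin(t)

def y_from_formula(t, f, n=4000):
    """y(t) = integral_0^t h(t - tau) f(tau) dtau, formula (9.11), trapezoidal rule."""
    dt = t / n
    vals = [h(t - k * dt) * f(k * dt) for k in range(n + 1)]
    return dt * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

# Unit-step input f = 1: exact solution is y = 1 - cos t
for t in [0.5, 1.0, 2.0]:
    assert abs(y_from_formula(t, lambda tau: 1.0) - (1 - math.cos(t))) < 1e-6
```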