DEText Ch14
Let us continue the discussion we were having at the end of section 12.3 regarding the general
solution to any given homogeneous linear differential equation. By then we had seen that any
linear combination of particular solutions,

y(x) = c1 y1(x) + c2 y2(x) + · · · + cK yK(x) ,

is another solution to that homogeneous differential equation. In fact, we were even beginning
to suspect that this expression could be used as a general solution to the differential equation
provided the yk’s were suitably chosen. In particular, we suspected that the general solution to
any second-order, homogeneous linear differential equation can be written

y(x) = c1 y1(x) + c2 y2(x)

where c1 and c2 are arbitrary constants, and y1 and y2 are any two solutions that are not constant
multiples of each other.
These suspicions should have been reinforced in the last chapter in which general solutions
were obtained via reduction of order. In the examples and exercises, you should have noticed
that the solutions obtained to the given homogeneous differential equations could all be written
as just described.
It is time to confirm these suspicions, and to formally state the corresponding results. These
results will not be of merely academic interest. We will use them for much of the rest of this text.
For practical reasons, we will split our discussion between this and the next chapter. This
chapter will contain the statements of the most important theorems regarding the solutions to
homogeneous linear differential equations, along with a little discussion to convince you that
these theorems have a reasonable chance of being true. More convincing (and lengthier) analysis
will be carried out in the next chapter.
Homogeneous Linear Equations — The Big Theorems

As before, the differential equations of interest will be second-order homogeneous linear
differential equations,

ay′′ + by′ + cy = 0 ,

and, more generally, Nth-order homogeneous linear differential equations,

a0 y^(N) + a1 y^(N−1) + · · · + aN−2 y′′ + aN−1 y′ + aN y = 0 ,
where N , the order, is some positive integer. The coefficients — a , b and c in the second
order case, and the ak ’s in the more general case — will be assumed to be continuous functions
over some open interval I , and the first coefficient — a or a0 — will be assumed to be nonzero
at every point in that interval.
Recall the “principle of superposition”: If {y1 , y2 , . . . , yK} is a set of particular solutions
over I to a given homogeneous linear equation, then any linear combination of these solutions,

y(x) = c1 y1(x) + c2 y2(x) + · · · + cK yK(x) ,

is also a solution over I to the given differential equation. Also recall that this set of y’s is
called a fundamental set of solutions (over I ) for the given homogeneous differential equation
if and only if both of the following hold:
1. The set is linearly independent over I (i.e., none of the yk ’s is a linear combination of
the others over I ).
2. Every solution over I to the given differential equation can be expressed as a linear
combination of the yk ’s .
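As a concrete check of the superposition principle, the sketch below (assuming the sympy library is available; the test equation y′′ + y = 0 is our own choice of example, not dictated by the text) verifies symbolically that an arbitrary linear combination of two particular solutions is again a solution:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

# Two particular solutions of the test equation y'' + y = 0:
y1, y2 = sp.cos(x), sp.sin(x)

# An arbitrary linear combination of the two solutions ...
combo = c1*y1 + c2*y2

# ... plugged back into the left-hand side of y'' + y = 0:
residual = sp.simplify(sp.diff(combo, x, 2) + combo)
print(residual)  # 0, so the combination is again a solution
```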
In the last chapter, we saw that reduction of order leads to general solutions of the form

y(x) = c1 y1(x) + cR yR(x)

where {y1 , yR} is a linearly independent set of solutions on the interval of interest (we are using
the “subscript R ” just to emphasize that this part came from reduction of order).
Second-Order Homogeneous Equations
!◮Example 14.1: In section 13.2, we illustrated the reduction of order method by solving

x² y′′ − 3x y′ + 4y = 0 .

Having noted that

y1(x) = x²

was one solution to this differential equation, we applied the method of reduction of order to
obtain the general solution

y(x) = Ax² + Bx² ln |x|

(where A and B denote arbitrary constants). Observe that this is in the form

y(x) = c1 y1(x) + cR yR(x) .
In this case,
y1(x) = x² and yR(x) = x² ln |x| ,

and c1 and cR are simply the arbitrary constants A and B , renamed. Observe also that,
here, y1 and yR are clearly not constant multiples of each other. So

{ y1 , yR } = { x² , x² ln |x| }
is a linearly independent pair of functions on the interval of interest. And since every other
solution to our differential equation can be written as a linear combination of this pair, this set
is a fundamental set of solutions for our differential equation.
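Both claims in this example, that x² and x² ln |x| solve the differential equation, are easy to verify symbolically. A minimal sketch, assuming sympy is available and restricting attention to x > 0 so that ln |x| = ln x:

```python
import sympy as sp

x = sp.symbols('x', positive=True)  # x > 0, so ln|x| = ln(x)

def residual(y):
    # Left-hand side of  x^2 y'' - 3x y' + 4y = 0  for a candidate y(x)
    return sp.simplify(x**2*sp.diff(y, x, 2) - 3*x*sp.diff(y, x) + 4*y)

y1 = x**2
yR = x**2*sp.log(x)
print(residual(y1), residual(yR))  # 0 0 -- both are solutions
```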
Consider, now, the general solution to an arbitrary second-order homogeneous linear
differential equation

ay′′ + by′ + cy = 0 , (14.1)

as generally obtained via reduction of order. Assuming we have one known nontrivial particular
solution y1 , we set
solution y1 , we set
y = y1 u ,
plug this into the differential equation, and obtain (after simplification) an equation of the form
Au ′′ + Bu ′ = 0 , (14.2)
which we can rewrite as the first-order differential equation

Av′ + Bv = 0
using the substitution v = u ′ . Assuming A and B are reasonable functions on our interval of
interest, you can easily verify that the general solution to this first-order equation is of the form
v(x) = c R v0 (x)
where c R is an arbitrary constant, and v0 is any particular (nontrivial) solution to this first-order
equation. (More about this differential equation, along with some important properties of its
solutions, is derived in the next chapter.)
Since u ′ = v , we can then recover the general formula for u from the general formula for
v by integration:
u(x) = ∫ v(x) dx = ∫ cR v0(x) dx = cR ∫ v0(x) dx = cR [u0(x) + c0] = c1 + cR u0(x)

where u0 is any convenient antiderivative of v0 , c0 is the arbitrary constant of integration,
and c1 = cR c0 . Note that u0 cannot be a constant function; if it were, we would have

v0 = u0′ = 0 ,
contrary to the known fact that v0 is a nontrivial solution to equation (14.2). So u 0 is not
a constant, y R = u 0 y1 is not a constant multiple of y1 , and the pair {y1 , y R } is linearly
independent. And since all other solutions can be written as linear combinations of these two
solutions, {y1 , yR} is a fundamental set of solutions for our differential equation.
What we have just shown is that, assuming a nontrivial particular solution y1 to equation
(14.1) exists and that the coefficients A and B in equation (14.2) are suitably well-behaved
on our interval, the reduction of order method yields a general solution to differential equation
(14.1) of the form

y(x) = c1 y1(x) + cR yR(x)

where {y1 , yR} is a linearly independent set of solutions.1
1 In fact, theorem 11.2 on page 253 can be used to show that a y1 exists. The real difficulty is in verifying that A
and B are ‘reasonable’, especially if y1 is zero at some point in the interval.
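The structure of equation (14.2) can be confirmed symbolically. In the sketch below (assuming sympy; constant coefficients a , b and c are used for simplicity, and y1 is left as an unspecified function), substituting y = y1 u into ay′′ + by′ + cy shows that the u′′ and u′ coefficients are A = a y1 and B = 2a y1′ + b y1 , while the u term carries a y1′′ + b y1′ + c y1 , which vanishes whenever y1 is itself a solution:

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c')  # constant coefficients, for simplicity
u = sp.Function('u')
y1 = sp.Function('y1')              # the known solution, left unspecified

# Substitute y = y1*u into the left-hand side of a y'' + b y' + c y = 0:
y = y1(x)*u(x)
expr = sp.expand(a*sp.diff(y, x, 2) + b*sp.diff(y, x) + c*y)

# Read off the coefficients of u'' and u' (the "A" and "B" of eq. 14.2):
A = expr.coeff(sp.diff(u(x), x, 2))
B = expr.coeff(sp.diff(u(x), x))
print(A)  # equals a*y1(x)
print(B)  # equals 2*a*y1'(x) + b*y1(x)
```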
Now suppose {y1 , y2} is any linearly independent pair of solutions to our differential
equation. From the above, a general solution is

y(x) = c1 y1(x) + cR yR(x)

where yR is that second solution obtained through the reduction of order method. And since
this is a general solution and y2 is a particular solution, there must be constants κ1 and κR such
that

y2(x) = κ1 y1(x) + κR yR(x) .
Moreover, because {y1 , y2} is (by assumption) linearly independent, y2 cannot be a constant
multiple of y1 . Thus κR ≠ 0 in the above equation, and that equation can be solved for yR ,
obtaining

yR(x) = −(κ1/κR) y1(x) + (1/κR) y2(x) .
Consequently,

y(x) = c1 y1(x) + cR yR(x)
     = c1 y1(x) + cR [ −(κ1/κR) y1(x) + (1/κR) y2(x) ]
     = C1 y1(x) + C2 y2(x)

where

C1 = c1 − cR κ1/κR and C2 = cR/κR .
So any linear combination of y1 and y R can also be expressed as a linear combination of y1
and y2 . This means
y(x) = C1 y1 (x) + C2 y2 (x)
can also be used as a general solution, and, hence, {y1 , y2 } is also a fundamental set of solutions
for our differential equation.
So what? Well, if you are lucky enough to easily find a linearly independent pair of solutions
to a given second-order homogeneous equation, then you can use that pair as your fundamental
set of solutions — there is no need to grind through the reduction of order computations.2
In deriving this statement, we made some assumptions about the existence of solutions, and the
‘reasonableness’ of the first-order differential equation arising in the reduction of order method.
In the next chapter, we will rigorously rederive this statement without making these assumptions.
We will also examine a few related issues regarding the linear independence of solution sets and
the solvability of initial-value problems. What we will discover is that the following theorem can
be proven. This can be considered the “Big Theorem on Second-Order Homogeneous Linear
2 If you’ve had a course in linear algebra, you may recognize that a “fundamental set of solutions” is a “basis set”
for the “vector space of all solutions to the given homogeneous differential equation”. This is worth noting, if you
understand what is being noted.
Differential Equations”. It will be used repeatedly, often without comment, in the chapters that
follow.
The statement about “initial conditions” in the above theorem assures us that second-order
sets of initial conditions are appropriate for second-order linear differential equations. It also
assures us that a fundamental solution set for a second-order linear homogeneous differential
equation can not become “degenerate” at any point in the interval I . In other words, there is
no need to worry about whether an initial-value problem (with x0 in I ) can be solved. It has
a solution, and only one solution. (To see why we might be worried about “degeneracy”, see
exercise 14.2 on page 308.)
To illustrate how this theorem is used, let us solve a differential equation that you may recall
solving in chapter 11 (see page 247). Comparing the approach used there with that used here
should lead you to greatly appreciate the theory we’ve just developed.
!◮Example 14.2: Consider (again) the homogeneous second-order linear differential equation
y′′ + y = 0 .

As you can easily verify,

y1(x) = cos(x) and y2(x) = sin(x)

are two solutions to this differential equation, and in example 12.4 on page 268 we observed
that the set of these two solutions is a linearly independent pair. The above theorem now assures
us that, indeed, this pair,
{ cos(x) , sin(x) } ,
is a fundamental set of solutions for the above second-order homogeneous linear differential
equation, and that
y(x) = c1 cos(x) + c2 sin(x)
is a general solution.
Hence,

c1 = 5 and c2 = 3 ,

and the solution to our initial-value problem is

y(x) = 5 cos(x) + 3 sin(x) .
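The same initial-value problem can be handed to a computer algebra system. A sketch assuming sympy, with the initial conditions y(0) = 5 and y′(0) = 3 (the conditions consistent with the constants c1 = 5 and c2 = 3 found above):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Solve y'' + y = 0 with y(0) = 5 and y'(0) = 3:
sol = sp.dsolve(sp.Eq(y(x).diff(x, 2) + y(x), 0),
                ics={y(0): 5, y(x).diff(x).subs(x, 0): 3})
print(sol.rhs)  # equivalent to 5*cos(x) + 3*sin(x)
```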
Finding fundamental sets of solutions for most homogeneous linear differential equations
will not be as easy as it was for the differential equation in the last two examples. Fortunately,
fairly straightforward methods are available for finding fundamental sets for some important
classes of differential equations. Some of these methods are partially described in the exercises,
and will be more completely developed in later chapters.
For example, suppose y1(x) = x² and y2(x) = x³ . In this case,

y1′(x) = 2x and y2′(x) = 3x² ,

and

                | y1(x)   y2(x)  |     | x²   x³  |
W[y1 , y2](x) = |                |  =  |          |  =  x²·3x² − 2x·x³  =  x⁴ .
                | y1′(x)  y2′(x) |     | 2x  3x²  |
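This Wronskian is also easy to reproduce with a computer algebra system; a sketch assuming sympy:

```python
import sympy as sp

x = sp.symbols('x')
y1, y2 = x**2, x**3

# The Wronskian is the determinant  | y1  y2 ; y1'  y2' | :
W = sp.Matrix([[y1, y2],
               [sp.diff(y1, x), sp.diff(y2, x)]]).det()
print(W)  # x**4
```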
Wronskians naturally arise when dealing with initial-value problems. For example, suppose
we have a pair of functions y1 and y2 , and we want to find constants c1 and c2 such that
y(x) = c1 y1(x) + c2 y2(x)

satisfies
y(x0 ) = 2 and y ′ (x0 ) = 5
for some given point x0 in our interval of interest. In solving for c1 and c2 , you can easily
show that
c1 W(x0) = 2y2′(x0) − 5y2(x0) and c2 W(x0) = 5y1(x0) − 2y1′(x0) .

Thus, if W(x0) ≠ 0 , then there is exactly one possible value for c1 and one possible value for
c2 , namely,

c1 = (2y2′(x0) − 5y2(x0))/W(x0) and c2 = (5y1(x0) − 2y1′(x0))/W(x0) .
Suppose {y1 , y2 , . . . , yN} is a set of N solutions to

a0 y^(N) + a1 y^(N−1) + · · · + aN−2 y′′ + aN−1 y′ + aN y = 0

on some open interval I , and let W = W[y1 , y2 , . . . , yN] be the corresponding Wronskian.
Assume further that the ak’s are continuous functions with a0 never being zero on I . Then:
3 Of course, the choice of 2 and 5 as the initial values was not important; any other values could have been used
(we were just trying to reduce the number of symbols to keep track of). What is important is whether W(x0) is
zero or not.
1. If W(x0) = 0 for any single point x0 in I , then W(x) = 0 for every point x in I , and
the set {y1 , y2 , . . . , yN} is not linearly independent (and, hence, is not a fundamental
solution set) on I .

2. If W(x0) ≠ 0 for any single point x0 in I , then W(x) ≠ 0 for every point x in I ,
and {y1 , y2 , . . . , yN} is a fundamental set of solutions for the given differential
equation on I .
This theorem (proven in the next chapter) gives a relatively easy-to-use test for determining
when a set of solutions to a linear homogeneous differential equation is a fundamental set of
solutions. This test is especially useful when the order of the differential equation is 3 or higher.
You can easily verify that each of the three functions 1 , cos(2x) and sin²(x) is a solution
(over the entire real line) to the homogeneous third-order linear differential equation

y′′′ + 4y′ = 0 .
So, is
{ 1 , cos(2x) , sin²(x) }
a fundamental set of solutions for this differential equation? To check, we compute the first-
and second-order derivatives and form the corresponding Wronskian,

        | y1   y2    y3  |     | 1      cos(2x)     sin²(x)  |
W(x) =  | y1′  y2′   y3′ |  =  | 0   −2 sin(2x)     sin(2x)  | .
        | y1″  y2″   y3″ |     | 0   −4 cos(2x)   2 cos(2x)  |

Rather than compute out this determinant for ‘all’ values of x (which could be very tedious),
let us simply pick a convenient value for x , say x = 0 , and compute the Wronskian at that
point:

        | 1    1   0 |
W(0) =  | 0    0   0 |  =  0 .
        | 0   −4   2 |
Theorem 14.3 assures us that, since this Wronskian vanishes at that one point, it must vanish
everywhere. More importantly for us, this theorem also tells us that {1, cos(2x) , sin2 (x)} is
not a fundamental set of solutions for our differential equation.
Again, you can easily verify that each of the functions 1 , cos(2x) and sin(2x) is a solution
(over the entire real line) to the homogeneous
third-order linear differential equation
y ′′′ + 4y ′ = 0 .
So, is
{ 1, cos(2x) , sin(2x)}
a fundamental set of solutions for our differential equation, above? To check we compute the
appropriate derivatives and form the corresponding Wronskian,
        | y1   y2    y3  |     | 1      cos(2x)      sin(2x)  |
W(x) =  | y1′  y2′   y3′ |  =  | 0   −2 sin(2x)   2 cos(2x)  | .
        | y1″  y2″   y3″ |     | 0   −4 cos(2x)  −4 sin(2x)  |

Letting x = 0 , we get

        | 1    cos(2·0)      sin(2·0) |     | 1    1   0 |
W(0) =  | 0  −2 sin(2·0)  2 cos(2·0) |  =  | 0    0   2 |  =  8 ≠ 0 .
        | 0  −4 cos(2·0) −4 sin(2·0) |     | 0   −4   0 |
Theorem 14.3 assures us that, since this Wronskian is nonzero at one point, it is nonzero every-
where, and that {1, cos(2x) , sin(2x)} is a fundamental set of solutions for our differential
equation. Hence,
y(x) = c1 · 1 + c2 cos(2x) + c3 sin(2x)
is a general solution to our third-order differential equation.
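The Wronskian computations in the last two examples can be automated; the sketch below (assuming sympy) evaluates each 3×3 Wronskian for all x at once, confirming that {1, cos(2x), sin²(x)} is degenerate while {1, cos(2x), sin(2x)} is not:

```python
import sympy as sp

x = sp.symbols('x')

def wronskian3(f1, f2, f3):
    # Rows of the 3x3 Wronskian: the functions, then their first
    # and second derivatives.
    rows = [[sp.diff(f, x, k) for f in (f1, f2, f3)] for k in range(3)]
    return sp.simplify(sp.Matrix(rows).det())

W_dep = wronskian3(1, sp.cos(2*x), sp.sin(x)**2)   # the dependent set
W_ind = wronskian3(1, sp.cos(2*x), sp.sin(2*x))    # the independent set
print(W_dep, W_ind)  # 0 8
```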
Additional Exercises
14.1 a. Assume y satisfies

x² (d²y/dx²) + 4x (dy/dx) + sin(x) y = 0
over the interval (0, ∞) . Keep in mind that this automatically requires y , y ′ and y ′′
to be defined at each point in (0, ∞) . Thus, both y and y ′ are differentiable on this
interval and, as you learned in calculus, this means that y and y ′ must be continuous
on (0, ∞) . Now, rewrite the above equation to obtain a formula for y ′′ in terms of
y and y ′ , and, using this formula, show that y ′′ must also be continuous on (0, ∞) .
Why can we not be sure that y ′′ is continuous at 0 ?
b. Let I be some interval, and assume y satisfies

a (d²y/dx²) + b (dy/dx) + cy = 0
over I . Assume, further, that a , b and c , as well as both y and y ′ are continuous
functions over I , and that a is never zero on I . Show that y ′′ also must be continuous
on I . Why do we require that a never vanishes on I ?
c. Let I be some interval, and assume y satisfies
a0 y (N ) + a1 y (N −1) + · · · + a N −2 y ′′ + a N −1 y ′ + a N y = 0
over I . Assume, further, that the ak’s , as well as y , y′ , . . . and y^(N−1) are continuous
functions over I , and that a0 is never zero on I . Show that y^(N) also must be
continuous on I . Why do we require that a0 never vanishes on I ?
14.2. The following exercises all refer to theorem 14.1 on page 302 and the following pair of
functions:
{ y1 , y2 } = { x² , x³ } .
x 2 y ′′ − 4x y ′ + 6y = 0
y(x) = c1 x 2 + c2 x 3
What ‘goes wrong’? Why does this not violate the claim in theorem 14.1 about initial-value
problems being solvable?
14.3. Particular solutions to the differential equation in each of the following initial-value
problems can be found by assuming

y(x) = e^(rx)
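As a preview of why this substitution works, here is a sketch (assuming sympy; the particular equation y′′ + y′ − 6y = 0 is an illustrative choice of ours, not one of the exercises) showing how y = e^(rx) turns the differential equation into a polynomial equation in r :

```python
import sympy as sp

x, r = sp.symbols('x r')
y = sp.exp(r*x)

# Substitute y = e^(r x) into y'' + y' - 6y and divide out e^(r x);
# what is left is the polynomial "characteristic equation" in r.
char_poly = sp.expand((sp.diff(y, x, 2) + sp.diff(y, x) - 6*y) / y)
print(char_poly)               # r**2 + r - 6
print(sp.solve(char_poly, r))  # the roots, r = -3 and r = 2
```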