DEText Ch14


14 Homogeneous Linear Equations — The Big Theorems

Let us continue the discussion we were having at the end of section 12.3 regarding the general
solution to any given homogeneous linear differential equation. By then we had seen that any
linear combination of particular solutions,

y(x) = c1 y1 (x) + c2 y2 (x) + · · · + c M y M (x) ,

is another solution to that homogeneous differential equation. In fact, we were even beginning
to suspect that this expression could be used as a general solution to the differential equation
provided the yk ’s were suitably chosen. In particular, we suspected that the general solution to
any second-order, homogeneous linear differential equation can be written

y(x) = c1 y1 (x) + c2 y2 (x)

where c1 and c2 are arbitrary constants, and y1 and y2 are any two solutions that are not constant
multiples of each other.
These suspicions should have been reinforced in the last chapter in which general solutions
were obtained via reduction of order. In the examples and exercises, you should have noticed
that the solutions obtained to the given homogeneous differential equations could all be written
as just described.
It is time to confirm these suspicions, and to formally state the corresponding results. These
results will not be of merely academic interest. We will use them for much of the rest of this text.
For practical reasons, we will split our discussion between this and the next chapter. This
chapter will contain the statements of the most important theorems regarding the solutions to
homogeneous linear differential equations, along with a little discussion to convince you that
these theorems have a reasonable chance of being true. More convincing (and lengthier) analysis
will be carried out in the next chapter.

14.1 Preliminaries and a Little Review


We are discussing general homogeneous linear differential equations. If the equation is of second
order, it will be written as
ay ′′ + by ′ + cy = 0 .


More generally, it will be written as

a0 y (N ) + a1 y (N −1) + · · · + a N −2 y ′′ + a N −1 y ′ + a N y = 0

where N , the order, is some positive integer. The coefficients — a , b and c in the second
order case, and the ak ’s in the more general case — will be assumed to be continuous functions
over some open interval I , and the first coefficient — a or a0 — will be assumed to be nonzero
at every point in that interval.
Recall the “principle of superposition”: If {y1 , y2 , . . . , y K } is a set of particular solutions
over I to a given homogeneous linear equation, then any linear combination of these solutions,

y(x) = c1 y1 (x) + c2 y2 (x) + · · · + c K y K (x) for all x in I ,

is also a solution over I to the given differential equation. Also recall that this set of y’s is
called a fundamental set of solutions (over I ) for the given homogeneous differential equation
if and only if both of the following hold:

1. The set is linearly independent over I (i.e., none of the yk ’s is a linear combination of
the others over I ).

2. Every solution over I to the given differential equation can be expressed as a linear
combination of the yk ’s .

14.2 Second-Order Homogeneous Equations


Let us limit our attention to the possible solutions to a second-order homogeneous linear differ-
ential equation
ay ′′ + by ′ + cy = 0 . (14.1)
We will first look at what we can derive just from the reduction of order method (with a few
assumptions), and then see how that can be extended by some basic linear algebra. Because
of some of the assumptions we will make, our discussion here will not be completely rigorous,
but it will lead to some of the more important ideas regarding general solutions to second-order
homogeneous linear differential equations. After that, I will tell you what can be rigorously
proven regarding these general solutions. If you are impatient, you can skip ahead and read that
part (theorem 14.1 on page 302).

The Form of the Reduction of Order Solution


As I hope you observed, the reduction of order method applied to an equation of the form (14.1)
always led (in the previous chapter, at least) to a general solution of the form

y(x) = c1 y1 (x) + c R y R (x)

where {y1 , y R } is a linearly independent set of solutions on the interval of interest (we are using
the “subscript R ” just to emphasize that this part came from reduction of order).

Example 14.1: In section 13.2, we illustrated the reduction of order method by solving

x 2 y ′′ − 3x y ′ + 4y = 0

on the interval I = (0, ∞) . After first observing that

y1 (x) = x 2

was one solution to this differential equation, we applied the method of reduction of order to
obtain the general solution

y(x) = x 2 [ A ln |x| + B] = Ax 2 ln |x| + Bx 2

(where A and B denote arbitrary constants). Observe that this is in the form

y(x) = c1 y1 (x) + c R y R (x) .

In this case,
y1 (x) = x 2 and y R (x) = x 2 ln |x| ,
and c1 and c R are simply the arbitrary constants A and B , renamed. Observe also that,
here, y1 and y R are clearly not constant multiples of each other. So

{ y1 , y R } = { x 2 , x 2 ln |x| }

is a linearly independent pair of functions on the interval of interest. And since every other
solution to our differential equation can be written as a linear combination of this pair, this set
is a fundamental set of solutions for our differential equation.
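Solutions like these are easy to double-check by direct substitution with a computer algebra system. The following is a minimal sketch (assuming SymPy is available; any CAS would do) verifying that both y1 = x 2 and y R = x 2 ln x satisfy x 2 y ′′ − 3x y ′ + 4y = 0 on (0, ∞):

```python
import sympy as sp

x = sp.symbols('x', positive=True)   # interval of interest is (0, oo)

def check(y):
    """Substitute y into x^2 y'' - 3x y' + 4y and simplify the result."""
    return sp.simplify(x**2 * sp.diff(y, x, 2) - 3*x*sp.diff(y, x) + 4*y)

y1 = x**2
yR = x**2 * sp.log(x)   # ln|x| = ln x on (0, oo)

print(check(y1))   # 0
print(check(yR))   # 0
```

Both substitutions simplify to zero, confirming that each function is a solution over the interval.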

Let’s look a little more closely at the solution to equation (14.1),

ay ′′ + by ′ + cy = 0 ,

generally obtained via reduction of order. Assuming we have one known nontrivial particular
solution y1 , we set
y = y1 u ,
plug this into the differential equation, and obtain (after simplification) an equation of the form

Au ′′ + Bu ′ = 0 , (14.2)

which can be treated as the first-order differential equation

Av ′ + Bv = 0

using the substitution v = u ′ . Assuming A and B are reasonable functions on our interval of
interest, you can easily verify that the general solution to this first-order equation is of the form

v(x) = c R v0 (x)

where c R is an arbitrary constant, and v0 is any particular (nontrivial) solution to this first-order
equation. (More about this differential equation, along with some important properties of its
solutions, is derived in the next chapter.)

Since u ′ = v , we can then recover the general formula for u from the general formula for
v by integration:
u(x) = ∫ v(x) dx = ∫ c R v0 (x) dx = c R ∫ v0 (x) dx

     = c R [u 0 (x) + c0 ] = c1 + c R u 0 (x)

where u 0 is any single antiderivative of v0 , c0 is the (arbitrary) constant of integration, and
c1 = c R c0 . This, with our initial substitution, yields the general solution

y(x) = y1 (x)u(x) = y1 (x) [c1 + c R u 0 (x)]

which, after letting


y R (x) = y1 (x)u 0 (x) ,
simplifies to
y(x) = c1 y1 (x) + c R y R (x) .
Thus, we have written a general solution to our second-order homogeneous differential
equation as a linear combination of just two particular solutions. The question now is whether
the set {y1 , y R } is linearly independent. If it were not, then y R = u 0 y1 would be a constant multiple
of y1 , which would mean u 0 is a constant and, consequently,

v0 = u 0 ′ = 0 ,

contrary to the known fact that v0 is a nontrivial solution to equation (14.2). So u 0 is not
a constant, y R = u 0 y1 is not a constant multiple of y1 , and the pair {y1 , y R } is linearly
independent. And since all other solutions can be written as linear combinations of these two
solutions, {y1 , y R } is a fundamental set of solutions for our differential equation.
What we have just shown is that, assuming

1. a nontrivial solution y1 to the second-order differential equation exists, and

2. the functions A and B are ‘reasonable’ over the interval of interest

then the reduction of order method yields a general solution to differential equation (14.1) of the
form
y(x) = c1 y1 (x) + c R y R (x)
where {y1 , y R } is a linearly independent set of solutions.1
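The mechanics of the substitution y = y1 u can themselves be checked symbolically. Here is a sketch (SymPy assumed), using the equation from Example 14.1: after substituting y = x 2 · u(x) and expanding, every term involving u itself cancels, leaving an equation in u ′′ and u ′ alone — exactly the form Au ′′ + Bu ′ = 0 described above.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
u = sp.Function('u')

y1 = x**2            # known particular solution of x^2 y'' - 3x y' + 4y = 0
y = y1 * u(x)        # the reduction of order substitution

lhs = sp.expand(x**2 * sp.diff(y, x, 2) - 3*x*sp.diff(y, x) + 4*y)
print(lhs)           # only u'' and u' terms survive; the u(x) term cancels
```

Working it out by hand gives the same thing: the left-hand side reduces to x⁴ u ′′ + x³ u ′ , with A = x⁴ and B = x³ .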

Applying a Little Linear Algebra


But what if we start out with any linearly independent pair of solutions {y1 , y2 } to differential
equation (14.1)? Using y1 , we can still derive the general solution

y(x) = c1 y1 (x) + c R y R (x)

1 In fact, theorem 11.2 on page 253 can be used to show that a y1 exists. The real difficulty is in verifying that A
and B are ‘reasonable’, especially if y1 is zero at some point in the interval.

where y R is that second solution obtained through the reduction of order method. And since
this is a general solution and y2 is a particular solution, there must be constants κ1 and κ R such
that
y2 (x) = κ1 y1 (x) + κ R y R (x) .
Moreover, because {y1 , y2 } is (by assumption) linearly independent, y2 cannot be a constant
multiple of y1 . Thus κ R ≠ 0 in the above equation, and that equation can be solved for y R ,
obtaining

y R (x) = −(κ1 /κ R ) y1 (x) + (1/κ R ) y2 (x) .
Consequently,

y(x) = c1 y1 (x) + c R y R (x)

     = c1 y1 (x) + c R [ −(κ1 /κ R ) y1 (x) + (1/κ R ) y2 (x) ] = C1 y1 (x) + C2 y2 (x)

where
C1 = c1 − c R κ1 /κ R and C2 = c R /κ R .
So any linear combination of y1 and y R can also be expressed as a linear combination of y1
and y2 . This means
y(x) = C1 y1 (x) + C2 y2 (x)
can also be used as a general solution, and, hence, {y1 , y2 } is also a fundamental set of solutions
for our differential equation.
So what? Well, if you are lucky enough to easily find a linearly independent pair of solutions
to a given second-order homogeneous equation, then you can use that pair as your fundamental
set of solutions — there is no need to grind through the reduction of order computations.2

The Big Theorem on Second-Order Homogeneous Linear Differential Equations
Let me repeat what we’ve just derived:

The general solution of a second-order homogeneous linear differential equation is


given by
y(x) = c1 y1 (x) + c2 y2 (x)
where {c1 , c2 } is a pair of arbitrary constants and {y1 , y2 } is any linearly indepen-
dent pair of particular solutions to that differential equation.

In deriving this statement, we made some assumptions about the existence of solutions, and the
‘reasonableness’ of the first-order differential equation arising in the reduction of order method.
In the next chapter, we will rigorously rederive this statement without making these assumptions.
We will also examine a few related issues regarding the linear independence of solution sets and
the solvability of initial-value problems. What we will discover is that the following theorem can
be proven. This can be considered the “Big Theorem on Second-Order Homogeneous Linear

2 If you’ve had a course in linear algebra, you may recognize that a “fundamental set of solutions” is a “basis set”
for the “vector space of all solutions to the given homogeneous differential equation”. This is worth noting, if you
understand what is being noted.

Differential Equations”. It will be used repeatedly, often without comment, in the chapters that
follow.

Theorem 14.1 (general solutions to second-order homogeneous linear differential equations)


Let I be some open interval, and suppose we have a second-order homogeneous linear differ-
ential equation
ay ′′ + by ′ + cy = 0
where, on I , the functions a , b and c are continuous, and a is never zero. Then the following
statements all hold:

1. Fundamental sets of solutions for this differential equation (over I ) exist.

2. Every fundamental solution set consists of a pair of solutions.

3. If {y1 , y2 } is any linearly independent pair of particular solutions over I , then:


(a) {y1 , y2 } is a fundamental set of solutions.
(b) A general solution to the differential equation is given by

y(x) = c1 y1 (x) + c2 y2 (x)

where c1 and c2 are arbitrary constants.


(c) Given any point x0 in I and any two fixed values A and B , there is exactly one
ordered pair of constants {c1 , c2 } such that

y(x) = c1 y1 (x) + c2 y2 (x)

also satisfies the initial conditions

y(x0 ) = A and y ′ (x0 ) = B .

The statement about “initial conditions” in the above theorem assures us that second-order
sets of initial conditions are appropriate for second-order linear differential equations. It also
assures us that a fundamental solution set for a second-order linear homogeneous differential
equation cannot become “degenerate” at any point in the interval I . In other words, there is
no need to worry about whether an initial-value problem (with x0 in I ) can be solved. It has
a solution, and only one solution. (To see why we might be worried about “degeneracy”, see
exercise 14.2 on page 308.)
To illustrate how this theorem is used, let us solve a differential equation that you may recall
solving in chapter 11 (see page 247). Comparing the approach used there with that used here
should lead you to greatly appreciate the theory we’ve just developed.

Example 14.2: Consider (again) the homogeneous second-order linear differential equation

y ′′ + y = 0 .

In example 12.2 on page 266 we discovered (“by inspection”) that

y1 (x) = cos(x) and y2 (x) = sin(x)



are two solutions to this differential equation, and in example 12.4 on page 268 we observed
that these two solutions form a linearly independent pair. The above theorem now assures
us that, indeed, this pair,
{ cos(x) , sin(x) } ,
is a fundamental set of solutions for the above second-order homogeneous linear differential
equation, and that
y(x) = c1 cos(x) + c2 sin(x)
is a general solution.

Example 14.3: Now, consider the initial-value problem

y ′′ + y = 0 with y(0) = 3 and y ′ (0) = 5 .

We just found that


y(x) = c1 sin(x) + c2 cos(x)
is a general solution to the differential equation. Taking derivatives, we have

y ′ (x) = [c1 sin(x) + c2 cos(x)]′ = c1 cos(x) − c2 sin(x) .

Using this in our set of initial conditions, we get

3 = y(0) = c1 sin(0) + c2 cos(0) = c1 · 0 + c2 · 1


and
5 = y ′ (0) = c1 cos(0) − c2 sin(0) = c1 · 1 − c2 · 0 .

Hence,
c1 = 5 and c2 = 3 ,
and the solution to our initial-value problem is

y(x) = c1 sin(x) + c2 cos(x)


= 5 sin(x) + 3 cos(x) .
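As a cross-check, SymPy's dsolve can solve the same initial-value problem directly (a sketch; the ics keyword for initial conditions is an assumption about your SymPy version — it is available in modern releases):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

sol = sp.dsolve(
    y(x).diff(x, 2) + y(x),                       # y'' + y = 0
    y(x),
    ics={y(0): 3, y(x).diff(x).subs(x, 0): 5},    # y(0) = 3, y'(0) = 5
)
print(sol.rhs)   # should match 5 sin(x) + 3 cos(x), up to term order
```

The machine answer agrees with the hand computation above.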

Finding fundamental sets of solutions for most homogeneous linear differential equations
will not be as easy as it was for the differential equation in the last two examples. Fortunately,
fairly straightforward methods are available for finding fundamental sets for some important
classes of differential equations. Some of these methods are partially described in the exercises,
and will be more completely developed in later chapters.

14.3 Homogeneous Linear Equations of Arbitrary Order
The big theorem on second-order homogeneous equations, theorem 14.1, can be extended to an
analogous theorem covering homogeneous linear equations of all orders. That theorem is:

Theorem 14.2 (general solutions to homogeneous linear differential equations)


Let I be some open interval, and suppose we have an N th -order homogeneous linear differential
equation
a0 y (N ) + a1 y (N −1) + · · · + a N −2 y ′′ + a N −1 y ′ + a N y = 0
where, on I , the ak ’s are all continuous functions with a0 never being zero. Then the following
statements all hold:
1. Fundamental sets of solutions for this differential equation (over I ) exist.
2. Every fundamental solution set consists of exactly N solutions.
3. If {y1 , y2 , . . . , y N } is any linearly independent set of N particular solutions over I ,
then:
(a) {y1 , y2 , . . . , y N } is a fundamental set of solutions.
(b) A general solution to the differential equation is given by
y(x) = c1 y1 (x) + c2 y2 (x) + · · · + c N y N (x)
where c1 , c2 , . . . and c N are arbitrary constants.
(c) Given any point x0 in I and any N fixed values A1 , A2 , . . . and A N , there is
exactly one ordered set of constants {c1 , c2 , . . . , c N } such that
y(x) = c1 y1 (x) + c2 y2 (x) + · · · + c N y N (x)
also satisfies the initial conditions
y(x0 ) = A1 , y ′ (x0 ) = A2 ,
y ′′ (x0 ) = A3 , · · · and y (N −1) (x0 ) = A N .

A proof of this theorem is given in the next chapter.

14.4 Linear Independence and Wronskians


Let {y1 , y2 , . . . , y N } be a set of N (sufficiently differentiable) functions on an interval I . The
corresponding Wronskian, denoted by either W or W [y1 , y2 , . . . , y N ] , is the function on I
generated by the following determinant of a matrix of derivatives of the yk ’s :

                                  | y1          y2          y3          · · ·   y N          |
                                  | y1 ′        y2 ′        y3 ′        · · ·   y N ′        |
W = W [y1 , y2 , . . . , y N ] =  | y1 ′′       y2 ′′       y3 ′′       · · ·   y N ′′       | .
                                  | ...         ...         ...         ...     ...          |
                                  | y1 (N −1)   y2 (N −1)   y3 (N −1)   · · ·   y N (N −1)   |

In particular, if N = 2 ,

                     | y1     y2   |
W = W [y1 , y2 ] =   | y1 ′   y2 ′ |  = y1 y2 ′ − y1 ′ y2 .

Example 14.4: Let’s find W [y1 , y2 ] on the real line when

y1 (x) = x 2 and y2 (x) = x 3 .

In this case,
y1 ′ (x) = 2x and y2 ′ (x) = 3x 2 ,
and

W [y1 , y2 ] = y1 (x) y2 ′ (x) − y1 ′ (x) y2 (x) = (x 2 )(3x 2 ) − (2x)(x 3 ) = x 4 .
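SymPy happens to ship a wronskian helper (assuming a reasonably recent version) that reproduces this computation — it builds the matrix of derivatives and takes its determinant:

```python
import sympy as sp
from sympy import wronskian   # determinant-of-derivatives helper

x = sp.symbols('x')

W = wronskian([x**2, x**3], x)   # 2x2 matrix [[x^2, x^3], [2x, 3x^2]], then det
print(sp.simplify(W))            # x**4
```

The result matches the hand computation above.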

Wronskians naturally arise when dealing with initial-value problems. For example, suppose
we have a pair of functions y1 and y2 , and we want to find constants c1 and c2 such that

y(x) = c1 y1 (x) + c2 y2 (x)

satisfies
y(x0 ) = 2 and y ′ (x0 ) = 5
for some given point x0 in our interval of interest. In solving for c1 and c2 , you can easily
show that

c1 W (x0 ) = 2y2 ′ (x0 ) − 5y2 (x0 ) and c2 W (x0 ) = 5y1 (x0 ) − 2y1 ′ (x0 ) .

Thus, if W (x0 ) ≠ 0 , then there is exactly one possible value for c1 and one possible value for
c2 , namely,

c1 = [ 2y2 ′ (x0 ) − 5y2 (x0 ) ] / W (x0 ) and c2 = [ 5y1 (x0 ) − 2y1 ′ (x0 ) ] / W (x0 ) .

However, if W (x0 ) = 0 , then the system reduces to

0 = 2y2 ′ (x0 ) − 5y2 (x0 ) and 0 = 5y1 (x0 ) − 2y1 ′ (x0 )

which cannot be solved for c1 and c2 .3
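These formulas for c1 and c2 are just Cramer's rule applied to a 2-by-2 linear system. A symbolic sketch (SymPy assumed; the symbols y1, y2, dy1, dy2 are stand-ins for the values y1(x0), y2(x0), y1′(x0), y2′(x0)) confirms that solving the system divides by W(x0):

```python
import sympy as sp

c1, c2 = sp.symbols('c1 c2')
y1, y2, dy1, dy2 = sp.symbols('y1 y2 dy1 dy2')   # y1(x0), y2(x0), y1'(x0), y2'(x0)

# The system y(x0) = 2 and y'(x0) = 5 from the text:
system = [sp.Eq(c1*y1 + c2*y2, 2), sp.Eq(c1*dy1 + c2*dy2, 5)]
sol = sp.solve(system, [c1, c2], dict=True)[0]

W = y1*dy2 - dy1*y2   # the Wronskian evaluated at x0

# Both constants match the formulas in the text (differences simplify to 0):
print(sp.simplify(sol[c1] - (2*dy2 - 5*y2)/W))   # 0
print(sp.simplify(sol[c2] - (5*y1 - 2*dy1)/W))   # 0
```

When W is zero, this division fails, which is exactly the degeneracy discussed above.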


More generally, the vanishing of a Wronskian of a set of functions signals that the given set
is not a good choice in constructing solutions to initial-value problems. The value of this fact is
enhanced by the following remarkable theorem:

Theorem 14.3 (Wronskians and fundamental solution sets)


Let W be the Wronskian of any set {y1 , y2 , . . . , y N } of N particular solutions to an N th -order
homogeneous linear differential equation

a0 y (N ) + a1 y (N −1) + · · · + a N −2 y ′′ + a N −1 y ′ + a N y = 0

on some open interval I . Assume further that the ak ’s are continuous functions with a0 never
being zero on I . Then:
3 Of course, the choice of 2 and 5 as the initial values was not important; any other values could have been used
(we were just trying to reduce the number of symbols to keep track of). What is important is whether W (x0 ) is
zero or not.

1. If W (x0 ) = 0 for any single point x0 in I , then W (x) = 0 for every point x in I , and
the set {y1 , y2 , . . . , y N } is not linearly independent (and, hence, is not a fundamental
solution set) on I .

2. If W (x0 ) ≠ 0 for any single point x0 in I , then W (x) ≠ 0 for every point x in I ,
and {y1 , y2 , . . . , y N } is a fundamental set of solutions for the given differential
equation on I .

This theorem (proven in the next chapter) gives a relatively easy-to-use test for determining
when a set of solutions to a linear homogeneous differential equation is a fundamental set of
solutions. This test is especially useful when the order of the differential equation is 3 or higher.

Example 14.5: Consider the functions

y1 (x) = 1 , y2 (x) = cos(2x) and y3 (x) = sin2 (x) .

You can easily verify that all are solutions (over the entire real line) to the homogeneous
third-order linear differential equation

y ′′′ + 4y ′ = 0 .

So, is
{ 1, cos(2x) , sin2 (x) }


a fundamental set of solutions for this differential equation? To check we compute the first-
order derivatives

y1 ′ (x) = 0 , y2 ′ (x) = −2 sin(2x) , y3 ′ (x) = 2 sin(x) cos(x) ,

the second-order derivatives

y1 ′′ (x) = 0 , y2 ′′ (x) = −4 cos(2x) and y3 ′′ (x) = 2 cos2 (x)−2 sin2 (x) ,

and form the corresponding Wronskian,

                                       | 1   cos(2x)      sin2 (x)                 |
W (x) = W [1, cos(2x) , sin2 (x)] =    | 0   −2 sin(2x)   2 sin(x) cos(x)          | .
                                       | 0   −4 cos(2x)   2 cos2 (x) − 2 sin2 (x)  |

Rather than compute out this determinant for ‘all’ values of x (which could be very tedious),
let us simply pick a convenient value for x , say x = 0 , and compute the Wronskian at that
point:

           | 1   cos(2 · 0)      sin2 (0)                  |     | 1    1    0 |
W (0) =    | 0   −2 sin(2 · 0)   2 sin(0) cos(0)           |  =  | 0    0    0 |  = 0 .
           | 0   −4 cos(2 · 0)   2 cos2 (0) − 2 sin2 (0)   |     | 0   −4    2 |

Theorem 14.3 assures us that, since this Wronskian vanishes at that one point, it must vanish
everywhere. More importantly for us, this theorem also tells us that {1, cos(2x) , sin2 (x)} is
not a fundamental set of solutions for our differential equation.

Example 14.6: Now consider the functions

y1 (x) = 1 , y2 (x) = cos(2x) and y3 (x) = sin(2x) .

Again, you can easily verify that all are solutions (over the entire real line) to the homogeneous
third-order linear differential equation

y ′′′ + 4y ′ = 0 .

So, is
{ 1, cos(2x) , sin(2x)}
a fundamental set of solutions for our differential equation, above? To check we compute the
appropriate derivatives and form the corresponding Wronskian,

W (x) = W [1, cos(2x) , sin(2x)]

       | y1     y2     y3    |     | 1   cos(2x)      sin(2x)     |
    =  | y1 ′   y2 ′   y3 ′  |  =  | 0   −2 sin(2x)   2 cos(2x)   | .
       | y1 ′′  y2 ′′  y3 ′′ |     | 0   −4 cos(2x)   −2 sin(2x)  |

Letting x = 0 , we get

           | 1   cos(2 · 0)      sin(2 · 0)     |     | 1    1    0 |
W (0) =    | 0   −2 sin(2 · 0)   2 cos(2 · 0)   |  =  | 0    0    2 |  = 8 ≠ 0 .
           | 0   −4 cos(2 · 0)   −2 sin(2 · 0)  |     | 0   −4    0 |

Theorem 14.3 assures us that, since this Wronskian is nonzero at one point, it is nonzero every-
where, and that {1, cos(2x) , sin(2x)} is a fundamental set of solutions for our differential
equation. Hence,
y(x) = c1 · 1 + c2 cos(2x) + c3 sin(2x)
is a general solution to our third-order differential equation.
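The contrast between Examples 14.5 and 14.6 is easy to reproduce with SymPy's wronskian helper (assumed available), evaluating at x = 0 just as in the text:

```python
import sympy as sp
from sympy import wronskian

x = sp.symbols('x')

W_bad  = wronskian([sp.Integer(1), sp.cos(2*x), sp.sin(x)**2], x)   # Example 14.5
W_good = wronskian([sp.Integer(1), sp.cos(2*x), sp.sin(2*x)], x)    # Example 14.6

print(sp.simplify(W_bad))   # 0 -- vanishes identically: not a fundamental set
print(W_good.subs(x, 0))    # 8 -- nonzero, so a fundamental set by theorem 14.3
```

The first set fails because sin2 (x) = (1 − cos(2x))/2 is a linear combination of the other two functions, so its Wronskian is identically zero.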

Additional Exercises

14.1 a. Assume y is a solution to

x 2 (d 2 y/dx 2 ) + 4x (dy/dx) + sin(x) y = 0
over the interval (0, ∞) . Keep in mind that this automatically requires y , y ′ and y ′′
to be defined at each point in (0, ∞) . Thus, both y and y ′ are differentiable on this
interval and, as you learned in calculus, this means that y and y ′ must be continuous
on (0, ∞) . Now, rewrite the above equation to obtain a formula for y ′′ in terms of
y and y ′ , and, using this formula, show that y ′′ must also be continuous on (0, ∞) .
Why can we not be sure that y ′′ is continuous at 0 ?

b. Let I be some interval, and assume y satisfies

a (d 2 y/dx 2 ) + b (dy/dx) + cy = 0

over I . Assume, further, that a , b and c , as well as both y and y ′ are continuous
functions over I , and that a is never zero on I . Show that y ′′ also must be continuous
on I . Why do we require that a never vanishes on I ?
c. Let I be some interval, and assume y satisfies

a0 y (N ) + a1 y (N −1) + · · · + a N −2 y ′′ + a N −1 y ′ + a N y = 0

over I . Assume, further, that the ak ’s , as well as y , y ′ , . . . and y (n−1) are contin-
uous functions over I , and that a0 is never zero on I . Show that y (N ) also must be
continuous on I . Why do we require that a0 never vanishes on I ?

14.2. The following exercises all refer to theorem 14.1 on page 302 and the following pair of
functions:
{ y1 , y2 } = { x 2 , x 3 } .

a. Using the theorem, verify that


{ x 2 , x 3 }


is a fundamental solution set for

x 2 y ′′ − 4x y ′ + 6y = 0

over the interval (0, ∞) .


b. Find the constants c1 and c2 so that

y(x) = c1 x 2 + c2 x 3

satisfies the initial conditions

y(1) = 0 and y ′ (1) = −4 .

c. Attempt to find the constants c1 and c2 so that

y(x) = c1 x 2 + c2 x 3

satisfies the initial conditions

y(0) = 0 and y ′ (0) = −4 .

What ‘goes wrong’? Why does this not violate the claim in theorem 14.1 about initial-
value problems being solvable?

14.3. Particular solutions to the differential equation in each of the following initial-value
problems can be found by assuming

y(x) = e^(rx)

where r is a constant to be determined. To determine these constants, plug this formula
for y into the differential equation, observe that the resulting equation miraculously
simplifies to a simple algebraic equation for r , and solve for the possible values of r .
Do that with each equation; then use those solutions and the big theorem on general
solutions to second order, homogeneous linear equations (theorem 14.1 on page 302)
to construct a general solution, and, finally, solve the given initial-value problem:
a. y ′′ + y ′ − 2y = 0 with y(0) = 1 and y ′ (0) = 1

b. y ′′ + 4y ′ + 3y = 0 with y(0) = 2 and y ′ (0) = −1

c. 6y ′′ − 5y ′ + y = 0 with y(0) = 4 and y ′ (0) = 0

d. y ′′ + 3y ′ = 0 with y(0) = −2 and y ′ (0) = 3

14.4. Find solutions of the form

y(x) = e^(rx)

where r is a constant (as in the previous exercise) and use the solutions found (along
with the results given in theorem 14.2 on page 304) to construct general solutions to the
following differential equations:
a. y ′′′ − 9y ′ = 0 b. y (4) − 10y ′′ + 9y = 0
