Notes2011a PDF
Arne Jensen
Department of Mathematical Sciences
Aalborg University, Fr. Bajers Vej 7G
DK-9220 Aalborg Ø, Denmark
Contents

2 Prerequisites
5 Difference calculus
9 Supplementary exercises
  9.1 Trial Exam December 2010
  9.2 Exam January 2011
  9.3 Exam February 2011
References

List of Figures

4.1 Point plot of the solution (4.10). Points connected with blue line segments
4.2 Point plot of the solution to (4.11). Points connected with blue line segments

List of Tables

6.1 Method of undetermined coefficients
1 Introduction
These lecture notes are intended for the courses “Introduction to Mathematical Methods”
and “Introduction to Mathematical Methods in Economics”. They contain a number of
results of a general nature, and in particular an introduction to selected parts of the theory
of difference equations.
2 Prerequisites
The reader is assumed to have the basic knowledge of mathematics provided by the Danish
high school system. This includes a basic understanding of mathematical notation and
familiarity with reading a mathematical text. These notes are written in the manner of an
ordinary mathematical text. This means that the density of information on each page is
quite high.
Experience shows that one aspect of mathematical notation is often not understood, or
worse, misunderstood. This concerns the order of mathematical operations. We start with
numbers. To make statements obvious we use a centered dot to denote the multiplication
of two numbers, as in 4·7. One of the rules states that multiplication takes precedence over
addition and subtraction. This means that 3+4·7 = 3+28 = 31, i.e. multiplication is carried
out before addition. The same rule applies to division, such that 3+8/4 = 3+2 = 5. Powers
and roots take precedence over multiplication and division. Thus √(4 + 5) − 2 = √9 − 2 =
3 − 2 = 1.
Precedence can be changed using parentheses. Thus 3 + 4 · 7 = 31, but (3 + 4) · 7 = 49.
The rules can be summarized as follows. The order of precedence is: first parentheses, then powers and roots, then multiplication and division, and finally addition and subtraction. Here is an example:

3 − 4 · 2 + 3^2/(2 + 1) = 3 − 4 · 2 + 3^2/3 = 3 − 4 · 2 + 9/3
= 3 − 8 + 9/3 = 3 − 8 + 3 = −5 + 3 = −2.
The same rules apply to symbolic expressions, for example a polynomial of degree 3:
a_3 x^3 + a_2 x^2 + a_1 x + a_0.
A common source of confusion is the unary operator minus. In mathematics −3^2 means −(3^2) = −9, whereas in some software, e.g. Excel, the result is 9, since it is −3 that is squared. In these notes we always use the
mathematical rule for the unary operator minus. In solving problems you must always
use the mathematical rule.
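As an illustration, the rules can be checked in a programming language such as Python, whose operator precedence follows the mathematical convention, also for the unary minus (the code is added for illustration and is not part of the notes):

```python
# Operator precedence in Python matches the mathematical rules above:
# powers bind tighter than * and /, which bind tighter than + and -.
assert 3 + 4 * 7 == 31
assert (3 + 4) * 7 == 49
assert 3 + 8 / 4 == 5
assert 3 - 4 * 2 + 3**2 / (2 + 1) == -2

# The unary minus follows the mathematical rule: -3**2 means -(3**2) = -9.
assert -3**2 == -9
# Squaring -3 itself requires parentheses, as in spreadsheet software:
assert (-3)**2 == 9
```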
Some additional information can be found in [5].
N ⊂ N0 ⊂ Z ⊂ Q ⊂ R.
Sums are written as

∑_{n=n_first}^{n_last} x(n).

Here n_first is called the lower limit and n_last the upper limit. The summand x(n) is a function of n.
Our results are sometimes expressed as indefinite sums. Here are two examples.
∑_{n=1}^{N} n = N(N + 1)/2 and ∑_{n=1}^{N} n^2 = N(N + 1)(2N + 1)/6.
One important question is how to prove such general formulas. The technique used is
called proof by induction. We will give a simplified description of this technique. We have
a certain statement, depending on an integer n ∈ N. We would like to establish its validity
for all n ∈ N. The proof technique comprises two steps.

1. Base case. Prove that the statement holds for n = 1.

2. Induction step. Prove that if the statement holds for n, then it also holds when n is
replaced by n + 1.
Verification of these two steps constitutes the proof of the statement for all integers n ∈ N.
Let us illustrate the technique. We want to prove the formula
∑_{n=1}^{N} n = N(N + 1)/2 for all N ∈ N.

For N = 1 the formula reads

1 = 1(1 + 1)/2,
which is true. For the second step we assume that the formula is valid for some N and
consider the left hand side for N + 1.
∑_{n=1}^{N+1} n = ∑_{n=1}^{N} n + (N + 1) = N(N + 1)/2 + (N + 1).
The second equality follows from our assumption. We now rewrite this last expression:

N(N + 1)/2 + (N + 1) = (N + 1)(N + 2)/2,

which is the right hand side of the formula with N replaced by N + 1. This completes the induction step.
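A quick numerical check of the formula (illustrative only; it does not replace the induction proof):

```python
# Verify sum_{n=1}^{N} n == N(N+1)/2 for N = 1, ..., 100.
for N in range(1, 101):
    assert sum(range(1, N + 1)) == N * (N + 1) // 2
print("formula holds for N = 1, ..., 100")
```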
The terminology is analogous to the one used for sums. In particular, we will be using
indefinite products. The product

∏_{n=1}^{N} n

appears so often that it has a name. It is called the factorial of N, written as N!. So by
definition

N! = ∏_{n=1}^{N} n.
10! = 3628800,
20! = 2432902008176640000,
30! = 265252859812191058636308480000000.
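These values can be checked against Python's standard library (an illustration, not part of the original notes):

```python
from math import factorial, prod

# N! as the explicit product of 1, 2, ..., N, compared with math.factorial
# and with the values listed above.
assert prod(range(1, 11)) == factorial(10) == 3628800
assert factorial(20) == 2432902008176640000
assert factorial(30) == 265252859812191058636308480000000
```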
Products are written in the same manner as sums:

∏_{n=n_first}^{n_last} x(n).
Important convention. We use the following conventions. If n1 > n2, then by definition

∑_{n=n1}^{n2} a(n) = 0 and ∏_{n=n1}^{n2} a(n) = 1. (3.1)
Recall our convention 0! = 1. The binomial coefficients satisfy many identities. One of
them is the following.
(n + 1 choose k) = (n choose k − 1) + (n choose k), k = 1, . . . , n. (3.5)
Exercises
Exercise 3.1. Prove by induction that we have
∑_{n=1}^{N} n^2 = N(N + 1)(2N + 1)/6.
Exercise 3.2. Let q ≠ 1. Prove by induction that we have

∑_{n=0}^{N} q^n = (q^{N+1} − 1)/(q − 1). (3.6)

What is ∑_{n=0}^{N} q^n for q = 1?
Exercise 3.3. Prove by induction that we have
∑_{n=1}^{N} n^3 = N^2(N + 1)^2/4.
4 First order difference equations

We first consider the homogeneous first order difference equation with constant coefficients

x(n + 1) = ax(n), n ∈ N0, (4.1)

which has the solution

x(n) = a^n x(0), n ∈ N0. (4.2)

The value x(0) is called the initial value. To prove that (4.2) solves (4.1), we compute as
follows.

x(n + 1) = a^{n+1} x(0) = a(a^n x(0)) = ax(n).
Example 4.1. An amount of USD 10,000 is deposited in a bank account with an annual
interest rate of 4%. Determine the balance of the account after 15 years. This problem
leads to the difference equation b(n + 1) = 1.04 b(n) with initial value b(0) = 10,000. The solution is

b(n) = (1.04)^n · 10,000,

in particular b(15) = 18,009.44.
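A minimal sketch of Example 4.1, iterating the difference equation directly:

```python
# b(n+1) = 1.04 * b(n), b(0) = 10000; balance after 15 years.
b = 10_000.0
for _ in range(15):
    b *= 1.04
assert abs(b - 18_009.44) < 0.01  # agrees with the closed form 1.04**15 * 10000
print(f"b(15) = {b:.2f}")
```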
We write the equation (4.1) as

x(n + 1) − ax(n) = 0. (4.3)

This equation is called a homogeneous first order difference equation with constant coefficients. The term homogeneous means that the right hand side is zero. A corresponding
inhomogeneous equation is given as

x(n + 1) − ax(n) = c, (4.4)

where we take the right hand side to be a constant different from zero.
The equation (4.3) is called linear, since it satisfies the superposition principle. Let y(n)
and z(n) be two solutions to (4.3), and let α, β ∈ R be two real numbers. Define w(n) =
αy(n) + βz(n). Then w(n) also satisfies (4.3), as the following computation shows:

w(n + 1) − aw(n) = α(y(n + 1) − ay(n)) + β(z(n + 1) − az(n)) = 0.
We now solve (4.4). The idea is to compute a number of terms, guess the structure of
the solution, and then prove that we have indeed found the solution. First we compute a
number of terms. In the computation of x(2) we give all intermediate steps. These are
omitted in the computation of x(3) etc.
x(1) = ax(0) + c,
x(2) = ax(1) + c = a(ax(0) + c) + c = a^2 x(0) + ac + c,
x(3) = ax(2) + c = a^3 x(0) + a^2 c + ac + c,
x(4) = ax(3) + c = a^4 x(0) + a^3 c + a^2 c + ac + c,
x(5) = ax(4) + c = a^5 x(0) + a^4 c + a^3 c + a^2 c + ac + c,
...

This leads to the guess

x(n) = a^n x(0) + c ∑_{k=0}^{n−1} a^k. (4.5)
To prove that (4.5) is a solution to (4.4), we must prove that (4.5) satisfies this equation. We
compute as follows.
x(n + 1) = a^{n+1} x(0) + c ∑_{k=0}^{n} a^k
= a^{n+1} x(0) + c(1 + a + a^2 + · · · + a^{n−1} + a^n)
= a(a^n x(0)) + c + a·c(1 + a + a^2 + · · · + a^{n−1})
= a( a^n x(0) + c ∑_{k=0}^{n−1} a^k ) + c
= ax(n) + c.
Thus we have shown that (4.5) is a solution to (4.4). For a ≠ 1 the solution (4.5) can be
rewritten using the result (3.6):
x(n) = a^n x(0) + c (a^n − 1)/(a − 1). (4.6)
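A check, with hypothetical values a = 0.5, c = 2, x(0) = 1, that direct iteration of x(n + 1) = ax(n) + c agrees with the closed form (4.6):

```python
# Iterate x(n+1) = a*x(n) + c and compare with the closed form
# x(n) = a**n * x0 + c * (a**n - 1) / (a - 1), valid for a != 1.
a, c, x0 = 0.5, 2.0, 1.0
x = x0
for _ in range(20):
    x = a * x + c
closed_form = a**20 * x0 + c * (a**20 - 1) / (a - 1)
assert abs(x - closed_form) < 1e-9
```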
In the general case both a and c will be functions of n. We have the following result.
Theorem 4.2. Let a(n) and c(n), n ∈ N0, be real sequences. Then the linear first order
difference equation

y(n + 1) = a(n)y(n) + c(n), n ∈ N0, y(0) = y0, (4.7)

has the unique solution

y(n) = ( ∏_{k=0}^{n−1} a(k) ) y0 + ∑_{k=0}^{n−1} ( ∏_{j=k+1}^{n−1} a(j) ) c(k). (4.8)
Proof. We define the sequence y(n) by (4.8). We must show that it satisfies the equation
(4.7) and the initial condition. Due to the convention (3.1) the initial condition is trivially
satisfied. We first write out the expression for y(n + 1)
y(n + 1) = ( ∏_{k=0}^{n} a(k) ) y0 + ∑_{k=0}^{n} ( ∏_{j=k+1}^{n} a(j) ) c(k),
which implies
y(n + 1) = a(n)y(n) + c(n).
Thus we have shown that y(n) is a solution. Finally we must prove uniqueness. Assume
that we have two solutions y(n) and ỹ(n), which satisfy (4.7), i.e. both the equation and
the initial condition are satisfied by both solutions. Now consider {n ∈ N0 | y(n) ≠ ỹ(n)}.
Let n0 be the smallest integer in this set. We must have n0 ≥ 1, since y(0) = ỹ(0) = y0 .
By the definition of n0 we have y(n0 − 1) = ỹ(n0 − 1), and then

y(n0) = a(n0 − 1)y(n0 − 1) + c(n0 − 1) = a(n0 − 1)ỹ(n0 − 1) + c(n0 − 1) = ỹ(n0),

which is a contradiction. Thus the set {n ∈ N0 | y(n) ≠ ỹ(n)} must be empty, i.e. y(n) = ỹ(n)
for all n ∈ N0. It follows that the solution is unique.
4.1 Examples
We now give some examples. Details should be worked out by the reader.
x(n) = (−1)^n 3.
The last equality requires results that are not covered by this course, so the first expression
is sufficient as the solution to the problem.
This problem can be solved in two different manners. One can directly use the general
formula (4.8). In this case one gets the solution
x(n) = ∏_{k=0}^{n−1} (k − 4).
But this solution is not very explicit. A more explicit solution can be found by noting that
for n ≥ 5 the product contains the factor 0, hence the product is zero. Thus one has the
explicit solution:
x(n) = 1 for n = 0,
       −4 for n = 1,
       12 for n = 2,
       −24 for n = 3,
       24 for n = 4,
       0 for n ≥ 5.    (4.10)
Figure 4.1: Point plot of the solution (4.10). Points connected with blue line segments
We illustrate the solution in Figure 4.1. Here we plot the values of x(n) as filled circles,
connected by blue line segments. We include the line segments to visualize the variations
in the values.
We note that the solution (4.10) is very sensitive to small changes in the equation. If
we add a small constant inhomogeneous term, the solution will rapidly diverge from the
solution zero for n ≥ 5. As an example we consider
x(n + 1) = (n − 4)x(n) + 1/20, x(0) = 1. (4.11)
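The sensitivity can be observed numerically; the sketch below iterates both the unperturbed equation and the perturbed equation (4.11):

```python
# x(n+1) = (n-4)*x(n),         x(0) = 1  (unperturbed)
# y(n+1) = (n-4)*y(n) + 1/20,  y(0) = 1  (equation (4.11))
x, y = 1.0, 1.0
for n in range(12):
    x = (n - 4) * x
    y = (n - 4) * y + 1 / 20
assert x == 0.0      # the factor (n-4) vanishes at n = 4, so x(n) = 0 for n >= 5
assert abs(y) > 100  # the perturbed solution grows rapidly instead
print("x(12) =", x, ", y(12) =", y)
```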
A plot of this solution is shown in Figure 4.2.
Figure 4.2: Point plot of the solution to (4.11). Points connected with blue line segments
Example 4.6. Let us consider the payment of a loan. Payments are made periodically, e.g.
once a month. The interest rate per period is 100r %. The payment at the end of each
period is denoted p(n). The initial loan is q(0). The outstanding balance after n payments
is denoted q(n). Thus q(n) must satisfy the difference equation

q(n + 1) = (1 + r)q(n) − p(n), n ∈ N0.
Often the loan is paid back in equal installments, i.e. p(n) = p for all n. Then the above
sum can be computed. We get the result
q(n) = (1 + r)^n q(0) − (p/r)( (1 + r)^n − 1 ). (4.14)
Suppose that we want to pay back the loan in N installments. Then the installment is
determined by
p = q(0) · r/( 1 − (1 + r)^{−N} ). (4.15)
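A sketch of the formulas (4.14) and (4.15) with hypothetical numbers (a loan of 100,000 at r = 0.005 per period, repaid in N = 120 equal installments; the numbers are chosen for illustration only):

```python
q0, r, N = 100_000.0, 0.005, 120

# Installment for repayment in N equal installments, equation (4.15):
p = q0 * r / (1 - (1 + r) ** -N)

def balance(n):
    # Outstanding balance after n payments, equation (4.14).
    return (1 + r) ** n * q0 - (p / r) * ((1 + r) ** n - 1)

assert abs(balance(0) - q0) < 1e-9
assert abs(balance(N)) < 0.01  # the loan is paid off after N installments
print(f"installment: {p:.2f}")
```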
Exercises
Exercise 4.1. Fill in the details in Example 4.6. In particular the computations leading to
(4.14).
Exercise 4.3. Adapt the results in Example 4.6 to the case where initially no installments
are paid.
Exercise 4.4. Discuss the application to loans with a variable interest rate of the results in
this section.
Exercise 4.5. Implement the various formulas for interest computation and loan amortiza-
tion on a programmable calculator or in Maple. In particular, implement the formulas for
loans with a variable interest rate and try them out on some real world examples.
Exercise 4.6. Solve each of the following first order difference equations.
7. x(n + 1) = ((n − 4)/(n + 1)) x(n), x(0) = 1.
8. x(n + 1) = x(n) + 3, x(0) = 1.
Exercise 4.7. Prove that if x1(n) is a solution to x(n + 1) = a(n)x(n) + c1(n), and x2(n) is a solution
to x(n + 1) = a(n)x(n) + c2(n), then x(n) = x1(n) + x2(n) is a solution to the equation

x(n + 1) = a(n)x(n) + c1(n) + c2(n). (4.16)
5 Difference calculus
Before we proceed to the study of general difference equations, we establish some results
on the difference calculus. We denote all functions from Z to R by S(Z), and all functions
from N0 to R by S(N0 ).
The set S(Z) is a real vector space. See [3] for the definition.
Proposition 5.1. The set S(Z) is a real vector space, if the addition is defined as (x + y)(n) = x(n) + y(n) and the scalar multiplication as (αx)(n) = αx(n), for x, y ∈ S(Z) and α ∈ R.
Below we give definitions and results for x ∈ S(Z). To apply these results to functions
(sequences) on N0 , we consider S(N0 ) as a subset of S(Z). This is done in the following
manner. Given x ∈ S(N0 ), we define
(ιx)(n) = x(n) for n ≥ 0, and (ιx)(n) = 0 for n < 0.
A function that maps a function x(n) to a new function y(n) is called an operator. An
example is the operator ι : S(N0) → S(Z) defined above. We define the operators ∆, S, and
I as follows:

(∆x)(n) = x(n + 1) − x(n), (Sx)(n) = x(n + 1), (Ix)(n) = x(n).
The relation between the three operators is
∆ = S − I. (5.4)
The operators S and ∆ are linear. We recall from [3] that an operator U : S(Z) → S(Z) is
said to be linear, if it satisfies

U(αx + βy) = αU(x) + βU(y) for all x, y ∈ S(Z) and all α, β ∈ R.
6 Second order difference equations

We consider the second order linear difference equation

x(n + 2) + b(n)x(n + 1) + c(n)x(n) = f(n), n ∈ N0. (6.1)

Here b(n), c(n), f(n) are given sequences. If f(n) = 0 for all n, then the equation is
homogeneous, viz.

x(n + 2) + b(n)x(n + 1) + c(n)x(n) = 0, n ∈ N0. (6.2)

A list of vectors x1, x2, . . . , xN in a vector space is said to be linearly independent, if

c1 x1 + c2 x2 + · · · + cN xN = 0 implies c1 = 0, c2 = 0, . . . , cN = 0. (6.3)
(i) The definition is the same as in [3], and many of the results stated there carry over
to the present more abstract framework.
(ii) We call the collection of vectors x1 , x2 , . . . , xN a list, since the elements are viewed as
ordered. In particular, in contrast to a set, repetition of entries is significant.
(iii) Let us state explicitly what it means that the list of vectors x1, x2, . . . , xN is linearly
dependent. It means that there exist c1, c2, . . . , cN with at least one cj ≠ 0, such that

c1 x1 + c2 x2 + · · · + cN xN = 0.
We will need some results to prove linear independence of vectors in S(N0). We give the
general definition here. In this section we use it only for N = 2. For two sequences x1, x2 the Casoratian is defined as

W(n) = det[ x1(n), x2(n); x1(n + 1), x2(n + 1) ] = x1(n)x2(n + 1) − x1(n + 1)x2(n). (6.5)

Proposition 6.4. If the Casoratian of x1, x2, . . . , xN satisfies W(n0) ≠ 0 for some n0 ∈ N0, then the list x1, x2, . . . , xN is linearly independent.
Proof. We give the proof in the case N = 2. Thus we have sequences x1 , x2 , and n0 ∈ N0 ,
such that
W(n0) = det[ x1(n0), x2(n0); x1(n0 + 1), x2(n0 + 1) ] ≠ 0. (6.6)
Now assume that we have a linear combination
c1 x1 + c2 x2 = 0.
More explicitly, this means that c1 x1 (n) + c2 x2 (n) = 0 for all n ∈ N0 . In particular, we have
c1 x1 (n0 ) + c2 x2 (n0 ) = 0,
c1 x1 (n0 + 1) + c2 x2 (n0 + 1) = 0.
Now the determinant condition W (n0 ) ≠ 0 implies that the coefficient matrix is invertible,
hence the only solution is the trivial one, c1 = 0 and c2 = 0.
The general case is left as an exercise.
Lemma 6.5. Assume that x1 and x2 are two solutions to the homogeneous equation (6.2). Let
W(n) be the Casoratian of these solutions, given by (6.5) with N = 2. Then we have for n0 ∈ N0
that for all n ≥ n0
W(n) = W(n0) ∏_{k=n0}^{n−1} c(k). (6.7)
Proof. The equation (6.2) implies

xi(n + 2) = −c(n)xi(n) − b(n)xi(n + 1), i = 1, 2.

Then we have

W(n + 1) = det[ x1(n + 1), x2(n + 1); x1(n + 2), x2(n + 2) ]
= det[ x1(n + 1), x2(n + 1); −c(n)x1(n) − b(n)x1(n + 1), −c(n)x2(n) − b(n)x2(n + 1) ]
= det[ x1(n + 1), x2(n + 1); −c(n)x1(n), −c(n)x2(n) ] = c(n)W(n).

In the last step we used that adding a multiple of the first row to the second row does not change the determinant.
Solving the linear first order difference equation W (n + 1) = c(n)W (n) with initial value
W (n0 ) (see Theorem 4.2), we conclude the proof.
We now consider the homogeneous second order equation with constant coefficients

x(n + 2) + bx(n + 1) + cx(n) = 0, b, c ∈ R, n ∈ N0. (6.8)

We now go through the steps leading to the complete solution to this equation, and then
at the end we summarize the results in a theorem.
We assume that c ≠ 0, since otherwise the equation is a first order equation for the
function y(n) = x(n + 1), which we have already solved. To solve the equation (6.8) we try
to find solutions of the form x(n) = r n , where r ≠ 0, and r may be either real or complex.
We will see below why we have to allow complex solutions. Insert x(n) = r n into (6.8) and
use r ≠ 0 to get the equation
r^2 + br + c = 0. (6.9)
Case 1 If b^2 − 4c > 0, then (6.9) has two different real roots, which we denote by r1 and r2.

Case 2 If b^2 − 4c = 0, then (6.9) has one real double root, which we denote by r0.

Case 3 If b^2 − 4c < 0, then (6.9) has a pair of complex conjugate roots, which we denote by
r± = α ± iβ, β > 0.
Consider first Case 1. Let x1(n) = r1^n and x2(n) = r2^n, n ∈ N0. We now use Proposition 6.4
with n0 = 0. We have
W(0) = det[ r1^0, r2^0; r1^1, r2^1 ] = det[ 1, 1; r1, r2 ] = r2 − r1 ≠ 0.
Thus we have found two linearly independent solutions to (6.8). Note that the solutions
are real.
Next we consider Case 3. Since we assume that the coefficients in (6.8) are real, we
would like to find real solutions. We state the following result.
Proposition 6.6. Let y be a complex solution to (6.8). Then x1 (n) = Re y(n) and x2 (n) =
Im y(n) are real solutions to (6.8).
Proof. By assumption we have y(n + 2) + by(n + 1) + cy(n) = 0 for all n ∈ N0. Taking the real part and using that b, c are real, we get

x1(n + 2) + bx1(n + 1) + cx1(n) = 0,
which proves the result for x1 . The proof for x2 follows in the same manner by taking
imaginary parts.
We now use some results concerning complex numbers, see [3, Appendix C] and also [1].
We know that y(n) = r+^n is a solution, and we use Proposition 6.6 to find two real solutions,
given by x1(n) = Re r+^n and x2(n) = Im r+^n. We now rewrite these two solutions. Let

ρ = |r+| = √(α^2 + β^2) and θ = Arg r+. (6.10)
We recall that we have 0 < θ < π, since we have β > 0. Now r+ = ρe^{iθ} and then r+^n = ρ^n e^{inθ}.
Taking real and imaginary parts and using the de Moivre formula, we get

x1(n) = ρ^n cos(nθ) and x2(n) = ρ^n sin(nθ).
We use Proposition 6.4 to verify that x1 and x2 are linearly independent. We have
W(0) = det[ 1, 0; ρ cos(θ), ρ sin(θ) ] = ρ sin(θ) ≠ 0,

since ρ > 0 and 0 < θ < π.
Finally we consider Case 2, where (6.9) has the double root r0 = −b/2. Now we look for y(n) satisfying y(n + 2) + by(n + 1) + cy(n) = 0. Using x1(n) = r0^n, we
get from (6.14) after division by x1(n + 2) the equation

(∆u)(n + 1) + (∆u)(n)( 1 + b·x1(n + 1)/x1(n + 2) ) = (∆u)(n + 1) + (∆u)(n)( 1 + b/r0 ) = 0.

We have

1 + b/r0 = 1 + b/(−b/2) = 1 − 2 = −1.
Solving the first order difference equation (∆u)(n+1)−(∆u)(n) = 0, we get (∆u)(n) = c1 ,
and then solving the first order equation u(n + 1) − u(n) = c1 , we get
u(n) = c1 n + c2 , c1 , c2 ∈ R.
Thus we have found the solutions y(n) = (c1 n + c2)r0^n. Taking c1 = 0 leads to the already known
solutions c2 r0^n, so we take c2 = 0 and c1 = 1 to get the solution x2(n) = n r0^n. We compute
the Casoratian at zero of the two solutions that we have found.

W(0) = det[ 1, 0; r0, r0 ] = r0 ≠ 0.
Theorem 6.7. The second order homogeneous difference equation with constant real coefficients

x(n + 2) + bx(n + 1) + cx(n) = 0, b, c ∈ R, c ≠ 0, n ∈ N0, (6.15)

always has two real linearly independent solutions x1 and x2. They are determined from
the characteristic equation

r^2 + br + c = 0. (6.16)

(i) If b^2 − 4c > 0, the two real solutions to (6.16) are denoted by r1 and r2. The two linearly
independent solutions to (6.15) are given by

x1(n) = r1^n and x2(n) = r2^n.

(ii) If b^2 − 4c = 0, the real solution to (6.16) is denoted by r0. The two linearly independent
solutions to (6.15) are given by

x1(n) = r0^n and x2(n) = n r0^n.

(iii) If b^2 − 4c < 0, the two complex conjugate solutions to (6.16) are denoted by r± = α ± iβ,
β > 0. Let r+ = ρe^{iθ} = ρ(cos(θ) + i sin(θ)), ρ = |r+|, θ = Arg r+. The two linearly
independent solutions to (6.15) are given by

x1(n) = ρ^n cos(nθ) and x2(n) = ρ^n sin(nθ).
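The statement of Theorem 6.7 can be turned into a small solver. The sketch below (an implementation choice, not taken from the notes) works with complex roots throughout, which handles all three cases uniformly; for real initial values the imaginary parts cancel:

```python
import cmath

def solve(b, c, x0, x1, nmax):
    """Solve x(n+2) + b*x(n+1) + c*x(n) = 0 with x(0) = x0, x(1) = x1."""
    d = cmath.sqrt(b * b - 4 * c)
    r1, r2 = (-b + d) / 2, (-b - d) / 2
    if r1 != r2:
        # x(n) = k1*r1**n + k2*r2**n; fit k1, k2 to the initial values.
        k2 = (x1 - r1 * x0) / (r2 - r1)
        k1 = x0 - k2
        return [(k1 * r1**n + k2 * r2**n).real for n in range(nmax)]
    # Double root: x(n) = (k1 + k2*n) * r1**n.
    k1, k2 = x0, x1 / r1 - x0
    return [((k1 + k2 * n) * r1**n).real for n in range(nmax)]

# Check against direct iteration for x(n+2) - x(n+1) - 6x(n) = 0:
vals = solve(-1.0, -6.0, 1.0, 0.0, 10)
seq = [1.0, 0.0]
for _ in range(8):
    seq.append(seq[-1] + 6 * seq[-2])
assert all(abs(v - s) < 1e-6 for v, s in zip(vals, seq))
```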
Theorem 6.9 (Uniqueness). A solution y to (6.15) is uniquely determined by the initial values
y0 = y(0) and y1 = y(1).
Proof. Assume that we have two solutions y1 and y2 to (6.15), with the initial values y0 and
y1, i.e. y1(0) = y2(0) = y0 and y1(1) = y2(1) = y1. We must show that y1(n) = y2(n)
for all n ∈ N0. Let y(n) = y1(n) − y2(n). Then by Theorem 6.8 y satisfies (6.15) with
initial values zero. It follows from (6.15), written as

y(n + 2) = −by(n + 1) − cy(n),

that y(n) = 0 for all n ∈ N0. More precisely, one proves this by induction.
Before proving the next theorem we need the following result, which complements
Proposition 6.4.
Lemma 6.10. Assume that x1 and x2 are two linearly independent solutions to (6.15). Then
their Casoratian W(n) ≠ 0 for all n ∈ N0.

Proof. Assume that W(0) = 0. Then the two vectors (x1(0), x1(1)) and (x2(0), x2(1)) in R^2
are linearly dependent, and we can find α ∈ R such that x1(0) = αx2(0) and x1(1) = αx2(1)
(or x2 (0) = αx1 (0) and x2 (1) = αx1 (1)). Let x = x1 − αx2 . Then x is a solution to (6.15)
and satisfies x(0) = 0, x(1) = 0. Thus by Theorem 6.9 we have x1 − αx2 = 0, contradicting
the linear independence of x1 and x2 . Thus we must have W (0) ≠ 0. It follows from
Lemma 6.5 and the assumption c ≠ 0 that W (n) ≠ 0 for all n ∈ N0 .
Theorem 6.11. Let y be a solution to the equation (6.15), and let x1 and x2 be two real linearly independent solutions to this equation. Then there exist
c1, c2 ∈ R, such that

y(n) = c1 x1(n) + c2 x2(n), n ∈ N0. (6.21)
Proof. Consider the linear system of equations

c1 x1(0) + c2 x2(0) = y(0),
c1 x1(1) + c2 x2(1) = y(1). (6.22)

By Lemma 6.10 the Casoratian of x1 and x2 satisfies W(0) ≠ 0. Thus the equation (6.22)
has a unique solution, which we denote by (c1, c2). Let u = c1 x1 + c2 x2 − y. Then we have that
u is a solution to (6.15) and satisfies u(0) = 0, u(1) = 0. The uniqueness result implies
that u = 0. Thus we have shown that y = c1 x1 + c2 x2 .
For the equation x(n + 2) − x(n + 1) − x(n) = 0 the characteristic equation r^2 − r − 1 = 0 has the roots (1 ± √5)/2. Thus the complete solution is given by

x(n) = c1 ((1 + √5)/2)^n + c2 ((1 − √5)/2)^n, c1 ∈ R, c2 ∈ R.
With the initial conditions x(0) = 0 and x(1) = 1 the solution is called the Fibonacci
numbers Fn, where

Fn = (1/√5)( ((1 + √5)/2)^n − ((1 − √5)/2)^n ).
With the initial conditions x(0) = 2 and x(1) = 1 the solution is called the Lucas numbers
Ln , where
Ln = ((1 + √5)/2)^n + ((1 − √5)/2)^n.
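A check of the two closed forms against the recurrence x(n + 2) = x(n + 1) + x(n) (illustrative; rounding compensates for floating point error):

```python
from math import sqrt

phi, psi = (1 + sqrt(5)) / 2, (1 - sqrt(5)) / 2

def fib(n):
    # Closed form for the Fibonacci numbers.
    return round((phi**n - psi**n) / sqrt(5))

def lucas(n):
    # Closed form for the Lucas numbers.
    return round(phi**n + psi**n)

f, l = [0, 1], [2, 1]
for _ in range(20):
    f.append(f[-1] + f[-2])
    l.append(l[-1] + l[-2])
assert [fib(n) for n in range(22)] == f
assert [lucas(n) for n in range(22)] == l
```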
We now consider the inhomogeneous second order equation with constant coefficients

x(n + 2) + bx(n + 1) + cx(n) = f(n), n ∈ N0. (6.24)

Here f is a given sequence, where we assume f ≠ 0. First we show that to find all
solutions to the equation (6.24) it suffices to find one solution, which we call a particular
solution and then use our knowledge of the corresponding homogeneous equation, stated
in Theorem 6.7.
Theorem 6.13. Let xp be a solution to (6.24). Let x1 and x2 be two linearly independent
solutions to the corresponding homogeneous equation. Then all solutions to (6.24) are given
by
x = c1 x1 + c2 x2 + xp , c1 , c2 ∈ R. (6.25)
As a consequence of the above result we are left with the problem of finding a particular
solution to a given inhomogeneous equation. There are no completely general methods,
and, in general, the solution cannot be found in closed form. There are some techniques
available, and we will present some of them. One of them is based on a simple idea. One
tries to guess a solution. More precisely, if the right hand side is in the form of a linear
combination of functions of the form
r^n, r^n cos(an), or r^n sin(an),
then the method may succeed. Here r and a are constants, inferred from the given right
hand side. We will start with some examples to clarify the method.
x(n) = c1 + c2(−3)^n + (4/5)·2^n, c1, c2 ∈ R.
Example 6.15. We will find the complete solution to the equation

x(n + 2) + 4x(n) = cos(2n), n ∈ N0. (6.26)
If we try to find a particular solution of the form u(n) = c cos(2n), we find after substitution into the equation a term containing sin(2n). Thus the right form is u(n) =
c cos(2n) + d sin(2n). We insert this expression into the left hand side of (6.26), and
then use the addition formulas to get the following result.
u(n + 2) + 4u(n) = c cos(2(n + 2)) + d sin(2(n + 2)) + 4( c cos(2n) + d sin(2n) )
= c( cos(2n) cos(4) − sin(2n) sin(4) ) + d( sin(2n) cos(4) + cos(2n) sin(4) ) + 4( c cos(2n) + d sin(2n) )
= ( c cos(4) + d sin(4) + 4c ) cos(2n) + ( −c sin(4) + d cos(4) + 4d ) sin(2n)

for all n ∈ N0. This must equal cos(2n). We now use that the sequences cos(2n) and sin(2n) are linearly independent. Thus we get the linear system of equations

( 4 + cos(4) )c + sin(4)d = 1,
−sin(4)c + ( 4 + cos(4) )d = 0.
The solution is
c = (4 + cos(4))/(17 + 8 cos(4)), d = sin(4)/(17 + 8 cos(4)).
Thus the complete solution to (6.26) is given by
x(n) = c1 2^n cos(πn/2) + c2 2^n sin(πn/2) + ((4 + cos(4))/(17 + 8 cos(4))) cos(2n) + (sin(4)/(17 + 8 cos(4))) sin(2n).
Example 6.16. There is a different way to find a particular solution to (6.26), based on
computations with complex numbers. We note that cos(2n) = Re e^{i2n}. We find a particular
solution to the equation
y(n + 2) + 4y(n) = e^{i2n}.
The particular solution to (6.26) is then found as the real part of this solution.
We note that e^{i2n} = (e^{2i})^n. Thus using the same technique as in Example 6.14 we guess
that the solution is of the form y(n) = ce^{2in}, where now c can be a complex constant.
Insertion gives
c = 1/(e^{4i} + 4) = (e^{−4i} + 4)/( (e^{4i} + 4)(e^{−4i} + 4) ) = (e^{−4i} + 4)/(17 + 8 cos(4)).
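The complex computation can be verified numerically (a sketch, not part of the original notes):

```python
import cmath
import math

# c = 1 / (e^{4i} + 4); then u(n) = Re(c * e^{2in}) should satisfy
# u(n+2) + 4u(n) = cos(2n).
c = 1 / (cmath.exp(4j) + 4)
assert abs(c - (cmath.exp(-4j) + 4) / (17 + 8 * math.cos(4))) < 1e-12

def u(n):
    return (c * cmath.exp(2j * n)).real

for n in range(50):
    assert abs(u(n + 2) + 4 * u(n) - math.cos(2 * n)) < 1e-12
```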
Table 6.1: Method of undetermined coefficients

f(n)                   form of yp
r^n                    c r^n
n^ν, ν an integer      d0 + d1 n + · · · + dν n^ν
r^n cos(an)            c r^n cos(an) + d r^n sin(an)
r^n sin(an)            c r^n cos(an) + d r^n sin(an)
Since the sequences {1} and {n} are linearly independent, we get the linear system of
equations d1 − 6d0 = 0 and −6d1 = 36, with the solutions d0 = −1 and d1 = −6. Thus we
have found the particular solution u(n) = −1 − 6n. The complete solution is then obtained by adding the complete solution of the corresponding homogeneous equation to u(n).
The method used in the examples above is called the method of undetermined coefficients. As is evident from the second example, even simple right hand sides can lead to
rather complicated particular solutions. To give a general prescription for the use of the
method is rather complicated. We give a simplified description here.
1. Find the complete solution to the corresponding homogeneous equation in the form
x = c1 x1 + c2 x2 , where x1 and x2 are linearly independent solutions.
2. Verify that the functions x1, x2, and f are linearly independent (this can be done by
computing their Casoratian, or sometimes seen by inspection). If they are linearly
dependent, this version of the method does not apply.
3. Verify that the right hand side is a linear combination of the functions in the left hand
column of Table 6.1. If this is not the case, the method cannot be applied.
4. Use the form of the solution given in the second column of Table 6.1, insert in the
inhomogeneous equation (6.24), and determine the coefficients, as in the examples.
In the case, where x1 , x2 , f are linearly dependent, and f is a linear combination of the
form of functions in Table 6.1, the particular solution from this table is multiplied by n.
As an example, if we instead of (6.26) consider
x(n + 2) + 4x(n) = 2^n sin(πn/2),
then we look for a particular solution of the form

c n 2^n cos(πn/2) + d n 2^n sin(πn/2),

or, alternatively, of the form

Im( c n (2i)^n ),

where in the second case c may be a complex constant. One finds in both cases the particular solution

xp(n) = −(n/8)·2^n sin(πn/2).
We now consider the second order homogeneous difference equation with variable coefficients

x(n + 2) + b(n)x(n + 1) + c(n)x(n) = 0, n ∈ N0. (6.27)

Then there exist two linearly independent solutions x1 and x2 to (6.27). Let x be any solution
to (6.27). Then there exist c1, c2 ∈ R, such that x = c1 x1 + c2 x2. Furthermore, a solution to
(6.27) is uniquely determined by its initial values x(0) = y0 and x(1) = y1.
Proof. We define a sequence x1 as follows. Let x1 (0) = 1 and x1 (1) = 0. Then use (6.27)
to determine x1 (2) = −b(0)x1 (1) − c(0)x1 (0) = −c(0), and then x1 (3) = −b(1)x1 (2) −
c(1)x1 (1) = b(1)c(0). In general, we determine x1 (n), n ≥ 2, from x1 (n−1) and x1 (n−2).
Thus we get a solution x1 to (6.27). A second solution x2 is determined by letting x2 (0) = 0
and x2 (1) = 1, and then repeating the arguments above. Now we use Proposition 6.4 to
show that the solutions x1 and x2 are linearly independent. We have
W(0) = det[ x1(0), x2(0); x1(1), x2(1) ] = det[ 1, 0; 0, 1 ] = 1 ≠ 0,

and the linear independence follows from Proposition 6.4.
Sometimes one can guess one solution to (6.27). Then one can use the reduction of
order method to find a second, linearly independent, solution. We state the result in the
following theorem.
Theorem 6.19 (Reduction of order). Let x1 be a solution to (6.27) satisfying x1 (n) ≠ 0 for
all n ∈ N0 . Then a second solution x2 can be found by the following method. Let v be the
solution to the first order homogeneous difference equation

v(n + 1) + ( 1 + b(n)·x1(n + 1)/x1(n + 2) ) v(n) = 0, v(0) = 1, (6.28)

and let u be a solution to the first order inhomogeneous difference equation

u(n + 1) − u(n) = v(n). (6.29)
Let x2 (n) = u(n)x1 (n). Then x2 is a solution to (6.27), and x1 , x2 are linearly independent.
Proof. Let u be a sequence, and let v = ∆u. Let y(n) = u(n)x1 (n). Repeating the com-
putations in (6.14), one finds immediately that in order for y to solve (6.27), y must be a
solution to the equation in (6.28). We take the solution v, which satisfies the initial con-
dition in (6.28). The existence and uniqueness of this solution follows from Theorem 4.2.
Then we solve (6.29), using again Theorem 4.2, and define x2 (n) = u(n)x1 (n). It remains
to verify that the two solutions are linearly independent. We compute their Casoratian at
zero.
W(0) = det[ x1(0), x2(0); x1(1), x2(1) ] = det[ x1(0), u(0)x1(0); x1(1), u(1)x1(1) ]
= x1(0)x1(1)( u(1) − u(0) ) = x1(0)x1(1)v(0).
By assumption x1 (0) ≠ 0 and x1 (1) ≠ 0, and furthermore v(0) = 1. Thus x1 and x2 are
linearly independent.
We now consider the inhomogeneous equation with variable coefficients

x(n + 2) + b(n)x(n + 1) + c(n)x(n) = g(n), n ∈ N0. (6.30)

We need to determine one solution to this equation, which we again call a particular solution. First we note that Theorem 6.13 is valid also in the variable coefficient case. The
verification is left as an exercise.
We have the following general result. The method used is called variation of parameters.
Theorem 6.20 (Variation of parameters). Assume that c(n) ≠ 0 for all n ∈ N0 . Assume that
x1 and x2 are two linearly independent solutions to the homogeneous equation (6.27). Then
a particular solution to (6.30) is given by

y(n) = u1(n)x1(n) + u2(n)x2(n), where

u1(n) = −∑_{k=0}^{n−1} x2(k + 1)g(k)/W(k + 1), (6.31)

u2(n) = ∑_{k=0}^{n−1} x1(k + 1)g(k)/W(k + 1), (6.32)

and W(n) denotes the Casoratian of x1 and x2.
Proof. We define
y(n) = u1 (n)x1 (n) + u2 (n)x2 (n)
and compute y(n + 1). In the computation we impose the condition

(∆u1)(n)x1(n + 1) + (∆u2)(n)x2(n + 1) = 0. (6.33)

Using this condition we compute once more to get y(n + 2).
Now insert the expressions for y(n), y(n + 1), and y(n + 2) in (6.30) and simplify, using
the fact that both x1 and x2 satisfy the homogeneous equation. This leads to the equation

(∆u1)(n)x1(n + 2) + (∆u2)(n)x2(n + 2) = g(n). (6.34)
For each n ∈ N0 we can view the equations (6.33) and (6.34) as a pair of linear equations to
determine (∆u1 )(n) and (∆u2 )(n). Explicitly, we have
where W (n) is the Casoratian of x1 and x2 . We use the assumption that c(n) ≠ 0 for all
n ∈ N0 , the linear independence of x1 , x2 , and Lemma 6.5 to get that W (n) ≠ 0 for all
n ∈ N0 . Thus we have a unique solution to the linear system. We use Cramer’s method
(see [3]) to solve the system. The result is
(∆u1)(n) = det[ 0, x2(n + 1); g(n), x2(n + 2) ] / det[ x1(n + 1), x2(n + 1); x1(n + 2), x2(n + 2) ]
= −x2(n + 1)g(n)/W(n + 1), (6.37)

(∆u2)(n) = det[ x1(n + 1), 0; x1(n + 2), g(n) ] / det[ x1(n + 1), x2(n + 1); x1(n + 2), x2(n + 2) ]
= x1(n + 1)g(n)/W(n + 1). (6.38)
Solving the two difference equations yields the expressions for u1 and u2 in the theorem.
Let us verify that u1 given by (6.31) solves (6.37). We have

(∆u1)(n) = u1(n + 1) − u1(n) = −x2(n + 1)g(n)/W(n + 1),

since the sums defining u1(n + 1) and u1(n) differ only in the term with k = n.
We can also use Theorem 4.2 with the initial condition y0 = 0 to get the same solution.
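The variation of parameters formulas can be tested numerically. The sketch below uses the hypothetical equation x(n + 2) − 6x(n) = 1 (so b(n) = 0, c(n) = −6, g(n) = 1), builds x1 and x2 by iteration, and checks that the resulting y is a particular solution:

```python
def b(n): return 0.0
def c(n): return -6.0
def g(n): return 1.0

nmax = 15

# Homogeneous solutions with initial data (1, 0) and (0, 1):
x1, x2 = [1.0, 0.0], [0.0, 1.0]
for n in range(nmax):
    x1.append(-b(n) * x1[-1] - c(n) * x1[-2])
    x2.append(-b(n) * x2[-1] - c(n) * x2[-2])

def W(n):
    # Casoratian of x1 and x2.
    return x1[n] * x2[n + 1] - x1[n + 1] * x2[n]

def y(n):
    # Particular solution from the variation of parameters sums.
    u1 = -sum(x2[k + 1] * g(k) / W(k + 1) for k in range(n))
    u2 = sum(x1[k + 1] * g(k) / W(k + 1) for k in range(n))
    return u1 * x1[n] + u2 * x2[n]

# y should satisfy the inhomogeneous equation:
for n in range(nmax - 1):
    assert abs(y(n + 2) + b(n) * y(n + 1) + c(n) * y(n) - g(n)) < 1e-6
```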
6.5 Second order difference equations: Linear algebra
In this section we connect the results obtained in the previous sections with results from
linear algebra. We refer to [3] for the results that we use. First, we recall from Section 5
that S(N0 ) denotes all real functions x : N0 → R, or equivalently, all real sequences indexed
by N0 . It is a real vector space, as stated in Proposition 5.1.
We now formulate some of the results above in the language of linear algebra. We limit
the statements to the results in the case of a second order difference equation with constant
coefficients.
Thus we consider the inhomogeneous equation

x(n + 2) + bx(n + 1) + cx(n) = f(n), n ∈ N0, (6.39)

the corresponding homogeneous equation

x(n + 2) + bx(n + 1) + cx(n) = 0, n ∈ N0, (6.40)

and the operator L : S(N0) → S(N0) defined by

( L(x) )(n) = x(n + 2) + bx(n + 1) + cx(n), n ∈ N0. (6.41)
Proposition 6.21. The operator L defined in (6.41) is a linear operator from S(N0 ) to S(N0 ).
Proof. The proof is the same as the proof of the superposition principle, Theorem 6.8. Here
are the details. Let x1, x2 ∈ S(N0) and c1, c2 ∈ R. Then we have

( L(c1 x1 + c2 x2) )(n) = (c1 x1 + c2 x2)(n + 2) + b(c1 x1 + c2 x2)(n + 1) + c(c1 x1 + c2 x2)(n)
= c1( x1(n + 2) + bx1(n + 1) + cx1(n) ) + c2( x2(n + 2) + bx2(n + 1) + cx2(n) )
= c1( L(x1) )(n) + c2( L(x2) )(n),

which is linearity of L.
Based on this result we can reformulate the problem of solving the inhomogeneous
equation (6.39) as follows. Given f ∈ S(N0 ), find x ∈ S(N0 ) satisfying L(x) = f . We state
the following two results. We recall that the null space of the linear operator L is defined
as
ker L = {x ∈ S(N0 ) | L(x) = 0}. (6.42)
Theorem 6.22. The linear operator L : S(N0) → S(N0) has the following two properties.

(i) L is surjective, i.e. for every f ∈ S(N0) there exists x ∈ S(N0) with L(x) = f.

(ii) The null space ker L is a subspace of S(N0) of dimension 2.
Proof. To prove part (i), let f ∈ S(N0 ). We must find x ∈ S(N0 ), such that L(x) = f . Go
back to the difference equation and write it as

x(n + 2) = −bx(n + 1) − cx(n) + f(n).

We look for a solution x, which satisfies x(0) = 0, x(1) = 0. Using the equation, we find
x(2) = f (0), x(3) = −bf (0) + f (1), etc. More formally, we prove by induction that x(n)
is defined and satisfies the equation for all n. This proves part (i). Note that in the proof
the choice x(0) = 0 and x(1) = 0 is arbitrary. Any other choice would also lead to a proof
of existence.
Concerning part (ii), the equation L(x) = 0 is the homogeneous equation (6.40)
written in linear algebra terms. The dimension statement is then a reformulation of Theo-
rems 6.7 and 6.11.
Exercises
Exercise 6.1. Answer the following questions. You must justify your answers.
3. Are the two sequences x1(n) = 1 + (−1)^n and x2(n) = −1 + (−1)^{n+1}, n ∈ N0, linearly
independent?
Exercise 6.2. Find two linearly independent solutions to the following second order difference equations.
1. x(n + 2) − 9x(n) = 0.
Exercise 6.3. Find the complete solution to each of the following difference equations.
1. x(n + 2) + x(n) = 0.
2. x(n + 2) + 4x(n) = 0.
Exercise 6.4. For each of the homogeneous difference equations in Exercise 6.3 find the
complete solution to the corresponding inhomogeneous equation, when the right hand
side is given as f (n) = 1 for all n ∈ N0 .
Exercise 6.5. For each of the homogeneous difference equations in Exercise 6.3 find the
complete solution to the corresponding inhomogeneous equation, when the right hand
side is given as f (n) = 2n for all n ∈ N0 .
Exercise 6.6. Solve each of the following initial value problems for a second order difference
equation.
1. Find the complete solution to the difference equation x(n + 2) − 4x(n) = −3n + 2.
Then find the solution that satisfies x(0) = 0, x(1) = 0.
2. Find the complete solution to the difference equation x(n + 2) + 2x(n + 1) − 8x(n) =
4 − 5n − 9(−1)n . Then find the solution that satisfies x(0) = 0, x(1) = 0.
3. Find the complete solution to the difference equation x(n + 2) + x(n) = 6 + 2n. Then
find the solution that satisfies x(0) = −2, x(1) = −3.
4. Find the complete solution to the difference equation x(n+2)+x(n) = −2 sin(π n/2).
Then find the solution that satisfies x(0) = 0, x(1) = 0.
5. Find the complete solution to the difference equation x(n + 2) + 2x(n + 1) + 2x(n) =
5n^2 + 8n + 6. Then find the solution that satisfies x(0) = 0, x(1) = 0.
Exercise 6.7. Find a second order homogeneous difference equation with constant coeffi-
cients that has the two solutions u(n) = 1 and v(n) = 3^n.
Exercise 6.8. Find a second order homogeneous difference equation with constant coeffi-
cients that has the solution u(n) = cos(π n/6).
Exercise 6.9. Find a second order homogeneous difference equation with constant coeffi-
cients that has the two solutions u(n) = 3^n and v(n) = n·3^n.
Then find a corresponding inhomogeneous equation that has as a particular solution
yp (n) = 1 + n.
How many difference equations can you find that satisfy both conditions?
Exercise 6.10. Find a second order homogeneous difference equation with constant coeffi-
cients that has one of its solutions u(n) = 2n . How many second order difference equations
with constant coefficients can you find with this property?
Hint: Such a difference equation can be written as x(n + 2) + bx(n + 1) + cx(n) = 0. Use
the information supplied to determine b and c, if possible.
Exercise 6.11. Find a second order homogeneous difference equation with constant coeffi-
cients that has one of its solutions u(n) = n. How many second order difference equations
with constant coefficients can you find with this property?
Hint: See Exercise 6.10.
7 Higher order linear difference equations
In this section we give a short introduction to the theory of higher order linear difference
equations. A difference equation of order k has the following structure:
x(n + k) + bk−1(n)x(n + k − 1) + · · · + b0(n)x(n) = f(n), n ∈ N0. (7.1)
Here bk−1(n), bk−2(n), . . . , b0(n) are given sequences. The right hand side f(n) is also a
given sequence. We only consider the case of real coefficients and right hand side. The
terminology is the same as in the case of the second order equations. If f (n) = 0 for all
n ∈ N0 , then the equation is said to be homogeneous, otherwise it is inhomogeneous.
Several results on the second order equations are valid also for higher order equations,
with proofs that are essentially the same.
Theorem 7.2. Let f ≠ 0 and let xp be a particular solution to (7.1). Then the complete
solution to (7.1) can be written as
x = xh + xp , (7.3)
where xh is any solution to the homogeneous equation (7.2).
It follows from this last result that in order to find all solutions to an equation (7.1)
we must solve two problems. One is to find a particular solution to the inhomogeneous
equation, and the other is to find the complete solution to the corresponding homogeneous
equation (7.2). In the general case this is quite complicated. We will here limit ourselves to
considering the case where bk−1 (n), . . . , b0 (n) are constant.
Thus we now consider the constant coefficient homogeneous difference equation of
order k,
x(n + k) + bk−1 x(n + k − 1) + · · · + b1 x(n + 1) + b0 x(n) = 0, (7.4)
where bk−1 , . . . , b0 ∈ R. The technique used to find a solution to (7.4) is the same as in the
order two case. We guess a solution of the form y(n) = r^n, r ≠ 0, and insert it in the
equation to get
r^{n+k} + bk−1 r^{n+k−1} + · · · + b1 r^{n+1} + b0 r^n = 0.
Cancelling the common non-zero factor r^n we get a polynomial equation of degree k,
r^k + bk−1 r^{k−1} + · · · + b1 r + b0 = 0.
Thus we need to know the structure of the roots of a polynomial. We state the result here.
The proof will be given in another course. See also [4].
Theorem 7.3. Let
r^k + bk−1 r^{k−1} + · · · + b1 r + b0
be a polynomial of degree k with real coefficients bk−1 , . . . , b0. Then there exist integers K ≥ 0
and L ≥ 0, real numbers κ1 , . . . , κK with κj ≠ κj′ for j ≠ j′, nonnegative integers k1 , k2 , . . . , kK ,
complex numbers ζ1 , . . . , ζL with ζj ≠ ζj′ for j ≠ j′ and Im ζ1 ≠ 0, . . . , Im ζL ≠ 0, and
nonnegative integers l1 , l2 , . . . , lL , such that
r^k + bk−1 r^{k−1} + · · · + b1 r + b0
= (r − κ1)^{k1} · · · (r − κK)^{kK} (r − ζ1)^{l1} (r − ζ̄1)^{l1} · · · (r − ζL)^{lL} (r − ζ̄L)^{lL}. (7.5)
Thus the κj , the ζj′ , and the ζ̄j′ are the distinct zeroes of the polynomial. The integers kj and 2lj′
are called the multiplicities of the zeroes.
We have
k1 + · · · + kK + 2(l1 + · · · + lL) = k. (7.6)
The general statement above is rather complicated. We will give a number of examples
to clarify the statement. Consider first the polynomial of degree two, r^2 + b1 r + b0. In the
case b1^2 − 4b0 > 0 there are two distinct real roots κ1 and κ2, given by
κ1 = (−b1 − √(b1^2 − 4b0))/2,   κ2 = (−b1 + √(b1^2 − 4b0))/2,
and the factorization in (7.5) takes the form
r^2 + b1 r + b0 = (r − κ1)(r − κ2).
In this case k1 = k2 = 1. In the case b1^2 − 4b0 = 0 there is one real root κ1 = −b1/2 of
multiplicity two, and the factorization takes the form
r^2 + b1 r + b0 = (r − κ1)^2.
In this case k1 = 2.
Finally, in the case b1^2 − 4b0 < 0 one of the complex roots is given by
ζ1 = (−b1 + i√(4b0 − b1^2))/2.
The other complex root is the complex conjugate of ζ1. The factorization (7.5) now takes
the form
r^2 + b1 r + b0 = (r − ζ1)(r − ζ̄1).
In this case l1 = 1.
These examples also illustrate the notational convention used in the statement of
Theorem 7.3. In the first two cases there are no complex roots, so L = 0 in the statement
of the theorem. Analogously, in the third case there are no real roots, hence K = 0 in the
statement of the theorem.
For polynomials of degree three there is a general formula for the roots. However,
it is very complicated, and is rarely used. For polynomials of degree four there are also
formulas, but they are even more complicated.
For polynomials of degree five or higher there is no general closed formula for the
roots. The existence of the factorization (7.5) can be proved for polynomials of any degree,
without using such formulas.
We now continue with the examples. For polynomials of degree three or higher we can
only give examples with explicit choice of the coefficients, as explained above. One can
show that one has
r^3 − 7r^2 + 14r − 8 = (r − 1)(r − 2)(r − 4). (7.7)
In this case K = 3, κ1 = 1, k1 = 1, κ2 = 2, k2 = 1, and κ3 = 4, k3 = 1. By convention L = 0.
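Such a factorization is easy to check numerically; the small Python sketch below expands the right hand side of (7.7) with an ad hoc polynomial-multiplication helper (coefficient lists, highest degree first).

```python
def poly_mul(p, q):
    # Multiply two polynomials given as coefficient lists, highest degree first.
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (r - 1)(r - 2)(r - 4) expanded:
rhs = poly_mul(poly_mul([1.0, -1.0], [1.0, -2.0]), [1.0, -4.0])
# rhs equals the coefficients of r^3 - 7r^2 + 14r - 8.
```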
Next we consider
r^3 + r^2 − 21r − 45 = (r + 3)^2 (r − 5). (7.8)
In this case K = 2, κ1 = −3, k1 = 2 and κ2 = 5, k2 = 1. By convention L = 0.
As the next example of polynomials of degree three we consider
r^3 − 12r^2 + 22r − 20 = (r − 10)(r − 1 − i)(r − 1 + i). (7.9)
In this case K = 1, κ1 = 10, k1 = 1, and L = 1, ζ1 = 1 + i, l1 = 1. As an example of degree
four we consider
r^4 − 2r^3 + 6r^2 − 2r + 5 = (r − 1 − 2i)(r − 1 + 2i)(r + i)(r − i). (7.11)
In this case K = 0, and L = 2, with ζ1 = 1 + 2i, l1 = 1, and ζ2 = i, l2 = 1.
Theorem 7.4. For a constant coefficient homogeneous difference equation of order three, let
p(r) denote its characteristic polynomial. Three linearly independent solutions can be found
as follows.
(i) Assume that p(r ) has three distinct real roots κ1 , κ2 , and κ3 . Define
x1(n) = κ1^n,  x2(n) = κ2^n,  x3(n) = κ3^n.
(ii) Assume that p(r ) has two distinct real roots κ1 and κ2 , with multiplicities two and one,
respectively. Define
x1(n) = κ1^n,  x2(n) = nκ1^n,  x3(n) = κ2^n.
(iii) Assume that p(r ) has one real root κ1 and a pair of complex roots ζ1 , ζ̄1 . Let ζ1 =
ρ1 (cos(θ1 ) + i sin(θ1 )). Define
x1(n) = κ1^n,  x2(n) = ρ1^n cos(nθ1),  x3(n) = ρ1^n sin(nθ1).
(iv) Assume that p(r ) has one real root κ1 of multiplicity three. Define
x1(n) = κ1^n,  x2(n) = nκ1^n,  x3(n) = n^2 κ1^n.
As an example, consider the difference equation x(n + 3) − 7x(n + 2) + 14x(n + 1) − 8x(n) = 0.
The characteristic equation is given by (7.7). Using the factorization and the theorem we
have three linearly independent solutions
x1(n) = 1,  x2(n) = 2^n,  x3(n) = 4^n.
Concerning the inhomogeneous equation, one can again use the method of undetermined
coefficients. As an example we consider
x(n + 3) − 7x(n + 2) + 14x(n + 1) − 8x(n) = 3^n.
We try y(n) = c·3^n, which we insert into the equation. A simple computation yields that a
particular solution is given by y(n) = −(1/2)·3^n. Thus the complete solution is given by
x(n) = c1 + c2·2^n + c3·4^n − (1/2)·3^n.
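The complete solution can be verified numerically; in the sketch below the right hand side 3^n is inferred from the particular solution −(1/2)·3^n, and the constants c1, c2, c3 are arbitrary illustrative values.

```python
# x(n) = c1 + c2*2^n + c3*4^n - (1/2)*3^n should satisfy
# x(n+3) - 7x(n+2) + 14x(n+1) - 8x(n) = 3^n.
def x(n, c1=1.0, c2=-2.0, c3=0.5):
    return c1 + c2 * 2**n + c3 * 4**n - 0.5 * 3**n

for n in range(10):
    lhs = x(n + 3) - 7 * x(n + 2) + 14 * x(n + 1) - 8 * x(n)
    assert abs(lhs - 3**n) < 1e-6
```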
A result similar to Theorem 6.11 holds for third order difference equations. This means
that once we have found three linearly independent solutions to the homogeneous equation
(7.12), any other solution to this equation can be written as a linear combination of these
three solutions.
We now introduce the notation
x(n) = [x1(n); x2(n)],   A = [a11 a12; a21 a22], (8.4)
so that the system becomes
x(n + 1) = Ax(n). (8.5)
Written in this form the equation has the same form as the first order equations considered
in Section 4.
The proof given in Section 4 can now be repeated and leads to the solution
x(n) = A^n x(0), n ∈ N0. (8.6)
We define
c = [c1; c2].
Thus the inhomogeneous equation can be written in vector-matrix form as
x(n + 1) = Ax(n) + c.
The arguments leading to the solution formula (4.5) can be repeated. However, we must
take care to write the terms in the right order, in contrast to (4.5). The result is
x(n) = A^n x(0) + ∑_{j=0}^{n−1} A^j c,   n ∈ N. (8.10)
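The formula (8.10) can be compared with direct iteration of x(n + 1) = Ax(n) + c; in the Python sketch below the matrix A, the vector c, and the initial vector are arbitrary illustrative choices.

```python
def mat_vec(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 1.0], [0.0, 2.0]]
c = [1.0, -1.0]
x0 = [2.0, 3.0]
n = 6

# Direct iteration of x(n+1) = A x(n) + c.
x = x0
for _ in range(n):
    Av = mat_vec(A, x)
    x = [Av[0] + c[0], Av[1] + c[1]]

# Formula (8.10): x(n) = A^n x(0) + sum_{j=0}^{n-1} A^j c.
An = [[1.0, 0.0], [0.0, 1.0]]   # accumulates A^j, ends as A^n
S = [0.0, 0.0]                  # accumulates the sum of A^j c
for _ in range(n):
    Ajc = mat_vec(An, c)
    S = [S[0] + Ajc[0], S[1] + Ajc[1]]
    An = mat_mul(An, A)
y = mat_vec(An, x0)
y = [y[0] + S[0], y[1] + S[1]]
# y agrees with x.
```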
The results for more than two equations are almost the same. For a system of k equations
the k-th equation is given as
xk (n + 1) = ak1 x1 (n) + ak2 x2 (n) + · · · + akk xk (n) + ck . (8.17)
The only part of the proof differing from the one given in Section 4 is the derivation of
the formula (8.21). The result is stated here.
Proof. We prove the result by induction. Consider first n = 1. The left hand side is
∑_{j=0}^{1−1} A^j = A^0 = I.
Thus the formula (8.22) holds for n = 1. Next consider an arbitrary n ≥ 1, and assume that
(8.22) holds for this n. We then consider the formula for n + 1 and compute as follows.
∑_{j=0}^{n} A^j = A^n + ∑_{j=0}^{n−1} A^j
= A^n + (A^n − I)(A − I)^{−1}
= (A^n(A − I) + A^n − I)(A − I)^{−1}
= (A^{n+1} − I)(A − I)^{−1}.
Thus the formula also holds for n + 1 and the induction argument is completed.
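The identity proved here, ∑_{j=0}^{n−1} A^j = (A^n − I)(A − I)^{−1}, can be spot-checked in Python for a 2 × 2 matrix that does not have 1 as an eigenvalue (the example matrix is an arbitrary choice).

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(A):
    # Inverse of a 2x2 matrix via the adjugate formula.
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

A = [[2.0, 1.0], [0.0, 3.0]]
I = [[1.0, 0.0], [0.0, 1.0]]
n = 5

# Left hand side: I + A + ... + A^(n-1).
S = [[0.0, 0.0], [0.0, 0.0]]
P = [row[:] for row in I]
for _ in range(n):
    S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    P = mat_mul(P, A)          # after the loop, P = A^n

# Right hand side: (A^n - I)(A - I)^{-1}.
AnI = [[P[i][j] - I[i][j] for j in range(2)] for i in range(2)]
AmI = [[A[i][j] - I[i][j] for j in range(2)] for i in range(2)]
R = mat_mul(AnI, mat_inv(AmI))
# S and R agree entrywise.
```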
Theorem 8.1 is stated with a constant inhomogeneous term. We have the following
result in case c is a function of n.
Theorem 8.3. Let A be a k × k matrix, and let c : N0 → R^k be a function. Then the system of
first order difference equations
x(n + 1) = Ax(n) + c(n), n ∈ N0,
has, for each choice of x(0), the unique solution
x(n) = A^n x(0) + ∑_{j=0}^{n−1} A^{n−1−j} c(j), n ∈ N.
The formulas given for solving the inhomogeneous case are sometimes complicated.
We note that the method of undetermined coefficients can also be applied to systems. Let
us explain how in the case of a constant inhomogeneous term. Thus we consider
x(n + 1) = Ax(n) + c. (8.25)
We have the same structure as in all the other cases of linear inhomogeneous equations,
see also Theorem 7.2. The complete solution to (8.25) is given as
x(n) = xh(n) + xp(n),
where xh(n) is the complete solution to the corresponding homogeneous equation, and xp(n)
is a particular solution to the inhomogeneous equation.
In order to apply this result we guess that the solution to (8.25) is a constant,
y(n) = y = [y1; y2].
Inserting this into (8.25) gives
y = Ay + c,   or   (I − A)y = c,
which determines y whenever 1 is not an eigenvalue of A.
As an example, consider the system x(n + 1) = Ax(n) + c with
A = [5 −1; 2 2]   and   c = [2; 2].
The eigenvalues of A are λ1 = 3 and λ2 = 4. We use the results presented later in this
section, in Theorem 8.7. After some computations we find that
A^n = 3^n [−1 1; −2 2] + 4^n [2 −1; 2 −1].
To solve the inhomogeneous problem we first use the expression (8.21). A computation
shows that
(A − I)^{−1} = (1/6)[1 1; −2 4].
Thus without simplifications we get the complete solution
x(n) = (3^n [−1 1; −2 2] + 4^n [2 −1; 2 −1]) x(0)
+ (3^n [−1 1; −2 2] + 4^n [2 −1; 2 −1] − [1 0; 0 1]) (1/6)[1 1; −2 4] [2; 2].
Now we can also use the method of undetermined coefficients. This leads to the linear
system (I − A)y = c, which has the solution
y = −(2/3)[1; 1].
The complete solution can then be written as
x(n) = d1 3^n [1; 2] + d2 4^n [1; 1] − (2/3)[1; 1].
Here d1 , d2 ∈ R are arbitrary. The difference between the two approaches is that in the first
case we can immediately insert x1 (0) and x2 (0) to find the solution satisfying a given initial
condition, whereas in the second case we must first solve a linear system to determine the
values of d1 and d2 from the initial condition.
Consider a second order difference equation of the form
x(n + 2) + bx(n + 1) + cx(n) = 0,
and define x1(n) = x(n), x2(n) = x(n + 1). Then
x1 (n + 1) = x2 (n), (8.30)
x2 (n + 1) = −bx2 (n) − cx1 (n). (8.31)
We can also carry out the argument in the opposite direction. Assume that we have
a system of the particular form (8.34), and that we have found a solution (x1(n), x2(n)) to
(8.34). Then x(n) = x1 (n) is a solution to (8.33).
The same computations can be carried out for a difference equation of order three. We
now use the systematic notation from Section 7.
We define
x1 (n) = x(n), x2 (n) = x(n + 1), x3 (n) = x(n + 2), (8.36)
which leads to the matrix equation
[x1(n + 1); x2(n + 1); x3(n + 1)] = [0 1 0; 0 0 1; −b0 −b1 −b2] [x1(n); x2(n); x3(n)] + [0; 0; f(n)]. (8.37)
Again, the argument is also valid in the opposite direction, i.e. x(n) = x1 (n) from the
solution of the system is a solution to the third order difference equation.
We formulate the general case in the form of an equivalence theorem. Consider the k-th
order difference equation
x(n + k) + bk−1 x(n + k − 1) + · · · + b0 x(n) = f(n). (8.38)
Define
x1 (n) = x(n), x2 (n) = x(n + 1), . . . , xk (n) = x(n + k − 1). (8.39)
Then we get the matrix equation
[x1(n + 1); x2(n + 1); . . . ; xk−1(n + 1); xk(n + 1)]
= [0 1 0 · · · 0; 0 0 1 · · · 0; . . . ; 0 0 0 · · · 1; −b0 −b1 −b2 · · · −bk−1] [x1(n); x2(n); . . . ; xk−1(n); xk(n)]
+ [0; 0; . . . ; 0; f(n)]. (8.40)
We have the following equivalence. Any solution to the k-th order difference equation (8.38)
gives a solution to the system (8.40) through the definition (8.39). Conversely, if x1 (n),…,xk (n)
is a set of solutions to the system (8.40), then x(n) = x1 (n) is a solution to the k-th order
difference equation (8.38).
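This equivalence is easy to see in action: the Python sketch below runs the scalar recursion and the companion system side by side and obtains the same sequence (the coefficients, the right hand side, and the initial values are illustrative choices).

```python
# Third order equation x(n+3) + b2*x(n+2) + b1*x(n+1) + b0*x(n) = f(n).
b0, b1, b2 = -8.0, 14.0, -7.0      # characteristic roots 1, 2, 4
f = lambda n: 0.0                  # homogeneous case for simplicity
x0, x1, x2 = 1.0, 2.0, 4.0         # initial values x(0), x(1), x(2)

# Scalar recursion.
xs = [x0, x1, x2]
for n in range(7):
    xs.append(f(n) - b2 * xs[n + 2] - b1 * xs[n + 1] - b0 * xs[n])

# Companion system, k = 3: shift the state and apply the last row of (8.40).
v = [x0, x1, x2]
ys = [v[0]]
for n in range(9):
    v = [v[1], v[2], f(n) - b0 * v[0] - b1 * v[1] - b2 * v[2]]
    ys.append(v[0])

# xs and ys coincide (here both equal 2^n for these initial values).
```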
We now consider how to compute the powers An of a matrix, which is what is needed
to get explicit solutions to the systems of first order difference equations.
We start with a system with two first order equations. It is written in matrix form in
(8.5), where the 2 × 2 matrix A is given in (8.4). The solution is given by the powers of A,
as in (8.6). First we assume that A can be diagonalized, see [3, p. 318]. This means that we
have
A = P DP^{−1},   D = [λ1 0; 0 λ2]. (8.41)
Here λ1 and λ2 are the eigenvalues of A and P is a matrix whose columns are corresponding
linearly independent eigenvectors.
Now we see how (8.41) can be used to compute the powers. We have
A^2 = (P DP^{−1})(P DP^{−1}) = P D(P^{−1}P)DP^{−1} = P D^2 P^{−1},
and then
A^3 = A^2 A = (P D^2 P^{−1})(P DP^{−1}) = P D^2 (P P^{−1})DP^{−1} = P D^3 P^{−1}.
In general we get
A^n = P D^n P^{−1}. (8.42)
Since we have
D^n = [λ1^n 0; 0 λ2^n],
the problem is reduced to computing P and P^{−1}. Example 8.6. Consider the matrix
A = [10 −24; 4 −10].
Its eigenvalues are λ1 = 2 and λ2 = −2, with corresponding eigenvectors (3, 1) and (2, 1),
such that
P = [3 2; 1 1],   P^{−1} = [1 −2; −1 3],   and   D = [2 0; 0 −2].
This allows us to find
A^n = 2^n [3 − 2(−1)^n  −6 + 6(−1)^n;  1 − (−1)^n  −2 + 3(−1)^n].
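A short Python check of A^n = P D^n P^{−1} against the closed form for A^n stated above; the eigenvector choice (3, 1) and (2, 1) behind P is an assumption consistent with the eigenvalues 2 and −2 (any other choice of eigenvectors would work as well).

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[3.0, 2.0], [1.0, 1.0]]       # columns: eigenvectors for 2 and -2
Pinv = [[1.0, -2.0], [-1.0, 3.0]]  # inverse of P (det P = 1)

def A_power(n):
    Dn = [[2.0**n, 0.0], [0.0, (-2.0)**n]]
    return mat_mul(mat_mul(P, Dn), Pinv)

def closed_form(n):
    s = (-1.0)**n
    return [[2.0**n * (3 - 2 * s), 2.0**n * (-6 + 6 * s)],
            [2.0**n * (1 - s), 2.0**n * (-2 + 3 * s)]]
```

For n = 1 both expressions reproduce the matrix A itself.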
There is a general algorithm, which avoids diagonalization, and thus can be applied to
all matrices. It is a discrete version of Putzer’s algorithm, see [2, p. 118]. We state this
algorithm for the case of 2 × 2 matrices.
Theorem 8.7. Let A be a 2 × 2 matrix with eigenvalues λ1 and λ2. If λ1 ≠ λ2, then
A^n = λ1^n I + ((λ1^n − λ2^n)/(λ1 − λ2))(A − λ1 I),   n ≥ 1. (8.46)
If λ1 = λ2, then
A^n = λ1^n I + nλ1^{n−1}(A − λ1 I),   n ≥ 1. (8.47)
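A minimal Python sketch of the two formulas, checked against repeated matrix multiplication; the example matrices (one with a repeated eigenvalue, one with distinct eigenvalues) are arbitrary illustrative choices.

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def putzer_power(A, l1, l2, n):
    # Formulas (8.46)/(8.47): A^n = l1^n I + coef*(A - l1*I).
    I = [[1.0, 0.0], [0.0, 1.0]]
    M = [[A[i][j] - l1 * I[i][j] for j in range(2)] for i in range(2)]
    coef = n * l1**(n - 1) if l1 == l2 else (l1**n - l2**n) / (l1 - l2)
    return [[l1**n * I[i][j] + coef * M[i][j] for j in range(2)]
            for i in range(2)]

def direct_power(A, n):
    P = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        P = mat_mul(P, A)
    return P

A = [[3.0, 1.0], [0.0, 3.0]]   # repeated eigenvalue 3
B = [[2.0, 1.0], [0.0, 5.0]]   # distinct eigenvalues 2 and 5
```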
Let us apply this theorem to the matrix B defined in (8.45). We see that the only eigen-
value is 1. Thus the theorem gives
B^n = 1^n [1 0; 0 1] + n·1^{n−1}([1 1; 0 1] − [1 0; 0 1]) = [1 n; 0 1].
Example 8.8. We briefly explain how the application of Putzer’s algorithm to the matrix
from Example 8.6 leads to the same result as the one obtained in this example. Let again
" #
10 −24
A= .
4 −10
det(A − λI) = λ2 − 4 = 0,
Let us now compare the results for second order difference equations obtained in Sec-
tion 6 with the results for systems in this section. We start with the homogeneous equation.
Thus we compare solutions to the second order difference equation
x(n + 2) + bx(n + 1) + cx(n) = 0 (8.48)
with solutions to the corresponding system
x(n + 1) = Ax(n),   A = [0 1; −c −b]. (8.49)
The characteristic equation of (8.48) is
r^2 + br + c = 0.
The eigenvalues of the matrix in (8.49) are determined by the roots of what is called the
characteristic polynomial
det(A − λI) = det[−λ 1; −c −b − λ] = λ^2 + bλ + c.
We recall that λ1 + λ2 = −b and λ1 λ2 = c.
It is clear from Theorem 8.5 how to get two solutions to the system from two linearly
independent solutions to the second order equations. Now we show how to get the two
linearly independent solutions λ1^n and λ2^n to (8.48) from the system. We do this using Putzer's
algorithm. Using the above results we note that
" # " #
−λ1 1 −λ1 1
A − λ1 I = = .
−c −b − λ1 −λ1 λ2 λ2
The Putzer algorithm (8.46) then gives the following expression for the powers of A.
A^n = [λ1^n 0; 0 λ1^n] + ((λ1^n − λ2^n)/(λ1 − λ2)) [−λ1 1; −λ1λ2 λ2].
By choosing appropriate initial conditions we get the two solutions to the second order
equation. Recall that it is the component x1 (n) that yields the solution to (8.48). Taking
[x1(0); x2(0)] = [1; λ1],
we get x1(n) = λ1^n, since A − λ1 I applied to this vector gives zero. Taking instead
[x1(0); x2(0)] = [1; λ2] we get x1(n) = λ2^n.
8.2 Further results on matrix powers
We give a few additional results concerning matrix powers. One can prove the following
result, which is called the generalized spectral theorem for a 2 × 2 matrix.
Theorem 8.9. Let A be a real 2 × 2 matrix.
(i) Assume that A has two different eigenvalues λ1 and λ2 . Then there exist 2 × 2 matrices
P and Q with the following four properties
P^2 = P, (8.50)
Q^2 = Q, (8.51)
P Q = 0, (8.52)
P + Q = I, (8.53)
such that
A = λ1 P + λ2 Q. (8.54)
We have that
P(R^2) = ker(A − λ1 I)   and   Q(R^2) = ker(A − λ2 I), (8.55)
such that the ranges of P and Q are the one dimensional eigenspaces. Furthermore,
for all n ≥ 1 we have that
A^n = λ1^n P + λ2^n Q. (8.56)
(ii) Assume that A has only one distinct eigenvalue λ1 . Then there exists a 2 × 2 matrix N
with the property
N^2 = 0, (8.57)
such that
A = λ1 I + N. (8.58)
Furthermore, for all n ≥ 1 we have that
A^n = λ1^n I + nλ1^{n−1} N. (8.59)
The matrices
P = (1/5)[1 −2; −2 4]   and   Q = (1/5)[4 2; 2 1]
satisfy all the conditions in Theorem 8.9, as one can verify by simple computations.
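The simple computations can be delegated to a few lines of Python; the entries of P and Q below are hard-coded from the example above (as decimals, 1/5 = 0.2).

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[0.2, -0.4], [-0.4, 0.8]]   # (1/5) [[1, -2], [-2, 4]]
Q = [[0.8, 0.4], [0.4, 0.2]]     # (1/5) [[4, 2], [2, 1]]

PP = mat_mul(P, P)   # should equal P              (8.50)
QQ = mat_mul(Q, Q)   # should equal Q              (8.51)
PQ = mat_mul(P, Q)   # should be the zero matrix   (8.52)
S = [[P[i][j] + Q[i][j] for j in range(2)] for i in range(2)]  # identity (8.53)
```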
Exercises
Exercise 8.1. Consider the first order system
x1 (n + 1) = 3x1 (n)
x2 (n + 1) = −2x2 (n)
Write the system in vector-matrix form. Solve the system using the matrix method. Then
show that the system may also be solved using the results from Section 4.
Find the solution that satisfies a given initial condition.
Consider next the inhomogeneous system
x1 (n + 1) = 3x1 (n) − 4,
x2 (n + 1) = −2x2 (n) + 5.
Exercise 8.2. Consider the first order system
x1 (n + 1) = λ1 x1 (n),
x2 (n + 1) = λ2 x2 (n).
Find the general solution, using the vector-matrix method, as in Exercise 8.1.
Exercise 8.3. Consider the first order system
x1 (n + 1) = x2 (n)
x2 (n + 1) = 4x1 (n)
Let x(n) = x1 (n). Show that if x1 (n), x2 (n) are solutions to the system, then x(n) is a
solution to the second order difference equation
x(n + 2) − 4x(n) = 0.
Exercise 8.4. Consider the system
x(n + 1) = Ax(n),
where the 2 × 2 matrix A can be diagonalized,
A = P DP^{−1}.
Define a new sequence y as y(n) = P^{−1} x(n). Show that this sequence satisfies the difference
equation
y(n + 1) = Dy(n).
Written out in components this is
y1 (n + 1) = λ1 y1 (n),
y2 (n + 1) = λ2 y2 (n).
The system is said to be decoupled. This means that the system consists of two ordinary
first order equations, which can be solved directly.
Use the above considerations on the matrix
" #
2 1
A= (8.60)
0 4
Diagonalize this matrix, i.e. find its eigenvalues and corresponding eigenvectors, and de-
termine D and P .
For this particular matrix A given by (8.60) carry out all the above computations and
find the solution to the original system as
x(n) = P y(n).
Thus one solves the problem by transforming to the decoupled system, solving it, and then
transforming back to the original variables.
Now solve the system with the matrix A given in (8.60), by using Putzer's algorithm.
Compare the two solutions.
Exercise 8.5. Use Putzer’s algorithm to determine the powers An of each matrix below.
" # " # " # " 3 #
2 0 3 0 −8 10 −2 2
A1 = , A2 = , A3 = , A4 = .
4 −2 7 3 −5 7 −1 32
Exercise 8.6. Note that the eigenvalues in this case are complex. Use the polar form of
complex numbers to compute powers of the eigenvalues.
Exercise 8.7. In this exercise we consider the connections between a second order differ-
ence equation and the corresponding system of two first order equations. Given the second
order difference equation
x(n + 2) + bx(n + 1) + cx(n) = 0, (8.61)
the corresponding system, obtained by defining
x1 (n) = x(n),
x2 (n) = x(n + 1),
is given by
x(n + 1) = Ax(n)   where   A = [0 1; −c −b]. (8.62)
Here we write
x(n) = [x1(n); x2(n)] = [x(n); x(n + 1)]
as usual.
We now take two solutions to the equation (8.61). We denote them by x(n) and y(n).
The corresponding vector solutions to the first order system (8.62) are denoted by x(n) and
y(n), respectively. We have
x(n) = [x(n); x(n + 1)]   and   y(n) = [y(n); y(n + 1)].
Let X(n) denote the 2 × 2 matrix whose columns are x(n) and y(n). Show that
X(n + 1) = AX(n).
Recall the definition of matrix multiplication. Note that the Casoratian of the two solutions
x(n) and y(n) can be written as W(n) = det X(n).
Use the relation between matrix multiplication and determinants to show that
W (n + 1) = cW (n),
such that
W(n) = c^n W(0).
Exercise 8.8. We consider a system x(n + 1) = Ax(n) with four different matrices A. They
are given as follows.
" #
−1 1
1. A =
−2 −4
" #
2 1
2. A =
1 2
" #
4 6
3. A =
0 4
" #
−2 0
4. A =
7 −2
For each of these four matrices find the expression for An using Putzer’s algorithm. In the
first two cases write the result in the form
A^n = λ1^n P + λ2^n Q,
where P and Q are 2 × 2 matrices that you must determine. In both cases show that these
matrices have the properties
P^2 = P,   Q^2 = Q,   P Q = 0,   P + Q = I.
What can you say about the columns in these matrices? Can one find the representation
A = λ1 P + λ2 Q in a different manner? Here P and Q must satisfy all four equations above.
Exercise 8.9. For each of the four matrices A in Exercise 8.8 solve the inhomogeneous
system x(n + 1) = Ax(n) + c, where
c = [2; 3]   and   c = [−1; −2].
In all eight cases you must also find the solution that satisfies the initial condition
x(0) = [0; 0]   and the initial condition   x(0) = [1; 2].
9 Supplementary exercises
Full collections of exam problems are given below, in the version intended for mathemat-
ics students. Mathematics-economics students get a different problem four, covering the
economics part of their version of the course. The exams were given in Danish; they are
reproduced here in English translation.
9.1 Trial Exam December 2010
Problem 1. A second order difference equation is given.
4. Determine the solution to the given equation (9.1) that satisfies the conditions
x(0) = 5, x(1) = 1.
1. Show that
x(n) = 1/(n + 1)
is a solution to the first order difference equation
x(n + 1) = ((n + 1)/(n + 2)) x(n).
Are there other solutions to this difference equation?
Show that
x(n) = 1/(n + 1),   x1(n) = 3^n,   x2(n) = n·3^n
are linearly independent sequences.
Problem 3. 1. Solve the following problem graphically.
Minimize y1 + 2y2 over y1, y2
subject to:
y1 + 6y2 ≥ 15,
y1 + y2 ≥ 5,
−y1 + y2 ≥ −5,
y1 ≥ 0, y2 ≥ 0.
3. What happens to the optimal dual solution if the constraint y1 + 6y2 ≥ 15 is changed
to y1 + 6y2 ≥ 15.1?
4. Consider the problem
Maximize x1 + 3x2 + 5 over x1, x2
subject to:
x1 + 2x2 ≤ 6,
x1^2 ≤ 4,
x1 ≥ 0, x2 ≥ 0.
Two aspects of this problem differ from an ordinary LP problem. Which ones?
3. Determine the solution to the given system that satisfies the initial conditions x1(0) =
4 and x2(0) = −1.
9.2 Exam January 2011
2. Determine a particular solution to the given equation (9.4).
4. Determine the solution to the given equation (9.4) that satisfies the conditions
x(n + 1) = ((n − 3)/(n + 3)) x(n),   x(0) = 2.
are solutions to this third order difference equation. Show that the three sequences
x1(n), x2(n), and x3(n) are linearly independent.
3. Determine the complete solution to the inhomogeneous third order difference equation
Problem 3. 1. Consider the following problem.
3. Suppose that the two right hand sides of the constraints in the primal problem are
changed to 20.1 and 20.8, respectively (in that order). What is the corresponding change
in the objective function?
3. Determine the solution to the given system that satisfies the initial conditions x1(0) =
2 and x2(0) = −2.
9.3 Exam February 2011
Problem 1. A second order difference equation is given.
4. Determine the solution to the given equation (9.7) that satisfies the conditions
x(0) = 2, x(1) = 1.
5. Show that
xp(n) = n · 2^n
1. Show that
x(n) = 3^n − 2
is a solution to the first order difference equation
x(n + 1) = 3x(n) + 4.
Problem 3. This problem concerns linear programming and related topics.
3. Determine the solution to the given system that satisfies the initial conditions x1(0) =
1 and x2(0) = −1.
References
[1] Søren L. Buhl, Komplekse tal m.m. Lecture notes (in Danish), Aalborg 1992.
[2] Saber N. Elaydi, An introduction to difference equations, 3rd edition, Springer 2005.
[3] Stephen H. Friedberg, Arnold J. Insel, and Lawrence E. Spence, Elementary Linear Alge-
bra: International Edition, 2/E, Pearson Higher Education, ISBN-10: 0131580345.
[4] Arne Jensen, Lecture Notes on Polynomials. Department of Mathematical Sciences, Aal-
borg University. 3rd edition, 2010.
[5] http://en.wikipedia.org/wiki/Order_of_operations