
Survey of Applied Mathematics Techniques

Brian Wetton
www.math.ubc.ca/~wetton, [email protected]

July 8, 2018
Lecture 8

Time Stepping Methods.

8.1 Initial Value Problems


We will consider time stepping schemes for general ODE initial value problems
for u(t):
du/dt = f(u(t), t),  u(0) = u0 given.  (8.1)
We will consider this general form for scalar u when we assess the accuracy
of time-stepping schemes, but all the schemes considered will be applicable to
the vector u case. This form is fully general since, as discussed in Lecture #5,
higher order DEs and higher order systems can always be converted to first
order systems. We will also consider the simple scalar problem
du/dt = λu,  u(0) = 1,  (8.2)
with λ a given complex constant to investigate the stability of time stepping
schemes.
We will often use Newton’s notation for time derivatives:
du/dt := u̇,  d²u/dt² := ü,  etc.

8.2 Basic Time Stepping Schemes and Ideas


Consider approximating u(t) on a regular grid in time with interval size k = ∆t.
We will use superscripts for the time level index (since later we will look at PDE
problems where we will discretize in space and time).
U^n ≈ u(nk).
The simplest method for approximating solutions to (8.1) is the Explicit Euler
(Forward Euler) method:

U^{n+1} = U^n + k f(U^n, nk).  (8.3)

We call this scheme a one-step method since the value of the approximation at
time level n determines the value at time level n + 1. This is a convergent
scheme with first order convergence: if the solution of (8.1) is well defined in
[0, T] and (8.3) is used with N time steps of size k = T/N, then

|u(T) − U^N| = O(k).
Note that if the exact solution is put into (8.3) we get an expression for the
truncation error from a linear Taylor approximation, since f(u(nk), nk) = u̇(nk):

u((n + 1)k) − u(nk) − k f(u(nk), nk) = (k²/2) ü(ξ)

for some ξ ∈ (nk, (n + 1)k). Thus, the local error (after one time step) is O(k²).
It makes sense that the error after O(1/k) time steps is first order, O(k). The
local error is usually written as kτ , where τ is the truncation error. For the
Forward Euler method,
τ = (k/2) ü(ξ).  (8.4)
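As an illustration, the following short Python sketch (not part of the original notes; the test problem and all names are our own choices) applies Forward Euler to a smooth scalar problem and checks the first order convergence at a fixed final time.

import numpy as np

def forward_euler(f, u0, T, N):
    """Forward Euler for du/dt = f(u, t) with N steps of size k = T/N."""
    k = T / N
    U = float(u0)
    for n in range(N):
        U = U + k * f(U, n * k)
    return U

# Illustrative test problem: du/dt = -u + sin(t), u(0) = 1 (smooth, non-stiff).
f = lambda u, t: -u + np.sin(t)
exact = lambda t: 1.5 * np.exp(-t) + 0.5 * (np.sin(t) - np.cos(t))

T = 2.0
for N in [50, 100, 200, 400]:
    err = abs(forward_euler(f, 1.0, T, N) - exact(T))
    print(f"N = {N:4d}  error = {err:.3e}")   # error should halve as N doubles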
Theorem 1 (ODE Theory). If f(u, t) ∈ C^n with n ≥ 1, then the solution u(t)
of (8.1) exists, is unique, and is in C^{n+1} in a neighbourhood of t = 0.
Theorem 2 (Dahlquist). Any “reasonable” one-step time stepping method con-
verges to the exact solution at times for which it is defined with an order of
convergence equal to the order of the truncation error.
Here, “reasonable” means that the scheme is consistent and has a very basic
stability property. Since all of the schemes we will discuss below are “reason-
able”, this theorem does not help us choose a scheme suitable for a specific
problem.
Consider the FE scheme (8.3) applied to the scalar problem (8.2). The exact
solution is

u(t) = e^{λt}

and the discrete solution satisfies U^0 = 1, U^{n+1} = (1 + kλ)U^n, so

U^n = (1 + z)^n,  with z = kλ.
It is clear that the so-called growth factor G(z) = 1 + z for the scheme should
approximate e^z when z is small, as it does.
We consider the set of complex z such that |G(z)| ≤ 1. This is known as the
stability region of the method. For FE, the stability region

|G(z)| = |1 + z| ≤ 1

is a disk of radius 1 in the complex plane, centred at z = −1, as shown in
Figure 8.1. Note that if λ is real and negative, the exact solution decays in
time, but the FE approximation will only decay if

|z| = |kλ| < 2,  that is, k < 2/|λ|.  (8.5)

This does not violate Dahlquist's theorem above, since as k → 0, (8.5) is
eventually satisfied.
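A minimal sketch (ours, not from the notes) demonstrating the restriction (8.5) on the test problem (8.2) with a real negative λ: with k slightly above 2/|λ| the iterates grow, and slightly below they decay.

import numpy as np

lam = -10.0                       # real, negative: exact solution e^{lam*t} decays
for k in [0.19, 0.21]:            # 2/|lam| = 0.2 is the Forward Euler limit
    G = 1.0 + k * lam             # growth factor G(z) with z = k*lam
    U = 1.0
    for n in range(200):
        U = G * U
    print(f"k = {k:.2f}, |G| = {abs(G):.2f}, |U^200| = {abs(U):.2e}")
# |G| < 1 for k = 0.19 (decay); |G| > 1 for k = 0.21 (growth).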

Figure 8.1: Stability region for Forward Euler time stepping.

8.3 Higher order and implicit methods


A second order explicit method called Improved Euler is given below:

U* = U^n + k f(U^n, nk)
U^{n+1} = U^n + (k/2) ( f(U^n, nk) + f(U*, (n + 1)k) ).
This is a two-stage method, with two evaluations of the right hand side function
f; U* is the value at the intermediate stage. However, since U^n alone determines
U^{n+1}, it is still a one-step method. It is one of a family of second order
Runge-Kutta methods. It can be written more compactly:

U^{n+1} = U^n + (k/2) ( f(U^n, nk) + f(U^n + k f(U^n, nk), (n + 1)k) ).  (8.6)
From this form, the truncation error can be identified:
 
τ = (1/k) [ u^{n+1} − u^n − (k/2)( f^n + f(u^n + k f^n, (n + 1)k) ) ]

where u^n := u(nk) and f^n := f(u^n, nk) = u̇(nk). We can expand all variables
in Taylor series at t = nk to obtain

τ = (1/k) [ k u̇ + (k²/2) ü + (k³/6) d³u/dt³ − (k/2) u̇
        − (k/2)( u̇ + k(f_u f + f_t) + (k²/2)(f_uu f² + 2 f_ut f + f_tt) ) + O(k⁴) ]
where all functions are evaluated at t = nk and u = u(nk). Starting with


u̇ = f(u, t), it can be shown that

ü = f_u f + f_t

and

d³u/dt³ = f_uu f² + 2 f_ut f + (f_u)² f + f_u f_t + f_tt.
Using the first of the results above, we see that τ is second order:

τ = k² [ (1/6) d³u/dt³ − (1/4)( f_uu f² + 2 f_ut f + f_tt ) ] + O(k³).
Note that the dominant error term in the truncation error is not a simple time
derivative of u as it was for the FE method. This has implications for error
estimation in adaptive methods as we shall see in later discussion.
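The following Python sketch (our illustration; the test problem and names are not from the notes) implements Improved Euler and checks second order convergence on a smooth problem.

import numpy as np

def improved_euler(f, u0, T, N):
    """Improved Euler (second order Runge-Kutta) with N steps of size k = T/N."""
    k = T / N
    U = float(u0)
    for n in range(N):
        t = n * k
        Ustar = U + k * f(U, t)                        # predictor (Forward Euler stage)
        U = U + 0.5 * k * (f(U, t) + f(Ustar, t + k))  # trapezoidal-type corrector
    return U

f = lambda u, t: -u + np.sin(t)
exact = lambda t: 1.5 * np.exp(-t) + 0.5 * (np.sin(t) - np.cos(t))

T = 2.0
errs = [abs(improved_euler(f, 1.0, T, N) - exact(T)) for N in (50, 100, 200)]
print([errs[i] / errs[i + 1] for i in range(2)])   # ratios should approach 4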
We can consider the stability of the Improved Euler scheme by considering
the form (8.6) with f(u, t) = λu, giving

U^{n+1} = (1 + z + z²/2) U^n,

which defines a growth factor G(z) = 1 + z + z²/2. The stability region, {z :
|G(z)| ≤ 1}, is shown in Figure 8.2. As with FE, it is seen that IE is not suitable
for problems that have λ with a large negative real part. Such problems are
called stiff problems. We give a more complete definition below.
Definition 1 (Stiff ODE systems). Solutions of a well-behaved ODE system can
be multiplied by e^{−βt} for some β > 0 so that they are bounded. The value of β
can always be chosen, and time scaled, so that the resulting transformed system,
when linearized around desired solutions, has O(1) eigenvalues (components that
evolve on an O(1) time scale). A system is called stiff if the linearization of this
transformed system also has eigenvalues of large size.
Theorem 3 (Dahlquist). The stability region of every explicit scheme is bounded.
From this, it can be seen that no explicit scheme is suitable for stiff problems.
There are two properties we would like to have for a time-stepping scheme
for stiff problems, summarized in the following definitions.
Definition 2 (L-stability). A time-stepping scheme is called L-stable if |G(z)| <
1 for all z with Re(z) < 0. That is, the stability region contains the left half plane.

Definition 3 (A-stability). A time-stepping scheme is called A-stable if

G(z) → 0 as Re(z) → −∞.

This matches the property that G(z) should approximate e^z.


Important Note: In some literature, the “A” and “L” definitions are re-
versed! There were two groups that could not agree on notation. I like “L” for
left half-plane and “A” for asymptotic.
From Theorem 3 we see that no explicit scheme is L-stable or A-stable. Thus
we turn to implicit schemes. The simplest scheme of this type is the Backward
Euler scheme:

U^{n+1} = U^n + k f(U^{n+1}, (n + 1)k).

Figure 8.2: Stability region for Improved Euler time stepping (inside the curve
shown). This is computed by setting G(z) = e^{iθ} for a grid of θ points and
plotting the roots of the quadratic.
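The construction described in the caption can be carried out in a few lines; the sketch below (ours, not from the notes) traces the boundary |G(z)| = 1 for Improved Euler by solving the quadratic 1 + z + z²/2 = e^{iθ} for a grid of θ values.

import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0.0, 2.0 * np.pi, 400)
pts = []
for w in np.exp(1j * theta):
    # Roots z of 1 + z + z^2/2 = w, i.e. z^2/2 + z + (1 - w) = 0.
    pts.extend(np.roots([0.5, 1.0, 1.0 - w]))
pts = np.array(pts)

plt.plot(pts.real, pts.imag, '.', markersize=2)
plt.axhline(0, color='gray'); plt.axvline(0, color='gray')
plt.xlabel('Re z'); plt.ylabel('Im z')
plt.title('Boundary of the Improved Euler stability region')
plt.axis('equal'); plt.show()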

It is called an implicit method because U^{n+1} is specified by an implicit
relationship above. A linear or nonlinear system must be solved for U^{n+1} at
every time step. It is a first order method with truncation error

τ = −(k/2) ü(ξ)  (8.7)

for some ξ ∈ (nk, (n + 1)k). It has growth factor

G(z) = 1/(1 − z)

and so has a stability region that is the exterior of the unit circle centred at
z = 1, as shown in Figure 8.3. From the form of G(z) above and the shape of the
stability region, it is clear that BE is both L-stable and A-stable. Thus, it is
suitable for application to stiff problems.
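As a sketch of how the implicit solve might look in practice (ours, not from the notes; the stiff test problem is an illustrative choice), the code below applies Backward Euler to a stiff scalar problem, using a Newton iteration to solve the nonlinear equation for U^{n+1} at each step.

import numpy as np

def backward_euler(f, dfdu, u0, T, N, newton_tol=1e-12, newton_max=20):
    """Backward Euler: solve U = Un + k*f(U, t_{n+1}) for U by Newton's method."""
    k = T / N
    U = float(u0)
    for n in range(N):
        t_new = (n + 1) * k
        Un = U
        V = U                                   # initial guess: previous value
        for _ in range(newton_max):
            r = V - Un - k * f(V, t_new)        # residual of the implicit equation
            dr = 1.0 - k * dfdu(V, t_new)       # its derivative with respect to V
            V = V - r / dr
            if abs(r) < newton_tol:
                break
        U = V
    return U

# Stiff test problem: du/dt = -1000*(u - cos(t)) - sin(t), exact solution u = cos(t).
f = lambda u, t: -1000.0 * (u - np.cos(t)) - np.sin(t)
dfdu = lambda u, t: -1000.0
print(backward_euler(f, dfdu, 1.0, 1.0, 100), np.cos(1.0))  # k = 0.01, well above 2/1000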

8.4 Higher Order Implicit Schemes


We consider four higher order implicit schemes, each with some advantages and
some disadvantages.

8.4.1 Trapezoidal Rule


We consider the Trapezoidal Rule

U^{n+1} = U^n + (k/2) ( f(U^n, nk) + f(U^{n+1}, (n + 1)k) )

Figure 8.3: Stability region for Backward Euler time stepping.

and the Implicit Midpoint Rule

U^{n+1} = U^n + k f( (U^{n+1} + U^n)/2, (n + 1/2)k ).

These are different schemes but become identical when applied to constant coef-
ficient, linear, autonomous problems. They have the same dominant error term
and the same growth factor and stability regions. When the Trapezoidal Rule is
applied to diffusion problems, it is also known as the Crank-Nicolson method,
and this name is sometimes used for the approach applied to other problems.
The growth factor is

G(z) = (1 + z/2)/(1 − z/2).

Here, the stability region is exactly the left half plane, and the method is thus
L-stable. However,

G(z) → −1 as Re(z) → −∞,
so the method is not A-stable. Care should be used in the application of the
method to stiff problems since components that should almost decay to zero
in one time step instead just oscillate in sign. On the other hand, it does have
desirable properties: L-stable, second order with a small error constant, one-step
and one-stage.
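A short sketch (ours, not from the notes) of the behaviour described above: on the test problem (8.2) with a large negative real λ, the Trapezoidal Rule iterates decay only slowly in magnitude and alternate in sign, while Backward Euler damps them immediately.

import numpy as np

lam, k = -1000.0, 0.1
z = k * lam
G_tr = (1 + z / 2) / (1 - z / 2)   # Trapezoidal Rule growth factor, close to -1
G_be = 1 / (1 - z)                 # Backward Euler growth factor, close to 0

U_tr, U_be = 1.0, 1.0
for n in range(5):
    U_tr, U_be = G_tr * U_tr, G_be * U_be
    print(f"step {n+1}: TR = {U_tr:+.4f}   BE = {U_be:+.2e}")
# TR values alternate in sign with magnitude near 1; BE values are essentially zero.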

8.4.2 Second Order Backward Differentiation Formula (BDF2)


This is a multi-step method, involving the values at the two previous time steps, U^n
and U^{n−1}. The first step, U^1, can be computed with BE without loss of overall
accuracy.

U^{n+1} = (4/3) U^n − (1/3) U^{n−1} + (2k/3) f(U^{n+1}, (n + 1)k)
It is based on the second order, one-sided difference formula we derived earlier
in the course:

u̇((n + 1)k) ≈ (3U^{n+1} − 4U^n + U^{n−1}) / (2k).
It has truncation error

τ = (2/9) k² d³u/dt³(ξ)
with ξ ∈ ((n − 1)k, (n + 1)k). To analyze the stability, consider the method
applied to (8.2):
U^{n+1} = (4/3) U^n − (1/3) U^{n−1} + (2/3) z U^{n+1}

or

(1 − 2z/3) U^{n+1} − (4/3) U^n + (1/3) U^{n−1} = 0.

For a given z, this is a second order constant coefficient difference equation with
solution

U^n = A G_1^n + B G_2^n  (8.8)

for some constants A and B, with G_1(z), G_2(z) the roots of

(1 − 2z/3) G² − (4/3) G + 1/3 = 0.
It is important to clarify that the superscript n for U in (8.8) is the time level,
but for G_1 and G_2 it is an exponent. The stability region for two-step schemes
is

{z : |G_1(z)| < 1 and |G_2(z)| < 1}.

The stability region for BDF-2 is shown in Figure 8.4. It is clear that it is
L-stable. Since

G_{1,2} = ( 2/3 ± √( 4/9 − (1/3)(1 − 2z/3) ) ) / (1 − 2z/3),  (8.9)

it is also clear that the scheme is A-stable (|G_{1,2}| = O(|z|^{−1/2}) in the limit
|z| → ∞). Considering (8.9) in the limit as z → 0 (think of fixed λ, k → 0), we
have

G_1 ≈ 1,  G_2 ≈ 1/3.
The term G_2^n appears as an initial layer that accounts for the error from the
initialization procedure for U^1.

Figure 8.4: Stability region for BDF-2 (outside the curve shown). This is
computed by setting G(z) = e^{iθ} for a grid of θ points and plotting z =
3/2 − (2G − 1/2)/G².
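A sketch (ours, not from the notes) of BDF-2 on the linear test problem (8.2), taking the first step with Backward Euler as suggested above; for a linear problem each implicit solve reduces to a single division.

import numpy as np

def bdf2_linear(lam, T, N):
    """BDF-2 for du/dt = lam*u, u(0) = 1; first step by Backward Euler."""
    k = T / N
    Uold = 1.0
    U = Uold / (1.0 - k * lam)                   # Backward Euler for U^1
    for n in range(1, N):
        # (1 - 2k*lam/3) U^{n+1} = (4/3) U^n - (1/3) U^{n-1}
        Unew = (4.0 * U / 3.0 - Uold / 3.0) / (1.0 - 2.0 * k * lam / 3.0)
        Uold, U = U, Unew
    return U

lam, T = -2.0, 1.0
errs = [abs(bdf2_linear(lam, T, N) - np.exp(lam * T)) for N in (20, 40, 80)]
print([errs[i] / errs[i + 1] for i in range(2)])   # ratios should approach 4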

8.4.3 Second Order Diagonally Implicit Runge-Kutta (DIRK2)


This is a one-step, two-stage implicit method:

U* = U^n + k α f(U*, (n + α)k)
U^{n+1} = U^n + k ( α f(U^{n+1}, (n + 1)k) + (1 − α) f(U*, (n + α)k) )

with α = 1 − √2/2. The term "diagonal" applies since each stage is only implicit
in the value at that stage. It is both A-stable and L-stable. Since it is a one-step
method, adaptive methods for time step size are more easily implemented than
for BDF-2. Over one time step, it is more accurate than BDF-2 under certain
assumptions (the error terms are not directly comparable in general). However,
taking into account the fact that BDF-2 has one implicit solve per time step
and DIRK-2 has two, BDF-2 is more efficient for fixed time step computations.
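A sketch (ours, not from the notes) of DIRK-2 on a scalar problem, using scipy's general-purpose root finder for the two implicit stage solves; in practice a Newton iteration with the known Jacobian would be used.

import numpy as np
from scipy.optimize import fsolve

def dirk2(f, u0, T, N):
    """DIRK-2 with alpha = 1 - sqrt(2)/2; each stage is implicit only in itself."""
    a = 1.0 - np.sqrt(2.0) / 2.0
    k = T / N
    U = float(u0)
    for n in range(N):
        t = n * k
        # Stage 1: Ustar = U + k*a*f(Ustar, t + a*k)
        Ustar = fsolve(lambda V: V - U - k * a * f(V, t + a * k), U)[0]
        # Stage 2: Unew = U + k*(a*f(Unew, t + k) + (1 - a)*f(Ustar, t + a*k))
        Unew = fsolve(lambda V: V - U - k * (a * f(V, t + k)
                      + (1.0 - a) * f(Ustar, t + a * k)), Ustar)[0]
        U = Unew
    return U

f = lambda u, t: -u + np.sin(t)
exact = lambda t: 1.5 * np.exp(-t) + 0.5 * (np.sin(t) - np.cos(t))
errs = [abs(dirk2(f, 1.0, 2.0, N) - exact(2.0)) for N in (25, 50, 100)]
print([errs[i] / errs[i + 1] for i in range(2)])   # ratios should approach 4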

8.4.4 Radau II-A


This is a third order, two-stage method that is both L-stable and A-stable.

U* = U^n + k ( (5/12) f(U*, (n + 1/3)k) − (1/12) f(U^{n+1}, (n + 1)k) )
U^{n+1} = U^n + k ( (3/4) f(U*, (n + 1/3)k) + (1/4) f(U^{n+1}, (n + 1)k) ).

Note that this scheme is implicit in both U* and U^{n+1} simultaneously.
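A sketch (ours, not from the notes) of a Radau IIA step on a scalar problem; since the two stage values are coupled, both are solved for simultaneously with a root finder.

import numpy as np
from scipy.optimize import fsolve

def radau2a(f, u0, T, N):
    """Two-stage Radau IIA; both stages are solved for simultaneously."""
    k = T / N
    U = float(u0)
    for n in range(N):
        t = n * k

        def residual(W):
            Ustar, Unew = W
            r1 = Ustar - U - k * (5.0 / 12.0 * f(Ustar, t + k / 3.0)
                                  - 1.0 / 12.0 * f(Unew, t + k))
            r2 = Unew - U - k * (3.0 / 4.0 * f(Ustar, t + k / 3.0)
                                 + 1.0 / 4.0 * f(Unew, t + k))
            return [r1, r2]

        U = fsolve(residual, [U, U])[1]          # keep the U^{n+1} component
    return U

f = lambda u, t: -u + np.sin(t)
exact = lambda t: 1.5 * np.exp(-t) + 0.5 * (np.sin(t) - np.cos(t))
errs = [abs(radau2a(f, 1.0, 2.0, N) - exact(2.0)) for N in (10, 20, 40)]
print([errs[i] / errs[i + 1] for i in range(2)])   # ratios should approach 8 (third order)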

8.5 Adaptive Time Stepping
We discuss adaptive time stepping with a particular, simple example. Consider
applying BE to a problem and wanting to make the local error at each time step
smaller than a user defined tolerance δ. The local error is k times (8.7):

kτ = −(k²/2) ü(ξ_1) ≈ −(k²/2) ü(nk).

The local error for FE, from (8.4), is

kτ = (k²/2) ü(ξ_2) ≈ (k²/2) ü(nk).
So, if we compute the FE solution U^{FE} as well as the BE solution U^{n+1}, the
local error E in U^{n+1} is approximately

E ≈ (1/2) |U^{n+1} − U^{FE}|.
Note that U^{FE} is cheap to compute (an explicit method) and may be useful as
a good initial guess for iterative solvers for U^{n+1}. The value of U^{FE} is not used
in subsequent calculations, so there is no stability problem.
From the value of E computed above, we would make some decisions about
the step from t_n to t_{n+1} = t_n + k:

If E > δ: We would fail the time step and recompute U^{n+1} with a reduced time
step k.

If E ≤ δ: We would accept the time step and would compute a new time step
k so that E ≤ δ would be likely to be satisfied again. Since

E ≈ C k_old²

(with C ≈ |ü(t_n)|/2) and we want E ≤ δ, we could take k_new with

C k_new² ≈ (E/k_old²) k_new² ≤ δ.

In practice, the formula

k_new = θ k_old √(δ/E)
is used, with θ < 1 a computational parameter, a “safety” factor. I typi-
cally use θ = 0.8. There is also typically a limit on how much k is allowed
to increase. On a failed step for which E is not that much bigger than
δ, the formula above can also be used. On a “bad” failure, k is typically
reduced drastically (e.g. by a factor of two).
Note that higher order one-step methods have a more complicated error structure.
For these methods, typically a higher order method is used to assess the accuracy
of a lower order one. For example, MATLAB's ode45 explicit solver uses a fifth
order Runge-Kutta method to assess the accuracy of a fourth order one. It uses
the Dormand-Prince pair, a seven-stage pair (effectively six function evaluations
per step) carefully chosen so that the same function evaluations are used for
both methods.
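The full controller described above might look like the following sketch (ours, not from the notes; δ, θ, and the limiting factors are illustrative choices), using the FE predictor both as the error estimate and as the initial guess for the Newton solve.

import numpy as np

def adaptive_be(f, dfdu, u0, T, delta=1e-4, theta=0.8, k0=0.1):
    """Backward Euler with the FE-based local error estimate E ~ |U_BE - U_FE|/2."""
    t, U, k = 0.0, float(u0), k0
    while t < T:
        k = min(k, T - t)                       # do not step past the final time
        U_fe = U + k * f(U, t)                  # explicit predictor
        V = U_fe                                # initial guess for the Newton iteration
        for _ in range(20):                     # Newton solve of V = U + k*f(V, t+k)
            r = V - U - k * f(V, t + k)
            V = V - r / (1.0 - k * dfdu(V, t + k))
            if abs(r) < 1e-12:
                break
        E = 0.5 * abs(V - U_fe)                 # local error estimate for the BE step
        if E > delta:                           # failed step: shrink k, at most by half
            k = max(0.5 * k, theta * k * np.sqrt(delta / max(E, 1e-30)))
            continue
        t, U = t + k, V                         # accepted step
        # new step size, with a limit on how much k may grow
        k = min(2.0 * k, theta * k * np.sqrt(delta / max(E, 1e-30)))
    return U

f = lambda u, t: -1000.0 * (u - np.cos(t)) - np.sin(t)    # stiff test problem
dfdu = lambda u, t: -1000.0
print(adaptive_be(f, dfdu, 1.0, 1.0), np.cos(1.0))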

8.6 Butcher Tables
The coefficients of one-step methods can be represented in Butcher Tables. The
general s stage method (s function evaluations) is given by
U^{n+1} = U^n + Σ_{i=1}^{s} b_i K_i

K_i = k f( U^n + Σ_{j=1}^{s} a_{ij} K_j , t_n + c_i k )

The constants in the general method above can be entered into a table:

c_1 | a_11  a_12  ···  a_1s
c_2 | a_21  a_22  ···  a_2s
 ⋮  |  ⋮     ⋮          ⋮
c_s | a_s1  a_s2  ···  a_ss
    | b_1   b_2   ···  b_s

An explicit method has zeros on the diagonal of A and above. A diagonally
implicit method has non-zeros on the diagonal but zeros above. The tables for
some of the schemes discussed in this section are given below:
Improved Euler:

0 | 0    0
1 | 1    0
  | 1/2  1/2

DIRK-2, with α = 1 − √2/2:

α | α     0
1 | 1−α   α
  | 1−α   α

Radau IIA:
1/3 | 5/12  −1/12
1   | 3/4   1/4
    | 3/4   1/4
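As an illustration (ours, not from the notes) of how a Butcher table specifies a method, the sketch below performs one step of a fully implicit Runge-Kutta method directly from (A, b, c) by solving the coupled stage equations; the Radau IIA table above is used as an example.

import numpy as np
from scipy.optimize import fsolve

def irk_step(f, U, t, k, A, b, c):
    """One step of the s-stage implicit RK method given by the Butcher table (A, b, c),
    for a scalar problem: K_i = k*f(U + sum_j a_ij K_j, t + c_i k)."""
    s = len(b)

    def residual(K):
        return [K[i] - k * f(U + np.dot(A[i], K), t + c[i] * k) for i in range(s)]

    K = fsolve(residual, np.zeros(s))
    return U + np.dot(b, K)

# Radau IIA table from the text.
A = np.array([[5.0 / 12.0, -1.0 / 12.0],
              [3.0 / 4.0,   1.0 / 4.0]])
b = np.array([3.0 / 4.0, 1.0 / 4.0])
c = np.array([1.0 / 3.0, 1.0])

f = lambda u, t: -u + np.sin(t)
exact = lambda t: 1.5 * np.exp(-t) + 0.5 * (np.sin(t) - np.cos(t))

U, k, N = 1.0, 0.05, 40
for n in range(N):
    U = irk_step(f, U, n * k, k, A, b, c)
print(U, exact(N * k))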

8.7 Lecture #8 Problems


Problem 1. Show that the DIRK-2 method in the class notes is second order
accurate and A-stable.

Problem 2. Implement the Forward Euler scheme on the problem for u(t) given by

ü + u = 0

with initial conditions u(0) = 1, u̇(0) = 0, written as a first order system. The
exact solution is u(t) = cos t. Observe the convergence of the scheme at t = 8π.
The exact solution satisfies

u̇² + u² ≡ 1

for all time (this can be shown analytically). How does this quantity vary in
time in your scheme from A4? Based on the stability region for FE and the
eigenvalues of this problem written as a system, why would you expect this
behaviour? Do any of the schemes considered in the lecture notes preserve this
identity exactly? (Either discuss why they don't or find one that does.)

Problem 3. Show that the Radau IIA method in the class notes is indeed third
order, L-stable and A-stable.
Problem 4. Consider the Lax-Wendroff scheme applied to the one-way wave
equation u_t = u_x:

U^{n+1} = U^n + k D_1 U^n + (k²/2) D_2 U^n

where solutions are 1-periodic in x. Consider the scheme with k = Ch with
0 < C < 1. Show that it is second order accurate. Note that this is not a MOL
scheme. Hint: You will have to differentiate the partial differential equation
in time to match some of the terms. Show that the method is stable as long
as k < h. Hint: use von Neumann analysis.
Problem 5. Consider the following scheme for ü = f(u):

U^{n+1} = U^n + k ( V^n + (k/2) f(U^n) )
V^{n+1} = V^n + (k/2) ( f(U^n) + f(U^{n+1}) ).
Show that the method is second order accurate. Note that it is a non-standard
explicit scheme, specialized to this particular problem structure.
Problem 6. Consider the following scheme for the 1D wave equation u_tt = u_xx:

(1/k²)( U_j^{n+1} − 2 U_j^n + U_j^{n−1} ) = (1/h²)( U_{j+1}^n − 2 U_j^n + U_{j−1}^n ).
What time step restrictions are needed for the scheme to be stable?

