Main
A digital compendium
Contents

1 Introduction
  1.1 Check Python and LiClipse plugin
  1.2 Scientific computing with Python

3 Shooting Methods
  3.1 Linear equations
    3.1.1 Example: Couette-Poiseuille flow
    3.1.2 Example: Simply supported beam with constant cross-sectional area
    3.1.3 Example: Simply supported beam with varying cross-sectional area
  3.2 Non-linear boundary value problems of ODEs
    3.2.1 Example: Large deflection of a cantilever
  3.3 Notes on similarity solutions
    3.3.1 Example: Freezing of a water pipe
    3.3.2 Example: Stokes' first problem
    3.3.3 Example: The Blasius equation
    3.3.4 Example: The Falkner-Skan equation
  3.4 Shooting with two initial conditions
    3.4.1 Linear equations
    3.4.2 Example: Cylindrical tank with liquid
    3.4.3 Example of non-linear equations
  3.5 Exercises

7 Diffusion problems
  7.1 Introduction
  7.2 Confined, unsteady Couette flow
  7.3 Stability: Criterion for positive coefficients. PC-criterion
  7.4 Stability analysis with von Neumann's method
    7.4.1 Example: Practical usage of the von Neumann condition
  7.5 More schemes for parabolic equations
    7.5.1 The Richardson scheme (1910)
    7.5.2 The Dufort-Frankel scheme (1953)
    7.5.3 The Crank-Nicolson scheme. The θ-scheme
    7.5.4 Von Neumann's general stability condition
  7.6 Truncation error, consistency and convergence

Bibliography

Index
Chapter 1
Introduction
def systemCheck():
    '''Check that the modules needed for the course tkt4140 are installed.'''
    installed = " ... installed!"
    print ""
    print "Check for necessary modules needed for the course tkt4140"
    print ""
    try:
        import numpy  # report whether numpy is available
        print "numpy" + installed
    except:
        print " IMPORT ERROR; no version of numpy found"

if __name__ == '__main__':
    systemCheck()
Chapter 2
Initial value problems for ODEs

2.1 Introduction
By an initial value problem for an ordinary differential equation (ODE) we
mean a problem where all boundary conditions are given for one and the same
value of the independent variable. For a first order ODE we get e.g.

y'(x) = f(x, y)   (2.1)
y(x0) = a   (2.2)
while for a second order ODE we get
d²θ/dτ² + (g/l)·sin(θ) = 0   (2.5)
θ(0) = θ0,   dθ/dτ(0) = 0   (2.6)
To present the governing equation (2.5) in a more convenient form, we
introduce a dimensionless time t given by t = √(g/l)·τ, such that (2.5) and (2.6)
may be written as:
θ̈(t) + sin(θ(t)) = 0   (2.7)
θ(0) = θ0,   θ̇(0) = 0   (2.8)
The dot denotes differentiation with respect to the dimensionless time t. For small
displacements we can set sin(θ) ≈ θ, such that (2.7) and (2.8) become

θ̈(t) + θ(t) = 0   (2.9)
θ(0) = θ0,   θ̇(0) = 0   (2.10)

The difference between (2.7) and (2.9) is that the latter is linear, while the
former is non-linear. The analytical solution of Equations (2.7) and (2.8) is given
in Appendix G.2 of Numeriske Beregninger.
A general linear ODE of order n may be written as

a_n(x)·y^(n)(x) + a_{n-1}(x)·y^(n-1)(x) + ··· + a_1(x)·y'(x) + a_0(x)·y(x) = b(x)   (2.11)

where y^(k), k = 0, 1, ..., n refers to the k-th derivative and y^(0)(x) = y(x).
If one or more of the coefficients a_k also are functions of at least one y^(k), k =
0, 1, ..., n, the ODE is non-linear. From (2.11) it follows that (2.7) is non-linear
and (2.9) is linear.
Analytical solutions of non-linear ODEs are rare, and apart from some
special types, there are no general ways of finding such solutions. Therefore
non-linear equations must usually be solved numerically. In many cases this is
also true for linear equations; for instance, there exists no general method for solving
the general second order linear ODE.
From a numerical point of view, the main difference between linear and
non-linear equations is the multitude of solutions that may arise when solving
non-linear equations; a solution may, for instance, blow up such that y(x) → ∞ for x → 1, which isn't evident from the equation itself.
f(x) = f(x0) + (x − x0)·f'(x0) + ((x − x0)²/2)·f''(x0) + ··· + ((x − x0)^n/n!)·f^(n)(x0) + higher order terms   (2.12)
Let’s use this formula to find the first terms in the series expansion for θ(t)
around t = 0 from the differential equation given in (2.9):
First we observe that the solution to the ODE in (2.14) may be expressed as
a Taylor expansion around the initial point:

θ(t) ≈ θ(0) + t·θ̇(0) + (t²/2)·θ̈(0) + (t³/6)·θ^(3)(0) + (t⁴/24)·θ^(4)(0)   (2.15)
By use of the initial conditions of the ODE in (2.14), θ(0) = θ0 and θ̇(0) = 0, we
get

θ(t) ≈ θ0 + (t²/2)·θ̈(0) + (t³/6)·θ^(3)(0) + (t⁴/24)·θ^(4)(0)   (2.16)
From the ODE in (2.14) we obtain expressions for the derivatives at the initial
point:

θ̈(t) = −θ(t)  →  θ̈(0) = −θ(0) = −θ0   (2.17)
Expressions for higher order derivatives evaluated at the initial point may
be obtained by further differentiation of the ODE (2.14):

θ^(3)(t) = −θ̇(t)  →  θ^(3)(0) = −θ̇(0) = 0   (2.18)

and

θ^(4)(t) = −θ̈(t)  →  θ^(4)(0) = −θ̈(0) = θ0

Substitution into (2.16) then gives

θ(t) ≈ θ0·(1 − t²/2 + t⁴/24) = θ0·(1 − t²/2! + t⁴/4!)   (2.19)

which we recognize as the first terms of the series for θ0·cos(t).
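As a quick numerical sanity check (not part of the original derivation), the truncated series (2.19) may be compared with θ0·cos(t) for small t; a minimal sketch using numpy:

import numpy as np

theta_0 = 0.1
t = np.linspace(0, 0.5, 6)                          # small times where the truncation is valid
theta_series = theta_0*(1 - t**2/2.0 + t**4/24.0)   # truncated Taylor series (2.19)
theta_exact = theta_0*np.cos(t)                     # solution of the linearized ODE
print(np.max(np.abs(theta_series - theta_exact)))   # error of order theta_0*t**6/720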
As a classical example, consider the ODE studied by Newton in 1671,

y'(x) = 1 − 3x + y + x² + xy,   y(0) = 0

The solution may again be approximated by a Taylor expansion around the initial point:

y(x) ≈ x·y'(0) + (x²/2)·y''(0) + (x³/6)·y'''(0) + (x⁴/24)·y^(4)(0)

From the differential equation we get y'(0) = 1. By consecutive differentiation
we get

y''(x) = −3 + y' + 2x + xy' + y  →  y''(0) = −2
y'''(x) = y'' + 2 + xy'' + 2y'  →  y'''(0) = 2
y^(4)(x) = y''' + xy''' + 3y''  →  y^(4)(0) = −4

Inserting above gives y(x) ≈ x − x² + x³/3 − x⁴/6.
Newton gave the following solution: y(x) ≈ x − x² + x³/3 − x⁴/6 + x⁵/30 − x⁶/45.
Now you can check whether Newton calculated correctly. Today it is possible to
give the solution in closed form with known functions as follows,

y(x) = 3·√(2πe)·exp[x(1 + x/2)]·[erf((√2/2)·(1 + x)) − erf(√2/2)] + 4·(1 − exp[x(1 + x/2)]) − x
Note the combination √(2πe). See Hairer et al. [10], section 1.2, for more details
on classical differential equations.
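To compare Newton's series with the closed-form solution, one may expand the expression above in a Taylor series with SymPy. The following is a minimal sketch, not part of the original compendium, and assumes a reasonably recent SymPy installation:

from sympy import symbols, exp, erf, sqrt, pi, E, series

x = symbols('x')
t1 = exp(x*(1 + x/2))
y_exact = 3*sqrt(2*pi*E)*t1*(erf(sqrt(2)/2*(1 + x)) - erf(sqrt(2)/2)) + 4*(1 - t1) - x
# Expanding around x = 0 should reproduce x - x**2 + x**3/3 - x**4/6 + x**5/30 - x**6/45 + ...
print(series(y_exact, x, 0, 7))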
For example, the second order ODE y''(x) = −y(x) may be written as a system of two first order ODEs by introducing g(x) = y'(x):

y'(x) = g(x)
g'(x) = −y(x)
y(0) = a0,   g(0) = b0
Another example:
We set y'(x) = g(x) and y''(x) = g'(x) = f(x), and the system may be written
as

y'(x) = g(x)
g'(x) = f(x)
f'(x) = −2f(x) + (g(x))² − 2y(x) + x²
Finally, the third order ODE in (2.20) may be represented as a system of first
order ODEs in the same way. With the notation y0 = y we have

y' = y0' = y1
y'' = y0'' = y1' = y2

and, in general, an ODE of order m,

y^(m) = f(x, y0, y1, ..., y_{m-1}),   where y^(m) ≡ d^m y/dx^m,

may be represented as the system of first order ODEs

y0' = y1
y1' = y2
  .
  .
y'_{m-2} = y_{m-1}
y'_{m-1} = f(x, y0, y1, y2, ..., y_{m-1})
2.4 Differences
We will study some simple methods to solve initial value problems. Later we
shall see that these methods also may be used to solve boundary value problems
for ODEs.
For this purpose we need to introduce a suitable notation to enable us to
formulate the methods in a convenient manner. In Figure 2.2 an arbitrary
function y is illustrated as a continuous function of x, i.e. y = y(x). Later y will
be used to represent the solution of an ODE, which only may be represented
and obtained for discrete values of x. We represent these discrete, equidistant
values of x by:

x_j = x0 + jh

where h = ∆x is assumed constant unless otherwise stated and j is an integer
counter referring to the discrete values x_j for which the corresponding discrete
value
y_j = y(x_j)

may be found (see Figure 2.3).

[Figure 2.2: a continuous function y(x) and its values y(x0), y(x1), ..., y(x_{j-2}), y(x_{j-1}), y(x_j) at the equidistant locations x0, x1, ..., x_j with spacing h.]

[Figure 2.3: discrete values y_{j-1}, y_{j-1/2}, y_j, y_{j+1/2}, y_{j+1}, y_{j+1½}, y_{j+2} at half-step spacing h/2.]
Having introduced this notation we may then develop useful expressions and
notation for forward differences, backward differences, and central differences,
which will be used frequently later:

Forward differences:
∆y_j = y_{j+1} − y_j

Backward differences:
∇y_j = y_j − y_{j-1}

Central differences:
δy_{j+1/2} = y_{j+1} − y_j

The linear difference operators ∆, ∇ and δ are useful when we are deriving
more complicated expressions. An example of usage: ∆∇y_j = ∆(y_j − y_{j-1}) = y_{j+1} − 2y_j + y_{j-1} = δ²y_j.
However, for clarity we will mainly write out the formulas entirely rather
than using operators.
We shall find difference formulas and need again Taylor's theorem:

y(x) = y(x0) + y'(x0)·(x − x0) + (1/2)·y''(x0)·(x − x0)² + ··· + (1/n!)·y^(n)(x0)·(x − x0)^n + R_n   (2.23)

The remainder R_n is given by

R_n = (1/(n + 1)!)·y^(n+1)(ξ)·(x − x0)^(n+1)   (2.24)

where ξ ∈ (x0, x).
Now, Taylor's theorem (2.23) may be used to approximate the value of y(x_{j+1}),
i.e. the forward value of y(x_j), by assuming that y(x_j) and its derivatives are
known:

y(x_{j+1}) ≡ y(x_j + h) = y(x_j) + h·y'(x_j) + (h²/2)·y''(x_j) + ··· + (h^n/n!)·y^(n)(x_j) + R_n   (2.25)

where the remainder R_n = O(h^(n+1)), h → 0.
Similarly, the backward value y(x_{j-1}) may be approximated by

y(x_{j-1}) ≡ y(x_j − h) = y(x_j) − h·y'(x_j) + (h²/2)·y''(x_j) − ···   (2.26)

From (2.25) we also get

y'(x_j) = (y(x_{j+1}) − y(x_j))/h + O(h)   (2.27)

while (2.26) gives

y'(x_j) = (y(x_j) − y(x_{j-1}))/h + O(h)   (2.28)

By adding (2.26) and (2.25) together we get an approximation of the second
derivative at the location x_j:

y''(x_j) = (y(x_{j+1}) − 2y(x_j) + y(x_{j-1}))/h² + O(h²)   (2.29)

while subtracting (2.26) from (2.25) gives

y'(x_j) = (y(x_{j+1}) − y(x_{j-1}))/(2h) + O(h²)   (2.30)
Notation:
We let y(x_j) always denote the function y(x) evaluated at x = x_j. We use y_j both
for the numerical and the analytical value; the intended meaning should hopefully be clear
from the context.
Equations (2.27), (2.28), (2.29) and (2.30) may then be used to deduce the
following difference expressions:

y'_j = (y_{j+1} − y_j)/h;   truncation error O(h)   (2.31)
y'_j = (y_j − y_{j-1})/h;   truncation error O(h)   (2.32)
y''_j = (y_{j+1} − 2y_j + y_{j-1})/h²;   truncation error O(h²)   (2.33)
y'_j = (y_{j+1} − y_{j-1})/(2h);   truncation error O(h²)   (2.34)
In summary, equation (2.31) is a forward difference, (2.32) is a backward
difference while (2.33) and (2.34) are central differences.
The expressions in (2.31), (2.32), (2.33) and (2.34) may also conveniently be
established from Figure 2.4.
Equation (2.31) follows directly from the definition of the derivative, whereas
the second order derivative (2.33) may be obtained as a derivative of the derivative:

y''_j = ((y_{j+1} − y_j)/h − (y_j − y_{j-1})/h)·(1/h) = (y_{j+1} − 2y_j + y_{j-1})/h²

and an improved expression for the derivative (2.34) may be obtained by
averaging the forward and the backward derivatives:

y'_j = ((y_{j+1} − y_j)/h + (y_j − y_{j-1})/h)·(1/2) = (y_{j+1} − y_{j-1})/(2h)
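The truncation errors quoted in (2.31)-(2.34) are easily verified numerically. The sketch below (not from the compendium) halves h and checks that the error of the central difference drops by a factor of about four, while the forward difference error only halves; y(x) = sin(x) is used as a test function:

import numpy as np

def diff_errors(h, x=1.0):
    y = np.sin
    forward = (y(x + h) - y(x))/h           # forward difference (2.31), O(h)
    central = (y(x + h) - y(x - h))/(2*h)   # central difference (2.34), O(h^2)
    return abs(forward - np.cos(x)), abs(central - np.cos(x))

e1f, e1c = diff_errors(0.1)
e2f, e2c = diff_errors(0.05)
print(e1f/e2f)   # close to 2 -> first order
print(e1c/e2c)   # close to 4 -> second order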
Figure 2.4: Illustration of how the discrete values may be used to estimate
various orders of derivatives at location xi .
More difference formulas may be derived systematically by combining the Taylor expansions of y_{j-1} and y_{j+1}. Consider the linear combination

a·y_{j-1} + b·y_j + c·y_{j+1} = a·(y_j − h·y'_j + (h²/2)·y''_j − (h³/6)·y'''(ξ)) + b·y_j + c·(y_j + h·y'_j + (h²/2)·y''_j + (h³/6)·y'''(ξ))   (2.36)

By collecting terms in Eq. (2.36) we get:

a·y_{j-1} + b·y_j + c·y_{j+1} = (a + b + c)·y_j + (c − a)·h·y'_j + (a + c)·(h²/2)·y''_j + (c − a)·(h³/6)·y'''(ξ)   (2.37)

From Eq. (2.37) we may then find a, b and c such that y'_j gets as high accuracy
as possible by requiring:

a + b + c = 0
(c − a)·h = 1   (2.38)
a + c = 0
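The linear system (2.38) has the solution a = −1/(2h), b = 0, c = 1/(2h), which reproduces the central difference (2.34). A small SymPy sketch (not in the original text) that solves the system:

from sympy import symbols, solve

a, b, c, h = symbols('a b c h')
sol = solve([a + b + c, (c - a)*h - 1, a + c], [a, b, c])
print(sol)   # {a: -1/(2*h), b: 0, c: 1/(2*h)}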
Forward differences:

dy_i/dx = (y_{i+1} − y_i)/∆x + (1/2)·y''(ξ)·∆x
dy_i/dx = (−3y_i + 4y_{i+1} − y_{i+2})/(2∆x) + (1/3)·y'''(ξ)·(∆x)²
dy_i/dx = (−11y_i + 18y_{i+1} − 9y_{i+2} + 2y_{i+3})/(6∆x) + (1/4)·y^(4)(ξ)·(∆x)³
d²y_i/dx² = (y_i − 2y_{i+1} + y_{i+2})/(∆x)² + y'''(ξ)·∆x
d²y_i/dx² = (2y_i − 5y_{i+1} + 4y_{i+2} − y_{i+3})/(∆x)² + (11/12)·y^(4)(ξ)·(∆x)²
Backward differences:
dy_i/dx = (y_i − y_{i-1})/∆x + (1/2)·y''(ξ)·∆x
dy_i/dx = (3y_i − 4y_{i-1} + y_{i-2})/(2∆x) + (1/3)·y'''(ξ)·(∆x)²
dy_i/dx = (11y_i − 18y_{i-1} + 9y_{i-2} − 2y_{i-3})/(6∆x) + (1/4)·y^(4)(ξ)·(∆x)³
d²y_i/dx² = (y_i − 2y_{i-1} + y_{i-2})/(∆x)² + y'''(ξ)·∆x
d²y_i/dx² = (2y_i − 5y_{i-1} + 4y_{i-2} − y_{i-3})/(∆x)² + (11/12)·y^(4)(ξ)·(∆x)²
Central differences:
dy_i/dx = (y_{i+1} − y_{i-1})/(2∆x) − (1/6)·y'''(ξ)·(∆x)²
dy_i/dx = (−y_{i+2} + 8y_{i+1} − 8y_{i-1} + y_{i-2})/(12∆x) + (1/30)·y^(5)(ξ)·(∆x)⁴
d²y_i/dx² = (y_{i+1} − 2y_i + y_{i-1})/(∆x)² − (1/12)·y^(4)(ξ)·(∆x)²
d²y_i/dx² = (−y_{i+2} + 16y_{i+1} − 30y_i + 16y_{i-1} − y_{i-2})/(12(∆x)²) + (1/90)·y^(6)(ξ)·(∆x)⁴
d³y_i/dx³ = (y_{i+2} − 2y_{i+1} + 2y_{i-1} − y_{i-2})/(2(∆x)³) + (1/4)·y^(5)(ξ)·(∆x)²
p(x_{i+1/2}) ≈ (1/2)·(p_{i+1} + p_i),   p(x_{i-1/2}) ≈ (1/2)·(p_i + p_{i-1})   (2.44)

Note that for p(x) = 1 = constant we get the usual expression

d²u/dx²|_i = (u_{i+1} − 2u_i + u_{i-1})/h² + O(h²)
[Figure: graphical illustration of Euler's method: starting from y_n, the increment h·f(x_n, y_n) advances the numerical solution to y_{n+1} and then y_{n+2}, compared with the exact solution y(x) through y(x_{n+1}) and y(x_{n+2}).]
import numpy as np
import matplotlib.pylab as plt

"""Euler's method applied to y'(x) = y(x), y(0) = 1:
       y^(n + 1) = y^(n) + h*f(x, y^(n)),   h = dx
"""

N = 100
x = np.linspace(0, 1, N + 1)
h = x[1] - x[0]       # steplength
y_0 = 1               # initial condition
Y = np.zeros_like(x)  # vector for storing y values
Y[0] = y_0            # first element of y = y(0)

for n in range(N):
    f = Y[n]          # f(x, y) = y for this ODE
    Y[n + 1] = Y[n] + h*f

Y_analytic = np.exp(x)
[Figure: the Euler solution compared with the exact solution e^x on 0 <= x <= 1.]
import numpy as np
import matplotlib.pylab as plt
from math import pi
thetha = Y[0, :]
thetha_analytic = thetha_0*np.cos(t)
plt.figure()
plt.plot(t, thetha_analytic, 'b')
plt.plot(t, thetha, 'r--')
import numpy as np
import matplotlib.pylab as plt
from math import pi
# define Euler solver
def euler(func, y_0, time):
""" Generic implementation of the euler scheme for solution of systems of ODEs:
y0’ = y1
y1’ = y2
.
.
yN’ = f(yN-1,..., y1, y0, t)
method:
y0^(n+1) = y0^(n) + h*y1
y1^(n+1) = y1^(n) + h*y2
.
.
yN^(n + 1) = yN^(n) + h*f(yN-1, .. y1, y0, t)
Args:
func(function): func(y, t) that returns y’ at time t; [y1, y2,...,f(yn-1, .. y1, y0, t)]
y_0(array): initial conditions
time(array): array containing the time to be solved for
Returns:
y(array): array/matrix containing solution for y0 -> yN for all timesteps"""
y = np.zeros((np.size(time), np.size(y_0)))
y[0,:] = y_0
for i in range(len(time)-1):
dt = time[i+1] - time[i]
y[i+1,:]=y[i,:] + np.asarray(func(y[i,:], time[i]))*dt
return y
def pendulum_func(y, t):
    """Right hand side of the linearized pendulum written as a system of 1st order ODEs:
        y0' = y1
        y1' = -y0
    Args:
        y(array): array [y0, y1] at time t
        t(float): current time
    Returns:
        dy(array): [y0', y1'] = [y1, -y0]
    """
    dy = np.zeros_like(y)
    dy[:] = [y[1], -y[0]]
    return dy
N = 100
time = np.linspace(0, 2*pi, N + 1)
thetha_0 = [0.1, 0]
theta = euler(pendulum_func, thetha_0, time)
thetha = theta[:, 0]
thetha_analytic = thetha_0[0]*np.cos(time)
plt.figure()
plt.plot(time, thetha_analytic, 'b')
plt.plot(time, thetha, 'r--')
[Figure: a sphere falling in a fluid; the position is z, the velocity v = dz/dt, and gravity mg acts on the sphere.]

d²z/dt² = (1/A)·(B − C·C_d·|dz/dt|·dz/dt)   (2.57)
For Equation (2.57) two initial conditions must be specified, e.g. v = v0 and
z = z0 for t = 0.
Figure 2.9 illustrates C_D as a function of Re. The values in the plot are not
as accurate as the number of digits in the program might indicate. For example,
the location and the size of the "valley" in the diagram are strongly dependent on
the degree of turbulence in the free stream and the roughness of the sphere. As
the drag coefficient C_D is a function of the Reynolds number, it is also a function
of the solution v (i.e. the velocity) of the ODE in Equation (2.56). We will use
the function C_D(Re) as an example of how functions may be implemented in
Python.
[Figure 2.9: the drag coefficient C_D for a sphere as a function of the Reynolds number Re, plotted on logarithmic axes for Re from 10^-1 to 10^7.]
Euler’s method for a system. Euler’s method may of course also be used
for a system. Let’s look at a simultaneous system of p equations
y1' = f1(x, y1, y2, ..., yp)
y2' = f2(x, y1, y2, ..., yp)
  .                               (2.58)
  .
yp' = fp(x, y1, y2, ..., yp)

or, in vector form,

y' = f(x, y)   (2.60)
y(x0) = a

As an example, consider the system

y1' = y2
y2' = y3                          (2.62)
y3' = −y1·y3
z_{n+1} = z_n + ∆t·v_n
v_{n+1} = v_n + ∆t·[g − α·(v_n)²]

with z_0 = 0, v_0 = 0.
dz/dt = v   (2.65)
dv/dt = g − α·v²   (2.66)

where

α = (3ρ_f/(4ρ_k·d))·C_D

The analytical solution with z(0) = 0 and v(0) = 0 is given by

z(t) = ln(cosh(√(αg)·t))/α   (2.67)
v(t) = √(g/α)·tanh(√(αg)·t)   (2.68)

The terminal velocity v_t is found from dv/dt = 0, which gives v_t = √(g/α).
We use data for a golf ball: d = 41 mm, ρ_k = 1275 kg/m³, ρ_f = 1.22 kg/m³,
and choose C_D = 0.4, which gives α = 7·10⁻³ m⁻¹. The terminal velocity then
becomes

v_t = √(g/α) = 37.44 m/s
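These numbers may be reproduced with a few lines of Python; the following is a small sketch, not one of the compendium listings:

import numpy as np

rho_f = 1.22      # density of air [kg/m^3]
rho_k = 1275.0    # density of the golf ball [kg/m^3]
d = 41.0e-3       # diameter [m]
CD = 0.4          # constant drag coefficient
g = 9.81          # acceleration of gravity [m/s^2]

alpha = 3.0*rho_f*CD/(4.0*rho_k*d)   # approximately 7.0e-3 [1/m]
vt = np.sqrt(g/alpha)                # terminal velocity, approximately 37.4 [m/s]
print(alpha)
print(vt)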
If we use Taylor's method from section 2.2 we get the following expressions
by using four terms in the series expansion:

z(t) = (1/2)·g·t²·(1 − (1/6)·α·g·t²)   (2.69)
v(t) = g·t·(1 − (1/3)·α·g·t²)   (2.70)
By applying the Euler scheme (2.53) to (2.65) and (2.66) we get

z_{n+1} = z_n + ∆t·v_n   (2.71)
v_{n+1} = v_n + ∆t·(g − α·v_n²),   n = 0, 1, ...   (2.72)

In the implementation we write the system with z_0 = z and z_1 = v:

dz_0/dt = z_1
dz_1/dt = g − α·z_1²
One way of implementing the integration scheme is given in the following
function euler():
def euler(func,z0, time):
"""The Euler scheme for solution of systems of ODEs.
z0 is a vector for the initial conditions,
the right hand side of the system is represented by func which returns
a vector with the same size as z0 ."""
z = np.zeros((np.size(time),np.size(z0)))
z[0,:] = z0
for i in range(len(time)-1):
dt = time[i+1]-time[i]
z[i+1,:]=z[i,:] + np.asarray(func(z[i,:],time[i]))*dt
return z
# src-ch1/FallingSphereEuler.py;DragCoefficientGeneric.py @ git@lrhgit/tkt4140/src/src-ch1/DragCoeffi
from DragCoefficientGeneric import cd_sphere
from matplotlib.pyplot import *
import numpy as np
# change some default values to make plots more readable
LNWDT=2; FNT=11
rcParams[’lines.linewidth’] = LNWDT; rcParams[’font.size’] = FNT
z = np.zeros((np.size(time),np.size(z0)))
z[0,:] = z0
for i in range(len(time)-1):
dt = time[i+1]-time[i]
z[i+1,:]=z[i,:] + np.asarray(func(z[i,:],time[i]))*dt
return z
def v_taylor(t):
# z = np.zeros_like(t)
v = np.zeros_like(t)
alpha = 3.0*rho_f/(4.0*rho_s*d)*CD
v=g*t*(1-alpha*g*t**2)
return v
T = 10 # end of simulation
N = 20 # no of time steps
time = np.linspace(0, T, N+1)
z0=np.zeros(2)
z0[0] = 2.0
ze = euler(f, z0, time) # compute response with constant CD using Euler’s method
ze2 = euler(f2, z0, time) # compute response with varying CD using Euler’s method
k1 = np.sqrt(g*4*rho_s*d/(3*rho_f*CD))
k2 = np.sqrt(3*rho_f*g*CD/(4*rho_s*d))
v_a = k1*np.tanh(k2*time) # compute response with constant CD using analytical solution
# plotting
legends=[]
line_type=[’-’,’:’,’.’,’-.’,’--’]
plot(time, v_a, line_type[0])
legends.append(’Analytical (constant CD)’)
plot(time, ze[:,1], line_type[1])
legends.append(’Euler (constant CD)’)
[Figure: velocity of the falling sphere as a function of time, comparing the analytical solution (constant CD), the Euler solutions with constant and varying CD, and the Taylor expansion (constant CD).]
if Re <= 0.0:
CD = 0.0
elif Re > 8.0e6:
CD = 0.2
elif Re > 0.0 and Re <= 0.5:
CD = 24.0/Re
elif Re > 0.5 and Re <= 100.0:
p = array([4.22, -14.05, 34.87, 0.658])
CD = polyval(p, 1.0/Re)
elif Re > 100.0 and Re <= 1.0e4:
p = array([-30.41, 43.72, -17.08, 2.41])
CD = polyval(p, 1.0/log10(Re))
elif Re > 1.0e4 and Re <= 3.35e5:
p = array([-0.1584, 2.031, -8.472, 11.932])
CD = polyval(p, log10(Re))
elif Re > 3.35e5 and Re <= 5.0e5:
x1 = log10(Re/4.5e5)
CD = 91.08*x1**4 + 0.0764
else:
p = array([-0.06338, 1.1905, -7.332, 14.93])
CD = polyval(p, log10(Re))
return CD
# Make plot
from matplotlib import pyplot
# change some default values to make plots more readable
LNWDT=2; FNT=11
pyplot.rcParams[’lines.linewidth’] = LNWDT; pyplot.rcParams[’font.size’] = FNT
In the following, we will break up the program and explain the different parts.
In the first code line,
from numpy import logspace, zeros
the functions logspace and zeros are imported from the package numpy.
The numpy package (NumPy is an abbreviation for Numerical Python) enables
the use of array objects. Using numpy a wide range of mathematical operations
can be done directly on complete arrays, thereby removing the need for loops
over array elements. This is commonly called vectorization and may cause a
dramatic increase in computational speed of Python programs. The function
logspace works on a logarithmic scale just as the function linspace works on
a regular scale. The function zeros creates arrays of a certain size filled with
zeros. Several comprehensive guides to the numpy package may be found at
http://www.numpy.org.
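As a small illustration of the two imported functions (a sketch, not part of the compendium): logspace returns points that are equidistant on a logarithmic axis, and zeros allocates an array that may be filled element by element or by vectorized expressions.

from numpy import logspace, zeros

Re = logspace(-1, 7, 5)   # five values: 0.1, 10, 1e3, 1e5, 1e7, equidistant in log10
CD = zeros(5)             # array of five zeros, to be filled later
print(Re)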
In CDsphere.py a function cd_sphere was defined as follows:
def cd_sphere(Re):
"This function computes the drag coefficient of a sphere as a function of the Reynolds number Re.
# Curve fitted after fig . A -56 in Evett and Liu: "Fluid Mechanics and Hydraulics"
if Re <= 0.0:
CD = 0.0
elif Re > 8.0e6:
CD = 0.2
elif Re > 0.0 and Re <= 0.5:
CD = 24.0/Re
elif Re > 0.5 and Re <= 100.0:
p = array([4.22, -14.05, 34.87, 0.658])
CD = polyval(p, 1.0/Re)
elif Re > 100.0 and Re <= 1.0e4:
p = array([-30.41, 43.72, -17.08, 2.41])
CD = polyval(p, 1.0/log10(Re))
elif Re > 1.0e4 and Re <= 3.35e5:
p = array([-0.1584, 2.031, -8.472, 11.932])
CD = polyval(p, log10(Re))
elif Re > 3.35e5 and Re <= 5.0e5:
x1 = log10(Re/4.5e5)
CD = 91.08*x1**4 + 0.0764
else:
p = array([-0.06338, 1.1905, -7.332, 14.93])
CD = polyval(p, log10(Re))
return CD
The function takes Re as an argument and returns the value CD. All Python func-
tions begin with def, followed by the function name, and then inside parentheses
a comma-separated list of function arguments, ended with a colon. Here we have
only one argument, Re. This argument acts as a standard variable inside the
function. The statements to perform inside the function must be indented. At
the end of a function it is common to use the return statement to return the
value of the function.
Variables defined inside a function, such as p and x1 above, are local variables
that cannot be accessed outside the function. Variables defined outside functions,
in the "main program", are global variables and may be accessed anywhere, also
inside functions.
Three more functions from the numpy package are imported in the func-
tion. They are not used outside the function and are therefore chosen to be
imported only if the function is called from the main program. We refer to the
documentation of NumPy for details about the different functions.
The function above contains an example of the use of the if-elif-else
block. The block begins with if and a boolean expression. If the boolean
expression evaluates to true the indented statements following the if statement
are carried out. If not, the boolean expression following the elif is evaluated. If
none of the conditions are evaluated to true the statements following the else
are carried out.
In the code block
Npts = 500
Re = logspace(-1, 7, Npts, True, 10)
CD = zeros(Npts)
i_list = range(0, Npts-1)
for i in i_list:
CD[i] = cd_sphere(Re[i])
In the following we will look at different ways of vectorizing the function cd_sphere, and, as we shall see, some are more time consuming than others. We
will also demonstrate how the time consumption (or efficiency) may be tested.
A simple extension of the single-valued function cd_sphere is as follows:
def cd_sphere_py_vector(ReNrs):
CD = zeros_like(ReNrs)
counter = 0
for Re in ReNrs:
CD[counter] = cd_sphere(Re)
counter += 1
return CD
CD = where((Re > 3.35e5) & (Re <= 5.0e5), 91.08*(log10(Re/4.5e5))**4 + 0.0764, CD) #condition 6
A third approach we will try is using boolean type variables. The 8 variables
condition1 through condition8 in the function cd_sphere_vector_bool are
boolean arrays of the same size and shape as Re. The elements of the boolean
arrays evaluate to either True or False, depending on whether the corresponding
element in Re satisfies the condition the variable is assigned, as illustrated in the small sketch below.
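The mechanism may be demonstrated on a small array (a sketch, not part of the compendium): a boolean condition produces a mask of the same shape as Re, and assigning to CD[mask] only touches the elements where the mask is True.

from numpy import array, logical_and, zeros_like

Re = array([0.1, 10.0, 1.0e3, 1.0e7])
condition2 = logical_and(0 < Re, Re <= 0.5)   # [True, False, False, False]
CD = zeros_like(Re)
CD[condition2] = 24.0/Re[condition2]          # Stokes drag, only where the mask is True
print(CD)                                     # [240.   0.   0.   0.]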
def cd_sphere_vector_bool(Re):
    "Computes the drag coefficient of a sphere as a function of the Reynolds number Re."
    # Curve fitted after fig. A-56 in Evett and Liu: "Fluid Mechanics and Hydraulics"
    from numpy import log10, array, polyval, zeros_like, logical_and

    condition1 = Re < 0
    condition2 = logical_and(0 < Re, Re <= 0.5)
    condition3 = logical_and(0.5 < Re, Re <= 100.0)
    condition4 = logical_and(100.0 < Re, Re <= 1.0e4)
    condition5 = logical_and(1.0e4 < Re, Re <= 3.35e5)
    condition6 = logical_and(3.35e5 < Re, Re <= 5.0e5)
    condition7 = logical_and(5.0e5 < Re, Re <= 8.0e6)
    condition8 = Re > 8.0e6

    CD = zeros_like(Re)
    CD[condition1] = 0.0
    CD[condition2] = 24/Re[condition2]
    p = array([4.22, -14.05, 34.87, 0.658])
    CD[condition3] = polyval(p, 1.0/Re[condition3])
    p = array([-30.41, 43.72, -17.08, 2.41])
    CD[condition4] = polyval(p, 1.0/log10(Re[condition4]))
    p = array([-0.1584, 2.031, -8.472, 11.932])
    CD[condition5] = polyval(p, log10(Re[condition5]))
    CD[condition6] = 91.08*(log10(Re[condition6]/4.5e5))**4 + 0.0764
    p = array([-0.06338, 1.1905, -7.332, 14.93])
    CD[condition7] = polyval(p, log10(Re[condition7]))
    CD[condition8] = 0.2
    return CD
solve e.g. the problems outlined in (2.5.4). In general, a file containing Python-
code may be executed either as a main program (script), typically with python
filename.py or imported in another script/module with import filename.
A module file should not execute a main program, but rather just define
functions, import other modules, and define global variables. Inside modules,
the standard practice is to only have functions and not any statements outside
functions. The reason is that all statements in the module file are executed from
top to bottom during import in another module/script [16], and it is thus desirable
that an import produces no output, to avoid confusion. However, in many situations it is also
desirable to allow for tests or demonstration of usage inside the module file, and
for such situations the need for a main program arises. To meet these demands
Python provides a convenient construction that lets a file act both as a module with
function definitions only (i.e. no main program) and as an ordinary program we
can run, with functions and a main program. The latter is possible by letting
the main program follow an if test of the form:
if __name__ == '__main__':
    <main program statements>
The __name__ variable is automatically defined in any module and equals the
module name if the module is imported in another program/script, but when the
module file is executed as a program, __name__ equals the string ’__main__’.
Consequently, the if test above will only be true whenever the module file
is executed as a program and allow for the execution of the <main program
statements>. The <main program statements> is normally referred to as the
test block of a module.
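A minimal sketch of this construction (the file name mymodule.py is just an assumption for illustration, not a file from the compendium):

# mymodule.py -- acts both as an importable module and as a runnable script
def double(x):
    """Return twice the argument."""
    return 2*x

if __name__ == '__main__':
    # test block: only executed when the file is run as a program,
    # not when it is imported with 'import mymodule'
    print(double(21))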
The module name is the file name without the suffix .py [16], i.e. the module
contained in the module file filename.py has the module name filename. Note
that a module can contain executable statements as well as function definitions.
These statements are intended to initialize the module and are executed only
the first time the module name is encountered in an import statement. They
are also run if the file is executed as a script.
Below we have listed the content of the file DragCoefficientGeneric.py
to illustrate a specific implementation of the module DragCoefficientGeneric
and some other useful programming features in Python. The functions in the
module are the various implementations of the drag coefficient functions from
the previous section.
Python lists and dictionaries

• Lists hold a sequence of values and are initialized with empty brackets, e.g.
fncnames = []. The values of the list are accessed with an index, starting
from zero: the first value of fncnames is fncnames[0], the second
value is fncnames[1], and so on. You can remove values
from the list, and add new values to the end with append. For example,
fncnames.append(name) will append name as the last value of the list
fncnames; in case the list was empty prior to the append operation, name will
be the only element in the list. A small usage sketch is given below.
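A small sketch of how a list and a dictionary are used together later in this chapter (the timing numbers are made up purely for illustration):

fncnames = []                       # empty list
fncnames.append('cd_sphere')        # append values to the end
fncnames.append('cd_sphere_vector')

exec_times = {}                     # empty dictionary: key -> value
exec_times['cd_sphere'] = 0.042           # hypothetical timing
exec_times['cd_sphere_vector'] = 0.003    # hypothetical timing

# sort the keys on the corresponding values, fastest first
for name in sorted(exec_times, key=exec_times.get):
    print(name)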
which allows for convenient looping over all of the functions with the following
construction:
for func in funcs:
Afterwards the results may be printed with name and execution time, ordered
by the latter, with the most efficient function at the top:
for name_key in exec_keys_sorted:
print name_key, ’\t execution time = ’, ’%6.6f’ % exec_times[name_key]
Clearly, the function with the boolean variables was fastest, the straightforward
vectorized version cd_sphere_py_vector was slowest, and the built-in function
vectorize was nearly as inefficient.
The complete module DragCoefficientGeneric is listed below.
# src-ch1/DragCoefficientGeneric.py
from numpy import linspace,array,append,logspace,zeros_like,where,vectorize,\
    logical_and,polyval,log10
import numpy as np
from matplotlib.pyplot import loglog,xlabel,ylabel,grid,savefig,show,rc,hold,\
legend, setp
from numpy.core.multiarray import scalar
# single-valued function
def cd_sphere(Re):
"Computes the drag coefficient of a sphere as a function of the Reynolds number Re."
# Curve fitted after fig . A -56 in Evett and Liu: "Fluid Mechanics and Hydraulics"
if Re <= 0.0:
CD = 0.0
elif Re > 8.0e6:
CD = 0.2
elif Re > 0.0 and Re <= 0.5:
CD = 24.0/Re
elif Re > 0.5 and Re <= 100.0:
p = array([4.22,-14.05,34.87,0.658])
CD = polyval(p,1.0/Re)
elif Re > 100.0 and Re <= 1.0e4:
p = array([-30.41,43.72,-17.08,2.41])
CD = polyval(p,1.0/log10(Re))
elif Re > 1.0e4 and Re <= 3.35e5:
p = array([-0.1584,2.031,-8.472,11.932])
CD = polyval(p,log10(Re))
elif Re > 3.35e5 and Re <= 5.0e5:
x1 = log10(Re/4.5e5)
CD = 91.08*x1**4 + 0.0764
else:
p = array([-0.06338,1.1905,-7.332,14.93])
CD = polyval(p,log10(Re))
return CD
CD = where((Re > 3.35e5) & (Re <= 5.0e5), 91.08*(log10(Re/4.5e5))**4 + 0.0764, CD) #condition 6
# vectorized boolean
def cd_sphere_vector_bool(Re):
    "Computes the drag coefficient of a sphere as a function of the Reynolds number Re."
    # Curve fitted after fig. A-56 in Evett and Liu: "Fluid Mechanics and Hydraulics"
    condition1 = Re < 0
    condition2 = logical_and(0 < Re, Re <= 0.5)
    condition3 = logical_and(0.5 < Re, Re <= 100.0)
    condition4 = logical_and(100.0 < Re, Re <= 1.0e4)
    condition5 = logical_and(1.0e4 < Re, Re <= 3.35e5)
    condition6 = logical_and(3.35e5 < Re, Re <= 5.0e5)
    condition7 = logical_and(5.0e5 < Re, Re <= 8.0e6)
    condition8 = Re > 8.0e6

    CD = zeros_like(Re)
    CD[condition1] = 0.0
    CD[condition2] = 24/Re[condition2]
    p = array([4.22,-14.05,34.87,0.658])
    CD[condition3] = polyval(p,1.0/Re[condition3])
    p = array([-30.41,43.72,-17.08,2.41])
    CD[condition4] = polyval(p,1.0/log10(Re[condition4]))
    p = array([-0.1584,2.031,-8.472,11.932])
    CD[condition5] = polyval(p,log10(Re[condition5]))
    CD[condition6] = 91.08*(log10(Re[condition6]/4.5e5))**4 + 0.0764
    p = array([-0.06338,1.1905,-7.332,14.93])
    CD[condition7] = polyval(p,log10(Re[condition7]))
    CD[condition8] = 0.2
    return CD
if __name__ == ’__main__’:
#Check whether this file is executed (name==main) or imported as a module
import time
from numpy import mean
ReNrs = logspace(-2,7,num=500)
# make a vectorized version of the function automatically
cd_sphere_auto_vector = vectorize(cd_sphere)
fncnames.append(name)
# benchmark
t0 = time.clock()
CD[name] = func(ReNrs)
exec_times[name] = time.clock() - t0
# sort the dictionary exec_times on values and return a list of the corresponding keys
exec_keys_sorted = sorted(exec_times, key=exec_times.get)
z = np.zeros((np.size(time),np.size(z0)))
z[0,:] = z0
for i in range(len(time)-1):
dt = time[i+1]-time[i]
z[i+1,:]=z[i,:] + np.asarray(func(z[i,:],time[i]))*dt
return z
def v_taylor(t):
# z = np.zeros_like(t)
v = np.zeros_like(t)
alpha = 3.0*rho_f/(4.0*rho_s*d)*CD
v=g*t*(1-alpha*g*t**2)
return v
z0=np.zeros(2)
z0[0] = 2.0
for i in range(Ndts+1):
time = np.linspace(0, T, N+1)
ze = euler(f, z0, time) # compute response with constant CD using Euler’s method
v_a = k1*np.tanh(k2*time) # compute response with constant CD using analytical solution
abs_error=np.abs(ze[:,1] - v_a)
log_error = np.log2(abs_error[1:])
max_log_error = np.max(log_error)
plot(time[1:], log_error)
legends.append(’Euler scheme: N ’ + str(N) + ’ timesteps’ )
N*=2
if i > 0:
error_diff.append(previous_max_log_err-max_log_error)
previous_max_log_err = max_log_error
# fix plot
legend(legends, loc=’best’, frameon=False)
xlabel(’Time [s]’)
#ylabel(’Velocity [m/s]’)
ylabel(’log2-error’)
#savefig(’example_euler_timestep_study.png’, transparent=True)
show()
The plot resulting from the code above is shown in Figure (2.11). The
difference or distance between the curves seems to be rather constant after an
initial transient. As we have plotted the logarithm of the absolute value of the
error ε_i, the difference d_{i+1} between two curves is d_{i+1} = log2 ε_i − log2 ε_{i+1} = log2(ε_i/ε_{i+1}).
A rough visual inspection of Figure (2.11) yields d_{i+1} ≈ 1.0, from
which we may deduce the apparent order of the scheme:

n = log2(ε_i/ε_{i+1}) ≈ 1  ⇒  ε_{i+1} ≈ 0.488483620·ε_i ≈ ε_i/2   (2.73)
Figure 2.11: Plots for the logarithmic errors for a falling sphere with constant
drag. The timestep ∆t is reduced by a factor two from one curve to the one
immediately below.
y''(x_n) = f'(x_n, y(x_n)) ≈ (f(x_{n+1}, y(x_{n+1})) − f(x_n, y_n))/h   (2.74)

The Taylor series expansion (2.25) gives

y(x_n + h) = y(x_n) + h·y'[x_n, y(x_n)] + (h²/2)·y''[x_n, y(x_n)] + O(h³)

which, inserting (2.74), gives

y_{n+1} = y_n + (h/2)·[f(x_n, y_n) + f(x_{n+1}, y(x_{n+1}))]   (2.75)
This formula is called the trapezoidal formula, since it reduces to computing
an integral with the trapezoidal rule if f (x, y) is only a function of x. Since yn+1
appears on both sides of the equation, this is an implicit formula which means
that we need to solve a system of non-linear algebraic equations if the function
f (x, y) is non-linear. One way of making the scheme explicit is to use the Euler
scheme (2.53) to calculate y(xn+1 ) on the right side of (2.75). The resulting
scheme is often denoted Heun’s method.
The scheme for Heun's method becomes

y^p_{n+1} = y_n + h·f(x_n, y_n)   (2.76)
y_{n+1} = y_n + (h/2)·[f(x_n, y_n) + f(x_{n+1}, y^p_{n+1})]   (2.77)
Index p stands for "predicted". (2.76) is then the predictor and (2.77) is the
corrector. This is a second order method. For more details, see [4]. Figure 2.12
is a graphical illustration of the method.
[Figure 2.12: graphical illustration of Heun's method: the predictor y^p_{n+1} is obtained from the slope f(x_n, y_n), and the corrected value y_{n+1} from the average of the slopes f(x_n, y_n) and f(x_{n+1}, y^p_{n+1}).]
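A compact scalar version of the predictor-corrector steps (2.76)-(2.77) might look like the sketch below; this is only an illustration, not the compendium's implementation (that follows in NewtonHeun.py and in the function heun()):

def heun_step(f, x_n, y_n, h):
    """One predictor-corrector step of Heun's method for y' = f(x, y)."""
    y_p = y_n + h*f(x_n, y_n)                            # predictor, Eq. (2.76)
    return y_n + 0.5*h*(f(x_n, y_n) + f(x_n + h, y_p))   # corrector, Eq. (2.77)

# example: y' = y, y(0) = 1, one step of size h = 0.1 (exact value exp(0.1) = 1.10517...)
print(heun_step(lambda x, y: y, 0.0, 1.0, 0.1))          # 1.105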
y(x) = 3·√(2πe)·exp[x(1 + x/2)]·[erf((√2/2)·(1 + x)) − erf(√2/2)] + 4·(1 − exp[x(1 + x/2)]) − x   (2.81)

Here we have f(x, y) = 1 − 3x + y + x² + xy = 1 + x(x − 3) + (1 + x)y.
The following program NewtonHeun.py solves this problem using Heun’s
method, and the resulting figure is shown in Figure 2.13.
# src-ch1/NewtonHeun.py
# Program Newton
# Computes the solution of Newton’s 1st order equation (1671):
# dy/dx = 1-3*x + y + x^2 +x*y , y(0) = 0
# using Heun’s method.
import numpy as np
xend = 2
dx = 0.1
steps = np.int(np.round(xend/dx, 0)) + 1
y, x = np.zeros((steps,1), float), np.zeros((steps,1), float)
y[0], x[0] = 0.0, 0.0
for n in range(0,steps-1):
x[n+1] = (n+1)*dx
xn = x[n]
fn = 1 + xn*(xn-3) + y[n]*(1+xn)
yp = y[n] + dx*fn
xnp1 = x[n+1]
fnp1 = 1 + xnp1*(xnp1-3) + yp*(1+xnp1)
y[n+1] = y[n] + 0.5*dx*(fn+fnp1)
# Analytical solution
from scipy.special import erf
a = np.sqrt(2)/2
t1 = np.exp(x*(1+ x/2))
t2 = erf((1+x)*a)-erf(a)
ya = 3*np.sqrt(2*np.pi*np.exp(1))*t1*t2 + 4*(1-t1)-x
# plotting
import matplotlib.pylab as plt
Figure 2.13: Solution of Newton's equation (1671) using Heun's method, compared with the analytical solution.
Corrector:

z_{n+1} = z_n + 0.5·∆t·(v_n + v^p_{n+1})   (2.83)
v_{n+1} = v_n + 0.5·∆t·(2g − α·[v_n² + (v^p_{n+1})²])

with initial values z_0 = z(0) = 0, v_0 = v(0) = 0. Note that we don't use the
predictor z^p_{n+1} since it doesn't appear on the right hand side of the equation
system.
One possible way of implementing this scheme is given in the following
function named heun(), in the program ODEschemes.py:

def heun(func, z0, time):
    """The Heun scheme for solution of systems of ODEs.
    z0 is a vector for the initial conditions,
    the right hand side of the system is represented by func which returns
    a vector with the same size as z0."""

    z = np.zeros((np.size(time), np.size(z0)))
    z[0,:] = z0

    for i, t in enumerate(time[0:-1]):
        dt = time[i+1] - time[i]
        zp = z[i,:] + np.asarray(func(z[i,:],t))*dt # Predictor step
        z[i+1,:] = z[i,:] + (np.asarray(func(z[i,:],t)) + np.asarray(func(zp,t+dt)))*dt/2.0 # Corrector step
    return z
Using the same time steps as in (2.5.5), we get the response plotted in
Figure 2.14.
Figure 2.14: Velocity of falling sphere using Euler’s and Heun’s methods.
alpha = 3.0*rho_f/(4.0*rho_s*d)*CD
zout[:] = [z[1], g - alpha*z[1]**2]
return zout
T = 10 # end of simulation
N = 20 # no of time steps
time = np.linspace(0, T, N+1)
z0=np.zeros(2)
z0[0] = 2.0
ze = euler(f, z0, time) # compute response with constant CD using Euler’s method
ze2 = euler(f2, z0, time) # compute response with varying CD using Euler’s method
zh = heun(f, z0, time) # compute response with constant CD using Heun’s method
zh2 = heun(f2, z0, time) # compute response with varying CD using Heun’s method
k1 = np.sqrt(g*4*rho_s*d/(3*rho_f*CD))
k2 = np.sqrt(3*rho_f*g*CD/(4*rho_s*d))
v_a = k1*np.tanh(k2*time) # compute response with constant CD using analytical solution
# plotting
legends=[]
line_type=[’-’,’:’,’.’,’-.’,’--’]
xlabel(’Time [s]’)
ylabel(’Velocity [m/s]’)
#savefig(’example_sphere_falling_euler_heun.png’, transparent=True)
show()
f(x_n + p1·h, y_n + p2·K1·h) = f + p1·h·f_x + p2·K1·h·f_y + h.o.t.
                             = f + p1·h·f_x + p2·h·f·f_y + h.o.t.   (2.87)

where we for convenience have adopted the common notation for partial derivatives

f_x ≡ ∂f/∂x   and   f_y ≡ ∂f/∂y   (2.88)

Inserting (2.87) into (2.84) then gives

y_{n+1} = y_n + a1·h·f(x_n, y_n) + a2·h·(f + p1·h·f_x + p2·h·f·f_y)
        = y_n + (a1 + a2)·h·f + (a2·p1·f_x + a2·p2·f·f_y)·h²   (2.89)
Differentiation of (2.1) gives

y'' = d²y/dx² = df/dx = ∂f/∂x + (∂f/∂y)·(dy/dx) = f_x + f·f_y   (2.90)

which may be used in a second order Taylor expansion of the solution of (2.1):

y(x_n + h) = y_n + h·y' + (h²/2)·y'' + O(h³)   (2.91)

Substitution of (2.90) and (2.1) into (2.91) yields:

y(x_n + h) = y_n + h·f + (h²/2)·(f_x + f·f_y) + O(h³)   (2.92)
Now the idea of the generic second order Runge-Kutta method is to select
a1, a2, p1, and p2 in such a way that (2.89) approximates (2.92), which will be
true if:

a1 + a2 = 1
a2·p1 = 1/2   (2.93)
a2·p2 = 1/2

Note that since we have 4 unknowns (a1, a2, p1, and p2) and only 3
equations in (2.93), several methods of the kind proposed in (2.84) are possible. Heun's
method (2.76)-(2.77) is retrieved by selecting a2 = 1/2, from which we get a1 = 1/2 and p1 = p2 = 1
from (2.93).
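Another member of the family, obtained by instead selecting a2 = 1 (so that a1 = 0 and p1 = p2 = 1/2 from (2.93)), is the explicit midpoint method. A minimal sketch in the same style as the compendium's solvers, but not part of the ODEschemes module:

import numpy as np

def midpoint(func, z0, time):
    """The explicit midpoint (RK2) scheme: a1 = 0, a2 = 1, p1 = p2 = 1/2."""
    z = np.zeros((np.size(time), np.size(z0)))
    z[0, :] = z0
    for i in range(len(time) - 1):
        dt = time[i + 1] - time[i]
        k1 = np.asarray(func(z[i, :], time[i]))                       # slope at t_n
        k2 = np.asarray(func(z[i, :] + 0.5*dt*k1, time[i] + 0.5*dt))  # slope at the midpoint
        z[i + 1, :] = z[i, :] + dt*k2
    return z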
4. The four gradients are averaged with weights 1/6, 2/6, 2/6 and 1/6. Using
the averaged gradient we calculate the final value y_{n+1}.
Each of the steps above is essentially an Euler step.
[Figure: geometric illustration of the fourth order Runge-Kutta method: four slopes are evaluated, at x_n, twice at x_{n+1/2}, and at x_{n+1}, and combined to advance the solution from y_n to y_{n+1}.]
Applied to the system (2.62), the scheme reads

(y1)_{n+1} = (y1)_n + (h/6)·(k1 + 2k2 + 2k3 + k4)
(y2)_{n+1} = (y2)_n + (h/6)·(l1 + 2l2 + 2l3 + l4)   (2.95)
(y3)_{n+1} = (y3)_n + (h/6)·(m1 + 2m2 + 2m3 + m4)
where all quantities on the right hand sides are evaluated at step n:

k1 = y2,   l1 = y3,   m1 = −y1·y3
k2 = y2 + (h/2)·l1,   l2 = y3 + (h/2)·m1,   m2 = −(y1 + (h/2)·k1)·(y3 + (h/2)·m1)
k3 = y2 + (h/2)·l2,   l3 = y3 + (h/2)·m2,   m3 = −(y1 + (h/2)·k2)·(y3 + (h/2)·m2)
k4 = y2 + h·l3,   l4 = y3 + h·m3,   m4 = −(y1 + h·k3)·(y3 + h·m3)
for i, t in enumerate(time[0:-1]):
dt = time[i+1] - time[i]
dt2 = dt/2.0
k1 = np.asarray(func(z[i,:], t)) # predictor step 1
k2 = np.asarray(func(z[i,:] + k1*dt2, t + dt2)) # predictor step 2
k3 = np.asarray(func(z[i,:] + k2*dt2, t + dt2)) # predictor step 3
k4 = np.asarray(func(z[i,:] + k3*dt, t + dt)) # predictor step 4
z[i+1,:] = z[i,:] + dt/6.0*(k1 + 2.0*k2 + 2.0*k3 + k4) # Corrector step
Figure 2.16 shows the results using Euler, Heun and RK4. As seen, RK4
and Heun are more accurate than Euler. The complete program Falling-
SphereEulerHeunRK4.py is listed below. The functions euler, heun and
rk4 are imported from the program ODEschemes.py.
# src-ch1/FallingSphereEulerHeunRK4.py;ODEschemes.py @ git@lrhgit/tkt4140/src/src-ch1/ODEschemes.py;
Figure 2.16: Velocity of falling sphere using Euler, Heun and RK4.
z0=np.zeros(2)
z0[0] = 2.0
ze = euler(f, z0, time) # compute response with constant CD using Euler’s method
ze2 = euler(f2, z0, time) # compute response with varying CD using Euler’s method
zh = heun(f, z0, time) # compute response with constant CD using Heun’s method
zh2 = heun(f2, z0, time) # compute response with varying CD using Heun’s method
zrk4 = rk4(f, z0, time) # compute response with constant CD using RK4
zrk4_2 = rk4(f2, z0, time) # compute response with varying CD using RK4
k1 = np.sqrt(g*4*rho_s*d/(3*rho_f*CD))
k2 = np.sqrt(3*rho_f*g*CD/(4*rho_s*d))
v_a = k1*np.tanh(k2*time) # compute response with constant CD using analytical solution
# plotting
legends=[]
line_type=[’-’,’:’,’.’,’-.’,’:’,’.’,’-.’]
xlabel(’Time [s]’)
ylabel(’Velocity [m/s]’)
#savefig(’example_sphere_falling_euler_heun_rk4.png’, transparent=True)
show()
[Figure: a ball launched with initial velocity v0 at elevation angle ϕ in a fluid with velocity vf; the relative velocity vr gives rise to the drag force Fd and the lift force Fl, in addition to gravity g acting on the mass m.]

The initial conditions are

y1 = y2 = 0
y3 = v0·cos(α)
y4 = v0·sin(α)
Let’s first look at the case of a smooth ball. We use the following data (which
are the data for a golf ball):
Diameter d = 41 mm, mass m = 46 g, which gives ρ_k = 6m/(πd³) = 1275 kg/m³.
We use the initial velocity v0 = 50 m/s and solve (2.104) using the Runge-Kutta
4 scheme. In this example we have used the Python package Odespy (ODE
Software in Python), which offers a large collection of functions for solving
ODEs. The RK4 scheme available in Odespy is used herein.
The right hand side in (2.104) is implemented as the following function:
[Figure: trajectories of the smooth ball for elevation angles 15, 20, 25 and 30 degrees; y [m] versus x [m].]
Now let's look at the same case for a golf ball. The dimension and weight
are the same as for the sphere. Now we need to account for the lift force from
the spin of the ball. In addition, the drag data for a golf ball are completely
different from those of the smooth sphere. We use the data from Bearman and Harvey
[1], who measured the drag and lift of a golf ball for different spin velocities in
a wind tunnel. We choose as an example 3500 rpm, and an initial velocity of
v0 = 50 m/s.
The right hand side in (2.104) is now implemented as the following function:
def f3(z, t):
"""4x4 system for golf ball with drag and lift in two directions."""
zout = np.zeros_like(z)
C = 3.0*rho_f/(4.0*rho_s*d)
vrx = z[2] - vfx
vry = z[3] - vfy
vr = np.sqrt(vrx**2 + vry**2)
Re = vr*d/nu
CD, CL = cdcl(vr, nrpm)
zout[:] = [z[2], z[3], -C*vr*(CD*vrx + CL*vry), C*vr*(CL*vrx - CD*vry) - g]
return zout
The function cdcl() gives the drag and lift data
for a given velocity and spin.
The results are shown in Figure 2.19. The motion of a golf ball with drag
but without lift is also included. We see that the golf ball goes much farther
than the smooth sphere, due to less drag and the lift.
The complete program ParticleMotion2D.py is listed below.
[Figure 2.19: trajectories for the smooth ball, the golf ball without lift, and the golf ball with lift, for elevation angles 15, 20, 25 and 30 degrees; y [m] versus x [m].]
# src-ch1/ParticleMotion2D.py;DragCoefficientGeneric.py @ git@lrhgit/tkt4140/src/src-ch1/DragCoeffici
# smooth ball
def f(z, t):
"""4x4 system for smooth sphere with drag in two directions."""
zout = np.zeros_like(z)
C = 3.0*rho_f/(4.0*rho_s*d)
vrx = z[2] - vfx
vry = z[3] - vfy
vr = np.sqrt(vrx**2 + vry**2)
Re = vr*d/nu
CD = cd_sphere(Re) # using the already defined function
zout[:] = [z[2], z[3], -C*vr*(CD*vrx), C*vr*(-CD*vry) - g]
return zout
T = 7 # end of simulation
N = 60 # no of time steps
time = np.linspace(0, T, N+1)
N2 = 4
alfa = np.linspace(30, 15, N2) # Angle of elevation [degrees]
angle = alfa*np.pi/180.0 # convert to radians
legends=[]
line_color=[’k’,’m’,’b’,’r’]
figure(figsize=(20, 8))
hold(’on’)
LNWDT=4; FNT=18
rcParams[’lines.linewidth’] = LNWDT; rcParams[’font.size’] = FNT
z, t = solver.solve(time)
plot(z[:,0], z[:,1], ’.’, color=line_color[i])
legends.append(’angle=’+str(alfa[i])+’, golf ball (with lift)’)
u'(t) = a·u + b   (2.105)
u(t0) = u0
The right hand side defining the differential equation has been implemented in
function f3 and the corresponding analytical solution is computed by u_nonlin_analytical:
The basic idea for the convergence test in the function convergence_test is
that we start out by solving an ODE with a known analytical solution numerically on
a relatively coarse grid, allowing for direct computation of the error. We then
reduce the timestep by a factor of two (i.e. double the grid size), repeatedly, and
compute the error for each grid and compare it with the error on the previous grid.
The Euler scheme (2.53) is O(h), whereas the Heun scheme (2.76) is O(h²),
and Runge-Kutta (2.94) is O(h⁴), where h denotes a generic step size which
for the current example is the timestep ∆t. The order of a particular scheme is
given by the exponent n in the error term O(h^n). Consequently, the Euler scheme is a
first order scheme, Heun is second order, whereas Runge-Kutta is fourth order.
Let ε_{i+1} and ε_i denote the errors on two consecutive grids with
corresponding timesteps ∆t_{i+1} = ∆t_i/2. The errors ε_{i+1} and ε_i for a scheme of
order n are then related by:

ε_{i+1} = (1/2^n)·ε_i   (2.107)
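In practice the observed order may be estimated directly from two consecutive error levels by inverting (2.107); a small sketch, not part of convergence_test:

from numpy import log2

def observed_order(error_coarse, error_fine):
    """Estimate the scheme order n from errors on two grids where dt is halved."""
    return log2(error_coarse/error_fine)   # from (2.107): e_fine = e_coarse/2**n

print(observed_order(1.0e-2, 2.5e-3))      # 2.0, i.e. a second order scheme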
and the resulting plot is shown in Figure 2.20, and we see that our numerical
approximations for the orders of our schemes approach the theoretical values as
the number of timesteps increase (or as the timestep is reduced by a factor two
consecutively).
The complete function convergence_test is a part of the module ODEschemes
and is isolated below:
def convergence_test():
""" Test convergence rate of the methods """
from numpy import linspace, size, abs, log10, mean, log2
figure()
tol = 1E-15
T = 8.0 # end of simulation
Ndts = 5 # Number of times to refine timestep in convergence test
z0 = 2
Figure 2.20: The convergence rate for the various ODE-solvers as a function of
the number of timesteps.
schemes_order={}
colors = [’r’, ’g’, ’b’, ’m’, ’k’, ’y’, ’c’]
linestyles = [’-’, ’--’, ’-.’, ’:’, ’v--’, ’*-.’]
iclr = 0
for scheme in schemes:
N = 30 # no of time steps
time = linspace(0, T, N+1)
order_approx = []
for i in range(Ndts+1):
z = scheme(f3, z0, time)
abs_error = abs(u_nonlin_analytical(z0, time)-z[:,0])
log_error = log2(abs_error[1:]) # Drop 1st elt to avoid log2-problems (1st elt is zer
max_log_err = max(log_error)
plot(time[1:], log_error, linestyles[i]+colors[iclr], markevery=N/5)
legends.append(scheme.func_name +’: N = ’ + str(N))
hold(’on’)
schemes_order[scheme.func_name] = order_approx
iclr += 1
legend(legends, loc=’best’)
xlabel(’Time’)
ylabel(’log(error)’)
grid()
N = N/2**Ndts
N_list = [N*2**i for i in range(1, Ndts+1)]
N_list = np.asarray(N_list)
figure()
for key in schemes_order:
plot(N_list, (np.asarray(schemes_order[key])))
def manufactured_solution():
    """ Test convergence rate of the methods, by using the Method of Manufactured Solutions.
    The coefficient function f is chosen to be the normal distribution
    f = (1/(sigma*sqrt(2*pi)))*exp(-((t-mu)**2)/(2*sigma**2)).
    The ODE to be solved is then chosen to be: f''' + f'*f'' + f = RHS,
    leading to f''' = RHS - f'*f'' - f
    """
from numpy import linspace, size, abs, log10, mean, log2
from sympy import exp, symbols, diff, lambdify
from math import sqrt, pi
f = (1/(sigma*sqrt(2*pi)))*exp(-((t-mu)**2)/(2*sigma**2))
dfdt = diff(f, t)
d2fdt = diff(dfdt, t)
d3fdt = diff(d2fdt, t)
RHS = d3fdt + dfdt*d2fdt + f
f = lambdify([t], f)
dfdt = lambdify([t], dfdt)
d2fdt = lambdify([t], d2fdt)
RHS = lambdify([t], RHS)
def func(y,t):
yout = np.zeros_like(y)
yout[:] = [y[1], y[2], RHS(t) -y[0]- y[1]*y[2]]
return yout
z0 = np.array([f(t0), dfdt(t0), d2fdt(t0)])
figure()
tol = 1E-15
Ndts = 5 # Number of times to refine timestep in convergence test
order_approx = []
for i in range(Ndts+1):
z = scheme(func, z0, time)
abs_error = abs(fanalytic-z[:,0])
log_error = log2(abs_error[1:]) # Drop 1st elt to avoid log2-problems (1st elt is zer
max_log_err = max(log_error)
plot(time[1:], log_error, linestyles[i]+colors[iclr], markevery=N/5)
legends.append(scheme.func_name +’: N = ’ + str(N))
hold(’on’)
N *=2
time = linspace(t0, tend, N+1)
fanalytic = np.zeros_like(time)
k = 0
for tau in time:
fanalytic[k] = f(tau)
k = k + 1
schemes_order[scheme.func_name] = order_approx
iclr += 1
legend(legends, loc=’best’)
xlabel(’Time’)
ylabel(’log(error)’)
grid()
N = N/2**Ndts
N_list = [N*2**i for i in range(1, Ndts+1)]
N_list = np.asarray(N_list)
figure()
for key in schemes_order:
plot(N_list, (np.asarray(schemes_order[key])))
legends.append(’theoretical’)
legend(legends, loc=’best’, frameon=False)
title(’Method of Manufactured Solution’)
xlabel(’Number of unknowns’)
ylabel(’Scheme order approximation’)
axis([0, max(N_list), 0, 5])
# savefig(’MMSODEschemes.png’, transparent=True)
# test using MMS and solving a set of two nonlinear equations to find estimate of order
def manufactured_solution_Nonlinear():
    """ Test convergence rate of the methods, by using the Method of Manufactured Solutions.
    The coefficient function f is chosen to be the normal distribution
    f = (1/(sigma*sqrt(2*pi)))*exp(-((t-mu)**2)/(2*sigma**2)).
    The ODE to be solved is then chosen to be: f''' + f'*f'' + f = RHS,
    leading to f''' = RHS - f'*f'' - f
    """
from numpy import linspace, abs
from sympy import exp, symbols, diff, lambdify
from math import sqrt, pi
from numpy import log, log2
t = symbols(’t’)
sigma= 0.5 # standard deviation
mu = 0.5 # mean value
#### Perform needed differentiations based on the differential equation ####
f = (1/(sigma*sqrt(2*pi)))*exp(-((t-mu)**2)/(2*sigma**2))
dfdt = diff(f, t)
d2fdt = diff(dfdt, t)
d3fdt = diff(d2fdt, t)
RHS = d3fdt + dfdt*d2fdt + f
#### Create Python functions of f, RHS and needed differentiations of f ####
f = lambdify([t], f, np)
dfdt = lambdify([t], dfdt, np)
d2fdt = lambdify([t], d2fdt)
RHS = lambdify([t], RHS)
def func(y,t):
""" Function that returns the dfn/dt of the differential equation f + f’’*f + f’’’ = RHS
as a system of 1st order equations; f = f1
f1’ = f2
f2’ = f3
f3’ = RHS - f1 - f2*f3
Args:
y(array): solutian array [f1, f2, f3] at time t
t(float): current time
Returns:
yout(array): differantiation array [f1’, f2’, f3’] at time t
"""
yout = np.zeros_like(y)
yout[:] = [y[1], y[2], RHS(t) -y[0]- y[1]*y[2]]
return yout
if k==0:
h.append(time[1] - time[0]) # add this iteration’s dt to list h
z = scheme(func, z0, time) # Solve the ODE by calling the scheme with arguments. e.g:
fanalytic = f(time) # call analytic function f to compute analytical solutions at tim
ax[0][k].plot(time, z[:,0])
legendList.append(’$h$ = ’ + str(h[i]))
N *=2 # refine dt
Args:
error(list): list of calculated errors [error(h1), error(h2)]
h(list): list of steplengths corresponding to the list of errors
x0(list): list of starting (guessed) values for x
Returns:
x(array): iterated solution of x = [C, p]
"""
from sympy import Matrix
#### Symbolic computations: ####
C, p = symbols(’C p’)
f1 = C*h[-2]**p - error[-2]
f2 = C*h[-1]**p - error[-1]
F = [f1, f2]
x = [C, p]
def jacobiElement(i,j):
return diff(F[i], x[j])
Jacobi = Matrix(2, 2, jacobiElement) # neat way of computing the Jacobi Matrix
JacobiInv = Jacobi.inv()
#### Numerical computations: ####
JacobiInvfunc = lambdify([x], JacobiInv)
Ffunc = lambdify([x], F)
x = x0
ht = np.asarray(h)
eulerError = np.asarray(schemes_error["euler"])
heunError = np.asarray(schemes_error["heun"])
rk4Error = np.asarray(schemes_error["rk4"])
print epsilon_euler_latex
print epsilon_heun_latex
print epsilon_rk4_latex
epsilon_euler = lambdify(h, epsilon_euler, np)
epsilon_heun = lambdify(h, epsilon_heun, np)
epsilon_rk4 = lambdify(h, epsilon_rk4, np)
N = N/2**(Ntds + 2)
N_list = [N*2**i for i in range(1, Ntds + 2)]
N_list = np.asarray(N_list)
print len(N_list)
print len(eulerError)
figure()
plot(N_list, log2(eulerError), ’b’)
plot(N_list, log2(epsilon_euler(ht)), ’b--’)
plot(N_list, log2(heunError), ’g’)
plot(N_list, log2(epsilon_heun(ht)), ’g--’)
plot(N_list, log2(rk4Error), ’r’)
plot(N_list, log2(epsilon_rk4(ht)), ’r--’)
LegendList = [’${\epsilon}_{euler}$’, epsilon_euler_latex, ’${\epsilon}_{heun}$’, epsilon_heu
legend(LegendList, loc=’best’, frameon=False)
xlabel(’-log(h)’)
ylabel(’-log($\epsilon$)’)
# #savefig(’../figs/MMS_example2.png’)
The complete module ODEschemes is listed below and may easily be downloaded
into your Eclipse/LiClipse IDE:
# src-ch1/ODEschemes.py
import numpy as np
from matplotlib.pyplot import plot, show, legend, hold,rcParams,rc, figure, axhline, close,\
xticks, title, xlabel, ylabel, savefig, axis, grid, subplots, setp
z = np.zeros((np.size(time), np.size(z0)))
z[0,:] = z0
for i in range(len(time)-1):
dt = time[i+1] - time[i]
z[i+1,:]=z[i,:] + np.asarray(func(z[i,:], time[i]))*dt
return z
z = np.zeros((np.size(time), np.size(z0)))
z[0,:] = z0
zp = np.zeros_like(z0)
for i, t in enumerate(time[0:-1]):
dt = time[i+1] - time[i]
zp = z[i,:] + np.asarray(func(z[i,:],t))*dt # Predictor step
z = np.zeros((np.size(time),np.size(z0)))
z[0,:] = z0
zp = np.zeros_like(z0)
for i, t in enumerate(time[0:-1]):
dt = time[i+1] - time[i]
dt2 = dt/2.0
k1 = np.asarray(func(z[i,:], t)) # predictor step 1
k2 = np.asarray(func(z[i,:] + k1*dt2, t + dt2)) # predictor step 2
k3 = np.asarray(func(z[i,:] + k2*dt2, t + dt2)) # predictor step 3
k4 = np.asarray(func(z[i,:] + k3*dt, t + dt)) # predictor step 4
z[i+1,:] = z[i,:] + dt/6.0*(k1 + 2.0*k2 + 2.0*k3 + k4) # Corrector step
return z
if __name__ == ’__main__’:
a = 0.2
b = 3.0
u_exact = lambda t: a*t + b
def f_local(u,t):
"""A function which returns an np.array but less easy to read
than f(z,t) below. """
return np.asarray([a + (u - u_exact(t))**5])
def test_ODEschemes():
"""Use knowledge of an exact numerical solution for testing."""
from numpy import linspace, size
tol = 1E-15
T = 2.0 # end of simulation
N = 20 # no of time steps
time = linspace(0, T, N+1)
z0 = np.zeros(1)
z0[0] = u_exact(0.0)
order_approx = []
for i in range(Ndts+1):
z = scheme(f3, z0, time)
abs_error = abs(u_nonlin_analytical(z0, time)-z[:,0])
log_error = log2(abs_error[1:]) # Drop 1st elt to avoid log2-problems (1st elt is zer
max_log_err = max(log_error)
plot(time[1:], log_error, linestyles[i]+colors[iclr], markevery=N/5)
legends.append(scheme.func_name +’: N = ’ + str(N))
hold(’on’)
N *=2
time = linspace(0, T, N+1)
schemes_order[scheme.func_name] = order_approx
iclr += 1
legend(legends, loc=’best’)
xlabel(’Time’)
ylabel(’log(error)’)
grid()
N = N/2**Ndts
N_list = [N*2**i for i in range(1, Ndts+1)]
N_list = np.asarray(N_list)
figure()
for key in schemes_order:
plot(N_list, (np.asarray(schemes_order[key])))
def manufactured_solution():
    """ Test convergence rate of the methods, by using the Method of Manufactured Solutions.
    The coefficient function f is chosen to be the normal distribution
    f = (1/(sigma*sqrt(2*pi)))*exp(-((t-mu)**2)/(2*sigma**2)).
    The ODE to be solved is then chosen to be: f''' + f'*f'' + f = RHS,
    leading to f''' = RHS - f'*f'' - f
    """
from numpy import linspace, size, abs, log10, mean, log2
from sympy import exp, symbols, diff, lambdify
from math import sqrt, pi
f = lambdify([t], f)
dfdt = lambdify([t], dfdt)
d2fdt = lambdify([t], d2fdt)
RHS = lambdify([t], RHS)
def func(y,t):
yout = np.zeros_like(y)
yout[:] = [y[1], y[2], RHS(t) -y[0]- y[1]*y[2]]
return yout
figure()
tol = 1E-15
Ndts = 5 # Number of times to refine timestep in convergence test
schemes =[euler, heun, rk4]
legends=[]
schemes_order={}
colors = [’r’, ’g’, ’b’, ’m’, ’k’, ’y’, ’c’]
linestyles = [’-’, ’--’, ’-.’, ’:’, ’v--’, ’*-.’]
iclr = 0
for scheme in schemes:
N = 100 # no of time steps
time = linspace(t0, tend, N+1)
fanalytic = np.zeros_like(time)
k = 0
for tau in time:
fanalytic[k] = f(tau)
k = k + 1
order_approx = []
for i in range(Ndts+1):
z = scheme(func, z0, time)
abs_error = abs(fanalytic-z[:,0])
log_error = log2(abs_error[1:]) # Drop 1st elt to avoid log2-problems (1st elt is zer
max_log_err = max(log_error)
plot(time[1:], log_error, linestyles[i]+colors[iclr], markevery=N/5)
legends.append(scheme.func_name +’: N = ’ + str(N))
hold(’on’)
legend(legends, loc=’best’)
xlabel(’Time’)
ylabel(’log(error)’)
grid()
N = N/2**Ndts
N_list = [N*2**i for i in range(1, Ndts+1)]
N_list = np.asarray(N_list)
figure()
for key in schemes_order:
plot(N_list, (np.asarray(schemes_order[key])))
def func(y,t):
""" Function that returns dfn/dt of the differential equation f''' + f'*f'' + f = RHS
as a system of 1st order equations; f = f1
f1' = f2
f2' = f3
f3' = RHS - f1 - f2*f3
Args:
y(array): solution array [f1, f2, f3] at time t
t(float): current time
Returns:
yout(array): differentiation array [f1', f2', f3'] at time t
"""
yout = np.zeros_like(y)
yout[:] = [y[1], y[2], RHS(t) -y[0]- y[1]*y[2]]
return yout
# #savefig(’../figs/normal_distribution_refinement.png’)
def Newton_solver_sympy(error, h, x0):
""" Function that solves the nonlinear set of equations
error1 = C*h1^p --> f1 = C*h1^p - error1 = 0
error2 = C*h2^p --> f2 = C*h2^p - error2 = 0
where C is a constant, h is the step length and p is the order,
using a Newton-Raphson solver. In this case C and p are
the unknowns, whereas h and error are known. The Newton-Raphson
method is an iterative solver of the form:
xnew = xold - (J^-1)*F, where J is the Jacobian matrix and F is the
residual function.
x = [C, p]^T
J = [[df1/dx1 df1/dx2],
[df2/dx1 df2/dx2]]
F = [f1, f2]
This is very neatly done with use of the sympy module
Args:
Returns:
x(array): iterated solution of x = [C, p]
"""
from sympy import Matrix
#### Symbolic computiations: ####
C, p = symbols(’C p’)
f1 = C*h[-2]**p - error[-2]
f2 = C*h[-1]**p - error[-1]
F = [f1, f2]
x = [C, p]
def jacobiElement(i,j):
return diff(F[i], x[j])
ht = np.asarray(h)
eulerError = np.asarray(schemes_error["euler"])
heunError = np.asarray(schemes_error["heun"])
rk4Error = np.asarray(schemes_error["rk4"])
print epsilon_euler_latex
print epsilon_heun_latex
print epsilon_rk4_latex
# #savefig(’../figs/MMS_example2.png’)
def plot_ODEschemes_solutions():
"""Plot the solutions for the test schemes in schemes"""
from numpy import linspace
figure()
T = 1.5 # end of simulation
N = 50 # no of time steps
time = linspace(0, T, N+1)
z0 = 2.0
manufactured_solution_Nonlinear()
#test_ODEschemes()
#convergence_test()
#plot_ODEschemes_solutions()
#manufactured_solution()
show()
Figure 2.21: Solution of ODE for exponential growth with positive and negative
growth factor λ.
We illustrate the stability concept for the numerical solution of ODEs by two
examples below, namely, the Euler scheme and the Heun scheme. In the latter
example we will also allude to how the analysis may be applied for higher order
RK-methods.
which yields:

y_n = (1 + λh)^n · y_0,   n = 1, 2, 3, . . .    (2.113)

To investigate stability we first introduce the analytical amplification factor G_a as

G_a = y(x_{n+1}) / y(x_n)    (2.114)

Note that y(x_n) represents the analytical solution of (2.109) at x_n, whereas y_n denotes the numerical approximation. For the model problem at hand we see that G_a reduces to:

G_a = y(x_{n+1}) / y(x_n) = exp[λ(x_{n+1} − x_n)] = e^{λh} = 1 + λh + (λh)²/2 + . . .    (2.115)
Exponential growth λ > 0.
Consider first the case of a positive growth factor λ > 0, corresponding to exponential growth, as given by (2.111). We observe that our scheme (2.113) will also exhibit exponential growth, and there are thus no concerns in terms of stability. Naturally, the choice of h will affect the accuracy and numerical error of the solution.
Exponential decay λ < 0.
In case of exponential decay with a negative λ < 0, we adopt the following
convention for convenience:
λ = −α, α>0 (2.116)
and (2.113) may be recast to:
yn = (1 − αh)n y0 (2.117)
If yn is to decrease as n increases we must have
|1 − αh| < 1 ⇒ −1 < 1 − αh < 1 (2.118)
which yields the following criterion for selection of h:
0 < αh < 2, α > 0 (2.119)
The criterion may also be formulated by introducing the numerical amplification factor G as:

G = y_{n+1} / y_n    (2.120)

which for the current example (2.112) reduces to:

G = y_{n+1} / y_n = 1 + λh = 1 − αh    (2.121)
For the current example G > 1 for λ > 0 and G < 1 for λ < 0. Compare with
the expression for the analytical amplification factor Ga (2.115).
For example when α = 100 we must choose h < 0.02 to produce an A-stable
numerical solution.
By the introduction of the G in (2.121) we may rewrite (2.113) as:
yn = Gn y0 , n = 0, 1, 2, . . . (2.123)
Consequently, the Euler scheme will be A-stable and free of spurious oscilla-
tions when
For example, the numerical solution with α = 10 will be stable in the interval 0 < h < 0.2, but exhibit oscillations in the interval 0.1 < h < 0.2. Note, however, that the latter restriction on avoiding spurious oscillations will not be of major concern in practice, as the demand for decent accuracy will dictate a far smaller h.
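The behaviour described above is easy to reproduce numerically. The small sketch below (an illustration added here, not one of the compendium listings) applies the Euler scheme to y' = −αy with α = 10 and three step sizes chosen so that G = 1 − αh is positive, negative with |G| < 1, and smaller than −1, corresponding to monotone decay, decaying oscillations and instability, respectively.

import numpy as np

def euler_decay(alpha, h, T=1.0, y0=1.0):
    """March y' = -alpha*y with Euler's method and return the solution array."""
    n = int(round(T/h))
    y = np.zeros(n + 1)
    y[0] = y0
    for i in range(n):
        y[i+1] = (1.0 - alpha*h)*y[i]      # y_{n+1} = G*y_n with G = 1 - alpha*h
    return y

alpha = 10.0
for h in (0.05, 0.15, 0.25):               # G = 0.5, -0.5 and -1.5
    y = euler_decay(alpha, h)
    print(h, 1.0 - alpha*h, y[-1])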
y_{n+1} = y_n + (h/2)·(f_n + f^p_{n+1}),   giving   G = y_{n+1}/y_n = 1 − αh + (αh)²/2    (2.128)
We focus on decaying solutions with λ < 0, and we have previously shown from (2.119) that 0 < αh < 2, α > 0, or:
G = 1 + λh + (λh)²/2! + (λh)³/3!    (2.134)

The limiting values for the G-polynomial are found for G = ±1, which for real roots gives:

−2.5127 < λh < 0    (2.135)
For RK4 we get correspondingly −2.7853 < λh < 0 for real roots.
Stiff equations.
Figure 2.22: Regions of stability for RK-methods order 1-4 for complex roots.
2.12 Exercises
Exercise 1: Solving Newton's first differential equation using Euler's method
One of the earliest known differential equations, which Newton solved with series
expansion in 1671 is:
a) Solve Eq. (2.138) using Euler’s method. Plot and compare with Newton’s
solution Eq. (2.139) and the analytical solution Eq. (2.140). Plot between x = 0
and x = 1.5
[Figure: the analytical solution, the Euler solution and Newton's series solution plotted for 0 ≤ x ≤ 1.6.]
d²Y/dX² = (3Y/D²)·(A/D − 3Y/(2D))    (2.141)
where D is the middle depth, Y (X) is the wave height above middle depth
and A is the wave height at X = 0. The wave is symmetric with respect to
X = 0. See Figure 2.24. The coordinate system follows the wave.
By using the dimensionless variables x = X/D, a = A/D, y = Y/A, Eq. (2.141) can be written as:
y''(x) = 3a·y(x)·(1 − (3/2)·y(x))    (2.142)
∫ dy/(y·√(1 − y)) = −2 arctanh(√(1 − y))
Hint 1. Solutions:
a) y(0.6) = 0.88, y'(0.6) = −0.569.
b) y(0.6) = 0.8337, y'(0.6) = −0.4858.
c) y ≈ 1 − x²/2 + x⁴/6,   y' ≈ −x + (2/3)x³
d) y = 1/cosh²(x/√2) = 2/(1 + cosh(√2·x))
Hint 2. If you want you can use this template and fill in the lines where it’s
indicated.
#import matplotlib; matplotlib.use(’Qt4Agg’)
import matplotlib.pylab as plt
#plt.get_current_fig_manager().window.raise_()
import numpy as np
[Figure: numerical and analytical solutions of the solitary wave problem; euler: y(0.6)=0.88, heun: y(0.6)=0.8336, taylor: y(0.6)=0.8416, analytic: y(0.6)=0.8396.]
""" This script solves the problem with the solitary wave:
y’’ = a*3*y*(1-y*3/2)
y(0) = 1, y’(0) = 0
y0’ = y’
y1’ = a*3*y0*(1-y0*3/2)
y0(0) = 1, y1(0) = 0
"""
a = 2./3
h = 0.2 # steplength dx
x_0, x_end = 0, 0.6
Y0_heun = np.zeros_like(x)
Y1_heun = np.zeros_like(x)
y0_p =
y1_p =
f0_p =
f1_p =
"Fill in lines above"
Y0_analytic = 1./(np.cosh(x/np.sqrt(2))**2)
d²θ/dτ² + (g/l)·sin θ = 0    (2.143)
with initial conditions
θ(0) = θ0 (2.144)
dθ/dτ(0) = θ̇0    (2.145)
We introduce a dimensionless time t = √(g/l)·τ such that (2.143) may be written as

θ̈(t) + sin θ(t) = 0    (2.146)
θ(0) = θ0 (2.147)
θ̇(0) = θ̇0 (2.148)
Movie 1: mov-ch1/pendulum.mp4
a) Write a Python program that solves the ODE in (2.146) with the specified initial conditions using Heun's method, for given values of θ0 and θ̇0. Set for instance θ0 = 85° and θ̇0 = 0 (remember to convert to radians in the ODE). Experiment
with different time steps ∆t, and carry out the computation for at least a whole
period. Plot both the amplitude and the energy function in (2.150) as functions
of t. Plot in separate figures.
b) Solve a) using Euler’s method.
c) Solve the linearized version of the ODE in (2.146):
using both Euler's and Heun's method. Plot all four solutions (Problem 2, 3a and b) in the same figure. Experiment with different time steps and values of θ0.
[Figure: amplitude [rad] versus dimensionless time for the linearized and nonlinear pendulum solved with Euler's and Heun's methods (EulerLin, HeunLin, EulerNonlin, HeunNonlin).]
Figure 3.1: The trajectory of a ballistic object launched with an initial angle α.
y'' = y(x)    (3.1)
and consequently an initial value problem which can be shown to have the
following analytical solution:
y'(x) = g(x)
g'(x) = p(x)·g(x) + q(x)·y(x) + r(x)    (3.8)
such that the boundary value y(b) = β is satisfied. As (3.6) is a linear ODE it suffices to guess two values of s = g(a) ≡ y'(a). The correct value of s which gives the correct boundary value y(b) = β may then be found by linear interpolation. Note that y(x) depends linearly on s when the ODE is linear.
To quantify how well the boundary value resulting from our initial guess s = g(a) ≡ y'(a) matches the prescribed value, we introduce a boundary value error function φ. We seek the value s = s* for which

φ(s*) = 0    (3.10)
The procedure may be outlined as follows:
k_a = (φ1 − φ0)/(s1 − s0),   k_b = (s1·φ0 − s0·φ1)/(s1 − s0)    (3.12)

and consequently we may find s* as:

s* = (φ1·s0 − φ0·s1)/(φ1 − φ0)    (3.13)
y'(x) = g(x)
g'(x) = y(x)    (3.15)

For the given values of b and β, and s = y'(0) = g(0), the boundary value error function in (3.9) takes the form:
In accordance with the shooting method we choose two initial values s0 = 0.2 and s1 = 0.7 and solve (3.15) as an initial value problem with e.g. an RK4-solver with ∆x = 0.1, and obtain the following results:
m sm φ(sm )
0 0.2 -0.7650
1 0.7 -0.1774
Note that we use m as a superindex for the iterations, which will be needed in the case of nonlinear equations. Substitution into (3.13) gives s* = 0.8510. Subsequent use of this s*-value in the RK4-solver yields φ(0.8510) = 0.0001. We observe a good approximation to the correct initial value, which may be found from the analytical solution as y'(0) = 1/sinh(1) = 0.8509.
The presentation above with the shooting method is chosen as it can easily
be generalized to the solution of nonlinear ODEs.
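A compact, stand-alone sketch of this procedure for the example above (using a local RK4 step rather than the rk4 routine from the ODEschemes module used in the compendium listings) could look as follows; it reproduces s* ≈ 0.8510:

import numpy as np

def rk4_step(f, z, x, dx):
    k1 = f(z, x)
    k2 = f(z + 0.5*dx*k1, x + 0.5*dx)
    k3 = f(z + 0.5*dx*k2, x + 0.5*dx)
    k4 = f(z + dx*k3, x + dx)
    return z + dx/6.0*(k1 + 2*k2 + 2*k3 + k4)

def f(z, x):                       # z = [y, g]; y' = g, g' = y, cf. (3.15)
    return np.array([z[1], z[0]])

def shoot(s, beta=1.0, dx=0.1):
    """Integrate the IVP with y(0)=0, y'(0)=s and return phi(s) = y(1) - beta."""
    z = np.array([0.0, s])
    for i in range(int(round(1.0/dx))):
        z = rk4_step(f, z, i*dx, dx)
    return z[0] - beta

s0, s1 = 0.2, 0.7
phi0, phi1 = shoot(s0), shoot(s1)
s_star = (phi1*s0 - phi0*s1)/(phi1 - phi0)   # Eq. (3.13)
print(s_star, shoot(s_star))                 # approx. 0.8510 and phi close to 0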
Alternative approach for linear second order ODEs. For linear second
order ODEs the solution may be found in somewhat simpler fashion, by solving
the following sub-problems:
Sub-problem 1
y0''(x) = p(x)·y0'(x) + q(x)·y0(x) + r(x),   y0(a) = α,   y0'(a) = 0    (3.17)
and
Sub-problem 2
y1''(x) = p(x)·y1'(x) + q(x)·y1(x),   y1(a) = 0,   y1'(a) = 1    (3.18)
That is, the two sub-problems differ only by the source term r(x) and the boundary conditions. Notice that the condition y0'(a) = 0 in (3.17) corresponds to s0 = 0 and the condition y1'(a) = 1 in (3.18) corresponds to s1 = 1.
Figure 3.2: The solution y(x) of a boundary value problem resulting from two
initial guesses y(x; s0 ) and y(x; s1 ).
Let y0 (x) represent the solution to (3.17) and y1 (x) be the solution to (3.18).
The complete solution of the boundary value problem in (3.6) with the boundary
conditions (3.7) may then be shown to be:
y(x) = y0(x) + ((β − y0(b))/y1(b))·y1(x) = y0(x) − (φ0/(φ1 + β))·y1(x)    (3.19)
d²u/dy² = −P    (3.21)
with corresponding boundary conditions:
u0'(y) = u1(y)
u1'(y) = −P    (3.24)
with corresponding boundary conditions:
Figure 3.4: Velocity profiles for Couette-Poiseuille flow with various pressure
gradients.
N=200
L = 1.0
y = np.linspace(0,L,N+1)
# Guessed values
s=[1.0, 1.5]
z0=np.zeros(2)
# Solve the initial value problem which is a solution to the boundary value problem
z = rk4(f, z0, y)
plot(z[:,0],y,’-.’)
legends.append(’rk4: dp=’+str(dpdx))
d²U/dX² + (P/(EI))·U = −(q/(2EI))·(L² − X²),   P > 0    (3.26)
with the following boundary conditions:
Figure 3.5: Simply supported beam with an evenly distributed load q per length
unit.
x = X/L,   u = (P/(qL²))·U,   θ² = PL²/(EI)    (3.28)
which by substitution in (3.26) and (3.27) yield:
d²u/dx² + θ²·u = θ²·(1 − x²)/2,   −1 < x < 1    (3.29)

with the corresponding boundary conditions:

u(−1) = u(1) = 0    (3.30)

The analytical solution of (3.29) and (3.30) is:

u(x) = (1/θ²)·(cos(θx)/cos(θ) − 1) − (1 − x²)/2    (3.31)
The buckling load for this case is given by Pk = π²EI/(4L²), such that

0 ≤ θ ≤ π/2    (3.32)
Numerical solution. We wish to solve the second order linear ODE in (3.29)
by means of shooting methods and choose the alternative approach as outlined
in equation (3.19).
Two similar sub-problems are then to be solved; they differ only in the source term and the boundary conditions.
Sub-problem 1
u0''(x) = −θ²·u0(x) + θ²·(1 − x²)/2,   u0(−1) = 0,  u0'(−1) = 0    (3.33)
(3.34)
Sub-problem 2
u1''(x) = −θ²·u1(x),   u1(−1) = 0,  u1'(−1) = 1    (3.35)

From the superposition principle for linear equations given in equation (3.19), a complete solution is obtained as a combination of the solutions to the two sub-problems:

u(x) = u0(x) − (u0(1)/u1(1))·u1(x)    (3.37)
Now, to solve the equation numerically we write (3.33) and (3.35) as a system
of first order ODEs:
Sub-problem 1 as a system of first order ODEs
For convenience we introduce the notation u0 = y1 and u0' = y2:
return yout
N = 20 # number of elements
L = 1.0 # half the length of the beam
x = np.linspace(-L, L, N + 1) # allocate space
theta = 1 # PL**2/EI
theta2 = theta**2
u0 = solver(SubProblem1, y0Sys1,x)
u1 = solver(SubProblem2, y0Sys2,x)
plot(x,u,’y’,)
plot(x,ua,’r:’)
legendList.append(’shooting technique’)
legendList.append(’analytical solution’)
## Add the labels
legend(legendList,loc=’best’,frameon=False)
ylabel(’u’)
xlabel(’x’)
grid(b=True, which=’both’, color=’0.65’,linestyle=’-’)
#savefig(’../fig-ch2/beam_deflect_shoot_constant.png’, transparent=True)
#savefig(’../fig-ch2/beam_deflect_shoot_constant.pdf’, transparent=True)
#sh
show()
Figure 3.6: The analytical and numerical solutions for a simply supported beam with an evenly distributed load q per length unit. The results are produced with
a shooting method implemented in beam_deflect_shoot_constant.py.
d²U/dX² + (P/(EI))·U = −(q/(2EI))·(L² − X²)    (3.44)

U(−L) = U(L) = 0,   dU(0)/dX = 0    (3.45)
We start by computing the moment distribution M(X) in the beam, which is related to the displacement by the following expression:

d²U/dX² = −M/(EI)    (3.46)
By substitution of equation (3.46) in (3.44) and two subsequent differentiations we get:

d²M/dX² + (P/(EI))·M = −q    (3.47)
To represent the resulting second order ODE on a more convenient and
generic form we introduce the dimensionless variables:
x = X/L,   m = M/(qL²)    (3.48)
and let
EI0
P = (3.49)
L2
such that equation (3.47) becomes:
U = (qL⁴/(EI0))·u    (3.53)
y1' = y2
y2' = −(1 + (1 + x^n)·y1)    (3.56)
In this case y2(0) ≡ m'(0) is given, which leaves m(0) ≡ y1(0) to be guessed such that the boundary condition at the other end, m(1) ≡ y1(1) = 0, is fulfilled. To express the approach algorithmically as previously, we let s = y1(0) and let the boundary value error function be φ(s) = y1(1; s).

s* = φ0/(φ0 − φ1)    (3.58)

The simplified expression for s* in equation (3.58) may be found from (3.14) by setting s0 = 0 and s1 = 1.
One should be aware that the choice in (3.58) is not necessarily always a good one, even though the ODE is linear. If the solution behaves like e^(αx) with both α and x large, we may end up outside the allowable range even for double precision, which is approximately 10^308. In our example above this is not a problem, so we may use (3.58); the approach is implemented in beam_deflect_shoot_varying.py as shown below, where both the moment and the deflection are computed for n = 2.
# src-ch2/beam_deflect_shoot_varying.py;ODEschemes.py @ git@lrhgit/tkt4140/src/src-ch2/ODEschemes.py;
m0 = solver(f, y01, x)
m1 = solver(f, y02, x)
phi0 = m0[-1,0]
phi1 = m1[-1,0]
sfinal = phi0/(phi0 - phi1) # find correct initial value of moment m(x=0) using secant method
y03 = [sfinal, 0]
mfinal = solver(f,y03,x)
m = mfinal[:, 0] # extract moment from solution data
plot(x, m, ’b’,)
legendList.append(’m’)
plot(x, u, ’r’,)
legendList.append(’u’)
## Add the labels
legend(legendList,loc=’best’,frameon=False)
ylabel(’m, u’)
xlabel(’x’)
grid(b=True, which=’both’, axis=’both’,linestyle=’-’)
show()
[Figure: the dimensionless moment m and deflection u as functions of x, computed with beam_deflect_shoot_varying.py.]
y''(x) = (3/2)·y²
y(0) = 4,   y(1) = 1    (3.59)
Our model problem (3.59) may be proven to have two solutions.
Solution I:
yI = 4/(1 + x)²    (3.60)
Figure 3.8: The two solutions yI and yII of the non-linear ODE (3.59).
As our model equation is non-linear our error function φ(s) will also be
non-linear, and we will have to conduct an iteration process as outlined in (3.63).
Our task will then be to find the values s∗ so that our boundary value error
function φ(s) becomes zero φ(s∗ ) = 0. For this purpose we choose the secant
method (See [3], section 3.3).
Two initial guesses s0 and s1 are needed to start the iteration process. Eq.
(3.61) is then solved twice (using s0 and s1 ). From these solutions two values
for the error function (3.63) may be calculated, namely φ0 and φ1 . The next
approximation is then computed from:

s^{m+1} = s^m + ∆s    (3.64)
where
∆s = s^{m+1} − s^m = −φ(s^m)·(s^m − s^{m−1})/(φ(s^m) − φ(s^{m−1})),   m = 1, 2, . . .
Figure 3.9: An illustration of the use of the secant method to find the zero of a non-linear boundary value error function.
Iteration process
3. Update
• sm−1 ← sm
• sm ← sm+1
• φ(sm−1 ) ← φ(sm )
|∆s/s^{m+1}| < ε2    (3.66)

The criteria (3.65) and (3.66) are frequently used in combination with a limit on the maximum number of iterations.
N=20
L = 1.0
x = np.linspace(0,L,N+1)
smin=-45.0
smax=1.0
s_guesses = np.linspace(smin,smax,20)
# Guessed values
#s=[-5.0, 5.0]
z0=np.zeros(2)
z0[0] = 4.0
z = solver(f,z0,x)
phi0 = z[-1,0] - beta
nmax=10
eps = 1.0e-3
phi = []
for s in s_guesses:
z0[1] = s
z = solver(f,z0,x)
phi.append(z[-1,0] - beta)
plot(s_guesses,phi)
ylabel(’phi’)
xlabel(’s’)
grid(b=True, which=’both’, color=’0.65’,linestyle=’-’)
show()
close()
Based on our plots of φ we may now provide qualified initial guesses and choose s0 = −3.0 and s1 = −6.0. The complete shooting method for our boundary value problem may be found here:
# src-ch2/non_lin_ode.py;ODEschemes.py @ git@lrhgit/tkt4140/src/src-ch2/ODEschemes.py;
from ODEschemes import euler, heun, rk4
from numpy import cos, sin
import numpy as np
from matplotlib.pyplot import *
N=40
L = 1.0
x = np.linspace(0,L,N+1)
def dsfunction(phi0,phi1,s0,s1):
if (abs(phi1-phi0)>0.0):
return -phi1 *(s1 - s0)/float(phi1 - phi0)
else:
return 0.0
def y_analytical(x):
return 4.0/(1.0+x)**2
# Guessed values
# s=[-3.0,-9]
s=[-40.0,-10.0]
z0=np.zeros(2)
z0[0] = 4.0
z0[1] = s[0]
z = solver(f,z0,x)
phi0 = z[-1,0] - beta
nmax=10
eps = 1.0e-3
for n in range(nmax):
z0[1] = s[1]
z = solver(f,z0,x)
phi1 = z[-1,0] - beta
ds = dsfunction(phi0,phi1,s[0],s[1])
s[0] = s[1]
s[1] += ds
phi0 = phi1
print ’n = {} s1 = {} and ds = {}’.format(n,s[1],ds)
if (abs(ds)<=eps):
print ’Solution converged for eps = {} and s1 ={} and ds = {}. \n’.format(eps,s[1],ds)
break
legends=[] # empty list to append legends as plots are generated
plot(x,z[:,0])
legends.append(’y’)
plot(x,y_analytical(x),’:^’)
legends.append(’y analytical’)
After selecting ∆x = 0.1 and using the RK4-solver, the following iteration output may be obtained:
After four iterations s = y'(0) = −7.9989, while the analytical value is −8.0. The code non_lin_ode.py illustrates how our non-linear boundary value problem may be solved with a shooting method, and offers a graphical comparison of the numerical and analytical solutions.
The secant method is simple and efficient and does not require any knowledge of the analytical expressions for the derivatives of the function whose zeros are sought, unlike e.g. the Newton-Raphson method. A drawback is clearly that two initial guesses are needed to start the iteration process. However, by using some physical insight into the problem and plotting the φ-function for a wide range of s-values, this is normally straightforward to deal with.
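The secant loop itself is independent of the particular boundary value problem, so it may be convenient to collect it in a small helper. The sketch below assumes a user supplied callable phi(s) that integrates the initial value problem for a given slope s and returns the boundary value error y(b; s) − β:

def secant_shoot(phi, s0, s1, eps=1.0e-3, nmax=10):
    """Secant iteration (3.64) on a boundary value error function phi(s)."""
    phi0 = phi(s0)
    for n in range(nmax):
        phi1 = phi(s1)
        if abs(phi1 - phi0) == 0.0:        # avoid division by zero
            break
        ds = -phi1*(s1 - s0)/(phi1 - phi0)
        s0, phi0 = s1, phi1
        s1 = s1 + ds
        if abs(ds) <= eps:
            break
    return s1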
[Figure 3.10: Large deflection of a cantilever: the beam is clamped at A and loaded by a point load P at the free end B; θ is the local slope, δ the tip deflection, l the arc length coordinate and lh the horizontal projection of the beam.]
where κ denotes the curvature, M the moment and EI the flexural rigidity. The balance of moments about C may be computed as M = P·L·(lh − x), which by substitution in (3.68) results in:

dθ/dl = (PL²/(EI))·(lh − x)    (3.69)
From figure 3.10 we may deduce the following geometrical relations:
dx dy
= cos θ, = sin θ (3.70)
dl dl
Further, differentiation of (3.69) with respect to the arc length l and substitution of (3.70) yields:

d²θ/dl² + (PL²/(EI))·cos θ = 0    (3.71)

For convenience we introduce the parameter α, defined as:

α² = PL²/(EI)    (3.72)
As a result we must solve the following differential equations:
d²θ/dl² + α²·cos θ = 0    (3.73)

dy/dl = sin θ    (3.74)
with the following boundary conditions:
y(0) = 0,   θ(0) = 0,   dθ/dl(1) = 0    (3.75)
The first two boundary conditions are due to the anchoring in A, whereas the latter is due to a vanishing moment in B. The analytical solution of this problem
may be found in appendix G, section G.3 in Numeriske Beregninger.
Numerical solution
First, we need to represent (3.73) and (3.74) as a system of first order differential equations. By introducing the conventions θ = z0, θ' = z1 and y = z2 we get:

z0' = z1
z1' = −α²·cos z0    (3.76)
z2' = sin z0
We have to guess the initial value θ'(0) such that the condition dθ/dl(1) = 0 is satisfied. To do so, we let s = θ'(0) = z1(0) and φ(s) = θ'(1; s) − 0 = z1(1; s). Consequently, we have to find s = s* such that φ(s*) = 0.
Additionally we find:

lh = s*/α²    (3.79)
Due to the nonlinear nature of (3.76), the function (3.78) will be nonlinear too, and we will use the secant method to find s*. To start the secant iterations we need two initial guesses s0 and s1. Suitable initial guesses may be found by first inspecting φ(s) graphically. The python-code phi_plot_beam_shoot.py produces Figure 3.11.
# src-ch2/phi_plot_beam_shoot.py;ODEschemes.py @ git@lrhgit/tkt4140/src/src-ch2/ODEschemes.py;
N=20
L = 1.0
y = np.linspace(0,L,N+1)
def f(z, t):
"""RHS for deflection of beam"""
zout = np.zeros_like(z)
zout[:] = [z[1],-alpha2*cos(z[0]),sin(z[0])]
return zout
alpha2 = 5.0
beta=0.0 # Boundary value at y = L
N_guess = 30
s_guesses=np.linspace(1,5,N_guess)
z0=np.zeros(3)
phi = []
for s_guess in s_guesses:
z0[1] = s_guess
z = solver(f,z0,y)
phi.append(z[-1,1] - beta)
[Figure 3.11: The boundary value error function φ(s) for the cantilever problem.]
N=20
L = 1.0
y = np.linspace(0,L,N+1)
def dsfunction(phi0,phi1,s0,s1):
if (abs(phi1-phi0)>0.0):
return -phi1 *(s1 - s0)/float(phi1 - phi0)
else:
return 0.0
alpha2 = 5.0
beta=0.0 # Boundary value at y = L
# Guessed values
s=[2.5, 5.0]
z0=np.zeros(3)
z0[1] = s[0]
z = solver(f,z0,y)
phi0 = z[-1,1] - beta
nmax=10
eps = 1.0e-10
for n in range(nmax):
z0[1] = s[1]
z = solver(f,z0,y)
phi1 = z[-1,1] - beta
ds = dsfunction(phi0,phi1,s[0],s[1])
s[0] = s[1]
s[1] += ds
phi0 = phi1
print ’n = {} s1 = {} and ds = {}’.format(n,s[1],ds)
if (abs(ds)<=eps):
print ’Solution converged for eps = {} and s1 ={} and ds = {}. \n’.format(eps,s[1],ds)
break
legends=[] # empty list to append legends as plots are generated
plot(y,z[:,0])
legends.append(r’$\theta$’)
plot(y,z[:,1])
legends.append(r’$d\theta/dl$’)
plot(y,z[:,2])
legends.append(r’$y$’)
show()
Figure 3.12: Solutions generated with a shooting method for large deflections
of a cantilever subjected to a point load.
∂T/∂τ = α·∂²T/∂X²    (3.80)
where τ and X denote time and spatial coordinates, respectively. The
temperature T is a function of time and space T = T (X, τ ), and α is the thermal
diffusivity. (See appendix B in Numeriske Beregninger for a derivation)
In Figure 3.13 a one-dimensional beam in the right half-space (0 ≤ X < ∞) is illustrated. The beam initially has a temperature Ts, but at time τ = 0 the temperature at the left end X = 0 is abruptly set to T0 and kept constant thereafter.
We wish to compute the temperature distribution in the beam as a function of time τ. The partial differential equation describing this problem is given by
equation (3.80), and to model the time evolution of the temperature we provide
the following initial condition:
∂u/∂t = ∂²u/∂x²,   0 < x < ∞    (3.84)
accompanied by the dimensionless initial condition:
We will now try to transform the partial differential equation (3.84) with boundary conditions (3.85) to a simpler ordinary differential equation. We will do so by introducing some appropriate scales for the time and space coordinates:
x̄ = a x and t̄ = b t (3.87)
where a and b are some positive constants. Substitution of equation (3.87)
into equation (3.84) yields the following equation:
∂u a2 ∂ 2 u
= (3.88)
∂ t̄ b ∂ x̄2
We chose b = a2 to bring the scaled eqution (3.88) on the canonical, dimen-
sionless for of equation (3.84) with the boundary conditions:
u = f (η) (3.94)
By substituting the expressions for ∂u/∂t and ∂²u/∂x² in equation (3.84), we transform the original PDE in equation (3.84) to an ODE as stated in equation (3.96):

(1/(4t))·f''(η) + (η/(2t))·f'(η) = 0

or equivalently:

f''(η)/f'(η) = −2η

which may be integrated to yield:

ln f'(η) = −η² + ln C1    (3.97)
u = (T − T0)/(Ts − T0) = (0 − (−15))/(20 − (−15)) = 3/7 = 0.4286
Some values for erf(η) are tabulated below :
[Figure: the error function erf(η) and the complementary error function erfc(η).]
η erf(η) η erf(η)
0.00 0.00000 1.00 0.84270
0.20 0.22270 1.20 0.91031
0.40 0.42839 1.40 0.95229
0.60 0.60386 1.60 0.97635
0.80 0.74210 1.80 0.98909
From the tabulated values of erf(η) and (3.101) we find erf(η) = 0.4286 → η ≈ 0.4, which with (3.93) gives:

X = 0.4·2·√(τ·α) = 0.8·√(720·5·10⁻⁴) = 0.48 m

We have assumed a constant diffusivity α, whereas it may vary in the interval α ∈ [3·10⁻⁴, 10⁻³] m²/h. Moreover, the ground will normally not be dry.
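Instead of interpolating in the table, the inverse error function from scipy.special may be used directly. The following small sketch repeats the computation above (the values τ = 720 h and α = 5·10⁻⁴ m²/h are those used in the example):

from math import sqrt
from scipy.special import erfinv

u = (0.0 - (-15.0))/(20.0 - (-15.0))   # u = 3/7 = 0.4286
eta = erfinv(u)                        # erf(eta) = u  ->  eta ~ 0.40
tau, alpha = 720.0, 5.0e-4             # time [h] and diffusivity [m^2/h]
X = eta*2.0*sqrt(tau*alpha)            # penetration depth, ~ 0.48 m
print(eta, X)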
We have a quiescent fluid for τ < 0. At time τ = 0 the plate, which is parallel to the X-axis, is given a constant velocity U = U0 in the X-direction. For a correct mathematical description in this case, the plate must be regarded as infinitely thin and of infinite extent in the X-direction. The Navier-Stokes equation in the X-direction for incompressible flow reads:

∂U/∂τ + U·∂U/∂X + V·∂U/∂Y = −(1/ρ)·∂p/∂X + ν·(∂²U/∂X² + ∂²U/∂Y²)

(See the derivation in appendix B of Numeriske Beregninger.)
We assume that the solution of this problem is independent of X, and that V = 0, which means that U = U(Y, τ):

∂U/∂τ = ν·∂²U/∂Y²,   0 < Y < ∞    (3.104)

Initial condition:

U(Y, 0) = 0    (3.105)

Boundary conditions:

U(0, τ) = U0   (no-slip condition)
U(∞, τ) = 0    (3.106)

We introduce the following dimensionless variables:

u = U/U0,   y = Y/L,   t = τ·ν/L²    (3.107)

where U0 and L are a characteristic velocity and length, respectively. Inserting (3.107) into (3.104) gives the following equation:

∂u/∂t = ∂²u/∂y²,   0 < y < ∞    (3.108)

Initial condition:

u(y, 0) = 0    (3.109)

Boundary conditions:

u(0, t) = 1
u(∞, t) = 0    (3.110)

We see that we have obtained the same equation and problem as in (3.84) and (3.85), only with 0 and 1 interchanged in the boundary conditions. The solution of (3.108) and (3.109) with the boundary conditions (3.110) becomes:

u(y, t) = 1 − erf(η) = erfc(η) = erfc(y/(2√t))    (3.111)
u·∂u/∂x + v·∂u/∂y = ν·∂²u/∂y²    (3.115)

Boundary conditions:

u(0) = 0    (3.116)
u → U0 for y → δ    (3.117)

We define a Reynolds number:

Rex = U0·x/ν    (3.119)

We introduce the following similarity variables for the coordinate and the stream function:

η = √(U0/(2νx))·y   and   f(η) = ψ/√(2U0νx)    (3.120)

The transformation (3.120) gives the Blasius equation for the stream function f:

f'''(η) + f(η)·f''(η) = 0    (3.121)

with the velocity components:

u/U0 = f'(η),   v/U0 = (η·f'(η) − f(η))/√(2Rex)    (3.122)

The no-slip condition u = 0 for y = 0 now becomes f'(0) = 0, since f'(η) = u/U0 from (3.122). Without suction or blowing through the boundary η = 0, the second no-slip condition is v = 0 for η = 0; from (3.122) this condition becomes f(0) = 0. The condition u → U0 from (3.117) now becomes f'(η) → 1 for η → ∞. The boundary conditions for an impermeable wall then become:

f(0) = f'(0) = 0,   f'(η) → 1 for η → ∞    (3.123)

Numerical solution
We wish to solve (3.121) together with the boundary conditions in (3.123) using a shooting technique, and therefore write the equation as a system of three first order differential equations:

f' = f1    (3.125)
f1' = f2    (3.126)
f2' = −f·f2    (3.127)

Boundary conditions:

f(0) = f1(0) = 0
f1(η∞) = 1    (3.128)

We set f''(0) = f2(0) = s.

Iteration process
The iteration process using the secant method is given by:

s^{m+1} = s^m + δs,   δs = −φ(s^m)·(s^m − s^{m−1})/(φ(s^m) − φ(s^{m−1})),   m = 1, 2, . . .    (3.130)
N = 50 # no x-values
x = linspace(xmin, xmax, N+1)
# Guessed values
#s=[0.1,0.8]
s_guesses=np.linspace(0.01,5.0)
z0=np.zeros(3)
beta=1.0 #Boundary value for eta=infty
phi = []
for s_guess in s_guesses:
z0[2] = s_guess
u = solver(fblasius, z0, x)
phi.append(u[-1,1] - beta)
plot(s_guesses,phi)
def dsfunction(phi0,phi1,s0,s1):
if (abs(phi1-phi0)>0.0):
return -phi1 *(s1 - s0)/float(phi1 - phi0)
else:
return 0.0
solvers = [euler, heun, rk4] #list of solvers
solver=solvers[0] # select specific solver
# Guessed values
s=[0.1,0.8]
z0=np.zeros(3)
z0[2] = s[0]
u = solver(fblasius, z0, x)
phi0 = u[-1,1] - beta
nmax=10
eps = 1.0e-3
for n in range(nmax):
z0[2] = s[1]
u = solver(fblasius, z0, x)
phi1 = u[-1,1] - beta
ds = dsfunction(phi0,phi1,s[0],s[1])
s[0] = s[1]
s[1] += ds
phi0 = phi1
print ’n = {} s1 = {} and ds = {}’.format(n,s[1],ds)
if (abs(ds)<=eps):
print ’Solution converged for eps = {} and s1 ={} and ds = {}. \n’.format(eps,s[1],ds)
break
plot(u[:,1],x,u[:,2],x)
xlabel(’u og u\’’)
ylabel(’eta’)
legends=[]
legends.append(’velocity’)
legends.append(’wall shear stress’)
legend(legends,loc=’best’,frameon=False)
title(’Solution of the Blaisus eqn with ’+str(solver.func_name)+’-shoot’)
show()
close() #Close the window opened by show()
itr. s ds
1 0.471625 - 2.837e-002
2 0.469580 - 2.046e-003
3 0.469601 2.146e-005
4 0.469601 - 1.540e-008
η f f' f''
0.00 0.000000 0.000000 4.69601e-001
0.25 0.014673 0.117364 4.69027e-001
0.50 0.058643 0.234228 4.65032e-001
0.75 0.131642 0.349324 4.54373e-001
1.00 0.232991 0.460634 4.34380e-001
1.25 0.361431 0.565601 4.03495e-001
1.50 0.515032 0.661476 3.61805e-001
1.75 0.691201 0.745765 3.11302e-001
2.00 0.886798 0.816697 2.55671e-001
2.25 1.098374 0.873559 1.99544e-001
2.50 1.322442 0.916810 1.47476e-001
2.75 1.555766 0.947928 1.02930e-001
3.00 1.795573 0.969057 6.77108e-002
3.25 2.039661 0.982573 4.19262e-002
3.50 2.286412 0.990711 2.44148e-002
3.75 2.534723 0.995319 1.33643e-002
4.00 2.783894 0.997773 6.87412e-003
4.25 3.033509 0.999000 3.32201e-003
4.50 3.283340 0.999577 1.50841e-003
4.75 3.533271 0.999832 6.43473e-004
5.00 3.783244 0.999938 2.57831e-004
5.25 4.033235 0.999980 9.70546e-005
5.50 4.283232 0.999995 3.43646e-005
5.75 4.533231 1.000000 1.15317e-005
f 000 + f f 00 + β · [1 − (f 0 )2 ] = 0 (3.131)
when the free-stream velocity U(x) is given by:

[Figure: flow past a wedge; the wedge surfaces make an angle πβ/2 with the symmetry line.]
f 000 + f f 00 + β · [1 − (f 0 )2 ] = 0 (3.134)
Boundary conditions:

f(0) = f'(0) = 0
f'(η∞) = 1    (3.135)

We solve the equation in the same way as the Blasius equation, by writing (3.134) as a set of first order differential equations:

f' = f1    (3.136)
f1' = f2    (3.137)
f2' = −[f·f2 + β·(1 − f1²)]    (3.138)

Boundary conditions:

f(0) = f1(0) = 0
f1(η∞) = 1    (3.139)

With f''(0) = f2(0) = s, we get:
U = U0∙x
[Figure: Falkner-Skan solutions f'(η) (velocity profiles) and f''(η) for β = −0.198837, −0.15, 0, 0.3 and 1.]
Separation occurs for β = −0.1988…, corresponding to a wedge angle of approximately −17.9°.
There exist solutions of the Falkner-Skan equation for values of β outside the range that can physically be associated with a wedge flow. For more information about these solutions, see Evans [5], White [24] and Schlichting [21].
Finally, it should be mentioned that an analytical solution of the Falkner-Skan equation exists. Imagine a sink of strength K placed at x = 0, parallel to the plate edge. In this case U(x) = −K/x and η = (y/x)·√(K/ν).
The equation now becomes:

f'''(η) − (f'(η))² + 1 = 0    (3.141)

with the usual boundary conditions
f(η) = η + 2√3 − 3√2·tanh(z)    (3.144)
f'(η) = 3·tanh²(z) − 2    (3.145)
f''(η) = 3√2·tanh(z)·[1 − tanh²(z)]    (3.146)
[Figure: the linear error functions φ = A·s + B·r + C and ψ = D·s + E·r + F in the (s, r)-plane; the lines φ = 0 and ψ = 0 intersect at (s*, r*).]
φ=A·s+B·r+C
(3.147)
ψ =D·s+E·r+F
where A, B, C, · · · , F are constants that must be determined. The correct values r* and s* are found from φ(r*, s*) = 0 and ψ(r*, s*) = 0 (see the figure). We have six constants
φ⁰ = A·s⁰ + B·r⁰ + C    (3.148)
φ¹ = A·s¹ + B·r¹ + C    (3.149)
φ⁽²⁾ = A·s⁽²⁾ + B·r⁽²⁾ + C    (3.150)

ψ⁰ = D·s⁰ + E·r⁰ + F    (3.152)
ψ¹ = D·s¹ + E·r¹ + F    (3.153)
ψ⁽²⁾ = D·s⁽²⁾ + E·r⁽²⁾ + F    (3.154)
1)  s⁰ = 0,  r⁰ = 0    (3.156)
2)  s¹ = 0,  r¹ = 1    (3.157)
3)  s⁽²⁾ = 1,  r⁽²⁾ = 0    (3.158)
C = φ⁰,  B = φ¹ − φ⁰,  A = φ⁽²⁾ − φ⁰
F = ψ⁰,  E = ψ¹ − ψ⁰,  D = ψ⁽²⁾ − ψ⁰    (3.159)
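In the linear case the three shots prescribed in (3.156)-(3.158) determine all six constants, and (s*, r*) follows from a 2×2 linear system. A minimal sketch (assuming the three φ- and ψ-values have already been computed by integrating the ODE system three times):

import numpy as np

def two_parameter_update(phi, psi):
    """phi = [phi0, phi1, phi2] and psi = [psi0, psi1, psi2] are the error
    function values from the shots (s, r) = (0, 0), (0, 1) and (1, 0).
    Returns (s*, r*) according to Eqs. (3.147)-(3.159)."""
    C, B, A = phi[0], phi[1] - phi[0], phi[2] - phi[0]
    F, E, D = psi[0], psi[1] - psi[0], psi[2] - psi[0]
    M = np.array([[A, B], [D, E]])
    s_star, r_star = np.linalg.solve(M, -np.array([C, F]))
    return s_star, r_star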
d⁴W/dX⁴ + B·W = −γ·(H − X)/D    (3.161)
where B = 12(1 − ν²)/(R²t²),  D = Et³/(12(1 − ν²)),  γ = ρg.
Here ν is Poisson's ratio, E the modulus of elasticity and ρ the density of the liquid.
We introduce dimensionless quantities:

x = X/H   and   w = (E/(γHt))·(t/R)²·W    (3.162)

which inserted in (3.161) give the following differential equation:

d⁴w/dx⁴ + 4β⁴·w = −4β⁴·(1 − x)    (3.163)

where

β⁴ = 3(1 − ν²)H⁴/(R²t²)    (3.164)
Boundary values.
For x = 0:

W = 0,   dW/dX = 0   (clamped)

The shear force and moment are zero for X = H, which gives:

M = −D·d²W/dX² = 0,   V = −D·d³W/dX³ = 0

The boundary values expressed in dimensionless quantities then become:

For x = 0:  w = 0,  dw/dx = 0
For x = 1:  d²w/dx² = 0,  d³w/dx³ = 0    (3.165)
The dimensionless moment is m(x) = −d²w/dx² and the dimensionless shear force is v(x) = −d³w/dx³. See appendix D for more details.
We choose a concrete tank with the following dimensions:
Numerical computation
We set w = y1, w' = y1' = y2, w'' = y2' = y3, w''' = y3' = y4 and write (3.161) as a system of four first order differential equations:
y1' = y2
y2' = y3
y3' = y4    (3.167)
y4' = −4β⁴·(y1 + 1 − x)

with the following boundary conditions:
Shooting length   m0   −v0
1.0 60.098 793.809
0.9 60.093 793.715
0.8 60.099 793.670
0.7 60.079 793.598
0.6 59.816 791.835
0.5 58.879 781.541
d²/dX²(D·d²W/dX²) + E·(t/R²)·W = −γ·(H − X)    (3.170)

where D = Et³/(12(1 − ν²)),  γ = ρg.
The constants have the same meaning as in part 1. (See appendix D in Numeriske Beregninger for more details.)
Dimensionless form

x = X/H,   w = (E/(γHt0))·(t0/R)²·W,   β⁴ = 3(1 − ν²)H⁴/(R²t0²)    (3.171)

Inserting (3.171) into (3.170):

d²/dx²[(1 − αx)³·d²w/dx²] + 4β⁴·(1 − αx)·w = −4β⁴·(1 − x)    (3.172)

Carrying out the differentiation:
Numerical computation
We set w = y1, w' = y1' = y2, w'' = y2' = y3, w''' = y3' = y4, z = 1 − αx and write (3.169) as a system of four first order differential equations:

y1' = y2
y2' = y3
y3' = y4    (3.175)
y4' = (6α/z)·y4 − (6α²/z²)·y3 − (4β⁴/z²)·y1 − 4β⁴·(1 − x)/z³
f(0) = 0    (3.178)
f'(η∞) = 1    (3.179)
f'(−η∞) = U2/U1    (3.180)

Note that η ≥ 0 for the upper layer and η ≤ 0 for the lower layer. We have defined the coordinate η and the stream function f(η) as follows:

η = √(U1/(2νx))·y
f'(η) = U(x, y)/U1    (3.181)
ψ(−η∞; r, s) = f'(−η∞; r, s) − U2/U1 = 0   for s = s*, r = r*    (3.184)
Instead of using negative values of η for the lower layer, we choose to introduce the coordinate ζ = −η. We then get the following system:
f'(η∞) = 1    (3.186)
f(0) = 0,  f'(0) = r,  f''(0) = s    (3.187)
φ(η∞; r, s) = f'(η∞; r, s) − 1 = 0    (3.188)
f'(ζ∞) = −U2/U1    (3.190)
f(0) = 0,  f'(0) = −r,  f''(0) = s    (3.191)
ψ(ζ∞; r, s) = f'(ζ∞; r, s) + U2/U1 = 0    (3.192)
Note that the differentiation is with respect to the coordinate ζ in the lower layer. Since ζ = −η, we get for example:
φ(r∗ , s∗ ) = 0
(3.193)
ψ(r∗ , s∗ ) = 0
We then choose a value s = s0, see Figure ??. Next we set r = r0 and compute the values of φ and ψ along the line s = s0 with varying r. With reference to the figure above, we successively get:
U2/U1   r*   s*
0.0 0.583776 0.284447
0.1 0.613568 0.267522
0.2 0.646947 0.247879
0.3 0.683594 0.225504
0.4 0.723101 0.200460
0.5 0.765049 0.172847
0.6 0.809060 0.142779
0.7 0.854807 0.110363
0.8 0.902021 0.075701
0.9 0.950480 0.038886
1.0 1.000000 0.000000
With α = U2/U1 we find the following curve fits for r* and s* from the table:
3.5 Exercises
Problem 5: Stokes first problem for a non-Newtonian fluid
ρ·∂u/∂t = ∂τxy/∂y    (3.195)
The relation between the shear stress τxy and the velocity gradient ∂u/∂y is given by:

τxy = K·(∂u/∂y)·|∂u/∂y|^(1/2)    (3.196)
where K is a positive constant.
We introduce the following transformation, which reduces the system to an ordinary differential equation:

η = C·y/t^(2/5),   f(η) = u/U0    (3.197)

where C is a positive constant, y ≥ 0 and t ≥ 0. By inserting Eq. (3.196) into Eq. (3.195) and using (3.197), we get:

f''(η) + 2η·√(f'(η)) = 0    (3.198)

With boundary conditions:
Hint 2. Programming:
You may use the template script below:
# src-ch2/stokes.py;ODEschemes.py @ git@lrhgit/tkt4140/src/src-ch2/ODEschemes.py;
#### a: ####
def func(f, eta):
f0 = f[0]
f1 = f[1]
if f1<0:
# f1 should not be negative
f1=0
df0 = ?
df1 = ?
dF = np.array([df0, df1])
return dF
N = 5
Eta = np.linspace(0, 1, N + 1)
d_eta = Eta[1] - Eta[0]
f_0 = ?
df_0 = ?
F_0 = [f_0, df_0]
phi0 = ?
phi1 = ?
s = ?
s1, s0 = s, s1
print "n = {0}, s = {1}, ds = {2}, delta = {3} ".format(n, s0, s1 - s0, delta)
C = 1.
U0 = 1
dt = 0.05
t0, tend = dt, 2.
dy = 0.01
y0, yend = 0, 2.
f = F[:, 0]*U0
tck = splrep(Eta, f) # create interpolation splines
for n, t in enumerate(time):
Eta_n = C*Y/(t**(2./5))
lines=[] # list for plot lines for solvers and analytical solutions
legends=[] # list for legends for solvers and analytical solutions
# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig, animate, frames=len(time), init_func=init, blit=False)
# Writer = animation.writers[’ffmpeg’]
# writer = Writer(fps=15, metadata=dict(artist=’Me’), bitrate=1800)
# anim.save(’../mov/stokes.mov’, writer=writer)
plt.show()
Chapter 4

Finite Differences for ODEs
b1 x1 + c1 x2 = d1
···
ai xi−1 + bi xi + ci xi+1 = di
(4.1)
···
aN xN −1 + bN xN = dN
i = 1, 2, ...N , a1 = cN = 0
or, more conveniently, in matrix representation:

| b1   c1                       | | x1   |   | d1   |
| a2   b2   c2                  | | x2   |   | d2   |
|       .    .    .             | |  .   | = |  .   |    (4.2)
|          aN-1  bN-1  cN-1     | | xN-1 |   | dN-1 |
|                 aN    bN      | | xN   |   | dN   |
Elimination:

qj := aj/bj−1   for j = 2, 3, · · · , N
bj := bj − qj·cj−1    (4.3)
dj := dj − qj·dj−1

Back substitution:

xN := dN/bN
xj := (dj − cj·xj+1)/bj   for j = N − 1, N − 2, · · · , 1    (4.4)
The Python code for (4.3) and (4.4) is implemented in tdma, which corresponds to tri in [3], section 6.3 (see below).
def tdma(a, b, c, d):
"""Solution of a linear system of algebraic equations with a
tri-diagonal matrix of coefficients using the Thomas-algorithm.
Args:
a(array): an array containing the lower diagonal (a[0] is not used)
b(array): an array containing the main diagonal
c(array): an array containing the upper diagonal (c[-1] is not used)
d(array): right hand side of the system
Returns:
x(array): solution array of the system
"""
n = len(b)
x = np.zeros(n)
# elimination:
for k in range(1,n):
q = a[k]/b[k-1]
b[k] = b[k] - c[k-1]*q
d[k] = d[k] - d[k-1]*q
# backsubstitution:
q = d[n-1]/b[n-1]
x[n-1] = q
for k in range(n-2,-1,-1):
q = (d[k]-c[k]*q)/b[k]
x[k] = q
return x
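As a quick sanity check, tdma may be tried on a small system and compared with a dense solver. A sketch (note that the routine overwrites b and d, so copies are passed):

import numpy as np

a = np.array([0.0, 1.0, 1.0, 1.0])      # lower diagonal, a[0] is not used
b = np.array([-2.0, -2.0, -2.0, -2.0])  # main diagonal
c = np.array([1.0, 1.0, 1.0, 0.0])      # upper diagonal, c[-1] is not used
d = np.array([0.0, 0.0, 0.0, -1.0])     # right hand side

x = tdma(a, b.copy(), c, d.copy())
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(x, np.linalg.solve(A, d))         # both give [0.2, 0.4, 0.6, 0.8]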
|b1| > |c1| > 0
|bi| ≥ |ai| + |ci|,   ai·ci ≠ 0,   i = 2, 3, · · · , N − 1    (4.6)
|bN| > |aN| > 0

Matrices satisfying (4.6) are called diagonally dominant for obvious reasons, and strictly diagonally dominant in case ≥ may be substituted with > in (4.6). Pivoting during Gauss elimination is not necessary for diagonally dominant matrices, and thus the band structure is preserved and the algorithm becomes less CPU-intensive. See appendix I in Numeriske Beregninger for a proof of (4.6).
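A small helper along these lines may be used to verify the criterion before calling tdma (a sketch, assuming the storage convention of (4.2)):

def is_diagonally_dominant(a, b, c):
    """Test the criterion (4.6) for the three diagonals stored as in (4.2)."""
    n = len(b)
    ok = abs(b[0]) > abs(c[0]) > 0 and abs(b[-1]) > abs(a[-1]) > 0
    for i in range(1, n - 1):
        ok = ok and abs(b[i]) >= abs(a[i]) + abs(c[i]) and a[i]*c[i] != 0
    return ok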
Notice that all coefficients in each row of (4.2) have the same index. However, the linear algebraic equation system (4.1) may also be presented as:
b1 x1 + c2 x2 = d1
···
ai−1 xi−1 + bi xi + ci+1 xi+1 = di
··· (4.7)
aN −1 xN −1 + bN xN = dN
i = 1, 2, ...N , a1 = cN = 0
or in matrix form:

| b1   c2                       | | x1   |   | d1   |
| a1   b2   c3                  | | x2   |   | d2   |
|       .    .    .             | |  .   | = |  .   |    (4.8)
|          aN-2  bN-1  cN       | | xN-1 |   | dN-1 |
|                 aN-1  bN      | | xN   |   | dN   |

We notice that in this notation the coefficients of each column in the matrix (4.8) have the same index.
The version in (4.8) may be deduced from (4.2) by subtracting 1 from the a-indices and adding 1 to the c-indices. Commercial codes like Matlab store tridiagonal matrices in the form given in (4.8). We have implemented (4.8)
in tridiag.
def tripiv(a, b, c, d):
"""Solution of a linear system of algebraic equations with a
tri-diagonal matrix of coefficients using the Thomas-algorithm with pivoting.
Args:
a(array): an array containing lower diagonal (a[0] is not used)
b(array): an array containing main diagonal
c(array): an array containing the upper diagonal (c[-1] is not used)
d(array): right hand side of the system
Returns:
x(array): solution array of the system
"""
n = len(b)
x = np.zeros(n)
fail = 0
# reordering
a[0] = b[0]
b[0] = c[0]
c[0] = 0
# elimination:
l = 0
for k in range(0,n):
q = a[k]
i = k
if l < n-1:
l = l + 1
for j in range(k+1,l+1):
q1 = a[j]
if (np.abs(q1) > np.abs(q)):
q = q1
i = j
if q == 0:
fail = -1
if i != k:
q = d[k]
d[k] = d[i]
d[i] = q
q = a[k]
a[k] = a[i]
a[i] = q
q = b[k]
b[k] = b[i]
b[i] = q
q = c[k]
c[k] =c[i]
c[i] = q
for i in range(k+1,l+1):
q = a[i]/a[k]
d[i] = d[i]-q*d[k]
a[i] = b[i]-q*b[k]
b[i] = c[i]-q*c[k]
c[i] = 0
# backsubstitution
x[n-1] = d[n-1]/a[n-1]
x[n-2] = (d[n-2]-b[n-2]*x[n-1])/a[n-2]
for i in range(n-3,-1,-1):
q = d[i] - b[i]*x[i+1]
x[i] = (q - c[i]*x[i+2])/a[i]
return x
d²(T − T∞)/dX² = (h̄P/(kA))·(T − T∞)    (4.10)
For convenience we introduce dimensionless variables:

x = X/L,   θ = (T − T∞)/(Tr − T∞),   β² = (h̄P/(kA))·L²    (4.11)

where L is a characteristic length (see Figure B.5 in Numeriske Beregninger). A relative, dimensionless temperature is constructed by combining a reference temperature Tr and the ambient temperature T∞. The parameter β² is frequently referred to as the Biot number, Bi = β².
Given the assumptions and prerequisites above, (4.9) may be presented in the more convenient and generic form:

d²θ/dx² − β²·θ = 0    (4.12)

which is a second order ODE with the following generic analytical solution:

θ(x) = C1·cosh(βx) + C2·sinh(βx)    (4.13)

In this example the boundary conditions are:
T = T∞ for X = 0
T = Tr for X = L
which may be presented on dimensionless form as:
θ=0 for x = 0 (4.14)
θ=1 for x = 1
The analytical solution (4.13) for these particular boundary conditions reduces to:

θ(x) = sinh(βx)/sinh(β),   dθ/dx = β·cosh(βx)/sinh(β)    (4.15)
With D = 0.02 m and L = 0.2 m the Biot number is β² = 4. By doubling the length the Biot number quadruples. For the example above the Biot number becomes:

β² = (h̄P/(kA))·L² = (2/D)·L²    (4.16)
Mixed boundary conditions:

Qx = 0, i.e. dT/dX = 0   for X = 0
T = Tr   for X = L

The analytical solution (4.13) reduces for the mixed boundary conditions to:

θ(x) = cosh(βx)/cosh(β),   dθ/dx = β·sinh(βx)/cosh(β)    (4.19)
Numerical solution. We will use central differences for the term d²θ/dx², and with d²θ/dx²|i ≈ (θi−1 − 2θi + θi+1)/h² we get the following difference equation:

θi−1 − (2 + β²h²)·θi + θi+1 = 0    (4.20)
Prescribed temperatures
We enumerate the nodes for the unknowns as shown in Figure 4.2:

[Figure 4.2: node numbering 0, 1, 2, . . . , N, N+1 on 0 ≤ x ≤ 1.]

For node i = 1 the generic scheme (4.20) reads θ0 − (2 + β²h²)·θ1 + θ2 = 0, which may be simplified by substitution of the prescribed boundary value θ(0) = θ0 = 0:

−(2 + β²h²)·θ1 + θ2 = 0

For the other boundary at i = N the generic scheme (4.20) is:

θN−1 − (2 + β²h²)·θN + θN+1 = 0

which by substitution of the prescribed value θN+1 = 1 yields:

θN−1 − (2 + β²h²)·θN = −1
A complete system of equations may finally be obtained from (4.1.1), (4.20),
and (4.1.1):
i = 1 :                −(2 + β²h²)·θ1 + θ2 = 0
i = 2, 3, ..., N − 1 :  θi−1 − (2 + β²h²)·θi + θi+1 = 0    (4.21)
i = N :                 θN−1 − (2 + β²h²)·θN = −1

ai = 1,             i = 2, 3, ..., N
bi = −(2 + β²h²),   i = 1, 2, ..., N    (4.22)
ci = 1,             i = 1, 2, ..., N − 1
di = 0,             i = 1, 2, ..., N − 1
dN = −1
import numpy as np
import scipy as sc
import scipy.linalg
import scipy.sparse
import scipy.sparse.linalg
import time
from math import sinh
def theta_analytical(beta,x):
return np.sinh(beta*x)/np.sinh(beta)
diagonals=np.zeros((3,n))
diagonals[0,:]= 1 #all elts in first row is set to 1
diagonals[1,:]= -(2+(beta*h)**2)
diagonals[2,:]= 1
A_sparse = sc.sparse.spdiags(diagonals, [-1,0,1], n, n,format=’csc’) #sparse matrix instance
tic=time.clock()
theta2=sc.linalg.solve(A,d,)
toc=time.clock()
print ’linalg solver time:’,toc-tic
# Plot solutions
plot(x[1:-1],theta,x[1:-1],theta2,’-.’,x,theta_analytical(beta,x),’:’)
legend([’sparse’,’linalg’,’analytical’])
show()
close()
print ’done’
The numerical predictions are presented and compared with analytical values
in the table below:
x numerical analytical rel.err
0.0 0.00000 0.00000
0.1 0.05561 0.05551 0.00176
0.2 0.11344 0.11325 0.00170
0.3 0.17582 0.17554 0.00159
0.4 0.24522 0.24487 0.00144
0.5 0.32444 0.32403 0.00126
0.6 0.41663 0.41619 0.00105
0.7 0.52548 0.52506 0.00082
0.8 0.65536 0.65499 0.00056
0.9 0.81145 0.81122 0.00029
1.0 1.00000 1.00000 0.00000
Mixed boundary conditions. Version 1
For mixed boundary conditions the nodes with unknown temperatures are enumerated as shown in Fig. (4.3), so that the first unknown temperature we compute has index 1. The x-coordinates are given by xi = (i − 1)·h, i = 1, 2, ..., N + 1, where h = 1/N, and we will use a second order central difference as an approximation of the zero-gradient boundary condition dθ/dx = 0 at the left boundary, where x = 0.
For a generic node i the central difference approximation may be denoted:

dθ/dx|i ≈ (θi+1 − θi−1)/(2h)    (4.23)
[Figure 4.3: node numbering 1, 2, . . . , N, N+1 on 0 ≤ x ≤ 1.]
θ0 = θ2    (4.25)

Due to the zero gradient boundary condition, the first of the N equations, represented in generic form by (4.20), takes the particular form:

−(2 + β²h²)·θ1 + 2θ2 = 0    (4.26)

This first equation (4.26) is the only equation which differs from the resulting equation system for prescribed boundary conditions in (4.21). All the coefficients ai, bi, and di are the same as for the prescribed temperature version in (4.22), except for c1:

c1 = 2,   ci = 1,   i = 2, ..., N − 1    (4.27)
The first difference equation (4.20) in which θ3 occurs, is the one for node 2
θ1 − (2 + β 2 h2 ) θ2 + θ3 = 0 (4.31)
2θ1 − (2 − β²h²)·θ2 = 0    (4.32)
This is the first equation in the system of equations and the only one which
differs from (4.21). Rather than (4.22), we get the following equations for the
coefficients:
b1 = 2,   bi = −(2 + β²h²),   i = 2, ..., N    (4.33)
c1 = −(2 − β²h²),   ci = 1,   i = 2, ..., N − 1    (4.34)
2θ1 − (2 − β²h²)·θ2 = 0,                   i = 1
θi−1 − (2 + β²h²)·θi + θi+1 = 0,           i = 2, 3, ..., N − 1    (4.35)
θN−1 − (2 + β²h²)·θN = −1,                 i = N
import numpy as np
import scipy as sc
import scipy.linalg
import scipy.sparse
import scipy.sparse.linalg
import time
from numpy import cosh
def theta_analytical(beta,x):
return np.cosh(beta*x)/np.cosh(beta)
A=tri_diag_setup(a,b,c)
#Create matrix for sparse solver
diagonals=np.zeros((3,n))
diagonals[0,:]= 1.0 # all elts in first row is set to 1
diagonals[0,0]= 1.0
diagonals[1,:]= -(2+(beta*h)**2)
diagonals[2,:]=1.0
if (version==1):
diagonals[2,1]= 2.0 # index 1 as the superdiagonal of spdiags is not used,
else:
diagonals[1,0]=2.0 # Sets the first element in the main diagonal
diagonals[2,1]= -(2+(beta*h)**2) # index 1 as the super-diagonal of spdiags is not used
tic=time.clock()
theta2=sc.linalg.solve(A,d,)
toc=time.clock()
print ’linalg solver time:’,toc-tic
# Plot solutions
plot(x[0:-1],theta,x[0:-1],theta2,’-.’,x,theta_analytical(beta,x),’:’)
xlabel(’x’)
ylabel(r’Dimensionless temperature $\mathregular{\theta}$’)
legend([’sparse’,’linalg’,’analytical’])
show()
close()
print ’done’
The relative error is computed from εrel = |(θnum − θanalyt )/θanalyt |, and
the results of the computations are given in the table below:
x  Analytical  Ctr.diff  Rel.err  Fwd.diff  Rel.err
0.0 0.26580 0.26665 0.00320 0.26613 0.00124
0.1 0.27114 0.27199 0.00314 0.27156 0.00158
0.2 0.28735 0.28820 0.00295 0.28786 0.00176
0.3 0.31510 0.31594 0.00267 0.31567 0.00180
0.4 0.35549 0.35632 0.00232 0.35610 0.00171
0.5 0.41015 0.41095 0.00194 0.41078 0.00153
0.6 0.48128 0.48202 0.00154 0.48189 0.00128
0.7 0.57171 0.57237 0.00114 0.57228 0.00098
0.8 0.68510 0.68561 0.00075 0.68555 0.00067
0.9 0.82597 0.82628 0.00037 0.82625 0.00034
1.0 1.00000 1.00000 0.00000 1.00000 0.00000
We observe that the two versions of the zero-gradient boundary condition yield approximately the same result except close to x = 0, where the forward difference is somewhat better. In section (4.5) we will take a closer look at the accuracy of gradient boundary conditions.
Mixed boundary conditions. Version 3
Rather than the enumeration in Figure (4.3), we may use the enumeration in Fig. (4.2) with xi = i·h, i = 0, 1, ..., N + 1, where h = 1/(N + 1), such that we must take i = 1 in (4.20) to get:

θ0 − (2 + β²h²)·θ1 + θ2 = 0

The boundary condition in (4.28) then becomes:

dθ/dx|0 = (−3θ0 + 4θ1 − θ2)/(2h) = 0

from which an elimination equation for θ0 may be obtained:
Figure 4.5 shows the upper half of the fin. From this figure we find the following relation:

dT(0)/dX = (h̄0/k)·[T(0) − T∞],   T(L) = TL    (4.40)

For the trapezoidal cross-section in Fig. (4.4):

x·d²θ/dx² + dθ/dx − β²·θ(x) = 0    (4.44)

For x = 0:

dθ/dx(0) − β²·θ(0) = 0    (4.45)

The analytical solution of (4.42) and (4.44) is given in appendix G, section G.5 of the compendium.
x θ (x) θ’(x)
0.00 0.18069 0.09034
0.10 0.19623 0.21842
0.20 0.22415 0.33967
0.30 0.26424 0.46318
0.40 0.31708 0.59556
0.50 0.38383 0.74211
0.60 0.46614 0.90754
0.70 0.56611 1.09629
0.80 0.68632 1.31291
0.90 0.82978 1.56211
1.00 1.00000 1.84901
(1 + x)·d²θ/dx² + dθ/dx − 8·θ(x) = 0    (4.48)

with boundary conditions:

dθ/dx(0) = β0²·θ(0) = θ(0)/2,   θ(1) = 1    (4.49)

The procedure is as for case 2, version 2, section (4.1.1), with the same numbering. The x-coordinates are given by xi = (i − 1)·h, i = 1, 2, ..., N + 1, where h = 1/N.
Discretization:

(1 + xi)·(θi+1 − 2θi + θi−1)/h² + (θi+1 − θi−1)/(2h) − 8θi = 0

which rearranged becomes:
[Figure: node numbering 1, 2, . . . , N, N+1 on 0 ≤ x ≤ 1.]
where γi = h/(2(1 + xi)) = h/(2[1 + (i − 1)·h]),   i = 1, 2, ..., N + 1    (4.51)

For the boundary condition in (4.49) we get (θ2 − θ0)/(2h) = θ1/2 → θ0 = θ2 − θ1·h, which inserted in (4.50) for i = 1 gives:

[2 + h·(1 + 15·γ1)]·θ1 − 2θ2 = 0    (4.52)

For i = N:

−(1 − γN)·θN−1 + 2·(1 + 8h·γN)·θN = 1 + γN    (4.53)

For i = 2, 3, ..., N − 1 we use (4.50). We then have a tridiagonal system of equations which can be solved with the Thomas algorithm. The table below shows the computations for two different values of h. We see that the error is 100 times smaller for h = 0.01 than for h = 0.1, which means that the error is O(h²), in agreement with the theory. The relative error is computed from |(θnumerical − θanalytical)/θanalytical|.
−dQx = P·h̄·[T(X) − T∞]·dX   where   Qx = −kA·dT/dX

For dX → 0 we then get dQ → 0 ⇒ Q = constant, which means that the following relation holds at the interface between the two bodies:

Qa = Qb  ⇒  ka·Aa·(dT/dX)|a = kb·Ab·(dT/dX)|b    (4.54)

Since Aa = Ab at the interface, it follows that (dT/dX)|b = (ka/kb)·(dT/dX)|a, which in dimensionless form (using (4.41)) becomes:

(dθ/dx)|b = (ka/kb)·(dθ/dx)|a    (4.55)

We use the following numerical values:
β² = 2h̄L²/(Dk),   which gives βa² = 2.0 and βb² = 8.0    (4.59)

By writing (4.58) in the form shown, we have ensured the continuity condition in (4.54). With reference to Figure 4.8 we use the following numbering:

xi = (i − 1)·h,   i = 1, 2, ..., N + 1,   where h = 1/N    (4.60)
M = N/2 + 1,   where N is an even number and xM = 0.5
−(1 − 3/(2i))·θi−1 + 2·(1 − 1/i + β²h/i)·θi − (1 − 1/(2i))·θi+1 = 0    (4.62)

−(1 − 3/4)·θ1 + 2·(1 − 1/2 + βa²h/2)·θ2 − (1 − 1/4)·θ3
0.5
Here we use the equation in the form given in (4.58), which discretized using (2.43) from chapter 2 gives:

β²_{M−1/2} = βa² = 2.0,   β²_{M+1/2} = βb² = 8.0

x_{M−1/2} = (N − 1)·h/2 = (1 − h)/2,   x_{M+1/2} = (N + 1)·h/2 = (1 + h)/2

which gives:
−4 (1 − h) θM −1 + [5 − 3h + 16h2 ] θM − (1 + h) θM +1 = 0 (4.65)
dθi/dx = (β²/xi)·∫ from x1 to xi of θ(x) dx,   i = 2, 3, ..., N + 1    (4.67)

The integral I = ∫ from x1 to xi of θ(x) dx is computed e.g. with the trapezoidal rule. The computation of (4.67) is shown in pseudocode below:
θ1' := βa²·θ1
s := 0
for i := 2, ..., N + 1:
    x := h·(i − 1)
    s := s + 0.5·h·(θi + θi−1)
    if (i ≤ M) then θi' := βa²·s/x else θi' := βb²·s/x
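A direct Python transcription of the pseudocode might look as follows (a sketch; node 1 of the pseudocode corresponds to index 0 of the array theta):

import numpy as np

def dtheta_dx(theta, h, M, beta_a2, beta_b2):
    """Gradient dtheta/dx from Eq. (4.67) using the trapezoidal rule."""
    dtheta = np.zeros_like(theta)
    dtheta[0] = beta_a2*theta[0]
    s = 0.0
    for j in range(1, len(theta)):       # j = 0 corresponds to node 1 (x = 0)
        x = h*j
        s += 0.5*h*(theta[j] + theta[j-1])
        beta2 = beta_a2 if j + 1 <= M else beta_b2
        dtheta[j] = beta2*s/x
    return dtheta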
In the table on the next page we have used (4.67) and the trapezoidal rule. In this case the accuracy of the two methods for computing θ'(x) is quite similar, since both θ and θ' are smooth functions (apart from x = 0.5), but in general the integration method will tend to be more accurate when there are discontinuities.
Solution of Eq. (4.58) with h = 0.01
where A and B are p × p matrices.
4.2.1 Example
From Eqs. (4.12) and (4.14):
d²θ/dx² − β²·θ(x) = 0
θ'(0) = 0,   θ(1) = 1

With the notation in (4.68) this system may be written:

y1' = y2
y2' = β²·y1

| 0 1 |   | y1(0) |   | 0 0 |   | y1(1) |   | 0 |
| 0 0 | · | y2(0) | + | 1 0 | · | y2(1) | = | 1 |
We discretize (4.68) using central differences. Since (4.68) is a set of first order equations, we choose to develop the difference expressions halfway between two points, in this case around x_{j−1/2}, as shown in Figure 4.10.

[Figure 4.10: the points j−1, j−1/2 and j, with spacing h/2 on either side of j−1/2.]
dy/dx|_{j−1/2} = (yj − yj−1)/h + O(h²),   y_{j−1/2} = (yj + yj−1)/2 + O(h²)    (4.70)
A · y0 + B · yN = c (4.72)
Notation:

xj = a + j·h,   j = 0, 1, ..., N,   h = (b − a)/N    (4.73)

With this discretization it can be shown that the following relation holds:
Figure 4.11 shows a heat exchanger modelled as two tubes. In the outer tube the flow goes from left to right with constant velocity wy, and in the inner tube we have a flow with constant velocity wi. We have co-current flow when wi is directed to the right, and counter-current flow when wi is directed to the left. The temperatures in the outer and inner tube are denoted Ty and Ti, respectively. We regard the outer tube as so well insulated that any heat loss to the surroundings can be neglected.
Tube length:                        L [m]
Length coordinate:                  X [m]
Tube diameter:                      di [m]
Cross-sectional area:               Ai = π·di²/4 [m²]
Circumference of inner tube:        P = π·di [m]
Density of medium:                  ρi [kg/m³]    (4.75)
Mass flow:                          ṁi = ρi·Ai·wi [kg/s]
Overall heat transfer coefficient:  h̄ [W/m²/°C]
Specific heat:                      ci [J/kg/°C]
Heat flow:                          Q̇i = ṁi·ci·Ti [W]

For the indexed quantities in (4.75) we have corresponding quantities with index y for the outer tube. Q̇yi is the heat flow from the outer to the inner tube.
With reference to Figure 4.12:
ṁi ci dTi (X) = h̄P · dX[Ty (X) − Ti (X)] = −ṁy cy dTy (X) (4.78)
dTy(X) = −(h̄P/(ṁy·cy))·[Ty(X) − Ti(X)]·dX    (4.79)

dTi(X) = (h̄P/(ṁi·ci))·[Ty(X) − Ti(X)]·dX    (4.80)
We now introduce dimensionless quantities. Let Tir and Tyr be reference temperatures that can be given suitable values in specific cases. The dimensionless temperatures in the outer and inner tube are denoted u and v, respectively.

Temperature, outer tube:  u = (Ty − Tyr)/(Tir − Tyr)    (4.81)
Temperature, inner tube:  v = (Ti − Tyr)/(Tir − Tyr)    (4.82)
Length:                   x = X/L    (4.83)
Coefficients:             αy = h̄PL/(ṁy·cy) = h̄PL/(ρy·Ay·wy·cy)
                          αi = h̄PL/(ṁi·ci) = h̄PL/(ρi·Ai·wi·ci)    (4.84)
Inserting (4.81) into (4.79) gives the following system:

Outer tube:  du/dx = −αy·(u − v)    (4.85)
Inner tube:  dv/dx = ±αi·(u − v)    (4.86)

We have written ± in front of the term on the right hand side of (4.86); the plus sign applies for the flow direction shown in Figure 4.12, while the negative sign applies when wi changes direction (counter-current flow). It is understood that u and v are functions of x, so that we avoid writing u(x) and v(x).
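For the co-current case (plus sign) both temperatures are known at x = 0, so the system can be integrated directly as an initial value problem. A minimal sketch, with assumed inlet values u(0) = 1 and v(0) = 0 and the coefficients αy ≈ 0.9, αi ≈ 0.2 used later in this example:

import numpy as np

def cocurrent_profiles(alpha_y, alpha_i, u0, v0, N=100):
    """Integrate (4.85)-(4.86) with the plus sign (co-current flow) from x = 0
    using Euler's method."""
    h = 1.0/N
    u = np.zeros(N + 1); v = np.zeros(N + 1)
    u[0], v[0] = u0, v0
    for j in range(N):
        du = -alpha_y*(u[j] - v[j])
        dv = alpha_i*(u[j] - v[j])
        u[j+1] = u[j] + h*du
        v[j+1] = v[j] + h*dv
    return u, v

u, v = cocurrent_profiles(0.9, 0.2, u0=1.0, v0=0.0)   # assumed inlet values
print(u[-1], v[-1])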
Analytical solution
If we know the temperatures at a given x-value in both the outer and the inner tube, we can solve (4.85) analytically.
By subtraction:

−d/dx(u − v) = (αy ± αi)·(u − v)

We introduce the temperature difference

θ = u − v    (4.87)

so that we get the equation −dθ/dx = (αy ± αi)·θ with solution:
Ty − Ty (0)
Temperatur ytre rør: u = (4.93)
Ti (L) − Ty (0)
Ti − Ty (0)
Temperatur indre rør: v = (4.94)
Ti (L) − Ty (0)
This choice of reference temperatures gives the following boundary conditions:
Outer tube (x = 0 ... 1):  u0 u1 u2 ... uj ... uN−1 uN
Inner tube (x = 0 ... 1):  v0 v1 v2 ... vj ... vN−1 vN
Boundary conditions: u0 = 0 , uN = 1.
The algorithm requires that the first unknown has index 1 and the last index N. As the enumeration above shows, this holds for the outer tube, but not for the inner one, where v0 is the first unknown. We therefore shift the indices as shown below:
Outer tube (x = 0 ... 1):  u0 u1 u2 ... uj ... uN−1 uN
Inner tube (x = 0 ... 1):  v1 v2 v3 ... vj ... vN vN+1
For u:
Written out for j = 1:
(2 + h·α_y)·u1 − h·α_y·v1 − h·α_y·v2 = 0
h·α_i·u1 − (h·α_i + 2)·v1 + (2 − h·α_i)·v2 = 0
where we have used u0 = 0 from the boundary condition.
Written out for j = N:
(h·α_y − 2)·u_{N−1} + (2 + h·α_y)·u_N − h·α_y·v_N = h·α_y
h·α_i·u_{N−1} + h·α_i·u_N − (h·α_i + 2)·v_N = h·α_i − 2
For j = N the expressions are as in (4.104), except that d_N^{(1)} = h·α_y and d_N^{(2)} = h·α_i − 2. c_N^{(2)} and c_N^{(4)} are not used. An example with N = 6 is shown below.
Tube length: L = 61 m
Tube diameter: d = 0.025 m
Tube circumference: P = π·d = 0.07854 m
Mass flows: ṁ_y = 0.1 and ṁ_i = 0.23 kg/s
Overall heat transfer coefficient: h̄ = 40 W/(m² °C)
Specific heats: c_y = 2130 and c_i = 4180 J/(kg °C)
α_y = h̄PL/(ṁ_y c_y) ≈ 0.9 ,  α_i = h̄PL/(ṁ_i c_i) ≈ 0.2   (4.106)
The reference temperatures in (4.76) and (4.87):
u(x) = (9/7)·v0·[1 − e^{−0.7x}]
v(x) = v0·[1 + (2/7)(1 − e^{−0.7x})]   (4.109)
h = 0.1 x Ty (◦ C) Ti (◦ C) h = 0.01 x Ty (◦ )C Ti (◦ C)
0.0 100.00000 38.80441 0.0 100.00000 38.80226
0.1 94.67864 37.62189 0.1 94.68054 37.62016
0.2 89.71719 36.51934 0.2 89.72070 36.31798
0.3 85.09129 35.49137 0.3 85.09618 35.49031
0.4 80.77825 34.53291 0.4 80.78431 34.53211
0.5 76.75692 33.63928 0.5 76.76395 33.63870
0.6 73.00756 32.80609 0.6 73.01539 32.80569
0.7 69.51178 32.02925 0.7 69.52026 32.02899
0.8 66.25243 31.30495 0.8 66.26142 31.30480
0.9 63.21352 30.62964 0.9 63.22291 30.62958
1.0 60.38014 30.00000 1.0 60.38981 30.00000
[Figure: Computed temperatures T_y (oil, outer tube) and T_i (water, inner tube) as functions of x]
y1 = y + A·h1^k + B·h1^{k+1} + ...   (4.110)
where A and B are constants.
If we now compute two approximations y1 and y2 with two different step lengths h1 and h2, we get two equations of the type (4.110) and can thereby eliminate A, with the following result:
(h2^k·y1 − h1^k·y2)/(h2^k − h1^k) = y + B·(h2^k·h1^{k+1} − h1^k·h2^{k+1})/(h2^k − h1^k) + ...   (4.111)
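A small numerical illustration of the extrapolation formula (4.111) is sketched below. The helper names and the explicit Euler test problem y′ = −y, y(0) = 1 are assumptions made only to have a self-contained example.

```python
# Minimal sketch of the combination formula (4.111); names are illustrative only.
import numpy as np

def richardson(y1, y2, h1, h2, k):
    """Combine approximations y1 (step h1) and y2 (step h2) of a quantity
    with leading error term A*h**k, eliminating A as in (4.111)."""
    return (h2**k*y1 - h1**k*y2)/(h2**k - h1**k)

def euler_at_1(h):
    """Explicit Euler for y' = -y, y(0) = 1, integrated to x = 1 (error ~ h)."""
    y = 1.0
    for _ in range(int(round(1.0/h))):
        y -= h*y
    return y

h1, h2 = 0.1, 0.05
y1, y2 = euler_at_1(h1), euler_at_1(h2)
print("%g %g %g exact=%g" % (y1, y2, richardson(y1, y2, h1, h2, k=1), np.exp(-1.0)))
```

The extrapolated value is noticeably closer to e⁻¹ than either of the two first-order approximations, as (4.111) predicts.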
y″(x) = (3/2) y²
y(0) = 4 ,  y(1) = 1   (4.115)
where one of the solutions is given by:
y = 4/(1 + x)²   (4.116)
We discretize (4.115) using central differences for y″(x):
(y_{i+1} − 2y_i + y_{i−1})/h² = (3/2) y_i²
or, rearranged:
y_{i−1} − (2 + (3/2)h² y_i)·y_i + y_{i+1} = 0   (4.117)
[Figure: Grid x0, x1, ..., x_{N+1} on [0.0, 1.0] with h = Δx]
We have divided the interval [0, 1] into N + 1 parts, with h = 1/(N + 1) and x_i = h·i, i = 0, 1, ..., N + 1. Since y_0 = 4 and y_{N+1} = 1, (4.117) becomes:
−(2 + (3/2)h² y_1)·y_1 + y_2 = −4
...
y_{i−1} − (2 + (3/2)h² y_i)·y_i + y_{i+1} = 0   (4.118)
...
y_{N−1} − (2 + (3/2)h² y_N)·y_N = −1
where i = 2, 3, ..., N − 1.
The coefficient matrix of (4.118) is tridiagonal, but the system is non-linear (a system of quadratic equations). We have no formulas for solving such systems directly, so the set must be linearized. Let us look more closely at two methods for carrying out this linearization.
We have now obtained a linear, tridiagonal system that can be solved with the Thomas algorithm. To start the iteration process we must guess values for y_i^0, i = 1, 2, ..., N. If we have no particular information, we may for instance guess y_i^0 = 0, i = 1, 2, ..., N. Good starting values will generally give faster convergence. We must also remember to test for diagonal dominance when we use tdma. For (4.120) the requirement becomes:
|2 + (3/2) y_i^m h²| ≥ 2 ,  i = 1, 2, ..., N
which is satisfied if all y_i^m > 0. From Figure 3.8 we see that this requirement is satisfied for one of the solutions, but not for the other. When this condition is not satisfied, it may be wise to use a solver with pivoting, e.g. tripiv instead of tdma. When we do not yet know how the iteration process will behave, it is sensible to use only the number of iterations as a stopping criterion. After we have gained insight into the iteration behaviour, we can add other stopping criteria. Some examples are shown in Section 4.3.6; a sketch of the Thomas algorithm is given below.
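Below is a minimal Python sketch of the Thomas algorithm referred to as tdma above. The argument convention (a = sub-diagonal, b = main diagonal, c = super-diagonal, d = right-hand side) is an assumption and need not match the tdma routine actually used in the course.

```python
# Minimal sketch of the Thomas algorithm (tdma); argument order is an assumption.
import numpy as np

def tdma(a, b, c, d):
    """Solve a tridiagonal system; a[0] and c[-1] are not used."""
    n = len(b)
    b, d = np.array(b, float), np.array(d, float)
    for i in range(1, n):                 # forward elimination
        q = a[i]/b[i-1]
        b[i] -= q*c[i-1]
        d[i] -= q*d[i-1]
    x = np.zeros(n)
    x[-1] = d[-1]/b[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = (d[i] - c[i]*x[i+1])/b[i]
    return x

# quick check on a small, diagonally dominant system with solution (1, 1, 1)
a = [0.0, 1.0, 1.0]; b = [4.0, 4.0, 4.0]; c = [1.0, 1.0, 0.0]; d = [5.0, 6.0, 5.0]
print(tdma(a, b, c, d))
```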
The Matlab program delay34 gives the following result with h = 0.05:
Max. deviation here means the maximum relative deviation. The solution of the difference equation converges slowly towards the solution y_I = 4/(1 + x)², as shown in (3.8) in Section 3.2. The maximum relative error in the solution is ≈ 5.6·10⁻⁴, which cannot be reduced without reducing the step length h.
The advantage of this method is that the linearization process is simple. The drawback is that it often converges slowly, and that we usually need good starting values to obtain convergence. When we know how the iteration proceeds, we can, as mentioned above, add a stopping criterion based on an iteration deviation. The iteration loop may then, for example, look as shown below:
it = 0; itmax = 10 ; dymax = 1.0; RelTol = 1.0e-5;
d = 0 ; % right-hand side
d(1) = - 4.0; d(n) = - 1.0;
while (dymax > RelTol) & (it < itmax)
it = it + 1; b = -(2.0 + fac*ym); % main diagonal
ym1 = tdma(a,b,c,d); % solve the linear system
Note that in order to enter the iteration loop, dymax must initially be larger than RelTol.
4.3.2 Newton linearization
Before we set up the formal development, we show a variant that is simple to use when the non-linearities are pure products.
(y_i^{m+1})² = (y_i^m + δy_i)² = (y_i^m)² + 2y_i^m·δy_i + (δy_i)²
           ≈ (y_i^m)² + 2y_i^m·δy_i = y_i^m (2y_i^{m+1} − y_i^m)   (4.122)
y_{i−1}^{m+1} − (2 + 3y_i^m h²)·y_i^{m+1} + y_{i+1}^{m+1} = −(3/2)(y_i^m h)²   (4.123)
y_{N−1}^{m+1} − (2 + 3y_N^m h²)·y_N^{m+1} = −(3/2)(y_N^m h)² − 1
where i = 2, 3, ..., N − 1, m = 0, 1, 2, ...
We have again obtained a linear, tridiagonal system that can be solved with the Thomas algorithm. We choose the same starting values as in the previous version. The Matlab program taylor34 gives the following result with h = 0.05:
δz_i → δy_i ,  z_i^{m+1} → y_i^{m+1} ,  z_i^m → y_i^m ,  F(y_i)^{m+1} = (y_i^{m+1})² ,  F(y_i)^m = (y_i^m)²
which inserted in (4.124) gives (y_i^{m+1})² ≈ (y_i^m)² + 2y_i^m·δy_i, identical to (4.122). The procedure in (4.124) is often called Newton linearization.
In many cases several indices enter the non-linear term. Assume, for example, that we have a term in which both z_i and z_{i+1} appear. We then use a Taylor expansion in two variables:
F(z_i, z_{i+1})^{m+1} ≈ F(z_i, z_{i+1})^m + (∂F/∂z_i)|_m δz_i + (∂F/∂z_{i+1})|_m δz_{i+1}   (4.125)
where δz_i = z_i^{m+1} − z_i^m ,  δz_{i+1} = z_{i+1}^{m+1} − z_{i+1}^m
where δz_{i−1} = z_{i−1}^{m+1} − z_{i−1}^m ,  δz_i = z_i^{m+1} − z_i^m ,  δz_{i+1} = z_{i+1}^{m+1} − z_{i+1}^m
4.3.3 Example:
Given the differential equation y″(x) + y(x)√(y(x)) = 0, which discretized with central differences becomes:
y_{i+1}^{m+1} − 2y_i^{m+1} + y_{i−1}^{m+1} + h² y_i^{m+1} √(y_i^{m+1}) = 0   at iteration m + 1.
It is the term y_i^{m+1}·√(y_i^{m+1}) that is non-linear and must be linearized. In this case we have only one index, and we use (4.124) with z_i → y_i and F(y_i) = y_i^{3/2}:
F(y_i)^{m+1} ≈ F(y_i)^m + (∂F/∂y_i)|_m δy_i = (y_i^m)^{3/2} + δy_i·(3/2)(y_i^m)^{1/2} ,  or
y_i^{m+1}√(y_i^{m+1}) ≈ y_i^m√(y_i^m) + (3/2)√(y_i^m)·δy_i ,  where δy_i = y_i^{m+1} − y_i^m
y_{i−1}^{m+1} + ((3/2)h²√(y_i^m) − 2)·y_i^{m+1} + y_{i+1}^{m+1} = (h²/2)·y_i^m√(y_i^m)   (4.127)
4.3.4 Example:
y″(x) + sin(y(x)) = 0, which discretized with central differences becomes:
y_{i+1}^{m+1} − 2y_i^{m+1} + y_{i−1}^{m+1} + h² sin(y_i^{m+1}) = 0   (4.128)
It is the term sin(y_i^{m+1}) that is non-linear and must be linearized. We have only one index and use (4.124) with z_i → y_i and F(y_i) = sin(y_i):
F(y_i)^{m+1} ≈ F(y_i)^m + (∂F/∂y_i)|_m δy_i = sin(y_i^m) + δy_i·cos(y_i^m) ,  or
y_{i−1}^{m+1} + (h² cos(y_i^m) − 2)·y_i^{m+1} + y_{i+1}^{m+1} = h²(y_i^m cos(y_i^m) − sin(y_i^m))   (4.129)
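A small sketch of iterating the linearized equation (4.129) with a banded solver is given below. The boundary values y(0) = 0, y(1) = 1, the grid and the fixed number of sweeps are assumptions chosen only to make the example self-contained.

```python
# Minimal sketch: Newton-linearized sweeps of (4.129) for y'' + sin(y) = 0.
# Boundary values and grid are illustrative assumptions only.
import numpy as np
from scipy.linalg import solve_banded

N = 49                                  # number of interior unknowns
h = 1.0/(N + 1)
y0, yN1 = 0.0, 1.0                      # assumed boundary values
ym = np.linspace(y0, yN1, N + 2)[1:-1]  # initial guess: straight line

for m in range(10):
    ab = np.zeros((3, N))
    ab[0, 1:] = 1.0                      # super-diagonal
    ab[1, :] = h**2*np.cos(ym) - 2.0     # main diagonal from (4.129)
    ab[2, :-1] = 1.0                     # sub-diagonal
    rhs = h**2*(ym*np.cos(ym) - np.sin(ym))
    rhs[0] -= y0                         # move known boundary values
    rhs[-1] -= yN1                       # to the right-hand side
    ym = solve_banded((1, 1), ab, rhs)

print(ym[N//2])                          # value near the middle of the interval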
4.3.5 Example:
Given the differential equation y″(x) + y(x)√(y′(x)) = 0, which discretized with central differences becomes:
y_{i+1}^{m+1} − 2y_i^{m+1} + y_{i−1}^{m+1} + α·y_i^{m+1}√(y_{i+1}^{m+1} − y_{i−1}^{m+1}) = 0 ,  where α = h√(h/2)   (4.130)
It is the term y_i^{m+1}√(y_{i+1}^{m+1} − y_{i−1}^{m+1}) that is non-linear and must be linearized. In this case we have three indices, and we use (4.126) with z_{i−1} → y_{i−1}, z_i → y_i, z_{i+1} → y_{i+1} and F(y_{i−1}, y_i, y_{i+1}) = y_i√(y_{i+1} − y_{i−1}). We find the individual terms in (4.126):
F(y_{i−1}, y_i, y_{i+1})^m = y_i^m √(y_{i+1}^m − y_{i−1}^m) ,  ∂F/∂y_{i−1}|_m = −y_i^m / (2√(y_{i+1}^m − y_{i−1}^m)) ,
∂F/∂y_i|_m = √(y_{i+1}^m − y_{i−1}^m)  and finally  ∂F/∂y_{i+1}|_m = y_i^m / (2√(y_{i+1}^m − y_{i−1}^m))
Altogether:
y_i^{m+1}√(y_{i+1}^{m+1} − y_{i−1}^{m+1}) ≈ y_i^m√(y_{i+1}^m − y_{i−1}^m) − [y_i^m/(2√(y_{i+1}^m − y_{i−1}^m))]·δy_{i−1}
  + √(y_{i+1}^m − y_{i−1}^m)·δy_i + [y_i^m/(2√(y_{i+1}^m − y_{i−1}^m))]·δy_{i+1}
Note that the expression is now linear, since y_{i−1}^{m+1}, y_i^{m+1} and y_{i+1}^{m+1} only appear to the first power. We insert this into (4.130), which now becomes linear:
(1 − α·y_i^m/(2g^m))·y_{i−1}^{m+1} − (2 − α·g^m)·y_i^{m+1} + (1 + α·y_i^m/(2g^m))·y_{i+1}^{m+1}
   = α·y_i^m/(2g^m)·(y_{i+1}^m − y_{i−1}^m)   (4.131)
where g^m = √(y_{i+1}^m − y_{i−1}^m)
When we use this method, we must first write the system in a form that shows the coefficients of the equation system. If the coefficients contain dependent variables at iteration m + 1, these must be replaced by values from iteration m. This is often the same as Taylor-expanding the coefficients about iteration m and keeping only the first term of the expansion.
ta2 = (1/N) Σ_{i=1}^{N} |δy_i| < tol_a   (4.134)

ta3 = sqrt( (1/N) Σ_{i=1}^{N} (δy_i)² ) < tol_a   (4.135)

tr3 = max(|δy_i|) / max(|y_i^{m+1}|) < tol_r   (4.138)
If the quantities we compute are of order 1, it does not matter whether we use an absolute or a relative test. We usually use a relative test, since it corresponds to the notion of "number of correct digits". If the quantity we compute is small over its whole computational range, we usually use an absolute test. A small sketch of these tests in code is given below.
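The sketch below evaluates the criteria (4.134), (4.135) and (4.138); the tolerance values and the small test arrays are arbitrary assumptions.

```python
# Minimal sketch of the convergence tests (4.134), (4.135) and (4.138).
import numpy as np

def converged(dy, y_new, tola=1.0e-8, tolr=1.0e-5):
    ta2 = np.mean(np.abs(dy))                          # (4.134)
    ta3 = np.sqrt(np.mean(dy**2))                      # (4.135)
    tr3 = np.max(np.abs(dy))/np.max(np.abs(y_new))     # (4.138)
    return ta2 < tola, ta3 < tola, tr3 < tolr

dy = np.array([1.0e-6, -2.0e-6, 5.0e-7])      # example iteration increments
y_new = np.array([1.2, 0.8, 1.5])             # example current iterate
print(converged(dy, y_new))
```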
Let us apply the tests above to the problem in Section 3.2, where we compute the solution y_II shown in Figure 3.8.
We repeat the equation from (4.115):
y″(x) = (3/2) y²   (4.139)
[Figure: The two solutions y_s and y_II of (4.139) plotted for 0 ≤ x ≤ 1]
x = (h:h:1.0-h)';
ym = -20*x.*(1 - x); % initial guess
b = zeros(n,1); d = b; dy = b; % allocation
fprintf(' Itr. \n');
for it = 1:9
b = -(2.0 + fac*ym); % main diagonal
d = -(fac*0.5)*ym.^2 ; % right-hand side
d(n) = d(n)- 1.0;
d(1) = d(1) - 4.0;
ym1 = tdma(a,b,c,d); % solve the linear system
dy = abs(ym1 - ym);
tr3 = max(dy)/max(abs(ym1));
tr2 = sum(dy)/sum(abs(ym1));
ym = ym1; % update y-values
fprintf(' %10d %9.2e %9.2e \n',it,tr2, tr3);
end
δy_{N−1} − (2 + 3y_N^m h²)·δy_N = −(1 − 2y_N^m + y_{N−1}^m) + (3/2)(y_N^m h)²   (4.141)
where i = 2, 3, ..., N − 1 , m = 0, 1, 2, ...
We have used y_0 = 4, δy_0 = 0, y_{N+1} = 1, δy_{N+1} = 0 from the boundary conditions.
After each iteration the y-values are updated by y_i^{m+1} = y_i^m + δy_i.
4.3.8 Quasilinearization
So far we have first discretized the equation and then linearized it. It is entirely possible to linearize the equation first. This is usually called quasilinearization. Let us consider a general non-linear second-order equation
y″ = f(x, y, y′)   (4.143)
and define
g(x, y, y′, y″) ≡ y″ − f(x, y, y′) = 0   (4.144)
δy = y_{m+1} − y_m ,  δy′ = y′_{m+1} − y′_m ,  δy″ = y″_{m+1} − y″_m   (4.145)
where m and m + 1 as usual denote iteration numbers. (In most of this section we write the iteration numbers as subscripts.)
Taylor expansion of (4.144) about iteration m:
g(x, y_{m+1}, y′_{m+1}, y″_{m+1}) ≈ g(x, y_m, y′_m, y″_m) + (∂g/∂y)|_m δy + (∂g/∂y′)|_m δy′ + (∂g/∂y″)|_m δy″   (4.146)
Assume that we have iterated so many times that
g(x, y_{m+1}, y′_{m+1}, y″_{m+1}) ≈ g(x, y_m, y′_m, y″_m) ≈ 0
which inserted in (4.146) gives:
(∂g/∂y)|_m δy + (∂g/∂y′)|_m δy′ + (∂g/∂y″)|_m δy″ = 0   (4.147)
Differentiation of (4.144) gives:
∂g/∂y = −∂f/∂y ,  ∂g/∂y′ = −∂f/∂y′ ,  ∂g/∂y″ = 1   (4.148)
Inserting from (4.145) into (4.149) and using (4.143), we obtain:
y″_{m+1} − (∂f/∂y′)|_m · y′_{m+1} − (∂f/∂y)|_m · y_{m+1}
   = f(x, y_m, y′_m) − (∂f/∂y)|_m · y_m − (∂f/∂y′)|_m · y′_m   (4.150)
Finally we write (4.150) in our usual notation, with the iteration number as a superscript:
(y″)^{m+1} − (∂f/∂y′)|_m · (y′)^{m+1} − (∂f/∂y)|_m · y^{m+1}
   = f(x, y^m, (y′)^m) − (∂f/∂y′)|_m · (y′)^m − (∂f/∂y)|_m · y^m   (4.151)
We have used a second-order equation as an example, but (4.149)–(4.151) generalize immediately to an n-th order equation, e.g. a third-order one:
(y‴)^{m+1} − (∂f/∂y″)|_m · (y″)^{m+1} − (∂f/∂y′)|_m · (y′)^{m+1} − (∂f/∂y)|_m · y^{m+1}
   = f(x, y^m, (y′)^m, (y″)^m) − (∂f/∂y″)|_m · (y″)^m − (∂f/∂y′)|_m · (y′)^m − (∂f/∂y)|_m · y^m   (4.152)
4.3.9 Example:
1)
The equation y″(x) = (3/2) y²
Here ∂f/∂y′ = 0, ∂f/∂y = 3y, which inserted in (4.151) gives:
(y″)^{m+1} − 3y^m·y^{m+1} = −(3/2)(y^m)²
Discretizing with central differences, y_i″ ≈ (y_{i+1} − 2y_i + y_{i−1})/h², we obtain the following difference equation:
y_{i−1}^{m+1} − (2 + 3h² y_i^m)·y_i^{m+1} + y_{i+1}^{m+1} = −(3/2)(h y_i^m)²
which agrees with (4.123).
2)
The Falkner-Skan equation y‴ + y·y″ + β·[1 − (y′)²] = 0
We write the equation in the form:
y‴ = −(y·y″ + β·[1 − (y′)²]) = f(x, y, y′, y″)
Eq. (4.152) gives, with ∂f/∂y″ = −y, ∂f/∂y′ = 2βy′ and ∂f/∂y = −y″:
(y‴)^{m+1} + y^m(y″)^{m+1} − 2β(y′)^m(y′)^{m+1} + (y″)^m y^{m+1} = y^m(y″)^m − β[1 + ((y′)^m)²]   (4.153)
[Figure 4.18: Circular clamped plate with radius R0, loaded by a concentrated force P at the centre; deflection W and inclination φ]
4.7 Exercises
Exercise 6: Circular clamped plate with concentrated single load
Figure 4.18 shows a rigidly clamped plate with radius R0. The plate is loaded with a single load P at the plate centre. The differential equation for the deflection W(R) is given by:
d³W/dR³ + (1/R) d²W/dR² − (1/R²) dW/dR = P/(2πD·R)   (4.154)
The plate stiffness D is given by D = Et³/(12(1 − ν²)), where E is the modulus of elasticity, ν is Poisson's ratio and t is the plate thickness. The boundary conditions are given by:
W(R0) = dW/dR (R0) = 0 ,  dW/dR (0) = 0   (4.155)
The first two follow from the rigid clamping, and the last from symmetry. We introduce dimensionless variables r = R/R0 and ω(r) = 16πD·W/(P R0²),
so that Eq. (4.154) can be written:
d³ω/dr³ + (1/r) d²ω/dr² − (1/r²) dω/dr = 8/r ,  0 < r < 1   (4.156)
and Eq. (4.155):
ω(1) = dω/dr (1) = 0 ,  dω/dr (0) = 0   (4.157)
The analytical solution is given by:
ω(r) = r²[2 ln(r) − 1] + 1 ,  dω/dr (r) = 4 r ln(r)   (4.158)
Pen and paper:
The following problems should be done using pen and paper:
a) We introduce the inclination φ(r) = −dω/dr and insert into Eq. (4.156):
d²φ/dr² + (1/r) dφ/dr − φ/r² = −8/r   (4.159)
with boundary conditions:
φ(0) = φ(1) = 0   (4.160)
Discretize Eq. (4.159) with central differences. Partition the interval [0, 1] into N segments, so that h = Δr = 1/N, which gives r_i = h·i, i = 0, 1, ..., N. The coefficient matrix should be tridiagonal. Write out the discretized equation for i = 1, for a general i, and for i = N − 1. Write the expressions for the diagonals and the right-hand side.
b) Choose N = 4 (h = 1/4) in a) and solve the corresponding system of equations.
c) We will now find ω(r) by integrating the equation dω/dr = −φ(r) using Heun's method. Since the right-hand side is independent of ω, Heun's method reduces to the trapezoidal method. (The predictor is not necessary.)
Find the ω-values in the same points as in b).
d) Eq. (4.159) can also be written:
d/dr [ (1/r) d/dr (r·φ(r)) ] = −8/r   (4.161)
Discretize Eq. (4.161) using Eq. (2.43). The coefficient matrix should be tridiagonal. Write out the discretized equation for i = 1, for a general i, and for i = N − 1. Write the expressions for the diagonals and the right-hand side.
e) Solve Eqs. (4.156) and (4.157) directly by introducing a new independent variable z = ln(r) and show that Eq. (4.156) can be written ω‴(z) − 2ω″(z) = 8r² ≡ 8e^{2z}. Next, guess a particular solution of the form ω_p(z) = k·z·e^{2z}, where k is a constant. Finally determine the constants using (4.157).
Programming:
a) Write a Python program that solves b) and c) from the pen-and-paper exercise numerically. Experiment with finer segmentation. If you want, you can download the Python skeleton clampedPlate.py and fill in where applicable.
Hint 2, programming:
# src-ch2/clampedPlate.py
import numpy as np
import scipy
import scipy.linalg
import scipy.sparse
import scipy.sparse.linalg
import matplotlib.pyplot as plt

def solve_phi(N, h):
    """Solve the discretized Eq. (4.159) for the inclination phi."""
    # mainDiag = ?
    # subDiag = ?
    # superDiag = ?
    # RHS = ?
    return Phi

def solve_omega(Phi, N, h):
    """Integrate d(omega)/dr = -phi with the trapezoidal rule."""
    Omega = np.zeros_like(Phi)
    Omega[-1] = 0
    #for i in range(N - 1, -1, -1):
        # i takes the values N-2, N-3, .... , 1, 0
        #Omega[i] = ?
    return Omega

N = 3
r = np.linspace(0, 1, N + 1)
h = 1./N
Phi = solve_phi(N, h)
Omega = solve_omega(Phi, N, h)

Omega_analytic = r[1:]**2*(2*np.log(r[1:]) - 1) + 1
Omega_analytic = np.append(1, Omega_analytic) # log(0) not possible
Phi_analytic = - 4*r[1:]*np.log(r[1:])
Phi_analytic = np.append(0, Phi_analytic) # log(0) not possible

fig, ax = plt.subplots(2, 2)
ax[0][0].set_ylabel(r'$\phi$')
plt.show()
Chapter 5
Mathematical properties of partial differential equations

Du/Dt ≡ ∂u/∂t + u ∂u/∂x + v ∂u/∂y = −(1/ρ) ∂p/∂x + ν(∂²u/∂x² + ∂²u/∂y²)   (5.1)

Dv/Dt ≡ ∂v/∂t + u ∂v/∂x + v ∂v/∂y = −(1/ρ) ∂p/∂y + ν(∂²v/∂x² + ∂²v/∂y²)   (5.2)

DT/Dt ≡ ∂T/∂t + u ∂T/∂x + v ∂T/∂y = α(∂²T/∂x² + ∂²T/∂y²)   (5.3)

∂²u/∂x² + ∂²u/∂y² = f(x, y)   (5.4)
∂u/∂t = α ∂²u/∂x²   (5.5)
The wave equation; the basic acoustic equation.
∂²u/∂t² = a0² ∂²u/∂x²   (5.6)
First-order linear wave equation (a0 constant). Transport equation, the advection equation.
∂u/∂t + a0 ∂u/∂x = 0   (5.7)
The inviscid Burgers equation. A model for the Euler equations.
∂u/∂t + u ∂u/∂x = 0   (5.8)
Burgers' equation. A model for the incompressible Navier-Stokes equations.
∂u/∂t + u ∂u/∂x = ν ∂²u/∂x²   (5.9)
The Tricomi equation. A model for transonic flow.
y ∂²u/∂x² + ∂²u/∂y² = 0   (5.10)
The convection-diffusion equation, also called the linear Burgers equation. It models a number of the cases above.
∂u/∂t + u0 ∂u/∂x = α ∂²u/∂x²   (5.11)
The terms transport, convection and advection are often used interchangeably. (5.7) above is known as the advection equation.
The equation is non-linear if a or b are also functions of ∂u/∂x and/or ∂u/∂y. The transport side of, e.g., the Navier-Stokes equations thus consists of quasi-linear first-order PDEs, even though we usually use the term non-linear.
We write (5.12) in the form:
a (∂u/∂x + (b/a) ∂u/∂y) + c = 0   (5.13)
Assuming that u is continuously differentiable:
du = (∂u/∂x) dx + (∂u/∂y) dy  →  du/dx = ∂u/∂x + (dy/dx) ∂u/∂y
We now define the characteristic of (5.12) by:
dy/dx = b/a   (5.14)
Inserting (5.14) above gives:
a (∂u/∂x + (b/a) ∂u/∂y) = a du/dx
which further gives:
a·du + c·dx = 0   (5.15)
Along the characteristic given by dy/dx = b/a, (5.12) reduces to an ODE given by (5.15). (5.15) is called the compatibility equation for (5.12). If (5.12) is linear, we can in principle first find the characteristics and then solve (5.15). This is generally not possible when (5.12) is quasi-linear or non-linear, because dy/dx is then a function of u, that is, of the solution itself.
Example 5.2.1.
The advection equation (5.7), ∂u/∂t + a0 ∂u/∂x = 0, has the solution u(x, t) = f(x − a0 t), a wave that propagates with constant profile and speed along the x-axis. The wave moves to the right for a0 > 0. The characteristic is here given by dx/dt = a0.
This equation can be integrated directly: x = a0 t + C1. The constant C1 can be determined, e.g., by x = x0 for t = 0, which gives C1 = x0. The equation of the characteristic curve then becomes x = a0 t + x0. (5.15) with c = 0 and a = 1 gives du = 0. This means that u, which here is the particle velocity, is constant along the characteristic.
As an example of initial values, let us choose
u(x, 0) = 1 − x²  for |x| ≤ 1 ,  u(x, 0) = 0  for |x| > 1
[Figure: Initial profile u(x, 0) = 1 − x² on |x| ≤ 1, and the characteristics dx/dt = a0 > 0 in the x-t plane]
Example 5.2.2.
The inviscid Burgers equation (5.8) is given by:
∂u/∂t + u ∂u/∂x = 0
It has the solution u(x, t) = f(x − u·t) and characteristics given by dx/dt = u. This is the same solution as in the previous example, with the important difference that a0 is now replaced by u. The slope of the characteristics now varies with x, as indicated in the figure on the next page.
The same initial condition as in the previous example gives the following solution:
u(x, t) = 1 − ζ²  for |ζ| ≤ 1 ,  u(x, t) = 0  for |ζ| > 1 ,  where ζ = x − u·t
Again we see that the only difference from the previous example is that a0 is replaced by u. Solved explicitly for u, with u(x, t) = 0 for |ζ| > 1:
[Figure: Characteristics dx/dt = u in the x-t plane]

u(x, t) = (1/(2t²)) [ (2xt − 1) ± g(x, t) ] ,  g(x, t) = √(1 − 4xt + 4t²) ,  |ζ| ≤ 1   (5.16)

∂u/∂x = (g(x, t) ± 1) / (t·g(x, t))   (5.17)
[Figure: Characteristics in the x-t plane (top) and the solution u(x, t) at t = 0, t = 1/2 and t = 1 (bottom)]
(∂φ/∂x)(∂²φ/∂x²) − ∂φ/∂x − e^{xy} sin φ = 0   quasi-linear   (5.19)

x ∂²φ/∂x² − ∂φ/∂x − e^{xy} sin φ = 0   semi-linear   (5.20)

(∂²φ/∂x²)² + ∂²φ/∂y² = 0   non-linear   (5.21)
A PDE that can be written in the form
A ∂u/∂x + B ∂u/∂y + C ∂v/∂y + f = 0   (5.23)
∂u/∂y − ∂v/∂x = 0   (5.24)
It can be shown that higher-order (quasi-linear) PDEs can always be written as a system of first-order PDEs, but the splitting can be done in many ways. If (5.18) is, for instance, the potential equation of gas dynamics, we can interpret φ as the velocity potential. (5.24) is then the condition for irrotational flow.
We will now try to write (5.23) and (5.24) as a total differential (see (5.13)–(5.15)). We multiply (5.24) by an arbitrary scalar σ and add it to (5.23):
A [ ∂u/∂x + ((B + σ)/A) ∂u/∂y ] − σ [ ∂v/∂x − (C/σ) ∂v/∂y ] + f = 0   (5.25)
Now we have:
du/dx = ∂u/∂x + (dy/dx) ∂u/∂y ,  dv/dx = ∂v/∂x + (dy/dx) ∂v/∂y
Hence:
du/dx = ∂u/∂x + λ ∂u/∂y ,  dv/dx = ∂v/∂x + λ ∂v/∂y   (5.26)
where we have defined λ by:
λ = dy/dx   (5.27)
λ = (B − C/λ)/A   (5.31)
The following quadratic equation for determining λ follows from (5.31):
Aλ² − Bλ + C = 0   (5.32)
or, using (5.27):

(1 − M²) ∂²φ/∂x² + ∂²φ/∂y² = 0 ,  M = the Mach number.   (5.36)
• M = 1: Parabolic (degenerate)
• M < 1: Elliptic. Subsonic flow
• M > 1: Hyperbolic. Supersonic flow
∂²u/∂t² = a0² ∂²u/∂x²   (5.40)
(5.40) has the characteristics dx/dt = ±a0, where a0 is the wave propagation speed. We denote dx/dt = +a0 by C+ and dx/dt = −a0 by C−. The solution domain for (5.40) is shown in Fig. 5.5 below.
[Figure 5.5: Characteristics C+ and C− through a point P; the region of dependency lies between a and b on the x-axis]
∂²u/∂x² + ∂²u/∂y² = 0   (5.41)
The solution domain is as shown in Fig. 5.6. We have no real characteristics. The whole domain Ω, including the boundary curve C, is both the region of dependency and the region of influence for P: any change of a value in Ω or on C will influence the value at P. (5.41) is then a pure boundary value problem, and the following boundary conditions are relevant:
[Figure 5.6: Elliptic problem: the domain with boundary curve C and outward normal n is both region of influence and region of dependency]
• u prescribed on C: Dirichlet condition
• ∂u/∂n prescribed on C: Neumann condition
• u and ∂u/∂n prescribed: mixed condition (Robin condition)
∂u/∂t = ∂²u/∂x²   (5.42)
The solution domain for (5.42) is shown in Fig. 5.7 below.
From the classification equation (5.33) we find:
dt = 0 ⇒ t = constant is the characteristic (the characteristic curve) in this case. Hence a0 = dx/dt = ∞, which means that the propagation speed along the characteristic t = constant is infinitely large. The value at P depends on the values at all points in the physical space at earlier values of the time t, including the present time. (5.42) behaves like an elliptic equation for each value t = constant. Instead of t, we may have a space coordinate. The time-like coordinate is often called the evolution variable: the direction in which we march.
[Figure 5.7: Parabolic problem: region of influence above the characteristic t = constant through P, region of dependency below, between a and b on the x-axis]
often the properties of two or all three standard types at the same time. (Fortunately not at the same point.)
Example 5.5.1.
The continuity equation, regarded as an equation for the density ρ, is a first-order PDE whose characteristic is the particle path. This equation is of hyperbolic character both for steady and unsteady flow.
Example 5.5.2.
The equations of motion, regarded as equations for the velocity V, are elliptic for steady flow and parabolic for unsteady flow. The same can be said of the energy equation when we regard it as an equation for the temperature T.
If we assume that the N-S system retains the mathematical properties of each of the equations considered individually, we can say that the N-S system is parabolic-hyperbolic for unsteady flow and elliptic-hyperbolic for steady flow. The hyperbolic character is in both cases due to the continuity equation. Note that the evolutionary character of the system disappears for an incompressible fluid, even if the flow is unsteady. The boundary conditions for the whole system are in principle set up from the considerations given above. In complicated cases this becomes a mixture of mathematical, physical and empirical considerations.
Fig. 5.5 shows the characteristic curves as straight lines. This will not be the case in general, even for linear equations. For quasi-linear equations the characteristics, as mentioned earlier, are also functions of the solution itself.
Example 5.5.4.
The potential equation in two dimensions is given by:
(u² − a²) ∂u/∂x + (v² − a²) ∂v/∂y + 2uv ∂u/∂y = 0
∂u/∂t = −iω·u(x, t) ,  ∂²u/∂t² = −ω²·u(x, t)   (5.44)

∂u/∂x = iβ·u(x, t) ,  ∂²u/∂x² = −β²·u(x, t)   (5.45)

∂³u/∂x³ = −iβ³·u(x, t) ,  ∂⁴u/∂x⁴ = β⁴·u(x, t)   (5.46)

The diffusion equation
∂u/∂t = α ∂²u/∂x²   (5.47)
Inserting from (5.46) in (5.47): (αβ² − iω)·u(x, t) = 0 → iω = αβ², which gives:
u(x, t) = e^{−αβ²·t} · e^{iβ·x}   (5.48)
The imaginary part of (5.48) gives u(x, t) = e^{−αβ²·t} sin(βx), which, for example, is in agreement with the solution in appendix 7 of the cited reference. (More details in the chapter on diffusion problems.)
The advection equation
∂u/∂t + a0 ∂u/∂x = 0   (5.49)
Inserting from (5.46) in (5.49): i(−ω + a0 β)·u(x, t) = 0 → ω = a0 β, which gives:
u(x, t) = e^{iβ(x − a0 t)}
The wave equation
∂²u/∂t² = a0² ∂²u/∂x²   (5.51)
Inserting from (5.46) in (5.51): (−ω² + a0² β²)·u(x, t) = 0 → ω = ±a0 β, which gives:
u(x, t) = e^{iβ(x ∓ a0 t)}
∂u/∂t + a0 ∂u/∂x + b0 ∂³u/∂x³ = 0   (5.53)
This is an equation describing water waves on shallow water; (5.53) is a linearized version of the full equation (usually referred to as the KdV equation). Inserting from (5.46) in (5.53):
For cases 2) and 3) the phase velocity c_f = ω/β = a0 is independent of the wave number. Such waves are called non-dispersive.
Vibrating beam
∂²u/∂t² + a² ∂⁴u/∂x⁴ = 0 ,  a² = EI/m   (5.56)
This describes transverse vibrations of a beam according to elementary beam theory. EI is the bending stiffness of the beam and m is the mass per unit length. Inserting from (5.46) in (5.56): (−ω² + a²β⁴)·u(x, t) = 0 → ω = ±a·β², which gives:
6.1 Introduction
Important and frequently occurring practical problems are governed by elliptic PDEs, including steady-state temperature distribution in solids.
• Laplace:
∇²u = 0   (6.1)
• Poisson:
∇²u = q   (6.2)
• Helmholtz:
∇²u + c·u = q   (6.3)
where ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z² in 3D, and ∇² = ∂²/∂x² + ∂²/∂y² in 2D.
The 2D versions of the famous equations above are special cases of a generic elliptic PDE, which may be represented by:
∂/∂x (a ∂u/∂x) + ∂/∂y (b ∂u/∂y) + c·u = q   (6.4)
where a, b, c and q may be functions of x and y, and a and b have the same sign.
[Figure: Solution domain for u(x, y) with boundary curve C and outward normal n]
Typical Poisson problems include:
• The stress equation for torsion of an elastic rod
xi = x0 + i · ∆x, i = 0, 1, 2, . . .
yj = y0 + j · ∆y, j = 0, 1, 2, . . .
[Figure 6.3: Computational grid with spacings Δx and Δy; the point (i, j) and its neighbours (i±1, j) and (i, j±1)]

∂²u/∂x² |_{i,j} = (u_{i+1,j} − 2u_{i,j} + u_{i−1,j}) / (Δx)² + O((Δx)²)   (6.10)
Corresponding formulas for ∂u/∂y and ∂²u/∂y² follow directly:
Forward differences:
∂u/∂y |_{i,j} = (u_{i,j+1} − u_{i,j}) / Δy   (6.11)
Backward differences:
∂u/∂y |_{i,j} = (u_{i,j} − u_{i,j−1}) / Δy   (6.12)
Central differences:
∂u/∂y |_{i,j} = (u_{i,j+1} − u_{i,j−1}) / (2Δy)   (6.13)
∂²u/∂y² |_{i,j} = (u_{i,j+1} − 2u_{i,j} + u_{i,j−1}) / (Δy)²   (6.14)
Besides seeking a fast solution, we are also looking for schemes with efficient storage management and a reasonable programming effort.
Let us start by discretizing the stationary heat equation in a rectangular plate with dimensions as given in Figure 6.4:
∂²T/∂x² + ∂²T/∂y² = 0   (6.17)
[Figure 6.4: Rectangular plate of width 1.0 with T = 100 prescribed along the top boundary]
xi = x0 + i · h, i = 0, 1, 2, . . .
yj = y0 + j · h, j = 0, 1, 2, . . .
which states that the temperature T_{i,j} at the location (i, j) depends on the values of its neighbours to the left, right, up and down. Frequently, the neighbours are labelled by the compass directions east, west, north and south, so that the stencil takes the form:
T_e + T_w + T_n + T_s − 4T_m = 0   (6.19)
[Figure: Compass labelling of the five-point stencil: T_n above, T_w, T_m, T_e across, T_s below]
The smoothing nature of elliptic problems may be seen even more clearly by isolating T_m in (6.19) on the left-hand side:
T_m = (T_e + T_w + T_n + T_s) / 4   (6.20)
showing that the temperature T_m in each point is the average of the temperatures of the neighbours (to the east, west, north, and south). A small sketch of one such averaging sweep is given below.
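The sketch below performs repeated Jacobi-type sweeps of the averaging stencil (6.20) with numpy slicing; the grid size and boundary temperatures are illustrative assumptions only.

```python
# Minimal sketch of repeated averaging sweeps of (6.20); grid and boundary
# values are illustrative assumptions.
import numpy as np

T = np.zeros((7, 5))      # small grid of nodes with Dirichlet values on the rim
T[-1, :] = 100.0          # T = 100 along the top boundary

for _ in range(200):      # each sweep replaces interior values by (6.20)
    T[1:-1, 1:-1] = 0.25*(T[1:-1, 2:] + T[1:-1, :-2] + T[2:, 1:-1] + T[:-2, 1:-1])

print(T[3, 2])            # temperature near the middle of the plate
```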
The temperature is prescribed at the boundaries (i.e. Dirichlet boundary conditions) and is given by:
T = 0.0 at y = 0
T = 0.0 at x = 0 and x=1 for 0 ≤ y < 1.5 (6.21)
T = 100.0 at y = 1.5
Our mission is now to find the temperature distribution over the plate by using (6.18) and (6.21) with Δx = Δy = 0.25. In each discretized point in Figure 6.5 the temperatures need to satisfy (6.18), meaning that we have to satisfy as many equations as we have unknown temperatures. As the temperature in each point depends on its neighbours, we end up with a system of algebraic equations.
To set up the system of equations we traverse our unknowns one by one in a systematic manner and make use of (6.18) and (6.21) in each. All unknown temperatures close to any of the boundaries (left, right, top, bottom) in Figure 6.4 will be influenced by the prescribed and known temperatures at the wall via the 5-point stencil (6.18). Prescribed values do not have to be calculated and can therefore be moved to the right-hand side of the equation; by doing so we modify the numerical stencil in that specific discretized point. In fact, inspection of Figure 6.4 reveals that only three unknown temperatures are not explicitly influenced by the presence of the wall (T_{2,2}, T_{2,3}, and T_{2,4}). The four temperatures in the corners (T_{1,1}, T_{1,5}, T_{3,1}, and T_{3,5}) have two prescribed values to be accounted for on the right-hand side of their specific version of the generic numerical stencil (6.18). All other unknown temperatures close to the wall have only one prescribed value to be accounted for in their specific numerical stencil.
By starting at the lower left corner and traversing in the y-direction first, and subsequently in the x-direction, we get the following system of equations:
-4  1  0  0  0  1  0  0  0  0  0  0  0  0  0     T11      0
 1 -4  1  0  0  0  1  0  0  0  0  0  0  0  0     T12      0
 0  1 -4  1  0  0  0  1  0  0  0  0  0  0  0     T13      0
 0  0  1 -4  1  0  0  0  1  0  0  0  0  0  0     T14      0
 0  0  0  1 -4  0  0  0  0  1  0  0  0  0  0     T15   -100
 1  0  0  0  0 -4  1  0  0  0  1  0  0  0  0     T21      0
 0  1  0  0  0  1 -4  1  0  0  0  1  0  0  0     T22      0
 0  0  1  0  0  0  1 -4  1  0  0  0  1  0  0  ·  T23  =   0
 0  0  0  1  0  0  0  1 -4  1  0  0  0  1  0     T24      0
 0  0  0  0  1  0  0  0  1 -4  0  0  0  0  1     T25   -100
 0  0  0  0  0  1  0  0  0  0 -4  1  0  0  0     T31      0
 0  0  0  0  0  0  1  0  0  0  1 -4  1  0  0     T32      0
 0  0  0  0  0  0  0  1  0  0  0  1 -4  1  0     T33      0
 0  0  0  0  0  0  0  0  1  0  0  0  1 -4  1     T34      0
 0  0  0  0  0  0  0  0  0  1  0  0  0  1 -4     T35   -100
(6.24)
The analytical solution of (6.17) and (6.21) may be found to be:
T(x, y) = 100 · Σ_{n=1}^{∞} A_n sinh(λ_n y) · sin(λ_n x)   (6.25)
where λ_n = π·n and A_n = 4 / (λ_n sinh((3/2)λ_n))
import scipy.linalg
import scipy.sparse
import scipy.sparse.linalg
import matplotlib.pylab as plt
import time
from math import sinh
# update A matrix
A[4, 5], A[5, 4], A[10, 9], A[9, 10] = 0, 0, 0, 0
# update RHS:
b[4], b[9], b[14] = -100, -100, -100
#print A.toarray()
tic=time.clock()
theta = scipy.sparse.linalg.spsolve(A,b) #theta=sc.linalg.solve_triangular(A,d)
toc=time.clock()
print ’sparse solver time:’,toc-tic
tic=time.clock()
theta2=scipy.linalg.solve(A.toarray(),b)
toc=time.clock()
print ’linalg solver time:’,toc-tic
# surfaceplot:
x = np.linspace(0, 1, 5)
y = np.linspace(0, 1.5, 7)
X, Y = np.meshgrid(x, y)
T = np.zeros_like(X)
T[-1,:] = 100
for n in range(1,6):
T[n,1] = theta[n-1]
T[n,2] = theta[n+5-1]
T[n,3] = theta[n+10-1]
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
CHAPTER 6. ELLIPTIC PDES 230
import numpy as np
fig = plt.figure()
ax = fig.gca(projection=’3d’)
surf = ax.plot_surface(X, Y, T, rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
ax.set_zlim(0, 110)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter(’%.02f’))
ax.set_xlabel(’x’)
ax.set_ylabel(’y’)
ax.set_zlabel(’T [$^o$C]’)
ax.set_xticks(x)
ax.set_yticks(y)
# Change some default values to make plots more readable on the screen
LNWDT=2; FNT=15
plt.rcParams[’lines.linewidth’] = LNWDT; plt.rcParams[’font.size’] = FNT
xmax=1.0
ymax=1.5
nx = Nx-1
ny = Ny-1
n = (nx)*(ny) #number of unknowns
print n, nx, ny
d = np.ones(n) # diagonals
b = np.zeros(n) #RHS
d0 = d*-4
d1 = d[0:-1]
d5 = d[0:-ny]
A = scipy.sparse.diags([d0, d1, d1, d5, d5], [0, 1, -1, ny, -ny], format=’csc’)
tic=time.clock()
theta = scipy.sparse.linalg.spsolve(A,b) #theta=sc.linalg.solve_triangular(A,d)
toc=time.clock()
print ’sparse solver time:’,toc-tic
tic=time.clock()
theta2=scipy.linalg.solve(A.toarray(),b)
toc=time.clock()
print ’linalg solver time:’,toc-tic
# surfaceplot:
x = np.linspace(0, xmax, Nx + 1)
y = np.linspace(0, ymax, Ny + 1)
X, Y = np.meshgrid(x, y)
T = np.zeros_like(X)
for j in range(1,ny+1):
for i in range(1, nx + 1):
T[j, i] = theta[j + (i-1)*ny - 1]
fig = plt.figure()
ax = fig.gca(projection=’3d’)
surf = ax.plot_surface(X, Y, T, rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
ax.set_zlim(0, Ttop+10)
ax.set_xlabel(’x’)
ax.set_ylabel(’y’)
ax.set_zlabel(’T [$^o$C]’)
nx=4
xticks=np.linspace(0.0,xmax,nx+1)
ax.set_xticks(xticks)
ny=8
yticks=np.linspace(0.0,ymax,ny+1)
ax.set_yticks(yticks)
nTicks=5
dT=int(Ttop/nTicks)
Tticklist=range(0,Ttop+1,dT)
ax.set_zticks(Tticklist)
Numerical values are listed below with analytical values from (6.25) enclosed
in paranthesis.
∂²T/∂x² + ∂²T/∂y² = 0   (6.26)
[Figure 6.7: Unit square with ∇²T = 0; T = 1.0 at y = 1, T = 0.0 at x = 1, and ∂T/∂x = 0 at x = 0, ∂T/∂y = 0 at y = 0]
T = 1  for y = 1 and 0 ≤ x ≤ 1
T = 0  for x = 1 and 0 ≤ y < 1
[Figure 6.8: Ghost cells T_{-1,j} mirroring the interior values T_{1,j} across the boundary nodes T_{0,j}]
Figure 6.8: Illustration of how ghost cells with negative indices may be used to implement von Neumann boundary conditions.
∂T/∂x |_{i,j} = (−3T_{i,j} + 4T_{i+1,j} − T_{i+2,j}) / (2Δx)
which for ∂T/∂x = 0 at x = 0 reduces to:
T_{0,j} = (4T_{1,j} − T_{2,j}) / 3   (6.30)
For the problem at hand, illustrated in Figure 6.7, the central difference approximation (6.29) and the forward difference approximation (6.30) yield fairly similar results, but (6.30) results in a shorter code when the methods in 6.4 are employed. A small sketch of applying (6.30) is given below.
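The sketch below applies the second-order one-sided formula (6.30) to enforce ∂T/∂x = 0 along x = 0 after each iteration; the array layout (first index j, second index i) and the random test field are assumptions made only for illustration.

```python
# Minimal sketch of enforcing dT/dx = 0 at x = 0 with (6.30): T0 = (4*T1 - T2)/3.
# Array layout (rows = j, columns = i) is an assumption.
import numpy as np

T = np.random.rand(6, 6)                 # some iterate of the temperature field
T[:, 0] = (4.0*T[:, 1] - T[:, 2])/3.0    # update the boundary column i = 0
print(T[:, 0])
```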
To solve the problem in Figure 6.7 we employ the numerical stencil (6.19) for the unknowns in the field (not influenced by the boundaries):
T_w + T_e + T_s + T_n − 4T_m = 0   (6.31)
where we use the same ordering as given in Figure 6.9. For the boundary conditions we have chosen an implementation by means of (6.29), which is illustrated in Figure 6.9 by the dashed lines.
Before setting up the complete equation system it normally pays off to look at the boundaries, like the lowermost boundary, which by systematic use of (6.31) along with the boundary conditions in (6.28) yields:
[Figure 6.9: Grid for the Neumann problem with unknowns T1–T16; ghost nodes mirror the interior values across x = 0 and y = 0, T = 1.0 along y = 1 and T = 0.0 along x = 1]
T9 + 2T14 − 4T13 = −1
T10 + T13 + T15 − 4T14 = −1
T11 + T14 + T16 − 4T15 = −1
T12 + T15 − 4T16 = −1
Notice that for the coarse mesh with only 16 unknown temperatures (see Figure 6.7) only 4 (T6, T7, T10, and T11) are not explicitly influenced by the boundary conditions. The complete system becomes:
-4  2  0  0  2  0  0  0  0  0  0  0  0  0  0  0     T1      0
 1 -4  1  0  0  2  0  0  0  0  0  0  0  0  0  0     T2      0
 0  1 -4  1  0  0  2  0  0  0  0  0  0  0  0  0     T3      0
 0  0  1 -4  0  0  0  2  0  0  0  0  0  0  0  0     T4      0
 1  0  0  0 -4  2  0  0  1  0  0  0  0  0  0  0     T5      0
 0  1  0  0  1 -4  1  0  0  1  0  0  0  0  0  0     T6      0
 0  0  1  0  0  1 -4  1  0  0  1  0  0  0  0  0     T7      0
 0  0  0  1  0  0  1 -4  0  0  0  1  0  0  0  0  ·  T8  =   0
 0  0  0  0  1  0  0  0 -4  2  0  0  1  0  0  0     T9      0
 0  0  0  0  0  1  0  0  1 -4  1  0  0  1  0  0     T10     0
 0  0  0  0  0  0  1  0  0  1 -4  1  0  0  1  0     T11     0
 0  0  0  0  0  0  0  1  0  0  1 -4  0  0  0  1     T12     0
 0  0  0  0  0  0  0  0  1  0  0  0 -4  2  0  0     T13    -1
 0  0  0  0  0  0  0  0  0  1  0  0  1 -4  1  0     T14    -1
 0  0  0  0  0  0  0  0  0  0  1  0  0  1 -4  1     T15    -1
 0  0  0  0  0  0  0  0  0  0  0  1  0  0  1 -4     T16    -1
(6.32)
# src-ch7/laplace_Neumann.py; Visualization.py @ git@lrhgit/tkt4140/src/src-ch7/Visualization.py;
import numpy as np
import scipy
import scipy.linalg
import scipy.sparse
import scipy.sparse.linalg
import matplotlib.pylab as plt
import time
from math import sinh
#import matplotlib.pyplot as plt
# Change some default values to make plots more readable on the screen
LNWDT=2; FNT=15
plt.rcParams[’lines.linewidth’] = LNWDT; plt.rcParams[’font.size’] = FNT
1 1 1 -4 2 2 0 0 0 0
T6 T5 T6 0 1 -4 0 2 0 0 0
T4 T3 T4 0 1 0 -4 2 1 0 0
T2 T1 T2 0 --> A = 0 1 1 -4 0 1 ,b = 0
T3 T4 0 0 1 0 -4 2 -1
0 0 0 1 1 -4 -1
Args:
nx(int): number of elements in each row in the grid, nx=2 in the example above
ny(int): number of elements in each column in the grid, ny=3 in the example above
Returns:
A(matrix): Sparse matrix A, in the equation A*T = b
b(array): RHS, of the equation A*t = b
"""
n = (nx)*(ny) #number of unknowns
d = np.ones(n) # diagonals
b = np.zeros(n) #RHS
d0 = d.copy()*-4
d1_lower = d.copy()[0:-1]
d1_upper = d1_lower.copy()
dnx_lower = d.copy()[0:-nx]
dnx_upper = dnx_lower.copy()
d1_lower[nx-1::nx] = 0 # every nx element on first diagonal is zero; starting from the nx-th elem
d1_upper[nx-1::nx] = 0
d1_upper[::nx] = 2 # every nx element on first upper diagonal is two; stating from the first elem
# this correspond to all equations on border (x=0, y)
dnx_upper[0:nx] = 2 # the first nx elements in the nx-th upper diagonal is two;
# This correspond to all equations on border (x, y=0)
b[-nx:] = -Ttop
b[nx-1::nx] += -Tright
A = scipy.sparse.diags([d0, d1_upper, d1_lower, dnx_upper, dnx_lower], [0, 1, -1, nx, -nx], forma
return A, b
if __name__ == ’__main__’:
from Visualization import plot_SurfaceNeumann_xy
# Main program
# Set temperature at the top
Ttop=1
Tright = 0.0
xmax=1.0
ymax=1.
Nx = 10
h=xmax/Nx
Ny = int(ymax/h)
Temp = scipy.sparse.linalg.spsolve(A, b)
T(x, y) = Σ_{n=1}^{∞} A_n cosh(λ_n y) · cos(λ_n x)   (6.33)
where λ_n = (2n − 1)·π/2 ,  A_n = 2(−1)^{n−1} / (λ_n cosh(λ_n)) ,  n = 1, 2, ...   (6.34)
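A small sketch evaluating the analytical series (6.33)–(6.34) is given below; the truncation at 30 terms and the test points are arbitrary assumptions.

```python
# Minimal sketch of the analytical series (6.33)-(6.34); 30 terms is an
# arbitrary truncation that is ample away from the boundary y = 1.
import numpy as np

def T_analytic(x, y, nmax=30):
    n = np.arange(1, nmax + 1)
    lam = (2*n - 1)*np.pi/2.0
    A = 2.0*(-1.0)**(n - 1)/(lam*np.cosh(lam))
    return np.sum(A*np.cosh(lam*y)*np.cos(lam*x))

print("%g %g" % (T_analytic(0.0, 0.0), T_analytic(0.5, 0.5)))
```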
The solution of the problem illustrated in Figure 6.7 is computed and visualized by the Python code above. The solution is shown in Figure 6.10.
Figure 6.10: Solution of the Laplace equation with von Neumann boundary conditions.
Jacobi's method
x1^{m+1} = x1^m + (1/a11)[b1 − (a11 x1^m + a12 x2^m + a13 x3^m)]   (6.38)
x2^{m+1} = x2^m + (1/a22)[b2 − (a21 x1^m + a22 x2^m + a23 x3^m)]   (6.39)
x3^{m+1} = x3^m + (1/a33)[b3 − (a31 x1^m + a32 x2^m + a33 x3^m)]   (6.40)

x_i^{m+1} = x_i^m + δx_i   (6.41)
δx_i = (1/a_ii)[ b_i − Σ_{j=1}^{n} a_ij x_j^m ] ,  i = 1, 2, ..., n ,  m = 0, 1, ...   (6.42)

Gauss-Seidel's method
x1^{m+1} = x1^m + (1/a11)[b1 − (a11 x1^m + a12 x2^m + a13 x3^m)]   (6.43)
x2^{m+1} = x2^m + (1/a22)[b2 − (a21 x1^{m+1} + a22 x2^m + a23 x3^m)]   (6.44)
x3^{m+1} = x3^m + (1/a33)[b3 − (a31 x1^{m+1} + a32 x2^{m+1} + a33 x3^m)]   (6.45)

SOR method
x1^{m+1} = x1^m + (ω/a11)[b1 − (a11 x1^m + a12 x2^m + a13 x3^m)]
x2^{m+1} = x2^m + (ω/a22)[b2 − (a21 x1^{m+1} + a22 x2^m + a23 x3^m)]   (6.46)
x3^{m+1} = x3^m + (ω/a33)[b3 − (a31 x1^{m+1} + a32 x2^{m+1} + a33 x3^m)]

x_i^{m+1} = x_i^m + δx_i   (6.47)
δx_i = (ω/a_ii)[ b_i − ( Σ_{k=1}^{i−1} a_ik x_k^{m+1} + Σ_{k=i}^{n} a_ik x_k^m ) ] ,  i = 1, 2, ..., n   (6.48)
∂²u/∂x² + ∂²u/∂y² = f(x, y)   (6.49)
which after discretization with central differences, with Δx = Δy = h, results in the following difference equation:
x_i = x0 + i·h ,  i = 0, 1, 2, ..., nx
y_j = y0 + j·h ,  j = 0, 1, 2, ..., ny   (6.51)
where nx and ny denote the number of grid cells in the x- and y-direction, respectively (see 6.3).
By using the general SOR method (6.48) on (6.50) and (6.51) we get the following iterative scheme:
u_{i,j}^{m+1} = u_{i,j}^m + δu_{i,j}   (6.52)
δu_{i,j} = (ω/4)[ u_{i−1,j}^{m+1} + u_{i,j−1}^{m+1} + u_{i+1,j}^m + u_{i,j+1}^m − 4u_{i,j}^m − h²·f_{i,j} ]   (6.53)

u_{i,j}^{m+1} = u_{i,j}^m + (ω/4)·R_{i,j}   (6.55)
We will now solve the example in Figure 6.4 (Section 6.3) with the iterative SOR scheme in (6.53).
[Figure: SOR sweep on the grid: already updated values T^{m+1}_{i−1,j} and T^{m+1}_{i,j−1}, and old values T^m_{i+1,j} and T^m_{i,j+1}, surrounding T_{i,j}]
Figure 6.11: Rectangular domain as in Figure 6.4, but with boundary nodes that are unknown due to von Neumann boundary conditions.
T[-1,:] = Ttop
T[0,:] = Tbottom
T[:,0] = Tleft
T[:,-1] = Tright
In this first simple program we use ω = 1.5 and perform a fixed number of 20 iterations, as shown in the program laplace_sor.py below, with h = 0.25:
# src-ch7/laplace_Diriclhet2.py
import numpy as np
import scipy
import scipy.linalg
import scipy.sparse
import scipy.sparse.linalg
import matplotlib.pylab as plt
import time
from math import sinh
xmax=1.0
ymax=1.5
# Set simulation parameters
#need hx=(1/nx)=hy=(1.5/ny)
Nx = 40
h=xmax/Nx
Ny = int(ymax/h)
nx = Nx-1
ny = Ny-1
n = (nx)*(ny) #number of unknowns
# surfaceplot:
x = np.linspace(0, xmax, Nx + 1)
y = np.linspace(0, ymax, Ny + 1)
X, Y = np.meshgrid(x, y)
T[-1,:] = Ttop
T[0,:] = Tbottom
T[:,0] = Tleft
T[:,-1] = Tright
tic=time.clock()
omega = 1.5
for iteration in range(20):
for j in range(1,ny+1):
for i in range(1, nx + 1):
R = (T[j,i-1]+T[j-1,i]+T[j,i+1]+T[j+1,i]-4.0*T[j,i])
dT = 0.25*omega*R
T[j,i]+=dT
toc=time.clock()
print ’GS solver time:’,toc-tic
(1/N) Σ_i Σ_j |δT_{i,j}| < tol_a ,   (1/N) Σ_i Σ_j |δT_{i,j} / T_{i,j}| < tol_r ,   T_{i,j} ≠ 0   (6.56)
[Figure 6.12: Number of iterations as a function of ω for h = 0.25, 0.125, 0.0625 and 0.03125]
import scipy.sparse.linalg
import matplotlib.pylab as plt
import time
from math import sinh
# Change some default values to make plots more readable on the screen
LNWDT=2; FNT=15
plt.rcParams[’lines.linewidth’] = LNWDT; plt.rcParams[’font.size’] = FNT
# Set temperature at the top
Ttop=10
Tbottom=0.0
Tleft=0.0
Tright=0.0
xmax=1.0
ymax=1.5
nx = Nx-1
ny = Ny-1
n = (nx)*(ny) #number of unknowns
# surfaceplot:
x = np.linspace(0, xmax, Nx + 1)
y = np.linspace(0, ymax, Ny + 1)
X, Y = np.meshgrid(x, y)
T = np.zeros_like(X)
T2 = T.copy()
reltol=1.0e-3
omega = 1.5
iteration = 0
rel_res=1.0
iteration = 0
rel_res=1.0
# Jacobi iterative solution
tic=time.clock()
while (rel_res > reltol):
R2 = (T2[1:-1,0:-2]+T2[0:-2,1:-1]+T2[1:-1,2:]+T2[2:,1:-1]-4.0*T2[1:-1,1:-1])
dT2 = 0.25*R2
T2[1:-1,1:-1]+=dT2
rel_res=np.max(dT2)/np.max(T2)
iteration+=1
toc=time.clock()
print "Jacobi solver time:\t\t{0:0.2f}. \t Nr. iterations {1}".format(toc-tic,iteration)
nx = Lx/h ,  ny = Ly/h ,  where nx and ny are the number of intervals in the x- and y-direction, respectively. nx and ny must be integers. The theoretically optimal ω is then given by:
ρ = (1/2)[cos(π/nx) + cos(π/ny)]   (6.58)
ω = 2 / (1 + √(1 − ρ²))   (6.59)
A small sketch of this computation is given below.
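The sketch below evaluates (6.58)–(6.59) for the 1.0 × 1.5 plate used in the example above; the loop over step lengths simply reproduces the tabulated values.

```python
# Minimal sketch of the optimal SOR parameter from (6.58)-(6.59) for a
# rectangular domain Lx = 1.0, Ly = 1.5 (as in the plate example).
import numpy as np

def omega_optimal(nx, ny):
    rho = 0.5*(np.cos(np.pi/nx) + np.cos(np.pi/ny))   # (6.58)
    return 2.0/(1.0 + np.sqrt(1.0 - rho**2))          # (6.59)

for h in [0.25, 0.125, 0.0625, 0.03125]:
    print("%g %.2f" % (h, omega_optimal(int(1.0/h), int(1.5/h))))
```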
If the step length h differs in the x- and y-direction, with h = hx in the x-direction and h = hy in the y-direction, we get instead of (6.59):
h        (6.59)   (6.61)
0.25     1.24     1.24
0.125    1.51     1.53
0.0625   1.72     1.74
0.03125  1.85     1.86
We see that Garabedian's estimate agrees well with the theoretically exact values in this case. Figure 6.12 also agrees well with the values in this table.
Figure 6.13: Ghost-cells are used to implement von Neumann boundary condi-
tions.
% program lapsor2
clear
net = 1;
h = 0.25;
hn = h/2^(net -1);
nx = 1/hn; ny = nx;
imax = nx + 1; % points in x-direction
jmax = ny + 1; % points in y-direction
T = 0.5*ones(imax,jmax); % temperatures
% --- Compute optimal omega ---
ro = cos(pi/nx);
omega = 2/(1 + sqrt(1 - ro^2));
T(1:imax,jmax) = 1; % boundary values along y = 1
T(imax,1:jmax-1) = 0;% boundary values along x = 1
reltol = 1.0e-5; % relative iteration error
relres = 1.0; it = 0;
% --- Start iteration ---
while relres > reltol
it = it + 1;
Tsum = 0.0; dTsum = 0.0;
% --- boundary values along y = 0 ---
for i = 2: imax - 1
resid = 2*T(i,2) + T(i-1,1) + T(i+1,1) - 4*T(i,1);
dT = 0.25*omega*resid;
dTsum = dTsum + abs(dT);
T(i,1) = T(i,1) + dT;
Tsum = Tsum + abs(T(i,1));
end
% --- boundary values along x = 0 ---
for j = 2: jmax - 1
resid = 2*T(2,j) + T(1,j-1) + T(1,j+1) - 4*T(1,j);
dT = 0.25*omega*resid;
[Figure 6.14: ω_opt plotted against the initial value for h = 0.25, 0.125, 0.0625 and 0.03125]
Figure 6.14: The effect of initial values on the number of iterations for a problem with sub-optimal ω.
∂²u/∂x² + ∂²u/∂y² + u² = −1   (6.63)
with Dirichlet boundary conditions, i.e. prescribed values u = 0 on all boundaries (see Figure 6.15).
Figure 6.15: Solution domain and boundary conditions for a non-linear Poisson equation.
The PDE in (6.63) is only weakly non-linear, so-called semi-linear, but the approach for how to solve a non-linear elliptic PDE is still illustrated.
We discretize (6.63) with central differences over the domain in Figure 6.15 with a constant step size h in both spatial directions, and get the following system of equations when the source term on the right-hand side has been moved to the left-hand side:
(1/h²)[u_{i+1,j} + u_{i−1,j} + u_{i,j−1} + u_{i,j+1} − 4u_{i,j}] + u²_{i,j} + 1 = 0
or equivalently, by introducing the function f_{i,j}:
[Figure: Grid for (6.63) with u = 0 at the boundary nodes, e.g. u_{1,5} = u_{2,5} = u_{3,5} = u_{4,5} = u_{5,5} = 0 along the top]
u_{i,j}^{m+1} = u_{i,j}^m + δu_{i,j}   (6.65)
δu_{i,j} = −ω f(u_{k,l}) / (∂f(u_{k,l})/∂u_{i,j})   (6.66)
where:
u_{k,l} = u_{k,l}^{m+1}  for k < i, l < j   (6.67)
u_{k,l} = u_{k,l}^m  otherwise   (6.68)
and
∂f/∂u_{i,j} = −4 + 2h²·u_{i,j}  and  δu_{i,j} = ω f / (4 − 2h² u_{i,j})   (6.69)
We have implemented (6.64) and (6.66) to (6.69) in the python code nonlin_poisson
below:
# Python Gauss-Seidel method
tic=time.clock()
while (rel_res > reltol):
du_max=0.0
for j in range(1,ny+1):
for i in range(1, nx + 1):
R = (U[j,i-1]+U[j-1,i]+U[j,i+1]+U[j+1,i]-4.0*U[j,i]) + h**2*(U[j,i]**2+1.0)
df=4-2*h**2*U[j,i]
dU = omega*R/df
U[j,i]+=dU
du_max=np.max([np.abs(dU),du_max])
rel_res=du_max/np.max(np.abs(U))
iteration+=1
toc=time.clock()
print "Python Gauss-Seidel CPU-time:\t{0:0.2f}. \t Nr. iterations {1}".format(toc-tic,iteration)
We observe relatively good agreement between the two ω-values for a range of grid sizes h, even though the PDE is weakly non-linear.
As the Gauss-Seidel algorithm above involves a triple loop (the iterative while-construct, plus one loop in each spatial direction), the naive Python implementation above must be expected to be computationally expensive.
For comparison we have also implemented another solution to the problem
by making use of numpy’s array slicing capabilities:
Note that with this implementation all values on the right-hand side are at the previous iteration, and thus the method must be denoted a Jacobi algorithm.
Finally, we have implemented a third method, the Gauss-Seidel method (6.66), with Cython. Cython is an optimising static compiler (based on Pyrex) for both the Python programming language and the extended Cython programming language. The ambition is to make the writing of computationally superior C extensions for Python as easy as Python itself.
The expensive triple loop is implemented in a typed Cython function which looks very much like the Python implementation, save for the type declarations.
import cython
cimport cython
import numpy as np
cimport numpy as np
DTYPE = np.float64
ctypedef np.float64_t DTYPE_t
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
it=0
rel_res=1.0
itmax=100
rel_res=dU_max/np.max(np.abs(U[:,:]))
# print ’rel_res’, rel_res
it+=1
if (it>=itmax): print ’Terminated after max iterations’
return U, rel_res, it
After compilation the Cython module is easily imported into our python-code
which allows for comparison with the methods above as illustrated in the code
below:
# Python Gauss-Seidel method
tic=time.clock()
while (rel_res > reltol):
du_max=0.0
for j in range(1,ny+1):
for i in range(1, nx + 1):
R = (U[j,i-1]+U[j-1,i]+U[j,i+1]+U[j+1,i]-4.0*U[j,i]) + h**2*(U[j,i]**2+1.0)
df=4-2*h**2*U[j,i]
dU = omega*R/df
U[j,i]+=dU
du_max=np.max([np.abs(dU),du_max])
rel_res=du_max/np.max(np.abs(U))
iteration+=1
toc=time.clock()
print "Python Gauss-Seidel CPU-time:\t{0:0.2f}. \t Nr. iterations {1}".format(toc-tic,iteration)
iteration = 0
rel_res=1.0
# Second method
# Jacobi iterative solution
tic=time.clock()
while (rel_res > reltol):
R2 = (U2[1:-1,0:-2]+U2[0:-2,1:-1]+U2[1:-1,2:]+U2[2:,1:-1]-4.0*U2[1:-1,1:-1]) + h**2*(U2[1:-1,1:-1
df=4-2*h**2*U2[1:-1,1:-1]
dU2 = R2/df
U2[1:-1,1:-1]+=dU2
rel_res=np.max(dU2)/np.max(U2)
iteration+=1
toc=time.clock()
print "Jacobi CPU-time:\t\t{0:0.2f}. \t Nr. iterations {1}".format(toc-tic,iteration)
# Third method
# Cython Gauss-Seidel method
rel_res=1.0
tic=time.clock()
U3, relreturn, itsused=gs.gauss(U3,reltol,h, omega)
toc=time.clock()
print "Cython Gauss-Seidel CPU-time:\t{0:0.2f}. \t Nr. iterations {1}".format(toc-tic,itsused)
omega=1.85
Python Gauss-Seidel CPU-time: 1.62. Nr. iterations 56
Jacobi CPU-time: 0.04. Nr. iterations 492
Cython Gauss-Seidel CPU-time: 0.97. Nr. iterations 56
D x = (L + U) x + b   (6.72)
Using the notation in (6.72), Jacobi's iteration method in (6.40) can be written:
D x^{m+1} = (L + U) x^m + b
which gives:
We set:
G_J = D^{-1}(L + U)   (6.74)
which gives:
We set:
which gives:
x^{m+1} = (I − ωD^{-1}L)^{-1}[(1 − ω)I + ωD^{-1}U] x^m + (I − ωD^{-1}L)^{-1} ωD^{-1} b   (6.77)
We set
At convergence x^{m+1} = x^m = x, so that the exact system can be written:
x = G x + c   (6.81)
We call the error vector of the m-th iteration e^m, so that
e^m = x − x^m   (6.82)
Subtracting (6.80) from (6.81) we get:
lim_{m→∞} G^m = 0   (6.85)
A necessary and sufficient condition for (6.85) to be satisfied is that the absolute value of the largest eigenvalue of G is less than 1. This is written:
e^0 = c1 v1 + c2 v2 + ··· + cn vn = Σ_{k=1}^{n} ck vk
where ck, k = 1, 2, ..., n are scalars.
Now we get:
e^1 = G e^0 = c1 G v1 + c2 G v2 + ··· + cn G vn = Σ_{k=1}^{n} ck G vk   (6.87)
Let λk be the k-th eigenvalue, so that with vk as the k-th eigenvector we have G vk = λk vk, which inserted in (6.87) gives:
e^1 = Σ_{k=1}^{n} ck λk vk
which gives:
or:
e^m = c1 λ1^m v1 + c2 λ2^m v2 + ··· + cn λn^m vn  →  ||e^m|| ≤ ||c1 λ1^m v1|| + ||c2 λ2^m v2|| + ··· + ||cn λn^m vn||
    = |λ1|^m ( ||c1 v1|| + |λ2/λ1|^m ||c2 v2|| + ··· + |λn/λ1|^m ||cn vn|| )
For sufficiently large values of m:
||e^{m+k}|| ≈ ρ^k ||e^m||  →  ||e^{m+k}|| / ||e^m|| ≈ ρ^k   (6.92)
We particularly want to find for which value of k ||e^{m+k}|| is one tenth of ||e^m||:
||e^{m+k}|| / ||e^m|| = 1/10 ≈ ρ^k   (6.93)
Taking the base-10 logarithm in (6.93):
k ≈ −1 / log10(ρ)   (6.94)
k indicates how many iterations we must perform to gain one decimal digit. The quantity R = −log10(ρ) of (6.95) is called the rate of convergence; a small numerical illustration is given below.
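The sketch below evaluates (6.94) for a few spectral radii taken from the examples and tables in this section.

```python
# Minimal sketch of (6.94): iterations needed to gain one decimal digit.
import numpy as np

for rho in [0.3692, 0.9904, 0.9952]:      # values appearing later in this section
    print("%g -> k = %.1f" % (rho, -1.0/np.log10(rho)))
```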
G_J = D^{-1}(L + U)
For the system in (6.71):
G_J = − [ 1/a11   0      0      0    ]   [ 0    a12  a13  a14 ]
        [ 0       1/a22  0      0    ] · [ a21  0    a23  a24 ]
        [ 0       0      1/a33  0    ]   [ a31  a32  0    a34 ]
        [ 0       0      0      1/a44]   [ a41  a42  a43  0   ]

    = − [ 0         a12/a11  a13/a11  a14/a11 ]
        [ a21/a22   0        a23/a22  a24/a22 ]
        [ a31/a33   a32/a33  0        a34/a33 ]
        [ a41/a44   a42/a44  a43/a44  0       ]
|λ| ≤ ||G_J||_∞ = max_{1≤i≤n} Σ_{j=1}^{n} |G_{i,j}| < 1
[  2  -1   0   0 ]   [x1]   [-1]
[ -1   3  -1   0 ] · [x2] = [ 4]
[  0  -1   3  -1 ]   [x3]   [ 7]
[  0   0  -1   2 ]   [x4]   [ 0]
D = [ 2 0 0 0 ]        −U = [ 0 -1  0  0 ]        −L = [  0  0  0  0 ]
    [ 0 3 0 0 ]             [ 0  0 -1  0 ]             [ -1  0  0  0 ]
    [ 0 0 3 0 ]             [ 0  0  0 -1 ]             [  0 -1  0  0 ]
    [ 0 0 0 2 ]             [ 0  0  0  0 ]             [  0  0 -1  0 ]
Furthermore we get:
D − L = [  2  0  0  0 ]        (D − L)^{-1} = [ 1/2    0     0    0  ]
        [ -1  3  0  0 ]                       [ 1/6   1/3    0    0  ]
        [  0 -1  3  0 ]                       [ 1/18  1/9   1/3   0  ]
        [  0  0 -1  2 ]                       [ 1/36  1/18  1/6  1/2 ]
The iteration matrix G_GS in (6.76) now becomes:
G_GS = (D − L)^{-1} U = [ 0  1/2   0     0  ]
                        [ 0  1/6   1/3   0  ]
                        [ 0  1/18  1/9  1/3 ]
                        [ 0  1/36  1/18 1/6 ]
We compute the eigenvalues of the iteration matrix:
det(G_GS − λI) = det [ -λ   1/2    0      0   ] = 0
                     [  0   1/6-λ  1/3    0   ]
                     [  0   1/18   1/9-λ  1/3 ]
                     [  0   1/36   1/18   1/6-λ ]
ρ = λ1 = 0.3692 ,  λ2 = 0.0752 ,  λ3 = λ4 = 0
We can of course use Matlab directly to find the eigenvalues:
> Ggs = [0 1/2 0 0; 0 1/6 1/3 0; 0 1/18 1/9 1/3; 0 1/36 1/18 1/6];
>> eig(Ggs)
ans = 0 0.0752 0.3692 0.0000
>>
Since the absolute values of all the eigenvalues are less than 1, it follows that Gauss-Seidel's method converges in this case. (The system is strictly diagonally dominant.) The rate of convergence −log10(ρ) from (6.95) becomes −log10(0.3692) = 0.4327, so that k ≈ 2.3, which means that we must perform a little more than two iterations for each decimal digit we want.
The Gauss-Seidel system in (6.75) becomes, in our case:
[x1]^{m+1}   [ 0  1/2   0     0  ] [x1]^m   [ -1/2  ]
[x2]       = [ 0  1/6   1/3   0  ]·[x2]   + [  7/6  ] ,  m = 0, 1, ...
[x3]         [ 0  1/18  1/9  1/3 ] [x3]     [ 49/18 ]
[x4]         [ 0  1/36  1/18 1/6 ] [x4]     [ 49/36 ]
ρ = cos(πh) ≈ 1 − (πh)²/2  for small h   (6.97)
Gauss-Seidel's method:
ρ = cos²(πh) ≈ 1 − (πh)²  for small h   (6.98)
SOR:
ρ = (1 − sin(πh)) / (1 + sin(πh)) ≈ 1 − 2πh  for small h   (6.99)
Case (c) is based on the optimal ω.
We recall that the condition for convergence is that the spectral radius ρ < 1. For small h the spectral radius rapidly approaches 1 for all the methods, but this is particularly evident for the Jacobi and Gauss-Seidel methods. The table below shows this clearly.
Spectral radius   h = 1/32   h = 1/64   h = 1/128
ρ_J               0.9952     0.9988     0.9997
ρ_GS              0.9904     0.9976     0.9994
ρ_SOR             0.8215     0.9065     0.9521
This becomes even clearer when we compute the number of iterations needed to gain one decimal digit, see Eq. (6.94).
Iterations        h = 1/32   h = 1/64   h = 1/128
k_J               477        1910       7644
k_GS              239        955        3822
k_SOR             12         23         47
Conclusion: Of these methods, only SOR is of practical use.
Because of this conclusion we have only used SOR in the examples of this section, with the exception of one demo example with Gauss-Seidel. It is possible to improve SOR considerably. We can, for instance, vary ω from iteration to iteration according to particular schemes instead of keeping it constant throughout the iteration process. Note also that SOR, with Gauss-Seidel as a special case, depends on the numbering. We can improve the convergence by, for example, using checkerboard numbering, that is, first traversing 1, 3, 5, ... and then 2, 4, 6, .... The SOR version we have used is called point-SOR because we go from point to point. There is room for improvement by instead operating on whole blocks, or whole lines. Readers interested in these variants are referred to Press [19] and Hageman & Young [9], where programs can also be found. It is rarely worth the effort to use these more advanced versions, because more efficient iteration methods exist today. The advantage of the simple SOR method we have used is that it is easy to program both for linear and non-linear equations. Note that when we used direct solvers (see 6.3), we had to set up the whole matrix in advance. This was fairly simple in the examples shown, which is often not the case otherwise. At the same time we depended on the built-in solvers in Matlab.
If you want to use iteration methods on a fine grid, you should use some of the newer methods. These go under the names multigrid methods and Krylov methods. A good introduction to multigrid methods can be found in Briggs [2]. A number of the Krylov methods are available in Matlab; keywords are bicg, cgs, bicgstab, pcg, gmres and qmr. Type, for instance, doc gmres and look at the examples. Instead of the matrices given in those examples, you can use the coefficient matrices from the examples in Section 6.3. See Saad [20] for more on Krylov methods. Many of these methods are also treated by Kelley [15], with downloadable Matlab programs.
-4  1  0  0  0  1  0  0  0  0     T11      0
 1 -4  1  0  0  0  1  0  0  0     T12      0
 0  1 -4  1  0  0  0  1  0  0     T13      0
 0  0  1 -4  1  0  0  0  1  0     T14      0
 0  0  0  1 -4  0  0  0  0  1  ·  T15  = -100
 2  0  0  0  0 -4  1  0  0  0     T21      0
 0  2  0  0  0  1 -4  1  0  0     T22      0
 0  0  2  0  0  0  1 -4  1  0     T23      0
 0  0  0  2  0  0  0  1 -4  1     T24      0
 0  0  0  0  2  0  0  0  1 -4     T25   -100
(6.100)
The solution of (6.100) is given in the Matlab program lap1s below.
% program lap1s
clear
n = 10;
d = ones(n,1); % diagonal
b = zeros(n,1); % right hand side
% --- Update b ---
b(5) = -100; b(10) = -100;
% --- Generate A-matrix ---
A = spdiags([2*d d -4*d d d],[-5 -1 0 1 5], n,n);
% --- Update A ---
A(5,6) = 0; A(6,5) = 0;
% --- Solve system ---
T = A\b;
[Figure 6.17: Square solution domain with boundary points A and B and centre point C]

∂²u/∂x² + ∂²u/∂y² = −1   (6.101)
∂u/∂x = 0  for x = 0.5
∂u/∂y = 0  for y = 0.5
We see that we can also exploit the symmetry about the diagonals, so that it is, for example, sufficient to solve the equation in the triangle ABC, since AC is a line of symmetry. Figure 6.18 on the next page shows the triangle ABC with a numbering that exploits the symmetry. We set u1 = u_{1,1}, u2 = u_{1,2}, u3 = u_{1,3}, and so on. Along AB we have marked the boundary value u = 0.
[Figure 6.18: Triangle ABC with the unknowns 1–15 enumerated row by row from the boundary AB (u = 0) up to the centre point C]
Figure 6.18: Computational mesh and enumeration for the problem in Figure 6.17, with Δx = Δy = h.
We have chosen h = 0.1 in Figure 6.18. The computational stencil with single indices is: u_d above, u_a, u_m, u_c across, and u_b below.
-4  2  0  0  0  0  0  0  0  0  0  0  0  0  0     u1       h²
 1 -4  1  0  0  1  0  0  0  0  0  0  0  0  0     u2       h²
 0  1 -4  1  0  0  1  0  0  0  0  0  0  0  0     u3       h²
 0  0  1 -4  1  0  0  1  0  0  0  0  0  0  0     u4       h²
 0  0  0  2 -4  0  0  0  1  0  0  0  0  0  0     u5       h²
 0  2  0  0  0 -4  2  0  0  0  0  0  0  0  0     u6       h²
 0  0  1  0  0  1 -4  1  0  1  0  0  0  0  0     u7       h²
 0  0  0  1  0  0  1 -4  1  0  1  0  0  0  0  ·  u8  = -  h²
 0  0  0  0  1  0  0  2 -4  0  0  1  0  0  0     u9       h²
 0  0  0  0  0  0  2  0  0 -4  2  0  0  0  0     u10      h²
 0  0  0  0  0  0  0  1  0  1 -4  1  1  0  0     u11      h²
 0  0  0  0  0  0  0  0  1  0  2 -4  0  1  0     u12      h²
 0  0  0  0  0  0  0  0  0  0  2  0 -4  2  0     u13      h²
 0  0  0  0  0  0  0  0  0  0  0  1  2 -4  1     u14      h²
 0  0  0  0  0  0  0  0  0  0  0  0  0  4 -4     u15      h²
(6.104)
Although the band structure is evident, it becomes more irregular because we have exploited the symmetry, and more updating of the coefficient matrix is required. The program poisson solves (6.104):
% program poisson
clear
n = 15;
h = 0.1; h2 = h*h;
d0 = zeros(n,1); % diagonal
d = ones(n,1); % diagonal
b = -h2*ones(n,1); % right hand side
% --- generate A-matrix ---
A = spdiags([d0 d0 d0 d -4*d d d0 d0 d0],[-4 -3 -2 -1 0 1 2 3 4], n,n);
% === Update A ===
% --- sub-diagonals ---
A(6,2) = 2; A(7,3) = 1; A(8,4) = 1; A(9,5) = 1;
A(10,7) = 2; A(11,8) = 1; A(12,9) = 1;
A(13,11) = 2; A(14,12) = 1;
A(5,4) = 2; A(6,5) = 0; A(9,8) = 2; A(10,9) = 0; A(12,11) = 2;
A(13,12) = 0; A(14,13) = 2; A(15,14) = 4;
% --- super-diagonals ---
A(1,2) = 2; A(5,6) = 0; A(6,7) = 2; A(9,10) = 0;
A(10,11) = 2; A(12,13) = 0; A(13,14) = 2;
A(11,13) = 1; A(12,14) = 1;
A(7,10 ) = 1; A(8,11) = 1; A(9,12) = 1;
A(2,6) = 1; A(3,7) = 1; A(4,8) = 1; A(5,9) = 1;
% --- solve system ---
u = A\b;
The table below shows the numerical values from the program. The analytical values are given in the last column. The centre value u15 has an error of 0.8 %. The coordinates refer to Figure 6.17.
u = ζ·(1 − ζ)/2 − (4/π³) Σ_{n=1,3,5,...}^{∞} (1/n³) · [cosh(nπȳ)/cosh(nπ/2)] · sin(nπζ)    (6.105)

where x̄ = x − 1/2, ȳ = y − 1/2, ζ = 1/2 − x̄, (x, y) ∈ [0, 1].

Swap x̄ and ȳ if ȳ > x̄.    (6.106)
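A quick way to evaluate the series (6.105), for instance at the centre point used for the 0.8 % error estimate above, is sketched in Python below. The function name is ours; the swap of x̄ and ȳ exploits the diagonal symmetry (6.106).

```python
import numpy as np

def u_analytic(x, y, nterms=50):
    """Series solution (6.105) of -(u_xx + u_yy) = 1 on the unit square, u = 0 on the boundary."""
    xb, yb = x - 0.5, y - 0.5
    if abs(yb) > abs(xb):            # exploit the diagonal symmetry (6.106)
        xb, yb = yb, xb
    zeta = 0.5 - xb
    n = np.arange(1, 2*nterms, 2)    # n = 1, 3, 5, ...
    s = np.sum(np.sin(n*np.pi*zeta)*np.cosh(n*np.pi*yb)/(n**3*np.cosh(n*np.pi/2)))
    return zeta*(1 - zeta)/2 - 4.0/np.pi**3*s

print(u_analytic(0.5, 0.5))   # centre value, approximately 0.0737
```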
Chapter 7

Diffusion problems
7.1 Introduction
A one-dimensional diffusion equation has the canonical form:

∂u/∂t = α ∂²u/∂x²    (7.1)

where t is an evolutionary variable, which may be either a time coordinate or a spatial coordinate. Some classical diffusion problems are listed below:
• Heat conduction:
  ∂T/∂t = α ∂²T/∂x²
• Unsteady boundary layers (Stokes' problem):
  ∂u/∂t = ν ∂²u/∂y²
• Linearized boundary layer equation with x as an evolutionary variable:
  ∂u/∂x = (ν/U0) ∂²u/∂y²
• Flow in porous media:
  ∂u/∂t = c ∂²u/∂x²
Our model equation (7.1) may be classified according to (5.18):

dt = 0,  B² − 4AC = 0

and we find that (7.1) is a parabolic PDE with the characteristics given by t = constant. Dividing dt by dx we get:

dt/dx = 0  →  dx/dt = ∞    (7.5)

which corresponds to an infinite propagation speed along the characteristic curve t = constant.
Figure 7.1: Confined, unsteady Couette flow or channel flow. The channel width is b.
In this section we will look at the problem of unsteady Couette flow, confined by two walls, which has the following governing equation:

∂U/∂τ = ν ∂²U/∂Y²,  0 < Y < b    (7.6)

with the following boundary conditions, representing the presence of the two walls or the channel if you like:

U(0, τ) = U0,  U(b, τ) = 0,  τ ≥ 0    (7.7)

Further, the parabolic problem also needs an initial condition to be solved, and we assume:
∂u/∂t = ∂²u/∂y²,  0 < y < 1    (7.10)

with the corresponding boundary conditions:

u(0, t) = 1,  u(1, t) = 0,  t ≥ 0    (7.11)
∂u/∂t |_j^n ≈ (u_j^{n+1} − u_j^n)/Δt    (7.14)

and central differences for the spatial coordinate y:

∂²u/∂y² |_j^n ≈ (u_{j+1}^n − 2u_j^n + u_{j-1}^n)/(Δy)²    (7.15)

where:

t_n = n·Δt, n = 0, 1, 2, . . . ,  y_j = j·Δy, j = 0, 1, 2, . . .
Substitution of (7.14) and (7.15) in (7.10) results in the following difference equation:

u_j^{n+1} = D (u_{j+1}^n + u_{j-1}^n) + (1 − 2D) u_j^n    (7.16)

where D is a dimensionless group, commonly denoted the diffusion number, or the Fourier number in heat conduction. See section 13.1 in [3] for a discussion of (7.16).

D = Δt/(Δy)² = ν Δτ/(ΔY)²    (7.17)
A scheme with Forward differences for the Time (evolutionary) variable and Central differences for the Space variable is commonly referred to as an FTCS scheme (Forward Time Central Space). By an FTCS scheme we normally mean a scheme which is first order in t and second order in y. Another common name for this scheme is the Euler scheme.
In section 7.6 we show that for D = 1/6 the scheme (7.16) is second order in t and fourth order in y. Further, in fluid mechanics it is customary to write u_j^n rather than u_{j,n}, such that the index for the evolutionary variable appears as a superscript. We seek to adopt this convention in general.
In Figure 7.2 we try to illustrate that the scheme is explicit, meaning that the unknown value at time level n + 1 can be found directly from the formula, without having to solve an equation system. In other words, the unknown value at time level n + 1 does not depend implicitly on values at other spatial locations at time level n + 1.
The above example is implemented in the code below. Download the code and experiment with different diffusion numbers. The FTCS scheme is explicit and thus has a stability constraint. We will look further into stability in the next sections, but as can be seen in Figure 7.3 the stability limit in this example is D = 1/2.
# src-ch5/couette_Flow_FTCS.py;Visualization.py @ git@lrhgit/tkt4140/src/src-ch5/Visualization.py;
import matplotlib; matplotlib.use(’Qt4Agg’)
import matplotlib.pylab as plt
plt.get_current_fig_manager().window.raise_()
import numpy as np
from math import exp, sin, pi
def analyticSolution(y, t, N=100):
    """ Method that calculates the analytical solution to the differential equation:
du/dt = d^2(u)/dx^2 , u = u(y,t), 0 < y < 1
Boundary conditions: u(0, t) = 1, u(1, t) = 0
Initial condition: u(t, 0) = 0 t<0, u(t, 0) = 1 t>0
Args:
y(np.array): radial coordinat
t(float): time
N(int): truncation integer. Truncate sumation after N elements
Returns:
w(float): velocity, us - ur
"""
sumValue = 0
for n in range(1,N+1):
temp = np.exp(-t*(n*np.pi)**2)*np.sin(n*np.pi*y)/n
sumValue += temp
u = 1 - y - (2/pi)*sumValue
return u
def solveNextTimestepFTCS(Uold, D):
    """Advance the solution one time step with the FTCS scheme (7.16) for the interior nodes."""
    Unew = Uold.copy()
    Uold_plus = Uold[2:]
    Uold_minus = Uold[:-2]
    Uold_mid = Uold[1:-1]
    Unew[1:-1] = D*(Uold_plus + Uold_minus) + (1 - 2*D)*Uold_mid
    return Unew
if __name__ == ’__main__’:
import numpy as np
from Visualization import createAnimation
N = 20
y = np.linspace(0, 1, N + 1)
h = y[1] - y[0]
D = 0.5 # numerical diffusion number; try values above and below the stability limit
dt = D*h**2
T = 0.2 # simulation time
time = np.arange(0, T + dt, dt)
# Spatial BC
U_bottom = 1.0 # Must be 1 for analytical solution
U_top = 0.0 # Must be 0 for analytical solution
# solution matrices:
U = np.zeros((len(time), N + 1))
U[0, 0] = U_bottom # no slip condition at the plate boundary
U[0,-1] = U_top
#
Uanalytic = np.zeros((len(time), N + 1))
Uanalytic[0, 0] = U[0,0]
for n, t in enumerate(time[1:]):
    Uold = U[n, :]
    U[n + 1, :] = solveNextTimestepFTCS(Uold, D)
    U[n + 1, 0] = U_bottom   # re-impose the boundary values
    U[n + 1, -1] = U_top
    Uanalytic[n + 1, 1:-1] = analyticSolution(y[1:-1], t)
Figure 7.3: FTCS solutions of (7.10)-(7.11), velocity versus y at increasing times, for diffusion numbers D = 0.4, 0.45, 0.5, 0.503 and 0.51. The solution stays smooth up to D = 1/2 and develops growing oscillations beyond this value.
u_j^{n+1} = D (u_{j+1}^n + u_{j-1}^n) + (1 − 2D) u_j^n

In this case the coefficients are a1 = D, a2 = D, a3 = 1 − 2D, such that the sum of all coefficients is a1 + a2 + a3 = 1.
From (7.21) we get:

min(u_{j+1}^n, u_j^n, u_{j-1}^n) ≤ u_j^{n+1} ≤ max(u_{j+1}^n, u_j^n, u_{j-1}^n)
∂T/∂t = α ∂²T/∂x² + bT,  b = constant,  t < t_max

T_j^{n+1} = D (T_{j+1}^n + T_{j-1}^n) + (1 − 2D + Δt·b) T_j^n,  D = α Δt/(Δx)²

and with the coefficients a1 = a2 = D and a3 = 1 − 2D + Δt·b we get:

a1 + a2 + a3 = 1 + Δt·b ≤ 1

only for negative b-values.
The condition of positive coefficients then becomes 1 − 2D + Δt·b > 0, which corresponds to

0 < D < 1/2 + Δt·b/2

where b < 0.
In this situation the criterion implies that the T-values from the difference equation will not increase or become unstable for negative b. This result agrees well with the physics, since a negative b corresponds to a heat sink.
Note that (7.21) and (7.23) provide limits within which the solution is bounded, and they give a sufficient criterion to prevent the occurrence of unstable oscillations in the solution. This criterion may be far more restrictive than what is necessary for a stable solution. However, in many situations we may be satisfied with such a criterion. The PC-criterion is frequently used on difference equations for which a more exact analysis is difficult to pursue. Note that the PC-criterion may only be applied to explicit schemes if no extra information is provided. For parabolic equations we often have such extra information by means of max/min principles (see (7.5.3)). Further, the criterion must be modified in the case of increasing amplitudes.
One would of course hope for the existence of a necessary and sufficient condition for numerical stability. However, for general difference equations we have no such condition, which is hardly surprising. But a method which often leads to sufficient, and in some cases necessary, conditions for stability, is von Neumann's method. This method involves Fourier analysis of the linearized difference equation and may be applied to both explicit and implicit numerical schemes. We present this method in 7.4.
∂u/∂t = ∂²u/∂x²    (7.24)
To motivate the rationale behind the von Neumann analysis, we start by revisiting the analytical solution of (7.24) by the method of separation of variables. The aim of the method is to reduce the PDE to two ODEs which have analytical solutions. With this approach we assume that the solution u(x, t) may be constructed by separation of variables, i.e. as a product of two functions f(t) and g(x), which are functions of time and space only, respectively:

df(t)/dt · g(x) = f(t) · d²g(x)/dx²    (7.27)

or more conveniently:

df(t)/dt · 1/f(t) = d²g(x)/dx² · 1/g(x)    (7.28)
Observe that the left hand side of (7.28) is a function of t only, whereas the right hand side is a function of x only. As both sides of (7.28) must be satisfied for arbitrary values of t and x, the only possibility is that both sides equal a constant, which for convenience we denote −β². Thus our original PDE in (7.24) has been transformed into two ODEs:

df(t)/dt · 1/f(t) = −β²  →  df(t)/dt + β² f(t) = 0    (7.29)

d²g(x)/dx² · 1/g(x) = −β²  →  d²g(x)/dx² + β² g(x) = 0    (7.30)
The first ODE (7.29) is of first order and has the solution (verify by substitution):

f(t) = e^{−β² t}    (7.31)

whereas the second ODE (7.30) is of second order, with solution:

g(x) = A sin(βx) + B cos(βx)    (7.32)

The product of the two is then:

u(x, t) = e^{−β² t} [A sin(βx) + B cos(βx)]    (7.33)

and since (7.25) is a linear PDE, a sum or superposition of solutions like (7.33) will also be a solution:

u(x, t) = Σ_{m=0}^{∞} e^{−β_m² t} [A_m sin(β_m x) + B_m cos(β_m x)]
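The product solution (7.33) can also be checked symbolically. A small sympy sketch (symbol names are ours) verifies that it satisfies (7.24):

```python
import sympy as sp

x, t, beta, A, B = sp.symbols('x t beta A B')
u = sp.exp(-beta**2*t)*(A*sp.sin(beta*x) + B*sp.cos(beta*x))   # candidate solution (7.33)
print(sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2)))           # prints 0, so u_t = u_xx holds
```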
The coefficients A_m, B_m and β_m may be determined from the initial conditions and the boundary conditions, as demonstrated in Appendix G.6 of Numeriske Beregninger.
However, for the current purpose of demonstrating the von Neumann analysis, two particular solutions suffice:

u(x, t) = e^{−β² t} sin(βx)  or  u(x, t) = e^{−β² t} cos(βx)    (7.34)
which may be presented in a more compact form by making use of Euler's formula:

e^{ix} = cos(x) + i sin(x),  i = √−1    (7.35)

The more compact form of (7.34) is then:

u(x, t) = e^{−β² t} e^{iβx} = e^{−β² t + iβx}    (7.36)

Note. Both the real and the imaginary part of the complex expression (7.36) satisfy (7.24) and are therefore included in this somewhat more general solution. For a particular problem, (7.36) will be multiplied by complex coefficients such that the solution is real.
u(x_j, t_{n+1}) = e^{−β² t_{n+1}} e^{iβx_j} = e^{−β² (n+1)Δt} e^{iβx_j} = (e^{−β² Δt})^{n+1} e^{iβx_j}    (7.38)

If we divide (7.38) by (7.37), the spatial dependency vanishes and we get a spatially independent expression for how the solution is amplified (or damped):

G_a = u(x_j, t_{n+1})/u(x_j, t_n) = e^{−β² Δt}    (7.39)

For this reason G_a is commonly denoted the analytical amplification factor (see section 1.6.1 in Numeriske Beregninger). In our particular case we find that G_a < 1. Having introduced the amplification factor G_a, we may rewrite (7.37) as:
u_j^{n+1} = D (u_{j+1}^n + u_{j-1}^n) + (1 − 2D) u_j^n,  D = Δt/(Δx)²    (7.41)

Clearly, the solution of the difference equation (7.41) will deviate from the analytical solution of the differential equation for finite values of Δx and Δt. The deviation/error will increase for increasing values of Δx and Δt, and we have seen that when D > 1/2 the difference equation becomes unstable, with constantly increasing amplitudes and alternating signs. As (7.40) is a solution of the differential equation, we introduce a similar expression for the difference equation (7.41):

u_j^n → E_j^n = G^n e^{iβx_j}    (7.42)

where we have introduced the numerical amplification factor G:

G = E_j^{n+1}/E_j^n    (7.43)
which may be complex and is a function of Δt and β. From (7.43) we may relate the error E_j^n at the n-th time step to the initial error E_j^0:

E_j^n = G^n E_j^0    (7.44)

so that the amplitudes remain bounded when

|G| ≤ 1    (7.45)
The condition (7.45) is denoted strict, as no increase in amplitude is allowed. This condition will be relaxed in section (7.5.4).
Even though we have demonstrated the von Neumann method for a relatively simple diffusion equation, the method is applicable to more general equations. The method is simple: the error satisfies the same linear difference equation as the solution itself,

E_j^{n+1} = D (E_{j+1}^n + E_{j-1}^n) + (1 − 2D) E_j^n    (7.46)

We have previously (see (7.42)) established how the error E_j^n is related to the numerical amplification factor: E_j^n = G^n e^{iβy_j}, which upon substitution in (7.46) yields:

G = 1 + D (e^{iδ} + e^{−iδ} − 2)    (7.47)

where
δ = βh    (7.48)

Using terminology for periodic functions, β may be thought of as a wavenumber (angular frequency) and δ as a phase angle (see appendix A.3 in Numeriske Beregninger).
For further simplification we introduce some standard trigonometric formulas:

2 cos(x) = e^{ix} + e^{−ix},  2i sin(x) = e^{ix} − e^{−ix},  cos(x) = 1 − 2 sin²(x/2)    (7.49)

By substituting (7.49) in (7.47) we get the simpler expression:

G = 1 − 2D (1 − cos(δ)) = 1 − 4D sin²(δ/2)    (7.50)
As G is real, the condition |G| ≤ 1 has the following mathematical interpretation:

−1 ≤ G ≤ 1  or  −1 ≤ 1 − 4D sin²(δ/2) ≤ 1    (7.51)

The right hand side of (7.51) is always satisfied since D ≥ 0. For the left hand side of (7.51) we get

D ≤ 1/(2 sin²(δ/2))

which is true for all δ (−π ≤ δ ≤ π) when D ≤ 1/2. A von Neumann condition for stability of (7.16) may then be stated as:

0 < D < 1/2    (7.52)

where D = Δt/(Δy)² from (7.17).
As (7.16) is a two-level scheme with constant coefficients, the condition in (7.52) is both sufficient and necessary for numerical stability. This condition agrees well with the previous condition in (7.21), which is sufficient only.
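Since G in (7.50) depends only on D and δ, the condition |G| ≤ 1 can be checked by scanning δ ∈ [−π, π] numerically. A small sketch (function name ours):

```python
import numpy as np

def max_amplification(D, ndelta=1001):
    """Largest |G| over the phase angles for the FTCS scheme, with G from (7.50)."""
    delta = np.linspace(-np.pi, np.pi, ndelta)
    G = 1 - 4*D*np.sin(delta/2)**2
    return np.abs(G).max()

for D in (0.25, 0.5, 0.55):
    print(D, max_amplification(D))   # stays <= 1 up to D = 1/2, exceeds 1 beyond
```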
The stability condition imposes a severe limitation on the size of the time step Δt, which of course influences the CPU time.
A rough estimate of the CPU time for the FTCS scheme with constant D-value gives:

T2/T1 ≈ (h1/h2)³

where T1 and T2 are the CPU times for Δy = h1 and Δy = h2, respectively. For example, reducing the spatial step from h1 = 0.1 by a factor of 10 to h2 = 0.01 gives an increase in CPU time by a factor T2/T1 = 1000.
G = 1 − 2D (1 − cos(δ))  ⇒  dG/dδ = −2D sin(δ)

which has extremal values for δ = 0 and δ = ±π. δ = 0 gives G = 1, whereas δ = ±π gives G = 1 − 4D. Finally, the condition |G| ≤ 1 yields:

−1 ≤ 1 − 4D ≤ 1

The right hand side of the inequality is always satisfied for positive D, whereas the left hand side yields D ≤ 1/2 as before.
In many situations we will find that δ = ±π are the critical values, and it may therefore be wise to check these values first, but remember that this may not be sufficient to prove stability. On the other hand, these values may be sufficient to prove instability, since the condition |G| ≤ 1 must be satisfied for all δ-values in the range [−π, π].
G_a = exp(−δ² D)    (7.53)

And from (7.50) we have an expression for the numerical amplification factor:

G = 1 − 2D (1 − cos(δ)) = 1 − 4D sin²(δ/2)    (7.54)

In Figure 7.4 we plot G and G_a as functions of δ ∈ [0, π] for selected values of D. For small values of δ we observe small differences between G and G_a, with slightly larger values of G than of G_a, and with progressively increasing differences as δ grows.
Figure 7.4: The analytical amplification factor G_a and the numerical amplification factor G as functions of δ (in degrees) for D = 0.25, 0.35 and 0.5.
Further, we observe large errors for δ ∈ [90°, 180°], and the amplification factor G even has the wrong sign, which will lead to unphysical oscillations in the numerical solution.
The reason why the solution may still be usable is that the analytical solution has an amplitude G_a which diminishes strongly with increasing frequency (see the analytical solution in (7.13)). Such a smoothing effect is typical for parabolic PDEs. Yet the effect is noticeable, since in the current example we have a discontinuity at y = 0, so that the solution contains many high-frequency components.
Errors in the amplitude are commonly quantified by ε_D = G/G_a, denoted the diffusion error or dissipation error. No diffusion error corresponds to ε_D = 1. The term diffusive scheme will normally refer to a numerical scheme with decreasing amplitude for increasing t. For our FTCS scheme applied to the diffusion equation we have:

ε_D = [1 − 4D sin²(δ/2)] exp(δ² D)    (7.55)

The expression in (7.55) may be simplified by a Taylor expansion:

ε_D = 1 − D² δ⁴/2 + D δ⁴/12 + O(δ⁶)

which confirms that the dissipation error is small for low frequencies when D ≤ 1/2.
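The dissipation error (7.55) is easy to tabulate directly from (7.53) and (7.54); a small sketch (function name ours):

```python
import numpy as np

def dissipation_error(delta, D):
    """epsilon_D = G/G_a for the FTCS scheme, using (7.53) and (7.54)."""
    G = 1 - 4*D*np.sin(delta/2)**2
    Ga = np.exp(-delta**2*D)
    return G/Ga

delta = np.radians([10, 45, 90, 135, 180])
print(dissipation_error(delta, 0.25))   # close to 1 for small delta, degrades for large delta
```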
∂u/∂t |_j^n ≈ (u_j^{n+1} − u_j^{n−1})/(2Δt)

which gives the following difference equation:

u_j^{n+1} = u_j^{n−1} + 2D (u_{j-1}^n − 2u_j^n + u_{j+1}^n)    (7.56)

This is an explicit, three-level scheme called the Richardson scheme; see Figure 7.5.
Figure 7.5: Computational molecule for the Richardson scheme (levels n − 1, n and n + 1; nodes j − 1, j and j + 1).
Stability analysis.
Let us first try the sufficient criterion in (7.21). The condition of only positive coefficients cannot be satisfied, since the coefficient of the u_j^n term is always negative for D > 0. We therefore try von Neumann's method. Inserting (7.42) in (7.56) gives:

G² = 1 + 4DG·(cos(δ) − 1)

2G dG/dδ = 4D[(cos(δ) − 1) dG/dδ − G sin(δ)], which with dG/dδ = 0 gives max–min for δ = 0 and δ = ±π, as for the FTCS scheme.
δ = 0 gives G_{1,2} = ±1, while δ = ±π gives G_{1,2} = −4D ± √(1 + (4D)²), with instability since |G_2| > 1, as before.
Figure 7.6: Computational molecule for the DuFort-Frankel scheme (levels n − 1, n and n + 1; nodes j − 1, j and j + 1).
G² = [(1 − 2r) + 2rG(e^{iδ} + e^{−iδ})]/(1 + 2r) = [(1 − 2r) + 4rG cos(δ)]/(1 + 2r)

which gives the following quadratic equation:
The analysis shows that (7.58) is in fact unconditionally stable. The DuFort-Frankel scheme is the only simple, known explicit scheme of second order accuracy that has this property. It has therefore seen some use in the solution of the Navier-Stokes equations. In section (7.6) we shall see that the situation is not quite as rosy as the analysis above may suggest. To start the computation, the FTCS scheme can be used.
∂u/∂t |_j^{n+1/2} = (u_j^{n+1} − u_j^n)/Δt + O((Δt)²)    (7.59)

The problem is now to approximate ∂²u/∂x² |_j^{n+1/2} without the level n + 1/2 appearing explicitly in the scheme. This is achieved by the Crank-Nicolson approximation (1947):

∂²u/∂x² |_j^{n+1/2} = (1/2)[(u_{j+1}^{n+1} − 2u_j^{n+1} + u_{j-1}^{n+1})/(Δx)² + (u_{j+1}^n − 2u_j^n + u_{j-1}^n)/(Δx)²] + O((Δx)²)    (7.60)

u_{j-1}^{n+1} − 2(1 + 1/D) u_j^{n+1} + u_{j+1}^{n+1} = −u_{j-1}^n + 2(1 − 1/D) u_j^n − u_{j+1}^n    (7.61)

where D = Δt/(Δx)² as before.
The scheme is illustrated in Figure 7.7.
We see from (7.61) and Figure 7.7 that the scheme is implicit. This means that we must solve a system of equations. In this case the system is tridiagonal, so that the Thomas algorithm and the program tdma can be used.
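The tdma routine referred to above is not listed here; a minimal Python sketch of the Thomas algorithm for a tridiagonal system (our own names, sub-diagonal a, main diagonal b, super-diagonal c, right-hand side d) could look as follows:

```python
import numpy as np

def tdma(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.
    a: sub-diagonal (len n-1), b: main diagonal (len n),
    c: super-diagonal (len n-1), d: right-hand side (len n)."""
    n = len(b)
    cp, dp = np.zeros(n - 1), np.zeros(n)
    cp[0] = c[0]/b[0]
    dp[0] = d[0]/b[0]
    for i in range(1, n - 1):            # forward elimination
        m = b[i] - a[i - 1]*cp[i - 1]
        cp[i] = c[i]/m
        dp[i] = (d[i] - a[i - 1]*dp[i - 1])/m
    dp[n - 1] = (d[n - 1] - a[n - 2]*dp[n - 2])/(b[n - 1] - a[n - 2]*cp[n - 2])
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):       # back substitution
        x[i] = dp[i] - cp[i]*x[i + 1]
    return x
```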
We shall now investigate the stability of (7.61). To kill several birds with one stone, we consider the following difference equation, called the θ-scheme:

u_j^{n+1} = u_j^n + D [θ(u_{j+1}^{n+1} − 2u_j^{n+1} + u_{j-1}^{n+1}) + (1 − θ)(u_{j+1}^n − 2u_j^n + u_{j-1}^n)]    (7.62)

where 0 ≤ θ ≤ 1    (7.63)

For θ = 0 we recover the explicit FTCS scheme. For θ = 1/2 we get the Crank-Nicolson scheme. For θ = 1 we get an implicit scheme, often called the Laasonen scheme (1949). In fluid mechanics the name BTCS scheme (Backward Time Central Space), or the fully implicit scheme, is commonly used.
Inserting (7.42) in (7.62) and dividing by G^n e^{iβx_j} gives, with δ = β·h:
Figure 7.7: Computational molecule for the Crank-Nicolson scheme (levels n, n + 1/2 and n + 1; nodes j − 1, j and j + 1).

G = [1 − 4D(1 − θ) sin²(δ/2)] / [1 + 4Dθ sin²(δ/2)]    (7.64)
The condition for stability is |G| ≤ 1, or since G is real: −1 ≤ G ≤ 1.
Since 0 ≤ θ ≤ 1, the right-hand condition is satisfied for D ≥ 0. For the left-hand side:

2D sin²(δ/2)(1 − 2θ) ≤ 1  or  D(1 − 2θ) ≤ 1/2, since sin²(δ/2) ≤ 1

For 1/2 ≤ θ ≤ 1 the condition is satisfied for all D ≥ 0, i.e. the scheme is unconditionally stable.
For 0 ≤ θ < 1/2 the scheme is conditionally stable.
The stability condition is then:

D(1 − 2θ) ≤ 1/2,  0 ≤ θ ≤ 1    (7.65)
Using differentiation
We now write:

G = 1 + D(e^{iδ} + e^{−iδ} − 2)(Gθ + 1 − θ) = 1 + 2D(cos(δ) − 1)(Gθ + 1 − θ)

dG/dδ = 2D[(Gθ + 1 − θ) d(cos(δ) − 1)/dδ + (cos(δ) − 1) θ dG/dδ]

With dG/dδ = 0 we again get max–min for δ = 0 and δ = ±π (δ = 0 gives G = 1 as expected). δ = ±π gives G = [1 − 4D(1 − θ)]/[1 + 4Dθ], which is identical to (7.64) evaluated at δ/2 = 90°. The condition |G| ≤ 1 becomes −1 ≤ [1 − 4D(1 − θ)]/[1 + 4Dθ] ≤ 1, with the same result as before.
Accuracy
Let us look more closely at the accuracy of the θ-scheme as given in (7.62). Writing a simple Maple program, we find the local truncation error

T_j^n = [(1/2 − θ)Δt − (Δx)²/12] ∂⁴u/∂x⁴ + (Δt)²(1 − 3θ)/6 · ∂⁶u/∂x⁶ + . . .    (7.66)

We see that T_j^n = O(Δt) + O(Δx)² for θ = 0 and θ = 1, i.e. for the Euler scheme and the Laasonen scheme respectively, while T_j^n = O(Δt)² + O(Δx)² for θ = 1/2, which is the Crank-Nicolson scheme.
If we add the line

Tnj:= simplify(subs(theta =(1-h^2/(6*k))/2,Tnj));

to the program above, we find that T_j^n = O(Δt)² + O(Δx)⁴ if we choose D = 1/(6(1 − 2θ)), which lies within the stability limit D ≤ 1/(2(1 − 2θ)).
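The cancellation behind this choice of D can be checked symbolically. A small sympy sketch (symbol names are ours) shows that the leading bracket of (7.66) vanishes when Δt = D·(Δx)² with D = 1/(6(1 − 2θ)):

```python
import sympy as sp

theta, h, k = sp.symbols('theta h k', positive=True)
leading = (sp.Rational(1, 2) - theta)*k - h**2/12   # coefficient of d^4u/dx^4 in (7.66)
D = 1/(6*(1 - 2*theta))                             # the special diffusion number
print(sp.simplify(leading.subs(k, D*h**2)))         # prints 0: the O(dt)+O(dx^2) term vanishes
```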
More on stability.
We have found that the scheme is unconditionally stable for 1/2 ≤ θ ≤ 1. In practice it turns out that the scheme can produce oscillations around discontinuities for θ = 1/2. A θ-value > 0.5 damps these oscillations, and the damping is strongest for θ = 1. Let us look at an example where we use the heat conduction equation in dimensional form:

∂T/∂t = α ∂²T/∂x²,  T = T(x, t)

Boundary conditions:

α = 100 mm²/s

The numerical Fourier number:

D = α Δt/(Δx)²
We choose Δt = 0.25 s, so that Δx = 5/√D mm. Choosing D = 4 gives Δx = 2.5 mm. The figures on the next page show one computation with θ = 1/2 and one with θ = 1, i.e. the Crank-Nicolson scheme and the Laasonen scheme.
G = [1 − 4D(1 − θ) sin²(δ/2)] / [1 + 4Dθ sin²(δ/2)]

For θ = 1/2 we get:

G = [1 − 2D sin²(δ/2)] / [1 + 2D sin²(δ/2)]

For δ close to π, G will lie close to −1 for large values of D. This is seen most clearly by choosing δ = π = β·h, which gives G = (1 − 2D)/(1 + 2D).
From (7.44) we have:

u_j^{n+1} = [θD(u_{j-1}^{n+1} + u_{j+1}^{n+1}) + (1 − θ)D(u_{j-1}^n + u_{j+1}^n) + (1 − 2D(1 − θ)) u_j^n] / (1 + 2θD)    (7.67)

Summing the coefficients on the right hand side, we find that the sum equals 1. Next we must require the coefficients to be positive. This means 0 < θ < 1 and 1 − 2D(1 − θ) > 0. The last inequality is satisfied for D(1 − θ) < 1/2. Setting θ = 0 and θ = 1, we find that the sum equals 1 for these values too, so that we get the following condition for the PC-criterion to be satisfied:

D(1 − θ) ≤ 1/2    (7.68)
From the von Neumann analysis we found the following condition, (7.65):

D(1 − 2θ) ≤ 1/2    (7.69)

We see that (7.68) is considerably stricter than (7.69). While the C-N scheme with θ = 1/2 is unconditionally stable according to the von Neumann analysis, the PC-criterion requires D ≤ 1. The PC-criterion here gives a safe condition for the physical max–min criterion to be satisfied by the difference equation as well. A test with the example above confirms this criterion. Is there a less strict criterion? One can show the following condition:

D(1 − θ) ≤ (2 − θ)/(4(1 − θ))    (7.70)

For θ = 1/2 this criterion gives D ≤ 3/2. For θ = 3/4, (7.70) gives D ≤ 5, while the PC-criterion gives D ≤ 2.
The purpose of large D-values is to be able to use relatively large time steps when we want to follow the entire transient towards a stationary state. But we must of course keep accuracy in mind as well. Remember also that this is a simple one-dimensional difference equation with constant coefficients, which can be solved very quickly on a modern PC almost independently of Δt and Δx, as long as we stay within the stability region. Unsteady problems in three dimensions still require a lot of computing time.
∂T/∂t = α ∂²T/∂x² + bT,  b = constant,  t < t_max    (7.71)

With b < 0 we have a heat sink, while for b > 0 we have a heat source. In the first case we can use the strict criterion, but in the second case it is necessary to allow |G| > 1.
A particular solution of (7.71) is given by:

T(x, t) = e^{bt} · e^{−αβ²t} cos(βx) = e^{(b − αβ²)t} cos(βx)    (7.72)

Let us use (7.72) to determine an analytical amplification factor, see (7.39):

G_a = T(x_j, t_{n+1})/T(x_j, t_n) = exp[(b − αβ²)(t_{n+1} − t_n)] = e^{bΔt} · e^{−αβ²Δt}    (7.73)

We see that it is the factor e^{bΔt} that makes the amplitude grow for positive b.
A series expansion for small Δt:

e^{bΔt} = 1 + bΔt + (b²/2)(Δt)² + . . .    (7.74)
If we use the FTCS scheme, we get:

T_j^{n+1} = D(T_{j+1}^n + T_{j-1}^n) + (1 − 2D)T_j^n + Δt·b·T_j^n    (7.75)

with

D = α Δt/(Δx)²    (7.76)

If we use the PC-criterion, we find that the sum of the coefficients equals 1 + b·Δt, so this criterion can only be used for b < 0.
Using von Neumann's method on (7.75) we get:

G = 1 − 4D sin²(δ/2) + Δt·b    (7.77)

Setting D = 1/2 gives:

|G| ≤ |1 − 2 sin²(δ/2)| + |Δt·b| ≤ 1 + Δt·b,  b > 0

Comparing with (7.74), we see that we get good agreement between the analytical and the numerical amplification factors in this case.
|G| ≤ 1 + K·Δt    (7.78)

where K is a positive constant.
This means that we allow the amplitude to grow exponentially for t < t_max. In this case we can still use the strict condition if we reason as follows:
Since the source term in (7.71) does not contain any derivatives, we can disregard this term in the stability investigation. We have the same type of situation as in section 1.6 of Numeriske Beregninger, where stiff ordinary differential equations are discussed (see e.g. eq. (1.6.8) in section 1.6.1 of Numeriske Beregninger).
For a growing amplitude we must reduce the step length of the independent variable if we are to achieve a prescribed accuracy. For a decreasing amplitude we must instead stay below a maximum step length to obtain a stable computation.
Let us look at another example using the FTCS scheme:

∂u/∂t = α ∂²u/∂x² + a0 ∂u/∂x,  α > 0    (7.79)

This equation, called the advection-diffusion equation, is still a parabolic equation according to the classification scheme in section 4.3 of Numeriske Beregninger.
Applying the FTCS scheme to (7.79), with central differences for ∂u/∂x:

u_j^{n+1} = u_j^n + D(u_{j+1}^n − 2u_j^n + u_{j-1}^n) + a0 Δt/(2Δx) (u_{j+1}^n − u_{j-1}^n),  D = α Δt/(Δx)²

Using von Neumann's method:

G = 1 − 4D sin²(δ/2) + i·a0 (Δt/Δx) sin(δ)

which further gives:

|G|² = [1 − 4D sin²(δ/2)]² + [a0 (Δt/Δx) sin(δ)]² = [1 − 4D sin²(δ/2)]² + (a0² D/α) Δt sin²(δ)

Choosing D = 1/2 again:

|G| = √([1 − 2 sin²(δ/2)]² + (a0²/(2α)) Δt sin²(δ)) ≤ 1 + (a0²/(2α)) Δt

Here we have used √(x² + y²) ≤ |x| + |y|.
With K = a0²/(2α), we see that the general condition in (7.78) is satisfied.
The advection-diffusion equation is treated in more detail in section 6.10 of Numeriske Beregninger.
U_{i±1}^n ≡ U(x_i ± h, t_n) = U_i^n ± h ∂U/∂x|_i^n + (h²/2) ∂²U/∂x²|_i^n ± (h³/6) ∂³U/∂x³|_i^n + . . .    (7.83)

U_i^{n±1} ≡ U(x_i, t_n ± k) = U_i^n ± k ∂U/∂t|_i^n + (k²/2) ∂²U/∂t²|_i^n ± (k³/6) ∂³U/∂t³|_i^n + . . .    (7.84)

As an example, let us find the local truncation error T_i^n of the FTCS method applied to the diffusion equation L(U) = 0, where

L(U) = ∂U/∂t − ∂²U/∂x² = 0

T_i^n = F(U_i^n) = (U_i^{n+1} − U_i^n)/k − (U_{i-1}^n − 2U_i^n + U_{i+1}^n)/h²    (7.85)

Inserting (7.83) and (7.84) in (7.85):

T_i^n = [∂U/∂t − ∂²U/∂x²]_i^n + (k/2) ∂²U/∂t²|_i^n − (h²/12) ∂⁴U/∂x⁴|_i^n + (k²/6) ∂³U/∂t³|_i^n + O(k³, h⁴)

Since ∂U/∂t − ∂²U/∂x² = 0:

T_i^n = [(k/2) ∂²U/∂t² − (h²/12) ∂⁴U/∂x⁴]_i^n + higher order terms    (7.86)

(7.86) shows that T_i^n = O(k) + O(h²), as expected.
(7.86) can also be written:

T_i^n = (h²/12)·[6(k/h²) ∂²U/∂t² − ∂⁴U/∂x⁴]_i^n + O(k²) + O(h⁴)

By choosing D = k/h² = 1/6, we get:

T_i^n = O(k²) + O(h⁴)    (7.87)

Δt becomes very small for D = 1/6, but with today's PCs this is not a problem, apart from a possible accumulation of round-off errors.
7.6.2 Consistency
We say that the difference equation is consistent with the given differential equation if the local truncation error T_i^n → 0 when Δx and Δt → 0 independently of each other.
For the DuFort-Frankel scheme the truncation error is:

T_i^n = [∂U/∂t − ∂²U/∂x²]_i^n + [(k/h)² ∂²U/∂t² + (k²/6) ∂³U/∂t³ − (h²/12) ∂⁴U/∂x⁴]_i^n + O(k⁴/h², k⁴, h⁴)    (7.88)

Because of the factor (k/h)², it is important to specify how k and h → 0. The scheme is not automatically consistent with the given equation. Such a scheme is usually called conditionally consistent.
Case 1
Set r0 = k/h → k = r0·h, with r0 a constant > 0.
Inserting for k in (7.88):

T_i^n = [∂U/∂t − ∂²U/∂x² + r0² ∂²U/∂t²]_i^n + O(h²)

For h → 0, we see that the DuFort-Frankel scheme is now consistent with the hyperbolic equation ∂U/∂t + r0² ∂²U/∂t² = ∂²U/∂x², and not with the original diffusion equation.
Case 2
Set r0 = k/h² → k = r0·h². Inserting for k in (7.88):

T_i^n = [∂U/∂t − ∂²U/∂x²]_i^n + [r0²h² ∂²U/∂t² + (k²/6) ∂³U/∂t³ − (h²/12) ∂⁴U/∂x⁴]_i^n + O(r0⁴h⁶, k⁴, h⁴)
      = [r0²h² ∂²U/∂t² + (k²/6) ∂³U/∂t³ − (h²/12) ∂⁴U/∂x⁴]_i^n + O(r0⁴h⁶, k⁴, h⁴)

since ∂U/∂t − ∂²U/∂x² = 0.
We see that T_i^n → 0 for h and k → 0, with T_i^n = O(k²) + O(h²).
The scheme is now consistent with the diffusion equation. The D-F scheme can therefore be used with k = r0·h². But then we have a restriction on Δt, not from a stability requirement, but from the requirement of consistency. Inconsistent schemes usually arise when we manipulate the schemes after having Taylor-expanded them in the usual way.
7.6.5 Convergence
With U the exact solution of the differential equation and u the exact solution of the corresponding difference equation, we say that the difference equation is convergent if u → U as Δx and Δt → 0.
Cylinder:

∂u/∂t = ∂²u/∂r² + (1/r) ∂u/∂r

Sphere:

∂u/∂t = ∂²u/∂r² + (2/r) ∂u/∂r

The equations can then be written:

∂u/∂t = ∂²u/∂r² + (λ/r) ∂u/∂r,  λ = 0, 1, 2    (7.89)

λ = 0, with r → x, gives the familiar Cartesian case.
(7.89) is a partial differential equation with variable coefficients. We will now attempt a von Neumann analysis of this equation with the θ-scheme from section (7.5.3).
Stability analysis using the θ-scheme for radius r > 0
Set r_j = Δr·j, j = 0, 1, . . . and introduce D = Δt/(Δr)².
For r > 0:

u_j^{n+1} = u_j^n + D[θ(u_{j+1}^{n+1} − 2u_j^{n+1} + u_{j-1}^{n+1}) + (1 − θ)(u_{j+1}^n − 2u_j^n + u_{j-1}^n)]
          + (λD/(2j))[θ(u_{j+1}^{n+1} − u_{j-1}^{n+1}) + (1 − θ)(u_{j+1}^n − u_{j-1}^n)]    (7.90)

We perform a von Neumann analysis by inserting E_j^n = G^n·e^{iβr_j} = G^n e^{iδj}, with δ = β·Δr, and by using the usual formulas (7.49) and (7.50).
We get:

G·[1 + 4θD sin²(δ/2) − i(θλD/j) sin(δ)] = 1 − 4(1 − θ)D sin²(δ/2) + i((1 − θ)λD/j) sin(δ)

which, using the formula sin(δ) = 2 sin(δ/2) cos(δ/2) and the condition |G| ≤ 1, becomes:

[1 − 4(1 − θ)D sin²(δ/2)]² + ((1 − θ)²λ²D²/j²) sin²(δ) ≤ [1 + 4θD sin²(δ/2)]² + (θ²λ²D²/j²) sin²(δ)

and which further gives:

D(1 − 2θ)·[sin²(δ/2)·(4 − λ²/j²) + λ²/j²] ≤ 2,  j ≥ 1    (7.91)
It is not difficult to see that the bracketed term has its largest value for sin²(δ/2) = 1, i.e. for δ = π. (This can also be found by differentiating the term with respect to δ, which gives a maximum for δ = π.) The factor λ²/j² then drops out.
We get:

2D(1 − 2θ) ≤ 1    (7.92)

As in section (7.5.3), we must distinguish between two cases.
1. For 0 ≤ θ ≤ 1/2:

D = Δt/(Δr)² ≤ 1/(2(1 − 2θ))    (7.93)

2. For 1/2 ≤ θ ≤ 1:
Changing sign in (7.92): 2D(2θ − 1) ≥ −1. This condition is always satisfied, so the difference equation is unconditionally stable for these θ-values.
In other words, we obtain the same stability conditions as for the equation with constant coefficients, ∂u/∂t = ∂²u/∂r², where θ = 0 gives the FTCS scheme, θ = 1/2 the Crank-Nicolson scheme, and θ = 1 the Laasonen scheme.
Note that this analysis holds only for r > 0. We must therefore look at the equation at r = 0.
The term (λ/r) ∂u/∂r must be treated specially for r = 0.
L'Hopital's rule:

lim_{r→0} (λ/r) ∂u/∂r = λ ∂²u/∂r²  →  ∂u/∂t = (1 + λ) ∂²u/∂r² for r = 0    (7.94)

We have found that with the FTCS scheme we have the usual restriction D ≤ 1/2. Let us therefore investigate whether the boundary conditions give additional restrictions when we use the FTCS scheme.
Version 1
We discretize (7.94) for r = 0 and use the symmetry condition ∂u/∂r(0) = 0:

u_0^{n+1} = [1 − 2(1 + λ)D]·u_0^n + 2(1 + λ)D·u_1^n    (7.95)

If we apply the PC-criterion to (7.95), we get:

D ≤ 1/(2(1 + λ))    (7.96)

For λ = 0 we get the well-known condition D ≤ 1/2, while for the cylinder with λ = 1 we get D ≤ 1/4, and for the sphere with λ = 2 we get D ≤ 1/6. The question is whether these conditions for the cylinder and the sphere are necessary. We know that D ≤ 1/2 is both sufficient and necessary for λ = 0.
It is difficult to find a necessary and sufficient criterion in this case. We therefore look at an example with the startup of flow in a pipe, given as example (7.7.1).
Version 2
To avoid using a separate equation for r = 0, we discretize the symmetry condition ∂u/∂r(0) = 0 with second order forward differences:

∂u/∂r(0) = 0  →  (−3u_0^n + 4u_1^n − u_2^n)/(2Δr) = 0  →  u_0^n = (4u_1^n − u_2^n)/3    (7.97)

For the sphere there is a detailed analysis by Dennis Eisen in the journal Numerische Mathematik, vol. 10, 1967, pages 397-409. He shows that a necessary and sufficient condition for the solution of (7.90) together with (7.95) (for λ = 2 and θ = 0) is D < 1/3. He also shows that by avoiding (7.95), we get stability for the FTCS scheme when D < 1/2.
We now compute two cases to see which stability requirements we get in practice when we use the two versions of the boundary conditions.
We introduce the dimensionless variables

t = ντ/R0²,  r = R/R0,  u = U/k,  u_s = U_s/k,  where k = −(R0²/(4µ)) dp/dz    (7.99)

which inserted in (7.98) gives:

∂u/∂t = 4 + ∂²u/∂r² + (1/r) ∂u/∂r    (7.100)

Boundary conditions:

u(±1, t) = 0,  ∂u/∂r(0, t) = 0    (7.101)

The last one is a symmetry condition. We find the stationary solution u_s by setting ∂u/∂t = 0:

d²u_s/dr² + (1/r) du_s/dr = −4  →  (1/r) d/dr(r du_s/dr) = −4,  which gives:

du_s/dr = −2r + C1/r,  with C1 = 0 since du_s/dr(0) = 0

After another integration and use of the boundary conditions, we get the well-known parabolic velocity distribution:

u_s = 1 − r²    (7.102)
We now assume that we have a case with a fully developed profile as given by (7.102). Suddenly we remove the pressure gradient. From (7.98) we see that this gives a simpler equation. The velocity ω(r, t) for this case is given by:

ω(r, t) = u_s − u(r, t),  with ω = W/k    (7.103)

We shall now solve the following problem:

∂ω/∂t = ∂²ω/∂r² + (1/r) ∂ω/∂r    (7.104)

Boundary conditions:

ω(±1, t) = 0,  ∂ω/∂r(0, t) = 0    (7.105)

Initial condition:

ω(r, 0) = u_s = 1 − r²    (7.106)

The original problem is then recovered from u(r, t) = u_s − ω(r, t).
The grid runs from r = 0.0 to r = 1.0, with the nodes numbered 0, 1, 2, . . . , i, . . . , N − 1, N, N + 1.
For j = 0, using (7.94) with λ = 1, the θ-scheme gives:

(1 + 4Dθ) ω_0^{n+1} − 4Dθ ω_1^{n+1} = [1 − 4D(1 − θ)] ω_0^n + 4D(1 − θ) ω_1^n

The coefficient matrix of the resulting tridiagonal system is:

\[
\begin{pmatrix}
1+4D\theta & -4D\theta & & & & \\
-D\theta\bigl(1-\tfrac{0.5}{1}\bigr) & 1+2D\theta & -D\theta\bigl(1+\tfrac{0.5}{1}\bigr) & & & \\
 & -D\theta\bigl(1-\tfrac{0.5}{2}\bigr) & 1+2D\theta & -D\theta\bigl(1+\tfrac{0.5}{2}\bigr) & & \\
 & & \ddots & \ddots & \ddots & \\
 & & & -D\theta\bigl(1-\tfrac{0.5}{N-2}\bigr) & 1+2D\theta & -D\theta\bigl(1+\tfrac{0.5}{N-2}\bigr) \\
 & & & & -D\theta\bigl(1-\tfrac{0.5}{N-1}\bigr) & 1+2D\theta
\end{pmatrix}
\]
(7.114)
Note that we can also use a linear-algebra solver for θ = 0 (the FTCS scheme). In this case the elements of the first upper and lower diagonals are all 0. The determinant of the matrix is then the product of the elements on the main diagonal. The criterion for a non-singular matrix is that all elements on the main diagonal are ≠ 0, which is satisfied.
The Python function thetaSchemeNumpyV1 shows how one could implement the algorithm for solving for w^{n+1}. The function is part of the script/module startup.py, which may be downloaded into your LiClipse workspace.
# Theta-scheme and using L’hopital for r=0
def thetaSchemeNumpyV1(theta, D, N, wOld):
""" Algorithm for solving w^(n+1) for the startup of pipeflow
using the theta-schemes. L’hopitals method is used on the
governing differential equation for r=0.
Args:
theta(float): number between 0 and 1. 0->FTCS, 1/2->Crank, 1->Laasonen
D(float): Numerical diffusion number [dt/(dr**2)]
N(int): number of parts, or dr-spaces. In this case equal to the number of unknowns
wOld(array): The entire solution vector for the previous timestep, n.
Returns:
wNew(array): solution at timestep n+1
"""
superDiag = np.zeros(N - 1)
subDiag = np.zeros(N - 1)
mainDiag = np.zeros(N)
RHS = np.zeros(N)
j_array = np.linspace(0, N, N + 1)
tmp = D*(1. - theta)
superDiag[1:] = -D*theta*(1 + 0.5/j_array[1:-2])
mainDiag[1:] = np.ones(N - 1)*(1 + 2*D*theta)
subDiag[:] = -D*theta*(1 - 0.5/j_array[1:-1])
a = tmp*(1 - 1./(2*j_array[1:-1]))*wOld[0:-2]
b = (1 - 2*tmp)*wOld[1:-1]
c = tmp*(1 + 1/(2*j_array[1:-1]))*wOld[2:]
RHS[1:] = a + b + c
superDiag[0] = -4*D*theta
mainDiag[0] = 1 + 4*D*theta
    RHS[0] = (1 - 4*tmp)*wOld[0] + 4*tmp*wOld[1]
    # assemble the tridiagonal matrix and solve; a dedicated tridiagonal (tdma) solver may be used instead
    A = np.diag(mainDiag) + np.diag(superDiag, 1) + np.diag(subDiag, -1)
    wNew = np.linalg.solve(A, RHS)
    return wNew
Boundary condition given by (7.97)
Now let us look at the numerical solution when using the second order forward difference for the boundary condition at r = 0, repeated here for convenience:

ω_0^n = (4ω_1^n − ω_2^n)/3,  n = 0, 1, 2, . . .    (7.115)
The difference equations for the θ-scheme for j = 1, . . . , N − 1 are the same as in the previous case:

−Dθ(1 − 1/(2j)) ω_{j-1}^{n+1} + (1 + 2Dθ) ω_j^{n+1} − Dθ(1 + 1/(2j)) ω_{j+1}^{n+1}
  = D(1 − θ)(1 − 1/(2j)) ω_{j-1}^n + [1 − 2D(1 − θ)] ω_j^n + D(1 − θ)(1 + 1/(2j)) ω_{j+1}^n    (7.116)

Inserting Eq. (7.115) into Eq. (7.116) and collecting terms, we get the following difference equation for j = 1:

(1 + (4/3)Dθ) ω_1^{n+1} − (4/3)Dθ ω_2^{n+1} = [1 − (4/3)D(1 − θ)] ω_1^n + (4/3)D(1 − θ) ω_2^n    (7.117)

When ω_1, ω_2, . . . have been found, we can calculate ω_0 from (7.115).
The initial values are as in Eq. (7.113). The coefficient matrix now becomes:
\[
\begin{pmatrix}
1+\tfrac{4}{3}D\theta & -\tfrac{4}{3}D\theta & & & & \\
-D\theta\bigl(1-\tfrac{0.5}{2}\bigr) & 1+2D\theta & -D\theta\bigl(1+\tfrac{0.5}{2}\bigr) & & & \\
 & -D\theta\bigl(1-\tfrac{0.5}{3}\bigr) & 1+2D\theta & -D\theta\bigl(1+\tfrac{0.5}{3}\bigr) & & \\
 & & \ddots & \ddots & \ddots & \\
 & & & -D\theta\bigl(1-\tfrac{0.5}{N-2}\bigr) & 1+2D\theta & -D\theta\bigl(1+\tfrac{0.5}{N-2}\bigr) \\
 & & & & -D\theta\bigl(1-\tfrac{0.5}{N-1}\bigr) & 1+2D\theta
\end{pmatrix}
\]
(7.119)
The Python function thetaSchemeNumpyV2 shows how one could implement the algorithm for solving for w^{n+1}. The function is part of the script/module startup.py, which may be downloaded into your LiClipse workspace.
In Fig. 7.14 we see that the stability limit differs between the two versions of treating the boundary condition for FTCS. Using L'Hopital's rule for r = 0 gives a smaller stability limit, and we see that the instability arises at r = 0, before the general stability limit of the FTCS scheme (D = 1/2) is reached.
# Theta-scheme and using 2nd order forward difference for r=0
def thetaScheme_numpy_V2(theta, D, N, wOld):
""" Algorithm for solving w^(n+1) for the startup of pipeflow
using the theta-schemes. 2nd order forward difference is used
on the von-Neumann bc at r=0.
Args:
theta(float): number between 0 and 1. 0->FTCS, 1/2->Crank, 1->Laasonen
D(float): Numerical diffusion number [dt/(dr**2)]
N(int): number of parts, or dr-spaces.
wOld(array): The entire solution vector for the previous timestep, n.
Returns:
wNew(array): solution at timestep n+1
"""
superDiag = np.zeros(N - 2)
subDiag = np.zeros(N - 2)
mainDiag = np.zeros(N-1)
RHS = np.zeros(N - 1)
j_array = np.linspace(0, N, N + 1)
tmp = D*(1. - theta)
superDiag[1:] = -D*theta*(1 + 0.5/j_array[2:-2])
mainDiag[1:] = np.ones(N - 2)*(1 + 2*D*theta)
subDiag[:] = -D*theta*(1 - 0.5/j_array[2:-1])
a = tmp*(1 - 1./(2*j_array[2:-1]))*wOld[1:-2]
b = (1 - 2*tmp)*wOld[2:-1]
c = tmp*(1 + 1/(2*j_array[2:-1]))*wOld[3:]
RHS[1:] = a + b + c
superDiag[0] = -(4./3)*D*theta
mainDiag[0] = 1 + (4./3)*D*theta
    RHS[0] = (1 - (4./3)*tmp)*wOld[1] + (4./3)*tmp*wOld[2]
    # assemble the tridiagonal matrix and solve; a dedicated tridiagonal (tdma) solver may be used instead
    A = np.diag(mainDiag) + np.diag(superDiag, 1) + np.diag(subDiag, -1)
    wNew = np.linalg.solve(A, RHS)
    return wNew
We have chosen the values such that the ratio h̄·b/k = 1, which leads to a simpler analytical solution. The values given in (7.120), (7.121) and (7.122) are appropriate for nickel alloys.
Figure 7.14: Velocity profiles (velocity versus radius) for the pipe-flow startup problem, computed with the two treatments of the boundary condition at r = 0, for D = 0.4, 0.45, 0.5, 0.53 and 1.
[Sketch: a sphere of radius b, initially at temperature Tk, surrounded by a fluid at temperature Tv.]

∂T/∂r(0, t) = 0  (symmetry condition)    (7.124)

For r = b:

k ∂T/∂r = h̄·(Tv − Tb)    (7.125)

Initial condition:

T(r, 0) = Tk    (7.126)
In this example we restrict ourselves to the FTCS scheme, but the computation can easily be extended to the θ-scheme as shown in example (7.7.1). With r_j = Δr·j, Δr = 1/(N+1), j = 0, 1, . . . , N + 1 and D = α Δt/(Δr)², we get from (7.90), with u(r, t) → T(r, t), θ = 0 and λ = 2, the temperatures given in the table below (Ta denotes the analytical values):
r(cm) T (◦ C) Ta r(cm) T (◦ C) Ta
0.00 53.38 53.37 2.60 49.79 49.78
0.10 53.37 53.36 2.70 49.52 49.51
0.20 53.36 53.35 2.80 49.24 49.23
0.30 53.33 53.32 2.90 48.95 48.94
0.40 53.29 53.28 3.00 48.65 48.64
0.50 53.24 53.23 3.10 48.35 48.34
0.60 53.18 53.17 3.20 48.03 48.03
0.70 53.11 53.10 3.30 47.71 47.71
0.80 53.03 53.02 3.40 47.38 47.38
0.90 52.93 52.93 3.50 47.05 47.04
1.00 52.83 52.82 3.60 46.70 46.70
1.10 52.72 52.71 3.70 46.35 46.35
1.20 52.59 52.58 3.80 46.00 45.99
1.30 52.46 52.45 3.90 45.63 45.63
1.40 52.31 52.30 4.00 45.26 45.26
1.50 52.16 52.15 4.10 44.89 44.88
1.60 51.99 51.98 4.20 44.50 44.50
1.70 51.81 51.81 4.30 44.11 44.11
1.80 51.63 51.62 4.40 43.72 43.71
1.90 51.43 51.42 4.50 43.32 43.31
2.00 51.22 51.22 4.60 42.92 42.91
2.10 51.01 51.00 4.70 42.51 42.50
2.20 50.78 50.78 4.80 42.09 42.09
2.30 50.55 50.54 4.90 41.67 41.67
2.40 50.30 50.30 5.00 41.25 41.24
2.50 50.05 50.04
u = u0 f (x − a0 t) (8.2)
and represents a wave propagating with a constant velocity a0 with unchanged shape. When
a0 > 0, the wave propagates in the positive x-direction, whereas for a0 < 0, the wave propagates
in the negative x-direction.
Equation (8.1) may serve as a model equation for a compressible fluid; e.g. if u denotes pressure, it represents a pressure wave propagating with the velocity a0. The advection equation may also be used to model the propagation of pressure or flow in a compliant pipe, such as a blood vessel.
To allow for generalization we will, when appropriate, also write (8.1) in the following form:

∂u/∂t + ∂F/∂x = 0    (8.3)

where for the linear advection equation F(u) = a0·u.
We approximate the derivatives with a forward difference in time and a central difference in space:

∂u/∂t ≈ (u_j^{n+1} − u_j^n)/Δt,  ∂u/∂x ≈ (u_{j+1}^n − u_{j-1}^n)/(2Δx)

and we may substitute these approximations into the advection equation (8.1) to yield:

u_j^{n+1} = u_j^n − (C/2)(u_{j+1}^n − u_{j-1}^n)    (8.4)
Figure 8.1: Illustration of the first order in time, central in space scheme (unknown at level n + 1; nodes j − 1, j and j + 1 at level n).
We will try to solve the model equation (8.1) with the scheme (8.4) and the initial conditions illustrated in Fig. 8.2, with the mathematical representation:

u(x, 0) = 1 for x < 0.5,  u(x, 0) = 0 for x > 0.5
Solutions for three CFL numbers, C = 0.25, 0.5 and 1.0, are illustrated in Figure 8.3. Large oscillations are observed for all values of the CFL number, even though they seem to be slightly reduced for smaller C-values; thus we have indications of an unstable scheme. As a first approach, observe that the coefficient of u_{j+1}^n in (8.4) will always be negative, and thus the criterion of positive coefficients (PC-criterion) cannot be satisfied for any value of C.
However, as we know that the PC-criterion may be too strict in some cases, we proceed with a von Neumann analysis by introducing the numerical amplification factor:
Figure 8.3: Computed solutions with the scheme (8.4) for C = 1.0, 0.5 and 0.25. Dotted line: analytical solution, solid line: computed solution.
u_j^n → E_j^n = G^n · e^{iβx_j}    (8.6)

Substitution of (8.6) into (8.4) yields:

G^{n+1} e^{iβx_j} = G^n e^{iβx_j} − (C/2)(G^n e^{iβx_{j+1}} − G^n e^{iβx_{j-1}})

which after division by G^n e^{iβx_j} and introduction of the simplified notation δ = β·h yields:

G = 1 − (C/2)(e^{iβh} − e^{−iβh}) = 1 − iC sin(δ)

where the trigonometric relations:

2 cos(x) = e^{ix} + e^{−ix}    (8.7)
2i sin(x) = e^{ix} − e^{−ix}    (8.8)
cos(x) = 1 − 2 sin²(x/2)    (8.9)

have been introduced for convenience. Finally, we get the following expression for the numerical amplification factor:

|G| = √(1 + C² sin²(δ)) ≥ 1 for all C and δ

and consequently the FTCS scheme is unconditionally unstable for the advection equation and is thus not a viable scheme. Even a very small value of C will not suffice to damp the oscillations.
For a0 > 0 we instead use a backward (upwind) difference in space, so that the computational molecule involves the nodes j − 1 and j at level n:

u_j^{n+1} = u_j^n − C·(u_j^n − u_{j-1}^n) = (1 − C)u_j^n + C·u_{j-1}^n    (8.11)

(8.11) has accuracy O(Δt) + O(Δx).
|G| = √([1 + C(cos(δ) − 1)]² + C² sin²(δ)) = √(1 − 2C(1 − cos(δ))(1 − C))    (8.12)

The stability criterion |G| ≤ 1 gives the following condition for C:

1 − 2C(1 − cos(δ))(1 − C) ≤ 1  ⇒  C(1 − cos(δ))(1 − C) ≥ 0

or

C sin²(δ/2) (1 − C) ≥ 0

Stability interval:

0 < C ≤ 1    (8.13)

(8.13) also holds when we set C = |a0|·Δt/Δx. All stable, explicit two-level schemes for solving the advection equation have restrictions on the Courant number.
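The upwind scheme (8.11) corresponds to the solver referred to as ftbs in the code examples later in this chapter. A minimal vectorized sketch in the same style as those routines (it assumes the global Courant number c, as in the other listings):

```python
# Forward Time Backward Space (upwind) for the linear advection equation, eq. (8.11)
def ftbs(u):
    u[1:-1] = (1 - c)*u[1:-1] + c*u[:-2]
    return u[1:-1]
```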
Figure 8.5: Computed solutions with the upwind scheme (8.11) for C = 1.0, 0.5 and 0.25.
(u_j^{n+1} − u_j^n)/k + a0 (u_{j+1}^n − u_{j-1}^n)/(2h) = 0    (8.14)

Inserting the series expansions (7.83) and (7.84) from chapter 7 into (8.14):

(1/k)[u + k u_t + (k²/2) u_tt + · · · − u]_j^n + (a0/(2h))[(u + h u_x + (h²/2) u_xx + (h³/6) u_xxx + · · ·) − (u − h u_x + (h²/2) u_xx − (h³/6) u_xxx + · · ·)]_j^n = 0

Rearranged:

∂u/∂t + a0 ∂u/∂x = −(k/2) ∂²u/∂t² − (a0 h²/6) ∂³u/∂x³ + · · ·    (8.15)
We now use the differential equation ∂u/∂t + a0 ∂u/∂x = 0 to express the term −(k/2) ∂²u/∂t² through derivatives with respect to x.
The differential equation gives:

∂u/∂t = −a0 ∂u/∂x    (8.16)

Differentiating (8.16) with respect to t:

∂²u/∂t² = −a0 ∂²u/∂t∂x

Then differentiating (8.16) with respect to x and combining:

∂²u/∂t² = a0² ∂²u/∂x²    (8.17)

(8.17) inserted in (8.15):

∂u/∂t + a0 ∂u/∂x = −(k a0²/2) ∂²u/∂x² − (a0 h²/6) ∂³u/∂x³ + · · ·

which, introducing the Courant number C = a0 k/h, gives:

∂u/∂t + a0 ∂u/∂x = −(C h a0/2) ∂²u/∂x² − (a0 h²/6) ∂³u/∂x³ + · · ·    (8.18)
(8.18) shows clearly why the scheme in (8.14) is unstable. In fluid mechanics the coefficient ν_N = −C h a0/2 is often called the numerical viscosity, by analogy with the real viscosity in the equation ∂u/∂t = ν ∂²u/∂x². When we use the scheme in (8.14), it has the same effect as solving the diffusion equation ∂u/∂t = ν ∂²u/∂x² with negative viscosity, which is certainly unstable. (8.18) also explains why the oscillations in fig. 8.3 decrease with decreasing C. Equation (8.18) is called the modified equation for the given scheme and the corresponding differential equation.
(u_j^{n+1} − u_j^n)/k + a0 (u_j^n − u_{j-1}^n)/h = 0    (8.19)

Inserting the series expansions and rearranging:

∂u/∂t + a0 ∂u/∂x = −(k/2) ∂²u/∂t² + (a0 h/2) ∂²u/∂x² + · · ·    (8.20)

Since the differential equation is the same, we get from (8.17): ∂²u/∂t² = a0² ∂²u/∂x², which inserted in (8.20), together with C = a0 k/h, gives:

∂u/∂t + a0 ∂u/∂x = (a0 h/2)(1 − C) ∂²u/∂x² + · · ·    (8.21)

The coefficient ν_N = (a0 h/2)(1 − C) now gives the numerical viscosity.
For C > 1 this term becomes negative, and the difference equation is unstable. The necessary condition for stability is therefore C ≤ 1, in agreement with earlier results. Note that in this case ν_N increases with decreasing C. This means that a steep front gets "smeared" out when C becomes small, as is clearly seen in fig. 8.5.
Remarks: In the derivation of both eq. (8.18) and (8.21) we have used the differential equation to find the relation ∂²u/∂t² = a0² ∂²u/∂x². The problem is that the difference equation does not, in general, satisfy the differential equation. This means that instead of differentiating the differential equation, we should differentiate eqs. (8.15) and (8.20), respectively. This procedure, which is considerably more laborious, must be used in the general case. For the advection equation the first term to be transformed becomes correct when we use the differential equation. This also holds for more complicated transport equations without diffusion terms. Symbolic programs such as Maple come into their own here. Below is a Maple program that produces eq. (8.18) with a correction.
> restart:
> EQ:= (u(x,t+k)-u(x,t))/k + a*(u(x+h,t)-u(x-h,t))/(2*h):
> EQT:= mtaylor(EQ,[h,k]):
> MDE:=EQT:
> ELIM:=proc(i::integer,j::integer)
local DE,uxt,UXT:
global EQT,MDE:
DE:=convert(diff(EQT,x$i,t$(j-1)),D):
uxt:=convert(diff(u(x,t),x$i,t$j),D):
UXT:=solve(DE = 0,uxt):
subs(uxt = UXT,MDE):
end:
> MDE:=ELIM(0,2):
> MDE:=ELIM(1,1):
> MDE:=ELIM(0,3):
> MDE:=ELIM(1,2):
> MDE:=ELIM(2,1):
> # Substitute the Courant number C = a*k/h
> MDE:=expand(subs(k=C*h/a,MDE)):
> u2x:=convert(diff(u(x,t),x$2),D):
> u3x:=convert(diff(u(x,t),x$3),D):
> collect(MDE,[u2x,u3x]):
> RHSMDE:=-coeff(MDE,u2x)*convert(u2x,diff)
-coeff(MDE,u3x)*convert(u3x,diff);
RHSMDE := −(1/2) C h a ∂²u/∂x² − (1/6 a h² + 1/3 C² h² a) ∂³u/∂x³

Note that we here also obtain the term (1/3) C² h² a, which is missing in (8.18). The technique of the modified equation can also be used on non-linear differential equations. It can be shown that the method is related to von Neumann's method. Both Hirsch [11] and Anderson [23] give the modified equation for the most commonly used difference schemes.
ε_D = |G| / |G_a|    (8.24)

where |G_a| is the amplitude of the analytical amplification factor G_a, i.e. we have no dissipative error when ε_D = 1.
For problems and models related to convection we also need to consider the error related to timing or phase, and we introduce a measure of this kind of error as the dispersion error ε_φ:

ε_φ = φ / φ_a    (8.25)

where φ_a is the phase of the analytical amplification factor. Again, ε_φ = 1 corresponds to no dispersion error. Note that for parabolic problems, where φ_a = 0, it is common to use ε_φ = φ.
where we have inserted from (8.27). Thus |G_a| = 1 → ε_D = |G|.
Hence:

φ_a = ω·Δt = a0 β·Δt = a0 β (Δt/Δx)·Δx = C·δ    (8.28)

C is the Courant number and δ = β·Δx as usual.
From (8.28) we also get:

a0 = φ_a / (β·Δt)    (8.29)

a_num = φ / (β·Δt)    (8.30)

The dispersion error in (8.25) can then be written:

ε_φ = a_num / a0    (8.31)

When the dispersion error is larger than 1, the numerical propagation speed is larger than the physical one. The numerically computed solution will then appear to move faster than the exact solution. For ε_φ < 1 the numerically computed solution will instead appear to move more slowly than the exact one.
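Both error measures are straightforward to tabulate numerically. The sketch below (function name ours) uses the upwind amplification factor from (8.12), the phase of G, and φ_a = Cδ from (8.28), and reproduces the kind of curves shown in Figure 8.6:

```python
import numpy as np

def upwind_errors(delta, C):
    """Dissipation (eps_D) and dispersion (eps_phi) errors for the upwind scheme."""
    G = 1 + C*(np.cos(delta) - 1) - 1j*C*np.sin(delta)
    eps_D = np.abs(G)              # |Ga| = 1, so eps_D = |G|
    phi = -np.angle(G)             # numerical phase shift per time step
    eps_phi = phi/(C*delta)        # phi_a = C*delta, eq. (8.28)
    return eps_D, eps_phi

delta = np.radians(np.array([10, 45, 90, 135, 179]))
for C in (0.25, 0.5, 0.8, 1.0):
    print(C, upwind_errors(delta, C))   # both errors equal 1 for C = 1
```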
Let us take a closer look at the upwind scheme and the Lax-Wendroff method.

ε_φ = φ/φ_a = (1/(Cδ)) arctan[ C sin(δ) / (1 + C²(cos(δ) − 1)) ]    (8.37)

Figures 8.7 and 8.8 show (8.36) and (8.37) as functions of δ for different values of the Courant number C. The region where ε_D is close to 1 is larger in fig. 8.7 than for the upwind scheme in fig. 8.6. This shows the difference between a first order method and a second order method.
Figure 8.6: Diffusion error εD (left y-axis) and dispersion error εφ (right y-axis)
for the upwind scheme as a function of frequency for the upwind scheme. The
dotted lines of εφ correspond to same CFL-numbers as solid lines of εD with the
same color.
Figure 8.7 also shows that ε_φ is mostly less than 1, so that the numerical propagation speed is lower than the physical one. This is the cause of the oscillations seen, for example, in the computed solutions below. The von Neumann criterion guarantees that |G| ≤ 1, but it performs no test on the phase speed, and in this case the phase-speed error produces local amplitudes larger than 1. Recall that the PC-criterion ensures that no growing oscillations occur. The PC-criterion is therefore not satisfied for the Lax-Wendroff method applied to the advection equation. We could demand that our schemes satisfy the PC-criterion in order to prevent oscillations. Such schemes are called monotone schemes. Unfortunately, such schemes applied to convective equations are only of first order.
Figure 8.7: Diffusion error εD (left y-axis) and dispersion error εφ (right y-axis)
as a function of frequency for the Lax-Wendroff scheme. The dotted lines of εφ
correspond to same CFL-numbers as solid lines of εD with the same color.
Figure 8.8: The amplitude |G| as a function of the Courant number C and the phase angle δ (in radians).
[u_j^{n+1} − (1/2)(u_{j+1}^n + u_{j-1}^n)]/Δt = −(F_{j+1}^n − F_{j-1}^n)/(2Δx)    (8.38)

The Lax-Friedrich scheme is then obtained by isolating u_j^{n+1} on the left hand side:

u_j^{n+1} = (1/2)(u_{j+1}^n + u_{j-1}^n) − (Δt/(2Δx))(F_{j+1}^n − F_{j-1}^n)    (8.39)

By assuming a linear flux F = a0·u, it may be shown that the Lax-Friedrich scheme takes the form:

u_j^{n+1} = (1/2)(u_{j+1}^n + u_{j-1}^n) − (C/2)(u_{j+1}^n − u_{j-1}^n)    (8.40)

where we have introduced the CFL number as given by (8.5). A simple Python implementation is:
def lax_friedrich(u):
u[1:-1] = (u[:-2] +u[2:])/2.0 - c*(u[2:] - u[:-2])/2.0
return u[1:-1]
whereas a more general implementation in terms of the flux function F is:
def lax_friedrich_Flux(u):
u[1:-1] = (u[:-2] +u[2:])/2.0 - dt*(F(u[2:])-F(u[:-2]))/(2.0*dx)
return u[1:-1]
u_j^{n+1} = u_j^n + (Δt/Δx)(F_{j-1/2} − F_{j+1/2})    (8.41)

Before substituting (8.43) into the Taylor expansion (8.42), we approximate the spatial derivatives by central differences:

∂u/∂x |_j^n ≈ (u_{j+1}^n − u_{j-1}^n)/(2Δx)  and  ∂²u/∂x² |_j^n ≈ (u_{j+1}^n − 2u_j^n + u_{j-1}^n)/(Δx)²    (8.44)
Richtmyer scheme. One of the earliest extensions of the scheme is the Richtmyer two-step Lax–Wendroff method, which is in the conservative form (8.41) with the numerical fluxes computed as follows:

u_{j+1/2}^{n+1/2} = (1/2)(u_j^n + u_{j+1}^n) + (1/2)(Δt/Δx)(F_j^n − F_{j+1}^n)    (8.48)

F_{j+1/2} = F(u_{j+1/2}^{n+1/2})    (8.49)
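A Richtmyer step can be coded in the same vectorized style as the other solver routines in this chapter. The sketch below (function name ours) assumes the same global dt, dx and flux function F as in lax_friedrich_Flux:

```python
def richtmyer_Flux(u):
    # predictor (8.48): provisional values at the half points j+1/2, time n+1/2
    u_half = 0.5*(u[:-1] + u[1:]) - dt/(2.0*dx)*(F(u[1:]) - F(u[:-1]))
    # corrector: conservative update (8.41) with fluxes evaluated at the half points, eq. (8.49)
    u[1:-1] = u[1:-1] - dt/dx*(F(u_half[1:]) - F(u_half[:-1]))
    return u[1:-1]
```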
Lax-Wendroff two-step. A Lax-Wendroff two-step method is outlined in the following. In the first step, u(x, t) is evaluated at the half time step n + 1/2 and the half grid points j ± 1/2. In the second step, the values at the next time step n + 1 are calculated using the data at levels n and n + 1/2.
First step:

u_{j+1/2}^{n+1/2} = (1/2)(u_{j+1}^n + u_j^n) − (Δt/(2Δx))[F(u_{j+1}^n) − F(u_j^n)]    (8.50)

u_{j-1/2}^{n+1/2} = (1/2)(u_j^n + u_{j-1}^n) − (Δt/(2Δx))[F(u_j^n) − F(u_{j-1}^n)]    (8.51)

Second step:

u_j^{n+1} = u_j^n − (Δt/Δx)[F(u_{j+1/2}^{n+1/2}) − F(u_{j-1/2}^{n+1/2})]    (8.52)

Notice that for a linear flux F = a0·u, the two-step Lax-Wendroff method ((8.50)-(8.52)) may be shown to reduce to the one-step Lax-Wendroff method outlined in (8.45) or (8.47).
The MacCormack scheme is a predictor-corrector method:

u_j^p = u_j^n + (Δt/Δx)(F_j^n − F_{j+1}^n)

u_j^{n+1} = (1/2)[u_j^n + u_j^p + (Δt/Δx)(F_{j-1}^p − F_j^p)]    (8.53)

where we have introduced the convention F_j^p = F(u_j^p).
Note that in the predictor step we employ the conservative formula (8.41) for a time Δt with forward differencing, i.e. F_{j+1/2} = F_{j+1}^n = F(u_{j+1}^n). The corrector step may be interpreted as using (8.41) for a time Δt/2 with initial condition (1/2)(u_j^n + u_j^p) and backward differencing.
Another MacCormack scheme may be obtained by reversing the predictor and corrector steps. Note that the MacCormack scheme (8.53) is not written in the conservative form (8.41). However, it is easy to express the scheme in conservative form by expressing the flux in (8.41) as:

F_{j+1/2} = (1/2)(F_j^p + F_{j+1}^n)    (8.54)
For a linear flux F(u) = a0·u, one may show that the MacCormack scheme in (8.53) reduces to the two-step scheme:

u_j^p = u_j^n + C(u_j^n − u_{j+1}^n)    (8.55)

u_j^{n+1} = (1/2)[u_j^n + u_j^p + C(u_{j-1}^p − u_j^p)]    (8.56)

and substitution of (8.55) into (8.56) shows that the MacCormack scheme is identical to the Lax-Wendroff scheme (8.47) for the linear advection flux. A Python implementation is given by:
def macCormack(u):
up = u.copy()
up[:-1] = u[:-1] - c*(u[1:]-u[:-1])
u[1:] = .5*(u[1:]+up[1:] - c*(up[1:]-up[:-1]))
return u[1:-1]
import numpy as np
from matplotlib import animation
from scipy import interpolate
from numpy import where
from math import sin
LNWDT=2; FNT=15
plt.rcParams[’lines.linewidth’] = LNWDT; plt.rcParams[’font.size’] = FNT
# Lax-Wendroff
def lax_wendroff(u):
u[1:-1] = c/2.0*(1+c)*u[:-2] + (1-c**2)*u[1:-1] - c/2.0*(1-c)*u[2:]
return u[1:-1]
# Lax-Friedrich Advection
def lax_friedrich(u):
u[1:-1] = (u[:-2] +u[2:])/2.0 - c*(u[2:] - u[:-2])/2.0
return u[1:-1]
# Discretize
x = np.linspace(xmin, xmax, Nx+1) # discretization of space
dx = float((xmax-xmin)/Nx) # spatial step size
dt = c/a*dx # stable time step calculated from stability requirement
Nt = int((tmax-tmin)/dt) # number of time steps
time = np.linspace(tmin, tmax, Nt) # discretization of time
# solve from tmin to tmax
solvers = [ftbs,lax_wendroff,lax_friedrich,macCormack]
#solvers = [ftbs,lax_wendroff,macCormack]
#solvers = [ftbs,lax_wendroff]
#solvers = [ftbs]
u_solutions=np.zeros((len(solvers),len(time),len(x)))
uanalytical = np.zeros((len(time), len(x))) # holds the analytical solution
for k, solver in enumerate(solvers): # loop over the schemes listed in solvers
    u = f(x)                         # initial values from the chosen initial-value function f
    un = np.zeros((len(time), len(x)))
    un[0, :] = u
    for i, t in enumerate(time[1:]):
        if k == 0:
            uanalytical[i, :] = f(x - a*t) # compute analytical solution for this time step
        u_bc = interpolate.interp1d(x[-2:], u[-2:]) # interpolate at right bndry
        u[1:-1] = solver(u[:])                      # numerical solution of the interior
        u[-1] = u_bc(x[-1] - a*dt)                  # interpolate along a characteristic at the boundary
        un[i + 1, :] = u
    u_solutions[k, :, :] = un
### Animation
# First set up the figure, the axis, and the plot element we want to animate
fig = plt.figure()
ax = plt.axes(xlim=(xmin,xmax), ylim=(np.min(un), np.max(un)*1.1))
lines=[]    # list for plot lines for solvers and analytical solutions
legends=[]  # list for legends for solvers and analytical solutions
for solver in solvers:              # one plot line per solver
    line, = ax.plot([], [])
    lines.append(line)
    legends.append(solver.func_name)
line, = ax.plot([], [])             # add extra plot line for analytical solution
lines.append(line)
legends.append('Analytical')
plt.xlabel('x-coordinate [-]')
plt.ylabel('Amplitude [-]')
plt.legend(legends, loc=3, frameon=False)

def init():                         # clear all lines before the animation starts
    for line in lines:
        line.set_data([], [])
    return lines,

def animate_alt(i):                 # draw frame i: numerical solutions first, then the analytical one
    for k, line in enumerate(lines):
        if k < len(solvers):
            line.set_data(x, u_solutions[k,i,:])
        else:
            line.set_data(x, uanalytical[i,:])
    return lines,

# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig, animate_alt, init_func=init, frames=Nt, interval=100, blit=False)
plt.show()
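If the animation is to be written to a movie file, as for the clips referenced below, the FuncAnimation object may be saved directly; this assumes a movie writer such as ffmpeg is installed, and the file name and frame rate are examples only:

anim.save('advection.mp4', writer='ffmpeg', fps=20)  # file name and frame rate are examples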
Movie 5: Result from the code example above using a step function as the initial value. mov-ch6/step.mp4
Movie 6: Result from the code example above using a sine squared function as the initial value. mov-ch6/sine.mp4
The root mean square error used in the test is defined as

E = √( (1/N) Σ_{i=1}^{N} (f̂_i − f_i)² )

where N is the number of sampling points, f̂ is the numerical and f is the analytical solution. A code example performing this test on selected advection schemes is shown below. As can be seen in Figure 8.10, the Lax-Wendroff scheme quickly converges to its theoretical order, whereas the ftbs and Lax-Friedrich schemes converge to their theoretical orders more slowly.
def test_convergence_MES():
    from numpy import log
    from math import sqrt
    global c, dt, dx, a

    # Discretize (the initial number of spatial intervals is an assumed value;
    # it is doubled for every grid refinement below)
    Nx = 160
    x = np.linspace(xmin, xmax, Nx + 1)     # discretization of space
    dx = float((xmax-xmin)/Nx)              # spatial step size
    dt = c/a*dx                             # stable time step calculated from stability requirement
    time = np.arange(tmin, tmax + dt, dt)   # discretization of time
Figure 8.10: The root mean square error E for the various advection schemes (ftbs, lax_friedrich and lax_wendroff) as a function of the number of spatial nodes Nx (top), and the corresponding observed convergence rates log2(E_{n−1}/E_n) for consecutive grid pairs (n−1, n) (bottom).
    init_funcs = [init_step, init_sine4]    # Select stair case function (0) or sin^4 function (1)
    f = init_funcs[1]
    solvers = [ftbs, lax_friedrich, lax_wendroff]

    # error and order bookkeeping (the initialisation and the number of grid
    # refinements Ntds are assumed values; the full listing is abridged here)
    errorDict = dict((solver.func_name, []) for solver in solvers)
    orderDict = dict((solver.func_name, []) for solver in solvers)
    Ntds = 5                                # number of grid refinements

    for n in range(Ntds):
        for solver in solvers:
            u = f(x)                        # initial value
            for i, t in enumerate(time[1:]):
                u_bc = interpolate.interp1d(x[-2:], u[-2:])   # interpolate at right bndry
                u[1:-1] = solver(u[:])      # calculate numerical solution of interior
                u[-1] = u_bc(x[-1] - a*dt)  # interpolate along a characteristic to find the boundary value
            error = sqrt(np.sum((u - f(x - a*t))**2)/(Nx + 1))  # root mean square error at the final time
            errorDict[solver.func_name].append(error)
            if n>0:
                previousError = errorDict[solver.func_name][n-1]
                orderDict[solver.func_name].append(log(previousError/error)/log(2))
        print " finished iteration {0} of {1}, dx = {2}, dt = {3}".format(n+1, Ntds, dx, dt)

        # refine grid and dt:
        Nx *= 2
        x = np.linspace(xmin, xmax, Nx+1)      # new x-array, twice as big as the previous
        dx = float((xmax-xmin)/Nx)             # new spatial step size, half the value of the previous
        dt = c/a*dx                            # new stable time step
        time = np.arange(tmin, tmax + dt, dt)  # discretization of time

    # Plot error-values and corresponding order-estimates:
    fig, axarr = plt.subplots(2, 1, squeeze=False)
    lstyle = ['b', 'r', 'g', 'm']
    legendList = []
    N = Nx/2**(Ntds + 1)
    N_list = [N*2**i for i in range(1, Ntds+1)]
    N_list = np.asarray(N_list)
The python module scipy.optimize has many methods for parameter optimization and curve fitting. In the code example below we use scipy.optimize.curve_fit, which fits a function f(x; params) to a set of data ydata using a Levenberg-Marquardt algorithm with a least-squares minimization criterion. We start by loading the calculated root mean square errors E(h_x, h_t) of the schemes from "advection_scheme_errors.txt", which were calculated in the same manner as in the previous example. As can be seen from Figure 8.10, the ftbs and Lax-Friedrich schemes need many grid refinements before they reach their asymptotic range (the range in which they converge at a constant rate). In "advection_scheme_errors.txt" we have computed E(h_x, h_t) up to Nx = 89120, for which all schemes should be close to their asymptotic range. This procedure is demonstrated in the code example below, in which the following expressions for the errors are obtained:
ftbs → E = 1.3 h_x^{0.98} + 6.5 h_t^{0.98}    (8.59)

lax_friedrich → E = −1484 h_x^{1.9} + 26 h_t^{1.0}    (8.60)

lax_wendroff → E = −148 h_x^{2.0} + 364 h_t^{2.0}    (8.61)
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from sympy import symbols, lambdify, latex
def optimize_error_cxct(errorList, hxList, htList, p=1.0, q=1.0):
    """Optimize the coefficients Cx and Ct in the error model E = Cx*hx**p + Ct*ht**q,
    keeping the orders p and q fixed.
    NOTE: the enclosing function name, the default values of p and q, and the final
    fitting call are assumed in this excerpt.

    Args:
        errorList(array): array of calculated numerical discretization errors E
        hxList(array): array of spatial step lengths corresponding to errorList
        htList(array): array of temporal step lengths corresponding to errorList
        p (Optional[float]): spatial order. Assumed to be equal to theoretical value
        q (Optional[float]): temporal order. Assumed to be equal to theoretical value

    Returns:
        Cx0(float): the optimized value of Cx
        Ct0(float): the optimized value of Ct
    """
    def func(h, Cx, Ct):
        """ function to be matched with ydata:
        The function has to be on the form func(x, parameter1, parameter2,...,parametern)
        where x is the independent variable(s), and parameter1-n are the parameters to be optimized.
        """
        return Cx*h[0]**p + Ct*h[1]**q

    xdata = np.array([hxList, htList])          # independent variables
    ydata = errorList                           # data to be matched with expression in func
    Cx0, Ct0 = curve_fit(func, xdata, ydata)[0] # call scipy optimization tool curve_fit
    return round(Cx0, 2), round(Ct0, 2)
def optimize_error(errorList, hxList, htList, Cx0, p0, Ct0, q0):
    """Optimize all four parameters in the error model E = Cx*hx**p + Ct*ht**q,
    starting from the initial guesses Cx0, p0, Ct0, q0.
    NOTE: the enclosing function name is assumed in this excerpt.

    Args:
        errorList(array): array of calculated numerical discretization errors E
        hxList(array): array of spatial step lengths corresponding to errorList
        htList(array): array of temporal step lengths corresponding to errorList
        Cx0 (float): initial guessed value of Cx
        p0 (float): initial guessed value of p
        Ct0 (float): initial guessed value of Ct
        q0 (float): initial guessed value of q

    Returns:
        Cx(float): the optimized value of Cx
        p (float): the optimized value of p
        Ct(float): the optimized value of Ct
        q (float): the optimized value of q
    """
    def func(h, Cx, p, Ct, q):
        """ error model with all four parameters free """
        return Cx*h[0]**p + Ct*h[1]**q

    x0 = np.array([Cx0, p0, Ct0, q0])       # initial guessed values for Cx, p, Ct and q
    xdata = np.array([hxList, htList])      # independent variables
    ydata = errorList                       # data to be matched with expression in func
    gx, p, gt, q = curve_fit(func, xdata, ydata, x0)[0]   # call scipy optimization tool curve_fit
    gx, p, gt, q = round(gx,2), round(p, 2), round(gt,2), round(q, 2)
    return gx, p, gt, q
hxList = []            # hx and ht columns are needed for the curve fit below
htList = []
E_ftbs = []
E_lax_friedrich = []
E_lax_wendroff = []

lineNumber = 1
with open('advection_scheme_errors.txt', 'r') as FILENAME:
    """ Open advection_scheme_errors.txt for reading.
        structure of file:
        hx ht E_ftbs E_lax_friedrich E_lax_wendroff
        with the first line containing these headers, and the next lines containing
        the corresponding values.
    """
    for line in FILENAME:
        if lineNumber > 1:                  # skip the header line
            lineList = line.split()
            hxList.append(float(lineList[0]))
            htList.append(float(lineList[1]))
            E_ftbs.append(float(lineList[2]))
            E_lax_friedrich.append(float(lineList[3]))
            E_lax_wendroff.append(float(lineList[4]))
        lineNumber += 1

hxList = np.asarray(hxList)
htList = np.asarray(htList)
E_ftbs = np.asarray(E_ftbs)
E_lax_friedrich = np.asarray(E_lax_friedrich)
E_lax_wendroff = np.asarray(E_lax_wendroff)
# fit the error model for each scheme and plot the result
# (the driver loop below is assumed; the initial guesses are examples)
h_x, h_t = symbols('h_x h_t')
lstyle = ['b', 'r', 'g']
XtickList = range(1, len(E_ftbs) + 1)
fig, ax = plt.subplots(2, 1, squeeze=False)

for k, E in enumerate([E_ftbs, E_lax_friedrich, E_lax_wendroff]):
    # optimize using only the last 5 values of E and h for the scheme,
    # as the first values may be outside the asymptotic range:
    Cx, p, Ct, q = optimize_error(E[-5:], hxList[-5:], htList[-5:], 1.0, 1.0, 1.0, 1.0)
    errorExpr = Cx*h_x**p + Ct*h_t**q
    print errorExpr
    errorExprHx = Cx*h_x**p
    errorExprHt = Ct*h_t**q
    errorFunc = lambdify((h_x, h_t), errorExpr, 'numpy')
    errorFuncHx = lambdify(h_x, errorExprHx, 'numpy')
    errorFuncHt = lambdify(h_t, errorExprHt, 'numpy')

    # plotting:
    ax[0][0].plot(XtickList, np.log10(E), lstyle[k])
    ax[0][0].plot(XtickList, np.log10(errorFunc(hxList, htList)), lstyle[k] + '--')
    ax[1][0].plot(XtickList[-5:], E[-5:], lstyle[k])
    ax[1][0].plot(XtickList[-5:], errorFunc(hxList, htList)[-5:], lstyle[k] + '--')
    ax[1][0].plot(XtickList[-5:], errorFuncHx(hxList[-5:]), lstyle[k] + '-.')
    ax[1][0].plot(XtickList[-5:], errorFuncHt(htList[-5:]), lstyle[k] + ':')

plt.show()
Figure 8.11: Optimized error expressions for the ftbs, lax_friedrich and lax_wendroff schemes, plotted together with the measured errors E (top row: all samples, logarithmic scale; bottom row: the last five samples, with the separate contributions C_x h_x^p and C_t h_t^q also shown).
Figure 8.12: The solutions at time t=T=1 for different grids corresponding
to the code-example above. A sin4 wave with period T = 0.4 travelling with
wavespeed a = 1. The solutions were calculated with a cfl-number of 0.8
u_j^{n+1} = u_j^n + (∆t/∆x) (F_{j-1/2} - F_{j+1/2})    (8.62)
The general flux limiter method solves (8.62) with the following definitions of the fluxes F_{j-1/2} and F_{j+1/2}:

F_{j-1/2} = F_l(j-1/2) + φ(r_{j-1/2}) [F_h(j-1/2) - F_l(j-1/2)]    (8.63)

F_{j+1/2} = F_l(j+1/2) + φ(r_{j+1/2}) [F_h(j+1/2) - F_l(j+1/2)]    (8.64)
where φ(r) is the limiter function and r is a measure of the smoothness of the solution. The limiter function φ is designed to be equal to one in regions where the solution is smooth, in which case F reduces to F_h and a pure high order scheme is recovered. In regions where the solution is not smooth (i.e. in regions containing sharp gradients and discontinuities) φ is designed to be equal to zero, in which case F reduces to F_l and a pure low order scheme is recovered. As a measure of the smoothness, r is commonly taken to be the ratio of consecutive gradients

r_{j-1/2} = (u_{j-1} - u_{j-2})/(u_j - u_{j-1}),    r_{j+1/2} = (u_j - u_{j-1})/(u_{j+1} - u_j)    (8.65)

In regions where the solution is constant (zero gradients), some special treatment of r is needed to avoid division by zero. However, the choice of this treatment is not important, since in regions where the solution is not changing it is irrelevant whether a high or low order method is used.
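One possible safeguarded evaluation of r is sketched below; the function name and the tolerance eps are our own choices:

import numpy as np

def smoothness_ratio(u, eps=1.0e-14):
    """Return r_{j+1/2} of (8.65) for the interfaces j+1/2, j = 1, ..., N-1."""
    du = u[1:] - u[:-1]                                   # consecutive differences u_{j+1} - u_j
    denom = np.where(np.abs(du[1:]) < eps, eps, du[1:])   # avoid division by zero where u is constant
    return du[:-1]/denom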
Lax-Wendroff limiters. The Lax-Wendroff scheme for the advection equation may be written in the form of (8.62) by defining the Lax-Wendroff flux F_LW as:

F_LW(j-1/2) = F_LW(u_{j-1}, u_j) = a u_{j-1} + (1/2) a (1 - (∆t/∆x) a) (u_j - u_{j-1})    (8.66)

F_LW(j+1/2) = F_LW(u_j, u_{j+1}) = a u_j + (1/2) a (1 - (∆t/∆x) a) (u_{j+1} - u_j)    (8.67)

which may be shown to be the Lax-Wendroff two-step method condensed to a one-step method, as outlined in Section 8.5.1. Notice the term (∆t/∆x) a = c. The Lax-Wendroff flux thus takes the form of an upwind flux (a u_{j-1} or a u_j) with an additional anti-diffusive term ((1/2) a (1 - c)(u_j - u_{j-1}) for F_LW(j-1/2)). A flux-limited version of the Lax-Wendroff scheme could then be obtained by adding a limiter function φ to the second term:
F(j-1/2) = a u_{j-1} + (1/2) φ(r_{j-1/2}) a (1 - c) (u_j - u_{j-1})    (8.68)

F(j+1/2) = a u_j + (1/2) φ(r_{j+1/2}) a (1 - c) (u_{j+1} - u_j)    (8.69)
When φ = 0 the scheme is the upwind scheme, and when φ = 1 the scheme is the Lax-Wendroff scheme. Many different limiter functions have been proposed; the optimal function, however, depends on the solution. A scheme is called TVD (total variation diminishing) if the total variation of the solution, TV(u^n) = Σ_j |u^n_{j+1} - u^n_j|, does not increase from one time step to the next, which ensures that the solution remains monotonically varying in regions where it should be monotone. Sweby showed that in order for the flux-limited scheme to possess this property of the low order scheme, the following must hold

0 ≤ (φ(r)/r, φ(r)) ≤ 2    (8.70)

where we in addition require

φ(r) = 0,  r ≤ 0    (8.71)

Hence, for the scheme to be TVD the limiter must lie in the shaded region of Figure 8.14, where the limiter functions for the two second-order schemes, Lax-Wendroff and Warming and Beam, are also plotted.
Figure 8.14: TVD region for flux limiters (shaded), together with the limiters for the second-order schemes: Warming and Beam (φ = r) and Lax-Wendroff (φ = 1).
For a scheme using only u_{j-2}, u_{j-1}, u_j and u_{j+1} to be second-order accurate whenever possible, the limiter must be a weighted average of the Lax-Wendroff limiter (φ = 1) and the Warming and Beam limiter (φ = r), as shown by Sweby. With this extra constraint a second-order TVD limiter must lie in the shaded region of Figure 8.15.
Note that φ(0) = 0, meaning that second-order accuracy must be lost at extrema. All schemes pass through the point φ(1) = 1, which is a general requirement for second-order schemes. Many limiters that satisfy the above constraints have been proposed. Here we will only consider a few:
Figure 8.15: Second-order TVD region for flux limiters (the Sweby diagram).
• Superbee
• van Leer
• Min-Mod

[Figure: the Superbee, van Leer and Min-Mod limiter functions φ(r) in the Sweby diagram, together with a computed solution u(x) and the corresponding smoothness measure r.]
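The standard definitions of these three limiters, together with a sketch of the resulting flux-limited update for the linear advection equation, are given below. The function names are our own and the boundary treatment is assumed to be handled separately; with phi identically equal to one the update reduces to the Lax-Wendroff scheme, and with phi identically zero to the upwind scheme.

import numpy as np

def minmod(r):
    return np.maximum(0.0, np.minimum(1.0, r))

def superbee(r):
    return np.maximum(0.0, np.maximum(np.minimum(2.0*r, 1.0), np.minimum(r, 2.0)))

def van_leer(r):
    return (r + np.abs(r))/(1.0 + np.abs(r))

def limited_lax_wendroff(u, c, phi, eps=1.0e-14):
    """One time step of the flux-limited scheme (8.68)-(8.69) for du/dt + a du/dx = 0, a > 0.
    c = a*dt/dx is the CFL number; the first two and the last node are left untouched."""
    du = u[1:] - u[:-1]                                       # u_{j+1} - u_j at every interface
    r = du[:-1]/np.where(np.abs(du[1:]) < eps, eps, du[1:])   # smoothness measure r_{j+1/2}
    flux = u[1:-1] + 0.5*(1.0 - c)*phi(r)*du[1:]              # F_{j+1/2}/a for j = 1, ..., N-1
    unew = u.copy()
    unew[2:-1] = u[2:-1] - c*(flux[1:] - flux[:-1])           # update of the interior nodes
    return unew

# example usage on a step profile:
# x = np.linspace(0.0, 1.0, 101)
# u = np.where(x < 0.5, 1.0, 0.0)
# u = limited_lax_wendroff(u, 0.8, van_leer)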
∂u/∂t + ∂F(u)/∂x = 0    (8.81)
u_j^{n+1} = u_j^n − (∆t/∆x) [F(u_j^n) − F(u_{j−1}^n)]    (8.82)

where we have assumed a forward propagating wave (a(u) = F′(u) > 0, i.e. u > 0 for the burger equation). In the opposite case ∂F(u)/∂x is approximated by (1/∆x) [F(u_{j+1}^n) − F(u_j^n)].
8.8.2 Lax-Friedrich
The Lax-Friedrich scheme for the conservation equation takes the form given in (8.38), repeated here for convenience:

u_j^{n+1} = (1/2) (u_{j+1}^n + u_{j−1}^n) − (∆t/(2∆x)) (F_{j+1}^n − F_{j−1}^n)    (8.83)
8.8.3 Lax-Wendroff
As outlined in Section 8.5.1, the general Lax-Wendroff two-step method takes the form given in (8.50)–(8.52), repeated here for convenience:
First step:

u_{j+1/2}^{n+1/2} = (1/2) (u_{j+1}^n + u_j^n) − (∆t/(2∆x)) [F(u_{j+1}^n) − F(u_j^n)]    (8.84)

u_{j−1/2}^{n+1/2} = (1/2) (u_j^n + u_{j−1}^n) − (∆t/(2∆x)) [F(u_j^n) − F(u_{j−1}^n)]    (8.85)

Second step:

u_j^{n+1} = u_j^n − (∆t/∆x) [F(u_{j+1/2}^{n+1/2}) − F(u_{j−1/2}^{n+1/2})]    (8.86)
In the previous section (Section 8.5.1) we showed how the two-step Lax-Wendroff method could be condensed to a one-step method. The same procedure may be applied to the general transport equation given by (8.81). However, for the nonlinear case (8.43) no longer holds. This may be overcome by rewriting (8.43):

(∂u/∂t)_j^n = −(∂F/∂x)_j^n  and  (∂²u/∂t²)_j^n = −(∂²F(u)/∂t∂x)_j^n = −(∂/∂x (∂F(u)/∂t))_j^n    (8.87)

(∂/∂x (∂F(u)/∂t))_j^n = (∂/∂x (∂F/∂u · ∂u/∂t))_j^n = −(∂/∂x (a(u) ∂F/∂x))_j^n    (8.88)

Substituting these expressions into a second-order Taylor expansion of u_j^{n+1} in time, and approximating the remaining spatial derivatives with central differences (the outer derivative in (8.88) being evaluated at the half-indices j ± 1/2), yields the one-step scheme

u_j^{n+1} = u_j^n − (∆t/(2∆x)) (F_{j+1} − F_{j−1}) + (∆t²/(2∆x²)) [a_{j+1/2} (F_{j+1} − F_j) − a_{j−1/2} (F_j − F_{j−1})]    (8.89)

where a(u) is the wave speed, i.e. the Jacobian of F, F′(u), which is u for the burger equation. As indicated, a(u) has to be approximated at the indices (j + 1/2) and (j − 1/2). This may simply be done by averaging the neighboring values:

a_{j+1/2} = (1/2) (u_j^n + u_{j+1}^n)    (8.90)
for the burger equation. Another method that assures conservation is to use the approximation

a_{j+1/2} = (F_{j+1}^n − F_j^n)/(u_{j+1}^n − u_j^n)  if u_{j+1} ≠ u_j,  and  a_{j+1/2} = u_j  otherwise    (8.91)
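A sketch of the resulting one-step scheme for the burger equation, using the average (8.90) for a_{j±1/2}, is given below; the function names follow the conventions of the other code examples in this chapter, but are not taken from its code base:

import numpy as np

def F(u):
    return 0.5*u**2                      # flux for the burger equation

def lax_wendroff_one_step(u, dt, dx):
    """One time step of (8.89) with a_{j+1/2} from (8.90); boundary nodes are left untouched."""
    Fu = F(u)
    a_p = 0.5*(u[1:-1] + u[2:])          # a_{j+1/2}, eq. (8.90)
    a_m = 0.5*(u[:-2] + u[1:-1])         # a_{j-1/2}
    u[1:-1] = (u[1:-1] - dt/(2.0*dx)*(Fu[2:] - Fu[:-2])
               + dt**2/(2.0*dx**2)*(a_p*(Fu[2:] - Fu[1:-1]) - a_m*(Fu[1:-1] - Fu[:-2])))
    return u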
8.8.4 MacCormack
The MacCormack scheme was discussed in Section 8.5.1 and is given by (8.92) and (8.93), repeated here for convenience:

u_j^p = u_j^n + (∆t/∆x) (F_j^n − F_{j+1}^n)    (8.92)

u_j^{n+1} = (1/2) (u_j^n + u_j^p) + (1/2) (∆t/∆x) (F_{j−1}^p − F_j^p)    (8.93)
Finally, we consider the addition of a source term Q:

∂u/∂t + u ∂u/∂x = Q    (8.95)

In this chapter we will consider source terms that are functions of x and t only. The basic approach to adding source terms to our schemes is simply to add a term Q_j^n to the discrete equations. The schemes mentioned above, extended with source terms, are summarized in Table 8.8.5.
Table 8.8.5: Schemes for the conservation equation with a source term Q.

Upwind (order 1):
  u_j^{n+1} = u_j^n − (∆t/∆x) (F_j^n − F_{j−1}^n) + ∆t Q_j^n

Lax-Friedrichs (order 1):
  u_j^{n+1} = (1/2) (u_{j+1}^n + u_{j−1}^n) − (∆t/(2∆x)) (F_{j+1}^n − F_{j−1}^n) + ∆t Q_j^n

Lax-Wendroff, two-step (order 2):
  u_{j+1/2}^{n+1/2} = (1/2) (u_{j+1}^n + u_j^n) − (∆t/(2∆x)) [F(u_{j+1}^n) − F(u_j^n)] + (∆t/2) Q_{j+1/2}^n
  u_{j−1/2}^{n+1/2} = (1/2) (u_j^n + u_{j−1}^n) − (∆t/(2∆x)) [F(u_j^n) − F(u_{j−1}^n)] + (∆t/2) Q_{j−1/2}^n
  u_j^{n+1} = u_j^n − (∆t/∆x) [F(u_{j+1/2}^{n+1/2}) − F(u_{j−1/2}^{n+1/2})] + ∆t Q_j^n

MacCormack (order 2):
  u_j^p = u_j^n − (∆t/∆x) (F_{j+1}^n − F_j^n) + ∆t Q_j^{n+1/2}
  u_j^{n+1} = (1/2) (u_j^n + u_j^p) − (1/2) (∆t/∆x) (F_j^p − F_{j−1}^p) + (∆t/2) Q_j^{n+1/2}
Examples of how to implement these schemes are given below, where we have used RHS(t, x) instead of Q for the source term.
ftbs or upwind:
def ftbs(u, t):
    """method that solves u(n+1), for the scalar conservation equation with source term:
        du/dt + dF/dx = RHS,
        where F = 0.5u^2 for the burger equation
        with use of the forward in time backward in space (upwind) scheme

        Args:
            u(array): an array containing the previous solution of u, u(n).
            t(float): time at t(n+1)

        Returns:
            u[1:-1](array): the solution of the interior nodes for the next timestep, u(n+1).
    """
    u[1:-1] = u[1:-1] - (dt/dx)*(F(u[1:-1]) - F(u[:-2])) + dt*RHS(t-0.5*dt, x[1:-1])
    return u[1:-1]
Lax-Friedrichs:
def lax_friedrich_Flux(u, t):
    """method that solves u(n+1), for the scalar conservation equation with source term:
        du/dt + dF/dx = RHS,
        where F = 0.5u^2 for the burger equation
        with use of the lax-friedrich scheme

        Args:
            u(array): an array containing the previous solution of u, u(n).
            t(float): time at t(n+1)

        Returns:
            u[1:-1](array): the solution of the interior nodes for the next timestep, u(n+1).
    """
    # source term averaged over the two neighbouring nodes
    u[1:-1] = (u[:-2] + u[2:])/2.0 - dt*(F(u[2:]) - F(u[:-2]))/(2.0*dx) + dt*(RHS(t, x[:-2]) + RHS(t, x[2:]))/2.0
    return u[1:-1]
Lax-Wendroff two-step:
def Lax_W_Two_Step(u, t):
    """method that solves u(n+1), for the scalar conservation equation with source term:
        du/dt + dF/dx = RHS,
        where F = 0.5u^2 for the burger equation
        with use of the two-step Lax-Wendroff scheme

        Args:
            u(array): an array containing the previous solution of u, u(n).
            t(float): time at t(n+1)

        Returns:
            u[1:-1](array): the solution of the interior nodes for the next timestep, u(n+1).
    """
    ujm = u[:-2].copy()   # u(j-1)
    uj = u[1:-1].copy()   # u(j)
    ujp = u[2:].copy()    # u(j+1)
    up_m = 0.5*(ujm + uj) - 0.5*(dt/dx)*(F(uj) - F(ujm)) + 0.5*dt*RHS(t-0.5*dt, x[1:-1] - 0.5*dx)  # u(n+1/2, j-1/2)
    up_p = 0.5*(uj + ujp) - 0.5*(dt/dx)*(F(ujp) - F(uj)) + 0.5*dt*RHS(t-0.5*dt, x[1:-1] + 0.5*dx)  # u(n+1/2, j+1/2)
    u[1:-1] = uj - (dt/dx)*(F(up_p) - F(up_m)) + dt*RHS(t-0.5*dt, x[1:-1])   # second step
    return u[1:-1]
macCormack:
def macCormack(u, t):
    """method that solves u(n+1), for the scalar conservation equation with source term:
        du/dt + dF/dx = RHS,
        where F = 0.5u^2 for the burger equation
        with use of the MacCormack scheme

        Args:
            u(array): an array containing the previous solution of u, u(n).
            t(float): time at t(n+1)

        Returns:
            u[1:-1](array): the solution of the interior nodes for the next timestep, u(n+1).
    """
    up = u.copy()
    up[:-1] = u[:-1] - (dt/dx)*(F(u[1:]) - F(u[:-1])) + dt*RHS(t-0.5*dt, x[:-1])
    u[1:] = .5*(u[1:] + up[1:] - (dt/dx)*(F(up[1:]) - F(up[:-1])) + dt*RHS(t-0.5*dt, x[1:]))
    return u[1:-1]
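A minimal driver for these solver functions could look as follows; the flux, the source term, the initial condition and all parameter values are examples only, chosen to make the sketch self-contained:

import numpy as np

def F(u):
    return 0.5*u**2                          # flux for the burger equation

def RHS(t, x):
    return np.sin(2*np.pi*x)*np.exp(-t)      # example source term Q(x, t)

xmin, xmax = 0.0, 1.0
Nx = 100
x = np.linspace(xmin, xmax, Nx + 1)
dx = (xmax - xmin)/Nx
dt = 0.4*dx                                  # time step chosen well below the stability limit
u = np.sin(np.pi*x)**2                       # example initial condition

t = 0.0
while t < 1.0:
    t += dt
    u[1:-1] = ftbs(u, t)                     # or lax_friedrich_Flux, Lax_W_Two_Step, macCormack
    u[0] = 0.0                               # example boundary values
    u[-1] = u[-2]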
Chapter 9

Python Style Guide

PEP 8 and Google both provide Python style guides, which we generally try to follow, though we have made some different choices in certain respects:
• https://www.python.org/dev/peps/pep-0008/
• https://google-styleguide.googlecode.com/svn/trunk/pyguide.html
9.2 Summary
• Have clear names and use proper namespacing in your code
• Document your code with docstrings adhering to the Google docstring standard. Clearly indicate what is input and what is output. Especially side-effects!
• Structure your code so that it is as self-explanatory as possible, and use comments
where additional clarification is useful.
• Remember that all code you write will end up being treated as a "black box" sooner or
later. So make sure that it actually works like one, with clean input-output dichotomy.
• Exceptions should crash your program, unless you have very specific reasons why they should not.
• If you absolutely want to stop exceptions, you should not use a bare "except", as this will also catch system exit and keyboard interrupts. If you want a catch-all, use "except Exception as e".
• You should catch, append and re-raise the exception at each level of the code, so that an informative traceback will appear when the program dies (see the sketch after this list).
• Unit testing is desirable, and you should test the crucial behavior of each feature you wish to push to the main project. Also test that the code fails the way it should; few things are as hard to track down as code that passes wrong data on to other parts of the program.
• Keep your unit tests. Write them well. Systematize them. Make sure they're thorough, independent of each other, and fast.
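A minimal sketch of the catch, append and re-raise pattern mentioned in the list above; the function and file names are examples only:

def read_mesh(filename):
    try:
        with open(filename) as infile:
            return [float(line) for line in infile]
    except Exception as e:
        # append context to the exception before re-raising, so the traceback
        # shows both where and why the failure happened
        e.args = ("read_mesh failed for '{0}'".format(filename),) + e.args
        raise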
Chapter 10

Symbolic Computation with SymPy

In this chapter we provide a very short introduction to SymPy, customized for the applications and examples in the current setting. For a more thorough presentation see e.g. [14].
10.1 Introduction
SymPy is a Python library for symbolic mathematics, with the ambition to offer a full-featured
computer algebra system (CAS). The library design makes SymPy well suited for integrating symbolic mathematical computations into numerical Python applications.
Symbols. SymPy introduces the class symbols (or Symbol) to represent mathematical symbols as Python objects. An instance of type symbols has a set of attributes holding its properties and methods operating on those properties. Such symbols may be used to represent and manipulate algebraic expressions. Unlike many symbolic manipulation systems, variables in SymPy must be defined before they are used (for a justification see sympy.org).
As an example, let us define a symbolic expression representing the mathematical expression x^2 + xy − y:
import sympy
x, y = sympy.symbols(’x y’)
expr = x**2 + x*y -y
expr
Note that we wrote the expression as if "x" and "y" were ordinary Python variables, but
instead of being evaluated the expression remains unaltered.
To make the output look nicer we may invoke the pretty print feature of SymPy by:
sympy.init_printing(use_latex=True)
expr
The expression is now ready for algebraic manipulation:
337
CHAPTER 10. SYMPOLIC COMPUTATION WITH SYMPY 338
expr + 2
x**2 + x*y -y + 2
and
expr + y
x**2 + x*y
Note that the result of the above is not x^2 + xy − y + y but rather x^2 + xy, i.e. the −y and the +y are added and found to cancel automatically by SymPy, and a simplified expression is output accordingly. Apart from rather obvious simplifications, like discarding subexpressions that add up to zero (e.g. y − y) or evaluating √9 = 3, most simplifications are not performed automatically by SymPy. For instance, to factor a polynomial we must call sympy.factor explicitly:
expr2 = x**2 + 2*x + 1
expr3 = sympy.factor(expr2)
expr3
Matrices. Matrices in SymPy are implemented with the Matrix class and are constructed by providing a list of row vectors in the following manner:
sympy.init_printing(use_latex=’mathjax’)
M = sympy.Matrix([[1, -1], [2, 1], [4, 4]])
M
A matrix with symbolic elements may be constructed by:
a, b, c, d = sympy.symbols(’a b c d’)
M = sympy.Matrix([[a, b],[c, d]])
M
The matrices may naturally be manipulated like any other object in SymPy or Python.
To illustrate this we introduce another 2x2-matrix
n1, n2, n3, n4 =sympy.symbols(’n1 n2 n3 n4’)
N=sympy.Matrix([[n1, n2],[n3, n4]])
N
The two matrices may then be added, subtracted, multiplied, and inverted by the following
simple statements
M+N, M-N, M*N, M.inv()
M=sympy.Matrix([[0, a], [a, 0]])
Diagonalization:
L, D = M.diagonalize()
L, D
Differentiating and integrating. Consider also the parabolic function which may
describe the velocity profile for fully developed flow in a cylinder.
from sympy import integrate, diff, symbols, pi
v0, r = symbols(’v0 r’)
v = v0*(1 - r**2)
Q = integrate(2*pi*v*r, r)
Q
Q = sympy.factor(Q)
Q
newV = diff(Q, r)/(r*2*pi)
newV
sympy.simplify(newV)
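Symbolic results may be evaluated for particular values with subs and evalf; for instance, evaluating the volume flux Q at the pipe wall r = 1 (an illustration of the API, not part of the original example):

Q.subs(r, 1)                      # volume flux for a pipe of unit radius: pi*v0/2
Q.subs({r: 1, v0: 1}).evalf()     # numerical value, approximately 1.571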
We may also compute the integral ∫ cos(x) dx:

from sympy import cos
integrate(cos(x), x)
CHAPTER 10. SYMPOLIC COMPUTATION WITH SYMPY 339
The definite integral ∫_{−∞}^{∞} sin(x²) dx may be computed in the same manner.

Limits. Compute lim_{x→0} sin(x)/x:

from sympy import limit, sin
limit(sin(x)/x, x, 0)
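When a symbolic expression is to be evaluated repeatedly on NumPy arrays, as in the error-fitting example of Chapter 8, sympy.lambdify converts it to a numerical function; the expression below is simply the fitted form (8.59) used as an illustration:

import numpy as np
from sympy import symbols, lambdify

h_x, h_t = symbols('h_x h_t')
E_expr = 1.3*h_x**0.98 + 6.5*h_t**0.98          # the fitted error model of the form (8.59)
E_num = lambdify((h_x, h_t), E_expr, 'numpy')   # numerical function of (h_x, h_t)
E_num(np.array([0.1, 0.05]), np.array([0.08, 0.04]))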
Bibliography

[1] P.W. Bearman and J.K. Harvey. Golf ball aerodynamics. Aeronaut Q, 27(pt 2):112–122, 1976.
[2] William L. Briggs. A Multigrid Tutorial. SIAM, 2. edition, 2000.
[3] E. Cheney and David Kincaid. Numerical Mathematics and Computing. Cengage Learning,
4th edition, 1999.
[4] E. Cheney and David Kincaid. Numerical Mathematics and Computing. Cengage Learning, 7th edition, 2012.
[5] H. Evans. Laminar Boundary-Layer Theory. Addison-Wesley Publishing Company, 1968.
[6] J. Evett and C. Liu. 2,500 Solved Problems in Fluid Mechanics and Hydraulics. Schaum’s
Solved Problems Series. McGraw-Hill Education, 1989.
[7] C. A. J. Fletcher. Computational Techniques for Fluid Dynamics. 1. , Fundamental and
General Techniques. Springer series in computational physics. Springer, Berlin, Paris,
1991.
[8] G.E. Forsythe. Computer Methods for Mathematical Computations. Prentice-Hall, 1977.
[9] Louis A. Hageman and David M. Young. Applied Iterative Methods. Academic Press,
1981.
[10] Ernst Hairer, Syvert Paul Norsett, and Gerhard Wanner. Solving Ordinary Differential
Equations I: Nonstiff Problems, volume 1. Springer Science & Business, 2008.
[11] C. Hirsch. Numerical Computation of Internal and External Flows. Elsevier, 2007.
[12] George A. Hool and Nathan Clarke Johnson. Elements of Structural Theory-Definitions.
Handbook of Building Construction. New York. McGraw-Hill. Google Books, 1920. p.2.
[13] M. J. Zucrow and J. D. Hoffman. Gas Dynamics, volume I & II. John Wiley & Sons, 1976–77.
[14] Robert Johansson. Numerical Python. a Practical Techniques Approach for Industry.
Springer, 2015.
[15] C. T. Kelley. Iterative Methods for Linear and Nonlinear Equations. SIAM, 1995.
[16] Hans Petter Langtangen. A Primer on Scientific Programming With Python. Springer,
Berlin; Heidelberg; New York, fourth edition, 2011.
[17] P. D. Lax. Hyperbolic Systems of Conservation Laws and the Mathematical Theory of
Shock Waves. Society for Industrial and Applied Mathematics, 1973. Regional conference
series in applied mathematics.
[18] R. W. MacCormack. The effect of viscosity in hypervelocity impact cratering. Astronautics,
AIAA, 69:354, 1969.
[19] W. H. Press. Numerical Recipes in Fortran. the Art of Scientific Computing, volume 1 &
2. Cambridge University Press, 2. edition, 1992.
[20] Yousef Saad. Iterative Methods for Sparse Linear Systems. SIAM, 2. edition, 2003.
[21] H. Schlichting. Boundary Layer Theory. McGraw-Hill, 7th edition, 1978.
[22] G. D. Smith. Numerical Solution of Partial Diff. Equations : Finite Difference Methods.
Oxford, 3. edition, 1985.
[23] John. C. Tannehil, Dale A. Anderson, and Richard H. Pletcher. Computational Fluid
Mechanics and Heat Transfer. Taylor & Francis, 1997.
[24] F.M. White. Viscous Fluid Flow. WCB/McGraw-Hill, 1991.
[25] F.M. White. Fluid Mechanics. McGraw-Hill series in mechanical engineering.
WCB/McGraw-Hill, 1999.
Index
Amplification factor, 74
Biot-number, 139
Implicit, 76
Runge-Kutta, 75, 76