ES3J1_Lecture_Lab_4

February 6, 2023

1 ES3J1 Advanced Systems and Software Engineering


1.1 Python Lecture/Lab 4: Ordinary Differential Equations and Integration
1.1.1 Solving ODEs
In this lecture/lab, we will look at solving systems of ODEs using hand-coded solvers whose struc-
ture will mimic those of standard libraries such as scipy.integrate. While many problems that
you will encounter in practice can be solved using standard libraries (at least after setting param-
eters appropriately), there are also many problems that require something bespoke, so it is good
practice to design some simple solvers yourself.

[1]: import numpy as np


import matplotlib.pyplot as plt
import scipy as sp
import itertools

We will consider the solution of ODE initial value problems (IVPs) where we seek a function
$x \colon [t_0, T] \to \mathbb{R}^n$ satisfying

$$\dot{x}(t) = f(t, x(t)) \quad \text{for all } t > t_0, \tag{1}$$
$$x(t_0) = x_0, \tag{2}$$

where the vector field $f \colon [t_0, T] \times \mathbb{R}^n \to \mathbb{R}^n$ and the initial condition $x_0 \in \mathbb{R}^n$ are given. Equivalently,
the solution $x$ (if it exists) satisfies the integral equation

$$x(t) = x_0 + \int_{t_0}^{t} f(s, x(s)) \, \mathrm{d}s \quad \text{for all } t > t_0. \tag{3}$$

For this reason, there is a close relationship between the numerical solution of ODEs and numer-
ical integration (quadrature), although IVPs are usually solved sequentially in time, i.e. working
forwards from 𝑡0 to later times 𝑡 > 𝑡0 .
Usually an analytically exact solution to the IVP is not available and numerical schemes output
numerical approximations 𝑋(𝑡𝑖 ) to 𝑥(𝑡𝑖 ) on some prescribed grid or mesh of times 𝑡0 < 𝑡1 <
𝑡2 < ⋯ < 𝑇 . (This lecture/lab will use lowercase letters for the exact solution and uppercase letters
for numerical approximations.) Sometimes the grid points 𝑡𝑖 are given by the user in advance;
sometimes they are chosen on the fly by the solution algorithm (so-called “adaptive methods”,
which are basically essential for serious problems).
In general, there is no need for the grid points $t_i$ to be specified in advance, nor for them to be
equally spaced. Black-box solvers will generally produce adaptive and unevenly-spaced time grids.
The solvers written in this lab will all require pre-specified grids, and can be used with unevenly
spaced grids. However, for the sake of simplicity, we will illustrate all the ideas with a fixed time
step, so you can think of $t_k$ as simply $t_0 + k\delta$ for some fixed $\delta > 0$.

1.1.2 Forward (explicit) Euler


The most basic ODE solver is the forward Euler or explicit Euler method, which is based on
the approximation

$$\dot{x}(t_{i-1}) \approx \frac{x(t_i) - x(t_{i-1})}{t_i - t_{i-1}}. \tag{4}$$

To save some space, write $\Delta t_i = t_i - t_{i-1}$ for the length of the time step, which is often taken to
be a constant in simple applications. We rearrange the above approximation to obtain

$$x(t_i) \approx x(t_{i-1}) + \Delta t_i \, \dot{x}(t_{i-1}) \tag{5}$$
$$\qquad = x(t_{i-1}) + \Delta t_i \, f(t_{i-1}, x(t_{i-1})). \tag{6}$$

That is, we recursively define approximations $X(t_i)$ to $x(t_i)$ via

$$X(t_i) = X(t_{i-1}) + \Delta t_i \, f(t_{i-1}, X(t_{i-1})), \tag{7}$$
$$X(t_0) = x_0. \tag{8}$$

[2]: def solve_ivp_fe(f, x0, T):
         '''
         Forward Euler / explicit Euler solver for an initial value problem
         f  = vector field, takes time as first argument and space as second argument
         x0 = initial state, can be a float or numpy array
         T  = list or 1d array of times on which the solution is sought
         Returns both T and a list X of states, where X[i] is the approximate
         solution at time T[i]
         '''
         X = [None for ti in T]
         X[0] = x0
         for i in range(1, len(T)):
             Delta_t = T[i] - T[i - 1]
             X[i] = X[i - 1] + Delta_t * f(T[i - 1], X[i - 1])
         return T, X

Let’s try this solver on the simplest ODE of all, the one representing exponential decay:

$$\dot{x}(t) = -x(t), \tag{9}$$
$$x(0) = x_0, \tag{10}$$

which has the explicit exact solution

$$x(t) = \exp(-t) \, x_0. \tag{11}$$

[7]: def decay_f(t, x):
         return -x

     x0 = np.array([2.0])
     t_min = 0.0
     t_max = 1.0
     dt = 0.1
     T = np.arange(t_min, t_max + dt, dt)
     T, X_fe = solve_ivp_fe(decay_f, x0, T)
     fig, ax = plt.subplots()
     ax.plot(T, X_fe, color="blue")
     ax.set_xlabel("t");
     ax.set_ylabel("x");

[8]: X_ex = [ x0 * np.exp(- t) for t in T ]  ## The exact solution to this IVP
     err = [ np.abs(X_fe[i][0] - X_ex[i]) for i in range(len(T)) ]  ## Corresponding errors

     fig, ax = plt.subplots(2)
     ax[0].plot(T, X_fe, color="blue")
     ax[0].plot(T, X_ex, color="blue", linestyle=":")
     ax[1].plot(T, err, color="red")
     ax[1].set_xlabel("t");
     ax[0].set_ylabel("x");
     ax[1].set_ylabel("error");

Linear Systems. When the vector field $f$ is linear in the state $x$, e.g. $f(t, x) = A(t) x + b(t)$ for
some square matrix $A(t)$ and vector $b(t)$, it is often helpful to write the forward Euler update in
linear form:

$$X(t_i) = X(t_{i-1}) + \Delta t_i \, f(t_{i-1}, X(t_{i-1})) \tag{12}$$
$$\qquad = X(t_{i-1}) + \Delta t_i \, \bigl( A(t_{i-1}) X(t_{i-1}) + b(t_{i-1}) \bigr) \tag{13}$$
$$\qquad = \bigl( I + \Delta t_i \, A(t_{i-1}) \bigr) X(t_{i-1}) + \Delta t_i \, b(t_{i-1}). \tag{14}$$

This representation is particularly useful for applications that rely on a linear representation of the
dynamics, such as the linear Kalman filter for state estimation.
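For illustration, here is a minimal sketch of a forward Euler solver that exploits this linear structure. The function name solve_ivp_fe_linear and its interface are our own, chosen to mirror solve_ivp_fe above (a backward Euler solver with the same linear interface, solve_ivp_be, appears later in this lab).

[ ]: def solve_ivp_fe_linear(A, b, x0, T):
         '''
         Forward Euler solver (sketch) for linear IVPs of the form
             dx/dt = A(t) x + b(t)
         A  = matrix (2d array) field, takes time as its argument
         b  = vector (1d array) field, takes time as its argument
         x0 = initial state as a 1d numpy array
         T  = list or 1d array of times on which the solution is sought
         '''
         X = [None for ti in T]
         X[0] = x0
         d = len(x0)
         for i in range(1, len(T)):
             Delta_t = T[i] - T[i - 1]
             ## Update in linear form: X_i = (I + Delta_t A(t_{i-1})) X_{i-1} + Delta_t b(t_{i-1})
             X[i] = (np.eye(d) + Delta_t * A(T[i - 1])) @ X[i - 1] + Delta_t * b(T[i - 1])
         return T, X

Calling this with the decay_A and decay_b fields defined in the backward Euler cell below should reproduce the solve_ivp_fe solution of the decay problem.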
Lab Task. In the example above, we used a fixed time step dt = 0.1. Write a function to calculate
the global error of the forward Euler scheme (i.e. the maximum absolute difference between the
forward Euler solution and the exact solution) for this problem as a function of the time step size
dt, and then plot this global error on appropriate axes. You should observe that the global error
scales roughly like the time step, which is why forward Euler is said to have “order one” (as opposed
to, e.g., “order two” if the global error were proportional to the square of the time step).

[9]: def global_error_fe(T):
         pass

Warning! It is all too easy — especially with “stiff” or “chaotic” systems — to choose time step
sizes that are too large for the numerical solution to be any good at all, as the code snippet below
illustrates in rather extreme fashion. It turns out that forward Euler can be an unstable numerical
method. As a matter of good practice, regardless of your choice of ODE solver, you should always
check that your results are not unduly sensitive to numerical parameters such as time step sizes.
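To see concretely how this failure arises for the decay ODE, note that each forward Euler step multiplies the state by a fixed factor:

$$X(t_i) = X(t_{i-1}) - \Delta t \, X(t_{i-1}) = (1 - \Delta t) X(t_{i-1}), \qquad \text{so} \qquad X(t_k) = (1 - \Delta t)^k x_0.$$

The numerical solution therefore decays to zero only if $|1 - \Delta t| < 1$, i.e. $0 < \Delta t < 2$. The choice $\Delta t = 2$ in the snippet below gives $1 - \Delta t = -1$, so the iterates simply oscillate between $\pm x_0$ instead of decaying.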

[10]: x0 = np.array([2.0])
t_min = 0.0
t_max = 10.0
dt = 2.0
T = np.arange(t_min, t_max + dt, dt)
T, X_fe = solve_ivp_fe(decay_f, x0, T)
X_ex = [ x0 * np.exp(- t) for t in T ]
err = [ np.abs(X_fe[i][0] - X_ex[i]) for i in range(len(T)) ]

fig, ax = plt.subplots(2)
ax[0].plot(T, X_fe, color="blue")
ax[0].plot(T, X_ex, color="blue", linestyle=":")
ax[1].plot(T, err, color="red")
ax[1].set_xlabel("t");
ax[0].set_ylabel("x");
ax[1].set_ylabel("error");

1.1.3 Backward (implicit) Euler
The next most basic initial value problem solver is the backward Euler or implicit Euler method,
which has much better stability properties and is based on the approximation
$$\dot{x}(t_i) \approx \frac{x(t_i) - x(t_{i-1})}{t_i - t_{i-1}}. \tag{15}$$

We rearrange the above approximation to obtain

$$x(t_{i-1}) \approx x(t_i) - \Delta t_i \, \dot{x}(t_i), \tag{16}$$

i.e., the next state $X(t_i)$ is determined implicitly as the solution of the equation

$$X(t_{i-1}) = X(t_i) - \Delta t_i \, f(t_i, X(t_i)). \tag{17}$$

In general, solving this equation will require the use of an optimisation routine (which we have
not covered yet) to minimise the difference between the LHS and RHS of the previous equation.
However, in the special case that $f$ is linear, with

$$f(t, x) = A(t) x + b(t), \tag{18}$$

we have

$$X(t_{i-1}) = X(t_i) - \Delta t_i \, \bigl( A(t_i) X(t_i) + b(t_i) \bigr) \tag{19}$$

and so

$$X(t_{i-1}) - \Delta t_i \, b(t_i) = \bigl( I - \Delta t_i \, A(t_i) \bigr) X(t_i), \tag{20}$$

i.e.

$$X(t_i) = \bigl( I - \Delta t_i \, A(t_i) \bigr)^{-1} \bigl( X(t_{i-1}) - \Delta t_i \, b(t_i) \bigr). \tag{21}$$

[12]: def solve_ivp_be(A, b, x0, T):
          '''
          Backward Euler / implicit Euler solver for an initial value problem
          of the form
              dx/dt = A(t) x + b(t)
          A  = matrix (2d array) field, takes time as its argument
          b  = vector (1d array) field, takes time as its argument
          x0 = initial state as a 1d numpy array
          T  = list or 1d array of times on which the solution is sought
          Returns both T and a list X of states, where X[i] is the approximate
          solution at time T[i]
          '''
          X = [None for ti in T]
          X[0] = x0
          d = len(x0)
          for i in range(1, len(T)):
              Delta_t = T[i] - T[i - 1]
              X[i] = np.linalg.solve(np.eye(d) - Delta_t * A(T[i]), X[i - 1] - Delta_t * b(T[i]))
          return T, X

      def decay_A(t):
          return - 1.0 * np.eye(1)

      def decay_b(t):
          return np.zeros((1,))

      x0 = np.array([2.0])
      t_min = 0.0
      t_max = 10.0
      dt = 0.5
      T = np.arange(t_min, t_max + dt, dt)
      T, X_be = solve_ivp_be(decay_A, decay_b, x0, T)
      X_ex = [ x0 * np.exp(- t) for t in T ]
      err = [ np.abs(X_be[i][0] - X_ex[i]) for i in range(len(T)) ]

      fig, ax = plt.subplots(2)
      ax[0].plot(T, X_be, color="blue")
      ax[0].plot(T, X_ex, color="blue", linestyle=":")
      ax[1].plot(T, err, color="red")
      ax[1].set_xlabel("t");
      ax[0].set_ylabel("x");
      ax[1].set_ylabel("error");

Lab Task. Investigate the global error of the backward Euler scheme as a function of fixed time
step dt, again using the decay ODE as your test case. What order does backward Euler appear to
have?
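As noted above, when the vector field $f$ is nonlinear the implicit equation (17) cannot in general be solved in closed form, and one falls back on a generic root-finding or optimisation routine. Here is a minimal sketch of that approach using scipy.optimize.root; the function name solve_ivp_be_nonlinear and its interface are our own, chosen to mirror the solvers above.

[ ]: from scipy.optimize import root

     def solve_ivp_be_nonlinear(f, x0, T):
         '''
         Backward Euler solver (sketch) for a general, possibly nonlinear,
         vector field f(t, x).  Each step solves the implicit equation
             X_i - X_{i-1} - Delta_t * f(t_i, X_i) = 0
         for X_i using a generic root finder.
         x0 = initial state as a 1d numpy array
         '''
         X = [None for ti in T]
         X[0] = x0
         for i in range(1, len(T)):
             Delta_t = T[i] - T[i - 1]
             def residual(z):
                 return z - X[i - 1] - Delta_t * f(T[i], z)
             ## Use the previous state as the initial guess for the root finder
             X[i] = root(residual, X[i - 1]).x
         return T, X

On the linear decay problem, e.g. solve_ivp_be_nonlinear(decay_f, np.array([2.0]), T), this should reproduce the solve_ivp_be results up to the tolerance of the root finder.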

1.1.4 Higher-order methods


As mentioned above, forward Euler is an explicit scheme with global error of order one, meaning
that it produces an approximation $X$ to the true solution $x$ satisfying

$$\max_{0 \leq t_i \leq T} |X(t_i) - x(t_i)| \leq C(f, x_0, T) \underbrace{\max_{0 \leq t_i \leq T} \Delta t_{i+1}}_{\text{mesh size}} \tag{22}$$

for some constant $C(f, x_0, T) \geq 0$ that depends only on the vector field $f$, the initial condition $x_0$,
and the terminal time $T$. Naturally, there is also interest in having solvers that converge to the
true solution at a faster rate, e.g.

$$\max_{0 \leq t_i \leq T} |X(t_i) - x(t_i)| \leq C(f, x_0, T) \left( \max_{0 \leq t_i \leq T} \Delta t_{i+1} \right)^{p} \tag{23}$$

for some $p > 1$. We call $p$ the order of the method.
Lab Task. Look up the definition of the fourth-order Runge–Kutta scheme and write an
implementation of it, following the model of the forward and backward Euler schemes given above.
Perform the same analysis of the global error as you did before, on the same test problem, to
convince yourself that it does indeed have order four.

[ ]: def solve_ivp_rk4(f, x0, T):
         pass

1.1.5 Oscillators and semi-implicit schemes


Let's now examine the behaviour of the forward Euler solver on a two-dimensional system modelling
a simple harmonic oscillator. The second-order formulation of such a system is

$$\ddot{u}(t) = -u(t), \tag{24}$$

but we can reformulate this as a first-order ODE by setting $v = \dot{u}$ and $x = (u, v)$ so that

$$\dot{x}(t) = \begin{pmatrix} \dot{u}(t) \\ \dot{v}(t) \end{pmatrix} = \begin{pmatrix} v(t) \\ -u(t) \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} x(t). \tag{25}$$

This is not a particularly challenging system to solve, but it already illustrates some of the great
weaknesses of forward Euler and why one would consider more advanced and structured ODE
solvers in practice.

[22]: def oscillator_f(t, x):
          assert x.shape == (2,)
          return np.array([x[1], -x[0]])

      x0 = np.array([2.0, 0.0])
      t_min = 0.0
      t_max = 100.0
      dt = 0.01
      T = np.arange(t_min, t_max + dt, dt)
      T, X_fe = solve_ivp_fe(oscillator_f, x0, T)
      X_ex = [ np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]]) @ x0 for t in T ]
      err = [ np.linalg.norm(X_fe[i] - X_ex[i]) for i in range(len(T)) ]

      def total_energy(x):
          '''
          Calculates the total energy of the system as the sum of the
          potential energy and the kinetic (the spring constant and the
          mass both being 1 in our chosen units).
          '''
          return 0.5 * x[0] * x[0] + 0.5 * x[1] * x[1]

      E_fe = [total_energy(x) for x in X_fe]

      fig, ax = plt.subplots(nrows=3, ncols=1, figsize=(12, 9))
      ax[0].plot(T, X_fe)
      #ax[0].plot(T, X_ex)
      ax[0].set_ylabel("x[0] (blue), x[1] (orange)");
      ax[1].plot(T, err, color="red")
      ax[1].set_ylabel("error")
      ax[2].set_xlabel("t");
      ax[2].plot(T, E_fe, color="black")
      ax[2].set_ylabel("total energy");

Lab Task (Optional). How should oscillator_f() and total_energy() be modified to reflect
a non-unit mass and non-unit spring constant for the oscillator?
Lab Task. Solve this oscillator system using the backward Euler solver provided above. You
will need to design appropriate functions oscillator_A() and oscillator_b() and you should
observe that the backward Euler solutions decay towards (0, 0) over time, exhibiting pronounced
energy loss.
Conservation of energy, momentum, etc. are very important physical properties, so the systematic
error in the total energy of the system (energy gain under forward Euler and energy loss under
backward Euler) is rather worrying. Perhaps the Euler methods are simply too crude, or we made
a mistake, so let’s try a generic solver as offered by the scipy library.

[18]: x0 = np.array([2.0, 0.0])
      #sol = sp.integrate.solve_ivp(oscillator_f, [t_min, t_max], x0)
      sol = sp.integrate.solve_ivp(oscillator_f, [t_min, t_max], x0, max_step=0.1)
      T , X_sp = sol.t , np.array(sol.y).T
      E_sp = [ total_energy(x) for x in X_sp ]
      X_ex = [ np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]]) @ x0 for t in T ]
      err = [ np.linalg.norm(X_sp[i] - X_ex[i]) for i in range(len(T)) ]

      fig, ax = plt.subplots(nrows=3, ncols=1, figsize=(12, 9))

      ax[0].plot(T, X_sp)
      ax[0].set_ylabel("x[0] (blue), x[1] (orange)");
      #ax[0].plot(T, X_ex)
      ax[1].plot(T, err, color="red")
      ax[1].set_ylabel("error")
      ax[2].set_xlabel("t");
      ax[2].plot(T, E_sp, color="black")
      ax[2].set_ylabel("total energy");

The energy gain and energy loss exhibited by these solvers for a system that ought to have constant
total energy are very undesirable. Simply put, we cannot use such solvers to make physically
consistent predictions with any accuracy or confidence. Fortunately, there are solvers known as
structured integrators or geometric integrators that are carefully designed to preserve total
energy, or at least to preserve a close approximation to it. The simplest example of such an
integrator is the semi-implicit Euler method for ODEs of the form

$$\dot{x}(t) = f(t, y(t)), \tag{26}$$
$$\dot{y}(t) = g(t, x(t)), \tag{27}$$
$$(x(t_0), y(t_0)) = (x_0, y_0). \tag{28}$$

More involved examples of such integrators include the leapfrog method or Störmer–Verlet
method.
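Concretely, and matching the solver implemented below, one semi-implicit Euler step first advances $y$ explicitly and then advances $x$ using the freshly updated value of $y$:

$$Y(t_i) = Y(t_{i-1}) + \Delta t_i \, g(t_{i-1}, X(t_{i-1})),$$
$$X(t_i) = X(t_{i-1}) + \Delta t_i \, f(t_{i-1}, Y(t_i)).$$

For separable Hamiltonian systems such as the oscillator, this mixed update is symplectic, which is what underlies its good long-time energy behaviour.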

[21]: def solve_ivp_sie(f, g, x0, y0, T):
          '''
          Semi-implicit Euler solver for a system of ODEs of the form
              dx/dt = f(t, y)
              dy/dt = g(t, x)
          f, g = vector fields, take time as first argument and y, x space as second argument
          x0, y0 = initial state, can be a float or numpy array
          T = list or 1d array of times on which the solution is sought
          Returns both T and lists X and Y of states, where X[i] (resp. Y[i]) is the
          approximation to the solution value x(T[i]) resp. y(T[i])
          '''
          X = [None for ti in T]
          Y = [None for ti in T]
          X[0] = x0
          Y[0] = y0
          for i in range(1, len(T)):
              Delta_t = T[i] - T[i - 1]
              Y[i] = Y[i - 1] + Delta_t * g(T[i - 1], X[i - 1])
              X[i] = X[i - 1] + Delta_t * f(T[i - 1], Y[i])
          return T, X, Y

      def oscillator_2_f(t, y):
          return y

      def oscillator_2_g(t, x):
          return -x

      t_min = 0.0
      t_max = 100.0
      dt = 0.01
      T = np.arange(t_min, t_max + dt, dt)
      T, X_sie, Y_sie = solve_ivp_sie(oscillator_2_f, oscillator_2_g, 2.0, 0.0, T)
      E_sie = [ total_energy([X_sie[i], Y_sie[i]]) for i in range(len(T)) ]
      X_ex = [ np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]]) @ x0 for t in T ]
      err = [ np.linalg.norm(np.array([X_sie[i], Y_sie[i]]) - X_ex[i]) for i in range(len(T)) ]

      fig, ax = plt.subplots(nrows=3, ncols=1, figsize=(12, 9))

      ax[0].plot(T, X_sie)
      ax[0].plot(T, Y_sie)
      #ax[0].plot(T, X_ex)
      ax[0].set_ylabel("x");
      ax[1].plot(T, err, color="red")
      ax[1].set_ylabel("error")
      ax[2].set_xlabel("t");
      ax[2].plot(T, E_sie, color="black")
      ax[2].set_ylabel("total energy");

Lab Task. Adjust the time step used in the semi-implicit Euler scheme and explore both the error
in the state space (x) and the error in energy (E). You should see that while the total energy does
oscillate, the energy error remains bounded and is of order dt.
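For comparison, here is a minimal sketch of the Störmer–Verlet (velocity leapfrog) method mentioned above, written for second-order systems of the form $\ddot{u} = g(u)$ such as the oscillator. The function name solve_ivp_verlet and its interface are our own, chosen to mirror the solvers above.

[ ]: def solve_ivp_verlet(g, u0, v0, T):
         '''
         Stoermer-Verlet / velocity leapfrog solver (sketch) for second-order ODEs
             d2u/dt2 = g(u)
         written in terms of position u and velocity v = du/dt.
         g      = acceleration as a function of position only
         u0, v0 = initial position and velocity
         T      = list or 1d array of times on which the solution is sought
         '''
         U = [None for ti in T]
         V = [None for ti in T]
         U[0], V[0] = u0, v0
         for i in range(1, len(T)):
             Delta_t = T[i] - T[i - 1]
             v_half = V[i - 1] + 0.5 * Delta_t * g(U[i - 1])   ## half step in velocity
             U[i] = U[i - 1] + Delta_t * v_half                ## full step in position
             V[i] = v_half + 0.5 * Delta_t * g(U[i])           ## second half step in velocity
         return T, U, V

     ## Usage on the oscillator (u'' = -u), e.g.:
     ## T, U_vv, V_vv = solve_ivp_verlet(lambda u: -u, 2.0, 0.0, np.arange(0.0, 100.0, 0.01))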

1.1.6 Integration (quadrature)
Scipy offers simple one-line interfaces for the numerical integration (quadrature) of functions of one
or more variables. As a simple example, we can try to numerically replicate the exact integral
$$\int_0^1 x^2 \, \mathrm{d}x = \left[ \frac{x^3}{3} \right]_{x=0}^{1} = \frac{1}{3}. \tag{29}$$

Ultimately, numerical quadrature routines boil down to the selection of suitable nodes $x_1, \ldots, x_m$
and weights $w_1, \ldots, w_m$ and the approximation of the integral as

$$\int_0^1 f(x) \, \mathrm{d}x \approx \sum_{i=1}^{m} w_i \, f(x_i), \tag{30}$$

here with the integrand $f(x) = x^2$.

By comparing the estimates constructed using different nodal sets (e.g. for 𝑚 and 2𝑚) it is even
possible to give an estimate for the approximation error. The sp.integrate.quadrature() routine
does this by default, returning both an (approximate) value for the integral and an (approximate)
error estimate:

[23]: val, err_est = sp.integrate.quadrature(lambda x : x * x, 0.0, 1.0)


print(f"Quadrature value = {val}")
print(f" Error estimate = {err_est}")
print(f" Actual error = {np.abs(val - 1.0/3.0)}")

Quadrature value = 0.33333333333333337


Error estimate = 0.0
Actual error = 5.551115123125783e-17
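As an aside, the idea described above of estimating the error by comparing estimates built from different nodal sets is easy to reproduce by hand. The sketch below builds composite trapezoid-rule nodes and weights on [0, 1] for m and 2m subintervals and uses the difference between the two estimates as a rough error indicator; the helper name trapezoid_estimate is our own.

[ ]: def trapezoid_estimate(f, a, b, m):
         '''
         Composite trapezoid rule on [a, b] with m subintervals, written
         explicitly in terms of nodes x_i and weights w_i.
         '''
         x = np.linspace(a, b, m + 1)        ## nodes
         w = np.full(m + 1, (b - a) / m)     ## weights ...
         w[0] *= 0.5
         w[-1] *= 0.5                        ## ... halved at the two end points
         return np.sum(w * f(x))

     coarse = trapezoid_estimate(lambda x: x * x, 0.0, 1.0, 10)
     fine = trapezoid_estimate(lambda x: x * x, 0.0, 1.0, 20)
     print(f"Estimate = {fine}, rough error indicator = {abs(fine - coarse)}")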
For another example, consider

$$\int_{-\infty}^{\infty} x \, \frac{\exp(-x^2/2)}{\sqrt{2\pi}} \, \mathrm{d}x, \tag{31}$$

for which the exact value of the integral is 0 (by symmetry). Note that
sp.integrate.quadrature() does not support the use of infinite limits for integration,
whereas sp.integrate.quad() does.

[27]: def Gaussian_density(x):
          return np.exp(- 0.5 * np.abs(x) ** 2.0) / np.sqrt(2.0 * np.pi)

      R = 3
      print(f"Using sp.integrate.quadrature over [-{R}, {R}]")
      val, err_est = sp.integrate.quadrature(lambda x: x * Gaussian_density(x), -R, R)
      print(f"Quadrature value = {val}")
      print(f" Error estimate = {err_est}")
      print(f" Actual error = {np.abs(val - 0.0)}")

      print("\nUsing sp.integrate.quad() over [-np.inf, np.inf]")
      val, err_est = sp.integrate.quad(lambda x: x * Gaussian_density(x), -np.inf, np.inf)
      print(f"Quadrature value = {val}")
      print(f" Error estimate = {err_est}")
      print(f" Actual error = {np.abs(val - 0.0)}")

Using sp.integrate.quadrature over [-3, 3]


Quadrature value = 2.498001805406602e-16
Error estimate = 2.498001805406602e-16
Actual error = 2.498001805406602e-16

Using sp.integrate.quad() over [-np.inf, np.inf]


Quadrature value = 0.0
Error estimate = 0.0
Actual error = 0.0

1.1.7 Multidimensional integration


Quadrature for functions of multiple variables is offered by sp.integrate.nquad(), which has
several shortcomings/difficulties:
* poor interface: nquad insists that the integrand be a function of n explicitly named variables,
  rather than handling array-valued input in a transparent way;
* the intrinsically high computational cost of accurate high-dimensional quadrature.

We can see these issues at play in the simple example of approximating

$$\int_{\mathbb{R}^n} x_1 \, \rho(x) \, \mathrm{d}x = 0, \tag{32}$$

where $\rho$ is the standard $n$-dimensional Gaussian density

$$\rho(x) = \frac{\exp(-\|x\|^2 / 2)}{(2\pi)^{n/2}}. \tag{33}$$

[28]: from datetime import datetime as dt

      def Gaussian_density(x):
          return np.exp(- 0.5 * np.linalg.norm(x) ** 2.0) / (np.sqrt((2.0 * np.pi) ** x.shape[0]))

      def report(n_dim, val, err_est, t0, t1):
          print(f"Dimension {n_dim} quadrature value of {val} with estimated error {err_est} took {(t1 - t0).total_seconds()}s")

      dims = [1, 2, 3, 4, 5]

      n_dim = 1
      t0 = dt.now()
      val, err_est = sp.integrate.nquad(lambda x0: x0 * Gaussian_density(np.array([x0])), n_dim * [[-1000, 1000]])
      t1 = dt.now()
      report(n_dim, val, err_est, t0, t1)
      runtimes = [ (t1 - t0).total_seconds() ]

      n_dim = 2
      t0 = dt.now()
      ## I WISH! val, err_est = sp.integrate.nquad(lambda x: x[0] * Gaussian_density(x), n_dim * [[-1000, 1000]]) !!!
      val, err_est = sp.integrate.nquad(lambda x0, x1: x0 * Gaussian_density(np.array([x0, x1])), n_dim * [[-1000, 1000]])
      t1 = dt.now()
      report(n_dim, val, err_est, t0, t1)
      runtimes.append((t1 - t0).total_seconds())

      n_dim = 3
      t0 = dt.now()
      val, err_est = sp.integrate.nquad(lambda x0, x1, x2: x0 * Gaussian_density(np.array([x0, x1, x2])), n_dim * [[-1000, 1000]])
      t1 = dt.now()
      report(n_dim, val, err_est, t0, t1)
      runtimes.append((t1 - t0).total_seconds())

      n_dim = 4
      t0 = dt.now()
      val, err_est = sp.integrate.nquad(lambda x0, x1, x2, x3: x0 * Gaussian_density(np.array([x0, x1, x2, x3])), n_dim * [[-1000, 1000]])
      t1 = dt.now()
      report(n_dim, val, err_est, t0, t1)
      runtimes.append((t1 - t0).total_seconds())

      n_dim = 5
      t0 = dt.now()
      val, err_est = sp.integrate.nquad(lambda x0, x1, x2, x3, x4: x0 * Gaussian_density(np.array([x0, x1, x2, x3, x4])), n_dim * [[-1000, 1000]])
      t1 = dt.now()
      report(n_dim, val, err_est, t0, t1)
      runtimes.append((t1 - t0).total_seconds())

Dimension 1 quadrature value of 0.0 with estimated error 0 took 0.002043s


Dimension 2 quadrature value of 0.0 with estimated error 0 took 0.012035s
Dimension 3 quadrature value of 0.0 with estimated error 0 took 0.098391s
Dimension 4 quadrature value of 0.0 with estimated error 0 took 1.350695s
Dimension 5 quadrature value of 0.0 with estimated error 0 took 29.905539s
As we can see, the runtime increases by roughly one order of magnitude each time the dimension
is increased.

[29]: fig, ax = plt.subplots()
      ax.semilogy(dims, runtimes);
      ax.semilogy(dims, [0.0001 * 10 ** x for x in dims], linestyle="--")
      ax.set_xlabel("input dimension, n");
      ax.set_ylabel("runtime / s");
      ax.set_xticks(dims)
      ax.set_title(r"Quadrature runtimes (dashed = $10^{n - 4}$)");

1.1.8 Multidimensional integration via Monte Carlo sampling


It is an unfortunate fact of life that accurate numerical integration of functions of many vari-
ables is computationally expensive. Deterministic 𝑛-dimensional quadrature rules such as Gaussian
quadrature have costs that scale exponentially in 𝑛. Partly for this reason, there is huge interest in
quadrature rules with dimension-independent efficiency, even if the convergence rate as a function
of the number of sample points is quite slow.
The principal example of such a quadrature rule is Monte Carlo integration, which is essentially
an application of the Law of Large Numbers (LLN) from basic probability/statistics. Suppose that
we are interested in evaluating an integral of the form

$$\int_{\mathbb{R}^n} f(x) \, \rho(x) \, \mathrm{d}x, \tag{34}$$

where $\rho \colon \mathbb{R}^n \to [0, \infty)$ is a probability density function, i.e. $\int_{\mathbb{R}^n} \rho(x) \, \mathrm{d}x = 1$. If $X$ is a random
variable with PDF $\rho$, then the integral of interest is just the expected value $\mathbb{E}[f(X)]$. The LLN
states that, if $X_1, X_2, \ldots, X_m, \ldots$ is a sequence of independent samples with the same PDF $\rho$, then

$$\mathbb{E}[f(X)] = \lim_{m \to \infty} \frac{1}{m} \sum_{i=1}^{m} f(X_i). \tag{35}$$

Even better, we have the error estimate

$$\left| \mathbb{E}[f(X)] - \frac{1}{m} \sum_{i=1}^{m} f(X_i) \right| \leq \frac{C(f)}{\sqrt{m}}. \tag{36}$$

We illustrate the idea with the same example as we used for nquad:

$$\int_{\mathbb{R}^n} x_1 \, \rho(x) \, \mathrm{d}x, \tag{37}$$

where $\rho$ is the standard $n$-dimensional Gaussian density

$$\rho(x) = \frac{\exp(-\|x\|^2 / 2)}{(2\pi)^{n/2}}. \tag{38}$$

[30]: import numpy as np

      def vanilla_Monte_Carlo_wrt_Gaussian(f, n, min_samples=1e3, max_samples=1e6, check_samples=1e3, a_tol=1e-6, r_tol=1e-6):
          X = []
          min_samples = int(np.floor(min_samples))
          max_samples = int(np.floor(max_samples))
          check_samples = int(np.floor(check_samples))
          ## First we draw from the standard Gaussian min_samples times
          for i in range(min_samples):
              x = np.random.normal(size=(n,))
              X.append(x)
          ## Here is a little function to calculate the sample average of the integrand
          ## f against the collection of samples that we have
          def sample_m():
              return sum(f(x) for x in X) / len(X)
          carry_on = True
          old_m = sample_m()
          ## Now we continue to draw samples until we have drawn max_samples, or until the
          ## absolute and relative changes in the sample mean are small enough, checking
          ## those changes every check_samples iterations
          while len(X) < max_samples and carry_on:
              x = np.random.normal(size=(n,))
              X.append(x)
              if len(X) % check_samples == 0:
                  new_m = sample_m()
                  a_ch = np.abs(new_m - old_m)  ## Absolute change in the estimate of the mean
                  r_ch = a_ch / min([np.abs(old_m), np.abs(new_m)])  ## And the relative change
                  if a_ch < a_tol and r_ch < r_tol:
                      carry_on = False  ## Set a flag to terminate the run if the absolute and
                                        ## relative change tolerances have both been satisfied
                  old_m = new_m
          return old_m, len(X)

Since Monte Carlo is a random algorithm, its accuracy and runtime are random, but one thing that
quickly becomes evident is that the performance does not depend strongly upon the dimension of
the integration domain.

[33]: from datetime import datetime as dt

      dims = list(range(2, 10))
      errors = []
      runtimes = []

      for n_dim in dims:
          t0 = dt.now()
          val, N = vanilla_Monte_Carlo_wrt_Gaussian(lambda x: x[0], n=n_dim, min_samples=1e5, a_tol=1e-3, r_tol=1e-2)
          t1 = dt.now()
          print(f"Dimension {n_dim} MC value of {val} with {N} samples took {(t1 - t0).total_seconds()}s")
          errors.append(np.abs(val))
          runtimes.append((t1 - t0).total_seconds())

Dimension 2 MC value of 0.0032294027200061837 with 123000 samples took 0.708042s


Dimension 3 MC value of -0.0008655338265924212 with 186000 samples took
2.621826s
Dimension 4 MC value of -0.0016811575877218943 with 158000 samples took
1.701578s
Dimension 5 MC value of 0.0007427389396862868 with 118000 samples took 0.575649s
Dimension 6 MC value of -0.0028802377516155054 with 177000 samples took
2.359954s
Dimension 7 MC value of -0.003344434856775849 with 129000 samples took 0.873769s
Dimension 8 MC value of 0.007420075632850164 with 104000 samples took 0.282687s
Dimension 9 MC value of 0.0013144281901976119 with 103000 samples took 0.261876s

[34]: import matplotlib.pyplot as plt

      fig, ax = plt.subplots(2)
      ax[0].plot(dims, errors, color="red")
      ax[1].plot(dims, runtimes, color="blue")
      ax[0].set_ylabel("absolute error");
      ax[1].set_ylabel("runtime / s");
      ax[1].set_xlabel("dimension, n");

Lab Task. Plot the average performance (accuracy and cost) of Monte Carlo integration over a
number of runs on appropriate axes.
Extensions. To implement Monte Carlo integration, one needs to be able to draw samples from
the density 𝜌, which is not always easy. To get around this difficulty, one can use the more advanced
Markov chain Monte Carlo technique, which we will not discuss in this module — but feel free
to ask me about it if you are interested.
