
Computational Physics

INDEX

 Computational Physics

Course Description

 Ordinary Differential Equations

o Types of Differential Equation

o Euler Method

 Order of Accuracy

 Stability

 The Growth Equation

 Application to Non-Linear Differential Equations

 Application to Vector Equations

o The Leap-Frog Method

o The Runge-Kutta Method

o The Predictor-Corrector Method

o The Intrinsic Method

o Summary

o Problems

o Project -- Classical Electrons in a Magnetic Field

 A Uniform Field

 Units

 The Analytical Solution


 Choosing an Algorithm

 Crossed Electric and Magnetic Fields

 Oscillating Electric Field

 Your Report

 Partial Differential Equations

o Types of Equations

o Elliptic Equations -- Laplace's equation

o Hyperbolic Equations -- Wave equations

 A Simple Algorithm

 An Improved Algorithm -- the Lax method

 Non-Linear Equations

 Other methods for Hyperbolic Equations

o Eulerian and Lagrangian Methods

o Parabolic Equations -- Diffusion

 A Simple Method

 The Dufort-Frankel Method

 Other Methods

o Conservative Methods

 The Equation of Continuity

 The Diffusion Equation

 Maxwell's Equations

o Dispersion

o Problems
o Project -- Lagrangian Fluid Code

 The Difference Equations

 Boundary Conditions

 Initial Conditions

 The Physics

 An Improvement?

 The Report

o Project -- Solitons

 Introduction

 Discretisation

 Physics

 Matrix Algebra

o Introduction

o Types of Matrices

o Simple Matrix Problems

 Addition and Subtraction

 Multiplication of Matrices

o Elliptic Equations -- Poisson's Equation

 One Dimension

 2 or more Dimensions

o Systems of Equations and Matrix Inversion

 Exact Methods

 Iterative Methods
 The Jacobi Method

 The Gauss-Seidel Method

o Matrix Eigenvalue Problems

 Schrödinger's equation

 General Principles

 Full Diagonalisation

 The Generalised Eigenvalue Problem

 Partial Diagonalisation

 Sturm Sequence

 Sparse Matrices and the Lanczos Algorithm

o Problems

o Project -- Oscillations of a Crane

 Analysis

o Project -- Phonons in a Quasicrystal

 Introduction

 The Fibonacci Lattice

 The Model

 The Physics

 Monte Carlo Methods and Simulation

o Monte Carlo

 Random Number Generators

o Monte-Carlo Integration

o The Metropolis Algorithm


 The Ising model

 Thermodynamic Averages

o Quantum Monte-Carlo

o Molecular Dynamics

 General Principles

o Problems

o Project -- The Ising Model

 Introduction

 The Model and Method

 The Physics

o Project -- Quantum Monte Carlo Calculation

 Introduction

 The Method

 The Physics

 Computer Algebra

o Introduction

o Basic Principles

o Solving Equations

o Differential Equations

o Other Features

o Final Example

o Project -- The Thomas-Fermi Approximation

 Introduction
 Some Ideas

 Bibliography

Computational Physics

Course Description

The use of computers in physics, as in most other branches of science and engineering, has grown enormously alongside the rapid development of faster and cheaper hardware. This
course aims to give the student a thorough grounding in the main computational techniques used

in modern physics. It is particularly important in this course that the students should learn by

doing. The course is therefore designed such that a significant fraction of the students' time is

spent actually programming specific physical problems rather than learning abstract techniques.

The course will cover problems in four (or, if time permits, five) broad sections:

 Ordinary differential equations, such as those of classical mechanics.

 Partial differential equations, such as Maxwell's equations and the Diffusion and

Schrödinger equations.

 Matrix methods, such as systems of equations and eigenvalue problems applied to

Poisson's equation and electronic structure calculations.

 Monte Carlo and other simulation methods, such as the Metropolis algorithm and

molecular dynamics.

 (If time permits:) Computer Algebra; an introduction using Maple to the uses and abuses

of algebraic computing in physics.

This is not a short course in computing science, nor in programming. It focuses specifically on

methods for solving physics problems. Students will be expected to be familiar with basic

programming: successful completion of the 1st year computing Lab. is sufficient. There is no

requirement that the practical work be done using Microsoft C++ on the departmental computers,

but anyone wishing to use some other programming language or computer should consult the

lecturer beforehand. This is to make sure both that help is available from demonstrators and that it will be possible to assess the work satisfactorily.


Ordinary Differential Equations

Subsections

 Types of Differential Equation

 Euler Method

o Order of Accuracy

o Stability

o The Growth Equation

o Application to Non-Linear Differential Equations

o Application to Vector Equations

 The Leap-Frog Method

 The Runge-Kutta Method

 The Predictor-Corrector Method

 The Intrinsic Method

 Summary

 Problems

 Project -- Classical Electrons in a Magnetic Field

o A Uniform Field

 Units

 The Analytical Solution


 Choosing an Algorithm

o Crossed Electric and Magnetic Fields

o Oscillating Electric Field

o Your Report

 Types of Differential Equation


 In this chapter we will consider the methods of solution of the sorts of ordinary

differential equations (ODEs) which occur very commonly in physics. By ODEs we

mean equations involving derivatives with respect to a single variable, usually time.

Although we will formulate the discussion in terms of linear ODEs for which we know

the analytical solution, this is simply to enable us to make comparisons between the

numerical and analytical solutions and does not imply any restriction on the sorts of

problems to which the methods can be applied. In the practical work you will encounter

examples which do not fit neatly into these categories. The work in this section is also considered in chapter 16 of Press et al. (1992) or chapter II of Potter (1973). We consider 3 basic differential equations:

(1.1)

(1.2)

(1.3)


 which are representative of most more complex cases. Higher order differential equations

can be reduced to 1st order by appropriate choice of additional variables. The simplest

such choice is to define new variables to represent all but the highest order derivative. For

example, the damped harmonic oscillator equation, usually written as

(1.4)

can be rewritten in terms of the position and the velocity in the form of a pair of 1st order ODEs

(1.5)

(1.6)

Similarly any nth order differential equation can be reduced to n 1st order equations. Such systems of ODEs can be written in a very concise notation by defining a vector whose elements are the unknowns, such as the position and velocity above. Any such system can then be written in the general form

(1.7)

where both sides of the equation are vectors with one component per unknown. Remember that there is no significance in the use of the letter t in the above equations. The variable is not necessarily time but could just as easily be space, as in (1.9), or some other physical quantity. Formally we can write the solution of (1.7) as

(1.8)   y(t) = y(t0) + ∫ f(y, t′) dt′   (the integral running from t0 to t)

Euler Method

Consider an approximate solution of (1.8) over a small interval by writing the

integral as

(1.10)

to obtain

or, in a more concise notation,

(1.11)
We can integrate over any larger interval by subdividing the range into small sections and

repeating (1.11) for each part. Equivalently we can consider that we have approximated the

derivative with a forward difference

(1.12)

We will also come across centred and backward differences,

(1.13)

respectively. Here we have used a notation which is very common in computational physics, in

which we calculate the solution at a set of equally spaced discrete values of the independent variable.

In what follows we will drop the vector notation except when it is important for the discussion.
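To make the recipe concrete, the following minimal C sketch applies the Euler update to a simple decay equation; the equation, parameter values and variable names are illustrative assumptions rather than part of the notes.

#include <stdio.h>
#include <math.h>

/* Minimal Euler integration of dy/dt = -gamma*y (illustrative example). */
int main(void)
{
    const double gamma = 1.0;     /* decay rate (assumed value)                 */
    const double dt    = 0.01;    /* time step, well inside the stability limit */
    const int    nstep = 1000;
    double y = 1.0;               /* initial condition y(0) = 1                 */
    double t = 0.0;

    for (int n = 0; n < nstep; n++) {
        y += dt * (-gamma * y);   /* y_{n+1} = y_n + dt * f(y_n, t_n) */
        t += dt;
    }
    printf("t = %f  y = %f  (exact %f)\n", t, y, exp(-gamma * t));
    return 0;
}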

Order of Accuracy

How accurate is the Euler method? To quantify this we consider a Taylor expansion of the solution around the current time step

(1.14)
and substitute this into (1.11)

(1.15)

(1.16)

where we have used (1.7) to obtain the final form. Hence, we see that the term linear in the step length in the expansion has been correctly reproduced by the approximation, but that the higher order terms are wrong. We therefore describe the Euler method as 1st order accurate. An approximation to a quantity is nth order accurate if the term proportional to the nth power of the step length in the Taylor expansion of the quantity is correctly reproduced. The order of accuracy of a method is the order of accuracy with which the unknown is approximated. Note that the term accuracy has a slightly different meaning in this context from that which you might use to describe the results of an experiment. Sometimes the term order of accuracy is used to avoid any ambiguity. The leading order deviation is called the truncation error. Thus in (1.2.1) the truncation error is the leading neglected term, which is 2nd order in the step length.

Stability

The Euler method is 1st order accurate. However there is another important consideration in

analysing the method: stability. Let us suppose that at some time the actual numerical solution

deviates from the true solution of the difference equation (1.11) (N.B. not the original
differential equation (1.7)) by some small amount , due, for example, to the finite accuracy of

the computer. Then adding this into (1.11) gives

(1.17)

where the term in brackets, , is the Taylor expansion of with respect to . Subtracting

(1.11) we obtain a linear equation for

(1.18)

which it is convenient to write in the form

(1.19)

If has a magnitude greater than one then will tend to grow with increasing and may

eventually dominate over the required solution. Hence the Euler method is stable only if

or

(1.20)
As is positive by definition the 2nd inequality implies that the derivative must also be

positive. The 1st inequality leads to a restriction on , namely

(1.21)

When the derivative is complex more care is required in the calculation of . In this case it is

easier to look for solutions of the condition . For the oscillation equation (1.1c) the

condition becomes

The Growth Equation

Actually, our analysis doesn't make too much sense in the case of the growth equation as the true

solution should grow anyway. A more sensible condition would be that the relative error in the

solution does not grow. This can be achieved by substituting for above and looking for

the condition that does not grow. We will not treat this case further here but it is, in fact, very

important in problems such as chaos, in which small changes in the initial conditions lead to

solutions which diverge from one another.

Application to Non-Linear Differential Equations


The linear differential equations of physics can often be solved analytically, whereas most non-linear ones can only be solved numerically. It is important therefore to be able to apply the ideas

developed here to such cases. Consider the simple example

(1.23)

In this case and which can be substituted into (1.21) to give the

stability condition

(1.24)

which depends on the value of the solution, unlike the simpler cases. In writing a program to solve such an equation it may therefore be necessary to monitor the value of the solution and adjust the step length as necessary to maintain stability.
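As an illustration of this kind of monitoring, the sketch below integrates the non-linear equation dy/dt = -y*y (an assumed example, not necessarily the one in (1.23)) with the Euler method, shrinking the step whenever the solution is large so that the local stability limit is respected.

#include <stdio.h>
#include <math.h>

/* Euler integration of the illustrative non-linear equation dy/dt = -y*y.
   Here df/dy = -2y, so the Euler stability limit is dt < 2/|df/dy| = 1/|y|;
   the step is recomputed from the current solution with a safety factor. */
int main(void)
{
    double y = 10.0, t = 0.0, dt;
    const double safety = 0.1, t_end = 5.0;

    while (t < t_end) {
        dt = safety / (fabs(y) + 1e-12);  /* step follows the solution */
        y += dt * (-y * y);
        t += dt;
    }
    printf("y(%f) = %f\n", t, y);
    return 0;
}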

Application to Vector Equations

A little more care is required when and are vectors. In this case is an arbitrary

infinitesimal vector and the derivative is a matrix with components

(1.25)
in which and represent the components of and respectively. Hence (1.18) takes the

form

(1.26)

(1.27)

This leads directly to the stability condition that all the eigenvalues of must have modulus less

than unity (see problem 6). In general any of the stability conditions derived in this course for

scalar equations can be re-expressed in a form suitable for vector equations by applying it to all

the eigenvalues of an appropriate matrix.


The Leap-Frog Method

How can we improve on the Euler method? The most obvious way would be to replace the forward

difference in (1.12) with a centred difference (1.13) to get the formula

(1.28)

If we expand both and as before (1.28) becomes

(1.29)

(1.30)

from which all terms up to 2nd order in the step length cancel, so that the method is clearly 2nd order accurate. Note in passing that using (1.28) 2 consecutive values of the solution are required in order to calculate the next one: the values at the two previous steps are needed to calculate the new one. Hence 2 boundary conditions are required, even though (1.28) was derived from a 1st order differential equation. This so-called leap-frog method is more accurate
than the Euler method, but is it stable? Repeating the same analysis as for the Euler method we again

obtain a linear equation for

(1.31)

We analyse this equation by writing and to obtain

(1.32)

which has the solutions

(1.33)

The product of the 2 solutions is equal to the constant term in the quadratic equation. Since the 2 solutions are different, one of them always has magnitude greater than unity. Since for a small random error it is impossible to guarantee that there will be no contribution from this growing solution, that contribution will tend to dominate as the equation is iterated. Hence the method is unstable. There is an important exception to this instability: when the partial derivative is purely imaginary (but not when it has some general complex value), the quantity under the square root in (1.33) can be negative and both solutions have modulus unity. Hence, for the case of oscillation (1.1c), where the derivative is purely imaginary, the algorithm is just

stable, as long as

(1.34)

The stability properties of the leap-frog method are summarised below

Decay: unstable    Growth: unstable    Oscillation: stable, subject to (1.34)

Again the growth equation should be analysed somewhat differently.
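For concreteness, the following C sketch applies the leap-frog formula to the undamped harmonic oscillator written as a pair of 1st order equations; the first step is taken with the Euler method to provide the 2 starting values the scheme needs. The oscillator and its parameters are assumptions chosen for illustration.

#include <stdio.h>

/* Leap-frog for dx/dt = v, dv/dt = -w*w*x (undamped oscillator, illustrative). */
int main(void)
{
    const double w = 1.0, dt = 0.05;   /* w*dt well below 1 for stability */
    const int nstep = 2000;

    double x_old = 1.0, v_old = 0.0;          /* values at step n-1           */
    double x = x_old + dt * v_old;            /* one Euler step gives step n  */
    double v = v_old - dt * w * w * x_old;

    for (int n = 1; n < nstep; n++) {
        double x_new = x_old + 2.0 * dt * v;         /* y_{n+1} = y_{n-1} + 2 dt f(y_n) */
        double v_new = v_old - 2.0 * dt * w * w * x;
        x_old = x; v_old = v;
        x = x_new; v = v_new;
    }
    printf("x = %f  v = %f\n", x, v);
    return 0;
}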

The Runge-Kutta Method

So far we have found one method which is stable for the decay equation and another for the oscillatory

equation. Can we combine the advantages of both? As a possible compromise consider the following

two step algorithm (ignoring vectors)


(1.35)

(1.36)

In practice the intermediate value is discarded after each step. We see that this method

consists of an Euler step followed by a Leap-Frog step. This is called the 2nd order Runge-Kutta or two-

step method. It is in fact one of a hierarchy of related methods of different accuracies. The stability

analysis for (1.4) is carried out in the same way as before. Here we simply quote the result

(1.37)

In deriving this result it is necessary to assume that the derivative of the right hand side is independent of time. This is not usually a problem. From (1.37) we conclude the stability conditions

Decay: stable for a small enough step    Growth: unstable    Oscillation: very weakly unstable (see below)

Note that in the oscillatory case the method is strictly speaking unstable, but the effect is so small that it can be ignored in most cases as long as the step is small compared with the period of oscillation. This method is often used for damped oscillatory equations.
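A minimal sketch of this two-step (midpoint) method in C is given below, applied to a lightly damped oscillator; the equation and parameter values are assumptions made for the purpose of the example.

#include <stdio.h>

/* 2nd order Runge-Kutta (midpoint) method for a damped oscillator:
   dx/dt = v,  dv/dt = -w*w*x - k*v   (illustrative example). */
int main(void)
{
    const double w = 1.0, k = 0.1, dt = 0.05;
    const int nstep = 2000;
    double x = 1.0, v = 0.0;

    for (int n = 0; n < nstep; n++) {
        /* Euler half-step to the midpoint */
        double xm = x + 0.5 * dt * v;
        double vm = v + 0.5 * dt * (-w * w * x - k * v);
        /* full step using the midpoint derivative */
        x += dt * vm;
        v += dt * (-w * w * xm - k * vm);
    }
    printf("x = %f  v = %f\n", x, v);
    return 0;
}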

The Predictor-Corrector Method

This method is very similar to and often confused with the Runge-Kutta method. We consider

substituting the trapezoidal rule for the estimate of the integral in (1.8) to obtain the equation

(1.38)

Unfortunately the presence of the unknown new value on the right hand side makes a direct solution of (1.38) impossible except for special cases. A possible compromise is the following method

(1.39)

(1.40)

This method consists of a guess for the new value based on the Euler method (the prediction) followed by a correction using the trapezoidal rule for the integral (the correction). The accuracy and stability properties are identical to those of the Runge-Kutta method.
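The same structure can be sketched in C for a simple decay equation (an assumed example): an Euler prediction followed by a trapezoidal correction.

#include <stdio.h>

/* Predictor-corrector: Euler prediction followed by a trapezoidal correction,
   applied to dy/dt = -g*y (illustrative decay equation). */
int main(void)
{
    const double g = 1.0, dt = 0.1;
    double y = 1.0;

    for (int n = 0; n < 100; n++) {
        double ypred = y + dt * (-g * y);                  /* prediction (Euler)       */
        y = y + 0.5 * dt * ((-g * y) + (-g * ypred));      /* correction (trapezoidal) */
    }
    printf("y = %f\n", y);
    return 0;
}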


The Intrinsic Method

Returning to the possibility of solving the integral equation using the trapezoidal or trapezium rule, let

us consider the case of a linear differential equation, such as our examples. For the decay equation we

have

(1.41)
which can be rearranged into an explicit equation for as a function of

(1.42)

This intrinsic method is 2nd order accurate as that is the accuracy of the trapezoidal rule for integration.

What about the stability? Applying the same methodology as before we find that the crucial quantity, the amplification factor, is the expression in square brackets in (1.42), which always has magnitude less than unity for the decay equation and has modulus unity in the oscillatory case (after substituting a purely imaginary derivative). Hence it is stable in both cases.

Why is it not used instead of the other methods? Unfortunately only a small group of equations, such as

our examples, can be rearranged in this way. For non-linear equations it may be impossible, and even

for linear equations when the unknown is a vector, there may be a formal solution which is not useful in practice. It is always possible to solve the resulting non-linear equation iteratively, using (e.g.) Newton-Raphson iteration, but this is usually not worthwhile in practice. In fact the intrinsic method is also stable for the

growth equation when it is analysed as discussed earlier, so that the method is in fact stable for all 3

classes of equations.

Decay: stable    Growth: stable    Oscillation: stable

Summary

In these notes we have introduced some methods for solving ordinary differential equations. However,

by far the most important lesson to be learned is that to be useful a method must be both accurate and

stable. The latter condition is often the most difficult to fulfil and careful analysis of the equations to be

solved may be necessary before choosing an appropriate method to use for finding a numerical solution.

The stability conditions derived above tend to take the form of an upper limit on the time step, which may be interpreted as saying that the step should be less than the characteristic period of oscillation. This conforms with common sense. In fact we can write down a more general common sense condition: the time step should be small compared with the smallest time scale present in the problem. Finally, many realistic problems don't fall into the neat

categories of (1.1). The simplest example is a damped harmonic oscillator. Often it is difficult to find an

exact analytical solution for the stability condition. It pays in such cases to consider some extreme conditions, such as very weak or very strong damping, work out the conditions for these cases and simply choose the most stringent condition. In non-linear problems the cases when the unknown is very large or very small may provide tractable solutions.


Problems

1. Write down definitions of the terms order of accuracy, truncation error, conditional stability as

applied to the numerical solution of ordinary differential equations.

2. Write down 1st order accurate finite difference approximations for

Hint: the result has to be something like . Expand the s around

and choose the coefficients to eliminate contributions from unwanted terms in the expansion.

N.B. This question refers to the accuracy of the approximation for the derivatives given, not to

the accuracy of .

3. Derive expressions for the truncation error of the following difference approximations.
4. The torsion of a bar is described by the differential equation

Show how to re-express this as a system of first order differential equations.

5. Write down an expression for solving the differential equation

by Euler's method and show under what conditions the method is stable. Write and test a short

program (it should only require a few lines) to test the method. Vary , and to

check the validity of the stability condition you have derived.

6. Using equation 1.2.5 show that the Euler method is stable for a vector equation provided all the

eigenvalues of have modulus less than or equal to unity.

7. Show that equation 1.37 gives the correct stability condition for both the Runge-Kutta and

Predictor-Corrector methods. Why do you think this is a good method for damped oscillatory

equations? (The last part doesn't have to be mathematically rigorous).


Project -- Classical Electrons in a Magnetic Field

The simplest source of ODEs in physics is classical mechanics, so we choose such a problem. The

dynamics of a charge in a magnetic field is described by the equation

(1.43)

where and are the mass and charge of the particle respectively, is the magnetic field and

represents some sort of friction.

A Uniform Field

Before considering the more general problem we start with a particle in a spatially uniform field.

This problem is analytically solvable and can be used to test the various methods before applying

them to the general case.

Units

Note firstly that there are only 2 independent constants in the problem, and that these constants have the units of inverse time; in fact the former is the cyclotron frequency and the latter is a damping rate. In general in any programming problem it pays to think carefully about the units to be used in the program. There are several reasons for this.


 If inappropriate units are used the program may not work at all. An example of this would be

the use of SI units to study the dynamics of galaxies or to study atomic physics. In the former case enormous numbers might easily arise which are bigger than the largest number representable on the machine, whereas in the latter the numbers may be smaller than the smallest representable number and be set automatically to zero with disastrous consequences.

 The problem often has its own natural units and it makes sense to work in these units. This has

the consequence that most of the numbers in your program will be of order unity rather than

very large or very small.

In general you should look for the natural units of a problem and write your program appropriately.

Note that these will generally not be SI or cgs. In the problem we are considering here there are 2

natural time scales, the cyclotron period and the damping time. If we decide to work in one of these, e.g. the cyclotron period, we can rewrite (1.43) in the simpler form

(1.44)

(1.45)

or perhaps

(1.46)
(1.47)

depending on the sign of the charge. Here we have chosen our coordinate system such that the magnetic field is in the z-direction. Note, in addition, that choosing the units

appropriately has eliminated all but one of the constants from the problem. This cuts down on

superfluous arithmetic in the program.

The Analytical Solution

In order to understand the behaviour of the various methods for ODEs we need to know the analytical

solution of the problem. Two dimensional problems such as this one are often most easily solved by turning the 2D vector into a complex number. Thus, by combining the two velocity components into a single complex variable, we can rewrite (1.43) in the form

(1.48)

which can be easily solved using the integrating factor method to give

(1.49)
Finally we take real and imaginary parts to find the and components

(1.50)

(1.51)

Choosing an Algorithm

The problem as posed does not fit neatly into the categories defined in (1.1). By considering the

accuracy and stability properties of the various algorithms described in these notes you have to decide

which is the most appropriate algorithm for solving the problem. The computer time required by the algorithm, as well as the ease with which it can be programmed, may be legitimate considerations. It may not be possible to solve the stability equations exactly in the most general case. Nevertheless

you should be able to deduce enough information on which to base a decision (See (e.g.) section 1.7). In

your report you should discuss the merits and demerits of the various possible algorithms and explain

the rationale behind your choice. You should also write a program to implement your chosen algorithm

and test it for various values of the field and damping constants in all the physically significant regimes.

Crossed Electric and Magnetic Fields

You are now in a position to apply your chosen algorithm to a more complicated problem. In

addition to the uniform magnetic field, we now add a perpendicular electric field. Thus (1.43a) must be modified to read


(1.52)

(1.53)

You should now write a program to solve (1.9.2) using the most appropriate method as found

earlier. Try to investigate the behaviour of the system in various physical regimes. You should

also vary the time step to check whether the stability conforms to your expectations. Think about the

physical system you are describing and whether your results are consistent with the behaviour

you would expect.

Oscillating Electric Field

Finally, if you have time, you might consider making the electric field explicitly time

dependent

(1.54)

and investigate the behaviour of the system as a function of frequency.


Partial Differential Equations

Subsections
 Types of Equations

 Elliptic Equations -- Laplace's equation

 Hyperbolic Equations -- Wave equations

o A Simple Algorithm

o An Improved Algorithm -- the Lax method

o Non-Linear Equations

o Other methods for Hyperbolic Equations

 Eulerian and Lagrangian Methods

 Parabolic Equations -- Diffusion

o A Simple Method

o The Dufort-Frankel Method

o Other Methods

 Conservative Methods

o The Equation of Continuity

o The Diffusion Equation

o Maxwell's Equations

 Dispersion

 Problems

 Project -- Lagrangian Fluid Code

o The Difference Equations

o Boundary Conditions

o Initial Conditions

o The Physics

o An Improvement?
o The Report

 Project -- Solitons

o Introduction

o Discretisation

o Physics

Types of Equations

The PDEs which occur in physics are mostly second order. The work in this section is also considered

in chapter III of Potter (1973) and chapter 17 of Press et al. (1992). For linear equations in 2 dimensions

there is a simple classification in terms of the general equation

(2.1)

as shown in the following table


Condition Type Example

Elliptic Laplace's equation (2.2)

Hyperbolic Wave equation (2.3)

Parabolic Diffusion/Schrödinger equation (2.4)

These are listed in their simplest form as follows (with the substitution where appropriate)

(2.2)

(2.3)

(2.4)

We shall consider each of these cases separately as different methods are required for each.
Elliptic Equations -- Laplace's equation

We shall deal with elliptic equations later when we come to consider matrix methods. For the moment it

suffices to note that, apart from the formal distinction, there is a very practical distinction to be made

between elliptic equations on the one hand and hyperbolic and parabolic on the other hand. Generally

speaking elliptic equations have boundary conditions which are specified around a closed boundary.

Usually all the derivatives are with respect to spatial variables, such as in Laplace's or Poisson's Equation.

Hyperbolic and Parabolic equations, by contrast, have at least one open boundary. The boundary

conditions for at least one variable, usually time, are specified at one end and the system is integrated

indefinitely. Thus, the wave equation and the diffusion equation contain a time variable and there is a

set of initial conditions at a particular time. These properties are, of course, related to the fact that an

ellipse is a closed object, whereas hyperbolæ and parabolæ are open.

Hyperbolic Equations -- Wave equations

The classical example of a hyperbolic equation is the wave equation

(2.5)

The wave equation can be rewritten in the form

(2.6)
or as a system of 2 equations

(2.7)

(2.8)

Note that the first of these equations (2.3a) is independent of the other and can be solved on its own. The

second equation (2.3b) can then be solved by using the known solution of the first. Note that we could

equally have chosen the equations the other way round, with the signs of the velocity interchanged.

As the 2 equations (2.3) are so similar we expect the stability properties to be the same. We therefore

concentrate on (2.3a), which is known as the advection equation and is in fact the conservation of mass

equation of an incompressible fluid

(2.9)

Note also that the boundary conditions will usually be specified in the form

(2.10)
which gives the value of the solution for all positions at a particular time.

A Simple Algorithm

As a first attempt to solve (2.9) we consider using centred differences for the space derivative

and Euler's method for the time part

(2.11)

where the subscripts represent space steps and the superscripts time steps. By analogy with

the discussion of the Euler and Leap-Frog methods we can see immediately that this method is

1st order accurate in the time step and 2nd order in the space step. We note firstly that

(2.12)

whereas

(2.13)

and substitute these forms into (2.11) to obtain


(2.14)

so that, when the original differential equation (2.9) is subtracted, we are left with a truncation

error which is 2nd order in the time but 3rd order in the spatial part. The stability is a property of

the time integration rather than the space discretisation. We analyse this by considering a plane wave solution for the spatial dependence to obtain

(2.15)

or, after dividing out the common exponential factor,

(2.16)

Since the wave and advection equations express a conservation law the solution should neither

grow nor decay as a function of time. If we substitute and subtract (2.16) we

obtain an equation for

(2.17)
which is identical to (2.16). Hence the stability condition is simply given by the quantity in

square brackets. We call this the amplification factor and write it as

(2.18)

where

(2.19)

As the amplification factor is complex the stability condition becomes

(2.20)

The condition must be fulfilled for all wave vectors; otherwise a component with a particular wave vector will tend to grow at the expense of the others. This is known as the von Neumann stability condition. Unfortunately it is never fulfilled for the simple method applied to the advection equation: the method is unconditionally unstable.

An Improved Algorithm -- the Lax method

We see from (2.20) that our simple method is in fact unstable for the advection equation, for all

finite values of the time and space steps. How might we improve on this? Let us consider a minor (?)

modification of (2.11)
(2.21)

in which the old value at the central point has been replaced by an average over its 2 neighbours. When we apply

the same (von Neumann) analysis to this algorithm we find

(2.22)

so that

(2.23)

(2.24)

which is stable for all as long as

(2.25)

which is an example of the Courant-Friedrichs-Lewy condition applicable to hyperbolic

equations. There is a simple physical explanation of this condition: if we start with an initial condition which is zero everywhere except at one point on the spatial grid, then a point a given number of grid steps away will remain zero until at least that many time steps later. If, however, the equation is supposed to describe a physical phenomenon which travels faster than that then something must go wrong. This is equivalent to the condition that the time step must be smaller than the time taken for the wave to travel the distance of the spatial step; or that the speed of propagation of information on the grid, one space step per time step, must be greater than any other speed in the problem.
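As an illustration, the following C sketch implements the Lax scheme for the advection equation on a periodic grid, with the time step chosen to respect the Courant-Friedrichs-Lewy condition; the grid size, wave speed and initial condition are assumptions made for the example.

#include <stdio.h>

/* Lax scheme for the advection equation du/dt + c du/dx = 0 with periodic
   boundaries (illustrative grid and parameters). */
#define NX 100

int main(void)
{
    const double c = 1.0, dx = 1.0 / NX;
    const double dt = 0.5 * dx / c;            /* respects c*dt/dx <= 1 */
    double u[NX], unew[NX];

    for (int j = 0; j < NX; j++)               /* initial condition: a single bump */
        u[j] = (j > NX / 4 && j < NX / 2) ? 1.0 : 0.0;

    for (int n = 0; n < 200; n++) {
        for (int j = 0; j < NX; j++) {
            int jp = (j + 1) % NX, jm = (j - 1 + NX) % NX;
            unew[j] = 0.5 * (u[jp] + u[jm])
                    - 0.5 * (c * dt / dx) * (u[jp] - u[jm]);
        }
        for (int j = 0; j < NX; j++) u[j] = unew[j];
    }
    printf("u[0] = %f\n", u[0]);
    return 0;
}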

Non-Linear Equations

When applying the von Neumann analysis to non-linear equations the equation should first be linearised by substituting a small perturbation about the solution and retaining only the terms linear in the perturbation. The unperturbed solution should then be treated as a constant, independent of position (or time). The resulting stability condition will then depend on the solution itself, just as in the case of ODEs.

Other methods for Hyperbolic Equations

There is a large number of algorithms applicable to hyperbolic equations in general. As before,

their suitability for solving a particular problem must be based on an analysis of their accuracy

and stability for any wavelength. Here we simply name a few of them:

 The Lelevier method.

 The one-step and two-step Lax-Wendroff methods.

 The Leap-Frog method.

 The Quasi-Second Order method.


Eulerian and Lagrangian Methods

In the methods discussed so far the differential equations have been discretised by defining the value of

the unknown at fixed points in space. Such methods are known as Eulerian methods. We have confined

ourselves to a regular grid of points, but sometimes it is advantageous to choose some other grid. An

obvious example is when there is some symmetry in the problem, such as cylindrical or spherical: often

it is appropriate to base our choice of grid on cylindrical or spherical coordinates rather than the

Cartesian ones used so far. Suppose, however, we are dealing with a problem in electromagnetism or

optics, in which the dielectric constant varies in space. Then it might be appropriate to choose a grid in

which the points are more closely spaced in the regions of higher dielectric constant. In that way we
could take account of the fact that the wavelengths expected in the solution will be shorter. Such an

approach is known as an adaptive grid. In fact it is not necessary for the grid to be fixed in space. In fluid

mechanics, for example, it is often better to define a volume of space containing a fixed mass of fluid

and to let the boundaries of these cells move in response to the dynamics of the fluid. The differential

equation is transformed into a form in which the variables are the positions of the boundaries of the

cells rather than the quantity of fluid in each cell. A simple example of such a Lagrangian method is

described in the Lagrangian Fluid project.


Parabolic Equations -- Diffusion

Parabolic equations such as the diffusion and time-dependent Schrödinger equations are similar to the

hyperbolic case in that the boundary is open and we consider integration with respect to time, but, as

we shall see, they present some extra problems.

 A Simple Method

 The Dufort-Frankel Method

 Other Methods

 A Simple Method
 We consider the diffusion equation and apply the same simple method we tried for the

hyperbolic case.

(2.26)

 and discretise it using the Euler method for the time derivative and the simplest centred

2nd order derivative to obtain

(2.27)

 Applying the von Neumann analysis to this system by considering a single Fourier mode

in space, we obtain

(2.28)

 so that the condition that the method is stable for all gives

(2.29)

 Although the method is in fact conditionally stable the condition (2.29) hides an

uncomfortable property: namely, that if we want to improve accuracy and allow for smaller wavelengths by halving the space step we must divide the time step by 4. Hence, the number of space steps is doubled and the number of time steps is quadrupled: the computer time required is multiplied by 8. Note that this is different from the sorts of conditions we have encountered up to now, in that it doesn't depend on any real physical time scale of the problem.
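A minimal C sketch of this simple explicit scheme is given below; the diffusion constant, grid and boundary values are illustrative assumptions, and the step is chosen to satisfy the stability condition just discussed.

#include <stdio.h>

/* Simple explicit scheme for the diffusion equation du/dt = D d2u/dx2,
   with the step chosen so that D*dt/dx^2 stays below 1/2 (illustrative). */
#define NX 100

int main(void)
{
    const double D = 1.0, dx = 1.0 / NX;
    const double alpha = 0.25;                 /* D*dt/dx^2, below the limit 0.5 */
    const double dt = alpha * dx * dx / D;
    double u[NX] = {0.0}, unew[NX];

    u[NX / 2] = 1.0;                           /* initial spike */

    for (int n = 0; n < 1000; n++) {
        for (int j = 1; j < NX - 1; j++)
            unew[j] = u[j] + alpha * (u[j + 1] - 2.0 * u[j] + u[j - 1]);
        unew[0] = 0.0; unew[NX - 1] = 0.0;     /* fixed boundary values */
        for (int j = 0; j < NX; j++) u[j] = unew[j];
    }
    printf("dt = %g, u at centre = %f\n", dt, u[NX / 2]);
    return 0;
}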

 The Dufort-Frankel Method

We consider here one of many alternative algorithms which have been designed to overcome

the stability problems of the simple algorithm. The Dufort-Frankel method is a trick which
exploits the unconditional stability of the intrinsic method for simple differential equations.

We modify (2.27) to read

(2.30)

which can be solved explicitly for at each mesh point

(2.31)

where

(2.32)

When the usual von Neumann analysis is applied to this method it is found to be

unconditionally stable. Note however that this does not imply that the time and space steps can be made indefinitely large; common sense tells us that they must be small compared to any real

physical time or length scales in the problem. We must still worry about the accuracy of the

method. Another difficulty this method shares with the Leap-Frog method is that it requires

boundary conditions at 2 times rather than one, even though the original diffusion equation is

only 1st order in time.
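For illustration, the sketch below implements the Dufort-Frankel update in the form in which it is commonly quoted; the coefficients should be checked against (2.30)-(2.32), and the grid and parameter values are assumed.

#include <stdio.h>

/* Dufort-Frankel update for du/dt = D d2u/dx2, in its commonly quoted form
   (to be checked against (2.30)-(2.32)): it needs the two previous time levels. */
#define NX 100

int main(void)
{
    const double D = 1.0, dx = 1.0 / NX, dt = 0.5 * dx * dx / D;
    const double a = 2.0 * D * dt / (dx * dx);
    double uold[NX] = {0.0}, u[NX] = {0.0}, unew[NX] = {0.0};

    uold[NX / 2] = u[NX / 2] = 1.0;            /* same spike at the two starting levels */

    for (int n = 0; n < 1000; n++) {
        for (int j = 1; j < NX - 1; j++)
            unew[j] = ((1.0 - a) * uold[j] + a * (u[j + 1] + u[j - 1])) / (1.0 + a);
        for (int j = 1; j < NX - 1; j++) {
            uold[j] = u[j];
            u[j] = unew[j];
        }
    }
    printf("u at centre = %f\n", u[NX / 2]);
    return 0;
}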

Other Methods
Another important method for solving parabolic equations is the Crank-Nicolson method,

which we shall not discuss here but which is considered in chapter II of Potter (1973).

Conservative Methods

Most of the differential equations we encounter in physics embody some sort of conservation law. It is

important therefore to try to ensure that the method chosen to solve the equation obeys the underlying

conservation law exactly and not just approximately. This principle can also have the side effect of

ensuring stability. The simplest approach to deriving a method with such properties is to go back to the

original derivation of the differential equation.

 The Equation of Continuity

 The Diffusion Equation

 Maxwell's Equations

The Equation of Continuity


The archetypal example of a differential equation which implies a conservation law is the

equation of continuity, which, in its differential form says that

(2.33)   ∂ρ/∂t + ∇·j = 0

where ρ represents a density and j a current density. The density and current density could be

of mass, charge, energy or something else which is conserved. Here we shall use the words

charge and current for convenience. We consider here the 1D form for simplicity. The

equation is derived by considering space as divided into sections of length . The change

in the total charge in a section is equal to the total current coming in (going out) through its

ends

(2.34)

It is therefore useful to re-express the differential equation in terms of the total charge in the

section and the total current coming in through each face, so that we obtain a discrete

equation of the form

(2.35)
where the unknowns are the total charge in each section and the current through the boundary between neighbouring sections. This takes care of the spatial part. What about the time

derivative? We can express the physics thus:

The change in the charge in a cube is equal to the total charge which enters through its

faces.

The Diffusion Equation

In many cases the current through a face is proportional to the difference in density (or total

charge) between neighbouring cubes

(2.36)

Substituting this into the equation of continuity leads directly to the diffusion equation in discrete

form

(2.37)

which is of course our simple method of solution. To check whether this algorithm obeys the

conservation law we sum over all cells, as the total charge should be conserved. Note that it helps to consider the whole process as taking place on a circle as this avoids problems associated with currents across the boundaries. In this case the two ends of the chain are identified with one another and it is easy to see that the conservation law is obeyed for (2.37).

Maxwell's Equations

Let us now try to apply the same ideas to Maxwell's equations. In free space we have

(2.38)

(2.39)

In order to reverse the derivation of these equations we consider space to be divided into cubes

as before. For (2.6.3a) we integrate over a face of the cube and apply Stokes' theorem

(2.40)

(2.41)

and the integral of the electric field round the edges of the face is equal to minus the rate of change of the magnetic flux through the face, i.e. Faraday's law. Here we can associate the electric field with the edges and the magnetic field with the face of the cube. In the case of

the diffusion equation we had to think in terms of the total charge in a cube instead of the

density. Now we replace the electric field with the integral of the field along a line and the
magnetic field with the flux through a face. Note also that we can analyse (2.6.3b) in a similar

way to obtain a representation of Ampère's law.

Dispersion

Let us return to the Lax method for hyperbolic equations. The solution of the differential equation has

the form

(2.42)

where ω = ck. Let us substitute (2.42) into the Lax algorithm

(2.43)

By cancelling the common factors we now obtain

(2.44)
From this we can derive a dispersion relation, ω(k), for the discrete equation and compare the result with ω = ck for the original differential equation. Since, in general, ω could be complex, we write it in terms of its real and imaginary parts and compare real and imaginary parts on both sides of the equation to obtain

(2.45)

(2.46)

Taking the ratio of these or the sum of their squares respectively leads to the equations

(2.47)

(2.48)

The first of these equations tells us that in general the phase velocity is not c, although for long wavelengths we recover the correct dispersion relationship. This is similar to the situation when we compare lattice vibrations with classical elastic waves: the long wavelength sound waves behave correctly but the shorter wavelengths deviate. The second equation (2.7b) describes the damping of the modes. Again for small wave vectors the damping is negligible, but short wavelengths are strongly damped. This may be a desirable property, as short wavelength oscillations may be spurious. After all we should have chosen the grid spacing to be small compared with any expected features. Nevertheless, with this particular algorithm the shortest wavelength representable on the grid is not damped. Other algorithms, such as Lax-Wendroff, have been designed specifically to damp anything with a wavelength comparable to the grid spacing.

Problems

1. Explain the difference between hyperbolic, parabolic and elliptic partial differential equations,

and give an example of each. What is the important physical distinction between hyperbolic and

parabolic equations, on the one hand, and elliptic equations on the other?

2. Describe the von Neumann procedure for analysing the stability of partial differential equations.

3. Describe the physical principle behind the Courant-Friedrichs-Lewy condition as applied to the

numerical solution of partial differential equations.

4. Are the following equations hyperbolic, elliptic or parabolic?


5. The equation

can be represented by the difference equation


Derive the truncation error of this difference equation. Write down an alternative difference

equation which is 2nd order accurate in both and .

6. The Dufort-Frankel scheme is a method for the solution of the diffusion equation. Show that the

method is unconditionally stable. Discuss the advantages and disadvantages of this method.

7. The diffusion equation in a medium where the diffusion constant varies in space

is

Show that the difference equation

is not conservative, i.e. is not conserved. Construct an alternative difference scheme

which is conservative.
8. Show that the Lax scheme for the solution of the advection equation is equivalent to

Examine the behaviour of wave-like solutions in the Lax scheme and

explain the behaviour in terms of diffusion.

9. Describe what is meant by numerical dispersion.

10. The Lax-Wendroff method consists of 2 steps, just like Runge-Kutta or Predictor-Corrector. It is given

by

(2.49)

(2.50)

Draw a diagram to illustrate the way this algorithm operates on the space-time grid. Show that the algorithm is stable provided the Courant-Friedrichs-Lewy condition is obeyed. Show that the algorithm tends to damp waves with short wavelengths.


Project -- Lagrangian Fluid Code

Small amplitude sound waves travel at the sound speed without changing shape (so that a sine

wave remains a sine wave). However, when the amplitude is large, the sound speed differs between the

peaks and the troughs of the wave and the wave changes shape as it propagates. This project is to

simulate this effect in one-dimension. A difference method is used in which the system is split up into

cells and the cell boundaries are allowed to move with the local fluid velocity (a so-called Lagrangian

method). You should not require any library routines to complete this project.

 The Difference Equations

 Boundary Conditions

 Initial Conditions

 The Physics

 An Improvement?

The Difference Equations

The difference equations are,

(2.51)

(2.52)

where,

(2.53)
and,

(2.54)

The unknowns are the positions and velocities of the cell boundaries together with the densities and pressures in the cells; the mass in each cell is constant. It is useful for the

purpose of programming to redefine things to get rid of the various half integer indices.

Hence we can write the equations as

(2.55)

(2.56)

(2.57)

which maps more easily onto the arrays in a program.

Boundary Conditions
Use periodic boundary conditions to allow simulation of an infinite wave train. Note, though, the

presence of neighbouring-cell quantities in (2.9.1). If the cell index runs over N cells then values just outside this range will be needed.

This is best done by creating ``ghost'' cells at the end of the arrays. In C(++) this can be done by

a declaration such as

double x[N+1], u[N+1], P[N+2];

and, after each loop in which or are updated, by setting and ,

where is the equilibrium length of the grid. Note that this requires you to leave a couple of

empty array elements, but for large N this is not significant.

Initial Conditions

Sensible initial conditions might be,

(2.58)

(2.59)

where the amplitude of the sound wave is the ratio of the velocity amplitude to the sound speed (i.e. the Mach number). Be careful to choose sensible values for the parameters: e.g. the cell boundary positions should rise monotonically with the cell index, otherwise some cells will have negative fluid densities. The essential

physics of the problem is independent of the absolute values of the equilibrium pressure and

density, so these can be set to unity. Note that the scheme is leap-frog but not of the dangerous variety: alternate steps solve for the positions and the velocities.

The Physics

Start with a relatively small value for the amplitude and show that the wave maintains its shape and moves at the correct velocity. Then increase the amplitude to find out what happens. The stability

analysis for a non-linear equation like this is difficult. Try halving the distance between grid

points. How does the behaviour of the system change? Do the effects you observe describe real

physics or are they numerical artefacts? One common numerical problem only manifests itself

for large amplitudes. Try running for a few steps at large amplitude, look at the values and try to work out

what has gone wrong. Think about how to prevent this problem (Hint: you should find something

very similar to the Courant-Friedrichs-Lewy condition).

An Improvement?

Consider the operation:

(2.60)
By applying the von Neumann trick you should be able to deduce the effect of such an operation.

Now try applying the transformation (2.60) to your solution after a few iterations. How does the

behaviour of your simulation change?

Project -- Solitons

 Introduction

 Discretisation

 Physics

Introduction

The Korteweg de Vries equation,

(2.61)

is one of a class of non-linear equations which have so-called soliton solutions. In this case a

solution can be written in the form,

(2.62)
which has the form of a pulse which moves unchanged through the system. Ever since the

phenomenon was first noted (on a canal in Scotland) it has been recognised in a wide range

of different physical situations. The ``bore'' which occurs on certain rivers, notably the

Severn, is one such. The dynamics of phase boundaries in various systems, such as domain

boundaries in a ferromagnet, and some meteorological phenomena can also be described in

terms of solitons.

Discretisation

The simplest discretisation of (2.61), based on the Euler method, gives the equation

(2.63)

You should check that this is indeed a sensible discretisation of (2.61) but that it is unstable.

Note that when analysing the non-linear term in (2.63) you should substitute a small perturbation about the solution and retain only the terms linear in the perturbation. Thereafter you should treat the unperturbed solution as a constant, independent of position and time, and apply the von Neumann method to the perturbation. If you find the full stability

analysis difficult you might consider the 2 limits of large and small . In the former case the 3rd

derivative is negligible and (2.61) reduces to a non-linear advection equation, whereas in the

latter the non-linear term is negligible and the equation is similar to the diffusion equation but

with a 3rd derivative. In any case you will need to choose the time step so that the equation is stable in

both limits. You are free to choose any method you wish to solve the equation but you will find
the Runge-Kutta or Predictor-Corrector methods most reliable. Hence, treating the values at all the grid points as a long vector and the terms on the right-hand-side of (2.63) as a vector function, the R-K method can be written concisely as

(2.64)

(2.65)

where the vector function is the quantity in square brackets in (2.63). Check that this method is stable, at

least in the 2 limiting cases. Bear in mind that the Runge-Kutta method is usable for oscillatory

equations in spite of the small instability as long as the term is small. By studying the

analytical solution (2.62) you should be able to choose a sensible value for the grid spacing, and from the stability conditions you can deduce an appropriate time step. Again, by looking at (2.62) you

should be able to decide on a sensible size for the total system. You should use periodic

boundary conditions, so that your solitons can run around your system several times if necessary.

The easiest way to do this is by using ``ghost'' elements at each end of your arrays. Suppose your

arrays should run over N points. Then you can add a couple of extra elements to each end. After each step you can then assign these values as

(2.66)
so that the derivative terms in (2.63) can be calculated without having to take any special

measures.
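One possible way of coding the ghost cells, in C, is sketched below; the array layout (two ghost elements at each end) and the names are assumptions made for illustration, and the interior points would be initialised from (2.62) and updated by your chosen integration scheme.

#include <stdio.h>

/* Periodic "ghost" cells for an array of N interior points stored in
   u[2] ... u[N+1], with two ghost elements at each end so that 1st and
   3rd derivatives can be formed without special cases (illustrative layout). */
#define N 256

void set_ghosts(double u[N + 4])
{
    u[1] = u[N + 1];   /* left ghosts copy the right-hand interior points */
    u[0] = u[N];
    u[N + 2] = u[2];   /* right ghosts copy the left-hand interior points */
    u[N + 3] = u[3];
}

int main(void)
{
    double u[N + 4] = {0.0};
    /* ... initialise the interior points u[2] .. u[N+1] here ... */
    set_ghosts(u);
    /* after every integration step, call set_ghosts(u) again before
       evaluating the finite difference derivatives in (2.63). */
    printf("%f\n", u[0]);
    return 0;
}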

Physics

You should first check your program by studying the single soliton case. Use (2.62) at the initial time as

your initial condition and watch how the soliton develops. Check that after it has gone round the

system once it retains its shape. Now you can try 2 solitons of different sizes. The initial

conditions should be the simple sum of 2 solitons well separated from one another. Watch what

happens when the solitons meet. In your report you should discuss the behaviour of the solitons

as well as the properties of the method chosen to solve the equations.

Matrix Algebra

Subsections

 Introduction

 Types of Matrices

 Simple Matrix Problems


o Addition and Subtraction

o Multiplication of Matrices

 Elliptic Equations -- Poisson's Equation

o One Dimension

o 2 or more Dimensions

 Systems of Equations and Matrix Inversion

o Exact Methods

o Iterative Methods

 The Jacobi Method

 The Gauss-Seidel Method

 Matrix Eigenvalue Problems

o Schrödinger's equation

o General Principles

o Full Diagonalisation

o The Generalised Eigenvalue Problem

o Partial Diagonalisation

 Sturm Sequence

o Sparse Matrices and the Lanczos Algorithm

 Problems

 Project -- Oscillations of a Crane

o Analysis

 Project -- Phonons in a Quasicrystal

o Introduction

o The Fibonacci Lattice


o The Model

o The Physics

Introduction

Nearly every scientific problem which is solvable on a computer can be represented by matrices.

However the ease of solution can depend crucially on the types of matrices involved. There are 3 main

classes of problems which we might want to solve:

1. Trivial Algebraic Manipulation such as addition or multiplication of matrices.

2. Systems of equations: the matrix and the right-hand side are known and we have to find the unknown vector. This also includes the case of finding the inverse of a matrix. The standard example of such a problem is

Poisson's Equation.

3. Eigenvalue Problems. This also includes the generalised eigenvalue problem. Here we will consider the time-independent Schrödinger equation.


In most cases there is no point in writing your own routine to solve such problems. There are many

computer libraries, such as Numerical Algorithms Group (n.d.), Lapack Numerical Library (n.d.) (for linear

algebra problems and eigenvalue problems), which contain well-tried routines. In addition vendors of

machines with specialised architectures often provide libraries of such routines as part of the basic

software.

Types of Matrices

There are several ways of classifying matrices depending on symmetry, sparsity etc. Here we provide a

list of types of matrices and the situation in which they may arise in physics.

 Hermitian Matrices: Many Hamiltonians have this property especially those containing

magnetic fields: where at least some elements are complex.

 Real Symmetric Matrices: These are the commonest matrices in physics as most Hamiltonians

can be represented this way: and all are real. This is a special case of Hermitian

matrices.

 Positive Definite Matrices: A special sort of Hermitian matrix in which all the eigenvalues are

positive. The overlap matrices used in tight-binding electronic structure calculations are like this.

Sometimes matrices are non-negative definite and zero eigenvalues are also allowed. An

example is the dynamical matrix describing vibrations of the atoms of a molecule or crystal,

where .

 Unitary Matrices: The product of the matrix and its Hermitian conjugate is a unit matrix,

. A matrix whose columns are the eigenvectors of a Hermitian matrix is unitary; the
unitarity is a consequence of the orthogonality of the eigenvectors. A scattering (S) matrix is

unitary; in this case a consequence of current conservation.

 Diagonal Matrices: All matrix elements are zero except the diagonal elements, when

. The matrix of eigenvalues of a matrix has this form. Finding the eigenvalues is equivalent

to diagonalisation.

 Tridiagonal Matrices: All matrix elements are zero except the diagonal and first off diagonal

elements, , . Such matrices often occur in 1 dimensional problems and at

an intermediate stage in the process of diagonalisation.

 Upper and Lower Triangular Matrices: In Upper Triangular Matrices all the matrix elements

below the diagonal are zero, for . A Lower Triangular Matrix is the other way

round, for . These occur at an intermediate stage in solving systems of

equations and inverting matrices.

 Sparse Matrices: Matrices in which most of the elements are zero according to some pattern. In

general sparsity is only useful if the number of non-zero matrix elements of an N x N matrix is proportional to N rather than N². In this case it may be possible to write a function which

will multiply a given vector by the matrix to give without ever having to store all the

elements of . Such matrices commonly occur as a result of simple discretisation of partial

differential equations, and in simple models to describe many physical phenomena.

 General Matrices: Any matrix which doesn't fit into any of the above categories, especially non-

square matrices.
There are a few extra types which arise less often:

 Complex Symmetric Matrices: Not generally a useful symmetry. There are however two related

situations in which these occur in theoretical physics: Green's functions and scattering (S)

matrices. In both these cases the real and imaginary parts commute with each other, but this is

not true for a general complex symmetric matrix.

 Symplectic Matrices: This designation is used in 2 distinct situations:

o The eigenvalues occur in pairs which are reciprocals of one another. A common example

is a Transfer Matrix.

o Matrices whose elements are Quaternions, which are 2 x 2 matrices like

(3.1)

o Such matrices describe systems involving spin-orbit coupling. All eigenvalues are doubly

degenerate (Kramers degeneracy).

Simple Matrix Problems

 Addition and Subtraction

 Multiplication of Matrices

Addition and Subtraction


In programming routines to add or subtract matrices it pays to remember how the matrix is

stored in the computer. Unfortunately this varies from language to language: in C(++) and

Pascal the first index varies slowest and the matrix is stored one complete row after another;

whereas in FORTRAN the first index varies fastest and the matrix is stored one complete

column after another. It is generally most efficient to access the matrix elements in the order

in which they are stored. Hence the simple matrix algebra A = B + C should be written in C as

const int N =??;

int i, j;

double A[N][N], B[N][N], C[N][N];

for( i = 0; i < N; i++)

for( j = 0; j < N; j++)

A[i][j] = B[i][j] + C[i][j];

or its equivalent using pointers. In FORTRAN90 this should be written

integer :: i,j

integer, parameter :: N = ??

double precision :: A(N,N), B(N,N), C(N,N)

do i = 1,N

do j=1,N

A(j,i) = B(j,i) + C(j,i)

end do

end do

Note the different ordering of the loops in these 2 examples. This is intended to optimise the

order in which the matrix elements are accessed. It is perhaps worth noting at this stage that

in both C++ and FORTRAN90 it is possible to define matrix type variables (classes) so that

the programs could be reduced to


matrix A(N,N), B(N,N), C(N,N);

A = B + C;

The time taken to add or subtract 2 matrices is generally proportional to the total

number of matrix elements, although this may be modified by parallel architecture.

Multiplication of Matrices

Unlike addition and subtraction of matrices it is difficult to give a general machine independent

rule for the optimum algorithm for the operation. In fact matrix multiplication is a classic

example of an operation which is very dependent on the details of the architecture of the

machine. We quote here a general purpose example but it should be noted that this does not

necessarily represent the optimum ordering of the loops.

const int L = ??, M = ??, N = ??;

int i, j, k;

double A[L][N], B[L][M], C[M][N], sum;

/* A = B * C */

for ( j = 0; j < N; j++)

for ( i = 0; i < L; i++)

{

for ( sum = 0, k = 0; k < M; k++)

sum += B[i][k] * C[k][j];

A[i][j] = sum;

}
in C. The time taken to multiply 2 matrices is generally proportional to L x M x N (N³ for square matrices), although

this may be modified by parallel processing.

Elliptic Equations -- Poisson's Equation

At this point we digress to discuss Elliptic Equations such as Poisson's equation. In general these

equations are subject to boundary conditions at the outer boundary of the range; there are no initial

conditions, such as we would expect for the Wave or Diffusion equations. Hence they cannot be solved

by adapting the methods for simple differential equations.

 One Dimension

 2 or more Dimensions

One Dimension

We start by considering the one dimensional Poisson's equation

(3.2)

The 2nd derivative may be discretised in the usual way to give

(3.3)

where we define .
The boundary conditions are usually given values of the solution at the two ends of the range, although sometimes the condition is on the first derivative. Since both boundary values are known, the first and last of equations (3.3) may be written as

(3.4)

(3.5)

This may seem trivial but it maintains the convention that all the terms on the left contain

unknowns and everything on the right is known. It also allows us to rewrite the equations (3.3) in matrix

form as

(3.6)

which is a simple matrix equation of the form

(3.7)
in which is tridiagonal. Such equations can be solved by methods which we shall consider

below. For the moment it should suffice to note that the tridiagonal form can be solved

particularly efficiently and that functions for this purpose can be found in most libraries of

numerical functions.
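To make this concrete, the following is a minimal sketch of the standard tridiagonal (Thomas) algorithm in C. The routine name, array layout and the assumption that the system is diagonally dominant (so that no pivoting is needed) are my own choices for illustration, not those of any particular library.

#include <stdio.h>

/* Solve a tridiagonal system  a[i]*x[i-1] + d[i]*x[i] + c[i]*x[i+1] = b[i].
   A sketch only: assumes the system is diagonally dominant so that no
   pivoting is required.  The arrays b and d are overwritten.                */
void tridiag(int n, double *a, double *d, double *c, double *b, double *x)
{
   int i;
   for (i = 1; i < n; i++)                 /* forward elimination            */
   {
      double m = a[i] / d[i-1];
      d[i] -= m * c[i-1];
      b[i] -= m * b[i-1];
   }
   x[n-1] = b[n-1] / d[n-1];               /* back substitution              */
   for (i = n-2; i >= 0; i--)
      x[i] = (b[i] - c[i] * x[i+1]) / d[i];
}

int main(void)
{
   /* 1D Poisson-like example:  -x[i-1] + 2 x[i] - x[i+1] = b[i]             */
   double a[5] = {0,-1,-1,-1,-1}, d[5] = {2,2,2,2,2},
          c[5] = {-1,-1,-1,-1,0}, b[5] = {1,1,1,1,1}, x[5];
   tridiag(5, a, d, c, b, x);
   for (int i = 0; i < 5; i++) printf("%g\n", x[i]);
   return 0;
}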

There are several points which are worthy of note.

 We could only write the equation in this matrix form because the boundary conditions

allowed us to eliminate a term from the 1st and last lines.

 Periodic boundary conditions can be implemented, but they have the effect of adding non-zero elements to the top right and bottom left corners of the matrix, so that the tridiagonal form is lost.

 It is sometimes more efficient to solve Poisson's or Laplace's equation using Fast Fourier

Transformation (FFT). Again there are efficient library routines available (Numerical

Algorithms Group, n.d.). This is especially true in machines with vector processors.

2 or more Dimensions

Poisson's and Laplace's equations can be solved in 2 or more dimensions by simple

generalisations of the schemes discussed for 1D. However the resulting matrix will not in

general be tridiagonal. The discretised form of the equation takes the form

(3.8)
The 2 dimensional index pairs may be mapped on to one dimension for the purpose

of setting up the matrix. A common choice is so-called dictionary (lexicographic) order, in which the index pairs are listed in order of the first index and, within that, of the second. Alternatively Fourier transformation can be used either for all dimensions or to reduce the

problem to tridiagonal form.
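As a sketch of the dictionary-order mapping (the function names and the convention that boundary points are held at zero potential and omitted are my own assumptions), the 5-point Laplacian matrix could be set up in C as follows:

/* Dictionary (lexicographic) ordering: interior grid point (i,j), with
   0 <= i < Nx and 0 <= j < Ny, is mapped on to the single index k.           */
int index2d(int i, int j, int Ny) { return i * Ny + j; }

/* Fill the (Nx*Ny) x (Nx*Ny) matrix for the 5-point Laplacian, stored
   row-major in A, which is assumed to have been initialised to zero.         */
void fill_laplacian(int Nx, int Ny, double *A)
{
   int n = Nx * Ny;
   for (int i = 0; i < Nx; i++)
      for (int j = 0; j < Ny; j++)
      {
         int k = index2d(i, j, Ny);
         A[k*n + k] = -4.0;
         if (i > 0)    A[k*n + index2d(i-1, j, Ny)] = 1.0;
         if (i < Nx-1) A[k*n + index2d(i+1, j, Ny)] = 1.0;
         if (j > 0)    A[k*n + index2d(i, j-1, Ny)] = 1.0;
         if (j < Ny-1) A[k*n + index2d(i, j+1, Ny)] = 1.0;
      }
}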

Systems of Equations and Matrix Inversion

We now move on to look for solutions of problems of the form

(3.9)

where the coefficient matrix is square and the unknowns and the right hand side are matrices with one or more columns. Matrix inversion is simply the special case where the right hand side is the unit matrix.

 Exact Methods

 Iterative Methods
o The Jacobi Method

o The Gauss-Seidel Method

Exact Methods

Most library routines, for example those in the NAG (Numerical Algorithms Group, n.d.) or

LaPack (Lapack Numerical Library, n.d.) libraries are based on some variation of Gaussian

elimination. The standard procedure is to call a first function which performs an LU decomposition of the matrix,

(3.10)

where L and U are lower and upper triangular matrices respectively, followed by a function

which performs the operations

(3.11)

(3.12)

on each column of the right hand side. The procedure is sometimes described as Gaussian Elimination. A common variation on this procedure is partial pivoting. This is a trick to avoid numerical instability in the Gaussian Elimination (or LU decomposition) by reordering the rows to avoid dividing by small numbers. There are several aspects of this procedure which should be noted:

 The LU decomposition is usually done in place, so that the original matrix is overwritten by the matrices L and U.

 The right hand side matrix is often overwritten by the solution.

 A well written LU decomposition routine should be able to spot when the matrix is singular and return a flag to tell you so.

 Often the decomposition routine will also return the determinant of the matrix.

 Conventionally the lower triangular matrix L is defined such that all its diagonal elements are unity. This is what makes it possible to overwrite the original matrix with both L and U.

 Some libraries provide separate routines for the Gaussian Elimination and the Back-Substitution steps. Often the Back-Substitution must be run separately for each column of the right hand side.

 Routines are provided for a wide range of different types of matrices. The symmetry of the matrix is not often used however.

The time taken to solve N equations in N unknowns is generally proportional to N³ for the Gaussian-Elimination (LU-decomposition) step. The Back-Substitution step goes as N² for each column of the right hand side. As before this may be modified by parallelism.
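For illustration only, a minimal in-place Doolittle LU decomposition and solve, without the partial pivoting and error checking a library routine would provide, might look like this in C (the function names and row-major storage are my own choices):

/* In-place LU decomposition of an n x n matrix stored row-major in a[].
   On exit the strict lower triangle holds L (its unit diagonal is not stored)
   and the upper triangle holds U.  No pivoting, so it can fail on perfectly
   well-conditioned matrices.                                                 */
int lu_decompose(int n, double *a)
{
   for (int k = 0; k < n; k++)
   {
      if (a[k*n + k] == 0.0) return -1;         /* would need pivoting        */
      for (int i = k+1; i < n; i++)
      {
         a[i*n + k] /= a[k*n + k];              /* multiplier L(i,k)          */
         for (int j = k+1; j < n; j++)
            a[i*n + j] -= a[i*n + k] * a[k*n + j];
      }
   }
   return 0;
}

/* Forward and back substitution for one right hand side b, overwritten by
   the solution, as in the L y = b and U x = y steps described above.         */
void lu_solve(int n, const double *a, double *b)
{
   for (int i = 1; i < n; i++)                  /* L y = b (unit diagonal)    */
      for (int j = 0; j < i; j++)
         b[i] -= a[i*n + j] * b[j];
   for (int i = n-1; i >= 0; i--)               /* U x = y                    */
   {
      for (int j = i+1; j < n; j++)
         b[i] -= a[i*n + j] * b[j];
      b[i] /= a[i*n + i];
   }
}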


Iterative Methods

As an alternative to the above library routines, especially when large sparse matrices are

involved, it is possible to solve the equations iteratively.

 The Jacobi Method

 The Gauss-Seidel Method

The Jacobi Method

We first divide the matrix A into 3 parts,

A = D + L + U,   (3.13)

where D is a diagonal matrix and L and U are strict lower and upper triangular matrices respectively (i.e. with zeros on the diagonal). We now write the Jacobi or Gauss-Jacobi iterative procedure to solve our system of equations as

x^(n+1) = D^(-1) [ b - (L + U) x^(n) ],   (3.14)

where the superscripts refer to the iteration number. Note that in practice this procedure requires the storage of the diagonal matrix, D, and a function to multiply a vector by L + U. This
algorithm resembles the iterative solution of hyperbolic or parabolic partial differential equations, and

can be analysed in the same spirit. In particular care must be taken that the method is stable. Simple C

code to implement this for a 1D Poisson's equation is given below.

int i;
const int N = ??;                    // incomplete code
double xa[N], xb[N], b[N];

while ( ... )                        // incomplete code: repeat until converged
{
   for ( i = 1; i < N-1; i++ )       // interior points only: xb[0] and xb[N-1]
      xa[i] = (b[i] - xb[i-1] - xb[i+1]) * 0.5;   // hold the boundary values
   for ( i = 1; i < N-1; i++ )
      xb[i] = xa[i];
}

Note that 2 arrays are required for x and that the matrices D, L and U don't appear explicitly.

The Gauss-Seidel Method

Any implementation of the Jacobi Method above will involve a loop over the elements in each column of x. Instead of calculating a completely new vector x^(n+1) at each iteration and then replacing x^(n) with it before the next iteration, as in the above code, it might seem sensible to replace the elements of x^(n) with those of x^(n+1) as soon as they have been calculated. Naïvely we would expect faster convergence. This is equivalent to rewriting the Jacobi equation as

x^(n+1) = D^(-1) [ b - L x^(n+1) - U x^(n) ].   (3.15)

This Gauss-Seidel method has better convergence than the Jacobi method, but only marginally so. As before we consider the example of the solution of the 1D Poisson's equation. In C the basic structure might be something like

int i;
const int N = ??;                    // incomplete code
double x[N], b[N];

while ( ... )                        // incomplete code: repeat until converged
   for ( i = 1; i < N-1; i++ )       // interior points only: x[0] and x[N-1]
      x[i] = (b[i] - x[i-1] - x[i+1]) * 0.5;      // hold the boundary values

Note that only one array is now required to store x, whereas the Jacobi method needed 2. The time required for each iteration is proportional to N for each column of x, assuming the matrix is sparse. The number of iterations required depends on the details of the problem, on the quality of the initial guess for x, and on the accuracy required of the solution.


Matrix Eigenvalue Problems

No attempt will be made here to describe the detailed algorithms used for matrix diagonalisation. A full

discussion can be found in the book by Wilkinson and Reinsch (Wilkinson, 1964). The routines found in

the common libraries, such as NAG (Numerical Algorithms Group, n.d.) or LaPack (Lapack Numerical

Library, n.d.) are almost all based on the algorithms in this book.

 Schrödinger's equation

 General Principles

 Full Diagonalisation

 The Generalised Eigenvalue Problem

 Partial Diagonalisation

o Sturm Sequence

 Sparse Matrices and the Lanczos Algorithm

Schrödinger's equation

In dimensionless form the time-independent Schrödinger equation can be written as

(3.16)

The Laplacian, , can be represented in discrete form as in the case of Laplace's or

Poisson's equations. For example, in 1D (3.16) becomes

(3.17)
which can in turn be written in terms of a tridiagonal matrix as

(3.18)

An alternative and more common procedure is to represent the eigenfunction in terms of a

linear combination of basis functions so that we have

(3.19)

The basis functions are usually chosen for convenience and as some approximate analytical

solution of the problem. Thus in chemistry it is common to choose the basis functions to be known

atomic orbitals. In solid state physics often plane waves are chosen. Inserting (3.19) into

(3.16) gives

(3.20)

Multiplying this by one of the basis functions and integrating gives

(3.21)
We now define 2 matrices

(3.22)

(3.23)

so that the whole problem can be written concisely as

(3.24)

(3.25)

which has the form of the generalised eigenvalue problem. Often the basis functions are chosen to be orthogonal, so that the overlap matrix becomes the unit matrix and is eliminated from the problem.

General Principles
The usual form of the eigenvalue problem is written

(3.26)

where A is a square matrix, x is an eigenvector and λ is an eigenvalue. Sometimes the eigenvalue and eigenvector are called latent root and latent vector respectively. An N x N matrix usually has N distinct eigenvalue/eigenvector pairs. The full solution of the eigenvalue

problem can then be written in the form

(3.27)
(3.28)

where one matrix is a diagonal matrix of eigenvalues and the other two are matrices whose columns (rows) are the corresponding right (left) handed eigenvectors, and

 For Hermitian matrices the left and right eigenvector matrices are unitary and are therefore Hermitian transposes of each other.

 For Real Symmetric matrices they are also real. Real unitary matrices are sometimes called orthogonal.

Full Diagonalisation

Routines are available to diagonalise real symmetric, Hermitian, tridiagonal and general

matrices. In the first 2 cases this is usually a 2 step process in which the matrix is first

tridiagonalised (transformed to tridiagonal form) and then passed to a routine for diagonalising a

tridiagonal matrix. Routines are available which find only the eigenvalues or both eigenvalues

and eigenvectors. The former are usually much faster than the latter. Usually the eigenvalues of a

Hermitian matrix are returned sorted into ascending order, but this is not always the case (check

the description of the routine). Also the eigenvectors are usually normalised to unity. For non-

Hermitian matrices only the right-handed eigenvectors are returned and are not normalised. In

fact it is not always clear what normalisation means in the general case. Some older FORTRAN

and all C (not C++) and PASCAL routines for complex matrices store the real and imaginary
parts as separate arrays. The eigenvalues and eigenvectors may also be returned in this form.

This is due to 2 facts

 The original routines from Wilkinson and Reinsch (Wilkinson, 1964) were written in

ALGOL, which had no complex type.

 Many FORTRAN compilers (even recent ones) handle complex numbers very

inefficiently, in that they use a function call even for complex addition rather than inline code.

In C++ it is worthwhile checking the complex header file to see how this is implemented.

The Generalised Eigenvalue Problem

A common generalisation of the simple eigenvalue problem involves 2 matrices

(3.29)

This can easily be transformed into a simple eigenvalue problem by multiplying both sides by the inverse of either matrix. This has the disadvantage however that if both matrices are Hermitian the resulting product is not, and the advantages of the symmetry are lost, together, possibly, with some important physics. There is actually a more efficient way of handling the transformation. Using Cholesky factorisation a decomposition of a positive definite matrix B can be carried out such that

B = L L†,   (3.30)

which can be interpreted as a sort of square root of B. Using this we can transform the

problem into the form

(3.31)

(3.32)

Most libraries contain routines for solving the generalised eigenvalue problem for Hermitian

and Real Symmetric matrices using Cholesky Factorisation followed by a standard routine.

Problem 6 contains a simple and informative example.
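As an illustration of the first step of that procedure, a bare-bones Cholesky factorisation of a real symmetric positive definite matrix might look like this in C; this is my own sketch, without the scaling and error handling a library routine would provide.

#include <math.h>

/* Cholesky factorisation B = L L^T of an n x n real symmetric positive
   definite matrix stored row-major in b[].  The lower triangle of b is
   overwritten by L.  Returns -1 if B turns out not to be positive definite.  */
int cholesky(int n, double *b)
{
   for (int j = 0; j < n; j++)
   {
      double sum = b[j*n + j];
      for (int k = 0; k < j; k++)
         sum -= b[j*n + k] * b[j*n + k];
      if (sum <= 0.0) return -1;               /* not positive definite       */
      b[j*n + j] = sqrt(sum);
      for (int i = j+1; i < n; i++)
      {
         sum = b[i*n + j];
         for (int k = 0; k < j; k++)
            sum -= b[i*n + k] * b[j*n + k];
         b[i*n + j] = sum / b[j*n + j];
      }
   }
   return 0;
}

The transformed matrix can then be built by triangular solves with L and passed to a standard symmetric eigenvalue routine, exactly as the library routines mentioned above do internally.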

Partial Diagonalisation

Often only a fraction of the eigenvalues are required, sometimes only the largest or smallest.

Generally if more than 10% are required there is nothing to be gained by not calculating all of

them, as the algorithms for partial diagonalisation are much less efficient per eigenvalue than

those for full diagonalisation. Routines are available to calculate the largest or smallest

eigenvalues and also all eigenvalues in a particular range.

 Sturm Sequence

Sturm Sequence

The Sturm sequence is a very nice algorithm found in most libraries. It finds all the eigenvalues in a given

range and the corresponding eigenvectors. It is also able to find the number of
such eigenvalues very quickly and will return a message if insufficient storage has been allocated for the

eigenvectors. It does require a tridiagonalisation beforehand and is often combined with the Lanczos

algorithm, to deal with sparse matrices.

Sparse Matrices and the Lanczos Algorithm

None of the above diagonalisation procedures make any use of sparsity. A very useful algorithm

for tridiagonalising a sparse matrix is the Lanczos algorithm. This algorithm was developed into

a very useful tool by the Cambridge group of Volker Heine (including, for example, Roger Haydock,

Mike Kelly, John Pendry) in the late 60's and early 70's. A suite of programs based on the

Lanczos algorithm can be obtained by anonymous ftp from the HENSA archive.
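To give the flavour of the method, a bare-bones Lanczos iteration (without the re-orthogonalisation a production code needs) might look like the following in C. The routine names, the fixed maximum dimension and the use of the discretised 1D Laplacian as a stand-in sparse matrix are my own illustrative assumptions.

#include <math.h>

#define NDIM 1000    /* illustrative maximum problem size */

/* Sparse matrix-vector product w = A v.  Here A is simply the discretised
   1D Laplacian (a tridiagonal matrix), standing in for whatever sparse
   matrix is actually of interest.                                            */
static void sparse_matvec(int n, const double *v, double *w)
{
   for (int i = 0; i < n; i++)
   {
      w[i] = -2.0 * v[i];
      if (i > 0)    w[i] += v[i-1];
      if (i < n-1)  w[i] += v[i+1];
   }
}

/* m steps of the Lanczos algorithm starting from a normalised vector v.
   On exit alpha[0..m-1] and beta[1..m-1] hold the diagonal and off-diagonal
   elements of the tridiagonal matrix, which can be passed to a standard
   tridiagonal eigenvalue routine.  In finite precision the Lanczos vectors
   lose orthogonality and should really be re-orthogonalised.                 */
void lanczos(int n, int m, double *v, double *alpha, double *beta)
{
   static double vold[NDIM], w[NDIM];
   for (int i = 0; i < n; i++) vold[i] = 0.0;
   double b = 0.0;                                   /* beta_0                */

   for (int j = 0; j < m; j++)
   {
      sparse_matvec(n, v, w);                        /* w = A v_j             */
      for (int i = 0; i < n; i++) w[i] -= b * vold[i];
      double a = 0.0;
      for (int i = 0; i < n; i++) a += v[i] * w[i];  /* alpha_j = v_j . w     */
      for (int i = 0; i < n; i++) w[i] -= a * v[i];
      alpha[j] = a;

      b = 0.0;
      for (int i = 0; i < n; i++) b += w[i] * w[i];
      b = sqrt(b);                                   /* beta_{j+1} = |w|      */
      if (b == 0.0) return;                          /* invariant subspace    */
      if (j + 1 < m) beta[j+1] = b;
      for (int i = 0; i < n; i++)                    /* shift the vectors     */
      {
         vold[i] = v[i];
         v[i] = w[i] / b;
      }
   }
}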

Problems

1. Poisson's equation is usually differenced in 2 dimensions as

where and . The difference equation can be

written as a matrix equation where and .


Write down the matrix for . Assume the boundaries are at zero potential, i.e.

iff or .

2. The equation in question 1 can be solved by the Gauss-Seidel method. If and

otherwise, find and if for all and , where is the value of after

iterations.

3. Work out the factors, L and U, in the LU decomposition of the matrix,

Hence,

o Solve the simultaneous equations,

for a variety of right hand sides, .


o Evaluate

o Find

4. Show that the Jacobi method (3.14) is stable as long as the eigenvalue of largest modulus of the iteration matrix is less than unity.

5. Find both eigenvalues and the corresponding left and right handed eigenvectors of the matrix

6. The vibrational modes of a certain molecule are described by an equation of motion in the form

where and are the mass and the displacement respectively of the th atom and the real

symmetric matrix describes the interactions between the atoms. Show that this problem can

be represented in the form of a generalised eigenvalue problem: in which the

matrix is positive definite. By considering the transformation , show how to

transform this into a simple eigenvalue problem in which the matrix is real symmetric.
7. Write down the types of matrices which occur in the following problems:

1. A simple discretisation (as in question 1) of Poisson's equation in 1 dimension.

2. The same but in more than 1D.

3. A simple discretisation (as above) of Schrödinger's equation in 3 dimensions.

4. Schrödinger's equation for a molecule written in terms of atomic basis functions.

5. Schrödinger's equation for a crystal at a general point in the Brillouin zone.

Project -- Oscillations of a Crane

A crane manufacturer wants to know how his newly designed crane is going to behave in a high wind.

He wants to know the oscillation frequencies of a steel wire with and without masses on the end. This

project is to write a program which calculates the modes of oscillation and their frequencies.

Analysis

The difference equations for the oscillating wire can be derived by dividing it into N

segments (10 or 20 should be sufficient) each of which can be considered rigid. This would

be the case if the wire consisted of a series of rods connected together.

Let be the displacement of the bottom of the th segment. Let be the angle it makes

with the vertical. Let be the tension in the th segment. Assume that the mass of the

wire is located at the joints of the segments, at each joint. where is the
length of each segment, and is the mass per unit length of the wire. The equation of motion

for the mass at the bottom of the nth segment is,

(3.33)

Assume small oscillations so that .

(3.34)

The mass associated with the end of the wire will be only since there is no

contribution from the (N+1)th segment. Consequently, the equation of motion for this point

is,

(3.35)

where is any mass carried by the crane. In addition the displacement of the top of the wire

is zero, so that in the equation for .

The modes of oscillation are calculated by seeking solutions of the form

. Substituting this into the equations of motion gives,


(3.36)

(3.37)

The specification of the equations is completed by noting that, from the equilibrium

conditions,

(3.38)

where is the acceleration due to gravity.

The equations can be organised in the form,

(3.39)

where is the column vector of displacements, , is a symmetric

tridiagonal matrix and is a diagonal matrix. The problem becomes one of finding the

eigenvalues, , and eigenvectors, , of a generalised eigenvalue problem. The

eigenvectors show how the wire distorts when oscillating in each mode and the eigenvalues

give the corresponding oscillation frequencies. Low frequency modes are more important

than high frequency modes to the crane manufacturer.


The problem can be solved most easily by using a LaPack routine which finds the

eigenvalues and eigenvectors directly. However, before doing so it is necessary to eliminate

the matrix using the same method as discussed in problem 6.

You should investigate both the computational aspects, such as the dependence of the results on the number of segments, N, as well as the physical ones, such as the dependence of the behaviour on the mass carried by the crane. Do your results make physical sense? You might even compare them with a simple

experiment involving a weight on the end of a string.

Project -- Phonons in a Quasicrystal

 Introduction

 The Fibonacci Lattice

 The Model

 The Physics

Introduction

Until a few years ago, it was thought that there were only two different kinds of solids:

crystals, in which the atoms are arranged in a regular pattern with translational symmetry

(there may be defects, of course); and amorphous solids, in which there is no long range

order, although there is some correlation between the positions of nearby atoms. It was also
known that it was impossible for a crystal to have five fold rotational symmetry, since this is

incompatible with translational order.

This was how things stood until 1984, when Shechtman et al. (Phys. Rev. Lett. 53 1951

(1984)), were measuring the X ray diffraction pattern of an alloy of Al and Mn and got a

sharp pattern with clear five fold symmetry. The sharpness of the pattern meant that there had

to be long range order, but the five fold symmetry meant that the solid could not be

crystalline. Shechtman called the material a ``quasicrystal''.

One possible explanation (although this has still not been conclusively established) is that

quasicrystals are three dimensional analogues of Penrose tilings (Scientific American,

January 1977 -- Penrose tilings were known as a mathematical curiosity before quasicrystals

were discovered). Penrose found that you could put together two (or more) different shapes

in certain well defined ways so that they ``tiled'' the plane perfectly, but with a pattern that

never repeated itself. Sure enough, some of these tilings do have five fold symmetries; and

sure enough, there is perfect long range order (although no translational symmetry) so that

the diffraction pattern from a Penrose lattice would be sharp.

The Fibonacci Lattice

The mathematical theory of Penrose tilings gets quite high brow and abstruse, but everything is

very simple in one dimension. Then the two shapes are lines of different lengths, which we shall

call A and C, for Adult and Child (Fibonacci actually studied the dynamics of rabbit
populations). Every year each adult has one child and each child becomes an adult. Let us start

with a single child

C   (3.40)

and then repeatedly apply the ``generation rule,''

A -> AC,   C -> A   (3.41)

to obtain longer and longer sequences. The first few sequences generated are,

C,  A,  AC,  ACA,  ACAAC,  ACAACACA, ...   (3.42)

Note the interesting property that each generation is the ``sum'' of the 2 previous generations: each sequence is the previous sequence followed by the one before that.

In a one dimensional Fibonacci quasicrystal, the longs and shorts could represent the interatomic

distances; or the strengths of the bonds between the atoms; or which of two different types of

atom is at that position in the chain.
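A short sketch of how the chain might be generated in C (the maximum length, the routine name and the storage of the letters as literal characters are my own choices):

#include <stdio.h>
#include <string.h>

#define MAXLEN 100000

/* Build a Fibonacci chain by repeated application of the generation rule
   A -> AC, C -> A, starting from a single child C.  The caller must keep the
   number of generations small enough for the chain to fit in MAXLEN.          */
int fibonacci_chain(int generations, char *chain)
{
   char next[MAXLEN];
   int len = 1;
   chain[0] = 'C';
   for (int g = 0; g < generations; g++)
   {
      int m = 0;
      for (int i = 0; i < len; i++)
         if (chain[i] == 'A') { next[m++] = 'A'; next[m++] = 'C'; }
         else                 { next[m++] = 'A'; }
      memcpy(chain, next, m);
      len = m;
   }
   return len;            /* a Fibonacci number: 1, 1, 2, 3, 5, 8, ...        */
}

int main(void)
{
   char chain[MAXLEN];
   int len = fibonacci_chain(10, chain);
   printf("%.*s\n(length %d)\n", len, chain, len);
   return 0;
}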

The Model

This project is to write a program to work out the phonons (normal mode vibrations) of a

Fibonacci quasicrystal with two different sorts of atom. The ``adults'' and ``children'' are the
masses, and , of the two kinds of atom. Since we are only interested in small vibrations,

we may represent the forces between the atoms as simple springs (spring constant ) and we

will assume that all the springs are identical ( is independent of the types of the atoms at the

ends of the spring). The equation of motion of such a system may be written as

(3.43)

where and are the displacement and mass of the th atom. If the chain contains

atoms, you will then have a set of coupled second order ordinary differential equations.

The choice of boundary conditions for the atoms at either end of the chain is up to you and

should not make much difference when the chain is long enough: you could fix the atoms at

either end to immovable walls, , or you could leave them free by removing the

springs at the 2 ends, . (Periodic boundary conditions -- when the chain of

atoms is looped around and joined up in a ring -- are convenient for analytic work but not so

good for numerical work in this case. Why?)

The equations (3.43) are linear algebraic equations which may be cast as a tridiagonal matrix

eigenproblem. The eigenvalues of this problem give the squared vibrational frequencies, and the eigenvectors give the corresponding normal mode coordinates.


Note, however, that the presence of the masses in (3.43) means that the problem is in the generalised form. As discussed in the notes and in problem 6 this can be transformed into the normal eigenvalue problem by scaling each displacement by the square root of the corresponding mass and multiplying the nth equation by the inverse of that square root. I suggest you solve the eigenproblem by using a NAG

or LaPack routine for the eigenvalues only. There should be no problem in dealing with chains of

several hundred atoms or more.

The Physics

The idea is to investigate the spectrum (the distribution of the eigenvalues) as the ratio of the two masses changes. Your program should list the eigenvalues in ascending order and then plot a graph of eigenvalue against position in the list. When the two masses are equal, the crystal is ``perfect'', the graph is smooth, and you should be able to work out all the eigenvalues analytically (easiest when using periodic boundary conditions). But when the two masses begin to differ, the graph becomes a ``devil's staircase'' with all sorts of fascinating fractal structure. Try to understand the behaviour at small frequencies (when the wavelength is long and the waves are ``acoustic'') and the limits of very large and very small mass ratio.

Another interesting thing to do is to plot values of the mass ratio (on one axis) against the vibrational frequencies (on the other axis). Choose a value of the mass ratio, work out all the frequencies, and put a point on the graph for each. The graph now has a line of points parallel to the frequency axis at the given mass ratio. Do this for a number of values of the mass ratio and see how the spectrum develops as it changes. Again, you should try to understand the behaviour when the frequency tends to zero,

and the large and small mass ratio limits.

If you have time it is interesting to investigate the fractal structure by focusing in on a single

peak for a short sequence and investigating how it splits when you add another ``generation''.

You should find that the behaviour is independent of the number of generations at which you

start.
Monte Carlo Methods and Simulation

 Monte Carlo

o Random Number Generators

 Monte-Carlo Integration

 The Metropolis Algorithm


o The Ising model

o Thermodynamic Averages

 Quantum Monte-Carlo

 Molecular Dynamics

o General Principles

 Problems

 Project -- The Ising Model

o Introduction

o The Model and Method

o The Physics

 Project -- Quantum Monte Carlo Calculation

o Introduction

o The Method

o The Physics
Monte Carlo

The term Monte-Carlo refers to a group of methods in which physical or mathematical problems are

simulated by using random numbers. Sometimes this is done at a very simple level. For example,

radiation damage in humans has been studied by simulating the firing of random

particles into human tissue and randomly carrying out the various possible processes. After a lot of

averaging one arrives at the likely damage due to different forms of incident radiation. Similar methods

are used to simulate the tracks left in particle physics experiments. Here we will concentrate on 3

different types of calculations using random numbers.

 Random Number Generators

Random Number Generators

Before discussing the uses of random numbers it is useful to have some idea of how random

numbers are generated on a computer. Most methods depend on a chaotic sequence. The
commonest is the multiplicative congruential method which relies on prime numbers. Consider

the sequence

x_{n+1} = a x_n (mod m)   (4.1)

where mod refers to the remainder on division by m, and a and m are large integers which have no common factors (often both are prime numbers). This process generates all integers less than m in an apparently random order. After all the integers have been generated the series will repeat itself. Thus one important question to ask about any random number generator is how frequently it repeats itself.
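A minimal sketch of such a generator in C is given below. The routine name uniform_random and the use of the well known Park-Miller constants a = 16807, m = 2^31 - 1 are my own choices; any library generator will be considerably more sophisticated.

/* Multiplicative congruential generator x_{n+1} = (a x_n) mod m using the
   Park-Miller constants.  The seed must start between 1 and m-1; the 64-bit
   arithmetic avoids overflow in the product a*seed.                          */
static long long seed = 1;

double uniform_random(void)
{
   const long long a = 16807, m = 2147483647;   /* m = 2^31 - 1               */
   seed = (a * seed) % m;
   return (double)seed / (double)m;             /* uniform deviate in (0,1)   */
}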

It is worth noting that the sequence is completely deterministic: if the same initial seed, is

chosen the same sequence will be generated. This property is extremely useful for debugging

purposes, but can be a problem when averaging has to be done over several runs. In such cases it

is usual to initialise the seed from the system clock.

Routines exist which generate a sequence of integers, as described, or which generate floating

point numbers in the range 0 to 1. Most other distributions can be derived from these. There are

also very efficient methods for generating a sequence of random bits (Press et al., 1992). The

Numerical Algorithms Group library contains routines to generate a wide range of distributions.

A difficult but common case is the Gaussian distribution. One method simply averages several uniform random numbers and relies on the central limit theorem. Another method uses the fact that a distribution of complex numbers with both real and imaginary parts Gaussian distributed can also be represented as a distribution of amplitude and phase in which the amplitude has a Poisson distribution and the phase is uniformly distributed between 0 and 2π.
As an example of generating another distribution we consider the Poisson case,

(4.2)

Then, if and are the probability distributions of and respectively,

(4.3)

must be true for all , as the probability of finding any must be the same as that for

finding any . It follows that

(4.4)

If x is uniformly distributed between 0 and 1, i.e. its probability distribution is constant, then

(4.5)

Hence, a Poisson distribution is generated by taking (minus) the logarithm of numbers drawn from a uniform distribution.
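In code this transformation is a single line. The sketch below assumes the uniform_random() routine from the previous section (which never returns exactly 0) and that the target distribution is the simple exp(-y) form; the routine name is my own.

#include <math.h>

double uniform_random(void);         /* e.g. the generator sketched earlier   */

/* Generate y distributed as exp(-y) on (0,infinity) from a uniform deviate.  */
double poisson_random(void)
{
   return -log(uniform_random());
}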

Monte-Carlo Integration

Often we are faced with integrals which cannot be done analytically. Especially in the case of

multidimensional integrals the simplest methods of discretisation can become prohibitively expensive.

For example, the error in a trapezium rule calculation of a d-dimensional integral falls as N^(-2/d), where N is the number of different values of the integrand used. In a Monte-Carlo calculation the error falls as N^(-1/2) independently of the dimension. Hence for d > 4 Monte-Carlo integration will usually converge faster.

We consider the expression for the average of a statistic f(x), where x is a random number distributed according to a distribution p(x),

<f> = ∫ f(x) p(x) dx,   (4.6)

which is just a generalisation of the well known results for the mean and the mean square, where we are using the angle bracket notation to denote averaging. Now consider an integral of the sort which might arise while using

Laplace transforms.

(4.7)

This integral can be evaluated by generating a set of random numbers, x_i, from the Poisson distribution and calculating the mean of the remaining factor of the integrand at those points, as

(4.8)

The error in this mean is evaluated as usual by considering the corresponding standard error of

the mean

(4.9)
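A sketch of the whole procedure in C is given below, with the generators repeated so that the example is self-contained. The choice f(x) = cos(x) is purely illustrative (the integral of cos(x) exp(-x) from 0 to infinity is 1/2) and is not the integral referred to in (4.7).

#include <stdio.h>
#include <math.h>

/* Generators repeated from the earlier sketches.                             */
static long long seed = 12345;
static double uniform_random(void)
{
   const long long a = 16807, m = 2147483647;
   seed = (a * seed) % m;
   return (double)seed / (double)m;
}
static double poisson_random(void) { return -log(uniform_random()); }

int main(void)
{
   const int n = 1000000;
   double sum = 0.0, sum2 = 0.0;

   for (int i = 0; i < n; i++)
   {
      double x = poisson_random();        /* x drawn with the exp(-x) weight  */
      double f = cos(x);                  /* purely illustrative integrand    */
      sum  += f;
      sum2 += f * f;
   }
   double mean = sum / n;                             /* estimate of integral */
   double err  = sqrt((sum2 / n - mean * mean) / n);  /* standard error (4.9) */
   printf("integral = %f +/- %f (exact value 0.5)\n", mean, err);
   return 0;
}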
As mentioned earlier, Monte-Carlo integration can be particularly efficient in the case of multi-

dimensional integrals. However this case is particularly susceptible to the flaws in random

number generators. It is a common feature that when a random set of coordinates in a d-dimensional space is generated (i.e. a set of d random numbers), the resulting distribution

contains (hyper)planes on which the probability is either significantly higher or lower than

expected.

The Metropolis Algorithm

In statistical mechanics we commonly want to evaluate thermodynamic averages of the form

(4.10)

where E_i is the energy of the system in state i and beta = 1/kT. Such problems can be solved using

the Metropolis et al. (1953) algorithm.

Let us suppose the system is initially in a particular state i and we change it to another state j. The detailed balance condition demands that in equilibrium the flow from i to j must be balanced by the flow from j to i. This can be expressed as

P_i W_ij = P_j W_ji,   (4.11)

where P_i is the probability of finding the system in state i and W_ij is the probability (or rate) that a system in state i will make a transition to state j. (4.11) can be rearranged to read

(4.12)

(4.13)

Generally the right-hand-side of this relation is known and we want to generate a set of states which obey the distribution P_i. This can be achieved by choosing the transition rates such that

(4.14)

In practice a random number, r, is chosen uniformly between 0 and 1 and the system is moved to state j only if r is less than the ratio of the probabilities of the two states; a move to a more probable state is therefore always accepted.

This method is not the only way in which the condition can be fulfilled, but it is by far the most

commonly used.
An important feature of the procedure is that it is never necessary to evaluate the partition

function, the denominator in (4.10) but only the relative probabilities of the different states. This

is usually much easier to achieve as it only requires the calculation of the change of energy from

one state to another.

Note that, although we have derived the algorithm in the context of thermodynamics, its use is by

no means confined to that case. See for example the quantum Monte-Carlo methods.

Subsections

 The Ising model

 Thermodynamic Averages

The Ising model

As a simple example of the Metropolis Method we consider the Ising model of a ferromagnet

(4.15)

where J is a positive energy, the spins take the values ±1, and i and j are nearest neighbours on a lattice. In this

case we change from one state to another by flipping a single spin and the change in energy

is simply

(4.16)
where the sum is only over the nearest neighbours of the flipped spin.

The simulation proceeds by choosing a spin (usually at random) and testing whether the

energy would be increased or decreased by flipping the spin. If it is decreased the rules say

that the spin should definitely be flipped. If, on the other hand, the energy is increased, a

uniform random number, r, between 0 and 1 is generated and compared with the Boltzmann factor exp(-ΔE/kT). If r is smaller the spin is flipped, otherwise the spin is unchanged.
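A sketch of a single Metropolis step for this model in C is given below. The function name, the simple ±1 integer storage with modulo arithmetic for the periodic boundaries, and the use of the standard library rand() are assumptions of mine rather than a prescribed implementation.

#include <math.h>
#include <stdlib.h>

/* One Metropolis step for the Ising model on an L x L square lattice with
   periodic boundaries.  spin[] holds values +1/-1 and beta_J is J/kT.
   Returns 1 if the chosen spin was flipped, 0 otherwise.                     */
int metropolis_step(int L, int *spin, double beta_J)
{
   int n = L * L;
   int k = rand() % n;                       /* site chosen at random         */
   int i = k / L, j = k % L;
   int sum = spin[((i+1)%L)*L + j] + spin[((i-1+L)%L)*L + j]
           + spin[i*L + (j+1)%L]  + spin[i*L + (j-1+L)%L];
   double dE_over_J = 2.0 * spin[k] * sum;   /* energy change in units of J   */

   if (dE_over_J <= 0.0 ||
       rand() / (RAND_MAX + 1.0) < exp(-beta_J * dE_over_J))
   {
      spin[k] = -spin[k];
      return 1;
   }
   return 0;
}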

Further information can be found in the Ising Model project.

Thermodynamic Averages

To average over a thermodynamic quantity it suffices to average over the values for the sequence

of states generated by the Metropolis algorithm. However it is usually wise to carry out a number

of Monte-Carlo steps before starting to do any averaging. This is to guarantee that the system is

in thermodynamic equilibrium while the averaging is carried out.

The sequence of random changes is often considered as a sort of time axis. In practice we think

about, for example, the time required to reach equilibrium. Sometimes, however, the transition rate

becomes very low and the system effectively gets stuck in a non-equilibrium state. This is often

the case at low temperatures when almost every change causes an increase in energy.
Quantum Monte-Carlo

The term Quantum Monte-Carlo does not refer to a particular method but rather to any method using

Monte-Carlo type methods to solve quantum (usually many-body) problems. As an example consider

the evaluation of the energy of a trial wave function where is a 3N dimensional position

coordinate of N particles,

(4.17)
where is the Hamiltonian operator. This can be turned into an appropriate form for Monte-Carlo

integration by rewriting as

(4.18)

such that the quantity in braces has the form of a probability distribution. This integral can now easily be evaluated by Monte-Carlo integration. Typically a sequence of configurations R is generated using the Metropolis algorithm, so that it is not even necessary to normalise the trial wave function, and the quantity in square brackets is averaged over those configurations.

This method, variational quantum Monte-Carlo, presupposes we have a good guess for the wave

function and want to evaluate an integral over it. It is only one of several different techniques

which are referred to as quantum Monte Carlo. Others include, Diffusion Monte-Carlo, Green's

function Monte-Carlo and World Line Monte-Carlo.


Molecular Dynamics

An alternative approach to studying the behaviour of large systems is simply to solve the equations of

motion for a reasonably large number of molecules. Usually this is done by using one of the methods

described for Ordinary Differential Equations to solve the coupled equations of motion for the atoms or

molecules to be described. Molecular solids in particular can be studied by considering the molecules as

rigid objects and using phenomenological classical equations of motion to describe the interaction

between them. As a simple example the Lennard-Jones potential,

V(r) = 4ε [ (σ/r)^12 - (σ/r)^6 ],   (4.19)

has been successfully used to describe the thermodynamics of noble gases.

 General Principles

General Principles

Consider a set of particles interacting through a 2-body potential, . The equations of

motion can be written in the form

(4.20)

(4.21)
In practice it is better to use the scalar force to avoid unnecessary numerical

differentiation.

A common feature of such problems is that the time derivative of one variable only involves

the other variable, as with the positions and velocities in the above equations of motion. In such circumstances a

leap-frog like method suggests itself as the most appropriate. Hence we write

(4.22)

(4.23)

This method has the advantage of simplicity as well as the merit of being properly

conservative.
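A sketch of one leap-frog time step in C for a 1D chain of particles is given below; the force routine, the unit spring constant stand-in and the half-step offset of the velocities are my own illustrative assumptions rather than part of any specific scheme in the notes.

/* Stand-in force: nearest-neighbour springs with unit spring constant, as in
   a simple harmonic chain.  In a real simulation this would come from the
   pair potential of interest (e.g. Lennard-Jones).                           */
static double force(int i, int n, const double *r)
{
   double f = 0.0;
   if (i > 0)    f += r[i-1] - r[i];
   if (i < n-1)  f += r[i+1] - r[i];
   return f;
}

/* One leap-frog step for n particles in 1D: the velocities v are stored at
   the half-steps t + dt/2 and the positions r at the full steps t.           */
void leapfrog_step(int n, double *r, double *v, const double *m, double dt)
{
   for (int i = 0; i < n; i++)
      v[i] += force(i, n, r) / m[i] * dt;   /* v(t+dt/2) = v(t-dt/2) + a dt   */
   for (int i = 0; i < n; i++)
      r[i] += v[i] * dt;                    /* r(t+dt) = r(t) + v(t+dt/2) dt  */
}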

The temperature is defined from the kinetic energy of N particles via

(4.24)

where the averages are taken with respect to time.

As an example of another thermodynamic quantity consider the specific heat at constant

volume. This can be calculated by changing the total energy by multiplying all the velocities by a constant amount and running for some time to determine the new temperature. Note that, as the temperature is defined in terms of a time average, multiplying all the velocities by a constant does not necessarily imply a simple rescaling of the temperature. At a 1st order phase transition, for example, the system might equilibrate to the same

temperature as before with the additional energy contributing to the latent heat.

When the specific heat is known for a range of temperatures it becomes possible, at least in

principle, to calculate the entropy from the relationship

(4.25)

The pressure is rather more tricky as it is defined in terms of the free energy, ,

using

(4.26)

and hence has a contribution from the entropy as well as the internal energy. Nevertheless

methods exist for calculating this.

It is also possible to define modified equations of motion which, rather than conserving

energy and volume, conserve temperature or pressure. These are useful for describing

isothermal or isobaric processes.


Note that for a set of mutually attractive particles it may not be necessary to constrain the

volume but for mutually repulsive particles it certainly is necessary.

Problems

1. Show that (4.1) has the property that it generates all the integers from to in an apparently

random order if and , and that the sequence repeats itself thereafter.

2. A certain statistical quantity is distributed according to

Given a function which generates random numbers uniformly distributed between & , show

how to transform these into the distribution .

3. Suggest a Monte-Carlo integration procedure for the integral

4. Describe how you would use the Metropolis algorithm to generate a set of random numbers

distributed according to the integrand of problem 3.


The following are essay type questions which are provided as examples of the sorts of questions which

might arise in an examination of Monte-Carlo and related methods.

1. In an ionic conductor, such as , there are several places available on the lattice for each

ion, and the ions can move relatively easily between these sites, subject to the Coulomb

repulsion between the ions. Describe how you would use the Metropolis algorithm to simulate

such a system.

2. Some quantum mechanics textbooks suggest that the ground state wave function for a

Hydrogen atom is

Describe how you would use the variational quantum Monte-Carlo procedure to calculate the

ground state energy for a range of values of and hence how you would provide estimates of

the true ground state energy and the value of .

3. Describe how you would simulate the melting (or sublimation) of Argon in vacuo using the

molecular dynamics method. Pay particular attention to how you would calculate the specific

and latent heats.


Project -- The Ising Model

 Introduction

 The Model and Method

 The Physics

Introduction

The Ising model for a ferromagnet is not only a very simple model which has a phase transition,

but it can also be used to describe phase transitions in a whole range of other physical systems.

The model is defined using the equation

(4.27)

where the indices i and j designate points on a lattice and each spin takes the values ±1. The various different physical systems differ in the definition and sign of the various coupling constants J.

The Model and Method

Here we will consider the simple case of a 2 dimensional square lattice with interactions only

between nearest neighbours. In this case

(4.28)
where j is only summed over the 4 nearest neighbours of i.

This model can be studied using the Metropolis method as described in the notes, where the state

can be changed by flipping a single spin. Note that the change in energy due to flipping the th

spin from to is given by

(4.29)

The only quantity which actually occurs in the calculation is

(4.30)

and this can only take one of five different values given by the number of neighbouring spins.

Hence it is sensible to store these in a short array before starting the calculation. Note also that

there is really only 1 parameter in the model, the ratio J/kT, so that it would make sense to write your program in terms of this single parameter rather than J and T separately.

The calculation should use periodic boundary conditions, in order to avoid spurious effects due

to boundaries. There are several different ways to achieve this. One of the most efficient is to

think of the system as a single line of spins wrapped round a torus. This way it is possible to
avoid a lot of checking for the boundary. For an N x N system of spins define an array of 2N² elements using the shortest sensible variable type: char in C(++). It is easier to use 0 and 1 rather than -1 and +1 for the two spin states, as this makes the calculation of the number of neighbouring spins easier. In order to map between spins in the 2d space and in the 1d array the following mapping can be used.

(4.31)

where the 2nd N² elements of the array are always maintained equal to the 1st N². This way it is never necessary to check whether one of the neighbours is over the edge. It is important to remember to change both copies of a spin whenever it is changed.

The calculation proceeds as follows:

1. Initialise the spins, either randomly or aligned.

2. Choose a spin to flip. It is better to choose a spin at random rather than systematically as

systematic choices can lead to spurious temperature gradients across the system.

3. Decide whether to flip the spin by using the Metropolis condition (see notes).

4. If the spin is to be flipped, do so but remember to flip its mirror in the array.

5. Update the energy and magnetisation.

6. Add the contributions to the required averages.

7. Return to step 2 and repeat.


The Physics

In general it is advisable to run the program for some time to allow it to reach equilibrium

before trying to calculate any averages. Close to a phase transition it is often necessary to run

for much longer to reach equilibrium. The behaviour of the total energy during the run is

usually a good guide to whether equilibrium has been reached. The total energy, , and the

magnetisation can be calculated from (4.27) and

(4.32)

It should be possible to calculate these as you go along, by accumulating the changes rather

than by recalculating the complete sum after each step. A lattice should suffice for

most purposes and certainly for testing, but you may require a much bigger lattice close to a

transition.

A useful trick is to use the final state at one temperature as the initial state for the next

slightly different temperature. That way the system won't need so long to reach equilibrium.

It should be possible to calculate the specific heat and the magnetic susceptibility. The

specific heat could be calculated by differentiating the energy with respect to temperature.

This is a numerically questionable procedure however. Much better is to use the relationship

C = ( <E²> - <E>² ) / (k T²).   (4.33)

Similarly, in the paramagnetic state, the susceptibility can be calculated using

χ = ( <M²> - <M>² ) / (k T),   (4.34)

where the averages are over different states, i.e. they can be calculated by

averaging over the different Metropolis steps. Both these quantities are expected to diverge at

the transition, but the divergence will tend to be rounded off due to the small size of the

system. Note however that the fact that (4.33) & (4.34) have the form of variances, and that

these diverge at the transition, indicates that the average energy and magnetisation will be

subject to large fluctuations around the transition.
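In code the accumulation amounts to keeping running sums of E, E², M and M²; a small sketch (the variable and function names are my own, and the temperature is taken in units of J/k):

/* Running sums, accumulated after every Metropolis step (whether or not the
   spin was flipped).                                                         */
static double sumE = 0.0, sumE2 = 0.0, sumM = 0.0, sumM2 = 0.0;
static long   nsamples = 0;

void accumulate(double E, double M)
{
   sumE += E;   sumE2 += E * E;
   sumM += M;   sumM2 += M * M;
   nsamples++;
}

/* At the end of the run (temperature T in units of J/k):
      Cv  = (sumE2/nsamples - (sumE/nsamples)*(sumE/nsamples)) / (T*T);
      chi = (sumM2/nsamples - (sumM/nsamples)*(sumM/nsamples)) / T;           */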

Finally a warning. A common error made in such calculations is to add a contribution to the

averages only when a spin is flipped. In fact this is wrong as the fact that it isn't flipped

means that the original state has a higher probability of occupation.


Project -- Quantum Monte Carlo Calculation

 Introduction

 The Method

 The Physics

Introduction
This project is to use the variational quantum Monte Carlo method to calculate the ground

state energy of the He atom. The He atom is a two electron problem which cannot be solved

analytically and so numerical methods are necessary. Quantum Monte Carlo is one of the

more interesting of the possible approaches (although there are better methods for this

particular problem).

The Method

The Schrödinger equation for the He atom in atomic units is,

(4.35)

where and are the position vectors of the two electrons, , , and

. Energies are in units of the Hartree energy (1 Hartree = 2 Rydbergs) and

distances are in units of the Bohr radius. The ground state spatial wavefunction is symmetric

under exchange of the two electrons (the required antisymmetry is taken care of by the spin part

of the wavefunction, which we can forget about otherwise).

The expression for the energy expectation value of a particular trial wavefunction, ,

is,

(4.36)
In the variational Monte Carlo method, this equation is rewritten in the form,

(4.37)

where

(4.38)

is interpreted as a probability density which is sampled using the Metropolis algorithm. Note that

the Metropolis algorithm only needs to know ratios of the probability density at different points,

and so the normalisation integral,

(4.39)

always cancels out and does not need to be evaluated.

The mean of the values of the ``local energy'',

(4.40)
at the various points along the Monte Carlo random walk then gives an estimate of the energy

expectation value. By the variational principle, the exact energy expectation value is always

greater than or equal to the true ground state energy; but the Monte Carlo estimate has statistical

errors and may lie below the true ground state energy if these are large enough. Anyway, the

better the trial wavefunction, the closer to the true ground state energy the variational estimate

should be.

In this project you are given a possible trial wavefunction,

(4.41)

Use the variational Monte Carlo technique to calculate variational estimates of the true ground

state energy.

Before you start programming, you will need the analytic expressions for the local energy. This

involves some nasty algebra, but the answer is,

(4.42)

where r1 and r2 are the coordinates of the 2 electrons relative to the nucleus.

The Monte Carlo moves can be made by generating random numbers (use a library routine to do

this) and adding them to the electron coordinates. I suggest that you update all six electron
position coordinates each move, and so you will need six random numbers

each time. The accepted lore is that the Metropolis algorithm is most efficient when the step size

is chosen to keep the acceptance probability close to 0.5. However, the method should work in

principle no matter what the step size and you should try a few different step sizes to confirm

that this is indeed the case. The starting positions of the two electrons can be chosen randomly,

but remember that the Metropolis algorithm only samples the probability distribution exactly in

the limit as the number of moves tends to infinity. You will therefore have to throw away the

results from the moves near the beginning of the run and only start accumulating the values of

the local energy once things have settled down. You should experiment to find out how many

moves you need to throw away.

The statistical errors in Monte Carlo calculations decrease like 1/√N, where N is the total number of moves after the initial equilibration period. The errors therefore improve only slowly

as the length of the run is increased. You will not be able (and should not attempt) to attain great

accuracy. However, you should think hard about the magnitude of the statistical errors involved.

Calculating the variance of the values in the list of energies accumulated during the random walk

is easy and you should certainly do it.
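A skeleton of the random walk in C is sketched below. The stand-in trial wavefunction used here is a bare product of hydrogen-like 1s orbitals with effective charge 2; it is NOT the trial function (4.41) given in the project, but it makes the skeleton compile and run (for this choice the local energy works out to -4 + 1/r12 exactly). The step size and run lengths are also arbitrary illustrative choices.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double radius(const double r[3])
{
   return sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2]);
}

/* Stand-in |Psi|^2 and local energy for Psi = exp(-2 r1 - 2 r2); replace with
   routines implementing (4.41) and (4.42) for the actual project.            */
static double psi_squared(const double r[6])
{
   return exp(-4.0 * (radius(r) + radius(r + 3)));
}

static double local_energy(const double r[6])
{
   double dx = r[0]-r[3], dy = r[1]-r[4], dz = r[2]-r[5];
   double r12 = sqrt(dx*dx + dy*dy + dz*dz);
   return -4.0 + 1.0 / r12;
}

static double uniform(void) { return rand() / (RAND_MAX + 1.0); }

int main(void)
{
   double r[6], rnew[6];
   const double step = 0.5;                      /* tune for ~50% acceptance  */
   const int nequil = 10000, nsteps = 100000;    /* arbitrary run lengths     */
   double sumE = 0.0, sumE2 = 0.0;

   for (int k = 0; k < 6; k++) r[k] = uniform() - 0.5;      /* random start   */

   for (int n = 0; n < nequil + nsteps; n++)
   {
      for (int k = 0; k < 6; k++)                /* move all 6 coordinates    */
         rnew[k] = r[k] + step * (uniform() - 0.5);
      if (uniform() < psi_squared(rnew) / psi_squared(r))   /* Metropolis     */
         for (int k = 0; k < 6; k++) r[k] = rnew[k];

      if (n >= nequil)                           /* average after equilibration */
      {
         double e = local_energy(r);
         sumE += e;  sumE2 += e * e;
      }
   }
   double mean = sumE / nsteps, var = sumE2 / nsteps - mean * mean;
   printf("E = %f Hartree, naive error = %f\n", mean, sqrt(var / nsteps));
   return 0;
}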

The Physics

What are your best estimates of the ground state energy and the corresponding statistical error?

Can you see any physics behind the form of the trial wavefunction?

You might also like