David Borthwick
Introduction
to Partial
Differential
Equations
Universitext
Series editors
Sheldon Axler
San Francisco State University, San Francisco, CA, USA
Vincenzo Capasso
Università degli Studi di Milano, Milan, Italy
Carles Casacuberta
Universitat de Barcelona, Barcelona, Spain
Angus MacIntyre
Queen Mary University of London, London, UK
Kenneth Ribet
University of California, Berkeley, CA, USA
Claude Sabbah
CNRS, École Polytechnique, Centre de mathématiques, Palaiseau, France
Endre Süli
University of Oxford, Oxford, UK
Wojbor A. Woyczyński
Case Western Reserve University, Cleveland, OH, USA
Thus as research topics trickle down into graduate-level teaching, first textbooks
written for new, cutting-edge courses may make their way into Universitext.
Introduction to Partial
Differential Equations
David Borthwick
Department of Mathematics
and Computer Science
Emory University
Atlanta, GA
USA
Partial differential equations (PDE) first appeared over 300 years ago, and the vast
scope of the theory and applications that have since developed makes it challenging
to give a reasonable introduction in a single semester. The modern mathematical
approach to the subject requires considerable background in analysis, including
topics such as metric space topology, measure theory, and functional analysis.
This book is intended for an introductory course for students who do not nec-
essarily have this analysis background. Courses taught at this level traditionally
focus on some of the more elementary topics, such as Fourier series and simple
boundary value problems. This approach risks giving students a somewhat narrow
and outdated view of the subject.
My goal here is to give a balanced presentation that includes modern methods,
without requiring prerequisites beyond vector calculus and linear algebra. To allow
for some of the more advanced methods to be reached within a single semester, the
treatment is necessarily streamlined in certain ways. Concepts and definitions from
analysis are introduced only as they will be needed in the text, and the reader is
asked to accept certain fundamental results without justication. The emphasis is
not on the rigorous development of analysis in its own right, but rather on the role
that tools from analysis play in PDE applications.
The text generally focuses on the most important classical PDE, which are the
wave, heat, and Laplace equations. Nonlinear equations are discussed to some
extent, but this coverage is limited. (Even at a very introductory level, the nonlinear
theory merits a full course to itself.)
I have tried to stress the interplay between modeling and mathematical analysis
wherever possible. These connections are vital to the subject, both as a source of
problems and as an inspiration for the development of methods.
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Partial Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Example: d'Alembert's Wave Equation . . . . . . . . . . . . . . . . . . . 2
1.3 Types of Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Well Posed Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.5 Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1 Real Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Complex Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 Domains in Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4 Differentiability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 Ordinary Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.6 Vector Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.7 Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3 Conservation Equations and Characteristics . . . . . . . . . . . . . . . . . . . 25
3.1 Model Problem: Oxygen in the Bloodstream . . . . . . . . . . . . . . . 25
3.2 Lagrangian Derivative and Characteristics . . . . . . . . . . . . . . . . . 27
3.3 Higher-Dimensional Equations . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.4 Quasilinear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.5 Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4 The Wave Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.1 Model Problem: Vibrating String . . . . . . . . . . . . . . . . . . . . . . . . 45
4.2 Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.3 Boundary Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.4 Forcing Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.5 Model Problem: Acoustic Waves . . . . . . . . . . . . . . . . . . . . . . . . 59
4.6 Integral Solution Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.7 Energy and Uniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.8 Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5 Separation of Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.1 Model Problem: Overtones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.2 Helmholtz Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.3 Circular Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.4 Spherical Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.5 Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6 The Heat Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
6.1 Model Problem: Heat Flow in a Metal Rod . . . . . . . . . . . . . . . . 97
6.2 Scale-Invariant Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.3 Integral Solution Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.4 Inhomogeneous Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
6.5 Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
7 Function Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7.1 Inner Products and Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7.2 Lebesgue Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
7.3 Lp Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
7.4 Convergence and Completeness . . . . . . . . . . . . . . . . . . . . . . . . . 119
7.5 Orthonormal Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
7.6 Self-adjointness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
7.7 Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
8 Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
8.1 Series Solution of the Heat Equation . . . . . . . . . . . . . . . . . . . . . 131
8.2 Periodic Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
8.3 Pointwise Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
8.4 Uniform Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.5 Convergence in L2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
8.6 Regularity and Fourier Coefficients . . . . . . . . . . . . . . . . . . . . . . 145
8.7 Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
9 Maximum Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
9.1 Model Problem: The Laplace Equation . . . . . . . . . . . . . . . . . . . . 155
9.2 Mean Value Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
9.3 Strong Principle for Subharmonic Functions . . . . . . . . . . . . . . . . 165
9.4 Weak Principle for Elliptic Equations . . . . . . . . . . . . . . . . . . . . . 167
9.5 Application to the Heat Equation . . . . . . . . . . . . . . . . . . . . . . . . 170
9.6 Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
10 Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
10.1 Test Functions and Weak Derivatives . . . . . . . . . . . . . . . . . . . . . 177
10.2 Weak Solutions of Continuity Equations . . . . . . . . . . . . . . . . . . 180
10.3 Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
10.4 Sobolev Regularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Notations
:= Equal by definition
≐ Equal except on a set of measure zero
∼ Asymptotic to (ratio approaches 1)
≍ Comparable to (ratio bounded above and below)
‖·‖ Norm
‖·‖_p L^p norm
⟨·,·⟩ Inner product (L^2 by default)
(·,·) Distributional pairing
A_n Volume of the unit sphere S^{n−1} ⊂ R^n
B^n Open unit ball {|x| < 1} ⊂ R^n
B(x_0; R) Open ball {|x − x_0| < R} ⊂ R^n
c_k Fourier coefficient
C Complex numbers
C^m(Ω) m-times continuously differentiable functions Ω → C
C^m(Ω; R) Real-valued C^m functions
C^m(Ω̄) C^m functions that admit extension across ∂Ω
C^m_cpt(Ω) C^m functions with compact support in Ω
χ_A Characteristic (or indicator) function of a set A
D Unit disk in R^2
D Multivariable derivative
δ_x Dirac delta function
∂Ω Boundary of Ω
D′(Ω) Distributions on Ω
D[f] Dirichlet energy
dS Surface integral element
E Energy of a solution
F Fourier transform
Γ Gamma function
∇ Gradient operator
H^m Sobolev space, functions with weak derivatives in L^2 to order m
H^m_loc Local Sobolev functions
H^1_0 Closure of C^1_cpt in H^1
H_t Heat kernel
Δ Laplacian operator on R^n
L^p p-th power integrable functions Ω → C
L^1_loc Locally integrable functions Ω → C
ℓ^p Discrete L^p space
N Natural numbers {1, 2, 3, ...}
N_0 Non-negative integers {0, 1, 2, 3, ...}
ν Outward unit normal
Ω Domain (open, connected set) in R^n
Ω̄ Closure of Ω (Ω ∪ ∂Ω)
Φ Fundamental solution
r |x| in R^n
R Real numbers
R Rayleigh quotient
S^{n−1} Unit sphere {|x| = 1} ⊂ R^n
S(R^n) Schwartz functions (smooth functions with rapid decay)
S′(R^n) Tempered distributions
S_n Partial sum of Fourier series
T R/2πZ
W_t Wave kernel
Z Integers {..., −1, 0, 1, ...}
Chapter 1
Introduction
Continuous phenomena, such as wave propagation or fluid flow, are generally mod-
eled with partial differential equations (PDE), which express relationships between
rates of change with respect to multiple independent variables. In contrast, phenom-
ena that can be described with a single independent variable, such as the motion
of a rigid body in classical physics, are modeled by ordinary differential equations
(ODE).
A general PDE for a function u has the form
F( x, u(x), ∂u/∂x_j (x), . . . , ∂^m u/∂x_{j_1} ⋯ ∂x_{j_m} (x) ) = 0. (1.1)
The order of this equation is m, the order of the highest derivative appearing (which
is assumed to be finite). A classical solution u admits continuous partial derivatives
up to order m and satisfies (1.1) at all points x in its domain. In certain situations the
differentiability requirements can be relaxed, allowing us to define weak solutions
that do not solve the equation literally.
A somewhat subtle aspect of the definition (1.1) is the fact that the equation
is required to be local. This means that functions and derivatives appearing in the
equation are all evaluated at the same point.
Although classical physics provided the original impetus for the development
of PDE theory, PDE models have since played a crucial role in many other fields,
including engineering, chemistry, biology, ecology, medicine, and finance. Many
industrial applications of mathematics are based on the numerical analysis of PDE.
Most PDE are not solvable in the explicit sense that a simple calculus problem can
be solved. That is, we typically cannot obtain an exact formula for u(x). Therefore
much of the analysis of PDE is focused on drawing meaningful conclusions from an
equation without actually writing down a solution.
One of the earliest and most influential PDE models was the wave equation, devel-
oped by Jean d'Alembert in 1746 to describe the motion of a vibrating string. With
physical constants normalized to 1, the equation reads
∂²u/∂t² − ∂²u/∂x² = 0, (1.2)
where u(t, x) denotes the vertical displacement of the string at position x and time
t. If the string has length ℓ and is attached at both ends, then we also require that
u(t, 0) = u(t, ℓ) = 0 for all t. We will discuss the formulation of this model in
Sect. 4.1.
D'Alembert also found a general formula for the solution of (1.2), based on the
observation that (1.2) is solved by any function of the form f(x ± t), assuming f is
twice differentiable. Given two such functions on R, we can write a general solution
u(t, x) = f(x + t) + g(x − t). (1.3)
A similar formula applies in the case of a string with fixed ends. If f is 2ℓ-periodic
on R, meaning f(x + 2ℓ) = f(x) for all x, then it is easy to check that
u(t, x) := (1/2)[ f(x + t) − f(t − x) ] (1.4)
satisfies u(t, 0) = u(t, ℓ) = 0 for any t.
One curious feature of this formula is that it appears to give a sensible solution
even in cases where f is not differentiable. For example, to model a plucked string
we might take the initial displacement to be a simple piecewise linear function in the
form of a triangle from the fixed endpoints, as shown in Fig. 1.1.
If we extend this to an odd, 2ℓ-periodic function on R, then the formula (1.4) yields
the result illustrated in Fig. 1.2. The initial kink splits into two kinks which travel in
opposite directions on the string and appear to rebound from the fixed ends.
This is not a classical solution because u is not differentiable at the kinks. However,
u does satisfy the requirements for a weak solution, as we will see in Chap. 10.
Although a physical string could not exhibit sharp corners without breaking, the
piecewise linear solutions are nevertheless physically reasonable. Direct observations
of plucked and bowed strings were first made in the late 19th century by Hermann
von Helmholtz, who saw patterns of oscillation quite similar to what is shown in
Fig. 1.2. The appearance of kinks propagating along the string is striking, although
the corners are not exactly sharp.
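The following minimal numerical sketch (not from the text) evaluates formula (1.4) for a plucked-string profile. The triangular initial shape, the choice of string length ℓ = 1, and the sample times are illustrative assumptions.

```python
import numpy as np

ell = 1.0  # string length; an illustrative choice

def f(x):
    """Odd, 2*ell-periodic extension of a triangular plucked-string profile."""
    x = np.mod(np.asarray(x, dtype=float) + ell, 2.0 * ell) - ell   # reduce to [-ell, ell)
    return np.sign(x) * np.minimum(np.abs(x), ell - np.abs(x))      # odd triangle wave

def u(t, x):
    """Formula (1.4): u(t, x) = [f(x + t) - f(t - x)] / 2."""
    return 0.5 * (f(x + t) - f(t - x))

x = np.linspace(0.0, ell, 201)
for t in [0.0, 0.3, 0.6]:
    profile = u(t, x)
    print(f"t = {t:3.1f}:  u(t, 0) = {profile[0]:.1e},  max displacement = {profile.max():.3f}")
```

At t = 0 the formula reproduces the triangular shape, and the boundary values remain zero for all t, as the discussion above describes.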
There is no general theory of PDE that allows us to analyze all equations of the form
(1.1). To make progress it is necessary to restrict our attention to certain classes of
equations and develop methods appropriate to those.
The most fundamental distinction between PDE is the property of linearity. A
PDE is called linear if it can be written in the form
Lu = f, (1.5)
where L is a differential operator of the form
L = Σ_{i,j=1}^{n} a_{ij} ∂²/∂x_i ∂x_j + Σ_{j=1}^{n} b_j ∂/∂x_j + c, (1.6)
Hyperbolic evolution equations have the form (1.5) with
L = ∂²/∂t² − Σ_{i,j=1}^{n} a_{ij}(x) ∂²/∂x_i ∂x_j + (lower order terms), (1.8)
where once again [ai j ] is assumed to be a strictly positive matrix. Hyperbolic equa-
tions are used to model oscillatory phenomena.
Parabolic evolution equations have the form (1.5) with
L = ∂/∂t − Σ_{i,j=1}^{n} a_{ij}(x) ∂²/∂x_i ∂x_j + (lower order terms), (1.9)
where [ai j ] is a strictly positive matrix. The heat equation, whose derivation we will
discuss in detail in Sect. 6.1, is the prototype for this type of equation. Parabolic
equations are generally used to model phenomena of conduction and diffusion.
Note that hyperbolic and parabolic equations revert to elliptic equations in the
spatial variables if the solution is independent of time. Elliptic equations thus serve
to model the equilibrium states of evolution equations.
Because of their association with phenomenological properties of a system, the
terms elliptic, hyperbolic, and parabolic are frequently applied more broadly
than this simple classification would suggest. A nonlinear equation is typically
described by the category of its linear approximations, which can change depending
on the conditions.
For problems on a bounded domain, the application usually dictates some restric-
tion on the solutions at the boundary. Two very common types are Dirichlet bound-
ary conditions, specifying the values of u at the boundary, and Neumann conditions,
specifying the normal derivatives of u at the boundary. These conditions are named
for Gustav Lejeune Dirichlet and Carl Neumann, respectively. By default we will
use these terms in the homogeneous sense, meaning that the boundary values of the
function or derivative are set equal to zero. For evolution equations, we also impose
initial conditions, specifying the values of u and possibly its time derivatives at some
initial time.
The set of functions used to formulate a PDE, which might include coefficients or
terms in the equation itself as well as boundary and initial conditions, is collectively
referred to as the input data. The most basic question for any PDE is whether a
solution exists for a given set of data. However, for most purposes we want to require
something more. A PDE problem is said to be well posed if, for a given set of data:
1. A solution exists.
2. The solution is uniquely determined by the data.
3. The solution depends continuously on the data.
These criteria were formulated by Jacques Hadamard in 1902. The first two properties
hold for ODE under rather general assumptions, but not necessarily for PDE. It is
easy to find nonlinear equations that admit no solutions, and even in the linear case
there is no guarantee.
The third condition, continuous dependence on the input data, is sometimes called
stability. One practical justification for this requirement is that it is not possible to specify
input data with absolute accuracy. Stability implies that the effects of small variations
in the data can be controlled.
For certain PDE, especially the classical linear cases, we have a good under-
standing of the requirements for well-posedness. For other important problems, such as the fluid equations appearing in the simulations of Fig. 1.3, well-posedness remains an open question.
Fig. 1.3 Numerical simulations of blood flow in the aorta. Courtesy of D. Gupta, Emory University
Hospital, and T. Passerini, M. Piccinelli and A. Veneziani, Emory Mathematics and Computer
Science
1.5 Approaches
We can organize the methods for handling PDE problems according to three basic
goals:
1. Solving: finding explicit formulas for solutions.
2. Analysis: understanding general properties of solutions.
3. Approximation: calculating solutions numerically.
Solving PDE is certainly worth understanding in those special cases where it is
possible. The solution formulas available for certain classical PDE provide insight
that is important to the development of the theory.
The goals of theoretical analysis of PDE are extremely broad. We wish to learn
as much as we can about the qualitative and quantitative properties of solutions and
their relationship to the input data.
Finally, numerical computation is the primary means by which applications of
PDE are carried out. Computational methods rely on a foundation of theoretical
analysis, but also bring up new considerations such as efficiency of calculation.
Example 1.1 Figure 1.3 shows a set of numerical simulations modeling the insertion
in the aorta of a pipe-like device designed to improve blood flow. The leftmost frame
shows the aorta before surgery, and the three panes on the right model the insertion
at different locations. The PDE model is a complex set of fluid equations called the
Navier-Stokes equations. These fluid equations are famously difficult to analyze and
an exact solution is almost never possible. However, the cylinder is one case that can
be handled explicitly. For the numerical simulations, exact solutions for a cylindrical
pipe were used to provide boundary data at the point where the pipe meets the aorta.
Theoretical analysis also plays an important role here, in that the regularity theory
for the fluid equations is used to predict the accuracy of the simulation. (The complete
well-posedness analysis of the Navier-Stokes equations remains a famously unsolved
problem, however.)
The simulated flows displayed in Fig. 1.3 were computed numerically by a tech-
nique called the finite element method. This involves discretizing the problem to
reduce the PDE to a system of linear algebraic equations. Modeling a single heart-
beat in this simulation requires solving a linear system of about 500 million equations.
Chapter 2
Preliminaries
In this chapter we set the stage for the study of PDE with a review of some core
background material.
The real number system R is constructed as the completion of the field of ratio-
nal numbers. This means that in addition to the algebraic axioms for addition and
multiplication, R satisfies an additional axiom related to the existence of limits. To
state this axiom we use the concept of the supremum (or least upper bound) of a
subset A ⊂ R. The supremum is a number sup(A) ∈ R such that (1) all elements of
A are less than or equal to sup(A); and (2) no number strictly less than sup(A) has
this property. The completeness axiom says that every nonempty subset of R that is
bounded above has a supremum. An equivalent statement is that a nonempty subset
that is bounded below has an infimum (greatest lower bound), which is denoted
inf(A).
It is convenient to extend these definitions to unbounded sets by defining sup(A) :=
+∞ when A is not bounded above, and inf(A) := −∞ when the set is not bounded
below. We also set sup(∅) := −∞ and inf(∅) := +∞. With these extensions, sup
and inf are defined for all subsets of R.
To illustrate the definition, we present a simple result that will prove useful in the
construction of approximating sequences for solutions of PDE.
Lemma 2.1 For a nonempty set A ⊂ R, there exists a sequence of points x_k ∈ A
such that
lim_{k→∞} x_k = sup A.
The complex number system C consists of numbers of the form z = x + iy, where
x, y ∈ R and i² := −1. The numbers x and y are called the real and imaginary parts
of z. The conjugate of z is
z̄ := x − iy,
so that
Re z := (z + z̄)/2,  Im z := (z − z̄)/(2i).
A nonzero complex number has a multiplicative inverse, given by
1/(x + iy) = (x − iy)/(x² + y²).
|zw| = |z||w|
for z, w ∈ C.
The basic theory of sequences and series carries over from R to C with only minor
changes. A sequence {z k } in C converges to z if
lim |z k z| = 0,
k
2.2 Complex Numbers 11
and a series ak converges if the sequence of partial sums nk=1 ak is convergent.
The series converges absolutely if
|ak | < .
k=1
In particular, the exponential series
e^z := Σ_{k=0}^{∞} z^k/k!
converges absolutely for all z ∈ C. The special case where z is purely imaginary
gives an important relation called Euler's formula:
e^{iθ} = ( 1 − θ²/2! + θ⁴/4! − ⋯ ) + i( θ/1! − θ³/3! + ⋯ )
       = cos θ + i sin θ. (2.2)
Leonhard Euler, arguably the most influential mathematician of the 18th century,
published this identity in 1748. It yields a natural polar-coordinate representation of
complex numbers,
z = r e^{iθ},
where r = |z| and θ is the angle between z and the positive real axis.
The product rule for complex exponentials,
e z ew = e z+w , (2.3)
follows from the power series definition just as in the real case. In combination with
(2.2) this allows for a very convenient manipulation of trigonometric functions. For
example, setting z = iα and w = iβ in (2.3) and taking the real and imaginary parts
recovers the identities
cos(α + β) = cos α cos β − sin α sin β,
sin(α + β) = sin α cos β + cos α sin β.
The calculus rules for differentiating and integrating exponentials are derived
from the power series expansion, and thus extend to the complex case. In particular,
d/dx e^{ax} = a e^{ax}
for a ∈ C.
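As a quick sanity check, the hedged Python snippet below verifies Euler's formula (2.2) and the differentiation rule numerically; the particular values of θ, a, and x are arbitrary choices, not from the text.

```python
import cmath, math

# Euler's formula (2.2): e^{i*theta} = cos(theta) + i*sin(theta)
theta = 0.7
print(abs(cmath.exp(1j * theta) - complex(math.cos(theta), math.sin(theta))))   # ~1e-16

# Differentiation rule d/dx e^{a x} = a e^{a x} for complex a,
# checked against a centered difference quotient.
a, x, h = 2.0 - 3.0j, 0.4, 1e-6
quotient = (cmath.exp(a * (x + h)) - cmath.exp(a * (x - h))) / (2 * h)
print(abs(quotient - a * cmath.exp(a * x)))                                      # small
```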
2.3 Domains in Rn
For points in Rn we will use the vector notation x = (x1 , . . . , xn ). The Euclidean
dot product is denoted x y, and the Euclidean length of a vector is written
|x| := √(x · x).
A sequence {x_k} in R^n converges to w ∈ R^n if
lim_{k→∞} |x_k − w| = 0.
∂A := {boundary points of A}.
[Fig. 2.1: a boundary point and an interior point of a set]
A set F ⊂ R^n is said to be closed if ∂F ⊂ F.
The union of a subset A ⊂ R^n with its boundary is called the closure of A and
denoted
Ā := A ∪ ∂A.
For example, the closure of the open ball B(x_0; R) is
{ x ∈ R^n ; |x − x_0| ≤ R }.
It is potentially confusing that an overline is used for set closure and complex con-
jugation, but these notations are standard. Note that closure applies only to sets and
not to numbers or functions.
A closed set F ⊂ R^n contains the limits of all sequences in F that converge in
R^n. This is because the limit of a sequence contained in a set must either be an
interior point or a boundary point of that set.
2.4 Differentiability
supp f := the closure of {x ; f(x) ≠ 0}. (2.4)
Note that the definition includes a closure. This means that the support does not
exclude points where the function merely passes through zero. For example, the
support of sin(x) is R rather than R \ πZ.
A closed and bounded subset of R^n is said to be compact. We denote by C^m_cpt(Ω)
the space of functions on Ω that have compact support, meaning that supp f is a
compact subset of . Since is open and the support is closed, this requires in
particular that supp f be a strict subset of . For example, 1 x 2 vanishes at the
boundary of (1, 1), but does not have compact support in this domain because its
support is [1, 1].
h(x) =  e^{−1/(1−x²)},  |x| < 1,
        0,              |x| ≥ 1,
which has support [−1, 1]. As illustrated in Fig. 2.2, the function becomes extremely
flat as x → ±1.
To show that h is in fact smooth, we note that
h^{(m)}(x) =  q_m(x)/(1 − x²)^{2m} · e^{−1/(1−x²)},  |x| < 1,
              0,                                      |x| ≥ 1,
where q_m(x) is a polynomial.
These smooth step functions are useful as building blocks for pasting together
smooth functions on different domains.
Our development of PDE theory will not rely on any advanced techniques for the
solution of ODE, but it will be useful to recall some basic material.
First-order ODE can often be solved directly by methods from calculus. The
easiest cases are equations of the form
dy/dt = g(y) h(t),
where the variables can be separated to yield an integral formula
∫ dy/g(y) = ∫ h(t) dt.
Integrating both sides yields a family of solutions with one undetermined constant.
dy/dt = ay,  y(0) = y_0,
for a ≠ 0. This is called the growth or decay equation, depending on the sign of a.
Separating the variables gives
dy/y = a dt,
which integrates to ln y = at + C. Solving for y and using the initial condition gives
y(t) = y0 eat .
we define the vector-valued function w = (y, y′, …, y^{(n−1)}). This satisfies the first-
order system
d/dt (w_1, …, w_{n−1}, w_n) = (w_2, …, w_n, F(t, w)).
dw/dt = F(t, w),  w(t_0) = w_0, (2.6)
in an equivalent form as a recursive integral equation,
w(t) = w_0 + ∫_{t_0}^{t} F(s, w(s)) ds.
This suggests defining a sequence of approximations iteratively by
w_k(t) := w_0 + ∫_{t_0}^{t} F(s, w_{k−1}(s)) ds, (2.7)
for k = 1, 2, . . . , starting from the constant function w_0(t) := w_0. It can be shown that the limit of this sequence exists and solves
(2.6) under some general assumptions on F, which leads to a proof of the following
result.
Theorem 2.4 (Picard iteration) Suppose that F is a continuous function on I × Ω,
where I is an open interval containing t_0 and Ω ⊂ R^n is a domain containing w_0, and
that F is continuously differentiable with respect to w. Then (2.6) admits a unique
solution on some interval (t_0 − ε, t_0 + ε) with ε > 0.
Applying Theorem 2.4 to (2.5) shows that an nth order ODE satisfying the reg-
ularity assumptions has a unique local solution specified by the initial values of the
function and its first n 1 derivatives.
The C 1 hypothesis on F is stronger than necessary, but this version will suffice for
our purposes. The point we would like to stress here is the relative ease with which
ODE can be analyzed under very general conditions. This is very different from the
PDE theory, where no such general results are possible.
d²y/dt² = −ω² y,
for ω > 0. In view of the solution to the growth/decay equation in Example 2.3, it is
reasonable to start with an exponential solution as a guess. Substituting e^{λt} into the
equation yields λ² = −ω². From λ = ±iω we obtain the general solution
y(t) = c_+ e^{iωt} + c_− e^{−iωt}.
To see how this relates to the Picard iteration method described above, consider
the corresponding system (2.6) for w = (y, y′):
dw/dt = ( 0   1 ; −ω²   0 ) w.
With w_0 = (a, b), the recursive formula (2.7) yields the sequence of functions
w_k(t) = Σ_{j=0}^{k} (t^j/j!) ( 0   1 ; −ω²   0 )^j ( a ; b ).
Reading off y as the first component of w gives the familiar trigonometric solution,
y(t) = a cos(ωt) + (b/ω) sin(ωt).
(2.2).
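The following sketch (an illustration, not taken from the text) computes the Picard iterates for this oscillator system. For a linear system the iterates coincide with partial sums of the matrix-exponential series, so they converge rapidly to the trigonometric solution; the values ω = 2, a = 1, b = 0, and t = 1 are arbitrary choices.

```python
import numpy as np

omega = 2.0
A = np.array([[0.0, 1.0], [-omega**2, 0.0]])   # w' = A w, with w = (y, y')
w0 = np.array([1.0, 0.0])                      # initial data a = 1, b = 0

def picard_iterate(t, k):
    """k-th Picard iterate for w' = A w, w(0) = w0.

    For this linear system the iterates are the partial sums
    sum_{j=0}^{k} (t^j / j!) A^j w0 of the matrix-exponential series.
    """
    w = np.zeros(2)
    term = w0.copy()
    for j in range(k + 1):
        w = w + term
        term = (t / (j + 1)) * (A @ term)
    return w

t = 1.0
exact_y = np.cos(omega * t)                    # y(t) = cos(omega*t) for a = 1, b = 0
for k in [2, 5, 10, 20]:
    print(k, abs(picard_iterate(t, k)[0] - exact_y))
```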
The classical theorems of vector calculus were motivated by PDE problems arising
in physics. For our purposes the most important of these results is the divergence
theorem. We assume that the reader is familiar with the divergence theorem in the
context of R2 or R3 . In this section we will cover the basic definitions needed to state
the result in Rn and develop its corollaries.
As noted in Sect. 2.3, we always take a domain Ω ⊂ R^n to be connected and
open. The gradient of f ∈ C¹(Ω) is the vector-valued function
∇f := ( ∂f/∂x_1, …, ∂f/∂x_n ).
The Laplacian of u ∈ C²(Ω) is defined by
Δu := ∇ · (∇u).
Volume integrals over a domain will be written as
∫ f(x) dⁿx.
For u continuously differentiable in t, differentiation under the integral sign gives
d/dt ∫ u(t, x) dⁿx = ∫ ∂u/∂t (t, x) dⁿx.
Differentiation under the integral may still work when the integrals are improper, but
this requires greater care.
To set up boundary integrals for a domain Ω with piecewise C¹ boundary, we need
to define the surface integral over a regular coordinate patch σ: U → ∂Ω, where U ⊂ R^{n−1}.
Let ν: U → R^n denote the unit normal vector pointing outwards from the domain.
The surface integral for such a patch is defined by
∫_{σ(U)} f dS := ∫_U f(σ(w)) det[ ∂σ/∂w_1, …, ∂σ/∂w_{n−1}, ν ] dⁿ⁻¹w, (2.8)
where det[. . . ] denotes the determinant of a matrix of column vectors. The full surface
integral over is defined by summing over the boundary coordinate patches. For
simplicity, we notate this as a single integral,
f d S.
In R3 , the unit normal for a surface patch can be computed from the cross product
of the tangent vectors. This leads to the surface integral formula
∫_{σ(U)} f dS := ∫_U f(σ(w)) | ∂σ/∂w_1 × ∂σ/∂w_2 | d²w.
Even in low dimensions surface integrals can be rather complicated. We will make
explicit use of these formulas only in relatively simple cases, such as rectangular
regions and spheres.
We can use (2.8) to decompose integrals into radial and spherical components.
This is particularly useful when the domain is a ball. Let r := |x| be the radial
coordinate, and define the unit sphere
S^{n−1} := {r = 1} ⊂ R^n.
A point x ≠ 0 can be written uniquely as rω for ω ∈ S^{n−1} and r > 0. Let σ(y) be a
parametrization of S^{n−1} by coordinates y ∈ U ⊂ R^{n−1}. For the change of variables
(r, y) ↦ x = rσ(y), the Jacobian formula gives
dⁿx = det[ ∂x/∂y_1, …, ∂x/∂y_{n−1}, ∂x/∂r ] dr dy_1 ⋯ dy_{n−1} (2.9)
    = det[ ∂σ/∂y_1, …, ∂σ/∂y_{n−1}, σ ] r^{n−1} dr dy_1 ⋯ dy_{n−1}.
On the unit sphere, the outward unit normal is equal to σ. Thus (2.9) reduces to
dⁿx = r^{n−1} dr dS(y).
For an integral over the ball this yields the radial integral formula,
∫_{B(0;R)} f(x) dⁿx = ∫_{S^{n−1}} ∫_0^R f(rσ(y)) r^{n−1} dr dS(y). (2.10)
With these definitions in place, we turn to the divergence theorem, which relates
the flux of a vector field through a closed surface to the divergence of the field in the
interior. This result is generally attributed to Carl Friedrich Gauss, who published a
version in 1813 in conjunction with his work on electrostatics.
Theorem 2.6 (Divergence theorem) Suppose Rn is a bounded domain with
piecewise C 1 boundary. For a vector field F C 1 (; Rn ),
∫_Ω ∇ · F dⁿx = ∫_{∂Ω} F · ν dS.
To illustrate the theorem, consider the unit ball B³ ⊂ R³ and a vector field whose only nonzero component is the third,
F = (0, 0, f),
so that ∇ · F = ∂f/∂x_3. In cylindrical coordinates x_1 = ρ cos φ, x_2 = ρ sin φ, x_3 = z, the volume element is
d³x = ρ dρ dφ dz,
and integrating first in z gives
∫_{B³} ∇ · F d³x = ∫_0^{2π} ∫_0^1 ∫_{−√(1−ρ²)}^{√(1−ρ²)} ∂f/∂z ρ dz dρ dφ
  = ∫_0^{2π} ∫_0^1 [ f(ρ cos φ, ρ sin φ, √(1−ρ²)) − f(ρ cos φ, ρ sin φ, −√(1−ρ²)) ] ρ dρ dφ. (2.11)
Note that z = ±√(1−ρ²) gives the restriction to the upper and lower hemispheres, respectively.
We denote the two hemispheres by S²_± ⊂ S² and parametrize them as
σ_±(ρ, φ) = ( ρ cos φ, ρ sin φ, ±√(1−ρ²) ).
For these patches the surface integral formula gives
∫_0^{2π} ∫_0^1 f( ρ cos φ, ρ sin φ, ±√(1−ρ²) ) ρ dρ dφ = ∫_{S²_±} f √(1−ρ²) dS.
On S²_± we have F · ν = ± f √(1−ρ²), so that
± ∫_{S²_±} f √(1−ρ²) dS = ∫_{S²_±} F · ν dS.
Combining these calculations shows that
∫_{B³} ∇ · F d³x = ∫_{S²} F · ν dS,
in agreement with the divergence theorem.
On the surface side, the integrand becomes the directional derivative with respect to
the outward unit normal, which is denoted
∂u/∂ν := ν · ∇u.
The resulting identity (Corollary 2.8) is
∫_Ω Δu dⁿx = ∫_{∂Ω} ∂u/∂ν dS.
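As an illustration (not from the text), the following Python sketch checks this identity numerically on the unit disk for the sample function u(x, y) = x²; the Monte Carlo and quadrature parameters are arbitrary.

```python
import numpy as np

# Check  ∫_Ω Δu dx = ∫_∂Ω ∂u/∂ν dS  on the unit disk for u(x, y) = x².
# Here Δu = 2, and on the unit circle ∂u/∂ν = ∇u · ν = 2x² = 2 cos²θ; both sides equal 2π.

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(400_000, 2))        # sample the bounding square
inside = (pts**2).sum(axis=1) <= 1.0
lhs = 4.0 * np.mean(2.0 * inside)                      # area(square) * mean(Δu · 1_disk)

theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
rhs = np.mean(2.0 * np.cos(theta)**2) * 2.0 * np.pi    # ∫_0^{2π} 2 cos²θ dθ

print(lhs, rhs, 2.0 * np.pi)
```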
where
A_n := vol(S^{n−1}). (2.13)
By (2.12),
∫_{∂B(0;a)} ∂g/∂ν dS = ∫_{∂B(0;a)} ∂g/∂r (a) dS
  = A_n a^{n−1} ∂g/∂r (a).
The formula from Corollary 2.8 reduces in this case to
∫_0^a (Δg)(r) r^{n−1} dr = a^{n−1} ∂g/∂r (a). (2.14)
This holds for all a > 0, so evidently the Laplacian of a radial function is given by
Δg = r^{1−n} ∂/∂r ( r^{n−1} ∂g/∂r ). (2.15)
In principle one could derive this formula directly from the chain rule, but the direct
computation is difficult in high dimensions.
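A symbolic spot-check of (2.15) is easy to carry out; the sketch below (assuming the sympy library and the sample choices n = 3, g(r) = e^{−r²}, which are not from the text) compares the Cartesian Laplacian with the radial formula.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
s = sp.symbols('s', positive=True)               # the radial variable r

# Sample radial function g(r) = exp(-r^2) in dimension n = 3 (illustrative choice).
g_cart = sp.exp(-(x1**2 + x2**2 + x3**2))        # g written in Cartesian coordinates
g_rad = sp.exp(-s**2)                            # the same g as a function of r

# Cartesian Laplacian, evaluated on the x1-axis where r = s.
lap = sum(sp.diff(g_cart, v, 2) for v in (x1, x2, x3))
lap_on_axis = lap.subs({x1: s, x2: 0, x3: 0})

# Radial formula (2.15) with n = 3:  Δg = r^{1-n} ∂_r ( r^{n-1} ∂_r g ).
radial = s**(-2) * sp.diff(s**2 * sp.diff(g_rad, s), s)

print(sp.simplify(lap_on_axis - radial))         # prints 0
```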
There are two other direct corollaries of the divergence theorem that will be used
frequently. These are named for the mathematical physicist George Green, who used
them to develop solution formulas for some classical PDE.
The first result is a generalization of Corollary 2.8, obtained from Theorem 2.6
by the substitution F = vu for a pair of functions u, v. The product rule for
differentiation gives
∇ · (v∇u) = ∇v · ∇u + vΔu, (2.16)
which can easily be checked by writing out the components of the gradient.
Theorem 2.10 (Green's first identity) If Ω ⊂ R^n is a bounded domain with piece-
wise C¹ boundary, then for u ∈ C²(Ω̄) and v ∈ C¹(Ω̄),
∫_Ω [ ∇v · ∇u + vΔu ] dⁿx = ∫_{∂Ω} v ∂u/∂ν dS.
The second identity follows from the first by interchanging u with v and then
subtracting the result.
Theorem 2.11 (Green's second identity) If Ω ⊂ R^n is a bounded domain with
piecewise C¹ boundary, then for u, v ∈ C²(Ω̄),
∫_Ω [ vΔu − uΔv ] dⁿx = ∫_{∂Ω} [ v ∂u/∂ν − u ∂v/∂ν ] dS.
2.7 Exercises
x_1 = r cos θ,  x_2 = r sin θ.
w(x) = f(x) e_j,
h(t) := ∫_{B(0;t)} f(x) dⁿx.
Show that
dh/dt = ∫_{∂B(0;t)} f(w) dS(w).
Note that Γ(1) = 1 and integration by parts gives the recursion relation Γ(z + 1) =
zΓ(z). In this problem we will show that the volume of the unit sphere in R^n is given
by
A_n = 2π^{n/2} / Γ(n/2). (2.18)
(a) Use the radial formula (2.10) and the substitution u := r² to compute that
∫_{R^n} e^{−r²} dⁿx = (1/2) A_n Γ(n/2). (2.19)
Note that
∫_{R^n} e^{−r²} dⁿx = ∫_{R^n} e^{−(x_1² + ⋯ + x_n²)} dⁿx = ( 2 ∫_0^∞ e^{−x²} dx )ⁿ.
Chapter 3
Conservation Equations and Characteristics
A conservation law for a physical system states that a certain quantity (e.g., mass,
energy, or momentum) is independent of time. For continuous systems such as fluids
or gases, these global quantities can be defined as integrals of density functions. The
conservation law then translates into a local form, as a PDE for the density function.
In this section we will study some first-order PDE that arise from conservation
laws. We introduce a classic technique, called the method of characteristics, for
analyzing these equations.
To derive the conservation equation, we consider a simple model for the concentration
of oxygen carried by the bloodstream. For this discussion we ignore any external
effects that might break the conservation of mass, such as absorption of oxygen into
the walls of a blood vessel. (Some examples of external effects will be considered in
the exercises.)
Let us model an artery as a straight tube, as pictured in Fig. 3.1. We assume that the
concentration is constant on cross-sections of the tube, so that the problem reduces
to one spatial dimension. For the moment, suppose that the artery extends along the
real line and is parametrized by x R.
Let u(t, x) denote the oxygen concentration, expressed in units of mass per unit
length. Within a fixed interval [a, b], as highlighted in Fig. 3.1, the total mass at time
t is given by an integral,
m(t) := ∫_a^b u(t, x) dx. (3.1)
The instantaneous flow rate at a given point x is called the flux q(t, x), expressed
as mass per unit time. The general relationship between flux and concentration is
[Fig. 3.1: a segment [a, b] of the tube along the x-axis]
For the bloodstream model we can reasonably assume that velocity is independent
of the oxygen concentration (because oxygen accounts for a relatively small portion
of the total density). This assumption implies that q has a linear dependence on u. In
other models the velocity might depend on the concentration, making q a nonlinear
function of u.
Conservation of mass implies that the total amount of oxygen within the segment
changes only as oxygen flows across the boundary points at x = a and x = b. Since
the flow across these points is given by the flux, the corresponding equation is
dm/dt (t) = q(t, a) − q(t, b). (3.2)
If q is continuously differentiable with respect to position, then the fundamental
theorem of calculus allows us to write the right-hand side of (3.2) as an integral,
q(t, a) − q(t, b) = −∫_a^b ∂q/∂x dx.
Similarly, differentiating m(t) under the integral sign gives
dm/dt (t) = ∫_a^b ∂u/∂t (t, x) dx,
provided that u(t, x) is continuously differentiable with respect to time. These cal-
culations transform (3.2) into the integral equation
∫_a^b [ ∂u/∂t + ∂q/∂x ] dx = 0. (3.3)
Since the segment was arbitrary, (3.3) should hold for all values of a, b. This is
only possible if the integrand is identically zero, which gives the local form of the
law of conservation of mass:
∂u/∂t + ∂q/∂x = 0. (3.4)
This relationship between concentration and flux is called the continuity equation
(or transport equation). The continuity equation applies generally to the physical
process of advection, which refers to the motion of particles in a bulk fluid flow.
To adapt (3.4) to a particular model, we need to specify the relationship between
q and u. As we remarked above, for the bloodstream model it is reasonable to assume
a linear relationship,
q = vu, (3.5)
where the velocity v(t, x) is part of the input data for the equation. Under this
assumption (3.4) reduces to
∂u/∂t + v ∂u/∂x + u ∂v/∂x = 0, (3.6)
which is called the linear conservation equation.
In this section we will discuss the strategy for solving a first-order PDE such as (3.6).
The basic idea is to adopt the perspective of an observer traveling with velocity v.
This is like taking measurements in a river from a raft drawn by the current. Once
we fix a starting point for the observer, the observed concentration depends only on
the time variable, thus reducing the equation to an ODE.
This principle applies to any first-order PDE of the form
∂u/∂t + v ∂u/∂x + w = 0, (3.7)
where v = v(t, x) and w = w(t, x, u). The characteristics associated to (3.7) are the
solutions of the ODE
dx/dt (t) = v(t, x(t)). (3.8)
From the point of view of an observer carried by the flow, the measured concen-
tration is u(t, x(t)). The observed rate of change is the derivative of this quantity,
Du/Dt (t) := d/dt u(t, x(t)), (3.9)
called the Lagrangian derivative (or material derivative). This concept was devel-
oped by the 18th century mathematician and physicist Joseph-Louis Lagrange. Note
that Du/Dt depends also on the initial value (t0 , x0 ) that determines the character-
istic. For convenience we suppress the initial point from the notation.
Theorem 3.2 On each characteristic curve x(t) solving (3.8), the PDE (3.7) reduces
to the ODE
Du/Dt + w = 0. (3.10)
Proof By the chain rule,
Du/Dt = ∂u/∂t + ∂u/∂x · dx/dt,
with the understanding that the partial derivatives on the right are evaluated at the
point (t, x(t)). Because x(t) solves (3.8), this reduces to
Du/Dt = ∂u/∂t + v ∂u/∂x, (3.11)
If we restrict the variables in (3.7) to (t, x(t)), then the first two terms match the
right-hand side of (3.11), reducing the equation to (3.10).
If w = 0, then (3.10) becomes
Du/Dt = 0.
With Theorem 3.2 we can effectively reduce the PDE (3.7) to a pair of ODE,
namely the characteristic equation (3.8) and the Lagrangian derivative equation
(3.10). In many cases, solving these ODE will lead to an explicit formula for u(t, x).
This approach is referred to as the method of characteristics.
∂u/∂t + (at + b) ∂u/∂x = 0,
with the initial condition
u(0, x) = g(x),
for some function g C 1 (R). The characteristics for this velocity, v(t, x) = at + b,
were computed in Example 3.1.
According to Theorem 3.2, u is constant along characteristics, implying that
u( t, (a/2)t² + bt + x_0 ) = u(0, x_0) = g(x_0), (3.12)
for all t R. This is not yet a formula for u(t, x), but we can derive the solution
formula by identifying
x = (a/2)t² + bt + x_0.
Solving for x0 in terms of x and substituting this into (3.12) gives
u(t, x) = g( x − (a/2)t² − bt ).
Example 3.4 For steady flow through a pipe of changing diameter, the velocity would
vary with position rather than time. Let v(t, x) = a + bx for x ≥ 0, with a, b > 0.
The resulting characteristic equation (3.8) is
dx/dt = a + bx.
This can be solved by the standard ODE technique of separating the t and x variables
to different sides of the equation:
dx/(a + bx) = dt.
(1/b) ln(a + bx) = t + C,
with C a constant of integration. (Note that a + bx > 0 by our assumptions.) Solving
for x gives
x(t) = (1/b) ( e^{b(t+C)} − a ).
Given the assumption x ≥ 0, it is natural to index the characteristics by the start
time t0 such that x(t0 ) = 0. With this convention, the family of solutions is
x(t) = (a/b) ( e^{b(t−t_0)} − 1 ). (3.13)
These characteristic curves are illustrated in Fig. 3.3.
With v = a + bx the linear conservation equation (3.6) becomes
∂u/∂t + (a + bx) ∂u/∂x + bu = 0.
Let us find the solution under the boundary condition
u(t, 0) = f(t), (3.14)
for a given function f. Along the characteristics, Theorem 3.2 reduces the equation to
Du/Dt + bu = 0.
This is a decay equation, with the family of exponential solutions
u(t, x(t)) = A e^{−bt}. (3.15)
To fix A, we substitute the starting point (t_0, 0) into the equation and obtain A = f(t_0) e^{b t_0}.
Putting together (3.13) and (3.15) and applying the boundary condition (3.14)
gives
u( t, (a/b)(e^{b(t−t_0)} − 1) ) = f(t_0) e^{−b(t−t_0)}. (3.16)
To express the solution as a function of (t, x), we set
x = (a/b) ( e^{b(t−t_0)} − 1 ),
and solve for t_0 to obtain
t_0 = t + (1/b) ln( a/(a + bx) ).
Substituting this expression into (3.16) gives the final form of the solution:
u(t, x) = a/(a + bx) · f( t + (1/b) ln( a/(a + bx) ) ).
Fig. 3.4 Behavior of solutions for Example 3.4. In the contour plot on the left, darker regions
correspond to higher concentration. The change in colors corresponds to exponential decay along
the characteristics illustrated in Fig. 3.3
For flow problems in more than one spatial dimension, we can develop a continuity
equation analogous to (3.4) by the same reasoning as in Sect. 3.1. Suppose u(t, x)
represents a concentration defined for t R and x Rn . Let R Rn be a bounded
region with C 1 boundary. The total mass within this region is given by the volume
integral
m(t) := ∫_R u(t, x) dⁿx.
Note that since q depends on both t and x, the notation ∇ · q is slightly ambiguous.
We follow the standard convention that vector calculus operators such as ∇ and Δ
act only on spatial variables.
If u is C 1 with respect to t, then we can also differentiate the integral for m to
obtain
dm/dt = ∫_R ∂u/∂t dⁿx.
As in the one-dimensional case, we now observe that since (3.19) holds for an arbi-
trary region R, the integrand must vanish. This is the higher-dimensional continuity
equation:
∂u/∂t + ∇ · q = 0. (3.20)
Suppose we make the linear assumption that q = vu for a velocity field v which
is independent of u. The product rule for the divergence of a vector field is
∇ · (vu) = (∇ · v) u + v · ∇u.
Substituting this into (3.20) gives the higher-dimensional form of the linear conser-
vation equation
∂u/∂t + v · ∇u + (∇ · v) u = 0. (3.21)
In the special case where ∇ · v = 0 the velocity field is called solenoidal (or
divergence-free). This situation arises frequently in applications, because incom-
pressible fluids like blood or water have solenoidal velocity fields.
The method of characteristics from Sect. 3.2 can be adapted directly to (3.21).
Consider a somewhat more general first-order PDE in the form
∂u/∂t + v · ∇u + w = 0, (3.22)
with v = v(t, x) and w = w(t, x, u). The characteristics associated to this equation
are by definition the solutions of
dx/dt (t) = v(t, x(t)). (3.23)
Theorem 2.4 guarantees that characteristics exist in the neighborhood of each start-
ing point (t0 , x 0 ) provided v(t, x) and its partial derivatives with respect to x are
continuous.
The Lagrangian derivative of u along x(t) is defined as before by
Du/Dt (t) := d/dt u(t, x(t)).
The higher-dimensional version of Theorem 3.2 is the following:
Theorem 3.5 On each characteristic curve, the PDE (3.22) reduces to the ODE
Du/Dt + w = 0, (3.24)
Du/Dt (t) = ∂u/∂t (t, x(t)) + ∇u(t, x(t)) · dx/dt (t).
Because x(t) solves (3.23), this reduces to
Du/Dt = ∂u/∂t + v · ∇u.
Substituting this into (3.22) reduces the equation to (3.24).
If w = 0 the equation becomes
Du/Dt = 0,
which means precisely that u is constant along the characteristic curves.
Example 3.6 Consider a two-dimensional channel modeled as Ω = R × [−1, 1]
with coordinates x = (x_1, x_2). The velocity field
v(x) := ( 1 − x_2², 0 ) (3.25)
is solenoidal and vanishes on the boundary {x_2 = ±1}. The characteristic line orig-
inating from (a, b) at t = 0 is
x(t) = ( a + (1 − b²)t, b ).
Let us consider the conservation equation (3.21) for (t, x) ∈ R × Ω, with v given
by (3.25), subject to the initial condition
u(0, x) = g(x).
Since ∇ · v = 0 and w = 0, the solution is constant along the characteristics, so that
u(t, x) = g( x_1 − (1 − x_2²)t, x_2 ).
Fig. 3.5 Evolution of a circle according to the two-dimensional flow in Example 3.6
Figure 3.5 illustrates the evolution of a circular ink spot distribution under this
flow. Conservation of mass is reflected in the fact that the area of the spot is inde-
pendent of t.
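The following short sketch (not from the text) evolves a circle of points along the characteristics of this flow and checks that the enclosed area is preserved; the initial circle's center and radius are illustrative choices.

```python
import numpy as np

# Boundary of the initial "ink spot": a circle of radius 0.4 centered at the origin
# (an illustrative choice, not taken from the text).
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
a, b = 0.4 * np.cos(theta), 0.4 * np.sin(theta)

def evolve(t):
    """Characteristics of the flow (3.25): x(t) = (a + (1 - b^2) t, b)."""
    return a + (1.0 - b**2) * t, b

for t in [0.0, 1.0, 2.0]:
    x1, x2 = evolve(t)
    # Shoelace formula for the enclosed area; it stays equal to pi * 0.4**2,
    # reflecting conservation of mass for this solenoidal flow.
    area = 0.5 * abs(np.dot(x1, np.roll(x2, -1)) - np.dot(x2, np.roll(x1, -1)))
    print(f"t = {t}:  area ≈ {area:.4f}")
```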
∂u/∂t + a(u) · ∇u = 0, (3.26)
where a(u) := dq/du. This type of PDE is called quasilinear, which means that the
equation is linear in the highest-order derivatives (which are merely first order in this
case).
A comparison of (3.26) to the linear conservation equation (3.21) shows that a(u)
is now playing the role of velocity. This suggests a definition for the characteristics,
but we must keep in mind that a(u) depends on t and x implicitly through u.
Proof Suppose that a solution u exists. Let x(t) be the solution to the ODE
dx/dt (t) = a(u(t, x(t))),  x(0) = x_0,
for t [0, T ]. Existence of such a characteristic is guaranteed by Theorem 2.4, at
least for t near 0, because the composition a u is C 1 as a function of (t, x) by the
assumptions on a and u.
To establish the claim that u(t, x(t)) is independent of t, we use the chain rule to
differentiate
d/dt u(t, x(t)) = ∂u/∂t (t, x(t)) + ∇u(t, x(t)) · dx/dt (t)
  = ∂u/∂t (t, x(t)) + a(u(t, x(t))) · ∇u(t, x(t)).
The right-hand side vanishes by (3.26), so that
d/dt u(t, x(t)) = 0.
This implies that
u(t, x(t)) = u(0, x 0 ),
which means that a(u(t, x(t))) is also constant. The characteristic equation reduces
to
dx/dt (t) = a(u(0, x_0)),
In contrast to the characteristic equation (3.8) in the linear case, the equation for
x(t) here depends on the initial condition u(0, x 0 ). Furthermore, it is important to
keep in mind that Theorem 3.7 does not imply that a solution to (3.26) exists; this
must be established separately.
In a simple model of traffic flow, u represents the density of cars on a road, and the
velocity is assumed to decrease linearly with density, v(u) = v_m(1 − u/u_m), where
v_m is a maximum speed and u_m is the density at which traffic stops.
Since v ≥ 0, the model always assumes that traffic moves to the right.
To eliminate the constants and focus on the equation itself, let us set vm = 1 and
u m = 1, reducing the velocity equation to
v(u) = 1 − u,
so that the flux is
q(u) = u(1 − u) = u − u².
u(0, x) = h(x),
for some h : R [0, 1]. Assuming a solution exists, Theorem 3.7 gives the family
of characteristics
x(t) = x0 + (1 2h(x0 ))t. (3.28)
In other words, any solution must satisfy
u( t, x_0 + (1 − 2h(x_0))t ) = h(x_0) (3.29)
for each x_0. As we will demonstrate in the examples below, (3.29) leads to a solution formula for
some choices of h, while for others it leads to a contradiction.
h(x) = 1/2 − (1/π) arctan(20x),
which could represent a line of cars stopped at a traffic light at the point x = 0. The
corresponding characteristic lines as given by (3.28) are plotted in Fig. 3.7.
To derive a formula for u(t, x) from (3.29), we need to invert the equation
x = x0 + (1 2h(x0 ))t,
to express x0 as a function of t and x. For the function h given above it is not possible
to do this explicitly. However, there is a unique solution for each (t, x), which can
easily be calculated numerically. The resulting solutions are shown in Fig. 3.8.
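A minimal sketch of this numerical inversion is given below (illustrative, not from the text): for each (t, x) it solves x = x_0 + (1 − 2h(x_0))t for x_0 by bracketing and root finding (assuming scipy), then sets u(t, x) = h(x_0).

```python
import numpy as np
from scipy.optimize import brentq

h = lambda x: 0.5 - np.arctan(20.0 * x) / np.pi        # initial density of Example 3.8

def u(t, x):
    """Solve x = x_0 + (1 - 2 h(x_0)) t for x_0, then return u(t, x) = h(x_0).

    Since h is decreasing, x_0 -> x_0 + (1 - 2 h(x_0)) t is strictly increasing,
    so the root is unique; 1 - 2h lies in (-1, 1), which gives a safe bracket.
    """
    F = lambda x0: x0 + (1.0 - 2.0 * h(x0)) * t - x
    return h(brentq(F, x - t - 1.0, x + t + 1.0))

for xq in [-1.0, 0.0, 1.0, 2.0]:
    print(f"u(1, {xq:+.1f}) = {u(1.0, xq):.3f}")
```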
Example 3.9 In order to solve the traffic equation explicitly, let us simplify the initial
condition to the piecewise linear function
h(x) =  1,      x ≤ 0,
        1 − x,  0 < x < 1,
        0,      x ≥ 1.
This is not C 1 , but the resulting solution could be interpreted as a weak solution in
the sense described in Sect. 1.2. We will discuss the precise definition in Chap. 10.
By the formula from Theorem 3.7, the characteristic lines are
x0 t, x0 0,
x(t) = x0 + (2x0 1)t, 0 < x0 < 1, (3.30)
x0 + t, x 1.
Fig. 3.7 Characteristic lines for the initial density shown in Fig. 3.6
[Fig. 3.8: plots of the solution u(t, x) at t = 0 and t = 2]
Inverting (3.30) to express x_0 in terms of (t, x) gives
x_0 =  x + t,               x ≤ −t,
       (x + t)/(1 + 2t),    −t < x < 1 + t,    (3.31)
       x − t,               x ≥ 1 + t.
In the middle region the corresponding velocity is
v(t, x) = 1 − u(t, x) = (x + t)/(1 + 2t). (3.32)
Let s(t) denote the position of the car at time t. For t ≥ −x_0 the velocity formula
(3.32) gives the equation
ds/dt = (s + t)/(1 + 2t), (3.33)
These trajectories are illustrated in Fig. 3.9. As we might expect, the cars further back
in the line wait longer before moving, but each car eventually moves forward and
gradually accelerates.
h(x) = 1/2 + (1/π) arctan(20x),
as shown in Fig. 3.10. This is the reverse of the initial condition of Example 3.8.
The characteristics specified in Theorem 3.7 now cross each other, as illustrated in
Fig. 3.11. The existence of crossings implies that a classical solution with this initial
condition cannot exist beyond the time of the first crossing.
If we were to trace the trajectories of individual cars, as we did in Example 3.9,
we would see that these also intersect each other at the points where characteristics
cross. In effect, the model predicts the formation of a traffic jam.
3.5 Exercises
∂u/∂t + c ∂u/∂x = 0,
on the quadrant t ≥ 0, x ≥ 0. Suppose the boundary and initial conditions are
u(0, x) = g(x),  x ≥ 0,
u(t, 0) = h(t),  t ≥ 0,
for g, h ∈ C¹[0, ∞).
3.2 In the continuity equation (3.4), external factors that break the conservation of
mass are accounted for by adding terms to the right-hand side.
(a) A forcing term f (t, x) is independent of the existing concentration. (In the
bloodstream model of Sect. 3.1, this could represent intravenous injection, for
example.) Assume that c is constant, f C 1 (R2 ), and g C 1 (R). Solve the
equation
∂u/∂t + c ∂u/∂x = f,  u(0, x) = g(x),
∂u/∂t + c ∂u/∂x = u,  u(0, x) = g(x),
∂u/∂t + 2t ∂u/∂x = 0,
for t ∈ R and x ∈ [0, 1]. Suppose the boundary conditions are given by
u(t, 0) = h_0(t),  u(t, 1) = h_1(t).
Find a relation between h_0 and h_1. (This shows that we can only impose a boundary
condition at one side of the interval [0, 1].)
3.4 If the spatial domain in the linear conservation equation (3.21) is a bounded
region Rn , then for a given velocity field v, the inflow boundary in
is defined as the set of boundary points where v points into . Fixing boundary
conditions on the inflow boundary will generally determine the solution in the interior.
Suppose = (1, 1) (1, 1) R2 with coordinates (x1 , x2 ). For the velocity
fields below, determine the characteristics and specify the inflow boundary. Draw a
sketch of for each case, indicating these features.
for some positive function f on (0, 1). Assume also that the concentration on the
left boundary {x1 = 0} is given by
u(t, 0, x2 ) = h(t, x2 ).
3.6 Burgers equation is a simple quasilinear equation that appears in models of gas
dynamics,
∂u/∂t + u ∂u/∂x = 0.
(a) Use the method of characteristics as described in Sect. 3.4 to find a formula for
the solution u(t, x) given the initial condition
u(0, x) =  0,    x ≤ 0,
           x/a,  0 < x < a,
           1,    x ≥ a.
Show that all of the characteristics originating from x_0 ∈ [0, 1] meet at the same
point (thus creating a shock).
3.7 In the mid-19th century, William Hamilton and Carl Jacobi developed a formu-
lation of classical mechanics based on ideas from geometric optics. In this approach
the dynamics of a free particle in R are described by a generating function u(t, x)
satisfying the Hamilton-Jacobi equation:
∂u/∂t + (1/2)(∂u/∂x)² = 0. (3.34)
dx/dt (t) = ∂u/∂x (t, x(t)),  x(0) = x_0. (3.35)
(a) Assuming that x(t) solves (3.35), use the chain rule to compute d 2 x/dt 2 .
(b) Differentiate (3.34) with respect to x and then restrict the result to (t, x(t)),
where x(t) solves (3.35). Conclude from (a) that
d²x/dt² = 0.
Hence, for some constant v0 (which depends on the characteristic),
x(t) = x0 + v0 t.
Du/Dt = (1/2) v_0²,
implying that
u(t, x_0 + v_0 t) = u(0, x_0) + (1/2) v_0² t.
(d) Use this approach to find the solution u(t, x) under the initial condition
u(0, x) = x 2 .
(For the characteristic starting at (0, x0 ), note that you can compute v0 by eval-
uating (3.35) at t = 0.)
Chapter 4
The Wave Equation
As we noted in Sect. 1.2, d'Alembert's derivation of the wave equation in the 18th
century was an early milestone in the development of PDE theory. In this chapter
we will develop this equation as a model for the vibrating string problem, and derive
d'Alembert's explicit solution in one dimension using the method of characteristics
introduced in Chap. 3.
In higher dimensions the wave equation is used to model electromagnetic or
acoustic waves. We will discuss the derivation of the acoustic model later in
Sect. 4.5. A clever reduction trick allows the solution formula for Rn to be deduced
from the one-dimensional case. The resulting integral formula yields insight into the
propagation of waves in different dimensions.
The chapter concludes with a discussion of the energy of a solution, based on the
physical principles of kinetic and potential energy.
Consider a flexible string that is stretched tight between two points, like the strings
on a violin or guitar. The stretching of the string creates a tension force T that pulls
in both directions at each point along its length. For simplicity, let us assume that
any other forces acting on the string, including gravity, are negligible compared to
the tension. The linear density of mass ρ is taken to be constant along the string.
For a violin string it is also reasonable to assume that the displacement of the
string is extremely small relative to its length. This assumption justifies taking T to
be a fixed constant, ignoring the additional stretching that occurs when the string
is displaced. It also allows us to treat horizontal and vertical components of the
displacement independently, so we can restrict our attention to the vertical.
Let the string be parametrized by x ∈ [0, ℓ]. The vertical displacement as a
function of time is denoted by u(t, x). To develop an equation for u, we first discretize
the model by subdividing the total length into segments of length Δx = ℓ/n for
some large n. Each segment has a mass ρΔx and is subject to the tension forces
pulling in the direction of its neighbors on either side.
For j = 0, . . . , n, let x_j := jΔx be the position of the jth segment along the string.
The segments j = 0 and j = n represent the fixed endpoints, with j = 1, . . . , n − 1
in the interior. Let u(t, x j ) denote the vertical displacement of the jth segment as a
function of time. Figure 4.1 illustrates this discretization (with displacements greatly
exaggerated).
To develop an equation for the string, we apply Newton's laws of motion to the
segments of the discretization, as if they were single particles. The jth particle is
being pulled by its neighbors with a force T on each side. Unless the string is straight,
these forces are not quite aligned.
In terms of the angles α_j and β_j labeled in Fig. 4.2, the net vertical force on a single
segment is
F(t, x_j) = T sin α_j + T sin β_j.
We have assumed that the relative displacements are extremely small, so the angles
α_j, β_j will be very small also. To leading order, we can replace the sines by tangents,
which are linear in u,
F(t, x_j) = (T/Δx) [ u(t, x_{j+1}) + u(t, x_{j−1}) − 2u(t, x_j) ]. (4.1)
The equation of motion for the jth segment now comes from Newton's law: mass
times acceleration equals force. At the point x j this translates to
ρΔx ∂²u/∂t² (t, x_j) = F(t, x_j). (4.2)
Using (4.1) on the right and letting the segment length Δx → 0 then gives
∂²u/∂t² − (T/ρ) ∂²u/∂x² = 0. (4.4)
This is the one-dimensional wave equation. The fixed ends of the string correspond
to Dirichlet boundary conditions,
u(t, 0) = u(t, ℓ) = 0.
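The discretized model above can be simulated directly, which gives a useful check on the derivation. The sketch below (illustrative; the parameter values, the sine initial shape, and the simple time-stepping scheme are assumptions, not from the text) integrates Newton's law (4.2) with the force (4.1) and fixed ends.

```python
import numpy as np

# Discretized string of Sect. 4.1: segments of length dx, tension T, density rho,
# fixed ends u_0 = u_n = 0.  All numerical values here are illustrative choices.
n, T, rho, ell = 100, 1.0, 1.0, 1.0
dx = ell / n
x = np.linspace(0.0, ell, n + 1)

u = np.sin(np.pi * x / ell)        # initial displacement (zero at both ends)
v = np.zeros_like(u)               # initial velocity
dt = 0.5 * dx * np.sqrt(rho / T)   # small time step, well inside the stability limit

def force_over_mass(u):
    """Acceleration from (4.1)-(4.2): F(t, x_j) / (rho * dx) at interior points."""
    a = np.zeros_like(u)
    a[1:-1] = (T / (rho * dx**2)) * (u[2:] + u[:-2] - 2.0 * u[1:-1])
    return a

period = 2.0 * ell * np.sqrt(rho / T)      # period of the fundamental mode
for _ in range(int(round(period / dt))):
    v += dt * force_over_mass(u)           # simple symplectic time step
    u += dt * v
    u[0] = u[-1] = 0.0                     # Dirichlet boundary conditions

# After one full period the initial shape is (approximately) recovered.
print(np.max(np.abs(u - np.sin(np.pi * x / ell))))
```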
4.2 Characteristics
For convenience, set c² := T/ρ in (4.4), assuming c > 0, and rewrite the equation
as
∂²u/∂t² − c² ∂²u/∂x² = 0. (4.5)
The constant c is called the propagation speed, for reasons that will become apparent
as we analyze the equation.
Let the physical domain be x R for the moment; we will discuss boundary
conditions later. The key to applying the method of characteristics to (4.5) is that the
differential operator appearing in the equation factors as a product of two first-order
operators, i.e.,
∂²/∂t² − c² ∂²/∂x² = ( ∂/∂t + c ∂/∂x )( ∂/∂t − c ∂/∂x ). (4.6)
Theorem 4.1 Given the initial conditions
u(0, x) = g(x),  ∂u/∂t (0, x) = h(x), (4.7)
for g ∈ C²(R) and h ∈ C¹(R), the wave equation (4.5) admits a unique solution
u(t, x) = (1/2)[ g(x + ct) + g(x − ct) ] + (1/2c) ∫_{x−ct}^{x+ct} h(τ) dτ. (4.8)
w := ∂u/∂t − c ∂u/∂x. (4.9)
By (4.6), w satisfies the linear conservation equation
∂w/∂t + c ∂w/∂x = 0.
The characteristics for this equation are given by x_+(t) = x_0 + ct. By Theorem 3.2
the unique solution with an initial condition w(0, x) = w_0(x) is
w(t, x) = w_0(x − ct). (4.10)
The unique solution to (4.12) under the initial condition u(0, x) = g(x) is given
by direct integration with respect to time:
u(t, x_0 − ct) = g(x_0) + ∫_0^t w(s, x_0 − cs) ds.
w_0(x) := ∂u/∂t (0, x) − c ∂u/∂x (0, x)
        = h(x) − c ∂g/∂x (x).
The w_0 contribution to (4.13) is then given by
(1/2c) ∫_{x−ct}^{x+ct} w_0(τ) dτ = (1/2c) ∫_{x−ct}^{x+ct} h(τ) dτ − (1/2) ∫_{x−ct}^{x+ct} ∂g/∂x (τ) dτ
  = (1/2c) ∫_{x−ct}^{x+ct} h(τ) dτ − (1/2) [ g(x + ct) − g(x − ct) ].
To highlight the role played by the characteristic lines in the solution of Theo-
rem 4.1, consider the functions
u_±(x) := (1/2) g(x) ± (1/2c) ∫_0^x h(τ) dτ.
In terms of these functions, the solution (4.8) takes the form
u(t, x) = u_+(x + ct) + u_−(x − ct), (4.14)
matching the form of the solution stated in (1.3). The subscripts in u_± indicate the
propagation direction, i.e., u + propagates to the right and u to the left. In either
direction the speed of propagation is the parameter c.
Example 4.2 Consider the wave equation (4.5) with the initial conditions h(x) = 0
and
g(x) =  (1 − x²)²,  |x| ≤ 1,
        0,          |x| > 1.
By (4.8) the solution is the superposition of two localized bumps, which propagate
in opposite directions as illustrated in Fig. 4.3.
In Example 4.2 the initial condition was supported in [1, 1], and we can see
in Fig. 4.3 that the resulting solution has support in a V-shaped region. This region
could be identified as the span of the characteristic lines emerging from the initial
support interval.
This restriction of the support of a solution is closely related to Huygens principle,
an empirical law for propagation of light waves published by Christiaan Huygens
in 1678. The one-dimensional wave equation exhibits a special, strict form of this
principle:
Theorem 4.3 (Strict Huygens principle in dimension one) Suppose u solves the
wave equation (4.5) for t 0, x R, with initial data given by (4.7). If the functions
g, h are supported in a bounded interval [a, b], then
supp u (t, x) R+ R; x [a ct, b ct] [a + ct, b + ct] .
Proof Consider the components of the solution (4.8). The g term will vanish unless
x ct [a, b]. In the h term, the integral over will vanish unless the interval
[x ct, x + ct] intersects [a, b]. This again requires that x ct [a, b].
The restriction of support described in Theorem 4.3 is illustrated in Fig. 4.4. At a
fixed observation point, the solution passes through in some finite time interval and
then vanishes forever.
Example 4.4 Suppose the initial data from Example 4.2 are altered to include a
singularity at x = 0. For example,
4.2 Characteristics 51
(1 |x|)2 , |x| 1,
g(x) =
0, |x| 1.
Then (4.8) still gives a formula for the solution even though g is not differentiable.
(This is a weak solution in the sense we will describe in Chap. 10). A set of solutions
at different points in time is plotted in Fig. 4.5. Observe that the original singularity
splits into two singularities, which propagate outward along the two characteristic
lines emanating from x = 0.
In the string model of Sect. 4.1 the domain of the wave equation (4.5) was restricted
to x [0, ], with Dirichlet boundary conditions
u
u(0, x) = g(x), (0, x) = h(x), (4.16)
t
52 4 The Wave Equation
with g C 2 [0, ], h C 1 [0, ]. Both g and h are assumed to vanish at the endpoints
of [0, ].
The solution of the wave equation on R provided in Theorem 4.1 can be adapted
to the boundary conditions (4.15). The idea is to extend g, h to R in such a way that
the formula (4.8) gives a solution satisfying the boundary conditions for all t.
Theorem 4.5 The unique solution of the wave equation (4.5) on [0, ], with Dirichlet
boundary conditions and satisfying the initial conditions (4.16), is given by (4.8),
provided g and h are extended to R as odd, 2-periodic functions.
Proof By linearity we can consider the g and h terms independently. Assume that
the g term,
1
g(x + ct) + g(x ct) , (4.17)
2
is defined for all t and x and satisfies the boundary conditions on [0, ] for all values
of t. At x = 0 the condition u(t, 0) = 0 will be satisfied if and only if
This is equivalent to the condition that g is odd with respect to reflection at the point
x = .
The composition of the reflections about 0 and gives translation by 2. Hence
the expression (4.17) satisfies the boundary conditions if and only if g is odd and
2-periodic.
A similar argument works for the h term,
x+ct
1
u(t, x) = h( ) d . (4.18)
2c xct
The requirement at x = 0 is
ct
h( ) d = 0, for all t 0. (4.19)
ct
Example 4.6 Consider the vibrating string problem with c = 1 and = 1. Suppose
that the solution initially has the form (4.14) with u + = 0 and the left-propagating
solution given by the function u shown in Fig. 4.6. For small t > 0 the solution is
but eventually the bump hits the boundary at x = 0, and we would like to understand
what happens then.
To apply Theorem 4.5, we must first solve for g and h in terms of u + . By (4.20)
we set g(x) = u + (x) and
du +
h(x) = u + (x + t)t=0 = (x).
t dx
The resulting functions g and h, extended to odd functions on R, are shown in Fig. 4.7.
According to Theorem 4.5 we can compute the solution from (4.8) using these
odd periodic extensions of g and h. The results are shown in Fig. 4.8. The bump
temporarily disappears at t = 0.3 and then reemerges as an inverted bump traveling
in the opposite direction.
The derivation of the string model in Sect. 4.1 assumed that no external forces act
on the string. Additional forces could be incorporated by adding extra terms to the
expression (4.1) for the force on a segment. In the continuum limit this yields a
2u 2 u
2
c = f, (4.21)
t 2 x 2
where f = f (t, x). The forcing term could be used to model plucking or bowing of
the string, for example.
In this section we introduce a technique, called Duhamels method, that allows us
to adapt solution methods for evolution equations to include a forcing term. The idea,
which is closely related to a standard ODE technique called variation of parameters,
is to reformulate the forcing term as an initial condition. This technique is named
for the 19th century French mathematician and physicist Jean-Marie Duhamel, who
developed the idea in a study of the heat equation.
4.4 Forcing Terms 55
To focus our attention on the driving term, let us consider (4.21) on the domain
x R with the initial conditions set to zero. For a given c, define the domain of
dependence of a point (t, x) with t > 0 and x R by
Dt,x := (s, x ) R+ R : x c(t s) x x + c(t s) .
This is a triangular region, as pictured in Fig. 4.9. The terminology refers to the fact
that the solution u(t, x) is influenced only by the values of f within Dt,x , as the next
result shows.
Theorem 4.7 For f C 1 (R), the unique solution of (4.21) satisfying the initial
conditions
u
u(0, x) = 0, (0, x) = 0,
t
is given by
1
u(t, x) = f (s, x ) d x ds. (4.22)
2c Dt,x
Proof For each s 0, let s (t, x) be the solution of the homogeneous wave equation
(4.5) for t s, subject to the initial conditions
s
s (t, x)t=s = 0, (t, x)t=s = f (s, x). (4.23)
t
This function can be written explicitly by shifting t to t s in (4.8),
x+c(ts)
1
s (t, x) = f (s, x ) d x . (4.24)
2c xc(ts)
The first term vanishes for all t by the initial condition (4.23), leaving
u t
s
(t, x) = (t, x) ds, (4.26)
t 0 t
u
(0, x) = 0.
t
Now let us check that the u defined in (4.25) solves (4.21). Differentiating (4.26)
once more gives
t 2
2u s s
(t, x) = (t, x) + (t, x) ds. (4.27)
t 2 t s=t 0 t
2
By (4.23) the first term on the right is equal to f (t, x). To simplify the second term,
we use the fact that s solves (4.5) and the definition of u to compute
t
2 s t
2 s
(t, x) ds = c2 (t, x) ds
0 t 2 0 x 2
2u
= c2 2 .
x
Therefore, (4.27) reduces to
2u 2 u
2
= f + c ,
t 2 x 2
proving that u solves (4.21).
Combining (4.24) and (4.25), we can write the formula for u as
t x+c(ts)
1
u(t, x) = f (s, x ) d x ds,
2c 0 xc(ts)
where 0 := / and > 0 is the driving frequency. Since sin(0 x) is odd and
2-periodic, the extension required by Theorem 4.5 is automatic. As in Theorem 4.7,
let us set the initial conditions g = h = 0 to focus on the forcing term.
Substituting (4.28) into (4.22) gives
58 4 The Wave Equation
t x+ts
1
u(t, x) = cos(s) sin(0 x ) d x ds
2 0 xt+s
t
1
= [cos(0 (x t + s)) cos(0 (x + t s))] cos(s) ds
20 0
For = 0 we obtain
sin(0 x)
u(t, x) = [cos(t) cos(0 t)] .
02 2
Note that the x dependence of the solution matches that of the forcing term. The
interesting part of this solution is the oscillation, which includes both frequencies
and 0 . Figure 4.11 illustrates the behavior of the amplitude as a function of time, in
a case where 0 . The large-scale oscillation has a period 1/, corresponding to
the low driving frequency. The solution also exhibits fast oscillations at the frequency
0 which depends only on .
For = 0 the formula (4.29) gives the solution
t
u(t, x) = sin(0 x) sin(0 t).
20
where C and are physical constants. We will fix background atmospheric values
of the pressure P0 and density 0 and focus on the deviations
u := P P0 , := 0 .
P0
u= . (4.32)
0
The dynamics of the gas are modeled with two conservation laws. The first is
conservation of mass (3.20), which yields
+ (v) = 0.
t
Since and v are both assumed to be very small, for the leading approximation we
can replace by 0 to obtain
+ 0 v = 0. (4.33)
t
The second dynamical law is conservation of momentum. This is encapsulated in
a fluid equation derived by Euler in 1757, called Eulers force equation:
P = + v v.
t
Eulers equation is an aggregate form of Newtons second law (force equals mass
times acceleration). Note that the acceleration term on the right is the Lagrangian
derivative of the velocity field v. As above, we substitute P = P0 + u and = 0 +
and keep only the first order terms to derive the linearization
v
u = 0 . (4.34)
t
The final step is to eliminate the velocity field from the equation. Substituting
(4.32) into (4.33) and differentiating with respect to time gives
2u
= P0 ( v). (4.35)
t 2 t
On the other hand, by (4.34),
v
( v) =
t t
u
=
0
1
= u.
0
4.5 Model Problem: Acoustic Waves 61
2u P0
u = 0.
t 2 0
2u
u = 0.
t 2
This problem can be reduced to the one-dimensional case by a clever averaging trick.
For f C 0 (R3 ), define
1
f(x; ) := f (w) d S(w), (4.36)
4 B(x;)
f(x; )
lim = f (x). (4.37)
0
The dimensional reduction of the wave equation is based on the following formula
of Jean-Gaston Darboux.
Lemma 4.9 (Darbouxs formula) For f C 2 (R),
2
f (x; ) = x f(x; ).
2
Proof To compute the radial derivative of the spherical average, it is helpful to change
coordinates by setting w = x + y, so that the domain of y is the unit sphere S2 R3 ,
1 1
f (w) d S(w) = f (x + y) d S( y).
42 B(x;) 4 S2
62 4 The Wave Equation
A further differentiation using (4.39) and the radial derivative formula from Exer-
cise 2.4 then gives
2 1
f (x; ) = f (w) d 3 w
2 4 B(x;)
(4.40)
1
= f (w) d S(w).
4 B(x;)
Proof Define
1
x; ) :=
u(t, u(t, w) d S(w).
4 B(x;)
Since u satisfies the wave equation, differentiating under the integral gives
2 1
x; ) =
u(t, u(t, w) d S(w).
t 2 4 B(x;)
2
x; ) = x u(t,
u(t, x; ).
t 2
Lemma 4.9 then shows that
2 2
2 x; ) = 0.
u(t, (4.42)
t 2
64 4 The Wave Equation
The initial conditions for u follow from the initial conditions for u,
u(0, x; ) = g(x;
),
u(0, x; ) = h(x; ).
t
By (4.37) we also have a boundary condition at = 0,
x; 0) = 0.
u(t,
Using Theorem 4.1 and the reflection argument from Theorem 4.5, we conclude
that the unique solution of (4.42) under these conditions is given by extending g(x; )
and h(x; ) to R with odd symmetry and then using the dAlembert formula,
+t
1 1
x; ) =
u(t,
g(x; + t) + g(x;
t) + h(x; ) d . (4.43)
2 2 t
x; )
u(t,
u(t, x) = lim . (4.44)
0
To evaluate this limit, first note that for 0 t the odd symmetry of g and h with
respect to can be used to rewrite (4.43) as
1 1 t+
x; ) =
u(t,
g(x; t + ) g(x;
t ) + h(x; ) d .
2 2 t
1
lim
g(x; t + ) g(x;
t ) = g(x;
t),
0 2 t
and t+
1
lim h(x; ) d = h(x; t).
0 2 t
This matches the result of Theorem 4.3 for the one-dimensional wave equation. The
strict Huygens phenomenon is readily observable for acoustic waves, in the fact that
4.6 Integral Solution Formulas 65
Fig. 4.13 The plot on the right shows the observed waveform at a distance 3 from the origin, caused
by the initial radial impulse shown on the left
a sudden sound like a clap propagates as a sharp wavefront that is heard as a single
discrete event, without aftereffects unless there are reflective surfaces to cause an
echo. Figure 4.13 illustrates this effect; an observer located away from the origin
experiences a waveform of duration equal to the diameter of the initial impulse. The
strict Huygens principle holds in every odd dimension, but fails in even dimensions,
as we will illustrate below.
The spherical averaging trick used for Theorem 4.10 also works in higher odd
dimensions, although the solution formulas become more complicated. For even
dimensions, solution formulas can be derived from the odd-dimensional case by a
technique called the method of descent.
We will work this out for the two-dimensional case. Suppose u C 2 ([0, )R2 )
solves the wave equation with initial conditions
u
u|t=0 = g, = h,
t t=0
Corollary 4.11 (Poissons integral formula) For u C 2 ([0, )R2 ), suppose that
2u
u = 0
t 2
under the initial conditions
u
u|t=0 = g, = h.
t t=0
Then
66 4 The Wave Equation
t g(x t y) 2 t h(x t y) 2
u(t, x) = d y + d y.
t 2 D 1 | y|2 2 D 1 | y|2
The claimed two-dimensional solution follows by substituting this formula for g and
the corresponding result for h in the Kirchhoff formula from Theorem 4.10.
Corollary 4.11 shows that the range of influence of (t0 , x 0 ) for the two-dimensional
wave equation is the solid region bounded by the forward light cone + (t0 , x 0 ), not
just the surface. Thus, in R2 the wave caused by a sudden disturbance has a lingering
tail after the initial wavefront has passed, as illustrated in Fig. 4.14.
Summing over the segments and passing the continuum limit gives a formula for the
total kinetic energy of the string:
2
u
E K := d x.
2 0 t
The potential energy of the solution can be calculated as the energy required to
move the string from zero displacement into the configuration described by u(t, ). Let
us represent this process by scaling the displacement to su(t, ) for s [0, 1]. By (4.1)
the opposing force generated by the tension also scales proportionally to s. The work
required to shift the segment at x j from s to s +s is therefore sF(t, x j )u(t, x j )s.
The potential energy associated with this segment is
1
E P (t, x j ) := sF(t, x j )u(t, x j ) ds
0
1
= u(t, x j )F(t, x j )
2
T 2u
u(t, x j ) 2 (t, x j )x,
2 x
68 4 The Wave Equation
with a minus sign because the displacement and force are in opposing directions.
Summing over the segments and taking the continuum limit gives the total poten-
tial energy,
T 2u
E P (t) := u d x.
2 0 x 2
For comparison to the kinetic term, it is convenient to integrate by parts and rewrite
this in the form
T u 2
E P (t) = d x.
2 0 x
E = EK + E P .
Under the condition u| = 0, Greens first identity (Theorem 2.10) applies to the
second term to give
u u
u d n x = u d n x. (4.47)
t t
Thus
d u 2u
E[u] = c2 u d n x,
dt t t 2
2u
c2 u = f, u| = 0,
t 2
u
u|t=0 = g, = h,
t t=0
is uniquely determined by the functions f, g, h.
Proof If u 1 and u 2 are solutions of the equation with the same initial conditions, then
w := u 1 u 2 satisfies (4.30) with the initial conditions
w
w(0, x) = 0, (0, x) = 0.
t
At time t = 0 this gives E[w] = 0, and Theorem 4.12 then implies that E[w] = 0
for all t. Since the terms in the integrand of E[w] are non-negative, they must each
vanish. This shows that w is constant, and hence w = 0 by the initial conditions.
Therefore u 1 = u 2 .
4.8 Exercises
4.1 Suppose u(t, x) is a solution of the wave equation (4.5) for x R. Let P be a
parallelogram in the (t, x) plane whose sides are characteristic lines. Show that the
value of u at each vertex of P is determined by the values at the other three vertices.
4.2 The wave equation (4.5) is an appropriate model for the longitudinal vibrations
of a spring. In this application u(t, x) represents displacement parallel to the spring.
Suppose that spring has length and is free at the ends. This corresponds to the
Neumann boundary conditions
u u
(t, 0) = (t, ) = 0, for all t 0.
x x
Assume the initial conditions are g and h as in (4.16), which also satisfy Neumann
boundary conditions on [0, ]. Determine the appropriate extensions of g and h
from [0, ] to R so that the solution u(t, x) given by (4.8) will satisfy the Neumann
boundary problem for all t.
4.3 In the derivation in Sect. 4.1, suppose we include the effect of gravity by adding
a term gx to the discrete equation of motion (4.2), where g > 0 is the constant
of gravitational acceleration. The wave equation is then modified to
2u 2 u
2
c = g. (4.48)
t 2 x 2
70 4 The Wave Equation
Assume that x [0, ], with u satisfying Dirichlet boundary conditions at the end-
points.
(a) Find an equilibrium solution u 0 (x) for (4.48), that satisfies the boundary condi-
tions but does not depend on time.
(b) Show that if u 1 is a solution of the original wave equation (4.5), also with Dirichlet
boundary conditions, then u = u 0 + u 1 solves (4.48).
(c) Given the initial conditions u(x, 0) = 0, u
t
(x, 0) = 0, find the corresponding
initial conditions for u 1 . Then apply Theorem 4.5 to find u 1 and hence solve
for u.
4.5 The telegraph equation is a variant of the wave equation that describes the
propagation of electrical signals in a one-dimensional cable:
2u u 2 u
2
+ a + bu c = 0,
t 2 t x 2
where u(t, x) is the line voltage, c is the propagation speed, and a, b > 0 are
determined by electrical properties of the cable (resistance, inductance, etc.). Show
that the substitution
u(t, x) = eat/2 w(t, x)
reduces the telegraph equation to an ordinary wave equation for w, provided a and
b satisfy a certain condition. Find the general solution in this case. (This result has
important practical applications, in that the electrical properties of long cables can
be tuned to eliminate distortion.)
2u 2 u
2
c = f.
t 2 x 2
u u
v1 := , v2 := .
t x
Show that v satisfies a vector equation
v v
A = b. (4.49)
t x
(b) The vector equation (4.49) can be solved by diagonalizing A. Check that if we
set
1 c
T := ,
1 c
then
1 c 0
T AT = .
0 c
w := T v,
u
u(0, x) = g(x), (0, x) = h(x),
t
into initial conditions for w1 and w2 , and then solve (4.50) using the method of
characteristics.
(d) Combine the solutions for w1 and w2 to compute v1 = u/t, and then integrate
to solve for u. Your answer should be a combination of of the dAlembert formula
(4.8) and the Duhamel formula (4.22).
u
iu = 0 (4.51)
t
(ignoring the physical constants). Suppose that u(t, x) is a solution of (4.51) for
t [0, ) and x Rn , with initial condition
u(0, x) = g(x).
Assume that
|g|2 d n x < .
Rn
2u
c2 u = 0. (4.52)
t 2
The plane wave solutions have the form
4.9 The Klein-Gordon equation in Rn is a variant of the wave equation that appears
in relativistic quantum mechanics,
2u
u + m 2 u = 0, (4.54)
t 2
where m is the mass of a particle.
4.8 Exercises 73
(a) Find a formula for = (k, m) under which this equation admits plane wave
solutions of the form (4.53).
(b) Show that we can define a conserved energy E for this equation by adding a term
proportional to u 2 to the integrand in (4.46).
Chapter 5
Separation of Variables
Some PDE can be split into pieces that involve distinct variables. For example, the
equation
u
a(t)b(x)u = 0
t
could be written as
1 u
= b(x)u,
a(t) t
provided a(t) = 0. This puts all of the t derivatives and t-dependent coefficients on
the left and all of the terms involving x on the right.
Splitting an equation this way is called separation of variables. For PDE that
admit separation, it is natural to look for product solutions whose factors depend on
the separate variables, e.g., u(t, x) = v(t)(x). The full PDE then reduces to a pair
of equations for the factors. In some cases, one or both of the reduced equations is
an ODE that can be solved explicitly.
This idea is most commonly applied to evolution equations such as the heat or
wave equations. The classical versions of these PDE have constant coefficients, and
separation of variables can thus be used to split the time variable from the spatial
variables. This reduces the evolution equation to a simple temporal ODE and a spatial
PDE problem.
Separation among the spatial variables is sometimes possible as well, but this
requires symmetry in the equation that is also shared by the domain. For example,
we can separate variables for the Laplacian on rectangular or circular domains in R2 .
But if the domain is irregular or the differential operator has variable coefficients,
then separation is generally not possible.
Despite these limitations, separation of variables plays a significant role the devel-
opment of PDE theory. Explicit solutions can still yield valuable information even if
they are very special cases.
Frequency (Hz)
Intensity (dB)
In 1636 the mathematician Marin Mersenne published his observation that a vibrat-
ing string produces multiple pitches simultaneously. The most audible pitch corre-
sponds to the lowest frequency of vibration, called the fundamental tone of the string.
Mersenne also detected higher pitches, at integer multiples of the fundamental fre-
quency. (The relationship between frequency and pitch is logarithmic; doubling the
frequency raises the pitch by one octave.)
The higher multiples of the fundamental frequency are called overtones of the
string. Figure 5.1 shows the frequency decomposition for a sound sample of a bowed
violin string, with a fundamental frequency of 440 Hz. The overtones appear as peaks
in the intensity plot at multiples of 440.
At the time of Mersennes observations, there was no theoretical model for string
vibration that would explain the overtones. The wave equation that dAlembert sub-
sequently developed (a century later) gave the first theoretical justification. However,
this connection is not apparent in the explicit solution formula developed in Sect. 4.3.
To understand how the overtones are predicted by the wave equation, we need to
organize the solutions in terms of frequency.
Pt u u = 0, (5.1)
u(t, x) = v(t)(x),
Pt v = v, = . (5.2)
Pt v v = 0.
1 1
Pt v = .
v
= . (5.3)
This is called the Helmholtz equation, after the 19th century physicist Hermann von
Helmholtz. The minus sign is included so that 0 for the most common types of
boundary conditions. Adapting the linear algebra terminology, we refer to the number
in (5.3) as an eigenvalue and the corresponding solution as an eigenfunction.
The Helmholtz equation is sometimes called the Laplacian eigenvalue equation.
We will present a general analysis of the Helmholtz problem on any bounded
domain in Rn in Chap. 11, and later in this chapter we will consider some two- or three-
dimensional cases for which further spatial separation is possible. For the remainder
of this section we restrict our attention to problems in one spatial dimensional, for
which (5.3) is an ODE.
Theorem 5.2 For C 2 [0, ] the equation
d 2
= , (0) = () = 0, (5.4)
dx2
has nonzero solutions only if
2 n2
n :=
2
78 5 Separation of Variables
Using the Dirichlet boundary conditions we can integrate by parts on the right without
any boundary term, yielding
2
d
|| d x =
2 d x. (5.6)
dx
0 0
Some of the eigenfunctions obtained in Theorem 5.2 are illustrated in Fig. 5.2. For
the sake of
application to our original string model, let us reinstate the propagation
speed c := T / and write the string equation as
2u
c2 u = 0, u(t, 0) = u(0, ) = 0. (5.7)
t 2
With the spatial solution given by the eigenfunction associated to n , the correspond-
ing temporal eigenvalue equation is also a harmonic oscillator ODE,
d 2v
= c2 n v.
dt 2
The solutions could be written in terms of sines and cosines, but for the temporal
component it is usually more convenient to use the complex exponential form. The
general complex-valued solution is
with an , bn C and cn
n := c n = . (5.8)
For real-valued solutions, the coefficients are restricted by bn = an .
Combining the temporal and spatial components gives a set of solutions for the
vibrating string problem:
u n (t, x) = an ein t + bn ein t sin( n x), (5.9)
for n N.
The functions (5.9) are referred to as pure-tone solutions, because they model
oscillation at a single frequency n . In the case of visible light waves, the frequency
corresponds directly to color. For this reason the set of frequencies {n } is called
the spectrum. By association, the term spectrum is also used for sets of eigenvalues
appearing in more general problems. For example, the set {n } of eigenvalues for
which the Helmholtz problem has a nontrivial solution is called the spectrum of the
Laplacian, even though n is proportional to the square of the frequency n .
From (5.8) we can deduce the fundamental tone of the string, as predicted by
dAlemberts wave equation model. To convert frequency to the standard unit of Hz
(cycles per second), we divide 1 by 2 to obtain the formula
1 1 T
= . (5.10)
2 2
will knock out the lower frequencies, a practice string players refer to as playing a
harmonic.
As this discussion illustrates, the spectral analysis of the wave equation is more
directly connected to experimental observation than the explicit solution formula
(4.8). The displacement of a vibrating string is technically difficult to observe directly
because the motion is both rapid and of small amplitude. Such observations were
first achieved by Hermann von Helmholtz in the mid-19th century.
Example 5.3 The one-dimensional wave equation can be used to model for the fluc-
tuations of air pressure inside a clarinet. The interior of a clarinet is essentially a
cylindrical column, and for simplicity we can assume that the pressure is constant
on cross-sections of the cylinder, so that the variations in pressure are described by
a function u(t, x) with x [0, ], where is the length of the instrument. Pressure
fluctuations are measured relative to the fixed atmospheric background, with u = 0
for atmospheric pressure.
The maximum pressure fluctuation occurs at the mouthpiece at x = 0, where a
reed vibrates as the player blows air into the instrument. Since a local maximum
of the pressure corresponds to a critical point of u(t, ), the appropriate boundary
condition is
u
(t, 0) = 0. (5.11)
x
At the opposite end the air column is open to the atmosphere, so the pressure does
not fluctuate,
u(t, ) = 0. (5.12)
d 2
= , (0) = 0, () = 0. (5.13)
dx2
The boundary condition at x = 0 implies that
(x) = cos( x),
2
n := (n 21 )2 ,
2
Fig. 5.3 The first four eigenfunctions for pressure fluctuations in a clarinet
Hz
dB
In dimension greater than one, spatial separation of variables is essentially the only
way to compute explicit solutions of the Helmholtz equation (5.3), and this only
works for very special cases. The most straightforward example is a rectangular
domain in Rn , which we will discuss in the exercises.
In this section we consider the simplest non-rectangular case, based on polar
coordinates (r, ) in R2 . Separation in polar coordinates allows us to compute eigen-
functions and eigenvalues on a disk in R2 , for example.
With x = (x1 , x2 ) in R2 , polar coordinates are defined by
2 2
= +
x12 x22
and then converting the partials with respect to x1 and x2 into r and derivatives
using the chain rule. The result is
1 1 2
= r + 2 2. (5.14)
r r r r
Note that there are no mixed partials involving both r and , and that the coefficients
do not depend on . This allows separation of r and , provided the domain is defined
by specifying ranges of r and .
To solve the radial eigenvalue equation, we will use Bessel functions, named for
the astronomer Friedrich Bessel. Bessels equation is the ODE:
with k C in general. For our application k will be an integer. The standard pair of
linearly independent solutions is given by the Bessel functions Jk (z) and Yk (z).
The Bessel J-functions, a few of which are pictured in Fig. 5.5, satisfy
J1
J2 J3
5.3 Circular Symmetry 83
z
k
2 l
1 z
Jk (z) = . (5.17)
2 l=0
l!(k + l)! 4
Together with (5.16), this shows that Jk (z) ck z |k| as z 0 for any k Z. In
contrast, the Bessel Y-function satisfies Yk (z) ck z |k| as z 0.
A change of sign in (5.15) gives the equation
Its standard solutions are the modified Bessel functions Ik (z) and K k (z). As z 0
these satisfy the asymptotics Ik (z) ck z |k| , as illustrated in Fig. 5.6, and K k (z)
ck z |k| .
= ,
that factors as a product h(r )w(). Then, up to a multiplicative constant, has the
form
,k (r, ) := h k (r )eik , (5.19)
Proof Under the assumption = hw, the Helmholtz equation reduces by (5.14) to
w h h 2w
r + 2 + hw = 0.
r r r r 2
I1
I2
I3
84 5 Separation of Variables
wk () := eik , k Z.
Before examining the radial equation, let us note that the assumption that is
C 2 imposes
a boundary condition at r = 0. To see this, first note that the function
r = x12 + x22 is continuous at (0, 0) but not differentiable. For r > 0,
r xj
= ,
x j r
which does not have a limit as r 0. On the other hand, the functions
r ei = x1 i x2
These functions are polynomial and hence C . We will see below that the solutions
of the radial equation corresponding to = k 2 satisfy h(r ) ar k as r 0, for
some constant a. The differentiability of at the origin will require the asymptotic
condition
h k (r ) ar |k| (5.23)
as r 0.
For wk () = eik , the radial component of (5.20) is
2
r h k + (r 2 k 2 )h k = 0. (5.24)
r
5.3 Circular Symmetry 85
2 k 2 = 0,
with solutions = k. Since a second order ODE has exactly two independent
solutions, the functions r k give a full set of solutions for k = 0. For k = 0 the two
possibilities are 1 and ln r . By the condition (5.23), the solutions ln r and r |k| must
be ruled out. The only possible solutions for = 0 are thus
h k (r ) = r |k| .
We should check that this function is at least C 2 at the origin. In fact, it follows from
the power series expansion (5.17) that ,k is C on R2 .
Similar
considerations apply for < 0, except that this time the substitution
z = r reduces (5.24) to (5.18). The condition (5.23) is satisfied only for the
solution Ik ( r ).
Example 5.5 The linear model for the vibration of a drumhead is the wave equation
(4.30). For a circular drum we can take the spatial domain to be the unit disk D :=
{r < 1} R2 . Lemma 5.1 reduces the problem of determining the frequencies of
the drum to the Helmholtz equation,
= , |D = 0. (5.25)
The possible product solutions are given by Lemma 5.4, subject to the boundary
condition h k (1) = 0. This rules out 0, because in that case h k (r ) has no zeros
for r > 0.
For > 0, we have h k (r ) = Jk ( r ), and the boundary condition takes the form
Jk ( ) = 0.
86 5 Separation of Variables
Table 5.1 Zeros of the Bessel function Jk . For each k, the spacing between zeros approaches as
m
k jk,1 jk,2 jk,3 jk,4
0 2.405 5.520 8.654 11.792
1 3.832 7.016 10.174 13.324
2 5.136 8.417 11.620 14.796
3 6.380 9.761 13.015 16.223
4 7.588 11.065 14.373 17.616
(This is analogous to the condition sin( ) = 0 from the one-dimensional string
problem.) Although Jk is not a periodic function, it does have an infinite sequence
of positive zeros with roughly evenly spacing. It is customary to write these zeros in
increasing order as
0 < jk,1 < jk,2 < . . . .
k,m = jk,m
2
,
k,m := cjk,m ,
k=0
k=1
k=2
Another special case that allows separation of spatial variables is spherical symmetry
in R3 . Spherical coordinates (r, , ) are defined through the relation
Note that is the azimuthal angle here and the polar angle, consistent with the
notation from Sect. 5.3. (This convention is standard in mathematics; in physics the
roles are often reversed.)
As in the circular case, we can use the chain rule to translate the three-dimensional
Laplacian into spherical variables:
1 2 1 1 2
= 2 r + 2 sin + 2 2 . (5.27)
r r r r sin r sin 2
It is not immediately clear that this operator admits separation, because the coef-
ficients depend on both r and . Note, however, that we can factor r 2 out of the
angular derivative terms, to write (5.27) as
1 1
= r2 + 2 S2 , (5.28)
r 2 r r r
where
1 1 2
S2 := sin + . (5.29)
sin sin2 2
Here S2 stands for the unit sphere {r = 1} R3 , and S2 is called the spherical
Laplacian.
The expression (5.29) may look awkward at first glance, but S2 is a very natural
operator geometrically. From the fact that is invariant under rotations of R3 about
the origin, we can deduce that S2 is also invariant under rotations of the sphere. It is
possible to show that S2 is the only second-order operator with this property, up to
a multiplicative constant. The operator S2 is thus as symmetric as possible, and the
reason that (5.29) looks so complicated is that the standard coordinate system (, )
does not reflect the full symmetry of the sphere.
We will discuss the radial component of (5.28) in an example below. For now let
us focus on the Helmholtz problem on the sphere, which allows further separation
of the and variables.
The classical ODE that arises from separation of the angle variables is the asso-
ciated Legendre equation:
2
(1 z ) f (z) 2z f (z) + ( + 1)
2
f (z) = 0, (5.30)
1 z2
Associated to this set of Legendre functions are functions of the angle variables
called spherical harmonics. These are defined by
where cl,m is a normalization constant whose value is not important for us.
From (5.31), using z = cos and 1 z 2 = sin2 , we can see that Ylm is a
polynomial of degree l in sin and cos . This makes it relatively straightforward
to check that each Ylm (, ) is a smooth function on S2 .
Lemma 5.6 Suppose u C 2 (S2 ) is a solution of the equation
S2 u = u (5.33)
2w
= w,
2
on R3 . (We have omitted the physical constants.) The eigenfunctions are assumed
to be bounded near r = 0 and decaying to zero as r .
Since the term 1/r is radial, separation of the radial and angular variables is
possible in (5.34). By Lemma 5.6, the angular components are given by spherical
harmonics. A corresponding full solution has the form
Substituting this into (5.34) and using the spherical form of the Laplacian (5.28)
gives the radial equation
1 d d l(l + 1) 1
r2 + h(r ) = h(r ). (5.36)
r 2 dr dr r2 r
One strategy used to analyze an ODE such as (5.36) is to first consider the asymptotic
behavior of solutions as r 0 or .
Suppose we assume h(r ) r as r 0. Plugging this into (5.36) and comparing
the two sides gives a leading term
on the left side, with all other terms of order r 1 or less. This shows that h(r ) r
as r 0 is possible only if
5.4 Spherical Symmetry 91
( + 1) = l(l + 1).
as r 0.
As r , if we consider the terms in (5.36) with coefficients of order r 0 and
drop the rest, the equation becomes
h (r ) h(r ). (5.37)
If 0 then this shows that h(r ) could not possibly decay at infinity. Hence we
assume that < 0 and set
2 := ,
h(r ) cer
as r .
Determining these asymptotics allows us to make an educated guess for the form
of the solution. For an as yet undetermined function q(r ), we set
with the conditions that q(0) = 1 and q(r ) has subexponential growth as r .
The goal of setting up the solution this way is that the equation for q(r ) will simplify.
Substituting (5.38) into (5.36) leads to the equation
2(k + l + 1) 1
ak+1 = ak . (5.40)
(k + 1)(k + 2l + 2)
If we assume that the numerator of (5.40) never vanishes, then the recursion
relation implies that
(2)k
ak
k!
1
= ,
2n
for some integer n l + 1. Under this assumption the sequence ak terminates at
k = n l 1. Since = 2 , this restriction on gives the set of eigenvalues
1
n := , n N.
4n 2
This is in fact the complete set of eigenvalues for this problem, given the conditions
we have imposed at r = 0 and r . With this eigenvalue calculation, Schrdinger
was able to give the first theoretical explanation of the emission spectrum of hydrogen
gas (i.e., the set of wavelengths observed when the gas is excited electrically). The
origin of these emission lines had been a mystery since their discovery by Anders
Jonas ngstrm in the mid-19th century.
Each value of n corresponds to a family of eigenfunctions given by
for l {0, . . . , n 1}, m {l, . . . , l}. Here qn,l (r ) denotes the polynomial of
degree n l 1 with coefficients specified by (5.40). To compute the multiplicity
of n , we count n 1 choices for l and then 2l + 1 choices of m for each l. The total
multiplicity is
n1
(2l + 1) = n 2 .
l=0
5.5 Exercises 93
5.5 Exercises
u = 0
that factor as a product u(x1 , x2 ) = g(x1 )h(x2 ), under the boundary conditions
5.2 The linear model for vibrations of a rectangular drumhead is the wave equa-
tion (4.30) with Dirichlet boundary conditions on a rectangle R := [0, 1 ][0, 2 ]
R2 . Separation of variables leads to the corresponding Helmholtz problem
= , |R = 0.
Find the eigenfunctions of product type, (x1 , x2 ) = 1 (x1 )2 (x2 ), and the associ-
ated frequencies of vibration. For 1 = 2 , compare the ratios of these frequencies
to Table 5.2. Would a square drum do a better job of producing a definite pitch?
5.3 The one-dimensional heat equation for the temperature u(t, x) of a metal bar
of length is
u 2u
2 = 0,
t x
for t 0 and x (0, ). (We will derive this in Sect. 6.1.) If the ends of the bar are
insulated, then u should satisfy Neumann boundary conditions
u u
(t, 0) = (t, ) = 0.
x x
2u u
+ u = 0, (5.41)
t 2 t
and find the set of possible values of . In particular, show that Im > 0 if > 0,
which implies that the solutions decay exponentially in time. Does this decay rate
depend on the oscillation frequency?
u
(u 2 ) = 0,
t
for t 0, x Rn .
(a) Assuming a product solution of the form u(t, x) = v(t)(x), separate variables
and find the equations for v(t) and (x).
(b) Show that (x) = |x|2 solves the spatial equation, and find the corresponding
function v(t) given the initial condition v(0) = a > 0. (Observe that the solution
blows up at a finite time that depends on a.)
which is a sector within the unit disk. Find the eigenvalues of on with Dirichlet
boundary conditions.
5.7 The quantum energy levels of a harmonic oscillator in Rn are the eigenvalues
of the equation
+ |x|2 = , (5.42)
Substitute
(x) = q(x)ex /2
2
(d) Returning to the original problem, by reducing (5.42) to n copies of the case
(5.43), deduce the set of eigenvalues .
5.8 Let B3 R3 be the unit ball {r < 1}. Consider the Helmholtz problem
= ,
where Ylm is the spherical harmonic introduced in Sect. 5.4. Find the radial equa-
tion for h l (r ).
(b) For l = 0 show that the radial equation is solved by
sin( r )
h 0 (r ) = .
r
What set of eigenvalues does this give?
(c) Show that the substitution,
h l (r ) = r 2 fl ( r ),
1
reduces the equation from (a) to a Bessel equation (5.15) for fl (z), with a frac-
tional value of k. Use this to write the solution h l (r ) in terms of Jk .
(d) Express the eigenvalues in terms of Bessel zeros with fractional values of k.
Chapter 6
The Heat Equation
In physics, the term heat is used to describe the transfer of internal energy within a
system of particles. When this transfer results from collective motion of particles in
a gas or fluid, the process is called convection. The continuity equation developed in
Sect. 3.1 describes convection by fluid flow, which is the special case called advection.
Another form of convection is conduction, where the heat transfer caused by random
collisions of individual particles.
The basic mathematical model for heat conduction is a PDE called the heat equa-
tion, developed by Joseph Fourier in the early 19th century. In this chapter we will
discuss the derivation and develop some basic properties of this equation, our first
example of a PDE of parabolic type.
A metal rod that is sufficiently thin can be treated as one-dimensional system. Let
u(t, x) denote the temperature of the rod at time t and position x, with x R for
now.
There are two physical principles that govern the flow of heat in the rod. The
first is the relationship between thermal (internal) energy and temperature. Thermal
energy is proportional to a product of density and temperature, by a constant c called
the specific heat of the material. Thus, the total thermal energy in a segment [a, b]
is given by
b
U =c u d x. (6.1)
a
We will assume that the density is constant, although it could be variable in some
applications.
The second principle is Fouriers law of heat conduction, which describes how heat
flows from hotter regions to colder regions. In its one-dimensional form, Fouriers
law says that the flux of thermal energy across a given point is given by
Springer International Publishing AG 2016 97
D. Borthwick, Introduction to Partial Differential Equations,
Universitext, DOI 10.1007/978-3-319-48936-0_6
98 6 The Heat Equation
u
q = k , (6.2)
x
where the constant k > 0 is the thermal conductivity of the material.
Assuming that the rod is thermally isolated, conservation of energy dictates the
rate of change of the thermal energy within the segment is equal to the flux across
its boundaries, i.e.,
dU
(t) = q(t, a) q(t, b). (6.3)
dt
As in our derivation of the local equation for conservation of mass, the combination
of (6.1) and (6.3) yields an integral equation
b
u q
c + d x = 0.
a t x
u q
c + = 0.
t x
Using the formula for q from Fouriers law (6.2), we obtain the one-dimensional
heat equation:
u k 2u
= 0. (6.4)
t c x 2
For a rod of finite length, the solution u will satisfy boundary conditions that
depend on how the rod interacts with its environment. If the rod is parametrized by
x [0, ] and we assume that each end is held at a fixed temperature, then this fixes
the values at the endpoints,
for all t. These are inhomogeneous Dirichlet boundary conditions. In one dimension
the inhomogeneous problem can be reduced very simply to the homogeneous case
by noting that
x x
u 0 (x) := T0 + T1 1
gives an equilibrium solution to the heat equation satisfying the boundary condi-
tions (6.5). By the superposition principle, u u 0 satisfies the heat equation with
homogeneous Dirichlet conditions.
Another possible boundary assumption is that the ends are insulated, so that no
thermal energy flows in or out. This means that q vanishes at the boundary, yielding
the Neumann boundary conditions
6.1 Model Problem: Heat Flow in a Metal Rod 99
u u
(t, 0) = (t, ) = 0.
x x
Example 6.1 On the bounded interval [0, ], we can find product solutions to the heat
equation using Lemma 5.1. For the Dirichlet boundary conditions u(0) = u( ) =
0, Theorem 5.2 gives the set of sine eigenfunctions (5.5). The corresponding heat
equation solutions are
2 n2
nx
u(t, x) = e 2 t sin
2 n2
nx
u(t, x) = e 2
t
cos
The higher dimensional form of the heat equation can be derived by an argument
similar to that given above. In Rn , the thermal flux q is vector valued, and Fouriers
law becomes the gradient formula
q = ku.
u
c + q = 0.
t
In combination, these yield the n-dimensional heat equation,
u k
u = 0. (6.6)
t c
The importance of the heat equation as a model extends well beyond its original
thermodynamic context. One of the most prominent examples of this is Albert Ein-
steins probabilistic derivation of the heat equation as a model for Brownian motion
in 1905, in one of the set of papers for which he was later awarded the Nobel prize.
Brownian motion is named for the botanist Robert Brown, who observed in 1827 that
minute particles ejected by pollen grains drifted erratically when suspended in water,
with a jittery motion for which no explanation was available at the time. Einstein
theorized that the motion was caused by collisions with a large number of molecules
whose velocities were distributed randomly. The existence of atoms and molecules
100 6 The Heat Equation
was still unconfirmed in 1905, and Einsteins model provided crucial supporting
evidence.
To summarize Einsteins argument, suppose that a total of n particles are distrib-
uted on the real line. In an interval of time , the position of each particle is assumed
to change by a random amount according to a distribution function . To be more
precise, the number of particles experiencing a displacement between and + d
is
dn = n( ) d.
To find an equation for , Einstein takes the Taylor expansions of the density on both
sides of (6.8), obtaining
(t + , x) = (t, x) + (t, x) + . . . (6.9)
t
on the left, and
1 2
(t, x ) = (t, x) (t, x) + (t, x) 2 + . . .
x 2 x2
inside the integral. Integrating the latter expansion against gives, by (6.7) and the
assumption that is even (which knocks out the linear term),
1 2
(t, x )( ) d = (t, x) + (t, x) 2 ( ) d + . . . .
2 x2
Substituting this formula together with (6.9) into (6.8) and keeping the leading terms
gives
1 2
(t, x) = (t, x) 2 ( ) d.
t 2 x2
6.1 Model Problem: Heat Flow in a Metal Rod 101
2
D 2 = 0,
t x
i.e., the heat equation. Remarkably, the function representing the random distri-
bution of displacements plays no role in the final equation, except in the value of
the constant D. This fact is related to a fundamental result in probability called the
central limit theorem.
Diffusion models involving random motions of particles are prevalent in physics,
biology, and chemistry. The same statistical principles appear in other applications
as well, for example in models of the spread of infection in medicine, or in the study
of fluctuating financial markets. The heat equation plays a fundamental role in all of
these applications.
u 2u
2 = 0. (6.11)
t x
Note that the equation is invariant under the rescaling (t, x) (2 y, x), with
a nonzero
constant. This suggests a change of variables to the scale-invariant ratio
y := x/ t might simplify the equation.
Let us try to find a solution of the form u(t, x) = q(y) for t > 0. By the chain
rule,
u y 2u 1
= q , = q .
t 2t x2 t
Thus, as an equation for q, (6.11) reduces to an ODE,
y
q = q .
2
This can be solved for q by separation of variables for ODE, as described in Sect. 2.5,
q (y) = q (0)ey /4
2
.
102 6 The Heat Equation
It is easy to confirm that this solves (6.11) for t > 0. To see what happens as t 0,
note that
ey /4 dy = ,
2
In view of this limiting behavior, let us consider the particular solution U defined by
setting C1 = 14 , C2 = 21 ,
x
1 t 1
ey /4
2
U (t, x) := dy + . (6.13)
4 0 2
This solution is plotted for some small values of t in Fig. 6.1. By (6.12), limt0
U (t, x) = (x), the Heaviside step function defined by
1, x > 0,
(x) := 21 , x = 0,
0, x < 0.
The fact that U (t, x) has such a simple limit as t 0 can be used to derive a more
general integral formula. Suppose we want to solve (6.11) for the initial condition
u(t, x) = (x)
with Ccpt (R). The key observation is that can be reproduced by integrating its
derivative against the Heaviside function,
x
(z)(x z) dz = (z) d x
= (x).
This suggests that we could solve the heat equation with initial condition by setting
u(t, x) := (z)U (t, x z) dz.
the solution is
u(t, x) = Ht (x z)(z) dz.
u
u = 0 (6.15)
t
for t > 0, with initial condition
104 6 The Heat Equation
u(0, x) = g(x)
for x Rn .
Inspired by the calculations in Sect. 6.2, we define
Ht (x) d n x = 1. (6.17)
Rn
Ht Ht = 0, (6.18)
t
so that Ht (x) is a solution of the heat equation. However, the limit of Ht (x) as t 0
is 0 for x
= 0 and for x = 0, which does not seem to make sense as a distribution
of temperatures. (We will return to discuss the interpretation of this initial condition
in Chap. 12.)
With (6.14) as motivation, our goal in this section is to show that the convolution
u = Ht g satisfies the heat equation on Rn for a continuous and bounded initial
condition g. A function that acts on other functions by convolution is an integral
kernel, and Ht is specifically called the heat kernel on Rn . It is also called the
fundamental solution of the heat equation, for reasons we will explain in Chap. 12.
Because the domain is infinite here, we should treat differentiation under the integral
with some care. To justify this, the key point is that the partial derivatives of Ht can be
estimated by expressions of the form c1 (t, x)ec2 (t,x)| y| , where the dependence of the
2
6.3 Integral Solution Formula 105
constants c1 and c2 on t and x is continuous. We will not go into the technical details,
but this makes it relatively straightforward to check that differentiating under the
integral works in this case. In particular, the fact that Ht (x) solves the heat equation
implies (6.19).
the initial condition, fix some x R . A change of variables to w =
n
To check
( y x)/ t in (6.21) gives
u(t, x) = (4 ) 2 e|w| g x + t 2 w d n w
n 2 1
Rn
1
= H1 (w)g x + t 2 w d n w
Rn
Given > 0, we can use the exponential decay of H1 (w) as |w| to choose
R so that
H1 (w) d n w < .
|w|R
On the other hand, by the continuity of g we can choose > 0 so that for y such
that for |x y| < , we have
|g (x y) g(x)| .
Theorem 6.3 Under the assumption that u(t, ) is bounded on [0, T ] Rn for each
T > 0, the solution of the heat equation (6.19) is unique.
We will develop tools to prove this result (maximum principles) in Chap. 9. The
statement can be improved by weakening the boundedness hypothesis to an assump-
tion of exponential growth. The counterexamples to uniqueness exhibit superexpo-
nential growth and are not considered valid as physical solutions.
In combination, Theorems 6.2 and 6.3, show that a bounded solution of the heat
equation on Rn with continuous initial data satisfies (6.21). The function Ht (x) is C
in both variables for t > 0. As we noted in the proof of Theorem 6.2, differentiation
under the integral is justified in (6.21), so this regularity can be extended to general
solutions.
Theorem 6.4 Suppose that u is a bounded solution of the heat equation (6.19) for
a bounded initial condition g C 0 (Rn ). Then u C ((0, ) Rn ).
Similar regularity results hold for the heat equation in other contexts, for example
on a bounded domain. We will discuss some of these cases later in Sect. 8.6. This
behavior, i.e., smoothness of solutions that does not depend on the regularity of the
initial data, is characteristic of parabolic equations.
Another interesting feature of the heat kernel is the fact that Ht (x) is strictly
positive for all t > 0 and x Rn . This means that if g is nonnegative and not
identically zero, then u is nonzero at all points x Rn for t > 0. Compare this to the
Huygens principle that we observed in Chap. 4, which says that for solutions of the
wave equation the range of influence of a point is limited by the (finite) propagation
speed. The heat equation exhibits infinite propagation speed.
6.3 Integral Solution Formula 107
We can see the origin of the infinite propagation speed in Einsteins diffusion
model from Sect. 6.1. In (6.10) the value D, which is assumed to be constant, is the
average squared displacement per unit time. The fact that D is fixed implies that in
the continuum limit 0, the average absolute displacement per unit time diverges.
Hence the infinite propagation speed is built in to the construction of the model. It
reflects the fact that models of diffusion are inherently statistical, and not expected
to be accurate on a microscopic scale.
u
u = f (6.25)
t
for t > 0, with initial condition u(0, x) = 0. For s 0, let s (t, x) be the solution
of the homogeneous heat equation (6.19) for t s, subject to the initial condition
s (t, x)
t=s = f (s, x). (6.26)
Using the formula for s provided by Theorem 6.2, the proposed solution can be
written t
u(t, x) = Hts (x y) f (s, y) d n y ds. (6.27)
0 Rn
To justify this formula, we must investigate carefully what happens near the point
t = s.
Theorem 6.5 Assuming that f C 2 ([0, ) Rn ) and is compactly supported,
the formula (6.27) yields a classical solution to the inhomogeneous heat equation
(6.25).
Proof We can see that u is at least C 2 by changing variables in the integral formula
to obtain t
u(t, x) = Hs ( y) f (t s, x y) d n y ds.
0 Rn
108 6 The Heat Equation
and t
u(t, x) = Hs ( y) x f (t s, x y) d n y ds. (6.29)
0 Rn
Our goal is to integrate by parts in these formulas, to exploit the fact that Hs solves
the heat equation. Here we must be careful, because of the singular behavior of Hs
at s = 0.
To deal with this singularity, we split the integral at s = . For the first integral in
(6.28), switching the t derivative to an s derivative and integrating by parts gives
t
f
Hs ( y) (t s, x y) d n y ds
Rn t
t
f
= Hs ( y) (t s, x y) d n y ds
Rn s
t
Hs
= ( y) f (t s, x y) d n y ds
Rn s
Ht ( y) f (0, x y) d n y + H ( y) f (t , x y) d n y
Rn Rn
The corresponding result for (6.29) has no boundary terms because of the compact
support of f , t
Hs ( y) x f (t s, x y) d n y ds
Rn
t
= y Hs ( y) f (t s, x y) d n y ds
Rn
Applying these integrations by parts to (6.28) and (6.29), and using the fact that
( s )Hs = 0, we obtain
u(t, x) = H ( y) f (t , x y) d n y
t Rn
(6.30)
+ Hs ( y) x f (t s, x y) d y ds.
n
0 Rn t
6.4 Inhomogeneous Problem 109
The remaining limit is very close to the limit computed in the proof of Theorem 6.2,
except that t is replaced by t . A simple modification of that argument shows that
lim H ( y) f (t , x y) d n y = f (t, x).
0 Rn
6.5 Exercises
u
+ u u = 0.
t
for t 0. Under the additional requirement that u(t, ) is bounded, find a solution
u(t, x). Hint: use separation of variables and assume that the temporal components
have the form eit .
u
u = 0,
t
on (0, ) . Following the discussion from Sect. 6.1, we define the total thermal
energy at time t by
U[t] = u(t, x) d n x.
u
= 0,
(the insulated case). Show that U is constant.
(b) Assume that u is positive in the interior of and equals 0 on the boundary.
Show that U(t) is decreasing in this case.
u
u = 0,
t
on (0, ) . Define
(t) := u(t, x)2 d n x. (6.31)
u(t, x)| x = 0
u|t=0 = g, u| x = h,
In the preceding chapters we have seen that separation of variables can generate
families of product solutions for certain PDE. For example, we found families of
trigonometric solutions of the wave equation in Sect. 5.2 and the heat equation in
Sect. 6.1. By the superposition principle, finite linear combinations of these functions
give more general solutions.
It is natural to hope that we could push this construction farther and obtain solu-
tions by infinite series. Solutions of PDE by trigonometric series were studied exten-
sively in the 18th century by dAlembert, Euler, Bernoulli, and others. However,
notions of convergence were not well developed at that time, and many fundamental
questions were left open.
In this chapter we will introduce some basic concepts of functional analysis, which
will give us the tools to address some of these fundamental issues.
We assume that the reader has had a basic course in linear algebra and is familiar
with the notion of a vector space, i.e., a set equipped with the operations of addition
and scalar multiplication. The basic finite-dimensional example is the vector space
Rn . This space comes equipped with a natural inner product given by the dot product
v w for v, w Rn . The Euclidean length of a vector v Rn is v := v v.
In this section we will review the corresponding definitions for general real or
complex vector spaces, which include function spaces. One important set of exam-
ples are the spaces C m () introduced in Sect. 2.4, consisting of m-times continuously
differentiable complex-valued functions on a domain Rn . Because differentia-
bility and continuity of functions are preserved under linear combination and scalar
multiplication, C m () is naturally a complex vector space.
An inner product on a complex vector space V is a function of two variables,
u, v V u, v C,
Springer International Publishing AG 2016 111
D. Borthwick, Introduction to Partial Differential Equations,
Universitext, DOI 10.1007/978-3-319-48936-0_7
112 7 Function Spaces
Together, (I2) and (I3) imply conjugate linearity in the second variable,
v, w := v w. (7.1)
One way to define an inner product on function spaces is by integration. For example,
on C 0 [0, 1] we could take 1
f, g := f g d x.
0
Certain geometric notions are carried over from Euclidean geometry to inner product
spaces. For example, vectors u, v in an inner product space V are called orthogonal if
u, v = 0.
for all v, w V .
|v, w |2
0 q(t0 ) = v2 ,
w2
The triangle inequality for (7.2) follows from the Cauchy-Schwarz inequality by
u + v2 = u + v, u + v
= u2 + 2 Re u, v + v2
u2 + 2 |u, v | + v2
u2 + 2 u v + v2
= (u + v)2 .
Thus (7.2) defines a norm associated to the inner product. This definition of the norm
is used by default on an inner product space.
114 7 Function Spaces
It is possible to have a norm that is not associated to an inner product. For example,
this is the case for the sup norm, defined for f C 0 (), with Rn bounded, by
sup | f (x)| := sup | f (x)| ; x . (7.3)
x
We will explain how to tell that a norm does not come from an inner product in the
exercises.
In the early 20th century, Henri Lebesgue developed an extension of the classic
definition of the integral introduced by Bernhard Riemann in 1854. (Riemanns is
the version commonly taught in calculus courses.) Lebesgues definition agrees with
the Riemann integral when the latter exists, but extends to a broader class of integrable
functions.
A full course would be needed to develop this integration theory properly. In this
section, we present only a brief sketch of the Lebesgue theory, with the focus on the
features most relevant for applications to PDE.
The Lebesgue integral is based on a generalized notion of volume for subsets of
Rn , which can be defined in terms of approximation by rectangles. For a rectangular
subset in R Rn , let vol(R) denote the usual notion of volume, the product of
the lengths of the sides. (It is conventional to use volume as a general term when
the dimension is arbitrary.) The volume of a subset A Rn can be overestimated
by covering the set with rectangles, as illustrated in Fig. 7.1. The (n-dimensional)
measure of A is defined by taking the infimum of these overestimates,
m n (A) := inf vol(R j ); A Rj . (7.4)
j=1 j=1
For a bounded region with C 1 boundary, the definition (7.4) reproduces the notion of
volume used in multivariable calculus. Note that the concept of measure is dependent
on the dimension. The measure of a line segment in R1 is the length, but a line segment
has measure zero in Rn for n 2.
There is a major technicality in the application of (7.4). In order to make the
definition of measure consistent with respect to basic set operations, we cannot apply
it to all possible subsets of Rn . Instead, the definition is restricted to a special class
of measurable sets. Lebesgue gave a criterion for measurability that rules out certain
exotic sets for which volume is ill-defined. Fortunately, these sets are so exotic that
we are unlikely to encounter them in normal usage. All open and closed sets in Rn
are included in the measurable category, as are any sets constructed from them by
basic set operations of union and intersection.
For example, in R the characteristic functions of the intervals (a, b) and [a, b] are
equivalent. In measure theory, a property is said to hold almost everywhere if it fails
only on a set of measure zero. The equivalence (7.6) amounts to identifying functions
that agree almost everywhere.
then there is no way to distinguish them in terms of integration. The definition (7.6)
is motivated by the following:
Lemma 7.2 For measurable functions f, g : Ω → C with Ω ⊂ R^n,
    ∫_Ω |f − g| d^n x = 0
if and only if f ∼ g.
7.3 L p Spaces
with the understanding that functions in L p are identified according to the equivalence
(7.6). The space L p () is clearly closed under scalar multiplication. Closure under
addition is a consequence of the convexity of the function x ↦ |x|^p for p ≥ 1, which implies the inequality
    | (f + g)/2 |^p ≤ ( |f|^p + |g|^p ) / 2.
To check that this is really a norm, we first note that Lemma 7.2 implies positive
definiteness (N1) because of the equivalence relation (7.6). Homogeneity (N2) is
satisfied because of cancellation between the powers p and 1/ p.
The triangle inequality for the L^p norm,
    ‖f + g‖_p ≤ ‖f‖_p + ‖g‖_p,
is called the Minkowski inequality and holds for p ≥ 1. We omit the proof because we are mainly concerned with the cases L^1 and L^2.
Example 7.3 To illustrate the distinction between the L^p norms, consider the function
    h := a·1_{[0,l]},
for constants a, l > 0. If we think of h as a density function, then the L^1 norm gives the total mass ‖h‖_1 = al, while in general ‖h‖_p = a l^{1/p}. The sensitivity of ‖·‖_p to the spread of the function decreases as p increases, as illustrated by the fact that
    lim_{p→∞} ‖h‖_p = a,
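A short numerical sketch (added here, not from the text; the values a = 2 and l = 0.25 are arbitrary) confirms that ‖h‖_p = a l^{1/p}, so the norms approach a as p grows.

    import numpy as np

    a, l = 2.0, 0.25
    x = (np.arange(200000) + 0.5) / 200000        # midpoint grid on [0, 1]
    h = np.where(x <= l, a, 0.0)

    for p in [1, 2, 4, 16, 64]:
        # ||h||_p = ( int_0^1 |h|^p dx )^{1/p}; compare with the closed form a * l^(1/p).
        norm_p = np.mean(np.abs(h) ** p) ** (1.0 / p)
        print(p, norm_p, a * l ** (1.0 / p))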
Example 7.3 suggests the possibility of defining a space L that is a limiting case
of the L p spaces, with a norm that generalizes the sup norm (7.3). The sup norm itself
does not respect the equivalence (7.6), so we must modify the definition to define a
norm consistent with the other L p spaces.
Note that {h > a} has measure zero precisely when h is equivalent to a function
bounded by a. The value ess-sup(h) is thus the least upper bound among all functions
equivalent to h. For continuous functions the essential supremum reduces to the
supremum.
For f : Ω → C, we define
    ‖f‖_∞ := ess-sup |f|. (7.11)
    i ∂u/∂t = −Δu.
Solutions conserve the L^2 norm, ‖u(t, ·)‖_2 = ‖u(0, ·)‖_2, which corresponds to the conservation of total probability. On the other hand, solutions also satisfy a dispersive estimate
    ‖u(t, ·)‖_∞ ≤ C t^{−n/2} ‖u(0, ·)‖_1
for all t > 0, with C a dimensional constant. The norm on the left measures the peak
amplitude of the wave. By the estimate on the right, this amplitude is bounded in
terms of the mass and decays as a function of time. In general, dispersive estimates
describe the spreading of solutions as a function of time.
It is conventional to represent elements of L p as ordinary functions, even though
each element is actually an equivalence class of functions identified under (7.6).
This usually causes no trouble because equivalent functions give the same results in
integrals.
One point that requires clarification, however, is the issue of continuity or differ-
entiability of functions in L p . Under (7.6), a C m function is equivalent to a class of
functions which are not even continuous. To account for this technicality, we adopt
the convention that if a function in L p is equivalent to a continuous function, then the
continuous representative is used by default. This is unambiguous because the con-
tinuous representative is unique when it exists. Under this convention, the statement
that f L p is a C m function really means that f admits a continuous representative
which is C m .
because continuous functions on a compact set are bounded. The Lebesgue theory
gives the following:
Theorem 7.5 Assume 1 ≤ p < ∞. For a function f ∈ L^p(Ω) there exists an approximating sequence {ψ_k} ⊂ C^∞_cpt(Ω) such that
    lim_{k→∞} ‖ψ_k − f‖_p = 0.
The most useful tool for this purpose is a slightly weaker form of convergence. A
sequence {v_k} ⊂ V is said to be Cauchy if the difference between elements converges to zero: given ε > 0 there exists an N such that k, m ≥ N implies
    ‖v_k − v_m‖ < ε.
Equivalently,
    lim_{k,m→∞} ‖v_k − v_m‖ = 0.
If the sequence converges then the terms on the right are arbitrarily small for k and
m sufficiently large.
In Rn , it follows from the completeness axiom for real numbers that all Cauchy
sequences are convergent. (See Theorem A.3.) This property does not necessarily
hold in a general normed vector space, as the following demonstrates.
Example 7.6 Consider the space C^0[−1, 1] equipped with the L^1 norm ‖·‖_1. For n ∈ N define the functions
    f_n(x) = −1 for x < −1/n,  nx for −1/n ≤ x ≤ 1/n,  1 for x > 1/n.
A direct computation gives ‖f_n − f_m‖_1 = |1/n − 1/m|, so the sequence is Cauchy with respect to ‖·‖_1. Now suppose that f_n converged to some f ∈ C^0[−1, 1] in the L^1 norm, so that lim_{n→∞} ‖f_n − f‖_1 = 0. This limit equals 0 only if f(x) = −1 for x < 0 and f(x) = 1 for x > 0. That is not possible for f continuous. Therefore the sequence {f_n} does not converge in C^0[−1, 1].
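The following sketch (an added illustration, with arbitrary grid choices) checks numerically that ‖f_n − f_m‖_1 = |1/n − 1/m|, so the sequence is Cauchy in L^1 even though its pointwise limit is the discontinuous sign function.

    import numpy as np

    def f(n, x):
        # The piecewise-linear functions of Example 7.6.
        return np.clip(n * x, -1.0, 1.0)

    x = np.linspace(-1, 1, 400001)
    dx = x[1] - x[0]

    def l1_dist(n, m):
        return np.sum(np.abs(f(n, x) - f(m, x))) * dx

    for n, m in [(5, 10), (10, 100), (100, 1000)]:
        print(n, m, l1_dist(n, m), abs(1.0 / n - 1.0 / m))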
A normed vector space V is complete if all Cauchy sequences in V converge
within V . Theorem A.3 implies that Euclidean Rn is complete in this sense. For L p
spaces the Lebesgue integration theory gives the following result.
Theorem 7.7 For a domain Rn , the normed vector space L p () is complete
for each p [1, ].
In functional analysis, a complete normed vector space is called a Banach space
and a complete inner product space is called a Hilbert space. Thus Theorem 7.7 could
be paraphrased as the statement that L p () is a Banach space. The inner product
space L 2 () is a Hilbert space.
A subspace W V is closed if it contains the limit of every sequence in W that
converges in V .
Lemma 7.8 If V is a complete normed vector space and W V is a closed sub-
space, then W is complete with respect to the norm of V .
Proof Suppose {wk } W is a Cauchy sequence. The sequence is also Cauchy in
V , and so converges to some v V by the completeness of V . Since W is closed,
v W.
The L^p function spaces have discrete counterparts, denoted by ℓ^p, whose elements are sequences. To a sequence (a_1, a_2, …) of complex numbers we associate the function a : N → C defined by j ↦ a_j. The ℓ^p norm of this function is
    ‖a‖_p := ( Σ_{j=1}^∞ |a_j|^p )^{1/p},
for p ≥ 1. It is possible to prove directly that ℓ^p(N) is complete, but this can also be deduced easily from Lemma 7.8. We interpret ℓ^p(N) as a closed subspace of L^p(R) consisting of functions which are constant on each interval [j, j+1) for j ∈ N and zero on (−∞, 0). On this subspace the L^p norm reduces to the ℓ^p norm, so that Lemma 7.8 implies that ℓ^p(N) is complete. In particular, ℓ²(N) is a Hilbert space
with the inner product
    ⟨a, b⟩_{ℓ²} := Σ_{j=1}^∞ a_j b̄_j.
    v = Σ_{j=1}^∞ c_j e_j, (7.16)
with c_j ∈ C.
As we will see in Sect. 7.6, the sets of eigenfunctions of certain differential opera-
tors naturally form orthonormal sequences with respect to the L 2 inner product. For
example the sine eigenfunctions appearing in Theorem 5.2 have this property. If a
sequence of eigenfunctions forms a basis, then we can expand general functions in
terms of eigenfunctions.
Suppose we are given an orthonormal sequence e j H , and we would like to
show that this forms a basis. To represent an element v H in the form (7.16), we
must decide how to choose the coefficients c j . This works in much the same way as
it does in finite dimensions. By the orthonormality property (7.15), we can compute
that
    ⟨ Σ_{j=1}^n c_j e_j , e_k ⟩ = c_k (7.17)
for all n ≥ k. Assuming that Σ c_j e_j converges to v in H, we can take the limit n → ∞ in (7.17) to compute
    ⟨v, e_k⟩ = c_k. (7.18)
The corresponding partial sums are
    S_n[v] := Σ_{j=1}^n c_j[v] e_j. (7.20)
Theorem 7.9 (Bessel's inequality) For an orthonormal sequence {e_j} in H and v ∈ H,
    Σ_{j=1}^∞ |c_j[v]|² ≤ ‖v‖².
Proof Using the sesquilinearity (I3) of the inner product, we can expand
    ‖v − S_n[v]‖² = ⟨v − S_n[v], v − S_n[v]⟩
                  = ⟨v, v⟩ − ⟨S_n[v], v⟩ − ⟨v, S_n[v]⟩ + ⟨S_n[v], S_n[v]⟩,
for n ∈ N. By the definition (7.20) of S_n[v] and the orthonormality condition (7.15),
    ⟨S_n[v], v⟩ = ⟨v, S_n[v]⟩ = ⟨S_n[v], S_n[v]⟩ = Σ_{j=1}^n |c_j[v]|².
It follows that
    ‖v − S_n[v]‖² = ‖v‖² − Σ_{j=1}^n |c_j[v]|². (7.21)
Since the left-hand side is positive, the identity (7.21) shows that
    Σ_{j=1}^n |c_j[v]|² ≤ ‖v‖²,
for all n ∈ N. The partial sums of the series Σ |c_j[v]|² are thus bounded and the terms are all positive. Hence the series converges by the monotone sequence theorem, to a limit satisfying the claimed bound,
    Σ_{j=1}^∞ |c_j[v]|² ≤ ‖v‖².
To complete the proof, note that S_n[v] → v in H means that the limit as n → ∞ of the left-hand side of (7.21) is zero. Hence S_n[v] → v if and only if
    lim_{n→∞} Σ_{j=1}^n |c_j[v]|² = ‖v‖².
For n ≤ m,
    ‖S_m[v] − S_n[v]‖² = ‖ Σ_{j=n+1}^m c_j[v] e_j ‖²
                        = Σ_{j=n+1}^m |c_j[v]|²,
meaning that the sequence {Sn [v]} is Cauchy in H . By completeness of H this implies
that Sn [v] v for some v H .
Now assume that 0 is the only vector orthogonal to e_j for all j. For n ≥ j we have
    ⟨v − S_n[v], e_j⟩ = ⟨v, e_j⟩ − ⟨S_n[v], e_j⟩
                      = c_j[v] − c_j[v]
                      = 0.
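To make the expansion concrete, here is a small numerical sketch (added for illustration, not from the text) using the orthonormal sine sequence e_j(x) = √(2/π) sin(jx) in L^2(0, π): the squared coefficients accumulate toward ‖v‖², in line with Bessel's inequality. The target function v(x) = x(π − x) is an arbitrary choice.

    import numpy as np

    N = 200000
    x = (np.arange(N) + 0.5) * np.pi / N       # midpoint grid on (0, pi)
    dx = np.pi / N
    v = x * (np.pi - x)                        # sample element of L^2(0, pi)

    norm_v_sq = np.sum(v ** 2) * dx
    partial = 0.0
    for j in range(1, 11):
        e_j = np.sqrt(2 / np.pi) * np.sin(j * x)
        c_j = np.sum(v * e_j) * dx             # c_j[v] = <v, e_j>
        partial += c_j ** 2
        print(j, partial, norm_v_sq)           # partial sums stay below ||v||^2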
7.6 Self-adjointness
for all u, v ∈ C^n. (In the real case self-adjoint is the same as symmetric.)
The spectral theorem in linear algebra says that for a self-adjoint matrix A there
exists an orthonormal basis for Cn consisting of eigenvectors for A, with real eigen-
values. Functional analysis allows a powerful extension of this result, that applies
in particular to certain differential operators acting on L 2 spaces. The full spectral
theorem for Hilbert spaces is too technical for us to state here, but we will prove a
version of this for the Laplacian on bounded domains later in Sect. 11.5.
Self-adjointness remains important as a hypothesis for the more general spectral
theorem, but even this condition becomes rather technical in the Hilbert space set-
ting. The issues arise from the fact that differential operators cannot act on the whole space L^2(Ω), because L^2 functions need not be differentiable. We will avoid these complexities by focusing on the Laplacian and restricting our attention to C^2 functions.
Lemma 7.11 Suppose that Ω ⊂ R^n is a bounded domain with C^1 boundary. If u, v ∈ C^2(Ω̄) both satisfy either Dirichlet or Neumann boundary conditions on ∂Ω, then
    ⟨Δu, v⟩ = ⟨u, Δv⟩. (7.24)
    u|_{∂Ω} = v|_{∂Ω} = 0,
implying the vanishing of the right-hand side of (7.25). Similarly, the Neumann conditions
    ∂u/∂ν|_{∂Ω} = ∂v/∂ν|_{∂Ω} = 0 (7.26)
also imply that the integrand on the right vanishes.
Boundary conditions for which (7.24) holds are called self-adjoint boundary con-
ditions (for the Laplacian). Formally, (7.24) resembles the matrix condition (7.23),
but of course there is no analog of boundary conditions in the matrix case. The proper
definition of self-adjointness in functional analysis involves a more precise specification of the domain on which Δ acts and (7.24) holds. Even without going into
these details, we can still draw some meaningful conclusions from Lemma 7.11.
Lemma 7.12 Suppose {λ_j} is a sequence of eigenvalues of −Δ on a bounded domain Ω ⊂ R^n, with eigenvectors in C^2(Ω̄) subject to a self-adjoint boundary condition. Then λ_j ∈ R and, after possible rearrangement, the eigenvectors form an orthonormal sequence in L^2(Ω).
Furthermore, λ_j > 0 for Dirichlet conditions, and λ_j ≥ 0 for Neumann.
Proof Suppose we have a sequence φ_j ∈ C^2(Ω̄) satisfying
    −Δφ_j = λ_j φ_j.
Pairing this equation with φ_k and applying (7.24) gives
    λ_j ⟨φ_j, φ_k⟩ = λ̄_k ⟨φ_j, φ_k⟩. (7.27)
For j = k the inner product equals ‖φ_j‖²_2 > 0, implying that λ_j ∈ R for all j. We can thus drop the conjugation in (7.27). If λ_j ≠ λ_k, then this now implies that
    ⟨φ_j, φ_k⟩ = 0.
If some of the λ_j's are equal, then every linear combination of the corresponding eigenfunctions will still be an eigenfunction for the same value of λ_j. Hence we can rearrange the eigenfunctions sharing a common eigenvalue into an orthogonal set using the Gram-Schmidt procedure from linear algebra.
By multiplying the eigenfunctions by constants we can normalize so that ‖φ_j‖_2 = 1. The divergence theorem (Theorem 2.6) then implies
    λ_j = ⟨−Δφ_j, φ_j⟩
        = ∫_Ω |∇φ_j|² d^n x − ∫_{∂Ω} φ̄_j (∂φ_j/∂ν) dS.
Either Dirichlet or Neumann conditions will cause the second term to vanish, implying that λ_j ≥ 0. If λ_j = 0 then the equation also shows that ∇φ_j ≡ 0, implying that φ_j is constant. In the Dirichlet case the only constant solution is trivial, φ_j ≡ 0, but for Neumann conditions a nonzero constant is possible.
Example 7.13 In Example 5.5 we found a set of eigenfunctions for a circular drumhead modeled by the unit disk, with Dirichlet boundary conditions. The eigenfunctions were given in polar coordinates by
    φ_{k,m}(r, θ) = e^{ikθ} J_k(j_{k,m} r),
where j_{k,m} is the mth positive zero of the Bessel function J_k. The eigenvalues of −Δ in this case are the values j²_{k,m}. Since the only possible coincidences among the Bessel zeros are j_{k,m} = j_{−k,m}, these are the only potential non-orthogonal pairs.
Let us examine the orthogonality condition more explicitly. In polar coordinates, the L^2 inner product of two eigenfunctions is given by
    ⟨φ_{k,m}, φ_{k',m'}⟩_{L²} = ∫_0^1 ∫_0^{2π} φ_{k,m}(r, θ) φ̄_{k',m'}(r, θ) r dθ dr
                              = ∫_0^1 ∫_0^{2π} e^{i(k−k')θ} J_k(j_{k,m} r) J_{k'}(j_{k',m'} r) r dθ dr.
Note that the eigenfunctions are clearly orthogonal when k ≠ k', because the θ integral vanishes in this case. If we set k = k', then the θ integral contributes a factor of 2π and the inner product becomes
    ⟨φ_{k,m}, φ_{k,m'}⟩_{L²} = 2π ∫_0^1 r J_k(j_{k,m} r) J_k(j_{k,m'} r) dr.
By Lemma 7.12 this integral vanishes for m ≠ m'. The cancellations occur because of the oscillations, just as for sine functions, as Fig. 7.4 illustrates.
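The radial orthogonality can be checked directly with standard special-function routines; the sketch below (an added illustration, not from the text) uses SciPy's Bessel zeros and numerical quadrature.

    import numpy as np
    from scipy.special import jv, jn_zeros
    from scipy.integrate import quad

    k = 2
    zeros = jn_zeros(k, 4)        # first positive zeros j_{k,1}, ..., j_{k,4}

    def radial_inner(m1, m2):
        # int_0^1 r J_k(j_{k,m1} r) J_k(j_{k,m2} r) dr
        integrand = lambda r: r * jv(k, zeros[m1 - 1] * r) * jv(k, zeros[m2 - 1] * r)
        val, _ = quad(integrand, 0.0, 1.0)
        return val

    print(radial_inner(1, 2))     # close to 0 for m != m'
    print(radial_inner(1, 1))     # strictly positive for m = m'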
7.7 Exercises
The parallelogram law states that
    ‖v + w‖² + ‖v − w‖² = 2‖v‖² + 2‖w‖²
for all v, w ∈ V.
(a) Show that a norm defined by an inner product as in (7.2) satisfies the parallelogram law.
(b) In L p (R), define the functions
f n (x) =
0, x < 0.
gn (x) = n 1 [0,n] .
7.4 Consider the sequence f_n(x) = x^n for x ∈ (0, 1). Show that f_n → 0 in L^p(0, 1) for each p ∈ [1, ∞), but not for p = ∞.
7.5 Assume that Ω ⊂ R^n is a bounded domain. Show that there is a constant C > 0 such that for f ∈ L^2(Ω),
    ‖f‖_1 ≤ C ‖f‖_2.
This implies in particular that L^2(Ω) ⊂ L^1(Ω). Find an example to show that this result does not hold for Ω unbounded.
u|t=T = 0, u| x = 0.
(c) Suppose that φ(0) > 0. Then by continuity log φ(t) is defined at least in some neighborhood of t = 0. Using (7.28), show that
    (log φ(t))'' ≥ 0.
This implies that log φ(t) is bounded below by its tangent lines. In particular
    log φ(t) ≥ log φ(0) + (φ'(0)/φ(0)) t,
which implies
    φ(t) ≥ φ(0) e^{ct},
for c = φ'(0)/φ(0). Thus if φ(0) > 0 then φ is strictly positive for all t ≥ 0.
(d) Conclude from (c) that if φ(T) = 0, then φ(t) = 0 for all t, and deduce that u ≡ 0.
7.7 Recall the radial decomposition formula (2.10). We can use this to get a basic
picture of the degree of singularity or decay at infinity that is allowed in each L p .
    c_k[1] := ⟨1, φ_k⟩.
Under Theorem 7.9, what explicit summation condition corresponds to the convergence S_n[1] → 1 in L^2(0, π)?
Chapter 8
Fourier Series
In his study of heat flow in 1807, Fourier made the radical claim that it should be
possible to represent all solutions of the one-dimensional heat equation by trigono-
metric series. As we noted in the introduction to Chap. 7, trigonometric series had
been studied earlier by other mathematicians. Fouriers innovation was to suggest
that the general solution could be obtained this way.
This claim proved difficult to resolve, because the tools of functional analysis that
we discussed in Chap. 7 were not yet available in Fouriers time. Indeed, the difficult
problem of Fourier series convergence provided some of the strongest motivation for
the development of these tools.
In this chapter we will analyze Fourier series in more detail, and show that the
Fourier approach yields a general solution for the one-dimensional heat equation. The
primary significance of this approach to PDE is the philosophy of spectral analysis
that it inspired. The decomposition of functions with respect to the spectrum of
a differential operator is a tool with enormous applications, both theoretical and
practical.
    u(t, x) = v(t) φ(x).
Substituting into the heat equation and separating variables leads to the eigenvalue equation −Δφ = λφ for φ, while v solves
    ∂v/∂t = −λ v,
with the family of solutions
    v(t) = v(0) e^{−λt}. (8.2)
Let us assume that the equation for φ admits a sequence of solutions φ_k, with eigenvalues λ_k. We have seen specific examples of this in Chap. 5, including the trigonometric case in Theorem 5.2. By (8.2), the corresponding product solutions of the heat equation are
    u_k(t, x) := e^{−λ_k t} φ_k(x).
If we can show that {φ_n} forms an orthonormal basis of L^2(Ω), then this gives us
a way to assign coefficients to h such that (8.4) holds, at least in the sense of L 2
convergence.
Even if the orthonormal basis property is established, some big issues still remain.
The fact that each term u k satisfies the heat equation does not guarantee that u does,
because of the infinite series summation. Similarly, the limit of (8.3) as t 0
is not necessarily (8.4), because the limit cannot necessarily be taken inside the
summation. In this chapter we will explain how to resolve these problems in the
context of trigonometric series.
Example 8.1 Consider the case of a one-dimensional metal rod with insulated ends.
For convenience take the length to be , so that x [0, ] and the Neumann boundary
conditions are
    ∂u/∂x(t, 0) = ∂u/∂x(t, π) = 0.
    h(x) = 3πx² − 2x³, (8.5)
    φ_n(x) = cos(nx),  n ∈ N_0.
Hence the strategy outlined above calls for us to represent the initial condition as a series
    h(x) = Σ_{n=0}^∞ a_n cos(nx). (8.6)
This could be checked with trigonometric identities, but it is perhaps easier to use the complex form cos(kx) = ½(e^{ikx} + e^{−ikx}).
Since the sequence {φ_n} is not normalized, the coefficient formula (7.18) must be interpreted as
    a_n = ⟨h, φ_n⟩ / ‖φ_n‖².
Fig. 8.1 Comparison of the initial condition h(x) with the first two terms of its cosine series
Figure 8.1 shows a comparison between h and the partial sum S_1[h]. The close match between these functions is clearly evident. And since the higher coefficients decay like n^{−4}, convergence of this series seems quite plausible. The resulting solution would be given by
    u(t, x) = π³/2 − (48/π) Σ_{n∈N, n odd} e^{−n²t} cos(nx)/n⁴.
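As a sanity check on the coefficients, here is a short numerical sketch (added for illustration; grid sizes are arbitrary) that computes the cosine coefficients of h directly, taking h(x) = 3πx² − 2x³ as in (8.5), and compares a partial sum of the series at t = 0 with h itself.

    import numpy as np

    N = 200000
    x = (np.arange(N) + 0.5) * np.pi / N
    dx = np.pi / N
    h = 3 * np.pi * x**2 - 2 * x**3

    # Cosine coefficients a_n = <h, cos(nx)> / ||cos(nx)||^2 on (0, pi).
    a0 = np.sum(h) * dx / np.pi
    print(0, a0, np.pi**3 / 2)                       # a_0 should equal pi^3 / 2
    for n in range(1, 6):
        an = np.sum(h * np.cos(n * x)) * dx / (np.pi / 2)
        expected = -48 / (np.pi * n**4) if n % 2 == 1 else 0.0
        print(n, an, expected)

    # Partial sum of the series solution at t = 0 should approximate h.
    def u(t, x, terms=51):
        s = np.pi**3 / 2 * np.ones_like(x)
        for n in range(1, terms, 2):                 # odd n only
            s -= 48 / (np.pi * n**4) * np.exp(-n**2 * t) * np.cos(n * x)
        return s

    print(np.max(np.abs(u(0.0, x) - h)))             # small approximation error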
We saw examples of Fourier series based on sines in Theorem 5.2 and cosines in
Example 8.1. To account for both cases, it is convenient to consider periodic functions
on R. We define
    T := R/(2πZ), (8.10)
where the quotient notation means that points separated by an integer multiple of 2π are considered equivalent. The space C^m(T) consists of the functions in C^m(R) which are 2π-periodic.
Integrals of functions on T are defined by restricting the range of integration to an arbitrary interval of length 2π in R. We will write the inner product on L^2(T) as
    ⟨f, g⟩ = ∫_0^{2π} f ḡ dx.
= ,
x2
for C 2 (T), with no need for additional boundary conditions because of the
periodicity. The eigenfunctions are the complex exponentials
for k Z, with k = k 2 .
It is possible to recover cosine and sine Fourier series from the periodic case,
by restricting our attention to even or odd functions on T. We will demonstrate this
specialization in the examples and exercises.
The complex exponentials satisfy a simple orthogonality relation,
    ⟨φ_k, φ_l⟩ = ∫_0^{2π} e^{i(k−l)x} dx
               = { 2π, k = l;  0, k ≠ l. }
The corresponding Fourier coefficients of f are defined by
    c_k[f] := (1/2π) ⟨f, φ_k⟩
            = (1/2π) ∫_0^{2π} f(x) e^{−ikx} dx. (8.12)
Because the index set is Z rather than N, we define the partial sums of the periodic Fourier series by truncating on both sides,
    S_n[f](x) := Σ_{k=−n}^{n} c_k[f] e^{ikx}. (8.13)
For the sequence {φ_k}, Bessel's inequality (Theorem 7.9) takes the form
    Σ_{k∈Z} |c_k[f]|² ≤ (1/2π) ‖f‖²_2. (8.14)
In the specific example considered in Example 8.1, the Fourier series appeared
to converge very quickly. To illustrate potential complications with the convergence,
let us consider a function with a jump discontinuity.
as pictured on the left in Fig. 8.2. As noted above, in order to represent h as a cosine series using the periodic eigenfunctions, we first extend h to T as an even function, i.e.,
    h(x) := { 0,  x ∈ [−π/2, π/2] + 2πZ;
              1,  x ∈ (π/2, 3π/2] + 2πZ. }
By (8.12), with a shift to the more convenient interval [0, 2π], the Fourier coefficients of h are
    c_k[h] = (1/2π) ∫_{π/2}^{3π/2} e^{−ikx} dx
           = { 1/2,                       k = 0;
               ((−1)^k/(πk)) sin(πk/2),   k ≠ 0. }
Since c_{−k}[h] = c_k[h], we can combine terms in the partial sums (8.13) to give
    S_n[h](x) = 1/2 + 2 Σ_{k=1}^{n} ((−1)^k/(πk)) sin(πk/2) cos(kx).
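The partial sums S_n[h] are easy to tabulate numerically; the short sketch below (an added illustration, not from the text) evaluates them on a grid, which is one way to observe both the slow convergence near the jump and the persistent overshoot of the Gibbs phenomenon.

    import numpy as np

    def S(n, x):
        # S_n[h](x) = 1/2 + 2 * sum_{k=1}^n ((-1)^k / (pi k)) sin(pi k / 2) cos(k x)
        total = 0.5 * np.ones_like(x)
        for k in range(1, n + 1):
            total += 2 * (-1) ** k / (np.pi * k) * np.sin(np.pi * k / 2) * np.cos(k * x)
        return total

    x = np.linspace(0, 2 * np.pi, 4001)
    for n in [20, 40, 100]:
        overshoot = np.max(S(n, x))       # stays near 1.09 instead of decaying to 1
        print(n, overshoot)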
Figure 8.2 shows a sample of these partial sums. In contrast to the case of Exam-
ple 8.1, where 2 terms of the Fourier series were enough to give a very convincing
approximation, we can see significant issues with convergence in the vicinity of the
jump, even with 40 terms.
The Fourier series computed in Example 8.2 makes for a good illustration of some
different notions of convergence. Consider the sequence of differences h − S_n[h], as pictured in Fig. 8.3 for n = 20, 40, and 100.
It is not easy to judge such a limit visually, but this claim is true, as we will prove in
Sect. 8.6.
We could instead focus our attention on the values of S_n[h](x) for some fixed x. A
sequence of functions f n is said to converge pointwise to f (assuming these functions
have a common domain) if for each fixed x in the domain,
In Fig. 8.3, if we focus our attention on some point x away from the center, then the bumps at this point do seem to be decreasing in size as n gets larger. We will verify in Sect. 8.3 that this Fourier series converges pointwise except at x = ±π/2.
Another feature that is quite apparent in Fig. 8.3 is the spike near the center. It is
possible to prove that such a spike persists, with height essentially constant, for all
values of n. The historical term for this effect, which is caused by the jump discon-
tinuity, is the Gibbs phenomenon. It was actually first observed in 1848 by Henry
Wilbraham, but remained generally unknown until it was rediscovered independently
by J. Willard Gibbs in 1899.
The Gibbs phenomenon relates to yet a third definition of convergence. A sequence of bounded functions f_n is said to converge uniformly to a function f on a set W if
    lim_{n→∞} sup_{x∈W} |f_n(x) − f(x)| = 0. (8.15)
Since this does not decrease, uniform convergence fails for this series. However, the sequence does converge uniformly on domains that exclude a neighborhood of the jump point, for example on the interval [0, π/2 − δ] for δ > 0.
The basic theory of pointwise convergence of Fourier series was worked out by
Dirichlet in the mid-19th century. In this section we will establish a criterion for
pointwise convergence of periodic Fourier series.
Theorem 8.3 Suppose f ∈ L^2(T), and that for x ∈ T the estimate
    ess-sup_{y∈[−π,π]} |f(x) − f(x−y)| / |y| < ∞ (8.16)
holds. Then lim_{n→∞} S_n[f](x) = f(x).
The essential supremum was defined in (7.10). The inequality (8.16) means that, after possibly replacing f by an equivalent function in the sense of (7.6), we can assume that
    sup_{0<|y|≤π} |f(x) − f(x−y)| / |y| < ∞. (8.17)
Thus Theorem 8.3 shows that the Fourier series of a C^1 function converges pointwise on all of T. The same argument can be extended to functions on T which are merely piecewise C^1.
It is possible to prove pointwise convergence with a weaker hypothesis than
that of Theorem 8.3. However, there are counterexamples, discovered by Fejér and
Lebesgue, that show that pointwise convergence of the Fourier series may fail for
f C 0 (T).
Before getting into the proof of Theorem 8.3, let us consider the structure of the partial sums in more detail. Plugging the coefficient formula (8.12) into (8.13) gives
    S_n[f](x) = Σ_{k=−n}^{n} e^{ikx} (1/2π) ∫_0^{2π} f(y) e^{−iky} dy
              = ∫_0^{2π} f(y) [ (1/2π) Σ_{k=−n}^{n} e^{ik(x−y)} ] dy. (8.18)
The expression in brackets is called the Dirichlet kernel,
    D_n(t) := (1/2π) Σ_{k=−n}^{n} e^{ikt}. (8.19)
With this definition the formula for the partial sum becomes
    S_n[f](x) = ∫_0^{2π} f(y) D_n(x − y) dy. (8.20)
    S_n[f] = f ∗ D_n.
Because the sum (8.19) is finite, it is clear that the Dirichlet kernel is a smooth
function on T. It is also easy to compute that
    ∫_0^{2π} D_n(t) dt = 1 (8.21)
for n ∈ N, since only the k = 0 term in (8.19) contributes to the integral. Applying the polynomial identity
    1 + z + z² + ⋯ + z^m = (z^{m+1} − 1)/(z − 1)
to the sum in (8.19), with z = e^{it}, gives
    D_n(t) = (1/2π) (e^{i(n+1)t} − e^{−int}) / (e^{it} − 1). (8.22)
Multiplying numerator and denominator by e^{−it/2} puts this in the form
    D_n(t) = (1/2π) sin((n + ½)t) / sin(½t),
Proof of Theorem 8.3 Because both f and D_n are periodic, a change of variables y ↦ x − y allows us to rewrite the convolution in the opposite order:
    S_n[f](x) = ∫_{−π}^{π} D_n(y) f(x − y) dy. (8.23)
The crucial observation here is that if we separate the terms inside the brackets, then
this looks like a formula for Fourier coefficients.
Assuming that the hypothesis of the theorem is satisfied at x ∈ T, consider the function
    h(y) := (f(x) − f(x − y)) / (e^{iy} − 1). (8.25)
Writing this as
    h(y) = (f(x) − f(x − y))/y · y/(e^{iy} − 1),
note that the first factor is essentially bounded near y = 0 by the assumption (8.16). Since e^{iy} − 1 ≈ iy as y → 0 by Taylor's approximation, the second factor is also bounded near y = 0. The hypothesis (8.16) thus guarantees that h(y) is equivalent to a function that is bounded on the interval [−π, π]. Since f ∈ L^2(T) and (e^{iy} − 1)^{−1} is bounded away from y = 0, we conclude from this that h ∈ L^2(T).
We can thus interpret (8.24) in terms of Fourier coefficients. Bessel's inequality, which takes the form (8.14) here, implies that c_k[h] → 0 as k → ±∞. By (8.26) this establishes pointwise convergence at x.
For n large we can control the first and third terms on the right by the assumption of
uniform convergence. To control the middle term we can use the continuity of f n .
Fix x and ε > 0. By uniform convergence there exists n so that
    sup_y |f_n(y) − f(y)| < ε. (8.28)
The fact that f_n is continuous at x means that we can find δ > 0 (depending on x) such that for y satisfying |x − y| < δ,
    |f_n(x) − f_n(y)| < ε. (8.29)
Combining (8.28) and (8.29) with (8.27) shows that for y satisfying |x − y| < δ,
    |f(x) − f(y)| < 3ε.
Thus f is continuous at x.
Uniform convergence is particularly easy to check for periodic Fourier series, because the eigenfunctions φ_k satisfy
    |e^{ikx}| = 1.
    c_k[f'] = ik c_k[f]. (8.30)
Since f' ∈ L^2(T) also, applying Bessel's inequality in the form (8.14) to the coefficients (8.30) implies that
    Σ_{k∈Z} |k c_k[f]|² < ∞. (8.31)
Let ℓ²(Z\{0}) denote the discrete L^2 space on the set consisting of functions Z\{0} → C. (The ℓ^p spaces were introduced in Sect. 7.4.) The sequence
    a_k := |k c_k[f]|
defines an element of ℓ²(Z\{0}) by (8.31). If we define b ∈ ℓ²(Z\{0}) by b_k := |k|^{−1}, then the sum of the coefficients c_k[f] with k ≠ 0 can be expressed as an ℓ² pairing,
    Σ_{k≠0} |c_k[f]| = ⟨a, b⟩_{ℓ²}.
By the Cauchy-Schwarz inequality,
    ⟨a, b⟩_{ℓ²} ≤ ‖a‖_{ℓ²} ‖b‖_{ℓ²} < ∞.
8.5 Convergence in L 2
    (1/√(2π)) e^{ikx},  k ∈ Z,
    ⟨u, φ_k⟩ = 0 (8.34)
for each k Z. By Theorem 7.10 the conclusion will follow if we can deduce that
this implies u = 0.
As noted above, for ψ ∈ C^1(T) Theorem 8.5 implies that S_n[ψ] → ψ in L^2(T). In particular, this gives
    lim_{n→∞} ⟨u, S_n[ψ]⟩ = ⟨u, ψ⟩. (8.35)
Each S_n[ψ] is a finite linear combination of the φ_k, so ⟨u, S_n[ψ]⟩ = 0 by (8.34), and hence
    ⟨u, ψ⟩ = 0.
Now recall Theorem 7.5, which says that C^∞_cpt(0, 2π) forms a dense subset of L^2(0, 2π). This implies also that C^1(T) is dense in L^2(T). Therefore we can choose a sequence ψ_l ∈ C^1(T) such that ψ_l → u in L^2(T). Then
    ‖u‖²_2 = lim_{l→∞} ⟨u, ψ_l⟩,
and we just showed that all terms on the right are zero under the assumption (8.34). Therefore u = 0.
The combination of Theorems 8.6 and 7.9 immediately yields the following:
Corollary 8.7 (Parseval's identity) For f ∈ L^2(T), the periodic Fourier coefficients satisfy
    Σ_{k∈Z} |c_k[f]|² = (1/2π) ‖f‖²_{L²}.
Example 8.8 In Example 8.2, we found for the step function h that c_k[h] = ±1/(πk) for k odd, c_0[h] = 1/2, and otherwise c_k[h] = 0. So for this case,
    Σ_{k∈Z} |c_k[h]|² = 1/4 + 2 Σ_{k∈N, odd} 1/(π²k²).
On the other hand, ‖h‖²_{L²} = π, so Parseval's identity reproduces the summation formula
    1/4 + (2/π²) Σ_{k∈N, odd} 1/k² = 1/2.
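Numerically (an added check, not from the text), the left-hand side is easy to confirm:

    import numpy as np

    odd = np.arange(1, 200001, 2)
    lhs = 0.25 + (2 / np.pi**2) * np.sum(1.0 / odd**2)
    print(lhs)    # approaches 1/2, consistent with Parseval for the step function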
The space L^2(0, 2π) can be identified with L^2(T) by extending functions periodically. Hence Theorem 8.6 also implies that {(1/√(2π)) e^{ikx}} is an orthonormal basis for L^2(0, 2π). We can also specialize the periodic results to show that cosine or sine expansions yield orthonormal bases; see the exercises.
In the preceding sections we have made some progress in understanding the repre-
sentation of a function by Fourier series. However, we still have not addressed one of
the primary questions raised in Sect. 8.1: when does a Fourier series yield a classical
solution to the original PDE? In this section we will resolve this issue by studying
the relationship between the regularity of a function and the decay of its Fourier
coefficients.
The starting point for this discussion is the computation used in the proof of Theorem 8.5,
    c_k[f'] = ik c_k[f]
for f ∈ C^1(T). To quantify rates of decay we use the order notation
    a_k = o(k^{−α})  meaning  lim_{|k|→∞} |a_k| / |k|^{−α} = 0,
and
    a_k = O(k^{−α})  meaning  |a_k| ≤ C |k|^{−α}
for all sufficiently large |k|, with C independent of k. Note that the little-o condition is stronger. The content of the statement a_k = o(k^{−α}) is that the ratio a_k/k^{−α} tends to zero, while a_k = O(k^{−α}) says only that the ratio is bounded.
and
    c_k[f] = o(k^{−m}).
Proof The inequality (8.39) follows immediately from a combination of Lemma 8.9 and Bessel's inequality in the form (8.14). Since the terms in a convergent series must approach zero,
    lim_{|k|→∞} k^m c_k[f] = 0,
This shows much faster decay than predicted, but not the rapid decay we would have
seen if the even periodic extension had been smooth.
Our next goal is to develop a converse to Theorem 8.10 that says that a certain
level of decay rate of Fourier coefficients guarantees a corresponding level of dif-
ferentiability for the function. In fact, the first stage of this result has already been
worked out. Suppose f ∈ L^2(T) and its coefficients satisfy
    Σ_{k∈Z} |c_k[f]| < ∞. (8.40)
By the m = 0 result, the sequence f_n' converges uniformly to some g ∈ C^0(T). Our goal is to show that g = f', which means
    g(x) = lim_{y→0} (f(x + y) − f(x)) / y,
The first term on the right approaches zero by the definition of f_n', and the second term approaches zero as n → ∞ by the construction of g. The remainder term is
    R_n(x, y) := Σ_{|k|>n} c_k (e^{ik(x+y)} − e^{ikx}) / y,
By noting that |e^{iky} − 1| = 2|sin(ky/2)|, a simple calculus estimate gives
    |e^{iky} − 1| / |y| ≤ |k|.
In particular, by the assumption (8.41) the remainder term is arbitrarily small for n
large.
Fix x ∈ T and ε > 0. By (8.43) and the fact that f_n'(x) → g(x), we can pick n so that
    |f_n'(x) − g(x)| < ε  and  |R_n(x, y)| < ε
for all y ≠ 0. For this n and x, the definition of f_n'(x) says that we can choose δ such that 0 < |y| < δ implies
    | (f_n(x + y) − f_n(x)) / y − f_n'(x) | < ε.
Applying these estimates to (8.42) shows that for 0 < |y| < δ,
    | (f(x + y) − f(x)) / y − g(x) | ≤ 3ε.
Since ε was arbitrarily small, this shows that f'(x) = g(x). And since g is continuous, we conclude that f ∈ C^1(T).
The same argument can now be repeated for higher derivatives, assuming (8.41)
holds for larger m.
and this series converges provided α > m + 1. Hence Theorem 8.12 implies that f ∈ C^m(T) under the condition that
    c_k[f] = O(k^{−α})  for some α > m + 1.
    ∂u/∂t − ∂²u/∂x² = 0,
and satisfying
for each x T.
Proof For t > 0, the Fourier coefficients of u(t, ·) decay exponentially, and Theorem 8.12 shows that u(t, ·) ∈ C^∞(T) for each t. The same arguments used in the proof of that theorem apply to the t derivatives. To see this, let u_n denote the partial sum of (8.44),
    u_n(t, x) := Σ_{k=−n}^{n} c_k e^{−k²t} e^{ikx}.
Differentiating with respect to t gives
    ∂u_n/∂t(t, x) = Σ_{k=−n}^{n} (−k²) c_k e^{−k²t} e^{ikx}. (8.46)
For ε > 0, the estimate (8.47) shows that the convergence is uniform for t ≥ ε. Lemma 8.4 shows that the limit is continuous for t ≥ ε. Since ε > 0 is arbitrary, this implies g ∈ C^0((0, ∞) × T).
We can argue that g = ∂u/∂t by considering
    (u(t+s, x) − u(t, x))/s − g(t, x)
        = (u_n(t+s, x) − u_n(t, x))/s − ∂u_n/∂t(t, x)
          + ∂u_n/∂t(t, x) − g(t, x) + R_n(t, s, x),
where
    R_n(t, s, x) := Σ_{|k|>n} c_k ((e^{−k²s} − 1)/s) e^{−k²t} e^{ikx}.
At this point the argument becomes essentially parallel to the analysis of (8.42), so
we will omit the details. The conclusion is that u/t is continuous on (0, ) T.
The argument can be repeated for higher t derivatives, allowing us to conclude that u ∈ C^∞((0, ∞) × T). Moreover, the partial derivatives of u_n converge to the corresponding partial derivatives of u.
Fig. 8.5 Solutions of the heat equation become smooth for t > 0
8.7 Exercises
(a) Extend f to an odd function on T and compute the periodic Fourier coeffi-
cients ck [ f ] according to (8.12). (Note that the case k = 0 needs to be treated
separately.) Show that the periodic series reduces to a sine series in this case.
(b) Show that the convergence of the Fourier series at x = π/2, which is guaranteed by Theorem 8.3, yields the summation formula
    π/4 = 1 − 1/3 + 1/5 − 1/7 + ⋯.
(c) Show the Parseval identity (Corollary 8.7) leads to the formula
    Σ_{k=1}^{∞} 1/k² = π²/6.
(a) Extend g to an even function on T and compute the periodic Fourier coeffi-
cients ck [g] according to (8.12). (Note that the case k = 0 needs to be treated
separately.) Show that the periodic series reduces to a cosine series in this case.
(b) Show that the convergence of the Fourier series at x = 0, which is guaranteed
by Theorem 8.3, reproduces the formula (8.37).
(c) Show the Parseval identity (Corollary 8.7) implies the formula
    Σ_{k∈N, odd} 1/k⁴ = π⁴/96.
    ∂²u/∂t² − ∂²u/∂x² = 0,
    u(0, x) = g(x),  ∂u/∂t(0, x) = h(x),
uniformly for t R.
(c) By the arguments used in Theorem 8.13, (b) implies that the series (8.48) con-
verges to a solution u satisfying the initial conditions. What could you conclude
about the differentiability of u?
8.4 In L^2(0, π) consider the sequence
    φ_k(x) := √(2/π) sin(kx),
for k ∈ N.
(a) Show that {k } is an orthonormal sequence.
(b) Suppose that f ∈ L^2(0, π) and ⟨f, φ_k⟩ = 0 for all k ∈ N. Show that f ≡ 0. (Hint: extend f to an odd 2π-periodic function on R, which can be regarded as an element of L^2(T). Then apply Theorem 8.6.)
(c) Conclude that {φ_k} is an orthonormal basis for L^2(0, π).
8.5 Suppose that f ∈ L^2(−π, π) satisfies
    ∫_{−π}^{π} x^l f(x) dx = 0
for all l ∈ N_0.
(a) Show that ⟨q_{m,k}, f⟩ = 0 for all m ∈ N and k ∈ Z, where
    q_{m,k}(x) := Σ_{l=0}^{m} (ikx)^l / l!.
for all k Z.
(d) Conclude from Theorem 8.6 that f ≡ 0. (In other words, the monomials 1, x, x², ... form a basis for L^2(−π, π), although not an orthonormal one.)
The Legendre polynomials are defined by
    P_k(z) := (1/(2^k k!)) (d^k/dz^k) (z² − 1)^k,
where
    L := (d/dz) (z² − 1) (d/dz).
(a) For u, v ∈ C^2[−1, 1], check that L satisfies a formal self-adjointness condition,
    ⟨u, Lv⟩_{L²} = ⟨Lu, v⟩_{L²}.
Conclude that the P_k's with distinct values of k are orthogonal in L^2(−1, 1).
(b) Use the result of Exercise 8.5 to show that {P_k} forms an orthogonal basis for L^2(−1, 1). (The P_k are normalized by the condition P_k(1) = 1, rather than by unit L^2 norm.)
Chapter 9
Maximum Principles
We saw in Sect. 4.7 that conservation of energy can be used to derive uniqueness for
solutions of the wave equation. In this chapter we will consider another approach to
issues of uniqueness and stability, based on maximum values. This method applies
generally to elliptic equations, which describe equilibrium states, and to parabolic
equations, which are generally used to model diffusion.
As noted in Sect. 5.2, the classical evolution equations such as the heat or wave equation have the form
    P_t u − Δu = 0,
where P_t involves differentiation only in the time variable. Equilibrium states are independent of t and therefore satisfy
    Δu = 0,
which is called the Laplace equation. A solution of the Laplace equation is also called
a harmonic function. The Laplace equation on a bounded domain is generally
formulated with an inhomogeneous Dirichlet boundary condition,
    u|_{∂Ω} = f
for f : ∂Ω → R.
    v = −∇φ,
for some potential function φ. If the vector field v is also solenoidal (∇·v = 0), then the potential satisfies the Laplace equation
    Δφ = 0.
In fluid dynamics in R³, for example, the velocity field is solenoidal for an incompressible fluid, such as water, and conservative precisely when the flow is irrotational (∇×v = 0).
Electrostatics provides another important source of Laplace problems. In the absence of charges, the electric field E is conservative and is commonly written as
    E = −∇φ,
where φ is the electric potential. On the other hand, Gauss's law of electrostatics says that ∇·E is proportional to the electric charge density. Hence, the electric potential for a charge-free region satisfies the Laplace equation.
In the remainder of this section we will consider a particular classical case, the
Laplace problem on the unit disk. Circular symmetry allows us to solve the equation
explicitly using Fourier series, and the resulting formula gives some insight into the
general behavior of harmonic functions.
Let D denote the open unit disk in R². Given g ∈ C^0(∂D), our goal is to solve
    Δu = 0,  u|_{∂D} = g. (9.1)
In Sect. 5.3 we used separation of variables in polar coordinates to find the family of harmonic functions
    φ_k(r, θ) := r^{|k|} e^{ikθ},  k ∈ Z.
This suggests a series solution of the form
    u(r, θ) = Σ_{k∈Z} c_k[g] r^{|k|} e^{ikθ}, (9.3)
where
    c_k[g] := (1/2π) ∫_0^{2π} e^{−ikθ} g(θ) dθ.
Given that
    e^{ikθ} = φ_k(1, θ),
Theorem 8.10 shows that the sequence {c_k[g]} is bounded for g continuous. Note also that
    |φ_k(r, θ)| = r^{|k|}
and
    Σ_{k∈Z} r^{|k|} < ∞
for r < 1 by geometric series. This implies that (9.3) converges absolutely for r < 1. In fact the convergence is uniform on {r ≤ R} for R < 1.
We can write u(r, θ) more explicitly by substituting the definition of c_k[g] into the sum (9.3),
    u(r, θ) = Σ_{k∈Z} (1/2π) ∫_0^{2π} r^{|k|} e^{ik(θ−ϕ)} g(ϕ) dϕ.
Uniform convergence in ϕ for r < 1 allows us to move the sum inside the integral, yielding the formula
    u(r, θ) = (1/2π) ∫_0^{2π} P_r(θ − ϕ) g(ϕ) dϕ, (9.4)
where
    P_r(θ) := Σ_{k∈Z} r^{|k|} e^{ikθ}. (9.5)
[Figure: graphs of the kernel for r = .4, .6, .8, .9]
From the series formula (9.5) we can also deduce directly that
    (1/2π) ∫_0^{2π} P_r(θ) dθ = 1, (9.7)
    Δu = 0 in D,  u|_{∂D} = g,
admits a classical solution u ∈ C^∞(D) ∩ C^0(D̄) given by the Poisson integral (9.4).
Proof The function P_r(θ) is smooth for r < 1, and it follows from (9.5) that each term of the series is harmonic, so that
    Δ[P_r(θ − ϕ)] = 0.
Differentiating under the integral in (9.4) then gives
    Δu = 0.
for every boundary point of D, which will also show that u ∈ C^0(D̄). Note that (9.9) is not the same as claiming that the Fourier series for g converges, which is not necessarily true. The difference lies in the order of the limits. In (9.9) we take the limit of the Fourier series first for r < 1, and then the limit r → 1⁻. This limit exists, as we will see below, but if we first set r = 1 in (9.3) then the sum over k may diverge.
By (9.7) and (9.8) we can write
    u(r, θ) − g(θ) = (1/2π) ∫_{−π}^{π} P_r(ϕ) [g(θ − ϕ) − g(θ)] dϕ. (9.10)
2
The goal is to estimate the left-hand side for r close to 1. Fix D and let > 0.
Since g is continuous, there exists > 0 so that
lim Pr () = 0
r 1
|u(r, ) g()| 2
For students who know some complex analysis, we note that the formula (9.4)
could be deduced from the Cauchy integral formula, because any harmonic function
on D is the real part of a holomorphic function.
Example 9.2 For 0 < a < π, suppose the boundary function is given by
    g(θ) = { 1 − |θ|/a,  |θ| ≤ a;
             0,          a < |θ| ≤ π, }
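The Poisson integral (9.4) is straightforward to evaluate numerically for this tent-shaped boundary function; the sketch below (an added illustration, with a = 1 chosen arbitrarily) also checks that u(r, θ) returns to g(θ) as r → 1.

    import numpy as np

    a = 1.0
    def g(phi):
        phi = np.mod(phi + np.pi, 2 * np.pi) - np.pi      # reduce to (-pi, pi]
        return np.where(np.abs(phi) <= a, 1 - np.abs(phi) / a, 0.0)

    def P(r, theta):
        # Truncated series for the kernel P_r(theta) = sum_k r^|k| e^{i k theta}
        k = np.arange(-200, 201)
        return np.sum(r ** np.abs(k) * np.cos(k * theta))

    def u(r, theta, m=2000):
        phi = (np.arange(m) + 0.5) * 2 * np.pi / m
        vals = np.array([P(r, theta - p) for p in phi])
        return np.sum(vals * g(phi)) / m                  # (1/2pi) int P_r(theta - phi) g(phi) dphi

    for r in [0.5, 0.9, 0.99]:
        print(r, u(r, 0.3), float(g(0.3)))                # approaches g(0.3) as r -> 1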
where A_n denotes the volume of the unit sphere in R^n, as defined in (2.13). It follows from the radial integral formula (2.10) that
    vol[B(x_0; r)] = A_n r^n / n. (9.15)
To state the mean value formula for a ball of radius R, we introduce the family of radial functions,
    G_R(x) := { (1/2π) ln(r/R),                              n = 2;
                (1/((n−2)A_n)) ( 1/R^{n−2} − 1/r^{n−2} ),    n ≥ 3, } (9.16)
where r = |x|. These functions satisfy
    ∂G_R/∂r = 1/(A_n r^{n−1}),   G_R|_{r=R} = 0. (9.17)
Note that G R is integrable on B(0; R), despite the singularity at the origin, because
the radial volume element is An r n1 dr by (2.10).
    ΔG_R(x) = 0
for x ≠ 0. For ε > 0, we can therefore apply Green's second identity (Theorem 2.11) on the domain {ε < r < R} to obtain
    ∫_{ε<r<R} G_R Δu d^n x = ∫_{r=R} ( G_R ∂u/∂r − u ∂G_R/∂r ) dS
                             − ∫_{r=ε} ( G_R ∂u/∂r − u ∂G_R/∂r ) dS. (9.18)
Because G_R is integrable on B(0; R), on the left-hand side of (9.18) we can take ε → 0 to obtain
    lim_{ε→0} ∫_{ε<r<R} G_R Δu d^n x = ∫_{B(0;R)} G_R Δu d^n x. (9.19)
The first of these integrals can be estimated by noting that ∂u/∂r is a directional derivative and thus bounded by the magnitude of |∇u|. By the assumption that u ∈ C^2(Ω̄), |∂u/∂r| is therefore bounded by a constant C for r ≤ R, yielding the estimate
    ∫_{r=ε} |∂u/∂r| dS ≤ C A_n ε^{n−1}.
Hence
    lim_{ε→0} ∫_{r=ε} ( G_R ∂u/∂r − u ∂G_R/∂r ) dS = −lim_{ε→0} [ (1/(A_n ε^{n−1})) ∫_{r=ε} u dS ].
The term in brackets is the average of u over a sphere of radius ε. Since u is continuous, this average approaches u(0) as ε → 0, so that
    lim_{ε→0} ∫_{r=ε} ( G_R ∂u/∂r − u ∂G_R/∂r ) dS = −u(0). (9.21)
For harmonic functions, Theorem 9.3 gives a generalization of the circle averaging formula to spherical averages in higher dimensions. As we will now show, the mean value property can be stated in an equivalent form in terms of averages over a ball.
Proof The fact that (A) implies (B) follows immediately by setting u = 0 in the
formula of Theorem 9.3.
To see that (B) and (C) are equivalent, fix some x_0 and define
    h(r) := ∫_{B(x_0;r)} u d^n x,
for r ≥ 0 such that B(x_0; r) ⊂ Ω. As we saw in Exercise 2.4, the derivative of h(r) is given by a surface integral
    h'(r) = ∫_{∂B(x_0;r)} u dS.
If the spherical mean value property (B) holds, then
    h'(r) = A_n r^{n−1} u(x_0),
and integrating in r gives
    h(r) = (A_n r^n / n) u(x_0).
which contradicts (9.22). The same argument applies if Δu(x_0) > 0. We thus conclude that (B) implies Δu ≡ 0.
    Δu ≥ 0,
    max_{Ω̄} u = max_{∂Ω} u.
Proof By the extreme value theorem (Theorem A.2), u achieves a global maximum at some point x_0 ∈ Ω̄. If x_0 ∈ ∂Ω then the claimed equality clearly holds. The goal is thus to show that x_0 ∈ Ω implies that u is constant.
Because Ω is open, an interior point x_0 has a neighborhood contained in Ω. We may thus assume that B(x_0; R) ⊂ Ω for some R > 0. Applying Theorem 9.3 to this ball gives
    u(x_0) = (1/(A_n R^{n−1})) ∫_{∂B(x_0;R)} u(x) dS + ∫_{B(x_0;R)} G_R(x − x_0) Δu(x) d^n x.
    E := {x ∈ Ω; u(x) < M},   F := {x ∈ Ω; u(x) = M},
for all x ∈ Ω.
The maximum principle implies the following stability result for the Laplace
equation.
    u_1|_{∂Ω} = g_1,  u_2|_{∂Ω} = g_2,
Note that uniqueness of solutions of the Laplace equation also follows directly from Green's first identity (Theorem 2.10), in the case where Ω has piecewise C^1 boundary and u ∈ C^2(Ω̄; R). If Δu = 0, then setting v = u in Green's formula gives
    ∫_Ω |∇u|² d^n x = ∫_{∂Ω} u (∂u/∂ν) dS.
Thus if u = 0 on ∂Ω, then
    ∫_Ω |∇u|² d^n x = 0.
Although the mean value formula gives a direct proof of the strong maximum princi-
ple, this approach applies only to the Laplacian itself. In this section we will present
an alternative approach that generalizes quite easily to operators with variable coef-
ficients.
On a domain Ω ⊂ R^n let us consider a second order elliptic operator of the form
    L = −Σ_{i,j=1}^{n} a_{ij}(x) ∂²/∂x_i∂x_j + Σ_{j=1}^{n} b_j(x) ∂/∂x_j. (9.26)
For the maximum principle we need a stronger assumption, called uniform ellipticity, that says that for some fixed constant κ > 0,
    Σ_{i,j=1}^{n} a_{ij}(x) v_i v_j ≥ κ |v|² (9.27)
for all x ∈ Ω and v ∈ R^n. An equivalent way to say this is that the smallest eigenvalue of [a_{ij}] is bounded below by κ at each point x.
in Ω, then
    max_{Ω̄} u = max_{∂Ω} u.
Proof For the moment let u be a general function in C^2(Ω̄; R). Suppose that u has a local maximum at x_0 ∈ Ω. The first partial derivatives of u vanish at a local maximum, so that
    Lu(x_0) = −Σ_{i,j=1}^{n} a_{ij}(x_0) (∂²u/∂x_i∂x_j)(x_0). (9.28)
At a local maximum the Hessian of u is negative semidefinite, i.e.,
    Σ_{i,j=1}^{n} v_i v_j (∂²u/∂x_i∂x_j)(x_0) ≤ 0
for v ∈ R^n. To see this, set h(t) := u(x_0 + tv) and note that h has a local maximum at t = 0, implying h''(0) ≤ 0. Evaluating h''(0) yields the inequality stated above.
The right-hand side of (9.28) could be written as tr(AB) where A and B are the positive symmetric matrices
    A = [a_{ij}(x_0)],   B = [−(∂²u/∂x_i∂x_j)(x_0)].
Computing the trace in a basis that diagonalizes A, with eigenvalues λ_1, …, λ_n, gives
    tr(AB) = Σ_{j=1}^{n} λ_j b_{jj}.
Since each λ_j ≥ κ and each b_{jj} ≥ 0, this gives
    tr(AB) ≥ κ tr B ≥ 0,
and hence
    Lu(x_0) ≥ 0.
This argument shows that Lu(x_0) ≥ 0 for x_0 a local interior maximum. Therefore, the strict inequality Lu < 0 implies that u cannot have a local interior maximum and that
    max_{Ω̄} u = max_{∂Ω} u. (9.29)
    h(x) := e^{M x_1}.
By the definition of L,
    Lh = (−a_{11} M² + b_1 M) h.
If we choose
    M > (1/κ) max_{Ω̄} b_1,
then Lh < 0, and hence
    L(u + εh) < 0
whenever Lu ≤ 0. Since εh ≥ 0, clearly
    max_{Ω̄} u ≤ max_{Ω̄} (u + εh),
and (9.29) applied to u + εh gives max_{Ω̄} (u + εh) = max_{∂Ω} (u + εh). On the other hand, since Ω is bounded, we may assume x_1 < R in Ω, for some R sufficiently large. This implies
    h ≤ e^{MR},
so that
    max_{∂Ω} (u + εh) ≤ max_{∂Ω} u + ε e^{MR}.
Combining these inequalities gives
    max_{Ω̄} u ≤ max_{∂Ω} u + ε e^{MR}
for all ε > 0. Since M and R are independent of ε, we can take ε → 0 to conclude that
    max_{Ω̄} u ≤ max_{∂Ω} u,
    Lu = f,   u|_{∂Ω} = g,
    ∂u/∂t − Δu ≤ 0, (9.31)
    ∂u/∂t(t_0, x_0) = 0, (9.32)
as well as
    ∂u/∂x_j(t_0, x_0) = 0,   ∂²u/∂x_j²(t_0, x_0) ≤ 0. (9.33)
In particular,
    (∂/∂t − Δ)u(t_0, x_0) ≥ 0. (9.34)
For ε > 0 set u_ε := u + ε|x|². Then
    (∂/∂t − Δ)u_ε = (∂/∂t − Δ)u − 2nε < 0. (9.35)
The existence of a local maximum for u_ε within (0, T) × Ω is ruled out by (9.34). We conclude that u_ε attains a global maximum at a boundary point of [0, T] × Ω̄. Let us label this point (t*, x*), so that
    max_{[0,T]×Ω̄} u_ε = u_ε(t*, x*). (9.36)
By (9.35), this implies also that Δu_ε(T, x*) > 0, which is ruled out by (9.33). Hence t* ≠ T if x* ∈ Ω.
Therefore (t*, x*) lies in the parabolic boundary
    Γ := ({0} × Ω̄) ∪ ([0, T] × ∂Ω). (9.37)
Let R be sufficiently large so that Ω ⊂ B(0; R). This means that |x| ≤ R on Ω̄, so the inequality
    u ≤ u_ε ≤ u + εR²
holds at every point in [0, T] × Ω̄. From (9.36) we can thus conclude that
    max_{[0,T]×Ω̄} u ≤ u_ε(t*, x*)
                     ≤ u(t*, x*) + εR² (9.38)
                     ≤ max_Γ u + εR²,
because (t*, x*) ∈ Γ. Since this inequality holds for every ε > 0, this proves
    max_{[0,T]×Ω̄} u ≤ max_Γ u.
For a solution of the heat equation, both ±u satisfy the hypothesis of Theorem 9.8, which implies that
    min_Γ u ≤ u(t, x) ≤ max_Γ u,
for (t, x) ∈ (0, T) × Ω, where Γ is defined by (9.37). In particular this yields the following:
The same arguments could be applied to the more general parabolic equation
    ∂u/∂t + Lu = 0,
where L is a uniformly elliptic operator as defined in (9.27).
In Sect. 6.3, we stated without proof a uniqueness result for solutions of the heat
equation on Rn . We now have the means to prove this, by establishing a maximum
principle for Rn as a corollary of Theorem 9.8.
    ∂u/∂t − Δu = 0,   u|_{t=0} = g, (9.39)
    u(t, x) ≤ M
    v(t, x) := u(t, x) − ε (T − t)^{−n/2} e^{|x−y|²/(4(T−t))}.
The term resembles the heat kernel defined by (6.16), except that the sign in the
exponential is reversed. Direct differentiation shows that this expression satisfies the
heat equation on (0, T ) Rn , and hence v does also.
For R > 0, let us apply the maximum principle of Theorem 9.8 to v on the domain (0, T) × B(y; R). By construction,
    v(0, x) ≤ g(x),
while for |x − y| = R,
    v(t, x) ≤ M − ε (T − t)^{−n/2} e^{R²/(4(T−t))}
            ≤ M − ε T^{−n/2} e^{R²/(4T)}.
With T fixed, the right-hand side of this second inequality is arbitrarily negative for large R. Therefore, for sufficiently large R, Theorem 9.8 implies that
    max_{[0,T]×B̄(y;R)} v ≤ max_{B(y;R)} g.
In particular,
    v(t, y) ≤ max_{R^n} g,
which translates to
    u(t, y) ≤ max_{R^n} g + ε (T − t)^{−n/2}.
Since ε > 0 was arbitrary, we conclude that
    u(t, y) ≤ max_{R^n} g
The argument given here can be refined to show that conclusion (9.40) holds under the weaker growth condition
    u(t, x) ≤ M e^{c|x|²}
for t ∈ [0, T].
Corollary 9.10 implies Theorem 6.3 by the argument used in Corollary 9.9. That
is, if u 1 and u 2 are bounded solutions of (6.19), then (u 1 u 2 ) solves (6.19) with
g = 0. It then follows from (9.40) that u 1 = u 2 .
9.6 Exercises
u| = | .
Show that
u
|u(x)| M
for all x Rn .
(a) For x_0 ∈ R^n, set r_0 = |x_0|. Use Corollary 9.4 at the centers 0 and x_0 to show that
    u(0) − u(x_0) = (n/(A_n R^n)) [ ∫_{B(0;R)} u d^n x − ∫_{B(x_0;R)} u d^n x ] (9.41)
for R > 0. Note that the integrals cancel on the intersection of the two balls.
(b) Show that
    vol[B(0; R)\B(x_0; R)] ≤ vol[ B(0; R)\B(x_0/2; R − r_0/2) ]
                           = (A_n/n) [ R^n − (R − r_0/2)^n ],
(c) Apply the volume estimates and the fact that |u| ≤ M to (9.41) to estimate
    |u(0) − u(x_0)| ≤ 2M [ R^n − (R − r_0/2)^n ] / R^n.
u = f, u| = 0,
for f C 0 ().
(a) Find a constant c > 0 (depending on f and R), such that u+c |x|2 is subharmonic
on .
(b) For this value of c, apply the maximum principle to u + c |x|2 to deduce that
    max_{B(0;R)} u ≤ 3^n min_{B(0;R)} u.
Hint: For x B(0; R), apply the maximum principle to write u(x) as an integral
over the balls B(x; R) and B(x; 3R). Then show that
    ∂u/∂r(x_0) > 0.
To show this, let B := B(0; R) Rn for some R > 0, and set
max u M.
A
    ∂u/∂r(x_0) ≥ 0.
(e) By computing ∂u_ε/∂r and taking ε → 0, conclude that
    ∂u/∂r(x_0) > 0.
Chapter 10
Weak Solutions
In Sect. 1.2 we observed that dAlemberts formula for a solution of the wave equation
makes sense even when the initial data are not differentiable. This concept of a weak
solution that is not actually required to solve the equation literally has come up in other
contexts as well, for example in the discussion of the traffic equation in Sect. 3.4. In
this chapter we will discuss the mathematical formulation of this generalized notion
of solution.
Weak solutions first appeared in physical applications as idealized, limiting cases
of true solutions. For example, one might replace a smooth density function by a
simpler piecewise linear approximation, as illustrated in Fig. 10.1, in order to simplify
computations. (We used this idea in Example 3.9.)
Up until the late 19th century, the limiting process by which weak solutions were
obtained was understood rather loosely, and justified mainly by physical intuition.
Weak solutions proved to be extremely useful, and eventually a consistent mathe-
matical framework was developed.
The function ψ plays the role of an experimental probe that takes a particular sample of the values of u.
Let us consider how we would detect a derivative using test functions. Suppose for the moment that u ∈ C^1(R), with u' = f. If we measure this derivative using the test function ψ ∈ C^∞_cpt(R), the result is
    ∫_R u' ψ dx = ∫_R f ψ dx. (10.2)
The fact that u' = f is equivalent to the statement that (10.2) holds for all ψ ∈ C^∞_cpt(R).
Note that the left-hand side could be integrated by parts, since ψ has compact support, yielding
    −∫_R u ψ' dx = ∫_R f ψ dx. (10.3)
This condition now makes sense even when u fails to be differentiable. The only requirement is that u and f be integrable on compact sets, a property we refer to as local integrability. We can say that locally integrable functions satisfy u' = f in the weak sense provided (10.3) holds for all ψ ∈ C^∞_cpt(R).
To generalize this definition to a domain Ω ⊂ R^n, let us define the space of locally integrable functions,
    L¹_loc(Ω) := { f : Ω → C;  f|_K ∈ L¹(K) for all compact K ⊂ Ω }.
The same equivalence relation (7.6) used for L^p spaces applies to L¹_loc, i.e., functions that differ on a set of measure zero are considered to be the same.
Inspired by (10.3), for u and f ∈ L¹_loc(Ω) we say that
    ∂u/∂x_j = f
as a weak derivative if
    −∫_Ω u (∂ψ/∂x_j) d^n x = ∫_Ω f ψ d^n x (10.4)
for all ψ ∈ C^∞_cpt(Ω). The condition (10.4) determines f uniquely as an element of L¹_loc(Ω), by the following:
for all ψ ∈ C^∞_cpt(Ω), then f ≡ 0.
Proof It suffices to consider the case when Ω is bounded, since a larger domain could be subdivided into bounded pieces. For Ω bounded the local integrability of f implies that f ∈ L²(Ω). By Theorem 7.5 we can choose a sequence ψ_k in C^∞_cpt(Ω) such that ψ_k → f in L²(Ω). This implies that
    lim_{k→∞} ⟨f, ψ_k⟩ = ‖f‖²_2.
If we ignore the points where g is not differentiable, then we would expect that the derivative of g is given by
    f(x) = { 1,  0 ≤ x ≤ 1;
             0,  |x| > 1. }
Using integration by parts on the first term and evaluating the second gives
    ∫ g ψ' dx = ψ(1) − ∫_0^1 ψ dx − ψ(1)
              = −∫ f ψ dx.
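Weak derivatives can also be checked numerically against the defining identity (10.3). The sketch below (an added illustration; the function u(x) = |x| and the bump-shaped test function are arbitrary choices, not taken from the text) verifies that −∫ u ψ' dx = ∫ f ψ dx with f = sign(x).

    import numpy as np

    N = 400001
    x = np.linspace(-2, 2, N)
    dx = x[1] - x[0]

    u = np.abs(x)                          # candidate function
    f = np.sign(x)                         # its expected weak derivative

    psi = np.zeros_like(x)                 # smooth bump supported in (-1, 1)
    inside = np.abs(x) < 1
    psi[inside] = np.exp(-1.0 / (1 - x[inside] ** 2))
    dpsi = np.gradient(psi, dx)            # numerical psi'

    lhs = -np.sum(u * dpsi) * dx
    rhs = np.sum(f * psi) * dx
    print(lhs, rhs)                        # the two sides agree up to discretization error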
where w_± ∈ C^1(R). For ψ ∈ C^∞_cpt(R),
    −∫_{−∞}^{∞} w ψ' dt = −∫_{−∞}^{0} w_− ψ' dt − ∫_{0}^{∞} w_+ ψ' dt
                        = [w_+(0) − w_−(0)] ψ(0) + ∫_{−∞}^{0} w_−' ψ dt + ∫_{0}^{∞} w_+' ψ dt.
The term proportional to ψ(0) could not possibly come from the integral of ψ against a locally integrable function, because the value of the integrand at a single point does not affect the integral. Hence w admits a weak derivative only under the matching condition
    w_−(0) = w_+(0).
Weak derivatives of higher order are defined by an extension of (10.4). To write the corresponding formulas, it is helpful to have a simplified notation for higher partials. For each multi-index α = (α_1, …, α_n) with α_j ∈ N_0, we define the differential operator on R^n,
    D^α := (∂/∂x_1)^{α_1} ⋯ (∂/∂x_n)^{α_n}, (10.6)
with order
    |α| := α_1 + ⋯ + α_n.
Repeated integration by parts introduces a minus sign for each derivative. Therefore, a function u ∈ L¹_loc(Ω) admits a weak derivative D^α u ∈ L¹_loc(Ω) if
    ∫_Ω (D^α u) ψ d^n x = (−1)^{|α|} ∫_Ω u D^α ψ d^n x (10.7)
for all ψ ∈ C^∞_cpt(Ω).
It might seem that we should distinguish between classical and weak derivatives
in the notation. This is made unnecessary by the following:
    ∂u/∂t + ∂q/∂x = 0,   u|_{t=0} = g. (10.8)
The flux q could depend on u as well as t and x. To allow for the nonlinear case, we will assume that u is real-valued here.
Suppose for the moment that q is differentiable and u is a classical solution of (10.8). Let ψ be a test function in C^∞_cpt([0, ∞) × R). Use of the closed interval [0, ∞) means that ψ and its derivatives are not necessarily zero at t = 0. Pairing ∂u/∂t with ψ and integrating by parts thus generates a boundary term,
    ∫_0^∞ (∂u/∂t) ψ dt = −(u ψ)|_{t=0} − ∫_0^∞ u (∂ψ/∂t) dt.
On the other hand, the spatial integration by parts has no boundary term,
    ∫_{−∞}^{∞} (∂q/∂x) ψ dx = −∫_{−∞}^{∞} q (∂ψ/∂x) dx.
When the left-hand side of (10.8) is paired with ψ and integrated over both t and x, the result is thus
    ∫_0^∞ ∫_{−∞}^{∞} (∂u/∂t + ∂q/∂x) ψ dx dt = −∫_0^∞ ∫_{−∞}^{∞} ( u ∂ψ/∂t + q ∂ψ/∂x ) dx dt − ∫_{−∞}^{∞} (u ψ)|_{t=0} dx.
If u is a classical solution with u|_{t=0} = g, the left-hand side vanishes, which motivates the condition
    ∫_0^∞ ∫_{−∞}^{∞} ( u ∂ψ/∂t + q ∂ψ/∂x ) dx dt + ∫_{−∞}^{∞} g(x) ψ(0, x) dx = 0 (10.9)
for all ψ ∈ C^∞_cpt([0, ∞) × R).
The t = 0 integral in (10.9) makes sense for g ∈ L¹_loc(R). Under this assumption, we define u ∈ L¹_loc((0, ∞) × R; R) to be a weak solution of (10.8) provided q ∈ L¹_loc((0, ∞) × R; R) and (10.9) holds for all test functions.
Example 10.5 Consider the linear conservation equation with constant velocity, which means q = cu in (10.8). By the method of characteristics (Theorem 3.2), the solution is
    u(t, x) = g(x − ct).
Let us check that this defines a weak solution for g ∈ L¹_loc(R).
For ψ ∈ C^∞_cpt([0, ∞) × R) the first term in (10.9) is
    ∫_0^∞ ∫_{−∞}^{∞} g(x − ct) [ ∂ψ/∂t(t, x) + c ∂ψ/∂x(t, x) ] dx dt. (10.10)
To evaluate this, change variables to
    τ = t,   y = x − ct,
and define
    ψ̃(τ, y) := ψ(τ, y + cτ).
By the chain rule, ∂ψ̃/∂τ = (∂ψ/∂t + c ∂ψ/∂x)(τ, y + cτ). This gives
    ∫_0^∞ ∫_{−∞}^{∞} u ( ∂ψ/∂t + c ∂ψ/∂x ) dx dt = ∫_{−∞}^{∞} ∫_0^∞ g(y) (∂ψ̃/∂τ) dτ dy
                                                  = −∫_{−∞}^{∞} g(y) ψ̃(0, y) dy
                                                  = −∫_{−∞}^{∞} g(y) ψ(0, y) dy,
    ∂u/∂t + ∂/∂x q(u) = 0 (10.12)
with the flux q(u) a smooth function of u which is independent of t and x. As we saw
in Sect. 3.4, the characteristics are straight lines whose slope depends on the initial
conditions. Let us study the situation pictured in Fig. 3.11, where a shock forms as
characteristic lines cross at some point. For simplicity we assume that the initial
crossing occurs at the origin.
One possible way to resolve the issue of crossing characteristics is to subdivide the
(t, x) plane into two regions by drawing a shock curve C, as illustrated in Fig. 10.3.
Suppose that classical solutions u are derived by the method of characteristics above
and below this curve. We will show that this combination yields a weak solution
provided a certain jump condition is satisfied along C. The jump condition was
discovered in the 19th century by engineers William Rankine and Pierre Hugoniot,
who developed the first theories of shock waves in the context of gas dynamics.
Proof Since we are not concerned with the boundary conditions, we consider a test function ψ ∈ C^∞_cpt((0, ∞) × R), for which (10.9) specializes to
    ∫_0^∞ ∫_{−∞}^{∞} ( u ∂ψ/∂t + q(u) ∂ψ/∂x ) dx dt = 0. (10.14)
Since the solutions u_± are classical and ψ is C^1, we can separate the integral (10.14) at the shock curve and integrate by parts on either side.
Consider first the u_− side. For the term involving the x derivative the integration by parts is straightforward:
    ∫_0^∞ ∫_{−∞}^{σ(t)} q(u_−) (∂ψ/∂x) dx dt = −∫_0^∞ ∫_{−∞}^{σ(t)} (∂/∂x) q(u_−) ψ dx dt
                                               + ∫_0^∞ ( q(u_−) ψ )|_{x=σ(t)} dt.
For the t-derivative term we start by using the fundamental theorem of calculus to derive
    (d/dt) ∫_{−∞}^{σ(t)} u_− ψ dx = ∫_{−∞}^{σ(t)} ( (∂u_−/∂t) ψ + u_− (∂ψ/∂t) ) dx + σ'(t) ( u_− ψ )|_{x=σ(t)}.
By the compact support of ψ, the integral over t of the left-hand side vanishes, yielding
    ∫_0^∞ ∫_{−∞}^{σ(t)} (∂u_−/∂t) ψ dx dt = −∫_0^∞ ∫_{−∞}^{σ(t)} u_− (∂ψ/∂t) dx dt
                                            − ∫_0^∞ σ'(t) ( u_− ψ )|_{x=σ(t)} dt.
Adding the two computations, the portion of (10.14) coming from the region x < σ(t) becomes
    ∫_0^∞ ∫_{−∞}^{σ(t)} ( u_− ∂ψ/∂t + q(u_−) ∂ψ/∂x ) dx dt
        = −∫_0^∞ ∫_{−∞}^{σ(t)} ( ∂u_−/∂t + ∂/∂x q(u_−) ) ψ dx dt
          − ∫_0^∞ ( σ'(t) u_− − q(u_−) ) ψ |_{x=σ(t)} dt.
The first term on the right vanishes by the assumption that u_− is a classical solution, leaving
    ∫_0^∞ ∫_{−∞}^{σ(t)} ( u_− ∂ψ/∂t + q(u_−) ∂ψ/∂x ) dx dt = −∫_0^∞ ( σ'(t) u_− − q(u_−) ) ψ |_{x=σ(t)} dt.
Similarly, on the u_+ side,
    ∫_0^∞ ∫_{σ(t)}^{∞} ( u_+ ∂ψ/∂t + q(u_+) ∂ψ/∂x ) dx dt = ∫_0^∞ ( σ'(t) u_+ − q(u_+) ) ψ |_{x=σ(t)} dt.
Adding the two sides, the condition (10.14) reduces to
    ∫_0^∞ [ σ'(t)(u_+ − u_−) − ( q(u_+) − q(u_−) ) ] ψ |_{x=σ(t)} dt = 0.
Since this must hold for every test function ψ, we conclude that
    σ'(t)(u_+ − u_−) − ( q(u_+) − q(u_−) ) = 0   at x = σ(t),
    ∂u/∂t + (1 − 2u) ∂u/∂x = 0,
for which q(u) = u − u². For the initial condition take a step function,
    g(x) := { a,  x < 0;
              b,  x > 0. } (10.15)
The characteristics on either side of the discontinuity carry the constant values
    u_−(t, x) = a,   u_+(t, x) = b,
so the Rankine-Hugoniot condition requires
    q(b) − q(a) = σ'(b − a),
which gives
    σ' = 1 − b − a.
Since the discontinuity starts at the origin, the shock curve is thus given by
    σ(t) = (1 − b − a) t.
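For this step data the jump condition can be confirmed against the weak formulation: with u equal to a left of the line x = σ(t) and b to the right, the integral in (10.14) vanishes exactly when σ' = 1 − a − b. The sketch below (an added numerical check; a, b and the test function are arbitrary choices) evaluates that integral for the flux q(u) = u − u².

    import numpy as np

    a, b = 0.4, 0.7
    q = lambda u: u - u**2

    def weak_residual(speed, n=1200):
        # Evaluate int int (u psi_t + q(u) psi_x) dx dt for the piecewise-constant
        # solution with shock line x = speed * t, using a smooth test function
        # that vanishes near t = 0, t = 2 and |x| = 3.
        t = (np.arange(n) + 0.5) * (2.0 / n)            # grid on (0, 2)
        x = -3 + (np.arange(n) + 0.5) * (6.0 / n)       # grid on (-3, 3)
        dt, dx = 2.0 / n, 6.0 / n
        T, X = np.meshgrid(t, x, indexing="ij")
        u = np.where(X < speed * T, a, b)
        psi_t = 3 * (T * (2 - T))**2 * (2 - 2 * T) * (9 - X**2)**3
        psi_x = (T * (2 - T))**3 * 3 * (9 - X**2)**2 * (-2 * X)
        return np.sum(u * psi_t + q(u) * psi_x) * dt * dx

    print(weak_residual(1 - a - b))   # ~0 at the Rankine-Hugoniot speed
    print(weak_residual(0.5))         # clearly nonzero for a wrong shock speed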
Some cases are illustrated in Fig. 10.4. In the plot on the left, the shock wave
propagates backwards.
For certain initial conditions, the definition (10.9) of a weak solution is not suf-
ficient to determine the solution uniquely. For example, if we had taken a > b in
(10.15), then instead of overlapping the characteristics originating from t = 0 would
separate, leaving a triangular region with no characteristic lines. An additional physi-
cal condition is required to specify the solution uniquely in this case. We will discuss
this further in the exercises.
[Fig. 10.4: shock curves in the (t, x) plane for the values u_− = 0.3, u_+ = 0.5 and u_− = 0.4, u_+ = 0.7]
Boundary values are not well defined for locally integrable functions. We were able
to avoid this issue in the discussion of the continuity equation in Sect. 10.2, because
solutions were required to be constant along characteristics. In general, the formu-
lation of boundary or initial conditions requires a class of functions with greater
regularity.
The most obvious class to consider consists of functions that admit weak higher
partial derivatives. However, it proves to be very helpful to strengthen the integrability
requirements as well. Such function spaces were introduced by Sergei Sobolev in
the mid 20th century and have since become fundamental tools of analysis.
The Sobolev spaces based on L^2 are defined by
    H^m(Ω) := { u ∈ L^2(Ω);  D^α u ∈ L^2(Ω) for all |α| ≤ m },
with the inner product
    ⟨u, v⟩_{H^m} := Σ_{|α|≤m} ⟨D^α u, D^α v⟩. (10.16)
(Our convention will be that a bracket without subscript denotes the L^2 inner product.) The corresponding norm is
    ‖u‖_{H^m} := ( Σ_{|α|≤m} ‖D^α u‖²_2 )^{1/2}. (10.17)
By Lemma 7.8 H01 () is also a Hilbert space with respect to the H 1 norm.
If is piecewise C 1 , then for functions in H 1 () it is possible to define bound-
ary restrictions in L 2 () that generalize the boundary restriction of a continuous
function. In this case, H01 () consists precisely of the functions whose boundary
restriction vanishes. Thus the space H01 () can be interpreted as the class of H 1
functions satisfying Dirichlet boundary conditions on .
The theory of boundary restrictions is too technical for us to cover here, but we
can at least show how this works in the one-dimensional case.
Theorem 10.9 If u H01 (a, b) then u is continuous on [a, b] and equal to zero at
the endpoints.
Proof Suppose u H01 (a, b). By definition, there exists a sequence of Ccpt (a, b)
such that
10.3 Sobolev Spaces 189
lim
k u
H 1 = 0.
k
where I denotes the characteristic function of the interval I . Thus, by the Cauchy-
Schwarz inequality (Theorem 7.1),
| j (x) k (x)| x a
j k
2 .
In view of the definition of the H 1 norm, this implies the uniform bound
j k
b a
j k
H 1 (10.19)
u(a) = u(b) = 0.
for all Ccpt ().
By the definition of H01 (), there exists a sequence of k Ccpt () such that
k u in the H norm. Since k has compact support within , integration by
1
parts gives
k d n x = k d n x. (10.22)
In this section we will consider the relationship between weak regularity, defined in
terms of Sobolev spaces, and regularity in the classical sense. This connection plays
a central role in the application of Sobolev spaces to PDE.
This result can be sharpened and extended in various ways. One important variant
includes differentiability up to the boundary under certain conditions on . For
example, if the boundary is piecewise C 1 then it is possible to show that
H m () C k ().
These boundary results are quite important but too technically difficult for us to
include here.
10.4 Sobolev Regularity 191
The strategy we will use for Theorem 10.11 is based on the connection estab-
lished in Sect. 8.6 between regularity and the decay of Fourier coefficients. Recall
the definition T := R/2Z introduced in Sect. 8.2. To extend Fourier series to higher
dimensions we introduce the corresponding space
Tn := Rn /(2Z)n .
The integral over Tn can be taken over [, ]n , or any translate of this cube. The
argument from Theorem 8.6 can be adapted, with minor notational changes, to prove
the following:
for f, g L 2 (Tn ).
Because of the periodic structure of Tn , it is not necessary to assume that test func-
tions have compact support. For f L loc
1
(T) the weak derivative D f L loc1
(T) is
defined by the condition that
2 2
||
D f d x = (1) f D d x (10.25)
0 0
for all C (T). The space H m (Tn ) consists of functions in L 2 (Tn ) which have
weak partial derivatives up to order m contained in L 2 (Tn ).
It is convenient notate powers of the components of k by analogy with D ,
k := k11 knn ,
D ei kx = (i k) ei kx .
ck [D f ] = (i k) ck [ f ] (10.26)
Proof By (10.26) and Bessels inequality (Proposition 7.9), the condition that D f
L 2 (Tn ) implies that
k ck [ f ]
2 < . (10.28)
kZn
By Parsevals identity (10.24), the inner product of g with Ccpt (Tn ) gives
g , = (2)n (i k) ck [ f ]ck []
kZn
||
= (1) 2 ck [ f ]ck [D ]
kZ
||
= (1) f, D .
Theorem 10.13 makes the connection between Sobolev regularity and decay of
Fourier coefficients. Our task is now to translate this back into classical regularity.
H m (Tn ) C q (Tn ).
Proof Using the notation for discrete spaces introduced in Sect. 7.4, the space 2 (Zn )
is defined as the Hilbert space of functions Zn C, equipped with the inner product
10.4 Sobolev Regularity 193
, 2 := (k)(k).
kZn
22 := (1 + |k|)2m
kZn
(1 + |x|)2m d n x
Rn
= An (1 + r )2m r n1 dr.
0
so that
, 2 = |ck [ f ]|.
kZn
2
2 , (10.29)
kZn
for m > n2 . To apply it to higher derivatives we note that if f H m (Tn ) for m > q + n2
then for || q the weak derivatives D f will lie in H mq (T N ). For m > q + n2
it follows from (10.30) that these derivatives are continuous. By Theorem 10.4, this
shows that u C q (Tn ).
We are now prepared to derive the Sobolev embedding result for a bounded domain
as a consequence of Theorem 10.14.
194 10 Weak Solutions
B(x 0 ; ) .
Suppose that Ccpt () has support contained in B(x 0 ; ) and is equal to 1
inside B(x 0 ; /2). (Such a function can be constructed as in Example 2.2.) Since
is smooth, u H m () also. Thus, assuming < 2, we can extend u by
periodicity to a function in H m (Tn ). Theorem 10.14 then shows that u C k (Tn )
if m > k + n/2. Since u and u agree in a neighborhood of x 0 , this shows that u
is k-times continuously differentiable at x 0 . This argument applies at every interior
point of , so we conclude that u C k ().
The Laplace equation introduced in Sect. 9.1 is the prototypical elliptic equation.
Another classic example is the Poisson equation u = f , which we will discuss
in more detail in Sect. 11.1.
If Rn is a bounded domain, then for u, Ccpt (), Greens first identity
(Theorem 2.10) gives
u d n x = u d n x. (10.31)
u = u + f, u| = 0. (10.33)
for every Ccpt (). This definition could be extended to more general elliptic
equations of the form Lu = f , but for simplicity we restrict our attention to the case
of the Laplacian. We will study the existence and regularity of solutions of (10.34)
10.5 Weak Formulation of Elliptic Equations 195
u = f, u(0) = u(2) = 0,
with
x, 0 x 1,
f (x) =
1, 1 x 2.
Since f is piecewise linear, it makes sense to try using classical solutions on the two
subintervals. Imposing the boundary and continuity requirements gives a family of
possible solutions
x ax,
1 3
0 x 1,
u(x) = 6
21 x 2 + (a + 4
3
)x 2a 2
3
, 1 x 2.
for Ccpt (0, 2). Using integration by parts, the first term evaluates to
2 1 2
u d x = ( 21 x 2 a) (x) d x + (x + a + 43 ) (x) d x
0 0 1
2 1 2
u d x = ( 21 a)(1) x(x) d x (a + 13 )(1) + (x) d x
0 0 1
2
= ( 16 2a)(1) f d x.
0
This result is illustrated in Fig. 10.6. Note that the condition on a corresponds to
a matching of the first derivatives at x = 1, so that u C 1 [0, 2].
We will show in Sect. 11.3 that solutions of (10.34) are unique, so the function
obtained in Example 10.15 is the only possible solution. The matching of derivatives
required for this solution is indicative of a more general regularity property for
solutions of elliptic equation, which we will discuss in detail in Sect. 11.4.
196 10 Weak Solutions
The heat and wave equations are the primary examples of linear evolution equations.
Weak solutions for these equations can be defined by essentially the same strategy
used in Sect. 10.5. Starting from a classical solution, we pair with a test function and
use integration by parts to find the corresponding integral equation. Unfortunately,
the time dependence creates some technicalities in the definition that we are not
equipped to fully resolve here, but we can at least illustrate the basic philosophy by
working through some examples.
Consider first the wave equation on a bounded domain Rn with Dirichlet
boundary conditions,
2u
u = 0, u| x = 0, (10.36)
t 2
subject to the initial conditions
u|t=0 = g,
= h.
t t=0
Assuming u is a classical solution, pairing the wave equation for u with a test function
Ccpt ([0, ) ) gives
2
u
2 u d n x dt = 0.
0 t
In the t variable, we integrate by parts twice and pick up a boundary term each time
because the test function is not assumed to vanish at t = 0. The result is
10.6 Weak Formulation of Evolution Equations 197
2u
u
2 dt = g
dt
0 t t t=0 0 t t
2
= g
+ h|t=0 + u 2 dt.
t t=0 0 t
= g
dn x + h|t=0 d n x.
t t=0
Example 10.16 Consider the piecewise linear dAlembert solution for the wave
equation introduced in Sect. 1.2. On [0, 2] we take the initial conditions h = 0
and
x, 0 x 1,
g(x) :=
2 x, 1 x 2.
1
u(t, x) = g(x + t) + g(x t) .
2
The linear components of the resulting solution are shown in Fig. 10.7. Because u is
piecewise linear and vanishes at x = 0 and 2, it is clear that u(t, ) H01 (0, 2) for
each t.
For this case the weak solution condition (10.37) specializes to
2 2 2
u
u 2 + d x dt = g
d x. (10.38)
0 0 t x x 0 t t=0
Checking this is essentially a matter of integration by parts, but the integrals must
be broken into many pieces for large t. As a sample case, let us assume that has
support in [0, 1) (0, 2).
198 10 Weak Solutions
u=1t
u=x u = x
Now let us consider the weak formulation of the heat equation with Dirichlet
boundary conditions,
u
u = 0, u| x = 0, u|t=0 = h. (10.39)
t
Derivation of the integral equation works just as for the wave equation, except that
there is only a single integration by parts in the time variable. Assuming that u(t, )
H01 () for each t > 0 and h L 1loc (), the weak solution condition is
u + u d x dt =
n
h|t=0 d n x (10.40)
0 t
for all Ccpt ([0, ) ).
Example 10.17 Consider the heat equation on the interval (0, ), with initial con-
dition h L 2 (0, ; R). In view of the Dirichlet boundary conditions, we use the
orthonormal basis for L 2 (0, ) developed in Exercise 8.4, given by the sine func-
tions
2
k (x) := sin(kx)
t
200 10 Weak Solutions
By the smoothness of , Theorem 8.10 implies that the coefficients satisfy bk (t) =
O(k ), uniformly in t, and so the series
(t, x) = bk (t)k (x)
k=1
(t, x) = bk (t)k (x)
t k=1
for t 0.
Similarly, for t 0 we have L 2 convergent series
2
(t, x) = bk (t)k cos(kx),
x k=1
u 2
k 2 t
(t, x) = ak (t)e k cos(kx).
x k=1
Switching the order of the summation and integration is justified if series converges
uniformly on the domain of the integral, but that is not necessarily the case here. To
check this carefully, we break the sum at some value k = N . For the finite sum there
is no convergence issue, so that
N
d k 2 t
N
ak ek k 2 bk (t) bk (t) dt =
2
t
ak e bk (t) dt
0 k=1 k=1 0 dt
N
= ak bk (0).
k=1
To estimate the tail of the sum, note that the sequence {ak } is bounded because
|ak |2 < . For bk (t), we apply repeated integration by parts to deduce
2m 2m
k (x) (t, x) d x = (t, x) k (x) d x
x x
0 0
= (1)m k 2m (t, x)k (x) d x
0
= (1)m k 2m bk (t).
for m N. Since Ccpt ([0, ) (0, )), this gives an estimate
|bk (t)| Cm k 2m ,
where Cm is independent of t. The same reasoning applies to bk (t). Combining the
m = 2 estimate for bk with the m = 1 case for bk gives
k 2 t 2
ak e k bk (t) bk (t)
Ck 2 .
k 2 t
ak e k bk (t) bk (t)
C N 1 ,
2
(10.45)
k=N +1
independently of t.
202 10 Weak Solutions
Now fix M > 0 so the support of is contained in [0, M]. Applying (10.45) to
the integral gives
M
k 2 t
ak e k bk (t) bk (t) dt
C M N 1 .
2
0
k=N +1
By taking N , we deduce
u
u + d n x dt = ak bk (0).
0 0 t x x k=1
10.7 Exercises
du
x = 1.
dx
(a) Develop a weak formulation of this ODE in terms of pairing with a test function
Ccpt (R)
(b) Show that u(x) = log |x| is locally integrable and solves the equation in the
weak sense.
u u
+u = 0,
t x
with the initial condition
10.7 Exercises 203
a, x 0,
u(0, x) = a(1 x) + bx, 0 < x < 1,
b, x 1.
For a > b a shock forms at some positive time. Assuming u is a weak solution, find
the equation of the shock curve starting from this point.
u u
+ (1 2u) =0
t x
with initial data
0, x > 0,
g(x) =
1, x < 0.
(a) Sketch the characteristic lines for this initial condition, and show that they leave
a triangular region uncovered.
(b) Show that the constant solution u(t, x) = g(x) satisfies the Rankine-Hugoniot
condition for the shock curve (t) = 0 and thus gives a weak-solution of the
traffic equation. (This solution is considered unphysical because characteristic
lines emerge from the shock line to fill the triangular region.)
(c) The physical solution is specified by an entropy condition that says that charac-
teristic lines may only intersect when followed forwards in time. Show that the
continuous function
0, x > t,
u(t, x) = 2 2t , t < x < t,
1 x
1, x < t,
with f C 1 (Rn ).
f
(a) For j = 1, . . . , n 1, show that f has weak partial derivatives given by x j
for
xn > 0.
204 10 Weak Solutions
f f
(b) Show that the weak partial x n
exists and is given by xn
for xn > 0 only if f
extends to a continuous function at xn = 0.
10.5 Let D R2 be the unit disk {r < 1} with r := |x|. Consider the function
u(x) = r with R constant.
u
(a) Compute the ordinary partial derivatives x j
, j = 1, 2, for r = 0.
(b) Show that for > 1 these partials lie in L 1 (D) and define weak derivatives.
(c) For what values of is u H 1 (D)?
with r := |x| and a > 0. (With appropriate physical constants this is the equation
for the gravitational potential of a spherical planet of radius a.)
(a) Assuming that u depends only on r , formulate a weak solution condition in terms
of pairing with a test function (r ) with Ccpt [0, ).
(b) Find the unique solution which is smooth at r = 0 and for which u(r ) 0 as
r .
for all C (). Show that u satisfies the Poisson equation with Neumann bound-
ary condition,
u
u = f,
= 0. (10.47)
Recall the formulas for the kinetic and potential energy of a solution of the wave
equation derived in Sect. 4.7. At equilibrium the kinetic energy is zero, and by phys-
ical reasoning the system should occupy a state of minimum potential energy. This
suggests a strategy of reformulating the Laplace equation, which models the equi-
librium state, as a minimization problem for the energy.
In this application, the potential energy term from the wave equation is called the
Dirichlet energy. For a bounded domain Rn and w C 2 (), let
1
E[w] := |w|2 d n x. (11.1)
2
The term functional is used to describe functions such as E[], to indicate that the
domain is a function space.
To see how minimization of energy is related to the Laplace equation, let us
suppose that u C 2 (; R) satisfies
E[u] E[u + ]
for all Ccpt (; R). This implies that for t R the function t E[u + t]
achieves a global minimum at t = 0. Hence
d
E[u + t] = 0. (11.2)
dt t=0
d 1 d
E[u + t] = |u|2 + 2tu + t 2 ||2 d n x
dt t=0 2 dt
t=0
= u d x.
n
By Greens first identity (Theorem 2.10) and the fact that vanishes on ,
u d x = n
u d n x.
This holds for all Ccpt (; R) if and only if u = 0 on .
The empirical law describing the electric field in the presence of a charge distribution
was formulated by Gauss in the mid-19th century. Gausss law states that the outward
flux of the electric field through a closed surface is proportional to the total electric
charge contained within the region bounded by the surface. More specifically, if
R3 is a bounded domain with piecewise C 1 boundary and is the charge
density within , then
E d S = 4k d 3 x,
E = 4k (11.3)
In Sect. 9.1 we noted that the electric potential and electric field E are related
by
E = .
= 4k.
u = f, u| = 0. (11.4)
for every Ccpt ().
For convenience let us consider real-valued functions. (In the complex case we
could split the Poisson problem into real and imaginary parts.) In view of (11.5), we
define the functional
208 11 Variational Methods
D f [w] := E[w] f, w
, (11.6)
for f L 2 (; R) and w H01 (; R), where E[] is the Dirichlet energy (11.1).
D f [u] D f [w]
for all w H01 (; R), then u is a weak solution of the Poisson equation, in the sense
of (11.5).
Proof Since Ccpt (; R) H01 (), the assumption on u implies
D f [u] D f [u + t]
for Ccpt (; R) and t R. Therefore
d
0= D f [u + t]t=0
dt
d
= E[u + t] f, u + t
dt t=0
n
= u f d x.
The functional D f defined in (11.6) consists of a quadratic term plus a linear term.
The Dirichlet minimization problem is thus analogous to minimizing the polynomial
ax 2 + bx for x R. This polynomial obviously has a minimum if and only if a > 0.
For the Dirichlet case the analogous condition is a lower bound on the quadratic term
E[]. The original form of this result was proven by Henri Poincar.
u22 2 E[u]
The right-hand side can be written as an L 2 pairing with the characteristic function
of [a, x],
(x) = , [a,x] .
|(x)|2 [a,x] 22 22
(b a)E[],
for all x [a, b]. We can integrate this estimate over x to obtain
22 (b a)2 E[]
for Ccpt (a, b).
Now let us consider the higher dimensional case Rn . The domain is assumed
to be bounded, so
R := [M, M]n
for some large M. Functions in Ccpt () can be extended by zero to smooth functions
on R, so it suffices to derive the Poincar inequality for Ccpt (R). Following
the one-dimensional case, we apply the fundamental theorem of calculus in the x1
variable to write x1
(x) = (y, x2 , . . . , xn ) dy.
M x 1
22 4M 2
. (11.7)
x1 2
n
2
=
.
2 j=1 x j 2
22 8M 2 E[], (11.8)
for Ccpt ().
To complete the argument, suppose that u H01 (). By the definition of H01
there exists an approximating sequence {k } Ccpt () such that k u in the H 1
norm. By (11.8) the inequality
k 22 8M 2 E[k ] (11.9)
holds for each k N. Our goal is thus to take the limit k on both sides.
For the energy side note that
n
u u k k
E[u] E[k ] = , ,
j=1
x j x j x j x j
n
(u k ) u k (u k )
= , + ,
j=1
x j x j x j x j
n
(u k )
u
k
|E[u] E[k ]|
+
.
j=1
x j 2 x j 2 x j 2
|E[u] E[k ]| u k H 1 (u H 1 + k H 1 ) .
11.3 Coercivity and Existence of a Minimum 211
u22 8M 2 E[u].
u2
() := sup 1 .
uH01 ()\{0} E[u] 2
which is rather poor compared to the best known bounds. In Sect. 11.5 we will estab-
lish a direct relationship between the Poincar constant and the lowest eigenvalue of
on .
Since our goal is to minimize D f [] over H01 , it is useful to express the conclusion
of Theorem 11.2 in terms of the H 1 norm. Because
u2H 1 = u22 + E[u], (11.10)
(11.8) is equivalent to
u2H 1 (2 + 1)E[u] (11.11)
for all u H01 (). A quadratic functional on a Hilbert space is called coercive if its
ratio to the norm squared is bounded below. The Poincar inequality thus states that
E[] is coercive on H01 ().
The identity (11.10) also gives an upper bound,
E[u] u2H 1 . (11.12)
A quadratic functional is called bounded if its ratio to the norm squared is bounded
above. For the energy this condition is automatic.
We are now prepared to tackle the minimization problem for D f [], by exploiting
the fact that E[] is both coercive and bounded.
212 11 Variational Methods
D f [u] D f [w]
D f [w] E[w] | f, w
| .
Applying (11.11) to the energy and the Cauchy-Schwarz inequality to the inner
product gives
1
D f [w] 2 w2H 1 f 2 w2
+1
(11.13)
1
2 w2H 1 f 2 w H 1 .
+1
The right-hand side has the form cx 2 bx where c = 1/(2 + 1), b = f 2 , and
x = w H 1 . According to the minimization formula for a quadratic polynomial,
b
min(cx 2 bx) =
xR 4c
2 + 1
D f [w] f 22 , (11.14)
4
then (11.14) shows that d0 > . By Lemma 2.1, there exists a sequence of
wk H01 () so that
lim D f [wk ] = d0 . (11.15)
k
Our strategy is to argue that the sequence {wk } is Cauchy in H01 (), and therefore
converges by completeness.
The quadratic structure of E[] implies that
u+v 1 1 1
E = E[u] + E[v] E[u v] (11.16)
2 2 2 4
11.3 Coercivity and Existence of a Minimum 213
2 2 2 4 2
1 1 1
= D f [wk ] + D f [wm ] E[wk wm ].
2 2 4
Because d0 is the infimum of D f [], this implies
1 1 1
d0 D f [wk ] + D f [wm ] E[wk wm ]. (11.17)
2 2 4
Turning this inequality around gives
By (11.15),
lim 2D f [wk ] + 2D f [wm ] 4d0 = 0,
k,m
lim wk wm H 1 = 0,
k,m0
i.e., the sequence {wk } is Cauchy in H01 (). Therefore, by completeness, there exists
a function u H01 () such that
u := lim wk .
k
1 1 1
d0 = D f [u 1 ] + D f [u 2 ] E[u 1 u 2 ].
2 2 4
214 11 Variational Methods
E[u 1 u 2 ] = 0.
Note the crucial role that completeness plays in the proof of Theorem 11.3. If we
had taken the domain of the Dirichlet energy to C 2 (), then there would be no way
to deduce convergence of the sequence {wk } from the energy limit (11.18).
Corollary 11.4 For f L 2 () the weak formulation (11.5) of the Poisson equation
admits a unique solution u H01 ().
Proof Existence of the solution follows from Theorems 11.1 and 11.3. To prove
uniqueness, suppose that u 1 and u 2 both satisfy (11.5). Subtracting the equations
gives
(u 1 u 2 ) d n x = 0
for all Ccpt (). By the definition (10.18) of H01 () we can take a sequence of
k H0 () such that k u 1 u 2 in the H 1 norm. This implies in particular that
1
= 0.
u = f.
If f Hloc
m
() for m 0 then u Hloc
m+2
().
for k < m n2 . Thus Theorem 11.5 shows that the weak solution of the Poisson
equation obtained in Corollary 11.4 is a classical solution provided f Hloc m
()
with m > 2 . Furthermore, if f C () then u C () also.
n
Using 10.26, we can translate this into a relation between the Fourier coefficients,
n
u
ck [ f ] = ik j ck
j=1
x j (11.20)
= |k| ck [u].
2
We can now deduce the interior elliptic regularity result from Lemma 11.6 by
localizing.
Proof of Theorem 11.5. Suppose that u = f in the weak sense of (11.5), with
u H01 () and f L 2loc (). By rescaling and translating, if necessary, we can
assume that [0, 2]n , which allows us to identify with a subset of Tn . For
Ccpt () we can extend u by zero to a function on [0, 2]n . This allows us
to consider u as a function in H 1 (Tn ). Our goal is to apply Lemma 11.6 to the
localized function u.
We must first show that u satisfies a weak Poisson equation on Tn . Given
C (Tn ) we can use the test function Ccpt () in the weak solution condition
(11.5) to obtain
[u () f ] d n x = 0.
Tn
Using the product rule for the gradient, we can rewrite this as
0= [u + u f ] d n x
Tn
(11.21)
= [(u) u + u f ] d x. n
Tn
In order to interpret this as a weak equation for u, we need to rewrite the second
term as an integral involving rather than .
Since the components of u are in Ccpt (), the definition of the weak derivative
u gives
u () d n x = u () d n x
= [u + u] d n x.
This holds for all C (Tn ), implying that u is a weak solution, in the sense of
(11.19), of the equation
(u) = 2u u + f . (11.23)
11.4 Elliptic Regularity 217
Because Ccpt (), the right-hand side of (11.23) lies in L 2 (Tn ) by the
assumptions on u and f . Hence u H 2 (Tn ) by Lemma 11.6. This holds for
all Ccpt (), implying that u Hloc
2
(Tn ).
q
We can now apply the same argument inductively. If u Hloc (), and f
q1
Hloc (), then the right-hand side of (11.23) lies in H (T ) for Ccpt
q1 n
().
q+1
Lemma 11.6 then gives u Hloc ().
In Sect. 7.6, we mentioned that the spectral theorem for finite-dimensional matrices
has an extension to certain differential operators. In this section, we will prove this
result in the classical setting of the Laplacian on a bounded domain with Dirichlet
boundary conditions.
k = k k
lim k = .
k
for all Ccpt ().
If we substitute in place of then the second term in (11.24) is times the L 2
inner product ,
. By the same token we could interpret the first term as an inner
product form of the Dirichlet energy (11.1). We will denote this by
218 11 Variational Methods
E[u, v] := u v d n x,
so that E[u] = E[u, u]. The L 2 and H 1 inner products are related by
u, v
H 1 = u, v
+ E[u, v].
With this convention we can write the weak eigenvalue equation (11.24) in an equiv-
alent form as
E[, ] = ,
(11.25)
for all Ccpt ().
For the minimization argument, it will prove convenient to enlarge the space of test
functions from Ccpt () to H01 (). To justify this, note that by definition a function
v H0 () can be approximated by a sequence of k Ccpt
1
() with respect to the
1
H norm. This implies in particular that
E[, v] = , v
(11.26)
is called the Rayleigh quotient for v. Note also that if satisfies (11.26) then
R[] = . (11.27)
1
R[v] >0 (11.28)
2
for v H01 (), v = 0. This suggests that the smallest eigenvalue is related to the
Poincar constant, and that we can locate it by minimizing R[].
The argument for existence of a minimum is a little trickier than the analysis of
the Dirichlet principle in Sect. 11.3. To understand why, note that
R[cv] = R[v]
11.5 Eigenvalues by Minimization 219
for c C\{0}, so the minimizing function is not unique. Therefore it is quite possible
to have a sequence that minimizes the Rayleigh quotient but does not converge in H 1 .
The tool that allows us to resolve this issue was developed by Franz Rellich in the
early 20th century.
Theorem 11.8 (Rellichs theorem) Suppose Rn is a bounded domain and {vk }
is a sequence in H01 () that satisfies a uniform bound
vk H 1 M
1 = 1 1
R[vk ] = E[vk ].
The sequence of energies E[vk ] is bounded by (11.30). This also implies that the
sequence {vk } is bounded with respect to the H 1 norm, because
vk 2H 1 = 1 + E[vk ]
lim vk 1 2 = 0. (11.31)
k
By the definition of 1 ,
vk + vm
R 1 .
2
Therefore
vk + vm 1
E vk + vm 22 .
2 4
By (11.31), we have
lim vk + vm 22 = 21 22 = 4,
k,m
lim E[vk vm ] 0.
k,m
Since we already know that {vk } converges in the L 2 norm, we conclude that
lim vk vm H 1 0.
k,m
That is, the sequence {vk } is Cauchy with respect to the H 1 norm.
By completeness {vk } converges with respect to the H 1 norm to some u H01 ().
Since the L 2 norm is bounded above by the H 1 norm, this means vk u in the
L 2 sense also. Hence u 1 , which proves that 1 H01 (). It then follows from
(11.30) that
E[1 ] = R[1 ] = 1 .
11.5 Eigenvalues by Minimization 221
To see that this implies the weak solution condition, suppose w H01 (). Using
the inner product form of E[], we can expand
for t R. Similarly,
1 + tw22 = 1 + 2t Re 1 , w
+ t 2 w22 . (11.35)
for all w H01 (). By replacing w by iw, we can deduce also that
Im E[1 , w] = 1 Im 1 , w
for all w H01 (). In combination these give (11.26), so 1 is a weak solution of
1 = 1 1 .
In principle, 1 could be complex-valued at this point, but since its real and imag-
inary parts each satisfy (11.26) separately, we can select one of these to specialize
to the real-valued case.
To deduce the regularity of 1 , we apply Theorem 11.5 with f = 1 . The fact
that 1 H01 () then implies that 1 Hloc
3
(). Starting from Hloc 3
() then gives
q
1 Hloc (), and so on. This inductive argument shows that 1 Hloc () for each
5
q N. We conclude that 1 C () by Theorem 10.11.
It is clear that the 1 produced in Theorem 11.9 is the smallest eigenvalue, since
all eigenvalues occur as values of the Rayleigh quotient by (11.27). An example of 1
is shown in Fig. 11.1. We will see in the exercises that the first eigenfunction cannot
have zeros in . This can be used to show that the first eigenfunction is unique up to
a multiplicative constant, i.e., 1 has multiplicity one.
To find other eigenvalues, the strategy is to restrict to subspaces and then apply the
same construction used for 1 . For a subset A L 2 () the orthogonal complement is
222 11 Variational Methods
A := w L 2 (); w, v
= 0 for all v A .
W = A H01 (),
w, v
= lim wk , v
= 0.
k
This shows that w W . Therefore W is closed. By Lemma 7.8, this implies that W
is a Hilbert space with respect to the H 1 inner product
Proof of Theorem 11.7 Let 1 H01 (; R) be the eigenvector obtained in
Theorem 11.9, normalized so that 1 2 = 1. The subspace
W1 := {1 } H01 ()
for all w W1 \ {0}. By the same variational argument used for 1 , this implies that
E[2 , w] = 2 2 , w
(11.37)
for all w W1 .
11.5 Eigenvalues by Minimization 223
To extend this formula to the full weak solution condition, note that 2 , 1
= 0
because 2 W1 . The fact that 1 satisfies (11.26) also gives
E[1 , 2 ] = 1 1 , 2
= 0. (11.38)
v c1 , 1
= 0,
+ E[2 , w] 2 2 , w
The first line on the right is zero by (11.38) and the second is zero by (11.37). Thus
(11.26) is satisfied for all v H01 (), showing that 2 = 2 2 in the weak
sense. By taking the real or imaginary part we can assume that 2 is real-valued.
Subsequent eigenvalues are obtained by repeating this process inductively. After
k eigenfunctions have been found, we set
and minimize the Rayleigh quotient over Wk to find k+1 and k+1 . Note that Wk
always contains nonzero vectors, because H01 () is infinite-dimensional. The reg-
ularity argument from the end of the proof of Theorem 11.9 applies to any solution
of (11.26), so that k C () for each k.
This process produces an orthonormal sequence of eigenfunctions {k } with eigen-
values satisfying
1 2 3 . . . .
k 2H 1 = 1 + k . (11.40)
Suppose the sequence {k } is bounded. Then (11.40) shows that the sequence {k } is
bounded with respect to the H 1 norm. Theorem 11.8 then implies that a subsequence
of {k } converges in L 2 (). But the k are orthonormal with respect to the L 2 norm,
so that
k m 2 = 2
The final claim is that {k } forms an orthonormal basis of L 2 (). After obtaining
the full sequence from the inductive procedure, let us set
W := {1 , 2 , . . . } H01 ().
Suppose that W contains a nonzero vector. Applying the Rayleigh quotient mini-
mization as above produces yet another eigenvalue . Since the k s were constructed
by minimizing the Rayleigh quotient on subspaces Wk W , this new eigenvalue
satisfies k for all k N. This is impossible because k . Hence W = {0}.
In other words, the only vector in H01 () that is orthogonal to all of the k is
0. Since Ccpt () H01 () and Ccpt () is dense in L 2 () by Theorem 7.5, this
implies that the only vector in L 2 () that is orthogonal to all of the k is 0. Hence
{k } is a basis by Theorem 7.10.
In this section we take up the proof of Rellichs theorem (Theorem 11.8). Results
of this type, that force convergence of an approximating sequence, are a crucial
component of variational strategies for PDE.
In a normed vector space, a subset A is said to be sequentially compact if every
sequence within A contains a subsequence converging to a limit in A. A fundamental
result in analysis called the Bolzano-Weierstrass theorem (Theorem A.1) says that in
Rn this is equivalent to the definition of compact given in Sect. 2.3. That is, a subset
of Rn is sequentially compact if and only if it is closed and bounded.
Rellichs theorem could be paraphrased as the statement that a closed and bounded
subset of H01 () is sequentially compact as a subset of L 2 (), provided we are
careful about the two different norms referenced in this statement. Our strategy will
be to reduce Rellichs theorem to an application of Bolzano-Weierstrass using Fourier
series. We start with the periodic case.
Theorem 11.10 Suppose that v j is a sequence in H 1 (Tn ) that satisfies a uniform
bound
v j H 1 M (11.41)
m
Sm [v j ] = ck [v j ]eikx .
k=m
ck [v j ] 1 v j 2 .
2
The assumption (11.41) implies also that the L 2 norms v j 2 are bounded by M, so
that
ck [ f j ] M
2
such that
lim ck v (1)
j = ak
j
for k = 1, 0, 1. Since the partial sum S1 involves only these three coefficients, this
implies the uniform convergence
1
lim S1 v (1)
j = ak eikx ,
j
k=1
such that
lim ck v (2)
j = ak
j
l
lim Sl v (l)
j = ak eikx (11.43)
j
k=l
in L 2 (T).
To complete the proof, set
( j)
w j := v j .
M2
,
1 + m2
independent of j.
By the triangle inequality,
wi w j 2 wi Sm [wi ]2 + Sm [wi ] Sm [w j ]2 + w j Sm [w j ]2 .
M
< .
1 + m2
wi w j 2 < 3
for i. j N . This shows that the subsequence w j is Cauchy with respect to the L 2
norm. By completeness the subsequence converges in L 2 (T).
As we saw in Sect. 11.5, the Rayleigh principle for the eigenvalue problem is very
useful as a theoretical tool. It also leads to some very practical applications in terms
of estimating eigenvalues or calculating them numerically.
The basic strategy is to exploit the formula for eigenvalues that appeared in the
proof of Theorem 11.7. Assuming the Dirichlet eigenvalues {k } of are written in
increasing order and repeated according to multiplicity,
where
Wk1 := {1 , . . . , k1 } H01 (),
228 11 Variational Methods
with k the eigenfunction corresponding to k The only problem with this formula
is that determining the kth eigenfunction requires knowledge of the first k 1 eigen-
functions. This issue is resolved by the following:
Theorem 11.11 (Minimax principle) For a bounded domain Rn , let k denote
the set of all k-dimensional subspaces of H01 (). Let {k } denote the sequence of
Dirichlet eigenvalues of in increasing order. Then
k = min max R[u] (11.49)
V k uV \{0}
for each k N.
Proof Let {k } H01 () denote the eigenfunction basis. By the weak solution
condition (11.25), orthonomality in L 2 () implies also that
i , i = j,
E[i , j ] = (11.50)
0, i = j.
Let us set
Veig = [1 , . . . , k ] H01 (),
where [. . . ] denotes the linear span of a collection of vectors. For u Veig , expanded
as
k
u= c j k ,
j=1
R[u] k
w V [1 , . . . , k1 ] .
k R[w].
Hence,
k max R[u] (11.52)
uV \{0}
for V k .
In combination, (11.51) and (11.52) show that the minimum on the right-hand
side of (11.49) exists and is equal to k .
H01 () H01 (), (11.53)
k ()
k ()
for all k N.
Another way to make use the Rayleigh and minimax principles is to approxi-
mate eigenvalues by restricting subspaces of computationally simple functions within
H01 (). This approach can give surprisingly good estimates even when the subspaces
are small.
230 11 Variational Methods
Example 11.13 On the unit disk D = {r < 1} R2 , consider the family of functions
w (r ) := 1 r
The L 2 norm is 1
w 22 = (1 r )2 2r dr
0
2
= .
2 + 3 + 2
2
1 R[w ] = +3+
for > 0. The optimal choice is = 2, which gives
.
1 3 + 2 2 = 5.828. (11.54)
Compare this to the exact value computed in Example 5.5 in terms of zeros of the
Bessel J -function,
.
1 = j0,1
2
= 5.783. (11.55)
We can estimate higher eigenvalues and improve the accuracy by using a larger
subspace. This computational strategy was introduced by Walter Ritz in 1909, and
is referred to as the Rayleigh-Ritz method. Given a finite dimensional subspace A
H01 (), we let k (A) denote the k-dimensional subspaces of A. The approximate
eigenvalues associated to A are then given by
k := min max R[u] , (11.56)
W k (A) uW \{0}
for k = 1, . . . , dim A.
Since A is finite dimensional, the calculation of (11.56) can be recast as a matrix
eigenvalue problem. By the same arguments used in the proof of Theorem 11.7,
the values k are associated to vectors vk A satisfying the approximate weak
eigenvalue equation,
E[vk , w] = k vk , w
(11.57)
for all w A.
To interpret (11.57) as a matrix eigenvalue equation we fix a basis {w j }mj=1 for A.
In terms of this basis, the energy functional and L 2 inner product define matrices
E i j := E[wi , w j ], Fi j := wi , w j . (11.58)
If vk is expanded as
m
vk = cjwj,
j=1
m
ci (E i j k Fi j ) = 0 (11.59)
i=1
for j = 1, . . . , m. This equation has a nontrivial solution only if the rows of the
matrix E k F are linearly dependent, which is equivalent to the vanishing of the
determinant. The values k can thus be calculated as the roots of a polynomial
{1 , . . . , k } = { : det(E F) = 0} . (11.60)
Example 11.14 Consider D as in Example 11.13, but now take the subspace A =
[w1 , w2 , w3 ], where
w j (r ) := 1 r 2 j .
232 11 Variational Methods
The estimate 1 matches the exact value (11.55) very closely; in fact,
|1 1 | 106 .
The problem is that the space A consists entirely of radial functions, so that the
second eigenfunction, &
2 (r, ) = J1 ( 2 r )ei ,
is orthogonal to A.
Example 11.15 Let us consider the domain (0, ), for which the exact Dirichlet
eigenfunctions are k (x) = sin kx for k N, with k = k 2 . Define a mesh by
subdividing (0, ) into m + 1 intervals of length /(m + 1). To the jth vertex we
associate the element
11.7 Estimation of Eigenvalues 233
x m+1
j1
, j1
x j
,
m+1 m+1
w j (x) = x + m+1 , m+1 x
j+1 j j+1
,
m+1
0, otherwise,
Table 11.1 shows the resulting approximate eigenvalues as a function of the number
of elements m.
Table 11.1 Approximate Dirichlet eigenvalues for (0, ) computed using m elements
m 3 10 25 50 exact
1 1.05 1.01 1.00 1.00 1
2 4.86 4.11 4.02 4.01 4
3 12. 84 9.56 9.10 9.03 9
4 17.80 16.31 16.08 16
5 29.45 25.77 25.20 25
In the mid-18th century, Lagrange and Euler jointly developed a framework for
expressing problems in classical mechanics in terms of the minimization of an action
functional. Euler coined the term calculus of variations to describe this approach,
which proved adaptable to a great variety of problems.
In the original classical mechanics setting, the action functional was the integral
of a Lagrangian function, defined as kinetic energy minus potential energy. These
might be energies of a single particle or a system.
11.8 Euler-Lagrange Equations 235
L : Rn R R.
for every Ccpt (). This implies a (possibly nonlinear) PDE for u, called the
Euler-Lagrange equation of L.
The Dirichlet principle gives the most basic example of this setup. For
| p|2
L( p, w, x) := (11.61)
2
the action functional is the Dirichlet energy E[w], and the Euler-Lagrange equation is
the Laplace equation. To formulate the Poisson equation, we modify the Lagrangian
to include the forcing term f L 2 (; R),
| p|2
L( p, w, x) := f w. (11.62)
2
In this case the action is the functional D f [w] defined in (11.6).
A classic nonlinear example is the surface area minimization problem. For
R2 , the graph of a function w : R defines a surface in R3 . According to (2.8),
the surface area of this patch of surface is given by
+
A[w] := 1 + |w|2 d 2 x.
Let us work out the Euler-Lagrange equation for the surface area functional. For
Ccpt (), we have
+
d d
A[u + t] = 1 + |u + t|2 d 2 x
dt t=0 dt t=0
u
= & d 2 x.
1 + |u|2
By Greens first identity (Theorem 2.10), and the fact that vanishes near the bound-
ary, , -
d u
A[u + t] = & d 2 x.
dt t=0 1 + |u|2
Setting this equal to zero for all gives the Euler-Lagrange equation
, -
u
& = 0. (11.63)
1 + |u|2
11.9 Exercises
u = min D f [w].
wA
m
Given a basis w1 , . . . wm for A (not necessarily orthonormal), set u = i=1 ci wi .
Find equation an equation for (c1 , . . . , cm ) in terms of f and the matrices E and F
defined in (11.58).
11.2 To demonstrate the role that ellipticity plays in Theorem 11.5, consider the
operator L = r 2 on the unit disk D = {r < 1} R2 , where r := |x|2 . For
f L 2 (D), a weak solution of the equation Lu = f with Dirichlet boundary
conditions is defined as a function u H01 (D) satisfying
u (r 2 ) + f d 2 x = 0 (11.64)
D
for all Ccpt (D).
(a) Compute the weak partial derivatives of the function log r and show that log r
H01 (D).
(b) Show that u(x) = log r satisfies (11.64) with f = 0. Note that even though D
and f are C , the solution u is not even locally H 2 . (It is not a coincidence that
the singularity of u occurs at the point where ellipticity of the operator fails.)
11.3 Let R2 be the equilateral triangle with vertices (0, 0), (2, 0), and (1, 3).
Define w H01 () to be the piecewise linear function
whose graph forms a tetrahe-
dron over , with the top vertex at (1, 1/ 3, 1/ 3). Approximate the first eigen-
value by computing the Rayleigh quotient R[w]. (For comparison, the exact value
2
is 1 = 43 ; this corresponds to the eigenfunction shown in Fig. 11.1.)
11.4 Let B3 denote the unit ball {r < 1} R3 . Find an upper bound on the first
eigenvalue by computing the Rayleigh quotient of the radial function w(r ) = 1 r .
(Compare your answer to the exact value 1 = 2 found in Exercise 5.8.)
with 0 and + 0.
(b) Note Lemma 10.10 implies that H01 (; R). Assuming that 1 is normal-
ized 1 2 = 1, show that
+ 2 + 2 = 1,
E[+ ] + E[ ] = 1 .
E[ ] 1 2 .
Use this, together with the fact that E[1 ] = 1 , to deduce that
E[ ] = 1 2 .
= 1 . (11.65)
(d) Use the strong maximum principle of Theorem 9.5 to deduce from (11.65) that
if has a zero within then 0. Conclude that 1 has no zeros in .
(e) If u H01 (; R) C () is some other eigenfunction with eigenvalue 1 ,
then u c1 is also an eigenfunction for each c R. Show that c can be chosen
so that u c1 has a zero in . Conclude that u c1 .
A distribution on Rn is a more general functional Ccpt () C, not necessarily
expressible as an integral. To qualify as a distribution, a functional is required to
satisfy conditions that insure that weak derivatives and other basic operations are
well defined.
As with weak derivatives, the concept of a distribution was inspired by idealized
situations in physics. Indeed, the term distribution was inspired by charge distrib-
utions in electrostatics, an example that we will discuss in Sect. 12.1. Distributions
generalize the notion of weak solutions, in the sense that every function in L 1loc ()
also defines a distribution. The trade-off for the increased generality is that some
basic operations for functions cannot be applied to distributions. The product of two
distributions is not generally well defined, for example.
There are some technicalities in the mathematical theory of distributions that
require more background on the topology of function spaces than we assume for this
text. We will treat these technicalities rather lightly; our focus will be on exploring
the PDE applications.
kq0 x
E(x) = , (12.1)
|x|3
E = 4k, (12.2)
This is consistent with (12.2), in that Coulomb assumes the charge density is zero
for x = 0. However, if a function in L 1loc vanishes except at a single point, then
that function is zero by the equivalence (7.6). Thus a point charge density has no
meaningful interpretation as a locally integrable function.
To reconcile (12.1) with Gausss law, let us consider the weak form of (12.2),
E d 3 x = 4k d 3 x (12.4)
R3 R3
for all Ccpt (R3 ). The left side of (12.4) is well defined because the components
of E are locally integrable.
Since the Coulomb field is smooth away from the origin, we can integrate by parts
as long as we exclude the origin from the region of integration by writing the integral
as a limit,
E d x = lim
3
E d 3 x. (12.5)
R3 0 {r }
The region {r } has boundary given by the sphere {r = }. In this case the
outward unit normal is a radial unit vector pointing towards the origin,
x
= . (12.6)
r r =
By the divergence theorem (Theorem 2.6), and the fact that E = 0 for r > 0 by
(12.3),
12.1 Model Problem: Coulombs Law 241
E d 3 x = E d S.
{r } {r =}
for every Ccpt (R3 ). This is consistent with the physical interpretation of as a
charge located exactly at the origin.
The concept of a point density was widely used in physics applications in the
18th and 19th centuries. In a 1930 book on quantum mechanics, the physicist Paul
Dirac described such densities in terms of a delta function (x), whose defining
property is that
f (x)(x) d n x := f (0), (12.9)
Rn
for a continuous function f . This terminology and notation are potentially mislead-
ing, because is not a function and (12.9) is not actually an integral. However, Diracs
formulation hints at the proper mathematical interpretation, which is that should
be understood as a functional f f (0).
If we accept the intuitive definition of the delta function for the moment, then we
can interpret the calculation (12.8) as showing that
x
= 4. (12.10)
r3
242 12 Distributions
A distribution on a domain Rn is a continuous linear functional Ccpt () C.
The map defined by a distribution u is usually written as a pairing of u with a test
function, i.e.,
(u, ) C (12.11)
for Ccpt (). Linearity means that
(u, c1 1 + c2 2 ) = c1 ( f, 1 ) + c2 ( f, 2 ),
for all c1 , c2 C and 1 , 2 Ccpt ().
The definition of distribution also includes the word continuous. To define con-
tinuity for functionals we must first specify what convergence means in Ccpt (). The
standard definition is that for a sequence {k } to converge to in Ccpt () means that
all k have support in some fixed compact set K , and the sequence of functions
and all sequences of partial derivatives converge uniformly on K . Continuity of the
functional (12.11) is then defined by the condition that convergence of a sequence
k in Ccpt () implies that
In finite dimensions continuity is implied by linearity. That is not the case here, but
in practice it is quite difficult to come up with a functional that is linear but not
continuous.
The set of distributions on forms a vector space denoted by D
(). Linear
combinations of distributions are defined in the obvious way by
(c1 u 1 + c2 u 2 , ) := c1 (u 1 , ) + c2 (u 2 , ),
for u 1 , u 2 D
() and c1 , c2 C. The mathematical theory of distributions was
developed independently in the mid-20th century by Sergei Sobolev and Laurent
Schwartz. Schwartz used D as a notation for Ccpt , and the prime accent on D
comes
from the notation for the dual of a vector space in linear algebra.
A locally integrable function f L 1loc () defines a distribution through the
integral pairing
( f, ) := f d n x. (12.13)
L 1loc () D
().
As we saw with the point charge density in Sect. 12.1, not all distributions are
given by functions. We use the notation x for the delta function centered at x ,
defined by
( x , ) := (x). (12.14)
( f u, ) := (u, f ).
It does not make sense, however, to multiply two distributions together. This fact was
intuitively clear in early applications: the product of two charge densities makes no
physical sense.
Convergence of a sequence of distributions is defined in a very straightforward
way. We say that u k u in D
() if
lim (u k , ) = (u, )
k
for all Ccpt (). All distributions can in fact be approximated by smooth functions
by such a limit, although we are not equipped to prove that here. We will present
one useful special case, a construction of the delta function as a limit of integrable
functions.
Lemma 12.1 Given f L 1 (Rn ) satisfying
f d n x = 1, (12.15)
Rn
as a distributional limit.
Proof For Ccpt (Rn ) we can evaluate the pairing with fa using a change variables,
( f a , ) = a n f (ax)(x) d n x
Rn
= f (x)(x/a) d n x.
Rn
244 12 Distributions
Given > 0, the fact that f is integrable implies that exists R sufficiently large
so that
| f | d n x < . (12.18)
|x|R
for all |x| R. Using (12.18) and (12.19) to estimate the difference (12.17) gives
( f a , ) (0) 2 | f (x)| d n x + | f (x)| d n x
|x|R |x|R
2 + f 1 ,
lim ( f a , ) = (0).
a
The rescaling used in Lemma 12.1 is illustrated in Fig. 12.1. Note that this looks
very similar to Fig. 9.1, and in fact the proof of Lemma 12.1 uses essentially the
same argument as that of Theorem 9.1. We saw another case of this construction in
the proof of Theorem 6.2. Indeed, we can now interpret the result of Theorem 6.2 as
a distributional limit of the heat kernel,
lim Ht = ,
t0
f8
f4
f2
f1
The distributional derivative extends the concept of the weak derivative introduced
in Sect. 10.1. By analogy with (10.7), for u D
() and we define the distribution
D u by
(D u, ) := (1)|| (u, D ), (12.20)
with
1 n
D := 1
x1 xnn
Example 12.2 Let us reconsider Example 10.3, where we considered the derivative
of w L 1loc () defined by
w (t), t < 0,
w(t) =
w+ (t), t 0,
w (t), t < 0,
h(t) :=
w+ (t), t > 0.
(D x , ) = (1)|| ( x , D )
= (1)|| D (x).
In other words, the distribution D x evaluates the derivative of the test function at
the point x, up to a sign.
Example 12.4 The function ln |x| is locally integrable on R and so defines a distri-
bution in D
(R). Therefore (ln |x|)
exists in the distribution sense. This is puzzling
because
d 1
ln |x| =
dx x
for Ccpt (R). To compute this we avoid the singularity at 0 by writing
(ln |x|)
, = lim
(x) ln |x| d x. (12.22)
0 |x|
and similarly
(x)
(x) ln |x| d x = () ln + d x.
x
(x)
(x) ln |x| d x = () () ln + d x.
|x| |x| x
() ()
lim =
(0).
0 2
Therefore
lim () () ln = 2
(0) lim ln
0 0
= 0.
The limit on the right exists for Ccpt (R), even though x 1 is not integrable,
because the limit is taken symmetrically. This limiting procedure defines a distribu-
tion called the principal value of x 1 , written as PV[x 1 ]. We could rephrase (12.23)
as
d
ln |x| = PV x 1 .
dx
Example 12.5 Let us reinterpret the discussion from Sect. 12.1 in terms of distrib-
utional derivatives. We already noted that the components of x/r 3 are locally inte-
grable, so we can consider the Coulomb formula (12.1) for E as the definition of a
vector-valued distribution. The distributional divergence of x/r 3 is defined by the
condition that
x
x
3 , := 3
d 3 x,
r R 3 r
for Ccpt (R3 ). The derivation of (12.8) thus shows that
x
= 4. (12.24)
r3
248 12 Distributions
We can also consider the corresponding result for the Coulomb electric potential
1
(x) = .
r
(ignoring the physical constants). The gradient of exists in the weak sense and is
given by
1 x
= 3.
r r
Because the Poisson equation is linear, it makes sense to construct a solution with
a continuous density by superimposing a field of point sources. With a change of
variables, we can see from (12.25) that the potential function corresponding to a
point source at y R3 is
1
y (x) := .
4|x y|
Weighting the point sources by the density and summing them with an integral
gives
1 (y) 3
u(x) = d y. (12.26)
4 R3 |x y|
This formula, which is often stated as the integral form of Coulombs law, does
indeed yield a solution of the Poisson equation on R3 under certain conditions. For
example if Ccpt
1
(R3 ) then one can confirm that u = by direct computation.
1
The C condition is stronger than necessary here, but continuity alone would not
be sufficient. (The precise notion of regularity needed for this problem is something
called Hlder continuity.)
This idea of constructing of general solutions by superposition of point sources is
the inspiration for the concept of a fundamental solution. For a constant-coefficient
differential operator L acting on Rn , of the form
L= a D ,
||m
12.4 Fundamental Solutions 249
L = . (12.27)
For example, in the Coulomb case the calculation (12.25) gives the fundamental
solution of on R3 . Fundamental solutions are especially important for classical
problems involving the Laplacian.
The solution formula (12.26) resembles the convolution used to solve the heat
equation in Sect. 6.3. For f, g L 1 (Rn ) the convolution is defined as
f g(x) := f (y)g(x y) d n y.
Rn
f g = g f.
The x integration looks almost like the convolution of with g, except with the
argument switched from y x to x y. With the reflection defined by
g (x) := g(x),
we have
g (y) = g(x y)(x) d n x.
Rn
If , Ccpt (Rn ), then it is easy to check that Ccpt
(Rn ) also. Moreover,
the map is linear and continuous. We can thus define u for u D
(Rn )
and Ccpt (Rn ) by
(u , ) := (u, ) (12.29)
for Ccpt (Rn ).
The distribution plays a special role with regard to convolutions. By the definition
(12.29),
250 12 Distributions
( , ) := (, )
= (0)
= (x)(x) d n x
= (, ).
D (w f ) = (D w) = w (D f ).
Proof For , Ccpt (), we compute directly that
D ( )(x) = (x y)(y) d n y
= D (x y)(y) d n y
= (D ) (x).
Since the convolution is symmetric, the same formula holds with and switched.
Thus the formula
D ( ) = (D ) = (D ) (12.31)
(D ( f ), ) = (1)|| ( f, (D )).
(D ( f ), ) = (1)|| ( f, D ( ))
= (D f, )
= ((D f ) , ),
12.4 Fundamental Solutions 251
and also
(D ( f ), ) = (1)|| ( f, (D ) ))
= ( f (D ), ).
Lu = f
is solved by
u = f.
= (L) f.
Note that the second step only works because the coefficients a are assumed to be
constant. Since L = , we see from (12.30) that
L( f ) = f.
A result called the Malgrange-Ehrenpreis theorem, proven in the 1950s, says that
every constant coefficient differential operator on Rn admits a fundamental solution.
The fundamental solution of the Laplacian, which we will now work out for any
dimension, is the most important case.
(, ) = (, )
252 12 Distributions
for Ccpt (Rn ). To evaluate this, it is useful to first compute the gradient,
x
(x) = .
An r n
The function x/r n is locally integrable in Rn and has compactly support. Therefore
we can deduce from Greens first identity (Theorem 2.10) that
d x =
n
d n x
Rn Rn
1 x
= d n x
A n Rn r n
1 1 n
= d x.
A n Rn r n1 r
hence = .
Although fundamental solutions are defined only for the domain Rn , one of their
principle applications is to boundary value problems on a bounded domain Rn .
The connection comes from a integral formula introduced in 1828 by George Green.
For this section, let denote the fundamental solution of the Laplacian on Rn ,
as given by (12.32). For y Rn we set
for y .
Proof Because the point y is fixed, for notational convenience we can change
variables to assume y = 0. For > 0 set
B := B(0; ),
for > 0, with M independent of . Using the fact that vol( B ) = An n1 and the
formula (12.32) for , we can estimate
u ln , n = 2,
d S M
r
, n 3.
B n2
1
= ,
r An r n1
The right-hand side is the average value of u over the sphere B . By continuity,
1
lim u d S = u(0).
0 vol( B ) B
The representation formula of Theorem 12.9 has many applications. The original
goal that Green had in mind was a solution formula for the Poisson problem with
inhomogeneous Dirichlet boundary conditions, which we will now describe.
Suppose there exists a family of functions Hy C 2 (), for y , satisfying
Hy = 0, Hy = y . (12.36)
G y := y Hy . (12.37)
The result then follows by subtracting (12.38) from the representation formula of
Theorem 12.9.
Example 12.11 The Greens function for the unit disk D R2 can be derived using
a trick from electrostatics called the method of images. This involves placing charges
outside the domain in order to solve the boundary value problem. For the unit disk,
in order to find Hy we consider a charge placed at the point y given by reflecting
y C\{0} across the unit circle, i.e.,
y
y := . (12.39)
|y|2
If y is replaced by y,
the corresponding formula is
|x y|
2 = 1 + |y|
2 2|y|
cos
= 1 + |y|2 2|y|1 cos .
which simplifies to
|x y|
|x y|
= (12.40)
|y|
for x D.
y
0
256 12 Distributions
Since (x) = 2
1
ln |x| in R2 , taking the logarithm of (12.40) gives
1
y (x) = y (x) + ln |y|
2
for y = 0 or x. Thus we can solve (12.36) for y = 0 by setting
1
Hy := y ln |y|.
2
For y = 0 the obvious solution is H0 := 0 because |D = 0.
The Greens function is thus
|xy|
2
1
ln |x y|
|y|
, y = 0
G y (x) =
2 ln |x|,
1
y = 0.
To apply this in the solution formula, we need the radial derivative of G. For y fixed
and r := |x| we compute
ln |x y| = x ln |x y|
r
(x y)
=x
|x y|2
1xy
= .
|x y|2
G y 1 |y|2
(x) = . (12.41)
r 2|x y|2
This will look more familiar in polar coordinates. With y = (r cos , r sin ) and
x = (cos , sin ), the formula becomes
12.5 Greens Functions 257
1 2
1 r2
u(r, ) = g(cos , sin ) d.
2 0 1 + r 2 2r cos( )
This is the classical Poisson formula (9.4) that we derived from Fourier series.
For Ccpt (Rn ) the pairing (wt , ) is a complex-valued function of t.
The function t wt is differentiable with respect to time if there exists a family
of distributions w
t
t
D
(Rn ) such that
d wt
(wt , ) = , , (12.42)
dt t
for all Ccpt (). Higher derivatives are defined in the same way.
so that
n
t , = (n) (t).
t n
We conclude that
n n
n
t = (1) t . (12.43)
t n x n
Let us try to deduce the fundamental solution for the one-dimensional wave equa-
tion from dAlemberts formula (4.8),
258 12 Distributions
1 1 x+t
u(t, x) = g(x + t) + g(x t) + h( ) d , (12.44)
2 2 xt
1
Wt := (t,t) . (12.45)
2
The time derivatives of Wt are computed from the pairing
t
1
(Wt , ) = dx
2 t
for Ccpt (R). By the fundamental theorem of calculus,
d 1
(Wt , ) = (t) + (t) , (12.46)
dt 2
which shows that
Wt 1
= t + t . (12.47)
t 2
Differentiating again using (12.43) gives
2 Wt 1
= t + t . (12.48)
t 2 2
On the other hand, x-derivatives of Wt are defined by (12.20). In particular,
2 Wt
, := Wt ,
x 2
, := dx
x 2 2 t
1
=
(t)
(t) .
2
12.6 Time-Dependent Fundamental Solutions 259
= t + t . (12.49)
x 2 2
By (12.48) and (12.49), Wt is a distributional solution of the wave equation,
2 2
Wt = 0.
t 2 x 2
1 Wt
g(x + t) + g(x t) = g(x).
2 t
Thus the full convolution formula for the solution reads
Wt
u(t, ) = g + Wt h.
t
12.7 Exercises
For x = 0, f
(x) = |x|1 , but this is not locally integrable. Show that the distribu-
tional derivative is
260 12 Distributions
1
(x) (0) (x)
( f , ) = + d x.
1 |x| |x|1 |x|
12.3 Let H denote the upper half-plane {x2 > 0} R2 . The goal of this problem is
to show that the Laplace equation on H,
u = 0, u(, 0) = g,
G y 1 |y 2 |
(x) = .
r An |x y|n1
(c) Find the resulting solution formula from Theorem 12.10, and show that this gen-
eralizes the mean value formula for harmonic functions obtained in Theorem 9.3.
Chapter 13
The Fourier Transform
(x) := eix ,
and in particular
eix = ||2 eix .
Note that the integral is well defined by the integrability of f , and in fact
f () f L 1 (13.2)
F : f f.
f (x) = ea|x| ,
2
F[D x ]() = (i) (), (13.4)
and
F[x ]() = (i D ) (). (13.5)
Pulling the differentiation outside the integral in the final step is justified by the
smoothness and decay assumptions on .
In Lemma 13.1 we can see Schwartzs motivation for the definition of S. Under
the Fourier transform, smoothness translates to rapid decay, and vice versa. These
properties are balanced in the definition of S, which leads to the following result.
Lemma 13.2 The Fourier transform F maps S(Rn ) S(Rn ).
Proof Suppose that f S. In order to show that f is Schwartz, we need to produce
a bound on the function D f for each , . By (13.4) and (13.5),
D f() = i ||+|| eix x D x f (x) d n x. (13.6)
Rn
To estimate, we set
M N ,, := (1 + |x|2 ) N x D x f ,
(x) := eax
2
d
= 2ax.
dx
Taking the Fourier transform of both sides and applying Lemma 13.1 gives
d
i = 2ai ,
d
264 13 The Fourier Transform
which reduces to
d
= .
d 2a
2
/4a
()
= (0)e
.
Thus,
2 /4a
()
= e .
a
Rn
n
ei x j j ax j d x j
2
=
j=1
n
2j /4a
= e .
j=1
a
Thus
n2
F ea|x| () = e|| /4a
2 2
(13.8)
a
for a > 0.
For f, g S(Rn ), consider the integral
f (x)ei xy g(y) d n x d n y. (13.9)
Rn Rn
13.1 Fourier Transform 265
The integrals over x and y can be taken in either order, yielding the useful identity:
f g d n x = fg d n y (13.10)
Rn Rn
for f, g S(Rn ),
Theorem 13.4 The Fourier transform on S(Rn ) has an inverse F 1 given by
f (x) = (2) n
eix f() d n . (13.11)
Rn
Proof In (13.10) let us set g = eax for a > 0, By (13.8), this implies
2
n2
x 2 /4a
f(y)eay d n y.
2
f (x)e d x=
n
(13.12)
a Rn Rn
On the left-hand side we can use the same argument as in the proof of Lemma 12.1
to show that
n
2 x 2 /4a n
f ( ax)ex /4 d n x
n 2
lim f (x)e d x = lim
2
a0 a Rn a0 Rn
= (2)n f (0).
Because the convergence is not uniform, we will check this carefully. The difference
of the two sides can be estimated by
f(y)eay d n y
2
n
f (y) d y f (y) 1 eay d n y.
2
Rn Rn Rn
Rn
Tw f (y) := f (y + w).
The pairing formula (13.10) suggests that the $L^2$ inner product will behave naturally under the Fourier transform. Indeed, by Theorem 13.4 we can compute
$$\int_{\mathbb{R}^n} \hat f(\xi)\, \overline{\hat g(\xi)}\, d^n\xi = \int_{\mathbb{R}^n}\Big[\int_{\mathbb{R}^n} f(x)\, e^{-i\xi\cdot x}\, d^n x\Big]\, \overline{\hat g(\xi)}\, d^n\xi
= \int_{\mathbb{R}^n}\Big[\int_{\mathbb{R}^n} e^{-i\xi\cdot x}\, \overline{\hat g(\xi)}\, d^n\xi\Big]\, f(x)\, d^n x
= (2\pi)^n \int_{\mathbb{R}^n} f(x)\, \overline{g(x)}\, d^n x,$$
so that
$$\big\langle \hat f, \hat g\big\rangle = (2\pi)^n \langle f, g\rangle. \qquad (13.15)$$
The integral (13.1) defining the Fourier transform does not necessarily converge for $f \in L^2(\mathbb{R}^n)$, but the identity (13.15) makes it possible to define transforms on $L^2$ by taking limits.
Theorem 13.5 (Plancherel's theorem) The Fourier transform extends from $S(\mathbb{R}^n)$ to an invertible map on $L^2(\mathbb{R}^n)$, such that (13.15) holds for all $f, g \in L^2(\mathbb{R}^n)$.
Proof First note that Theorem 7.5 implies that $S(\mathbb{R}^n)$ is dense in $L^2(\mathbb{R}^n)$, because it includes the compactly supported smooth functions. Hence for $f \in L^2(\mathbb{R}^n)$ there exists a sequence of Schwartz functions $\psi_k \to f$ in $L^2$. As a convergent sequence, $\{\psi_k\}$ is automatically Cauchy, i.e.,
$$\lim_{k,m\to\infty} \|\psi_k - \psi_m\|_2 = 0.$$
By (13.15),
$$\|\hat\psi_k - \hat\psi_m\|_2 = (2\pi)^{n/2}\, \|\psi_k - \psi_m\|_2,$$
so $\{\hat\psi_k\}$ is also Cauchy in $L^2$ and therefore converges. This allows us to define
$$\hat f := \lim_{k\to\infty} \hat\psi_k.$$
If $\psi_k \to f$ and $\varphi_m \to g$ in $L^2$, then passing to the limit in
$$\langle \hat\psi_k, \hat\varphi_m\rangle = (2\pi)^n \langle \psi_k, \varphi_m\rangle$$
gives
$$\big\langle \hat f, \hat g\big\rangle = (2\pi)^n \langle f, g\rangle.$$
The same argument can be used to show that $\hat f$ is independent of the choice of approximating sequence.
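A numerical sketch (not from the text): the scaling in (13.15) can be seen in one dimension by approximating the transforms of two sample functions with a DFT; the grid parameters and the functions f and g are arbitrary test choices.

```python
import numpy as np

# Numerical sketch: illustrate (13.15) in one dimension,
#   <f_hat, g_hat> = 2 pi <f, g>,
# approximating the continuous transform on a truncated grid by a DFT.

N, L = 2**12, 60.0
x = -L / 2 + L / N * np.arange(N)
dx = L / N
xi = 2 * np.pi * np.fft.fftfreq(N, d=dx)

def ft(f):
    """Approximate f_hat(xi_k) = integral of f(x) e^{-i xi_k x} dx."""
    return dx * np.exp(-1j * xi * x[0]) * np.fft.fft(f)

f = np.exp(-x**2) * np.cos(3 * x)
g = np.exp(-0.5 * (x - 1)**2)

dxi = 2 * np.pi / L                                  # frequency grid spacing
lhs = np.sum(ft(f) * np.conj(ft(g))) * dxi
rhs = 2 * np.pi * np.sum(f * np.conj(g)) * dx
print(lhs.real, rhs.real)   # the two values should agree closely
```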
Here the word tempered refers to a restriction on the growth at infinity. Because the Schwartz functions decay rapidly, a locally integrable function is essentially required to have a polynomial growth rate at infinity in order to define an element of $S'(\mathbb{R}^n)$.
The definition of continuity of a functional on $S(\mathbb{R}^n)$ depends on a notion of convergence for Schwartz functions. A sequence $\{\psi_k\} \subset S(\mathbb{R}^n)$ converges if the sequences $x^\beta D^\alpha \psi_k$ converge uniformly for each $\alpha, \beta$. To say that $u \in S'(\mathbb{R}^n)$ is continuous means that $(u, \psi_k) \to (u, \psi)$ whenever $\psi_k \to \psi$ in this sense.
For example, every function in $L^p(\mathbb{R}^n)$ defines a tempered distribution for $p \in [1, \infty]$. This follows fairly directly from the fact that $S(\mathbb{R}^n) \subset L^p(\mathbb{R}^n)$ for $p \in [1, \infty]$.
The pairing formula (13.10) gives the prescription for extending $\mathcal{F}$ to the tempered distributions. For $u \in S'(\mathbb{R}^n)$, we define $\hat u$ by
$$(\hat u, \psi) := (u, \hat\psi) \qquad (13.16)$$
for $\psi \in S(\mathbb{R}^n)$. To justify this definition one needs to check that the Fourier transform is continuous as a map $S(\mathbb{R}^n) \to S(\mathbb{R}^n)$. This essentially follows from the calculations in the proof of Lemma 13.2.
As an example, consider the function $u = 1$ as an element of $S'(\mathbb{R}^n)$. For $\psi \in S(\mathbb{R}^n)$,
$$(\hat 1, \psi) := (1, \hat\psi) = \int_{\mathbb{R}^n} \hat\psi(x)\, d^n x = (2\pi)^n \psi(0),$$
by the inversion formula (13.11). Therefore
$$\hat 1 = (2\pi)^n \delta. \qquad (13.17)$$
This is sometimes written formally as
$$(2\pi)^n\delta(\xi) = \int_{\mathbb{R}^n} e^{-i\xi\cdot x}\, d^n x,$$
with the understanding that the integral on the right is not to be taken literally.
The Fourier transform of $\delta$ is a similar calculation. For $\psi \in S$,
$$(\hat\delta, \psi) := (\delta, \hat\psi) = \hat\psi(0) = \int_{\mathbb{R}^n} \psi(x)\, d^n x. \qquad (13.18)$$
Therefore
$$\hat\delta = 1. \qquad (13.19)$$
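A numerical sketch (not from the text): (13.19) can be illustrated by transforming narrow Gaussian approximations of $\delta$; the widths, the frequency window, and the quadrature grid below are arbitrary test choices.

```python
import numpy as np

# Numerical sketch: illustrate delta_hat = 1 (13.19) by transforming the
# narrow Gaussians d_eps(x) = (pi*eps)^(-1/2) exp(-x^2/eps), which approximate
# delta; their transforms exp(-eps*xi^2/4) approach the constant 1 as eps -> 0.

xi = np.linspace(-2, 2, 9)
x = np.linspace(-50, 50, 400001)
dx = x[1] - x[0]

for eps in [1.0, 0.1, 0.01]:
    d_eps = np.exp(-x**2 / eps) / np.sqrt(np.pi * eps)
    d_hat = np.array([np.sum(d_eps * np.exp(-1j * s * x)) * dx for s in xi])
    print(eps, np.max(np.abs(d_hat - 1)))   # decreases toward 0 with eps
```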
Lemma 13.6 For $u \in S'(\mathbb{R}^n)$,
$$\mathcal{F}[D_x^\alpha u] = (i\xi)^\alpha \hat u, \qquad \mathcal{F}[x^\alpha u] = (i D_\xi)^\alpha \hat u.$$
with $a_\alpha \in \mathbb{C}$. According to Lemma 13.6 and (13.19), the Fourier transform of the equation
$$L\Phi = \delta$$
is
$$P(\xi)\, \hat\Phi(\xi) = 1,$$
where
$$P(\xi) := \sum_{|\alpha| \le m} a_\alpha\, (i\xi)^\alpha.$$
If the reciprocal of $P(\xi)$ makes sense as a tempered distribution, then we can set $\hat\Phi(\xi) = 1/P(\xi)$ and take the inverse Fourier transform to construct a fundamental solution $\Phi$ as an element of $S'(\mathbb{R}^n)$.
Because $|\xi|^{-2}$ is not globally integrable, we cannot apply the formula (13.11) directly. A trick to get around this is based on the fact that
$$\int_0^\infty e^{-t|\xi|^2}\, dt = \frac{1}{|\xi|^2}$$
for $\xi \ne 0$. We can pair both sides with a Schwartz function $\psi(\xi)$ and integrate to show that
$$\mathcal{F}^{-1}\big[|\xi|^{-2}\big] = \int_0^\infty \mathcal{F}^{-1}\big[e^{-t|\xi|^2}\big]\, dt. \qquad (13.20)$$
By (13.8) the inverse transform of the Gaussian is $(4\pi t)^{-n/2} e^{-|x|^2/4t}$, and the substitution $s = |x|^2/4t$ gives
$$\int_0^\infty (4\pi t)^{-n/2} e^{-|x|^2/4t}\, dt = \frac{1}{4\pi^{n/2}}\, |x|^{2-n} \int_0^\infty s^{\frac{n}{2}-2} e^{-s}\, ds.$$
In terms of the gamma function (2.17) this calculation gives the fundamental solution
$$\Phi(x) = \frac{1}{4}\pi^{-\frac{n}{2}}\, \Gamma\big(\tfrac{n}{2} - 1\big)\, |x|^{2-n}. \qquad (13.21)$$
This agrees with the formula for $\Phi$ from Theorem 12.8, because
$$A_n = \frac{2\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2})}$$
and
$$\Gamma\big(\tfrac{n}{2} - 1\big) = \frac{\Gamma(\frac{n}{2})}{\frac{n}{2} - 1}.$$
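A numerical sketch (not from the text): the constant in (13.21) can be compared with the normalization $|x|^{2-n}/\big((n-2)A_n\big)$ that is standard for the fundamental solution of the Laplacian (assumed here to be the form appearing in Theorem 12.8).

```python
from math import gamma, pi

# Numerical sketch: compare the constant in (13.21) with the standard
# normalization 1 / ((n - 2) A_n) of the Laplacian's fundamental solution,
# where A_n = 2 pi^(n/2) / Gamma(n/2). (The Theorem 12.8 form is assumed.)

for n in range(3, 9):
    c_fourier = 0.25 * pi**(-n / 2) * gamma(n / 2 - 1)   # constant in (13.21)
    A_n = 2 * pi**(n / 2) / gamma(n / 2)
    c_classical = 1.0 / ((n - 2) * A_n)
    print(n, c_fourier, c_classical, abs(c_fourier - c_classical))
```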
13.3 The Wave Kernel
Following the discussion in Sect. 12.6, we can try to define the wave kernel $W_t$ on $\mathbb{R}^n$ by solving the distributional equations
$$\Big(\frac{\partial^2}{\partial t^2} - \Delta\Big) W_t = 0, \qquad W_0 = 0, \qquad \frac{\partial W_t}{\partial t}\Big|_{t=0} = \delta. \qquad (13.22)$$
Taking the Fourier transform in $x$ reduces (13.22) to an ODE in $t$ for each $\xi$, whose solution is
$$\hat W_t(\xi) = \frac{\sin(t|\xi|)}{|\xi|}. \qquad (13.23)$$
The function $\hat W_t$ is smooth and bounded, and therefore defines a tempered distribution on $\mathbb{R}^n$. The inverse Fourier transform $W_t \in S'(\mathbb{R}^n)$ thus yields a general solution formula for the wave equation on $\mathbb{R}^n$. For initial conditions $g, h \in S(\mathbb{R}^n)$,
$$u(t, \cdot) = \frac{\partial W_t}{\partial t} * g + W_t * h. \qquad (13.24)$$
The direct computation of the inverse Fourier transform of (13.23) is rather tricky, but we can check this formula against the results we already know. For $n = 1$ we have $W_t = \frac{1}{2}\chi_{[-t,t]}$ from the d'Alembert formula. Since this is integrable, the Fourier transform can be computed directly:
$$\hat W_t(\xi) = \frac{1}{2}\int_{\mathbb{R}} \chi_{[-t,t]}(x)\, e^{-i\xi x}\, dx = \frac{1}{2}\int_{-t}^{t} e^{-i\xi x}\, dx = \begin{cases} \dfrac{\sin(t\xi)}{\xi}, & \xi \ne 0, \\[1ex] t, & \xi = 0. \end{cases}$$
For $n = 3$, the Kirchhoff formula from Theorem 4.10 shows that the wave kernel is the distribution defined by
$$(W_t, \psi) := \frac{1}{4\pi t}\int_{\partial B(0;t)} \psi\, dS.$$
Since $\psi(y)$ has rapid decay as $y \to \infty$ and the $x$ integral is restricted to a sphere, we can switch the order of integration and conclude that
$$\hat W_t(\xi) = \frac{1}{4\pi t}\int_{\partial B(0;t)} e^{-i\xi\cdot x}\, dS(x).$$
To compute this surface integral, note that we could rotate the $x$ coordinate without changing the result of the integration. It therefore suffices to consider the case where $\xi$ is parallel to the $x_3$ axis. If we then use the spherical coordinates $(r, \theta, \phi)$ for the $x$ variables, this gives
$$\xi \cdot x = |\xi|\, r\cos\theta, \qquad dS(x) = t^2 \sin\theta\, d\theta\, d\phi,$$
with $r = t$ on the sphere, so that
$$\hat W_t(\xi) = \frac{t}{2}\int_0^{\pi} e^{-i|\xi| t\cos\theta}\, \sin\theta\, d\theta = \frac{\sin(t|\xi|)}{|\xi|}.$$
Hence the Kirchhoff formula agrees with the transform solution (13.23).
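A numerical sketch (not from the text): the surface integral above can be checked for a particular $\xi$ and $t$ by direct quadrature in spherical coordinates; the values of t and xi and the grid resolution below are arbitrary test choices.

```python
import numpy as np

# Numerical sketch: check the n = 3 identity
#   (1/(4 pi t)) * integral over |x| = t of e^{-i xi.x} dS(x) = sin(t|xi|)/|xi|
# by quadrature in spherical coordinates (theta, phi).

t = 1.7
xi = np.array([0.3, -1.1, 0.8])
xi_norm = np.linalg.norm(xi)

theta = np.linspace(0, np.pi, 400)
phi = np.linspace(0, 2 * np.pi, 800, endpoint=False)
TH, PH = np.meshgrid(theta, phi, indexing="ij")
X = t * np.stack([np.sin(TH) * np.cos(PH),
                  np.sin(TH) * np.sin(PH),
                  np.cos(TH)], axis=-1)

integrand = np.exp(-1j * (X @ xi)) * t**2 * np.sin(TH)   # e^{-i xi.x} dS
dth = theta[1] - theta[0]
dph = phi[1] - phi[0]
surface_integral = np.sum(integrand) * dth * dph

lhs = surface_integral / (4 * np.pi * t)
rhs = np.sin(t * xi_norm) / xi_norm
print(lhs, rhs)   # real parts should agree; imaginary part ~ 0
```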
13.4 The Heat Kernel
By analogy with (13.22), the heat kernel $H_t$ is defined as the solution of the distributional equation
$$\Big(\frac{\partial}{\partial t} - \Delta\Big) H_t = 0, \qquad H_0 = \delta. \qquad (13.25)$$
Taking the Fourier transform in $x$ gives
$$\hat H_t(\xi) = e^{-t|\xi|^2}. \qquad (13.26)$$
Because $\hat H_t$ is a Schwartz function for $t > 0$, we can compute the inverse Fourier transform by the direct integral formula (13.11), which gives
$$H_t(x) = (2\pi)^{-n}\int_{\mathbb{R}^n} e^{i\xi\cdot x}\, e^{-t|\xi|^2}\, d^n\xi = (4\pi t)^{-n/2}\, e^{-|x|^2/4t},$$
by (13.8). In Sect. 6.3 we guessed this formula from a calculation in the one-dimensional case. The Fourier transform allows for a systematic derivation.
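A numerical sketch (not from the text): the closed form can be checked in one dimension by quadrature of the inversion integral; the time t, the frequency window, and the evaluation points below are arbitrary test choices.

```python
import numpy as np

# Numerical sketch: recover the one-dimensional heat kernel from its Fourier
# transform (13.26) by quadrature of the inversion integral (13.11), and
# compare with the Gaussian (4 pi t)^(-1/2) e^{-x^2/4t}.

t = 0.3
xi = np.linspace(-60, 60, 120001)
dxi = xi[1] - xi[0]
H_hat = np.exp(-t * xi**2)

x = np.linspace(-4, 4, 9)
H_num = np.array([np.sum(np.exp(1j * s * xi) * H_hat) * dxi for s in x]) / (2 * np.pi)
H_exact = np.exp(-x**2 / (4 * t)) / np.sqrt(4 * np.pi * t)

print(np.max(np.abs(H_num.real - H_exact)))   # should be very small
```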
13.5 Exercises
13.1 Let $H \subset \mathbb{R}^2$ denote the upper half space $\{x_2 > 0\}$. The Poisson kernel on $H$ is the distributional solution of the equation
$$\Delta P = 0, \qquad P|_{x_2 = 0} = \delta.$$
$$\hat P(\xi, x_2) = e^{-x_2 |\xi|}.$$
(d) For $f \in S(\mathbb{R})$, use $P$ to write an integral formula for the solution of the Laplace problem on $H$:
$$\Delta u = 0, \qquad u|_{x_2 = 0} = f.$$
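A numerical sketch (not from the text): the transform above can be inverted numerically and compared with the classical half-plane Poisson kernel $x_2/\big(\pi(x_1^2 + x_2^2)\big)$, quoted here as a known closed form for comparison rather than taken from the text; the grids and sample points are arbitrary test choices.

```python
import numpy as np

# Numerical sketch: invert P_hat(xi, x2) = e^{-x2 |xi|} numerically in xi
# and compare with the classical half-plane Poisson kernel
#   x2 / (pi (x1^2 + x2^2)).

x2 = 0.5
xi = np.linspace(-200, 200, 400001)
dxi = xi[1] - xi[0]
P_hat = np.exp(-x2 * np.abs(xi))

for x1 in [-1.0, 0.0, 0.7, 2.0]:
    P_num = np.sum(np.exp(1j * xi * x1) * P_hat) * dxi / (2 * np.pi)
    P_exact = x2 / (np.pi * (x1**2 + x2**2))
    print(x1, P_num.real, P_exact)   # the two columns should agree
```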
Show that
$$c_k[f] = \hat f(k).$$
(b) Obtain the summation formula by comparing f to its Fourier series expansion
at x = 0.
13.3 Recall that the heat equation on $\mathbb{T}$ was solved by Fourier series in Theorem 8.13.
(a) Use the solution formula (8.44) to show that the heat kernel on $\mathbb{T}$ is given by the series
$$h_t(x) := \frac{1}{2\pi} \sum_{k=-\infty}^{\infty} e^{-k^2 t + ikx}.$$
(b) Use the Poisson summation formula from Exercise 13.2 to show that the periodic heat kernel $h_t$ and the heat kernel $H_t$ on $\mathbb{R}$ are related by averaging:
$$h_t(x) = \sum_{m=-\infty}^{\infty} H_t(x + 2\pi m)$$
for $t > 0$. (Note that this shows $h_t(x) > 0$ for all $x \in \mathbb{T}$, $t > 0$, which is not clear from the formula in (a).)
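A numerical sketch (not from the text): the two expressions in this exercise can be compared at a few sample points by truncating both series; the truncation lengths, the value of t, and the sample points are arbitrary test choices.

```python
import numpy as np

# Numerical sketch: compare the two expressions for the periodic heat kernel
# on T = R / 2 pi Z at a few points,
#   Fourier series: (1/2pi) sum_k e^{-k^2 t + i k x}
#   image sum:      sum_m H_t(x + 2 pi m),  H_t(y) = e^{-y^2/4t}/sqrt(4 pi t).

t = 0.2
k = np.arange(-200, 201)
m = np.arange(-50, 51)

for x in [0.0, 1.0, np.pi]:
    fourier_side = np.sum(np.exp(-k**2 * t + 1j * k * x)).real / (2 * np.pi)
    y = x + 2 * np.pi * m
    image_side = np.sum(np.exp(-y**2 / (4 * t)) / np.sqrt(4 * np.pi * t))
    print(x, fourier_side, image_side)   # the two values should agree
```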
The Schrödinger equation
$$\frac{\partial u}{\partial t} - i\Delta u = 0, \qquad u|_{t=0} = g,$$
was introduced in Exercise 4.7.
(a) Assuming that $g \in S(\mathbb{R}^n)$, find a formula for the spatial Fourier transform $\hat u(t, \xi)$.
(b) Show that the result from Exercise 4.7,
$$\int_{\mathbb{R}^n} |u(t, x)|^2\, d^n x = \int_{\mathbb{R}^n} |g|^2\, d^n x,$$
follows from Plancherel's theorem.
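A numerical sketch (not from the text): a DFT-based evolution illustrates the conservation law in (b); the multiplier $e^{-it|\xi|^2}$ used below is the natural candidate for part (a) and is an assumption here, as are the grid parameters and the initial data.

```python
import numpy as np

# Numerical sketch: evolve the free Schrodinger equation in one dimension
# with the Fourier multiplier u_hat(t, xi) = e^{-i t xi^2} g_hat(xi)
# (assumed form, matching part (a)) and check that the L^2 norm is conserved.

N, L = 2**12, 80.0
x = -L / 2 + L / N * np.arange(N)
dx = L / N
xi = 2 * np.pi * np.fft.fftfreq(N, d=dx)

g = np.exp(-x**2) * np.exp(2j * x)        # sample Schwartz-class initial data
g_hat = np.fft.fft(g)

for t in [0.0, 0.5, 2.0, 10.0]:
    u = np.fft.ifft(np.exp(-1j * t * xi**2) * g_hat)
    print(t, np.sum(np.abs(u)**2) * dx)   # should be the same for every t
```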
In this section we will develop some implications of the completeness axiom for R
which are referenced in the text.
The fundamental result from which the others follow is the equivalence of com-
pactness and sequential compactness for subsets of Rn . Recall from Sect. 11.6 that
a set A is sequentially compact if every sequence within A contains a subsequence
converging to a limit in A. The equivalence was first proven by Bernard Bolzano in
the early 19th century, and later rediscovered by Karl Weierstrass.
exists by the completeness axiom. The sequence $\{b_n\}$ is decreasing, because the supremum is taken over successively smaller sets, and also bounded by the hypothesis on $A$. Therefore the number
$$\beta := \inf_{n \in \mathbb{N}} b_n$$
is well defined. We claim that for every $\varepsilon > 0$ the interval $(\beta - \varepsilon, \beta + \varepsilon)$ contains $x_k$ for infinitely many $k$. If this were not the case, then for some $n$ we would have $x_k \notin (\beta - \varepsilon, \beta + \varepsilon)$ for all $k \ge n$. This would imply either $b_n \le \beta - \varepsilon$ or $b_n \ge \beta + \varepsilon$, both of which are impossible by the definition of $\beta$.
This proves the existence of a subsequence converging to $\beta$. The fact that $A$ is closed implies $\beta \in A$, so this completes the argument that a closed bounded subset of $\mathbb{R}$ is sequentially compact.
To extend this argument to higher dimensions, consider a sequence $\{x_k\}$ in a compact subset $A \subset \mathbb{R}^n$. The sequence of first coordinates of the $x_k$ is a bounded sequence in $\mathbb{R}$, so the above argument yields a subsequence such that the first coordinates converge. We can then restrict our attention to this subsequence and apply the same reasoning to the second coordinate, and so on. After $n$ steps this procedure produces a subsequence which converges to an element of $A$.
Bolzano used sequential compactness to prove the following result, which serves
as the foundation for applications of calculus to optimization problems.
Proof Assume that $f: K \to \mathbb{R}$ is continuous. We will show first that $f$ is bounded. Suppose there is a sequence $x_j \in K$ such that $|f(x_j)| \to \infty$. By Theorem A.1, after restricting to a subsequence if necessary, we can assume that $x_j \to w \in K$. Continuity implies $f(x_j) \to f(w)$, but this is impossible if $|f(x_j)| \to \infty$. Therefore a continuous function on $K$ is bounded.
Since $f(K)$ is a bounded subset of $\mathbb{R}$, $b := \sup f(K)$ exists in $\mathbb{R}$ by the completeness axiom. To prove that $f$ achieves a maximum, we need to show $b \in f(K)$. If $b \notin f(K)$ then the function
$$h(x) := \frac{1}{b - f(x)}$$
would be continuous on $K$ and unbounded, contradicting the first part of the argument.
By the definition of Cauchy, for $\varepsilon > 0$ there exists $N$ sufficiently large such that
$$|x_j - x_k| < \varepsilon$$
for all $j, k \ge N$. We can also choose an element $x_l$ in the subsequence such that $l \ge N$ and
$$|x_l - w| < \varepsilon.$$
The triangle inequality then gives
$$|x_j - w| \le |x_j - x_l| + |x_l - w| < 2\varepsilon$$
for all $j \ge N$. Since the choice of $\varepsilon$ was arbitrary, this shows that the full sequence converges to $w$.
Index

A
acoustic waves, 59
action functional, 232
advection, 27, 95
almost everywhere, 113

B
ball, 12
Banach space, 119
Bessel
  functions, 80
  inequality, 121, 133
  zeros, 84
Bolzano-Weierstrass theorem, 275
boundary conditions
  Dirichlet, 5
  Neumann, 5
  self-adjoint, 124
boundary point, 12
Burgers equation, 43

C
calculus of variations, 232
Cauchy-Schwarz inequality, 111
Cauchy sequence, 118
characteristic, 27, 33
characteristic function, 113
classical solution, 1
closed, 13, 119
closure, 13, 186
coercive, 209
compact, 14
  sequential, 222
  sequentially, 222, 275
  support, 14
completeness, 9, 119, 276
conduction, 95
conjugate, 10
connected, 13
continuity equation, 27, 33
convection, 95
convergence
  pointwise, 135
  uniform, 135, 139
convolution, 101, 137, 247
Coulomb, 237

D
Darboux, 61
delta function, 239, 241
dense, 117
Dirichlet, 136
  eigenvalues, 215, 226
  energy, 203
  kernel, 137
Dirichlet's principle, 205
dispersive estimate, 116
distribution, 237, 240
  tempered, 265
distributional derivative, 243
divergence-free, 33
divergence theorem, 20
domain, 13
domain of dependence, 54
Duhamel's method, 54

E
eigenfunction, 75
eigenvalue, 75

Q
quasilinear equation, 36

R
range of influence, 56, 64
Rankine-Hugoniot condition, 182
rapidly decreasing, 260
rarefaction wave, 201
Rayleigh
  principle, 216
  quotient, 216
Rayleigh-Ritz method, 229
reaction-diffusion equations, 107
reaction term, 42
regularity
  elliptic, 212
Rellich's theorem, 217, 222
resonance, 58

S
Schrödinger equation, 71

T
telegraph equation, 70
tempered distribution, 265
test function, 175
traffic equation, 37
transport equation, 27

V
variation of parameters, 54
vector space, 109

W
wave equation, 2, 47, 83, 269
  acoustic, 60
  damped, 91
wave kernel, 257, 269
weak derivative, 176
weak solution, 1, 51, 175
Weierstrass, 275
well posed, 5, 165