
Partial Differential Equations 2020

Solutions to CW 2
Gustav Holzegel
April 11, 2020

1 Week 5, Problem 5
(Fritz John, 4.2 (3)) Prove the weak maximum principle for solutions
of the two-dimensional elliptic equation

Lu = a u_xx + 2b u_xy + c u_yy + 2d u_x + 2e u_y = 0

where a, b, c, d, e are continuous functions of x and y in Ω̄ and ac − b^2 > 0
(ellipticity) as well as a > 0 hold (on Ω̄). HINT: Prove it first under the
strict condition Lu > 0, then use u + εv for
v = exp[M(x − x_0)^2 + M(y − y_0)^2] with appropriate M, x_0, y_0.

Assume first the strict inequality Lu > 0. Suppose there is an interior
maximum at (x_0, y_0) ∈ Ω. Then u_x(x_0, y_0) = u_y(x_0, y_0) = 0 and the Hessian

Hess u(x_0, y_0) = [ u_xx(x_0, y_0)   u_xy(x_0, y_0)
                     u_xy(x_0, y_0)   u_yy(x_0, y_0) ]

is negative semi-definite (which means its eigenvalues λ_i ≤ 0), i.e.

u_xx(x_0, y_0) + u_yy(x_0, y_0) ≤ 0    (1)

u_xx(x_0, y_0) u_yy(x_0, y_0) − (u_xy(x_0, y_0))^2 ≥ 0    (2)
On the other hand, the condition Lu(x_0, y_0) > 0 gives

a u_xx(x_0, y_0) + 2b u_xy(x_0, y_0) + c u_yy(x_0, y_0) > 0 .    (3)

We see that u_xx(x_0, y_0) > 0 is already inconsistent with (1) and (2), so u_xx(x_0, y_0) ≤ 0
and similarly u_yy(x_0, y_0) ≤ 0. But then we can write (3) as

[√a √(−u_xx) − (|b|/√a) √(−u_yy)]^2 + (c − b^2/a)(−u_yy)
+ 2|b| √(u_xx u_yy) − 2b u_xy < 0    (all terms evaluated at (x_0, y_0)),    (4)

which yields the desired contradiction: the first two terms are non-negative since
ac − b^2 > 0 and a > 0, and 2|b| √(u_xx u_yy) − 2b u_xy ≥ 0 by (2). So in case of the
strict inequality Lu > 0 the maximum can only be attained on the boundary. For the
non-strict case look at (this is easier than the hint)

v(x, y) = u(x, y) + ε e^{λx}

and compute

Lv ≥ ε e^{λx} (a λ^2 + 2d λ) > 0

for sufficiently large λ (now fixed) and any ε > 0.


Applying the maximum principle to v yields

max_{Ω̄} ( u(x, y) + ε e^{λx} ) = max_{∂Ω} ( u(x, y) + ε e^{λx} ) .

We deduce

max_{Ω̄} u(x, y) + ε min_{Ω̄} e^{λx} ≤ max_{∂Ω} u(x, y) + ε max_{∂Ω} e^{λx} .

Since Ω is bounded, max_{Ω̄} e^{λx} is bounded and we can let ε → 0 to obtain

max_{Ω̄} u(x, y) ≤ max_{∂Ω} u(x, y) .

The reverse inequality is trivial and hence the maximum principle has been
established.
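As a sanity check on the perturbation used above, here is a minimal symbolic sketch (not part of the original solution) using sympy: it verifies that L(e^{λx}) = e^{λx}(aλ^2 + 2dλ) for one illustrative choice of coefficient functions; these coefficients are our own hypothetical example, chosen only so that a > 0 and ac − b^2 > 0 hold.

import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
# illustrative (hypothetical) coefficients with a > 0 and a*c - b**2 > 0 everywhere
a, b, c, d, e = 2 + x**2, sp.Rational(1, 2)*x*y, 2 + y**2, sp.sin(x), sp.cos(y)

def L(u):
    # Lu = a u_xx + 2b u_xy + c u_yy + 2d u_x + 2e u_y
    return (a*sp.diff(u, x, 2) + 2*b*sp.diff(u, x, y) + c*sp.diff(u, y, 2)
            + 2*d*sp.diff(u, x) + 2*e*sp.diff(u, y))

v = sp.exp(lam*x)
print(sp.simplify(L(v)/v))   # a*lambda**2 + 2*d*lambda; positive for lambda large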

2 Week 5, Problem 6
(Harnack’s inequality) Let u ∈ C^2 for |x| < a and u ∈ C^0 for |x| ≤ a.
Let also u ≥ 0 and ∆u = 0 hold for |x| < a (in other words, u is a
non-negative harmonic function). Show that for |ξ| < a

a^{n−2} (a − |ξ|) / (a + |ξ|)^{n−1} · u(0) ≤ u(ξ) ≤ a^{n−2} (a + |ξ|) / (a − |ξ|)^{n−1} · u(0) .

Discuss the meaning of this estimate. What can you say for arbitrary
regions?

We use the formula proven in lectures

u(ξ) = ∫_{|x|=a} [ a^{d−2} (a^2 − |ξ|^2) ] / [ a^{d−1} ω_d |x − ξ|^d ] · u(x) dS_x

together with the easily established inequalities for x with |x| = a (draw a
picture!)
a^{d−2} (a − |ξ|) / (a + |ξ|)^{d−1} ≤ a^{d−2} (a^2 − |ξ|^2) / |x − ξ|^d ≤ a^{d−2} (a + |ξ|) / (a − |ξ|)^{d−1} .

For the upper bound this produces
u(ξ) = ∫_{|x|=a} [ a^{d−2} (a^2 − |ξ|^2) ] / [ a^{d−1} ω_d |x − ξ|^d ] · u(x) dS_x
     ≤ a^{d−2} (a + |ξ|) / (a − |ξ|)^{d−1} ∫_{|x|=a} 1 / (a^{d−1} ω_d) · u(x) dS_x
     = a^{d−2} (a + |ξ|) / (a − |ξ|)^{d−1} · u(0) ,

with the last bound following from the mean value property of harmonic functions.
The lower bound is of course entirely analogous.
Note that this implies that in B_a(0) we have

(1/C) u(y) ≤ u(x) ≤ C u(y)

for all x, y ∈ B_a(0), with the constant depending only on how close x and y are
to the boundary of B_a(0). In particular, on any compact subset V of the open
ball B_a(0) we can estimate the maximum of u by a constant (depending only on V
and a) times the minimum of u in V. This is a manifestation of the averaging
effects of the Laplacian.

See the revision sheet for arbitrary regions (the idea is of course to cover
them with balls!).
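The following small numerical sketch (an illustration of ours, not part of the solution) spot-checks the Harnack bounds in dimension d = 2, where a^{d−2} = 1, for the particular non-negative harmonic function u(x, y) = x + a on the disc of radius a.

import numpy as np

a = 1.0
u = lambda p: p[0] + a            # harmonic and >= 0 on the closed disc of radius a
u0 = u(np.zeros(2))

rng = np.random.default_rng(0)
for _ in range(1000):
    xi = rng.uniform(-a, a, size=2)
    r = np.linalg.norm(xi)
    if r >= a:
        continue
    lower = (a - r) / (a + r) * u0    # a^{d-2} = 1 for d = 2
    upper = (a + r) / (a - r) * u0
    assert lower <= u(xi) <= upper + 1e-12
print("Harnack bounds hold at all sampled points.")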

3 Week 6, Problem 3
(Best constant in Poincaré’s inequality; F. John Chapter 5) Show
that if there exists a function u ∈ C^2(Ω̄) vanishing on ∂Ω for which
the quotient

∫_Ω |∇u|^2 / ∫_Ω u^2

reaches its smallest value λ, then u is an eigenfunction to the eigenvalue
λ, i.e. ∆u + λu = 0 in Ω. In fact λ must be the smallest eigenvalue
belonging to an eigenfunction in C^2(Ω̄).
Fix a φ ∈ C_0^∞(Ω). The function

Ψ : (−ε, ε) ∋ t ↦ ∫_Ω |∇(u + tφ)|^2 / ∫_Ω (u + tφ)^2

with u the minimiser from the assumptions is well defined for sufficiently small ε > 0
(as u + tφ is non-trivial) and by the general result of Week 5 (on interchanging
the derivative and the integral) Ψ is also differentiable. The assumptions of
the problem imply that Ψ has a minimum at t = 0 and Ψ(0) = λ > 0. Hence
dΨ/dt |_{t=0} = 0 and we compute

0 = d/dt [ ∫_Ω |∇(u + tφ)|^2 / ∫_Ω (u + tφ)^2 ] |_{t=0}
  = 2 ∫_Ω ⟨∇u, ∇φ⟩ / ∫_Ω u^2 − 2 ( ∫_Ω |∇u|^2 ∫_Ω uφ ) / ( ∫_Ω u^2 )^2
  = −2 ∫_Ω (∆u + λu) φ / ∫_Ω u^2    (5)

where we have integrated by parts using that φ vanishes on the boundary of Ω.
Since φ ∈ C_0^∞(Ω) was arbitrary, the integral ∫_Ω (∆u + λu) φ has to vanish for any test
function φ ∈ C_0^∞(Ω) and this implies (since ∆u + λu is continuous) that

∆u + λu = 0 in Ω.

If we had a smaller eigenvalue, i.e. ∆w + µw = 0 for µ < λ with w ∈ C^2(Ω̄)
non-trivial and vanishing on ∂Ω, we would have (multiplying the equation by w and
integrating by parts) that

∫_Ω |∇w|^2 = µ ∫_Ω w^2 ,

which contradicts the fact that the minimum value of the quotient in the exercise
is λ.
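As an illustration (ours, and only in the simplest setting Ω = (0, π) ⊂ R, where the smallest Dirichlet eigenvalue of −d^2/dx^2 is 1 with eigenfunction sin x), the following finite-difference sketch shows the minimiser of the discrete Rayleigh quotient coinciding with the first eigenfunction; the discretisation parameters are arbitrary.

import numpy as np

n = 400
h = np.pi / (n + 1)

# standard second-difference approximation of -u'' with zero Dirichlet boundary values
A = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
evals, evecs = np.linalg.eigh(A)

u = evecs[:, 0]                      # discrete minimiser of the Rayleigh quotient
rayleigh = (u @ A @ u) / (u @ u)     # discrete analogue of ∫|∇u|^2 / ∫ u^2
print(evals[0], rayleigh)            # both close to the smallest eigenvalue 1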

4 Week 6, Problem 4
Let n = 3 and Ω be the ball |x| < π. Show that a solution u of
∆u + u = w (x) with vanishing boundary values can only exist if

∫_Ω ( sin|x| / |x| ) w(x) dx = 0

An easy computation shows that the homogeneous adjoint problem is given by
∆v + v = 0 and v = 0 on ∂Ω. Going to polar coordinates we easily check that

∆( sin|x| / |x| ) = (1/r^2) ∂_r ( r^2 ∂_r ( sin r / r ) ) = (1/r^2) ∂_r ( −sin r + r cos r ) = − sin r / r

and hence that sin|x| / |x| is a solution to the homogeneous adjoint problem. (Note
sin|x| / |x| is smooth at the origin and vanishes on the boundary r = π.) By the
Fredholm alternative, the original inhomogeneous problem can only have a so-
lution if the right hand side w(x) is L2 -orthogonal to the kernel of the adjoint
problem and this yields precisely the condition stated.
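The radial computation above can be double-checked symbolically; the following sympy sketch (ours, purely illustrative) confirms that the radial Laplacian of sin r / r equals −sin r / r away from the origin.

import sympy as sp

r = sp.symbols('r', positive=True)
v = sp.sin(r) / r
# radial Laplacian in three dimensions: (1/r^2) d/dr ( r^2 dv/dr )
radial_laplacian = sp.diff(r**2 * sp.diff(v, r), r) / r**2
print(sp.simplify(radial_laplacian + v))   # prints 0, i.e. Delta v + v = 0 for r > 0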
Remark: To see that ũ(x) = sin|x| / |x| is a legitimate solution to the adjoint
problem, we need to check that ũ ∈ H^1_0(Ω). First note that sin|x| / |x| is smooth on
R^3, including the origin, so that certainly ũ ∈ H^1(Ω). Note that the function
vanishes at the boundary r = π. Let χ̃ : R_+ → R be a smooth function such
that χ̃(r) = 0 when r ≤ 1 and χ̃(r) = r when r ≥ 2. Define χ_ε(s) = ε χ̃(|s|/ε)
for s ≠ 0 and χ_ε(0) = 0, and note this is a smooth function on R (it vanishes
identically for |s| ≤ ε).
Set ũ_ε = χ_ε(ũ). We claim that ũ_ε ∈ C_c^∞(Ω) for each ε > 0. As a composition
of smooth functions, ũ_ε is smooth. Note that by definition ũ_ε = 0 whenever
|ũ| < ε. Since ũ vanishes on the boundary and Ω is pre-compact, this will be
true in a neighbourhood of ∂Ω. In fact, for ε small the annulus {π − ε ≤ |x| < π}
is contained in {x ∈ Ω : |ũ| < ε} (since there sin r = sin(π − r) ≤ π − r ≤ ε, so
ũ ≤ ε/(π − ε) < ε), which directly implies that supp(ũ_ε) ⊂ B̄_{π−ε}(0), which is
compactly contained in Ω. Finally note that in the region {x ∈ Ω : ũ(x) > 2ε} we
have ũ_ε = ũ, and since sin r / r ≥ (π − r)/π on [0, π] (easily checked), this gives
{x ∈ Ω : ũ_ε(x) ≠ ũ(x)} ⊂ {π − Cε ≤ |x| ≤ π} for a constant C independent of ε
(one may take C = 2π).
To show that indeed ũ ∈ H^1_0(Ω), it suffices to prove that ũ_ε → ũ in H^1(Ω).
Using that |χ_ε'| ≤ C and the fact that ũ_ε and ũ agree everywhere except on an
annulus of width O(ε) around the boundary, we find

‖ũ − ũ_ε‖_{H^1(Ω)} ≤ C sup_{x ∈ Ω \ B_{π−Cε}} ( |ũ(x)| + |∇ũ(x)| ) [ L^n(Ω \ B_{π−Cε}) ]^{1/2} → 0 as ε → 0.    (6)
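For concreteness, here is a small numerical sketch (ours; the particular smooth step used is just one possible choice, not the one intended in the solution) of a cutoff χ̃ with χ̃(r) = 0 for r ≤ 1 and χ̃(r) = r for r ≥ 2, and of the resulting χ_ε, which vanishes below ε and is the identity above 2ε.

import numpy as np

def smooth_step(t):
    # smooth transition: equals 0 for t <= 0 and 1 for t >= 1
    f = lambda s: np.where(s > 0, np.exp(-1.0 / np.maximum(s, 1e-300)), 0.0)
    return f(t) / (f(t) + f(1.0 - t))

def chi_tilde(r):
    # smooth, vanishes for r <= 1 and equals r for r >= 2
    return r * smooth_step(r - 1.0)

eps = 0.1
chi_eps = lambda s: eps * chi_tilde(np.abs(s) / eps)   # here ũ >= 0, so no sign needed
print(chi_eps(np.array([0.05, 0.15, 0.5])))            # 0 below eps, identity above 2*eps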

5 Week 6, Problem 6
Prove the following basic version of the Banach Alaoglu theorem
(which we used in connection with the difference quotients): Let (uk )
be a bounded sequence in a separable Hilbert space H, i.e. ‖u_k‖_H ≤ C.
Then there exists a subsequence which converges weakly in H. Hint:
Use the following outline

1. Pick an ONB (e_k) and use a diagonal argument to show that for
   a subsequence of the (u_k), denoted (u_n^{(n)}) (arising from a Cantor
   diagonal argument), we have that

   ⟨u_n^{(n)}, e_k⟩ → v_k ∈ R holds for all e_k.

2. Show that Σ_{k=1}^∞ |v_k|^2 < ∞ and hence v = Σ_k v_k e_k ∈ H.

3. Show that u_n^{(n)} ⇀ v.

Step 1: We use a diagonal argument to show that for a subsequence of the
(u_k), denoted (u_n^{(n)}) (arising from a Cantor diagonal argument), we have that

⟨u_n^{(n)}, e_k⟩ → v_k ∈ R

holds for all e_k. Indeed, by Bolzano–Weierstrass we can find a subsequence
(u_n^{(1)}) of (u_n) such that ⟨u_n^{(1)}, e_1⟩ → v_1 as n → ∞ for some v_1 ∈ R. Next we
choose a subsequence of (u_n^{(1)}), denoted (u_n^{(2)}), which is such that ⟨u_n^{(2)}, e_2⟩ → v_2
as n → ∞ for some v_2 ∈ R. Continuing in this way and then choosing finally
the diagonal sequence (u_n^{(n)}) we have that

⟨u_n^{(n)}, e_k⟩ → v_k

for all k.
Step 2: We next show that Σ_{k=1}^∞ |v_k|^2 < ∞. To do this we compute

Σ_{k=1}^K |v_k|^2 = lim_{n→∞} Σ_{k=1}^K v_k ⟨u_n^{(n)}, e_k⟩ = lim_{n→∞} ⟨u_n^{(n)}, Σ_{k=1}^K v_k e_k⟩ ≤ limsup_{n→∞} ‖u_n^{(n)}‖ ( Σ_{k=1}^K |v_k|^2 )^{1/2}
and hence

Σ_{k=1}^K |v_k|^2 ≤ C^2

for all K and therefore (v_k) ∈ ℓ^2, and hence v = Σ_{k=1}^∞ v_k e_k ∈ H (recall the
sum converges iff (v_k) ∈ ℓ^2).
Step 3: We show that u_n^{(n)} ⇀ v, i.e. that for any φ = Σ_k φ_k e_k ∈ H we have
that

lim_{n→∞} ⟨u_n^{(n)} − v, φ⟩ = lim_{n→∞} ⟨u_n^{(n)} − v, Σ_{k=1}^∞ φ_k e_k⟩ = 0 .

To see this, let ε > 0 be prescribed. We first fix K large (independently of n)
such that

Σ_{k=K+1}^∞ |⟨u_n^{(n)} − v, φ_k e_k⟩| ≤ ‖u_n^{(n)} − v‖ ‖ Σ_{k=K+1}^∞ φ_k e_k ‖ ≤ 2C ( Σ_{k=K+1}^∞ |φ_k|^2 )^{1/2} < ε/2 .

Next we choose n large such that

| Σ_{k=1}^K ⟨u_n^{(n)} − v, φ_k e_k⟩ | = | Σ_{k=1}^K φ_k ⟨u_n^{(n)} − v, e_k⟩ | < ε/2 .

This is possible because K has been fixed and every summand in this finite sum
goes to zero in view of ⟨u_n^{(n)} − v, e_k⟩ → 0 for all k. Adding the two estimates
finishes the proof.

Remark. The above proof is for a Hilbert space over R (which covers the
spaces used in lectures). The proof for C is entirely analogous.
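As a concrete illustration (ours, not part of the proof): in ℓ^2 the orthonormal basis vectors e_n form a bounded sequence with no norm-convergent subsequence, yet ⟨e_n, φ⟩ → 0 for every fixed φ, i.e. e_n ⇀ 0. The sketch below truncates ℓ^2 to finitely many coordinates purely for the computation.

import numpy as np

N = 10_000                         # truncation dimension, chosen arbitrarily
phi = 1.0 / np.arange(1, N + 1)    # a fixed element of l^2 (coefficients 1/k)

for n in [1, 10, 100, 1000]:
    e_n = np.zeros(N)
    e_n[n - 1] = 1.0
    print(n, e_n @ phi)            # equals 1/n -> 0, while ||e_n|| = 1 for all n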
