Ode 10 Lecture 3
Assumption: For I = [ξ, ξ + a], J = [η − b, η + b], we have f (·, ·) ∈ C 0 (I × J), |f |C 0 (I×J) ≤ M for some
M > 0, a > 0, b > 0.
In order to be able to invoke the version of Picard's theorem we proved last time, we need to extend fε(x, y) to
all values of y ∈ ℝ. This we can easily do by extending fε(x, y) as a constant beyond the values y = η ± bM/(M+1).
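The extension-and-smoothing step can be sketched numerically. The following is a minimal sketch, assuming fε is obtained by averaging f in the y-variable over a window of width comparable to ε after the constant extension; the window-averaging stand-in for (1.2) and all function names are our own illustration, not the notes' definition:

```python
import numpy as np

def extend_const(f, y_lo, y_hi):
    """Extend f(x, y) constantly in y beyond the strip [y_lo, y_hi]."""
    return lambda x, y: f(x, np.clip(y, y_lo, y_hi))

def mollify_in_y(f, eps, n=401):
    """Average f(x, .) over a window of width 2*eps (a stand-in for (1.2))."""
    def f_eps(x, y):
        zs = np.linspace(-eps, eps, n)
        return np.mean([f(x, y + z) for z in zs])
    return f_eps

# Model field f(x, y) = sqrt(|y|): continuous and bounded by M = 1 on |y| <= 1,
# but not Lipschitz at y = 0.
f = lambda x, y: np.sqrt(np.abs(y))
fe = mollify_in_y(extend_const(f, -1.0, 1.0), eps=0.1)

# The smoothing preserves the C^0 bound |f_eps| <= M = 1 ...
print(fe(0.0, 0.0) <= 1.0)
# ... and difference quotients in y are now finite even at y = 0:
h = 1e-4
print(abs(fe(0.0, h) - fe(0.0, 0.0)) / h < 10.0)
```

This illustrates why the smoothed field gains the Lipschitz bound (1.3) at the price of a constant growing like 1/ε.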
Then we have the following simple
Lemma 1.4. Under the above restrictions on y, ε > 0, the expression (1.2) is well-defined. Furthermore,
extending fε(x, y) to all values of y as above, we have the bound
|fε(x, y)|C⁰(R) ≤ M,
where R = I × ℝ, as long as ε < b/(2(M+1)); further, we have the Lipschitz bound
(1.3) |fε(x, y) − fε(x, ȳ)| ≤ (CM/ε) |y − ȳ|
The proof of the lemma is left as a simple exercise.
We can now apply Picard's theorem, as proved last time, to the initial value problems
(1.4) y′ = fε(x, y), y(ξ) = η.
Note that here we use L = CM/ε. We obtain unique C¹-regular solutions yε(x) on I = [ξ, ξ + a]. It is then
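Picard's theorem produces yε as the limit of Picard iterates of the associated integral equation. The sketch below is our own illustration of that iteration, with a globally Lipschitz right-hand side standing in for fε:

```python
import numpy as np

def picard_iterates(f, xi, eta, a, n_grid=1000, n_iter=30):
    """Picard iteration y_{k+1}(x) = eta + int_xi^x f(t, y_k(t)) dt on [xi, xi+a]."""
    xs = np.linspace(xi, xi + a, n_grid)
    y = np.full_like(xs, eta)
    for _ in range(n_iter):
        integrand = f(xs, y)
        # cumulative trapezoid rule for the integral from xi to x
        y = eta + np.concatenate(([0.0],
              np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(xs))))
    return xs, y

# Lipschitz example f(x, y) = y with y(0) = 1: the iterates converge to e^x.
xs, y = picard_iterates(lambda x, y: y, 0.0, 1.0, 1.0)
print(abs(y[-1] - np.e))  # small
```

The contraction property behind Picard's theorem is exactly why the iteration converges here; for fε the Lipschitz constant CM/ε makes the argument work on a short interval, uniformly in the initial data.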
intuitively clear that one way to potentially obtain a solution to the original problem (1.1) is to let ε → 0 and
look for a limit to yε (x). The key to extract such a limit comes from a crucial compactness property of the
family of functions {yε (x)}ε>0 . To prepare this, we first state the simple
Lemma 1.5. For x ∈ Ĩ (defined in the statement of Theorem 1.1), we have the uniform bounds
|yε(x) − η| ≤ bM/(M+1), |yε′(x)| ≤ M
The proof is immediate; for example, using (1.4), we get
|yε(x) − η| ≤ ∫_ξ^x M ds ≤ M |x − ξ|,
from which the first bound follows. The second bound is immediate.
The preceding lemma implies that the family of functions {yε(x)}ε>0 is uniformly continuous on Ĩ. The
desired compactness is then a consequence of the following very general fact:
Proposition 1.6. (Arzela-Ascoli) Let K be a compact metric space, and A ⊂ C⁰(K) a non-empty subset.
Then A is pre-compact precisely if the following two conditions are satisfied:
(i) A is uniformly bounded: ∃C > 0 such that |f|C⁰(K) ≤ C ∀f ∈ A;
(ii) A is uniformly continuous: ∀ε > 0, ∃δε > 0 such that ∀x, y ∈ K with ρ(x, y) < δε, we have
|f(x) − f(y)| < ε ∀f ∈ A
Assuming this for now, we finish the proof of Peano's theorem as follows: Letting A = {yε}ε>0 ⊂ C⁰(Ĩ),
Lemma 1.5 implies (check!) that both conditions of Proposition 1.6 are satisfied. In particular, choosing
any sequence {εj}j≥1 with εj → 0, we can select a subsequence {εjk}k≥1 ⊂ {εj}j≥1 with the property that
{yεjk}k≥1 converges with respect to | · |C⁰(Ĩ) to some y∗ ∈ C⁰(Ĩ). For simplicity, we again label the
subsequence as {εj}j≥1. We claim that y∗ solves (1.1) on Ĩ.
Proof. (Claim) First it is clear that yε(ξ) = η ∀ε > 0, so that clearly also y∗(ξ) = η. To see that y∗′ = f(x, y∗)
on Ĩ, we observe that
(1.5) lim_{j→∞} fεj(x, yεj(x)) = f(x, y∗(x)), x ∈ Ĩ
First, since yεj → y∗ uniformly on Ĩ and the family {fε} has a modulus of continuity in y inherited from f (uniformly in ε), we have
lim_{j→∞} |fεj(x, yεj(x)) − fεj(x, y∗(x))| = 0, x ∈ Ĩ
Second, we have
lim_{ε→0} |fε(x, y∗(x)) − f(x, y∗(x))| = 0, x ∈ Ĩ
The two preceding steps imply the validity of (1.5). Using it, and passing to the limit in the integral
equation yεj(x) = η + ∫_ξ^x fεj(t, yεj(t)) dt, we infer
y∗(x) = η + ∫_ξ^x f(t, y∗(t)) dt,
whence y∗ ∈ C¹(Ĩ) solves (1.1).
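The role of the integral equation can be made concrete on the standard non-Lipschitz example y′ = 2√|y|, y(0) = 0 (our own illustration, not from the notes): both y ≡ 0 and y(x) = x² solve it, so Peano's theorem gives existence but not uniqueness.

```python
import numpy as np

f = lambda t, y: 2.0 * np.sqrt(np.abs(y))

# Candidate solution y(x) = x^2 of y' = 2*sqrt(|y|), y(0) = 0.
xs = np.linspace(0.0, 1.0, 10001)
y = xs**2

# Check the integral equation y(x) = 0 + int_0^x f(t, y(t)) dt at x = 1
# (trapezoid rule; the integrand f(t, y(t)) = 2t is exactly linear).
g = f(xs, y)
integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(xs))
print(abs(integral - y[-1]))  # essentially zero

# y == 0 also solves the same IVP: without a Lipschitz bound, uniqueness fails.
print(np.allclose(f(xs, np.zeros_like(xs)), 0.0))
```

This is exactly the situation the compactness argument is designed for: the limit y∗ solves the equation, but different subsequences could in principle produce different solutions.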
1.1. Proof of Arzela-Ascoli. Here we prove the sufficiency of conditions (i), (ii) in Arzela-Ascoli, leaving
the necessity as an exercise. Thus, assuming (i), (ii), we show that the set A ⊂ C 0 (K) is pre-compact.
This is equivalent to the statement that for every sequence {fn }n≥1 ⊂ A, there exists a sub-sequence
{fnk }k≥1 ⊂ {fn }n≥1 converging in C 0 (K).
Thus let {fn }n≥1 ⊂ A be given. To construct a converging sub-sequence, we shall use a Cantor diagonal
procedure to find a sub-sequence converging on a countable dense subset of K. We first pick such a subset.
Put εk = 1/k, k ∈ ℕ. By compactness of K, for each k ∈ ℕ, we can cover K by finitely many discs
D^1_{εk}, D^2_{εk}, …, D^{jk}_{εk} of radius εk, centered at p^1_k, p^2_k, …, p^{jk}_k, respectively. By
re-labeling, we put ∪_{k=1}^∞ ∪_{l=1}^{jk} {p^l_k} = {pk}k≥1. By property (i), we can pick a subsequence
{fn11, fn12, …, fn1l, …} ⊂ {fn}n≥1, such that
{fn1l (p1 )}l≥1 converges. Next, pick {fn2l }l≥1 ⊂ {fn1l }l≥1 such that also {fn2l (p2 )}l≥1 converges. Inductively,
pick {fnrl }l≥1 ⊂ {fn(r−1)l }l≥1 such that
{fnrl (pj )}l≥1 , j = 1, 2, . . . , r
all converge. Then the diagonal sequence {fnrr }r≥1 has the property (check!) that
{fnrr (pj )}r≥1
converges for all j ≥ 1. Now we conclude via the following Claim: the diagonal sequence {fnrr}r≥1 converges with respect to | · |C⁰(K).
Proof. (Claim) We first prove that {fnrr(x)}r≥1 converges for each x ∈ K. Given x ∈ K, pick a sequence
{pjk}k≥1 ⊂ {pk}k≥1 with limk→∞ pjk = x. Given ε > 0, pick k0 ∈ ℕ such that |fn(pjk) − fn(x)| < ε/2 ∀k ≥ k0,
∀n ≥ 1; this is possible on account of (ii). Then
lim sup_{k,l→∞} |fnkk(x) − fnll(x)| ≤ lim sup_{k,l→∞} |fnkk(pjk0) − fnll(pjk0)|
+ lim sup_{k→∞} |fnkk(x) − fnkk(pjk0)|
+ lim sup_{l→∞} |fnll(x) − fnll(pjk0)|
≤ ε
Since ε > 0 is arbitrary, the sequence {fnkk(x)}k≥1 is Cauchy. Define f(x) := limk→∞ fnkk(x). In order to
complete the proof of the Claim, it suffices to show that this limit is uniform in x (why?).
Given ε > 0, pick a finite subset B := {pjk}k=1…L ⊂ ∪_{k=1}^∞ {pk} with the property that ∀x ∈ K ∃pjl ∈ B with
ρ(x, pjl) < δε/3.
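To see what the two hypotheses rule in and rule out, one can compare (our own toy example) an equicontinuous, uniformly bounded family, for which uniform convergence of a subsequence is available, with a bounded family that fails equicontinuity:

```python
import numpy as np

xs = np.linspace(0.0, 2 * np.pi, 10001)

# Equicontinuous + uniformly bounded: f_n(x) = sin(x + 1/n).
# Here the whole sequence already converges uniformly to sin(x),
# since |sin(x + 1/n) - sin(x)| <= 1/n (sin is 1-Lipschitz).
for n in (1, 10, 100):
    fn = np.sin(xs + 1.0 / n)
    assert np.max(np.abs(fn - np.sin(xs))) <= 1.0 / n + 1e-12

# Bounded but NOT equicontinuous: f_n(x) = sin(n x).
# Distinct members stay far apart in the sup-norm, so no subsequence
# of this family can be uniformly Cauchy.
print(np.max(np.abs(np.sin(5 * xs) - np.sin(50 * xs))))  # of order 1, not small
```

The second family shows that uniform boundedness alone cannot give pre-compactness in C⁰(K): condition (ii) is what prevents the oscillations from surviving every subsequence.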
It turns out that the preceding example gives the generic behavior. We have the following
Theorem 2.1. Under the same assumptions as for Theorem 1.1, for any x0 ∈ Ĩ, the set
Jx0 := {y(x0) | y(x) solves (1.1)}
is compact and connected.
Proof. The compactness follows from the boundedness of Jx0, which in turn follows from
|y(x0)| ≤ |η| + M |x0 − ξ|,
as well as the closedness of Jx0. To see the latter, assume that yn(x), n = 1, 2, …, solve (1.1), and that
yn(x0) → y∗ ∈ ℝ. By uniform continuity and boundedness of the yn(x), as in the proof of Peano's theorem,
we can then extract a subsequence {ynk(x)} which converges uniformly on Ĩ to some limit y∗(x). This limit
solves (1.1), as follows as usual by passing to the integral equation. In particular, we have ynk(x0) → y∗(x0),
whence y∗ = y∗(x0) ∈ Jx0, and Jx0 is closed.
In order to complete the proof of the theorem, we have to show that Jx0 is connected. This we do via
contradiction: assume Jx0 is not connected. Then we can write
(2.1) Jx0 = J1 ∪ J2 ,
where both J1,2 are compact, and we have dist(J1 , J2 ) = 2δ > 0. Pick two solutions y1,2 (x) of (1.1) with the
property that y1 (x0 ) ∈ J1 , y2 (x0 ) ∈ J2 . The idea then is to deform y1 (x) into y2 (x), and thereby construct a
solution of (1.1) with y(x0 ) in neither J1 nor J2 , contradicting (2.1). The technical complication here comes
from the fact that we cannot use Peano’s existence theorem to construct this deformation, on account of the
lack of uniqueness of solutions; instead, we shall invoke the Picard theorem, via a slick modification of f(x, y):
First, we pick a sequence fj (x, y) which is C ∞ with respect to y, and such that
fj (x, y) → f (x, y)
uniformly for x ∈ Ĩ, y ∈ [η − bM/(M+1), η + bM/(M+1)], as in the proof of Theorem 1.1. Next, introduce the auxiliary
functions
fj1 (x, y) := fj (x, y) + f (x, y1 (x)) − fj (x, y1 (x))
fj2 (x, y) := fj (x, y) + f (x, y2 (x)) − fj (x, y2 (x))
Observe that we then still have
y10 (x) = fj1 (x, y1 (x)), y20 (x) = fj2 (x, y2 (x)),
but the functions fj1,2 (x, y) are now C ∞ smooth with respect to y. Also, note that
lim_{j→∞} fj1,2(x, y) = f(x, y)
for x, y as above.
To obtain the deformation of y1(x) into y2(x), we now consider, for each λ ∈ [0, 1], the auxiliary problems
(2.2) y′j,λ = λ fj1(x, yj,λ) + (1 − λ) fj2(x, yj,λ), yj,λ(ξ) = η.
By Picard's theorem, there is a unique solution yj,λ on Ĩ, and of course when λ = 0 we have yj,λ(x) = y2(x),
while when λ = 1 we have yj,λ(x) = y1(x). Specializing to x = x0, we see that as λ traces out [0, 1],
yj,λ(x0) connects y2(x0) ∈ J2 to y1(x0) ∈ J1. To be more precise, we need the continuity of the map
λ ↦ yj,λ(x0), which follows from the continuous dependence of solutions of (2.2) on the parameter λ
(fj1,2 being Lipschitz in y).
Since we have dist(J1 , J2 ) = 2δ > 0, for each j ∈ N we can find λj ∈ (0, 1) such that
dist(yj,λj (x0 ), J1 ∪ J2 ) ≥ δ
Using the same compactness argument as in the proof of Peano's theorem, the family {yj,λj(x)}j≥1 ⊂ C⁰(Ĩ) is
pre-compact, whence we can extract a uniformly converging subsequence, which converges to some y∗(x) ∈ C⁰(Ĩ).
Then, since λj fj1 + (1 − λj) fj2 → f uniformly, passing to the integral equation shows that y∗(x) solves (1.1),
so that y∗(x0) ∈ Jx0. We then clearly also have dist(y∗(x0), J1 ∪ J2) ≥ δ, and this contradicts (2.1).
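The deformation argument can be visualized on the model problem y′ = 2√|y|, y(0) = 0 (our own toy example): the solutions y1(x) = x² and y2 ≡ 0 give fields f(x, y1(x)) = 2x and f(x, y2(x)) = 0, and a caricature of (2.2) with y-independent interpolated fields already shows yλ(x0) sweeping continuously across the Kneser interval Jx0 = [0, x0²]:

```python
import numpy as np

def euler(g, xi, eta, a, n=10000):
    """Forward Euler for y' = g(x, y), y(xi) = eta, on [xi, xi + a]."""
    xs = np.linspace(xi, xi + a, n + 1)
    y = eta
    for k in range(n):
        y += (xs[k + 1] - xs[k]) * g(xs[k], y)
    return y

# Interpolated field lam * 2x + (1 - lam) * 0, a y-independent caricature
# of (2.2); the exact solution is y_lam(x) = lam * x^2.
x0 = 1.0
lams = np.linspace(0.0, 1.0, 11)
vals = [euler(lambda x, y, lam=lam: lam * 2.0 * x, 0.0, 0.0, x0)
        for lam in lams]

# lam -> y_lam(x0) moves continuously from y2(x0) = 0 to y1(x0) = x0^2 = 1,
# so it passes through every intermediate value of J_{x0} = [0, 1].
print(np.round(vals, 3))
```

In the real proof the interpolated fields do depend on y, which is why the continuity in λ has to come from continuous dependence for the Lipschitz problems (2.2); the toy version only displays the intermediate-value mechanism.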
We can refine the preceding theorem for scalar ODEs as follows: since a compact connected subset of ℝ is a
closed interval, we know that
Jx0 = [a(x0), b(x0)], x0 ∈ Ĩ,
for some functions a(x0), b(x0). The next theorem says that these functions are themselves solutions of (1.1).
Specifically:
Theorem 2.2. Both the upper limit b(x0) and the lower limit a(x0), x0 ∈ Ĩ, solve (1.1); we call them the
maximal, resp. the minimal, solution.
Proof. We first note that whenever y1 (x), y2 (x) solve (1.1), so does
y∗ (x) := max{y1 (x), y2 (x)}
To see this, distinguish between points x where y1(x) ≠ y2(x) and those where y1(x) = y2(x). In the first
case, there is a neighborhood of x on which y∗ coincides with either y1 or y2, whence the statement is clear.
If we have y1(x) = y2(x), then by differentiability of y1,2(x), we have
y1(x + t) = y1(x) + tf(x, y1(x)) + o(t), y2(x + t) = y2(x) + tf(x, y2(x)) + o(t),
from which we infer
y∗(x + t) = y∗(x) + tf(x, y∗(x)) + o(t).
This implies differentiability of y∗ at x, with y∗′(x) = f(x, y∗(x)).
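A numerical sanity check of this max-lemma for the model field f(x, y) = 2√|y| (our own example): the two solutions below agree on [−1, 0] and branch at x = 0, and the central-difference residual of y∗ = max{y1, y2} stays small across the touching point:

```python
import numpy as np

f = lambda x, y: 2.0 * np.sqrt(np.abs(y))

# Two solutions of y' = 2*sqrt(|y|), y(-1) = -1, that touch at x = 0:
xs = np.linspace(-1.0, 1.0, 20001)
y1 = np.where(xs <= 0.0, -xs**2, 0.0)     # stays at 0 after reaching it
y2 = np.where(xs <= 0.0, -xs**2, xs**2)   # takes off again immediately
y_star = np.maximum(y1, y2)               # pointwise max, as in the lemma

# Central-difference residual |y*' - f(x, y*)|, including the touch point:
dy = (y_star[2:] - y_star[:-2]) / (xs[2:] - xs[:-2])
residual = np.max(np.abs(dy - f(xs[1:-1], y_star[1:-1])))
print(residual)  # of the order of the grid spacing
```

The interesting point is x = 0, where y1 = y2 but the branches differ; the first-order expansions with the common slope f(x, y∗(x)) are exactly what keeps the residual small there.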
Now for each x̃ ∈ Ĩ, by definition of b(x̃) we can choose a solution yx̃(x) of (1.1) with b(x̃) = yx̃(x̃). Then
pick a countable dense subset
x̃1, x̃2, …, x̃k, … ⊂ Ĩ
and consider the sequence of functions
yN(x) := max_{j=1,2,…,N} yx̃j(x)
According to the preceding paragraph, applied inductively, we see that yN(x) solves (1.1). Furthermore,
since as in the proof of Theorem 1.1 the set {yN(x)}N≥1 ⊂ C⁰(Ĩ) is pre-compact, we may select a subsequence
{yNk(x)}k≥1 which converges to some y∗(x) (where as usual we restrict x ∈ Ĩ) solving (1.1). But since
yN(x̃j) = b(x̃j) for j ≤ N, we necessarily obtain
y∗(x̃j) = b(x̃j) ∀j ≥ 1,
and by density of {x̃j}j≥1 ⊂ Ĩ, we infer that y∗(x) = b(x) for all x ∈ Ĩ.
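For the model problem y′ = 2√|y|, y(0) = 0 on [0, 1] (our own example), the whole solution family and its extremal members are explicit, and the max-over-a-family construction can be carried out directly:

```python
import numpy as np

# Every solution of y' = 2*sqrt(|y|), y(0) = 0 on [0, 1] has the form
# y_c(x) = max(x - c, 0)^2 for some takeoff time c in [0, 1] (c = 1 gives
# y == 0), so the maximal solution is b(x) = x^2 and the minimal is a(x) = 0.
xs = np.linspace(0.0, 1.0, 1001)
family = [np.maximum(xs - c, 0.0) ** 2 for c in np.linspace(0.0, 1.0, 51)]

b = np.max(family, axis=0)   # pointwise max over the family, as in y_N(x)
a = np.min(family, axis=0)   # pointwise min gives the minimal solution

print(np.allclose(b, xs**2))     # maximal solution b(x) = x^2
print(np.allclose(a, 0.0 * xs))  # minimal solution a(x) = 0
```

Here the sup is attained by a single member of the family (c = 0), so no limiting argument is needed; Theorem 2.2 is what guarantees the same conclusion when the sup must be approached along a sequence yN.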
Exercise: Verify this last step by proving the continuity of the function x0 → b(x0 ).