Lecture 23
Ec 181 KC Border
Convex Analysis and Economic Theory AY 2019–2020
g ≧ f if ( ∀x ∈ A ) [ g(x) ⩾ f (x) ].
We say that g strictly dominates f on A, written g > f , if ( ∀x ∈ A ) [ g(x) > f (x) ].
f = f̂ |A .
23.3.4 Remark Some statements of the Hahn–Banach Theorem (e.g., Royden [8,
Theorem 10.3.4, p. 233], Dunford and Schwartz [3, Theorem II.3.10, p. 62],
Wilansky [9, Theorem 12.4.1, p. 269]) impose other conditions on h, such as
sublinearity (positive homogeneity and subadditivity). Together these imply
convexity. It turns out that the homogeneity is unnecessary, but it is typically
satisfied in the cases they have in mind. We shall see (I hope) that if a linear
function f is dominated by a convex function h, there is a sublinear function p
(the directional derivative) satisfying h ≧ p ≧ f .
strictly larger subspace. The second part says that this is enough to conclude that
there is a dominated extension defined on all of X.
So let M be a subspace of X that includes V and assume that we have an
extension g of f to M that satisfies g ≦ h on M . If M = X, then we are done. So
suppose there exists v ∈ X \ M . Let N be the linear span of M ∪{v}. For each
x ∈ N there is a unique decomposition
x = z + λv where z ∈ M .
(To see the uniqueness, suppose x = z1 +λ1 v = z2 +λ2 v. Then z1 −z2 = (λ2 −λ1 )v.
Since z1 − z2 ∈ M and v ∉ M , it must be the case that λ2 − λ1 = 0. But then
λ1 = λ2 and z1 = z2 .)
Any linear extension ĝ of g satisfies ĝ(z+λv) = g(z)+λĝ(v), so the requirement
that it be dominated by h on N becomes
The constant ĝ(v) plays no real rôle here, and we may simply omit it. Then
multiplying by µλ > 0 and rearranging terms, we see that (2) is equivalent to
( ∀x, y ∈ M ) ( ∀µ, λ > 0 ) [ g(λx + µy) ⩽ λh(x − µv) + µh(y + λv) ]. (3)
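The equivalence of (2) and (3) can be checked numerically in a small example. The sketch below is my own illustration, not from the lecture: X = R², h(x, y) = |x| + |y| (convex), M the x-axis with g(t, 0) = t, and v = (0, 1). The admissible values of c = ĝ(v) form an interval whose endpoints are the sup of the lower bounds and the inf of the upper bounds, and (3) says exactly that this interval is nonempty.

```python
# Numerical sketch of the one-step extension (data chosen by me, not from
# the lecture).  Any admissible c = g_hat(v) must satisfy, for all
# z = (t, 0) in M and all mu, lam > 0,
#   (g(z) - h(z - mu*v)) / mu  <=  c  <=  (h(z + lam*v) - g(z)) / lam,
# and inequality (3) is the statement that the sup of the left-hand
# bounds is <= the inf of the right-hand bounds, so such a c exists.

def h(x, y):
    return abs(x) + abs(y)          # a convex function dominating g on M

def g(t):
    return t                        # the linear functional on M = {(t, 0)}

ts  = [i / 4 for i in range(-40, 41)]   # grid of t with (t, 0) in M
pos = [i / 4 for i in range(1, 41)]     # grid of positive mu, lam

lower = max((g(t) - h(t, -mu)) / mu for t in ts for mu in pos)
upper = min((h(t, lam) - g(t)) / lam for t in ts for lam in pos)
assert lower <= upper               # this is (3) on the grid

c = (lower + upper) / 2             # any c in [lower, upper] works
# the one-step extension g_hat(t, s) = t + s*c is dominated by h
assert all(g(t) + s * c <= h(t, s) + 1e-12 for t in ts for s in ts)
print(lower, upper)
```

Here the interval is [−1, 1], so picking c = 0 gives the dominated extension ĝ(t, s) = t.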
Then A and B are disjoint (since g ≧ φ) and convex. Moreover it is easy to see
that every point in A is a core point.
Thus by the Algebraic Separating Hyperplane Theorem 23.4.1, there is a sep-
arating hyperplane, which must be non-vertical. It is thus the graph of an affine
function φ̂ + β, which satisfies
φ̂ + β ≧ φ on M,
so (since a linear functional bounded below on a subspace must vanish on it)
φ̂ = φ on M and β ⩾ 0.
On the other hand
φ̂ ≦ φ̂ + β ≦ g everywhere.
Thus φ̂ is the desired extension of φ.
The next proof is standard, and is taken from the Hitchhiker’s Guide [1, Theorem 5.61].
Let M = { α(−z) : α ∈ R }, the one-dimensional subspace generated by −z,
and define f : M → R by f (α(−z)) = α. Clearly, f is linear and moreover f ≤ pC
on M , since for each α ⩾ 0 we have pC (α(−z)) = αpC (−z) ≥ α = f (α(−z)),
and α < 0 yields f (α(−z)) < 0 ≤ pC (α(−z)). By the Hahn–Banach Extension
Theorem 23.3.3, f extends to f̂ defined on all of X satisfying f̂ (x) ≤ pC (x) for all
x ∈ X. Note that f̂ (z) = −1, so f̂ is nonzero.
To see that f̂ separates A and B, let a ∈ A and b ∈ B. Then we have
This shows that the nonzero linear functional f̂ separates the convex sets A and
B.
To see that the separation is proper, let z = a − b, where a ∈ A and b ∈ B.
Since f̂ (z) = −1, we have f̂ (a) ≠ f̂ (b), so A and B cannot lie in the same
hyperplane.
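The key inequality f ≤ pC on M can be verified in a concrete finite-dimensional case. In this sketch (example data mine) C is the unit ball of the sup norm in R², so its gauge is pC (x, y) = max(|x|, |y|), and −z = (2, 0) satisfies pC (−z) = 2 ⩾ 1.

```python
# Concrete check (my own example): on M = {alpha * (-z)} the functional
# f(alpha * (-z)) = alpha is dominated by the gauge p_C, as the proof
# of the separation theorem requires.

def p_C(x, y):
    return max(abs(x), abs(y))      # gauge of C = sup-norm unit ball

neg_z = (2.0, 0.0)                  # -z, chosen so that p_C(-z) = 2 >= 1

for alpha in [i / 10 for i in range(-30, 31)]:
    x, y = alpha * neg_z[0], alpha * neg_z[1]
    assert alpha <= p_C(x, y)       # f(alpha*(-z)) = alpha <= p_C
print("f <= p_C on M")
```

For α ⩾ 0 this is α ≤ 2α, and for α < 0 it is α < 0 ≤ pC, exactly the two cases in the proof above.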
23.6.4 Subdifferentiability

Separation Theorem =⇒ Subdifferentiability =⇒ Hahn–Banach =⇒ Krein–Rutman =⇒ Nonzero Positive Functionals
23.6.9 Fact (cf. [4, pp. 2–3]) If M is a linear subspace of a vector space
X, there is a (not unique) subspace N that is complementary to M . That is,
M ∩ N = {0}, and every x ∈ X has a unique representation as x = xM + xN ,
where xM ∈ M and xN ∈ N . This is expressed as X = M ⊕ N .
Let ≧ be the order induced by P , and let φ be positive on M . Let Y be the span
of P ∪ M , and let Y = M ⊕ N . For y ∈ Y , we may write
y = p1 − p2 + x,
where p1 , p2 ∈ P and x ∈ M .
Let
g(y) = inf{φ(x) : x ∈ M & x ≧ y}.
Then g is sublinear and φ ≦ g on M . Extend φ to φ̂ ≦ g on Y by Hahn–Banach.
Now show that φ̂ is positive:
Let x ∈ P and let x̄ ∈ P ∩ M . For λ > 0, we have
x̄ + λx ∈ P.
Thus
x̄/λ ≧ −x.
But x̄/λ ∈ M , so by definition of g,
g(−x) ⩽ φ(x̄/λ).
Thus
φ̂(−x) ⩽ g(−x) ⩽ φ(x̄)/λ.
Let λ → ∞ to get φ̂(−x) ⩽ 0, which proves that φ̂ is positive on Y . Then use
complementary subspaces to extend φ̂ to all of X.
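A concrete instance of this argument (with data of my own choosing) can be run in X = R²: take P the positive orthant, M the diagonal {(t, t)}, and φ(t, t) = t, which is positive on P ∩ M. Here the sublinear majorant from the proof works out to g(y) = inf{ φ(x) : x ∈ M, x ≧ y } = max(y₁, y₂), and any dominated linear extension is automatically positive.

```python
# Sketch of the positive-extension argument in R^2 (example data mine).

def g(y1, y2):                       # the sublinear majorant from the proof
    return max(y1, y2)

def phi_hat(y1, y2):                 # one dominated linear extension of phi
    return 0.5 * y1 + 0.5 * y2

grid = [i / 4 for i in range(-20, 21)]
# phi_hat agrees with phi on M = {(t, t)} and is dominated by g ...
assert all(phi_hat(t, t) == t for t in grid)
assert all(phi_hat(a, b) <= g(a, b) for a in grid for b in grid)
# ... hence positive: y >= 0 implies phi_hat(y) >= 0
assert all(phi_hat(a, b) >= 0
           for a in grid for b in grid if a >= 0 and b >= 0)
print("phi_hat is a positive extension of phi")
```

The averaging functional is only one admissible extension; any convex combination λy₁ + (1 − λ)y₂ with λ ∈ [0, 1] is dominated by max(y₁, y₂) and so would serve.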
Assume 0 ∈ A \ cor(A), and let P be the cone generated by cor A. Any positive
functional supports A at 0.
Figure 23.6.1. The affine function g : y ↦ p · y − β satisfies g ≦ f and
g(x) = f (x). Equivalently, the hyperplane H = {(y, α) ∈ X × R : (p, −1) ·
(y, α) = β} supports epi f at the point (x, f (x)), which maximizes (p, −1)
over epi f , and the maximum value is β.
Let x ∈ A \ (cor A), and let ρ be the gauge function of A. The epigraph of ρ is
a convex cone. Let φ be a subgradient of ρ at x. It supports the epigraph at
(x, ρ(x)). Slice through X × R with the horizontal plane {(x, α) : α = 1}.
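The subgradient support of the epigraph can be seen numerically in a case where everything is computable (example data mine): let A be the closed unit disk in R², whose gauge is ρ(y) = ‖y‖₂, take the boundary point x = (1, 0), and take φ = (1, 0), which is a subgradient of ρ at x.

```python
# Finite-dimensional sketch: phi . y <= rho(y) for all y, with equality
# at x, so (phi, -1) supports the epigraph of rho at (x, rho(x)).
import math

def rho(y1, y2):                     # gauge of the unit disk = Euclidean norm
    return math.hypot(y1, y2)

phi = (1.0, 0.0)                     # candidate subgradient at x = (1, 0)

grid = [i / 5 for i in range(-10, 11)]
assert all(phi[0] * a + phi[1] * b <= rho(a, b) + 1e-12
           for a in grid for b in grid)
assert phi[0] * 1.0 + phi[1] * 0.0 == rho(1.0, 0.0) == 1.0
print("phi supports epi rho at (x, rho(x))")
```

Slicing the supporting hyperplane with the plane α = 1 then recovers a hyperplane in X supporting A at x, which is the point of the argument above.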
(x ∼ y & y ∼ z) =⇒ x ∼ z; x ∼ y =⇒ y ∼ x; and x ∼ x.
[x] = {y : y ∼ x}.
Observe that two classes [x] and [y] are either equal or disjoint: if z ∈ [x] ∩ [y],
then x ∼ z and z ∼ y, so x ∼ y and [x] = [y].
Thus the ∼-equivalence classes form a partition of X into disjoint sets. The
collection of ∼-equivalence classes of X is called the quotient of X modulo
∼, often written as X/∼. The function x 7→ [x] from X to X/∼ is called the
quotient map.
In many contexts, mathematicians say that they identify the members of an
equivalence class. What they mean by this is that they write X instead of X/∼,
and they write x instead of [x].
Given any function f with domain X, we can define an equivalence relation
∼ on X by x ∼ y whenever f (x) = f (y). This is one of the most common ways
to define equivalence relations.
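A small illustration (example of my own) of this construction: the equivalence relation induced by f (x) = x mod 3 on X = {0, …, 11}. Its equivalence classes are exactly the fibers of f , and they partition X.

```python
# Build X/~ for the relation x ~ y iff f(x) == f(y), with f(x) = x % 3.
X = range(12)
f = lambda x: x % 3

classes = {}
for x in X:                          # group X by the value of f
    classes.setdefault(f(x), []).append(x)

quotient = list(classes.values())    # X/~ as a list of classes
# the classes are pairwise disjoint and cover X: a partition
assert sorted(sum(quotient, [])) == list(X)
print(quotient)
```

The quotient map x ↦ [x] here sends x to the list containing it, and "identifying" the members of a class amounts to working with f (x) instead of x.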
x = xM + xN , where xM ∈ M and xN ∈ N .
In this case we write X = M ⊕ N and say that X is the direct sum of M and
N.
It is well known that every linear subspace M of Rm has an orthogonal complement
M ⊥ = { x ∈ Rm : ( ∀z ∈ M ) [ x · z = 0 ] }. In a more general linear space
there may not be an inner product, but nevertheless we still have the following.
Proof : (cf. Holmes [4, § C, pp. 2–3]) Let M be a linear subspace of the vector
space X. Define the relation ∼M on X by
x ∼M y if x − y ∈ M.
Exercise 4.2.2 proves that ∼M is an equivalence relation. Let X/M denote the set
of equivalence classes of ∼M , and let [x] denote the equivalence class of x. Then
[x] = x + M.
but this is clearly true. As a result [0] = M is the zero of the vector space X/M .
We now have to show that X/M can be identified with a complementary
subspace of X. Since X/M is a linear space, it has a basis {[bi ] : i ∈ I} where
I is some index set, and each bi is a fixed representative of its ∼M -equivalence
class. It follows that {bi : i ∈ I} is a linearly independent subset of X. Moreover,
N = span{bi : i ∈ I} is complementary to M : Let x ∈ X. Then we can uniquely
write
[x] = ∑_{i=1}^{k} αi [bi ],
so
∑_{i=1}^{k} αi bi ∈ [x] = x + M.
Let
xN = ∑_{i=1}^{k} αi bi and xM = x − xN .
Then xM = x − xN ∈ M (since xN ∈ x + M ), xN ∈ N , and x = xM + xN .
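A minimal worked case of this construction (data chosen by me): X = R³ and M the xy-plane. Then X/M is one-dimensional; picking the representative b = (0, 0, 1) of a basis class [b] gives the complement N = span{b}, and every x decomposes uniquely as x = xM + xN.

```python
# Sketch of the complementary-subspace decomposition for M = xy-plane,
# N = span{(0, 0, 1)} in R^3.

def decompose(x):
    # the coordinate of [x] in the basis {[b]} of X/M is just x[2] here
    x_N = (0.0, 0.0, x[2])           # component in N = span{b}
    x_M = (x[0], x[1], 0.0)          # x - x_N, which lies in M
    return x_M, x_N

x = (2.0, -1.0, 5.0)
x_M, x_N = decompose(x)
assert tuple(m + n for m, n in zip(x_M, x_N)) == x      # x = x_M + x_N
assert x_M[2] == 0.0 and x_N[0] == x_N[1] == 0.0        # x_M in M, x_N in N
print(x_M, x_N)
```

In this low-dimensional case the choice of representative is obvious; in general the proof's basis {bi} of representatives plays exactly the role of b here.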
References
[1] C. D. Aliprantis and K. C. Border. 2006. Infinite dimensional analysis: A
hitchhiker’s guide, 3d. ed. Berlin: Springer–Verlag.
[2] T. M. Apostol. 1967–69. Calculus, 2d. ed., volumes 1–2. Waltham, Massachusetts: Blaisdell.
[3] N. Dunford and J. T. Schwartz. 1957. Linear operators: Part I. New York:
Interscience.
[4] R. B. Holmes. 1975. Geometric functional analysis and its applications. Number 24 in Graduate Texts in Mathematics. Berlin: Springer–Verlag.
[5] V. L. Klee, Jr. 1951. Convex sets in linear spaces. Duke Mathematical Journal
18(2):443–466. DOI: 10.1215/S0012-7094-51-01835-2
[6] S. R. Lay. 1982. Convex sets and their applications. Pure and Applied Mathematics: A Wiley-Interscience Series of Texts, Monographs, and Tracts. New York: Wiley–Interscience.
[8] H. L. Royden. 1988. Real analysis, 3d. ed. New York: Macmillan.
[9] A. Wilansky. 1998. Topology for analysis. Mineola, NY: Dover. Unabridged re-
publication of the work originally published by Ginn and Company, Waltham,
MA in 1970.