Multilinear Algebra - MIT
CHAPTER 1
MULTILINEAR ALGEBRA
1.1 Background
We will list below some definitions and theorems that are part of
the curriculum of a standard theory-based sophomore level course
in linear algebra. (Such a course is a prerequisite for reading these
notes.) A vector space is a set, V , the elements of which we will refer
to as vectors. It is equipped with two vector space operations:
Vector space addition. Given two vectors, v1 and v2 , one can add
them to get a third vector, v1 + v2 .
Scalar multiplication. Given a vector, v, and a real number, \lambda, one can multiply v by \lambda to get a vector, \lambda v.
These operations satisfy a number of standard rules: associativity, commutativity, distributive laws, etc., which we assume you're familiar with. (See exercise 1 below.) In addition we'll assume you're familiar with the following definitions and theorems.
1. The zero vector. This vector has the property that for every vector, v, v + 0 = 0 + v = v, and \lambda v = 0 if \lambda is the real number, zero.
(1.1.1) \mathbb{R}^k \to V, \quad (c_1, \dots, c_k) \mapsto c_1 v_1 + \cdots + c_k v_k
is one-one.
B : V \times V \to \mathbb{R}
and
B(\lambda v, w) = \lambda B(v, w).
B(v, w) = B(w, v).
B(v, v) \ge 0.
B(w, \lambda v) = \lambda B(w, v)
and
B(w, v_1 + v_2) = B(w, v_1) + B(w, v_2).
The items on the list above are just a few of the topics in linear algebra that we're assuming our readers are familiar with. We've highlighted them because they're easy to state. However, understanding them requires a heavy dollop of that indefinable quality "mathematical sophistication", a quality which will be in heavy demand in the next few sections of this chapter. We will also assume that our readers are familiar with a number of more low-brow linear algebra notions: matrix multiplication, row and column operations on matrices, transposes of matrices, determinants of n \times n matrices, inverses of matrices, Cramer's rule, recipes for solving systems of linear equations, etc. (See \S 1.1 and \S 1.2 of Munkres' book for a quick review of this material.)
Exercises.
\lambda(a_1, \dots, a_n) = (\lambda a_1, \dots, \lambda a_n).
(a) Commutativity: v + w = w + v.
(b) Associativity: u + (v + w) = (u + v) + w.
(c) For the zero vector, 0 = (0, \dots, 0), v + 0 = 0 + v = v.
(d) v + (-1)v = 0.
(e) 1v = v.
(f) Associative law for scalar multiplication: (ab)v = a(bv).
(g) Distributive law for scalar addition: (a + b)v = av + bv.
(h) Distributive law for vector addition: a(v + w) = av + aw.
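These axioms are easy to machine-check for the standard example V = R^n. The sketch below is our own illustration (not part of the notes); it uses exact rational arithmetic so that every comparison in (a)-(h) is exact rather than approximate:

```python
from fractions import Fraction as F

def vadd(v, w):
    # componentwise vector addition of two n-tuples
    return tuple(a + b for a, b in zip(v, w))

def smul(c, v):
    # scalar multiplication of an n-tuple by a rational number
    return tuple(c * a for a in v)

u = (F(1), F(-2), F(3))
v = (F(5, 2), F(0), F(-1))
w = (F(-7), F(4), F(1, 3))
a, b = F(3, 2), F(-2, 5)
zero = (F(0),) * 3

assert vadd(v, w) == vadd(w, v)                               # (a) commutativity
assert vadd(u, vadd(v, w)) == vadd(vadd(u, v), w)             # (b) associativity
assert vadd(v, zero) == v == vadd(zero, v)                    # (c) zero vector
assert vadd(v, smul(F(-1), v)) == zero                        # (d) additive inverse
assert smul(F(1), v) == v                                     # (e) unit scalar
assert smul(a * b, v) == smul(a, smul(b, v))                  # (f) scalar associativity
assert smul(a + b, v) == vadd(smul(a, v), smul(b, v))         # (g) distributivity (scalars)
assert smul(a, vadd(v, w)) == vadd(smul(a, v), smul(a, w))    # (h) distributivity (vectors)
```

Using `Fraction` rather than floats is a deliberate choice: floating-point addition is not exactly associative, so axiom (b) would only hold approximately.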
is an inner product.
v = c_1 e_1 + \cdots + c_n e_n, \quad c_i \in \mathbb{R}.
Let
(1.2.5) e_i^*(v) = c_i.
If v = c_1 e_1 + \cdots + c_n e_n and v' = c_1' e_1 + \cdots + c_n' e_n then v + v' = (c_1 + c_1')e_1 + \cdots + (c_n + c_n')e_n, so
e_i^*(v + v') = c_i + c_i' = e_i^*(v) + e_i^*(v').
Claim: e_i^*, i = 1, \dots, n, is a basis of V^*.
A^* : W^* \to V^*
A^*(\ell_1 + \ell_2) = A^*\ell_1 + A^*\ell_2
and that
A^*(\lambda\ell) = \lambda A^*\ell,
i.e., that A^* is linear.
Proof. Let
A f_i = \sum_j c_{j,i} e_j.
Then
e_j^*(A f_i) = \sum_k c_{k,i} \, e_j^*(e_k) = c_{j,i},
so a_{i,j} = c_{j,i}.
Exercises.
(a) Show that the W -cosets are the lines, x2 = a, parallel to the
x1 -axis.
(b) Show that the sum of the cosets, x2 = a and x2 = b is the
coset x2 = a + b.
(c) Show that the scalar multiple of the coset, x_2 = c, by the number, \lambda, is the coset, x_2 = \lambda c.
W^\perp = \{\ell \in V^*, \ \ell(w) = 0 \text{ if } w \in W\}.
: (V ) (W )
the restriction map
V^* \to W^*.
Show that this map is onto and that its kernel is W^\perp. Conclude from exercise 8 that there is a natural bijective linear map
V^*/W^\perp \to W^*
\operatorname{Ker} A^* = (\operatorname{Im} A)^\perp
and
\operatorname{Im} A^* = (\operatorname{Ker} A)^\perp.
Show that
(1.2.12) \sum_i a_{i,j} \, a_{i,k} = \begin{cases} 1 & j = k \\ 0 & j \ne k \end{cases}
(c) Let A be the matrix [a_{i,j}]. Show that (1.2.12) can be written more compactly as the matrix identity
(1.2.13) A A^t = I
where I is the identity matrix.
1.3 Tensors
T : V^k \to \mathbb{R}
is said to be linear in its i-th variable if, when we fix vectors, v_1, \dots, v_{i-1}, v_{i+1}, \dots, v_k, the map
Exercise.
Then for v = c_1 e_1 + \cdots + c_n e_n
T(v_1, \dots, v_{n-1}, v) = \sum c_i \, T_i(v_1, \dots, v_{n-1}),
and that the left and right distributive laws are valid: For k_1 = k_2,
(1.3.6) (T_1 + T_2) \otimes T_3 = T_1 \otimes T_3 + T_2 \otimes T_3
and for k_2 = k_3
(1.3.7) T_1 \otimes (T_2 + T_3) = T_1 \otimes T_2 + T_1 \otimes T_3.
A particularly interesting tensor product is the following. For i = 1, \dots, k let \ell_i \in V^* and let
(1.3.8) T = \ell_1 \otimes \cdots \otimes \ell_k.
Thus, by definition,
(1.3.9) T(v_1, \dots, v_k) = \ell_1(v_1) \cdots \ell_k(v_k).
A tensor of the form (1.3.9) is called a decomposable k-tensor. These tensors, as we will see, play an important role in what follows. In particular, let e_1, \dots, e_n be a basis of V and e_1^*, \dots, e_n^* the dual basis of V^*. For every multi-index, I, of length k let
e_I^* = e_{i_1}^* \otimes \cdots \otimes e_{i_k}^*.
Then if J is another multi-index of length k,
(1.3.10) e_I^*(e_{j_1}, \dots, e_{j_k}) = \begin{cases} 1, & I = J \\ 0, & I \ne J \end{cases}
by (1.2.6), (1.3.8) and (1.3.9). From (1.3.10) it's easy to conclude
Theorem 1.3.2. The e_I^*'s are a basis of L^k(V).
Proof. Given T \in L^k(V), let
T' = \sum T_I \, e_I^*
where the T_I's are defined by (1.3.2). Then
(1.3.11) T'(e_{j_1}, \dots, e_{j_k}) = \sum T_I \, e_I^*(e_{j_1}, \dots, e_{j_k}) = T_J
by (1.3.10); however, by Proposition 1.3.1 the T_J's determine T, so T' = T. This proves that the e_I^*'s are a spanning set of vectors for L^k(V). To prove they're a basis, suppose
\sum C_I \, e_I^* = 0
for constants, C_I \in \mathbb{R}. Then by (1.3.11) with T = 0, C_J = 0, so the e_I^*'s are linearly independent.
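Theorem 1.3.2 can be illustrated numerically for small n and k. The sketch below is our own code, not part of the notes; the helper names `e_dual`, `tensor` and `e_I` are invented for the illustration. It builds the dual-basis functionals on R^3, forms the decomposable tensors e_I, checks (1.3.10), and reconstructs a sample 2-tensor from its coefficients T_I:

```python
import random
from itertools import product

n, k = 3, 2

def e_dual(i):
    # e*_i reads off the i-th coordinate of a vector
    return lambda v: v[i]

def tensor(*phis):
    # (phi_1 (x) ... (x) phi_k)(v_1, ..., v_k) = phi_1(v_1) ... phi_k(v_k)
    def prod_fn(*vs):
        out = 1.0
        for phi, vec in zip(phis, vs):
            out *= phi(vec)
        return out
    return prod_fn

def e_I(I):
    # the decomposable tensor e*_{i_1} (x) ... (x) e*_{i_k}
    return tensor(*(e_dual(i) for i in I))

# standard basis of R^n
std = [tuple(1.0 if j == i else 0.0 for j in range(n)) for i in range(n)]

# (1.3.10): e_I(e_{j_1}, ..., e_{j_k}) = 1 if I = J and 0 otherwise
for I in product(range(n), repeat=k):
    for J in product(range(n), repeat=k):
        assert e_I(I)(*(std[j] for j in J)) == (1.0 if I == J else 0.0)

# a sample 2-tensor on R^3 and its expansion T = sum_I T_I e_I
def T(v, w):
    return v[0] * w[1] + 2.0 * v[2] * w[2] - v[1] * w[0]

coeffs = {I: T(std[I[0]], std[I[1]]) for I in product(range(n), repeat=k)}
random.seed(1)
v = tuple(random.uniform(-1, 1) for _ in range(n))
w = tuple(random.uniform(-1, 1) for _ in range(n))
recon = sum(c * e_I(I)(v, w) for I, c in coeffs.items())
assert abs(recon - T(v, w)) < 1e-12
```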
A^* T : V^k \to \mathbb{R}
to be the function
It's clear from the linearity of A that this function is linear in its i-th variable for all i, and hence is a k-tensor. We will call A^* T the pull-back of T by the map, A.
Proposition 1.3.3. The map
(1.3.13) A : Lk (W ) Lk (V ) , T A T ,
is a linear mapping.
We leave this as an exercise. We also leave as an exercise the
identity
(1.3.14) A (T1 T2 ) = A T1 A T2
(1.3.15) (AB) T = B (A T )
for all T Lk (W ).
Exercises.
3. Verify (1.3.14).
4. Verify (1.3.15).
A^*(\ell_1 \otimes \cdots \otimes \ell_k) = A^*\ell_1 \otimes \cdots \otimes A^*\ell_k.
Permutations
Let \Sigma_k be the k-element set: \{1, 2, \dots, k\}. A permutation of order k is a bijective map, \sigma : \Sigma_k \to \Sigma_k. Given two permutations, \sigma_1 and \sigma_2, their product, \sigma_1\sigma_2, is the composition of \sigma_1 and \sigma_2, i.e., the map,
i \mapsto \sigma_1(\sigma_2(i)),
Check:
\tau(i) = j
(1.4.1) \tau(j) = i
\tau(\ell) = \ell, \quad \ell \ne i, j.
= ik (ik )
(1.4.3) (1) = 1 .
Claim:
For \sigma, \tau \in S_k, \quad (-1)^{\sigma\tau} = (-1)^\sigma (-1)^\tau.
Proof. By definition,
(-1)^{\sigma\tau} = \prod_{i<j} \frac{x_{\sigma\tau(i)} - x_{\sigma\tau(j)}}{x_i - x_j},
and this product can be factored as
(1.4.6) \prod_{i<j} \frac{x_{\sigma\tau(i)} - x_{\sigma\tau(j)}}{x_{\tau(i)} - x_{\tau(j)}} \cdot \prod_{i<j} \frac{x_{\tau(i)} - x_{\tau(j)}}{x_i - x_j}.
For i < j, let p = \tau(i) and q = \tau(j) when \tau(i) < \tau(j) and let p = \tau(j) and q = \tau(i) when \tau(j) < \tau(i). Then
\frac{x_{\sigma(p)} - x_{\sigma(q)}}{x_p - x_q} = \frac{x_{\sigma\tau(i)} - x_{\sigma\tau(j)}}{x_{\tau(i)} - x_{\tau(j)}}
(i.e., if \tau(i) < \tau(j), the numerator and denominator on the right equal the numerator and denominator on the left and, if \tau(j) < \tau(i), are the negatives of the numerator and denominator on the left). Thus the first factor in (1.4.6) becomes
\prod_{p<q} \frac{x_{\sigma(p)} - x_{\sigma(q)}}{x_p - x_q} = (-1)^\sigma,
and the second factor is, by definition, (-1)^\tau.
Alternation
3. T^{\sigma\tau} = (T^\sigma)^\tau.
(\ell_1 \otimes \cdots \otimes \ell_k)^\sigma (v_1, \dots, v_k)
= \ell_1(v_{\sigma^{-1}(1)}) \cdots \ell_k(v_{\sigma^{-1}(k)}).
Setting q = \sigma^{-1}(i), the i-th term in this product is \ell_{\sigma(q)}(v_q); so the product can be rewritten as
\ell_{\sigma(1)}(v_1) \cdots \ell_{\sigma(k)}(v_k)
or
(\ell_{\sigma(1)} \otimes \cdots \otimes \ell_{\sigma(k)})(v_1, \dots, v_k).
The proof of 2 we'll leave as an exercise.
(\ell_1 \otimes \cdots \otimes \ell_k)^{\sigma\tau} = \ell_{\sigma\tau(1)} \otimes \cdots \otimes \ell_{\sigma\tau(k)}
= (\ell_{\sigma(1)} \otimes \cdots \otimes \ell_{\sigma(k)})^\tau
= ((\ell_1 \otimes \cdots \otimes \ell_k)^\sigma)^\tau.
We claim
Proposition 1.4.6. For T \in L^k(V) and \sigma \in S_k,
1. (Alt T)^\sigma = (-1)^\sigma \, Alt T,
2. if T \in \mathcal{A}^k(V), Alt T = k! \, T,
3. Alt T^\sigma = (Alt T)^\sigma,
4. the map
Alt : L^k(V) \to L^k(V), \quad T \mapsto Alt(T)
is linear.
Proof. To prove 1 we note that by Proposition (1.4.4):
(Alt T)^\sigma = \sum_\tau (-1)^\tau (T^\tau)^\sigma
= \sum_\tau (-1)^\tau T^{\tau\sigma} = (-1)^\sigma \sum_\tau (-1)^{\tau\sigma} T^{\tau\sigma}.
But as \tau runs over S_k, \tau\sigma runs over S_k, and hence the right hand side is (-1)^\sigma Alt(T).
Proof of 2. If T \in \mathcal{A}^k, then T^\tau = (-1)^\tau T, so
Alt T = \sum_\tau (-1)^\tau T^\tau
= \sum_\tau (-1)^\tau (-1)^\tau T
= k! \, T.
Proof of 3.
Alt T^\sigma = \sum_\tau (-1)^\tau (T^\sigma)^\tau = (-1)^\sigma \sum_\tau (-1)^{\sigma\tau} T^{\sigma\tau}
= (-1)^\sigma Alt T = (Alt T)^\sigma.
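The operator Alt and the sign character are easy to implement directly from the definitions, which gives a concrete check of Proposition 1.4.6 and of the multiplicativity of (-1)^\sigma. The code below is our own illustrative sketch; the convention T^\sigma(v_1, \dots, v_k) = T(v_{\sigma(1)}, \dots, v_{\sigma(k)}) is fixed arbitrarily, and the stated properties hold for either convention:

```python
from itertools import permutations
from math import factorial

def sign(p):
    # (-1)^sigma, computed by counting inversions of the tuple p
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def Alt(T, k):
    # Alt T = sum over sigma in S_k of (-1)^sigma T^sigma
    def A(*vs):
        return sum(sign(p) * T(*(vs[i] for i in p)) for p in permutations(range(k)))
    return A

# (-1)^{sigma tau} = (-1)^sigma (-1)^tau
for p in permutations(range(3)):
    for q in permutations(range(3)):
        comp = tuple(p[q[i]] for i in range(3))
        assert sign(comp) == sign(p) * sign(q)

# Alt T is alternating: swapping the two arguments flips the sign
T = lambda v, w: v[0] * w[1] + 3.0 * v[2] * w[0]
A = Alt(T, 2)
v, w = (1.0, 2.0, -1.0), (0.5, -3.0, 2.0)
assert abs(A(v, w) + A(w, v)) < 1e-12

# for an already alternating tensor S, Alt S = k! S (property 2)
S = lambda v, w: v[0] * w[1] - v[1] * w[0]
assert abs(Alt(S, 2)(v, w) - factorial(2) * S(v, w)) < 1e-12
```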
e_I^* = e_{i_1}^* \otimes \cdots \otimes e_{i_k}^*
and
\psi_I = Alt(e_I^*).
Proposition 1.4.8. 1. \psi_{I^\sigma} = (-1)^\sigma \psi_I.
2. If I is repeating, \psi_I = 0.
\psi_I = \psi_{I^\tau} = (-1)^\tau \psi_I = -\psi_I.
Proof of 3: By definition
\psi_I(e_{j_1}, \dots, e_{j_k}) = \sum_\tau (-1)^\tau (e_I^*)^\tau(e_{j_1}, \dots, e_{j_k}).
But by (1.3.10)
(1.4.9) e_I^*(e_{j_1}, \dots, e_{j_k}) = \begin{cases} 1 & \text{if } I = J \\ 0 & \text{if } I \ne J. \end{cases}
Since
k! \, T = Alt(T)
T = \frac{1}{k!} \sum a_J \, Alt(e_J^*) = \sum b_J \, \psi_J.
We can discard all repeating terms in this sum since they are zero; and for every non-repeating term, J, we can write J = I^\sigma, where I is strictly increasing, and hence \psi_J = (-1)^\sigma \psi_I.
Conclusion:
Claim.
I = (i1 , . . . , ik )
I = 0
(1.4.13) Ak = {0} .
Exercises.
(i) = i + 1 , i = 1, . . . , k 1
I 1 (V ) = {0} .
T T = 1 i1 i i r 1 s
(1.5.1) T^\tau = (-1)^\tau T + S
where S is in \mathcal{I}^k.
Corollary. If T \in L^k, then
(1.5.2) Alt(T) = k! \, T + W,
where W is in \mathcal{I}^k.
Proof. By definition Alt(T) = \sum_\sigma (-1)^\sigma T^\sigma, and by Proposition 1.5.4, T^\sigma = (-1)^\sigma T + W_\sigma, with W_\sigma \in \mathcal{I}^k. Thus
Alt(T) = \sum_\sigma (-1)^\sigma (-1)^\sigma T + \sum_\sigma (-1)^\sigma W_\sigma
= k! \, T + W
where W = \sum_\sigma (-1)^\sigma W_\sigma.
with W I k .
1
T2 = k! W.
so T1 = 0, and hence T2 = 0.
Let
(1.5.3) \Lambda^k(V^*) = L^k(V)/\mathcal{I}^k(V),
(1.5.4) \pi : L^k \to \Lambda^k, \quad T \mapsto T + \mathcal{I}^k
Exercises.
1 and by the author of these notes in his book with Alan Pollack, Differential Topology
: k Ak
(i1 , . . . , in ) (i1 + 0, i2 + 1, . . . , ik + k 1)
Claim.
This wedge product is well defined, i.e., doesn't depend on our choices of T_1 and T_2.
(T1 T2 ) = (T1 T2 ) .
(1.6.2) \omega_1 \wedge \omega_2 \wedge \omega_3 = (\omega_1 \wedge \omega_2) \wedge \omega_3 = \omega_1 \wedge (\omega_2 \wedge \omega_3).
(1.6.3) \lambda(\omega_1 \wedge \omega_2) = (\lambda\omega_1) \wedge \omega_2 = \omega_1 \wedge (\lambda\omega_2)
(1.6.4) (\omega_1 + \omega_2) \wedge \omega_3 = \omega_1 \wedge \omega_3 + \omega_2 \wedge \omega_3
and
(1.6.5) \omega_1 \wedge (\omega_2 + \omega_3) = \omega_1 \wedge \omega_2 + \omega_1 \wedge \omega_3.
(1.6.6) \Lambda^1(V^*) = V^* = L^1(V^*).
(1.6.7) \omega_1 \wedge \cdots \wedge \omega_k = \pi(T) \in \Lambda^k(V^*).
(1.6.9) \pi(T^\sigma) = (-1)^\sigma \pi(T).
(1.6.10) \omega_1 \wedge \omega_2 = -\omega_2 \wedge \omega_1
(1.6.11) \omega_1 \wedge \omega_2 \wedge \omega_3 = -\omega_2 \wedge \omega_1 \wedge \omega_3 = \omega_2 \wedge \omega_3 \wedge \omega_1.
More generally, it's easy to deduce from (1.6.8) the following result (which we'll leave as an exercise).
Theorem 1.6.1. If \omega_1 \in \Lambda^r and \omega_2 \in \Lambda^s then
(1.6.12) \omega_1 \wedge \omega_2 = (-1)^{rs} \, \omega_2 \wedge \omega_1.
Exercises:
\omega = e_1^* \wedge f_1^* + \cdots + e_k^* \wedge f_k^*
\omega^k = k! \, e_1^* \wedge f_1^* \wedge \cdots \wedge e_k^* \wedge f_k^*.
(1.7.2) \iota_v T = \iota_{v_1} T + \iota_{v_2} T,
and if T = T_1 + T_2
(1.7.3) \iota_v T = \iota_v T_1 + \iota_v T_2,
and we will leave for you to verify by inspection the following two lemmas:
Lemma 1.7.1. If T is the decomposable k-tensor \ell_1 \otimes \cdots \otimes \ell_k then
(1.7.4) \iota_v T = \sum_r (-1)^{r-1} \ell_r(v) \, \ell_1 \otimes \cdots \otimes \hat\ell_r \otimes \cdots \otimes \ell_k
where the cap over \ell_r means that it's deleted from the tensor product,
and
Lemma 1.7.2. If T_1 \in L^p and T_2 \in L^q then
(1.7.5) \iota_v(T_1 \otimes T_2) = \iota_v T_1 \otimes T_2 + (-1)^p \, T_1 \otimes \iota_v T_2.
We will next show that
(1.7.6) \iota_v(\iota_v T) = 0.
v (1 k ) = v (T )
= v T + (1)k1 (v)T
by (1.7.5). Hence
v (v (T )) = v (v T ) + (1)k2 (v)v T
+(1)k1 (v)v T .
But by induction the first summand on the right is zero and the two
remaining summands cancel each other out.
(1.7.7) \iota_{v_1}\iota_{v_2} = -\iota_{v_2}\iota_{v_1}.
0 = \iota_v\iota_v = (\iota_{v_1} + \iota_{v_2})(\iota_{v_1} + \iota_{v_2})
= \iota_{v_1}\iota_{v_1} + \iota_{v_1}\iota_{v_2} + \iota_{v_2}\iota_{v_1} + \iota_{v_2}\iota_{v_2}
= \iota_{v_1}\iota_{v_2} + \iota_{v_2}\iota_{v_1}
v T = v T1 T2
+(1)p T1 v ( ) T2
+(1)p+2 T1 v T2 .
However, the first and the third terms on the right are redundant
and
v ( ) = (v) (v)
by (1.7.4).
(1.7.8) v = (v T ) .
(1.7.12) \iota_{v_1}\iota_{v_2} = -\iota_{v_2}\iota_{v_1}.
Exercises:
(1.7.15) : V n1 , v v ,
V \times V \to V
by setting
e_1 \times e_2 = e_3
(1.7.17) e_2 \times e_3 = e_1
e_3 \times e_1 = e_2.
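The product (1.7.17) is the familiar cross product on R^3. A minimal sketch (ours, not from the text) implementing it and checking the defining relations together with the antisymmetry it inherits from the wedge:

```python
def cross(v, w):
    # v x w in R^3, consistent with e1 x e2 = e3, e2 x e3 = e1, e3 x e1 = e2
    return (v[1] * w[2] - v[2] * w[1],
            v[2] * w[0] - v[0] * w[2],
            v[0] * w[1] - v[1] * w[0])

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert cross(e1, e2) == e3
assert cross(e2, e3) == e1
assert cross(e3, e1) == e2

# anticommutativity and v x v = 0, as for any wedge-type product
v, w = (1, 2, 3), (4, 5, 6)
assert cross(v, w) == tuple(-c for c in cross(w, v))
assert cross(v, v) == (0, 0, 0)
```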
A T = A 1 A k
(1.8.2) A = (A T ) .
Claim:
(A T ) = (A T ) .
A^* : \Lambda^k(W^*) \to \Lambda^k(V^*),
(i) If \omega_i \in \Lambda^{k_i}(W^*), i = 1, 2, then
(1.8.3) A^*(\omega_1 \wedge \omega_2) = A^*\omega_1 \wedge A^*\omega_2.
(1.8.4) B^* A^* = (AB)^*.
A^* : \Lambda^n(V^*) \to \Lambda^n(V^*),
(1.8.5) A^*\omega = \det(A)\,\omega
(AB)^*\omega = \det(AB)\,\omega
= B^*(A^*\omega) = \det(B)\,A^*\omega
= \det(B)\det(A)\,\omega,
A = B IW
is in n (W ) it is zero.
and since IW
A^*(f_1^* \wedge \cdots \wedge f_n^*) = A^*f_1^* \wedge \cdots \wedge A^*f_n^*
= \sum (a_{1,k_1} e_{k_1}^*) \wedge \cdots \wedge (a_{n,k_n} e_{k_n}^*)
k_i = \sigma(i), \quad i = 1, \dots, n.
But
(e_{\sigma(1)}^* \wedge \cdots \wedge e_{\sigma(n)}^*) = (-1)^\sigma e_1^* \wedge \cdots \wedge e_n^*
so we get finally the formula
where
(1.8.8) \det[a_{i,j}] = \sum_\sigma (-1)^\sigma a_{1,\sigma(1)} \cdots a_{n,\sigma(n)}
summed over \sigma \in S_n. The sum on the right is (as most of you know) the determinant of [a_{i,j}].
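Formula (1.8.8) can be used verbatim as an (inefficient, O(n!)) algorithm for the determinant. The sketch below is our own illustration; it also checks the multiplicativity det(AB) = det(A) det(B) that the wedge-product argument yields:

```python
from itertools import permutations
from math import prod

def sign(p):
    # (-1)^sigma via inversion count
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def det(a):
    # det[a_{i,j}] = sum over sigma of (-1)^sigma a_{1,sigma(1)} ... a_{n,sigma(n)}
    n = len(a)
    return sum(sign(p) * prod(a[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[2, 1, 0], [0, 3, -1], [4, 0, 5]]
assert det(A) == 26                      # agrees with cofactor expansion

# multiplicativity, det(AB) = det(A) det(B)
B = [[1, 2, 0], [0, 1, 1], [1, 0, 1]]
AB = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
assert det(AB) == det(A) * det(B)
```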
Notice that if V = W and e_i = f_i, i = 1, \dots, n, then \omega = e_1^* \wedge \cdots \wedge e_n^* = f_1^* \wedge \cdots \wedge f_n^*, hence by (1.8.5) and (1.8.7),
Exercises.
1.9 Orientations
L1 = {v > 0}
and
L2 = {v, < 0} .
Then by (1.7.7)
f_1^* \wedge \cdots \wedge f_n^* = \det[a_{i,j}] \, e_1^* \wedge \cdots \wedge e_n^*
so we conclude:
Proposition 1.9.2. If e1 , . . . , en is positively oriented, then f1 , . . . , fn
is positively oriented if and only if det[ai,j ] is positive.
Corollary 1.9.3. If e_1, \dots, e_n is a positively oriented basis of V, the basis: e_1, \dots, e_{i-1}, -e_i, e_{i+1}, \dots, e_n is negatively oriented.
Now let V be a vector space of dimension n > 1 and W a sub-
space of dimension k < n. We will use the result above to prove the
following important theorem.
Theorem 1.9.4. Given orientations on V and V /W , one gets from
these orientations a natural orientation on W .
Remark. What we mean by "natural" will be explained in the course of the proof.
Exercises.
8. A key step in the proof of Theorem 1.9.4 was the assertion that
the matrix A expressing the vectors, ei , as linear combinations of the
vectors, fi , had to have the form (1.9.2). Why is this the case?
B : V/W \to V/W
with the property
\pi A = B \pi,
\pi being the projection of V onto V/W.
(b) Assume that V , W and V /W are compatibly oriented. Show
that if A is orientation-preserving and its restriction to W is orien-
tation preserving then B is orientation preserving.
\omega_{\mathrm{vol}} = e_1^* \wedge \cdots \wedge e_n^* \in \Lambda^n(V^*)
and b_{i,j} = B(v_i, v_j), the matrices A = [a_{i,j}] and B = [b_{i,j}] are related by: B = A^t A.
(b) Show that if \omega is the volume form, e_1^* \wedge \cdots \wedge e_n^*, and A is orientation preserving
A^*\omega = (\det B)^{1/2} \, \omega.
n (V )
= An (V ) .
CHAPTER 2
DIFFERENTIAL FORMS
The identification
(2.1.2) T_p\mathbb{R}^n \to \mathbb{R}^n, \quad (p, v) \mapsto v
\lambda(p, v) = (p, \lambda v).
Df(p) : \mathbb{R}^n \to \mathbb{R}^m
df_p : T_p\mathbb{R}^n \to T_q\mathbb{R}^m
to be the map
It's clear from the way we've defined vector space structures on T_p\mathbb{R}^n and T_q\mathbb{R}^m that this map is linear.
Suppose that the image of f is contained in an open set, V, and suppose g : V \to \mathbb{R}^k is a C^1 map. Then the base-pointed version of the chain rule asserts that
(2.1.5) p \in \mathbb{R}^n \mapsto (p, v)
is a vector field. Vector fields of this type are constant vector fields.
p \in U \mapsto f(p)v(p).
Exercise.
(2.1.9) L_v(f_1 f_2) = f_1 L_v f_2 + f_2 L_v f_1.
is an integral curve.
Theorem 2.1.5 (Smooth dependence on initial data). Let v be a C^\infty-vector field, on an open subset, V, of U, I \subseteq \mathbb{R} an open interval, a \in I a point on this interval and h : V \times I \to U a mapping with the properties:
(i) h(p, a) = p.
(ii) The curve
\gamma_p : I \to U, \quad \gamma_p(t) = h(p, t)
is an integral curve of v.
Then the mapping, h, is C^\infty.
(2.1.11) \gamma_c : I_c \to U, \quad \gamma_c(t) = \gamma(t + c)
is an integral curve.
We recall that a C^1-function \varphi : U \to \mathbb{R} is an integral of the system (2.1.11) if for every integral curve \gamma(t), the function t \mapsto \varphi(\gamma(t)) is constant. This is true if and only if for all t and p = \gamma(t)
0 = \frac{d}{dt}\varphi(\gamma(t)) = (D\varphi)_p \frac{d\gamma}{dt} = (D\varphi)_p(v)
where (p, v) = v(p). But by (2.1.6) the term on the right is L_v\varphi(p). Hence we conclude
Theorem 2.1.7. \varphi \in C^1(U) is an integral of the system (2.1.11) if and only if L_v\varphi = 0.
We'll now discuss a class of objects which are in some sense dual objects to vector fields. For each p \in \mathbb{R}^n let (T_p\mathbb{R}^n)^* be the dual vector space to T_p\mathbb{R}^n, i.e., the space of all linear mappings, \ell : T_p\mathbb{R}^n \to \mathbb{R}.
Definition 2.1.8. Let U be an open subset of \mathbb{R}^n. A one-form on U is a function, \omega, which assigns to each point, p, of U a vector, \omega_p, in (T_p\mathbb{R}^n)^*.
Some examples:
(2.1.12) dfp : Tp Rn Tc R
Tc R = {c, R} = R
Exercise.
\gamma : [0, b) \to U,
i. b = +\infty
or
ii. |\gamma(t)| \to +\infty as t \to b
or
\{\gamma(t), \ 0 \le t < b\}
Hence if we can exclude ii. and iii. we'll have shown that an integral curve with \gamma(0) = p exists for all positive time. A simple criterion for excluding ii. and iii. is the following.
Lemma 2.1.9. The scenarios ii. and iii. can't happen if there exists a proper C^1-function, \varphi : U \to \mathbb{R}, with L_v\varphi = 0.
Example.
Let U = \mathbb{R}^2 and let v be the vector field
v = x^3 \frac{\partial}{\partial y} - y^3 \frac{\partial}{\partial x}.
\operatorname{supp} v = \overline{\{q \in U, \ v(q) \ne 0\}},
\frac{d\gamma_0}{dt}(t) = 0 = v(p),
so it is an integral curve of v. Hence if \gamma(t), a < t < b, is any integral curve of v with the property, \gamma(t_0) = p, for some t_0, it has to coincide with \gamma_0 on the interval, a < t < b, and hence has to be the constant curve, \gamma(t) = p, on this interval.
Now suppose the support, A, of v is compact. Then either \gamma(t) is in A for all t or is in U - A for some t_0. But if this happens, and
f_t : U \to U
(2.1.19) f_t \circ f_a = f_{t+a}.
Writing
v = \sum_{i=1}^n v_i \frac{\partial}{\partial x_i}, \quad v_i \in C^k(U)
and
w = \sum_{j=1}^m w_j \frac{\partial}{\partial y_j}, \quad w_j \in C^k(V)
\gamma : I \to U_1
(To see this note that if f(p) = q then at the point p the right hand side is
(d\varphi)_q \circ df_p(v_1(p))
by the chain rule and by definition the left hand side is
d\varphi_q(v_2(q)).)
Moreover, by definition
(2.1.24) \mu(q) : T_q\mathbb{R}^m \to \mathbb{R}
df_p : T_p\mathbb{R}^n \to T_q\mathbb{R}^m
\mu_q \circ df_p : T_p\mathbb{R}^n \to \mathbb{R},
\mu_q \circ df_p = d\varphi_q \circ df_p = d(\varphi \circ f)_p
i.e.,
(2.1.25) f^* d\varphi = d f^*\varphi.
Exercises.
(2.1.25') f^* d\varphi = d f^*\varphi.
t \in \mathbb{R} \mapsto (r\cos(t + \theta), \ r\sin(t + \theta))
is the unique integral curve of v passing through the point, (r\cos\theta, r\sin\theta), at t = 0.
(b) Let U = \mathbb{R}^n and let v be the constant vector field: \sum c_i \, \partial/\partial x_i. Show that the curve
t \in \mathbb{R} \mapsto a + t(c_1, \dots, c_n)
(a) f_t : \mathbb{R} \to \mathbb{R}, \quad f_t(x) = x + t
(b) f_t : \mathbb{R} \to \mathbb{R}, \quad f_t(x) = e^t x
(c) f_t : \mathbb{R}^2 \to \mathbb{R}^2, \quad f_t(x, y) = (\cos t \, x - \sin t \, y, \ \sin t \, x + \cos t \, y)
\exp tA = I + tA + \frac{t^2}{2!}A^2 + \frac{t^3}{3!}A^3 + \cdots
converges and defines a one-parameter group of diffeomorphisms of \mathbb{R}^n.
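A hedged numerical illustration of this exercise (our own code, not part of the notes): summing the series for a rotation generator and checking the one-parameter group law exp((s+t)A) = exp(sA) exp(tA). The truncation at 30 terms is an arbitrary choice that is ample for the small t used here.

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_exp(A, t, terms=30):
    # partial sum of exp(tA) = I + tA + (tA)^2/2! + ...
    n = len(A)
    term = [[float(i == j) for j in range(n)] for i in range(n)]   # running (tA)^k / k!
    out = [row[:] for row in term]
    for k in range(1, terms):
        term = mat_mul(term, [[t * a / k for a in row] for row in A])
        out = [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(out, term)]
    return out

A = [[0.0, -1.0], [1.0, 0.0]]            # generator of rotations of R^2

# one-parameter group law: exp((s+t)A) = exp(sA) exp(tA)
s, t = 0.3, 0.5
lhs = mat_exp(A, s + t)
rhs = mat_mul(mat_exp(A, s), mat_exp(A, t))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9 for i in range(2) for j in range(2))

# for this A, exp(tA) is rotation by the angle t
assert abs(mat_exp(A, t)[0][0] - math.cos(t)) < 1e-9
assert abs(mat_exp(A, t)[1][0] - math.sin(t)) < 1e-9
```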
x(t) = \frac{a}{1 - at}
for all \varphi \in C^\infty(U).
9. The vector field w in exercise 8 is called the Lie bracket of
the vector fields v1 and v2 and is denoted [v1 , v2 ]. Verify that Lie
bracket satisfies the identities
[v_1, v_2] = -[v_2, v_1]
and
[v_1, [v_2, v_3]] + [v_2, [v_3, v_1]] + [v_3, [v_1, v_2]] = 0.
Hint: Prove analogous identities for Lv1 , Lv2 and Lv3 .
10. Let v_1 = \partial/\partial x_i and v_2 = \sum g_j \, \partial/\partial x_j. Show that
[v_1, v_2] = \sum_j \frac{\partial g_j}{\partial x_i} \frac{\partial}{\partial x_j}.
f w = (f1 w) .
f Lw = Lf w f .
Hint: (2.1.26).
f_t : U \to U, \quad -\infty < t < \infty.
L[v,w] = Lw
where
d
w = f w |t=0 .
dt t
Hint: Differentiate the identity
ft Lw = Lwt ft
Lw + Lw (Lv ) .
16. Let
\omega = \frac{x_1 \, dx_2 - x_2 \, dx_1}{x_1^2 + x_2^2} \in \Omega^1(\mathbb{R}^2 - \{0\}),
and let \gamma : [0, 2\pi] \to \mathbb{R}^2 - \{0\} be the closed curve, t \mapsto (\cos t, \sin t). Compute the line integral, \int_\gamma \omega, and show that it's not zero. Conclude that \omega can't be df of a function, f \in C^\infty(\mathbb{R}^2 - \{0\}).
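A numerical version of exercise 16 (our own sketch): approximating the line integral of \omega over the unit circle by a Riemann sum. On this curve the integrand reduces to dt, so the sum converges to 2\pi, which is nonzero.

```python
import math

def omega(x, dx):
    # omega = (x1 dx2 - x2 dx1)/(x1^2 + x2^2), evaluated on the tangent vector dx at x
    x1, x2 = x
    return (x1 * dx[1] - x2 * dx[0]) / (x1 * x1 + x2 * x2)

# Riemann-sum approximation of the line integral over the unit circle
N = 20000
total = 0.0
for i in range(N):
    t = 2.0 * math.pi * i / N
    x = (math.cos(t), math.sin(t))
    dx = (-math.sin(t) * 2.0 * math.pi / N, math.cos(t) * 2.0 * math.pi / N)
    total += omega(x, dx)

# the integral is 2*pi, not 0, so omega is closed but not exact on R^2 - {0}
assert abs(total - 2.0 * math.pi) < 1e-6
```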
2.2 k-forms
One-forms are the bottom tier in a pyramid of objects whose k-th tier is the space of k-forms. More explicitly, given p \in \mathbb{R}^n we can, as in \S 1.5, form the k-th exterior powers
(2.2.1) \Lambda^k(T_p^*\mathbb{R}^n), \quad k = 1, 2, 3, \dots, n
(2.2.2) \Lambda^1(T_p^*\mathbb{R}^n) = T_p^*\mathbb{R}^n
Example 1.
Example 2.
Let f_i, i = 1, \dots, k, be real-valued C^\infty functions on U. Letting \omega_i = df_i we get from (2.2.3) a k-form
(2.2.4) df_1 \wedge \cdots \wedge df_k
whose value at p is the wedge product
(2.2.5) (df_1)_p \wedge \cdots \wedge (df_k)_p.
Since (dx_1)_p, \dots, (dx_n)_p are a basis of T_p^*\mathbb{R}^n, the wedge products
(2.2.6) (dx_{i_1})_p \wedge \cdots \wedge (dx_{i_k})_p, \quad 1 \le i_1 < \cdots < i_k \le n
are a basis of \Lambda^k(T_p^*). To keep our multi-index notation from getting out of hand, we'll denote these basis vectors by (dx_I)_p, where I = (i_1, \dots, i_k) and the I's range over multi-indices of length k which are strictly increasing. Since these wedge products are a basis of \Lambda^k(T_p^*\mathbb{R}^n) every element of \Lambda^k(T_p^*\mathbb{R}^n) can be written uniquely as a sum
\sum c_I (dx_I)_p, \quad c_I \in \mathbb{R}
and every k-form, \omega, on U can be written uniquely as a sum
(2.2.7) \omega = \sum f_I \, dx_I
where dx_I is the k-form, dx_{i_1} \wedge \cdots \wedge dx_{i_k}, and f_I is a real-valued function,
f_I : U \to \mathbb{R}.
Definition 2.2.2. The k-form (2.2.7) is of class C^r if each of the f_I's is in C^r(U).
Henceforth we'll assume, unless otherwise stated, that all the k-forms we consider are of class C^\infty, and we'll denote the space of these k-forms by \Omega^k(U).
We will conclude this section by discussing a few simple operations
on k-forms.
2. Given \omega_i \in \Omega^k(U), i = 1, 2, we define \omega_1 + \omega_2 \in \Omega^k(U) to be the k-form
p \in U \mapsto (\omega_1)_p + (\omega_2)_p \in \Lambda^k(T_p^*\mathbb{R}^n).
(Notice that this sum makes sense since each summand is in \Lambda^k(T_p^*\mathbb{R}^n).)
Exercises.
(a) \omega_1 \wedge \omega_2.
(b) \omega_2 \wedge \omega_3.
(c) \omega_3 \wedge \omega_1.
(d) \omega_1 \wedge \omega_2 \wedge \omega_3.
\omega = dx_1 \wedge \cdots \wedge dx_n.
j1 i1 , . . . , jk ik .
(2.3.1) d : \Omega^k(U) \to \Omega^{k+1}(U).
(2.3.3) d(d\omega) = 0.
(2.3.4) d(df) = 0
as claimed.)
Hence (2.3.7) implies (2.3.6) for all multi-indices I. The same argument shows that for any sum over indices, I, of length k
\sum f_I \, dx_I
(As above we can ignore the repeating I's, since for these I's, dx_I = 0, and by (2.3.8) we can make the non-repeating I's strictly increasing.)
Suppose now that \omega_1 \in \Omega^k(U) and \omega_2 \in \Omega^\ell(U). Writing
\omega_1 = \sum f_I \, dx_I
and
\omega_2 = \sum g_J \, dx_J
or
\left( \sum df_I \wedge dx_I \right) \wedge \left( \sum g_J \, dx_J \right) + (-1)^k \left( \sum f_I \, dx_I \right) \wedge \left( \sum dg_J \wedge dx_J \right),
or finally:
d\omega_1 \wedge \omega_2 + (-1)^k \, \omega_1 \wedge d\omega_2.
Thus the d defined by (2.3.7) has Property II. Let's now check that it has Property III. If \omega = \sum f_I \, dx_I, f_I \in C^\infty(U), then by definition, d\omega = \sum df_I \wedge dx_I and by (2.3.6) and (2.3.2)
d(d\omega) = \sum d(df_I) \wedge dx_I,
so by (2.3.7)
d(df) = d\left( \sum_{j=1}^n \frac{\partial f}{\partial x_j} \, dx_j \right)
= \sum_{j=1}^n \left( \sum_{i=1}^n \frac{\partial^2 f}{\partial x_i \partial x_j} \, dx_i \right) \wedge dx_j
= \sum_{i,j} \frac{\partial^2 f}{\partial x_i \partial x_j} \, dx_i \wedge dx_j.
Notice, however, that in this sum, dx_i \wedge dx_j = -dx_j \wedge dx_i and
\frac{\partial^2 f}{\partial x_i \partial x_j} = \frac{\partial^2 f}{\partial x_j \partial x_i}
so the (i, j) term cancels the (j, i) term, and the total sum is zero.
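The cancellation above rests entirely on the equality of mixed partials. A quick numerical illustration (our own, with an arbitrarily chosen sample function) comparing both orders of differentiation against the analytic mixed partial:

```python
import math

def f(x, y):
    # an arbitrary smooth sample function
    return math.sin(x * y) + x**3 * y

h = 1e-4

def dx(g, x, y):
    # central difference in x
    return (g(x + h, y) - g(x - h, y)) / (2 * h)

def dy(g, x, y):
    # central difference in y
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

x0, y0 = 1.0, 2.0
fxy = dx(lambda x, y: dy(f, x, y), x0, y0)   # d/dx (df/dy)
fyx = dy(lambda x, y: dx(f, x, y), x0, y0)   # d/dy (df/dx)

# analytically, d^2 f / dx dy = cos(xy) - xy sin(xy) + 3x^2
exact = math.cos(x0 * y0) - x0 * y0 * math.sin(x0 * y0) + 3 * x0**2
assert abs(fxy - exact) < 1e-4
assert abs(fyx - exact) < 1e-4
assert abs(fxy - fyx) < 1e-6
```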
Exercises:
much sense for zero-forms since there aren't any -1-forms. However, if f \in C^\infty(U) and df = 0 then f is constant on connected components of U. (See \S 2.1, exercise 2.)
(2.3.13) \omega = dt \wedge \alpha + \beta
\frac{d\alpha}{dt} = \sum \frac{d}{dt} f_I(x, t) \, dx_I
and
d_U \alpha = \sum_I \left( \sum_{i=1}^n \frac{\partial}{\partial x_i} f_I(x, t) \, dx_i \right) \wedge dx_I.
Show that
d\alpha = dt \wedge \frac{d\alpha}{dt} + d_U \alpha.
(c) Let \omega be the form (2.3.13). Show that
d\omega = dt \wedge \frac{d\beta}{dt} - dt \wedge d_U\alpha + d_U\beta
and conclude that \omega is closed if and only if
(2.3.14) \frac{d\beta}{dt} = d_U\alpha
and d_U\beta = 0.
(2.3.15) \frac{d\nu}{dt} = \alpha.
Hint: Let \alpha = \sum f_I(x, t) \, dx_I and \nu = \sum g_I(x, t) \, dx_I. The equation (2.3.15) reduces to the system of equations
(2.3.16) \frac{d}{dt} g_I(x, t) = f_I(x, t).
Let c be a point on the interval, A, and using freshman calculus show that (2.3.16) has a unique solution, g_I(x, t), with g_I(x, c) = 0.
(2.3.17) d
is reduced.
(f) Let
\omega = \sum h_I(x, t) \, dx_I
be a reduced k-form. Deduce from (2.3.14) that if \omega is closed then
\frac{d}{dt} h_I = 0 and d_U \omega = 0. Conclude that h_I(x, t) = h_I(x) and that
\omega = \sum h_I(x) \, dx_I
(2.4.1) \omega_p(v(p)).
linear in v:
(2.4.6) \iota(v)(\iota(v)\omega) = 0.
(2.4.7) \omega = \mu_1 \wedge \cdots \wedge \mu_k,
then
(2.4.8) \iota(v)\omega = \sum_{r=1}^k (-1)^{r-1} \mu_r(v) \, \mu_1 \wedge \cdots \wedge \hat\mu_r \wedge \cdots \wedge \mu_k.
We will also leave for you to prove the following two assertions, both of which are special cases of (2.4.8). If v = \partial/\partial x_i and \omega = dx_I = dx_{i_1} \wedge \cdots \wedge dx_{i_k} then
(2.4.9) \iota(v)\omega = \sum_{r=1}^k (-1)^{r-1} \delta_{i, i_r} \, dx_{I_r}
where
\delta_{i, i_r} = \begin{cases} 1, & i = i_r \\ 0, & i \ne i_r \end{cases}
and I_r = (i_1, \dots, \hat{i}_r, \dots, i_k), and if v = \sum f_i \, \partial/\partial x_i and \omega = dx_1 \wedge \cdots \wedge dx_n then
(2.4.10) \iota(v)\omega = \sum (-1)^{r-1} f_r \, dx_1 \wedge \cdots \wedge \widehat{dx_r} \wedge \cdots \wedge dx_n.
(2.4.12) dL_v\omega = L_v \, d\omega
(2.4.13) L_v(\omega_1 \wedge \omega_2) = L_v\omega_1 \wedge \omega_2 + \omega_1 \wedge L_v\omega_2
By (2.4.13)
L_v \, dx_I = \sum_{r=1}^k dx_{i_1} \wedge \cdots \wedge L_v \, dx_{i_r} \wedge \cdots \wedge dx_{i_k},
and by (2.4.12)
L_v \, dx_{i_r} = dL_v x_{i_r}
L_v x_{i_r} = g_{i_r}
and
L_v f_I = \sum g_i \frac{\partial f_I}{\partial x_i}.
We will leave the verification of (2.4.12) and (2.4.13) as exercises, and also ask you to prove (by the method of computation that we've just sketched) the divergence formula
(2.4.14) L_v(dx_1 \wedge \cdots \wedge dx_n) = \left( \sum \frac{\partial g_i}{\partial x_i} \right) dx_1 \wedge \cdots \wedge dx_n.
Exercises:
2. Show that if \omega is the k-form, dx_I, and v the vector field, \partial/\partial x_i, then \iota(v)\omega is given by (2.4.9).
3. Show that if \omega is the n-form, dx_1 \wedge \cdots \wedge dx_n, and v the vector field, \sum f_i \, \partial/\partial x_i, \iota(v)\omega is given by (2.4.10).
dL_v = L_v \, d
and
\iota(v) L_v = L_v \, \iota(v).
Hint: Deduce the first of these identities from the identity d(d\omega) = 0 and the second from the identity \iota(v)(\iota(v)\omega) = 0.)
5. Given \omega_i \in \Omega^{k_i}(U), i = 1, 2, show that
L_v(\omega_1 \wedge \omega_2) = L_v\omega_1 \wedge \omega_2 + \omega_1 \wedge L_v\omega_2.
f^*(\omega_1 + \omega_2) = f^*\omega_1 + f^*\omega_2.
In other words
(2.5.7) f^*(\omega_1 \wedge \omega_2) = f^*\omega_1 \wedge f^*\omega_2.
Thus if \omega is in \Omega^k(W)
(2.5.8) f^*(g^*\omega) = (g \circ f)^*\omega.
so
f^* dx_I = f^* dx_{i_1} \wedge \cdots \wedge f^* dx_{i_k}
by (2.5.7), and by (2.5.6)
f^* dx_i = d f^* x_i = df_i
(2.5.12) d f^*\omega = f^* d\omega.
= f^* d\omega.
In other words
(2.5.13) f^*(dx_1 \wedge \cdots \wedge dx_n) = \det\left[\frac{\partial f_i}{\partial x_j}\right] dx_1 \wedge \cdots \wedge dx_n.
f_1 : U \to U, \quad f_1(p) = p,
f_0 : U \to U, \quad f_0(p) = p_0.
From the theorem above it's easy to see that the Poincare lemma holds for contractible open subsets of \mathbb{R}^n: if U is contractible, every closed k-form on U of degree k > 0 is exact. (Proof: Let \omega be such a form. Then for the identity map f_1^*\omega = \omega and for the constant map, f_0^*\omega = 0.)
Exercises.
(a) \omega = x_2 \, dx_3
(b) \omega = x_1 \, dx_1 \wedge dx_3
(c) \omega = x_1 \, dx_1 \wedge dx_2 \wedge dx_3
(i.e., none of the summands involve dt). For a reduced form, \omega, let Q\omega be the form
(2.5.15) Q\omega = \sum \left( \int_0^1 f_I(x, t) \, dt \right) dx_I
(2.5.18) = dt +
(a) Prove
Theorem 2.5.4. If the form (2.5.18) is closed then
(2.5.19) 0 1 = dQ .
(2.5.20) 0 1 = dQ .
(2.5.21) f0 f1 = dQ
\{x \in \mathbb{R}^n, \ \|x\| < 1\}.
U_1 \times U_2 \subseteq \mathbb{R}^n = \mathbb{R}^{n_1} \times \mathbb{R}^{n_2}
(2.5.22) \frac{d}{dt} f_t^*\omega = f_t^* L_v\omega.
Here is a sketch of a proof:
(a) Let \gamma(t) be the curve, \gamma(t) = f_t(p), and let \varphi be a zero-form, i.e., an element of C^\infty(U). Show that
f_t^*\varphi(p) = \varphi(\gamma(t))
and by differentiating this identity at t = 0 conclude that (2.4.40) holds for zero-forms.
(b) Show that if (2.4.40) holds for \omega it holds for d\omega. Hint: Differentiate the identity
f_t^* \, d\omega = d f_t^*\omega
at t = 0.
(c) Show that if (2.4.40) holds for \omega_1 and \omega_2 it holds for \omega_1 \wedge \omega_2. Hint: Differentiate the identity
f_t^*(\omega_1 \wedge \omega_2) = f_t^*\omega_1 \wedge f_t^*\omega_2
at t = 0.
(d) Deduce (2.4.40) from a, b and c. Hint: Every k-form is a sum of wedge products of zero-forms and exact one-forms.
(2.5.26) Qt = ft (v) .
Hint: Formula (2.4.11).
(b) Let
Q\omega = \int_0^1 f_t^* \iota(v)\omega \, dt.
(2.5.27) f_1^*\omega - f_0^*\omega = dQ\omega + Q \, d\omega.
(v)f = f (w) .
the operations (2.6.1), and from the point of view of general covariance, this formulation is much more satisfactory: the only symmetries of \mathbb{R}^3 which preserve div and curl are translations and rotations, whereas the operations (2.6.1) admit all diffeomorphisms of \mathbb{R}^3 as symmetries.
To describe how grad, div and curl are related to the operations (2.6.1) we first note that there are two ways of converting vector fields into forms. The first makes use of the natural inner product, B(v, w) = \sum v_i w_i, on \mathbb{R}^n. From this inner product one gets by \S 1.2, exercise 9 a bijective linear map:
(2.6.2) L : \mathbb{R}^n \to (\mathbb{R}^n)^*
with the defining property: L(v)(w) = B(v, w). Via the identification (2.1.2) B and L can be transferred to T_p\mathbb{R}^n, giving one an inner product, B_p, on T_p\mathbb{R}^n and a bijective linear map
(2.6.3) L_p : T_p\mathbb{R}^n \to T_p^*\mathbb{R}^n.
The second way of converting vector fields into forms is via the interior product operation. Namely let \Omega be the n-form, dx_1 \wedge \cdots \wedge dx_n. Given an open subset, U, of \mathbb{R}^n and a C^\infty vector field,
(2.6.8) v = \sum f_i \frac{\partial}{\partial x_i}
on U, the interior product of v with \Omega is the (n-1)-form
(2.6.9) \iota(v)\Omega = \sum (-1)^{r-1} f_r \, dx_1 \wedge \cdots \wedge \widehat{dx_r} \wedge \cdots \wedge dx_n.
(2.6.10) v \mapsto d\iota(v)\Omega.
Moreover, by (2.4.11)
d\iota(v)\Omega = L_v\Omega
and by (2.4.14)
L_v\Omega = \operatorname{div}(v)\,\Omega
where
(2.6.11) \operatorname{div}(v) = \sum_{i=1}^n \frac{\partial f_i}{\partial x_i}.
for some vector field w, and the left-hand side of this formula determines w uniquely. Now let U be an open subset of \mathbb{R}^3 and v a
(2.6.14) \operatorname{curl} v = w,
(2.6.16) \operatorname{curl} v = g_1 \frac{\partial}{\partial x_1} + g_2 \frac{\partial}{\partial x_2} + g_3 \frac{\partial}{\partial x_3}
where
(2.6.17) g_1 = \frac{\partial f_2}{\partial x_3} - \frac{\partial f_3}{\partial x_2}, \quad g_2 = \frac{\partial f_3}{\partial x_1} - \frac{\partial f_1}{\partial x_3}, \quad g_3 = \frac{\partial f_1}{\partial x_2} - \frac{\partial f_2}{\partial x_1}.
To summarize: the grad, curl and div operations in 3 dimensions are basically just the three operations (2.6.1). The grad operation is the operation (2.6.1) in degree zero, curl is the operation (2.6.1) in degree one and div is the operation (2.6.1) in degree two. However, to define grad we had to assign an inner product, B_p, to the tangent space, T_p\mathbb{R}^n, for each p in U; to define div we had to equip U with the 3-form, \Omega; and to define curl, the most complicated of these three operations, we needed the B_p's and \Omega. This is why diffeomorphisms preserve the three operations (2.6.1) but don't preserve grad, curl and div. The additional structures which one needs to define grad, curl and div are only preserved by translations and rotations.
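The relations among these operations can be spot-checked numerically with finite differences. The sketch below is our own; the curl here follows the usual right-hand-rule sign convention, which may differ from (2.6.17) by an overall sign, but the identities curl(grad f) = 0 and div(curl v) = 0 are insensitive to that choice.

```python
import math

h = 1e-4

def partial(g, i, p):
    # central-difference partial derivative of g in direction i at the point p
    q1 = list(p); q1[i] += h
    q2 = list(p); q2[i] -= h
    return (g(q1) - g(q2)) / (2 * h)

def grad(f):
    return lambda p: [partial(f, i, p) for i in range(3)]

def div(v):
    return lambda p: sum(partial(lambda q, i=i: v(q)[i], i, p) for i in range(3))

def curl(v):
    def c(p):
        # d[i][j] = partial of the j-th component of v with respect to x_i
        d = [[partial(lambda q, j=j: v(q)[j], i, p) for j in range(3)] for i in range(3)]
        return [d[1][2] - d[2][1], d[2][0] - d[0][2], d[0][1] - d[1][0]]
    return c

p0 = [0.5, -1.0, 2.0]

# curl(grad f) = 0
f = lambda p: p[0]**2 * p[1] + math.sin(p[2])
assert all(abs(c) < 1e-4 for c in curl(grad(f))(p0))

# div(curl v) = 0
v = lambda p: [p[1] * p[2], p[0]**2, p[0] * p[1] * p[2]]
assert abs(div(curl(v))(p0)) < 1e-4
```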
(2.6.18) \operatorname{div} v_E = q
(2.6.19) \operatorname{curl} v_E = -\frac{\partial v_M}{\partial t}
(2.6.20) \operatorname{div} v_M = 0
(2.6.21) c^2 \operatorname{curl} v_M = w + \frac{\partial v_E}{\partial t}
where v_E and v_M are the electric and magnetic fields, q is the scalar charge density, w is the current density and c is the velocity of light. (To simplify (2.6.25) slightly we'll assume that our units of space-time are chosen so that c = 1.) As above let \Omega = dx_1 \wedge dx_2 \wedge dx_3 and let
(2.6.22) \omega_E = \iota(v_E)\Omega
and
(2.6.23) \omega_M = \iota(v_M)\Omega.
(2.6.18') d\omega_E = q \, \Omega
and
(2.6.20') d\omega_M = 0.
(2.6.24) M = M vE dt
and
(2.6.25) E = E vM dt
(2.6.26) = q + (w) dt .
We will leave for you to show that the four equations (2.6.18)-(2.6.21) are equivalent to two elegant and compact (3 + 1)-dimensional identities
(2.6.27) dM = 0
and
(2.6.28) dE = .
Exercises.
(See 1.6, exercise 5.) Hence since these are exactly n! non-repeating
multi-indices
i.e.,
1 n
(2.7.4) =
n!
where
(2.7.7) f_t^*\omega = \omega.
Let's see what such vector fields have to look like. Note that by (2.5.23)
(2.7.8) \frac{d}{dt} f_t^*\omega = f_t^* L_v\omega,
hence if f_t^*\omega = \omega for all t, the left hand side of (2.7.8) is zero, so
f_t^* L_v\omega = 0.
L_v\omega = d\iota(v)\omega + \iota(v) \, d\omega.
But by (2.7.2) d\omega = 0 so
L_v\omega = d\iota(v)\omega.
(2.7.9) \iota(v)\omega = dH
But
\iota\left(\frac{\partial}{\partial x_i}\right) dx_j = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases}
and
\iota\left(\frac{\partial}{\partial x_i}\right) dy_j = 0.
Thus since
dH = \sum \frac{\partial H}{\partial x_i} \, dx_i + \frac{\partial H}{\partial y_i} \, dy_i
we get from (2.7.9)-(2.7.11)
(2.7.12) f_i = \frac{\partial H}{\partial y_i} \quad \text{and} \quad g_i = -\frac{\partial H}{\partial x_i}
so v has the form:
(2.7.13) v = \sum \frac{\partial H}{\partial y_i} \frac{\partial}{\partial x_i} - \frac{\partial H}{\partial x_i} \frac{\partial}{\partial y_i}.
In particular if \gamma(t) = (x(t), y(t)) is an integral curve of v it has to satisfy the system of differential equations
(2.7.14) \frac{dx_i}{dt} = \frac{\partial H}{\partial y_i}(x(t), y(t))
\frac{dy_i}{dt} = -\frac{\partial H}{\partial x_i}(x(t), y(t)).
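The system (2.7.14) is easy to integrate numerically. The sketch below is our own; the harmonic-oscillator Hamiltonian is an illustrative choice, not from the text. It uses the symplectic Euler scheme and checks that H is (approximately) conserved along the trajectory:

```python
import math

def H(x, p):
    # harmonic-oscillator Hamiltonian (illustrative choice)
    return 0.5 * (x * x + p * p)

def step(x, p, dt):
    # one step of symplectic Euler for dx/dt = dH/dp, dp/dt = -dH/dx
    p = p - dt * x        # dp/dt = -dH/dx = -x
    x = x + dt * p        # dx/dt =  dH/dp =  p
    return x, p

x, p = 1.0, 0.0
dt, n = 1e-3, 10000
E0 = H(x, p)
for _ in range(n):
    x, p = step(x, p, dt)

# the flow conserves H up to the small bounded error of the scheme
assert abs(H(x, p) - E0) < 5e-3
# and the trajectory tracks the exact solution x(t) = cos t
assert abs(x - math.cos(dt * n)) < 0.1
```

Symplectic Euler is chosen here because, unlike plain Euler, it preserves a nearby Hamiltonian exactly, so the energy error stays bounded instead of drifting.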
The formulas (2.7.10) and (2.7.11) exhibit an important property of the Darboux form, \omega. Every one-form on U can be written uniquely as a sum
\sum f_i \, dy_i - g_i \, dx_i
with f_i and g_i in C^\infty(U) and hence (2.7.10) and (2.7.11) imply
Theorem 2.7.3. The map, v \mapsto \iota(v)\omega, sets up a one-one correspondence between vector fields and one-forms.
In particular for every C^\infty function, H, we get by this correspondence a unique vector field, v = v_H, with the property (2.7.9).
We next note that by (1.7.6)
L_v H = \iota(v) \, dH = \iota(v)(\iota(v)\omega) = 0.
Thus
(2.7.15) Lv H = 0
(2.7.16) ft =
The first set of equations are essentially just the definitions of momenta; however, if we plug them into the second set of equations we get
(2.7.20) m_i \frac{d^2 x_i}{dt^2} = -\frac{\partial V}{\partial x_i}
and, interpreting the term on the right as the force exerted on the i-th point-mass and the term on the left as mass times acceleration, this equation becomes Newton's second law.
Exercises.
(2.7.23) \{H_1, H_2\} = \sum_{i=1}^n \frac{\partial H_1}{\partial x_i}\frac{\partial H_2}{\partial y_i} - \frac{\partial H_2}{\partial x_i}\frac{\partial H_1}{\partial y_i}.
\{H_1, H_2\} = -\{H_2, H_1\}
6. Show that the following three assertions are equivalent:
(a) \{H_1, H_2\} = 0.
(b) H_1 is an integral of motion of v_2.
(c) H_2 is an integral of motion of v_1.
T_t(x_1, \dots, x_n) = (x_1 + t, \dots, x_n + t).
(a) Show that the function (2.7.18) is invariant under the group of diffeomorphisms
(x, y) \mapsto (T_t x, y).
10. Let R_t^i : \mathbb{R}^{2n} \to \mathbb{R}^{2n} be the rotation which fixes the variables, (x_k, y_k), k \ne i, and rotates (x_i, y_i) by the angle, t:
(c) Conclude that if H_r = \frac{\partial H}{\partial y_r} then H_r has to satisfy the equation
\sum_{i=1}^n y_i \frac{\partial}{\partial y_i} H_r = 0.
(d) Conclude that H_r has to be constant along the rays (x, ty), 0 \le t < \infty.
(e) Conclude finally that H_r has to be a function of x alone, i.e., doesn't depend on y.
CHAPTER 3
INTEGRATION OF FORMS
3.1 Introduction
\int_V \varphi(y) \, dy
exists if and only if the integral
\int_U (\varphi \circ f)(x) \, |\det Df(x)| \, dx
exists, and if these integrals exist they are equal. Proofs of this can be found in [?], [?] or [?]. This chapter contains an alternative proof of this result. This proof is due to Peter Lax. Our version of his proof in \S 3.5 below makes use of the theory of differential forms; but, as Lax shows in the article [?] (which we strongly recommend as collateral reading for this course), references to differential forms can be avoided, and the proof described in \S 3.5 can be couched entirely in the language of elementary multivariable calculus.
The virtue of Lax's proof is that it allows one to prove a version of the change of variables theorem for other mappings besides diffeomorphisms, and involves a topological invariant, the degree of a mapping, which is itself quite interesting. Some properties of this invariant, and some topological applications of the change of variables formula, will be discussed in \S 3.6 of these notes.
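Before the general proof, here is a one-dimensional numerical instance of the theorem (our own sketch, with f(x) = x^2 on U = (1, 2) and \varphi(y) = 1/y, both arbitrary choices): the two sides of the change of variables formula agree, and both equal log 4.

```python
import math

def riemann(g, a, b, n=20000):
    # midpoint-rule approximation of the integral of g over [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x * x          # a diffeomorphism of (1, 2) onto (1, 4)
df = lambda x: 2.0 * x       # its derivative
phi = lambda y: 1.0 / y      # the integrand on the image

lhs = riemann(phi, 1.0, 4.0)                                  # integral of phi over f(U)
rhs = riemann(lambda x: phi(f(x)) * abs(df(x)), 1.0, 2.0)     # pulled-back integral over U
assert abs(lhs - rhs) < 1e-6
assert abs(lhs - math.log(4.0)) < 1e-6                        # both equal log 4
```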
[a_1, b_1] \times \cdots \times [a_n, b_n].
(the hat over the dx_i meaning that dx_i has to be omitted from the wedge product). Then
d\omega = \sum_{i=1}^n (-1)^{i-1} \frac{\partial f_i}{\partial x_i} \, dx_1 \wedge \cdots \wedge dx_n,
Then Z Z Z
= f (x, t) dx dt =
Rn1 Rn Rn
by (3.2.1). Thus
Z
(3.2.2) u(x, t) dt = 0 .
= d( + ) .
Since and are both in cn1 (U A) this proves that is in
d cn1 (U A) and hence that U A has property P .
is compactly supported.
\frac{\partial f}{\partial x_i}(x, y), \quad i = 1, \dots, k,
is of class C^1 and
\frac{\partial g}{\partial x_i}(x) = \int \frac{\partial f}{\partial x_i}(x, y) \, dy.
f_i(x + h, y) - f_i(x, y) = D_x f_i(c) h
Lemma 3.2.5. Given \epsilon > 0 there exists a \delta > 0 such that for |h| \le \delta
is of class C r .
is in ck1 (U ).
(c) Show that if d\mu = 0, then d\nu = 0. Hint: By (3.2.4)
d\nu = \sum_{I,i} \left( \int_A \frac{\partial f_I}{\partial x_i}(x, t) \, dt \right) dx_i \wedge dx_I
= \int_A (d_U \mu) \, dt
d
and by (??) dU = .
dt
d = dt ( (t)) +
X
= dt ( uI (x, t) dxI ) +
I
where
Z
uI (x, t) = fI (x, t) (t) fI (x, t) dt .
A
d = d dU .
(3.2.5) \{(x_1, \dots, x_n) \; ; \; x_1 \le 0\}
where the right hand side is the usual Riemann integral of f over H^n. (This integral makes sense since f is compactly supported.) Show that if \mu = d\nu for some \nu \in \Omega_c^{n-1}(\mathbb{R}^n) then
(3.2.7) \int_{H^n} \mu = \int_{\mathbb{R}^{n-1}} \iota^*\nu
where \iota is the inclusion
(x_2, \dots, x_n) \mapsto (0, x_2, \dots, x_n).
Hint: Let \nu = \sum_i f_i \, dx_1 \wedge \cdots \wedge \widehat{dx_i} \wedge \cdots \wedge dx_n. Mimicking the (a) part of the proof of Theorem 3.2.1, show that the integral (3.2.6) is the integral over \mathbb{R}^{n-1} of the function
\int_{-\infty}^0 \frac{\partial f_1}{\partial x_1}(x_1, x_2, \dots, x_n) \, dx_1.
set is open and that its complement is open; so, by the connectivity
of U , U = A.
c0 c0 cN =
$$f^*\omega - c\,f^*\omega_0 = f^*d\mu = d\,f^*\mu\,,$$
$$\deg(g\circ f) = \deg(g)\deg(f)\,.$$
W
is a proper $C^\infty$ mapping.
2. Let $U$ and $V$ be open subsets of $\mathbf{R}^n$ and $\mathbf{R}^k$ and let $f: U\to V$
be a proper continuous mapping. Prove:
Theorem 3.4.2. If $B$ is a compact subset of $V$ and $A = f^{-1}(B)$
then for every open subset, $U_0$, with $A\subseteq U_0\subseteq U$, there exists an
open subset, $V_0$, with $B\subseteq V_0\subseteq V$ and $f^{-1}(V_0)\subseteq U_0$.
$$f(x_1,\dots,x_n) = (x_1 + \lambda x_2, x_2,\dots,x_n)\,.$$
$$f(x_1,\dots,x_n) = (\lambda x_1, x_2,\dots,x_n)$$
Show that
$$BACe_1 = \sum b_j e_j$$
and
$$BACe_i = \sum_{j=2}^{n}(a_{j,i} + c_i b_j)\,e_j + c_i b_1\,e_1$$
for $i > 1$.
(b) Show that
$$(3.4.7)\qquad Le_i = \sum_{j=1}^{n}\lambda_{j,i}\,e_j\,,\quad i = 1,\dots,n\,,$$
where
$$\lambda_{j,1} = b_j\,,\qquad \lambda_{1,i} = b_1 c_i\,,\qquad \lambda_{j,i} = a_{j,i} + c_i b_j$$
for $i, j > 1$.
(c) Suppose L is invertible. Conclude that A, B and C are invertible
and verify that Theorem 3.4.1 holds for B and C using the previous
exercises in this section.
(d) Show by an inductive argument that Theorem 3.4.1 holds for
A and conclude from (3.4.3) that it holds for L.
(a) Prove Theorem 3.4.1 for linear mappings which are orthogonal,
i.e., satisfy $L^tL = I$.
Hints:
i. Show that $L^*(x_1^2+\cdots+x_n^2) = x_1^2+\cdots+x_n^2$.
ii. Show that $L^*(dx_1\wedge\cdots\wedge dx_n)$ is equal to $dx_1\wedge\cdots\wedge dx_n$ or
$-\,dx_1\wedge\cdots\wedge dx_n$ depending on whether $L$ is orientation preserving
or orientation reversing. (See §1.2, exercise 10.)
iii. Let $\varphi$ be as in exercise 4 and let $\omega$ be the form
$$\omega = \varphi(x_1^2+\cdots+x_n^2)\,dx_1\wedge\cdots\wedge dx_n\,.$$
Show that $L^*\omega = \omega$ if $L$ is orientation preserving and $L^*\omega = -\omega$ if
$L$ is orientation reversing.
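A numerical illustration of hints i and ii (not part of the notes; numpy assumed): a random orthogonal matrix preserves the Euclidean norm, and its determinant is $\pm 1$, which is exactly the sign appearing in the pullback of the volume form.

```python
import numpy as np

rng = np.random.default_rng(0)

# The Q factor of a QR factorization satisfies Q^T Q = I, so it is a
# random orthogonal matrix L.
A = rng.standard_normal((4, 4))
L, _ = np.linalg.qr(A)

# L^t L = I  <=>  L preserves x_1^2 + ... + x_n^2.
x = rng.standard_normal(4)
assert np.allclose(L.T @ L, np.eye(4))
assert np.isclose(np.dot(L @ x, L @ x), np.dot(x, x))

# det L = +1 or -1, so L^*(dx_1 ^ ... ^ dx_n) = (det L) dx_1 ^ ... ^ dx_n
# is plus or minus the volume form.
assert np.isclose(abs(np.linalg.det(L)), 1.0)
```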
(b) Prove Theorem 3.4.1 for linear mappings which are self-adjoint
(satisfy $L^t = L$). Hint: A self-adjoint linear mapping is diagonalizable:
there exists an invertible linear mapping, $M:\mathbf{R}^n\to\mathbf{R}^n$, such that
$$(3.4.8)\qquad M^{-1}LMe_i = \lambda_i e_i\,,\quad i = 1,\dots,n\,.$$
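The diagonalization in the hint can be illustrated numerically (not from the notes; numpy assumed; for a symmetric matrix `eigh` even returns an orthogonal $M$).

```python
import numpy as np

rng = np.random.default_rng(1)

# A self-adjoint (symmetric) L is diagonalizable by an invertible M:
# M^{-1} L M e_i = lambda_i e_i.
B = rng.standard_normal((4, 4))
L = B + B.T                        # force L^t = L
eigvals, M = np.linalg.eigh(L)     # columns of M are eigenvectors

# M^{-1} L M is the diagonal matrix of the eigenvalues lambda_i
D = np.linalg.inv(M) @ L @ M
assert np.allclose(D, np.diag(eigvals))
```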
$$(3.5.2)\qquad g_2\circ f\circ g_1$$
has the same degree as f , so it suffices to prove the theorem for this
mapping. Notice however that this mapping maps the origin onto
the origin. Hence, replacing f by this mapping, we can, without loss
of generality, assume that 0 is in the domain of f and that f (0) = 0.
Next notice that if $A:\mathbf{R}^n\to\mathbf{R}^n$ is a bijective linear mapping the
theorem is true for $A$ (by exercise 9 of §3.4), and hence if we can
prove the theorem for $A^{-1}\circ f$, (3.4.1) will tell us that the theorem
is true for $f$. In particular, letting $A = Df(0)$, we have
Lemma 3.5.3. There exists a $\delta > 0$ such that $|g(x)|\le\frac{1}{2}|x|$ for
$|x|\le\delta$.
$$\frac{\partial g_i}{\partial x_j}(0) = 0\,;$$
3.5 The change of variables formula 121
$$|g_i(x)|\le \frac{1}{2}\sup_i|x_i| = \frac{1}{2}|x|\,,$$
so
$$|g(x)| = \sup_i|g_i(x)| \le \frac{1}{2}|x|\,.$$
$$(3.5.5)\qquad \tilde f(x) = f(x)\quad\text{for } |x|\le\frac{\delta}{2}\,.$$
In addition, for all $x\in\mathbf{R}^n$:
$$(3.5.6)\qquad |\tilde f(x)|\ge\frac{1}{2}|x|$$
by Lemma 3.5.3.
Now let $Q_r$ be the cube, $\{x\in\mathbf{R}^n\,,\ |x|\le r\}$, and let $Q_r^c = \mathbf{R}^n - Q_r$.
From (3.5.6) we easily deduce that $\tilde f^{-1}(Q_r)\subseteq Q_{2r}$
for all $r$, and hence that $\tilde f$ is proper. Also notice that for $x\in Q_\delta$,
$$|\tilde f(x)|\le |x| + |g(x)| \le \frac{3}{2}|x|$$
by Lemma 3.5.3 and hence
Set
$$\tilde\varphi(x) = \int \rho(y-x)\varphi(y)\,dy\,.$$
and hence
$$\varphi(x) = \int\varphi(x)\rho(y-x)\,dy$$
so
$$\varphi(x) - \tilde\varphi(x) = \int(\varphi(x)-\varphi(y))\rho(y-x)\,dy$$
and
$$|\varphi(x)-\tilde\varphi(x)|\le\int|\varphi(x)-\varphi(y)|\,\rho(y-x)\,dy\,.$$
$$|\varphi(x)-\tilde\varphi(x)|\le\epsilon\,.$$
Thus
$$\Bigl|\int_V(\varphi-\tilde\varphi)(y)\,dy\Bigr| \le \int_V|\varphi-\tilde\varphi|(y)\,dy \le 2c\,\epsilon$$
so
$$(3.5.12)\qquad \Bigl|\int_V\tilde\varphi(y)\,dy - \int_V\varphi(y)\,dy\Bigr| \le 2c\,\epsilon\,.$$
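The smoothing step above — convolving $\varphi$ with a nonnegative bump of total integral one and small support — can be imitated on a grid. The sketch below (not from the notes; numpy assumed, with arbitrary sample functions) checks that the discrete convolution stays uniformly close to $\varphi$, with the error controlled by the modulus of continuity of $\varphi$ on an interval of length the support radius.

```python
import numpy as np

eps = 0.05
dx = 0.001
x = np.arange(-2, 2, dx)
phi = np.exp(-x**2)                     # a smooth, decaying sample function

# A nonnegative bump rho supported in [-eps, eps], normalized so that
# its "integral" (Riemann sum) is 1.
s = np.arange(-eps, eps + dx, dx)
rho = np.maximum(0.0, eps**2 - s**2)
rho /= rho.sum() * dx

# Discrete analogue of phi_tilde(x) = integral of rho(y - x) phi(y) dy.
phi_tilde = np.convolve(phi, rho, mode='same') * dx

# Away from the grid boundary, |phi - phi_tilde| is bounded by
# sup_{|h| <= eps} |phi(x) - phi(x + h)| <= max|phi'| * eps.
interior = slice(200, -200)
bound = np.max(np.abs(np.gradient(phi, dx))) * eps + 1e-6
assert np.max(np.abs(phi - phi_tilde)[interior]) <= bound
```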
and hence
$$\int_V \varphi(y)\,dy = \int_U \varphi(f(x))\,|\det Df(x)|\,dx\,.$$
Hint: Interpret the left and right hand sides of this formula as
improper integrals over $U\cap\operatorname{Int}\mathbf{H}^n$ and $V\cap\operatorname{Int}\mathbf{H}^n$.
5. The boundary of $\mathbf{H}^n$ is the set
$$b\mathbf{H}^n = \{(0,x_2,\dots,x_n)\,,\ (x_2,\dots,x_n)\in\mathbf{R}^{n-1}\}$$
so the map
$$\iota:\mathbf{R}^{n-1}\to\mathbf{H}^n\,,\quad (x_2,\dots,x_n)\mapsto(0,x_2,\dots,x_n)$$
in exercise 9 in §3.2 maps $\mathbf{R}^{n-1}$ bijectively onto $b\mathbf{H}^n$.
The main result of this section is a recipe for computing the degree
of $f$ by counting the number of $p_i$'s above, keeping track of
orientation.
Theorem 3.6.4. For each $p_i\in f^{-1}(q)$ let $\sigma_{p_i} = +1$ if $f: U_i\to W$ is
orientation preserving and $-1$ if $f: U_i\to W$ is orientation reversing.
Then
$$(3.6.1)\qquad \deg(f) = \sum_{i=1}^{N}\sigma_{p_i}\,.$$
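The recipe (3.6.1) can be tried out in the simplest setting (a sketch, not from the notes; numpy assumed): for the proper map $f(x) = x^3 - 3x$ of $\mathbf{R}$ to itself, the regular value $q = 1$ has three preimages, contributing $+1$, $-1$, $+1$, so the signed count gives degree 1.

```python
import numpy as np

# f(x) = x^3 - 3x is proper (|f(x)| -> infinity as |x| -> infinity).
f = np.poly1d([1, 0, -3, 0])
fprime = f.deriv()

q = 1.0                                   # a regular value: f' != 0 at all preimages
preimages = [r.real for r in (f - q).roots if abs(r.imag) < 1e-9]

# sigma_p = +1 where f is orientation preserving (f' > 0), -1 where reversing.
degree = sum(+1 if fprime(p) > 0 else -1 for p in preimages)

assert len(preimages) == 3                # three points above q = 1
assert degree == 1                        # signed count: +1 - 1 + 1
```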
3.6 Techniques for computing the degree of a mapping 129
Since $f: U_i\to W$ is a diffeomorphism
$$\int_{U_i} f^*\omega = \pm\int_W\omega = +1\ \text{or}\ -1$$
(3.6.2) f (x) = y
$$(3.6.3)\qquad F: U\times A\to V$$
$$f_t: U\to V\,,\quad f_t(x) = F(x,t)$$
is proper.
Now let U and V be open subsets of Rn .
Theorem 3.6.8. If f0 and f1 are properly homotopic, their degrees
are the same.
Proof. Let
$$\omega = \varphi(y)\,dy_1\wedge\cdots\wedge dy_n$$
be a compactly supported $n$-form on $V$ whose integral over $V$ is 1.
Then the degree of $f_t$ is equal to
$$(3.6.4)\qquad \int_U \varphi(F_1(x,t),\dots,F_n(x,t))\,\det D_x F(x,t)\,dx\,.$$
$$\{x\in\mathbf{R}^n\,,\ \|x\|\le 1\}\,.$$
$$(3.6.5)\qquad \|f(x)\|\le 1$$
$$(3.6.6)\qquad f(x) = x$$
= (1 ) + (1 ) 0
= 0 .
Figure 3.6.1.
We will prove
Theorem 3.6.11. The mapping, p, is proper and deg(p) = n.
Proof. For $t\in\mathbf{R}$ let
$$g:\mathbf{R}\times\mathbf{R}^2\to\mathbf{R}^2\,,\quad (t,z)\mapsto p_t(z)$$
$$C = \sup\{|a_i|\,,\ i = 0,\dots,n-1\}\,.$$
{z C , aC|z|n1 R} ,
and this shows that $g$ is a proper homotopy. Thus each of the mappings,
$$p_t:\mathbf{C}\to\mathbf{C}\,,$$
is proper and $\deg p_t = \deg p_1 = \deg p = \deg p_0$. However, $p_0:\mathbf{C}\to\mathbf{C}$
is just the mapping, $z\mapsto z^n$, and an elementary computation (see
exercises 5 and 6 below) shows that the degree of this mapping is $n$.
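The degree count can be seen concretely (a sketch, not from the notes; numpy assumed): a holomorphic map is orientation preserving wherever its derivative is nonzero, so every $p_i$ contributes $\sigma_{p_i} = +1$, and a regular value of a degree-$n$ polynomial must therefore have exactly $n$ preimages — the fundamental theorem of algebra.

```python
import numpy as np

coeffs = [1, 2.0, -1.5, 0.3, 7.0]     # p(z) = z^4 + 2z^3 - 1.5z^2 + 0.3z + 7
n = len(coeffs) - 1

q = 1.23 + 0.45j                      # an arbitrary (generic) value
shifted = np.array(coeffs, dtype=complex)
shifted[-1] -= q                      # coefficients of p(z) - q
roots = np.roots(shifted)

# Every preimage of q counts with sign +1, so the count is deg(p) = n.
assert len(roots) == n
# Sanity check: each root really is a preimage of q.
assert all(abs(np.polyval(coeffs, r) - q) < 1e-6 for r in roots)
```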
$$p(z) = z^n + a_{n-1}z^{n-1} + \cdots + a_0\,,$$
has two distinct real roots, $s_+(x)$ and $s_-(x)$, with $s_+(x) > s_-(x)$.
Prove that $s_+$ and $s_-$ are functions of class $C^r$.
Hint: What are the roots of the quadratic polynomial $as^2+bs+c$?
$$f(x) + s(x - f(x))$$
Argue from figure 1 that this polynomial has to have two distinct
real roots.
3. Show that the Brouwer fixed point theorem isn't true if one
replaces the closed unit ball by the open unit ball. Hint: Let $U$ be
the open unit ball (i.e., the interior of $B^n$). Show that the map
$$h: U\to\mathbf{R}^n\,,\quad h(x) = \frac{x}{1-\|x\|^2}$$
is a diffeomorphism of $U$ onto $\mathbf{R}^n$, and show that there are lots of
mappings of $\mathbf{R}^n$ onto $\mathbf{R}^n$ which don't have fixed points.
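The hinted map $h$ has an explicit inverse, which makes the claim easy to test numerically (a sketch, not from the notes; numpy assumed). Writing $s = \|y\|$ and $r = \|x\|$, the relation $y = h(x)$ gives $sr^2 + r - s = 0$, whose positive root yields the closed form used below.

```python
import numpy as np

def h(x):
    # h(x) = x / (1 - ||x||^2), defined on the open unit ball
    return x / (1.0 - np.dot(x, x))

def h_inv(y):
    # invert: r = ||x|| solves s r^2 + r - s = 0 with s = ||y||
    s = np.linalg.norm(y)
    if s == 0.0:
        return np.zeros_like(y)
    r = (-1.0 + np.sqrt(1.0 + 4.0 * s**2)) / (2.0 * s)
    return (r / s) * y

rng = np.random.default_rng(2)
x = rng.uniform(-0.4, 0.4, size=3)        # a point safely inside the ball
assert np.allclose(h_inv(h(x)), x)

y = rng.standard_normal(3) * 10.0         # a far-away point of R^3
assert np.linalg.norm(h_inv(y)) < 1.0     # lands back inside the ball
assert np.allclose(h(h_inv(y)), y)
```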
4. Show that the fixed point in the Brouwer theorem doesn't have
to be an interior point of $B^n$, i.e., show that it can lie on the
boundary.
$$Df(z) = nz^{n-1}\,.$$
$$\frac{(z+h)^n - z^n - nz^{n-1}h}{|h|}$$
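That this quotient tends to zero as $h\to 0$ — i.e., that $z\mapsto z^n$, viewed as a map of $\mathbf{R}^2$, is differentiable with derivative given by complex multiplication by $nz^{n-1}$ — can be checked numerically (a sketch, not from the notes; numpy assumed).

```python
import numpy as np

n = 5
z = 1.3 - 0.7j

prev = np.inf
for k in range(1, 6):
    h = (0.3 + 0.4j) * 10.0 ** (-k)       # h -> 0 along a fixed direction
    quotient = abs((z + h)**n - z**n - n * z**(n - 1) * h) / abs(h)
    assert quotient < prev                # shrinks roughly linearly in |h|
    prev = quotient

assert prev < 1e-3
```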
$$(3.6.8)\qquad f_0^*\omega - f_1^*\omega = d\mu$$
$$Q = \{(x_1,\dots,x_n)\in S^{n-1}\,;\ x_i\ge 0\}$$
into itself.
(b) It's easy to prove that $Q$ is homeomorphic to the unit ball
$B^{n-1}$, i.e., that there exists a continuous map, $g: Q\to B^{n-1}$, which is
invertible and has a continuous inverse. Without bothering to prove
this fact deduce from it Frobenius theorem.
$$(3.7.2)\qquad C_f = \{p\in U\,,\ Df(p) = 0\}\,.$$
$$(3.7.6)\qquad W - f(C_f\cap A)\,.$$
To prove Theorem 3.7.1 let $U_{i,j}$ be the subset of $U$ where $\dfrac{\partial f_i}{\partial x_j}\ne 0$.
Then
$$U = \bigcup U_{i,j}\cup C_f\,,$$
$$(3.7.7)\qquad \frac{\partial f_1}{\partial x_1}(p)\ne 0\quad\text{for all } p\in U\}$$
Then
$$(3.7.9)\qquad g^*x_1 = f^*x_1 = f_1(x_1,\dots,x_n)$$
and
$$(3.7.10)\qquad \det(Dg) = \frac{\partial f_1}{\partial x_1}\ne 0\,.$$
Thus, by the inverse function theorem, $g$ is locally a diffeomorphism
at every point, $p\in U$. This means that if $A$ is a compact subset of
$U$ we can cover $A$ by a finite number of open subsets, $U_i\subseteq U$, such
that $g$ maps $U_i$ diffeomorphically onto an open subset $W_i$ in $\mathbf{R}^n$. To
conclude the proof of the theorem we'll show that $\mathbf{R}^n - f(C_f\cap U_i\cap A)$
is a dense subset of $\mathbf{R}^n$. Let $h: W_i\to\mathbf{R}^n$ be the map $h = f\circ g^{-1}$.
To prove this assertion it suffices by Remark 3.7.4 to prove that the
set
$$\mathbf{R}^n - h(C_h)$$
is dense in $\mathbf{R}^n$. This we will do by induction on $n$. First note that for
$n = 1$, $C_f = C_f$, so we've already proved Theorem 3.7.1 in dimension
one. Now note that by (3.7.8), $h^*x_1 = x_1$, i.e., $h$ is a mapping of the
form
$$(3.7.11)\qquad h(x_1,\dots,x_n) = (x_1, h_2(x),\dots,h_n(x))\,.$$
Thus if we let $W_c$ be the set
$$(3.7.12)\qquad \{(x_2,\dots,x_n)\in\mathbf{R}^{n-1}\,;\ (c,x_2,\dots,x_n)\in W_i\}$$
and let $h_c: W_c\to\mathbf{R}^{n-1}$ be the map
$$(3.7.13)\qquad h_c(x_2,\dots,x_n) = (h_2(c,x_2,\dots,x_n),\dots,h_n(c,x_2,\dots,x_n))\,.$$
Then
$$(3.7.14)\qquad \det(Dh_c)(x_2,\dots,x_n) = \det(Dh)(c,x_2,\dots,x_n)$$
and hence
$$(3.7.15)\qquad (c,x)\in W_i\cap C_h \iff x\in C_{h_c}\,.$$
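The identity (3.7.14), and with it the equivalence (3.7.15), can be verified symbolically for $n = 2$ (a sketch, not from the notes; sympy assumed, with an arbitrary sample $h_2$).

```python
import sympy as sp

# For h(x1, x2) = (x1, h2(x1, x2)) the Jacobian determinant is just
# the x2-partial of h2, so slicing at x1 = c gives
# det(Dh_c)(x2) = det(Dh)(c, x2): the identity (3.7.14) for n = 2.
x1, x2, c = sp.symbols('x1 x2 c')
h2 = sp.sin(x1 * x2) + x2**3          # an arbitrary sample h2

Dh = sp.Matrix([[1, 0],
                [sp.diff(h2, x1), sp.diff(h2, x2)]])
det_Dh = Dh.det()

# h_c(x2) = h2(c, x2); its 1x1 Jacobian determinant is d h2 / d x2 at x1 = c
det_Dhc = sp.diff(h2.subs(x1, c), x2)

assert sp.simplify(det_Dhc - det_Dh.subs(x1, c)) == 0
```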
Now let $p_0 = (c, x_0)$ be a point in $\mathbf{R}^n$. We have to show that every
neighborhood, $V$, of $p_0$ contains a point $p\in\mathbf{R}^n - h(C_h)$. Let $V_c\subseteq
\mathbf{R}^{n-1}$ be the set of points, $x$, for which $(c,x)\in V$. By induction $V_c$
contains a point, $x\in\mathbf{R}^{n-1} - h_c(C_{h_c})$, and hence $p = (c,x)$ is in $V$ by
definition and in $\mathbf{R}^n - h(C_h)$ by (3.7.15).
Q.E.D.
3.7 Appendix: Sard's theorem 141
f (x) = A(x) + x0
5. Verify (3.7.1).