Riemannian geometry

Notes
(*) means that the proof of the theorem/proposition is not required, while (**) means
that the theorem/argument itself is not required for the exam.

Contents
1. Topological manifolds
2. Smooth manifolds
3. The partition of unity (*)
4. Tangent and cotangent bundle
5. Vector bundles
6. The cotangent bundle
7. Tensors
7.1. Differential forms and integration
8. Integral curves and symmetry generators
8.1. Submanifolds
9. A mini course on linear connections
9.1. Riemannian geometry
9.2. Riemannian distance
9.3. Metric connection
9.4. Geodesics in Riemannian geometry (**)
9.5. The Riemannian curvature
10. Principal bundle and gauge theories (**)

1. Topological manifolds
Informally we can think of manifolds as generalizations of surfaces and curves to
higher dimensions. The dimension of a manifold is the number of independent
parameters needed to specify a “point”; thus an n dimensional manifold is in some sense
an object modelled locally on Rn. The easiest example we can visualize is the sphere S2
given by the equation x² + y² + z² = 1; near the north pole (0, 0, 1) we can solve for z
as a function of x and y, thus we need two parameters to specify a point close to
the north pole, and that is why we say that the sphere is a 2 dimensional manifold,
in the sense that it “locally” looks like R2. But what do we mean by “looks like”? The
main idea is that U ⊂ Rk and V ⊂ Rn are said to be homeomorphic if we can find a one
to one correspondence ϕ : U → V s.t. ϕ and its inverse are continuous maps.
Let’s try to be a bit more “mathy” and abstract now. We want to come up with a
notion of “space” in which the notion of continuous function makes sense. Let’s discuss the
case of Rn for simplicity; the generalization to any metric space is straightforward. A set U in Rn
is said to be open if for every p in U there is an open ball Br(p) such that Br(p) ⊂ U. A
neighborhood of p is an open set containing p. It is clear that the union of an arbitrary
collection of open sets is open, but NOTE the same is not true of the intersection of
infinitely many open sets. A map f from an open subset of Rn to Rm is continuous if
and only if the inverse image f−1(V) of any open set V in Rm is open in Rn. This shows
that continuity can be defined in terms of open sets only, for Rn as for every metric
space (recall that, given metric spaces (X, dX) and (Y, dY), a function f : X → Y is
continuous at p ∈ X if for every ε > 0 there exists δ > 0 such that dX(x, p) < δ implies
dY(f(x), f(p)) < ε). More in detail, one first defines the open ball Br(p) = {x ∈ Rn s.t. d(x, p) < r},
where d(x, y) is the usual Euclidean distance. We thus define open sets axiomatically,
looking at what happens in Rn.
Definition 1.1. Let X be a set; a topology on X is a collection τ of subsets of X, called
open sets, such that
• X, ∅ ∈ τ
• U1, .., Un ∈ τ ⇒ U1 ∩ .. ∩ Un ∈ τ (intersection of a finite number of open sets)
• {Ui} finite or infinite collection of elements of τ ⇒ ∪i Ui ∈ τ
We call the pair (X, τ), or simply X, a topological space.
The idea of open sets is needed to recover the notion of nearness we would
have in a metric space. In this sense a neighborhood of a point p ∈ X is just an open set
U containing p. A natural example of a topology on a set S is the collection of all subsets
of S; this is sometimes called the discrete topology.
Let us work out an example. Consider the set S = {1, 2, 3}:
• τ = {∅, {1}, {1, 2}, {1, 2, 3}} is a topology
• τ = {∅, S} is the trivial topology
• τ = {∅, {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}} is the discrete topology
(a small computational check of these axioms is sketched below).
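For finite collections like the ones above, the axioms of Definition 1.1 can be checked mechanically. The following is a minimal sketch, not part of the original notes; the helper name is_topology is ours and a standard Python environment is assumed.

```python
# A minimal check of the topology axioms of Definition 1.1 for finite collections.
from itertools import combinations

def is_topology(S, tau):
    """Return True if tau (a collection of subsets) is a topology on the finite set S."""
    S = frozenset(S)
    tau = {frozenset(U) for U in tau}
    if S not in tau or frozenset() not in tau:
        return False
    # since tau is finite, closure under pairwise unions/intersections suffices
    for U, V in combinations(tau, 2):
        if U & V not in tau or U | V not in tau:
            return False
    return True

S = {1, 2, 3}
print(is_topology(S, [set(), {1}, {1, 2}, {1, 2, 3}]))      # True
print(is_topology(S, [set(), {1, 2, 3}]))                   # True (trivial topology)
print(is_topology(S, [set(), {1}, {2}, {1, 2, 3}]))         # False: {1} ∪ {2} is missing
```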
Once we have a topological space (S, τ), a subset A ⊂ S naturally inherits a topological space structure (A, τA)
given by
τA := {U ∩ A with U ∈ τ}.
This is sometimes called the subspace topology or relative topology.
Take now a topological space X, a set S and a surjective map π : X → S. We can
naturally define a topology on S by saying that U ⊂ S is open if π−1(U) is open. This
is called the quotient topology. In general it is hard to describe all the open sets in τ, and we
therefore introduce the notion of a basis.
Definition 1.2. A subcollection β of a topology τ of a set S is a basis for τ if for every
U ∈ τ and p ∈ U we can find V ∈ β s.t. p ∈ V ⊂ U
A useful criterion for deciding if a collection of subsets is a basis for some topology
is given by the following:
Proposition 1.3. A collection β = {ui} of subsets of S is a basis for some topology of
S if and only if
• S is the union of all ui
• given ui and uj and p ∈ ui ∩ uj there is uk ∈ β s.t. p ∈ uk ⊂ ui ∩ uj
Another point of view, or if you prefer a consequence of the previous definition, is to
say that β is a basis if every open set of S can be written as a union of sets in β.
Definition 1.4. A top space is named second countable if it admits a countable basis.
Fact: Rn is second countable, and any subset of a second countable topological space is second
countable. One of the most important properties of a second countable topological space is that
any open cover admits a countable subcover.
Sometimes one may want that open sets separate points. More precisely

Definition 1.5. A topological space S is Hausdorff if given any two distinct points x
and y in S, there exist disjoint open sets U, V such that x ∈ U and y ∈ V .
We now go back to the concept of continuity of a function. Let f : X1 → X2 be a
map between topological spaces. Mimicking the definition from advanced calculus, we say
that f is continuous at a point p in X1 if for every neighborhood U2 of f(p) in X2 (that
is, an open set containing f(p)), there is a neighborhood U1 of p such that f(U1) ⊂ U2.
In the case of the quotient topology one can prove that a function from the quotient space
is continuous if and only if its composition with the quotient map is continuous.

The notion of equal in the “category” of top spaces is encoded in the definition of
homeomorphism
Definition 1.6. Two topological spaces X and Y are said to be homeomorphic or topologically
equivalent if there exists a homeomorphism between them, that is, a continuous bijective map
with continuous inverse.
Definition 1.7. A topological space is disconnected if it is the union of 2 disjoint non
empty open sets. If it is not disconnected we will call it connected.
Definition 1.8. An open cover of a top space S is a collection of open subsets whose
union is S. A subcover is a subcollection still covering S. We will say that S is compact
if every open cover admits a finite subcover.
A subset A of a topological space S is named closed if its complement is open. Note that
closed does not mean “not open”: for example, in the topological space S the open set S is also
closed, its complement being the empty set. We can define the closure of a set A in S by
Ā = ∩{B ⊂ S : A ⊂ B and B is closed}
It is time to translate into mathematical language the notion of “looks like Rn”. A topological
space M is said to be locally Euclidean of dimension n if every point p ∈ M has a
neighborhood U that is homeomorphic to an open subset of Rn, that is, we have a homeomorphism
ϕ : U → Ũ ⊂ Rn. We call the pair (U, ϕ) a local coordinate chart and
we sometimes refer to U as a coordinate open set. If the image of ϕ is an open ball of Rn
we call U a Euclidean open ball in M; if ϕ(p) = 0 we say that the chart is centered at p.

Definition 1.9. A subset of a top space is called precompact if its closure is compact.
Definition 1.10. A topological manifold is a topological space that is Hausdorff, second countable
and locally Euclidean.
Note that an open subset of a topological manifold is a topological manifold. Let’s discuss now a couple
of examples (a small numerical check of the charts below is sketched after these examples):
• Sn is automatically Hausdorff and second countable because it sits inside Rn+1. Thus we need
just to check that it is locally Euclidean. To this aim consider
U±i = {x = (x1, .., xn+1) ∈ Sn s.t. ±xi > 0}
equipped with the following homeomorphism
ϕ±i : U±i → Ũi ⊂ Rn , (x1, .., xn+1) ↦ (x1, .., xi−1, xi+1, .., xn+1)
with continuous inverse
(ϕ±i)−1 : Ũi ⊂ Rn → U±i , (z1, .., zn) ↦ (z1, .., zi−1, ±√(1 − |z|²), zi, .., zn).
This choice is often called the graph coordinate chart. Another way to construct
a coordinate chart for the sphere is to use the stereographic projection.
• RPn: this is the set of lines through the origin in Rn+1. Mathematically we can define it via the map
π : Rn+1 \ {0} → RPn sending each point to the line passing through it and the
origin. Note that for any λ ∈ R \ {0} we have π(x) = π(λx), thus the projective
space can also be defined by dividing Rn+1 \ {0} by the equivalence relation x ∼ λx.
Now we equip RPn with the quotient topology, which one can check is
Hausdorff. We now want to prove that it is second countable and locally Euclidean.
Let’s start with the latter. Define Vi ⊂ Rn+1 \ {0} to be the set where the i-th
component of x is non vanishing and define Ui = π(Vi). Then consider
ϕi : Ui → Ũi ⊂ Rn , [(x1, .., xn+1)] ↦ (x1/xi, .., xi−1/xi, xi+1/xi, .., xn+1/xi)
with inverse
(ϕi)−1 : Ũi → Ui , (z1, .., zn) ↦ [(z1, .., zi−1, 1, zi, .., zn)].
This choice is often called the canonical coordinate chart. We may want these maps to
be continuous: for ϕi it is enough to note that ϕi ◦ π : Vi → Rn is continuous, while for
the inverse it is easy to prove that it maps open sets to open sets.
Is this enough to declare RPn a topological manifold? Yes and no, because we
still need second countability; but this is somehow for free thanks to the following Lemma
(see for example Lee, Introduction to topological manifolds, Lemma 3.2.1):
Lemma 1.11. Suppose π : M → N is a quotient map with M second countable
(for example a topological manifold); then if N is locally Euclidean it is also second
countable.
Invoking this Lemma we have all the properties we need to prove that
the projective space is a topological manifold.
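The charts written above can be tested numerically. The following is a minimal sketch, not part of the notes: all function names are ours, indices are 0-based, and numpy is assumed.

```python
# Graph charts of S^2 and the canonical chart of RP^2, checked on sample points.
import numpy as np

def phi_plus(i, x):
    """Graph chart on U_i^+ = {x in S^2 : x_i > 0}: drop the i-th coordinate."""
    return np.delete(x, i)

def phi_plus_inv(i, z):
    """Inverse chart: reinsert the missing coordinate as +sqrt(1 - |z|^2)."""
    return np.insert(z, i, np.sqrt(1.0 - np.dot(z, z)))

p = np.array([0.6, 0.0, 0.8])             # a point of S^2 with x_1 > 0 and x_3 > 0
z = phi_plus(2, p)                         # coordinates in the chart dropping x_3
print(np.allclose(phi_plus_inv(2, z), p))  # True: round trip chart -> manifold

# transition between two overlapping graph charts; it is given by smooth formulas,
# which is exactly what Section 2 will require
print(phi_plus(0, phi_plus_inv(2, z)))     # [0.  0.8]

def rp_chart(i, x):
    """Canonical chart of RP^2 on {x_i != 0}: divide by x_i and drop that slot."""
    return np.delete(x / x[i], i)

x = np.array([1.0, 2.0, 4.0])
# the result does not depend on the chosen representative of the class [x]
print(np.allclose(rp_chart(2, x), rp_chart(2, 3.7 * x)))   # True
```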

2. Smooth manifolds
We now want to add more structure to our manifolds in order to define derivatives
of functions, or more generally the concept of smoothness, in a consistent
way. In particular, if we consider a topological manifold M we may want to define a
function f : M → R smooth at p ∈ M if f̂ := f ◦ ϕ−1 : Ũ ⊂ Rn → R is smooth at p̂ := ϕ(p)
for some coordinate chart (U, ϕ). The problem with this definition is that it depends too
much on the choice of the coordinate chart. Consider for example two charts (U, ϕU),
(V, ϕV) with U ∩ V ≠ ∅ and a point p ∈ U ∩ V; then we may obtain that in the coordinate
chart U
f̂|U := f ◦ ϕU−1
is smooth at p̂, while on the intersection U ∩ V, in the V chart,
f̂|V := f ◦ ϕV−1 = (f ◦ ϕU−1) ◦ (ϕU ◦ ϕV−1).
If ϕU ◦ ϕV−1 is not good enough we can find a contradiction, that is, f is smooth in a
given coordinate chart but not in the other. Let’s solve this problem. Let’s consider
two charts (Ui, ϕi) and (Uj, ϕj), let’s define Uij := Ui ∩ Uj, and let’s construct the so
called transition function
ϕij := ϕj ◦ ϕi−1|ϕi(Uij) : ϕi(Uij) ⊂ Rn → ϕj(Uij) ⊂ Rn
(we will often omit the domain of the transition function to simplify the notation). In the case
of topological manifolds the transition functions are obviously homeomorphisms. But
we want something more. In the previous formula we emphasized the domains of the
functions and the charts used; in the future we will omit those details in order not to be
pedantic with the notation, as it will be clear from the context what we are doing.
Let us recall the following definition:
Definition 2.1. A smooth bijective map from an open set V of Rn to an open set W of Rm
with smooth inverse is called a diffeomorphism.
That’s what we need.
Definition 2.2. Two coordinate charts (Ui , ϕi ) and (Uj , ϕj ) are named smoothly com-
patible if Uij = ∅ or ϕij is a diffeo.
Now we put all charts together in a clever way.
Definition 2.3. A smooth atlas, or simply an atlas, A, for a topological manifold M is
a collection of smoothly compatible charts covering M .
Note that it is enough to check that every transition function ϕij is smooth (thus for
all i, j) to prove that we have an atlas. We have thus successfully removed the ambiguity in
the definition of a smooth function on M once we fix the atlas: a function f : M → R
is then smooth if f̂ = f ◦ ϕ−1 is smooth for every coordinate chart in the atlas A. We
are tempted to call this a “smooth structure”. A minor problem appears now. Consider
for example on R2 the following atlases
A1 := {(R2, id)}
and
A2 := {(B1(x), id) , x ∈ R2};
they are obviously different, but they determine the same set of smooth functions. The
way out is easy:
Definition 2.4. A smooth structure is an equivalence class of atlases, where we
declare the atlas A1 equivalent to A2 if A1 ∪ A2 is another smooth atlas.
Alternatively one can say the following:
Definition 2.5. A smooth structure is a maximal atlas Amax, that is, one that is not
contained in any larger atlas.
Lemma 2.6. (1) Every smooth atlas A for M is contained in a unique max atlas
(2) Two atlases determine the same max atlas IFF their union is an atlas
Proof. Define Amax to be the set of all possible charts compatible with those in A;
note that this means it is maximal by construction. Is it an atlas? Amax obviously
covers M, thus we need to prove compatibility of the charts. Let’s consider (V1, ψ1) and
(V2, ψ2) charts in Amax and suppose V1 ∩ V2 ≠ ∅; are they compatible? That is, is ψ1 ◦ ψ2−1
a diffeo? Take a point p ∈ V1 ∩ V2; then we can find a chart (U, ϕ) in A with p ∈ U.
Observe now that
ψ1 ◦ ψ2−1 = ψ1 ◦ (ϕ−1 ◦ ϕ) ◦ ψ2−1 = (ψ1 ◦ ϕ−1) ◦ (ϕ ◦ ψ2−1),
where we inserted ϕ−1 ◦ ϕ, and both factors in the last expression are smooth.
Thus ψ1 ◦ ψ2−1 is smooth in a neighborhood of p, or better of ψ2(p), and therefore it is smooth on
ψ2(V1 ∩ V2). Obviously also the inverse is smooth, thus this transition function is a diffeo
and Amax is an atlas. Uniqueness is easy to prove: consider Bmax another maximal atlas
containing A; since every chart of Bmax is compatible with the charts in A we obviously get that
Bmax ⊆ Amax, but since Bmax is maximal we have Bmax = Amax.

Let’s see now the second part.
(→) Obvious.
(←) Let A1 = {(Ui, ϕi)} and A2 = {(Va, ψa)}. A1 ∪ A2 being an atlas implies that all the charts of A1
are compatible with the ones of A2, thus ϕi ◦ ψa−1 is a diffeo. Using that
information, and after having constructed A1max := {(UI, φI)} and A2max := {(VA, ΞA)}, we observe
that (we are sloppy with the domains and images of those functions in order not to be
pedantic)
φI ◦ ψa−1 = φI ◦ (ϕi−1 ◦ ϕi) ◦ ψa−1 = (φI ◦ ϕi−1) ◦ (ϕi ◦ ψa−1),
and both factors on the right are smooth (the first because φI and ϕi are compatible charts of A1max,
the second by assumption); then φI ◦ ψa−1 must be smooth. This implies that A1max ⊆ A2max. In a
similar fashion, or if you prefer by symmetry, one also has A2max ⊆ A1max and the
statement follows. □

We are finally ready to define a smooth structure


Definition 2.7. A smooth manifold is the data of a topological manifold together with a smooth
structure.
Definition 2.8. Let M be a smooth manifold. A function f : M → R is said to be
smooth if ∀(U, ϕ) ∈ Amax we have that f ◦ ϕ−1|ϕ(U) is smooth.
You may think that we have a problem because we cannot check all charts of the maximal
atlas: they are too many. No problem:
Lemma 2.9. Given a smooth manifold M with smooth structure given by the maximal atlas
Amax, and fixed an atlas A = {(Ui, ϕi)} ⊂ Amax, then f : M → R is smooth (in the
sense of the previous definition) if f ◦ ϕi−1|ϕi(Ui) is smooth for all i.

Notation: Once we choose a coordinate chart (U, ϕ) (or, more generally, an atlas), it
is convenient to simplify the notation as follows: Ũ = ϕ(U) and f̂ = f ◦ ϕ−1 : Ũ → R.
Examples
• Take Rn with the standard chart (Rn , id), this is obviously a smooth manifold.
• Let U be any open subset of Rn . Then U is a topological manifold, and the
single chart (U, Id) defines a smooth structure on U . This is in general true for
every open set of a smooth manifold.
• Any real finite dimensional vector space V. Take any positive definite scalar
product on V in order to induce on V the structure of a topological manifold
(Hausdorff and second countable). Let’s discuss the smooth structure. We know
that any choice of a basis B establishes an isomorphism between V and Rn as
follows:
ϕB : v → vB = (v1, .., vn) , the coordinates in the given basis B,
and (V, ϕB) is our smooth structure. Suppose now we want to use another basis B̃,
where vB̃ = (ṽ1, .., ṽn); we know we can always find an invertible matrix Aij so
that
ṽi = Aij vj,
thus in general an invertible linear map
ϕBB̃ : Rn → Rn , vB → vB̃;
that is, our construction does not depend on the choice of the basis, as expected,
in the sense that the two charts are compatible and thus the smooth structure
induced is the same.
• The set of real invertible matrices GLn(R) can be defined as det−1(R \ {0}),
where
det : R^(n²) → R
and where we have identified square matrices with R^(n²). Being the determinant a
continuous function and R \ {0} an open set, we can conclude that GLn(R) is open
inside R^(n²) and thus a smooth manifold.
• More examples can be found in exercise sheet 1.
We now want to generalize the construction a bit and look at maps between smooth
manifolds M and N, F : M → N. We consider the smooth atlases {(Ui, ϕi)} and {(Va, ψa)};
we say that F is smooth if
ψa ◦ F ◦ ϕi−1 : ϕi(Ui ∩ F−1(Va)) ⊂ Rn → ψa(Va) ⊂ Rm
is smooth for every (Ui, ϕi) and (Va, ψa). Sometimes we just write F̂ for the coordinate representation
of F in some coordinate charts.
Definition 2.10. A map between smooth mfds F : M → N is a diffeomorphism if it is
smooth with a smooth inverse. In this case we would say that M and N are diffeomorphic
Note that the notion of being diffeomorphic depends on the smooth structure chosen.
We have discussed previously that one can equip a topological manifold with different atlases. In
case they are compatible we are working with the same smooth structure; in case they are not,
we are tempted to say that the manifolds are different from the “smooth” point of view.
This sentence is probably too strong. In fact we may put different smooth
structures on a topological manifold and still end up with diffeomorphic smooth manifolds. Let’s see it with
an example. Consider R with the standard atlas ϕ = id or with ψ(x) = x³, and denote
for simplicity by Rϕ and Rψ the smooth manifolds arising from those smooth structures. They
determine different smooth structures on R: in fact the transition function ϕ ◦ ψ−1(x) = x^(1/3)
is not smooth at the origin. Nevertheless, defining F : Rϕ → Rψ by F(x) = x^(1/3), we can
conclude that they are diffeomorphic, since F̂ = ψ ◦ F ◦ ϕ−1 = id. From this point of view the right
question one could try to ask is how many smooth structures, up to diffeomorphism, one
can put on a topological manifold.
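The following is a minimal numerical illustration of this example; it is not part of the notes and the names are ours. The transition function x^(1/3) has unbounded difference quotients at the origin, while F read in the charts is the identity.

```python
import numpy as np

def psi_inv(y):
    # real cube root, i.e. the transition function phi o psi^{-1}
    return np.sign(y) * np.abs(y) ** (1.0 / 3.0)

# difference quotients of the transition function at 0 blow up like h^(-2/3):
# x^(1/3) is continuous but not differentiable at the origin, so the two atlases
# are not smoothly compatible
for h in [1e-2, 1e-4, 1e-6]:
    print((psi_inv(h) - psi_inv(0.0)) / h)

# nevertheless F : R_phi -> R_psi, F(x) = x^(1/3), reads as the identity in the charts:
# psi(F(phi^{-1}(x))) = (x^(1/3))^3 = x, so the two smooth manifolds are diffeomorphic
xs = np.linspace(-2.0, 2.0, 9)
F_hat = psi_inv(xs) ** 3
print(np.allclose(F_hat, xs))     # True
```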

3. The partition of unity (*)


In the context of topological spaces the gluing Lemma is often useful; it is needed
to “glue” continuous functions defined on open or closed subsets. In the case of smooth
manifolds we have something weaker:

Lemma 3.1. Take M and N smooth manifolds and {Ui} an open cover of M. Suppose then
that we have a collection of smooth functions Fi : Ui → N agreeing on the overlaps; then there
exists a unique smooth map F : M → N such that F|Ui = Fi.
What about closed subsets? Take as a counterexample M = N = R with the
standard smooth structure, U+ = [0, 1], U− = [−1, 0] and f±(x) = ±x. They obviously agree
on the overlap {0}, but their “union” |x| is not smooth at the origin.
Question: is there a way to have a weaker version of the gluing Lemma for smooth
manifolds, namely blend together local smooth objects without assuming that they agree
on “too big” overlaps ??
Answer: Yes it is called the partition of unity, and it is a crucial tool in differential
geometry. The idea is to construct functions that are identically vanishing in specified
parts of a manifold. A partition of unity is used in two ways:
• to decompose a global object on a manifold into a locally finite sum of local
objects on the open sets {Ui } of an open cover
• to patch together local objects on the open sets {Ui } into a global object on the
manifold.
Thus, a partition of unity serves as a bridge between global and local analysis on a
manifold. This is useful because, while there are always local coordinates on a manifold,
there may be no global coordinates; we will use it repeatedly in subsequent sections. It is the single feature
that makes the behaviour of smooth manifolds so different from that of real-analytic or
complex manifolds.

One of the main tools needed will be the bump function, which we now introduce.
Let us briefly remind ourselves that, given a smooth function f : M → R on M, we can
define its support as follows:
supp(f) = closure{p ∈ M s.t. f(p) ≠ 0},
and we say that f is supported in some subset U if supp(f) ⊆ U.
Definition 3.2. A smooth function ρ : Rn → R is called a BUMP function for A
supported in U ⊂ Rn if it is equal to 1 on a specified closed set A and supp(ρ) ⊂ U.
Sometimes we will consider A a neighbourhood of some point p ∈ M and we say
bump function at p supported in U, meaning that it is 1 in some neighbourhood of p.
We now construct an example of a useful and non trivial bump function.
Fact: The function
f(t) := e^(−1/t) for t > 0 , f(t) := 0 for t ≤ 0
is smooth (BUT not analytic). Moreover the following function, which we can construct out of
the previous one,
r(t) := f(2 − t) / (f(2 − t) + f(t − 1))
is such that r(t) = 1 when t ≤ 1, 0 < r(t) < 1 when 1 < t < 2, and r(t) = 0 otherwise.
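Below is a minimal numerical sketch of these facts; it is not part of the notes and the function names are ours. It can be used to plot or spot-check f, r and the bump ρ.

```python
import numpy as np

def f(t):
    # smooth but non-analytic: e^{-1/t} for t > 0, and 0 for t <= 0
    return np.where(t > 0, np.exp(-1.0 / np.where(t > 0, t, 1.0)), 0.0)

def r(t):
    # smooth step: 1 for t <= 1, strictly between 0 and 1 for 1 < t < 2, 0 for t >= 2
    return f(2.0 - t) / (f(2.0 - t) + f(t - 1.0))

def rho(x):
    # bump on R^n: identically 1 on the closed unit ball, supported in the ball of radius 2
    return r(np.linalg.norm(x))

print(r(np.array([0.5, 1.0, 1.5, 2.0, 3.0])))                 # [1.  1.  0.5 0.  0. ]
print(rho(np.array([0.3, 0.4])), rho(np.array([3.0, 0.0])))   # 1.0  0.0
```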
Fact: The function ρ : Rn → [0, 1] defined by ρ(x) = r(|x|) is a bump function
supported in any open set containing B̄2(0) and identically equal to 1 on B̄1(0). The
next goal is to use bump functions efficiently. For example:

Proposition 3.3. Suppose fU is a smooth function defined on a neighbourhood U of
q in a manifold M. Then there is a smooth function f on M which agrees with fU in
some possibly smaller (w.r.t. U) neighbourhood of q.
Proof. It is enough to take a bump function ρ supported in U and identically 1 in a neighbourhood
of q, and then define
f := ρ fU on U , f := 0 otherwise. □
We want something more: we want to “glue” smooth functions possibly in disagreement on overlaps.
Definition 3.4. We say that a collection of subsets {Ui} of a topological space is locally finite
if each point admits a neighbourhood that intersects at most finitely many of the Ui.
Definition 3.5. Consider U = {Ui} an open cover of a smooth manifold M. A partition
of unity subordinate to U is a collection of smooth functions φi : M → R (they are not
coordinate charts!) such that
• 0 ≤ φi(p) ≤ 1
• supp(φi) ⊂ Ui
• the set {supp φi} is locally finite
• Σi φi(p) = 1 ∀p ∈ M
Remark Since the collection of supports is locally finite, every point p belongs to
finitely many of the sets supp(φi), thus φi(p) ≠ 0 only for finitely many i, and Σi φi is
a finite sum at every point.
Remark Suppose now {fi} is a collection of smooth functions on a manifold M such
that {supp(fi)} is locally finite. Then every point p in M has a neighbourhood Up that
intersects finitely many of the supp(fi); thus Σi fi is actually a finite sum on Up, and the
function Σi fi is well defined and smooth on the whole manifold M. It is a natural
question whether a smooth manifold admits a partition of unity. Let’s start with something
somehow simpler.
Proposition 3.6. Let M be a compact manifold and C = {Ca} an open cover. There
exists a partition of unity subordinate to C.
Proof. First of all we note, without proving it, that, given functions f1, .., fn on a manifold
M,
supp(Σi fi) ⊂ ∪i supp(fi).
Now for each p ∈ M we denote by ψp a bump function at p supported in some Ca containing p.
We call A the set of all possible values of a. We call Wp a neighbourhood of p
where ψp > 0. Now the crucial point is that, M being compact by assumption, the cover
{Wp} has a finite subcover that we name {Wp1, .., Wpm}. Being ψpi the corresponding bump
functions, we construct ψ = Σ_{i=1}^m ψpi and
ϕi := ψpi / ψ.
Note that Σi ϕi = 1 and supp ϕi ⊂ Ca for some a ∈ A. This is really close to a partition
of unity; we just need to reindex to bring the given open cover into the game. Call
a(i) ∈ A an index so that
supp ϕi ⊂ Ca(i),
and, fixed a ∈ A, call ρa := Σ_{i s.t. a(i)=a} ϕi. It is easy to check that {ρa} is a partition of
unity subordinate to C. □
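The normalization step of this proof can be illustrated numerically. The sketch below is not from the notes; the space, the cover and all names are chosen by us (an interval plays the role of the compact space).

```python
import numpy as np

def f(t):
    return np.where(t > 0, np.exp(-1.0 / np.where(t > 0, t, 1.0)), 0.0)

def r(t):
    # 1 for t <= 1, 0 for t >= 2, smooth in between
    return f(2.0 - t) / (f(2.0 - t) + f(t - 1.0))

# "M" = [0, 3], open cover C_1 = (-1, 2), C_2 = (1, 4); rescaled bumps supported in them
psi1 = lambda x: r(np.abs(x - 0.5) / 0.7)   # supp in (-0.9, 1.9) ⊂ C_1
psi2 = lambda x: r(np.abs(x - 2.5) / 0.7)   # supp in (1.1, 3.9) ⊂ C_2

x = np.linspace(0.0, 3.0, 301)
total = psi1(x) + psi2(x)
print(total.min() > 0)                      # True: the finite sum never vanishes on M

phi1, phi2 = psi1(x) / total, psi2(x) / total
print(np.allclose(phi1 + phi2, 1.0))        # True: {phi1, phi2} sums to 1
print(phi1.min() >= 0 and phi1.max() <= 1)  # True: values stay in [0, 1]
```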
To extend this result to every manifold we need some results from topology replacing
compactness.
Lemma 3.7. Every top mfd admits a countable locally finite cover by precompact (mean-
ing the closure is compact) open sets.
We now use this result to prove the following
Proposition 3.8. Consider a smooth manifold M. Every open cover {Ci} of M admits
a refinement, that is another open cover {Wa} such that for each Wa we can find a Ci so
that Wa ⊂ Ci, satisfying:
a) {Wa} is countable and locally finite
b) we can always find a diffeomorphism ψa : Wa → B3(0)
c) Ua := ψa−1(B1(0)) still cover M;
that is, {Wa} is a regular refinement of {Ci}.
Comment: The choice of B3(0) and B1(0) used in the previous definition may look a bit
strange; we will need it in the next theorem, and it is a choice made in order to avoid
writing balls with increasing radii r < r′ < r″.

Proof. See for example Lee’s book “Introduction to smooth manifolds”. The compact
manifold case is somehow simpler but less instructive and can be found in L. W. Tu’s book
“An introduction to manifolds”.
Let C = {Ci} be an open cover of M. Consider {Vj} a countable locally finite cover of
M by precompact open sets, which we know exists by the previous Lemma. Then for each
p ∈ M construct (Wp, ψp), a coordinate chart centred at p, so that
• ψp(Wp) = B3(0)
• for every p we can find a Ci so that Wp ⊂ Ci
• when p ∈ Vj then Wp ⊂ Vj (comment: this is possible because of the local finiteness
of {Vj}, that is, each p ∈ M has a neighbourhood intersecting finitely many
of the Vj).
Define now Up = ψp−1(B1(0)), and observe that for every j the sets {Up, p ∈ V̄j} form an
open cover of V̄j; by compactness of V̄j we can find a finite subcollection of
those covering it, which we call {U_1^(j), .., U_n(j)^(j)}, with corresponding {W_1^(j), .., W_n(j)^(j)}. The
collection {(W_i^(j), ψ_i^(j))} refines C and satisfies b) and c) by construction. We need to prove a).
It is obviously countable; is it also locally finite? This is a direct consequence of the
fact that {Vj} is a countable locally finite cover of M. We refer to the aforementioned
books for more details. □
Theorem 3.9. Given a smooth mfd M and an open cover C = {Ci } we can always find
a partition of unity subordinate to C
Proof. Consider a regular refinement {Wa}, a ∈ A, of {Ci}, where A is just an index set. Define then
Ua := ψa−1(B1(0)) , Oa := ψa−1(B2(0)).
Note that by hypothesis {Wa} still covers M. We then use the bump function ρ constructed
previously to construct
fa := ρ ◦ ψa on Wa , fa := 0 on M \ Ōa.
Note that supp fa ⊂ Wa. Define then
ga(p) := fa(p) / Σa fa(p).
It is here that the regular refinement property plays a crucial role: the denominator
contains in fact finitely many non vanishing terms, {Wa} being locally finite. Observe
then that fa = 1 on Ua by construction, and since {Ua} covers M by the regularity
of the refinement, every point is contained in some Ua and we can conclude that the
denominator never vanishes; moreover Σa ga = 1 and 0 ≤ ga(p) ≤ 1 for all p in M. Now we
need to reindex to go back to the given cover {Ci}. Assign each a ∈ A to exactly one i
with Wa ⊂ Ci, call A(i) ⊂ A the set of indices assigned to Ci, and define
φi := Σ_{a ∈ A(i)} ga.
Now one can check this function satisfies the desired requests. In particular note that
{supp(φi)} is locally finite because {Wa} is locally finite and supp(fa) ⊂ Wa, thus supp(φi) ⊂
Ci. □

We can now use this theorem to define the bump function on a manifold:
Corollary 3.10. Let M be a smooth mfd then for any closed set A ⊂ M and open set
U containing A we can find a smooth function β : M → R such that:
β(p) = 1 , ∀p ∈ A
and supp(β) ⊂ U
Proof. Consider the open cover given by {U1 := U, U2 := M \ A}. We know we can
construct a partition of unity {φ1, φ2} subordinate to this cover. The function φ2 by
construction vanishes on A, because supp(φ2) ⊂ M \ A; thus on A we have Σi φi = φ1 = 1,
and supp(φ1) ⊂ U1 = U, thus φ1 is our β. □

This is a powerful object that we call again a BUMP function for A supported in U.
Let’s use it:
Lemma 3.11. Let f be a smooth function defined on a closed subset A of M. For any
open U containing A we can find f^ext, a smooth function on M, such that it coincides with
f on A and supp(f^ext) ⊂ U.
Proof. We extend f to a smooth function, still called f, on some neighborhood W of A,
assuming that W ⊂ U. Take the bump function β for A supported in W and define
f^ext := β f on W , f^ext := 0 on M \ supp(β).
By construction β = 1 on A ⊂ W, thus f^ext|A = f|A. The support is contained
in W and thus in U. Note that both definitions coincide on the overlap W \ supp(β), where
obviously βf = 0. □

4. Tangent and cotangent bundle


We first comment on tangent vectors on Rn viewed as derivations, and then we generalize.
Let’s take R2 equipped with the standard basis {e1, e2} and the origin. Let’s
consider then the space of vectors applied at a general point p ∈ R2. We call this
vector space R2_p and its general element vp, that is, the vector v = v1 e1 + v2 e2 applied at p. It
is naturally equipped with the standard basis {e1|p, e2|p}, so that one trivially obtains
vp = v1 e1|p + v2 e2|p. Vectors applied at a point are useful to take directional derivatives.
In particular, given a smooth function f, a point p and a vector v, we can define
ṽp · f := d/dt f(p + tv)|_{t=0} = v^i ∂f/∂x^i (p).
It is sometimes useful to use the notation ∂i|p f to indicate ∂f/∂x^i(p).

Definition 4.1. A linear map Xp : C∞(Rn) → R (that we will denote by Xp · f or simply
Xp f) satisfying the Leibniz rule
Xp · (fg) = f(p) Xp · g + g(p) Xp · f
is called a derivation at p.
Properties:
(1) f constant ⇒ Xp · f = 0.
This is easy to prove: consider the function f = 1, then by Leibniz one has
Xp · f = Xp · (f²) = 2 f(p) Xp · f = 2 Xp · f, from which Xp · f = 0; the general constant follows by linearity.
(2) f(p) = g(p) = 0 ⇒ Xp · (fg) = 0.
The space of all derivations at a point p in Rn is naturally a vector space, which we
denote by Tp Rn; directional derivatives at a point, ṽp, are derivations. Natural question:
is every derivation of this form?
Proposition 4.2. Rn_p and Tp Rn are isomorphic vector spaces.
Proof. Consider the map vp → ṽp := v^i ∂i|p. Consider the function f = x^j; then we
have
ṽp x^j = v^i ∂i|p x^j = v^j.
Suppose now ṽp is the zero derivation; the previous relation teaches us that v^j must
vanish. Repeating the same argument for every fixed j we end up with v = 0, thus the
kernel of this map is only the null vector. Let’s check surjectivity. Consider a general
Xp ∈ Tp Rn and let’s use the Taylor expansion around the point p, with coordinates (x^1_0, .., x^n_0),
to write a general smooth function f as follows:
f(x) = f(p) + ∂i f(p)(x^i − x^i_0) + gi(x)(x^i − x^i_0)
for some smooth functions gi vanishing at p. We now use the Leibniz rule and we get
Xp · f = Xp · f(p) + Xp · (∂i f(p)(x^i − x^i_0)) + Xp · (gi(x)(x^i − x^i_0)) = (∂i f)(p) Xp · (x^i − x^i_0),
since the first term vanishes by property 1 and the last term vanishes by property 2.
Defining now v^i := Xp · (x^i) we obtain that every Xp comes from a vector v = v^i ei. □


Corollary 4.3. {∂i |p } is a basis for Tp Rn
A (smooth) vector field X on an open subset U of Rn is a function that assigns to
each point p in U a tangent vector Xp in Tp Rn . Every element of this space can be
written as:
X = ai (x)∂i
with ai smooth functions. In particular for every p ∈ U one has
(X · f )(p) = Xp · f = ai (p)∂i |p f
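As a small illustration, not part of the notes (the vector field and the functions are chosen by us), one can let sympy act with such a vector field on functions and verify the Leibniz rule of Definition 4.1.

```python
# The vector field X = a^i(x) ∂_i on R^2 acting on smooth functions.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
a = (x2, -x1)                          # components a^1, a^2 of X (a rotation field)
f = x1**2 + x1 * x2

Xf = a[0] * sp.diff(f, x1) + a[1] * sp.diff(f, x2)
print(sp.expand(Xf))                   # -x1**2 + 2*x1*x2 + x2**2

# the Leibniz rule X·(fg) = f X·g + g X·f, checked symbolically
g = sp.sin(x1)
Xg = a[0] * sp.diff(g, x1) + a[1] * sp.diff(g, x2)
Xfg = a[0] * sp.diff(f * g, x1) + a[1] * sp.diff(f * g, x2)
print(sp.simplify(Xfg - (f * Xg + g * Xf)))   # 0
```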
We aim now to generalize to a general smooth manifold; many results and definitions
discussed previously carry over naturally to this setting.
Definition 4.4. A linear map Xp : C∞(M) → R (that we will denote by Xp · f or simply
Xp f) satisfying the Leibniz rule
Xp · (f g) = f (p)Xp · g + g(p)Xp · f
is called a derivation at p.
Properties 1) and 2) discussed previously still hold. There is an alternative, somehow more
geometrical, definition that uses the concept of tangent vectors to curves. In order
to carefully introduce this point of view we need more structure.
Suppose now we have a smooth map F : M → N; then, for every p ∈ M, it naturally induces a
linear map F∗ : Tp M → T_F(p) N defined by
(F∗ Xp)_F(p) · f := Xp · (f ◦ F),
called the pushforward. Sometimes it is useful to specify the initial point and we are
forced to modify the notation and use dFp or F∗|p instead of just F∗. We will sometimes
write just F∗ Xp instead of (F∗ Xp)_F(p) in order to simplify the notation.
It is interesting to observe that the definition of tangent space is actually a local
construction, even if we have used smooth functions on M to define it and not functions on a
neighborhood U of the point p. Let’s justify this by using the following Lemma:
Lemma 4.5. If f, g ∈ C∞(M) agree on some neighborhood U of p ∈ M then
Xp · f = Xp · g.
Proof. Take the closed subset A := M \ U, consider a bump function β for A
supported in M \ {p}, and define h = f − g. Note now that on U we have h = 0 and
on A we have β = 1; combining these two facts we have h(q) = h(q)β(q) for every
q ∈ M, thus by Leibniz:
Xp · h = Xp · (hβ) = h(p) Xp · β + β(p) Xp · h,
which vanishes because h(p) = β(p) = 0. Thus by linearity Xp · f = Xp · g. □

Fact: The tangent space at a point p of a finite dimensional vector space V , that is
Tp V , is isomorphic to V , namely
Tp V ∼ V , ∀p ∈ V
So far we were able to write the representation of functions in a chart; we will do the same for
derivations. Consider a chart (U, ϕ) for M. We have the canonical basis of
Tp̂ Rn given by ∂i|p̂, and, ϕ being a diffeomorphism onto its image, we know by the second property
of the pushforward that Tp̂ Rn ≅ Tp M; thus the set {∂i|p} defined by
∂i|p := (ϕ−1)∗ ∂i|p̂
is a basis for Tp M. What is the meaning of this object? Consider a function f : U → R;
then
∂i|p f = (ϕ−1)∗ ∂i|p̂ f = ∂i|p̂ (f ◦ ϕ−1) = ∂i|p̂ f̂ = ∂f̂/∂x^i (p̂).
Let’s see now how to push forward this basis. Consider two charts (U, ϕ) and (V, ψ) for
M and N respectively, denote the coordinates by p̂ = ϕ(p) = (x^1, ..., x^n) and
q̂ = ψ(q) = (z^1, ..., z^m), and consider f ∈ C∞(N):
(F∗(∂_x^i|p)) f = (∂_x^i|p)(f ◦ F)
= (∂_x^i|p̂)(f ◦ F ◦ ϕ−1)
= (∂_x^i|p̂)((f ◦ ψ−1) ◦ (ψ ◦ F ◦ ϕ−1))
= (∂_x^i|p̂)(f̂ ◦ F̂)
(chain rule) = (∂F̂^a/∂x^i)(p̂) (∂_z^a f̂)(F̂(p̂))
= (∂F̂^a/∂x^i)(p̂) ∂_z^a|_F(p) f,
from which we get
F∗(∂_x^i|p) = (∂F̂^a/∂x^i)(p̂) ∂_z^a|_F(p).
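This formula can be checked symbolically for a concrete coordinate representation F̂; the map and the names below are chosen by us and are not from the notes.

```python
# F_*(∂_{x^1}|_p) acts on a test function exactly as the chain rule dictates.
import sympy as sp

x1, x2, z1, z2 = sp.symbols('x1 x2 z1 z2', real=True)
F_hat = sp.Matrix([x1 * x2, x1 + x2])       # coordinate representation of F
J = F_hat.jacobian([x1, x2])                # the matrix ∂F̂^a/∂x^i

f = z1**2 + z2                              # a test function on the target chart
grad_f = sp.Matrix([sp.diff(f, z1), sp.diff(f, z2)])

# left-hand side: ∂_{x^1}|_p (f ∘ F̂)
lhs = sp.diff(f.subs({z1: F_hat[0], z2: F_hat[1]}), x1)
# right-hand side: (∂F̂^a/∂x^1) (∂_{z^a} f) ∘ F̂
rhs = sum(J[a, 0] * grad_f[a].subs({z1: F_hat[0], z2: F_hat[1]}) for a in range(2))
print(sp.simplify(lhs - rhs))               # 0
```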
We have seen that in a given coordinate chart (U, ϕ), with coordinate functions
(x^1, .., x^n), we have a natural basis for Tp M; thus every tangent vector at p can
be written as
Xp = a^i ∂i|p = a^i ∂/∂x^i|p.
What if we change to a chart (V, ψ) with coordinate functions (x̃^1, .., x̃^n)? (Note we take
p ∈ V ∩ U.) In this case we can use the basis {∂̃i|p}, with ∂̃i|p := ∂/∂x̃^i|p, and we have
Xp = ã^i ∂̃i|p.
How are the tilde and non-tilde quantities related? First of all let’s write the transition map:
ψ ◦ ϕ−1 (x) = (x̃1 (x), .., x̃n (x)) = x̃(x)
Let’s compute ∂i|p by acting on a general smooth function f:
∂i|p f = ((ϕ−1)∗ ∂i|ϕ(p)) f
= (∂i|ϕ(p)) (f ◦ ϕ−1)
= (∂i|ϕ(p)) ((f ◦ ψ−1) ◦ (ψ ◦ ϕ−1))
(chain rule) = (∂x̃^j/∂x^i)|_ϕ(p) ∂̃j|p f,
from which we get
∂i|p = (∂x̃^j/∂x^i)|_ϕ(p) ∂̃j|p
and thus
a^i = (∂x^i/∂x̃^j)|_ψ(p) ã^j.

There is a more geometrical interpretation of tangent vectors we are now able to discuss
properly
Definition 4.6. A smooth curve is a smooth map γ : J ⊂ R → M ; we will often denote
it by γ(t)

Take t0 ∈ J, define p0 = γ(t0), and let f be a smooth function on M; consider then the function
restricted to the curve, that is f(γ(t)), and let’s see how it changes:
d/dt (f ◦ γ)(t0) = (γ∗ ∂t|t0) f.
In a given coordinate chart containing γ(t0) we know how to handle this object and we
get
(γ∗ ∂t|t0)_γ(t0) f = d/dt|_t0 (f ◦ γ) = d/dt|_t0 ((f ◦ ϕ−1) ◦ (ϕ ◦ γ)) = d/dt|_t0 f̂(γ^1(t), .., γ^n(t)),
where γ̂ = (γ^1(t), .., γ^n(t)) simply means we are taking the coordinate representation of
every point along the curve. In conclusion we obtain
γ̇^i(t0) ∂i|_γ̂(t0) f̂ = γ̇^i(t0) ∂i|_γ(t0) f,
with γ̇^i := ∂t γ^i. We will sometimes denote this vector by γ̇_γ(t0). In conclusion, given a
curve γ we can produce a tangent vector at p0 = γ(t0), namely γ̇^i(t0) ∂i|_p0.
Natural question: does every tangent vector arise from a curve?
Lemma 4.7. Let p ∈ M; then every Xp ∈ Tp M is the tangent vector to some curve.
Proof. Consider a coordinate chart (U, ϕ) centered at p (that is, ϕ(p) = 0) and the vector
Xp = a^i ∂i|p. We want to construct a curve γ : (−ε, ε) → U such that γ̇^i(0) = a^i and
γ(0) = p. To this aim we define γ̂(t) = (t a^1, .., t a^n), from which we have
γ(0) = ϕ−1(γ̂(0)) = ϕ−1(0) = p
and
γ̇_γ(0) = γ̇^i(0) ∂i|_γ(0) = a^i ∂i|_γ(0)=p. □

5. Vector bundles
We now want to extend the notion of vector field on Rn to a general smooth manifold
M, namely something that, evaluated at each point p, produces an element of Tp M
“smoothly”. This goal can be achieved using the general notion of vector bundle that
we introduce in the following.
Definition 5.1. Consider M and E smooth manifolds and a smooth surjective map π :
E → M such that
• Ep := π−1(p), named the fiber over p, is a real vector space of dimension k;
• for every p ∈ M we can find a neighborhood U of p and a diffeomorphism
Φ : π−1(U) → U × Rk
s.t. Φ|_π−1(p) : Ep → {p} × Rk ≅ Rk is a linear isomorphism and pr1 ◦ Φ = π. A
diffeomorphism Φ satisfying these requirements is called a local trivialization.
We will call the set of data (E, M, π, {(U, Φ)}) a (smooth) vector bundle of rank k (in the
future we will just denote it by π : E → M, or simply E).
What happens when we change trivialization?
Lemma 5.2. Given two trivializations (U, Φ) and (V, Ψ) of a vector bundle of rank k
with U ∩ V ≠ ∅, we can find a smooth map g : U ∩ V → GLk(R) s.t. for every
p ∈ U ∩ V we have
Φ ◦ Ψ−1(p, v) = (p, g(p) · v),
where we denoted by g(p) · v the action of an element of GLk(R) on an arbitrary element
v ∈ Rk.
Proof. We know that pr1 ◦ Φ = π = pr1 ◦ Ψ, thus the maps
Ψ−1 : (U ∩ V) × Rk → π−1(U ∩ V) and Φ : π−1(U ∩ V) → (U ∩ V) × Rk
commute with the projections onto U ∩ V. Then we can conclude that
pr1 ◦ Φ ◦ Ψ−1 = pr1, hence Φ ◦ Ψ−1(p, v) = (p, ν(p, v))
for some ν : (U ∩ V) × Rk → Rk that is smooth by construction. For fixed p we
know that the trivialization maps are linear isomorphisms on the fibers, thus ν(p, ·) is linear:
ν(p, v) = g(p) · v with g(p) ∈ GLk(R). Smoothness of ν easily implies smoothness of g: choose a
basis {ba} of Rk with dual basis {β^a}; then we can write explicitly
g(p) · v = g^a_b(p) v^b ba,
from which one gets that g^a_b(p) = β^a(ν(p, bb)), that is, the g^a_b are smooth functions, being
compositions of smooth functions. □
Definition 5.3. A local section of a vector bundle is a smooth map σ : U ⊂ M → E
s.t. π ◦ σ = idU. In the case U = M we call it a global section, and we denote
the set of global sections by Γ(E).
A set of local (global) sections {σ1, .., σk} s.t. for each p ∈ U (or every point in M) we
have that
{σ1(p), .., σk(p)}
is a basis for Ep is called a local (global) frame.
Notation: It is common to identify a section of a rank k vector bundle at a point,
in a given trivialization, with an element of Rk, that is,
σ^Φ(p) := Φ ◦ σ(p) = (p, v) ∼ v ∈ Rk.
Observation: the trivialization being a pointwise isomorphism, we can conclude that
local frames and trivializations are in 1-1 correspondence (see the exercise sheet).
Definition 5.4. The disjoint union
TM := ⊔_p Tp M
is named the tangent bundle.
We need to prove it deserves the name bundle by showing that:
(1) TM is a smooth manifold
(2) π : TM → M is a smooth surjective map
(3) it can be equipped with a canonical set of local trivializations.
Proof of 1: Given a coordinate chart (U, ϕ) for M we can construct
Φ : π−1(U) ⊂ TM → Ũ × Rn ⊂ R2n , (p, a^i ∂i|p) ↦ (x^1, .., x^n, a^1, .., a^n) with x = p̂;
it is a bijection with inverse map
Φ−1 : Ũ × Rn → π−1(U) ⊂ TM , (x^1, .., x^n, a^1, .., a^n) ↦ (ϕ−1(x), a^i ∂i|_ϕ−1(x)).
We now want to use Φ to transfer the topology of ϕ(U) × Rn ⊂ R2n to TU.
We declare a set A in TU := π−1(U) open if its image by Φ is open in ϕ(U) × Rn ⊂ R2n.
Observe that if V is an open subset of U then the subset topology of TV coincides with
the one given by the trivialization map restricted to TV. Consider now an atlas given by
the open sets γ := {Ui} and let’s construct
C := {A s.t. A is open in TUi with Ui ∈ γ},
that is, the collection of all open subsets of the TUi, where {Ui} is the given collection of coordinate
open sets.
Lemma 5.5. (**) Take U1 and U2 coordinate open sets in M and A1 and A2 open
inside TU1 and TU2; then A12 := A1 ∩ A2 is open in T12 := T(U12), where U12 = U1 ∩ U2.
Proof. Note that A1 ∩ T(U12) and A2 ∩ T(U12) are open in T(U12) by definition
of the subset topology; note then that A12 ⊂ TU1 ∩ TU2 = T12, hence
A12 = A1 ∩ A2 = A1 ∩ A2 ∩ T12 = (A1 ∩ T(U12)) ∩ (A2 ∩ T(U12)),
thus it is open, being an intersection of open sets. □
It is easy to see that the union of all A in C gives TM. This observation and the
previous lemma show that C satisfies the conditions of Proposition (1.3), which we report here
for convenience:
A collection β = {ui} of subsets of S is a basis for some topology of S if and only if
• S is the union of all ui
• given ui and uj and p ∈ ui ∩ uj there is uk ∈ β s.t. p ∈ uk ⊂ ui ∩ uj
Indeed, for every p in A1 ∩ A2 we can take A12 itself, which is in C as shown before.
Thus we can conclude that on TM we can induce the topology generated by this basis.
Lemma 5.6. (*) A manifold M has a countable basis consisting of coordinate open sets.
Proof. Let {(Ua, ϕa)} be a maximal atlas on M and γ = {ui} a countable basis for
M. For each p ∈ M and Ua containing p we can find an element of γ, which we name up,a, s.t.
p ∈ up,a ⊂ Ua (essentially by the definition of a basis for the topology). The collection
{up,a}, without duplicate elements, is a subcollection of γ and thus countable. Is
it a basis? For any open set U and p ∈ U we can find Ua s.t.
p ∈ Ua ⊂ U,
thus
p ∈ up,a ⊂ Ua ⊂ U,
which shows that {up,a} is a basis for the topology of M; it consists of coordinate open sets,
each up,a being an open subset of a coordinate open set. □

Proposition 5.7. (*) TM is second countable.
Proof. Consider a countable basis of coordinate open sets {Ui}. Each TUi is homeomorphic
to an open subset of R2n and hence second countable. For each TUi choose a countable basis;
the union of these bases is countable, the {Ui} being countably many, and it is a basis for TM,
which is therefore second countable. □

It is then easy to prove that TM is Hausdorff: if (p, Xp) and (q, Xq) live on two
different fibers, we can find two neighborhoods U1 and U2 of p and q
separating the points, so that TU1 and TU2 separate (p, Xp) and (q, Xq); if the two points lie
on the same fiber, we can use a single chart and the homeomorphism of TU with an open subset of R2n.
Finally, TM is locally Euclidean by construction.
We know what happens when we change coordinate chart from (U, ϕ) to (V, ψ) on M
and on Tp M; putting this together we obtain
Ψ ◦ Φ−1(x^1, .., x^n, a^1, .., a^n) = (x̃^1(x), .., x̃^n(x), (∂x̃^1/∂x^i)(x) a^i, .., (∂x̃^n/∂x^i)(x) a^i),
which is clearly smooth. We can thus construct in this way a smooth atlas {(TUi, Φi)}.
Proof of 2: By construction.
Proof of 3: It is easy to show that Φ̃ = (ϕ−1 × id_Rn) ◦ Φ : π−1(U) ⊂ TM → U × Rn is a
trivialization map.
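A minimal sympy sketch of this chart transition on TM, not from the notes: the charts (Cartesian and polar on a half plane) and all names are ours.

```python
# Ψ ∘ Φ^{-1}(x, a) = (x̃(x), (∂x̃^j/∂x^i)(x) a^i): base point through x̃, fiber through the Jacobian.
import sympy as sp

x1, x2, a1, a2 = sp.symbols('x1 x2 a1 a2', real=True)
xt = sp.Matrix([sp.sqrt(x1**2 + x2**2), sp.atan2(x2, x1)])   # x̃(x) = (r, θ)
J = xt.jacobian([x1, x2])                                    # ∂x̃^j/∂x^i

def tangent_transition(x_vals, a_vals):
    subs = {x1: x_vals[0], x2: x_vals[1]}
    new_base = [c.subs(subs) for c in xt]
    new_fiber = list(J.subs(subs) * sp.Matrix(a_vals))
    return new_base, new_fiber

base, fiber = tangent_transition((1, 1), (a1, a2))
print([sp.simplify(c) for c in base])    # [sqrt(2), pi/4]
print([sp.simplify(c) for c in fiber])   # (a1 + a2)/sqrt(2) and (a2 - a1)/2
```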

Definition 5.8. A (smooth) vector field is a section of TM. Sometimes one might be
interested in maps from M to TM which are not necessarily smooth or continuous; here we will
consider only smooth maps, thus (smooth) sections of TM, unless otherwise specified. The standard
notation for Γ(TM) is X(M).
Lemma 5.9. In every coordinate chart, the component functions of X ∈ X(M) are
smooth functions. Moreover a vector field, for any f ∈ C∞(M), naturally defines the
function X · f : M → R given by (X · f)(p) = Xp · f; this function is smooth.

Proof. Given a chart (U, ϕ) with coordinate functions x = (x^1, .., x^n) for M, we construct
the coordinate representation of X : M → TM,
X̂(x) = (x^1, .., x^n, â^1(x), .., â^n(x));
thus the component functions â^i on U must be smooth functions, and in the given
coordinate chart the coordinate representation of X · f is
â^i(x) (∂i f̂)(x),
which is obviously smooth, â^i and ∂i f̂ being smooth functions. For this reason we think of
a vector field in a given coordinate chart as something of the form a^i ∂i, with a^i smooth
functions on U. □

6. The cotangent bundle

Take now the dual space of Tp M ; it is called the cotangent space and denoted by Tp∗ M .
In a given coordinate open setU , with coordinate functions (x1 , .., xn ) we have the co-
∂ i
ordinate basis for Tp M given by { ∂x i p }, and we denote the dual one by { dx p }. Thus

every element of Tp M can be written as ωp = ωi dxi p . In particular we note that, being


dxi p (∂j |p ) = δji one has that
ωi = ωp ( ∂i |p )
What happens when we change the coordinate chart? denoting as usual the new coor-
dinate function by (x̃1 , .., x̃n ) we know that
∂ x̃j
∂i |p = ∂˜j |p
∂xi ϕ(p)
thus
∂xi
dxi |p = dx̃j |p
∂ x̃j ϕ(p)
Let us summarize for convenience what we have obtained so far:
(x^1, .., x^n) → (x̃^1, .., x̃^n)
∂i|p → ∂̃i|p = (∂x^j/∂x̃^i)|_ψ(p) ∂j|p
a^i → ã^i = (∂x̃^i/∂x^j)|_ϕ(p) a^j
dx^i|p → dx̃^i|p = (∂x̃^i/∂x^j)|_ϕ(p) dx^j|p
ωi → ω̃i = (∂x^j/∂x̃^i)|_ψ(p) ωj
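These transformation rules can be verified symbolically. Below is a minimal sympy sketch, not part of the notes; the two charts (Cartesian and polar) and all names are ours.

```python
import sympy as sp

x1, x2, r, th = sp.symbols('x1 x2 r theta', real=True, positive=True)
back = sp.Matrix([r * sp.cos(th), r * sp.sin(th)])      # x(x̃)
to_polar = {x1: back[0], x2: back[1]}

Jx = back.jacobian([r, th])           # ∂x^i/∂x̃^j, as a function of x̃
f = x1**2 * x2                        # a test function
f_t = f.subs(to_polar)                # the same function in the x̃ chart

# covector components ω_i = ∂_i f transform with ∂x^i/∂x̃^j: ω̃_j = (∂x^i/∂x̃^j) ω_i
w = sp.Matrix([sp.diff(f, x1), sp.diff(f, x2)])
w_t_predicted = Jx.T * w.subs(to_polar)
w_t_direct = sp.Matrix([sp.diff(f_t, r), sp.diff(f_t, th)])
print(sp.simplify(w_t_predicted - w_t_direct))          # zero matrix

# vector components transform with the inverse Jacobian ∂x̃^j/∂x^i, so that the
# derivation a^i ∂_i f is chart independent
a = sp.Matrix([x2, x1]).subs(to_polar)                  # a^i in the x chart, at x(x̃)
a_t = Jx.inv() * a                                      # ã^j = (∂x̃^j/∂x^i) a^i
lhs = (w.subs(to_polar).T * a)[0]                       # a^i ∂_i f
rhs = (w_t_direct.T * a_t)[0]                           # ã^j ∂̃_j f
print(sp.simplify(lhs - rhs))                           # 0
```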

Definition 6.1. The disjoint union
T*M := ⊔_p Tp*M
is named the cotangent bundle.
Proposition 6.2. T*M has a natural structure of a vector bundle of rank n over M.
Proof. The proof mimics the one discussed previously for TM. □
Definition 6.3. Smooth sections of T*M, namely Γ(T*M), are named covector fields
on M. Covector fields are sometimes called differential one forms, and the set of covector
fields is also denoted by Ω1(M). Given a covector field ω and a smooth function f, then
fω is a covector field naturally defined by (fω)p = f(p) ωp.
Lemma 6.4. Let ω ∈ Ω1(M); then
• in any coordinate chart for M, in the coordinate basis for Tp*M, we will write
ω := ωi dx^i; the components ωi of the coordinate representation of ω are smooth
functions;
• given a smooth vector field X, the function ω(X)(p) = ωp(Xp) is smooth
(remember we use the notation ωp = ω(p) and Xp = X(p)).
Given a smooth function f we can naturally produce a covector field df, called the
differential of f. The symbol d will be discussed further in the following, where it will play a crucial
role; at this stage let us just view df as a single symbol. df is the covector field defined by
dfp(Xp) = Xp · f.
In a given coordinate chart we have
dfp(∂i|p) = (∂i f)(p),
so locally we may write df = ∂i f dx^i. Note that by construction df is a smooth map from
M to T*M. The “dual” manoeuvre with respect to the pushforward is named pullback:
given a smooth map F : M → N we construct
F* : T*_F(p) N → T*_p M
defined by
(F* ω_F(p))(Xp) = ω_F(p)(F∗ Xp).
Observation: if F : M → N is as usual a smooth map and ω ∈ Ω1(N), then F*ω ∈
Ω1(M). It is easy to check this in a coordinate system. Note that, with respect to the
vector field case, we do not have any ambiguity (as could be caused for example by a non
injective map) and everything is well defined, since we are pulling back objects. We
will sometimes use the following notations to denote the same thing:
(F*(ω_F(p)))_p = (F*ω)_p = F*(ω_F(p)).

Lemma 6.5. Let F : M → N be a smooth map, f ∈ C∞(N) and ω ∈ Ω1(N). Then


• F ∗ df = d(f ◦ F )
• F ∗ (f ω) = (f ◦ F )F ∗ ω
Proof. Let’s prove the first one. Take a smooth vector field X ∈ X(M )
(F ∗ dfF (p) )p (Xp ) = dfF (p) (F∗ Xp )F (p)
by definition of differential = (F∗ Xp )F (p) · f
by definition of pushforward = Xp · (f ◦ F )
by definition of differential = (d(f ◦ F ))p (Xp )
Similarly for the second property one has:
F*((fω)_F(p)) = F*(f(F(p)) ω_F(p))
= f (F (p))(F ∗ ωF (p) )

Observation: the previous Lemma can be used efficiently as follows. Consider the
identity map from M with a given coordinate chart to M with another coordinate
chart. The pullback can then be used to compute how a given covector field changes under a
change of coordinates.
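A minimal sympy check of the first property of Lemma 6.5 in coordinates; the map F and all names are ours, not from the notes.

```python
# In charts, (F* df)_i = (∂F̂^a/∂x^i) (∂_a f) ∘ F̂, and this must equal ∂_i (f ∘ F̂).
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2', real=True)
F_hat = sp.Matrix([x1 + x2**2, x1 * x2])    # coordinate representation of F : M -> N
f = sp.sin(y1) + y1 * y2                    # f in the chart on N

df = sp.Matrix([sp.diff(f, y1), sp.diff(f, y2)])                       # components of df
pullback_df = F_hat.jacobian([x1, x2]).T * df.subs({y1: F_hat[0], y2: F_hat[1]})

f_pulled = f.subs({y1: F_hat[0], y2: F_hat[1]})                        # f ∘ F
d_f_pulled = sp.Matrix([sp.diff(f_pulled, x1), sp.diff(f_pulled, x2)]) # d(f ∘ F)

print(sp.simplify(pullback_df - d_f_pulled))    # zero matrix: F* df = d(f ∘ F)
```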

7. Tensors
We start by defining tensors on a real finite dimensional vector space V, and we then
generalize the construction to a general manifold.
Definition 7.1. A covariant k tensor is a multilinear map
τ : V × .. × V (k times) → R.
The set of all covariant k tensors is denoted by T^k V.
A contravariant p tensor is a multilinear map
Y : V* × .. × V* (p times) → R.
The set of all contravariant p tensors is denoted by T_p V.
A mixed tensor of type (k, p) is a multilinear map
F : V × .. × V (k times) × V* × .. × V* (p times) → R.
The set of all mixed tensors is denoted by T^k_p V.
Observe that T^k V, T_p V and T^k_p V can be endowed with the structure of a real
vector space. We can in principle be more general and, for example, consider multilinear
maps V × W → R with both V and W real finite dimensional vector spaces.

Example 7.2. Consider α, β ∈ V ∗ and define the map


τα,β : V × V → R , (v1, v2) ↦ α(v1) β(v2).
This map is obviously multilinear, thus τα,β ∈ T^2 V. Let us remark at this point that,
given a ∈ R,
a τα,β = τ_{aα,β} = τ_{α,aβ}
and
τ_{α+α′,β} = τ_{α,β} + τ_{α′,β} , τ_{α,β+β′} = τ_{α,β} + τ_{α,β′}.
We will go back to those relations soon.
Take now τ ∈ T k V and ρ ∈ T m V and define an element of T k+m V denoted by τ ⊗ ρ by
(τ ⊗ ρ)(v1 , .., vk+m ) = τ (v1 , .., vk )ρ(vk+1 , .., vk+m )
It is easy to show that the tensor product is associative.
Particularly interesting are the covariant k tensors that are symmetric or alternating. Let
σ ∈ Sk be a permutation of k objects and denote by sign(σ) the sign of the permutation.

Definition 7.3. ρ ∈ T^k V is named symmetric if
ρ(v1, .., vk) = ρ(v_σ(1), .., v_σ(k)) for every σ ∈ Sk,
and alternating if
ρ(v1, .., vk) = sign(σ) ρ(v_σ(1), .., v_σ(k)) for every σ ∈ Sk.
An example of an alternating covariant tensor is the determinant of a matrix, viewed as a
multilinear map on column vectors. The sets of alternating and symmetric covariant k
tensors are denoted by Λ^k V and Σ^k V. Given ρ ∈ T^k V we can construct two natural maps
to obtain symmetric or alternating tensors:
Symm : T^k V → Σ^k V , Symm(ρ)(v1, .., vk) = (1/k!) Σ_{σ∈Sk} ρ(v_σ(1), .., v_σ(k)),
and similarly
Alt : T^k V → Λ^k V , Alt(ρ)(v1, .., vk) = (1/k!) Σ_{σ∈Sk} sign(σ) ρ(v_σ(1), .., v_σ(k)).

Take now τ ∈ Λ^k V and ρ ∈ Λ^m V; we can construct another element of Λ^(k+m) V, denoted
by τ ∧ ρ, as follows:
τ ∧ ρ = ((k+m)! / (k! m!)) Alt(τ ⊗ ρ).
Explicitly we have
(τ ∧ ρ)(v1, .., v_(k+m)) = (1/(k! m!)) Σ_{σ∈S_(k+m)} sign(σ) τ(v_σ(1), .., v_σ(k)) ρ(v_σ(k+1), .., v_σ(k+m)).

Proposition 7.4. Given ρ and τ as before we have that:
• the wedge product is associative
• τ ∧ ρ = (−1)^(km) ρ ∧ τ
Proposition 7.5. Let {bi} and {β^i} be dual bases for V and V*. Then the set of all
covariant tensors of rank k of the form
β^i1 ⊗ .. ⊗ β^ik
forms a basis for T^k V.
Proof. Take any ρ ∈ T^k V and define ρ_i1..ik := ρ(b_i1, .., b_ik). It is easy now to show that
ρ = ρ_i1..ik β^i1 ⊗ .. ⊗ β^ik. Thus the set {β^i1 ⊗ .. ⊗ β^ik} spans T^k V. Are these tensors linearly
independent? Consider the equation
λ_i1..ik β^i1 ⊗ .. ⊗ β^ik = 0
and apply it to any sequence (b_j1, .., b_jk); we discover that zero can be obtained only by
taking all the λ to be zero. □
Proposition 7.6. Given a real finite dimensional vector space V with basis and dual
basis {bi} and {β^j}, the set {β^i1 ∧ .. ∧ β^ik} with i1 < i2 < .. < ik is a basis for Λ^k V.
Proof. In order to avoid confusion with the notation we consider the case k = 2. Suppose
that λ = λ_ij β^i ∧ β^j = 0, with the sum over i < j. Applying this to (b1, b2), for example, we
get
λ(b1, b2) = λ_ij 2 Alt(β^i ⊗ β^j)(b1, b2) = λ_ij (β^i ⊗ β^j(b1, b2) − β^i ⊗ β^j(b2, b1)).
Being i < j, only one term survives, namely the one with i = 1, j = 2, proving that λ_12 = λ(b1, b2) = 0.
In this way we prove that all the λ_ij vanish, proving linear independence. Does our
set span Λ^2 V? Consider a general ρ ∈ Λ^2 V and define ρ′ := Σ_{i<j} ρ(bi, bj) β^i ∧ β^j; then on
every pair (bk, bm) one finds ρ = ρ′. □

Notation: It is useful to write any element of Λ^k V as
ω = (1/k!) ω_i1..ik β^i1 ∧ .. ∧ β^ik,
where we are not assuming any ordering of the indices, paying the price of the factor 1/k!, and
where
ω_i1..ik = ω(b_i1, .., b_ik);
the symbols ω_i1..ik are completely antisymmetric, in the sense that we get a minus
sign every time we exchange two indices. Before it was as if we were working with upper
triangular matrices, and now we work with antisymmetric matrices. Tensors are in fact
in some sense generalizations of matrices, while alternating tensors are generalizations of
antisymmetric matrices. Let’s see that in an easy example.
Example: Consider Λ^2 R^3 with canonical basis {ei} and dual basis {ε^j}. According to the
previous theorem any element ω of this space can be written as
ω_12 ε^1 ∧ ε^2 + ω_13 ε^1 ∧ ε^3 + ω_23 ε^2 ∧ ε^3.
Defining ω_21 = −ω_12 and so on, and using the properties of the wedge product, we can
obviously write the previous expression as
(1/2) ω_ij ε^i ∧ ε^j , i, j = 1, 2, 3.
Moreover
ω(e1, e2) = ω_12 2 Alt(ε^1 ⊗ ε^2)(e1, e2) = ω_12.
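A minimal numerical sketch of this example, not part of the notes (all names are ours, with 0-based indices in the code):

```python
# ω ∈ Λ^2 R^3 with antisymmetric components ω_ij, evaluated on vectors.
import numpy as np

def wedge_basis(i, j, v, w):
    """(ε^i ∧ ε^j)(v, w) = 2 Alt(ε^i ⊗ ε^j)(v, w) = v_i w_j - v_j w_i."""
    return v[i] * w[j] - v[j] * w[i]

omega = np.array([[0.0,  2.0, -1.0],
                  [-2.0, 0.0,  5.0],
                  [1.0, -5.0,  0.0]])        # ω_ij = -ω_ji

v = np.array([1.0, 2.0, 3.0])
w = np.array([0.0, 1.0, -1.0])

# ω(v, w) from the unordered expansion (1/2) ω_ij ε^i ∧ ε^j ...
val_sum = 0.5 * sum(omega[i, j] * wedge_basis(i, j, v, w)
                    for i in range(3) for j in range(3))
# ... and from the ordered expansion Σ_{i<j} ω_ij ε^i ∧ ε^j: same number
val_ordered = sum(omega[i, j] * wedge_basis(i, j, v, w)
                  for i in range(3) for j in range(i + 1, 3))
print(np.isclose(val_sum, val_ordered), val_sum)

# ω(e_1, e_2) recovers the component called ω_12 in the notes
e1, e2 = np.eye(3)[0], np.eye(3)[1]
print(0.5 * sum(omega[i, j] * wedge_basis(i, j, e1, e2)
                for i in range(3) for j in range(3)))   # 2.0
```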
We are now really tempted to write T^k V = V* ⊗ .. ⊗ V*, but what does it mean? What
are the properties of this symbol ⊗? In order to better understand it, let’s go back to
our first example τα,β. This is an example of a “decomposable” tensor, which can indeed be
written as α ⊗ β := αi βj β^i ⊗ β^j, where in standard notation we have denoted by αi
the components of the covector α with respect to the basis {β^i}. Observe that not every
tensor is decomposable, i.e. can be written as the tensor product of vectors
and covectors; in particular, in the case of T^2 R^2 for example, the tensor
ε^1 ⊗ ε^1 + 2 ε^2 ⊗ ε^2,
with {ε^1, ε^2} the canonical basis of (R^2)*, cannot be written as α ⊗ β for some
α, β ∈ (R^2)* (you should try to convince yourself). Let’s now be a tiny bit more abstract.
Consider two finite dimensional vector spaces V and W and construct another vector
space, denoted by V* ⊗ W*, consisting of linear combinations of objects of the form
v ⊗ w, with v ∈ V* and w ∈ W*, to be viewed as multilinear maps from V × W to the reals.
Start from the free vector space R<V* × W*> and consider the
subspace I spanned by all elements of the form
a(v, w) − (av, w) , a(v, w) − (v, aw),
(v + v′, w) − (v, w) − (v′, w) , (v, w + w′) − (v, w) − (v, w′),
with a ∈ R. We then define V* ⊗ W* := R<V* × W*>/I. From this definition we
naturally obtain the desired relations discussed in Example 7.2:
a(v ⊗ w) = (av) ⊗ w = v ⊗ (aw),
v ⊗ w + v′ ⊗ w = (v + v′) ⊗ w,
v ⊗ w + v ⊗ w′ = v ⊗ (w + w′).

Proposition 7.7. Let V and W be finite dimensional vector spaces and S any vector space.
Given a BILINEAR map f : V × W → S there is a unique LINEAR map f̃ : V ⊗ W → S
such that f = f̃ ◦ π, where π : V × W → V ⊗ W is the map π(v, w) = v ⊗ w.
Proof. Sketch: we first extend f uniquely to a linear map f̄ : R<V × W> → S defined
by f̄(v, w) = f(v, w) whenever (v, w) ∈ V × W ⊂ R<V × W>. We then note that
the subspace I defined previously is contained in the kernel of f̄, therefore f̄ descends to a
linear map f̃ : V ⊗ W = R<V × W>/I → S satisfying f̃ ◦ π = f by construction. Since every
element of V ⊗ W can be written as a linear combination of objects of the form v ⊗ w,
and on such elements f̃ is uniquely determined by f̃(v ⊗ w) = f̄(v, w) = f(v, w),
uniqueness follows. □

Proposition 7.8. The vector space V* ⊗ W* is canonically isomorphic to the vector
space Bil(V, W) of bilinear functions from V × W to the reals; we will therefore identify them.
Proof. In order to simplify the notation we consider the case V = W. First define the
map f : V* × V* → Bil(V, V) as follows:
f(α, β)(v, w) = α(v) β(w).
Then by the previous proposition we have a unique f̃ : V* ⊗ V* → Bil(V, V). We claim
that this is an isomorphism of vector spaces. Consider the bases {bi} and {β^i} of
V and V*; we know that any element ρ of V* ⊗ V* can be written as ρ_ij β^i ⊗ β^j. We
then define the map g : Bil(V, V) → V* ⊗ V* as follows:
g(B) = B(bi, bj) β^i ⊗ β^j.
We now claim that g is the inverse of f̃:
(g ◦ f̃)(ρ) = f̃(ρ)(bi, bj) β^i ⊗ β^j
(by linearity) = ρ_km f̃(β^k ⊗ β^m)(bi, bj) β^i ⊗ β^j
(by construction of f̃) = ρ_km f(β^k, β^m)(bi, bj) β^i ⊗ β^j
(by definition of f) = ρ_km δ^k_i δ^m_j β^i ⊗ β^j = ρ.
Along the same lines, for a general B ∈ Bil(V, V) and v1, v2 ∈ V we have
((f̃ ◦ g)(B))(v1, v2) = B(bi, bj) f̃(β^i ⊗ β^j)(v1, v2)
(by construction of f̃) = B(bi, bj) f(β^i, β^j)(v1, v2)
(by definition of f) = B(bi, bj) v1^i v2^j
(by linearity) = B(bi v1^i, bj v2^j) = B(v1, v2).
Note that, although g is written using a basis, the resulting map does not depend on this
choice; that is why we call the isomorphism canonical. The isomorphism being canonical, we
will just say that Bil(V, V) = V* ⊗ V*. □
Corollary 7.9. Let V be a finite dimensional vector space; then
T^k_r V = V* ⊗ .. ⊗ V* (k times) ⊗ V ⊗ .. ⊗ V (r times).
Let us better understand the meaning of these objects with some examples.
• T^0_0 V = R
• T^1 V = V*
• T_1 V = V
• on V = R^n, det(v1, .., vn) ∈ Λ^n V ⊂ V* ⊗ .. ⊗ V* (n times)
• a scalar product on V is an element of Σ^2 V ⊂ V* ⊗ V* of the form
g_ij β^i ⊗ β^j with g_ij = g_ji.
What about End(V)? We note that there is a canonical map φ : End(V) → T^1_1 V
given by (φ(L))(v, α) = α(L(v)); this map is injective and, since dim End(V) = n² =
dim(T^1_1 V), we get the desired isomorphism. In components, given T^j_i β^i ⊗ bj we construct
the endomorphism L_T(v) := T^j_i β^i(v) bj. In analogy with this construction we have:
k
Proposition 7.10. the vector space Tp+1 is canonically isomorphic to the space of mul-
tiliear maps

| × {z
V .. × V} × V
| × {z.. × V }∗ → V
k times p times
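A tiny component check of the identification End(V) ≅ T^1_1 V described above (my own illustration, not from the notes): the endomorphism associated to the components T^j_i acts as v ↦ T^j_i β^i(v) b_j, i.e. ordinary matrix–vector multiplication.

import numpy as np

# components T^j_i of a (1,1) tensor on R^3 (row index = upper index j, column index = lower index i)
T = np.array([[1., 2., 0.],
              [0., 1., 3.],
              [4., 0., 1.]])
v = np.array([1., -1., 2.])

# L_T(v)^j = T^j_i beta^i(v) = T^j_i v^i : contraction over the lower index
Lv = np.einsum('ji,i->j', T, v)
print(Lv)
print(np.allclose(Lv, T @ v))   # True: it is just matrix-vector multiplication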

Let us “bundleize” this construction


Definition 7.11. Covariant tensor bundle and covariant tensor field of rank k:
T^k M := ⊔_p T^k(T_p M) ,  T^k(M) := Γ(T^k M)
Contravariant tensor bundle and contravariant tensor field of rank r:
T_r M := ⊔_p T_r(T_p M) ,  T_r(M) := Γ(T_r M)
Mixed tensor bundle and mixed tensor field of rank (k, r):
T^k_r M := ⊔_p T^k_r(T_p M) ,  T^k_r(M) := Γ(T^k_r M)
In a given local coordinate system with coordinate functions x^i any tensor field can be
written as
T ∈ T^k_r(M) → “locally” T = T^{j_1..j_r}{}_{i_1..i_k} dx^{i_1} ⊗ .. ⊗ dx^{i_k} ⊗ ∂_{j_1} ⊗ .. ⊗ ∂_{j_r}

where the components T j1 ..jr i1 ..ik are smooth functions. As usual we will denote by
Tp ∈ Trk (Tp M ) the value of a tensor field at a point namely
Tp (X1p , ., Xkp , ω1p , .., ωrp ) = T (X1 , .., Xk , ω1 , .., ωr )(p)
Consider now an element S of T^1_1(M). In a given coordinate chart it looks like
S = S^j{}_i dx^i ⊗ ∂_j. When it acts on a vector v = v^i ∂_i and a covector ω = ω_i dx^i, using
∂_i(dx^j) = δ^j_i = dx^j(∂_i), we have
S(v, ω) = S^j{}_i v^k ω_m dx^i(∂_k) dx^m(∂_j) = S^j{}_i v^i ω_j
It is kind of evident now that this map is C^∞(M) linear. This can obviously be generalized
to any tensor field. So a (1,1) tensor field is a C^∞(M) multilinear map from X(M) ×
Ω^1(M) → C^∞(M). There is something more:
Lemma 7.12. A map
X(M) × .. × X(M) (k times) × Ω^1(M) × .. × Ω^1(M) (r times) → C^∞(M)
or
X(M) × .. × X(M) (k times) × Ω^1(M) × .. × Ω^1(M) (r − 1 times) → X(M)
is induced by a rank (k, r) tensor field if and only if it is multilinear over C^∞(M)
Let ρ ∈ T^k(N) and F : M → N a smooth map. We can pull back covariant k-tensors
in analogy with covariant vectors as follows:
(F^∗ρ)_p(X_1, .., X_k) = ρ_{F(p)}(F_∗X_1, .., F_∗X_k)
with X_1, .., X_k vector fields on M.
Properties:
• F ∗ is linear over R
• F ∗ (f ρ) = (f ◦ F )F ∗ ρ
• F ∗ (ρ ⊗ ω) = F ∗ ρ ⊗ F ∗ ω
• (G ◦ F )∗ = F ∗ ◦ G∗
7.1. Differential forms and integrations.
Definition 7.13.
Λk M = tp Λk (Tp M )
Sections of Λk M are named differential forms of rank k and the set of those objects
is denoted by Ω^k(M). A differential k form can be written, in a given coordinate chart,
as
ω = (1/k!) ω_{i_1..i_k} dx^{i_1} ∧ .. ∧ dx^{i_k}
Lemma 7.14. F : M → N a smooth map, and consider the coordinate functions
(x) = (x1 , .., xn ) for M and (y) = (y 1 , .., y m ) for N (defined locally obviously)
then:
F^∗(ω_{i_1..i_k} dy^{i_1} ∧ .. ∧ dy^{i_k}) = (ω_{i_1..i_k} ◦ F) d(y^{i_1} ◦ F) ∧ .. ∧ d(y^{i_k} ◦ F)
Proof. it follows easily from the properties discussed previously see Lemma (6.5). 
This Lemma has a useful corollary:

Corollary 7.15. Let F : M → N smooth map between n manifolds then choose the
coordinates functions (x) on U ⊂ M and (y) on V ⊂ N we have on U ∩ F −1 (V )
F ∗ (f dy 1 ∧ .. ∧ dy n ) = (f ◦ F ) det(∂i F j )dx1 ∧ .. ∧ dxn
Proof. Note that
ω 1 ∧ .. ∧ ω n (X1 , .., Xn ) = det(ω i (Xj ))
as one can easily observe in an example for n = 3 (discussed in the exercise session). In
particular, using the previous Lemma we have
F^∗(f dy^1 ∧ .. ∧ dy^n) = (f ◦ F) d(F^1) ∧ .. ∧ d(F^n)
with F^i = y^i ◦ F. The components of d(F^1) ∧ .. ∧ d(F^n) are
d(F 1 ) ∧ .. ∧ d(F n )(∂1 , .., ∂n ) = det(∂i F j )
thus
d(F 1 ) ∧ .. ∧ d(F n ) = det(∂i F j )dx1 ∧ .. ∧ dxn

For any smooth manifold there is a differential operator
d : Ωk (M ) → Ωk+1 (M )
We will write it in coordinates now; in the following we will show a more general definition.
dω := (1/k!) (dω_{i_1..i_k}) ∧ dx^{i_1} ∧ .. ∧ dx^{i_k}
that can be written as
dω := (1/(k + 1)!) (∂_{[j} ω_{i_1..i_k]}) dx^j ∧ dx^{i_1} ∧ .. ∧ dx^{i_k}
where [j i_1 .. i_k] means that we are taking the totally alternating part on the indices, that
is, if we exchange two of them we get a minus sign morally.
Properties of the exterior derivative:
• d is well defined independently of the coordinates chosen (we will see that later
with the coordinate independent formula)
• d is linear over R
• d2 = 0
• ω ∈ Ωk (M ) and ρ ∈ Ωn (M ) then
d(ω ∧ ρ) = (dω) ∧ ρ + (−1)k ω ∧ dρ
Let’s see an example.Consider on R3 the one form
ω = f dx + gdy + hdz
then
dω = df ∧dx+dg∧dy+dh∧dz = (∂x g−∂y f )dx∧dy+(∂x h−∂z f )dx∧dz+(∂y h−∂z g)dy∧dz
where you see the components of the curl of the vector field f ∂x + g∂y + h∂z (modulo
signs).
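A small symbolic check of this example (my own snippet, with test functions f, g, h chosen just for illustration): computing the components of dω by the coordinate formula reproduces the curl, and applying the same formula to an exact one-form dφ confirms d² = 0.

import sympy as sp

x, y, z = sp.symbols('x y z')
# an explicit test one-form omega = f dx + g dy + h dz
f, g, h = x*y*z, sp.sin(x) + z**2, x + y**3

# components of d(omega): (d_x g - d_y f) dx^dy + (d_x h - d_z f) dx^dz + (d_y h - d_z g) dy^dz
d_xy = sp.diff(g, x) - sp.diff(f, y)
d_xz = sp.diff(h, x) - sp.diff(f, z)
d_yz = sp.diff(h, y) - sp.diff(g, z)
print(d_xy, d_xz, d_yz)   # the curl components of f d/dx + g d/dy + h d/dz (modulo signs)

# d^2 = 0: the same formula applied to the exact one-form d(phi) gives zero
phi = sp.exp(x*y) * sp.cos(z)
dphi = [sp.diff(phi, v) for v in (x, y, z)]
pairs = [((0, x), (1, y)), ((0, x), (2, z)), ((1, y), (2, z))]
print([sp.simplify(sp.diff(dphi[j], vi) - sp.diff(dphi[i], vj))
       for (i, vi), (j, vj) in pairs])   # [0, 0, 0]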
Let’s go back to vector spaces: Let V be a finite dimensional vector space then an
orientation for V is an equivalence class of ordered basis defined to be equivalent if they
are related by a positive determinant matrix. A basis is called positively oriented if it
belongs to the chosen orientation.

Lemma 7.16. Consider V a real vector space of dimension n and Ω ∈ Λn V then the
set of bases (b1 , .., bn ) so that Ω(b1 , .., bn ) > 0 is an orientation for V
Proof. Given (b_1, .., b_n) (with dual basis (β^1, .., β^n)) and (b′_1, .., b′_n) two bases related
by
b′_i = A^j_i b_j
and such that Ω = c β^1 ∧ .. ∧ β^n, so that Ω(b_1, .., b_n) = c > 0, then
Ω(b′_1, .., b′_n) = c det(β^i(b′_j)) = c det(A^i_j)
then Ω defines a class of bases with the same orientation by the previous formula □
We will say that, given an oriented vector space, Ω ∈ Λ^n V is positively oriented if
it induces the same orientation of V. In the case of a manifold M an orientation is a
choice of orientation for each T_p M that we want to be smooth, in the sense that in a
neighborhood of a point we can always find a local frame pointwise positively oriented. A
coordinate chart (U, ϕ) is said to be positively oriented if the coordinate basis {∂/∂x^i |_p} for
T_p M is positively oriented for each p ∈ U.
Proposition 7.17. Let M be a smooth manifold of dimension n. A nowhere vanishing
Ω ∈ Ωn (M ) (sometimes called volume form) determines a unique orientation of M for
which Ω is positively oriented at each point.
Proof. In a coordinate chart (U, ϕ) with U connected, we can write Ω = f dx1 ∧ .. ∧ dxn
with f nowhere vanishing and we have
Ω(∂1 , .., ∂n ) = f
thus on U it is always positive or always negative. If it is negative, it is enough
to change the sign of one of the coordinate functions to obtain a positively oriented chart,
and in this way we get a continuous choice of orientation. □
One can also prove that conversely an orientation for M induces a nonvanishing n-form
positively oriented at each point. Consider now a top form defined on a compact domain
D ⊂ R^n
ω = f dx^1 ∧ .. ∧ dx^n
To avoid convergence issues we will always require that ω is compactly supported in U
and we define
∫_U ω = ∫_D ω = ∫_D f dx^1 .. dx^n
where supp(f) ⊂ D ⊂ U
Proposition 7.18. Suppose U , V are open subsets of Rn , F : V → U is an orientation-
preserving diffeomorphism, then
∫_U ω = ∫_V F^∗ ω
Proof. This is just a consequence of 7.15 and the change of coordinates for an integral
(absolute value of the determinant of the Jacobian). 
We are ready to consistently define the integral on a manifold M . Given ω compactly
supported in an oriented coordinate chart (U, ϕ)
∫_M ω := ∫_{ϕ(U)} (ϕ^{−1})^∗ ω

By construction this definition does not depend on the choice of oriented charts whose
domain contains supp ω. If the n-form is supported on an oriented M we can use a partition
of unity {ψ_i} subordinate to coordinate charts (U_i, ϕ_i) covering M, considering on each
U_i the n-form ψ_i ω supported there, and then summing over all i:
∫_M ω := Σ_i ∫_M ψ_i ω

Note: One can prove that the final answer does NOT depend on the choice of coordi-
nates charts and the partition of unity.
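A numerical illustration of Proposition 7.18 (my own example, not part of the notes): using the orientation-preserving polar-coordinate map F(r, θ) = (r cos θ, r sin θ), whose Jacobian determinant is r, the integral of a top form over the unit disk equals the integral of its pullback over (0, 1) × (0, 2π).

import numpy as np
from scipy import integrate

# omega = f dx ^ dy on the unit disk U, with f a test function chosen for illustration
f = lambda x, y: np.exp(-(x**2 + y**2))

# integral of omega over the disk U, as an iterated integral in x, y
lhs, _ = integrate.dblquad(lambda y, x: f(x, y), -1.0, 1.0,
                           lambda x: -np.sqrt(1.0 - x**2),
                           lambda x: np.sqrt(1.0 - x**2))

# F*(f dx ^ dy) = f(F(r,t)) * r  dr ^ dt, integrated over V = (0,1) x (0, 2*pi)
rhs, _ = integrate.dblquad(lambda t, r: f(r*np.cos(t), r*np.sin(t)) * r,
                           0.0, 1.0, lambda r: 0.0, lambda r: 2.0*np.pi)

print(lhs, rhs)   # both close to pi*(1 - exp(-1)) ~ 1.986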

8. Integral curves and symmetry generators


Definition 8.1. Given X ∈ X(M ), an integral curve of X is a curve γ : I → M on some
open interval I ⊆ R such that
X(γ(t0 )) := Xγ(t0 ) = γ̇(t0 )
We will sometimes say that the curve start at p0 , or that p0 is the initial point for
γ, whenever p0 = γ(0). Morally an integral curve is a solution of a systems of ODEs.
In coordinates, suppose we have a chart (U, ϕ); we can choose J ⊆ I so that γ(J) ⊆ U,
then we have
γ̇(t) = γ̇^i(t) ∂_i|_{γ(t)}
Along the curve we have X_{γ(t)} = a^i(γ(t)) ∂_i|_{γ(t)} on U, thus we can conclude that

(1) γ̇^i(t) = a^i(γ^1(t), .., γ^n(t)) ,  ∀t ∈ J

Theorem 8.2. Consider the previous ODE; we have:
• Uniqueness: any two solutions γ_1 : I_1 → U and γ_2 : I_2 → U with t_0 ∈ I_1 ∩ I_2 and such
that γ_1(t_0) = γ_2(t_0) agree on I_1 ∩ I_2
• Existence: for every t_0 ∈ I, p ∈ U and V ⊆ U neighborhood of p, there exists
on V a curve satisfying the previous ODE with initial condition γ(t_0) = p
• Smoothness: the solution of the previous point depends smoothly on the choice
of the initial point and on t.
Consider now the set of integral curve of X starting @ p, and define a partial order
by γ1 ≤ γ2 whenever I1 ⊆ I2
Definition 8.3. A maximal integral curve of X starting @ p is a solution of 1 that is
greater or equal to other solutions (or if you prefer is not contained in other solutions).
We will denote it by
γp : Ip → M
and Ip is named the maximal interval.
A vector field X on a manifold is name complete if every maximal integral curve of
X is defined on the whole R, i.e. Ip = R for all p.
Proposition 8.4. The maximal curve of X starting @ p is unique
Given the solution discussed above we can imagine that the initial point p ∈ M moves
along the maximal integral curve in the sense that after a time t it reaches the point
γp (t). This machine is called Flow of the vector field. More in details

Definition 8.5. Given X and


D := {(t, p) ∈ R × M : t ∈ Ip } ⊆ R × M
named the flow domain, the flow of X is the map
F lX : D → M
defined by
F lX (t, p) := F ltX (p) = γp (t)
Theorem 8.6. We have:
• Fl^X is a smooth map
• (Fl^X_t ◦ Fl^X_s)(p) = Fl^X_{t+s}(p)
• Fl^X_0 = id_M
The vector field X is called the infinitesimal generator of the flow and it is strictly
related to the concept of symmetries of a manifold. Let’s see that with examples:
Example 8.7.
X = x∂x + y∂y ∈ X(R2 )
then
F ltX (x0 , y0 ) = (et x0 , et y0 )
and we will say that it generates the dilations
Example 8.8.
X = ∂x ∈ X(R2 )
then
F ltX (x0 , y0 ) = (x0 + t, y0 )
and we will say that ∂x generates the translation along the x axis.
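A numerical sanity check of Example 8.7 (my own snippet): integrating the ODE that defines the integral curves of X = x∂_x + y∂_y reproduces the dilation flow (e^t x_0, e^t y_0).

import numpy as np
from scipy.integrate import solve_ivp

# integral curves of X = x d/dx + y d/dy : solve (xdot, ydot) = (x, y)
def X(t, p):
    x, y = p
    return [x, y]

p0 = (0.7, -1.3)
t = 0.5
sol = solve_ivp(X, (0.0, t), p0, rtol=1e-10, atol=1e-12)
numeric = sol.y[:, -1]
exact = np.exp(t) * np.array(p0)          # Fl^X_t(x0, y0) = (e^t x0, e^t y0)
print(numeric, exact, np.allclose(numeric, exact))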
Observation: Fixed t ∈ R we can define
Mt = (p ∈ M : (t, p) ∈ D)
notice it can be empty. Then there is a well defined map
Φt : Mt → M
The map Φt induced by a vector field X is sometimes called the local 1 parameter group
of local diffeomorphisms generated by X, and X is called the infinitesimal generator.
This terminology is motivated by (8.6).
Definition 8.9. Given f ∈ C^∞(M) and X and Y vector fields we have
• (L_X f)(p) = ∂_t|_0 (f ◦ Fl^X_t(p))
• (L_X Y)(p) = ∂_t|_0 ((Fl^X_{−t})_∗ Y_{Fl^X_t(p)})
the Lie derivative of a function and of a vector field respectively.
In order to better understand the Lie derivative of a vector field we take a small detour
and discuss a famous algebraic object. Given two vector fields X and Y we can induce
a third one, denoted by [X, Y], by the following algebraic operation called Lie bracket
(or sometimes commutator) defined at a point by:
[X, Y ]p f = Xp · (Y · f ) − Yp · (X · f )
Proposition 8.10. We have:
• [X, Y ]p ∈ Tp M

• The assignment p → [X, Y ]p defines a smooth vector field [X, Y ] satisfying


[X, Y ]f = X · Y · f − Y · X · f
• In coordinates, given X = a^i ∂_i and Y = b^j ∂_j, we have
[X, Y] = (a^j ∂_j b^i − b^j ∂_j a^i) ∂_i
(a quick symbolic check of this formula is sketched right below)
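A minimal symbolic sketch of the coordinate formula for the Lie bracket (my own example on R², with test vector fields chosen for illustration):

import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]

# components a^i of X and b^i of Y
a = [x*y, sp.sin(x)]          # X = x*y d/dx + sin(x) d/dy
b = [y**2, x + y]             # Y = y^2 d/dx + (x + y) d/dy

def bracket(a, b):
    # [X, Y]^i = a^j d_j b^i - b^j d_j a^i
    return [sp.simplify(sum(a[j]*sp.diff(b[i], coords[j]) - b[j]*sp.diff(a[i], coords[j])
                            for j in range(len(coords))))
            for i in range(len(coords))]

print(bracket(a, b))
# skew symmetry can be checked the same way:
print([sp.simplify(u + v) for u, v in zip(bracket(a, b), bracket(b, a))])   # [0, 0]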
It is useful to think at those square brackets as an algebraic operator called Lie bracket
[ , ] : X(M ) × X(M ) → X(M )
such that
• [ , ] is R bilinear
• is skew symmetric
• satisfies the so called Jacobi identity, namely
[[X, Y ], Z] + [[Y, Z], X] + [[Z, X], Y ] = 0
Proposition 8.11.
(LX f )(p) = ∂t |0 (f ◦ F ltX (p)) = Xp · f
and
(LX Y )(p) = [X, Y ]
Proof. (*) Let’s see the first relation
(LX f )(p) = ∂t |0 (f ◦ F ltX (p)) = ∂t |0 f (γp (t)) = γ̇(0) · f = Xp · f
The second relation is more involved; let's see it step by step. In order to simplify the
notation we denote by F(t, p) = Fl^X_t(p), by F^i the coordinate representation of this
function in some coordinate chart, and X = a^i ∂_i and Y = b^j ∂_j. We first observe that
by definition
F^i(t, p) = γ^i_p(t)
thus Ḟ^i(0, p) = a^i(p), while
∂_k F^i(0, p) = δ^i_k
because F^i(0, p) = ϕ^i(γ_p(0)) = ϕ^i(p) = x^i(p). Then
(Fl^X_{−t})_∗ Y_{Fl^X_t(p)} = (Fl^X_{−t})_∗ ( b^i(F(t, p)) ∂_i|_{F(t,p)} ) = b^i(F(t, p)) ∂_i F^j(−t, F(t, p)) ∂_j|_p
We now take the derivative with respect to t and, using the relations described before, we get
∂_t|_{t=0} (Fl^X_{−t})_∗ Y_{Fl^X_t(p)} = ( ∂_t F^k(0, p) ∂_k b^i(p) ∂_i F^j(0, p) − b^i(p) ∂_t ∂_i F^j(0, p)
 + b^i(p) ∂_t F^k(0, p) ∂_k ∂_i F^j(0, p) ) ∂_j|_p
 = ∂_j b^i|_p a^j(p) ∂_i|_p − ∂_j a^i|_p b^j(p) ∂_i|_p = ([X, Y])^i_p ∂_i|_p
where in the last line we used that ∂_i F^k(0, p) = δ^k_i, thus ∂_m ∂_i F^k(0, p) = 0. □
Let’s see how vector fields can be efficiently used.
Definition 8.12. A symmetry of a smooth function f on a manifold M is a diffeomorphism
preserving f. We will call it local if it applies on a submanifold of M only.³
³A submanifold is a subset with a smooth structure induced by M; we will define it more carefully in
the next chapter

Let's see an example. Consider on R^2 a function depending on r^2 = x^2 + y^2 only, that
is f(x, y) = f(r(x, y)). Consider the rotation map R(x, y) = (x cos θ − y sin θ, x sin θ +
y cos θ). Then one has that (R^∗f)(x, y) = f(r(R(x, y))) = f(r(x, y))
Definition 8.13. An infinitesimal symmetry of a smooth function f is a vector field
X ∈ X(M ) whose flow (for fixed t thus Φt ) is a local symmetry for f .
Consider on R^2, for example, f = f(y); we have seen that the vector field ∂_x generates
translations along the x-axis, and that this is a symmetry for f.
Can we define symmetries for a vector field too??Sure
Definition 8.14. A symmetry of a vector field Y on a manifold M is a diffeomorphism
Φ : M → M preserving Y in the sense that Φ∗ Y = Y . A local symmetry of Y is a
diffeomorphism between open submanifolds of M .
Given for example on R2 the vector field X = x∂x + y∂y then the rotation is a
symmetry. In fact we have
R∗ (x∂x + y∂y )f (x, y) = (x∂x + y∂y )f (z, t)
with z = x cos θ − y sin θ and t = x sin θ + y cos θ. then by the chain rule we get
R∗ X = t∂t + z∂z . In general we can play the following game. R∗ X = A∂x + B∂y ; by
construction we have that A = (R∗ X) · x = X · (x ◦ R)
Definition 8.15. An infinitesimal symmetry of a vector field Y is another vector field
X on whose flow (for fixed t thus Φt ) induce a local symmetries of Y .
Proposition 8.16. X is a local symmetry for f iff X · f = 0; X is a local symmetry
for Y iff [X, Y ] = 0
Proof. If X is a local symmetry then (F ltX )∗ f (p) = f (F ltX (p)) = f (p) then f is constant
along the flow and
(LX f )(p) = (X · f )(p) = ∂t |t=0 (F ltX )∗ f = 0
while if X · f = 0 we have that ∂_t (Fl^X_t)^∗ f = 0, then f(Fl^X_t(p)) is constant along the
integral curve for X. □
The Lie derivative can be extended to any covariant tensor field τ as follows:
(L_X τ)(p) = ∂_t|_0 ( (Fl^X_t)^∗ τ_{Fl^X_t(p)} )

Proposition 8.17. Let σ and τ be covariant tensors f a smooth function and X a


vector field then
• LX (f σ) = (LX f )σ + f LX (σ)
• LX (σ ⊗ τ ) = LX (σ) ⊗ τ + σ ⊗ LX (τ )
• LX (τ (Y1 , .., Yk )) = LX (τ )(Y1 , .., Yk ) + τ (LX Y1 , Y2 , .., Yk ) + ... + τ (Y1 , ..LX Yk )
and in particular for a covariant rank k tensor one has:
(LX τ )(Y1 , .., Yk ) = X · τ (Y1 , .., Yk ) − τ (LX Y1 , .., Yk ) − .. − τ (Y1 , .., LX Yk )
We are now ready to go back to the exterior derivative and write it down in a coor-
dinate independent way. Let ‘s start with the case of a one form α.
dα(X, Y ) := Xα(Y ) − Y α(X) − α([X, Y ])

The first thing to observe (and prove) is that dα is a tensor, thus a C^∞(M) multilinear
map. Let's compute its components in a coordinate chart:
dα = (1/2) a_{ij} dx^i ∧ dx^j
then
a_{ij} = dα(∂_i, ∂_j) = ∂_i α(∂_j) − ∂_j α(∂_i) = ∂_i α_j − ∂_j α_i
In general we have
dα(X_1, .., X_{k+1}) := Σ_i (−1)^{i+1} X_i · α(X_1, .., X_{i−1}, X_{i+1}, .., X_{k+1})
 + Σ_{i<j} (−1)^{i+j} α([X_i, X_j], X_1, .., X_{i−1}, X_{i+1}, .., X_{j−1}, X_{j+1}, .., X_{k+1})
8.1. Submanifolds.
Definition 8.18. A subset S of a manifold M (of dimension n) is called a (regular)
submanifold of dimension k if for every p ∈ S we can find a coordinate chart (U, ϕ) in
the maximal atlas, with p ∈ U and coordinate functions (x^1, .., x^n), such that U ∩ S
(better to say its image ϕ(U ∩ S)) is defined by the vanishing of n − k coordinate functions, that we
can assume to be the last n − k coordinate functions without loss of generality.
We call (U, ϕ) adapted relative to S. Note that ϕ|U ∩S = (x1 , .., xk , 0, .., 0) and we can
define using the projection on the first k components pr(k)
ϕS := pr(k) ◦ ϕ : U ∩ S → Rk
so that (U ∩ S, ϕS ) is a coordinate chart for S with the subspace topology.
Proposition 8.19. Let S be a regular submanifold of N and U = (U, ϕ) a collection
of compatible adapted charts of N that covers S. Then (U ∩ S, ϕS ) is an atlas for S.
Therefore, a regular submanifold is itself a manifold.
Submanifolds are typically presented as images or level sets of smooth maps. A level
set of a map F : M → N is a subset
F −1 (q) := {p ∈ M s.t. F (p) = q}
for some q ∈ N. In the case N = R^n we call ξ(F) = F^{−1}(0) the zero set of F.
Definition 8.20. Given F : M → N we will say that q ∈ N is a regular value of F
if either q is not in Im(F ) or for every p ∈ F −1 (q) we have F∗ |p : Tp M → TF (p)N is
surjective. The preimage of regular value is called a regular level set.
Before we proceed with our analysis let’s state a couple of natural results that is the
inverse function theorem and its generalization to case of manifolds:
Theorem 8.21. Let W be an open subset of R^n and F : W → R^n a smooth map. For
any point p in W the map F is locally invertible at p if and only if the Jacobian determinant
det(∂_i F^j)|_p is nonzero.
that can be generalized for manifolds as
Theorem 8.22. Let F : M → N a smooth map between two manifolds of the same
dimension, and p ∈ M . Suppose we have some charts (U, ϕ) with coordinates functions
(x^1, .., x^n) around p and (V, ψ) with coordinate functions (y^1, ..., y^n) around F(p) with
F(U) ⊂ V. Then F is locally invertible at p if and only if det(∂F̂^j/∂x^i)|_{ϕ(p)} is nonvanishing.

This theorem is usually successfully applied in the following form



Corollary 8.23. Take a manifold M of dimension n. A set of smooth functions
F^1, .., F^n defined on a coordinate chart (U, ϕ) with coordinate functions (x^1, .., x^n) around
a point p induces a coordinate chart around p if det(∂F^i/∂x^j)|_p = det(∂F̂^i/∂x^j)|_{ϕ(p)} is nonvanishing.
Proof. Sketchy: define Φ := (F^1, .., F^n) : U → R^n; by the inverse function
theorem it has nonvanishing Jacobian determinant iff we can find a neighborhood W
of p such that Φ : W → Φ(W) is a diffeomorphism (essentially by local invertibility),
iff (W, Φ) is a coordinate chart □
Let’s discuss an important case
The 2-sphere in R3 : the 2 sphere is the level set f −1 (0) of the function f (x, y, z) =
x2 + y 2 + z 2 − 1. Note that
∂x f = 2x, ∂y f = 2y, ∂z f = 2z
thus for every point p on S2 = f −1 (0)
Tp R3 → T0 R
given by f∗ (a∂x + b∂y + c∂z ) = 2xa + 2by + 2cz is surjective because the point (0, 0, 0)
(that is the only critical one) does not belong to the sphere then 0 is a regular value.
Consider a point p ∈ S^2 such that ∂_z f|_p is nonzero. Then construct the set of functions
Φ_1 = (x, y, f) : R^3 → R^3; the Jacobian is nonvanishing at p. By the inverse
function theorem there is a neighborhood U_1 of p such that (U_1, Φ_1) is a chart for R^3, and the set U_1 ∩ S^2
is such that (U_1 ∩ S^2, pr_{(2)} ◦ Φ_1) is a coordinate chart for S^2. Along the same line one can
prove the same for x and y and construct adapted charts covering S 2 proving that S 2 is
a (regular) submanifold.
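A symbolic sketch of the two computations used above (my own snippet): the differential of f never vanishes on the sphere, and the adapted chart Φ_1 = (x, y, f) has Jacobian determinant 2z, which is nonzero wherever ∂_z f ≠ 0.

import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 + z**2 - 1

# differential of f: it only vanishes at the origin, which is not on f^{-1}(0),
# so 0 is a regular value
grad = [sp.diff(f, v) for v in (x, y, z)]
print(grad)                                  # [2*x, 2*y, 2*z]

# adapted chart near a point with z != 0: Phi_1 = (x, y, f)
Phi1 = sp.Matrix([x, y, f])
J = Phi1.jacobian([x, y, z])
print(J.det())                               # 2*z, nonzero whenever z != 0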
With this idea in mind one can prove the following:
Theorem 8.24. Let F : M → N be a smooth map of manifolds, with dimensions m and
n respectively. Then a non empty regular level set, is a regular submanifold of dimension
m − n.

Proof. (*) Consider the charts (U, ϕ) with coordinates functions xa = x1 , .., xm on M
and (V, ψ) with coordinate functions (y i ) = y 1 , .., y n and centered at a point q with

F^{−1}(V) containing U and F^{−1}(q). Observe that we can view F^{−1}(q) as the
zero set of ψ ◦ F since ψ(q) = 0. Note now that since the regular level set is assumed
to be non empty it means that the differential map at F −1 (q) is a surjection then we
must have m ≥ n. Call now ψ ◦ F = F 1 , .., F n with F i smooth functions on M : Note
that F^i(p) = 0 for every p ∈ F^{−1}(q). By regularity we must then have that at every
p ∈ F^{−1}(q) the Jacobian matrix ∂_a F^j|_p has rank n. Without loss of
generality we assume the last n × n block of the Jacobian is nonsingular (we will denote it
by ∂_â F^i|_p, with â = m − n + 1, .., m). Then we use F^1, .., F^n as the last coordinates. In particular we claim that
we have a neighborhood U_p of a fixed p ∈ F^{−1}(q) such that (U_p, ϕ_F) is a chart, with
ϕ_F(r) = (x^1(r), ..., x^{m−n}(r), F^1(r), .., F^n(r)) for r ∈ U_p. This chart is well defined because of
the inverse function theorem and it is an adapted coordinate chart for F^{−1}(q): indeed the
Jacobian of ϕ_F with respect to the coordinates x^a has the block triangular form
( I      0
  ∗   ∂_â F^i|_p )
which is nonsingular. □
Let us now characterize regular submanifolds.
Definition 8.25. Let F : M → N with dim M = m and dim N = n. The rank of F at p is
the rank of the pushforward at that point, that is rank F_∗|_p = dim Im(F_∗|_p).
• If m ≤ n and for every p ∈ M we have rank F_∗|_p = m, we say that F is an
immersion
• If m ≥ n and for every p ∈ M we have rank F_∗|_p = n, we say that F is a
submersion
• If m = n and for every p ∈ M we have rank F_∗|_p = n = m, we say that F is a
local diffeomorphism
Theorem 8.26. Let F : M → N be a smooth map of manifolds. Suppose
F has constant rank k in a neighborhood of p ∈ M. Then we can find a coordinate chart (U, ϕ)
centered at p and (V, ψ) centered at F(p) so that F̂(x^1, .., x^m) = (x^1, .., x^k, 0, .., 0)
This theorem called the constant rank theorem has nice consequences for example if
f : M → N has constant rank in a neighborhood of a level set, then the level set is a
regular submanifold.
Consider now a one to one immersion F : N → M ; the image F (N ) is called immersed
submanifold. In general its topology and smooth structure has nothing to do with the
one on M and has to be considered as extra data. This observation leads to the following
definition:
Definition 8.27. A map F : M → N is called an embedding if it is a one-to-one
immersion and F(M), with the subspace topology, is homeomorphic to M through F.
Let’s see some example of non embedding to understand which type of situation we
want to avoid:
Example 1 f : R → R^2 defined by f(t) = (t^2, t^3). This map is one to one but not an
immersion since the differential map at zero f_∗|_0 is not injective.
Example 2 f : R → R^2 defined by f(t) = (t^2 − 1, t^3 − t). This map is NOT one to one,
because f(1) = f(−1), but it is an immersion since the differential map f_∗|_t is injective
for every t.
Example 3 Consider the map given in the picture. It is a one to one immersion but
the topology induced by R^2 doesn't match the original topology because, for example,
there are points close to f(p) corresponding in R to points far away from p. Thus M and
f(M) in this example are not homeomorphic, thus it is not an embedding.
Theorem 8.28. If F : M → N is an embedding then F (M ) is a regular submanifold.
Proof. (*) By the constant rank theorem we know we can choose local coordinates charts
(U, ϕ) and (V, ψ) such that
F̂ (x1 , .., xm ) = (x1 , .., xm , 0, .., 0)
We may have trouble since V ∩ F(M) may be larger than F(U). It is here that we use
the subspace topology to find a V′ such that V′ ∩ F(M) = F(U) (we skip these details)
and thus on V ∩ V′ we can construct the adapted coordinate chart as before. □

9. A mini course on linear connections


Big problem: Consider a section σ of a vector bundle E → M and we want to study
how it changes for example when we move along a curve γ on M starting at p (kind of
useful approach). We are tempted to write something like
∂_t σ(γ(t))|_{t=0} = lim_{t→0} ( σ(γ(t)) − σ(p) ) / t
This formula in general doesn't make sense; note in fact that given two different points
p and q we have σ(p) ∈ E_p and σ(q) ∈ E_q, BUT E_p and E_q are different vector spaces (even
if isomorphic), thus to compare them we must specify an isomorphism E_p → E_q that
in general is not canonical, unless the bundle is trivial, that is E = M × R^k.

Goal: We now look for, and in some sense characterize, the isomorphism P^t_γ : E_p → E_q
where p = γ(0) and q = γ(t). Let's start constructing the following subvector space of
T_P E
Definition 9.1.
VP E := {A ∈ TP E s.t. π∗ A = 0}
called vertical vector space at P ∈ E over p ∈ M (that is π(P ) = p)
V E = tP ∈E VP E

is called the vertical bundle.


Note that VP E = TP Ep (More comments). We now choose a complement for VP E,
we call it the horizontal vector space HP E so that :
TP E = VP E ⊕ HP E
The choice of HP E is NOT canonical, it is a CHOICE. We will call HE = tP ∈E HP E
the horizontal bundle. Observe then that π∗ : TP E → Tp M has VP E as kernel thus it
induces an isomorphism

HorP : Tp M −→ HP E ⊂ TP E
Called sometimes the horizontal lift. A curve through the total space E is called hori-
zontal if its tangent vector is horizontal. Then given p ∈ M and P ∈ Ep , any curve γ(t)
with γ(0) = p lifts uniquely to a horizontal curve γ̃(t) with γ̃(0) = P that is
(1) π(γ̃P ) = γp
(2) γ̃P (0) = P
It can be proven easily that γ̃̇|_{γ̃(t)} = Hor_{γ̃(t)}(γ̇|_{γ(t)}). Take now a vector field X on M
with X(p) = X_p ∈ T_p M and define X̃_P := Hor_P(X_p), the horizontal lift of X_p at
P. Take then γ_p, the maximal integral curve for X at p, and construct its lift at P ∈ E
(remember π(P) = p), called γ̃_P, by:
(1) π(γ̃P ) = γp
(2) γ̃P (0) = P
(3) γ̃˙ P |γ̃P (t) = Horγ̃P (t) (Xp ) ∈ Hγ̃P (t) E
IDEA!!! The isomorphism P^t_γ we are looking for can be constructed out of the flow
of the horizontal lift of X
F ltX̃ : Ep → Eγp (t)
where we have denoted by X̃ the unique vector field on E such that X̃(P ) = X̃P =
HorP (Xp ). Observe that by construction the flow is invertible but in principle is not
guaranteed that it induces a linear map (and thus a vector space isomorphism). To this
aim note that every point on E_p is a vector and it is useful to denote every point P on
E_p by the pair (p, ρ), or simply ρ when it is not crucial to specify that it is a point on the
fiber over p. Given X_p ∈ T_p M it is natural to require that the horizontal map satisfies
the following:
Hor_{aρ}(X_p) = a Hor_ρ(X_p)
We will call a horizontal lift satisfying the previous relation linear.
Proposition 9.2. Given the horizontal lift satisfying the previous relation one has that
F ltX̃ (aρ) = aF ltX̃ (ρ)
Proof. Consider γ̃_ρ and γ̃_{aρ}; then one has
d/dt|_{t=0} (a γ̃_ρ) = a Hor_ρ(X_p) = Hor_{aρ}(X_p) = d/dt|_{t=0} (γ̃_{aρ})
being the initial points and the tangent vectors the same, the flows must coincide as
stated in the proposition. □
We use now this observation within the context of the following proposition
Proposition 9.3. Let F : Rn → Rm with F (ax) = aF (x) then F is linear.

Proof. The proof relies on the Taylor expansion
F(x + x_0) = F(x_0) + ∂_x F|_{x_0} x + R(x)
where the rest R is such that |R(x)|/|x| → 0 when x → 0. Now we must have R(ax) = aR(x),
thus
lim_{a→0} |R(ax)|/|ax| = |R(x)|/|x|
thus R(x) = 0 □
We then have the desired result that is the flow induces a linear isomorphism among
fibers. We have then all ingredients needed to study how a section σ changes along the
integral curve for a vector fields X:
(∇_X σ)(p) := d/dt|_{t=0} ( Fl^{X̃}_{−t} ◦ σ ◦ Fl^X_t(p) )
Computing explicitly the time derivative and identifying T Ep with Ep one can easily
prove that
(2) (∇X σ)(p) = −X̃σ(p) + σ∗ Xp
This operator is often called covariant derivative.
OBSERVATION: π_∗ X̃_{σ(p)} = X_p by construction and π_∗ σ_∗ X_p = (π ◦ σ)_∗ X_p = X_p, thus
the right hand side of the previous relation is an element of V_{σ(p)} E that we identify
with E_p. Taking into account the smoothness of all the ingredients, this
observation implies that ∇ can be viewed as a map
∇ : X(M ) × Γ(E) → Γ(E)
Proposition 9.4. We have:
(1) ∇f X σ = f ∇X σ
(2) ∇X1 +X2 σ = ∇X1 σ + ∇X2 σ
(3) ∇X (f σ) = f ∇X σ + (X · f )σ
(4) ∇X (σ1 + σ2 ) = ∇X σ1 + ∇X σ2
Proof.
(1) Let’s prove the first relation. Using equation (2) and the linearity of the HorP
map one has
(∇_{fX} σ)(p) = −\widetilde{(fX)}_{σ(p)} + σ_∗(f(p) X_p)
 = −f(p) X̃_{σ(p)} + f(p) σ_∗ X_p
 = f(p) (∇_X σ)(p)
(2) The second relation is a direct consequence of (2)
(3) The third relation comes from proposition (9.2) and the definition of ∇;
∇_X(fσ)(p) = d/dt|_{t=0} ( Fl^{X̃}_{−t} ◦ (fσ) ◦ Fl^X_t(p) )
 = d/dt|_{t=0} Fl^{X̃}_{−t}( f(γ_p(t)) σ(γ_p(t)) )
by prop (9.2) = d/dt|_{t=0} f(γ_p(t)) Fl^{X̃}_{−t}( σ(γ_p(t)) )
 = (X_p · f) σ(p) + f(p) (∇_X σ)(p)

(4) The last relation again comes from the fact that F ltX is linear

Definition 9.5. A linear connection on a vector bundle is a choice of a linear isomor-
phism among fibers, equivalently a linear horizontal lift.
We are ready now to work in coordinates. Consider the coordinate functions (x^1, .., x^n)
for M, the coordinate basis for T_p M, and the trivialization map Φ associated to the local
frame (ε_α), with α = 1, .., k; writing the vector field X = a^i ∂_i and the section as σ = σ^α ε_α,
we have, using (9.4),
∇_X σ = ∇_{a^i ∂_i}(σ^α ε_α)
by (1) = a^i ∇_i(σ^α ε_α)
by (3) = a^i ∂_i(σ^α) ε_α + a^i σ^α ∇_i ε_α
where ∇_{∂_i} := ∇_i. Let's focus now on the element ∇_i ε_α. Due to the linearity of the
construction we have that
∇_i ε_α = B(∂_i)^β{}_α ε_β
for some objects B(∂_i)^β{}_α that we now analyze. Fixed the index i, this is nothing else than a
k × k matrix associated to an endomorphism of R^k. Fixed the indices α, β, and due to
the first two properties of (9.4), we view B^β{}_α as maps X(M) → C^∞(M) linear over C^∞(M). In
conclusion we have that locally B ∈ Ω^1(M, End(R^k)). We will call it the connection one
form and just denote its components by (B_i)^β{}_α, or simply B_i{}^β{}_α, and call them Christoffel
symbols. It is common and useful to write
∇_i σ^α = ∂_i σ^α + B_i{}^α{}_β σ^β
Observe now that this construction depends on the choice of coordinates and triv-
ialization. The natural question is what happens when we change trivialization and
coordinate chart. Suppose on U we have the coordinate chart (x^1, .., x^n) and local frame
ε_1, .., ε_k, while on V we have (x̃^1, .., x̃^n) and ε̃_1, .., ε̃_k. We know that σ̃^β = τ^β{}_α σ^α. We
want to compare on U ∩ V the expressions
a^i (∂_i σ^α + B_i{}^α{}_β σ^β) ε_α
and
ã^i (∂̃_i σ̃^α + B̃_i{}^α{}_β σ̃^β) ε̃_α
where B̃ is the connection one form one would obtain in the trivialization induced by the
frame ε̃_α and the coordinates x̃, that is B̃_i{}^α{}_β σ̃^β ε̃_α = B̃(∂̃_i)^α{}_β σ̃^β ε̃_α. Remembering that
∂̃_i = (∂x^j/∂x̃^i) ∂_j and observing that the connection one form B̃ can be viewed as a one form
taking values in the algebra of k × k matrices (i.e. M_k), thus recalling that τ : M → M_k,
one has
∂̃_i σ̃^α + B̃_i{}^α{}_β σ̃^β = (∂x^j/∂x̃^i) ∂_j(τ^α{}_γ σ^γ) + B̃_i{}^α{}_β τ^β{}_γ σ^γ
from which we get, combining with ε_α = τ^β{}_α ε̃_β, that
(B̃_i)^α{}_β = (∂x^j/∂x̃^i) (τ^{−1})^α{}_δ (B_j)^δ{}_γ τ^γ{}_β + (∂x^j/∂x̃^i) (τ^{−1})^α{}_δ ∂_j τ^δ{}_β
This ugly formula is often written in a compact form, suppressing the matrix indices
and using differential forms notation, as
B̃ = τ −1 Bτ + τ −1 dτ

When dealing with T M instead of E we have a natural coordinate basis for the fiber
{∂i } and we denote the connection one form by Γjk = (Γi )jk dxi = Γi j k dxi . In this case
we have τ ij = ∂˜i xj and the connection is called affine connection. We have defined the
covariant derivative associated to an affine connection on vector fields only so far. We
now generalize to every tensor field T ∈ Tlk M by the following.
(1) k = 0, l = 1 we have that ∇X Y is defined as before, in components by
∇i Y j = ∂i Y j + Γ i j k Y k
(2) k = 0 = l we define it as
∇X f = X · f
(3) T = F ⊗ G then
∇X T = ∇X F ⊗ G + F ⊗ ∇X G
(4) Denoting by Yi and ω j with i = 1, .., k and j = 1, .., l two sets of vector fields
and covectors (NOT the components of a (co)vector field) we require
(∇X )F (Y1 , .., Yk , ω 1 , .., ω l ) = X · F (Y1 , .., Yk , ω 1 , .., ω l )
−F (∇X Y1 , Y2 , ..) − ..
−F (Y1 , .., Yk , ∇X ω 1 , ..) − ..
Observe that the extension is unique. We can in fact define the covariant derivative on
covectors by
(∇X ω)(Y ) = X · ω(Y ) − ω(∇X Y )
and once we know it again using the number (4) we can uniquely extend the construction
to every tensor field. In components, in the coordinate basis, we have
∇_i ω_j = ∂_i ω_j − Γ_i{}^k{}_j ω_k
and in general
∇_i T^{j_1..j_l}{}_{m_1..m_k} = ∂_i T^{j_1..j_l}{}_{m_1..m_k}
 + Γ_i{}^{j_1}{}_j T^{j j_2..j_l}{}_{m_1..m_k} + ..
 − Γ_i{}^m{}_{m_1} T^{j_1..j_l}{}_{m m_2..m_k} − ...
It is often useful to work along curves. Consider γ : I → M a smooth curve and define
a vector field along γ as a smooth map Ỹ : I → T M s.t. Ỹ (t) ∈ Tγ(t) M . A vector field
along γ is called extendible if there exists a vector field Y, at least on an open subset of M
containing γ, such that Y(γ(t)) = Ỹ(t). This construction naturally extends to all tensor
fields. An affine connection induces on the space of vector fields along a curve a unique
linear operator D_t : X(γ) → X(γ) satisfying D_t(f Ỹ) = ḟ Ỹ + f D_t Ỹ. Consider then
an extendible vector field along γ and define
Dt Ỹ (t) := ∇t Y := ∇γ̇(t) Y
In components we have
(3) ∇γ̇(t) Y = γ̇ i ∂i Y k (γ(t))∂k |γ(t) + γ̇ i Γi k j Y j ∂k |γ(t)
| {z }
Ẏ k
In the following we will deal with vector fields along curves obtained by restriction, thus
extendible.

Definition 9.6. A section σ is said to be parallel transported along γ(t) with t ∈ I if


∇γ̇ σ = 0 for every t ∈ I
Observe that by (2) parallel transport and horizontal lift are on the same footing.
Suppose we are given an element ρ ∈ Ep on the fiber over p = γ(0). The parallel
transport of ρ along γ (with γ(0) = p) is the unique local section σ ∈ Γ(E) s.t.
∇γ̇ σ = 0
σ(p) = ρ
and in this sense parallel transport induces a way of moving elements of the fibers along
a curve, and this provides linear isomorphisms between the fibers at points along the
curve (by the properties of the covariant derivative, that one has to assume from this
point of view).
Comment: Often textbooks use the dual approach, defining first the parallel transport and
then the fiber isomorphisms using the covariant derivative operator.
Definition 9.7. A smooth curve is called a geodesic if its tangent vector is parallel
transported along itself, that is D_t γ̇ = 0. In components, by (3), we get
γ̈^k + γ̇^i Γ_i{}^k{}_j γ̇^j = 0
Theorem 9.8. Consider an affine connection ∇ defined on M. For every p ∈ M and
Y_p ∈ T_p M there exist an open interval I ⊆ R containing 0 and a geodesic γ : I → M
satisfying γ(0) = p and γ̇(0) = Y_p. Any two such geodesics agree on their common
domain.
Proof. (*) Sketchy: this is a consequence of existence and uniqueness of the solution of an
ODE. Consider a coordinate chart (U, ϕ) with coordinates for γ(t) given by (x^1(t), .., x^n(t))
and define the velocity by v^i = γ̇^i. Then the geodesic equation yields
ẋ^i = v^i
and
v̇^k = −v^i v^j Γ_i{}^k{}_j(x(t))
Thinking of (x, v) as coordinates on U × R^n, the previous equations describe the flow induced
by the vector field on U × R^n
v^k ∂/∂x^k − v^i v^j Γ_i{}^k{}_j(x) ∂/∂v^k
Then by the (local) existence and uniqueness of the solution of an ODE the statement
follows. □
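A minimal numerical sketch of the flow formulation used in the proof (my own example, not from the notes). I take the Euclidean plane written in polar coordinates (r, θ), whose nonvanishing Christoffel symbols are the standard flat-plane values Γ_θ{}^r{}_θ = −r and Γ_r{}^θ{}_θ = Γ_θ{}^θ{}_r = 1/r; integrating the first-order system (ẋ, v̇) shows that the resulting geodesic is a straight line in the underlying Cartesian coordinates, as expected.

import numpy as np
from scipy.integrate import solve_ivp

def geodesic_ode(t, state):
    r, th, vr, vth = state
    # vdot^k = - Gamma^k_{ij} v^i v^j  for the flat metric dr^2 + r^2 dtheta^2
    return [vr, vth, r*vth**2, -2.0*vr*vth/r]

# start at (r, theta) = (1, 0), i.e. (x, y) = (1, 0), with velocity (rdot, thetadot) = (0, 1)
state0 = [1.0, 0.0, 0.0, 1.0]
sol = solve_ivp(geodesic_ode, (0.0, 0.8), state0,
                rtol=1e-10, atol=1e-12, dense_output=True)

ts = np.linspace(0.0, 0.8, 5)
r, th = sol.sol(ts)[0], sol.sol(ts)[1]
x, y = r*np.cos(th), r*np.sin(th)
print(np.round(x, 6))   # stays equal to 1: the geodesic is the vertical line x = 1
print(np.round(y, 6))   # y grows linearly in t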

9.1. Riemannian geometry. A Riemannian metric on a manifold M is the assignment


to each point p in M of an inner product on the tangent space Tp M that is required
to be smooth. Thus we can view a Riemannian metric as a symmetric covariant tensor
g ∈ Σ2 M that is
g(X, Y ) = g(Y, X)
that pointwise induces an inner product < •, • >p on Tp M . For every vector fields X, Y
it is sometimes useful to write
< X, Y >p = g(X, Y )(p) = gp (Xp , Yp )

Given ω and α one forms, it is sometimes useful to write their symmetric product as
follows
ωα := (1/2)(ω ⊗ α + α ⊗ ω)
With this notation in mind we write the metric in a given coordinate chart as
g = g_{ij} dx^i dx^j
The pair (M, g) is called a Riemannian manifold.
Warning: A common mistake made by novices is to assume that one can find coordinates
near p such that the coordinate vector fields ∂_i are orthonormal. The coordinate
basis for T_p M is a canonical choice but sometimes not the best one. At each point, for
example, one can find an orthonormal basis. We then define an orthonormal frame as a
set of sections (e_1, .., e_n) of TM such that
g(e_a, e_b) = δ_{ab}
(the matrix of g in this frame is the n × n identity matrix). Dually we have the orthonormal
coframe (pointwise basis for T_p^∗ M) (ε^1, .., ε^n), where by construction ε^b(e_a) = δ^b_a.
Thus we could write in this basis
g = δ_{ab} ε^a ε^b
We can always write e_a as a linear combination over C^∞(M) of the ∂_i, that is
e_a := e^i_a ∂_i
as well as
ε^a = e^a_i dx^i
for some functions (or matrices if you prefer) e^i_a and e^b_j such that
e^a_i e^i_b = δ^a_b ,  e^i_a e^a_j = δ^i_j
being (e_a) and (ε^a) (as well as (dx^i) and (∂_i)) pointwise dual bases. Observe that by construction
we have
g = δ_{ab} ε^a ε^b = δ_{ab} e^a_i e^b_j dx^i dx^j
thus
(4) g_{ij} = δ_{ab} e^a_i e^b_j ,  and viceversa δ_{ab} = g_{ij} e^i_a e^j_b.
Those objects are the main ingredients of the Cartan moving frame theory and other
physical models like supergravity and superstring theory, because, in some sense, they
encode the information of the metric in a more geometrical way.

Definition 9.9. Take now two Riemannian manifolds (M, g) and (M′, g′); a diffeomorphism
F : M → M′ is called an isometry if F^∗ g′ = g (and we will say that (M, g) and (M′, g′)
are isometric). If for every p ∈ M we can find a neighborhood U of p such that the
map F|_U is an isometry, then we will call F a local isometry and the two manifolds
locally isometric. A Riemannian manifold is named FLAT if it is locally isometric to
the Euclidean space.

Theorem 9.10. On every manifold M there is a Riemannian metric.


Proof. This metric can be constructed as follows; given a coordinate chart (U, ϕ), at
each point p ∈ U we can define the inner product
< ∂i |p , ∂j |p >= δij
and thus we have a metric on U . Given an Atlas we can define on each chart a Rie-
mannian metric and mix everything using the partition of unity subordinate to the given
atlas. 
We now see how to construct Riemannian metrics in certain natural situations.
Lemma 9.11. (*) Suppose (M 0 , g 0 ) is a Riemannian manifold, M is a smooth manifold
and F : M → M 0 is a smooth map. Then g := F ∗ g 0 is a Riemannian metric on M if
and only if F is an immersion.
Suppose (M 0 , g 0 ) is a Riemannian manifold: given a smooth immersion F : M → M 0 ,
then we can construct an induced metric on M by g := F ∗ g 0 . On the other hand, if
M is already endowed with a given Riemannian metric g, an immersion or embedding
F : M → M 0 satisfying F ∗ g 0 = g is called an isometric immersion or isometric embed-
ding.
In the case we deal with embedded or immersed submanifolds M ⊂ M 0 (that is F :
N → M 0 with F (N ) = M is an immersion or embedding) then we will say that M
equipped with the induced metric obtained by the inclusion map ι : M → M 0 is a
Riemannian submanifold. Let's see now how to get the induced metric from an m-dimensional
Riemannian manifold M′ to an n-dimensional submanifold M. Consider a
map X : Ũ → M′ with Ũ an open subset of R^n, such that X(Ũ) is an open subset of M and
X is a diffeomorphism onto its image, which we call U. Then its inverse ϕ : U → Ũ can be
viewed as a coordinate map, that is (U, ϕ) is a chart for M with coordinate functions
(x^1, .., x^n). This is called a local parametrization. Then, denoting by g the metric induced
on M by the inclusion map from g′, we have
X ∗ g = X ∗ ι∗ g 0 = (ι ◦ X)∗ g 0 = X ∗ g 0
In the case M′ = R^m equipped with the standard Euclidean metric, the induced metric
on Ũ would be given by
g = X^∗( Σ_{i=1}^m (dy^i)^2 ) = Σ_{i=1}^m (d(y^i ◦ X))^2 = Σ_{i=1}^m (dX^i)^2 = Σ_{i=1}^m ( Σ_{j=1}^n (∂X^i/∂x^j) dx^j )^2
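A symbolic sketch of this formula (my own example): for the standard parametrization of the unit sphere in R^3 the induced metric comes out as dθ^2 + sin^2 θ dφ^2.

import sympy as sp

th, ph = sp.symbols('theta phi', positive=True)

# local parametrization X : U -> R^3 of the unit sphere
X = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])

J = X.jacobian([th, ph])
# induced metric g_ij = sum_a (dX^a/dx^i)(dX^a/dx^j)
g = sp.simplify(J.T * J)
print(g)    # Matrix([[1, 0], [0, sin(theta)**2]])  i.e.  g = dtheta^2 + sin(theta)^2 dphi^2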

Musical isomorphism:
Definition 9.12.
♭ : TM → T^∗M
by ♭(X) := X^♭, the covector such that X^♭(Y) = g(X, Y) for every Y ∈ TM.
Working in components we have X_i := X^♭(∂_i) = g_{ij} X^j. The metric being invertible,
one can also define g^{−1}, which in components looks like g^{−1} = g^{ij} ∂_i ⊗ ∂_j with g^{ij} = g^{ji}
and g_{ij} g^{jk} = δ^k_i. This induces an inner product on the cotangent space T_p^∗M, namely:
< ω_p, α_p >_p := g^{−1}(p)(ω_p, α_p) = g^{ij}(p) ω_i(p) α_j(p)
The inverse of the flat map is given by the sharp map
♯ : T^∗M → TM
ω ↦ ω^♯ such that α(ω^♯) = g^{−1}(ω, α), that is, in components,
dx^i(ω^♯) := ω^i = g^{ij} ω_j
These operators can be applied to any type of tensor to raise or lower indices.
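A tiny numerical sketch of the musical isomorphisms (my own example, with an arbitrary constant metric on R^2): lowering an index with g and raising it again with g^{−1} gives back the original vector.

import numpy as np

g = np.array([[2.0, 1.0],
              [1.0, 3.0]])     # a (constant) metric in some basis
g_inv = np.linalg.inv(g)

X = np.array([1.0, -2.0])                     # components X^j of a vector
X_flat = np.einsum('ij,j->i', g, X)           # X_i = g_ij X^j   (flat)
X_back = np.einsum('ij,j->i', g_inv, X_flat)  # sharp inverts flat
print(X_flat, np.allclose(X_back, X))         # True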
We now go back to the integration problem. We consider an orientable n-dimensional
Riemannian manifold (M, g) and consider a positively oriented orthonormal frame and
coframe {e_a} and {ε^a}. It is natural to require that the volume spanned by the orthonormal
basis is one; we thus want that there exists a Riemannian volume form (alias a top form)
satisfying, locally,
dV_g(e_1, .., e_n) = 1
This object is unique and given locally simply by
dV_g = ε^1 ∧ ... ∧ ε^n
It is useful to write everything in terms of the coordinate functions (x^1, .., x^n) associated to
an oriented coordinate chart, and the induced coordinate basis for sections of any tensor
bundle. Namely, being
ε^a = e^a_i dx^i
we obtain
dV_g = det(e^a_i) dx^1 ∧ ... ∧ dx^n
By the Binet theorem and (4) we have
det g_{ij} = det(e^a_i) det(δ_{ab}) det(e^b_j)
thus
dV_g = √(det g_{ij}) dx^1 ∧ ... ∧ dx^n
Consider then f a compactly supported function; then f dV_g is a compactly supported
top form and
∫_M f dV_g
is well defined.
(**) to the end of the chapter
When our manifold is not oriented we run into the annoying problem that in the change of
coordinates formula the determinant of the Jacobian appears without the absolute value it
should have. For this reason one defines densities (namely sections of the density bundle,
we skip more details), that are objects of the form
µ = u|dx1 ∧ ... ∧ dxn |
for some smooth function u and where for a top form ω we define
|ω|(X, Y, ..., Z) := |ω(X, Y, ..., Z)|
Given a smooth function f we can define its gradient by
grad(f ) := (df )]
Moreover given a vector field X on an oriented n-dimensional Riemannian manifold,
we can construct dVg (X, •, ...., •) ∈ Ωn−1 (M ); taking its exterior derivative we obtain
another top form we use to define divX as follows
d(dVg (X)) := (divX)dVg

We can finally defined an important object called the Laplace-Beltrami operator ∆ :


C ∞ (M ) → C ∞ (M ) defined by
∆f := div(grad(f ))
In certain cases it would be useful to define it with a minus sign in order to have non
negative eigenvalue for the Laplacian.
9.2. Riemannian distance. We define a regular curve as a curve γ : I → M so that
γ̇(t) ≠ 0 for t ∈ I, that is, γ is an immersion. A curve segment is a curve defined on a
compact interval. It is useful to deal with piecewise regular curve segments, that is, curve
segments γ : [a, b] → M so that one can find a partition a_0 < a_1 < .. < a_k of [a, b],
with a_0 = a and a_k = b, so that γ|_{[a_i, a_{i+1}]} is regular for every i = 0, .., k − 1. Note that
in this definition the speeds of the curve approaching a_i from the left and from the right are not
required to be equal. We call such curves admissible curves.
Definition 9.13. Consider an admissible curve γ : [a, b] → M; we define its length to be
L_g(γ) = ∫_a^b √( g(γ̇, γ̇) ) dt
It is easy to check that this definition is parameter independent and invariant under
isometry. Moreover one can prove that if M is a connected smooth manifold then any
two points can be joined by an admissible curve. Thus for a connected Riemannian
manifold we can construct a notion of distance as follows
dg (p, q) = inf Lg (γ)
where gamma is an arbitrary admissible curve connecting p and q.
Observation Any Riemannian manifold equipped with this notion of distance is a metric
space. It is convenient to become confident with clever reparametrization of a regular
curve. If γ : I → M is a curve and ϕ : Ĩ → I is a diffeomorphism of intervals in R, then we
define the reparametrization of γ by γ̃ := γ ◦ ϕ. Consider now a curve in a Riemannian
manifold (M, g), an arbitrary t_0 in I, and construct
s(t) = ∫_{t_0}^t √( g(γ̇(τ), γ̇(τ)) ) dτ
By construction ṡ = √( g(γ̇(t), γ̇(t)) ) > 0, thus we can interpret s as a diffeomorphism from I → Ĩ.
Define now a reparametrization of γ as follows: γ̃ = γ ◦ s^{−1}. It is easy to see, by
applying the chain rule, that γ̃ has unit speed, that is g(γ̃̇, γ̃̇) = 1. We summarize this in the
following
Proposition 9.14. Every regular curve in a Riemannian manifold has a unit-speed
reparametrization.
9.3. Metric connection. Given a Riemannian manifold we want to describe the com-
patibility between the metric and the connection. To this aim we say that
Definition 9.15. An affine connection on Riemannian manifold is compatible with the
metric if given a curve γ and two vector fields X and Y parallel transported along γ we
have that gγ(t) (Xγ(t) , Yγ(t) ) is constant. We will call it metric connection
Let’s now characterize the metric connection
Proposition 9.16. The following are equivalent:

(1) The connection is compatible with the metric
(2) for X, Y vector fields along a curve, d/dt g(X, Y) = g(D_t X, Y) + g(X, D_t Y)
(3) for X, Y, Z vector fields, X · g(Y, Z) = g(∇_X Y, Z) + g(Y, ∇_X Z)
(4) ∇_X g = 0 for every X
Proof. 1) → 2): If the connection is compatible with the metric we can take an orthonormal
basis at a point p on γ and then construct the parallel transported vector
fields (e_1, .., e_n) (NOTE that general ODE theory in fact assures us that the solution of
the parallel transport equation with a given initial condition exists and is unique!!).
The connection being compatible with the metric, the inner product is constant along the
curve, thus (e_1, .., e_n) is an orthonormal frame along the curve. Using it we have locally
X = X^a e_a ,  Y = Y^b e_b
from which
D_t X = (∂_t X^a) e_a
since D_t e_a = 0 by assumption. Thus
g(D_t X, Y) + g(X, D_t Y) = Ẋ^a Y^b δ_{ab} + Ẏ^a X^b δ_{ab} = d/dt g(X, Y)
The other way around follows easily from the assumption. Let's see now how 3) → 2). Consider
d/dt g(X, Y) = (Ẋ^i Y^j + X^i Ẏ^j) g_{ij} + X^i Y^j d/dt g(∂_i, ∂_j)
by using 3) we have
d/dt g(∂_i, ∂_j) = g(D_t ∂_i, ∂_j) + g(∂_i, D_t ∂_j)
and then the result follows. The other way around can be proved along the same lines.
Let's now see 3) ↔ 4):
(∇_X g)(Y, Z) = X · g(Y, Z) − g(∇_X Y, Z) − g(Y, ∇_X Z)
and this is zero for every X, Y, Z iff 3) is satisfied for all X, Y, Z. □
Corollary 9.17. Given a geodesics γ on a Riemannian manifold (M, g) then < γ̇, γ̇ >g
is constant along the curve
There are too many metric compatible connections; to pin down a unique one we require
the following condition:
T (X, Y ) := ∇X Y − ∇Y X − [X, Y ] = 0
It is easy to show that T is a (2, 1) tensor called the TORSION. An affine connection
satisfying this relation is called torsion free or symmetric, since torsion freeness implies
in components that
Γ_i{}^k{}_j = Γ_j{}^k{}_i =: Γ^k_{ij}
Theorem 9.18. On a Riemannian manifold there is a unique metric compatible sym-
metric affine connection called the Levi Civita connection
Proof. Metric compatibility and symmetry give
X · g(Y, Z) = g(∇_X Y, Z) + g(Y, ∇_X Z) = g(∇_X Y, Z) + g(Y, ∇_Z X) + g(Y, [X, Z])
where we used ∇_X Z = ∇_Z X + [X, Z], by torsion freeness. By permuting the previous result
in the symbols X, Y, Z in a clever way we have
X · g(Y, Z) + Y · g(Z, X) − Z · g(X, Y) = 2 g(∇_X Y, Z) + g(Y, [X, Z]) + g(Z, [Y, X]) − g(X, [Z, Y])
that we can solve for g(∇_X Y, Z):
g(∇_X Y, Z) = (1/2) ( X · g(Y, Z) + Y · g(Z, X) − Z · g(X, Y)
 − g(Y, [X, Z]) − g(Z, [Y, X]) + g(X, [Z, Y]) )
Note that two Levi Civita connections ∇ and ∇̃ are such that
g(∇_X Y − ∇̃_X Y, Z) = 0
the metric being nonsingular, we must then have ∇_X Y = ∇̃_X Y. In a given coordinate
chart we thus have
g(∇_i ∂_j, ∂_k) = (1/2) ( ∂_i g(∂_j, ∂_k) + ∂_j g(∂_k, ∂_i) − ∂_k g(∂_i, ∂_j) )
that is
Γ^k_{ij} = (1/2) g^{km} ( ∂_i g_{mj} + ∂_j g_{mi} − ∂_m g_{ij} )
This formula proves the existence of the Levi Civita connection in a coordinate chart and
thus everywhere. □
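A short symbolic sketch of this formula (my own snippet, using the round metric on S^2, g = dθ^2 + sin^2 θ dφ^2, as a test case): the only nonvanishing Christoffel symbols come out as Γ^θ_{φφ} = −sin θ cos θ and Γ^φ_{θφ} = Γ^φ_{φθ} = cot θ.

import sympy as sp

th, ph = sp.symbols('theta phi', positive=True)
coords = [th, ph]
n = 2

g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])   # round metric on the unit sphere
g_inv = g.inv()

def christoffel(k, i, j):
    # Gamma^k_{ij} = 1/2 g^{km} (d_i g_{mj} + d_j g_{mi} - d_m g_{ij})
    return sp.simplify(sp.Rational(1, 2) * sum(
        g_inv[k, m] * (sp.diff(g[m, j], coords[i]) + sp.diff(g[m, i], coords[j])
                       - sp.diff(g[i, j], coords[m]))
        for m in range(n)))

for k in range(n):
    for i in range(n):
        for j in range(n):
            G = christoffel(k, i, j)
            if G != 0:
                print("Gamma^%s_{%s %s} =" % (coords[k], coords[i], coords[j]), G)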

What happens in a different frame?? Consider the orthonormal frame {e_a} and define
the smooth functions f_{ab}{}^c by
[e_a, e_b] = f_{ab}{}^c e_c
Then one gets
Γ^c_{ab} = (1/2) ( f_{ab}{}^c − f_{a c′}{}^{b′} δ^{c c′} δ_{b′ b} − f_{b c′}{}^{a′} δ^{c c′} δ_{a′ a} )
Consider now an embedded surface in R3 etc...
Suppose M and M 0 are smooth manifolds and ∇0 an affine connection on M 0 and F :
M → M 0 is a diffeomorphism then F ∗ ∇0 defined by
(F ∗ ∇0 )X Y = (F −1 )∗ (∇F∗ X F∗ Y )
is an affine connection on M called the pullback connection.
Proposition 9.19 (Naturality of the LC connection). Suppose (M, g) and (M 0 , g 0 ) are
Riemannian with Levi Civita connections ∇ and ∇0 . If F : M → M 0 is an isometry
then F ∗ ∇0 = ∇
Let's now go back to geodesics for the LC connection.
Definition 9.20. Let (M, g) be a Riemannian manifold. An admissible curve γ in M
is said to be a minimizing curve if Lg (γ) ≤ Lg (γ̃) for every admissible curve γ̃ with the
same endpoints.
Now we state the following result, whose proof involves calculus of variations tools we
are not going to discuss in these notes.
Proposition 9.21. In a Riemannian manifold, every minimizing curve is a geodesic
when it is given a unit-speed parametrization.

It is easy to see that the literal converse is not true, because not every geodesic segment
is minimizing. For example, every geodesic segment on S 2 that goes more than halfway
around the sphere is not minimizing, because the other portion of the same great circle
is a shorter curve segment between the same two points. What can be proved, by using
Riemann normal coordinates, is that geodesics γ are locally minimizing, that is, for every
t_0 ∈ I we can find a neighborhood I_0 of t_0 such that γ restricted to I_0 is minimizing.
9.4. Geodesics in Riemannian geometry (**). We have already discussed the existence
and uniqueness of a geodesic given the initial point and speed. Now we will
focus on the case of geodesics obtained out of the Levi Civita connection only. Let us now
denote by Υ^X_p the geodesic with initial point p and initial speed X_p; be careful, this is not the
integral curve of X. From the construction it is easy to see that the following relation
holds:
Υ^{aX}_p(t) = Υ^X_p(at)
whenever either side is defined. Consider now E = {X ∈ X(M) s.t. Υ^X_p is defined on
an interval containing [0, 1] for all p}; then construct the following map
exp(X) = Υ^X_•(1)
or simply exp_p(X) when we want to specify also the initial point.
Proposition 9.22. For each X ∈ E the geodesic Υ^X_p is given by
Υ^X_p(t) = exp_p(tX)
Proof. This is a direct consequence of the rescaling property discussed before


When we restrict to a point it is sometimes useful to denote by E_p the set of vectors at p for which
the exponential map is well defined.
Proposition 9.23. (*) Let F : M → N be an isometry of Riemannian manifolds and
p ∈ M . Then the following diagram is commutative
              dF|_p
    E_p --------------> E_{F(p)}
  exp_p |                  | exp_{F(p)}
        v        F         v
      M -----------------> N

that is F ◦ exp_p = exp_{F(p)} ◦ dF|_p.
Consider now T_0(T_p M); it can be canonically identified with T_p M. If we consider the
differential of the exponential map at the origin, that is d(exp_p)|_0, it can be viewed as a
map from T_p M to itself. Consider now the curve on T_p M given by c(t) = tX and use it
to compute the differential of the exponential map:
d(exp_p)|_0(X) = d/dt|_0 exp_p(c(t)) = d/dt|_0 exp_p(tX) = d/dt|_0 Υ^X_p(t) = X
We summarize this result in the following proposition
We reassume this result in the following proposition
Proposition 9.24. The differential at the origin of the exponential map is the identity
map on Tp M
This result is crucial since by the inverse function theorem we can say that there
is a local diffeomorphism from T_p M to M around the origin. Better to say, there
are neighborhoods V of 0 in T_p M and U of p in M such that exp_p : V → U is a
diffeomorphism. A neighborhood U of p that is diffeomorphic through the exponential
map to some neighborhood V of 0 in T_p M is called a NORMAL NEIGHBORHOOD of
p. Given an orthonormal basis ei |p for Tp M define the map Φ : Rn → Tp M associated


to the trivialization induced pointwise by the orthonormal frame by
Φ(y 1 , .., y n ) = y i ei |p
Being the exponential map locally invertible, we can construct for a normal neighborhood
U of p the clever coordinate chart (U, ϕ), called normal coordinates centered at p, by
constructing ϕ as the composition
ϕ := Φ^{−1} ◦ (exp_p|_V)^{−1} : U → V ⊂ T_p M → R^n
Observe that given these coordinates the coordinate basis for T_p M is orthonormal, so in
some sense we combine the beauty of the coordinate and orthonormal frames. Nice things
happen in these coordinates:
Proposition 9.25. Given a Riemannian manifold (M, g) and a normal coordinate chart
centered at p we have
(1) In this coordinate chart gij (p) = δij
(2) For every X = a^i ∂_i the geodesic Υ^X_p is represented in these coordinates by the
line (ta^1, .., ta^n)
(3) the Christoffel symbols at p in these coordinates vanish
Proof. 
From this proposition it is evident that locally Riemannian geodesics substitute the
notion of straight lines. One can easily prove that for S^2 great circles are geodesics.

9.5. The Riemannian curvature. We look for a mathematical object telling us if a


Riemannian manifold is flat (locally isometric to the Euclidean space) or not. We note
that on the Euclidean space, given a vector field Z, one has, for example, that
∇_1 ∇_2 Z − ∇_2 ∇_1 Z = ∂_1 ∂_2 Z − ∂_2 ∂_1 Z = 0
or more in general that
∇_X ∇_Y Z − ∇_Y ∇_X Z = ∇_{[X,Y]} Z
and this motivate the following definition:
Definition 9.26.
Given a Riemannian manifold we define the (3,1) tensor field R, called the Riemann
tensor, by the following
R(X, Y, Z) = R(X, Y ) · Z = ∇X ∇Y Z − ∇Y ∇X Z − ∇[X,Y ] Z
with ∇ being the Levi Civita connection and where we interpret R(X, Y ) as a (1,1)
tensor field thus pointwise an element of End(Tp M ) and for this reason one can call it
the curvature endomorphism. In components one has

R(∂i , ∂j , ∂k ) = Rlkij ∂l
or if you prefer
Rlkij = dxl (R(∂i , ∂j , ∂k ))
In components one gets explicitly
R^l{}_{kij} = ∂_i Γ^l_{jk} − ∂_j Γ^l_{ik} + Γ^m_{jk} Γ^l_{im} − Γ^m_{ik} Γ^l_{jm}
We say that a vector field is parallel with respect to a connection ∇ if it is parallel
transported along any curve. We have the following useful Lemma
Lemma 9.27. Suppose M is a smooth manifold, and ∇ is any flat connection on M .
Given p ∈ M and any vector v ∈ Tp M , there exists a parallel vector field X (parallel
transported along any curve γ) defined on a neighborhood of p such that X(p) = Xp = v.
Proof. We take a coordinate chart for M centered at p and we assume without loss of
generality that its image is a cube in R^n. We will denote by γ_i the integral curves of ∂_i.
We parallel transport v along γ_1, then the result is parallel transported along γ_2, and so on. The
resulting vector field, which we call X, is such that ∇_1 X = 0 on the x^1 axis (the points where
x^2 = ... = x^n = 0), ∇_2 X = 0 on the x^1-x^2 plane (the points where x^3 = ... = x^n = 0)
and so on. In general we have that ∇_k X = 0 on M_k, that is, the set of points where
x^{k+1} = ... = x^n = 0. We prove by induction that
∇_1 X = ... = ∇_k X = 0 on M_k
It is obviously true for k = 1, and we assume that it is true for some k. On M_{k+1} we have
that ∇_{k+1} X = 0, and ∇_i X = 0 for 1 ≤ i ≤ k on M_k. Since partial derivatives
commute, the flatness criterion implies that
∇_{k+1} ∇_i X = ∇_i ∇_{k+1} X
and they both vanish on M_{k+1}. The previous tells us that ∇_i X is parallel along the
curves γ_{k+1} starting on M_k. But ∇_i X = 0 on M_k and the parallel transport of the zero
vector is the zero vector field. Thus ∇_i X = 0 on M_{k+1} for all i = 1, ..., k + 1 □
Proposition 9.28. A Riemannian manifold is flat if and only if its curvature tensor
vanishes identically.
Proof. On the Euclidean space the curvature vanishes, thus by the naturality
of the Levi-Civita connection one has one direction of the statement. Suppose now that
the Riemann tensor vanishes identically, so that the Levi-Civita connection satisfies
∇_X ∇_Y Z − ∇_Y ∇_X Z − ∇_{[X,Y]} Z = 0.
Consider an orthonormal basis (e_1|_p, ..., e_n|_p) for T_p M. Thanks to the previous Lemma
we may extend each e_i|_p locally to a parallel vector field, obtaining (e_1, ..., e_n) with e_i(p) = e_i|_p, and
since parallel transport preserves the inner product, (e_1, ..., e_n) is an orthonormal
frame. Since the Levi-Civita connection is torsion free, the definition of parallel vector
field gives:
∇_{e_i} e_j − ∇_{e_j} e_i − [e_i, e_j] = 0, hence [e_i, e_j] = 0.
This frame being commuting, we can find a coordinate system (U, ϕ) with coordinate
functions (x^i) such that e_i = ∂_i (we should prove it but we skip it). Remember that
when we write ∂_i|_q we mean (ϕ^{-1})_*(∂_i|_{ϕ(q)}) where q is any point in U; even if we often
avoid this notation, it should be clear from the context. Denoting by g the metric on M
and by ḡ the Euclidean metric, by construction we have
g(∂_i, ∂_j) = g((ϕ^{-1})_* ∂_i, (ϕ^{-1})_* ∂_j) = ((ϕ^{-1})^* g)(∂_i, ∂_j) = δ_{ij} = ḡ(∂_i, ∂_j),
thus (ϕ^{-1})^* g = ḡ and ϕ is the local isometry we were looking for. Note that the ∂_i in
the first and last brackets of the previous relation are different objects: the first one is a
vector field on M, the other one on the Euclidean space. 
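To see the failure of flatness concretely, one can parallel transport a vector around a latitude circle of the unit sphere: since the curvature does not vanish, the vector does not return to itself. A minimal numerical sketch (our own, not part of the notes, assuming numpy and scipy), integrating dV^k/dt + Γ^k_{ij} γ̇^i V^j = 0 along the curve θ = θ_0, φ = t:

import numpy as np
from scipy.integrate import solve_ivp

theta0 = np.pi/3   # latitude circle theta = theta0, parametrized by phi = t

def rhs(t, V):
    # dV^k/dt = -Gamma^k_{phi j} V^j, with Gamma^theta_{phi phi} = -sin*cos, Gamma^phi_{phi theta} = cot
    Vth, Vph = V
    return [np.sin(theta0)*np.cos(theta0)*Vph,
            -np.cos(theta0)/np.sin(theta0)*Vth]

sol = solve_ivp(rhs, (0.0, 2*np.pi), [1.0, 0.0], rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])   # approximately [-1, 0]

The transported vector comes back rotated by the angle 2π cos θ_0 (here π, i.e. flipped); the rotation is trivial only for the equator θ_0 = π/2, which is a geodesic.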
The Riemann tensor satisfies many identities that one can easily recover from its defi-
nition. We list them below using the component notation and the (4,0) tensor obtained
from the Riemann tensor by lowering the upper index, R_{ijkl} = g_{im} R^m_{jkl}:
• R_{ijkl} = −R_{jikl} and R_{ijkl} = −R_{ijlk}
• R_{ijkl} = R_{klij}
• First Bianchi identity
R_{ijkl} + R_{kijl} + R_{jkil} = 0
• Second Bianchi identity
∇_m R_{ijkl} + ∇_i R_{jmkl} + ∇_j R_{mikl} = 0
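Continuing the sphere sketch above (its definitions g, n and R are assumed to be in scope), the algebraic identities can be spot-checked symbolically:

# R_{ijkl} = g_{im} R^m_{jkl}, then spot-check the symmetries listed above
def Rlow(i, j, k, l):
    return sp.simplify(sum(g[i, m]*R(m, j, k, l) for m in range(n)))

print(Rlow(0, 1, 0, 1) + Rlow(1, 0, 0, 1))   # antisymmetry in the first pair: 0
print(Rlow(0, 1, 0, 1) + Rlow(0, 1, 1, 0))   # antisymmetry in the second pair: 0
print(Rlow(0, 1, 1, 0) - Rlow(1, 0, 0, 1))   # pair symmetry R_{ijkl} = R_{klij}: 0

In two dimensions these checks are of course rather weak, since the whole tensor is determined by the single independent component R_{θφθφ}.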
Other interesting tensors are the Ricci tensor Ric, a symmetric (2, 0) tensor defined
in components as
Ric = R_{ij} dx^i dx^j
with R_{ij} = R^k_{ikj}, and the scalar curvature S, defined as its trace S = g^{ij} R_{ij}. A Riemannian
manifold is called an Einstein manifold if
R_{ij} = λ g_{ij}
for some λ ∈ R
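On the unit sphere, continuing the earlier sketch (again its definitions are assumed in scope), these contractions give Ric = g and S = 2, so S^2 is an Einstein manifold with λ = 1:

# Ricci tensor R_{ij} = R^k_{ikj} and scalar curvature S = g^{ij} R_{ij}
Ric = sp.Matrix(n, n, lambda i, j: sp.simplify(sum(R(k, i, k, j) for k in range(n))))
S = sp.simplify(sum(ginv[i, j]*Ric[i, j] for i in range(n) for j in range(n)))
print(Ric)   # Matrix([[1, 0], [0, sin(theta)**2]]), i.e. Ric = g
print(S)     # 2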
Definition 9.29. A vector field X is called a Killing vector field if L_X g = 0
Proposition 9.30. If X = a^k ∂_k is a Killing vector field then ∇_i a_j + ∇_j a_i = 0, where a_j = g_{jk} a^k

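As a simple illustration (our own example, not from the notes): on the Euclidean plane the Levi-Civita connection reduces to partial derivatives, and the rotation generator X = −y ∂_x + x ∂_y satisfies the Killing equation:

import sympy as sp

xx, yy = sp.symbols('x y')
coords = [xx, yy]
a = [-yy, xx]   # components a^k of X; for the flat metric a_k = a^k
for i in range(2):
    for j in range(2):
        # flat metric: nabla_i a_j + nabla_j a_i = d_i a_j + d_j a_i
        print(sp.diff(a[j], coords[i]) + sp.diff(a[i], coords[j]))   # all entries are 0

Indeed the flow of X consists of rotations, which are isometries of the Euclidean metric.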
10. Principal bundle and gauge theories (**)


A principal bundle is defined as a fiber bundle π : E → M with standard fiber a Lie
group G, endowed with an equivalence class of principal bundle atlases that we
will describe below. The group G is referred to as the structure group of the principal
bundle, and principal bundles with structure group G are also called principal G-bundles.
In the theoretical physics literature G is instead named the gauge group. In more detail
we have the following
Definition 10.1. A G principal bundle is a fiber bundle π : E → M where E is equipped
with a right G action
R_•( ) : E × G → E,   (P, g) ↦ P · g := R_g(P)
that along each fiber is free (P · g = P ⇒ g = e) and transitive (for all P, P' in the same
fiber there exists g ∈ G s.t. P' = P · g); this definition essentially implies that each fiber
is homeomorphic as a topological space to the structure group. Moreover we require that
the trivialization maps are G equivariant, namely, being {U_α} an open cover of M, the
following diagram commutes:

π^{-1}(U_α) --ψ_α--> U_α × G
         π ↘       ↙ pr_1
              U_α

(that is, pr_1 ∘ ψ_α = π) and the trivialization map (that is a diffeomorphism by definition)
ψ_α(P) = (p = π(P), g_α(P)) is such that g_α(P · h) = g_α(P) · h
Let’s now discuss the compatibility of the trivialization maps on the overlaps U_{αβ} =
U_α ∩ U_β. Given two trivializations ψ_α(P) = (π(P), g_α(P)) and ψ_β(P) =
(π(P), g_β(P)), restricting our analysis to P ∈ π^{-1}(U_{αβ}) we can easily find the transition func-
tion g_{αβ} such that g_α(P) = g_{αβ}(P) g_β(P); in fact by the group property g_α(P) g_β^{-1}(P) =
g_{αβ}(P). Observe now that the transition function only depends on the fiber, not on the
point on the fiber; in fact the equivariance implies that
g_{αβ}(P · h) = g_α(P) h (g_β(P) h)^{-1} = g_α(P) h h^{-1} g_β^{-1}(P) = g_{αβ}(P),
thus we view g_{αβ} as a map from U_{αβ} to the group G. One can also easily check
g_{αβ} g_{βα} = e = g_{αβ} g_{βγ} g_{γα}
Definition 10.2. A connection on a G principal bundle E → M is an equivariant choice
of horizontal bundle, that is, at each point P ∈ E a decomposition T_P E = V_P E ⊕ H_P E
with (R_h)_* H_P E = H_{P·h} E.
The action of G on E defines the fundamental vector fields ξ^• : g → X(E) by
ξ_P^X := d/dt|_{t=0} (P · e^{tX}).
It is easy to check that π_*(ξ_P^X) = 0, thus ξ^X is a vertical vector field, and since G
acts freely we have the isomorphism g ≅ V_P E
Lemma 10.3.
(R_g)_* ξ^X = ξ^{g^{-1} X g} = ξ^{ad_{g^{-1}} X}
The horizontal subspace, being a linear subspace, can be cut out by k = dim G linear
equations T_P E → R; that is, H_P E is the kernel of a one-form ω at P with values in a k
dimensional vector space. There is a natural such vector space, namely the Lie algebra
g of G, and since the one-form ω annihilates horizontal vectors it is determined by what it
does on the vertical vectors. This leads to the following
Definition 10.4. The connection one form associated to the horizontal bundle HE is
the g valued one form on E defined by
ω(V) = X if V = ξ^X,   ω(V) = 0 if V ∈ HE
Proposition 10.5. (R_g)^* ω = ad_{g^{-1}} ∘ ω
Proof.
((R_g)^* ω)(ξ^X) = ω((R_g)_* ξ^X) = ω(ξ^{ad_{g^{-1}} X}) = ad_{g^{-1}} X

Given a local section σ_α : U_α → E we can pull back ω and define
A_α := (σ_α)^* ω|_{U_α} ∈ Ω^1(U_α, g)
Proposition 10.6. Let us for simplicity denote g := g_{αβ} : U_{αβ} → G, and suppose we
obtain A_α and A_β in two overlapping trivializations. Then, for a matrix Lie group, we have:
A_α = g A_β g^{-1} − (dg) g^{-1}

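For an abelian structure group the conjugation acts trivially and the rule above reduces to the familiar gauge transformation of electromagnetism: with g = e^{iλ} one gets A_α = A_β − i dλ. A quick symbolic check (our own sketch, not from the notes, assuming sympy and using a single coordinate x for brevity):

import sympy as sp

xc = sp.symbols('x', real=True)
lam = sp.Function('lam')(xc)          # local gauge parameter lambda(x)
A_beta = sp.I*sp.Function('a')(xc)    # u(1)-valued connection component (anti-hermitian)
gtr = sp.exp(sp.I*lam)                # transition function g = exp(i*lambda)

A_alpha = sp.simplify(gtr*A_beta/gtr - sp.diff(gtr, xc)/gtr)
print(sp.simplify(A_alpha - (A_beta - sp.I*sp.diff(lam, xc))))   # 0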