Dual Spaces PDF
1 Introduction
In the same way that vector spaces embody linearity, tensor products embody
multi-linearity. As such, tensors (the elements of tensor products) are a widely
applicable generalization of vectors and vector spaces.
In this paper, I seek to give an introductory, abstract understanding of tensors
and tensor products. This paper assumes knowledge commensurate with an
introductory undergraduate course in linear algebra and an undergraduate
course in abstract algebra.
2 Dual Spaces
To understand tensor products it is easiest to begin with dual spaces and
linearity.
Given finite-dimensional vector spaces V and W over a field F, we define
Hom(V, W), the set of all linear transformations from V to W. [1]
Theorem 1. Given two finite-dimensional vector spaces V and W over a field F,
Hom(V, W) forms a vector space over F where, for S, T ∈ Hom(V, W), α ∈ F,
and ~v ∈ V, vector addition is defined by (S + T)(~v) = S(~v) + T(~v) and scalar
multiplication is defined by (αS)(~v) = αS(~v). [1]
Proof. Let ~u, ~v ∈ V, α, β ∈ F, and ~w ∈ W.
Grab S, T ∈ Hom(V, W ). Then,
(S + T )(~u + ~v ) = S(~u + ~v ) + T (~u + ~v )
= S(~u) + S(~v ) + T (~u) + T (~v )
= S(~u) + T (~u) + S(~v ) + T (~v ) Vector addition is commutative in W
= (S + T )(~u) + (S + T )(~v )
So, (S + T ) ∈ Hom(V, W ) and addition is closed on Hom(V, W ).
And,
(S + T )(~v ) = S(~v ) + T (~v )
= T (~v ) + S(~v ) Vector addition is commutative in W
= (T + S)(~v )
So, addition in Hom(V, W ) is commutative.
Grab S, T, R ∈ Hom(V, W ). Then,
((S + T ) + R)(~v ) = (S + T )(~v ) + R(~v )
= S(~v ) + T (~v ) + R(~v )
= S(~v ) + (T + R)(~v )
= (S + (T + R))(~v )
So, addition in Hom(V, W ) is associative.
Define O : V → W by O(~v) = ~0. Then O ∈ Hom(V, W). Grab S ∈ Hom(V, W).
Then, (S + O)(~v) = S(~v) + O(~v) = S(~v) + ~0 = S(~v). Addition is commutative, so
(O + S)(~v) = S(~v). So, Hom(V, W) is non-empty and contains an identity element.
Let S(~v) = ~w. Define Ŝ : V → W by Ŝ(~v) = −~w. Then Ŝ ∈ Hom(V, W) and
(S + Ŝ)(~v) = ~w − ~w = ~0 = O(~v), so each element has an additive inverse. Then,
(αS)(~u + ~v) = α(S(~u + ~v))
= α(S(~u) + S(~v))
= (αS)(~u) + (αS)(~v)    Scalar multiplication is distributive in W
(1S)(~v) = 1S(~v)
= S(~v)    Definition of 1 in W
...
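The axioms above can be sanity-checked numerically. The sketch below, with linear maps on R² of our own choosing (not from the text), represents elements of Hom(V, W) as Python functions and implements the pointwise operations from Theorem 1:

```python
# A numeric sanity check of Theorem 1. S and T are illustrative linear
# maps R^2 -> R^2; they stand in for elements of Hom(V, W).

def S(v):
    x, y = v
    return (2 * x + y, 3 * y)

def T(v):
    x, y = v
    return (x - y, 4 * x)

def add(f, g):
    # vector addition in Hom(V, W): (f + g)(v) = f(v) + g(v)
    return lambda v: tuple(a + b for a, b in zip(f(v), g(v)))

def scale(alpha, f):
    # scalar multiplication: (alpha f)(v) = alpha f(v)
    return lambda v: tuple(alpha * a for a in f(v))

def vec_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

u, v = (1, 2), (3, -1)

# (S + T)(u + v) = (S + T)(u) + (S + T)(v): the sum is again linear
assert add(S, T)(vec_add(u, v)) == vec_add(add(S, T)(u), add(S, T)(v))
# addition in Hom(V, W) is commutative
assert add(S, T)(v) == add(T, S)(v)
# (1 S)(v) = S(v)
assert scale(1, S)(v) == S(v)
```

The maps are ordinary closures, so the check exercises exactly the pointwise definitions used in the proof.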
Given an F-vector space V, we define the dual space of V to be V∗ =
Hom(V, F). The elements of V∗ are called linear forms, linear functionals, or
linear maps. Then, V∗ is a vector space over F.
So, for each basis vector ~vk, ρk = 0, and B∗ is linearly independent.
Grab S ∈ V∗ and ~v ∈ V, with ~v = a1~v1 + · · · + an~vn.
First note that, because S is linear, S(~v) = a1S(~v1) + · · · + anS(~vn). So S is
determined by its values on the basis B, and B∗ spans V∗.
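A minimal numeric illustration of the dual-basis idea in R², using the standard basis and an arbitrary example functional of our own choosing:

```python
# The coordinate functionals rho1, rho2 form the dual basis to the
# standard basis of R^2; S is an arbitrary linear form for illustration.

e1, e2 = (1, 0), (0, 1)

def rho1(v): return v[0]   # rho1(a e1 + b e2) = a
def rho2(v): return v[1]   # rho2(a e1 + b e2) = b

def S(v):                  # an example element of V*
    return 5 * v[0] - 2 * v[1]

# S is determined by its values on the basis: S = S(e1) rho1 + S(e2) rho2
v = (3, 7)
assert S(v) == S(e1) * rho1(v) + S(e2) * rho2(v)
```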
We can also consider the more general case, Hom(V, W ), where V and W
are vector spaces over field F .
Theorem 2. Given finite-dimensional vector spaces V and W over a field F
with dim(V) = n and dim(W) = m, dim(Hom(V, W)) = nm.
Proof. Let ai, akl be scalars in F, {~v1, . . . , ~vn} be a basis of V, and {~w1, . . . , ~wm}
be a basis of W.
Consider the set B = {φkl ∈ Hom(V, W) : φkl(a1~v1 + · · · + ak~vk + · · · + an~vn) = ak ~wl}.
Let Σ_{k=1}^{n} Σ_{l=1}^{m} akl φkl = O. Then, for each basis vector ~vk,
(Σ_{k=1}^{n} Σ_{l=1}^{m} akl φkl)(~vk) = O(~vk)
Σ_{l=1}^{m} akl(φkl(~vk)) = O(~vk)
Σ_{l=1}^{m} akl ~wl = ~0
akl = 0    The ~wl are linearly independent
...
We can also consider the dual space of the dual space. That is, the double
(algebraic) dual space (V∗)∗ = V∗∗, which consists of all linear forms ω : V∗ →
F. So, the elements of V∗∗ are maps which assign a scalar value to each linear
form in V∗. [2]
Remark 1. V ∗∗ is a vector space over F .
It is then natural to consider the map ω~v : V∗ → F defined by ω~v(S) = S(~v).
This map is an evaluation homomorphism.
Proposition 2. ω~v ∈ V∗∗.
Proof. Let S, T ∈ V∗ and a, b ∈ F. Then,
ω~v(aS + bT) = (aS + bT)(~v) = aS(~v) + bT(~v) = aω~v(S) + bω~v(T).
So ω~v is linear, and ω~v ∈ V∗∗.
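The evaluation map can be sketched directly in Python; the linear forms S and T below are examples of our own, not from the text:

```python
# omega(v) sends a linear form S to the scalar S(v), i.e. omega_v(S) = S(v).

def omega(v):
    return lambda S: S(v)

S = lambda v: 2 * v[0] + v[1]      # example elements of V*
T = lambda v: v[0] - 3 * v[1]
a, b = 4, -1
v = (2, 5)

aSbT = lambda w: a * S(w) + b * T(w)   # the form aS + bT in V*

# omega_v(aS + bT) = a omega_v(S) + b omega_v(T): omega_v is linear
assert omega(v)(aSbT) == a * omega(v)(S) + b * omega(v)(T)
```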
3 Multi-Linear Forms
We can further extend the ideas embodied in the Dual Space to include multi-
linearity. So, as with Dual Spaces, we begin with (multi-) linear forms.
Given vector spaces V1, V2, . . . , Vn, and W over a field F, we can define a map
φ : V1 × · · · × Vn → W. If φ is linear in each argument, φ is n-linear.
For example, a bilinear map T : V1 × V2 → W satisfies, for a, b ∈ F, ~u, ~v ∈ V1, and
~x, ~y ∈ V2, T(a~u + b~v, ~x) = aT(~u, ~x) + bT(~v, ~x) and T(~u, a~x + b~y) = aT(~u, ~x) +
bT(~u, ~y).
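These two conditions can be checked numerically for a standard example of a bilinear map, the dot product on R² × R² (our choice of example, not from the text):

```python
# T(u, x) = u . x is bilinear; we verify linearity in each argument
# separately on sample vectors and scalars.

def T(u, x):
    return u[0] * x[0] + u[1] * x[1]

def comb(a, u, b, v):
    # the linear combination a*u + b*v in R^2
    return tuple(a * p + b * q for p, q in zip(u, v))

a, b = 2, -3
u, v = (1, 4), (0, 2)
x, y = (5, 1), (2, 2)

# linear in the first argument
assert T(comb(a, u, b, v), x) == a * T(u, x) + b * T(v, x)
# linear in the second argument
assert T(u, comb(a, x, b, y)) == a * T(u, x) + b * T(u, y)
```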
In the same way that V∗ is a vector space if V is a vector space, the set
of all n-linear maps from V1 × · · · × Vn to W, denoted Ln(V1, . . . , Vn; W), is a
vector space. If V1, . . . , Vn are vector spaces over a field F and φ : V1 × · · · × Vn → F
is n-linear, φ is an n-linear form.
It is important to note that V1 × · · · × Vn here is a Cartesian product (not a
direct product): the notation Ln(V1, . . . , Vn; W) denotes the set of n-linear maps
from the Cartesian product V1 × · · · × Vn to W, not linear maps on a direct
product. [2]
Lemma 4. Given finite-dimensional vector spaces V and W:
• There exists a ϕ ∈ L(V, W) such that ϕ(~v) = ~0 for all ~v ∈ V.
• ~v = ~0 if and only if φ(~v) = ~0 for all φ ∈ L(V, W).
Proof. Let O ∈ L(V, W) be the zero transformation. Then, for all ~v ∈ V,
O(~v) = ~0.
Then consider ~0 ∈ V and φ ∈ L(V, W).
φ(~0) = φ(0~v) = 0φ(~v) = ~0
Conversely, if ~v ≠ ~0, extend {~v} to a basis of V and define φ ∈ L(V, W) with
φ(~v) ≠ ~0. So, if φ(~v) = ~0 for all φ ∈ L(V, W), then ~v = ~0.
Theorem 5. Given vector spaces V, U, and W over field F, L2 (V, U ; W ) is a
vector space.
Proof. First consider Hom(U, Hom(V, W)). By Theorem 1, Hom(V, W) is a
vector space. Similarly, Hom(U, Hom(V, W)) is a vector space.
As an aside, Hom(U, Hom(V, W)) is the same space as L(U, L(V, W)).
Let a, b ∈ F, ~ui ∈ U, ~vi ∈ V, and φi ∈ L(U, L(V, W)).
Define the map T : L(U, L(V, W)) → L2(U, V ; W) by T(φ) = φ̂, where
φ̂(~u, ~v) = (φ(~u))(~v) for all ~u ∈ U and ~v ∈ V.
First, we check the bilinearity of φ̂
Consider,
And,
So, φ̂ is bilinear.
Consider,
So, T is linear.
Consider ker(T) = {φ ∈ L(U, L(V, W)) : T(φ) = O}.
If T(φ) = O, then O(~u, ~v) = φ̂(~u, ~v) = (φ(~u))(~v) for all ~u ∈ U, ~v ∈ V.
Therefore, φ(~u) = ~0 for all ~u ∈ U.
Then, by Lemma 4, φ = O. So, ker(T) = {O}.
Then, T is injective.
Note dim(ker(T)) = dim({O}) = 0.
Consider Im(T) = {T(φ) ∈ L2(U, V ; W) : φ ∈ L(U, L(V, W))}.
By rank-nullity, dim(Im(T)) = dim(L(U, L(V, W))) − dim(ker(T)) =
dim(L(U, L(V, W))) = dim(L2(U, V ; W)).
Therefore, T is surjective.
Therefore, T is an isomorphism and L(U, L(V, W)) ≅ L2(U, V ; W).
Therefore, L2 (U, V ; W ) is a vector space.
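The isomorphism T of Theorem 5 is, in programming terms, uncurrying: a linear map into a space of linear maps corresponds to a bilinear map via (φ(~u))(~v) = φ̂(~u, ~v). A sketch with an example map of our own (multiplication on R × R):

```python
# curry and uncurry realize the correspondence of Theorem 5:
# phi_hat(u, v) = (phi(u))(v).

def curry(phi_hat):
    return lambda u: (lambda v: phi_hat(u, v))

def uncurry(phi):
    return lambda u, v: phi(u)(v)

phi_hat = lambda u, v: u * v      # a bilinear map R x R -> R
phi = curry(phi_hat)

assert phi(3)(4) == phi_hat(3, 4) == 12
# the two directions are mutually inverse on this example
assert uncurry(curry(phi_hat))(3, 4) == phi_hat(3, 4)
```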
Consider the n = k + 1 case: Lk+1(V1, . . . , Vk+1; W).
Define the map Tk+1 : L(V1, Lk(V2, . . . , Vk+1; W)) → Lk+1(V1, . . . , Vk+1; W) by
(Tk+1(φ))(~v1, . . . , ~v_{k+1}) = φ̂(~v1, . . . , ~v_{k+1}) = (φ(~v1, . . . , ~vk))(~v_{k+1}).
The proof that Tk+1 is an isomorphism is almost identical to the proof of Theorem 5.
By the induction hypothesis, φ(~v1, . . . , ~vk) = (ϕ(~v1, . . . , ~v_{k−1}))(~vk) for some ϕ.
That is, Tk(ϕ) = φ. So, Tk+1(φ) = (Tk(ϕ))(~v_{k+1}).
Consider ker(Tk+1) = {φ ∈ L(V1, Lk(V2, . . . , Vk+1; W)) : Tk+1(φ) = O}.
For all ~v1 ∈ V1, . . . , ~v_{k+1} ∈ Vk+1 and for all φ ∈ ker(Tk+1),
~0 = φ̂(~v1, . . . , ~v_{k+1}) = (φ(~v1, . . . , ~vk))(~v_{k+1}).
As in Theorem 5, it follows that ker(Tk+1) = {O} and Tk+1 is an isomorphism.
4 Tensor Products
Now, we can begin to understand tensors and tensor products. We start with
the simplest case: two finite vector spaces.
Given finite-dimensional vector spaces U, V, and W over a field F, we can construct
another vector space U ⊗ V with a bilinear map π : U × V → U ⊗ V. If, for any W
and every T ∈ L2(U, V ; W), there exists a unique linear map T̂ : U ⊗ V → W
such that T = T̂ ◦ π, then U ⊗ V is a tensor product. [2]
First we must prove that tensor products exist. As we will later see, the
tensor product of two vector spaces is unique up to isomorphism. For this
reason, our choice of "W" is unimportant. [4]
Theorem 8. Given two finite-dimensional vector spaces U and V, the tensor
product U ⊗ V is unique up to isomorphism.
Proof. Let ((U ⊗ V)1, φ1) and ((U ⊗ V)2, φ2) be two tensor products of U and V.
First note that (U ⊗ V)1 and (U ⊗ V)2 are vector spaces, so each can serve
as the "W" in the universal property of the other.
By definition, we can obtain a unique linear map T : (U ⊗ V)1 → (U ⊗ V)2
such that φ2 = T ◦ φ1.
Similarly, we can obtain a unique linear map S : (U ⊗ V)2 → (U ⊗ V)1 such
that φ1 = S ◦ φ2.
Therefore, (S ◦ T) ◦ φ1 = S ◦ φ2 = φ1.
The identity map also satisfies id ◦ φ1 = φ1, so by uniqueness S ◦ T = id;
symmetrically, T ◦ S = id. Therefore, S = T⁻¹ and T is an isomorphism.
So, ((U ⊗ V)1, φ1) ≅ ((U ⊗ V)2, φ2).
Because a tensor product of two vector spaces is unique up to isomorphism,
we refer to the tensor product.
It should then be clear that L(U ⊗ V ; W) ≅ L2(U, V ; W).
This allows us to "set" W to the field of scalars F. It then becomes more
convenient to work with V, V∗, and V∗∗.
For the next theorem, it is necessary to understand ~v ⊗ ~w. For the purposes
of this paper, it is adequate to say that ~v ⊗ ~w is the outer product of ~v and ~w.
As a concrete example, consider ~v = (1, 2, 3) ∈ R3 and ~w = (1, 2) ∈ R2.
Then ~v ⊗ ~w is the 3 × 2 matrix with entries vi wj:

    [1·1  1·2]   [1  2]
    [2·1  2·2] = [2  4]
    [3·1  3·2]   [3  6]
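The outer product is easy to compute in plain Python; the sketch below reproduces the example above:

```python
# (v ⊗ w)_{ij} = v_i * w_j, returned as a list of rows.

def outer(v, w):
    return [[vi * wj for wj in w] for vi in v]

v = [1, 2, 3]
w = [1, 2]
assert outer(v, w) == [[1, 2], [2, 4], [3, 6]]
```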
Theorem 9. Given a basis {~u1, . . . , ~un} of finite-dimensional vector space U and
a basis {~v1, . . . , ~vm} of finite-dimensional vector space V, the set {~ui ⊗ ~vj} for
1 ≤ i ≤ n and 1 ≤ j ≤ m forms a basis of U ⊗ V.
Proof. Again, the proof of this theorem goes well beyond the scope of this paper.
[5]
Corollary 4. Given finite-dimensional vector spaces U and V, dim(U ⊗ V) =
dim(U) dim(V).
Proof. If {~ui ⊗ ~vj} for 1 ≤ i ≤ n and 1 ≤ j ≤ m forms a basis of U ⊗ V, then
dim(U ⊗ V) = |{~ui ⊗ ~vj}| = nm = dim(U) dim(V).
We can then begin to explore the behavior of ⊗. [4]
Proposition 3. Given finite-dimensional vector spaces U and V with ~u ∈ U and
~v ∈ V, if ~u ⊗ ~v = ~0, then ~u = ~0 or ~v = ~0.
Proposition 4. Given finite dimensional vector spaces U and V , U ⊗ V '
V ⊗ U.
Proposition 5. Given finite-dimensional vector spaces U, V, and W, U ⊗ (V ⊗ W) ≅
(U ⊗ V) ⊗ W.
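A finite-dimensional sanity check of Propositions 4, 5, and Corollary 4: for coordinate vectors, the flattened outer (Kronecker) product of entries is associative, and its length multiplies dimensions. The sample vectors are our own:

```python
# kron(a, b) lists the entries a_i * b_j in lexicographic order, the
# coordinate form of a ⊗ b with respect to the bases {u_i ⊗ v_j}.

def kron(a, b):
    return [x * y for x in a for y in b]

u, v, w = [1, 2], [3, 4, 5], [6, 7]

# mirrors U ⊗ (V ⊗ W) ≅ (U ⊗ V) ⊗ W on coordinates
assert kron(kron(u, v), w) == kron(u, kron(v, w))
# mirrors dim(U ⊗ V) = dim(U) dim(V)
assert len(kron(u, v)) == len(u) * len(v)
```

Note that kron(u, v) and kron(v, u) are not equal entrywise; U ⊗ V ≅ V ⊗ U is an isomorphism (a permutation of coordinates), not an equality.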
5 Conclusion
At its core, the tensor product generalizes the concept of multi-linearity to form
a "new" finite-dimensional vector space from a given set of finite-dimensional
vector spaces.
The elements of a tensor product are called tensors. In general, tensors are
generalizations of vectors.
In the same way that many systems (physical or otherwise) require more
than a simple scalar to be described, some systems require more than a vector
to be described. Tensors, as a generalization of vectors, allow these systems to
be described.
For example, the stress in a 3-dimensional object requires 2 "components"
per direction. So, stress lends itself to being described by a tensor with 2
components per basis vector, making it a (2, 0)-tensor.
As a result, tensors are widely applicable across a variety of fields.
6 Sources
[1] Judson, T. Abstract Algebra: Theory and Applications. Orthogonal Publishing
L3C, 2020. Ch. 18.
[2] Roman, S. Advanced Linear Algebra, 2nd ed. New York, NY: Springer,
2005. Ch. 14.
[3] Herstein, I. N. Topics in Algebra, 2nd ed. New York, NY: John Wiley &
Sons, 1975. Ch. 4.
[4] Erdman, J. "Elements of Linear and Multilinear Algebra." Lecture notes,
Portland State University, 2014.
[5] Lang, S. Algebra, 3rd ed. New York, NY: Springer, 2002. Ch. 16.