Chapter 1

Tensor Notation
A Working Knowledge in Tensor Analysis
This chapter is not meant as a replacement for a course in tensor analysis, but it will
provide a working background to tensor notation and algebra.
1.1 Cartesian Frame of Reference
Physical quantities encountered are either scalars (e.g., time, temperature, pressure, volume, density), vectors (e.g., displacement, velocity, acceleration, force, torque), or tensors (e.g., stress, displacement gradient, velocity gradient, alternating tensor); we deal mostly with second-order tensors. These quantities are distinguished by the following generic notation:

s denotes a scalar (lightface italic)
u denotes a vector (boldface)
F denotes a tensor (boldface)

The distinction between vector and tensor is usually clear from the context. When they are functions of points in a three-dimensional Euclidean space E, they are called fields. The set of all vectors (or tensors) forms a normed vector space U.
Distances and time are measured in the Cartesian frame of reference, or simply frame of reference, F = {O; e_1, e_2, e_3}, which consists of an origin O, a clock, and an orthonormal basis {e_1, e_2, e_3}, see Fig. 1.1,

    e_i · e_j = δ_ij,   i, j = 1, 2, 3,   (1.1)
where the Kronecker delta is defined as

    δ_ij = 1 if i = j,   δ_ij = 0 if i ≠ j.   (1.2)
We only deal with right-handed frames of reference (applying the right-hand rule: the thumb is in direction 1, the forefinger in direction 2, and the middle finger lies in direction 3), where

    (e_1 × e_2) · e_3 = 1.
N. Phan-Thien, Understanding Viscoelasticity, Graduate Texts in Physics,
DOI 10.1007/978-3-642-32958-6_1, Springer-Verlag Berlin Heidelberg 2013
Fig. 1.1 Cartesian frame of
reference
Fig. 1.2 Albert Einstein (1879–1955) got the Nobel Prize in Physics in 1921 for his explanation of the photoelectric effect. He derived the effective viscosity of a dilute suspension of neutrally buoyant spheres, η = η_s (1 + 5φ/2), η_s: the solvent viscosity, φ: the sphere volume fraction
The Cartesian components of a vector u are given by

    u_i = u · e_i,   (1.3)

so that one may write

    u = Σ_{i=1}^{3} u_i e_i = u_i e_i.   (1.4)
Here we have employed the summation convention: whenever there are repeated subscripts, a summation is implied over the range of the subscripts, from 1 to 3. For example,

    A_ij B_jk = Σ_{j=1}^{3} A_ij B_jk.   (1.5)
This short-hand notation is due to Einstein (Fig. 1.2), who argued that physical laws
must not depend on coordinate systems, and therefore must be expressed in tensorial
format. This is the essence of the Principle of Frame Indifference, to be discussed
later.
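The summation convention maps directly onto NumPy's einsum, where a repeated index letter is summed over. A minimal sketch (the matrix values are illustrative):

```python
import numpy as np

# A_ij B_jk: the repeated index j is summed over, which is exactly
# what np.einsum does when an index letter appears twice.
A = np.arange(9.0).reshape(3, 3)
B = np.arange(9.0, 18.0).reshape(3, 3)

C = np.einsum('ij,jk->ik', A, B)   # summation convention spelled out
assert np.allclose(C, A @ B)       # same result as matrix multiplication
```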
The alternating tensor is defined as

    ε_ijk = +1, if (i, j, k) is an even permutation of (1, 2, 3),
            −1, if (i, j, k) is an odd permutation of (1, 2, 3),
             0, otherwise.   (1.6)
Fig. 1.3 Two frames of
reference sharing a common
origin
1.1.1 Position Vector
In the frame F = {O; e_1, e_2, e_3}, the position vector is denoted by

    x = x_i e_i,   (1.7)

where x_i are the components of x.
1.2 Frame Rotation
Consider two frames of reference, F = {O; e_1, e_2, e_3} and F′ = {O; e′_1, e′_2, e′_3}, as shown in Fig. 1.3, one obtained from the other by a rotation. Hence,

    e_i · e_j = δ_ij,   e′_i · e′_j = δ_ij.

Define the cosine of the angle between e′_i and e_j as

    A_ij = e′_i · e_j.

Thus A_ij can be regarded as the components of e′_i in F, or the components of e_j in F′. We write

    e′_p = A_pi e_i,   A_pi A_qi = δ_pq.

Similarly

    e_i = A_pi e′_p,   A_pi A_pj = δ_ij.
1.2.1 Orthogonal Matrix
A matrix is said to be an orthogonal matrix if its inverse is also its transpose; fur-
thermore, if its determinant is +1, then it is a proper orthogonal matrix. Thus [A] is
a proper orthogonal matrix.
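These two properties are easy to check numerically. A quick sketch (the rotation angle and axis are illustrative):

```python
import numpy as np

# Frame rotation about the e3-axis by an illustrative angle t;
# rows of A are the primed basis vectors expressed in the old frame.
t = 0.3
A = np.array([[ np.cos(t), np.sin(t), 0.0],
              [-np.sin(t), np.cos(t), 0.0],
              [ 0.0,       0.0,       1.0]])

# Orthogonal: the inverse equals the transpose; proper: determinant +1.
assert np.allclose(A.T @ A, np.eye(3))
assert np.allclose(A @ A.T, np.eye(3))
assert np.isclose(np.linalg.det(A), 1.0)
```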
We now consider a vector u, expressed in either frame F or F′,

    u = u_i e_i = u′_j e′_j.

Taking the scalar product with either base vector,

    u′_i = e′_i · e_j u_j = A_ij u_j,
    u_j = e_j · e′_i u′_i = A_ij u′_i.
In matrix notation,

    [A] = [ A_11 A_12 A_13
            A_21 A_22 A_23
            A_31 A_32 A_33 ],   [u] = [ u_1; u_2; u_3 ],   [u′] = [ u′_1; u′_2; u′_3 ],

we have

    [u′] = [A][u],   [u] = [A]^T [u′],
    u′_i = A_ij u_j,   u_j = A_ij u′_i.   (1.8)
In particular, the position vector transforms according to this rule:

    x = x′_i e′_i = x_j e_j,   x′_i = A_ij x_j  or  x_j = A_ij x′_i.
1.2.2 Rotation Matrix
The matrix A is called a rotation; in fact, a proper rotation (det A = 1).
1.3 Tensors
1.3.1 Zero-Order Tensors
Scalars, which are invariant under a frame rotation, are said to be tensors of zero
order.
1.3.2 First-Order Tensor
A set of three scalars referred to one frame of reference, written collectively as v = (v_1, v_2, v_3), is called a tensor of first order, or a vector, if the three components transform according to (1.8) under a frame rotation.
Clearly,

• If u and v are vectors, then u + v is also a vector.
• If u is a vector, then αu is also a vector, where α is a real number.

The set of all vectors forms a vector space U under addition and scalar multiplication. In this space, the usual scalar product can be shown to be an inner product. With the norm induced by this inner product, |u|² = u · u, U is a normed vector space. We also refer to a vector u by its components, u_i.
1.3.3 Outer Products
Consider now two tensors of first order, u_i and v_i. The product u_i v_j represents the outer product of u and v, written as (the subscripts are assigned from left to right by convention)

    [uv] = [ u_1 v_1  u_1 v_2  u_1 v_3
             u_2 v_1  u_2 v_2  u_2 v_3
             u_3 v_1  u_3 v_2  u_3 v_3 ].

In a frame rotation, from F to F′, the components of this product change according to

    u′_i v′_j = A_im A_jn u_m v_n.
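The outer product and its component matrix can be sketched directly in NumPy (the vectors are illustrative):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

uv = np.outer(u, v)   # [uv]_ij = u_i v_j, subscripts left to right

assert np.allclose(uv, np.einsum('i,j->ij', u, v))
assert uv[0, 2] == u[0] * v[2]   # spot-check one component
```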
1.3.4 Second-Order Tensors
In general, a set of 9 scalars referred to one frame of reference, collectively written as W = [W_ij], transformed to another set under a frame rotation according to

    W′_ij = A_im A_jn W_mn,   (1.9)

is said to be a second-order tensor, or a two-tensor, or simply a tensor (when the order does not have to be explicit). In matrix notation, we write

    [W′] = [A][W][A]^T  or  W′ = AWA^T  or  W′_ij = A_ik W_kl A_jl.
In the direct notation, we denote a tensor by a bold face letter (without the square
brackets). This direct notation is intimately connected to the concept of a linear
operator, e.g., Gurtin [29].
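The index form and matrix form of the transformation rule (1.9) can be checked against each other; a sketch with illustrative values:

```python
import numpy as np

t = 0.3
A = np.array([[ np.cos(t), np.sin(t), 0.0],
              [-np.sin(t), np.cos(t), 0.0],
              [ 0.0,       0.0,       1.0]])
W = np.arange(9.0).reshape(3, 3)

W1 = np.einsum('im,jn,mn->ij', A, A, W)   # W'_ij = A_im A_jn W_mn
W2 = A @ W @ A.T                          # matrix form A W A^T
assert np.allclose(W1, W2)

# the trace is unchanged by the rotation (an invariant, see Sect. 1.5.2)
assert np.isclose(np.trace(W1), np.trace(W))
```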
1.3.5 Third-Order Tensors
A set of 27 scalars referred to one frame of reference, collectively written as W = [W_ijk], transformed to another set under a frame rotation according to

    W′_ijk = A_il A_jm A_kn W_lmn,   (1.10)

is said to be a third-order tensor.
Obviously, the definition can be extended to a set of 3^n scalars, and W = [W_{i_1 i_2 ... i_n}] (n indices) is said to be an n-order tensor if its components transform under a frame rotation according to

    W′_{i_1 i_2 ... i_n} = A_{i_1 j_1} A_{i_2 j_2} ··· A_{i_n j_n} W_{j_1 j_2 ... j_n}.   (1.11)
We will deal mainly with vectors and tensors of second order. Usually, a higher-
order (higher than 2) tensor is formed by taking outer products of tensors of lower
orders, for example the outer product of a two-tensor T and a vector n is a third-
order tensor Tn. One can verify that the transformation rule (1.11) is obeyed.
1.3.6 Transpose Operation
The components of the transpose of a tensor W are obtained by swapping the indices:

    [W]_ij = W_ij,   [W^T]_ij = W_ji.

A tensor S is symmetric if it is unaltered by the transpose operation,

    S = S^T,   S_ij = S_ji.

It is anti-symmetric (or skew) if

    S = −S^T,   S_ij = −S_ji.
An anti-symmetric tensor must have zero diagonal terms (when i = j).
Clearly,

• If U and V are two-tensors, then U + V is also a two-tensor.
• If U is a two-tensor, then αU is also a two-tensor, where α is a real number.

The set of all two-tensors forms a vector space under addition and scalar multiplication.
1.3.7 Decomposition
Any second-order tensor can be decomposed into symmetric and anti-symmetric parts:

    W = ½(W + W^T) + ½(W − W^T),
    W_ij = ½(W_ij + W_ji) + ½(W_ij − W_ji).   (1.12)
Returning to (1.9), if we interchange i and j, we get

    W′_ji = A_jm A_in W_mn = A_jn A_im W_nm.

The second equality arises because m and n are dummy indices, mere labels in the summation. The left side of this expression is recognized as the components of the transpose of W. The equation asserts that the components of the transpose of W are also transformed according to (1.9). Thus, if W is a two-tensor, then its transpose is also a two-tensor, and the Cartesian decomposition (1.12) splits an arbitrary two-tensor into a symmetric and an anti-symmetric tensor (of second order).
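The decomposition (1.12) is one line of NumPy; a sketch with an illustrative tensor:

```python
import numpy as np

W = np.arange(9.0).reshape(3, 3)
S = 0.5 * (W + W.T)   # symmetric part
A = 0.5 * (W - W.T)   # anti-symmetric part

assert np.allclose(S, S.T)
assert np.allclose(A, -A.T)
assert np.allclose(np.diag(A), 0.0)   # zero diagonal, as noted above
assert np.allclose(S + A, W)          # the two parts recombine to W
```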
We now go through some of the first- and second-order tensors that will be encountered in this course.
1.3.8 Some Common Vectors
Position, displacement, velocity, acceleration, linear and angular momentum, linear
and angular impulse, force, torque, are vectors. This is because the position vector
transforms under a frame rotation according to (1.8). Any other quantity linearly
related to the position (including the derivative and integral operation) will also be
a vector.
1.3.9 Gradient of a Scalar
The gradient of a scalar is a vector. Let φ be a scalar; its gradient is written as

    g = ∇φ,   g_i = ∂φ/∂x_i.

Under a frame rotation, the new components of ∇φ are

    ∂φ/∂x′_i = (∂φ/∂x_j)(∂x_j/∂x′_i) = A_ij ∂φ/∂x_j,

which qualifies ∇φ as a vector.
1.3.10 Some Common Tensors
We have met a second-order tensor formed by the outer product of two vectors, written compactly as uv, with components (for vectors, the outer product is written without the symbol ⊗)

    (uv)_ij = u_i v_j.

In general, the outer product of n vectors is an n-order tensor.

Unit Tensor  The Kronecker delta is a second-order tensor. In fact it is invariant in any coordinate system, and therefore is an isotropic tensor of second order. To show that it is a tensor, note that

    δ′_ij = A_ik A_jk = A_ik A_jl δ_kl,

which follows from the orthogonality of the transformation matrix. The δ_ij are said to be the components of the second-order unit tensor I. Finding isotropic tensors of arbitrary orders is not a trivial task.
Gradient of a Vector  The gradient of a vector is a two-tensor: if u_i and u′_i are the components of u in F and F′,

    ∂u′_i/∂x′_j = (∂x_l/∂x′_j) ∂/∂x_l (A_ik u_k) = A_ik A_jl ∂u_k/∂x_l.

This qualifies the gradient of a vector as a two-tensor.
Velocity Gradient  If u is the velocity field, then ∇u is the gradient of the velocity. Be careful with the notation here. By our convention, the subscripts are assigned from left to right, so

    (∇u)_ij = ∂_i u_j = ∂u_j/∂x_i.

In most books on viscoelasticity, including this one, the term velocity gradient is taken to mean the second-order tensor L = (∇u)^T with components

    L_ij = ∂u_i/∂x_j.   (1.13)
Strain Rate and Vorticity Tensors  The velocity gradient tensor can be decomposed into a symmetric part D, called the strain rate tensor, and an anti-symmetric part W, called the vorticity tensor:

    D = ½(∇u + ∇u^T),   W = ½(∇u^T − ∇u).   (1.14)
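For a concrete case, consider a simple shear flow; a sketch (the shear rate value is illustrative) computing D and W from the velocity gradient under the sign convention of (1.14):

```python
import numpy as np

gdot = 2.0  # illustrative shear rate
# simple shear u = (gdot * x2, 0, 0): (grad u)_ij = d u_j / d x_i,
# so the only nonzero entry is (grad u)_21 = gdot
gradu = np.zeros((3, 3))
gradu[1, 0] = gdot

D = 0.5 * (gradu + gradu.T)   # strain rate tensor
W = 0.5 * (gradu.T - gradu)   # vorticity tensor, Eq. (1.14)

assert np.allclose(D + W, gradu.T)   # D + W = L = (grad u)^T
assert np.isclose(D[0, 1], gdot / 2)
assert np.allclose(D, D.T) and np.allclose(W, -W.T)
```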
Fig. 1.4 Defining the stress tensor
Stress Tensor and Quotient Rule  We are given that the stress T = [T_ij] at a point x is defined by, see Fig. 1.4,

    t = Tn,   t_i = T_ij n_j,   (1.15)

where n is a normal unit vector on an infinitesimal surface ΔS at point x, and t is the surface traction (force per unit area) representing the force the material on the positive side of n is pulling on the material on the negative side of n. Under a frame rotation, since both t (force) and n are vectors,

    t′ = At,   t = A^T t′,   n′ = An,   n = A^T n′,
    A^T t′ = t = Tn = TA^T n′,   t′ = ATA^T n′.

From the definition of the stress, t′ = T′n′, and therefore

    T′ = ATA^T.

So the stress is a second-order tensor.

In fact, as long as t and n are vectors, the 9 components T_ij defined in the manner indicated by (1.15) form a second-order tensor. This is known as the quotient rule.
1.4 Tensor and Linear Vector Function
L is a linear vector function on U if it satisfies

    L(u_1 + u_2) = L(u_1) + L(u_2),
    L(αu) = αL(u),   ∀u, u_1, u_2 ∈ U, α ∈ ℝ.
1.4.1 Claim
Let W be a two-tensor, and define a vector-valued function through

    v = L(u) = Wu;

then L is a linear function. Conversely, for any linear function on U, there is a unique two-tensor W such that

    L(u) = Wu,   ∀u ∈ U.

The first statement can be easily verified. For the converse part, given the linear function, let us define W_ij through

    L(e_i) = W_ji e_j.

Now, ∀u ∈ U,

    v = L(u) = L(u_i e_i) = u_i W_ji e_j,
    v_j = W_ji u_i.

W is a second-order tensor because u and v are vectors. The uniqueness of W can be demonstrated by assuming that there is another W′; then

    (W − W′)u = 0,   ∀u ∈ U,

which implies that W′ = W.

In this connection, one can define a second-order tensor as a linear function, taking one vector into another. This is the direct approach, e.g., Gurtin [29], emphasizing linear algebra. We use whatever notation is convenient for the purpose at hand. The set of all linear vector functions forms a vector space under addition and scalar multiplication. The main result here is that

    L(e_i) = We_i = W_ji e_j,   W_ji = e_j · (We_i).
1.4.2 Dyadic Notation
Thus, one may write

    W = W_ij e_i e_j.   (1.16)

This is the basis for the dyadic notation: the dyads e_i e_j play the role of the basis vectors for the tensor W.
1.5 Tensor Operations
1.5.1 Substitution
The operation δ_ij u_j = u_i replaces the subscript j by i; the tensor δ_ij is therefore sometimes called the substitution tensor.
1.5.2 Contraction
Given a two-tensor W_ij, the operation

    W_ii = Σ_{i=1}^{3} W_ii = W_11 + W_22 + W_33

is called a contraction. It produces a scalar. The invariance of this scalar under a frame rotation is seen by noting that

    W′_ii = A_ik A_il W_kl = δ_kl W_kl = W_kk.

This scalar is also called the trace of W, written as

    tr W = W_ii.   (1.17)
It is one of the invariants of W (i.e., unchanged in a frame rotation). If the trace of W is zero, then W is said to be traceless. In general, given an n-order tensor, contracting any two subscripts produces a tensor of order (n − 2).
1.5.3 Transpose
Given a two-tensor W = [W_ij], the transpose operation swaps the two indices:

    W^T = (W_ij e_i e_j)^T = W_ij e_j e_i,   [W^T]_ij = W_ji.   (1.18)
1.5.4 Products of Two Second-Order Tensors
Given two second-order tensors, U and V,

    U = U_ij e_i e_j,   V = V_ij e_i e_j,

one can form different products from them, and it is helpful to refer to the dyadic notation here.

The tensor product UV is a 4th-order tensor, with components U_ij V_kl,

    UV = U_ij V_kl e_i e_j e_k e_l.   (1.19)
The single dot product U · V is a 2nd-order tensor, sometimes written without the dot (the dot is the contraction operator),

    U · V = UV = (U_ij e_i e_j) · (V_kl e_k e_l) = U_ij e_i δ_jk V_kl e_l = U_ij V_jl e_i e_l,   (1.20)

with components U_ik V_kl, just like multiplying the two matrices [U] and [V]. This single dot product induces a contraction of a pair of subscripts (j and k) in U_ij V_kl, and acts just like a vector dot product.
The double dot (or scalar, or inner) product produces a scalar,

    U : V = (U_ij e_i e_j) : (V_kl e_k e_l) = (U_ij e_i) δ_jk · (V_kl e_l) = U_ij V_kl δ_jk δ_il = U_ij V_ji.   (1.21)

The dot operates on a pair of base vectors until we run out of dots. The end result is a scalar (remember our summation convention). It can be shown that the scalar product is in fact an inner product.
The norm of a two-tensor is defined from the inner product in the usual manner,

    ‖U‖² = U^T : U = U_ij U_ij = tr(U^T U).   (1.22)

The space of all linear vector functions therefore forms a normed vector space.

One writes U² = UU, U³ = U²U, etc.
A tensor U is invertible if there exists a tensor U⁻¹, called the inverse of U, such that

    UU⁻¹ = U⁻¹U = I.   (1.23)

One can also define the vector cross product between two second-order tensors (and indeed any combination of dot and cross vector products). However, we refrain from listing all possible combinations here.
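The products above map cleanly onto einsum index strings; a sketch with illustrative tensors:

```python
import numpy as np

U = np.arange(9.0).reshape(3, 3)
V = np.arange(9.0, 18.0).reshape(3, 3)

UV4 = np.einsum('ij,kl->ijkl', U, V)   # tensor product UV, 4th order, Eq. (1.19)
dot = np.einsum('ij,jl->il', U, V)     # single dot U.V, Eq. (1.20)
ddot = np.einsum('ij,ji->', U, V)      # double dot U:V = U_ij V_ji, Eq. (1.21)

assert UV4.shape == (3, 3, 3, 3)
assert np.allclose(dot, U @ V)                  # single dot = matrix product
assert np.isclose(ddot, np.trace(U @ V))        # U:V = tr(UV)
# norm from the inner product, Eq. (1.22)
assert np.isclose(np.einsum('ij,ij->', U, U), np.trace(U.T @ U))
```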
1.6 Invariants
1.6.1 Invariant of a Vector
When a quantity is unchanged by a frame rotation, it is said to be invariant. From a vector, a scalar can be formed by taking the scalar product with itself, v_i v_i = v². This is of course the squared magnitude of the vector, and it is the only independent scalar invariant of a vector.
1.6.2 Invariants of a Tensor
From a second-order tensor S, three independent scalar invariants can be formed, by taking the traces of S, S², and S³:

    I = tr S = S_ii,   II = tr S² = S_ij S_ji,   III = tr S³ = S_ij S_jk S_ki.
However, it is customary to use the following invariants:

    I_1 = I,   I_2 = ½(I² − II),   I_3 = (1/6)(I³ − 3·I·II + 2·III) = det S.
It is also possible to form ten invariants between two tensors (Gurtin [29]).
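The relation I_3 = det S can be verified numerically; a sketch with an illustrative symmetric tensor:

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

I = np.trace(S)
II = np.trace(S @ S)
III = np.trace(S @ S @ S)

I1 = I
I2 = 0.5 * (I**2 - II)
I3 = (I**3 - 3 * I * II + 2 * III) / 6.0

assert np.isclose(I3, np.linalg.det(S))   # I3 equals det S
```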
1.7 Decompositions
We now quote some of the well-known results without proof, some are intuitively
obvious, others not.
1.7.1 Eigenvalue and Eigenvector
A scalar λ is an eigenvalue of a two-tensor S if there exists a non-zero vector e, called the eigenvector, satisfying

    Se = λe.   (1.24)

The characteristic space for S corresponding to the eigenvalue λ consists of all vectors in the eigenspace, {v : Sv = λv}. If the dimension of this space is n, then λ is said to have geometric multiplicity n. The spectrum of S is the ordered list {λ_1, λ_2, . . .} of all the eigenvalues of S.

A tensor S is said to be positive definite if it satisfies

    S : vv > 0,   ∀v ≠ 0.   (1.25)
We record the following theorems:

• The eigenvalues of a positive definite tensor are strictly positive.
• The characteristic spaces of a symmetric tensor are mutually orthogonal.
• Spectral decomposition theorem: Let S be a symmetric two-tensor. Then there is a basis consisting entirely of eigenvectors of S. For such a basis, {e_i, i = 1, 2, 3}, the corresponding eigenvalues {λ_i, i = 1, 2, 3} form the entire spectrum of S, and S can be represented by the spectral representation

    S = Σ_{i=1}^{3} λ_i e_i e_i,     when S has three distinct eigenvalues,
    S = λ_1 ee + λ_2 (I − ee),       when S has two distinct eigenvalues,
    S = λI,                          when S has only one eigenvalue.   (1.26)
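The first case of (1.26) can be checked with NumPy's symmetric eigensolver; a sketch with an illustrative tensor:

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])   # symmetric

lam, E = np.linalg.eigh(S)        # eigenvalues, orthonormal eigenvectors (columns)

# rebuild S from its spectral representation, Eq. (1.26)
S_rebuilt = sum(lam[i] * np.outer(E[:, i], E[:, i]) for i in range(3))
assert np.allclose(S_rebuilt, S)
# the eigenvector basis is orthonormal
assert np.allclose(E.T @ E, np.eye(3))
```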
1.7.2 Square Root Theorem
Let S be a symmetric positive definite tensor. Then there is a unique positive definite tensor U such that U² = S. We write

    U = S^{1/2}.
The proof of this follows from the spectral representation of S.
1.7.3 Polar Decomposition Theorem
For any given tensor F, there exist positive definite tensors U and V, and a rotation tensor R, such that

    F = RU = VR.   (1.27)

Each of these representations is unique, and

    U = (F^T F)^{1/2},   V = (FF^T)^{1/2}.   (1.28)

The first representation (RU) is called the right, and the second (VR) the left, polar decomposition.
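Both theorems can be exercised together: build the square root from the spectral representation, then form the polar decomposition (1.27)–(1.28). A sketch, using an illustrative simple-shear deformation gradient:

```python
import numpy as np

def sqrtm_spd(C):
    """Square root of a symmetric positive definite tensor (Sect. 1.7.2),
    built from its spectral representation."""
    lam, E = np.linalg.eigh(C)
    return E @ np.diag(np.sqrt(lam)) @ E.T

F = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])   # illustrative deformation gradient

U = sqrtm_spd(F.T @ F)            # right stretch, Eq. (1.28)
R = F @ np.linalg.inv(U)          # rotation
V = sqrtm_spd(F @ F.T)            # left stretch

assert np.allclose(R @ U, F) and np.allclose(V @ R, F)   # Eq. (1.27)
assert np.allclose(R.T @ R, np.eye(3))                   # R is orthogonal
assert np.isclose(np.linalg.det(R), 1.0)                 # and proper
```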
1.7.4 CayleyHamilton Theorem
The most important theorem is the Cayley–Hamilton theorem: every tensor S satisfies its own characteristic equation

    S³ − I_1 S² + I_2 S − I_3 I = 0,   (1.29)

where I_1 = tr S, I_2 = ½((tr S)² − tr S²), and I_3 = det S are the three scalar invariants of S, and I is the unit tensor in three dimensions.

In two dimensions, this equation reads

    S² − I_1 S + I_2 I = 0,   (1.30)

where I_1 = tr S, I_2 = det S are the two scalar invariants of S, and I is the unit tensor in two dimensions.
The Cayley–Hamilton theorem is used to reduce the number of independent tensorial groups in tensor-valued functions. We record here one possible use of the Cayley–Hamilton theorem in two dimensions. The three-dimensional case is reserved as an exercise.
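Equation (1.29) can be verified numerically for any tensor; a sketch with an illustrative 3 × 3 tensor:

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

I1 = np.trace(S)
I2 = 0.5 * (np.trace(S)**2 - np.trace(S @ S))
I3 = np.linalg.det(S)

# S^3 - I1 S^2 + I2 S - I3 I = 0, Eq. (1.29)
lhs = S @ S @ S - I1 * (S @ S) + I2 * S - I3 * np.eye(3)
assert np.allclose(lhs, 0.0)
```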
Suppose C is a given symmetric positive definite tensor in 2-D,

    [C] = [ C_11 C_12
            C_12 C_22 ],

and its square root U = C^{1/2} is desired. From the characteristic equation (1.30) for U,

    U = I_1(U)⁻¹ [C + I_2(U) I],

so if we can express the invariants of U in terms of the invariants of C, we are done. Now, if the eigenvalues of U are λ_1 and λ_2, then

    I_1(U) = λ_1 + λ_2,   I_2(U) = λ_1 λ_2,
    I_1(C) = λ_1² + λ_2²,   I_2(C) = λ_1² λ_2².

Thus

    I_2(U) = √(I_2(C)),   I_1²(U) = I_1(C) + 2√(I_2(C)).

Therefore

    U = (C + √(I_2(C)) I) / √(I_1(C) + 2√(I_2(C))).
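This closed-form square root can be checked against the shear deformation of Problem 1.9; a sketch (the shear value is illustrative):

```python
import numpy as np

gamma = 1.5   # illustrative shear
C = np.array([[1 + gamma**2, gamma],
              [gamma,        1.0 ]])   # Cauchy-Green tensor of Problem 1.9

I1C = np.trace(C)
I2C = np.linalg.det(C)
U = (C + np.sqrt(I2C) * np.eye(2)) / np.sqrt(I1C + 2 * np.sqrt(I2C))

assert np.allclose(U @ U, C)   # U is indeed the square root of C
assert np.allclose(U, U.T)     # and it is symmetric
```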
1.8 Derivative Operations
Suppose φ(u) is a scalar-valued function of a vector u. The derivative of φ(u) with respect to u in the direction v is defined as the linear operator Dφ(u)[v]:

    φ(u + εv) = φ(u) + ε Dφ(u)[v] + HOT,

where HOT are terms of higher order, which vanish faster than ε. Also, the square brackets enclosing v are used to emphasize the linearity of the derivative in v. An operational definition for the derivative of φ(u) in the direction v is therefore

    Dφ(u)[v] = d/dε [φ(u + εv)]|_{ε=0}.   (1.31)

This definition can be extended verbatim to derivatives of a tensor-valued (of any order) function of a tensor (of any order). The argument v is a part of the definition. We illustrate this with a few examples.
Example 1  Consider the scalar-valued function of a vector, φ(u) = u² = u · u. Its derivative in the direction of v is

    Dφ(u)[v] = d/dε [φ(u + εv)]|_{ε=0} = d/dε [u² + 2εu · v + ε²v²]|_{ε=0} = 2u · v.
Example 2  Consider the tensor-valued function of a tensor, G(A) = A² = AA. Its derivative in the direction of B is

    DG(A)[B] = d/dε [G(A + εB)]|_{ε=0} = d/dε [A² + ε(AB + BA) + O(ε²)]|_{ε=0} = AB + BA.
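The result of Example 2 can be checked against a finite-difference approximation of the operational definition (1.31); a sketch with illustrative tensors:

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)
B = np.eye(3) + 0.1 * np.ones((3, 3))

# central finite difference of G(A + eps*B) at eps = 0
eps = 1e-6
G = lambda X: X @ X
numeric = (G(A + eps * B) - G(A - eps * B)) / (2 * eps)

exact = A @ B + B @ A   # DG(A)[B] = AB + BA, from Example 2
assert np.allclose(numeric, exact, atol=1e-6)
```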
1.8.1 Derivative of det(A)
Consider the scalar-valued function of a tensor, φ(A) = det A. Its derivative in the direction of B can be calculated using

    det(A + εB) = det[εA(A⁻¹B + ε⁻¹I)]
                = ε³ det A det(A⁻¹B + ε⁻¹I)
                = ε³ det A [ε⁻³ + ε⁻² I_1(A⁻¹B) + ε⁻¹ I_2(A⁻¹B) + I_3(A⁻¹B)]
                = det A [1 + ε I_1(A⁻¹B) + O(ε²)].

Thus

    Dφ(A)[B] = d/dε [φ(A + εB)]|_{ε=0} = det A tr(A⁻¹B).
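This derivative formula, too, can be checked by finite differences; a sketch with illustrative tensors:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 4.0]])   # invertible, illustrative
B = np.arange(9.0).reshape(3, 3)

eps = 1e-7
numeric = (np.linalg.det(A + eps * B) - np.linalg.det(A - eps * B)) / (2 * eps)
exact = np.linalg.det(A) * np.trace(np.linalg.inv(A) @ B)   # det A tr(A^-1 B)
assert np.isclose(numeric, exact, rtol=1e-5)
```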
1.8.2 Derivative of tr(A)
Consider the first invariant I(A) = tr A. Its derivative in the direction of B is

    DI(A)[B] = d/dε [I(A + εB)]|_{ε=0} = d/dε [tr A + ε tr B]|_{ε=0} = tr B = I : B.
1.8.3 Derivative of tr(A²)

Consider the second invariant II(A) = tr A². Its derivative in the direction of B is

    DII(A)[B] = d/dε [II(A + εB)]|_{ε=0} = d/dε [A : A + ε(A : B + B : A) + O(ε²)]|_{ε=0} = 2A : B.
1.9 Gradient of a Field
1.9.1 Field
A function of the position vector x is called a field. One has a scalar field, for example the temperature field T(x); a vector field, for example the velocity field u(x); or a tensor field, for example the stress field S(x). Higher-order tensor fields are rarely encountered; one example is the many-point correlation fields. Conservation equations in continuum mechanics involve derivatives of different fields (derivatives with respect to position vectors are called gradients), and it is absolutely essential to know how to calculate the gradients of fields in different coordinate systems. We also find it more convenient to employ the dyadic notation at this point.
1.9.2 Cartesian Frame
We consider first a scalar field, φ(x). The Taylor expansion of this about point x is

    φ(x + εr) = φ(x) + εr_j ∂φ/∂x_j (x) + O(ε²).

Thus the gradient of φ(x) at point x, now written as ∇φ, defined in (1.31), is given by

    ∇φ[r] = r · ∂φ/∂x.   (1.32)

This remains unchanged for a vector or a tensor field.
Gradient Operator  This leads us to define the gradient operator as

    ∇ = e_j ∂/∂x_j = e_1 ∂/∂x_1 + e_2 ∂/∂x_2 + e_3 ∂/∂x_3.   (1.33)

This operator can be treated as a vector, operating on its arguments. By itself, it has no meaning; it must operate on a scalar, a vector or a tensor.
Gradient of a Scalar  For example, the gradient of a scalar is

    ∇φ = e_j ∂φ/∂x_j = e_1 ∂φ/∂x_1 + e_2 ∂φ/∂x_2 + e_3 ∂φ/∂x_3.   (1.34)
Gradient of a Vector  The gradient of a vector can be likewise calculated

    ∇u = (e_i ∂/∂x_i)(u_j e_j) = e_i e_j ∂u_j/∂x_i.   (1.35)

In matrix notation,

    [∇u] = [ ∂u_1/∂x_1  ∂u_2/∂x_1  ∂u_3/∂x_1
             ∂u_1/∂x_2  ∂u_2/∂x_2  ∂u_3/∂x_2
             ∂u_1/∂x_3  ∂u_2/∂x_3  ∂u_3/∂x_3 ].

The component (∇u)_ij is ∂u_j/∂x_i; some books define this differently.
Transpose of a Gradient  The transpose of a gradient of a vector is therefore

    ∇u^T = e_i e_j ∂u_i/∂x_j.   (1.36)

In matrix notation,

    [∇u]^T = [ ∂u_1/∂x_1  ∂u_1/∂x_2  ∂u_1/∂x_3
               ∂u_2/∂x_1  ∂u_2/∂x_2  ∂u_2/∂x_3
               ∂u_3/∂x_1  ∂u_3/∂x_2  ∂u_3/∂x_3 ].
Divergence of a Vector  The divergence of a vector is a scalar defined by

    ∇ · u = (e_i ∂/∂x_i) · (u_j e_j) = e_i · e_j ∂u_j/∂x_i = δ_ij ∂u_j/∂x_i,
    ∇ · u = ∂u_i/∂x_i = ∂u_1/∂x_1 + ∂u_2/∂x_2 + ∂u_3/∂x_3.   (1.37)

The divergence of a vector is also an invariant, being the trace of a tensor.
Curl of a Vector  The curl of a vector is a vector defined by

    ∇ × u = (e_i ∂/∂x_i) × (u_j e_j) = e_i × e_j ∂u_j/∂x_i = ε_kij e_k ∂u_j/∂x_i
          = e_1 (∂u_3/∂x_2 − ∂u_2/∂x_3) + e_2 (∂u_1/∂x_3 − ∂u_3/∂x_1) + e_3 (∂u_2/∂x_1 − ∂u_1/∂x_2).   (1.38)

The curl of a vector is sometimes denoted by rot.
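The index form (∇ × u)_k = ε_kij ∂u_j/∂x_i is a single einsum once the gradient matrix is known; a sketch using the rigid rotation u = (−x_2, x_1, 0), whose gradient is constant:

```python
import numpy as np

# Levi-Civita symbol, Eq. (1.6)
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# rigid rotation u = (-x2, x1, 0): (grad u)_ij = d u_j / d x_i
gradu = np.array([[ 0.0, 1.0, 0.0],
                  [-1.0, 0.0, 0.0],
                  [ 0.0, 0.0, 0.0]])

curl = np.einsum('kij,ij->k', eps, gradu)   # (curl u)_k = eps_kij d u_j / d x_i
assert np.allclose(curl, [0.0, 0.0, 2.0])   # curl of (-y, x, 0) is 2 e3
# the divergence is the trace of the gradient, Eq. (1.37)
assert np.isclose(np.trace(gradu), 0.0)
```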
Fig. 1.5 Cylindrical and spherical frames of reference
Divergence of a Tensor  The divergence of a tensor is a vector field defined by

    ∇ · S = (e_k ∂/∂x_k) · (S_ij e_i e_j) = e_j ∂S_ij/∂x_i.   (1.39)
1.9.3 Non-Cartesian Frames
All the above definitions for gradient and divergence of a tensor remain valid in a non-Cartesian frame, provided that the derivative operation is also applied to the basis vectors. We illustrate this process in two important frames, cylindrical and spherical coordinate systems (Fig. 1.5); for other systems, consult Bird et al. [4].
Cylindrical Coordinates  In a cylindrical coordinate system (Fig. 1.5, left), points are located by giving them values of {r, θ, z}, which are related to {x = x_1, y = x_2, z = x_3} by

    x = r cos θ,   y = r sin θ,   z = z,
    r = √(x² + y²),   θ = tan⁻¹(y/x),   z = z.

The basis vectors in this frame are related to the Cartesian ones by

    e_r = cos θ e_x + sin θ e_y,   e_x = cos θ e_r − sin θ e_θ,
    e_θ = −sin θ e_x + cos θ e_y,   e_y = sin θ e_r + cos θ e_θ.
Physical Components  In this system, a vector u, or a tensor S, are represented by, respectively,

    u = u_r e_r + u_θ e_θ + u_z e_z,
    S = S_rr e_r e_r + S_rθ e_r e_θ + S_rz e_r e_z
      + S_θr e_θ e_r + S_θθ e_θ e_θ + S_θz e_θ e_z
      + S_zr e_z e_r + S_zθ e_z e_θ + S_zz e_z e_z.
Gradient Operator  The components expressed this way are called physical components. The gradient operator is converted from one system to another by the chain rule,

    ∇ = e_x ∂/∂x + e_y ∂/∂y + e_z ∂/∂z
      = (cos θ e_r − sin θ e_θ)(cos θ ∂/∂r − (sin θ/r) ∂/∂θ)
      + (sin θ e_r + cos θ e_θ)(sin θ ∂/∂r + (cos θ/r) ∂/∂θ) + e_z ∂/∂z
      = e_r ∂/∂r + e_θ (1/r) ∂/∂θ + e_z ∂/∂z.   (1.40)
When carrying out derivative operations, remember that

    ∂e_r/∂r = 0,   ∂e_θ/∂r = 0,   ∂e_z/∂r = 0,
    ∂e_r/∂θ = e_θ,   ∂e_θ/∂θ = −e_r,   ∂e_z/∂θ = 0,
    ∂e_r/∂z = 0,   ∂e_θ/∂z = 0,   ∂e_z/∂z = 0.   (1.41)
Gradient of a Vector  The gradient of any vector is

    ∇u = (e_r ∂/∂r + e_θ (1/r) ∂/∂θ + e_z ∂/∂z)(u_r e_r + u_θ e_θ + u_z e_z),

and carrying out the derivatives on the basis vectors with (1.41),

    ∇u = e_r e_r ∂u_r/∂r + e_r e_θ ∂u_θ/∂r + e_r e_z ∂u_z/∂r
       + e_θ e_r ((1/r) ∂u_r/∂θ − u_θ/r) + e_θ e_θ ((1/r) ∂u_θ/∂θ + u_r/r) + e_θ e_z (1/r) ∂u_z/∂θ
       + e_z e_r ∂u_r/∂z + e_z e_θ ∂u_θ/∂z + e_z e_z ∂u_z/∂z.   (1.42)
Divergence of a Vector  The divergence of a vector is obtained by a contraction of the above equation:

    ∇ · u = ∂u_r/∂r + (1/r) ∂u_θ/∂θ + u_r/r + ∂u_z/∂z.   (1.43)
1.9.4 Spherical Coordinates
In a spherical coordinate system (Fig. 1.5, right), points are located by giving them values of {r, θ, φ}, which are related to {x = x_1, y = x_2, z = x_3} by

    x = r sin θ cos φ,   y = r sin θ sin φ,   z = r cos θ,
    r = √(x² + y² + z²),   θ = tan⁻¹(√(x² + y²)/z),   φ = tan⁻¹(y/x).
The basis vectors are related by

    e_r = e_1 sin θ cos φ + e_2 sin θ sin φ + e_3 cos θ,
    e_θ = e_1 cos θ cos φ + e_2 cos θ sin φ − e_3 sin θ,
    e_φ = −e_1 sin φ + e_2 cos φ,

and

    e_1 = e_r sin θ cos φ + e_θ cos θ cos φ − e_φ sin φ,
    e_2 = e_r sin θ sin φ + e_θ cos θ sin φ + e_φ cos φ,
    e_3 = e_r cos θ − e_θ sin θ.
Gradient Operator  Using the chain rule, it can be shown that the gradient operator in spherical coordinates is

    ∇ = e_r ∂/∂r + e_θ (1/r) ∂/∂θ + e_φ (1/(r sin θ)) ∂/∂φ.   (1.44)
We list below a few results of interest.
Gradient of a Scalar  The gradient of a scalar ψ is given by

    ∇ψ = e_r ∂ψ/∂r + e_θ (1/r) ∂ψ/∂θ + e_φ (1/(r sin θ)) ∂ψ/∂φ.   (1.45)
Gradient of a Vector  The gradient of a vector is given by

    ∇u = e_r e_r ∂u_r/∂r + e_r e_θ ∂u_θ/∂r + e_r e_φ ∂u_φ/∂r
       + e_θ e_r ((1/r) ∂u_r/∂θ − u_θ/r) + e_θ e_θ ((1/r) ∂u_θ/∂θ + u_r/r) + e_θ e_φ (1/r) ∂u_φ/∂θ
       + e_φ e_r ((1/(r sin θ)) ∂u_r/∂φ − u_φ/r) + e_φ e_θ ((1/(r sin θ)) ∂u_θ/∂φ − (u_φ/r) cot θ)
       + e_φ e_φ ((1/(r sin θ)) ∂u_φ/∂φ + u_r/r + (u_θ/r) cot θ).   (1.46)
Fig. 1.6 Carl Friedrich Gauss (1777–1855) was a Professor of Mathematics at the University of Göttingen. He made several contributions to Number Theory, Geodesy, Statistics, Geometry, and Physics. His motto was "few, but ripe" (Pauca, Sed Matura), and "nothing further remains to be done." He did not publish several important papers because they did not satisfy these requirements
Divergence of a Vector  The divergence of a vector is given by

    ∇ · u = (1/r²) ∂/∂r (r² u_r) + (1/(r sin θ)) ∂/∂θ (u_θ sin θ) + (1/(r sin θ)) ∂u_φ/∂φ.   (1.47)
Divergence of a Tensor  The divergence of a tensor is given by

    ∇ · S = e_r [ (1/r²) ∂/∂r (r² S_rr) + (1/(r sin θ)) ∂/∂θ (S_θr sin θ) + (1/(r sin θ)) ∂S_φr/∂φ − (S_θθ + S_φφ)/r ]
          + e_θ [ (1/r³) ∂/∂r (r³ S_rθ) + (1/(r sin θ)) ∂/∂θ (S_θθ sin θ) + (1/(r sin θ)) ∂S_φθ/∂φ + (S_θr − S_rθ − S_φφ cot θ)/r ]
          + e_φ [ (1/r³) ∂/∂r (r³ S_rφ) + (1/(r sin θ)) ∂/∂θ (S_θφ sin θ) + (1/(r sin θ)) ∂S_φφ/∂φ + (S_φr − S_rφ + S_θφ cot θ)/r ].   (1.48)
1.10 Integral Theorems
1.10.1 Gauss Divergence Theorem
Various volume integrals can be converted to surface integrals by the following theorems, due to Gauss (Fig. 1.6):

    ∫_V ∇φ dV = ∫_S φ n dS,   (1.49)
    ∫_V ∇ · u dV = ∫_S n · u dS,   (1.50)
    ∫_V ∇ · S dV = ∫_S n · S dS.   (1.51)
Fig. 1.7 A region enclosed
by a closed surface with
outward unit vector eld
The proofs may be found in Kellogg [38]. In these, V is a bounded regular region, with bounding surface S and outward unit vector n (Fig. 1.7); φ, u, and S are differentiable scalar, vector, and tensor fields with continuous gradients. Indeed the indicial version of (1.50) is valid even if u_i are merely three scalar fields of the required smoothness (rather than three components of a vector field).
1.10.2 Stokes Curl Theorem
Various surface integrals can be converted into contour integrals using the following theorems:

    ∫_S n · (∇ × u) dS = ∫_C t · u dC,   (1.52)
    ∫_S n · (∇ × S) dS = ∫_C t · S dC.   (1.53)

In these, t is a tangential unit vector along the contour C. The direction of integration is determined by the right-hand rule: thumb pointing in the direction of n, fingers curling in the direction of C.
1.10.3 Leibniz Formula
If φ is a field (a scalar, a vector, or a tensor) defined on a region V(t), which is changing in time, with bounding surface S(t), also changing in time with velocity u_S, then (Leibniz formula, Fig. 1.8)

    d/dt ∫_V φ dV = ∫_V ∂φ/∂t dV + ∫_S φ u_S · n dS.   (1.54)
Fig. 1.8 Gottfried W. Leibniz (1646–1716) was a German philosopher and mathematician who, independently of Newton, laid the foundation for integral and differential calculus in 1675
1.11 Problems
Problem 1.1 The components of vectors u, v, and w are given by u_i, v_i, w_i. Verify that

    u · v = u_i v_i,
    u × v = ε_ijk e_i u_j v_k,
    (u × v) · w = ε_ijk u_i v_j w_k,
    (u × v) · w = u · (v × w),
    (u × v) × w = (u · w)v − (v · w)u,
    (u × v)² = u²v² − (u · v)²,

where u² = |u|² and v² = |v|².
Problem 1.2 Let A be a 3 × 3 matrix with entries A_ij,

    [A] = [ A_11 A_12 A_13
            A_21 A_22 A_23
            A_31 A_32 A_33 ].

Verify that

    det[A] = ε_ijk A_1i A_2j A_3k = ε_ijk A_i1 A_j2 A_k3,
    ε_lmn det[A] = ε_ijk A_il A_jm A_kn = ε_ijk A_li A_mj A_nk,
    det[A] = (1/6) ε_ijk ε_lmn A_il A_jm A_kn.
Problem 1.3 Verify that

    ε_ijk ε_imn = δ_jm δ_kn − δ_jn δ_km.
Given two 3 × 3 matrices of components

    [A] = [ A_11 A_12 A_13
            A_21 A_22 A_23
            A_31 A_32 A_33 ],   [B] = [ B_11 B_12 B_13
                                        B_21 B_22 B_23
                                        B_31 B_32 B_33 ],

verify that if [C] = [A][B], then the components of C are C_ij = A_ik B_kj. Thus if [D] = [A]^T [B], then D_ij = A_ki B_kj.
Problem 1.4 Show that, if [A_ij] is a frame rotation matrix,

    det[A_ij] = (e′_1 × e′_2) · e′_3 = 1,
    [A]^T [A] = [A][A]^T = [I],   [A]⁻¹ = [A]^T,   det[A] = 1.
Problem 1.5 Verify that

    ε_ijk u_i v_j w_k = det[ u_1 u_2 u_3
                             v_1 v_2 v_3
                             w_1 w_2 w_3 ].

Consider a second-order tensor W_ij and the vector u_i = ε_ijk W_jk. Show that if W is symmetric, u is zero, and if W is anti-symmetric the components of u are twice those of W in magnitude. This vector is said to be the axial vector of W.

Hence, show that the axial vector associated with the vorticity tensor of (1.14) is ∇ × u.
Problem 1.6 If D, S and W are second-order tensors, D symmetric and W anti-symmetric, show that

    D : S = D : S^T = D : ½(S + S^T),
    W : S = −W : S^T = W : ½(S − S^T),
    D : W = 0.

Further, show that

    if T : S = 0 ∀S, then T = 0;
    if T : S = 0 ∀ symmetric S, then T is anti-symmetric;
    if T : S = 0 ∀ anti-symmetric S, then T is symmetric.
Problem 1.7 Show that Q is orthogonal if and only if H = Q − I satisfies

    H + H^T + HH^T = 0,   HH^T = H^T H.
Problem 1.8 Show that, if S is a second-order tensor, then I = tr S, II = tr S², III = det S are indeed invariants. In addition, show that

    det(S − λI) = −λ³ + I_1 λ² − I_2 λ + I_3.

If λ is an eigenvalue of S then det(S − λI) = 0. This is said to be the characteristic equation for S.
Problem 1.9 Apply the result above to find the square root of the Cauchy–Green tensor in a two-dimensional shear deformation

    [C] = [ 1 + γ²  γ
            γ       1 ].

Investigate the corresponding formula for the square root of a symmetric positive definite tensor S in three dimensions.
Problem 1.10 Write down all the components of the strain rate tensor and the vorticity tensor in a Cartesian frame.

Problem 1.11 Given that r = x_i e_i is the position vector, a is a constant vector, and f(r) is a function of r = |r|, show that

    ∇ · r = 3,   ∇ × r = 0,   ∇(a · r) = a,   ∇f = (1/r)(df/dr) r.
Problem 1.12 Show that the divergence of a second-order tensor S in cylindrical coordinates is given by

    ∇ · S = e_r [ ∂S_rr/∂r + (S_rr − S_θθ)/r + (1/r) ∂S_θr/∂θ + ∂S_zr/∂z ]
          + e_θ [ ∂S_rθ/∂r + 2S_rθ/r + (1/r) ∂S_θθ/∂θ + ∂S_zθ/∂z + (S_θr − S_rθ)/r ]
          + e_z [ ∂S_rz/∂r + S_rz/r + (1/r) ∂S_θz/∂θ + ∂S_zz/∂z ].   (1.55)
Problem 1.13 Show that, in cylindrical coordinates, the Laplacian of a vector u is given by

    ∇²u = e_r [ ∂/∂r ((1/r) ∂/∂r (r u_r)) + (1/r²) ∂²u_r/∂θ² + ∂²u_r/∂z² − (2/r²) ∂u_θ/∂θ ]
        + e_θ [ ∂/∂r ((1/r) ∂/∂r (r u_θ)) + (1/r²) ∂²u_θ/∂θ² + ∂²u_θ/∂z² + (2/r²) ∂u_r/∂θ ]
        + e_z [ (1/r) ∂/∂r (r ∂u_z/∂r) + (1/r²) ∂²u_z/∂θ² + ∂²u_z/∂z² ].   (1.56)
Problem 1.14 Show that, in cylindrical coordinates,

    u · ∇u = e_r [ u_r ∂u_r/∂r + (u_θ/r) ∂u_r/∂θ + u_z ∂u_r/∂z − u_θ²/r ]
           + e_θ [ u_r ∂u_θ/∂r + (u_θ/r) ∂u_θ/∂θ + u_z ∂u_θ/∂z + u_θ u_r/r ]
           + e_z [ u_r ∂u_z/∂r + (u_θ/r) ∂u_z/∂θ + u_z ∂u_z/∂z ].   (1.57)
Problem 1.15 The stress tensor in a material satisfies ∇ · S = 0. Show that the volume-average stress in a region V occupied by the material is

    ⟨S⟩ = (1/2V) ∫_S (xt + tx) dS,   (1.58)

where t = n · S is the surface traction. The quantity on the left side of (1.58) is called the stresslet (Batchelor [3]).
Problem 1.16 Calculate the following integrals on the surface of the unit sphere:

    ⟨nn⟩ = (1/S) ∫_S nn dS,   (1.59)
    ⟨nnnn⟩ = (1/S) ∫_S nnnn dS.   (1.60)

These are the averages of various moments of a uniformly distributed unit vector on a sphere surface.