
ME 478 FINITE ELEMENT METHOD

Chapter 2. Review of Matrix Algebra


Matrices and Indicial Notation

$$a = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = a_{ij}$$

where $i$ is the row index and $j$ is the column index.
Summation Convention
An index repeated twice implies summation over that index (Einstein's summation convention).
(An index repeated three times has no meaning.)

Ex: $a_{ii} = \sum_{i=1}^{3} a_{ii}$, $\qquad a_{ij} b_{jk} = \sum_{j=1}^{3} a_{ij} b_{jk}$
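A minimal NumPy sketch (not part of the original notes) of the summation convention using `np.einsum`; the matrices `a` and `b` below are arbitrary examples.

```python
import numpy as np

a = np.arange(1.0, 10.0).reshape(3, 3)   # arbitrary 3x3 example matrix
b = np.arange(2.0, 11.0).reshape(3, 3)   # another arbitrary 3x3 matrix

# Repeated index i: a_ii = sum over i (the trace of a)
assert np.isclose(np.einsum("ii->", a), np.trace(a))

# Repeated index j: a_ij b_jk = sum over j (ordinary matrix product)
assert np.allclose(np.einsum("ij,jk->ik", a, b), a @ b)
```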
Free indices
$$\hat{x}_i = a_{im} x_m$$

This implies three equations ($i$ is called a free index):

$$\hat{x}_1 = a_{11} x_1 + a_{12} x_2 + a_{13} x_3$$
$$\hat{x}_2 = a_{21} x_1 + a_{22} x_2 + a_{23} x_3$$
$$\hat{x}_3 = a_{31} x_1 + a_{32} x_2 + a_{33} x_3$$
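As a quick illustration (again an added sketch, not from the notes), the free-index expression $\hat{x}_i = a_{im} x_m$ is simply the matrix-vector product:

```python
import numpy as np

a = np.arange(1.0, 10.0).reshape(3, 3)   # arbitrary transformation matrix
x = np.array([1.0, 2.0, 3.0])            # arbitrary vector

# Free index i, summed index m: x_hat_i = a_im x_m  (three equations at once)
x_hat = np.einsum("im,m->i", a, x)
assert np.allclose(x_hat, a @ x)
```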
Symmetric Matrices
$a_{ij} = a_{ji}$

Antisymmetric Matrices

$a_{ij} = -a_{ji}, \quad i \neq j$
Identity Matrix
1 0 0
I =  0 1 0 
 0 0 1 

or we can use Kronecker's delta:

$$\delta_{ij} = \begin{cases} 1 & \text{for } i = j \\ 0 & \text{for } i \neq j \end{cases}$$
Matrix Multiplication

$$c = ab = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{bmatrix}, \qquad c_{ik} = \sum_{j=1}^{3} a_{ij} b_{jk} = a_{ij} b_{jk}$$

$$= \begin{bmatrix} a_{11}b_{11} + a_{12}b_{21} + a_{13}b_{31} & a_{11}b_{12} + a_{12}b_{22} + a_{13}b_{32} & a_{11}b_{13} + a_{12}b_{23} + a_{13}b_{33} \\ a_{21}b_{11} + a_{22}b_{21} + a_{23}b_{31} & a_{21}b_{12} + a_{22}b_{22} + a_{23}b_{32} & a_{21}b_{13} + a_{22}b_{23} + a_{23}b_{33} \\ a_{31}b_{11} + a_{32}b_{21} + a_{33}b_{31} & a_{31}b_{12} + a_{32}b_{22} + a_{33}b_{32} & a_{31}b_{13} + a_{32}b_{23} + a_{33}b_{33} \end{bmatrix}$$
a11 a12 a13 1 0 0 a11 0 0  3
c = aI = a21 a22 a23 0 1 0 =  0 a22 0  = ∑aijδ jk = aijδ jk = cik
a31 a32 a33 0 0 1  0 0 a33  j =1
Transpose of a Matrix
$$a^T = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}^T = \begin{bmatrix} a_{11} & a_{21} & a_{31} \\ a_{12} & a_{22} & a_{32} \\ a_{13} & a_{23} & a_{33} \end{bmatrix}$$

(swap rows and columns)

$$x^T = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}^T = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}$$
Some interesting things
If a is symmetric
$$a^T = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & a_{33} \end{bmatrix}^T = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & a_{33} \end{bmatrix} = a$$

If a is antisymmetric
$$\frac{a + a^T}{2} = \begin{bmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{bmatrix}$$

i.e., only the diagonal entries survive in the symmetric part.
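A short added sketch (the matrix below is a made-up example) confirming that only the diagonal of such a matrix survives in $(a + a^T)/2$:

```python
import numpy as np

# Matrix whose off-diagonal part is antisymmetric (a_ij = -a_ji for i != j),
# with an arbitrary diagonal, matching the definition used in these notes.
a = np.array([[ 1.0,  2.0, -3.0],
              [-2.0,  4.0,  5.0],
              [ 3.0, -5.0,  6.0]])

sym_part = (a + a.T) / 2
# Only the diagonal entries survive in the symmetric part.
assert np.allclose(sym_part, np.diag(np.diag(a)))
```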
Orthogonal Matrix
The columns form an orthogonal set of unit vectors.

Given: $a = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$
a is an orthogonal matrix if the columns

$$\begin{bmatrix} a_{11} \\ a_{21} \\ a_{31} \end{bmatrix}, \quad \begin{bmatrix} a_{12} \\ a_{22} \\ a_{32} \end{bmatrix}, \quad \begin{bmatrix} a_{13} \\ a_{23} \\ a_{33} \end{bmatrix}$$

are mutually orthogonal unit vectors.
Example:
1 / 2 0 1/ 2 
 
a= 0 1 0 
1 / 2 0 1/ 2 
 
a is an orthogonal matrix since
1 / 2  0 1 / 2  1 / 2  0 1 / 2 
           
 0  • 1 = 0  0 •
  0 =0 1 •
   0 =0
1 / 2  0 1 / 2  1 / 2  0 1 / 2 
           
An orthogonal matrix also has the property
$$a^{-1} = a^T$$
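A brief NumPy sketch (not from the notes) verifying both properties for the example above:

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)
a = np.array([[  s, 0.0,  -s],
              [0.0, 1.0, 0.0],
              [  s, 0.0,   s]])   # the example matrix above

# Orthonormal columns imply a^T a = I and a^{-1} = a^T
assert np.allclose(a.T @ a, np.eye(3))
assert np.allclose(np.linalg.inv(a), a.T)
```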

Determinant of a square matrix
The determinant of a square matrix is a scalar quantity; it is a multilinear function of the rows (or columns) of the matrix and measures the volume scaling produced by the corresponding linear transformation.
Determinant of matrix products
$$\det(a \cdot b) = \det a \, \det b$$
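A one-line numerical check of the product rule (added sketch; `a` and `b` are arbitrary random matrices):

```python
import numpy as np

a = np.random.default_rng(3).random((3, 3))  # arbitrary square matrices
b = np.random.default_rng(4).random((3, 3))

# det(ab) = det(a) det(b)
assert np.isclose(np.linalg.det(a @ b), np.linalg.det(a) * np.linalg.det(b))
```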

Eigenvalues and Eigenvectors (in relation to stress tensors)

There exists a nonzero vector x such that the linear transformation σ · x is a multiple of x:

$$\sigma \cdot x = \lambda x$$

where the eigenvalues $\lambda_i$ define the three principal values of stress and the eigenvectors $x_i$ span the triad of principal directions.

This is equivalent to stating:


$$(\sigma - \lambda I)x = 0$$

For a nontrivial solution to exist,

$$\det(\sigma - \lambda I) = 0$$

which gives the characteristic polynomial

$$p(\lambda) = \det(\sigma - \lambda I)$$
Note that all eigenvalues are real as long as $\sigma = \sigma^T$ is symmetric, which is the case for nonpolar materials because of the conjugate shear stresses $\sigma_{ij} = \sigma_{ji}$.
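A minimal sketch (not part of the notes) of computing principal stresses and directions with NumPy's `eigh` for symmetric matrices; the stress values below are an arbitrary example.

```python
import numpy as np

# Arbitrary symmetric (Cauchy) stress tensor; values are illustrative only
sigma = np.array([[ 50.0,  30.0,   0.0],
                  [ 30.0, -20.0,   0.0],
                  [  0.0,   0.0,  10.0]])

# eigh handles symmetric matrices and always returns real eigenvalues
principal_stresses, principal_dirs = np.linalg.eigh(sigma)

# The columns of principal_dirs are unit eigenvectors (principal directions);
# together they form an orthogonal matrix.
assert np.allclose(principal_dirs.T @ principal_dirs, np.eye(3))

# Verify sigma . x = lambda x for each principal pair
for lam, x in zip(principal_stresses, principal_dirs.T):
    assert np.allclose(sigma @ x, lam * x)
```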
