Cheat Sheet (Regular Font) PDF
Vector spaces
Subspace: 0 is in W; if u and v are in W, then u + v is in W,
and cu is in W.
Nul(A): Solutions of Ax = 0. Row-reduce A.
Row(A): Space spanned by the rows of A: Row-reduce
A and choose the rows that contain the pivots.
Col(A): Space spanned by columns of A: Row-reduce
A and choose the columns of A that contain the pivots
Rank(A): = Dim(Col(A)) = number of pivots
Rank-Nullity theorem:
Rank(A) + dim(Nul(A)) = n, where A is m × n
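A quick sanity check of the rank-nullity count (a sympy sketch; the matrix is made up for illustration):

```python
import sympy as sp

# Made-up 3x4 matrix for illustration (row 3 = row 1 + row 2).
A = sp.Matrix([[1, 2, 0, 1],
               [0, 0, 1, 3],
               [1, 2, 1, 4]])

rank = A.rank()               # number of pivots = dim Col(A) = dim Row(A)
nullity = len(A.nullspace())  # dim Nul(A)
n = A.cols                    # number of columns

print(rank, nullity, n)  # 2 2 4, and rank + nullity == n
```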
Linear transformation: T (u + v) = T (u) + T (v),
T (cu) = cT (u), where c is a number.
T is one-to-one if T(u) = 0 ⇒ u = 0
T is onto if Col(T) = R^m.
Linear independence:
a1v1 + a2v2 + ... + anvn = 0 ⇒ a1 = a2 = ... = an = 0.
To show lin. ind., form the matrix A of the vectors, and
show that Nul(A) = {0}
Linear dependence: a1v1 + a2v2 + ... + anvn = 0
for some a1, a2, ..., an, not all zero.
Span: Set of linear combinations of v1, ..., vn
Basis B for V : A linearly independent set such that
Span (B) = V
To show something is a basis, show it is linearly
independent and spans.
To find a basis from a collection of vectors, form the
matrix A of the vectors, and find Col(A).
To find a basis for a vector space, take any element of
that v.s. and express it as a linear combination of
simpler vectors. Then show those vectors form a
basis.
Dimension: Number of elements in a basis.
To find dim, find a basis and count its elements.
Theorem: If V has a basis of n vectors, then every basis
of V must have n vectors.
Basis theorem: If V is an n-dimensional vector space, then any
linearly independent set with n elements is a basis, and any
set of n elements which spans V is a basis.
Matrix of a lin. transf. T with respect to bases B and C:
For every vector v in B, evaluate T(v), express T(v) in
C-coordinates, and use those coordinate vectors as the
columns of the matrix.
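The recipe above can be sketched numerically (numpy; the transformation T and both bases are made-up examples):

```python
import numpy as np

# Made-up example: T(x, y) = (x + y, x - y), input basis B, output basis C
# (standard). Columns of M are the C-coordinates of T applied to each B vector.
def T(v):
    x, y = v
    return np.array([x + y, x - y], dtype=float)

B = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
C = np.column_stack([[1.0, 0.0], [0.0, 1.0]])  # basis vectors as columns

# [T(v)]_C solves C @ coords = T(v).
M = np.column_stack([np.linalg.solve(C, T(v)) for v in B])
print(M)  # columns are T(1,0) = (1,1) and T(1,1) = (2,0)
```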
Diagonalization
Diagonalizability: A is diagonalizable if
A = PDP^-1 for some diagonal D and invertible P.
A and B are similar if A = PBP^-1 for P invertible
Theorem: A is diagonalizable ⟺ A has n linearly
independent eigenvectors
Theorem: IF A has n distinct eigenvalues, THEN A is
diagonalizable, but the converse is not always true!
Notes: A can be diagonalizable even if it's not
invertible (Ex: A = [0 0; 0 0]). Not all matrices are
diagonalizable (Ex: [1 1; 0 1])
Consequence: A = PDP^-1 ⇒ A^n = PD^nP^-1
How to diagonalize: To find the eigenvalues, calculate
det(A − λI), and find the roots of that.
To find the eigenvectors, for each λ find a basis for
Nul(A − λI), which you do by row-reducing
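The whole procedure in one shot (numpy sketch; the matrix is a made-up example, and np.linalg.eig returns the eigenvectors as the columns of P):

```python
import numpy as np

# Made-up 2x2 example with eigenvalues 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)  # eigenvectors are the columns of P
D = np.diag(eigvals)

A_rebuilt = P @ D @ np.linalg.inv(P)                  # A = P D P^-1
A_cubed = P @ np.diag(eigvals**3) @ np.linalg.inv(P)  # A^3 = P D^3 P^-1
```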
Rational roots theorem: If p(λ) = 0 has a rational
root r = a/b, then a divides the constant term of p, and b
divides the leading coefficient.
Use this to guess zeros of p. Once you have a zero λ0,
factor out (λ − λ0) and repeat.
Complex eigenvalues a ± bi (2 × 2 case): A = PCP^-1, where:
C = [a −b; b a] = √det(A) · [cos(φ) −sin(φ); sin(φ) cos(φ)]
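The divisor search in the rational roots theorem can be automated (Python sketch; the polynomial is a made-up example):

```python
from fractions import Fraction

def rational_root_candidates(coeffs):
    """coeffs: integer coefficients, highest degree first.
    Candidates a/b with a | constant term and b | leading coefficient."""
    lead, const = coeffs[0], coeffs[-1]
    divisors = lambda n: [d for d in range(1, abs(n) + 1) if n % d == 0]
    cands = set()
    for a in divisors(const):
        for b in divisors(lead):
            cands.add(Fraction(a, b))
            cands.add(Fraction(-a, b))
    return cands

def poly(coeffs, x):
    result = 0
    for c in coeffs:
        result = result * x + c  # Horner evaluation
    return result

# Made-up example: p(t) = t^3 - 6t^2 + 11t - 6 = (t - 1)(t - 2)(t - 3).
coeffs = [1, -6, 11, -6]
roots = sorted(r for r in rational_root_candidates(coeffs) if poly(coeffs, r) == 0)
print(roots)  # [Fraction(1, 1), Fraction(2, 1), Fraction(3, 1)]
```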
Orthogonality
u, v orthogonal if u · v = 0.
‖u‖ = √(u · u)
{u1, ..., un} is orthogonal if ui · uj = 0 for i ≠ j,
orthonormal if additionally ui · ui = 1
W⊥: Set of v which are orthogonal to every w in W.
If {u1, ..., un} is an orthogonal basis, then:
y = c1u1 + ... + cnun, where cj = (y · uj)/(uj · uj)
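The coefficient formula in action (numpy sketch with a made-up orthogonal basis):

```python
import numpy as np

# Made-up orthogonal (not orthonormal) basis of R^2.
u1 = np.array([1.0, 1.0])
u2 = np.array([1.0, -1.0])
y = np.array([3.0, 5.0])

c1 = (y @ u1) / (u1 @ u1)  # 8/2 = 4
c2 = (y @ u2) / (u2 @ u2)  # -2/2 = -1
print(c1 * u1 + c2 * u2)   # recovers y = [3. 5.]
```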
Inner product spaces: ⟨f, g⟩ = ∫_a^b f(t)g(t) dt. G-S
applies with this inner product as well.
Cauchy-Schwarz: |u · v| ≤ ‖u‖ ‖v‖
Triangle inequality: ‖u + v‖ ≤ ‖u‖ + ‖v‖
Symmetric matrices (A = A^T)
Has n real eigenvalues, always diagonalizable,
orthogonally diagonalizable (A = PDP^T, P is an
orthogonal matrix; equivalent to symmetry!).
Theorem: If A is symmetric, then any two
eigenvectors from different eigenspaces are orthogonal.
How to orthogonally diagonalize: First diagonalize,
then apply G-S on each eigenspace and normalize.
Then P = matrix of (orthonormal) eigenvectors, D =
matrix of eigenvalues.
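A sketch of the result with numpy's eigh, which handles the symmetric case directly (made-up matrix):

```python
import numpy as np

# Made-up symmetric matrix with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eigh(A)  # eigh: real eigenvalues, orthogonal P
D = np.diag(eigvals)

print(P @ D @ P.T)  # reconstructs A, since A = P D P^T
```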
Quadratic forms: To find the matrix, put the
x_i^2-coefficients on the diagonal, and evenly distribute
the other terms. For example, if the x1x2 term is 6,
then the (1, 2) and (2, 1) entries of A are 3.
Then orthogonally diagonalize A = P DP T .
Then let y = P^T x; the quadratic form becomes
λ1 y1^2 + ... + λn yn^2, where λi are the eigenvalues.
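Checking the change of variables numerically (numpy sketch; the quadratic form is made up):

```python
import numpy as np

# Made-up form Q(x) = x1^2 + x2^2 + 6*x1*x2: the cross term 6 splits as 3 + 3.
A = np.array([[1.0, 3.0],
              [3.0, 1.0]])

eigvals, P = np.linalg.eigh(A)  # A = P D P^T, eigenvalues -2 and 4

x = np.array([2.0, -1.0])
y = P.T @ x
q_x = x @ A @ x           # original form
q_y = eigvals @ (y ** 2)  # lambda_1 y1^2 + lambda_2 y2^2
print(q_x, q_y)  # equal (up to roundoff)
```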
Spectral decomposition:
A = λ1 u1 u1^T + λ2 u2 u2^T + ... + λn un un^T
Wronskian:
W(t) = [f(t) g(t) h(t); f'(t) g'(t) h'(t); f''(t) g''(t) h''(t)]
(for 3 functions).
Then pick a point t0 where det(W(t0)) is easy to
evaluate. If det ≠ 0, then f, g, h are linearly
independent! Try to look for simplifications before you
differentiate.
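The Wronskian test can be sketched with sympy (the three functions are made-up examples):

```python
import sympy as sp

t = sp.symbols('t')
funcs = [sp.sin(t), sp.cos(t), sp.exp(t)]  # made-up test functions

# Rows: the functions, their first derivatives, their second derivatives.
W = sp.Matrix([[sp.diff(f, t, k) for f in funcs] for k in range(3)])
w = sp.simplify(W.det())
print(w.subs(t, 0))  # -2, nonzero, so sin, cos, exp are linearly independent
```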
Fundamental solution set: If f, g, h are solutions and
linearly independent, then {f, g, h} is a fundamental
solution set.
Largest interval of existence: First make sure the
leading coefficient equals 1. Then look at the domain
of each term. For each domain, consider the part of the
interval which contains the initial condition. Finally,
intersect the intervals and change any brackets to
parentheses.
Harmonic oscillator:
my'' + by' + ky = 0 (m = inertia, b = damping,
k = stiffness)
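A quick check of the undamped case with sympy's dsolve (made-up values m = 1, b = 0, k = 4):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Made-up undamped oscillator: m = 1, b = 0, k = 4, i.e. y'' + 4y = 0.
sol = sp.dsolve(sp.Eq(y(t).diff(t, 2) + 4*y(t), 0), y(t))
yt = sol.rhs  # general solution: a combination of sin(2t) and cos(2t)

print(sp.simplify(yt.diff(t, 2) + 4*yt))  # 0 -- it solves the ODE
```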
v2 = [sin(2π/3); sin(4π/3)] = [√3/2; −√3/2]
Case N = 3
Equation: x'' = Ax, A = [−2 1 0; 1 −2 1; 0 1 −2]
Proper frequencies (from the eigenvalues of A):
√2 i, √(2 + √2) i, √(2 − √2) i
Proper modes:
v1 = [sin(π/4); sin(2π/4); sin(3π/4)] = [√2/2; 1; √2/2]
v2 = [sin(2π/4); sin(4π/4); sin(6π/4)] = [1; 0; −1]
v3 = [sin(3π/4); sin(6π/4); sin(9π/4)] = [√2/2; −1; √2/2]
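A numerical check of the N = 3 frequencies and modes (numpy sketch):

```python
import numpy as np

# Verify: eigenvalues of A are -(2 sin(k*pi/8))^2, matching the frequencies.
A = np.array([[-2.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, -2.0]])

eigvals = np.sort(np.linalg.eigvalsh(A))
expected = np.sort([-(2*np.sin(k*np.pi/8))**2 for k in (1, 2, 3)])

# v2 = (1, 0, -1) should be an eigenvector with eigenvalue -2.
v2 = np.array([1.0, 0.0, -1.0])
print(np.allclose(eigvals, expected), np.allclose(A @ v2, -2*v2))  # True True
```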
General case (just in case!)
Equation: x'' = Ax,
A = [−2 1 0 ... 0; 1 −2 1 ... 0; 0 1 −2 1 ...; ...;
0 ... 1 −2 1; 0 ... 0 1 −2]
(N × N: −2 on the diagonal, 1 on the off-diagonals)
Proper frequencies: 2i sin(kπ/(2(N + 1))), k = 1, 2, ..., N
Proper modes:
vk = [sin(kπ/(N + 1)); sin(2kπ/(N + 1)); ...; sin(Nkπ/(N + 1))]
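The general formulas can be verified for any chain length (numpy sketch with a made-up N = 6):

```python
import numpy as np

# Made-up chain length; A is N x N with -2 on the diagonal, 1 off-diagonal.
N = 6
A = -2*np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)

for k in range(1, N + 1):
    vk = np.array([np.sin(j*k*np.pi/(N + 1)) for j in range(1, N + 1)])
    lam = -(2*np.sin(k*np.pi/(2*(N + 1))))**2  # omega_k^2, omega_k imaginary
    assert np.allclose(A @ vk, lam * vk)       # vk is an eigenvector
print("all N modes verified")
```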
Heat equation with a source:
∂u/∂t = ∂^2u/∂x^2 + P(x)
u(0, t) = U1, u(L, t) = U2, u(x, 0) = f(x)
Then u(x, t) = v(x) + w(x, t), where:
v(x) = [U2 − U1 + ∫_0^L ∫_0^z P(s) ds dz] · x/L + U1 − ∫_0^x ∫_0^z P(s) ds dz
and w(x, t) solves the homogeneous equation:
∂w/∂t = ∂^2w/∂x^2
w(0, t) = 0, w(L, t) = 0, w(x, 0) = f(x) − v(x)
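The steady-state formula for v(x) can be checked symbolically (sympy sketch with the made-up source P(s) = s):

```python
import sympy as sp

x, z, s, L, U1, U2 = sp.symbols('x z s L U1 U2', positive=True)

P = lambda s: s  # made-up source term

double_int = lambda upper: sp.integrate(sp.integrate(P(s), (s, 0, z)), (z, 0, upper))
v = (U2 - U1 + double_int(L))*x/L + U1 - double_int(x)

# v should satisfy v'' + P(x) = 0 with v(0) = U1 and v(L) = U2.
print(sp.simplify(v.diff(x, 2) + P(x)))  # 0
```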