Linear Algebra Cheat Sheet
1.1.4
1.1.1
A linear equation in the variables x1 , x2 , . . ., xn is an equation that can be written in the form
a1 x1 + a2 x2 + . . . + an xn = b, where a1 , . . ., an are the coefficients. A system of linear equations (or a
linear system) is a collection of one or more linear equations involving the same variables. A solution
of a linear system is a list of numbers that makes each equation a true statement. The set of all possible
solutions is called the solution set of the linear system. Two linear systems are called equivalent if
they have the same solution set. A linear system is said to be consistent if it has either one solution or
infinitely many solutions. A system is inconsistent if it has no solutions.
Matrices
The essential information of a linear system can be recorded compactly in a rectangular array called a
matrix. A matrix containing only the coefficients of a linear system is called the coefficient matrix,
while a matrix that also includes the constants from the right-hand sides of the equations is called an augmented
matrix. The size of a matrix tells how many rows and columns it has. An m × n matrix has m rows
and n columns.
There are three elementary row operations. Replacement adds to one row a multiple of another.
Interchange interchanges two rows. Scaling multiplies all entries in a row by a nonzero constant. Two
matrices are row equivalent if there is a sequence of row operations that transforms one matrix into
the other. If the augmented matrices of two linear systems are row equivalent, then the two systems have
the same solution set.
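As a small illustration, the three elementary row operations can be carried out on an augmented matrix in plain Python. This is a minimal sketch; the function names and the example system are invented for illustration:

```python
def replace_op(M, i, j, c):
    """Replacement: add c times row j to row i (rows are 0-indexed)."""
    M[i] = [a + c * b for a, b in zip(M[i], M[j])]

def interchange_op(M, i, j):
    """Interchange: swap rows i and j."""
    M[i], M[j] = M[j], M[i]

def scale_op(M, i, c):
    """Scaling: multiply all entries in row i by a nonzero constant c."""
    assert c != 0
    M[i] = [c * a for a in M[i]]

# Augmented matrix of the system  x + 2y = 5,  3x + 4y = 11
M = [[1.0, 2.0, 5.0],
     [3.0, 4.0, 11.0]]
replace_op(M, 1, 0, -3.0)   # R2 <- R2 - 3*R1
scale_op(M, 1, -0.5)        # R2 <- -(1/2)*R2
replace_op(M, 0, 1, -2.0)   # R1 <- R1 - 2*R2
# M is now [[1, 0, 1], [0, 1, 2]], i.e. x = 1, y = 2
```

Each step produces a row equivalent matrix, so the final matrix has the same solution set as the original system.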
I is called an identity matrix, and has 1s on the diagonal and 0s elsewhere. In is the identity matrix
of size n × n. It is always true that In x = x for every x in Rn .
1.1.3
Linear Transformations
A transformation (or function or mapping) T from Rn to Rm is a rule that assigns to each vector x
in Rn a vector T (x) in Rm . For x in Rn , the vector T (x) in Rm is called the image of x. The set Rn is
called the domain of T , and Rm is called the codomain. The set of all images T (x) is called the range
of T .
If A is an m × n matrix, then the scalar in the ith row and the jth column is denoted by aij . The
diagonal entries in a matrix are the numbers aij where i = j. They form the main diagonal of A.
A diagonal matrix is a square matrix whose nondiagonal entries are 0. An example is In . A matrix
whose entries are all zero is called a zero matrix, and denoted as 0. Two matrices are equal if they have
the same size, and all their corresponding entries are equal.
Matrix Operations
Two matrices can be multiplied by multiplying one matrix by the columns of the other matrix. If A is
an m × n matrix and B is an n × p matrix with columns b1 , b2 , . . ., bp , then the product AB is the m × p
matrix AB = A [ b1 b2 . . . bp ] = [ Ab1 Ab2 . . . Abp ]. Note that usually AB ≠ BA. If AB = BA,
then we say that A and B commute with one another.
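The column-by-column description of the product translates directly into code. A minimal pure-Python sketch (the helper names are invented for illustration):

```python
def mat_vec(A, x):
    """Compute Ax: dot product of each row of A with x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def mat_mul(A, B):
    """AB = [ Ab1 Ab2 ... Abp ]: multiply A by each column of B."""
    cols = list(zip(*B))                       # columns b1, ..., bp of B
    out_cols = [mat_vec(A, c) for c in cols]   # Ab1, ..., Abp
    return [list(r) for r in zip(*out_cols)]   # reassemble row by row

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
assert mat_mul(A, B) == [[2, 1], [4, 3]]
assert mat_mul(A, B) != mat_mul(B, A)   # usually AB != BA
```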
Since it is possible to multiply matrices, it is also possible to take their powers. If A is a square matrix,
then Ak = A . . . A (k factors of A). Also, A0 is defined as In . Given an m × n matrix A, the
transpose of A is the n × m matrix, denoted by AT , whose columns are formed from the corresponding
rows of A. So rowi (A) = coli (AT ). The transpose should not be confused with a matrix raised to a power.
Inverses
Subspaces
A subspace of Rn is any set H in Rn with three properties: the zero vector 0 is in H; for
each u and v in H, the sum u + v is in H; and for each u in H and every scalar c, the vector cu is in H.
A subspace of Rn is always a point (0-dimensional) at the origin, a line (1-dimensional) through the origin, a
plane (2-dimensional) through the origin, or a higher-dimensional analogue of a plane through the origin.
14. If T : Rn → Rm is a linear transformation, then there exists a unique matrix A such that T (x) = Ax
for all x in Rn . In fact, A = [ T (e1 ) T (e2 ) . . . T (en ) ].
15. If T : Rn → Rm is a linear transformation, and T (x) = Ax, then:
(a) T is one-to-one if, and only if the equation T (x) = 0 has only the trivial solution.
(b) T is one-to-one if, and only if the columns of A are linearly independent.
(c) T maps Rn onto Rm if, and only if the columns of A span Rm .
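Criteria (b) and (c) can be checked numerically: the columns of A are linearly independent exactly when rank A equals the number of columns, and they span Rm exactly when rank A equals m. A rough sketch (the `rank` helper is invented here, using Gaussian elimination with a tolerance):

```python
def rank(A, tol=1e-9):
    """Rank of A = number of pivots, found by Gaussian elimination."""
    M = [row[:] for row in A]
    m, n = len(M), len(M[0])
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if abs(M[i][c]) > tol), None)
        if piv is None:
            continue                     # no pivot in this column
        M[r], M[piv] = M[piv], M[r]      # interchange rows
        for i in range(m):
            if i != r and abs(M[i][c]) > tol:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# A is 3x2, so it is the standard matrix of some T : R2 -> R3
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
assert rank(A) == 2   # rank = number of columns: T is one-to-one
assert rank(A) < 3    # rank < m: the columns do not span R3, so T is not onto
```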
16. If A and B are equally sized square matrices, and AB = I, then A and B are both invertible, and
A = B⁻¹ and B = A⁻¹.
2.1.5
11. Let T : Rn → Rn be a linear transformation, and let A be the standard matrix for T . That
is, T (x) = Ax. Then T is invertible if, and only if A is an invertible matrix. In that case,
T⁻¹(x) = A⁻¹x.
Suppose the set β = {b1 , . . . , bp } is a basis for a subspace H. For each x in H, the coordinates of
x relative to the basis β are the weights c1 , . . ., cp such that x = c1 b1 + . . . + cp bp . The vector
[x]β in Rp with coordinates c1 , . . ., cp is called the coordinate vector of x (relative to β) or the
β-coordinate vector of x.
9. Each elementary matrix E is invertible. The inverse of E is the elementary matrix of the same type
that transforms E back into I.
2.1.6
Let T be a linear transformation. The kernel (or null space) of T , denoted as ker T , is the set of all u
such that T (u) = 0. The range of T , denoted as range T , is the set of all vectors v for which T (x) = v
has a solution. If T (x) = Ax, then the kernel of T is the null space of A, and the range of T is the column
space of A.
2.2
18. The Basis Theorem. Let H be a p-dimensional subspace of Rn . Any linearly independent set of
exactly p elements in H is automatically a basis for H. Also any set of p elements of H that spans
H is automatically a basis for H.
19. If the linear transformation T (x) = Ax, then ker T = Nul A and range T = Col A.
20. If Rn is the domain of T , then dim ker T + dim range T = n.
21. If two matrices A and B are row equivalent, then their row spaces are the same. If B is in echelon
form, the nonzero rows of B form a basis for the row space of A as well as for that of B.
Theorems
2. From the Row-Column Rule it follows that rowi (AB) = rowi (A) · B.
3. Let A = [ a b ; c d ] be a 2 × 2 matrix. If ad − bc ≠ 0, then A is invertible and
A⁻¹ = 1/(ad − bc) · [ d −b ; −c a ]. If ad − bc = 0, then A is not invertible.
22. The Invertible Matrix Theorem. The following statements are equivalent for a particular
square n × n matrix A (be careful: these statements are not equivalent for rectangular matrices).
That is, if one is true, then all are true, and if one is false, then all are false:
(a) A is an invertible matrix.
4. If A is an invertible matrix, then for each b in Rn , the equation Ax = b has the unique solution
x = A⁻¹b.
5. If A is invertible, then A⁻¹ is invertible, and (A⁻¹)⁻¹ = A.
(g) The equation Ax = b has at least one solution for each b in Rn . That is, the mapping x ↦ Ax
is onto Rn .
6. If A and B are n × n matrices, then so is AB, and the inverse of AB is the product of the inverses
of A and B in the reverse order. That is, (AB)⁻¹ = B⁻¹A⁻¹. This also goes for any number of
matrices. That is, if A1 , . . ., Ap are n × n matrices, then (A1 A2 . . . Ap )⁻¹ = Ap⁻¹ . . . A2⁻¹ A1⁻¹.
7. If A is an invertible matrix, then so is AT , and the inverse of AT is the transpose of A⁻¹. That is,
(AT )⁻¹ = (A⁻¹)T .
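These inverse rules are easy to check numerically on a small example. A sketch using the standard ad − bc formula for 2 × 2 inverses (the helper names `inv2` and `mul2` are invented):

```python
def inv2(A):
    """Inverse of a 2x2 matrix [[a, b], [c, d]] via the ad - bc formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "matrix is not invertible"
    return [[d / det, -b / det], [-c / det, a / det]]

def mul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [1.0, 1.0]]
B = [[1.0, 2.0], [0.0, 1.0]]
# the inverse of AB is the product of the inverses in REVERSE order
assert inv2(mul2(A, B)) == mul2(inv2(B), inv2(A))
```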
Calculation Rules
Algebraic Definitions
If A, B and C are m × n matrices and r is a scalar, then addition and scalar multiplication are defined as:
A + B =
[ a11 + b11   . . .   a1n + b1n ]
[     .                   .     ]
[ am1 + bm1   . . .   amn + bmn ]
(2.1)
rA =
[ ra11   . . .   ra1n ]
[   .              .  ]
[ ram1   . . .   ramn ]
(2.2)
(r + s)A = rA + sA
(2.10)
r(sA) = (rs)A
(2.11)
3. Determinants
3.1
A(BC) = (AB)C
(2.12)
A(B + C) = AB + AC
(2.13)
(B + C)A = BA + CA
(2.14)
r(AB) = (rA)B = A(rB)
(2.15)
Im A = A = AIn
(2.16)
A0 = In
(2.17)
Iu = u
(2.18)
(AT )T = A
(2.19)
(A + B)T = AT + B T
(2.20)
3.1.1
AT =
[ a11   . . .   am1 ]
[  .             .  ]
[ a1n   . . .   amn ]
(2.21)
3.1.2
Given A = [aij ], the (i, j)-cofactor of A is the number Cij given by Cij = (−1)^(i+j) det Aij . The
determinant of A can be determined using a cofactor expansion. The formula det A = a11 C11 + a12 C12 +
. . . + a1n C1n is called a cofactor expansion across the first row of A.
Cofactors
The transpose of the matrix of cofactors of A, that is, the matrix B = [bij ] with bij = Cji , is called
the adjugate (or classical adjoint) of A. This is denoted by adj A.
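The cofactor expansion across the first row gives a direct (if inefficient) recursive algorithm for the determinant. A minimal sketch:

```python
def det(A):
    """Determinant by cofactor expansion across the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # submatrix A1j: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        # with 0-indexing, (-1)**j plays the role of (-1)^(1+j)
        total += (-1) ** j * A[0][j] * det(minor)
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
assert det(A) == -3
```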
Theorems
2.3.2
(rA)T = r(AT )
(2.4)
Besides writing det A to indicate a determinant, the notation |A| is also often used.
3.2
(AB)T = B T AT
Determinants
For any square matrix A, let Aij denote the submatrix formed by deleting the ith row and the jth column
of A. For n ≥ 2, the determinant of an n × n matrix A = [aij ] is the sum of n terms of the form
±a1j det A1j , with plus and minus signs alternating, where the entries a11 , a12 , . . . , a1n are from the first
row of A. In symbols:
det A = a11 det A11 − a12 det A12 + . . . + (−1)^(1+n) a1n det A1n = Σ_{j=1}^{n} (−1)^(1+j) a1j det A1j
(q) det A ≠ 0. (The definition of the determinant is given in chapter 3.)
13. If a set S = {v1 , v2 , . . ., vp } contains the zero vector 0, then the set is linearly dependent.
10. An n × n matrix A is invertible if, and only if A is row equivalent to In , and in this case, any
sequence of elementary row operations that reduces A to In also transforms In into A⁻¹.
11. An indexed set S = {v1 , v2 , . . ., vp } is linearly dependent if, and only if at least one of the vectors
in S is a linear combination of the others.
The column space of a matrix A is the set Col A of all linear combinations of the columns of A. The
column space of an m × n matrix is a subspace of Rm . The row space of a matrix A is the set Row A of
all linear combinations of the rows of A. The null space of a matrix A is the set Nul A of all solutions
to the homogeneous equation Ax = 0. The null space of an m × n matrix is a subspace of Rn . A basis
for a subspace H of Rn is a linearly independent set in H that spans H.
2.3.1
10. If Ax = b is consistent for some given b, and if Ap = b, then the solution set of Ax = b is the set
of all vectors w = p + v where v is any solution of Ax = 0.
The dimension of a subspace H, denoted by dim H, is the number of vectors in any basis for H. The
zero subspace has no basis, since the zero vector itself forms a linearly dependent set. The rank of a
matrix A, denoted by rank A, is the dimension of the column space of A. So by definition rank A =
dim Col A.
If A and B are both m × n matrices, and A + B = C, then C is also an m × n matrix whose entries are
the sum of the corresponding entries of A and B. If r is a scalar, then the scalar multiple C = rA is
the matrix whose entries are r times the corresponding entries of A.
2.3
2. Matrix Algebra
2.1.4
7. The following four statements are equivalent for a particular m × n coefficient matrix A. That is,
if one is true, then all are true, and if one is false, then all are false:
12. If a set contains more vectors than there are entries in each vector, then the set is linearly dependent.
That is, any set {v1 , v2 , . . ., vp } in Rn is linearly dependent if p > n.
Matrix Types
2.1.3
6. If A is an m × n matrix, and if b is in Rm , the matrix equation Ax = b has the same solution set
as the linear system whose augmented matrix is [a1 a2 . . . an b].
9. If the reduced echelon form of A has d free variables, then the solution set consists of a d-dimensional
plane (that is, a line is a 1-dimensional plane, a plane is a 2-dimensional plane), which can be
described by the parametric vector equation x = a1 u1 + a2 u2 + . . . + ad ud .
Linear Independence
A pivot position in a matrix A is a location in A that corresponds to a leading 1 in the reduced echelon
form of A. A pivot column is a column of A that contains a pivot position. Variables corresponding to
pivot columns in the matrix are called basic variables. The other variables are called free variables.
A general solution of a linear system gives an explicit description of all solutions.
2.1.2
5. A vector b is in Span{v1 , . . ., vp } if, and only if the linear system with augmented matrix
[v1 v2 . . . vp b] has a solution.
1.1.8
2.1.1
3. If a linear system is consistent, and if there are no free variables, there exists exactly one solution. If
there are free variables, the solution set contains infinitely many solutions.
A leading entry of a row is the leftmost nonzero entry in the row. A rectangular matrix is in echelon
form (and thus called an echelon matrix) if all nonzero rows are above any rows of all zeros, if each
leading entry of a row is in a column to the right of the leading entry of the row above it, and all entries
in a column below a leading entry are zeros. A matrix in echelon form is in reduced echelon form if
also the leading entry in each nonzero row is 1, and each leading 1 is the only nonzero entry in its column.
If a matrix A is row equivalent to an echelon matrix U , we call U an echelon form of A.
2.1
2. A linear system is consistent if, and only if the rightmost column of the augmented matrix is not a
pivot column.
1.1.7
Theorems
1. Each matrix is row equivalent to one, and only one, reduced echelon matrix.
4. A vector equation x1 a1 + x2 a2 + . . . + xn an = b has the same solution set as the linear system
whose augmented matrix is [a1 a2 . . . an b].
Matrix Equations
1.1.6
1.1.2
1.2
Vectors
A matrix with only one column is called a vector. Two vectors are equal if, and only if, their corresponding entries are equal. A vector whose entries are all zero is called the zero vector, and is denoted
by 0. If v1 , . . ., vp are in Rn , then the set of all linear combinations of v1 , . . ., vp is denoted by Span{v1 ,
. . ., vp } and is called the subset of Rn spanned by v1 , . . ., vp . So Span{v1 , . . ., vp } is the collection
of all vectors that can be written in the form c1 v1 + c2 v2 + . . . + cp vp with c1 , c2 , . . ., cp scalars.
Algebraic Rules
A+B =B+A
(2.6)
(A + B) + C = A + (B + C)
(2.7)
A+0=A
(2.8)
r(A + B) = rA + rB
(2.9)
8. If A is an invertible matrix, then A⁻¹ = (1/det A) adj A.
9. If A is a 2 × 2 matrix, the area of the parallelogram determined by the columns of A is |det A|. If
A is a 3 × 3 matrix, the volume of the parallelepiped determined by the columns of A is |det A|.
10. Let T : R2 → R2 be the linear transformation determined by a 2 × 2 matrix A. If S is any
region in R2 with finite area, then {area of T (S)} = |det A| · {area of S}. Also, if T is determined
by a 3 × 3 matrix A, and if S is any region in R3 with finite volume, then {volume of T (S)} =
|det A| · {volume of S}.
4.1
4.1.1
4.2
2. Let S = {v1 , . . . , vn } be a set in V , and let H = Span{v1 , . . . , vn }. If one of the vectors in S, say,
vk , is a linear combination of the remaining vectors in S, then the set formed from S by removing
vk still spans H.
Vector Spaces
A vector space is a nonempty set V of objects, called vectors, on which are defined two operations,
called addition and multiplication by scalars, subject to the ten axioms listed in paragraph 4.3. As was
already mentioned in the chapter Matrix Algebra, a subspace of a vector space V is a subset H of V
that has three properties:
1. The zero vector of V is in H.
2. H is closed under vector addition. That is, for each u and v in H, the sum u + v is in H.
3. H is closed under multiplication by scalars. That is, for each u in H and each scalar c, the vector
cu is in H.
If v1 , . . . , vp are in a vector space V , then Span{v1 , . . . , vp } is called the subspace spanned by
v1 , . . . , vp . Given any subspace H of V , a spanning set for H is a set {v1 , . . . , vp } in H such that
H = Span{v1 , . . . , vp }.
4.1.2
Bases
4.1.3
Theorems
7. If a vector space V has a basis of n vectors, then every basis of V must consist of exactly n vectors.
8. Let H be a subspace of a finite-dimensional vector space V . Any linearly independent set in H can
be expanded, if necessary, to a basis for H. Also, H is finite-dimensional and dim H ≤ dim V .
9. The Basis Theorem: Let V be a p-dimensional vector space, p ≥ 1. Any linearly independent
set of exactly p elements in V is automatically a basis for V . Any set of exactly p elements that
spans V is automatically a basis for V .
4.3
The following axioms must hold for all the vectors u, v and w in the vector space V and all scalars c
and d.
Coordinate Systems
Suppose β = {b1 , . . . , bn } is a basis for V and x is in V . The coordinates of x relative to the basis β
(or the β-coordinates of x) are the weights c1 , . . . , cn such that x = c1 b1 + . . . + cn bn . If c1 , . . . , cn
are the β-coordinates of x, then the vector [x]β in Rn (consisting of c1 , . . . , cn ) is the coordinate vector
of x (relative to β), or the β-coordinate vector of x. The mapping x ↦ [x]β is the coordinate
mapping (determined by β).
If Pβ = [ b1 . . . bn ], then the vector equation x = c1 b1 + . . . + cn bn is equivalent to x = Pβ [x]β . We
call Pβ the change-of-coordinates matrix from β to the standard basis in Rn . Since Pβ is invertible
(by the Invertible Matrix Theorem), also [x]β = Pβ⁻¹x.
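The change-of-coordinates matrix can be illustrated with a small example (the basis and the coordinates below are made up):

```python
def mat_vec(P, c):
    """Compute P c for a square matrix P and a coordinate vector c."""
    return [sum(p * ci for p, ci in zip(row, c)) for row in P]

# Hypothetical basis beta = {b1, b2} of R2
b1, b2 = [1.0, 0.0], [1.0, 1.0]
P = [[b1[0], b2[0]],
     [b1[1], b2[1]]]        # change-of-coordinates matrix [ b1 b2 ]
x_beta = [2.0, 3.0]         # beta-coordinates of x
x = mat_vec(P, x_beta)      # x = P [x]_beta = 2*b1 + 3*b2
assert x == [5.0, 3.0]
```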
5. For each u in V , there is a vector −u in V such that u + (−u) = 0.
4.1.4
8. (c + d)u = cu + du.
5.1.6
5.1
Many dynamical systems can be described or approximated by a sequence of vectors xk where xk+1 = Axk .
The variable k often indicates a certain time. If A is diagonalizable, with eigenvectors v1 , . . ., vn forming
a basis for Rn and corresponding eigenvalues λ1 , . . ., λn , then any x0 = c1 v1 + . . . + cn vn evolves as
xk = c1 (λ1 )^k v1 + . . . + cn (λn )^k vn . This is called the eigenvector decomposition of xk .
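For a diagonal matrix the decomposition can be checked directly against iterating xk+1 = Axk (all values below are invented for illustration):

```python
# Hypothetical diagonal A = [[0.5, 0], [0, 0.25]]: eigenvalues 0.5 and 0.25
# with eigenvectors e1 and e2.  If x0 = c1*e1 + c2*e2, the decomposition
# gives xk = c1*(0.5)^k e1 + c2*(0.25)^k e2 without iterating.
lam1, lam2 = 0.5, 0.25
c1, c2 = 2.0, 3.0

def x_decomposed(k):
    return [c1 * lam1**k, c2 * lam2**k]

# Cross-check by iterating x_{k+1} = A x_k ten times
xk = [c1, c2]
for _ in range(10):
    xk = [lam1 * xk[0], lam2 * xk[1]]
assert all(abs(a - b) < 1e-12 for a, b in zip(xk, x_decomposed(10)))
# Both |lambda| < 1, so every trajectory tends to 0: the origin is an attractor.
```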
5.1.1
9. c(du) = (cd)u.
10. 1u = u.
Dynamical Systems
The graph x0 , x1 , . . . is called a trajectory of the dynamical system. If, for every x0 , the trajectory
goes to the origin 0 as k increases, the origin is called an attractor (or sometimes sink). If, for every
x0 , the trajectory goes away from the origin, it is called a repellor (or sometimes source). If 0 attracts
for certain x0 and repels for other x0 , then it is called a saddle point. For matrices having complex
eigenvalues/eigenvectors, it often occurs that the trajectory spirals inward to the origin (attractor) or
outward (repellor) from the origin (the origin is then called a spiral point).
(c) If A is diagonalizable and βk is a basis for the eigenspace corresponding to λk for each k, then
the total collection of vectors in the sets β1 , . . . , βp forms an eigenvector basis for Rn .
10. Diagonal Matrix Representation: Suppose A = P DP⁻¹, where D is a diagonal n × n matrix.
If β = {b1 , . . . , bn } is the basis for Rn formed from the columns of P , then D is the β-matrix for
the transformation x ↦ Ax.
11. If P is the matrix whose columns come from the vectors in β, then [T ]β = P⁻¹AP .
5.1.2
5.1.7
Differential Equations
Linear algebra comes in handy when differential equations take the form x′ = Ax. A solution is then
a vector-valued function that satisfies x′ = Ax for all t in some interval. There is always a fundamental
set of solutions, being a basis for the set of all solutions. If a vector x0 is specified, then the initial
value problem is to construct the function x such that x′ = Ax and x(0) = x0 .
If A and B are n × n matrices, then A and B are similar if there is an invertible matrix P such that
P⁻¹AP = B. Changing A into P⁻¹AP is called a similarity transformation.
5.2
Diagonalization
5.1.4
In the common case when W is the same as V , and the basis γ is the same as β, the matrix M is
called the matrix for T relative to β, or simply the β-matrix for T , and is denoted by [T ]β . Now
[T (x)]β = [T ]β [x]β .
(b) The matrix A is diagonalizable if, and only if the sum of the dimensions of the distinct
eigenspaces equals n, and this happens if, and only if the dimension of the eigenspace for
each λk equals the multiplicity of λk .
6.1.5
6.1
The Gram-Schmidt Process is an algorithm for producing an orthogonal or orthonormal basis {u1 ,
. . ., up } for any nonzero subspace of Rn . Let W be the subspace, having basis {x1 , . . ., xp }. Let u1 = x1
and ui = xi − x̂i for 1 < i ≤ p, where x̂i is the projection of xi onto the subspace spanned by {u1 , . . .,
ui−1 }. In formula: u1 = x1 and
ui = xi − ( (xi · u1 )/(u1 · u1 ) u1 + . . . + (xi · ui−1 )/(ui−1 · ui−1 ) ui−1 ).
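The process can be written out in a few lines of Python (a minimal sketch; the input basis is invented):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(xs):
    """Orthogonal basis: each ui = xi minus its projection onto span{u1..ui-1}."""
    us = []
    for x in xs:
        proj = [0.0] * len(x)
        for u in us:
            f = dot(x, u) / dot(u, u)
            proj = [p + f * ui for p, ui in zip(proj, u)]
        us.append([xi - pi for xi, pi in zip(x, proj)])
    return us

xs = [[1.0, 1.0, 0.0], [2.0, 0.0, 0.0]]   # a basis of some subspace W
u1, u2 = gram_schmidt(xs)
assert dot(u1, u2) == 0.0                 # the new basis is orthogonal
```

Normalizing each ui afterwards (dividing by its length) would give an orthonormal basis.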
3. Two vectors u and v are orthogonal if, and only if ‖u + v‖² = ‖u‖² + ‖v‖².
4. A vector z is in W⊥ if, and only if z is orthogonal to every vector in a set that spans W .
5. W⊥ is a subspace of Rn .
6. If A is an m × n matrix, then (Row A)⊥ = Nul A and (Col A)⊥ = Nul AT .
A unit vector is a vector whose length is 1. For any nonzero vector u, the vector u/‖u‖ is a unit vector in
the direction of u. This process of creating unit vectors is called normalizing. The distance between
u and v, written as dist(u, v), is the length of the vector v − u. That is, dist(u, v) = ‖v − u‖.
6.1.6
A set of vectors {u1 , . . ., un } in Rn is said to be an orthogonal set if each pair of distinct vectors
is orthogonal, that is, if ui · uj = 0 whenever i ≠ j. An orthogonal basis for a subspace W of Rn is a
basis for W that is also an orthogonal set.
Orthonormal Sets
If u is any nonzero vector in Rn , then it is possible to decompose any vector y in Rn into the sum of two
vectors, one being a multiple of u, and one being orthogonal to it. The projection ŷ (being the multiple
of u) is called the orthogonal projection of y onto u, and the component of y orthogonal to u is,
surprisingly, called the component of y orthogonal to u.
Just like it is possible to project vectors on a vector, it is also possible to project vectors on a subspace.
The projection ŷ onto the subspace W is called the orthogonal projection of y onto W . ŷ is sometimes
also called the best approximation to y by elements of W .
Linear Models
Suppose we have a certain amount of measurement data which, when plotted, seem to lie close to a
straight line. Let y = 0 + 1 x. The dierence between the observed value (from the measurements)
and the predicted value (from the line) is called a residual. The least-squares line is the line that
minimizes the sum of the squares of the residuals. This line is also called a line of regression of y on
x. The coefficients 0 and 1 are called (linear) regression coefficients.
The previous system is equivalent to finding the least-squares solution of Xβ = y if X = [ 1 x ] (where
1 has entries 1, 1, . . ., 1, and x has entries x1 , . . ., xn ), β has entries β0 and β1 , and y has entries y1 , . . .,
yn . A common practice before computing a least-squares line is to compute the average x̄ of the original
x-values, and form a new variable x∗ = x − x̄. The new x-data are said to be in mean-deviation form.
In this case, the two columns of X will be orthogonal.
The residual vector is defined as ε = y − Xβ. So y = Xβ + ε. Any equation in this form is referred
to as a linear model, in which ε should be minimized.
9. Let {u1 , . . ., up } be an orthogonal basis for a subspace W of Rn . For each y in W , the weights in
the linear combination y = c1 u1 + . . . + cp up are given by cj = (y · uj )/(uj · uj ) = (y · uj )/‖uj ‖².
11. Let U be an m × n matrix with orthonormal columns, and let x and y be in Rn , then:
(a) ‖U x‖ = ‖x‖
(b) (U x) · (U y) = x · y
12. If U is a square matrix, then U is an orthogonal matrix if, and only if its columns are orthonormal.
The rows of an orthogonal matrix are then also orthonormal.
13. If y and u are any nonzero vectors in Rn , then the orthogonal projection of y onto u is
ŷ = (y · u)/‖u‖² · u, and the component z of y orthogonal to u is z = y − ŷ.
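The projection formula can be sketched in a few lines (the example vectors are invented):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

y = [3.0, 4.0]
u = [1.0, 0.0]
f = dot(y, u) / dot(u, u)                   # (y . u) / ||u||^2
y_hat = [f * ui for ui in u]                # orthogonal projection of y onto u
z = [yi - hi for yi, hi in zip(y, y_hat)]   # component of y orthogonal to u
assert y_hat == [3.0, 0.0]
assert dot(z, u) == 0.0                     # z really is orthogonal to u
```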
15. The Best Approximation Theorem. Let W be a subspace of Rn , y any vector in Rn , and ŷ
the orthogonal projection of y onto W . Then ŷ is the closest point in W to y, in the sense that
‖y − ŷ‖ < ‖y − v‖ for all v in W distinct from ŷ.
1. ⟨u, v⟩ = ⟨v, u⟩
2. ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩
3. ⟨cu, v⟩ = c⟨u, v⟩
4. ⟨u, u⟩ ≥ 0, and ⟨u, u⟩ = 0 if, and only if u = 0
A vector space with an inner product is called an inner product space.
17. The set of least-squares solutions of Ax = b coincides with the nonempty set of solutions of the
normal equations ATAx = ATb.
18. The matrix ATA is invertible if, and only if the columns of A are linearly independent. In that case,
the equation Ax = b has only one least-squares solution x̂, and it is given by x̂ = (ATA)⁻¹ATb.
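The formula x̂ = (ATA)⁻¹ATb from theorem 18 can be tried on a tiny made-up data set: fitting the least-squares line y = β0 + β1 x through three points. All helper names and data here are invented for illustration:

```python
def transpose(A):
    return [list(r) for r in zip(*A)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def inv2(M):
    """Inverse of a 2x2 matrix via the ad - bc formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Fit y = beta0 + beta1*x through the points (0, 1), (1, 2), (2, 2)
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]   # design matrix [ 1 x ]
y = [[1.0], [2.0], [2.0]]                  # observation vector
Xt = transpose(X)
beta = mat_mul(inv2(mat_mul(Xt, X)), mat_mul(Xt, y))  # (X^T X)^-1 X^T y
# least-squares line: approximately y = 7/6 + 0.5*x
```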
Theorems
14. Let W be a subspace of Rn . Then each y in Rn can be written uniquely in the form y = ŷ + z,
where ŷ is in W and z is in W⊥. In fact, if {u1 , . . ., up } is any orthogonal basis of W , then
ŷ = (y · u1 )/(u1 · u1 ) u1 + . . . + (y · up )/(up · up ) up , and z = y − ŷ.
An inner product on a vector space V is a function that, to each pair of vectors u and v in V , associates
a real number ⟨u, v⟩ and satisfies the following axioms, for all u, v, w in V and all scalars c:
6.2
In statistical analysis of scientific and engineering data, a different notation is commonly used.
Instead of Ax = b, we write Xβ = y and refer to X as the design matrix, β as the parameter vector,
and y as the observation vector.
6.1.8
Decomposing Vectors
Least-Squares Problem
The general least-squares problem is to find an x that makes ‖b − Ax‖ as small as possible. If A is
m × n and b is in Rm , a least-squares solution of Ax = b is an x̂ in Rn such that ‖b − Ax̂‖ ≤ ‖b − Ax‖
for all x in Rn . When a least-squares solution x̂ is used to produce Ax̂ as an approximation of b, the
distance from b to Ax̂ is called the least-squares error of this approximation.
6.1.7
Orthogonal Sets
6.1.4
Basics of Vectors
Two vectors u and v in Rn can be multiplied with each other, using the dot product, also called
the inner product, which produces a scalar value. It is denoted as u v, and defined as u v =
u1 v1 + . . . + un vnp
. The length
p of a vector u, sometimes also called the norm, is denoted by kuk. It is
defined as kuk = u u = u21 + . . . + u2n .
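In code, both definitions are one-liners (a minimal sketch):

```python
import math

def dot(u, v):
    """Dot product u . v = u1*v1 + ... + un*vn."""
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    """Length (norm) of u: the square root of u . u."""
    return math.sqrt(dot(u, u))

u = [3.0, 4.0]
assert dot(u, [1.0, 0.0]) == 3.0
assert norm(u) == 5.0       # sqrt(9 + 16)
```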
6.1.3
(c) |λi | > 1 for some i and |λi | < 1 for all other i. In that case the origin is a saddle point.
15. For linear differential equations, each eigenvalue-eigenvector pair (λ, v) provides a solution of
x′ = Ax. This solution is x(t) = v e^(λt) .
6.1.2
8. If an n × n matrix has n distinct eigenvalues, then it is diagonalizable. (Note that the converse is
not always true.)
6.1.1
6. If n × n matrices A and B are similar, then they have the same characteristic polynomial and hence
the same eigenvalues (with the same multiplicities).
Complex Eigenvalues
A complex scalar λ is a (complex) eigenvalue of an n × n matrix A if it satisfies det(A − λI) = 0, or
equivalently, if there is a nonzero vector x in Cn such that Ax = λx. Such an x is a (complex)
eigenvector corresponding to λ. If A is a real 2 × 2 matrix with a complex eigenvalue λ = a − bi (b ≠ 0)
and an associated eigenvector v in C², then A = P CP⁻¹, where P = [ Re v Im v ] and
C = [ a −b ; b a ].
16. For linear differential equations, any linear combination of solutions is also a solution for the
differential equation. So if u(t) and v(t) are solutions, then cu(t) + dv(t) is also a solution for any
scalars c and d.
Let V be an n-dimensional vector space, W an m-dimensional vector space, T any linear transformation
from V to W , β a basis for V and γ a basis for W . Now the image of any vector [x]β (the vector x relative
to the basis β) to [T (x)]γ is given by [T (x)]γ = M [x]β , where M = [ [T (b1 )]γ [T (b2 )]γ . . . [T (bn )]γ ]. The
m × n matrix M is a matrix representation of T , called the matrix for T relative to the bases β and
γ.
5.1.5
2. The eigenvalues of a triangular matrix are the entries on its main diagonal.
(d) |λi | = 1 for some i. In that case the trajectory can converge to any vector in the eigenspace
corresponding to the eigenvalue 1. However, it can also diverge.
Theorems
5.1.3