16. Using Equation (4.11.11), determine all vectors satisfying ⟨v, v⟩ > 0. Such vectors are called spacelike vectors.

17. Make a sketch of R² and indicate the position of the null, timelike, and spacelike vectors.

18. Consider the vector space Rⁿ, and let v = (v1, v2, ..., vn) and w = (w1, w2, ..., wn) be vectors in Rⁿ. Show that the mapping ⟨·, ·⟩ defined by
⟨v, w⟩ = k1v1w1 + k2v2w2 + ... + knvnwn
is a valid inner product on Rⁿ if and only if the constants k1, k2, ..., kn are all positive.

19. Prove from the inner product axioms that, in any inner product space V, ⟨v, 0⟩ = 0 for all v in V.

20. Let V be a real inner product space.
(a) Prove that for all v, w ∈ V,
||v + w||² = ||v||² + 2⟨v, w⟩ + ||w||².
[Hint: ||v + w||² = ⟨v + w, v + w⟩.]
(b) Two vectors v and w in an inner product space V are called orthogonal if ⟨v, w⟩ = 0. Use (a) to prove the general Pythagorean theorem: If v and w are orthogonal in an inner product space V, then
||v + w||² = ||v||² + ||w||².
(c) Prove that for all v, w in V,
(i) ||v + w||² − ||v − w||² = 4⟨v, w⟩.
(ii) ||v + w||² + ||v − w||² = 2(||v||² + ||w||²).

21. Let V be a complex inner product space. Prove that for all v, w in V,
||v + w||² = ||v||² + 2 Re⟨v, w⟩ + ||w||²,
where Re denotes the real part of a complex number.
DEFINITION 4.12.1
Let V be an inner product space.
1. Two vectors u and v in V are said to be orthogonal if ⟨u, v⟩ = 0.
2. A set of nonzero vectors {v1, v2, ..., vk} in V is called an orthogonal set of vectors if ⟨vi, vj⟩ = 0 whenever i ≠ j. (That is, every vector is orthogonal to every other vector in the set.)
3. A vector v in V is called a unit vector if ||v|| = 1.
4. An orthogonal set of unit vectors is called an orthonormal set of vectors. Thus, {v1, v2, ..., vk} in V is an orthonormal set if and only if
(a) ⟨vi, vj⟩ = 0 whenever i ≠ j, and
(b) ⟨vi, vi⟩ = 1 for each i = 1, 2, ..., k.
Remarks
1. The conditions in (4a) and (4b) can be written compactly in terms of the Kronecker delta symbol as
⟨vi, vj⟩ = δij, i, j = 1, 2, ..., k.
2. Note that the inner products occurring in Definition 4.12.1 will depend upon which inner product space we are working in.
3. If v is any nonzero vector, then (1/||v||) v is a unit vector, since the properties of an inner product imply that
⟨(1/||v||) v, (1/||v||) v⟩ = (1/||v||²) ⟨v, v⟩ = (1/||v||²) ||v||² = 1.
Example 4.12.2 Verify that {(2, −1, 3, 0), (0, −3, −1, 6), (2, 4, 0, 2)} is an orthogonal set of vectors in R⁴, and use it to construct an orthonormal set of vectors in R⁴.

Solution: Let v1 = (2, −1, 3, 0), v2 = (0, −3, −1, 6), and v3 = (2, 4, 0, 2). Then
⟨v1, v2⟩ = 0, ⟨v1, v3⟩ = 0, ⟨v2, v3⟩ = 0,
so that the given set of vectors is an orthogonal set. Dividing each vector in the set by its norm yields the following orthonormal set:
{ (1/√14) v1, (1/√46) v2, (1/(2√6)) v3 }.
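The arithmetic here is easy to check by machine. A minimal sketch (NumPy, assuming the standard inner product on R⁴ and the vectors as reconstructed above):

import numpy as np

# The orthogonal set from Example 4.12.2
vecs = [np.array([2.0, -1.0, 3.0, 0.0]),
        np.array([0.0, -3.0, -1.0, 6.0]),
        np.array([2.0, 4.0, 0.0, 2.0])]

# All pairwise (standard) inner products should vanish
for i in range(len(vecs)):
    for j in range(i + 1, len(vecs)):
        assert np.isclose(vecs[i] @ vecs[j], 0.0)

# Dividing each vector by its norm yields an orthonormal set
unit_vecs = [v / np.linalg.norm(v) for v in vecs]
for u in unit_vecs:
    assert np.isclose(u @ u, 1.0)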
Example 4.12.3 Verify that the functions f1(x) = 1, f2(x) = sin x, and f3(x) = cos x are orthogonal in C⁰[−π, π], and use them to construct an orthonormal set of functions in C⁰[−π, π].

Solution: In this case, we have
⟨f1, f2⟩ = ∫_{−π}^{π} sin x dx = 0, ⟨f1, f3⟩ = ∫_{−π}^{π} cos x dx = 0,
⟨f2, f3⟩ = ∫_{−π}^{π} sin x cos x dx = [½ sin²x]_{−π}^{π} = 0,
so that the functions are indeed orthogonal on [−π, π]. Taking the norm of each function, we obtain
||f1|| = √( ∫_{−π}^{π} 1² dx ) = √(2π),
||f2|| = √( ∫_{−π}^{π} sin²x dx ) = √( ½ ∫_{−π}^{π} (1 − cos 2x) dx ) = √π,
||f3|| = √( ∫_{−π}^{π} cos²x dx ) = √( ½ ∫_{−π}^{π} (1 + cos 2x) dx ) = √π.
Hence the corresponding orthonormal set of functions in C⁰[−π, π] is
{ 1/√(2π), (1/√π) sin x, (1/√π) cos x }.
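These integrals can also be confirmed symbolically. A short SymPy sketch, assuming the inner product ⟨f, g⟩ = ∫_{−π}^{π} f(x)g(x) dx on C⁰[−π, π]:

import sympy as sp

x = sp.symbols('x')
f1, f2, f3 = sp.Integer(1), sp.sin(x), sp.cos(x)

def ip(f, g):
    # Inner product on C^0[-pi, pi]
    return sp.integrate(f * g, (x, -sp.pi, sp.pi))

print(ip(f1, f2), ip(f1, f3), ip(f2, f3))        # 0 0 0
print(sp.sqrt(ip(f1, f1)))                        # sqrt(2*pi), i.e. ||f1||
print(sp.sqrt(ip(f2, f2)), sp.sqrt(ip(f3, f3)))   # sqrt(pi), sqrt(pi)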
DEFINITION 4.12.4
A basis {v1, v2, ..., vn} for a (finite-dimensional) inner product space is called an orthogonal basis if
⟨vi, vj⟩ = 0 whenever i ≠ j,
and it is called an orthonormal basis if
⟨vi, vj⟩ = δij, i, j = 1, 2, ..., n.
There are two natural questions at this point: (1) How can we obtain an orthogonal
or orthonormal basis for an inner product space V? (2) Why is it beneficial to work with an orthogonal or orthonormal basis of vectors? We address the second question first.
In light of our work in previous sections of this chapter, the importance of our next
theorem should be self-evident.
Theorem 4.12.5 If {v1 , v2 , . . . , vk } is an orthogonal set of nonzero vectors in an inner product space V ,
then {v1 , v2 , . . . , vk } is linearly independent.
Example 4.12.6 Let V = M2(R), let W be the subspace of all 2 × 2 symmetric matrices, and let
S = { [2 −1; −1 0], [1 1; 1 −2], [2 2; 2 3] },
where [a b; c d] denotes the 2 × 2 matrix with rows (a, b) and (c, d). Verify that S is an orthogonal basis for W.
Recall that if {v1, v2, ..., vn} is a basis for an inner product space V, then each vector v in V can be written uniquely as
v = c1v1 + c2v2 + ... + cnvn, (4.12.2)
where the unique n-tuple (c1, c2, ..., cn) consists of the components of v relative to the given basis. It is easier to determine the components ci in the case of an orthogonal basis than it is for other bases, because we can simply form the inner product of both sides of (4.12.2) with vi as follows:
⟨v, vi⟩ = ⟨c1v1 + c2v2 + ... + cnvn, vi⟩
= c1⟨v1, vi⟩ + c2⟨v2, vi⟩ + ... + cn⟨vn, vi⟩
= ci ||vi||²,
where the last step follows from the orthogonality properties of the basis {v1 , v2 , . . . , vn }.
Therefore, we have proved the following theorem.
Theorem 4.12.7 Let V be a (finite-dimensional) inner product space with orthogonal basis {v1, v2, ..., vn}. Then any vector v ∈ V may be expressed in terms of the basis as
v = (⟨v, v1⟩/||v1||²) v1 + (⟨v, v2⟩/||v2||²) v2 + ... + (⟨v, vn⟩/||vn||²) vn.
Theorem 4.12.7 gives a simple formula for writing an arbitrary vector in an inner
product space V as a linear combination of vectors in an orthogonal basis for V . Let us
illustrate with an example.
Example 4.12.8 Let V, W, and S be as in Example 4.12.6. Find the components of the vector
v = [0 1; 1 2]
relative to S.

Solution: From the formula given in Theorem 4.12.7, we have
v = −(2/6) [2 −1; −1 0] − (2/7) [1 1; 1 −2] + (10/21) [2 2; 2 3],
so the components of v relative to S are (−1/3, −2/7, 10/21).

¹¹ This defines a valid inner product on V by Problem 4 in Section 4.11.
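The component computation in Example 4.12.8 can be verified numerically. A sketch (NumPy), assuming the entrywise inner product of Problem 4 in Section 4.11 and the matrices of Example 4.12.6:

import numpy as np

A1 = np.array([[2.0, -1.0], [-1.0, 0.0]])
A2 = np.array([[1.0, 1.0], [1.0, -2.0]])
A3 = np.array([[2.0, 2.0], [2.0, 3.0]])
v = np.array([[0.0, 1.0], [1.0, 2.0]])

def ip(A, B):
    # <A, B> = a11*b11 + a12*b12 + a21*b21 + a22*b22
    return float(np.sum(A * B))

# Components relative to the orthogonal basis: c_i = <v, A_i>/||A_i||^2
coeffs = [ip(v, A) / ip(A, A) for A in (A1, A2, A3)]
print(coeffs)  # approximately (-1/3, -2/7, 10/21)

# Reassembling v from its components recovers v exactly
assert np.allclose(sum(c * A for c, A in zip(coeffs, (A1, A2, A3))), v)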
Corollary 4.12.9 Let V be a (finite-dimensional) inner product space with an orthonormal basis {v1, v2, ..., vn}. Then any vector v ∈ V may be expressed in terms of the basis as
v = ⟨v, v1⟩ v1 + ⟨v, v2⟩ v2 + ... + ⟨v, vn⟩ vn.
Remark Corollary 4.12.9 tells us that the components of a given vector v relative to the orthonormal basis {v1, v2, ..., vn} are precisely the numbers ⟨v, vi⟩, for 1 ≤ i ≤ n.
Thus, by working with an orthonormal basis for a vector space, we have a simple method
for getting the components of any vector in the vector space.
Example 4.12.10 We can write an arbitrary vector in Rⁿ, v = (a1, a2, ..., an), in terms of the standard basis {e1, e2, ..., en} by noting that ⟨v, ei⟩ = ai. Thus, v = a1e1 + a2e2 + ... + anen.
Example 4.12.11 We can equip the vector space P1 of all polynomials of degree ≤ 1 with the inner product
⟨p, q⟩ = ∫_{−1}^{1} p(x)q(x) dx,
thus making P1 into an inner product space. Verify that the vectors p0 = 1/√2 and p1 = √(3/2) x form an orthonormal basis for P1, and use Corollary 4.12.9 to write the vector q = 1 + x as a linear combination of p0 and p1.

Solution: We have
⟨p0, p1⟩ = ∫_{−1}^{1} (1/√2) √(3/2) x dx = 0,
||p0|| = √⟨p0, p0⟩ = √( ∫_{−1}^{1} p0² dx ) = √( ∫_{−1}^{1} ½ dx ) = √1 = 1,
||p1|| = √⟨p1, p1⟩ = √( ∫_{−1}^{1} p1² dx ) = √( ∫_{−1}^{1} (3/2) x² dx ) = √( [x³/2]_{−1}^{1} ) = √1 = 1.
Thus, {p0, p1} is an orthonormal (and hence linearly independent) set of vectors in P1. Since dim[P1] = 2, Theorem 4.6.10 shows that {p0, p1} is an (orthonormal) basis for P1.
Finally, we wish to write q = 1 + x as a linear combination of p0 and p1, by using Corollary 4.12.9. We leave it to the reader to verify that ⟨q, p0⟩ = √2 and ⟨q, p1⟩ = √(2/3). Thus, we have
1 + x = √2 p0 + √(2/3) p1 = √2 (1/√2) + √(2/3) (√(3/2) x).
So the component vector of 1 + x relative to {p0, p1} is (√2, √(2/3))ᵀ.
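A symbolic check of this example (SymPy, assuming the integral inner product on P1 defined above):

import sympy as sp

x = sp.symbols('x')
p0 = 1 / sp.sqrt(2)
p1 = sp.sqrt(sp.Rational(3, 2)) * x
q = 1 + x

def ip(f, g):
    return sp.integrate(f * g, (x, -1, 1))

print(ip(p0, p1), ip(p0, p0), ip(p1, p1))  # 0 1 1: {p0, p1} is orthonormal
c0, c1 = ip(q, p0), ip(q, p1)              # components via Corollary 4.12.9
print(c0, c1)                              # sqrt(2), sqrt(6)/3 (= sqrt(2/3))
print(sp.simplify(c0 * p0 + c1 * p1 - q))  # 0, so q = c0*p0 + c1*p1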
The orthogonal projection of a vector w onto a nonzero vector v is given by
P(w, v) = ((w · v)/||v||²) v,
or equivalently, using the notation for the inner product introduced in the previous section,
P(w, v) = (⟨w, v⟩/||v||²) v.
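As a quick numerical illustration (NumPy; the helper name proj is ours):

import numpy as np

def proj(w, v):
    # P(w, v) = (<w, v>/||v||^2) v
    return (w @ v) / (v @ v) * v

w, v = np.array([3.0, 4.0]), np.array([1.0, 0.0])
print(proj(w, v))              # [3. 0.]
print((w - proj(w, v)) @ v)    # 0.0: the residual is orthogonal to v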
Given two linearly independent vectors x1 and x2, set
v1 = x1
and
v2 = x2 − P(x2, v1) = x2 − (⟨x2, v1⟩/||v1||²) v1. (4.12.4)
Note from (4.12.4) that v2 can be written as a linear combination of {x1, x2}, and hence, v2 ∈ span{x1, x2}. Since we also have that x2 ∈ span{v1, v2}, it follows that span{v1, v2} = span{x1, x2}. Next we claim that v2 is orthogonal to v1. We have
⟨v2, v1⟩ = ⟨x2 − (⟨x2, v1⟩/||v1||²) v1, v1⟩ = ⟨x2, v1⟩ − (⟨x2, v1⟩/||v1||²) ⟨v1, v1⟩
= ⟨x2, v1⟩ − ⟨x2, v1⟩ = 0,
which verifies our claim. We have shown that {v1, v2} is an orthogonal set of vectors
which spans the same subspace of V as x1 and x2 .
The calculations just presented can be generalized to prove the following useful
result (see Problem 32).
Lemma 4.12.12 Let {v1, v2, ..., vk} be an orthogonal set of vectors in an inner product space V. If x ∈ V, then the vector
x − P(x, v1) − P(x, v2) − ... − P(x, vk)
is orthogonal to vi for each i.
Now suppose we are given a linearly independent set of vectors {x1 , x2 , . . . , xm } in
an inner product space V . Using Lemma 4.12.12, we can construct an orthogonal basis for
the subspace of V spanned by these vectors. We begin with the vector v1 = x1 as above,
and we define vi by subtracting off appropriate projections of xi on v1, v2, ..., v_{i−1}.
The resulting procedure is called the Gram-Schmidt orthogonalization procedure.
The formal statement of the result is as follows.
Theorem 4.12.13 (Gram-Schmidt Process) Let {x1, x2, ..., xm} be a linearly independent set of vectors in an inner product space V. Then an orthogonal basis for the subspace of V spanned by these vectors is {v1, v2, ..., vm}, where
v1 = x1,
v2 = x2 − (⟨x2, v1⟩/||v1||²) v1,
v3 = x3 − (⟨x3, v1⟩/||v1||²) v1 − (⟨x3, v2⟩/||v2||²) v2,
...
vi = xi − Σ_{k=1}^{i−1} (⟨xi, vk⟩/||vk||²) vk,
...
vm = xm − Σ_{k=1}^{m−1} (⟨xm, vk⟩/||vk||²) vk.
Proof Lemma 4.12.12 shows that {v1 , v2 , . . . , vm } is an orthogonal set of vectors. Thus,
both {v1 , v2 , . . . , vm } and {x1 , x2 , . . . , xm } are linearly independent sets, and hence
their spans span{v1, v2, ..., vm} and span{x1, x2, ..., xm} are m-dimensional subspaces of V. (Why?) Moreover, from the formulas given in Theorem 4.12.13, we see that each xi ∈ span{v1, v2, ..., vm}, and so span{x1, x2, ..., xm} is a subset of span{v1, v2, ..., vm}. Thus, by Corollary 4.6.14,
span{v1, v2, ..., vm} = span{x1, x2, ..., xm}.
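The procedure translates directly into code. A minimal sketch (NumPy, assuming the standard inner product on Rⁿ; the function name and sample vectors are our choices, picked so that the last output matches the Gram-Schmidt computation continued below):

import numpy as np

def gram_schmidt(xs):
    # Implements the formulas of Theorem 4.12.13; assumes xs is
    # a linearly independent list of vectors in R^n.
    vs = []
    for xi in xs:
        vi = xi.astype(float)
        for vk in vs:
            # Subtract the projection P(x_i, v_k) = (<x_i, v_k>/||v_k||^2) v_k
            vi = vi - (xi @ vk) / (vk @ vk) * vk
        vs.append(vi)
    return vs

xs = [np.array([1, 0, 1, 0]), np.array([1, 1, 1, 1]), np.array([1, 2, 0, 1])]
v1, v2, v3 = gram_schmidt(xs)
print(v1, v2, v3)  # [1 0 1 0], [0 1 0 1], [0.5 0.5 -0.5 -0.5]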
and
v3 = x3 − (⟨x3, v1⟩/||v1||²) v1 − (⟨x3, v2⟩/||v2||²) v2
= (1, 2, 0, 1) − (1/2)(1, 0, 1, 0) − (3/2)(0, 1, 0, 1)
= (1/2, 1/2, −1/2, −1/2).
Example 4.12.15 Determine an orthogonal basis for the subspace of C⁰[−1, 1] spanned by the functions f1(x) = x, f2(x) = x³, f3(x) = x⁵, using the same inner product introduced in the previous section.

Solution: In this case, we let {g1, g2, g3} denote the orthogonal basis, and we apply the Gram-Schmidt process. Thus, g1(x) = x, and
g2(x) = f2(x) − (⟨f2, g1⟩/||g1||²) g1(x). (4.12.5)
We have
⟨f2, g1⟩ = ∫_{−1}^{1} f2(x)g1(x) dx = ∫_{−1}^{1} x⁴ dx = 2/5
and
||g1||² = ⟨g1, g1⟩ = ∫_{−1}^{1} x² dx = 2/3.
Consequently, from (4.12.5),
g2(x) = x³ − (3/5)x = (1/5) x (5x² − 3).
Next,
g3(x) = f3(x) − (⟨f3, g1⟩/||g1||²) g1(x) − (⟨f3, g2⟩/||g2||²) g2(x). (4.12.6)
Evaluating the required integrals gives ⟨f3, g1⟩/||g1||² = 3/7 and ⟨f3, g2⟩/||g2||² = 10/9, so that
g3(x) = x⁵ − (3/7)x − (2/9) x (5x² − 3) = (1/63)(63x⁵ − 70x³ + 15x).
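The same computation can be delegated to a computer algebra system. A SymPy sketch, assuming the inner product ⟨f, g⟩ = ∫_{−1}^{1} f(x)g(x) dx:

import sympy as sp

x = sp.symbols('x')

def ip(f, g):
    return sp.integrate(f * g, (x, -1, 1))

fs = [x, x**3, x**5]      # f1, f2, f3 from Example 4.12.15
gs = []
for f in fs:
    g = f
    for h in gs:
        g = g - ip(f, h) / ip(h, h) * h   # Gram-Schmidt step
    gs.append(sp.expand(g))

print(gs)  # [x, x**3 - 3*x/5, x**5 - 10*x**3/9 + 5*x/21]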
True-False Review

For Questions 1–7, decide if the given statement is true or false, and give a brief justification for your answer. If true, you can quote a relevant definition or theorem from the text. If false, provide an example, illustration, or brief explanation of why the statement is false.

1. Every orthonormal basis for an inner product space V is also an orthogonal basis for V.

2. Every linearly independent set of vectors in an inner product space V is orthogonal.

3. With the inner product ⟨f, g⟩ = ∫_0^π f(t)g(t) dt, the functions f(x) = cos x and g(x) = sin x are an orthogonal basis for span{cos x, sin x}.

4. The Gram-Schmidt process applied to the vectors {x1, x2, x3} yields the same basis as the Gram-Schmidt process applied to the vectors {x3, x2, x1}.

5. In expressing the vector v as a linear combination of the orthogonal basis {v1, v2, ..., vn} for an inner product space V, the coefficient of vi is
ci = ⟨v, vi⟩ / ||vi||².

6. If u and v are orthogonal vectors and w is any vector, then
P(P(w, v), u) = 0.

7. If w1, w2, and v are vectors in an inner product space V, then
P(w1 + w2, v) = P(w1, v) + P(w2, v).

Problems

For Problems 1–4, determine whether the given set of vectors is an orthogonal set in Rⁿ. For those that are, determine a corresponding orthonormal set of vectors.

1. {(2, −1, 1), (1, 1, −1), (0, 1, 1)}.

2. {(1, 3, −1, 1), (−1, 1, 1, −1), (1, 0, 2, 1)}.

3. {(1, 2, −1, 0), (1, 0, 1, 2), (−1, 1, 1, 0), (1, −1, −1, 0)}.

4. {(1, 2, 1, 0, 3), (1, 1, 0, 2, −1), (4, 2, 4, −5, −4)}.

5. Let v1 = (1, 2, 3), v2 = (1, 1, −1). Determine all nonzero vectors w such that {v1, v2, w} is an orthogonal set. Hence obtain an orthonormal set of vectors in R³.

For Problems 6–7, show that the given set of vectors is an orthogonal set in Cⁿ, and hence obtain an orthonormal set of vectors in Cⁿ in each case.

6. {(1 − i, 3 + 2i), (2 + 3i, 1 − i)}.

7. {(1 − i, 1 + i, i), (0, i, 1 − i), (−3 + 3i, 2 + 2i, 2i)}.

8. Consider the vectors v = (1 − i, 1 + 2i), w = (2 + i, z) in C². Determine the complex number z such that {v, w} is an orthogonal set of vectors, and hence obtain an orthonormal set of vectors in C².

For Problems 9–10, show that the given functions in C⁰[−1, 1] are orthogonal, and use them to construct an orthonormal set of functions in C⁰[−1, 1].

9. f1(x) = 1, f2(x) = sin πx, f3(x) = cos πx.

10. f1(x) = 1, f2(x) = x, f3(x) = ½(3x² − 1). These are the Legendre polynomials that arise as solutions of the Legendre differential equation
(1 − x²)y″ − 2xy′ + n(n + 1)y = 0,
when n = 0, 1, 2, respectively.

For Problems 11–12, show that the given functions are orthonormal on [−1, 1].

11. f1(x) = sin πx, f2(x) = sin 2πx, f3(x) = sin 3πx. [Hint: The trigonometric identity
sin a sin b = ½[cos(a − b) − cos(a + b)]
will be useful.]

12. f1(x) = cos πx, f2(x) = cos 2πx, f3(x) = cos 3πx.

13. Let
A1 = [1 1; 1 −2], A2 = [1 −1; 2 1], and A3 = [1 3; 0 2].
Use the inner product
⟨A, B⟩ = a11b11 + a12b12 + a21b21 + a22b22
to find all matrices
A4 = [a b; c d]
such that {A1, A2, A3, A4} is an orthogonal set of matrices in M2(R).
For Problems 14–19, use the Gram-Schmidt process to determine an orthonormal basis for the subspace of Rⁿ spanned by the given set of vectors.

17. {(1, 0, 1, 0), (1, 1, 1, 0), (1, 1, 0, 1)}.

18. {(1, 2, 0, 1), (2, 1, 1, 0), (1, 0, 2, 1)}.

19. {(1, 1, −1, 0), (1, 0, 1, 1), (2, −1, 2, 1)}.

On M2(R) define the inner product ⟨A, B⟩ by
⟨A, B⟩ = 5a11b11 + 2a12b12 + 3a21b21 + 5a22b22
for all matrices A = [aij] and B = [bij]. For Problems 26–27, use this inner product in the Gram-Schmidt procedure to determine an orthogonal basis for the subspace of M2(R) spanned by the given matrices.

26. A1 = [1 1; 2 1], A2 = [2 3; 4 1].

27. A1 = [0 1; 1 0], A2 = [0 1; 1 1], A3 = [1 1; 1 0]. Also identify the subspace of M2(R) spanned by {A1, A2, A3}.

On Pn, define the inner product ⟨p1, p2⟩ by
⟨p1, p2⟩ = a0b0 + a1b1 + ... + anbn
for all polynomials p1(x) = a0 + a1x + ... + anxⁿ and p2(x) = b0 + b1x + ... + bnxⁿ. For Problems 28–29, use this inner product to determine an orthogonal basis for the subspace of Pn spanned by the given polynomials.

28. p1(x) = 1 − 2x + 2x², p2(x) = 2 − x − x².

Let W be a subspace of an inner product space V, and define
W⊥ = {v ∈ V : ⟨v, w⟩ = 0 for all w in W}.
The set W⊥ is called the orthogonal complement of W in V. Problems 33–38 explore this concept in some detail. Deeper applications can be found in Project 1 at the end of this chapter.

33. Prove that W⊥ is a subspace of V.

34. Let V = R³ and let
W = span{(1, 1, 1)}.
Find W⊥.

35. Let V = R⁴ and let
W = span{(0, 1, 1, 3), (1, 0, 0, 3)}.
Find W⊥.
36. Let V = M2(R) and let W be the subspace of 2 × 2 symmetric matrices. Compute W⊥.

37. Prove that W ∩ W⊥ = {0}. (That is, W and W⊥ have no nonzero elements in common.)

38. Prove that if W1 is a subset of W2, then (W2)⊥ is a subset of (W1)⊥.

39. The subject of Fourier series is concerned with the representation of a 2π-periodic function f as the following infinite linear combination of the set of functions {1, sin nx, cos nx}_{n=1}^∞:
f(x) = ½ a0 + Σ_{n=1}^∞ (an cos nx + bn sin nx). (4.12.7)
In this problem, we investigate the possibility of performing such a representation.
(a) Use appropriate trigonometric identities, or some form of technology, to verify that the set of functions
{1, sin nx, cos nx}_{n=1}^∞
is orthogonal on the interval [−π, π].
(b) By multiplying (4.12.7) by cos mx and integrating over the interval [−π, π], show that
a0 = (1/π) ∫_{−π}^{π} f(x) dx
and
am = (1/π) ∫_{−π}^{π} f(x) cos mx dx.
[Hint: You may assume that interchange of the infinite summation with the integral is permissible.]
(c) Use a similar procedure to show that
bm = (1/π) ∫_{−π}^{π} f(x) sin mx dx.
It can be shown that if f is in C¹(−π, π), then Equation (4.12.7) holds for each x ∈ (−π, π). The series appearing on the right-hand side of (4.12.7) is called the Fourier series of f, and the constants in the summation are called the Fourier coefficients for f.
(d) Show that the Fourier coefficients for the function
f(x) = x, −π < x ≤ π, f(x + 2π) = f(x),
are
an = 0, n = 0, 1, 2, ...,
bn = −(2/n) cos nπ, n = 1, 2, ...,
and thereby determine the Fourier series of f.
(e) Using some form of technology, sketch the approximations to f(x) = x on the interval (−π, π) obtained by considering the first three terms, first five terms, and first ten terms in the Fourier series for f. What do you conclude?
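For part (e) of Problem 39, any plotting tool will do. A sketch (NumPy/Matplotlib) of the partial sums, using the coefficients bn = −(2/n) cos nπ from part (d):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-np.pi, np.pi, 1000)

def partial_sum(x, N):
    # Fourier series of f(x) = x: all a_n = 0, b_n = -(2/n)cos(n*pi)
    s = np.zeros_like(x)
    for n in range(1, N + 1):
        s += -(2.0 / n) * np.cos(n * np.pi) * np.sin(n * x)
    return s

for N in (3, 5, 10):
    plt.plot(x, partial_sum(x, N), label=f'first {N} terms')
plt.plot(x, x, 'k--', label='f(x) = x')
plt.legend()
plt.show()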
To specify a vector space, we need four ingredients:
1. A set of vectors V.
2. A set of scalars F (either the set of real numbers R, or the set of complex numbers C).
3. A rule, +, for adding vectors in V.
4. A rule, ·, for multiplying vectors in V by scalars in F.
Then (V, +, ·) is a vector space over F if and only if axioms A1–A10 of Definition 4.2.1 are satisfied. If F is the set of all real numbers, then (V, +, ·) is called a real vector space, whereas if F is the set of all complex numbers, then (V, +, ·) is called a complex
vector space. Since it is usually quite clear what the addition and scalar multiplication
operations are, we usually specify a vector space by giving only the set of vectors V .
The major vector spaces we have dealt with are the following:
Rⁿ: the (real) vector space of all ordered n-tuples of real numbers.
Cⁿ: the (complex) vector space of all ordered n-tuples of complex numbers.
Mn(R): the (real) vector space of all n × n matrices with real elements.
Cᵏ(I): the vector space of all real-valued functions that are continuous and have (at least) k continuous derivatives on I.
Pn: the vector space of all polynomials of degree ≤ n with real coefficients.
Subspaces
Usually the vector space V that underlies a given problem is known. It is often one that
appears in the list above. However, the solution of a given problem in general involves
only a subset of vectors from this vector space. The question that then arises is whether
this subset of vectors is itself a vector space under the same operations of addition and
scalar multiplication as in V . In order to answer this question, Theorem 4.3.2 tells us
that a nonempty subset of a vector space V is a subspace of V if and only if the subset
is closed under addition and closed under scalar multiplication.
Spanning Sets
A set of vectors {v1, v2, ..., vk} in a vector space V is said to span V if every vector in V can be written as a linear combination of v1, v2, ..., vk; that is, if for every v ∈ V, there exist scalars c1, c2, ..., ck such that
v = c1v1 + c2v2 + ... + ckvk.
Given a set of vectors {v1 , v2 , . . . , vk } in a vector space V , we can form the set of all
vectors that can be written as a linear combination of v1 , v2 , . . . , vk . This collection of
vectors is a subspace of V called the subspace spanned by {v1 , v2 , . . . , vk }, and denoted
span{v1 , v2 , . . . , vk }. Thus,
span{v1, v2, ..., vk} = {v ∈ V : v = c1v1 + c2v2 + ... + ckvk for some scalars c1, c2, ..., ck}.
Additional Problems

For Problems 1–2, let r and s denote scalars and let v and w denote vectors in R⁵.

1. Prove that (r + s)v = rv + sv.

2. Prove that r(v + w) = rv + rw.

For Problems 3–13, determine whether the given set (together with the usual operations on that set) forms a vector space over R. In all cases, justify your answer carefully.

3. The set of polynomials of degree 5 or less whose coefficients are even integers.

4. The set of all polynomials of degree 5 or less whose coefficients of x² and x³ are zero.

5. The set of solutions to the linear system
2x2 + 5x3 = 7,
4x1 − 6x2 + 3x3 = 0.
6. The set of solutions to the linear system
4x1 − 7x2 + 2x3 = 0,
5x1 − 2x2 + 9x3 = 0.

7. The set of 2 × 2 real matrices whose entries are either all zero or all nonzero.

8. The set of 2 × 2 real matrices that commute with the matrix
[1 2; 0 2].

9. The set of all functions f : [0, 1] → [0, 1] such that f(0) = f(1/4) = f(1/2) = f(3/4) = f(1) = 0.

10. The set of all functions f : [0, 1] → [0, 1] such that f(x) ≤ x for all x in [0, 1].

11. The set of n × n matrices A such that A² is symmetric.

12. The set of all points (x, y) in R² that are equidistant from (1, −2) and (−1, 2).

13. The set of all points (x, y, z) in R³ that are a distance 5 from the point (0, 3, 4).

14. Let
V = {(a1, a2) : a1, a2 ∈ R, a2 > 0}.
Define addition and scalar multiplication on V as follows:
(a1, a2) + (b1, b2) = (a1 + b1, a2b2),
k(a1, a2) = (ka1, a2^k), k ∈ R.
Explicitly verify that V is a vector space over R.

15. Show that
W = {(a, 2^a) : a ∈ R}
is a subspace of the vector space V given in the preceding problem.

16. Show that {(1, 2), (3, 8)} is a linearly dependent set in the vector space V in Problem 14.

17. Show that {(1, 4), (2, 1)} is a basis for the vector space V in Problem 14.

18. What is the dimension of the subspace of P2 given by
W = span{2 + x², 4 − 2x + 3x², 1 + x}?

For Problems 19–24, decide (with justification) whether W is a subspace of V.

19. V = R², W = {(x, y) : x² − y = 0}.

20. V = R², W = {(x, x³) : x ∈ R}.

21. V = M2(R), W = {2 × 2 orthogonal matrices}. [An n × n matrix A is orthogonal if it is invertible and A⁻¹ = Aᵀ.]

22. V = C[a, b], W = {f ∈ V : f(a) = 2f(b)}.

23. V = C[a, b], W = {f ∈ V : ∫_a^b f(x) dx = 0}.

24. V = M3×2(R),
W = { [a b; c d; e f] : a + b = c + f and a − c = e − f − d }.

For Problems 25–32, decide (with justification) whether or not the given set S of vectors (a) spans V, and (b) is linearly independent.

25. V = R³, S = {(5, 1, 2), (7, 1, 1)}.

26. V = R³, S = {(6, 3, 2), (1, 1, 1), (1, 8, 1)}.

27. V = R⁴, S = {(6, 3, 2, 0), (1, 1, 1, 0), (1, 8, 1, 0)}.

28. V = R³, S = {(10, 6, 5), (3, 3, 2), (0, 0, 0), (6, 4, 1), (7, 7, 2)}.

29. V = P3, S = {2x − x³, 1 + x + x², 3, x}.

30. V = P4, S = {x⁴ + x² + 1, x² + x + 1, x + 1, x⁴ + 2x + 3}.

31. V = M2×3(R),
S = { [1 0 0; 0 1 1], [3 2 1; 1 2 3], [1 2 3; 3 2 1], [11 6 5; 1 2 5] }.

32. V = M2(R),
S = { [1 2; 2 1], [3 4; 4 3], [2 1; 1 2], [3 0; 0 3], [2 0; 0 0] }.

33. Prove that if {v1, v2, v3} is linearly independent and v4 is not in span{v1, v2, v3}, then {v1, v2, v3, v4} is linearly independent.
37. Let V and W be vector spaces, and let V × W = {(v, w) : v ∈ V, w ∈ W}, with addition and scalar multiplication defined componentwise. Prove that
(a) V × W is a vector space, under componentwise operations.
(b) Via the identification v ↦ (v, 0), V is a subspace of V × W, and likewise for W.
(c) If dim[V] = n and dim[W] = m, then dim[V × W] = m + n. [Hint: Write a basis for V × W in terms of bases for V and W.]

38. Show that a basis for P3 need not contain a polynomial of each degree 0, 1, 2, 3.

39. Prove that if A is a matrix whose nullspace and column space are the same, then A must have an even number of columns.

40. Let
B = [b1; b2; ...; bn] and C = [c1 c2 ... cn]
be an n × 1 column vector and a 1 × n row vector, respectively. Prove that if all entries b1, b2, ..., bn and c1, c2, ..., cn are nonzero, then the n × n matrix A = BC has nullity n − 1.

For Problems 41–44, find a basis and the dimension for the row space, column space, and null space of the given matrix A.

45. A = [1 0 2; 1 3 5].

46. A = [1 3 1; 0 2 3; 1 5 2; 1 5 8].

For Problems 47–50, find an orthogonal basis for the span of the set S, where S is given in

47. Problem 25.

48. Problem 26.

49. Problem 29, using ⟨p, q⟩ = ∫_0^1 p(t)q(t) dt.

50. Problem 32, using the inner product defined in Problem 4 of Section 4.11.

For Problems 51–54, determine the angle between the given vectors u and v using the standard inner product on Rⁿ.

51. u = (2, 3) and v = (4, 1).

52. u = (2, 1, 2, 4) and v = (3, 5, 1, 1).

53. Repeat Problems 51–52 for the inner product on Rⁿ given by
⟨u, v⟩ = 2u1v1 + u2v2 + u3v3 + ... + unvn.
54. Let t0, t1, ..., tn be real numbers. For p and q in Pn, define
p · q = p(t0)q(t0) + p(t1)q(t1) + ... + p(tn)q(tn).
(a) Prove that p · q defines a valid inner product on Pn.
(b) Let t0 = −3, t1 = −1, t2 = 1, and t3 = 3. Let p0(t) = 1, p1(t) = t, and p2(t) = t². Find a polynomial q that is orthogonal to p0 and p1, such that {p0, p1, q} is an orthogonal basis for span{p0, p1, p2}.

55. Find the distance from the point (2, 3, 4) to the line in R³ passing through (0, 0, 0) and (6, 1, 4).

56. Let V be an inner product space with basis {v1, v2, ..., vn}. If x and y are vectors in V such that x · vi = y · vi for each i = 1, 2, ..., n, prove that x = y.

57. State as many conditions as you can on an n × n matrix A that are equivalent to its invertibility.
Show that W⊥ is a subspace of V and that W and W⊥ share only the zero vector:
W ∩ W⊥ = {0}.

Part 2 Examples

Prove that, for any matrix A,
(rowspace(A))⊥ = nullspace(A)
and
(colspace(A))⊥ = nullspace(Aᵀ).
Use this to find the orthogonal complement of the row space and column space of the matrices below:
(i) A = [3 1 1; 6 0 4].
(ii) A = [1 0 6 2; 3 1 0 4; 1 1 1 1].
Find the orthogonal complement in R³ of each of the following:
(i) the line in R³ containing the points (0, 0, 0) and (2, 1, 3).
(ii) the plane 2x + 3y − 4z = 0 in R³.
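These identities reduce the computation of such orthogonal complements to a null-space calculation. A minimal SymPy sketch for matrix (i) above:

import sympy as sp

A = sp.Matrix([[3, 1, 1],
               [6, 0, 4]])

# (rowspace(A))^perp = nullspace(A)
print(A.nullspace())    # a basis for the orthogonal complement of rowspace(A)

# (colspace(A))^perp = nullspace(A^T)
print(A.T.nullspace())  # empty: the columns of A already span R^2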
Unless the data points are collinear, the system Ax = y obtained in part (a) has no solution for x. In other words, the vector y does not lie in the column space of A. The goal then becomes to find x0 such that the distance ||y − Ax0|| is as small as possible. This will happen precisely when y − Ax0 is perpendicular to the column space of A. In other words, for all x ∈ R², we must have
(Ax) · (y − Ax0) = 0.
(b) Using the fact that the dot product of vectors u and v can be written as a matrix multiplication,
u · v = uᵀv,
show that
(Ax) · (y − Ax0) = x · (Aᵀy − AᵀAx0).
x0 = (AᵀA)⁻¹Aᵀy,
and therefore,
Ax0 = A(AᵀA)⁻¹Aᵀy.
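A sketch of the normal-equations computation (NumPy), using the data of part (a) below:

import numpy as np

# Data points (x_i, y_i) for a least-squares line y = a + b*x
pts = np.array([(0.0, 2.0), (1.0, 1.0), (2.0, 1.0), (3.0, 2.0), (4.0, 2.0)])
A = np.column_stack([np.ones(len(pts)), pts[:, 0]])  # design matrix
y = pts[:, 1]

# x0 = (A^T A)^{-1} A^T y, via the normal equations (A^T A) x0 = A^T y
x0 = np.linalg.solve(A.T @ A, A.T @ y)
print(x0)  # [a, b]: intercept and slope of the least-squares line

# A x0 is the projection of y onto colspace(A); the residual is orthogonal
assert np.allclose(A.T @ (y - A @ x0), 0)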
Part 2 Some Applications

In parts (a)–(d) below, find the equation of the least-squares line to the given data points.

(a) (0, 2), (1, 1), (2, 1), (3, 2), (4, 2).

(d) (−3, 1), (−2, 0), (−1, 1), (0, 1), (2, 1).

In parts (e)–(f), by using the ideas in this project, find the distance from the point P to the given plane.