Linear and Geometric Algebra (Geometric Algebra & Calculus). ISBN 1453854932, 9781453854938.

This textbook for the first undergraduate linear algebra course presents a unified treatment of linear algebra and geometric algebra.


English · 224 pages · 2011


Table of contents :
Contents
Preface
To the Student
I Linear Algebra
1 Vectors
1.1 Oriented Lengths
1.2 Rn
2 Vector Spaces
2.1 Vector Spaces
2.2 Subspaces
2.3 Linear Combinations
2.4 Linear Independence
2.5 Bases
2.6 Dimension
3 Matrices
3.1 Matrices
3.2 Systems of Linear Equations
4 Inner Product Spaces
4.1 Oriented Lengths
4.2 Rn
4.3 Inner Product Spaces
4.4 Orthogonality
II Geometric Algebra
5 G3
5.1 Oriented Areas
5.2 Oriented Volumes
5.3 G3
5.4 Generalized Complex Numbers
5.5 Rotations in R3
6 Gn
6.1 Gn
6.2 How Geometric Algebra Works
6.3 Inner and Outer Products
6.4 The Dual
6.5 Product Properties
6.6 Blades as Outer Products
7 Project, Rotate, Reflect
7.1 Project
7.2 Rotate
7.3 Reflect
III Linear Transformations
8 Linear Transformations
8.1 Linear Transformations
8.2 The Adjoint Transformation
8.3 Outermorphisms
8.4 The Determinant
9 Representations
9.1 Matrix Representations
9.2 Eigenvectors
9.3 Invariant Subspaces
9.4 Symmetric Transformations
9.5 Orthogonal Transformations
9.6 Skew Transformations
9.7 Singular Value Decomposition
IV Appendices
A Prerequisites
A.1 Sets
A.2 Logic
A.3 Theorems
A.4 Functions
B Software
B.1 Linear Algebra
B.2 Geometric Algebra
Index


Linear and Geometric Algebra
Alan Macdonald
Luther College, Decorah, IA 52101 USA
[email protected]
faculty.luther.edu/~macdonal

"Geometry without algebra is dumb! Algebra without geometry is blind!" - David Hestenes

"The principal argument for the adoption of geometric algebra is that it provides a single, simple mathematical framework which eliminates the plethora of diverse mathematical descriptions and techniques it would otherwise be necessary to learn." - Allan McRobie and Joan Lasenby

To David Hestenes, founder, chief theoretician, and most forceful advocate for modern geometric algebra and calculus, and inspiration for this book.

To my grandchildren, Aida, Pablo, Miles, and Graham.

Copyright © 2010 Alan Macdonald


where 0 < t1 < 1 and 0 < t2 < 1. What is the set of all such oriented lengths? Hint: See Exercise 1.5.

Fig. 1.22: "Clock" vectors.

1.1.3. Let [u] = (2, 1) and [v] = (1, -1). Draw u, v, u + v, and u - v.
1.1.4. Let [u] = (1, 2, 3), [v] = (4, 5, 6), and [w] = (5, 6, 7). Compute the coordinates of the following vectors.
a. u + v.
b. 7u + 2w.
d. 3u + 2v + w.
1.1.5. What are the coordinates of the oriented length whose tail has coordinates (1, 2, 3) and whose head has coordinates (3, -2, 7)?
1.1.6. Let [u] = (2, 3, 4). Suppose that we place the tail of u at (0, 3, -3). What are the coordinates of its head?


1.1.7. a. Let a and b be given oriented lengths and c ≠ 0 a given scalar. Solve algebraically for the unknown oriented length v: a = cv + b.
b. Solve for (x, y): (8, 6) = 2(x, y) + (2, 4).

1.1.8. For what value or values of y is (2, y) parallel to (-4, 8)?
An origin. Some applications of oriented lengths require that some point be designated as the origin. Then the heads of those oriented lengths with their tail at the origin are in one-to-one correspondence with points: v ↔ P. See Figure 1.23, where the origin is labeled "O" and, for example, v1 ↔ P1. Problems 1.1.9, 1.1.10, 4.1.3, and 5.1.5 are examples: they use an origin to write vector equations of points on lines and planes.

Fig. 1.24: v = v2 + t(v1 - v2).

1.1.9 (Equation of a line). The equation of a line in a plane, y = mx + b, relates the variables x and y to each other. A line in three dimensions is commonly represented using an extra scalar, called a parameter. Instead of relating variables x, y, and z to each other, they are all related to the parameter.
Consider the line ℓ in three dimensions through the heads of vectors v1 and v2. See Figure 1.24. The vector v1 - v2 is parallel to ℓ. The head of a vector v is on ℓ if and only if for some t
v = v2 + t(v1 - v2),   i.e.,   v = t v1 + (1 - t) v2.
This is a vector equation of ℓ with parameter t. A different choice of O would give different vectors v, but the points at their heads would still describe the line.
a. Given coordinates (x, y, z) with origin (0, 0, 0), parameterize the line through the points (4, 5, 6) and (1, 2, 3).
b. Take components of your answer to Part (a) to give three scalar equations parameterizing ℓ: x(t) = ..., y(t) = ..., z(t) = ....
c. Restrict t in your answer to Part (a) so that it parameterizes the line segment between the points.
d. What is the vector equation of the line through the head of v0 and parallel to v1?
1.1.10 (Equation of a plane). Let v1 and v2 be given vectors. According to Exercise 1.5, as scalars t1 and t2 vary, v = t1 v1 + t2 v2 varies over the plane determined by the two vectors. The plane is parameterized by t1 and t2.
a. Let v0 be another given vector. What does v = v0 + t1 v1 + t2 v2 parameterize?
b. What does v = v0 + t1(v1 - v0) + t2(v2 - v0) parameterize? Hint: Successively set t1 = 0, t2 = 0; t1 = 1, t2 = 0; and t1 = 0, t2 = 1.
c. The scalar equations x = 2t1 + 3t2 + 1, y = t1 - 1, z = -t1 + t2 + 2 parameterize a plane. Give a vector equation for the plane.
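The parameterization in Problem 1.1.9 is easy to experiment with in SymPy, the software of Appendix B. A minimal sketch, with two illustrative points standing in for the heads of v1 and v2 (they are not the points of the problem):

    import sympy as sp

    t = sp.symbols('t')
    p1 = sp.Matrix([1, 0, 2])      # head of v1 (illustrative)
    p2 = sp.Matrix([3, 1, -1])     # head of v2 (illustrative)

    v = p2 + t*(p1 - p2)           # v = v2 + t(v1 - v2) = t v1 + (1 - t) v2
    print(list(v))                 # the scalar equations x(t), y(t), z(t)

    # t = 0 gives the head of v2, t = 1 the head of v1
    assert v.subs(t, 0) == p2 and v.subs(t, 1) == p1

Restricting t to 0 ≤ t ≤ 1, as in Part (c), traces out just the segment between the two heads.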

1.2 Rn

This section extends the notion of vectors to dimensions greater than three. We defined oriented lengths and their operations in three dimensions with pictures. In more than three dimensions we cannot draw pictures, so we cannot take this route. However, we have a theorem which expresses geometric operations on oriented lengths in terms of operations on their coordinates in R3 (Theorem 1.3). This theorem will motivate the definition of vectors and their operations in higher dimensions. In this way vectors in higher dimensions will be as similar as possible to oriented lengths. Moreover, your understanding of oriented lengths in 3 dimensions will help you to develop an intuition about n-dimensional vectors.

Vectors
Definition 1.6 (n-dimensional vector, Rn). An ordered list of n scalars v = (v1, v2, ..., vn) is called an n-dimensional vector. The set of all n-dimensional vectors is denoted Rn.
For example, v = (3, -4.2, π, 0, 17/3) ∈ R5. I hope you see that with our definition there is nothing mysterious about a vector in 5 dimensions.
We live in three dimensions. So why does anyone care about Rn for n > 3? Because Rn for n > 3 is useful and interesting. Here are two examples. There are many, many more.
Linear models. Experimental data is sometimes in the form of n data points (x1, y1), (x2, y2), ..., (xn, yn). When plotted, the points often appear to lie close to some line. The problem of defining and finding the "closest" line to the data points is part of the study of linear models, important in a wide variety of statistical methods. We shall see that the analysis uses Rn (p. 68).
Phase space. In physics, a vector (x1, x2, x3, v1, v2, v3) ∈ R6 often represents a particle P: the xi are the three coordinates of its position, and the vi are the three coordinates of its velocity. As the position and velocity of the particle change, the vector moves around in R6.
That is not all. A system of n particles is represented by a 6n-dimensional vector. For example, a system of two particles, P and P', is represented by the vector (x1, x2, x3, v1, v2, v3, x'1, x'2, x'3, v'1, v'2, v'3) ∈ R12. If the system is four grams of helium in a box, then n ≈ 6 × 10^23. The gas is represented by a vector in R6n! This phase space representation is useful for studying the properties of the gas.


O p erations We use th e coordinate expression of operations on oriented lengths (Theo­ rem 1.3) to m otivate definitions of operations on vectors in R n . D e f in itio n 1.7 (O perations in R n). L et v and w be vectors in R n , w ith v = ( v i , v 2 , . . . i vn ) and w = (w u w 2, . . . , w n ). Then a v = (avl ,a v 2, . . . ,a v n ). V + W = (Vi + W 1,V2 + W 2 , . . . , V n + W n )o = ( 0 , 0 , , 0). For example, 3(1, —2 ,0 ,5 ) = (3, —6 ,0 ,1 5 ). Theorem 1.2 gathered together several properties of L 2 and L 3. T he fol­ lowing theorem states th a t R n enjoys exactly th e sam e properties. Recall th a t “scalar” m eans “real num ber” (p. 4). T h e o r e m 1 .8 . Let V = R” . T h ere are two operations defined on V : scalar m ultiplication a v and vector addition v + w . T here is a zero vector 0. For all vectors u , v, an d w in V n an d all scalars a and b: V0. a v G V , v + w G V . V I. V2. V3. V4. V5. V6. V7.

v + w = w + v. (u + v) + w = u + (v + w ). v + 0=v. Ov - 0. l v = v. a(bv) = (a b )v . a ( v + w) = a v + aw .

V is closed under scalar multipli­ cation and vector addition. Vector addition is commutative. Vector addition is associative. 0 is th e additive identity. 1 is th e multiplicative identity. Scalar m ultiplication distributes over vector addition.

V8. (a + b)v — a v + bv. O ur proofs of theorem s ab o u t oriented lengths used pictures. Here we have no pictures so proofs m ust be algebraic. It is great if a picture is useful for intuition to guide a proof, b u t it cannot be p a rt of a proof. Proof. We prove only V l, leaving th e proofs of some of th e o th er properties to exercises. v + -w — ( V i + W i ,V 2 + w 2, . . . , v n + w n) = (w 1 + v i , w 2 + v 2 , . . . , w n + v n) 3

.

= W + V. Step (1) uses Definition 1.7 of vector addition in R ", Step (2) uses the commu­ ta tiv ity of scalars, and Step (3) again uses the definition of vector addition. □
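The proof above is algebraic, as it must be, but the properties are also easy to spot-check numerically. A minimal sketch in plain Python, with the operations of Definition 1.7 written out on tuples; the function names are ours, and a passing check is evidence, not a proof:

    import random

    def scalar_mul(a, v):
        # av = (a v1, a v2, ..., a vn)   (Definition 1.7)
        return tuple(a * vi for vi in v)

    def add(v, w):
        # v + w = (v1 + w1, ..., vn + wn)
        return tuple(vi + wi for vi, wi in zip(v, w))

    n = 5
    v = tuple(random.randint(-9, 9) for _ in range(n))
    w = tuple(random.randint(-9, 9) for _ in range(n))
    zero = (0,) * n

    assert add(v, w) == add(w, v)        # V1
    assert add(v, zero) == v             # V3
    assert scalar_mul(0, v) == zero      # V4
    assert scalar_mul(2, add(v, w)) == add(scalar_mul(2, v), scalar_mul(2, w))  # V7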


Exercise 1.14. Verify V3: v + 0 = v. Problems 1.2.1-1.2.3 ask you to prove V4, V5, and V8.
You should think of an oriented length as a single object in 2D or 3D, and not as its coordinates. Similarly, you should think of a vector in Rn as a single object, and not as its coordinates, even though the vector is defined using coordinates. Still, it is a single object, a list of coordinates taken as a whole.
Let me elaborate with an analogy. A car is made of a bunch of parts. But you are able to abstract away the details of the parts and think of the car as a single object. Similarly, we distinguish a vector in Rn (a car) from its coordinates (its parts).
Theorem 1.8 lists arithmetic properties of vectors in Rn. The proof of the theorem uses coordinates. But the statement of the theorem abstracts the coordinates away, considering vectors to be single objects, with no coordinates in sight.

Problems 1.2
1.2.1. Verify V4 of Theorem 1.8: 0v = 0.
1.2.2. Verify V5 of Theorem 1.8: 1v = v.
1.2.3. Verify V8 of Theorem 1.8: (a + b)v = av + bv.
1.2.4. Theorem 1.2 and Definition 1.7 are similarly worded. Explain why one is a theorem and the other a definition.
1.2.5. Define the norm |v| of a vector v in Rn.
1.2.6. Describe the geometric object in R4 represented by the parametric equation.
a. v = (1, 2, 3, 4)t.
b. v = (1, 0, 1, 0) + (1, 2, 3, 4)t.
c. v = (1, 2, 3, 4)t1 + (1, -1, 1, -1)t2.
d. v = (1, 0, 1, 0) + (1, 2, 3, 4)t1 + (1, -1, 1, -1)t2.
e. v = (1, 0, 1, 0)t1 + (1, 2, 3, 4)t2 + (1, -1, 1, -1)t3.
f. v = (1, 1, 1, 1) + (1, 0, 1, 0)t1 + (1, 2, 3, 4)t2 + (1, -1, 1, -1)t3.

Chapter 2
Vector Spaces
This chapter may well be the hardest in the book for you, as it uses intricate abstract mathematical reasoning of a kind you likely have not encountered before. Work hard to master it, as it is the foundation for the rest of the book.

2.1 Vector Spaces
Chapter 1 defined two kinds of vectors: oriented lengths and n-dimensional vectors. This chapter brings a radical change of viewpoint. Instead of defining more kinds of vectors, we state as axioms the properties that a set of objects must have to qualify as a set of vectors. The set of objects is then called a vector space. I will point out the merits of this approach as we proceed.
Theorems 1.2 and 1.8 gave identical lists V0-V8 of properties of oriented lengths and n-dimensional vectors. This is the set of properties which defines a vector space. Recall that "scalar" means "real number" (p. 4).
Definition 2.1. A vector space V is a set of objects called vectors. There are two operations defined on V: scalar multiplication av and vector addition v + w. There is a zero vector 0. Axioms V0-V8 must be satisfied for all vectors u, v, and w and all scalars a and b:
V0. av ∈ V, v + w ∈ V. (V is closed under scalar multiplication and vector addition.)
V1. v + w = w + v. (Vector addition is commutative.)
V2. (u + v) + w = u + (v + w). (Vector addition is associative.)
V3. v + 0 = v. (0 is the additive identity.)
V4. 0v = 0.
V5. 1v = v. (1 is the multiplicative identity.)
V6. a(bv) = (ab)v.
V7. a(v + w) = av + aw. (Scalar multiplication distributes over vector addition.)
V8. (a + b)v = av + bv.


According to this definition, Theorem 1.2 states that L2 and L3 are vector spaces and Theorem 1.8 states that Rn is a vector space. There are many other interesting and useful vector spaces. We will encounter many in this text.
To summarize, a vector space is:
I. Any set of objects
II. on which scalar multiplication, vector addition, and a 0 are defined
III. satisfying Axioms V0-V8.

Vector Space Examples
Our first example is of interest only because it illustrates well the idea that any set of objects which satisfies the definition of a vector space is a vector space. We follow carefully Definition 2.1 of a vector space as just summarized.
I. We need a set of objects to be vectors. Our vectors will be ordered pairs (x, y) of scalars.
II. We need to define scalar multiplication, vector addition, and a zero vector:
Scalar multiplication. Define a(x, y) = (ax + a - 1, ay + a - 1). Example: 3(10, 20) = (32, 62).
Vector addition. Define (x1, y1) + (x2, y2) = (x1 + x2 + 1, y1 + y2 + 1). Example: (1, 2) + (3, 4) = (5, 7).
A zero vector. Define 0 = (-1, -1).
Although the vectors here are the same as the vectors in R2, this vector space is not R2, as the operations and the zero are different.
III. We need to show that the vectors and operations satisfy Axioms V0-V8. We prove V0-V3 and leave some of the others as exercises.
V0. Both a(x, y) and (x1, y1) + (x2, y2) are pairs of scalars, i.e., vectors. Thus the vectors are closed under scalar multiplication and vector addition.
V1. (x1, y1) + (x2, y2) = (x1 + x2 + 1, y1 + y2 + 1) = (x2 + x1 + 1, y2 + y1 + 1) = (x2, y2) + (x1, y1).
V2. ((x1, y1) + (x2, y2)) + (x3, y3) = (x1 + x2 + 1, y1 + y2 + 1) + (x3, y3) = (x1 + x2 + 1 + x3 + 1, y1 + y2 + 1 + y3 + 1) = (x1 + x2 + x3 + 2, y1 + y2 + y3 + 2). Evaluating (x1, y1) + ((x2, y2) + (x3, y3)) gives the same result.
V3. (x, y) + 0 = (x, y) + (-1, -1) = (x - 1 + 1, y - 1 + 1) = (x, y).
Exercise 2.1. Prove V4.
Exercise 2.2. Prove V5.

Exercise 2.3. Prove V6.
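Since the axioms care only about the operations, the operations can be coded and the axioms checked on sample vectors. A sketch in plain Python; the names smul and vadd are ours:

    def smul(a, p):
        x, y = p
        return (a*x + a - 1, a*y + a - 1)   # a(x, y) = (ax + a - 1, ay + a - 1)

    def vadd(p, q):
        (x1, y1), (x2, y2) = p, q
        return (x1 + x2 + 1, y1 + y2 + 1)   # the exotic vector addition

    ZERO = (-1, -1)

    assert smul(3, (10, 20)) == (32, 62)    # example in the text
    assert vadd((1, 2), (3, 4)) == (5, 7)   # example in the text
    assert vadd((8, -3), ZERO) == (8, -3)   # V3
    assert smul(0, (8, -3)) == ZERO         # V4
    assert smul(1, (8, -3)) == (8, -3)      # V5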


The next example is a member of an important class of vector spaces known as function spaces. Among other places, function spaces arise in the study of differential equations.
I. We take as vectors the set C[a, b] of all continuous functions on the closed interval [a, b]. Note carefully: The vectors in C[a, b] are functions, each considered to be a single object, just as each vector in Rn is a single object. In terms of the car analogy (p. 14), we distinguish a function f, a vector in C[a, b] (a car), from the scalar f(x), its value at x (a part).
II. We need to define scalar multiplication, vector addition, and a zero. We define vectors f ∈ C[a, b] by defining all of their values f(x).
Scalar multiplication. Define af by (af)(x) = af(x). The value of af at x is a times f(x). This is familiar.
Vector addition. Define f + g by (f + g)(x) = f(x) + g(x). The value of f + g at x is f(x) + g(x). This is also familiar.
A zero vector. Define 0 by 0(x) = 0, the function which is identically zero. This is a continuous function, and so is a vector in C[a, b].
III. We need to verify the axioms.
V0. A scalar multiple of a continuous function is continuous and the sum of two continuous functions is continuous. Thus C[a, b] is closed under these operations.
The remaining axioms assert the equality of two vectors. Two functions in C[a, b] are equal if they have the same value when applied to x's in [a, b].
V1. (f + g)(x) = f(x) + g(x) = g(x) + f(x) = (g + f)(x). (Steps (1) and (3) use the definition of vector addition just given. Step (2) uses the commutativity of scalar addition.) Therefore f + g = g + f.
V3. (f + 0)(x) = f(x) + 0(x) = f(x) + 0 = f(x). Therefore f + 0 = f.
Exercise 2.4. Verify V4.
Exercise 2.5. Verify V5.
The remaining axioms are verified similarly.
A similar vector space is Pn, the set of polynomials p of degree less than or equal to n. (Polynomials of degree equal to n are not closed under addition, e.g., x^n + (-x^n) = 0.) For variety we do not restrict the polynomials to an interval, but consider them for all x. For example, if p(x) = 2x^3 - 4x + 7, then p ∈ P3. In fact, p ∈ Pn for all n ≥ 3.
Since vector spaces come in such a wide variety, no single image in your mind for "vector" can be adequate to capture them all. Nevertheless, sketches of oriented lengths often provide good intuition for general vectors. We will use them often in our figures. However, note well: a sketch might provide intuition for a proof, but it cannot be part of a proof.
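Returning to the function space example: the view of a function as a single object has a direct analogue in code, where a function is likewise a single value that can be scaled and added pointwise. A sketch in plain Python; the names smul and vadd are ours:

    import math

    def smul(a, f):
        return lambda x: a * f(x)        # (af)(x) = a f(x)

    def vadd(f, g):
        return lambda x: f(x) + g(x)     # (f + g)(x) = f(x) + g(x)

    zero = lambda x: 0                   # the identically zero function

    h = vadd(smul(3, math.sin), math.cos)   # the single object 3 sin + cos
    print(h(1.0))                           # its value at x = 1: 3*sin(1) + cos(1)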


Four Simple Theorems
Now that we understand what vector spaces are, we are in a position to prove theorems about them. As we start to prove theorems about vector spaces we know nothing about their vectors except that they satisfy Axioms V0-V8. In particular, they are not necessarily oriented lengths in L3 or coordinate vectors in Rn. They might be in one of the vast number of vector spaces used today. They might be in an as yet undiscovered vector space.
Each step in the proof of a theorem must be a logical consequence of previous steps or be justified by appeal to an axiom or to a previously proved theorem. Thus all theorems are ultimately logical consequences of the axioms. This provides great economy: we prove a theorem once from the axioms of a vector space, and it is true in every vector space. However, something true in one vector space need not be true in all vector spaces. For individual vector spaces have properties not shared by all vector spaces.
These are major points. But they are not easily comprehended on a first encounter. I hope that they will become clear to you as we proceed.
The following four exercises ask you to prove simple vector space theorems. This will be a warm-up for the more difficult proofs of subsequent sections and chapters. The theorems give some simple rules about vectors that you must know. By themselves, the rules are uninteresting and their proofs pedantic. But they illustrate the new approach of this chapter in a simple setting.
Exercise 2.6. Theorem. Show that v + (-1)v = 0, i.e., (-1)v is an additive inverse of v. Give a justification for the numbered steps (1)-(4) in the following proof of the theorem:
v + (-1)v = 1v + (-1)v   (1)
          = (1 + (-1))v   (2)
          = 0v            (3)
          = 0.            (4)
Do not panic over all of the symbols above. Focus on one equal sign at a time. Compare its two sides, seeing what is different and what is the same. Then try to understand why the two sides are equal. To help you get started: Step (3) uses a property of scalars, which needs no justification, and the other steps each depend on a single vector space axiom.
Now we know that v + (-1)v = 0 in every vector space. This includes L3 and Rn, even though neither was mentioned in the proof. As with oriented lengths, we abbreviate (-1)v to -v, so the theorem states that -v is an additive inverse of v. If we cite this theorem to justify a step in the proof of a later theorem, then ultimately the axioms justify the step.

Exercise 2.7. Theorem. a0 = 0. Justify the steps of this proof:
a0 = a(0v) = (a0)v = 0v = 0.


Exercise 2.8. Theorem. If av = 0, then a = 0 or v = 0. Prove the theorem. Hint: Start with av = 0, where a ≠ 0.
Exercise 2.9. Theorem. There is only one additive inverse of v: If w is an additive inverse of v, i.e., if v + w = 0, then w = -v. Hint: Start by adding -v to both sides of v + w = 0.

Problems 2.1
2.1.1. Each part below defines an addition and scalar multiplication for ordered pairs of numbers (x, y). If the pairs form a vector space under the operations, say so. If not, explain why not.
a. (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2); a(x, y) = (ax, 0).
b. (x1, y1) + (x2, y2) = (x1 + x2, 0); a(x, y) = (ax, ay).
c. (x1, y1) + (x2, y2) = (x1, y1 + y2); a(x, y) = (ax, ay).
2.1.2. Suppose that u + w = v + w. Show that u = v. Give reasons for your steps.
2.1.3. The vector -v is the additive inverse of v because it is the vector satisfying v + (-v) = 0. Show that the additive inverse of -v is v.
2.1.4. Suppose that v + w = v. Show that w = 0.
2.1.5. Let a be a scalar and v a vector.
a. Prove that (-a)v = a(-v). Hint: (-a)v = (a(-1))v.
b. Prove that (-a)v = -av. Hint: (-a)v = ((-1)a)v.
2.1.6. a. Define a(x, y) = (ax, ay) as the scalar multiplication on pairs (x, y). Use the usual vector addition. Does this define a vector space?
b. Define a(x, y) = (0, 0) as the scalar multiplication on pairs (x, y). Use the usual vector addition. Does this define a vector space?
2.1.7. Axiom V5 states that 1v = v. Why do we take this as an axiom? Isn't it obvious?
2.1.8. This problem is about a mind stretcher of a vector space. Its vectors are positive real numbers. When the positive number x is considered a vector we write ⌊x⌋ rather than the usual x. Define scalar multiplication, vector addition, and a zero:
a⌊x⌋ = ⌊x^a⌋.   E.g., (-2)⌊3⌋ = ⌊3^(-2)⌋ = ⌊1/9⌋.
⌊x⌋ + ⌊y⌋ = ⌊xy⌋.   E.g., ⌊2⌋ + ⌊3⌋ = ⌊2 × 3⌋ = ⌊6⌋.
0 = ⌊1⌋.
Verify axiom: a. V0. b. V1. c. V2. d. V3. e. V4. f. V7.
2.1.9. Suppose that we add the theorems of Exercises 2.6-2.9 to the axioms of a vector space, saving us the bother of proving them.
a. Is there anything wrong logically about this?
b. Are there disadvantages?
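A few numerical checks of Problem 2.1.8 can build confidence before the algebraic verification. A sketch in plain Python, writing the ⌊x⌋ operations directly on floats:

    def smul(a, x):
        return x ** a          # a⌊x⌋ = ⌊x^a⌋

    def vadd(x, y):
        return x * y           # ⌊x⌋ + ⌊y⌋ = ⌊xy⌋

    ZERO = 1.0                 # 0 = ⌊1⌋

    assert abs(smul(-2, 3.0) - 1/9) < 1e-12      # example in the text
    assert vadd(2.0, 3.0) == 6.0                 # example in the text
    assert vadd(5.0, ZERO) == 5.0                # V3
    assert smul(0, 5.0) == ZERO                  # V4: x^0 = 1
    assert smul(2, vadd(3.0, 4.0)) == vadd(smul(2, 3.0), smul(2, 4.0))  # V7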

2.2 Subspaces

This section and the following four discuss five fundamental vector space concepts: subspace, linear combination, linear independence, basis, and dimension. They will be part of our basic vocabulary, used repeatedly in the rest of this book.
Definition 2.2 (Subspace). Let U be a set of vectors in a vector space V. Define scalar multiplication and vector addition in U by inheritance: perform these operations in U just as they are performed in V. If U is then a vector space, it is called a subspace of V.
For example, suppose that there are vectors u1, u2 ∈ U with u1 + u2 ∉ U. Then U is not closed under vector addition and so violates Axiom V0. Thus it is not a subspace of V. However, violating Axiom V0 is all that can go wrong:
Theorem 2.3 (Subspace test). Let U be a set of vectors in a vector space V, with 0 ∈ U. Let U inherit scalar multiplication and vector addition from V. Then U is a subspace of V if and only if U is closed under scalar multiplication and vector addition.
Proof. We prove separately the (⟹) and (⟸) directions.
(⟹) Suppose that U is a subspace of V. Then it must satisfy all of the axioms for a vector space. In particular, Axiom V0 requires that U be closed under scalar multiplication and vector addition.
(⟸) ...
Problems 2.2
2.2.1. ... d. {(x, y, z)} with x > 0. e. {(x, y, z)} with x = 0.
2.2.2. a. Show that the intersection, U1 ∩ U2, of two subspaces of a vector space is again a subspace.
b. Give an example of a union, U1 ∪ U2, of two subspaces which is not a subspace.
2.2.3. Consider the set of vectors in R3 of the form (2r - s, r + 2s, s), for scalars r, s. Show that this is a subspace of R3.
2.2.4. Let U1 and U2 be subspaces of a vector space V. Let U be the set of all vectors of the form u1 + u2, with u1 ∈ U1 and u2 ∈ U2. Show that U is a subspace of V.

2.3 Linear Combinations

What can we do with vectors? Well, we can multiply them by scalars and we can add them. And we can do this repeatedly.
Definition 2.4 (Linear combination). A linear combination of the vectors v1, v2, ..., vr is an expression of the form
a1v1 + a2v2 + ... + arvr,
where a1, a2, ..., ar are scalars.
Here are some examples of linear combinations of vectors u and v:
2u + 3v,   (1/2)u - 3v (= (1/2)u + (-3)v),   √2 u (= √2 u + 0v),   0 (= 0u + 0v).

Definition 2.5 (Span). Let V be a set of vectors in a vector space. The span of V, span(V), is the set of all linear combinations of vectors from V. Thus: "v ∈ span(V)" means "v is a linear combination of vectors from V".
The span of a single vector v is the set of all scalar multiples of v. The span of two nonzero oriented lengths in L3, neither of which is a scalar multiple of the other, consists of all vectors in the plane determined by the two vectors. This is Exercise 1.5 expressed in our new terminology.
Exercise 2.11. Show that span((1, 0, 0), (0, 2, 0), (0, 0, 3)) = R3.
We pause to note that Axiom V7, a(v + w) = av + aw, extends to more than two vectors. For example, applying the axiom in Steps (2) and (3) gives
a(v1 + v2 + v3) = a((v1 + v2) + v3)   (1)
               = a(v1 + v2) + av3    (2)
               = av1 + av2 + av3.    (3)
This extends to any number of vectors.
Theorem 2.6. If V is a nonempty set of vectors in a vector space V, then span(V) is a subspace of V.
Proof. It is sufficient to show that the span is closed under scalar multiplication and vector addition (Theorem 2.3). Let v ∈ span(V). This means that v = a1v1 + a2v2 + ... + arvr for some scalars a1, a2, ..., ar and vectors v1, v2, ..., vr in V. Then using Axiom V6 in Step (3),
av = a(a1v1 + a2v2 + ... + arvr)
   = a(a1v1) + a(a2v2) + ... + a(arvr)
   = (aa1)v1 + (aa2)v2 + ... + (aar)vr ∈ span(V).
Exercise 2.12 shows that span(V) is closed under vector addition. □
Exercise 2.12. Show that span(V) is closed under vector addition.


Exercise 2.13. Show that (1, 1, 2) ∉ span((1, 0, 2), (0, 1, 1)).
Exercise 2.14. Suppose that the vectors v1, v2, ..., vr span a vector space. Let v be any vector. Show that the vectors v1, v2, ..., vr, v also span the space.
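Membership in a span, as in Exercise 2.13, amounts to solvability of a linear system, which SymPy (Appendix B) can decide. A minimal sketch:

    import sympy as sp

    a1, a2 = sp.symbols('a1 a2')
    # Is (1, 1, 2) a linear combination of (1, 0, 2) and (0, 1, 1)?
    residual = a1*sp.Matrix([1, 0, 2]) + a2*sp.Matrix([0, 1, 1]) - sp.Matrix([1, 1, 2])
    print(sp.solve(list(residual), [a1, a2]))   # []: no solution, so not in the span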

Problems 2.3
2.3.1. Suppose that each of u1, u2, u3 is a linear combination of v1, v2. Show that every linear combination of the u's is a linear combination of the v's.
2.3.2. Is (6, 6, -6) ∈ span((3, 2, -4), (0, 2, 2))?
2.3.3. Suppose that V1 and V2 are subsets of a vector space.
a. Show that if V1 ⊆ V2, then span(V1) ⊆ span(V2).
b. Is span(V1 ∪ V2) = span(V1) ∪ span(V2)? Explain.
c. Is span(V1 ∩ V2) = span(V1) ∩ span(V2)? Explain.
2.3.4. Suppose that the vectors {v1, v2, v3, v4} span a vector space V. Show that the vectors {u1v1, u2v2, u3v3, u4v4}, where the scalars u1, u2, u3, u4 are not 0, span V.
2.3.5. Suppose that the vectors {v1, v2, v3, v4} span a vector space V. Show that the vectors {v1 - v2, v2 - v3, v3 - v4, v4} also span V.
2.3.6. Show that span(u1, u2, ..., ur) = span(v1, v2, ..., vs) if and only if each v is a linear combination of the u's and vice-versa.
2.3.7. Let U = span(u1, u2, ..., ur) be a subspace of the vector space V. Show that a subspace of V containing the u's contains U. Thus U is the "smallest" subspace containing the u's.

2.4 Linear Independence

We ask: Given vectors v1, v2, ..., vr, is there a linear combination of the vectors equal to 0:
a1v1 + a2v2 + ... + arvr = 0?   (2.1)

The answer is obviously yes: set all of the a's to zero. Call this the trivial linear combination. So we ask: Is the trivial linear combination the only linear combination of the vectors satisfying Eq. (2.1)? For some sets of vectors the answer is "yes" and for others it is "no". This turns out to be such an important distinction that there are names for the two possibilities.
Definition 2.7 (Linear independence, dependence). Let v1, v2, ..., vr be vectors in a vector space V. If the only linear combination of the vectors satisfying Eq. (2.1) is the trivial linear combination, then the vectors are linearly independent. If there is a nontrivial linear combination satisfying Eq. (2.1), then the vectors are linearly dependent.
Exercise 2.15. a. If the vectors v1, v2, ..., vr are linearly dependent, then there are scalars a1, a2, ..., ar, not all zero, so that ______________.
b. If the vectors v1, v2, ..., vr are linearly independent and a1v1 + a2v2 + ... + arvr = 0, then ______________.
Let us look at several simple cases.
A single vector v ≠ 0 is linearly independent. For if v ≠ 0 and av = 0, then a = 0 (Exercise 2.8).
The vector 0 by itself is linearly dependent, as shown by the equation 1·0 = 0.
Exercise 2.16. Prove: The vectors 0, v1, v2, ..., vr are linearly dependent.
Two vectors are linearly dependent if and only if one of them is a scalar multiple of the other. See Figures 2.1 and 2.2. To see this, suppose first that v1 and v2 are linearly dependent. Then a1v1 + a2v2 = 0, where a1, say, is not zero. Thus v1 is a multiple of v2: v1 = (-a2/a1)v2. Now suppose that v2, say, is a multiple of v1: v2 = av1. Then the vectors are linearly dependent: av1 - v2 = 0.
Fig. 2.1: Linearly dependent oriented lengths.
Fig. 2.2: Linearly independent oriented lengths.
This can be generalized.


Theorem 2.8 (Linear dependence test). Two or more vectors are linearly dependent if and only if one of the vectors is a linear combination of the others.
This is a useful test, but students tend to overuse it. Do not forget the more symmetric Definition 2.7 of linear dependence when constructing proofs.
Exercise 2.17. Prove Theorem 2.8.
Exercise 2.18. Suppose that the vectors v1, ..., vr are linearly dependent. Must the vector v1 be a linear combination of v2, ..., vr? Explain.
Exercise 2.19. Suppose that each pair of vectors v1, v2; v1, v3; and v2, v3 is linearly independent. Must v1, v2, v3 be linearly independent? Explain.
Theorem 2.9. Let v1, v2, ..., vr be linearly independent vectors.
a. Suppose that v ∉ span(v1, v2, ..., vr). Then the vectors v, v1, v2, ..., vr are linearly independent.
b. Remove a vector from v1, v2, ..., vr. Then the remaining vectors are linearly independent.
Now let v1, v2, ..., vr be linearly dependent vectors.
c. For any vector v the vectors v, v1, v2, ..., vr are linearly dependent.
d. One of the vectors can be removed from v1, v2, ..., vr without changing their span.
Proof. a. Suppose that av + a1v1 + a2v2 + ... + arvr = 0. If a ≠ 0, then solving this equation for v puts it in span(v1, v2, ..., vr), contrary to hypothesis. Thus a = 0. Then owing to the linear independence of the vectors v1, v2, ..., vr, the subscripted a's are also zero. Thus the vectors v, v1, v2, ..., vr are linearly independent.
b. Remove, say, v1. Suppose that a2v2 + ... + arvr = 0. Then 0v1 + a2v2 + ... + arvr = 0. Since the vectors v1, v2, ..., vr are linearly independent, all of the a's in the second equation are zero. Thus all of the a's in the first equation are zero. The vectors v2, ..., vr are linearly independent.
Exercises 2.20 and 2.21 ask you to prove Parts (c) and (d). □
Exercise 2.20. Prove Theorem 2.9c.
Exercise 2.21. Prove Theorem 2.9d.


Problems 2.4
2.4.1. a. Suppose that two vectors in R3 span a plane. Are they linearly independent? Explain.
b. Describe geometrically where to put a third vector so that the resulting vectors are linearly independent.
2.4.2. Let v1 = [1, 0, 0], v2 = [1, 1, 0], v3 = [1, 1, 1], v4 = [3, 2, 1].
a. Show that the vectors v1, v2, v3 are linearly independent.
b. Show that the vectors v1, v2, v3, v4 are linearly dependent.
2.4.3. Suppose that the vectors v1, v2, v3, v4 are linearly independent. Show that the vectors v1 - v2, v2 - v3, v3 - v4, v4 are also linearly independent.
2.4.4. Suppose that the vectors u, v, w are linearly independent. Prove or disprove:
a. The vectors u + v, v + w, u + w are linearly independent.
b. The vectors u - v, v + w, u + w are linearly independent.
2.4.5. Let U and V be subspaces of a vector space with U ∩ V = {0}. Suppose that u1, u2, ..., ur are linearly independent vectors in U and v1, v2, ..., vs are linearly independent vectors in V. Show that the vectors u1, u2, ..., ur, v1, v2, ..., vs are linearly independent.
2.4.6. Define the functions f, g, and h by f(x) = sin²(x), g(x) = cos²(x), and h(x) = 1. The functions are vectors in the vector space C[a, b]. Are the vectors linearly independent or dependent? Explain.
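For computations like Problem 2.4.2, the rref method of SymPy (prescribed later in Exercise 2.23) reports the pivot columns of a matrix, and hence which of the vectors placed in its columns are independent. A sketch with the vectors of Problem 2.4.2 as columns:

    import sympy as sp

    A = sp.Matrix([[1, 1, 1, 3],
                   [0, 1, 1, 2],
                   [0, 0, 1, 1]])   # columns are v1, v2, v3, v4
    R, pivots = A.rref()
    print(pivots)     # (0, 1, 2): v1, v2, v3 are linearly independent
    print(R.col(3))   # (1, 1, 1): v4 = v1 + v2 + v3, a dependence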

2.5 Bases

Definition 2.10 (Basis). A finite set B = {b1, b2, ..., bn} of vectors in a vector space V is a basis for V if every vector in V is a linear combination of the vectors in B in one and only one way.
The "one way" part of the definition of a basis states that B spans V. The "only one way" part of the definition states that the vectors are linearly independent. To see this, first express the "only one way" part as
Σk bk bk = Σk b'k bk  ⟹  bk = b'k for all k.   (2.2)
Now use the following lemma.
Lemma 2.11. Vectors b1, b2, ..., bn are linearly independent if and only if they satisfy Eq. (2.2).
Proof. Suppose first that the b's are linearly independent. Then
Σk bk bk = Σk b'k bk  ⟹  Σk (bk - b'k) bk = 0  ⟹  bk = b'k for all k.
Now suppose that Eq. (2.2) is satisfied. Then
Σk bk bk = 0  ⟹  Σk bk bk = Σk 0 bk  ⟹  bk = 0 for all k. □
We have proved the following theorem.
Theorem 2.12 (Test for a basis). A set of vectors B is a basis for a vector space V if and only if (i) B spans V and (ii) B is a linearly independent set.
Definition 2.13 (Standard basis). The standard basis for Rn is
B = {(1, 0, ..., 0), (0, 1, ..., 0), ..., (0, 0, ..., 1)}.
Exercise 2.22. Show that the standard basis for R3 is a basis.
Exercise 2.23. Find a basis for span((1, 2, 3, 4), (2, 1, 3, 1), (3, 3, 6, 5)). Use the rref method of SymPy. See Appendix B.
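The rref computation that Exercise 2.23 calls for follows one pattern: place the vectors as columns, row reduce, and keep the pivot columns. A minimal sketch with three illustrative vectors (not those of the exercise; the third is the sum of the first two):

    import sympy as sp

    A = sp.Matrix([[1, 2, 3],
                   [0, 1, 1],
                   [1, 0, 1]])           # columns span the subspace
    R, pivots = A.rref()
    print(pivots)                        # (0, 1): the first two columns
    basis = [A.col(j) for j in pivots]   # a basis for the span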

The next two exercises help develop insight into bases by investigating them in L2 and L3.
Exercise 2.24 (Bases in L2). a. Let v1 and v2 be vectors in L2, neither of which is a scalar multiple of the other. Show that the vectors form a basis for L2. Hint: Use Exercise 1.5.
b. Is there a basis for L2 with fewer than two vectors? Explain.
c. Is there a basis for L2 with more than two vectors? Explain.
d. Suppose that {b1, b2} is a basis for L2. Show that {-b1, b2} is a basis.


Exercise 2.25 (Bases in L3). a. Show that no set of coplanar vectors in L3 spans L3. They are therefore not a basis for L3.
b. Show that three coplanar vectors in L3 are linearly dependent. They are therefore not a basis for L3.
c. Show that three nonzero noncoplanar vectors in L3 are linearly independent. An extension of Exercise 1.5 to three dimensions shows that the vectors span L3. Thus the vectors form a basis for L3.

Coordinates
The notion of a basis is precisely what we need to define the coordinates of a vector.
Definition 2.14 (Coordinates). Let B = {b1, ..., bn} be a basis for a vector space V. Then given v ∈ V, there is a unique linear combination of the b's equal to v: v = v1b1 + v2b2 + ... + vnbn (Definition 2.10). The coordinate vector of v with respect to B is [v]B = (v1, v2, ..., vn) ∈ Rn.
Thus if n = 3 and v = b1 + 2b2 - 3b3 ∈ V, then [v]B = (1, 2, -3) ∈ R3.
As with the coordinates of oriented lengths, [v]B is not intrinsic to v; it (obviously) depends on a basis. Change B and the coordinates change. Problem 3.1.24 shows how they change. If the basis is understood by context or is not relevant, then we write [v].
Exercise 2.26. Recall the vector space Pn of polynomials of degree less than or equal to n (p. 17). Vectors in P2 are of the form ax² + bx + c. Thus B = {p0, p1, p2}, where p0(x) = 1, p1(x) = x, p2(x) = x², is a basis for P2. And B' = {q0, q1, q2}, where q0(x) = 1, q1(x) = 1 + x, q2(x) = 1 + x + x², is also a basis. Let [p]B' = (2, -4, 1). Find [p]B.
The next theorem shows that scalar multiplication, vector addition, the zero vector, and negation act on coordinates exactly as they do in Rn (Definition 1.7).
Theorem 2.15 (Coordinate properties). Let a be a scalar and v and w be vectors in a vector space V. Then
a. [av] = a[v].
b. [v + w] = [v] + [w].
c. [0] = (0, 0, ..., 0).
d. [-v] = -[v].
This theorem, "Coordinate properties", is the first of twenty or so in this book with the description "... properties". Such theorems are important because they give us rules for using the concept under consideration.
Before proving the theorem, let's be clear about what it says. In Part (a), for example, the scalar multiplications av and a[v] take place in different vector spaces. On the left side, av takes place in V. Then we take coordinates [av], which lands us in Rn. On the right side, [v] is already in Rn, which is where the scalar multiplication a[v] takes place. Part (a) says that the two sides are equal vectors in Rn.


Proof. a. See Exercise 2.27.
b. Let v = v1b1 + v2b2 + ... + vnbn and w = w1b1 + w2b2 + ... + wnbn. Add:
v + w = (v1 + w1)b1 + (v2 + w2)b2 + ... + (vn + wn)bn.
Thus
[v + w] = (v1 + w1, v2 + w2, ..., vn + wn)            (1)
        = (v1, v2, ..., vn) + (w1, w2, ..., wn)       (2)
        = [v] + [w].                                  (3)
Steps (1) and (3) are justified by Definition 2.14 and Step (2) by Definition 1.7.
c. Start with v = v + 0. Then using Part (b), [v] = [v + 0] = [v] + [0]. This can be so only if [0] = (0, 0, ..., 0).
d. See Exercise 2.28. □
Exercise 2.27. Prove Part (a) of Theorem 2.15.
Exercise 2.28. Prove Part (d) of Theorem 2.15.
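Coordinates with respect to a polynomial basis, as in Exercise 2.26, can be computed by expanding and collecting coefficients. A SymPy sketch with illustrative coordinates rather than those of the exercise:

    import sympy as sp

    x = sp.symbols('x')
    q = [sp.Integer(1), 1 + x, 1 + x + x**2]    # the basis B' of Exercise 2.26

    coords_Bp = [1, 2, 3]                       # [p]_B' (illustrative)
    p = sp.expand(sum(c*qk for c, qk in zip(coords_Bp, q)))
    print(p)                                    # 3*x**2 + 5*x + 6
    print(sp.Poly(p, x).all_coeffs()[::-1])     # [6, 5, 3] = [p]_B, low degree first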

Problems 2.5
2.5.1. a. Show that the vectors (x, y, z) whose coordinates satisfy x - 2y + 3z = 0 form a subspace of R3.
b. Find a basis for the subspace.
2.5.2. Find the coordinates of the vector (4, 2, 3) with respect to the basis {(2, 0, 0), (0, 1, 0), (0, 1, 1)} of R3.
2.5.3. The vectors v1 and v2 in Figure 2.3 form a basis B for L2. Sketch a vector v with coordinates [v]B = (2, 1).
2.5.4. The set of vectors of the form (a + b, b, 3a - b, a + 2b) is a subspace of R4. Find a basis for the subspace.
2.5.5. The set of vectors of the form (a, b, c, d) with c = 2b is a subspace of R4. Find a basis for the subspace.
Fig. 2.3: [v]B = (2, 1).
2.5.6. Suppose that {v1, v2, v3} is a basis for a vector space. Let u1, u2, u3 be nonzero scalars. Show that {u1v1, u2v2, u3v3} is also a basis.

2.6 Dimension

Theorem 2.16. Let B be a basis of n vectors for a vector space V. Thus the vectors are linearly independent and span V (Theorem 2.12). Then:
a. A set of more than n vectors is not linearly independent.
b. A set of fewer than n vectors does not span V.
Part (a) is the key to establishing the concept of dimension. Its proof, among all those in this chapter, is the one that you would least likely find on your own.
Proof. a. Let B = {b1, b2, ..., bn}. Suppose that the vectors v1, v2, ..., vm are linearly independent and m > n. Since span(B) = V,¹
v1 = b1b1 + b2b2 + ... + bnbn,   (2.3)
where at least one of the b's is not zero. Suppose for definiteness that b1 ≠ 0. Replace b1 with v1 in {b1, b2, ..., bn} to form {v1, b2, ..., bn}.
Now express an arbitrary v ∈ V as a linear combination of b1, b2, ..., bn:
v = c1b1 + c2b2 + ... + cnbn.   (2.4)
Solve Eq. (2.3) for b1. Substitute the result in Eq. (2.4):
v = c1((1/b1)v1 - (b2/b1)b2 - ... - (bn/b1)bn) + c2b2 + ... + cnbn.
Thus v ∈ span(v1, b2, ..., bn). Since v above is arbitrary, span(v1, b2, ..., bn) = V.
Thus for some scalars b'1, b'2, ..., b'n,
v2 = b'1v1 + b'2b2 + ... + b'nbn.
If b'2 = ... = b'n = 0, then v2 = b'1v1, contradicting the linear independence of the v's. Thus at least one of b'2, ..., b'n, say b'2, is not zero. As above, replace b2 with v2 in v1, b2, ..., bn. Then span(v1, v2, b3, ..., bn) = V.
Replace b's with v's until span(v1, v2, ..., vn) = V. Since m > n, there are more v's. These v's are linear combinations of v1, v2, ..., vn. This is a contradiction, since the v's were supposed linearly independent.
b. Suppose there were a spanning set with fewer than n vectors. If the vectors are linearly dependent, delete one of them without changing their span (Theorem 2.9d). If the remaining vectors are still linearly dependent, delete another without changing their span. Continue deleting vectors until the remaining m vectors are linearly independent and span, i.e., until a basis remains. Then the n > m linearly independent vectors in B contradict Part (a) of the theorem. □

S

e c t io n

2.6:

D

im ension

31

T he corollary makes possible th e following definition. D e f i n it i o n 2 .1 8 (Dimension). A vector space V w ith a basis of n vectors has dimension n: dim V = n. T h e vector space V = {0} has dim ension 0. According to Exercise 2.22, d i m R n = n. © E x e r c i s e 2 .2 9 . Suppose th a t a set of m vectors spans M3. a. W h a t can you say ab o u t m l b. Will every set of vectors in R 3 containing m vectors span R 3? Definition 2.18 is th e culm ination of “intricate ab stra ct m athem atical reason­ ing of a kind you likely have n o t encountered before” prom ised a t th e beginning of this chapter. T h in k of th e concepts required: vector space, linear combina­ tion, linear independence, spanning, and basis. If you found it tough going, take heart: “this chapter may well be th e h ard est in th e book for you” , also from th e beginning of th e chapter. A set of vectors which is linearly independent and spans a vector space is a basis (Theorem 2.12). T he next theorem shows th a t if a vector space is ndimensional, th en a set of n vectors which is linearly independent or spans the vector space is a basis. T h e o r e m 2 .1 9 . Let V be an n-dim ensional vector space. Then: a. A set of n linearly independent vectors in V is a basis. b. A set of n vectors which span V is a basis. Proof, a. If th e n linearly independent vectors do n o t span V , th en adjoin a vector to obtain n + 1 linearly independent vectors (Theorem 2.9a). This contradicts Theorem 2.16a. b. See Exercise 2.30. □ E x e r c i s e 2 .3 0 . Prove P a rt (b) of T heorem 2.19. T h e o r e m 2 .2 0 . Let V be an n-dim ensional vector space. Then: a. Linearly independent vectors in V can be extended to a basis. In partic­ ular, a basis for a subspace of V can be extended t o a basis for V . b. Vectors can be removed from a set spanning V to o b tain a basis. Proof, a. S ta rt w ith linearly independent vectors. If th e vectors do n ot span V , th en adjoin a vector while retaining linear independence (Theorem 2.9(a)). If th e vectors still do not span V , th en rep eat th is process. T he process cannot continue past n linearly independent vectors (Theorem 2.19a). W hen it stops we have linearly independent vectors which span, i.e., a basis. b. See Exercise 2.31. □ E x e r c is e 2.3 1 . Prove P a rt (b) of T heorem 2.20.

32


Corollary 2.21. Let U be a subspace of an n-dimensional vector space V. Then U is a finite dimensional vector space with dim U ≤ n.
Proof. If U = {0}, then there is nothing to prove. Otherwise choose a nonzero vector u1 ∈ U. It is linearly independent. Now proceed as in the proof of Theorem 2.20a, adjoining vectors from U. As there, the process cannot continue past n linearly independent vectors, at which time we have a basis for U. □
In this book a basis is, by definition, a finite set of vectors. If a vector space has a basis, then it is called finite dimensional. Otherwise it is infinite dimensional. Do not let the word "infinite" scare you; "infinite dimensional" simply means "no (finite) basis".
The vector space C[a, b] is infinite dimensional. To see this, suppose that C[a, b] is n-dimensional. Then the n + 1 polynomials 1, x, x², ..., x^n in C[a, b] would be linearly dependent. There then would be scalars ai, not all zero, so that a0 + a1x + a2x² + ... + anx^n = 0, the zero function. But this is impossible, since an nth degree polynomial is zero for at most n values of x.
Many theorems are true in all vector spaces, finite and infinite dimensional alike. For example, the theorems in Sections 2.1-2.4 are true in all vector spaces.
Unless otherwise noted, the vector spaces in this book will be finite dimensional. Infinite dimensional vector spaces are important in many applications, but in this book they will be used only as examples.
We now have much of the basic theory of vector spaces in hand. But we have not discussed computation. Suppose that we have a set of vectors. How can we determine whether they are linearly independent or dependent? Do they span the vector space? If they form a basis, and we are given a vector, how do we determine its coordinates with respect to the basis? This will be taken up in the problems of Section 3.2.
All theorems that we have proved in this chapter apply to all vector spaces. We need not prove the theorems for each vector space. This is an example of the power of the axiomatic method adopted here.

Problems 2.6
2.6.1. Show that {(1, 1, 0), (1, 0, 1), (0, 1, 1)} is a basis for R3 in two ways.
a. Show that the vectors are linearly independent.
b. Show that {(1, 0, 0), (0, 1, 0), (0, 0, 1)} ⊆ span({(1, 1, 0), (1, 0, 1), (0, 1, 1)}). Why does this show that the set is a basis?
2.6.2. Let v1, v2, ..., vr be vectors in V. Prove: span(v1, v2, ..., vr) is an r-dimensional subspace of V if and only if the vectors are linearly independent.
2.6.3. Let {b1, b2, ..., bn} be a basis for a vector space. Prove: {v1, v2, ..., vn} is a basis for the space if and only if every b is a linear combination of the v's.

Chapter 3
Matrices
Your vector space background from Chapters 1 and 2 will help you understand both sections of this chapter.

3.1 Matrices

Definition 3.1 (Matrix). A matrix is a rectangular array of scalars. Example:
A = [ 1 -2 2 ]
    [ 1 -1 1 ] .
This is a 2 × 3 matrix: it has 2 rows and 3 columns. (Mnemonic: The columns are vertical, as are columns in a building.) Denote the entry in the ith row and jth column by aij.
Lemma 3.6. Let A and B be n × n matrices with AB = I. Then BA = I.
Proof. First note that if Bx = 0, then x = (AB)x = A(Bx) = A0 = 0.
Let {b1, b2, ..., bn} be a basis for Rn. Then the vectors Bb1, Bb2, ..., Bbn are linearly independent. To see this, suppose that
c1(Bb1) + c2(Bb2) + ... + cn(Bbn) = B(c1b1 + c2b2 + ... + cnbn) = 0.
Then from the calculation of the last paragraph, c1b1 + c2b2 + ... + cnbn = 0. Since the b's are linearly independent, all of the c's are 0.
Since the vectors Bb1, Bb2, ..., Bbn are linearly independent, they form a basis (Theorem 2.19a). Expand an arbitrary vector x with respect to this basis:
x = d1Bb1 + d2Bb2 + ... + dnBbn = B(d1b1 + d2b2 + ... + dnbn).

Then BAx = BAB(d1b1 + d2b2 + ... + dnbn) = B(d1b1 + d2b2 + ... + dnbn) = x. Thus from Exercise 3.2b, BA = I. □

Corollary 3.7. If a matrix has an inverse, then the inverse is unique.
Proof. Suppose that B and C are inverses of A: AB = AC = I. By Lemma 3.6, BA = I. Thus B = B(AC) = (BA)C = IC = C. □
Corollary 3.8. If a matrix A is invertible, then so is A⁻¹ and (A⁻¹)⁻¹ = A.
Exercise 3.15. Prove Corollary 3.8.
Exercise 3.16. Solve the matrix equation Gx - y + By = x for y. (G and B are matrices.)


The Transpose of a Matrix
Definition 3.9 (Transpose). If A = [aij] is an n × m matrix, then the transpose of A is the m × n matrix defined by A* = [aji].¹
¹ The notation Aᵗ is more common than A*. I chose A* because the transpose is a special case of the adjoint of a linear transformation f, which is denoted f*. See Theorem 8.10.

The row i column j entry of A, aij, is the row j column i entry of A*, a*ji. In other words, the rows of A* are the columns of A and the columns of A* are the rows of A. For example, if
A = [ 2 -1 1 ]             [  2  0 ]
    [ 0 -2 5 ] , then A* = [ -1 -2 ]
                           [  1  5 ] .
In the example, a*21 = a12 = -1.
The transpose of a square n × n matrix is another n × n matrix, obtained by "reflecting across the diagonal". Thus,
[ a11 a12 a13 ]*   [ a11 a21 a31 ]
[ a21 a22 a23 ]  = [ a12 a22 a32 ]
[ a31 a32 a33 ]    [ a13 a23 a33 ] .

Theorem 3.10 (A* properties). Let A and B be matrices. Then
a. A** = A.
b. (aA)* = aA*.
c. (A + B)* = A* + B*. (if A + B exists)
d. (AB)* = B*A*. (if AB exists)
e. (A*)⁻¹ = (A⁻¹)*. (if A⁻¹ exists)
Proof. a. Switching the rows and columns of a matrix twice gets you back where you started.
b. To compute (aA)*, multiply all entries of A by a and then interchange rows and columns. To compute aA*, interchange rows and columns of A and then multiply all entries by a. The results are the same.
c. The proof is similar to that of Part (b).
d. Write the row-column products of Eq. (3.8) as
ij entry of AB = (ith row of A)(jth column of B).
Then
ij entry of (AB)* = ji entry of AB = (jth row of A)(ith column of B);
ij entry of B*A* = (ith row of B*)(jth column of A*) = (ith column of B)(jth row of A).
The two are equal.
e. Using Part (d), A*(A⁻¹)* = (A⁻¹A)* = I* = I. □
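Parts of the theorem are easy to sanity-check in SymPy; a passing check is not a proof, but it catches transcription errors. A minimal sketch using the 2 × 3 example above and a random partner for B:

    import sympy as sp

    A = sp.Matrix([[2, -1, 1], [0, -2, 5]])
    B = sp.randMatrix(3, 2, min=-5, max=5)

    assert A.T.T == A              # Part (a)
    assert (3*A).T == 3*A.T        # Part (b)
    assert (A*B).T == B.T * A.T    # Part (d)

Note that SymPy's .T is the transpose, written A* in this book.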


Problems 3.1
3.1.1. Let A = [...], B = [...], C = [...]. Compute the following products when possible. (a) AB. (b) BA. (c) AC. (d) BC. (e) CA. (f) CB.
3.1.2 (Diagonal matrices). Let A = [ a11 0; 0 a22 ]. This is a diagonal matrix; it is square and the entries off of its diagonal are zero.
a. Find A², A³, and A^n. Write a general statement about powers of diagonal matrices of any size.
b. If a11 and a22 are not zero, show that A⁻¹ exists. Write a general statement about inverses of diagonal matrices of any size.
3.1.3. For scalars a and b, (ab)² = a²b². Let A and B be square matrices of the same size. Give a condition that (AB)² = A²B².
3.1.4. a. Formulate a rule for matrix multiplication similar to Eq. (3.7) using the rows of A.
b. Compute again the product of Exercise 3.4 using Part (a).
3.1.5. Let A be a 3 × 3 matrix and v ∈ R3 with A³v = 0 but A²v ≠ 0. Show that the vectors v, Av, A²v are linearly independent. Theorem 2.19 shows that the set is a basis for R3.
3.1.6. The 2 × 2 matrix A has A[...] = [...] and A[...] = [...]. Find A. Hint: Use Eq. (3.7).
3.1.7. A matrix A satisfies A[...] = [...], A[...] = [...], A[...] = [...]. Find A. Hint: See Eq. (3.7).
3.1.8. Suppose that A, B, and C are invertible matrices of the same size. Show that ABC is invertible and give a formula for its inverse.
3.1.9. Let A be a square matrix satisfying A² - A + I = 0. Show that A⁻¹ exists and is equal to I - A.
3.1.10. Let A, B, and C be n × n matrices with ABC = I. Show that B is invertible. Hint: ABC = (AB)C. Use Lemma 3.6.
3.1.11. Suppose that B is the inverse of A². Find the inverse of A.


3.1.12. Let ' 1 0 o' ~1 0 O' A = £121 1 0 , B = 0 1 0 _&3I 0 1 0 ^32 1 Compute AB, (AB) \ BA, (BA) *. Use SymPy. 3.1.13. Show that the set of n x n matrices is closed under matrix multiplication 3.1.14. Let A be a square matrix. a. Prove that if A has a column of zeros, then it does not have an inverse. b. Prove that if A has a row of zeros, then it does not have an inverse. 3.1.15. Let A and B be square matrices with A B = BA. Prove that A B n = B n A for all positive integers n. 3.1.16. Theorem 3.2 states that the set of all n x m matrices is a vector space. Prove the following parts of the theorem for the vector space of 2 x 3 matrices. a. What matrix is the zero vector in the vector space? Why? b. Given a vector in the vector space, what is its negative? Why? c. Find a basis for the vector space. 3.1.17. Find the inverse of the matrix

$$\begin{bmatrix} 1 & 2 & 3 & 4 \\ 1 & -2 & 3 & -4 \\ 3 & 4 & 5 & 6 \\ 3 & -4 & 5 & -6 \end{bmatrix}.$$
Use SymPy.
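SymPy computes exact inverses, which is the point of "Use SymPy" in problems like this one and the next. A minimal sketch with the matrix of Problem 3.1.17 as reconstructed above:

```python
from sympy import Matrix, eye

A = Matrix([[1,  2, 3,  4],
            [1, -2, 3, -4],
            [3,  4, 5,  6],
            [3, -4, 5, -6]])

Ainv = A.inv()             # exact rational entries, no rounding
assert A * Ainv == eye(4)  # confirms the inverse is correct
print(Ainv)
```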

3.1.18. Find the inverse of the matrix

$$\begin{bmatrix} 1 & 2 & 3 & 4 \\ 1 & -1 & 2 & -2 \\ 5 & 6 & 7 & 8 \\ 5 & -6 & 7 & -8 \end{bmatrix}.$$
Use SymPy.

3.1.19. Show that if A and B commute, then so do A* and B*.

3.1.20 (Markov process). A reserve park protects animals of a certain species. Each year some animals leave the park and others enter. The probabilities of leaving/entering can be represented by a matrix:
$$P = \begin{bmatrix} 0.90 & 0.01 \\ 0.10 & 0.99 \end{bmatrix}.$$
Column one tells us that 90% of the animals in the park remain there and 10% leave. Column two tells us that 1% of the animals outside the park enter the park and 99% remain outside. The numbers in each column add to 1, as they must. This is a Markov process: there are a finite number of states (two in our case: inside and outside the park), and the probabilities of transition from one state to another (given by P) remain the same year-to-year. Markov processes have been applied to biology, statistics, chemistry, physics, and engineering. Some applications have millions of states.
a. Suppose that 2% of the animals are in the park in year 0 and 98% are outside. What are the percentages in year n? Hint: Start by showing that the percentages in year 1 are given by $P \begin{bmatrix} 0.02 \\ 0.98 \end{bmatrix}$.
b. The long term percentages are independent of those in year 0. This always happens in a Markov process when all entries of P are positive. Find these percentages. Hint: Compute powers of P until they appear not to change. Call the resulting matrix P^∞. Then show that $P^\infty \begin{bmatrix} x \\ 1-x \end{bmatrix} = P^\infty \begin{bmatrix} 0.02 \\ 0.98 \end{bmatrix}$, independently of x. You can get high powers of P more quickly by repeatedly squaring to obtain P², P⁴ = (P²)², P⁸ = (P⁴)², . . . .
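A sketch of Part (b)'s computation in SymPy, using the repeated-squaring idea from the hint (exact rational arithmetic avoids round-off in the high powers):

```python
from sympy import Matrix, Rational

P = Matrix([[Rational(90, 100), Rational( 1, 100)],
            [Rational(10, 100), Rational(99, 100)]])
v = Matrix([Rational(2, 100), Rational(98, 100)])  # year-0 percentages

Q = P
for _ in range(6):   # squaring six times gives P^64
    Q = Q * Q

print(Q.evalf(4))        # the two columns are nearly identical
print((Q * v).evalf(4))  # long-term percentages, nearly independent of v
```

Squaring a few more times makes the columns of Q agree to as many digits as desired.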


3.1.21. Rotate the vector $\begin{bmatrix} x \\ y \end{bmatrix}$ in ℝ² by 90° clockwise. The result is $\begin{bmatrix} y \\ -x \end{bmatrix}$.
a. Find a 2 × 2 matrix A so that $A \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} y \\ -x \end{bmatrix}$.
b. Without making a calculation, explain why A⁴ = I, the identity matrix.
c. Explain why A^{-1} = A³.

3.1.22 (Trace). The trace, tr(A), of a square matrix A is the sum of its diagonal elements. For example, the trace of the matrix of Problem 3.1.17 is −2. Let A and B be square matrices of the same size.
a. Show that tr(AB) = tr(BA). Hint: See Eq. (3.9).
b. Show that tr(ABC) = tr(CAB). Hint: Use Part (a).
c. Show that tr(A + B) = tr(A) + tr(B).
d. Show that there are no n × n matrices A and B satisfying AB − BA = I. In some infinite dimensional vector spaces AB − BA = I holds for some A and B. This is important in quantum mechanics.

3.1.23 (Centralizer). Let A be an n × n matrix. The centralizer C(A) of A is the set of all n × n matrices which commute with A.
a. Prove that C(A) is a subspace of the vector space of n × n matrices.
b. Prove that C(A) is closed under matrix multiplication.

3.1.24 (Change of basis). This problem shows how the representation [v]_B changes when B changes. Let B = {bⱼ} and B′ = {b′ⱼ} be bases for an n-dimensional vector space. Then [bᵢ]_{B′} denotes the coordinates of bᵢ ∈ B with respect to the B′ basis. Place these coordinates as columns of a matrix [i]_{B′B}:

$$[i]_{B'B} = \begin{bmatrix} [\mathbf{b}_1]_{B'} & [\mathbf{b}_2]_{B'} & \cdots & [\mathbf{b}_n]_{B'} \end{bmatrix}. \tag{3.12}$$

This is an n × n matrix: there are n different bⱼ for the columns, each of which has n coordinates for the rows.
a. Prove that
$$[\mathbf{v}]_{B'} = [i]_{B'B}[\mathbf{v}]_B. \tag{3.13}$$
For this reason [i]_{B′B} is called the transition matrix from B to B′. Read Eq. (3.13) from right to left: start with [v]_B and transform it with [i]_{B′B} to [v]_{B′}. Hint: Start with v = b₁b₁ + b₂b₂ + ⋯ + bₙbₙ. Show that
$$[\mathbf{v}]_{B'} = b_1[\mathbf{b}_1]_{B'} + b_2[\mathbf{b}_2]_{B'} + \cdots + b_n[\mathbf{b}_n]_{B'}.$$
Now compute [i]_{B′B}[v]_B using matrix multiplication by columns (Eq. (3.4)). Note that [i]_{B′B} does not transform vectors; the same v appears on both sides of Eq. (3.13). Instead, [i]_{B′B} transforms the coordinates of v with respect to B into its coordinates with respect to B′.

⋯

3.2 Systems of Linear Equations

⋯

Theorem. Let A be a square matrix. The following are equivalent.
a. A^{-1} exists.
b. For every b, Ax = b has the unique solution x = A^{-1}b.
c. Ax = 0 has only the solution x = 0.
d. The columns of A are linearly independent.

Proof. We prove a ⟹ b ⟹ c ⟹ d ⟹ a.
a ⟹ b. Multiply both sides of Ax = b by A^{-1} and use A^{-1}Ax = Ix = x.
b ⟹ c. Part (c) is a special case of Part (b).
c ⟹ d. Referring to Eq. (3.4), we see that Part (c) says that the columns of A are linearly independent.
d ⟹ a. We will construct the matrix B = A^{-1}. The n linearly independent vectors aⱼ form a basis for ℝⁿ (Theorem 2.19a). Thus there are scalars b₁₁, b₂₁, . . . , b_{n1} so that

$$b_{11}\mathbf{a}_1 + b_{21}\mathbf{a}_2 + \cdots + b_{n1}\mathbf{a}_n = \text{first column of } I. \tag{3.18}$$

The b_{i1} form the first column of the inverse B, and Eq. (3.18) is the first column of AB = I. Similarly, there is a linear combination of the columns of A, with coefficients b₁₂, b₂₂, . . . , b_{n2}, equal to the second column of I. And so on. Then looking at Eq. (3.7) we see that the matrix B = [b_{ij}] satisfies AB = I, i.e., B = A^{-1}. □
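A small SymPy illustration of the chain of equivalences on one arbitrary matrix (a sanity check, not a proof; the rank method used here returns the number of linearly independent rows or columns and reappears in the problems below):

```python
from sympy import Matrix

A = Matrix([[1, 2], [3, 4]])

print(A.rank())   # 2 = n, so the columns are independent (Part d)
B = A.inv()       # by the theorem the inverse exists (Part a)

b = Matrix([5, 6])
x = B * b         # x = A^{-1} b solves Ax = b uniquely (Part b)
assert A * x == b
```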


Problems 3.2

3.2.1. Write the system of equations x − 2y + 3z = 2, −2y + 4z = 1, 3x − 3y + z = 4 as a matrix equation.

3.2.2. Let A be a 2 × 3 matrix. Suppose that Ax₀ = [⋯], Ax₁ = [⋯], Ax₂ = [⋯].
a. Is 3x₀ + x₁ + 5x₂ a solution of Ax = [⋯]?
b. Write a solution of Ax = [⋯].
c. Can Ax = b be solved for every b ∈ ℝ²? Explain.

3.2.3. a. Suppose that x₁ is a solution of Ax = b and x₂ is a solution of Ax = 0. Show that 2x₁ + x₂ is a solution of Ax = 2b.
b. Suppose that x₁ and x₂ are solutions of Ax = b. Show that x₁ − x₂ is a solution of Ax = 0.

3.2.4. Suppose that Ax₁ = b₁ and Ax₂ = b₂. Find a solution to Ax = b₃ when b₃ ∈ span(b₁, b₂).

3.2.5. Suppose A is a matrix with Ax = 0 for some x ≠ 0. Is there a solution to Ax = b for every b? Explain.

3.2.6. a. Suppose that the product AB of matrices A and B exists. Show that 𝒦(B) ⊆ 𝒦(AB).
b. Show that 𝒦(A) = 𝒦(A*A). Hint: To show that 𝒦(A*A) ⊆ 𝒦(A), start x ∈ 𝒦(A*A) ⟹ ⋯.
c. Show that if 𝒦(A) = {0}, then the square matrix A*A has an inverse.

3.2.7. a. Solve in the form Eq. (3.17):
w − 2x + 3y + 4z = 2
2w + 7y − 5z = 9
w + x + y + z = 8.
Use the print_rref method of SymPy.
b. Remove the third equation in Part (a) and solve.

3.2.8. Solve 2x − 2y + z + 3w = 3, 2x + y − 2z = 4, x + 2y − z − 2w = 1 in the form Eq. (3.17). Use the print_rref method of SymPy.

3.2.9. Solve 2x − 2y + z + 3w + 2u = 3, 2x + y − 2z − 2u = 4, x + 2y − z − 2w + 5u = 1 in the form Eq. (3.17). Use the print_rref method of SymPy.
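The print_rref routine named in Problems 3.2.7–3.2.9 appears to be a convenience from the book's companion software (Appendix B); plain SymPy exposes the same computation as Matrix.rref(). A sketch on the system of Problem 3.2.1, assuming my reading of its coefficients:

```python
from sympy import Matrix

# Augmented matrix of x - 2y + 3z = 2, -2y + 4z = 1, 3x - 3y + z = 4.
M = Matrix([[1, -2, 3, 2],
            [0, -2, 4, 1],
            [3, -3, 1, 4]])

R, pivots = M.rref()  # reduced row echelon form, pivot column indices
print(R)              # last column of R holds the unique solution
print(pivots)         # (0, 1, 2): every unknown is a pivot variable
```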


The rank(A) method of SymPy is useful for some of the problems to follow. It returns the rank of the matrix A: the maximum number of linearly independent rows of A. For example, if A has four rows and rank(A) = 3, then the rows are linearly dependent. Definition 8.13 starts a discussion of the rank of a matrix.
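For example, a minimal check of the four-row case just described (the matrix is an arbitrary illustration):

```python
from sympy import Matrix

# Four vectors in R^3 can never be independent; rank confirms it.
A = Matrix([[1, 0, 0],
            [0, 1, 0],
            [0, 0, 1],
            [1, 1, 1]])
print(A.rank())   # 3 < 4 rows, so the rows are linearly dependent
```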

3.2.10. Determine whether the vectors (1, 2, 3, 4), (5, 6, 7, 8), (9, 10, 11, 12) are linearly dependent or independent in two ways.
a. Use the rank method of SymPy.
b. Set up a system of linear equations expressing linear dependence of the vectors. Then use the print_rref method of SymPy.

3.2.11. Determine whether the vectors (2, −2, 1, 3, 2), (2, 1, −2, 0, −2), (1, 2, −1, −2, 5) are linearly dependent in two ways.
a. Use the rank method of SymPy.
b. Set up a system of linear equations expressing linear dependence of the vectors. Then use the print_rref method of SymPy.

3.2.12. Let U = span((1, 2, 3, 4), (4, 2, 1, 5), (3, 5, 1, 7)), a subspace of ℝ⁴. Answer the following using a system of linear equations and the print_rref method of SymPy.
a. Is the vector v = (8, 9, 5, 16) in U?
b. Is the vector w = (7, 2, 1, 3) in U?

3.2.13. Do Problem 3.2.12 again using the rank method of SymPy.

3.2.14. Find the coordinates of the vector (2, −1, 1, 3) with respect to the basis {(1, 1, 3, 3), (2, −2, 4, −4), (3, 3, 5, 5), (4, −4, 6, −6)}. Do this in two ways:
a. Use the result of Problem 3.1.17.
b. Solve a system of linear equations with the print_rref method of SymPy.

3.2.15. Find a polynomial of degree 4 with the following values: f(1) = −3, f(−1) = −3, f(2) = 0, f(−2) = 12, f(3) = 37, f(−3) = 85. Use the print_rref method of SymPy. Problems of this sort are more elegantly solved with a Lagrange polynomial.
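As a sketch of the Lagrange remark in Problem 3.2.15: SymPy's interpolate builds the interpolating polynomial directly, with no linear system to row-reduce:

```python
from sympy import interpolate, expand, symbols

x = symbols('x')
data = [(1, -3), (-1, -3), (2, 0), (-2, 12), (3, 37), (-3, 85)]
p = expand(interpolate(data, x))  # Lagrange interpolation through the data
print(p)
```

Six data points generally determine a polynomial of degree 5; that the problem asks for degree 4 is a hint that these particular values are consistent with one degree lower.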

Chapter 4

Inner Product Spaces

The vector space axioms do not mention norms, angles, or projections. Nor can these important geometric ideas be derived from the axioms. The inner product of two vectors (sometimes called the "dot" product) adds these concepts to the vector space formalism. The first three sections of this chapter discuss this for oriented lengths, for ℝⁿ, and for general vector spaces.

4.1 Oriented Lengths

Recall that the length, or norm, of an oriented length v is denoted |v|.

Definition 4.1 (Inner product). The inner product of oriented lengths u and v is the scalar defined by

$$\mathbf{u} \cdot \mathbf{v} = |\mathbf{u}|\,|\mathbf{v}| \cos \theta,$$

where θ is the angle between u and v, 0 ≤ θ ≤ π.
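For example, if |u| = 2, |v| = 3, and the angle between u and v is θ = 60°, then u · v = 2 · 3 · cos 60° = 3.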