Linear algebra and matrix theory are fundamental tools for almost every area of mathematics, both pure and applied.
English. 318 pages. 2015.
Pure and Applied Undergraduate Texts (The Sally Series) • Volume 24

Linear Algebra and Matrices: Topics for a Second Course

Helene Shapiro

American Mathematical Society, Providence, Rhode Island
EDITORIAL COMMITTEE
Gerald B. Folland (Chair)
Jamie Pommersheim
Joseph Silverman
Susan Tolman
2010 Mathematics Subject Classification. Primary 15-01, 05-01.
For additional information and updates on this book, visit www.ams.org/bookpages/amstext24
Library of Congress Cataloging-in-Publication Data

Shapiro, Helene, 1954–
Linear algebra and matrices : topics for a second course / Helene Shapiro.
pages cm. — (Pure and applied undergraduate texts ; volume 24)
Includes bibliographical references and index.
ISBN 978-1-4704-1852-6 (alk. paper)
1. Algebra, Linear–Textbooks. 2. Algebras, Linear–Study and teaching (Higher)–Textbooks. 3. Matrices–Textbooks. 4. Matrices–Study and teaching (Higher)–Textbooks. I. Title.
QA184.2.S47 2015
512.5—dc23
2014047088
Copying and reprinting. Individual readers of this publication, and nonprofit libraries acting for them, are permitted to make fair use of the material, such as to copy select pages for use in teaching or research. Permission is granted to quote brief passages from this publication in reviews, provided the customary acknowledgment of the source is given.

Republication, systematic copying, or multiple reproduction of any material in this publication is permitted only under license from the American Mathematical Society. Permissions to reuse portions of AMS publication content are handled by Copyright Clearance Center's RightsLink service. For more information, please visit: http://www.ams.org/rightslink. Send requests for translation rights and licensed reprints to [email protected]. Excluded from these provisions is material for which the author holds copyright. In such cases, requests for permission to reuse or reprint material should be addressed directly to the author(s). Copyright ownership is indicated on the copyright page, or on the lower right-hand corner of the first page of each article within proceedings volumes.

© 2015 by the author. All rights reserved. The American Mathematical Society retains all rights except those granted to the United States Government. Printed in the United States of America.

The paper used in this book is acid-free and falls within the guidelines established to ensure permanence and durability.

Visit the AMS home page at http://www.ams.org/
Contents

Preface
Note to the Reader

Chapter 1. Preliminaries
1.1. Vector Spaces
1.2. Bases and Coordinates
1.3. Linear Transformations
1.4. Matrices
1.5. The Matrix of a Linear Transformation
1.6. Change of Basis and Similarity
1.7. Transposes
1.8. Special Types of Matrices
1.9. Submatrices, Partitioned Matrices, and Block Multiplication
1.10. Invariant Subspaces
1.11. Determinants
1.12. Tensor Products
Exercises

Chapter 2. Inner Product Spaces and Orthogonality
2.1. The Inner Product
2.2. Length, Orthogonality, and Projection onto a Line
2.3. Inner Products in C^n
2.4. Orthogonal Complements and Projection onto a Subspace
2.5. Hilbert Spaces and Fourier Series
2.6. Unitary Transformations
2.7. The Gram–Schmidt Process and QR Factorization
2.8. Linear Functionals and the Dual Space
Exercises

Chapter 3. Eigenvalues, Eigenvectors, Diagonalization, and Triangularization
3.1. Eigenvalues
3.2. Algebraic and Geometric Multiplicity
3.3. Diagonalizability
3.4. A Triangularization Theorem
3.5. The Geršgorin Circle Theorem
3.6. More about the Characteristic Polynomial
3.7. Eigenvalues of AB and BA
Exercises

Chapter 4. The Jordan and Weyr Canonical Forms
4.1. A Theorem of Sylvester and Reduction to Block Diagonal Form
4.2. Nilpotent Matrices
4.3. The Jordan Form of a General Matrix
4.4. The Cayley–Hamilton Theorem and the Minimal Polynomial
4.5. Weyr Normal Form
Exercises

Chapter 5. Unitary Similarity and Normal Matrices
5.1. Unitary Similarity
5.2. Normal Matrices—the Spectral Theorem
5.3. More about Normal Matrices
5.4. Conditions for Unitary Similarity
Exercises

Chapter 6. Hermitian Matrices
6.1. Conjugate Bilinear Forms
6.2. Properties of Hermitian Matrices and Inertia
6.3. The Rayleigh–Ritz Ratio and the Courant–Fischer Theorem
6.4. Cauchy's Interlacing Theorem and Other Eigenvalue Inequalities
6.5. Positive Definite Matrices
6.6. Simultaneous Row and Column Operations
6.7. Hadamard's Determinant Inequality
6.8. Polar Factorization and Singular Value Decomposition
Exercises

Chapter 7. Vector and Matrix Norms
7.1. Vector Norms
7.2. Matrix Norms
Exercises

Chapter 8. Some Matrix Factorizations
8.1. Singular Value Decomposition
8.2. Householder Transformations
8.3. Using Householder Transformations to Get Triangular, Hessenberg, and Tridiagonal Forms
8.4. Some Methods for Computing Eigenvalues
8.5. LDU Factorization
Exercises

Chapter 9. Field of Values
9.1. Basic Properties of the Field of Values
9.2. The Field of Values for Two-by-Two Matrices
9.3. Convexity of the Numerical Range
Exercises

Chapter 10. Simultaneous Triangularization
10.1. Invariant Subspaces and Block Triangularization
10.2. Simultaneous Triangularization, Property P, and Commutativity
10.3. Algebras, Ideals, and Nilpotent Ideals
10.4. McCoy's Theorem
10.5. Property L
Exercises

Chapter 11. Circulant and Block Cycle Matrices
11.1. The J Matrix
11.2. Circulant Matrices
11.3. Block Cycle Matrices
Exercises

Chapter 12. Matrices of Zeros and Ones
12.1. Introduction: Adjacency Matrices and Incidence Matrices
12.2. Basic Facts about (0, 1)-Matrices
12.3. The Minimax Theorem of König and Egerváry
12.4. SDRs, a Theorem of P. Hall, and Permanents
12.5. Doubly Stochastic Matrices and Birkhoff's Theorem
12.6. A Theorem of Ryser
Exercises

Chapter 13. Block Designs
13.1. t-Designs
13.2. Incidence Matrices for 2-Designs
13.3. Finite Projective Planes
13.4. Quadratic Forms and the Witt Cancellation Theorem
13.5. The Bruck–Ryser–Chowla Theorem
Exercises

Chapter 14. Hadamard Matrices
14.1. Introduction
14.2. The Quadratic Residue Matrix and Paley's Theorem
14.3. Results of Williamson
14.4. Hadamard Matrices and Block Designs
14.5. A Determinant Inequality, Revisited
Exercises

Chapter 15. Graphs
15.1. Definitions
15.2. Graphs and Matrices
15.3. Walks and Cycles
15.4. Graphs and Eigenvalues
15.5. Strongly Regular Graphs
Exercises

Chapter 16. Directed Graphs
16.1. Definitions
16.2. Irreducibility and Strong Connectivity
16.3. Index of Imprimitivity
16.4. Primitive Graphs
Exercises

Chapter 17. Nonnegative Matrices
17.1. Introduction
17.2. Preliminaries
17.3. Proof of Perron's Theorem
17.4. Nonnegative Matrices
17.5. Irreducible Matrices
17.6. Primitive and Imprimitive Matrices
Exercises

Chapter 18. Error-Correcting Codes
18.1. Introduction
18.2. The Hamming Code
18.3. Linear Codes: Parity Check and Generator Matrices
18.4. The Hamming Distance
18.5. Perfect Codes and the Generalized Hamming Code
18.6. Decoding
18.7. Codes and Designs
18.8. Hadamard Codes
Exercises

Chapter 19. Linear Dynamical Systems
19.1. Introduction
19.2. A Population Cohort Model
19.3. First-Order, Constant Coefficient, Linear Differential and Difference Equations
19.4. Constant Coefficient, Homogeneous Systems
19.5. Constant Coefficient, Nonhomogeneous Systems; Equilibrium Points
19.6. Nonnegative Systems
19.7. Markov Chains
Exercises

Bibliography
Index
Preface
This book began with two upper level mathematics courses I taught at Swarthmore College: a second course in linear algebra and a course in combinatorial matrix theory. In each case, I had expected to use an existing text, but then found these did not quite fit my plans for the course. Consequently, I wrote up complete notes for the classes. Since the material on nonnegative matrices belonged in both courses and required a fair amount of graph theory, it made sense to me to combine all of these chapters into one book. Additional chapters on topics not covered in those courses were added, and here is the total. I started with topics I view as core linear algebra for a second course: Jordan canonical form, normal matrices and the spectral theorem, Hermitian matrices, the Perron–Frobenius theorem. I wanted the Jordan canonical form theory to include a discussion of the Weyr characteristic and Weyr normal form. For the Perron–Frobenius theorem, I wanted to follow Wielandt's approach in [Wiel67] and use directed graphs to deal with imprimitive matrices; hence the need for a chapter on directed graphs. For the combinatorial matrix theory course, I chose some favorite topics included in Herbert Ryser's beautiful courses at Caltech: block designs, Hadamard matrices, and elegant theorems about matrices of zeros and ones. But I also wanted the book to include McCoy's theorem about Property P, the Motzkin–Taussky theorem about Hermitian matrices with Property L, the field of values, and other topics. In addition to linear algebra and matrix theory per se, I wanted to display linear algebra interacting with other parts of mathematics; hence a brief section on Hilbert spaces with the formulas for the Fourier coefficients, the inclusion of a proof of the Bruck–Ryser–Chowla Theorem which uses matrix theory and elementary number theory, the chapter on error-correcting codes, and the introduction to linear dynamical systems. Do we need another linear algebra book?
Aren’t there already dozens (perhaps hundreds) of texts written for the typical undergraduate linear algebra course? Yes, but most of these are for ﬁrst courses, usually taken in the ﬁrst or second year. There are fewer texts for advanced courses. There are wellknown classics,
such as Gantmacher [Gan59], Halmos [Hal87], Mal'cev [Mal63], Hoffman and Kunze [HK71], Bellman [Bell70]. More recently, we have the excellent book by Horn and Johnson [HJ85], now expanded in the second edition [HJ13]. But, much as I admire these books, they weren't quite what I wanted for my courses aimed at upper level undergraduates. In some cases I wanted different topics, in others I wanted a different approach to the proofs. So perhaps there is room for another linear algebra book. This book is not addressed to experts in the field nor to the would-be matrix theory specialist. My hope is that it will be useful for those (both students and professionals) who have taken a first linear algebra course but need to know more. A typical linear algebra course taken by mathematics, science, engineering, and economics majors gets through systems of linear equations, vector spaces, inner products, determinants, eigenvalues and eigenvectors, and maybe diagonalization of symmetric matrices. But often in mathematics and applications, one needs more advanced results, such as the spectral theorem, the Jordan canonical form, and the singular value decomposition. Linear algebra plays a key role in so many parts of mathematics: linear differential equations, Fourier series, group representations, Lie algebras, functional analysis, multivariate statistics, etc. If the two courses mentioned in the first paragraph are the parents of this book, the aunts and uncles are courses I taught in differential equations, partial differential equations, number theory, abstract algebra, error-correcting codes, functional analysis, and mathematical modeling, all of which used linear algebra in a central way. Thank you to all the chairs of the Swarthmore College Mathematics Department for letting me teach such a variety of courses. In the case of the modeling course, a special thank you to Thomas Hunter for gently persuading me to take this on when no one else was eager to do it.
One more disclaimer: this is not a book on numerical linear algebra with discussion of efficient and stable algorithms for actually computing eigenvalues, normal forms, etc. I occasionally comment on the issues involved and the desirability of working with unitary change of basis, if possible, but my acquaintance with this side of the subject is very limited. This book does not contain new results. It may contain some proofs not typically seen or perhaps not readily available elsewhere. In the proof of the Jordan canonical form, the argument for the last nitty-gritty part comes from a lecture Halmos gave at the 1993 ILAS conference. He explained that he had never been satisfied with the argument in his book and always thought there should be a more conceptual approach and that he had finally found it. My apologies if the account I give here is more complicated than necessary—my notes from the talk had a sketch of a proof and then I needed to fill in details. A former colleague, Jim Wiseman, showed me the shear argument with the spherical coordinates used in the proof that the numerical range of a 2 × 2 matrix is an ellipse. From Hans Schneider I learned about the connection between the eigenvalue structure for an irreducible nonnegative matrix and the cycle lengths of its directed graph, and the proof in this book starts with the graph and uses it to obtain the usual result about the eigenvalues.
I would like to thank my wonderful teachers, even though many are no longer alive to receive this thanks. Olga Taussky Todd lured me into matrix theory with her course at Caltech and graciously accepted me as her doctoral student. In addition to my direct debt to her, she told me to take Herbert Ryser's matrix theory course. Since I had already taken Ryser's combinatorics course, this meant I had the good fortune to experience two years of mathematics presented with the utmost elegance, clarity, and precision. Much of the material for the chapters on zero-one matrices, Hadamard matrices, and designs comes from my notes from these courses and from Ryser's book [Rys63]. After Caltech, I spent a year at the University of Wisconsin, where Hans Schneider kindly mentored me and gave me the chance to co-teach a graduate matrix theory course with him. From Hans Schneider I learned of the Weyr characteristic, the connection between directed graphs and the Perron–Frobenius Theorem, and acquired the Wielandt notes [Wiel67]. I want to thank Charles Johnson, who hosted me for a semester of leave at the University of Maryland in 1984. Many thanks to Roger Horn, for inviting me to write the survey article [Sha91] on unitary similarity—this was how I came to relearn the Weyr normal form and to appreciate the power of Sylvester's theorem. I also thank him for inviting me to write the American Mathematical Monthly article [Sha99] on the Weyr normal form and for his patient corrections of my many errors misusing "which" and "that". Alas, I fear I still have not mastered this. Going back further to undergraduate years at Kenyon College, thank you to all of my college mathematics teachers. Daniel Finkbeiner introduced me to the beautiful world of abstract mathematics in a first linear algebra course, followed by a second linear algebra course and more.
Thanks to Robert Fesq for his Moore method abstract algebra course—I came to this course thinking seriously of being a math major, but this was the experience that sealed the deal. Thanks to Stephen Slack, both for a wonderful freshman course and then a Moore method course in topology. And thanks to Robert McLeod, both for his courses, and for generously giving his time to supervise me in an independent reading course my second year. And thanks also to Wendell Lindstrom for his beautiful course in abstract algebra. Finally, I was fortunate to have excellent math teachers in the Philadelphia public schools; I mention here Mr. Kramer (sorry I don’t know his ﬁrst name) of Northeast High School, for his tenth grade math class and twelfth grade calculus class.
Note to the Reader
The basic prerequisite for this book is familiarity with the linear algebra and matrix theory typically covered in a first course: systems of linear equations, linear transformations, inner product spaces, eigenvalues, determinants, etc. Some sections and proofs use basic concepts from algebra and analysis (quotient spaces, cosets, convergence, open and closed sets, etc.); readers unfamiliar with these concepts should just skip those parts. For those using this book as a text for a course, there are various options. The author has used an earlier form of some chapters for two different courses. One was a linear algebra course, covering material from Chapters 2, 3, 4, 5, 6, and 17. The other was a course in combinatorial matrix theory, focused on material from Chapters 11 through 17; material from Chapters 2, 3, and 4 was brought in as needed. It is my hope that users will find a variety of choices that suit them. The complete proof of the Jordan canonical form can eat up a lot of class time—those who prefer to include more of the other topics should probably just outline, or skip, the proof. When the Jordan form is needed later, it is sufficient to know what it is. I also hope that some chapters are useful on their own as introductions to particular topics: block designs, Hadamard matrices, error-correcting codes, linear dynamical systems, Markov chains. Many well-known results are typically mentioned without reference to the original sources; in some cases, it is not so clear what the original source is. In a field like linear algebra—used as a basic tool for so much of mathematics—important results may be discovered and rediscovered several times by different people and in different contexts. I have made some attempt to cite some of the original papers but make no claim to have been complete or consistent in this endeavor. For more information on older original sources, I recommend MacDuffee's book The Theory of Matrices [MacD33].
I have also found the “Notes and References” section at the end of each chapter of [StewSun90] very helpful. Despite several rounds of error detection and correction, I have surely missed some. I apologize for these and hope readers will let me know where they are.
Chapter 1
Preliminaries
This chapter is a brief summary of some basic facts, notation, and terminology for vector spaces, linear transformations, and matrices. It is assumed the reader is already familiar with most of the contents of this chapter.
1.1. Vector Spaces

A field F is a nonempty set with two binary operations, called addition and multiplication, satisfying the following:
• Addition is commutative and associative. There is an additive identity, called zero (0), and every element of F has an additive inverse.
• Multiplication is commutative and associative. There is a multiplicative identity, called one (1), which is different from 0. Every nonzero element of F has a multiplicative inverse.
• For all elements a, b, and c of F, the distributive law holds, i.e., a(b + c) = ab + ac.

Thus, in the language of abstract algebra, a field F is a nonempty set with two binary operations, addition and multiplication, such that F is an abelian group under addition, the set of nonzero elements of F is an abelian group under multiplication, and the distributive law holds, i.e., a(b + c) = ab + ac for all elements a, b, and c of F. Since the multiplicative and additive identities, 0 and 1, are required to be different, any field has at least two elements. The smallest field is the binary field of two elements, F = {0, 1}, where 1 + 1 = 0. The most familiar fields are the rational numbers Q, the real numbers R, the complex numbers C, and Zp, the finite field of order p, where p is prime; when p = 2 we get the binary field. Most of the time we will be working over the field R or C.

A vector space V, over a field F, is a nonempty set V, the objects of which are called vectors, with two operations, vector addition and scalar multiplication, such that the following hold:
• V is an abelian group under vector addition; i.e., vector addition is commutative and associative, there is a zero vector which is the additive identity, and every vector has an additive inverse.
• Scalar multiplication satisfies the following properties:
  – For any a in F and x, y in V, we have a(x + y) = ax + ay.
  – For any a, b in F and x in V, we have (a + b)x = ax + bx.
  – For any a, b in F and x in V, we have (ab)x = a(bx).
  – For any x in V, we have 1x = x.

We use boldface letters to denote vectors. The term scalars refers to elements in F. It can be shown that for any vector x the product 0(x) is the zero vector, 0. The most basic example of a vector space is Fn, the set of all ordered n-tuples, x = (x1, . . . , xn), where the xi's are elements of F, with vector addition and scalar multiplication defined in the usual coordinatewise manner:
• (x1, . . . , xn) + (y1, . . . , yn) = (x1 + y1, . . . , xn + yn).
• a(x1, . . . , xn) = (ax1, . . . , axn).

Although we write x = (x1, . . . , xn) in row form for typographical convenience, we generally regard vectors in Fn as column vectors, in order to conform with our notation for the matrix of a linear transformation, presented later in this chapter.

A nonempty subset U of V is a subspace of V if U is itself a vector space over F, using the same operations as in V. Equivalently, a nonempty subset U is a subspace of V if U is closed under vector addition and scalar multiplication. If U and W are subspaces, then the intersection U ∩ W is a subspace, but, unless either U ⊆ W or W ⊆ U, the union U ∪ W is not a subspace. However, the sum, U + W = {u + w | u ∈ U and w ∈ W}, is a subspace. If for each element z of U + W there is exactly one choice of u ∈ U and w ∈ W such that z = u + w, then we say the sum U + W is a direct sum and write U ⊕ W. Equivalently, the sum U + W is a direct sum if and only if U ∩ W = {0}.

A vector y is a linear combination of the vectors x1, . . . , xn if y = a1 x1 + · · · + an xn for some a1, . . . , an in F. The set of all possible linear combinations of x1, . . . , xn is the span of x1, . . . , xn, denoted span[x1, . . . , xn]. For an infinite set S of vectors, span[S] is the set of all possible linear combinations of a finite number of vectors in S. The span of the empty set is defined to be the zero subspace, {0}. For any subset S of vectors in V, the span of S is a subspace of V, called the subspace spanned by S and denoted as span[S]. If U is a subspace of V and U = span[S], we say S is a spanning set for U, or S spans U.

The vectors x1, . . . , xn are said to be linearly dependent if there exist scalars a1, . . . , an, not all zero, such that a1 x1 + · · · + an xn = 0. Equivalently, x1, . . . , xn are linearly dependent if one of the vectors xi is a linear combination of the other vectors in the list. An infinite set S of vectors is linearly dependent if it has a finite subset which is linearly dependent. The vectors x1, . . . , xn are linearly independent if they are not linearly dependent, i.e., if the only choice of scalars for which a1 x1 + · · · + an xn = 0 is
the trivial choice a1 = a2 = · · · = an = 0. An infinite set S of vectors is said to be linearly independent if every finite subset is linearly independent. We say a vector space V is finite dimensional if there is a finite set of vectors which spans V. Otherwise, V is infinite dimensional. For example, Fn is spanned by the unit coordinate vectors e1, . . . , en, where ei is the vector with a one in position i and zeroes elsewhere. So, Fn is finite dimensional. However, the set of all infinite sequences of elements of F, {(a1, a2, a3, . . . ) | ai ∈ F}, is infinite dimensional. In this book, we are concerned mainly with finite dimensional spaces.
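Testing linear dependence amounts to deciding whether the homogeneous relation above has a nontrivial solution, which reduces to a rank computation. The sketch below is illustrative only (the helpers `rank` and `independent` are my own, not from the text); it works over Q with exact arithmetic so there are no floating-point pitfalls.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix over Q, computed by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def independent(vectors):
    """x1,...,xn are independent iff the matrix with rows xi has rank n."""
    return rank(vectors) == len(vectors)

print(independent([[1, 0, 0], [0, 1, 0]]))   # True
print(independent([[1, 2, 3], [2, 4, 6]]))   # False: second row = 2 * first
```

The second call exhibits a dependence relation 2·x1 + (−1)·x2 = 0, a choice of scalars that is not all zero.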
1.2. Bases and Coordinates

Let V be a finite dimensional vector space. Any linearly independent set of vectors which spans V is called a basis for V. It can be shown that every basis for V has the same number of elements. The number of elements in a basis of V is the dimension of V. For example, {e1, . . . , en} is a basis for Fn, and Fn is n-dimensional. If V is n-dimensional, then any spanning set for V has at least n elements, and any linearly independent subset of V has at most n elements. If U is a subspace of the n-dimensional vector space V, then dim(U) ≤ n, and dim(U) = n if and only if U = V. If k < n and v1, . . . , vk are linearly independent, then there exist vectors vk+1, . . . , vn such that {v1, . . . , vk, vk+1, . . . , vn} is a basis for V; i.e., any linearly independent set can be extended to a basis.

A one-dimensional subspace is called a line; we denote the line spanned by x as tx, representing {tx | t ∈ F}. Two linearly independent vectors, u and w, span a plane, which is a two-dimensional subspace. We use su + tw to denote the plane span[u, w]. A basis is the key tool for showing that any n-dimensional vector space over F may be regarded as the space Fn.

Theorem 1.1. The set B = {b1, . . . , bn} is a basis for V if and only if each vector v ∈ V can be expressed in exactly one way as a linear combination of the vectors b1, . . . , bn; i.e., for each vector v in V, there is a unique choice of coefficients v1, . . . , vn such that v = v1 b1 + · · · + vn bn.

If B = {b1, . . . , bn} is a basis for V and v = v1 b1 + · · · + vn bn, then we call the coefficients v1, . . . , vn the coordinates of v with respect to the B-basis and write [v]B = (v1, . . . , vn). Observe that for any scalars a, b in F, and vectors v, w in V, we have a[v]B + b[w]B = [av + bw]B. In the vector space Fn, when we write x = (x1, . . . , xn), the xi's are the coordinates of x with respect to the standard basis {e1, . . . , en}.
1.3. Linear Transformations

Let V and W be vector spaces over the field F. A function T : V → W is called a linear transformation if T(ax + by) = aT(x) + bT(y), for all scalars a, b in F, and all vectors x, y in V. The null space or kernel of T is ker(T) = {x ∈ V | T(x) = 0}.
The image or range space of T is range(T) = {y ∈ W | y = T(x) for some x ∈ V}. The set ker(T) is a subspace of V, and range(T) is a subspace of W. The dimension of ker(T) is called the nullity of T, and the dimension of range(T) is called the rank of T.

Theorem 1.2 (Rank plus nullity theorem). Let V be a finite-dimensional vector space, and let T : V → W be a linear transformation from V to W. Then dim(ker(T)) + dim(range(T)) = dim(V).

The map T is one-to-one (injective) if and only if ker(T) = {0}, and T is onto (surjective) if and only if range(T) = W. For finite-dimensional V, the map T is bijective (one-to-one and onto) if and only if ker(T) = {0} and dim(V) = dim(W). For a finite-dimensional vector space V, Theorem 1.2 tells us that a linear transformation T : V → V is one-to-one if and only if it is onto; in this case it is invertible. We say T is an isomorphism if it is a bijection, and we say V and W are isomorphic if there exists an isomorphism from V onto W. If T is an isomorphism, then the set S ⊆ V is linearly independent if and only if T(S) is linearly independent, and S spans V if and only if T(S) spans W. Consequently, when T is an isomorphism, B is a basis for V if and only if T(B) is a basis for W. Hence, two finite-dimensional vector spaces V and W over F are isomorphic if and only if they have the same dimension.

Theorem 1.3. Let B = {b1, . . . , bn} be a basis for the vector space V over F. Then the map T(v) = [v]B is an isomorphism from V onto Fn.

As a consequence of Theorem 1.3, any n-dimensional vector space over F can be regarded as Fn.
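Theorem 1.2 can be seen concretely on a small example. The matrix and the helper `apply` below are my own illustration, not from the text: for T(x) = Ax on F^3, we exhibit a spanning set for the range and a kernel vector directly.

```python
# T(x) = Ax for a 3x3 matrix whose third column is the sum of the first two.
A = [[1, 2, 3],
     [2, 4, 6],
     [0, 1, 1]]

def apply(A, x):
    """Matrix-vector product: entry i is the dot product of row i with x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# range(T) is spanned by the columns (1,2,0) and (2,4,1), since
# col3 = col1 + col2; those two columns are independent, so rank(T) = 2.
# ker(T) contains (-1,-1,1), since col3 - col1 - col2 = 0, so nullity >= 1.
assert apply(A, [-1, -1, 1]) == [0, 0, 0]
print("rank + nullity = 2 + 1 = dim(V) = 3")
```

With rank 2 and nullity at least 1, Theorem 1.2 forces the nullity to be exactly 1: the dimensions must add up to dim(V) = 3.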
1.4. Matrices

An m × n matrix A, over F, is a rectangular array of mn elements of F arranged in m rows (horizontal lines) and n columns (vertical lines). The entry in row i and column j is denoted aij. For two m × n matrices A and B and scalars x, y, the matrix xA + yB is the m × n matrix with xaij + ybij in position ij.

The dot product of two vectors x and y in Fn is x · y = x1 y1 + · · · + xn yn. The product of an m × r matrix A and an r × n matrix B is the m × n matrix C = AB in which the (i, j) entry is the dot product of row i of A and column j of B. If A is an m × n matrix and x ∈ Fn is regarded as a column vector, then Ax is a column vector of length m and the ith entry is the dot product of row i of A with x. Note that column j of the matrix product AB is then A times column j of B. Also, row i of AB is the product of row i of A with the matrix B. For a matrix A, when we write A = (A1 A2 · · · An), we mean that Aj is the jth column of A. Note that Ax = x1 A1 + · · · + xn An is a linear combination of the columns of A. For an n × k matrix B with columns B1, . . . , Bk, we have AB = A(B1 B2 · · · Bk) = (AB1 AB2 · · · ABk).
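The columnwise description of the product can be checked mechanically. The sketch below (the naive `matmul` and `column` helpers are mine, not the book's) verifies that column j of AB equals A times column j of B:

```python
def matmul(A, B):
    """Naive matrix product: entry (i, j) is the dot product of
    row i of A with column j of B, exactly as defined above."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def column(M, j):
    return [row[j] for row in M]

A = [[1, 2], [3, 4], [5, 6]]        # 3 x 2
B = [[7, 8, 9], [10, 11, 12]]       # 2 x 3
AB = matmul(A, B)

# Column j of AB equals A times column j of B (B_j regarded as 2 x 1):
for j in range(3):
    ABj = matmul(A, [[x] for x in column(B, j)])
    assert column(AB, j) == [row[0] for row in ABj]

print(AB)   # [[27, 30, 33], [61, 68, 75], [95, 106, 117]]
```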
1.5. The Matrix of a Linear Transformation
5
Matrix multiplication and addition satisfy the following properties:
• Multiplication is associative: A(BC) = (AB)C.
• Distributive property: A(B + C) = AB + AC and (B + C)A = BA + CA.

Matrix multiplication is generally not commutative. Also, the product of two nonzero matrices may be zero; for example,

[ 3 −3 ] [ 1 1 ]   [ 0 0 ]
[ 5 −5 ] [ 1 1 ] = [ 0 0 ].

The n × n matrix with ones on the main diagonal and zeroes elsewhere is denoted In and is called the identity matrix, for it serves as the multiplicative identity for n × n matrices. We use 0m×n to denote an m × n matrix of zeroes; note that for any m × n matrix A, we have A + 0m×n = A. We will omit the subscripts on the identity and zero matrices when the size is clear from the context.
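The displayed zero product can be verified directly; a minimal check (the `matmul` helper is my own naming, not from the text):

```python
def matmul(A, B):
    """Entry (i, j) of AB is the dot product of row i of A with column j of B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

Z = matmul([[3, -3], [5, -5]], [[1, 1], [1, 1]])
print(Z)   # [[0, 0], [0, 0]] -- a product of two nonzero matrices can be zero
```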
1.5. The Matrix of a Linear Transformation

Given an m × n matrix A, we can define a linear transformation T : Fn → Fm by the rule T(x) = Ax. In fact, any linear transformation on finite dimensional vector spaces can be represented by such a matrix equation, relative to a choice of bases for the vector spaces. We now describe this in more detail and introduce some notation for the matrix of a linear transformation, relative to a choice of bases.

Let V and W be finite-dimensional vector spaces over F. Let n = dim(V) and m = dim(W), and suppose T : V → W is a linear transformation. Suppose we have a basis B = {v1, . . . , vn} for V and a basis C = {w1, . . . , wm} for W. For any x ∈ V, we have x = x1 v1 + · · · + xn vn, and thus T(x) = x1 T(v1) + · · · + xn T(vn). Hence [T(x)]C = x1 [T(v1)]C + · · · + xn [T(vn)]C. Let A be the m × n matrix which has [T(vj)]C in column j, and note that [x]B = (x1, x2, . . . , xn), written as a column vector. We then have [T(x)]C = A[x]B. We say the matrix A represents T relative to the bases B and C, and we write [T]B,C = A. In the case where V = W and B = C, we write [T]B = A. Most of the time, when we have V = Fn, W = Fm, and are using the standard bases for Fn and Fm, we identify the linear transformation T with the matrix A and simply write T(x) = Ax, with x regarded as a column vector of length n.

The rank of a matrix may be defined in several equivalent ways. Let A be an m × n matrix, and let T be the linear transformation T(x) = Ax. We can then define the rank of A to be the rank of T. Alternatively, one can consider the subspace spanned by the n columns of A; this is called the column space and its dimension is called the column rank of A. Clearly, the column rank equals the rank of T. The space spanned by the m rows of A is called the row space of A; its dimension is called the row rank. When A is the coefficient matrix for a system of m linear
equations in n unknowns, Ax = b, we can find the general solution by using Gaussian elimination to reduce A to row echelon form. Since the elementary row operations of Gaussian elimination clearly preserve the row space, the row rank of A is the number of leading ones (equivalently, the number of nonzero rows) in a row echelon form of A. One can then show that the set of columns of A corresponding to the positions of the leading ones in an echelon form is a basis for the column space of A; hence, the row rank equals the column rank.

The matrix A will be invertible if and only if the transformation T is invertible. Hence, A is invertible if and only if m = n and A has rank n, or equivalently, nullity 0. We also use the word nonsingular for an invertible matrix. The reader has no doubt seen some version of the following list of equivalent conditions.

Theorem 1.4. Let A be an n × n matrix. Then the following are equivalent.
(1) The rank of A is n.
(2) The columns of A are linearly independent.
(3) The columns of A span F^n.
(4) The rows of A are linearly independent.
(5) The rows of A span F^n.
(6) For any b in F^n, there is exactly one solution to Ax = b.
(7) The only solution to Ax = 0 is the trivial solution x = 0.
(8) The matrix A is invertible.
(9) The determinant of A is nonzero.
(10) All of the eigenvalues of A are nonzero.
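As a numerical illustration of the fact that the row rank equals the column rank, one can check both with NumPy (which computes the common value via the SVD rather than Gaussian elimination; this sketch is not part of the text).

```python
import numpy as np

# Row rank equals column rank: the rank of A and of A^T agree.
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [5., 7., 9.]])   # row 3 = row 1 + row 2, so the rank is 2

rank = np.linalg.matrix_rank(A)
rank_of_transpose = np.linalg.matrix_rank(A.T)
assert rank == 2
assert rank == rank_of_transpose
```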
1.6. Change of Basis and Similarity

Let V be an n-dimensional vector space, and suppose we have two bases for V, labeled $B = \{v_1, \ldots, v_n\}$ and $R = \{r_1, \ldots, r_n\}$. Let $x = \sum_{j=1}^n x_j v_j$. Then $[x]_B = (x_1, x_2, \ldots, x_n)^T$ and $[x]_R = \sum_{j=1}^n x_j [v_j]_R$. Let P be the n × n matrix with $[v_j]_R$ in column j. We then have $[x]_R = P[x]_B$. We call P the change of basis matrix. The matrix P is invertible, and thus $[x]_B = P^{-1}[x]_R$.

Now let T : V → W be a linear transformation from an n-dimensional vector space V to an m-dimensional vector space W. Suppose B and R are two bases for the space V, and suppose C and S are two bases for the space W. Consider the following two matrix representations for T: $A = [T]_{B,C}$ and $B = [T]_{R,S}$ (see Figure 1.1). Then for any x ∈ V, we have

(1.1) $[T(x)]_C = A[x]_B$,
(1.2) $[T(x)]_S = B[x]_R$.
Figure 1.1. Change of basis: BP = QA. (The diagram shows $A = [T]_{B,C}$ mapping V with basis B to W with basis C, and $B = [T]_{R,S}$ mapping V with basis R to W with basis S, with vertical change of basis maps P on V and Q on W.)
Let P be the nonsingular n × n matrix such that $[x]_R = P[x]_B$ for any x ∈ V. Substitute this into equation (1.2) to get

(1.3) $[T(x)]_S = BP[x]_B$.

Let Q be the nonsingular m × m matrix such that $[y]_S = Q[y]_C$ for any y ∈ W. Using y = T(x) and equation (1.1), we get

(1.4) $[T(x)]_S = Q[T(x)]_C = QA[x]_B$.
Combining (1.3) and (1.4) gives $BP[x]_B = QA[x]_B$ for any x ∈ V. Hence, BP = QA, and we have $B = QAP^{-1}$ and $A = Q^{-1}BP$. In particular, when V = W, B = C, and R = S, we have $[T]_B = A$ and $[T]_R = B$, with P = Q. Hence, $B = QAQ^{-1}$. Or we can set $S = Q^{-1}$ and write $B = S^{-1}AS$.

Definition 1.5. We say two m × n matrices A and B over F are equivalent if there exist nonsingular matrices P and Q, over F, of sizes n × n and m × m, respectively, such that QA = BP.

We have shown that two matrices which represent the same linear transformation with respect to different bases are equivalent. The converse also holds; i.e., given two equivalent matrices, one can show they may be regarded as two matrix representations for the same linear transformation.

Definition 1.6. We say two n × n matrices A and B over F are similar if there exists an n × n nonsingular matrix S such that $B = S^{-1}AS$.

Two n × n matrices are similar if and only if they represent the same linear transformation T : V → V with respect to different bases. For a linear transformation T : F^n → F^n given by an n × n matrix A, so that T(x) = Ax, it will often be convenient to regard T and A as the same object. This is particularly useful when dealing with the issue of similarity, for if B is similar to A, then B may be regarded as another matrix representation of T with respect to a different basis.
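The similarity relation $B = S^{-1}AS$ can be seen in action numerically: applying B to the S-coordinates of a vector and translating back gives the same result as applying A directly. A small NumPy sketch (illustrative only; the matrices here are arbitrary choices, not from the text):

```python
import numpy as np

# B = S^{-1} A S represents the same linear map as A, written in the basis
# given by the columns of S.
A = np.array([[2., 1.],
              [0., 3.]])
S = np.array([[1., 1.],
              [1., 2.]])            # columns form the new basis (nonsingular)

B = np.linalg.inv(S) @ A @ S

x = np.array([1., -1.])
coords = np.linalg.solve(S, x)       # [x]_S, the coordinates of x in the S-basis
# B acts on S-coordinates exactly as A acts on the vector itself:
assert np.allclose(S @ (B @ coords), A @ x)
```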
1.7. Transposes

For an m × n matrix A, the transpose $A^T$ is the n × m matrix which has $a_{ij}$ in position (j, i). Thus, row i of A becomes column i of $A^T$, and column j of A becomes row j of $A^T$. You can think of $A^T$ as the array obtained by flipping A across its main diagonal. Note that $(A^T)^T = A$.

We say a square matrix A is symmetric if $A^T = A$, and skew-symmetric if $A^T = -A$. Since $(A + B)^T = A^T + B^T$ and $(cA)^T = cA^T$, for any scalar c, we see that $A + A^T$ is symmetric and $A - A^T$ is skew-symmetric. If char(F) ≠ 2, we have

(1.5) $A = \frac{A + A^T}{2} + \frac{A - A^T}{2},$

which expresses a general square matrix A as the sum of a symmetric matrix and a skew-symmetric matrix.

If A is a complex matrix, then $A^*$ denotes the conjugate transpose; i.e., the (j, i) entry of $A^*$ is $\overline{a_{ij}}$. A square matrix A is Hermitian if $A^* = A$, and it is skew-Hermitian if $A^* = -A$. Note that A is Hermitian if and only if iA is skew-Hermitian. Analogous to equation (1.5), we have $A = \frac{A + A^*}{2} + \frac{A - A^*}{2}$. If we set $H = \frac{A + A^*}{2}$ and $K = \frac{A - A^*}{2i}$, then H and K are both Hermitian and

(1.6) $A = \frac{A + A^*}{2} + i\,\frac{A - A^*}{2i} = H + iK.$
Equation (1.6) is analogous to writing a complex number z as z = x + iy, where x and y are real numbers. Direct computation shows that $(AB)^T = B^T A^T$ and $(AB)^* = B^* A^*$. For an invertible matrix A we have $(A^T)^{-1} = (A^{-1})^T$ and $(A^*)^{-1} = (A^{-1})^*$.
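The Cartesian decomposition (1.6) is easy to verify on a concrete matrix; the following NumPy sketch (illustrative, with an arbitrary example matrix) checks that H and K are Hermitian and that A = H + iK.

```python
import numpy as np

# Cartesian decomposition A = H + iK with H = (A + A*)/2, K = (A - A*)/(2i),
# both Hermitian -- the matrix analogue of z = x + iy.
A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])

H = (A + A.conj().T) / 2
K = (A - A.conj().T) / (2j)

assert np.allclose(H, H.conj().T)    # H is Hermitian
assert np.allclose(K, K.conj().T)    # K is Hermitian
assert np.allclose(H + 1j * K, A)    # A = H + iK
```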
1.8. Special Types of Matrices

We say a square matrix D is diagonal if $d_{ij} = 0$ for all i ≠ j; i.e., if all off-diagonal entries are zero. In this case we often denote the ith diagonal entry as $d_i$; thus

$D = \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{pmatrix}.$

We shall also abbreviate this as $D = \mathrm{diag}(d_1, \ldots, d_n)$. If A is an m × n matrix, then column j of the product AD is $d_j(A_j)$, where $A_j$ denotes the jth column of A. Thus, the effect of multiplying a matrix A on the right by a diagonal matrix D is to multiply the columns of A by the corresponding diagonal entries of D. If A is n × k, then the ith row of DA is $d_i$ times the ith row of A; multiplying a matrix A on the left by a diagonal matrix D multiplies the rows of A by the corresponding diagonal entries of D.
We say a square matrix T is upper triangular if all entries below the main diagonal are zero; thus $t_{ij} = 0$ for all i > j. We have

$T = \begin{pmatrix} t_{11} & t_{12} & \cdots & t_{1n} \\ 0 & t_{22} & \cdots & t_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & t_{nn} \end{pmatrix}.$

When not concerned with the precise entries in the positions above the main diagonal, we write

$T = \begin{pmatrix} t_{11} & * & \cdots & * \\ 0 & t_{22} & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & t_{nn} \end{pmatrix},$

and we abbreviate this as $T = \mathrm{triang}(t_{11}, \ldots, t_{nn})$. If $S = \mathrm{triang}(s_{11}, \ldots, s_{nn})$ and $T = \mathrm{triang}(t_{11}, \ldots, t_{nn})$ are both upper triangular, then the product ST is also upper triangular, and it is $\mathrm{triang}(s_{11}t_{11}, \ldots, s_{nn}t_{nn})$. If $t_{ii} = 0$ for i = 1, ..., n, that is, $t_{ij} = 0$ for all i ≥ j, then we say T is strictly upper triangular. The terms lower triangular and strictly lower triangular are defined in similar fashion: T is lower triangular if all entries above the main diagonal are zero and strictly lower triangular if all entries on or above the main diagonal are zero. When we deal with triangular matrices in this book we mainly use upper triangular; the term triangular alone will be used as shorthand for "upper triangular".
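The closure of upper triangular matrices under multiplication, with diagonal entries multiplying entrywise, can be checked numerically (an illustrative NumPy sketch with arbitrary example matrices):

```python
import numpy as np

# The product of two upper triangular matrices is upper triangular, with
# diagonal entries s11*t11, ..., snn*tnn.
S = np.array([[2., 5., 1.],
              [0., 3., 7.],
              [0., 0., 4.]])
T = np.array([[1., 2., 0.],
              [0., 6., 1.],
              [0., 0., 5.]])

P = S @ T
assert np.allclose(P, np.triu(P))                        # still upper triangular
assert np.allclose(np.diag(P), np.diag(S) * np.diag(T))  # diagonal is (2, 18, 20)
```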
1.9. Submatrices, Partitioned Matrices, and Block Multiplication

Let A be an m × n matrix. Suppose we have integers $i_1, \ldots, i_r$ and $j_1, \ldots, j_s$ with $1 \le i_1 < i_2 < \cdots < i_r \le m$ and $1 \le j_1 < j_2 < \cdots < j_s \le n$. Then we can form an r × s matrix with $a_{i_p j_q}$ in position (p, q). This is called a submatrix of A; it is the submatrix formed from the entries in the rows $i_1, \ldots, i_r$ and columns $j_1, \ldots, j_s$. Putting $R = \{i_1, \ldots, i_r\}$ and $S = \{j_1, \ldots, j_s\}$, we use $A_{[R,S]}$ to denote this submatrix. If m = n and R = S, the submatrix $A_{[R,R]}$ is called a principal submatrix of A and denoted $A_{[R]}$.

Example 1.7. Let $A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$. The three principal submatrices of order two are $A_{[\{1,2\}]} = \begin{pmatrix} 1 & 2 \\ 4 & 5 \end{pmatrix}$, $A_{[\{1,3\}]} = \begin{pmatrix} 1 & 3 \\ 7 & 9 \end{pmatrix}$, and $A_{[\{2,3\}]} = \begin{pmatrix} 5 & 6 \\ 8 & 9 \end{pmatrix}$. For the sets $R = \{2,3\}$ and $S = \{1,3\}$, we have $A_{[R,S]} = \begin{pmatrix} 4 & 6 \\ 7 & 9 \end{pmatrix}$. If $R = \{2,3\}$ and $S = \{1,2,3\}$, then $A_{[R,S]}$ is the last two rows of A.

We sometimes work with partitioned matrices, i.e., matrices which have been partitioned into blocks. Thus, suppose we have an m × n matrix A, and positive integers $m_1, \ldots, m_r$ and $n_1, \ldots, n_s$ such that $\sum_{i=1}^r m_i = m$ and $\sum_{j=1}^s n_j = n$. We partition the rows of A into r sets, consisting of the first $m_1$ rows, then the next $m_2$ rows, the next $m_3$ rows, and so on. Similarly, we partition the columns into s sets,
consisting of the first $n_1$ columns, then the next $n_2$ columns, etc. This partitions the matrix A into rs submatrices $A_{ij}$, where $A_{ij}$ denotes the block formed from the ith set of rows and the jth set of columns. The size of $A_{ij}$ is $m_i \times n_j$.

When an m × n matrix A and an n × q matrix B are partitioned conformally, the product AB can be computed with block multiplication. More precisely, suppose A has been partitioned as described above. Partition B by using the numbers $n_1, \ldots, n_s$ to partition the rows of B, and by using positive integers $q_1, \ldots, q_t$ to partition the columns. Block $A_{ij}$ is size $m_i \times n_j$ and block $B_{jk}$ is size $n_j \times q_k$, so the matrix product $A_{ij}B_{jk}$ is defined and has size $m_i \times q_k$. The (i, k) block of C = AB is then $\sum_{j=1}^s A_{ij}B_{jk}$, where C is partitioned using the numbers $m_1, \ldots, m_r$ for the rows and $q_1, \ldots, q_t$ for the columns. This block multiplication formula follows from the ordinary matrix multiplication rule.

As an example, consider an n × n matrix M partitioned into four blocks by using r = s = 2 with $m_1 = n_1 = k$ and $m_2 = n_2 = n - k$. We then have

$M = \begin{pmatrix} A & B \\ C & D \end{pmatrix},$

where A and D are square of orders k and n − k, respectively, while B is k × (n − k) and C is (n − k) × k. Suppose now that the upper left-hand block A is invertible. We may then do a block version of the usual row operation used in Gaussian elimination (i.e., adding a multiple of a row to a lower row) by multiplying M on the left by the partitioned matrix

$L = \begin{pmatrix} I_k & 0 \\ -CA^{-1} & I_{n-k} \end{pmatrix}.$

The product is

$LM = \begin{pmatrix} A & B \\ 0 & D - CA^{-1}B \end{pmatrix}.$

The square matrix $D - CA^{-1}B$ is called the Schur complement of A in M. Since det L = 1, we have $\det M = \det LM = \det A \,\det(D - CA^{-1}B)$. If n − k = k (so n is even and k = n/2), we also have $\det A \,\det(D - CA^{-1}B) = \det(AD - ACA^{-1}B)$.
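The Schur complement determinant identity can be checked numerically; the NumPy sketch below (illustrative, with an arbitrary matrix whose leading 2 × 2 block is invertible) verifies det M = det A · det(D − CA⁻¹B).

```python
import numpy as np

# det M = det(A) * det(D - C A^{-1} B), assuming the leading block A is invertible.
M = np.array([[2., 1., 0., 0.],
              [1., 3., 1., 0.],
              [0., 1., 4., 1.],
              [0., 0., 1., 5.]])
k = 2
A, B = M[:k, :k], M[:k, k:]
C, D = M[k:, :k], M[k:, k:]

schur = D - C @ np.linalg.inv(A) @ B   # the Schur complement of A in M
assert np.isclose(np.linalg.det(M),
                  np.linalg.det(A) * np.linalg.det(schur))
```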
1.10. Invariant Subspaces

Let T : V → V be a linear transformation. We say the subspace U is invariant under T, or T-invariant, if T(U) ⊆ U. Suppose dim(V) = n and dim(U) = k, where 1 ≤ k < n. Let $\{b_1, \ldots, b_k\}$ be a basis for U. We can extend this set to get a basis, $B = \{b_1, \ldots, b_k, b_{k+1}, \ldots, b_n\}$, for V and consider the matrix $[T]_B$ representing T relative to the B basis. Column j of $[T]_B$ gives the coordinates of $T(b_j)$ relative to the B-basis. Since U is T-invariant, whenever 1 ≤ j ≤ k, the vector $T(b_j)$ is a linear combination of the first k basis vectors $b_1, \ldots, b_k$. Hence,
$A = [T]_B$ has the block triangular form

$\begin{pmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{pmatrix},$
where $A_{11}$ is the k × k matrix representing the action of T on the subspace U, while $A_{12}$ is k × (n − k) and $A_{22}$ is (n − k) × (n − k).

Now suppose U and W are invariant subspaces of T such that V = U ⊕ W. Let k = dim(U), let $\{u_1, \ldots, u_k\}$ be a basis for U, and let $\{w_{k+1}, w_{k+2}, \ldots, w_n\}$ be a basis for W. Then $B = \{u_1, \ldots, u_k, w_{k+1}, w_{k+2}, \ldots, w_n\}$ is a basis for V, and $A = [T]_B$ has the block diagonal form

$\begin{pmatrix} A_{11} & 0 \\ 0 & A_{22} \end{pmatrix},$

where $A_{11}$ is the k × k matrix representing the action of T on the subspace U, while $A_{22}$ is the (n − k) × (n − k) matrix representing the action of T on the subspace W. In this case we write $A = A_{11} \oplus A_{22}$. This can be extended in the obvious way for the case of t invariant subspaces, $U_1, \ldots, U_t$, such that $V = U_1 \oplus U_2 \oplus \cdots \oplus U_t$, in which case we get a block diagonal matrix representation for T,

$A = \begin{pmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_t \end{pmatrix} = A_1 \oplus A_2 \oplus \cdots \oplus A_t,$

where the block $A_i$ represents the action of T on the invariant subspace $U_i$.
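A block diagonal matrix acts independently on each summand of the direct sum, which a small NumPy sketch makes concrete (illustrative only; the blocks here are arbitrary examples):

```python
import numpy as np

# A direct sum A1 (+) A2 acts independently on the two invariant subspaces:
# the block diagonal matrix maps (u, w) to (A1 u, A2 w).
A1 = np.array([[0., 1.],
               [-1., 0.]])
A2 = np.array([[3.]])
A = np.block([[A1, np.zeros((2, 1))],
              [np.zeros((1, 2)), A2]])

u = np.array([1., 2.])
w = np.array([5.])
result = A @ np.concatenate([u, w])
assert np.allclose(result[:2], A1 @ u)   # the first block acts on u alone
assert np.allclose(result[2:], A2 @ w)   # the second block acts on w alone
```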
1.11. Determinants

There are several ways to approach the definition of determinant. One can start with a formula, or with abstract properties, or take the more abstract approach through the theory of alternating, multilinear forms. Here, we merely review some basic formulas and facts. If A is an n × n matrix, then the determinant of A is

(1.7) $\det(A) = \sum_{\sigma} (-1)^{\sigma} a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{n\sigma(n)},$

where the sum is over all permutations σ of {1, 2, ..., n} and the sign $(-1)^{\sigma}$ is plus one if σ is an even permutation and minus one if σ is an odd permutation. There are n! terms in this sum. Each is a signed product of n entries of A, and each product has exactly one entry from each row and from each column. Except for small values of n (such as two and three), formula (1.7) is generally a poor way to compute det(A) as it involves a sum with n! terms. When the matrix has few nonzero entries, formula (1.7) may be useful, but, usually, the efficient way to compute det(A) is to reduce A to triangular form via row operations and use the following properties of the determinant function.

(1) If i ≠ j, adding a multiple of row i of A to row j of A will not change the determinant. The same holds if we add a multiple of column i to column j.
(2) Exchanging two rows, or two columns, of A changes the sign of the determinant.
(3) If we multiply a row or column of A by a nonzero constant c, then the determinant of the resulting matrix is c det(A).
(4) The determinant of a triangular matrix is the product of the diagonal entries.

Another important determinant formula is the formula for expansion by cofactors. For an n × n matrix A, let $A_{ij}$ denote the (n − 1) × (n − 1) matrix which remains after removing row i and column j from A. Then, for any i = 1, ..., n, we have

(1.8) $\det(A) = \sum_{k=1}^{n} (-1)^{i+k} a_{ik} \det(A_{ik}).$

The quantities $(-1)^{i+k}\det(A_{ik})$ are called cofactors, and formula (1.8) is the Lagrange expansion by cofactors. Formula (1.8) gives the expansion along row i. A similar result holds for columns; the formula for cofactor expansion along column j is

(1.9) $\det(A) = \sum_{k=1}^{n} (-1)^{k+j} a_{kj} \det(A_{kj}).$

In these formulas, the cofactor $(-1)^{i+k}\det(A_{ik})$ is multiplied by the entry $a_{ik}$, of the same position. If we use the cofactors of entries in row i but multiply them by entries from a different row, then the sum is zero. Thus, if i ≠ r, then

(1.10) $\sum_{k=1}^{n} (-1)^{r+k} a_{ik} \det(A_{rk}) = 0.$

A similar formula holds for columns: if j ≠ r, then

(1.11) $\sum_{k=1}^{n} (-1)^{k+r} a_{kj} \det(A_{kr}) = 0.$

Letting Cof(A) denote the n × n matrix with $(-1)^{i+j}\det(A_{ij})$ in position (i, j), formulas (1.8) and (1.10) tell us $A(\mathrm{Cof}(A))^T = (\det A)I_n$. When det(A) ≠ 0, we have

(1.12) $A^{-1} = \frac{1}{\det A}\,(\mathrm{Cof}(A))^T.$

Formula (1.12) is rarely a good way to compute $A^{-1}$, but it can give useful qualitative information about the entries of $A^{-1}$. For example, if A is a matrix of integers and det A = ±1, then formula (1.12) shows that $A^{-1}$ has integer entries. Finally, a few other important facts should be familiar to the reader:

(1) An n × n matrix A is invertible if and only if det(A) ≠ 0.
(2) If A and B are n × n matrices, then det(AB) = det(A) det(B).
(3) If A is an n × n matrix, then $\det A = \det(A^T)$.
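The adjugate identity $A(\mathrm{Cof}(A))^T = (\det A)I_n$ and formula (1.12) can be verified by directly computing the cofactor matrix; the NumPy sketch below (illustrative only; `cofactor_matrix` is a helper written for this example, not a library routine) does exactly that.

```python
import numpy as np

def cofactor_matrix(A):
    """Matrix of cofactors: entry (i, j) is (-1)^(i+j) det(A with row i, col j removed)."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
adj = cofactor_matrix(A).T                                  # (Cof A)^T
assert np.allclose(A @ adj, np.linalg.det(A) * np.eye(3))   # A (Cof A)^T = (det A) I
assert np.allclose(np.linalg.inv(A), adj / np.linalg.det(A))  # formula (1.12)
```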
1.12. Tensor Products

More conceptual and abstract approaches to tensor products may be found elsewhere. For our purposes, a coordinate approach will suffice. Let V and W be finite-dimensional vector spaces over a field F. Let m = dim V and n = dim W. Then we may consider V to be F^m and W to be F^n. For any ordered pair of column vectors (v, w), where v ∈ V and w ∈ W, we define the tensor product v ⊗ w to be the column vector

$\begin{pmatrix} w_1 v \\ w_2 v \\ \vdots \\ w_n v \end{pmatrix}.$

This vector has mn coordinates. The tensor product of V and W is the space spanned by all possible vectors v ⊗ w and is denoted V ⊗ W. Letting v and w run through the unit coordinate vectors in V and W, respectively, it is clear that V ⊗ W contains the mn unit coordinate vectors of F^{mn}, and so V ⊗ W is the mn-dimensional space over F.

Let S : V → V and T : W → W be linear transformations. We define $(S \otimes T)(v \otimes w) = (S(v)) \otimes (T(w))$ and extend by linearity to define a linear transformation S ⊗ T on V ⊗ W. Let A be an m × n matrix, and let B be an r × s matrix. The tensor product of A and B is the mr × ns matrix

(1.13) $A \otimes B = \begin{pmatrix} b_{11}A & b_{12}A & \cdots & b_{1s}A \\ b_{21}A & b_{22}A & \cdots & b_{2s}A \\ \vdots & \vdots & \ddots & \vdots \\ b_{r1}A & b_{r2}A & \cdots & b_{rs}A \end{pmatrix}.$

In (1.13), the (i, j) block is $b_{ij}A$; there are rs blocks, each of size m × n. For example,

$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \otimes \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix} = \begin{pmatrix} a & b & 2a & 2b & 3a & 3b \\ c & d & 2c & 2d & 3c & 3d \\ 4a & 4b & 5a & 5b & 6a & 6b \\ 4c & 4d & 5c & 5d & 6c & 6d \end{pmatrix}.$

If A and B are square matrices of sizes m × m and r × r, respectively, this definition of A ⊗ B will give $(A \otimes B)(v \otimes w) = (Av) \otimes (Bw)$ for coordinate vectors v ∈ F^m and w ∈ F^r. More generally, for matrices A, B, C, D, of sizes m × n, r × s, n × p, and s × q, respectively, we have the formula

(1.14) $(A \otimes B)(C \otimes D) = (AC) \otimes (BD).$
This can be established by direct computation using block multiplication.
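Formula (1.14) can also be checked numerically. Note that the book's convention places the block $b_{ij}A$ in position (i, j), which in NumPy terms corresponds to `np.kron(B, A)` (NumPy's `kron` uses the opposite ordering); the sketch below is illustrative only, with arbitrary example matrices.

```python
import numpy as np

def tensor(A, B):
    """The book's A (x) B: block b_ij * A in position (i, j), i.e. np.kron(B, A)."""
    return np.kron(B, A)

A = np.array([[1., 2.], [3., 4.]])   # m x n
B = np.array([[0., 1.], [1., 0.]])   # r x s
C = np.array([[1., 0.], [2., 1.]])   # n x p
D = np.array([[1., 1.], [0., 1.]])   # s x q

# Formula (1.14): (A (x) B)(C (x) D) = (AC) (x) (BD)
assert np.allclose(tensor(A, B) @ tensor(C, D), tensor(A @ C, B @ D))
```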
Exercises

1. Show that if U and W are subspaces of V, then the intersection U ∩ W is a subspace.

2. Give an example of subspaces U and W of R² for which U ∪ W is not a subspace.

3. Show that for subspaces U and W, the set U ∪ W is a subspace if and only if either U ⊆ W or W ⊆ U.

4. Let U and W be subspaces of V. Show that the following conditions are equivalent:
(a) For each element z of U + W, there is exactly one choice of u ∈ U and w ∈ W such that z = u + w.
(b) U ∩ W = {0}.

5. Let U and W be finite-dimensional subspaces of V. Prove the formula dim(U + W) = dim(U) + dim(W) − dim(U ∩ W). Illustrate with the example of two planes in R³.

6. Let T : V → W be a linear transformation from V to W.
(a) Let $R = \{w_1, \ldots, w_k\}$ be a linearly independent set in T(V). For each i = 1, ..., k, choose $v_i$ in the inverse image of $w_i$, that is, $T(v_i) = w_i$. Show that the set $P = \{v_1, \ldots, v_k\}$ is linearly independent.
(b) Show by example that if S is a linearly independent set of vectors in V, then T(S) can be linearly dependent.
(c) Show that if T is injective and S is a linearly independent set of vectors in V, then T(S) is linearly independent.
(d) Show by example that if S spans V, then T(S) need not span W.
(e) Show that if T is surjective and S spans V, then T(S) spans W.

7. Show that an m × n matrix A has rank one if and only if there are nonzero column vectors x ∈ F^m and y ∈ F^n such that $A = xy^T$.

8. Let A and B be matrices such that the product AB is defined. Show that rank(AB) ≤ rank(A) and rank(AB) ≤ rank(B).

9. (a) Show that an m × n matrix A has rank k if and only if there is an m × k matrix B and a k × n matrix C such that rank(B) = rank(C) = k and A = BC. Note that Exercise 7 is the special case k = 1.
(b) Referring to part 9(a), let $b_1, \ldots, b_k$ be the columns of B, and let $c_1^T, \ldots, c_k^T$ be the rows of C. (So $c_i$ is a column vector and thus $c_i^T$ is a row.) Let $C_i$ denote the k × n matrix with $c_i^T$ in the ith row and zeroes in all the other rows. Using $C = \sum_{i=1}^k C_i$, show that A = BC gives

(1.15) $A = \sum_{i=1}^{k} b_i c_i^T.$
Equation (1.15) decomposes the rank k matrix A into a sum of k matrices of rank one.
10. Let $A = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$ and $B = \begin{pmatrix} \lambda_1 & r \\ 0 & \lambda_2 \end{pmatrix}$, where $\lambda_1 \ne \lambda_2$. Find a triangular matrix of the form $S = \begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix}$ such that $S^{-1}AS = B$.
Hint: Work with the equation AS = SB instead of $S^{-1}AS = B$.

11. Let $A = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$ and $B = \begin{pmatrix} \lambda & r \\ 0 & \lambda \end{pmatrix}$. Assuming r ≠ 0, find a diagonal matrix S such that $S^{-1}AS = B$.

12. Recall that the trace of a square matrix is the sum of the diagonal entries. Show that if A is m × n and B is n × m, then AB and BA have the same trace.

13. The Hadamard product of two m × n matrices R and S is the m × n matrix with $r_{ij}s_{ij}$ in entry (i, j); thus, it is entrywise multiplication. We denote the Hadamard product by R ◦ S. Show that if A is m × n and B is n × m, then the trace of AB is the sum of the entries of $A \circ B^T$ (equivalently, the sum of the entries of $A^T \circ B$). (If you did Exercise 12 in the usual way, then this should follow easily.)

14. Show that similar matrices have the same trace. Hint: Use Exercise 12.

15. Let A be an n × n complex matrix. Show that the trace of $A^*A$ is $\sum_{i=1}^{n}\sum_{j=1}^{n} |a_{ij}|^2$.

16. Find a pair of similar matrices A and B such that $A^*A$ and $B^*B$ have different traces. Hint: Use Exercises 10 and 15.

17. A complex matrix U is said to be unitary if $U^{-1} = U^*$; that is, if $UU^* = U^*U = I$. Show that if A is an m × n matrix and $B = U^*AV$, where U and V are unitary matrices of sizes m × m and n × n, respectively, then $A^*A$ and $B^*B$ have the same trace.
19. Suppose the linear transformation T : V → V satisfies $T^2 = I$. Let $U = \{x \in V \mid Tx = x\}$ and $W = \{x \in V \mid Tx = -x\}$.
(a) Show that U and W are subspaces of V.
(b) Show that for any x ∈ V, we have (x + Tx) ∈ U and (x − Tx) ∈ W.
(c) Show that if the field F does not have characteristic 2, then V = U ⊕ W.
(d) Show that for $V = \mathbb{Z}_2^2$ and the 2 × 2 matrix $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$, we have $A^2 = I$. However, for this example, U = W, and the conclusion of part (c) does not hold.

20. Let A be a square matrix, and suppose A = S + T, where S is symmetric and T is skew-symmetric. Assuming the field does not have characteristic 2, show that $S = \frac{A + A^T}{2}$ and $T = \frac{A - A^T}{2}$.

21. Let A be a square, complex matrix, and suppose A = H + iK, where H and K are Hermitian. Show that $H = \frac{A + A^*}{2}$ and $K = \frac{A - A^*}{2i}$.

22. Recall that we say a relation, ∼, is an equivalence relation on a set S if the following three properties hold:
(a) For every a ∈ S, we have a ∼ a. (The relation is reflexive.)
(b) If a ∼ b, then b ∼ a. (The relation is symmetric.)
(c) If a ∼ b and b ∼ c, then a ∼ c. (The relation is transitive.)
Show that similarity is an equivalence relation on the set $M_n(F)$.

23. Let ∼ be an equivalence relation on a set S. For each a ∈ S, define the equivalence class of a to be $[a] = \{x \in S \mid x \sim a\}$. Show that for any a, b ∈ S, either [a] = [b] or [a] ∩ [b] = ∅. Hence, each element of S belongs to exactly one equivalence class. We say the equivalence classes partition the set.

24. Verify formula (1.14).
Chapter 2
Inner Product Spaces and Orthogonality
2.1. The Inner Product

The inner product is used to define metric quantities, such as length, distance, and angle, in a real or complex vector space.

Definition 2.1. An inner product, denoted ⟨x, y⟩, on a real or complex vector space V is a scalar-valued (real or complex, respectively) function defined on ordered pairs of vectors x, y in V which satisfies the following properties:
(1) For any x, y in V, we have $\langle x, y\rangle = \overline{\langle y, x\rangle}$.
(2) For any u, v, x in V and any scalars a, b, we have $\langle au + bv, x\rangle = a\langle u, x\rangle + b\langle v, x\rangle$.
(3) For any x in V, we have ⟨x, x⟩ ≥ 0, and ⟨x, x⟩ = 0 if and only if x = 0.

Property (1) says that the inner product is conjugate symmetric. Note that property (1) implies that ⟨x, x⟩ is a real number for any x in V. It also tells us ⟨x, y⟩ = 0 if and only if ⟨y, x⟩ = 0. When V is a real vector space, we can ignore the conjugation in property (1) and thus get a symmetric inner product. Property (2) says the inner product is linear in the first entry. Combining properties (1) and (2), we get $\langle x, au + bv\rangle = \bar{a}\langle x, u\rangle + \bar{b}\langle x, v\rangle$, which says the inner product is conjugate linear in the second entry. We say the inner product is conjugate bilinear. Property (2) is sometimes stated as $\langle x, au + bv\rangle = a\langle x, u\rangle + b\langle x, v\rangle$, in which case we get an inner product which is linear in the second entry and conjugate linear in the first entry. Property (3) is called positivity and allows us to define the length of a vector as $\|x\| = \sqrt{\langle x, x\rangle}$, using the positive square root.

Note the following consequences of property (3). If ⟨x, y⟩ = 0 for all y ∈ V, then setting y = x gives ⟨x, x⟩ = 0, and hence x = 0. If ⟨x, y⟩ = ⟨z, y⟩ for all y ∈ V, then ⟨x − z, y⟩ = 0 for all y, and hence x = z.
A real or complex vector space with an inner product is called a unitary space. The most familiar example of a complex unitary space is C^n with the inner product $\langle x, y\rangle = \sum_{i=1}^n x_i \bar{y}_i$; we call this the standard inner product and denote it as $\langle x, y\rangle_I$. For x and y in R^n we have $\bar{y}_i = y_i$, and $\langle x, y\rangle_I = \sum_{i=1}^n x_i y_i = x \cdot y$ is the usual dot product. For column vectors x and y, the conjugate transpose $y^*$ is a row vector and $\langle x, y\rangle_I = y^* x$.

If A is an n × n matrix, then $\langle x, Ay\rangle_I = (Ay)^* x = y^* A^* x = \langle A^* x, y\rangle_I$. Furthermore, this property can be used to define $A^*$, which is sometimes called the adjoint of A. See Section 2.8 for adjoints with respect to a general inner product.

Theorem 2.2. Let A be an n × n complex matrix. Then $\langle x, Ay\rangle_I = \langle Bx, y\rangle_I$ holds for all vectors x, y in C^n if and only if $B = A^*$.

Proof. We have already seen that $\langle x, Ay\rangle_I = \langle A^*x, y\rangle_I$ for all x, y. Now suppose $\langle x, Ay\rangle_I = \langle Bx, y\rangle_I$ for all vectors x, y. Then $\langle Bx, y\rangle_I = \langle A^*x, y\rangle_I$ for all vectors x, y, and hence $Bx = A^*x$ for all x, so we must have $B = A^*$.

For V = C[a, b], the vector space of continuous real-valued functions on the closed interval [a, b], the usual inner product used is

$\langle f, g\rangle = \int_a^b f(x)g(x)\,dx.$

For complex-valued functions, we use

$\langle f, g\rangle = \int_a^b f(x)\overline{g(x)}\,dx.$

For column vectors x, y in F^n, the product $xy^T$ is an n × n matrix and is sometimes called the outer product. More generally, for x ∈ F^m and y ∈ F^n, the product $xy^T$ gives an m × n matrix. We leave it as an exercise to show that an m × n matrix A has rank one if and only if there are nonzero vectors x ∈ F^m and y ∈ F^n such that $A = xy^T$.
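The adjoint property $\langle x, Ay\rangle_I = \langle A^*x, y\rangle_I$ of Theorem 2.2 is easy to confirm numerically; the NumPy sketch below (illustrative only, with arbitrary complex vectors and matrix) implements the standard inner product $\langle u, v\rangle_I = v^*u$ directly.

```python
import numpy as np

def inner(u, v):
    """Standard inner product <u, v> = v* u = sum of u_i * conj(v_i)."""
    return v.conj() @ u

A = np.array([[1 + 1j, 2 + 0j],
              [0 + 0j, 3 - 1j]])
x = np.array([1 + 2j, -1j])
y = np.array([2 + 0j, 1 + 1j])

# <x, Ay> = <A* x, y> for the standard inner product
assert np.isclose(inner(x, A @ y), inner(A.conj().T @ x, y))
```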
2.2. Length, Orthogonality, and Projection onto a Line

The length of a vector x in an inner product space is defined as $\|x\| = \langle x, x\rangle^{1/2}$. For the standard inner product in C^n this gives $\|x\| = (x^*x)^{1/2} = \left(\sum_{i=1}^n |x_i|^2\right)^{1/2}$.

We now use the inner product to define orthogonality and orthogonal projection. Note how the proofs use the properties in Definition 2.1 and do not rely on any specific formula for ⟨x, y⟩.

Theorem 2.3. If ⟨x, y⟩ = 0, then $\|x + y\|^2 = \|x\|^2 + \|y\|^2$.

Proof. Using linearity,

(2.1) $\|x + y\|^2 = \langle x + y, x + y\rangle = \langle x, x\rangle + \langle x, y\rangle + \langle y, x\rangle + \langle y, y\rangle.$

Since ⟨x, y⟩ = 0, we have $\|x + y\|^2 = \|x\|^2 + \|y\|^2$.
Remark 2.4. In a real inner product space, ⟨x, y⟩ = ⟨y, x⟩ and equation (2.1) becomes $\|x + y\|^2 = \langle x, x\rangle + 2\langle x, y\rangle + \langle y, y\rangle$. In this case, $\|x + y\|^2 = \|x\|^2 + \|y\|^2$ if and only if ⟨x, y⟩ = 0, and thus the converse of Theorem 2.3 holds. However, the converse need not hold in a complex inner product space. For example, in C² with the usual inner product, if we take x = (0, 1) and y = (0, i), then ⟨x, y⟩ ≠ 0, but $\|x + y\|^2 = \|x\|^2 + \|y\|^2$.

Theorem 2.3 is the vector version of the Pythagorean Theorem and motivates the following definition.

Definition 2.5. Vectors x and y in a unitary space V are said to be orthogonal if ⟨x, y⟩ = 0. We say a set of vectors is orthogonal if each pair of distinct vectors in the set is orthogonal.

Theorem 2.6. An orthogonal set of nonzero vectors is linearly independent.

Proof. Let S be an orthogonal set of nonzero vectors, and let $\{x_1, \ldots, x_k\}$ be any finite subset of S. Suppose $\sum_{i=1}^k c_i x_i = 0$. Since $\langle x_i, x_j\rangle = 0$ for i ≠ j, we have

$0 = \left\langle \sum_{i=1}^k c_i x_i,\ x_j \right\rangle = \sum_{i=1}^k c_i \langle x_i, x_j\rangle = c_j \langle x_j, x_j\rangle.$

Since $x_j \ne 0$, we have $c_j = 0$ for j = 1, ..., k, and hence $x_1, \ldots, x_k$ are linearly independent.

Definition 2.7. A set of vectors is orthonormal if the set is orthogonal, and every vector in the set has length one.

For the standard inner product, the unit coordinate vectors $e_1, \ldots, e_n$ are an orthonormal set in C^n. In R² the vectors x = (cos θ, sin θ) and y = (−sin θ, cos θ) are orthonormal. The set {(1, 2), (2, −1)} is orthogonal but not orthonormal, since the vectors do not have length one. If $x_1, \ldots, x_k$ are nonzero and orthogonal, then the set $\left\{ \frac{x_i}{\|x_i\|} : i = 1, \ldots, k \right\}$ is orthonormal.

Consider now the vector space of continuous, real-valued functions on [−π, π] with the inner product $\langle f, g\rangle = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)g(x)\,dx$. It can be shown that the infinite set of functions

$\tfrac{1}{\sqrt{2}},\ \cos x,\ \sin x,\ \cos 2x,\ \sin 2x,\ \cos 3x,\ \sin 3x,\ \ldots,\ \cos nx,\ \sin nx,\ \ldots$

is an orthonormal set, a key fact for Fourier series. Alternatively, one can work with complex-valued functions and use the complex exponentials, $\{e^{inx} \mid n \in \mathbb{Z}\}$, with the inner product $\langle f, g\rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\overline{g(x)}\,dx$.
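The orthonormality of the trigonometric system can be checked numerically by approximating the integral inner product with a Riemann sum; this NumPy sketch is purely illustrative (a numerical approximation, not a proof).

```python
import numpy as np

# Numerical check that 1/sqrt(2), cos x, sin x are orthonormal under
# <f, g> = (1/pi) * integral of f*g over [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200_001)
dx = x[1] - x[0]

def inner(f, g):
    return np.sum(f(x) * g(x)) * dx / np.pi   # Riemann-sum approximation

def const(t):
    return np.full_like(t, 1 / np.sqrt(2))

assert abs(inner(const, const) - 1) < 1e-3    # unit length
assert abs(inner(np.cos, np.cos) - 1) < 1e-3  # unit length
assert abs(inner(np.cos, np.sin)) < 1e-3      # orthogonal
```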
We now turn to orthogonal projection. Let x and y be vectors in a unitary space V, with x ≠ 0. For the orthogonal projection of y onto x, we want a scalar multiple of x, say p = αx, such that y − p is orthogonal to x (see Figure 2.1). Thus, we need to have

(2.2) $0 = \langle y - \alpha x, x\rangle = \langle y, x\rangle - \alpha\langle x, x\rangle,$
Figure 2.1. Orthogonal projection of y onto x.
and hence $\alpha = \dfrac{\langle y, x\rangle}{\langle x, x\rangle}$. Note that x ≠ 0 guarantees the denominator is not zero.

Definition 2.8. For vectors x and y in a unitary space V, with x ≠ 0, the orthogonal projection of y onto x is the vector

(2.3) $\mathrm{proj}_x\, y = \frac{\langle y, x\rangle}{\langle x, x\rangle}\, x.$

Theorem 2.9. Let x ≠ 0, and let $p = \mathrm{proj}_x\, y$. Then
(1) ‖p‖ ≤ ‖y‖, and equality holds if and only if x and y are linearly dependent.
(2) ‖y − p‖ ≤ ‖y‖, and equality holds if and only if x and y are orthogonal.
(3) p is the point on the line spanned by x which is closest to y.

Proof. We have y = p + (y − p) and y − p is orthogonal to p. So

$\|y\|^2 = \|p\|^2 + \|y - p\|^2.$

Hence, ‖p‖ ≤ ‖y‖, with equality if and only if y = p. Since p is a scalar multiple of x, we thus have equality if and only if x and y are linearly dependent. Also, ‖y − p‖ ≤ ‖y‖, with equality if and only if p = 0, in which case ⟨x, y⟩ = 0. For property (3), consider a point αx on the line spanned by x. Then we have y − αx = (y − p) + (p − αx). Since (y − p) and (p − αx) are orthogonal, $\|y - \alpha x\|^2 = \|y - p\|^2 + \|p - \alpha x\|^2$, which shows that ‖y − αx‖ is minimized when αx = p.
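Formula (2.3) and the properties in Theorem 2.9 can be checked on a concrete real example (an illustrative NumPy sketch; the vectors are arbitrary choices):

```python
import numpy as np

# proj_x y = (<y, x>/<x, x>) x; the residual y - p is orthogonal to x,
# and p is the closest point to y on the line spanned by x.
x = np.array([3., 4.])
y = np.array([5., 1.])

p = (y @ x) / (x @ x) * x
assert np.isclose((y - p) @ x, 0.0)                            # residual orthogonal to x
assert np.linalg.norm(p) <= np.linalg.norm(y)                  # Theorem 2.9 (1)
assert np.linalg.norm(y - p) <= np.linalg.norm(y - 2.0 * x)    # p beats another point on the line
```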
From Theorem 2.9 we get the famous Cauchy–Schwarz inequality. There are various proofs of this; our approach here is in [Bret09].

Theorem 2.10 (Cauchy–Schwarz inequality). For any x, y in a unitary space V, we have

(2.4) $|\langle x, y\rangle| \le \|x\|\,\|y\|.$

Equality holds in (2.4) if and only if x and y are linearly dependent.

Proof. The inequality certainly holds for x = 0. Now assume x ≠ 0 and let $p = \mathrm{proj}_x\, y$. Then ‖p‖ ≤ ‖y‖ gives

(2.5) $\|p\| = \left\| \frac{\langle y, x\rangle}{\langle x, x\rangle}\, x \right\| = \frac{|\langle y, x\rangle|}{\langle x, x\rangle}\, \|x\| = \frac{|\langle y, x\rangle|}{\|x\|} \le \|y\|.$
Hence, $|\langle x, y\rangle| \le \|x\|\,\|y\|$. Equality holds in (2.5) if and only if p = y; by Theorem 2.9, this occurs if and only if x and y are linearly dependent.

Theorem 2.11. Let V be a finite-dimensional inner product space. Then V has an orthonormal basis. If $B = \{b_1, \ldots, b_n\}$ is an orthonormal basis and $x = \sum_{i=1}^n \alpha_i b_i$, then $\alpha_i = \langle x, b_i\rangle$. For any x, y in V, we have

$\langle x, y\rangle = \sum_{i=1}^n \langle x, b_i\rangle \overline{\langle y, b_i\rangle}.$

Proof. Let n be the dimension of V; we use induction on n to prove V has an orthonormal basis. If V is one dimensional and v is nonzero, then $\left\{\frac{v}{\|v\|}\right\}$ is an orthonormal basis. Assume the result holds for n = k and suppose the dimension of V is k + 1. Let U be a k-dimensional subspace of V. By the induction hypothesis, U has an orthonormal basis. Let $B = \{b_1, \ldots, b_k\}$ be an orthonormal basis for U. Choose x ∉ U. Then $\mathrm{proj}_{b_i} x = \langle x, b_i\rangle b_i$. Set

$y = x - \sum_{i=1}^k \langle x, b_i\rangle b_i.$

Since x ∉ U, we know y ∉ U, and hence $\{b_1, \ldots, b_k, y\}$ is a basis for V. Using the fact that B is orthonormal, we see that for 1 ≤ j ≤ k,

$\langle y, b_j\rangle = \langle x, b_j\rangle - \sum_{i=1}^k \langle x, b_i\rangle \langle b_i, b_j\rangle = \langle x, b_j\rangle - \langle x, b_j\rangle \langle b_j, b_j\rangle = 0.$

Hence, if we put $b_{k+1} = \frac{y}{\|y\|}$, the set $\{b_1, \ldots, b_k, b_{k+1}\}$ is an orthonormal basis
for V. The rest follows easily from the orthonormality of B; we leave the details as an exercise.

Note that x = ∑_{i=1}^{n} ⟨x, b_i⟩ b_i is telling us the coordinates of x with respect to the orthonormal basis B are ⟨x, b_i⟩, i = 1, . . . , n. The equation

⟨x, y⟩ = ∑_{i=1}^{n} ⟨x, b_i⟩ \overline{⟨y, b_i⟩}

may then be rewritten as ⟨x, y⟩ = [y]_B∗ [x]_B. So Theorem 2.11 tells us that, when we use coordinates relative to an orthonormal basis, a general inner product on C^n is computed using the formula for the standard inner product.
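A quick numerical illustration of Theorem 2.11 (a NumPy sketch; the basis and the vectors are illustrative choices): the columns of any unitary matrix form an orthonormal basis, the coordinates of x are the inner products ⟨x, b_i⟩, and the inner product is then computed from coordinates just as the standard inner product.

```python
import numpy as np

# Columns of a rotation matrix: an orthonormal basis of R^3 (illustrative).
theta = 0.7
B = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
basis = [B[:, i] for i in range(3)]
x = np.array([1.0, -2.0, 0.5])
y = np.array([0.3, 4.0, -1.0])

# Coordinates with respect to the basis: alpha_i = <x, b_i> = b_i* x.
coords_x = np.array([np.vdot(b, x) for b in basis])
coords_y = np.array([np.vdot(b, y) for b in basis])

# x is rebuilt from its coordinates, and <x, y> equals the standard
# inner product of the coordinate vectors: sum_i <x,b_i> conj(<y,b_i>).
x_rebuilt = sum(c * b for c, b in zip(coords_x, basis))
ip_from_coords = np.sum(coords_x * np.conj(coords_y))
```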
2.3. Inner Products in C^n

We now take a closer look at the relationship between the standard inner product and general inner products on C^n. Let e_1, . . . , e_n be the unit coordinate vectors, and let ⟨· , ·⟩ be a general inner product on C^n. Let P be the n × n matrix with p_ij = ⟨e_j, e_i⟩ in entry ij. Using
standard coordinates,

⟨x, y⟩ = ⟨∑_{j=1}^{n} x_j e_j, ∑_{i=1}^{n} y_i e_i⟩ = ∑_{i=1}^{n} ∑_{j=1}^{n} \overline{y_i} p_ij x_j = y∗ P x.
Since the inner product is conjugate symmetric, p_ji = \overline{p_ij} and hence P = P∗; i.e., P is Hermitian. From the positivity property, we have x∗ P x > 0 for all nonzero vectors x.

Definition 2.12. An n × n Hermitian matrix P is said to be positive definite if x∗ P x > 0 for all nonzero vectors x.

Thus, for any inner product on C^n there is a positive definite matrix P such that ⟨x, y⟩ = y∗ P x; we shall denote this inner product as ⟨x, y⟩_P. When P = I this gives the standard inner product. Conversely, given any positive definite matrix P, the formula ⟨x, y⟩ = y∗ P x defines an inner product; the reader may check that the conditions of the definition are satisfied. Now we see how this ties in with Theorem 2.11.

Theorem 2.13. Let P be an n × n matrix. Then the following are equivalent:
(1) P is positive definite.
(2) ⟨x, y⟩ = y∗ P x defines an inner product on C^n.
(3) There is a nonsingular matrix B such that P = B∗B.

Proof. The equivalence of the first two statements has already been established. Now assume P is positive definite and define the inner product ⟨x, y⟩_P = y∗ P x. Let B = {b_1, . . . , b_n} be an orthonormal basis for C^n with respect to the inner product ⟨· , ·⟩_P. Let B^{−1} denote the nonsingular matrix which has b_j in column j. For any vector x, we have x = B^{−1}[x]_B, where [x]_B gives the coordinates of x with respect to the B-basis. So for x, y in C^n, we have Bx = [x]_B and By = [y]_B. From Theorem 2.11 we know the coordinates of x and y with respect to the B-basis are ⟨x, b_i⟩, i = 1, . . . , n and ⟨y, b_i⟩, i = 1, . . . , n, respectively, and

⟨x, y⟩_P = ∑_{i=1}^{n} ⟨x, b_i⟩ \overline{⟨y, b_i⟩} = ⟨[x]_B, [y]_B⟩_I = (By)∗(Bx) = y∗(B∗B)x.

Hence, for all x and y, we have y∗ P x = y∗(B∗B)x and so P = B∗B. Finally, to show (3) implies (1), suppose P = B∗B for some nonsingular matrix B. Then P is clearly Hermitian. For any nonzero x, we know Bx is nonzero, and so x∗ P x = x∗(B∗B)x = (Bx)∗(Bx) = ‖Bx‖² > 0. So P is positive definite.
The same arguments (without the complex conjugation bars) show any inner product on R^n may be expressed in the form ⟨x, y⟩ = y^T P x, where P is a real symmetric, positive definite matrix, and hence there is a real nonsingular B such that P = B^T B.
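Theorem 2.13 can be exercised numerically: starting from a nonsingular B, the matrix P = B∗B is Hermitian positive definite, and ⟨x, y⟩_P = y∗Px behaves like an inner product. A sketch with NumPy (the random data is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# A random complex matrix is nonsingular with probability one (illustrative).
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
P = B.conj().T @ B                      # P = B*B, positive definite by Thm 2.13

def ip_P(x, y):
    # <x, y>_P = y* P x
    return y.conj().T @ P @ x

x = np.array([1.0, 2.0j, -1.0])
y = np.array([0.5, 1.0, 1.0j])
```

The checks below confirm that P is Hermitian with positive eigenvalues, that ⟨x, x⟩_P is a positive real number, and that ⟨x, y⟩_P is conjugate symmetric.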
2.4. Orthogonal Complements and Projection onto a Subspace

Definition 2.14. If U is a subspace of a unitary space V, the orthogonal complement of U is U⊥ = {x ∈ V : ⟨x, u⟩ = 0 for all u ∈ U}.

Here are some basic properties of the orthogonal complement.

Theorem 2.15. Let V be a unitary space, and let U be a subspace of V. Then
(1) The orthogonal complement, U⊥, is a subspace of V.
(2) U ∩ U⊥ = {0}.
(3) If U ⊂ W, then W⊥ ⊂ U⊥.
(4) If V is finite dimensional, then dim(U) + dim(U⊥) = dim(V), and V = U ⊕ U⊥.
(5) If V is finite dimensional, then U⊥⊥ = U.

Proof. We leave (1) and (3) as exercises for the reader. For (2), note that if x ∈ U ∩ U⊥, then ⟨x, x⟩ = 0, and hence x = 0. Now suppose V is n-dimensional and U is k-dimensional. Let u_1, . . . , u_k be a basis for U. Then x ∈ U⊥ if and only if ⟨x, u_i⟩ = 0 for i = 1, . . . , k. Let A be the n × k matrix with columns u_1, . . . , u_k. We know there is a positive definite Hermitian matrix P such that ⟨x, y⟩ = y∗ P x, so ⟨x, u_i⟩ = u_i∗ P x. Hence, x ∈ U⊥ if and only if A∗ P x = 0, so U⊥ is the null space of A∗ P. Since A has rank k, and P is nonsingular, the k × n matrix A∗ P also has rank k, and thus the null space of A∗ P has dimension n − k. This, combined with (2), establishes (4). For (5), first note that U ⊆ U⊥⊥. Then from (4), we have dim(U⊥⊥) = n − dim(U⊥) = n − (n − k) = k = dim(U), so U⊥⊥ = U.

For a one-dimensional subspace U of C^n, that is, a line through the origin, the orthogonal complement U⊥ is an (n − 1)-dimensional subspace. An (n − 1)-dimensional subspace of an n-dimensional space is called a hyperplane. If we fix a nonzero vector a in an n-dimensional space, then the equation ⟨a, x⟩ = 0 defines a hyperplane through the origin. Over the real numbers and using the dot product, this equation takes the form a_1 x_1 + a_2 x_2 + · · · + a_n x_n = 0.

Definition 2.8 gives the orthogonal projection of y onto the one-dimensional subspace spanned by x. We now look at orthogonal projection onto a general subspace.
Let U be a subspace of an inner product space V, and let y ∈ V. There are two approaches to the definition of orthogonal projection: we can seek p ∈ U such that y − p ∈ U⊥, or we can seek p ∈ U such that ‖y − p‖ is minimized. Later in this section, we will see these two characterizations are equivalent; for now we take the first as the definition. There are several issues to consider: uniqueness, existence, and computing p. We start with the uniqueness question.

Theorem 2.16. Let U be a subspace of an inner product space V, and let y ∈ V. Then there is at most one vector p in U such that y − p ∈ U⊥.

Proof. Suppose we have p, q ∈ U with y − p ∈ U⊥ and y − q ∈ U⊥. Since U and U⊥ are both subspaces, q − p ∈ U and (y − p) − (y − q) = q − p ∈ U⊥. But U ∩ U⊥ = {0}, so q − p = 0 and hence q = p.
Definition 2.17. Let U be a subspace of an inner product space V, and let y ∈ V. If there exists a vector p ∈ U such that y − p ∈ U⊥, then p is called the orthogonal projection of y onto U and denoted p = proj_U y.

Note that y = p + (y − p) decomposes y into the sum of its projection onto U and a vector orthogonal to U. We have ‖y‖² = ‖p‖² + ‖y − p‖² and thus ‖p‖ ≤ ‖y‖, with equality if and only if p = y.

The more subtle question is whether proj_U y always exists. For a one-dimensional subspace U, we have seen the answer is yes; if x spans U, then formula (2.3) tells us how to compute proj_U y. The next theorem generalizes this result to any finite-dimensional subspace U, giving a formula for proj_U y in terms of an orthogonal basis for U.

Theorem 2.18. Let U be a nonzero, finite-dimensional subspace of an inner product space V. Let y ∈ V, and let u_1, . . . , u_k be an orthogonal basis for U. Then

(2.6) proj_U y = ∑_{i=1}^{k} proj_{u_i} y = ∑_{i=1}^{k} (⟨y, u_i⟩/⟨u_i, u_i⟩) u_i.
Setting p = proj_U y, the following holds: for any x in U with x ≠ p, we have ‖y − p‖ < ‖y − x‖.

Proof. Set p = ∑_{i=1}^{k} proj_{u_i} y. We need to show that p = proj_U y. Since u_1, . . . , u_k is an orthogonal basis, ⟨u_i, u_j⟩ = 0 when i ≠ j. Then for any basis vector u_j,

(2.7) ⟨p, u_j⟩ = ∑_{i=1}^{k} (⟨y, u_i⟩/⟨u_i, u_i⟩) ⟨u_i, u_j⟩ = ⟨y, u_j⟩.

Let u ∈ U. Then u = ∑_{j=1}^{k} a_j u_j and

⟨y, u⟩ = ∑_{j=1}^{k} \overline{a_j} ⟨y, u_j⟩ = ∑_{j=1}^{k} \overline{a_j} ⟨p, u_j⟩ = ⟨p, ∑_{j=1}^{k} a_j u_j⟩ = ⟨p, u⟩.

So ⟨y − p, u⟩ = 0 for all u ∈ U, which means y − p ∈ U⊥. Hence, p = proj_U y. Now suppose x ∈ U. Then p − x ∈ U and y − p ∈ U⊥, and so ‖y − p‖² + ‖p − x‖² = ‖y − x‖². So ‖y − p‖ ≤ ‖y − x‖ and equality holds only when x = p.
When the basis u_1, . . . , u_k is orthonormal, formula (2.6) simplifies to

p = proj_U y = ∑_{i=1}^{k} ⟨y, u_i⟩ u_i.

We now express this in matrix form for the standard inner product on C^n. Let B be the n × k matrix with the vector u_j in column j. The reader may check that

(2.8) p = proj_U y = ∑_{i=1}^{k} ⟨y, u_i⟩_I u_i = BB∗ y.
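Formula (2.8) in action (a NumPy sketch; the subspace and vectors are illustrative choices): with orthonormal columns in B, the matrix BB∗ projects onto U, and the residual y − p is orthogonal to U.

```python
import numpy as np

# Orthonormal basis for a 2-dimensional subspace U of R^4 (illustrative).
u1 = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)
u2 = np.array([0.0, 0.0, 1.0, 1.0]) / np.sqrt(2)
B = np.column_stack([u1, u2])

y = np.array([3.0, 1.0, 2.0, -4.0])
p = B @ B.conj().T @ y          # formula (2.8): proj_U y = B B* y
```

Here p comes out to (2, 2, −1, −1), the residual y − p = (1, −1, 3, −3) is orthogonal to both basis vectors, and projecting p again leaves it fixed.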
Earlier we said that an equivalent way to define the orthogonal projection of y on a subspace U is to find the point in U which is closest to y (provided such a point exists). Theorem 2.18 establishes this equivalence for a finite-dimensional U; we now use this to show the equivalence holds in general.

Theorem 2.19. Let U be a subspace of an inner product space V. Let y ∈ V. Then for x ∈ U, the following are equivalent:
(1) y − x ∈ U⊥.
(2) For any u ∈ U with u ≠ x, we have ‖y − x‖ < ‖y − u‖.

Proof. Suppose there exists x ∈ U such that y − x ∈ U⊥. Let u ∈ U. Then y − u = (y − x) + (x − u). Since y − x ∈ U⊥ and (x − u) ∈ U, the vectors (y − x) and (x − u) are orthogonal. Hence,

(2.9) ‖y − u‖² = ‖y − x‖² + ‖x − u‖² ≥ ‖y − x‖².

Equality holds in equation (2.9) if and only if x = u, so ‖y − x‖ < ‖y − u‖ for any u ∈ U with u ≠ x. (Yes, we did just repeat the argument at the end of the previous proof.) Conversely, suppose x ∈ U satisfies the condition in statement (2). Let u ∈ U, and let P be the subspace spanned by x and u. Since P ⊆ U, we know that ‖y − x‖ < ‖y − v‖ for any v ∈ P with v ≠ x. Since P is finite dimensional, Theorem 2.18 tells us proj_P y = p must exist, and ‖y − p‖ < ‖y − v‖ for any v ∈ P with v ≠ p. But then we must have p = x. So y − x ∈ P⊥. Hence, (y − x) is orthogonal to u. Since this holds for any u ∈ U, we have y − x ∈ U⊥.

If U is infinite dimensional, we are not guaranteed the existence of proj_U y.

Example 2.20. Let ℓ² be the set of all square summable complex sequences, that is, the space of all sequences {a_n}_{n=1}^{∞} of complex numbers such that ∑_{i=1}^{∞} |a_i|² converges. Using the Cauchy–Schwarz inequality, one can show that for sequences {a_n}_{n=1}^{∞}, {b_n}_{n=1}^{∞} in ℓ², the series ∑_{i=1}^{∞} a_i \overline{b_i} converges. The set ℓ² is an inner product space with vector sum and scalar multiplication defined componentwise and the inner product

⟨{a_n}_{n=1}^{∞}, {b_n}_{n=1}^{∞}⟩ = ∑_{i=1}^{∞} a_i \overline{b_i}.

Let U be the subspace of all sequences which have only a finite number of nonzero entries. The subspace U contains all of the unit coordinate sequences e_j, where e_j denotes the sequence with a one in position j and zeroes elsewhere. Then U⊥ = {0}. Let y = (1, 1/2, 1/3, 1/4, . . .) be the sequence with 1/n in position n. Then y ∈ ℓ², but for any x ∈ U the vector y − x has nonzero entries, so it cannot be in U⊥. Hence, U contains no vector x such that y − x is in U⊥, and so proj_U y does not exist.

A Hilbert space is an inner product space which is a complete metric space with respect to the metric induced by the inner product (that is, the distance from x to y is ‖x − y‖ = ⟨x − y, x − y⟩^{1/2}). For any n, the space C^n is a Hilbert space. The space ℓ² is an infinite-dimensional Hilbert space. In a Hilbert space one has
the following "closest point property" for closed, convex subsets. See [Youn88, pp. 26–28] for a proof.

Theorem 2.21. Let S be a nonempty, closed, convex set in a Hilbert space H. For any x ∈ H, there is a unique point y ∈ S such that ‖x − y‖ < ‖x − a‖ for all a ≠ y in S.

The problem with the subspace U in Example 2.20 is that it is not closed. The vector y = (1, 1/2, 1/3, 1/4, . . .) is not in U but is the limit of a sequence of points in U. Although y is not in U, it is in the closure of U. However, any subspace is a convex set, so when W is a closed subspace of a Hilbert space H, the closest point property holds for W, and thus for any x ∈ H, the orthogonal projection proj_W x will exist.

Returning to the finite-dimensional world, formula (2.6) gives proj_U y if we have an orthogonal basis for U. For the standard inner product on C^n, we now obtain a formula for computing proj_U y from an arbitrary basis for U. First we need the following fact about matrices.

Theorem 2.22. Let A be an n × k matrix. Then the four matrices A, A∗, AA∗, and A∗A all have the same rank.

Proof. We first show that A and A∗A have the same null space. We clearly have ker(A) ⊆ ker(A∗A). Now suppose A∗Ax = 0. Then x∗A∗Ax = 0. But x∗A∗Ax = (Ax)∗(Ax) = ‖Ax‖². So ‖Ax‖ = 0 and hence Ax = 0. So we have ker(A∗A) ⊆ ker(A). Hence, ker(A) = ker(A∗A). Since A and A∗A also have the same number of columns, k, the rank plus nullity theorem then tells us they must have the same rank. Now, since the row and column space of a matrix have the same dimension, we know A∗ has the same rank as A. Using A∗ in the first argument then tells us that A∗ and A∗∗A∗ = AA∗ have the same rank.

In particular, when k ≤ n and the n × k matrix A has linearly independent columns, the k × k matrix A∗A has rank k and thus is invertible. Let {a_1, . . . , a_k} be a basis for a k-dimensional subspace U of an n-dimensional inner product space V. Let A be the n × k matrix with a_j in column j.
The columns of A are linearly independent, and the k × k matrix A∗A is invertible. For y ∈ V, set p = proj_U y. Since p is a linear combination of the columns of A, we have p = Ax for some x in C^k. In the proof of Theorem 2.18 we saw that ⟨p, u⟩ = ⟨y, u⟩ for any u ∈ U. Since the columns of A are vectors from U, and entry j of A∗y is ⟨y, a_j⟩_I, we have A∗y = A∗p = A∗Ax. Hence x = (A∗A)^{−1}A∗y and

(2.10) p = proj_U y = Ax = A(A∗A)^{−1}A∗y.

For k = 1, equation (2.10) reduces to (2.3). If the columns of A are orthogonal, then A∗A is a diagonal matrix, and equation (2.10) reduces to (2.6). If the columns of A are orthonormal, then A∗A = I_k, and (2.10) reduces to (2.8).

Formula (2.10) typically appears as the solution to finding the best least squares fit. Consider a system of linear equations, Ax = b, where A is n × k. This system has a solution if and only if b is in the column space of A. When the system has no solution, we want to find the vector w in the column space of A which is as close as possible to b. Thus, letting U be the subspace spanned by the columns of A, we are looking for w = proj_U b. Equivalently, we want the vector w = Ax
which minimizes ‖Ax − b‖, or we want to minimize ‖Ax − b‖², which is the sum of the squares of the differences between the coordinates of Ax and b; hence the term "best least squares fit". If the columns of A are linearly independent, then x = (A∗A)^{−1}A∗b and w = Ax = A(A∗A)^{−1}A∗b.

We conclude this section with a fact needed in later chapters, both in the proof of the Jordan canonical form and in the proof of the spectral theorem for normal matrices.

Theorem 2.23. Let A be an n × n complex matrix. Then a subspace U of C^n is invariant under A if and only if U⊥ is invariant under A∗.

Proof. First we note that we are working here with the standard inner product in C^n (although one can formulate this theorem using the more general definition of adjoint). Suppose U is an A-invariant subspace. Let y ∈ U⊥, and let u be any vector in U. Then Au ∈ U and so ⟨u, A∗y⟩ = ⟨Au, y⟩ = 0, which shows that A∗y ∈ U⊥; hence U⊥ is invariant under A∗. Conversely, if U⊥ is invariant under A∗, then U⊥⊥ is invariant under A∗∗. But U⊥⊥ = U and A∗∗ = A, so U is A-invariant.
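Returning to formula (2.10) and the least squares discussion: a small numerical check with NumPy (the data is an illustrative choice, fitting a line c_0 + c_1 t through the points (0, 6), (1, 0), (2, 0)). The solution x = (A∗A)^{−1}A∗b gives the projection w = Ax of b onto the column space, and the residual b − w is orthogonal to the columns of A.

```python
import numpy as np

# Overdetermined system Ax = b with no exact solution (illustrative data).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

x = np.linalg.solve(A.T @ A, A.T @ b)   # x = (A*A)^{-1} A* b
w = A @ x                               # w = proj of b onto col(A)
```

The answer agrees with NumPy's built-in least squares solver, `np.linalg.lstsq`.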
2.5. Hilbert Spaces and Fourier Series

The Hilbert space ℓ² appeared in Example 2.20. Recall that a Hilbert space is a unitary space which is a complete metric space under the distance induced by the inner product. (A metric space is complete if every Cauchy sequence of points in the space converges to a point in the space.) We say an infinite sequence of vectors x_1, x_2, x_3, . . . converges to x if ‖x − x_n‖ → 0 as n → ∞.

Suppose H is an infinite-dimensional Hilbert space and {e_n} = e_1, e_2, e_3, . . . is an infinite sequence of orthonormal vectors in H. For each positive integer k, let U_k denote the k-dimensional subspace spanned by e_1, . . . , e_k. For each k and any x in H, we put

p_k = proj_{U_k} x = ∑_{i=1}^{k} ⟨x, e_i⟩ e_i.

Since ‖p_k‖² ≤ ‖x‖², we have

(2.11) ∑_{i=1}^{k} |⟨x, e_i⟩|² ≤ ‖x‖²

for any positive integer k. The inequality (2.11) says that the partial sums of the infinite series ∑_{i=1}^{∞} |⟨x, e_i⟩|² are bounded above by ‖x‖². Hence, the series converges and

(2.12) ∑_{i=1}^{∞} |⟨x, e_i⟩|² ≤ ‖x‖²;
this is known as Bessel's inequality. The question now is when equality holds in (2.12). When does the vector sum ∑_{i=1}^{∞} ⟨x, e_i⟩ e_i converge, and when does it converge to x? The first step is a criterion for the convergence of the infinite sum ∑_{n=1}^{∞} c_n e_n.

Theorem 2.24. Let {e_n}_{n=1}^{∞} be an orthonormal sequence in a Hilbert space H, and let {c_n}_{n=1}^{∞} be a sequence of scalars. Then the infinite vector sum ∑_{n=1}^{∞} c_n e_n converges if and only if ∑_{n=1}^{∞} |c_n|² converges.

Proof. For each integer k, let y_k = ∑_{n=1}^{k} c_n e_n be the kth partial sum of the series ∑_{n=1}^{∞} c_n e_n. Since the e_n's are orthonormal,

‖y_{m+p} − y_m‖² = ∑_{i=m+1}^{m+p} |c_i|².

If ∑_{n=1}^{∞} |c_n|² converges, then ∑_{i=m+1}^{m+p} |c_i|² → 0 as m → ∞, and hence the sequence {y_k}_{k=1}^{∞} is a Cauchy sequence in H. Since H is complete, this means the sequence of partial sums {y_k}_{k=1}^{∞} converges to a point in H; hence the series ∑_{n=1}^{∞} c_n e_n converges.

Conversely, suppose the series ∑_{n=1}^{∞} c_n e_n converges to the sum y. Using the continuity of the inner product and the fact that the e_i's are orthonormal, we have ⟨y, e_i⟩ = ⟨∑_{n=1}^{∞} c_n e_n, e_i⟩ = c_i. Bessel's inequality then tells us the series ∑_{n=1}^{∞} |c_n|² converges.
Definition 2.25. A sequence of orthonormal vectors e_1, e_2, e_3, . . . in a Hilbert space H is said to be complete if the only vector in H which is orthogonal to every e_n is the zero vector.

For example, in ℓ², if e_n is the sequence with a one in the nth entry and zeros elsewhere, then the sequence {e_n}_{n=1}^{∞} is a complete orthonormal sequence.

Theorem 2.26. Let {e_n}_{n=1}^{∞} = e_1, e_2, e_3, . . . be an orthonormal sequence in a Hilbert space H. Then the following are equivalent.
(1) The sequence {e_n}_{n=1}^{∞} is complete.
(2) For any x ∈ H, we have x = ∑_{i=1}^{∞} ⟨x, e_i⟩ e_i.
(3) For any x ∈ H, we have ‖x‖² = ∑_{i=1}^{∞} |⟨x, e_i⟩|².
(4) If U is the subspace spanned by {e_n}_{n=1}^{∞}, then the closure of U is H.
Proof. First we show the first three statements are equivalent. Suppose the sequence {e_n}_{n=1}^{∞} is complete. Given x in H, we know the sum ∑_{i=1}^{∞} ⟨x, e_i⟩ e_i converges; put y = ∑_{i=1}^{∞} ⟨x, e_i⟩ e_i, and note that ⟨y, e_n⟩ = ⟨x, e_n⟩ for all n. Hence, y − x is orthogonal to each e_n. Since the sequence {e_n}_{n=1}^{∞} is complete, we must have y − x = 0, and so ∑_{i=1}^{∞} ⟨x, e_i⟩ e_i = y = x.

Now suppose (2) holds, and let p_k = ∑_{i=1}^{k} ⟨x, e_i⟩ e_i be the kth partial sum of the series. Then ‖p_k‖² = ∑_{i=1}^{k} |⟨x, e_i⟩|²; since the sequence {p_k} converges to x, continuity of the length function gives (3).

Now suppose (3) holds. Suppose we have x ∈ H with ⟨e_n, x⟩ = 0 for all n. Then (3) implies ‖x‖ = 0 and hence x = 0, so the orthonormal sequence {e_n} is complete.

It remains to deal with condition (4). Suppose the sequence {e_n}_{n=1}^{∞} is complete, and let U be the subspace spanned by {e_n}_{n=1}^{∞}. Let x be in H. From (2) we know ∑_{i=1}^{∞} ⟨x, e_i⟩ e_i converges and has sum x. But ∑_{i=1}^{∞} ⟨x, e_i⟩ e_i is in the closure of U,
hence the closure of U contains any x ∈ H. Conversely suppose the closure of U is H. Consider any x ∈ H such that ⟨x, e_n⟩ = 0 for all n. Let E = {y ∈ H : ⟨x, y⟩ = 0}. Since ⟨x, e_n⟩ = 0 for all n, we see U ⊂ E, and so the continuity of the inner product tells us the closure of U is contained in E; hence H ⊆ E. But then we have x ∈ E, so ⟨x, x⟩ = 0 and x = 0.

The point here is that when we have a complete orthonormal sequence in a Hilbert space, every vector in the space can be represented as the sum shown in (2) of Theorem 2.26. The most well-known example is the Fourier expansion of a real or complex-valued function. We have already mentioned the space C[−π, π], of real-valued, continuous functions on [−π, π] with the inner product

⟨f, g⟩ = (1/π) ∫_{−π}^{π} f(x)g(x) dx,

and noted that the set of functions {1/√2, cos nx, sin nx : n = 1, 2, 3, . . .} is orthonormal with respect to this inner product. However, the space C[−π, π] is not a Hilbert space because it is not complete; there are Cauchy sequences of continuous functions which do not converge to a continuous function. We need to use a larger set of functions. A complete treatment requires more analysis than we care to get into here; we mention some key facts and move on.

It is convenient to consider complex-valued functions as well as real-valued functions, as this allows the option of using the functions {e^{inx} : n ∈ Z} for the orthonormal sequence. The identities

e^{inx} = cos nx + i sin nx,
(e^{inx} + e^{−inx})/2 = cos nx,
(e^{inx} − e^{−inx})/(2i) = sin nx
show that the subspace spanned by the functions {e^{inx} : n ∈ Z} is the same as that spanned by {1/√2, cos nx, sin nx : n = 1, 2, 3, . . .}.

Definition 2.27. For real numbers a < b, the set L²(a, b) is the set of Lebesgue measurable functions f : (a, b) → C which are square integrable, i.e., such that ∫_a^b |f(t)|² dt is finite.

The set L²(a, b) is an inner product space with the inner product ⟨f, g⟩ = ∫_a^b f(t) \overline{g(t)} dt. The thoughtful reader will protest that this is not quite right, because for a noncontinuous function it is possible to have ∫_a^b |f(t)|² dt = 0 even if f is not the zero function. For example, f could be zero everywhere except at a finite number of points or on a set of measure zero. Strictly speaking, we need to consider two functions equivalent if they agree except on a set of measure zero, and then consider the elements in our space L²(a, b) to be equivalence classes of functions. The reader who finds this confusing or intimidating may just ignore it, or seek enlightenment elsewhere in texts on real analysis or Fourier analysis. Our goal here is simply to exhibit the classic formulas for Fourier coefficients and Fourier series as a special case of the expansion x = ∑_{i=1}^{∞} ⟨x, e_i⟩ e_i, where {e_n} is a complete orthonormal system for an infinite-dimensional Hilbert space.

So, ignoring the analytic difficulties, and moving on, consider the Hilbert space L²(−π, π), and the orthonormal sequence

(2.13) {1/√2, cos nx, sin nx : n = 1, 2, 3, . . .}.

This is a complete orthonormal sequence; proving this takes some hard work, and we refer the reader elsewhere [Kör88, Youn88] for that story. For a real-valued function f ∈ L²(−π, π), the coefficients in the expansion x = ∑_{i=1}^{∞} ⟨x, e_i⟩ e_i take the form

a_0 = (1/(√2 π)) ∫_{−π}^{π} f(x) dx = ⟨f, 1/√2⟩,
a_n = (1/π) ∫_{−π}^{π} f(x) cos nx dx = ⟨f, cos nx⟩,
b_n = (1/π) ∫_{−π}^{π} f(x) sin nx dx = ⟨f, sin nx⟩.

The a_j's and b_j's are the Fourier coefficients of the function f. Let V_k denote the subspace spanned by {1/√2, cos nx, sin nx : 1 ≤ n ≤ k}. The orthogonal projection of f onto V_k is

f_k = a_0/√2 + ∑_{j=1}^{k} (a_j cos jx + b_j sin jx).

From the orthonormality of the vectors {1/√2, cos nx, sin nx : 1 ≤ n ≤ k}, we have

‖a_0/√2 + ∑_{j=1}^{k} (a_j cos jx + b_j sin jx)‖² = a_0² + ∑_{n=1}^{k} (a_n² + b_n²).
Since f_k = a_0/√2 + ∑_{j=1}^{k} (a_j cos jx + b_j sin jx) is the orthogonal projection of f onto V_k, this gives

(2.14) a_0² + ∑_{n=1}^{k} (a_n² + b_n²) ≤ ‖f‖² = (1/π) ∫_{−π}^{π} (f(x))² dx.

Inequality (2.14) is valid for all k. Hence, the partial sums of the infinite series a_0² + ∑_{n=1}^{∞} (a_n² + b_n²) are bounded; since all terms of the series are nonnegative, this tells us the sum converges. Bessel's inequality takes the form

(2.15) a_0² + ∑_{n=1}^{∞} (a_n² + b_n²) ≤ ‖f‖² = (1/π) ∫_{−π}^{π} (f(x))² dx.

The fact that the orthonormal system (2.13) is complete gives equality in equation (2.15) and tells us that the sequence of orthogonal projections {f_k}_{k=1}^{∞} converges to f in the space L²(−π, π). Note that convergence here means convergence in the norm; that is, ‖f − f_k‖ → 0 as k → ∞. The issue of when the sequence of functions {f_k}_{k=1}^{∞} converges pointwise or uniformly to f is more subtle and difficult; we again refer the reader elsewhere [Kör88].
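These formulas are easy to test numerically. The sketch below (NumPy with simple midpoint quadrature; the choice f(x) = x is illustrative) computes the coefficients b_n for f(x) = x, for which a_0 and all a_n vanish by oddness and integration by parts gives b_n = 2(−1)^{n+1}/n. The partial sums of Bessel's inequality stay below ‖f‖² = (1/π)∫x² dx = 2π²/3 while approaching it, as Parseval's equality predicts.

```python
import numpy as np

# Midpoint grid on (-pi, pi) for simple numerical integration.
N = 20000
dx = 2 * np.pi / N
xs = -np.pi + (np.arange(N) + 0.5) * dx

f = xs                                   # f(x) = x, an odd function
norm_sq = np.sum(f ** 2) * dx / np.pi    # ||f||^2 = (1/pi) int f^2, ~ 2*pi^2/3

ks = np.arange(1, 201)
S = np.sin(np.outer(ks, xs))             # rows are sin(kx) on the grid
b = (S @ f) * dx / np.pi                 # b_k = (1/pi) int f(x) sin(kx) dx

partial = np.cumsum(b ** 2)              # partial sums in Bessel's inequality
```

With 200 coefficients, the partial sum already captures over 99% of ‖f‖².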
2.6. Unitary Transformations

We now look at linear transformations which preserve length. A length preserving linear transformation on a unitary space V must be injective, because T(x) = 0 if and only if ‖T(x)‖ = 0, and the fact that T preserves length then gives ‖x‖ = 0 and hence x = 0. For a finite-dimensional space V, any injective linear map T : V → V is also surjective, so a length preserving linear transformation on a finite-dimensional space must be a bijection, and hence is invertible. However, when V is infinite dimensional, the map T can be injective without being surjective. For example, consider the right shift operator on ℓ²; that is, for any x = (x_1, x_2, x_3, . . .) in ℓ² define T : ℓ² → ℓ² by T(x_1, x_2, x_3, . . .) = (0, x_1, x_2, x_3, . . .). Then T preserves length but is not surjective.

Definition 2.28. A linear transformation T : V → V on a unitary space V is said to be unitary if ‖T(x)‖ = ‖x‖ for all x in V and T is a bijection.

We first show that preserving length is equivalent to preserving the inner product.

Theorem 2.29. Let T : V → V be a linear transformation on a unitary space V. Then the following are equivalent.
(1) ⟨Tx, Ty⟩ = ⟨x, y⟩ for all x, y ∈ V.
(2) ‖Tx‖ = ‖x‖ for all x ∈ V.

Proof. Since ‖Tx‖² = ⟨Tx, Tx⟩, setting x = y in (1) yields (2). Obtaining (1) from (2) takes a bit more work. Assume (2) holds. Then for any x, y ∈ V, we have ⟨T(x + y), T(x + y)⟩ = ‖T(x + y)‖² = ‖x + y‖² = ⟨x + y, x + y⟩.
Using conjugate bilinearity,

‖Tx‖² + ⟨Tx, Ty⟩ + ⟨Ty, Tx⟩ + ‖Ty‖² = ‖x‖² + ⟨x, y⟩ + ⟨y, x⟩ + ‖y‖²,

and hence, since ‖Tx‖² = ‖x‖² and ‖Ty‖² = ‖y‖², we have

(2.16) ⟨Tx, Ty⟩ + ⟨Ty, Tx⟩ = ⟨x, y⟩ + ⟨y, x⟩.

Replace x with ix in (2.16) to get ⟨iTx, Ty⟩ + ⟨Ty, iTx⟩ = ⟨ix, y⟩ + ⟨y, ix⟩, and thus

(2.17) i⟨Tx, Ty⟩ − i⟨Ty, Tx⟩ = i⟨x, y⟩ − i⟨y, x⟩.

Divide (2.17) by i to get

(2.18) ⟨Tx, Ty⟩ − ⟨Ty, Tx⟩ = ⟨x, y⟩ − ⟨y, x⟩.

Add (2.16) and (2.18) and divide by two to get ⟨Tx, Ty⟩ = ⟨x, y⟩.
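The computation in this proof recovers ⟨x, y⟩ from lengths alone, and the steps (2.16) and (2.17) can be replayed numerically (a NumPy sketch with random illustrative vectors, using the standard inner product ⟨x, y⟩ = y∗x):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

nsq = lambda v: np.linalg.norm(v) ** 2
# As in (2.16): ||x+y||^2 - ||x||^2 - ||y||^2 = <x,y> + <y,x>
s1 = nsq(x + y) - nsq(x) - nsq(y)
# As in (2.17): replacing x by ix gives i<x,y> - i<y,x>
s2 = nsq(1j * x + y) - nsq(x) - nsq(y)
# Averaging (2.16) with (2.18) = (2.17)/i recovers <x,y> = y* x
recovered = (s1 - 1j * s2) / 2
```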
We now turn to the matrix version, using the standard inner product in C^n.

Theorem 2.30. Let A be an n × n matrix over C. Then the following are equivalent.
(1) ⟨Ax, Ay⟩_I = ⟨x, y⟩_I for all x, y ∈ C^n.
(2) ‖Ax‖ = ‖x‖ for all x ∈ C^n.
(3) A∗ = A^{−1}.
(4) A∗A = I.
(5) AA∗ = I.
(6) The columns of A are orthonormal.
(7) The rows of A are orthonormal.

Proof. Theorem 2.29 covers the equivalence of (1) and (2). The equivalence of (3), (4), and (5) follows from the fact that a square matrix has a right inverse if and only if it has a left inverse; in this case the left and right inverses must be equal and thus be a two-sided inverse for the matrix. Properties (4) and (6) are equivalent because the (i, j) entry of A∗A is the inner product of columns j and i of A. Similarly, properties (5) and (7) are equivalent because the (i, j) entry of AA∗ is the inner product of rows i and j of A. So, we have shown (1) and (2) are equivalent, and properties (3) through (7) are equivalent. We complete the proof by showing (1) and (3) are equivalent. We have ⟨Ax, Ay⟩_I = ⟨A∗Ax, y⟩_I. So (1) holds if and only if ⟨A∗Ax, y⟩ = ⟨x, y⟩ for all x, y, which is true if and only if A∗Ax = x for all x. Hence, (1) holds if and only if A∗A = I, which shows the equivalence of properties (1) and (3).

Definition 2.31. We say an n × n complex matrix A is unitary if A^{−1} = A∗. If A is a real matrix which is unitary, then A^{−1} = A^T; in this case we say A is orthogonal.

Definition 2.32. An n × n real or complex matrix A is said to be orthogonal if A^{−1} = A^T.
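The conditions of Theorem 2.30 can all be checked on a concrete unitary matrix; a small NumPy sketch (the 2 × 2 matrix below is an illustrative choice):

```python
import numpy as np

t = 0.6
# A unitary 2x2 matrix: cos(t) I + i sin(t) X, where X swaps the coordinates.
A = np.array([[np.cos(t), 1j * np.sin(t)],
              [1j * np.sin(t), np.cos(t)]])

rng = np.random.default_rng(2)
x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
```

The assertions below verify conditions (2) through (5) of the theorem, along with |det A| = 1.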
We leave the proofs of the following facts as exercises for the reader:
• If A and B are n × n unitary matrices, then AB is unitary.
• If A is a unitary matrix, then |det A| = 1.
2.7. The Gram–Schmidt Process and QR Factorization

The Gram–Schmidt orthogonalization process is an algorithm for constructing an orthogonal, or orthonormal, set of vectors from a linearly independent set. Let x_1, . . . , x_t be linearly independent. Set y_1 = x_1, y_2 = x_2 − proj_{y_1} x_2, y_3 = x_3 − proj_{y_1} x_3 − proj_{y_2} x_3, and, in general, for k = 1, . . . , t,

(2.19) y_k = x_k − ∑_{i=1}^{k−1} proj_{y_i} x_k = x_k − ∑_{i=1}^{k−1} (⟨x_k, y_i⟩/⟨y_i, y_i⟩) y_i.
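Formula (2.19) translates directly into code; a NumPy sketch (the helper name `gram_schmidt` and the test data are illustrative):

```python
import numpy as np

def gram_schmidt(X):
    """Apply (2.19) to the linearly independent columns of X;
    returns a matrix whose columns y_1, ..., y_t are orthogonal."""
    ys = []
    for k in range(X.shape[1]):
        y = X[:, k].astype(complex)
        for yi in ys:
            # subtract proj_{y_i} x_k = (<x_k, y_i>/<y_i, y_i>) y_i
            y = y - (np.vdot(yi, X[:, k]) / np.vdot(yi, yi)) * yi
        ys.append(y)
    return np.column_stack(ys)

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 3))
Y = gram_schmidt(X)
G = Y.conj().T @ Y        # Gram matrix; diagonal iff the columns are orthogonal
```

Normalizing the columns of Y gives the matrix Q of the QR factorization discussed next, with R = Q∗X upper triangular.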
Then the vectors y_1, . . . , y_t have the following properties:
(1) The vector y_k is a linear combination of x_1, . . . , x_k, and for each k = 1, . . . , t, span[x_1, . . . , x_k] = span[y_1, . . . , y_k].
(2) The vectors y_1, . . . , y_t are orthogonal.

This can be proven by induction; we leave this as an exercise. Set U_k = span[x_1, . . . , x_k]; then y_{k+1} = x_{k+1} − proj_{U_k} x_{k+1}. From the orthogonal set y_1, . . . , y_t one can obtain an orthonormal set by using the vectors y_k/‖y_k‖.

This process can also be expressed as a matrix factorization. Let A be a nonsingular n × n matrix with columns x_1, . . . , x_n. Apply the Gram–Schmidt process to x_1, . . . , x_n to obtain an orthonormal set q_1, . . . , q_n. Each vector q_k is a linear combination of x_1, . . . , x_k, so we have q_k = ∑_{i=1}^{n} s_ik x_i where s_ik = 0 whenever i > k. Let S be the upper triangular matrix with s_ik in entry i, k, and let Q be the n × n matrix with q_k in column k. Then AS = Q. The matrix Q is unitary. Note that S is nonsingular and R = S^{−1} is also upper triangular. So A = QS^{−1} = QR, where Q is unitary and R is upper triangular. The R may be chosen to have positive diagonal entries, in which case this is known as the QR-factorization. In Chapter 8 we see another approach to the QR-factorization and see how this factorization is used in the famous QR-algorithm for computing eigenvalues and eigenvectors. However, at this point we content ourselves with using the QR-factorization to prove Hadamard's determinant inequality. This arises in connection with the following question. Suppose we have an n × n matrix A and M = max_{1≤i,j≤n} |a_ij|. How large can |det A| be?
Theorem 2.33 (Hadamard determinant inequality [Had93]). Let A be an n × n matrix with columns A_1, . . . , A_n. Then

(2.20) |det A| ≤ ‖A_1‖ ‖A_2‖ ‖A_3‖ · · · ‖A_n‖,

i.e., the modulus of the determinant of A is less than or equal to the product of the lengths of the columns of A. Equality holds if and only if either one of the columns of A is zero, or the columns are orthogonal.

Proof. If A is singular, then det A = 0, so the inequality clearly holds. Assume now that A is nonsingular, and let A = QR be the QR-factorization of A. Looking at column k, we have A_k = Q(R_k) = ∑_{i=1}^{k} r_ik Q_i. Since the column vectors Q_1, . . . , Q_n are orthonormal, this gives

(2.21) ‖A_k‖² = ∑_{i=1}^{k} |r_ik|² ‖Q_i‖² = ∑_{i=1}^{k} |r_ik|²,
so |r_kk| ≤ ‖A_k‖. Since Q is unitary and R is triangular,

|det A| = |det QR| = |det Q| |det R| = |r_11 r_22 · · · r_nn|,

and hence |det A| ≤ ‖A_1‖ ‖A_2‖ ‖A_3‖ · · · ‖A_n‖. Now, if A has a column of zeroes, then both sides of (2.20) are obviously zero. Otherwise, equality holds in (2.20) if and only if ‖A_k‖ = |r_kk| for k = 1, . . . , n. Equation (2.21) shows this happens if and only if R is a diagonal matrix. The columns of A are then scalar multiples of the columns of the orthogonal matrix Q, and hence are orthogonal.

Now, suppose M = max_{1≤i,j≤n} |a_ij|. Then we have ‖A_j‖ ≤ M√n for each column of A, so |det A| ≤ M^n n^{n/2}. Theorem 2.33 tells us that if equality holds, then the columns must be orthogonal. Furthermore, we must have M = |a_ij| for all of the entries of A. It now suffices to consider the case M = 1. Here is a way to construct an n × n matrix A with |a_ij| = 1 for all (i, j) and |det A| = n^{n/2}. Let ω = e^{2πi/n}. The numbers 1, ω, ω², . . . , ω^{n−1} are the nth roots of unity; i.e., they are the roots of the polynomial z^n − 1. Define the n × n matrix V(n) to be the matrix with ω^{(i−1)(j−1)} in entry i, j:

(2.22)  V(n) =
  [ 1   1          1           · · ·   1                ]
  [ 1   ω          ω²          · · ·   ω^{n−1}          ]
  [ 1   ω²         ω⁴          · · ·   ω^{2(n−1)}       ]
  [ ⋮   ⋮          ⋮                   ⋮                ]
  [ 1   ω^{n−1}    ω^{2(n−1)}  · · ·   ω^{(n−1)(n−1)}   ]

The matrix V(n) is a Vandermonde matrix (see Exercise 5 of Chapter 11; in the notation of that exercise, V(n) = V(1, ω, ω², . . . , ω^{n−1})). One can then show that the columns of V(n) are orthogonal and V(n)V(n)∗ = nI. So |det V(n)| = n^{n/2}. If you are familiar with group representations, then V(n) is the character table of the cyclic group of order n. Those familiar with the discrete Fourier transform will also recognize V(n).
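Both the general inequality (2.20) and the extremal matrix V(n) can be verified numerically; a NumPy sketch (the random matrix and the choice n = 5 are illustrative):

```python
import numpy as np

# Hadamard's inequality on a random matrix:
# |det A| <= product of the column lengths.
rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5))
hadamard_bound = np.prod(np.linalg.norm(A, axis=0))

# The extremal example (2.22): V(n) with omega = e^{2*pi*i/n}.
n = 5
omega = np.exp(2j * np.pi / n)
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
V = omega ** (i * j)     # 0-based; matches omega^{(i-1)(j-1)} in 1-based terms
```

The checks confirm |det A| ≤ ∏‖A_j‖, that every entry of V(n) has modulus one, that V(n)V(n)∗ = nI, and that |det V(n)| = n^{n/2}.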
2.8. Linear Functionals and the Dual Space

Let V be a vector space over a field F. A linear functional is a linear transformation from V to F. We shall denote linear functionals with lowercase letters, such as f, g, h, etc. Let V∗ be the set of all linear functionals on V. One can easily show that V∗ is a vector space over F, called the dual space of V. The range of any nonzero linear functional on V is F; hence, any nonzero linear functional has rank one. For an n-dimensional space V the null space of any nonzero linear functional has dimension n − 1.

Suppose V is n-dimensional, and let B = {v_1, . . . , v_n} be a basis for V. For each i = 1, . . . , n define a linear functional f_i by setting f_i(v_j) = δ_ij on the basis vectors and then extending to V by linearity. (The symbol δ_ij is the "Kronecker delta"; it takes the value zero when i ≠ j and the value one when i = j.) For v = ∑_{j=1}^{n} a_j v_j, we have f_i(v) = a_i. We claim the linear functionals f_1, . . . , f_n are a basis for V∗. To show they span V∗, let g be any linear functional on V. For each i = 1, . . . , n, let g(v_i) = c_i. Let f = ∑_{j=1}^{n} c_j f_j. Then f is a linear functional, and for each basis vector v_i, we have g(v_i) = c_i = f(v_i). Since the maps f and g are linear and agree on the basis vectors, we must have f = g, and so f_1, . . . , f_n span V∗. Now we show f_1, . . . , f_n are linearly independent. Suppose ∑_{j=1}^{n} a_j f_j = 0. Then for each v_i, we have ∑_{j=1}^{n} a_j f_j(v_i) = a_i = 0, so the functionals f_1, . . . , f_n are
linearly independent. So {f1 , . . . , fn } is a basis for V ∗ ; we call this the dual basis to {v1 , . . . , vn }. In particular, note that for a ﬁnitedimensional vector space V, we have dim(V) = dim(V ∗ ). Consider now the case where V is a unitary space; i.e., a real or complex vector space with inner product. Fix a vector v in V and deﬁne the map fv on V by fv (x) = x, v for all x. Then fv is a linear functional. We now show that when V is ﬁnite dimensional, every element of V ∗ has this form. Theorem 2.34. Let V be an ndimensional unitary space. Then for any linear functional f ∈ V ∗ , there is a vector v ∈ V such that f (x) = x, v holds for all x in V, that is, f = fv . The map T : V → V ∗ deﬁned by T (v) = fv is a bijection and satisﬁes T (av + bw) = aT (v) + bT (w) for all scalars a, b and vectors v, w ∈ V. Proof. Let V be an ndimensional unitary space, and let {v1 , . . . , vn } be an orthonormal basis for V. Let {f1 , . . . , fn } be the dual basis. For any linear funcn tional f , there is a unique choice of coeﬃcients a1 , . . . , an such that f = aj fj . Put v =
n
aj vj . For any basis vector vi , we have f (vi ) =
j=1
fv (vi ) = vi , v = vi ,
n
n
j=1
aj fj (vi ) = ai and
j=1
aj vj = ai . So the linear functionals f and fv agree on
j=1
the basis vectors, and hence must be equal.
This shows that the map T is surjective; we leave it as an exercise to the reader to check that T is injective and satisfies the property T(av + bw) = āT(v) + b̄T(w) for all scalars a, b and vectors v, w ∈ V. □

Theorem 2.34 need not hold in an infinite-dimensional space.

Example 2.35. Let V be the space of all complex infinite sequences which have only a finite number of nonzero coordinates. Then V is a unitary space with the standard inner product ⟨x, y⟩ = Σ_{i=1}^{∞} x_i ȳ_i. Consider the map f : V → C defined by f(x) = Σ_{i=1}^{∞} x_i. Then f ∈ V*. Let e_i denote the unit coordinate vector with a one in position i and zeroes elsewhere. Then we have f(e_i) = 1 for each i, but for any v ∈ V, we have ⟨e_i, v⟩ = v̄_i. Since the sequence of all ones is not in V, there is no vector v ∈ V such that f = f_v.

Suppose V and W are two vector spaces over the field F, and suppose T : V → W is a linear transformation from V to W. If f ∈ W*, then the composition f ∘ T is in V*. Define the map T* : W* → V* by T*(f) = f ∘ T for every f in W*. The map T* is called the adjoint of T. In the case where V and W are unitary spaces, we now show this adjoint agrees with our previous notion of adjoint. Let x ∈ W, and let y ∈ V. Then T(y) is in W, so we can take the inner product ⟨T(y), x⟩ in W. We have

(2.23)   ⟨T(y), x⟩ = f_x(T(y)) = (f_x ∘ T)(y) = T*(f_x)(y).

Now, T*(f_x) is a linear functional on V, so we know there exists a unique vector z_x in V such that T*(f_x)(y) = ⟨y, z_x⟩ for every y in V. So, for all x ∈ W and y ∈ V, we have ⟨T(y), x⟩ = ⟨y, z_x⟩. Reverse the order on both sides to get

(2.24)   ⟨x, T(y)⟩ = ⟨z_x, y⟩.

Our previous definition of the adjoint T* of the linear transformation T : V → W was that T* was the linear transformation from W to V such that ⟨x, Ty⟩ = ⟨T*x, y⟩ for all x ∈ W and all y ∈ V. Comparing with (2.24), we see that z_x = T*(x).
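In coordinates, these facts are easy to test. The sketch below (our illustration in NumPy, using the standard inner product ⟨x, y⟩ = Σ x_i ȳ_i on C^n) checks the representation f = f_v of Theorem 2.34 and the adjoint identity, where the adjoint of multiplication by the matrix A is multiplication by the conjugate transpose A*:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
inner = lambda x, y: x @ y.conj()  # <x, y> = sum x_i conj(y_i)

# Riesz-style representation: the functional f(x) = c @ x equals f_v
# with v = conj(c), since <x, conj(c)> = sum x_i c_i
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
f = lambda x: c @ x
v = c.conj()
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert np.isclose(f(x), inner(x, v))

# Adjoint: <x, A y> = <A* x, y> with A* the conjugate transpose
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert np.isclose(inner(x, A @ y), inner(A.conj().T @ x, y))
```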
Exercises

1. Prove that an m × n matrix A over a field F has rank one if and only if there are nonzero vectors x ∈ F^m and y ∈ F^n such that A = xy^T.
2. Prove the parts of Theorem 2.11 left as an exercise to the reader.
3. Let U be a subspace of a unitary space V. Prove that U^⊥ is a subspace of V.
4. Prove that if U and W are subspaces of a unitary space V, and U ⊆ W, then W^⊥ ⊆ U^⊥.
5. Let U be a subspace of a unitary space V. Suppose {u_1, ..., u_k} is a basis for U. Show that x ∈ U^⊥ if and only if x is orthogonal to each of the basis vectors.
6. In Example 2.20 we said that the set U of sequences which have only a finite number of nonzero entries is a subspace of ℓ². Prove this.
7. Prove that the vectors y_1, ..., y_k defined by (2.19) have the following properties:
(a) The vector y_k is a linear combination of x_1, ..., x_k.
(b) For each k = 1, ..., t, we have span[x_1, ..., x_k] = span[y_1, ..., y_k].
(c) The vectors y_1, ..., y_t are orthogonal.
8. Let v be a nonzero column vector in C^n. Let P be the n × n matrix (1/(v*v)) vv*. Show that for any x we have proj_v x = Px. How does the formula for P simplify if ‖v‖ = 1?
9. Generalize Exercise 8 to the case of projection on a finite-dimensional subspace of C^n. Let u_1, ..., u_k be an orthonormal basis for a k-dimensional subspace U. Let B be the n × k matrix with u_j in column j.
(a) Show that proj_U x = BB*x.
(b) Let P = BB*. Explain geometrically why we should have P² = P, and check this algebraically.
10. Show that if A and B are n × n unitary matrices, then AB is unitary. Also, if A and B are n × n orthogonal matrices, then AB is orthogonal.
11. Show that if A is a unitary matrix, then |det A| = 1. If A is an orthogonal matrix, then det A = ±1.
12. Give an example of a unitary matrix which is not orthogonal. Give an example of an orthogonal matrix which is not unitary.
13. Show that any 2 × 2 real, orthogonal matrix has the form
[ cos θ   −sin θ ]       [ cos θ    sin θ ]
[ sin θ    cos θ ]  or   [ sin θ   −cos θ ]
for some θ. Note that the first matrix is rotation about the origin by angle θ and has determinant 1. The second matrix has determinant −1 and represents reflection across a line.
14. Show that for a nonsingular matrix A, the upper triangular matrix R in A = QR (where Q is unitary) may be chosen to have positive entries on the diagonal. In this case, show the QR-factorization is unique. That is, if A is a nonsingular matrix and A = Q_1R_1 = Q_2R_2, where Q_1, Q_2 are unitary and R_1, R_2 are upper triangular with positive diagonal entries, then we must have Q_1 = Q_2 and R_1 = R_2.
15. Suppose that A is an n × n matrix in which every entry is either 1 or −1. Use the Hadamard determinant inequality to show that |det A| ≤ n^(n/2). Equality holds if and only if the columns are orthogonal; note that
[ 1    1 ]        [ 1    1    1    1 ]
[ 1   −1 ]  and   [ 1   −1    1   −1 ]
                  [ 1    1   −1   −1 ]
                  [ 1   −1   −1    1 ]
are examples of such matrices.
16. Show that the functions
1/√2, cos x, sin x, cos 2x, sin 2x, cos 3x, sin 3x, ..., cos nx, sin nx, ...
are an orthonormal set in the vector space of continuous, real-valued functions on [−π, π] with the inner product ⟨f, g⟩ = (1/π) ∫_{−π}^{π} f(x)g(x) dx.
Hint: You can minimize the amount of computation by noting that the cosine functions are even, the sine functions are odd, and by using some helpful trig identities.
17. An easier calculation is to show that the functions e^(inx), n ∈ Z, are an orthogonal set. The orthogonality of the sine, cosine set can then be deduced from this by using the identities cos nx = (e^(inx) + e^(−inx))/2 and sin nx = (e^(inx) − e^(−inx))/(2i).
18. Let w be a vector of length one in R^n. Set H_w = I − 2ww^T; the matrix H_w is called a Householder matrix. Show that H_w has the following properties.
(a) H_w is an orthogonal matrix, H_w is symmetric, and H_w² = I.
(b) H_w(w) = −w.
(c) If y is orthogonal to w, then H_w(y) = y.
Thus, the Householder matrix H_w fixes the hyperplane orthogonal to w, and sends w to its negative. For an arbitrary vector y, let h = y − proj_w y. Then h is in the hyperplane orthogonal to w and y = h + proj_w(y). We then have H_w(y) = h − proj_w(y). Thus, the map H_w is reflection across the hyperplane orthogonal to w.
19. Check that the map f_v, defined in Section 2.8, is a linear functional.
20. Verify that the map T in Theorem 2.34 is injective and satisfies T(av + bw) = āT(v) + b̄T(w) for all scalars a, b and vectors v, w ∈ V.
Chapter 3
Eigenvalues, Eigenvectors, Diagonalization, and Triangularization
3.1. Eigenvalues

Let T : V → V be a linear transformation. If x is a nonzero vector such that Tx = λx for some scalar λ, we say λ is an eigenvalue of T and x is an associated eigenvector. The same definition holds for eigenvalues and eigenvectors of a square matrix. Geometrically, the line spanned by x is invariant under T; for a real number λ, we may visualize it as a stretching or contraction factor in the x direction. For most problems involving matrices and linear transformations, eigenvalues and eigenvectors play a key role in understanding what is happening.

Example 3.1. Let
A = [ 1  1 ],   x_1 = [ 1 ],   x_2 = [  1 ].
    [ 1  1 ]          [ 1 ]          [ −1 ]
Then Ax_1 = 2x_1 and Ax_2 = 0. So, 2 and 0 are eigenvalues of A, with associated eigenvectors x_1 and x_2, respectively.

Note that Ax = λx if and only if (A − λI)x = 0, so λ is an eigenvalue of A if and only if the null space of (A − λI) is nontrivial. Hence, we have the following.

Proposition 3.2. The number λ is an eigenvalue of A if and only if det(λI − A) = 0.

When A is an n × n matrix, det(xI − A) is a polynomial of degree n in the variable x. The eigenvalues of A are the roots of this polynomial. Since a polynomial of degree n has at most n distinct roots, an n × n matrix has at most n distinct eigenvalues.

Definition 3.3. If A is an n × n matrix, then p_A(x) = det(xI − A) is the characteristic polynomial of A.
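For instance, Example 3.1 can be reproduced with NumPy (an added illustration; np.linalg.eig returns eigenvalue/eigenvector pairs):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
evals, evecs = np.linalg.eig(A)

# Eigenvalues 2 and 0, as in Example 3.1
assert np.allclose(sorted(evals), [0.0, 2.0])
# Each column of evecs satisfies A x = lambda x
for lam, x in zip(evals, evecs.T):
    assert np.allclose(A @ x, lam * x)
```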
The roots of p_A(x) may not all lie in the field F. However, when F is algebraically closed, then all the eigenvalues of A will be in the field F; in particular this is the case for F = C. The fact that the eigenvalues are roots of the characteristic polynomial is important, but we urge the reader to think of eigenvalues in terms of the definition, Tx = λx. This is usually what matters for understanding what is going on. Except for small values of n, computing the characteristic polynomial is generally not a good way to find eigenvalues—consider the difficulties involved in computing the determinant, det(xI − A), when n is large, and then the problem of finding the roots of a polynomial of high degree. We shall say a bit more about computation of eigenvalues in Chapter 8, but we will not discuss the algorithms in detail. For that, we refer the reader to texts on numerical linear algebra such as [Hou64, Wilk65, GvL89, Stew73, Tref97], to mention a few.

Since det(xI − A) = det(S^(−1)(xI − A)S) = det(xI − S^(−1)AS), similar matrices have the same characteristic polynomial. Note also that Ax = λx if and only if S^(−1)AS(S^(−1)x) = λS^(−1)x, so x is an eigenvector of A if and only if S^(−1)x is an eigenvector of S^(−1)AS.

Remark 3.4. Eigenvalues are also called characteristic values, proper values, and latent roots. The characteristic polynomial is often defined as det(A − xI); if A is n × n, then det(A − xI) = (−1)^n det(xI − A).

For a triangular matrix A = triang(a_11, ..., a_nn), the eigenvalues are the diagonal entries a_11, ..., a_nn. This follows from the fact that A − λI is singular if and only if λ = a_ii for some i. If you insist on using the characteristic polynomial to do this, then the fact that A is triangular gives p_A(x) = (x − a_11)(x − a_22)···(x − a_nn). Note this formula is for the triangular case, not for a general matrix.

Sometimes we use left eigenvectors. We say the nonzero row vector (x_1, ..., x_n) is a left eigenvector of the n × n matrix A if (x_1, ..., x_n)A = λ(x_1, ..., x_n) for some scalar λ. For a column vector x we can write this as x^T A = λx^T. We then have A^T x = λx, so another way to think about left eigenvectors is that they are the eigenvectors of A^T, written as rows. Since det(λI − A) = det(λI − A^T), the matrices A and A^T have the same eigenvalues. But, they generally do not have the same eigenvectors.
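A brief check of these two observations (our own sketch, with an arbitrarily chosen triangular matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# A and A^T have the same eigenvalues...
assert np.allclose(sorted(np.linalg.eigvals(A)),
                   sorted(np.linalg.eigvals(A.T)))

# ...and an eigenvector x of A^T is a left eigenvector of A: x^T A = lambda x^T
lams, X = np.linalg.eig(A.T)
for lam, x in zip(lams, X.T):
    assert np.allclose(x @ A, lam * x)
```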
3.2. Algebraic and Geometric Multiplicity

For an eigenvalue λ of A, define V_λ = {x ∈ V | Ax = λx}. Then V_λ is the null space of A − λI and hence is a subspace. The subspace V_λ consists of all of the eigenvectors of A belonging to the eigenvalue λ, plus the zero vector.

Definition 3.5. If λ is an eigenvalue of A, then V_λ = {v | Av = λv} is called the eigenspace corresponding to λ and dim(V_λ) is called the geometric multiplicity of λ.

Example 3.6. Let J_n denote the n × n matrix of ones—i.e., every entry of J_n is a one. Then rank(J_n) = 1. Assuming n > 1, we see zero is an eigenvalue of J_n and the eigenspace V_0 has dimension (n − 1). So the geometric multiplicity of the
eigenvalue zero is n − 1. The other eigenvalue of J_n is n; the eigenspace V_n is one dimensional, spanned by the all-one vector. Example 3.1 is the case n = 2.

Note we did not use the characteristic polynomial in Example 3.6; we used the definition of eigenvalue and eigenvector and the obvious fact that J_n has rank one. Indeed, if asked to find the characteristic polynomial, we can now just write it down, because we have the eigenvalues. Thus, p_{J_n}(x) = x^(n−1)(x − n).

Definition 3.5 defines the term geometric multiplicity of an eigenvalue; now we consider a different type of multiplicity called algebraic multiplicity. Consider the characteristic polynomial p_A(x) of an n × n matrix A. Most of the time, there will be n distinct roots, λ_1, ..., λ_n, and p_A(x) = (x − λ_1)(x − λ_2)···(x − λ_n). When there are repeated roots, let λ_1, ..., λ_t be a list of the distinct roots of p_A(x), and let m_i be the multiplicity of λ_i as a root of p_A(x). Then p_A(x) = (x − λ_1)^(m_1)(x − λ_2)^(m_2)···(x − λ_t)^(m_t), where m_1 + m_2 + ··· + m_t = n. We say that m_i is the algebraic multiplicity of λ_i. When we simply say the multiplicity of λ, we mean algebraic multiplicity.

Theorem 3.7. Let λ be an eigenvalue of A with algebraic multiplicity m. Then dim(V_λ) ≤ m, i.e., the geometric multiplicity of λ is less than or equal to the algebraic multiplicity.

Proof. Let d = dim(V_λ), and let v_1, ..., v_d be a basis for V_λ. Choose vectors v_{d+1}, ..., v_n such that v_1, ..., v_n is a basis for F^n. Let S be the change of basis matrix and put B = S^(−1)AS, so that B is the matrix representation for the transformation T(x) = Ax with respect to the new basis. Since Tv_i = λv_i for i = 1, ..., d, we have
B = [ λI_d   B_12 ],
    [ 0      B_22 ]
where B_12 is d × (n − d) and B_22 is square of order (n − d). Then
p_A(x) = p_B(x) = (x − λ)^d det(xI − B_22),
so the algebraic multiplicity of λ is at least d. □
The set of eigenvalues of a matrix A is called the spectrum of A; we denote this as spec(A). For example, spec(Jn ) = {n, 0}. When we wish to list the eigenvalues of an n × n matrix A with repeated eigenvalues listed multiple times, we shall write eigen(A) = λ1 , . . . , λn . For example, we would write eigen(J4 ) = 4, 0, 0, 0.
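To illustrate the two multiplicities (an added NumPy sketch), for J_4 the eigenvalue 0 has algebraic multiplicity 3, and its geometric multiplicity is dim V_0 = 4 − rank(J_4) = 3, matching eigen(J_4) = 4, 0, 0, 0:

```python
import numpy as np

n = 4
J = np.ones((n, n))

# eigen(J_4) = 4, 0, 0, 0 (up to rounding)
evals = np.sort(np.linalg.eigvals(J).real)
assert np.allclose(evals, [0.0, 0.0, 0.0, 4.0])

# Geometric multiplicity of 0 is dim null(J - 0I) = n - rank(J) = 3
assert np.linalg.matrix_rank(J) == 1
```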
3.3. Diagonalizability

To understand the action of a linear transformation T(x) = Ax, it helps if the matrix A has a simple form. For example, if A is diagonal,
A = D = [ d_1   0    ···   0   ]
        [ 0     d_2  ···   0   ]
        [ .     .    .     .   ]
        [ 0     0    ···   d_n ],
then Ax = (d_1x_1, ..., d_nx_n). The map D simply multiplies the ith coordinate of x by d_i. In geometric terms, the component along the ith axis is stretched (or contracted) by the factor d_i. If A is a more complicated matrix, we want to find a simpler matrix which is similar to A, or, equivalently, we want to find a basis B of V such that [T]_B is as simple as possible. The first step is to determine when there is a basis B such that [T]_B is diagonal. If not, things are more complicated—that story will continue in the next chapter.

Definition 3.8. We say an n × n matrix A is diagonable or diagonalizable if there exists a nonsingular matrix S such that S^(−1)AS is diagonal. Equivalently, we say the linear transformation T(x) = Ax is diagonable if there is a basis, B, such that [T]_B is a diagonal matrix.

Note that [T]_B is a diagonal matrix if and only if each basis vector is an eigenvector of T. So T is diagonable if and only if there is a basis of eigenvectors. One could then translate this into the matrix version, but we are going to do a slightly more computational argument to see exactly how to form the matrix S.

Theorem 3.9. The n × n matrix A is diagonable (over F) if and only if there is a basis for F^n consisting of eigenvectors of A.

Proof. Suppose S^(−1)AS = D is diagonal, for some nonsingular matrix S. Then AS = SD. Letting S_j denote column j of S, we have AS_j = d_jS_j, where d_j is the jth diagonal entry of D. Hence, every column of S is an eigenvector of A. Since S is nonsingular, the n columns of S form a basis for F^n, giving a basis of eigenvectors. Note also that the diagonal entries of D are eigenvalues of A.

Conversely, suppose we have a basis B = {b_1, ..., b_n} for F^n in which each basis vector is an eigenvector of A. Let Ab_j = λ_jb_j. Let S be the matrix with the vector b_j in column j. Then AS = SD, where D is the diagonal matrix with λ_1, ..., λ_n in the diagonal entries. So S^(−1)AS = D and A is diagonable. □
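The proof's recipe for S is exactly what np.linalg.eig produces: its matrix of eigenvector columns. A short sketch (the 2 × 2 matrix is our own example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lams, S = np.linalg.eig(A)     # columns of S are eigenvectors of A

D = np.linalg.inv(S) @ A @ S   # should be diag(lams)
assert np.allclose(D, np.diag(lams))
```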
Now the question is, when does F^n have a basis consisting of eigenvectors of A? Equivalently, when does A have n linearly independent eigenvectors in F^n? One concern is whether the field F contains all of the eigenvalues of A. If an eigenvalue λ is in F, then F^n must contain nonzero vectors satisfying Ax = λx, because x is simply a solution to the linear system (A − λI)x = 0. However, when λ is not in F, and x is a nonzero vector in F^n, we see Ax is in F^n, but λx is not, so no nonzero vector in F^n can satisfy Ax = λx. So we cannot hope to diagonalize A over F unless all of the eigenvalues of A are in F. However, even in this case, there is more to do. We start with the simplest case, which is when A has n distinct eigenvalues.

Theorem 3.10. Let λ_1, ..., λ_t be distinct eigenvalues of the matrix A, with associated eigenvectors v_1, ..., v_t, respectively. Then the vectors v_1, ..., v_t are linearly independent.

Proof. We use induction on t; the result certainly holds for t = 1. Suppose

(3.1)   Σ_{i=1}^{t} c_i v_i = 0.
Apply A to each side, and use Av_i = λ_iv_i, to get

(3.2)   Σ_{i=1}^{t} c_i λ_i v_i = 0.

Multiply equation (3.1) by λ_1 and subtract from equation (3.2) to get Σ_{i=2}^{t} c_i(λ_i − λ_1)v_i = 0, or

(3.3)   c_2(λ_2 − λ_1)v_2 + c_3(λ_3 − λ_1)v_3 + ··· + c_t(λ_t − λ_1)v_t = 0.

By the induction hypothesis, we may assume v_2, ..., v_t are linearly independent. Hence, c_i(λ_i − λ_1) = 0 for i = 2, ..., t. Since (λ_i − λ_1) ≠ 0, we have c_i = 0 for i = 2, ..., t. But then equation (3.1) shows us c_1 is also zero, and hence v_1, ..., v_t are linearly independent. □

In particular, when an n × n matrix A has n distinct eigenvalues, the corresponding n eigenvectors will be a basis and A will be diagonable.

Theorem 3.11. If an n × n matrix A has n distinct eigenvalues in F, then A is diagonable over F.

With repeated eigenvalues, things are more complicated. Here are two possible extremes. The scalar matrix cI_n has only one eigenvalue c of multiplicity n, yet it is certainly diagonable (since it is already diagonal). Note that the eigenvalue c has geometric multiplicity n. Now consider the strictly upper triangular matrix
N = [ 0  1  0  0  ···  0 ]
    [ 0  0  1  0  ···  0 ]
    [ 0  0  0  1  ···  0 ]
    [ .  .  .  .  .    . ]
    [ 0  0  0  0  ···  1 ]
    [ 0  0  0  0  ···  0 ],
with ones in the superdiagonal positions and zeroes elsewhere. This matrix also has a single eigenvalue 0 of algebraic multiplicity n. However, rank(N ) = (n − 1), so N has a one dimensional nullspace and 0 has geometric multiplicity one. There is only one linearly independent eigenvector for 0, namely e1 (or any nonzero multiple of e1 ), and N cannot be diagonalized. Alternatively, note that if we did have S −1 N S = D, then D would have to be the zero matrix, which would give N = S0S −1 = 0, which is not true. We urge the reader to pay attention to this example; the matrix N plays a starring role in the next chapter. Theorem 3.12. Let A be an n×n matrix. Let λ1 , . . . , λt be the distinct eigenvalues of A with algebraic multiplicities m1 , . . . , mt . Assume λ1 , . . . , λt are in F. Then A is diagonable over F if and only if mi = dim(Vλi ) for each i = 1, . . . , t. Proof. Suppose the linear transformation T (x) = Ax is diagonable. Then there is a basis B such that [T ]B = λ1 Im1 ⊕ λ2 Im2 ⊕ · · · ⊕ λt Imt . Since the ﬁrst m1 vectors of B are in Vλ1 , the next m2 vectors of B are in Vλ2 , and so on, we see that
dim(V_{λ_i}) ≥ m_i for each i. But we know from Theorem 3.7 that dim(V_{λ_i}) ≤ m_i. Hence, m_i = dim(V_{λ_i}) for each i = 1, ..., t.

Conversely, suppose m_i = dim(V_{λ_i}) for each i = 1, ..., t. For each i = 1, ..., t, let B_i be a basis for the eigenspace V_{λ_i}. Then |B_i| = m_i. Put B = B_1 ∪ B_2 ∪ ··· ∪ B_t. Using Theorem 3.10 one can show that B is linearly independent. Then since |B| = m_1 + m_2 + ··· + m_t = n, we see B is a basis for F^n, and so T is diagonable. □

Theorem 3.12 tells us a matrix is diagonable if and only if, for each eigenvalue, the geometric and algebraic multiplicities are the same.

Suppose the n × n matrix A is diagonable with

(3.4)   S^(−1)AS = D = diag(λ_1, ..., λ_n).

Let x_1, ..., x_n be the columns of S. Then each x_i is an eigenvector of A with Ax_i = λ_ix_i. Let y_1^T, ..., y_n^T be the rows of S^(−1). From S^(−1)A = DS^(−1), we see that each y_i^T is a left eigenvector of A, with y_i^T A = λ_i y_i^T. Let D_i denote the n × n diagonal matrix with λ_i in the ith diagonal entry and zeroes elsewhere. Then D = Σ_{i=1}^{n} D_i and so A = Σ_{i=1}^{n} SD_iS^(−1). Since SD_iS^(−1) = λ_i x_i y_i^T, we get the decomposition

(3.5)   A = Σ_{i=1}^{n} λ_i x_i y_i^T.
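Decomposition (3.5) is easy to verify numerically (an added sketch; x_i are the columns of S and y_i^T the rows of S^(−1), with a 2 × 2 matrix of our own choosing):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lams, S = np.linalg.eig(A)
Sinv = np.linalg.inv(S)

# A = sum_i lambda_i x_i y_i^T, x_i = column i of S, y_i^T = row i of S^-1
terms = [lams[i] * np.outer(S[:, i], Sinv[i, :]) for i in range(len(lams))]
assert np.allclose(sum(terms), A)

# Rows of S^-1 are left eigenvectors: y_i^T A = lambda_i y_i^T
for i, lam in enumerate(lams):
    assert np.allclose(Sinv[i, :] @ A, lam * Sinv[i, :])
```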
3.4. A Triangularization Theorem

As we have seen, not all matrices are diagonable. The next chapter is about the Jordan canonical form, which can be viewed as the closest we can get to a diagonal form for an arbitrary matrix, over an algebraically closed field. This will take some work. We start with a triangularization theorem, sometimes attributed to Jacobi [Bru87, MacD33, Jac57].

Theorem 3.13. If A is an n × n matrix over the field F, and F contains the eigenvalues of A, there is a nonsingular matrix S over F such that S^(−1)AS = T is upper triangular.

Proof. We use induction on n. Let λ be an eigenvalue of A, and let w_1 be an associated eigenvector. Let W be a nonsingular matrix which has w_1 as its first column. Then W^(−1)AW has a block triangular form
[ λ   * ],
[ 0   A_22 ]
where A_22 is square of order n − 1 and the asterisk represents entries 2 through n of the first row. By the induction hypothesis, there is a nonsingular matrix V_2 of order n − 1 such that V_2^(−1)A_22V_2 is upper triangular. Put
V = [ 1   0  ].
    [ 0   V_2 ]
Then
V^(−1)W^(−1)AWV = [ λ   *               ]
                  [ 0   V_2^(−1)A_22V_2 ]
is upper triangular. Put S = WV and note that S^(−1) = V^(−1)W^(−1). □

Observe that the diagonal entries of the triangular matrix T must be the eigenvalues of A. In the first step of the proof we get to select any one of these eigenvalues,
so it is possible to find an S such that the eigenvalues of A appear in any desired order along the diagonal of the triangular matrix T = S^(−1)AS. In Chapter 5 we will see a unitary version of this theorem (that is, the matrix S is replaced by a unitary matrix), due to Schur [Sch09].
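The induction in the proof of Theorem 3.13 is easy to transcribe into code. The sketch below is ours, not the book's: for convenience it completes w_1 to a basis using a unitary W obtained from a QR factorization, so it in fact produces the unitary (Schur) version mentioned above:

```python
import numpy as np

def triangularize(A):
    """Return S with S^-1 A S upper triangular, following the induction."""
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    if n == 1:
        return np.eye(1, dtype=complex)
    # an eigenvector w1 for some eigenvalue of A
    _, vecs = np.linalg.eig(A)
    w1 = vecs[:, [0]]
    # W: a nonsingular (here unitary) matrix whose first column is parallel to w1
    W, _ = np.linalg.qr(np.hstack([w1, np.eye(n, dtype=complex)[:, :n - 1]]))
    B = np.linalg.inv(W) @ A @ W          # block form [[lam, *], [0, A22]]
    V = np.eye(n, dtype=complex)
    V[1:, 1:] = triangularize(B[1:, 1:])  # recurse on A22
    return W @ V

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0]])
S = triangularize(A)
T = np.linalg.inv(S) @ A @ S
assert np.allclose(np.tril(T, -1), 0, atol=1e-8)  # T is upper triangular
```

As the proof observes, the diagonal of T then carries the eigenvalues of A.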
3.5. The Geršgorin Circle Theorem

Finding the eigenvalues of a matrix is a nontrivial problem, so theorems which give information about the eigenvalues from easily computed quantities are of great interest. One of the most celebrated results of this type is the Geršgorin Circle Theorem [OTT49]. A complex number z = x + iy, where x, y are real, can be represented as the point (x, y) in the real plane R². For complex numbers z and w, the real number |z − w| gives the distance between the points z and w. For a nonnegative real number r and a fixed complex number a, the equation |z − a| = r represents a circle with center a and radius r; the inequality |z − a| ≤ r gives the closed disc centered at a with radius r.

For an n × n matrix A, let R_i = Σ_{j=1}^{n} |a_ij| and R̂_i = R_i − |a_ii| = Σ_{j≠i} |a_ij|. So, R_i is the sum of the absolute values of the entries of row i and R̂_i is the sum of the absolute values of the nondiagonal entries of row i.

Theorem 3.14. Let A be an n × n complex matrix, and let D_i denote the closed disc |z − a_ii| ≤ R̂_i. Then the eigenvalues of A lie in the union of the discs D_i.

Proof. Let λ be an eigenvalue of A, and let x be an associated eigenvector. Choose p so that |x_i| ≤ |x_p| for all i = 1, ..., n. Since x ≠ 0, we know x_p ≠ 0. We now show that λ is in disc D_p. Entry p of the vector equation Ax = λx gives

(3.6)   λx_p = Σ_{j=1}^{n} a_pj x_j.

Rearrange and divide both sides of equation (3.6) by x_p to get
λ − a_pp = Σ_{j≠p} a_pj (x_j / x_p).
Since |x_j / x_p| ≤ 1, the triangle inequality then gives
|λ − a_pp| ≤ Σ_{j≠p} |a_pj|,
and hence |λ − a_pp| ≤ R̂_p. □
The discs in the Geršgorin theorem are easily found. Also, since A and A^T have the same eigenvalues, the same result holds if we use column sums rather than
row sums. Thus, if we put C_j = Σ_{i=1}^{n} |a_ij| and Ĉ_j = C_j − |a_jj| = Σ_{i≠j} |a_ij|, then the eigenvalues of A are contained in the union of the disks |z − a_ii| ≤ Ĉ_i.

Definition 3.15. Let A be a square matrix. The spectral radius of A is ρ(A) = max{|λ| : λ is an eigenvalue of A}.

The disk |z − a_ii| ≤ R̂_i is contained in the disk |z| ≤ R_i. Hence, if we let R = max{R_1, ..., R_n}, then all of the Geršgorin disks are contained in the single disk |z| ≤ R and thus every eigenvalue of A is contained in |z| ≤ R. This gives the following bound on the spectral radius.

Theorem 3.16. Let A be an n × n complex matrix and R = max{R_1, ..., R_n}, where R_i = Σ_{j=1}^{n} |a_ij|. Then ρ(A) ≤ R.

Remark 3.17. A similar result holds for column sums.

Definition 3.18. We say A is diagonally dominant if |a_ii| > R̂_i for i = 1, ..., n.

If A is diagonally dominant, then none of the Geršgorin disks, |z − a_ii| ≤ R̂_i, contains the point 0, and hence 0 is not an eigenvalue of A. Thus, we have the following theorem.

Theorem 3.19. If A is diagonally dominant, then A is nonsingular.

We obtained Theorem 3.19 as a consequence of the Geršgorin theorem. It is also possible to derive the Geršgorin theorem from Theorem 3.19. Theorem 3.19 is the "recurring theorem" of Olga Taussky's article, "A recurring theorem on determinants," which traces the history of these results [OTT49]. See also [Var04] and [HJ13, Chapter 6].
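Both the circle theorem and the diagonal-dominance corollary are easy to probe numerically (an added sketch; the test matrix is our own choice): every eigenvalue lies in some disc |z − a_ii| ≤ R̂_i, and a diagonally dominant matrix is nonsingular:

```python
import numpy as np

A = np.array([[ 5.0,  1.0, -1.0],
              [ 0.5, -6.0,  2.0],
              [ 1.0,  1.0,  8.0]])

centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)   # R-hat_i

# Every eigenvalue lies in the union of the Gershgorin discs
for lam in np.linalg.eigvals(A):
    assert np.any(np.abs(lam - centers) <= radii + 1e-12)

# This A is diagonally dominant (|a_ii| > R-hat_i), hence nonsingular
assert np.all(np.abs(centers) > radii)
assert abs(np.linalg.det(A)) > 0
```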
3.6. More about the Characteristic Polynomial

Let A be an n × n matrix with characteristic polynomial p_A(x) = det(xI − A). If we compute det(xI − A) with the general determinant formula, one of the terms in the sum is the product of the n diagonal entries, (x − a_ii), of xI − A. Any other product in the sum for det(xI − A) must omit at least two diagonal entries, and hence will be a polynomial of degree at most (n − 2) in x. Hence, the coefficient of x^(n−1) in the polynomial p_A(x) is the same as the coefficient of x^(n−1) in the product (x − a_11)(x − a_22)···(x − a_nn), which is −(a_11 + a_22 + ··· + a_nn).

Definition 3.20. The trace of a square matrix is the sum of the diagonal entries.

Writing p_A(x) = x^n + c_1x^(n−1) + c_2x^(n−2) + ··· + c_(n−1)x + c_n, we have c_1 = −trace(A). Substituting x = 0 gives c_n = det(−A) = (−1)^n det A. These facts are special cases of the following formula for the coefficients c_k.

Theorem 3.21. Let A be an n × n matrix with characteristic polynomial p_A(x) = det(xI − A) = x^n + c_1x^(n−1) + c_2x^(n−2) + ··· + c_(n−1)x + c_n. The coefficient c_k is (−1)^k times the sum of the determinants of all of the principal k × k submatrices of A.
Proof. Note that c_k is the coefficient of x^(n−k). Consider how terms of degree (n − k) occur when we compute det(xI − A) with the general determinant formula. This formula involves a signed sum, where each term of the sum is a product of n entries of the matrix xI − A, chosen one from each row and column. Since the variable x appears only in the diagonal entries of xI − A, a term of degree (n − k) must come from a product in which we use at least n − k of the diagonal entries, and then, in multiplying out the factors of that product, choose the x term from exactly n − k of the diagonal entries, (x − a_ii), which were used. The remaining k factors in the product will then be one from each row and column of some k × k principal submatrix of −A. Considering all possible choices which can contribute to the coefficient of x^(n−k) gives the result. □

Choosing k = 1 gives our previous result, c_1 = −trace(A), and k = n gives c_n = (−1)^n det A.

There is also a formula for the coefficients c_k in terms of the eigenvalues of A. Let λ_1, ..., λ_n be the eigenvalues of A, where repeated eigenvalues are listed according to multiplicity. Then we have p_A(x) = (x − λ_1)(x − λ_2)···(x − λ_n) = x^n + c_1x^(n−1) + c_2x^(n−2) + ··· + c_(n−1)x + c_n. Expanding the product of the binomials (x − λ_i), we get a term of degree n − k by choosing exactly k of the −λ_i's and n − k of the x's. So c_k is (−1)^k times the sum of all possible products of k of the λ_i's. In other words, c_k is (−1)^k times the kth elementary symmetric polynomial in λ_1, ..., λ_n.

Suppose we have an n × n diagonalizable matrix A of rank r. Then there are exactly r nonzero eigenvalues (counted according to multiplicities) and the eigenvalue zero has multiplicity n − r. Letting λ_1, ..., λ_r denote the nonzero eigenvalues, p_A(x) = (x − λ_1)(x − λ_2)···(x − λ_r)x^(n−r) = x^n + c_1x^(n−1) + c_2x^(n−2) + ··· + c_r x^(n−r), where c_r = (−1)^r λ_1λ_2···λ_r ≠ 0.
By Theorem 3.21, the matrix A must have a principal submatrix of size r which is nonsingular. Since in a matrix of rank r every square submatrix larger than r × r has determinant zero, we then have the following fact.

Theorem 3.22. If A is a diagonalizable matrix, then the rank of A equals the size of the largest nonsingular principal submatrix of A.

The hypothesis that A is diagonalizable is essential in Theorem 3.22, for otherwise we cannot conclude that the rank of A equals the number of nonzero eigenvalues. For example, if A is the n × n matrix
[ 0  1  0  ···  0 ]
[ 0  0  1  ···  0 ]
[ .  .  .  .    . ]
[ 0  0  ···  0  1 ]
[ 0  0  ···  0  0 ],
then A has rank (n − 1), but every principal submatrix is singular. For general matrices, the rank can be characterized as the size of the largest square submatrix with nonzero determinant, but this involves all submatrices, not just the principal ones.
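Theorem 3.21 can be checked directly in NumPy (our added sketch): np.poly(A) returns the coefficients 1, c_1, ..., c_n of det(xI − A), and c_k should equal (−1)^k times the sum of the principal k × k minors:

```python
import itertools
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 4.0, 1.0],
              [0.0, 1.0, 5.0]])
n = A.shape[0]

# np.poly returns [1, c_1, ..., c_n] for det(xI - A)
coeffs = np.poly(A)

for k in range(1, n + 1):
    # sum of determinants of all principal k x k submatrices of A
    minors = sum(np.linalg.det(A[np.ix_(idx, idx)])
                 for idx in itertools.combinations(range(n), k))
    assert np.isclose(coeffs[k], (-1) ** k * minors)
```

In particular, k = 1 recovers c_1 = −trace(A) and k = n recovers c_n = (−1)^n det A.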
3.7. Eigenvalues of AB and BA

Suppose A and B are n × n matrices and A is nonsingular. Then BA = A^(−1)(AB)A and so AB and BA are similar. Thus, they have the same characteristic polynomial, and the same eigenvalues and multiplicities. The same argument applies if B is nonsingular, using AB = B^(−1)(BA)B. However, if A and B are both singular, then AB and BA need not be similar, as the following example shows.

Example 3.23. Let
A = [  1   1 ]   and   B = [ 1  1 ].
    [ −1  −1 ]             [ 1  1 ]
Then
AB = [  2   2 ]
     [ −2  −2 ]
and BA = 0. So AB and BA are not similar. Note however that they do have the same characteristic polynomial, namely x².

Before stating the general result, we look at a simple special case. Suppose A is 1 × n (i.e., a row matrix) and B is n × 1 (a column matrix). Then AB is 1 × 1. Setting α = AB, we have for its characteristic polynomial p_AB(x) = (x − α). Now BA is an n × n matrix of rank one; the number α is one of its eigenvalues, and the remaining n − 1 eigenvalues are zero, so p_BA(x) = x^(n−1)(x − α).

Theorem 3.24. Let A be an r × s matrix, and let B be an s × r matrix, where r ≤ s. Then p_BA(x) = x^(s−r) p_AB(x).

Proof. (In [StewSun90, p. 27] this argument is attributed to Kahan [Kah67]. It also appears in Horn and Johnson [HJ85].) Note that AB is r × r and BA is s × s. Form the (r + s) × (r + s) matrices
[ AB  0 ]        [ 0  0  ],
[ B   0 ]  and   [ B  BA ]
where the zeros represent zero blocks of the required sizes. Now use the nonsingular (r + s) × (r + s) matrix
[ I_r  A   ].
[ 0    I_s ]
Compute
[ AB  0 ] [ I_r  A   ]   [ AB  ABA ]   [ I_r  A   ] [ 0  0  ]
[ B   0 ] [ 0    I_s ] = [ B   BA  ] = [ 0    I_s ] [ B  BA ].
Since [ I_r A; 0 I_s ] is invertible, this shows [ AB 0; B 0 ] and [ 0 0; B BA ] are similar. Since they are similar, they have the same characteristic polynomials, and hence x^s p_AB(x) = x^r p_BA(x), giving p_BA(x) = x^(s−r) p_AB(x). □
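Theorem 3.24 is also easy to test numerically (an added sketch with random rectangular matrices): in coefficient form, multiplying p_AB(x) by x^(s−r) just appends s − r zeros:

```python
import numpy as np

rng = np.random.default_rng(1)
r, s = 2, 4
A = rng.standard_normal((r, s))
B = rng.standard_normal((s, r))

p_AB = np.poly(A @ B)   # degree r, leading coefficient 1
p_BA = np.poly(B @ A)   # degree s

# p_BA(x) = x^(s-r) p_AB(x): pad p_AB with s - r trailing zeros
assert np.allclose(p_BA, np.concatenate([p_AB, np.zeros(s - r)]))
```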
Exercises

1. Let A be a 2 × 2 real orthogonal matrix. Show the following.
   (a) If det A = 1, then the eigenvalues of A are e^{iθ}, e^{−iθ} for some real number θ.
   (b) If det A = −1, then the eigenvalues of A are ±1.
2. Find the eigenvalues and eigenvectors of the rotation matrix
   $$R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$
   Are they real? For which values of θ does R_θ have real eigenvalues and eigenvectors? Explain in geometric terms.
3. Find the eigenvalues and eigenvectors of the reflection matrix
   $$R_\theta = \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}.$$
   Explain in geometric terms.
4. Let A be an n × n matrix, and let λ1, λ2, ..., λn be the eigenvalues of A, listed according to multiplicities. (This means that we do not assume the n numbers λ1, λ2, ..., λn are all different, but that an eigenvalue of algebraic multiplicity m is listed m times.) Use the fact that A is similar to a triangular matrix to show that for any positive integer k, the eigenvalues of A^k are (λ1)^k, (λ2)^k, ..., (λn)^k. Show that when A is invertible the statement also holds for negative integers k.
5. Referring to the second part of the proof of Theorem 3.12, prove the claim that B = B1 ∪ B2 ∪ ··· ∪ Bt is linearly independent. (For each i = 1,...,t, the set Bi is a basis for the eigenspace V_{λi}.)
6. We derived Theorem 3.19 from Theorem 3.14. Show how to obtain Theorem 3.14 from Theorem 3.19.
7. Let x and y be column vectors of length n. Let A = xy^T; note that A is n × n. Let α = x^T y, and assume α ≠ 0. Show that the eigenvalues of A are α and 0, and that the eigenvalue α has multiplicity 1, while the eigenvalue 0 has multiplicity n − 1. Furthermore, the eigenspace V_α is spanned by x and the eigenspace V_0 = ker(A) is the hyperplane orthogonal to y.
8. Let A be an n × n matrix with left eigenvector x and right eigenvector y, with corresponding eigenvalues λ and μ. Thus, x^T A = λx^T and Ay = μy. Show that if λ ≠ μ, then x^T y = 0.
9. Let us call a square matrix P a projection matrix if P² = P. (Note that P does not necessarily represent orthogonal projection.) Revisit Exercise 18 of Chapter 1 and interpret the results in the language of eigenvalues and eigenspaces.
10. Review the Householder matrix H_w in Exercise 18 of Chapter 2. What are the eigenvalues and eigenspaces of H_w?
11. Review Exercise 19 in Chapter 1 (about transformations satisfying T² = I), and rephrase the results in the language of eigenvalues and eigenspaces. Note that the Householder transformation satisfies H_w² = I.
12. Show that if λ is an eigenvalue of a unitary transformation, then |λ| = 1.
13. Let x, y be nonzero column vectors in F^n, and let A = I + xy^T.
    (a) Show that Av = v if and only if y^T v = 0. Use this to show that 1 is an eigenvalue of A of geometric multiplicity n − 1.
    (b) Show that the value of the remaining eigenvalue is 1 + y^T x. Hint: The sum of the eigenvalues is the trace of the matrix.
    (c) Show that A is invertible if and only if y^T x ≠ −1. In this case, find the value of α such that A^{-1} = I + αxy^T.
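A numerical spot-check of parts (a) and (b) of the last exercise; the vectors below are arbitrary choices, so with them y^T x = 15 and the predicted spectrum is 1 (four times) together with 16:

```python
import numpy as np

# A concrete instance of A = I + x y^T (vectors chosen arbitrarily).
n = 5
x = np.arange(1.0, 6.0).reshape(-1, 1)   # (1, 2, 3, 4, 5)^T
y = np.ones((n, 1))                      # so y^T x = 15

A = np.eye(n) + x @ y.T

# Predicted: eigenvalue 1 with multiplicity n - 1, plus 1 + y^T x = 16.
eigs = np.sort(np.linalg.eigvals(A).real)
print(np.allclose(eigs, [1, 1, 1, 1, 16]))  # -> True
```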
Chapter 4
The Jordan and Weyr Canonical Forms
Similarity is an equivalence relation on the set of n × n matrices over the field F, and thus partitions Mn(F) into equivalence classes called similarity classes. By a canonical form for matrices under similarity, we mean a rule for selecting exactly one representative from each equivalence class. This is the canonical form for the matrices in that class, and two matrices are similar if and only if they have the same canonical form. We might want various things from this canonical form. We might want it to be as "simple" as possible, or to display important information about the matrix (such as rank, eigenvalues, etc.), or to be useful for some specific computations. For example, consider computing the exponential of a matrix A, defined via the Taylor series for the exponential function
$$\exp(A) = \sum_{k=0}^{\infty} \frac{A^k}{k!}.$$
Direct calculation of A^k is impractical for large k, and then there is the issue of summing the terms. However, we have
$$\exp(S^{-1}AS) = \sum_{k=0}^{\infty} \frac{(S^{-1}AS)^k}{k!} = \sum_{k=0}^{\infty} \frac{S^{-1}A^k S}{k!} = S^{-1}(\exp A)S,$$
so if we can find S such that (S^{-1}AS)^k is easy to compute, we can use this to compute exp A. When S^{-1}AS = D is diagonal, it is easy to compute D^k and show
$$\text{(4.1)}\qquad \exp\begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{pmatrix} = \begin{pmatrix} e^{d_1} & 0 & \cdots & 0 \\ 0 & e^{d_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{d_n} \end{pmatrix}.$$
We then find exp A from exp A = exp(SDS^{-1}) = S(exp D)S^{-1}. If A is not diagonable, we would like a canonical form which is close to being diagonal and for which calculations like the one above are easily done.

The Jordan canonical form is probably the best known of various canonical forms for matrices under similarity and is valid whenever the field F contains the eigenvalues of the matrix. In this form, the eigenvalues of the matrix appear on the main diagonal, the entries on the superdiagonal (i.e., the diagonal line directly above the main diagonal) are either zeroes or ones, and all other entries of the matrix are zero. Thus, it is close to being diagonal. Furthermore, it has a block diagonal structure which dictates the positions of the superdiagonal ones and reflects important information about the matrix. Here is an example of a matrix in Jordan form:

(4.2) the 11 × 11 block diagonal matrix with diagonal blocks
$$\begin{pmatrix} 7 & 1 & 0 \\ 0 & 7 & 1 \\ 0 & 0 & 7 \end{pmatrix}, \quad \begin{pmatrix} 7 & 1 \\ 0 & 7 \end{pmatrix}, \quad \begin{pmatrix} 8 & 1 & 0 & 0 \\ 0 & 8 & 1 & 0 \\ 0 & 0 & 8 & 1 \\ 0 & 0 & 0 & 8 \end{pmatrix}, \quad \begin{pmatrix} 10 \end{pmatrix}, \quad \begin{pmatrix} -4 \end{pmatrix}.$$
This example has two diagonal blocks (of sizes 3 × 3 and 2 × 2) belonging to eigenvalue 7, one 4 × 4 block belonging to eigenvalue 8, and 1 × 1 blocks for the eigenvalues 10 and −4. There are various approaches to the Jordan canonical form. In abstract algebra, it comes as an application of the classiﬁcation theorem for ﬁnitely generated modules over principal ideal domains; see [Rot88, Art11]. Another classical approach is via the equivalence theory of matrices over polynomial rings, using elementary divisors and invariant factors, as done, for example, in [Gan59, Mal63, HK71]. There are several ways to obtain the Jordan form using only techniques from linear algebra, as is done in [Hal87, HJ85] and other texts. For a more combinatorial approach, see Brualdi’s article [Bru87]. Our approach here is based on three key steps: a theorem of Sylvester, the Schur triangularization theorem, and a ﬁnal step presented by Halmos in his 1993 lecture in Pensacola, Florida. (In that lecture, Halmos said he had never been quite satisﬁed with the proof in his book [Hal87].) Except for the last part, much of the development here is similar to that in Horn and Johnson [HJ85]. We begin with a theorem of Sylvester, of signiﬁcant interest in its own right. This theorem enables us to use matrix methods to reach a block diagonal form, where each diagonal block corresponds to an eigenvalue of the transformation. The more standard way to achieve this is by decomposing the vector space into a direct sum of generalized eigenspaces. In a sense, we are using the Sylvester theorem to bypass that algebraic argument. Of course, one could argue that the decomposition into generalized eigenspaces is essential to understanding the structure of the linear
transformation, and thus should be approached directly; the reader may ﬁnd that method elsewhere, for example in [HK71].
4.1. A Theorem of Sylvester and Reduction to Block Diagonal Form

Let A be an m × m matrix, and let B be an n × n matrix. Let C be m × n. Consider the matrix equation

(4.3) AX − XB = C,

where the unknown matrix X is m × n. Considered entry by entry, this is a system of mn linear equations in the mn unknowns x_{ij}. Sylvester's theorem is about whether, for a given pair of matrices A, B, equation (4.3) has a solution for any right-hand side C. We state two equivalent versions; the equivalence of these two forms of the theorem follows from basic results about systems of linear equations (specifically, the fact that for a square matrix M, the system Mx = b is consistent for every vector b if and only if the homogeneous system Mx = 0 has no nontrivial solutions).

Theorem 4.1 (Sylvester [Syl84]). Let A be m × m, and let B be n × n. The matrix equation AX − XB = 0 has nontrivial solutions if and only if spec(A) ∩ spec(B) is nonempty.

Theorem 4.2. Let A be m × m, and let B be n × n. Then the matrix equation AX − XB = C is consistent for every choice of m × n matrix C if and only if spec(A) ∩ spec(B) is empty.

We will obtain Sylvester's theorem as a consequence of Theorem 4.3 below. First, note that if V = M(m, n) is the mn-dimensional space of m × n matrices, we can define a linear transformation on V by

(4.4) T_{A,B}(X) = AX − XB

for any matrix X ∈ M(m, n). Suppose α ∈ spec(A) and β ∈ spec(B). Let y be an eigenvector of A associated with α; thus, Ay = αy. Note that y is a nonzero column vector with m coordinates. Let z^T be a left eigenvector of B associated with β; i.e., z^T is a nonzero row vector with n coordinates such that z^T B = βz^T. Put X = yz^T; note that X is m × n. Then

(4.5) T_{A,B}(X) = Ayz^T − yz^T B = αyz^T − βyz^T = (α − β)X.

So the m × n nonzero matrix X = yz^T is an eigenvector of T_{A,B} with associated eigenvalue α − β. We claim that all of the eigenvalues of T_{A,B} are obtained in this way.

Theorem 4.3. Let A be m × m, and let B be n × n, with eigen(A) = α1,...,αm and eigen(B) = β1,...,βn. Then the eigenvalues of the map T_{A,B}, defined by T_{A,B}(X) = AX − XB for X ∈ M(m, n), are the mn numbers αi − βj, where i = 1,...,m and j = 1,...,n and repeated values are to be listed according to multiplicities.

Remark 4.4. As an example of how to treat repeated eigenvalues, suppose A is 2 × 2 with eigen(A) = 5, 6 and B is 3 × 3 with eigen(B) = 2, 2, 3. Then we would have eigen(T_{A,B}) = 5 − 2, 5 − 2, 5 − 3, 6 − 2, 6 − 2, 6 − 3, or 3, 3, 2, 4, 4, 3.
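The conclusion of Remark 4.4 can be verified directly by building the 6 × 6 matrix of T_{A,B} and computing its eigenvalues. A sketch, realizing the given spectra by diagonal matrices (an arbitrary choice; only the eigenvalues matter):

```python
import numpy as np

A = np.diag([5.0, 6.0])        # eigen(A) = 5, 6   (m = 2)
B = np.diag([2.0, 2.0, 3.0])   # eigen(B) = 2, 2, 3 (n = 3)

# Build the matrix of T_{A,B}(X) = AX - XB by applying it to the
# standard basis of the 6-dimensional space M(2, 3).
M = np.zeros((6, 6))
for idx in range(6):
    E = np.zeros(6)
    E[idx] = 1.0
    X = E.reshape(2, 3)
    M[:, idx] = (A @ X - X @ B).flatten()

# Remark 4.4 predicts eigen(T_{A,B}) = 3, 3, 2, 4, 4, 3.
eigs = np.sort(np.linalg.eigvals(M).real)
print(np.allclose(eigs, [2, 3, 3, 3, 4, 4]))  # -> True
```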
Proof. First, assume the m eigenvalues of A are distinct and the n eigenvalues of B are distinct. Let y1,...,ym be a basis of eigenvectors for A and z1^T,...,zn^T be a basis of left eigenvectors for B. So Ayi = αi yi for i = 1,...,m and zj^T B = βj zj^T for j = 1,...,n. From equation (4.5), we have T_{A,B}(yi zj^T) = (αi − βj)yi zj^T. Now, the mn matrices yi zj^T correspond to the mn vectors yi ⊗ zj. Since {y1,...,ym} and {z1^T,...,zn^T} are each linearly independent sets, the set {yi ⊗ zj | i = 1,...,m; j = 1,...,n} is linearly independent, and hence forms a basis of eigenvectors for the map T_{A,B}.

We can now use a continuity argument to deal with repeated eigenvalues. Let {Ap}_{p=1}^∞ be a sequence of matrices with distinct eigenvalues which converges to A, and let {Bp}_{p=1}^∞ be a sequence of matrices with distinct eigenvalues which converges to B. Then {T_{Ap,Bp}}_{p=1}^∞ converges to T_{A,B}, and the result follows from the fact that the eigenvalues of a matrix are continuous functions of the matrix entries.

As a consequence of Theorem 4.3, the map T_{A,B} is nonsingular if and only if spec(A) ∩ spec(B) is empty, giving Theorem 4.2.

Those who prefer to deal with the case of repeated eigenvalues with an algebraic argument may prefer the following approach. We can rearrange the entries of X into a column vector of length mn by stacking the columns of X, starting with the first column on top, then placing column two below it, and so on. We denote this column vector formed from X as X^stack. For example, if
$$X = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}, \quad\text{then}\quad X^{\mathrm{stack}} = \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \\ 5 \\ 6 \end{pmatrix}.$$
For a p × q matrix R and an m × n matrix S, we have the tensor product
$$R \otimes S = \begin{pmatrix} Rs_{11} & Rs_{12} & \cdots & Rs_{1n} \\ Rs_{21} & Rs_{22} & \cdots & Rs_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ Rs_{m1} & Rs_{m2} & \cdots & Rs_{mn} \end{pmatrix}.$$
We then have (AX)^stack = (A ⊗ I_n)X^stack and (XB)^stack = (I_m ⊗ B^T)X^stack. We leave the verification of these formulas to the reader—reluctance to write out the details is one reason we did the proof with the continuity argument. The first equation, (AX)^stack = (A ⊗ I_n)X^stack, is easy to check if you simply work with the columns of X. The second one is a bit messier to deal with, but persevere and keep your rows and columns straight, and it will work out. Combining these equations gives

(4.6) (AX − XB)^stack = (A ⊗ I_n − I_m ⊗ B^T)X^stack.
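Equation (4.6) can be checked numerically. One caveat for anyone trying this in numpy: the text's R ⊗ S has (i, j) block R·s_ij, while `np.kron(R, S)` places blocks r_ij·S, i.e., the opposite order; the text's A ⊗ I_n is therefore `np.kron(np.eye(n), A)`. A sketch with arbitrary random matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 2
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
X = rng.standard_normal((m, n))

# "stack" = stack the columns of X, first column on top.
stack = lambda M: M.flatten(order="F")

# The text's A⊗I_n − I_m⊗B^T, written in numpy's (reversed) kron order.
T = np.kron(np.eye(n), A) - np.kron(B.T, np.eye(m))

print(np.allclose(stack(A @ X - X @ B), T @ stack(X)))  # -> True
```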
This shows that the matrix for the linear transformation T_{A,B}, considered as acting on the column vectors X^stack, is A ⊗ I_n − I_m ⊗ B^T. By Schur's triangularization theorem, there are nonsingular matrices, P and Q, of sizes m × m and n × n, respectively, such that P^{-1}AP = triang(α1,...,αm) and Q^{-1}B^TQ = triang(β1,...,βn) are upper triangular. Then
$$(P \otimes Q)^{-1}(A \otimes I_n - I_m \otimes B^T)(P \otimes Q) = P^{-1}AP \otimes I_n - I_m \otimes Q^{-1}B^TQ.$$
Now P^{-1}AP ⊗ I_n is block diagonal; each of the n diagonal blocks is a copy of the triangular matrix P^{-1}AP. From the tensor product formula, and the fact that Q^{-1}B^TQ is triangular, we see I_m ⊗ Q^{-1}B^TQ is triangular, and the main diagonal consists of the scalar matrices β1 I_m, β2 I_m, ..., βn I_m in that order. So P^{-1}AP ⊗ I_n − I_m ⊗ Q^{-1}B^TQ is upper triangular and the diagonal entries are exactly the mn numbers αi − βj, where i = 1,...,m and j = 1,...,n.

We need the following corollary of Theorem 4.2.

Corollary 4.5. Let A be k × k, let B be r × r, and assume spec(A) ∩ spec(B) = ∅. Then for any k × r matrix C, the block triangular matrix $M = \begin{pmatrix} A & C \\ 0 & B \end{pmatrix}$ is similar to the block diagonal matrix $A \oplus B = \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}$.

Proof. Since spec(A) ∩ spec(B) = ∅, Sylvester's theorem guarantees the existence of a k × r matrix X such that AX − XB = −C. Put $S = \begin{pmatrix} I_k & X \\ 0 & I_r \end{pmatrix}$. Then $S^{-1} = \begin{pmatrix} I_k & -X \\ 0 & I_r \end{pmatrix}$ and
$$S^{-1}MS = \begin{pmatrix} I_k & -X \\ 0 & I_r \end{pmatrix}\begin{pmatrix} A & C \\ 0 & B \end{pmatrix}\begin{pmatrix} I_k & X \\ 0 & I_r \end{pmatrix} = \begin{pmatrix} A & AX + C - XB \\ 0 & B \end{pmatrix} = \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}.$$
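A sketch of Corollary 4.5 in action, with arbitrary small matrices chosen so the spectra are disjoint: the Sylvester equation AX − XB = −C is solved through its stacked linear system (as in Section 4.1), and the resulting S block-diagonalizes M.

```python
import numpy as np

# Arbitrary example: spec(A) = {1, 3}, spec(B) = {5}, disjoint.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
B = np.array([[5.0]])
C = np.array([[4.0], [7.0]])
k, r = 2, 1

# Solve AX - XB = -C via the column-stacked system of (4.6).
T = np.kron(np.eye(r), A) - np.kron(B.T, np.eye(k))
X = np.linalg.solve(T, (-C).flatten(order="F")).reshape((k, r), order="F")

M = np.block([[A, C], [np.zeros((r, k)), B]])
S = np.block([[np.eye(k), X], [np.zeros((r, k)), np.eye(r)]])
D = np.linalg.inv(S) @ M @ S   # should be the block diagonal A ⊕ B

print(np.allclose(D, np.block([[A, np.zeros((k, r))],
                               [np.zeros((r, k)), B]])))  # -> True
```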
One can extend Corollary 4.5 to apply to any number of blocks by using an induction argument. Thus, if
$$A = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1t} \\ 0 & A_{22} & \cdots & A_{2t} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_{tt} \end{pmatrix},$$
and spec(A_ii) ∩ spec(A_jj) = ∅ whenever i ≠ j, then A is similar to the block diagonal matrix A_{11} ⊕ A_{22} ⊕ ··· ⊕ A_{tt}.

Now consider an n × n matrix A over an algebraically closed field F, with spec(A) = {λ1,...,λt}, where eigenvalue λi has multiplicity mi. From Theorem 3.13, we know A is similar to a triangular matrix in which the eigenvalues
appear in the order λ1,...,λ1, λ2,...,λ2, ..., λt,...,λt. We then have
$$\text{(4.7)}\qquad S^{-1}AS = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1t} \\ 0 & A_{22} & \cdots & A_{2t} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_{tt} \end{pmatrix},$$
where each block A_ij is m_i × m_j, and the diagonal block A_ii is triangular of size m_i with λ_i along the main diagonal. Using Corollary 4.5 and abbreviating A_ii as A_i, we see that A is similar to the block diagonal matrix A_1 ⊕ A_2 ⊕ ··· ⊕ A_t. The next step is to find a canonical form for a typical block A_i, i.e., for a matrix which has a single eigenvalue. This takes some work. Note that if there are no repeated eigenvalues, then t = n and the block diagonal matrix A_1 ⊕ A_2 ⊕ ··· ⊕ A_t is just an ordinary diagonal matrix. All of the fuss in the next two sections is to deal with repeated eigenvalues.

Sylvester's theorem is also useful for simultaneously reducing a commuting pair of matrices to triangular or block form.

Theorem 4.6. Suppose A, B are a pair of n × n matrices which commute and
$$\text{(4.8)}\qquad A = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1t} \\ 0 & A_{22} & \cdots & A_{2t} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_{tt} \end{pmatrix},$$
where for i ≠ j, we have spec(A_ii) ∩ spec(A_jj) = ∅. Then B must have the form
$$B = \begin{pmatrix} B_{11} & B_{12} & \cdots & B_{1t} \\ 0 & B_{22} & \cdots & B_{2t} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & B_{tt} \end{pmatrix},$$
where A_ii and B_ii have the same size for each i = 1,...,t.

Proof. We do the proof for the case t = 2, and leave it as an exercise to the reader to do the induction argument to extend to the general case. So, suppose we have $A = \begin{pmatrix} A_1 & A_{12} \\ 0 & A_2 \end{pmatrix}$. Partition B conformally with A, and write $B = \begin{pmatrix} X & Y \\ Z & W \end{pmatrix}$. Then the (2, 1) block of AB is A_2 Z and the (2, 1) block of BA is ZA_1. Since AB = BA, we must have A_2 Z = ZA_1. Since spec(A_1) ∩ spec(A_2) = ∅, Sylvester's theorem tells us Z = 0.

Consequently, if AB = BA, and S is a similarity such that S^{-1}AS has the form (4.8), with spec(A_ii) ∩ spec(A_jj) = ∅ when i ≠ j, then S^{-1}BS must also be in a conformal block triangular form. When the diagonal blocks, A_ii, are all scalar blocks, we can apply a further block diagonal similarity R = R_1 ⊕ R_2 ⊕ ··· ⊕ R_t to both A and B so that each diagonal block of B is put in triangular form; observe that R will preserve the scalar diagonal blocks of A.
4.2. Nilpotent Matrices

So far, we know that any square matrix over an algebraically closed field is similar to a block diagonal matrix in which each diagonal block has a single eigenvalue. The rest of the argument is to deal with a square matrix A with a single eigenvalue, spec(A) = {λ}. Then spec(A − λI) = {0}. Since S^{-1}(A − λI)S = S^{-1}AS − λI, finding a canonical form for A is equivalent to finding a canonical form for (A − λI).

Definition 4.7. A square matrix N is nilpotent if N^k = 0 for some positive integer k. If N is nilpotent, the smallest positive number e such that N^e = 0 is called the index of nilpotency of N.

Example 4.8. The matrix $N = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$ is nilpotent of index 3. More generally, the p × p matrix
$$\text{(4.9)}\qquad N_p = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & \cdots & 0 & 0 \end{pmatrix},$$
where all the entries are zeroes except for the ones along the superdiagonal, is nilpotent of index p. This can be shown by direct calculation of the powers of N; with each successive power, the line of ones moves up to the next diagonal line.

The matrix N_p appeared in Chapter 3, where we promised it would play an important role later in Chapter 4. We are going to show that any nilpotent matrix is similar to a direct sum of blocks of the form shown in (4.9). The first step is the following result.

Proposition 4.9. A matrix A is nilpotent if and only if spec(A) = {0}.

Proof. Suppose A is nilpotent and λ is an eigenvalue of A. Let x be an associated eigenvector, so Ax = λx. Then for any positive integer k, we have A^k x = λ^k x. Since A is nilpotent, A^k = 0 for some k, and hence λ^k = 0. But then λ = 0. So spec(A) = {0}.

Conversely, suppose spec(A) = {0}. Then A is similar to a triangular matrix T with zeroes in all the diagonal positions, i.e.,
$$T = \begin{pmatrix} 0 & t_{12} & t_{13} & \cdots & t_{1n} \\ 0 & 0 & t_{23} & \cdots & t_{2n} \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & t_{(n-1)n} \\ 0 & 0 & \cdots & 0 & 0 \end{pmatrix}.$$
Each successive power of T has an additional diagonal line of zeroes—as one computes T, T², T³, etc., the possible nonzero entries move up and to the right, until with T^{n−1}, the only possible position of a nonzero entry is the 1, n position. So, T^n = 0, and T, and hence A, is nilpotent.
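The shifting behavior of the powers of N_p can be seen directly; a small sketch for p = 4:

```python
import numpy as np

p = 4
N = np.diag(np.ones(p - 1), k=1)   # N_p of (4.9): ones on the superdiagonal

# N^k has its line of ones on the k-th superdiagonal, so N^(p-1) is
# nonzero while N^p = 0: the index of nilpotency is p.
for k in range(1, p + 1):
    Nk = np.linalg.matrix_power(N, k)
    assert np.array_equal(Nk, np.diag(np.ones(p - k), k=k))
print("index of nilpotency:", p)  # -> index of nilpotency: 4
```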
For the matrix N_p in equation (4.9), we have N_p e_1 = 0 and N_p e_i = e_{i−1} for i = 2,...,p. Putting x = e_p, the p vectors x, N_p(x), N_p²(x), ..., N_p^{p−1}(x) are just e_p, e_{p−1}, ..., e_1.

Proposition 4.10. Let N be a nilpotent transformation. Suppose N^k(x) = 0 but N^{k−1}(x) ≠ 0. Then the vectors x, N(x), N²(x), ..., N^{k−1}(x) are linearly independent.

Proof. Suppose
$$\text{(4.10)}\qquad \sum_{i=0}^{k-1} c_i N^i(x) = 0.$$
Since N^k(x) = 0, we have N^j(x) = 0 whenever j ≥ k. Apply N^{k−1} to both sides of equation (4.10) to get c_0 N^{k−1}(x) = 0. Since N^{k−1}(x) ≠ 0, we have c_0 = 0, and thus
$$\text{(4.11)}\qquad \sum_{i=1}^{k-1} c_i N^i(x) = 0.$$
Now apply N^{k−2} to both sides of equation (4.11) to get c_1 N^{k−1}(x) = 0 and thus c_1 = 0. Repeating this argument with N^{k−3}, N^{k−4}, etc., shows that c_i = 0 for i = 0,...,(k − 1). So, x, N(x), N²(x), ..., N^{k−1}(x) are linearly independent.

Now we define the building blocks of the Jordan canonical form.

Definition 4.11. The Jordan block of size p associated with λ is the p × p matrix with λ in each diagonal entry, a one in each superdiagonal entry, and zeroes elsewhere. We denote this matrix as J_p(λ). Thus,
$$\text{(4.12)}\qquad J_p(\lambda) = \begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda & 1 \\ 0 & 0 & \cdots & 0 & \lambda \end{pmatrix}.$$
Note that J_p(0) = N_p and J_p(λ) = λI_p + N_p. The matrix (4.2) is J_3(7) ⊕ J_2(7) ⊕ J_4(8) ⊕ J_1(10) ⊕ J_1(−4).

We now come to the hard part of the argument—showing that any nilpotent transformation has a matrix representation which is a direct sum of blocks of type N_p for various values of p. The result is valid over any field, but our proof here uses the standard inner product ⟨x, y⟩ in C^n. This proof comes from an argument presented by Halmos in his March 1993 talk at the ILAS meeting in Pensacola, Florida. The proof is also valid over R^n and over other fields (such as Q) where this inner product can be used. It uses the fact that for nonzero x, we have ⟨x, x⟩ ≠ 0.

Theorem 4.12. If A is an n × n nilpotent matrix, then there exist positive integers p1 ≥ p2 ≥ ··· ≥ pr such that A is similar to J_{p1}(0) ⊕ J_{p2}(0) ⊕ ··· ⊕ J_{pr}(0). Furthermore, the numbers p1 ≥ p2 ≥ ··· ≥ pr are uniquely determined by A.
Proof. At this point we are just going to prove the first statement. We will then need to develop some more machinery to prove the uniqueness of the numbers p1 ≥ p2 ≥ ··· ≥ pr.

Let p be the index of nilpotence of A, so A^{p−1} ≠ 0, but A^p = 0. If p = n, choose a vector v such that A^{p−1}v ≠ 0. Proposition 4.10 then tells us the set B = {A^{n−1}v, A^{n−2}v, ..., Av, v} is a basis for C^n. If T is the linear transformation T(x) = Ax, then [T]_B = J_n(0). Hence A is similar to J_n(0), and we are done. Furthermore, note that if A is similar to J_{p1}(0) ⊕ J_{p2}(0) ⊕ ··· ⊕ J_{pr}(0), where p1 ≥ p2 ≥ ··· ≥ pr, then p1 is the index of nilpotence of A, so we must have r = 1 and p1 = n.

Now suppose p < n. We then use induction on n. Let B = A^*. Then B is also nilpotent with index p. Choose y so that B^{p−1}y ≠ 0. Then {y, By, B²y, ..., B^{p−1}y} is linearly independent and spans a p-dimensional subspace U = span[y, By, B²y, ..., B^{p−1}y], which is B-invariant. Note that U^⊥ is invariant under B^* = A and dim(U^⊥) = n − p. Put x = B^{p−1}y. Then

(4.13) ⟨A^{p−1}x, y⟩ = ⟨x, (A^*)^{p−1}y⟩ = ⟨x, B^{p−1}y⟩ = ⟨x, x⟩.

Since x ≠ 0, we know ⟨x, x⟩ ≠ 0, so equation (4.13) tells us A^{p−1}x ≠ 0. Let W be the p-dimensional subspace spanned by {x, Ax, A²x, ..., A^{p−1}x}; then W = span[x, Ax, A²x, ..., A^{p−1}x] is A-invariant.

We claim C^n is the direct sum of the two A-invariant subspaces W and U^⊥. We already know dim(W) + dim(U^⊥) = p + (n − p) = n, so it will suffice to show W ∩ U^⊥ = {0}. Suppose z ∈ W ∩ U^⊥. Since z ∈ W, z = c_0 x + c_1 Ax + ··· + c_{p−1}A^{p−1}x for some scalars c_i. Whenever j + k ≥ p, we have ⟨A^j x, B^k y⟩ = ⟨x, B^{j+k}y⟩ = ⟨x, 0⟩ = 0. So,

⟨z, B^{p−1}y⟩ = c_0⟨x, B^{p−1}y⟩ = c_0⟨x, x⟩.

Since z ∈ U^⊥ and B^{p−1}y ∈ U, we have ⟨z, B^{p−1}y⟩ = 0. But ⟨x, x⟩ ≠ 0, so c_0 = 0. But then

⟨z, B^{p−2}y⟩ = c_1⟨Ax, B^{p−2}y⟩ = c_1⟨x, x⟩,

and so c_1 = 0. Similarly, ⟨z, B^{p−3}y⟩ = c_2⟨A²x, B^{p−3}y⟩ = c_2⟨x, x⟩ and c_2 = 0. Continuing in this fashion, we get c_0 = c_1 = ··· = c_{p−1} = 0 and hence z = 0. So W ∩ U^⊥ = {0} and C^n = W ⊕ U^⊥.

Now, B_1 = {A^{p−1}x, A^{p−2}x, ..., Ax, x} is a basis for W. Let B_2 be a basis for U^⊥ and put B = B_1 ∪ B_2. Then B is a basis for C^n and for T(x) = Ax, we have [T]_B = J_p(0) ⊕ A_{22}, where the (n − p) × (n − p) matrix A_{22} represents the action of T on the A-invariant subspace U^⊥. So, A_{22} is nilpotent, and by the induction
hypothesis, A22 is similar to Jp2 (0)⊕Jp3 (0)⊕· · ·⊕Jpr (0), where p2 ≥ p3 ≥ · · · ≥ pr . Observe that p2 is the index of nilpotence of A22 , so p ≥ p2 . Setting p1 = p, we have the ﬁrst part of our result. Now we want to show that the numbers p1 ≥ p2 ≥ · · · ≥ pr in Theorem 4.12 are uniquely determined by the matrix A. We could do another induction argument, but we prefer to take another route which characterizes these numbers in geometric terms. This approach goes back to Weyr [Weyr90] and may be found in older sources, such as [MacD33, TurA32]. If A and B are similar matrices, then so are Ak and B k for any positive integer k. So rank(Ak ) = rank(B k ) and nullity(Ak ) = nullity(B k ) for any positive integer k. We denote the nullity of Ak by ν(Ak ). Consider the matrix N = Jp1 (0)⊕Jp2 (0)⊕· · ·⊕Jpr (0). The columns containing ones are linearly independent, so ν(N ) is the number of zero columns in N . Since each block has one column of zeroes, ν(N ) = r, the number of blocks. As we compute powers of N , the diagonal lines of ones in the blocks move up, and we get an additional column of zeroes in each block that has not yet become a block of zeroes. For example, when we compute N 2 , we get an additional column of zeroes in every block of size two or greater. Hence, ν(N 2 ) = ν(N ) + the number of blocks of size at least 2. Similarly, ν(N 3 ) = ν(N 2 ) + the number of blocks of size at least 3, and, in general, ν(N k ) = ν(N k−1 ) + the number of blocks of size at least k. Since N p1 = 0 we have ν(N p1 ) = n. The key fact we get from the equation above is (4.14)
ν(N k ) − ν(N k−1 ) = the number of blocks of size at least k.
Deﬁnition 4.13. Let A be an n × n nilpotent matrix of index p. Set ω1 (A) = ν(A) and ωk (A) = ν(Ak ) − ν(Ak−1 ) for k ≥ 2. The list of numbers ω1 (A), ω2 (A), . . . , ωp (A) is called the Weyr characteristic of A. Remark 4.14. Since ker(Ak−1 ) ⊆ ker(Ak ), we know ν(Ak−1 ) ≤ ν(Ak ) and hence ωk ≥ 0. Also, if A is nilpotent of index p, then Ak = 0 for all k ≥ p and so ωk = 0 for all k > p. In fact, one can show all of the numbers ω1 , . . . , ωp are positive. (See Section 4.5.) Also, note that ω1 = r is the number of blocks.
We can now use equation (4.14) to express the number of blocks of each size in N in terms of the Weyr characteristic of N:

the number of blocks of size 1 = ω1 − ω2 = r − ω2,
the number of blocks of size 2 = ω2 − ω3,
the number of blocks of size 3 = ω3 − ω4,
...

(4.15) the number of blocks of size k = ωk − ω_{k+1}.
The numbers ω1,...,ωp are defined in terms of the nullities of the powers of the matrix, so similar matrices have the same Weyr characteristic. So, if the nilpotent matrix A is similar to J_{p1}(0) ⊕ J_{p2}(0) ⊕ ··· ⊕ J_{pr}(0), the number of blocks of each size is uniquely determined by A, and hence the numbers p1,...,pr are uniquely determined by A. This proves the uniqueness part of Theorem 4.12.

Definition 4.15. Let A be an n × n nilpotent matrix which is similar to J_{p1}(0) ⊕ J_{p2}(0) ⊕ ··· ⊕ J_{pr}(0), where p1 ≥ p2 ≥ ··· ≥ pr > 0. The list of numbers p1,...,pr is called the Segre characteristic of A.

The Weyr and Segre characteristics of an n × n nilpotent matrix A are related in the following way. The numbers p1,...,pr are nonincreasing and sum to n, so they form a partition of n. The number ωi(A) is the number of pj's which are greater than or equal to i. So, the Weyr characteristic ω1(A), ω2(A), ..., ω_{p1}(A) is the conjugate partition of the Segre characteristic. If we represent the partition p1,...,pr in a Ferrers diagram, where pi is the number of dots in row i, then ωj gives the number of dots in column j.

Example 4.16. For the matrix N = J_n(0), we have r = 1 and p1 = n. For the Weyr characteristic, ω1(N) = ω2(N) = ··· = ωn(N) = 1.

Example 4.17. For the matrix N = J_5(0) ⊕ J_2(0) ⊕ J_1(0) ⊕ J_1(0), we have r = 4 and Segre characteristic p1 = 5, p2 = 2, p3 = 1, p4 = 1. The Weyr characteristic is ω1 = 4, ω2 = 2, ω3 = 1, ω4 = 1, ω5 = 1.

Example 4.18. Suppose A is a strictly upper triangular n × n matrix with nonzero entries in each superdiagonal position, i.e., a_{i,i+1} ≠ 0 for i = 1,...,n − 1. Then A^n = 0, but A^{n−1} has zeroes in every entry except the upper right-hand corner position, 1, n, where it has the nonzero product a_{12}a_{23}a_{34}···a_{n−1,n}. Hence, A is nilpotent of index n and the Jordan canonical form of A is J_n(0).

Let us summarize the results for nilpotent matrices. Let A be an n × n nilpotent matrix, and let p1 be the index of nilpotency. Then A is similar to the direct sum J_{p1}(0) ⊕ J_{p2}(0) ⊕ ··· ⊕ J_{pr}(0), where p1 ≥ p2 ≥ ··· ≥ pr and $\sum_{i=1}^{r} p_i = n$. The numbers p1,...,pr are uniquely determined by A and are called the Segre characteristic of A. The conjugate partition of p1,...,pr gives the Weyr characteristic, ω1,...,ω_{p1}, with ω1(A) = ν(A) = r and ωk(A) = ν(A^k) − ν(A^{k−1}) for k ≥ 2. We say that J_{p1}(0) ⊕ J_{p2}(0) ⊕ ··· ⊕ J_{pr}(0) is the Jordan form of A. Two n × n nilpotent matrices are similar if and only if they have the same Jordan form.
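The summary translates directly into a computation: the Weyr characteristic comes from nullities of powers, and the Segre characteristic is its conjugate partition. A sketch checking Example 4.17 (the helper function names are our own):

```python
import numpy as np

def weyr(A):
    """Weyr characteristic: omega_k = nu(A^k) - nu(A^(k-1)) for nilpotent A."""
    n = A.shape[0]
    nullity = lambda M: n - np.linalg.matrix_rank(M)
    omegas, k = [], 1
    while sum(omegas) < n:
        omegas.append(nullity(np.linalg.matrix_power(A, k))
                      - nullity(np.linalg.matrix_power(A, k - 1)))
        k += 1
    return omegas

def conjugate(partition):
    """Conjugate partition: column counts of the Ferrers diagram."""
    return [sum(1 for p in partition if p >= j)
            for j in range(1, partition[0] + 1)]

# Example 4.17: N = J5(0) + J2(0) + J1(0) + J1(0) as a direct sum.
blocks = [5, 2, 1, 1]
N = np.zeros((9, 9))
i = 0
for p in blocks:
    N[i:i + p, i:i + p] = np.diag(np.ones(p - 1), k=1)
    i += p

print(weyr(N))            # -> [4, 2, 1, 1, 1]
print(conjugate(blocks))  # -> [4, 2, 1, 1, 1]
```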
Example 4.19. Consider the 4 × 4 strictly upper triangular matrix
$$A = \begin{pmatrix} 0 & 0 & a & b \\ 0 & 0 & 0 & c \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},$$
where at least one of the constants a, b, c is nonzero. Then A ≠ 0, but A² = 0. So the index of nilpotency of A is two, and hence the largest block in the Jordan form of A is 2 × 2. If ac ≠ 0, then A has nullity 2, so the Jordan form has two blocks. Thus, for ac ≠ 0, the Jordan form of A is J_2(0) ⊕ J_2(0). The Segre characteristic is then p1 = 2, p2 = 2, and the Weyr characteristic is ω1 = 2, ω2 = 2. If ac = 0, then A has nullity 3, so the Jordan form of A has three blocks, and thus must be J_2(0) ⊕ J_1(0) ⊕ J_1(0). The Segre characteristic is p1 = 2, p2 = 1, p3 = 1, and the Weyr characteristic is ω1 = 3, ω2 = 1.

Example 4.20. Consider the 5 × 5 nilpotent matrix
$$A = \begin{pmatrix} 0 & 0 & a & b & c \\ 0 & 0 & 0 & d & e \\ 0 & 0 & 0 & 0 & f \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix},$$
where at least one of the constants a, b, c, d, e, f is nonzero, so that A ≠ 0. We have
$$A^2 = \begin{pmatrix} 0 & 0 & 0 & 0 & af \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$$
and A³ = 0. If af ≠ 0, the index of nilpotency of A is 3, and the Jordan form contains a 3 × 3 block. If d ≠ 0, then A has nullity 2, and the Jordan form has two blocks. The Jordan form of A is then J_3(0) ⊕ J_2(0). The Segre characteristic is p1 = 3, p2 = 2 and the Weyr characteristic is ω1 = 2, ω2 = 2, ω3 = 1. However, if af ≠ 0 but d = 0, then A has nullity 3. The Jordan form then has three blocks; since the largest one is 3 × 3, it must be J_3(0) ⊕ J_1(0) ⊕ J_1(0). The Segre characteristic is p1 = 3, p2 = 1, p3 = 1 and the Weyr characteristic is ω1 = 3, ω2 = 1, ω3 = 1.

When af = 0, we have A² = 0, so the index of nilpotency is 2. The largest Jordan block has size 2 × 2. Since either a = 0 or f = 0 (or both), we see that the nullity of A is at least 3; depending on the other entries, it could be either 3 or 4. If the nullity of A is 3, then there are three Jordan blocks, and so the Jordan form must be J_2(0) ⊕ J_2(0) ⊕ J_1(0). In this case the Segre characteristic is p1 = 2, p2 = 2, p3 = 1 and the Weyr characteristic is ω1 = 3, ω2 = 2. If the nullity of A is 4, there are four Jordan blocks and the Jordan form is J_2(0) ⊕ J_1(0) ⊕ J_1(0) ⊕ J_1(0). The Segre characteristic is then p1 = 2, p2 = 1, p3 = 1, p4 = 1 and the Weyr characteristic is ω1 = 4, ω2 = 1.
4.3. The Jordan Form of a General Matrix

We now put the pieces together and describe the Jordan form of a general matrix. The reader feeling a bit dazed by the proof of Theorem 4.12 should take heart: in most cases where the Jordan canonical form is used, it suffices to know what it is—i.e., what the block diagonal structure looks like.

Let A be an n × n complex matrix with eigenvalues λ1,...,λt of multiplicities m1,...,mt. In Section 4.1, we saw A is similar to A_1 ⊕ A_2 ⊕ ··· ⊕ A_t, where A_i is m_i × m_i and spec(A_i) = {λ_i}. So N_i = A_i − λ_i I is nilpotent and has a Jordan form, J(N_i) = J_{p_{i1}}(0) ⊕ J_{p_{i2}}(0) ⊕ ··· ⊕ J_{p_{ir_i}}(0), of the type described in Section 4.2. Note that p_{i1} + p_{i2} + ··· + p_{ir_i} = m_i. The Jordan form of A is then the direct sum of the blocks λ_i I_{m_i} + J(N_i). The block J(A_i) = λ_i I_{m_i} + J(N_i) is size m_i × m_i and corresponds to the eigenvalue λ_i; this block itself is a direct sum of smaller blocks, each of the form J_k(λ_i). We shall refer to these blocks as the blocks belonging to or corresponding to the eigenvalue λ_i.

For example, consider the matrix in (4.2). The eigenvalues are 7, 8, 10, −4, of multiplicities 5, 4, 1, 1, respectively. For the eigenvalue λ1 = 7 we have m1 = 5 and the Segre characteristic for this eigenvalue is p11 = 3 and p12 = 2, from the 3 × 3 and 2 × 2 blocks belonging to the eigenvalue 7. For the eigenvalue λ2 = 8, we have only one block of size 4 × 4, so we have m2 = p21 = 4. The other two eigenvalues each have multiplicity one, so m3 = p31 = 1 and m4 = p41 = 1.

Certainly, two matrices with the same Jordan form are similar, and we have already seen that two nilpotent matrices which are similar must have the same Jordan form. To complete the proof that two similar matrices must have the same Jordan form, we need one more fact.
Assume also that spec(A1) ∩ spec(B2) = ∅ and spec(A2) ∩ spec(B1) = ∅. Then if S⁻¹AS = B, we must have S = S1 ⊕ S2, where S1 is k × k and S2 is (n − k) × (n − k). Hence, S1⁻¹A1S1 = B1 and S2⁻¹A2S2 = B2.

Proof. Suppose S⁻¹AS = B. Then AS = SB. Partition
$$S = \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix}$$
conformally with A and B. We have
$$AS = \begin{pmatrix} A_1 S_{11} & A_1 S_{12} \\ A_2 S_{21} & A_2 S_{22} \end{pmatrix} = SB = \begin{pmatrix} S_{11} B_1 & S_{12} B_2 \\ S_{21} B_1 & S_{22} B_2 \end{pmatrix}.$$
The upper right-hand corner gives A1S12 = S12B2, or A1S12 − S12B2 = 0. But spec(A1) ∩ spec(B2) = ∅, so Sylvester's Theorem 4.1 tells us S12 = 0. Similarly, the lower left-hand corner gives A2S21 = S21B1; since spec(A2) ∩ spec(B1) = ∅ we get S21 = 0. Hence, S = S1 ⊕ S2 and the result follows.

Now consider two similar n × n matrices, A and B. Since they are similar, they have the same eigenvalues, λ1, . . . , λt, with the same multiplicities m1, . . . , mt. Thus, A is similar to A1 ⊕ A2 ⊕ · · · ⊕ At, and B is similar to B1 ⊕ B2 ⊕ · · · ⊕ Bt, where Ai and Bi are both mi × mi and spec(Ai) = spec(Bi) = {λi}. Applying
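The step in the proof where disjoint spectra force S12 = 0 can be checked numerically: under the column-stacking vec map, X ↦ A1X − XB2 has matrix I ⊗ A1 − B2ᵀ ⊗ I, which is nonsingular exactly when spec(A1) and spec(B2) are disjoint. A small sketch (the test matrices are our own choices):

```python
import numpy as np

A1 = np.array([[1.0, 2.0],
               [0.0, 3.0]])        # spec(A1) = {1, 3}
B2 = np.array([[5.0, 1.0],
               [0.0, 7.0]])        # spec(B2) = {5, 7}, disjoint from spec(A1)

# Matrix of the map X -> A1 @ X - X @ B2 under column-stacking vec().
M = np.kron(np.eye(2), A1) - np.kron(B2.T, np.eye(2))

# Disjoint spectra: the map is nonsingular, so A1 X = X B2 forces X = 0.
assert np.linalg.matrix_rank(M) == 4

# Give A1 and B2 a common eigenvalue (here 3) and nonzero solutions appear.
B2_shared = np.diag([3.0, 7.0])
M_shared = np.kron(np.eye(2), A1) - np.kron(B2_shared.T, np.eye(2))
assert np.linalg.matrix_rank(M_shared) < 4
```

The eigenvalues of I ⊗ A1 − B2ᵀ ⊗ I are the differences λi − μj, which is why disjointness of the spectra is equivalent to nonsingularity.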
Theorem 4.21 repeatedly, we see that Ai is similar to Bi, and hence the nilpotent matrices Ai − λiImi and Bi − λiImi have the same Jordan form. Therefore, A and B have the same Jordan form.

We have been concerned here with a description and proof of the Jordan canonical form. Computing it is another story. First one needs to know all the eigenvalues and their multiplicities, in itself a significant computational problem. For example, consider the issue of round-off errors in the calculations. If two or more computed eigenvalues are very close in value, how do we know if these computed values represent different eigenvalues or a repeated eigenvalue? Furthermore, the Jordan canonical form is not a continuous function of the entries of the matrix. Consider the 2 × 2 matrix
$$\begin{pmatrix} \lambda & \epsilon \\ 0 & \lambda \end{pmatrix}.$$
If ε ≠ 0, the Jordan form is $\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}$; if ε = 0 it is $\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}$. Small changes in the matrix entries (which may result from the limitations of measurement precision and from round-off error in the computations) can change the Jordan canonical form in a qualitative way.

It is usually desirable to design matrix algorithms to avoid general similarity and change of basis; it is preferable to use unitary change of basis and unitary similarity, which preserve length and angle. This is another reason why the Weyr characteristic is useful—it is determined by the dimensions of the null spaces of powers of a matrix, which can be found with unitary change of basis methods. Once one knows the numbers of the Weyr characteristic, the sizes of the Jordan blocks are known.

There are other classic canonical forms; we do not discuss them here but refer the reader elsewhere, for example, [Art11, Mal63, Rot88, HK71]. For a general field, F, the rational canonical form represents a linear transformation with a block diagonal matrix which is a direct sum of companion matrices coming from irreducible factors of the characteristic polynomial.
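The discontinuity just described can be reproduced in exact arithmetic with SymPy, whose `Matrix.jordan_form` returns a pair (P, J) with A = PJP⁻¹; the particular numbers below are illustrative choices of ours.

```python
from sympy import Matrix, Rational

lam = 5
eps = Rational(1, 10**6)                 # tiny but nonzero perturbation

_, J_perturbed = Matrix([[lam, eps], [0, lam]]).jordan_form()
_, J_exact     = Matrix([[lam, 0],   [0, lam]]).jordan_form()

print(J_perturbed)   # Matrix([[5, 1], [0, 5]]) -- a single 2 x 2 Jordan block
print(J_exact)       # Matrix([[5, 0], [0, 5]]) -- two 1 x 1 blocks
```

However small ε is made, the Jordan form jumps between the two answers; no floating-point tolerance can smooth this over.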
There is also a canonical form based on the Weyr characteristic [Weyr90, Sha99]—we describe this in Section 4.5. One more comment: "most" matrices are diagonable. To be precise, the set of matrices which are not diagonable is a set of measure zero. The theory needed to deal with the general case takes a lot of work, but in many applications the relevant matrix is diagonable and the general Jordan form is not needed.
4.4. The Cayley–Hamilton Theorem and the Minimal Polynomial

Let A be n × n with eigenvalues λ1, . . . , λt of multiplicities m1, . . . , mt. The characteristic polynomial of A is pA(x) = (x − λ1)^{m1}(x − λ2)^{m2} · · · (x − λt)^{mt}. Substituting A for x in this polynomial gives pA(A) = (A − λ1I)^{m1}(A − λ2I)^{m2} · · · (A − λtI)^{mt}.

Theorem 4.22 (Cayley–Hamilton Theorem). If A is an n × n matrix, then A satisfies its characteristic polynomial, i.e., pA(A) = 0.

Proof. For any polynomial p(x) we have p(S⁻¹AS) = S⁻¹p(A)S, so p(A) = 0 if and only if p(S⁻¹AS) = 0. Hence, without loss of generality, we may assume A = A1 ⊕ A2 ⊕ · · · ⊕ At, where Ai is mi × mi and Ai = λiImi + Ni with Ni nilpotent. So (Ai − λiI)^{mi} = Ni^{mi} = 0. Now consider the product pA(A) = (A − λ1I)^{m1}(A − λ2I)^{m2} · · · (A − λtI)^{mt}.
For each i = 1, . . . , t, the ith block of the factor (A − λiI)^{mi} is zero, and hence pA(A) = 0.

Suppose J(Ai) = J_{pi1}(λi) ⊕ J_{pi2}(λi) ⊕ · · · ⊕ J_{piri}(λi) is the Jordan form of block Ai. Since the largest block has size pi1, the pi1th power of J(Ai) − λiImi will be zero. Hence, (Ai − λiImi)^{pi1} = 0 and A satisfies the polynomial mA(x) = (x − λ1)^{p11}(x − λ2)^{p21} · · · (x − λt)^{pt1}, where pi1 is the size of the largest Jordan block belonging to the eigenvalue λi. We have pi1 ≤ mi, with equality if and only if there is a single Jordan block Jmi(λi) belonging to λi.

Definition 4.23. We say the polynomial p(x) annihilates the matrix A if p(A) = 0.

Definition 4.24. We say the polynomial m(x) is a minimal polynomial for the matrix A if the following hold:
(1) m(A) = 0.
(2) No polynomial of degree smaller than m(x) annihilates A.
(3) m(x) is monic; i.e., the coefficient of the highest degree term is one.

We claim that an n × n matrix A has a unique minimal polynomial and it is mA(x) = (x − λ1)^{p11}(x − λ2)^{p21} · · · (x − λt)^{pt1}.

Theorem 4.25. Let A be an n × n matrix, and let m(x) be a minimal polynomial for A. Then the following hold:
(1) If p(x) annihilates A then m(x) divides p(x).
(2) The polynomial m(x) is unique.
(3) If λ is an eigenvalue of A, then m(λ) = 0.

Proof. Suppose p(A) = 0. Divide m(x) into p(x) to get p(x) = q(x)m(x) + r(x), where either r(x) = 0, or r(x) has smaller degree than m(x). The polynomials q(x) and r(x) are the quotient and remainder, respectively. Substitute A for x to get 0 = p(A) = q(A)m(A) + r(A) = 0 + r(A). So r(A) = 0. Since no polynomial of degree smaller than m(x) can annihilate A, the remainder r(x) must be zero, and hence m(x) divides p(x). This proves (1).

Suppose m1(x) and m2(x) are two minimal polynomials for A. From (1), we know that m1(x) divides m2(x). But, by the definition of minimal polynomial, we know that m1(x) and m2(x) must have the same degree.
So, we must have m2(x) = cm1(x), where c is a constant. Since m1(x) and m2(x) are both monic, c = 1 and m1(x) = m2(x).

Now let λ be an eigenvalue of A. Then m(λ) is an eigenvalue of m(A) = 0, so m(λ) = 0.

From Theorem 4.25 and the Cayley–Hamilton Theorem 4.22 it follows that the minimal polynomial of A must have the form (x − λ1)^{q1}(x − λ2)^{q2} · · · (x − λt)^{qt}, where 1 ≤ qi ≤ mi. From the Jordan form of A, we see that qi = pi1, the size of
the largest Jordan block belonging to λi. Note the following facts:

• A is diagonable ⇐⇒ all the Jordan blocks are 1 × 1 ⇐⇒ mA(x) = (x − λ1)(x − λ2) · · · (x − λt) ⇐⇒ mA(x) has no repeated roots.
• mA(x) = pA(x) ⇐⇒ there is exactly one Jordan block for each eigenvalue of A.
• Suppose p(x) is a polynomial with distinct roots and p(A) = 0. Since mA(x) divides p(x), the polynomial mA(x) has no repeated roots and A is diagonable.

Definition 4.26. We say A is nonderogatory if mA(x) = pA(x).

For a nonderogatory matrix, the Jordan canonical form is completely determined by the characteristic polynomial; hence two nonderogatory matrices are similar if and only if they have the same characteristic polynomial. We now see that a nonderogatory matrix is similar to a special type of matrix, called a companion matrix, which is formed from the coefficients of the characteristic polynomial.

Definition 4.27. Let p(x) = x^n + c_{n−1}x^{n−1} + c_{n−2}x^{n−2} + · · · + c_1x + c_0 be a monic polynomial of degree n. The companion matrix of p(x) is the n × n matrix with ones on the subdiagonal, the entries −c_0, −c_1, . . . , −c_{n−2}, −c_{n−1} in the last column, and zeroes elsewhere:
$$C = \begin{pmatrix}
0 & 0 & 0 & \cdots & 0 & -c_0 \\
1 & 0 & 0 & \cdots & 0 & -c_1 \\
0 & 1 & 0 & \cdots & 0 & -c_2 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 0 & -c_{n-2} \\
0 & 0 & 0 & \cdots & 1 & -c_{n-1}
\end{pmatrix}.$$

Remark 4.28. This is one of several forms that may be used for the companion matrix of a polynomial; one can also use the transpose of the matrix shown in Definition 4.27.

A computation shows that the characteristic polynomial of the companion matrix C is the original polynomial p(x); we leave this as an exercise to the reader. Furthermore, as one computes powers of C, the subdiagonal line of ones moves one line further down with each power, so the n matrices I, C, C², C³, . . . , C^{n−1} are linearly independent.
But this means the degree of the minimal polynomial of C must be n, and so p(x) is both the minimal and the characteristic polynomial of the companion matrix C; that is, a companion matrix is nonderogatory. Thus, when A is a nonderogatory matrix, it is similar to the companion matrix of the characteristic polynomial pA(x). Furthermore, this similarity transformation can be done over the field containing the entries of the matrix. This is the starting point for developing the rational canonical form, which is a direct sum of companion matrices [Art11, Mal63, Rot88, HK71].

Now consider A as the matrix of a linear transformation T. Let C be the companion matrix for pA(x). Then C is the matrix for T with respect to a basis of the form B = {v, Tv, T²v, . . . , T^{n−1}v}, where v is such that the n vectors in B are linearly independent.
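Several claims from this section lend themselves to a quick numerical check: the Cayley–Hamilton theorem, the fact that the minimal polynomial can be a proper divisor of the characteristic polynomial, and the eigenvalues of a companion matrix. A sketch in NumPy (the helper `companion` and the test matrices are our own choices):

```python
import numpy as np

I = np.eye(3)

# Cayley-Hamilton for A = J_2(2) + J_1(3): p_A(x) = (x - 2)^2 (x - 3).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
assert np.allclose((A - 2*I) @ (A - 2*I) @ (A - 3*I), 0)   # p_A(A) = 0

# For B = J_2(1) + J_1(1), p_B(x) = (x - 1)^3 but the largest Jordan
# block has size 2, so m_B(x) = (x - 1)^2, a proper divisor of p_B.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
assert not np.allclose(B - I, 0)              # (x - 1) does not annihilate B
assert np.allclose((B - I) @ (B - I), 0)      # (x - 1)^2 does

# Companion matrix of Definition 4.27: ones on the subdiagonal,
# -c_0, ..., -c_{n-1} in the last column.
def companion(coeffs):                        # coeffs = [c_0, c_1, ..., c_{n-1}]
    n = len(coeffs)
    C = np.eye(n, n, -1)
    C[:, -1] = -np.asarray(coeffs, dtype=float)
    return C

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
C = companion([-6.0, 11.0, -6.0])
print(np.sort(np.linalg.eigvals(C).real))     # approximately [1. 2. 3.]
```

As Remark 4.28 notes, the transpose convention is equally common; either form has the same characteristic polynomial.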
4.5. Weyr Normal Form

The Segre characteristic lists the sizes of the diagonal blocks in the Jordan normal form of a matrix. We showed these block sizes are uniquely determined by the matrix by using the Weyr characteristic, which comes from the dimensions of null spaces of powers of a nilpotent matrix. For a nilpotent matrix, the Segre characteristic is the conjugate partition of the Weyr characteristic. An alternative normal form, due to Weyr, has block sizes given directly by the Weyr characteristic. One could well argue that the Weyr form is more natural, as it comes directly from the dimensions of the null spaces of the powers.

Let A be an n × n nilpotent matrix of index p, and let ω1, . . . , ωp be the Weyr characteristic of A. So ω1(A) = ν(A) and ωk(A) = ν(A^k) − ν(A^{k−1}) for k ≥ 2. Note that
$$\sum_{i=1}^{p} \omega_i = \sum_{i=1}^{p} \left( \nu(A^i) - \nu(A^{i-1}) \right) = \nu(A^p) = n.$$
We will show below that ω1 ≥ ω2 ≥ · · · ≥ ωp. For now, we assume this is true, so as to describe the Weyr normal form of A. For r ≥ s, let I_{r,s} denote the r × s matrix which has I_s in the first s rows, and zeroes in the remaining r − s rows. Let W be the n × n matrix, partitioned into blocks of sizes ωi × ωj, in which all blocks are zero except the blocks directly above the diagonal blocks, and the ωk × ωk+1 block directly above the (k + 1)st diagonal block is the matrix I_{ωk,ωk+1}. Note the diagonal blocks are zero blocks of sizes ωi × ωi.

Direct calculation shows ν(W) = ω1, ν(W²) = ω1 + ω2, and in general, ν(W^k) = ω1 + ω2 + ω3 + · · · + ωk, for k = 1, . . . , p. So the matrix W has the same Weyr characteristic as A, and hence the nilpotent matrices W and A must have the same Segre characteristic and thus the same Jordan form. Therefore, A and W are similar. We define W to be the Weyr normal form of the nilpotent matrix A. Note that the numbers of the Weyr characteristic give the sizes of the diagonal blocks of W (which are blocks of zeros).

Example 4.29. Suppose N = Jn(0). Then for k = 1, . . . , n, we have ν(N^k) = k and so ωk = 1 for each k. So Jn(0) is already in Weyr canonical form.

Example 4.30. Suppose N = J3(0) ⊕ J2(0). We have ν(N) = 2, ν(N²) = 4, and ν(N³) = 5. The Weyr characteristic is ω1 = 2, ω2 = 2, and ω3 = 1. The Weyr canonical form of N is
$$\begin{pmatrix}
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}.$$
Example 4.31. Suppose N = J2(0) ⊕ J2(0) ⊕ J1(0). Then N² = 0; we have ν(N) = 3 and ν(N²) = 5. The Weyr characteristic is ω1 = 3 and ω2 = 2. The Weyr canonical form is
$$\begin{pmatrix}
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}.$$
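Building the Weyr form from a given characteristic ω1 ≥ · · · ≥ ωp is mechanical; the sketch below (the helper name is ours) constructs W and recovers the nullity sequence ν(W^k) = ω1 + · · · + ωk for Example 4.31.

```python
import numpy as np

def weyr_matrix(omegas):
    """Nilpotent Weyr form for omega_1 >= ... >= omega_p: zero diagonal blocks
    of sizes omega_i, with I_{omega_k, omega_{k+1}} placed directly above the
    (k+1)st diagonal block."""
    n = sum(omegas)
    W = np.zeros((n, n))
    row = 0
    for k in range(len(omegas) - 1):
        col = row + omegas[k]                # start of the (k+1)st block column
        for i in range(omegas[k + 1]):
            W[row + i, col + i] = 1.0
        row = col
    return W

W = weyr_matrix([3, 2])                      # Example 4.31
n = W.shape[0]
print(n - np.linalg.matrix_rank(W))          # nu(W)   = 3 = omega_1
print(n - np.linalg.matrix_rank(W @ W))      # nu(W^2) = 5 = omega_1 + omega_2
```

Running the same helper on [2, 2, 1] reproduces the matrix of Example 4.30, with nullities 2, 4, 5 for the first three powers.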
One can then obtain a Weyr normal form for a general matrix A, with eigenvalues λ1, . . . , λt of multiplicities m1, . . . , mt. As we did for the Jordan form, A is similar to A1 ⊕ A2 ⊕ · · · ⊕ At, where Ai is mi × mi and spec(Ai) = {λi}. So Ni = Ai − λiI is nilpotent and has a Weyr normal form, Wi, as described above. The Weyr normal form of Ai is then λiI + Wi, and the Weyr form of A is the direct sum of the blocks λiI + Wi. In the Weyr form, the diagonal blocks are scalar matrices, and the ones appear in the superdiagonal blocks.

It remains to show that ωk ≥ ωk+1 for 1 ≤ k ≤ p − 1. Let T : V → V be a nilpotent linear transformation of index p on an n-dimensional vector space V. For any positive integer k, the null space of T^k is contained in the null space of T^{k+1}. Moreover, for k = 1, . . . , p, these null spaces form a strictly increasing sequence. That is, using "⊂" to mean "strict subset of", we have
(4.16)  ker(T) ⊂ ker(T²) ⊂ ker(T³) ⊂ · · · ⊂ ker(T^{p−1}) ⊂ ker(T^p) = V.
This follows from the following fact.

Lemma 4.32. Let T be a nilpotent linear transformation on a vector space V. If for some positive integer k, we have ker(T^k) = ker(T^{k+1}), then ker(T^k) = ker(T^{k+r}) for every positive integer r.

Proof. Suppose ker(T^k) = ker(T^{k+1}). Let v ∈ ker(T^{k+2}). Then T^{k+2}v = 0. But T^{k+2}v = T^{k+1}(Tv), so Tv ∈ ker(T^{k+1}). Hence Tv ∈ ker(T^k) and so T^{k+1}v = 0. So ker(T^{k+2}) ⊆ ker(T^{k+1}). Since ker(T^{k+1}) ⊆ ker(T^{k+2}), we have ker(T^{k+2}) = ker(T^{k+1}). Repeating the argument r times (or, more formally, doing an induction argument) gives the result.

Getting back to the Weyr characteristic, ω1, . . . , ωp, we have dim(ker(T^k)) = dim(ker(T^{k−1})) + ωk, for k = 1, . . . , p. We want to show ωk ≥ ωk+1 for each k ≤ p − 1. Before doing the proof for general k, we illustrate the key idea of the argument with the case k = 1.

Let {x1, . . . , xω1} be a basis for ker(T). Since the dimension of ker(T²) is ω1 + ω2, and ker(T) ⊂ ker(T²), we can choose ω2 linearly independent vectors y1, . . . , yω2 in ker(T²) such that B = {x1, . . . , xω1} ∪ {y1, . . . , yω2} is a basis for ker(T²). Note the vectors y1, . . . , yω2 are not in ker(T). Hence, the ω2 vectors T(y1), . . . , T(yω2) are ω2 nonzero vectors in ker(T); we now show they are linearly independent. Since ω1 is the dimension of ker(T), this will prove ω1 ≥ ω2. Suppose then that $\sum_{i=1}^{\omega_2} c_i T(y_i) = 0$. Then $\sum_{i=1}^{\omega_2} c_i y_i$ is in ker(T), and so
$$\sum_{i=1}^{\omega_2} c_i y_i = \sum_{j=1}^{\omega_1} b_j x_j.$$
But then, since B is linearly independent, all of the coefficients must be zero. Hence, the vectors T(y1), . . . , T(yω2) are linearly independent.

For k > 1, we use a similar argument, but work in the quotient spaces ker(T^k)/ker(T^{k−1}) and ker(T^{k+1})/ker(T^k). For a proof which does not use quotient spaces, see [Sha99]. For the remainder of this section, we use Uk to denote ker(T^k). Rewriting (4.16) in this notation,
(4.17)  U1 ⊂ U2 ⊂ U3 ⊂ · · · ⊂ Up−1 ⊂ Up = V.
The next result and its proof set the foundation for the rest of this section.

Theorem 4.33. Let T : V → V be a nilpotent linear transformation of index p on an n-dimensional vector space V; let ω1, . . . , ωp be the Weyr characteristic of T. Then ωk ≥ ωk+1 for k = 1, . . . , p − 1.

Proof. Let Uk denote the null space of T^k. The space U0 is the zero space and Up = V. For any 1 ≤ k ≤ p − 1, we have Uk−1 ⊂ Uk ⊂ Uk+1. The dimension of the quotient space Uk/Uk−1 is ωk, and the dimension of the quotient space Uk+1/Uk is ωk+1. Choose ωk+1 vectors y1, . . . , yωk+1 in Uk+1 such that the cosets yi + Uk, 1 ≤ i ≤ ωk+1, are a basis for the quotient space Uk+1/Uk. Note that for each i, the vector yi is in Uk+1 but not in Uk. Consequently, T(yi) is in Uk, but not in Uk−1.

We claim that the ωk+1 cosets T(yi) + Uk−1, 1 ≤ i ≤ ωk+1, are linearly independent in the quotient space Uk/Uk−1. For, suppose the sum
$$\sum_{i=1}^{\omega_{k+1}} a_i T(y_i) + U_{k-1}$$
is the zero vector in the quotient space Uk/Uk−1. Then $T\left(\sum_{i=1}^{\omega_{k+1}} a_i y_i\right)$ is in Uk−1 and so $T^k\left(\sum_{i=1}^{\omega_{k+1}} a_i y_i\right) = 0$. But then $\sum_{i=1}^{\omega_{k+1}} a_i y_i \in U_k$, which means that the linear combination $\sum_{i=1}^{\omega_{k+1}} a_i y_i + U_k$ is the zero vector in the quotient space Uk+1/Uk. Since the cosets yi + Uk, for 1 ≤ i ≤ ωk+1, are linearly independent, we have ai = 0 for all i, and thus T(yi) + Uk−1, 1 ≤ i ≤ ωk+1, is linearly independent in the quotient space Uk/Uk−1. Since ωk is the dimension of Uk/Uk−1, we must have ωk ≥ ωk+1.

Our assertion that two n × n nilpotent matrices are similar if they have the same Weyr characteristic was based on the fact that the Weyr characteristic determines the Jordan canonical form. We now see how to show this directly with the Weyr theory, without invoking Jordan canonical form. First, some preliminaries.
Definition 4.34. We say a matrix has full column rank if it has linearly independent columns.

Note that a matrix B has full column rank if and only if the only solution to Bx = 0 is x = 0.

Lemma 4.35. If B and C are matrices of full column rank of sizes r × s and s × t, respectively, then BC has full column rank.

Proof. Suppose (BC)x = 0. Then B(Cx) = 0. Since B has full column rank, this gives Cx = 0. But C also has full column rank, so x = 0. Hence, the columns of BC are linearly independent.

Lemma 4.36. Suppose m1 ≥ m2 ≥ · · · ≥ mp. Suppose A is a block triangular matrix, with block (i, j) of size mi × mj, where the diagonal blocks of A are all blocks of zeroes, and each superdiagonal block A_{k,(k+1)} has full column rank. Thus, A has the form
(4.18)
$$A = \begin{pmatrix}
0_{m_1} & A_{12} & A_{13} & A_{14} & \cdots & A_{1p} \\
0 & 0_{m_2} & A_{23} & A_{24} & \cdots & A_{2p} \\
0 & 0 & 0_{m_3} & A_{34} & \cdots & A_{3p} \\
\vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 0_{m_{p-1}} & A_{(p-1),p} \\
0 & 0 & 0 & \cdots & 0 & 0_{m_p}
\end{pmatrix},$$
where the mk × mk+1 block A_{k,(k+1)} has full column rank. Then m1, . . . , mp is the Weyr characteristic of A.

Proof. Since each block A_{k,(k+1)} has rank mk+1, the last n − m1 columns of A are linearly independent, and dim(ker(A)) = m1. Squaring A, we see that A² has blocks of zeroes in both the diagonal and superdiagonal blocks, while the next diagonal line (i.e., the diagonal line above the superdiagonal) contains the products
$$A_{12}A_{23}, \quad A_{23}A_{34}, \quad A_{34}A_{45}, \quad \ldots, \quad A_{(p-2),(p-1)}A_{(p-1),p}.$$
From Lemma 4.35, each of these products has full column rank, and hence the dimension of ker(A²) is m1 + m2. With each successive power of A, we get an additional diagonal line of zero blocks. In A^k, each block in the first diagonal line of nonzero blocks is a product of k consecutive A_{i,(i+1)} blocks, hence has linearly independent columns. So the dimension of the null space of A^k is m1 + m2 + · · · + mk, and m1, . . . , mp is the Weyr characteristic of A.

Now we show that any nilpotent linear transformation on a finite-dimensional vector space has a matrix representation of the form (4.18). We again use quotient spaces for this argument, but the result can be obtained in other ways [Sha99].

Theorem 4.37. Suppose T : V → V is a nilpotent linear transformation on an n-dimensional vector space V. Let ω1, . . . , ωp be the Weyr characteristic of T. Then there is a basis B for V such that [T]_B has the block triangular form of (4.18) with
mk = ωk. That is,
(4.19)
$$A = [T]_B = \begin{pmatrix}
0_{\omega_1} & A_{12} & A_{13} & A_{14} & \cdots & A_{1p} \\
0 & 0_{\omega_2} & A_{23} & A_{24} & \cdots & A_{2p} \\
0 & 0 & 0_{\omega_3} & A_{34} & \cdots & A_{3p} \\
\vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 0_{\omega_{p-1}} & A_{(p-1),p} \\
0 & 0 & 0 & \cdots & 0 & 0_{\omega_p}
\end{pmatrix},$$
where the ωk × ωk+1 block A_{k,(k+1)} has full column rank.

Proof. We use the method in the proof of Theorem 4.33 to construct B. For each k = 0, . . . , (p − 1), choose ωk+1 vectors b(k+1),1, . . . , b(k+1),ωk+1 in Uk+1 \ Uk such that {b(k+1),i + Uk | 1 ≤ i ≤ ωk+1} is a basis for the quotient space Uk+1/Uk. Set Bk+1 = {b(k+1),1, . . . , b(k+1),ωk+1}. Then $B = \bigcup_{k=1}^{p} B_k$ is a basis for V.
Let A = [T]_B. For each k, the transformation T maps the vectors of Bk into the space Uk−1, so, for each i, the vector T(bk,i) is a linear combination of vectors in $\bigcup_{j=1}^{k-1} B_j$. Consequently, A is block triangular with blocks of sizes ωi × ωj, and the
diagonal blocks are zero. From the proof of Theorem 4.33, we know that T(b(k+1),i) + Uk−1, i = 1, . . . , ωk+1, is linearly independent in the quotient space Uk/Uk−1, so each superdiagonal block A_{k,(k+1)} has full column rank.

The quotient space and coset notation in this argument may be obscuring the essential idea, so let us describe it in a simpler way. The set B1 is a basis for ker(T) and has ω1 vectors in it. The set B2 has ω2 vectors and is chosen so that B1 ∪ B2 is a basis for ker(T²). Thus, we start with a basis for the null space of T, and then extend it to get a basis for the null space of T². Then we adjoin the ω3 vectors of B3 to get a basis for the null space of T³, and so on. In general, at stage k, we have a basis $\bigcup_{i=1}^{k} B_i$ for the null space of T^k, and adjoin ωk+1 additional vectors to get a
basis for the null space of T^{k+1}; the set Bk+1 is the set of these ωk+1 additional vectors. We used the quotient space language as a convenient tool to prove that the blocks A_{k,(k+1)} have full column rank.

We now examine two particular ways of choosing the sets Bk. One yields an orthonormal basis, B. The other will yield a basis such that [T]_B is in Weyr canonical form.

Theorem 4.38. Let T : V → V be a nilpotent linear transformation on an n-dimensional inner product space V. Let ω1, . . . , ωp be the Weyr characteristic of T. Then there is an orthonormal basis B for V such that [T]_B has the block triangular form of (4.19).

Proof. Select the set B1 so that it is an orthonormal basis for U1 = ker(T). Since U1 is an ω1-dimensional subspace of the (ω1 + ω2)-dimensional subspace U2, the
subspace $U_1^\perp \cap U_2$ is an ω2-dimensional subspace of U2. Choose B2 to be an orthonormal basis for $U_1^\perp \cap U_2$; then B1 ∪ B2 is an orthonormal basis for U2. In general, Uk−1 is an (ω1 + ω2 + · · · + ωk−1)-dimensional subspace of the subspace Uk, so $U_{k-1}^\perp \cap U_k$ is an ωk-dimensional subspace of Uk; choose Bk to be an orthonormal basis for $U_{k-1}^\perp \cap U_k$. Now, for i < k, we have Ui ⊂ Uk, so $U_k^\perp \subset U_i^\perp$. Hence, the fact that $B_k \subseteq U_{k-1}^\perp$ tells us that the vectors of Bk are orthogonal to the vectors in Bi whenever i < k. Hence, $B = \bigcup_{k=1}^{p} B_k$ is an orthonormal basis for V.
Theorem 4.38 is important for numerical computation where stability issues are important. It is desirable to work with unitary similarity and stick with orthonormal change of basis. See [Sha99] for references.

Finally, to show any nilpotent matrix is similar to one in Weyr normal form, we need to show that there is a basis B such that the block triangular matrix of (4.19) has A_{k,(k+1)} = I_{ωk,ωk+1}, with all the other blocks being zero. This will involve starting with the set Bp and working backwards. Before plunging into the morass of notation for the general case, let us see the argument for the case p = 2.

Let B2 = {y1, . . . , yω2} be a linearly independent set of ω2 vectors from U2 \ U1. Then, as seen in the proof of Theorem 4.33, the vectors T(y1), T(y2), . . . , T(yω2) are linearly independent and are in U1. We extend the set {T(y1), T(y2), . . . , T(yω2)} to a basis for U1 by adjoining m = ω1 − ω2 additional vectors v1, . . . , vm from U1. (If m = 0, we do not need to adjoin any vectors.) We now have the following basis B for V:
B = {T(y1), T(y2), . . . , T(yω2), v1, . . . , vm, y1, . . . , yω2}.
Remember that ω2 + m = ω1. Since T(T(yi)) = T²(yi) = 0 and T(vi) = 0, we see T maps the first ω1 vectors in this set to zero. (These first ω1 vectors are a basis for U1.) And clearly, T maps the last ω2 vectors of this basis onto the first ω2 vectors, retaining the order. Hence,
$$[T]_B = \begin{pmatrix} 0_{\omega_1} & I_{\omega_1,\omega_2} \\ 0 & 0_{\omega_2} \end{pmatrix}.$$
This is the basic idea of the proof for the general case, but we need to repeat the procedure p − 1 times, working backwards from Up to U1. Try to keep this main idea in mind as the notation escalates.

Theorem 4.39. Let T : V → V be a nilpotent linear transformation on an n-dimensional vector space V. Let ω1, . . . , ωp be the Weyr characteristic of T. Then there is a basis, B, for V such that [T]_B is block triangular with blocks of sizes ωi × ωj, with A_{k,(k+1)} = I_{ωk,ωk+1} and all other blocks zero.
Thus,
(4.20)
$$A = [T]_B = \begin{pmatrix}
0_{\omega_1} & I_{\omega_1,\omega_2} & 0 & 0 & \cdots & 0 \\
0 & 0_{\omega_2} & I_{\omega_2,\omega_3} & 0 & \cdots & 0 \\
0 & 0 & 0_{\omega_3} & I_{\omega_3,\omega_4} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 0_{\omega_{p-1}} & I_{\omega_{p-1},\omega_p} \\
0 & 0 & 0 & \cdots & 0 & 0_{\omega_p}
\end{pmatrix}.$$
Proof. First we set up some notation. If S is a set of vectors, write T(S) for {T(x) | x ∈ S}. For a set of vectors S = {y1, . . . , ym} in Uk+1, we write $\overline{S}$ = {yi + Uk | 1 ≤ i ≤ m} for the corresponding cosets of Uk+1/Uk.

Let Bp be a set of ωp vectors in Up \ Up−1 such that $\overline{B_p}$ is a basis for the quotient space Up/Up−1. Then T(Bp) ⊆ Up−1, and from the proof of Theorem 4.33, we know $\overline{T(B_p)}$ is linearly independent in the quotient space Up−1/Up−2. Hence, we can extend $\overline{T(B_p)}$ to a basis for Up−1/Up−2 by adjoining mp = ω_{p−1} − ωp additional cosets from vectors in Up−1 \ Up−2. Label these additional vectors v1, . . . , vmp and set Bp−1 = T(Bp) ∪ {v1, . . . , vmp}. Then $\overline{B_{p-1}}$ is a basis for Up−1/Up−2.

Repeating the process, we have T(Bp−1) ⊆ Up−2 and $\overline{T(B_{p-1})}$ is linearly independent in the quotient space Up−2/Up−3. We adjoin mp−1 = ω_{p−2} − ω_{p−1} cosets from vectors w1, . . . , wmp−1 in Up−2 \ Up−3 and get Bp−2 = T(Bp−1) ∪ {w1, . . . , wmp−1}, where $\overline{B_{p-2}}$ is a basis for Up−2/Up−3. So far, we have
Bp−2 ∪ Bp−1 ∪ Bp = {T(Bp−1), w1, . . . , wmp−1, T(Bp), v1, . . . , vmp, Bp}.
Continue in this fashion. For each k, with $\overline{B_k}$ a basis for Uk/Uk−1, the set $\overline{T(B_k)}$ is linearly independent in the quotient space Uk−1/Uk−2 and thus can be extended to a basis for Uk−1/Uk−2 by adjoining cosets from mk = ω_{k−1} − ωk vectors from Uk−1 \ Uk−2.

Here is the key point: for k ≥ 2, the transformation T sends the ωk vectors of Bk to the first ωk vectors of Bk−1. The ω1 vectors of B1 are sent to zero. Hence, if we use the basis $B = \bigcup_{i=1}^{p} B_i$, the matrix [T]_B has the desired form.
As a consequence of Theorem 4.39, we have now shown, directly from the Weyr characteristic, that two nilpotent matrices are similar if and only if they have the same Weyr characteristic.

Note also that if we reorder the vectors of the basis B, constructed in the proof of Theorem 4.39, we can recover the Jordan canonical form. Specifically, for each of the ωp vectors x in Bp, the p vectors T^{p−1}x, T^{p−2}x, . . . , Tx, x are in the basis B. The matrix representing the action of T on these p vectors is the Jordan block Jp(0). If mp > 0, then for each of the mp = ω_{p−1} − ωp vectors vi, we have a chain of p − 1 vectors, T^{p−2}vi, T^{p−3}vi, . . . , Tvi, vi, such that the action of T on these p − 1 vectors is represented by the Jordan block Jp−1(0). Hence by reordering the vectors of B, starting with the chains generated by vectors in Bp, then the chains generated by the vectors in Bp−1 \ T(Bp), next the chains generated by the vectors in Bp−2 \ T(Bp−1), and so on, we get a matrix representation for T which is in Jordan canonical form. Note that ωp gives the number of Jordan blocks of size p, and then mp = ω_{p−1} − ωp gives the number of Jordan blocks of size (p − 1), and so on. In general, mk = ω_{k−1} − ωk gives the number of Jordan blocks of size (k − 1), as we saw in Section 4.2, equation (4.15).
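The conjugate-partition relationship between the two characteristics is a one-liner; the sketch below (the function name is ours) recovers the Segre characteristic, i.e. the list of Jordan block sizes, from the Weyr characteristic.

```python
def segre_from_weyr(omegas):
    """Conjugate partition: omega_k counts the Jordan blocks of size >= k,
    so the i-th largest block has size #{k : omega_k >= i}."""
    return [sum(1 for w in omegas if w >= i) for i in range(1, omegas[0] + 1)]

print(segre_from_weyr([4, 1]))     # [2, 1, 1, 1]  (J_2 + J_1 + J_1 + J_1)
print(segre_from_weyr([2, 2, 1]))  # [3, 2]        (J_3 + J_2)
```

Equivalently, the number of blocks of size exactly k is ωk − ωk+1 (with ω_{p+1} = 0), which is the count mk+1 used above shifted by one index.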
Exercises

1. Show that (S⁻¹AS)^k = S⁻¹A^kS.
2. Prove formula (4.1).
3. Show that TA,B defined in equation (4.4) is a linear transformation on M(m, n).
4. Do the induction argument to extend Corollary 4.5 to the case of t blocks.
5. Do the induction argument to complete the proof of Theorem 4.6.
6. Note that the binomial expansion formula can be used to compute (λI + N)^k, and use this to compute the kth power of the Jordan block Jn(λ).
7. Let V be an n-dimensional vector space, and let T : V −→ V be a nilpotent linear transformation with index of nilpotency p. Since T^{p−1} ≠ 0, there must be a vector x such that T^{p−1}x ≠ 0.
   (a) Show that for such an x, the set {x, Tx, T²x, . . . , T^{p−1}x} is linearly independent.
   (b) Let U be the subspace spanned by the set B = {x, Tx, T²x, . . . , T^{p−1}x}. Since T(U) ⊂ U, we can consider T|U, the map T restricted to the subspace U. What is the matrix of T|U with respect to the basis B? (That is, what is [T|U]_B?)
8. Let N = Jk1(0) ⊕ Jk2(0) ⊕ · · · ⊕ Jkr(0), where k1 ≥ k2 ≥ · · · ≥ kr.
   (a) Show that r is the nullity of N.
   (b) Show that k1 is the index of N.
   (c) Let A = λI + N. Show that r is the geometric multiplicity of λ.
9. Write down all possible Jordan forms for nilpotent 3 × 3 matrices.
10. Write down all possible Jordan forms for nilpotent 4 × 4 matrices.
11. True or False: Two n × n nilpotent matrices are similar if and only if they have the same index of nilpotency.
12. What are the possible Jordan forms for a 5 × 5 nilpotent matrix of index 2? Of index 3? Of index 4? Of index 5?
13. Let A = Jn(λ) be an n × n Jordan block. Determine the Jordan form of A². (The case λ = 0 needs special care.) Determine the Jordan form of A³. More generally, determine the Jordan form of A^k.
14. Find all possible Jordan forms for matrices with the given characteristic and minimal polynomials.
   (a) pA(x) = (x − 1)³(x − 2)²(x − 3)² and mA(x) = (x − 1)³(x − 2)²(x − 3)².
   (b) pA(x) = (x − 1)³(x − 2)²(x − 3)² and mA(x) = (x − 1)²(x − 2)²(x − 3).
   (c) pA(x) = (x − 1)³(x − 2)²(x − 3)² and mA(x) = (x − 1)(x − 2)(x − 3).
   (d) pA(x) = (x − 1)⁵ and mA(x) = (x − 1)³.
   (e) pA(x) = (x − 1)⁵ and mA(x) = (x − 1)².
   (f) pA(x) = (x − 1)⁴(x − 2)³ and mA(x) = (x − 1)²(x − 2)².
15. Verify the claim made in Section 4.4, that if C is the companion matrix of the polynomial p(x), then p(x) = det(xI − C).
16. Verify the claim made in Section 4.4, that if C is an n × n companion matrix, then the n matrices I, C, C², C³, . . . , C^{n−1} are linearly independent.
17. Verify the claim made in Section 4.4 that if A is the matrix of a linear transformation T and C is the companion matrix for pA(x), then C is the matrix for T with respect to a basis of the form B = {v, Tv, T²v, . . . , T^{n−1}v}, where v is such that the n vectors in B are linearly independent.
Chapter 5
Unitary Similarity and Normal Matrices
In this chapter we examine unitary similarity and normal matrices. The main result is the spectral theorem. Unless otherwise stated, all matrices in this chapter are square, complex matrices.
5.1. Unitary Similarity

Square matrices A and B are similar if B = S⁻¹AS for some nonsingular matrix S. Similar matrices give representations of the same linear transformation with respect to different bases, and thus have the same rank, eigenvalues, characteristic polynomial, trace, determinant, and Jordan form, since these things are intrinsic to the linear transformation. Unitary similarity is more restrictive and preserves additional properties.

Definition 5.1. The n × n complex matrices A and B are unitarily similar if there is a unitary matrix U such that B = U*AU.

Since U is unitary, U*AU = U⁻¹AU, so unitary similarity is a special type of similarity.

Definition 5.2. The Frobenius norm of A is
$$\|A\|_F = \left( \sum_{i=1}^{n} \sum_{j=1}^{n} |a_{ij}|^2 \right)^{1/2}.$$
Direct calculation shows ‖A‖²_F = trace(A*A) = trace(AA*). Since trace(A*A) = trace(U*A*AU) = trace[(U*A*U)(U*AU)], unitary similarity preserves the Frobenius norm, i.e., for any unitary matrix U we have ‖A‖_F = ‖U*AU‖_F. This is not true of general similarity. For example, if r denotes a nonnegative real number, then all matrices of the form
$$\begin{pmatrix} 1 & r \\ 0 & 2 \end{pmatrix}$$
are
similar, but no two of them have the same Frobenius norm. Hence, no two of them are unitarily similar. One can show that 2 × 2 matrices are unitarily similar if and only if they have the same eigenvalues and the same Frobenius norm. For n > 2, the problem of necessary and sufficient conditions for n × n matrices to be unitarily similar is more complicated [Sha91].

We have seen that any complex, square matrix A is similar to an upper triangular matrix. A slight modification of the proof yields a unitary similarity version of this theorem.

Theorem 5.3 (Schur [Sch09]). If A is a square, complex matrix there exists a unitary matrix U such that U*AU = T is upper triangular.

Proof. We use induction on n, where A is n × n. Let λ be an eigenvalue of A and let u1 be an associated eigenvector of length one. Let W be a unitary matrix which has u1 as its first column. Then W*AW has the block triangular form
$$\begin{pmatrix} \lambda & * \\ 0 & A_{22} \end{pmatrix},$$
where A22 is square of order n − 1 and the asterisk represents entries two through n of the first row. By the induction hypothesis, there is a unitary matrix V2 of order (n − 1) such that V2*A22V2 is upper triangular. Put
$$V = \begin{pmatrix} 1 & 0 \\ 0 & V_2 \end{pmatrix}.$$
Then
$$V^* W^* A W V = \begin{pmatrix} \lambda & * \\ 0 & V_2^* A_{22} V_2 \end{pmatrix}$$
is upper triangular. Since U = WV is unitary, we are done.

Observe that the diagonal entries of the triangular matrix T = U*AU are the eigenvalues of A, and the unitary matrix U may be chosen to get these diagonal entries in any desired order.
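Both facts above—that unitary similarity preserves the Frobenius norm while general similarity need not, and that a unitary U can triangularize A—can be checked numerically. A sketch using NumPy and SciPy's `schur` with complex output; the random seed and test matrices are our own choices.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# A random unitary matrix from a QR factorization.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
assert np.isclose(np.linalg.norm(A, 'fro'),
                  np.linalg.norm(Q.conj().T @ A @ Q, 'fro'))   # ||A||_F preserved

# General similarity need not preserve it: conjugate by S = diag(1, 2, 3, 4).
S = np.diag([1.0, 2.0, 3.0, 4.0])
B = np.linalg.inv(S) @ A @ S
print(np.linalg.norm(A, 'fro'), np.linalg.norm(B, 'fro'))      # generally different

# Schur (Theorem 5.3): A = Z T Z* with Z unitary and T upper triangular.
T, Z = schur(A, output='complex')
assert np.allclose(Z @ T @ Z.conj().T, A)
assert np.allclose(Z.conj().T @ Z, np.eye(4))                  # Z is unitary
assert np.allclose(np.tril(T, -1), 0)                          # T is upper triangular
print(np.diag(T))    # the diagonal of T carries the eigenvalues of A
```

Requesting `output='complex'` matters here: for real input SciPy otherwise returns the real Schur form, which is only quasi-triangular.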
5.2. Normal Matrices—the Spectral Theorem

A square, complex matrix A is said to be normal if AA* = A*A. The class of normal matrices includes all Hermitian matrices, as well as all skew-Hermitian and unitary matrices. The main result about normal matrices is the spectral theorem, which says that a matrix is normal if and only if it has an orthonormal basis of eigenvectors. We give two proofs of this theorem. The first uses more geometric, vector space arguments; the second uses Theorem 5.3 and matrix calculations.

5.2.1. The Spectral Theorem: First Proof. Our first proof of the spectral theorem relies on some facts about invariant subspaces. First we need Theorem 2.23; for convenience, we repeat the statement here.

Theorem 5.4. Let A be a square, complex matrix. The subspace U is invariant under A if and only if U⊥ is invariant under A*.

Next, a result about commuting matrices.

Theorem 5.5. Let A and B be n × n matrices over F such that AB = BA. Let λ be an eigenvalue of A which is in F, and let Vλ be the associated eigenspace. Then Vλ is invariant under B.
Proof. Let x ∈ Vλ. Then A(Bx) = B(Ax) = λBx, so Bx ∈ Vλ. Hence, Vλ is invariant under B.

We now use Theorems 5.4 and 5.5 to prove the following.

Theorem 5.6. Let A be a normal matrix, and let Vλ be an eigenspace of A. Then Vλ⊥ is invariant under A.

Proof. Since A commutes with A*, Theorem 5.5 tells us Vλ is invariant under A*. So by Theorem 5.4, the subspace Vλ⊥ is invariant under A.

Theorem 5.7 (The Spectral Theorem). Let A be an n × n complex matrix. Then the following are equivalent.
(1) A is normal.
(2) There is an orthonormal basis of Cⁿ consisting of eigenvectors of A.
(3) There exists a unitary matrix U such that U*AU is diagonal.

Proof. We use induction on n to show (1) implies (2). Suppose A is normal. Let λ be an eigenvalue of A with associated eigenspace Vλ. Let k = dim(Vλ). If k = n, then A = λI_n, and we are done. If k < n, then let u_1, . . . , u_k be an orthonormal basis for Vλ. From Theorem 5.4, the subspace Vλ⊥ is invariant under A*, and from Theorem 5.6, the subspace Vλ⊥ is invariant under A; hence Vλ⊥ is invariant under both A and A*. Since A is normal, the restriction of A to Vλ⊥ is also normal. So, by the induction hypothesis, there exists an orthonormal basis u_{k+1}, . . . , u_n of Vλ⊥ consisting of eigenvectors of the restriction of A to Vλ⊥. Then {u_1, . . . , u_n} is an orthonormal basis for Cⁿ consisting of eigenvectors of A.

To show (2) implies (3), let {u_1, . . . , u_n} be an orthonormal basis for Cⁿ consisting of eigenvectors of A. Let U be the n × n matrix with u_j in column j. Then U is unitary and U*AU = U^{-1}AU is diagonal, with the eigenvalues of A appearing along the diagonal.

Finally, suppose U*AU = D is diagonal; note the diagonal matrices D and D* commute. Then A = UDU* and
AA* = (UDU*)(UD*U*) = UDD*U* = UD*DU* = (UD*U*)(UDU*) = A*A.

5.2.2. The Spectral Theorem: A Matrix Proof. Here is a different proof of the spectral theorem using the Schur unitary triangularization Theorem 5.3.
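Before developing the matrix proof, the content of Theorem 5.7 is easy to check numerically. In this sketch (NumPy; the specific unitary and diagonal factors are invented for the illustration), we manufacture a normal matrix that is neither Hermitian nor unitary and verify both directions of the theorem:

```python
import numpy as np

rng = np.random.default_rng(0)
# A random unitary Q (QR of a random complex matrix) and a complex diagonal D.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
D = np.diag(np.array([1 + 2j, -3.0 + 0j, 0.5j, 2 - 1j]))
A = Q @ D @ Q.conj().T              # (3) implies (1): A = QDQ* must be normal

assert np.allclose(A @ A.conj().T, A.conj().T @ A)   # AA* = A*A

# (1) implies (2), (3): the columns of Q form an orthonormal basis of
# eigenvectors, and conjugating by the unitary Q diagonalizes A.
T = Q.conj().T @ A @ Q
assert np.allclose(T, np.diag(np.diag(T)))
assert np.allclose(np.sort_complex(np.diag(T)), np.sort_complex(np.diag(D)))
```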
First, two preliminary facts.

Proposition 5.8. If U is unitary, then A is normal if and only if U*AU is normal. We leave the proof as an exercise to the reader.

Proposition 5.9. An upper triangular matrix T is normal if and only if T is diagonal.

Proof. Let T be an upper triangular n × n matrix. The (n, n) entry of TT* is $|t_{nn}|^2$, while the (n, n) entry of T*T is $|t_{nn}|^2 + \sum_{i=1}^{n-1} |t_{in}|^2$. So, if T is normal, we must have t_{in} = 0 for i = 1, . . . , n − 1. Repeating this argument on the other diagonal entries
(working from entry (n − 1, n − 1) back to entry (1, 1)) shows that T is diagonal. Conversely, if T is diagonal, then T is certainly normal.

Now let A be an n × n complex matrix. By Theorem 5.3, we have A = UTU*, where U is unitary and T is upper triangular. By Proposition 5.8, A is normal if and only if T is normal, and by Proposition 5.9, we see A is normal if and only if T is diagonal, thus proving the spectral theorem. Note also that if A = UTU*, where U is unitary and T is upper triangular,
$$\|A\|_F^2 = \|T\|_F^2 = \sum_{i=1}^{n} |t_{ii}|^2 + \sum_{1 \le i < j \le n} |t_{ij}|^2,$$

It follows that S_{i,1} = 0 for i > 1. Apply the same argument to the remaining columns of blocks (moving from left to right) to show that S_{ij} = 0 whenever i > j. Hence, S has the same upper triangular form as A and B. Note also that S_{ii}^{-1} A_{ii} S_{ii} = B_{ii}.

The unitary version of Theorem 5.17 gives a bit more.

Theorem 5.18. Let
$$A=\begin{pmatrix}A_{11}&A_{12}&\cdots&A_{1t}\\ 0&A_{22}&\cdots&A_{2t}\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&A_{tt}\end{pmatrix} \quad\text{and}\quad B=\begin{pmatrix}B_{11}&B_{12}&\cdots&B_{1t}\\ 0&B_{22}&\cdots&B_{2t}\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&B_{tt}\end{pmatrix}$$
be similar, block triangular matrices with the same block sizes. Assume that spec(A_{ii}) = spec(B_{ii}) for i = 1, . . . , t and spec(A_{ii}) ∩ spec(A_{jj}) = ∅ whenever i ≠ j. Suppose U is a unitary matrix such that U^{-1}AU = B. Then U must be block diagonal with the same size diagonal blocks as A, i.e., U = U_1 ⊕ U_2 ⊕ · · · ⊕ U_t, where U_i has the same size as A_{ii}.

Proof. From Theorem 5.17, we know U must be block triangular. Since U is unitary, it must then have the block diagonal form U = U_1 ⊕ U_2 ⊕ · · · ⊕ U_t (see Exercises).

Corollary 5.19. Suppose A and B are unitarily similar, upper triangular matrices with a_{ii} = b_{ii} for i = 1, . . . , n. Assume the diagonal entries a_{11}, . . . , a_{nn} are all distinct and that U is a unitary matrix with U*AU = B. Then U must be diagonal and |a_{ij}| = |b_{ij}| for all i < j.

It is now easy to construct examples of matrices which are similar and have the same Frobenius norm, but are not unitarily similar. We simply need two upper triangular matrices A, B with the same diagonal entries (all different) and for which the sum of the squares of the entries above the diagonal is the same, but with |a_{ij}| ≠ |b_{ij}| for some i and j. For example, let λ_1, λ_2, λ_3 be distinct, and put
$$A=\begin{pmatrix}\lambda_1&1&0\\ 0&\lambda_2&1\\ 0&0&\lambda_3\end{pmatrix},\qquad B=\begin{pmatrix}\lambda_1&\sqrt{2}&0\\ 0&\lambda_2&0\\ 0&0&\lambda_3\end{pmatrix}.$$
Then A and B are similar and $\|A\|_F^2 = \|B\|_F^2 = \sum_{i=1}^{3} |\lambda_i|^2 + 2$, but Corollary 5.19 tells us A and B are not unitarily similar.

We now come to a theorem of Specht which gives a necessary and sufficient condition for two matrices to be unitarily similar. First, some notation. We use ω(x, y) to denote a monomial in the noncommuting variables x and y. Some examples are xyx, xy²x³y, yxyxy². We call ω(x, y) a word in x and y.
Theorem 5.20 (Specht [Spe40]). Let A and B be n × n complex matrices. Then A and B are unitarily similar if and only if trace ω(A, A*) = trace ω(B, B*) for every word ω(x, y) in x and y.

The necessity of this condition is easy to prove. Suppose B = U*AU, where U is unitary. Then for any word ω(x, y), we have ω(B, B*) = ω(U*AU, U*A*U) = U*ω(A, A*)U, and hence trace ω(A, A*) = trace ω(B, B*).

The proof of the converse is harder, and beyond the scope of this book, but the following remarks may make it seem plausible. First, note that for the words ω(x) = x^k, where k = 1, 2, 3, . . ., the condition gives trace(A^k) = trace(B^k) for all k. It is known that this implies that A and B have the same eigenvalues, for if A has eigenvalues λ_1, . . . , λ_n and B has eigenvalues μ_1, . . . , μ_n, then trace(A^k) = trace(B^k) gives
$$\sum_{i=1}^{n} \lambda_i^k = \sum_{i=1}^{n} \mu_i^k \tag{5.2}$$
for all k, and this implies (after reordering, if necessary) that λ_i = μ_i. In fact, one only needs (5.2) for 1 ≤ k ≤ n. Next, for the word ω(x, y) = xy, the condition gives trace(AA*) = trace(BB*), and so $\|A\|_F = \|B\|_F$.

Specht's theorem gives a nice set of unitary invariants, but it is an infinite set, for there are infinitely many words in x and y. The following refinement, due to Pearcy, shows that a finite set of words will do. Let W(k) denote the set of words ω(x, y) for which the sum of the exponents on x and y is at most k.

Theorem 5.21 (Pearcy [Pea62]). Let A and B be n × n complex matrices. Then A and B are unitarily similar if and only if trace ω(A, A*) = trace ω(B, B*) for every word ω(x, y) in W(2n²).
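Specht's condition is easy to test in truncated form (Theorem 5.21 guarantees that words of bounded degree suffice). A sketch with NumPy; the helper names `word_trace` and `word_traces` and the random matrices are our own:

```python
import numpy as np
from itertools import product

def word_trace(A, word):
    """trace of ω(A, A*) for a word over the alphabet {'x', 'y'} (x = A, y = A*)."""
    P = np.eye(A.shape[0], dtype=complex)
    for ch in word:
        P = P @ (A if ch == 'x' else A.conj().T)
    return np.trace(P)

def word_traces(A, max_len):
    """All word traces of total degree at most max_len."""
    return np.array([word_trace(A, w)
                     for k in range(1, max_len + 1)
                     for w in product('xy', repeat=k)])

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
B = U.conj().T @ A @ U               # unitarily similar to A

# Necessity: every word trace agrees (here, all words of degree <= 4).
assert np.allclose(word_traces(A, 4), word_traces(B, 4))
```

The word xy recovers trace(AA*), i.e. the squared Frobenius norm, as noted in the remarks above.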
Exercises

1. Prove Proposition 5.8.

2. Find a matrix which is normal, but neither Hermitian, skew-Hermitian, nor unitary.

3. Let A = H + iK, where H = (A + A*)/2 and K = (A − A*)/(2i) are Hermitian. Show that A is normal if and only if HK = KH.

4. Show that if ‖Ax‖ = ‖A*x‖, then ‖(A − αI)x‖ = ‖(A* − ᾱI)x‖ for any scalar α.

5. Let A be a complex m × n matrix. Show that if AA* = 0, then A = 0.

6. Prove Theorem 5.12.
7. The proof of Theorem 5.18 used the fact that a block triangular unitary matrix U must be block diagonal. One way to prove this is by using Theorem 5.11. Give an alternative proof which directly uses the fact that U is unitary. Hint: The inverse of an invertible block triangular matrix
$$A=\begin{pmatrix}A_{11}&A_{12}&\cdots&A_{1t}\\ 0&A_{22}&\cdots&A_{2t}\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&A_{tt}\end{pmatrix}$$
is also block triangular with diagonal blocks A_{ii}^{-1}.

8. Prove Corollary 5.19.

9. If A and B are normal and commute, show that AB is normal.

10. Give an example of normal matrices, A and B, such that AB is not normal.

11. Show that U is unitary if and only if U = exp(iH), where H is Hermitian.

12. Show that for any square, complex matrix A, the determinant of exp(A) is exp(trace(A)). Hint: Use the fact that A is similar to a triangular matrix.

13. For $K = \begin{pmatrix} 0 & \theta \\ -\theta & 0 \end{pmatrix}$, show that $\exp(K) = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}$.

14. Show that if Q is a real matrix and det(Q) = 1, then Q is orthogonal if and only if Q = exp(K), where K is a real skew-symmetric matrix.

15. Suppose H is Hermitian.
(a) Show that I − iH is invertible.
(b) Let U = (I + iH)(I − iH)^{-1}; show that U is unitary.
(c) Show that −1 is not an eigenvalue of U.
(d) Show that H = i(I − U)(I + U)^{-1}.
(e) This is called the Cayley transform and may be considered as a matrix version of the following facts about the function f(z) = (1 + iz)/(1 − iz), where z is a complex variable. (Note f is an example of a fractional linear transformation, or Möbius transformation.) If z is a real number, then |f(z)| = 1. The H plays the role of the real number, and the U plays the role of a complex number of modulus 1. The function f maps the real line to the unit circle. The number −1 is not the image of any real number, but if we allow the value ∞, then it is mapped to −1. Setting w = f(z) and solving for z, we get z = i(1 − w)/(1 + w). So, the function g(w) = i(1 − w)/(1 + w) is the inverse of f. If |w| = 1, but w ≠ −1, then g(w) is a real number. The function g maps the unit circle to the real line, with the number −1 getting sent to ∞.
16. For the real version of Exercise 15: Let K be a real, skew-symmetric matrix. Then (I + K) is invertible and Q = (I − K)(I + K)^{-1} is a real orthogonal matrix which does not have −1 as an eigenvalue. We have K = (I − Q)(I + Q)^{-1}.

17. Suppose $A = \sum_{i=1}^{t} \lambda_i P_i$, where P_1, . . . , P_t are n × n matrices satisfying the following:
(a) P_i² = P_i = P_i^*;
(b) P_iP_j = 0 when i ≠ j;
(c) $\sum_{i=1}^{t} P_i = I_n$.
Show that A is normal, and each column of P_i is an eigenvector of A corresponding to the eigenvalue λ_i.
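The Cayley transform of Exercises 15 and 16 can be sketched numerically; here is one such check with NumPy, using a random Hermitian H of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (X + X.conj().T) / 2                        # Hermitian
I = np.eye(4)

U = (I + 1j * H) @ np.linalg.inv(I - 1j * H)    # Cayley transform of H
assert np.allclose(U.conj().T @ U, I)           # U is unitary
assert not np.any(np.isclose(np.linalg.eigvals(U), -1))  # -1 is not an eigenvalue

H_back = 1j * (I - U) @ np.linalg.inv(I + U)    # inverse Cayley transform
assert np.allclose(H_back, H)
```

Each real eigenvalue λ of H is carried to the unimodular number (1 + iλ)/(1 − iλ) on the unit circle, which is why −1 never appears as an eigenvalue of U.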
Chapter 6
Hermitian Matrices
Hermitian matrices appeared earlier in connection with the inner product and as a special case of normal matrices. We now study them in more detail.
6.1. Conjugate Bilinear Forms

The inner product ⟨x, y⟩, for x, y in an inner product space, is a special type of a more general object called a conjugate bilinear form. However, we begin with the notion of a bilinear form, which can be defined for a vector space over a general field F.

Definition 6.1. Let V be a vector space over a field F. A bilinear form, denoted B, is a map from V × V to F satisfying the following: for all vectors x, y, z ∈ V and scalars α, β ∈ F,
B(αx + βy, z) = αB(x, z) + βB(y, z),
B(x, αy + βz) = αB(x, y) + βB(x, z).
Thus, the function B(x, y) is linear in each of the two vector inputs. We usually dispense with the label B and use angle brackets to denote bilinear forms, writing B(x, y) = ⟨x, y⟩. If ⟨x, y⟩ = ⟨y, x⟩ for all vectors x, y ∈ V, we say the form is symmetric.

Example 6.2. Let V be n-dimensional, and let A be any n × n matrix over F. Define ⟨x, y⟩ = yᵀAx; then this is a bilinear form. If A is symmetric, the form is symmetric. If A = I_n, this form gives the usual dot product. It may seem perverse to put ⟨x, y⟩ = yᵀAx rather than xᵀAy; the reason is to be compatible with the analogous formula for a conjugate bilinear form.

The following example is important in analysis.

Example 6.3. Let V be the space of continuous real-valued functions on a closed interval [a, b], and define $\langle f(x), g(x)\rangle = \int_a^b f(x)g(x)\,dx$.
Example 6.3 may be viewed as a continuous version of the ordinary dot product in Rⁿ, where $\langle x, y\rangle = \sum_{i=1}^{n} x_i y_i$. The functions f and g play the roles of x and y, the variable x plays the role of the index of summation i, the product f(x)g(x) is analogous to x_iy_i, and the integral sign replaces the summation sign.

When F = C, we usually work with conjugate bilinear forms rather than bilinear forms.

Definition 6.4. Let V be a vector space over C. A conjugate bilinear form, denoted ⟨ , ⟩, is a map from V × V to C which satisfies the following: for all vectors x, y, z ∈ V and scalars α, β ∈ C,
$$\langle \alpha x + \beta y, z\rangle = \alpha\langle x, z\rangle + \beta\langle y, z\rangle,$$
$$\langle x, \alpha y + \beta z\rangle = \bar{\alpha}\langle x, y\rangle + \bar{\beta}\langle x, z\rangle.$$
Thus, the function ⟨ , ⟩ is linear in the first entry but conjugate linear in the second entry. Some define the form to be linear in the second entry and conjugate linear in the first entry. A conjugate bilinear form which satisfies $\langle x, y\rangle = \overline{\langle y, x\rangle}$ for all vectors x, y ∈ V is called conjugate symmetric.

Example 6.5. Let V = Cⁿ, and let A be any n × n complex matrix. Define ⟨x, y⟩ = y*Ax; then this is a conjugate bilinear form. If A is Hermitian (i.e., A = A*), the form is conjugate symmetric. If A = I_n, this form gives the usual inner product for Cⁿ. Note that we used y*Ax rather than x*Ay because of our choice to have the forms be conjugate linear in the second entry. (Of course, this is an argument for using the alternative definition which puts conjugate linearity in the first entry.)

We now show that any bilinear or conjugate bilinear form on a finite-dimensional vector space can be represented by a matrix and is of the type described in Examples 6.2 (for a bilinear form) or 6.5 (for a conjugate bilinear form).

Theorem 6.6. Let ⟨ , ⟩ be a bilinear form on an n-dimensional vector space V. Let B = {v_1, . . . , v_n} be a basis for V. Let A be the n × n matrix with ⟨v_j, v_i⟩ in entry (i, j). Then ⟨x, y⟩ = [y]ᵀ_B A[x]_B.

Proof. Let [x]ᵀ_B = (x_1, . . . , x_n) and [y]ᵀ_B = (y_1, . . . , y_n). Put a_{ij} = ⟨v_j, v_i⟩. Then $x = \sum_{j=1}^{n} x_j v_j$ and $y = \sum_{i=1}^{n} y_i v_i$, so
$$\langle x, y\rangle = \sum_{i=1}^{n}\sum_{j=1}^{n} x_j y_i \langle v_j, v_i\rangle = \sum_{i=1}^{n}\sum_{j=1}^{n} x_j y_i a_{ij} = \sum_{i=1}^{n} y_i \sum_{j=1}^{n} a_{ij} x_j = \sum_{i=1}^{n} y_i\,(\text{entry } i \text{ of } A[x]_B) = [y]^T_B A[x]_B.$$

When the form is symmetric, a_{ij} = a_{ji}, and hence A is symmetric. For a conjugate bilinear form, a similar calculation, but with conjugating the y_i's, shows ⟨x, y⟩ = [y]*_B A[x]_B. When the form is conjugate symmetric, $a_{ij} = \overline{a_{ji}}$ and A is
Hermitian. To get the usual inner product in Cⁿ, take B to be the standard basis {e_1, . . . , e_n} and A = I.

The matrix A in Theorem 6.6 depends on the choice of basis B. We now see how changing the basis affects the matrix. Let A = {v_1, . . . , v_n} and B = {w_1, . . . , w_n} be two bases for the n-dimensional vector space V. Let the matrix A represent the bilinear form with respect to the A-basis, and let B represent the form with respect to the B-basis. Thus, ⟨x, y⟩ = [y]ᵀ_A A[x]_A = [y]ᵀ_B B[x]_B. We now obtain a formula relating A and B. Let P be the n × n nonsingular matrix which takes us from B-coordinates to A-coordinates: [z]_A = P[z]_B for all z ∈ V. Then
$$b_{ij} = \langle w_j, w_i\rangle = [w_i]^T_{\mathcal{A}} A [w_j]_{\mathcal{A}} = (P[w_i]_{\mathcal{B}})^T A P[w_j]_{\mathcal{B}} = [w_i]^T_{\mathcal{B}} (P^TAP) [w_j]_{\mathcal{B}}.$$
But [w_i]_B and [w_j]_B are just the unit coordinate vectors e_i and e_j, so b_{ij} = e_iᵀ(PᵀAP)e_j = entry (i, j) of PᵀAP, and hence B = PᵀAP. The calculation for a conjugate bilinear form is similar, and gives B = P*AP.

Definition 6.7. Two n × n matrices A and B with entries from a field F are said to be congruent (over F) if B = PᵀAP for some nonsingular matrix P with entries in F. Two complex n × n matrices A and B are said to be conjunctive if B = P*AP for some nonsingular complex matrix P.

The calculation above shows that matrices which represent the same bilinear form with respect to different bases are congruent. The steps can be reversed to prove the converse. Thus, we have the following.

Theorem 6.8. Let V be an n-dimensional vector space over the field F. Two n × n matrices A and B, with entries from F, are congruent over F if and only if they represent the same bilinear form with respect to different bases. If F = C, then n × n complex matrices A and B are conjunctive if and only if they represent the same conjugate bilinear form with respect to different bases.

Both congruence and conjunctivity are equivalence relations, and thus partition the set of square matrices into equivalence classes.
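The change-of-basis computation above can be traced numerically. In this NumPy sketch, the random form matrix and basis-change matrix are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A = rng.standard_normal((n, n))     # matrix of the bilinear form <x, y> = y^T A x
P = rng.standard_normal((n, n))     # basis change [z]_A = P [z]_B (nonsingular a.s.)
B = P.T @ A @ P                     # congruent matrix representing the same form

# The form gives the same value whichever coordinate system we compute it in.
xB, yB = rng.standard_normal(n), rng.standard_normal(n)   # B-coordinates
xA, yA = P @ xB, P @ yB                                    # A-coordinates
assert np.isclose(yA @ A @ xA, yB @ B @ xB)
```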
Note, however, that this equivalence relation is quite diﬀerent from the similarity relation. Matrices A and B are similar if B = P −1 AP for some nonsingular matrix P ; similar matrices represent the same linear transformation with respect to diﬀerent bases. Congruent matrices A and B satisfy B = P T AP and represent the same bilinear form with respect to diﬀerent bases. In the special case where P −1 = P T , congruence and similarity coincide, but this is not typically the case.
6.2. Properties of Hermitian Matrices and Inertia

If A is an n × n complex matrix, we may write A as H + iK, where H = (A + A*)/2 and K = (A − A*)/(2i) are Hermitian. This is analogous to the representation of a complex
number z as x + iy, where x and y are real numbers; the Hermitian matrices H and K play the role of the real and imaginary components.

Theorem 6.9. Let A be an n × n complex matrix. The following are equivalent.
(1) A is Hermitian.
(2) x*Ax is a real number for any x ∈ Cⁿ.
(3) A is normal and all of the eigenvalues of A are real numbers.

Proof. The equivalence of (1) and (3) was established in Theorem 5.13. The fact that (1) implies (2) is easy: if A is Hermitian, then x*Ax is a 1 × 1 matrix with (x*Ax)* = x*A*x = x*Ax, so x*Ax equals its own conjugate and is real. The converse takes a bit more work. For any x, y in Cⁿ we have
$$(x + y)^*A(x + y) = x^*Ax + x^*Ay + y^*Ax + y^*Ay. \tag{6.1}$$
Assume (2) holds. Then the left-hand side of (6.1) is real, as are the numbers x*Ax and y*Ay. So (x*Ay + y*Ax) must be real. Choosing x = e_j and y = e_k shows that (a_{jk} + a_{kj}) is a real number, so we must have a_{jk} = r + is and a_{kj} = t − is for some real numbers r, s, and t. Now select x = e_j and y = ie_k to see that i(a_{jk} − a_{kj}) is also a real number. This gives r = t, and so $a_{kj} = \overline{a_{jk}}$. Hence, A is Hermitian.

Let H denote a Hermitian matrix. The spectral theorem tells us H = UDU*, where U is unitary and D is diagonal. The diagonal entries of D are the eigenvalues of H and hence are real numbers. So, H is conjunctive to a real, diagonal matrix. If H is also real, i.e., if H is a real symmetric matrix, then H has real eigenvectors, and the unitary matrix U can be taken to be a real, orthogonal matrix, in which case U* = Uᵀ, and H is congruent to D.

Definition 6.10. Let H be an n × n Hermitian matrix. Let i₊(H) denote the number of positive eigenvalues of H, let i₋(H) denote the number of negative eigenvalues of H, and let i₀(H) denote the number of zero eigenvalues of H (where repeated eigenvalues are to be counted according to multiplicity). The ordered triple of numbers i(H) = (i₊(H), i₋(H), i₀(H)) is called the inertia of H.

Theorem 6.11. Let H be a Hermitian matrix with inertia i(H) = (p, q, t). Then H is conjunctive to the matrix I_p ⊕ (−I_q) ⊕ 0_t, i.e., to a diagonal matrix with p positive ones, q negative ones, and t zeroes on the main diagonal.

Proof. Let λ_1, . . . , λ_p be the positive eigenvalues of H, let λ_{p+1}, . . . , λ_{p+q} be the negative eigenvalues of H, and put D = diag(λ_1, . . . , λ_n), where n = p + q + t and λ_{p+q+1} = λ_{p+q+2} = · · · = λ_n = 0. Let F be the diagonal matrix with the number $1/\sqrt{|\lambda_i|}$ in the i-th diagonal position for 1 ≤ i ≤ p + q and ones in the last t diagonal entries. Then FDF = I_p ⊕ (−I_q) ⊕ 0_t. We know U*HU = D for some unitary matrix U, so FU*HUF = I_p ⊕ (−I_q) ⊕ 0_t.
Putting P = U F , we have P ∗ = F ∗ U ∗ = F U ∗ and P ∗ HP = Ip ⊕ (−Iq ) ⊕ 0t . Since F and U are both nonsingular, P is nonsingular, and H is conjunctive to Ip ⊕ (−Iq ) ⊕ 0t .
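The proof of Theorem 6.11 is constructive and can be traced in code. A sketch with NumPy; the function name `inertia_form` and the example matrix are our own:

```python
import numpy as np

def inertia_form(H, tol=1e-10):
    """Return the inertia (p, q, t) of Hermitian H and a nonsingular P with
    P* H P = I_p (+) (-I_q) (+) 0_t, following the proof of Theorem 6.11."""
    evals, U = np.linalg.eigh(H)             # H = U diag(evals) U*, U unitary
    pos = np.where(evals > tol)[0]
    neg = np.where(evals < -tol)[0]
    zer = np.where(np.abs(evals) <= tol)[0]
    order = np.concatenate([pos, neg, zer])  # positives, then negatives, then zeros
    evals, U = evals[order], U[:, order]
    # F carries 1/sqrt(|lambda_i|) for the nonzero eigenvalues, 1 for the zeros
    scale = np.ones_like(evals)
    nz = np.abs(evals) > tol
    scale[nz] = 1.0 / np.sqrt(np.abs(evals[nz]))
    return (len(pos), len(neg), len(zer)), U @ np.diag(scale)

H = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, -5.0]])             # eigenvalues 3, 1, -5
(p, q, t), P = inertia_form(H)
assert (p, q, t) == (2, 1, 0)
assert np.allclose(P.conj().T @ H @ P, np.diag([1.0, 1.0, -1.0]))
```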
For a real symmetric matrix, the unitary matrix U can be replaced by a real orthogonal matrix, and the P in the proof of Theorem 6.11 will be a real matrix, giving the following real version of Theorem 6.11.

Theorem 6.12. Let S be a real symmetric matrix with inertia i(S) = (p, q, t). Then S is congruent to the matrix I_p ⊕ (−I_q) ⊕ 0_t, i.e., to a diagonal matrix with p positive ones, q negative ones, and t zeroes on the main diagonal.

In the proof of Theorem 6.11, we used the spectral theorem to get to the diagonal form, with the eigenvalues on the diagonal, and then further reduced the diagonal entries to ±1's by using square roots. This argument would not apply over the field of rational numbers or, more generally, over subfields of R or C which do not contain the eigenvalues of the matrix or square roots of those eigenvalues. The question of conjunctivity or congruence of matrices over Q and other subfields of C is a much more complicated matter [Jon50, Newm72].

Example 6.13. Let $H = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}$. Over the field R, the matrix H is congruent to I_2. However, these matrices are not congruent over Q. For, suppose we had H = PI_2Pᵀ = PPᵀ for some nonsingular, rational matrix P. Then det(H) = 2 = (det P)². But if P is a rational matrix, then det P is a rational number, and hence cannot square to 2. So we have a contradiction, and H is not congruent to I_2 over Q. The same argument holds for any matrix of the form $\begin{pmatrix} 1 & 0 \\ 0 & p \end{pmatrix}$, where p is a prime.

The Jordan canonical form gives a canonical form for the set of n × n complex matrices under the equivalence relation of similarity. Sylvester's law of inertia tells us that the form in Theorem 6.11 provides a canonical form for Hermitian matrices under the equivalence relation of conjunctivity (or for real symmetric matrices under the equivalence relation of congruence).

Theorem 6.14 (Sylvester's Law of Inertia). Two Hermitian matrices A and B are conjunctive if and only if i(A) = i(B).
Two real symmetric matrices A and B are congruent if and only if i(A) = i(B).

Proof. We prove the first statement; the proof of the second is essentially the same. Suppose i(A) = i(B) = (p, q, t). Theorem 6.11 tells us A and B are both conjunctive to I_p ⊕ (−I_q) ⊕ 0_t and hence, since conjunctivity is an equivalence relation, are conjunctive to each other.

Conversely, suppose A and B are two conjunctive n × n Hermitian matrices. They must have the same rank, so i₀(A) = i₀(B). So it will suffice to show that i₊(A) = i₊(B); for then i₋(A) = n − i₀(A) − i₊(A) = i₋(B). Let i₊(A) = k and i₊(B) = j, and let P be a nonsingular matrix such that B = P*AP. Let v_1, . . . , v_k be a linearly independent set of eigenvectors of A, corresponding to the k positive eigenvalues of A. Let U be the subspace spanned by {P^{-1}v_1, P^{-1}v_2, . . . , P^{-1}v_k}. Since P is nonsingular, dim(U) = k. Now let w_{j+1}, . . . , w_n be a linearly independent set of eigenvectors of B, corresponding
to the negative and zero eigenvalues of B, and let W = span[w_{j+1}, . . . , w_n]; the subspace W has dimension n − j. So, dim(U) + dim(W) = k + (n − j) = n + (k − j). Suppose k > j. Then dim(U) + dim(W) > n and hence U ∩ W is nonzero. Let y be a nonzero vector in U ∩ W. Since y ∈ W, we have y*By ≤ 0. But y ∈ U, so Py is a linear combination of v_1, . . . , v_k. Also, Py is nonzero because y ≠ 0 and P is nonsingular. Hence, (Py)*APy > 0. But (Py)*APy = y*(P*AP)y = y*By, so we have y*By > 0, a contradiction. Therefore, k > j is impossible. Reversing the roles of A and B in this argument shows that k < j is also impossible. Hence, k = j and i₊(A) = i₊(B).
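Sylvester's law is easy to probe numerically: conjugating a Hermitian matrix by any nonsingular P, not just a unitary one, leaves the inertia unchanged. A quick NumPy sketch with matrices of our own choosing:

```python
import numpy as np

def inertia(H, tol=1e-10):
    """(# positive, # negative, # zero) eigenvalues of a Hermitian matrix."""
    w = np.linalg.eigvalsh(H)
    return (int(np.sum(w > tol)),
            int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

H = np.diag([4.0, 1.0, -2.0, 0.0])            # inertia (2, 1, 1)
rng = np.random.default_rng(5)
P = rng.standard_normal((4, 4))               # nonsingular with probability 1
B = P.conj().T @ H @ P                        # conjunctive to H

assert inertia(H) == (2, 1, 1)
assert inertia(B) == inertia(H)               # Sylvester's law of inertia
```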
6.3. The Rayleigh–Ritz Ratio and the Courant–Fischer Theorem

When A is Hermitian, x*Ax is a real number. For nonzero x, the ratio x*Ax/x*x is called a Rayleigh–Ritz ratio or a Rayleigh quotient. These ratios play an important role in studying the eigenvalues of Hermitian matrices. If we normalize x, putting u = x/‖x‖, then u*u = 1 and x*Ax/x*x = u*Au. So,
$$\left\{ \frac{x^*Ax}{x^*x} : x \neq 0 \right\} = \{\, u^*Au : \|u\| = 1 \,\}.$$
Our goal is a characterization of the eigenvalues of A in terms of these Rayleigh quotients. We begin with the following.

Theorem 6.15. Let A be Hermitian with eigenvalues λ_1, . . . , λ_n, ordered in nondecreasing order: λ_1 ≤ λ_2 ≤ · · · ≤ λ_n. Then the following hold.
(1) λ_1 = min{ x*Ax : ‖x‖ = 1 }.
(2) λ_n = max{ x*Ax : ‖x‖ = 1 }.
(3) { x*Ax : ‖x‖ = 1 } = [λ_1, λ_n], i.e., the set { x*Ax : ‖x‖ = 1 } is the closed interval with endpoints λ_1 and λ_n.
(4) $\max_{\|x\|=1} \|Ax\| = \max\{|\lambda_1|, |\lambda_n|\}$.

Proof. Let U be a unitary matrix such that A = U*DU, where D is the diagonal matrix diag(λ_1, . . . , λ_n). Since U is unitary, U maps the set of vectors of length one onto itself. Hence,
$$\{ x^*Ax : x^*x = 1 \} = \{ x^*U^*DUx : x^*x = 1 \} = \{ y^*Dy : y = Ux,\ x^*x = 1 \} = \{ y^*Dy : y^*y = 1 \} = \Big\{ \sum_{i=1}^{n} \lambda_i |y_i|^2 : \sum_{i=1}^{n} |y_i|^2 = 1 \Big\}.$$
Under the restriction $\sum_{i=1}^{n} |y_i|^2 = 1$, the minimum value of $\sum_{i=1}^{n} \lambda_i |y_i|^2$ occurs when y_1 = 1 and y_i = 0 for i > 1; this minimum value is λ_1. The maximum value is λ_n, achieved when y_n = 1 and y_i = 0 for i < n. Any number between λ_1 and λ_n can
be expressed as tλ_1 + (1 − t)λ_n, for some 0 ≤ t ≤ 1. Setting $y_1 = \sqrt{t}$, $y_n = \sqrt{1-t}$, and y_i = 0 for 1 < i < n will give $\sum_{i=1}^{n} \lambda_i |y_i|^2 = t\lambda_1 + (1-t)\lambda_n$ and $\sum_{i=1}^{n} y_i^2 = 1$.

For the last part, note that when A is Hermitian, ‖Ax‖² = (Ax)*(Ax) = x*A*Ax = x*A²x. Now A² is also Hermitian and has eigenvalues λ_i², for i = 1, . . . , n. So, for ‖x‖ = 1, we have x*A²x ≤ max{λ_1², λ_n²}. Taking square roots, $\max_{\|x\|=1} \|Ax\| = \max\{|\lambda_1|, |\lambda_n|\}$.

Under the restriction $\sum_{i=1}^{n} |y_i|^2 = 1$, the sum $\sum_{i=1}^{n} \lambda_i |y_i|^2$ takes its minimum value, λ_1, precisely when all of the nonzero coordinates of y are in positions corresponding to those of λ_1 in the diagonal matrix D. (Note λ_1 could occupy two or more diagonal entries if it is a repeated eigenvalue.) So we have x*Ax = λ_1 and ‖x‖ = 1 if and only if x is an eigenvector of A corresponding to the smallest eigenvalue λ_1. Similarly, x*Ax = λ_n and ‖x‖ = 1 occur only when x is an eigenvector of A corresponding to the largest eigenvalue λ_n. Also, putting x = e_i gives x*Ax = a_{ii}, thus establishing the following corollary.

Corollary 6.16. If A is a Hermitian matrix with smallest eigenvalue λ_1 and largest eigenvalue λ_n, then λ_1 ≤ a_{ii} ≤ λ_n for each diagonal entry a_{ii} of A.

Corollary 6.16 is a special case of a theorem of Cauchy, stating that the eigenvalues of a principal submatrix of a Hermitian matrix must "interlace" the eigenvalues of the full matrix. We will prove this interlacing theorem using the Courant–Fischer "minimax" and "maximin" characterizations of the eigenvalues of a Hermitian matrix. Before stating and proving this general result, it may be useful to first look at the simple case of a real diagonal matrix.

Example 6.17. Let D = diag(λ_1, . . . , λ_n) be a real diagonal matrix and assume λ_1 ≤ λ_2 ≤ · · · ≤ λ_n. Set U_k = span[e_1, . . . , e_k]. Then
$$\lambda_k = \max\left\{ \frac{x^*Dx}{x^*x} : x \in U_k,\ x \neq 0 \right\}.$$
Now let V_k denote any k-dimensional subspace of Cⁿ. Put W = span[e_k, . . . , e_n]. Then dim(W) = n − k + 1, so V_k ∩ W is nonzero. Choose x ∈ V_k ∩ W. Since x ∈ W, we have x*Dx/x*x ≥ λ_k. Hence, for any k-dimensional subspace V_k, we have
$$\max_{\substack{x \in V_k \\ x \neq 0}} \frac{x^*Dx}{x^*x} \geq \lambda_k.$$
But, for the particular k-dimensional subspace U_k, we know the maximum value is equal to λ_k. This gives the following characterization of λ_k:
$$\lambda_k = \min_{V_k} \max_{\substack{x \in V_k \\ x \neq 0}} \frac{x^*Dx}{x^*x},$$
where the minimum is over all kdimensional subspaces Vk . The argument in Example 6.17 gives a minmax characterization of the eigenvalues of a real diagonal matrix. This characterization also holds for the eigenvalues of a general Hermitian matrix. There is also a maxmin characterization.
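Theorem 6.15 (and the diagonal case of Example 6.17) can be checked numerically. In this NumPy sketch, the random Hermitian matrix is our own test case:

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (X + X.conj().T) / 2                     # Hermitian
evals, V = np.linalg.eigh(A)                 # ascending: lam_1 <= ... <= lam_n
lam1, lamn = evals[0], evals[-1]

# Rayleigh quotients of random unit vectors all land in [lam_1, lam_n] ...
for _ in range(200):
    x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    x /= np.linalg.norm(x)
    r = (x.conj() @ A @ x).real
    assert lam1 - 1e-10 <= r <= lamn + 1e-10

# ... and the endpoints are attained at the extreme eigenvectors.
u1, un = V[:, 0], V[:, -1]
assert np.isclose((u1.conj() @ A @ u1).real, lam1)
assert np.isclose((un.conj() @ A @ un).real, lamn)
```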
Theorem 6.18 (Courant and Fischer [CouHil37, Fisc05]). Let A be an n × n Hermitian matrix with eigenvalues λ_1 ≤ λ_2 ≤ · · · ≤ λ_n. Then
$$\lambda_k = \min_{V_k} \max_{\substack{x \in V_k \\ x \neq 0}} \frac{x^*Ax}{x^*x}, \tag{6.2}$$
where the minimum is over all k-dimensional subspaces V_k. Also,
$$\lambda_k = \max_{V_{n-k+1}} \min_{\substack{x \in V_{n-k+1} \\ x \neq 0}} \frac{x^*Ax}{x^*x}, \tag{6.3}$$
where the maximum is over all (n − k + 1)-dimensional subspaces V_{n−k+1}.

Proof. Let {v_1, . . . , v_n} be an orthonormal basis of eigenvectors of the matrix A, with Av_i = λ_iv_i. For $x = \sum_{i=1}^{n} x_i v_i$ we have
$$x^*Ax = \sum_{i=1}^{n}\sum_{j=1}^{n} x_i \bar{x}_j (v_j^*Av_i) = \sum_{i=1}^{n}\sum_{j=1}^{n} x_i \bar{x}_j \lambda_i (v_j^*v_i) = \sum_{i=1}^{n} |x_i|^2 \lambda_i.$$
Let U_k = span[v_1, . . . , v_k]. Then
$$\max_{\substack{x \in U_k \\ x \neq 0}} \frac{x^*Ax}{x^*x} = \max \frac{1}{x^*x}\sum_{i=1}^{k} |x_i|^2 \lambda_i = \lambda_k.$$
Let V_k be any k-dimensional subspace of Cⁿ, and let W = span[v_k, . . . , v_n]. Then dim(V_k) = k and dim(W) = n − k + 1, so V_k ∩ W is nonzero. Choose y ∈ V_k ∩ W with ‖y‖ = 1. Then $y = \sum_{i=k}^{n} y_i v_i$ and $\sum_{i=k}^{n} |y_i|^2 = 1$. So $y^*Ay = \sum_{i=k}^{n} |y_i|^2 \lambda_i \geq \lambda_k$. Hence,
$$\max_{\substack{x \in V_k \\ x \neq 0}} \frac{x^*Ax}{x^*x} \geq y^*Ay \geq \lambda_k.$$
Therefore, $\lambda_k = \min_{V_k} \max_{x \in V_k,\, x \neq 0} \frac{x^*Ax}{x^*x}$, where choosing V_k = U_k gives the value λ_k.

To prove the max-min statement, put R_k = span[v_k, . . . , v_n]. We then have
$$\min_{\substack{x \in R_k \\ x \neq 0}} \frac{x^*Ax}{x^*x} = \lambda_k.$$
Let V_{n−k+1} be any (n − k + 1)-dimensional subspace. Then V_{n−k+1} ∩ U_k is nonzero. Choose y ∈ V_{n−k+1} ∩ U_k with ‖y‖ = 1. Since y ∈ U_k, we have y*Ay ≤ λ_k, and so
$$\min_{\substack{x \in V_{n-k+1} \\ x \neq 0}} \frac{x^*Ax}{x^*x} \leq \lambda_k.$$
The maximum value of the left-hand side occurs when V_{n−k+1} = R_k; this gives the value λ_k, thus proving the max-min equality (6.3).

Remark 6.19. If k = 1, then (6.2) gives $\lambda_1 = \min_{x \neq 0} \frac{x^*Ax}{x^*x}$, and if k = n, then (6.3) gives $\lambda_n = \max_{x \neq 0} \frac{x^*Ax}{x^*x}$, as we saw in Theorem 6.15.
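The min-max characterization (6.2) can be probed numerically: over a k-dimensional subspace with orthonormal basis Q, the maximum Rayleigh quotient is the top eigenvalue of the compression Q*AQ, and it is always at least λ_k, with equality on the span of the first k eigenvectors. A NumPy sketch with a random test matrix of our own:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_normal((5, 5))
A = (X + X.T) / 2                            # real symmetric, hence Hermitian
evals, V = np.linalg.eigh(A)                 # ascending eigenvalues
k = 3
lam_k = evals[k - 1]

def max_rayleigh(A, Q):
    """max of x*Ax/x*x over the column space of Q (orthonormal columns):
    the largest eigenvalue of the compression Q* A Q."""
    return np.linalg.eigvalsh(Q.T @ A @ Q)[-1]

# Random k-dimensional subspaces all give max Rayleigh quotient >= lam_k ...
for _ in range(100):
    Q, _ = np.linalg.qr(rng.standard_normal((5, k)))
    assert max_rayleigh(A, Q) >= lam_k - 1e-10

# ... and the span U_k of the first k eigenvectors attains lam_k exactly.
assert np.isclose(max_rayleigh(A, V[:, :k]), lam_k)
```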
6.4. Cauchy's Interlacing Theorem and Other Eigenvalue Inequalities

We now use Theorem 6.18 to obtain some inequalities about eigenvalues of Hermitian matrices. Any principal submatrix of a Hermitian matrix is Hermitian; Cauchy's interlacing theorem gives a relationship between the eigenvalues of the principal submatrices and the eigenvalues of the whole matrix. The proof below uses the maximin and minimax characterizations of Theorem 6.18; see Exercise 11 in this chapter for a different proof.

Theorem 6.20 (Cauchy [Cauc29]). Let A be an n × n Hermitian matrix with eigenvalues λ_1 ≤ λ_2 ≤ · · · ≤ λ_n. Let Â be a principal submatrix of A of order n − 1 with eigenvalues μ_1 ≤ μ_2 ≤ · · · ≤ μ_{n−1}. Then for each 1 ≤ k ≤ n − 1, we have λ_k ≤ μ_k ≤ λ_{k+1}, i.e.,
$$\lambda_1 \leq \mu_1 \leq \lambda_2 \leq \mu_2 \leq \lambda_3 \leq \cdots \leq \lambda_{n-1} \leq \mu_{n-1} \leq \lambda_n. \tag{6.4}$$
We say the eigenvalues of Â interlace those of A.

Proof. We use A_{[j]} to denote the (n − 1) × (n − 1) principal submatrix obtained by removing row and column j from A. We write x ∈ C^{n−1} as x = (x_1, . . . , x_{j−1}, x_{j+1}, . . . , x_n) and put
$$\hat{x} = (x_1, \ldots, x_{j-1}, 0, x_{j+1}, \ldots, x_n).$$
Then $x^*x = \hat{x}^*\hat{x}$ and $x^*A_{[j]}x = \hat{x}^*A\hat{x}$. If S is a subset of C^{n−1}, we use Ŝ to denote {x̂ : x ∈ S}.

Let U_k be a k-dimensional subspace of C^{n−1}. Then Û_k is a k-dimensional subspace of Cⁿ, and
$$\max_{\substack{x \in U_k \\ x \neq 0}} \frac{x^*A_{[j]}x}{x^*x} = \max_{\substack{\hat{x} \in \hat{U}_k \\ \hat{x} \neq 0}} \frac{\hat{x}^*A\hat{x}}{\hat{x}^*\hat{x}} = \max_{\substack{y \in \hat{U}_k \\ y \neq 0}} \frac{y^*Ay}{y^*y},$$
where in the last step we renamed x̂ as y to simplify notation.

Let μ_1 ≤ μ_2 ≤ · · · ≤ μ_{n−1} be the eigenvalues of A_{[j]}. We first use the min-max characterization (6.2) to show λ_k ≤ μ_k:
$$\mu_k = \min_{U_k} \max_{\substack{x \in U_k \\ x \neq 0}} \frac{x^*A_{[j]}x}{x^*x} = \min_{\hat{U}_k} \max_{\substack{y \in \hat{U}_k \\ y \neq 0}} \frac{y^*Ay}{y^*y} \geq \min_{V_k} \max_{\substack{y \in V_k \\ y \neq 0}} \frac{y^*Ay}{y^*y} = \lambda_k, \tag{6.5}$$
where in line (6.5) the minimum is over all k-dimensional subspaces of Cⁿ. Since the subspaces Û_k are only some of the k-dimensional subspaces of Cⁿ, we have
“greater than or equal to” rather than “equal”.

Now we use the max-min characterization (6.3) to show μk ≤ λk+1. Let Un−k denote any (n − k)-dimensional subspace of Cn−1. Then

μk = max_{Un−k} min_{0≠x∈Un−k} (x∗A[j]x)/(x∗x).

Now,

min_{0≠x∈Un−k} (x∗A[j]x)/(x∗x) = min_{0≠x̂∈Ûn−k} (x̂∗Ax̂)/(x̂∗x̂) = min_{0≠y∈Ûn−k} (y∗Ay)/(y∗y).

The spaces Ûn−k are only some of the (n − k)-dimensional subspaces of Cn. If we maximize over all (n − k)-dimensional subspaces Vn−k of Cn, we get

μk = max_{Ûn−k} min_{0≠y∈Ûn−k} (y∗Ay)/(y∗y) ≤ max_{Vn−k} min_{0≠y∈Vn−k} (y∗Ay)/(y∗y) = λk+1,

where we get λk+1 because Vn−k is an (n − k)-dimensional subspace of Cn and n − k = n − (k + 1) + 1. □

Corollary 6.21. If an n × n Hermitian matrix A has a repeated eigenvalue λ, then λ is an eigenvalue of A[j] for each j.

Proof. Let λ1 ≤ λ2 ≤ · · · ≤ λn be the eigenvalues of A, and suppose λk = λk+1 for some k. Let μ1 ≤ μ2 ≤ · · · ≤ μn−1 be the eigenvalues of A[j]. Since λk ≤ μk ≤ λk+1, we have λk = μk = λk+1. □

The proof of Corollary 6.21 in fact shows that if A has a repeated eigenvalue λ of multiplicity m, then λ is an eigenvalue of A[j] of multiplicity at least m − 1. Repeated application of Theorem 6.20 gives the following generalization.

Theorem 6.22. Let A be an n × n Hermitian matrix. Let λ1 ≤ λ2 ≤ · · · ≤ λn be the eigenvalues of A. Let Ar be any r × r principal submatrix of A; let the eigenvalues of Ar be λ1(Ar) ≤ λ2(Ar) ≤ · · · ≤ λr(Ar). Then λk ≤ λk(Ar) ≤ λk+n−r.

Proof. Let A = An, An−1, An−2, . . . , Ar be a sequence of principal submatrices of A such that An−j is obtained from An−j+1 by deleting a single row and column. From Theorem 6.20, λk(Ar+1) ≤ λk(Ar) ≤ λk+1(Ar+1). But we also have λk(Ar+2) ≤ λk(Ar+1) and λk+1(Ar+1) ≤ λk+2(Ar+2), so λk(Ar+2) ≤ λk(Ar) ≤ λk+2(Ar+2). Repeating the argument gives λk(Ar+3) ≤ λk(Ar) ≤ λk+3(Ar+3), and after n − r steps, we reach λk ≤ λk(Ar) ≤ λk+n−r. □

For computational problems, we want to know how small changes in the entries of a matrix (due to errors, or roundoff) affect the eigenvalues of the matrix. Here is a useful result for Hermitian matrices; as a perturbation theorem, one should think of B as having small entries.
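As a numerical sanity check (not part of the text), the interlacing inequalities of Theorem 6.20 can be verified with NumPy for a random Hermitian matrix; the matrix size, random seed, and deleted index below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M + M.conj().T) / 2            # Hermitian matrix

lam = np.linalg.eigvalsh(A)          # eigenvalues of A, ascending order
j = 2                                # delete row and column j (arbitrary)
Aj = np.delete(np.delete(A, j, axis=0), j, axis=1)
mu = np.linalg.eigvalsh(Aj)          # eigenvalues of A[j], ascending order

# Cauchy interlacing: lam[k] <= mu[k] <= lam[k+1] (0-based indices here)
interlaces = all(lam[k] <= mu[k] + 1e-12 and mu[k] <= lam[k + 1] + 1e-12
                 for k in range(n - 1))
print(interlaces)  # → True
```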
Theorem 6.23. Let A and B be n × n Hermitian matrices, and let C = A + B. Denote the eigenvalues of A, B, and C, respectively, as α1 ≤ α2 ≤ · · · ≤ αn ,
β1 ≤ β2 ≤ · · · ≤ βn , and
γ1 ≤ γ2 ≤ · · · ≤ γn .
Then

(6.6)   αk + β1 ≤ γk ≤ αk + βn.

Proof. We have

γk = min_{Vk} max_{0≠x∈Vk} (x∗(A + B)x)/(x∗x) = min_{Vk} max_{0≠x∈Vk} [ (x∗Ax)/(x∗x) + (x∗Bx)/(x∗x) ].

Since β1 ≤ (x∗Bx)/(x∗x) ≤ βn,

(x∗Ax)/(x∗x) + β1 ≤ (x∗Ax)/(x∗x) + (x∗Bx)/(x∗x) ≤ (x∗Ax)/(x∗x) + βn.

This gives

min_{Vk} max_{0≠x∈Vk} (x∗Ax)/(x∗x) + β1 ≤ γk ≤ min_{Vk} max_{0≠x∈Vk} (x∗Ax)/(x∗x) + βn,

and hence αk + β1 ≤ γk ≤ αk + βn. □
This inequality locates the eigenvalue γk of C = A + B in an interval about the eigenvalue αk of A. When β1 > 0, equation (6.6) tells us αk ≤ γk. Rewriting the inequality as β1 ≤ γk − αk ≤ βn and letting μ = max{|β1|, |βn|}, we get |γk − αk| ≤ μ. For an n × n matrix A, the 2-norm is defined as

‖A‖2 = max_{x≠0} ‖Ax‖/‖x‖ = max_{‖x‖=1} ‖Ax‖.

Part (4) of Theorem 6.15 tells us that when A is Hermitian, ‖A‖2 = max{|λ1|, |λn|}, giving the following corollary of Theorem 6.23.

Corollary 6.24. Let A, B be Hermitian matrices, and let C = A + B. Let the eigenvalues of A and C be α1 ≤ α2 ≤ · · · ≤ αn and γ1 ≤ γ2 ≤ · · · ≤ γn, respectively. Then |γk − αk| ≤ ‖B‖2.

We have here barely scratched the surface of the field of eigenvalue inequalities for Hermitian matrices; we refer the reader elsewhere for deeper results [HJ85, Stew73, Weyl49].
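The perturbation bound of Corollary 6.24 can also be checked numerically; the matrices below are arbitrary test data, with B scaled small as the discussion suggests.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

def rand_hermitian(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

A = rand_hermitian(n)
B = 0.01 * rand_hermitian(n)          # a small Hermitian perturbation
C = A + B

alpha = np.linalg.eigvalsh(A)          # ascending eigenvalues of A
gamma = np.linalg.eigvalsh(C)          # ascending eigenvalues of C = A + B
spec_norm_B = np.linalg.norm(B, 2)     # ||B||_2 = largest singular value

# each eigenvalue of C stays within ||B||_2 of the matching eigenvalue of A
assert np.all(np.abs(gamma - alpha) <= spec_norm_B + 1e-12)
```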
6.5. Positive Definite Matrices

An n × n Hermitian matrix A is said to be positive definite if x∗Ax > 0 for all nonzero x ∈ Cn. This term was introduced in the discussion of inner products. If x∗Ax ≥ 0 for all nonzero x ∈ Cn, we say A is positive semidefinite. Similarly, if x∗Ax < 0 for all nonzero x, we say A is negative definite, and if x∗Ax ≤ 0 for all nonzero x, we say A is negative semidefinite. Note that A is positive definite if
and only if (−A) is negative definite. We say A is indefinite if x∗Ax takes on both positive and negative values. For example, In is positive definite, −In is negative definite, and diag(1, −1) is indefinite.

Since x∗(A + B)x = x∗Ax + x∗Bx, the sum of two positive definite matrices is positive definite; similarly, the sum of two positive semidefinite matrices is positive semidefinite.

Theorem 2.13 of Chapter 2 gives several equivalent properties for a Hermitian matrix to be positive definite. We now extend the list.

Theorem 6.25. Let A be an n × n Hermitian matrix. The following are equivalent.
(1) x∗Ax > 0 for all nonzero x ∈ Cn.
(2) All of the eigenvalues of A are positive.
(3) A is conjunctive to the identity matrix In.
(4) A = P∗P for some nonsingular matrix P.

Proof. Let λ be an eigenvalue of A with associated eigenvector x; note x ≠ 0. Then x∗Ax = λ(x∗x). Since x∗x > 0, this shows that (1) implies (2). We have already seen, in Theorem 6.11, that (2) implies (3). If A is conjunctive to In, then A = P∗InP = P∗P for some nonsingular matrix P, so (3) implies (4). Finally, if A = P∗P for a nonsingular matrix P, then x∗Ax = x∗P∗Px = ‖Px‖². Since P is nonsingular, Px is nonzero for any x ≠ 0, and so (4) implies (1). □

Similar arguments prove the following version for semidefinite matrices.

Theorem 6.26. Let A be an n × n Hermitian matrix. Then the following are equivalent.
(1) x∗Ax ≥ 0 for all nonzero x ∈ Cn.
(2) All of the eigenvalues of A are nonnegative.
(3) A = P∗P for some square matrix P.

If A is positive definite, then det(A) > 0; this follows from the fact that the determinant of a matrix is the product of the eigenvalues. We now turn to another characterization of positive definite matrices in terms of principal submatrices. For an n × n Hermitian matrix A, let Ak denote the k × k principal submatrix formed from the first k rows and columns of A. Thus,

Ak = ⎛ a11 a12 · · · a1k ⎞
     ⎜ a21 a22 · · · a2k ⎟
     ⎜  ⋮   ⋮   ⋱   ⋮  ⎟
     ⎝ ak1 ak2 · · · akk ⎠.

For x = (x1, . . . , xk, 0, 0, . . . , 0) we have

x∗Ax = y∗Ak y,  where y = (x1, . . . , xk),
showing that if A is positive definite, then so is Ak. We now have the following characterization of positive definite matrices.

Theorem 6.27. An n × n Hermitian matrix A is positive definite if and only if det(Ak) > 0 for k = 1, . . . , n.

Proof. If A is positive definite, then each submatrix Ak is also positive definite, and hence det(Ak) > 0 for k = 1, . . . , n. To prove the converse, we use induction on n. The result clearly holds for the basis case, n = 1. Assume the result holds for Hermitian matrices of order n, and let A be of order n + 1, with det(Ak) > 0 for k = 1, . . . , n + 1. By the induction hypothesis, An is positive definite. Let 0 < λ1(An) ≤ λ2(An) ≤ · · · ≤ λn(An) be the eigenvalues of An and λ1(A) ≤ λ2(A) ≤ · · · ≤ λn(A) ≤ λn+1(A) be the eigenvalues of A. By the interlacing Theorem 6.20, we have λk(A) ≤ λk(An) ≤ λk+1(A) for 1 ≤ k ≤ n. Since 0 < λ1(An), this shows λi(A) > 0 for i ≥ 2. But det A > 0 and det A = λ1(A) λ2(A) · · · λn+1(A), so we must also have λ1(A) > 0. Hence, all of the eigenvalues of A are positive, and A is positive definite. □
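The leading-principal-minor criterion of Theorem 6.27 is easy to check numerically against the eigenvalue criterion of Theorem 6.25; the test matrix below is an arbitrary example chosen for illustration.

```python
import numpy as np

def leading_minors_positive(A):
    """Theorem 6.27 criterion: det(A_k) > 0 for every leading principal A_k."""
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))

# A classic positive definite matrix (symmetric tridiagonal with 2 on the
# diagonal and -1 off it); its leading minors are 2, 3, 4.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

by_minors = leading_minors_positive(A)
by_eigs = bool(np.all(np.linalg.eigvalsh(A) > 0))
print(by_minors, by_eigs)  # → True True
```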
Corollary 6.28. An n × n Hermitian matrix A is negative definite if and only if the sign of det Ak is (−1)^k for k = 1, . . . , n.

Proof. Note that A is negative definite if and only if (−A) is positive definite, and det(−A)k = (−1)^k det Ak. The result then follows from Theorem 6.27. □

Theorem 6.27 plays a role in the second derivative test for functions of several variables. Suppose f(x1, . . . , xn) is a real-valued, twice-differentiable function of the real variables x1, . . . , xn. Recall that p is a critical point of f if ∇f(p) = 0. To find local extrema, one finds the critical points and then tries to determine which are local minima, which are local maxima, and which are saddle points. The Hessian of f is the n × n symmetric matrix Hf that has the second-order partial derivative ∂²f/∂xi∂xj in entry i, j. Using the Taylor expansion of f at the critical point p, one can show that when Hf(p) is positive definite, then f has a local minimum at p. When Hf(p) is negative definite, f has a local maximum at p. And if Hf(p) is indefinite, then f has a saddle point at p. For the case n = 2, with the function written as f(x, y), the second derivative test is often stated in terms of the signs of ∂²f/∂x² and of the discriminant D = (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)²; these are just the determinants det A1 and det A2 for A = Hf.

We now define an ordering on the set of n × n Hermitian matrices.

Definition 6.29. For n × n Hermitian matrices A and B, we write A ⪰ B if A − B is positive semidefinite and A ≻ B if A − B is positive definite.

The following properties are easily verified.
• If A ⪰ B, then P∗AP ⪰ P∗BP for any n × n matrix P.
• If A ≻ B, then P∗AP ≻ P∗BP for any nonsingular n × n matrix P.
• If A ⪰ B and B ⪰ C, then A ⪰ C.
• If A ≻ B and B ≻ C, then A ≻ C.
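The second derivative test described above amounts to classifying the definiteness of the Hessian at a critical point. Here is a small illustrative sketch; the two functions and their Hessians at the origin are hypothetical examples, not taken from the text.

```python
import numpy as np

# Hessians at the critical point (0, 0) for two illustrative functions:
# f(x, y) = x**2 + 3*y**2  ->  Hf = [[2, 0], [0, 6]]  (positive definite)
# g(x, y) = x**2 - y**2    ->  Hg = [[2, 0], [0, -2]] (indefinite)
Hf = np.array([[2.0, 0.0], [0.0, 6.0]])
Hg = np.array([[2.0, 0.0], [0.0, -2.0]])

def classify(H):
    eigs = np.linalg.eigvalsh(H)
    if np.all(eigs > 0):
        return "local minimum"       # Hessian positive definite
    if np.all(eigs < 0):
        return "local maximum"       # Hessian negative definite
    if eigs.min() < 0 < eigs.max():
        return "saddle point"        # Hessian indefinite
    return "inconclusive"            # some zero eigenvalue

print(classify(Hf), classify(Hg))  # → local minimum saddle point
```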
6.6. Simultaneous Row and Column Operations

A symmetric or Hermitian matrix can be reduced to diagonal form with elementary row and column operations. This gives a way to compute the inertia of a Hermitian matrix without having to find the eigenvalues. It is also a useful tool for dealing with symmetric matrices over general fields.

To solve a system of linear equations, we can use elementary row operations to reduce the matrix of the system to a simpler form. Recall the three types of elementary row operation:
(1) For i ≠ j, add a multiple of row j to row i.
(2) Exchange two rows.
(3) Multiply a row by a nonzero scalar.

In Gaussian elimination, we use these row operations to reduce the matrix to row echelon form, which is upper triangular. Each elementary row operation can also be expressed as left multiplication by an elementary matrix. For example, if E = In + cEij, where Eij is the matrix with a one in position i, j and zeroes elsewhere, then the product EA is the matrix obtained by adding c times row j of A to row i of A. To exchange two rows i and j, use the permutation matrix obtained by exchanging rows i and j of In; to multiply row i by c, use the diagonal matrix with c in the ith diagonal entry and ones in the remaining diagonal entries.

By using row and column operations, we can reduce a symmetric matrix to diagonal form. Replacing the word “row” by “column”, we have three types of column operations:
(1) For i ≠ j, add a multiple of column j to column i.
(2) Exchange two columns.
(3) Multiply a column by a nonzero scalar.

Each of these column operations can be expressed as right multiplication by an elementary matrix. For example, with E = In + cEij, the product AE^T will add c times column j of A to column i. Note that E^T = In + cEji is also an elementary matrix. For the other two types of column operation, use the same matrices as we used for the row operations; note these matrices are symmetric.

Let A be an n × n symmetric matrix over a field F.
If a11 ≠ 0, we can add multiples of the first row to the remaining rows to create zeros in positions 2 through n of the first column. These operations will not change any entries in the first row of A. Since A is symmetric, the corresponding column operations, using exactly the same multipliers used for the rows, will create zeros in positions 2 through n of the first row. Hence, we have a sequence E2, . . . , En of elementary matrices such that En En−1 · · · E2 A E2^T · · · En−1^T En^T has the form

(6.7)   ⎛ a11  0 · · · 0 ⎞
        ⎜  0            ⎟
        ⎜  ⋮     A1     ⎟
        ⎝  0            ⎠,
where A1 has order (n − 1). If P1 = En En−1 · · · E2, then

En En−1 · · · E2 A E2^T · · · En−1^T En^T = P1 A P1^T,
which is symmetric, because A is symmetric. Therefore, A1 is symmetric.

If a11 = 0 but some diagonal entry aii is not zero, exchange row i and row 1, and also column i and column 1, to get aii in position (1, 1), and proceed as above. If all of the diagonal entries are zero, but A ≠ 0, choose a nonzero element aij of A. Since aij = aji, adding row j of A to row i, and then column j of A to column i, will result in a symmetric matrix with 2aij in position (i, i). Provided the field F is not of characteristic 2, we will then have a nonzero entry on the diagonal and can again proceed as above.

So, we have shown that if char(F) ≠ 2, we can reduce a symmetric matrix A to the form shown in equation (6.7) by elementary row and column operations, where each row operation is paired with the corresponding column operation. We then repeat the process on the smaller matrix A1, to obtain something of the form

⎛ a11  0   0 · · · 0 ⎞
⎜  0  a22  0 · · · 0 ⎟
⎜  0   0            ⎟
⎜  ⋮   ⋮     A2     ⎟
⎝  0   0            ⎠,

with A2 being symmetric. We can repeat this process until we have a diagonal matrix.

Theorem 6.30. Let A be a symmetric matrix over a field F with char(F) ≠ 2. Then there is a sequence of elementary matrices E1, . . . , Em such that

Em Em−1 · · · E2 E1 A E1^T E2^T · · · Em−1^T Em^T = D

is diagonal. Putting P = Em Em−1 · · · E2 E1, the matrix P is nonsingular and P A P^T = D.

Thus, we can reduce any symmetric matrix to diagonal form by elementary row and column operations, provided the characteristic of the field is not two. If F = R, we can then use a diagonal congruence to turn the nonzero entries of D into ±1’s, and so, using row and column exchanges, if necessary, reduce A to the form Ip ⊕ (−Iq) ⊕ 0t, where i(A) = (p, q, t) is the inertia of A. However, the inertia of A can be seen directly from the matrix D, by noting the number of positive, negative, and zero diagonal entries.

Example 6.31. Let
A = ⎛  1  2 −1 ⎞
    ⎜  2  3  0 ⎟
    ⎝ −1  0  7 ⎠.

Add (−2) times row one to row two, and then add row one to row three, to get

A → ⎛ 1  2 −1 ⎞
    ⎜ 0 −1  2 ⎟
    ⎝ 0  2  6 ⎠.
Now add (−2) times column one to column two, and then add column one to column three, to get a matrix of the form in (6.7):

⎛ 1  0  0 ⎞
⎜ 0 −1  2 ⎟
⎝ 0  2  6 ⎠.

Adding 2 times row two to row three, followed by the corresponding column operation, then results in the diagonal matrix

⎛ 1  0  0 ⎞
⎜ 0 −1  0 ⎟
⎝ 0  0 10 ⎠.

There are two positive numbers on the diagonal, and one negative number, so i(A) = (2, 1, 0).

For Hermitian matrices, a similar process applies, but we must modify the column operation corresponding to the first type of row operation. If we add c times row j of A to row i, then we need to add c̄ times column j of A to column i. So, when we use the elementary matrix E = In + cEij for the row operation, we use E∗ = In + c̄Eji for the corresponding column operation. Also, if we multiply row i by a nonzero scalar c, then we must multiply column i by c̄ to preserve the Hermitian form. Here is the Hermitian version of Theorem 6.30.

Theorem 6.32. Let A be a complex Hermitian matrix. Then there is a sequence of elementary matrices E1, . . . , Em such that

Em Em−1 · · · E2 E1 A E1∗ E2∗ · · · Em−1∗ Em∗ = D

is diagonal. Putting P = Em Em−1 · · · E2 E1, the matrix P is nonsingular and P A P∗ = D.

In some cases, the entries of the diagonal matrix D can be expressed in terms of determinants of the leading principal submatrices of A. Note that the first type of row (column) operation, in which we add a multiple of one row (column) to another, does not change the determinant of the matrix, or the determinant of any submatrix which contains both of the rows (columns) involved. Suppose we are able to reduce the matrix to diagonal form by using only row operations of the following type: add a multiple of row i to row j where i < j, together with the corresponding column operations. These operations will not change the determinant of Ak, the principal submatrix formed by the first k rows and columns of A, for k = 1, . . . , n. Hence, in this case, we have det Ak = d1 d2 · · · dk for k = 1, . . . , n. When these determinants are all nonzero, we get the formula

dk = det Ak / det Ak−1.
Example 6.33. In Example 6.31 we have det A1 = 1, det A2 = −1, det A3 = −10, with d1 = 1, d2 = −1, and d3 = 10.

For small matrices (2 × 2 and 3 × 3), this can be a useful method for finding the inertia of a Hermitian matrix; for larger matrices, the row operations are generally more efficient.
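As a cross-check (not part of the text), the inertia i(A) = (2, 1, 0) found for the matrix of Example 6.31 can be confirmed from the signs of the eigenvalues, which congruence preserves by Sylvester's law of inertia.

```python
import numpy as np

# The matrix of Example 6.31.
A = np.array([[ 1.0, 2.0, -1.0],
              [ 2.0, 3.0,  0.0],
              [-1.0, 0.0,  7.0]])

eigs = np.linalg.eigvalsh(A)
inertia = (int(np.sum(eigs > 1e-10)),          # positive eigenvalues
           int(np.sum(eigs < -1e-10)),         # negative eigenvalues
           int(np.sum(np.abs(eigs) <= 1e-10))) # zero eigenvalues
print(inertia)  # → (2, 1, 0)
```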
6.7. Hadamard’s Determinant Inequality

We now prove a theorem of Hadamard about the determinant of a positive definite matrix and a generalization due to Fischer.

Theorem 6.34. Let A be an n × n positive semidefinite matrix. Then det A ≤ a11 a22 · · · ann.

Proof. Since A is positive semidefinite, A = P∗P for some matrix P of order n. Letting Pj denote column j of P, we have ajj = ‖Pj‖². Now, det A = |det P|², and from Theorem 2.33 we know that |det P| ≤ ‖P1‖ ‖P2‖ · · · ‖Pn‖. So,

det A = |det P|² ≤ ‖P1‖² ‖P2‖² · · · ‖Pn‖² = a11 a22 · · · ann. □

Corollary 6.35. Let A be an n × n positive semidefinite matrix with eigenvalues λ1 ≤ λ2 ≤ · · · ≤ λn. Let ai1i1, . . . , aikik be any k diagonal entries of A, where the indices i1, . . . , ik are distinct. Then

λ1 λ2 · · · λk ≤ ai1i1 ai2i2 · · · aikik.

Proof. Let Ak denote the principal submatrix formed with rows and columns i1, . . . , ik, and let λi(Ak), i = 1, . . . , k, be the eigenvalues of Ak. From the interlacing Theorem 6.20 we know λ1 λ2 · · · λk ≤ λ1(Ak) λ2(Ak) · · · λk(Ak) = det Ak, and from the Hadamard inequality (Theorem 6.34) we have det Ak ≤ ai1i1 ai2i2 · · · aikik. Combining these inequalities, λ1 λ2 · · · λk ≤ det Ak ≤ ai1i1 ai2i2 · · · aikik. □
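Hadamard's inequality (Theorem 6.34) is easy to check numerically; the positive semidefinite matrix below is arbitrary test data, built as P^T P as in Theorem 6.26.

```python
import numpy as np

rng = np.random.default_rng(2)
P = rng.standard_normal((4, 4))
A = P.T @ P                           # positive semidefinite by construction

det_A = np.linalg.det(A)
diag_prod = float(np.prod(np.diag(A)))

# Hadamard: det A <= product of diagonal entries
print(det_A <= diag_prod + 1e-9)  # → True
```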
For the proof of the generalization of Theorem 6.34 we need the following fact.

Theorem 6.36. If A and B are positive semidefinite matrices with A ⪰ B, then det A ≥ det B.

Proof. Let U be a unitary matrix such that U∗AU = diag(λ1, . . . , λn), where λ1, . . . , λn are the eigenvalues of A. Then U∗AU ⪰ U∗BU, so diag(λ1, . . . , λn) − U∗BU
is positive semidefinite, and hence has nonnegative diagonal entries. Let d1, . . . , dn denote the diagonal entries of U∗BU. Then λi ≥ di for i = 1, . . . , n. Since B is positive semidefinite, λi ≥ di ≥ 0, and hence λ1 λ2 · · · λn ≥ d1 d2 · · · dn. Theorem 6.34 tells us d1 d2 · · · dn ≥ det(U∗BU) = det B, and so det A = λ1 λ2 · · · λn ≥ det B. □

Theorem 6.37 (Fischer [Fisc07]). Let A be a positive definite matrix and partition A as

A = ⎛ B   C ⎞
    ⎝ C∗  D ⎠,

where B and D are square. Then det A ≤ det B det D.

Proof. Let B be k × k. Since A is positive definite, so is B, and hence B is invertible. Also, B is Hermitian, so B∗ = B. Put

P = ⎛ Ik          0    ⎞
    ⎝ −C∗B^{-1}  In−k ⎠.

Then

P A P∗ = ⎛ Ik          0    ⎞ ⎛ B   C ⎞ ⎛ Ik  −B^{-1}C ⎞
         ⎝ −C∗B^{-1}  In−k ⎠ ⎝ C∗  D ⎠ ⎝ 0     In−k    ⎠
       = ⎛ B       C         ⎞ ⎛ Ik  −B^{-1}C ⎞
         ⎝ 0  D − C∗B^{-1}C ⎠ ⎝ 0     In−k    ⎠
       = ⎛ B       0         ⎞
         ⎝ 0  D − C∗B^{-1}C ⎠.

Since det P = 1, we have det A = det(P A P∗) = det B det(D − C∗B^{-1}C). Now D − C∗B^{-1}C is positive definite because P A P∗ is positive definite. Also, D ⪰ D − C∗B^{-1}C, since D − (D − C∗B^{-1}C) = C∗B^{-1}C is positive semidefinite, because B^{-1} is positive definite. Hence, by Theorem 6.36, we have det D ≥ det(D − C∗B^{-1}C). Therefore, det A = det B det(D − C∗B^{-1}C) ≤ det B det D. □
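Both the Schur-complement determinant identity used in the proof and Fischer's inequality itself can be checked numerically; the positive definite matrix and split point below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5))
A = M.T @ M + 0.1 * np.eye(5)          # positive definite test matrix

k = 2                                  # arbitrary block split
B, C, D = A[:k, :k], A[:k, k:], A[k:, k:]

# det A = det B * det(D - C* B^{-1} C)   (Schur complement identity)
schur = D - C.T @ np.linalg.inv(B) @ C
assert np.isclose(np.linalg.det(A), np.linalg.det(B) * np.linalg.det(schur))

# Fischer: det A <= det B * det D
assert np.linalg.det(A) <= np.linalg.det(B) * np.linalg.det(D) + 1e-9
```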
6.8. Polar Factorization and Singular Value Decomposition

In this section we obtain the polar factorization for nonsingular matrices and use it to obtain another famous factorization, the singular value decomposition. Some may feel we are going about this backwards; that it is easier to get the singular value decomposition first, and then use it to get the polar decomposition. We will do this in Chapter 8, which includes the singular value decomposition for m × n matrices and examines it in more detail. Our reason for discussing these factorizations here is the connection with Hermitian matrices.

We begin by seeing how to find a square root of a positive semidefinite Hermitian matrix K. Let U be a unitary matrix which diagonalizes K and put U∗KU = diag(λ1, . . . , λn). Since each λi ≥ 0, it has a real, nonnegative square root. Put D = diag(√λ1, . . . , √λn). Then K = UD²U∗ = (UDU∗)², so UDU∗ is a square root of K. Note that UDU∗ is positive semidefinite, and if K is positive definite, then so is UDU∗. We will show that a positive semidefinite matrix has a unique positive semidefinite square root.

Lemma 6.38. Let D = λ1Im1 ⊕ λ2Im2 ⊕ · · · ⊕ λkImk, where λ1, . . . , λk are distinct and m1 + m2 + · · · + mk = n. Then an n × n matrix A commutes with D if and only if A has the form A = A1 ⊕ A2 ⊕ · · · ⊕ Ak, where Ai is mi × mi.
Proof. Partition A conformally with D, and let Aij denote block (i, j), of size mi × mj. If AD = DA, we have λjAij = λiAij. Since λi ≠ λj when i ≠ j, all off-diagonal blocks of A must be zero and A is the direct sum of the diagonal blocks Aii. Put Ai = Aii to get A = A1 ⊕ A2 ⊕ · · · ⊕ Ak. Conversely, if A = A1 ⊕ A2 ⊕ · · · ⊕ Ak where Ai is mi × mi, then AD = DA. □

Lemma 6.39. Let A be a diagonalizable matrix with nonnegative eigenvalues λ1, . . . , λn. Then B commutes with A if and only if B commutes with A².

Proof. Certainly if B commutes with A, then B commutes with A². For the converse, assume B commutes with A². Since A and B commute if and only if S^{-1}AS and S^{-1}BS commute, we may, without loss of generality, assume that A is already diagonal, thus A = λ1Im1 ⊕ λ2Im2 ⊕ · · · ⊕ λkImk, where λ1, . . . , λk are the distinct eigenvalues of A with multiplicities m1, . . . , mk. Since λ1, . . . , λk are nonnegative real numbers and are all different, the numbers λ1², λ2², . . . , λk² are also all different. Since B commutes with A² = λ1²Im1 ⊕ λ2²Im2 ⊕ · · · ⊕ λk²Imk, Lemma 6.38 tells us B = B1 ⊕ B2 ⊕ · · · ⊕ Bk, where Bi is mi × mi, and hence B commutes with A. □

Theorem 6.40. If H and K are positive semidefinite Hermitian matrices and H² = K², then H = K.

Proof. Let λ1, . . . , λk be the distinct eigenvalues of H, let mi be the multiplicity of λi, and let D = λ1Im1 ⊕ λ2Im2 ⊕ · · · ⊕ λkImk. Then H² has eigenvalues λ1², . . . , λk² with multiplicities m1, . . . , mk. Since H² = K² and K is also positive semidefinite, the eigenvalues of K must also be λ1, . . . , λk with multiplicities m1, . . . , mk. Hence, there exist unitary matrices U and V such that H = UDU∗ and K = VDV∗. Then H² = K² gives UD²U∗ = VD²V∗, and so (V∗U)D² = D²(V∗U). Thus, V∗U commutes with D². Lemma 6.38 then tells us that V∗U = A1 ⊕ A2 ⊕ · · · ⊕ Ak, where Ai is mi × mi, and so V∗U commutes with D.
Hence, H = U DU ∗ = (V V ∗ )(U DU ∗ )(V V ∗ ) = V (V ∗ U )D(U ∗ V )V ∗ = V D(V ∗ U U ∗ V )V ∗ = V DV ∗ = K.
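The diagonalization construction of the positive semidefinite square root described above can be carried out numerically; the test matrix below is arbitrary, and tiny negative eigenvalues from roundoff are clipped to zero.

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
K = M.conj().T @ M                     # positive semidefinite Hermitian

# K = U D^2 U*, so H = U D U* is a positive semidefinite square root of K.
lam, U = np.linalg.eigh(K)
H = U @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ U.conj().T

assert np.allclose(H @ H, K)                     # H^2 = K
assert np.allclose(H, H.conj().T)                # H is Hermitian
assert np.all(np.linalg.eigvalsh(H) >= -1e-10)   # H is positive semidefinite
```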
The polar form of a complex number z is ρe^{iθ}, where ρ = |z| and 0 ≤ θ < 2π. For nonzero ρ, the angle θ is uniquely determined by z. In the matrix analogue of this statement, a nonsingular matrix plays the role of the complex number z, a positive definite matrix serves as the modulus ρ, and a unitary matrix replaces e^{iθ}.

Theorem 6.41. Let A be an n × n nonsingular complex matrix. Then there exists a unitary matrix U and a positive definite Hermitian matrix H such that A = UH. Both U and H are uniquely determined by A.

Proof. Let K = A∗A. Then K is positive definite and hence has a positive definite square root, H. Thus, we have H² = A∗A. Now set U = AH^{-1}; we claim that U is unitary. For, we have U∗U = (AH^{-1})∗(AH^{-1}) = (H∗)^{-1}A∗AH^{-1} = H^{-1}H²H^{-1} = I. So A = UH, where U is unitary and H is positive definite.
To prove uniqueness, suppose A = U1H1 = U2H2, where the Ui’s are unitary and the Hi’s are positive definite Hermitian. Then A∗A = H1² = H2², so Theorem 6.40 tells us H1 = H2. Put H = H1 = H2; then U1H = U2H. Since H is nonsingular, we can multiply on the right by H^{-1} to get U1 = U2. □

The factorization A = UH of Theorem 6.41 is called the polar decomposition. It can also be done in the reverse order with the Hermitian factor on the left and the unitary factor on the right. This version is obtained by applying the same proof to the positive definite matrix AA∗; we leave the details as an exercise.

Theorem 6.42. Let A be an n × n nonsingular complex matrix. Then there exists a unitary matrix U and a positive definite Hermitian matrix H such that A = HU. Both U and H are uniquely determined by A.

We will deal with the polar decomposition for a singular matrix later, but note here that in the singular case the polar decomposition is not unique. This is analogous to the fact that for the complex number zero, the polar form has ρ = 0, but any value of θ may be used for the e^{iθ} factor.

Example 6.43. Let

A = ⎛ 1 0 ⎞
    ⎝ 0 0 ⎠.

Then we can take H = A, and for both

U1 = ⎛ 1 0 ⎞   and   U2 = ⎛ 1  0 ⎞
     ⎝ 0 1 ⎠              ⎝ 0 −1 ⎠,

we have A = U1H = U2H. In fact, for any

U = ⎛ 1    0    ⎞
    ⎝ 0  e^{iθ} ⎠,

we have A = UH.

We can now add another necessary and sufficient condition for a matrix to be normal.

Theorem 6.44. Suppose A has a polar decomposition A = UH, where U is unitary and H is positive semidefinite. Then A is normal if and only if UH = HU.

Proof. Let A = UH, where U is unitary and H is positive semidefinite Hermitian. Then A∗A = H² and AA∗ = UH²U∗. If U and H commute, then AA∗ = UH²U∗ = H²UU∗ = H² = A∗A, and so A is normal. To prove the converse, suppose A is normal. Then we have H² = UH²U∗, so H²U = UH², i.e., U commutes with H². But H has nonnegative eigenvalues and is diagonalizable, so Lemma 6.39 tells us U commutes with H. □
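The polar factors of Theorem 6.41 can be computed numerically; the sketch below extracts them from NumPy's built-in singular value decomposition (anticipating Theorem 6.45), and the test matrix is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
assert abs(np.linalg.det(A)) > 1e-8           # nonsingular (almost surely)

# From A = W diag(s) Vh, the polar factors are U = W Vh, H = Vh* diag(s) Vh,
# since U H = W Vh Vh* diag(s) Vh = W diag(s) Vh = A.
W, s, Vh = np.linalg.svd(A)
U = W @ Vh
H = Vh.conj().T @ np.diag(s) @ Vh

assert np.allclose(U @ H, A)                  # A = U H
assert np.allclose(U.conj().T @ U, np.eye(4)) # U is unitary
assert np.all(np.linalg.eigvalsh(H) > 0)      # H is positive definite
```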
From the polar decomposition, we can obtain the singular value decomposition. Theorem 6.45. Let A be an n × n nonsingular complex matrix. Then there exist unitary matrices U and V and a diagonal matrix D, with positive diagonal entries, such that A = U DV . Proof. Let A = W H be the polar decomposition of A. Since H is Hermitian positive deﬁnite, there is a unitary matrix V and a diagonal matrix D, with positive diagonal entries, such that H = V ∗ DV . So A = W V ∗ DV . Put U = W V ∗ . Then U is unitary and we have A = U DV .
The factorization in Theorem 6.45 is called the singular value decomposition. Since A∗A = V∗DU∗UDV = V∗D²V, the diagonal entries of D are the square roots of the eigenvalues of A∗A. These numbers are called the singular values of A. In the next chapter we will extend the singular value decomposition to cover singular and nonsquare matrices. If A is m × n, then the positive semidefinite Hermitian matrix A∗A has n nonnegative eigenvalues, which we may denote as σ1², σ2², . . . , σn², and order so that σ1² ≥ σ2² ≥ · · · ≥ σn². The singular values of A are the numbers σ1 ≥ σ2 ≥ · · · ≥ σn.

Definition 6.46. Let A be an m × n complex matrix. Let σ1² ≥ σ2² ≥ · · · ≥ σn² be the eigenvalues of A∗A. The singular values of A are the numbers σ1, . . . , σn.

Alternatively, one may define the singular values to be the square roots of the eigenvalues of AA∗; note that the nonzero eigenvalues of AA∗ and A∗A are the same. Since A, A∗A, and AA∗ all have the same rank, if r = rank(A), then σ1, σ2, . . . , σr are nonzero, and the remaining singular values are zero. One may also define the singular values to be just the list of nonzero values σ1, σ2, . . . , σr. We leave it as an exercise to show that ‖A‖F² = σ1² + σ2² + · · · + σn².

Theorem 6.47. Let A be an n × n complex matrix with eigenvalues λ1, . . . , λn ordered so that |λ1| ≥ |λ2| ≥ · · · ≥ |λn|, and with singular values σ1, . . . , σn. Then A is normal if and only if |λi| = σi for each i = 1, . . . , n. Also, A is positive semidefinite Hermitian if and only if λi = σi for each i = 1, . . . , n.

Proof. Let D be the diagonal matrix diag(λ1, . . . , λn). Suppose A is normal. Then A = U∗DU for some unitary matrix U. Hence, A∗A = U∗D̄DU = U∗ diag(|λ1|², . . . , |λn|²) U, and the singular values of A are the numbers |λ1|, |λ2|, . . . , |λn|. Conversely, suppose |λi| = σi for i = 1, . . . , n. Then ‖A‖F² = σ1² + · · · + σn² = |λ1|² + · · · + |λn|², so by Theorem 5.10, A is normal.

The last part follows from the fact that a normal matrix is positive semidefinite Hermitian if and only if all of its eigenvalues are nonnegative real numbers. □

In the proof of Theorem 6.45 the U and V are not unique. The diagonal entries of D are the singular values of A, so these numbers are determined by A. When A is positive semidefinite Hermitian, we may take V = U∗.
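The first equivalence of Theorem 6.47 can be checked numerically by building a normal matrix directly from a unitary diagonalization; the eigenvalues and seed below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
# A unitary Q from the QR factorization of a random complex matrix.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
c = rng.standard_normal(4) + 1j * rng.standard_normal(4)
A = Q @ np.diag(c) @ Q.conj().T       # normal by construction

sing = np.linalg.svd(A, compute_uv=False)          # descending order
abs_eigs = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]

# Theorem 6.47: for a normal matrix, sigma_i = |lambda_i|
assert np.allclose(sing, abs_eigs)
```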
Exercises

1. Let V be n-dimensional; let A be an n × n matrix over F. Define ⟨x, y⟩ = y^T Ax. Show this is a bilinear form. Show it is symmetric if and only if the matrix A is symmetric.

2. Let V = Cn, and let A be an n × n complex matrix. Define ⟨x, y⟩ = y∗Ax. Show this is a conjugate bilinear form. Show it is conjugate symmetric if and only if A is Hermitian (i.e., A = A∗).
3. Show that congruence and conjunctivity are equivalence relations.

4. Let P be nonsingular. Show that A is symmetric if and only if P^T AP is symmetric. Give an example of an n × n matrix A and a singular n × n matrix P such that P^T AP is symmetric but A is not. (Of course, P = 0 will work, but try for something more interesting.)

5. Let P be a nonsingular complex matrix. Show that A is Hermitian if and only if P∗AP is Hermitian.

6. Prove that the diagonal entries of a positive definite matrix are positive. Prove that the diagonal entries of a positive semidefinite matrix are nonnegative.

7. Suppose A is a positive semidefinite matrix and aii = 0. Prove that every entry of row i and of column i must be zero.

8. Let A be an n × n Hermitian matrix with eigenvalues λ1 ≤ λ2 ≤ · · · ≤ λn. Prove that if aii = λ1 for some i, then every other entry of row and column i is zero. Similarly, prove that if aii = λn for some i, then every other entry of row and column i is zero.

9. Suppose A and B are Hermitian matrices of the same size, and suppose A is positive definite. Show there is a nonsingular matrix P such that P∗AP = I and P∗BP is diagonal.

10. Determine the inertia of each of the following matrices. Try to do this with a minimum of calculation.

(a) ⎛ 1 4 ⎞
    ⎝ 4 1 ⎠

(b) ⎛ 3 1 1 1 ⎞
    ⎜ 1 3 1 1 ⎟
    ⎜ 1 1 3 1 ⎟
    ⎝ 1 1 1 3 ⎠

(c) ⎛ 1 2 3 4 ⎞
    ⎜ 2 5 1 1 ⎟
    ⎜ 3 1 3 1 ⎟
    ⎝ 4 1 1 3 ⎠

(d) ⎛ 17  2  3   4 ⎞
    ⎜  2 36  1   1 ⎟
    ⎜  3  1 85   8 ⎟
    ⎝  4  1  8 100 ⎠

(Hint: Remember the Geršgorin theorem.)

11. This exercise outlines an alternative proof of Theorem 6.20, which can be found in [StewSun90, pp. 197–198] and [Parl80, Chapter 10], where it is attributed to Kahan ([Kah67]).
Let A be an n × n Hermitian matrix with eigenvalues λ1 ≤ λ2 ≤ · · · ≤ λn, and let B be a principal submatrix of A of order n − 1 which has eigenvalues μ1 ≤ μ2 ≤ · · · ≤ μn−1. Without loss of generality, we may assume B is the leading principal submatrix of A, so that A has the form

A = ⎛ B   a ⎞
    ⎝ a∗  α ⎠

for some a ∈ Cn−1. The proof is by contradiction. Suppose

λ1 ≤ μ1 ≤ λ2 ≤ μ2 ≤ λ3 ≤ · · · ≤ λn−1 ≤ μn−1 ≤ λn

does not hold. Then for some i, either μi < λi or λi+1 < μi.

(a) Let τ be a real number and assume B − τI is nonsingular. Put

P = ⎛ I  −(B − τI)^{-1}a ⎞
    ⎝ 0         1        ⎠

and show that

P∗(A − τI)P = ⎛ B − τI  0 ⎞
              ⎝   0     γ ⎠,   where γ = α − τ − a∗(B − τI)^{-1}a.

(b) Put H = ⎛ B − τI  0 ⎞
            ⎝   0     γ ⎠.

Part 11(a) tells us H is conjunctive to A − τI.

(c) Suppose μi < λi. Let μi < τ < λi. Show that A − τI has at most (i − 1) negative eigenvalues, but H has at least i negative eigenvalues. Since H is conjunctive to A − τI, this is a contradiction.

(d) Now suppose λi+1 < μi. Let λi+1 < τ < μi. Show that A − τI has at least (i + 1) negative eigenvalues, but H has at most i negative eigenvalues. So again, we have a contradiction.
12. Give examples of square matrices A and B such that B commutes with A² but not with A.

13. Prove Theorem 6.42.

14. Let the n × m complex matrix A have singular values σ1, σ2, . . . , σn. Prove that ‖A‖F² = σ1² + σ2² + · · · + σn².

15. Let A be an n × n complex matrix with eigenvalues λ1, . . . , λn and singular values σ1, . . . , σn. Show that σ1² + · · · + σn² ≥ |λ1|² + · · · + |λn|².

16. Let A be an n × n complex matrix with singular value decomposition A = UDV. Show that A is normal if and only if VU commutes with D.

17. Let A and B be n × n complex matrices. Show that there is a unitary matrix U such that B = UA if and only if A∗A = B∗B.

18. The proof of Theorem 6.34 used Theorem 2.33, |det P| ≤ ‖P1‖ ‖P2‖ · · · ‖Pn‖. Show that the two results are equivalent by using Theorem 6.34 to prove Theorem 2.33.
19. Householder ([Hou64, page 3]) defines an elementary matrix to be one of the form $E(u, v; \sigma) = I - \sigma uv^*$, where u, v are column vectors in $\mathbb{C}^n$ and σ is a scalar.
(a) Show how to choose u, v, and σ to get each of the three types of elementary matrices defined in Section 6.6.
(b) Show how to choose u, v, and σ to get the Householder matrices defined in Exercise 18 of Chapter 2.
(c) Show that $E(u, v; \sigma)E(u, v; \tau) = E(u, v; \gamma)$, where $\gamma = \sigma + \tau - \sigma\tau v^*u$. Hence, for nonzero σ, τ satisfying $\sigma^{-1} + \tau^{-1} = v^*u$, we have $(E(u, v; \sigma))^{-1} = E(u, v; \tau)$.
(d) Show that for nonzero u, v, and σ, the matrix $E(u, v; \sigma)$ has an (n − 1)-dimensional eigenspace corresponding to the eigenvalue 1; this eigenspace is the hyperplane orthogonal to v.
(e) Use the previous part and the trace to find the other eigenvalue of E.
(f) Show that $\det(E(u, v; \sigma)) = 1 - \sigma v^*u$.
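The identities in Exercise 19 are easy to spot-check numerically. The following sketch (an editorial addition, not part of the text; numpy assumed) verifies parts (c) and (f) for random complex vectors:

```python
# Numerical sanity check of Householder's elementary-matrix identities
# in Exercise 19, using random complex vectors.
import numpy as np

rng = np.random.default_rng(0)
n = 4
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def E(u, v, s):
    """Elementary matrix E(u, v; sigma) = I - sigma u v*."""
    return np.eye(len(u)) - s * np.outer(u, v.conj())

vu = v.conj() @ u          # the scalar v* u
sigma = 0.7

# Part (f): det(E(u, v; sigma)) = 1 - sigma v* u.
assert np.isclose(np.linalg.det(E(u, v, sigma)), 1 - sigma * vu)

# Part (c): choosing tau with sigma^{-1} + tau^{-1} = v* u makes
# E(u, v; tau) the inverse of E(u, v; sigma).
tau = 1 / (vu - 1 / sigma)
assert np.allclose(E(u, v, sigma) @ E(u, v, tau), np.eye(n))
```
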
Chapter 7
Vector and Matrix Norms
This short chapter contains some basic material on vector and matrix norms. More comprehensive treatments may be found elsewhere, for example [Stew73, Chapter 4], [HJ85, Chapter 5].
7.1. Vector Norms
A norm on a vector space V is basically a length function. Starting with a norm, we can then define distance and regard V as a metric space, with the metric space topology.
Definition 7.1. Let V be a vector space over $\mathbb{C}$ or $\mathbb{R}$. A norm on V is a real-valued function $\nu : V \to \mathbb{R}$ that satisfies the following properties.
(1) For any nonzero x in V, we have $\nu(x) > 0$.
(2) For any scalar α and any vector x, we have $\nu(\alpha x) = |\alpha|\nu(x)$.
(3) For any x and y in V, we have $\nu(x + y) \le \nu(x) + \nu(y)$.
As a consequence of (2), we get $\nu(0) = 0$. Property (3) is the triangle inequality. Given a norm ν on V, we can define a distance function on V by
(7.1) $d(x, y) = \nu(x - y)$.
We leave it as an exercise to show that (7.1) satisfies the required properties for a distance function and that V is a metric space under the distance d. We then have the usual metric space topology on V. Also left as an exercise to the reader is the inequality
(7.2) $|\nu(y) - \nu(x)| \le \nu(y - x)$,
used in one of the proofs below. The most familiar example of a norm on $\mathbb{C}^n$ is the usual Euclidean length function: set $\nu(x) = \|x\| = \left(\sum_{i=1}^n |x_i|^2\right)^{1/2}$. More generally, for any inner product
space V with inner product $\langle\ ,\ \rangle$, the function $\nu(x) = \langle x, x\rangle^{1/2}$ defines a norm on V. However, there are other norms of interest. For a general norm ν, we sometimes denote $\nu(x)$ as $\|x\|_{\text{subscript}}$, where "subscript" may be either ν or some other notation identifying the norm. Here are some commonly used norms on $\mathbb{C}^n$.
(1) The sum norm is $\|x\|_1 = \sum_{i=1}^n |x_i|$.
(2) The max norm is $\|x\|_\infty = \max_{1 \le i \le n} |x_i|$.
(3) For any $p \ge 1$, the $l_p$ norm is $\|x\|_p = \left(\sum_{i=1}^n |x_i|^p\right)^{1/p}$.
For all of these, properties (1) and (2) clearly hold; the nontrivial thing to check is the triangle inequality. For the sum and max norms, this follows immediately from the corresponding triangle inequality for complex numbers: $|z + w| \le |z| + |w|$. The proof is harder for the $l_p$ norm, where the triangle inequality is known as Minkowski's inequality. Note that p = 2 gives the usual Euclidean norm; in that case, the Cauchy–Schwarz inequality may be used to prove the triangle inequality. Note also that p = 1 gives the sum norm, and if we let $p \to \infty$, we get the max norm; hence the notation $\|x\|_\infty$ for the max norm.
For a vector space V with norm ν, we say a sequence of vectors $\{x_j\}_{j=1}^\infty$ converges to y in the norm if $\lim_{j\to\infty} \nu(y - x_j) = 0$. We also call this convergence with respect to the norm. For $p \in V$ and $r > 0$, the open ball centered at p with radius r is $B_\nu(p, r) = \{x \mid \nu(x - p) < r\}$.
Since different norms lead to different distance functions, one might now be concerned about the dependence of arguments involving topological concepts, such as continuity, open and closed sets, compactness, convergence, etc., on the choice of norm. The happy news for finite-dimensional spaces is that all norms are equivalent, by which we mean the following.
Definition 7.2. We say two norms $\nu_1$ and $\nu_2$ on a vector space V are equivalent if there are positive constants α and β such that for all x in V
(7.3) $\alpha\nu_1(x) \le \nu_2(x) \le \beta\nu_1(x)$.
The reader may check that (7.3) does indeed define an equivalence relation on the set of norms. One can also show that when $\nu_1$ and $\nu_2$ are equivalent norms on V, the metric spaces determined by these two norms have the same open sets. Hence, the topological properties (open sets, closed sets, limits, compactness, continuity, etc.) are the same for the metric spaces associated with the two norms.
Before showing that all norms on $\mathbb{C}^n$ are equivalent, let us look at the easy case n = 1. Using the number 1 as the basis vector for $\mathbb{C}$, property (2) of the definition gives $\nu(x) = \nu(x \cdot 1) = |x|\nu(1)$. So any norm on the one-dimensional vector space $\mathbb{C}$ is just a positive constant (namely ν(1)) times the usual absolute value function, and convergence of a sequence with respect to the norm ν is the same as our usual notion of convergence for a sequence of complex numbers. We now show that all norms on $\mathbb{C}^n$ are equivalent by showing that every norm on $\mathbb{C}^n$ is equivalent to the $l_2$ norm.
Theorem 7.3. Let ν be a norm on $\mathbb{C}^n$. Then there exist positive constants α and β such that for any x in $\mathbb{C}^n$,
(7.4) $\alpha\|x\|_2 \le \nu(x) \le \beta\|x\|_2$.
Proof. Let ν be a norm on $\mathbb{C}^n$. Let $e_1, \ldots, e_n$ be the unit coordinate vectors. We start by proving the right side of the inequality. Let $b = (\nu(e_1), \nu(e_2), \ldots, \nu(e_n))$. For x in $\mathbb{C}^n$, let $|x|$ denote the vector $(|x_1|, |x_2|, \ldots, |x_n|)$. Using the triangle inequality for the norm ν,
(7.5) $\nu(x) = \nu\left(\sum_{i=1}^n x_i e_i\right) \le \sum_{i=1}^n |x_i|\,\nu(e_i) = |x| \cdot b$.
The Cauchy–Schwarz inequality then gives $\nu(x) \le \|x\|_2\|b\|_2$, and setting $\beta = \|b\|_2$ gives the right-hand inequality of (7.4).
That was the easy part of the proof. Establishing the left-hand side will take a bit more work. We first use the already established inequality, $\nu(x) \le \beta\|x\|_2$, to show that the real-valued function $f : \mathbb{C}^n \to \mathbb{R}$ defined by $f(x) = \nu(x)$ is continuous in the metric space topology defined by the usual Euclidean metric, the 2-norm. We should leave this as an exercise to the reader, but your algebraist author is so excited by the chance to do an ε–δ proof that she cannot resist. So, the classic opening: let $\epsilon > 0$. It will then be no surprise to anyone who has seen these proofs before that we now put $\delta = \epsilon/\beta$, and let $B(x, \delta)$ be the open ball of radius δ in the metric of the 2-norm. Then for any y in $B(x, \delta)$, we have $\|y - x\|_2 < \delta$, and hence $\nu(y - x) \le \beta\|y - x\|_2 < \beta\delta = \epsilon$. Using (7.2), we then have $|\nu(y) - \nu(x)| \le \nu(y - x) < \epsilon$. Therefore, for any y in $B(x, \delta)$, we have $|\nu(y) - \nu(x)| < \epsilon$, showing that the function $f(x) = \nu(x)$ is continuous in the metric space topology of the usual Euclidean metric.
Flushed with the success of the ε–δ proof, we now use another mainstay of the analysis toolkit, namely, the theorem that a continuous function on a compact set must attain a maximum and a minimum on that set. For our compact set, we use $U = \{u \mid \|u\|_2 = 1\}$. Recall that in the metric space $\mathbb{C}^n$ with the usual Euclidean metric, a set is compact if and only if it is closed and bounded; hence U is compact. (At this step it is critical that we are working in the finite-dimensional space $\mathbb{C}^n$; in the infinite-dimensional case, closed and bounded does not guarantee compactness.) Now, the function $f(x) = \nu(x)$ is continuous in the 2-norm metric, so f has a minimum value m on U. Since $\nu(x) > 0$ for any nonzero x, we must have $m > 0$. Let x be any nonzero vector in $\mathbb{C}^n$. Set $u = x/\|x\|_2$. Then $x = \|x\|_2 u$ and $u \in U$. Hence, $\nu(x) = \nu(\|x\|_2 u) = \|x\|_2\nu(u) \ge m\|x\|_2$. Set $\alpha = m$ to get the left-hand side of (7.4). □
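For a concrete pair of norms, the equivalence constants in (7.4) can be written down explicitly: $\|x\|_2 \le \|x\|_1 \le \sqrt{n}\,\|x\|_2$ for every x in $\mathbb{C}^n$ (the right inequality is Cauchy–Schwarz). A quick numerical spot-check (an editorial addition, using numpy):

```python
# Check ||x||_2 <= ||x||_1 <= sqrt(n) ||x||_2 on many random complex vectors.
import numpy as np

rng = np.random.default_rng(1)
n = 5
ok = True
for _ in range(1000):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    n1 = np.sum(np.abs(x))        # ||x||_1
    n2 = np.linalg.norm(x)        # ||x||_2
    ok = ok and (n2 <= n1 + 1e-12) and (n1 <= np.sqrt(n) * n2 + 1e-12)
assert ok
```
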
As noted above, the first consequence of Theorem 7.3 is that for any norm ν on $\mathbb{C}^n$, the open sets of the ν metric space topology are the same as those of the usual Euclidean topology. Hence, for any result involving purely topological properties, such as continuity, convergence of a sequence, etc., we may choose to work with whatever norm is convenient. Typical choices are the familiar norms $\|\cdot\|_1$, $\|\cdot\|_2$, and $\|\cdot\|_\infty$. We now use this to show that in $\mathbb{C}^n$, a sequence of vectors converges in the norm if and only if it converges. Just to
be clear about what we mean by convergence of a sequence of vectors: $\lim_{j\to\infty} x_j = b$ means that if $x_{ij}$ denotes the ith coordinate of the vector $x_j$, then for each $i = 1, \ldots, n$, the sequence of complex numbers $\{x_{ij}\}_{j=1}^\infty$ converges to $b_i$.
Theorem 7.4. Let ν be a norm on $\mathbb{C}^n$, and let $\{x_j\}_{j=1}^\infty$ be a sequence of vectors in $\mathbb{C}^n$. Then the sequence $\{x_j\}_{j=1}^\infty$ converges in the norm to 0 if and only if $\lim_{j\to\infty} x_j = 0$.
Proof. By the remarks above, it suffices to prove this for a particular norm on $\mathbb{C}^n$. For the first part of the proof, we use the norm $\|\cdot\|_\infty$. Suppose $\lim_{j\to\infty} \|x_j\|_\infty = 0$. Writing out the coordinates, $x_j = (x_{1j}, x_{2j}, \ldots, x_{nj})$, and recalling that $\|x_j\|_\infty = \max_{1\le i\le n} |x_{ij}|$, it is then clear that $x_j \to 0$ as $j \to \infty$.
Conversely, suppose each coordinate sequence $\{x_{ij}\}_{j=1}^\infty$ converges to 0. We now use the $l_1$ norm, $\|x_j\|_1 = \sum_{i=1}^n |x_{ij}|$. Since each coordinate sequence converges to 0, we have $\lim_{j\to\infty} \sum_{i=1}^n |x_{ij}| = 0$, and so $\lim_{j\to\infty} \|x_j\|_1 = 0$. □
Since $\{x_j\}_{j=1}^\infty$ converges in the norm to p if and only if $\{x_j - p\}_{j=1}^\infty$ converges in the norm to zero, we have the following corollary.
Corollary 7.5. Let ν be a norm on $\mathbb{C}^n$. Then $\{x_j\}_{j=1}^\infty$ converges in the norm to p if and only if $\lim_{j\to\infty} x_j = p$.
As pointed out in the proof of Theorem 7.3, being in a finite-dimensional space was critical; we needed the fact that U is compact. To see what can happen in an infinite-dimensional space, we turn to $l_2$, the Hilbert space of all square summable complex sequences. From the standard inner product $\langle x, y\rangle = \sum_{n=1}^\infty x_n\bar{y}_n$, we get the usual 2-norm, $\|x\|_2 = \langle x, x\rangle^{1/2}$. Let $e_j$ be the sequence with a one in entry j and zeroes elsewhere, and consider the sequence of vectors $\{e_j\}_{j=1}^\infty$. Each coordinate sequence converges to zero. However, $\|e_j\|_2 = 1$ for all j, so the sequence $\{e_j\}_{j=1}^\infty$ does not converge to 0 in the 2-norm. The set $U = \{u \mid \|u\|_2 = 1\}$ is closed and bounded in $l_2$, but it is not compact.
Moreover, suppose we define a new inner product on the space $l_2$ by setting $\langle x, y\rangle = \sum_{n=1}^\infty \frac{x_n\bar{y}_n}{n}$, and set $\nu(x) = \left(\sum_{n=1}^\infty \frac{|x_n|^2}{n}\right)^{1/2}$. Then we have $\nu(e_j) = \frac{1}{\sqrt{j}}$, and the sequence $\{e_j\}_{j=1}^\infty$ does converge to zero in this norm. Note also that since $\nu(e_j) = \frac{1}{\sqrt{j}}\|e_j\|_2$, there is no positive number α for which the inequality $\alpha\|x\|_2 \le \nu(x)$ will hold for every x; thus the norm ν is not equivalent to the 2-norm.
7.2. Matrix Norms
One way to obtain matrix norms is to consider the set of m × n matrices as vectors in $\mathbb{C}^{mn}$ and use any norm defined on $\mathbb{C}^{mn}$. However, for an m × n matrix A and a vector x in $\mathbb{C}^n$, we are usually interested in knowing how the norm of Ax is related to the norm of x. In computational situations, the entries of x may involve error, either from measurement or roundoff errors. One might model x as $x = \hat{x} + e$, where e represents the errors. For Ax, we then have $Ax = A\hat{x} + Ae$. Assuming we have some bound on the size of the entries of e, we would like a bound on the size of Ae. Thus, we are interested in having some inequality of the form $\|Ae\|_\nu \le c\|e\|_\nu$, where c is a positive constant.
For the remainder of this section, we assume m = n, so A is a square matrix. We focus on those norms on the set of n × n matrices which are induced by a vector norm ν on $\mathbb{C}^n$.
Definition 7.6. For a vector norm ν defined on $\mathbb{C}^n$, we define the norm ν on the set of n × n matrices by
(7.6) $\|A\|_\nu = \max_{\|x\|_\nu = 1} \|Ax\|_\nu$.
We call this the matrix norm induced by the vector norm ν. Since $\{x \mid \|x\|_\nu = 1\}$ is a closed, bounded set in $\mathbb{C}^n$, the fact that a real-valued, continuous function on a compact set has a maximum value on that set guarantees us the existence of the right-hand side of (7.6). We leave it as an exercise to check that (7.6) satisfies the three properties in the definition of a norm. Note that another way to state (7.6) is
(7.7) $\|A\|_\nu = \max_{x \ne 0} \frac{\|Ax\|_\nu}{\|x\|_\nu}$,
so we have $\|Ax\|_\nu \le \|A\|_\nu\|x\|_\nu$ for all x.
We warn the reader that other notations for matrix norms are used. In our notation, $\|A\|_1$ means the matrix norm induced by the $l_1$ vector norm $\|x\|_1$. One might also use $\|A\|_1$ to mean $\sum_{i=1}^n \sum_{j=1}^n |a_{ij}|$. Were we planning to discuss norms in more detail, we would take more care to introduce notation that distinguished induced matrix norms from the norms obtained by thinking of a matrix as a vector with the entries arranged in m rows and n columns. However, our main interest is in these induced norms. (To be honest, the only norms used in future chapters are the induced 2-norm and the Frobenius norm.)
An induced matrix norm has another useful property:
(7.8) $\|AB\|_\nu \le \|A\|_\nu\|B\|_\nu$.
This is called the submultiplicative property. We leave this as another exercise for the reader.
Example 7.7. The $l_1$ norm on $\mathbb{C}^n$ induces the matrix norm $\|A\|_1$. We claim that $\|A\|_1 = \max_{1\le j\le n} \|a_j\|_1$,
where $a_j$ is column j of A. Thus, $\|A\|_1$ is the maximum column sum matrix norm (although we are not summing the actual entries in each column, but their absolute values). This is often left as an exercise to the reader, but since we have already done so much of that, we show the details.
First, note that for each unit coordinate vector, $\|e_j\|_1 = 1$ and $Ae_j = a_j$. So $\|Ae_j\|_1 = \|a_j\|_1$, which shows that $\|a_j\|_1 \le \|A\|_1$ for every $j = 1, \ldots, n$.
On the other hand, we have $Ax = \sum_{j=1}^n x_j a_j$. Hence, for any x with $\|x\|_1 = 1$, the triangle inequality gives
$$\|Ax\|_1 \le \sum_{j=1}^n |x_j|\,\|a_j\|_1 \le \left(\sum_{j=1}^n |x_j|\right)\max_{1\le j\le n} \|a_j\|_1 = \max_{1\le j\le n} \|a_j\|_1.$$
Since that example was so much fun, we do another.
Example 7.8. The $l_\infty$ norm on $\mathbb{C}^n$ induces the matrix norm $\|A\|_\infty$. We claim that $\|A\|_\infty = \max_{1\le i\le n} \|a_i\|_1$, where $a_i$ is row i of A. Thus, $\|A\|_\infty$ is the maximum row sum matrix norm (again, we are not summing the actual entries, but their absolute values).
Suppose $\|x\|_\infty = 1$. Then $|x_j| \le 1$ for $j = 1, \ldots, n$, and once again, the triangle inequality is the key, giving
$$|\text{entry } i \text{ of } Ax| \le \sum_{j=1}^n |a_{ij}||x_j| \le \sum_{j=1}^n |a_{ij}| = \|a_i\|_1.$$
Now, to show that the maximum row sum value is actually attained, suppose row k gives the maximum row sum value $\sum_{j=1}^n |a_{kj}|$. Write each entry of the row in polar form $a_{kj} = |a_{kj}|e^{i\theta_j}$, and set $x = (e^{-i\theta_1}, e^{-i\theta_2}, \ldots, e^{-i\theta_n})$. Then $\|x\|_\infty = 1$ and entry k of Ax is the maximum row sum $\sum_{j=1}^n |a_{kj}|$.
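Examples 7.7 and 7.8 can be checked against numpy's built-in matrix norms, which use exactly these max column sum and max row sum formulas. A numerical sketch (an editorial addition, not part of the text):

```python
# Induced 1-norm = max column sum of absolute values;
# induced inf-norm = max row sum.  Compare the closed forms with numpy's
# matrix norms, and confirm by brute force over random unit vectors.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

col_sum = np.max(np.sum(np.abs(A), axis=0))   # claimed ||A||_1
row_sum = np.max(np.sum(np.abs(A), axis=1))   # claimed ||A||_inf
assert np.isclose(col_sum, np.linalg.norm(A, 1))
assert np.isclose(row_sum, np.linalg.norm(A, np.inf))

# Brute force: ||Ax||_1 over random x with ||x||_1 = 1 never exceeds col_sum.
best1 = 0.0
for _ in range(2000):
    x = rng.standard_normal(4)
    x /= np.sum(np.abs(x))                    # normalize so ||x||_1 = 1
    best1 = max(best1, np.sum(np.abs(A @ x)))
assert best1 <= col_sum + 1e-12
```
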
What about the induced matrix norm $\|A\|_2$? We have $\|Ax\|_2^2 = x^*A^*Ax$. For $\|x\|_2 = 1$, this has the maximum value $\sigma_1^2$, where $\sigma_1^2$ is the largest eigenvalue of the positive semidefinite matrix $A^*A$. So $\|A\|_2 = \sigma_1$, the largest singular value of A.
Finally, we mention one more matrix norm of interest to us: the Frobenius norm
$$\|A\|_F = \left(\sum_{i=1}^n \sum_{j=1}^n |a_{ij}|^2\right)^{1/2} = [\operatorname{trace}(A^*A)]^{1/2}.$$
The reader may check that it is indeed a norm and $\|AB\|_F \le \|A\|_F\|B\|_F$. We say a matrix norm is unitarily invariant if for any unitary U, V, we have $\|UAV\|_\nu = \|A\|_\nu$. The 2-norm and the Frobenius norm are both unitarily invariant norms.
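The facts just stated are easy to verify numerically (an editorial sketch, numpy assumed): $\|A\|_2$ equals the largest singular value, $\|A\|_F^2$ is the sum of the squared singular values, and both are unchanged by unitary factors.

```python
# Spectral and Frobenius norms via singular values, and unitary invariance.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 5))
s = np.linalg.svd(A, compute_uv=False)        # singular values, descending

assert np.isclose(np.linalg.norm(A, 2), s[0])                 # ||A||_2 = sigma_1
assert np.isclose(np.linalg.norm(A, 'fro') ** 2, np.sum(s ** 2))

# Unitary invariance: U, V taken from QR factorizations of random matrices.
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))
V, _ = np.linalg.qr(rng.standard_normal((5, 5)))
assert np.isclose(np.linalg.norm(U @ A @ V, 2), np.linalg.norm(A, 2))
assert np.isclose(np.linalg.norm(U @ A @ V, 'fro'), np.linalg.norm(A, 'fro'))
```
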
Exercises
1. Show that equation (7.1) satisfies the properties of a distance function for a metric space.
2. Show that $|\nu(y) - \nu(x)| \le \nu(y - x)$.
3. Check that (7.3) does indeed define an equivalence relation on the set of norms.
4. Show that if $\nu_1, \nu_2$ are equivalent norms on V, then the metric space topologies defined by $\nu_1$ and $\nu_2$ have the same open sets.
5. For the vector space $\mathbb{R}^2$, sketch the ball of radius one, centered at x, for each of the norms $\|\cdot\|_1$, $\|\cdot\|_2$, and $\|\cdot\|_\infty$.
6. Show that (7.6) satisfies the properties in the definition of a norm.
7. Show that (7.7) is the same as (7.6).
8. Prove (7.8).
9. Recall that the spectral radius $\rho(A)$ is $\max\{|\lambda| \mid \lambda \text{ is an eigenvalue of } A\}$. Show that for any induced matrix norm, $\rho(A) \le \|A\|_\nu$. Note that for the special case of the norms $\|\cdot\|_1$ and $\|\cdot\|_\infty$ this gives Theorem 3.16 and the following Remark 3.17, established in Chapter 3 using the Geršgorin circle theorem.
10. Show that the function $f(A) = \rho(A)$ does not define a norm on the set of n × n matrices. Specifically, show that of the three properties in the definition of norm, this function satisfies only the second.
11. Another context for the Frobenius norm is to define an inner product on the set of n × n complex matrices by putting $\langle A, B\rangle = \operatorname{trace}(B^*A)$. Show that this is indeed an inner product and note that $\langle A, A\rangle = \|A\|_F^2$.
12. Show that the Frobenius norm satisfies $\|AB\|_F \le \|A\|_F\|B\|_F$.
13. Show that the 2-norm and the Frobenius norm are both unitarily invariant norms.
14. Show that for any unitary matrix U, we have $\|U\|_2 = 1$.
15. Show that if A is a normal matrix, then $\|A\|_2 = \rho(A)$.
16. Show that $\|A\|_2 = \|A^*\|_2$.
17. Show that for any induced matrix norm, $\|I\|_\nu = 1$, and for any invertible matrix A we have $\|A\|_\nu\|A^{-1}\|_\nu \ge 1$.
Chapter 8
Some Matrix Factorizations
This chapter concerns some matrix factorizations. The singular value decomposition (SVD) was introduced in Section 6.8; we now examine the SVD in more detail. Next, we look at Householder transformations (introduced in Chapter 2, Exercise 18 and Chapter 3, Exercise 10) and use them to achieve the QR factorization, and to transform matrices to special forms, such as Hessenberg and tridiagonal forms, with unitary transformations. To give some idea of why these techniques are useful in computational matrix theory, we brieﬂy describe the basics for a few classic methods for the numerical computation of eigenvalues. For serious discussion of these methods with analysis of the error and the eﬃciency for each algorithm, see one of the many texts on computational and numerical linear algebra, some classic, and some more recent, for example [Hou64, Wilk65, LawHan74, GvL89, Stew73, Parl80, StewSun90, Tref97, Demm97]. The chapter concludes with a brief review of the wellknown LDU factorization, which comes from the Gaussian elimination process. Except for the section on the LDU factorization, matrices in this section are over C, unless we speciﬁcally say that the matrix is real.
8.1. Singular Value Decomposition
Section 6.8 obtained the singular value decomposition for nonsingular square matrices from the polar factorization. Here we approach the SVD directly for m × n matrices. Recall the following fact (Theorem 2.22) from Chapter 2: If A is a complex matrix, then the four matrices A, $A^*$, $AA^*$, and $A^*A$ all have the same rank. Let A be an m × n matrix of rank r. Then $A^*A$ and $AA^*$ are both positive semidefinite matrices of rank r; each has r positive eigenvalues and then n − r and m − r zero eigenvalues, respectively. Theorem 3.24 tells us that $A^*A$ and $AA^*$ have the same nonzero eigenvalues.
Definition 8.1. Let A be an m × n matrix of rank r. Let $\sigma_1^2 \ge \sigma_2^2 \ge \cdots \ge \sigma_r^2$ be the positive eigenvalues of $A^*A$ (equivalently, of $AA^*$). The numbers $\sigma_1, \ldots, \sigma_r$ are called the singular values of A.
Remark 8.2. Since the n × n matrix $A^*A$ has n eigenvalues, one may also put $\sigma_j = 0$ for $r + 1 \le j \le n$ and define the singular values of A to be $\sigma_1, \ldots, \sigma_n$. Another option is to use the m × m matrix $AA^*$, with the list $\sigma_1, \ldots, \sigma_m$, where $\sigma_j = 0$ for $r + 1 \le j \le m$. We use all of these, depending on what is convenient.
Theorem 8.3. Let A be an m × n matrix of rank r; let $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r$ be the singular values of A. Let $\Sigma_A$ be the m × n diagonal matrix with $\sigma_1, \ldots, \sigma_r$ in the first r diagonal entries and zeroes elsewhere. Then there exist unitary matrices U and V, of sizes m × m and n × n, respectively, such that $A = U\Sigma_A V$.
Proof. Note that $D = \Sigma_A^*\Sigma_A = \Sigma_A^T\Sigma_A$ is a real n × n diagonal matrix with $\sigma_1^2 \ge \sigma_2^2 \ge \cdots \ge \sigma_r^2$ in the first r diagonal entries and zeroes elsewhere. The n × n matrix $A^*A$ is Hermitian, with r positive eigenvalues $\sigma_1^2 \ge \sigma_2^2 \ge \cdots \ge \sigma_r^2$ and n − r eigenvalues equal to zero. Hence, from the spectral theorem, there is an n × n unitary matrix V such that
(8.1) $VA^*AV^* = D = \Sigma_A^*\Sigma_A$.
The ij entry of $VA^*AV^*$ is the inner product of columns j and i of $AV^*$, so equation (8.1) tells us the columns of $AV^*$ are pairwise orthogonal. Furthermore, when $1 \le j \le r$ the length of column j is $\sigma_j$. For $j > r$, equation (8.1) tells us the jth column of $AV^*$ is a column of zeroes. For $1 \le j \le r$, divide column j of $AV^*$ by its length, $\sigma_j$, and let $U_r$ denote the m × r matrix with $\frac{1}{\sigma_j}(\text{column } j \text{ of } AV^*)$ as its jth column. The r columns of $U_r$ are then an orthonormal set. Now complete $U_r$ to an m × m unitary matrix U by using an orthonormal basis for the orthogonal complement of the column space of $U_r$ for the remaining m − r columns. We then have $AV^* = U\Sigma_A$. Multiply both sides on the right by V to obtain $A = U\Sigma_A V$. □
Definition 8.4. Let A be an m × n complex matrix of rank r, with singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r$. A factorization of the form $A = U\Sigma_A V$, where $\Sigma_A$ is the m × n diagonal matrix with $\sigma_1, \ldots, \sigma_r$ in the first r diagonal entries and zeroes elsewhere, and U and V are unitary matrices of sizes m × m and n × n, respectively, is called a singular value decomposition for A. We write SVD for a singular value decomposition.
Theorem 8.3 states that any complex matrix has an SVD. Note that while $\Sigma_A$ is uniquely determined by A, the unitary matrices U, V are not. The equation $AV^* = U\Sigma_A$ tells us how the transformation A acts: $A(\text{column } j \text{ of } V^*) = \sigma_j(\text{column } j \text{ of } U)$. Thus, using the columns of $V^*$ as an orthonormal basis for $\mathbb{C}^n$ and the columns of U as an orthonormal basis for $\mathbb{C}^m$, the effect of the transformation A is to map the jth basis vector of $\mathbb{C}^n$ to a multiple of the jth basis vector of $\mathbb{C}^m$; the multiplier is the singular value $\sigma_j$. The basis vectors have length one, so the jth basis vector in $\mathbb{C}^n$ is mapped to a vector of length $\sigma_j$ in $\mathbb{C}^m$. The largest singular value $\sigma_1$ is the largest factor by which the length of a basis vector is multiplied. We now show that $\sigma_1$ is the largest factor by which the length of any vector is multiplied.
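As a numerical companion to Theorem 8.3 (an editorial addition; numpy assumed), one can compute an SVD for a random complex matrix and confirm the factorization. Note that numpy's convention $A = U\,\mathrm{diag}(s)\,V_h$ matches the book's $A = U\Sigma_A V$ with $V = V_h$.

```python
# Compute an SVD and verify: U, V unitary, A = U Sigma V, singular values sorted.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
U, s, Vh = np.linalg.svd(A)        # U is 4x4, Vh is 3x3, s has 3 entries

Sigma = np.zeros((4, 3))           # m x n "diagonal" matrix of singular values
Sigma[:3, :3] = np.diag(s)

assert np.allclose(U.conj().T @ U, np.eye(4))     # U unitary
assert np.allclose(Vh @ Vh.conj().T, np.eye(3))   # V unitary
assert np.allclose(U @ Sigma @ Vh, A)             # A = U Sigma V
assert np.all(s[:-1] >= s[1:])                    # sigma_1 >= sigma_2 >= ...
```
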
Theorem 8.5. Let A be an m × n matrix with singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r$. Then $\max_{\|x\|=1} \|Ax\| = \sigma_1$.
Proof. We have already seen, from the equation $AV^* = U\Sigma_A$, that if x is the first column of $V^*$, then $\|x\| = 1$ and $\|Ax\| = \sigma_1$. Now suppose x is any vector in $\mathbb{C}^n$ of length one. From the singular value decomposition $A = U\Sigma_A V$, we have $Ax = U\Sigma_A Vx$. Set $y = Vx$; since V is unitary, $\|y\| = 1$. In the product $\Sigma_A y$, coordinate i of y gets multiplied by $\sigma_i$; hence $\|\Sigma_A y\| \le \sigma_1\|y\| = \sigma_1$. Since U is unitary, $\|Ax\| = \|U\Sigma_A Vx\| = \|U\Sigma_A y\| = \|\Sigma_A y\| \le \sigma_1$. □
Definition 8.6. The spectral norm of an m × n matrix A is $\max_{\|x\|=1} \|Ax\| = \sigma_1$. We denote the spectral norm of A as $\|A\|_2$.
The spectral norm is also called the operator norm; in Section 7.2 we saw it is the norm induced by the 2-norm on $\mathbb{C}^n$. We have $\|Ax\| \le \|A\|_2\|x\|$ for all x. When A is square and v is an eigenvector of A, we have $\|Av\| = \|\lambda v\| = |\lambda|\|v\|$, so $|\lambda| \le \|A\|_2$ for any eigenvalue λ of A. Hence, $\rho(A) \le \|A\|_2 = \sigma_1$.
Recall the Frobenius norm $\|A\|_F = \operatorname{trace}(A^*A)^{1/2} = \left(\sum_{i=1}^m \sum_{j=1}^n |a_{ij}|^2\right)^{1/2}$. It is
often more convenient to work with $\|A\|_F^2 = \operatorname{trace}(A^*A)$. Since $\|A\|_F = \|UAV\|_F$ for any unitary matrices U, V, we have $\|A\|_F^2 = \|\Sigma_A\|_F^2 = \sigma_1^2 + \sigma_2^2 + \cdots + \sigma_r^2$.
There are other ways to write the factorization $A = U\Sigma_A V$. Since only the first r diagonal entries of $\Sigma_A$ are nonzero, the last m − r columns of U and the last n − r rows of V are superfluous. Let $\hat{\Sigma}_r$ be the r × r diagonal matrix $\operatorname{diag}(\sigma_1, \ldots, \sigma_r)$. Replace the n × n unitary matrix V with the r × n matrix $V_r$ consisting of the first r rows of V; these rows form an orthonormal set. And instead of completing the m × r matrix $U_r$ in the proof of Theorem 8.3 to get a square unitary matrix U, simply use $U_r$, which has r orthonormal columns. We then have
(8.2) $A = U_r\hat{\Sigma}_r V_r$,
where $U_r$ is m × r with orthonormal columns, $\hat{\Sigma}_r$ is an r × r diagonal matrix with positive diagonal entries, and $V_r$ is r × n with orthonormal rows.
We may also decompose (8.2) into a sum of r rank one matrices. Let $u_i$ denote column i of $U_r$, and let $v_i$ be column i of $V_r^*$. Then row i of $V_r$ is $v_i^*$, and
(8.3) $A = \sum_{i=1}^r \sigma_i u_i v_i^* = \sigma_1 u_1 v_1^* + \sigma_2 u_2 v_2^* + \cdots + \sigma_r u_r v_r^*$.
Since σ1 ≥ σ2 ≥ · · · ≥ σr > 0, while all of the vectors ui and vi have the same length (namely, one), the sum (8.3) shows that the earlier terms in the sum (that is, the terms corresponding to larger σi ’s) make larger contributions to A. This suggests that to approximate A with a matrix of lower rank (i.e., one with fewer terms in the sum), one should use the terms corresponding to larger singular values
and drop terms with smaller singular values.
Let $1 \le k \le r$. From $A = U\Sigma_A V$, let $U_k$ be the m × k matrix consisting of the first k columns of U, and let $V_k$ be the k × n matrix consisting of the first k rows of V. Set
$$A_k = U_k\operatorname{diag}(\sigma_1, \ldots, \sigma_k)V_k = \sum_{i=1}^k \sigma_i u_i v_i^*.$$
We shall see that of all matrices of rank k, the matrix $A_k$ is the one which is "closest" to A, where distance is measured by the Frobenius norm. Let $\Sigma_k$ denote the m × n matrix with $\sigma_1, \ldots, \sigma_k$ in the first k diagonal positions and zeroes elsewhere. Then $A_k = U\Sigma_k V$ and
(8.4) $\|A - A_k\|_F^2 = \|U(\Sigma_A - \Sigma_k)V\|_F^2 = \|\Sigma_A - \Sigma_k\|_F^2 = \sum_{i=k+1}^r \sigma_i^2$.
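Equation (8.4) is straightforward to confirm numerically (an editorial sketch, numpy assumed): truncate the SVD after k terms and compare the Frobenius error with the tail of the squared singular values.

```python
# Truncated SVD A_k and the identity ||A - A_k||_F^2 = sum_{i>k} sigma_i^2.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 5))
U, s, Vh = np.linalg.svd(A, full_matrices=False)

k = 2
Ak = U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]   # best rank-k approximation
assert np.linalg.matrix_rank(Ak) == k
assert np.isclose(np.linalg.norm(A - Ak, 'fro') ** 2, np.sum(s[k:] ** 2))
```
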
The following theorem is usually attributed to Eckart and Young [EcYo36]. Stewart [Stew93] points out that the result is older and is contained in [Schm07].
Theorem 8.7 (Schmidt [Schm07]). Let A be an m × n matrix of rank r with singular value decomposition $A = U\Sigma_A V = \sum_{i=1}^r \sigma_i u_i v_i^*$, where $u_i$ is column i of U and $v_i^*$ is row i of V. For $1 \le k \le r$, set $A_k = \sum_{i=1}^k \sigma_i u_i v_i^*$. Then for any m × n matrix B of rank at most k, we have
$$\|A - A_k\|_F^2 = \sum_{i=k+1}^r \sigma_i^2 \le \|A - B\|_F^2.$$
We give two proofs. The first, from [Stew73, pp. 322–323], has the advantage of being direct and natural. The second proof, based on Schmidt's argument as presented in [Stew93], seems sneakier and more complicated, but we feel it has its own charm.
First proof. Let B be an m × n matrix of rank at most k which gives the minimum value of $\|A - X\|_F$ over all matrices X of rank at most k. At this point, one must ask how we are guaranteed the existence of such a B, and we need to do a bit of analysis. Define the real-valued function f on the space of m × n complex matrices by $f(X) = \|A - X\|_F$. The function f is continuous. The set of matrices of rank at most k is a closed set. Since we seek to minimize $\|A - X\|_F$, we may restrict our attention to matrices X satisfying $\|A - X\|_F \le \|A\|_F$. The set $\{X \mid \operatorname{rank}(X) \le k \text{ and } \|A - X\|_F \le \|A\|_F\}$ is then a closed, bounded set in a finite-dimensional normed space; hence, it is compact and the continuous function f attains its minimum value for some B in this set.
Now let $\beta_1 \ge \beta_2 \ge \cdots \ge \beta_k \ge 0$ be the singular values of B, and let $U\Sigma_B V$ be the singular value decomposition of B. The Frobenius norm is invariant under unitary transformations, so we may, without loss of generality, assume
$$B = \Sigma_B = \begin{pmatrix} D_k & 0 \\ 0 & 0 \end{pmatrix},$$
where $D_k = \operatorname{diag}(\beta_1, \ldots, \beta_k)$. Partition A conformally with B, thus
$$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix},$$
where $A_{11}$ is k × k. Then
$$\|A - B\|_F^2 = \|A_{11} - D_k\|_F^2 + \|A_{12}\|_F^2 + \|A_{21}\|_F^2 + \|A_{22}\|_F^2.$$
If $A_{12} \ne 0$, then $B_1 = \begin{pmatrix} D_k & A_{12} \\ 0 & 0 \end{pmatrix}$ is a matrix of rank at most k with
$$\|A - B_1\|_F^2 = \|A_{11} - D_k\|_F^2 + \|A_{21}\|_F^2 + \|A_{22}\|_F^2 < \|A - B\|_F^2.$$
So, by the choice of B, we must have $A_{12} = 0$. The same argument shows $A_{21} = 0$ and $D_k = A_{11}$. Hence, we have $A = \begin{pmatrix} D_k & 0 \\ 0 & A_{22} \end{pmatrix}$ and the numbers $\beta_1, \ldots, \beta_k$ are k of the singular values of A. Also, $\|A - B\|_F = \|A_{22}\|_F$. Since $\|A_{22}\|_F^2$ is the sum of the squares of the remaining singular values of A, and since $k \le r$, it is clear that the minimum value possible for $\|A_{22}\|_F^2$ is $\sum_{i=k+1}^r \sigma_i^2$. To achieve this value we must have $\beta_i = \sigma_i$ for $i = 1, \ldots, k$. □
Now for the second proof, based on the presentation of Schmidt's argument in [Stew93]. First, a lemma is needed in the proof, but it is of interest in its own right.
Lemma 8.8. Let A be an m × n complex matrix of rank r with singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$. For $k > r$, put $\sigma_k = 0$. Suppose $\{x_1, \ldots, x_k\}$ is a set of k orthonormal vectors in $\mathbb{C}^n$. Then $\sum_{i=1}^k \|Ax_i\|^2 \le \sum_{i=1}^k \sigma_i^2$.
Remark 8.9. For k = 1 this gives Theorem 8.5.
Proof. Let X be the n × k matrix with column vectors $x_1, \ldots, x_k$. Since the columns are orthonormal, $X^*X = I_k$. We have $AX = (Ax_1\ Ax_2\ \cdots\ Ax_k)$ and
$$\sum_{i=1}^k \|Ax_i\|^2 = \|AX\|_F^2 = \operatorname{trace}(X^*A^*AX).$$
Let $A = U\Sigma V$ be the singular value decomposition of A, where U, V are unitary matrices of sizes m × m and n × n, respectively, and Σ is the m × n diagonal matrix with the singular values on the diagonal. Put $V_k = VX$; the n × k matrix $V_k$ has orthonormal columns, and $AX = U\Sigma V_k$. Using $U^*U = I$, we have
$$X^*A^*AX = V_k^*\Sigma^T U^*U\Sigma V_k = V_k^*\Sigma^T\Sigma V_k.$$
Recall that $\operatorname{trace}(BC) = \operatorname{trace}(CB)$; apply this with $B = V_k^*\Sigma^T\Sigma$ and $C = V_k$ to get $\operatorname{trace}(X^*A^*AX) = \operatorname{trace}(V_kV_k^*\Sigma^T\Sigma)$. Let $v_i$ denote the ith row of $V_k$. The (i, i) entry of $V_kV_k^*$ is then $\|v_i\|^2$ and so
(8.5) $\sum_{i=1}^k \|Ax_i\|^2 = \operatorname{trace}(V_kV_k^*\Sigma^T\Sigma) = \sum_{i=1}^r \sigma_i^2\|v_i\|^2$.
8. Some Matrix Factorizations
Now
k
vi 2 = trace(Vk Vk∗ ) = trace(Vk∗ Vk ) = k, because Vk∗ Vk = Ik . So the
i=1
last sum in (8.5) has the form
r i=1
ci σi2 , where the coeﬃcents ci = vi 2 satisfy
0 ≤ ci ≤ 1 and sum to k. Since σ1 ≥ σ2 ≥ · · · ≥ σr , a sum of this form is r k maximized by setting ci = 1 for i = 1, . . . , k and thus σi2 vi 2 ≤ σi2 . i=1
i=1
Now for the second proof of Theorem 8.7. Second proof. Since A − Ak 2F =
r
σi2 , we need to show that for any m × n
i=k+1 r
matrix B with rank(B) ≤ k, we have
σi2 ≤ A − B2F . Since rank(B) ≤ k,
i=k+1
we can factor B as B = XY ∗ , where X is m × k and Y is k × n. Furthermore, this can be done with an X which has orthonormal columns. (One way to get this is from the SVD of B. We have B = Uk ΣVk where Uk is m × k with orthonormal columns and Vk is k × n with orthonormal rows, and Σ is diagonal. We can then put X = Uk and Y ∗ = ΣVk .) Since X has orthonormal columns, X ∗ X = Ik and (A − B)∗ (A − B) = A∗ A − A∗ B − B ∗ A + B ∗ B = A∗ A − A∗ XY ∗ − Y X ∗ A + Y X ∗ XY ∗
(8.6)
= A∗ A − A∗ XY ∗ − Y X ∗ A + Y Y ∗ . Now the key trick: (Y − A∗ X)(Y − A∗ X)∗ = Y Y ∗ − A∗ XY ∗ − Y X ∗ A + A∗ XX ∗ A. Hence, (8.7)
−A∗ XY ∗ − Y X ∗ A = (Y − A∗ X)(Y − A∗ X)∗ − Y Y ∗ − A∗ XX ∗ A.
Substitute (8.7) in (8.6) to get (8.8)
(A − B)∗ (A − B) = A∗ A + (Y − A∗ X)(Y − A∗ X)∗ − A∗ XX ∗ A.
Since trace(Y − A∗ X)(Y − A∗ X)∗ ≥ 0 and trace(A∗ XX ∗ A) = trace(X ∗ AA∗ X), we have A − B2F ≥ trace(A∗ A) − trace(X ∗ AA∗ X). Let xi denote column i of X; then the (i, i) entry of X ∗ AA∗ X is A∗ xi 2 . Since k x1 , . . . , xk are orthonormal, Lemma 8.8 tells us trace(X ∗ AA∗ X) ≤ σi2 , and so i=1
A − B2F ≥ trace(A∗ A) −
k
i=1
σi2 =
r
σi2 .
i=k+1
Theorem 8.7 is for the Frobenius norm. Mirsky [Mirs60] showed that the result holds for any unitarily invariant norm. See also [StewSun90, Chapter IV]. In Section 6.8, we got the singular value decomposition for nonsingular square matrices by starting with the square root of a positive deﬁnite Hermitian matrix and then using the polar decomposition of a square matrix. We now reverse the process.
Suppose A is a square n × n matrix with singular value decomposition $A = U\Sigma_A V$. Insert $I = VV^*$ after the factor U to get $A = U\Sigma_A V = (UV)(V^*\Sigma_A V)$. Then $U_1 = UV$ is unitary and $P = V^*\Sigma_A V$ is Hermitian, positive semidefinite, so we obtain a polar factorization $A = U_1P$. Recall that for a nonsingular A, the factors of the polar decomposition are uniquely determined by A. If A is singular, then $A^*A = P^2$ is positive semidefinite, and since $A^*A$ has a unique positive semidefinite square root, we see that P is uniquely determined by A. The U, however, is not.
Finally, we mention an analogue of the Courant–Fischer minimax/maximin characterization (Theorem 6.18) of the eigenvalues of a Hermitian matrix for singular values.
Theorem 8.10. Let A be an m × n matrix with singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$. Then
$$\sigma_k = \max_{V_k}\ \min_{\substack{x\in V_k \\ x\ne 0}} \frac{\|Ax\|}{\|x\|},$$
where the maximum is over all k-dimensional subspaces $V_k$ of $\mathbb{C}^n$. Also,
$$\sigma_k = \min_{V_{n-k+1}}\ \max_{\substack{x\in V_{n-k+1} \\ x\ne 0}} \frac{\|Ax\|}{\|x\|},$$
where the minimum is over all (n − k + 1)-dimensional subspaces $V_{n-k+1}$.
These may be used to obtain inequalities for singular values, such as the following analogue of Corollary 6.24.
Theorem 8.11. Let A, B be m × n matrices. Let the singular values of A and B be $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$ and $\tau_1 \ge \tau_2 \ge \cdots \ge \tau_n$, respectively. Then $|\sigma_k - \tau_k| \le \|A - B\|_2$.
We have focused on proving the existence of the SVD. For numerical computation, one must consider the efficiency and stability of the algorithms used to compute the factorization $A = U\Sigma_A V$. It is generally not a good strategy to compute $A^*A$ and then try to find the eigenvalues. We refer the reader elsewhere [LawHan74, GvL89, Tref97, Demm97, Hou64] for discussion of the issues involved and descriptions of algorithms. However, we will discuss one basic tool: using Householder transformations to obtain a QR factorization and to put a matrix in Hessenberg or tridiagonal form.
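Theorem 8.11 says that singular values are stable under perturbation: each moves by at most the spectral norm of the change. A numerical spot-check (an editorial addition, numpy assumed):

```python
# Each singular value of a perturbed matrix stays within ||A - B||_2
# of the corresponding singular value of A.
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 6))
B = A + 0.05 * rng.standard_normal((4, 6))   # a small perturbation of A

sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(B, compute_uv=False)
gap = np.linalg.norm(A - B, 2)               # spectral norm of the perturbation
assert np.all(np.abs(sA - sB) <= gap + 1e-12)
```
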
8.2. Householder Transformations

Householder transformations [Hou64] are orthogonal reflections across hyperplanes. Given two linearly independent vectors in Rn of the same length, there is a simple formula for constructing a Householder transformation which sends one to the other. These transformations provide another way to compute the QR factorization, presented in Chapter 2 as a byproduct of the Gram–Schmidt orthogonalization process. They are also useful for transforming matrices to special forms via unitary change of basis.
Let u be a nonzero column vector in Cn. The n × n matrix Pu = uu∗/(u∗u) is the linear transformation which projects orthogonally onto the line spanned by u. The map Pu fixes (pointwise) the line spanned by u and sends the orthogonal complement u⊥ to zero. Define the Householder transformation Hu by

(8.9) Hu = I − 2Pu.
Since Pu∗ = Pu = Pu², we have Hu∗ = Hu and Hu∗Hu = Hu² = I − 4Pu + 4Pu² = I. Hence, the transformation Hu is Hermitian, is unitary, and squares to the identity. Note that Hu u = −u, and if x is orthogonal to u, then Hu x = x. The Householder transformation Hu is thus a reflection map, fixing the hyperplane which is the orthogonal complement of the line spanned by u, and reflecting the line spanned by u.

So far, we have been working in Cn. We now move to Rn. Let a and b be linearly independent vectors in Rn which have the same length. Let P be the plane spanned by a and b. Reflecting P across the line spanned by a + b sends the vector a to b. We can extend this reflection of P to a Householder transformation on Rn which fixes every vector in P⊥. In fact, this transformation is the map Hu, where u = a − b.

Theorem 8.12. Let a and b be linearly independent vectors in Rn with ‖a‖ = ‖b‖. Then there is exactly one Householder transformation which maps a to b. This transformation is Hu, where u = a − b.

Proof. Since a and b have the same length, the vectors w = a + b and u = a − b are orthogonal. Note that a = (w + u)/2 and b = (w − u)/2. Since Hu u = −u while Hu w = w, we have Hu a = b.

Now suppose Hv is a Householder transformation mapping a to b. Let W denote the orthogonal complement of the line spanned by v. Then there is a unique scalar α and a unique vector w1 in W such that a = αv + w1. Similarly, there is a unique scalar β and a unique vector w2 in W such that b = βv + w2. So Hv a = b gives −αv + w1 = βv + w2. Hence, we must have w1 = w2 and β = −α. Put w = w1 = w2; then a = αv + w and b = −αv + w. Subtracting, we get a − b = 2αv. This tells us that v must be a scalar multiple of a − b, which is the vector u of the first part of the proof. But, for any scalar γ, we have Hγv = Hv. Thus the maps Hv and Hu are the same map. □

How much of Theorem 8.12 is valid in Cn?
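Before taking up that question, the real case of Theorem 8.12 is easy to check numerically. A minimal numpy sketch (the function name and the example vectors are our choices):

```python
import numpy as np

# Householder matrix H_u = I - 2 u u^T / (u^T u); with u = a - b it maps a to b
# whenever a and b are real vectors of the same length (Theorem 8.12).
def householder(u):
    u = u.reshape(-1, 1)
    return np.eye(len(u)) - 2.0 * (u @ u.T) / (u.T @ u)

a = np.array([3.0, 4.0])
b = np.array([5.0, 0.0])              # same length as a
H = householder(a - b)
assert np.allclose(H @ a, b)          # H sends a to b
assert np.allclose(H, H.T)            # H is symmetric
assert np.allclose(H @ H, np.eye(2))  # H squares to the identity
```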
The second part of the proof is valid in Cn, so, given linearly independent vectors a and b of the same length in Cn, there can be at most one Householder transformation which maps a to b; moreover, it must be the map Hu where u = a − b. However, the first part of the proof uses the fact that for real vectors a and b of the same length, the vectors w = a + b and u = a − b are orthogonal. This is valid in Rn, where the inner product is symmetric. But the standard inner product in Cn is conjugate symmetric. If a∗b is not a real number, then a∗b ≠ b∗a. In this case, ‖a‖ = ‖b‖ does not imply that a + b is orthogonal to a − b. Consider two vectors, a and b, of the same length in Cn. Using the polar factorization for complex numbers, we have a∗b = e^{iθ}|a∗b| for some 0 ≤ θ < 2π.
If we set b̂ = e^{−iθ}b, then a∗b̂ = |a∗b|, which is a real number. Then putting u = a − b̂, we will have Hu a = b̂ = e^{−iθ}b. The unitary transformation e^{iθ}Hu will map a to b.

Note that a Householder transformation Hu has eigenvalues +1 and −1 of multiplicities n − 1 and 1, respectively. The vector u is an eigenvector corresponding to the eigenvalue −1 and the orthogonal complement W of the line spanned by u is the eigenspace for the eigenvalue +1. The unitary transformation e^{iθ}Hu has eigenvalues e^{iθ} and −e^{iθ} of multiplicities n − 1 and 1, respectively.
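The phase adjustment just described can be sketched as follows (numpy; the example vectors are our choices):

```python
import numpy as np

# In C^n, write a*b = e^{i theta} |a*b| and set bhat = e^{-i theta} b; then
# u = a - bhat gives a Householder H_u with H_u a = bhat, and the unitary
# transformation e^{i theta} H_u maps a to b.
def householder(u):
    u = u.reshape(-1, 1)
    return np.eye(len(u), dtype=complex) - 2.0 * (u @ u.conj().T) / (u.conj().T @ u)

a = np.array([1.0 + 1.0j, 1.0])
b = np.array([0.0, np.sqrt(3.0) * 1j])       # same length: both have norm sqrt(3)
theta = np.angle(a.conj() @ b)               # a* b = e^{i theta} |a* b|
bhat = np.exp(-1j * theta) * b
H = householder(a - bhat)
assert np.allclose(H @ a, bhat)                      # H_u sends a to bhat
assert np.allclose(np.exp(1j * theta) * (H @ a), b)  # e^{i theta} H_u sends a to b
```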
8.3. Using Householder Transformations to Get Triangular, Hessenberg, and Tridiagonal Forms

We begin by using Householder transformations to obtain a QR factorization. We then use them to reduce matrices to some special forms which are useful in numerical computation, where one must consider the size of the errors and the efficiency of the algorithm. Since unitary transformations preserve the 2-norm, one can hope that algorithms which involve multiplication by unitary matrices will be stable. For purposes of efficiency and data storage, it helps if the matrices have some special pattern of zero and nonzero entries. Thus, results which tell us how to put a matrix in one of these special forms via unitary transformations are useful. In Section 2.7 we obtained the QR factorization from the Gram–Schmidt orthogonalization process. We now see how to use Householder transformations to compute a QR-type factorization.

Theorem 8.13. Let A be an m × n matrix. Then there is an m × m unitary matrix U such that U A is upper triangular. Furthermore, U can be obtained as the product of at most m Householder matrices.

Proof. We use induction on n. Let a1 denote the first column of A. If a1 is a scalar multiple of the first unit coordinate vector e1, leave it alone and move on to the next step. Otherwise, set r1 = ‖a1‖e1. Then a1 and r1 are linearly independent and there is a Householder matrix H1 such that H1 a1 = e^{iα}r1 for some real number α. The product H1 A has e^{iα}r1 in the first column. Let Â denote the (m − 1) × (n − 1) submatrix obtained by deleting row one and column one of H1 A. By the induction hypothesis, there is an (m − 1) × (m − 1) unitary matrix Û such that ÛÂ is upper triangular, where Û is the product of at most m − 1 Householder matrices Ĥi of size (m − 1) × (m − 1). For each Ĥi, set Hi = 1 ⊕ Ĥi to get an m × m Householder matrix; the product of these Hi's is 1 ⊕ Û. The product U = (1 ⊕ Û)H1 is then a product of at most m Householder matrices and U A is upper triangular. □

In case the induction argument is hiding the computational process: first construct the Householder matrix H1 to get the first column in the desired form. Then look at the last m − 1 entries of the second column of H1 A. Construct a Householder transformation Ĥ2 to get it in the desired form, and multiply H1 A on the left by H2 = 1 ⊕ Ĥ2. Then repeat the process on the third column of H2 H1 A, and so on. The key point is that at each stage, the zeroes in the already transformed columns are preserved.
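The column-by-column process just described can be sketched in numpy. This is an illustrative real-case sketch of the idea, not the book's algorithm verbatim; the sign choice for b is a standard stability convention:

```python
import numpy as np

# QR via Householder reflections, as in the proof of Theorem 8.13: at step k,
# a reflection built from column k of the current matrix zeroes the entries
# below the diagonal; zeroes created earlier are preserved.
def householder_qr(A):
    m, n = A.shape
    R = A.astype(float).copy()
    U = np.eye(m)                              # accumulates the product of reflections
    for k in range(min(m, n)):
        a = R[k:, k]
        norm_a = np.linalg.norm(a)
        b = np.zeros_like(a)
        b[0] = -norm_a if a[0] > 0 else norm_a  # sign choice avoids cancellation
        u = a - b
        if np.linalg.norm(u) > 1e-14:           # skip if already in the right form
            Hk = np.eye(m)                      # Hk = I_k + H_uhat on the trailing block
            Hk[k:, k:] -= 2.0 * np.outer(u, u) / (u @ u)
            R = Hk @ R
            U = Hk @ U
    return U, R                                 # U A = R is upper triangular

A = np.array([[2.0, 1.0], [1.0, 3.0], [2.0, 1.0]])
U, R = householder_qr(A)
assert np.allclose(U @ A, R)
assert np.allclose(U.T @ U, np.eye(3))          # U is orthogonal
assert np.allclose(np.tril(R, -1), 0.0)         # R is upper triangular
```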
Now we sharpen Theorem 8.13 a bit.

Theorem 8.14. Let A be an m × n matrix of rank r. Then A can be factored as A = QRP, where Q is an m × m unitary matrix, P is an n × n permutation matrix, and the m × n matrix R has the form

R = ( R11  R12 )
    (  0    0  ),

where R11 is r × r, upper triangular of rank r, the block R12 is r × (n − r), and the remaining m − r rows of R are zero. The matrix Q is the product of at most r Householder matrices.

Proof. Since A has rank r, it has r linearly independent columns. Note r ≤ m and r ≤ n. Let us initially assume that the first r columns of A are linearly independent. Let Ar denote the m × r submatrix formed by the first r columns of A. From Theorem 8.13, there is an m × m unitary matrix U, such that U is a product of at most r Householder matrices, and U Ar is upper triangular. We have

U Ar = ( R11 )
       (  0  ),

where R11 is a square r × r upper triangular matrix with nonzero entries on the diagonal. Now, U A = R has the form

( R11  R12 )
(  0   R22 ).

Since R has rank r, we must then have R22 = 0. Set Q = U∗. Then A = QR and R has the desired form. Furthermore, if U = Hr Hr−1 · · · H1 gives U as a product of r Householder matrices, then Q = U∗ = H1 H2 · · · Hr.

If the first r columns of A are not linearly independent, multiply A on the right by an n × n permutation matrix P such that the first r columns of AP are linearly independent. Then proceed as above to find Q such that QAP has the desired form. □

Theorem 8.14 gives a QR-type factorization for rectangular matrices. To get positive real numbers for the first r diagonal entries of R, factor out a unitary diagonal matrix D of size m and replace Q by QD. When m = n = r, i.e., when A is a square nonsingular matrix, we will have P = In and get the usual A = QR factorization. The nice thing about this method of computing the QR factorization is that at each step we only work with one column; the Householder matrix used is completely specified by that column.
When A is a real orthogonal matrix, the method of the proof may be used to show that A is a product of at most n Householder transformations; we leave the details as an exercise. For a unitary matrix U, the proof shows U may be written as a product of a diagonal unitary matrix and at most n Householder transformations.

Now we use Householder matrices to transform matrices into some special forms.

Definition 8.15. An m × n matrix A is upper bidiagonal if the only nonzero entries are on the diagonal, or on the line directly above the diagonal; that is, aij = 0 whenever j < i and whenever j > i + 1.

For example, a 4 × 6 upper bidiagonal matrix has the pattern

∗ ∗ 0 0 0 0
0 ∗ ∗ 0 0 0
0 0 ∗ ∗ 0 0
0 0 0 ∗ ∗ 0.
There is a similar definition for lower bidiagonal.

Definition 8.16. An m × n matrix A is tridiagonal if the only nonzero entries are on the main diagonal, subdiagonal, or superdiagonal; that is, aij = 0 when |i − j| > 1.

A 6 × 6 tridiagonal matrix has the pattern

∗ ∗ 0 0 0 0
∗ ∗ ∗ 0 0 0
0 ∗ ∗ ∗ 0 0
0 0 ∗ ∗ ∗ 0
0 0 0 ∗ ∗ ∗
0 0 0 0 ∗ ∗.
More generally, a matrix A is said to be banded with bandwidth 2p + 1 if aij = 0 whenever |i − j| > p. A tridiagonal matrix is banded with bandwidth 3.

Definition 8.17. A matrix A is upper Hessenberg if aij = 0 whenever i > j + 1. We say A is lower Hessenberg if aij = 0 whenever j > i + 1.

Thus, in an upper Hessenberg matrix, all entries below the subdiagonal are zero. We shall be working exclusively with upper Hessenberg matrices, and will use “Hessenberg” to mean “upper Hessenberg”. We now use Householder matrices to do the following.

• Reduce an m × n matrix A to upper bidiagonal form with a sequence of Householder transformations, each computed from entries of A.
• For any n × n matrix A, find a unitary matrix U, which is a product of at most (n − 1) Householder matrices, each computed from entries of A, such that U∗AU is upper Hessenberg.
• If A is an n × n Hermitian matrix, find a unitary matrix U, which is a product of at most (n − 1) Householder matrices, each computed from entries of A, such that U∗AU is tridiagonal.

The significance is not just the existence of unitary matrices which achieve these forms, but that we can specify a finite sequence of Householder transformations computed directly from the entries of A via the operations of arithmetic and taking square roots. In contrast, the Schur unitary triangularization of a matrix assumes we already know the eigenvalues. The eigenvalues are typically what we are trying to find, and these transformations can be used as a first step before using an iterative process to approximate the eigenvalues.

We first need some detail about the form of the Householder matrices to be used. Let a = (a1, . . . , an), where ak = e^{iθk}|ak| for θk real. Put

(8.10) b = (a1, a2, . . . , ak, e^{iθ_{k+1}} (Σ_{j=k+1}^n |aj|²)^{1/2}, 0, 0, . . . , 0).

The vectors a and b have the same entries in the first k positions, ‖a‖ = ‖b‖, and b has zeroes in the last (n − k − 1) entries. Note also that a∗b is real; the reason
we want this was explained at the end of the previous section (and is why we are fussing with ak = e^{iθk}|ak|). Now put

u = a − b = (0, 0, . . . , 0, a_{k+1} − e^{iθ_{k+1}} (Σ_{j=k+1}^n |aj|²)^{1/2}, a_{k+2}, . . . , an).

Then Hu a = b (this is where we need the fact that a∗b is a real number). Next, we define û to be the last n − k coordinates of u; thus

û = (a_{k+1} − e^{iθ_{k+1}} (Σ_{j=k+1}^n |aj|²)^{1/2}, a_{k+2}, . . . , an).
Since the first k coordinates of u are zeroes, the first k rows and columns of the projection matrix Pu = uu∗/(u∗u) are all zero, and Hu = Ik ⊕ Hû, where Hû has size (n − k) × (n − k). Note that for any n × n matrix A, the product Hu A has the same first k rows as A, and the product AHu has the same first k columns as A. This will be important as we proceed to obtain a bidiagonal form. We will be multiplying on the left by Householder matrices to create zeroes in the entries below the diagonal line; we want to preserve these zeroes as we multiply on the right by Householder matrices to create zeroes in entries above the superdiagonal line. Observe that the same procedure may be applied to a row vector a by using the product aHu.

Theorem 8.18. Let A be an m × n complex matrix. Then there are unitary matrices P and Q of sizes m × m and n × n, respectively, such that P AQ is upper bidiagonal. If μ = min{m, n}, then P and Q can each be obtained as a product of at most μ − 1 Householder matrices.

Proof. Let a1 denote the first column of A, and define b1 as in equation (8.10), using k = 0. Then with u1 = a1 − b1, the first column of Hu1 A has zeroes in the last m − 1 entries. Now we work on the first row of Hu1 A. Using a to denote the first row of Hu1 A, define b as in equation (8.10), using k = 1. We then form Hv1 such that the first row of Hu1 AHv1 has zeroes in the last n − 2 entries. Since Hv1 = 1 ⊕ Hv̂1, the first column of Hu1 AHv1 is the same as the first column of Hu1 A and hence has zeroes in the last m − 1 positions.

We could now finish off with an induction argument, but, just to see how the iterative process works, we examine the details of one more step. Let a2 denote the second column of Hu1 AHv1, and define b2 as in equation (8.10), using k = 1. Put u2 = a2 − b2. Since the first entry of u2 is zero, Hu2 = 1 ⊕ Hû2.
Since the first column of Hu1 AHv1 has zeroes in the last m − 1 entries, the product Hu2 Hu1 AHv1 will still have those zeroes in the first column, and in the second column the last m − 2 entries will be zero. Also, the form of Hu2 = 1 ⊕ Hû2 means that multiplying on the left by Hu2 does not change the first row; the zeroes already created in positions 3, . . . , n of that row are preserved. Use the second row of Hu2 Hu1 AHv1 to form Hv2 = I2 ⊕ Hv̂2, such that in Hu2 Hu1 AHv1 Hv2 the second row has zeroes in the last n − 3 positions. Note that the form of Hv2 means that the first two
columns of Hu2 Hu1 AHv1 are not changed, and also, in the first row, the zeroes in positions 3 through n are preserved. Continuing in this manner, for each successive row and column, we can create zeroes in all positions below the diagonal and above the superdiagonal by multiplying by Householder transformations on the left and right. At each stage, the fact that we use transformations of the form Hu = Ik ⊕ Hû assures that we retain the zeroes created in previous steps. □

Why might this be useful? After all, Theorem 8.18 is not giving us a similarity transformation on A. However, since P and Q are unitary, the matrices A and B = P AQ have the same singular values. Now, for the bidiagonal matrix B, the product B∗B is tridiagonal and Hermitian. There are good numerical algorithms for finding the eigenvalues of tridiagonal Hermitian matrices.

Schur's unitary triangularization theorem tells us that any square complex matrix A is unitarily similar to a triangular matrix T with the eigenvalues of A on the diagonal of T. However, to get this triangular form we need to know the eigenvalues of A. The eigenvalues of an n × n matrix are the roots of the characteristic polynomial pA(x), which has degree n. It is known that for n ≥ 5, there is no formula involving only rational operations and extraction of roots which gives the roots of a general polynomial of degree n. Hence it is not possible to compute the eigenvalues of a general n × n matrix with a finite number of such operations; one must use iterative methods which converge to the eigenvalues. Consequently, we know it is not possible to put A in triangular form with similarity transformations using only a finite number of computations with rational arithmetic and extraction of roots. However, we now see how to get to Hessenberg form with a finite sequence of Householder similarity transformations.

Theorem 8.19. Let A be an n × n complex matrix.
Then we can find a unitary matrix U, which is a product of at most (n − 2) Householder matrices, such that U AU∗ is in Hessenberg form. Furthermore, each of the Householder transformations is computed from the entries of A, using rational operations and square roots (and, in the case of a complex A, computing the polar form of some entries).

Proof. The basic idea is the same as used to prove Theorem 8.18, but in the first step we create zeroes in entries 3, . . . , n of column one. Let a1 be the first column of A, and define b1 as in equation (8.10), using k = 1. Then with u1 = a1 − b1, the first column of Hu1 A has zeroes in the last (n − 2) entries. Since Hu1 = 1 ⊕ Hû1, we see that multiplying Hu1 A on the right by Hu1 does not change the first column; hence Hu1 AHu1 still has those zeroes in the last (n − 2) entries of column one. More generally, at each stage, we multiply on the left by some Huk = Ik ⊕ Hûk to create the desired zeroes in column k, and then multiplying on the right by Huk does not change the first k columns. So, letting U be the product U = Hu(n−2) Hu(n−3) · · · Hu2 Hu1, and recalling that Householder matrices are Hermitian, we have U∗ = Hu1 Hu2 · · · Hu(n−3) Hu(n−2),
and U AU∗ is in Hessenberg form. From our definition of the matrices Hui, we see each is formed from the entries of the matrix using only arithmetic on complex numbers and a square root. □

If the original A is Hermitian, U AU∗ is also Hermitian. A matrix which is both Hermitian and Hessenberg is tridiagonal, so the process used to prove Theorem 8.19 yields a tridiagonal form when A is Hermitian. If A is a real symmetric matrix, the U will be a real orthogonal matrix.

How does Theorem 8.19 help? The idea is that algorithms, which may be “expensive” (in terms of operation count, storage, etc.) when applied to a general matrix, may be more practicable when applied to a matrix with special structure, such as a Hessenberg matrix or a tridiagonal matrix. Again, we refer the reader to texts on matrix computations for more information.

The product of two matrices in Hessenberg form need not have Hessenberg form; however, if A is upper triangular and B is upper Hessenberg, then both of the products AB and BA (when defined) are upper Hessenberg. We leave this for the reader to check; it is a straightforward matter of looking at the matrix multiplication and the positions of the zeroes in A and B. Note also that if A is upper Hessenberg and A = M R, where R is upper triangular with nonzero diagonal entries, then M must be upper Hessenberg. In particular, in a QR factorization of a nonsingular upper Hessenberg matrix, the Q must be upper Hessenberg. Note also that RQ will also be upper Hessenberg, a fact that is relevant for the QR algorithm, described in the next section.

Finally, let us note that Householder transformations are not the only tool used to produce zeroes in specified entries. In the Jacobi method applied to a Hermitian matrix, one uses a plane rotation at each step to create zeroes in a pair of positions (i, j) and (j, i), with i ≠ j [Hou64, pp. 179–181].
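A sketch of one such Jacobi step for a real symmetric matrix (the function name and the 2 × 2 example are our choices; the rotation angle t satisfying tan 2t = 2aij/(aii − ajj) is the standard one):

```python
import numpy as np

# One step of the Jacobi method: a plane rotation J in the (i, j) coordinate
# plane chosen so that the similarity J^T A J has zeroes in positions (i, j)
# and (j, i) of a symmetric matrix A.
def jacobi_rotation(A, i, j):
    t = 0.5 * np.arctan2(2.0 * A[i, j], A[i, i] - A[j, j])
    J = np.eye(A.shape[0])
    J[i, i] = J[j, j] = np.cos(t)
    J[i, j] = -np.sin(t)
    J[j, i] = np.sin(t)
    return J

A = np.array([[2.0, 1.0], [1.0, 3.0]])
J = jacobi_rotation(A, 0, 1)
B = J.T @ A @ J
assert abs(B[0, 1]) < 1e-12 and abs(B[1, 0]) < 1e-12   # off-diagonal pair zeroed
assert np.allclose(np.sort(np.diag(B)), np.sort(np.linalg.eigvalsh(A)))
```

For a 2 × 2 symmetric matrix a single rotation diagonalizes; in general the Jacobi method sweeps over pairs (i, j) repeatedly.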
The Givens transformation is also a plane rotation, and can be used to transform a vector or matrix entry to zero [LawHan74, pp. 10–11].
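A minimal sketch of a Givens rotation zeroing a chosen entry of a vector (the function name and example are our choices):

```python
import numpy as np

# A Givens rotation acting only in the (i, j) coordinate plane can rotate a
# chosen entry of a vector to zero.
def givens(x, i, j):
    # Returns G with (G x)[j] = 0, leaving all other coordinates of x fixed
    # except coordinate i.
    r = np.hypot(x[i], x[j])
    c, s = x[i] / r, x[j] / r
    G = np.eye(len(x))
    G[i, i], G[j, j] = c, c
    G[i, j], G[j, i] = s, -s
    return G

x = np.array([3.0, 0.0, 4.0])
G = givens(x, 0, 2)
assert np.allclose(G @ x, [5.0, 0.0, 0.0])   # entry 2 rotated into entry 0
assert np.allclose(G @ G.T, np.eye(3))       # G is orthogonal
```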
8.4. Some Methods for Computing Eigenvalues

First a disclaimer: the author knows little about numerical linear algebra. We refer those with a serious interest in this topic to one of the texts on the subject. However, we want to give some context for the contents of the previous sections and give some idea of the role the SVD and QR factorizations play in algorithms for computing things like the rank and the eigenvalues of a matrix.

From the theoretical point of view, computing the rank of a matrix A would not seem to be problematic: one does elementary row operations to reduce the matrix to echelon form and then counts the number of nonzero rows. However, even this most basic computation of linear algebra leads to interesting and complicated numerical problems. See any text on numerical linear algebra for examples illustrating the numerical pitfalls of basic Gaussian elimination. In addition to the issues of round-off error and the problems introduced if one pivots on a small diagonal entry, consider the fact that the rank of a matrix takes integer values, and thus it is not a continuous function of the matrix entries. For example, the matrix

( 1 0 )
( 0 ε )

has rank two when ε ≠ 0 and rank one when ε = 0. For numerical computation, numbers appear in decimal form, and we have only a finite number of digits; how do we
know whether a small number represents a small nonzero value, or represents a zero?

So, rather than use this “elementary” method of Gaussian elimination to determine the rank, one might use a singular value decomposition, achieved by unitary transformations. Since unitary transformations preserve length, one can hope the algorithm will not create growth in the size of the errors and will yield good approximations of the singular values. Theorem 8.7 then gives more information about what the actual value of the rank might be, and how close the matrix is to other matrices of various ranks. In particular, for an n × n matrix, the smallest singular value σn gives the distance to the nearest singular matrix, as measured by the Frobenius norm.

Finding the eigenvalues of a matrix involves theoretical and numerical problems. The roots of a general polynomial of degree greater than four cannot be computed exactly from the coefficients, using a finite number of operations involving only arithmetic and extraction of roots, so it is not possible to exactly compute the eigenvalues of a general matrix with such operations. To compute eigenvalues, one uses iterative processes, which (hopefully) give closer approximations to the eigenvalues at each step. We must then consider the convergence issues: Does the process converge, and, if so, how many steps will it take to achieve a desired level of accuracy? How stable is the algorithm; i.e., how sensitive is the outcome to small changes in the input? What about efficiency: how long might the algorithm take to compute an answer for an n × n matrix? The convention is to count the number of multiplications needed. To multiply a general n × n matrix A times a vector x requires n multiplications for each entry; thus n² multiplications to compute Ax. But this may be drastically reduced if the matrix is “sparse”. For example, with a tridiagonal A, we do at most three multiplications for each entry of Ax, for a total of at most 3n multiplications.
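Returning to the rank question: a sketch of estimating rank from the computed singular values, with a tolerance that is our arbitrary choice:

```python
import numpy as np

# Numerical rank via singular values: count those above a tolerance, rather
# than trusting exact zeros produced by elimination in floating point.
def numerical_rank(A, tol=1e-10):
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tol))

eps = 1e-13
A = np.array([[1.0, 0.0], [0.0, eps]])   # "rank two", but numerically rank one
assert numerical_rank(A) == 1
assert numerical_rank(np.array([[1.0, 0.0], [0.0, 0.5]])) == 2
```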
There is also the storage issue: we need n² entries for a general n × n matrix, but for a tridiagonal matrix we need only keep track of the 3n entries (actually 3n − 2) on the nonzero diagonal lines. We are not going to deal seriously with these issues here. We merely outline three standard methods for computing the eigenvalues of a matrix: the power method, the inverse power method, and the QR algorithm, in hopes of giving some sense of the relevance of the results of the previous section.

We begin with the power method. To keep things simple, let us assume we have an n × n diagonalizable matrix A, with n distinct eigenvalues λ1, . . . , λn, all of different moduli, ordered so that |λ1| > |λ2| > · · · > |λn|. Let v1, . . . , vn be an associated eigenbasis. In this case we say λ1 is a dominant eigenvalue and v1 is a dominant eigenvector. For any vector x, we have x = Σ_{i=1}^n ci vi and

(8.11) A^k x = Σ_{i=1}^n ci λi^k vi = λ1^k [ c1 v1 + Σ_{i=2}^n ci (λi/λ1)^k vi ].

Since |λi/λ1| < 1, we see that (λi/λ1)^k → 0 as k gets large. As long as c1 ≠ 0, the vector A^k x / λ1^k converges to c1 v1 as k → ∞, thus giving us the eigenvector v1. The ratio |λ2/λ1| is the largest of the ratios |λi/λ1|, and hence it determines the rate of
convergence. Of course we do not know λ1 at the start, but other scaling factors may be used to control the size of the entries in A^k x. The iteration takes the form

xk+1 = (1/sk) A xk,

where sk is the scaling factor. Some possible choices for sk might be ‖xk‖∞, or ‖xk‖2, or the Rayleigh quotient (xk∗ A xk)/(xk∗ xk). Once we have a good approximation to v1, we may proceed as in the first step of the proof of Schur's unitary triangularization Theorem 5.3 to reduce the problem to finding the eigenvalues of a matrix of size (n − 1) (a process known as deflation). We then move on to λ2, and so on. Although the power method has been superseded by other methods, it is still helpful for understanding more complicated methods.

We move on to the inverse power method. In its purest form, this is simply the power method applied to A^{−1}, which has the eigenvalues 1/λi, i = 1, . . . , n, with

1/|λ1| < 1/|λ2| < · · · < 1/|λn|.

Since A and A^{−1} have the same eigenvectors, the power method applied to A^{−1} should yield a good approximation to 1/λn and the eigenvector vn. One then deflates and proceeds to find 1/λn−1 and the eigenvector vn−1, and so on. In general, one does not compute A^{−1}, but finds xk+1 = (1/sk) A^{−1} xk by solving the system

(8.12) A xk+1 = (1/sk) xk.

We can accelerate things by using a scalar shift; moreover, this technique enables us to start with any of the eigenvalues. Suppose μ is a number which is closer to the eigenvalue λi than to any of the other eigenvalues: |μ − λi| < |μ − λj| for j ≠ i. The inverse power method, applied to (A − μI), will then lead us to (λi − μ)^{−1} and the eigenvector vi. The idea is for (λi − μ) to be small, so that (λi − μ)^{−1} is much larger than the values (λj − μ)^{−1} when j ≠ i. In this scenario, equation (8.12) is replaced by

(A − μk I) xk+1 = (1/sk) xk.

For the scalar shift μk, one would use some reasonable approximation to the eigenvalue we are trying to find. One choice is to use the Rayleigh quotient (xk∗ A xk)/(xk∗ xk).

We now come to the QR algorithm. The basic idea is to do a QR factorization of a square matrix, A = QR, then reverse the factors to form A1 = RQ. Then do a QR factorization of A1 and repeat, hoping that the process converges to something useful, specifically a triangular matrix which is unitarily similar to A. To get more specific, define the following recursive procedure to generate a sequence of matrices A = A1, A2, A3, . . .. Given Ak at stage k, do a QR factorization Ak = Qk Rk, and then define Ak+1 by Ak+1 = Rk Qk.
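The power and shifted inverse power iterations described above can be sketched as follows (the matrix, starting vectors, shift, and step counts are illustrative choices of ours):

```python
import numpy as np

# Power iteration xk+1 = (1/sk) A xk with sk = ||A xk||_2, using the Rayleigh
# quotient as the eigenvalue estimate.
def power_method(A, steps=200):
    x = np.arange(1.0, A.shape[0] + 1.0)   # any starting vector with c1 != 0
    for _ in range(steps):
        y = A @ x
        x = y / np.linalg.norm(y)          # scaling keeps the iterates bounded
    return (x @ A @ x) / (x @ x), x        # Rayleigh quotient and eigenvector

A = np.array([[2.0, 1.0], [1.0, 2.0]])     # eigenvalues 3 and 1
lam, v = power_method(A)
assert np.isclose(lam, 3.0)
assert np.allclose(A @ v, lam * v)         # v is (approximately) an eigenvector

# Shifted inverse power: solve (A - mu I) xk+1 = xk and normalize, approaching
# the eigenvalue of A nearest the shift mu (here mu = 0.9, targeting 1).
mu, x = 0.9, np.array([1.0, 0.0])
for _ in range(50):
    y = np.linalg.solve(A - mu * np.eye(2), x)
    x = y / np.linalg.norm(y)
assert np.isclose((x @ A @ x) / (x @ x), 1.0)
```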
Note that if A is nonsingular and in Hessenberg form, then so is Q, and hence each Ak retains the Hessenberg form. So, before starting the QR process, it can help to first apply a unitary similarity to reduce A to Hessenberg form. When A is symmetric, one can first reduce A to tridiagonal form before starting the QR algorithm; in this case all of the Ak's will be in tridiagonal form, thus reducing the computation count. Aside from lots of QR factorizations, what does this do for us? First, note that

(8.13)
Ak+1 = Rk Qk = Q∗k (Qk Rk )Qk = Q∗k Ak Qk .
So all of the Ak ’s in the sequence are unitarily similar to A. Repeated application of (8.13) yields (8.14)
Ak+1 = (Q∗k Q∗k−1 · · · Q∗2 Q∗1 )A(Q1 Q2 · · · Qk−1 Qk ).
Set (8.15)
Wk = Q1 Q2 · · · Qk−1 Qk ;
then Wk is unitary and (8.14) becomes Ak+1 = Wk∗ AWk. Here now is the magical result of this process: under suitable conditions, the sequence {Ak} converges. We are not going to give a proof, or explore in detail what the “suitable conditions” are, nor will we discuss the issue of rate of convergence. However, assuming the sequence converges, with Ak → R̂, then we expect the sequence of Wk's to converge to some unitary matrix W, and hence Qk → I as k → ∞. Combine this with Ak = Qk Rk to see that R̂ will be upper triangular. The diagonal entries of R̂ will be the eigenvalues of A.

Let us try to get some clue as to why one might expect convergence: why might this process lead to the upper triangular R̂? It is related to the power method, discussed above. Here is the connection between the Qk's, the Rk's, and powers of A.

(8.16)
A^k = (Q1 Q2 · · · Qk−1 Qk)(Rk Rk−1 · · · R2 R1).
Why is this? For k = 1, formula (8.16) simply says A = Q1 R1. So far, so good. Let's try A²:

A² = AA = Q1 R1 Q1 R1 = Q1 (R1 Q1) R1 = Q1 (Q2 R2) R1.

The reader should now do the calculation for A³ and will then be convinced that we can surely prove this formula by induction. So, assuming the formula correct for A^k, we have

A^{k+1} = A · A^k = (Q1 R1)(Q1 Q2 · · · Qk−1 Qk)(Rk Rk−1 · · · R2 R1)
= Q1 (R1 Q1)(Q2 · · · Qk−1 Qk)(Rk Rk−1 · · · R2 R1)
= Q1 (Q2 R2)(Q2 · · · Qk−1 Qk)(Rk Rk−1 · · · R2 R1).

Now replace the pair R2 Q2 with Q3 R3 to get

Q1 Q2 Q3 (R3 Q3 · · · Qk−1 Qk)(Rk Rk−1 · · · R2 R1).

Continue in this fashion, at each step replacing a product Ri Qi with Qi+1 Ri+1 until the formula is achieved.
Note that the product of the Q's in formula (8.16) is Wk, so

(8.17) A^k = Wk (Rk Rk−1 · · · R2 R1),
where Wk is unitary and each Ri is upper triangular. Let R(k) = Rk Rk−1 · · · R2 R1, and let r1(k) denote the 1, 1 entry of R(k). Then (8.17) tells us A^k e1 = r1(k) (column 1 of Wk). Assume A has a dominant eigenvalue λ1, and let v1 be an associated eigenvector. Then A^k e1 gets closer to being a scalar multiple of v1 as k → ∞; hence the first column of Wk should become parallel to v1 as k gets large. Now Ak+1 = Wk∗ AWk. Since the first column of Wk is approaching an eigenvector (for the eigenvalue λ1) of A as k → ∞, the matrix Wk∗ AWk approaches the form λ1 ⊕ Â, where Â is (n − 1) × (n − 1). One must now extend this idea to higher-dimensional subspaces to explain why Ak converges to an upper-triangular matrix. See texts on numerical linear algebra for more.
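A minimal sketch of the unshifted QR iteration on a symmetric matrix with distinct eigenvalues (practical implementations add shifts and a prior Hessenberg reduction; the matrix and step count here are illustrative choices of ours):

```python
import numpy as np

# Unshifted QR iteration: Ak+1 = Rk Qk where Ak = Qk Rk. Each Ak is unitarily
# similar to A; here the iterates approach an upper-triangular (in fact
# diagonal, since A is symmetric) matrix with the eigenvalues on the diagonal.
def qr_algorithm(A, steps=300):
    Ak = A.astype(float).copy()
    for _ in range(steps):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return Ak

A = np.array([[5.0, 2.0, 1.0], [2.0, 4.0, 1.0], [1.0, 1.0, 3.0]])
T = qr_algorithm(A)
assert np.allclose(np.tril(T, -1), 0.0, atol=1e-6)   # (nearly) upper triangular
assert np.allclose(np.sort(np.diag(T)), np.sort(np.linalg.eigvalsh(A)), atol=1e-6)
```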
8.5. LDU Factorization

The contents of this section apply in any field F. A system of m linear equations in the n unknowns x1, . . . , xn may be expressed as a single matrix equation Ax = b, where A is an m × n matrix, b has m coordinates, and the ith equation of the system corresponds to the ith entry of Ax = b:

ai1 x1 + ai2 x2 + · · · + ain xn = bi.

A standard way to solve such a system (at least for small systems being done by hand calculation) is to do elementary operations to reduce the system to row echelon or reduced row echelon form. We assume the reader is familiar with this method, typically covered in first courses in linear algebra. The elementary operations on the equations correspond to the following types of elementary row operations performed on the augmented matrix (A | b).

(1) For i ≠ j, add a multiple of row j to row i.
(2) Exchange two rows.
(3) Multiply a row by a nonzero scalar.

In Gaussian elimination, we use a sequence of these row operations to reduce the matrix to row echelon form, which is upper triangular. The basic idea is to add multiples of each row to the rows below it, in order to create zeroes below the diagonal entries. These row operations preserve the row space, and hence preserve the rank of the matrix. A row echelon form, E, for a matrix of rank r is specified as follows.

(1) The first r rows of E are nonzero, and the last m − r rows are rows of zeroes.
(2) The first nonzero entry in each of the first r rows is a one, called the leading one.
(3) If i < j ≤ r, then the leading one in row i is to the left of the leading one in row j.

Thus, the nonzero entries form a “staircase” pattern: the leading ones form the steps and all entries below the staircase are zeroes.
The usual procedure is to work from left to right, and from top to bottom, adding multiples of rows to lower rows in order to create the zeroes below the staircase. Row exchanges are done as needed. The leading ones are obtained by multiplying each row by a suitable scalar. Each elementary row operation can also be expressed as left multiplication by an elementary matrix. For example, if E = In + cEij, where Eij is the matrix with a one in position i, j and zeroes elsewhere, then the product EA is the matrix obtained by adding c times row j of A to row i of A. To exchange two rows i and j, use the permutation matrix obtained by exchanging rows i and j of In; to multiply row i by c, use the diagonal matrix with c in the ith diagonal entry and ones in the remaining diagonal entries.

Consider now the case of an n × n matrix A of rank r, for which the first r rows are linearly independent, and for which it is possible to get to an upper-triangular form with nonzero entries in the first r diagonal positions, using only operations of the first type, in which we always add multiples of a row to lower rows. For example, if a11 ≠ 0, we would start by adding suitable multiples of row one to rows two through n to create zeroes in entries two through n of the first column. If the resulting matrix then had a nonzero entry in the (2, 2) position, we would then add suitable multiples of row two to rows three through n to create zeroes in entries three through n in the second column. Assume that we continue to get nonzero entries in the diagonal positions up to, and including, position (r, r). Since rank(A) = r, we can then reduce the last n − r rows to rows of zeroes by adding multiples of the first r rows to those last n − r rows. We can then reduce A to an upper-triangular form T, using only operations of the first type with i > j. Each of these may be done by multiplying A on the left by an elementary matrix E = In + cEij, where i > j.
So E is lower triangular, and every diagonal entry of E is a one. Let E1, . . . , Et be a list of the elementary matrices used to reduce A to the upper-triangular matrix T; then Et Et−1 · · · E3 E2 E1 A = T. Set L̂ = Et Et−1 · · · E3 E2 E1. Then L̂ is lower triangular with ones on the main diagonal, and L̂A = T. Put L = L̂−1. The matrix L is also lower triangular and has ones on the main diagonal. We have A = LT. Let d1, . . . , dn be the diagonal entries of T; note that d1, . . . , dr are nonzero, but dr+1 = dr+2 = · · · = dn = 0. Put D = diag(d1, . . . , dn). Then we can factor T as T = DU, where U is upper triangular, with ones on the diagonal. Note that the last n − r rows of T are zero. Thus, we have the factorization A = LDU, where L is lower triangular with ones on the diagonal, D is diagonal, and U is upper triangular with ones on the diagonal. The nonzero diagonal entries of D are sometimes called the pivots.

Theorem 8.20. Let A be a nonsingular matrix. Suppose it is possible to factor A as A = LDU, where L is lower triangular with ones on the diagonal, U is upper triangular with ones on the diagonal, and D is diagonal. Then the following hold.
(1) L, D, and U are uniquely determined by A.
(2) A is symmetric if and only if U = L^T.
(3) A is Hermitian if and only if D has real diagonal entries and U = L^∗.
(4) A is positive definite if and only if D has positive diagonal entries.
8. Some Matrix Factorizations
Proof. Suppose we have A = L1 D1 U1 = L2 D2 U2, where Li, Di, Ui are of the required forms for i = 1, 2. Since A is nonsingular, all the diagonal entries of each Di are nonzero. Then

(8.18) L2^{−1} L1 D1 = D2 U2 U1^{−1}.

The inverse of a lower-triangular matrix with ones on the diagonal is of that form, and the product of two such matrices is also of that form, so L2^{−1} L1 is lower triangular with ones on the diagonal. Similarly, U2 U1^{−1} is upper triangular with ones on the diagonal. From the diagonal entries on both sides of (8.18), we have D1 = D2 = D. Now, the left-hand side of (8.18) is a lower-triangular matrix while the right-hand side is upper triangular. So the only way they can be equal is to be diagonal matrices. Since all the diagonal entries of Di, i = 1, 2, are nonzero, this tells us L2^{−1} L1 and U2 U1^{−1} are both diagonal matrices with ones on the diagonal; thus, L2^{−1} L1 = I and U2 U1^{−1} = I. Hence L1 = L2 and U1 = U2.

Now suppose A = LDU is a symmetric matrix. From A = A^T, we have LDU = U^T D L^T. Since U^T is lower triangular with ones on the diagonal, while L^T is upper triangular with ones on the diagonal, part (1) tells us U = L^T. Conversely, if U = L^T, then A = LDL^T is clearly symmetric. A similar argument, replacing the transposes with transpose conjugates, proves the third statement.

Finally, suppose A is positive definite. For the real symmetric case, we have A = LDL^T, and for the Hermitian case we have A = LDL^∗. In the real case, A is congruent to D; in the Hermitian case, A is conjunctive to D. Hence, A is positive definite if and only if the diagonal entries of D are positive.
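The elimination procedure behind the factorization can be sketched in a few lines. The example matrix below is mine, and the code assumes no row exchanges are needed (all pivots appear in order); it is an illustration, not robust numerical software:

```python
def ldu(A):
    """Elimination without row exchanges: returns L, the pivots d, and a unit
    upper triangular U with A = L diag(d) U.  A sketch that assumes every
    pivot is nonzero (no pivoting)."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j + 1, n):
            m = U[i][j] / U[j][j]        # multiplier: row_i -= m * row_j
            L[i][j] = m
            U[i] = [U[i][k] - m * U[j][k] for k in range(n)]
    d = [U[j][j] for j in range(n)]      # the pivots: diagonal of T
    Uunit = [[U[i][j] / d[i] for j in range(n)] for i in range(n)]
    return L, d, Uunit

A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
L, d, U = ldu(A)
print(d)  # → [2.0, 1.0, 2.0]  (the pivots)
```

Multiplying back, L · diag(d) · U recovers A exactly, with L and U unit triangular as in Theorem 8.20.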
Remark 8.21. For a symmetric matrix, the factorization A = LDL^T is sometimes called the Cholesky decomposition. When A is positive definite, D has positive diagonal entries, so each diagonal entry has a positive square root. The matrix L̃ = L√D is then lower triangular with positive diagonal entries, and A = L̃L̃^T. For the case of a positive definite Hermitian A, we will have A = L̃L̃^∗.

Now let A[k] denote the k × k principal submatrix formed by the first k rows and columns of A. The value of det(A[k]) will not be changed by the row operations of type one for which i > j, so det(A[k]) = d1 d2 d3 · · · dk. For k ≤ r, we have

dk = det(A[k]) / det(A[k−1]).
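The pivot formula can be checked directly on a small example. The sketch below (pure Python, cofactor-expansion determinant, illustration only; the matrix is the same small example used above) computes the leading principal minors of a 3 × 3 matrix and recovers the pivots as their successive ratios:

```python
def det(M):
    """Determinant by cofactor expansion along the first row (tiny matrices only)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]

# leading principal minors det(A[1]), det(A[2]), det(A[3])
minors = [det([row[:k] for row in A[:k]]) for k in range(1, 4)]
# pivots: d1 = det(A[1]),  dk = det(A[k]) / det(A[k-1])
pivots = [minors[0]] + [minors[k] / minors[k - 1] for k in range(1, 3)]
print(minors, pivots)  # → [2.0, 2.0, 4.0] [2.0, 1.0, 2.0]
```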
In general, one also needs row operations of the second type, where we exchange rows. If all of these row exchanges are combined and represented by a single permutation matrix P , then we can reduce P A by operations of type one, and thus get a factorization P A = LDU , with L, D, U as described above.
Exercises

1. Show ‖A‖2 ≤ ‖A‖F.

2. For
A = ( 1  α
      0  1 ),
find ρ(A), ‖A‖2, and ‖A‖F. Show that if α ≠ 0, we have ρ(A) < ‖A‖2 < ‖A‖F.

3. Let Q be an n × n real orthogonal matrix; assume Q ≠ I.
(a) Show that Q may be written as the product of a finite number of Householder matrices, and that the number needed is at most n. Thus, any orthogonal matrix may be expressed as a product of reflection matrices, each reflection being reflection across a hyperplane. Hint: See the proof of Theorem 8.14.
(b) Show that if det Q = 1, the number of Householder matrices needed in Exercise 3(a) must be even, but if det Q = −1, the number used must be odd.
(c) Write the 2 × 2 rotation matrix
( cos θ  −sin θ
  sin θ   cos θ )
as a product of two reflection matrices.

4. Let A be an m × n matrix of rank r, and let σ1 ≥ σ2 ≥ · · · ≥ σr be the nonzero singular values of A. Let S be the (m + n) × (m + n) matrix
S = ( 0m  A
      A∗  0n ).
(a) For x ∈ Cm and y ∈ Cn, show that (x; y) is an eigenvector of S, corresponding to eigenvalue λ, if and only if (x; −y) is an eigenvector of S with eigenvalue −λ.
(b) Compute S². Use this, together with the first part, to show that the eigenvalues of S are ±σ1, ±σ2, . . . , ±σr, plus the eigenvalue 0 with multiplicity m + n − 2r.
(c) Use this to find the eigenvalues of the matrix
( 0  0  α
  0  0  β
  ᾱ  β̄  0 ).
(d) More generally, for a column vector x ∈ C^{n−1}, find the eigenvalues of the n × n matrix
( 0_{n−1}  x
  x∗       0 ).

5. Prove Theorem 8.10.

6. Prove Theorem 8.11.

7. Show that if A is upper triangular and B is upper Hessenberg, then both of the products AB and BA (when defined) are upper Hessenberg.

8. Show that if A is upper Hessenberg and A = MR, where R is upper triangular with nonzero diagonal entries, then M must be upper Hessenberg.
9. Suppose A is a 2 × 2 matrix of rank one with A = LDU, where L is lower triangular with ones on the diagonal, D = diag(d1, d2) is diagonal, and U is upper triangular with ones on the diagonal. Show that if d2 = 0, the L and U are uniquely determined by A, but if d1 = 0, then L, U are not uniquely determined by A.

10. Show that for D = diag(1, 0, 0), it is possible to have L1 DU1 = L2 DU2, with the Li's lower triangular with ones on the diagonal, the Ui's upper triangular with ones on the diagonal, but L1 ≠ L2 and U1 ≠ U2. So the first part of Theorem 8.20 need not hold if A is singular.

11. Suppose A is a 3 × 3 matrix of rank two, with A = LDU, where L is lower triangular with ones on the diagonal, D = diag(d1, d2, 0), and U is upper triangular with ones on the diagonal. Show that the L, D, and U are uniquely determined by A.

12. Suppose A is an n × n matrix of rank n − 1, with A = LDU, where L is lower triangular with ones on the diagonal, D = diag(d1, . . . , dn) is diagonal, with dn = 0, and U is upper triangular with ones on the diagonal. Show that the first three parts of Theorem 8.20 hold for such a matrix, and if we change "positive definite" to "positive semidefinite", then this modified version of the last part also holds.
Chapter 9
Field of Values
In this chapter, we deal with square matrices with entries from C. If A is an n × n complex matrix and x ∈ Cn, then x∗Ax is a complex number.

Definition 9.1. The set of all complex numbers x∗Ax, where ‖x‖ = 1, is called the field of values of A and is denoted F(A). Thus,
F(A) = {x∗Ax : x ∈ Cn, ‖x‖ = 1}.

The field of values is also called the numerical range. We limit our attention to the numerical range of an n × n matrix, but this concept is also of interest for linear operators on infinite-dimensional complex spaces.
9.1. Basic Properties of the Field of Values

Let S = {x ∈ Cn : ‖x‖ = 1} denote the unit sphere in Cn. Then we have F(A) = {x∗Ax : x ∈ S}. For any n × n complex matrix A, the map f(x) = x∗Ax is continuous, and F(A) is the image of S under f. Since S is a compact connected set and f is continuous, F(A) is a compact connected subset of C. With the usual representation of C as the complex plane, i.e., identifying the complex number z = x + iy with the point (x, y) in R2, we may regard F(A) as a compact connected subset of R2. The set F(A) has another important property: it is convex. We prove this in a later section. First we establish some easier facts.

Theorem 9.2. Let A be an n × n complex matrix.
(1) For any unitary matrix U, we have F(A) = F(U∗AU).
(2) F(A) contains all of the eigenvalues of A.
(3) F(A) contains all of the diagonal entries of A.
(4) If A[k] is any principal submatrix of A, then F(A[k]) ⊆ F(A).
(5) If P is an n × k matrix with orthonormal columns, then F(P∗AP) ⊆ F(A).
Proof. Let U be unitary, and put y = Ux. Since U gives a bijection of S onto itself,
F(A) = {y∗Ay : y ∈ S} = {y∗Ay : y = Ux and x ∈ S} = {x∗U∗AUx : x ∈ S} = F(U∗AU).
To prove (2), let λ be an eigenvalue of A, and let x be an associated eigenvector of length one. Then x∗Ax = λx∗x = λ. Part (3) follows from ei∗Aei = aii. For (4), note that we may choose a permutation matrix Q such that A[k] is formed from the first k rows and columns of Q∗AQ. Since any permutation matrix is unitary, item (1) tells us F(Q∗AQ) = F(A). So, without loss of generality, we may assume that A[k] comes from the first k rows and columns of A. Now let z ∈ Ck with ‖z‖ = 1. Put ẑ = (z1, . . . , zk, 0, 0, . . . , 0). Then ‖ẑ‖ = 1 and z∗A[k]z = ẑ∗Aẑ, so F(A[k]) ⊆ F(A). For (5), let z ∈ Ck with ‖z‖ = 1. Since P has orthonormal columns, ‖Pz‖ = 1; note that Pz ∈ Cn. So z∗P∗APz = (Pz)∗A(Pz) ∈ F(A), and hence F(P∗AP) is a subset of F(A).

Remark 9.3. Note that (4) is a special case of (5): if A[k] is formed from rows and columns i1, . . . , ik, let P be the matrix with columns eij, for j = 1, . . . , k. Item (3) is the special case of (4) in which k = 1.

Theorem 6.15 tells us the field of values of a Hermitian matrix A is the closed, real line segment [λ1, λn], where λ1 is the smallest eigenvalue of A and λn is the largest eigenvalue of A. This is a special case of the following result about normal matrices.

Theorem 9.4. Let A be an n × n normal matrix with eigenvalues λ1, . . . , λn. Then F(A) is the closed convex hull of the points λ1, . . . , λn.

Proof. Let U be a unitary matrix such that U∗AU = D = diag(λ1, . . . , λn). By Theorem 9.2, F(A) = F(D). Now,
F(D) = { Σ_{i=1}^n |xi|² λi : Σ_{i=1}^n |xi|² = 1 },
which is exactly the set of all convex combinations of λ1, . . . , λn.
Example 9.5. Suppose the 3 × 3 matrix A is normal and has eigenvalues 1 + i, 3 − i, and 4 + 3i. Then F(A) is the triangular region with vertices 1 + i, 3 − i, 4 + 3i, represented in the plane as the points (1, 1), (3, −1), and (4, 3) (see Figure 9.1).

The converse of Theorem 9.4 is not true; it is possible to have matrices which are not normal, but for which the field of values is the convex hull of the eigenvalues. However, the size of the matrix must be at least 5 × 5 for this to happen [MoMa55], [ShTa80]. We will see why later in this chapter.

Note the following facts, to be used in the next section. For any matrix A and complex number γ, we have F(γI + A) = γ + F(A) and F(γA) = γF(A). In particular, for γ = e^{iθ}, the field of values F(e^{iθ}A) is obtained from F(A) by rotating F(A) by θ.
[Figure 9.1. Example 9.5: the triangular region with vertices (1, 1), (3, −1), and (4, 3).]
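Theorem 9.4 can be checked numerically on the matrix of Example 9.5. In the sketch below (pure Python, illustration only; the sampling scheme and the point-in-triangle test are mine), random unit vectors x ∈ C³ are applied to D = diag(1+i, 3−i, 4+3i), and each value x∗Dx = Σ|xi|²λi is verified to lie in the triangle with vertices (1, 1), (3, −1), (4, 3):

```python
import random

random.seed(0)
# eigenvalues of the normal (here diagonal) matrix of Example 9.5
eigs = [complex(1, 1), complex(3, -1), complex(4, 3)]

def in_triangle(z, vs, tol=1e-9):
    """Point-in-triangle test via consistent cross-product signs."""
    crosses = []
    for i in range(3):
        a, b = vs[i], vs[(i + 1) % 3]
        crosses.append((b.real - a.real) * (z.imag - a.imag)
                       - (b.imag - a.imag) * (z.real - a.real))
    return all(c >= -tol for c in crosses) or all(c <= tol for c in crosses)

inside = True
for _ in range(500):
    v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(3)]
    norm = sum(abs(z) ** 2 for z in v) ** 0.5
    x = [z / norm for z in v]
    # For D = diag(eigs):  x* D x = sum |x_i|^2 eig_i, a convex combination
    w = sum(abs(x[i]) ** 2 * eigs[i] for i in range(3))
    inside = inside and in_triangle(w, eigs)
print(inside)  # → True
```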
9.2. The Field of Values for Two-by-Two Matrices

A famous result, going back to Hausdorff [Haus19] and Toeplitz [Toep18], is that F(A) is a convex set. This is shown by first reducing the general case to the 2 × 2 case, and then using the fact that the field of values of a 2 × 2 matrix is the closed region bounded by an ellipse. This fact about 2 × 2 matrices is of interest in its own right and will be used to prove other facts about the field of values. The proof involves a direct computation, with the following facts used to simplify the work.

For any square complex matrix A there is a unitary matrix U such that U∗AU is upper triangular; the diagonal entries of U∗AU are the eigenvalues of A. For a 2 × 2 matrix A, we have
U∗AU = ( λ1  b
         0   λ2 ).
Write b in polar form, b = re^{iθ}, where r = |b|, and apply the diagonal unitary similarity V = diag(1, e^{−iθ}) to get
V∗(U∗AU)V = ( λ1  r
              0   λ2 ),
where r = |b| is a nonnegative real number. Since UV is unitary, A is unitarily similar to this matrix. Furthermore, since the Frobenius norm is invariant under unitary similarity, ‖A‖F² = |λ1|² + |λ2|² + r², giving the following formula for r:

(9.1) r = sqrt(‖A‖F² − |λ1|² − |λ2|²).

Theorem 9.6. Let A be a 2 × 2 complex matrix with eigenvalues λ1 and λ2. Let r = sqrt(‖A‖F² − |λ1|² − |λ2|²). Then F(A) consists of the points inside and on the ellipse with foci λ1 and λ2, and minor axis of length r.

Proof. If A has eigenvalues λ1 and λ2, then B = A − ((λ1 + λ2)/2)I has eigenvalues ±λ, where λ = (λ1 − λ2)/2. Let λ = se^{iθ}; then e^{−iθ}B has eigenvalues ±s. Since translation and rotation do not change the shape of the field of values, it will suffice to prove the result for the matrix
A = ( s  r
      0  −s ),
where r and s are nonnegative real numbers. Also, for any vector x of length one, we have (e^{iθ}x)∗A(e^{iθ}x) = x∗Ax, so we need only consider vectors x in which x1 is real. Hence, for ‖x‖ = 1, we may assume x has the form
x = ( cos(φ/2)
      e^{iθ} sin(φ/2) ).
Now we compute x∗Ax:

x∗Ax = ( cos(φ/2)  e^{−iθ} sin(φ/2) ) A ( cos(φ/2) ; e^{iθ} sin(φ/2) )
     = s cos²(φ/2) + re^{iθ} sin(φ/2) cos(φ/2) − s sin²(φ/2)
     = s cos φ + (r/2) e^{iθ} sin φ,

where we used the trig identities cos²(φ/2) − sin²(φ/2) = cos φ and 2 sin(φ/2) cos(φ/2) = sin φ in the last step. So, F(A) is the set of all numbers of the form

(9.2) s cos φ + (r/2) e^{iθ} sin φ,

where the real parameters φ and θ vary from 0 to 2π. If s = 0, this gives the set of all numbers (r/2)e^{iθ} sin φ, which is the closed disk of radius r/2 centered at (0, 0). We get the boundary circle when |sin φ| = 1 and interior points when |sin φ| < 1. Now let s ≠ 0. The Cartesian coordinates of the points corresponding to the complex numbers in (9.2) are

(9.3) X = s cos φ + (r/2) cos θ sin φ,
      Y = (r/2) sin θ sin φ.

We use spherical coordinates and a shear map to show that the set of all such points is the ellipse described in the statement of the theorem. (Thanks to Jim Wiseman who suggested this approach.) Recall the usual formulas for spherical coordinates in R3:

(9.4) x = ρ cos θ sin φ,
      y = ρ sin θ sin φ,
      z = ρ cos φ.

If we put ρ = r/2 and let θ range from 0 to 2π while φ ranges from 0 to π, we get the surface of the sphere of radius r/2, centered at the origin. Let S denote this surface. Since ρ = r/2, we have
cos φ = z/ρ = 2z/r.
Comparing equations (9.3) and (9.4) gives
X = (2s/r)z + x,
Y = y.
Now consider the following shear map on the space R3:
T(x, y, z) = (x + (2s/r)z, y, z).
This linear transformation maps the surface S onto the surface of an ellipsoid, which we label E. The vertical projection of the surface E onto the x, y plane is then an ellipse. The map T only shears in the x direction; the shear depends only on the z coordinate. The ellipsoid E is symmetric about the x, z plane and hence the projected ellipse is symmetric about the x-axis. Therefore, the projected ellipse has its principal axes along the x and y coordinate axes. Furthermore, the vertices on the y-axis are the points (0, ±r/2). To locate the vertices on the x-axis, we need to find the extreme values of X = s cos φ + (r/2) cos θ sin φ. We have

s cos φ + (r/2) cos θ sin φ = (cos φ, sin φ) · (s, (r/2) cos θ).

Since ‖(cos φ, sin φ)‖ = 1, the Cauchy–Schwarz inequality gives

(9.5) |s cos φ + (r/2) cos θ sin φ| ≤ sqrt(s² + (r/2)² cos² θ).

Choose θ = 0, and choose φ so that tan φ = r/(2s). The vectors (cos φ, sin φ) and (s, (r/2) cos θ) will then be parallel and hence equality will hold in (9.5), giving

s cos φ + (r/2) sin φ = sqrt(s² + (r/2)²).

Hence, the vertices of the ellipse which lie on the x-axis are the points ±(sqrt(s² + (r/2)²), 0). So the major axis of the ellipse is on the x-axis, the foci are at (±s, 0), and the minor axis has length r. The matrix A is normal if and only if r = 0; in this case the ellipse degenerates into the line segment with endpoints λ1 and λ2.

Theorem 9.6 is a special case of a more general result for n × n matrices. For an n × n matrix, A = H + iK, with H, K Hermitian, it turns out that F(A) is determined by the polynomial det(zI + xH + yK), and, viewed as an equation in projective line coordinates, the equation det(zI + xH + yK) = 0 gives an algebraic curve of class n. The foci of that curve correspond to the eigenvalues of A, and the closed convex hull of the real part of that curve is F(A) [Kipp51, Murn32, Sha82].

We can use the decomposition A = H + iK, where H and K are Hermitian, to find a rectangle containing F(A). We have x∗Ax = x∗Hx + ix∗Kx. Put z = x∗Ax = a + ib, where a and b are real. Since x∗Hx and x∗Kx are real numbers, a = x∗Hx and b = x∗Kx. Let λmin(H) and λmax(H) denote the smallest and largest eigenvalues of H, with similar notation for K. Then we have the inequalities λmin(H) ≤ a ≤ λmax(H) and λmin(K) ≤ b ≤ λmax(K). Hence, the set F(A) is contained in the rectangle [λmin(H), λmax(H)] × [λmin(K), λmax(K)]. Furthermore, by choosing x to be an eigenvector of H or K corresponding to their smallest and largest eigenvalues, we see that F(A) does meet each of the four sides of this rectangle.
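The ellipse description in Theorem 9.6 is easy to probe numerically. Using the parametrization (9.2) with the sample values s = 1 and r = 1/2 (my choice, for illustration only), the sketch below samples points of F(A) on a grid and checks the ellipse inequality X²/(s² + (r/2)²) + Y²/(r/2)² ≤ 1, with the largest sampled value coming out close to 1, since the boundary is attained:

```python
import math

s, r = 1.0, 0.5
a2 = s * s + (r / 2) ** 2   # squared semi-major axis length
b2 = (r / 2) ** 2           # squared semi-minor axis length

worst = 0.0
for i in range(200):
    phi = math.pi * i / 199
    for j in range(200):
        theta = 2 * math.pi * j / 200
        # point of F(A) from the parametrization (9.2)
        X = s * math.cos(phi) + (r / 2) * math.cos(theta) * math.sin(phi)
        Y = (r / 2) * math.sin(theta) * math.sin(phi)
        worst = max(worst, X * X / a2 + Y * Y / b2)
print(0.99 < worst <= 1 + 1e-9)  # → True: samples fill the ellipse but never leave it
```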
9.3. Convexity of the Numerical Range

From Theorem 9.6 we know that the numerical range of a 2 × 2 matrix is a convex set. We use this to prove the result for the n × n case.

Theorem 9.7. For any n × n complex matrix A, the field of values F(A) is a convex set.

Proof. Let p, q ∈ F(A). We have to show that any point on the line segment joining p and q is in F(A). This holds trivially if p = q, so assume p ≠ q. Let x and y be vectors of length one such that p = x∗Ax and q = y∗Ay. Since p ≠ q, the vectors x and y are linearly independent. Let V be the two-dimensional subspace spanned by x and y, and let u1, u2 be an orthonormal basis for V. Then there are scalars ci, di, i = 1, 2, such that
x = c1 u1 + c2 u2,
y = d1 u1 + d2 u2.
Put c = (c1, c2)^T and d = (d1, d2)^T. Since u1, u2 are orthonormal and ‖x‖ = ‖y‖ = 1, we have |c1|² + |c2|² = |d1|² + |d2|² = 1, so ‖c‖ = ‖d‖ = 1. Let R be the n × 2 matrix with columns u1, u2. Then x = Rc and y = Rd. We have p = x∗Ax = c∗R∗ARc and q = y∗Ay = d∗R∗ARd, so p and q are points in F(R∗AR). But R∗AR is a 2 × 2 matrix, so we know F(R∗AR) contains the line segment with endpoints p and q. Part (5) of Theorem 9.2 tells us F(R∗AR) ⊆ F(A), so F(A) contains the line segment with endpoints p and q.
Theorem 9.8 (Röseler [Rös33]). Let A be an n × n matrix. If an eigenvalue λ of A is on the boundary of F(A), there is a unitary matrix U such that U∗AU = λ ⊕ A2, where A2 is of order n − 1.

Proof. Suppose the eigenvalue λ is on the boundary of F(A). Let u1 be an associated eigenvector of length one, and choose u2, . . . , un so that u1, . . . , un is an orthonormal basis for Cn. Let U be the unitary matrix with columns u1, . . . , un. Then U∗AU has the form

(9.6) ( λ  a12  a13  · · ·  a1n
        0
        ⋮          Â
        0 ),

where Â is (n − 1) × (n − 1). Now, F(A) = F(U∗AU), so λ is on the boundary of the field of values of the matrix in (9.6). We use this to show a1j = 0 for j = 2, . . . , n. Let
Bj = ( λ  a1j
       0  ajj )
be the principal 2 × 2 submatrix of U∗AU formed from rows and columns 1 and j. Then F(Bj) ⊆ F(A). If a1j ≠ 0, then F(Bj) is a nondegenerate ellipse and λ is one of the foci, so λ lies in the interior of F(Bj), and hence in the interior of F(A). This contradicts the fact that λ lies on the boundary of F(A). Therefore, a1j = 0 for j = 2, . . . , n.
We now have the tools to consider for which values of n the converse of Theorem 9.4 holds.

Theorem 9.9. If A is an n × n matrix with n ≤ 4, and F(A) is the convex hull of the eigenvalues of A, then A must be normal.

Proof. Suppose F(A) is the convex hull of the eigenvalues of A. Since n ≤ 4, there are at most four distinct eigenvalues; hence F(A) is either a point, a line segment, a triangle, or a quadrilateral. For the triangle or quadrilateral, the vertices are eigenvalues. For a line segment considered as a subset of the plane, every point of the line segment is a boundary point. Hence, we see that in all cases, we are guaranteed that at least n − 1 of the n eigenvalues are boundary points of F(A). (The only case in which an eigenvalue would not be on the boundary is when A is 4 × 4, the set F(A) is a triangle, and the fourth eigenvalue lies in the interior of the triangle.) So Theorem 9.8 tells us A is normal.

It should now be clear why the converse of Theorem 9.4 fails for n ≥ 5 [MoMa55, ShTa80]. Choose three points, λ1, λ2, λ3, in the complex plane that are the vertices of a nondegenerate triangle T. Now select two points λ4, λ5 in the interior of T, and let r be a positive real number small enough that the ellipse with foci λ4, λ5 and minor axis of length r is contained in the interior of T. Then the matrix

A = ( λ1  0   0   0   0
      0   λ2  0   0   0
      0   0   λ3  0   0
      0   0   0   λ4  r
      0   0   0   0   λ5 )

is not normal, but F(A) is the closed convex hull of its eigenvalues, which is the triangle T.

We conclude with a result of Givens [Giv53], showing that for a fixed matrix A, the set of points in the complex plane that belong to the field of values of every matrix similar to A is the closed convex hull of the eigenvalues of A. This is obvious when A is diagonalizable; we use the Jordan canonical form of A to get the result for general A.

Theorem 9.10 (Givens [Giv53]). Let A be an n × n matrix, and let H be the closed convex hull of the eigenvalues of A.
Then the intersection of all of the sets F(S−1AS), where S varies over the set of n × n invertible matrices, is H.

Proof. We know that H ⊆ F(S−1AS) for any invertible S. Suppose then that p is a point in the complex plane which is not in H. We need to show there is a matrix B, such that B is similar to A, but p ∉ F(B). Let J be the Jordan canonical form of A. Decompose J into the sum J = D + N, where D is diagonal and N is nilpotent. Thus, D and J have the same diagonal entries, and N = J − D has the same superdiagonal line of ones and zeroes as J, with zeroes in all other positions. For any positive real number ε, the matrix A is similar to Jε = D + εN. We have x∗Jεx = x∗Dx + εx∗Nx and so

(9.7) x∗Jεx − x∗Dx = εx∗Nx.
The set F(N) is a closed convex set containing 0; let M = max{|z| : z ∈ F(N)} = max{|x∗Nx| : ‖x‖ = 1}. Equation (9.7) shows that for any x with ‖x‖ = 1, we have

(9.8) |x∗Jεx − x∗Dx| ≤ εM.

Now suppose p ∉ H. Let δ be the distance from p to H; that is, δ = min{|p − h| : h ∈ H}. Since p ∉ H, we have δ > 0. Choose ε small enough that δ > εM. Since H = F(D), inequality (9.8) shows p ∉ F(Jε). So we have shown that for any point p outside H, there is some matrix similar to A (namely Jε, with ε < δ/M) such that p does not belong to the field of values of that matrix.

This chapter is merely an introduction to the numerical range. We refer the reader to the literature for the many papers on the numerical range and its generalizations.
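Givens's shrinking argument can be seen on a single 2 × 2 Jordan block. For Jε = [[λ, ε], [0, λ]], a direct computation gives x∗Jεx = λ + ε x̄1 x2, and |x̄1 x2| ≤ 1/2 for a unit vector, so F(Jε) lies in the closed disk of radius ε/2 about λ (by Theorem 9.6 it equals that disk). The sketch below (pure Python, random sampling; the value of λ and the sampling scheme are mine, for illustration only) confirms the containment shrinking with ε:

```python
import math, random

random.seed(1)
lam = complex(2, 1)

def sampled_radius(eps, trials=2000):
    """Largest observed |x* J_eps x - lam| over random unit vectors x in C^2,
    where J_eps = [[lam, eps], [0, lam]]."""
    worst = 0.0
    for _ in range(trials):
        v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]
        n = math.sqrt(sum(abs(z) ** 2 for z in v))
        x0, x1 = v[0] / n, v[1] / n
        # x* J_eps x = lam + eps * conj(x0) * x1
        worst = max(worst, abs(eps * x0.conjugate() * x1))
    return worst

r1, r2 = sampled_radius(1.0), sampled_radius(0.01)
print(r1 <= 0.5 + 1e-12, r2 <= 0.005 + 1e-12)  # → True True
```

As ε → 0, the field of values of Jε collapses to the single eigenvalue λ, which is exactly the closed convex hull of the eigenvalues in this case.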
Exercises

1. Let A be an n × n complex matrix, and let p ∈ F(A). Show there is a unitary matrix U such that p is the 1, 1 entry of U∗AU.

2. Suppose the ith diagonal entry of the matrix A is on the boundary of F(A). Show that every nondiagonal entry in row and column i of A must be zero; i.e., show that for j ≠ i, we have aij = aji = 0.

3. Show that F(A) ⊂ R if and only if A is Hermitian. Show that F(A) ⊂ (0, ∞) if and only if A is positive definite.

4. Let A be an n × n complex matrix. Show that F(A) is a subset of a straight line if and only if there is a Hermitian matrix H and complex numbers α, β such that A = αH + βI.

5. Show that the n × n complex matrix A is unitary if and only if all eigenvalues of A have modulus one and F(A) is the closed convex hull of the eigenvalues of A.
Chapter 10
Simultaneous Triangularization
A square complex matrix can be put in triangular form with a similarity transformation. This also holds for a matrix over a ﬁeld which contains the eigenvalues of the matrix. We now look at sets of matrices which can be simultaneously triangularized.
10.1. Invariant Subspaces and Block Triangularization

One may state and prove the results in terms of matrices and similarity or in terms of linear transformations and change of basis. We will use both approaches, as convenient. Let A be a nonempty set of linear transformations of an n-dimensional vector space V, over a field F. We say a subspace U of V is A-invariant if A(U) ⊆ U, that is, if T(u) ∈ U for every T ∈ A and every u ∈ U. We can then regard A as a set of linear transformations acting on the subspace U; we use T|U to denote the action of T restricted to U. We can also define an action of T on the quotient space V/U by defining T(v + U) = Tv + U; the fact that U is A-invariant guarantees that this is well defined. We say U is proper if U is nonzero and not equal to V.

Suppose U is a k-dimensional A-invariant subspace, where 1 ≤ k < n. Let {u1, . . . , uk} be a basis for U and extend this to get a basis B = {u1, . . . , uk, uk+1, . . . , un} for V. For each T ∈ A, the matrix for T with respect to the B-basis has the block triangular form
( A11  A12
  0    A22 ),
where A11 is k × k and A22 is (n − k) × (n − k). In the notation of Section 1.5, we have
[T]B = ( A11  A12
         0    A22 ).
The block A11 represents
the action of T on the subspace U, and the block A22 represents the action of T on the quotient space V/U. Alternatively, if one regards A as a set of n × n matrices over the field F, and S is the change of basis matrix for the B-basis, then for each M in A, the matrix S−1MS will have the block triangular form described in the previous paragraph.

Theorem 10.1. Let F be an algebraically closed field, and let V be an n-dimensional vector space over F, with n > 1. Let A be a nonempty commutative set of linear transformations of V. Then there is a proper A-invariant subspace of V.

Proof. If A is a set of scalar transformations, every subspace of V is A-invariant. So, we may assume A contains a transformation B which is not a scalar multiple of the identity. Let λ be an eigenvalue of B, and let U be the associated eigenspace; then U is a proper subspace. Let u ∈ U and A ∈ A. Since AB = BA, we have B(Au) = A(Bu) = λAu, and hence Au ∈ U. So the subspace U is an invariant subspace of each transformation in A. Note that the proof uses the fact that F is algebraically closed to guarantee that λ ∈ F. The argument may look familiar; it was used in Chapter 5 for a pair of commuting matrices (Theorem 5.5).
10.2. Simultaneous Triangularization, Property P, and Commutativity

The notation T = triang(t11, . . . , tnn) denotes an n × n upper triangular matrix with diagonal entries t11, . . . , tnn. For A = triang(a11, . . . , ann) and B = triang(b11, . . . , bnn), we have A + B = triang(a11 + b11, . . . , ann + bnn) and AB = triang(a11 b11, . . . , ann bnn). It is then easy to see that for any polynomial p(X, Y) in the noncommuting variables X and Y, the matrix p(A, B) is upper triangular with diagonal entries p(aii, bii), for i = 1, . . . , n. Recall also that in a triangular matrix, the diagonal entries are the eigenvalues. This motivates the following definition.

Definition 10.2. Two n × n matrices A and B are said to have Property P if there is an ordering of the eigenvalues α1, . . . , αn of A and β1, . . . , βn of B such that for every polynomial p(X, Y) in the noncommuting variables X and Y, the matrix p(A, B) has eigenvalues p(αi, βi), for i = 1, . . . , n.

Any pair of triangular matrices has Property P. We can extend the definition of Property P to sets of matrices.

Definition 10.3. The n × n matrices A1, . . . , At are said to have Property P if for each Ai there is an ordering of the eigenvalues αki, k = 1, . . . , n, of Ai, such that for every polynomial p(X1, . . . , Xt) in the noncommuting variables X1, . . . , Xt, the matrix p(A1, . . . , At) has eigenvalues p(αk1, . . . , αkt), for k = 1, . . . , n.

We say an infinite set of square matrices has Property P if any finite subset does. Note that any set of triangular matrices has Property P.
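The observation that triangular matrices have Property P is easy to verify for a particular word. The sketch below (pure Python; the matrices and the word p(X, Y) = XYX + Y are mine, chosen for illustration) checks that for two upper triangular 3 × 3 matrices the diagonal of p(A, B), hence its eigenvalue list, is p(aii, bii):

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(X, Y):
    n = len(X)
    return [[X[i][j] + Y[i][j] for j in range(n)] for i in range(n)]

A = [[1, 2, 3],
     [0, 4, 5],
     [0, 0, 6]]
B = [[7, 1, 0],
     [0, 8, 2],
     [0, 0, 9]]

# p(X, Y) = XYX + Y, a polynomial in noncommuting variables
P = matadd(matmul(matmul(A, B), A), B)
diag = [P[i][i] for i in range(3)]
expected = [A[i][i] * B[i][i] * A[i][i] + B[i][i] for i in range(3)]
print(diag, expected)  # → [14, 136, 333] [14, 136, 333]
```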
Deﬁnition 10.4. We say a set A of n × n matrices with entries in a ﬁeld F is simultaneously triangularizable over F if there is a nonsingular n × n matrix S over F such that S −1 AS is upper triangular for each A ∈ A. When F = C, the S in Deﬁnition 10.4 may be chosen to be a unitary matrix. Theorem 10.5. Suppose A is a simultaneously triangularizable set of matrices over C. Then there is a unitary matrix U such that U ∗ AU is upper triangular for each A ∈ A. Proof. Suppose there is a nonsingular matrix S such that S −1 AS is upper triangular for each A ∈ A. Put TA = S −1 AS. Let S = QR be the QR factorization of S, where Q is unitary and R is triangular. Then S −1 AS = R−1 Q∗ AQR = TA , and so Q∗ AQ = RTA R−1 . Since RTA R−1 is triangular, we may use U = Q.
For a set of normal matrices, simultaneous triangularizability is equivalent to simultaneous diagonalizability.

Corollary 10.6. Suppose A is a simultaneously triangularizable set of normal matrices over C. Then there is a unitary matrix U such that U∗AU is diagonal for each A ∈ A.

Proof. From Theorem 10.5 there is a unitary matrix U such that U∗AU is triangular for each A ∈ A. But A is normal; hence U∗AU is normal. Since any normal matrix which is triangular must be diagonal, U∗AU is diagonal.

If A is simultaneously triangularizable, the eigenvalues of the matrices of A will be the diagonal entries of the triangular matrices S−1AS, so F must contain the eigenvalues of the matrices of A. Any set A which is simultaneously triangularizable must have Property P. The main result of this chapter, McCoy's theorem, states that the converse also holds. We begin with an earlier result, due to Frobenius.

Theorem 10.7 (Frobenius [Frob96]). Let A be a commutative set of n × n matrices over an algebraically closed field F. Then A can be simultaneously triangularized.

Proof. The theorem clearly holds for n = 1; we proceed by induction. If A is a set of scalar matrices, then it is a set of diagonal matrices, and hence is already in triangular form. So, we may assume that A has elements which are not scalar. Theorem 10.1 tells us there is a proper A-invariant subspace U. Let k = dim(U). Then 1 ≤ k < n and there is a nonsingular matrix P such that for each C in A, we have
P−1CP = ( C11  C12
          0    C22 ),
where C11 is k × k and C22 is (n − k) × (n − k). Let A1 be the set of all the k × k matrices C11 which occur as C varies over A, and let A2 be the set of all the (n − k) × (n − k) matrices C22 which occur as C varies over A. Since A is commutative, the sets A1 and A2 are also commutative. Since k < n and n − k < n, we can then apply the induction hypothesis to each of the Ai's.
Thus, there exist nonsingular matrices Q1 and Q2 , of sizes k × k and n − k × n − k, respectively, such that the sets Q−1 i Ai Qi are triangular for i = 1, 2.
10. Simultaneous Triangularization
Then put Q = Q1 ⊕ Q2 and S = P Q. The matrix S⁻¹AS is then triangular for each A ∈ A.

This argument can also be done in the language of transformations, subspaces, and quotient spaces; see [Flan56]. One can easily find examples of triangular matrices which do not commute, so the converse of Theorem 10.7 is not true. Since any set of commuting matrices has Property P but not every set of matrices with Property P is commutative, one might regard Property P as a generalization of commutativity.

Before moving on to McCoy's theorem, let us say a bit more about the case where the matrices are normal. Corollary 10.6 tells us a set of normal matrices will be simultaneously triangularizable if and only if the set is commutative. If A, B are n × n normal matrices and AB = BA, then there is a unitary transformation U such that U*AU = diag(α1, ..., αn) and U*BU = diag(β1, ..., βn). We then have

(10.1)    ‖A − B‖_F² = Σ_{i=1}^{n} |αi − βi|².

Equation (10.1) need not hold for noncommuting normal matrices. For example, consider

    A = [ 1 1 ]        B = [ 1 0 ]
        [ 1 1 ]  and       [ 0 0 ].

The eigenvalues of A are 2 and 0. The eigenvalues of B are 1 and 0. Depending on the ordering of the eigenvalues, the value of Σ_{i=1}^{2} |αi − βi|² is either 1 or 5. But

    A − B = [ 0 1 ]
            [ 1 1 ],

so ‖A − B‖_F² = 3. However, there is something one can say in the noncommuting case. Observe that 3 is between 1 and 5. The following theorem of Hoffman and Wielandt gives bounds for ‖A − B‖_F² in terms of the eigenvalues. We defer the proof to Chapter 12, as it uses a result about doubly stochastic matrices; see Section 12.5.

Theorem 10.8 (Hoffman and Wielandt [HW53]). Let A and B be n × n normal matrices with eigenvalues α1, ..., αn and β1, ..., βn, respectively. Then there are permutations μ, σ of 1, ..., n such that

(10.2)    Σ_{i=1}^{n} |αi − β_{μ(i)}|² ≤ ‖A − B‖_F² ≤ Σ_{i=1}^{n} |αi − β_{σ(i)}|².
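The 2 × 2 example above, together with the bounds of Theorem 10.8, can be verified numerically. This NumPy sketch (our own check, not part of the text) computes the pairing sums over both orderings and confirms that ‖A − B‖_F² = 3 falls between the extremes 1 and 5.

```python
import numpy as np
from itertools import permutations

# The text's 2x2 example: A, B symmetric (hence normal) but not commuting.
A = np.array([[1.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0]])

alpha = np.linalg.eigvals(A)               # eigenvalues 2, 0
beta = np.linalg.eigvals(B)                # eigenvalues 1, 0
fro2 = np.linalg.norm(A - B, 'fro')**2     # squared Frobenius norm, = 3

# Sum of |alpha_i - beta_{p(i)}|^2 over all orderings p.
sums = [sum(abs(alpha[i] - beta[p[i]])**2 for i in range(2))
        for p in permutations(range(2))]
assert np.isclose(min(sums), 1.0) and np.isclose(max(sums), 5.0)
assert min(sums) <= fro2 <= max(sums)      # 1 <= 3 <= 5
```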
The left-hand side of this inequality is probably of more interest, as we typically want to know how close the eigenvalues of B are to those of A if B is close to A. For example, we may want to compute the eigenvalues of A, but due to various errors, have actually found the eigenvalues of some matrix B which is close to A. Bounds on the sizes of the errors in the entries enable us to compute an upper bound on ‖A − B‖_F², and (10.2) then gives information on how much the computed eigenvalues may differ from the actual eigenvalues.
10.3. Algebras, Ideals, and Nilpotent Ideals

There are various ways to prove McCoy's theorem; the proof here comes from Flanders [Flan56]. We need some basic ideas from abstract algebra, starting with the definition of the term algebra.
Definition 10.9. An associative algebra is a vector space A over a field F, such that A is also an associative ring, and, for any t ∈ F and any elements u, v in A, we have t(uv) = (tu)v = u(tv).

We shall be dealing only with associative algebras and will use the word algebra to mean associative algebra. We do not assume that the algebra has a multiplicative identity. Thus, an algebra A is a vector space endowed with an associative multiplication such that A satisfies the ring axioms, and the elements of the ground field F satisfy the last property given in the definition. A subspace of A which is itself an algebra (under the same operations used in A) is called a subalgebra. The algebras of interest to us are subalgebras of Mn, using the usual matrix multiplication. If B and C are subalgebras of A, then BC denotes the set of all finite sums of the form Σ bi ci where the bi's are in B and the ci's are in C. While BC is a subspace of A, it need not be a subalgebra.

For the noncommuting variables X1, ..., Xt, any product of a finite number of nonnegative integer powers of the Xi's (in any order) will be called a monomial in X1, ..., Xt. The degree of the monomial is the sum of the exponents on the Xi's. We define the monomial of degree zero to be the identity matrix I. For example, X1X2X1X3, X3X2^3X1X3, X2^3, and X3X2^4X3^2 are monomials in X1, X2, and X3, of degrees 4, 6, 3, and 7, respectively. A linear combination of such monomials is a polynomial in the noncommuting variables X1, ..., Xt. For example, the sum X1X2 + 4X2X1 + 7X3^5 + 8 is a polynomial in X1, X2, X3; the scalar 8 is shorthand for 8I.

Let S be a nonempty subset of an algebra A. The set of all possible linear combinations of positive degree monomials of elements of S is closed under addition, scalar multiplication, and multiplication, and is thus a subalgebra of A, denoted by A(S). The subalgebra A(S) is the set of all finite sums of the form Σ ai mi(B1, ..., Bt), where ai ∈ F, mi(X1, ..., Xt) can be any positive degree monomial in the noncommuting variables X1, ..., Xt, t can be any positive integer, and B1, ..., Bt are any elements of S. Any subalgebra of A which contains S must contain A(S), so A(S) is the smallest subalgebra of A which contains S. We say A(S) is the algebra generated by S.

Definition 10.10. A subalgebra B of the algebra A is said to be an ideal of A if AB ⊆ B and BA ⊆ B.

Example 10.11. Let α be an element of the algebra A, and let I(α) be the set of all possible linear combinations of elements of the form rαs, where r, s ∈ A. Then I(α) is an ideal and is called the ideal generated by α. If A has an identity element, then α ∈ I(α).

The next example is of great interest for our purposes.

Example 10.12. Let T be the algebra of all n × n upper-triangular matrices, and let B be the set of all strictly upper-triangular matrices. Then B is an ideal of T.

Definition 10.13. We say an element x of an algebra A is nilpotent if x^k = 0 for some positive integer k.
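For readers who want to experiment, here is a small NumPy sketch (our own encoding, not from the text) that represents a monomial in noncommuting variables as a word of variable indices and evaluates the sample polynomial X1X2 + 4X2X1 + 8 on concrete 2 × 2 matrices.

```python
import numpy as np
from functools import reduce

def eval_word(word, mats):
    """Evaluate a monomial given as a tuple of variable indices,
    e.g. (0, 1, 0, 2) stands for X1 X2 X1 X3; () is the identity."""
    n = mats[0].shape[0]
    return reduce(lambda M, i: M @ mats[i], word, np.eye(n))

X1 = np.array([[0.0, 1.0], [0.0, 0.0]])
X2 = np.array([[0.0, 0.0], [1.0, 0.0]])
Xs = [X1, X2]

# The variables do not commute: X1 X2 != X2 X1.
assert not np.allclose(eval_word((0, 1), Xs), eval_word((1, 0), Xs))

# Evaluate p = X1 X2 + 4 X2 X1 + 8 I (the scalar 8 is shorthand for 8I).
p = eval_word((0, 1), Xs) + 4 * eval_word((1, 0), Xs) + 8 * np.eye(2)
assert np.allclose(p, np.diag([9.0, 12.0]))
```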
Nilpotent matrices played a central role in Chapter 4, and we saw there that for elements of Mn the following are equivalent.

(1) A is nilpotent.
(2) All of the eigenvalues of A are zero.
(3) A is similar to a strictly upper-triangular matrix.
(4) A^n = 0.
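These equivalences are easy to see in a numerical example. The sketch below (NumPy, with hypothetical test matrices of our own choosing) takes A similar to a strictly upper-triangular N and checks conditions (2) and (4).

```python
import numpy as np

# N strictly upper triangular; A = S N S^{-1} is similar to it.
N = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])
S = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])          # invertible (det = 3)
A = S @ N @ np.linalg.inv(S)

assert np.allclose(np.linalg.matrix_power(A, 3), 0, atol=1e-10)  # (4): A^n = 0
# (2): eigenvalues of a nilpotent matrix are all zero (loose tolerance,
# since eigenvalues of nilpotent matrices are numerically delicate)
assert np.allclose(np.linalg.eigvals(A), 0, atol=1e-4)
```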
Definition 10.14. We say A is a nil algebra if all of its elements are nilpotent. We say A is a nilpotent algebra if A^k = 0 for some positive integer k, where A^k means the product of A with itself k times. The terms nil ideal and nilpotent ideal are defined similarly.

For example, the algebra of all strictly upper-triangular n × n matrices is nil. It is also nilpotent, since the product of any n such matrices is the zero matrix. Any nilpotent algebra is clearly a nil algebra. In general, the converse is not true, but it does hold for algebras of linear transformations on a finite-dimensional vector space. This will follow from the next theorem, taken from Flanders [Flan56].

Theorem 10.15. Let A be a nil algebra which is a subalgebra of Mn. Then there is a proper A-invariant subspace of V.

Proof. If A = 0, any proper subspace will do, so we may assume A is nonzero. Hence there is a vector v in V such that Av ≠ 0. Let U = Av; then U is a nonzero A-invariant subspace of V. We now show that v is not in U, and hence U is a proper subspace of V. Suppose v is in U. Then v = Av for some A in A, and so 1 would be an eigenvalue of A. But A is nilpotent, so this is impossible. Therefore, v is not in U, and hence U is a proper A-invariant subspace of V.

The following result may be proved with an argument similar to that used to prove Theorem 10.7.

Theorem 10.16. A nil algebra which is a subalgebra of Mn can be simultaneously triangularized.

Corollary 10.17. A nil algebra which is a subalgebra of Mn is nilpotent.

Proof. By Theorem 10.16 we can assume the nil algebra has been triangularized. Since the elements are nilpotent, they are then strictly upper triangular. The result then follows from the fact that the product of any n strictly upper-triangular n × n matrices is zero.

Remark 10.18. By using the regular representation, one can show that any finite-dimensional algebra is nil if and only if it is nilpotent (see [Flan56]).
Note that if B and C are two nilpotent ideals of A, then the sum B + C is also a nilpotent ideal. We now introduce the additive commutator. Deﬁnition 10.19. If A and B are two n×n matrices, then AB −BA is the additive commutator of A and B. We use [A, B] to denote the additive commutator of A and B.
Those familiar with Lie algebras will recall that Mn with the product [A, B] is a Lie algebra. This product is not associative, but does satisfy the Jacobi identity [[A, B], C] + [[C, A], B] + [[B, C], A] = 0. If a set of matrices S has Property P, then all of the eigenvalues of [A, B] are zero for any A, B ∈ S. So [A, B] is nilpotent. Furthermore, for any matrices C1 , . . . , Ct in S and any polynomial p(X1 , . . . , Xt ) in the noncommuting variables X1 , . . . , Xt , the matrix p(C1 , . . . , Ct )[A, B] is nilpotent.
10.4. McCoy's Theorem

We now come to the main result of this chapter.

Theorem 10.20 (McCoy [McCoy36]). Let S be a nonempty set of n × n matrices over an algebraically closed field F. Then the following are equivalent.

(1) The set S is simultaneously triangularizable.
(2) The set S has Property P.
(3) For any matrices A, B, C1, ..., Ct in S and any polynomial p(X1, ..., Xt) in the noncommuting variables X1, ..., Xt, the matrix p(C1, ..., Ct)[A, B] is nilpotent.

We have already observed that (1) implies (2) and that (2) implies (3); the hard part of the proof is showing that (3) implies (1). This can be done in several ways. Again, we follow the approach in [Flan56]. We first need the following.

Theorem 10.21. Let F be an algebraically closed field, let V be a vector space over F with dim(V) > 1, and let A be an algebra of linear transformations of V. Suppose there is a nil ideal B of A such that A/B is commutative. Then V has a proper A-invariant subspace.

Proof. If B = 0, then A is commutative, and the result follows from Theorem 10.1. If B ≠ 0, then, as shown in the proof of Theorem 10.15, there exists a vector v such that U = Bv is a proper B-invariant subspace of V. Since B is an ideal of A, we have AB ⊆ B, and so AU = ABv ⊆ Bv = U. Hence the proper subspace U is A-invariant.

Proof of Theorem 10.20. We now complete the proof of McCoy's theorem by showing that (3) implies (1). We do this with Theorem 10.21, using condition (3) to obtain the nil ideal B. First, note that if S′ = S ∪ {I}, then each of the properties (1), (2), (3) holds for S if and only if it holds for S′. So, without loss of generality, we may assume S includes the identity I. Let A = A(S) be the algebra generated by S. Assume S satisfies property (3). Note that for any pair of n × n matrices R, S, the matrix RS is nilpotent if and only if SR is nilpotent. Hence, [A, B]p(C1, ..., Ct) is nilpotent for any matrices A, B, C1, ..., Ct in S. If we replace the elements C1, ..., Ct of S by elements D1, ..., Dt of A, the resulting expression p(D1, ..., Dt) is still some polynomial in elements of S. Therefore, every element of the ideal generated by [A, B] in A is nilpotent. So I([A, B]) is a nil ideal, and hence, by Corollary 10.17, is
nilpotent. Now let R be a maximal nilpotent ideal of A. Then the sum R + I([A, B]) is also a nilpotent ideal; since R is maximal, we have R + I([A, B]) ⊆ R, and so I([A, B]) ⊆ R. Hence, [A, B] ∈ R. Consider the quotient A/R. Since [A, B] ∈ R for all A, B ∈ S, the elements A + R and B + R commute in A/R. Since S generates A, this means that A/R is commutative. Hence, by Theorem 10.21, there is a proper A-invariant subspace. We now proceed as in the proof of Theorem 10.7. Use the A-invariant subspace to put the algebra A into block triangular form. The algebras formed by the diagonal blocks will satisfy condition (3), so one can use an induction argument and assume the blocks can be triangularized. Thus A has property (1).

Those familiar with Lie algebras will recognize that this result is closely related to the theorems of Lie and Engel.
10.5. Property L

We now consider a weaker property, called Property L.

Definition 10.22. The n × n matrices A and B are said to have Property L if there is an ordering of the eigenvalues α1, ..., αn of A and β1, ..., βn of B such that for any scalars x and y, the eigenvalues of xA + yB are xαi + yβi for i = 1, ..., n. The set of all matrices of the form xA + yB is called the pencil of A and B.

Matrices which have Property P certainly have Property L, but in general, the converse is not true.

Example 10.23. This example comes from [MT52], the first of two papers on Property L by Motzkin and Taussky. Let

    A = [ 0 1  0 ]        B = [ 0 0 0 ]
        [ 0 0 −1 ]  and       [ 1 0 0 ].
        [ 0 0  0 ]            [ 0 1 0 ]

Then

    xA + yB = [ 0 x  0 ]
              [ y 0 −x ]
              [ 0 y  0 ]

and (xA + yB)^3 = 0. Hence, for every x and y, all of the eigenvalues of xA + yB are zero, so the pair A, B has Property L. However, the product

    AB = [ 1  0 0 ]
         [ 0 −1 0 ]
         [ 0  0 0 ]

has eigenvalues 1, −1, and 0, so A and B do not have Property P.

In [MT52, MT55], Motzkin and Taussky establish significant results about the pair A, B, the pencil xA + yB, and Property L, and they use algebraic geometry to study the characteristic curve associated with the polynomial det(zI − xA − yB). Here are a few of the main results.

Theorem 10.24. Let A and B be n × n matrices over an algebraically closed field F of characteristic p. Assume all the matrices in the pencil xA + yB are diagonalizable, and, for p ≠ 0, assume that n ≤ p or that A and B have Property L. Then A and B commute.
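Returning to Example 10.23, its claims can be checked directly. The following NumPy sketch (our own verification, not part of the text) confirms that every member of the pencil is nilpotent while AB has eigenvalues 1, −1, 0.

```python
import numpy as np

A = np.array([[0.0, 1.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 0.0,  0.0]])
B = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

rng = np.random.default_rng(1)
for _ in range(5):
    x, y = rng.standard_normal(2)
    M = x * A + y * B
    # every member of the pencil is nilpotent, so A, B have Property L
    assert np.allclose(np.linalg.matrix_power(M, 3), 0, atol=1e-10)

# but AB has eigenvalues 1, -1, 0, so the pair fails Property P
assert np.allclose(np.sort(np.linalg.eigvals(A @ B).real), [-1.0, 0.0, 1.0])
```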
The proof of Theorem 10.24 uses methods from algebraic geometry and is beyond our scope here. However, the next two results are proven with matrix theory tools.

Theorem 10.25 (Motzkin and Taussky [MT52]). Let A and B be n × n matrices with Property L, and assume A is diagonalizable. Let α1, ..., αn be the eigenvalues of A, listed so that repeated eigenvalues appear together; assume there are t distinct eigenvalues of multiplicities m1, ..., mt. Let β1, β2, ..., βn be the corresponding eigenvalues of B. Let A′ = P⁻¹AP be in Jordan form, and let B′ = P⁻¹BP. Write

    B′ = [ B11  B12  ···  B1t ]
         [ B21  B22  ···  B2t ]
         [  ⋮    ⋮    ⋱    ⋮  ]
         [ Bt1  Bt2  ···  Btt ],

where Bij is size mi × mj. Then

    det(zI − B′) = Π_{i=1}^{t} det(zI − Bii)    and    Σ bik bki = 0,

where the sum is over all i < k for which the entry bik lies outside every diagonal block Bjj.

Proof. Property L is not affected by a translation, so we may assume α1 = 0. The matrix A′ is diagonal, and we write

    A′ = [ 0   0  ]         B′ = [ B11  C12 ]
         [ 0  A22 ]  and         [ C21  C22 ],

where A22 has size (n − m1) × (n − m1), the zeros indicate blocks of zeros, and B′ is partitioned conformally with A′. Note that A22 will be nonsingular. Consider the polynomial

    det(zI − xA′ − B′) = det [ zI − B11        −C12       ]
                             [   −C21    zI − xA22 − C22 ]

in x and z, and note that the coefficient of x^{n−m1} is det(zI − B11) det(−A22). However, since A and B have Property L, we also have

    det(zI − xA′ − B′) = Π_{i=1}^{m1} (z − βi) · Π_{i=m1+1}^{n} (z − xαi − βi).

From this we see the coefficient of x^{n−m1} is Π_{i=1}^{m1} (z − βi) · Π_{i=m1+1}^{n} (−αi). Equating these two expressions for the coefficient of x^{n−m1} gives

    det(zI − B11) det(−A22) = Π_{i=1}^{m1} (z − βi) · Π_{i=m1+1}^{n} (−αi).

But det(−A22) = Π_{i=m1+1}^{n} (−αi), and this is nonzero, so det(zI − B11) = Π_{i=1}^{m1} (z − βi). Applying this argument to each of the eigenvalues of A yields

(10.3)    det(zI − B′) = Π_{i=1}^{t} det(zI − Bii).

The second part of the theorem comes from examining the coefficient of z^{n−2} on both sides of (10.3). In each case, the coefficient of z^{n−2} is the sum of the
determinants of all 2 × 2 principal submatrices. The diagonal entries of these submatrices are the same for both sides, but for the off-diagonal entries, the left-hand side gives the sum of all products bik bki, where i ≠ k, while on the right-hand side we have the sum of all such products where bik is inside one of the diagonal blocks. So the sum of all such products where bik comes from outside the diagonal blocks must be zero.

One can think of the theorem as telling us that when A and B have Property L they retain some of the behavior of a pair of matrices which have Property P. We now use Theorem 10.25 to show that Hermitian matrices with Property L must commute.

Theorem 10.26 (Motzkin and Taussky). If A and B are Hermitian matrices with Property L, then AB = BA.

Proof. Since A is Hermitian, we can reduce it to Jordan form with a unitary similarity P, so A′ = P⁻¹AP is diagonal and B′ = P⁻¹BP is still Hermitian. Hence, bik bki = |bik|², and the second part of Theorem 10.25 tells us all of the off-diagonal blocks of B′ are zero. Since the diagonal blocks of B′ pair up with scalar blocks in A′, we see that A′ and B′ commute, and hence so do A and B.

Wielandt [Wiel53] generalized Theorem 10.26 to pairs of normal matrices. We prove this generalization using Theorem 5.12, which we restate and prove below.

Theorem 10.27. Let A be a normal matrix with eigenvalues λ1, ..., λn. Partition A into t² blocks, where the diagonal blocks A11, ..., Att are square. Suppose the direct sum of the diagonal blocks A11 ⊕ A22 ⊕ ··· ⊕ Att has eigenvalues λ1, ..., λn. Then Aij = 0 when i ≠ j, and so A = A11 ⊕ A22 ⊕ ··· ⊕ Att.

Proof. Since A is normal, we have

(10.4)    ‖A‖_F² = Σ_{i=1}^{n} Σ_{j=1}^{n} |aij|² = Σ_{i=1}^{n} |λi|².

Let S denote the sum of the squares, |aij|², of the entries aij which are in the diagonal blocks A11, ..., Att. Since A11 ⊕ A22 ⊕ ··· ⊕ Att has eigenvalues λ1, ..., λn, we have S ≥ Σ_{i=1}^{n} |λi|². But clearly, ‖A‖_F² ≥ S. Combine this with (10.4) to get

    Σ_{i=1}^{n} |λi|² = ‖A‖_F² ≥ S ≥ Σ_{i=1}^{n} |λi|².

Hence, we must have ‖A‖_F² = S. This means all of the entries of A which are outside the diagonal blocks must be zeros, and hence Aij = 0 when i ≠ j.

Theorem 10.28 (Wielandt [Wiel53]). If A and B are n × n normal matrices with Property L, then AB = BA.

Proof. Since A is normal, we can reduce it to Jordan form with a unitary similarity U, so A′ = U*AU is diagonal and B′ = U*BU is still normal. From Theorem 10.25, we have det(zI − B′) = Π_{i=1}^{t} det(zI − Bii), and so B′ has the same eigenvalues as
B11 ⊕ B22 ⊕ ··· ⊕ Btt. Theorem 10.27 then tells us the off-diagonal blocks of B′ are zero. The diagonal blocks of B′ pair up with scalar blocks in A′. So A′ and B′ commute, and hence so do A and B.
Exercises

1. Let A be a nonempty set of linear transformations of an n-dimensional vector space V over a field F, and let U be an A-invariant subspace of V. For T ∈ A, show that we can define an action of T on the quotient space V/U by defining T(v + U) = Tv + U. (The main thing you need to show here is that this is well defined.)

2. Find a pair of upper-triangular matrices which do not commute.

3. For an n × n matrix A, let C(A) denote the set of matrices which commute with A. Thus, C(A) = {B ∈ Mn : AB = BA}.
(a) What is C(I)?
(b) Show that C(A) is a subspace of Mn.
(c) Show that if p(x) is any polynomial, then p(A) ∈ C(A).
(d) Revisit part (a) to see that there can be matrices in C(A) which are not polynomials in A.
(e) Show that if the matrix A is a single Jordan block A = Jp(λ), then every matrix in C(A) is a polynomial in A. In this case, what is the dimension of C(A)? Hint: Note that B commutes with Jp(λ) if and only if B commutes with the nilpotent matrix N = Jp(0), so it suffices to find C(N).
(f) Find a matrix which commutes with A = J2(0) ⊕ J1(0) which is not a polynomial in A.

4. Give the details of the proof of Theorem 10.16.

5. Show that if B and C are two nilpotent ideals of A, then the sum B + C is also a nilpotent ideal.

6. Let A be an associative algebra. For x, y ∈ A, define [x, y] = xy − yx. Show this Lie bracket product satisfies the Jacobi identity [[x, y], z] + [[z, x], y] + [[y, z], x] = 0.
Chapter 11
Circulant and Block Cycle Matrices
This brief chapter deals with some special types of matrices needed in later chapters. The J matrix has appeared as an example in Chapter 3; we review it here as it will be a frequent player in Chapters 12 and 13. Circulant matrices show up in various applications, and the more general "block cycle" matrices will be needed for studying nonnegative matrices in Chapter 17.
11.1. The J Matrix

We use J to designate a matrix in which every entry is a one. We use Jm,n for an m × n matrix of ones and Jn for a square n × n matrix of ones. However, we use e for a column vector of ones. Note that Jn e = ne, so n is an eigenvalue of Jn. Since rank(Jn) = 1, the only other eigenvalue is zero, and it has multiplicity n − 1. Hence the eigenspace corresponding to the eigenvalue n is the line spanned by e. The characteristic polynomial of Jn is x^{n−1}(x − n), and the minimal polynomial of Jn is x(x − n). Note that Jn² = nJn. For scalars a and b, the eigenvalues of aI + bJn are a + bn of multiplicity one and a of multiplicity n − 1. Hence, det(aI + bJn) = a^{n−1}(a + bn).
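These facts about Jn are easy to confirm numerically. The sketch below (NumPy, with sample values n = 5, a = 2, b = 3 of our own choosing) checks the eigenvalues, the identity Jn² = nJn, and the determinant formula.

```python
import numpy as np

n, a, b = 5, 2.0, 3.0
J = np.ones((n, n))

# eigenvalues: n (once) and 0 (with multiplicity n - 1)
vals = np.sort(np.linalg.eigvals(J).real)
assert np.allclose(vals, [0, 0, 0, 0, n], atol=1e-8)

assert np.allclose(J @ J, n * J)                    # Jn^2 = n Jn
assert np.isclose(np.linalg.det(a * np.eye(n) + b * J),
                  a**(n - 1) * (a + b * n))          # det = a^{n-1}(a + bn)
```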
11.2. Circulant Matrices

We start with the basic cycle matrix.

Example 11.1. The linear transformation

    P (x1, x2, x3, ..., xn)^T = (x2, x3, ..., xn, x1)^T

permutes the coordinates of x by cycling them around; this permutation is an n-cycle. The permutation matrix corresponding to this cycle is

(11.1)    P = [ 0 1 0 0 ··· 0 0 ]
              [ 0 0 1 0 ··· 0 0 ]
              [ 0 0 0 1 ··· 0 0 ]
              [ ⋮ ⋮ ⋮ ⋮  ⋱  ⋮ ⋮ ]
              [ 0 0 0 0 ··· 0 1 ]
              [ 1 0 0 0 ··· 0 0 ].

We have P^n = I. The characteristic polynomial of P is x^n − 1. The eigenvalues of P are the nth roots of unity; that is, the numbers 1, ωn, ωn^2, ..., ωn^{n−1}, where ωn = e^{2πi/n}. These numbers are on the unit circle |z| = 1 and are the vertices of a regular n-sided polygon. The eigenvector corresponding to the eigenvalue ωn^k is

    vk = (1, ωn^k, ωn^{2k}, ωn^{3k}, ..., ωn^{(n−1)k}).

Note that P⁻¹ = P^{n−1} = P^T. Hence, P P^T = P^T P = I; the matrix P is normal, and the vectors v1, ..., vn are pairwise orthogonal. We have ‖vk‖ = √n. Let V̂ denote the n × n matrix with vk in column k. Then V = (1/√n) V̂ is a unitary matrix, and V* P V = diag(1, ωn, ωn^2, ..., ωn^{n−1}).

Now we replace the ones in (11.1) with positive real numbers.

Example 11.2. Let a1, ..., an be positive real numbers, and let

    A = [ 0   a1  0   0  ···   0   ]
        [ 0   0   a2  0  ···   0   ]
        [ 0   0   0   a3 ···   0   ]
        [ ⋮   ⋮   ⋮   ⋮   ⋱    ⋮   ]
        [ 0   0   0   0  ···  an−1 ]
        [ an  0   0   0  ···   0   ].

The ai's are in the same positions as the ones in the n-cycle matrix. Compute powers of A to see that A^n = (a1 a2 ··· an) In. The characteristic polynomial of A is pA(x) = x^n − a1 a2 ··· an. The spectral radius of A is the positive real number ρ that is the nth root of a1 a2 ··· an, and the eigenvalues of A are the n complex numbers ρ, ρωn, ρωn^2, ..., ρωn^{n−1}, where ωn = e^{2πi/n}. These numbers lie on the circle |z| = ρ and are the vertices of a regular n-sided polygon.

Definition 11.3. An n × n matrix of the form

    C = c0 I + c1 P + c2 P^2 + c3 P^3 + ··· + cn−2 P^{n−2} + cn−1 P^{n−1}

is called a circulant matrix. Thus, C is a polynomial in the n-cycle matrix P. Writing out the entries of C, we see that each row is obtained by shifting the previous row one position to the right and then "wrapping around" the rightmost entry to the beginning of the row. Thus, the entries are shifted in a cyclic, or
circular fashion:

    C = [ c0    c1    c2    c3   ···  cn−2  cn−1 ]
        [ cn−1  c0    c1    c2   ···  cn−3  cn−2 ]
        [ cn−2  cn−1  c0    c1   ···  cn−4  cn−3 ]
        [ cn−3  cn−2  cn−1  c0   ···  cn−5  cn−4 ]
        [  ⋮     ⋮     ⋮     ⋮    ⋱    ⋮     ⋮   ]
        [ c2    c3    c4    c5   ···  c0    c1   ]
        [ c1    c2    c3    c4   ···  cn−1  c0   ].

Since C is a polynomial in P, any eigenvector of P is also an eigenvector of C. Also, since P is normal, C must be normal. Letting

    p(t) = c0 + c1 t + c2 t^2 + c3 t^3 + ··· + cn−2 t^{n−2} + cn−1 t^{n−1},

and, as defined above, vk = (1, ωn^k, ωn^{2k}, ωn^{3k}, ..., ωn^{(n−1)k}), we have Cvk = p(ωn^k) vk and

    V* C V = diag(p(1), p(ωn), p(ωn^2), ..., p(ωn^{n−1})).
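The diagonalization of a circulant matrix can be confirmed numerically. This NumPy sketch (our own test, with random coefficients) builds C = Σ ck P^k and checks that every value p(ωn^k) occurs among its eigenvalues.

```python
import numpy as np

n = 6
rng = np.random.default_rng(2)
c = rng.standard_normal(n)

# the n-cycle matrix P of (11.1): ones on the superdiagonal, one in the corner
P = np.eye(n, k=1)
P[-1, 0] = 1.0

# the circulant C = c0 I + c1 P + ... + c_{n-1} P^{n-1}
C = sum(c[k] * np.linalg.matrix_power(P, k) for k in range(n))

w = np.exp(2j * np.pi / n)
predicted = np.array([sum(c[j] * w**(j * k) for j in range(n))
                      for k in range(n)])   # p(w^k) for k = 0, ..., n-1
ev = np.linalg.eigvals(C)
# each p(w^k) occurs among the eigenvalues of C
for lam in predicted:
    assert np.min(np.abs(ev - lam)) < 1e-8
```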
11.3. Block Cycle Matrices

We now come to the main purpose of this chapter: the type of matrix used in Chapter 17. We generalize Example 11.2, replacing the entries with matrices, to obtain a partitioned matrix. The zeros are replaced by blocks of zeros, and the positive entries ai are replaced by nonnegative matrices. More specifically, let n = n1 + n2 + ··· + nt, where n1, ..., nt are positive integers. For 1 ≤ i ≤ t − 1, let Ai be a matrix of size ni × ni+1, and let At be a matrix of size nt × n1. Let A be the n × n partitioned matrix with square diagonal blocks of zeros of sizes n1, ..., nt, and the blocks A1, ..., At as shown in equation (11.2). The zero in position i, j represents an ni × nj block of zeros:

(11.2)    A = [ 0   A1  0   0  ···   0   ]
              [ 0   0   A2  0  ···   0   ]
              [ 0   0   0   A3 ···   0   ]
              [ ⋮   ⋮   ⋮   ⋮   ⋱    ⋮   ]
              [ 0   0   0   0  ···  At−1 ]
              [ At  0   0   0  ···   0   ].

Then A² has the form shown below:

(11.3)    A² = [ 0       0     A1A2  0     ···  0        ]
               [ 0       0     0     A2A3  ···  0        ]
               [ ⋮       ⋮     ⋮     ⋮      ⋱   ⋮        ]
               [ 0       0     0     0     ···  At−2At−1 ]
               [ At−1At  0     0     0     ···  0        ]
               [ 0       AtA1  0     0     ···  0        ].

For A³, the nonzero blocks are products of the form Ai Ai+1 Ai+2, where the subscripts are to be read modulo t; these blocks are in block positions (1, 4), (2, 5), etc. For A^r, the nonzero blocks are products of r consecutive Ai's; the line of nonzero blocks shifts upward to the right with each increase in r. For r = t, each product
has all t of the Ai's, and these products are in the diagonal block positions. Thus A^t is the block diagonal matrix shown below:

(11.4)    A^t = [ A1A2···At  0           0             ···  0             ]
                [ 0          A2A3···AtA1 0             ···  0             ]
                [ 0          0           A3A4···AtA1A2 ···  0             ]
                [ ⋮          ⋮           ⋮              ⋱   ⋮             ]
                [ 0          0           0             ···  AtA1A2···At−1 ].

Recall (Theorem 3.24) that when B and C are matrices of sizes k × m and m × k, respectively, the products BC and CB have the same nonzero eigenvalues with the same multiplicities. Therefore, the diagonal blocks of A^t all have the same nonzero eigenvalues (with the same multiplicities). If λ is a nonzero eigenvalue of A, then λ^t is an eigenvalue of A^t, and hence must be an eigenvalue of the n1 × n1 matrix A1A2···At. Let ωt = e^{2πi/t}. We now show that if μ is a nonzero eigenvalue of A1A2···At, and λ is a tth root of μ, then the t numbers λ, λωt, λωt^2, ..., λωt^{t−1} are all eigenvalues of A.

Let A be the matrix of (11.2). Suppose x is an eigenvector of A, with Ax = λx. Partition x conformally with A, i.e., put x = (x1, x2, ..., xt)^T, where xi has ni coordinates. Then

(11.5)    Ax = (A1x2, A2x3, A3x4, ..., At−1xt, Atx1)^T = (λx1, λx2, λx3, ..., λxt−1, λxt)^T.

Hence, Ai xi+1 = λxi for i = 1, ..., t, where the subscript t + 1 is read "modulo t"; thus t + 1 ≡ 1 (mod t). Now put

    y = (x1, ωx2, ω^2 x3, ω^3 x4, ..., ω^{t−1} xt)^T,
so the ith block of y is ω^{i−1} xi. Compute Ay and use (11.5) to get

    Ay = (ωA1x2, ω^2 A2x3, ω^3 A3x4, ..., ω^{t−1} At−1xt, Atx1)^T
       = (ωλx1, ω^2 λx2, ω^3 λx3, ..., ω^{t−1} λxt−1, λxt)^T
       = λω (x1, ωx2, ω^2 x3, ..., ω^{t−1} xt)^T = λωy,

where we have used ω^t = 1. This shows λω is an eigenvalue of A and y is an associated eigenvector. Note that x ≠ 0 guarantees that y ≠ 0. So, we have shown that whenever λ ∈ spec(A), then λω ∈ spec(A). Repeating this process shows that all t of the numbers λ, λωt, λωt^2, ..., λωt^{t−1} are eigenvalues of A. If we visualize the eigenvalues of A as points in the complex plane, then multiplication by ω rotates by angle 2π/t, so the set of eigenvalues is invariant under rotation by angle 2π/t.
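The rotational symmetry of the spectrum is easy to test numerically. The sketch below (NumPy; the block sizes and random blocks are our own choices) assembles a block cycle matrix as in (11.2) and checks that multiplying any eigenvalue by ωt gives another eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(3)
sizes = [2, 3, 2]                    # n1, n2, n3; here t = 3
t, n = len(sizes), sum(sizes)
starts = np.cumsum([0] + sizes)

# assemble the block cycle matrix (11.2) with random blocks A1, ..., At
A = np.zeros((n, n))
for i in range(t):
    j = (i + 1) % t                  # Ai sits in block position (i, i+1); At in (t, 1)
    A[starts[i]:starts[i + 1], starts[j]:starts[j + 1]] = \
        rng.standard_normal((sizes[i], sizes[j]))

# the spectrum is invariant under rotation by w = e^{2 pi i / t}
w = np.exp(2j * np.pi / t)
ev = np.linalg.eigvals(A)
for lam in ev:
    assert np.min(np.abs(ev - w * lam)) < 1e-6
```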
Exercises

1. Show that P = (1/n) Jn is a projection matrix.

2. Find an orthogonal matrix U such that

       U^T J2 U = [ 2 0 ]
                  [ 0 0 ].

   Find a positive semidefinite symmetric matrix S such that S² = J2.

3. At the beginning of this section we used the eigenvalues of aI + bJn to show det(aI + bJn) = a^{n−1}(a + bn). Give an alternative argument, using row operations, to show det(aI + bJn) = a^{n−1}(a + bn).

4. Find the eigenvalues and eigenvectors of the matrix in Example 11.2.

5. For x = (x1, ..., xn), the Vandermonde matrix V(x) is

       V(x) = [ 1  x1  x1^2  x1^3  ···  x1^{n−1} ]
              [ 1  x2  x2^2  x2^3  ···  x2^{n−1} ]
              [ 1  x3  x3^2  x3^3  ···  x3^{n−1} ]
              [ ⋮  ⋮    ⋮     ⋮     ⋱     ⋮      ]
              [ 1  xn  xn^2  xn^3  ···  xn^{n−1} ].

   (a) Note that the matrix V̂ of Section 11.2 is the transpose of the Vandermonde matrix V(1, ω, ω^2, ..., ω^{n−1}), where ω = ωn = e^{2πi/n}.
   (b) Show that det(V(x)) is a polynomial of degree n(n−1)/2 in the variables x1, ..., xn.
   (c) Show that if xi = xj for some i ≠ j, then det(V(x)) = 0.
   (d) Use elementary row operations to show that the polynomial det(V(x)) is divisible by xi − xj, for each pair i, j, where i ≠ j.
   (e) Show that det(V(x)) = Π_{i>j} (xi − xj). Hint: Use parts (b) and (d) and think about the product of the diagonal entries of V(x).
6. Let p(x) = c0 + c1 x + c2 x^2 + ··· + cn−1 x^{n−1} be a polynomial of degree n − 1 in the variable x. Let x1, ..., xn denote n distinct values, and put yi = p(xi) for i = 1, ..., n. Then for c = (c0, c1, c2, ..., cn−1) and y = (y1, ..., yn) we have V(x)c = y. Since V(x) is invertible, we have the formula c = (V(x))⁻¹ y.
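This interpolation procedure can be sketched as follows (NumPy; the nodes and coefficients are sample data of our own choosing). It recovers the coefficient vector c by solving the Vandermonde system V(x)c = y.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])          # distinct nodes
c_true = np.array([1.0, -2.0, 0.5, 3.0])    # coefficients c0, c1, c2, c3

# np.vander with increasing=True builds rows (1, xi, xi^2, xi^3), as in the text
V = np.vander(x, increasing=True)
y = V @ c_true                              # yi = p(xi)

# V is invertible because the nodes are distinct, so c is recovered exactly
c = np.linalg.solve(V, y)
assert np.allclose(c, c_true)
```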
Chapter 12
Matrices of Zeros and Ones
Matrices of zeros and ones are used to represent various combinatorial structures, such as graphs and block designs. This chapter contains some basic results about matrices of zeros and ones, coming mostly from Herbert Ryser’s book Combinatorial Mathematics [Rys63] and notes taken in Ryser’s courses (1976–78) at Caltech. Graphs and block designs are discussed in more detail in later chapters.
12.1. Introduction: Adjacency Matrices and Incidence Matrices

A (0, 1)-matrix is a matrix in which every entry is either a 0 or a 1. Let us see how such a matrix can represent a graph.

Definition 12.1. A graph G = G(V, E) is a finite set V = {v1, ..., vn} and a set E of unordered pairs {vi, vj} of elements of V. The elements of V are called vertices, and the elements of E are called edges. If {vi, vj} ∈ E, we say vi and vj are adjacent.

We usually visualize vertices as points and edges as line segments between pairs of points. In general, edges of the form {vi, vi} are permitted and are called loops. However, sometimes we shall permit only edges with i ≠ j. See Chapter 15 for more detailed terminology.

Example 12.2. Let V = {v1, v2, v3} and E = {{v1, v2}, {v1, v3}, {v2, v3}}. Then G can be visualized as the triangle shown on the left-hand side of Figure 12.1.

Example 12.3. Let V = {v1, v2, v3, v4} and E = {{v1, v2}, {v1, v3}, {v1, v4}, {v3, v4}, {v4, v4}}. Then G is shown on the right-hand side of Figure 12.1; note the loop at vertex v4. We often use such pictures to represent graphs, but we can also use (0, 1)-matrices.
[Figure 12.1. Examples 12.2 and 12.3: the triangle graph of Example 12.2 (left) and the graph of Example 12.3, with a loop at vertex v4 (right).]
Definition 12.4. Let G(V, E) be a graph with n vertices. The adjacency matrix of G is the n × n matrix A defined by
• aij = aji = 1 if {vi, vj} is an edge of G,
• aij = aji = 0 if {vi, vj} is not an edge of G.
Example 12.5. For the graph in Example 12.2, the matrix
$$\begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}$$
is the adjacency matrix.

Example 12.6. For Example 12.3, the adjacency matrix is
$$\begin{pmatrix} 0 & 1 & 1 & 1 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 1 \end{pmatrix}.$$

Chapter 15 treats graphs and adjacency matrices in more detail, but we give an example here to illustrate how (0, 1)-matrices are useful in studying graphs. Let A be the adjacency matrix of a graph and consider A^2. The i, j entry of A^2 is $\sum_{k=1}^n a_{ik} a_{kj}$. The product $a_{ik} a_{kj}$ is either 0 or 1. It is 1 if and only if $a_{ik} = a_{kj} = 1$, i.e., if and only if {vi, vk} and {vk, vj} are both edges of G. So each 1 in the sum $\sum_{k=1}^n a_{ik} a_{kj}$ corresponds to a walk vi → vk → vj of length 2 in G, and the sum counts the number of walks of length 2 from vi to vj. The i, j entry of A^2 thus gives the number of walks of length 2 from vi to vj.

For Example 12.5, we have $A^2 = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}$.

For Example 12.6, we have $A^2 = \begin{pmatrix} 3 & 0 & 1 & 2 \\ 0 & 1 & 1 & 1 \\ 1 & 1 & 2 & 2 \\ 2 & 1 & 2 & 3 \end{pmatrix}$.
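The walk-counting interpretation is easy to check numerically. The following sketch (NumPy code, not part of the text) builds the adjacency matrix of the triangle in Example 12.2, compares A^2 with the matrix displayed above, and uses matrix_power for longer walks.

```python
import numpy as np

# Adjacency matrix of the triangle graph in Example 12.2.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

# Entry (i, j) of A^2 counts walks of length 2 between the corresponding vertices.
A2 = A @ A

# More generally, A^k counts walks of length k.
A3 = np.linalg.matrix_power(A, 3)
```

For instance, A3[0, 0] = 2 reflects the two closed walks of length 3 at v1, namely v1 → v2 → v3 → v1 and v1 → v3 → v2 → v1.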
More generally, one can show that the i, j entry of A^k gives the number of walks of length k from vi to vj. Thus, the same matrix multiplication rule, defined so that matrix multiplication will represent function composition, gives meaningful information when the matrices represent graphs. In the opposite direction, graphs can give useful information about matrices. Chapter 17 deals with the Perron–Frobenius theorem for nonnegative matrices; we will see that the structure of the graph of a nonnegative matrix gives information about the eigenvalues of that matrix.

Here is another use of (0, 1)-matrices to represent a combinatorial structure. Let S be a finite set with n elements; we call S an n-set. Put S = {x1, . . . , xn}. Let S1, . . . , Sm be a list of subsets of S; the subsets Si need not be distinct. We shall call such a system a configuration. A configuration can be represented by an m × n matrix A, defined as aij = 1 if xj ∈ Si and aij = 0 if xj ∉ Si. The ith row of A tells us the elements of subset Si; the positions of the ones tell us which elements of S are in Si. Column j tells us which subsets Si contain the element xj. The matrix A is called the incidence matrix of the configuration.

Example 12.7. Let S be the set {1, 2, 3, 4}, and let S1 = {1, 3, 4}, S2 = {1, 3}, and S3 = {2, 3}. The incidence matrix of this configuration is
$$A = \begin{pmatrix} 1 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 \end{pmatrix}.$$

Consider the product AA^T for an incidence matrix A. Entry i, j of AA^T is the dot product of rows i and j of A, and hence gives the number of elements in the intersection Si ∩ Sj. In Example 12.7 we have
$$AA^T = \begin{pmatrix} 3 & 2 & 1 \\ 2 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}.$$
Note that the diagonal entries give the sizes of the subsets Si. What do the entries of A^T A tell us? Entry i, j of A^T A is the dot product of columns i and j of A, so it counts the number of times columns i and j have a 1 in the same position. Hence, it counts the number of subsets which contain both of the points xi and xj.
For Example 12.7 we have
$$A^T A = \begin{pmatrix} 2 & 0 & 2 & 1 \\ 0 & 1 & 1 & 0 \\ 2 & 1 & 3 & 1 \\ 1 & 0 & 1 & 1 \end{pmatrix}.$$
Entry (1, 4) shows there is exactly one subset which contains both x1 and x4; entry (1, 3) tells us there are two subsets which contain both x1 and x3. The kth diagonal entry tells us the number of subsets which contain xk. Once again, matrix multiplication gives meaningful information about the configuration represented by the incidence matrix A. Chapter 13 deals with block designs, which are configurations having special properties. We can translate these
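A quick numerical check of these products, using the incidence matrix of Example 12.7 (a NumPy sketch, not part of the text):

```python
import numpy as np

# Incidence matrix of the configuration in Example 12.7:
# S = {1, 2, 3, 4}, S1 = {1, 3, 4}, S2 = {1, 3}, S3 = {2, 3}.
A = np.array([[1, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 1, 0]])

G = A @ A.T   # entry (i, j) is |S_i ∩ S_j|; the diagonal gives |S_i|
H = A.T @ A   # entry (i, j) counts the subsets containing both x_i and x_j
```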
properties into matrix equations and then use matrix algebra to get information about the block designs.
12.2. Basic Facts about (0, 1)-Matrices

Let A be a (0, 1)-matrix of size m × n. The (i, j) entry of AA^T is the dot product of rows i and j of A. Set i = j to see that the number of ones in row i is the (i, i) entry of AA^T. The (j, j) entry of A^T A gives the number of ones in column j of A.

Definition 12.8. A permutation matrix is a square (0, 1)-matrix in which each row and column has exactly one 1.

Example 12.9. Here are some examples of permutation matrices:
$$\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 \end{pmatrix}.$$

An equivalent definition is that a permutation matrix is a matrix obtained by permuting the columns (or rows) of the identity matrix. If P is an n × n permutation matrix and A is m × n, multiplying A on the right by P will permute the columns of A. Multiplying A on the left by an m × m permutation matrix permutes the rows of A.

Suppose A is the incidence matrix of a configuration of subsets S1, . . . , Sm of an n-set S. Reordering the rows of A is equivalent to reordering the subsets S1, . . . , Sm, while reordering the columns of A is equivalent to reordering the elements x1, . . . , xn of S. Typically, we are interested in properties of configurations which are invariant under such permutations, so the specific order of the subsets and elements is not important. We often rearrange the rows and columns of A for convenience, or, equivalently, we replace A by PAQ, where P and Q are permutation matrices. We say two configurations, S1, . . . , Sm of an n-set S, and T1, . . . , Tm of an n-set T, with incidence matrices A and A′, respectively, are isomorphic if A′ = PAQ for some permutation matrices P and Q (of sizes m × m and n × n, respectively).

Note these basic facts about the incidence matrix A of a configuration.
• The number of ones in row i of A is |Si| and is equal to the (i, i) entry of AA^T.
• The number of ones in column j of A equals the number of sets Si which contain the element xj. This number is the (j, j) entry of A^T A.
• The i, j entry of AA^T equals |Si ∩ Sj|.
• The i, j entry of A^T A equals the number of sets containing the pair xi, xj.
• Letting e denote a column vector of ones, we have $Ae = (|S_1|, |S_2|, \ldots, |S_m|)^T$.
• If |S1| = |S2| = · · · = |Sm| = k, then Ae = ke.
We use J to denote a matrix in which every entry is one; $J_{m,n}$ means the all-ones matrix of size m × n and $J_n$ means the n × n matrix of ones. However, we use e to denote a column vector of ones.

Example 12.10. $J_3 = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}$ and $J_{2,4} = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{pmatrix}$.

We have $J_n e = ne$. More generally, suppose A is an m × n matrix and $R_i = \sum_{j=1}^n a_{ij}$ is the sum of the entries in row i. Then $Ae = (R_1, R_2, \ldots, R_m)^T$. Letting $C_j = \sum_{i=1}^m a_{ij}$ denote the sum of the entries in column j of A, we have $e^T A = (C_1, C_2, \ldots, C_n)$. Observe also that $e^T A e$ gives the sum of all the entries of A.
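These identities are immediate to check in NumPy; the matrix A below is chosen here purely for illustration (a sketch, not from the text):

```python
import numpy as np

A = np.array([[1, 0, 1, 0],
              [0, 1, 1, 1]])

e_n = np.ones(A.shape[1])   # column vector of ones of length n
e_m = np.ones(A.shape[0])   # column vector of ones of length m

row_sums = A @ e_n          # the vector (R_1, ..., R_m)
col_sums = e_m @ A          # the vector (C_1, ..., C_n), i.e. e^T A
total = e_m @ A @ e_n       # e^T A e, the sum of all entries of A
```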
12.3. The Minimax Theorem of König and Egerváry

A line of a matrix designates either a row or a column.

Definition 12.11. Let A be a (0, 1)-matrix of size m × n. The term rank of A is the maximal number of ones in A with no two on a line.

For example, any n × n permutation matrix has term rank n. The zero matrix has term rank 0. Note that term rank is very different from the ordinary rank of a matrix; $J_n$ has rank 1, but term rank n.

Example 12.12. The term rank of $\begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix}$ is three.

Observe that to cover all of the ones in the matrix in Example 12.12, we need at least three lines; the first two rows and the last column contain all of the ones in the matrix. This illustrates the following minimax theorem, which appears in Ryser's book [Rys63]. The proof below comes from Ryser's 1969 paper [Rys69], where Ryser attributes the theorem to König and Egerváry and points out that it is equivalent to the theorem of P. Hall [Hall35] about systems of distinct representatives, presented in the next section. For the proof in [Rys69], Ryser writes "The author is of the opinion that this proof was first pointed out to him by W. B. Jurkat."

Theorem 12.13 (König and Egerváry). Let A be a (0, 1)-matrix of size m × n. The minimal number of lines needed to cover all of the ones in A is equal to the term rank of A.
Proof. Let ρ denote the term rank of A, and let ρ′ be the minimal number of lines needed to cover all of the ones in A. Since A has ρ ones, no two of which are on the same line, it certainly takes at least ρ lines to cover all the ones in A, so ρ ≤ ρ′. We need to prove ρ′ ≤ ρ. We use induction on the number of lines, m + n. The result clearly holds if m = 1 or n = 1, so we may assume m > 1 and n > 1. We say a minimal covering is proper if it does not consist of all m rows of A or of all n columns of A. The proof now splits into two cases.

Case 1. Assume A has a proper covering of e rows and f columns, where ρ′ = e + f. Note that e < m and f < n. Permute the rows and columns of A so that the lines in this proper covering appear as the first e rows and first f columns. Permuting rows and columns does not affect the result, so we may assume A is already in this form, and hence
$$A = \begin{pmatrix} * & A_1 \\ A_2 & 0 \end{pmatrix},$$
where the asterisk represents a submatrix of size e × f, the matrix A1 has e rows, the matrix A2 has f columns, and the zero block is (m − e) × (n − f). We know m − e and n − f are both positive because e < m and f < n. Now, A1 cannot be covered by fewer than e lines because a covering of A1 by fewer than e lines, together with the first f columns, would give a covering of A with fewer than e + f = ρ′ lines. Similarly, A2 cannot be covered by fewer than f lines. Since A1 and A2 both have fewer lines than A, the induction hypothesis tells us we can find e ones in A1, no two on a line, and f ones in A2, no two on a line. This then gives us e + f ones in A, with no two on a line, and hence ρ′ = e + f ≤ ρ.

Case 2. Suppose A has no minimal covering which is proper. Then ρ′ = min{m, n}. Choose an entry in A which is 1; say aij = 1. Let A1 denote the (m − 1) × (n − 1) matrix which remains after deleting row i and column j from A. If A1 could be covered by fewer than min{m − 1, n − 1} lines, we could use these lines, together with row i and column j, to get a proper minimal covering of A. Therefore, any covering of A1 has at least min{m − 1, n − 1} lines. Applying the induction hypothesis to A1, there must be min{m − 1, n − 1} ones in A1, with no two on a line. Using these ones together with the one in the i, j position, we have min{m, n} ones in A, with no two on a line. So ρ′ = min{m, n} ≤ ρ, and we are done. □
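Term rank equals the size of a maximum matching in the bipartite graph whose vertex classes are the rows and columns of A, with row i joined to column j when aij = 1. The sketch below (not part of the text) computes it with the standard augmenting-path routine and checks Example 12.12; by Theorem 12.13, the value returned is also the minimal number of covering lines.

```python
def term_rank(A):
    """Maximal number of ones with no two on a line, via bipartite matching."""
    m, n = len(A), len(A[0])
    match_col = [-1] * n          # match_col[j] = row currently matched to column j

    def augment(i, seen):
        # Try to match row i, reassigning earlier rows along an augmenting path.
        for j in range(n):
            if A[i][j] == 1 and not seen[j]:
                seen[j] = True
                if match_col[j] == -1 or augment(match_col[j], seen):
                    match_col[j] = i
                    return True
        return False

    return sum(augment(i, [False] * n) for i in range(m))

# The matrix of Example 12.12, whose term rank is three.
A = [[1, 0, 1, 0],
     [0, 1, 1, 1],
     [0, 0, 0, 1],
     [0, 0, 0, 1]]
```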
12.4. SDRs, a Theorem of P. Hall, and Permanents

We now use Theorem 12.13 to prove a theorem of Philip Hall about systems of distinct representatives. Let S be an n-set, and let S1, . . . , Sm be m subsets of S, not necessarily distinct. We say that D = (a1, . . . , am) is a system of distinct representatives (SDR) provided that a1, . . . , am are all distinct and ai ∈ Si. We say ai represents Si.

Example 12.14. Let S = {1, 2, 3, 4, 5}, with S1 = {1, 2}, S2 = {1, 2, 3}, S3 = {4, 5}, and S4 = {4, 5}. Some SDRs for this system are (1, 3, 4, 5), (2, 1, 4, 5), and (2, 3, 5, 4).

If A is the incidence matrix of the configuration S1, . . . , Sm, then an SDR corresponds to selecting a one from each row of A such that no two of the selected
ones are in the same column. For Example 12.14 we have
$$A = \begin{pmatrix} 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 \end{pmatrix}.$$

Theorem 12.15 (P. Hall [Hall35]). The subsets S1, . . . , Sm of an n-set S have an SDR if and only if, for all k = 1, 2, . . . , m and all k-subsets {i1, i2, . . . , ik} of {1, 2, . . . , m}, the set union $S_{i_1} \cup S_{i_2} \cup \cdots \cup S_{i_k}$ contains at least k elements.

Proof. Suppose S1, . . . , Sm have an SDR, say (a1, a2, . . . , am). Then for any k subsets $S_{i_1}, S_{i_2}, \ldots, S_{i_k}$, the union $S_{i_1} \cup S_{i_2} \cup \cdots \cup S_{i_k}$ contains the k elements $a_{i_1}, a_{i_2}, \ldots, a_{i_k}$.

Conversely, suppose that for any k = 1, 2, . . . , m and k-subset {i1, i2, . . . , ik} of {1, 2, . . . , m}, the union $S_{i_1} \cup S_{i_2} \cup \cdots \cup S_{i_k}$ contains at least k elements. The case k = m tells us that m ≤ n. Let A be the incidence matrix of the configuration. We need to show that we can select a one from each row of A such that no two of the selected ones are in the same column—i.e., we need to show that A has term rank m. We argue by contradiction. Suppose the term rank is less than m. Then Theorem 12.13 tells us we can cover the nonzero entries of A with e rows and f columns, where e + f < m. Permute the rows and columns of A so that these are the first e rows and first f columns. Permuting the rows is equivalent to reordering the sets Si, and permuting the columns amounts to relabeling the elements xi. So, without loss of generality, we may assume that A has the form
$$\begin{pmatrix} * & A_1 \\ A_2 & 0 \end{pmatrix},$$
where the zero block has size (m − e) × (n − f). Now, A2 is (m − e) × f, and since e + f < m, we have m − e > f. So A2 has more rows than columns. But then the last m − e rows of A correspond to m − e subsets whose union contains fewer than m − e elements. This contradicts our hypothesis. Therefore, A does have term rank m and the configuration has an SDR. □

The determinant of an n × n matrix A is given by the formula
(12.1) $\det(A) = \sum_\sigma (-1)^\sigma a_{1,\sigma(1)} a_{2,\sigma(2)} \cdots a_{n,\sigma(n)}$,
where the sum is over all permutations σ of the numbers 1, . . . , n, and the sign $(-1)^\sigma$ is +1 if σ is an even permutation and −1 if σ is an odd permutation. Each term of the sum is a signed product of n entries of A, with exactly one entry chosen from each row and each column. If we use these same products but simply add them all up, we get the permanent of A. The permanent can also be defined for matrices which are not square.

Definition 12.16. If A is an m × n matrix with m ≤ n, the permanent of A is
(12.2) $\operatorname{per}(A) = \sum_\sigma a_{1,\sigma(1)} a_{2,\sigma(2)} \cdots a_{m,\sigma(m)}$,
where the sum is over all maps σ : {1, 2, . . . , m} → {1, 2, . . . , n} which are injective (one-to-one).
For example,
$$\operatorname{per}\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = a_{11}a_{22} + a_{12}a_{21}$$
and
$$\operatorname{per}\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix} = a_{11}a_{22} + a_{11}a_{23} + a_{12}a_{21} + a_{12}a_{23} + a_{13}a_{21} + a_{13}a_{22}.$$
For an m × m matrix, the number of terms in the sum is m!. If A has more rows than columns, one can extend the deﬁnition of permanent to A by using per(AT ). Like the determinant, the permanent function is linear in each row and column. The formula for computing a determinant by cofactor expansion can be modiﬁed in the obvious way to get a similar formula for permanents—just drop the minus signs. However, the permanent is actually more diﬃcult to compute than the determinant, because the usual row reduction method does not apply—adding a multiple of one row to another does not change the determinant of a matrix, but it does change the permanent. Also, the permanent is not multiplicative—i.e., per(AB) does not generally equal per(A)per(B). We mention the permanent here because of the connection between permanents and SDRs. Suppose A is the incidence matrix of a conﬁguration. Then each entry of A is either a zero or a one, and any nonzero product a1,σ(1) a2,σ(2) · · · am,σ(m) corresponds to a choice of a one from each row, no two being in the same column. This then corresponds to an SDR for the conﬁguration. Thus, we have the following. Theorem 12.17. Let A be the incidence matrix for a conﬁguration of m subsets S1 , S2 , . . . , Sm of the nset S. Then per(A) gives the number of SDRs of the conﬁguration.
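For small matrices the permanent can be computed directly from Definition 12.16, since itertools.permutations(range(n), m) enumerates exactly the injective maps σ. The sketch below (not from the text) counts the SDRs of Example 12.14 via Theorem 12.17.

```python
from itertools import permutations

def per(A):
    """Permanent of an m x n matrix with m <= n (Definition 12.16)."""
    m, n = len(A), len(A[0])
    total = 0
    for cols in permutations(range(n), m):   # injective maps {1..m} -> {1..n}
        prod = 1
        for i, j in enumerate(cols):
            prod *= A[i][j]
        total += prod
    return total

# Incidence matrix of Example 12.14; per(A) counts its SDRs (Theorem 12.17).
A = [[1, 1, 0, 0, 0],
     [1, 1, 1, 0, 0],
     [0, 0, 0, 1, 1],
     [0, 0, 0, 1, 1]]
```

The count 8 is consistent with the structure of Example 12.14: rows 1 and 2 can be represented in four ways using the points 1, 2, 3, and rows 3 and 4 in two ways using the points 4, 5.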
12.5. Doubly Stochastic Matrices and Birkhoff's Theorem

Now we come to a famous theorem about doubly stochastic matrices.

Definition 12.18. A real n × n matrix A is called doubly stochastic if aij ≥ 0 for all i, j and all the line sums are 1.

For example, $\begin{pmatrix} .2 & .8 \\ .8 & .2 \end{pmatrix}$ and $\begin{pmatrix} .2 & .7 & .1 \\ .3 & .1 & .6 \\ .5 & .2 & .3 \end{pmatrix}$ are doubly stochastic matrices. Any permutation matrix is doubly stochastic, as is the matrix $\frac{1}{n} J_n$. The nonnegative matrix A is doubly stochastic if and only if Ae = e and e^T A = e^T. If A is nonnegative and all of the line sums are s, then $\frac{1}{s} A$ is doubly stochastic. Suppose c1, . . . , ct are nonnegative real numbers which sum to one: c1 + c2 + · · · + ct = 1. Let P1, . . . , Pt be permutation matrices. Then the convex combination $\sum_{i=1}^t c_i P_i$ is a doubly stochastic matrix. The Birkhoff theorem says that every doubly stochastic matrix can be obtained in this way.
Before stating and proving this theorem, observe that the notion of term rank is easily extended to general matrices by replacing the term "1" with "nonzero". Theorem 12.13 then becomes:

Theorem 12.19. Let A be an m × n matrix. The minimal number of lines needed to cover all of the nonzero entries of A is equal to the maximal number of nonzero entries of A such that no two are on a line.

Our proof of Birkhoff's theorem relies on the following lemma.

Lemma 12.20. Let A be an n × n nonnegative matrix with all line sums equal to s, where s > 0. Then A has term rank n.

Proof. Suppose A has term rank ρ. Then all of the nonzero entries of A can be covered by ρ lines. Since each line sum is s, this means the sum of all the entries of A is at most ρs. But the sum of all the entries of A is exactly ns, and hence ns ≤ ρs. So n ≤ ρ, and A has term rank n. □

Theorem 12.21 (Birkhoff [Birk46]). Let A be an n × n doubly stochastic matrix. Then there exist a finite number of permutation matrices P1, . . . , Pt and positive real numbers c1, . . . , ct such that $A = \sum_{i=1}^t c_i P_i$.
Proof. By Lemma 12.20 the matrix A has term rank n, so we can find n positive entries of A with no two on a line. Let c1 be the smallest of these entries and let P1 be the permutation matrix with ones in the positions of these n positive entries. Then A − c1P1 is nonnegative, and all of the line sums of A − c1P1 are 1 − c1. If c1 = 1, we have A = P1, and we are done. Otherwise, A − c1P1 is nonzero but has at least one more zero entry than the original matrix A. We then repeat the argument on A − c1P1 to obtain a second permutation matrix, P2, and a positive number c2 such that A − c1P1 − c2P2 is nonnegative, has line sums 1 − c1 − c2, and has at least one more zero entry than A − c1P1. If 1 − c1 − c2 = 0, we have A = c1P1 + c2P2. Otherwise, we may repeat the argument on A − c1P1 − c2P2. At each step, we produce a matrix with more zero entries than in the previous step. Hence, after a finite number, say t, of iterations, we must reach the zero matrix, and thus we get $A = \sum_{i=1}^t c_i P_i$. □
We illustrate the argument of the proof with a 3 × 3 example.

Example 12.22. Let $A = \begin{pmatrix} .2 & .7 & .1 \\ .3 & .1 & .6 \\ .5 & .2 & .3 \end{pmatrix}$. We start with the entries in positions (1, 2), (2, 3), and (3, 1), and put c1 = .5 and $P_1 = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}$. Then
$$A - .5P_1 = A - \begin{pmatrix} 0 & .5 & 0 \\ 0 & 0 & .5 \\ .5 & 0 & 0 \end{pmatrix} = \begin{pmatrix} .2 & .2 & .1 \\ .3 & .1 & .1 \\ 0 & .2 & .3 \end{pmatrix}.$$
Next, take positions (1, 2), (2, 1), and (3, 3) with c2 = .2 and $P_2 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$. So
$$A - .5P_1 - .2P_2 = \begin{pmatrix} .2 & .2 & .1 \\ .3 & .1 & .1 \\ 0 & .2 & .3 \end{pmatrix} - \begin{pmatrix} 0 & .2 & 0 \\ .2 & 0 & 0 \\ 0 & 0 & .2 \end{pmatrix} = \begin{pmatrix} .2 & 0 & .1 \\ .1 & .1 & .1 \\ 0 & .2 & .1 \end{pmatrix}.$$
Now use c3 = .1 and P3 = I to get
$$A - .5P_1 - .2P_2 - .1P_3 = \begin{pmatrix} .1 & 0 & .1 \\ .1 & 0 & .1 \\ 0 & .2 & 0 \end{pmatrix}.$$
Finally, with c4 = c5 = .1, $P_4 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$, and $P_5 = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$, we get A − .5P1 − .2P2 − .1P3 − .1P4 − .1P5 = 0. So A = .5P1 + .2P2 + .1P3 + .1P4 + .1P5, or
$$\begin{pmatrix} .2 & .7 & .1 \\ .3 & .1 & .6 \\ .5 & .2 & .3 \end{pmatrix} = .5\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix} + .2\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} + .1\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} + .1\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} + .1\begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}.$$
The representation of the doubly stochastic matrix A as a convex sum of permutation matrices is not unique. In the above example, if we start with the entries in positions (1, 2), (2, 1), and (3, 3), then in the first step we have c1 = .3 and $P_1 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$, resulting in a different decomposition:
$$\begin{pmatrix} .2 & .7 & .1 \\ .3 & .1 & .6 \\ .5 & .2 & .3 \end{pmatrix} = .3\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} + .4\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix} + .2\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} + .1\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}.$$

We now use Theorem 12.21 to prove the Hoffman–Wielandt Theorem 10.8, mentioned in Chapter 10. First, some preliminaries. For matrices A, B of the same size, the entrywise product is called the Hadamard product and is denoted A ◦ B. Thus, the ij entry of A ◦ B is $a_{ij} b_{ij}$. Suppose U is a unitary matrix. Then $U \circ \bar{U}$ is a doubly stochastic matrix.

Definition 12.23. A doubly stochastic matrix P is said to be orthostochastic if $P = U \circ \bar{U}$ for some unitary matrix U.
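Both decompositions in Example 12.22 can be verified numerically. In the sketch below (not part of the text), the helper perm(p) builds the permutation matrix whose one in row i sits in column p[i], with rows and columns indexed from 0.

```python
import numpy as np

A = np.array([[.2, .7, .1],
              [.3, .1, .6],
              [.5, .2, .3]])

def perm(p):
    """Permutation matrix with a one in position (i, p[i]) for each row i."""
    P = np.zeros((len(p), len(p)))
    for i, j in enumerate(p):
        P[i, j] = 1
    return P

# The decomposition produced step by step in Example 12.22.
B = (.5 * perm([1, 2, 0]) + .2 * perm([1, 0, 2]) + .1 * perm([0, 1, 2])
     + .1 * perm([0, 2, 1]) + .1 * perm([2, 0, 1]))

# The alternative decomposition starting at positions (1,2), (2,1), (3,3).
C = (.3 * perm([1, 0, 2]) + .4 * perm([1, 2, 0])
     + .2 * perm([0, 2, 1]) + .1 * perm([2, 1, 0]))
```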
From the example $A = \frac{1}{2}\begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}$, we can see that not every doubly stochastic matrix is orthostochastic.

Lemma 12.24. Let $D = \operatorname{diag}(d_1, \ldots, d_n)$ and $E = \operatorname{diag}(e_1, \ldots, e_n)$ be diagonal matrices, and let R, S be n × n matrices. Then
(12.3) $\operatorname{trace}(RDSE) = (e_1, e_2, \ldots, e_n)\,(R \circ S^T)\,(d_1, d_2, \ldots, d_n)^T$.

Proof. This is a straightforward computation. The (i, i) entry of RDSE is
$$(\text{row } i \text{ of } RD)(\text{column } i \text{ of } SE) = \sum_{j=1}^n r_{ij} d_j s_{ji} e_i.$$
The trace of RDSE is then $\sum_{i=1}^n \sum_{j=1}^n r_{ij} d_j s_{ji} e_i$. Since $r_{ij} s_{ji}$ is the i, j entry of $R \circ S^T$, equation (12.3) follows. □
Theorem 12.25 (Hoffman and Wielandt [HW53]). Let A and B be n × n normal matrices with eigenvalues $\alpha_1, \ldots, \alpha_n$ and $\beta_1, \ldots, \beta_n$, respectively. Then there are permutations μ, σ of 1, . . . , n such that
(12.4) $\sum_{i=1}^n |\alpha_i - \beta_{\mu(i)}|^2 \;\le\; \|A - B\|_F^2 \;\le\; \sum_{i=1}^n |\alpha_i - \beta_{\sigma(i)}|^2$.

Proof. Let $D_A = \operatorname{diag}(\alpha_1, \ldots, \alpha_n)$ and $D_B = \operatorname{diag}(\beta_1, \ldots, \beta_n)$. Since A and B are normal, we have $A = U D_A U^*$ and $B = V D_B V^*$ for some unitary matrices U and V. Then $A - B = U(D_A - U^* V D_B V^* U)U^*$. Put $W = U^* V$. Then W is unitary and $A - B = U(D_A - W D_B W^*)U^*$. Since the Frobenius norm is unitarily invariant, we have $\|A - B\|_F = \|D_A - W D_B W^*\|_F$. We then have
$$\|A - B\|_F^2 = \operatorname{trace}\big[(D_A - W D_B W^*)(D_A - W D_B W^*)^*\big]$$
$$= \operatorname{trace}(D_A \bar{D}_A + W D_B \bar{D}_B W^* - W D_B W^* \bar{D}_A - D_A W \bar{D}_B W^*)$$
$$= \operatorname{trace}(D_A \bar{D}_A + D_B \bar{D}_B) - \operatorname{trace}(W D_B W^* \bar{D}_A + D_A W \bar{D}_B W^*).$$
Since $(W D_B W^* \bar{D}_A)^* = D_A W \bar{D}_B W^*$, we get
$$\operatorname{trace}(W D_B W^* \bar{D}_A + D_A W \bar{D}_B W^*) = 2\,\Re\big(\operatorname{trace}(W D_B W^* \bar{D}_A)\big).$$
From Lemma 12.24,
$$\operatorname{trace}(W D_B W^* \bar{D}_A) = (\bar{\alpha}_1, \ldots, \bar{\alpha}_n)\,(W \circ \bar{W})\,(\beta_1, \ldots, \beta_n)^T.$$
Hence,
$$\|A - B\|_F^2 = \operatorname{trace}(D_A \bar{D}_A + D_B \bar{D}_B) - 2\,\Re\Big[(\bar{\alpha}_1, \ldots, \bar{\alpha}_n)\,(W \circ \bar{W})\,(\beta_1, \ldots, \beta_n)^T\Big].$$
For an n × n matrix M, put
$$f(\alpha, M, \beta) = \Re\Big[(\bar{\alpha}_1, \ldots, \bar{\alpha}_n)\,M\,(\beta_1, \ldots, \beta_n)^T\Big].$$
Since W is unitary, $W \circ \bar{W}$ is doubly stochastic and hence is a convex combination of permutation matrices. Of the permutation matrices which appear in that sum, select one, $P_\sigma$, for which $f(\alpha, P_\sigma, \beta)$ is minimal, and another, $P_\mu$, for which $f(\alpha, P_\mu, \beta)$ is maximal. Then $f(\alpha, P_\sigma, \beta) \le f(\alpha, W \circ \bar{W}, \beta) \le f(\alpha, P_\mu, \beta)$, so
$$\operatorname{trace}(D_A \bar{D}_A + D_B \bar{D}_B) - 2 f(\alpha, P_\mu, \beta) \;\le\; \|A - B\|_F^2 \;\le\; \operatorname{trace}(D_A \bar{D}_A + D_B \bar{D}_B) - 2 f(\alpha, P_\sigma, \beta).$$
It is now just a matter of algebra to check that for any permutation φ,
$$\sum_{i=1}^n |\alpha_i|^2 + \sum_{i=1}^n |\beta_i|^2 - 2\,\Re\Big[(\bar{\alpha}_1, \ldots, \bar{\alpha}_n)\,P_\phi\,(\beta_1, \ldots, \beta_n)^T\Big] = \sum_{i=1}^n |\alpha_i - \beta_{\phi(i)}|^2. \qquad \Box$$
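The two-sided bound (12.4) can be probed by brute force over all n! pairings of eigenvalues; the Hermitian (hence normal) matrices below are chosen purely for illustration (a sketch, not from the text).

```python
import numpy as np
from itertools import permutations

# Two Hermitian, hence normal, matrices chosen for illustration.
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
B = np.array([[1., 0., 1.],
              [0., 2., 0.],
              [1., 0., 5.]])

alpha = np.linalg.eigvalsh(A)
beta = np.linalg.eigvalsh(B)
fro2 = np.linalg.norm(A - B, 'fro') ** 2

# Sum of squared eigenvalue gaps for every pairing of the spectra.
sums = [sum(abs(a - b) ** 2 for a, b in zip(alpha, p))
        for p in permutations(beta)]
# By Theorem 12.25, some pairing falls below ||A - B||_F^2 and some above it.
```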
Finally, we mention the connection between doubly stochastic matrices and majorization. For real vectors $(x_1, \ldots, x_n)$ and $(y_1, \ldots, y_n)$ with $x_1 \ge x_2 \ge \cdots \ge x_n$ and $y_1 \ge y_2 \ge \cdots \ge y_n$, we say that y majorizes x (or, x is majorized by y) if $x_1 + x_2 + \cdots + x_k \le y_1 + y_2 + \cdots + y_k$ holds for k = 1, 2, . . . , n, with equality when k = n. It can be shown that y majorizes x if and only if x = Sy for some doubly stochastic matrix S. We leave it as an exercise to show that if x = Sy, then y majorizes x. The converse is harder to prove.
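The easy direction (x = Sy implies y majorizes x) is simple to probe numerically; in the sketch below (not from the text) S = (1/3)J_3, so each entry of x is the average of the entries of y.

```python
import numpy as np

def majorizes(y, x, tol=1e-9):
    """True if y majorizes x: with both vectors sorted in decreasing order,
    every prefix sum of x is at most the prefix sum of y, with equality at n."""
    xs = np.sort(x)[::-1]
    ys = np.sort(y)[::-1]
    cx, cy = np.cumsum(xs), np.cumsum(ys)
    return bool(np.all(cx <= cy + tol)) and abs(cx[-1] - cy[-1]) < tol

y = np.array([5., 3., 1.])
S = np.full((3, 3), 1 / 3)   # a doubly stochastic matrix
x = S @ y                    # x = (3, 3, 3), the average of y's entries
```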
12.6. A Theorem of Ryser

We conclude this chapter with a theorem of Ryser [Rys69].

Definition 12.26. A triangle of a (0, 1)-matrix is a 3 × 3 submatrix such that all line sums are two—i.e., any of the six matrices obtainable by permuting the rows and/or columns of $\begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}$.

The matrix shown in the definition is the adjacency matrix of the triangle graph (Example 12.2); hence, the name triangle.
Theorem 12.27 (Ryser [Rys69]). Let A be a (0, 1)-matrix of size m × n such that AA^T is positive (i.e., every entry of AA^T is positive) and A contains no triangles. Then A must contain a column of ones.
Proof. We use induction on m. The result clearly holds for m = 1 and m = 2, so we may assume m ≥ 3. Delete row one of A and consider the submatrix of A consisting of the last m − 1 rows. This submatrix satisfies the hypotheses of the theorem and so, by the induction hypothesis, contains a column of ones. Hence, either A itself contains a column of ones or it contains a column of the form $(0, 1, 1, 1, \ldots, 1)^T$. Now we may delete row two of A and apply the same argument to conclude that either A has a column of ones or a column of the form $(1, 0, 1, 1, \ldots, 1)^T$. Finally, delete row three of A to see that either A has a column of ones or a column of the form $(1, 1, 0, 1, \ldots, 1)^T$. Now, A contains no triangle, so it cannot contain all three of the columns
$$(0, 1, 1, 1, \ldots, 1)^T, \quad (1, 0, 1, 1, \ldots, 1)^T, \quad \text{and} \quad (1, 1, 0, 1, \ldots, 1)^T.$$
Hence, A must contain a column of ones. □
This theorem can be used to prove the one-dimensional case of Helly's theorem ([Helly23]; see also [Radon21, König22]).

Theorem 12.28 (Helly). Suppose we have a finite number of closed intervals on the real line such that every pair of the intervals has a nonempty intersection. Then all of the intervals have a point in common.
Proof. Let the intervals be $I_1, \ldots, I_m$, and let the endpoints of the intervals be ordered from smallest to largest: $e_1 < e_2 < e_3 < \cdots < e_n$. Let A be the m × n incidence matrix of endpoints versus intervals. Thus, $a_{kj} = 1$ if $e_j \in I_k$ and $a_{kj} = 0$ if $e_j \notin I_k$. Row k of A tells us which of the points $e_j$ are in interval $I_k$. Since the endpoints $e_j$ were listed in increasing order, each row of A has the form 0 0 · · · 0 1 1 · · · 1 0 0 · · · 0, i.e., all of the ones in a row occur consecutively. This property also applies to any submatrix, so A cannot contain a triangle, because any triangle contains the row 1 0 1. Since each pair of intervals has a nonempty intersection, the dot product of each pair of rows of A is nonzero, and hence AA^T is positive. By Theorem 12.27, A contains a column of ones, say column k. This column gives an endpoint $e_k$ which is in all of the intervals. □
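The construction in this proof is easy to carry out for concrete intervals; a small sketch (intervals chosen here for illustration, not from the text):

```python
# Pairwise-overlapping closed intervals, chosen for illustration.
intervals = [(0, 4), (1, 6), (3, 7), (2, 5)]

# The endpoint-versus-interval incidence matrix from the proof:
# row k marks which sorted endpoints lie in interval I_k.
endpoints = sorted({e for interval in intervals for e in interval})
A = [[1 if a <= e <= b else 0 for e in endpoints] for (a, b) in intervals]

# Each row has its ones consecutively, so A contains no triangle; Theorem 12.27
# then guarantees a column of ones, i.e. an endpoint common to all intervals.
common = [endpoints[j] for j in range(len(endpoints))
          if all(row[j] == 1 for row in A)]
```

Here the common intersection is [3, 4], so the endpoints 3 and 4 both give columns of ones.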
Exercises

1. Let A be a (0, 1)-matrix of size m by n with m ≤ n. Show that per(A) = 0 if and only if the term rank ρ(A) is less than m.

Acknowledgment: Problems 2 through 6 come from a problem set of H. J. Ryser, 1976.

2. Let A be a (0, 1)-matrix of order n. Suppose that det(A) ≠ 0. Explain why we may conclude that A has term rank n. Write down a matrix of order n > 1 that has term rank n but such that det(A) = 0.

3. Let A be a (0, 1)-matrix of size m by n. Suppose that A contains a zero submatrix of size e by f, where e + f > n. Show that then it is impossible for A to have term rank equal to m.

4. Let A be the (0, 1)-matrix of order n of the form $A = \begin{pmatrix} 0 & e^T \\ e & I_{n-1} \end{pmatrix}$, where e is the column vector of n − 1 ones. Evaluate det(A) and per(A).

5. Let A be a matrix of order n such that all of the elements of A are equal to ±1. Suppose that |per(A)| = n!, where the bars denote absolute value. Show that the matrix A has rank one.

6. Let A be a (0, 1)-matrix of order n that satisfies the matrix equation A^2 = J, where J is the matrix of ones of order n. Show that A has all of its line sums equal to a positive integer k and that n = k^2. Moreover, prove that A must have exactly k ones on its main diagonal. Hint: Use eigenvalues and eigenvectors.

7. Suppose that in Theorem 12.15 the union $S_{i_1} \cup S_{i_2} \cup \cdots \cup S_{i_k}$ always contains at least k + 1 elements. Let x be any element of S1. Show that the sets S1, . . . , Sm have an SDR with x representing S1.

8. Suppose A is an n × n (0, 1)-matrix satisfying $A + A^T = J - I$. Prove that the term rank of A is at least n − 1.

9. Let A be an m × n (0, 1)-matrix. Suppose there is a positive integer p such that each row of A contains at least p ones and each column of A contains at most p ones. Prove that per(A) > 0.

10. Let $(x_1, \ldots, x_n)$ and $(y_1, \ldots, y_n)$ be real vectors with $x_1 \ge x_2 \ge \cdots \ge x_n$ and $y_1 \ge y_2 \ge \cdots \ge y_n$. Suppose there is a doubly stochastic matrix S such that x = Sy. Show that for k = 1, 2, . . . , n, $x_1 + x_2 + \cdots + x_k \le y_1 + y_2 + \cdots + y_k$, with equality for k = n (i.e., show that y majorizes x).

11. Let A be an n × n invertible matrix. Show that all row and column sums of $A^{-1} \circ A^T$ are one.
12. Let A be an n × n doubly stochastic matrix. Let A′ be the matrix of order n − 1 obtained by deleting the row and column of a positive element of A. Prove that per(A′) > 0.

13. Prove that the product of two n × n doubly stochastic matrices is doubly stochastic.

14. We say a matrix is row stochastic if all of its entries are nonnegative and all row sums are one. Show that if A and B are row stochastic matrices of sizes r × s and s × t, respectively, then AB is row stochastic. (A similar definition and result hold for column stochastic matrices.)

15. Give an example of a square matrix which is row stochastic but not column stochastic.

16. Let A be a (0, 1) square matrix with all line sums equal to k. Prove that A is a sum of k permutation matrices.

17. Let A be a square matrix in which each row sum is r and each column sum is c. Prove that r = c.

18. Let 0 ≤ α ≤ 1. Find all 2 × 2 doubly stochastic matrices with α in the (1, 1) entry.

19. Show that there is a 3 × 3 doubly stochastic matrix of the form $\begin{pmatrix} \alpha & \beta & * \\ \beta & \alpha & * \\ * & * & * \end{pmatrix}$ if and only if 0 ≤ α, 0 ≤ β, and $\frac{1}{2} \le \alpha + \beta \le 1$.

20. Express the doubly stochastic matrix A below in the form $c_1 P_1 + c_2 P_2 + \cdots + c_t P_t$, where the $P_i$'s are permutation matrices and the $c_i$'s are positive real numbers which sum to 1:
$$A = \begin{pmatrix} .3 & .5 & .2 \\ .6 & .1 & .3 \\ .1 & .4 & .5 \end{pmatrix}.$$
Chapter 13
Block Designs
We now use (0, 1)-matrices to get information about special types of configurations called block designs. Most of this chapter comes from [Rys63] and from notes taken in Ryser's courses at Caltech.
13.1. t-Designs

Definition 13.1. A t-design, with parameters (v, k, λ), is a collection D of subsets (called blocks) of a v-set S such that the following hold:
• Every member of D has k elements.
• Any set of t elements from S is contained in exactly λ of the blocks.
The elements of S are called points. To exclude degenerate cases, we assume S and D are nonempty, that v ≥ k ≥ t, and λ > 0. We also refer to such configurations as t-(v, k, λ) designs.

A Steiner system is a t-design with λ = 1. A t-design with t = 2 and k < v − 1 is called a balanced incomplete block design or BIBD. These arise in statistics in the design of experiments, where the elements of S are called varieties; hence, the use of the letter v for the number of elements of S. The following simple example could be viewed as a starting point for design theory.

Example 13.2. Let S be a v-set, and let D be the collection of all subsets of S of size k. Suppose t ≤ k, and we select any t points of S. There are then $\binom{v-t}{k-t}$ ways to choose k − t additional points from the remaining v − t points of S to get a subset of size k which contains the originally selected t points. Hence D is a t-design with $\lambda = \binom{v-t}{k-t}$.

The challenge is to find t-designs in which D is not simply the collection of all k-subsets of a v-set S. Here is a famous example.
Figure 13.1. The projective plane of order 2.
Example 13.3. Put S = {1, 2, 3, 4, 5, 6, 7}, and let D be the collection of subsets {1, 2, 4}, {2, 3, 5}, {3, 4, 6}, {4, 5, 7}, {5, 6, 1}, {6, 7, 2}, {7, 1, 3}. Then D is a 2-design with v = 7, k = 3, and λ = 1, i.e., a 2-(7, 3, 1) design. The incidence matrix for this configuration is
$$A = \begin{pmatrix} 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 & 1 \\ 1 & 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 0 & 0 & 1 \end{pmatrix}.$$

This design is called the projective plane of order two; we define finite projective planes later in this chapter. We visualize it as a configuration of seven points and seven lines, where the "lines" correspond to the blocks. In Figure 13.1, the point 6 is the point in the center of the circle, and the seventh "line" is the circle through points 4, 5, and 7. Each line has three points, each point is on three lines, and each pair of points is on exactly one line. For the incidence matrix A, we have
$$AA^T = A^T A = \begin{pmatrix} 3 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 3 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 3 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 3 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 3 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 3 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 3 \end{pmatrix} = 2I + J.$$
Observe that A is a circulant matrix; each row is obtained by shifting the previous row to the right and wrapping around in a cyclic fashion.

Example 13.4. Here is a 3-(8, 4, 1) design, based on the eight vertices of a cube (see Figure 13.2). These are labeled 1, 2, 3, . . . , 8, and the blocks are the following sets.
{1, 2, 3, 4}   {1, 4, 6, 7}   {1, 3, 6, 8}
{1, 2, 5, 6}   {2, 3, 5, 8}   {2, 4, 5, 7}
{2, 3, 6, 7}   {1, 3, 5, 7}
{5, 6, 7, 8}   {2, 4, 6, 8}
{1, 4, 5, 8}   {3, 4, 5, 6}
{3, 4, 7, 8}   {1, 2, 7, 8}
[Figure 13.2. Example 13.4: a cube with vertices labeled 1 through 8.]
Each of the six blocks in the first column contains the vertices of one of the six faces of the cube. Each block in the middle column contains the vertices of one of the six diagonal planes. The last two blocks consist of the vertices of the two regular tetrahedra which can be inscribed in the cube.

Two major questions about t-designs are:
• For which values of the parameters t, v, k, λ do t-(v, k, λ) designs exist?
• If t-(v, k, λ) designs exist for some values of t, v, k, λ, how many different (i.e., nonisomorphic) designs are there with these parameters?

Neither of these questions has been settled, even for special cases such as λ = 1 or t = 2. The Bruck–Ryser–Chowla theorem, in Section 13.5, gives necessary conditions for the existence of symmetric 2-designs. Much is known about finite projective planes, which are symmetric 2-designs with λ = 1, but even this case is far from settled. This chapter is merely a brief introduction to designs; see [HP85, CVL80] for extensive treatments. We begin with a basic counting result.

Theorem 13.5. Let D be a t-(v, k, λ) design. Then for any integer s such that 0 ≤ s ≤ t, the configuration D is also an s-(v, k, λ_s) design, where

(13.1)    $\lambda_s \binom{k-s}{t-s} = \lambda \binom{v-s}{t-s}$.
This theorem tells us that a t-design is also a 1-design, a 2-design, and so on, up to a (t − 1)-design. Furthermore, since λ_s is an integer, we see that $\binom{k-s}{t-s}$ must divide $\lambda \binom{v-s}{t-s}$. This requirement seriously restricts the possibilities for v, k, and λ. Note that for s = 0, the number λ_0 is just the number of blocks, which we denote by b. Also, λ_1 is the number of blocks which contain a given point x; we denote
this number as r. Equation (13.1) gives the following formula for λ_s:

$\lambda_s = \lambda \binom{v-s}{t-s} \Big/ \binom{k-s}{t-s} = \frac{\lambda(v-s)!}{(t-s)!\,(v-t)!} \Big/ \frac{(k-s)!}{(t-s)!\,(k-t)!} = \frac{\lambda(v-s)!}{(v-t)!} \Big/ \frac{(k-s)!}{(k-t)!} = \frac{\lambda(v-s)(v-s-1)\cdots(v-t+1)}{(k-s)(k-s-1)\cdots(k-t+1)}.$
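Equation (13.1) packages into a small helper; the sketch below (plain Python, with my own function name) computes λ_s and flags parameter sets ruled out by the integrality requirement just mentioned.

```python
from math import comb

def lambda_s(v, k, t, lam, s):
    """Solve equation (13.1): lambda_s * C(k-s, t-s) = lam * C(v-s, t-s)."""
    num = lam * comb(v - s, t - s)
    den = comb(k - s, t - s)
    if num % den != 0:
        raise ValueError("integrality fails: no such t-design can exist")
    return num // den

# The 3-(8, 4, 1) design of Example 13.4:
assert lambda_s(8, 4, 3, 1, 2) == 3   # lambda_2
assert lambda_s(8, 4, 3, 1, 1) == 7   # lambda_1 = r
assert lambda_s(8, 4, 3, 1, 0) == 14  # lambda_0 = b
```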
Example 13.6. Let us apply Theorem 13.5 to the 3-(8, 4, 1) design of Example 13.4. The theorem tells us it is also a 2-(8, 4, λ_2) design with

$\lambda_2 = \binom{8-2}{3-2} \Big/ \binom{4-2}{3-2} = \binom{6}{1} \Big/ \binom{2}{1} = \frac{6}{2} = 3,$

as well as a 1-(8, 4, λ_1) design with

$r = \lambda_1 = \binom{8-1}{3-1} \Big/ \binom{4-1}{3-1} = \binom{7}{2} \Big/ \binom{3}{2} = \frac{21}{3} = 7.$

Note that each point is in seven blocks—each vertex of the cube belongs to three of the faces, three of the diagonal planes, and one of the inscribed tetrahedra. The number of blocks b is

$b = \lambda_0 = \lambda \binom{v}{t} \Big/ \binom{k}{t} = \binom{8}{3} \Big/ \binom{4}{3} = \frac{8 \cdot 7 \cdot 6}{4 \cdot 3 \cdot 2} = 14.$

The proof of Theorem 13.5 is a classic counting argument—we count the same thing in two different ways. It is helpful to look at two special cases before plunging into the proof of the general result. We begin with the case t = 1 and s = 0.

Proposition 13.7. In a 1-(v, k, λ) design, we have bk = λv.

Proof. Define a 1-flag to be a pair (x, B) where x is a point and B is a block containing x. We count the number of 1-flags in two different ways. There are v choices for the point x, and each point x belongs to λ blocks, giving λ choices for B. So we have vλ flags. But there are b choices for the block B, and each block B contains k points, which gives bk flags. Hence, bk = λv.

Next we take on the case t = 2 and s = 1.

Proposition 13.8. In a 2-(v, k, λ) design, each point x is contained in exactly r = λ(v − 1)/(k − 1) blocks. Hence, a 2-(v, k, λ) design is also a 1-(v, k, λ_1) design where λ_1 = r and r(k − 1) = λ(v − 1).

Proof. Define a 2-flag (x, y, B) to be an unordered pair of distinct points x, y and a block B which contains both x and y. Fix a point p. We will count the number of 2-flags of the form (p, y, B) in two different ways. First, there are (v − 1) choices for the point y. Once we have chosen a y, there are exactly λ blocks B which contain both p and y. So, the number of 2-flags of the form (p, y, B) is λ(v − 1). Alternatively, let r be the number of blocks which contain the point p. Then we
have r choices for B. Now, B has k points, one of which is p, leaving (k − 1) choices for the point y. So, the number of 2-flags of the form (p, y, B) must be r(k − 1). Hence, λ(v − 1) = r(k − 1). Note that this argument shows that r = λ(v − 1)/(k − 1), and hence the number r does not depend on the choice of the point p.

We now generalize the process to prove Theorem 13.5.

Proof. Let 0 ≤ s ≤ t, and let R_s be a fixed set of s points. Let m denote the number of blocks B which contain R_s. We will show that m depends only on s and not on the choice of the set R_s by showing $m \binom{k-s}{t-s} = \lambda \binom{v-s}{t-s}$. This will prove Theorem 13.5 with m = λ_s. Define an admissible pair (T, B) to be a t-set T and a block B such that R_s ⊆ T ⊆ B. We count the number of admissible pairs in two different ways. First, the number of ways to choose (t − s) points out of the (v − s) points which are not in R_s is $\binom{v-s}{t-s}$, so this gives the number of t-sets T such that R_s ⊆ T. There are then exactly λ blocks B which contain T. So, the number of admissible pairs is $\lambda \binom{v-s}{t-s}$. On the other hand, there are m blocks B such that R_s ⊆ B. For each such block B there are $\binom{k-s}{t-s}$ ways to choose (t − s) points of B which are not in R_s, which gives $\binom{k-s}{t-s}$ subsets T of size t such that R_s ⊆ T ⊆ B. So, the number of admissible pairs is $m \binom{k-s}{t-s}$. Hence, $m \binom{k-s}{t-s} = \lambda \binom{v-s}{t-s}$, and the number m = λ_s depends only on s and not on the particular set R_s.

Corollary 13.9. Let D be a t-(v, k, λ) design. Let b = λ_0 be the number of blocks, and let λ_1 = r be the number of blocks which contain a point x. Then

(1)    $b = \lambda_0 = \lambda \, \frac{v(v-1)(v-2)\cdots(v-t+1)}{k(k-1)(k-2)\cdots(k-t+1)}.$
(2) bk = rv.
(3) If t > 1, then r(k − 1) = λ_2(v − 1).

Example 13.10. Suppose D is a k-(v, k, 1) design. Then

$b = \frac{v(v-1)(v-2)\cdots(v-k+1)}{k(k-1)(k-2)\cdots 1} = \frac{v(v-1)(v-2)\cdots(v-k+1)}{k!} = \binom{v}{k}.$

So D is the "trivial" design of all k-subsets of the v-set.
13.2. Incidence Matrices for 2-Designs

The term block design is often used for 2-designs. The design consisting of all k-subsets of a v-set is the trivial 2-design; we are interested in the case where not every k-subset is a block. These are called balanced incomplete block designs or BIBDs. Let A be the incidence matrix of a 2-(v, k, λ) design having b blocks and λ_1 = r. The matrix A is of size b × v. Each row of A corresponds to a block of the design
and contains exactly k ones. Each column corresponds to a point; since each point belongs to exactly r blocks, each column of A has r ones. Furthermore, the inner product of columns i and j of A counts the number of rows which have ones in both positions i and j, and hence gives the number of blocks which contain both of the points x_i and x_j. So the inner product of columns i and j of A is λ when i ≠ j, and it is r when i = j. These facts can be stated as the following three matrix equations:

    AJ_v = kJ_{b×v},    J_b A = rJ_{b×v},    A^T A = (r − λ)I + λJ_v.

These equations will be the key in proving several important results about 2-designs. We will need the following facts, which appeared in earlier chapters.
• The determinant of the matrix aI + bJ_n is (a + nb)a^{n−1} (see Section 11.1).
• For any matrix A, the matrices A, A^T, AA^T, and A^T A all have the same rank.

Consider a 2-(v, k, λ) design with b blocks and λ_1 = r. The case k = v simply gives b repeated copies of the full v-set {x_1, . . . , x_v}, which is of little interest. Hence, we shall assume k ≤ v − 1. Recall the equations bk = vr and r(k − 1) = λ(v − 1). Since k − 1 < v − 1, the second equation gives r > λ. We now prove Fisher's inequality, b ≥ v.

Theorem 13.11 (Fisher's inequality). Let b be the number of blocks in a 2-(v, k, λ) design D. If k ≤ v − 1, then b ≥ v.

Proof. Let A be the incidence matrix for the design. Then A is b × v and satisfies A^T A = (r − λ)I + λJ_v. Now, det((r − λ)I + λJ_v) = (r − λ)^{v−1}(r − λ + vλ). But r − λ + vλ = r + λ(v − 1) = r + r(k − 1) = rk. So, det(A^T A) = (r − λ)^{v−1}rk. Since k ≤ v − 1, we know r > λ, and hence det(A^T A) ≠ 0. Therefore, the rank of A^T A is v, and so A also has rank v. Since A has b rows, we must have b ≥ v.

Definition 13.12. A 2-(v, k, λ) design with k ≤ v − 1 is called symmetric if b = v. In a symmetric BIBD, the number of blocks equals the number of points. The projective plane of order two, Example 13.3, is a symmetric block design.

Theorem 13.13.
In a 2-(v, k, λ) design with k ≤ v − 1, the following are equivalent:
(1) b = v.
(2) r = k.
(3) Any pair of blocks has exactly λ points in common.
Proof. The equation bk = vr shows the equivalence of (1) and (2). Now suppose b = v. Then the incidence matrix A of the design is square. Since we also have r = k, we get AJ = kJ = rJ = JA. (Note both A and J are v × v.) So, A commutes with J. But then A commutes with (r − λ)I + λJ = A^T A, and so A(A^T A) = (A^T A)A. From the proof of Fisher's inequality we know A has rank v and hence is invertible. Multiply both sides of A(A^T A) = (A^T A)A on the right by A^{−1} to get AA^T = A^T A = (r − λ)I + λJ. Since entry i, j of AA^T is the dot product of rows i and j of A, the number of points in the intersection of blocks i and j is λ when i ≠ j. Conversely, suppose any pair of blocks intersect in exactly λ points. Then, since each block has k points, AA^T = (k − λ)I + λJ. If (k − λ) = 0, then AA^T = λJ, and so A has rank one. But we know A has rank v and v > 1. So (k − λ) ≠ 0 and det(AA^T) = (k − λ)^{b−1}(k − λ + λb) ≠ 0. So AA^T has rank b, and hence b = v.
In summary, in a symmetric block design, the number of blocks equals the number of points. Each pair of points belongs to exactly λ blocks, and each pair of blocks intersects in exactly λ points. Each block has k points, and each point is in exactly k blocks. We then have

(13.2)    A^T A = AA^T = (k − λ)I + λJ.
Theorem 13.13 enables us to set up a dual design by reversing the roles of points and blocks. The points of the dual design correspond to the blocks of the original design. If x is a point in the original design, the set of blocks which contain x becomes a block in the dual design. If A is the incidence matrix of the original design, then A^T is the incidence matrix of the dual design.
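The identities of this section can be sanity-checked numerically for the design of Example 13.3. The sketch below uses NumPy (my choice of tool, not the text's) to confirm the matrix equations and that Fisher's bound holds with equality for this symmetric design.

```python
import numpy as np

# Incidence matrix of the 2-(7, 3, 1) design of Example 13.3
# (rows are blocks, columns are points).
blocks = [{1, 2, 4}, {2, 3, 5}, {3, 4, 6}, {4, 5, 7},
          {5, 6, 1}, {6, 7, 2}, {7, 1, 3}]
A = np.array([[1 if j + 1 in B else 0 for j in range(7)] for B in blocks])

v, b, r, k, lam = 7, 7, 3, 3, 1
I, J = np.eye(v), np.ones((v, v))

# Equation (13.2): A^T A = A A^T = (k - lam) I + lam J.
assert np.array_equal(A.T @ A, (r - lam) * I + lam * J)
assert np.array_equal(A @ A.T, (k - lam) * I + lam * J)

# rank(A) = v, so Fisher's inequality b >= v is met with b = v = 7.
assert np.linalg.matrix_rank(A) == v == b
```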
13.3. Finite Projective Planes

The last section concluded with a comment on duality. This concept is more typically encountered in projective geometry.

Definition 13.14. A finite projective plane is a symmetric 2-(v, k, 1) design. The points of the plane are the elements of the v-set, and the lines of the plane are the blocks of the design.

In a finite projective plane, we have λ = 1, b = v, and r = k. Each pair of points is contained in exactly one line, and each pair of lines intersects in exactly one point. The equation r(k − 1) = λ(v − 1) then becomes k(k − 1) = (v − 1), and so v = k² − k + 1. We also have v = (k − 1)² + (k − 1) + 1; the number (k − 1) is called the order of the projective plane. Example 13.3 is a projective plane of order two; it is, in fact, the only projective plane of order two, so we may call it "the" projective plane of order two.

A major question in this area is, For which numbers n = k − 1 do there exist projective planes of order n? There is a method for constructing a projective plane
of order n whenever n is a prime number or a power of a prime number. This tells us, for example, that there are projective planes of orders 2, 3, 4 = 2², 5, 7, 8 = 2³, 9 = 3², 11, 13, 16 = 2⁴, 17, 19, 23, 25 = 5², . . . . There is a close connection between finite projective planes and complete sets of orthogonal Latin squares; see [Rys63, Chapter 7]. The fact that there is no pair of orthogonal Latin squares of order 6 (conjectured by Euler in 1782 in connection with the famous problem of the 36 officers, and later proven by Tarry in 1901 [Tarry01]) then tells us there is no projective plane of order six. The question of the existence of a projective plane of order 10 was one of the big unsolved problems in combinatorics for many years. Here, k = 10 + 1 = 11 and v = 10² + 10 + 1 = 111. In 1989, Lam, Thiel, and Swiercz established that there is no projective plane of order 10; see [Lam91]. A more general question is whether all projective planes have order equal to a prime power.

Probably the most famous theorem about the existence of symmetric 2-designs is the result of Bruck, Ryser, and Chowla [BrucRys49, ChowRys50].

Theorem 13.15 (Bruck, Ryser, and Chowla). Suppose a symmetric 2-(v, k, λ) design exists. Put n = k − λ. Then:
(1) If v is even, then n is a square.
(2) If v is odd, then the equation z² = nx² + (−1)^{(v−1)/2} λy² has a solution in integers (x, y, z), not all zero.
We give a proof in Section 13.5. For now, we use the theorem to show there is no projective plane of order six. Suppose a projective plane of order n = 6 existed. Then v = 6² + 6 + 1 = 43. So v is odd and (v − 1)/2 = 21. The equation in part (2) of Theorem 13.15 is then

(13.3)    z² = 6x² − y².

We show this equation has no nonzero integer solutions. Suppose it has an integer solution, x = a, y = b, z = c. So,

(13.4)    c² = 6a² − b².
If a, b, c have a common factor d, then x = a/d, y = b/d, z = c/d is also a solution, so we may assume a, b, c have no common factors. Consider equation (13.4) modulo 3. This gives c² ≡ −b² (mod 3). If c ≡ b ≡ 0 (mod 3), then 9 divides both c² and b², and so 9 divides 6a². But then 3 divides a², and so 3 divides a. This contradicts the assumption that a, b, c have no common factors. So, b and c are nonzero modulo 3, and hence c² ≡ −b² (mod 3) implies that −1 is a square in Z₃. Since this is not true, we have a contradiction, and hence equation (13.3) has no nonzero integer solutions. Theorem 13.15 then tells us there is no projective plane of order six.

Theorem 13.15 does not settle the existence question for a projective plane of order 10. For n = 10, we have v = 111 and (v − 1)/2 = 55. The Diophantine equation is then z² = 10x² − y², which has the solution z = 3, x = 1, y = 1.

We defined finite projective planes combinatorially, as symmetric 2-(v, k, 1) designs. One can also take a geometric approach.
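Both Diophantine conclusions above can be replicated by a brute-force search. The helper below (my own sketch, with an arbitrary search bound) finds no solution of z² = 6x² − y², and finds x = 1, y = 1, z = 3 for order 10. Of course, an empty bounded search only corroborates the modulo-3 argument; it is not itself a proof of nonsolvability.

```python
def brc_solution(n, bound=50):
    """Search for a nonzero solution of z^2 = n x^2 + (-1)^((v-1)/2) y^2
    with 0 <= x, y, z <= bound, for a hypothetical plane of order n."""
    v = n * n + n + 1
    sign = -1 if ((v - 1) // 2) % 2 else 1
    for x in range(bound + 1):
        for y in range(bound + 1):
            for z in range(bound + 1):
                if (x, y, z) != (0, 0, 0) and z * z == n * x * x + sign * y * y:
                    return (x, y, z)
    return None

assert brc_solution(6) is None        # consistent with: no plane of order 6
assert brc_solution(10) == (1, 1, 3)  # x = 1, y = 1, z = 3 for order 10
```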
Definition 13.16. A projective plane π is a system composed of two types of objects, called points and lines, and an incidence relation subject to the following axioms.
(1) Two distinct points of π are on exactly one line of π.
(2) Two distinct lines of π pass through exactly one point of π.
(3) There exist four distinct points of π, no three of which lie on the same line.

A projective plane with a finite number of points is called a finite projective plane. Conditions (1) and (2) are the essential ones of interest; we need condition (3) to exclude degenerate cases, such as those in the following example.

Example 13.17. Here are examples of systems which satisfy axioms (1) and (2) of Definition 13.16 but not axiom (3).
(1) A single line with any number of points.
(2) Let the points be 1, 2, 3, . . . , n, and let the lines be the sets {1, i} for i = 2, . . . , n plus the set {2, 3, 4, . . . , n}. The incidence matrix for this system is

$A = \begin{pmatrix} 0 & 1 & 1 & 1 & \cdots & 1 \\ 1 & 1 & 0 & 0 & \cdots & 0 \\ 1 & 0 & 1 & 0 & \cdots & 0 \\ 1 & 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 0 & 0 & 0 & \cdots & 1 \end{pmatrix}.$
(3) A triangle with three points and three lines.

We have seen that a symmetric 2-(v, k, 1) design, with k > 2, is a projective plane in which the blocks are the lines. Conversely, a finite projective plane gives a symmetric 2-(v, k, 1) design by letting the blocks of the design be the lines of the plane. To prove this, we must show that each line of a finite projective plane, as defined in Definition 13.16, contains the same number of points, and the number of points equals the number of lines. First, we show the three axioms of Definition 13.16 imply the following dual of axiom (3).

Proposition 13.18. Let π be a projective plane. Then there exist four lines of π, no three of which pass through the same point.

Proof. By axiom (3) of Definition 13.16, we can choose four points, p1, p2, p3, p4 of π such that no three are collinear. Consider now the four lines (see Figure 13.3):

    L1 through points p1 and p2,
    L2 through points p3 and p4,
    L3 through points p1 and p3,
    L4 through points p2 and p4.
[Figure 13.3. Proposition 13.18: the four points p1, p2, p3, p4 and the four lines L1, L2, L3, L4.]
Since no three of the points p1, p2, p3, p4 are collinear, we know these four lines are distinct. Suppose three of these lines intersect in a point p. Note that any three of the lines Li include a pair of lines which intersect in one of the four points pi. Since two distinct lines intersect in only one point, the point p must be one of the four points p1, p2, p3, p4. But we know none of these points lies on three of the lines, so we have a contradiction. Hence, no three of these lines can intersect in a point.

So any projective plane π satisfies all four of the following:
(1) Two distinct points of π are on exactly one line of π.
(2) Two distinct lines of π pass through exactly one point of π.
(3) There exist four distinct points of π, no three of which lie on the same line.
(4) There exist four distinct lines of π, no three of which pass through the same point.

Each of these four statements has a dual, obtained by exchanging the words "point" and "line". Number (2) is the dual of (1), and number (4) is the dual of (3). This duality is a key principle in studying projective geometry. It means that for every theorem about points and lines, there is a dual theorem, obtained by exchanging the roles of points and lines.

Theorem 13.19. Let p and p′ be distinct points, and let L and L′ be distinct lines of a projective plane π. Then there exists a bijective map from the set of points on L onto the set of points on L′, as well as bijective maps from the set of lines through p onto the set of lines through p′, and from the set of points on L onto the set of lines through p.

Proof. For distinct points r and s of π, we designate the unique line through r and s as rs. Let L and L′ be distinct lines of π. We claim there exists a point o of π such that o is on neither of the lines L nor L′. For, if not, then every point of π lies
[Figure 13.4. Theorem 13.19.]
on L or L′, and hence there must be points a, b on L, and c, d on L′, which satisfy axiom (3) of Definition 13.16 (see Figure 13.4, left-hand side). Then the lines ac and bd intersect in some point e. Now ac is not the line L, but intersects the line L in point a, so e cannot lie on L. Similarly, bd is not the line L′, but intersects L′ at d, so e cannot lie on L′. So we have a point e which lies on neither L nor on L′, a contradiction.

Now, choose a point o of π which is not on L or on L′. We use this point o to establish a one-to-one mapping of the points of L onto the points of L′. For each point r of L, the line or passes through a unique point r′ of L′ (see Figure 13.4, right-hand side). The map r → r′ then gives our mapping. For, if r1 ≠ r2, then the lines or1 and or2 are different, and so r1′ ≠ r2′, showing this map is one-to-one. To show it is onto, choose a point s′ on line L′. Then the line os′ must intersect line L in a unique point s, and the map sends s to s′.

The second assertion is the dual of the first, and hence requires no additional proof.

Finally, let L be a line, and let p be a point. If p is not on L, then for each point q on L, the line pq is a line through p; conversely, each line L̃ through p intersects L in a unique point q. So, we get a one-to-one mapping from points on L to lines through p. If p is on the line L, then choose a point q which is not on L. The second part of the theorem tells us there is a bijective map from the set of lines through p to the set of lines through q, and if we compose this with a bijective map from the set of lines through q onto the set of points on L, we get our bijection from the set of lines through p onto the set of points on L.

The first part of this theorem tells us that each line of a projective plane has the same number of points, and hence we may make the following definition.

Definition 13.20. A projective plane π is called finite if it has a finite number of points.
If the number of points on a line of π is n + 1, then the number n is called the order of the projective plane π.
Theorem 13.21. Let π be a projective plane of order n. Then any line of π has n + 1 points, and through any point of π there pass n + 1 lines. The total number of points in π is n² + n + 1; this is also the total number of lines in π.

Proof. The first assertion follows from Theorem 13.19. To count the number of points in π, fix a point P and consider the n + 1 lines through P. Note that every point of π, other than the point P, lies on exactly one of these lines. Each of these lines has n + 1 points, one of which is P. So, the total number of points in π is 1 + n(n + 1) = n² + n + 1. By duality, the number of lines equals the number of points, so there are n² + n + 1 lines.

As a consequence, we see that any finite projective plane of order n yields a symmetric 2-(v, k, 1) design, with k = n + 1 and v = n² + n + 1, and the blocks correspond to the lines of the plane.

We now see how to construct projective planes of order n when n is a power of a prime. Let p be a prime number, and let q = p^α, where α is a positive integer. Let F = GF(q) be the finite field of order q, and let V be the vector space F³ = {(x, y, z) : x, y, z ∈ F}. We say nonzero vectors (x, y, z) and (x′, y′, z′) are equivalent if (x, y, z) = t(x′, y′, z′) for some nonzero scalar t ∈ F. We then partition the q³ − 1 nonzero vectors of V into equivalence classes. For any nonzero vector (x, y, z), the equivalence class of (x, y, z) is [(x, y, z)] = {t(x, y, z) : t ∈ F*}, where F* denotes the set of nonzero elements of F. Each equivalence class contains q − 1 vectors, and so the number of equivalence classes is (q³ − 1)/(q − 1) = q² + q + 1. These equivalence classes will be the points of our projective plane π.

Example 13.22. Let F = Z₅, so q = 5. There are 124 nonzero vectors, partitioned into 31 equivalence classes; each equivalence class contains four vectors.
Here are two of the equivalence classes:

    [(1, 2, 0)] = {(1, 2, 0), (2, 4, 0), (3, 1, 0), (4, 3, 0)},
    [(1, 3, 4)] = {(1, 3, 4), (2, 1, 3), (3, 4, 2), (4, 2, 1)}.

Remark 13.23. When F = Z₂, each equivalence class contains only one vector, so the equivalence classes are just the 2ⁿ − 1 nonzero vectors of Z₂ⁿ.

Now we define the lines of π. If a = [(x, y, z)] and b = [(x′, y′, z′)] are two distinct points of π, the vectors (x, y, z) and (x′, y′, z′) are linearly independent in V. Hence there is a unique plane in V spanned by the vectors (x, y, z) and (x′, y′, z′). This plane, denoted ab, will be the line in π through a and b. Two such lines, say L1 and L2, correspond to two planes through the origin in V, and hence intersect in an ordinary line in V. But an ordinary line through the origin in V corresponds to an equivalence class of vectors, and hence to a point in π. So, any pair of distinct lines in π intersect in a point of π, and our system satisfies axioms (1) and (2) of Definition 13.16. To verify axiom (3), note that no three of the points [(1, 0, 0)], [(0, 1, 0)], [(0, 0, 1)], [(1, 1, 1)] are collinear in π.

The plane in V spanned by (x, y, z) and (x′, y′, z′) is {r(x, y, z) + s(x′, y′, z′) : r, s ∈ F}. The nonzero vectors in this plane may be partitioned into q + 1 equivalence classes, where q of them have the form [(x, y, z) + λ(x′, y′, z′)], for λ ∈ F, and the remaining class is [(x′, y′, z′)]. Hence, the line ab in π contains q + 1 points, and π is a projective plane of order q.
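For a prime q this construction can be carried out directly in Python (a prime power would require genuine GF(q) arithmetic, which this sketch does not implement). Instead of spans, the sketch uses the equivalent dual description of a line as the set of classes [(x, y, z)] satisfying ax + by + cz = 0; all names are my own.

```python
from itertools import combinations

def projective_plane(p):
    """Points and lines of the projective plane over Z_p, p prime."""
    # One normalized representative per class: first nonzero entry is 1.
    points = [(1, y, z) for y in range(p) for z in range(p)] \
           + [(0, 1, z) for z in range(p)] + [(0, 0, 1)]
    # The line with coefficients (a, b, c) is {[(x, y, z)] : ax+by+cz = 0}.
    lines = [frozenset(q for q in points
                       if (a * q[0] + b * q[1] + c * q[2]) % p == 0)
             for (a, b, c) in points]
    return points, lines

points, lines = projective_plane(3)
n = 3
assert len(points) == len(lines) == n * n + n + 1  # 13 points, 13 lines
assert all(len(L) == n + 1 for L in lines)         # 4 points per line
# Any two distinct points lie on exactly one common line.
assert all(sum(1 for L in lines if s in L and t in L) == 1
           for s, t in combinations(points, 2))
```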
Example 13.24. Let F = Z₅, and put a = [(1, 0, 0)] and b = [(0, 1, 2)]. The line ab contains the six points

    [(1, 0, 0)]
    [(1, 0, 0) + (0, 1, 2)] = [(1, 1, 2)]
    [(1, 0, 0) + 2(0, 1, 2)] = [(1, 2, 4)]
    [(1, 0, 0) + 3(0, 1, 2)] = [(1, 3, 1)]
    [(1, 0, 0) + 4(0, 1, 2)] = [(1, 4, 3)]
    [(0, 1, 2)].

Example 13.25. We revisit the projective plane of order two, presented in Example 13.3. The seven points are

    p1 = [(1, 0, 0)]    p2 = [(0, 1, 0)]    p3 = [(0, 0, 1)]
    p4 = [(1, 1, 0)]    p5 = [(0, 1, 1)]    p6 = [(1, 1, 1)]
    p7 = [(1, 0, 1)].

The seven lines are then

    L1 = {[(1, 0, 0)], [(0, 1, 0)], [(1, 1, 0)]}
    L2 = {[(0, 1, 0)], [(0, 0, 1)], [(0, 1, 1)]}
    L3 = {[(0, 0, 1)], [(1, 1, 0)], [(1, 1, 1)]}
    L4 = {[(1, 1, 0)], [(0, 1, 1)], [(1, 0, 1)]}
    L5 = {[(0, 1, 1)], [(1, 1, 1)], [(1, 0, 0)]}
    L6 = {[(1, 1, 1)], [(1, 0, 1)], [(0, 1, 0)]}
    L7 = {[(1, 0, 1)], [(1, 0, 0)], [(0, 0, 1)]}.

The construction of the projective plane is often explained in terms of adding points and a line "at infinity" to the ordinary affine plane. This can be done with projective coordinates. Start with the ordinary plane F², but instead of designating the points with ordered pairs, use ordered triples, (x, y, z), where (x, y, z) and λ(x, y, z) represent the same point, for any nonzero scalar λ. We identify the ordinary point (x, y) in F² with the projective coordinates (x, y, 1), while the projective coordinates (x, y, z) correspond to the ordinary point (x/z, y/z), provided z ≠ 0. Points with z = 0, i.e., (x, y, 0), are called "points at infinity", and these points at infinity make up the "line at infinity", given by the equation z = 0. Equations of lines in projective coordinates have the form ax + by + cz = 0.

We are concerned here with the combinatorial aspects of finite projective planes. For more on projective geometry, see either modern geometry books or older classics, such as Coolidge [Coo31]. For more on finite geometries, see Dembowski [Dem68]. There are connections between finite projective geometries and sets of orthogonal Latin squares; see Ryser [Rys63] for this approach. In particular, Ryser proves the existence of projective planes of orders q = p^α via a theorem about orthogonal Latin squares, using the finite field GF(q) to construct the orthogonal Latin squares.
13.4. Quadratic Forms and the Witt Cancellation Theorem

This section presents results about matrix congruence needed for the proof of the Bruck–Ryser–Chowla theorem. Recall that if S and S′ are n × n symmetric matrices over a field F, we say S and S′ are congruent over F if there exists a nonsingular matrix P, with entries from F, such that P^T SP = S′. We write S ≅ S′ if S and S′ are congruent. Congruence is an equivalence relation; that is:
(1) For any S, we have S ≅ S.
(2) If S ≅ S′, then S′ ≅ S.
(3) If S ≅ S′ and S′ ≅ S″, then S ≅ S″.

Congruence is important in the study of quadratic forms. Let x be the column vector of the variables x1, . . . , xn, let S be an n × n matrix, and define the function f(x1, . . . , xn) by
(13.5)    $f(x_1, \ldots, x_n) = \mathbf{x}^T S \mathbf{x} = \sum_{i=1}^{n} \sum_{j=1}^{n} s_{ij} x_i x_j.$

For example, if $S = \begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix}$, then

    f(x1, x2) = x1² + 2x1x2 + 2x2x1 + 3x2² = x1² + 4x1x2 + 3x2².

When S is symmetric, s_{ij} = s_{ji}, so x_i x_j has the same coefficient as x_j x_i; we usually combine these terms to get 2s_{ij} x_i x_j. Expression (13.5) is the quadratic form of the matrix S. Each term is degree two in the variables x1, . . . , xn. Let P be a nonsingular n × n matrix, and define new variables y1, . . . , yn by y = P^{−1}x, equivalently, x = Py. Then f(x1, . . . , xn) = x^T Sx = (Py)^T S(Py) = y^T(P^T SP)y. Thus, we have a new quadratic form f′(y1, . . . , yn) = y^T(P^T SP)y in the variables y1, . . . , yn, using the matrix S′ = P^T SP. The forms f and f′ are congruent, corresponding to the congruent matrices S′ and S. For any specified values of the variables x1, . . . , xn, if we put y = P^{−1}x, we have f(x1, . . . , xn) = f′(y1, . . . , yn).

We would like to have as simple an equation as possible for the quadratic form. Since congruent matrices give congruent forms, and congruent forms are essentially the same form but with respect to different variables, we want to find P such that P^T SP is in a simple form. Typically, we want P^T SP to be a diagonal matrix. For, if S′ = P^T SP = diag(d1, . . . , dn), then $\mathbf{y}^T S' \mathbf{y} = \sum_{i=1}^{n} d_i y_i^2$. This can be achieved in any field which is not of characteristic two.

Theorem 13.26. Let S be a symmetric matrix over a field F, where char(F) ≠ 2. Then S is congruent to a diagonal matrix.

Theorem 13.26 can be proven using the simultaneous row and column operations described in Section 6.6. The diagonal entries d_i are not uniquely determined by the matrix S. For example, for any diagonal matrix R = diag(r1, . . . , rn),
with nonzero diagonal entries r_i, the matrix D = diag(d1, . . . , dn) is congruent to R^T DR = diag(r1²d1, r2²d2, . . . , rn²dn). If one asks for a normal form for symmetric matrices under congruence, the answer depends critically on the field F. In Chapter 6 we saw that for the field R, any real symmetric matrix S is congruent to a matrix of the form I_p ⊕ −I_ν ⊕ 0_k, where p + ν is the rank of S. This special diagonal form can be reached by first applying a congruence to get S to a diagonal form D = diag(d1, . . . , dn). Let r be the rank of S. We can reorder the diagonal entries with a permutation congruence, so we can assume that d1, . . . , d_r are nonzero, with the positive d_i's listed first, followed by the negative d_i's. Let p be the number of positive d_i's. Put

$P = \mathrm{diag}\Big( \frac{1}{\sqrt{d_1}}, \ldots, \frac{1}{\sqrt{d_p}}, \frac{1}{\sqrt{-d_{p+1}}}, \ldots, \frac{1}{\sqrt{-d_r}}, 1, \ldots, 1 \Big).$

Then P^T DP = I_p ⊕ −I_ν ⊕ 0_k, where ν = r − p. Sylvester's inertia theorem 6.14 tells us the numbers p and ν are uniquely determined by S. In fact, p is the number of positive eigenvalues of S, and ν is the number of negative eigenvalues of S. (Recall that real symmetric matrices always have real eigenvalues.) The values of p and ν can also be determined from the signs of the leading principal minors of S and can be determined via Gaussian elimination.

If the field F is C, the field of complex numbers, any symmetric matrix S is congruent to I_r ⊕ 0_{n−r}, where r is the rank of S. We can use the method of the previous paragraph, but with

$P = \mathrm{diag}\Big( \frac{1}{\sqrt{d_1}}, \ldots, \frac{1}{\sqrt{d_r}}, 1, \ldots, 1 \Big).$

Since we are working over C, we can take square roots of negative numbers. In both of these cases, F = R and F = C, we used square roots to obtain the ±1's on the diagonal. This will not be possible in all fields. For example, in the field of rational numbers, Q, only rationals of the form a²/b², where a and b are integers, have rational square roots.
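A concrete instance of these reductions, for the running example S = [[1, 2], [2, 3]]: the NumPy sketch below (the congruence matrix P is my own choice) diagonalizes the form by one symmetric elimination step and confirms Sylvester's inertia against the eigenvalue signs.

```python
import numpy as np

S = np.array([[1.0, 2.0], [2.0, 3.0]])  # f(x1, x2) = x1^2 + 4 x1 x2 + 3 x2^2

# The substitution x1 = y1 - 2 y2, x2 = y2 is a congruence (not a
# similarity): P^T S P clears the cross term.
P = np.array([[1.0, -2.0], [0.0, 1.0]])
D = P.T @ S @ P
assert np.allclose(D, np.diag([1.0, -1.0]))  # f is congruent to y1^2 - y2^2

# Sylvester's inertia theorem: p = 1 positive, nu = 1 negative entry,
# matching the signs of the eigenvalues of S.
eigs = np.linalg.eigvalsh(S)
assert (eigs > 0).sum() == 1 and (eigs < 0).sum() == 1
```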
Now consider direct sums. Suppose S ≅ S′ and T ≅ T′. Let S′ = P^T SP and T′ = Q^T TQ. Then (P ⊕ Q)^T(S ⊕ T)(P ⊕ Q) = P^T SP ⊕ Q^T TQ = S′ ⊕ T′, so S ⊕ T is congruent to S′ ⊕ T′.

The proof of the Bruck–Ryser–Chowla theorem will use the Witt cancellation theorem [Witt37]. We will need the following fact in the proof.

Lemma 13.27. Let F be a field with char(F) ≠ 2. Let A and B be n × n symmetric matrices over F, which have the same rank. Then, if there is an n × n matrix P such that P^T AP = B, there must be a nonsingular matrix Q such that Q^T AQ = B.

Proof. Suppose P is a square matrix such that P^T AP = B. Since char(F) ≠ 2, we know there are nonsingular matrices, R and S, and diagonal matrices, D and E, such that A = R^T DR and B = S^T ES. So, P^T AP = P^T R^T DRP = S^T ES, and we have (RPS^{−1})^T D(RPS^{−1}) = E. Hence, without loss of generality, we may assume A and B are diagonal. Let r = rank(A) = rank(B), and set A = D_r ⊕ 0_{n−r} and B = E_r ⊕ 0_{n−r}, where D_r and E_r are nonsingular r × r diagonal matrices.
Partition the matrix P conformally as $P = \begin{pmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{pmatrix}$, where P_{11} is r × r and P_{22} is (n − r) × (n − r). Then

$P^T A P = \begin{pmatrix} P_{11}^T & P_{21}^T \\ P_{12}^T & P_{22}^T \end{pmatrix} \begin{pmatrix} D_r & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{pmatrix} = \begin{pmatrix} P_{11}^T D_r P_{11} & P_{11}^T D_r P_{12} \\ P_{12}^T D_r P_{11} & P_{12}^T D_r P_{12} \end{pmatrix} = \begin{pmatrix} E_r & 0 \\ 0 & 0 \end{pmatrix}.$

The upper left-hand corner gives P_{11}^T D_r P_{11} = E_r. Since D_r and E_r are nonsingular, the matrix P_{11} must be nonsingular. Put Q = P_{11} ⊕ I_{n−r}. Then Q is nonsingular and Q^T AQ = B.
Theorem 13.28 (Witt Cancellation Theorem [Witt37]). Let A, B, and C be symmetric matrices over a field F, with char(F) ≠ 2. Suppose that A ⊕ B ≅ A ⊕ C. Then B and C must be congruent: B ≅ C.

Proof. The proof is by induction on the size of A. Most of the work is to establish the basis case, when A is 1 × 1. So, suppose A = aI_1 and B, C are n × n, and we have

$\begin{pmatrix} a & 0 \\ 0 & B \end{pmatrix} \cong \begin{pmatrix} a & 0 \\ 0 & C \end{pmatrix}.$

Note that B and C must have the same rank. Let P be an (n + 1) × (n + 1) matrix such that P^T(aI_1 ⊕ B)P = aI_1 ⊕ C. Partition P as

$P = \begin{pmatrix} p & \mathbf{w}^T \\ \mathbf{v} & Q \end{pmatrix},$

where v and w are column vectors of length n and Q is n × n. Then

$P^T(aI_1 \oplus B)P = \begin{pmatrix} p & \mathbf{v}^T \\ \mathbf{w} & Q^T \end{pmatrix} \begin{pmatrix} a & 0 \\ 0 & B \end{pmatrix} \begin{pmatrix} p & \mathbf{w}^T \\ \mathbf{v} & Q \end{pmatrix} = \begin{pmatrix} ap^2 + \mathbf{v}^T B \mathbf{v} & ap\mathbf{w}^T + \mathbf{v}^T B Q \\ ap\mathbf{w} + Q^T B \mathbf{v} & a\mathbf{w}\mathbf{w}^T + Q^T B Q \end{pmatrix} = \begin{pmatrix} a & 0 \\ 0 & C \end{pmatrix}.$

This gives the following equations:

(13.6)    ap² + v^T Bv = a,    apw^T + v^T BQ = 0,    apw + Q^T Bv = 0,    aww^T + Q^T BQ = C.
13.4. Quadratic Forms and the Witt Cancellation Theorem
The third equation is just the transpose of the second. If a = 0, the fourth equation gives Q^T B Q = C. Since B and C have the same rank, Lemma 13.27 tells us B is congruent to C. If a ≠ 0, proceed as follows. Let t be a constant, to be chosen later, and let R = Q − tvw^T. Then

(13.7)  R^T B R = Q^T B Q − t w v^T B Q − t Q^T B v w^T + t² w v^T B v w^T.

From (13.6), we have v^T B v = a(1 − p²), v^T B Q = −apw^T, and Q^T B v = −apw. Substitute these in equation (13.7) to get

(13.8)  R^T B R = Q^T B Q + 2apt ww^T + a(1 − p²)t² ww^T = Q^T B Q + (2apt + a(1 − p²)t²) ww^T.

Looking at the last line of (13.6), we want to choose t to satisfy 2apt + a(1 − p²)t² = a, or, since a ≠ 0,

(13.9)  (1 − p²)t² + 2pt − 1 = 0.

The left-hand side factors to give ((1 − p)t + 1)((1 + p)t − 1) = 0. Choose t as follows:

• If p = 1, let t = 1/2.
• If p = −1, let t = −1/2.
• Otherwise, t can be either of the numbers 1/(p + 1) or 1/(p − 1).

With these choices of t, equation (13.9) is satisfied, and we have R^T B R = Q^T B Q + aww^T = C. Lemma 13.27 now tells us B is congruent to C. This establishes the basis case for the induction argument.

Assume, as the induction hypothesis, that the result holds if A is (k − 1) × (k − 1). Now let A be a k × k matrix. Let P be a nonsingular k × k matrix such that P^T A P is diagonal. Put P^T A P = D = diag(d₁, ..., d_k). If B, C are n × n, put Q = P ⊕ I_n. Then

Q^T (A ⊕ B) Q = (P^T ⊕ I_n)(A ⊕ B)(P ⊕ I_n) = P^T A P ⊕ B = D ⊕ B,

and Q^T (A ⊕ C) Q = D ⊕ C. Since A ⊕ B is congruent to A ⊕ C, we have D ⊕ B congruent to D ⊕ C. The basis case, k = 1, then gives us

diag(d₂, d₃, ..., d_k) ⊕ B congruent to diag(d₂, d₃, ..., d_k) ⊕ C.

Hence, by the induction hypothesis, B is congruent to C. ∎
13.5. The Bruck–Ryser–Chowla Theorem

The proof of the Bruck–Ryser–Chowla theorem uses quadratic forms over Q, the field of rational numbers. Congruence of symmetric matrices over Q is a much more complicated business than congruence over R or C. In R, any positive real number has a square root. When d₁, ..., d_n are positive, we can put P = diag(1/√d₁, 1/√d₂, ..., 1/√d_n) and get P^T diag(d₁, ..., d_n) P = I_n. But positive rational numbers generally do not have rational square roots. The theory of quadratic forms with rational coefficients, or, equivalently, of the congruence of rational symmetric matrices, is quite complex, leading to deep number theoretic questions [Jon50]. However, we need only one number theoretic result for the proof of the Bruck–Ryser–Chowla theorem: the Lagrange four-square theorem. We state this without proof; most elementary number theory books contain a proof.

Theorem 13.29 (Lagrange Four-Square Theorem). For any positive integer m there exist integers a, b, c, d such that m = a² + b² + c² + d².

Thus, any positive integer can be expressed as the sum of four squares. Zeros are allowed. A given integer may have more than one representation as a sum of four squares. For example,

19 = 4² + 1² + 1² + 1² = 3² + 3² + 1² + 0²,
33 = 4² + 4² + 1² + 0² = 5² + 2² + 2² + 0²,
100 = 5² + 5² + 5² + 5² = 8² + 6² + 0² + 0².

Let m be a positive integer with m = a² + b² + c² + d². Put

\[ H = \begin{pmatrix} a & b & c & d \\ b & -a & -d & c \\ c & d & -a & -b \\ d & -c & b & -a \end{pmatrix}. \]
Then H^T H = (a² + b² + c² + d²)I₄ = mI₄. Hence, mI₄ is congruent to I₄ over Q. Furthermore, for any n ≡ 0 (mod 4), we get mI_n congruent to I_n, by using the direct sums

m(I₄ ⊕ I₄ ⊕ · · · ⊕ I₄) = (H ⊕ H ⊕ · · · ⊕ H)^T (H ⊕ H ⊕ · · · ⊕ H).

This establishes the following fact.

Theorem 13.30. If m is any positive integer and n is a multiple of 4, then mI_n is congruent to I_n over Q.

We now prove the Bruck–Ryser–Chowla theorem (stated in Section 13.3), giving the proof from Ryser’s 1982 paper [Rys82].

Theorem 13.31 (Bruck, Ryser, and Chowla). Suppose a symmetric 2-(v, k, λ) design exists. Put n = k − λ. Then:

(1) If v is even, then n is a square.
(2) If v is odd, the equation z² = nx² + (−1)^{(v−1)/2} λy² has a solution in integers (x, y, z), not all zero.
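The four-square representation behind Theorem 13.30 is easy to compute and check numerically. The following sketch is illustrative and not from the text; it finds some representation by brute force and verifies that the resulting 4 × 4 matrix H satisfies H^T H = mI₄.

```python
from itertools import product

def four_squares(m):
    """Brute-force a representation m = a^2 + b^2 + c^2 + d^2 (Lagrange)."""
    bound = int(m**0.5) + 1
    for a, b, c, d in product(range(bound), repeat=4):
        if a*a + b*b + c*c + d*d == m:
            return a, b, c, d
    raise ValueError("no representation found")  # cannot happen, by Lagrange

def witt_matrix(a, b, c, d):
    """The 4x4 matrix H of the text, with H^T H = (a^2+b^2+c^2+d^2) I_4."""
    return [[a,  b,  c,  d],
            [b, -a, -d,  c],
            [c,  d, -a, -b],
            [d, -c,  b, -a]]

def mat_mul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

m = 19
a, b, c, d = four_squares(m)            # some representation of 19
H = witt_matrix(a, b, c, d)
assert mat_mul(transpose(H), H) == [[m if i == j else 0 for j in range(4)]
                                    for i in range(4)]
```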
Proof. Let A be the incidence matrix of a symmetric 2-(v, k, λ) design. Then A is a (0, 1)-matrix of size v × v that satisfies

(13.10)  AA^T = (k − λ)I + λJ.

Since the design is symmetric, r = k, and the equation r(k − 1) = λ(v − 1) gives k(k − 1) = λ(v − 1). So k² − k = λv − λ, and hence,

(13.11)  k − λ = k² − λv.

Suppose v is even. Using (13.10),

[det(A)]² = det(AA^T) = det[(k − λ)I + λJ] = (k − λ + λv)(k − λ)^{v−1}.

From (13.11), we have k − λ + λv = k², so [det(A)]² = k²(k − λ)^{v−1}. Now, det(A) is an integer, so the integer (k − λ)^{v−1} must be a square. But v is even, so v − 1 is odd. Hence, n = k − λ must be a square.

Part two takes a bit more work. Let e be the all-one column vector of length v, and let A′ be the (v + 1) × (v + 1) matrix

\[ A' = \begin{pmatrix} A & e \\ e^T & \frac{k}{\lambda} \end{pmatrix}. \]

Let D and E be the following diagonal matrices of size (v + 1) × (v + 1):

\[ D = \begin{pmatrix} I_v & 0 \\ 0 & -\lambda \end{pmatrix} = I_v \oplus (-\lambda)I_1, \qquad
E = \begin{pmatrix} (k-\lambda)I_v & 0 \\ 0 & -\frac{k-\lambda}{\lambda} \end{pmatrix} = (k-\lambda)I_v \oplus \Big(-\frac{k-\lambda}{\lambda}\Big)I_1. \]

Then

\[ A' D (A')^T = \begin{pmatrix} A & e \\ e^T & \frac{k}{\lambda} \end{pmatrix}
\begin{pmatrix} A^T & e \\ -\lambda e^T & -k \end{pmatrix}
= \begin{pmatrix} AA^T - \lambda J & Ae - ke \\ e^T A^T - k e^T & v - \frac{k^2}{\lambda} \end{pmatrix}. \]

Using AA^T − λJ = (k − λ)I and Ae = ke, we get

\[ A' D (A')^T = \begin{pmatrix} (k-\lambda)I_v & 0 \\ 0 & v - \frac{k^2}{\lambda} \end{pmatrix}
= (k-\lambda)I_v \oplus \Big(v - \frac{k^2}{\lambda}\Big)I_1. \]

But, by (13.11), v − k²/λ = (λv − k²)/λ = −(k − λ)/λ. Hence A′ D (A′)^T = E, and D is congruent to E over Q.

For v odd, there are now two cases to consider, depending on whether v is congruent to 1 or to 3, modulo 4.

Suppose v ≡ 1 (mod 4). From Theorem 13.30, we have I_{v−1} congruent to (k − λ)I_{v−1}. Since D is congruent to E, we have

\[ I_{v-1} \oplus \begin{pmatrix} 1 & 0 \\ 0 & -\lambda \end{pmatrix}
\cong (k-\lambda)I_{v-1} \oplus \begin{pmatrix} k-\lambda & 0 \\ 0 & -\frac{k-\lambda}{\lambda} \end{pmatrix}. \]

Since I_{v−1} is congruent to (k − λ)I_{v−1}, the Witt cancellation theorem tells us

\[ \begin{pmatrix} k-\lambda & 0 \\ 0 & -\frac{k-\lambda}{\lambda} \end{pmatrix}
\cong \begin{pmatrix} 1 & 0 \\ 0 & -\lambda \end{pmatrix}. \]
So, for any rational number x₁, there are rational numbers y₁ and z₁ such that

\[ \begin{pmatrix} z_1 & y_1 \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ 0 & -\lambda \end{pmatrix}
\begin{pmatrix} z_1 \\ y_1 \end{pmatrix}
= \begin{pmatrix} x_1 & 0 \end{pmatrix}
\begin{pmatrix} k-\lambda & 0 \\ 0 & -\frac{k-\lambda}{\lambda} \end{pmatrix}
\begin{pmatrix} x_1 \\ 0 \end{pmatrix}. \]

Therefore, the rational numbers x₁, y₁, z₁ satisfy z₁² − λy₁² = (k − λ)x₁². Since v ≡ 1 (mod 4), the number (v − 1)/2 is even and (−1)^{(v−1)/2} = 1, so

z₁² = (k − λ)x₁² + (−1)^{(v−1)/2} λy₁².

Since x₁, y₁, z₁ are rational numbers, we can find an integer M such that the numbers M x₁, M y₁, M z₁ are integers. Put x = M x₁, y = M y₁, and z = M z₁. Then x, y, z is an integer solution to

z² = (k − λ)x² + (−1)^{(v−1)/2} λy².
Now suppose v ≡ 3 (mod 4). For this case, we add an additional component, (k − λ)I₁, to the matrices D and E, to get diagonal matrices of order v + 2:

D ⊕ (k − λ)I₁ is congruent to E ⊕ (k − λ)I₁.

Since v + 1 ≡ 0 (mod 4), we have (k − λ)I_{v+1} congruent to I_{v+1}, and so

\[ I_v \oplus (k-\lambda)I_1 \oplus (-\lambda)I_1
\cong (k-\lambda)I_v \oplus (k-\lambda)I_1 \oplus \Big(-\frac{k-\lambda}{\lambda}\Big)I_1
\cong I_{v+1} \oplus \Big(-\frac{k-\lambda}{\lambda}\Big)I_1. \]

Apply the Witt cancellation theorem to get

\[ \begin{pmatrix} k-\lambda & 0 \\ 0 & -\lambda \end{pmatrix}
\cong \begin{pmatrix} 1 & 0 \\ 0 & -\frac{k-\lambda}{\lambda} \end{pmatrix}. \]

For any rational number z₁ there exist rational numbers x₁, y₁ such that

\[ \begin{pmatrix} x_1 & y_1 \end{pmatrix}
\begin{pmatrix} k-\lambda & 0 \\ 0 & -\lambda \end{pmatrix}
\begin{pmatrix} x_1 \\ y_1 \end{pmatrix}
= \begin{pmatrix} z_1 & 0 \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ 0 & -\frac{k-\lambda}{\lambda} \end{pmatrix}
\begin{pmatrix} z_1 \\ 0 \end{pmatrix}. \]

So, z₁² = (k − λ)x₁² − λy₁². Since v ≡ 3 (mod 4), the number (v − 1)/2 is odd and (−1)^{(v−1)/2} = −1, so

z₁² = (k − λ)x₁² + (−1)^{(v−1)/2} λy₁².

Choose M so that x = M x₁, y = M y₁, and z = M z₁ are integers to get an integer solution to z² = (k − λ)x² + (−1)^{(v−1)/2} λy². ∎
Exercises
Acknowledgment: Problems 1, 2, 3 come from a problem set of H. J. Ryser, 1977.

1. Let A be the incidence matrix of a symmetric 2-(v, k, λ) design. Show the following.
   (a) We may write A in the form A = P₁ + · · · + P_k, where the P_i are permutation matrices of order v.
   (b) We may permute the rows of A so that the trace of A is v.
   (c) The inverse of A is given by the formula A^{-1} = (1/(k − λ))A^T − (λ/(k(k − λ)))J.

2. Let A be an n × n matrix with nonnegative integer elements that satisfies the matrix equation AA^T = aI, where a is a positive integer. Show that we must then have A = cP, where P is a permutation matrix and c is a positive integer such that a = c².

3. Let X and Y be n × n, where n > 1, with integer elements that satisfy the matrix equation XY = (k − λ)I + λJ, where k and λ are integers such that 0 < λ < k. Suppose that all of the row sums of the matrix X are equal to r and that all of the column sums of the matrix Y are equal to s. Show that then all of the line sums of X are equal to r and all of the line sums of Y are equal to s, where rs = k + (n − 1)λ. Further, show that XY = YX.

4. Let us say two integer matrices, A and B, of order n are rationally equivalent if B = PAP^T for some nonsingular rational matrix P. Show that diag(1, 2) is not rationally equivalent to I₂. More generally, show that if p is any prime, then diag(1, p) is not rationally equivalent to I₂.
   Hint: If B = PAP^T, then det(B) = det(A)(det P)².

5. Use the Bruck–Ryser–Chowla theorem to show there are no symmetric 2-designs with parameters (22, 7, 2).

6. Use the Bruck–Ryser–Chowla theorem to show there are no symmetric 2-designs with parameters (29, 8, 2).
   Hint: This will involve using some number theory arguments with the resulting Diophantine equation. Think about factors of 2 and 3.

7. Show there are no projective planes of orders 14, 22, 30, 33, or 38.
Chapter 14
Hadamard Matrices
Hadamard matrices are matrices of 1’s and −1’s with orthogonal rows. They are of interest in their own right, as well as being useful in the theory of designs and the construction of error-correcting codes.
14.1. Introduction

Definition 14.1. An n × n matrix H is said to be a Hadamard matrix if each entry of H is either a 1 or a −1, and H satisfies HH^T = nI_n.

Example 14.2. Here is a Hadamard matrix of order two,

\[ \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}; \]

and one of order four,

\[ \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix}. \]

The equation HH^T = nI tells us H is nonsingular and H^{-1} = (1/n)H^T. So, we also have H^T H = nI. Thus, the rows of H are orthogonal, and so are the columns. Each row and column vector of H has length √n, so (1/√n)H is an orthogonal matrix.

We say a Hadamard matrix is normalized if every entry in the first row and column is a 1; the two matrices in Example 14.2 are normalized. If H is a Hadamard matrix, we can multiply any row or column by −1, and we will still have a Hadamard matrix. Thus, we can normalize any Hadamard matrix simply by multiplying rows and columns by −1.

The next theorem, found in Hadamard’s paper [Had93], is perhaps the best known result about Hadamard matrices.

Theorem 14.3. If H is a Hadamard matrix of order n ≥ 3, then n is a multiple of 4.
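Definition 14.1 can be tested directly. The sketch below is an illustration, not from the text; it checks the defining equation HH^T = nI for the two matrices of Example 14.2.

```python
def is_hadamard(H):
    """Return True if H is a Hadamard matrix: all entries +-1 and HH^T = nI."""
    n = len(H)
    if any(len(row) != n or any(x not in (1, -1) for x in row) for row in H):
        return False
    for i in range(n):
        for j in range(n):
            dot = sum(H[i][k] * H[j][k] for k in range(n))  # row i . row j
            if dot != (n if i == j else 0):
                return False
    return True

H2 = [[1, 1], [1, -1]]
H4 = [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]
assert is_hadamard(H2) and is_hadamard(H4)
assert not is_hadamard([[1, 1], [1, 1]])  # rows not orthogonal
```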
Proof. Let H be a Hadamard matrix of order n ≥ 3, and let a, b, and c be three different rows of H. Since HH^T = nI, the dot product of any pair of distinct rows of H is zero, while the dot product of a row with itself is n. Hence, we have

⟨a + b, a + c⟩ = ⟨a, a⟩ = n.

The entries of H are all ±1, so in the vector sums a + b and a + c, each entry is either a ±2 or 0. Hence, ⟨a + b, a + c⟩ is a sum of numbers, each of which is either a ±4 or a 0. Therefore n is a multiple of 4. ∎

Theorem 14.3 tells us that, for n > 2, a necessary condition for the existence of an n × n Hadamard matrix is that n be a multiple of 4; it is not known if this is sufficient. For a long time, the smallest order in question had been n = 428, but Kharaghani and Tayfeh-Rezaie constructed a Hadamard matrix of this size in 2004 [KhaTay04]. As far as the author knows, the smallest unknown order at this time (August 2014) is 668; i.e., Hadamard matrices are known to exist for every n = 4t, for 1 ≤ t < 167, and it is not known if there is a Hadamard matrix of order 668. Đoković, in [Doko07], lists the 13 values of t ≤ 500 for which the existence of Hadamard matrices of size 4t was unsettled as of 2007: 167, 179, 223, 251, 283, 311, 347, 359, 419, 443, 479, 487, 491. A Hadamard matrix of order 4(251) = 1004 is constructed in the 2013 paper [DGK13], thus removing t = 251 from the list.

This chapter presents some methods for constructing Hadamard matrices. A key fact is that we can build Hadamard matrices of larger sizes by taking tensor products of smaller ones.

Theorem 14.4. If H and K are Hadamard matrices of orders m and n, respectively, then H ⊗ K is a Hadamard matrix of order mn.

Proof. First, note that H ⊗ K is a matrix of 1’s and −1’s and has order mn. We have (H ⊗ K)(H^T ⊗ K^T) = (HH^T) ⊗ (KK^T) = (mI_m) ⊗ (nI_n) = mnI_{mn}, so H ⊗ K is a Hadamard matrix. ∎

For example, from H₂ = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, we get

\[ H_2 \otimes H_2 = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix}. \]

We can repeat the process to construct H₂ ⊗ H₂ ⊗ H₂, a Hadamard matrix of order 8. More generally, the tensor product of H₂ with itself k times yields a Hadamard matrix of order 2^k. This idea goes back to Sylvester [Syl67] and proves the following.

Theorem 14.5. If n is a power of 2, then there exists a Hadamard matrix of order n.
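Theorem 14.5 is easy to realize in code. The following sketch is illustrative and not from the text; tensoring with H₂ on the left is exactly Sylvester-style doubling, [[H, H], [H, −H]].

```python
def sylvester_double(H):
    """Given a Hadamard matrix H, return H2 (x) H = [[H, H], [H, -H]],
    which is again a Hadamard matrix (Theorem 14.4)."""
    return ([row + row for row in H] +
            [row + [-x for x in row] for row in H])

def is_hadamard(H):
    n = len(H)
    return all(sum(H[i][k] * H[j][k] for k in range(n)) == (n if i == j else 0)
               for i in range(n) for j in range(n))

H = [[1]]
for _ in range(3):          # build orders 2, 4, 8
    H = sylvester_double(H)
assert len(H) == 8 and is_hadamard(H)
```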
14.2. The Quadratic Residue Matrix and Paley’s Theorem

Methods from elementary number theory can be used to construct Hadamard matrices of certain sizes. This method goes back to Scarpis [Scar98] and was generalized by Paley in 1933.
Theorem 14.6 (Paley [Pal33]). If p^α is a prime power and p^α ≡ 3 (mod 4), there is a Hadamard matrix of order n = p^α + 1. If p^α ≡ 1 (mod 4), there is a Hadamard matrix of order n = 2(p^α + 1).

The proof uses finite fields and the quadratic character. Let p be an odd prime, and let α be a positive integer. Let F = GF(p^α) be the finite field of order p^α. Define the function χ on F by:

• χ(0) = 0.
• χ(a) = 1 if a is a square in F.
• χ(a) = −1 if a is not a square in F.

The function χ is called the quadratic character on F. When α = 1 and F = Z_p, then χ(a), for a ≠ 0, is the Legendre symbol (a/p). We will need the following facts about the function χ; proofs may be found in number theory books, for example, [NivZu91, Chapter 3].

• χ(ab) = χ(a)χ(b).
• If p is an odd prime, then exactly half of the nonzero elements of GF(p^α) are squares.
• If p^α ≡ 1 (mod 4), then χ(−1) = 1. If p^α ≡ 3 (mod 4), then χ(−1) = −1.

We illustrate with the fields Z₅ and Z₇.

Example 14.7. In the field Z₅, the squares are 1 and 4, while the nonsquares are 2 and 3. Note that 5 ≡ 1 (mod 4) and −1 ≡ 4 ≡ 2² (mod 5), so −1 is a square in Z₅.

Example 14.8. In the field Z₇, the squares are 1, 2, and 4, while the nonsquares are 3, 5, and 6. Note that 7 ≡ 3 (mod 4) and −1 ≡ 6 (mod 7), so −1 is not a square in Z₇.

We now define the quadratic residue matrix, Q. Let p be an odd prime, let q = p^α, and let a₁, ..., a_q be the elements of GF(p^α) in some order. For α = 1, put a_i = i − 1. Define Q to be the q × q matrix with q_{ij} = χ(a_i − a_j) in position i, j.

Example 14.9. For p = 5 and α = 1, with the elements ordered 0, 1, 2, 3, 4, we have

(14.1)
\[ Q_5 = \begin{pmatrix}
0 & 1 & -1 & -1 & 1 \\
1 & 0 & 1 & -1 & -1 \\
-1 & 1 & 0 & 1 & -1 \\
-1 & -1 & 1 & 0 & 1 \\
1 & -1 & -1 & 1 & 0
\end{pmatrix}. \]
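For a prime modulus the quadratic residue matrix is easy to generate. The following sketch is illustrative and not from the text; it handles only prime q (so GF(q) = Z_q), and it verifies the properties QJ = 0 and QQ^T = qI − J proved in Theorem 14.13 below.

```python
def chi(a, p):
    """Quadratic character on Z_p (p an odd prime): 0, +1, or -1."""
    a %= p
    if a == 0:
        return 0
    return 1 if any(x * x % p == a for x in range(1, p)) else -1

def quad_residue_matrix(p):
    return [[chi(i - j, p) for j in range(p)] for i in range(p)]

Q5 = quad_residue_matrix(5)
assert Q5[0] == [0, 1, -1, -1, 1]                      # first row of (14.1)
for q in (5, 7):
    Q = quad_residue_matrix(q)
    assert all(sum(row) == 0 for row in Q)             # row sums are zero
    for i in range(q):
        for j in range(q):
            dot = sum(Q[i][k] * Q[j][k] for k in range(q))
            assert dot == (q - 1 if i == j else -1)    # QQ^T = qI - J
```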
Example 14.10. For p = 7 and α = 1, and the elements ordered 0, 1, 2, 3, 4, 5, 6, we have

(14.2)
\[ Q_7 = \begin{pmatrix}
0 & -1 & -1 & 1 & -1 & 1 & 1 \\
1 & 0 & -1 & -1 & 1 & -1 & 1 \\
1 & 1 & 0 & -1 & -1 & 1 & -1 \\
-1 & 1 & 1 & 0 & -1 & -1 & 1 \\
1 & -1 & 1 & 1 & 0 & -1 & -1 \\
-1 & 1 & -1 & 1 & 1 & 0 & -1 \\
-1 & -1 & 1 & -1 & 1 & 1 & 0
\end{pmatrix}. \]
The diagonal entries of Q are zero, while all of the other entries are 1’s or −1’s. When α = 1 we put a_i = i − 1, so q_{ij} = χ(i − j). Along each diagonal line of Q the entries are the same; in fact, Q is a circulant matrix. For all q, we have χ(a_j − a_i) = χ(−1)χ(a_i − a_j). If p^α ≡ 1 (mod 4), then χ(−1) = 1 and Q is symmetric. If p^α ≡ 3 (mod 4), then χ(−1) = −1 and Q is skew-symmetric. In each row and column of Q the entries sum to zero, because the number of nonzero squares in GF(p^α) is the same as the number of nonsquares. Hence, QJ = JQ = 0.

We now show that Q^T Q = QQ^T = qI_q − J. The proof depends on the following fact.

Lemma 14.11. Let F be a finite field with char(F) ≠ 2. Then for any nonzero c ∈ F, we have

\[ \sum_{b \in F} \chi(b)\chi(b + c) = -1. \]

Remark 14.12. If c = 0, we get \( \sum_{b \in F} (\chi(b))^2 = p^\alpha - 1. \)
Proof. For the term b = 0, we have χ(0)χ(0 + c) = 0, because χ(0) = 0. If b ≠ 0, put z = (b + c)b^{-1}, so that b + c = bz. Since c ≠ 0, we know z ≠ 1. We claim that as b ranges over the nonzero elements of F, the value of z ranges over all elements of F except 1. For, suppose b ≠ b′ but b + c = bz and b′ + c = b′z. Subtracting the second equation from the first, we get b − b′ = (b − b′)z, which is not possible because z ≠ 1. So, different values of b correspond to different values of z, and hence z ranges over all elements of the field F except 1. Using the fact that χ is multiplicative, we have

\[ \sum_{b \in F} \chi(b)\chi(b + c)
= \sum_{b \in F,\; b \neq 0} \chi(b)\chi(bz)
= \sum_{b \neq 0} (\chi(b))^2 \chi(z)
= \sum_{z \neq 1} \chi(z)
= \Big( \sum_{z \in F} \chi(z) \Big) - \chi(1)
= 0 - 1 = -1. \qquad \blacksquare \]
Theorem 14.13. Let q = pα , where p is an odd prime, and let a1 , . . . , aq be the elements of GF (pα ). Let Q be the q × q matrix with χ(ai − aj ) in entry (i, j). Then QQT = QT Q = qIq − J and QJ = JQ = 0.
Proof. Row i of Q consists of the numbers χ(a_i − a_j), where j = 1, 2, ..., q. As j runs from 1 to q, the (a_i − a_j)’s run through the elements of F. Hence, entry i of Qe is

\[ \sum_{j=1}^{q} \chi(a_i - a_j) = \sum_{z \in F} \chi(z) = 0. \]

This shows QJ = 0. A similar argument gives JQ = 0.

Now consider B = QQ^T. Since Q^T = ±Q, we have QQ^T = Q^T Q. Now

\[ b_{ij} = (\text{row } i \text{ of } Q) \cdot (\text{row } j \text{ of } Q)
= \sum_{k=1}^{q} \chi(a_i - a_k)\chi(a_j - a_k)
= \sum_{k=1}^{q} \chi(a_i - a_k)\,\chi\big((a_i - a_k) + (a_j - a_i)\big). \]

Put b = a_i − a_k and c = a_j − a_i. As k runs from 1 to q, the element b runs over the elements of F, and we get b_{ij} = \( \sum_{b \in F} \chi(b)\chi(b + c) \). If i ≠ j, we have c ≠ 0, so Lemma 14.11 tells us b_{ij} = −1. If i = j, then c = 0 and b_{ii} = q − 1. So B = Q^T Q = QQ^T = qI_q − J. ∎

We now use Q to construct a Hadamard matrix. Let e_q denote the column vector of q ones. For q ≡ 3 (mod 4), put

\[ C = \begin{pmatrix} 0 & e_q^T \\ -e_q & Q \end{pmatrix}. \]

Then C is skew-symmetric and

\[ CC^T = \begin{pmatrix} 0 & e_q^T \\ -e_q & Q \end{pmatrix}
\begin{pmatrix} 0 & -e_q^T \\ e_q & Q^T \end{pmatrix}
= \begin{pmatrix} q & 0 \\ 0 & QQ^T + e_q e_q^T \end{pmatrix}. \]

But QQ^T + e_q e_q^T = QQ^T + J = qI_q. Hence CC^T = qI_{q+1}. Now put n = q + 1 and form H_n = I_{q+1} + C. Then H_n is a matrix of 1’s and −1’s, and since C + C^T = 0, we have

H_n H_n^T = (I + C)(I + C)^T = I + C + C^T + CC^T = I + qI = (1 + q)I.

Hence, H_n is a Hadamard matrix of order q + 1.

If q ≡ 1 (mod 4), then 4 does not divide q + 1, so we know there is no Hadamard matrix of order q + 1 in this case. Also, note that the construction above used the fact that Q is skew-symmetric when q ≡ 3 (mod 4). When q ≡ 1 (mod 4), the matrix Q is symmetric. However, in this case we can use Q to construct a Hadamard matrix of order 2n, where n = q + 1. So, suppose now that q ≡ 1 (mod 4), and put n = q + 1. Put

\[ S = \begin{pmatrix} 0 & e_q^T \\ e_q & Q \end{pmatrix}. \]

Since Q is symmetric, so is S. As before, we have

(14.3)
\[ SS^T = \begin{pmatrix} 0 & e_q^T \\ e_q & Q \end{pmatrix}
\begin{pmatrix} 0 & e_q^T \\ e_q & Q^T \end{pmatrix}
= \begin{pmatrix} q & 0 \\ 0 & QQ^T + e_q e_q^T \end{pmatrix} = qI_n. \]
Now form the matrix

(14.4)
\[ K_{2n} = I_n \otimes \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
+ S \otimes \begin{pmatrix} 1 & -1 \\ -1 & -1 \end{pmatrix}
= \begin{pmatrix} I_n & I_n \\ I_n & -I_n \end{pmatrix}
+ \begin{pmatrix} S & -S \\ -S & -S \end{pmatrix}
= \begin{pmatrix} I_n + S & I_n - S \\ I_n - S & -I_n - S \end{pmatrix}. \]

Since S has zeroes on the diagonal, but 1’s and −1’s elsewhere, the entries of K_{2n} are all 1’s and −1’s. Now compute K_{2n}K_{2n}^T:

\[ K_{2n}K_{2n}^T = \begin{pmatrix} I_n + S & I_n - S \\ I_n - S & -I_n - S \end{pmatrix}
\begin{pmatrix} I_n + S^T & I_n - S^T \\ I_n - S^T & -I_n - S^T \end{pmatrix}. \]

For the (1, 1) block, we have

(I_n + S)(I_n + S^T) + (I_n − S)(I_n − S^T) = I_n + S + S^T + SS^T + I_n − S − S^T + SS^T = 2I_n + 2SS^T = 2I_n + 2qI_n = 2(q + 1)I_n.

We get the same result in the (2, 2) block: (I_n − S)(I_n − S^T) + (I_n + S)(I_n + S^T) = 2(q + 1)I_n. And for the (1, 2) block,

(I_n + S)(I_n − S^T) − (I_n − S)(I_n + S^T) = I_n + S − S^T − SS^T − I_n + S − S^T + SS^T = 2S − 2S^T = 0,

because S = S^T. Similarly, the (2, 1) block is zero. So K_{2n}K_{2n}^T = 2nI_{2n} and K_{2n} is a Hadamard matrix of order 2n.
14.3. Results of Williamson

The method used above to construct a Hadamard matrix of order 2(q + 1), for the case q = p^α ≡ 1 (mod 4), was generalized by Williamson [Will44]. With this generalization, we can use a Hadamard matrix of order m to construct one of order m(p^α + 1), where p is prime and p^α ≡ 1 (mod 4). It is based on the lemma below.

Lemma 14.14 (Williamson [Will44]). Let S be an n × n matrix such that S^T = εS, where ε = ±1 and SS^T = (n − 1)I_n. Let A and B be m × m matrices that satisfy AA^T = BB^T = mI_m and AB^T = −εBA^T. Then K = I_n ⊗ A + S ⊗ B satisfies KK^T = mnI_{mn}.

Proof. The proof is a straightforward calculation:

KK^T = (I_n ⊗ A + S ⊗ B)(I_n ⊗ A^T + S^T ⊗ B^T)
     = I_n ⊗ AA^T + S ⊗ BA^T + S^T ⊗ AB^T + SS^T ⊗ BB^T
     = mI_n ⊗ I_m + S ⊗ BA^T − ε²S ⊗ BA^T + (n − 1)mI_n ⊗ I_m
     = (m + nm − m)I_n ⊗ I_m = nmI_{mn}. ∎
The matrix K_{2n} of (14.4) is obtained by setting ε = 1 with

\[ A = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad
B = \begin{pmatrix} 1 & -1 \\ -1 & -1 \end{pmatrix}. \]

Suppose A is a Hadamard matrix of order m > 1. Then m is even and we may form the m × m matrix

\[ T = \begin{pmatrix} 0 & I_{m/2} \\ -I_{m/2} & 0 \end{pmatrix}
= I_{m/2} \otimes \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}. \]

For example, if m = 4, then

\[ T = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{pmatrix}. \]
Let B = TA. Since AA^T = mI_m and TT^T = I_m, we have BB^T = TAA^T T^T = mTT^T = mI_m. Also, AB^T = AA^T T^T = mT^T = −mT and BA^T = TAA^T = mT. So we have AB^T = −BA^T. This, together with Lemma 14.14, can be used to construct larger Hadamard matrices from smaller ones.

Lemma 14.15 (Williamson [Will44, Will47]). Let S be an n × n symmetric matrix such that SS^T = (n − 1)I, and such that each diagonal entry of S is a zero and each off-diagonal entry of S is ±1. Let A be an m × m Hadamard matrix, and let B = TA, where T = I_{m/2} ⊗ \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}. Then K = I_n ⊗ A + S ⊗ B is a Hadamard matrix of order mn.

Proof. The calculation preceding the statement of the lemma shows that A, B, and S satisfy the hypotheses of Lemma 14.14, so KK^T = mnI_{mn}. It remains to check that K is a matrix of ±1’s. The matrix S has 0’s on the main diagonal and ±1’s elsewhere. The entries of A and B are ±1’s. In I_n ⊗ A, each entry is either a 0 or a ±1, with the ±1’s in exactly the positions where the zeroes occur in S ⊗ B. So K is a matrix of ±1’s. ∎

Theorem 14.16 (Williamson [Will44]). If there exists a Hadamard matrix of order m > 1, then there exists a Hadamard matrix of order m(p^α + 1), where p is an odd prime and p^α + 1 ≡ 2 (mod 4).

Proof. Set q = p^α, define Q as in Theorem 14.13, let n = q + 1, and put

\[ S = \begin{pmatrix} 0 & e^T \\ e & Q \end{pmatrix}, \]

where e is a column vector of ones. Since q ≡ 1 (mod 4), the matrix Q is symmetric and hence so is S. As before (14.3), we have SS^T = qI_{q+1}.
Now let A be a Hadamard matrix of order m > 1 and B = TA, where T is the matrix I_{m/2} ⊗ \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}. By Lemma 14.15, K = I_n ⊗ A + S ⊗ B is a Hadamard matrix of order m(p^α + 1). ∎

We have seen that when p^α + 1 ≡ 0 (mod 4) there exists a Hadamard matrix of order p^α + 1, so if we also have a Hadamard matrix of order m, we can take the tensor product to obtain a Hadamard matrix of order m(p^α + 1). Combining this with Theorem 14.16 yields the following.

Corollary 14.17. If p is any odd prime and a Hadamard matrix of order m > 1 exists, then there exists a Hadamard matrix of order m(p^α + 1).

Example 14.18. The following list shows that there are Hadamard matrices of orders 4, 8, 12, 16, ..., 52. The reader can extend the list.

4
8 = 2^3
12 = 11 + 1
16 = 2^4
20 = 19 + 1
24 = 2(11 + 1)
28 = 2(13 + 1)
32 = 2^5
36 = 2(17 + 1)
40 = 2(19 + 1)
44 = 43 + 1
48 = 4(11 + 1)
52 = 2(5^2 + 1)

Theorems 14.4, 14.6, and 14.16 establish the existence of Hadamard matrices of order n for many values of n. We now present some further results of Williamson; these require the following definition.

Definition 14.19. We say a Hadamard matrix H is of Type I if H = I_n + C, where C is skew-symmetric.

For example, if n = q + 1, where q = p^α ≡ 3 (mod 4) for an odd prime p, and Q is the quadratic residue matrix, then

\[ C = \begin{pmatrix} 0 & e_q^T \\ -e_q & Q \end{pmatrix} \]

is skew-symmetric, and I_n + C is a Hadamard matrix of Type I. The next result is a Type I version of Corollary 14.17.

Theorem 14.20 (Williamson [Will44]). If there exists a Hadamard matrix of Type I and order m, and p is an odd prime such that p^α ≡ 3 (mod 4), then there exists a Hadamard matrix of Type I and order m(p^α + 1).

Proof. Put q = p^α and n = q + 1; let Q be the quadratic residue matrix. Then we know A = I_n + C is a Hadamard matrix of Type I, where

\[ C = \begin{pmatrix} 0 & e_q^T \\ -e_q & Q \end{pmatrix}. \]

Since q ≡ 3 (mod 4), the matrix C is skew-symmetric. Let T be the q × q matrix with ones on the counterdiagonal and zeros elsewhere. Thus, T is obtained by reversing the order of the rows in the identity matrix. Let P = TQ. Since
q_{ij} = χ(a_i − a_j), we have p_{ij} = χ(a_{q+1−i} − a_j). Now a₁, ..., a_q is a list of the elements of the finite field of order q, and we may list these elements in any order we please. Choose the order so that a_{q+1−i} = −a_i, where i = 1, 2, ..., q. Then

p_{ij} = χ(a_{q+1−i} − a_j) = χ(−a_i + a_{q+1−j}) = p_{ji},

and P is symmetric. Note also that the counterdiagonal entries of P are zero, so P + T is a matrix of ±1’s. Let

\[ B = \begin{pmatrix} -1 & 0 \\ 0 & T \end{pmatrix} A
= \begin{pmatrix} -1 & 0 \\ 0 & T \end{pmatrix}
\begin{pmatrix} 1 & e^T \\ -e & I_q + Q \end{pmatrix}
= \begin{pmatrix} -1 & -e^T \\ -e & T + P \end{pmatrix}. \]

Then B is a symmetric matrix of ±1’s. Since \begin{pmatrix} -1 & 0 \\ 0 & T \end{pmatrix}² = I, we get B^T B = A^T A = nI_n. Also,

\[ AB^T = AA^T \begin{pmatrix} -1 & 0 \\ 0 & T \end{pmatrix}
= n \begin{pmatrix} -1 & 0 \\ 0 & T \end{pmatrix} = BA^T. \]

Now let H be a Hadamard matrix of order m and Type I. We have H = I + S, where S is skew-symmetric. So HH^T = I + SS^T = mI and SS^T = (m − 1)I_m. Hence, A, B, and S satisfy the hypotheses of Lemma 14.14 with ε = −1, and K = I_m ⊗ A + S ⊗ B is a Hadamard matrix of order m(p^α + 1). We now need to show that K is of Type I. Substituting A = I_n + C, we have K = I_{mn} + I_m ⊗ C + S ⊗ B. Let W = I_m ⊗ C + S ⊗ B. Since C and S are skew-symmetric, while B is symmetric,

W^T = I_m ⊗ C^T + S^T ⊗ B^T = −I_m ⊗ C − S ⊗ B = −W,

so W is skew-symmetric and K = I + W is of Type I. ∎
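The construction of Theorem 14.16 can also be checked numerically. The sketch below is illustrative and not from the text: it takes q = 5 and a Hadamard matrix of order m = 4, and it uses the standard Kronecker product, which differs from the text's block arrangement only by a simultaneous permutation of rows and columns, so the same cancellation as in Lemma 14.14 goes through.

```python
def chi(a, p):
    a %= p
    if a == 0:
        return 0
    return 1 if any(x * x % p == a for x in range(1, p)) else -1

def kron(X, Y):
    """Standard Kronecker product of two square matrices (lists of lists)."""
    n, m = len(X), len(Y)
    return [[X[i][j] * Y[k][l] for j in range(n) for l in range(m)]
            for i in range(n) for k in range(m)]

def mat_add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

q = 5                                   # q = 1 (mod 4)
n = q + 1
S = [[0] * n for _ in range(n)]         # S = [[0, e^T], [e, Q]] of Theorem 14.16
for j in range(1, n):
    S[0][j] = S[j][0] = 1
for i in range(1, n):
    for j in range(1, n):
        S[i][j] = chi((i - 1) - (j - 1), q)
A = [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]   # m = 4
T = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]]
B = [[sum(T[i][k] * A[k][j] for k in range(4)) for j in range(4)] for i in range(4)]
I6 = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
K = mat_add(kron(I6, A), kron(S, B))    # order m * n = 24
N = len(K)
assert N == 24
assert all(sum(K[i][t] * K[j][t] for t in range(N)) == (N if i == j else 0)
           for i in range(N) for j in range(N))
```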
Theorem 14.21 (Williamson [Will44]). If there exists a Hadamard matrix of Type I and order n, then there exists a Hadamard matrix of order n(n − 1).

Proof. Let H be a Hadamard matrix of Type I and order n, so H = I_n + C, where C is skew-symmetric. Then HH^T = I_n + CC^T = nI_n, so

(14.5)  CC^T = (n − 1)I_n.

If we multiply both row and column j of H by −1, the resulting matrix will still be a Hadamard matrix of Type I, so we may assume each entry in the first row of H is a one. So,

\[ H = \begin{pmatrix} 1 & e^T \\ -e & D \end{pmatrix}, \]

where D is (n − 1) × (n − 1). Then

\[ HH^T = \begin{pmatrix} 1 & e^T \\ -e & D \end{pmatrix}
\begin{pmatrix} 1 & -e^T \\ e & D^T \end{pmatrix}
= \begin{pmatrix} 1 + e^T e & -e^T + e^T D^T \\ -e + De & J_{n-1} + DD^T \end{pmatrix}. \]

Since HH^T = nI, we have De = e and

(14.6)  DD^T = nI_{n−1} − J_{n−1}.
Now put K = I_n ⊗ J_{n−1} + C ⊗ D. Remember that C has zeroes on the diagonal and ±1’s elsewhere, and D is a matrix of ±1’s, so the positions of the ones in I_n ⊗ J_{n−1} correspond to the positions of the zeros in C ⊗ D, and K is a matrix of ±1’s. Using C^T = −C and DJ_{n−1} = J_{n−1}, with (14.5) and (14.6), we have

KK^T = I_n ⊗ J²_{n−1} + C^T ⊗ J_{n−1}D^T + C ⊗ DJ_{n−1} + CC^T ⊗ DD^T
     = (n − 1)I_n ⊗ J_{n−1} − C ⊗ J_{n−1} + C ⊗ J_{n−1} + CC^T ⊗ DD^T
     = (n − 1)I_n ⊗ J_{n−1} + (n − 1)I_n ⊗ (nI_{n−1} − J_{n−1})
     = (n − 1)I_n ⊗ nI_{n−1} = n(n − 1)I_{n(n−1)}.

So K is a Hadamard matrix of order n(n − 1). ∎
14.4. Hadamard Matrices and Block Designs

There are a number of results relating Hadamard matrices and block designs; see, for example, [HedWal78]. We look at two of the more basic constructions. First, one can construct a symmetric 2-design from a normalized Hadamard matrix.

Theorem 14.22 ([Rys63, p. 107]). A normalized Hadamard matrix of order n, where n = 4t ≥ 8, is equivalent to a symmetric 2-(v, k, λ) design with parameters v = 4t − 1, k = 2t − 1, and λ = t − 1.

Proof. Suppose H is a normalized Hadamard matrix of order n = 4t, where t ≥ 2. Delete the first row and column of H and then replace the −1’s by zeroes, to get a (0, 1)-matrix, A, of order v = n − 1 = 4t − 1. In the original matrix H, each row except the first row had 2t ones and 2t minus ones. Since one of the 1’s was in the first column of H, each row of A has exactly 2t − 1 ones.

Now, consider the first column of H together with any other two columns. Form an n × 3 array with these three columns, and consider the n rows of this array. Since the first column of H is a column of ones, there are four possible types: (1 1 1), (1 1 −1), (1 −1 1), and (1 −1 −1).
1
1 ).
• Let s be the number of rows of type ( 1 −1
1 ).
• Let u be the number of rows of type ( 1
1 −1 ).
• Let v be the number of rows of type ( 1 −1 −1 ). Since there are n rows, r + s + u + v = n. The orthogonality of columns one and two gives r − s + u − v = 0. The orthogonality of columns one and three yields r + s − u − v = 0, and the orthogonality of columns two and three gives r − s − u + v = 0. Thus, we have the following system of four linear equations: r + s + u + v = n, (14.7)
r − s + u − v = 0, r + s − u − v = 0, r − s − u + v = 0.
The coefficient matrix of this system of equations is the Hadamard matrix

\[ \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix}, \]

and the unique solution is r = s = u = v = n/4 = t. Now, in the matrix A, the first row and column of H have been deleted, −1’s have been changed to zeroes, and one of the (1 1 1) rows (namely, the one from the first row of H) has been deleted. Therefore, the dot product of any two columns of A is r − 1 = t − 1. Hence, each pair of columns of A has exactly λ = t − 1 ones in common. Each row of A has k = 2t − 1 ones. So A is the incidence matrix of a symmetric 2-(v, k, λ) design, with v = 4t − 1, k = 2t − 1, and λ = t − 1.

Conversely, suppose we have a symmetric 2-(4t − 1, 2t − 1, t − 1) design with incidence matrix A. We reverse the process by changing the zeroes of A to −1’s, and then adding a first row and column of ones. Note that J − A has ones where A has zeroes, and zeroes where A has ones. So, changing the zeroes of A to −1’s gives the matrix A + (−1)(J − A) = 2A − J. Now put

\[ H = \begin{pmatrix} 1 & e^T \\ e & 2A - J \end{pmatrix}, \]

where e is a column of 4t − 1 ones. Then

\[ HH^T = \begin{pmatrix} 1 & e^T \\ e & 2A - J \end{pmatrix}
\begin{pmatrix} 1 & e^T \\ e & 2A^T - J \end{pmatrix}
= \begin{pmatrix} 1 + e^T e & e^T + e^T(2A^T - J) \\ e + (2A - J)e & ee^T + (2A - J)(2A^T - J) \end{pmatrix}. \]

Since Ae = (2t − 1)e, we have

e + (2A − J)e = e + 2Ae − Je = e + 2(2t − 1)e − (4t − 1)e = 0,

so the lower left-hand block is zero. Since HH^T is symmetric, the top right-hand block is also zero. The top left-hand entry is 1 + e^T e = 4t. Finally, we compute the lower right-hand block, using the fact that AA^T = A^T A = tI + (t − 1)J:

ee^T + (2A − J)(2A^T − J) = J + 4AA^T − 2JA^T − 2AJ + J²
= J + 4(tI + (t − 1)J) − 2(2t − 1)J − 2(2t − 1)J + (4t − 1)J
= 4tI + (1 + 4t − 4 − 8t + 4 + 4t − 1)J = 4tI.

So HH^T = 4tI. ∎
Example 14.23. Let H = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} ⊗ \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} ⊗ \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}. Writing out the entries, we have

\[ H = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\
1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 \\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 \\
1 & -1 & -1 & 1 & -1 & 1 & 1 & -1
\end{pmatrix}. \]

We have t = 2, v = 7, k = 3, and λ = 1. The matrix

\[ A = \begin{pmatrix}
0 & 1 & 0 & 1 & 0 & 1 & 0 \\
1 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 & 0 & 1 \\
1 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 1 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 1 & 0 & 1 & 1 & 0
\end{pmatrix} \]
represents the projective plane of order 2.

Now we use a normalized Hadamard matrix to construct a 3-design. Let H be a normalized Hadamard matrix of size n = 4t. Delete the first row (which is all ones), and use each of the remaining rows to define two blocks as follows: one block is formed using the positions of the 1’s, and another block is formed using the positions of the −1’s. So we get 2(n − 1) = 8t − 2 blocks, each containing n/2 = 2t elements. We claim this is a 3-design, with λ = t − 1 = n/4 − 1.

The argument is similar to that used to prove Theorem 14.22. Choose any three points; these correspond to three distinct columns of H. Look at the rows in the n × 3 array formed from these three columns.

• Let r be the number of rows of type ±(1 1 1).
• Let s be the number of rows of type ±(1 −1 1).
• Let u be the number of rows of type ±(1 1 −1).
• Let v be the number of rows of type ±(1 −1 −1).

From the fact that n is the total number of rows, and each pair of distinct columns is orthogonal, we see that r, s, u, v satisfy equations (14.7). So r = s = u = v = t. Now, the number of blocks which contain the three points is the number of rows with ±(1 1 1) in the positions for those points. Since we removed the top row of all ones, there are r − 1 = t − 1 blocks which contain the three points. This shows that we have a 3-design with v = 4t, k = 2t, and λ = t − 1.
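The first construction of Theorem 14.22 (delete the first row and column, replace −1 by 0) can be checked directly. A sketch, not from the text, using the order-8 Sylvester matrix, so t = 2 and the design is the Fano plane.

```python
def sylvester(k):
    """Sylvester (tensor-power) Hadamard matrix of order 2^k; normalized."""
    H = [[1]]
    for _ in range(k):
        H = [r + r for r in H] + [r + [-x for x in r] for r in H]
    return H

H = sylvester(3)                        # normalized Hadamard, n = 8, t = 2
t = len(H) // 4
A = [[1 if x == 1 else 0 for x in row[1:]] for row in H[1:]]   # (0,1)-matrix
v = len(A)                              # v = 4t - 1 = 7
assert all(sum(row) == 2 * t - 1 for row in A)                 # k = 2t - 1
# A A^T = t I + (t-1) J: two distinct blocks meet in lambda = t - 1 points.
for i in range(v):
    for j in range(v):
        dot = sum(A[i][x] * A[j][x] for x in range(v))
        assert dot == (2 * t - 1 if i == j else t - 1)
```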
Exercises
14.5. A Determinant Inequality, Revisited

Recall the Hadamard determinant inequality from Chapter 2.

Theorem 14.24. Let A be an n × n matrix with columns A₁, ..., A_n. Then

(14.8)  |det A| ≤ ‖A₁‖ ‖A₂‖ ‖A₃‖ · · · ‖A_n‖,

i.e., the modulus of the determinant of A is less than or equal to the product of the lengths of the columns of A.

Recall that equality holds if and only if the columns of A are orthogonal. If the entries of A satisfy |a_{ij}| ≤ 1, then |det A| ≤ (√n)^n. To attain the maximum value n^{n/2}, we must have |a_{ij}| = 1 for all i, j and the columns of A must be orthogonal. So, if A is a real n × n matrix satisfying |a_{ij}| ≤ 1 and |det A| = n^{n/2}, then A must be a Hadamard matrix.
Exercises

1. Suppose a Hadamard matrix of order 20 is used to construct a 3-design as in Section 14.4. What are the values of the parameters v, k, λ, r, and b for this design?
2. We have now established the following facts:
(a) If there is a Hadamard matrix of order n, then 4 divides n.
(b) If there exist Hadamard matrices of orders m and n, then their tensor product is a Hadamard matrix of order mn.
(c) If p is an odd prime and p^α ≡ 3 (mod 4), then there is a Hadamard matrix of order p^α + 1.
(d) If p is an odd prime and p^α ≡ 1 (mod 4), then there is a Hadamard matrix of order 2(p^α + 1).
Use these facts to show there exist Hadamard matrices of order 4t for all positive integers t < 23.
Chapter 15
Graphs
Graphs were introduced in Chapter 12; we now present more detailed deﬁnitions and some results about graphs and matrices.
15.1. Definitions

We usually visualize a graph as a set of points (vertices) and edges connecting pairs of points, and we are often interested in paths and circuits in the graph. For example, Figure 15.1 shows two graphs on four vertices. For the lefthand graph, the notation v1 → v3 → v2 → v4 describes a path from v1 to v4. We now set down some more formal notation and terminology; some of this was already introduced in Chapter 12. A graph G = G(V, E) consists of a finite set V of vertices (also called points) together with a prescribed set E of unordered pairs of vertices of V. Each unordered pair α = {v, w} in E is called an edge. If v = w, we say α is a loop. The number of vertices in V is the order of G. The two graphs in Figure 15.1 have order 4; the one on the left has edges E = {{v1, v3}, {v1, v4}, {v2, v3}, {v2, v4}}.

Figure 15.1. Two isomorphic graphs.
Figure 15.2. Graphs K3, K4, K5.
Figure 15.3. A graph with an isolated vertex (lefthand side) and a multigraph (righthand side).
We say v and w are the endpoints of the edge α = {v, w}. We also say v and w are incident with α, and we say the edge α is incident with its endpoints v and w. If {v, w} is an edge of G, we say the vertices v and w are adjacent. We also say two edges are adjacent if they share a common endpoint. Thus, edges {v, w} and {w, z} are adjacent. This definition of graphs allows loops, which are edges of the form {v, v}, but sometimes we want to exclude loops. The term simple graph is used to specify a graph without loops. Two graphs G and G′ are isomorphic if there is a one-to-one correspondence between their vertex sets which preserves adjacency. For example, the graphs in Figure 15.1 are isomorphic with the correspondence between their vertex sets being
v1 ←→ c,  v2 ←→ b,  v3 ←→ a,  v4 ←→ d.
One may regard isomorphic graphs as being the same graph with diﬀerent labels or names for the vertices. The complete graph of order n, designated Kn , is the (simple) graph on n vertices in which all possible pairs of distinct vertices are edges. Figure 15.2 shows the complete graphs K3 , K4 , and K5 . A vertex is called isolated if it is incident with no edges. For example, in the graph on the lefthand side of Figure 15.3, vertex v2 is isolated. The degree, or valency, of a vertex is the number of edges incident with that vertex. In Figure 15.1, all of the vertices have degree 2. For the graph on the lefthand side of Figure 15.3, vertex v1 has degree 2, vertices v3 and v4 have degree 1, and vertex v2 has degree 0.
Figure 15.4. A cubic graph.
Since each edge has two endpoints, the sum of the valencies of all the vertices in a graph is twice the number of edges. We say a simple graph G is regular if all of the vertices have the same degree; if this common degree is k, we say G is regular of degree k. The graphs in Figure 15.1 are regular of degree 2. The complete graph Kn is regular of degree n − 1. A regular graph of degree 3 is called a cubic graph. Figure 15.4 shows a cubic graph of order 8. We sometimes use graphs with multiple edges; i.e., we may have more than one edge joining a pair of vertices. We use the term general graph to indicate that both loops and multiple edges are allowed. The term multigraph is also used. The graph on the righthand side of Figure 15.3 has loops and multiple edges.
15.2. Graphs and Matrices

Pictures are good for visualizing graphs, and they show how graphs arise naturally in modeling communications networks, flows, etc. Less obvious is that graphs play an important role in certain types of matrix theory problems. Let G be a general graph (i.e., we are allowing both loops and multiple edges) of order n with vertex set V = {v1, ..., vn}. Let aij = m{vi, vj} be the number of edges with endpoints vi and vj. Note that aij = aji. Also, aij = 0 if {vi, vj} is not an edge of G. The n × n matrix A is called the adjacency matrix of G. Note that A is symmetric and all entries of A are nonnegative integers.

Example 15.1. The adjacency matrix for the graph on the lefthand side in Figure 15.1 is
A = [ 0 0 1 1 ]
    [ 0 0 1 1 ]
    [ 1 1 0 0 ]
    [ 1 1 0 0 ].

Example 15.2. The adjacency matrix for the (general) graph on the righthand side of Figure 15.3 is
A = [ 0 3 0 0 ]
    [ 3 0 1 0 ]
    [ 0 1 1 0 ]
    [ 0 0 0 1 ].
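The passage from a graph to its adjacency matrix is easy to mechanize. The following Python sketch (an illustration, not from the text) builds the adjacency matrix of the lefthand graph of Figure 15.1 from its edge list and checks that the row sums give the degrees.

```python
def adjacency_matrix(n, edges):
    """Adjacency matrix of a simple graph on vertices 0..n-1."""
    A = [[0] * n for _ in range(n)]
    for v, w in edges:
        A[v][w] = A[w][v] = 1
    return A

# Lefthand graph of Figure 15.1 (vertices v1..v4 renumbered 0..3).
A = adjacency_matrix(4, [(0, 2), (0, 3), (1, 2), (1, 3)])
assert A == [[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]]
# Row sums give the degrees; here every vertex has degree 2.
assert [sum(row) for row in A] == [2, 2, 2, 2]
```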
Figure 15.5. Example 15.4.
Example 15.3. The adjacency matrix for the complete graph Kn is J − I.

When G is a simple graph (no multiple edges and no loops), the adjacency matrix is a (0, 1)-matrix with zeroes on the diagonal. The sum of the entries in row i (or column i) gives the degree of the ith vertex. Suppose G and G′ are isomorphic graphs with adjacency matrices A and A′. We can think of G and G′ as the same graph, but with the vertices relabeled. Since the rows and columns of the adjacency matrix correspond to the vertices, relabeling vertices corresponds to reordering the rows and columns of the adjacency matrix in the same way. Hence, there is a permutation matrix P such that P A P^T = A′.

Example 15.4. Consider the isomorphic graphs G and G′ shown in Figure 15.5. The corresponding adjacency matrices are
A = [ 0 0 1 ]        A′ = [ 0 1 0 ]
    [ 0 0 0 ]   and       [ 1 0 0 ]
    [ 1 0 0 ]             [ 0 0 0 ].
For
P = [ 1 0 0 ]
    [ 0 0 1 ]
    [ 0 1 0 ],
we have P A P^T = A′.
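The relation P A P^T = A′ in Example 15.4 can be verified directly. Here is a Python sketch (an illustration, not from the text) with the three matrices written out.

```python
def matmul(X, Y):
    """Product of two matrices given as lists of lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def transpose(X):
    return [list(col) for col in zip(*X)]

# Example 15.4: G has the single edge {v1, v3}; G' has the single edge {v1, v2}.
A      = [[0, 0, 1], [0, 0, 0], [1, 0, 0]]
Aprime = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
P      = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]   # swaps the labels of the 2nd and 3rd vertices

assert matmul(matmul(P, A), transpose(P)) == Aprime
```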
15.3. Walks and Cycles

Let G be a general graph. A sequence of m successively adjacent edges,
{v0, v1}, {v1, v2}, ..., {vi−1, vi}, {vi, vi+1}, ..., {vm−1, vm}
(where m > 0), is called a walk of length m and is denoted as v0 → v1 → ··· → vm−1 → vm, or simply v0, ..., vm. The endpoints of the walk are v0 and vm. If v0 = vm, we say the walk is closed; otherwise, we say it is open. A walk with distinct edges is called a trail. A chain is a trail with distinct vertices, except possibly v0 = vm. A closed chain is called a cycle. Cycles of length 3 are called triangles. In a simple graph, a cycle must have at least three edges. In a general graph, where loops and multiple edges are allowed, a loop gives a cycle of one edge, and we can also have cycles of two edges by using a pair of edges with the same vertices.

Example 15.5. Here are some examples of walks in K4, the complete graph of order 4.
• v1 → v2 → v3 → v1 → v2 → v4 is an open walk of length 5. Since the edge {v1, v2} appears more than once, it is not a trail.
• v1 → v2 → v3 → v1 → v4 is a trail of length 4. It is not a chain, because the vertex v1 is repeated.
• v2 → v3 → v4 → v1 is an open chain of length 3.
• v3 → v2 → v4 → v1 → v3 is a cycle.

Let A be the adjacency matrix of a graph G on n vertices; consider A². Entry (i, j) is Σ_{k=1}^{n} a_ik a_kj. In this sum, a term is nonzero if and only if both a_ik and a_kj are nonzero. This happens if and only if both {vi, vk} and {vk, vj} are edges of G, and hence there is a walk of the form vi → vk → vj. So, entry i, j of A² gives the number of walks of length 2 from vertex vi to vertex vj. This argument applies in general graphs as well as in simple graphs. We now use an induction argument to generalize this result to walks of length m.

Theorem 15.6. Let A be the adjacency matrix of a graph G of order n. The (i, j) entry of A^m gives the number of walks of length m with endpoints vi and vj.

Proof. We use induction on m; certainly the result holds for the basis case m = 1. Assume it is true for m and consider A^(m+1) = A^m A. Let b_ij denote the i, j entry of A^m. The i, j entry of A^(m+1) is then Σ_{k=1}^{n} b_ik a_kj. Each term in the sum is a nonnegative integer, and the kth term is nonzero if and only if both b_ik and a_kj are nonzero. By the induction hypothesis, b_ik gives the number of walks of length m from vi to vk. Appending the edge {vk, vj} to one of these walks gives a walk of length m + 1 from vi to vj. So, the product b_ik a_kj gives the number of walks of length m + 1 from vi to vj that conclude with the edge {vk, vj}. Since every walk from vi to vj of length m + 1 concludes with an edge {vk, vj} for some k with a_kj ≠ 0, the sum Σ_{k=1}^{n} b_ik a_kj gives the total number of walks of length m + 1 from vi to vj. □
We say a graph G is connected if for every pair of vertices v, w, there is a walk with v and w as endpoints. We also regard any vertex v as connected with itself. If G is a connected graph, the length of the shortest walk between two vertices v and w is the distance between v and w, denoted d(v, w). We set d(v, v) = 0. The maximum value of d(v, w), over all pairs of vertices of G, is called the diameter of G. Example 15.7. For the graph on the lefthand side of Figure 15.6, d(v1 , v2 ) = 1, d(v1 , v3 ) = 2, and d(v1 , v4 ) = 3. The diameter of the graph is 3. For the graph on the righthand side of Figure 15.6, d(v1 , v2 ) = 1, d(v1 , v5 ) = 2, and d(v1 , v4 ) = 2. The diameter of this graph is 2. Example 15.8. The complete graph Kn has diameter 1.
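The characterization of d(v, w) as the least m for which (A^m) has a positive entry in position (v, w) gives a naive way to compute distances and the diameter from matrix powers. A Python sketch (an illustration, not from the text; the 6-cycle is my own choice of example):

```python
def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def distance_matrix(A):
    """d(i, j) = least m with (A^m)[i][j] > 0, for a connected graph; d(i, i) = 0."""
    n = len(A)
    d = [[0 if i == j else None for j in range(n)] for i in range(n)]
    Am = A
    for m in range(1, n):          # distances in a connected graph are at most n - 1
        for i in range(n):
            for j in range(n):
                if d[i][j] is None and Am[i][j] > 0:
                    d[i][j] = m
        Am = matmul(Am, A)
    return d

# 6-cycle v0 - v1 - ... - v5 - v0: diameter 3.
n = 6
A = [[int(abs(i - j) in (1, n - 1)) for j in range(n)] for i in range(n)]
d = distance_matrix(A)
diameter = max(max(row) for row in d)
assert diameter == 3
```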
Figure 15.6. Example 15.7.
15.4. Graphs and Eigenvalues

Theorem 15.6 tells us that powers of the adjacency matrix give information about walks in the graph. Now we look at some results linking the eigenvalues of the adjacency matrix to information about the graph. Let G be a graph with adjacency matrix A. The characteristic polynomial of G is defined to be the characteristic polynomial of A, that is, det(xI − A). Similarly, the spectrum of G is defined to be the spectrum of A, the set of eigenvalues of A, denoted by spec(A). For a permutation matrix P we have P A P^T = P A P^(−1), so the characteristic polynomial and spectrum are invariant under graph isomorphism.

Example 15.9. Let G = Kn be the complete graph of order n. The adjacency matrix is A = J − I, which has spectrum {(n − 1), −1}, where the eigenvalue −1 has multiplicity n − 1. The characteristic polynomial is f(x) = (x + 1)^(n−1) (x − (n − 1)).

There are interesting connections between properties of the graph and its spectrum; we mention only a few. For more on this subject, see [Biggs74].

Theorem 15.10. Let G be a connected, general graph of diameter d. Then the spectrum of G has at least d + 1 distinct eigenvalues.

Proof. Let G be a graph of diameter d, and let A be the adjacency matrix of G. Let v, w be vertices of G with d(v, w) = d, and let v = v0 → v1 → v2 → ··· → vd = w be a walk of length d from v to w. Then, for each vertex vi in the walk, there is a walk of length i joining v0 to vi, but no shorter walk joining v0 to vi; for, if there were, we would have a shorter walk from v to w, contradicting the fact that the distance from v to w is d. So A^i has a nonzero entry in the position corresponding to the vertex pair v, vi, but all of the matrices I, A, A², ..., A^(i−1) have zeroes in this position. Therefore, for i = 1, ..., d, the matrix A^i cannot be a linear combination of I, A, A², ..., A^(i−1). Hence, no nontrivial linear combination of I, A, A², ..., A^(d−1), A^d can be zero. But then we cannot have c0 I + c1 A + c2 A² + ··· + cd A^d = 0 for any nontrivial choice of coefficients c0, ..., cd, and thus A cannot satisfy any polynomial of degree d or smaller. Therefore, the degree of the minimal polynomial of A is at least d + 1. Since A is a real, symmetric matrix, it is diagonalizable, and its minimal polynomial has no repeated roots. Therefore, A has at least d + 1 distinct eigenvalues. □
Example 15.11. The complete graph Kn has diameter d = 1. Its adjacency matrix J − I has two distinct eigenvalues, n − 1 and −1.
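One way to confirm the spectrum in Examples 15.9 and 15.11 without computing eigenvalues directly is to check that A = J − I satisfies (A + I)(A − (n − 1)I) = 0, so every eigenvalue of A is either −1 or n − 1. A Python sketch (an illustration, not from the text):

```python
def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

n = 5
A = [[int(i != j) for j in range(n)] for i in range(n)]   # J - I, adjacency matrix of K5
I = [[int(i == j) for j in range(n)] for i in range(n)]

ApI = [[A[i][j] + I[i][j] for j in range(n)] for i in range(n)]            # A + I = J
Amk = [[A[i][j] - (n - 1) * I[i][j] for j in range(n)] for i in range(n)]  # A - (n-1)I

Z = matmul(ApI, Amk)
assert Z == [[0] * n for _ in range(n)]   # so spec(A) is contained in {-1, n-1}
```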
15.5. Strongly Regular Graphs

Now we consider a special type of graph called a strongly regular graph. First, note that if A is the adjacency matrix of a regular graph G of degree k, then every line of A has exactly k ones, so Ae = ke and k is an eigenvalue of G.

Definition 15.12. A strongly regular graph on the parameters (n, k, λ, μ) is a simple graph of order n, with n ≥ 3, that is regular of degree k and satisfies the following conditions.
(1) If v and w are distinct vertices of G which are joined by an edge, there are exactly λ vertices of G which are joined to both v and w.
(2) If v and w are distinct vertices of G which are not joined by an edge, there are exactly μ vertices of G which are joined to both v and w.
We exclude the complete graph Kn and the void graph (i.e., a graph with no edges) so that both conditions (1) and (2) are nonvacuous.

Example 15.13. Figure 15.7 shows two strongly regular graphs. The graph on the lefthand side has parameters (4, 2, 0, 2); the graph on the righthand side has parameters (5, 2, 0, 1).

Example 15.14. The complete bipartite graph Km,m is the graph with 2m vertices, v1, ..., vm, vm+1, ..., v2m, in which the edges are all pairs (vi, vj) with 1 ≤ i ≤ m and (1 + m) ≤ j ≤ 2m. This graph is strongly regular with parameters (2m, m, 0, m); Figure 15.8 shows K3,3.

Examples 15.13 and 15.14 are fairly easy examples. The famous Petersen graph (shown in Figure 15.9) is a more interesting example. This strongly regular cubic graph has parameters (10, 3, 0, 1). We shall see that a strongly regular, connected graph has exactly three distinct eigenvalues. One of these, of course, is k; we will establish formulas for the other two. Let A be the adjacency matrix for a strongly regular graph G with parameters (n, k, λ, μ). Consider the matrix A². Entry i, j of A² gives the number of walks of
Figure 15.7. Two strongly regular graphs.

Figure 15.8. The graph K3,3.

Figure 15.9. The Petersen graph.
length 2 from vi to vj. Since each vertex of G has degree k, the diagonal entries of A² are all k. For i ≠ j, entry i, j of A² is λ if {vi, vj} is an edge of G, and is μ if {vi, vj} is not an edge of G. So, A² has λ's in the off-diagonal positions where A has 1's and has μ's in the off-diagonal positions where A has 0's. Hence, A² = λA + μ(J − A − I) + kI, or
(15.1) A² + (μ − λ)A + (μ − k)I = μJ.
Now, AJ = JA = kJ, so (A − kI)J = 0. Multiply both sides of equation (15.1) by A − kI to get (A − kI)(A² + (μ − λ)A + (μ − k)I) = (A − kI)(μJ) = 0, thus proving the following theorem.

Theorem 15.15. Let A be the adjacency matrix of a strongly regular graph G with parameters (n, k, λ, μ). Then (A − kI)(A² + (μ − λ)A + (μ − k)I) = 0, and so A satisfies the cubic polynomial
(15.2) m(x) = (x − k)[x² + (μ − λ)x + (μ − k)] = x³ + (μ − λ − k)x² + [(μ − k) − k(μ − λ)]x + k(k − μ).
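Equation (15.1) can be checked on the Petersen graph, here realized as the Kneser graph on the 2-subsets of a 5-set (a standard realization, not taken from the text). A Python sketch:

```python
from itertools import combinations

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

# Petersen graph as the Kneser graph K(5,2): vertices are 2-subsets of {0,...,4},
# adjacent exactly when the subsets are disjoint.
verts = [frozenset(p) for p in combinations(range(5), 2)]
n = len(verts)                       # n = 10
A = [[int(not (v & w)) for w in verts] for v in verts]

k, lam, mu = 3, 0, 1                 # parameters (10, 3, 0, 1)
A2 = matmul(A, A)
# Check (15.1): A^2 + (mu - lam) A + (mu - k) I = mu J.
for i in range(n):
    for j in range(n):
        lhs = A2[i][j] + (mu - lam) * A[i][j] + (mu - k) * int(i == j)
        assert lhs == mu
```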
When G is connected, we can apply Theorem 15.10. Since G ≠ Kn, we know d ≥ 2, and hence the minimal polynomial of A has degree at least three. But we have already found a monic, cubic polynomial which is satisfied by A; this must then be the minimal polynomial of A. Thus, we have the following.

Theorem 15.16. Let A be the adjacency matrix of a strongly regular, connected graph G with parameters (n, k, λ, μ). Then the minimal polynomial of A is
(15.3) m(x) = x³ + (μ − λ − k)x² + [(μ − k) − k(μ − λ)]x + k(k − μ).
The following example shows that if G is strongly regular but not connected, there need not be three distinct eigenvalues.

Example 15.17. Let G consist of the union of c > 1 copies of the complete graph Km. Then G is not connected, but is strongly regular with parameters n = cm, k = m − 1, λ = m − 2, and μ = 0. The eigenvalues of G are the same as the eigenvalues of Km, which are m − 1 and −1. The polynomial (15.2) is then
m(x) = [x − (m − 1)][x² − (m − 2)x − (m − 1)]
     = [x − (m − 1)][x − (m − 1)][x − (−1)]
     = [x − (m − 1)]²(x + 1).

Now we find the roots of the cubic polynomial m(x) = (x − k)[x² + (μ − λ)x + (μ − k)]. One of the roots is λ1 = k. The quadratic formula gives the other two roots:
λ2, λ3 = [(λ − μ) ± √((μ − λ)² − 4(μ − k))] / 2.
We claim that the quantity under the square root, D = (μ − λ)² + 4(k − μ), must be positive. For, if vertices v and w are not adjacent, there are μ other vertices adjacent to both of them, so k ≥ μ and k − μ ≥ 0. When v and w are adjacent, there are λ vertices adjacent to both of them, so k ≥ λ + 1. So if we should have k = μ, then μ > λ. Therefore, at least one of the quantities (k − μ) and (μ − λ) must be positive and so D > 0. Therefore, λ2 ≠ λ3.

Now we find the multiplicities of the eigenvalues for the case of a connected strongly regular graph; this will give information about possible values for k, λ, and μ. Let mi be the multiplicity of the eigenvalue λi for i = 1, 2, 3. Since A is the adjacency matrix of a connected graph, the Perron–Frobenius theorem (to be proven in Chapter 17) tells us that the eigenvalue ρ(A) = k has multiplicity 1, so for λ1 = k we have m1 = 1. The sum of the multiplicities is n, giving the first equation in (15.4). The sum of the eigenvalues is the trace of the adjacency matrix A. Since A has zeroes in the diagonal positions, trace(A) = 0, giving the second equation in (15.4).
(15.4) 1 + m2 + m3 = n,
       k + m2 λ2 + m3 λ3 = 0.
Rearrange to get
m2 + m3 = n − 1,
m2 λ2 + m3 λ3 = −k.
Consider this as a system of two linear equations in the unknowns mi. Writing the system in matrix form, we have
[ 1   1  ] [ m2 ]   [ n − 1 ]
[ λ2  λ3 ] [ m3 ] = [ −k ].
Since λ2 ≠ λ3, the matrix is invertible and
[ m2 ]                    [ λ3   −1 ] [ n − 1 ]
[ m3 ]  =  1/(λ3 − λ2) ·  [ −λ2   1 ] [ −k ].
Hence,
m2 = (λ3(n − 1) + k)/(λ3 − λ2)   and   m3 = (−λ2(n − 1) − k)/(λ3 − λ2).
With D = (μ − λ)² − 4(μ − k), we have
λ2 = ((λ − μ) + √D)/2   and   λ3 = ((λ − μ) − √D)/2.
Then λ3 − λ2 = −√D and
m2 = ((n − 1)[(λ − μ) − √D]/2 + k)/(−√D)
   = (n − 1)/2 − (n − 1)(λ − μ)/(2√D) − k/√D
   = (n − 1)/2 − ((n − 1)(λ − μ) + 2k)/(2√D).
A similar calculation gives
m3 = (n − 1)/2 + ((n − 1)(λ − μ) + 2k)/(2√D).
Note that m2 + m3 = n − 1. What do these formulas tell us? We know that k, λ, μ, and n are all nonnegative integers and m2, m3 are positive integers. So, √D must be a rational number and hence the integer D = (μ − λ)² − 4(μ − k) must be a perfect square. If D = r², then r must divide the integer (n − 1)(λ − μ) + 2k. For example, in the Petersen graph, D = 9 = 3² and (n − 1)(λ − μ) + 2k = 9(−1) + 6 = −3. We then get m2 = 5 and m3 = 4. See [CVL80, Chapter 2] for a more detailed discussion of strongly regular graphs and later chapters of that book for connections between strongly regular graphs and designs.

Here are two examples of families of strongly regular graphs. The triangular graph T(m), for m ≥ 4, is defined as follows. The vertices are the C(m, 2) = m(m − 1)/2 two-element subsets of an m-set. Distinct vertices are adjacent if and only if they are not disjoint. This gives a strongly regular graph with n = m(m − 1)/2
Figure 15.10. The triangular graph T(4).
vertices of degree k = 2(m − 2). For, given a vertex {a, b}, it is adjacent to all vertices of the form {a, x} where x ∉ {a, b}, and {b, y} where y ∉ {a, b}. There are m − 2 choices for x and m − 2 choices for y. Now consider a pair of adjacent vertices {a, b} and {a, c}. These are both adjacent to the vertex {b, c} and to all vertices of the form {a, x}, where x ∉ {a, b, c}. So, there are m − 3 choices for x and thus λ = m − 3 + 1 = m − 2. For disjoint vertices {a, b} and {c, d} there are four common adjacent vertices, namely {a, c}, {a, d}, {b, c}, and {b, d}. So μ = 4.

Example 15.18. Figure 15.10 shows the triangular graph T(4). There are n = 6 vertices, and k = 2(4 − 2) = 4. We have λ = 2 and μ = 4. The six vertices correspond to the two-element subsets of {1, 2, 3, 4} as follows:
v1 = {1, 2}, v2 = {1, 3}, v3 = {1, 4}, v4 = {3, 4}, v5 = {2, 4}, v6 = {2, 3}.

The lattice graph L2(m), for m ≥ 2, is defined as follows. The vertices correspond to the elements of S × S, where S is an m-set. Thus, there are n = m² vertices. Distinct vertices (a, b) and (x, y) are adjacent if either a = x or b = y. This gives a regular graph of degree k = 2(m − 1), for (a, b) is adjacent to the m − 1 vertices of the form (a, y), where y ≠ b, and the m − 1 vertices of the form (x, b), where x ≠ a. Now consider two adjacent vertices (a, b) and (a, c). These are both adjacent to the m − 2 vertices of the form (a, y), where y ∉ {b, c}. So, λ = m − 2. On the other hand, if we have two nonadjacent vertices (a, b) and (c, d), where a ≠ c and b ≠ d, they are both adjacent to (a, d) and (c, b). So μ = 2. If we use the integer lattice points (i, j), where i, j = 1, 2, ..., m, as the vertices of L2(m), then two vertices are adjacent if they lie on the same horizontal or on the same vertical line. Figure 15.11 shows the lattice graph L2(3), which has parameters (9, 4, 1, 2).
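The parameter computations for T(m) and L2(m), and the multiplicity formulas derived earlier in this section, can be verified computationally. The following Python sketch (an illustration, not from the text; the function names are mine) measures (n, k, λ, μ) directly from the graphs and evaluates m2 and m3 exactly with rational arithmetic.

```python
from itertools import combinations
from fractions import Fraction
from math import isqrt

def srg_parameters(verts, adjacent):
    """Empirically determine (n, k, lambda, mu) for a strongly regular graph."""
    n = len(verts)
    nbrs = {v: {w for w in verts if w != v and adjacent(v, w)} for v in verts}
    k = len(next(iter(nbrs.values())))
    lams = {len(nbrs[v] & nbrs[w]) for v, w in combinations(verts, 2) if adjacent(v, w)}
    mus  = {len(nbrs[v] & nbrs[w]) for v, w in combinations(verts, 2) if not adjacent(v, w)}
    (lam,), (mu,) = lams, mus      # singletons exactly when the graph is strongly regular
    return n, k, lam, mu

def multiplicities(n, k, lam, mu):
    """m2, m3 from the formulas in this section, as exact rationals."""
    D = (mu - lam) ** 2 + 4 * (k - mu)
    r = isqrt(D)
    assert r * r == D              # D must be a perfect square for these graphs
    s = Fraction((n - 1) * (lam - mu) + 2 * k, 2 * r)
    return Fraction(n - 1, 2) - s, Fraction(n - 1, 2) + s

# Triangular graph T(5): 2-subsets of a 5-set, adjacent iff not disjoint.
T5 = [frozenset(p) for p in combinations(range(5), 2)]
params = srg_parameters(T5, lambda v, w: bool(v & w))
assert params == (10, 6, 3, 4)     # n = m(m-1)/2, k = 2(m-2), lam = m-2, mu = 4
assert multiplicities(*params) == (4, 5)

# Lattice graph L2(3).
L23 = [(a, b) for a in range(3) for b in range(3)]
params = srg_parameters(L23, lambda v, w: v[0] == w[0] or v[1] == w[1])
assert params == (9, 4, 1, 2)
assert multiplicities(*params) == (4, 4)
```

For the Petersen graph, `multiplicities(10, 3, 0, 1)` returns the multiplicities 5 and 4 computed in the text.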
Figure 15.11. The lattice graph L2(3).
Exercises

1. Prove that a simple graph always has two distinct vertices with the same degree. Show by example that this need not hold for multigraphs.
2. Prove that a cubic graph has an even number of vertices.
3. Show that k is an eigenvalue of a regular graph of degree k.
4. Let G be a graph on the vertices v1, ..., vn. Define the complement of G, denoted G^C, to be the graph with the same vertices, but {vi, vj} is an edge of G^C if and only if {vi, vj} is not an edge of G. If A is the adjacency matrix of the graph G, what is the adjacency matrix of the complement G^C?
5. Let G be a graph of order n. Suppose that G is regular of degree k, and let λ1 = k, λ2, ..., λn be the spectrum of G. Prove that G^C is regular of degree (n − 1 − k), and that the spectrum of G^C is (n − 1 − k), (−1 − λ2), (−1 − λ3), ..., (−1 − λn).
6. Let p(x) = x^n + c1 x^(n−1) + c2 x^(n−2) + ··· + cn be the characteristic polynomial of a simple graph G of order n. Prove that c1 = 0, that c2 equals −1 times the number of edges of G, and that c3 equals −2 times the number of cycles of length 3 (triangles) of G.
7. Let K1,n−1 be the graph of order n whose vertices have degrees (n − 1), 1, 1, ..., 1. We call K1,n−1 the star of order n. Prove that the spectrum of K1,n−1 is ±√(n − 1), 0, 0, ..., 0.
8. Let G be a connected graph of order n which is regular of degree 2. The edges of G thus form a cycle of length n, and G is sometimes called a cycle graph of order n. Find the spectrum of G.
9. Prove that a regular connected graph with three distinct eigenvalues is strongly regular. Hint: Use the fact that if A is the adjacency matrix of a regular connected graph of degree k, then k is a simple eigenvalue of A (i.e., the geometric multiplicity of the eigenvalue k is 1).
Chapter 16
Directed Graphs
The positions of the zero entries of a matrix can have a profound eﬀect on important matrix properties, such as rank and eigenvalues. One can use a directed graph to represent the zerononzero pattern of a matrix. The purpose of this chapter is to present results about directed graphs needed for Chapter 17, where we look at the theorems of Perron and Frobenius about the eigenvalues of nonnegative matrices.
16.1. Definitions

A digraph or directed graph D consists of a finite set V of vertices (or points) together with a prescribed set E of ordered pairs of vertices, not necessarily distinct. Each such ordered pair α = (v, w) in E is called an arc, or directed edge, or directed line of D. An arc of the form (v, v) is called a loop. If multiple arcs are permitted, we have a general digraph. The order of D is the number of vertices in V. Note the key difference between a graph and a digraph. In a graph, the edges are unordered pairs; in a digraph, they are ordered pairs. In diagrams, this is represented by an arrow showing the direction. If (v, w) and (w, v) are both in E, we put arrows going in both directions on the segment joining v and w. We say (v, w) is an arc from v to w.

Example 16.1. Figure 16.1 shows a digraph of order 3 with vertex set V = {v1, v2, v3} and edge set E = {(v1, v2), (v1, v3), (v3, v1), (v3, v2)}.

In the arc (v, w), we say v is the initial vertex and w is the terminal vertex. The indegree of a vertex v is the number of arcs with terminal vertex v. The outdegree of v is the number of arcs with initial vertex v. In Example 16.1, vertex v1 has indegree 1 and outdegree 2. Vertex v2 has indegree 2 and outdegree 0, and vertex v3 has indegree 1 and outdegree 2. Let D be a general digraph of order n with vertices V = {v1, ..., vn}. Let aij be the number of arcs from vi to vj. The n × n matrix A = (aij) is called the
Figure 16.1. Example 16.1.
adjacency matrix of D. For example,
A = [ 0 1 1 ]
    [ 0 0 0 ]
    [ 1 1 0 ]
is the adjacency matrix of the digraph in Example 16.1. The entries in row i represent arcs with initial vertex vi; hence the sum of the entries in row i gives the outdegree of vi. The entries in column j represent arcs with terminal vertex vj, so the sum of the entries in column j gives the indegree of vj. We say a digraph D is regular of degree k if all line sums of the adjacency matrix are equal to k. This tells us that k is both the indegree and outdegree for every vertex.

A directed walk of length m in a digraph D is a sequence of arcs of the form (v0, v1), (v1, v2), ..., (vm−1, vm), where m > 0, and v0, v1, ..., vm denote arbitrary vertices in V. (We use this possibly misleading labeling to avoid double subscripts, such as va0, va1, va2, etc.) We also denote a directed walk with the notation v0 → v1 → v2 → ··· → vm−1 → vm. We say v0 → v1 → v2 → ··· → vm−1 → vm is a directed walk from v0 to vm. We use the terms directed trails, directed chains (paths), and directed cycles for the obvious "directed" versions of the definitions given for graphs in Chapter 15.

We say the vertices v and w are strongly connected if there is a directed walk from v to w and a directed walk from w to v. A vertex is considered to be strongly connected to itself. Strong connectivity is an equivalence relation on the set of vertices (i.e., it is reflexive, symmetric, and transitive), and thus partitions the set of vertices into equivalence classes. Figure 16.2 shows two directed graphs of order 5. For the graph on the left, the equivalence classes under strong connectivity are {v1, v2} and {v3, v4, v5}. For the graph on the right, the equivalence classes are {v1, v2}, {v3, v5}, and {v4}. Let V1, ..., Vt denote the equivalence classes of the vertices of a digraph D under strong connectivity. For each subset Vi, let D(Vi) be the subdigraph formed from the vertices in Vi and those arcs of D for which both endpoints are in Vi.
The subgraphs D(V1 ), D(V2 ), . . . , D(Vt ) are called the strong components of D. Figure 16.3 shows the strong components of the two graphs in Figure 16.2. We say D is strongly connected if it has exactly one strong component—i.e., if each pair of vertices of D is strongly connected. Figure 16.4 shows two digraphs of
Figure 16.2. Two directed graphs of order 5.

Figure 16.3. Strong components of the graphs in Figure 16.2.
Figure 16.4. Two digraphs of order 6.
order 6. The one on the left is strongly connected but the one on the right is not. In fact, for the digraph on the right, no pair of vertices is strongly connected. We have defined the adjacency matrix of a digraph; now we go in the other direction and define the digraph of a matrix. Let A be an n × n matrix. Define the digraph of A, denoted D(A), as follows. The digraph D(A) has n vertices, V = {v1, ..., vn}, and (vi, vj) is an arc of D(A) if and only if aij ≠ 0. Thus, the arcs of D(A) show the positions of the nonzero entries of A. In some situations, it may be useful to view aij as a "weight" attached to the arc (vi, vj). If we replace the nonzero entries of A with ones, we get a (0, 1)-matrix which has the same digraph as A. If D is a digraph on n vertices with no multiple edges, and A is its adjacency matrix, then A is the (0, 1)-matrix with D(A) = D.
The following construction will be needed later in this chapter and in Chapter 17.

Definition 16.2. Let D be a directed graph with vertex set V, and let k be a positive integer. Define D^(k) to be the directed graph with vertex set V and edges determined as follows: for v, w ∈ V, the ordered pair (v, w) is a directed edge of D^(k) if and only if D contains a directed walk of length k from v to w.

Let A be an n × n matrix with nonnegative entries, and let D = D(A) be the digraph of A. The argument used to prove Theorem 15.6 shows that entry i, j of A^k is nonzero if and only if D contains a directed walk of length k from vertex vi to vertex vj. Hence, D(A^k) is D^(k).
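The identity D(A^k) = D^(k) can be seen in miniature on Example 16.1: the nonzero pattern of A² lists exactly the pairs joined by a directed walk of length 2. A Python sketch (an illustration, not from the text):

```python
def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

# Digraph of Example 16.1: arcs (v1,v2), (v1,v3), (v3,v1), (v3,v2), vertices 0, 1, 2.
A = [[0, 1, 1], [0, 0, 0], [1, 1, 0]]

# Edges of D^(2): pairs joined by a directed walk of length 2.
A2 = matmul(A, A)
D2 = {(i, j) for i in range(3) for j in range(3) if A2[i][j] != 0}

# Walks of length 2 found directly: v0->v2->v0, v0->v2->v1, v2->v0->v1, v2->v0->v2.
assert D2 == {(0, 0), (0, 1), (2, 1), (2, 2)}
```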
16.2. Irreducibility and Strong Connectivity

Definition 16.3. An n × n matrix A is said to be reducible if there exists a permutation matrix P such that
P^T A P = [ A11  A12 ]
          [  0   A22 ],
where A11 is k × k and A22 is (n − k) × (n − k), with 1 ≤ k ≤ n − 1. If A is not reducible, we say A is irreducible.

Thus, a reducible matrix can be brought to block triangular form via a simultaneous permutation of its rows and columns, while an irreducible matrix cannot. A matrix with no zero entries is clearly irreducible, while any triangular or block triangular matrix is reducible. The matrices
[ 0 1 ]        [ 0 1 0 ]
[ 1 0 ]  and   [ 0 0 1 ]
               [ 1 0 0 ]
are irreducible. More generally, the matrix representing the n-cycle,
Cn = [ 0 1 0 0 ··· 0 ]
     [ 0 0 1 0 ··· 0 ]
     [ 0 0 0 1 ··· 0 ]
     [ .           . ]
     [ 0 0 0 0 ··· 1 ]
     [ 1 0 0 0 ··· 0 ],
is irreducible.

The following theorem gives the important connection between irreducibility and the structure of the digraph of the matrix.

Theorem 16.4. Let A be an n × n matrix. Then A is irreducible if and only if the digraph D(A) is strongly connected.

Proof. Suppose A is reducible. Then there is a permutation matrix P such that
P^T A P = [ A11  A12 ]
          [  0   A22 ],
where A11 is n1 × n1, A22 is n2 × n2, and n1, n2 are positive integers such that n1 + n2 = n. This simultaneous permutation of the rows and columns of A corresponds to relabeling the vertices of D = D(A). Let V1 be the n1 vertices of D corresponding to the block A11, and let V2 be the n2 vertices of D corresponding to the block A22. The n2 × n1 block of zeros in the lower lefthand corner of P^T A P tells us the digraph D has no arcs from vertices in V2 to vertices
16.2. Irreducibility and Strong Connectivity
239
in $V_1$. Hence, if $v \in V_2$ and $w \in V_1$, there can be no directed walk from v to w. So, D is not strongly connected.

Conversely, suppose D is not strongly connected; we need to show A is reducible. Since D is not strongly connected, there exist vertices v and w for which there is no directed walk from v to w. Let $W_1$ consist of the vertex w and all vertices q such that there is a directed walk from q to w. Let $W_2$ consist of all the other vertices. Since $v \in W_2$, we know $W_2$ is nonempty. If $p \in W_2$, then D cannot contain any arcs from p to vertices in $W_1$. Apply a simultaneous row and column permutation to the matrix A so that the rows and columns corresponding to the vertices in $W_1$ come ﬁrst, followed by those in $W_2$. Labeling these groups of rows and columns accordingly, we have
$$P^TAP = X = \begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix},$$
where the first group of rows and columns corresponds to $W_1$ and the second to $W_2$. Since there are no arcs from vertices in $W_2$ to vertices in $W_1$, we have $X_{21} = 0$, and A is reducible. □

If D is a digraph of order n and there is a directed walk from vertex v to w in D, there must be a directed walk from v to w of length at most n − 1. For, in any walk of length n or more, at least one vertex must be repeated. We can remove any portion of the walk which starts and ﬁnishes at the same vertex, until we get down to a walk from v to w with no repeated vertices. For example, in the walk v → va → vb → vc → va → vd → w, we can remove the portion va → vb → vc → va and be left with the shorter directed walk v → va → vd → w from v to w. This fact leads to our next result. See Exercise 12 (also, Exercise 9 of Chapter 17) for a proof of Theorem 16.5 which does not use graphs.

Theorem 16.5. Let A be an n × n nonnegative matrix. Then A is irreducible if and only if every entry of $(I + A)^{n-1}$ is positive.

Proof. Expanding $(I + A)^{n-1}$, we have
$$(I + A)^{n-1} = I + c_1A + c_2A^2 + \cdots + c_{n-2}A^{n-2} + A^{n-1},$$
where $c_1, \dots, c_{n-2}$ are the usual binomial coeﬃcients. Each coeﬃcient is a positive real number, and A is nonnegative, so all entries of $(I+A)^{n-1}$ are positive if and only if, for each position i, j with i ≠ j, there is some exponent k with 1 ≤ k ≤ n − 1, such that $A^k$ has a positive entry in the i, j position. But this is equivalent to having a directed walk of length k from vertex $v_i$ to vertex $v_j$ in the digraph D(A). Now, D(A) is strongly connected if and only if for each pair of distinct vertices $v_i$ and $v_j$ there is a directed walk from $v_i$ to $v_j$ of some length k, where 1 ≤ k ≤ n − 1. So, D(A) is strongly connected if and only if $c_1A + c_2A^2 + \cdots + c_{n-2}A^{n-2} + A^{n-1}$ has positive entries in every oﬀ-diagonal position. Hence D(A) is strongly connected if and only if $I + (c_1A + c_2A^2 + \cdots + c_{n-2}A^{n-2} + A^{n-1}) = (I + A)^{n-1}$ is positive. By Theorem 16.4, A is irreducible if and only if $(I + A)^{n-1}$ is positive. □
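The criterion of Theorem 16.5 translates directly into a computation. The sketch below is a minimal illustration in plain Python, with matrices as nested lists; the function names are our own, not from the text.

```python
from itertools import product

def mat_mult(X, Y):
    """Multiply two n x n matrices given as nested lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_irreducible(A):
    """Theorem 16.5: a nonnegative n x n matrix A is irreducible
    if and only if every entry of (I + A)^(n-1) is positive."""
    n = len(A)
    M = [[A[i][j] + (i == j) for j in range(n)] for i in range(n)]  # I + A
    P = [[int(i == j) for j in range(n)] for i in range(n)]         # identity
    for _ in range(n - 1):
        P = mat_mult(P, M)
    return all(P[i][j] > 0 for i, j in product(range(n), repeat=2))
```

For the 3-cycle matrix displayed above the test succeeds, while any triangular matrix fails, since a zero stays in the corresponding corner of every power of I + A.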
Our next concern is to apply a permutation similarity to a square matrix A to obtain the ﬁnest possible block triangular form.
Consider a digraph D with strong components $D(V_1), D(V_2), \dots, D(V_t)$. Each component is strongly connected. For any pair of distinct strong components $D(V_i)$ and $D(V_j)$, the digraph D may contain arcs from vertices in $V_i$ to vertices in $V_j$, or arcs from vertices in $V_j$ to vertices in $V_i$, but not both—we cannot have both arcs from $V_i$ to $V_j$ and arcs from $V_j$ to $V_i$ because $D(V_i)$ and $D(V_j)$ are diﬀerent strong components of D. We deﬁne the condensation digraph of D to be the digraph $D^*$, on t vertices $v_1, \dots, v_t$, in which there is an arc from $v_i$ to $v_j$ if and only if i ≠ j and there is an arc in D from some vertex of $V_i$ to some vertex of $V_j$. Thus, we “condense” each strong component $D(V_i)$ to a single point $v_i$ and use only arcs from one component to another.

Example 16.6. The top part of Figure 16.5 shows a digraph on eleven vertices. Below it are shown the strong components $D(V_i)$, i = 1, ..., 5, where
$$V_1 = \{v_1, v_2, v_3\},\quad V_2 = \{v_4, v_5, v_6, v_7\},\quad V_3 = \{v_8\},\quad V_4 = \{v_9\},\quad V_5 = \{v_{10}, v_{11}\}.$$
The condensation digraph of D has ﬁve vertices and is shown in the bottom part of Figure 16.5.

Lemma 16.7. The condensation digraph $D^*$ of a digraph D has no closed directed walks.

Proof. Suppose $D^*$ has a closed directed walk. By deﬁnition, $D^*$ has no loops, so the length of such a walk is at least 2. Let $V_k$ and $V_m$ be distinct vertices on the walk. Then if $v \in V_k$ and $w \in V_m$, the vertices v and w will be strongly connected in D, contradicting the fact that $V_k$ and $V_m$ are diﬀerent strong components of D. Therefore, $D^*$ has no closed directed walks. □

Lemma 16.8. Let D be a digraph of order n which has no closed directed walks. Then there is an ordering of the vertices of D, say $v_1, \dots, v_n$, such that every arc of D has the form $(v_i, v_j)$ with 1 ≤ i < j ≤ n.

Proof. We use induction on n. If n = 1, there are no arcs, so the result is trivially true. Assume then that n > 1. Since there are no closed directed walks, there must be a vertex with indegree 0 (left as an exercise). Let $v_1$ be such a vertex. Delete $v_1$ and all arcs with initial vertex $v_1$ to obtain a digraph $D'$ of order n − 1. Since D has no closed directed walks, $D'$ can have no closed directed walks. Therefore, by the induction hypothesis, we can order the vertices of $D'$ as $v_2, \dots, v_n$, where every arc of $D'$ has the form $(v_i, v_j)$ with 2 ≤ i < j ≤ n. The ordering $v_1, \dots, v_n$ then has the desired property for the original digraph D. □

These facts about digraphs yield a block triangular form for a matrix.

Theorem 16.9. Let A be an n × n matrix. Then there exists a positive integer t and a permutation matrix P of order n, such that $PAP^T$ is block triangular of the form $\mathrm{triang}(A_{11}, \dots, A_{tt})$, where the diagonal blocks $A_{11}, \dots, A_{tt}$ are square, irreducible matrices. The integer t is uniquely determined by A, and the diagonal blocks $A_{ii}$ are uniquely determined up to simultaneous row and column permutations. However, their ordering along the diagonal is not necessarily uniquely determined by A.
Figure 16.5. A digraph D, with its strong components, and condensation digraph.
Proof. Let D be the digraph of A, and let $D(V_1), D(V_2), \dots, D(V_t)$ be the strong components of D. Let $D^*$ be the condensation digraph of D. By Lemma 16.8, we may reorder the $V_i$'s to get, say, $W_1, \dots, W_t$, such that every arc of $D^*$ has the form $(W_i, W_j)$ with 1 ≤ i < j ≤ t. Simultaneously permute the rows and columns of A so that those corresponding to $W_1$ come ﬁrst, followed by those in $W_2$, then those in $W_3$, and so on. Let P be the corresponding permutation matrix, and partition $PAP^T$ into $t^2$ blocks accordingly, to get
$$PAP^T = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1t} \\ A_{21} & A_{22} & \cdots & A_{2t} \\ \vdots & \vdots & \ddots & \vdots \\ A_{t1} & A_{t2} & \cdots & A_{tt} \end{pmatrix}.$$
The strong component $D(W_i)$ is the digraph of block $A_{ii}$. Since $D(W_i)$ is strongly connected, Theorem 16.4 tells us the diagonal blocks $A_{ii}$ are irreducible. When i > j, there are no arcs in $D^*$ of the form $(W_i, W_j)$, so $A_{ij} = 0$ when i > j. Hence, $PAP^T$ is in the desired block triangular form.

To establish the uniqueness claims, suppose that for some permutation matrix Q, the matrix $QAQ^T = \mathrm{triang}(B_{11}, \dots, B_{ss})$ is block triangular with irreducible diagonal blocks $B_{11}, \dots, B_{ss}$. Let $D_1, D_2, \dots, D_s$ be the subdigraphs of D corresponding to the diagonal blocks $B_{11}, \dots, B_{ss}$. Since the matrices $B_{ii}$ are irreducible, the digraphs $D_i$ are strongly connected. Moreover, the zeroes below the diagonal blocks tell us that there are no arcs from vertices of $D_i$ to those of $D_j$ whenever i > j. Hence, when i > j, there are no directed walks from vertices of $D_i$ to those of $D_j$, and $D_1, D_2, \dots, D_s$ must be the strong components of D. Since the strong components of a digraph are uniquely determined, we must have s = t, and, after simultaneous row and column permutations on each block $B_{ii}$, the blocks $B_{11}, \dots, B_{tt}$ must be $A_{11}, \dots, A_{tt}$ in some order. □

The block triangular form of Theorem 16.9 is called the Frobenius normal form of A, and the diagonal blocks $A_{ii}$ are called the irreducible components of A.
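Computationally, the Frobenius normal form amounts to finding the strong components of D(A) and ordering them as in Lemma 16.8. The sketch below is our own illustration in plain Python (a real implementation would use a linear-time method such as Tarjan's algorithm; here reachability is computed by boolean matrix squaring, in the spirit of Theorem 16.5).

```python
def strong_components(A):
    """Return the strong components of D(A) as lists of vertex indices,
    ordered so that all arcs run from earlier components to later ones;
    permuting rows and columns of A accordingly yields the Frobenius
    normal form of Theorem 16.9."""
    n = len(A)
    # Boolean transitive closure of I + A by repeated squaring.
    reach = [[bool(A[i][j]) or i == j for j in range(n)] for i in range(n)]
    for _ in range(n - 1):
        reach = [[any(reach[i][t] and reach[t][j] for t in range(n))
                  for j in range(n)] for i in range(n)]
    comps, seen = [], set()
    for i in range(n):
        if i not in seen:
            # i and j are in the same strong component iff each reaches the other.
            comp = [j for j in range(n) if reach[i][j] and reach[j][i]]
            comps.append(comp)
            seen.update(comp)
    # If there is an arc from component C1 to C2, then C1 reaches strictly
    # more vertices than C2, so descending reach count is a valid ordering
    # in the sense of Lemma 16.8.
    comps.sort(key=lambda c: sum(reach[c[0]][j] for j in range(n)),
               reverse=True)
    return comps
```

For a strongly connected matrix the function returns a single component; for a matrix already in block triangular form it recovers the diagonal blocks.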
16.3. Index of Imprimitivity

The following concept plays a crucial role in the next chapter.

Deﬁnition 16.10. Let D be a strongly connected digraph of order n > 1 with vertices $V = \{v_1, \dots, v_n\}$. Let k = k(D) be the greatest common divisor of the lengths of the closed directed walks of D. The integer k is called the index of imprimitivity of D. If k = 1, we say D is primitive. If k > 1, we say D is imprimitive.

Any closed directed walk is made up of one or more directed cycles, and the length of the walk is the sum of the lengths of these cycles. So, the number k is also the greatest common divisor of the lengths of the directed cycles of D. For example, the graph on the left-hand side of Figure 16.6 has cycles of lengths 3 and 4, so k = 1 and the graph is primitive. The graph on the right-hand side has cycles of lengths 3 and 6; the index of imprimitivity is k = 3.
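Since every directed cycle in a digraph of order n has length at most n, and a closed directed walk of length m through vertex i exists exactly when $(A^m)_{ii} > 0$, the index of imprimitivity can be read off from the first n powers of A. A sketch in plain Python (the function name is our own; A is assumed to be the adjacency-type matrix of a strongly connected digraph):

```python
from math import gcd

def index_of_imprimitivity(A):
    """gcd of the directed-cycle lengths of the strongly connected digraph
    D(A): every cycle has length at most n, and (A^m)_{ii} > 0 exactly when
    some closed directed walk of length m passes through vertex i."""
    n = len(A)
    M, k = A, 0
    for m in range(1, n + 1):
        if any(M[i][i] > 0 for i in range(n)):
            k = gcd(k, m)   # gcd(0, m) == m, so the first hit initializes k
        M = [[sum(M[i][t] * A[t][j] for t in range(n)) for j in range(n)]
             for i in range(n)]
    return k
```

For the 3-cycle $C_3$ the only closed walks of length at most 3 have length 3, so the function returns 3; any matrix with a positive diagonal entry returns 1.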
Figure 16.6. A primitive graph (left) and a graph of index 3 (right).
Theorem 16.11. Let D be a strongly connected digraph of order n with index of imprimitivity k. Then the following hold. (1) For each vertex v of D, the number k is the greatest common divisor of the lengths of the closed directed walks containing v. (2) Let v and w be any pair of vertices of D. Suppose we have two walks from v to w of lengths L1 and L2 . Then L1 ≡ L2 (mod k). (3) The set V of vertices of D can be partitioned into nonempty subsets V1 , . . . , Vk , where, setting Vk+1 = V1 , each arc of D issues from a vertex of Vi and terminates in a vertex of Vi+1 , for some 1 ≤ i ≤ k. (4) Suppose pi ∈ Vi and pj ∈ Vj and L is the length of a directed walk from pi to pj . Then L ≡ j − i (mod k). Proof. Let v and w be vertices of D. Let kv denote the greatest common divisor of the lengths of the closed directed walks containing v. Similarly, let kw be the greatest common divisor of the lengths of the closed directed walks containing the vertex w. Let α be any closed directed walk containing w, and let Lα be the length of α. Since D is strongly connected, there is a directed walk, say β, from v to w; let Lβ be the length of β. There is also a directed walk, say γ from w to v; let Lγ be the length of γ. Then β, α, γ is a closed directed walk of length Lβ + Lα + Lγ which contains v. But, the walk β, γ is also a closed directed walk containing v and it has length Lβ + Lγ . So, kv divides both of the numbers Lα + Lβ + Lγ and Lβ + Lγ . Therefore, kv divides Lα . Since Lα represents the length of any closed directed walk containing w, the integer kv must divide kw . Reversing the roles of v and w in the argument shows that kw divides kv . Hence kv = kw . So, the number kv is the same for any vertex, and we must have kv = k. Now let v and w be vertices of D. Let α and β be directed walks from v to w of lengths Lα and Lβ , respectively. Let γ be a directed walk from w to v of length Lγ . 
Then α, γ and β, γ are both closed directed walks containing v; these walks have lengths Lα + Lγ and Lβ + Lγ , respectively. So, k divides both of the
numbers $L_\alpha + L_\gamma$ and $L_\beta + L_\gamma$. Hence, k divides $L_\alpha + L_\gamma - (L_\beta + L_\gamma) = L_\alpha - L_\beta$, and we have $L_\alpha \equiv L_\beta \pmod{k}$.

For part (3), let v be a vertex of V. For each i = 1, ..., k, let $V_i$ be the set of all vertices $x_i$ of D such that there is a directed walk from v to $x_i$ with length congruent to i modulo k. Part (2) tells us $V_i \cap V_j = \emptyset$ if i ≠ j. Note that $v \in V_k$. Since D is strongly connected, every vertex of D belongs to one of the $V_i$'s. Furthermore, none of these sets can be empty, because if there is a directed walk from v to w of length L and z is the penultimate vertex of that walk, there is a directed walk from v to z of length L − 1. Now, suppose $(x_i, x_j)$ is an arc of D, where $x_i \in V_i$ and $x_j \in V_j$. Let α be a directed walk from v to $x_i$ of length L. Then we know $L \equiv i \pmod{k}$. The walk α, followed by the arc $(x_i, x_j)$, gives a walk of length L + 1 from the vertex v to $x_j$. Since $L + 1 \equiv i + 1 \pmod{k}$, we must have $x_j \in V_{i+1}$ and $j \equiv i + 1 \pmod{k}$.

Finally, for part (4), suppose β is a directed walk from $x_i$ to $x_j$. Let α be a directed walk from v to $x_i$. Then $L_\alpha \equiv i \pmod{k}$. The walk α, β has length $L_\alpha + L_\beta$ and is a directed walk from v to $x_j$, so $L_\alpha + L_\beta \equiv j \pmod{k}$. Hence $L_\beta \equiv j - L_\alpha \equiv j - i \pmod{k}$. □

The sets $V_1, \dots, V_k$ of Theorem 16.11 are called the sets of imprimitivity of the digraph D. Although we constructed them by choosing a speciﬁc vertex v, any choice of v will yield the same sets. All directed edges with an initial vertex in $V_i$ have a terminal vertex in $V_{i+1}$. Any closed directed walk cycles through these sets $V_i$ in the order of the subscripts. For example, if D is the graph shown on the right of Figure 16.6 and $v = v_1$, we get the following sets of imprimitivity:
$$V_1 = \{v_2, v_5\},\quad V_2 = \{v_3, v_6, v_7\},\quad V_3 = \{v_1, v_4, v_8\}.$$
Suppose the strongly connected directed graph D is D(A) for some n × n matrix A. Let k be the index of imprimitivity of D, let $V_1, \dots, V_k$ be the sets of imprimitivity, and let $n_i$ be the number of vertices in $V_i$. Relabel the vertices of D so that the ﬁrst $n_1$ vertices belong to $V_1$, the next $n_2$ vertices are those of $V_2$, and so on. This corresponds to a permutation similarity P on A, to get a matrix $PAP^T$ in which the ﬁrst $n_1$ rows and columns correspond to vertices in $V_1$, the next $n_2$ rows and columns correspond to vertices of $V_2$, and so on. Partition $PAP^T$ conformally to get $k^2$ blocks, where the entries in block i, j correspond to arcs from vertices in $V_i$ to vertices in $V_j$. Then block i, j is all zeroes, except when we have $j \equiv i + 1 \pmod{k}$. The matrix $PAP^T$ has the form
$$(16.1)\qquad \begin{pmatrix} 0 & A_{12} & 0 & 0 & \cdots & 0 \\ 0 & 0 & A_{23} & 0 & \cdots & 0 \\ 0 & 0 & 0 & A_{34} & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & A_{k-1,k} \\ A_{k1} & 0 & 0 & 0 & \cdots & 0 \end{pmatrix}.$$
We shall say a matrix in the form shown in (16.1) is in block cyclic form or, when we want to specify the number of blocks, block k-cyclic form. In Chapter 17 we shall be interested in the eigenvalues of nonnegative block cycle matrices, as discussed in Chapter 11.
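The construction in the proof of Theorem 16.11(3) is also how one computes the sets $V_i$ in practice: fix a start vertex, record a walk length to each vertex mod k (part (2) of the theorem guarantees the class is well defined), and read off the partition. A sketch, assuming D(A) is strongly connected with index k (plain Python; the breadth-first traversal and names are our own):

```python
from collections import deque

def imprimitivity_sets(A, k):
    """Partition the vertices of a strongly connected D(A) into the sets
    V_1, ..., V_k of Theorem 16.11(3).  All walks from vertex 0 to a fixed
    vertex agree mod k, so one breadth-first walk length per vertex
    suffices; vertex 0 itself lands in class 0 (the text's V_k)."""
    n = len(A)
    cls = [None] * n
    cls[0] = 0
    queue = deque([0])
    while queue:
        v = queue.popleft()
        for w in range(n):
            if A[v][w] and cls[w] is None:
                cls[w] = (cls[v] + 1) % k
                queue.append(w)
    return [[v for v in range(n) if cls[v] == i] for i in range(k)]
```

Relabeling the vertices class by class is exactly the permutation similarity that brings A to the block cyclic form (16.1).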
16.4. Primitive Graphs

We say a nonnegative matrix A is primitive if its associated digraph is primitive. There is also a commonly used, diﬀerent deﬁnition of the term “primitive matrix”, given at the end of Chapter 17. We shall see there that the two deﬁnitions are equivalent. We now present some results needed in Chapter 17. We start with a number-theoretic fact, which has been attributed to Schur [BR91, p. 72]. The proof below comes from [KemSnell60, p. 7].

Theorem 16.12 (Schur). Let S be a nonempty set of positive integers which is closed under addition. Let d be the greatest common divisor of the integers in S. Then there exists a positive integer N such that for every integer t ≥ N, the set S contains td.

Remark 16.13. When we apply this theorem, the set S will be the set of lengths of closed directed walks in a directed graph D.

Proof. We can divide each integer in S by d and thus assume, without loss of generality, that d = 1. Now, there exists a ﬁnite set of integers $\{s_1, \dots, s_m\}$ in S such that $\gcd(s_1, \dots, s_m) = 1$. Hence, there exist integers $a_1, \dots, a_m$ such that $\sum_{j=1}^m a_js_j = 1$. Multiplying by k, we easily see that every integer k can be expressed as a linear combination of the $s_j$'s with integer coeﬃcients. However, these coeﬃcients need not be nonnegative.

Let p be the sum of all the terms in $\sum_{j=1}^m a_js_j$ with positive coeﬃcients ($a_j > 0$), and let n be the absolute value of the sum of all the terms with negative coeﬃcients ($a_j < 0$). Since the set S is closed under addition, the positive numbers p and n are both in S, and we have p − n = 1. Now set N = n(n − 1). Suppose t ≥ N. Divide t by n to get
$$(16.2)\qquad t = qn + r,$$
where 0 ≤ r < n. Since t ≥ n(n − 1), we have q ≥ n − 1, so q ≥ r. Substituting 1 = p − n for a factor of r in (16.2) yields
$$(16.3)\qquad t = qn + r(p - n) = (q - r)n + rp.$$
Since q − r ≥ 0 and r ≥ 0, and S is closed under addition, equation (16.3) shows that t is in S. □

Deﬁnition 16.14. Let S and d satisfy the hypotheses of Theorem 16.12. The smallest positive integer φ such that nd ∈ S whenever n ≥ φ is called the Frobenius–Schur index of S and denoted φ(S). If S is the set of all nonnegative linear combinations of the positive integers $r_1, \dots, r_m$, then we also write $\varphi(r_1, \dots, r_m)$ for φ(S).

We now have the following consequence of the Schur theorem.
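For a concrete feel for Deﬁnition 16.14, φ(S) can be found by brute force when S is generated by a few integers. The sketch below is our own (plain Python); the search bound is generous and rests on the classical fact, not proved here, that for generators of gcd d with maximum M, every suﬃciently large multiple of d below roughly $M^2/d$ is already representable.

```python
from math import gcd
from functools import reduce

def frobenius_schur_index(*gens):
    """Brute-force the Frobenius-Schur index phi(r_1, ..., r_m) of
    Definition 16.14: the least phi such that n*d is a nonnegative integer
    combination of the generators for every n >= phi (d = gcd)."""
    d = reduce(gcd, gens)
    bound = max(gens) ** 2 // d + 2      # generous search limit (see lead-in)
    limit = bound * d + 1
    reachable = [False] * limit
    reachable[0] = True                  # the empty combination
    for s in range(limit):               # forward sieve over sums of generators
        if reachable[s]:
            for g in gens:
                if s + g < limit:
                    reachable[s + g] = True
    phi = bound
    while phi > 1 and reachable[(phi - 1) * d]:
        phi -= 1
    return phi
```

For example, with generators 3 and 5 the non-representable positive integers are 1, 2, 4, 7, so φ(3, 5) = 8.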
Theorem 16.15. Let D be a strongly connected digraph of order n with vertex set V. Let k be the index of imprimitivity of D, and let $V_1, \dots, V_k$ be the sets of imprimitivity of D. Then there exists a positive integer N such that the following holds: for any $x_i \in V_i$ and $x_j \in V_j$, there are directed walks from $x_i$ to $x_j$ of every length (j − i) + tk where t ≥ N.

Proof. Let $a \in V_i$ and let $b \in V_j$. Let $S_b$ be the set of the lengths of all closed directed walks containing vertex b. Then $S_b$ is a nonempty set of positive integers which is closed under addition. By Theorem 16.11, the number k is the greatest common divisor of the integers in $S_b$. By Theorem 16.12, there is a positive integer $N_b$ such that $tk \in S_b$ for every integer $t \ge N_b$. Now, we know every directed walk from vertex a to vertex b has length (j − i) + tk for some integer t. Let $t_{ab}$ be an integer such that there is a directed walk from a to b of length $(j - i) + t_{ab}k$. Then, by following this walk with a closed directed walk of length tk through b, we see there is a directed walk from a to b of length $(j - i) + t_{ab}k + tk$ for every integer $t \ge N_b$. Let $N = \max\{t_{ab} + N_b : a, b \in V\}$. Then, for any $a \in V_i$ and $b \in V_j$, there is a directed walk from a to b of length (j − i) + tk for every t ≥ N. □

Theorem 16.16. Let A be an n × n nonnegative matrix. Then A is primitive if and only if $A^m$ is positive for some positive integer m. If A is primitive, there is a positive integer N such that $A^t > 0$ for every positive integer t ≥ N.

Proof. Let D(A) be the digraph of A. Suppose A is primitive. Then the index of imprimitivity of D(A) is 1, and Theorem 16.15 tells us that there is a positive integer N such that, for any pair of (not necessarily distinct) vertices $x_i$, $x_j$ of D(A), there is a directed walk of length t from $x_i$ to $x_j$ for any integer t ≥ N. But this tells us that the i, j entry of $A^t$ is positive for all i, j = 1, ..., n, and hence $A^t > 0$ for all t ≥ N. Conversely, suppose $A^m > 0$ for some positive integer m.
Let k be the index of imprimitivity of D(A). If k > 1, we can apply a permutation similarity P so that $PAP^T$ has the block cyclic form shown in equation (16.1). No power of this matrix can be positive, so no power of A can be positive, and we have a contradiction. Hence, k = 1 and A is primitive. □

Let A be an n × n irreducible matrix with nonnegative entries, and let D = D(A) be the digraph of A. Since A is irreducible, the graph D is strongly connected. Let k be the index of imprimitivity of D, and let $V_1, \dots, V_k$ be the sets of imprimitivity for D. Applying a suitable row and column permutation, we may assume A is in the block cyclic form shown in equation (16.1). From Chapter 11, we know that $A^k$ is the direct sum
$$(16.4)\qquad (A_1A_2A_3\cdots A_k) \oplus (A_2A_3\cdots A_kA_1) \oplus \cdots \oplus (A_kA_1\cdots A_{k-1}),$$
where $A_i$ denotes the block $A_{i,i+1}$ of (16.1), indices read modulo k. Now recall Deﬁnition 16.2 of the graph $D^{(k)}$. If (v, w) is a directed edge of $D^{(k)}$, then there is a directed walk of length k in D from v to w, and hence v and w must belong to the same set of imprimitivity, say $V_i$. Consider now the subgraph of $D^{(k)}$ with vertex set $V_i$, which we denote as $D_i^{(k)} = D^{(k)}(V_i)$. Since $D^{(k)} = D(A^k)$, we have
$$(16.5)\qquad D_i^{(k)} = D(A_iA_{i+1}A_{i+2}\cdots A_kA_1\cdots A_{i-1}).$$
Let $v, w \in V_i$. Since D is strongly connected, there is a directed walk in D from v to w. The length of this walk must be a multiple of k, say mk, for some positive integer m. This directed walk then gives a directed walk of length m in $D_i^{(k)}$ from v to w. Hence, the graph $D_i^{(k)}$ is strongly connected. Moreover, we claim $D_i^{(k)}$ is primitive. For, Theorem 16.15 tells us there exists a positive integer N such that, for any pair of vertices v, w in $V_i$, there are directed walks from v to w of every length tk, where t ≥ N. Hence, in the graph $D_i^{(k)}$ there are directed walks of length t from v to w, for all t ≥ N. So, $D_i^{(k)}$ is primitive. This proves the following theorem, to be used later in Chapter 17.

Theorem 16.17. Let A be an n × n, nonnegative, irreducible matrix such that k is the index of imprimitivity of D(A). Then there is a permutation matrix P such that $P^TAP$ is in block k-cyclic form:
$$(16.6)\qquad \begin{pmatrix} 0 & A_{12} & 0 & 0 & \cdots & 0 \\ 0 & 0 & A_{23} & 0 & \cdots & 0 \\ 0 & 0 & 0 & A_{34} & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & A_{k-1,k} \\ A_{k1} & 0 & 0 & 0 & \cdots & 0 \end{pmatrix}.$$
Furthermore, each of the products Ai Ai+1 Ai+2 · · · Ak A1 · · · Ai−1 is irreducible and primitive.
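Theorem 16.16 gives a ﬁnite test for primitivity once one knows how far to look. A classical bound of Wielandt (stated here without proof, and not needed for the theorem itself) says that if A is primitive, then already $A^m > 0$ for $m = (n-1)^2 + 1$. A sketch in plain Python:

```python
def is_primitive(A):
    """Power test of Theorem 16.16: a nonnegative A is primitive iff some
    power A^m is entrywise positive.  We stop at Wielandt's exponent
    (n-1)^2 + 1, which is known to suffice for primitive matrices."""
    n = len(A)
    M = [row[:] for row in A]
    for _ in range((n - 1) ** 2 + 1):     # checks A^1, A^2, ...
        if all(M[i][j] > 0 for i in range(n) for j in range(n)):
            return True
        M = [[sum(M[i][t] * A[t][j] for t in range(n)) for j in range(n)]
             for i in range(n)]
    return False
```

The cycle matrix $C_n$ fails the test (its powers are permutation matrices), while an irreducible matrix with a positive diagonal entry passes, consistent with index of imprimitivity 1.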
Exercises

1. Give an example of a digraph of order 5, with a symmetric adjacency matrix, and in which every vertex has indegree 2 and outdegree 2.
2. Give an example of a digraph of order 5, with a nonsymmetric adjacency matrix, and in which every vertex has indegree 2 and outdegree 2. (Hint: Loops are allowed.)
3. Let A be the adjacency matrix of a digraph D. Show there is a directed walk of length m from vertex $v_i$ to vertex $v_j$ if and only if the i, j entry of $A^m$ is positive.
4. Suppose D is a digraph in which every vertex has a positive indegree. Prove that D contains a directed cycle.
5. Prove that a digraph is strongly connected if and only if there is a closed directed walk which contains each vertex at least once.
6. Let D be the digraph on four vertices with directed edges $(v_1, v_2)$, $(v_2, v_3)$, $(v_3, v_4)$, $(v_4, v_1)$. Draw the graph D and write down the adjacency matrix A. Compute $A^2$, $A^3$, and $A^4$, and draw the graphs $D^{(2)}$, $D^{(3)}$, $D^{(4)}$.
7. Let A be an n × n matrix with nonnegative entries, and let D = D(A) be the digraph of A. In Section 16.1 we claimed, “The argument used to prove Theorem 15.6 shows that entry i, j of $A^k$ is nonzero if and only if D contains a directed walk of length k from vertex $v_i$ to vertex $v_j$. Hence, $D(A^k)$ is $D^{(k)}$.” Check that this is correct.
8. A tournament of order n is a digraph which can be obtained by assigning a direction to each of the edges of the complete graph $K_n$. If A is the adjacency matrix of a tournament, show that $A + A^T = J - I$. Note: A tournament can be used to model the outcomes of a “round robin” competition with n players, in which each player plays every other player, and the edge (i, j) means that player i beats player j.
9. Prove that a tournament of order n contains a path of length n − 1. Conclude that the term rank of a tournament matrix of order n is either n − 1 or n. How is this related to Exercise 8 of Chapter 12?
10. Determine the smallest number of nonzero elements of an irreducible matrix of order n.
11. Show by example that the product of two irreducible matrices may be reducible, even if the matrices have nonnegative elements.
12. Let A be an irreducible matrix of order n with nonnegative entries. Assume each diagonal entry of A is positive. Let x be a nonzero column vector with nonnegative entries.
(a) Prove that if x contains at least one 0, then Ax has fewer 0's than x.
(b) Show that all of the entries of $A^{n-1}x$ are positive.
(c) Show that all entries of $A^{n-1}$ are positive.
Chapter 17
Nonnegative Matrices
The Perron–Frobenius theorem gives striking information about the eigenvalues of real matrices with nonnegative entries. Perron’s theorem [Per07] is for matrices of positive entries. Frobenius [Frob12] generalized Perron’s result to nonnegative matrices and obtained further information about the eﬀect of the zerononzero pattern of the matrix on the set of eigenvalues. There are several ways to prove these results. The approach here draws heavily on Wielandt, from the notes [Wiel67].
17.1. Introduction

We say a real matrix A is nonnegative, and write A ≥ 0, if $a_{ij} \ge 0$ for all entries of A. We say A is positive, and write A > 0, when $a_{ij} > 0$ for all entries of A. Recall that for a square matrix A the spectral radius of A is the nonnegative real number $\rho(A) = \max\{|\lambda| : \lambda \in \mathrm{spec}(A)\}$. In general, ρ(A) need not be an eigenvalue of A; for example, consider the matrices
$$\begin{pmatrix} 1 & 0 \\ 0 & -2 \end{pmatrix} \qquad\text{and}\qquad \begin{pmatrix} 1+i & 0 \\ 0 & 1-i \end{pmatrix}.$$
The Perron–Frobenius theorem says that when A is nonnegative, the spectral radius ρ(A) is an eigenvalue of A. Furthermore, much more can be said when A is positive or when A is nonnegative and satisﬁes certain other conditions. Here is Perron's theorem for positive matrices.

Theorem 17.1 (Perron [Per07]). Let A be an n × n positive matrix. Then the following hold.
(1) ρ = ρ(A) > 0 and ρ(A) ∈ spec(A).
(2) There is a positive eigenvector p such that Ap = ρp. Furthermore, any other positive eigenvector is a multiple of p.
(3) The eigenvalue ρ is a simple eigenvalue—i.e., it has algebraic multiplicity 1. Consequently, if Av = ρv, then v is a scalar multiple of p.
(4) If λ is an eigenvalue of A and λ ≠ ρ, then |λ| < ρ.
(5) Let $B = \frac{1}{\rho}A$, so that ρ(B) = 1. Then $\lim_{k\to\infty} B^k = L$ exists and has rank 1. Each column of L is a positive multiple of p.
(6) Let x ≥ 0, x ≠ 0. Then $\lim_{k\to\infty} B^kx = Lx$ is a positive multiple of p.
Frobenius investigated this problem [Frob08, Frob09, Frob12], generalizing the result to nonnegative matrices in 1912. For a nonnegative matrix A it is still true that ρ(A) is an eigenvalue of A, but ρ need not be a simple eigenvalue. However, for a special class of nonnegative matrices called irreducible matrices, ρ(A) is a simple eigenvalue, and one can say something very interesting about the set of eigenvalues of modulus ρ(A)—they form the vertices of a regular polygon. The complete statement and proof of these results appears later in this chapter.
17.2. Preliminaries

For real matrices A and B of the same size, we say A ≥ B if A − B ≥ 0, i.e., $a_{ij} \ge b_{ij}$ for all entries. Similarly, A > B if A − B > 0. The following facts are easily checked.
(1) If $A_i \ge B_i$, for i = 1, ..., m, then $\sum_{i=1}^m A_i \ge \sum_{i=1}^m B_i$.
(2) If A ≥ B and the scalar c is nonnegative, then cA ≥ cB.
(3) If A, B, C are matrices with A ≥ B and C ≥ 0, then AC ≥ BC and CA ≥ CB, whenever the products are deﬁned.
(4) If $A_k \ge B_k$ for all k, and the limits $\lim_{k\to\infty} A_k = A$ and $\lim_{k\to\infty} B_k = B$ exist, then A ≥ B.
If A is a real or complex matrix, |A| denotes the matrix with $|a_{ij}|$ in position (i, j). The following properties hold. The ﬁrst two are obvious, (3) and (4) follow from the triangle inequality for complex numbers z and w, namely |z + w| ≤ |z| + |w|, and (5) follows from (4).
(1) |A| ≥ 0, and |A| = 0 if and only if A = 0.
(2) For any complex number γ, we have |γA| = |γ| |A|.
(3) For matrices A and B of the same size, |A + B| ≤ |A| + |B|.
(4) For matrices A and B for which the product AB is deﬁned, |AB| ≤ |A| |B|.
(5) For a square matrix A and any positive integer m, we have $|A^m| \le |A|^m$.

We will need to know when $\lim_{k\to\infty} A^k$ exists, for a square matrix A.

Theorem 17.2. Let A be an n × n complex matrix.
(1) $\lim_{k\to\infty} A^k = 0$ if and only if ρ(A) < 1.
(2) If ρ(A) > 1, then $\lim_{k\to\infty} A^k$ does not exist.
(3) If ρ(A) = 1, then $\lim_{k\to\infty} A^k$ exists if and only if the only eigenvalue λ satisfying |λ| = 1 is λ = 1, and every Jordan block belonging to the eigenvalue 1 is a 1 × 1 block. In this case, the rank of $L = \lim_{k\to\infty} A^k$ is the multiplicity of the eigenvalue 1.
Proof. Let λ be an eigenvalue of A with |λ| = ρ(A). Let x be an associated eigenvector, so Ax = λx. Then $A^kx = \lambda^kx$. If |λ| > 1, then $|\lambda^k| \to \infty$ as k → ∞, and hence $\lim_{k\to\infty} A^k$ does not exist, proving (2). If |λ| = 1, but λ ≠ 1, then $\lim_{k\to\infty} \lambda^k$ does not exist, so $\lim_{k\to\infty} A^k$ does not exist.

To complete the proof, we use the Jordan canonical form of A. If $A = S^{-1}BS$, then $A^k = S^{-1}B^kS$, so $\lim_{k\to\infty} A^k$ exists if and only if $\lim_{k\to\infty} B^k$ exists. Hence, we may assume A is already in Jordan canonical form. This form is a direct sum of Jordan blocks. Since $(A_1 \oplus A_2 \oplus \cdots \oplus A_r)^k = A_1^k \oplus A_2^k \oplus \cdots \oplus A_r^k$, it will suﬃce to examine $\lim_{k\to\infty} A_i^k$, where $A_i$ is a Jordan block. Consider then an m × m Jordan block $J_m(\lambda) = \lambda I_m + N$, where
$$N = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & \cdots & 0 & 0 \end{pmatrix}.$$
Since $I_m$ and N commute, we may use the binomial theorem to get
$$(17.1)\qquad (\lambda I_m + N)^k = \sum_{i=0}^{k} \binom{k}{i}\lambda^{k-i}N^i = \lambda^kI + k\lambda^{k-1}N + \binom{k}{2}\lambda^{k-2}N^2 + \cdots + N^k.$$
Since $N^t = 0$ for all t ≥ m, the sum (17.1) has at most m nonzero terms (up to the power $N^{m-1}$), regardless of the value of k. The matrix $(J_m(\lambda))^k$ is upper triangular, and on each diagonal line the entries are the same:
$$(17.2)\qquad (\lambda I_m + N)^k = \begin{pmatrix} \lambda^k & k\lambda^{k-1} & \binom{k}{2}\lambda^{k-2} & \cdots & \binom{k}{m-1}\lambda^{k-m+1} \\ 0 & \lambda^k & k\lambda^{k-1} & \cdots & \binom{k}{m-2}\lambda^{k-m+2} \\ 0 & 0 & \lambda^k & \ddots & \vdots \\ \vdots & \vdots & \vdots & \ddots & k\lambda^{k-1} \\ 0 & 0 & 0 & \cdots & \lambda^k \end{pmatrix}.$$
Suppose ρ(A) < 1. Then |λ| < 1 for every eigenvalue λ of A, so $\lim_{k\to\infty} \binom{k}{i}\lambda^{k-i} = 0$ and $\lim_{k\to\infty} A^k = 0$. Conversely, if $\lim_{k\to\infty} A^k = 0$, then $\lim_{k\to\infty} \lambda^k = 0$, so |λ| < 1 for every eigenvalue of A and hence ρ(A) < 1. This proves (1).

Now consider a Jordan block with λ = 1. If m > 1, then
$$(17.3)\qquad (J_m(1))^k = \begin{pmatrix} 1 & k & \binom{k}{2} & \cdots & \binom{k}{m-1} \\ 0 & 1 & k & \cdots & \binom{k}{m-2} \\ 0 & 0 & 1 & \ddots & \vdots \\ \vdots & \vdots & \vdots & \ddots & k \\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix},$$
and $\lim_{k\to\infty} (J_m(1))^k$ does not exist. Hence, if ρ(A) = 1 and $\lim_{k\to\infty} A^k$ exists, then each Jordan block belonging to the eigenvalue one is a 1 × 1 block. Let r be the multiplicity of the eigenvalue 1. Then $A = I_r \oplus B$, where B is a direct sum of Jordan blocks corresponding to eigenvalues of modulus less than one. Hence $\lim_{k\to\infty} B^k = 0$, and $\lim_{k\to\infty} A^k = I_r \oplus 0_{n-r}$, which has rank r. □
k→∞
γA ≤ γB and (γA)k ≤ (γB)k . So, lim (γB)k = 0 forces lim (γA)k = 0, and we k→∞
have a contradiction. Therefore, ρ(A) ≤ ρ(B).
k→∞
We now obtain some bounds for the spectral radius of a matrix. Let A be an n × n complex matrix, and let $R_i = \sum_{j=1}^n |a_{ij}|$. The Geršgorin Circle Theorem tells us every eigenvalue λ of A satisﬁes $|\lambda| \le \max\{R_1, \dots, R_n\}$. Hence, we have an easily computed upper bound for ρ(A), namely, $\rho(A) \le \max\{R_1, \dots, R_n\}$. When A ≥ 0, the $R_i$'s are the row sums of A. If the row sums are all the same, say, $R_1 = R_2 = \cdots = R_n = R$, then Ae = Re, where e is the all-one vector. So, for a nonnegative matrix with equal row sums, R = ρ(A). A similar result holds for column sums. We now use this, together with Theorem 17.3, to show that $\min\{R_1, \dots, R_n\}$ is a lower bound for the spectral radius of a nonnegative matrix.

Theorem 17.4. Let A be an n × n nonnegative matrix. Let $R_i = \sum_{j=1}^n a_{ij}$ and $C_j = \sum_{i=1}^n a_{ij}$ be the row and column sums of A. Then the following inequalities hold:
$$\min\{R_1, \dots, R_n\} \le \rho(A) \le \max\{R_1, \dots, R_n\},$$
$$\min\{C_1, \dots, C_n\} \le \rho(A) \le \max\{C_1, \dots, C_n\}.$$

Proof. Let $m = \min\{R_1, \dots, R_n\}$ and $M = \max\{R_1, \dots, R_n\}$. We can construct nonnegative matrices B and C such that all row sums of B are m, all row sums of C are M, and B ≤ A ≤ C. By Theorem 17.3, ρ(B) ≤ ρ(A) ≤ ρ(C). But ρ(B) = m and ρ(C) = M, so m ≤ ρ(A) ≤ M. Apply this to $A^T$, and use the fact that A and $A^T$ have the same eigenvalues, to obtain the corresponding result for column sums. □

Suppose $(x_1, \dots, x_n)$ is a positive vector in $\mathbb{R}^n$, and let D be the diagonal matrix $\mathrm{diag}(x_1, \dots, x_n)$. If A is an n × n matrix, the i, j entry of $D^{-1}AD$ is $\frac{x_j}{x_i}a_{ij}$. If
A ≥ 0 and $R_i$ is the ith row sum of $D^{-1}AD$, we have $R_i = \frac{1}{x_i}\sum_{j=1}^n a_{ij}x_j$. Hence, for any positive vector x,
$$\min_{i=1,\dots,n} \frac{1}{x_i}\sum_{j=1}^n a_{ij}x_j \;\le\; \rho(A) \;\le\; \max_{i=1,\dots,n} \frac{1}{x_i}\sum_{j=1}^n a_{ij}x_j.$$
Let x vary over all positive vectors in $\mathbb{R}^n$. Then
$$(17.4)\qquad \max_{x>0}\,\min_{i=1,\dots,n} \frac{1}{x_i}\sum_{j=1}^n a_{ij}x_j \;\le\; \rho(A) \;\le\; \min_{x>0}\,\max_{i=1,\dots,n} \frac{1}{x_i}\sum_{j=1}^n a_{ij}x_j.$$
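The bounds of Theorem 17.4 are cheap to evaluate. The sketch below (plain Python; intersecting the row- and column-sum brackets into one interval is our own packaging, valid since ρ(A) lies in both) returns an interval that must contain ρ(A):

```python
def spectral_radius_bounds(A):
    """Theorem 17.4: for nonnegative A, rho(A) lies between the smallest
    and largest row sums, and likewise for column sums; we intersect the
    two brackets."""
    n = len(A)
    rows = [sum(A[i][j] for j in range(n)) for i in range(n)]
    cols = [sum(A[i][j] for i in range(n)) for j in range(n)]
    return max(min(rows), min(cols)), min(max(rows), max(cols))
```

For A = [[1, 2], [3, 4]] the row sums are 3 and 7 and the column sums are 4 and 6, so ρ(A) ∈ [4, 6]; the true value is $(5 + \sqrt{33})/2 \approx 5.37$.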
In fact, equality actually holds on both sides of (17.4); to prove this we use the following theorem of Collatz. Theorem 17.5 (Collatz [Col42]). If A ≥ 0, x > 0, and σ and τ are nonnegative real numbers such that σx ≤ Ax ≤ τ x, then σ ≤ ρ(A) ≤ τ . Proof. Entry i of the vector inequality σx ≤ Ax gives σxi ≤ the positive number xi to get σ ≤
1 xi
n
n
aij xj . Divide by
j=1
aij xj for each i = 1, . . . , n. So,
j=1
n 1 σ ≤ min aij xj ≤ ρ(A). i=1,...,,n xi j=1
Similarly,
1 xi
n
aij xj ≤ τ , giving the inequality
j=1 n 1 aij xj ≤ τ. i=1,...,n xi j=1
ρ(A) ≤ max
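The bounds in Theorems 17.4 and 17.5 are easy to try numerically. The following short Python sketch (the 2 × 2 test matrix is an arbitrary illustration, and the power-iteration routine only estimates ρ(A)) compares the row and column sum bounds with the spectral radius:

```python
def mat_vec(A, x):
    # multiply the matrix A (a list of rows) by the vector x
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def spectral_radius_estimate(A, iters=200):
    # power iteration on a nonnegative matrix: the 1-norm growth factor tends to rho(A)
    x = [1.0] * len(A)
    rho = 0.0
    for _ in range(iters):
        x = mat_vec(A, x)
        rho = sum(x)                  # growth factor, since x was normalized to sum 1
        x = [xi / rho for xi in x]
    return rho

A = [[1.0, 2.0], [3.0, 2.0]]          # nonnegative test matrix, eigenvalues 4 and -1
rows = [sum(r) for r in A]            # R_1, R_2
cols = [sum(c) for c in zip(*A)]      # C_1, C_2
rho = spectral_radius_estimate(A)
print(min(rows), rho, max(rows))      # min row sum <= rho(A) <= max row sum
print(min(cols), rho, max(cols))      # min col sum <= rho(A) <= max col sum
```

Here the column sums happen to be equal, so the column-sum bounds pin down ρ(A) = 4 exactly, as the discussion before Theorem 17.4 predicts.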
Corollary 17.6. If A ≥ 0, and p > 0 satisfies Ap = μp, then μ = ρ(A).

Proof. Since A ≥ 0 and p > 0, we must have μ ≥ 0. Apply Theorem 17.5 to the inequality μp ≤ Ap ≤ μp to conclude that ρ(A) = μ.

Corollary 17.7. If A ≥ 0, and A has a positive eigenvector p, then

max_{x>0} min_{i=1,...,n} (1/x_i) ∑_{j=1}^{n} a_{ij} x_j = min_{x>0} max_{i=1,...,n} (1/x_i) ∑_{j=1}^{n} a_{ij} x_j = ρ(A).

Proof. From Corollary 17.6, we know Ap = ρ(A)p. So ρ(A) = (1/p_i) ∑_{j=1}^{n} a_{ij} p_j for each i = 1, . . . , n. Then, from max_{x>0} min_{i=1,...,n} (1/x_i) ∑_{j=1}^{n} a_{ij} x_j ≤ ρ(A), we see that the maximum must occur when x = p. Similarly, ρ(A) ≤ min_{x>0} max_{i=1,...,n} (1/x_i) ∑_{j=1}^{n} a_{ij} x_j tells us that the minimum occurs when x = p.
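Corollary 17.7 has a concrete reading: for any positive x the quotients (Ax)_i/x_i bracket ρ(A), and they collapse to ρ(A) exactly when x is a Perron eigenvector. A minimal sketch (the symmetric test matrix, with ρ(A) = 3 and Perron vector (1, 1), is chosen purely for illustration):

```python
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def collatz_bounds(A, x):
    # sigma = min_i (Ax)_i / x_i and tau = max_i (Ax)_i / x_i satisfy sigma <= rho(A) <= tau
    Ax = mat_vec(A, x)
    quotients = [axi / xi for axi, xi in zip(Ax, x)]
    return min(quotients), max(quotients)

A = [[2.0, 1.0], [1.0, 2.0]]                      # rho(A) = 3, Perron vector (1, 1)
sigma, tau = collatz_bounds(A, [1.0, 2.0])        # an arbitrary positive vector
print(sigma, tau)                                 # sigma <= 3 <= tau
sigma_p, tau_p = collatz_bounds(A, [1.0, 1.0])    # the Perron eigenvector
print(sigma_p, tau_p)                             # both bounds equal rho(A) = 3
```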
17.3. Proof of Perron's Theorem

We first prove Perron's theorem (Theorem 17.1) for the case ρ(A) = 1. The general case follows easily, because if B > 0 and A = (1/ρ(B))B, then ρ(A) = 1. Note that ρ(B) > 0 because the spectral radius of a matrix is zero if and only if the matrix is nilpotent, and a positive matrix cannot be nilpotent. We prove the various parts of Perron's theorem as a sequence of separate theorems.

Theorem 17.8. Let A be an n × n positive matrix with ρ(A) = 1. Then λ = 1 is an eigenvalue of A, and there is a positive vector p such that Ap = p.

Proof. This proof is from [Wiel67]. Since ρ(A) = 1, there is an eigenvalue λ with |λ| = 1. Let Ax = λx where x ≠ 0. Then |x| = |λ| |x| = |λx| = |Ax| ≤ |A| |x| = A|x|. So, |x| ≤ A|x|. We now show that |x| = A|x|. Let y = A|x| − |x| = (A − I)|x|. Then y ≥ 0. If y = 0, we are done; we need to show that y ≠ 0 leads to a contradiction. So, suppose y ≠ 0. Then, since A > 0, we would have Ay > 0. Let z = A|x|; note that z > 0. For sufficiently small ε > 0, we have Ay > εz. But, Ay = A(A − I)|x| = (A − I)A|x| = (A − I)z. So, (A − I)z > εz and thus Az > (1 + ε)z. The Collatz theorem (Theorem 17.5) then gives ρ(A) ≥ 1 + ε, a contradiction. Therefore, y = 0 and A|x| = |x|. Since A|x| = |x|, the nonnegative vector |x| is an eigenvector with associated eigenvalue 1. Since A > 0, the vector A|x| = |x| must be positive, so p = |x| is a positive eigenvector such that Ap = p.

For the next proof, we need to know when equality holds in the triangle inequality. Let z, w be complex numbers; then |z + w| ≤ |z| + |w|. Write z and w in polar form: z = |z|e^{iα} and w = |w|e^{iβ}. For nonzero z and w, we have |z + w| = |z| + |w| if and only if α = β, in which case z + w = (|z| + |w|)e^{iα}. This fact extends to sums of more than two complex numbers. Thus, for complex numbers z_1, . . . , z_k, we have |∑_{j=1}^{k} z_j| ≤ ∑_{j=1}^{k} |z_j|, and equality holds if and only if there is some α such that z_j = |z_j|e^{iα} for j = 1, . . . , k.

Theorem 17.9. Suppose A is an n × n positive matrix and ρ(A) = 1. If λ is an eigenvalue of A such that |λ| = 1, then λ = 1. If Ax = x, then there is a positive vector p such that x = θp for some scalar θ ∈ C.

Proof. The proof of Theorem 17.8 shows that if |λ| = 1 and Ax = λx for x ≠ 0, then A|x| = |x|. Entry i of the equation Ax = λx gives ∑_{j=1}^{n} a_{ij} x_j = λx_i, so |∑_{j=1}^{n} a_{ij} x_j| = |λx_i| = |λ| |x_i| = |x_i|. But, we also have A|x| = |x|, and hence ∑_{j=1}^{n} a_{ij} |x_j| = |x_i|. Hence, |∑_{j=1}^{n} a_{ij} x_j| = ∑_{j=1}^{n} |a_{ij} x_j|. Since a_{ij} > 0 for all i, j, this tells us that there is an α such that x_j = |x_j|e^{iα} for j = 1, . . . , n. Put p = |x|.
Then x = e^{iα}p = θp, where θ = e^{iα}, and p = |x| = A|x| > 0. Finally, Ax = A(θp) = θAp = θp = x, so λ = 1.

Theorem 17.10. Let A be a positive n × n matrix with ρ(A) = 1. Then every Jordan block belonging to the eigenvalue λ = 1 is a 1 × 1 block. Equivalently, the algebraic multiplicity of the eigenvalue λ = 1 equals its geometric multiplicity.

Proof. We know that Ap = p for some positive vector p. Hence, A^k p = p for all positive integers k. Let a_{ij}^{(k)} denote the i, j entry of A^k. Then ∑_{j=1}^{n} a_{ij}^{(k)} p_j = p_i. So a_{ij}^{(k)} p_j ≤ p_i and a_{ij}^{(k)} ≤ p_i/p_j. This tells us the entries of A^k are bounded by a number which is independent of k. Specifically, for M = max{p_i/p_j | i, j = 1, . . . , n}, we have a_{ij}^{(k)} ≤ M for all i, j and all positive integers k. Let J = S^{-1}AS be the Jordan form of A. Then J^k = S^{-1}A^kS, so the entries of J^k must also be bounded. This would not be the case if the Jordan form of A had a block belonging to λ = 1 of size 2 or larger. Therefore, all Jordan blocks belonging to the eigenvalue λ = 1 are size 1 × 1.
Theorem 17.11. Let A be a positive n × n matrix with ρ(A) = 1. Then λ = 1 is a simple root of the characteristic polynomial p_A(x) = det(xI − A), so the eigenvalue λ = 1 has geometric and algebraic multiplicity 1.

Proof. Theorem 17.10 tells us that the geometric and algebraic multiplicity of the eigenvalue λ = 1 are the same, so the multiplicity of λ = 1 is the dimension of the eigenspace V_1 = {x | Ax = x}. Suppose v, w are linearly independent vectors in V_1. Choose scalars α, β ∈ C, not both zero, such that z = αv + βw has a zero entry. Note that z ≠ 0 because v and w are linearly independent. Then Az = z. By Theorem 17.9, we must have z = θp for some positive vector p, so z can have no zero entries. This is a contradiction, so v and w cannot be linearly independent, and V_1 must be one dimensional.

Theorem 17.12. Let A be an n × n positive matrix with ρ(A) = 1. Then lim_{k→∞} A^k exists and the matrix L = lim_{k→∞} A^k has rank 1.
Proof. Let B be the Jordan form of A. From Theorem 17.11, we know B = 1 ⊕ C, where C is a direct sum of Jordan blocks with eigenvalues of modulus less than 1. So, lim_{k→∞} C^k = 0. Hence, lim_{k→∞} B^k = 1 ⊕ 0_{n−1}. Since A = SBS^{-1}, for some nonsingular matrix S, we have lim_{k→∞} A^k = S(lim_{k→∞} B^k)S^{-1} = S(1 ⊕ 0_{n−1})S^{-1}. Put L = S(1 ⊕ 0_{n−1})S^{-1}; then L has rank 1.

Now we examine the matrix lim_{k→∞} A^k in more detail.
Theorem 17.13. Let A be an n × n positive matrix with ρ(A) = 1, and let L = lim_{k→∞} A^k. Then there are positive vectors p and q such that Ap = p, q^T A = q^T, and L = pq^T. The matrix L is positive.
Proof. As in the proof of Theorem 17.12, let B = 1 ⊕ C = S^{-1}AS be the Jordan form of A. The first column of S is an eigenvector belonging to the eigenvalue λ = 1; without loss of generality, we may assume the first column of S is a positive vector p such that Ap = p. From the proof of Theorem 17.12, we have L = S(1 ⊕ 0_{n−1})S^{-1}, and S(1 ⊕ 0_{n−1}) is the n × n matrix whose first column is p and whose remaining columns are zero. Let q^T = (q_1, . . . , q_n) be the first row of S^{-1}. Then L = pq^T. Furthermore, since A > 0, we know L ≥ 0, and hence q ≥ 0. Also, L ≠ 0, so we know q ≠ 0. Now, L = LA, so L = pq^T = p(q^T A). The ith row of the matrix equation pq^T = p(q^T A) gives p_i q^T = p_i (q^T A). Since p_i ≠ 0, we have q^T = q^T A. Transposing gives A^T q = q, and hence q is an eigenvector of A^T corresponding to the eigenvalue 1. But A^T is a positive matrix with spectral radius 1, so the nonnegative vector q must actually be a positive vector.

In Theorem 17.13, the vector p is a right eigenvector of A corresponding to λ = 1 and the vector q^T is a left eigenvector belonging to λ = 1. The matrix L = pq^T is then easily computed from these left and right eigenvectors. The next result is of interest in the case where the matrix A represents a Markov chain; we will look at these in Chapter 19.

Corollary 17.14. Let A be an n × n positive matrix with ρ(A) = 1, and let b ≥ 0 be nonzero. Let p be a positive eigenvector of A with Ap = p. Then lim_{k→∞} A^k b is a positive multiple of p.

Proof. Put L = lim_{k→∞} A^k and L = pq^T as in Theorem 17.13. Then lim_{k→∞} A^k b = Lb = p(q^T b) = (q^T b)p. The scalar q^T b must be positive because q > 0 and b ≥ 0 is nonzero.
Note that a positive matrix with all row sums equal to 1 has spectral radius 1. Such matrices arise in the study of Markov processes. And finally, one more fact.

Theorem 17.15. Let A be an n × n positive matrix with ρ(A) = 1, and let p be a positive eigenvector of A with Ap = p. Suppose b is a nonnegative eigenvector of A associated with the eigenvalue λ. Then λ = 1 and b is a positive multiple of p (and hence b is positive).

Proof. Since A > 0 and the nonzero vector b is nonnegative, we have Ab > 0. But Ab = λb, so b has no zero entries, showing b > 0 and λ > 0. From Theorem 17.13 we know L = lim_{k→∞} A^k = pq^T, where q > 0. So

Lb = lim_{k→∞} A^k b = lim_{k→∞} λ^k b = pq^T b = (q^T b)p.
Now q^T b is positive. If we had λ < 1, then lim_{k→∞} λ^k b = 0, a contradiction. So we must have λ = 1, and thus b = (q^T b)p is a positive multiple of p.
We collect all of these results together and restate Perron's theorem for general positive matrices.

Theorem 17.16. Let A be an n × n positive matrix, and let ρ be the spectral radius of A; note that ρ > 0. Then the following hold.

(1) The positive number ρ is an eigenvalue of A.
(2) The eigenvalue ρ is a simple root of the characteristic polynomial p_A(x).
(3) There exists a positive vector p such that Ap = ρp. If x is any other eigenvector associated with ρ, then x is a scalar multiple of p. The vector p is called a Perron eigenvector.
(4) If λ is an eigenvalue of A and λ ≠ ρ, then |λ| < ρ.
(5) If x is a nonnegative eigenvector of A, then x must be positive and Ax = ρx. In other words, the only nonnegative eigenvectors of A are multiples of the Perron eigenvector p.
(6) lim_{k→∞} ((1/ρ)A)^k = L exists. Furthermore, L > 0 and L = pq^T where p is a Perron eigenvector of A and q is a Perron eigenvector of A^T (also called a left Perron eigenvector of A).
(7) Let b ≥ 0 and b ≠ 0. Choose the scalar γ_k such that the first entry of γ_k A^k b is 1, and set b_k = γ_k A^k b. Then lim_{k→∞} b_k exists and is equal to θp, where θ = 1/p_1.

Proof. Parts (1) through (6) follow from the corresponding results for positive matrices of spectral radius 1, because (1/ρ)A is a positive matrix of spectral radius 1, and multiplying a matrix by a scalar has the effect of multiplying all of its eigenvalues by that same scalar, but does not change the eigenvectors or the sizes of the blocks in the Jordan form. Part (7) is essentially Corollary 17.14 adapted to the general case. Since b is nonzero, b ≥ 0, and (1/ρ)A has spectral radius 1, we know that

lim_{k→∞} ((1/ρ)A)^k b = αp

for some positive number α. Substitute A^k b = (1/γ_k) b_k to get lim_{k→∞} (1/(γ_k ρ^k)) b_k = αp. Since the first coordinate of the vector b_k is 1, we have lim_{k→∞} 1/(γ_k ρ^k) = αp_1. Then lim_{k→∞} b_k = (1/(αp_1)) lim_{k→∞} (1/(γ_k ρ^k)) b_k = (1/(αp_1)) αp = (1/p_1)p = θp, and so lim_{k→∞} b_k must exist and have first component 1.
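Part (7) is the power method in disguise: repeatedly applying A and rescaling so that the first entry is 1 drives any nonnegative, nonzero starting vector to a Perron eigenvector. A minimal sketch (the test matrix, with ρ = 3 and Perron vector (1, 1), is an arbitrary example):

```python
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[2.0, 1.0], [1.0, 2.0]]      # rho = 3, Perron eigenvector p = (1, 1)
b = [1.0, 0.0]                    # nonnegative, nonzero starting vector
for _ in range(60):
    b = mat_vec(A, b)
    b = [bi / b[0] for bi in b]   # choose gamma_k so the first entry is 1
print(b)  # converges to theta * p with first component 1, here (1, 1)
```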
17.4. Nonnegative Matrices

The question now is, which parts of Perron's theorem still hold for nonnegative matrices? We start with two simple examples.

Example 17.17. For the identity matrix A = I_n, the spectral radius is 1, which is an eigenvalue, but not a simple eigenvalue. Any nonzero vector is an eigenvector of I_n.

Example 17.18. Let
A = ( 0  1  0  · · ·  0  0 )
    ( 0  0  1  · · ·  0  0 )
    ( ⋮  ⋮  ⋮         ⋮  ⋮ )
    ( 0  0  0  · · ·  0  1 )
    ( 1  0  0  · · ·  0  0 )
be the permutation matrix for the n-cycle. Then A^n = I. The characteristic polynomial of A is p_A(x) = x^n − 1. The eigenvalues are the nth roots of unity, i.e., the n numbers ω_n^k, for k = 0, . . . , n − 1, where

ω_n = e^{2πi/n} = cos(2π/n) + i sin(2π/n).

The spectral radius 1 is an eigenvalue of A, and it is a simple root of the characteristic polynomial. Moreover, the positive vector e = (1, 1, . . . , 1)^T is an eigenvector with Ae = e. However, the other eigenvalues of A also have modulus 1, showing that part (4) of Theorem 17.16 need not hold for a nonnegative matrix.

The next theorem tells us which parts of Perron's theorem still hold for all nonnegative matrices. In the next section we shall see that more can be said for special types of nonnegative matrices.

Theorem 17.19 (Frobenius [Frob12]). Let A be an n × n nonnegative matrix with spectral radius ρ. Then the following hold.

(1) The number ρ is an eigenvalue of A.
(2) There exists a nonzero vector p ≥ 0 such that Ap = ρp.
(3) There exists a nonzero vector y ≥ 0 such that y^T A = ρy^T.

Proof. Let A_1, A_2, A_3, . . . be a sequence of positive matrices such that A_k > A_{k+1} for all k, and lim_{k→∞} A_k = A. For example, we could put A_k = A + (1/k)J, where J is the all-one matrix. Let ρ_k be the spectral radius of A_k. From Theorem 17.3, we have ρ_k ≥ ρ_{k+1} and ρ_k ≥ ρ for all k. So, the sequence {ρ_k}_{k=1}^∞ is a nonincreasing sequence which is bounded below by ρ. Hence lim_{k→∞} ρ_k = μ exists and μ ≥ ρ. The next step is to show that μ is an eigenvalue of A. Since A_k > 0, for each k there is a positive vector p_k such that A_k p_k = ρ_k p_k. By using an appropriate scalar multiple, we may choose p_k so that the entries sum to 1, i.e., so that e^T p_k = 1, where e^T = (1, 1, . . . , 1). The sequence of vectors p_1, p_2, p_3, . . . then lies in the compact set {x ∈ R^n | x ≥ 0 and e^T x = 1}.
Hence, the sequence {p_k}_{k=1}^∞ has a convergent subsequence, p_{ν_1}, p_{ν_2}, p_{ν_3}, . . . . Let p = lim_{i→∞} p_{ν_i}. We have p ≥ 0 and e^T p = 1, so p ≠ 0. Then,

Ap = lim_{i→∞} A_{ν_i} p_{ν_i} = lim_{i→∞} ρ_{ν_i} p_{ν_i} = μp.

So μ is an eigenvalue of A. Hence, μ ≤ ρ. But we already had μ ≥ ρ, so μ = ρ. Therefore, ρ is an eigenvalue of A and the nonnegative vector p is an associated eigenvector. Part (3) follows by noting that A^T ≥ 0 and ρ(A) = ρ(A^T).
17.5. Irreducible Matrices

In the last section, we saw that only some parts of Perron's theorem for positive matrices hold for general nonnegative matrices. We now see that parts (2) and (3) of Theorem 17.16 hold for irreducible matrices. Recall the following Definition 16.3 from Chapter 16.

Definition 17.20. An n × n matrix A is said to be reducible if there exists a permutation matrix P such that

P^T AP = ( A_{11}  A_{12} )
         (   0     A_{22} ),

where A_{11} is k × k, A_{22} is (n − k) × (n − k), and 1 ≤ k ≤ n − 1. If A is not reducible, we say A is irreducible.

Note that reducibility is determined by the location of the zero and nonzero entries of the matrix. In Chapter 16, we saw that A is irreducible if and only if the directed graph D(A) is strongly connected (Theorem 16.4). We then used graphs to prove the following result (Theorem 16.5).

Theorem 17.21. Let A be an n × n nonnegative matrix. Then A is irreducible if and only if (I + A)^{n−1} > 0.

Remark 17.22. For a matrix with negative entries, note that A is irreducible if and only if |A| is irreducible. So this theorem is equivalent to the statement that A is irreducible if and only if (I + |A|)^{n−1} > 0. Theorem 17.21 can also be proven with matrix methods, rather than by using the directed graph D(A). See Exercise 9 for this matrix approach.

When a nonnegative matrix is irreducible, the Perron root is a simple root and the Perron eigenvector is positive.

Theorem 17.23 (Frobenius [Frob12]). If A is an n × n, nonnegative, irreducible matrix, then the following hold.

(1) The spectral radius ρ = ρ(A) of A is positive and is an eigenvalue of A.
(2) The eigenvalue ρ is a simple root of the characteristic polynomial p_A(x) of A.
(3) There is a positive vector p such that Ap = ρp. Furthermore, Ax = ρx if and only if x is a scalar multiple of p.
(4) There is a positive vector q such that q^T A = ρq^T, and y^T A = ρy^T if and only if y is a scalar multiple of q.
Proof. Since A ≥ 0, we know there exists a nonzero, nonnegative vector p such that Ap = ρp. Let B = (I + A)^{n−1}. Since A is irreducible, B > 0. Also, Bp = (I + A)^{n−1}p = (1 + ρ)^{n−1}p. Part (5) of Theorem 17.16 then tells us that p > 0. But then Ap ≠ 0, and so ρ cannot be zero. So ρ > 0. Now, let λ_1 = ρ, λ_2, . . . , λ_n be the eigenvalues of A. The eigenvalues of B are (1 + ρ)^{n−1}, (1 + λ_2)^{n−1}, . . . , (1 + λ_n)^{n−1}, with (1 + ρ)^{n−1} = ρ(B). If ρ were a multiple root of p_A(x), then (1 + ρ)^{n−1} would be a multiple root of p_B(x), which is not possible by Perron's Theorem 17.16 for positive matrices. Hence, ρ is a simple root of p_A(x). So, we have established the first three parts. Part (4) follows from the fact that A^T is irreducible if and only if A is irreducible. (This may not be obvious directly from the definition, but it follows easily from Theorem 17.21.)

Theorem 17.23 says nothing about lim_{k→∞} A^k. Observe that the n-cycle matrix
in Example 17.18 is an irreducible, nonnegative matrix with spectral radius 1 for which this limit does not exist.
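Theorem 17.21 turns irreducibility into a finite computation: form (I + A)^{n−1} and check that every entry is positive. A plain-Python sketch (the two test matrices are illustrative choices):

```python
def mat_mul(A, B):
    # product of two square matrices stored as lists of rows
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def is_irreducible(A):
    # A nonnegative n x n matrix is irreducible iff every entry of (I + A)^(n-1) is positive
    n = len(A)
    M = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]  # I + A
    P = [[1 if i == j else 0 for j in range(n)] for i in range(n)]              # identity
    for _ in range(n - 1):
        P = mat_mul(P, M)
    return all(P[i][j] > 0 for i in range(n) for j in range(n))

cycle = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]        # 3-cycle permutation matrix: irreducible
triangular = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]   # block upper triangular: reducible
print(is_irreducible(cycle), is_irreducible(triangular))  # → True False
```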
17.6. Primitive and Imprimitive Matrices

We now obtain more information about the eigenvalues of irreducible, nonnegative matrices. The structure of the set of eigenvalues depends on the zero-nonzero structure of the matrix, i.e., on the directed graph associated with the matrix. If A is a nonnegative, irreducible matrix, then the associated directed graph D(A) is strongly connected. In Chapter 16, we defined the terms primitive and index of imprimitivity for strongly connected directed graphs (Definition 16.10). We extend these definitions to nonnegative, irreducible matrices by using the associated digraph. There is also another way to define the index of imprimitivity of a matrix; we discuss this alternative definition later and will see it is equivalent to the graph definition.

Definition 17.24. Let A be an n × n, irreducible, nonnegative matrix. We say k is the index of imprimitivity of A if k is the index of imprimitivity of the associated directed graph D(A). If k = 1, we say A is primitive.

Let λ_1, . . . , λ_n be the eigenvalues of an n × n matrix A where repeated eigenvalues are listed according to their multiplicities. For any positive integer k, the eigenvalues of A^k are λ_1^k, λ_2^k, . . . , λ_n^k and ρ(A^k) = [ρ(A)]^k.

Theorem 17.25. Let A be a nonnegative, irreducible, primitive matrix. Then the eigenvalue ρ = ρ(A) is a simple eigenvalue, and is the only eigenvalue of modulus ρ.

Proof. From Theorem 17.23, we know ρ is a simple eigenvalue. Since A is primitive, Theorem 16.16 tells us there is a positive integer M such that A^M > 0. By Perron's theorem for positive matrices, ρ^M is a simple eigenvalue of A^M, and every other eigenvalue of A^M has modulus less than ρ^M. Hence, if λ is an eigenvalue of A and λ ≠ ρ, we must have |λ^M| < ρ^M and so |λ| < ρ.

We now reap the benefits of the work done in Chapter 16 on the structure of imprimitive graphs, and in Section 11.3 on the eigenvalues of block cycle matrices.
Theorem 17.26. Let A be an irreducible, nonnegative matrix with index of imprimitivity k. Then the following hold.

(1) The spectrum of A is invariant under rotation by angle 2π/k. Thus, if ω_k = e^{2πi/k}, we have ω_k spec(A) = spec(A).

(2) The matrix A has exactly k eigenvalues of modulus ρ, and they are the kth roots of ρ^k, namely, the numbers ρ, ρω_k, ρω_k^2, . . . , ρω_k^{k−1}, which form the vertices of a regular k-gon. Each of these eigenvalues is a simple eigenvalue of A.

(3) There is a permutation matrix P such that P^T AP is in block k-cyclic form:

(17.5)   P^T AP = (  0    A_1   0    · · ·   0      )
                  (  0    0     A_2  · · ·   0      )
                  (  ⋮                 ⋱     ⋮      )
                  (  0    0     0    · · ·   A_{k−1} )
                  (  A_k  0     0    · · ·   0      ),

where the product A_1 A_2 · · · A_k is irreducible and primitive.

Proof. Part (3) was established in Theorem 16.17. Without loss of generality, we can assume A is in this block k-cyclic form. Recall that A^k is then the direct sum of k diagonal blocks, block i being the product A_i A_{i+1} A_{i+2} · · · A_k A_1 A_2 · · · A_{i−1}. Let λ_1, . . . , λ_t be the nonzero eigenvalues of the matrix A_1 A_2 · · · A_k, with repeated eigenvalues listed according to their multiplicities. Then λ_1, . . . , λ_t is also a list of the nonzero eigenvalues of each diagonal block, A_i A_{i+1} · · · A_k A_1 · · · A_{i−1}, of A^k. In Section 11.3 we saw that the nonzero eigenvalues of A are then the kt numbers

λ_i^{1/k} ω_k^j,   for i = 1, . . . , t and j = 0, 1, . . . , k − 1,

where λ_i^{1/k} is some fixed kth root of λ_i. This list may contain repetitions if the product A_1 A_2 · · · A_k has repeated eigenvalues. Now, put λ_1 = ρ^k and λ_1^{1/k} = ρ. From Theorem 16.17, we know A_1 A_2 · · · A_k is irreducible and primitive. Therefore, ρ^k is a simple eigenvalue of A_1 A_2 · · · A_k, and |λ_i| < ρ^k for all i > 1. Hence, the eigenvalues of A of modulus ρ are exactly the numbers ρ, ρω_k, ρω_k^2, . . . , ρω_k^{k−1}, and each of these is a simple eigenvalue of A.
This theorem tells us that for an irreducible, nonnegative matrix A, the number of distinct eigenvalues of modulus ρ must be equal to the index of imprimitivity of D(A). Hence, we have the following theorem of Frobenius.
Theorem 17.27 (Frobenius [Frob12]). Let A be a nonnegative, irreducible matrix, and let A have exactly k distinct eigenvalues λ_i such that |λ_i| = ρ. Then each λ_i is a simple eigenvalue, the λ_i's are the numbers ρ, ρω_k, ρω_k^2, . . . , ρω_k^{k−1}, and conditions (1) and (3) of Theorem 17.26 hold.

This can be proven without using graph methods; for example, see [Var62]. To establish Theorem 17.27 without using the graph D(A), one defines the index of imprimitivity of a nonnegative, irreducible matrix A to be the number of distinct eigenvalues of modulus ρ. Theorem 17.26 shows this is equivalent to Definition 17.24.
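The smallest instance of Theorem 17.26 can be checked by hand. In the sketch below (with k = 2 and 1 × 1 blocks chosen purely for illustration), A is in block 2-cyclic form, A² is block diagonal with blocks A_1A_2 and A_2A_1, and the eigenvalues of A are ±√6, the two square roots of ρ² as part (2) predicts:

```python
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# Block 2-cyclic matrix with 1x1 blocks A1 = (2) and A2 = (3):
#   A = ( 0   A1 )
#       ( A2  0  )
A = [[0, 2], [3, 0]]
A2 = mat_mul(A, A)
print(A2)  # → [[6, 0], [0, 6]]: block diagonal with blocks A1*A2 and A2*A1
# The characteristic polynomial of A is x^2 - 6, so spec(A) = {sqrt(6), -sqrt(6)},
# which is invariant under rotation by pi (i.e., multiplication by omega_2 = -1).
```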
Exercises

1. Show that if A, B, C are matrices with A ≥ B and C ≥ 0, then AC ≥ BC and CA ≥ CB, whenever the products are defined.

2. For matrices A and B for which the product AB is defined, show |AB| ≤ |A| |B|.

3. Let J denote the n × n matrix with a 1 in each entry. Find ρ(J) and find a positive eigenvector p.

4. True or False: If A is an n × n nonnegative matrix, then ρ(A) is positive. If true, give a proof; if false, give a counterexample.

5. Let A be a matrix of rank 1. Show that there exist nonzero column vectors p and q such that A = pq^T. Show that p is then an eigenvector of A. What is the associated eigenvalue? Note that when A > 0 we may choose p and q to be positive—in this case what is the value of ρ(A)? Illustrate all of this with the matrix

A = ( 1   2   3 )
    ( 2   4   6 )
    ( 5  10  15 ),

i.e., find positive vectors p and q such that A = pq^T and determine ρ(A).

6. The proof of Theorem 17.2 used the following fact: when |λ| < 1, we have lim_{k→∞} \binom{k}{i} λ^{k−i} = 0. Prove this.

7. Let A be an n × n nonnegative matrix with row sums R_1, . . . , R_n. Let m = min{R_1, . . . , R_n} and M = max{R_1, . . . , R_n}. The following claim is made in the proof of Theorem 17.4: We can construct nonnegative matrices B and C such that all row sums of B are m, all row sums of C are M, and B ≤ A ≤ C. Give a method for constructing such B and C. Find such B and C for

A = ( 1   1   1 )
    ( 0   3   1 )
    ( 1  .5   0 ).
8. Let A be an n × n matrix, let a_{ij}^{(k)} denote the i, j entry of A^k, and assume there is a positive number M such that |a_{ij}^{(k)}| ≤ M for all i, j = 1, . . . , n and all positive integers k. Let B = S^{-1}AS for some nonsingular matrix S. Suppose all the entries of S are less than or equal to α in absolute value, and all of the entries of S^{-1} are less than or equal to β in absolute value. Show that for i, j = 1, . . . , n and any positive integer k, we have |b_{ij}^{(k)}| ≤ αβMn^2.

9. The point of this problem is to give a matrix proof of the fact that a nonnegative, n × n matrix A is irreducible if and only if all of the entries of (I + A)^{n−1} are positive. This result is often proven by using the directed graph of A, but graphs will not be used here.
(a) Let A be a nonnegative, irreducible matrix of order n. Let x be a nonzero column vector of length n, with nonnegative entries, but having at least one entry equal to 0. Show that the number of 0 entries in x is greater than the number of 0 entries in (I + A)x. Hint: Without loss of generality, you can rearrange the entries of x so that all of the nonzero entries are on top. The corresponding permutation of the rows and columns of A will replace A by P AP^T, where P is a permutation matrix. Now partition A conformally with the zero and nonzero parts of x and examine the product (I + A)x.
(b) Use part 9(a) to show that if A is irreducible, then all of the entries of (I + A)^{n−1} are positive.
(c) Conversely, show that if A is reducible, then any power of (I + A) has some 0 entries.
Chapter 18
Error-Correcting Codes
When information is transmitted over a communications channel, errors may be introduced. The purpose of error-correcting codes is to enable the recipient to detect and correct errors. The theory of error-correcting codes involves linear algebra, finite field theory, block designs, and other areas of combinatorics and graph theory. This chapter is a brief introduction to give some idea of how linear algebra is used to construct binary linear codes. We consider a message to be a string of zeros and ones, and work in a vector space over the binary field Z_2. More generally, one can work with an alphabet of q distinct symbols. If q = p^r is a power of a prime number p, then the set of alphabet symbols can be the elements of the finite field GF(q). We refer the reader to books on error-correcting codes, such as [Berle68, MacSlo77, Hamming80, CVL80, VanLi82], for comprehensive accounts.
18.1. Introduction

Consider a message given in the form of a string of zeros and ones. When the message is sent via a communications channel, errors may occur, either from random noise or other sorts of error. It is possible that you send a zero, but your recipient receives a one, or vice versa. A simple way to protect against error is by using repetition. You could send a block of two zeroes (0 0) for each zero, and a block of two ones (1 1) for each one. Should noise distort one of the bits, so that the receiver gets (1 0) or (0 1), she knows an error has occurred and can ask you to resend the message. Of course, if both bits get changed by errors, the receiver cannot detect this. Thus, this scheme allows the receiver to detect one error but not two. Increasing the number of repetitions will increase the number of detectable errors and also enable some error correction. For example, sending (000) for each zero and (111) for each one enables the receiver to detect and correct a single error—for example if the string (010) arrives, the receiver would assume the middle digit was wrong and decode the message as zero. However, if two digits are changed by error, then the message will be incorrectly decoded. More generally, if you send a block of
k zeroes, (000 · · · 0), for each zero in the message, and a block of k ones, (111 · · · 1), for each one in the message, and the decoding algorithm is "majority rule", then up to (k − 1)/2 errors can be corrected when k is odd. If k is even, then up to (k − 2)/2 errors can be corrected, while k/2 errors can be detected but not corrected. Increasing the block length enables correction of more errors. The cost is transmission rate; you are using k bits for each bit of the message.

Now consider the string of zeros and ones as separated into words with k bits in each; thus, each message word is a block of k digits; each digit is either zero or one. We can view each block as a vector with k coordinates over the binary field Z_2. We write the k-digit block as a vector, x = (x_1, . . . , x_k). The number of possible words of this type is 2^k. We now adjoin an extra bit x_{k+1} defined by

(18.1)   x_{k+1} = ∑_{i=1}^{k} x_i.

Since we are working in the binary field Z_2, the sum is 0 when (x_1, . . . , x_k) has an even number of ones and is 1 when there are an odd number of ones; hence this additional bit x_{k+1} is called a "parity check". We adjoin the digit x_{k+1} to the original block (x_1, . . . , x_k) and transmit the codeword (x_1, . . . , x_k, x_{k+1}). Since equation (18.1) is equivalent to the equation x_1 + · · · + x_k + x_{k+1} = 0, the code words are those vectors in Z_2^{k+1} which satisfy the linear equation

∑_{i=1}^{k+1} x_i = 0.

The set of code words is then a k-dimensional subspace of Z_2^{k+1}; it is, in fact, the null space of the 1 × (k + 1) matrix A = ( 1 1 · · · 1 ). This is an example of a linear code. The receiver checks each block of k + 1 digits by summing the digits. If they sum to zero, it is assumed no errors were made; if they sum to one it is assumed one error was made. This code can detect a single error, but we will not know which coordinate of the block has the error. However, only one extra bit is needed to get this error detection capability. We are going to generalize this procedure to construct codes which can detect more errors, and also correct errors. The idea is to use more linear equations as checks.
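The parity check scheme takes only a few lines to simulate; in this sketch (the function names are ours, not the text's), one flipped bit is detected and a second flip slips through undetected:

```python
def encode(msg):
    # append a parity bit so every codeword has an even number of ones
    return msg + [sum(msg) % 2]

def check(word):
    # a received word passes the parity check iff its digits sum to 0 mod 2
    return sum(word) % 2 == 0

c = encode([1, 0, 1, 1])
print(c, check(c))            # → [1, 0, 1, 1, 1] True
c[2] ^= 1                     # flip one bit: the error is detected ...
print(check(c))               # → False
c[0] ^= 1                     # ... but a second flipped bit goes unnoticed
print(check(c))               # → True
```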
18.2. The Hamming Code

Our story begins in the late 1940s when Richard Hamming created a family of single error-correcting codes (published in [Hamming50]). (These codes also appeared in papers by Shannon (1948) and Golay (1949). See [Berle68, page 8].) Frustrated by the shutdown of long computer programs running on weekends (when no operators were present to restart a program shut down due to a detected error), Hamming wanted a system which could not only detect an error, but correct it. The [7, 4] Hamming code has four message bits and three parity check bits. Denoting the message word as (x_1, x_2, x_3, x_4), we adjoin three additional check bits x_5, x_6, x_7
defined by the equations

(18.2)   x_5 = x_1 + x_2 + x_4
         x_6 = x_1 + x_3 + x_4
         x_7 = x_2 + x_3 + x_4.

Since +1 and −1 are the same in Z_2, we may rewrite the system (18.2) as

x_1 + x_2 + x_4 + x_5 = 0
x_1 + x_3 + x_4 + x_6 = 0
x_2 + x_3 + x_4 + x_7 = 0.
Rewriting this system of three linear equations in seven unknowns in matrix form, we see that the codewords are the solutions to

(18.3)   ( 1 1 0 1 1 0 0 )
         ( 1 0 1 1 0 1 0 ) x = 0.
         ( 0 1 1 1 0 0 1 )

Now put

C = ( 1 1 0 1 )
    ( 1 0 1 1 )
    ( 0 1 1 1 )

and A = ( C  I_3 ).
Equation (18.3) then becomes Ax = 0. The 3 × 7 matrix A has rank 3, so it has a four-dimensional nullspace. The 16 codewords, formed from the 16 possible messages using equations (18.2), are exactly the vectors in this nullspace. The set of codewords forms a four-dimensional subspace of Z_2^7. How does the decoding work? Suppose we send the codeword x, but, due to error, the receiver gets y. Put e = y − x; equivalently, y = x + e. The vector e is the error vector. A 1 in entry j of e means that the jth received entry is wrong. Since x is a codeword, Ax = 0, and Ay = Ax + Ae = Ae. When no errors occur, y = x and Ay = 0. Now, suppose exactly one error occurs, and that it is in entry j. Then Ae will be the jth column of the matrix A. Here is the key point: the seven columns of A are all different; in fact they are the seven nonzero binary vectors with three coordinates. If only one error occurs, computing Ay will tell us exactly where the error was—find the column of A which matches Ay = Ae, and the position of that column is the position of the error. Hence, this code can detect and correct a single error in any position. It uses a 7-bit code word for a 4-bit message. To achieve this error correction by simply repeating each message bit three times would require 12 bits for the 4-bit message.
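The whole encode–corrupt–decode cycle for the [7, 4] Hamming code can be carried out directly from equations (18.2) and (18.3). In the sketch below (helper names are ours), the syndrome Ay is matched against the columns of A to locate and flip the erroneous bit:

```python
# Parity check matrix A = (C | I3) from equation (18.3), over Z2.
A = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def encode(m):
    # m = (x1, x2, x3, x4); the check bits come from equations (18.2)
    x1, x2, x3, x4 = m
    return [x1, x2, x3, x4, (x1 + x2 + x4) % 2, (x1 + x3 + x4) % 2, (x2 + x3 + x4) % 2]

def syndrome(y):
    # Ay over Z2; zero for codewords, column j of A for a single error in entry j
    return [sum(a * yi for a, yi in zip(row, y)) % 2 for row in A]

def correct(y):
    # match the syndrome against the columns of A to locate a single error
    s = syndrome(y)
    if s == [0, 0, 0]:
        return y
    j = [list(col) for col in zip(*A)].index(s)
    y = y[:]
    y[j] ^= 1
    return y

x = encode([1, 0, 1, 1])
y = x[:]
y[5] ^= 1                 # corrupt one bit in transit
print(syndrome(y))        # → [0, 1, 0]: the sixth column of A, flagging entry 6
print(correct(y) == x)    # → True: the single error is corrected
```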
18.3. Linear Codes: Parity Check and Generator Matrices

With this famous example under our belt, we move on to some general definitions. We start with linear equations and matrices and will see that this leads to a more abstract definition of a linear code as a subspace of Z_2^n. For the remainder of this chapter we use the term length of a vector x to mean the number of coordinates of x. So, vectors in Z_2^n have length n.
Consider message words which are binary vectors of length k. We then adjoin n − k parity check bits, x_{k+1}, . . . , x_n, defined by the linear equations

(18.4)   x_{k+i} = ∑_{j=1}^{k} c_{ij} x_j

for 1 ≤ i ≤ n − k. Let C be the (n − k) × k coefficient matrix; that is, c_{ij} is the i, j entry of C. The matrix form of equation (18.4) is then

C (x_1, x_2, . . . , x_k)^T = (x_{k+1}, x_{k+2}, . . . , x_n)^T.

Over Z_2, this becomes

C (x_1, x_2, . . . , x_k)^T + (x_{k+1}, x_{k+2}, . . . , x_n)^T = 0,

and thus

( C  I_{n−k} ) x = 0,

where x = (x_1, . . . , x_n) and the matrix A = ( C  I_{n−k} ) is (n − k) × n. The codewords are in the nullspace of A. Since each of the 2^k messages yields one codeword, there are exactly 2^k codewords. The matrix A has rank n − k, so its nullspace is k dimensional and thus has 2^k vectors. Hence, the set of code words is exactly the nullspace of A and is a k-dimensional subspace of Z_2^n.

We claim that any k-dimensional subspace of Z_2^n may be obtained as the nullspace of an (n − k) × n matrix of the form A = ( C  I_{n−k} ). Suppose U is a k-dimensional subspace of Z_2^n. Let B = {b_1, . . . , b_k} be a basis for U, and let B denote the k × n matrix with b_i in row i. The nullspace of B has dimension n − k. Let Â be an (n − k) × n matrix such that the rows of Â are a basis for the nullspace of B. Then ÂB^T = 0 and the nullspace of Â contains U. Since Â has rank n − k, the nullspace of Â has dimension k, and hence U is the nullspace of Â. Since Â has rank n − k, we can use elementary row operations to reduce Â to the form ( C  I_{n−k} ), where C has size (n − k) × k. The elementary row operations do not change the nullspace of a matrix, so U is the nullspace of A = ( C  I_{n−k} ).

Definition 18.1. A binary [n, k] linear code 𝒞 is a k-dimensional subspace of Z_2^n.

The discussion above shows that the following definition is equivalent to Definition 18.1.

Definition 18.2. A binary [n, k] linear code 𝒞 is the nullspace of an (n − k) × n binary matrix of the form A = ( C  I_{n−k} ), where C is (n − k) × k. The matrix A is called the parity check matrix for the code 𝒞.

We started with the parity check matrix as a natural way to generalize the first examples—i.e., the parity check code and the [7, 4] Hamming code. Another way to describe the subspace 𝒞 is to use a generator matrix, formed as follows. Let
g_1, ..., g_k be a basis for C, and let G be the k × n matrix with g_i in row i. Then C is the row space of G, and we say G is a generator matrix for C. There is an easy way to obtain a generator matrix from the parity check matrix A = (C  I_{n−k}). We have

$$(C \;\; I_{n-k})\begin{pmatrix} I_k \\ C \end{pmatrix} = C + C = 0.$$

Hence the k columns of $\begin{pmatrix} I_k \\ C \end{pmatrix}$ are in C; since these columns are linearly independent, they form a basis for C. So G = (I_k  C^T) is a generator matrix for the code with parity check matrix A = (C  I_{n−k}). We have AG^T = 0. The parity check matrix A has size (n − k) × n and rank n − k; the code C is the nullspace of A. The generator matrix G has size k × n and rank k; the code C is the row space of G.
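As a sketch, the identity AGᵀ = 0 over Z_2 can be checked numerically. The particular matrix C below is an assumption (a common choice for a [7, 4] Hamming code; the text's own C is not reproduced in this passage), but any binary (n − k) × k matrix exhibits the same relationship.

```python
import numpy as np

# Hypothetical choice of C for a [7, 4] code; the text's specific C may differ.
n, k = 7, 4
C = np.array([[0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 1]])                   # (n-k) x k

A = np.hstack([C, np.eye(n - k, dtype=int)])   # parity check matrix (C  I_{n-k})
G = np.hstack([np.eye(k, dtype=int), C.T])     # generator matrix   (I_k  C^T)

# Over Z_2, A G^T = C + C = 0.
assert np.all((A @ G.T) % 2 == 0)

# Encoding a message as message * G (mod 2) produces a vector in the nullspace of A.
msg = np.array([1, 0, 1, 1])
codeword = (msg @ G) % 2
assert np.all((A @ codeword) % 2 == 0)
```

The first k coordinates of the codeword reproduce the message; the last n − k are the parity checks.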
18.4. The Hamming Distance

When we use a parity check matrix of the form A = (C  I_{n−k}), the message (x_1, ..., x_k) is encoded into the codeword x = (x_1, ..., x_n) by setting

$$\begin{pmatrix} x_{k+1} \\ x_{k+2} \\ \vdots \\ x_n \end{pmatrix} = C\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}.$$

We transmit the codeword x and the receiver gets y = x + e, where e represents error. The ith coordinate of e is a one if and only if there is an error in entry i. Thus, to discuss errors and decoding, we want to count the number of coordinates in which x and y differ.

Definition 18.3. Let x and y be vectors in Z_2^n. The Hamming distance, denoted d(x, y), is the number of coordinates for which x_i ≠ y_i. The weight of a vector x, denoted wt(x), is the number of nonzero coordinates; thus, wt(x) = d(x, 0).

In the binary field, x_i + y_i = 0 when x_i = y_i and x_i + y_i = 1 when x_i ≠ y_i. Hence, d(x, y) = Σ_{i=1}^n (x_i + y_i). The reader may check that the Hamming distance
satisfies the three properties required of a distance function:
(1) d(x, y) ≥ 0, and d(x, y) = 0 if and only if x = y.
(2) d(x, y) = d(y, x).
(3) d(x, y) ≤ d(x, z) + d(z, y).
Recall that the third property is known as the triangle inequality. The usual decoding scheme for an error-correcting code is to use nearest neighbor decoding: if the word y is received, look for the codeword x which is closest to y; that is, look for the codeword x which minimizes d(x, y). This is based on the assumption that if k < j, it is more likely that k errors were made than j errors. Let us examine this more closely. Suppose that for each bit in a word the probability of error is p. We also assume that each bit is an
independent event. Then for 0 ≤ k ≤ n, the probability of exactly k errors in a word of length n is $\binom{n}{k} p^k (1-p)^{n-k}$. We claim that when p < 1/(n + 1), these numbers decrease with k.

Lemma 18.4. If 0 < p < 1/(n + 1) and 0 ≤ j < k ≤ n, then

$$\binom{n}{j} p^j (1-p)^{n-j} > \binom{n}{k} p^k (1-p)^{n-k}.$$

Proof. It suffices to prove the inequality when k = j + 1. Dividing both sides of

$$\binom{n}{j} p^j (1-p)^{n-j} > \binom{n}{j+1} p^{j+1} (1-p)^{n-j-1}$$

by $\binom{n}{j} p^j (1-p)^{n-j-1}$ and using $\binom{n}{j+1}/\binom{n}{j} = (n-j)/(j+1)$, we see it is equivalent to

$$\frac{1-p}{p} > \frac{n-j}{j+1}, \quad \text{that is,} \quad \frac{1}{p} > 1 + \frac{n-j}{j+1}.$$

Since (n − j)/(j + 1) ≤ n, this last inequality holds for p < 1/(n + 1), for then 1/p > n + 1 ≥ 1 + (n − j)/(j + 1). □
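The claim of Lemma 18.4 is easy to check numerically; the values n = 7 and p = 0.1 < 1/(n + 1) below are illustrative choices, not from the text.

```python
from math import comb

# Probability of exactly k errors in an n-bit word when each bit flips
# independently with probability p (the setting of Lemma 18.4).
def prob_k_errors(n, k, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 7, 0.1          # p = 0.1 < 1/(n+1) = 0.125
probs = [prob_k_errors(n, k, p) for k in range(n + 1)]

# The probabilities strictly decrease with k, so fewer errors is always likelier,
# which justifies nearest neighbor decoding.
assert all(probs[k] > probs[k + 1] for k in range(n))
```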
λ_1 > |λ_2|, λ_2 < 0, and |λ_2| < 1. Using the fact that the eigenvalues satisfy λ + 1 = λ^2, we see that the corresponding eigenvectors are

$$v_1 = \begin{pmatrix} 1 \\ \lambda_1 \end{pmatrix} \quad \text{and} \quad v_2 = \begin{pmatrix} 1 \\ \lambda_2 \end{pmatrix}.$$

Writing x(0) = (1, 1)^T as a linear combination of these eigenvectors,

$$x(0) = \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \frac{\lambda_1}{\sqrt{5}}\, v_1 - \frac{\lambda_2}{\sqrt{5}}\, v_2,$$

and so

$$x(k) = \frac{\lambda_1^{k+1}}{\sqrt{5}}\, v_1 - \frac{\lambda_2^{k+1}}{\sqrt{5}}\, v_2.$$

Since x(k) = (f_k, f_{k+1})^T, we get the following formula for the kth Fibonacci number:

$$f_k = \frac{\lambda_1^{k+1} - \lambda_2^{k+1}}{\sqrt{5}}.$$
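A quick numerical check of this eigenvalue formula, written with the signs so that f_k = (λ_1^{k+1} − λ_2^{k+1})/√5 under the indexing convention f_0 = f_1 = 1 matching x(0) = (1, 1)^T:

```python
from math import sqrt

# Eigenvalues of [[0, 1], [1, 1]]: the golden ratio and its conjugate.
l1 = (1 + sqrt(5)) / 2          # dominant eigenvalue
l2 = (1 - sqrt(5)) / 2          # second eigenvalue; l2 < 0 and |l2| < 1

def fib_eigen(k):
    return round((l1**(k + 1) - l2**(k + 1)) / sqrt(5))

# Check against the recurrence f_{k+1} = f_k + f_{k-1} with f_0 = f_1 = 1.
fibs = [1, 1]
for _ in range(20):
    fibs.append(fibs[-1] + fibs[-2])
assert [fib_eigen(k) for k in range(22)] == fibs
```

Because |λ_2| < 1, the second term is negligible for large k, so f_k grows like λ_1^{k+1}/√5.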
Example 19.4. We consider a simple migration model for a population divided into two categories: urban and rural [Luen79, pp. 144–146]. Let x_1(k) denote the urban population in year k, and let x_2(k) denote the rural population in year k; the total population is then p(k) = x_1(k) + x_2(k). We assume an overall population growth rate of α, which is the same for both sectors. Some rural activity (farming, etc.) is needed to support the total population; we assume that the optimal rural base is some fraction γ of the total population. When x_2(k) exceeds γp(k), there will be migration from rural to urban areas, but when x_2(k) is less than γp(k) there will be migration in the reverse direction, from urban to rural. We assume the migration is proportional to the difference x_2(k) − γp(k). A simple model based on these assumptions is then

x_1(k + 1) = αx_1(k) + β[x_2(k) − γ(x_1(k) + x_2(k))],
x_2(k + 1) = αx_2(k) − β[x_2(k) − γ(x_1(k) + x_2(k))],

where the growth factor α and migration factor β are both positive. We will also assume β < α. In matrix form, we have

$$x(k+1) = \begin{pmatrix} \alpha - \beta\gamma & \beta(1-\gamma) \\ \beta\gamma & \alpha - \beta(1-\gamma) \end{pmatrix} x(k).$$

The column sums of this matrix are both α, so one of the eigenvalues is α. The trace of the matrix is 2α − β, so the other eigenvalue is α − β. With the assumption β < α, we have 0 < α − β < α, and α is the dominant eigenvalue. One can check that a right eigenvector corresponding to the eigenvalue α is (1 − γ, γ)^T; this is the equilibrium distribution, with the rural population being γ times the total population.

Now consider the continuous time version. Let t be a continuous variable, and let x(t) = (x_1(t), ..., x_n(t))^T be the state vector. Assume the coordinate functions x_i(t) are differentiable functions of t. A linear, constant coefficient, first-order continuous system is one of the form x′(t) = Ax(t) + b, where A is an n × n matrix and b is a vector in R^n. As for the discrete time version, we say the system is homogeneous when b = 0. We then have x′(t) = Ax(t),
(19.22)

which has the general solution

(19.23)  x(t) = e^{At} x(0),

where e^{At} is defined by substituting the matrix At for the variable x in the Taylor series expansion for the exponential function e^x. Thus,

$$e^{At} = \sum_{k=0}^{\infty} \frac{(At)^k}{k!} = \sum_{k=0}^{\infty} \frac{t^k A^k}{k!}.$$
Term-by-term differentiation shows d(e^{At})/dt = A e^{At}, and thus (19.23) satisfies the equation x′(t) = Ax(t) and the initial conditions. In other words, the formula for the solution to (19.22) is the same as the familiar formula x = x_0 e^{at} for the solution to the scalar equation x′ = ax, with the scalar a replaced by the n × n matrix A. Again, we could use the Jordan canonical form of A to compute e^{At}, but for diagonalizable A we obtain a more useful formula using the eigenvalues and eigenvectors. As before, let x(0) = Σ_{j=1}^n c_j v_j, where v_1, ..., v_n are n linearly
independent eigenvectors of A with corresponding eigenvalues λ_1, ..., λ_n. For each eigenvector–eigenvalue pair, set x_j(t) = e^{λ_j t} v_j. Then

$$\frac{d}{dt}\bigl(e^{\lambda_j t} v_j\bigr) = \lambda_j e^{\lambda_j t} v_j = e^{\lambda_j t}(\lambda_j v_j) = e^{\lambda_j t}(A v_j) = A\bigl(e^{\lambda_j t} v_j\bigr),$$

showing that x_j′(t) = A x_j(t). Hence, each of the functions x_j(t) is a solution to the homogeneous system (19.22). Now put

$$x(t) = \sum_{j=1}^{n} c_j x_j(t).$$

We then have x(0) = Σ_{j=1}^n c_j v_j and x′(t) = Ax(t). Thus, for diagonalizable A, the general solution to (19.22) may be expressed as

$$\text{(19.24)} \qquad x(t) = \sum_{j=1}^{n} c_j x_j(t) = \sum_{j=1}^{n} c_j e^{\lambda_j t} v_j.$$
For each eigenvalue λ_j, let λ_j = r_j + i s_j, where r_j and s_j are real; thus, r_j = ℜ(λ_j) is the real part of λ_j. Then |e^{λ_j t}| = e^{r_j t} |e^{i s_j t}| = e^{r_j t}. If r_j < 0, then e^{r_j t} → 0 as t → ∞. Suppose one of the eigenvalues has a real part which is strictly larger than the real part of all of the other eigenvalues. Renumbering so that r_1 > r_j for j = 2, 3, ..., n, and rewriting (19.24) as

$$x(t) = \sum_{j=1}^{n} c_j x_j(t) = e^{\lambda_1 t}\Bigl(c_1 v_1 + \sum_{j=2}^{n} c_j e^{(\lambda_j - \lambda_1)t} v_j\Bigr),$$

we see that e^{(λ_j − λ_1)t} → 0 as t → ∞; so, provided c_1 ≠ 0, we see that x(t) becomes parallel to v_1 as t → ∞.
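The eigenvector expansion (19.24) can be sketched numerically. The matrix A below is an arbitrary illustrative choice, and a truncated power series stands in for e^{At}:

```python
import numpy as np

# For diagonalizable A, e^{At} x(0) = sum_j c_j e^{lambda_j t} v_j, where the
# c_j come from expanding x(0) in the eigenvector basis.
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])          # distinct eigenvalues -1, -3, so diagonalizable
t = 1.5
x0 = np.array([1.0, 1.0])

lam, V = np.linalg.eig(A)            # columns of V are eigenvectors v_j
c = np.linalg.solve(V, x0)           # x(0) = sum_j c_j v_j
x_eigen = V @ (c * np.exp(lam * t))  # sum_j c_j e^{lambda_j t} v_j

# Compare against the power-series definition of e^{At}, truncated far out.
E, term = np.eye(2), np.eye(2)
for k in range(1, 40):
    term = term @ (A * t) / k        # (At)^k / k!
    E = E + term
assert np.allclose(x_eigen, E @ x0)
```

Since both eigenvalues here have negative real part, both e^{λ_j t} factors decay and x(t) → 0.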
19.5. Constant Coefficient, Nonhomogeneous Systems; Equilibrium Points

In this section we use the following fact about matrix power series.

Theorem 19.5. Let f(x) = Σ_{k=0}^∞ c_k x^k be a power series with positive radius of convergence R. Let A be an n × n matrix with ρ(A) < R. Then the series f(A) = Σ_{k=0}^∞ c_k A^k converges.
We refer the reader to the exercises for an outline of the proof. In particular, we will be using the geometric series 1/(1 − x) = Σ_{k=0}^∞ x^k, which has radius of convergence R = 1, and the Taylor series for the exponential function e^x = Σ_{k=0}^∞ x^k/k!, which converges for all x (that is, R is infinite).

Now consider the system x(k + 1) = Ax(k) + b for general b. Starting with k = 0 and iterating for k = 1, 2, 3, ..., we obtain

x(1) = Ax(0) + b,
x(2) = A^2 x(0) + Ab + b,
x(3) = A^3 x(0) + A^2 b + Ab + b,
⋮
(19.25)  x(k) = A^k x(0) + (A^{k−1} + A^{k−2} + · · · + A^2 + A + I)b.
Since (A − I)(A^{k−1} + A^{k−2} + · · · + A^2 + A + I) = A^k − I, when A − I is invertible we may rewrite equation (19.25) as

(19.26)  x(k) = A^k x(0) + (A − I)^{−1}(A^k − I)b.
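Formula (19.26) can be checked against direct iteration; the matrix A and vector b below are arbitrary illustrative values with A − I invertible.

```python
import numpy as np

# Compare the closed form (19.26) with iterating x(k+1) = A x(k) + b.
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
b = np.array([1.0, 2.0])
x0 = np.array([3.0, -1.0])

k = 10
x = x0
for _ in range(k):
    x = A @ x + b

Ak = np.linalg.matrix_power(A, k)
closed = Ak @ x0 + np.linalg.inv(A - np.eye(2)) @ (Ak - np.eye(2)) @ b
assert np.allclose(x, closed)
```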
Compare equation (19.26) with the formula for a single equation, (19.14). Alternatively, we can find the general solution to the nonhomogeneous system x(k + 1) = Ax(k) + b by finding a particular solution and adding it to the general solution to the related homogeneous equation x(k + 1) = Ax(k).

Definition 19.6. We say x̄ is a fixed point or equilibrium point of the system x(k + 1) = Ax(k) + b if x̄ = Ax̄ + b.

So, x̄ is an equilibrium point if and only if (I − A)x̄ = b. When I − A is invertible, there is exactly one equilibrium point, namely x̄ = (I − A)^{−1}b. If I − A is singular, the system (I − A)x̄ = b has either no solutions or infinitely many, depending on the choice of the vector b. Note that I − A is invertible if and only if 1 is not an eigenvalue of A.

Definition 19.7. Suppose x̄ is an equilibrium point for x(k + 1) = Ax(k) + b. We say x̄ is asymptotically stable if for any choice of initial value x(0), we have lim_{k→∞} x(k) = x̄.
Note that if a system has an asymptotically stable equilibrium point x̄, then x̄ is the only equilibrium point. For, if ȳ is any equilibrium point, then choosing x(0) = ȳ gives x(k) = ȳ for all k, and hence lim_{k→∞} x(k) = x̄ gives ȳ = x̄.

The proof of the next theorem depends on the fact that A^k → 0 as k → ∞ if and only if ρ(A) < 1 (see Theorem 17.2).

Theorem 19.8. The system x(k + 1) = Ax(k) + b has an asymptotically stable equilibrium point if and only if ρ(A) < 1.
Proof. Suppose ρ(A) < 1. Then I − A is invertible and there is a unique equilibrium point x̄ = (I − A)^{−1}b. Since ρ(A) < 1, we have A^k → 0 as k → ∞, and equation (19.26) tells us that as k → ∞, we have x(k) → −(A − I)^{−1}b = x̄.

Conversely, suppose the system x(k + 1) = Ax(k) + b has a stable equilibrium point x̄. Then x̄ is the only equilibrium point and x̄ = (I − A)^{−1}b. Substitute this in equation (19.26) (note that (A − I)^{−1} and A^k commute) to get

x(k) = A^k x(0) + (I − A^k)x̄ = A^k(x(0) − x̄) + x̄.

Since x̄ is asymptotically stable, x(k) → x̄ for any choice of x(0). Therefore, we have A^k(x(0) − x̄) → 0 for any choice of x(0). Hence, A^k → 0 as k → ∞, and ρ(A) < 1. □

Analogous definitions and results apply to the continuous time version, but the condition on eigenvalues which gives an asymptotically stable equilibrium point is that the eigenvalues have negative real part. For the discrete time system, we used the fact that A^k → 0 as k → ∞ if and only if ρ(A) < 1. For continuous time systems, the key fact is that e^{At} → 0 as t → ∞ if and only if every eigenvalue of A has negative real part. Consider a first-order, linear, constant coefficient, continuous time system x′(t) = Ax(t) + b. We shall show that the general solution is

$$\text{(19.27)} \qquad x(t) = e^{At} x(0) + \Bigl(\int_0^t e^{As}\, ds\Bigr) b,$$
which may be viewed as the continuous time version of formula (19.25). The integration of the matrix-valued function e^{As} may be done as term-by-term integration of the power series:

$$\int_0^t e^{As}\, ds = \sum_{k=0}^{\infty} \frac{A^k}{k!} \int_0^t s^k\, ds = \sum_{k=0}^{\infty} \frac{A^k t^{k+1}}{(k+1)!}.$$

We then have

$$\text{(19.28)} \qquad A \int_0^t e^{As}\, ds = \sum_{k=0}^{\infty} \frac{A^{k+1} t^{k+1}}{(k+1)!} = e^{At} - I.$$
When A is invertible,

$$\text{(19.29)} \qquad \int_0^t e^{As}\, ds = A^{-1}(e^{At} - I).$$

Equation (19.29) is the matrix version of ∫_0^t e^{as} ds = (e^{at} − 1)/a, with the nonzero constant a replaced by the invertible matrix A. If we substitute t = 0 in equation (19.27), we get x(0) on both sides, so the initial condition is satisfied. We now show (19.27) satisfies x′(t) = Ax(t) + b. Using the fundamental theorem of calculus, differentiate (19.27) to get

x′(t) = A e^{At} x(0) + e^{At} b.
Now use (19.28) to get

$$Ax(t) = A e^{At} x(0) + A\Bigl(\int_0^t e^{As}\, ds\Bigr) b = A e^{At} x(0) + (e^{At} - I)b,$$

and so

Ax(t) + b = A e^{At} x(0) + e^{At} b = x′(t).

Equation (19.27) is a matrix version of (19.17), the formula for the solution to the first-order differential equation dx/dt = ax + b. For a continuous time linear system x′(t) = Ax(t) + b, equilibrium or fixed points are those at which the derivative is zero.

Definition 19.9. We say x̄ is a fixed point or equilibrium point of the system x′(t) = Ax(t) + b if Ax̄ + b = 0.
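As a sketch (with A and b chosen for illustration, not taken from the text), the equilibrium of Definition 19.9 can be computed by solving Ax̄ = −b, and its stability observed with a crude Euler time-stepping of x′(t) = Ax(t) + b:

```python
import numpy as np

A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])
b = np.array([1.0, 1.0])

# Equilibrium: A xbar + b = 0, so xbar = -A^{-1} b (A is invertible here).
xbar = np.linalg.solve(A, -b)
assert np.allclose(A @ xbar + b, 0)

# Both eigenvalues of A have negative real part, so trajectories drift to xbar.
assert all(np.linalg.eigvals(A).real < 0)
x, dt = np.array([5.0, -5.0]), 0.01
for _ in range(5000):
    x = x + dt * (A @ x + b)       # explicit Euler step
assert np.allclose(x, xbar, atol=1e-3)
```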
So, x̄ is an equilibrium point if and only if Ax̄ = −b. When A is invertible, there is exactly one equilibrium point, namely x̄ = −A^{−1}b. But if A is singular, the system Ax̄ = −b has either no solutions or infinitely many, depending on the choice of the vector b. Note that A is invertible if and only if 0 is not an eigenvalue of A.

Definition 19.10. Suppose x̄ is an equilibrium point for x′(t) = Ax(t) + b. We say x̄ is asymptotically stable if for any choice of initial value x(0), we have lim_{t→∞} x(t) = x̄.

As for the discrete time version, when there is an asymptotically stable equilibrium point, it must be the only equilibrium point of the system. We have the following analogue of Theorem 19.8.

Theorem 19.11. The system x′(t) = Ax(t) + b has an asymptotically stable equilibrium point if and only if every eigenvalue of A has negative real part.

Proof. Suppose every eigenvalue of A has negative real part. Then A is invertible and there is a unique equilibrium point x̄ = −A^{−1}b. Since all of the eigenvalues have negative real parts, e^{At} → 0 as t → ∞. Using (19.27) and (19.29),

(19.30)  x(t) = e^{At} x(0) + A^{−1}(e^{At} − I)b → −A^{−1}b.

Hence, x(t) → x̄ as t → ∞ and x̄ is stable.

Conversely, suppose the system x′(t) = Ax(t) + b has an asymptotically stable equilibrium point x̄. Since there is a unique equilibrium point, the matrix A must be invertible and x̄ = −A^{−1}b. Substituting in (19.30),

x(t) = e^{At} x(0) − (e^{At} − I)x̄ = e^{At}(x(0) − x̄) + x̄.

Since x̄ is asymptotically stable, x(t) → x̄ as t → ∞ for any choice of x(0), and so e^{At}(x(0) − x̄) → 0 for any choice of x(0). Hence, e^{At} → 0 as t → ∞ and each eigenvalue of A must have negative real part. □

Example 19.12. We return to the situation of Example 19.1 but modify the way the producer decides how much to supply in period k + 1. In Example 19.1, we assumed the producer would use the price reached at time k to plan how much French babka to bake for time k + 1. We now try a more complicated approach and assume the producer will consider both the price reached at time k and the
price reached at time k − 1 to predict the price at time k + 1. For example, one might use an average or, more generally, a weighted average. Thus, we replace p(k) in equation (19.19) with a weighted average γp(k) + δp(k − 1), where γ, δ are nonnegative and sum to one. Our new equations are then

s(k + 1) = β(γp(k) + δp(k − 1)) − s_0,
d(k + 1) = −αp(k + 1) + d_0.

Setting s(k + 1) = d(k + 1) gives the following second-order difference equation for the price function:

$$p(k+1) = -\frac{\beta}{\alpha}\,[\gamma p(k) + \delta p(k-1)] + \frac{s_0 + d_0}{\alpha}.$$

We have the same equilibrium point p̄ = (s_0 + d_0)/(α + β) as in Example 19.1. Setting r = β/α and noting that δ = 1 − γ, the characteristic equation for the related homogeneous equation

p(k + 1) + rγp(k) + rδp(k − 1) = 0

is then

(19.31)  x^2 + rγx + r(1 − γ) = 0.

The equilibrium point p̄ will be asymptotically stable when both roots of equation (19.31) have magnitude less than one. The roots are

$$\lambda_1, \lambda_2 = \frac{-r\gamma \pm \sqrt{r^2\gamma^2 - 4r(1-\gamma)}}{2}.$$

One may now consider various cases. For γ = 1, the equation reduces to the first-order equation in Example 19.1. If γ = 1/2, then γ = δ = 1/2, so a simple average of p(k) and p(k − 1) is used. We claim that in this case the equilibrium point p̄ is asymptotically stable for r < 2. To see this, first note that the roots λ_i are complex when r^2γ^2 < 4r(1 − γ). Since r > 0, this is equivalent to

(19.32)  rγ^2 < 4(1 − γ).

For complex roots, we have

$$\text{(19.33)} \qquad |\lambda_1|^2 = |\lambda_2|^2 = \frac{r^2\gamma^2 + \bigl(4r(1-\gamma) - r^2\gamma^2\bigr)}{4} = r(1-\gamma).$$

For γ = 1/2, equation (19.32) becomes r < 8, and (19.33) gives |λ_1|^2 = |λ_2|^2 = r/2. So for r < 2 the roots will be nonreal numbers and will have magnitude less than one; hence p̄ will be asymptotically stable.
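The stability claim for γ = 1/2 can be checked numerically from the characteristic equation (19.31); the r values below are illustrative.

```python
import numpy as np

# Roots of (19.31): x^2 + r*g*x + r*(1-g) = 0, for the simple-average case g = 1/2.
def char_roots(r, g=0.5):
    return np.roots([1.0, r * g, r * (1 - g)])

# r < 2: both roots lie strictly inside the unit circle, so pbar is stable.
for r in [0.5, 1.0, 1.9]:
    assert all(abs(z) < 1 for z in char_roots(r))

# r > 2: the (complex) roots have |lambda|^2 = r/2 > 1, and stability fails.
assert all(abs(z) >= 1 for z in char_roots(2.5))
```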
19.6. Nonnegative Systems

When the matrix A and the vector b are nonnegative, the Perron–Frobenius theory gives more information about the equilibrium points. We begin with the case of a discrete system.

Definition 19.13. We say the system x(k + 1) = Ax(k) + b is nonnegative provided that whenever the initial vector x(0) is nonnegative, then x(k) is nonnegative for all k.
Clearly, a nonnegative matrix A and a nonnegative vector b will yield a nonnegative system. Happily, it turns out that this is the only way we can have a nonnegative system.

Theorem 19.14. The system x(k + 1) = Ax(k) + b is nonnegative if and only if A ≥ 0 and b ≥ 0.

Proof. As already noted, if A and b are both nonnegative, then the system is clearly nonnegative. We need to prove the converse. So, suppose x(k + 1) = Ax(k) + b is nonnegative. If we choose x(0) = 0, then x(1) = b, so we must have b ≥ 0. Now, suppose the matrix A has a negative entry in position i, j. Let c be a positive constant large enough that c|a_{ij}| > b_i. Since a_{ij} < 0, we then have c a_{ij} + b_i < 0. Choose x(0) = c e_j, where e_j is the jth unit coordinate vector. Then

x(1) = Ax(0) + b = c(column j of A) + b,

so x(1) has a negative entry in the ith coordinate. This contradicts the fact that the system is nonnegative. Hence, every entry of A must be nonnegative. □

For nonnegative systems, we are interested in knowing if there is a nonnegative equilibrium point. It turns out that the criterion for a nonnegative equilibrium point is the same as that for stability.

Theorem 19.15. Suppose x(k + 1) = Ax(k) + b is a nonnegative system and b > 0. Then there is a nonnegative equilibrium point if and only if ρ(A) < 1.

Proof. Suppose ρ(A) < 1. Theorem 19.8 then tells us there is a stable equilibrium point; furthermore, the equilibrium point x̄ is unique and given by the formula x̄ = (I − A)^{−1}b. Since ρ(A) < 1, the geometric series Σ_{j=0}^∞ A^j converges to (I − A)^{−1}, and x̄ = Σ_{j=0}^∞ A^j b. Since A ≥ 0 and b ≥ 0, it is clear that x̄ ≥ 0.
Conversely, suppose there is a nonnegative equilibrium point, that is, a nonnegative vector x̄ such that x̄ = Ax̄ + b. Since A ≥ 0, the Perron–Frobenius theorem tells us that ρ = ρ(A) is an eigenvalue of A; furthermore, there is a corresponding nonnegative left eigenvector v^T. We then have

v^T x̄ = v^T A x̄ + v^T b = ρ(v^T x̄) + v^T b.

Rearrange this to get

(19.34)  (1 − ρ) v^T x̄ = v^T b.

Since v is nonnegative and is not the zero vector and b > 0, we must have v^T b > 0. Since we also have x̄ ≥ 0, equation (19.34) tells us 1 − ρ > 0; hence ρ = ρ(A) < 1. □

Now consider a continuous time system, x′(t) = Ax(t) + b. We first want to find the conditions on A and b which guarantee that the system preserves nonnegativity of the state vector. We clearly must have b ≥ 0, because if the initial state x(0) is the zero vector, then x′(0) = b must be nonnegative to preserve nonnegativity. However, it turns out that it is not necessary that all of the entries of A be nonnegative, but only the off-diagonal entries.

Definition 19.16. A square matrix A is a Metzler matrix if a_{ij} ≥ 0 for all i ≠ j.
Theorem 19.17. The continuous time system x′(t) = Ax(t) + b preserves nonnegativity if and only if A is a Metzler matrix and b ≥ 0.

Proof. Suppose the system preserves nonnegativity. Then when a nonnegative state vector x(t) has a zero in the ith coordinate, we must have x_i′(t) ≥ 0 in order for that coordinate to remain nonnegative. If we start at x(0) = 0, then x′(0) = b, so we must have b ≥ 0. Next, consider the i, j entry of A, where i ≠ j. Suppose a_{ij} is negative. Choose a positive constant c such that c a_{ij} + b_i < 0. Let x(0) = c e_j. Entry i of x(0) is then 0, while entry i of x′(0) is c a_{ij} + b_i < 0, which contradicts the assumption that the system preserves nonnegativity. Therefore, for i ≠ j, we have a_{ij} ≥ 0, and A is a Metzler matrix.

Conversely, suppose A is a Metzler matrix and b ≥ 0. If we start at a nonnegative vector x(0), then the trajectory x(t) will stay in the nonnegative region of R^n unless there is some value of t and some coordinate i such that x_i(t) = 0 but x_i′(t) < 0; that is, unless the trajectory reaches a boundary point of the region and the derivative of the boundary coordinate is negative. So, suppose x(t) ≥ 0 with x_i(t) = 0 for some t and some coordinate i. Since A is a Metzler matrix, the ith coordinate of Ax(t) will then be nonnegative, because the zero in the ith coordinate of x(t) will pair up with the ith diagonal entry of A, and thus x_i′(t) ≥ 0. Hence, Ax(t) + b ≥ 0 and the system preserves nonnegativity. □

There is a close relationship between Metzler matrices and nonnegative matrices. If A is a Metzler matrix, then for a sufficiently large positive constant c the matrix P = cI + A is nonnegative. Therefore, P has a Perron–Frobenius eigenvalue λ_0 = ρ(P) with a corresponding nonnegative eigenvector v_0. We then have Av_0 = (λ_0 − c)v_0, so v_0 is an eigenvector of A corresponding to the eigenvalue λ_0 − c. Set μ_0 = λ_0 − c = ρ(P) − c. Then μ_0 is real.
Since the eigenvalues of A are obtained from those of P by shifting them distance c to the left, we can take the circle of radius ρ(P) centered at the origin, and shift it c units to the left to obtain a circle centered at −c with radius ρ(P) which contains the eigenvalues of A. This shifted circle is then tangent to the vertical line x = μ_0, which tells us that μ_0 is the eigenvalue of A with largest real part.

Theorem 19.18. Let A be a Metzler matrix. Then A has a real eigenvalue μ_0 which satisfies the following.
(1) There is a nonnegative eigenvector v_0 corresponding to μ_0.
(2) If μ ≠ μ_0 is any other eigenvalue of A, then ℜ(μ) < μ_0.

The next result is the analogue of Theorem 19.15 for continuous systems.

Theorem 19.19. Let A be a Metzler matrix, and let b > 0. Then the system x′(t) = Ax(t) + b has a nonnegative equilibrium point x̄ if and only if all of the eigenvalues of A are strictly in the left half of the complex plane (that is, ℜ(λ) < 0 for every eigenvalue λ of A).

Proof. If ℜ(λ) < 0 for every eigenvalue λ of A, Theorem 19.11 tells us the system has an asymptotically stable equilibrium point x̄. For any trajectory x(t) we then have lim_{t→∞} x(t) = x̄. However, since the system is nonnegative, we know that for any
nonnegative starting point x(0), every point of the trajectory x(t) is nonnegative. Therefore, x̄ ≥ 0.

Conversely, suppose the system has a nonnegative equilibrium point x̄. Then x̄ ≥ 0 and Ax̄ + b = 0. Since b > 0, we must have Ax̄ < 0. Let μ_0 be the eigenvalue of A with largest real part. Since A is a Metzler matrix, we know that μ_0 is a real number and that there is a corresponding nonnegative left eigenvector w_0. We then have w_0^T A x̄ < 0 and w_0^T A x̄ = μ_0 w_0^T x̄. Since w_0^T x̄ > 0, this tells us μ_0 < 0. Hence, every eigenvalue of A has a negative real part. □
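The shift P = cI + A described above can be illustrated numerically; the Metzler matrix A and constant c below are illustrative choices.

```python
import numpy as np

# A Metzler matrix: nonnegative off-diagonal entries, diagonal may be negative.
A = np.array([[-2.0, 1.0],
              [3.0, -4.0]])
c = 4.0                                  # large enough that P = cI + A >= 0
P = c * np.eye(2) + A
assert np.all(P >= 0)

# The eigenvalue of A with largest real part is real, and the Perron root of P
# is that eigenvalue shifted by c.
mu0 = max(np.linalg.eigvals(A).real)
assert np.isclose(max(abs(np.linalg.eigvals(P))), mu0 + c)
```

Here the eigenvalues of A are −1 and −5, so μ_0 = −1 and ρ(P) = μ_0 + c = 3.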
19.7. Markov Chains

Suppose we have a set of n states and a process (also called a chain) which moves successively from state to state in discrete time periods. Each move is called a step. If the chain is currently in state i, we let p_{ij} denote the probability that it will move to state j at the next step. Note that 0 ≤ p_{ij} ≤ 1 and Σ_{j=1}^n p_{ij} = 1. Let P be the n × n matrix with p_{ij} in entry i, j. The probabilities p_{ij} are called transition probabilities, and the matrix P is called the transition matrix. The condition Σ_{j=1}^n p_{ij} = 1 says that in each row of P, the entries sum to one.

Definition 19.20. We say an n × n matrix P is row stochastic if the entries of P are all nonnegative real numbers and all of the row sums of P are one.

Observe that P is row stochastic if and only if P ≥ 0 and Pe = e, where e denotes the all-one vector.

Definition 19.21. We say a vector x is a probability vector if all of the entries of x are nonnegative and the sum of the coordinates of x is 1. Equivalently, x ≥ 0, and if e denotes the all-one vector, then x^T e = 1.

We interpret the coordinates of a probability vector x as representing the probabilities of being in each of the n states; thus x_j represents the probability the chain is in state j. A matrix P is row stochastic if and only if each row of P is a probability vector.

Proposition 19.22. If x is a probability vector and P is row stochastic, then x^T P is a probability vector. If P and Q are n × n row stochastic matrices, then PQ is row stochastic.

Proof. Suppose x is a probability vector and P is row stochastic. Then x^T P is clearly nonnegative, and (x^T P)e = x^T(Pe) = x^T e = 1, so x^T P is a probability vector. Now suppose P and Q are n × n row stochastic matrices. Then PQ is clearly nonnegative, and (PQ)e = P(Qe) = Pe = e, so PQ is row stochastic. □

Now suppose x(k) is the probability vector in which x_j(k) gives the probability the chain is in state j at step (or time) k. Then p_{ij} x_i(k) gives the probability that
the chain is in state i at time k and in state j at time k + 1. If we sum the quantities p_{ij} x_i(k) over i, we get the probability the chain is in state j at time k + 1. Thus,

$$\text{(19.35)} \qquad x_j(k+1) = \sum_{i=1}^{n} p_{ij}\, x_i(k), \qquad j = 1, \ldots, n.$$

Writing the probability vectors as row vectors, the matrix form of (19.35) is

(19.36)  x^T(k + 1) = x^T(k) P.

Our apologies for this notational change to row vectors and placing the matrix transformation on the right; however, this seems to be the usual notation for Markov chains. As we have seen before, equation (19.36) leads to the formula

(19.37)  x^T(k) = x^T(0) P^k.
We now use the fact that P is row stochastic to study the behavior of x(k) as k → ∞. Since P ≥ 0, the Perron–Frobenius theorem applies. From Pe = e, we see λ = 1 is an eigenvalue of P and e is an associated eigenvector. The Geršgorin Circle Theorem (more specifically, Theorem 3.16) tells us ρ(P) ≤ 1. Since 1 is an eigenvalue, ρ(P) = 1, and we have the positive eigenvector e. Of critical importance to analyzing the behavior of x(k) as k → ∞ is whether there are other eigenvalues of modulus one—i.e., whether the matrix P is irreducible or not, and, in the case of an irreducible P, whether P is primitive or not.

Theorem 19.23. Suppose A is an n × n row stochastic matrix which is irreducible and primitive. Then there exists a unique, positive probability vector q such that
(1) q^T A = q^T.
(2) lim_{k→∞} A^k = e q^T.

Proof. Since A is row stochastic, we know ρ(A) = 1; since A is irreducible, we know λ = 1 is a simple eigenvalue of A. Also, we know that A has a positive left eigenvector x corresponding to the eigenvalue 1. Since 1 is a simple eigenvalue, the corresponding eigenspace is one dimensional. Divide x by Σ_{i=1}^n x_i to get the unique probability vector

$$q = \frac{1}{\sum_{i=1}^{n} x_i}\, x,$$

which satisfies q^T A = q^T.

For the second part, consider the Jordan canonical form of A. Since A is primitive, we know that for any eigenvalue λ ≠ 1, we have |λ| < 1. Hence, the Jordan canonical form of A may be written as J = 1 ⊕ J̄, where J̄ is a sum of Jordan blocks corresponding to eigenvalues of modulus less than one. We then have lim_{k→∞} J̄^k = 0 and lim_{k→∞} J^k = 1 ⊕ 0_{n−1}. This tells us lim_{k→∞} A^k exists and is a matrix of rank 1. Let B = lim_{k→∞} A^k; then B is row stochastic and has rank 1. Also, BA = B, so every row of B must be a left eigenvector of A corresponding to the eigenvalue 1. Since every row of B is also a probability vector, we see every row of B is the vector q^T. Hence, B = lim_{k→∞} A^k = e q^T. □
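Theorem 19.23 can be checked numerically on a small positive (hence irreducible and primitive) row stochastic matrix; the entries below are illustrative values, not from the text.

```python
import numpy as np

A = np.array([[0.9, 0.1],
              [0.5, 0.5]])
assert np.allclose(A @ np.ones(2), 1)        # row stochastic: A e = e

# Left eigenvector for eigenvalue 1, normalized to a probability vector q.
w, V = np.linalg.eig(A.T)
q = V[:, np.argmin(abs(w - 1))].real
q = q / q.sum()
assert np.allclose(q @ A, q)                 # q^T A = q^T
assert np.allclose(q, [5/6, 1/6])

# Powers of A converge to e q^T: every row of A^k approaches q^T.
assert np.allclose(np.linalg.matrix_power(A, 60), np.outer(np.ones(2), q))
```

The second eigenvalue here is 0.4, so the convergence of A^k is geometric with ratio 0.4.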
Theorem 19.23 tells us the following. If the transition matrix P of a Markov chain is irreducible and primitive, then in the long run, the probability distribution
for the chain is given by the left eigenvector corresponding to the eigenvalue 1. In particular, note that if P > 0, then P is irreducible and primitive.

Example 19.24. Let us return to Hesh's bakery and consider a group of regular customers, each of whom buys exactly one of the following items each week: the French babka, the chocolate chip pound cake (referred to as the CCC), or the sticky buns. Suppose that if a customer buys a babka one week, there is a probability of .30 that she buys the babka the following week, a probability of .50 she buys the CCC the following week, and then a .20 probability she buys the sticky buns. Of those who buy the sticky buns one week, 60% will buy the babka the following week, and 30% will buy the CCC, while 10% stick with the buns. Finally, of the CCC buyers, half will buy the CCC the following week, while 30% switch to sticky buns, and the remaining 20% buy babka the following week. If our three states are (1) State 1: French babka, (2) State 2: Sticky buns, (3) State 3: CCC, then the transition matrix for this Markov chain is

$$P = \begin{pmatrix} .3 & .2 & .5 \\ .6 & .1 & .3 \\ .2 & .3 & .5 \end{pmatrix}.$$

In the long run, what proportion of sales will be babka, sticky buns, and CCC? This corresponds to the left eigenvector for P; thus, we want to find the probability vector q satisfying q^T P = q^T. A straightforward calculation gives

$$q = \frac{1}{112}\begin{pmatrix} 36 \\ 25 \\ 51 \end{pmatrix} = \begin{pmatrix} .32 \\ .22 \\ .46 \end{pmatrix},$$

rounded off to two decimal places. Thus, over a long period of time, we may expect 32% of customers to buy babka, 22% to buy sticky buns, and 46% to buy chocolate chip cake.

Things are more complicated if P is reducible or imprimitive. We refer the reader elsewhere for more complete treatments of the general case. Here, we consider one more special situation: the absorbing Markov chain.

Definition 19.25. We say a state of a Markov chain is an absorbing state if once the process is in that state, it remains in that state.
When state i is an absorbing state, the ith row of the transition matrix P has a 1 in the ith entry and zeros elsewhere.

Definition 19.26. We say a Markov chain is an absorbing chain if it has at least one absorbing state, and, from any nonabsorbing state, there is a positive probability of reaching an absorbing state in a finite number of steps. In this case we call the nonabsorbing states transient states.

The definition of absorbing chain can also be stated in terms of the directed graph of the matrix P: there are no edges coming out of any absorbing state, while from any nonabsorbing state, there is a path to some absorbing state.
Suppose that we have an absorbing nstate Markov chain with t transient states and r absorbing states; so t + r = n. We order the states so that states 1, 2, . . . t are the transient states and states t + 1, t + 1, . . . , t + r = n are the absorbing states. The transition matrix P then takes the form Q R , (19.38) P = 0 Ir where Q is t × t, while R is t × r and the block of zeros in the lower lefthand corner is r × t. Then P k has the form k Q Rk Pk = , 0 Ir where Rk is t × r; we now ﬁnd a formula for Rk in terms of Q and R. From 2 Q QR + R P2 = , 0 Ir we see R2 = QR + R = (Q + I)R. 3 Computing P , we have 3 Q Q2 R + QR + R P3 = , 0 Ir so R3 = Q2 R+QR+R = (Q2 +Q+I)R. One may now guess the following formula, which may be proven by induction: (19.39)
Rk = (I + Q + Q2 + · · · + Qk−1 )R.
From (19.39), we see that Rm ≥ Rk when m ≥ k. In particular, note that if Rk has a nonzero entry in position i, j, then for any m > k, the matrix Rm will also have a nonzero entry in position i, j. Q R Theorem 19.27. Suppose P = is the transition matrix for an absorbing 0 Ir Markov chain with r absorbing transient states and t −1 states, with Q being t×t. Then 0t (I − Q) R k ρ(Q) < 1 and limk→∞ P = . 0 Ir Proof. Let 1 ≤ i ≤ t; then state i is nonabsorbing. Hence, starting from state i, there is a positive integer ki such that the process has positive probability of being in one of the absorbing states after ki steps. This means that the ith row of P ki will have a positive entry in at least one of the last r columns. So for any m ≥ ki , the matrix Rm has a positive entry in row i. Choose m = max{k1 , k2 , . . . , kt }. Now P is row Then every row of Rm must have a positive entry. stochastic; hence Qm Rm m any power of P is also row stochastic. So P = is row stochastic 0 Ir and every row of Rm has a positive entry. This tells us that each row sum of the nonnegative matrix Qm is less than one, and hence ρ(Qm ) < 1. Since ρ(Qm ) = (ρ(Q))m , we have ρ(Q) < 1. Now ρ(Q) < 1 tells us lim Qk = 0 and the inﬁnite series k→∞
\[ I + Q + Q^2 + Q^3 + \cdots \]
converges to (I − Q)^{−1}. Hence, \lim_{k\to\infty} R_k = (I − Q)^{−1}R and
\[ \lim_{k\to\infty} P^k = \begin{pmatrix} 0_t & (I-Q)^{-1}R \\ 0 & I_r \end{pmatrix}. \qquad \square \]
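When r = 1 there is a single absorbing state, so every transient state is eventually absorbed with probability 1 and the limiting column (I − Q)^{−1}R must consist entirely of ones. Here is a quick exact check of that limiting block for a hypothetical two-transient-state chain (our numbers, not the text's), inverting I − Q with the 2 × 2 adjugate formula:

```python
from fractions import Fraction as F

# hypothetical chain: t = 2 transient states, one absorbing state (r = 1)
Q = [[F(1, 2), F(1, 4)],
     [F(2, 5), F(1, 2)]]
R = [F(1, 4), F(1, 10)]

# invert I - Q via the 2x2 adjugate formula
a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
N = [[d / det, -b / det],
     [-c / det, a / det]]            # fundamental matrix (I - Q)^{-1}

# limiting upper right block (I - Q)^{-1} R of P^k
limit = [N[0][0] * R[0] + N[0][1] * R[1],
         N[1][0] * R[0] + N[1][1] * R[1]]

# each transient state is absorbed with probability 1, so the column is all ones
assert limit == [F(1), F(1)]
```

The assertion reflects the row-stochastic structure: since each row of P sums to 1, we have (I − Q)e = R, hence (I − Q)^{−1}R = e.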
Definition 19.28. For an absorbing Markov chain with transition matrix P = \begin{pmatrix} Q & R \\ 0 & I_r \end{pmatrix}, the matrix N = (I − Q)^{−1} is called the fundamental matrix for P.

The i, j entry of Q^k gives the probability that a system that starts in state i will be in state j after k steps. Hence, it should seem reasonable that the i, j entry of the infinite sum I + Q + Q^2 + Q^3 + · · · gives the expected number of times the process is in transient state j, provided it starts in the transient state i. A proper proof requires a more thorough treatment of probability theory, including random variables and expected values; we refer the reader elsewhere (for example, [GrinSn97, page 418]) for this.

Theorem 19.29. Let N be the fundamental matrix for an absorbing Markov chain. Then the i, j entry of N gives the expected number of times the process is in the transient state j, provided it starts in the transient state i.

The sum of the entries in row i of N gives the expected number of times the process is in transient states, provided it starts in transient state i; thus, the sum of the entries of row i of N gives the expected time before absorption into an absorbing state, provided we start in state i. Letting e denote the all-ones vector, we thus have the following.

Theorem 19.30. If the absorbing Markov chain P starts in transient state i, then the expected number of steps before absorption is the ith entry of t = N e.

As a consequence of Theorem 19.27, we have the following.

Theorem 19.31. Let 1 ≤ i ≤ t and 1 ≤ j ≤ r. Let b_{ij} be the probability that an absorbing chain is absorbed into the absorbing state t + j, provided it starts in transient state i. Let B be the t × r matrix with b_{ij} in entry i, j. Then B = N R.

Example 19.32. We return again to the happy land of Hesh's bakery, with the French babka, sticky buns, and chocolate chip cake (and many other wonderful things which we have not mentioned here). Consider again our faithful regulars, each buying exactly one of these excellent cakes each week.
However, suppose now that the French babka has become so spectacularly good that once a customer tries the babka, he or she never buys anything else. Thus, any customer who buys the babka in any week will, the following week, buy babka again with probability 1. Suppose those who buy sticky buns one week stick with the buns with probability p the following week, switch to CCC with probability q, and thus switch to babka with probability 1 − p − q. Assume that CCC buyers switch to sticky buns with probability α and stay with CCC the following week with probability β. Then babka is an absorbing state. Assign state number 3 to the absorbing state, babka, with sticky buns being state 1 and CCC being state 2. The transition matrix is
\[ P = \begin{pmatrix} p & q & 1-p-q \\ \alpha & \beta & 1-\alpha-\beta \\ 0 & 0 & 1 \end{pmatrix}. \]
We assume p, q, α, and β are all positive; we also assume that at least one of the numbers 1 − p − q, 1 − α − β is positive; this ensures that we do indeed have an absorbing Markov chain. (Note that when p + q = α + β = 1, the chain is not absorbing, for in this case sticky bun and CCC buyers would never switch to babka and thus miss out on a great life experience.) In the notation of (19.38), we have r = 1,
\[ Q = \begin{pmatrix} p & q \\ \alpha & \beta \end{pmatrix}, \quad\text{and}\quad R = \begin{pmatrix} 1-p-q \\ 1-\alpha-\beta \end{pmatrix}. \]
Then
\[ I - Q = \begin{pmatrix} 1-p & -q \\ -\alpha & 1-\beta \end{pmatrix} \]
and
\[ N = (I-Q)^{-1} = \frac{1}{\det(I-Q)} \begin{pmatrix} 1-\beta & q \\ \alpha & 1-p \end{pmatrix}. \]
Suppose p = .5, q = .25, α = .4, and β = .5. Then
\[ I - Q = \begin{pmatrix} .5 & -.25 \\ -.4 & .5 \end{pmatrix} \]
and
\[ N = (I-Q)^{-1} = \frac{1}{.15} \begin{pmatrix} .5 & .25 \\ .4 & .5 \end{pmatrix}. \]
We have
\[ Ne = \frac{1}{.15} \begin{pmatrix} .75 \\ .9 \end{pmatrix} = \begin{pmatrix} 5 \\ 6 \end{pmatrix}. \]
For the customer who starts with sticky buns, the expected time to the babka absorbing state is five weeks. For the customer who starts with the CCC, the expected time to the babka absorbing state is six weeks.
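The arithmetic in this example can be verified exactly in a few lines of Python; this is a sketch using only the standard library, and the variable names are ours:

```python
from fractions import Fraction as F

# the example's numbers: p = .5, q = .25, alpha = .4, beta = .5
p, q, alpha, beta = F(1, 2), F(1, 4), F(2, 5), F(1, 2)

det = (1 - p) * (1 - beta) - q * alpha        # det(I - Q)
N = [[(1 - beta) / det, q / det],
     [alpha / det, (1 - p) / det]]            # fundamental matrix (I - Q)^{-1}

# expected steps before absorption: t = N e  (Theorem 19.30)
t = [N[0][0] + N[0][1], N[1][0] + N[1][1]]

# absorption probabilities B = N R  (Theorem 19.31)
R = [1 - p - q, 1 - alpha - beta]
B = [N[0][0] * R[0] + N[0][1] * R[1],
     N[1][0] * R[0] + N[1][1] * R[1]]

assert det == F(3, 20)                        # det(I - Q) = .15
assert t == [F(5), F(6)]                      # five and six weeks
assert B == [F(1), F(1)]
```

The row sums of N give expected absorption times of five and six weeks, and B = NR confirms that, with babka the only absorbing state, eventual absorption there is certain.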
Exercises

1. Let A be an n × n diagonalizable matrix with eigenvalues λ_1, . . . , λ_n, and suppose f(x) = \sum_{k=1}^{\infty} c_k x^k is a power series with radius of convergence R > 0. Show that the series \sum_{k=1}^{\infty} c_k A^k converges if and only if \sum_{k=1}^{\infty} c_k \lambda_i^k converges for each eigenvalue.

2. Suppose A is a Jordan block J_n(x) = xI + N. Use the binomial expansion to compute A^k. By the mth superdiagonal line of a matrix, we mean the set of positions i, j for which j − i = m. Show that for m = 0, . . . , n − 1, each entry on the mth superdiagonal line is \frac{1}{m!}\frac{d^m}{dx^m}(x^k). (The case m = 0 gives the diagonal entries.)

3. Use Exercise 2 to prove that if f(x) = \sum_{k=1}^{\infty} c_k x^k is a power series with radius of convergence R > 0 and A is an n × n matrix with ρ(A) < R, then the series \sum_{k=1}^{\infty} c_k A^k converges.
4. The power series \sum_{k=1}^{\infty} \frac{x^k}{k} has radius of convergence R = 1. The series diverges at the point x = 1, but converges at −1. Let
\[ A = \begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix}. \]
Determine whether \sum_{k=1}^{\infty} \frac{A^k}{k} converges or diverges.
5. Let A be an n × n matrix with eigenvalues λ_1, . . . , λ_n. Show that e^{At} → 0 as t → ∞ if and only if \Re(\lambda_i) < 0 for each eigenvalue λ_i, i.e., if and only if all the eigenvalues are in the left half of the complex plane.

6. Each week, Peter has either a cheesesteak, a pizza, or a hoagie. For week k, let x_1(k) denote the probability he has a cheesesteak, let x_2(k) be the probability he has a pizza, and let x_3(k) be the probability he eats a hoagie.
• If he has a cheesesteak one week, then the probability he has a cheesesteak the following week is .4, while the probability he eats a hoagie is .5 and the probability he has a pizza is .1.
• If he has pizza one week, the probability of pizza the next week is .4, while the probability of hoagie is .5 and the probability of cheesesteak is .1.
• Finally, if he has a hoagie one week, the probability of cheesesteak the next week is .3, while the probability of a pizza is .5 and the probability of hoagie is .2.
(a) Write down the transition matrix for this Markov chain.
(b) If Peter has a pizza in week 1, what is the probability he has a pizza in week 3?
(c) Determine the steady state probability distribution. In the long run, what is the probability Peter eats each of the three items in any given week?
(d) Does this affect Peter's consumption of babka?

7. Suppose we have a two-state Markov process with one absorbing state and one transient state. Write down the transition matrix and fundamental matrix for such a system.
Bibliography
[Art11]
Michael Artin. Algebra, 2nd edition, Pearson Prentice Hall, New Jersey, 2011.
[Aut02]
L. Autonne. Sur les groupes linéaires, réels et orthogonaux, Bulletin de la Société Mathématique de France, 30: 121–134, 1902.
[Bell70]
R. Bellman. Introduction to Matrix Analysis, 2nd edition, McGrawHill, New York, 1970.
[BenCrag84] R. Benedetti and P. Cragnolini. Versal families of matrices with respect to unitary conjugation, Adv. in Math., 54: 314–335, 1984.
[Berle68]
Elwyn R. Berlekamp. Algebraic Coding Theory, McGrawHill, New York, 1968.
[Biggs74]
Norman Biggs. Algebraic Graph Theory, Cambridge University Press, London, New York, 1974.
[Birk46]
Garrett Birkhoﬀ. Three observations on linear algebra, Universidad Nacional de Tucum´ an, Revista. Serie A. Mathem´ aticas y F´ısica Te´ orica, 5: 147–151, 1946.
[Bren51]
J. Brenner. The problem of unitary equivalence, Acta. Math., 86: 297–308, (1951).
[Bret09]
Otto Bretscher. Linear Algebra with Applications, 4th edition, Pearson Prentice Hall, 2009.
[Bru87]
Richard A. Brualdi. The Jordan Canonical Form: An Old Proof, The American Mathematical Monthly, 94: 257–267, 1987.
[BR91]
Richard A. Brualdi and Herbert J. Ryser. Combinatorial Matrix Theory, Cambridge University Press, Cambridge, New York, 1991.
[BrucRys49] R. H. Bruck and H. J. Ryser. The nonexistence of certain finite projective planes, Canadian Jour. Math., 1: 88–93, 1949.
[CVL80]
P. J. Cameron and J. H. Van Lint. Graphs, Codes and Designs, London Mathematical Society Lecture Note Series # 43, Cambridge University Press, 1980.
[Cauc29]
A. L. Cauchy. Sur l'équation à l'aide de laquelle on détermine les inégalités séculaires des mouvements des planètes, in Oeuvres Complètes (IIe série), volume 9, 1829.
[ChowRys50] S. Chowla and H. J. Ryser. Combinatorial problems, Canadian Jour. Math., 2: 93–99, 1950.
[Col42]
L. Collatz. Einschliessungssatz für die charakteristischen Zahlen von Matrizen, Math. Z., 48: 221–226, 1942.
[Coo31]
J. L. Coolidge. A Treatise on Algebraic Plane Curves, Oxford University Press, Oxford, 1931.
[CouHil37]
R. Courant and D. Hilbert. Methods of Mathematical Physics, Volume I, WileyInterscience, New York, Wiley Classics Edition, 1989.
[CraiWall93] R. Craigen and W. D. Wallis. Hadamard Matrices: 1893–1993, Congressus Numerantium, 97: 99–129, 1993.
[Demm97]
James W. Demmel. Applied Numerical Linear Algebra, SIAM, Philadelphia, 1997.
[Dem68]
P. Dembowski. Finite Geometries, SpringerVerlag, Berlin, New York, etc., 1968. ˇ Dokovi´c. Hadamard matrices of order 764 exist, Combinatorica, Dragomir Z. 28: 487–489, 2008. ˇ Dokovi´c, Oleg Golubitsky, and Ilias S. Kotsireas. Some new Dragomir Z. orders of Hadamard and skewHadamard matrices, Journal of Combinatorial Designs, 22 (6): 270–277, 2014. Published online 12 June 2013 in Wiley Online Library.
[Doko07] [DGK13]
[Draz51]
M. P. Drazin. Some generalizations of matric commutativity, Proc. London Math. Soc., 1: 222–231, 1951.
[DDG51]
M. P. Drazin, J. W. Dungey, K. W. Gruenberg. Some theorems on commutative matrices, J. London Math. Soc., 26: 221–228, 1951.
[EcYo36]
C. Eckart and G. Young. The approximation of one matrix by another of lower rank, Psychometrika, 1: 211–218, 1936.
[Fisc05]
E. Fischer. Über quadratische Formen mit reellen Koeffizienten, Monatsh. f. Math. u. Phys., 16: 234–249, 1905.
[Fisc07]
E. Fischer. Über den Hadamardschen Determinantensatz, Archiv d. Math. u. Phys., (3) 13: 32–40, 1907.
[Flan56]
H. Flanders. Methods of proof in Linear Algebra, American Mathematical Monthly, 63: 1–15, 1956.
[Frob96]
G. Frobenius. Über Vertauschbare Matrizen, S.B. K. Preuss. Akad. Wiss. Berlin, 601–614, 1896.
[Frob08]
G. Frobenius. Über Matrizen aus positiven Elementen, S.B. K. Preuss. Akad. Wiss. Berlin, 471–476, 1908.
[Frob09]
G. Frobenius. Über Matrizen aus positiven Elementen II, S.B. K. Preuss. Akad. Wiss. Berlin, 514–518, 1909.
[Frob12]
G. Frobenius. Über Matrizen aus nicht negativen Elementen, S.B. K. Preuss. Akad. Wiss. Berlin, 456–477, 1912.
[Gan59]
F. R. Gantmacher. Matrix Theory, Vol. I, Chelsea Publishing Company, New York, 1959.
[Ger31]
S. A. Gershgorin. Über die Abgrenzung der Eigenwerte einer Matrix, Izv. Akad. Nauk. SSSR, Ser. Fiz.-Mat., 6: 749–754, 1931.
[Giv53] W. Givens. Fields of values of a matrix, Proc. Amer. Math. Soc., 3: 206–209, 1953.
[GvL89] Gene H. Golub and Charles F. Van Loan. Matrix Computations, 2nd edition, The Johns Hopkins University Press, Baltimore, 1989.
[GrinSn97] Charles M. Grinstead and J. Laurie Snell. Introduction to Probability, 2nd revised edition, American Mathematical Society, 1997.
[GJSW87] R. Grone, C. R. Johnson, E. M. Sa, and H. Wolkowicz. Normal matrices, Linear Algebra Appl., 87: 213–225, 1987.
[Had93] J. Hadamard. Résolution d'une question relative aux déterminants, Bull. Sci. Math., (2) 17: 240–248, 1893.
[Hall67] Marshall Hall. Combinatorial Theory, John Wiley and Sons, Inc., New York, 1967.
[Hall35] P. Hall. On representatives of subsets, Jour. London Math. Soc., 10: 26–30, 1935.
[Hal87] P. R. Halmos. Finite-Dimensional Vector Spaces, Springer-Verlag, New York, 1987.
[Hamm70] S. J. Hammarling. Latent Roots and Latent Vectors, The University of Toronto Press, 1970.
[Hamming50] Richard W. Hamming. Error detecting and error correcting codes, Bell System Technical Journal, 29: 147–160, 1950.
[Hamming80] Richard W. Hamming. Coding and Information Theory, Prentice Hall, Englewood Cliffs, New Jersey, 1980.
[Haus19] F. Hausdorff. Der Wertvorrat einer Bilinearform, Math. Z., 3: 314–316, 1919.
[Helly23] E. Helly. Über Mengen konvexer Körper mit gemeinschaftlichen Punkten, Jahresbericht der Deutschen Mathematiker-Vereinigung, 32: 175–176, 1923.
[HedWal78] A. Hedayat and W. D. Wallis. Hadamard matrices and their applications, The Annals of Statistics, 6 (6): 1184–1238, 1978.
[High96] Nicholas J. Higham. Accuracy and Stability of Numerical Algorithms, SIAM, Philadelphia, 1996.
[HK71] K. Hoffman and R. Kunze. Linear Algebra, 2nd edition, Prentice Hall, Englewood Cliffs, N.J., 1971.
[HW53] A. J. Hoffman and H. W. Wielandt. The variation of the spectrum of a normal matrix, Duke Math. J., 20: 37–39, 1953.
[HJ85] R. A. Horn and C. R. Johnson. Matrix Analysis, Cambridge University Press, Cambridge, 1985.
[HJ13] R. A. Horn and C. R. Johnson. Matrix Analysis, 2nd edition, Cambridge University Press, Cambridge, 2013.
[Hou64] Alston S. Householder. The Theory of Matrices in Numerical Analysis, Blaisdell Publishing Company, New York, 1964.
[HP85] D. R. Hughes and F. C. Piper. Design Theory, Cambridge University Press, 1985.
[Jac57] C. G. J. Jacobi. Über eine elementare Transformation eines in Bezug jedes von zwei Variablen-Systemen linearen und homogenen Ausdrucks, Journal für die reine und angewandte Mathematik, 53: 265–270, 1857 (posthumous).
[Jon50] Burton W. Jones. The Arithmetic Theory of Quadratic Forms, The Mathematical Association of America, 1950.
[Jord70]
´ C. Jordan. Trait´e des Substitutions et des Equations Alg´ebriques, Paris, 1870.
[Kah67]
W. Kahan. Inclusion Theorems for Clusters of Eigenvalues of Hermitian Matrices, Technical report, Computer Science Department, University of Toronto, 1967.
[KemSnell60] John G. Kemeny and J. Laurie Snell. Finite Markov Chains, D. Van Nostrand Company, Inc., Princeton, NJ, 1960.
[KhaTay04]
H. Kharaghani and B. Tayfeh-Rezaie. A Hadamard matrix of order 428, Journal of Combinatorial Designs, 13 (6): 439–440, 2005. Published online 13 December 2004 in Wiley InterScience.
[Kipp51]
R. Kippenhahn. Über den Wertevorrat einer Matrix, Math. Nachrichten, 6: 193–228, 1951–52.
[König22]
E. König. Über konvexe Körper, Mathematische Zeitschrift, 14: 208–210, 1922.
[König50]
E. König. Theorie der endlichen und unendlichen Graphen, Chelsea Publishing Co., New York, 1950.
[Kör88]
T. W. Körner. Fourier Analysis, Cambridge University Press, 1988.
[Kubl66]
V. N. Kublanovskaya. On a method of solving the complete eigenvalue problem for a degenerate matrix, U. S. S. R. Comput. Math. and Math. Physics, 6: 1–14, 1966.
[Lam91]
Clement W. H. Lam. The Search for a ﬁnite projective plane of order 10, American Mathematical Monthly, 98: 305–318, 1991.
[LawHan74]
Charles L. Lawson and Richard J. Hanson. Solving Least Squares Problems, PrenticeHall, Englewood Cliﬀs, New Jersey, 1974.
[Lit53]
D. E. Littlewood. On unitary equivalence, J. London Math. Soc., 28:314– 322, 1953.
[Loew98]
A. Loewy. Sur les formes quadratiques définies à indéterminées conjuguées de M. Hermite, C. R. Acad. Sci. Paris, 123: 168–171, 1898. (Cited in MacDuffee [MacD33, p. 79].)
[Luen79]
David G. Luenberger. Introduction to Dynamic Systems, Theory, Models and Applications, John Wiley and Sons, 1979.
[MacD33]
C. C. MacDuﬀee. The Theory of Matrices, Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer Verlag, Berlin, 1933.
[MacSlo77]
F. J. MacWilliams, N.J. A. Sloane. The Theory of Error Correcting Codes, North Holland, 1977.
[McRae55]
V. V. McRae. On the unitary similarity of matrices, Ph.D. Dissertation, Catholic University of America Press, Washington, 1955.
[Mal63]
A. I. Mal'cev. Foundations of Linear Algebra (translated from the Russian by Thomas Craig Brown), W. H. Freeman and Company, 1963.
[MM64]
M. Marcus and H. Minc. A Survey of Matrix Theory and Matrix Inequalities, Allyn and Bacon, Inc., Boston, 1964.
[McCoy36]
N. H. McCoy. On the characteristic roots of matric polynomials, Bull. Amer. Math. Soc., 42: 592–600, 1936.
[Mirs60]
L. Mirsky. Symmetric gage functions and unitarily invariant norms, Quarterly Journal of Mathematics, 11: 50–59, 1960.
[Mit53]
B. E. Mitchell. Unitary transformations, Canad. J. Math, 6: 69–72, 1954.
[MT52]
T. S. Motzkin and O. Taussky Todd. Pairs of matrices with property L, Trans. Amer. Math. Soc., 73: 108–114, 1952.
[MT55]
T. S. Motzkin and O. Taussky Todd. Pairs of matrices with property L, II, Trans. Amer. Math. Soc., 80: 387–401, 1955.
[MoMa55]
B. N. Moyls and M. D. Marcus. Field convexity of a square matrix, Proc. Amer. Math. Soc., 6: 981–983, 1955.
[Murn32]
F. D. Murnaghan. On the ﬁeld of values of a square matrix, Proc. Nat. Acad. Sci. U.S.A., 18: 246–248, 1932.
[Newm72]
Morris Newman. Integral Matrices, Academic Press, New York and London, 1972.
[NivZu91]
Ivan Niven, Herbert S. Zuckerman, Hugh L. Montgomery. An Introduction to the Theory of Numbers, ﬁfth edition, John Wiley and Sons, Inc., New York, etc. 1991.
[Pal33]
R. E. A. C. Paley. On orthogonal matrices, J. Math. Phys., 12: 311–320, 1933.
[Par48]
W. V. Parker. Sets of numbers associated with a matrix, Duke Math. J., 15: 711–715, 1948.
[Parl80]
Beresford N. Parlett. The Symmetric Eigenvalue Problem, Prentice Hall, Englewood Cliﬀs, N.J., 1980.
[Pea62]
C. Pearcy. A complete set of unitary invariants for operators generating ﬁnite W*algebras of type I, Paciﬁc J. Math., 12: 1405–1416, 1962.
[Per07]
O. Perron. Zur Theorie der Matrizen, Math. Ann., 64: 248–263, 1907.
[Radj62]
H. Radjavi. On unitary equivalence of arbitrary matrices, Trans. Amer. Math. Soc., 104: 363–373, 1962.
[Radon21]
J. Radon. Mengen konvexer Körper, die einen gemeinsamen Punkt enthalten, Mathematische Annalen, 83: 113–115, 1921.
[R¨ os33]
H. Röseler. Normalformen von Matrizen gegenüber unitären Transformationen, Dissertation, Universitätsverlag von Robert Noske in Borna-Leipzig, Leipzig, 1933.
[Rot88]
Joseph J. Rotman. An Introduction to the Theory of Groups, 3rd edition, Wm. C. Brown Publishers, Dubuque, Iowa, 1988.
[Rys50]
H. J. Ryser. A Note on a Combinatorial Problem, Proceedings of the American Mathematical Society, 1: 422–424, 1950.
[Rys63]
H. J. Ryser. Combinatorial Mathematics, The Mathematical Association of America, 1963.
[Rys69]
H. J. Ryser. Combinatorial conﬁgurations, SIAM Journal in Applied Mathematics, 17: 593–602, 1969.
[Rys82]
H. J. Ryser. The Existence of Symmetric Block Designs, Journal of Combinatorial Theory, Series A, 32: 103–105, 1982.
[Scar98]
V. Scarpis. Sui determinanti di valore massimo, Rend. R. Ist. Lombaro Sci. e Lett., 31: 1441–1446, 1898.
[Schm07]
E. Schmidt. Zur Theorie der linearen und nichtlinearen Integralgleichungen, I. Teil: Entwicklung willkürlicher Funktionen nach Systemen vorgeschriebener, Math. Ann., 63: 433–476, 1907.
[Sch09]
I. Schur. Über die charakteristischen Wurzeln einer linearen Substitution mit einer Anwendung auf die Theorie der Integralgleichungen, Math. Ann., 66: 488–510, 1909.
[Serg84]
V. V. Sergeˇichuk. Classiﬁcation of linear operators in a ﬁnitedimensional unitary space, Functional Anal. Appl., 18(3): 224–230, 1984.
[Sha82]
H. Shapiro. On a conjecture of Kippenhahn about the characteristic polynomial of a pencil generated by two Hermitian matrices, I, Linear Algebra Appl., 43: 201–221, 1982.
[Sha91]
H. Shapiro. A survey of canonical forms and invariants for unitary similarity, Linear Algebra Appl., 147: 101–167, 1991.
[Sha99]
H. Shapiro. The Weyr Characteristic, American Mathematical Monthly, 106: 919–929, 1999.
[ShTa80]
H. Shapiro and O. Taussky. Alternative proofs of a theorem of Moyls and Marcus on the numerical range of a square matrix, Linear and Multilinear Algebra, 8: 337–340, 1980.
[Spe40]
W. Specht. Zur Theorie der Matrizen, II, Jahresber. Deutsch. Math.Verein., 50: 19–23, 1940.
[Stew73]
G. W. Stewart. Introduction to Matrix Computations, Academic Press, New York and London, 1973.
[Stew93]
G. W. Stewart. On the early history of the singular value decomposition, SIAM Review, 35: 551–566, 1993.
[StewSun90] G. W. Stewart and Ji-guang Sun. Matrix Perturbation Theory, Academic Press, Boston, etc., 1990.
[Syl52]
J. J. Sylvester. A demonstration of the theorem that every homogeneous quadratic polynomial is reducible by real orthogonal substitutions to the form of a sum of positive and negative squares, Philosophical Magazine, IV: 138–142, 1852.
[Syl67]
J. J. Sylvester. Thoughts on inverse orthogonal matrices, simultaneous sign successions, and tessellated pavements in two or more colours, with applications to Newton's rule, ornamental tile-work, and the theory of numbers, Philosophical Magazine, 34: 461–475, 1867.
[Syl84]
J. J. Sylvester. Sur l’´equation en matrices px = xq, Comptes Rendus, 99: 67–71, 1884.
[Tarry00]
Gaston Tarry. Le Problème des 36 Officiers, Comptes Rendus, 1: 122–123, 1900.
[Tarry01]
Gaston Tarry. Le Problème des 36 Officiers, Comptes Rendus, 2: 170–203, 1901.
[Todd33]
J. A. Todd. A combinatorial problem, Journal of Mathematics and Physics, 12: 321–333, 1933.
[OTT49]
Olga Taussky Todd. A recurring theorem on determinants, American Mathematical Monthly, 56: 672–676, 1949.
[Toep18]
O. Toeplitz. Das Algebraische Analogon zu einem Satze von Fejer, Math Z., 2: 187–197, 1918.
[Tref97]
Lloyd N. Trefethen and David Bau. Numerical Linear Algebra, Society for Industrial and Applied Mathematics, Philadelphia, PA, 1997.
[TurA32]
H. W. Turnbull and A. C. Aitken. An Introduction to the Theory of Canonical Matrices, Blackie and Son Limited, London and Glasgow, 1932.
[VanLi82]
J. H. van Lint. Introduction to Coding Theory, Springer-Verlag, New York, 1982.
[Var62]
Richard Varga. Matrix Iterative Analysis, PrenticeHall, Englewood Cliﬀs, N.J., 1962.
[Var04]
Richard S. Varga. Gerˇsgorin and His Circles, Springer, Berlin, New York, 2004.
[WW49]
A. G. Walker and J. D. Weston. Inclusion theorems for the eigenvalues of a normal matrix, J. London Math. Soc., 24: 28–31, 1949.
[Wallis93]
W. Wallis. Hadamard Matrices, pp. 235–243 in Combinatorial and Graph-Theoretical Problems in Linear Algebra, Richard A. Brualdi, Shmuel Friedland, Victor Klee, editors, Springer-Verlag, New York, etc., 1993.
[Wedd34]
J. H. M. Wedderburn. Lectures on Matrices, American Mathematical Society Colloquium Publications, V. XVII, American Mathematical Society, New York, 1934.
[Weyl49]
H. Weyl. Inequalities between the two kinds of eigenvalues of a linear transformation, Proceedings of the National Academy of Sciences, 35: 408–411, 1949.
[Weyr90]
E. Weyr. Zur Theorie der bilinearen Formen, Monatsh. Math. und Physik, 1: 163–236, 1890.
[Weyr85]
E. Weyr. R´epartition des matrices en esp`eces et formation de toutes les esp`eces, C. R. Acad. Sci. Paris, 100: 966–969, 1885.
[Wieg53]
N. Wiegmann. Pairs of normal matrices with property L, Proc. Am. Math. Soc., 4: 35–36, 1953.
[Wiel50]
H. Wielandt. Unzerlegbare nichtnegative Matrizen, Math. Z., 52: 642–648, 1950.
[Wiel53]
H. Wielandt. Pairs of normal matrices with property L, J. Res. Nat. Bur. Standards, 51: 89–90, 1953.
[Wiel67]
H. Wielandt, Robert R. Meyer. Topics in the Analytic Theory of Matrices, Lecture notes prepared by Robert R. Meyer from a course by Helmut Wielandt, Department of Mathematics, University of Wisconsin, Madison, Wisconsin, 1967.
[Wilk65]
J. H. Wilkinson. The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965.
[Will44]
J. Williamson. Hadamard’s determinant theorem and the sum of four squares, Duke Math. J., 11: 65–81, 1944.
[Will47]
J. Williamson. Note on Hadamard’s determinant theorem, Bull. Amer. Math. Soc., 53: 608–613, 1947.
[Witt37]
E. Witt. Theorie der quadratischen Formen in beliebigen K¨ orpern, Journal f¨ ur die reine und angewandte Mathematik, 176: 31–44, 1937.
[Youn88]
N. Young. An Introduction to Hilbert Space, Cambridge University Press, 1988.
Index
absorbing Markov chain, 297 absorbing state, 297 additive commutator, 156 adjacency matrix, 223–225 adjacency matrix of a digraph, 236 adjacency matrix of a graph, 170 adjacent edges, 222 adjacent vertices, 169 adjoint, 18, 36 AF , 77 algebraic multiplicity, 41 asymptotically stable, 289 balanced incomplete block design, 185, 189–191 banded, 131 bandwidth, 131 basis, 3 Bessel’s inequality, 28, 31 best least squares ﬁt, 26 BIBD, 185, 189–191 symmetric, 190–191 bidiagonal, 130 bijective, 4 bilinear form, 89 conjugate bilinear, 90 conjugate symmetric, 90 symmetric, 89 Birkhoﬀ theorem, 176–178 block cycle matrices, 165–167 block cyclic form, 244 block cyclic matrices, 244 block design, 185, 189
symmetric, 190–191 block diagonal, 11 block triangular, 11 block triangular matrice, 84 block triangular matrix, 80 Bruck–Ryser–Chowla theorem, 187, 192, 202–204 C[a, b], 18 canonical form, 51 Cauchy’s interlacing theorem, 97 Cauchy–Schwarz inequality, 20 Cayley–Hamilton theorem, 64–65 chain, 224 directed, 236 change of basis, 6–7 change of basis matrix, 6 characteristic polynomial, 39 characteristic polynomial of a graph, 226 characteristic values, 40 Cholesky decomposition, 140 circulant matrices, 163–165 closed walk, 224 cofactors, 12 expansion by, 12 cohort model, 281 Collatz, 253 column operations, 102–105 column rank, 5 column space, 5 commutator, additive, 156 companion matrix, 66
complement of a graph, 232 complete bipartite graph, 227 complete graph, 222 complete metric space, 27 complete orthonormal sequence, 28 components, strong, 236 condensation digraph, 240 conﬁguration, 171 conformally partitioned, 10 congruence, 198 congruent matrices, 91 conjugate bilinear, 17 conjugate bilinear form, 90 conjugate symmetric bilinear form, 90 conjugate symmetric inner product, 17 conjugate transpose, 8 conjunctive matrices, 91 connected graph, 225 diameter, 225 continuous linear dynamical system, 279 convergence in the norm, 31, 114, 116 coordinates, with respect to a basis, 3 coset leader, 273 Courant–Fischer theorem, 95–96 covers, 274 cubic graph, 223, 227 cycle directed, 236 cycle in a graph, 224 deﬂation, 136 degree of a regular graph, 223 degree of a vertex, 222 determinant, 11–12, 175 diagonable, 42 diagonal matrix, 8 diagonalizability, 41–44 diagonalizable, 42 diagonalization, 41–44 diagonally dominant, 46 diameter of a graph, 225 diﬀerence equation, 280 diﬀerential equation, 280 digraph, 235 of a matrix, 237 dimension, 3 direct sum matrix, 11 subspace, 2 directed chain, 236 cycle, 236
trail, 236 directed edge, 235 directed graph, 235–238 directed walk, 236 discrete linear dynamical system, 279 distance between vertices of a graph, 225 dominant eigenvalue, 135 dot product, 4 doubly stochastic matrices, 176–178 dual basis, 35 dual space, 35 dynamical system, 279–281 Eckart–Young theorem, 124 edge directed, 235 edge of a graph, 169, 221 edges adjacent, 222 multiple, 223 eigenspace, 40 eigenvalue, 39–40 of a Hermitian matrix, 94, 95, 97 of a positive deﬁnite matrix, 100 eigenvector, 39 left, 40 elementary matrix, 112 elementary row and column operations, 102–105 elementary row operations, 138 ellipse, 145 endpoints of an edge, 222 entrywise product, 178 equilibrium point, 289, 291 equivalent matrices, 7 equivalent norms, 114 error vector, 267, 269 errorcorrecting codes, 265–267 expansion by cofactors, 12 exponential of a matrix, 51 Fibonacci numbers, 279, 286 ﬁeld, 1 ﬁeld of values, 143–150 convexity, 148 twobytwo matrices, 145–147 ﬁnite dimensional, 3 ﬁnite projective planes, 191–197 ﬁrstorder diﬀerence equation, 283 ﬁrstorder diﬀerential equation, 283 ﬁxed point, 289, 291
Index
ﬂag, 188 Fn , 2 Fourier coeﬃcients, 30 Fourier series, 29 Frobenius, 153, 252, 262 Frobenius norm, 77, 118, 123 Frobenius normal form, 242 Frobenius–Schur index, 245 full column rank, 70 general digraph, 235 general graph, 223 generalized Hamming code, 272 generator matrix, 268–269 geometric multiplicity, 40 Gerˇsgorin Circle Theorem, 45–46, 119, 252 Givens, 149 Gram–Schmidt orthogonalization process, 33 graph, 169, 221 characteristic polynomial of, 226 complement of, 232 complete, 222 connected, 225 cubic, 223 diameter, 225 edge, 221 general, 223 imprimitive, 242 isomorphic, 222 order, 221 primitive, 242 regular, 223 simple, 222 spectrum, 226 strongly regular, 227–231 vertex, 221 Hadamard, 33 Hadamard codes, 276 Hadamard determinant inequality, 33, 37, 105 Hadamard matrices, 207–208 Hadamard product, 15, 178 Hall, Philip, 174–175 Hamming code, 266–267, 271–272 generalized, 272 Hamming distance, 269 Helly’s theorem, 181 Hermitian matrix, 8, 92, 139 eigenvalues, 94, 95, 97
313
ﬁeld of values, 144 inertia, 92, 102 principal submatrix of, 97 repeated eigenvalue, 98 Hessenberg matrix, 131 Hilbert space, 25, 27 Hoﬀman, 154, 179 Hoﬀman–Wielandt theorem, 154, 178–180 homogeneous linear system, 285 Householder matrix, 38 Householder transformation, 38, 127–133 hyperplane, 23 identity matrix, 5 image, 4 imprimitive graph, 242 imprimitive matrix, 260–262 incidence matrices for 2designs, 189–191 incidence matrix, 171 incident, 222 indeﬁnite, 100 indegree, 235 index of imprimitivity, 242–244 index of nilpotency, 57 induced norm, 117 inertia Sylvester’s law, 93–94 inertia of a Hermitian matrix, 92, 102 initial vertex, 235 injective, 4 inner product, 17 conjugate bilinear, 17 conjugate symmetric, 17 in Cn , 21–22 standard, 18 symmetric, 17 interlacing theorem, 97 invariant subspace, 10, 151 inverse power method, 136 invertible, 4, 6 irreducible components, 242 irreducible matrix, 238, 259–260 isolated vertex, 222 isomorphic, 4 isomorphic graphs, 222 isomorphism, 4 J, 163 Jacobi identity, 157
314
Jn , 40 Jordan block, 58 Jordan canonical form, 52, 63–64 Jp (λ), 58 kernel, 3 Km,m , 227 K¨ onig–Egerv´ ary theorem, 173 Kronecker delta, 35 l2 , 116 L property, 158–161 Lagrange expansion by cofactors, 12 Lagrange foursquare theorem, 202 latent roots, 40 lattice graph L2 (m), 231 LDU factorization, 138–140 least squares ﬁt, 26 left eigenvector, 40 length of a vector, 17, 18 Leslie matrix, 282 Lie algebra, 157 line, 3 line of a matrix, 173 linear code, 267–269 linear combination, 2 linear constant coeﬃcient systems, 285–288 linear dynamical system, 279–281 linear functional, 35 linear transformation, 3 linearly dependent, 2 linearly independent, 2 loop, 169, 221, 222, 235 lower triangular matrix, 9 lp norm, 114 l 2 , 25 L2 (a, b), 30 majorization, 180 Markov chain, 295–300 absorbing, 297 Markov process, 295–300 matrix, 4 block diagonal, 11 block triangular, 11, 80 companion, 66 diagonal, 8 equivalent, 7 Hermitian, 8, 92, 139 imprimitive, 260–262 irreducible, 259–260
Index
lower triangular, 9 m × n, 4 nilpotent, 57–61 nonderogatory, 66 nonnegative, 249, 258–259 normal, 78, 81–83 of a linear transformation, 5 orthogonal, 32 positive, 249 positive deﬁnite, 22, 99–102 primitive, 260–262 product, 4 similar, 7, 40 skewHermitian, 8 skewsymmetric, 8 strictly lower triangular, 9 strictly upper triangular, 9 symmetric, 8, 139 tensor product of, 13 triangular, 9, 40 unitary, 15, 32 upper triangular, 9 matrix norms, 117–118 max norm, 114 maximin, 95–96 McCoy’s theorem, 157–158 Metzler matrix, 293 minimal polynomial, 65–66 minimax, 95–96 minimum distance of a code, 270–271 Minkowski’s inequality, 114 multigraph, 223 multiple edges, 223 multiplicity algebraic, 41 geometric, 40 nearest neighbor decoding, 269–270 negative deﬁnite, 99, 101 negative semideﬁnite, 99 nil algebra, 156 nil ideal, 156 nilpotent algebra, 156 nilpotent ideal, 156 nilpotent matrix, 57–61 nonderogatory, 66 nonnegative matrix, 249, 258–259 nonnegative system, 292 nonsingular, 6 norm, 113–118, 123 norm, Frobenius, 77 normal matrix, 78, 81–83
Index
ﬁeld of values, 144 nset, 171 null space, 3 nullity, 4 number of walks of length m in a graph, 225 numerical range, 143 onetoone, 4 onto, 4 open walk, 224 operator norm, 123 order of a graph, 221 order of a projective plane, 191, 195 orthogonal complement, 23 projection onto a line, 19–20 projection onto a subspace, 23–26 set, 19 vectors, 19 orthogonal matrix, 32 orthogonal projection onto a line, 19–20 onto a subspace, 23–26 orthonormal, 19 orthonormal basis, 21 orthostochastic, 178 outdegree, 235 outer product, 18 Paley, 208 Paley’s theorem, 209 parity check, 266 parity check matrix, 267–268, 271 Parker, 81 partitioned conformally, 10 partitioned matrices, 9 Pearcy, 86 pencil, 158 perfect code, 272 permanent, 175–176 permutation matrix, 172 Perron’s theorem, 249, 254–257 Perron–Frobenius theorem, 249 Petersen graph, 227 plane, 3 polar decomposition, 107–108, 126–127 polar factorization, 107–108 population cohort model, 281 positive deﬁnite matrix, 22, 99–102, 139 eigenvalues, 100 positive matrix, 249
positive semidefinite, 99
positive semidefinite matrix
  square root of, 106
positivity, 17
power method, 135
power series, 288
primitive graph, 242
primitive matrix, 245, 260–262
principal submatrix, 9, 47
principal submatrix of a Hermitian matrix, 97
probability vector, 295
product of matrices, 4
projection, 15, 49
projective plane of order 2, 185, 186, 191
projective planes, 191–197
proper values, 40
property L, 158–161
property P, 152
Pythagorean theorem, 18–19
QR factorization, 33, 129–130
quadratic character, 209
quadratic form, 198–199
quadratic residue matrix, 209–211
range space, 4
rank, 4, 47
  column, 5
  of a linear transformation, 4
  of a matrix, 5
  row, 5
rank k approximation, 123–125
rank plus nullity theorem, 4
rational canonical form, 64
Rayleigh quotient, 94
Rayleigh–Ritz ratio, 94
reducible matrix, 238
regular graph, 223
  degree of, 223
regular of degree k, 236
repetition code, 265–266
right shift operator, 31
row echelon form, 138
row operations, 102–105
row rank, 5
row space, 5
row stochastic matrix, 295
scalar, 2
Scarpis, 208
Schmidt, 124
Schur, 245
Schur complement, 10
Schur, trace inequality, 80
Schur, unitary triangularization theorem, 78
SDR, 174, 176
Segre characteristic, 61
sequences
  square summable, 25
similar matrices, 7
similarity, 7, 51
simple graph, 222
simultaneous row and column operations, 102–105
simultaneous triangularization, 151–158
Singleton bound, 271
singular value decomposition, 108–109, 121–126
singular values, 108–109, 121–122
skew-Hermitian matrix, 8
skew-symmetric matrix, 8
span, 2
spanning set, 2
Specht’s theorem, 86
spectral norm, 123
spectral radius, 46, 123, 249, 252
spectral theorem, 79, 83
spectrum, 41
spectrum of a graph, 226
sphere-packing bound, 272
square root of positive semidefinite matrix, 106
square summable sequences, 25
stable equilibrium point, 289, 291
standard inner product, 18
state vector, 279
Steiner system, 185
stochastic matrix, 295
strictly lower triangular matrix, 9
strictly upper triangular matrices, 155
strictly upper triangular matrix, 9
strong components of a digraph, 236
strongly connected, 236, 238
strongly connected vertices, 236
strongly regular graph, 227–231
subdigraph, 236
submatrix, 9
  principal, 9
subspace, 2
  direct sum, 2
  invariant, 10
  sum, 2
subword, 274
sum
  direct, 2
  subspace, 2
sum norm, 114
supply-demand model, 284
surjective, 4
Sylvester’s law of inertia, 93–94
Sylvester’s theorem, 53–54
symmetric 2-design, 190–191
symmetric bilinear form, 89
symmetric block design, 190–191
symmetric inner product, 17
symmetric matrix, 8, 139
syndrome, 273
system of distinct representatives, 174, 176
t-designs, 185–189
tensor product of Hadamard matrices, 208
tensor product of matrices, 13
tensor products, 13
term rank, 173, 177
terminal vertex, 235
T(m), 230
trace, 15, 46
trail, 224
  directed, 236
transient state, 297
transition matrix, 295
transition probabilities, 295
transpose, 8
triangle in a graph, 224
triangle inequality, 113
triangular graph, 230
triangular matrix, 9, 40
triangularization, simultaneous, 151–158
tridiagonal matrix, 131–133
unitarily invariant norm, 118
unitary equivalence, 77
unitary invariants, 86
unitary matrix, 15, 32
unitary similarity, 77, 84–86
unitary space, 18
unitary transformation, 31
upper bidiagonal, 130
upper triangular matrix, 9
valency of a vertex, 222
Vandermonde matrix, 34, 167
vector norms, 113–116
vector space, 1
vertex
  degree of, 222
  initial, 235
  isolated, 222
  terminal, 235
vertices of a graph, 169, 221
walk
  closed, 224
  directed, 236
  open, 224
walk in a graph, 224
Weyr, 60
Weyr characteristic, 60–61
Weyr normal form, 67–68
Wielandt, 154, 179
Williamson, 212–216
Witt cancellation theorem, 200
(0, 1)-matrix, 169
zero matrix, 5
zero-one matrix, 169
Published Titles in This Series
24 Helene Shapiro, Linear Algebra and Matrices, 2015
23 Sergei Ovchinnikov, Number Systems, 2015
22 Hugh L. Montgomery, Early Fourier Analysis, 2014
21 John M. Lee, Axiomatic Geometry, 2013
20 Paul J. Sally, Jr., Fundamentals of Mathematical Analysis, 2013
19 R. Clark Robinson, An Introduction to Dynamical Systems: Continuous and Discrete, Second Edition, 2012
18 Joseph L. Taylor, Foundations of Analysis, 2012
17 Peter Duren, Invitation to Classical Analysis, 2012
16 Joseph L. Taylor, Complex Variables, 2011
15 Mark A. Pinsky, Partial Differential Equations and Boundary-Value Problems with Applications, Third Edition, 1998
14 Michael E. Taylor, Introduction to Differential Equations, 2011
13 Randall Pruim, Foundations and Applications of Statistics, 2011
12 John P. D’Angelo, An Introduction to Complex Analysis and Geometry, 2010
11 Mark R. Sepanski, Algebra, 2010
10 Sue E. Goodman, Beginning Topology, 2005
9 Ronald Solomon, Abstract Algebra, 2003
8 I. Martin Isaacs, Geometry for College Students, 2001
7 Victor Goodman and Joseph Stampfli, The Mathematics of Finance, 2001
6 Michael A. Bean, Probability: The Science of Uncertainty, 2001
5 Patrick M. Fitzpatrick, Advanced Calculus, Second Edition, 2006
4 Gerald B. Folland, Fourier Analysis and Its Applications, 1992
3 Bettina Richmond and Thomas Richmond, A Discrete Transition to Advanced Mathematics, 2004
2 David Kincaid and Ward Cheney, Numerical Analysis: Mathematics of Scientific Computing, Third Edition, 2002
1 Edward D. Gaughan, Introduction to Analysis, Fifth Edition, 1998
Linear algebra and matrix theory are fundamental tools for almost every area of mathematics, both pure and applied. This book combines coverage of core topics with an introduction to some areas in which linear algebra plays a key role, for example, block designs, directed graphs, error-correcting codes, and linear dynamical systems. Notable features include a discussion of the Weyr characteristic and Weyr canonical forms, and their relationship to the better-known Jordan canonical form; the use of block cyclic matrices and directed graphs to prove Frobenius’s theorem on the structure of the eigenvalues of a nonnegative, irreducible matrix; and the inclusion of such combinatorial topics as BIBDs, Hadamard matrices, and strongly regular graphs. Also included are McCoy’s theorem about matrices with property P, the Bruck–Ryser–Chowla theorem on the existence of block designs, and an introduction to Markov chains. This book is intended for those who are familiar with the linear algebra covered in a typical first course and are interested in learning more advanced results.
For additional information and updates on this book, visit www.ams.org/bookpages/amstext24
AMS on the Web: www.ams.org
AMSTEXT/24
The Sally Series
This series was founded by the highly respected mathematician and educator Paul J. Sally, Jr.