English. 370 pages. 1959.
THE THEORY OF MATRICES

F. R. GANTMACHER

VOLUME ONE

1959
PREFACE

The matrix calculus is widely applied nowadays in various branches of mathematics, mechanics, theoretical physics, theoretical electrical engineering, etc. However, neither in the Soviet nor the foreign literature is there a book that gives a sufficiently complete account of the problems of matrix theory and of its diverse applications. The present book is an attempt to fill this gap in the mathematical literature.

The book is based on lecture courses on the theory of matrices and its applications that the author has given several times in the course of the last seventeen years at the Universities of Moscow and Tiflis and at the Moscow Institute of Physical Technology.

The book is meant not only for mathematicians (undergraduates and research students) but also for specialists in allied fields (physics, engineering) who are interested in mathematics and its applications. Therefore the author has endeavoured to make his account of the material as accessible as possible, assuming only that the reader is acquainted with the theory of determinants and with the usual course of higher mathematics within the programme of higher technical education. Only a few isolated sections in the last chapters of the book require additional mathematical knowledge on the part of the reader. Moreover, the author has tried to keep the individual chapters as far as possible independent of each other. For example, Chapter V, Functions of Matrices, does not depend on the material contained in Chapters II and III. At those places of Chapter V where fundamental concepts introduced in Chapter IV are being used for the first time, the corresponding references are given. Thus, a reader who is acquainted with the rudiments of the theory of matrices can immediately begin with reading the chapters that interest him.

The book consists of two parts, containing fifteen chapters.
In Chapters I and III, information about matrices and linear operators is developed ab initio and the connection between operators and matrices is introduced. Chapter II expounds the theoretical basis of Gauss's elimination method and certain associated effective methods of solving a system of n linear equations, for large n. In this chapter the reader also becomes acquainted with the technique of operating with matrices that are divided into rectangular 'blocks'.
In Chapter IV we introduce the extremely important 'characteristic' and 'minimal' polynomials of a square matrix, and the 'adjoint' and 'reduced adjoint' matrices.

In Chapter V, which is devoted to functions of matrices, we give the general definition of f(A) as well as concrete methods of computing it, where f(λ) is a function of a scalar argument λ and A is a square matrix. The concept of a function of a matrix is used in §§ 5 and 6 of this chapter for a complete investigation of the solutions of a system of linear differential equations of the first order with constant coefficients. Both the concept of a function of a matrix and this latter investigation of differential equations are based entirely on the concept of the minimal polynomial of a matrix and, in contrast to the usual exposition, do not use the so-called theory of elementary divisors, which is treated in Chapters VI and VII.

These five chapters constitute a first course on matrices and their applications. Very important problems in the theory of matrices arise in connection with the reduction of matrices to a normal form. This reduction is carried out on the basis of Weierstrass' theory of elementary divisors. In view of the importance of this theory we give two expositions in this book: an analytic one in Chapter VI and a geometric one in Chapter VII. We draw the reader's attention to §§ 7 and 8 of Chapter VI, where we study effective methods of finding a matrix that transforms a given matrix to normal form. In § 8 of Chapter VII we investigate in detail the method of A. N. Krylov for the practical computation of the coefficients of the characteristic polynomial.

In Chapter VIII certain types of matrix equations are solved. We also consider here the problem of determining all the matrices that are permutable with a given matrix, and we study in detail the many-valued matrix functions (the m-th roots of a matrix and ln A).
Chapters IX and X deal with the theory of linear operators in a unitary space and the theory of quadratic and hermitian forms. These chapters do not depend on Weierstrass' theory of elementary divisors and use, of the preceding material, only the basic information on matrices and linear operators contained in the first three chapters of the book. In § 9 of Chapter X we apply the theory of forms to the study of the principal oscillations of a system with n degrees of freedom. In § 11 of this chapter we give an account of Frobenius' deep results on the theory of Hankel forms. These results are used later, in Chapter XV, to study special cases of the Routh-Hurwitz problem.

The last five chapters form the second part of the book [the second volume, in the present English translation]. In Chapter XI we determine normal forms for complex symmetric, skew-symmetric, and orthogonal matrices and establish interesting connections of these matrices with real matrices of the same classes and with unitary matrices.

In Chapter XII we expound the general theory of pencils of matrices of the form A + λB, where A and B are arbitrary rectangular matrices of the same dimensions. Just as the study of regular pencils of matrices A + λB is based on Weierstrass' theory of elementary divisors, so the study of singular pencils is built upon Kronecker's theory of minimal indices, which is, as it were, a further development of Weierstrass's theory. By means of Kronecker's theory (the author believes that he has succeeded in simplifying the exposition of this theory) we establish in Chapter XII canonical forms of the pencil of matrices A + λB in the most general case. The results obtained there are applied to the study of systems of linear differential equations with constant coefficients.

In Chapter XIII we explain the remarkable spectral properties of matrices with nonnegative elements and consider two important applications of matrices of this class: 1) homogeneous Markov chains in the theory of probability and 2) oscillatory properties of elastic vibrations in mechanics. The matrix method of studying homogeneous Markov chains was developed in the book [25] by V. I. Romanovskii and is based on the fact that the matrix of transition probabilities in a homogeneous Markov chain with a finite number of states is a matrix with nonnegative elements of a special type (a 'stochastic' matrix).

The oscillatory properties of elastic vibrations are connected with another important class of nonnegative matrices, the 'oscillation matrices.' These matrices and their applications were studied by M. G. Krein jointly with the author of this book. In Chapter XIII, only certain basic results in this domain are presented. The reader can find a detailed account of the whole material in the monograph [7].
In Chapter XIV we compile the applications of the theory of matrices to systems of differential equations with variable coefficients. The central place (§§ 5-9) in this chapter belongs to the theory of the multiplicative integral (Produktintegral) and its connection with Volterra's infinitesimal calculus. These problems are almost entirely unknown in Soviet mathematical literature. In the first sections and in § 11, we study reducible systems (in the sense of Lyapunov) in connection with the problem of stability of motion; we also give certain results of N. P. Erugin. Sections 9-11 refer to the analytic theory of systems of differential equations. Here we clarify an inaccuracy in Birkhoff's fundamental theorem, which is usually applied to the investigation of the solution of a system of differential equations in the neighborhood of a singular point, and we establish a canonical form of the solution in the case of a regular singular point.
In § 12 of Chapter XIV we give a brief survey of some results of the fundamental investigations of I. A. Lappo-Danilevskii on analytic functions of several matrices and their applications to differential systems.

The last chapter, Chapter XV, deals with the applications of the theory of quadratic forms (in particular, of Hankel forms) to the Routh-Hurwitz problem of determining the number of roots of a polynomial in the right half-plane (Re z > 0). The first sections of the chapter contain the classical treatment of the problem. In § 5 we give the theorem of A. M. Lyapunov in which a stability criterion is set up which is equivalent to the Routh-Hurwitz criterion. Together with the stability criterion of Routh-Hurwitz we give, in § 11 of this chapter, the comparatively little known criterion of Lienard and Chipart, in which the number of determinantal inequalities is only about half of that in the Routh-Hurwitz criterion.

At the end of Chapter XV we exhibit the close connection between stability problems and two remarkable theorems of A. A. Markov and P. L. Chebyshev, which were obtained by these celebrated authors on the basis of the expansion of certain continued fractions of special types in series of decreasing powers of the argument. Here we give a matrix proof of these theorems.

This, then, is a brief summary of the contents of this book.

F. R. Gantmacher
PUBLISHERS' PREFACE

The Publishers wish to thank Professor Gantmacher for his kindness in communicating to the translator new versions of several paragraphs of the original Russian-language book.

The Publishers also take pleasure in thanking the VEB Deutscher Verlag der Wissenschaften, whose many published translations of Russian scientific books into the German language include a counterpart of the present work, for their kind spirit of cooperation in agreeing to the use of their formulas in the preparation of the present work.

No material changes have been made in the text in translating the present work from the Russian except for the replacement of several paragraphs by the new versions supplied by Professor Gantmacher. Some changes in the references and in the Bibliography have been made for the benefit of the English-language reader.
CONTENTS

Preface ..... iii
Publishers' Preface ..... vi

I. MATRICES AND OPERATIONS ON MATRICES ..... 1
§ 1. Matrices. Basic notation ..... 1
§ 2. Addition and multiplication of rectangular matrices ..... 3
§ 3. Square matrices ..... 12
§ 4. Compound matrices. Minors of the inverse matrix ..... 19

II. THE ALGORITHM OF GAUSS AND SOME OF ITS APPLICATIONS ..... 23
§ 1. Gauss's elimination method ..... 23
§ 2. Mechanical interpretation of Gauss's algorithm ..... 28
§ 3. Sylvester's determinant identity ..... 31
§ 4. The decomposition of a square matrix into triangular factors ..... 33
§ 5. The partition of a matrix into blocks. The technique of operating with partitioned matrices. The generalized algorithm of Gauss ..... 41

III. LINEAR OPERATORS IN AN n-DIMENSIONAL VECTOR SPACE ..... 50
§ 1. Vector spaces ..... 50
§ 2. A linear operator mapping an n-dimensional space into an m-dimensional space ..... 55
§ 3. Addition and multiplication of linear operators ..... 57
§ 4. Transformation of coordinates ..... 59
§ 5. Equivalent matrices. The rank of an operator. Sylvester's inequality
§ 6. Linear operators mapping an n-dimensional space into itself ..... 66
§ 7. Characteristic values and characteristic vectors of a linear operator ..... 69
§ 8. Linear operators of simple structure ..... 72

IV. THE CHARACTERISTIC POLYNOMIAL AND THE MINIMAL POLYNOMIAL OF A MATRIX ..... 76
§ 1. Addition and multiplication of matrix polynomials ..... 76
§ 2. Right and left division of matrix polynomials ..... 77
§ 3. The generalized Bezout theorem ..... 80
§ 4. The characteristic polynomial of a matrix. The adjoint matrix ..... 82
§ 5. The method of Faddeev for the simultaneous computation of the coefficients of the characteristic polynomial and of the adjoint matrix ..... 87
§ 6. The minimal polynomial of a matrix ..... 89

V. FUNCTIONS OF MATRICES ..... 95
§ 1. Definition of a function of a matrix ..... 95
§ 2. The Lagrange-Sylvester interpolation polynomial ..... 101
§ 3. Other forms of the definition of f(A). The components of the matrix A ..... 104
§ 4. Representation of functions of matrices by means of series ..... 110
§ 5. Application of a function of a matrix to the integration of a system of linear differential equations with constant coefficients ..... 116
§ 6. Stability of motion in the case of a linear system ..... 125

VI. EQUIVALENT TRANSFORMATIONS OF POLYNOMIAL MATRICES. ANALYTIC THEORY OF ELEMENTARY DIVISORS ..... 130
§ 1. Elementary transformations of a polynomial matrix ..... 130
§ 2. Canonical form of a λ-matrix ..... 134
§ 3. Invariant polynomials and elementary divisors of a polynomial matrix ..... 139
§ 4. Equivalence of linear binomials ..... 145
§ 5. A criterion for similarity of matrices ..... 147
§ 6. The normal forms of a matrix ..... 149
§ 7. The elementary divisors of the matrix f(A) ..... 153
§ 8. A general method of constructing the transforming matrix ..... 159
§ 9. Another method of constructing a transforming matrix ..... 164

VII. THE STRUCTURE OF A LINEAR OPERATOR IN AN n-DIMENSIONAL SPACE ..... 175
§ 1. The minimal polynomial of a vector and a space (with respect to a given linear operator) ..... 175
§ 2. Decomposition into invariant subspaces with coprime minimal polynomials ..... 177
§ 3. Congruence. Factor space ..... 181
§ 4. Decomposition of a space into cyclic invariant subspaces ..... 184
§ 5. The normal form of a matrix ..... 190
§ 6. Invariant polynomials. Elementary divisors ..... 193
§ 7. The Jordan normal form of a matrix ..... 200
§ 8. Krylov's method of transforming the secular equation ..... 202

VIII. MATRIX EQUATIONS ..... 215
§ 1. The equation AX = XB ..... 215
§ 2. The special case A = B. Commuting matrices ..... 220
§ 3. The equation AX - XB = C ..... 225
§ 4. The scalar equation f(X) = 0 ..... 225
§ 5. Matrix polynomial equations ..... 227
§ 6. The extraction of m-th roots of a nonsingular matrix ..... 231
§ 7. The extraction of m-th roots of a singular matrix ..... 234
§ 8. The logarithm of a matrix ..... 239

IX. LINEAR OPERATORS IN A UNITARY SPACE ..... 242
§ 1. General considerations ..... 242
§ 2. Metrization of a space ..... 243
§ 3. Gram's criterion for linear dependence of vectors ..... 246
§ 4. Orthogonal projection ..... 248
§ 5. The geometrical meaning of the Gramian and some inequalities ..... 250
§ 6. Orthogonalization of a sequence of vectors ..... 256
§ 7. Orthonormal bases ..... 262
§ 8. The adjoint operator ..... 265
§ 9. Normal operators in a unitary space ..... 268
§ 10. The spectra of normal, hermitian, and unitary operators ..... 270
§ 11. Positive-semidefinite and positive-definite hermitian operators ..... 274
§ 12. Polar decomposition of a linear operator in a unitary space. Cayley's formulas ..... 276
§ 13. Linear operators in a euclidean space ..... 280
§ 14. Polar decomposition of an operator and the Cayley formulas in a euclidean space ..... 286
§ 15. Commuting normal operators ..... 290

X. QUADRATIC AND HERMITIAN FORMS ..... 294
§ 1. Transformation of the variables in a quadratic form ..... 294
§ 2. Reduction of a quadratic form to a sum of squares. The law of inertia ..... 296
§ 3. The methods of Lagrange and Jacobi of reducing a quadratic form to a sum of squares ..... 299
§ 4. Positive quadratic forms ..... 304
§ 5. Reduction of a quadratic form to principal axes ..... 308
§ 6. Pencils of quadratic forms ..... 310
§ 7. Extremal properties of the characteristic values of a regular pencil of forms ..... 317
§ 8. Small oscillations of a system with n degrees of freedom ..... 326
§ 9. Hermitian forms ..... 331
§ 10. Hankel forms ..... 338

BIBLIOGRAPHY ..... 351
INDEX ..... 369
CHAPTER I

MATRICES AND OPERATIONS ON MATRICES

§ 1. Matrices. Basic Notation

1. Let F be a given number field.1

DEFINITION 1: A rectangular array of numbers of the field F

    || a_11  a_12  ...  a_1n ||
    || a_21  a_22  ...  a_2n ||        (1)
    || . . . . . . . . . . . ||
    || a_m1  a_m2  ...  a_mn ||
is called a matrix. When m = n, the matrix is called square and the number m, equal to n, is called its order. In the general case the matrix is called rectangular (of dimension m x n). The numbers that constitute the matrix are called its elements.

NOTATION: In the double-subscript notation for the elements, the first subscript always denotes the row and the second subscript the column containing the given element.

As an alternative to the notation (1) for a matrix we shall also use the abbreviation

    || a_ik ||    (i = 1, 2, ..., m;  k = 1, 2, ..., n).    (2)

Often the matrix (1) will also be denoted by a single letter, for example A. If A is a square matrix of order n, then we shall write A = ||a_ik||_1^n. The determinant of a square matrix A = ||a_ik||_1^n will be denoted by |a_ik|_1^n or by |A|.
1 A number field is defined as an arbitrary collection of numbers within which the four operations of addition, subtraction, multiplication, and division by a nonzero number can always be carried out. Examples of number fields are: the set of all rational numbers, the set of all real numbers, and the set of all complex numbers. All the numbers that occur in the sequel are assumed to belong to the number field given initially.
We introduce a concise notation for determinants formed from elements of the given matrix:

so that r_A = r_B. On comparing this with (28), we have: when a rectangular matrix is multiplied on the left or on the right by a nonsingular matrix, the rank of the original matrix remains unchanged.

Note that (26) implies |A| . |A^{-1}| = 1, i.e.

    |A^{-1}| = 1 / |A|.
For any two nonsingular matrices we have

    (AB)^{-1} = B^{-1} A^{-1}.    (30)

3. All the matrices of order n form a ring11 with unit element E. Since in this ring the operation of multiplication by a number of F is defined, and since there exists a basis of n^2 linearly independent matrices in terms of which all the matrices of order n can be expressed linearly,12 the ring of matrices of order n is an algebra.13

11 A ring is a collection of elements in which two operations are defined and can always be carried out uniquely: the 'addition' of two elements (with the commutative and associative properties) and the 'multiplication' of two elements (with the associative and distributive properties with respect to addition); moreover, the addition is reversible. See, for example, van der Waerden, Modern Algebra, § 14.
12 For, an arbitrary matrix A = ||a_ik||_1^n with elements in F can be represented in the form A = SUM_{i,k=1}^n a_ik E_ik, where E_ik is the matrix of order n in which there is a 1 at the intersection of the i-th row and the k-th column and all the other elements are zeros.
13 See, for example, van der Waerden, Modern Algebra, § 17.
All the square matrices of order n form a commutative group with respect to the operation of addition.14 All the nonsingular matrices of order n form a (noncommutative) group with respect to the operation of multiplication.

A square matrix A = ||a_ik||_1^n is called upper triangular (lower triangular) if all the elements below (above) the main diagonal are zero:

    || a_11  a_12  ...  a_1n ||        || a_11   0    ...   0   ||
A = ||  0    a_22  ...  a_2n ||,   A = || a_21  a_22  ...   0   ||
    || . . . . . . . . . . . ||        || . . . . . . . . . . . ||
    ||  0     0    ...  a_nn ||        || a_n1  a_n2  ...  a_nn ||
A diagonal matrix is a special case both of an upper triangular matrix and of a lower triangular matrix. Since the determinant of a triangular matrix is equal to the product of its diagonal elements, a triangular (and, in particular, a diagonal) matrix is nonsingular if and only if all its diagonal elements are different from zero.

It is easy to verify that the sum and the product of two diagonal (upper triangular, lower triangular) matrices is a diagonal (upper triangular, lower triangular) matrix and that the inverse of a nonsingular diagonal (upper triangular, lower triangular) matrix is a matrix of the same type. Therefore:

1. All the diagonal matrices of order n form a commutative group under the operation of addition, as do all the upper triangular matrices or all the lower triangular matrices.
2. All the nonsingular diagonal matrices form a commutative group under multiplication.
3. All the nonsingular upper (lower) triangular matrices form a (noncommutative) group under multiplication.

4. We conclude this section with a further important operation on matrices: transposition.

14 A group is a set of objects in which an operation is defined which associates with any two elements a and b of the set a well-defined third element a * b of the same set, provided that
1) the operation has the associative property ((a * b) * c = a * (b * c)),
2) there exists a unit element e in the set (a * e = e * a = a), and
3) for every element a of the set there exists an inverse element a^{-1} (a * a^{-1} = a^{-1} * a = e).
A group is called commutative, or abelian, if the group operation has the commutative property. Concerning the group concept see, for example, [53], pp. 245 ff.
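The closure properties of triangular matrices stated above are easy to test numerically. A small Python sketch (helper names ours; exact arithmetic via fractions; the inversion routine assumes a nonsingular upper triangular input):

```python
from fractions import Fraction

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][j] * Y[j][k] for j in range(n)) for k in range(n)]
            for i in range(n)]

def is_upper(X):
    """True if all elements below the main diagonal are zero."""
    return all(X[i][k] == 0 for i in range(len(X)) for k in range(i))

def inverse_upper(U):
    """Invert a nonsingular upper triangular matrix by back substitution."""
    n = len(U)
    cols = []
    for c in range(n):                      # solve U x = e_c for each column
        x = [Fraction(0)] * n
        for i in range(n - 1, -1, -1):
            rhs = Fraction(1 if i == c else 0)
            rhs -= sum(U[i][j] * x[j] for j in range(i + 1, n))
            x[i] = rhs / U[i][i]
        cols.append(x)
    return [[cols[c][i] for c in range(n)] for i in range(n)]

U1 = [[2, 1, 4], [0, 3, 5], [0, 0, 7]]
U2 = [[1, 6, 2], [0, 2, 8], [0, 0, 4]]

assert is_upper(mat_mul(U1, U2))    # product of upper triangulars is upper triangular
assert is_upper(inverse_upper(U1))  # inverse of an upper triangular is upper triangular
assert mat_mul(U1, inverse_upper(U1)) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

The lower triangular case follows by transposing, and the diagonal case is the intersection of the two.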
If A = ||a_ik|| (i = 1, 2, ..., m; k = 1, 2, ..., n), then the transpose A^T is defined as A^T = ||a'_ki||, where a'_ki = a_ik (i = 1, 2, ..., m; k = 1, 2, ..., n). If A is of dimension m x n, then A^T is of dimension n x m.

It is easy to verify the following properties:15

1) (A + B)^T = A^T + B^T,
2) (aA)^T = a A^T,
3) (AB)^T = B^T A^T,
4) (A^{-1})^T = (A^T)^{-1}.

If a square matrix S = ||s_ik||_1^n coincides with its transpose (S^T = S), then it is called symmetric. In a symmetric matrix, elements that are symmetrically placed with respect to the main diagonal are equal. Note that the product of two symmetric matrices is not, in general, symmetric. By 3), this product is symmetric if and only if the two given symmetric matrices are permutable.

If a square matrix K = ||k_ik||_1^n differs from its transpose by a factor -1 (K^T = -K), then it is called skew-symmetric. In a skew-symmetric matrix, any two elements that are symmetrical to the main diagonal differ from each other by a factor -1 and the diagonal elements are zero. From 3) it follows that the product of two permutable skew-symmetric matrices is a symmetric matrix.16
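Properties 3) and the remark about products of symmetric matrices can be illustrated in a few lines of Python (the helpers T and mul are ours):

```python
def T(X):
    """Transpose: row i, column k of X becomes row k, column i."""
    return [list(row) for row in zip(*X)]

def mul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

A = [[1, 2], [3, 4], [5, 6]]        # 3 x 2
B = [[7, 8, 9, 0], [1, 2, 3, 4]]    # 2 x 4

# Property 3): the transpose of a product reverses the order of the factors.
assert T(mul(A, B)) == mul(T(B), T(A))

# Two symmetric matrices whose product is NOT symmetric (they do not commute):
S1 = [[1, 2], [2, 3]]
S2 = [[4, 5], [5, 6]]
P = mul(S1, S2)
assert T(S1) == S1 and T(S2) == S2
assert T(P) != P
assert mul(S1, S2) != mul(S2, S1)
```

Note also that (S1 S2)^T = S2 S1, so the product of two symmetric matrices is symmetric exactly when the factors commute, as stated in the text.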
§ 4. Compound Matrices. Minors of the Inverse Matrix
1. Let A = ||a_ik||_1^n be a given matrix. We consider all possible minors of A of order p (1 <= p <= n):

    A( i_1 i_2 ... i_p | k_1 k_2 ... k_p )    (i_1 < i_2 < ... < i_p;  k_1 < k_2 < ... < k_p).    (31)

The number of these minors is N^2, where N is the number of combinations of n objects taken p at a time. In order to arrange the minors (31) in a square array, we enumerate in some definite order (lexicographic order, for example) all the N combinations of p indices selected from among the indices 1, 2, ..., n.

15 In formulas 1), 2), 3), A and B are arbitrary rectangular matrices for which the corresponding operations are feasible. In 4), A is an arbitrary square nonsingular matrix.
16 As regards the representation of a square matrix A in the form of a product of two symmetric matrices (A = S1 S2) or of two skew-symmetric matrices (A = K1 K2), see [357].
If the combinations of indices i_1 < i_2 < ... < i_p and k_1 < k_2 < ... < k_p have the numbers a and b, then the minors (31) will also be denoted as follows:

    a_ab = A( i_1 i_2 ... i_p | k_1 k_2 ... k_p ).

By giving to a and b independently all the values from 1 to N, we obtain all the minors of A = ||a_ik||_1^n of order p. The square matrix of order N

    A_p = || a_ab ||_1^N
is called the p-th compound matrix of A = ||a_ik||_1^n; p can take the values 1, 2, ..., n. Here A_1 = A, and A_n consists of the single element |A|.

Note. The order of enumeration of the combinations of indices is fixed once and for all and does not depend on the choice of A.

Example. Let

    A = || a_11  a_12  a_13  a_14 ||
        || a_21  a_22  a_23  a_24 ||
        || a_31  a_32  a_33  a_34 ||
        || a_41  a_42  a_43  a_44 ||.

We enumerate all combinations of the indices 1, 2, 3, 4 taken two at a time by arranging them in the following order:

    (12)  (13)  (14)  (23)  (24)  (34).

Then

    A_2 = || A(12|12)  A(12|13)  A(12|14)  A(12|23)  A(12|24)  A(12|34) ||
          || A(13|12)  A(13|13)  A(13|14)  A(13|23)  A(13|24)  A(13|34) ||
          || A(14|12)  A(14|13)  A(14|14)  A(14|23)  A(14|24)  A(14|34) ||
          || A(23|12)  A(23|13)  A(23|14)  A(23|23)  A(23|24)  A(23|34) ||
          || A(24|12)  A(24|13)  A(24|14)  A(24|23)  A(24|24)  A(24|34) ||
          || A(34|12)  A(34|13)  A(34|14)  A(34|23)  A(34|24)  A(34|34) ||.
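The construction just described is short to program. A Python sketch (function names ours; the determinant uses naive Laplace expansion, adequate only for small orders):

```python
from itertools import combinations

def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def compound(A, p):
    """The p-th compound matrix: all minors of order p, rows and columns
    indexed by the combinations of p indices in lexicographic order."""
    n = len(A)
    idx = list(combinations(range(n), p))   # (12), (13), (14), ... in 0-based form
    return [[det([[A[i][k] for k in cols] for i in rows]) for cols in idx]
            for rows in idx]

A = [[2, 1, 0, 3],
     [1, 4, 2, 0],
     [0, 2, 5, 1],
     [3, 0, 1, 6]]

C2 = compound(A, 2)
assert len(C2) == 6 and len(C2[0]) == 6   # N = C(4, 2) = 6
assert compound(A, 1) == A                # the 1st compound is A itself
assert compound(A, 4) == [[det(A)]]       # the n-th compound is the single element |A|
```

The entry C2[0][0], for instance, is the minor A(12|12) of the example above.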
We mention some properties of compound matrices:

1. From C = AB it follows that C_p = A_p B_p (p = 1, 2, ..., n).

For when we express the minors of order p (1 <= p <= n) of the matrix product C, by formula (19), in terms of the minors of the same order of the factors, then we have:

    C( i_1 i_2 ... i_p | k_1 k_2 ... k_p )
        = SUM_{j_1 < j_2 < ... < j_p} A( i_1 i_2 ... i_p | j_1 j_2 ... j_p ) B( j_1 j_2 ... j_p | k_1 k_2 ... k_p ).    (32)

Obviously, in the notation of this section, equation (32) can be written as follows:

    c_ab = SUM_{g=1}^N a_ag b_gb    (a, b = 1, 2, ..., N)

(here a, b, and g are the numbers of the combinations of indices i_1 < i_2 < ... < i_p, k_1 < k_2 < ... < k_p, and j_1 < j_2 < ... < j_p). Hence

    C_p = A_p B_p    (p = 1, 2, ..., n).

2. From B = A^{-1} it follows that B_p = A_p^{-1} (p = 1, 2, ..., n).
This result follows immediately from the preceding one when we set C = E and bear in mind that E_p is then the unit matrix of order N. From 2. there follows an important formula that expresses the minors of the inverse matrix in terms of the minors of the given matrix. If B = A^{-1}, then for arbitrary a, b

    SUM_{g=1}^N a_ag b_gb = d_ab = { 1 (a = b), 0 (a != b) }.    (34)

Equations (34) can also be written as follows:
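The two properties of compound matrices stated above can be checked numerically. In the Python sketch below (helper names ours; exact arithmetic with fractions) both are verified on 3 x 3 examples:

```python
from itertools import combinations
from fractions import Fraction

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def compound(A, p):
    idx = list(combinations(range(len(A)), p))
    return [[det([[A[i][k] for k in cols] for i in rows]) for cols in idx]
            for rows in idx]

def mul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def inverse(A):
    """Inverse via the adjugate, in exact rational arithmetic."""
    n, d = len(A), Fraction(det(A))
    adj = [[(-1) ** (i + k)
            * det([row[:k] + row[k + 1:] for j, row in enumerate(A) if j != i])
            for i in range(n)] for k in range(n)]   # adj[k][i] = cofactor of a_ik
    return [[c / d for c in row] for row in adj]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
B = [[1, 0, 1], [2, 1, 0], [0, 1, 1]]

# Property 1: the compound of a product is the product of the compounds.
assert compound(mul(A, B), 2) == mul(compound(A, 2), compound(B, 2))

# Property 2: the compound of the inverse is the inverse of the compound.
assert compound(inverse(A), 2) == inverse(compound(A, 2))
```

Property 2 is exactly the statement B_p = A_p^{-1}, from which formula (34) follows.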
turn out to be different from zero. This algorithm of Gauss consists of operations of a simple type such as can easily be carried out by present-day computing machines.

3. Let us express the coefficients and the right-hand sides of the reduced system in terms of the coefficients and the right-hand sides of the original system (1). We shall not assume here that in the reduction process all the numbers a_11, a_22^(1), ... turn out to be different from zero; we consider the general case, in which the first p of these numbers are different from zero:

    a_11 != 0,  a_22^(1) != 0,  ...,  a_pp^(p-1) != 0    (p <= n - 1).    (7)

This enables us (at the p-th step of the reduction) to put the original system of equations into the form
    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1,
              a_22^(1) x_2 + ... + a_2n^(1) x_n = b_2^(1),
              . . . . . . . . . . . . . . . . . . . . . .
                   a_pp^(p-1) x_p + ... + a_pn^(p-1) x_n = b_p^(p-1),
                        a_{p+1,p+1}^(p) x_{p+1} + ... + a_{p+1,n}^(p) x_n = b_{p+1}^(p),
              . . . . . . . . . . . . . . . . . . . . . .
                        a_{n,p+1}^(p) x_{p+1} + ... + a_nn^(p) x_n = b_n^(p).

    G = || a_11  a_12      ...  a_1r        a_{1,r+1}        ...  a_1n       ||
        || 0     a_22^(1)  ...  a_2r^(1)    a_{2,r+1}^(1)    ...  a_2n^(1)   ||
        || . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ||
        || 0     0         ...  a_rr^(r-1)  a_{r,r+1}^(r-1)  ...  a_rn^(r-1) ||
        || 0     0         ...  0           0                ...  0          ||
        || . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ||
        || 0     0         ...  0           0                ...  0          ||.5
The transition from A to G is effected by a certain number N of operations of the following type: to the i-th row of the matrix we add the j-th row (j < i), after a preliminary multiplication by some number a. Such an operation is equivalent to the multiplication on the left of the matrix to be transformed by the matrix

    || 1  ...  .  ...  .  ...  0 ||
    || .  . . . . . . . . . . . . ||
    || 0  ...  1  ...  .  ...  0 ||
    || .  . . . . . . . . . . . . ||        (31)
    || 0  ...  a  ...  1  ...  0 ||
    || .  . . . . . . . . . . . . ||
    || 0  ...  0  ...  0  ...  1 ||

In this matrix the main diagonal consists entirely of units; the number a stands in the i-th row and j-th column, and all the remaining elements are zero. Thus,

    G = W_N ... W_2 W_1 A,

where each of the matrices W_1, W_2, ..., W_N is of the form (31) and is therefore a lower triangular matrix with diagonal elements equal to 1.

4 See formulas (19).
5 G coincides with the matrix G_p (p. 25) for p = r.
§ 4. Decomposition of a Square Matrix into Triangular Factors
Let

    W = W_N ... W_2 W_1.    (32)

Then

    G = W A.    (33)

We shall call W the transforming matrix for A in Gauss's elimination method. Both matrices G and W are uniquely determined by A. From (32) it follows that W is lower triangular with diagonal elements equal to 1. Since W is nonsingular, we obtain from (33):

    A = W^{-1} G.    (33')
We have thus represented A in the form of a product of a lower triangular matrix W^{-1} and an upper triangular matrix G. The problem of decomposing a matrix A into factors of this type is completely answered by the following theorem:

THEOREM 1: Every matrix A = ||a_ik||_1^n of rank r in which the first r successive principal minors are different from zero,

    D_k = A( 1 2 ... k | 1 2 ... k ) != 0    (k = 1, 2, ..., r),    (34)

can be represented in the form of a product of a lower triangular matrix B and an upper triangular matrix C:

    A = BC = || b_11   0    ...   0   ||  || c_11  c_12  ...  c_1n ||
             || b_21  b_22  ...   0   ||  ||  0    c_22  ...  c_2n ||
             || . . . . . . . . . . . ||  || . . . . . . . . . . . ||
             || b_n1  b_n2  ...  b_nn ||  ||  0     0    ...  c_nn ||.    (35)

Here

    b_11 c_11 = D_1,  b_22 c_22 = D_2 / D_1,  ...,  b_rr c_rr = D_r / D_{r-1}.    (36)
The values of the first r diagonal elements of B and C can be chosen arbitrarily subject to the conditions (36).

When the first r diagonal elements of B and C are given, then the elements of the first r columns of B and of the first r rows of C are uniquely determined, and are given by the following formulas:

    b_gk = b_kk A( 1 2 ... k-1 g | 1 2 ... k ) / D_k,
    c_kg = c_kk A( 1 2 ... k | 1 2 ... k-1 g ) / D_k
        (g = k, k+1, ..., n;  k = 1, 2, ..., r).    (37)
If r < n (|A| = 0), then all the elements in the last n - r columns of B can be put equal to zero and all the elements of the last n - r rows of C can be chosen arbitrarily; or, conversely, the last n - r rows of C can be filled with zeros and the last n - r columns of B can be chosen arbitrarily.

Proof. That a representation of a matrix satisfying the conditions (34) can be given in the form of a product (35) has been proved above (see (33')). Now let B and C be arbitrary lower and upper triangular matrices whose product is A. Making use of the formulas for the minors of the product of two matrices, we find:

    D_k = A( 1 2 ... k | 1 2 ... k ) = B( 1 2 ... k | 1 2 ... k ) C( 1 2 ... k | 1 2 ... k )
        = b_11 c_11 b_22 c_22 ... b_kk c_kk    (k = 1, 2, ..., r),
can be represented in the form of a product of a lower triangular matrix F, a diagonal matrix D, and an upper triangular matrix L:

    A = FDL = || 1     0    ...  0 ||  || d_1  0   ...  0  ||  || 1  l_12  ...  l_1n ||
              || f_21  1    ...  0 ||  || 0   d_2  ...  0  ||  || 0   1    ...  l_2n ||
              || . . . . . . . . . ||  || . . . . . . . .  ||  || . . . . . . . . . .||
              || f_n1  f_n2 ...  1 ||  || 0    0   ... d_n ||  || 0   0    ...   1  ||,    (44)

where

    d_1 = D_1,  d_k = D_k / D_{k-1}  (k = 2, ..., r),  d_{r+1} = ... = d_n = 0,

and

    f_gk = A( 1 2 ... k-1 g | 1 2 ... k ) / A( 1 2 ... k | 1 2 ... k ),
    l_kg = A( 1 2 ... k | 1 2 ... k-1 g ) / A( 1 2 ... k | 1 2 ... k )
        (g = k+1, ..., n;  k = 1, 2, ..., r),    (45)

and f_gk and l_kg are arbitrary for g = k+1, ..., n; k = r+1, ..., n.
3. The elimination method of Gauss, when applied to a matrix A = ||a_ik||_1^n of rank r for which D_k != 0 (k = 1, 2, ..., r), yields two matrices: a lower triangular matrix W with diagonal elements 1 and an upper triangular matrix G in which the first r diagonal elements are D_1, D_2/D_1, ..., D_r/D_{r-1} and the last n - r rows consist entirely of zeros. G is the Gaussian form of the matrix A; W is the transforming matrix.

For the actual computation of the elements of W we recommend the following device. We obtain the matrix W when we apply to the unit matrix E all the transformations that we have performed on A in the algorithm of Gauss (in this case we shall have, instead of the product WA, equal to G, the product WE, equal to W). Let us, therefore, write the unit matrix E on the right of A:

    || a_11  ...  a_1n   1  0  ...  0 ||
    || . . . . . . . . . . . . . . . .||        (46)
    || a_n1  ...  a_nn   0  0  ...  1 ||.
By applying all the transformations of the algorithm of Gauss to this rectangular matrix we obtain a rectangular matrix consisting of the two square matrices G and W : (G ,W ). Thus, the application of Gauss’s algorithm to the matrix (46) gives the matrices G and W simultaneously. If A is nonsingular, so that  A \ 0, then  G  ^ 0 as well. In this case, (33) implies that A ~ l = G~l W. Since G and W are determined by means of the algorithm of Gauss, the task of finding the inverse matrix A ~ J reduces to determining G~1 and multiplying G~l by W. Although there is no difficulty in finding the inverse matrix G 1 once the matrix G has been determined, because G is triangular, the operations involved can nevertheless be avoided. For this purpose we introduce, to gether with the matrices G and W, similar matrices G\ and Wx for the transposed matrix A7. Then AT= W \~rGy, i.e., A = G\ Wj"1. Let us compare (33') with (44) :
A = W 'G , A = FDL.
(47)
These equations may be regarded as two distinct decompositions of the form (35); here we take the product DL as the second factor C. Since the first r diagonal elements of the first factors are the same (they are equal to 1), their first r columns coincide. But then, since the last n − r columns of F may be chosen arbitrarily, we choose them such that

F = W^{-1}.   (48)

On the other hand, a comparison of (47) with (44), A = G_1^T (W_1^T)^{-1}, A = FDL, shows that we may also select the arbitrary elements of L in such a way that

L = (W_1^T)^{-1}.   (49)

Replacing F and L in (44) by their expressions (48) and (49), we obtain

A = W^{-1} D (W_1^T)^{-1}.   (50)

Comparing this equation with (33') and (47), we find:

G = D(W_1^T)^{-1},   G_1^T = W^{-1}D.   (51)

We now introduce the diagonal matrix

D̂ = diag(1/D_1, D_1/D_2, …, D_{r−1}/D_r, 1, …, 1).   (52)

Since D = DD̂D, it follows from (50) and (51) that

A = G_1^T D̂ G.   (53)

Formula (53) shows that the decomposition of A into triangular factors can be obtained by applying the algorithm of Gauss to the matrices A and A^T.

Now let A be non-singular (r = n). Then |D| ≠ 0 and D̂ = D^{-1}. Therefore it follows from (50) that

A^{-1} = W_1^T D̂ W.   (54)

This formula yields an effective computation of the inverse matrix A^{-1} by the application of Gauss's algorithm to the rectangular matrices (A, E) and (A^T, E).
If, in particular, we take for A a symmetric matrix S, then G_1 coincides with G and W_1 with W, and therefore formulas (53) and (54) assume the form

S = G^T D̂ G,   (55)

S^{-1} = W^T D̂ W.   (56)
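Formulas (53)–(56) are easy to try numerically. The following is a minimal sketch (Python with NumPy, not part of the original text); the helper name `gauss_wg` and the sample matrix are assumptions, chosen so that every successive principal minor D_k is nonzero.

```python
import numpy as np

def gauss_wg(a):
    """Forward Gaussian elimination without pivoting.

    Returns W (lower triangular, unit diagonal) and G (upper triangular)
    with W @ A = G; assumes all leading principal minors are nonzero.
    """
    n = a.shape[0]
    w = np.eye(n)
    g = a.astype(float).copy()
    for k in range(n):
        for i in range(k + 1, n):
            m = g[i, k] / g[k, k]
            g[i, :] -= m * g[k, :]
            w[i, :] -= m * w[k, :]
    return w, g

S = np.array([[4.0, 2.0, 2.0],
              [2.0, 3.0, 1.0],
              [2.0, 1.0, 5.0]])    # symmetric, D_1 = 4, D_2 = 8, D_3 = 32

W, G = gauss_wg(S)
D = np.diag(np.diag(G))            # diag(D_1, D_2/D_1, D_3/D_2)
D_hat = np.linalg.inv(D)

S_inv = W.T @ D_hat @ W            # formula (56)
```

Here `G.T @ D_hat @ G` reproduces S as in (55), so a single elimination pass on the symmetric matrix yields both the factorization and the inverse.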
§ 5. The Partition of a Matrix into Blocks. The Technique of Operating with Partitioned Matrices. The Generalized Algorithm of Gauss

It often becomes necessary to use matrices that are partitioned into rectangular parts—'cells' or 'blocks.' In the present section we deal with such partitioned matrices.

1. Let a rectangular matrix

A = ‖a_ik‖   (i = 1, 2, …, m; k = 1, 2, …, n)   (57)

be given. By means of horizontal and vertical lines we dissect A into rectangular blocks:

    | A_11  A_12  …  A_1t | } m_1
A = | A_21  A_22  …  A_2t | } m_2   (58)
    | …                   |
    | A_s1  A_s2  …  A_st | } m_s

(the β-th column of blocks containing n_β columns of A; m_1 + ⋯ + m_s = m, n_1 + ⋯ + n_t = n). We shall say of matrix (58) that it is partitioned into st blocks, or cells, A_αβ of dimensions m_α × n_β (α = 1, 2, …, s; β = 1, 2, …, t), or that it is represented in the form of a partitioned, or blocked, matrix. Instead of (58) we shall simply write

A = (A_αβ)   (α = 1, 2, …, s; β = 1, 2, …, t).   (59)

In the case s = t we shall use the notation

A = (A_αβ)₁ᵗ.   (60)
Operations on partitioned matrices are performed according to the same formal rules as in the case in which we have numerical elements instead of blocks. For example, let A and B be two rectangular matrices of equal dimensions partitioned into blocks in exactly the same way:

A = (A_αβ),   B = (B_αβ)   (α = 1, 2, …, s; β = 1, 2, …, t).   (61)
(A_αβ = O for α < β). A quasi-diagonal matrix is a special case of a quasi-triangular matrix.

From the formulas (64) it is easy to see that: The product of two upper (lower) quasi-triangular matrices is itself an upper (lower) quasi-triangular matrix; the diagonal cells of the product are obtained by multiplying the corresponding diagonal cells of the factors. For when we set s = t in (64) and A_αβ = O, B_αβ = O for α < β, this follows at once from the expressions for the cells of the product.

For a partitioned matrix

R = | A  B |
    | C  D |

in which D is non-singular, |R| = |A − BD^{-1}C| · |D|.

Theorem 4: If

R = | A  B |
    | C  D |,

where A is a square non-singular matrix of order n (|A| ≠ 0), then the rank of R is equal to n if and only if

D = CA^{-1}B.   (81)
Proof. We subtract from the second row of blocks of R the first, multiplied on the left by CA^{-1}. Then we obtain the matrix

T = | A  B           |
    | O  D − CA^{-1}B |.   (82)

By Theorem 3, the matrices R and T have the same rank. But the rank of T coincides with the rank of A (namely, n) if and only if D − CA^{-1}B = O, i.e., when (81) holds. This proves the theorem.

From Theorem 4 there follows an algorithm⁹ for the construction of the inverse matrix A^{-1} and, more generally, of the product CA^{-1}B, where B and C are rectangular matrices of dimensions n × p and q × n. By means of Gauss's algorithm,¹⁰ we reduce the matrix

| A   B |
| −C  O |   (83)

to the form

| G  B_1 |
| O  X   |.   (84)

We will show that

X = CA^{-1}B.   (85)

For, the same transformation that was applied to the matrix (83) reduces the matrix

⁹ See [181].
¹⁰ We do not apply here the entire algorithm of Gauss to the matrix (83) but only the first n steps of the algorithm, where n is the order of the matrix A. This can be done if the conditions (15) hold for p = n. But if these conditions do not hold, then, since |A| ≠ 0, we may renumber the first n rows (or the first n columns) of the matrix (83) so that the n steps of Gauss's algorithm turn out to be feasible. Such a modified Gaussian algorithm is sometimes applied even when the conditions (15), with p = n, are satisfied.
| A   B          |
| −C  −CA^{-1}B  |   (86)

to the matrix

| G  B_1           |
| O  X − CA^{-1}B  |.   (87)
By Theorem 4, the matrix (86) is of rank n (n is the order of A). But then (87) must also be of rank n. Hence X − CA^{-1}B = O, i.e., (85) holds.

In particular, if B = y, where y is a column matrix, and C = E, then X = A^{-1}y. Therefore, when we apply Gauss's algorithm to the matrix

| A   y |
| −E  O |,

we obtain the solution of the system of equations Az = y.

Further, if in (83) we set B = C = E, then by applying the algorithm of Gauss to the matrix

| A   E |
| −E  O |

we obtain

| G  B_1 |
| O  X   |,

where X = A^{-1}. Let us illustrate this method by finding A^{-1} in the following example.

Example. Let

    | 2  1  0 |
A = | 1  1  2 |.
    | 3  1  2 |

It is required to compute A^{-1}. We apply a somewhat modified elimination method¹¹ to the matrix

¹¹ See the preceding footnote.
|  2   1   0   1  0  0 |
|  1   1   2   0  1  0 |
|  3   1   2   0  0  1 |
| −1   0   0   0  0  0 |
|  0  −1   0   0  0  0 |
|  0   0  −1   0  0  0 |.
To all the rows we add certain multiples of the second row and we arrange that all the elements of the first column, except the second, become zero. Then we add to all the rows, except the second, the third row multiplied by certain factors and see to it that in the second column all the elements, except the second and third, become zero. Then we add to the last three rows the first row with suitable factors and obtain a matrix of the form
| *  *  *     *     *     *   |
| *  *  *     *     *     *   |
| *  *  *     *     *     *   |
| 0  0  0     0   −1/2   1/2  |
| 0  0  0     1     1    −1   |
| 0  0  0   −1/2   1/4   1/4  |.

Therefore

         |   0   −1/2   1/2 |
A^{-1} = |   1     1    −1  |.
         | −1/2   1/4   1/4 |
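The bordered-matrix scheme (83)–(85) can be sketched in a few lines of Python with NumPy (an illustration under the assumption that no pivot vanishes during the first n steps; the function name `bordered_gauss` is hypothetical, not from the text):

```python
import numpy as np

def bordered_gauss(a, b, c):
    """Apply the first n steps of Gaussian elimination to (A B; -C O).

    Afterwards the lower-right block equals C A^{-1} B, as in (85).
    Assumes all pivots encountered are nonzero (otherwise rows would
    have to be renumbered, as the text's footnote explains).
    """
    n = a.shape[0]
    m = np.block([[a, b],
                  [-c, np.zeros((c.shape[0], b.shape[1]))]]).astype(float)
    for k in range(n):                       # eliminate below the n pivots
        for i in range(k + 1, m.shape[0]):
            m[i, :] -= (m[i, k] / m[k, k]) * m[k, :]
    return m[n:, n:]                         # the block X = C A^{-1} B

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 1.0, 2.0],
              [3.0, 1.0, 2.0]])
E = np.eye(3)

X = bordered_gauss(A, E, E)                  # with B = C = E this is A^{-1}
```

With B = y a single column and C = E, the same routine returns the solution of Az = y without forming A^{-1} explicitly.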
CHAPTER III

LINEAR OPERATORS IN AN n-DIMENSIONAL VECTOR SPACE

Matrices constitute the fundamental analytic apparatus for the study of linear operators in an n-dimensional space. The study of these operators, in turn, enables us to divide all matrices into classes and to exhibit the significant properties that all matrices of one and the same class have in common. In the present chapter we shall expound the simpler properties of linear operators in an n-dimensional space. The investigation will be continued in Chapters VII and IX.
§ 1. Vector Spaces

1. Let R be a set of arbitrary elements x, y, z, … in which two operations are defined:¹ the operation of 'addition' and the operation of 'multiplication by a number of the field F.' We postulate that these operations can always be performed uniquely in R and that the following rules hold for arbitrary elements x, y, z of R and numbers α, β of F:

1. x + y = y + x.
2. (x + y) + z = x + (y + z).
3. There exists an element o in R such that the product of the number 0 with any element x of R is equal to o: 0 · x = o.
4. 1 · x = x.
5. α(βx) = (αβ)x.
6. (α + β)x = αx + βx.
7. α(x + y) = αx + αy.

¹ These operations will be denoted by the usual signs '+' and '·'; the latter sign will sometimes be omitted.
Definition 1: A set R of elements in which two operations—'addition of elements' and 'multiplication of elements of R by a number of F'—can always be performed uniquely and for which the postulates 1.–7. hold is called a vector space (over the field F), and the elements are called vectors.²

Definition 2: The vectors x, y, …, u of R are called linearly dependent if there exist numbers α, β, …, δ in F, not all zero, such that

αx + βy + ⋯ + δu = o.   (1)

If such a linear dependence does not hold, then the vectors x, y, …, u are called linearly independent.

If the vectors x, y, …, u are linearly dependent, then one of the vectors can be represented as a linear combination, with coefficients in F, of the remaining ones. For example, if α ≠ 0 in (1), then

x = −(β/α)y − ⋯ − (δ/α)u.

Definition 3: The space R is called finite-dimensional and the number n is called the dimension of the space if there exist n linearly independent vectors in R, while any n + 1 vectors in R are linearly dependent. If the space contains linearly independent systems of an arbitrary number of vectors, then it is called infinite-dimensional.

In this book we shall study mainly finite-dimensional spaces.

Definition 4: A system of n linearly independent vectors e_1, e_2, …, e_n of an n-dimensional space, given in a definite order, is called a basis of the space.
2. Example 1. The set of all ordinary vectors (directed geometrical segments) is a three-dimensional vector space. The part of this space that consists of the vectors parallel to some plane is a two-dimensional space, and all the vectors parallel to a given line form a one-dimensional vector space.

Example 2. Let us call a column x = (x_1, x_2, …, x_n) of n numbers of F a vector (where n is a fixed number). We define the basic operations as operations on column matrices:

² It is easy to see that all the usual properties of the operations of addition and of multiplication by a number follow from the properties 1.–7. For example, for arbitrary x of R we have:

x + o = x   [x + o = 1·x + 0·x = (1 + 0)·x = 1·x = x];
x + (−x) = o, where −x = (−1)·x;

etc.
(x_1, x_2, …, x_n) + (y_1, y_2, …, y_n) = (x_1 + y_1, x_2 + y_2, …, x_n + y_n),
α(x_1, x_2, …, x_n) = (αx_1, αx_2, …, αx_n).

Similarly, for sequences:

(x_1, x_2, …, x_n, …) + (y_1, y_2, …, y_n, …) = (x_1 + y_1, x_2 + y_2, …, x_n + y_n, …),
α(x_1, x_2, …, x_n, …) = (αx_1, αx_2, …, αx_n, …).
Definition 7: The product of the operators A and B is the operator C for which

Cx = A(Bx)   (x ∈ R)   (15)

holds for every x of R. The operator C maps R into T.
holds for every x of R. The operator C maps R into T :
Prom the linearity of the operators A and B follows the linearity of C. We choose arbitrary bases in R, S, and T and denote by A, B, and C the matrices corresponding, in this choice of basis, to the operators A, B, and C. Then the vector equations z = z A y ,y = B x , z = C x
(16)
correspond to the matrix equations: z —Ay, y = Bxy z—Cxt where x, y, z are the coordinate columns of the vectors x, y, *. Hence Cx= A(Bx) = (AB)x and as the column x is arbitrary C—AB.
(17)
Thus, the product C = AB of the operators A and B corresponds to the matrix C =  cif  (i = 1, 2 ,..., m ; j = 1, 2 ,..., q), which is the product of the matrices A and B.
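The correspondence between composition of operators and multiplication of matrices can be checked in a few lines (NumPy; the particular matrices and the vector are arbitrary illustrations, not taken from the text):

```python
import numpy as np

B = np.array([[2.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])    # matrix of B: maps R (dim 3) into S (dim 2)
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 1.0]])         # matrix of A: maps S (dim 2) into T (dim 3)

x = np.array([1.0, 2.0, 3.0])      # coordinate column of a vector x of R

lhs = A @ (B @ x)                  # C x = A(B x)
rhs = (A @ B) @ x                  # the matrix C = AB applied to x
```

The two results coincide for every column x, which is exactly the content of (17).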
We leave it to the reader to show that the operator⁶

C = αA   (α ∈ F)

corresponds to the matrix C = αA. Thus we see that in Chapter I the operations on matrices were so defined that the sum A + B, the product AB, and the product αA correspond to the matrices A + B, AB, and αA, respectively, where A and B are the matrices corresponding to the operators A and B, and α is a number of F.

⁶ I.e., the operator for which Cx = αAx (x ∈ R).
§ 4. Transformation of Coordinates

1. In an n-dimensional vector space we consider two bases: e_1, e_2, …, e_n (the 'old' basis) and e_1*, e_2*, …, e_n* (the 'new' basis). The mutual disposition of the basis vectors is determined if the coordinates of the vectors of one basis are given relative to the other basis. We set

e_1* = t_11 e_1 + t_21 e_2 + ⋯ + t_n1 e_n,
e_2* = t_12 e_1 + t_22 e_2 + ⋯ + t_n2 e_n,
⋯
e_n* = t_1n e_1 + t_2n e_2 + ⋯ + t_nn e_n,   (18)

or, in abbreviated form,

e_i* = Σ_{k=1}^{n} t_ki e_k   (i = 1, 2, …, n).   (18')
We shall now establish the connection between the coordinates of one and the same vector in the two different bases. Let x_1, x_2, …, x_n and x_1*, x_2*, …, x_n* be the coordinates of the vector x relative to the 'old' and the 'new' bases, respectively:

x = Σ_{i=1}^{n} x_i e_i = Σ_{i=1}^{n} x_i* e_i*.   (19)

In (19) we substitute for the vectors e_i* the expressions given for them in (18). We obtain:
x = Σ_{k=1}^{n} x_k* Σ_{i=1}^{n} t_ik e_i = Σ_{i=1}^{n} ( Σ_{k=1}^{n} t_ik x_k* ) e_i,

whence

x_i = Σ_{k=1}^{n} t_ik x_k*   (i = 1, 2, …, n).
Let f(λ) be an arbitrary annihilating polynomial of the matrix A. Dividing it by ψ(λ), we have:

f(λ) = ψ(λ)q(λ) + r(λ).

Since f(A) = 0 and ψ(A) = 0, it follows that r(A) = 0. But the degree of r(λ) is less than that of the minimal polynomial ψ(λ). Therefore r(λ) ≡ 0.¹⁰ Hence: Every annihilating polynomial of a matrix is divisible without remainder by the minimal polynomial.

Let ψ_1(λ) and ψ_2(λ) be two minimal polynomials of one and the same matrix. Then each is divisible without remainder by the other, i.e., the polynomials differ by a constant factor. This constant factor must be 1, because the highest coefficients of ψ_1(λ) and ψ_2(λ) are 1. Thus we have proved the uniqueness of the minimal polynomial of a given matrix A.

2. We shall now derive a formula connecting the minimal polynomial with the characteristic polynomial. We denote by D_{n−1}(λ) the greatest common divisor of all the minors of order n − 1 of the characteristic matrix λE − A, i.e., of all the elements of the matrix B(λ) = ‖b_ik(λ)‖₁ⁿ (see the preceding section). Then

B(λ) = D_{n−1}(λ)C(λ),   (47)

where C(λ) is a certain polynomial matrix, the 'reduced' adjoint matrix of λE − A. From (20) and (47) we have:

Δ(λ)E = (λE − A)C(λ)D_{n−1}(λ).   (48)

Hence it follows that Δ(λ) is divisible without remainder by D_{n−1}(λ):¹¹

Δ(λ) = D_{n−1}(λ)ψ(λ),   (49)

where ψ(λ) is some polynomial. The factor D_{n−1}(λ) in (48) may be cancelled on both sides:¹²

ψ(λ)E = (λE − A)C(λ).   (50)

¹⁰ Otherwise there would exist an annihilating polynomial of degree less than that of the minimal polynomial.
¹¹ We could also verify this immediately by expanding the characteristic determinant Δ(λ) with respect to the elements of an arbitrary row.
¹² In this case we have, apart from (50), also the identity (see (20')): ψ(λ)E = C(λ)(λE − A), i.e., C(λ) is at one and the same time the left quotient and the right quotient of ψ(λ)E on division by λE − A.
§ 6. The Minimal Polynomial of a Matrix
Since ψ(λ)E is divisible on the left without remainder by λE − A, it follows by the Generalized Bézout Theorem that ψ(A) = 0. Thus, the polynomial ψ(λ) defined by (49) is an annihilating polynomial of A. Let us show that it is the minimal polynomial. We denote the minimal polynomial by ψ*(λ). Then ψ(λ) is divisible by ψ*(λ) without remainder:

ψ(λ) = ψ*(λ)χ(λ).   (51)

Since ψ*(A) = 0, by the Generalized Bézout Theorem the matrix polynomial ψ*(λ)E is divisible on the left by λE − A without remainder:

ψ*(λ)E = (λE − A)C*(λ).   (52)

From (51) and (52) it follows that

ψ(λ)E = (λE − A)C*(λ)χ(λ).   (53)

The identities (50) and (53) show that C(λ) as well as C*(λ)χ(λ) are left quotients of ψ(λ)E on division by λE − A. By the uniqueness of division,

C(λ) = C*(λ)χ(λ).

Hence it follows that χ(λ) is a common divisor of all the elements of the polynomial matrix C(λ). But, on the other hand, the greatest common divisor of all the elements of the reduced adjoint matrix C(λ) is equal to 1, because C(λ) was obtained from B(λ) by division by D_{n−1}(λ). Therefore χ(λ) = const. Since the highest coefficients of ψ(λ) and ψ*(λ) are equal, we have in (51) χ(λ) = 1, i.e., ψ(λ) = ψ*(λ), and this is what we had to prove.

We have established the following formula for the minimal polynomial:

ψ(λ) = Δ(λ) / D_{n−1}(λ).   (54)

3. For the reduced adjoint matrix C(λ) we have a formula analogous to (31) (p. 84):

C(λ) = Ψ(λE, A),   (55)

where the polynomial Ψ(λ, μ) is defined by the equation¹³

¹³ Formula (55) can be deduced in the same way as (31). On both sides of the identity ψ(λ) − ψ(μ) = (λ − μ)Ψ(λ, μ) we substitute for λ and μ the matrices λE and A and compare the matrix equation so obtained with (50).
IV. The Characteristic Polynomial and the Minimal Polynomial of a Matrix

Ψ(λ, μ) = (ψ(λ) − ψ(μ)) / (λ − μ).   (56)
Moreover,

(λE − A)C(λ) = ψ(λ)E.   (57)

Going over to determinants on both sides of (57), we obtain

Δ(λ) · |C(λ)| = [ψ(λ)]ⁿ.   (58)
Thus, Δ(λ) is divisible without remainder by ψ(λ), and some power of ψ(λ) is divisible without remainder by Δ(λ), i.e., the sets of all the distinct roots of the polynomials Δ(λ) and ψ(λ) are equal. In other words: All the distinct characteristic values of A are roots of ψ(λ). If

Δ(λ) = (λ − λ_1)^{n_1} (λ − λ_2)^{n_2} ⋯ (λ − λ_s)^{n_s}   (λ_i ≠ λ_j for i ≠ j; n_i > 0; i, j = 1, 2, …, s),   (59)

then

ψ(λ) = (λ − λ_1)^{m_1} (λ − λ_2)^{m_2} ⋯ (λ − λ_s)^{m_s},   (60)

where

0 < m_k ≤ n_k   (k = 1, 2, …, s).   (61)

4. We mention one further property of the matrix C(λ). Let λ_0 be an arbitrary characteristic value of A = ‖a_ik‖₁ⁿ. Then ψ(λ_0) = 0 and therefore, by (57),

(λ_0 E − A)C(λ_0) = 0.   (62)

Note that C(λ_0) ≠ 0 always holds, for otherwise all the elements of the reduced adjoint matrix C(λ) would be divisible without remainder by λ − λ_0, and this is impossible. We denote by c an arbitrary non-zero column of C(λ_0). Then from (62)

(λ_0 E − A)c = o,   i.e.,   Ac = λ_0 c.   (63)

In other words, every non-zero column of C(λ_0) (and such a column always exists) determines a characteristic vector for λ = λ_0.
In other words, every nonzero column of C(A0) (and such a column always exists) determines a characteristic vector for A= A0. Example. 3 —3 2 —1 5 —2 , A = —1 A(A) =
3
0
A—3 3 —2 1 A —5 2 = A*—8A2f 20A—16 =(A —2)* (A—4), 1 ’ —3 A
'
Here

δ(λ, μ) = (Δ(λ) − Δ(μ)) / (λ − μ) = μ² + μ(λ − 8) + λ² − 8λ + 20,

B(λ) = A² + (λ − 8)A + (λ² − 8λ + 20)E

     | 10  −18   12 |           |  3  −3   2 |                   | 1  0  0 |
   = | −6   22  −12 | + (λ − 8) | −1   5  −2 | + (λ² − 8λ + 20)  | 0  1  0 |
     | −6   18   −8 |           | −1   3   0 |                   | 0  0  1 |

     | λ² − 5λ + 6    −3λ + 6       2λ − 4       |
   = | −λ + 2         λ² − 3λ + 2   −2λ + 4      |.
     | −λ + 2         3λ − 6        λ² − 8λ + 12 |
All the elements of the matrix B(λ) are divisible by D_2(λ) = λ − 2. Cancelling this factor, we have:

       | λ−3   −3    2  |
C(λ) = | −1   λ−1   −2  |
       | −1    3   λ−6  |

and

ψ(λ) = Δ(λ)/D_2(λ) = (λ − 2)(λ − 4).

In C(λ) we substitute for λ the value λ_0 = 2:

       | −1  −3   2 |
C(2) = | −1   1  −2 |.
       | −1   3  −4 |

The first column gives us the characteristic vector (1, 1, 1) for λ_0 = 2. The second column gives us the characteristic vector (−3, 1, 3) for the same characteristic value λ_0 = 2. The third column is a linear combination of the first two.

Similarly, setting λ_0 = 4, we find from the first column of the matrix C(4) the characteristic vector (1, −1, −1) corresponding to the characteristic value λ_0 = 4.

The reader should note that ψ(λ) and C(λ) could have been determined by a different method. To begin with, let us find D_2(λ). D_2(λ) can only have 2 and 4 as its roots. For λ = 4 the second-order minor

| 1   λ−5 |
| 1   −3  | = −λ + 2

of λE − A does not vanish. Therefore D_2(4) ≠ 0. For λ = 2 the columns of λE − A become proportional. Therefore all the minors of order two in λE − A
vanish for λ = 2: D_2(2) = 0. Since the minor just computed is of the first degree, D_2(λ) cannot be divisible by (λ − 2)². Therefore

D_2(λ) = λ − 2.

Hence

ψ(λ) = Δ(λ)/D_2(λ) = (λ − 2)(λ − 4) = λ² − 6λ + 8,

Ψ(λ, μ) = (ψ(λ) − ψ(μ)) / (λ − μ) = μ + λ − 6,

and

                            | λ−3   −3    2  |
C(λ) = Ψ(λE, A) = A + (λ − 6)E = | −1   λ−1   −2  |.
                            | −1    3   λ−6  |
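The whole computation of this example can be reproduced with a computer-algebra system. A SymPy sketch (the variable names are of course not from the text):

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[3, -3, 2],
               [-1, 5, -2],
               [-1, 3, 0]])
M = lam * sp.eye(3) - A          # the characteristic matrix

Delta = sp.factor(M.det())       # (lambda - 2)**2 * (lambda - 4)

# D_2(lambda): g.c.d. of all elements of the adjoint matrix B(lambda)
B = M.adjugate()
D2 = sp.gcd_list(list(B))        # lambda - 2

psi = sp.cancel(Delta / D2)      # minimal polynomial, formula (54)
C = sp.simplify(B / D2)          # reduced adjoint matrix

# every nonzero column of C(lambda_0) is a characteristic vector, (63)
c = C.subs(lam, 2).col(0)
```

The column `c` is proportional to (1, 1, 1) and satisfies Ac = 2c, in agreement with the example.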
CHAPTER V

FUNCTIONS OF MATRICES
§ 1. Definition of a Function of a Matrix

1. Let A = ‖a_ik‖₁ⁿ be a square matrix and f(λ) a function of a scalar argument λ. We wish to define what is to be meant by f(A), i.e., we wish to extend the function f(λ) to a matrix value of the argument.

We already know the solution of this problem in the simplest special case where f(λ) = γ_0 λ^l + γ_1 λ^{l−1} + ⋯ + γ_l is a polynomial in λ. In this case f(A) = γ_0 A^l + γ_1 A^{l−1} + ⋯ + γ_l E. Starting from this special case, we shall obtain a definition of f(A) in the general case.

We denote by

ψ(λ) = (λ − λ_1)^{m_1} (λ − λ_2)^{m_2} ⋯ (λ − λ_s)^{m_s}   (1)

the minimal polynomial¹ of A (where λ_1, λ_2, …, λ_s are all the distinct characteristic values of A). The degree of this polynomial is m = Σ_{k=1}^{s} m_k.

Let g(λ) and h(λ) be two polynomials such that

g(A) = h(A).   (2)

Then the difference d(λ) = g(λ) − h(λ), as an annihilating polynomial for A, is divisible by ψ(λ) without remainder; we shall write this as follows:

g(λ) ≡ h(λ)   (mod ψ(λ)).   (3)

Hence, by (1),

d(λ_k) = 0,   d′(λ_k) = 0,   …,   d^{(m_k−1)}(λ_k) = 0   (k = 1, 2, …, s),

i.e.,

g(λ_k) = h(λ_k),   g′(λ_k) = h′(λ_k),   …,   g^{(m_k−1)}(λ_k) = h^{(m_k−1)}(λ_k)   (k = 1, 2, …, s).   (4)

¹ See Chapter IV, § 6.
The m numbers

f(λ_k), f′(λ_k), …, f^{(m_k−1)}(λ_k)   (k = 1, 2, …, s)   (5)

will be called the values of the function f(λ) on the spectrum of the matrix A, and the set of all these values will be denoted symbolically by f(Λ_A). If for a function f(λ) the values (5) exist (i.e., have meaning), then we shall say that the function f(λ) is defined on the spectrum of the matrix A.

Equation (4) shows that the polynomials g(λ) and h(λ) have the same values on the spectrum of A. In symbols: g(Λ_A) = h(Λ_A). Our argument is reversible: from (4) follows (3) and therefore (2).

Thus, given a matrix A, the values of a polynomial g(λ) on the spectrum of A determine the matrix g(A) completely, i.e., all polynomials g(λ) that assume the same values on the spectrum of A have one and the same matrix value g(A).

We postulate that the definition of f(A) in the general case be subject to the same principle: the values of the function f(λ) on the spectrum of the matrix A must determine f(A) completely, i.e., all functions f(λ) having the same values on the spectrum of A must have the same matrix value f(A).

But then it is obvious that for the general definition of f(A) it is sufficient to look for a polynomial² g(λ) that assumes the same values on the spectrum of A as f(λ) does, and to set f(A) = g(A). We are thus led to the following definition:

Definition 1: If the function f(λ) is defined on the spectrum of the matrix A, then

f(A) = g(A),

where g(λ) is an arbitrary polynomial that assumes on the spectrum of A the same values as does f(λ).

Among all the polynomials with complex coefficients that assume on the spectrum of A the same values as f(λ) there is one and only one polynomial

² It will be proved in § 2 that such an interpolation polynomial always exists, and an algorithm for the computation of the coefficients of the interpolation polynomial of least degree will be given.
r(λ) that is of degree less than m.³ This polynomial r(λ) is uniquely determined by the interpolation conditions:

r(λ_k) = f(λ_k),   r′(λ_k) = f′(λ_k),   …,   r^{(m_k−1)}(λ_k) = f^{(m_k−1)}(λ_k)   (k = 1, 2, …, s).   (6)

The polynomial r(λ) is called the Lagrange–Sylvester interpolation polynomial for f(λ) on the spectrum of A.

Definition 1 can also be formulated as follows:

Definition 1′: Let f(λ) be a function defined on the spectrum of a matrix A and r(λ) the corresponding Lagrange–Sylvester interpolation polynomial. Then

f(A) = r(A).
Note. If the minimal polynomial ψ(λ) of a matrix A has no multiple roots⁴ (in (1), m_1 = m_2 = ⋯ = m_s = 1; s = m), then for f(A) to have a meaning it is sufficient that f(λ) be defined at the characteristic values λ_1, λ_2, …, λ_m. But if ψ(λ) has multiple roots, then for some characteristic values the derivatives of f(λ) up to a certain order (see (6)) must be defined as well.

Example 1: Let us consider the matrix⁵

    | 0  1  0  …  0 |
    | 0  0  1  …  0 |
H = | …             |.
    | 0  0  0  …  1 |
    | 0  0  0  …  0 |

Its minimal polynomial is λⁿ. Therefore the values of f(λ) on the spectrum of H are the numbers f(0), f′(0), …, f^{(n−1)}(0).
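For this matrix H, the interpolation polynomial matching f(0), f′(0), …, f^{(n−1)}(0) is simply the Taylor polynomial of f at 0 of degree n − 1. A SymPy sketch (the choices n = 3 and f = exp are arbitrary illustrations):

```python
import sympy as sp

n = 3
H = sp.zeros(n, n)
for i in range(n - 1):
    H[i, i + 1] = 1            # ones on the superdiagonal, as in Example 1

lam = sp.symbols('lambda')
f = sp.exp(lam)                # any f with f(0), f'(0), f''(0) defined

# polynomial with the same values as f on the spectrum of H:
# the Taylor polynomial of degree n - 1 at 0
r = sum(sp.diff(f, lam, j).subs(lam, 0) / sp.factorial(j) * lam**j
        for j in range(n))

fH = sp.zeros(n, n)
for j in range(n):
    fH += r.coeff(lam, j) * H**j   # f(H) = r(H)
```

For f = exp this gives f(H) = E + H + H²/2, since H is nilpotent.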
(I B )
§ 3.
O th e r F orm s o f D e fin itio n o r f ( A ) .
C o m p o n e n ts
105
The matrices Z_kj are completely determined when A is given and do not depend on the choice of the function f(λ). On the right-hand side of (17) the function f(λ) is represented only by its values on the spectrum of A. The matrices Z_kj (j = 1, 2, …, m_k; k = 1, 2, …, s) will be called the constituent matrices or components of the given matrix A.

The components Z_kj are linearly independent. For suppose that

Σ_{k=1}^{s} Σ_{j=1}^{m_k} c_kj Z_kj = 0.

Then by (18)

χ(Λ_A) = 0,   (19)

where

χ(λ) = Σ_{k=1}^{s} Σ_{j=1}^{m_k} c_kj φ_kj(λ).   (20)

Since by (20) the degree of χ(λ) is less than m, the degree of the minimal polynomial ψ(λ), it follows from (19) that χ(λ) ≡ 0. But then, since the m functions φ_kj(λ) are linearly independent, all the c_kj = 0 (j = 1, 2, …, m_k; k = 1, 2, …, s), and this is what we had to prove.

2. From the linear independence of the constituent matrices Z_kj it follows, among other things, that none of these matrices can be zero. Let us also note that any two components Z_kj are permutable with each other and with A, because they are all scalar polynomials in A.

The formula (17) for f(A) is particularly convenient to use when it is necessary to deal with several functions of one and the same matrix A, or when the function f(λ) depends not only on λ but also on some parameter t. In the latter case, the components Z_kj on the right-hand side of (17) do not depend on t, and the parameter t enters only into the scalar coefficients.

In the example at the end of § 2, where ψ(λ) = (λ − λ_1)²(λ − λ_2)³, we may represent r(λ) in the form

r(λ) = f(λ_1)φ_11(λ) + f′(λ_1)φ_12(λ) + f(λ_2)φ_21(λ) + f′(λ_2)φ_22(λ) + f″(λ_2)φ_23(λ),

where
φ_11(λ), φ_12(λ), …, φ_23(λ) are the fundamental interpolation polynomials determined in § 2; the corresponding components are Z_11 = φ_11(A), Z_12 = φ_12(A), …, Z_23 = φ_23(A), each φ_1j(A) containing the factor (A − λ_2 E)³ and each φ_2j(A) the factor (A − λ_1 E)².
3. When the matrix A is given and its components have actually to be found, we can set in the fundamental formula (17) f(μ) = 1/(λ − μ), where λ is a parameter.¹⁰ Then we obtain

(λE − A)^{-1} = C(λ)/ψ(λ) = Σ_{k=1}^{s} [ Z_k1/(λ − λ_k) + 1!·Z_k2/(λ − λ_k)² + ⋯ + (m_k − 1)!·Z_km_k/(λ − λ_k)^{m_k} ],   (21)

so that the components can be read off from the partial-fraction decomposition of C(λ)/ψ(λ):

Z_kj = 1/[(j − 1)!(m_k − j)!] · { d^{m_k−j}/dλ^{m_k−j} [ C(λ)/ψ_k(λ) ] }_{λ=λ_k}   (j = 1, 2, …, m_k; k = 1, 2, …, s),   (22)

where ψ_k(λ) = ψ(λ)/(λ − λ_k)^{m_k}.

When we replace the constituent matrices in (17) by their expressions (22), we can represent the fundamental formula (17) in the form

f(A) = Σ_{k=1}^{s} 1/(m_k − 1)! · { d^{m_k−1}/dλ^{m_k−1} [ f(λ)C(λ)/ψ_k(λ) ] }_{λ=λ_k}.   (23)

¹⁰ For f(μ) = 1/(λ − μ) we have f(A) = (λE − A)^{-1}. For, f(A) = r(A), where r(μ) is the Lagrange–Sylvester interpolation polynomial. From the fact that f(μ) and r(μ) coincide on the spectrum of A it follows that (λ − μ)r(μ) and (λ − μ)f(μ) ≡ 1 coincide on this spectrum. Hence (λE − A)r(A) = (λE − A)f(A) = E.
Example 1: Let

    |  2  −1  1 |              | λ−2    1   −1  |
A = |  0   1  1 |,   λE − A =  |  0   λ−1   −1  |.
    | −1   1  1 |              |  1    −1   λ−1 |

In this case Δ(λ) = |λE − A| = (λ − 1)²(λ − 2). Since the minor of the element in the first row and second column of λE − A is equal to 1, we have D_2(λ) = 1 and, therefore,

ψ(λ) = Δ(λ) = (λ − 1)²(λ − 2) = λ³ − 4λ² + 5λ − 2,

Ψ(λ, μ) = (ψ(λ) − ψ(μ)) / (λ − μ) = μ² + (λ − 4)μ + λ² − 4λ + 5,

and

C(λ) = Ψ(λE, A) = A² + (λ − 4)A + (λ² − 4λ + 5)E

     |  3  −2  2 |           |  2  −1  1 |                  | 1  0  0 |
   = | −1   2  2 | + (λ − 4) |  0   1  1 | + (λ² − 4λ + 5)  | 0  1  0 |.
     | −3   3  1 |           | −1   1  1 |                  | 0  0  1 |

The fundamental formula has in this case the form

f(A) = f(1)Z_11 + f′(1)Z_12 + f(2)Z_21.   (24)

Setting f(μ) = 1/(λ − μ), we find:

(λE − A)^{-1} = C(λ)/ψ(λ) = Z_11/(λ − 1) + Z_12/(λ − 1)² + Z_21/(λ − 2);

hence

Z_11 = −C(1) − C′(1),   Z_12 = −C(1),   Z_21 = C(2).

We now use the above expression for C(λ), compute Z_11, Z_12, Z_21, and substitute the results obtained in (24):¹¹

¹¹ The elements of the sum column are printed in italics and are used for checking the computation. When we multiply the rows of A into the sum column of B we obtain the sum column of AB.
            | 1   0  0 |         | 1  −1  1 |         |  0  0  0 |
f(A) = f(1) | 1   0  0 | + f′(1) | 1  −1  1 | + f(2)  | −1  1  0 |
            | 1  −1  1 |         | 0   0  0 |         | −1  1  0 |

       | f(1) + f′(1)          −f′(1)          f′(1) |
     = | f(1) + f′(1) − f(2)   −f′(1) + f(2)   f′(1) |.   (25)
       | f(1) − f(2)           −f(1) + f(2)    f(1)  |
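The component matrices found here can be verified numerically. As a sketch, take f(λ) = λ², for which f(1) = 1, f′(1) = 2, f(2) = 4; formula (24) must then reproduce A²:

```python
import numpy as np

A = np.array([[2.0, -1.0, 1.0],
              [0.0, 1.0, 1.0],
              [-1.0, 1.0, 1.0]])
E = np.eye(3)

# the three components computed in the example
Z11 = np.array([[1.0, 0, 0], [1, 0, 0], [1, -1, 1]])
Z12 = np.array([[1.0, -1, 1], [1, -1, 1], [0, 0, 0]])
Z21 = np.array([[0.0, 0, 0], [-1, 1, 0], [-1, 1, 0]])

# f(lambda) = lambda**2: f(1) = 1, f'(1) = 2, f(2) = 4
F = 1 * Z11 + 2 * Z12 + 4 * Z21
```

The same three matrices reproduce E for f(λ) = 1 (since Z_12 drops out and Z_11 + Z_21 = E), another quick consistency check.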
Example 2: Let us show that we can determine f(A) starting only from the fundamental formula. Again let

    |  2  −1  1 |
A = |  0   1  1 |.
    | −1   1  1 |

Then

f(A) = f(1)Z_1 + f′(1)Z_2 + f(2)Z_3.   (24′)

In (24′) we substitute for f(λ) in succession 1, λ − 1, (λ − 1)²:

              | 1  0  0 |
Z_1 + Z_3 = E = | 0  1  0 |,
              | 0  0  1 |

                  |  1  −1  1 |
Z_2 + Z_3 = A − E = |  0   0  1 |,
                  | −1   1  0 |

                 |  0  0  0 |
Z_3 = (A − E)² = | −1  1  0 |.
                 | −1  1  0 |

From the third equation and then from the first two, term by term, we can determine all the Z. Substituting in (24′), we obtain the expression for f(A).

4. The examples we have analyzed illustrate three methods for the practical computation of f(A). In the first method, we find the interpolation polynomial r(λ) and put f(A) = r(A). In the second method, we make use of the decomposition (21) and express the components Z_kj in (17) by the values of the reduced adjoint matrix C(λ) on the spectrum of A. In the third method, we start from the fundamental formula (17) and substitute in succession certain simple polynomials for f(λ); from the linear equations so obtained we determine the constituent matrices Z_kj.
The third method is perhaps the most convenient for practical purposes. In the general case it can be stated as follows: In (17) we substitute for f(λ) successively certain polynomials g_1(λ), g_2(λ), …, g_m(λ):

g_i(A) = Σ_{k=1}^{s} [ g_i(λ_k)Z_k1 + g_i′(λ_k)Z_k2 + ⋯ + g_i^{(m_k−1)}(λ_k)Z_km_k ]   (i = 1, 2, …, m).   (26)

From the m equations (26) we determine the matrices Z_kj and substitute the expressions so obtained in (17). The result of eliminating the Z_kj from the (m + 1) equations (26) and (17) can be written in the form

| f(A)    f(λ_1)    f′(λ_1)    …   f^{(m_s−1)}(λ_s)   |
| g_1(A)  g_1(λ_1)  g_1′(λ_1)  …   g_1^{(m_s−1)}(λ_s) |
| …                                                   | = 0.
| g_m(A)  g_m(λ_1)  g_m′(λ_1)  …   g_m^{(m_s−1)}(λ_s) |

Expanding this determinant with respect to the elements of the first column, we obtain the required expression for f(A). As the factor of f(A) we have here the determinant Δ = |g_i^{(j−1)}(λ_k)| (in the i-th row of Δ there are found the values of the polynomial g_i(λ) on the spectrum of A; i = 1, 2, …, m). In order to determine f(A) we must have Δ ≠ 0. This will be so if no linear combination¹² of the polynomials g_1(λ), …, g_m(λ) vanishes completely on the spectrum of A, i.e., is divisible by ψ(λ). The condition Δ ≠ 0 is always satisfied when the degrees of the polynomials g_1(λ), g_2(λ), …, g_m(λ) are 0, 1, …, m − 1, respectively.¹³

5. In conclusion, we mention that high powers A^N of a matrix can be conveniently computed by formula (17), setting f(λ) = λ^N.¹⁴

Example: Given the matrix

A = | 5  −4 |
    | 4  −3 |,

it is required to compute the elements of A¹⁰⁰. The minimal polynomial of the matrix is ψ(λ) = (λ − 1)².

¹² With coefficients not all equal to zero.
¹³ In the last example, m = 3, g_1(λ) = 1, g_2(λ) = λ − 1, g_3(λ) = (λ − 1)².
¹⁴ Formula (17) may also be used to compute the inverse matrix A^{-1}, by setting f(λ) = 1/λ or, what is the same, by setting λ = 0 in (21).
The fundamental formula is

f(A) = f(1)Z_1 + f′(1)Z_2.

Replacing f(λ) successively by 1 and λ − 1, we obtain:

Z_1 = E,   Z_2 = A − E.

Therefore

f(A) = f(1)E + f′(1)(A − E).

Setting f(λ) = λ¹⁰⁰, we find:

A¹⁰⁰ = E + 100(A − E) = | 1  0 | + 100 | 4  −4 | = | 401  −400 |.
                        | 0  1 |       | 4  −4 |   | 400  −399 |
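A quick numerical check of this computation (NumPy):

```python
import numpy as np

A = np.array([[5, -4],
              [4, -3]])
E = np.eye(2, dtype=int)

# psi(lambda) = (lambda - 1)^2, hence f(A) = f(1) E + f'(1)(A - E);
# for f(lambda) = lambda^100 this gives f(1) = 1, f'(1) = 100
A100 = E + 100 * (A - E)
```

Two matrix additions replace ninety-nine matrix multiplications.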
§ 4. Representation of Functions of Matrices by means of Series

1. Let A = ‖a_ik‖₁ⁿ be a matrix and f_1(λ), f_2(λ), … a sequence of functions defined on the spectrum of A.

Theorem 1: The sequence of matrices f_p(A) converges for p → ∞ to some limit if and only if the sequence f_p(λ) converges for p → ∞ on the spectrum of A to a limit, i.e., the limits

lim_{p→∞} f_p(A)   and   lim_{p→∞} f_p(Λ_A)

always exist simultaneously. Moreover, the equation

lim_{p→∞} f_p(Λ_A) = f(Λ_A)   (27)

implies that

lim_{p→∞} f_p(A) = f(A),   (28)

and conversely.

Proof. 1) If the values of f_p(λ) converge on the spectrum of A for p → ∞ to limit values, then from the formulas

f_p(A) = Σ_{k=1}^{s} [ f_p(λ_k)Z_k1 + f_p′(λ_k)Z_k2 + ⋯ + f_p^{(m_k−1)}(λ_k)Z_km_k ]   (29)

there follows the existence of the limit lim_{p→∞} f_p(A). On the basis of this formula and of (17) we deduce (28) from (27).

2) Suppose, conversely, that lim_{p→∞} f_p(A) exists. Since the m constituent matrices Z_kj are linearly independent, we can express, by (29), the m values of f_p(λ) on the spectrum of A (as linear forms) by the elements of the matrix f_p(A). Hence the existence of the limit lim_{p→∞} f_p(Λ_A) follows, and (27) holds in the presence of (28).

According to this theorem, if a sequence of polynomials g_p(λ) (p = 1, 2, 3, …) converges to the function f(λ) on the spectrum of A, then

lim_{p→∞} g_p(A) = f(A).

2. This formula underlines the naturalness and generality of our definition of f(A): f(A) is always obtained from the g_p(A) by passing to the limit p → ∞, provided only that the sequence of polynomials g_p(λ) converges to f(λ) on the spectrum of A. The latter condition is necessary for the existence of the limit lim_{p→∞} g_p(A).
We shall say that the series Σ_{p=0}^{∞} u_p(λ) converges on the spectrum of A to the function f(λ), and we shall write

    f(Λ_A) = Σ_{p=0}^{∞} u_p(Λ_A),   (30)

if all the functions occurring here are defined on the spectrum of A and the following equations hold:

    f^{(j)}(λ_k) = Σ_{p=0}^{∞} u_p^{(j)}(λ_k)   (j = 0, 1, ..., m_k − 1; k = 1, 2, ..., s),

where the series on the right-hand sides of these equations converge. In other words, if we set

    s_p(λ) = Σ_{q=0}^{p} u_q(λ)   (p = 0, 1, 2, ...),

then (30) is equivalent to

    f(Λ_A) = lim_{p→∞} s_p(Λ_A).   (31)
It is obvious that the theorem just proved can be stated in the following equivalent form:

Theorem 1′: The series Σ_{p=0}^{∞} u_p(A) converges to a matrix if and only if the series Σ_{p=0}^{∞} u_p(λ) converges on the spectrum of A. Moreover, the equation

    f(Λ_A) = Σ_{p=0}^{∞} u_p(Λ_A)

implies that

    f(A) = Σ_{p=0}^{∞} u_p(A),

and conversely.

3. Suppose a power series is given with the circle of convergence |λ − λ_0| < R and the sum f(λ):

    f(λ) = Σ_{p=0}^{∞} α_p (λ − λ_0)^p   (|λ − λ_0| < R).   (32)
Since a power series may be differentiated term by term any number of times within the circle of convergence, (32) converges on the spectrum of any matrix whose characteristic values lie within the circle of convergence. Thus we have:

Theorem 2: If the function f(λ) can be expanded in a power series in the circle |λ − λ_0| < R,

    f(λ) = Σ_{p=0}^{∞} α_p (λ − λ_0)^p,   (33)

then this expansion remains valid when the scalar argument λ is replaced by a matrix A whose characteristic values lie within the circle of convergence.

Note. In this theorem we may allow a characteristic value λ_k of A to fall on the circumference of the circle of convergence; but we must then postulate in addition that the series (33), differentiated m_k − 1 times term by term, should converge at the point λ = λ_k. It is well known that this already implies the convergence of the j times differentiated series (33) at the point λ_k to f^{(j)}(λ_k) for j = 0, 1, ..., m_k − 1.

The theorem just proved leads, for example, to the following expansions:¹⁵

    e^A = Σ_{p=0}^{∞} A^p / p!,

    cos A = Σ_{p=0}^{∞} (−1)^p A^{2p} / (2p)!,       sin A = Σ_{p=0}^{∞} (−1)^p A^{2p+1} / (2p+1)!,

    cosh A = Σ_{p=0}^{∞} A^{2p} / (2p)!,             sinh A = Σ_{p=0}^{∞} A^{2p+1} / (2p+1)!,

    (E − A)^{−1} = Σ_{p=0}^{∞} A^p                   (|λ_k| < 1; k = 1, 2, ..., s),

    ln A = Σ_{p=1}^{∞} (−1)^{p−1} (A − E)^p / p      (|λ_k − 1| < 1; k = 1, 2, ..., s)

(by ln A we mean here the so-called principal value of the many-valued function Ln λ, i.e., that branch for which Ln 1 = 0).

Let G(u_1, u_2, ..., u_l) be a polynomial in u_1, u_2, ..., u_l; let f_1(λ), f_2(λ), ..., f_l(λ) be functions of λ defined on the spectrum of the matrix A, and let

    g(λ) = G[f_1(λ), f_2(λ), ..., f_l(λ)].   (34)

Then from

    g(Λ_A) = 0   (35)

there follows:

    G[f_1(A), f_2(A), ..., f_l(A)] = 0.

¹⁵ The expansions in the first two rows hold for an arbitrary matrix A.
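Theorem 1′ can be illustrated numerically: the partial sums of the exponential series converge entrywise to e^A for every matrix. A short sketch (Python; numpy and scipy are assumed, with scipy's `expm` playing the role of the limit function):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Partial sums s_p(A) = sum_{k=0}^{p} A^k / k! of the exponential series.
S = np.zeros((4, 4))
term = np.eye(4)
for k in range(1, 60):
    S += term           # add A^(k-1) / (k-1)!
    term = term @ A / k

assert np.allclose(S, expm(A))
```

Sixty terms suffice here because the factorial in the denominator eventually dominates any fixed power of A.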
For let us denote by r_1(λ), r_2(λ), ..., r_l(λ) the Lagrange-Sylvester interpolation polynomials for f_1(λ), f_2(λ), ..., f_l(λ), and let us set:

    h(λ) = G[r_1(λ), r_2(λ), ..., r_l(λ)].

Then (35) implies h(Λ_A) = 0. Hence it follows that h(A) = 0, so that

    G[f_1(A), f_2(A), ..., f_l(A)] = G[r_1(A), r_2(A), ..., r_l(A)] = h(A) = 0,   (36)

and this is what we had to show.

This result allows us to extend identities between functions of a scalar variable to matrix values of the argument. For example, from cos²λ + sin²λ = 1 we obtain for an arbitrary matrix A

    cos²A + sin²A = E

(in this case G(u_1, u_2) = u_1² + u_2² − 1, f_1(λ) = cos λ, and f_2(λ) = sin λ). Similarly, for every matrix A

    e^A e^{−A} = E,   i.e.,   e^{−A} = (e^A)^{−1}.

Further, for every matrix A

    e^{iA} = cos A + i sin A.

Let A be a non-singular matrix (|A| ≠ 0). We denote by √λ the single-valued branch of the many-valued function √λ that is defined in a domain not containing the origin and containing all the characteristic values of A. Then √A has a meaning. From (√λ)² − λ = 0 it now follows that

    (√A)² = A.

Let f(λ) = 1/λ and let A = ‖a_ik‖₁ⁿ be a non-singular matrix. Then f(λ) is defined on the spectrum of A, and in the identity λ · (1/λ) = 1 we can therefore replace λ by A:

    A f(A) = E,   i.e.,¹⁶   f(A) = A^{−1}.

Denoting by r(λ) the interpolation polynomial for the function 1/λ, we may represent the inverse matrix A^{−1} in the form of a polynomial in A:

¹⁶ We have already made use of this on p. 10 .
¹⁷ See footnote 10.
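The scalar identities just extended to matrices can be checked for a concrete matrix; the sketch below (Python, assuming scipy's matrix functions `expm`, `cosm`, `sinm`) verifies cos²A + sin²A = E, e^A e^{−A} = E, and e^{iA} = cos A + i sin A:

```python
import numpy as np
from scipy.linalg import expm, cosm, sinm

A = np.array([[0.0, 1.0],
              [-2.0, 0.3]])
E = np.eye(2)

C, S = cosm(A), sinm(A)

assert np.allclose(C @ C + S @ S, E)          # cos^2 A + sin^2 A = E
assert np.allclose(expm(A) @ expm(-A), E)     # e^A e^{-A} = E
assert np.allclose(expm(1j * A), C + 1j * S)  # e^{iA} = cos A + i sin A
```

The identities hold to rounding error for any square matrix, since all three functions are entire.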
    A^{−1} = r(A).

Let us consider a rational function q(λ) = g(λ)/h(λ), where g(λ) and h(λ) are coprime polynomials in λ. This function is defined on the spectrum of A if and only if the characteristic values of A are not roots of h(λ), i.e.,¹⁷ if |h(A)| ≠ 0. Under this assumption we may replace λ by A in the identity

    q(λ) h(λ) = g(λ),

obtaining:

    q(A) h(A) = g(A).

Hence

    q(A) = g(A) [h(A)]^{−1} = [h(A)]^{−1} g(A).   (37)
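Formula (37) can be sketched numerically for a hypothetical choice q(λ) = (λ + 1)/(λ − 5) (the value 5 is not a characteristic value of the test matrix, so q is defined on its spectrum); both factor orders give the same q(A), since g(A) and h(A), being polynomials in A, commute:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])       # characteristic values 1 and 3
E = np.eye(2)

g = A + E                        # g(A) for g(lam) = lam + 1
h_inv = np.linalg.inv(A - 5*E)   # [h(A)]^{-1} for h(lam) = lam - 5

# The two products in (37) agree because g(A) and h(A) commute.
qA = g @ h_inv
assert np.allclose(qA, h_inv @ g)
```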
Notes. 1) If A is a linear operator in an n-dimensional space R, then f(A) is defined exactly like f(A) for a matrix: f(A) = r(A), where r(λ) is the Lagrange-Sylvester interpolation polynomial for f(λ) on the spectrum of the operator A (the spectrum of A is determined by the minimal annihilating polynomial ψ(λ) of A). According to this definition, if the matrix A = ‖a_ik‖₁ⁿ corresponds to the operator A in some basis of the space, then in the same basis the matrix f(A) corresponds to the operator f(A). All the statements of this chapter in which there occurs a matrix A remain valid after replacement of the matrix A by the operator A.

2) We can also define¹⁸ a function of a matrix f(A) starting from the characteristic polynomial

    Δ(λ) = Π_{k=1}^{s} (λ − λ_k)^{n_k}

instead of the minimal polynomial

    ψ(λ) = Π_{k=1}^{s} (λ − λ_k)^{m_k}.

See (25) on p. 84.

¹⁸ See, for example, MacMillan, W. D., Dynamics of Rigid Bodies (New York, 1936).
We have then to set f(A) = g(A), where g(λ) is an interpolation polynomial of degree less than n modulo Δ(λ) of the function f(λ).¹⁹ The formulas (17), (21), and (23) are then to be replaced by the corresponding formulas

    f(A) = Σ_{k=1}^{s} [f(λ_k) Z̃_{k1} + f′(λ_k) Z̃_{k2} + ··· + f^{(n_k−1)}(λ_k) Z̃_{k n_k}],   (17′)

and (21′), (23′), which are obtained from (21) and (23) by replacing ψ(λ) by Δ(λ), the exponents m_k by n_k, and Ψ_k(λ) by

    Δ_k(λ) = Δ(λ) / (λ − λ_k)^{n_k}   (k = 1, 2, ..., s).

However, in (17′) the values f^{(m_k)}(λ_k), ..., f^{(n_k−1)}(λ_k) occur only fictitiously, because a comparison of (21) with (21′) yields:

    Z̃_{k1} = Z_{k1}, ..., Z̃_{k m_k} = Z_{k m_k},   Z̃_{k, m_k+1} = ··· = Z̃_{k n_k} = 0.

¹⁹ The polynomial g(λ) is not uniquely determined by the equation f(A) = g(A).

§ 5. Application of a Function of a Matrix to the Integration of a System of Linear Differential Equations with Constant Coefficients

1. We begin by considering a system of homogeneous linear differential equations of the first order with constant coefficients:

    dx_1/dt = a_11 x_1 + a_12 x_2 + ··· + a_1n x_n,
    dx_2/dt = a_21 x_1 + a_22 x_2 + ··· + a_2n x_n,
    ···········································   (38)
    dx_n/dt = a_n1 x_1 + a_n2 x_2 + ··· + a_nn x_n,

where t is the independent variable, x_1, x_2, ..., x_n are unknown functions of t, and the a_ik (i, k = 1, 2, ..., n) are constant complex coefficients. We introduce the square matrix A = ‖a_ik‖₁ⁿ of the coefficients and the column matrix x = (x_1, x_2, ..., x_n). Then the system (38) can be written in the form of a single matrix differential equation

    dx/dt = Ax.

§ 6. Stability of Motion in the Case of a Linear System

The motion is called stable in the sense of Lyapunov²⁸ if for every ε > 0 we can find a δ > 0 such that for arbitrary initial values of the parameters x_10, x_20, ..., x_n0 (for t = t_0) with moduli less than δ the parameters x_1, x_2, ..., x_n remain of moduli less than ε for the whole time of the motion (t ≥ t_0), i.e., if for every ε > 0 we can find a δ > 0 such that from

    |x_i0| < δ   (i = 1, 2, ..., n)   (66)

it follows that

    |x_i(t)| < ε   (i = 1, 2, ..., n; t ≥ t_0).   (67)

We consider the system of linear homogeneous differential equations²⁷

    dx_i/dt = Σ_{k=1}^{n} p_ik(t) x_k   (i = 1, 2, ..., n),   (68)

where the p_ik(t) are continuous functions for t ≥ t_0 (i, k = 1, 2, ..., n). In matrix form the system (68) can be written as follows:

²⁷ In these parameters the motion to be studied is characterized by the constant zero values x_1 = 0, x_2 = 0, ..., x_n = 0. Therefore in the mathematical treatment of the problem we speak of the 'stability' of the zero solution of the system (65) of differential equations.
²⁸ See [14], p. 13; [9], pp. 10-11; or [36], pp. 11-12. See also [3].
    dx/dt = P(t) x,   (68′)

where x is the column matrix with the elements x_1, x_2, ..., x_n and P(t) = ‖p_ik(t)‖₁ⁿ is the coefficient matrix.

We denote by

    (q_1j(t), q_2j(t), ..., q_nj(t))   (j = 1, 2, ..., n)   (69)

n linearly independent solutions of (68).²⁹ The matrix Q(t) = ‖q_ij(t)‖₁ⁿ whose columns are these solutions is called an integral matrix of the system (68).

Every solution of the system of linear homogeneous differential equations is obtained as a linear combination of n linearly independent solutions with constant coefficients:

    x_i = Σ_{j=1}^{n} c_j q_ij(t)   (i = 1, 2, ..., n),

or in matrix form,

    x = Q(t) c,   (70)

where c is the column matrix whose elements are arbitrary constants c_1, c_2, ..., c_n.

We now choose the special integral matrix for which

    Q(t_0) = E;   (71)

in other words, in the choice of the n linearly independent solutions (69) we shall start from the following special initial conditions:³⁰

    q_ij(t_0) = δ_ij   (i, j = 1, 2, ..., n).

Then, setting t = t_0 in (70), we find from (71): x_0 = c, and therefore formula (70) assumes the form

    x = Q(t) x_0   (72)

or, in expanded form,

    x_i = Σ_{j=1}^{n} q_ij(t) x_j0   (i = 1, 2, ..., n).   (72′)

²⁹ Here the second subscript j denotes the number of the solution.
³⁰ Arbitrary initial conditions determine uniquely a certain solution of a given system.
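For the constant-coefficient system (38) the normalized integral matrix is Q(t) = e^{A(t−t_0)}, and formula (72) gives the solution directly. A numerical sketch (Python, assuming scipy) comparing x = Q(t)x_0 with a direct numerical integration of the system:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
x0 = np.array([1.0, 0.0])
t0, t1 = 0.0, 3.0

# Q(t) = e^{A(t - t0)} satisfies dQ/dt = AQ, Q(t0) = E,
# so x(t) = Q(t) x0 by formula (72).
x_exact = expm(A * (t1 - t0)) @ x0

sol = solve_ivp(lambda t, x: A @ x, (t0, t1), x0,
                rtol=1e-10, atol=1e-12)
assert np.allclose(x_exact, sol.y[:, -1], atol=1e-6)
```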
We consider three cases:

1. Q(t) is a bounded matrix in the interval (t_0, +∞), i.e., there exists a number M such that

    |q_ij(t)| ≤ M   (t ≥ t_0; i, j = 1, 2, ..., n).

In this case it follows from (72′) that

    |x_i(t)| ≤ nM max_j |x_j0|   (i = 1, 2, ..., n; t ≥ t_0).

The condition of stability is satisfied (it is sufficient to take δ = ε/(nM) in (66) and (67)). The motion characterized by the zero solution x_1 = 0, x_2 = 0, ..., x_n = 0 is stable.

2. lim_{t→+∞} Q(t) = 0. In this case the matrix Q(t) is bounded in the interval (t_0, +∞) and therefore, as we have already explained, the motion is stable. Moreover, it follows from (72) that lim_{t→+∞} x(t) = 0 for every x_0. The motion is asymptotically stable.

3. Q(t) is an unbounded matrix in the interval (t_0, +∞). This means that at least one of the functions q_ij(t), say q_hk(t), is not bounded in the interval. We take the initial conditions x_10 = 0, x_20 = 0, ..., x_{k−1,0} = 0, x_k0 ≠ 0, x_{k+1,0} = 0, ..., x_n0 = 0. Then

    x_h(t) = q_hk(t) x_k0.

However small in modulus x_k0 may be, the function x_h(t) is unbounded. The condition (67) is not satisfied for any δ > 0. The motion is unstable.

Suppose now that P(t) = P is a constant matrix, so that Q(t) = e^{P(t−t_0)}. Let α_0 denote the maximal real part of the characteristic values λ_k of P, and among the λ_k with Re λ_k = α_0 let m_0 be the maximal value of m_k. The expression (76) can be represented in the form
    e^{α_0(t−t_0)} (t − t_0)^{m_0−1} [ Σ_{j=1}^{r} e^{iβ_j(t−t_0)} Z_j + (*) ],

where β_1, β_2, ..., β_r are distinct real numbers (the imaginary parts of the characteristic values with real part α_0 and index m_0), the Z_j are the corresponding non-zero constituent matrices, and (*) denotes a matrix that tends to zero as t → +∞. From this representation it follows that the matrix e^{P(t−t_0)} is not bounded for α_0 ≥ 0, α_0 + m_0 − 1 > 0, because the matrix Σ_{j=1}^{r} e^{iβ_j(t−t_0)} Z_j cannot converge to zero for t → +∞. We can see this by showing that a function

    f(t) = Σ_{j=1}^{r} c_j e^{iβ_j t},

where the c_j are complex numbers and the β_j real and distinct numbers, can converge to zero for t → +∞ only when f(t) ≡ 0. But, in fact, it follows from lim_{t→+∞} f(t) = 0 that

    lim_{T→∞} (1/T) ∫_0^T |f(t)|² dt = 0;

since the functions e^{iβ_j t} with distinct β_j are orthogonal in the mean, this mean value equals |c_1|² + |c_2|² + ··· + |c_r|², and therefore

    c_1 = c_2 = ··· = c_r = 0.
The results of the investigation may be formulated in the form of the following theorem:³²

Theorem 3: The zero solution of the linear system (68) for P = const. is stable in the sense of Lyapunov if 1) the real parts of all the characteristic values of P are negative or zero, and 2) those characteristic values whose real part is zero, i.e., the pure imaginary characteristic values (if any such exist), are simple roots of the minimal polynomial of P; and it is unstable if at least one of the conditions 1), 2) is violated.

The zero solution of the linear system (68) is asymptotically stable if and only if all the characteristic values of P have negative real parts.

The considerations above enable us to make a statement about the nature of the integral matrix e^{P(t−t_0)} in the general case of arbitrary characteristic values of the constant matrix P.

Theorem 4: The integral matrix e^{P(t−t_0)} of the linear system (68) for P = const. is always representable in the form

    e^{P(t−t_0)} = Z_−(t) + Z_0 + Z_+(t),

where

1) lim_{t→+∞} Z_−(t) = 0,
2) Z_0 is either constant or is a bounded matrix in the interval (t_0, +∞) that does not have a limit for t → +∞,
3) Z_+(t) ≡ 0 or Z_+(t) is an unbounded matrix in the interval (t_0, +∞).

Proof. On the right-hand side of (76) we divide all the summands into three groups. We denote by Z_−(t) the sum of all the terms containing the factors e^{λ_k(t−t_0)} with Re λ_k < 0. We denote by Z_0 the sum of all those matrices Z_{k1} for which Re λ_k = 0. We denote by Z_+(t) the sum of all the remaining terms. It is easy to see that Z_−(t), Z_0, and Z_+(t) have the properties 1), 2), 3) of the theorem.

³² On the question of sharpening the criteria of stability and instability for quasi-linear systems (i.e., of non-linear systems that become linear after neglecting the non-linear terms), see further Chapter XIV, § 3.
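The conditions of Theorem 3 can be checked mechanically. A sketch (Python with sympy; the function name `zero_solution_stability` is ours) classifies the zero solution of dx/dt = Px for a constant P, using the fact that a pure imaginary characteristic value is a simple root of the minimal polynomial exactly when its geometric multiplicity equals its algebraic multiplicity:

```python
import sympy as sp

def zero_solution_stability(P):
    """Classify the zero solution of dx/dt = P x (P constant),
    following Theorem 3."""
    P = sp.Matrix(P)
    eigen = P.eigenvects()  # list of (eigenvalue, alg. mult., eigenvectors)
    reals = [sp.re(lam) for lam, _, _ in eigen]
    if all(r.is_negative for r in reals):
        return "asymptotically stable"
    if any(r.is_positive for r in reals):
        return "unstable"
    # Characteristic values with Re = 0 must be simple roots of the
    # minimal polynomial: geometric multiplicity = algebraic multiplicity.
    for lam, alg, vects in eigen:
        if sp.re(lam) == 0 and len(vects) < alg:
            return "unstable"
    return "stable"

assert zero_solution_stability([[-1, 0], [0, -2]]) == "asymptotically stable"
assert zero_solution_stability([[0, 1], [-1, 0]]) == "stable"    # values ±i, simple
assert zero_solution_stability([[0, 1], [0, 0]]) == "unstable"   # 0 is a double root
```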
CHAPTER VI

EQUIVALENT TRANSFORMATIONS OF POLYNOMIAL MATRICES. ANALYTIC THEORY OF ELEMENTARY DIVISORS

The first three sections of this chapter deal with the theory of equivalent polynomial matrices. On the basis of this, we shall develop, in the next three sections, the analytic theory of elementary divisors, i.e., the theory of the reduction of a constant (non-polynomial) square matrix A to a normal form Ã (A = TÃT^{−1}). In the last two sections of the chapter two methods for the construction of the transforming matrix T will be given.
§ 1. Elementary Transformations of a Polynomial Matrix

1. Definition 1: A polynomial matrix, or λ-matrix, is a rectangular matrix A(λ) whose elements are polynomials in λ:

    A(λ) = ‖a_ik(λ)‖ = ‖a_ik^{(0)} λ^l + a_ik^{(1)} λ^{l−1} + ··· + a_ik^{(l)}‖   (i = 1, 2, ..., m; k = 1, 2, ..., n);

here l is the largest of the degrees of the polynomials a_ik(λ). Setting

    a_ik(D) = a_ik^{(0)} D^l + a_ik^{(1)} D^{l−1} + ··· + a_ik^{(l)}   (i = 1, 2, ..., m; k = 1, 2, ..., n),

each a_ik(D) is a polynomial in D with constant coefficients; here D = d/dt is the differential operator. The matrix of operator coefficients

    A(D) = ‖a_ik(D)‖   (i = 1, 2, ..., m; k = 1, 2, ..., n)

is a polynomial matrix, or D-matrix.

Clearly, the left elementary operation 1. on the matrix A(D) signifies term-by-term multiplication of the i-th differential equation of the system by the number c ≠ 0. The left elementary operation 2. signifies the term-by-term addition to the i-th equation of the j-th equation, previously subjected to the differential operator b(D). The left elementary operation 3. signifies an interchange of the i-th and j-th equations. Thus, if we replace in (4) the matrix A(D) of operator coefficients by a left-equivalent matrix B(D), we obtain a deduced system of equations. Since, conversely, by the same reasoning, the original system is a consequence of the new system, the two systems of equations are equivalent.⁶

It is not difficult in this example to interpret the right elementary operations as well. The first of them signifies the introduction of a new unknown function x̃_i = (1/c) x_i for the unknown function x_i; the second signifies the introduction of a new unknown function x̃_j = x_j + b(D) x_i (instead of x_j); the third signifies the interchange of the terms in the equations that contain x_i and x_j (i.e., x̃_i = x_j, x̃_j = x_i).

By left elementary operations a polynomial matrix can be brought into the form

    ‖ b_11(λ)  b_12(λ)  ···  b_1n(λ) ‖
    ‖   0      b_22(λ)  ···  b_2n(λ) ‖
    ‖ ······························ ‖
    ‖   0        0      ···  b_nn(λ) ‖
    ‖   0        0      ···    0     ‖
    ‖ ······························ ‖
    ‖   0        0      ···    0     ‖    (m ≥ n)

or, correspondingly,

    ‖ b_11(λ)  ···  b_1m(λ)  ···  b_1n(λ) ‖
    ‖   0      ···  b_2m(λ)  ···  b_2n(λ) ‖
    ‖ ··································· ‖
    ‖   0      ···  b_mm(λ)  ···  b_mn(λ) ‖    (m ≤ n).   (5)
If the polynomial b_22(λ) is not identically equal to zero, then by applying a left elementary operation of the second type we can make the degree of the element b_12(λ) less than the degree of b_22(λ) (if b_22(λ) is of degree zero, then b_12(λ) becomes identically equal to zero). In the same way, if b_33(λ) ≢ 0, then by left elementary operations of the second type we make the degrees of the elements b_13(λ), b_23(λ) less than the degree of b_33(λ) without changing the elements b_11(λ), b_12(λ), b_22(λ), etc. We have established the following theorem:

Theorem 1: An arbitrary rectangular polynomial matrix of dimension m × n can always be brought into the form (5) by means of left elementary operations, where the polynomials b_1k(λ), b_2k(λ), ..., b_{k−1,k}(λ) are of degree less than that of b_kk(λ), provided b_kk(λ) ≢ 0, and are all identically equal to zero if b_kk(λ) = const. ≠ 0 (k = 2, 3, ..., min(m, n)).

Similarly, we prove

Theorem 2: An arbitrary rectangular polynomial matrix of dimension m × n can always be brought into the form

    ‖ c_11(λ)    0       ···    0     ‖
    ‖ c_21(λ)  c_22(λ)   ···    0     ‖
    ‖ ······························· ‖
    ‖ c_n1(λ)  c_n2(λ)   ···  c_nn(λ) ‖
    ‖ ······························· ‖
    ‖ c_m1(λ)  c_m2(λ)   ···  c_mn(λ) ‖    (m ≥ n)

or, correspondingly,

    ‖ c_11(λ)    0      ···    0       0  ···  0 ‖
    ‖ c_21(λ)  c_22(λ)  ···    0       0  ···  0 ‖
    ‖ ··········································· ‖
    ‖ c_m1(λ)  c_m2(λ)  ···  c_mm(λ)   0  ···  0 ‖    (m ≤ n)   (6)

by means of right elementary operations, where the polynomials c_k1(λ), c_k2(λ), ..., c_{k,k−1}(λ) are of degree less than that of c_kk(λ), provided c_kk(λ) ≢ 0, and are all identically equal to zero if c_kk(λ) = const. ≠ 0 (k = 2, 3, ..., min(m, n)).

2. From Theorems 1 and 2 we deduce the corollary:

Corollary: If the determinant of a square polynomial matrix P(λ) does not depend on λ and is different from zero, then the matrix can be represented in the form of a product of a finite number of elementary matrices.

For by Theorem 1 the matrix P(λ) can be brought into the form

    ‖ b_11(λ)  b_12(λ)  ···  b_1n(λ) ‖
    ‖   0      b_22(λ)  ···  b_2n(λ) ‖
    ‖ ······························ ‖   (7)
    ‖   0        0      ···  b_nn(λ) ‖

by left elementary operations, where n is the order of P(λ). Since in the application of elementary operations to a square polynomial matrix the determinant of the matrix is only multiplied by constant non-zero factors, the determinant of the matrix (7), like that of P(λ), does not depend on λ and is different from 0, i.e.,

    b_11(λ) b_22(λ) ··· b_nn(λ) = const. ≠ 0.

Hence b_kk(λ) = const. ≠ 0 (k = 1, 2, ..., n). But then, also by Theorem 1, the matrix (7) has the diagonal form ‖b_k δ_ik‖₁ⁿ and can therefore be reduced to the unit matrix E by means of left elementary operations of type 1. But then, conversely, the unit matrix E can be transformed into P(λ) by means of the left elementary operations whose matrices are S_1, S_2, ..., S_p. Therefore

    P(λ) = S_p S_{p−1} ··· S_1 E = S_p S_{p−1} ··· S_1.
§ 2. Canonical Form of a λ-Matrix
As we pointed out on p. 133, from this corollary there follows the equivalence of the two Definitions 2 and 2′ of equivalence of polynomial matrices.

3. Let us return to our example of the system of differential equations (4). We apply Theorem 1 to the matrix ‖a_ik(D)‖ of operator coefficients. As we have shown on p. 135, the system (4) is then replaced by an equivalent system

    b_11(D)x_1 + b_12(D)x_2 + ··· + b_1s(D)x_s = −b_{1,s+1}(D)x_{s+1} − ··· − b_1n(D)x_n,
                 b_22(D)x_2 + ··· + b_2s(D)x_s = −b_{2,s+1}(D)x_{s+1} − ··· − b_2n(D)x_n,
                 ······························································
                                    b_ss(D)x_s = −b_{s,s+1}(D)x_{s+1} − ··· − b_sn(D)x_n,

where s = min(m, n). In this system we may choose the functions x_{s+1}, ..., x_n arbitrarily, after which the functions x_s, ..., x_1 can be determined successively; however, at each stage of this process only one differential equation with one unknown function has to be integrated.

4. We now pass on to establishing the canonical form into which a rectangular matrix A(λ) can be brought by applying to it both left and right elementary operations.

Among all the elements a_ik(λ) of A(λ) that are not identically equal to zero we choose one which has the least degree in λ, and by suitable permutations of the rows and columns we make this element into a_11(λ). Then we find the quotients and remainders of the polynomials a_i1(λ) and a_1k(λ) on division by a_11(λ):

    a_i1(λ) = a_11(λ) q_i1(λ) + r_i1(λ),   a_1k(λ) = a_11(λ) q_1k(λ) + r_1k(λ)
    (i = 2, 3, ..., m; k = 2, 3, ..., n).

If at least one of the remainders r_i1(λ), r_1k(λ) (i = 2, ..., m; k = 2, ..., n), for example r_1k(λ), is not identically equal to zero, then by subtracting from the k-th column the first column multiplied by q_1k(λ), we replace a_1k(λ) by the remainder r_1k(λ), which is of smaller degree than a_11(λ). Then we can again reduce the degree of the element in the top left corner of the matrix by putting in its place an element of smaller degree in λ.

But if all the remainders r_21(λ), ..., r_m1(λ); r_12(λ), ..., r_1n(λ) are identically equal to zero, then by subtracting from the i-th row the first multiplied by q_i1(λ) (i = 2, ..., m) and from the k-th column the first multiplied by q_1k(λ) (k = 2, ..., n), we reduce our polynomial matrix to the form

    ‖ a_11(λ)    0       ···     0     ‖
    ‖   0      a_22(λ)   ···   a_2n(λ) ‖
    ‖ ································ ‖
    ‖   0      a_m2(λ)   ···   a_mn(λ) ‖ .
If at least one of the elements a_ik(λ) (i = 2, ..., m; k = 2, ..., n) is not divisible without remainder by a_11(λ), then by adding to the first column that column which contains such an element we arrive at the preceding case and can therefore again replace the element a_11(λ) by a polynomial of smaller degree.

Since the original element a_11(λ) had a definite degree and since the process of reducing this degree cannot be continued indefinitely, we must, after a finite number of elementary operations, obtain a matrix of the form

    ‖ a_1(λ)    0       ···    0      ‖
    ‖   0     b_22(λ)   ···  b_2n(λ)  ‖
    ‖ ······························· ‖
    ‖   0     b_m2(λ)   ···  b_mn(λ)  ‖

in which all the elements b_ik(λ) are divisible without remainder by a_1(λ).

… (16) … all the nonzero numbers among d′_1, …, h′, c_1, …, g″. Then the matrix C(λ) is equivalent to the matrix (14), and by a permutation of rows and of columns the latter can be brought into 'diagonal' form.
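The degree-reduction loop described in item 4 can be sketched in code (Python with sympy; the function name `reduce_corner` is ours, and the divisibility check on the remaining block, as well as the recursion down to the full 'diagonal' form, are omitted):

```python
import sympy as sp

lam = sp.symbols('lam')

def reduce_corner(M):
    """One stage of the reduction: by elementary row and column
    operations bring the corner element to a position where its row
    and column are otherwise zero (a sketch of the textbook loop,
    not a full canonical-form routine)."""
    M = sp.Matrix(M)
    m, n = M.shape
    while True:
        # Choose a nonzero entry of least degree and move it to (0, 0).
        best = None
        for i in range(m):
            for j in range(n):
                p = sp.Poly(M[i, j], lam)
                if not p.is_zero and (best is None or p.degree() < best[2]):
                    best = (i, j, p.degree())
        if best is None:
            return M                     # zero matrix
        i0, j0, _ = best
        if i0 != 0:
            M = M.elementary_row_op('n<->m', row1=0, row2=i0)
        if j0 != 0:
            M = M.elementary_col_op('n<->m', col1=0, col2=j0)
        a = M[0, 0]
        done = True
        for i in range(1, m):            # clear the first column
            q, r = sp.div(M[i, 0], a, lam)
            M[i, :] = (M[i, :] - q * M[0, :]).applyfunc(sp.expand)
            if sp.expand(r) != 0:
                done = False
        for j in range(1, n):            # clear the first row
            q, r = sp.div(M[0, j], a, lam)
            M[:, j] = (M[:, j] - q * M[:, 0]).applyfunc(sp.expand)
            if sp.expand(r) != 0:
                done = False
        if done:
            return M

A = sp.Matrix([[lam, lam**2],
               [lam + 1, lam**2 + 1]])
print(reduce_corner(A))   # Matrix([[1, 0], [0, lam**2 - lam]])
```

The loop terminates because every pass that leaves a nonzero remainder strictly lowers the least degree occurring in the matrix.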