
An Introduction to Statistical Analysis of Random Arrays

An Introduction to Statistical Analysis of Random Arrays

Vyacheslav L. Girko

VSP, Utrecht, The Netherlands, 1998

VSP BV P.O. Box 346 3700 AH Zeist The Netherlands

Tel: +31 30 692 5790 Fax: +31 30 693 2081 E-mail: [email protected] Home Page: http://www.vsppub.com

© VSP BV 1998. First published in 1998. ISBN 90-6764-293-2

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.

Printed in The Netherlands by Ridderprint bv, Ridderkerk.

CONTENTS

List of basic notations and assumptions
Preface and some historical remarks

Chapter 1. Introduction to the theory of sample matrices of fixed dimension
1. Perturbation formulae for eigenvalues and eigenvectors of a matrix
2. Measurability and nondegeneracy of eigenvalues of random matrices
3. Maximum likelihood estimates of the parameters of a multivariate normal distribution
4. Wishart density w_n(a, R)
5. Generalized variance
6. Symmetric random real matrices
7. Random Hermitian matrices
8. Random antisymmetric matrices
9. Distribution of roots of a random polynomial with real coefficients
10. Expected number of real roots of a random polynomial with real coefficients
11. Expected number of real roots of a random real analytic function
12. Distribution of roots of a random polynomial with complex coefficients
13. Average density of complex roots of a random polynomial with complex coefficients
14. Expected normalized counting functions of the roots of the random polynomial obtained by V-transform
15. V-distribution of the roots of a random polynomial
16. Calculation of expected logarithm of the absolute value of a random polynomial
17. Distribution function of the eigenvalues of a random nonsymmetric real matrix when its eigenvalues are real
18. Random nonsymmetric real matrices
19. Random nonsymmetric real matrices with special form of selected eigenvectors
20. Random complex matrices
21. Random complex symmetric matrices
22. Random complex antisymmetric matrices
23. Distribution of triangular decomposition of random complex matrices
24. Distribution of triangular decomposition of unitary invariant random complex matrices
25. Distribution of the triangular decomposition of the triangular random complex matrices
26. Random unitary matrices
27. Orthogonal random matrices
28. The method of triangular matrices for rigorous derivation of the distribution function of eigenvalues of Gaussian nonsymmetric real matrix (Girko, 1980)
29. Straight and back spectral Kolmogorov equations for densities of eigenvalues of random symmetric matrix processes with independent increments

30. Spectral stochastic differential equations for random symmetric matrix processes with independent increments
31. Spectral stochastic differential equations for random matrix-valued processes with multiplicative independent increments
32. Asymptotics of eigenvalues and eigenvectors of empirical covariance and correlation matrices of fixed dimension

Chapter 2. Canonical equations
1. Canonical equation K1
2. Canonical equation K2
3. Canonical equation K3 for symmetric random matrices with infinitely small entries
4. Canonical equation K4 for symmetric random matrices with infinitely small entries
5. Canonical equation K5 for symmetric random matrices with infinitely small entries
6. Canonical equation K6 for symmetric random matrices with identically distributed entries
7. Canonical equation K7 for random Gram matrix
8. Canonical equation K8
9. Canonical equation K9 for random matrices whose entries have identity variances
10. Canonical equation K10
11. Canonical equation K11. Limit theorem for normalized spectral functions of empirical covariance matrices under the modified Lindeberg condition
12. Canonical equation K12 for random Gram matrices with infinitely small entries
13. Canonical equation K13 for random Gram matrices with infinitely small entries
14. Canonical equation K14 for random Gram matrices with infinitely small entries
15. Canonical equation K15 for random Gram matrices with identically distributed entries
16. Canonical equation K16
17. Canonical equation K17
18. Canonical equation K18
19. Canonical equation K19
20. Canonical equation K20
21. Canonical equation K21 for random matrices with independent pairs of entries
22. Canonical equation K22 for random matrices with independent pairs of entries which have zero expectation. Circular and Elliptic Laws
23. Canonical equation K23 for random matrices with independent pairs of entries which have different covariances and zero expectations
24. Canonical equation K24 for random G-matrices with infinitesimally small random entries
25. Canonical equation K25 for random G-matrices. Strong V-law
26. Canonical V-equation K26. The V-density of eigenvalues of random matrices, the variances of the entries of which form a doubly stochastic matrix
27. Canonical equation K27 for normalized spectral functions of random symmetric matrices with independent blocks
28. Canonical equation K28 for normalized spectral functions of random symmetric matrices with identically distributed independent blocks
29. Canonical equation K29. SS-Law
30. Canonical equation K30 for Fourier transform of resolvent of symmetric block random matrix
31. Canonical equation K31 for normalized spectral functions of random nonsymmetric matrices with independent blocks
32. Canonical equation K32 for normalized spectral functions of random nonsymmetric matrices with identically distributed independent blocks
33. Canonical equation K33 for Fourier transform of resolvent of Gram block random matrix
34. Canonical equation K34 for normalized spectral functions of empirical covariance matrix with asymptotically independent blocks
35. Canonical equation K35 for normalized spectral functions of random matrices pencil
36. Canonical equation K36 for normalized spectral functions of random matrices pencil
37. Canonical equation K37 for normalized spectral functions of empirical random matrices pencil
38. Canonical equation K38 for normalized spectral functions of random nonsymmetric matrices pencil
39. Stochastic canonical equation K39 for normalized spectral functions of random symmetric matrices
40. Stochastic canonical equation K40 for normalized spectral functions of random Gram matrices
41. Stochastic canonical equation K41 for normalized spectral functions of empirical covariance matrices
42. Stochastic canonical equation K42 for normalized spectral functions of random symmetric matrices with block structure
43. Stochastic canonical equation K43 for normalized spectral functions of random nonsymmetric block matrices
44. Stochastic canonical equation K44 for normalized spectral functions of empirical covariance matrices with block structure
45. Stochastic canonical equation K45 for normalized spectral functions of random matrices pencil
46. Canonical equation K46 for normalized spectral functions of band random matrices
47. Dyson canonical equation K47 for normalized spectral functions of band random matrices
48. Canonical equation K48 for normalized spectral functions of product of random matrices

49. Canonical equation K49 for normalized spectral functions of product of random matrices
50. Canonical equation K50 for normalized spectral functions of random S-matrices

Chapter 3. The First Law for the eigenvalues and eigenvectors of random symmetric matrices
1. Introduction and historical review. Some important formulas
2. Strong law for normalized spectral functions of random matrices with stochastically independent row vectors
3. Strong law for the entries of resolvents of random matrices
4. Invariance principle for the entries of resolvents of random matrices
5. Limit theorem for eigenvalues and eigenvectors of random symmetric matrices when Lindeberg's condition is not fulfilled. Stochastic power method
6. Weak limit theorem for eigenvalues and eigenvectors of random symmetric matrices when Lindeberg's condition is fulfilled. REFORM method
7. Accompanying canonical equation K1 for random symmetric matrices
8. Solution of the accompanying canonical equation is equal to the vector of Stieltjes transforms of certain distribution functions. Method of extending random matrices
9. Uniqueness and boundedness of the solution of the accompanying spectral equation L1
10. Equation C1 for the boundary points of accompanying spectral density
11. Existence of the solution of accompanying equation L1
12. Proof of the existence of the density of accompanying normalized spectral function based on the uniqueness of the solution of spectral equation L1
13. Uniform inequality for normalized spectral functions of random symmetric matrices
14. Invariance principle for the traces of resolvents of random matrices for the case of convergence of random eigenvalues in probability
15. Invariance principle for random matrices for the case of convergence of random eigenvalues with probability one
16. Main equation M1 for random symmetric matrices
17. Equation for the sum of smoothed distribution functions of eigenvalues of a random symmetric matrix for the case of convergence of random eigenvalues in probability
18. Equation for the sum of smoothed distribution functions of eigenvalues of a random symmetric matrix for the case of convergence of random eigenvalues with probability one
19. Double F-method for finding limits of eigenvalues for the case of convergence of random eigenvalues in probability
20. Double F-method for finding limits of eigenvalues for the case of convergence of random eigenvalues with probability one
21. Structure of the accompanying density. Limit theorems for extreme eigenvalues of random matrices

22. Weak limit theorem for eigenvalues of random symmetric matrices
23. Strong limit theorem for eigenvalues of random symmetric matrices
24. Main assertions
25. Correlation function of random eigenvalues

Chapter 4. The Second Law for the singular values and eigenvectors of random matrices. Inequalities for the spectral radius of large random matrices
1. Introduction and historical review
2. Strong law for normalized spectral functions of random matrices with stochastically independent column or row vectors
3. Strong law for the entries of resolvents of random matrices
4. Invariance principle for the entries of resolvents of random matrices
5. Limit theorem for eigenvalues and eigenvectors of random Gram matrices when Lindeberg's condition is not fulfilled. Stochastic power and integral representation methods
6. Weak limit theorem for eigenvalues and eigenvectors of random Gram matrices when Lindeberg's condition is fulfilled. Stochastic power and integral representation methods
7. Accompanying canonical equation K26 for random Gram matrices
8. Solution of the accompanying canonical equation is equal to Stieltjes transform of a certain distribution function. Method of extending random matrices
9. Uniqueness and boundedness of the solution of the accompanying spectral equation L2
10. Canonical equations L2 and C2 for the boundary points of G-density
11. Existence of the solution of accompanying spectral equation L2
12. Proof of the existence of the density of accompanying normalized spectral functions based on the uniqueness of the solution of the spectral equation L2
13. Uniform inequality for normalized spectral functions of random Gram matrices
14. REFORM method of deriving the main equation M3 of spectral theory of random matrices
15. Inequalities for the coefficients of the main equation M3
16. Calculations of coefficients of the main equation M3 for the case of convergence of random eigenvalues in probability
17. Calculations of the coefficients of the main equation M4 for the case of convergence of random eigenvalues with probability one
18. Invariance principle for the traces of resolvents of random matrices for the case of convergence of random eigenvalues in probability
19. Invariance principle for random matrices for the case of convergence of random eigenvalues with probability one
20. Equation for the sum of smoothed distribution functions of singular values of random matrices for the case of convergence of random eigenvalues in probability
21. Equation for the sum of smoothed distribution functions of singular values of random matrices. Convergence of random eigenvalues with probability one
22. Double F-method for finding the boundaries of extremal eigenvalues. Convergence of eigenvalues in probability

23. Double F-method for finding the boundaries of extremal eigenvalues. Convergence of eigenvalues with probability one
24. The Second Law for the singular values of random matrices
25. Strong Law for eigenvectors of random matrices
26. Strong Law for the spectral radius of large random matrices

Chapter 5. The Third Law for the eigenvalues and eigenvectors of empirical covariance matrices
1. Introduction and historical review
2. Some auxiliary formulas
3. Strong law for normalized spectral functions of random matrices with stochastically independent row vectors
4. Strong law for the entries of resolvents of random matrices
5. Invariance principle for the entries of resolvents of random matrices
6. Limit theorem for eigenvalues and eigenvectors of empirical covariance matrices under the G-condition when Lindeberg's condition is not fulfilled. Stochastic power methods
7. Weak limit theorem for eigenvalues and eigenvectors of empirical covariance matrices when Lindeberg's condition is fulfilled. REFORM method
8. Accompanying canonical equations for empirical covariance matrices
9. Solution of the accompanying canonical equation is equal to the Stieltjes transform of certain distribution functions. Method of extending random matrices
10. Canonical equation L3 for G-density
11. Existence of the solution of accompanying equation L3
12. Uniqueness and boundedness of the solution of the accompanying equation L3
13. Equation C3 for the boundary points of spectral density
14. Boundedness of the boundary points of spectral density
15. Proof of the existence of the density of accompanying normalized spectral functions based on the uniqueness of the solution of spectral equation L3
16. Uniform inequality for normalized spectral functions of random symmetric matrices
17. REFORM method of deriving the main equation M5 of spectral theory of random matrices
18. Inequalities for the coefficients of the main equation M5
19. Calculations of the coefficients of the main equation M5 for the case of convergence of normalized spectral functions in probability
20. Calculations of the coefficients of the main equation M6 for the case of convergence of normalized spectral functions with probability one
21. Invariance principle for the traces of resolvents of random matrices for the case of convergence of random eigenvalues in probability
22. Invariance principle for random matrices for the case of convergence of random eigenvalues with probability one
23. Equation for the sum of smoothed distribution functions of singular values of random matrices for the case of convergence of random eigenvalues in probability

24. Equation for the sum of smoothed distribution functions of singular values of random matrices. The case of convergence of random eigenvalues with probability one
25. Double F-method. Weak limit theorem for eigenvalues of empirical covariance matrices
26. Strong limit theorem for eigenvalues of empirical covariance matrices
27. The Third Law for the eigenvalues and eigenvectors of empirical covariance matrices

Chapter 6. The first proof of the Strong Circular Law
1. Historical review and an outline of the proof of the Strong Circular Law
2. V-transform of spectral functions
3. Limit theorems like the strong law of large numbers for normalized spectral functions of nonselfadjoint random matrices with independent entries
4. The regularized V-transform for a spectral function
5. Estimate of the rate of convergence of Stieltjes transforms of spectral functions
6. The estimates of the deviations of spectral functions from the limit functions
7. Strong Circular Law

Chapter 7. Strong Law for normalized spectral functions of nonselfadjoint random matrices with independent row vectors and simple rigorous proof of the Strong Circular Law
1. Modified V-transform of spectral functions
2. Inverse formula for modified V-transform
3. Strong law for normalized spectral functions of nonselfadjoint random matrices with independent row vectors
4. The method of perpendiculars for proving the Strong Circular Law
5. Substitution of the determinant of a Gram matrix for the determinant of a random matrix
6. The regularized modified V-transform for a spectral function
7. The rigorous proof of the Strong Circular Law

Chapter 8. Rigorous proof of the Strong Elliptic Law
1. Modified V-transform of spectral and G-functions
2. A criticism of an inverse formula for the modified V-transform
3. The method of perpendiculars for proving the Strong Elliptic Law. Resolvent formulas for matrix with independent pairs in its entries
4. Strong law for normalized spectral functions of nonselfadjoint random matrices with independent pairs in entries
5. Substitution of the determinant of a Gram matrix for the determinant of a random matrix
6. The regularized modified V-transform for the spectral function
7. Useful integral
8. The canonical equation
9. Calculation of a specific integral for the solution of the canonical equation K22

10. Invariance principle for random matrices
11. Canonical equation. Limit theorem for G-functions
12. Strong Elliptic Law

Chapter 9. The Circular and Uniform Laws for eigenvalues of random nonsymmetric complex matrices with independent entries
1. Truncated conditional V1-transform and V2-transform
2. Canonical spectral equation and the boundary points of spectral density
3. Strong law for G-functions μ_n(x, τ)
4. Limit theorems for G-functions. Canonical equation K26
5. The REFORM method of obtaining the main equation of spectral theory of random matrices
6. Invariance principle for random matrices
7. The equation for the sum of smoothed distribution functions of eigenvalues of random matrices
8. Method of Fourier and inverse Fourier transforms for finding boundaries of eigenvalues
9. Limit theorem for the singular values of random matrices
10. Method of perpendiculars for proving the Strong Circular Law
11. Central limit theorem for randomly normalized random determinant
12. Substitution of entries of normally distributed random variables for random matrix
13. Substitution of determinant of Gram matrix for determinant of random matrix
14. Regularized V3-transform
15. Regularized V-transform. Circular Law
16. Limit theorems for eigenvalues of random nonsymmetric matrices

Chapter 10. Strong V-Law for eigenvalues of nonsymmetric random matrices
1. Strong Law for normalized spectral functions of nonselfadjoint random matrices with independent row vectors
2. Substitution of the determinant of Gram matrix for the determinant of random matrix
3. Regularized modified V-transform for the spectral function
4. Limit theorem for Stieltjes transforms of spectral functions. Canonical spectral equation K7
5. The Strong V-Law
6. The V-density of eigenvalues of random matrices with independent entries
7. One example of the V-density for eigenvalues of random matrices with independent entries

Chapter 11. Convergence rate of the expected spectral functions of symmetric random matrices is equal to O(n^{-1/2})
1. Introduction and historical review
2. Hermite polynomials
3. Christoffel-Darboux formula

4. Formula for the average spectral density of Gaussian Hermitian matrices
5. Integral formula for the Hermite polynomials
6. Method of steepest descent. The first main asymptotic formula for Hermite polynomials
7. The second main asymptotic formula for Hermite polynomials
8. Structure of the Semicircle Law
9. "Imaginary logarithmic" inequality for the expected normalized spectral functions
10. Inequality for random normalized spectral functions
11. Inequality for the heavy tails of normalized spectral functions outside of the interval |x| > 2 + c, c > 0
12. REFORM and martingale differences methods of deriving the first main inequality
13. The invariance principle for symmetric random matrices with independent entries
14. Main equation for the resolvent of random symmetric matrix
15. Inequalities for the coefficients of the main equation
16. Important lemma
17. Calculations of coefficients of the main equation
18. Equation for the sum of smoothed distribution functions of eigenvalues of random symmetric matrix
19. The double F-method for finding the second main inequality for spectral functions
20. Convergence rate of expected normalized spectral functions of the Hermitian Gaussian matrices outside of the interval |x| > 2 + c, c > 0
21. The third main inequality
22. The fourth main inequality
23. Inequality for the distance between expected normalized spectral function and Semicircle Law
24. Inequality for the expected maximal distance between normalized spectral function and Semicircle Law
25. Convergence rate for eigenvalues of random symmetric matrices
26. Convergence rate of expected normalized spectral functions outside of the interval |x| > 2 + c, c > 0

Chapter 12. Convergence rate of expected spectral functions of the sample covariance matrix R̂_{m_n}(n) is equal to O(n^{-1/2}) under the condition m_n n^{-1} ≤ c < 1
1. Introduction and historical review
2. Laguerre polynomials. The basic properties
3. Orthonormalized Laguerre functions
4. Asymptotics of Laguerre polynomials in scheme of series
5. Method of steepest descent. The first main asymptotic formula for Laguerre polynomials
6. The second main asymptotic formula for Laguerre polynomials
7. Convergence rate for normalized spectral functions of Gaussian Gram matrices
8. REFORM and martingale-differences methods of deriving the first main inequality

9. Invariance principle for symmetric random matrices with independent entries
10. The second main inequality
11. The third main inequality
12. Double F-method for finding the boundaries of eigenvalues
13. Convergence rate of expected normalized spectral functions outside of a certain region
14. Convergence rate of normalized spectral function of empirical covariance matrices is equal to O(n^{-1/2})
15. The main fifth inequality for the expected maximal distance between a normalized spectral function and the limit law
16. Convergence rate for eigenvalues of sample covariance matrices
17. Convergence rate of expected normalized spectral functions outside of the domain 4γ − (1 + γ − x)² > c

Chapter 13. The First Spacing Law for random symmetric matrices
1. Introduction and historical review. Monte Carlo approach to quantum mechanics
2. Spacings and their spectral functions
3. Expected spacing spectral functions
4. Expected truncated spacing spectral functions
5. Gaussian representation of probability that an interval contains no eigenvalues
6. Integral representation of the expected normalized spacing spectral functions
7. Limits for Hermite polynomials
8. Semicircle Law for Gaussian Hermitian matrices
9. Spectral density outside of the Semicircle Law
10. Limit theorem for normalized spacing spectral functions
11. The first spacing law
12. Limit of expected normalized spectral function of the spacings of the squares of eigenvalues
13. Wigner surmise
14. Mathematical formulation of the Wigner surmise
15. Mean level distance D_n(λ_max − λ_min)
16. Limit theorem for the local mean level distance D_n(a_n)
17. Local limit average density for spacings of Hermitian Gaussian matrices
18. Histograms of eigenvalues of random matrices
19. Histograms of a part of eigenvalues of random matrices

Chapter 14. Ten years of General Statistical Analysis (The main G-estimators of General Statistical Analysis)
1. G1-estimator of generalized variance
2. G2-estimator of the real Stieltjes transform of the normalized spectral function of covariance matrices
3. G3-estimator of inverse covariance matrix
4. Class of G4-estimators for the traces of the powers of covariance matrices

5. G5-estimator of smoothed normalized spectral function of symmetric matrices
6. G6-estimator of Stieltjes transform of covariance matrix pencil
7. G7-estimator of the states of discrete control systems
8. Class of G8-estimators of the solutions of systems of linear algebraic equations (SLAE)
9. G9-estimator of the solution of the discrete Kolmogorov-Wiener filter
10. G10-estimator of the solution of a regularized discrete Kolmogorov-Wiener filter with known free vector
11. G11-estimator of the Mahalanobis distance
12. G12-regularized Mahalanobis distance estimator
13. Discrimination of two populations with common unknown covariance matrix. G13-Anderson-Fisher statistics estimator
14. G14-estimator of regularized discriminant function
15. G15-estimator of the nonlinear discriminant function, obtained by observation of random vectors with different covariance matrices
16. Class of G16-estimators in the theory of experimental design, when the design matrix is unknown
17. G17-estimate of T²-statistics
18. G18-estimate of regularized T²-statistics
19. Quasi-inversion method for solving G-equations
20. Estimator G20 of regularized function of unknown parameters
21. G21-estimator in the likelihood method
22. G22-estimator in the classification method
23. G23-estimator in the method of stochastic approximation
24. Class of estimators G24, which minimizes the certain mean-square risks
25. G25-estimator of the Stieltjes transform of the principal components
26. G26-estimator of eigenvalues of the covariance matrix
27. G27-estimators of eigenvectors corresponding to extreme eigenvalues of the covariance matrix
28. G28-consistent estimator of the trace of the resolvent of the Gram matrix
29. G29-consistent estimator of singular values of the matrix
30. G30-consistent estimator of eigenvectors corresponding to extreme singular values of the matrix
31. G31-estimator of the resolvent of a symmetric matrix
32. G32-estimator of eigenvalues of a symmetric matrix
33. G33-estimator of the eigenvector which corresponds to the extreme eigenvalues of a symmetric matrix
34. G34-estimator of the V-transform
35. G35-estimators of eigenvalues of random matrices with independent pairs of entries
36. G36-estimator of eigenvectors of matrices with independent pairs of entries
37. G37-estimator of the U-statistic
38. G38-estimators of symmetric functions of eigenvalues of covariance matrices
39. G39-estimator of symmetric function of eigenvalues of Gram random matrix
40. G40-estimator of permanent of matrix
41. Method of random determinants. Class of G41-estimates of the permanent of a matrix
42. G42-estimator of a product of matrices
43. G43-estimator of product of matrices in the scheme of series
44. Class of G44-estimators of solutions of the system of linear differential equations with covariance matrix of coefficients
45. Class of G45-estimators of solutions of the system of linear differential equations with non-negative defined matrix of coefficients
46. G46-estimator for solution of the system of linear differential equations with symmetric matrix of coefficients
47. G47-estimator for solution of the system of linear differential equations
48. G48-estimator for solution of the system of linear differential equations when coefficients have arbitrary variances
49. G49-estimator for solution of the system of linear differential equations with symmetric block structure
50. G50-consistent estimator for solution of linear programming problem (LPP)
51. G51-estimator of solutions of LPP
52. G52-estimator of solution of LPP with nonsymmetric matrix A
53. G53-estimator of solution of LPP obtained by integral representation method
54. G54-estimator of solution of LPP with block structure

References

Index

LIST OF BASIC NOTATIONS AND ASSUMPTIONS

If two sequences $z_n$ and $u_n$ of complex numbers have the property that $u_n \neq 0$ for all $n \in \mathbf{N}$ and $\lim_{n\to\infty} z_n u_n^{-1} = 1$, we write $z_n \approx u_n$. Occasionally we make use of the notation $z_n = O(a_n)$, $z_n = o(a_n)$ if $a_n > 0$, to state that $z_n a_n^{-1}$ is bounded, or tends to 0, respectively, as $n \to \infty$.

lim : limit
plim : limit in probability
ln : natural logarithm
max : maximum
min : minimum
sgn(a) : sign of number $a$
inf : infimum (greatest lower bound)
sup : supremum (least upper bound)
$i$ : imaginary unit
$a^*$ or $\bar a$ : conjugate of complex number $a$
$|a|$ : modulus of number $a$
Re and Im signify real and imaginary parts
$(a_{ij})$ : matrix whose $(i,j)$-th entry is $a_{ij}$
$A_n = (a_{ij})_{i,j=1}^n$ : square matrix of order $n$
diag $A$ : diagonal matrix
$\lambda_1(A) \le \cdots \le \lambda_n(A)$ : eigenvalues of matrix $A$
$h_i$, $i = 1, \ldots, n$ : eigenvectors of matrix $A$
$Q \ge 0$ : nonnegative definite real matrix $Q$
$\mu_n(x, A) = n^{-1} \sum_{p=1}^n \chi\{\lambda_p(A) < x\}$ : normalized spectral function of a square matrix $A$
$A^{-1}$ : inverse of square matrix $A$
$A^T$ : transpose of matrix $A$
$A^+$ : generalized inverse of $A$
$\det A$ : determinant of square matrix $A$
$A_{ij}$ : cofactor of entry $a_{ij}$ of square matrix $A$
$\|A\|$ : norm of matrix $A$
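In other words, $\mu_n(x, A)$ is simply the empirical distribution function of the eigenvalues. A minimal numpy sketch (ours, not the book's; the Wigner-type test matrix and its scaling are arbitrary choices made so that the spectrum stays bounded):

```python
import numpy as np

def normalized_spectral_function(A, x):
    """mu_n(x, A) = n^{-1} * #{p : lambda_p(A) < x} for a symmetric matrix A."""
    lam = np.linalg.eigvalsh(A)          # eigenvalues in nondecreasing order
    return np.searchsorted(lam, x) / len(lam)

rng = np.random.default_rng(0)
n = 500
G = rng.standard_normal((n, n))
A = (G + G.T) / np.sqrt(2 * n)           # scaled symmetric random matrix
print(normalized_spectral_function(A, 0.0))  # roughly 1/2 by symmetry
```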


$I$ : identity matrix
rank$(A)$ : rank of matrix $A$
Tr $A$ : trace of square matrix $A$
$C[a, b]$ : set of all continuous real-valued functions on $[a, b]$
$x_1, \ldots, x_n$ : independent observations of a random vector $\xi$
$\mathbf{E}\,\xi = a$ : expectation (mean) of random vector $\xi$
$\mathrm{Cov}(\xi, \eta) = \mathbf{E}(\xi - \mathbf{E}\xi)(\eta - \mathbf{E}\eta)^T$ : covariance of random vectors $\xi$ and $\eta$
Var $\xi$ : variance of $\xi$
$\delta_{ij}$ : the Kronecker symbol
$\mathbf{R}$ : the set of real numbers
$\mathbf{C}$ : the set of complex numbers
$\mathbf{R}^n$ : real Euclidean $n$-dimensional space
$G_m$ : group of real orthogonal matrices of order $m$
$H$ : normalized Haar measure on $G_m$
dim $X$ : dimension of $X$

The symbol $\varepsilon_n$ denotes a constant that tends to 0 as $n \to \infty$ and $\lim_{n\to\infty} n^{-\delta} \varepsilon_n^{-l} = 0$, where $\delta > 0$ and $l$ is some positive integer; $\psi_n(x)$ is a sequence of complex functions satisfying the inequality $\limsup_{n\to\infty} \int |\psi_n(x)|\, dx < \infty$.

Throughout this book we understand a vector to be a column-vector if it is not indicated as a row-vector. We will denote constants by the letter $c$. They might be different in different formulas. We assume that random variables $\varepsilon(n_1, n_2)$ and matrices $E_{q_1 \times q_2}$ of size $q_1 \times q_2$ in formulas have the following property: $\operatorname{plim}_{n_1, n_2 \to \infty} \varepsilon(n_1, n_2) = 0$, $\operatorname{plim}_{n_1, n_2 \to \infty} E_{q_1 \times q_2} = 0$. An inequality $A > 0$ for a symmetric real matrix $A$ denotes its positive definiteness. We define the Hilbert-Schmidt norm of a complex matrix $A$ by $\|A\|^2 = \operatorname{Tr}(AA^*)$ and its spectral (or operator) norm by $|A|^2 = \max_{x \ne 0} \frac{x^* A A^* x}{x^* x}$.

... and the set $C$ now can be chosen so that $\lambda_1(\omega)$ is not a random variable. In general, there are no exact formulae for eigenvalues for $n \ge 5$, and therefore it is difficult to find such a change of variables in the expression for the joint distribution of the eigenvalues that a one-to-one correspondence between old and new variables is established. It is easy to select a proper change of variables if the eigenvalues $\lambda_k(\omega)$, $k = 1, \ldots, n$ are arranged in nondecreasing order. One can be convinced of the measurability of such eigenvalues by considering the following relations:

$\lambda_i(\omega) = \mu_i(\omega) - \nu_1(\omega), \quad i = 1, \ldots, n,$

where

$\nu_1(\omega) = \lim_{s \to \infty} \left[\operatorname{Tr} \Xi_n^{2s}\right]^{1/2s}; \quad \mu_1(\omega) = \lim_{s \to \infty} \left[\operatorname{Tr} \left(\Xi_n + \nu_1(\omega) I_n\right)^{2s}\right]^{1/2s};$

$\mu_2(\omega) = \lim_{s \to \infty} \left[\operatorname{Tr} \left(\Xi_n + \nu_1(\omega) I_n\right)^{2s} - \mu_1^{2s}(\omega)\right]^{1/2s}; \quad \ldots$
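A numerical illustration of these trace limits (ours, not the book's): for moderate $s$ the $2s$-th trace power already recovers the spectral radius, and shifting by $\nu_1 I_n$ makes the matrix nonnegative definite so that the same limit recovers the largest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
G = rng.standard_normal((n, n))
Xi = (G + G.T) / 2                                  # random symmetric matrix
lam = np.linalg.eigvalsh(Xi)

for s in (5, 20, 80):
    # nu_1 = [Tr Xi^{2s}]^{1/(2s)} -> max_i |lambda_i| as s -> infinity
    nu1 = np.trace(np.linalg.matrix_power(Xi, 2 * s)) ** (1 / (2 * s))
    print(s, nu1, max(abs(lam)))

# Xi + nu1*I is nonnegative definite, so the same limit gives
# mu_1 = lambda_max + nu_1, hence lambda_max = mu_1 - nu_1.
mu1 = np.trace(np.linalg.matrix_power(Xi + nu1 * np.eye(n), 160)) ** (1 / 160)
print(mu1 - nu1, lam.max())
```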

Let $\Xi \in G$ be a random matrix and $G$ be some group of matrices. We say that the distribution function $F$ of the random matrix $\Xi \in G$ is absolutely continuous with respect to Lebesgue measure $m$ (or Haar measure $\mu$ if the group $G$ is compact) with the density $p(x)$, $x \in G$, if for every measurable set $B \in G$

$P\{\Xi \in B\} = \int_B p(x)\, m(dx).$

In what follows, the eigenvalues $\lambda_k$, $k = 1, \ldots, n$ are assumed to be random variables. We can make the following statement:

THEOREM 2.1. [Gir45, p.20], [Gir54, p.55] If the entries of the random matrix $\Xi$ have a joint probability density, then the eigenvalues $\lambda_k$, $k = 1, \ldots, n$ are distinct with probability one.

Proof. The determinant

$\det\left(s_{i+j}\right)_{i,j=0}^{n-1} = \prod_{i<j} (\lambda_i - \lambda_j)^2, \quad \text{where } s_k = \operatorname{Tr} \Xi^k, \quad s_0 = n,$


is some polynomial of the entries of $\Xi$ which is not identically equal to zero. Therefore, since the entries of $\Xi$ have a joint probability density and $\det(s_{i+j})$ is a polynomial function, we have

$P\left\{\lambda_i \neq \lambda_j;\ i \neq j\right\} = P\Big\{\prod_{i<j} (\lambda_i - \lambda_j)^2 \neq 0\Big\} = P\left\{\det\left(s_{i+j}\right)_{i,j=0}^{n-1} \neq 0\right\} = 1.$

Theorem 2.1 is proved.
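The identity behind this proof, namely that the Hankel determinant of the power sums equals the squared Vandermonde of the eigenvalues, is easy to check numerically. A small sketch of ours (not from the book):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n = 4
G = rng.standard_normal((n, n))
Xi = (G + G.T) / 2
lam = np.linalg.eigvalsh(Xi)

# Hankel matrix of power sums s_k = Tr Xi^k, with s_0 = n
s = [np.trace(np.linalg.matrix_power(Xi, k)) for k in range(2 * n - 1)]
H = np.array([[s[i + j] for j in range(n)] for i in range(n)])

disc = np.prod([(lam[i] - lam[j]) ** 2 for i, j in combinations(range(n), 2)])
print(np.linalg.det(H), disc)   # the two agree up to rounding error
```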

3. MAXIMUM LIKELIHOOD ESTIMATES OF THE PARAMETERS OF A MULTIVARIATE NORMAL DISTRIBUTION

In this section we consider the mean and covariance matrix formed from a sample from a multivariate Normal distribution. The Normal density of a random vector is

$(2\pi)^{-m/2} (\det R)^{-1/2} \exp\left\{-\tfrac12 (x - a)^T R^{-1} (x - a)\right\},$

where $a$ is the mean vector and $R$ is the covariance matrix. We will assume throughout Sections 3-5 that $R$ is a positive definite matrix. Let $x_1, \ldots, x_n$ be independent observations of a normally, $N_m(a, R)$, distributed random vector. Then the sample mean vector is $\hat a = n^{-1} \sum_{k=1}^n x_k$, and the sample empirical covariance matrix is $\hat R = (n-1)^{-1} \sum_{k=1}^n (x_k - \hat a)(x_k - \hat a)^T$. The vector $\hat a$ and the matrix $\hat R$ are stochastically independent. For $n > m$ the maximum likelihood estimators of $a$ and $R$ are $\hat a$ and $(n-1)n^{-1}\hat R$, respectively.
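A minimal numerical sketch of these estimators (ours; the particular $a$, $R$ and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 200
a_true = np.array([1.0, -2.0, 0.5])
L = np.array([[1.0, 0.0, 0.0], [0.4, 0.9, 0.0], [-0.2, 0.3, 0.8]])
R_true = L @ L.T                                  # positive definite covariance
x = rng.multivariate_normal(a_true, R_true, size=n)

a_hat = x.mean(axis=0)                            # sample mean vector
R_hat = (x - a_hat).T @ (x - a_hat) / (n - 1)     # empirical covariance matrix
R_mle = (n - 1) / n * R_hat                       # maximum likelihood estimator
print(a_hat)
print(np.round(R_mle - R_true, 2))                # small for large n
```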

4. WISHART DENSITY $w_n(a, R)$

If $x_1, \ldots, x_{n+1}$ are independent observations from a normally, $N_m(a, R)$, distributed random vector, where $n > m$, then the Wishart density function $w_n(a, R)$ of the sample covariance matrix

$\hat R = n^{-1} \sum_{i=1}^{n+1} (x_i - \hat a)(x_i - \hat a)^T, \quad \text{where } \hat a = (n+1)^{-1} \sum_{k=1}^{n+1} x_k,$

is equal to

$\left[\Gamma_m\left(\tfrac n2\right) \left(\det 2Rn^{-1}\right)^{n/2}\right]^{-1} \exp\left\{-\tfrac12 \operatorname{Tr} nR^{-1}S\right\} (\det S)^{(n-m-1)/2},$

where $S$ is a positive definite matrix of order $m$, and $\Gamma_m(\cdot)$ denotes the multivariate gamma function

$\Gamma_m(a) = \pi^{m(m-1)/4} \prod_{i=1}^m \Gamma\left(a - \tfrac{i-1}{2}\right), \quad \operatorname{Re} a > \tfrac12 (m - 1).$
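The multivariate gamma function can be cross-checked against its scipy implementation, which uses the same product formula; the helper name log_multigamma below is our own:

```python
from math import lgamma, log, pi
from scipy.special import multigammaln

def log_multigamma(a, m):
    """log Gamma_m(a) = (m(m-1)/4) log pi + sum_i log Gamma(a - (i-1)/2)."""
    return m * (m - 1) / 4 * log(pi) + sum(
        lgamma(a - (i - 1) / 2) for i in range(1, m + 1))

for a, m in [(5.0, 3), (10.5, 4)]:
    print(log_multigamma(a, m), multigammaln(a, m))   # should coincide
```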

5. GENERALIZED VARIANCE

If a multivariate distribution has a covariance matrix $R$ then one overall measure of the spread of the distribution is the scalar quantity $\det R$, called the generalized variance.

If the matrix $\hat R$ has Wishart density $w_n(a, R_m)$, where $n > m$, then $\det \hat R / \det R$ has the same distribution as $n^{-m} \prod_{i=1}^m \chi^2_{n-i+1}$, where the $\chi^2_{n-i+1}$, for $i = 1, \ldots, m$, denote independent $\chi^2$ random variables with $n - i + 1$ degrees of freedom, respectively. This result gives a tidy representation for the distribution of the generalized variance; however, the derivation of the density function of the product of independent $\chi^2$ random variables is not easy. But it is easy to obtain an expression for the moments

$\mathbf{E}\left[n^m \det \hat R\right]^k = (\det R)^k \prod_{i=1}^m 2^k\, \frac{\Gamma\left(\frac{n-i+1}{2} + k\right)}{\Gamma\left(\frac{n-i+1}{2}\right)}, \quad k = 1, 2, \ldots.$
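Assuming, as above, that $n\hat R$ has the Wishart distribution $W_m(n, R)$, the moment formula can be verified by Monte Carlo; a sketch of ours, not the book's:

```python
import numpy as np
from math import lgamma, exp
from scipy.stats import wishart

m, n, k = 3, 20, 2
R = np.array([[2.0, 0.3, 0.0], [0.3, 1.0, 0.2], [0.0, 0.2, 0.5]])

# n * R_hat ~ Wishart(n, R), so n^m det(R_hat) = det(n R_hat)
W = wishart.rvs(df=n, scale=R, size=20000, random_state=4)
mc = np.mean(np.linalg.det(W) ** k)

exact = np.linalg.det(R) ** k
for i in range(1, m + 1):
    exact *= 2 ** k * exp(lgamma((n - i + 1) / 2 + k) - lgamma((n - i + 1) / 2))
print(mc, exact)    # Monte Carlo estimate vs the moment formula
```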

6. SYMMETRIC RANDOM REAL MATRICES

Let us choose $n$ measurable eigenvectors $\theta_i$, $i = 1, \ldots, n$ of the symmetric random matrix $\Xi_n$, corresponding to the eigenvalues $\lambda_k$; $k = 1, \ldots, n$. Note that the eigenvectors $\theta_i$; $i = 1, \ldots, n$ are defined uniquely within a coefficient $\pm 1$ with probability 1 from the system of equations

$(\Xi_n - \lambda_i I)\,\theta_i = 0; \quad \|\theta_i\| = 1; \quad i = 1, \ldots, n.$

One can rather easily choose $n$ measurable orthogonal eigenvectors by fixing a sign of some nonzero element of each vector. If some eigenvalues coincide, then the eigenvector is chosen by fixing some of its components and a sign of some nonzero component of each vector. Thus, the eigenvalues $\lambda_k$; $k = 1, \ldots, n$, arranged in nondecreasing order, and the corresponding eigenvectors $\theta_i$; $i = 1, \ldots, n$ of the matrix $\Xi_n$ thus chosen are random variables and vectors. Let $\Theta_n$ be a random orthogonal matrix whose column vectors are equal to $\theta_i$, $i = 1, \ldots, n$; $\theta_{1j} > 0$, $j = 1, \ldots, n$; $\mathbf{B}$ be the $\sigma$-algebra of Borel sets of orthogonal $n \times n$ matrices; and $\mu$ the normalized Haar measure on the group $G$ of the real $n$-dimensional orthogonal matrices.

THEOREM 6.1. ([Gir12, p.337], [Gir19], [Gir45, p.21], [Gir54, p.55]) If a real symmetric matrix $\Xi_n$ has the probability density $p(Z_n)$ then for any subset $E \in \mathbf{B}$ and the real numbers $\alpha_i, \beta_i$; $i = 1, \ldots, n$

$P\left\{\Theta_n \in E;\ \alpha_i < \lambda_i < \beta_i;\ i = 1, \ldots, n\right\} = C_1(n) \int_{\substack{y_1 > y_2 > \cdots > y_n;\ \alpha_i < y_i < \beta_i \\ X_n \in E}} p\left(X_n Y_n X_n^T\right) \prod_{i<j} (y_i - y_j)\ \mu(dX_n) \prod_{i=1}^n dy_i,$

where $Y_n = \operatorname{diag}(y_1, \ldots, y_n)$.
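The measurable choice of eigenvectors described above (ordered eigenvalues, first components positive) is exactly what the following numpy sketch implements; the code is ours, not the book's:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
G = rng.standard_normal((n, n))
Xi = (G + G.T) / 2

lam, Theta = np.linalg.eigh(Xi)          # eigenvalues in nondecreasing order
Theta = Theta * np.sign(Theta[0, :])     # fix signs: first components positive
assert (Theta[0] > 0).all()
# Theta is orthogonal and Xi = Theta diag(lam) Theta^T
print(np.abs(Theta @ np.diag(lam) @ Theta.T - Xi).max())
print(np.abs(Theta.T @ Theta - np.eye(n)).max())
```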

$\theta_{2k-1} = \lambda_k + i\mu_k, \quad \theta_{2k} = \lambda_k - i\mu_k; \quad \mu_k > 0,\ k = 1, \ldots, s; \quad |\lambda_1 + i\mu_1| > \cdots > |\lambda_s + i\mu_s|,$

$\theta_{2s+k} = \lambda_{s+k}; \quad \lambda_{s+1} > \cdots > \lambda_{n-s}; \quad k = 1, \ldots, n - 2s; \quad s = 1, \ldots, [n/2],$

and for $s = 0$, $\theta_p = \lambda_p$; $\lambda_1 > \cdots > \lambda_n$; $p = 1, \ldots, n$,

$\chi = \begin{cases} 1 & \text{if } |x_1 + iy_1| > \cdots > |x_s + iy_s|,\ z_{2s+k} = x_{s+k},\ k = 1, \ldots, n - 2s;\ y_k > 0,\ k = 1, \ldots, s;\ x_{s+1} > \cdots > x_{n-s}, \\ 0 & \text{otherwise}, \end{cases}$

where $s = 0, 1, \ldots, [n/2]$.

THEOREM 9.1. ([Gir12], [Gir19], [Gir45], [Gir54]) If the random coefficients $\xi_1, \ldots, \xi_n$ of the equation

$f(t) := t^n + \xi_1 t^{n-1} + \cdots + \xi_{n-1} t + \xi_n = 0$

have the joint probability density $p(x_1, \ldots, x_n)$, then for any real numbers $\alpha_i, \beta_i$; $i = 1, \ldots, n$

$P\left\{\operatorname{Re}\theta_i < \alpha_i,\ \operatorname{Im}\theta_i < \beta_i;\ i = 1, \ldots, n\right\} = \sum_{s=0}^{[n/2]} 2^s \int_{K_s} p(a_1, \ldots, a_n) \prod_{p>l} |z_p - z_l| \prod_{i=1}^s dx_i\, dy_i \prod_{i=2s+1}^n dx_i, \quad (9.1)$

where the domain of integration is equal to

$K_s = \left\{z_j : \operatorname{Re} z_i < \alpha_i,\ \operatorname{Im} z_i < \beta_i;\ i = 1, \ldots, n\right\},$

the variables $z_j$ are ordered as above ($z_{2k-1} = x_k + iy_k$, $z_{2k} = x_k - iy_k$, $k = 1, \ldots, s$; $z_{2s+k} = x_{s+k}$, $k = 1, \ldots, n - 2s$), and the arguments of the density are the coefficients expressed through the roots,

$a_k = (-1)^k \sum_{i_1 < \cdots < i_k} z_{i_1} \cdots z_{i_k}, \quad k = 1, \ldots, n.$
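The decomposition over the number $s$ of complex-conjugate pairs that underlies (9.1) can be observed empirically. A sketch of ours, with standard normal coefficients as an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(6)
n, trials = 5, 20000
counts = np.zeros(n // 2 + 1, dtype=int)   # s = number of conjugate pairs

for _ in range(trials):
    xi = rng.standard_normal(n)            # xi_1, ..., xi_n
    roots = np.roots(np.concatenate(([1.0], xi)))   # monic polynomial f(t)
    s = int(np.sum(roots.imag > 1e-9))     # one root per conjugate pair
    counts[s] += 1

for s, c in enumerate(counts):
    print(f"s = {s}: P ~ {c / trials:.3f}")  # weights of the summands in (9.1)
```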

If the random coefficients $\xi_1, \ldots, \xi_n$ of the equation $f(t) := t^n + \xi_1 t^{n-1} + \cdots + \xi_{n-1} t + \xi_n = 0$ have the joint probability density $p(x_1, \ldots, x_n)$, then

$\frac{d}{dx}\, \mathbf{E}\, \theta_n(x) = \int \cdots \int p\left(v_1, v_2, \cdots, v_{n-1}, \left[v_n - g(x)\right]\right) \left|\frac{dg(x)}{dx}\right| dv_1 \cdots dv_{n-1},$

where $g(x) = x^n + x^{n-1} v_1 + \cdots + v_n$, so that the last argument $v_n - g(x) = -(x^n + x^{n-1} v_1 + \cdots + x\, v_{n-1})$ does not depend on $v_n$, and $\theta_n(x)$ denotes the number of real roots of $f$ not exceeding $x$.

Proof. Suppose that $n$ is odd. Using Theorem 9.1 we get for $\mathbf{E}\, \theta_n(x)$ the representation as a sum over $s = 0, 1, \ldots, [n/2]$ of integrals of the form (9.1).
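A Monte Carlo check of the expected number of real roots is straightforward; the sketch below (ours, not the book's) counts real roots of monic polynomials with standard normal coefficients, with a small tolerance on the imaginary parts to absorb rounding:

```python
import numpy as np

rng = np.random.default_rng(7)
n, trials = 7, 5000
total = 0
for _ in range(trials):
    xi = rng.standard_normal(n)
    roots = np.roots(np.concatenate(([1.0], xi)))
    total += int(np.sum(np.abs(roots.imag) < 1e-7))  # count real roots
print("mean number of real roots ~", total / trials)  # E theta_n(+infinity)
```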
Let the eigenvalues be arranged in the order $\lambda_1 > \cdots > \lambda_n$.

THEOREM 17.1. ([Gir12], [Gir19], [Gir45], [Gir54]). If a real square random matrix $\Xi_n = (\xi_{ij})_{i,j=1}^n$, $n \ge 2$, has the probability density $p(X_n)$, then the density $q(s_{pp},\ p = 1, \ldots, n)$ of the eigenvalues $\lambda_1 > \cdots > \lambda_n$ of the matrix $\Xi_n$, restricted to the event $\{\operatorname{Im} \lambda_k(\Xi_n) = 0,\ k = 1, \ldots, n\}$, is equal to

$q(s_{pp},\ p = 1, \ldots, n) = C_1(n) \int_{H_n:\ h_{1i} > 0,\ i = 1, \ldots, n} \int_{S_n} p\left(H_n S_n H_n^T\right) \prod_{p>l} (s_{pp} - s_{ll})\ \mu(dH_n)\, d\tilde S_n,$

where $\mu$ is the Haar measure, $S_n = (s_{pl})$ is a lower triangular matrix with the ordered diagonal $s_{11} > \cdots > s_{nn}$, $d\tilde S_n = \prod_{p>l} ds_{pl}$, and the constant $C_1(n)$ is defined in Section 6:

$C_1(n) = 2^n \pi^{n(n+1)/4} \prod_{i=1}^n \left\{\Gamma\left[(n - i + 1)/2\right]\right\}^{-1}.$
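The decomposition $\Xi_n = H_n S_n H_n^T$ used in the theorem is, up to the lower/upper-triangular convention, the real Schur decomposition; a quick scipy illustration of ours, where the test matrix is built to have all-real eigenvalues:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(8)
n = 5
D = np.diag(rng.standard_normal(n))
P = rng.standard_normal((n, n))
Xi = P @ D @ np.linalg.inv(P)          # nonsymmetric, all eigenvalues real

T, H = schur(Xi, output='real')        # Xi = H T H^T, H orthogonal
# scipy returns T upper triangular; the book works with the lower-triangular
# convention, which differs only by a transpose.
print(np.abs(H @ T @ H.T - Xi).max())  # ~ machine precision
print(np.sort(np.diag(T)))             # diagonal s_pp = the eigenvalues
print(np.sort(np.diag(D)))
```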

Proof. Let us calculate the Jacobian $J(H_n, S_n)$ of the change of variables $X_n = (x_{pl})_{p,l=1,\ldots,n} = H_n S_n H_n^T$. Since

$J(H_n, S_n) = \operatorname{mod} \det \left[\frac{\partial x_{pl}}{\partial \varphi_{ij}},\ \frac{\partial x_{pl}}{\partial s_{ql}}\right],$

where $\varphi_{ij}$ are the Euler angles of the matrix $H_n$, $J(H_n, S_n)$ is a polynomial of the values $s_{pp}$. It is easy to show that it is zero if at least two values $s_{pp}$ and $s_{ll}$ coincide; therefore, it must be divisible without remainder by the values $s_{pp} - s_{ll}$. But this polynomial is homogeneous of the degree $n(n-1)/2$. Thus $J(H_n, S_n) = \psi\{H_n\} \prod_{p>l} (s_{pp} - s_{ll})$, where $\psi$ is a certain function. Using this equation we have

$J(H_n, S_n) = \operatorname{mod} \det \begin{bmatrix} A_1 & A_2 & A_3 \\ B_1 & B_2 & B_3 \\ C_1 & C_2 & C_3 \end{bmatrix},$

where, with $q = n(n-1)/2$, the blocks $A_i$, $B_i$, $C_i$ consist of the derivatives of the diagonal entries $x_{pp}$, of the entries $x_{pl}$, $p > l$, and of the entries $x_{pl}$, $p < l$, taken with respect to the Euler angles $\varphi_{ij}$, $i, j = 1, \ldots, q$, to the diagonal entries $s_{ll}$, and to the subdiagonal entries $s_{pl}$, $p > l$, respectively. Therefore, we can put in this expression $s_{pl} = 0$, $p > l$, $p, l = 1, \ldots, n$. Then, obviously, $A_2 = A_3$, $B_2 = B_3$, and subtracting the second block column from the third we obtain

$J(H_n, S_n) = \operatorname{mod} \det \begin{bmatrix} A_1 & A_2 \\ B_1 & B_2 \end{bmatrix} \cdot \operatorname{mod} \det (C_3 - C_2).$

But $\det \begin{bmatrix} A_1 & A_2 \\ B_1 & B_2 \end{bmatrix}$ is the Jacobian of the change of variables $X_n = H_n Y_n H_n^T$, where $X_n$ is a symmetric matrix and $Y_n$ is a real diagonal matrix, and this Jacobian is equal to (see Theorem 6.1)

$q\{\varphi_i,\ i = 1, \ldots, n(n-1)/2\} \prod_{p>l} (y_p - y_l), \quad y_p > y_l,$

where $q\{\varphi_i,\ i = 1, \ldots, n(n-1)/2\}$ is the density of the Haar measure on the group of orthogonal matrices with respect to the Lebesgue measure on the set of Euler angles $\varphi_i$. Then, taking into account that

$\left[\det (C_3 - C_2)\right]^2 = \left[\det \left(h_{pi} h_{lj} - h_{li} h_{pj}\right)\right]^2 = \det \Big\{\sum_{i>j} \left(h_{pi} h_{lj} - h_{li} h_{pj}\right)\left(h_{si} h_{qj} - h_{qi} h_{sj}\right)\Big\}_{\substack{p>l \\ s>q}} = \det \left\{\delta_{ps} \delta_{lq}\right\}$

$= 1$, we complete the proof of Theorem 17.1.

Remark 1. We have calculated the Jacobian in the proof of this theorem following the standard method of mathematical analysis (see [Gir19]). The calculations may conveniently be done also with the help of the calculus of alternating differential forms ([LeS], [Ed4]).

THEOREM 17.2. ([Gir12], [Gir19], [Gir45], [Gir54], [Gin]). If a real square random matrix $\Xi_n = (\xi_{ij})_{i,j=1}^n$, $n \ge 2$, has the probability density

$p(X_n) = (2\pi)^{-n^2/2} \exp\left\{-2^{-1} \operatorname{Tr} X_n X_n^T\right\}$

and

$q(x_1, \ldots, x_n) = \frac{\partial^n}{\partial x_1 \cdots \partial x_n} P\left\{\left[\lambda_1(\Xi_n) < x_1\right] \cap \cdots \cap \left[\lambda_n(\Xi_n) < x_n\right];\ \operatorname{Im} \lambda_k(\Xi_n) = 0,\ k = 1, \ldots, n\right\},$

then, for $x_1 > \cdots > x_n$,

$q(x_1, \ldots, x_n) = C_1(n)\, 2^{-n} (2\pi)^{-n(n+1)/4} \exp\Big\{-\frac12 \sum_{i=1}^n x_i^2\Big\} \prod_{p>l} (x_p - x_l);$

the constant $C_1(n)$ is defined in Section 6.
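For the Gaussian density of Theorem 17.2 the probability of the conditioning event is known in closed form, $P\{\text{all eigenvalues real}\} = 2^{-n(n-1)/4}$ (a result due to Edelman, quoted here for orientation only; it is not proved in this section). A Monte Carlo sketch of ours:

```python
import numpy as np

rng = np.random.default_rng(9)
for n in (2, 3, 4):
    trials, hits = 20000, 0
    for _ in range(trials):
        lam = np.linalg.eigvals(rng.standard_normal((n, n)))
        hits += bool(np.all(np.abs(lam.imag) < 1e-9))
    print(n, hits / trials, 2.0 ** (-n * (n - 1) / 4))  # empirical vs exact
```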

18. RANDOM NONSYMMETRIC REAL MATRICES

Let $\Xi_n = (\xi_{ij})_{i,j=1}^n$ be a real square random matrix with the probability density $p(X_n)$, where $X_n$ is a real square matrix. Let us introduce the notations: $\lambda_k + i\mu_k$, $\lambda_k - i\mu_k$; $k = 1, \ldots, s$; $\lambda_l$, $l = s + 1, \ldots, n - s$ are the eigenvalues of $\Xi_n$ and $z_k = x_k + iy_k$, $z_k^* = x_k - iy_k$; $k = 1, \ldots, s$; $z_l$, $l = s + 1, \ldots, n - s$ are the corresponding eigenvectors of the matrix $\Xi_n$, where $s$ is a certain random variable which takes values from the set $\{0, \ldots, [n/2]\}$ ($[n/2]$ denotes the integer part of the number $n/2$). If $s = 0$ we use the following notation for the eigenvalues and eigenvectors: $\lambda_l$, $z_l$; $l = 1, \ldots, n$. Here the values $\lambda_k, \mu_k$ and the vectors $x_k, y_k$ are real. Before studying the distribution function of the eigenvalues and eigenvectors of $\Xi_n$, we need to choose them so that they are random. It is known that we can choose the eigenvalues as continuous functions of the entries of the matrix $\Xi_n$. We arrange the eigenvalues of the matrix $\Xi_n$ in the following order:

$\lambda_k + i\mu_k, \quad \lambda_k - i\mu_k; \quad \mu_k > 0,\ k = 1, \ldots, s; \quad |\lambda_1 + i\mu_1| > \cdots > |\lambda_s + i\mu_s|; \quad \lambda_{s+1} > \cdots > \lambda_{n-s}; \quad s = 1, \ldots, [n/2],$

and for $s = 0$: $\lambda_1 > \cdots > \lambda_n$. The corresponding eigenvectors are equal to

$z_k = x_k + iy_k, \quad z_k^* = x_k - iy_k; \quad k = 1, \ldots, s; \quad z_l,\ l = s + 1, \ldots, n - s.$

For simplification of formulae we denote $\tilde\lambda_k, \tilde\mu_k, \tilde z_k, \tilde x_k, \tilde y_k$ simply as $\lambda_k, \mu_k, z_k, x_k, y_k$. Of course, there are many other ways of ordering the eigenvalues, but we adhere to this principle as the most natural. We require that $z_l^* z_l = d_l$, $l = 1, \ldots, n$, where $d_l$ are fixed numbers; that $\arg z_{1p} = c_p$, $p = 1, \ldots, s$, where $|c_p| < \pi$, $p = 1, \ldots, s$, and $z_{1p}$ are the first components of the vectors $z_p$; and that the first nonzero component of each vector $z_k$, $k = s + 1, \ldots, n - s$ is positive. For our convenience we choose $c_p = 0$. Using the proof of Theorem 1.1 we obtain that the eigenvalues of the matrix $\Xi_n$ are distinct and the first components of the eigenvectors do not equal zero with probability one. Then with probability one, the matrix $\Xi_n$ can be represented in the following form:

$\Xi_n = T \operatorname{diag}\left\{\begin{pmatrix} \lambda_1 & \mu_1 \\ -\mu_1 & \lambda_1 \end{pmatrix}, \cdots, \begin{pmatrix} \lambda_s & \mu_s \\ -\mu_s & \lambda_s \end{pmatrix};\ \lambda_{s+1}, \ldots, \lambda_{n-s}\right\} T^{-1}; \quad s = 1, \ldots, [n/2],$

$\Xi_n = T \operatorname{diag}\left\{\lambda_1, \cdots, \lambda_n\right\} T^{-1}; \quad s = 0,$

where the block diagonal matrix $\operatorname{diag}\left\{\begin{pmatrix} \lambda_k & \mu_k \\ -\mu_k & \lambda_k \end{pmatrix},\ k = 1, \ldots, s;\ \lambda_p,\ p = s + 1, \ldots, n - s\right\}$ consists of a diagonal part due to the real eigenvalues and $s$ blocks of the type

$\begin{pmatrix} \lambda_k & \mu_k \\ -\mu_k & \lambda_k \end{pmatrix}$

due to the complex conjugate pairs, with the rest of the entries being zero, and $T$ being a real nondegenerate matrix with probability one whose column vectors are $x_k, y_k$; $k = 1, \ldots, s$; $z_l$, $l = s + 1, \ldots, n - s$. Let $K$ be a group of real nondegenerate matrices $X_n$, $\mathbf{B}$ the $\sigma$-algebra of the Borel subsets of the group $K$, $\theta_i$; $i = 1, \ldots, n$ be the eigenvalues of $\Xi_n$, chosen as described above, and

$\chi(s) = \begin{cases} 1, & \text{if } y_k > 0,\ k = 1, \ldots, s;\ |x_1 + iy_1| > \cdots > |x_s + iy_s|;\ x_{s+1} > \cdots > x_{n-s},\quad s = 1, \ldots, [n/2], \\ 0, & \text{otherwise}, \end{cases}$

$Y(s) = \operatorname{diag}\left\{\begin{pmatrix} x_1 & y_1 \\ -y_1 & x_1 \end{pmatrix}, \cdots, \begin{pmatrix} x_s & y_s \\ -y_s & x_s \end{pmatrix};\ x_{s+1}, \ldots, x_{n-s}\right\},\quad s = 1, \ldots, [n/2]; \qquad Y(0) = \operatorname{diag}\{x_1, \cdots, x_n\},\quad s = 0.$
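The representation $\Xi_n = T\, Y(s)\, T^{-1}$ can be reproduced numerically by building $T$ from the real and imaginary parts of one eigenvector per conjugate pair. A numpy sketch of ours (the normalization conventions $d_l$, $c_p$ and the ordering are ignored):

```python
import numpy as np

rng = np.random.default_rng(10)
n = 5
Xi = rng.standard_normal((n, n))
lam, Z = np.linalg.eig(Xi)

cols, blocks, used = [], [], set()
for j in range(n):
    if j in used:
        continue
    if abs(lam[j].imag) > 1e-9:
        # find the partner eigenvalue of the conjugate pair
        cand = [i for i in range(n) if i not in used and i != j]
        k = min(cand, key=lambda i: abs(lam[i] - lam[j].conjugate()))
        used.update({j, k})
        cols += [Z[:, j].real, Z[:, j].imag]        # columns x_k and y_k of T
        x, y = lam[j].real, lam[j].imag
        blocks.append(np.array([[x, y], [-y, x]]))  # the 2x2 block of Y(s)
    else:
        used.add(j)
        cols.append(Z[:, j].real)                   # real eigenvector column
        blocks.append(np.array([[lam[j].real]]))

T = np.column_stack(cols)
Y = np.zeros((n, n))
i = 0
for b in blocks:
    m = b.shape[0]
    Y[i:i + m, i:i + m] = b
    i += m
print(np.abs(T @ Y @ np.linalg.inv(T) - Xi).max())  # ~ machine precision
```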

THEOREM 18.1. ([Gir12], [Gir19], [Gir45], [Gir54]) If a real square random matrix $\Xi_n = (\xi_{ij})_{i,j=1}^n$ has the probability density $p(X_n)$, then for any subset $E \subset \mathbf{B}$ and any real numbers $\alpha_i, \beta_i$; $i = 1, \ldots, n$

$P\left\{T_n \in E;\ \operatorname{Re}\theta_i < \alpha_i,\ \operatorname{Im}\theta_i < \beta_i;\ i = 1, \ldots, n\right\}$
$= \sum_{s=0}^{[n/2]} C_s(\mathrm{non}) \int_{K_s} p\left(X_n Y(s) X_n^{-1}\right) \prod_{p>l} \left|\lambda_p(Y(s)) - \lambda_l(Y(s))\right| \left|\det X_n\right|^{-n} \prod_{i=1}^s \Big[\sum_{j=2}^n \left(x_{j,2i-1}^2 + x_{j,2i}^2\right)\Big]^{-1/2} \prod_{p=2s+1}^n \Big[d_p - \sum_{j=2}^n x_{jp}^2\Big]^{-1/2} dX_n\, dY(s), \quad (18.1)$

where the domain of integration $K_s$ is equal to

$K_s = \Big\{X_n, Y(s) : X_n \in E;\ x_{1,2i-1} = \Big[d_i - \sum_{j=2}^n \left(x_{j,2i-1}^2 + x_{j,2i}^2\right)\Big]^{1/2},\ x_{1,2i} = 0,\ i = 1, \ldots, s;$
$x_{1p} = \Big[d_p - \sum_{j=2}^n x_{jp}^2\Big]^{1/2};\ y_k > 0,\ k = 1, \ldots, s;\ |x_1 + iy_1| > \cdots > |x_s + iy_s|;\ x_{s+1} > \cdots > x_{n-s}\Big\},\quad s = 0, \ldots, [n/2], \quad (18.2)$

$dX_n = \prod_{i,j} dx_{ij}; \qquad dY(s) = \prod_i dx_i\, dy_i; \quad (18.3)$

$d_k$ are certain positive constants and the constants $C_s(\mathrm{non})$ are equal to $2^s$.

Proof. Let $H_s$ be the random event such that the matrix $\Xi_n$ has exactly $s$ pairs of complex conjugate eigenvalues and such that the other eigenvalues are real. Then

$\Psi(f, c_p, d_l) = \mathbf{E}\, f(T_n, \Theta_n) = \sum_{s=0}^{[n/2]} \mathbf{E}\left[f(T_n, \Theta_n) \mid H_s\right] P(H_s), \quad (18.4)$

where $f$ is an arbitrary bounded continuous function of the entries of $T_n$ and $\Theta_n$. Let us calculate the Jacobian $J(X_n, Y(s))$ of the transformation $Z_n = X_n Y(s) X_n^{-1}$, $[X_n, Y(s)] \in K_s$. It can be easily verified that $X_n Y(s) X_n^{-1} = P(s)\, Q(s)\, P(s)^{-1}$, where $Q(s) = (\delta_{pl}\, q_l)_{p,l=1}^n$, $q_l$ are the eigenvalues of the matrix $Y(s)$ and $P(s) = \{z_l,\ l = 1, \ldots, n\}$ is the complex matrix of the vectors

$z_k = x_k + iy_k, \quad z_k^* = x_k - iy_k; \quad k = 1, \ldots, s; \quad z_l,\ l = s + 1, \ldots, n - s.$

Let $\varphi_l$, $l = 1, \ldots, n^2 - n$ be independent parameters of the matrix $P(s)$. For instance, it will be convenient to take as such parameters the polar coordinates. Then the entries of the matrix $P(s)$ will be differentiable by these coordinates almost everywhere. It is easy to verify that the number of independent variables on the right side and on the left side of the equality $Z_n = X_n Y(s) X_n^{-1}$ on the set $X_n, Y(s) \in K_s$ is the same. By differentiating the equalities $P(s) P(s)^{-1} = I$, $Z_n = P(s) Q(s) P(s)^{-1}$ by the variables $\varphi_l$, $l = 1, \ldots, n^2 - n$, we find

$P(s)^{-1} \frac{\partial Z_n}{\partial \varphi_l} P(s) = S\, Q(s) - Q(s)\, S, \quad \text{where } S = P(s)^{-1} \frac{\partial P(s)}{\partial \varphi_l}.$

For any matrix $A_n \in K$ we have from (18.4) a representation of $\Psi(f, c_p, d_l)$ as a sum over $s = 0, \ldots, [n/2]$ of integrals over the domains $K_s(A_n)$.

For a simplification of the proof of Theorem 18.1 we assume that the arguments of certain column vectors of the matrix $X_n$ are equal to the fixed numbers $\gamma_i$ and the domain of integration $K_s$ in the $s$-th integral is equal to

$K_s(A_n) = \Big\{ \sum_{j=1}^n \big[(A_nT_n)_{j,2i-1}\big]^2 + \big[(A_nT_n)_{j,2i}\big]^2 = d_i,\ i = 1, \ldots, s;\ \arg\big[(A_nT_n)_{1,2i-1} + i (A_nT_n)_{1,2i}\big] = \gamma_i,\ i = 1, \ldots, s;$
$(A_nT_n)_{1p} > 0;\ \sum_{j=1}^n \big[(A_nT_n)_{jp}\big]^2 = d_p,\ p = 2s+1, \ldots, n;\ \operatorname{Im} \lambda_{2k-1}(\Theta_n) > 0,\ k = 1, \ldots, s;\ |\lambda_1(\Theta_n)| > \cdots > |\lambda_s(\Theta_n)|,\ \lambda_{s+1}(\Theta_n) > \cdots > \lambda_n(\Theta_n)\Big\},$

where $\lambda_k(\Theta_n)$ are the eigenvalues of the matrix $\Theta_n$. Now, we make in every $s$-th integral the change of variables $Z_n = X_n Y(s) X_n^{-1}$ (choosing, for instance, the polar change of variables). Then, using (18.7) we get

$\Psi(f, c_p, d_l) = \mathbf{E}\, f(T_n, \Theta_n) = \sum_{s=0}^{[n/2]} \int_{K_s(A_n)} f(A_n X_n, \cdots)\ p\left(A_n X_n Y(s) X_n^{-1} A_n^{-1}\right) \prod_{p>l} \left|\lambda_p(Y(s)) - \lambda_l(Y(s))\right| \cdots\ dX_n\, dY(s). \quad (18.8)$

The resulting expression reduces to an integral over lower triangular matrices $S_n$ with the differentials $\prod_p ds_{2p-1,2p-1} \cdots$, (18.15), in which $s_{2p,2p} = 1$, $p = 1, \ldots, k$, and $s_{ll} = 1$, $l = 2k + 1, \ldots, n$.

In this integral we will make the following change of variables: $S_n Y(k) S_n^{-1} = Q(k)$, where $Q(k)$ is the block lower triangular matrix whose diagonal carries $k$ two-by-two blocks

$\begin{pmatrix} t_{2i-1,2i-1} & t_{2i-1,2i} \\ t_{2i,2i-1} & t_{2i,2i} \end{pmatrix}, \quad i = 1, \ldots, k,$

followed by the real entries $x_{2k+1}, \ldots, x_{n-2k}$, with the entries $q_{pl}$ standing below the block diagonal and all entries above it equal to zero. Indeed, consider the group $L$ of triangular lower real matrices of the $n$-th order, the set $M$ of the real matrices of the form of $Q(k)$, and the integral

$\int f(\cdots \qquad (18.17)$