Random Circulant Matrices

By Arup Bose and Koushik Saha

CRC Press Taylor & Francis Group 6000 Broken Sound Parkway NW, Suite 300 Boca Raton, FL 33487-2742 © 2019 by Taylor & Francis Group, LLC CRC Press is an imprint of Taylor & Francis Group, an Informa business No claim to original U.S. Government works Printed on acid-free paper Version Date: 20180913 International Standard Book Number-13: 978-1-138-35109-7 (Hardback) This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint. Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers. For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. Library of Congress Cataloging-in-Publication Data Names: Bose, Arup, author. | Saha, Koushik, author. Title: Random circulant matrices / Arup Bose and Koushik Saha. Description: Boca Raton : CRC Press, Taylor & Francis Group, 2018. Identifiers: LCCN 2018028758 | ISBN 9781138351097 (hardback) Subjects: LCSH: Random matrices--Problems, exercises, etc. | Matrices--Problems, exercises, etc. | Eigenvalues--Problems, exercises, etc. Classification: LCC QA196.5 .B6725 2018 | DDC 512.9/434--dc23 LC record available at https://lccn.loc.gov/2018028758

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

To the memory of my teacher J.C. Gupta A.B.

To my parents and teachers K.S.

Contents

Preface
About the Authors
Introduction

1 Circulants
   1.1 Circulant
   1.2 Symmetric circulant
   1.3 Reverse circulant
   1.4 k-circulant
   1.5 Exercises

2 Symmetric and reverse circulant
   2.1 Spectral distribution
   2.2 Moment method
       2.2.1 Scaling
       2.2.2 Input and link
       2.2.3 Trace formula and circuits
       2.2.4 Words and vertices
       2.2.5 (M1) and Riesz's condition
       2.2.6 (M4) condition
   2.3 Reverse circulant
   2.4 Symmetric circulant
   2.5 Related matrices
   2.6 Reduced moment
       2.6.1 A metric
       2.6.2 Minimal condition
   2.7 Exercises

3 LSD: normal approximation
   3.1 Method of normal approximation
   3.2 Circulant
   3.3 k-circulant
   3.4 Exercises

4 LSD: dependent input
   4.1 Spectral density
   4.2 Circulant
   4.3 Reverse circulant
   4.4 Symmetric circulant
   4.5 k-circulant
   4.6 Exercises

5 Spectral radius: light tail
   5.1 Circulant and reverse circulant
   5.2 Symmetric circulant
   5.3 Exercises

6 Spectral radius: k-circulant
   6.1 Tail of product
   6.2 Additional properties of the k-circulant
   6.3 Truncation and normal approximation
   6.4 Spectral radius of the k-circulant
       6.4.1 k-circulant for sn = k^g + 1
   6.5 Exercises

7 Maximum of scaled eigenvalues: dependent input
   7.1 Dependent input with light tail
   7.2 Reverse circulant and circulant
   7.3 Symmetric circulant
   7.4 k-circulant
       7.4.1 k-circulant for n = k^2 + 1
       7.4.2 k-circulant for n = k^g + 1, g > 2
   7.5 Exercises

8 Poisson convergence
   8.1 Point process
   8.2 Reverse circulant
   8.3 Symmetric circulant
   8.4 k-circulant, n = k^2 + 1
   8.5 Reverse circulant: dependent input
   8.6 Symmetric circulant: dependent input
   8.7 k-circulant, n = k^2 + 1: dependent input
   8.8 Exercises

9 Heavy-tailed input: LSD
   9.1 Stable distribution and input sequence
   9.2 Background material
   9.3 Reverse circulant and symmetric circulant
   9.4 k-circulant: n = k^g + 1
       9.4.1 Proof of Theorem 9.4.2
   9.5 k-circulant: n = k^g - 1
   9.6 Tail of the LSD
   9.7 Exercises

10 Heavy-tailed input: spectral radius
   10.1 Input sequence and scaling
   10.2 Reverse circulant and circulant
   10.3 Symmetric circulant
   10.4 Heavy-tailed: dependent input
   10.5 Exercises

11 Appendix
   11.1 Proof of Theorem 1.4.1
   11.2 Standard notions and results
   11.3 Three auxiliary results

Bibliography
Index

Preface

Circulant matrices have been around for a long time and have been extensively used in many scientific areas. The classic book Circulant Matrices by P. Davis has a wealth of information on these matrices. New research on, and applications of, these matrices are continually appearing everywhere.

This book studies the properties of the eigenvalues of various types of circulant matrices (the usual circulant, the reverse circulant, and the k-circulant) when the dimension of the matrices grows and the entries are random. The behavior of the spectral distribution, the spectral radius, and the appropriate point processes is developed systematically using the method of moments and normal approximation results. This behavior varies according to whether the entries are independent or come from a linear process, and whether they are light- or heavy-tailed.

The eigenvalues of these matrices can be calculated explicitly or can be described by a combinatorial formula. We take advantage of this formula to describe the asymptotic behavior of the eigenvalues. The limiting spectral distributions (LSDs) of these matrices are described using functions of the Gaussian distribution. For those matrices which are symmetric, these limits are established using the moment method. For non-symmetric matrices we use the method of normal approximation. With the help of sharp normal approximation results, it is shown that the spectral radius has a Gumbel-type limit. With a natural labeling of the eigenvalues that uses the discrete Fourier frequencies, the point processes defined by the eigenvalues converge to Poisson point processes.

Some of the above results are generalized to the situation where the entries are not i.i.d. but come from a stationary linear process. The LSDs are then mixtures built from the Gaussian distribution, functions of it, and the spectral density of the linear process. The limit distribution of the spectral radius can only be obtained after suitable scaling of the individual eigenvalues. We also demonstrate what happens when the entries are in the domain of attraction of a heavy-tailed distribution.

To keep the material focused and widely accessible, many topics have been omitted. Block circulants, period m-circulants, sparse circulants, and the role of the random circulant in the study of random Toeplitz matrices are conspicuous by their absence.

Much of the material is based on collaborative research between Rajat Subhra Hazra, Arnab Sen, Joydip Mitra, and us. We joyfully recall the good times.


The work of AB was supported in part by the J.C. Bose Fellowship of the Govt. of India. The work of KS was supported in part by the Seed Grant of IIT Bombay. It is always a pleasure to work with the Acquiring Editor John Kimmel, and the super-efficient go-to person for all our LaTeX needs, Mr. Sashi Kumar.

Arup Bose, Kolkata, India
Koushik Saha, Mumbai, India

About the Authors

Arup Bose earned his B.Stat., M.Stat. and Ph.D. degrees from the Indian Statistical Institute. He has been on its faculty at the Theoretical Statistics and Mathematics Unit, Kolkata, India, since 1991. He is a Fellow of the Institute of Mathematical Statistics and of all three national science academies of India. He is a recipient of the S.S. Bhatnagar Prize and the C.R. Rao Award. He is the author of three books: Patterned Random Matrices, Large Covariance and Autocovariance Matrices (with Monika Bhattacharjee) and U-Statistics, Mm-Estimators and Resampling (with Snigdhansu Chatterjee).

Koushik Saha earned a B.Sc. in Mathematics from the Ramakrishna Mission Vidyamandira, Belur, and an M.Sc. in Mathematics from the Indian Institute of Technology Bombay. He obtained his Ph.D. degree from the Indian Statistical Institute under the supervision of Arup Bose. His thesis on circulant matrices received high praise from the reviewers. He has been on the faculty of the Department of Mathematics, Indian Institute of Technology Bombay since 2014.


Introduction

Circulant matrices have been around for a long time and have been extensively used in many scientific areas. One may recall the classic book by Davis (1979) on these matrices with fixed entries. New research and applications on these matrices are continually appearing everywhere. We focus on the usual circulant ($C_n$) and its close cousins such as the symmetric circulant ($SC_n$), the reverse circulant ($RC_n$) and the k-circulant matrices, when the entries are random and the dimension of the matrix increases to infinity. Collectively, they will be labelled as "circulant-type matrices". We collect together the variety of results that are available on the behavior of the eigenvalues of these matrices. An $n \times n$ k-circulant matrix is defined as
$$A_{k,n} = \begin{pmatrix}
x_0 & x_1 & x_2 & \cdots & x_{n-2} & x_{n-1}\\
x_{n-k} & x_{n-k+1} & x_{n-k+2} & \cdots & x_{n-k-2} & x_{n-k-1}\\
x_{n-2k} & x_{n-2k+1} & x_{n-2k+2} & \cdots & x_{n-2k-2} & x_{n-2k-1}\\
 & & & \ddots & & \\
x_k & x_{k+1} & x_{k+2} & \cdots & x_{k-2} & x_{k-1}
\end{pmatrix}_{n\times n}.$$

The sequence $\{x_i\}$ is called the input sequence. The subscripts of the entries are to be read modulo n. For $1 \le j < n-1$, the $(j+1)$-th row is obtained by giving the j-th row a right circular shift by $k \bmod n$ positions. For the special cases $k = 1$ and $k = n-1$, these matrices are known as the circulant matrix ($C_n$) and the reverse circulant matrix ($RC_n$), respectively. If symmetry is imposed on $C_n$, then it is called the symmetric circulant ($SC_n$).

There are many branches of science where these matrices play important roles. Davis (1979) is a rich source for results on, and applications of, circulant matrices with non-random entries. See also Pollock (2002). Here are a few scattered examples of the uses of circulant matrices. The periodogram of a sequence $\{a_l\}_{l\ge 0}$ is defined as $n^{-1}\big|\sum_{l=0}^{n-1} a_l e^{2\pi i j l/n}\big|^2$, $-\lfloor\frac{n-1}{2}\rfloor \le j \le \lfloor\frac{n-1}{2}\rfloor$. Properties of the periodogram are fundamental in the spectral analysis of time series. See for instance Brockwell and Davis (2006), and Fan and Yao (2003). It is easy to see that the periodogram is a straightforward function of the eigenvalues of a suitable circulant matrix. The k-circulant matrices and their block versions arise in multi-level supersaturated design of experiment (see Georgiou and Koukouvinos (2006)), spectra of De Bruijn graphs (see Strok (1992)) and (0, 1)-matrix solutions to $A^m = J_n$ (see Wu et al. (2002)).


Non-random Toeplitz matrices and the corresponding Toeplitz operators are well-studied objects in mathematics. Toeplitz matrices appear in several places: as the covariance matrix of stationary processes, in shift-invariant linear filtering, and in many aspects of combinatorics, time series and harmonic analysis. The matrix $C_n$ plays a crucial role in the study of large dimensional Toeplitz matrices with non-random input. See, for example, Grenander and Szegő (1984) and Gray (2006).

As mentioned earlier, our aim is to study the eigenvalue properties of the circulant-type matrices. If $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of any $n \times n$ matrix $A_n$, then its empirical spectral distribution function (ESD) is given by
$$F_{A_n}(x, y) = \frac{1}{n}\sum_{i=1}^{n} I\{\Re(\lambda_i) \le x,\ \Im(\lambda_i) \le y\}.$$
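As a quick numerical illustration of this definition (ours, not from the book), the ESD of any fixed matrix can be evaluated directly from its eigenvalues. The helper name esd below is hypothetical; the sketch assumes numpy is available.

```python
import numpy as np

def esd(A, x, y):
    """Empirical spectral distribution F_A(x, y): the fraction of
    eigenvalues lambda_i with Re(lambda_i) <= x and Im(lambda_i) <= y."""
    lam = np.linalg.eigvals(A)
    return np.mean((lam.real <= x) & (lam.imag <= y))

# A small circulant with input x_0, ..., x_3; entry (i, j) is x_{(j-i) mod 4}.
x = np.array([1.0, 2.0, 0.5, -1.0])
C = np.array([[x[(j - i) % 4] for j in range(4)] for i in range(4)])
print(esd(C, 0.0, 0.0))  # mass of eigenvalues in the lower-left quadrant
```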

If the entries of $A_n$ are random and $n \to \infty$, then these are called large dimensional random matrices (LDRM). The behavior of the eigenvalues of LDRMs, from many different angles, has attracted considerable interest in physics, mathematics, statistics, wireless communication and other branches of science. The limiting spectral distribution (LSD) of the sequence $\{A_n\}$ is defined as the weak limit of the sequence $\{F_{A_n}\}$, if it exists. LSDs for different random matrix models have been established over the years. In particular, the LSD of a sequence of random symmetric Toeplitz matrices exists under appropriate conditions. This LSD is absolutely continuous with respect to the Lebesgue measure and, quite interestingly, the embedding of the Toeplitz matrix into a circulant matrix has been crucially used to establish this.

Another aspect that is studied for LDRMs is the limiting behavior near the "edge": the behavior of the extreme eigenvalues and the spectral radius. This behavior is very non-trivial for most random matrices. Similarly, the spacings between the eigenvalues are also difficult objects to analyze for most random matrix models. We shall study all of these aspects for the circulant-type matrices.

In Chapter 1, we describe the structure of the different circulant-type matrices and their eigenvalues. A formula solution for the eigenvalues of the general k-circulant is presented in terms of the input sequence. Its proof is technical and is relegated to the Appendix. Though this solution is in a reasonably closed form, considerable work is needed to turn it into results on the LSD, the spectral radius and the spacings of the eigenvalues. These results are developed in a systematic and precise way, using the moment method and the method of normal approximation.

In Chapter 2, a general approach based on the moment method is first discussed in the context of the LSD of symmetric random matrices. This is used to establish the LSDs of the two circulants that are symmetric, namely $RC_n$ and $SC_n$, when the input is i.i.d.

In Chapter 3, we introduce the basic normal approximation technique and use it to obtain the LSDs of the non-symmetric circulants, namely $C_n$ and $A_{k,n}$ (for certain combinations of k and n), when the input is independent.

In Chapter 4 the above results are extended to inputs which are stationary two-sided moving average processes of infinite order, that is,
$$x_n = \sum_{i=-\infty}^{\infty} a_i \varepsilon_{n-i}, \quad \text{where } a_n \in \mathbb{R} \text{ and } \sum_{n\in\mathbb{Z}} |a_n| < \infty. \tag{0.1}$$

The limits are suitable mixtures of normal, symmetric square root of the chi-square, and other distributions, with the spectral density of the process involved in the mixtures. For instance, under some conditions on $\{x_n\}$, the ESD of $\frac{1}{\sqrt n}RC_n$ converges weakly to the distribution $F_R$, where
$$F_R(x) = \begin{cases} 1 - \displaystyle\int_0^{1/2} e^{-\frac{x^2}{2\pi f(2\pi t)}}\, dt & \text{if } x > 0,\\[6pt] \displaystyle\int_0^{1/2} e^{-\frac{x^2}{2\pi f(2\pi t)}}\, dt & \text{if } x \le 0, \end{cases}$$

and $f$ is the spectral density function of $\{x_n\}$.

In Chapter 5, sharper normal approximation results are introduced as the main tools. These are used to derive results on the spectral radius of the scaled $C_n$, $RC_n$ and $SC_n$ matrices when the input sequence is i.i.d. with finite moments of suitable order. The almost sure and the distributional convergence of the spectral radius of $RC_n$ and $C_n$ are established. For instance, suppose that $\{x_i\}$ is i.i.d. with mean $\mu$ and $E|x_i|^{2+\delta} < \infty$ for some $\delta > 0$, and consider the $RC_n$ with this input. Then
$$\frac{sp(RC_n) - |\mu|\, n}{\sqrt n} \xrightarrow{\mathcal D} N(0, 1) \quad \text{if } \mu \ne 0,$$
and
$$\frac{sp\big(\tfrac{1}{\sqrt n}RC_n\big) - d_q}{c_q} \xrightarrow{\mathcal D} \Lambda \quad \text{if } \mu = 0,$$
where
$$q = q(n) = \Big\lfloor \frac{n-1}{2} \Big\rfloor, \quad d_q = \sqrt{\ln q}, \quad c_q = \frac{1}{2\sqrt{\ln q}},$$
and $\Lambda$ is the standard Gumbel distribution. The joint behavior of the minimum and the maximum eigenvalue of $SC_n$, and inter alia the distributional convergence of the spectral radius, are also established in this chapter.

In Chapter 6, we first identify the tail behavior of the product of i.i.d. exponential random variables. Suppose $H_m(x) = P[E_1 E_2 \cdots E_m > x]$, where $\{E_i\}$ are i.i.d. standard exponential random variables. First we show


that for any $m$, $1 - H_m(\cdot)$ lies in the maximum domain of attraction of the Gumbel distribution. Using this result and significantly extending the ideas of the previous chapter, the limit distribution of the spectral radius of the k-circulant when $n = k^g + 1$, $g \ge 2$, and when $sn = k^g + 1$ with some suitable condition on $s$, is derived.

Now suppose that the input sequence is a stationary two-sided moving average process of infinite order. For such an input sequence, the eigenvalues are scaled by the spectral density at appropriate ordinates. The limit behavior of the maximum of the modulus, say $M$, of these scaled eigenvalues is derived in Chapter 7. For instance, suppose $\{x_n\}$ is the two-sided moving average process as in (0.1) and
$$M\Big(\frac{1}{\sqrt n}RC_n, f\Big) = \max_{1\le k\le n} \frac{|\lambda_k|}{\sqrt{2\pi f(\omega_k)}},$$
where $\lambda_k$ are the eigenvalues of $\frac{1}{\sqrt n}RC_n$ and $f$ is the spectral density of $\{x_n\}$. Then under some assumptions on $\{x_n\}$,
$$\frac{M\big(\tfrac{1}{\sqrt n}RC_n, f\big) - d_q}{c_q} \xrightarrow{\mathcal D} \Lambda,$$
where $q = q(n) = \lfloor\frac{n-1}{2}\rfloor$, $d_q = \sqrt{\ln q}$ and $c_q = \frac{1}{2\sqrt{\ln q}}$.

In Chapter 8, the convergence of the point process based on the eigenvalues of $RC_n$, $SC_n$ and the k-circulant with i.i.d. entries is discussed. For example, consider $RC_n$. Since most of the eigenvalues of $RC_n$ appear in pairs with opposite signs, we consider the following point process based on half of them:
$$\eta_n(\cdot) = \sum_{j=0}^{q} \epsilon_{\big(\omega_j,\ \frac{\lambda_j - b_q}{a_q}\big)}(\cdot),$$
where $\{\lambda_j\}$ are the eigenvalues (labelled in a specified way) of $\frac{1}{\sqrt n}RC_n$, $\{\omega_j = \frac{2\pi j}{n}\}$ are the Fourier frequencies, $a_q, b_q$ are appropriate scaling and centering constants, and $q = \lfloor n/2 \rfloor$. Then under some conditions on the input, $\eta_n \xrightarrow{\mathcal D} \eta$, where $\eta$ is a Poisson process on $[0, \pi] \times (-\infty, \infty]$ with intensity function $\lambda(t, x) = \pi^{-1} e^{-x}$. Joint convergence of the upper $k$ ordered eigenvalues and their spacings follows from this result.

In Chapter 9 the LSD of a class of circulant-type random matrices with heavy-tailed input is considered. Unlike the light-tailed case where the limit is non-random, here the limit is a random probability distribution. An explicit representation of the limit is provided.

The distributional convergence of the spectral radius of the scaled eigenvalues of $C_n$, $RC_n$ and $SC_n$ when the input sequence is i.i.d. with an appropriately heavy tail is considered in Chapter 10. For instance, consider $RC_n$ with an input sequence $\{Z_t, t \in \mathbb{Z}\}$ which is i.i.d. with a common distribution


$F$ in the domain of attraction of an $\alpha$-stable distribution with $0 < \alpha < 1$. Then $sp\big(\frac{1}{b_n} RC_n\big) \xrightarrow{\mathcal D} Y_\alpha$, where $Y_\alpha$ has a stable distribution with index $\alpha$ and $b_n \approx n^{1/\alpha} L_0(n)$ for some slowly varying function $L_0$. In this heavy-tail situation the limit distribution is not Gumbel anymore. We also consider the maximum of the modulus of the scaled eigenvalues of the $RC_n$, $C_n$ and $SC_n$ matrices when the input is a moving average with a heavy-tailed noise sequence, and establish its weak limit.

In the Appendix, we establish in detail an eigenvalue formula solution for the k-circulant. We also briefly define and explain some of the background material in probability theory that is needed in the main text.

1 Circulants

There are several reasons to study the circulant matrix and its variants. For example, the large dimensional non-random Toeplitz matrix and the corresponding Toeplitz operator are well-studied objects in mathematics. The non-random circulant matrix plays a crucial role in these studies. See, for example, Grenander and Szegő (1984) and Gray (2006). The eigenvalues of the circulant matrix also arise crucially in time series analysis. For instance, the periodogram of a sequence $\{a_l\}_{l\ge 0}$ is defined as $n^{-1}\big|\sum_{l=0}^{n-1} a_l e^{2\pi i jl/n}\big|^2$, $-\lfloor\frac{n-1}{2}\rfloor \le j \le \lfloor\frac{n-1}{2}\rfloor$, and it is a straightforward function of the eigenvalues of a suitable circulant matrix. The properties of the periodogram are fundamental in the spectral analysis of time series. See for instance Fan and Yao (2003). The maximum of the periodogram, in particular, has been studied in Davis and Mikosch (1999). The k-circulant matrices and their block versions arise in areas such as multi-level supersaturated design of experiment (Georgiou and Koukouvinos (2006)), spectra of De Bruijn graphs (Strok (1992)) and (0, 1)-matrix solutions to $A^m = J_n$ (Wu et al. (2002)). See also Davis (1979) and Pollock (2002).

In this chapter we will define all the variants of the circulant matrix that we shall discuss in this book. The nice thing about these circulants is that formula solutions are known for their eigenvalues. These formulae are simple for some of the matrices and much harder to describe for others. We provide detailed derivations of these formulae. Except in the next chapter, the developments in all the subsequent chapters rely heavily on these formulae.

1.1 Circulant

The usual circulant matrix is defined as
$$C_n = \begin{pmatrix}
x_0 & x_1 & x_2 & \cdots & x_{n-2} & x_{n-1}\\
x_{n-1} & x_0 & x_1 & \cdots & x_{n-3} & x_{n-2}\\
x_{n-2} & x_{n-1} & x_0 & \cdots & x_{n-4} & x_{n-3}\\
 & & & \ddots & & \\
x_1 & x_2 & x_3 & \cdots & x_{n-1} & x_0
\end{pmatrix}_{n\times n}.$$

For $1 \le j < n - 1$, its $(j+1)$-th row is obtained by giving its $j$-th row a right circular shift by one position, and the $(i, j)$-th element of the matrix is $x_{(j-i+n) \bmod n}$. It is easy to see that, irrespective of the entries, the eigenvectors of the circulant matrix are $(1, \omega, \ldots, \omega^{n-1})^T$, where $\omega$ is any $n$-th root of unity. Its eigenvalues are also known explicitly. Define
$$\omega_k = \frac{2\pi k}{n} \ \text{ for } k \ge 0, \quad \text{and} \quad i = \sqrt{-1}. \tag{1.1}$$
Then the eigenvalues of $C_n$ are given by (see, for example, Brockwell and Davis (2006)):
$$\lambda_k = \sum_{j=0}^{n-1} x_j \exp(i\omega_k j) = \sum_{j=0}^{n-1} x_j\cos(\omega_k j) + i\sum_{j=0}^{n-1} x_j\sin(\omega_k j), \quad 0 \le k \le n-1. \tag{1.2}$$
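Since (1.2) is just the discrete Fourier transform of the input sequence, it is easy to verify numerically. The following sketch (ours, not from the book) builds $C_n$ from a random input and checks numpy's eigenvalues against (1.2), comparing sorted moduli to sidestep ordering issues.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 8
x = rng.standard_normal(n)

# C_n has (i, j)-th entry x_{(j - i) mod n} (0-based indices).
C = np.array([[x[(j - i) % n] for j in range(n)] for i in range(n)])

# Formula (1.2): lambda_k = sum_j x_j exp(i * omega_k * j), omega_k = 2*pi*k/n.
j = np.arange(n)
formula = np.array([np.sum(x * np.exp(1j * 2 * np.pi * k / n * j))
                    for k in range(n)])

direct = np.linalg.eigvals(C)
assert np.allclose(np.sort(np.abs(direct)), np.sort(np.abs(formula)))
```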

1.2 Symmetric circulant

This is really the symmetric version of $C_n$ and is defined as
$$SC_n = \begin{pmatrix}
x_0 & x_1 & x_2 & \cdots & x_2 & x_1\\
x_1 & x_0 & x_1 & \cdots & x_3 & x_2\\
x_2 & x_1 & x_0 & \cdots & x_4 & x_3\\
 & & & \ddots & & \\
x_1 & x_2 & x_3 & \cdots & x_1 & x_0
\end{pmatrix}_{n\times n}.$$
The first row $(x_0\ x_1\ x_2 \cdots x_2\ x_1)$ is a palindrome, and the $(j+1)$-th row is obtained by giving its $j$-th row a right circular shift by one position. Its $(i, j)$-th element is given by $x_{n/2 - |n/2 - |i-j||}$. Specializing the general formula for the eigenvalues of $C_n$ to $SC_n$, its eigenvalues are given by:

(a) for $n$ odd:
$$\lambda_0 = x_0 + 2\sum_{j=1}^{\lfloor n/2\rfloor} x_j, \qquad \lambda_k = x_0 + 2\sum_{j=1}^{\lfloor n/2\rfloor} x_j\cos(\omega_k j), \quad 1 \le k \le \Big\lfloor\frac{n}{2}\Big\rfloor,$$
with $\lambda_{n-k} = \lambda_k$ for $1 \le k \le \lfloor n/2\rfloor$. Here $\lfloor x\rfloor$ denotes the greatest integer that is less than or equal to $x$.

(b) for $n$ even:
$$\lambda_0 = x_0 + 2\sum_{j=1}^{n/2-1} x_j + x_{n/2}, \qquad \lambda_k = x_0 + 2\sum_{j=1}^{n/2-1} x_j\cos(\omega_k j) + (-1)^k x_{n/2}, \quad 1 \le k \le \frac{n}{2},$$
with $\lambda_{n-k} = \lambda_k$ for $1 \le k \le \frac{n}{2}$.
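A small numerical check of case (a) (our sketch, assuming numpy): build the palindromic first row for odd $n$, form $SC_n$, and compare with the cosine formula.

```python
import numpy as np

rng = np.random.default_rng(1)
n, half = 7, 3                       # odd case (a); half = floor(n/2)
x = rng.standard_normal(half + 1)    # x_0, ..., x_half

# Palindromic first row (x_0, x_1, ..., x_half, x_half, ..., x_1).
row = np.concatenate([x, x[1:][::-1]])
SC = np.array([[row[(j - i) % n] for j in range(n)] for i in range(n)])

omega = 2 * np.pi * np.arange(n) / n
formula = np.array([x[0] + 2 * sum(x[j] * np.cos(omega[k] * j)
                                   for j in range(1, half + 1))
                    for k in range(n)])
assert np.allclose(np.sort(np.linalg.eigvalsh(SC)), np.sort(formula))
```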

1.3 Reverse circulant

The so-called reverse circulant matrix is given by
$$RC_n = \begin{pmatrix}
x_0 & x_1 & x_2 & \cdots & x_{n-2} & x_{n-1}\\
x_1 & x_2 & x_3 & \cdots & x_{n-1} & x_0\\
x_2 & x_3 & x_4 & \cdots & x_0 & x_1\\
 & & & \ddots & & \\
x_{n-1} & x_0 & x_1 & \cdots & x_{n-3} & x_{n-2}
\end{pmatrix}_{n\times n}.$$
For $1 \le j < n - 1$, its $(j+1)$-th row is obtained by giving its $j$-th row a left circular shift by one position. This is a symmetric matrix and its $(i, j)$-th element is given by $x_{(i+j-2) \bmod n}$. The eigenvalues of $RC_n$ are given by the following formula, which is a special case of the general k-circulant matrix treated in the next section:
$$\lambda_0 = \sum_{j=0}^{n-1} x_j, \qquad \lambda_{n/2} = \sum_{j=0}^{n-1} (-1)^j x_j \ \text{ if } n \text{ is even, and}$$
$$\lambda_k = -\lambda_{n-k} = \Big|\sum_{j=0}^{n-1} x_j \exp(i\omega_k j)\Big|, \quad 1 \le k \le \Big\lfloor\frac{n-1}{2}\Big\rfloor. \tag{1.3}$$
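This too can be checked numerically; the sketch below (ours) uses np.fft.fft, whose output has the same moduli as the sums in (1.3), and takes $n$ odd so that only $\lambda_0$ and the $\pm$ pairs occur.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 9
x = rng.standard_normal(n)

# RC_n has (i, j)-th entry x_{(i + j - 2) mod n} (1-based), i.e. x_{(i+j) mod n} 0-based.
idx = (np.arange(n)[:, None] + np.arange(n)[None, :]) % n
RC = x[idx]

dft = np.fft.fft(x)
m = (n - 1) // 2
formula = np.concatenate([[dft[0].real],
                          np.abs(dft[1:m + 1]),
                          -np.abs(dft[1:m + 1])])
assert np.allclose(np.sort(np.linalg.eigvalsh(RC)), np.sort(formula))
```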

1.4 k-circulant

This is a k-shift generalization of Cn . For positive integers k and n, the n × n k-circulant matrix is defined as

$$A_{k,n} = \begin{pmatrix}
x_0 & x_1 & x_2 & \cdots & x_{n-2} & x_{n-1}\\
x_{n-k} & x_{n-k+1} & x_{n-k+2} & \cdots & x_{n-k-2} & x_{n-k-1}\\
x_{n-2k} & x_{n-2k+1} & x_{n-2k+2} & \cdots & x_{n-2k-2} & x_{n-2k-1}\\
 & & & \ddots & & \\
x_k & x_{k+1} & x_{k+2} & \cdots & x_{k-2} & x_{k-1}
\end{pmatrix}_{n\times n}.$$
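A direct way to realize this definition in code (our sketch; the function name k_circulant is ours): row $j$ is the input right-shifted by $jk$ places, so entry $(j, i)$ is $x_{(i - jk) \bmod n}$ in 0-based indexing.

```python
import numpy as np

def k_circulant(x, k):
    """n x n k-circulant A_{k,n}: entry (j, i) = x_{(i - j*k) mod n}, 0-based."""
    n = len(x)
    return np.array([[x[(i - j * k) % n] for i in range(n)] for j in range(n)])

x = np.arange(5)           # input x_0, ..., x_4
A1 = k_circulant(x, 1)     # the usual circulant C_5
A4 = k_circulant(x, 4)     # k = n - 1 gives the reverse circulant RC_5
assert (A4 == A4.T).all()  # RC_n is symmetric
```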

We emphasize that all subscripts appearing above are calculated modulo $n$. For $1 \le j < n - 1$, its $(j+1)$-th row is obtained by giving its $j$-th row a right circular shift by $k$ positions (equivalently, $k \bmod n$ positions). It is clear that $C_n$ and $RC_n$ are special cases of the k-circulant when we let $k = 1$ and $k = n - 1$, respectively. We now give a brief description of the eigenvalues. For any positive integers $k$ and $n$, let $p_1 < p_2 < \cdots < p_c$ be all their common prime factors, so that
$$n = n' \prod_{q=1}^{c} p_q^{\beta_q} \quad \text{and} \quad k = k' \prod_{q=1}^{c} p_q^{\alpha_q}. \tag{1.4}$$

Here $\alpha_q, \beta_q \ge 1$ and $n', k', p_q$ are pairwise relatively prime. For any positive integer $m$, let $\mathbb{Z}_m = \{0, 1, 2, \ldots, m-1\}$. For fixed $k$ and $n$, define the sets
$$S(x) = \{x k^b \ (\mathrm{mod}\ n') : b \text{ is an integer and } b \ge 0\}, \tag{1.5}$$
where $x \in \mathbb{Z}_{n'}$ and $n'$ is as in (1.4). We will use the following notation throughout the book: for any set $A$, $\#A$ = cardinality of $A$. Let $g_x = \#S(x)$. We call $g_x$ the order of $x$. Note that $g_0 = 1$. We observe the following about the sets $S(x)$ and the numbers $g_x$:

(i) $S(x) = \{x k^b \ (\mathrm{mod}\ n') : 0 \le b < g_x\}$.

(ii) An alternative description of $g_x$, which will be used later, is the following. For $x \in \mathbb{Z}_{n'}$, let
$$O_x = \{b > 0 : b \text{ is an integer and } x k^b = x \ (\mathrm{mod}\ n')\}. \tag{1.6}$$
Then $g_x = \min O_x$; that is, $g_x$ is the smallest positive integer $b$ such that $x k^b = x \ (\mathrm{mod}\ n')$.

(iii) For $x \ne u$, either $S(x) = S(u)$ or $S(x) \cap S(u) = \emptyset$. As a consequence, the distinct sets from the collection $\{S(x) : x \in \mathbb{Z}_{n'}\}$ form a partition of $\mathbb{Z}_{n'}$.
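The sets $S(x)$, the orders $g_x$, and the resulting partition of $\mathbb{Z}_{n'}$ are easy to compute; a minimal sketch follows (ours, with the hypothetical helper name eigenvalue_partition).

```python
from math import gcd

def eigenvalue_partition(k, n):
    """Return n' and the orbits S(x) = {x * k^b mod n'} partitioning Z_{n'},
    as in (1.5)-(1.7); g_x is the length of the orbit containing x."""
    n_prime = n
    while gcd(n_prime, k) > 1:          # strip the prime factors shared with k
        n_prime //= gcd(n_prime, k)
    seen, blocks = set(), []
    for x in range(n_prime):
        if x in seen:
            continue
        orbit, t = [], x
        while t not in orbit:           # gcd(k, n') = 1, so the orbit returns to x
            orbit.append(t)
            t = (t * k) % n_prime
        blocks.append(orbit)
        seen.update(orbit)
    return n_prime, blocks

n_prime, blocks = eigenvalue_partition(k=3, n=14)
print(n_prime, blocks)   # P_0 = {0} and the remaining orbits of Z_14 under x -> 3x
```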


We call the distinct sets from $\{S(x) : x \in \mathbb{Z}_{n'}\}$ the eigenvalue partition of $\mathbb{Z}_{n'}$, and we will denote the partitioning sets and their sizes by
$$\{P_0 = \{0\}, P_1, \ldots, P_{\ell-1}\}, \quad \text{and} \quad n_j = \#P_j, \ 0 \le j < \ell. \tag{1.7}$$
Define
$$y_j := \prod_{t\in P_j} \lambda_{t y}, \quad j = 0, 1, \ldots, \ell - 1, \tag{1.8}$$
where $y = n/n'$ and $\lambda_{ty}$ is as defined in (1.2). We now provide the solution for the eigenvalues of $A_{k,n}$. In the Appendix, we have reproduced its proof from Bose et al. (2012b).

Theorem 1.4.1 (Zhou (1996)). The characteristic polynomial of $A_{k,n}$ is given by
$$\chi_{A_{k,n}}(\lambda) = \lambda^{n-n'} \prod_{j=0}^{\ell-1}\big(\lambda^{n_j} - y_j\big), \tag{1.9}$$
where $y_j$ is as defined in (1.8).
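Theorem 1.4.1 can be sanity-checked numerically. The sketch below (ours, not from the book) compares the moduli of numpy's eigenvalues of $A_{k,n}$ with the moduli predicted by (1.9): $n - n'$ zeros plus, for each block $P_j$, $n_j$ roots of modulus $|y_j|^{1/n_j}$. It uses np.fft.fft, whose output is the complex conjugate of (1.2) for real input, which leaves all moduli unchanged.

```python
import numpy as np
from math import gcd

n, k = 14, 3
rng = np.random.default_rng(3)
x = rng.standard_normal(n)

A = np.array([[x[(i - j * k) % n] for i in range(n)] for j in range(n)])
direct = np.sort(np.abs(np.linalg.eigvals(A)))

n1 = n
while gcd(n1, k) > 1:                  # n1 plays the role of n'
    n1 //= gcd(n1, k)
lam = np.fft.fft(x)                    # conjugate of (1.2); same moduli
y = n // n1
moduli = [0.0] * (n - n1)              # the n - n' zero eigenvalues
seen = set()
for s in range(n1):
    if s in seen:
        continue
    block, t = [], s
    while t not in block:
        block.append(t)
        t = (t * k) % n1
    seen.update(block)
    yj = np.prod([lam[t * y] for t in block])
    moduli.extend([abs(yj) ** (1 / len(block))] * len(block))

assert np.allclose(direct, np.sort(np.array(moduli)))
```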

Now we note some useful properties of the eigenvalue partition $\{P_j, j = 0, 1, \ldots, \ell-1\}$ in the following lemmata. The reader may ignore these properties for the time being, but they will be used in Chapters 4, 6 and 8. A more detailed analysis of the eigenvalues, useful in deriving the limiting distribution of the spectral radius for a specific class of k-circulant matrices, is developed in Section 6.2.

Lemma 1.4.2. (a) Let $x, y \in \mathbb{Z}_{n'}$. If $n' - t_0 \in S(y)$ for some $t_0 \in S(x)$, then for every $t \in S(x)$, we have $n' - t \in S(y)$.
(b) Fix $x \in \mathbb{Z}_{n'}$. Then $g_x$ divides $b$ for every $b \in O_x$. Furthermore, $g_x$ divides $g_1$ for each $x \in \mathbb{Z}_{n'}$.
(c) Suppose $g$ divides $g_1$. Set $m := \gcd(k^g - 1, n')$. Let $X(g)$ and $Y(g)$ be defined as
$$X(g) := \{x \in \mathbb{Z}_{n'} : x \text{ has order } g\}, \qquad Y(g) := \{b n'/m : 0 \le b < m\}.$$
Then $X(g) \subseteq Y(g)$, $\#Y(g) = m$, and
$$\bigcup_{h:\, h \mid g} X(h) = Y(g).$$

Proof. (a) Since $t \in S(x) = S(t_0)$, we can write $t = t_0 k^b \ (\mathrm{mod}\ n')$ for some $b \ge 0$. Therefore $n' - t = (n' - t_0)k^b \ (\mathrm{mod}\ n') \in S(n' - t_0) = S(y)$.
(b) Fix $b \in O_x$. Since $g_x$ is the smallest element of $O_x$, it follows that $g_x \le b$. Suppose, if possible, $b = q g_x + r$ where $0 < r < g_x$. By the fact that $x k^{g_x} = x \ (\mathrm{mod}\ n')$, it then follows that
$$x = x k^b \ (\mathrm{mod}\ n') = x k^{q g_x + r} \ (\mathrm{mod}\ n') = x k^r \ (\mathrm{mod}\ n').$$


This implies that $r \in O_x$ and $r < g_x$, which contradicts the fact that $g_x$ is the smallest element of $O_x$. Hence we must have $r = 0$, proving that $g_x$ divides $b$. Note that $k^{g_1} = 1 \ (\mathrm{mod}\ n')$, implying that $x k^{g_1} = x \ (\mathrm{mod}\ n')$. Therefore $g_1 \in O_x$, proving the assertion.
(c) Clearly, $\#Y(g) = m$. Fix $x \in X(h)$ where $h$ divides $g$. Then $x k^g = x (k^h)^{g/h} = x \ (\mathrm{mod}\ n')$, since $g/h$ is a positive integer. Therefore $n'$ divides $x(k^g - 1)$. So $n'/m$ divides $x(k^g - 1)/m$. But $n'/m$ is relatively prime to $(k^g - 1)/m$ and hence $n'/m$ divides $x$. So $x = b n'/m$ for some integer $b \ge 0$. Since $0 \le x < n'$, we have $0 \le b < m$ and $x \in Y(g)$, proving $\bigcup_{h: h\mid g} X(h) \subseteq Y(g)$ and, in particular, $X(g) \subseteq Y(g)$. On the other hand, take $0 \le b < m$. Then $(b n'/m)\, k^g = b n'/m \ (\mathrm{mod}\ n')$. Hence $g \in O_{bn'/m}$, which implies, by part (b) of the lemma, that $g_{bn'/m}$ divides $g$. Therefore $Y(g) \subseteq \bigcup_{h: h\mid g} X(h)$, and that completes the proof.

Lemma 1.4.3. Let $g_1 = q_1^{\gamma_1} q_2^{\gamma_2}\cdots q_m^{\gamma_m}$ where $q_1 < q_2 < \cdots < q_m$ are primes. Define, for $1 \le j \le m$,
$$L_j := \big\{q_{i_1} q_{i_2}\cdots q_{i_j} : 1 \le i_1 < \cdots < i_j \le m\big\}$$
and
$$G_j = \sum_{\ell_j \in L_j} \#Y(g_1/\ell_j) = \sum_{\ell_j \in L_j} \gcd\big(k^{g_1/\ell_j} - 1,\, n'\big).$$

Then we have
(a) $\#\{x \in \mathbb{Z}_{n'} : g_x < g_1\} = G_1 - G_2 + G_3 - G_4 + \cdots$.
(b) $G_1 - G_2 + G_3 - G_4 + \cdots \le G_1$.

Proof. Fix $x \in \mathbb{Z}_{n'}$. By Lemma 1.4.2(b), $g_x$ divides $g_1$ and hence we can write $g_x = q_1^{\eta_1}\cdots q_m^{\eta_m}$ where $0 \le \eta_b \le \gamma_b$ for $1 \le b \le m$. Since $g_x < g_1$, there is at least one $b$ such that $\eta_b < \gamma_b$. Suppose that exactly $h$ of the $\eta$'s are equal to the corresponding $\gamma$'s, where $0 \le h < m$. To keep notation simple, we will assume that $\eta_b = \gamma_b$ for $1 \le b \le h$ and $\eta_b < \gamma_b$ for $h+1 \le b \le m$.
(a) Then $x \in Y(g_1/q_b)$ for $h+1 \le b \le m$ and $x \notin Y(g_1/q_b)$ for $1 \le b \le h$. So $x$ is counted $(m - h)$ times in $G_1$. Similarly, $x$ is counted $\binom{m-h}{2}$ times in $G_2$, $\binom{m-h}{3}$ times in $G_3$, and so on. Hence the total number of times $x$ is counted in $(G_1 - G_2 + G_3 - \cdots)$ is
$$\binom{m-h}{1} - \binom{m-h}{2} + \binom{m-h}{3} - \cdots = 1.$$
(b) Note that $m - h \ge 1$. Further, each element of the set $\{x \in \mathbb{Z}_{n'} : g_x < g_1\}$ is counted once in $G_1 - G_2 + G_3 - \cdots$ and $(m - h)$ times in $G_1$. The result follows immediately.


In Lemma 1.4.2(b), we observed that $g_x \le g_1$ for all $x \in \mathbb{Z}_{n'}$. We will now consider the elements of $\mathbb{Z}_{n'}$ whose orders are strictly less than $g_1$. We define
$$\upsilon_{k,n'} = \#\{x \in \mathbb{Z}_{n'} : g_x < g_1\}. \tag{1.10}$$
The following lemma establishes upper bounds on $\upsilon_{k,n'}$ and will be crucially used in Chapters 4 and 6.

Lemma 1.4.4. (a) If $g_1 = 2$, then $\upsilon_{k,n'} = \gcd(k - 1, n')$.
(b) If $g_1 \ge 4$ is even and $k^{g_1/2} = -1 \ (\mathrm{mod}\ n')$, then
$$\upsilon_{k,n'} \le 1 + \sum_{b:\ b\mid g_1,\ b \ge 3} \gcd\big(k^{g_1/b} - 1,\, n'\big).$$
(c) If $g_1 \ge 2$ and $q_1$ is the smallest prime divisor of $g_1$, then $\upsilon_{k,n'} < 2k^{g_1/q_1}$.

Proof. (a) This is immediate from Lemma 1.4.3(a), which asserts that $\upsilon_{k,n'} = \#Y(1) = \gcd(k - 1, n')$.
(b) Fix $x \in \mathbb{Z}_{n'}$ with $g_x < g_1$. Since $g_x$ divides $g_1$ and $g_x < g_1$, $g_x$ must be of the form $g_1/b$ for some integer $b \ge 2$, provided $g_1/b$ is an integer. If $b = 2$, then $x k^{g_1/2} = x k^{g_x} = x \ (\mathrm{mod}\ n')$. But $k^{g_1/2} = -1 \ (\mathrm{mod}\ n')$ and so $x k^{g_1/2} = -x \ (\mathrm{mod}\ n')$. Therefore $2x = 0 \ (\mathrm{mod}\ n')$ and $x$ can be either $0$ or $n'/2$, provided $n'/2$ is an integer. But $g_0 = 1 < 2 \le g_1/2$, so $x$ cannot be $0$. So there is at most one element in the set $X(g_1/2)$. Thus, we have
$$\#\{x \in \mathbb{Z}_{n'} : g_x < g_1\} = \#X(g_1/2) + \sum_{b\mid g_1,\ b\ge 3}\#\{x \in \mathbb{Z}_{n'} : g_x = g_1/b\}
= \#X(g_1/2) + \sum_{b\mid g_1,\ b\ge 3}\#X(g_1/b)$$
$$\le 1 + \sum_{b\mid g_1,\ b\ge 3}\#Y(g_1/b) \quad \text{[by Lemma 1.4.2(c)]}
= 1 + \sum_{b\mid g_1,\ b\ge 3}\gcd\big(k^{g_1/b} - 1,\, n'\big) \quad \text{[by Lemma 1.4.2(c)]}.$$
(c) As in Lemma 1.4.3, let $g_1 = q_1^{\gamma_1} q_2^{\gamma_2}\cdots q_m^{\gamma_m}$ where $q_1 < q_2 < \cdots < q_m$ are primes. Then by Lemma 1.4.3,
$$\upsilon_{k,n'} = G_1 - G_2 + G_3 - G_4 + \cdots \le G_1 = \sum_{b=1}^{m}\gcd\big(k^{g_1/q_b} - 1,\, n'\big).$$

2 Symmetric and reverse circulant

2.1 Spectral distribution

Let $F_{A_n}$ denote the ESD of $A_n$, let $F$ be a candidate limit distribution, and let $C_F$ denote the set of continuity points of $F$. We say that $F_{A_n}$ converges to $F$ (i) almost surely if, almost surely, $F_{A_n}(t) \to F(t)$ for every $t \in C_F$, and (ii) in probability if for every $\varepsilon > 0$ and $t \in C_F$, $P(|F_{A_n}(t) - F(t)| > \varepsilon) \to 0$ as $n \to \infty$. Since (ii) is equivalent to saying that for all $t \in C_F$,
$$\int_\Omega \big(F_{A_n}(t) - F(t)\big)^2\, dP(\omega) \to 0 \quad \text{as } n \to \infty,$$
0 and t ∈ CF , P(|FAn (t) − F (t)| > ) → 0 as n → ∞. Since (ii) is equivalent to saying that for all t ∈ CF , Z  2 FAn (t) − F (t) d P(ω) → 0 as n → ∞, Ω

we also say that FAn converges to F in L2 . It is also easy to see that (i) ⇒ (ii). Several methods to establish the LSD of a sequence of random matrices are known in the literature. Out of these, the moment method is the most suitable to deal with real symmetric matrices. We now discuss this method. Then we shall use it to establish results on the LSD of SCn and RCn .

2.2

Moment method

For any real-valued random variable X or its distribution F , βh (F ) and βh (X) respectively will denote its h-th moment. The following result is well-known in the literature of weak convergence of probability measures. Its proof is easy and we leave it as an exercise. Lemma 2.2.1. Let {Yn } be a sequence of real-valued random variables with distributions {Fn }. Suppose that βh (Yn ) → βh (say) for every positive integer h. If there is a unique distribution F whose moments are {βh }, then Yn (or equivalently Fn ) converges to F in distribution. Observe that the above lemma requires a uniqueness condition. The following result of Riesz (1923) offers a sufficient condition for this uniqueness

Moment method

11

and will be enough for our purposes. A proof can be found in Bose (2018) and in Bai and Silverstein (2010). Lemma 2.2.2. Let {βk } be the sequence of moments of the distribution function F . Then F is the unique distribution with these moments if lim inf k→∞

1 1 2k β2k < ∞ (Riesz’s condition). k

(2.1)

The moments of any normal random variable satisfy Riesz’s condition (2.1). This fact will be useful to us and we record it as a lemma. It can be proven using Stirling’s approximation for factorials and is left as an exercise. Lemma 2.2.3. {βk } satisfies condition (2.1) if for some 0 < ∆ < ∞, β2k ≤

(2k)! k ∆ , k = 0, 1, . . . . k!2k

Now consider the ESD FAn . Its h-th moment has the following nice form: n

βh (FAn ) =

1X h 1 λi = Tr(Ahn ) = βh (An ) (say), n i=1 n

(2.2)

where Tr denotes the trace of a matrix. This is often known as the tracemoment formula. Let EFn and E denote the expectations respectively, with respect to the ESD Fn , and the probability on the space where the entries of the random matrix are defined. Thus, Lemma 2.2.1 comes into force, except that now the moments are also random. Lemma 2.2.4 links convergence of moments of the ESD to those of the LSD. Consider the following conditions: (M1) For every h ≥ 1, E[βh (An )] → βh . (M2) Var[βh (An )] → 0 for every h ≥ 1. (M4)

∞ X

E[βh (An ) − E(βh (An ))]4 < ∞ for every h ≥ 1.

n=1

(R) The sequence {βh } satisfies Riesz’s condition (2.1). Note that (M4) implies (M2). The following lemma follows easily from Lemma 2.2.1 and the Borel-Cantelli Lemma. We omit its proof. Lemma 2.2.4. (a) If (M1), (M2) and (R) hold, then {FAn } converges in probability to F determined by {βh }. (b) If further (M4) holds, then the convergence in (a) is almost sure. The computation of E[βh (An )] involves computation of the expected trace of Ahn or at least its leading term. This ultimately reduces to counting the

12

Symmetric and reverse circulant

number of contributing terms in the following expansion (here aij denotes the (i, j)-th entry of An ): X E[Tr(Ahn )] = E[ai1 i2 ai2 i3 · · · aih−1 ih aih i1 ]. 1≤i1 ,i2 ,...,ih ≤n

The method is straightforward but requires all moments to be finite. This problem is usually circumvented by first assuming this to be the case, and then resorting to truncation arguments when higher moments are not necessarily finite. Also note that in specific cases, the combinatorial arguments involved may become quite unwieldy as h and n increase. Nevertheless, this approach has been successfully used to establish the existence of the LSD for several important real symmetric random matrices.

2.2.1

Scaling

The matrices need appropriate scaling for the existence of a non-trivial LSD. To understand what this scaling should be, assume that {xi } have mean zero and variance 1 and consider the specific case of the symmetric circulant. Let Fn denote the ESD of SCn and let Xn denote a random variable with distribution Fn . Then n

EFn (Xn ) = EFn (Xn2 ) =

=

1X 1 λi = Tr(SCn ) = x0 , E[EFn (Xn )] = 0, and n i=1 n n  1X 2 1 λi = Tr SCn 2 n i=1 n  Pb n2 c 2 1  xj ],  n [nx20 + 2n j=1

 

1 2 n [nx0

+ 2n

P n2 −1 j=1

for n odd,

x2j + x2n/2 ], for n even, and

E[EFn (Xn2 )] = n. Hence, for stability of the second moment, it is appropriate to consider √1 SCn . The same scaling is needed for all the circulant-type matrices. n

2.2.2

Input and link

The sequence of variables {xi , i ≥ 0} that is used to construct our matrix will be called the input sequence. Let us begin by making the following assumption. Later we shall introduce several weaker variations of this. Assumption 2.2.1. {xi } are i.i.d., E(xi ) = 0, Var(xi ) = 1, and E(x2k i ) < ∞ for all k ≥ 1. Any of the circulant-type matrices is really constructed out of an input

Moment method

13

sequence in the following way. Let Z be the set of all integers and let Z+ denote the set of all non-negative integers. Let Ln : {1, 2, . . . , n}2 → Z, n ≥ 1

(2.3)

be a sequence of functions. For notational convenience, we shall write Ln = L and call it the link function. By abuse of notation we write Z2+ as the common domain of {Ln }. Then each circulant-type matrix is of the form An = ((xL(i,j) )),

(2.4)

and has its own distinct link function. If L is symmetric, (L(i, j) = L(j, i) for all i, j), then An is symmetric. In particular the link functions for the symmetric circulant and the reverse circulant are respectively given by L(i, j) = n/2 − |n/2 − |i − j|| and L(i, j) = (i + j − 2) mod n. The link functions of our circulants share some common features that shall be crucial: let {An } be any sequence of circulant-type matrices. Let kn = number of distinct variables in An , αn = maximum number of times a variable appears in An , and βn = maximum number of times any variable appears in a row/column of An . Then kn = O(n), αn = O(n) and βn = O(1).

2.2.3

(2.5)

Trace formula and circuits

Let An = ((xL(i,j) )). Using (2.2), the h-th moment of Fn−1/2 An is given by the following trace formula or the trace-moment formula: 1 Tr n



A √n n

h =

1 n1+h/2

X

xL(i1 ,i2 ) xL(i2 ,i3 ) · · · xL(ih−1 ,ih ) xL(ih ,i1 ) .

1≤i1 ,i2 ,...,ih ≤n

(2.6) We shall now develop an equivalence relation which will help us to group the terms with the same expectation together. Circuit: π : {0, 1, 2, . . . , h} → {1, 2, . . . , n} with π(0) = π(h) is called a circuit of length h. The dependence of a circuit on h and n will be suppressed. Clearly, the convergence in (M1), (M2) and (M4) may be written in terms of circuits. For example, (M1) can be written as h1 1 Tr E[βh ( √ An )] = E n n



A √n n

h i

=

1 n1+h/2

X π: π circuit

E Xπ → βh ,

(2.7)

14

Symmetric and reverse circulant

where Xπ = xL(π(0),π(1)) xL(π(1),π(2)) · · · xL(π(h−2),π(h−1)) xL(π(h−1),π(h)) . Matched Circuit: We call a circuit π matched if each value L(π(i − 1), π(i)), 1 ≤ i ≤ h is repeated at least twice. If π is non-matched, then E(Xπ ) = 0. If each value is repeated exactly twice (so h is necessarily even) then we say π is pair-matched, and in that case E(Xπ ) = 1. To deal with (M2) or (M4), we need multiple circuits: k circuits π1 , . . . , πk are jointly-matched if each L-value occurs at least twice across all circuits. They are cross-matched if each circuit has at least one L-value which occurs in at least one of the other circuits. Note that this implies that none of them are self-matched. Equivalence of circuits: Say that π1 and π2 are equivalent (write π1 ∼ π2 ) if and only if their L-values match at the same locations. That is, for all i, j, L(π1 (i−1), π1 (i)) = L(π1 (j−1), π1 (j)) ⇔ L(π2 (i−1), π2 (i)) = L(π2 (j−1), π2 (j)). (2.8) This defines an equivalence relation, and if π1 ∼ π2 then E Xπ1 = E Xπ2 .

2.2.4

Words and vertices

Any equivalence class can be indexed by a partition of {1, 2, . . . , h}. Each partition block identifies the positions where the L-matches take place. We label these partitions by words of length h of letters where the first occurrence of each letter is in alphabetical order. For example, if h = 5 then the partition {{1, 5}, {2, 3, 4}} is represented by the word abbba. This identifies all circuits π for which L(π(0), π(1)) = L(π(4), π(5)) and L(π(1), π(2)) = L(π(2), π(3)) = L(π(3), π(4)). Let w[i] denote the i-th entry of w. The class Π: The equivalence class corresponding to w will be denoted by Π(w) = {π : w[i] = w[j] ⇔ L(π(i − 1), π(i)) = L(π(j − 1), π(j))}. The number of distinct letters in w is the number of partition blocks corresponding to w and will be denoted by |w|. If π ∈ Π(w), then clearly, |w| = #{L(π(i − 1), π(i)) : 1 ≤ i ≤ h}. Note that for any fixed h, as n → ∞, the number of words is finite but the number of circuits in any given Π(w) may grow indefinitely. Notions introduced for circuits carry over to words in an obvious way. For instance, w is pair-matched if every letter appears exactly twice. Let W2k (2) = {w : w is pair-matched and is of length 2k}.

(2.9)

Vertex: Any i (or π(i) by abuse of notation) will be called a vertex. It is

Moment method

15

generating, if either i = 0 or w[i] is the first occurrence of a letter. Otherwise it is called non-generating. For example, if w = abbbcab then only π(0), π(1), π(2), π(5) are generating. Clearly for our two matrices, a circuit is completely determined, up to a finitely many choices, by its generating vertices. The number of generating vertices is |w| + 1 and hence #Π(w) = O(n|w|+1 ). We already know that it is enough to consider matched circuits. The next lemma shows that we can restrict attention to only pair-matched words. Let N = number of matched but not pair-matched circuits of length h. Lemma 2.2.5. Let L be the SCn or the RCn link function. (a) There is a constant Ch , depending on L and h, such that N ≤ Ch nb(h+1)/2c and hence n−(1+h/2) N → 0 as n → ∞.

(2.10)

(b) If the input sequence satisfies Assumption 2.2.1, then for every k ≥ 1, 1 lim E[β2k+1 ( √ An )] = 0 and (2.11) n n X 1 1 lim E[β2k ( √ An )] = lim 1+k #Π(w) (if the limit exists). (2.12) n n n n w∈W2k (2)

Proof. (a) Since the number of words is finite, we can fix a word and argue. Let w be a word of length h which is matched but not pair-matched. Either h = 2k or h = 2k − 1 for some k. In both cases |w| ≤ k − 1. If we fix the generating vertices, since βn = O(1) (see (2.5)), the number of choices for the non-generating vertices is upper bounded by Ch , say. Hence #Π(w) ≤ nCh nk−1 = Ch nb(h+1)/2c . Relation (2.10) is an immediate consequence. (b) For the first part of (b), since h = 2k + 1 is odd, there are only terms which are matched but not pair-matched. Since all moments are finite, there is a grand upper bound for all the moments. Now use part (a). To prove the second part of (b), using the mean zero and independence assumption, (provided the last limit below exists), 1 lim E[β2k ( √ An )] n

= =

lim

1 n1+k

X w matched

X

E Xπ

π circuit

lim

1 n1+k

X π∈Π(w)

E Xπ .

(2.13)

16

Symmetric and reverse circulant

By Holder’s inequality and Assumption 2.2.1, for some constant C2k , X | E Xπ | ≤ #Π(w)C2k . π: π∈Π(w)

Therefore, from part (a), matched circuits which are not pair-matched do not contribute to the limit in (2.12). So X 1 1 lim E[β2k ( √ An )] = lim 1+k #Π(w), (2.14) n n n w∈W2k (2)

if the limit exists. This establishes (2.12) and the proof is complete.

2.2.5

(M1) and Riesz’s condition

Define, for every k and for every w ∈ W2k (2), p(w) = lim n

1 #Π(w), whenever the limit exists. n1+k

(2.15)

For any fixed word, this limit will be positive and finite only if the number of elements in Π(w) is of exact order nk+1 . Lemma 2.2.5 implies that then the limiting (2k)-th moment (provided the limit above exists) is the finite sum X β2k = p(w). (2.16) w∈W2k (2)

This would essentially establish the (M1) condition. We shall verify the existence of the limit (2.15) shortly for RCn and SCn . We can easily establish Riesz’s condition (2.1) for these two matrices. Recall that once the generating vertices are fixed, the number of choices for each non-generating vertex is bounded above (say by ∆). [Indeed, we shall show later that for the words w for which p(w) 6= 0, we have ∆ = 1]. Hence, for each pair-matched w of length 2k, p(w) ≤ ∆k . As there are

(2k)! k!2k

pair-matched words of length 2k, we get β2k ≤

(2k)! k ∆ , k!2k

and hence condition (2.1) holds by Lemma 2.2.3.

2.2.6

(M4) condition

The next two lemmata help to verify (M4). We skip the proofs. For detailed proofs of these lemmata, see Chapter 1 of Bose (2018). Let Qh,4 = #{(π1 , π2 , π3 , π4 ) : all are of length h and are jointly- and cross-matched}.

Reverse circulant

17

Lemma 2.2.6. Let L be the SCn or the RCn link function. Then there exists a K, depending on L and h, such that Qh,4 ≤ Kn2h+2 . Lemma 2.2.7. Let An = SCn or RCn . If the input sequence {xi } satisfies Assumption 2.2.1, then E

1 An  h 1 An h 4 Tr √ − E Tr √ = O(n−2 ), n n n n

and hence (M4) holds.

2.3

Reverse circulant

Figure 2.1 shows the histogram of the simulated eigenvalues of

√1 RCn . n

Bose

900 800 700 600 500 400 300 200 100 0 -4

-3

-2

-1

0

1

2

3

4

FIGURE 2.1 Histogram of the eigenvalues of

√1 RCn n

for n = 1000, 30 replications. Input is i.i.d. N (0, 1).

and Mitra (2002) first established the LSD of √1n RCn by using a normal approximation when the input variables have a finite third moment. Here we present the moment method proof from Bose and Sen (2008). The following words play a crucial role. Definition 2.3.1. (Symmetric word). A pair-matched word is symmetric if each letter occurs once each in an odd and an even position. We shall denote the set of symmetric words of length 2k by S2k . For example, w = aabb is a symmetric word while w = abab is not. A simple counting argument leads to the following lemma. Its proof is left as an exercise.

18

Symmetric and reverse circulant

Lemma 2.3.1. For every k ≥ 1, #S2k = k!. Let R denote a random variable which is distributed as the symmetrized square root of χ22 /2 where χ22 is a chi-squared random variable with two degrees of freedom. Let LR denote its distribution. This distribution is known as the symmetrized Rayleigh distribution. The density and moments of this distribution are given by fR (x) β2k (LR )

= |x| exp(−x2 ), −∞ < x < ∞,

(2.17)

= k! and β2k+1 (LR ) = 0, k = 0, 1, . . . .

Theorem 2.3.2 (Bose and Sen (2008)). If {xi } satisfies Assumption 2.2.1, then the almost sure LSD of √1n RCn is LR . Proof. Using Lemmata 2.2.5, 2.2.7 and (2.17), it is enough to show that 1 lim E[β2k ( √ RCn )] n n

X

=

lim n

w∈W2k (2)

1 #Π(w) nk+1

= k!. Now due to Lemma 2.3.1, it is enough to verify the following two statements: c (i) If w ∈ W2k (2) ∩ S2k then limn→∞

1 #Π(w) nk+1

= 0.

(ii) If w ∈ S2k then for each choice of the generating vertices there is exactly 1 one choice for the non-generating vertices. Hence limn→∞ nk+1 #Π(w) = 1. Proof of (i): It is enough to restrict attention to pair-matched words. Let vi = π(i)/n and ti = vi + vi−1 . Note that the vertices i, j match if and only if (π(i − 1) + π(i) − 2) mod n = ⇔ ti − tj

=

(π(j − 1) + π(j) − 2) mod n 0, 1 or − 1.

Since w is pair-matched, let {(is , js ), 1 ≤ s ≤ k} be such that w[is ] = w[js ], where js , 1 ≤ s ≤ k, is in ascending order and jk = 2k. Define Un = k {0, n1 , n2 , . . . n−1 n }. Let r = (r1 , . . . , rk ) denote a typical sequence in {0, ±1} . Then we can write X  #Π(w) = # (v0 , v1 , . . . , v2k ) : v0 = v2k , vi ∈ Un , and tis − tjs = rs . r

Let S be the set of generating vertices and vS = {vi : i ∈ S}. Any vi , i ∈ /S is a linear combination of the elements in vS and we can write (r)

vi = Li (vS ) + ai , (i 6∈ S),

Reverse circulant

19 (r)

1 for some integer ai . Clearly, nk+1 #Π(w) can be written as a Riemann sum (in k + 1 dimension) and it converges to Z 1 XZ 1 (r) (r) ··· I(0 ≤ Li (vS )+ai ≤ 1, ∀ i ∈ / S∪{2k})I(v0 = L2k (vS )+a2k )dvS . r | 0 {z 0 } k+1

(2.18) The first indicator appears since every non-generating vertex i, when solved in terms of the generating vertices, must lie in Un . The second indicator comes from the fact that the numerical value of the non-generating vertex 2k equals the numerical value of the vertex 0 due to the circuit condition. Now assume that this limit is non-zero. Then at least one of the terms in the above sum must be non-zero. This automatically implies that we must have (otherwise it is the volume of a subspace in dimension k or less and hence is zero) (r) v0 = L2k (vS ) + a2k . (2.19) Now (tis − tjs − rs ) = 0 for all s, 1 ≤ s ≤ k. Hence trivially, for any choice of {αs }, k X v2k = v2k + αs (tis − tjs − rs ). s=1

Let us choose integers {αs } as follows: let αk = 1. Having fixed αk , αk−1 , . . . , αs+1 , we choose αs as follows: (a) if js + 1 ∈ {im , jm } for some m > s, then set αs = ±αm according as js + 1 equals im or jm , (b) if there is no such m, choose αs to be any integer. Pk By this choice of {αs }, we ensure that in v2k + s=1 αs (tis − tjs − rs ), the coefficients of each vi , i ∈ / S cancel out. Hence we get v2k = v2k +

k X

αs (tis − tjs − rs ) = L(vS ) + a (some linear combination).

s=1 (r)

However, from (2.19), v0 = LH 2k (vS ) + a2k . Hence, because only generating vertices are left in both the linear combinations, v2k +

k X

αs (tis − tjs − rs ) − v0 = 0,

(2.20)

s=1

and thus the coefficient of each vi in the left side has to be zero including the constant term. Now consider the coefficients of {ti } in (2.20). First, since αk = 1, the coefficient of t2k is −1. On the other hand, the coefficient of v2k−1 is 0. Hence the coefficient of t2k−1 has to be +1.

20

Symmetric and reverse circulant

Proceeding to the next step, we know that the coefficient of v2k−2 is 0. However, we have just observed that the coefficient of t2k−1 is +1. Hence the coefficient of t2k−2 must be −1. If we continue in this manner, in the expression (2.20) for all odd i, ti must have coefficient +1 and for all even i, ti must have coefficient −1. Now suppose that for some s, is and js both are odd or both are even. Then for any choice of αs , tis and tjs will have opposite signs in the expression (2.20). This contradicts the fact stated in the previous paragraph. Hence, either is is odd and js is even, or the other way around. Since this happens for all s, 1 ≤ s ≤ k, w must be a symmetric word, proving (i). Proof of (ii): Let w ∈ S2k . First fix the generating vertices. Then we determine the non-generating vertices from left to right. Consider L(π(i − 1), π(i)) = L(π(j − 1), π(j)), where i < j and π(i − 1), π(i) and π(j − 1) have been determined. We rewrite it as π(j) = Z + dn for some integer d where Z = π(i − 1) + π(i) − π(j − 1). Clearly π(j) can be determined uniquely from the above equation since 1 ≤ π(j) ≤ n. Continuing, we obtain the whole circuit uniquely. Hence the first part of (ii) is proven. As a consequence, for w ∈ S2k , only one term in the sum (2.18) will be non-zero and that term equals 1. Since there are exactly k! symmetric words, (ii) is proven completely. This completes the proof of the theorem.

2.4

Symmetric circulant

A histogram of the simulated eigenvalues of √1n SCn given in Figure 2.2 suggests that its LSD is normal. We now state this as a theorem and pro1200

1000

800

600

400

200

0 -4

-3

-2

-1

0

1

2

3

4

5

FIGURE 2.2 Histogram of the eigenvalues of

√1 SCn n

for n = 1000, 30 replications. Input is i.i.d. N (0, 1).

Symmetric circulant

21

vide a moment method proof given originally by Bose and Sen (2008). The details are similar to, but much simpler than, the proof of Theorem 2.3.2. Theorem 2.4.1 (Bose and Sen (2008)). If {xi } satisfies Assumption 2.2.1, then the almost sure LSD of √1n SCn is standard normal. Proof. Using Lemmata 2.2.5 and 2.2.7, it suffices to show that the even moments converge to the moments of a normal random variable. The (2k)-th moment of the standard normal is given by (2k)! . Since 2k k! #W2k (2) =

(2k)! , 2k k!

it is thus enough to show that lim

n→∞

1 #Π(w) = 1, ∀ w ∈ W2k (2). nk+1

Let the slopes be defined by s(l) = π(l) − π(l − 1). Clearly, vertices i, j, i < j, match if and only if |n/2 − |si || = ⇔ s(i) − s(j)

=

|n/2 − |sj || 0, ±n Or

s(i) + s(j) = 0, ±n.

That is six possibilities in all. We first show that the first three possibilities do not contribute asymptotically. Lemma 2.4.2. Fix a pair-matched word w with |w| = k. Let N be the number of pair-matched circuits of w which have at least one pair i < j, s(i) − s(j) = 0, ±n. Then, as n → ∞, N = O(nk ) and hence n−(k+1) N → 0. Proof. Let (i1 , j1 ), (i2 , j2 ), . . . , (ik , jk ) denote the pair-partition corresponding to the word w, so that w[il ] = w[jl ], 1 ≤ l ≤ k and i1 < i2 < · · · < ik . Suppose, without loss of generality, s(ik ) − s(jk ) = 0, ±n.

(2.21)

Clearly a circuit π becomes completely specified if we know π(0) and all the {s(i)}. As already observed, if we fix some value for s(il ), there are at most six options for s(jl ). We may choose the values of π(0), s(i1 ), s(i2 ), . . . , s(ik−1 ) in O(nk ) ways and then we may choose values of s(j1 ), s(j2 ), . . . , s(jk−1 ) in O(6k ) ways. For any such choice, from the sum restriction 2k X i=1

s(i) = π(2k) − π(0) = 0,

22

Symmetric and reverse circulant

we know s(ik ) + s(jk ). On the other hand, equation (2.21) holds. As a consequence, the pair (s(ik ), s(jk )) has at most 3 possibilities. This implies that there are at most O(nk ) circuits with the given restrictions and the proof of the lemma is complete. Now we continue with the proof of Theorem 2.4.1. Due to the above lemma, it now remains to show that with Π0 (w) = {π : π is a circuit, w[i] = w[j] ⇒ si + sj = 0, ±n}, lim

n→∞

1 #Π0 (w) = 1. nk+1

Suppose for some i < j, si + sj = 0, ±n. Reading left to right, if we know the circuit up to position (j − 1) then π(j) has to take one of the values A − n, A, A + n, where A = π(j − 1) − π(i) + π(i − 1). Noting that −(n − 2) ≤ A ≤ (2n − 1), exactly one of the three values will fall within 1 and n and hence be a valid choice for π(j). Thus we first choose the generating vertices arbitrarily, then the non-generating vertices are determined, from left to right uniquely, so that s(i) + s(j) = 0, ±n. This automatically yields π(0) = π(2k) as follows: π(2k) − π(0) =

2k X

si = dn for some d ∈ Z.

i=1

But since |π(2k) − π(0)| ≤ n − 1, we must have d = 0. Thus #Π0 (w) = nk+1 , 1 and hence limn→∞ nk+1 #Π0 (w) = 1, proving the theorem.

2.5

Related matrices

For the symmetric Toeplitz matrix, L(i, j) = |i − j| and for the symmetric Hankel matrix, L(i, j) = i + j. The symmetric circulant SCn may be considered as a doubly symmetric Toeplitz matrix. In a similar manner the doubly

Reduced moment symmetric Hankel matrix DHn is  x0 x1  x1 x2   x2 x3  DHn =     x2 x1 x1 x0

23 defined as x2 x3 x4

... ... ... .. .

x3 x2 x1

x2 x1 x0

x0 x1

... ...

x5 x4

x4 x3

x1 x0 x1



    .   x3  x2

Its link function is L(i, j) = n/2 − |n/2 − ((i + j − 2) mod n)|, 1 ≤ i, j ≤ n. Massey et al. (2007) defined a (symmetric) matrix to be palindromic if its first row is a palindrome. The palindromic Toeplitz P Tn is defined below. P Hn is defined similarly.   x0 x1 x2 . . . x 2 x1 x0  x1 x0 x1 . . . x 3 x2 x1     x2 x1 x0 . . . x 4 x3 x2    P Tn =  . ..   .    x1 x2 x3 . . . x 1 x0 x1  x0 x1 x2 . . . x 2 x1 x0 We outline in the exercises how the LSD of all these matrices can be derived from the symmetric circulant LSD.

2.6

Reduced moment

The “all moments finite” stipulation in Assumption 2.2.1 is clearly restrictive. We now show how this condition can be significantly relaxed. Again, the fact that we are dealing with real symmetric matrices helps us. Our goal in this section is to show that all LSD results in this chapter remain valid when the input sequence satisfies the following assumption. Assumption 2.6.1. {xi } are i.i.d. with mean zero and variance 1.

2.6.1

A metric

It is well-known that weak convergence of probability measures is metrizable. We work with a specific metric which is defined on the space of all probability distributions with finite second moment. The W2 -metric is defined on this

24

Symmetric and reverse circulant

space as follows. The distance between any two distribution functions F and G with finite second moment is defined as  1 W2 (F, G) = inf E[X − Y ]2 2 . (X∼F,Y ∼G)

Here the infimum is taken over all pairs of random variables (X, Y ) such that their marginal distributions are F and G, respectively. The following lemma links weak convergence and convergence in the above metric. We omit its proof which can be found in Bose (2018). Lemma 2.6.1. W2 is a complete metric and W2 (Fn , F ) → 0 if and only if D Fn → F and β2 (Fn ) → β2 (F ). An estimate of the metric distance W2 between two ESDs in terms of the trace will be crucial to us. Lemma 2.6.2. Suppose A, B are n × n real symmetric matrices with eigenvalues {λ1 (A) ≤ · · · ≤ λn (A)} and {λ1 (B) ≤ · · · ≤ λn (B)}. Then n

W22 (FA , FB ) ≤

1X 1 (λi (A) − λi (B))2 ≤ Tr(A − B)2 . n i=1 n

Proof. The first inequality follows by considering the joint distribution which puts mass 1/n at (λi (A), λi (B)). Then the marginals are the two ESDs of A and B. The second inequality follows from the Hoffmann-Wielandt inequality (see Hoffman and Wielandt (1953)).

2.6.2

Minimal condition

Theorem 2.6.3 (Bose and Sen (2008)). Suppose the input sequence satisfies Assumption 2.6.1. Then the LSDs of √1n RCn and √1n SCn continue to be as before almost surely. Proof. We shall briefly sketch the proof. The reader is invited to complete the details. First consider the SCn . Fix a level of truncation K > 0. Define the variables   −1 xi,K = σK xi I(|xi | ≤ K) − E{xi I(|xi | ≤ K)} , where 2 σK = Var[xi I(|xi | ≤ K)].

Then {xi,K } are mean zero variance 1 i.i.d. random variables. For sufficiently 2 large K, σK is bounded away from zero and hence {xi,K } are also bounded. 2 Moreover σK → 1 as K → ∞. Let SCn,K be the symmetric circulant with the input sequence {xi,K }. By Theorem 2.4.1 the almost sure LSD of √1n SCn,K is standard normal.

Exercises

25

On the other hand, W22 (Fn−1/2 SCn , Fn−1/2 SCn,K )

≤ = →

1 Tr(SCn − SCn,K )2 n2 n−1 1X [xi − xi,K ]2 n i=0 E[x1 − x1,K ]2 almost surely as n → ∞.

Now if we let K → ∞, it is easy to show by an application of Dominated Convergence Theorem (DCT) that the right side converges to zero. The proof is then essentially complete. The same proof works for the reverse circulant.

2.7

Exercises

1. Show that in Definition 2.1.3 of weak convergence of random probability measures, (i) ⇒ (ii). 2. Prove Lemma 2.2.1. 3. Suppose {Xi } are i.i.d. with mean zero and variance 1 and with all moments finite. Using Lemma 2.2.1, show that √1n (X1 + · · · + Xn ) converges weakly to the standard normal distribution. Then use the W2 metric to relax the all moment finite condition. 4. Prove Lemma 2.2.2. 5. Prove Lemma 2.2.3. 6. Prove Lemma 2.2.4. 7. Prove Lemma 2.2.6. 8. Prove Lemma 2.2.7. 9. Prove Lemma 2.3.1. 10. Show that (a) the n × n principal minor of DHn+3 is P Hn . (b) the n × n principal minor of SCn+1 is P Tn . 11. Let Jn be the n × n matrix with entries 1 in the main anti-diagonal and zero elsewhere. Show that (P Hn )Jn = Jn (P Hn ) = P Tn and (P Tn )2k = (P Hn )2k . 12. Show that under Assumption 2.2.1, the almost sure LSD of all the

26

Symmetric and reverse circulant √1 P Tn , √1 P Hn and √1 DHn is the standard norn n n β2k (N ) = (2k)! . Hint: Use the previous two exercises k 2 k!

three matrices

mal N with and the interlacing inequality (see Bhatia (1997)). 13. Prove the first inequality in Lemma 2.6.2.

14. Complete the details in the proof of Theorem 2.6.3. 15. Show that under Assumption 2.6.1, the almost sure LSD of all the three matrices √1n P Tn , √1n P Hn and √1n DHn is standard normal. 16. Simulate the eigenvalues of √1n RCn for n = 1000 when the input sequence is i.i.d. standardized U (0, 1), and construct a histogram plot to verify Figure 2.1. 17. Simulate the eigenvalues of √1n SCn for n = 1000 when the input sequence is i.i.d. standardized U (0, 1), and obtain a histogram plot to verify Figure 2.2.

3 LSD: normal approximation

In Chapter 1, we have derived explicit formulae for the eigenvalues of circulanttype matrices. This makes the method of normal approximation ideally suited for studying their LSD. In this chapter we show how normal approximation can be used to establish the LSD of circulant-type random matrices, symmetric as well as non-symmetric, with independent entries. The case of dependent entries will be treated in the next chapter. In the later chapters, more sophisticated normal approximation results will be used to study extreme eigenvalues and spectral gaps.

3.1

Method of normal approximation

In Chapter 2 we have established the almost sure LSD of the symmetric matrices SCn and RCn when the second moment is finite. We are now going to encounter non-symmetric matrices and we are also going to drop the identically distributed assumption. Hence we strengthen the moment assumption on the entries as follows: Assumption 3.1.1. {xi } are independent, E(xi ) = 0, Var(xi ) = 1, and supi E |xi |2+δ < ∞ for some δ > 0. Instead of aiming for almost sure convergence, we shall be satisfied with the weaker L2 -convergence. Recall that the ESD of {An } converges to the distribution function F in L2 if, at all continuity points (x, y) of F , Z  2 FAn (x, y) − F (x, y) d P(ω) → 0 as n → ∞. (3.1) Ω

Note that the above relation holds if at all continuity points (x, y) of F , E[FAn (x, y)] → F (x, y) and Var[FAn (x, y)] → 0.

(3.2)

We often write Fn for FAn when the sequence of matrices under consideration is clear from the context. We shall use the following result on normal approximation (Berry-Esseen bound). Its proof follows easily from Corollary 18.1, page 181 and Corollary 18.3, page 184 of Bhattacharya and Ranga Rao (1976). 27

28

LSD: normal approximation

Lemma 3.1.1. Let X1 , . . . , Xk be independent random vectors with values in Rd , having zero means and an average positive-definite covariance matrix Vk = Pk k −1 j=1 Cov(Xj ). Let Gk denote the distribution of k −1/2 Tk (X1 + · · · + Xk ), where Tk is the symmetric, positive-definite matrix which satisfies Tk2 = Vk−1 , n ≥ 1. If E kXj k(2+δ) < ∞ for some δ > 0, then there exists C > 0 (depending only on d), such that (a) sup |Gk (B) − Φd (B)| ≤ Ck −δ/2 [λmin (Vk )]−(2+δ) ρ2+δ ; B∈C

(b) for any Borel set A, |Gk (A) − Φd (A)|

≤ Ck −δ/2 [λmin (Vk )]−(2+δ) ρ2+δ + 2 sup Φd ((∂A)η − y), y∈Rd

where Φd is the d-dimensional standard normal distribution function, C is the class of all Borel measurable convex subsets of Rd , ρ2+δ = k −1

k X

E kXj k(2+δ) and η = Cρ2+δ n−δ/2 .

j=1

3.2

Circulant

The first theorem is on the LSD of Cn with independent input. Figure 3.1 provides a scatter plot of the eigenvalues of this matrix for n = 2000. Theorem 3.2.1. If Assumption 3.1.1 is satisfied then the ESD of √1n Cn converges in L2 to the two-dimensional normal distribution given by N(0, D), where D is a 2 × 2 diagonal matrix with diagonal entries 1/2. Remark 3.2.1. Sen (2006) had proven the above result under a third moment assumption. Meckes (2009) established the result for independent complex entries that satisfies E(xj ) = 0, E |xj |2 = 1 and for every  > 0, n−1 1X E(|xj |2 I{|xj |>√n} ) = 0. n→∞ n j=0

lim

(3.3)

This generality is obtained by using Lindeberg’s central limit theorem along with the normal approximation bounds. The proof is left as an exercise. Proof of Theorem 3.2.1. First recall the eigenvalues of Cn from (1.2) of Section 1.1. Then observe that we may ignore the eigenvalue λn and also λn/2

Circulant

29 3

2

1

0

-1

-2

-3 -3

-2

-1

0

1

2

3

4

FIGURE 3.1 Eigenvalues of

√1 Cn n

for n = 2000, 10 replications. Input is i.i.d. N (0, 1).

whenever n is even, since they contribute at most 2/n to the ESD Fn (x, y). So for x, y ∈ R, 1 n

E[Fn (x, y)] ∼

n−1 X

P(bk ≤ x, ck ≤ y),

k=1,k6=n/2

where n−1 n−1 1 X 1 X 2πk bk = √ xj cos(ωk j), ck = √ xj sin(ωk j), and ωk = . n n j=0 n j=0

Recall from (3.2) that it is enough to show that for all x, y ∈ R, E[Fn (x, y)] → Φ0,D (x, y) and Var[Fn (x, y)] → 0. To show E[Fn (x, y)] → Φ0,D (x, y), define for 1 ≤ l, k ≤ n − 1, k 6= n/2, √ √ 0 Xl,k = 2xl cos(ωk l), 2xl sin(ωk l) . Note that E(Xl,k ) = 0,

n−1 1X Cov(Xl,k ) = I, and n

(3.4)

l=0

sup sup [ n 1≤k≤n

n−1 1X E k Xlk k(2+δ) ] ≤ C < ∞. n l=0

For k 6= n/2, X √ √  1 n−1 (bk ≤ x, ck ≤ y) = √ Xl,k ≤ ( 2x, 2y)0 . n l=0

(3.5)

30

LSD: normal approximation

√ √ Since {(r, s) : (r, s)0 ≤ ( 2x, 2y)0 } is convex in R2 and {Xl,k , 0 ≤ l ≤ n − 1} satisfies (3.5), we can apply Lemma 3.1.1(a) for k 6= n/2 to get, n−1 X √ √ √ √   P √1 Xl,k ) ≤ ( 2x, 2y)0 − P (N1 , N2 )0 ≤ ( 2x, 2y)0 n l=0

≤ Cn−δ/2 [

n−1 1X E kXlk k(2+δ) ] ≤ Cn−δ/2 → 0, as n → ∞, n l=0

where N1 and N2 are i.i.d. standard normal variables. Therefore lim E[Fn (x, y)]

n→∞

=

1 n→∞ n

n−1 X

lim

P bk ≤ x, ck ≤ y



k=1,k6=n/2 n−1 X

=

1 n→∞ n

=

Φ0,D (x, y).

lim

√ √  P (N1 , N2 )0 ≤ ( 2x, 2y)0

k=1,k6=n/2

(3.6)

Now, to show Var[Fn (x, y)] → 0, it is enough to show that 1 n2

n X

Cov(Jk , Jk0 ) =

k6=k0 =1

1 n2

n X

[E(Jk Jk0 ) − E(Jk ) E(Jk0 )] → 0,

(3.7)

k6=k0 =1

where Jk is the indicator that {bk ≤ x, ck ≤ y}. Now as n → ∞, 1 n2

n X

E(Jk ) E(Jk0 )

k6=k0 =1

n n 1 X 2 2 1 X E(Jk ) − 2 E(Jk ) n n k=1 k=1  2 → Φ0,D (x, y)] .

=

So to show (3.7), it is enough to show that as n → ∞, 1 n2

n X

 2 E(Jk , Jk0 ) → Φ0,D (x, y) .

k6=k0 ;k,k0 =1

Along the lines of the proof used to show (3.6), one may now extend the vectors of two coordinates defined above to ones with four coordinates, and proceed exactly as above to verify this. We omit the routine details. This completes the proof of Theorem 3.2.1. Using the ideas in the above proof, it is not hard to prove the following result for SCn and RCn with independent entries. We leave it as an exercise. Theorem 3.2.2. Suppose {xi } satisfies Assumption 3.1.1. Then (a) the ESD of and

√1 SCn n

converges in L2 to the standard normal distribution,

(b) the ESD of √1n RCn converges in L2 to the symmetrized Rayleigh distribution LR defined in (2.17).

k-circulant

3.3

31

k-circulant

From the formula solution of the eigenvalues of the k-circulant matrix given in Theorem 1.4.1, it is clear that for many combinations of k and n, a lot of eigenvalues are zero. For example, if k is prime and n = m × k where gcd(m, k) = 1, then 0 is an eigenvalue with multiplicity (n − m). To avoid this degeneracy and to keep our exposition simple, we primarily restrict our attention to the case when gcd(k, n) = 1. In general, the structure of the eigenvalues depends on the relation between k and n. For any fixed value of k other than 1, the LSD starts to depend on the particular subsequence along which n → ∞. For example, if k = 3, then the behavior of the ESD depends on whether n is or is not a multiple of 3 (see Figure 3.2). See Bose (2018) for more such simulation examples. 1.5

1.5

1

1

0.5

0.5

0

0

-0.5

-0.5

-1

-1.5

-2

-1

-1.5

-1

-0.5

0

0.5

1

1.5

2

-1.5 -2.5

-2

-1.5

-1

-0.5

0

0.5

1

1.5

2

2.5

FIGURE 3.2 Eigenvalues of 20 realizations of √1n Ak,n with input i.i.d. N (0, 1) when (i) (left) k = 3, n = 999 and (ii) (right) k = 3, n = 1000.

The next theorem, due to Bose et al. (2012b), implies that the radial component of the LSD of k-circulants with k ≥ 2 is always degenerate, at least when the input sequence is i.i.d. normal, as long as k = no(1) and gcd(k, n) = 1. Observe that, in this case also n tends to infinity along a sub-sequence, so that the condition gcd(k, n) = 1 continues to hold. Theorem 3.3.1 (Bose et al. (2012b)). Suppose {xi }i≥0 is an i.i.d. sequence of N (0, 1) random variables. Let k ≥ 2 be such that k = no(1) and n → ∞ with gcd(n, k) = 1. Then Fn−1/2 Ak,n converges weakly in probability to the uniform over the circle with center at (0, 0) and radius r = i h √ distribution exp(E log E ), E being an exponential random variable with mean one. Remark 3.3.1. The random variable − log E has the standard Gumbel distri  1 1 bution with mean γ = lim 1 + + · · · + − log n ≈ 0.57721 (the Eulern→∞ 2 n Mascheroni constant). It follows that r = e−γ/2 ≈ 0.74930. It is thus natural to consider the case when k g is of the order n and gcd(k, n) = 1 where g is a fixed integer. In the next two theorems, we consider two special cases of the above scenario, namely when n divides k g ± 1.

32

LSD: normal approximation

We shall need the following lemma to prove Theorem 3.3.1. We shall also use it in Chapters 6 and 8. Lemma 3.3.2. Fix k and n. Suppose that {xl }0≤lC

×I

Y t∈Pj

=

=

1 n − NC

1 n − NC

`−1 X

 s + Θj 1 nj × n−1 ∈ [θ1 , θ2 ], s = 1, . . . , nj j # s: nj 2π

nj

j=0, nj >C

1/nj  1 NC  | √ λt | ∈ (r − , r + ) + O n n

 (θ2 − θ1 ) + O(C −1 ) I 2π

NC  +O n `−1 X (θ2 − θ1 ) nj × ×I 2π

j=0,nj >C

NC  + O(C −1 ) + O n

Y t∈Pj

Y t∈Pj

 λt 1/nj |√ | ∈ (r − , r + ) n

1/nj  1 | √ λt | ∈ (r − , r + ) n

34

LSD: normal approximation

=

(θ2 − θ1 ) 1 + 2π n − NC

`−1 X

nj × I

j=0, nj >C

Y t∈Pj

1/nj  1 | √ λt | 6∈ (r − , r + ) n

NC  + O(C −1 ) + O . n

(3.9)

To show that the second term in the above expression converges to zero in L1 , and hence in probability, it remains to prove, P

Y t∈Pj

1/nj  1 | √ λt | 6∈ (r − , r + ) n

(3.10)

is uniformly small for all j such that nj > C and for all but finitely many n, provided we take C sufficiently large. By Lemma 3.3.2, for each 1 ≤ t < n, | √1n λt |2 is an exponential random variable with mean one, and λt is independent of λt0 if t0 6= n−t and |λt | = |λt0 | otherwise. Let E, E1 , E2 , . . . be i.i.d. exponential random variables with mean one. Observe that depending on whether or not Pj is conjugate to itself, (3.10) equals respectively, nj /2

P

Y

Et

1/nj

t=1

 6∈ (r − , r + ) or P

nj Y √

Et

1/nj

 6∈ (r − , r + ) .

t=1

The theorem now follows by letting first n → ∞ and then C → ∞ in (3.9), and by observing that the Strong Law of Large Numbers (SLLN) implies that C Y √

Et

1/C

√   → r = exp E log E almost surely, as C → ∞.

t=1

Let {Ei } be i.i.d. Exp(1), U1 be uniformly distributed over (2g)-th roots of unity, U2 be uniformly distributed over the unit circle where U1 , U2 are independent of {Ei }. Then the following result can be established. Theorem 3.3.3 (Bose et al. (2012b)). Suppose {xl }l≥0 , satisfies Assumption 3.1.1. Fix g ≥ 1 and let p1 be the smallest prime divisor of g. (a) Suppose k g = −1 + sn where s = 1 if g = 1, and sQ = o(np1 −1 ) if g > 1. g Then Fn−1/2 Ak,n converges weakly in probability to U1 ( j=1 Ej )1/2g . (b) Suppose k g = 1 + sn where s = 0 if g = 1, and s Q = o(np1 −1 ) if g > 1. g Then Fn−1/2 Ak,n converges weakly in probability to U2 ( j=1 Ej )1/2g . We skip the proof of the above theorem. However it will follow as a special case from the results with dependent inputs in the next chapter. Figures 3.3 and 3.4 provide simulations for g = 2 and g = 5 respectively.

Exercises

35

2.5

2.5

2

2

1.5

1.5

1

1

0.5

0.5

0

0

-0.5

-0.5

-1

-1

-1.5

-1.5

-2 -2.5 -2.5

-2

-2

-1.5

-1

-0.5

0

0.5

1

1.5

2

-2.5 -2.5

2.5

-2

-1.5

-1

-0.5

0

0.5

1

1.5

2

2.5

FIGURE 3.3 Eigenvalues of 20 realizations of k = 10 and (ii) (right) n =

k2

√1 Ak,n n

with input i.i.d. N (0, 1) where (i) (left) n = k2 +1,

− 1, k = 10.

1.5

2

1.5

1

1 0.5

0.5

0

0

-0.5

-0.5

-1 -1

-1.5 -2.5

-1.5

-2

-1.5

-1

-0.5

0

0.5

1

1.5

2

2.5

-2 -2

-1.5

-1

-0.5

0

0.5

1

1.5

2

2.5

FIGURE 3.4 Eigenvalues of 20 realizations of k = 4 and (ii) (right) n =

3.4

k5

√1 Ak,n n

with input i.i.d. N (0, 1) where (i) (left) n = k5 +1,

− 1, k = 4.

Exercises

1. Prove Remark 3.2.1. 2. Prove Theorem 3.2.2 using normal approximation. 3. Show that if {xi } satisfies Assumption 3.1.1, then the ESD of √1 P Tn converges weakly in L2 to the standard normal distribun tion. 4. Prove Theorem 3.2.1 when the input sequence is complex and satisfies (3.3).   Pn−1 5. Show that `=0 cos 2πj` sin 2πk` = 0 for all 0 ≤ j, k < n and n n use it to complete the proof of Lemma 3.3.2(a). 6. Simulate and generate the scatter plot of the eigenvalues of the k-circulant matrix for the cases: (a) (b) (c) (d)

n = 1000, k = 2, n = 1001, k = 2, n = k 3 + 1, k = 10, n = k 3 − 1, k = 10.

4 LSD: dependent input

The class of stationary linear processes is an important class of dependent sequences. In this chapter the LSD results for i.i.d. inputs are extended to the situation where the input sequence is a stationary linear process. Under very modest conditions on the process, the LSDs for the circulant-type matrices exist and are interesting functions of the spectral density of the process.

4.1

Spectral density

The input sequence will be a linear process that satisfies the following assumptions. Assumption 4.1.1. {xn , n ≥ 0} is a two-sided linear process xn =

∞ X

ai εn−i , where an ∈ R and

i=−∞

X

|an | < ∞.

(4.1)

n∈Z

Assumption 4.1.2. {εi , i ∈ Z} are i.i.d. random variables with mean zero, variance one, and E |εi |2+δ < ∞ for some δ > 0. Recall the autocovariance function γ(h) = Cov(xt , xt+h ) which is finite under the above assumptions. Definition 4.1.1. (i) Whenever the infinite sum below makes sense, the spectral density function of {xn } is defined as 1 X f (s) = γk exp(iks) 2π k∈Z X  1  = γ0 + 2 γk cos(ks) , for s ∈ [0, 2π]. 2π k≥1

(ii) The numbers ωk = 2πk/n, k = 0, 1, . . . , n − 1, are called the Fourier frequencies. The periodogram of {xi } is defined as In (ωk ) =

n−1 2 1 X xt e−itωk , k = 0, 1, . . . , n − 1. n t=0

(4.2)

37

38

LSD: dependent input

P∞ Under Assumptions 4.1.1 and 4.1.2, h=0 |γ(h)| < ∞ and f is continuous. If f has some additional properties, the LSD of the circulant-type matrices exist. These LSDs are appropriate mixtures involving the spectral density f , the normal, the symmetrized Rayleigh, and other distributions. These results also reduce to the ones given in Chapter 3 when we specialize to i.i.d. input. Throughout the chapter c and C denote generic constants.

4.2

Circulant

The following functions and matrices will be crucial to us. Let ψ(eis ) =

∞ X

aj eijs , ψ1 (eis ) = R[ψ(eis )], ψ2 (eis ) = I[ψ(eis )],

(4.3)

j=−∞

where aj ’s are as in (4.1). It is easy to see that |ψ(eis )|2 = [ψ1 (eis )]2 + [ψ2 (eis )]2 = 2πf (s). Let

 B(s) =

ψ1 (eis ) −ψ2 (eis ) ψ2 (eis ) ψ1 (eis )

 .

(4.4)

Let N1 and N2 be i.i.d. standard normal variables. Define for (x, y) ∈ R2 and s ∈ [0, 2π],

HC (s, x, y) =

 √   P B(s)(N1 , N2 )0 ≤ 2(x, y)0 

I(x ≥ 0, y ≥ 0)

if f (s) 6= 0, if f (s) = 0.

Let C0 = {t ∈ [0, 1] : f (2πt) = 0} and Leb(C0 ) = Lebesgue measure of C0 . Lemma 4.2.1. (a) For fixed x, y, HC is a bounded continuous function in s. (b) FC defined below is a proper distribution function. Z 1 FC (x, y) = HC (2πs, x, y)ds.

(4.5)

0

(c) If Leb(C0 ) = 0 then FC is continuous everywhere and equals ZZ Z v 2 +v 2  1  1 2 1 − 2πf (2πs) ds dv dv . FC (x, y) = I{(v1 ,v2 )≤(x,y)} e 1 2 2 0 2π f (2πs)

(4.6)

Further, FC is bivariate normal if and only if f is constant a.e. (Lebesgue). (d) If Leb(C0 ) 6= 0 then FC is discontinuous only on D1 = {(x, y) : xy = 0}.

Circulant

39

Proof. The proof is easy. We omit the details and just show how the normality claim in (c) follows. If f is a constant function then it is easy to see that FC is bivariate normal. Conversely, suppose FC is bivariate normal. Let (X, Y ) be a random vector with distribution function FC . It is easy to see that Z 1 Z 1 E(X) = 0, E(X 2 ) = π f (2πs)ds, and E(X 4 ) = 3π 2 f 2 (2πs)ds. 0

0

On the other hand, since (X, Y ) is bivariate normal, X is a normal random variable and hence Z 1 2 Z 1 E(X 4 ) = 3[E(X 2 )]2 ⇒ 3π 2 f 2 (2πs)ds = 3π 2 f (2πs)ds . (4.7) 0

0

Now by an application of Cauchy-Schwarz inequality, (4.7) holds if and only if f is constant almost everywhere. Theorem 4.2.2 (Bose et al. (2009)). Suppose Assumptions 4.1.1 and 4.1.2 hold. Then the ESD of √1n Cn converges in L2 to FC (·) given in (4.5)–(4.6). Remark 4.2.1. If {xi } are i.i.d. with E |xi |2+δ < ∞, then f (s) ≡ (2π)−1 , and FC is the bivariate normal distribution whose covariance matrix is diagonal with entries 1/2 each. This agrees with Theorem 3.2.1. Figure 4.1 provides a scatter plot of the eigenvalues of this matrix for n = 1000 when the input sequence is an MA(3) process. Before going into the proof of Theorem 4.2.2, we observe a general fact which will be used in the proofs. Lemma 4.2.3. Suppose {λn,k }1≤k≤n is a triangular sequence of Rd -valued random variables such that λn,k = ηn,k + yn,k for 1 ≤ k ≤ n. Suppose F is a continuous distribution function such that Pn (i) limn→∞ n1 k=1 P(ηn,k ≤ x ˜) = F (˜ x) for x ˜ ∈ Rd , Pn (ii) limn→∞ n12 k,l=1 P(ηn,k ≤ x ˜, ηn,l ≤ y˜) = F (˜ x)F (˜ y ) for x ˜, y˜ ∈ Rd , and (iii) for any ε > 0, max1≤k≤n P(|yn,k | > ε) → 0 as n → ∞. Then

(b)

Pn

1 n limn→∞ n12

(a) limn→∞

k=1

P(λn,k ≤ x ˜) = F (˜ x),

Pn

k,l=1

P(λn,k ≤ x ˜, λn,l ≤ y˜) = F (˜ x)F (˜ y ).

Proof. Observe that n

1X P(λn,k ≤ x ˜) n k=1

=

n

n

k=1

k=1

1X 1X P(λn,k ≤ x ˜, |yn,k | > ε) + P(λn,k ≤ x ˜, |yn,k | ≤ ε). n n

(4.8)

40

LSD: dependent input

5 4 3 2 1 0 −1 −2 −3 −4 −5 −4

−3

−2

−1

0

1

2

3

4

5

FIGURE 4.1 Eigenvalues of √1n Cn for n = 1000, 10 replications. Input sequence is the MA(3) process xt = εt + 0.5εt−1 + 0.3εt−2 + 0.2εt−3 where {εt } is i.i.d. N (0, 1).

Now, by condition (iii), n

n

1X 1X P(λn,k ≤ x ˜, |yn,k | > ε) ≤ P(|yn,k | > ε) n n k=1

k=1

≤ max P(|yn,k | > ε) 1≤k≤n

→ 0, as n → ∞.

(4.9)

Also observe that P(λn,k ≤ x ˜, |yn,k | ≤ ε) ≤ P(ηn,k ≤ x ˜ + ε˜) and P(λn,k ≤ x ˜, |yn,k | ≤ ε) ≥ P(ηn,k ≤ x ˜ − ε˜) − P(|yn,k | > ε), where ε˜ = (ε, . . . , ε) ∈ Rd . Now using conditions (i) and (iii), from the last two inequalities we get, F (˜ x − ε˜) ≤ lim P(λn,k ≤ x ˜, |yn,k | ≤ ε) ≤ F (˜ x + ε˜). n→∞

(4.10)

Since ε > 0 can be chosen arbitrarily small, from (4.8), (4.9) and (4.10), we conclude that n 1X lim P(λn,k ≤ x ˜) = F (˜ x). n→∞ n k=1

Circulant

41

This completes the proof of (a). Proof of (b) is similar, so we skip it. We now proceed to prove Theorem 4.2.2. The normal approximation Lemma 3.1.1 and the result from Fan and Yao (2003) quoted below allow us to approximate the eigenvalues by appropriate partial sums of independent random variables. Theorem 4.2.4 (Fan and Yao (2003)). Suppose that {xi } is a sample from a stationary process defined in (4.1), where {εt } are i.i.d. random variables with mean zero and variance 1. For k = 1, 2, . . . , b n−1 2 c, define n−1 n−1 1 X 1 X 2πk ξ2k−1 = √ εt cos(ωk t), ξ2k = √ εt sin(ωk t), where ωk = . n n t=0 n t=0

(a) As n → ∞, {ξk ; k = 1, 2, . . . , n} is a sequence of asymptotically i.i.d. N (0, 1/2) in the sense that for any integer r ≥ 1, any c1 , c2 , . . . , cr ∈ R, and any kr > · · · > k1 ≥ 1, r X

r

D

cj ξkj −→ N (0,

j=1

1X 2 c ). 2 j=1 j

(b) For k = 1, 2, . . . , n, In (ωk ) =

n−1 2 1 X xt e−itωk = Ln (ωk ) + Rn (ωk ), n t=0

where 2 2 Ln (ωk ) = 2πg(ωk )(ξ2k−1 + ξ2k ),

(4.11)

g(·) is the spectral density of {xt } and lim max E |Rn (ωk )| = 0.

n→∞ 1≤k≤n

The next lemma is similar to Theorem 4.2.4(b). We provide a proof for the sake of completeness. Lemma 4.2.5. Suppose Assumption 4.1.1 holds and {εt } are i.i.d. random variables with mean 0 and variance 1. For k = 1, 2, . . . , n, write n−1 1 X √ xl eiωk l = ψ(eiωk )[ξ2k−1 + iξ2k ] + Yn (ωk ), n l=0

where ξ2k−1 and ξ2k are as in Theorem 4.2.4. Then, as n → ∞, we have max1≤k l, E |Yn (ωk )|

∞ 1 X 2 1/2 √ |aj |(E Unj ) n j=−∞ r ∞ 2 X ≤ |aj |{min(|j|, n)}1/2 n j=−∞ X √  1 X ≤ 2 √ |aj ||j|1/2 + |aj | . n



|j|≤l

|j|>l

The last expression is free of k. First choose l large enough so that the second sum is small. Now as n → ∞, the first sum goes to zero. Hence max E |Yn (ωk )| → 0.

1≤k≤n

Proof of Theorem 4.2.2. As pointed out in (3.2) of Chapter 3, to prove that the ESD Fn converges to an F in L2 , it is enough to show that at all continuity points (x, y) of F , E[Fn (x, y)] → F (x, y) and Var[Fn (x, y)] → 0.

(4.12)

This is what we show here and in every proof later on. First assume that Leb(C0 ) = 0. Recall the eigenvalues of Cn from Section 1.1 of Chapter 1. As

Circulant

43

before we may ignore the eigenvalue λn and also λn/2 (when n is even). So for x, y ∈ R, E[Fn (x, y)] ∼

1 n

n−1 X

P(bk ≤ x, ck ≤ y),

k=1,k6=n/2

where n−1 n−1 1 X 1 X 2πk bk = √ xj cos(ωk j), ck = √ xj sin(ωk j), ωk = . n n j=0 n j=0

Define for k = 1, 2, . . . , n, ηk = (ξ2k−1 , ξ2k )0 , Y1n (ωk ) = R[Yn (ωk )], Y2n (ωk ) = I[Yn (ωk )], where Yn (ωk ) are the same as defined in Lemma 4.2.5. Then (bk , ck )0 = B(ωk )ηk + (Y1n (ωk ), Y2n (ωk ))0 . Now in view of Lemmata 4.2.3 and 4.2.5, to conclude that E[Fn (x, y)] → FC (x, y) it is sufficient to show that 1 n

n−1 X

  P B(ωk )ηk ≤ (x, y)0 → FC (x, y).

(4.13)

k=1,k6=n/2

For this, define for 1 ≤ k ≤ n − 1, (except for k = n/2) and 0 ≤ l ≤ n − 1, √ √ 0 Xl,k = 2εl cos(ωk l), 2εl sin(ωk l) . Note that E(Xl,k ) = 0,

(4.14)

n−1 1X Cov(Xl,k ) = I, and n

(4.15)

l=0

sup sup [ n 1≤k≤n

n−1 1X E k Xl,k k(2+δ) ] ≤ C < ∞. n l=0

For k 6= n/2, n−1 √ √   1 X B(ωk )ηk ≤ (x, y)0 = B(ωk )( √ Xl,k ) ≤ ( 2x, 2y)0 . n l=0

Since

√ √ {(r, s) : B(ωk )(r, s)0 ≤ ( 2x, 2y)0 }

(4.16)

44

LSD: dependent input

is a convex set in R2 , and {Xl,k , l = 0, 1, . . . , (n − 1)} satisfies √ (4.14)–(4.16), √ we can apply Lemma 3.1.1(a) for k 6= n/2 to get, with z = ( 2x, 2y)0 , n−1 X   P B(ωk )( √1 Xl,k ) ≤ z − P B(ωk )(N1 , N2 )0 ≤ z n l=0

≤ Cn−δ/2 [

n−1 1X E kXlk k(2+δ) ] n l=0

−δ/2

≤ Cn

→ 0, as n → ∞.

Since by Lemma 4.2.1(a), HC is bounded continuous for every fixed (x, y), 1 n→∞ n

n−1 X

lim

P B(ωk )ηk ≤ (x, y)0

n−1 X 1 2πk HC ( , x, y) n→∞ n n k=1,k6=n/2 Z 1 = HC (2πs, x, y)ds



=

k=1,k6=n/2

lim

0

= FC (x, y). Hence, by (4.13), Z

1

E[Fn (x, y)] →

HC (2πs, x, y)ds = FC (x, y).

(4.17)

0

To show Var[Fn (x, y)] → 0, it is enough to show that 1 n2

n X

Cov(Jk , Jk0 )

=

k6=k0 =1



1 n2

n X

[E(Jk , Jk0 ) − E(Jk ) E(Jk0 )]

k6=k0 =1

0,

(4.18)

where for 1 ≤ k ≤ n, Jk is the indicator that {bk ≤ x, ck ≤ y}. As n → ∞, n n n h1 X i2 2 1 X 1 X 0 E(J ) E(J ) = E(J ) − E(Jk ) k k k 2 2 n n n k6=k0 =1 k=1 k=1 Z h 1 i2 → HC (2πs, x, y)ds . 0

So, to show (4.18), it is enough to show that as n → ∞, n hZ 1 i2 1 X 0) → E(J , J H (2πs, x, y)ds . k k C n2 0 0 k6=k =1

Similar to the proof of (4.17), instead of vectors with two coordinates, define appropriate vectors with four coordinates and proceed as above to verify this. We omit the details. This completes the proof for the case Leb(C0 ) = 0. If Leb(C0 ) 6= 0, we have to show (4.12) on D1c (of Lemma 4.2.1). All the above steps in the proof remain valid for (x, y) ∈ D1c . Hence, if Leb(C0 ) 6= 0, we have our required LSD. This completes the proof of Theorem 4.2.2.

Reverse circulant

4.3

45

Reverse circulant

Let G be the standard exponential distribution function, G(x) = 1 − e−x , x > 0. Define HR (s, x) on [0, 2π] × R as (  2  x G 2πf if (s) HR (s, x) = 1 if

f (s) 6= 0 f (s) = 0.

Let S0 = {t ∈ [0, 1/2] : f (2πt) = 0}. The following lemma is analogous to Lemma 4.2.1. We omit its proof. Lemma 4.3.1. (a) For fixed x, HR (s, x) is bounded continuous on [0, 2π]. (b) FR defined below is a valid distribution function.  Z 1/2 1   HR (2πt, x)dt  + 2 FR (x) = Z0 1/2   1−  HR (2πt, x)dt 2 0

if x > 0 (4.19) if x ≤ 0.

(c) If Leb(S0 ) = 0 then FR is continuous everywhere and equals  Z 1/2 x2   e− 2πf (2πt) dt  1− FR (x) = Z 1/2 0 x2    e− 2πf (2πt) dt

if x > 0 (4.20) if x ≤ 0.

0

Further, FR equals LR given in (2.17) ⇔ f is constant almost everywhere (Lebesgue). (d) If Leb(S0 ) 6= 0 then FR is discontinuous only at x = 0. Theorem 4.3.2 (Bose et al. (2009)). If Assumptions 4.1.1 and 4.1.2 hold then the ESD of √1n RCn converges in L2 to FR given in (4.19)–(4.20). Remark 4.3.1. If {xi } are i.i.d., with E(|xi |2+δ ) < ∞, then f (s) ≡ 1/2π and the LSD FR (·) agrees with LR given in (2.17) of Section 2.3, Chapter 2. Figure 4.2 shows the histogram of the simulated eigenvalues of when the input sequence is an MA(1) process.

√1 RCn n

46

LSD: dependent input 1000 900 800 700 600 500 400 300 200 100 0 -5

-4

-3

-2

-1

0

1

2

3

4

5

FIGURE 4.2 Histogram of the eigenvalues of √1n RCn for n = 1000, 30 replications. Input sequence is the MA(1) process xt = εt + 0.5εt−1 where {εt } is i.i.d. N (0, 1).

Proof of Theorem 4.3.2. As before, we give the proof only when Leb(S0 ) = 0. The structure of the eigenvalues {λk } of RCn (see Section 1.3 of Chapter 1) implies that the LSD is going to be symmetric about 0. As before we may also ignore the two eigenvalues λ0 and λn/2 . Hence for x > 0, with ωk = 2πk/n, n−1

E[Fn (x)]



b 2 c 1 X 1 1/2 + P( λ2k ≤ x2 ) n n k=1

b n−1 2 c

=

1/2 +

1 X P(In (ωk ) ≤ x2 ), n k=1

where In (ωk ) is as in (4.2). Along the same lines as in the proof of Theorem 4.2.2, using Lemma 4.2.3 and Theorem 4.2.4(b), it is sufficient to show that n−1

b 2 c Z 1/2  1 X 2 P Ln (ωk ) ≤ x → HR (2πt, x)dt, n 0 k=1

where 2 2 Ln (ωk ) = 2πf (ωk )(ξ2k−1 + ξ2k ), as in (4.11).

For 1 ≤ k ≤ b n−1 2 c and 0 ≤ l ≤ n − 1, let √ √ 0 Xl,k = 2εl cos(lωk ), 2εl sin(lωk ) ,  Akn = (r1 , r2 ) : πf (ωk )(r12 + r22 ) ≤ x2 . Note that {Xl,k } satisfies (4.14)–(4.16) and 

X  1 n−1 Ln (ωk ) ≤ x2 = √ Xl,k ∈ Akn . n l=0

Symmetric circulant

47

Since Akn is a convex set in R2 , we can apply Lemma 3.1.1(a) to get n−1

b 2 c 1 X | P(Ln (ωk ) ≤ x2 ) − Φ0,I (Akn )| ≤ Cn−δ/2 → 0. n k=1

But n−1

b 2 c 1 X Φ0,I (Akn ) n k=1

n−1

=

b 2 c Z 1/2 1 X 2πk HR ( , x) → HR (2πt, x)dt. n n 0 k=1

Hence for x ≥ 0, 1 E[Fn (x)] → + 2

Z

1/2

HR (2πt, x)dt = FR (x). 0

The rest of the argument is same as in the proof of Theorem 4.2.2. We leave it as an exercise.

4.4

Symmetric circulant

For x ∈ R and s ∈ [0, π], define  p 2πf (s)N (0, 1) ≤ x) if f (s) 6= 0,  P HS (s, x) =  I(x ≥ 0) if f (s) = 0.

(4.21)

Let S0 = {t ∈ [0, 1/2] : f (2πt) = 0}. The following lemma is analogous to Lemma 4.3.1. We omit its proof. Lemma 4.4.1. (a) For fixed x, HS is a bounded continuous function in s and HS (s, x) + HS (s, −x) = 1. (b) FS given below is a proper distribution function and FS (x) + FS (−x) = 1. Z 1/2 FS (x) = 2 HS (2πs, x)ds. (4.22) 0

(c) If Leb(S0 ) = 0 then FS is continuous everywhere and equals Z x h Z 1/2 i t2 1 p FS (x) = e− 4πf (2πs) ds dt. π f (2πs) −∞ 0

(4.23)

Further, FS is normal if and only if f is constant almost everywhere (Lebesgue). (d) If Leb(S0 ) 6= 0 then FS is discontinuous only at x = 0.

48

LSD: dependent input

Theorem 4.4.2 (Bose et al. (2009)). Suppose Assumptions 4.1.1 and 4.1.2 hold and bnp/2c 1 X  2πk −3/2 f( ) → 0 for all 0 < p < 1. n→∞ n2 n

lim

(4.24)

k=1

Then the ESD of

√1 SCn n

converges in L2 to FS given in (4.22)–(4.23).

Figure 4.3 shows the histogram of the simulated eigenvalues of when the input sequence is an MA(1) process.

√1 SCn n

1500

1000

500

0 -6

-4

-2

0

2

4

6

FIGURE 4.3 Histogram of the eigenvalues of √1n SCn for n = 1000, 30 replications. Input sequence is the MA(1) process xt = εt + 0.5εt−1 where {εt } is i.i.d. N (0, 1).

Remark 4.4.1. If inf s∈[0,2π] f (s) > 0, then (4.24) holds. It is not clear 1 whether the LSD result will be true if (4.24) does not hold. If f ≡ 2π then FS is the standard normal distribution function. This agrees with the conclusion of Theorem 3.2.2. We give the proof only for odd n = 2m + 1. The proof in the even case is similar. First recall the eigenvalues of SCn from Section 1.2 of Chapter 1. The approximation Lemma 4.2.5 now takes the following form. Its proof is left as an exercise. Lemma 4.4.3. Suppose Assumption 4.1.1 holds and {εt } are i.i.d. random variables with mean 0 and variance 1. For n = 2m + 1 and 1 ≤ k ≤ m, write m X t=1

xt cos

m m X X 2πkt 2πkt 2πkt √ = ψ1 (eiωk ) εt cos − ψ2 (eiωk ) sin + nYn,k , n n n t=1 t=1

where ψ1 (eiωk ), ψ2 (eiωk ) are as in (4.3). Then max1≤k≤m E(|Yn,k |) → 0.

Symmetric circulant

49

Proof of Theorem 4.4.2. All eigenvalues {λk , 0 ≤ k ≤ n − 1} are real in this case. As before, we provide the detailed proof only when Leb(S0 ) = 0. As x0 before the eigenvalue λ0 can be ignored. Further, the term √ can be ignored n from the expression of the eigenvalues {λk }. So for x ∈ R, E[Fn (x)] ∼

m m m  2X 1 2X 1 X 2πkt P( √ λk ≤ x) ∼ P √ 2xt cos ≤x . n n n n n t=1 k=1

k=1

To show that E[Fn (x)] → FS (x), following the argument given in the circulant case, and using Lemmata 4.2.3 and 4.4.3, it is sufficient to show that, m

m

m

i 2X h 2X 2πkt 2 X 2πkt P ψ1 (eiωk ) εt cos − ψ2 (eiωk ) √ sin ≤x n n t=1 n n n t=1 k=1

m

=

m

o 2 X n −1/2 X P m Xl,k ∈ Ck → FS (x), n t=1 l=1

where Xl,k

=

Ck

=

2πkl 2πkl  2 2σn−1 εl cos , 2δn−1 εl sin , σn = 2 − 1/m, δn2 = 2 + 1/m, n n p  (u, v) : σn ψ1 (eiωk )u + δn ψ2 (eiωk )v ≤ n/mx .

Let Vk =

1 kπ − √4m12 −1 tan 2m+1

kπ − √4m12 −1 tan 2m+1 1

! .

Note that m

1 X Cov(Xl,k ) = Vk , m

m

1 X E kXl,k k2+δ ≤ C < ∞. 1≤k≤m m l=1 l=1 (4.25) Let αk be the minimum eigenvalue of Vk . Then αk ≥ αm for 1 ≤ k ≤ m, and E(Xl,k ) = 0,

αm = 1 − √

sup

1 mπ 2m + 1 2 tan ≈1− ≈ 1 − = α, say. 2m + 1 mπ π 4m2 − 1

Since {Xl,k } satisfies (4.25) and Ck is a convex set in R2 , we can apply Lemma 3.1.1(a) for k = 1, 2, . . . , m, to get m h n m m 2 X o i X 2 X −3/2 P m−1/2 Xl,k ∈ Ck − Φ0,Vk (Ck ) ≤ Cm−δ/2 αk n n k=1

l=1

k=1

≤ Cm−δ/2 α−3/2 → 0, where Φ0,Vk is a bivariate normal distribution with mean zero and covariance matrix Vk . Note that for large m, σn2 ≈ 2 and δn2 ≈ 2. Hence  √ Ck0 = (u, v) : ψ1 (eiωk )u + ψ2 (eiωk )v ≤ x

50

LSD: dependent input

serves as a good approximation to Ck and we get m

m

m

k=1

k=1

k=1

2X 2X 2X Φ0,Vk (Ck ) ∼ Φ0,Vk (Ck0 ) = P(µk N (0, 1) ≤ x), n n n where µ2k = ψ1 (eiωk )2 + ψ2 (eiωk )2 + 2ψ1 (eiωk )ψ2 (eiωk ) √

1 kπ tan . 2m + 1 4m2 − 1

Define νk2 = ψ1 (eiωk )2 + ψ2 (eiωk )2 . Now we show that m 2 X   lim P(µk N (0, 1) ≤ x) − P(νk N (0, 1) ≤ x) = 0. n→∞ n

(4.26)

k=1

Let 0 < p < 1. Now as n → ∞, using Assumption (4.24), the above expression, summed only for 1 ≤ k ≤ bmpc, equals bmpc Z 2 X x/µk 1 − t2 √ e 2 dt ≤ n 2π x/ν k k=1



bmpc 2|x| X µ2k − νk2 n µk νk (µk + νk ) k=1

bmpc X 2|x| tan pπ 1 2 → 0. 3 2 m νk α(1 + α) k=1

On the other hand, for every n, m  2 X  P(µk N (0, 1) ≤ x) − P(νk N (0, 1) ≤ x) ≤ 4(1 − p). n bmpc+1

Therefore, by letting first n → ∞ and then p → 1, (4.26) holds. Hence m

 2X P νk N (0, 1) ≤ x n→∞ n lim

m

=

k=1



 2X p P 2πf (2πk/n)N (0, 1) ≤ x n→∞ n k=1 Z 1/2 2 HS (2πs, x)ds. lim

0

The rest of the argument in the proof is the same as in the proof of Theorem 4.2.2.

4.5

k-circulant

First recall the eigenvalues of the k-circulant matrix Ak,n and related notation from Section 1.4 of Chapter 1. For any positive integers k, n, let p1 < p2
1 where p1 is the smallest prime divisor of g. υ Then g1 = 2g for all but finitely many n, and k,n n → 0. (b) Suppose k g = 1 + sn, g ≥ 1 fixed, n → ∞ with s = 0 if g = 1 and s = o(np1 −1 ) where p1 is the smallest prime divisor of g. Then g1 = g for all υ but finitely many n, and k,n n → 0. Proof. (a) First note that gcd(n, k) = 1 and therefore n0 = n. When g = 1, it is easy to check that g1 =2 and by Lemma 1.4.4(a), υk,n ≤ 2. Now assume g > 1. Since k 2g = (sn − 1)2 = 1 (mod n), g1 divides 2g. Observe that g1 6= g = 2g/2 because k g = −1 (mod n). If g1 = 2g/b, where b divides g and b ≥ 3, then by Lemma 4.5.1,   gcd(k g1 − 1, n) = gcd k 2g/b − 1, (k g + 1)/s ≤ gcd k 2g/b − 1, k g + 1 ≤ k gcd(2g/b,

g)

+ 1.

Note that since gcd(2g/b, g) divides g and gcd(2g/b, g) ≤ 2g/b < g, we have gcd(2g/b, g) ≤ g/p1 . Consequently, gcd(k 2g/b − 1, n) ≤ k g/p1 + 1 ≤ (sn − 1)1/p1 + 1 = o(n),

(4.28)

which is a contradiction to the fact that k g1 = 1 (mod n). This implies that gcd(k g1 − 1, n) = n. Hence g1 = 2g. Now by Lemma 1.4.4(b) it is enough to show that for any fixed b ≥ 3 so that b divides g1 , gcd(k g1 /b − 1, n)/n = o(1) as n → ∞. However, we have already proved this in (4.28). (b) Again gcd(n, k) = 1 and n0 = n. The case when g = 1 is trivial as then we have gx = 1 for all x ∈ Zn and υk,n = 0. Now assume g > 1. Since k g = 1 (mod n), g1 divides g. If g1 < g, then g1 ≤ g/p1 which implies that k g1 ≤ k g/p1 = (sn + 1)1/p1 = o(n), which is a contradiction. Thus g = g1 . Now Lemma 1.4.4(c) immediately yields υk,n 2k g1 /p1 2(1 + sn)1/p1 < ≤ = o(1). n n n Now we state LSD results for two types of k-circulant matrices with dependent inputs. Type I. n = k g + 1 for some fixed g ≥ 2. We observe a simple but crucial property of the eigenvalue partitioning {Pj } of Zn (see (1.7) of Chapter 1). For every integer t ≥ 0, tk g = (−1+n)t = −t ( mod n). Hence λt and λn−t belong to the same partition block S(t) = S(n − t). Thus each S(t) contains an even number of elements, except for t = 0, n2 . Hence the eigenvalue partitioning sets Pj are self-conjugate. So, we can find sets Aj ⊂ Pj such that Pj = {x : x ∈ Aj or n − x ∈ Aj } and #Aj =

1 #Pj . 2

(4.29)

k-circulant

53

However, it follows from Lemma 4.5.2 that for n = k g + 1, g1 = 2g and υk,n /n → 0. Indeed, for g = 2, it is easy to check that S(1) = {1, k, n−1, n−k}, and hence g1 = 4. Thus  {0, n/2} if n is even {x ∈ Zn : gx < g1 } = (4.30) {0} if n is odd. As a consequence, υk,n /n ≤ 2/n → 0. For any d ≥ 1 and {Ei } i.i.d. Exp(1), let Gd (x) = P

d Y

 Ei ≤ x .

i=1

Note that Gd is continuous. For any integer d ≥ 1, define Hd (s1 , . . . , sd , x) for (s1 , s2 , . . . , sd ) ∈ [0, 2π]d and x ≥ 0, as    Qd x2d  Gd Q if d i=1 f (si ) 6= 0 d (2π) f (si ) Hd (s1 , . . . , sd , x) = i=1 Q d  1 if f (s ) = 0. i=1

i

The proof of the following lemma is omitted. Lemma 4.5.3. (a) For fixed x, Hd (s1 , . . . , sd , x) is bounded continuous on [0, 2π]d . (b) Fd defined below is a valid continuous distribution function. Z Fd (x)

1

Z ···

= 0

1

Hd (2πt1 , . . . , 2πtd , x) 0

d Y

dti for x ≥ 0. (4.31)

i=1

Theorem 4.5.4 (Bose et al. (2009)). Suppose Assumptions 4.1.1 and 4.1.2 hold. Suppose n = k g + 1 for some fixed g ≥ 2. Then as n → ∞, Fn−1/2 Ak,n Qg converges in L2 to the distribution of U1 ( i=1 Ei )1/2g where {Ei } are i.i.d. with distribution function Fg given in (4.31) and U1 is uniformly distributed over the (2g)-th roots of unity, independent of the {Ei }. Remark 4.5.1. QgIf {xi } are i.i.d., then f (s) = 1/2π for all s ∈ [0, 2π] and the LSD is U1 ( i=1 Ei )1/2g where {Ei } are i.i.d. Exp(1), U1 is as in Theorem 4.5.4 and independent of {Ei }. This limit agrees with Theorem 3.3.3(a). Proof of Theorem 4.5.4. This proof also uses the normal approximation, and the eigenvalue description given in Section 1.4 of Chapter 1. For simplicity we first prove the result when g = 2. Note that gcd(k, n) = 1, and hence in this case n0 = n in (1.9) of Chapter 1. Recall that υk,n is the total number of eigenvalues γj of Ak,n such that j ∈ Pl and |Pl | < g1 . In view of Lemma 4.5.2(a), we have υk,n /n → 0, and hence these eigenvalues do not contribute to the LSD. Hence it remains to consider only the eigenvalues corresponding to the sets Pl which have size exactly equal to g1 .

54

LSD: dependent input

Note that S(1)Q= {1, k, n−1, n−k}, and hence g1 = 4. Recall the quantities nj = #Pj , yj = t∈Pj λt , where λj , 0 ≤ j < n are as in (1.2) of Chapter 1. Also, for every integer t ≥ 0, tk 2 = −t (mod n), so that λt and λn−t belong to the same partition block S(t) = S(n − t). Thus each yt is real. Let us define In = {l : #Pl = 4}. n It is clear that #I → 4. Without any loss, let In = {1, 2, . . . , #In }. n 2 3 Let 1, ω, ω , ω be the fourth roots of unity. Now, for every j, the eigenvaly 1/4 k ues of n−1/2 Ak,n corresponding to the set Pj are: nj2 ω , k = 0, 1, . . . , 3. yj 1/4 Hence it suffices to consider only the modulus n2 as j varies: if these have an LSD F , say, then the LSD of the whole sequence will be (r, θ) in polar coordinates where r is distributed according to F and θ is distributed uniformly across all the fourth roots of unity. Moreover r and θ are independent. With this in mind, we consider for x > 0,

Fn (x) =

 #In  1 X h yj i 14 I ≤ x . #In i=1 n2

Since the set of λ values corresponding to any Pj is closed under conjugation, there exists a set Ai ⊂ Pi of size 2 (see (4.29)) such that Pi = {x : x ∈ Ai or n − x ∈ Ai }. Combining each λj with its conjugate, we may write yj in the form, Y yj = (b2t + c2t ),

(4.32)

t∈Aj

where bt =

n−1 X

xj cos(ωt j) and ct =

j=0

n−1 X

xj sin(ωt j).

j=0

Note that for x > 0, E[Fn (x)] =

#In  1 X yj P 2 ≤ x4 . #In j=1 n

So our first aim is to show #In  1 X yj P 2 ≤ x4 → F2 (x), #In j=1 n

where F2 (x) is as in (4.31) with d = 2. Now using (4.32) and Theorem 4.2.4, y we can write nj2 as yj = Ln,j + Rn,j for 1 ≤ j ≤ #In , n2

k-circulant

55

where Ln,j = 4π 2 fj y j , y j =

Y

2 2 (ξ2t−1 + ξ2t ), fj =

t∈Aj

Y

f (ωt ),

(4.33)

t∈Aj

and {ξt } are as in Theorem 4.2.4. If Aj = {j1 , j2 } then Rn,j = Ln (ωj1 )Rn (ωj2 ) + Ln (ωj2 )Rn (ωj1 ) + Rn (ωj1 )Rn (ωj2 ), where Ln (ωt ) and Rn (ωt ) are as in (4.11). Now using Theorem 4.2.4 it is easy to see that for any  > 0, as n → ∞, we have max1≤j≤#In E(|Rn,j | > ) → 0. So in view of Lemma 4.2.3 it is enough to show that #In  1 X P Ln,j ≤ x4 → F2 (x). #In j=1

(4.34)

We show this in two steps. Step I. Normal approximation: #In  i 1 Xh  P Ln,j ≤ x4 − Φ4 (An,j ) → 0 as n → ∞, #In j=1

(4.35)

where 2 n Y An,j = (x1 , y1 , x2 , y2 ) ∈ R4 : [2−1 (x2i + yi2 )] ≤ i=1

x4 o , 1 ≤ j ≤ #In . 4π 2 fj

#In 1 X Step II. lim Φ4 (An,j ) = F2 (x). n→∞ #In j=1

Proof of Step I. It is important to note that An,j is non-convex. So we have to apply care while using normal approximation. Define   2πtl  2πtl  Xl,j = 21/2 εl cos , εl sin , t ∈ Aj , 0 ≤ l < n, 1 ≤ j ≤ #In . n n Note that {Xl,j } satisfies (4.14)–(4.16), and X   1 n−1 Ln,j ≤ x4 = √ Xl,j ∈ An,j . n l=1

For (4.35), it suffices to show that for every  > 0, there exists N = N () such that for all n ≥ N (),   sup P Ln,j ≤ x4 − Φ4 (An,j ) ≤ . j∈In

56

LSD: dependent input

Fix  > 0 and M1 > 0 so that Φ([−M1 , M1 ]c ) ≤ /16. By Assumption 4.1.2,  1 n−1  1 n−1 X X 2πlt 2 2πlt 2 √ E εl cos =E √ εl sin = 1/2, n n n n l=0

l=0

for any n ≥ 1 and 0 < t < n. Now by Chebyshev’s inequality, we can find M2 > 0 such that for each n ≥ 1 and for each 0 < t < n,  1 n−1  X 2πlt P |√ εl cos | ≥ M2 ≤ /16. n n l=0

The same bound holds for the sine term also. Set M = max{M1 , M2 }. Define the set n o B := (x1 , y1 , x2 , y2 ) ∈ R4 : |xj |, |yj | ≤ M ∀ j . Then for all j ∈ In ,  1 n−1  X Xl,j ∈ An,j − Φ4 (An,j ) P √ n l=0

 1 n−1  X Xl,j ∈ An,j ∩ B − Φ4 (An,j ∩ B) + /2. ≤ P √ n l=0

Since An,j is a non-convex set, we now apply Lemma 3.1.1(b) for An,j ∩ B to obtain  1 n−1  X sup P √ Xl,j ∈ An,j ∩ B − Φ4 (An,j ∩ B) n j∈In l=0   −δ/2 ≤ C1 n ρ2+δ + 2 sup sup Φ4 (∂(An,j ∩ B))η − z , j∈In z∈R4

where ρ2+δ = sup j∈In

n−1 1X E kXl,j k2+δ n

and η = η(n) = C2 ρ2+δ n−δ/2 .

l=0

Note that ρ2+δ is uniformly bounded in n by Assumption 4.1.2. It thus remains to show that for all sufficiently large n,   sup sup Φ4 (∂(An,j ∩ B))η − z ≤ /8. j∈In z∈R4

(4.36)

k-circulant

57

Note that ∂(An,j ∩ B) ⊆ ∂An,j ∩ ∂B ⊆ ∂B, and hence   sup sup Φ2g (∂(An,j ∩ B))η − z j∈In z∈R4 Z = sup sup φ(x1 − z1 ) · · · φ(y2 − z4 )dx1 . . . dy2 j∈In z∈R4 (∂(An,j ∩B))η

Z ≤

φ(x1 − z1 ) · · · φ(y2 − z4 )dx1 . . . dy2

sup z∈R4 (∂B)η

Z ≤

dx1 . . . dy2 . (∂B)η

Finally note that ∂B is a compact 3-dimensional manifold which has zero measure under the 4-dimensional Lebesgue measure. By compactness of ∂B, we have (∂B)η ↓ ∂B as η → 0, and (4.36) follows by dominated convergence theorem. Therefore E[Fn (x)] equals #In #In #In   1 X  1 X 1 X  yj P 2 ≤ x4 ∼ P Ln,j ≤ x4 ∼ Φ4 (An,j ). #In j=1 n #In j=1 #In j=1

Proof of Step II. To identify the limit, recall the structure of the sets S(x), Pj , Aj and their properties. Since #In /n → 1/4, vk,n ≤ 2 and either S(x) = S(u) or S(x) ∩ S(u) = ∅, we have #In 1 X 1 Φ4 (An,j ) = lim n→∞ #In n→∞ n j=1

lim

n X

Φ4 (An,j ).

(4.37)

j=1,|Aj |=2

For n = k 2 + 1, write {1, 2, . . . , n − 1} as {ak + b; 0 ≤ a ≤ k − 1, 1 ≤ b ≤ k}. Using the construction of S(x), (except for at most two values of j), Aj = {ak + b, bk − a} for j = ak + b; 0 ≤ a ≤ k − 1, 1 ≤ b ≤ k. Recall that for fixed x, H2 (s1 , s2 , x) is uniformly continuous on [0, 2π]×[0, 2π]. Therefore, given any positive number ρ, we can choose N large enough such that for all n = k 2 + 1 > N , sup 0≤a≤k−1,

 2π(ak + b) 2π(bk − a)   2πa 2πb  , , x − H2 √ , √ , x < ρ. H2 n n n n 1≤b≤k (4.38)

58

LSD: dependent input

Finally, using (4.37) and (4.38) we have #In 1 X lim Φ4 (An,j ) n→∞ #In j=1

n

=

1X Φ4 (An,j ) n→∞ n j=1

=

1 X  x4  G2 n→∞ n 4π 2 fj j=1

=

b nc b nc  2π(ak + b) 2π(bk − a)  1 X X lim H2 , ,x n→∞ n n n a=0

lim

n

lim





b=1 √ √ b nc b nc

=

 2πa 2πb  1 X X H2 √ , √ , x n→∞ n n n b=1 a=0 Z 1Z 1 H2 (2πs, 2πt, x)ds dt

=

F2 (x).

=

lim

0

0

To show that Var[Fn (x)] → 0, since the variables involved are all bounded, it is enough to show that  y X   yj 0 j n−2 Cov I 2 ≤ x4 , I 2 ≤ x4 → 0. n n 0 j6=j

This can be shown by extending the proof used to show E[Fn (x)] → F2 (x). We have to extend the vectors with 4 coordinates defined above to ones with 8 coordinates and proceed. We omit the details. This completes the proof of the Theorem for g = 2. The above argument can be extended to cover the general (g > 2) case. We highlight only a few of the technicalities and omit the other details. For general g we need the following lemma. Lemma 4.5.5. Suppose Ln (ωj ), Rn (ωj ) are as defined in Theorem 4.2.4. Then given any , η > 0, there exists an N ∈ N such that g s Y Y  P Ln (ωji ) Rn (ωji ) >  < η for all n ≥ N . i=1

i=s+1

k-circulant

59

Proof. Note that by an iterative argument, g g s s Y Y Y Y   P Ln (ωji ) Rn (ωji ) >  ≤ P Ln (ωji ) Rn (ωji ) > 1/M i=1

i=s+1

i=2

i=s+1

 + P Ln (ωj1 ) ≥ M  s X  ≤ P Ln (ωji ) ≥ M | i=2 g Y  +P Rn (ωji ) > 1/M s i=s+1

 + P Ln (ωj1 ) ≥ M  . Further note that g Y  P Rn (ωji ) > 1/M s i=s+1



g Y   P Rn (ωji ) > 1/M s + P Rn (ωjs+1 ) > 1 i=s+2



g−1 X   s P Rn (ωjg ) > 1/M + P Rn (ωji ) > 1 i=s+1



 M s + g − s − 1 max E |Rn (ωk )|. 1≤k≤n

Combining all the above, we get g s Y Y  P Ln (ωji ) Rn (ωji ) >  i=1

i=s+1

s  X  ≤ P Ln (ωj1 ) ≥ M  + P Ln (ωji ) ≥ M i=2

 + M s + g − s − 1 max E |Rn (ωk )| 1≤k≤n

 1 ≤ (s − 1 + 1/)4π max f (s) + M s + g − s − 1 max E |Rn (ωk )|. 1≤k≤n M s∈[0,2π] The first term in the right side is smaller than η/2 if we choose M large. Since max1≤k≤n E |Rn (ωk )| → 0 as n → ∞, we can choose N ∈ N such that the second term is less than η/2 for all n ≥ N , proving the lemma. Now we return to the main proof for general g ≥ 2. As before, n0 = n and υk,n /n → 0. Hence it remains to consider only the eigenvalues corresponding to the sets Pl which have size exactly equal to g1 . It follows from Lemma

60

LSD: dependent input

4.5.2(a) that g1 = 2g. We can now proceed as in the case of g = 2. First we show that #In  1 X  yj P g ≤ x2g → Fg (x). (4.39) #In j=1 n Now write

yj ng

as yj = Ln,j + Rn,j for 1 ≤ j ≤ #In , ng

where Y

Ln,j =

Ln (ωt ) = (2π)g fj

t∈Aj

yj , ng

fj and y j are as in (4.33). Using Lemma 4.5.5, it is easy to show that max1≤j≤#In P(|Rn,j | > ) → 0, for any  > 0. So, by Lemma 4.2.3, to show (4.39), it is sufficient to show that #In  1 X P Ln,j ≤ x2g → Fg (x). #In j=1

We prove this in two steps as we did for g = 2. Define g n Y A¯n,j = (xi , yi , i = 1, 2, . . . , g) ∈ R2g : [2−1 (x2i + yi2 )] ≤ i=1

x2g o . (2π)g fj

Now, in Step I, for fixed  > 0 we find M1 > 0 large such that Φ([−M1 , M1 ]c ) ≤ /(8g) and M2 > 0 such that  1 n−1  X 2πlt P |√ εl cos | ≥ M2 ≤ /(8g), n n l=0

and the same bound holds for the sine term. Set M = max{M1 , M2 } and define n o B := (xj , yj ; 1 ≤ j ≤ g) ∈ R2g : |xj |, |yj | ≤ M ∀ j . Note that, ∂B is a compact (2g −1)-dimensional manifold which has zero measure under the 2g-dimensional Lebesgue measure. Now proceeding as before, we have #I 1 #I  Xn  Xn 1 P Ln,j ≤ x4 − Φ4 (A¯n,j ) → 0. #In #In l=1

l=1

g

Now note that for n = k + 1 we can write {1, 2, . . . , n − 1} as {b1 k g−1 + b2 k g−2 + · · · + bg−1 k + bg ; 0 ≤ bi ≤ k − 1, for 1 ≤ i ≤ k − 1; 1 ≤ bg ≤ k}. So we can write the sets Aj (see (4.29)) explicitly using this decomposition of

k-circulant

61

{1, 2, . . . , n − 1} as done in the case g = 2, that is, in the case n = k 2 + 1. For example, if g = 3, Aj = {b1 k 2 + b2 k + b3 , b2 k 2 + b3 k − b1 , b3 k 2 − b1 k − b2 }, for j = b1 k 2 + b2 k + b3 (except for finitely many j, bounded by vk,n and they do not contribute to the limit). Using this fact, and proceeding as before, we conclude that the LSD is now Fg (·), proving Theorem 4.5.4 completely. Type II. n = k g − 1 for some g ≥ 2. First we extend the given in (4.4) and define B(s1 , . . . , sg ) as  ψ1 (eis1 ) −ψ2 (eis1 ) 0 0 ···  ψ2 (eis1 ) ψ1 (eis1 ) 0 0 ···  is2 is2  0 0 ψ (e ) −ψ (e ) ··· 1 2  is2 is2  0 0 ψ (e ) ψ (e ) ··· 2 1   ..  0 0 0 0 .   0 0 0 ··· ψ1 (eisg ) 0 0 0 ··· ψ2 (eisg )

definition of B(s) 

0 0 0 0

     .   0  isg  −ψ2 (e ) ψ1 (eisg )

For zi , wi ∈ R, i = 1, 2, . . . , g, and with {Ni } i.i.d. N (0, 1), define Hg (si , zi , wi , i = 1, . . . , g)  = P B(s1 , . . . , sg )(N1 , . . . , N2g )0 ≤ (zi , wi , i = 1, . . . , g)0 . Let C0 = {t ∈ [0, 1] : f (2πt) = 0}. The proof of the following lemma is omitted. Lemma 4.5.6. (a) For fixed {zi , wi , i = 1, . . . , g}, Hg is a bounded continuous function of (s1 , . . . , sg ) ∈ [0, 2π]g . (b) Fg defined below is a proper distribution function. Z

1

Fg (zi , wi , i = 1, . . . , g) =

Z ···

0

1

Hg (2πti , zi , wi , i = 1, . . . , g)

Y

dti .

0

(4.40) (c) If Leb(C0 ) = 0 then Fg (zi , wi , i = 1, .., g) is continuous everywhere and equals Z

Z

1

Z ···

I{t≤(zk ,wk ,k=1,.,g)} 0

0

1

g t2 +t2 I{Qg f (2πui )6=0} Y 2i 2i−1 − 12 πf i=1 (2πu i ) du dt, Qg e (2π)g i=1 [πf (2πui )] i=1

Q Q where t = (t1 , . . . , t2g ), dt = dti , and du = dui . Fg is multivariate normal if and only if f is constant almost everywhere (Lebesgue). (d) If Leb(C Qg0 ) 6= 0 then Fg is discontinuous only on Dg = {(zi , wi , i = 1, . . . , g) : i=1 zi wi = 0}.

62

LSD: dependent input

Theorem 4.5.7 (Bose et al. (2009)). Suppose Assumptions 4.1.1 and 4.1.2 hold. Suppose n = k g −1 for some g ≥ 2. Then as n → ∞, Fn−1/2 Ak,n converges Qg in L2 to the distribution of ( i=1 Gi )1/g where (R(Gi ), I(Gi ); i = 1, 2, . . . g) has the distribution Fg given in (4.40). Remark 4.5.2. If {xi } are i.i.d., with Qg finite (2 + δ)-th moment, then f (s) ≡ 1/2π and the limit simplifies to U2 ( i=1 Ei )1/2g where {Ei } are i.i.d. Exp(1) and U2 is uniformly distributed over the unit circle independent of {Ei }. This agrees with the conclusion in Theorem 3.3.3(b). Proof of Theorem 4.5.7. First assume Leb(C0 ) = 0. Since k g = 1 + n = 1 mod n, we have gcd(k, n) = 1 and g1 |g. If g1 < g, then g1 ≤ g/α where α = 2 if g is even and α = 3 if g is odd. In either case, k g1

≤ k g/α ≤ (1 + n)1/α = o(n).

Hence g = g1 . By Lemma 4.5.2(b), the total number of eigenvalues γj of Ak,n such that, j ∈ Al and |Al | < g, is asymptotically negligible. Unlike in the previous theorem, here the partition sets Al are not necessarily self-conjugate. However, the number of indices l such that Al is selfconjugate is asymptotically negligible compared to n. To show this, we need to bound the cardinality of the following set for 1 ≤ l < g: Dl

= {t ∈ {1, 2, . . . , n} : tk l = −t (mod n)} = {t ∈ {1, 2, . . . , n} : n|t(k l + 1)}.

Note that t0 = n/ gcd(n, k l +1) is the minimum element of Dl and every other element is a multiple of t0 . Thus |Dl | ≤

n ≤ gcd(n, k l + 1). t0

Let us now estimate gcd(n, k l + 1). For l > [g/2], gcd(n, k l + 1)



gcd(k g − 1, k l + 1)

=

gcd k g−l (k l + 1) − (k g−l − 1), k l + 1



k g−l ,



which implies gcd(n, k l + 1) ≤ k [g/2] for all 1 ≤ l < g. Therefore gcd(n, k l + 1) k [g/2] 2 2 = g ≤ [(g+1)/2] ≤ = o(1). 1/g n (k − 1) k ((n) )[(g+1)/2] So, we can ignore the sets Pj which are self-conjugate. For other Pj , yj defined below will be complex. Y yj = (bt + ict ). t∈Pj

k-circulant

63

For simplicity we provide the detailed argument only for g = 2. Then, n = k 2 − 1 and write {0, 1, 2, . . . , n} as {ak + b; 0 ≤ a ≤ k − 1, 0 ≤ b ≤ k − 1}. Using the construction of S(x) we have Pj = {ak + b, bk + a} and #Pj = 2 for j = ak + b; 0 ≤ a ≤ k − 1, 0 ≤ b ≤ k − 1 (except for finitely many j and hence such indices do not affect the LSD). Let us define In = {j : #Pj = 2}. Clearly, n/#In → 2. Without loss, let In = {1, 2, . . . , #In }. Suppose Pj = {j1 , j2√}. We √first show √ the√L2 convergence of the empirical distribution of √1n ( nbj1 , ncj1 , nbj2 , ncj2 ) for those j for which #Pj = 2. Let Fn (x, y, z, w) be the ESD of {(bj1 , cj1 , bj2 , cj2 )}, that is, #In  1 X Fn (z1 , w1 , z2 , w2 ) = I bjk ≤ zk , cjk ≤ wk , k = 1, 2 . #In j=1

We show that for z1 , w1 , z2 , w2 ∈ R, E[Fn (z1 , w1 , z2 , w2 )] → F2 (z1 , w1 , z2 , w2 ) and Var[Fn (z1 , w1 , z2 , w2 )] → 0. (4.41) Let Yn (ωj ) be as in Lemma 4.2.5 and Y1n (ωj ) = R(Yn (ωj )), Y2n (ωj ) = I(Yn (ωj )). For j = 1, 2, . . . , n, let ηj = (ξ2j1 −1 , ξ2j1 , ξ2j2 −1 , ξ2j2 )0 , 0 Yn,j = Y1n (ωj1 ), Y2n (ωj1 ), Y1n (ωj2 ), Y2n (ωj2 ) . Then (bj1 , cj1 , bj2 , cj2 )0 = B(ωj1 , ωj2 )ηj + Yn,j . By Lemma 4.2.5, for any  > 0, max1≤j≤n P(kYn,j k > ) → 0 as n → ∞. So in view of Lemma 4.2.3, to show E[Fn (z1 , w1 , z2 , w2 )] → F2 (z1 , w1 , z2 , w2 ), it is enough to show that #In 1 X P(B(ωj1 , ωj2 )ηj ≤ (z1 , w1 , z2 , w2 )0 ) → F2 (z1 , w1 , z2 , w2 ). #In j=1

0 For this we use normal approximation. Let N = N1 , N2 , N3 , N4 , where {Ni } are i.i.d. N (0, 1) and define   2πj l   2πj l   2πj l   2πj l 0 1 1 2 2 1/2 Xl,j = 2 εl cos , εl sin , εl cos , εl sin . n n n n Note that 

B(ωj1 , ωj2 )ηj ≤ (z1 , w1 , z2 , w2 )0



n−1 √ √ √ √  1 X = B(ωj1 , ωj2 )( √ Xl,j ) ≤ ( 2z1 , 2w1 , 2z2 , 2w2 )0 . n l=0

64

LSD: dependent input

Since √ √ √ √  (r1 , r2 , r3 , r4 ) : B(ωj1 , ωj2 )(r1 , r2 , r3 , r4 )0 ≤ ( 2z1 , 2w1 , 2z2 , 2w2 )0 is a convex set in R4 and {Xl,j ; l = 0, 1, . . . , (n − 1)} satisfies (4.14)–(4.16), we can show using Lemma 3.1.1(a) that, as n → ∞, #In 1 X P(B(ωj1 , ωj2 )ηj ≤ (z1 , w1 , z2 , w2 )0 ) #In j=1 √ √ √ √ − P(B(ωj1 , ωj2 )N ≤ ( 2z1 , 2w1 , 2z2 , 2w2 )0 ) → 0.

Hence #In 1 X P(B(ωj1 ωj2 )ηj ≤ (z1 , w1 , z2 , w2 )0 ) n→∞ #In j=1

lim

=

#In √ √ √ √ 1 X P(B(ωj1 , ωj2 )N ≤ ( 2z1 , 2w1 , 2z2 , 2w2 )0 ) n→∞ #In j=1

=

n √ √ √ √ 1X P(B(ωj1 , ωj2 )N ≤ ( 2z1 , 2w1 , 2z2 , 2w2 )0 ) n→∞ n j=1

=

1X H2 (ωj1 , ωj2 , z1 , w1 , z2 , w2 ) n→∞ n j=1

=

b nc b nc  1 X X 2π(ak + b) 2π(bk + a) lim H2 , , z1 , w1 , z2 , w2 n→∞ n n n a=0

lim

lim

n

lim





b=0 √ √ b nc b nc

= =

 1 X X 2πa 2πb H2 √ , √ , z1 , w1 , z2 , w2 n→∞ n n n a=0 b=0 Z 1Z 1 H2 (2πs, 2πt, z1 , w1 , z2 , w2 )ds dt = F2 (z1 , w1 , z2 , w2 ). lim

0

0

Similarly we can show Var[Fn (x)] → 0 as n → ∞. Hence, the empirical distribution of yj for those j for which #Pj = 2 Q2 converges to the distribution of i=1 Gi in L2 such that (R(Gi ), I(Gi ); i = 1, 2) has distribution F2 . Hence the LSD of √1n Ak,n is the distribution of 1/2 Q2 , proving the result when g = 2 and Leb(C0 ) = 0. i=1 Gi If Leb(C0 ) 6= 0 then we have to show (4.41) on D2c (of Lemma 4.5.6). All the above steps remain valid for all (zi , wi ; i = 1, 2) ∈ D2c . Hence we have our required LSD. This proves the Theorem when g = 2. For g > 2, we can write {0, 1, 2, . . . , n} as {b1 k g−1 + b2 k g−2 + · · · + bg−1 k + bg ; 0 ≤ bi ≤ k − 1, for 1 ≤ i ≤ k}. The sets Aj can be explicitly written down as was done for n = k 2 − 1. For example, if g = 3, Aj = {b1 k 2 + b2 k + b3 , b2 k 2 + b3 k + b1 , b3 k 2 + b1 k + b2 },

Exercises

65

for j = b1 k 2 + b2 k + b3 (except for finitely many j, bounded by υk,n and they do not contribute to the limit). Using this and proceeding as before, 1/g Qg we obtain the limit as where (R(Gi ), I(Gi ); i = 1, 2, . . . g) has i=1 Gi distribution Fg . Figure 4.4 provides simulations for g = 3 when the input sequence is an MA(3) process.

2.5

3

2 2

1.5 1

1

0.5 0

0 −0.5

−1

−1 −1.5

−2

−2 −2.5 −4

−3

−2

−1

0

1

2

3

−3 −3

4

−2

−1

0

1

2

3

FIGURE 4.4 Eigenvalues of 10 realizations of

√1 Ak,n n

where (i) (left) n = k3 + 1, k = 10 and (ii) (right)

k3 −1,

n= k = 10. Input sequence is the MA(3) process xt = εt +0.5εt−1 +0.3εt−2 +0.2εt−3 where {εt } is i.i.d. N (0, 1).

4.6

Exercises

1. Complete the proof of Lemma 4.2.1(c). 2. Check that the variance µ2 and the fourth moment µ4 of FS equal R 1/2 R 1/2 4πf (2πs)ds and 0 24π 2 f 2 (2πs)ds, respectively. 0 3. Prove Lemma 4.4.1. 4. Suppose Assumptions 4.1.1, 4.1.2 and (4.24) hold. Show that the ESD of √1n P Tn (see (2.5) for the definition of P Tn ) converges in L2 . 5. Prove Lemma 4.4.3. 6. To complete the proof of Theorem 4.3.2, show that Var[Fn (x)] → 0. 7. Prove Lemma 4.5.2. 8. Find sequences {k = k(n)} and {N = N (n)} such that the LSD of N −1/2 Ak,N has some positive mass at the Qg origin and the rest of the probability mass is distributed as U1 ( i=1 Ei )1/2g where U1 and {Ei } are as in Theorem 4.5.4.

5 Spectral radius: light tail

For any matrix A, its spectral radius sp(A) is defined as  sp(A) := max |λ| : λ is an eigenvalue of A , where |z| denotes the modulus of z ∈ C. It has been an important object of study in random matrix theory for different types of random matrices. Our focus in this chapter is on the asymptotic behavior of this quantity for the reverse circulant, circulant and symmetric circulant random matrices when the input sequence is i.i.d. In particular, when the input sequence has finite (2 + δ)-th moment, with proper centering and scaling, the spectral radius converges to an extreme value distribution. The spectral radius of the k-circulant is discussed in the next chapter, and of the circulant-type matrices when the input sequence is an appropriate linear process as discussed in Chapter 7.

5.1

Circulant and reverse circulant

Extreme value theory is an important area of statistics and probability. See Resnick (1987) for an excellent introduction to this area. One of the most basic distributions in this theory is the following. Definition 5.1.1. The Gumbel distribution with parameter θ > 0 is characterized by its cumulative distribution function Λθ (x) = exp{−θ exp(−x)}, x ∈ R. The special case of Λ ≡ Λ1 is known as the standard Gumbel distribution. If Xi are i.i.d. random variables then max1≤i≤n Xi , after appropriate centering and scaling, converges to a Gumbel distribution, provided the tail of the distribution of X1 satisfies certain conditions. The scaling and centering constants depend on the behavior of the tail. We shall see shortly that the Gumbel distribution arises also in connection with the asymptotic distributions of the spectral radius of the circulant-type matrices. 67

68

Spectral radius: light tail

Recall the formulae (1.2) and (1.3) from Chapter 1 for the eigenvalues of Cn and RCn , respectively. From, these formulae, it is clear that sp(Cn ) = sp(RCn ). Hence the spectral radii for these two matrices do not have to be dealt with separately. We denote convergence in distribution and convergence in probaD P bility by → and →, respectively. Theorem 5.1.1 (Bose et al. (2011c)). Consider RCn and Cn where the input {xi } is i.i.d. with mean µ, variance one, and E |xi |2+δ < ∞ for some δ > 0. (a) If µ 6= 0 then sp(RCn ) − |µ|n D √ → N (0, 1). n (b) If µ = 0 then sp( √1n RCn ) − dq

D

→ Λ, cq √ where q = q(n) = b n−1 ln q, cq = √1 2 c, dq = 2

ln q

, and Λ is the standard

Gumbel distribution. Conclusions (a) and (b) hold for Cn also. Proof. As pointed out earlier, since sp(Cn ) = sp(RCn ), it is enough to deal with only RCn . Fortunately, the asymptotic behavior of the maximum of the periodogram has been discussed in the literature, for example by Davis and Mikosch (1999). We take help from these results. Let λ0 , λ1 , . . . , λn−1 be the eigenvalues of √1n RCn . Note that {|λk |2 ; 1 ≤ k < n/2} is the periodogram of {xi } at the frequencies { 2πk n ; 1 ≤ k < n/2}. If µ = 0 then Theorem 2.1 of Davis and Mikosch (1999) (stated as Theorem 11.3.2 in Appendix) yields D

max Ix,n (ωk ) − ln q → Λ,

1≤k< n 2

where Ix,n (ωk ) =

n−1 1 X 2πk | xt e−itωk |2 and ωk = . n t=0 n

Therefore

D

max |λk |2 − ln q → Λ,

1≤k 0. Since for even n, n−1 1 X D λn/2 = √ (−1)t xt → N (0, 1), n t=0

this can also be neglected as before, and hence (a) holds also for even n. A similar proof works when µ < 0. This proves (a) completely.

70

Spectral radius: light tail

(b) Now assume µ = 0. Then |λ0 | is tight and √ |λ0 | − ln q P → −∞. (ln q)−1/2 Hence An dominates |λ0 |, and as a consequence,

5.2

sp( √1n RCn )−dq D → cq

Λ.

Symmetric circulant

The behavior of sp( √1n SCn ) is similar to that of sp( √1n RCn ) but the normalizing constants change. The following normalizing constants, well-known in the context of maxima of i.i.d. normal variables, will be used in the statements of our results. an = (2 ln n)−1/2 and bn = (2 ln n)1/2 −

ln ln n + ln 4π . 2(2 ln n)1/2

(5.4)

The following lemmata are well known. The first lemma is on the joint behavior of the maxima and the minima of i.i.d. normal random variables. We leave its proof as an exercise. The second lemma is from Einmahl (1989) Corollary 1(b), page 31, in combination with his Remark on page 32. We omit its proof. Lemma 5.2.1. Let {Ni } be i.i.d. N (0, 1), mn = min1≤i≤n Ni and Mn = max1≤i≤n Ni . Then with an and bn as in (5.4), −mn − bn Mn − bn  D , −→ Λ ⊗ Λ, an an where Λ ⊗ Λ is the joint distribution of two independent standard Gumbel random variables. Let Id be the d × d identity matrix and | · | be the Euclidean norm in Rd . Let φC be the density of a d-dimensional centered normal random vector with covariance matrix C. Lemma 5.2.2. Let {ψi } be independent mean zero random vectors in Rd with finite moment generating functions in a neighborhood of the origin and Cov (ψ1 + ψ2 + · · · + ψn ) = Bn Id , where Bn > 0. Let {ηk } be independent N (0, σ 2 Cov(ψk )), independent of {ψk }, and σ 2 ∈ (0, 1]. Let −1/2 Pn ∗ ψk∗ = ψk + ηk , and letPp∗n denote the density of Bn k=1 ψk . Let α ∈ n 1 3 (0, 2 ) be such that α k=1 E |ψk | exp(α|ψk |) ≤ Bn . Let βn = βn (α) = −3/2 Pn 1/2 3 2 ≥ −c2 βn2 ln βn and Bn k=1 E |ψk | exp(α|ψk |). If |x| ≤ c1 αBn , σ −2 Bn ≥ c3 α , where c1 , c2 , c3 are constants depending only on d, then for some constant c4 depending on d, p∗n (x) = φ(1+σ2 )Id (x) exp(T¯n (x)) with |T¯n (x)| ≤ c4 βn (|x|3 + 1).

Symmetric circulant

71

We shall use the above lemma now to derive a normal approximation result. This shall be used in the proof of Theorem 5.2.4 and then again in Section 8.3 of Chapter 8. For n = 2j + 1, define a triangular array of centered random variables {¯ xt ; 1 ≤ t ≤ 2j + 1} by x ¯t = xt I(|xt | ≤ (1 + 2j)1/s ) − E[xt I(|xt | ≤ (1 + 2j)1/s )].

(5.5)

For 1 ≤ i1 < i2 < · · · < id < j and 1 ≤ t ≤ j, let vd (0) =



2(1, 1, . . . , 1), vd (t) = 2 cos

2πi1 t 2πi2 t 2πid t  , cos , . . . , cos . 2j + 1 2j + 1 2j + 1

Lemma 5.2.3. Let n = 1+2j and σj2 = (1+2j)−c for some c > 0 and let {xt } be i.i.d. mean zero with E(x20 ) = 1 and E |x0 |s < ∞ for some s > 2. Suppose Nt ’s are i.i.d. N (0, 1) random variables independent of {xt } and p˜j (x) is the Pj 1 density of √1+2j xt + σj Nt )vd (t). Then for any measurable subset B of t=0 (¯ d R , there exists η > 0 and j → 0 as j → ∞, such that the following holds uniformly over d-tuples 1 ≤ i1 < i2 < · · · < id < j: Z Z Z p˜j (x)dx− φ(1+σj2 )Id (x)dx ≤ j φ(1+σj2 )Id (x)dx+O(exp(−(1+2j)η )). B

B

B

Pj

Proof. Let Sj,¯x = ¯t vd (t) and let s = 2 + δ. Then observe that t=0 x Cov(Sj,¯x ) = Bj Id , where Bj = (2j + 1) Var(¯ xt ). Since {¯ xt vd (t)}0≤t≤j are independent mean zero random vectors, we can use Lemma 5.2.2. 1

By choosing α =

α

c5 (1+2j)− s √ 2 d j X

, it easily follows that

E |¯ xt vd (t)|3 exp(α|xt vd (t)|) < Bj .

t=0 −3/2 Pj

Let β˜j = Bj

t=0

E |¯ xt vd (t)|3 exp(α|¯ xt vd (t)|). Then clearly 1−δ 1 β˜j ≤ C(1 + 2j)−( 2 − s ) .

1/2

1

1

Let c = 12 − 1−δ ≈ c2 (1 + 2j) 2 − s and s > 0. Now choose |x| ≤ c1 αBj σj2 satisfying, 1 ≥ σj2 ≥ c3 (ln(2j + 1))(2j + 1)−2c . Clearly Bj ≥ c4 α−2 and Bj ≈ (1 + 2j). Let c1 , c2 , c3 , c4 be constants depending only on d. Then Lemma 5.2.2 implies that p˜j (x) = φ(1+σj2 )Id (x) exp(|Tj (x)|) with |Tj (x)| ≤ c5 β˜j (|x|3 + 1). 1

1

Note that |Tj (x)| → 0 uniformly for |x|3 = o(min{(1 + 2j)−c , (1 + 2j) 2 − s }). For the choice of σj2 = (1 + 2j)−c , the above condition can be seen to be satisfied. Now the following inequality can be shown for any measurable subset B of

72

Spectral radius: light tail

Rd . We omit its proof since a similar argument is provided in a more involved situation in Corollary 6.3.3 of Chapter 6. Z Z Z p˜j (x)dx− φ(1+σ2 )I (x)dx ≤ j φ(1+σ2 )I (x)dx+O(exp(−(1+2j)η )), B

B

j

d

B

j

d

where j → 0 as j → ∞. This completes the proof. For √1n RCn , leaving out the eigenvalues λ0 and λn/2 , due to perfect symmetry, the maximum and minimum eigenvalues are equal in magnitude. This is not the case for √1n SCn . Hence we should look at the joint behavior of the maximum and minimum eigenvalues. Theorem 5.2.4 (Bose et al. (2011c)). Let {λk , 0 ≤ k ≤ n−1} be the eigenvalues of √1n SCn . Let q = b n2 c, Mq,x = max1≤k≤q λk and mq,x = min1≤k≤q λk . If {xi } are i.i.d. with E(x0 ) = 0 , E(x20 ) = 1, and E |x0 |s < ∞ for some s > 2, then −mq,x − bq Mq,x − bq  D , −→ Λ ⊗ Λ, aq aq where aq and bq are given by (5.4). The same limit continues to hold if the eigenvalue λ0 is included in the definition of the max and the min above. Proof. The proof is broken down into four steps. We use truncation and Lemma 5.2.3 along with Bonferroni inequality. First assume n = 2j + 1. Let s = 2 + δ. Step 1: Truncation. Let x ¯t be as in (5.5) and x ˜t = xt I(|xt | ≤ (1 + 2j)1/s ). We show that it is enough to deal with the truncated random variables {¯ xt } ¯ k and λ ˜ k denote the eigenvalues of √1 SCn with input (see (5.6) below). If λ n ¯ ˜ {¯ xt } and {˜ xP t } respectively, then λk = λk . By Borel-Cantelli Lemma (see Sec∞ tion 11.2), t=1 |xt |I(|xt | > t1/s ) is bounded with probability 1 and consists of only a finite number of non-zero terms. Thus there exists a random positive integer N such that j X

|xt − x˜t | =

t=0

j X

|xt |I(|xt | > (1 + 2j)1/s )

t=0



∞ X

|xt |I(|xt | > t1/s )

t=0

=

N X t=0

|xt |I(|xt | > t1/s ).

Symmetric circulant

73

It follows that for 2j + 1 ≥ {N, |x1 |s , . . . , |xN |s }, the left side is zero. Conse˜ k = λk a.s. for all k. Therefore for all j quently, for all j sufficiently large, λ sufficiently large, −mj,x − bj Mj,x − bj  D −mj,¯x − bj Mj,¯x − bj  , = , , aj aj aj aj

(5.6)

where ¯ k and Mj,¯x = max λ ¯k . mj,¯x = min λ 1≤k≤j

1≤k≤j

Step 2: Application of Bonferroni Inequality. Define for 1 ≤ k ≤ j, j X √ 1 2πkt  2¯ x0 + 2 x ¯t cos , 2j + 1 2j + 1 t=1

¯0 λ k

=

¯0 λ k

j X √ 2πkt  ¯ 0 + √ σj = λ 2N + 2 Nt cos 0 k n 1 + 2j t=1



¯ 0 + σj N 0 , = λ k j,k 0 where σj2 = (1 + 2j)−c for some c > 0. Observe that Nj,k are i.i.d. N (0, 1) for k = 1, 2, . . . , j. Define

¯ 0 and mj,¯x+σN = min λ ¯0 . Mj,¯x+σN = max λ k k 1≤k≤j

1≤k≤j

Let A=

 −mj,¯x+σN − bj Mj,¯x+σN − bj < x, < y and aj aj

B=

 − min1≤k≤j (1 + σj2 )Nk − bj max1≤k≤j (1 + σj2 )Nk − bj < x, x0 , and such that f has an absolute continuous derivative with f 0 (x) → 0 as x → ∞ so that, Z x  F # (x) = 1 − F# (x) = exp − (1/f (y))dy , x > x0 . (6.5) x0

Further, a choice for the normalizing constants (for F # (x)) is given by −1 d∗n = 1/(F # ) (n) and c∗n = f (d∗n ). (6.6) Comparing the two representations of F # given in (6.4) and (6.5), the following choice of f verifies (6.5): f (x) =

x 1 ∼ as x → ∞. 2ax2 − b 2ax

Now we identify d∗n and c∗n . Noting that d∗n → ∞ and F # (d∗n ) = c∗n = f (d∗n ) ∼

1 n,

we get

1 1 and (d∗n )b exp(−a((d∗n )2 − 1)) = . 2ad∗n n

Taking logarithms on both sides of the second equation, we have ad∗n 2 − b ln d∗n − a = ln n. Let d∗n =

 ln n 1/2 a

 1 + δn . Using this in (6.7) we get δn =

b 2

where n = −b ln(1 + δn ) − d∗n

= ∼

cn

=

Hence

(6.7)

ln ln n + n (ln ln n)2  +O , 2 ln n (ln n)2 b 2

ln a + a. So we get

ln n 1/2  b ln ln n a − 2b ln a − b ln(1 + δn )  (ln ln n)2  1+ + +O a 4 ln n 2 ln n (ln n)3/2 ln n 1/2  b ln ln n a − 2b ln a  1+ + = dˆn (say), and a 4 ln n 2 ln n 1 . 1/2 2a (ln n)1/2 F (n) − dˆn D → ΛCe−a . cn

The result follows by using dn = cn ln(Ce−a ) + dˆn and convergence of types result (Proposition 0.2 of Resnick (1987), see Section 11.2 in Appendix). The following two corollaries follow immediately from Theorems 6.1.1 and 6.1.2. They will be useful in the proofs of Theorem 6.4.1 and Lemma 6.4.2, respectively.

Additional properties of the k-circulant

83

Corollary 6.1.3. Let {Xn } be a sequence of i.i.d. random variables where D

Xi = (E1 E2 · · · Ek )1/2k and {Ei } are i.i.d. Exp(1) random variables. Then max1≤i≤n Xi − dn D → Λ, cn where Λ is the standard Gumbel distribution, cn

=

dn

=

Ck

=

1 2k 1/2 (ln n)1/2

,

ln Ck − k−1 ln n 1/2  (k − 1) ln ln n  2 ln k + 1+ , and k 4 ln n 2k 1/2 (ln n)1/2 k−1 1 √ (2π) 2 . k

Corollary 6.1.4. Let {Ei }, cn and dn be as in Corollary 6.1.3. Let σn2 = n−c , c > 0. Then there exists some positive constant K = K(x), such that for all large n we have  K P (E1 E2 · · · Ek )1/2k > (1 + σn2 )−1/2 (cn x + dn ) ≤ , x ∈ R. n

6.2

Additional properties of the k-circulant

Before we study the extreme of the k-circulant, we need some additional facts about their eigenvalues. Recall the eigenvalue structure of the k-circulant matrices from Section 1.4 of Chapter 1. Also recall that S(x) = {xk b (mod n0 ) : 0 ≤ b < gx } and gx = #S(x), the order of x. Note that g0 = 1. Recall from (1.8) and (1.10) of Chapter 1 that Zn

=

yj

=

{0, 1, 2, . . . , n − 1}, Y λty , j = 0, 1, . . . , l − 1

where y = n/n0 ,

and

t∈Pj

υk,n0

=

#{x ∈ Zn0 : gx < g1 }.

Here we focus on the k-circulant matrix with n = k g + 1 and observe that n = n0 . Define Jk X(k)

:= {Pi : #Pi = k}, nk := #Jk , and := {x : x ∈ Zn and x has order k}.

Lemma 6.2.1. The eigenvalues {ηi } of the k-circulant with n = k g +1, g ≥ 2, satisfy the following:

Spectral radius: k-circulant

84

Pn−1 n (a) η0 = t=0 xt , is always an eigenvalue and if n is even, then η 2 = Pn−1 t t=0 (−1) xt , is also an eigenvalue and both have multiplicity one. (b) For x ∈ Zn r {0, n2 }, gx = g1 or

g1 b

for some b ≥ 2 and

(c) For all large n, g1 = 2g . Hence from (b), for x ∈ Zn r{0, The total number of eigenvalues corresponding to J2g is

g1 2b

is an integer.

n 2 },

gx = 2g or

2g b .

2g × #J2g = #X(2g) ≈ n. X( 2g b )

(d) = ∅ for 2 ≤ b < g, b even. If g is even then X( 2g g ) = X(2) is either empty or contains exactly two elements with eigenvalues n ηl = |λl | and ηn−l = −|λl |, for some 1 ≤ l ≤ . 2 g (e) Suppose b is odd, 3 ≤ b ≤ g and b is an integer. For each Pj ∈ J 2g , b

2g there are 2g b eigenvalues given by the ( b )-th roots of yj . The total number of eigenvalues corresponding to the set J 2g is b

2g 2g × #J 2g = #X( ) ≈ (k g/b + 1)(1 + n−a ) for some a > 0. b b b There are no other eigenvalues. Proof. Since n and k are relatively prime, we have n0 = n. Pn−1 (a) P0 = S(0) = {0} and the corresponding eigenvalue is η0 = t=0 xt with multiplicity one. If n is even then k is odd and hence S(n/2) = { n2 }, and the Pn−1 corresponding eigenvalue is η n2 = t=0 (−1)t xt of multiplicity one. (b) From Lemma 1.4.2(b), gx divides g1 and hence gx = g1 or gx = gb1 for some b ≥ 2. Also for every integer t ≥ 0, tk g = (−1 + n)t = −t (mod n). Hence, λt and λn−t belong to the same partition block S(t) = S(n − t). Thus, each S(t) contains an even number of elements, except for t = 0, n2 . Hence gb1 must be even, that is, g2b1 must be an integer. (c) From Lemma 4.5.2(a), g1 = 2g for all but finitely many n, and υk,n /n → 0 as n → ∞. For each Pj ∈ J2g , we have 2g-many eigenvalues which are the 2g-th roots of yj . Now the result follows from the fact that n = 2g#J2g + υk,n . g1

2 = xk g = x (mod n). (d) Suppose b = 2 and x ∈ X( g21 ) = X( 2g 2 ). Then xk g g But k = −1 (mod n) and so, xk = −x (mod n). Therefore 2x = 0 (mod n), and hence x is either 0 or n/2. But from (a), g0 = gn/2 = 1. Hence X( 2g 2 ) = ∅. Now suppose b > 2 and is even. From Lemma 1.4.4(b),

2g ) ≤ gcd(k 2g/b − 1, k g + 1) for b ≥ 3. b Now observe that for b even,  1 if k is even, gcd(k 2g/b − 1, k g + 1) = 2 if k is odd. #X(

Additional properties of the k-circulant

85

So we have #X( 2g b ) ≤ 2 for b > 2 and b even. Suppose if possible, there exists an x ∈ Zn such that gx = 2g #S(x) = 2g b and for all y ∈ S(x), gy = b . Hence

2g b .

Then

 2g 2g # y : gy = ≥ > 2 for g > b > 2, b even. b b This contradicts the fact that #X( 2g b ) ≤ 2 for g > b > 2, b even. Hence X( 2g ) = ∅ for b even and g > b > 2. b If b = g and is even, then from the previous discussion #X( 2g g ) = 0 or 2. In the latter case there are exactly two elements in Zn whose orders are 2 and there will be only one partitioning set containing them. So the corresponding eigenvalues will be ηl = |λl | and ηn−l = −|λl |, for some 1 ≤ l ≤ n2 . (e) We first show the following for b odd. Note that (e) is a simple consequence of this. X 2g (k g/b + 1) − (k g/bi + 1) ≤ #X( ) ≤ k g/b + 1, (6.8) b ∗ X where is sum over all odd bi > b, such that bgi is a positive integer. Let ∗

 Zn,b = x : x ∈ Zn and xk 2g/b = x mod (k g + 1) .

(6.9)

Then it is easy to see that X( Let x ∈ Zn,b and

g b

2g ) ⊆ Zn,b . b

(6.10)

= m. Then

k g + 1 | x(k 2g/b − 1) ⇒ k bm + 1 | x(k 2m − 1) ⇒ k (b−1)m − k (b−2)m + k (b−3)m − · · · − k + 1 | x(k m − 1). But gcd(k m − 1, k (b−1)m − k (b−2)m + k (b−3)m − · · · − k + 1) = 1, and therefore x is a multiple of (k (b−1)m − k (b−2)m + k (b−3)m − · · · − k + 1). Hence #Zn,b

=

 (k (b−1)m



 k bm + 1 (b−3)m +k − · · · − k + 1)

k (b−2)m

= km + 1 = k g/b + 1, and combining with (6.10), #X(

2g ) ≤ #Zn,b = k g/b + 1, b

establishing the right side inequality in (6.8). On the other hand, if x ∈ Zn,b

Spectral radius: k-circulant

86

2g then either gx = 2g b or gx < b . For the second case gx = bi odd, and therefore x ∈ Zn,bi . Hence

#X(

2g ) ≥ b ≥

#Zn,b −

X

2g bi

for some bi > b,

#Zn,bi



(k g/b + 1) −

X (k g/bi + 1), ∗

where

X

is sum over all odd bi > b, such that

g bi

is a positive integer.



6.3

Truncation and normal approximation

Truncation: From Section 6.2, n = n0 and S(t) = S(n − t) except for t = 0, n/2. So for Pj 6= S(0), S(n/2), we can define Aj such that Pj = {x : x ∈ Aj or n − x ∈ Aj } and #Aj =

1 #Pj . 2

(6.11)

For any sequence of random variables b = {bl }l≥0 , define for Pj ∈ J2k , βb,k (j) =

Y 1 n−1 X 2 2πi  √ bl ω tl , where ω = exp . n n

t∈Aj

(6.12)

l=0

Suppose {xl }l≥0 are independent, mean zero and variance one random variables. For each n ≥ 1, define a triangular array of centered random variables (n) {¯ xl }0≤l 2. Then, almost surely, max (βx,g (j))1/2g − max (βx¯,g (j))1/2g = o(1).

1≤j≤q

Proof. Since where

Pn−1 l=0

1≤j≤q

ω tl = 0 for 0 < t < n, it follows that βx¯,n (j) = βx˜,n (j) (n)

x ˜l = x ˜l

=x ¯l + E xl I|xl |≤n1/γ = xl I|xl |≤n1/γ . P∞ By the Borel-Cantelli lemma, t=0 |xt |I|xt |>t1/γ is finite a.s. and has only

Truncation and normal approximation

87

finitely many non-zero terms. Thus, there exists a random positive integer N (ω) such that n X

|xt − x˜t | =

t=0

n X

|xt |I|xt |>n1/γ ≤

t=0

∞ X

N (ω)

|xt |I|xt |>t1/γ =

t=0

X

|xt |I|xt |>t1/γ .

t=0

(6.13) It follows that for n ≥ {N, |x1 |γ , . . . , |xN |γ }, the left side of (6.13) is zero. Consequently, for all n sufficiently large, βx,n (j) = βx˜,n (j) = βx¯,n (j) a.s. for all j,

(6.14)

and the assertion follows immediately. Normal For d ≥ 1, and any distinct integers i1 , i2 , . . . , id ,  approximation: from 1, 2, . . . , d n−1 e , define 2 v2d (l) = cos

T 2πij l  2πij l  , sin :1≤j≤d , n n

l ∈ Zn .

Let φΣ (·) denote the density of the 2d-dimensional normal random vector which has mean zero and covariance matrix Σ. Let I2d be the identity matrix of order 2d. The following lemma is from Davis and Mikosch (1999). It is also a consequence of Lemma 5.2.2. Lemma 6.3.2 (Davis and Mikosch (1999)). Let {xt } be i.i.d. random variables with E[x0 ] = 0, E[x0 ]2 = 1, and E |x0 |γ < ∞ for some γ > 2. Let p˜n be the density function of n

1 X 21/2 √ (¯ xt + σn Nt )v2d (t), n t=1 where {Nt } is independent of {xt } and σn2 = Var(¯ xt )s2n , for some sequence −2c 2 {sn }. If n ln n < sn ≤ 1 with c = 1/2−(1−δ)/γ for arbitrarily small δ > 0, then uniformly for |x|3 = od (min (nc , n1/2−1/s )), p˜n (x) = φ(1+σn2 )I2d (x)(1 + o(1)). We shall use this lemma also in Section 8.2. The following corollary is similar to Lemma 5.2.3. Corollary 6.3.3. Let γ > 2 and σn2 = n−c where c is as in Lemma 6.3.2. Then for any measurable B ⊆ R2d , Z Z Z p˜n (x)dx − φ(1+σn2 )I2d (x)dx ≤ εn φ(1+σn2 )I2d (x)dx + Od (exp(−nη )), B

B

B

where εn → 0 as n → ∞ and η > 0. The above holds uniformly over all the d-tuples of distinct integers 1 ≤ i1 < i2 < · · · < id ≤ d n−1 2 e.

Spectral radius: k-circulant

88

Proof. Set r = nα where 0 < α < 1/2 − 1/γ. Using Lemma 6.3.2, we have Z Z p ˜ (x)dx − ϕ(1+σn2 )I2d (x)dx n B B Z Z ≤ p˜n (x)dx − ϕ(1+σn2 )I2d (x)dx B∩{kxk≤r} B∩{kxk≤r} Z Z + p˜n (x)dx + ϕ(1+σn2 )I2d (x)dx B∩{kxk>r} B∩{kxk>r} Z Z Z ≤ εn ϕ(1+σn2 )I2d (x)dx + p˜n (x)dx + ϕ(1+σn2 )I2d (x)dx {kxk>r}

B∩{kxk≤r}

{kxk>r}

= T1 + T2 + T3 (say). (j)

Let v2d (l) denote the j-th coordinate  of v2d (l), 1 ≤ j ≤ 2d. Then, using the normal tail bound, P |N (0, σ 2 )| > x ≤ 2e−x/σ for x > 0, n−1  

1 X p˜n (x)dx = P 21/2 √ (¯ al + σn Nl )v2d (l) > r n {kxk>r}

Z T2 =

l=0

n−1   1 X (j) ≤ 2d max P 21/2 √ (¯ al + σn Nl )vd (l) > r/(2d) 1≤j≤2d n l=0

 1 n−1 X √  √  (j) ≤ 2d max P √ a ¯l vd (l) > r/(4 2d) + 4d exp − rnc/2 /(4 2d) . 1≤j≤2d n l=0

(j)

Note that a ¯l vd (l), 0 ≤ l < n are independent, have mean zero, variance at most one, and are bounded by 2n1/γ . Therefore, by applying Bernstein’s inequality and simplifying, for some constant K > 0,  1 n−1 X √  (j) P √ a ¯l vd (l) > r/4 2d ≤ exp(−Kr2 ). n l=0

Further, Z T3 = {kxk>r}

ϕ(1+σn2 )I2d (x)dx ≤ 4d exp(−r/4d).

Combining the above estimates finishes the proof.

6.4

Spectral radius of the k-circulant

Now we finally arrive at the main result of this chapter.

Spectral radius of the k-circulant

89

Theorem 6.4.1 (Bose et al. (2012a)). Suppose {xi }i≥0 is an i.i.d. sequence of random variables with mean zero, variance 1, and E |xi |γ < ∞ for some γ > 2. If n = k g + 1 for some fixed positive integer g, then sp( √1n Ak,n ) − dq cq where q = qn = cn

=

dn

=

Cg

=

D

→ Λ, as n → ∞,

n 2g ,

1 , 2g 1/2 (ln n)1/2 ln Cg − g−1 ln n 1/2  (g − 1) ln ln n  2 ln g + 1+ , and 1/2 1/2 g 4 ln n 2g (ln n) g−1 1 √ (2π) 2 . g

To establish the theorem we shall use the following lemmata whose proofs are given later. Recall that {βx,g (t)1/2g } are the eigenvalues corresponding to the set of partitions which have cardinality 2g. We derive the behavior of the maximum of these eigenvalues in Lemma 6.4.2. Then using the results of Lemma 6.4.3, we show that the maximum of the remaining eigenvalues is negligible compared to the above. Lemma 6.4.2.

max1≤t≤q βx,g (t)1/2g − dq D → Λ, cq

where dq , cq are as in Corollary 6.1.3, q = qn = n → ∞. As a consequence,

n 2g

− kn and

(6.15) kn n

max1≤t≤q βx,g (t)1/2g − dn/2g D → Λ. cn/2g

→ 0 as

(6.16)

The next lemma is technical. Let cn (l)

=

dn (l)

=

Cl

=

1 , 2l1/2 (ln n)1/2 ln Cl − l−1 ln n 1/2  (l − 1) ln ln n  2 ln l + 1+ , l 4 ln n 2l1/2 (ln n)1/2 l−1 1 √ (2π) 2 , and l

cn2j = cn2j (j), dn2j = dn2j (j), cn/2g = cn/2g (g) and dn/2g = dn/2g (g).

Spectral radius: k-circulant

90

Lemma 6.4.3. Let n = k g + 1. If j < g and for some a > 0, 2jn2j = (k j + j 1)(1 + n−a ) ≈ n g or is bounded, then there exists a constant K = K(j, g) ≥ 0 such that cn/2g dn/2g − dn2j → K and → ∞ as n → ∞. cn2j cn2j Proof of Theorem 6.4.1. If #Pi = j, then the eigenvalues corresponding to Pi ’s are the j-th roots of yi and hence these eigenvalues have the same modulus. From Lemma 6.2.1, the possible values of #Pi are {1, 2, 2g and 2g/b (3 ≤ b < g, b odd, gb ∈ Z)}. Recall from (6.12) that βx,j (i) is the modulus of the eigenvalue associated with the partition set Pi , where #Pi = 2j. Normally distributed case: In case of normally distributed entries, it easily follows that βx,j (i) is the product of j exponential random variables, and they are independent as i takes n2j -many distinct values. So from Corollary 6.1.3, if n2j → ∞ then the maximum of βx,j (k)1/2j has a Gumbel limit. Non-normal case: For general entries the proof of Lemma 6.4.2 also implies that βx,j (k)1/2j − dn2j D max → Λ as n2j → ∞, (6.17) 1≤k≤n2j cn2j where cn2j and dn2j are as above. Now let xn = cn x + dn , q = q(n) = Then

n g and B = {b : b odd, 3 ≤ b < g, ∈ Z}. 2g b

 1 P sp( √ Ak,n ) > xq ≥ P n

 max βx,g (j)1/2g > xq ,

j:Pj ∈J2g

and  1 P sp( √ Ak,n ) > xq n ≤

P

 X max βx,g (j)1/2g > xq + P

j:Pj ∈J2g

1 + P |√ n +P

b∈B n−1 X

 1 xl | > xq + P | √ n l=0  max βx,2 (j)1/2 > xq )

max

j:Pj ∈J 2g

βx, gb (j)b/2g > xq



b

n−1 X

(−1)l xl | > xq



l=0

j:Pj ∈J2

=: A + B + C + D + E. We first verify that B, C, D, E are negligible. From Lemma 6.2.1, the term D appears only when n2 ∈ Z, and the term E appears only if g is even and in that case J2 contains only one element. It is easy to see that C, D and E tend to zero since we are taking the maximum of a single element.

Spectral radius of the k-circulant

91

Note that B is a sum of finitely many terms. Now suppose for b ∈ B, we have some finite Kb such that dn/2g − dn2g/b cn/2g → Kb and → ∞ as n → ∞. cn2g/b cn2g/b

(6.18)

Observations (6.17) and (6.18) imply B goes to zero. So it remains to verify (6.18) for b ∈ B. But this follows from Lemma 6.2.1(e) and Lemma 6.4.3. Now the limit in A can be identified using Lemma 6.4.2 and that finishes the proof of the theorem. Proof of Lemma 6.4.2. First assume that {xl }l≥0 are i.i.d. standard normal. Let {Ej }j≥1 be i.i.d. standard exponentials. By Lemma 3.3.2, it easily follows that   P max (βx,g (t))1/2g > cq x + dq 1≤t≤q   1/2g = P Eg(j−1)+1 Eg(j−1)+2 · · · Egj > cq x + dq for some 1 ≤ j ≤ q . The lemma then follows in this special case from Corollary 6.1.3. For the general case, we break the proof into the following three steps and make use of the two results from Section 6.3. The proof of these three steps will be given later. Fix x ∈ R. Step 1: Claim: (n)

(n)

lim [Q1 − Q2 ] = 0,

(6.19)

n→∞

where (n)

Q1

(n)

Q2

:= P := P

 max (βx¯+σn N,g (j))1/2g > cq x + dq ,

1≤j≤q

max (1 + σn2 )1/2 Eg(j−1)+1 Eg(j−1)+2 · · · Egj

1/2g

1≤j≤q

 > cq x + dq ,

and {Nl }l≥0 is a sequence of i.i.d. standard normal random variables. Step 2: Claim: max1≤j≤q (βx¯+σn N,g (j))1/2g − dq D → Λ. cq

(6.20)

max1≤t≤q βx¯,g (t)1/2g − dq D → Λ. cq

(6.21)

Step 3: Claim:

Now combining Lemma 6.3.1 and (6.21) we can conclude that max1≤t≤q βx,g (t)1/2g − dq D → Λ. cq

Spectral radius: k-circulant

92

This completes the proof of the first part, namely of (6.15) of the lemma. By convergence of types result, the second part, namely, (6.16) follows since the following hold. We omit the tedious algebraic details. dq − dn/2g cq → 1 and → 0, as n → ∞. cn/2g cq (n)

Proof of Step 1: We approximate Q1 Bonferroni inequality. For all m ≥ 1, 2m X

(n)

(−1)j−1 Sj,n ≤ Q1

j=1

(6.22) (n)

by the simpler quantity Q2



2m−1 X

(−1)j−1 Sj,n ,

using

(6.23)

j=1

where X

Sj,n =

 P (βx¯+σn N,g (ti ))1/2g > cq x + dq , i = 1, . . . , j .

1≤t1 21/2 (cq x + dq ); 0 ≤ t < j . l=1 D

By Corollary 6.3.3 and the fact that N12 +N22 = 2E1 , we deduce that uniformly over all the d-tuples 1 ≤ t1 < t2 < · · · < tj ≤ q, n−1 X  P 21/2 √1 (¯ xl + σn Nl )v2gj (l) ∈ Bn(j) n l=0

g Y

− P (1 + σn2 )1/2

Eg(tm −1)+i

1/2g

 > cq x + dq , 1 ≤ m ≤ j

i=1

≤ n P (1 +

1 σn2 ) 2 η

Eg(tm −1)+1 Eg(tm −1)+2 · · · Egtm

1  2g

> cq x + dq , 1 ≤ m ≤ j



+ O(exp(−n )). Therefore, as n → ∞,   n Kj |Sj,n − Tj,n | ≤ n Tj,n + O(exp(−nη )) ≤ n + o(1) → 0, j j!

(6.27)

where O(·) and o(·) are uniform over j. Hence, using (6.23), (6.24), (6.26) and (6.27), we have (n)

(n)

lim sup |Q1 − Q2 | ≤ lim sup T2m+1,n + lim sup T2m,n , n

n

for each m ≥ 1.

n

(n)

(n)

Letting m → ∞, we conclude that limn→∞ [Q1 − Q2 ] = 0. This completes the proof of Step 1. Proof of Step 2: Since by Corollary 6.1.3, 1/2g max Eg(j−1)+1 Eg(j−1)+2 · · · Egj = Op ((ln n)1/2 ) and σn2 = n−c , 1≤j≤q

Spectral radius: k-circulant

94 it follows that

(1 + σn2 )1/2 max1≤j≤q Eg(j−1)+1 Eg(j−1)+2 · · · Egj cq

1/2g

− dq

D

→ Λ,

and consequently max1≤j≤q (βx¯+σn N,g (j))1/2g − dq D → Λ. cq This completes the proof of Step 2. Proof of Step 3: In view of (6.20), it suffices to show that max (βx¯+σn N,g (j))1/2g − max (βx¯,g (j))1/2g = op (cq ).

1≤j≤q

1≤j≤q

Note that g g Y X 1 n−1 2 Y lek αj,k 2 (say), j √ (¯ xl + σn Nl )ω = βx¯+σn N,g (j) = n k=1

and βx¯,g (j) =

l=0

k=1

g g Y X Y 1 n−1 k 2 √ γj,k 2 (say). x ¯l ω lej = n

k=1

l=0

k=1

Now by the inequality, |

g Y i=1

ai −

g Y i=1

g j−1 g X Y Y bi | ≤ ( bi )|aj − bj |( ai ) j=1 i=1

(6.28)

i=j+1

for non-negative numbers {ai } and {bi }, we have βx¯+σn N,g (j) − βx¯,g (j) ≤

g X

|γj,1 |2 · · · |γj,k−1 |2 |αj,k |2 − |γj,k |2 |αj,k+1 |2 · · · |αj,g |2 .

k=1

For any sequence of random variables {Xn }n≥0 , define X 1 n−1 Mn (X) := max √ Xl ω tl . 1≤t≤n n l=0

As a trivial consequence of Theorem 2.1 of Davis and Mikosch (1999) (see Theorem 11.3.2 in Appendix), we have Mn2 (σn N ) = Op (σn ln n) and Mn2 (¯ x + σn N ) = Op (ln n).

Spectral radius of the k-circulant √ Therefore αj,k = Op ( ln n). Now

95

n−1 X k γj,k ≤ αj,k + σn √1 Nl ω lej , n l=0

√ √ and therefore γj,k = (1 + σn )Op ( ln n) = Op ( ln n). So we have max βx¯+σn N,g (j) − max βx¯,g (j) 1≤j≤q 1≤j≤q ≤ max βx¯+σn N,g (j) − βx¯,g (j) 1≤j≤q



max

1≤j≤q

g X

Op (ln n)

g−1  αj,k − γj,k |αj,k | + |γj,k |

k=1

g X √ αj,k − γj,k ≤ Op (ln n)g−1 Op ( ln n) max 1≤j≤q

k=1

g− 12

≤ Op (ln n) ≤ op n

−c/4

gσn Mn (N )  (ln n)g .

Hence max (βx¯+σn N,g (j))1/2g − max (βx¯,g (j))1/2g 1≤j≤q

1≤j≤q



max βx¯+σn N,g (j) − max βx¯,g (j) 1≤j≤q

1≤j≤q

1 , 2gξ 1−1/2g

where ξ lies between max1≤j≤q βx¯+σn N,g (j) and max1≤j≤q βx¯,g (j). We know that max1≤j≤q βx¯+σn N,g (j) P → 1 and (ln n)g max1≤j≤q βx¯+σn N,g (j) − max1≤j≤q βx¯,g (j) (ln n)g

P

→ 0.

Therefore max1≤j≤q βx¯,g (ln n)g max1≤j≤q βx¯+σn N,g (j) max1≤j≤q βx¯,g (j) − max1≤j≤q βx¯+σn N,g (j) P = + → 1. (ln n)g (ln n)g Hence  1 ξ ξ 1−1/2g 1 P P → 1 ⇒ → 1 ⇒ 1−1/2g = Op (ln n) 2 −g . g g(1−1/2g) (ln n) (ln n) ξ

Spectral radius: k-circulant

96

Combining all these we have   max βx¯+σ N,g (j)1/2g − max βx¯,g (j)1/2g ≤ op n−c/4 (ln n)g Op (ln n) 12 −g n 1≤j≤q

1≤j≤q

= op (cq ). This completes the proof of Step 3, and hence completes the proof of Lemma 6.4.2. Proof of Lemma 6.4.3. First observe that if nj is finite then the result holds trivially. If n2j =

(kj +1)(1+n−a ) 2j

then 1 1  + j/g (1 + o(1)) − ln 2j, a n n

ln n2j = j ln k +

1

for some a > 0, and since k = (n − 1) g , we have cn/2g j → as n → ∞. cn2j g Similarly we get for some a0 > 0, j

ln ln n2j = ln ln n g + Now observe that

dn/2g −dn2j cn2j

J1 = 2j 1/2 (ln n2j )1/2

can be broken into the following three parts:

g−1 2 ln g n 1/2 1/2 2g (ln 2g )

 ln Cg −

J2 = 2j 1/2 (ln n2j )1/2

1  (1 + o(1)) − ln 2j. ln n

na0



 ln Cj − j−1 2 ln j → m1 (finite). 1/2 1/2 2j (ln n2j )

 ln n/2g 1/2 ln n2j 1/2  − → m2 g j

(j − 1) ln ln n2j  4(j ln n2j )1/2 √  (g − 1) ln ln n/2g (j − 1) g ln ln n2j  = 2j 1/2 (ln n2j )1/2 − + o(1) 1/2 1/2 4(g ln n/2g) 4j(ln n/2g) 1/2 1/2   j (ln n2j ) (j − 1)g = (g − 1) ln ln n/2g − ln ln n2j + o(1) j 2(g ln n/2g)1/2  j 1/2 (ln n2j )1/2  g(j − 1)  = (g − 1) − ln ln n/2g + o(1) → ∞ (as g > j). 1/2 j 2(g ln n/2g)

J3 = 2j 1/2 (ln n2j )1/2

 (g − 1) ln ln n/2g

(finite).

4(g ln n/2g)1/2

Hence Lemma 6.4.3 is proven.



Spectral radius of the k-circulant

6.4.1

97

k-circulant for sn = k g + 1

We have seen in Chapter 4 that the LSD of √1n Ak,n exists when k g = sn − 1 and s = o(np1 −1 ), where p1 is the smallest prime factor of g. To derive a limit for the spectral radius, we need to strengthen this assumption slightly. Theorem 6.4.4 (Bose et al. (2012a)). Suppose {xl }l≥0 is an i.i.d. sequence of random variables with mean zero, variance 1, and E |xl |γ < ∞ for some γ > 2. If s ≥ 1, sn = k g + 1 where s = o(np1 −1−ε ), 0 < ε < p1 , and p1 is the smallest prime factor of g, then as n → ∞, sp( √1n Ak,n ) − dq cq where q = q(n) = dn

=

cn

=

Cg

=

n 2g .

D

→ Λ,

The constants cn and dn can be taken as follows:

ln Cg − g−1 ln n 1/2  (g − 1) ln ln n  2 ln g + 1+ , 1/2 1/2 2g 4 ln n 2g (ln n) 1 , and 1/2 2g (ln n)1/2 g−1 1 √ (2π) 2 . g

Note that the case s = 1 reduces to the previous theorem. The proof for the general case is along the same line except for certain modifications. The condition s = o(np1 −1 ) implies υk,n /n → 0. But this is not enough to υ −a1 neglect certain terms. We need the stronger result k,n ) for some n = o(n a1 > 0 so that these terms are negligible in the log scale that we have. We omit the detailed proof of this but provide a brief heuristic explanation. Since s > 1 it can be checked that (see proof of Lemma 1.4.4), #X(

2g ) b

kg + 1 ) s ≤ gcd(k 2g/b − 1, k g + 1). ≤ gcd(k 2g/b − 1,

(6.29)

Now recall Zn,b from (6.9) and observe that  kg + 1 # x : x ∈ Zn and xk 2g/b = x mod ( ) ≥ #Zn,b . s

(6.30)

From observations (6.29) and (6.30) it easily follows that Lemma 6.2.1(d) remains valid in this case also, that is, X(

2g ) = ∅ for 2 ≤ b < g, b even. b

Moreover, if g is even then X( 2g g ) = X(2) is either empty or contains exactly two elements.

Spectral radius: k-circulant

98

Further, it can be shown that (as in Lemma 6.2.1(e)), for b odd, 3 ≤ b ≤ g, 1≥

#X( 2g b ) ≥ 1 − n−α (1 + o(1)), g/b k +1

for some α > 0. Now the rest of the proof is as Theorem 6.4.1.

6.5

Exercises

1. Prove Corollary 6.1.3. 2. Prove Corollary 6.1.4. 3. Complete the proof of Theorem 6.4.4.

7 Maximum of scaled eigenvalues: dependent input

In this chapter we try to generalize the results of Chapters 5 and 6 on spectral radius to the situation where the input sequence is dependent. We take {xn } P∞ to be an infinite order moving average process, x = a n i=−∞ i εn−i , where P {an ; n ∈ Z} are non-random with |a | < ∞, and {ε i ; i ∈ Z} are i.i.d. n n In this case the eigenvalues have unequal variances. So, we resort to scaling each eigenvalue by an appropriate quantity and then consider the distributional convergence of the maximum of these scaled eigenvalues of different circulant matrices. This scaling has the effect of (approximately) equalizing the variance of the eigenvalues. Similar scaling has been used in the study of the periodogram (see Walker (1965), Davis and Mikosch (1999), and Lin and Liu (2009)).

7.1

Dependent input with light tail

Let {xn ; n ≥ 0} be a two-sided moving average process, xn =

∞ X

ai εn−i ,

(7.1)

i=−∞

P where {an ; n ∈ Z} are non-random, n |an | < ∞, and {εi ; i ∈ Z} are i.i.d. random variables. Let f (ω), ω ∈ [0, 2π] be the spectral density of {xn }. Note that σ2 if {xn } is i.i.d. with mean 0 and variance σ 2 . f≡ 2π We make the following assumption. Assumption 7.1.1. {εi , i ∈ Z} are i.i.d. with E(εi ) = 0, E(ε2i ) = 1, E|εi |2+δ < ∞ for some δ > 0, ∞ X

|aj ||j|1/2 < ∞, and f (ω) > 0 for all ω ∈ [0, 2π].

j=−∞

99

100

7.2

Maximum of scaled eigenvalues: dependent input

Reverse circulant and circulant

Define M (·, f ) for the reverse circulant matrix as follows: 1 |λk | M ( √ RCn , f ) = maxn p , 1≤k< 2 n 2πf (ωk ) where f is the spectral density corresponding to {xn }, λk are the eigenvalues √1 of √1n RCn as defined in Section 1.3, and ωk = 2πk n . Note that M ( n Cn , f ) is defined similarly and satisfies M ( √1n RCn , f ) = M ( √1n Cn , f ). Note that λ0 is not included in the definition of M (·, f ). When E(ε0 ) = µ = 0, λ0 does not affect the limiting behavior of M ( √1n RCn , f ). However if µ 6= 0, then it does. See Remark 7.2.1. In the next theorem we assume µ = 0. Theorem 7.2.1 (Bose et al. (2011c)). Let {xn } be the two-sided moving average process defined in (7.1) and which satisfies Assumption 7.1.1. Then M ( √1n RCn , f ) − dq

D

→ Λ, cq √ where q = q(n) = b n−1 ln q and cq = √1 2 c, dq = 2

ln q

. The same result

continues to hold for M ( √1n Cn , f ). We need the following lemma. This lemma is an approximation result which is a stronger version of Theorem 3 of Walker (1965). We will first use this result in the proof of Theorem 7.2.1 but not in full force. We will again use it in Section 7.4. Lemma 7.2.2. Let {xn } be a two-sided moving average process as defined in (7.1) and which satisfies Assumption 7.1.1. Then I (ω ) √ x,n k maxn − Iε,n (ωk ) = op (n−1/4 ln n), 1≤k< 2 2πf (ωk ) where Ix,n (ωk ) =

n−1 n−1 1 X 1 X itωk 2 2πk | xt eitωk |2 , Iε,n (ωk ) = | εt e | , and ωk = . n t=0 n t=0 n

Proof. First observe that minω∈[0,2π] f (ω) > α > 0. Now for any r, |

r X

εt eiωt |2

=

r X

r−|s|

eiωs

X

s=−r

t=1



r X s=−r

εt εt+|s|

t=1 r−|s|

|

X t=1

εt εt+|s| |.

Reverse circulant and circulant

101

Hence E



max |

0≤ω≤π

r X

εt eiωt |2



≤ E

t=1

r X

r−1 r−s X   X 1/2 ε2t + 2 E( εt εt+s )2

t=1

= r+2

s=1

r−1 X s=1 Z r

≤ r+2

t=1

(r − s)1/2 x1/2 dx

1

≤ Kr3/2 ,

(7.2)

where K is a constant independent of r. So E



max |

0≤ω≤π

r X

 εt eiωt | ≤ K 1/2 r3/4 .

(7.3)

t=1

Note that (7.3) continues to hold if the limits of summation for t are replaced by 1 + p and r + p, where p is an arbitrary (positive or negative) integer. Let Jx,n

n−1 n−1 1 X 1 X iωt √ xt eiωt , Jε,n = √ εt e , n t=0 n t=0

=

Rn (ω)

= Jx,n (ω) − A(ω)Jε,n (ω),

Tn (ω)

= Ix,n (ω) − |A(ω)|2 Iε,n (ω), and ∞ X = aj eiωj .

A(ω)

j=−∞

Then it is easy to see that 2πf (ω) = |A(ω)|2 , and Tn (ω)

= =

|Rn (ω) + A(ω)Jε,n (ω)|2 − |A(ω)|2 Iε,n (ω) ¯ J¯ε,n (ω) + R ¯ n (ω)A(ω)Jε,n (ω) + |Rn (ω)|2 . Rn (ω)A(ω)

Now Rn (ω)

=

Jx,n (ω) − A(ω)Jε,n (ω)

=

n−1 1 X √ n j=0

=

∞ n−1 X X  n−1  1 X √ at eiωt εj−t eiω(j−t) − j eiωj n t=−∞ j=0 j=0

=

∞ 1 X √ at eiωt Zn,t (ω), say. n t=−∞

∞ X

∞ n−1 X  1 X at εj−t eiωj − √ at eiωt εj eiωj n t=−∞ t=−∞ j=0

102

Maximum of scaled eigenvalues: dependent input

Observe that |Zn,0 (ω)| = 0 and  −1 n−1 X X     | εl eiωj | + | εj eiωj |,    j=−t j=n−t    −t−1  n−1−t X X | εj eiωj | + | εj eiωj |, |Zn,t (ω)| ≤   j=n l=0    n−1−t n−1  X X   |  εj eiωj | + | εj eiωj |,   −t

1≤t 0. Consider n = 2m + 1 for simplicity. For n = 2m the calculations are similar. m

λk

p

m

Ak X 2πkt Bk X 2πkt − 2√ εt cos( ) + 2√ εt sin( ) = Yn,k , n n n n 2πf (ωk ) t=1 t=1

where Yn,k

=

Uk,j

=

∞ X   1 2πkj 2πkj aj cos Uk,j − sin Vk,j , √ p n n n 2πf (ωk ) j=−∞ m X 

εt−j cos

2πk(t − j) 2πkt  − εt cos , and n n

εt−j sin

2πkt  2πk(t − j) − εt sin . n n

t=1

Vk,j

m X 

=

t=1

Now using (7.2), we get 2 E{max Uk,j } k

 ≤

2 E{max Vk,j }≤



k

4K|j|3/2 4Km3/2

if if

|j| < m |j| ≥ m, and

(7.10)

4K|j|3/2 4Km3/2

if if

|j| < m |j| ≥ m.

(7.11)

Therefore E{max |Yn,k |} k

∞   1 1 X √ |aj | E{max |Uk,j |} + E{max |Vk,j |} k k 2πα n j=−∞







X  2K 1/2 1  X √ √ |aj ||j|3/4 + |aj |m3/4 2πα n |j| k we have Cov(λk,N , λ √k0 ,N ) = νk,k0 , where σk and νk,k0 are defined in (7.17). Let xq = aq x + bq ≈ 2 ln q. By Bonferroni inequalities we have for j > 1, 2j 2j−1 X X (−1)d−1 B˜d ≤ P(max1 λk,N > xq ) ≤ (−1)d−1 B˜d , k∈Ln

d=1

d=1

where B˜d =

X

P(λi1 ,N > xq , . . . λid ,N > xq ).

i1 ,i2 ,...,id ∈L1 n

all distinct

Observe that by the choice of pn as given in (7.13), we have 1 πpn 2n1/2+δ1 tan( )≈ → 0. n 2 πn Hence for some  > 0, for large n we have 1 −  < σk2 < 1 + , and for any k, k 0 ∈ L1n (or L2n ) we have |νk,k0 | → 0 as n → ∞. We shall use this simple observation very frequently in the proof. Next we make the following claim. Claim: x2 d

X ii ,i2 ,...,id ∈L1 n

all distinct

P(λi1 ,N > xq , . . . λid ,N

q d exp(− 2q ) √ > xq ) ≈ , for d ≥ 1. (7.21) d!xdq ( 2π)d

108

Maximum of scaled eigenvalues: dependent input

To avoid notational complications, we establish the above claim for d = 1 and d = 2, and indicate what changes are necessary for a higher dimension. σk2 → 0, and for x > 0, x2q

d=1: Using the fact that 

1−

1  exp(−x2 /2) exp(−x2 /2) √ √ ≤ P(N (0, 1) > x) ≤ , 2 x 2πx 2πx

it easily follows that X k∈L1n

Observe that P σk k∈L1n

√ 2πxq

√qpn 2πxq

X

P(N (0, 1) > xq /σk ) ≈



k∈L1n

x2

exp(− 2σq2 ) k

exp(−

x2q 2 )

=

x2q σk exp(− 2 ). 2σk 2πxq

x2q 1 1 X σk exp(− ( 2 − 1)) qpn 2 σk 1 k∈Ln

=

x2q Ak Bk 1 X πk σk exp(− 2 tan( )). qpn 2σ n n k 1 k∈Ln

The last term above goes to 1. Since pn ≈ 1, the claim is proved for d = 1. d=2: We shall use Lemma 7.3.4 for this case. Without loss of generality assume that σk2 > σk20 . Let α = (α1 , α2 ), where α = ~1V −1 and   σk2 νk,k0 V = . νk,k0 σk20  2  σ −ν 0 σ 2 −ν 0 Hence (α1 , α2 ) = k0 |V |k,k , k |V k,k , where |V | is the determinant of V . For | any 0 <  < 1, it easily follows that αi > 1− |V | for large n and for i = 1, 2. Hence, from Lemma 7.3.4 it follows that as n → ∞, X

P(λk,N > xq , λk0 ,N > xq ) ≈

k,k0 ∈L1n

X k,k0 ∈L1n



1 p

exp(− 12 x2q~1V −1~1T ) . α1 α2 x2q |V |

Now denote ψk,k0

1 h Ak B k πk Ak 0 B k 0 πk 0 − tan( ) − tan( ) |V | n n n n i Ak B k πk Ak0 Bk0 πk 0 2 + tan( ) tan( ) − 2νk,k0 + 2νk,k 0 , n n n n

=

and observe that |x2q ψk,k0 | ≤ C

x2q πpn tan( ) → 0 as n → ∞. n 2

Symmetric circulant P √ k,k0 ∈L1 n



109

1 |V |α1 α2 x2q

exp(− 12 x2q~1V −1~1T )

q 2 exp(−x2q ) 2!x2q 2π

=

=



2 q2 2 q2 2 q2

1

X p k,k0 ∈L1n

X k,k0 ∈L1n

X k,k0 ∈L1n

 1 exp − x2q (α1 + α2 ) + x2q 2 |V |α1 α2

 x2q |V |3/2 exp − (α1 + α2 − 2) (σk20 − νk,k0 )(σk2 − νk,k0 ) 2 x2q |V |3/2 exp(− ψk,k0 ) → 1 as n → ∞ and as  → 0. (1 − )2 2

A lower bound can be obtained similarly to verify the claim for d = 2. d > 2 : The probability inside the sum in claim (7.21) is P(N (0, Vn ) ∈ En ) where En = {(y1 , y2 , . . . , yd )T : yi > xq , i = 1, 2 . . . , d}, and Vn denotes the covariance matrix {Vn (s, t)}ds,t=1 with Vn (s, s) = σi2s and for s 6= t, Vn (s, t) = νis it , where σis , νis it are as in (7.17). Without loss of generality assume that σi1 ≥ σi2 ≥ · · · ≥ σid , since we can always permute the original vector to achieve this, and the covariance matrix changes accordingly. Note that kVn − Id k∞ → 0 as n → ∞, P∞ j ~ where kAk∞ = max |ai,j |. As Vn−1 = j=0 (Id − Vn ) , we have α = 1 + P∞ j j ~ j=1 1(Id − Vn ) . Now since kId − Vn k∞ → 0, we have k(Id − Vn ) k∞ → 0 and hence elements of (Id −Vn )j go to zero for all j. So we get that αi ∈ (1−, 1+) for i = 1, 2, . . . , d and 0 <  < 1. Hence, we can again apply Lemma 7.3.4. For further calculations it is enough to observe that for x = (x1 , x2 , . . . , xd ) 6= 0, d xVn xT 1 X 2 1 πik 1 =1+ 2 xk Aik Bik tan( )+ 2 |x|2 |x| n n |x| k=1

X

xk xk0 νik ,ik0 .

1≤k6=k0 ≤d

Since the last two terms go to zero, given any  > 0, we get for large n, 1 −  ≤ λmin (Vn ) ≤ λmax (Vn ) ≤ 1 + , where λmin (Vn ) and λmax (Vn ) denote the minimum and the maximum eigenvalues of Vn . The rest of the calculation is similar to the d = 2 case. This proves the claim completely. Back to the proof of (7.18). Using the fact that an and bn are normalizing constants for the maxima of i.i.d. standard normal variables, it follows that x2 d

q d exp(− 2q ) 1 √ ≈ exp(−dx). d d d! d!xq ( 2π)

110

Maximum of scaled eigenvalues: dependent input

So from the Bonferroni inequalities, and observing exp(− exp(−x)) = P∞ (−1)d d=0 d! exp(−dx), it follows that P(max1 λk,N > xq ) → exp(− exp(−x)), k∈Ln

proving (7.18) completely. Proof of (7.20): We first observe that n/2 X

P(N (0, 1) > xq /σk ) ≤

k=npn /2

n xq (1 − pn ) P(N (0, 1) > √ ), 2 2

since σk2 ≤ 2 for k ≤ n/2. Expanding the expressions for an and bn , we get x2q 1 ln q 1 x = (aq x + bq )2 = o(1) + − ln(4π ln q) + . 4 4 2 4 2 Now x2

n(1 − pn ) n(1 − pn ) exp(− 4q ) P(N (0, 2) > xq ) ≤ C 2 2 xq n(1 − pn ) ≈ Cn−1/2 √ 2 ln q 1 ≈C δ √ → 0 as n → ∞. n 1 ln q Breaking up the set L1 = {k : 1 ≤ k ≤ b n2 c and k is even} into L1n and ˜ 1n = {k : bnpn /2c < k ≤ b n c and k is even}, we get L 2 P(max λk,N > xq ) = P(max(max1 λk,N , max λk,N ) > xq ) k∈L1

k∈Ln

˜1 k∈L n

≤ P(max1 λk,N > xq ) + P(max λk,N > xq ) k∈Ln

˜1 k∈L n n

≤ P(max1 λk,N > xq ) + k∈Ln

b2c X

P(N (0, σk2 ) > xq )

t=bnpn /2c

= P(max1 λk,N > xq ) + o(1). k∈Ln

Hence the upper bound is obtained. The lower bound easily follows from (7.18). Similar calculations for the set L2 = {k : 1 ≤ k < b n2 c and k is odd} can be done. To complete the proof, observe that P

max1≤k1+ ≤P >1+ 2 ln n 2 ln n maxk∈L2 λk,N √ + P( > 1 + ), 2 ln n

and both these probabilities go to zero as n → ∞.

Symmetric circulant

111

Remark 7.3.1. By calculations similar to those given above, it can be shown that for σ 2 = n−c where c > 0, X

P((1 + σ 2 )1/2 λi1 ,N > xq , . . . , (1 + σ 2 )1/2 λid ,N > xq ) ≤

ii ,...,id ∈L1 n

Cd , (7.22) d!

all distinct

for some constant K > 0. This will be used in the proof of Theorem 7.3.3. Now we prove Theorem 7.3.3 using Lemma 7.3.5. Proof of Theorem 7.3.3. We shall prove only (7.15). Proof of (7.16) is similar. Again for simplicity we assume that n = 2m + 1. We break the proof into four steps. Step 1: Truncation: Define ∞ X

1

ε˜t = εt I(|εt | ≤ n 2+δ ), εt = ε˜t − E ε˜t , x ˜t =

aj ε˜t−j , xt =

j=−∞

∞ X

aj εt−j ,

j=−∞

m m X X 1 2πkt 1 2πkt λk,˜x = √ [˜ x0 + 2 x ˜t cos ], λk,¯x = √ [x0 + 2 xt cos ]. n n n n t=1 t=1

Claim: To prove (7.15), it is enough to show that maxk∈L1n λk,ε − bq D → Λ, aq

(7.23)

where √ λk,ε =

m m 2Ak ε0 2Ak X 2πkt 2Bk X 2πkt √ + √ εt cos( )− √ εt sin( ). n n n n t=1 n t=1

To prove the claim first note that λk,¯x = λk,˜x . Choose η such that ( 12 −

1 2+δ

(7.24)

− η) > 0 and observe that

nη E[ maxn |λk,¯x − λk,x |] = nη E[ maxn |λk,˜x − λk,x |] 1≤k≤b 2 c

1≤k≤b 2 c





m X ∞ X

2 n1/2−η 2 n1/2−η Z +

t=0 j=−∞ m X ∞ X

1

|aj |E(|εt−j |I(|εt−j | > n 2+δ ))  1 1 |aj | n 2+δ P(|εt−j | > n 2+δ )

t=0 j=−∞



1 n 2+δ

 P(|εt−j | > u)du

= I1 + I2 , say,

112

Maximum of scaled eigenvalues: dependent input

and I1 = ≤

m X ∞ X

2 n1/2−η 2 n1/2−η E(|ε0 | n

1

t=0 j=−∞ m X ∞ X t=0 j=−∞ ∞ X

2+δ



1

|aj |n 2+δ P(|εt−j | > n 2+δ )

)

1 1 2 − 2+δ −η

1

|aj |n 2+δ

1 E(|εt−j |2+δ ) n

|aj | → 0, as n → ∞.

j=−∞

Similarly, I2 → 0 as n → ∞. Hence  maxn |λk,¯x − λk,x | = op n−η .

1≤k≤b 2 c

(7.25)

Also from Lemma 7.3.2 we have m m λk,¯x 2Ak X 2πkt 2Bk X 2πkt max1 p −√ εt cos( )+ √ εt sin( ) k∈Ln aq n n naq t=1 naq t=1 2πf (ωk ) √ ln n = op ( δ1 ). (7.26) n Now (7.25) and (7.26) prove the claim completely. Step 2: Normal Approximation: This is an intermediate step to approximate λk,ε by λk,N , where λk,N is defined in Lemma 7.3.5. Define √ m 2Ak 2Ak X 2πkt λk,+σN = √ (ε0 + σN0 ) + √ (εt + σNt ) cos( ) n n n t=1 m 2Bk X 2πkt − √ (εt + σNt ) sin( ). n n t=1

Claim: P(max λk,+σN > xq ) − P(max (1 + σ 2 )1/2 λk,N > xq ) → 0, 1 1 k∈Ln

(7.27)

k∈Ln

where λk,N is defined in Lemma 7.3.5. Proof of this claim is similar to the proof of Lemma 5.2.3. It uses Lemma 5.2.2. We omit the details. Step 3: In this step we shall prove (7.23). First observe the following: lim P(max1 λk,ε+σN > xq ) = Λ(x).

n→∞

(7.28)

k∈Ln

Proof of this observation is similar to Step 3 of the proof of Theorem 5.2.4. Here we skip the details. Now note that max maxk∈L1n λk,ε − bq σ maxk∈L1n λk,N P k∈L1n λk,ε+σN − bq − → 0. ≤ aq aq aq

Symmetric circulant

113

Now using (7.28) it follows that maxk∈L1n λk,ε − bq D → Λ. aq This completes the proof of Step 4, and hence of (7.23). As a consequence (7.15) is proven completely. This completes the proof of the theorem. Theorem 7.3.6 (Bose et al. (2011c)). If {λk,x } are the eigenvalues of then under the assumptions of Theorem 7.3.3,

√1 SCn n

λ max1≤k≤b n2 c √ k,x 2πk 2πf (wk ) P √ → 1, where ωk = . n ln n

Proof. As before we assume that n = 2m + 1. It is now easy to see from the truncation part of Theorem 7.3.3 and Lemma 7.3.2 that it is enough to show that, max1≤k≤b n2 c λk,ε P √ → 1, ln n where √ m m 2Ak ε0 2Ak X 2πkt 2Bk X 2πkt λk,ε = √ + √ εt cos( )− √ εt sin( ), n n n n t=1 n t=1 and εt = εt I(|εt | ≤ n1/s ) − E[εt I(|εt | ≤ n1/s )]. The steps are the same as the steps used to prove (7.20) in Lemma 7.3.5. Observe from there that, to complete the proof, it is enough to show that n

b2c X

P(λk,ε > xq ) → 0 as n → ∞.

(7.29)

k=bnpn /2c+1

Let √ n 2πkt 2πkt m = b c, v1 (0) = 2Ak , and v1 (t) = 2Ak cos( ) − 2Bk sin( ). 2 n n Since {εt v1 (t)} is a sequence of bounded independent mean zero random variables, by applying Bernstein’s inequality (see Section 11.2 of Appendix) we get m

m X

1 X P( √ εt v1 (t) > xq ) ≤ m t=0

P(|

=

P(|



2 exp −

εt v1 (t)| >



mxq )

t=0 m X

xq εt v1 (t)| > m √ ) m t=0 mx2q 2

Pm

t=0

Var(εt v1 (t)) +

xq 2 1/s m √ 3 Cn m



.

114

Maximum of scaled eigenvalues: dependent input

Now observe that D

:= ≥ = ≥

Therefore P(|

mx2q 2

x

Pm

t=0

q Var(εt v1 (t)) + 23 Cn1/s m √m

x2q 4 n1

Pm

t=0

Var(εt v1 (t)) + 43 Cn1/s−1/2 xq x2q

4(1 + 4(1 + m X

Cxq Ak Bk 4 tan πk n n ) + 3 n1/2−1/s x2q x2q ≥ . 2 8 π ) + o(1)

εt v1 (t)| >



mxq ) ≤ 2 exp(−

t=0

x2q ), 8

and hence n

b2c X t=bnpn /2c

m x2q 1 X P( √ | εt v1 (t)| > xq ) ≤ n(1 − pn ) exp(− ) 4 n t=0



C → 0. nδ1 (ln n)1/4

This completes the proof of (7.29), and hence the proof of the theorem. Remark 7.3.2. In Theorem 7.3.3 we were unable to consider the convergence over L1n ∪ L2n . It is not clear if the maxima over the two subsets are asymptotically independent and hence it is not clear if we would continue to obtain the same limit. Observe that for example, if k is odd and k 0 is even, then −Dk,k0 π(k + k 0 ) Ek,k0 π(k 0 − k) cot − cot , n 2n n 2n where Dk,k0 and Ek,k0 are as defined in (7.17). So for these covariance terms to tend to zero, we have to truncate the index set from below appropriately. For instance, if the inputs are normal, we may consider the set L0 = {(k, k 0 ) : 1 < k < bnpn /2c, k + bnqn /2c < k 0 < bnpn /2c} with qn → 0, and approximate it by the i.i.d. counterparts since supk,k0 ∈L0 | Cov(λk,x , λk0 ,x )| → 0 as n → ∞. The complication comes when dealing with the complement of L0 since it no longer has small cardinality. Cov(λk,x , λk0 ,x ) =

7.4

k-circulant

First recall the eigenvalues of the k-circulant matrix Ak,n from Section 1.4 of Chapter 1. For any positive integers k and n, let p1 < p2 < · · · < pc be all

k-circulant

115

their common prime factors so that n = n0

c Y

pβq q and k = k 0

q=1

c Y

q pα q .

q=1

Here αq , βq ≥ 1 and n0 , k 0 , pq are pairwise relatively prime. Then the characteristic polynomial of Ak,n is given by χ (Ak,n ) = λn−n

0

`−1 Y

(λnj − yj ) ,

(7.30)

j=0

where yj , nj are as defined in Section 1.4.

7.4.1

k-circulant for n = k 2 + 1

We first consider the k-circulant matrix with n = k 2 + 1. In this case, clearly n0 = n and k 0 = k. From Lemma 4.5.2(a), g1 = 4 and the eigenvalue partition of {0, 1, 2, . . . , n − 1} contains exactly q = b n4 c sets of size 4, say {P1 , P2 , . . . , Pb n4 c }. Since each Pi is self-conjugate, we can find a set Ai ⊂ Pi of size 2 such that Pj = {x : x ∈ Aj or n − x ∈ Aj }. Since we shall be using the bounds from Lemma 7.2.2, we define a few relevant notations for convenience. Define n n 2 1 X 1 X iωj l 2 iωj l Ix,n (ωj ) = xl e , Iε,n (ωj ) = εl e , n n l=1

l=1

n n 1 X 1 X iωj l Jx,n (ω) = √ xl eiωj l , Jε,n (ω) = √ εl e , n n l=1 l=1 Y Y βx,n (t) = Ix,n (ωj ), βε,n (t) = Iε,n (ωj ),

A(ωj ) =

j∈At ∞ X

j∈At

at eiωj t , Tn (ωj ) = Ix,n (ωj ) − |A(ωj )|2 Iε,n (ωj ),

t=−∞

1/4 βx,n (t) 1 βex,n (t) = Q , and M ( √ Ak,n , f ) = max βex,n (t) . 1≤t≤q n j∈At 2πf (ωj ) Theorem 7.4.1 (Bose et al. (2011c)). Let {xn } be the two-sided moving average process defined in (7.1), and which satisfies Assumption 7.1.1. Then for n = k 2 + 1, as n → ∞, M ( √1n Ak,n , f ) − dq cq

D

→ Λ,

where q = q(n) = b n4 c and cq , dq are same as in Theorem 6.4.1 with g = 2.

116

Maximum of scaled eigenvalues: dependent input

Proof. Observe that βx,n (t) βex,n (t) := Q = βε,n (t) + Rn (t), j∈At 2πf (ωj ) where Rn (t) = Iε,n (ωt1 )

Tn (ωt2 ) Tn (ωt1 ) Tn (ωt1 ) Tn (ωt2 ) + Iε,n (ωt2 ) + . 2πf (ωt2 ) 2πf (ωt1 ) 2πf (ωt1 ) 2πf (ωt2 )

Let q = b n4 c. Recall that 1/4 1 M ( √ Ak,n , f ) = max βex,n (t) . 1≤t≤q n

(7.31)

We shall show that max1≤t≤q |βex,n (t) − βε,n (t)| → 0 in probability. Now |βex,n (t) − βε,n (t)| ≤ |Iε,n (ωt1 )

Tn (ωt2 ) Tn (ωt1 ) Tn (ωt1 ) Tn (ωt2 ) | + |Iε,n (ωt2 ) |+| |. 2πf (ωt2 ) 2πf (ωt1 ) 2πf (ωt1 ) 2πf (ωt2 )

Note that max |Iε,n (ωt1 )

1≤t≤q

Tn (ωt2 ) 1 |≤ max |Iε,n (ωt )| maxn |Tn (ωt )|. 1≤t< 2 2πf (ωt2 ) 2πα 1≤t< n2

From (7.7) we get max |Tn (ωt )| = Op (n−1/4 (ln n)1/2 ).

1≤t≤n

Therefore max |Iε,n (ωt1 )

1≤t≤q

and max |

1≤t≤q

Tn (ωt2 ) | = Op (n−1/4 (ln n)3/2 ), 2πf (ωt2 )

Tn (ωt1 ) Tn (ωt2 ) | = Op (n−1/2 ln n). 2πf (ωt1 ) 2πf (ωt2 )

Combining all these we have max |Rn (t)| = max |βex,n (t) − βε,n (t)| = Op (n−1/4 (ln n)3/2 ).

1≤t≤q

1≤t≤q

Note that 1/4 1/4 1/4 βε,n (t) − |Rn (t)|1/4 ≤ βex,n (t) ≤ βε,n (t) + |Rn (t)|1/4 , and hence   max βex,n (t) 1/4 − max βε,n (t) 1/4 = Op (n−1/16 (ln n)3/8 ). 1≤t≤q

1≤t≤q

(7.32)

k-circulant

117

From Theorem 6.4.1, we know that max1≤t≤q βε,n (t) cq

1/4

− dq

D

→ Λ.

(7.33)

Hence, from (7.31), (7.32) and (7.33) it follows that M ( √1n Ak,n , f ) − dq cq

7.4.2

D

→ Λ.

k-circulant for n = k g + 1, g > 2

Now we extend Theorem 7.4.1 for n = k g + 1 where g > 2. Here, we use a slightly different notation to use the developments of Section 6.2 of Chapter 6. Define 1/2l βx,j (t) 1 βex,j (t) := Q and M ( √ Ak,n , f ) = max max βex,l (j) . j:P ∈J l 2πf (ω ) n j l l l∈At Theorem 7.4.2 (Bose et al. (2011c)). Let {xn } be the two-sided moving average process defined in (7.1) and which satisfies Assumption 7.1.1. Then for n = k g + 1, g > 2, as n → ∞, M ( √1n Ak,n , f ) − dq cq where q = q(n) =

n 2g ,

D

→ Λ,

and cq , dq are as defined in Theorem 6.4.1.

Proof. The line of argument is similar to that in the case of g = 2. To prove the result we use following two facts: (i) From (7.7), max |Tn (ωt )| = op (n−1/4 (ln n)1/2 ).

1≤t< n 2

(ii) From Davis and Mikosch (1999) (see Theorem 11.3.2 in Appendix), max |Iε,n (ωt )| = Op (ln n) and

1≤t< n 2

max |Ix,n (ωt )| = Op (ln n).

1≤t< n 2

Using these, and inequality (6.28) of Chapter 6, it is easy to see that, for some δ0 > 0, max max βex,l (t) − βε,l (t) = op (n−δ0 ). (7.34) l

j:Pj ∈Jl

Now the results follow from Theorem 6.4.1 and (7.34).

118

Maximum of scaled eigenvalues: dependent input

7.5

Exercises

1. Prove the results mentioned in Remark 7.2.1. 2. Complete the proof of (7.10). 3. Prove (7.19) of Lemma 7.3.5. 4. Prove (7.24). 5. Prove (7.28) in the proof of Theorem 7.3.3. 6. Show that under the conditions of Theorem 7.3.6, |λ | max1≤k≤b n2 c √ k,x 2πf (ωk ) P √ → 1. ln n

Hint: The proof is similar to the proof of Theorem 7.3.6, with the normalizing constants changed suitably. 7. If we include λ0 in the definition of M ( √1n SCn , f ), that is, if M ( √1n SCn , f ) = max0≤k≤b n2 c √

|λk | 2πf (ωk )

, then show that under As-

sumption 7.1.1 except that mean µ of {εi } is now non-zero, √ D 1 M ( √ SCn , f ) − |µ| n → N (0, 2). n

8 Poisson convergence

So far we have studied the asymptotic behavior of circulant-type random matrices through the bulk distribution of the eigenvalues (LSD) and the distribution of the extremes (spectral radius). It seems then natural to study the joint behavior of the eigenvalues via the point process approach. The most appropriate point process for circulant type matrices is the one λ −b based on the points {(ωk , kaq q ), 0 ≤ k < n} where {λk } are the appropriately labelled eigenvalues, {ωk = 2πk n } are the Fourier frequencies, and aq , bq are appropriate scaling and centering constants that appeared in the weak convergence of the spectral radius in Chapter 5. In this chapter we study their asymptotic behavior in the case of i.i.d. light-tailed entries. In each case the limit measure turns out to be Poisson. In particular, this yields the distributional convergence of any k-upper ordered eigenvalues of these matrices, and also yields the joint distributional convergence of any fixed number of spacings of the upper ordered eigenvalues.

8.1

Point process

Let E be any set equipped with an appropriate sigma-algebra E. For our purposes, usually E = ([0, π]×(−∞, ∞]) and is equipped with the Borel sigmaalgebra. Let M (E) be the space of all point measures on E, endowed with the topology of vague convergence. Any point process on E is a measurable map N : (Ω, F, P) → (M (E), M(E)). It is said to be simple if P(N ({x}) ≤ 1, x ∈ E) = 1. V

Let → denote the convergence in distribution relative to the vague topology (see Section 11.2 of Appendix). The following result provides a criterion for convergence of point processes. Its proof is available in Kallenberg (1986), Resnick (1987) and Embrechts et al. (1997). This lemma will play a key role in the proofs of our results. 119

120

Poisson convergence

Lemma 8.1.1. Let {Nn } be a sequence of point processes and N be a simple point process on a complete separable metric space E. Let T be a basis of relatively compact open sets such that T is closed under finite unions and intersections, and for I ∈ T , P[N (∂I) = 0] = 1. If lim P[Nn (I) = 0] = n→∞

P[N (I) = 0] and

V

lim E[Nn (I)] = E[N (I)] < ∞, then Nn → N in M (E).

n→∞

Very few studies exist on the point process of eigenvalues of general random matrices. As an example, Soshnikov (2004) considered the point process based on the positive eigenvalues of an appropriately scaled Wigner matrix with entries {xij } which satisfy P(|xij | > x) = h(x)x−α , where h is a slowly varying function at infinity and 0 < α < 2. He showed that it converges to an inhomogeneous Poisson random point process. A similar result was proved for sample covariance matrices with Cauchy entries in Soshnikov (2006). These results on the Wigner and the sample covariance matrices were extended in Auffinger et al. (2009) for 2 ≤ α < 4.

8.2

Reverse circulant

Earlier we used the labelling λk , λn,k etc. for the eigenvalues. Now it will be convenient to re-label them as λn,x (ωk ) for the input sequence {xi }. Hence the eigenvalues of √1n RCn with the input sequence {xi } (see Section 1.3 of Chapter 1) are now written as: n−1 1 X =√ xt n t=0 n−1 1 X λn,x (ωn/2 ) = √ (−1)t xt , if n is even n t=0 q n−1 λn,x (ωk ) = −λn,x (ωn−k ) = In,x (ωk ), 1 ≤ k ≤ b c, 2

    λn,x (ω0 )              where

In,x (ωk ) =

(8.1)

n−1 1 X 2πk | xt e−itωk |2 and ωk = . n t=0 n

Since the eigenvalues occur in pairs with opposite signs (except when n is odd), it suffices to define our point process based on the points λ (ω )−b (ωk , n,x aqk q ) for k = 0, 1, 2, . . . , b n2 c. Let a (·) denote the point measure which gives unit mass to any set containing a. Motivated by the scaling and centering constants in Chapter 5, we define ηn (·) =

q X j=0



ωj ,

λn,x (ωj )−bq aq

 (·),

(8.2)

Reverse circulant

121

where q = q(n) = b n2 c, aq = √1 2

ln q

and bq =



ln q. Throughout this chapter,

the input sequence is assumed to satisfy the following assumption. Assumption 8.2.1. {xi } are i.i.d., E[x0 ] = 0, E[x0 ]2 = 1, and E |x0 |s < ∞ for some s > 2. Theorem 8.2.1 (Bose et al. (2011b)). Suppose {xi } satisfies Assumption 8.2.1. Let ηn be as in (8.2) and let η be a Poisson point process on [0, π] × V (−∞, ∞] with intensity function λ(t, x) = π −1 e−x . Then ηn → η. The main ideas of the proof are germane in the normal approximation methods that we have already encountered. When the input sequence is i.i.d. normal, the positive eigenvalues of RCn are independent and each is distributed as the square root of exponential (see Lemma 3.3.2), and the convergence follows immediately. When the entries are not normal, the sophisticated normal approximation given in Lemma 6.3.2 allows us to replace the variables by appropriate normal variables after requisite truncation. Let xt = xt I(|xt | < n1/s ) − E[xt I(|xt | < n1/s )]. Let {Nt } be a sequence of i.i.d. N (0, 1) random variables, and φCd be the density of the d-dimensional centered normal random vector with covariance matrix Cd . For d ≥ 1, and distinct Fourier frequencies ωi1 , . . . , ωid , let vd (t) = (cos(ωi1 t), sin(ωi2 t), . . . , cos(ωid t), sin(ωid t))0 .

(8.3)

Now consider RCn with the input sequences {¯ xt + σn Nt } and {¯ xt }, where σn will be chosen later appropriately. Let ηn∗ and η¯n be the respective point processes. For technical convenience, while defining ηn∗ , we consider the distinct eigenvalues and leave out λ0 . Then we proceed to show that as n → ∞, (i) ηn∗ converges to η, and (ii) ηn∗ and ηn are close in probability via η¯n . This is essentially the program that is carried out for other matrices also. Finally, the dependent case is reduced to the independent case by an appropriate approximation result such as Lemma 7.2.2. Proof of Theorem 8.2.1. The proof is done in two steps. V

Step 1: We first show that ηn∗ → η where ηn∗ (·) =

q X j=1



ωj ,

λn,¯ x+σn N (ωj )−bq aq

and λn,¯x+σn N (ωk ) are the eigenvalues of

√1 RCn n

 (·),

with entries {¯ x t + σ n Nt }

122

Poisson convergence

where σn2 = n−c and c is as in Lemma 6.3.2. First note that if we define the set q Adq = {(x1 , y1 , . . . , xd , yd )0 : x2i + yi2 > 2zq }, where zq = aq x + bq , it easily follows that P (λn,¯x+σn N (ωi1 ) > zq , . . . , λn,¯x+σn N (ωid ) > zq ) n  1 X = P 21/2 √ (xt + σn Nt )vd (t) ∈ Adq n t=1 Z = φ(1+σn2 )I2d (x)(1 + o(1))dx Ad q

=

q −d exp(−dx)(1 + o(1)).

(8.4)

Since the limit process η is simple, by Lemma 8.1.1, it suffices to show that E ηn∗ ((a, b] × (x, y]) → E η((a, b] × (x, y]) =

b − a −x (e − e−y ), π

(8.5)

for all 0 ≤ a < b ≤ π and x < y, and for all k ≥ 1, P(ηn∗ ((a1 , b1 ] × R1 ) = 0, . . . , ηn∗ ((ak , bk ] × Rk ) = 0) → P(η((a1 , b1 ] × R1 ) = 0, . . . , η((ak , bk ] × Rk ) = 0),

(8.6)

where 0 ≤ a1 < b1 < · · · < ak < bk ≤ π, and R1 , . . . , Rk are bounded Borel sets, each of which consists of a finite union of intervals of (−∞, ∞]. Proof of (8.5): This relation is established as follows: X E ηn∗ ((a, b] × (x, y]) = P(aq x + bq < λn,¯x+σn N (ωj ) ≤ aq y + bq ) ωj ∈(a,b]

(by (8.4))



(b − a)n −1 −x (b − a) −x q (e − e−y ) → (e − e−y ). 2π π

Proof of (8.6): Set nj := #{i : ωi ∈ (aj , bj ]} ∼ n(bj − aj ). The complement of the event in (8.6) is the union of m = n1 + · · · + nk events, so that 1 − P(ηn∗ ((a1 , b1 ] × R1 ) = 0, . . . , ηn∗ ((ak , bk ] × Rk ) = 0) =

P

k [

[

j=1 ωi ∈(aj ,bj ]

{

 λn,¯x+σn N (ωi ) − bq ∈ Rj } . aq

(8.7)

Now for any choice of d distinct integers i1 , . . . , id ∈ {1, . . . , q} and integers j1 , . . . , jd ∈ {1, . . . , k}, we have from (8.4) that d d \ Y  λn,¯x+σn N (ωir ) − bq −d P { ∈ Rjr } = q Λ(Rjr )(1 + o(1)), aq r=1 r=1

(8.8)

Reverse circulant 123 R where Λ(B) = B e−x dx, B bounded measurable, and (8.8) holds uniformly over all d-tuples i1 , . . . , id . Using this and an elementary counting argument, the sum of the probabilities of all collections of d distinct sets from the m sets that comprise the union in (8.7) equals, Sd 

X

=

(u1 ,...,uk ),

   n1 nk −u1 u1 ··· q λ (R1 ) · · · q −uk λuk (Rk )(1 + o(1)) u1 uk

u1 +···+uk =d

1 ((b1 − a1 )λ(R1 ))u1 · · · ((bk − ak )λ(Rk ))uk (1 + o(1)) u1 ! · · · uk !π d

X

=

(u1 ,...,uk ),

u1 +···+uk =d

→ (d!)−1 π −d ((b1 − a1 )λ(R1 ) + · · · + (bk − ak )λ(Rk ))d . Now it follows that 2s X

(−1)j−1 Sj

n→∞

−→

j=1

2s X (−1)j−1 j=1

s→∞

−→

j!π j

1 − exp −

((b1 − a1 )λ(R1 ) + . . . + (bk − ak )λ(Rk ))j

k X

 (bj − aj )π −1 λ(Rj ) ,

j=1

which by Bonferroni inequality and (8.7), proves (8.6). Step 2: It remains to show that ηn∗ is close to ηn . Define η¯n (·) =

q X j=1



λn,¯ x (ωj )−bq ωj , aq

 (·) and η 0 (·) = n

q X j=1



ωj ,

λn,x (ωj )−bq aq

 (·).

It then suffices to show that (see Theorem 4.2 of Kallenberg (1986)) P

η¯n − ηn∗

−→

η¯n − ηn0

P

ηn0

− ηn

0,

−→ P

−→

(8.9)

0, and

(8.10)

0.

(8.11)

For this, it is enough to show that for any continuous function f on [0, π] × (−∞, ∞] with compact support, P

P

P

η¯n (f ) − ηn∗ (f ) −→ 0, η¯n (f ) − ηn0 (f ) −→ 0, and ηn0 (f ) − ηn (f ) −→ 0, R where η(f ) denotes f dη. Suppose that support of f ⊆ [0, π] × [K + γ0 , ∞), for some γ0 > 0 and K ∈ R. Define ω(γ) := sup{|f (t, x) − f (t, y)|; t ∈ [0, π], |x − y| ≤ γ}.

(8.12)

124

Poisson convergence

Note that, since f is uniformly continuous, ω(γ) → 0 as γ → 0. Proof of (8.9): Let An =



max |

j=1,...,q

λn,¯x+σn N (ωj ) λn,¯x (ωj ) − |≤γ . aq aq

Then we have for γ < γ0 , f (ωj , λn,¯x+σn N (ωj ) − bq ) − f (ωj , λn,¯x (ωj ) − bq ) aq aq ( λn,¯ x+σn N (ωj )−bq ω(γ) if >K aq ≤ λn,¯ x+σn N (ωj )−bq 0 if ≤ K. aq

(8.13)

Also note that 1 max |λn,¯x+σn N (ωj ) − λn,¯x (ωj )| aq 1≤j≤q n σn X 1 ≤ max | √ Nt eiωj t | aq 1≤j≤q n t=1 v u n n u1 X σn 2πkt 2 1 X 2πkt 2 ≤ max t Nt cos + Nt sin aq 1≤j≤q n t=1 n n t=1 n q σn 2 + X2 , ≤ max X1j 2j aq 1≤j≤q where {X1j , X2j ; 1 ≤ j ≤ q} are i.i.d. N (0, 1). Now, q σn 2 + X 2 = O (σ ln n). max X1j P n 2j aq 1≤j≤q Therefore limn→∞ P(Acn ) = 0. For any  > 0, choose γ > 0 so that γ < γ0 . Define Bn = {|¯ ηn (f ) − ηn∗ (f )| > }. Then lim sup P(Bn ) ≤ lim sup(P(Bn ∩ An ) + P(Acn )) n→∞

n→∞



lim sup P(ω(γ)ηn∗ ([0, π] × [K, ∞)) > ) + lim sup P(Acn )



lim sup E ηn∗ ([0, π] × [K, ∞))ω(γ)/



e−K ω(γ)/ → 0 as γ → 0.

n→∞ n→∞

n→∞

Reverse circulant

125

This completes the proof of (8.9). The proof of (8.10) is essentially as above and we omit it. Proof of (8.11): Finally for any  > 0, P(|ηn0 (f ) − ηn (f )| > )

λn,x (ω0 ) − bq )| > ) aq λn,x (ω0 ) − bq ≤ P( ≥ K) aq =

P(|f (0,

=

n−1 1 X P( √ xl > Kaq + bq ) → 0, as n → ∞. n l=0

The proof of Theorem 8.2.1 is now complete. Let λn,(q) ≤ · · · ≤ λn,(2) ≤ λn,(1) be the ordered eigenvalues. Then for any fixed k, the joint limit distribution of k-upper ordered eigenvalues as well as their spacings can be derived from Theorem 8.2.1. Corollary 8.2.2. Under the assumption of Theorem 8.2.1, (a) for any real numbers xk < · · · < x2 < x1 ,   λn,(1) − bq λn,(k) − bq P ≤ x1 , . . . , ≤ xk → P(Y(1) ≤ x1 , . . . , Y(k) ≤ xk ), aq aq where (Y(1) , . . . , Y(k) ) has the density exp(− exp(−xk ) − (x1 + · · · + xk−1 )).   λn,(i) −λn,(i−1) D (b) −→ (i−1 Ei )i=1,...,k , where {Ei } is a sequence of aq i=1,...,k

i.i.d. standard exponential random variables. Proof. The proof is similar to the proof of Theorem 4.2.8 of Embrechts et al. (1997). We just briefly sketch the steps. Let xk < · · · < x1 be real numbers, and write Ni,n = ηn ([0, π] × (xi , ∞)) for the number of exceedances of xi by λn,x (ωj )−bq , j = 1, . . . , q. Then aq P

 λn,(1) − bq λn,(k) − bq ≤ x1 , . . . , ≤ xk aq aq = P(N1,n = 0, N2,n ≤ 1, . . . , Nk,n ≤ k − 1) → P(N1 = 0, N2 ≤ 1, . . . , Nk ≤ k − 1),

where Ni = η([0, π] × (xi , ∞]). Let Zi = η([0, π] × (xi , xi−1 ]) with x0 = ∞. To calculate the above probability, it is enough to consider P(N1 = a1 , N2 = a1 + a2 , . . . , Nk = a1 + · · · + ak ), where ai ≥ 0. However, P(N1 = a1 , N2 = a1 + a2 , . . . , Nk = a1 + · · · + ak ) = =

P(Z1 = a1 , Z2 = a2 , . . . , Zk = ak ) (e−x1 )a1 (e−x2 − e−x1 )a2 (e−xk − e−xk−1 )ak −e−xk ··· e . a1 ! a2 ! ak !

126

Poisson convergence

This proves Part (a). Part (b) is an easy consequence of Part (a).

8.3

Symmetric circulant

The eigenvalues of √1n SCn (we now index them by {ωk }) are given by (see Section 1.2 of Chapter 1): (i) for n odd:  bn 2c  X   1    √ λ (ω ) = x + 2 xj n,x 0 0   n j=1 (8.14) bn 2c  X   1  n   xj cos(ωk j) , 1 ≤ k ≤ b c   λn,x (ωk ) = √n x0 + 2 2 j=1 (ii) for n even:    λn,x (ω0 ) =   λ (ω ) n,x k

=

√1 n

  P n2 −1 x0 + 2 j=1 xj + xn/2

√1 n

  P n2 −1 x0 + 2 j=1 xj cos(ωk j) + (−1)k xn/2 , 1 ≤ k ≤

n 2,

with λn,x (ωn−k ) = λn,x (ωk ) in both cases. Now define a sequence of point processes based on the points λ (ω )−b (ωj , n,x aqj q ) for k = 0, 1, . . . , q(= b n2 c), where λn,x (·) are as in (8.14). Note that we are not considering the eigenvalues λn−k for k = 1, . . . , b n2 c since λn,x (ωn−k ) = λn,x (ωk ) for k = 1, . . . , b n2 c. Define ηn (·) =

q X j=0



ωj ,

λn,x (ωj )−bq aq

 (·),

(8.15)

where bn = cn + an ln 2, an = (2 ln n)−1/2 and cn = (2 ln n)1/2 −

ln ln n + ln 4π . 2(2 ln n)1/2 (8.16)

Theorem 8.3.1 (Bose et al. (2011b)). Let {xt } be i.i.d. random variables V which satisfy Assumption 8.2.1. Then ηn → η, where η is a Poisson point process on [0, π] × (−∞, ∞] with intensity function λ(t, x) = π −1 e−x . Proof of Theorem 8.3.1. The proof is similar to that of Theorem 8.2.1. So

Symmetric circulant

127

we sketch it. Let ηn∗ be the point process based on (ωj , 1 ≤ j ≤ q, where

λ0n,¯ x+σn N (ωj )−bq ) aq

for

n

b2c X 1 √ 2πjt  n 0 λn,¯x+σn N (ωj ) = √ 2(¯ x0 +σn N0 )+2 (¯ xt +σn Nt ) cos , 0 ≤ j ≤ b c. n 2 n t=1 D

Then by verifying (8.5) and (8.6) it follows that ηn∗ → η. Now define the point processes, η¯n0 (·)

=

q X j=1



λ0 (ωj )−bq x ωj , n,¯ aq

ηn0 (·) =

 (·), η¯n (·) =

j=1

q X j=1

q X



ωj ,

λn,x (ωj )−bq aq



ωj ,

λn,¯ x (ωj )−bq aq

 (·),

 (·),

where n

λ0n,¯x (ωj )

b2c X 1 √ 2πjt  n =√ 2¯ x0 + 2 x ¯t cos , 0 ≤ j ≤ b c, n 2 n t=1

and {λn,¯x (ωj )} are as in (8.14) with xt replaced by x ¯t . It now suffices to show that P

P

P

P

η¯n0 − ηn∗ −→ 0, η¯n − η¯n0 −→ 0, η¯n − ηn0 −→ 0 and ηn0 − ηn −→ 0. (8.17) For the first relation in (8.17), define An = { max |λ0n,¯x (ωj ) − λn,¯x+σn N (ωj )| ≤ γ}, 1≤j≤q

and observe that bn/2c X √ σn 2πjt Nt cos | max |λ0n,¯x (ωj ) − λn,¯x+σn N (ωj )| = √ max | 2N0 + 2 1≤j≤q n n 1≤j≤q t=1

= Op (σn ln n). Hence P(Acn ) → 0. The remaining argument is similar to the proof of (8.9). For the second relation, note that √   ( 2 − 1)|x0 | 0 √ P max |λn,¯x (ωj ) − λn,¯x (ωj )| >  ≤ P >  → 0. 1≤j≤q n Proofs of the third and the fourth relations are similar to the proofs of (8.10) and (8.11) in the proof of Theorem 8.2.1. The proof of the following corollary is left as an exercise.

128

Poisson convergence

Corollary 8.3.2. Under the assumption of Theorem 8.3.1, (a) for any real numbers xk < · · · < x2 < x1 , P



n,(1)

− bq

aq

≤ x1 , . . . ,

 λn,(k) − bq ≤ xk → P(Y(1) ≤ x1 , . . . , Y(k) ≤ xk ), aq

where (Y(1) , . . . , Y(k) ) has the density exp(− exp(−xk ) − (x1 + · · · + xk−1 )). (b) λ

n,(i)

− λn,(i−1)  D −→ (i−1 Ei )1≤i≤k , aq 1≤i≤k

where {Ei } is a sequence of i.i.d. standard exponential random variables.

8.4

k-circulant, n = k 2 + 1

One can consider an appropriate point process based on the eigenvalues of the k-circulant matrix for n = k g +1 where g > 2, and can prove a result similar to Theorem 8.4.2. But for general g > 2, algebraic details are quite complicated. For simplicity, we consider the k-circulant matrix only for n = k 2 + 1. From Lemma 4.5.2 and (4.29) of Chapter 4, g1 = 4, the eigenvalue partition of {0, 1, 2, . . . , n − 1} contains exactly q = b n4 c sets of size 4, and each set is self-conjugate. Moreover, if k is even then there is only one more partition set containing only 0, and if k is odd then there are two more partition sets containing only 0 and only n/2, respectively. To define an appropriate point process, we need a clear picture of the eigenvalue partition of Zn = {0, 1, 2, . . . , n − 1}. First write S(x) defined in (1.5) of Chapter 1 as follows: S(ak + b) = {ak + b, bk − a, n − ak − b, n − bk + a}; 0 ≤ a ≤ k − 1, 1 ≤ b ≤ k. Lemma 8.4.1. For n = k 2 + 1, [ Zn =

S(ak + b)

[

S(0), if k is even

S(ak + b)

[

S(0)

0≤a≤b k−2 2 c,a+1≤b≤k−a−1

=

[

[

S(n/2), if k is odd,

0≤a≤b k−2 2 c,a+1≤b≤k−a−1

where all S(ak + b) are disjoint and form an eigenvalue partition of Zn .

k-circulant, n = k 2 + 1

129

Proof. First observe that S(0) = {0} and S(n/2) = {n/2} if k is odd, and #{x : x ∈ S(ak + b); 0 ≤ a ≤ b  n − 1 if k is even = n − 2 if k is odd.

k−2 c, a + 1 ≤ b ≤ k − a − 1} 2

So if we can show that S(ak + b); 0 ≤ a ≤ b k−2 2 c, a + 1 ≤ b ≤ k − a − 1 are mutually disjoint then we are done. We shall show S(a1 k+b1 )∩S(a2 k+b2 ) = ∅ for a1 6= a2 or b1 6= b2 . We divide the proof into four different cases. Case (i) (a1 < a2 , b1 > b2 ) Note that a1 + 1 < a2 + 1 ≤ b2 < b1 ≤ k − (a1 + 1). Since {S(x); 0 ≤ x ≤ n − 1} forms a partition of Zn , it is enough to show that a1 k + b1 ∈ / S(a2 k + b2 ) = {a2 k + b2 , b2 k − a2 , n − a2 k − b2 , n − b2 k + a2 }. As (a2 −a1 )k > k and (b1 −b2 ) < k, we have a1 k +b1 6= a2 k +b2 . Also (b2 −a1 )k ≥ 3k 2k and a2 + b1 ≤ b k−2 2 c + k − (a1 + 1) ≤ 2 ; therefore a1 k + b1 6= b2 k − a2 . Note that a1 k + b1 + a2 k + b2

≤ (a1 + a2 )k + 2k − 2(a1 + 1) k−2 ≤ 2b ck + 2k − 2(a1 + 1) 2 ≤ k 2 − 2k + 2k − 2(a1 + 1) < k 2 + 1 = n.

Therefore a1 k + b1 6= n − (a2 k + b2 ). Similarly, a1 k + b1 + b2 k − a2 ≤ a1 k + k − (a1 + 1) + (k − (a2 + 1))k − a2 < k 2 + 1 = n, and so a1 k + b1 6= n − (b2 k − a2 ). Hence S(a1 k + b1 ) ∩ S(a2 k + b2 ) = ∅. Case (ii) (a1 < a2 , b1 < b2 ) In this case it is very easy to see that a1 k + b1 ∈ / S(a2 k + b2 ), and hence S(a1 k + b1 ) ∩ S(a2 k + b2 ) = ∅. Case (iii) (a1 = a2 , b1 < b2 ) Let a1 = a2 = a. Obviously ak + b1 6= ak + b2 . Since 0 ≤ a ≤ b k−2 2 c and a+1 ≤ b1 < b2 ≤ k−(a+1), we have (b2 −a)k ≥ 2k > (a + b1 ). Hence ak + b1 6= b2 k − a. Also 2ak + b1 + b2 ≤ k(k − 2) + 2k = k 2 < n, so ak + b1 6= n − (ak + b2 ). Finally, b1 + b2 k + ak − a ≤ [k − (a + 1)](k + 1) + ak − a = k 2 − 2a − 1 < k 2 + 1 = n, implies ak + b1 6= n − (b2 k − a). Hence ak + b1 ∈ / S(ak + b2 ) and S(a1 k + b1 ) ∩ S(a2 k + b2 ) = ∅. Case (iv) (a1 < a2 , b1 = b2 ) In this case also S(a1 k + b1 ) ∩ S(a2 k + b2 ) = ∅. This completes the proof.

130

Poisson convergence

Now we are ready to define our point process. We neglect {0, n/2} if n is even, and {0} if n is odd. Let S = Zn − {0, n/2}, Tn = {(a, b) : 0 ≤ a ≤ b

k−2 c, a + 1 ≤ b ≤ k − (a + 1)}, 2

(8.18)

n−1

1 X 2πjt λt (x) = √ xj exp( ), n n j=0 Y βx,n (a, b) = λt (x), and t∈S(ak+b)

λx (a, b) = (βx,n (a, b))1/4 . Now define ηn (·) =

X



(a,b)∈Tn

λx (a,b)−dq √a , √b , cq n n

 (·),

(8.19)

where q = q(n) = b n4 c, cn

=

dn

=

(8 ln n)−1/2 , and   (ln n)1/2 1 ln ln n 1 π √ 1+ + ln . 4 ln n 2 2(8 ln n)1/2 2

Theorem 8.4.2 (Bose et al. (2011b)). Let {xt } be i.i.d. random variV ables which satisfy Assumption 8.2.1. Then ηn → η, where η is a Poisson point process on [0, 1/2] × [0, 1] × [0, ∞] with intensity function λ(s, t, x) = 4I{s≤t≤1−s} e−x . Proof. Though the main idea of the proof is similar to the proof of Theorem 8.2.1, the details are more complicated. We do it in two steps. Step 1: Define ηn∗ (·) =

X (a,b)∈Tn



λx ¯+σn N (a,b)−dq √a , √b , cq n n

 (·).

D

We show ηn∗ → η. Observe that the first two components of the limit are uniformly distributed over a triangle whose vertices are (0, 0), (1/2, 1/2), (0, 1). Denote this triangle by 4. Since the limit process is simple, it suffices to show that E ηn∗ ((a1 , b1 ] × (a2 , b2 ] × (x, y]) → E η((a1 , b1 ] × (a2 , b2 ] × (x, y]),

(8.20)

for all 0 ≤ a1 < b1 ≤ 1/2, 0 ≤ a2 < b2 ≤ 1 and x < y, and for all l ≥ 1, P(ηn∗ ((a1 , b1 ] × (c1 , d1 ] × R1 ) = 0, . . . , ηn∗ ((al , bl ] × (cl , dl ] × Rl ) = 0) (8.21)

k-circulant, n = k 2 + 1

131

−→ P(η((a1 , b1 ] × (c1 , d1 ] × R1 ) = 0, . . . , η((al , bl ] × (cl , dl ] × Rl ) = 0), where ∩li=1 (ai , bi ] × (ci , di ] = ∅ and R1 , . . . , Rl are bounded Borel sets, each consisting of a finite union of intervals on [0, ∞]. Proof of (8.20): Consider the following box sets. See also Figure 8.1. (i) (a1 , b1 ] × (a2 , b2 ] lies entirely inside the triangle 4. (ii) (a1 , b1 ] × (a1 , b1 ] where 0 ≤ a1 < b1 ≤ 1/2. (iii) (a1 , b1 ] × (1 − b1 , 1 − a1 ] where 0 ≤ a1 < b1 ≤ 1/2. (iv) (a1 , b1 ] × (a2 , b2 ] lies entirely outside the triangle 4.

Type(iii)

-

I, I,

y-axis

IJ

"' Type(i)

I,

x-axis

x-axis

FIGURE 8.1 The left figure shows four types of basic sets and the right figure shows the decomposition of a rectangle into these four types of sets.

Since any rectangle in [0, 1/2]×[0, 1] can be expressed as the disjoint union of these four kinds of sets (see Figure 8.1), it is sufficient to prove (8.20) and (8.21) for the above four kinds of boxes only. Let Ii denote the i-th type of set. It is enough to prove that for each i, as n → ∞, E ηn∗ (Ii × (x, y]) → E η(Ii × (x, y]). (a) Proof of (8.20) for Type (i) sets: E ηn∗ ((a1 , b1 ] × (a2 , b2 ] × (x, y]) X  = E  √a √b λx¯+σn N (a,b)−dq  ((a1 , b1 ] × (a2 , b2 ] × (x, y]) (a,b)∈Tn

=

n

,

n

,

X ( √an , √bn )∈(a1 ,b1 ]×(a2 ,b2 ]

cq

P

 λx¯+σn N (a, b) − dq ) ∈ (x, y] cq

1 (b1 − a1 )(b2 − a2 )n (e−x − e−y )(1 + o(1)) q → 4(b1 − a1 )(b2 − a2 )(e−x − e−y ) = E η((a1 , b1 ] × (a2 , b2 ] × (x, y]). ∼

132

Poisson convergence

(b) Proof of (8.20) for Type (ii) sets: Similarly, E ηn∗ ((a1 , b1 ] × (a1 , b1 ] × (x, y]) 1 1 ∼ (b1 − a1 )(b1 − a1 )n (e−x − e−y )(1 + o(1)) 2 q 1 2 −x → (b1 − a1 ) 4(e − e−y ) 2 = E η((a1 , b1 ] × (a1 , b1 ] × (x, y]). (c) Proof of (8.20) for Type (iii) sets is same as for Type (ii) sets. (d) Proof of (8.20) for Type (iv) sets: E ηn∗ ((a1 , b1 ] × (a2 , b2 ] × (x, y]) X  λx¯+σn N (a, b) − dq = P ) ∈ (x, y] cq a b ( √n , √n )∈(a1 ,b1 ]×(a2 ,b2 ]

=

0

=

E η((a1 , b1 ] × (a2 , b2 ] × (x, y]),

since {(a, b) ∈ Tn : ( √an , √bn ) ∈ (a1 , b1 ] × (a2 , b2 ]} = ∅. This completes the proof of (8.20). Proof of (8.21): We prove (8.21) for the four types of sets separately. (a) Type (i) sets: (ai , bi ] × (ci , di ] lies completely inside the triangle 4 for all i = 1, 2, . . . , l. Let nj

a b #{(a, b) : ( √ , √ ) ∈ (aj , bj ] × (cj , dj ]} n n √ √ ∼ n(bj − aj ) n(dj − cj ) =

= n(bj − aj )(dj − cj ). Then the complement of the event in (8.21) is the union of m = n1 + · · · + nl events, that is,  1 − P ηn∗ ((a1 , b1 ] × (c1 , d1 ] × R1 ) = 0, . . . , ηn∗ ((al , bl ] × (cl , dl ] × Rl ) = 0  λx¯+σn N −dq = P ∪lj=1 ∪( √a , √b )∈(aj ,bj ]×(cj ,dj ] { ∈ Rj } . n n cq Now following the argument to prove (8.6) of Theorem 8.2.1, we get  P ηn∗ ((a1 , b1 ] × (c1 , d1 ] × R1 ) = 0, . . . , ηn∗ ((al , bl ] × (cl , dl ] × Rl ) = 0 n→∞

−→ exp {−

l X (bj − aj )(dj − cj )4λ(Rj )} j=1

= P(η((a1 , b1 ] × (c1 , d1 ] × R1 ) = 0, . . . , η((al , bl ] × (cl , dl ] × Rl ) = 0).

k-circulant, n = k 2 + 1

133

This proves (8.21) for Type (i) sets. (b) Type (ii) sets: Here ci = ai , di = bi , and nj

= ∼

a b #{(a, b) : ( √ , √ ) ∈ (aj , bj ] × (aj , bj ]} n n √ 1√ n n(bj − aj ) n(bj − aj ) = (bj − aj )2 . 2 2

Remaining part of the proof is as in the previous case. Finally we get  P ηn∗ ((a1 , b1 ] × (a1 , b1 ] × R1 ) = 0, . . . , ηn∗ ((al , bl ] × (al , bl ] × Rl ) = 0 n→∞

−→ exp {−

l X 1 j=1

2

(bj − aj )2 4λ(Rj )}

= P(η((a1 , b1 ] × (a1 , b1 ] × R1 ) = 0, . . . , η((al , bl ] × (al , bl ] × Rl ) = 0). (c) Proof of (8.21) for Type (iii) sets is the same as that given for Type (ii) sets. T (d) For Type (iv) sets, (ai , bi ] × (ci , di ] 4 = ∅ for all i = 1, . . . , l. Note that for all i, a b #{(a, b) ∈ Tn : ( √ , √ ) ∈ (ai , bi ] × (ci , di ]} = 0, n n and therefore  P ηn∗ ((a1 , b1 ] × (c1 , d1 ] × R1 ) = 0, . . . , ηn∗ ((al , bl ] × (cl , dl ] × Rl ) = 0 = 1. Also for the Poisson point process η, P(η((a1 , b1 ] × (c1 , d1 ] × R1 ) = 0, . . . , η((al , bl ] × (cl , dl ] × Rl ) = 0) = 1. Consequently, the proof of Step 1 is complete. Step 2: Now define the process, X η¯n (·) = (a,b)∈Tn



λ ¯ (a,b)−dq √a , √b , x cq n n

 (·).

Then it suffices to show that for any continuous function f on [0, 1/2]×[0, 1]× [0, ∞) with compact support, P

P

η¯n (f ) − ηn∗ (f ) −→ 0 and η¯n (f ) − ηn (f ) −→ 0.

(8.22)

Suppose the compact support of f is contained in the set [0, 1/2] × [0, 1] × [K + γ0 , ∞) for some γ0 > 0 and K ∈ R. Since f is uniformly continuous, as γ → 0, ω(γ) := sup{|f (s, t, x) − f (s, t, y)|; s ∈ [0, 1/2], t ∈ [0, 1], |x − y| ≤ γ} → 0.

134

Poisson convergence P

Proof of η¯n (f ) − ηn∗ (f ) −→ 0: On the set An =



max | (a,b)∈Tn

λx¯+σn N (a, b) λx¯ (a, b) − |≤γ , cq cq

we have for γ < γ0 , f ( √a , √b , λx¯+σn N (a, b) − dq ) − f ( √a , √b , λx¯ (a, b) − dq ) cq cq n n n n ( λn,¯ (ω )−b j q x+σn N ω(γ) if >K aq ≤ λn,¯ (ω )−b j q x+σn N 0 if ≤ K. aq

(8.23)

Now if P(Acn ) → 0, then using (8.23),  ω(γ) −K lim sup P |ηn∗ (f ) − η¯n (f )| >  ≤ 4e → 0, as γ → 0.  n→∞ Now we show that P(Acn ) → 0. For any random variables (Xi )0≤i P(Acn ) = P max − a,b cq cq c/4 nc/4  n 1 =P max λx¯+σn N (a, b) − λx¯ (a, b) > → 0 as n → ∞. ln n a,b cq ln n P

Hence η¯n (f ) − ηn∗ (f ) −→ 0. The other part of (8.22) follows from (6.14) of Chapter 6. Proof of Step 2 and hence of the theorem is now complete. We invite the reader to formulate and prove a result similar to Corollary 8.2.2 for the ordered eigenvalues of the k-circulant matrix.

8.5

Reverse circulant: dependent input

Let {xn ; n ≥ 0} be a two sided moving average process, ∞ X

xn =

ai εn−i ,

(8.26)

i=−∞

P where {an } are non-random with n |an | < ∞, and {εi ; i ∈ Z} are i.i.d. random variables. Let f (ω), ω ∈ [0, 2π] be the spectral density of {xn }. Assumption 8.5.1. {εi ; i ∈ Z} are i.i.d. random variables with E(ε0 ) = 0, E(ε20 ) = 1, and E |ε0 |s < ∞ for some s > 2, ∞ X

|aj ||j|1/2 < ∞, and f (ω) > 0 for all ω ∈ [0, 2π].

j=−∞

Let λn,x (ωk ) be the eigenvalues of √1n RCn as given in (8.1). As in Chapter 7, we scale each eigenvalue and consider the point process, η˜n (·) =

q X



j=1

ωj ,

λn,x (ωj ) ˜ n,x (ωj ) = √ where λ , aq = √1 2πf (ωj )

2

ln q

˜ n,x (ω )−bq λ k aq

, bq =



 (·),

(8.27)

ln q and q = q(n) = b n2 c.

Theorem 8.5.1 (Bose et al. (2011b)). Let {xn } be as in (8.26) and which V satisfies Assumption 8.5.1. Then η˜n → η, where η is a Poisson point process on [0, π] × (−∞, ∞] with intensity function λ(t, x) = π −1 e−x .

136

Poisson convergence

Proof. Define ηn (·) =

q X



j=1

ωj ,

λn,ε (ωj )−bq aq

 (·).

(8.28)

V

In Theorem 8.2.1, we have shown that ηn → η. Now it is enough to show that for any continuous function g on E with compact support, P

η˜n (g) − ηn (g) → 0 as n → ∞. Suppose the compact support of g is contained in the set [0, π] × [K + γ0 , ∞) for some γ0 > 0 and K ∈ R. Since g is uniformly continuous, ω(γ) := sup{|g(t, x) − g(t, y)|; t ∈ [0, 1], |x − y| ≤ γ} → 0 as γ → 0. On the set An = { max | j=1,...,q

aq

λn,x (ωj ) λn,ε (ωj ) p − | ≤ γ}, aq 2πf (ωj )

we have for γ < γ0 , ˜ g(ωj , λn,x (ωj ) − bq )−g(ωj , λn,ε (ωj ) − bq ) ≤ aq aq

(

ω(γ)

if

0

if

λn,ε (ωj )−bq aq λn,ε (ωj )−bq aq

>K ≤ K. (8.29)

Observe that 1 λn,x (ωj ) max | p − λn,ε (ωj )| aq 1≤j≤q 2πf (ωj ) q 1 ≤ max |λn,x (ωj ) − 2πf (ωj )λn,ε (ωj )| αaq 1≤j≤q ≤

∞ n−1 X X 1 n−1 1 1 X iωj l max √ xl eiωj l − ( at eiωj t ) √ εl e αaq 1≤j≤q n n t=−∞ l=0

=

op (n

−1/4

l=0

), by (7.5) of Chapter 7.

Therefore limn→∞ P(Acn ) = 0. Now, for any δ > 0, choose 0 < γ < γ0 . Then, by intersecting the event {|˜ ηn (g) − ηn (g)| > δ} with An and Acn and using (8.29), we obtain lim sup P(|˜ ηn (g) − ηn (g)| > δ) n→∞



lim sup(P({|˜ ηn (g) − ηn (g)| > δ} ∩ An ) + P(Acn ))



lim sup P(ω(γ)ηn ([0, π] × [K, ∞)) > ) + lim sup P(Acn )



lim sup E ηn ([0, π] × [K, ∞))ω(γ)/ ≤ e−K ω(γ)/.

n→∞ n→∞

n→∞

n→∞ P

Since ω(γ) → 0 as γ → 0, η˜n − ηn → 0.

Symmetric circulant: dependent input

8.6

137

Symmetric circulant: dependent input

Now we consider the two sided moving average process defined in (8.26) with an extra assumption that aj = a−j for all j ∈ N. Define η˜n (·) =

q X



ωj ,

j=0

where q = q(n) ∼

n 2,

˜ n,x (ω )−bq λ j aq

 (·),

(8.30)

λn,x (ωj ) ˜ n,x (ωj ) = √ λ , and λn,x (ωj ) are the eigenvalues 2πf (ωj )

of the symmetric circulant matrix given in (8.14), and aq , bq are as in (8.16). Theorem 8.6.1 (Bose et al. (2011b)). Let {xn } be as in (8.26) with aj = a−j V

and which satisfies Assumption 8.5.1. Then η˜n → η, where η is a Poisson point process on [0, π] × (−∞, ∞] with intensity function λ(t, x) = π −1 e−x . Proof. The proof is very similar to the proof of Theorem 8.5.1. We only point out that to show limn→∞ P(Acn ) = 0, we use the following fact from (7.12) of Chapter 7: λ (ω ) n,x k − λn,ε (ωk ) = op (n−1/4 ). maxn p 1≤k≤b 2 c 2πf (ωk )

8.7

k-circulant, n = k 2 + 1: dependent input

First recall the eigenvalues of the k-circulant matrix for n = k 2 + 1 given in Section (8.4) and define the following notation based on that: Y βε,n (a, b) = λt (ε), λε (a, b) = (βε,n (a, b))1/4 , t∈S(ak+b)

Q β˜x,n (a, b)

=

t∈S(ak+b)

λt (x)

4π 2 f (ωak+b )f (ωbk−a )

˜ x (a, b) = (β˜x,n (a, b))1/4 . , and λ

Now with q = q(n) = b n4 c, and dq , cq as in (8.20), define our point process based on points {( √an , √bn , η˜n (·) =

˜ x (a,b)−dq λ ) cq

X (a,b)∈Tn

where Tn is as defined in (8.18).



: (a, b) ∈ Tn } as: ˜ x (a,b)−dq λ √a , √b , cq n n

 (·),

(8.31)

138

Poisson convergence

Theorem 8.7.1 (Bose et al. (2011b)). Let {xn } be as in (8.26) and which V satisfies Assumption 8.5.1. Then η˜n → η, where η is a Poisson point process on [0, 1/2] × [0, 1] × [0, ∞] with intensity function λ(s, t, x) = 4I{s≤t≤1−s} e−x . Proof. First define a point process based on {( √an , √bn ,

λε (a,b)−dq ) cq

D

: (a, b) ∈

Tn }. In Theorem 8.4.2, we have shown that ηn → η, where η is a Poisson point process on [0, 1/2] × [0, 1] × (−∞, ∞] with intensity function λ(s, t, x) = 4I{s≤t≤1−s} e−x . The rest of the argument is similar to that in the proof of Theorem 8.5.1. The additional fact that is needed is: ˜ P( max λ x (a, b) − λε (a, b) > γ) → 0, (a,b)∈Tn

which follows from the proof of Theorem 7.4.1 of Chapter 7.

8.8

Exercises

1. Complete the proof of (8.10). 2. Complete the proof of Corollary 8.3.2. 3. Complete the proof of Theorem 8.6.1. 4. Complete the proof of Theorem 8.7.1.

9 Heavy-tailed input: LSD

In this chapter we consider the circulant-type matrices when the input sequence belongs to the domain of attraction of an α-stable law with α ∈ (0, 2). We show that the LSDs are random distributions in these cases and determine explicit representations of the limits. The method of proof relies heavily on the ideas borrowed from the results available outside the random matrix context.

9.1

Stable distribution and input sequence

A random variable Yα is said to have a stable distribution Sα (σ, β, µ) if there are parameters 0 < α ≤ 2, σ ≥ 0, −1 ≤ β ≤ 1 and µ real such that its characteristic function has the following form:   exp{iµt − σ α |t|α (1 − iβ sgn(t) tan(πα/2))}, if α 6= 1, E[exp(itYα )] =  exp{iµt − σ|t|(1 + (2iβ/π) sgn(t) ln |t|)}, if α = 1. An excellent reference for stable distributions and stable processes is Samorodnitsky and Taqqu (1994). A sequence of i.i.d. random variables {Xi } is said to belong to the domain of attraction of a stable law with index α if there exists an → ∞ such that n 1 X D (Xk − cn ) → Sα , an k=1

where Sα is a stable random variable and cn = E[X1 I(|X1 | ≤ an )]. It is well-known that a random variable X is in the domain of attraction of a (non-normal) stable law with index α ∈ (0, 2) if and only if P[|X| > t] = t−α `(t), for some slowly varying function `(·) and lim

t→∞

P[X > t] = p ∈ [0, 1]. P[|X| > t]

(9.1)

139

140

Heavy-tailed input: LSD

In that case, the normalizing constants an can be chosen so that n P[|X| > an x] → x−α .

(9.2)

In this chapter we deal with the situation where the elements of the input sequence are in the domain of attraction of a stable distribution. Thus, we make the following assumption on the input sequence {Xi }. Assumption 9.1.1. The elements of the input sequence {Xi } are i.i.d. random variables in the domain of attraction of a stable law with index α ∈ (0, 2).

9.2

Background

So far, the LSDs, either in probability or almost surely, have all been nonrandom. But with heavy-tailed entries this is not the case anymore. To understand the mode of convergence of ESDs of circulant-type matrices with heavy-tailed entries, we need to first discuss the convergence of a sequence of random probability measures to a random probability measure. Let (Ω, A, P) be a probability space and µ be a measurable map from (Ω, A, P) to the set of probability measures on R. Hence µ(·, ω) is a probability measure on R. Let {µn } be a sequence of measurable maps from (Ω, A, P) to the set of probability measures on R. We say µn converges to the random measure µ weakly if for any bounded continuous function f : R → R, Z Z f (x)µn (dx, ω) → f (x)µ(dx, ω), for almost every ω ∈ Ω. R R Note that Yn (ω) := f (x)µn (dx, ω) and Y (ω) := f (x)µn (dx, ω) are realvalued random variables defined on (Ω, A, P). Hence the above weak convergence is the almost sure convergence of the random variables Yn to Y . Also recall Skorohod’s representation theorem which lifts weak convergence to almost sure convergence. This result will be a key ingredient in the proofs of the results of this chapter. Skorohod’s representation theorem: Let {µn } be a probability measure on a completely separable metric space S such that µn converges weakly to some probability measure µ on S as n → ∞. Then there exists a sequence of Svalued random variables {Xn } and another random variable X, all defined on a common probability space (Ω, A, P) such that the law of Xn is µn for all n, the law of X is µ, and Xn converges to X almost surely. A few results from Knight (1991) will be of direct use in our situation. We describe these now. Suppose {Xi } are i.i.d. random variables in the domain of attraction of a stable law with index α ∈ (0, 2). Then, following Knight (1991), order the Xk ’s, 1 ≤ k ≤ n by their magnitudes |Xn1 | ≥ |Xn2 | ≥ · · · ≥ |Xnn |.

Background

141

Let Ynk = |Xnk | and Bnk be the random signs so that Xnk = Bnk Ynk . Let πn1 , πn2 , . . . , πnn be the anti-ranks of Yn1 , Yn2 , . . . , Ynn , that is, for 1 ≤ k ≤ n, πnk (ω) = l if Ynk (ω) = |Xl (ω)| for ω ∈ Ω. Define Unk =

πnk 1 and Znk = Ynk . n an

Suppose {Uk } are i.i.d. random variables that are uniformly distributed on (0, 1). The variables {Bk } are i.i.d. random variables with P[B1 = 1] = p = 1 − P[B1 = −1] where p is defined by equation (9.1). The sequence {Zk } is defined as k X −1/α −1/α Zk = Γk = Ei , i=1

where {Ej } are i.i.d. exponential random variables with mean 1. Now we have the following lemmata from Knight (1991). Lemma 9.2.1 (Knight (1991), Lemma 1). In R∞ , we have D

U n = (Un1 , . . . , Unn , 0, 0, . . .) → U = (U1 , U2 , . . .), D

B n = (Bn1 , . . . , Bnn , 0, 0, . . .) → B = (B1 , B2 , . . .), D

Z n = (Zn1 , . . . , Znn , 0, 0, . . .) → Z = (Z1 , Z2 , . . .). Moreover, in the above, the three sequences converge jointly with the three limiting sequences being mutually independent. Now due to Skorohod’s representation theorem, we can assume that all the above random variables are defined on a suitable probability space (Ω, A, P) where the above convergence in distribution holds almost surely. That is, {Unk }, {Bnk }, {Znk }, {Uk }, {Bk } and {Zk } are defined on the probability space (Ω, A, P) and U n → U , B n → B and Z n → Z almost surely. A typical element of Ω will be denoted by ω. Lemma 9.2.2 (Knight (1991), Lemma 2). Suppose that the convergence results of Lemma 9.2.1 hold in almost sure sense and let 0 < ε < γ ≤ ∞. (a) Then, as n → ∞, ∞ X Bnk Znk I(ε < Znk ≤ γ) − Bk Zk I(ε < Zk ≤ γ) → 0, almost surely. k=1

(b) If γ < ∞ or if γ ≤ ∞ and α > 1 then, as n → ∞, ∞ X k=1

 E Bnk Znk I(ε < Znk ≤ γ) − Bk Zk I(ε < Zk ≤ γ) → 0.

142

Heavy-tailed input: LSD

Now we state another important lemma from Knight (1991). We need the following notion of rational independence to state the result. Definition 9.2.1. A sequence of real numbers {bj }1≤j≤m ∈ (0, 1) is said to be rationally independent if for integers {aj }1≤j≤m , a1 b1 + · · · + am bm = 0 (mod 1) ⇔ a1 = · · · = am = 0. Lemma 9.2.3 (Knight (1991), Lemma 3). Let {πnj }{j=1,...,m} , n ≥ 1 be positive real numbers such that πnj → uj ∈ (0, 1), n

(9.3)

where {uj } are rationally independent. Let Rn be a random vector taking values  πn1 πnm t (mod 1), . . . , t (mod 1) , t = 1, 2, . . . , n, n n with probability 1/n for each t. Then the distribution of Rn converges weakly to the uniform distribution on the unit cube [0, 1]m . Now we state the main result of Knight (1991). The LSD results for the symmetric circulant matrix will follow from it. Theorem 9.2.4 (Knight (1991), Theorem 5). Let h(·) be a continuous function with period one and {Xj } be i.i.d. as in Assumption 9.1.1. Define n X ˆ j (h) = 1 X h(jk/n)Xk an

˜ j (h) = 1 X an ˆ ∗ (A) = 1 P n n ˜ ∗ (A) = 1 P n n

k=1 n X

for j = 1, 2, . . . , n;

h(jk/n)(Xk − cn )

for j = 1, 2, . . . , n;

k=1 n X

 ˆ j (h) ∈ A , empirical measure of {X ˆ j (h)}; I X

j=1 n X

 ˜ j (h) ∈ A , empirical measure of {X ˜ j (h)}; I X

j=1

where an is as in (9.2) and cn = E[X1 I(|X1 | ≤ an )]. (a) If 0 < α < 2 then D ˜∗ ˜∗ → P P , n

˜ ∗ (·, ω) is the probability distribution of where P ∞ X k=1

 h(Uk∗ ) Bk (ω)Zk (ω) − E(Bk Zk I(Zk < 1)) ,

Background

143

and {Uk∗ } is a sequence of i.i.d. uniform random variables on (0, 1). R1 Pn (b) If, either, (i) #{j : k=1 h(jk/n) 6= 0} = o(n) and 0 h(x)dx = 0, or (ii) α > 1 and E(X1 ) = 0, or (iii) α < 1, then D ˆ∗ ˆ∗ → P P , n

ˆ ∗ is the probability distribution of where P ∞ X

h(Uk∗ )Bk (ω)Zk (ω).

k=1

We will not give a detailed proof of Theorem 9.2.4; instead we outline the ˆ ∗ to P ˆ ∗ . The idea of the main idea to prove convergence in distribution of P n ∗ D ∗ ˜ →P ˜ is similar. We will follow the same idea to prove the LSD proof of P n results of circulant-type matrices. First observe that ˆ j (h) = X

n X

h(jUnk )Bnk Znk =

k=1

∞ X

h(jUnk )Bnk Znk ,

k=1

where {Unk }, {Bnk } and {Znk } are as defined before. So, sampling from Pˆn∗ produces the random variable ∞ X

∗ h(Unk )Bnk Znk ,

k=1 ∗ ∗ ∗ where (Un1 , Un2 , . . . , Unn , 0, . . .) is chosen at random from the set of sequences  (jUn1 (mod 1), . . . , jUnn (mod 1), 0, . . .), j = 1, 2, . . . , n .

Now from Lemma 9.2.1 and Skorohod’s representation theorem, {Unk }, {Bnk }, {Znk }, {Uk }, {Bk } and {Zk } are defined on the same probability space (Ω, A, P) and U n → U , B n → B and Z n → Z almost surely. Moreover the limiting sequences {Uk }, {Bk } and {Zk } are mutually independent. ∗ ∗ ∗ Now Lemma 9.2.3 suggests that (Un1 , Un2 , . . . , Unn , 0, . . .) will converge in distribution to (U1∗ , U2∗ , U3∗ , . . .), and hence Pˆn∗ will converge in distribution to the random probability measure of the (random) random variables ∞ X

h(Uk∗ )Bk Zk ,

k=1

where {Uk∗ } is a sequence of i.i.d. uniform random variables on (0, 1). Note that the randomness of the limiting measure is induced by {Bk } and {Zk }. We shall follow a similar idea to prove the LSD results in the following sections.

144

Heavy-tailed input: LSD

9.3

Reverse circulant and symmetric circulant

Let {λj } denote the eigenvalues of the random matrix a1n Mn with input sequence {Xi }, where {an } is a positive sequence of constants as defined in (9.2). Then we denote the ESD of a1n Mn by LMn so that, n

LMn (A) =

1X I(λj ∈ A). n j=1

If the input sequence for Mn is {Xi − cn } with cn = E[X1 I(|X1 | ≤ an )], then ˜ i } and the corresponding ESD the eigenvalues of a1n Mn will be denoted by {λ ˜ M . So will be denoted by L n n

X ˜ j ∈ A). ˜ M (A) = 1 L I(λ n n j=1 Using the methods of Freedman and Lane (1981), Bose et al. (2003) showed that the LSD of the reverse circulant matrix with heavy tailed input exists. However, no closed form representation of the limit was given. On the other hand, Knight (1991) considered the empirical distribution of the periodogram of the {Xi } and was able to obtain some very nice representations for it. One can combine these two results to state the following theorem. It also follows as a special case of Theorem 9.4.2 which is stated later. We recall random variables {Bj }, {Γj } and {Zj } from Section 9.2 to state the LSD results for the reverse circulant, the symmetric circulant and the kcirculant matrices where the input sequence {Xj } satisfies Assumption 9.1.1. {Bj } and {Γj } are independent random sequences defined on the same probability space where Bj are i.i.d. with P[B1 = 1] = p = 1 − P[B1 = −1], p Pj −1/α is defined by equation (9.1), Γj = i=1 Ei , Zj = Γj , and {Ei } are i.i.d. exponential random variables with mean 1. Now define µt = E[Bt Zt I(Zt ≤ 1)]. Let

{Uj∗ }

(9.4)

be a sequence of i.i.d. uniform random variables on (0, 1).

D ˜ ˜ RC → ˜ RC (·, ω) is Theorem 9.3.1 (Bose et al. (2011a)). (a) L LRC , where L n the random distribution of the symmetric square root of ∞ ∞ X 2  X 2 cos(2πUt∗ )(Bt (ω)Zt (ω) − µt ) + sin(2πUt∗ )(Bt (ω)Zt (ω) − µt ) . t=1

t=1 D

(b) LRCn → LRC , where LRC (·, ω) is the random distribution of the symmetric square root of ∞ ∞ X 2  X 2 cos(2πUt∗ )Bt (ω)Zt (ω) + sin(2πUt∗ )Bt (ω)Zt (ω) . t=1

t=1

k-circulant: n = k g + 1

145

We now give the corresponding result for the symmetric circulant matrix. D ˜ ˜ SC → ˜ SC (·, ω) is Theorem 9.3.2 (Bose et al. (2011a)). (a) L LSC , where L n the distribution of

2

∞ X

cos(2πUt∗ )(Bt (ω)Zt (ω) − µt ).

t=1 D

(b) LSCn → LSC , where LSC (·, ω) is the distribution of ∞ X

2

cos(2πUt∗ )Bt (ω)Zt (ω).

t=1

We briefly sketch a proof of the above result. Let λ0 , λ1 , . . . , λn−1 be the eigenvalues of a1n SCn given by: (i) for n odd:    λ0 =

1 an

  λ k

1 an

=

X1 + 2

P[n/2]

X1 + 2

P[n/2]

j=1

Xj+1

 (9.5)

2πjk j=1 Xj+1 cos( n )



, 1 ≤ k ≤ [n/2];

(ii) for n even:   P n2 −1 1  Xj+1 + Xn/2  λ0 = an X1 + 2 j=1   λ k

=

1 an

X1 + 2

P n2 −1 j=1

 k n Xj+1 cos( 2πjk n ) + (−1) X 2 +1 , 1 ≤ k ≤

n 2

with λn−k = λk in both cases. Now observe that, as far as determining the LSD of a1n SCn is concerned, we can ignore the quantity a1n X1 and also the quantity a1n Xn/2+1 , if n is even, from the eigenvalue formula since a1n X1 → 0 in probability. Then the results follow from Theorem 9.2.4.

9.4

k-circulant: n = k g + 1

First suppose n = k 2 + 1. Then from Lemma 6.2.1, if k is even then there is one singleton partition set {0}, and if k is odd then there are two singleton partition sets {0} and {n/2}, respectively (which we may ignore); all the remaining partitions have four elements each. In general, for n = k g + 1, g ≥ 1, the eigenvalue partition (see Section n 6.2 of Chapter 6) of {0, 1, 2, . . . , n − 1} contains approximately q = [ 2g ] sets

146

Heavy-tailed input: LSD

each of size (2g) and each set is self-conjugate; in addition, the remaining sets do not contribute to the LSD. We shall call the partition sets of size (2g) the major partition sets. We will benefit from expressing the eigenvalues in a convenient form. This is given in the following lemma for easy reference. To do this, observe that a typical S(x) may be written as S(b1 k g−1 + b2 k g−2 + · · · + bg ), which in turn is the union of the following two sets: n b1 k g−1 + b2 k g−2 + · · · + bg , b2 k g−1 + b3 k g−2 + · · · + bg k − b1 , . . . , o bg k g−1 − b1 k g−2 − · · · − bg−1 and its conjugate, that is,  n − (b1 k g−1 + b2 k g−2 + · · · + bg ), . . . , n − (bg k g−1 − b1 k g−2 − · · · − bg−1 ) , where 0 ≤ b1 ≤ k − 1, . . . , 0 ≤ bg−1 ≤ k − 1 and 1 ≤ bg ≤ k. Define Tn = {(b1 , b2 , . . . , bg ) : 0 ≤ b1 ≤ k − 1, . . . , 1 ≤ bg ≤ k} , Ct =

n X j=1

Xj cos(

2πjt ) n

and St =

n X

Xj sin(

j=1

2πjt ), for t ∈ N. n

g Lemma 9.4.1. The eigenvalues of the k-circulant a−1 n Ak,n with n = k + 1 corresponding to the major partition sets may be written as n o 2g−1 λ(b1 ,b2 ,...,bg ) , λ(b1 ,b2 ,...,bg ) ω2g , . . . , λ(b1 ,b2 ,...,bg ) ω2g : (b1 , b2 , . . . , bg ) ∈ Tn ,

where ω2g is a primitive (2g)-th root of unity and an λ(b1 ,b2 ,...,bg ) = Cb21 kg−1 +···+bg + Sb21 kg−1 +···+bg

1  2g 1 . . . Cb2g kg−1 −···−bg−1 + Sb2g kg−1 −···−bg−1 2g .

g In view of Lemma 9.4.1, tofind the LSD of a−1 n Ak,n where n = k + 1, it suffices to consider the ESD of λ(b1 ,b2 ,...,bg ) : (b1 , . . . , bg ) ∈ Tn : if these have an LSD F , then the LSD of a−1 n Ak,n will be (r, θ) in polar coordinates where r is distributed according to F , and θ is distributed uniformly across all the (2g)-th roots of unity, and r and θ are independent. With this in mind, define X 1 LAk,n (A, ω) = I(λ(b1 ,...,bg ) ∈ A), #Tn (b1 ,...,bg )∈Tn

˜ A (A, ω) L k,n

=

1 #Tn

X (b1 ,...,bg )∈Tn

˜ (b ,...,b ) ∈ A), I(λ 1 g

k-circulant: n = k g + 1

147

where n o ˜ (b ,b ,...,b ) , λ ˜ (b ,b ,...,b ) ω2g , . . . , λ ˜ (b ,b ,...,b ) ω 2g−1 : (b1 , b2 , . . . , bg ) ∈ Tn λ 2g 1 2 g 1 2 g 1 2 g are the eigenvalues of a−1 n Ak,n corresponding to the major partition sets with input sequence {Xi −cn }, cn = E[X1 I(|X1 | ≤ an )]. Further, let {Γj }, {Zj } and ∗ {Bj } be as defined earlier. Let {Ut,j } be a sequence of i.i.d. U (0, 1) random variables. Finally, for 1 ≤ j ≤ g define ˜ j (ω) = L ∞ ∞ X X 2 2 ∗ ∗ sin(2πUt,j )(Bt (ω)Zt (ω) − µt ) + cos(2πUt,j )(Bt (ω)Zt (ω) − µt ) , t=1

Lj (ω) =

t=1 ∞ X

∞ X 2 2 ∗ ∗ sin(2πUt,j )Bt (ω)Zt (ω) + cos(2πUt,j )Bt (ω)Zt (ω) .

t=1

t=1

We now state the following theorem. Theorem 9.4.2 (Bose et al. (2011a)). Let n = k g + 1. D ˜ ˜A → ˜ A (·, ω) is the random distribution induced by (a) Then L LAk , where L k,n k 1/2g 1/2g 1/2g ˜ 1 (ω) ˜ 2 (ω) ˜ g (ω) L L ···L . D

(b) Then LAk,n → LAk , where LAk (·, ω) is the random distribution induced by 1/2g 1/2g L1 (ω) · · · Lg (ω) . We recall the sequence of random variables {Ynk }, {Bnk }, {πnk } and {Unk } defined in Section 9.2. We order the Xk ’s, 1 ≤ k ≤ n by their magnitudes |Xn1 | ≥ |Xn2 | ≥ · · · ≥ |Xnn |. Let Ynk = |Xnk | and Bnk be the random signs so that Xnk = Bnk Ynk . Let πn1 , πn2 , . . . , πnn be the anti-ranks of Yn1 , Yn2 , . . . , Ynn , Unk = πnnk and Znk = a1n Ynk . Now we state a key lemma which shall be used in the proof of Theorem 9.4.2 and the theorems in the next section. This lemma is a suitable modification of Lemma 9.2.3. Lemma 9.4.3. Let n = k g + 1 and {πnj }{j=1,...,m} , n ≥ 1 satisfy (9.3) and kπnj k g−1 πnj (mod 1) → v1,j ∈ (0, 1), . . . , (mod 1) → vg−1,j ∈ (0, 1), (9.6) n n where {v1,j , . . . , vg−1,j } are rationally independent. Let π ˜nj (b1 , b2 , . . . , bg )  πnj πnj = (b1 k g−1 + · · · + bg )(mod 1), . . . , (bg k g−1 − · · · − bg−1 )(mod 1) , n n ˜ n be a random vector which takes values from the set and R (˜ πn1 (b1 , b2 , . . . , bg ), . . . , π ˜nm (b1 , b2 , . . . , bg ))

148

Heavy-tailed input: LSD

˜ n converges with probability 1/#Tn for each (b1 , b2 , . . . , bg ) ∈ Tn . Then R gm weakly to the uniform distribution on the unit cube [0, 1] . Proof. We will give a proof only for the case g = 2. The proof for the general case is along similar lines. It is enough to show that Z ˜ n (x) → 0, exp(2πiw.x)dR (9.7) where w = (w1 , w2 , . . . , w2m ) is any vector in R2m with integer coordinates. Note that we can write the left side of (9.7) as 1 #Tn

X

exp 2πi(ak + b)

m X

w2j−1

j=1

(a,b)∈Tn

m X πnj πnj  + 2πi(bk − a) w2j . n n j=1

As #Tn ∼ n, we can write the above expression as ≈

1 X n

X

exp(2πiaxkn ) exp(2πibykn ),

(9.8)

1≤a≤k 1≤b≤k

where xkn ykn

=

=

m X j=1 m X

m

w2j−1 k

πnj X πnj − w2j n n j=1

and

m

w2j k

j=1

πnj X πnj + w2j−1 . n n j=1

Note that both xkn and ykn can be considered with (mod 1), and xkn (mod 1) →

m X j=1

w2j−1 vj −

m X

w2j uj =: x∗ .

j=1

By rational independence of {uj , vj }, for non-zero integers {wj } we have x∗ 6= 0 (mod 1). Hence, for all large n, xkn is not an integer. Also note that for all large n, this is bounded away from zero and one. Similar conclusions hold for {ykn }. Hence, summing the above geometric series (9.8) we get 1 1 − exp (2πikxkn ) 1 − exp (2πikykn ) exp (2πixkn ) exp (2πiykn ) . n 1 − exp (2πixkn ) 1 − exp (2πiykn ) Now using the fact that the numerator is bounded, and {xkn } and {ykn } stay bounded away from 0 and 1 for sufficiently large n, it easily follows that the above expression goes to zero. This completes the proof.

k-circulant: n = k g + 1

9.4.1

149

Proof of Theorem 9.4.2

We shall prove the theorem only for g = 2. For g > 2, the argument will be similar but with more complicated algebraic calculations. Now define h(x, y)

=

˜ n (a, b) X

=

(cos(2πx), sin(2πx), cos(2πy), sin(2πy)), for (x, y) ∈ R,  1 ˜ Cak+b , S˜ak+b , C˜bk−a , S˜bk−a , for (a, b) ∈ Tn , an

where C˜t =

n X

n

(Xj − cn ) cos(

j=1

X 2πjt 2πjt ) and S˜t = (Xj − cn ) sin( ). n n j=1

˜ n (a, b) defined by, Let P˜n (·, ω) be the empirical distribution function of X X 1 ˜ n (a, b) ∈ ·). I(X #Tn (a,b)∈Tn

If we can show that P˜n converges in distribution then the result will follow as ˜ n (a, b)) will converge in distribution, where the empirical measure of f (X 1

1

f (x1 , x2 , x3 , x4 ) = (x21 + x22 ) 4 (x23 + x24 ) 4 . As discussed, we ignore the eigenvalues coming from the partition sets Pj with #Pj < 4. Now ˜ n (a, b) = X =

n  (ak + b)j (bk − a)j  X − µ X j n h , n n a n j=1 n X

h (ak + b)Unj , (bk − a)Unj



 Bnj Znj − E [Bnj Znj I(Znj ≤ 1)] .

j=1

Let  Wnj (a, b) = (ak + b)Unj (mod 1), (bk − a)Unj (mod 1) . ∗ ∗ To study the behavior of P˜n , we need to choose (Wn1 (a, b), . . . , Wnn (a, b), 0, . . .) at random from the set of sequences {(Wn1 (a, b), . . . , Wnn (a, b), 0, . . .), (a, b) ∈ Tn } . Let ∗ ∗ ∗ Wnj = (Unj , Vnj ).

So P˜n produces the random variable Yn∗ (ω) =

∞ X

∗ ∗ h(Unj , Vnj )(Bnj (ω)Znj (ω) − E [Bnj Znj I(Znj ≤ 1)]).

j=1 ∗ ∗ ∗ To derive the convergence of Wnj = (Unj , Vnj ), we apply Lemma 9.4.3. The following lemma ensures that the anti-ranks satisfy the conditions of Lemma 9.4.3.

150

Heavy-tailed input: LSD

Lemma 9.4.4. Let πnj be the anti-ranks as defined in the previous section. π Then k s nnj (mod 1) converges in distribution to the uniform distribution on (0, 1), where s = 0, 1, 2, . . . , g − 1. Further, π  πn1 πnn πnn n1 , . . . , k g−1 (mod 1), . . . , , . . . , k g−1 (mod 1) n n n n D ˜ ˜ ˜ ˜ → (U1,1 , . . . , Ug,1 , U1,2 , . . . , Ug,2 , · · · ). Proof. The proof goes along the same lines as the proof of Lemma 9.4.3 and easily follows when one considers equation (9.7). We skip the details. Proof of Theorem 9.4.2. The main idea of the proof is similar to the idea of the proof of Theorem 5 of Knight (1991) as discussed in Section 9.2. We indicate the main steps. Let Y ∗ (ω) =

∞ X

h(Uj∗ , Vj∗ )(Bj (ω)Zj (ω) − E [Bj Zj I(Zj ≤ 1)]).

j=1 D

We show that Yn∗ → Y ∗ . ˜ be the probability and expectation induced by the ranLet P˜ and E ∗ ˜ domness of Pn (·, ω) (or equivalently induced by Wnj ). ∗ Now note that Yn (ω) can be broken into the following two parts: ∞ X

∗ ∗ h(Unj , Vnj )(Bnj (ω)Znj (ω)I(Znj (ω) > ε) − E [Bnj Znj I(ε < Znj ≤ 1)])

j=1

(9.9) and ∞ X

∗ ∗ h(Unj , Vnj )(Bnj (ω)Znj (ω)I(Znj (ω) ≤ ε) − E [Bnj Znj I(Znj ≤ ε)]). (9.10)

j=1

We show that the expression in (9.9) converges in distribution almost surely (with respect to the probability measure on Ω) and the expression in (9.10) goes to zero in L2 in probability (with respect to the probability measure on Ω). Since h is a bounded function, it follows directly from Lemma 9.2.2 of Knight (1991) that ∞

X   ∗ ∗

h(Unj , Vnj ) Bnj (ω)Znj (ω)I(Znj > ε) − Bj Zj I(Zj > ε) → 0 a.s., j=1

(9.11) and ∞

X  ∗ ∗

h(Unj , Vnj ) E [Bnj Znj I(ε < Znj ≤ 1)] j=1

 − E [Bj (w)Zj (w)I(ε < Zj ≤ 1)] → 0.

(9.12)

k-circulant: n = k g + 1

151

Now note that, if g : R2m → R4 is a bounded continuous map having periodicity one in each coordinate, then under the assumptions of Lemma 9.4.3 it follows that 1 X πn1 πn1 πnm πnm g((ak + b) , (bk − a) , . . . , (ak + b) , (bk − a) ) n n n n n (a,b)∈Tn Z 1 Z 1 → ··· g(x1 , . . . , x2m )dx1 · · · dx2m . 0

0

Now we use g(x1 , . . . , x2m ) =

m X

h(x2j−1 , x2j )(Bj Zj I(Zj > ε) − E [Bj Zj I(ε < Zj ≤ 1)])

j=1

and Lemma 9.4.4 to conclude that, for fixed m, m X

∗ ∗ h(Unj , Vnj )(Bj Zj I(Zj > ε) − E [Bj Zj I(ε < Zj ≤ 1)])

j=1 ˜ D



m X

h(Uj∗ , Vj∗ ) (Bj Zj I(Zj > ε) − E [Bj Zj I(ε < Zj ≤ 1)]) ,

j=1

˜ denotes convergence in distribution with respect to P˜ . as n → ∞. Here D Since Zj → 0 as j → ∞ almost surely and kh(x, y)k ≤ 2 for all (x, y) ∈ R2 , we have ∞ ∞

X

X

∗ ∗ h(Unj , Vnj )(Bj Zj I(Zj > ε) ≤ 2 Zj I(Zj > ε) → 0, a.s.,

j=m+1

k=m+1

as n → ∞ and m → ∞. Similarly, ∞ ∞

X

X

∗ ∗ h(Unj , Vnj ) E [Bj Zj I(ε < Zj ≤ 1)] ≤ 2 E(Zj I(ε < Zj ≤ 1)) → 0,

j=m+1

j=m+1

as n → ∞ and m → ∞, since E(Zj I(ε < Zj ≤ 1)) = O(ε−j /Γ(j)) as j → ∞. Now ∞ X h(Uj∗ , Vj∗ ) (Bj Zj I(Zj > ε) − E [Bj Zj I(ε < Zj ≤ 1)]) j=1

is finite for almost all ω, with P˜ (defined at the beginning of this proof) probability 1. So, as m → ∞, we have m X

h(Uj∗ , Vj∗ ) (Bj Zj I(Zj > ε) − E [Bj Zj I(ε < Zj ≤ 1)])

j=1 P˜



∞ X j=1

h(Uj∗ , Vj∗ ) (Bj Zj I(Zj > ε) − E [Bj Zj I(ε < Zj ≤ 1)]) .

152

Heavy-tailed input: LSD

Therefore m X

∗ ∗ h(Unj , Vnj )(Bnj Znj I(Znj > ε) − E [Bnj Znj I(ε < Znj ≤ 1)])

j=1 ˜ D



∞ X

h(Uj∗ , Vj∗ ) (Bj Zj I(Zj > ε) − E [Bj Zj I(ε < Zj ≤ 1)]) ,

j=1

as n → ∞, in probability. Now to complete the proof part (a), we need to show that ∞ h X 2 i P ∗ ∗ ˜ E h(Unj , Vnj )(Bnj Znj I(Znj ≤ ε) − E(Bnj Znj I(Znj ≤ ε)) → 0, j=1

as n → ∞ and ε → 0. Now observe that ˜ E

∞ h X

∗ ∗ h(Unj , Vnj )(Bnj Znj I(Znj ≤ ε) − E(Bnj Znj I(Znj ≤ ε))

2 i

j=1

=

1 #Tn

X (a,b)∈Tn

n  1 X  (ak + b)j (bk − a)j  h , an j=1 n n

Xj I(|Xj | ≤ an ε) − E(Xj I(|Xj | ≤ an ε))

 2

.

Now the expectation with respect to E of the last expression is bounded by  2 22 na−2 n E X1 I(|X1 | ≤ an ε) , and this expression goes to 0 as n → ∞ and ε → 0. This completes the proof of the first part of the theorem. The proof of the second part is similar. We indicate the main steps. First, following arguments similar to those given in part (a), as n → ∞, ∞ X

˜ D

∗ ∗ h(Unj , Vnj )Bnj Znj I(Znj > ε)→

j=1

∞ X

h(Uj∗ , Vj∗ )Bj Zj I(Zj > ε).

j=1

Proof of the next step needs extra care. To show that ∞ h X i

P ˜

P h(Uj∗ , Vj∗ ) (Bj Zj I(Zj ≤ ε)) > γ → 0, j=1

k-circulant: n = k g + 1

153

as n → ∞ and ε → 0, observe that ∞ h X i

∗ ∗ P˜ h(Unj , Vnj )Bnj Znj I(Znj ≤ ε) > γ j=1 ∞ h X

≤ P˜ |

∗ cos(2πUnj )Bnj Znj I(Znj ≤ ε)| >

j=1

γi + ··· 2

∞ h X γi ∗ + P˜ | sin(2πVnj )Bnj Znj I(Znj ≤ ε)| > . 2 j=1

(9.13)

First observe that ∞ h X γi P ∗ P˜ | cos(2πUnj )Bnj Znj I(Znj ≤ ε)| > → 0. 2 j=1

In fact, we have ∞ γi h X ∗ P˜ cos(2πUnj )Bnj Znj I(Znj ≤ ε) > 2 j=1

=

n γo n X 1 2π(ak + b)l  # (a, b) : a−1 cos X I(|X | ≤ a ε) > l l n n #Tn n 2 l=1

n 1 = # (a, b) ∈ An #Tn

n γo X 2π(ak + b)l  : a−1 cos Xl I(|Xl | ≤ an ε) > n n 2 l=1

n γo n X 1 2π(ak + b)l  + # (a, b) ∈ Acn : a−1 cos Xl I(|Xl | ≤ an ε) > , n #Tn n 2 l=1

where An = {(a, b) :

n X l=1

cos(

2π(ak + b)l ) 6= 0}. n

However, #An = o(n) and

#Tn → 1. n

154

Heavy-tailed input: LSD

So n γo n 1 2π(ak + b)l  −1 X # (a, b) : an cos Xl I(|Xl | ≤ an ε) > #Tn n 2 l=1 n o(n) 1 = + # (a, b) ∈ Acn : n #Tn n γo 2π(ak + b)l  −1 X cos Xl I(|Xl | ≤ an ε) > an n 2 l=1

n n X o(n) 1 2π(ak + b)l  = + # (a, b) ∈ Acn : a−1 cos n n #Tn n l=1   γ o Xl I(|Xl | ≤ an ε) − E(Xl I(|Xl | ≤ an ε)) > 2 n n X  o(n) 1 2π(ak + b)l ≤ + # (a, b) : a−1 cos n n #Tn n l=1   γ o Xl I(|Xl | ≤ an ε) − E(Xl I(|Xl | ≤ an ε)) > 2 ∞ h X o(n) ∗ = + P˜ cos(2πUnj )(Bnj Znj I(Znj ≤ ε) n j=1 γi − E(Bnj Znj I(Znj ≤ ε))) > 2 P

→ 0, by the proof of Theorem 9.4.2(a). h P Now a similar conclusion holds for the other i P ∞ ˜ three terms of (9.13). So P h(U ∗ , V ∗ ) Bnj Znj I(Znj ≤ ε) > γ → j=1

nj

nj

0, as n → ∞ and ε → 0. So, this completes the proofs for both parts (a) and (b) of Theorem 9.4.2.

9.5

k-circulant: n = k g − 1

As before, n0 = n and k 0 = k. Now the eigenvalue partition of {0, 1, 2, . . . , n−1} contains approximately q = [ ng ] sets of size g which are the major partition sets. The remaining sets do not contribute to the LSD. For detailed explanation see the first part of the proof of Theorem 4.5.7. Similar to the developments in the previous section, now the major partition sets {S(x)} may be listed as {b1 k g−1+b2 k g−2+· · ·+bg , b2 k g−1+· · ·+bg k+b1 , . . . , bg k g−1+b1 k g−2+· · ·+bg−1 }, where 0 ≤ b1 ≤ k − 1, . . . , 0 ≤ bg−1 ≤ k − 1, 1 ≤ bg ≤ k,

k-circulant: n = k g − 1

155

with (b1 , b2 , . . . , bg ) 6= (k − 1, k − 1, . . . , k − 1) and (b1 , b2 , . . . , bg ) 6= (k − 1, k − 1, . . . , k − 1, k). Now define  Tn0 = (b1 , b2 , . . . , bg ) : 0 ≤ bj ≤ k − 1, for 1 ≤ j < g and 1 ≤ bg ≤ k, (b1 , b2 , . . . , bg ) 6= (k − 1, k − 1, . . . , k − 1) and (b1 , b2 , . . . , bg ) 6= (k − 1, k − 1, . . . , k − 1, k) ,  1 Cb1 kg−1 +···+bg + iSb1 kg−1 +···+bg · · · an  · · · Cbg kg−1 +···+bg−1 + iSbg kg−1 +···+bg−1 ,  i arg γ(b1 ,b2 ,...,bg ) 1/g = |γ(b1 ,b2 ,...,bg ) | exp{ }. g

γ(b1 ,b2 ,...,bg ) =

η(b1 ,b2 ,...,bg )

g Then the eigenvalues of the k-circulant a−1 n Ak,n with n = k −1 corresponding g−1 g−2 to the partition set S(b1 k + b2 k + · · · + bg ) are

η(b1 ,b2 ,...,bg ) , η(b1 ,b2 ,...,bg ) ωg , η(b1 ,b2 ,...,bg ) ωg2 , . . . , η(b1 ,b2 ,...,bg ) ωgg−1 , where ωg is a primitive g-th root of unity. So, to find the LSD, it suffices to consider the ESD of {γ(b1 ,b2 ,...,bg ) : (b1 , . . . , bg ) ∈ Tn0 }: if these have an LSD 0 0 F , then the LSD of a−1 n Ak,n will be (r , θ) where r is distributed according i arg(z)

to h(F ) where h(z) = |z|1/g e g and θ is distributed uniformly across all the g-th roots of unity, and r0 and θ are independent. Hence, define LAk,n (A, ω) ˜ A (A, ω) L k,n

= =

1 #Tn0 1 #Tn0

X

I(γ(b1 ,...,bg ) ∈ A),

(b1 ,...,bg )∈Tn0

X

I(˜ γ(b1 ,...,bg ) ∈ A),

(b1 ,...,bg )∈Tn0

where {˜ γ(b1 ,...,bg ) } are same as {γ(b1 ,...,bg ) } with the input sequence is {Xi −cn }, ∗ cn = E[X1 I(|X1 | ≤ an )]. For 1 ≤ j ≤ g, and with Ut,j , Bt , Zt as defined in Section 9.4, define ˜ j (ω) = L

∞ X

 ∗ cos(2πUt,j )(Bt (ω)Zt (ω) − µt )

t=1

+i

∞ X

 ∗ sin(2πUt,j )(Bt (ω)Zt (ω) − µt ) ,

t=1

Lj (ω) =

∞ X t=1

∞ X   ∗ ∗ cos(2πUt,j )Bt (ω)Zt (ω) + i sin(2πUt,j )Bt (ω)Zt (ω) . t=1

Theorem 9.5.1 (Bose et al. (2011a)). Let n = k g − 1. ˜ A (·, ω) be the ESD of {˜ (a) Let L γ(b1 ,...,bg ) : (b1 , . . . , bg ) ∈ Tn0 }. Then k,n

156

Heavy-tailed input: LSD

D ˜A ˜ A , where L ˜ A (·, ω) is the random distribution induced by L → L k,n k k ˜ 1 (ω)L ˜ 2 (ω) · · · L ˜ g (ω). L D

(b) Then LAk,n → LAk , where LAk (·, ω) is the random distribution induced by L1 (ω)L2 (ω) . . . Lg (ω). The proof of Theorem 9.5.1 is similar to the proof of Theorem 9.4.2, so we skip it.

9.6

Tail of the LSD

Now we show that even though the input sequence is heavy-tailed, the LSDs (in the almost sure sense) are light-tailed. Indeed for α ∈ (0, 1) they have bounded support. P∞ Lemma 9.6.1. (a) For α ∈ (0, 1), the variable t=1 cos(2πUt∗ )Bt (ω)Zt (ω) has bounded support for almost all ω ∈ Ω. P∞ (b) For α ∈ [1, 2), the variable t=1 cos(2πUt∗ )Bt (ω)Zt (ω) has light-tail for almost all ω ∈ Ω. It is clear from Lemma 9.6.1(a) that the LSD obtained in Theorems 9.4.2(b), 9.5.1(b) and 9.3.2(b) have bounded support for α ∈ (0, 1) and for almost all ω ∈ Ω. From Lemma 9.6.1(b), it follows that the LSD in Theorems 9.4.2(b), 9.5.1(b) and 9.3.2(b) have light-tail for α ∈ [1, 2) and for almost all ω ∈ Ω. Similar conclusions hold about the LSD in Theorems 9.4.2(a), 9.5.1(a) and 9.3.2(a). Now we briefly sketch the proof of Lemma 9.6.1. P −1/α −1/α j Proof. Recall that Zj = Γj = E , where {Ei } is a sequence t=1 t of i.i.d. exponential random variables with mean 1. Hence, by the Law of the Iterated Logarithm (see Section 11.2 of Appendix), for almost all ω and for arbitrary ε > 0 there exist j0 (ω) so that for j ≥ j0 (ω), 

 α1   α1 1 1 √ < Zj (ω) < . √ √ j + ( 2 + ε) j log log j j − ( 2 + ε) j log log j √

Hence j0 (ω)



X t=1



∞ X t=1

Zt (ω) −

∞  X t=j0 (ω)

 α1 1 √ √ t − ( 2 + ε) t log log t j0 (ω)

cos(2πUt∗ )Bt (ω)Zt (ω)



X t=1

Zt (ω) +

∞  X

t=j0

 α1 1 . √ t − ( 2 + ε) t log log t (ω) √

Exercises 157 P∞ So t=1 cos(2πUt∗ )Bt (ω)Zt (ω) is bounded for almost all ω when α ∈ (0, 1). For α ∈ [1, 2), and for almost all ω, ∞ X

Var

∞  X cos(2πUt∗ )Bt (ω)Zt (ω) = Bt2 (ω)Zt2 (ω) Var(cos(2πUt∗ ))

t=1

t=1 ∞ X

= Var(cos(2πU1∗ ))

Bt2 (ω)Zt2 (ω)

t=1

≤ Var(cos(2πU1∗ ))

0 (ω)  jX

Zt2 (ω)

t=1

+

∞ X t=j0 (ω)

and hence

9.7

P∞

t=1

 α2  1 < ∞, √ t − ( 2 + ε) t log log t √

cos(2πUt∗ )Bt (ω)Zt (ω) is light-tailed.

Exercises

1. Prove Theorem 9.3.1. 2. Prove Theorem 9.3.2. 3. Prove Lemma 9.2.1 using Lemmata 1 and 2 of LePage et al. (1981). 4. Prove Lemma 9.4.4. 5. Check how (9.11) and (9.12) follow from Lemma 2 of Knight (1991). 6. Look up the proof of Theorem 9.2.4 in Knight (1991). 7. Prove Theorem 9.5.1.

10 Heavy-tailed input: spectral radius

In this chapter the focus is on the distributional convergence of the spectral radius and hence of the spectral norm of the circulant, the reverse circulant and the symmetric circulant matrices when the input sequence is heavy-tailed. The approach is to use the already known methods for the study of the maximum of periodograms for heavy-tailed sequences. This method is completely different from the method used to derive the results in Chapter 5 for light-tailed entries.

10.1

Input sequence and scaling

We recall some of the definitions and notations introduced in Section 9.1 of Chapter 9. Let {Zt , t ∈ Z} be a sequence of i.i.d. random variables with common distribution F which is in the domain of attraction of an α-stable law with 0 < α < 1. Thus, there exist p, q ≥ 0 with p + q = 1 and a slowly varying function `, such that as x → ∞, P(Z1 > x) P(Z1 ≤ −x) → p, → q and P(|Z1 | > x) ≈ x−α `(x). P(|Z1 | > x) P(|Z1 | > x)

(10.1)

Let {Γj }, {Uj } and {Bj } be three independent sequences defined on the same Pj probability space where Γj = i=1 Ei and {Ei } are i.i.d. exponential with mean 1, Uj are i.i.d. U (0, 1), and Bj are i.i.d. with P(B1 = 1) = p and P(B1 = −1) = q,

(10.2)

where p and q are as defined in (10.1). We recall the class Sα (σ, β, µ) from Section 9.1, and also define ∞ Z ∞ −1 X −1 −1/α D Yα = Γj ' Sα (Cα α , 1, 0) where Cα = x−α sin xdx . j=1

0

(10.3) For a non-decreasing function f on R, let f ← (y) = inf{s : f (s) > y}. Then the scaling sequence {bn } is defined as bn =

← 1 (n) ≈ n1/α `0 (n) for some slowly varying function `0 . P[|Z1 | > ·] 159

160

Heavy-tailed input: spectral radius

10.2

Reverse circulant and circulant

Consider the RCn with the input sequence {Zt }. Then from Section 1.3 of Chapter 1, the eigenvalues {λk , 0 ≤ k ≤ n − 1} of b1n RCn are given by:  λ0      λn/2      λk

=

1 bn

Pn−1

Zt

=

1 bn

Pn−1

(−1)t Zt , if n

t=0

= −λn−k =

where In (ωk ) = The eigenvalues of

t=0

1 bn Cn

λj =

p

is even

(10.4)

In (ωk ), 1 ≤ k ≤ b n−1 2 c,

n−1 1 X 2πk | Zt e−itωk |2 and ωk = . b2n t=0 n

are given by n 1 X Zt eitωj , 0 ≤ j ≤ n − 1. bn t=1

From the eigenvalue structure of Cn and RCn , it is clear that sp(

1 1 Cn ) = sp( RCn ) bn bn

and therefore they have identical limiting behavior. This behavior is described in the following result. Theorem 10.2.1 (Bose et al. (2010)). Suppose the input sequence {Zt } satisfies (10.1). Then for α ∈ (0, 1), both sp( b1n Cn ) and sp( b1n RCn ) converge in distribution to Yα , where Yα is as in (10.3). Note that {|λk |2 ; 1 ≤ k < n/2} is the periodogram of {Zt } at the frequencies {ωk ; 1 ≤ k < n/2}. Mikosch et al. (2000) have established the weak convergence of the maximum of the periodogram based on a heavy-tailed sequence for 0 < α < 1. The main idea for the proof of Theorem 10.2.1 is taken from their work. Let x (·) denote the point measure which gives unit mass to any set containing x and let E = [0, 1] × ([−∞, ∞]\{0}). Let Mp (E) be the set of point measures on E, topologized by vague convergence. The following convergence result follows from Proposition 3.21 of Resnick (1987): Nn :=

n X k=1

V

(k/n,Zk /bn ) → N :=

∞ X j=1

(Uj ,Bj Γ−1/α ) in Mp (E).

(10.5)

j

Suppose f is a bounded continuous complex valued function defined on R and

Reverse circulant and circulant

161

without loss of generality assume |f (x)| ≤ 1 for all x ∈ R. Now pick η > 0, and define Tη : Mp (E) −→ C[0, ∞) as follows: X (Tη m)(x) = vj I{|vj |>η} f (2πxtj ), j

if m =

P

j (tj ,vj )

∈ Mp (E) and vj ’s are finite. Elsewhere, set (Tη m)(x) = 0.

Lemma 10.2.2. The map Tη : Mp (E) −→ C[0, ∞) is continuous almost surely with respect to the distribution of N . Proof. This Lemma was proved by Mikosch et al. (2000) for the function f (x) = exp(−ix). The same proof works in our case. For the sake of completeness we give the details. V It is enough to show that if xn → x ≥ 0 and mn → m in Mp (E), where m{∂([0, 1] × {|v| ≥ η}) ∩ [0, 1] × {−∞, ∞}} = 0, then (Tη mn )(xn ) → (Tη m)(x). To show this, let X mn (·) =  (n) j

tj

(n)

,vj

 (·) and m(·) =

X

(tj ,vj ) (·).

j

Consider the set Kη := [0, 1] × {v : |v| ≥ η}. V

Clearly, Kη is compact in E with m(∂Kη ) = 0. Since mn → m, we can find an n0 such that for n ≥ n0 , mn (Kη ) = m(Kη ) =: l, say, and there is an enumeration of the points in Kη such that   (n) (n)  tk , vk , 1 ≤ k ≤ l → (tk , vk ), 1 ≤ k ≤ l . Without loss of generality we can assume that for given ξ > 0, (n)

sup |xn | ∨ sup |vk | ≤ ξ.

n≥n0

1≤k≤l

Therefore l l X X (n) (n) |(Tη mn )(xn ) − (Tη m)(x)| = vk f (−2πxn tk ) − vk f (−2πxtk ) k=1

k=1

l X (n) v f (−2πxn t(n) ) − vk f (−2πxtk ) ≤ k

k

k=1



l X (n) v − vk k

k=1

+

l X k=1

(n) |vk | f (−2πxn tk ) − f (−2πxtk ) ,

162

Heavy-tailed input: spectral radius

so that lim |(Tη mn )(xn ) − (Tη m)(x)| = 0.

n→∞

This completes the proof of the Lemma. Lemma 10.2.3. For 0 < α < 1 and 0 ≤ x < ∞, the following convergence holds in C[0, ∞) as n → ∞: Jn,Z (x/n) :=

n X Zj j=1

D

f (2πxj/n) → J∞ (x) :=

bn

∞ X

−1/α

Bj Γj

f (2πxUj ).

j=1

Proof. The proof is similar to the proof of Proposition 2.2 of Mikosch et al. (2000). We briefly sketch the proof in our case. Applying Lemma 10.2.2 on (10.5) we have (η)

Jn,Z (x/n) := D



n X Zj j=1 ∞ X

bn

f (2πxj/n)I{|Zj |>ηbn } −1/α

Bj Γj

(η) f (2πxUj )I{Γ−1/α >η} =: J∞ (x) in C[0, ∞). j

j=1

Also, as η → 0, by DCT we have D

(η) J∞ (x) → J∞ (x) :=

∞ X

−1/α

Bj Γj

f (2πxUj ).

j=1

So using Theorem 3.2 of Billingsley (1995), the proof will be complete if for any  > 0,  (η) lim lim sup P kJn,Z − Jn,Z k∞ >  = 0. (10.6) η→0 n→∞

Here kx(·) − y(·)k∞ , the metric distance in C[0, ∞), is given by kx(·) − y(·)k∞

=

kx(·) − y(·)kn

=

∞ X 1 [kx(·) − y(·)kn ∧ 1] , where 2n n=1

sup |x(t) − y(t)|. t∈[0,n]

Now n X Zj   (η) I{|Z |≤ηb } >  lim lim sup P kJn,Z − Jn,Z k∞ >  ≤ lim lim sup P j n η→0 n→∞ η→0 n→∞ b n j=1



Z1  lim lim sup n−1 E I{|Z1 |≤ηbn } . η→0 n→∞ bn

By an application of Karamata’s theorem (see Resnick (1987), Exercise 0.4.2.8), we get that Z1  n E I{|Z1 |≤ηbn } bn



α α nη P(|Z1 | > ηbn ) ≈ η 1−α , 1−α 1−α

Reverse circulant and circulant and

α 1−α 1−α η

163

→ 0 as η → 0. This completes the proof of the lemma.

Proof of Theorem 10.2.1. We use Lemma 10.2.2 and Lemma 10.2.3 with f (x) = exp(−ix). It is immediate that n 1 1 X sp(Cn ) ≤ |Zt |. bn bn t=1

(10.7)

It is well known that (see Feller (1971)) n ∞ X 1 X D −1/α D |Zt | → Yα = Γj ' Sα (Cα−1/α , 1, 0). bn t=1 j=1

(10.8)

Hence it remains to show that for γ > 0, lim inf P n→∞

 1 sp(Cn ) > γ ≥ P(Yα > γ). bn

(10.9)

Now observe that for any integer K and sufficiently large n,   P sup |Jn,Z (j/n)| > γ ≥ P sup |Jn,Z (j/n)| > γ . j=1,...,b n 2c

j=1,...,K

On the other hand, from Lemma 10.2.3 we have  D  Jn,Z (j/n), 1 ≤ j ≤ K → J∞ (j), 1 ≤ j ≤ K in Rk . Hence

D

|Jn,Z (j/n)| →

sup j=1,...,K

|J∞ (j)|,

sup j=1,...,K

and so letting K → ∞, lim inf P n→∞

sup j=1,...,b n 2c

 |Jn,Z (j/n)| > γ ≥ P

sup

 |J∞ (j)| > γ .

j=1,...,∞

Now the theorem follows from Lemma 10.2.4 given below. Lemma 10.2.4. |J∞ (j)| =

sup j=1,...,∞

sup j=1,...,∞

∞ X −1/α Bt Γt exp(−2πijUt ) = Yα a.s. t=1

Proof. Define Ω0

=



ω∈Ω:

∞ X

−1/α

Γj

(ω) < ∞ and for all m ≥ 1,

j=1

(U1 (ω), . . . , Um (ω)) are rationally independent .

164

Heavy-tailed input: spectral radius

Then P(Ω0 ) = 1. Let x denote the fractional part of x. Now by Weyl (1916), for any ω ∈ Ω0 ,  nU1 (ω), . . . , nUm (ω) , n ∈ N is dense in [0, 1]m . Fix any ω ∈ Ω0 and  > 0. Then there exist an N ∈ N such P∞ −1/α that j=N +1 Γj (ω) < , and from the results of Weyl (1916) there exist an N0 ∈ N such that   R Bj exp(−2πiN0 Uj ) ≥ 1 − , j = 1, . . . , N. −1/α N Γj Then we have ∞ N X X −1/α −1/α sup Bt Γt exp(−2πijUt ) ≥ sup Bt Γt exp(−2πijUt )

1≤j≤∞

1≤j≤∞

t=1

t=1 ∞ X



−1/α

Γt

t=N +1 N X −1/α ≥ Bt Γt exp(−2πiN0 Ut ) −  t=1

≥R

N X

−1/α

Bt Γt

 exp(−2πiN0 Ut ) − 

t=1



N X

1−

t=1

=

N X

 −1/α N Γt

−1/α

Γt



−1/α

Γt

−

− 2.

t=1

Letting first N → ∞, and then  → 0, we get supj=1,...,∞ |J∞ (j)| ≥ Yα . Trivially supj=1,...,∞ |J∞ (j)| ≤ Yα . This completes the proof.

10.3

Symmetric circulant

We recall that the eigenvalues {λk , 0 ≤ k ≤ n − 1} of b1n SCn , when the input sequence is {Zt }, is given by (see Section 1.2 of Chapter 1): (i) for n odd:    λ0  

λk

=

1 bn

 Pb n2 c  Z0 + 2 j=1 Zj

=

1 bn

  Pb n2 c Z0 + 2 j=1 Zj cos(ωk j) , 1 ≤ k ≤ b n2 c,

Symmetric circulant

165

(ii) for n even:    P n2 −1 1  Zj + Zn/2  λ0 = bn Z0 + 2 j=1   λ k

=

1 bn

  P n2 −1 Z0 + 2 j=1 Zj cos(ωk j) + (−1)k Zn/2 , 1 ≤ k ≤

n 2,

with λn−k = λk in both cases. Theorem 10.3.1 (Bose et al. (2010)). Assume that the input sequence is D i.i.d. {Zt } and satisfies (10.1). Then for α ∈ (0, 1), sp( b1n SCn ) → 21−1/α Yα , where Yα is as in (10.3). Proof. The proof is similar to the proof of Theorem 10.2.1. We provide a sketch of the proof for odd n. For even n, the changes needed are minor. Define q 1 X Jn,Z (x) := 2 Zt cos(2πxt) and Mn,Z := max Jn,Z (k/n) , 0≤k≤q bn t=1

(10.10)

where q = q(n) = b n2 c. Since sp( b1n SCn ) − Mn,Z → 0 almost surely, it is D

enough to show that Mn,Z → 21−1/α Yα . Note Pqthat (10.5) holds with P∞[0, 1] replaced by [0, 1/2], and upon letting Nn = k=1 (k/n,Zk /bq ) , N = j=1 (Uj ,Bj Γ−1/α ) , and Uj to be i.i.d. U [0, 1/2]. j

Now following the arguments given in the proof of Lemma 10.2.2, Lemma 10.2.3, and with f (x) = cos x, Jn,Z (x/n) = 2

q ∞ 1 X 2πkx D 1−1/α X −1/α Zk cos →2 Bj Γj cos(2πxUj ). bn n j=1 k=1

(10.11) It is obvious that Mn,Z ≤ 2

q ∞ X 1 X D −1/α |Zt | → 21−1/α Γj = 21−1/α Yα . bn t=1 j=1

Following the arguments given to prove (10.9), we can show that lim inf P(Mn,Z > η) ≥ P(21−1/α Yα > η), for η > 0. n→∞

This completes the proof of the theorem. Remark 10.3.1. (i) Theorems 10.2.1 and 10.3.1 are rather easy to derive when p = 1, that is, when the left tail is negligible compared to the right tail. Let us consider sp( b1n RCn ). From its eigenvalue structure, n 1 1 X sp( RCn ) ≤ |Zt |. bn bn t=1

166

Heavy-tailed input: spectral radius

For the lower bound note that n   1 1 X P sp( RCn ) > x ≥ P(λ0 > x) = P Zt > x . bn bn t=1

Now since P(|Z1 | > x) ≈ P(Z1 > x) as x → ∞, the upper and the lower bounds converge with the same scaling constant and hence Theorem 10.2.1 holds. The details on these convergences can be found in Chapter 1 of Samorodnitsky and Taqqu (1994). A similar conclusion can be drawn for the symmetric circulant matrices too when p = 1. (ii) When the input sequence {Zi } is i.i.d. non-negative and satisfies (10.1) with α ∈ (1, 2) then from the above arguments it is easy to derive the distributional behavior of the spectral radius. In particular, if ∞

kj =

X α−1 α−1  α −1 D fα = j α − (j − 1) α and Y (Γj − kj ) ∼ Sα (Cα α , 1, 0) α−1 j=1

then, as n → ∞, P and P

 sp(RC ) − n E[Z ]   n 1 fα > x >x →P Y bn

 sp(SC ) − n E[Z ]   n 1 fα > x . > x → P 21−1/α Y bn

If α = 1 and {Zi } are non-negative, then R  sp(RCn ) − nbn ∞ sin( x ) P(Z1 ∈ dx)   f bn 0 fα > x , P >x →P Y bn f fα is an S1 (2/π, 1, 0) random variable. Similar results hold for the where Y symmetric circulant matrix also.

10.4

Heavy-tailed: dependent input

Now suppose that the input sequence is a linear process {Xt , t ∈ Z} and it is given by Xt =

∞ X j=−∞

aj Zt−j ,

t ∈ Z, where

∞ X

|aj |α−ε < ∞ for some 0 < ε < α.

j=−∞

(10.12)

Heavy-tailed: dependent input

167

Suppose that {Zi } are i.i.d. random variables and satisfy (10.1) with 0 < α < 1. Using E |Z|α−ε < ∞ and the assumption on the {aj }, we have E |Xt |α−ε ≤

∞ X

|aj |α−ε E |Zt−j |α−ε = E |Z1 |α−ε

j=−∞

∞ X

|aj |α−ε < ∞.

j=−∞

Hence Xt is finite almost surely. Let ψ(x) =

∞ X

aj exp(−i2πxj), x ∈ [0, 1],

j=−∞

be the transfer function of the linear filter {aj }, and fX (x) = |ψ(x)|2 , be the power transfer function of {Xt }. Define |λk | |λk | M (RCn , fX ) = maxn p , M (Cn , fX ) = maxn p , 0≤k< 2 0≤k< 2 fX (k/n) fX (k/n) |λk | M (SCn , fX ) = maxn p , 0≤k< 2 fX (k/n) where in each case {λk } are the eigenvalues of the corresponding matrix. From the eigenvalue structure of Cn and RCn , M (Cn , fX ) = M (RCn , fX ). Theorem 10.4.1 (Bose et al. (2010)). Suppose that {Zt } is i.i.d and satisfies (10.1). Further, the input sequence is {Xn }, and {aj } satisfies (10.12). Suppose fX is strictly positive on [0, 1/2]. Then 1 1 D D Cn , fX ) → Yα and M ( RCn , fX ) → Yα . bn bn 1 D (b) Further, if aj = a−j , then M ( SCn , fX ) → 21−1/α Yα . bn (a) M (

Proof. (a) The proof is along the lines of the proof of Lemma 2.6 in Mikosch cn be the circulant matrix formed with independent entries et al. (2000). Let C {Zi }. To prove the result it is enough to show that P cn ) → M ( 1 Cn , fX ) − sp( 1 C 0. bn bn Pn Let Jn,Z (x) = b1n t=1 Zt exp(−i2πxt). Note that cn ) = sup (fX (k/n))−1/2 |Jn,X (k/n)| M ( 1 Cn , fX ) − sp( 1 C bn bn 1≤k≤n − sup |Jn,Z (k/n)| 1≤k≤n

≤ sup |ψ(k/n)−1 Jn,X (k/n)| − |Jn,Z (k/n)| 1≤k≤n ≤ sup ψ(k/n)−1 Jn,X (k/n) − Jn,Z (k/n) , 1≤k≤n

168

Heavy-tailed input: spectral radius

and Jn,X (x) = =

n 1 X Xt exp(−i2πxt) bn t=1 ∞ n X  1 X aj exp(−i2πxj) Zt exp(−i2πxt) + Vn,j bn j=−∞ t=1

= ψ(x)Jn,Z (x) + Yn (x),

(10.13)

where Vn,j Yn (x)

n−j X

=

=

Zt exp(−i2πxt) −

Zt exp(−i2πxt),

t=1

t=1−j ∞ X

1 bn

n X

aj exp(−i2πxj)Vn,j .

j=−∞

Since fX is bounded away from 0 and (10.13) holds, it is enough to show that P max1≤k≤n |Yn (k/n)| → 0. Now Yn (x) =

∞ n 1 X 1 X aj exp(−i2πxj)Vn,j + aj exp(−i2πxj)Vn,j bn j=n+1 bn j=1

+

−n−1 −1 1 X 1 X aj exp(−i2πxj)Vn,j + aj exp(−i2πxj)Vn,j bn j=−∞ bn j=−n

= S1 (x) + S2 (x) + S3 (x) + S4 (x). Now following an argument similar to that given in the proof of Lemma 2.6 in Mikosch et al. (2000), we can show that P

max |Si (k/n)| → 0 for i = 1, 2.

1≤k≤n

The behavior of S3 (x) and S4 (x) are similar to S1 (x) and S2 (x), respectively. For the sake of completeness we give the details. Note that ∞ X S1 (x) = −Jn,Z (x) aj exp(−i2πxj) j=n+1

+

1 bn

∞ X

aj exp(−i2πxj)

j=n+1

n−j X

Zt exp(−i2πxj)

t=1−j

= S11 (x) + S12 (x), and max |S11 (k/n)| ≤ max |Jn,Z (x)|

1≤k≤n

1≤k≤n

∞ X j=n+1

P

|aj | → 0.

Heavy-tailed: dependent input

169 P

To show that max1≤k≤n |S12 (k/n)| → 0, we write |S12 (x)| ≤ Note that

−1 n−t −n−1 n−t X X 1 X 1 X |Zt | |aj | + |Zt | |aj |. bn t=−n bn t=−∞ n+1 j=1−t

(10.14)

−1 n−t n n+l X X |Zt | X |Zl | X D P |aj | = |aj | → 0, b b n j=n+1 t=−n n j=n+1 l=1

since

1 bn

Pn

l=1

D

|Zl | → Yα (see Feller (1971)), and by (10.12), ∞ X

|aj | → 0.

j=n+1

To tackle the second term in (10.14), we observe that −n−1 n−t  n o X 1 X φn (λ) := E exp − λ |Zt | |aj | = E[exp(T1 + · · · + Tn )], bn t=−∞ j1−t

where Tj = −λ b1n

(10.15)

P∞

i=n+1 |Zi ||ai+j |, j = 1, . . . , n. Note that

E exp(T1 + · · · + Tn ) ≥

n Y

E exp(Tj ) ≥ E exp(T1 )

n

,

(10.16)

j=1

since T1 , . . . , Tn are associated. Let φ(λ) = E exp{−λ|Z1 |}. Now by Karamata’s Tauberian theorem (see page 471, Feller (1971)), − log φ(λ) ∼ 1 − φ(λ) ∼ Γ(1 − α)λα L(1/λ),

λ ↓ 0.

(10.17)

where L(tx)/L(t) → 1 as t → ∞. Now using the Potter bounds (see Proposition 0.8 in Resnick (1987)), we see that for large n and some c > 0, E exp(T1 )

=

∞ Y

φ(

i=n+1

= ≥ ≥

exp

n

exp

n

exp

n

− −

1 λ|aj |) bn ∞ X

(− log φ(

i=n+2 ∞ X

o 1 λ|aj |)) bn

− log φ( b1n λ|aj |) o c n i=n+2 − log φ( b1n λ)

∞ o c X − |aj |(α−ε) , by (10.17), n i=n+2

170

Heavy-tailed input: spectral radius

and hence from (10.15) and (10.16), we conclude that φn (λ) → 1. Hence P

max |S12 (k/n)| → 0.

1≤k≤n

Now we write S2 in the following way: n 0 X 1 X aj exp(−i2πxj) Zt exp(−i2πxt) bn j=1 t=1−j

|S2 (x)| =



n n X 1 X aj exp(−i2πxj) Zt exp(−i2πxt) bn j=1 t=n−j+1

n 1 X |aj | bn j=1



0 X

n X

|Zt | +

t=1−j

 |Zt |

t=n−j+1

= S21 + S22 . Now S21

=

n X

|aj |

j=1 D

=

n X

|aj |

j=1

=

j X |Zk |

bn

k=1

k0 n X X k=1

=

0 X |Zt | bn t=1−j

j=k

n X  |Zk | |aj | + bn

k=k0 +1

n X j=k

 |Zk | |aj | bn

S211 + S212 ,

and for fixed k0 , S211 ≤

X

k0

k=1

If we choose k0 such that

P∞

k=k0

|ak | < ε, then

n X

S212 ≤ ε

∞ |Zk |  X P |aj | → 0. bn j=1

k=k0 +1

n

X |Zk | D |Zk | ≤ε → εYα . bn bn k=1

P

Since ε > 0 can be chosen arbitrarily small, we conclude that S212 → 0. This P proves that S21 → 0. For S22 , we note that bn S22 =

n X j=1

|aj |

n X t=n−j+1

|Zt | =

n X t=1

|Zt |

n X j=n−t+1

D

|aj | =

n X t=1

|Zt |

n X j=t

|aj |.

Exercises

171

Now the expression on the right side of the last equation can be analyzed in a similar way as S21 . This completes the proof of the fact that P

max |S2 (k/n)| → 0.

1≤k≤n

The behavior of S3 (x) and S4 (x) are similar to S1 (x) and S2 (x) respectively. Therefore, following a similar argument we can show that P max1≤k≤n |Sj (k/n)| → 0 for j = 3, 4. This completes the proof of part (a). dn be the symmetric circulant matrix formed with independent (b) Let SC entries {Zi }. In view of Theorem 10.3.1, it is enough to show that P dn ) → M ( 1 SCn , fX ) − sp( 1 SC 0. bn bn Rest of the proof is similar to part (a). We leave it as an exercise.

10.5

Exercises

1. Show that ∞ X

−1/α Γj

D

'

−1 Sα (Cα α , 1, 0)

Z where Cα =

j=1



x−α sin xdx

−1

.

0

2. Prove Theorem 10.3.1 when n is even. 3. (Exercise 0.4.2.8. of Resnick (1987)) Suppose F (x) is a distribution on R+ and 1 − F (x) ∼ x−α L(x), where L is a slowly varying function. Using integration by parts or Fubini’s theorem, show that for η ≥ α, Rx η u F (du) α lim η0 = . x→∞ x (1 − F (x)) η−α 4. Complete the proof of part (b) of Theorem 10.4.1.

11 Appendix

The purpose of this Appendix is three-fold: first, to give a detailed proof of the eigenvalue formula solution for the k-circulant that has been used crucially in the book; second, to briefly define and explain some of the background material in probability theory that was needed in the main text; finally, to state a few technical results from the literature that have been used repeatedly in the proofs of our results.

11.1

Proof of Theorem 1.4.1

In what follows, χ(A)(λ) stands for the characteristic polynomial of the matrix A evaluated at λ but for ease of notation, we shall often suppress the argument λ and write simply χ(A). We recall {αq } and {βq } from (1.4) and define m := max dβq /αq e, 1≤q≤c

[t]m,b := tk m mod b, b is a positive integer.

(11.1)

Let em,d be a d × 1 vector whose only non-zero element is 1 at (m mod d)-th position, Em,d be the d × d matrix with ejm,d , 0 ≤ j < d as its columns, and for dummy symbols δ0 , δ1 , . . ., let ∆m,b,d be a diagonal matrix as given below. e0m,d

=

Em,d

=

∆m,b,d

=

[0 · · · 0 1 0 · · · 0]1×d , (11.2)   e0,d em,d e2m,d . . . e(d−1)m,d , (11.3)   diag δ[0]m,b , δ[1]m,b , . . . , δ[j]m,b , . . . , δ[d−1]m,b . (11.4)

Note that h i ∆0,b,d = diag δ0 mod b , δ1 mod b , . . . , δj mod b , . . . , δd−1 mod b . We need the following lemmata for the main proof. Lemma 11.1.1. Let π = (π(0), π(1), . . . , π(b − 1)) be a permutation of (0, 1, . . . , b − 1). Let   Pπ = eπ(0),b eπ(1),b . . . eπ(b−1),b . 173

174

Appendix

Then, Pπ is a permutation matrix and the (i, j)th element of PπT Ek,b ∆0,b,b Pπ is given by  δt if (i, j) = (π −1 (kt mod b), π −1 (t)), 0 ≤ t < b (PπT Ek,b ∆0,b,b Pπ )i,j = 0 otherwise. The proof is easy and we omit it. Lemma 11.1.2. Let k and b be positive integers. Then χ (Ak,b ) = χ (Ek,b ∆0,b,b ) , where δj =

Pb−1 l=0

al ω jl , 0 ≤ j < b, ω = cos(2π/b) + i sin(2π/b), i =



−1.

Proof. Define the b × b permutation matrix   0 Ib−1 Pb = . 1 0T Observe that for 0 ≤ j < b, the j-th row of Ak,b can be written as aT Pbjk where Pbjk stands for the jk-th power of Pb . From direct calculation, it is easy to verify that Pb = U DU ∗ is a spectral decomposition of Pb , where D

=

diag(1, ω, . . . , ω b−1 ),

U

=

[u0 u1 . . . ub−1 ] with

uj

=

−1/2

b

j

2j

(1, ω , ω , . . . , ω

(11.5) (11.6) (b−1)j T

) , 0 ≤ j < b.

Note that δj = aT uj , 0 ≤ j < b. From easy computations, it now follows that U ∗ Ak,b U = Ek,b ∆0,b,b , so that, χ (Ak,b ) = χ (Ek,b ∆0,b,b ), and the lemma is proven. Lemma 11.1.3. Let k and b be positive integers, and x = b/gcd(k, b). Let for dummy variables γ0 , γ1 , γ2 , . . . , γb−1 , Γ = diag (γ0 , γ1 , γ2 , . . . , γb−1 ) . Then  χ (Ek,b × Γ) = λb−x χ Ek,x × diag γ0 mod b , γk mod b , . . . , γ(x−1)k mod b . (11.7) Proof. Define the matrices,   Bb×x = e0,b ek,b e2k,b . . . e(x−1)k,b and P = [B B c ] , where B c consists of those columns (in any order) of Ib that are not in B. This makes P a permutation matrix.

Proof of Theorem 1.4.1

175

Clearly, Ek,b = [B B · · · B] which is a b × b matrix of rank x, and we have  χ (Ek,b Γ) = χ P T Ek,b ΓP . Note that 

T

P Ek,b ΓP

=

Ix 0(b−x)×x

 =

C

where C

C

... ...

Ix 0(b−x)×x

 ΓP

 P

0(b−x)×b 

=

Ix 0(b−x)×x



0(b−x)×b

c

[B B ] =



CB 0

CB c 0



=

[Ix Ix · · · Ix ] Γ

=

[Ix Ix · · · Ix ] × diag(γ0 , γ1 , . . . , γb−1 ).

,

Clearly, the characteristic polynomial of P T Ek,b ΓP does not depend on CB c . This explains why we had not bothered to specify the order of columns in B c . Thus we have  χ (Ek,b Γ) = χ P T Ek,b ΓP = λb−x χ (CB) . It now remains to show that  CB = Ek,x × diag γ0 mod b , γk mod b , γ2k mod b , . . . , γ(x−1)k mod b . Note that the j-th column of B is ejk,b . So the j-th column of CB is actually the (jk mod b)-th column of C. Hence (jk mod b)-th column of C is γjk mod b ejk mod x . So  CB = Ek,x × diag γ0 mod b , γk mod b , γ2k mod b , . . . , γ(x−1)k mod b , and the Lemma is proven completely. Proof of Theorem 1.4.1.. We first prove the Theorem for Ak,n0 . Since k and n0 are relatively prime, by Lemma 11.1.2, χ(Ak,n0 ) = χ(Ek,n0 ∆0,n0 ,n0 ). Get the partitioning sets P0 , P1 , . . . of {0, 1, . . . , n0 − 1}, as in (1.7) where Pj = {rj k x mod n0 , 0 ≤ x < #Pj } for some integer rj . Let N0 = 0 and Pj Nj = i=1 ni where ni = #Pi . Define a permutation π on the set Zn0 as follows: π(0) = 0 and π(Nj + t) = rj+1 k t−1

mod n0 for 1 ≤ t ≤ nj+1 and j ≥ 0.

176

Appendix

This permutation π automatically yields a permutation matrix Pπ as in Lemma 11.1.1. Consider the positions of δv for v ∈ Pj in the product PπT Ek,n0 ∆0,n0 ,n0 Pπ . We know that v = rj k t−1 mod n0 for some 1 ≤ t ≤ nj . Thus  π −1 rj k t−1 mod n0 = Nj−1 + t, 1 ≤ t ≤ nj , so that the position of δv for v = rj k t−1 mod n0 , 1 ≤ t ≤ nj in PπT Ek,n0 ∆0,n0 Pπ is given by,  π −1 (rj k t mod n0 ), π −1 (rj k t−1 mod n0 )  (Nj−1 + t + 1, Nj−1 + t) if, 1 ≤ t < nj = . (Nj−1 + 1, Nj−1 + nj ) if, t = nj Hence PπT Ek,n0 ∆0,n0 ,n0 Pπ = diag (L0 , L1 , . . .) ,   where for j ≥ 0, if nj = 1 then Lj = δrj is a 1 × 1 matrix, and if nj > 1, then   0 0 0 ... 0 δrj knj −1 mod n0  δrj mod n0  0 0 ... 0 0     0 0 δrj k mod n 0 . . . 0 0 Lj =  .   ..   . 0

0

0 ...

δrj knj −2

mod n0

0

Clearly, χ(Lj ) = λnj − yj . Now the result follows from the identity Y Y χ (Ek,n0 ∆0,n0 ,n0 ) = χ(Lj ) = (λnj − yj ). j≥0

j≥0 β

Now let us prove the result for the general case. Recall that n = n0 × Πcq=1 pq q . Then again using Lemma 11.1.2, χ(Ak,n ) = χ(Ek,n ∆0,n,n ). Recalling (11.1) and Lemma 11.1.2 and using Lemma 11.1.3 repeatedly with y = n/n0 , χ(Ak,n )

= = = =

χ(Ek,n ∆0,n,n ) 0 λn−n χ(Ek,n0 ∆m,n,n0 ) 0 λn−n χ(Ek,n0 ∆m+j,n,n0 ) [for all j ≥ 0]  0 λn−n χ Ek,n0 × diag δ[0]0,n , δ[y]0,n , δ[2y]0,n , . . . , δ[(n0 −1)y]0,n .

 Replacing ∆0,n0 ,n0 by diag δ[0]0,n , δ[y]0,n , δ[2y]0,n , . . . , δ[(n0 −1)y]0,n , we can mimic the rest of the proof given for Ak,n0 , to complete the proof in the general case.

Standard notions and results

11.2

177

Standard notions and results

Probability inequalities: In this book we have used a few elementary probability inequalities. Boole’s inequality: Suppose {Ei , i ≥ 1} are events in some probability space (Ω, A, P). Then ∞ ∞ [  X P Ei ≤ P(Ei ). (11.8) n=1

n=1

Bonferroni inequality: These provide both upper and lower bounds, and are based on the inclusion-exclusion principle. Suppose E1 , E2 , . . . , En are events in some probability space (Ω, A, P). Then for every integer m such that 2 ≤ 2m ≤ n, 2m n [  2m−1 X X (−1)j−1 Sj,n ≤ P Ei ≤ (−1)j−1 Sj,n , j=1

j=1

j=1

where X

Sj,n :=

P

1≤i1 0. Often the convergence is verified by using Chebyshev’s inequality. Convergence in probability implies convergence in distribution. Almost sure convergence: Suppose {Xn } is a sequence of Rd -valued random variables and X is another Rd -valued random variable, all defined on the same probability space. Then we say that Xn converges to X almost surely if P{ω : lim Xn (ω) = X(ω)} = 1. n→∞

Almost sure convergence implies convergence in probability. Borel-Cantelli lemma: Suppose {En } is a sequence of events in a probability space. Then ∞ X

∞ P(En ) < ∞ implies that P(lim sup En ) = P(∩∞ n=1 ∪k=n Ek ) = 0.

n=1

Dominated Convergence Theorem (DCT): Suppose {Xn } is a sequence of real-valued random variables which converges almost surely to X. If there exists a random variable Y such that supn |Xn | ≤ Y and E Y < ∞, then E(Xn ) → E(X). Limit theorems for sums: Three basic limit theorems in probability are the strong law of large numbers (SLLN), the central limit theorem (CLT) and the law of iterated logarithm. Central Limit Theorem: This is the first fundamental limit theorem in probability theory and is an excellent non-trivial example of convergence in distribution. Suppose {Xn } is a sequence of i.i.d. real-valued random variables defined on the same probability space with mean µ and finite variance σ 2 . Then X1 + X2 + · · · + Xn − nµ √ → N (0, 1) in distribution, nσ where N (0, 1) denotes the standard normal distribution.

Standard notions and results

179

Strong Law of Large Numbers: This is the second fundamental limit theorem in probability theory and is an excellent non-trivial example of almost sure convergence. Suppose {Xn } is a sequence of i.i.d. real-valued random variables on the same probability space with mean µ. Then X1 + X2 + · · · + Xn → µ almost surely. n Law of Iterated Logarithm: This is the third fundamental limit theorem in probability theory. Suppose {Xn } is a sequence of i.i.d. real-valued random variables with E Xi = 0 and Var(Xi ) = 1. Then Pn Pn Xi i=1 Xi lim sup √ = − lim inf √ i=1 = 1 almost surely. 2n ln ln n 2n ln ln n Normal approximation: As in the Central Limit Theorem, there are many situations where the limit distribution is normal. When this is the case, one tries to provide the accuracy of the normal approximation. The simplest but a far reaching result in this direction is the following bound. Berry-Esseen Bound: Suppose {Xn } is a sequence of i.i.d. real-valued random variables defined on the same probability space with mean µ and finite variance σ 2 . Then X1 + X2 + · · · + Xn − nµ C E |X1 |3  √ √ sup P ≤ x − P(X ≤ x) ≤ nσ σ3 n x∈R where C is a universal constant independent of the distribution of X1 and X is a normal random variable with mean zero and variance 1. In the main text, we have used other sophisticated normal approximation results from the literature. Stationary processes: Broadly speaking, stationary processes are those whose probabilistic behavior (in some sense) remains invariant over time. The basic notions we have used are the following: Weakly stationary sequence: A sequence of real-valued random variables {Xn } is said to be weakly stationary if E(Xn ) does not depend on n and there exists a function γ(·) on non-negative integers such that Cov(Xn , Xm ) = γ(|n − m|) for all values of n and m. Spectral distribution and density: Any such covariance sequence γ(·) is associated with a (unique) distribution function (non-decreasing, rightcontinuous) F with support on [0, 2π] and with the property that Z 2π γ(k) = eikx dF (x) for all k. 0

If F has a density f , then Xit is called the spectral density of the process {Xn }. It is well-known that if |γ(n)| < ∞, then the spectral density exists.

180

Appendix

White noise: If γ(n) = 0 for all n 6= 0, Then {Xn } is called a white noise. It is clear that the spectral density of a white noise sequence is given by f (x) =

γ(0) , for all x ∈ [0, 2π]. 2π

Extreme value theory: There is a well-developed theory of the probabilistic behavior of ordered and extreme values of random sequences. We have borrowed the following notions from this theory: Convergence of types theorem: This is a technical result which allows the replacement of normalizing constants by some equivalent sequence. Let Xn be a sequence of real-valued random variables which converges in distribution to X. Suppose further that for some sequences an > 0 and bn ∈ R, an Xn + bn converges in distribution to X 0 and both X and X 0 are non-degenerate. Then an → a and bn → b for some a > 0, b ∈ R. Equivalently, suppose {Fn }, F and F 0 are distribution functions where F and F 0 are non-degenerate. Moreover, there exist an , a0n > 0 and bn , b0n ∈ R such that Fn (an x + bn ) → F (x) and Fn (a0n x + b0n ) → F 0 (x) for all continuity points x of F , respectively F 0 . Then an /a0n → a > 0, and (bn − b0n )/a0n → b ∈ R, and F 0 (ax + b) = F (x) for all x ∈ R. Standard Gumbel distribution: Its distribution function is defined as Λ(x) = exp{− exp(−x)}, x ∈ R. The Gumbel distribution arises in the study of limit distribution of the maximum. Suppose {Xn } is a sequence of i.i.d. real-valued random variables with distribution function F . Let Mn = max{X1 , . . . , Xn }. In a fundamental result in extreme value theory, necessary sufficient conditions are given for the distributional convergence of Mn∗ = (Mn − bn )/an to a non-degenerate limit for suitable choices of {an , bn }, and three classes of limit distributions are possible. The one which is relevant to us is the Gumbel distribution. In that case we say that X1 or its distribution F is in the max-domain of attraction of the Gumbel distribution. The numbers an and bn are called the norming constants. Here is the result that we have used. A distribution function F# with right end x0 (can be ∞) is called a Von Mises function if it has the following representation: there must exist z0 < x0 such that for z0 < x < x0 and c > 0, n Z x o 1 − F# (x) = c exp − (1/f (u))du , (11.10) z0

where f (u) > 0 for z0 < x < x0 , and f is absolutely continuous on (z0 , x0 ) with density f 0 (u) and limu↑x0 f 0 (u) = 0. Proposition 11.2.1. If F# is a Von Mises function with representation

Standard notions and results

181

(11.10) then F# belongs to the max-domain of attraction of the Gumbel distribution. The norming constants may be chosen as bn = (1/(1 − F ))← (n)

and an = f (bn ).

(11.11)

Here for any non-decreasing function G, G← (x) = inf{y : G(y) ≥ x}. There are criteria which help to identify the norming constants an and bn . The convergence of types result also comes in handy in picking these constants. For instance, if F1 is the standard normal distribution, it is known that in (11.11), an and bn can be taken to be bn = (2 ln n − ln ln n − ln(4π))1/2 and an =

1 . bn

Heavy-tailed distributions: Variables with infinite second moment, and in general with heavy tails, have very different properties compared to the lighttailed ones. We have used the following notions: Stable distributions: Suppose X is a random variable. Let X1 and X2 be independent random variables with the same distribution as that of X. Then X or its distribution is said to be stable if for any constants a, b > 0 the random variable aX1 + bX2 has the same distribution as cX + d for some constants c > 0 and d. For example the normal and the Cauchy distributions are stable. Regularly varying function: A measurable function U : R+ → R+ is said to be regularly varying at ∞ with index ρ if for x > 0, lim

t→∞

U (tx) = xρ . U (t)

We denote this class of function as RVρ . If ρ = 0 we call U slowly varying. Slowly varying functions are generally ∈ RV0 . denoted by L(x) or `(x). Note that if U ∈ RVρ then Ux(x) ρ Notions from Stochastic Processes Vague convergence: A sequence of measures µn is said to converge vaguely to anotherR measure µ, if forRevery bounded continuous function f with compact support, f (x)dµn (x) → f (x)dµ(x). Point process: Roughly speaking, a point process is a collection of points randomly located on some suitable space. Formally, let E be any set equipped with an appropriate sigma-algebra E. For example usually E is some nice subset of Rd and is equipped with the Borel sigma-algebra. Let M (E) be the space of all point measures on E, endowed with the topology of vague convergence. Any point process on E is a measurable map N : (Ω, F, P) → (M (E), E).

182

Appendix

It is said to be simple if P(N ({x}) ≤ 1, x ∈ E) = 1. Poisson point process: The simplest example of a point process is a Poisson point process. Let λ(x), x ∈ Rd be a locally integrable positive function so that Z Λ(B) := λ(x)dx < ∞ for any bounded region B. B

Then N (·) is said to be a Poisson point process with intensity function λ(·) if for any collection of disjoint bounded Borel measurable sets B1 , B2 , . . . , Bk : P(N (Bi ) = ni , i = 1, 2, . . . , k) =

k Y (Λ(Bi ))ni

ni !

i=1

e−Λ(Bi ) .

Clearly, any Poisson point process is simple.

11.3

Three auxiliary results

The following three results have been repeatedly used in the text. Theorem 11.3.1 (Theorem 7.1.2; Brockwell and Davis (2006)). Suppose {Xt } satisfies Xt = µ +

∞ X

ψj Zt−j ,

{Zt } ∼ IID(0, σ 2 ),

j=−∞

where

P∞

j=−∞

|ψj | < ∞ and

P∞

j=−∞

ψj 6= 0. Then

D ¯ n − µ) → n1/2 v −1/2 (X N (0, 1), P P P∞ n ∞ 1 2 2 ¯n = where X i=1 Xi , v = h=−∞ γ(h) = σ ( j=−∞ ψj ) , and γ(·) is the n autocovariance function of {Xt }.

Theorem 11.3.2 (Theorem 2.1; Davis and Mikosch (1999)). Let {Zt } be a sequence of real-valued i.i.d. random variables with mean zero and variance one. If E |Z1 |s < ∞ for some s > 2, then D

max In,Z (ωk ) − ln qn → Y,

k=1,...,qn

where n X 2 In,Z (ωi ) = n−1 exp(−iωk j)Zj , j=1

ωk =

2πk , n

n , and 2 Y has the standard Gumbel distribution Λ(x) = exp(− exp(−x)), x ∈ R. qn = max{k : 0 < ωk < π}, that is, qn ∼

Three auxiliary results

183

Karamata’s theorem examines the properties of the integral of regularly varying functions. Suppose U is locally integrable and is also integrable in a neighborhood of 0. We are interested only in the behavior of functions near ∞. Theorem 11.3.3 (Karamata’s Theorem 0.6, Resnick (1987)). (a) Suppose ρ ≥ −1 and U ∈ RVρ . Then Z x xU (x) U (t)dt ∈ RVρ+1 and lim R x = ρ + 1. x→∞ U (t)dt 0 0 If ρ < −1 (or if ρ = −1 and R∞ U (t)dt is finite, and x Z

R∞ x

U (s)ds < ∞), then U ∈ RVρ implies that



U (t)dt ∈ RVρ+1 x

and

xU (x) lim R ∞ = −ρ − 1. U (t)dt x

x→∞

(b) If U satisfies xU (x) lim R x = λ ∈ (0, ∞), U (t)dt 0

x→∞

then U ∈ RVλ−1 . If

R∞ x

U (t)dt < ∞ and xU (x) lim R ∞ = λ ∈ (0, ∞), U (t)dt x

x→∞

then U ∈ RV−λ−1 .

Bibliography

Auffinger, A., Ben Arous, G., and P´ech´e, S. (2009). Poisson convergence for the largest eigenvalues of heavy tailed random matrices. Ann. Inst. Henri Poincar´e Probab. Stat., 45(3):589–610. Bai, Z. D. and Silverstein, J. W. (2010). Spectral Analysis of Large Dimensional Random Matrices. Springer Series in Statistics. Springer, New York, second edition. Bhatia, R. (1997). Matrix Analysis, volume 169 of Graduate Texts in Mathematics. Springer-Verlag, New York. Bhattacharya, R. N. and Ranga Rao, R. (1976). Normal Approximation and Asymptotic Expansions. John Wiley & Sons, New York-London-Sydney. Wiley Series in Probability and Mathematical Statistics. Billingsley, P. (1995). Probability and Measure. John Wiley & Sons, Inc., New York, third edition. Bose, A. (2018). Patterned Random Matrices. Chapman & Hall. Bose, A., Chatterjee, S., and Gangopadhyay, S. (2003). Limiting spectral distributions of large dimensional random matrices. J. Indian Statist. Assoc., 41(2):221–259. Bose, A., Guha, S., Hazra, R. S., and Saha, K. (2011a). Circulant type matrices with heavy tailed entries. Statist. Probab. Lett., 81(11):1706– 1716. Bose, A., Hazra, R. S., and Saha, K. (2009). Limiting spectral distribution of circulant type matrices with dependent inputs. Electron. J. Probab., 14:no. 86, 2463–2491. Bose, A., Hazra, R. S., and Saha, K. (2010). Spectral norm of circulant type matrices with heavy tailed entries. Electron. Commun. Probab., 15:299–313. Bose, A., Hazra, R. S., and Saha, K. (2011b). Poisson convergence of eigenvalues of circulant type matrices. Extremes, 14(4):365–392. Bose, A., Hazra, R. S., and Saha, K. (2011c). Spectral norm of circulant-type matrices. J. Theoret. Probab., 24(2):479–516. 185

186

Bibliography

Bose, A., Hazra, R. S., and Saha, K. (2012a). Product of exponentials and spectral radius of random k-circulants. Ann. Inst. Henri Poincar´e Probab. Stat., 48(2):424–443. Bose, A. and Mitra, J. (2002). Limiting spectral distribution of a special circulant. Statist. Probab. Lett., 60(1):111–120. Bose, A., Mitra, J., and Sen, A. (2012b). Limiting spectral distribution of random k-circulants. J. Theoret. Probab., 25(3):771–797. Bose, A. and Sen, A. (2007). Spectral norm of random large dimensional noncentral Toeplitz and Hankel matrices. Electron. Comm. Probab., 12:29–35. Paging changed to 21-27 on journal site. Bose, A. and Sen, A. (2008). Another look at the moment method for large dimensional random matrices. Electron. J. Probab., 13(21):588–628. Brockwell, P. J. and Davis, R. A. (2006). Time Series: Theory and Methods. Springer Series in Statistics. Springer, New York. Reprint of the second (1991) edition. Dai, M. and Mukherjea, A. (2001). Identification of the parameters of a multivariate normal vector by the distribution of the maximum. J. Theoret. Probab., 14(1):267–298. Davis, P. J. (1979). Circulant Matrices. John Wiley & Sons, New YorkChichester-Brisbane. Davis, R. A. and Mikosch, T. (1999). The maximum of the periodogram of a non-Gaussian sequence. Ann. Probab., 27(1):522–536. Einmahl, U. (1989). Extensions of results of Koml´os, Major, and Tusn´ady to the multivariate case. J. Multivariate Anal., 28(1):20–68. Embrechts, P., Kl¨ uppelberg, C., and Mikosch, T. (1997). Modelling Extremal Events, volume 33 of Applications of Mathematics (New York). SpringerVerlag, Berlin. Erd´elyi, A. (1956). Asymptotic Expansions. Dover Publications, Inc., New York. Fan, J. and Yao, Q. (2003). Nonlinear Time Series. Springer Series in Statistics. Springer-Verlag, New York. Feller, W. (1971). An Introduction to Probability Theory and its Applications. Vol. II. Second edition. John Wiley & Sons, Inc., New York-LondonSydney. Freedman, D. and Lane, D. (1981). The empirical distribution of the Fourier coefficients of a sequence of independent, identically distributed longtailed random variables. Z. Wahrsch. Verw. Gebiete, 58(1):21–39.

Bibliography

187

Georgiou, S. and Koukouvinos, C. (2006). Multi-level k-circulant supersaturated designs. Metrika, 64(2):209–220. Gray, R. M. (2006). Toeplitz and Circulant Matrices: A review. Foundations R and Trends in Communications and Information Theory, 2(3):155– 239. Grenander, U. and Szeg˝ o, G. (1984). Toeplitz Forms and their Applications. Chelsea Publishing Co., New York, second edition. Hoffman, A. J. and Wielandt, H. W. (1953). The variation of the spectrum of a normal matrix. Duke Math. J., 20:37–39. Kallenberg, O. (1986). Random Measures. Akademie-Verlag, Berlin; Academic Press, Inc., London, fourth edition. Knight, K. (1991). On the empirical measure of the Fourier coefficients with infinite variance data. Statist. Probab. Lett., 12(2):109–117. LePage, R., Woodroofe, M., and Zinn, J. (1981). Convergence to a stable distribution via order statistics. Ann. Probab., 9(4):624–632. Lin, Z. and Liu, W. (2009). On maxima of periodograms of stationary processes. Ann. Statist., 37(5B):2676–2695. Massey, A., Miller, S. J., and Sinsheimer, J. (2007). Distribution of eigenvalues of real symmetric palindromic Toeplitz matrices and circulant matrices. J. Theoret. Probab., 20(3):637–662. Meckes, M. W. (2009). Some results on random circulant matrices. In High Dimensional Probability V: the Luminy Volume, volume 5 of Inst. Math. Stat. (IMS) Collect., pages 213–223. Inst. Math. Statist., Beachwood, OH. Mikosch, T., Resnick, S., and Samorodnitsky, G. (2000). The maximum of the periodogram for a heavy-tailed sequence. Ann. Probab., 28(2):885–908. Pollock, D. S. G. (2002). Circulant matrices and time-series analysis. Internat. J. Math. Ed. Sci. Tech., 33(2):213–230. Resnick, S. I. (1987). Extreme Values, Regular Variation, and Point Processes, volume 4 of Applied Probability. A Series of the Applied Probability Trust. Springer-Verlag, New York. Riesz, M. (1923). Surle probleme des moments. Troisieme note (French). Ark. Mat. Fys, 16:1–52. Samorodnitsky, G. and Taqqu, M. S. (1994). Stable Non-Gaussian Random Processes. Chapman & Hall, New York.

188

Bibliography

Sen, A. (2006). Large Dimensional Random Matrices. M. Stat Project Report. Indian Statistical Institute, Kolkata. Soshnikov, A. (2004). Poisson statistics for the largest eigenvalues of Wigner random matrices with heavy tails. Electron. Comm. Probab., 9:82–91. Soshnikov, A. (2006). Poisson statistics for the largest eigenvalues in random matrix ensembles. In Mathematical Physics of Quantum Mechanics, volume 690 of Lecture Notes in Phys., pages 351–364. Springer, Berlin. Strok, V. V. (1992). Circulant matrices and spectra of de Bruijn graphs. Ukra¨ın. Mat. Zh., 44(11):1571–1579. Walker, A. M. (1965). Some asymptotic results for the periodogram of a stationary time series. J. Austral. Math. Soc., 5:107–128. ¨ Weyl, H. (1916). Uber die Gleichverteilung von Zahlen mod. Eins (German). Math. Ann., 77(3):313–352. Wu, Y. K., Jia, R. Z., and Li, Q. (2002). g-circulant solutions to the (0, 1) matrix equation Am = Jn . Linear Algebra Appl., 345:195–224. Zhou, J. T. (1996). A formula solution for the eigenvalues of g-circulant matrices (Chinese). Math. Appl. (Wuhan), 9(1):53–57.

Index

Ak,n , additional properties, 83 Ak,n , dependent, 53, 62, 138 Ak,n , heavy tailed, 147, 155 Ak,n , point process, 130 Ak,n , scaled eigenvalues, 115 Ak,n , spectral radius, 89 Cn , LSD, 28 Cn , dependent, 39 Jn , 25 Qh,4 , 16 RCn , LSD, 30 RCn , dependent, 45 RCn , heavy tailed, 144 RCn , point process, 121 RCn , scaled eigenvalues, 100 RCn , spectral radius, 78 SCn dependent, 48 SCn heavy-tailed, spectral radius, 165 SCn , LSD, 30 SCn , extreme, 72 SCn , heavy tailed, 145 SCn , point process, 126 SCn , scaled eigenvalues, 104 W2 -metric, 23 EFn , 11 Π(w), 14, 15 Π∗ (w), 15 β2k , 16 W2k (2), 14, 16 LR , moment, 18 k-circulant, 3, 31, 50, 114 k-circulant, n = k 2 + 1, 115, 128 k-circulant, n = k 2 + 1, dependent, 137 k-circulant, n = k g + 1, 117, 128, 145 k-circulant, n = k g − 1, 154

k-circulant, sn = k g + 1, 97 k-circulant, LSD, 34 k-circulant, additional properties, 83 k-circulant, eigenvalue proof, 173 k-circulant, radial component, 31 k-circulant, spectral radius, 88 w, number of distinct letters, 14 w[i], 14 (M1), 16 (M1), circuit, 13 (M2), circuit, 14 (M4), 16, 17 (M4), circuit, 14 almost sure convergence, 178 alphabetical order, 14 asymptotics, Laplace, 79 autocovariance function, 37 Bernstein’s inequality, 177 Berry-Esseen bound, 27, 179 bivariate normal, 39 Bonferroni inequality, 72, 123, 177 Boole’s inequality, 177 Borel-Cantelli lemma, 11, 178 bound, Berry-Esseen, 27, 179 centering, 78 central limit theorem, 69, 178 central limit theorem, Lindeberg, 28 central limit theorem, stationary process, 182 characteristic polynomial, 5 Chebyshev’s inequality, 56, 177 circle, 31 circuit, 13 circuit, Π(w), 14 circuit, (M1), 13 189

190 circuit, (M2), 14 circuit, (M4), 14 circuit, equivalence, 14 circuit, matched, 14 circuit, multiple, 14 circuit, pair-matched, 14 circulant, 1, 28, 38, 67, 100, 160 condition, minimal, 24 convergence in distribution, 178 convergence in probability, 178 convergence of types, 92, 180 convergence, almost sure, 178 cross-matched, 14 dependent Ak,n , 138 dependent input, heavy tailed, 166 distribution, convergence, 178 distribution, exponential, 45 distribution, Gumbel, 31, 45, 67, 79, 90, 180 distribution, heavy-tailed , 181 distribution, stable, 139, 181 distribution, symmetrized Rayleigh, 18 domain of attraction, 139, 140 dominated convergence theorem, 57, 178 doubly symmetric Hankel matrix, 23 doubly symmetric Toeplitz matrix, 22 eigenvalue, scaled, 99 empirical spectral distribution, 9 equivalence class, 14 equivalence relation, 13 equivalent circuits, 14 Euclid, 51 exceedance, 125 expected spectral distribution function, 9 exponential distribution, 45 exponential variable, 121, 125, 128 exponentials, product, 79, 83, 90 extreme order statistics, 79 extreme value, 67

Index extreme, SCn , 72 Fourier frequency, 37 generating vertex, 15 Gumbel distribution, 31, 45, 67, 90, 180 Gumbel limit, periodogram, 182 Gumbel, max-domain, 79 Hankel, doubly symmetric, 22 heavy-tailed distribution, 181 heavy-tailed input, 160 heavy-tailed, dependent input, 166 Hoffmann-Wielandt inequality, 24 Holder’s inequality, 16, 177 inequality, Bernstein, 177 inequality, Bonferroni, 72, 123, 177 inequality, Boole, 177 inequality, Chebyshev, 56, 177 inequality, Hoffmann-Wielandt, 24 inequality, Holder, 16, 177 inequality, interlacing, 26 inhomogeneous Poisson, 120 input sequence, 12 intensity function, 121, 126, 130, 135, 137, 138 interlacing inequality, 26 jointly-matched, 14 Karamata’s theorem, 162 Laplace’s asymptotics, 79 light tail, 99 light-tailed LSD, 156 limiting moment, 16 limiting spectral distribution, 10 Lindeberg’s central limit theorem, 28 linear process, 37 link function, 13 LSD, Cn , 28 LSD, k-circulant, 34 LSD, RCn , 30 LSD, SCn , 30

Index LSD, LSD, LSD, LSD, LSD, LSD,

191 dependent Ak,n , 53, 62 dependent Cn , 39 dependent RCn , 45 dependent SCn , 48 light-tailed, 156 mixture, 38

ordered eigenvalues, 125

pair-matched circuit, 14 pair-matched word, 14, 15 palindrome, 23 partial sum, 41 partition, 155 partition block, 14 major partition, 146 partition, major, 146, 154 matched circuit, 14 partition, singleton, 145 matched, crossed, 14 periodogram, 37, 160 matched, jointly, 14 periodogram, Gumbel limit, 182 matched, location, 14 periodogram, heavy-tailed, 160 matched, self, 14 matrix, doubly symmetric Hankel, 23 point measure, 119, 181 point process, Ak,n , 130 matrix, doubly symmetric Toeplitz, point process, RCn , 121 22 point process, SCn , 126 matrix, palindrome, 23 point process, Poisson, 182 max-domain, 79 point process, simple, 119, 182 max-domain, product of Poisson point process, 121, 126, 130, exponentials, 83 133, 135, 137, 138 maxima, normal random variable, 70 polar coordinates, 54 maximum, periodogram, 160 probability inequalities, 177 metric, W2 , 23 process, stationary, 179 metrizable, 23 product of exponentials, 79, 83, 90 Mill’s ratio, 106 minima, normal random variable, 70 Rayleigh, symmetrized, 30 minimal condition, 24 reduced moment, 23 moment generating function, 70 relatively prime, 84 moment method, 10 reverse circulant, 3, 17, 67, 100, 104, moment, LR , 18 160 moment, limiting, 16 reverse circulant limit, LR , 18 moment, reduced, 23 reverse circulant, heavy tail, 144 moving average, two-sided, 37, 99, reverse circulant, normal 100, 104, 115, 117 approximation, 17 multiple circuits, 14 reverse circulant, point process, 120 Riesz’s condition, 11, 16 non-generating vertex, 15 Riesz’s condition, normal moments, normal approximation, 27, 87, 112 11 normal distribution, max domain, root of unity, 2 181 normal moments, Riesz’s condition, scaled eigenvalue, 99 11 normal random variable, maxima, 70 scaled eigenvalues, Ak,n , 115 normal random variable, minima, 70 scaled eigenvalues, RCn , 100 scaled eigenvalues, SCn , 104 norming constants, 181 scaling, 12, 78, 159

192 self-matched, 14 simple point process, 119, 182 simulation, √1n Ak,n , 31, 65 simulation, √1n Cn , 29, 40 simulation, √1n RCn , 17, 46 simulation, √1n SCn , 20, 48 spectral density, 37, 179 spectral distribution, 9, 179 spectral norm, 78 spectral radius, 67 spectral radius, Ak,n , 89 spectral radius, k-circulant, 88 spectral radius, RCn , 78 spectral radius, SCn heavy-tailed, 165 stable distribution, 139, 181 stable, domain of attraction, 139, 140 standard normal, 38 stationary process, 179 stationary process, central limit theorem, 182 stationary, weakly, 179 strong law of large numbers, 179 symmetric circulant, 2, 20, 47, 104, 105, 126, 171 symmetric circulant, dependent, 137 symmetric circulant, heavy tail, 144 symmetric word, 17 symmetrized Rayleigh, 18, 30 tail, 67 tail of product, 79 tail, light, 99, 156 Toeplitz, doubly symmetric, 22 topology, vague, 119 trace-moment formula, 11, 13 truncation, 86, 111 two-sided moving average, 37, 99, 100, 104, 115, 117 unit circle, 34, 62 vague convergence, 119, 181 vague topology, 119 vertex, 14

Index vertex, generating, 15 vertex, non-generating, 15 Von Mises function, 180 weak convergence, almost surely, 10 weak convergence, in probability, 10 weakly stationary, 179 Weyl’s result, 164 white noise, 180 word, pair-matched, 14, 15 word, symmetric, 17