Problems from the Discrete to the Continuous: Probability, Number Theory, Graph Theory, and Combinatorics (Universitext), 2014 ed. ISBN 3319079646, 9783319079646


English · Pages: 167 [165] · Year: 2014


Table of contents :
Preface
Contents
A Note on Notation
1 Partitions with Restricted Summands or ``the Money Changing Problem''
Chapter Notes
2 The Asymptotic Density of Relatively Prime Pairs and of Square-Free Numbers
Chapter Notes
3 A One-Dimensional Probabilistic Packing Problem
Chapter Notes
4 The Arcsine Laws for the One-Dimensional Simple Symmetric Random Walk
Chapter Notes
5 The Distribution of Cycles in Random Permutations
Chapter Notes
6 Chebyshev's Theorem on the Asymptotic Density of the Primes
Chapter Notes
7 Mertens' Theorems on the Asymptotic Behavior of the Primes
Chapter Notes
8 The Hardy–Ramanujan Theorem on the Number of Distinct Prime Divisors
Chapter Notes
9 The Largest Clique in a Random Graph and Applications to Tampering Detection and Ramsey Theory
9.1 Graphs and Random Graphs: Basic Definitions
9.2 The Size of the Largest Clique in a Random Graph
9.3 Detecting Tampering in a Random Graph
9.4 Ramsey Theory
Chapter Notes
10 The Phase Transition Concerning the Giant Component in a Sparse Random Graph: A Theorem of Erdős and Rényi
10.1 Introduction and Statement of Results
10.2 Construction of the Setup for the Proofs of Theorems 10.1 and 10.2
10.3 Some Basic Large Deviations Estimates
10.4 Proof of Theorem 10.1
10.5 The Galton–Watson Branching Process
10.6 Proof of Theorem 10.2
Chapter Notes
Appendix A A Quick Primer on Discrete Probability
Appendix B Power Series and Generating Functions
Appendix C A Proof of Stirling's Formula
Appendix D An Elementary Proof of Σ_{n=1}^∞ 1/n^2 = π^2/6
References
Index


Universitext

Ross G. Pinsky

Problems from the Discrete to the Continuous Probability, Number Theory, Graph Theory, and Combinatorics

Universitext

Series Editors:
Sheldon Axler, San Francisco State University
Vincenzo Capasso, Università degli Studi di Milano
Carles Casacuberta, Universitat de Barcelona
Angus MacIntyre, Queen Mary University of London
Kenneth Ribet, University of California, Berkeley
Claude Sabbah, CNRS, École Polytechnique, Paris
Endre Süli, University of Oxford
Wojbor A. Woyczynski, Case Western Reserve University, Cleveland, OH

Universitext is a series of textbooks that presents material from a wide variety of mathematical disciplines at master's level and beyond. The books, often well class-tested by their author, may have an informal, personal, even experimental approach to their subject matter. Some of the most successful and established books in the series have evolved through several editions, always following the evolution of teaching curricula, to very polished texts. Thus as research topics trickle down into graduate-level teaching, first textbooks written for new, cutting-edge courses may make their way into Universitext.

For further volumes: http://www.springer.com/series/223


Ross G. Pinsky Department of Mathematics Technion-Israel Institute of Technology Haifa, Israel

ISSN 0172-5939; ISSN 2191-6675 (electronic)
ISBN 978-3-319-07964-6; ISBN 978-3-319-07965-3 (eBook)
DOI 10.1007/978-3-319-07965-3
Springer Cham Heidelberg New York Dordrecht London

Library of Congress Control Number: 2014942157

Mathematics Subject Classification (2010): 05A, 05C, 05D, 11N, 60

© Springer International Publishing Switzerland 2014

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)

In most sciences one generation tears down what another has built and what one has established another undoes. In Mathematics alone each generation builds a new story to the old structure.
—Hermann Hankel

A peculiar beauty reigns in the realm of mathematics, a beauty which resembles not so much the beauty of art as the beauty of nature and which affects the reflective mind, which has acquired an appreciation of it, very much like the latter.
—Ernst Kummer

To Jeanette and to E. A. P. Y. U. P. L. A. T-P. M. D. P.

Preface

It is often averred that two contrasting cultures coexist in mathematics—the theory-building culture and the problem-solving culture. The present volume was certainly spawned by the latter. This book takes an array of specific problems and solves them, with the needed tools developed along the way in the context of the particular problems.

The book is an unusual hybrid. It treats a mélange of topics from combinatorial probability theory, multiplicative number theory, random graph theory, and combinatorics. Objectively, what the problems in this book have in common is that they involve the asymptotic analysis of a discrete construct, as some natural parameter of the system tends to infinity. Subjectively, what these problems have in common is that both their statements and their solutions resonate aesthetically with me. The results in this book lend themselves to the title "Problems from the Finite to the Infinite"; however, with regard to the methods of proof, the chosen appellation is the more apt. In particular, generating functions in their various guises are a fundamental bridge "from the discrete to the continuous," as the book's title would have it; such functions work their magic often in these pages. Besides bridging discrete mathematics and mathematical analysis, the book makes a modest attempt at bridging disciplines—probability, number theory, graph theory, and combinatorics.

In addition to the considerations mentioned above, the problems were selected with an eye toward accessibility to a wide audience, including advanced undergraduate students. The technical prerequisites for the book are a good grounding in basic undergraduate analysis, a touch of familiarity with combinatorics, and a little basic probability theory. One appendix provides the necessary probabilistic background, and another appendix provides a warm-up for dealing with generating functions. That said, a moderate dose of the elusive quality known as mathematical maturity will certainly be helpful throughout the text and will be necessary on occasion.

The primary intent of the book is to introduce a number of beautiful problems in a variety of subjects quickly, pithily, and completely rigorously to graduate students and advanced undergraduates. The book could be used for a seminar/capstone course in which students present the lectures. It is hoped that the book might also be


of interest to mathematicians whose fields of expertise are away from the subjects treated herein. In light of the primary intended audience, the level of detail in proofs is a bit greater than what one sometimes finds in graduate mathematics texts.

I conclude with some brief comments on the novelty or lack thereof in the various chapters. A bit more information in this vein may be found in the chapter notes at the end of each chapter. Chapter 1 follows a standard approach to the problem it solves. The same is true for Chap. 2 except for the probabilistic proof of Theorem 2.1, which I haven't seen in the literature. The packing problem result in Chap. 3 seems to be new, and the proof almost certainly is. My approach to the arcsine laws in Chap. 4 is somewhat different from the standard one; it exploits generating functions to the hilt and is almost completely combinatorial. The traditional method of proof is considerably more probabilistic. The proofs of the results in Chap. 5 on the distribution of cycles in random permutations are almost exclusively combinatorial, through the method of generating functions. In particular, the proof of Theorem 5.2 makes quite sophisticated use of this technique. In the setting of weighted permutations, it seems that the method of proof offered here cannot be found elsewhere. The number theoretic topics in Chaps. 6–8 are developed in a standard fashion, although the route has been streamlined a bit to provide a rapid approach to the primary goal, namely, the proof of the Hardy–Ramanujan theorem. In Chap. 9, the proof concerning the number of cliques in a random graph is more or less standard. The result on tampering detection constitutes material with a new twist and the methods are rather probabilistic; a little additional probabilistic background and sophistication on the part of the reader would be useful here. The results from Ramsey theory are presented in a standard way. Chapter 10, which deals with the phase transition concerning the giant component in a sparse random graph, is the most demanding technically. The reader with a modicum of probabilistic sophistication will be at quite an advantage here. It appears to me that a complete proof of the main results in this chapter, with all the details, is not to be found in the literature.

Acknowledgements  It is a pleasure to thank my editor, Donna Chernyk, for her professionalism and superb diligence.

Haifa, Israel April 2014

Ross G. Pinsky

Contents

1  Partitions with Restricted Summands or "the Money Changing Problem"  1
2  The Asymptotic Density of Relatively Prime Pairs and of Square-Free Numbers  7
3  A One-Dimensional Probabilistic Packing Problem  21
4  The Arcsine Laws for the One-Dimensional Simple Symmetric Random Walk  35
5  The Distribution of Cycles in Random Permutations  49
6  Chebyshev's Theorem on the Asymptotic Density of the Primes  67
7  Mertens' Theorems on the Asymptotic Behavior of the Primes  75
8  The Hardy–Ramanujan Theorem on the Number of Distinct Prime Divisors  81
9  The Largest Clique in a Random Graph and Applications to Tampering Detection and Ramsey Theory  89
   9.1  Graphs and Random Graphs: Basic Definitions  89
   9.2  The Size of the Largest Clique in a Random Graph  91
   9.3  Detecting Tampering in a Random Graph  99
   9.4  Ramsey Theory  104
10  The Phase Transition Concerning the Giant Component in a Sparse Random Graph: A Theorem of Erdős and Rényi  109
   10.1  Introduction and Statement of Results  109
   10.2  Construction of the Setup for the Proofs of Theorems 10.1 and 10.2  111
   10.3  Some Basic Large Deviations Estimates  113
   10.4  Proof of Theorem 10.1  115
   10.5  The Galton–Watson Branching Process  116
   10.6  Proof of Theorem 10.2  120
Appendix A  A Quick Primer on Discrete Probability  133
Appendix B  Power Series and Generating Functions  141
Appendix C  A Proof of Stirling's Formula  145
Appendix D  An Elementary Proof of Σ_{n=1}^∞ 1/n^2 = π^2/6  149
References  151
Index  153

A Note on Notation

Z denotes the set of integers

Z_+ denotes the set of nonnegative integers

N denotes the set of natural numbers: {1, 2, ...}

R denotes the set of real numbers

f(x) = O(g(x)) as x → a means that lim sup_{x→a} |f(x)/g(x)| < ∞; in particular, f(x) = O(1) as x → a means that f(x) remains bounded as x → a

f(x) = o(g(x)) as x → a means that lim_{x→a} f(x)/g(x) = 0; in particular, f(x) = o(1) as x → a means lim_{x→a} f(x) = 0

f ∼ g as x → a means that lim_{x→a} f(x)/g(x) = 1

gcd(x_1, ..., x_m) denotes the greatest common divisor of the positive integers x_1, ..., x_m

The symbol [ ] is used in two contexts:
1. [n] = {1, 2, ..., n}, for n ∈ N
2. [x] is the greatest integer function; that is, [x] = n, if n ∈ Z and n ≤ x < n + 1

Bin(n, p) is the binomial distribution with parameters n and p

Pois(λ) is the Poisson distribution with parameter λ

Ber(p) is the Bernoulli distribution with parameter p

X ∼ Bin(n, p) means the random variable X is distributed according to the distribution Bin(n, p)

Chapter 1

Partitions with Restricted Summands or “the Money Changing Problem”

Imagine a country with coins of denominations 5 cents, 13 cents, and 27 cents. How many ways can you make change for $51,419.48? That is, how many solutions (b_1, b_2, b_3) are there to the equation 5b_1 + 13b_2 + 27b_3 = 5,141,948, with the restriction that b_1, b_2, b_3 be nonnegative integers? This is a specific case of the following general problem. Fix m distinct, positive integers {a_j}_{j=1}^m. Count the number of solutions (b_1, ..., b_m) with integral entries to the equation

b_1 a_1 + b_2 a_2 + ··· + b_m a_m = n,  b_j ≥ 0, j = 1, ..., m.  (1.1)

A partition of n is a sequence of integers (x_1, ..., x_k), where k is a positive integer, such that

Σ_{i=1}^k x_i = n and x_1 ≥ x_2 ≥ ··· ≥ x_k ≥ 1.

Let P_n denote the number of different partitions of n. The problem of obtaining an asymptotic formula for P_n is celebrated and very difficult. It was solved in 1918 by G. H. Hardy and S. Ramanujan, who proved that

P_n ∼ (1/(4n√3)) e^{π√(2n/3)}, as n → ∞.

Now consider partitions of n where we restrict the values of the summands x_i above to the set {a_j}_{j=1}^m. Denote the number of such restricted partitions by P_n({a_j}_{j=1}^m). A moment's thought reveals that the number of solutions to (1.1) is P_n({a_j}_{j=1}^m). Does there exist a solution to (1.1) for every sufficiently large integer n? And if so, can one evaluate asymptotically the number of such solutions for large n? Without posing any restrictions on {a_j}_{j=1}^m, the answer to the first question is negative. For example, if m = 3 and a_1 = 5, a_2 = 10, a_3 = 30, then clearly there is no solution to (1.1) if 5 ∤ n. Indeed, it is clear that a necessary condition


for the existence of a solution for all large n is that {a_j}_{j=1}^m are relatively prime: gcd(a_1, ..., a_m) = 1. This is the time to recall a well-known result concerning solutions (b_1, ..., b_m) with (not necessarily nonnegative) integral entries to the equation b_1 a_1 + b_2 a_2 + ··· + b_m a_m = n. A fundamental theorem in algebra/number theory states that there exists an integral solution to this equation for all n ∈ Z if and only if gcd(a_1, ..., a_m) = 1. This result has an elegant group theoretical proof. We will prove that for all large n, (1.1) has a solution (b_1, ..., b_m) with integral entries if and only if gcd(a_1, ..., a_m) = 1, and we will give a precise asymptotic estimate for the number of such solutions for large n.

Theorem 1.1. Let m ≥ 2 and let {a_j}_{j=1}^m be distinct, positive integers. Assume that the greatest common divisor of {a_j}_{j=1}^m is 1: gcd(a_1, ..., a_m) = 1. Then for all sufficiently large n, there exists at least one integral solution to (1.1). Furthermore, the number P_n({a_j}_{j=1}^m) of such solutions satisfies

P_n({a_j}_{j=1}^m) ∼ n^{m−1} / ((m−1)! ∏_{j=1}^m a_j), as n → ∞.  (1.2)

Remark. In particular, we see (not surprisingly) that for fixed m and sufficiently large n, the smaller the {a_j}_{j=1}^m are, the more solutions there are. We also see that given m_1 and {a_j^{(1)}}_{j=1}^{m_1}, and given m_2 and {a_j^{(2)}}_{j=1}^{m_2}, with m_2 > m_1, then for sufficiently large n there will be more solutions for the latter set of parameters.
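Theorem 1.1 lends itself to a quick numerical check. The sketch below (Python; the function names are mine, not the book's) counts the solutions of (1.1) with a standard coin-change dynamic program and compares the exact count against the right hand side of (1.2), for the denominations 5, 13, 27 of the opening paragraph:

```python
from math import factorial, prod

def count_solutions(n, a):
    """Number of solutions (b_1, ..., b_m) in nonnegative integers of
    b_1*a_1 + ... + b_m*a_m = n, i.e. P_n({a_j}) in the text."""
    ways = [1] + [0] * n            # ways[0] = 1: the empty sum
    for coin in a:                  # classic coin-change dynamic program
        for amount in range(coin, n + 1):
            ways[amount] += ways[amount - coin]
    return ways[n]

a = [5, 13, 27]
m, n = len(a), 3000
exact = count_solutions(n, a)
approx = n ** (m - 1) / (factorial(m - 1) * prod(a))   # right side of (1.2)
print(exact, round(approx, 1), exact / approx)          # ratio close to 1
```

For n = 5,141,948 the same program answers the opening question exactly, while (1.2) predicts roughly n^2/3510 solutions.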

Proof. We will prove the asymptotic estimate in (1.2), from which the first statement of the theorem will also follow. Let h_n denote the number of solutions to (1.1). (For the proof, the notation h_n will be a lot more convenient than P_n({a_j}_{j=1}^m).) Thus, we need to show that (1.2) holds with h_n in place of P_n({a_j}_{j=1}^m). We define the generating function of {h_n}_{n=1}^∞:

H(x) = Σ_{n=1}^∞ h_n x^n.  (1.3)

A simple, rough estimate shows that h_n ≤ n^m / ∏_{j=1}^m a_j, from which it follows that the power series on the right hand side of (1.3) converges for |x| < 1. See Exercise 1.1. It turns out that we can exhibit H explicitly. We demonstrate this for the case m = 2, from which the general case will become clear. For k = 1, 2, we have

1/(1 − x^{a_k}) = 1 + x^{a_k} + x^{2a_k} + x^{3a_k} + ··· ,

and the series converges absolutely for |x| < 1. Thus,

1/((1 − x^{a_1})(1 − x^{a_2})) = (1 + x^{a_1} + x^{2a_1} + x^{3a_1} + ···)(1 + x^{a_2} + x^{2a_2} + x^{3a_2} + ···)
  = (1 + x^{a_1} + x^{2a_1} + x^{3a_1} + ···) + (x^{a_2} + x^{a_1+a_2} + x^{2a_1+a_2} + x^{3a_1+a_2} + ···)
  + (x^{2a_2} + x^{a_1+2a_2} + x^{2a_1+2a_2} + x^{3a_1+2a_2} + ···) + ···  (1.4)

A little thought now reveals that on the right hand side of (1.4), the number of times the term x^n appears is the number of integral solutions (b_1, b_2) to (1.1) with m = 2; that is, h_n is the coefficient of x^n on the right hand side of (1.4). So H(x) = 1/((1 − x^{a_1})(1 − x^{a_2})). Clearly, the same argument works for all m; thus we conclude that

H(x) = 1/((1 − x^{a_1})(1 − x^{a_2}) ··· (1 − x^{a_m})), |x| < 1.  (1.5)

We now begin an analysis of H, as given in its closed form in (1.5), which will lead us to the asymptotic behavior as n → ∞ of the coefficients h_n in its power series representation in (1.3). Consider the polynomial

p(x) = (1 − x^{a_1})(1 − x^{a_2}) ··· (1 − x^{a_m}).

For each k, the roots of 1 − x^{a_k} are the a_k-th roots of unity: {e^{2πij/a_k}}_{j=0}^{a_k−1}. Clearly 1 is a root of p(x) of multiplicity m. Because of the assumption that gcd(a_1, ..., a_m) = 1, it follows that every other root of p(x) is of multiplicity less than m; that is, there is no complex number r that can be written in the form r = e^{2πij_k/a_k}, simultaneously for k = 1, ..., m, where 1 ≤ j_k < a_k. Indeed, if r can be written in the above form for all k, then it follows that a_k/j_k is independent of k. In particular, a_k = j_k a_1/j_1, for k = 2, ..., m. Since 1 ≤ j_1 < a_1, it follows that there is at least one prime factor of a_1 which is a factor of all of the a_k, k = 2, ..., m, and this contradicts the assumption that gcd(a_1, ..., a_m) = 1.

Denote the distinct roots of p(x) by 1, r_2, ..., r_l, and note from above that |r_j| = 1, for all j. Let m_k denote the multiplicity of the root r_k, for k = 2, ..., l. Also, note that p(0) = 1. Then we can write

(1 − x^{a_1})(1 − x^{a_2}) ··· (1 − x^{a_m}) = (1 − x)^m (1 − x/r_2)^{m_2} ··· (1 − x/r_l)^{m_l},  (1.6)

where 1 ≤ m_j < m, for j = 2, ..., l. In light of (1.5) and (1.6), we can write the generating function H(x) in the form

H(x) = 1/((1 − x)^m (1 − x/r_2)^{m_2} ··· (1 − x/r_l)^{m_l}).  (1.7)
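Identity (1.5) is easy to test mechanically: convolve the truncated geometric series factor by factor, and compare each resulting coefficient with a direct count of solutions. A small Python sketch (mine, following the text's notation):

```python
def coeffs_H(a, N):
    """Coefficients h_0, ..., h_N of H(x) = prod_j 1/(1 - x^{a_j}),
    obtained by convolving the truncated series 1 + x^{a_j} + x^{2 a_j} + ..."""
    h = [1] + [0] * N                       # the empty product is 1
    for aj in a:
        geo = [1 if i % aj == 0 else 0 for i in range(N + 1)]
        h = [sum(h[i] * geo[n - i] for i in range(n + 1)) for n in range(N + 1)]
    return h

def count_direct(n, a):
    """Brute-force count of nonnegative solutions of b_1 a_1 + ... + b_m a_m = n."""
    if not a:
        return 1 if n == 0 else 0
    return sum(count_direct(n - b * a[0], a[1:]) for b in range(n // a[0] + 1))

h = coeffs_H([5, 13, 27], 60)
assert all(h[n] == count_direct(n, [5, 13, 27]) for n in range(61))
```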

By the method of partial fractions, we can rewrite H from (1.7) in the form

H(x) = A_{11}/(1 − x)^m + A_{12}/(1 − x)^{m−1} + ··· + A_{1m}/(1 − x)
  + (A_{21}/(1 − x/r_2)^{m_2} + ··· + A_{2m_2}/(1 − x/r_2)) + ··· + (A_{l1}/(1 − x/r_l)^{m_l} + ··· + A_{lm_l}/(1 − x/r_l)).  (1.8)

For positive integers k, the function F(x) = (1 − x)^{−k} has the power series expansion

(1 − x)^{−k} = Σ_{n=0}^∞ C(n+k−1, k−1) x^n.

To prove this, just verify that F^{(n)}(0)/n! = C(n+k−1, k−1). Thus, the first term on the right hand side of (1.8) can be expanded as

A_{11}/(1 − x)^m = A_{11} Σ_{n=0}^∞ C(n+m−1, m−1) x^n.  (1.9)

The coefficient of x^n on the right hand side above is

A_{11} (n+m−1)(n+m−2) ··· (n+1)/(m−1)! ∼ A_{11} n^{m−1}/(m−1)!, as n → ∞.

Every other term on the right hand side of (1.8) is of the form A/(1 − x/r)^k, where 1 ≤ k < m and |r| = 1. By the same argument as above, the coefficient of x^n in the expansion for A/(1 − x/r)^k is asymptotic to (A/r^n) n^{k−1}/(k−1)! as n → ∞ (substitute x/r for x in the appropriate series expansion). Thus, each of these terms is of smaller order than the coefficient of x^n in (1.9). We thereby conclude that the coefficient of x^n in H(x) is asymptotic to A_{11} n^{m−1}/(m−1)! as n → ∞. By (1.3), this gives

h_n ∼ A_{11} n^{m−1}/(m−1)!, as n → ∞.  (1.10)
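The expansion of (1 − x)^{−k} used in (1.9) can also be checked by machine: multiplying the geometric series by itself k times must reproduce the binomial coefficients C(n+k−1, k−1). A brief Python check (mine):

```python
from math import comb

N, k = 30, 4
coeffs = [1] * (N + 1)                 # coefficients of 1/(1 - x)
for _ in range(k - 1):                 # multiply by 1/(1 - x) another k-1 times
    coeffs = [sum(coeffs[: n + 1]) for n in range(N + 1)]
assert coeffs == [comb(n + k - 1, k - 1) for n in range(N + 1)]
```

Multiplying a power series by 1/(1 − x) turns its coefficients into partial sums, which is exactly what the list comprehension does.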

It remains to evaluate the constant A_{11}. From (1.8), it follows that

H(x) = A_{11}/(1 − x)^m + O(1/(1 − x)^{m−1}), as x → 1.

Thus,

lim_{x→1} (1 − x)^m H(x) = A_{11}.  (1.11)

But on the other hand, from (1.5), we have

(1 − x)^m H(x) = (1 − x)^m/((1 − x^{a_1})(1 − x^{a_2}) ··· (1 − x^{a_m})) = ∏_{j=1}^m (x − 1)/(x^{a_j} − 1).  (1.12)

Since (x^{a_j})′|_{x=1} = a_j x^{a_j−1}|_{x=1} = a_j, we conclude from (1.12) that

lim_{x→1} (1 − x)^m H(x) = 1/∏_{j=1}^m a_j.  (1.13)

From (1.11) and (1.13) we obtain A_{11} = 1/∏_{j=1}^m a_j, and thus from (1.10) we conclude that h_n ∼ n^{m−1}/((m−1)! ∏_{j=1}^m a_j). □

Exercise 1.1. If b_1 a_1 + b_2 a_2 + ··· + b_m a_m = n, then of course b_j a_j ≤ n, for all j ∈ [m]. Use this to obtain the following rough upper bound on the number of solutions h_n to (1.1): h_n ≤ n^m/∏_{j=1}^m a_j. Then use this estimate together with the third "fundamental result" in Appendix B to show that the series defining H(x) in (1.3) converges for |x| < 1.

Exercise 1.2. Go through the proof of Theorem 1.1 and convince yourself that the result of the theorem holds even if the integers {a_j}_{j=1}^m are not distinct. That is, the number of solutions to (1.1) is asymptotic to the expression on the right hand side of (1.2). Note though that the number of such solutions is not equal to P_n({a_j}_{j=1}^m). What is the leading asymptotic term as n → ∞ for the number of ways to make n cents out of quarters and pennies, where one distinguishes the quarters by their mint marks—"p" for Philadelphia, "d" for Denver, and "s" for San Francisco—but where the pennies are not distinguished?

Exercise 1.3. In the case that d = gcd(a_1, ..., a_m) > 1, use Theorem 1.1 to formulate and prove a corresponding result.

Exercise 1.4. A composition of n is an ordered sequence of positive integers (x_1, ..., x_k), where k is a positive integer, such that Σ_{i=1}^k x_i = n. A favorite method of combinatorists to calculate the size of some combinatorial object is to find a bijection between the object in question and some other object whose size is known. Let C_n denote the number of compositions of n. To calculate C_n, we construct a bijection as follows. Consider a sequence of n dots in a row. Between each pair of adjacent dots, choose to place or choose not to place a vertical line. Consider the set of all possible dot and line combinations. (For example, if n = 5, here are two possible such combinations: (1) • • | • | • •  (2) • • • • •.)
(a) Show that there are 2^{n−1} dot and line combinations.
(b) Show that there is a bijection between the set of compositions of n and the set of dot and line combinations.
(c) Conclude from (a) and (b) that C_n = 2^{n−1}.
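The dot-and-line bijection of Exercise 1.4 can be simulated directly: each of the n − 1 gaps independently gets a bar or not, and the bars cut the row of dots into the parts of a composition. A brute-force confirmation of C_n = 2^{n−1} in Python (mine):

```python
from itertools import product

def compositions(n):
    """Generate all compositions of n via the dot-and-line bijection:
    a subset of the n-1 gaps receives a bar, cutting the dots into blocks."""
    for bars in product([False, True], repeat=n - 1):
        parts, size = [], 1
        for bar in bars:
            if bar:
                parts.append(size)   # close the current block of dots
                size = 1
            else:
                size += 1
        parts.append(size)
        yield tuple(parts)

for n in range(1, 10):
    comps = set(compositions(n))
    assert len(comps) == 2 ** (n - 1)            # C_n = 2^{n-1}
    assert all(sum(c) == n for c in comps)       # each really sums to n
```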

Exercise 1.5. Let C_n^{{1,2}} denote the number of compositions of n with summands restricted to the integers 1 and 2, that is, compositions (x_1, ..., x_k) of n with the restriction that x_i ∈ {1, 2}, for all i. The series

F(x) := 1/(1 − x − x^2) = Σ_{n=0}^∞ (x + x^2)^n  (1.14)

converges absolutely for |x| < (√5 − 1)/2, since |x + x^2| ≤ |x| + |x|^2 < 1 if |x| < (√5 − 1)/2.

If k^2 > n, then μ(k) will not appear on the right hand side of (2.18). If k^2 ≤ n, then μ(k) will appear [n/k^2] times, namely, when j = k^2, 2k^2, ..., [n/k^2] k^2. Thus, we have

Σ_{j=1}^n μ^2(j) = Σ_{j=1}^n Σ_{k^2 | j} μ(k) = Σ_{k^2 ≤ n} [n/k^2] μ(k) = Σ_{k=1}^{[n^{1/2}]} [n/k^2] μ(k)
  = n Σ_{k=1}^{[n^{1/2}]} μ(k)/k^2 + Σ_{k=1}^{[n^{1/2}]} ([n/k^2] − n/k^2) μ(k).  (2.19)

Since each summand in the second term on the right hand side of (2.19) is bounded in absolute value by 1, we have

|Σ_{k=1}^{[n^{1/2}]} ([n/k^2] − n/k^2) μ(k)| ≤ n^{1/2}.  (2.20)

It follows from (2.16), (2.19), and (2.20) that

lim_{n→∞} |A_n|/n = Σ_{k=1}^∞ μ(k)/k^2.

Using this with (2.14) gives (2.17) and completes the proof of the theorem. □

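The limit just obtained can be probed numerically. It is known that Σ_{k=1}^∞ μ(k)/k^2 = 1/ζ(2) = 6/π^2 ≈ 0.6079 (an evaluation not carried out in this excerpt); a Möbius sieve in Python (mine) compares the empirical density of square-free numbers with that constant:

```python
from math import pi

def mobius(N):
    """mu(1), ..., mu(N) by a sieve: flip sign for each prime factor,
    zero out multiples of squares of primes."""
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(2 * p, N + 1, p):
                is_prime[m] = False
            for m in range(p, N + 1, p):
                mu[m] *= -1
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0
    return mu

N = 100_000
mu = mobius(N)
density = sum(1 for j in range(1, N + 1) if mu[j] != 0) / N   # |A_n|/n
print(density, 6 / pi ** 2)   # both approximately 0.6079
```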
We now give a heuristic probabilistic proof and a rigorous probabilistic proof of Theorem 2.1. In the heuristic proof, we put quotation marks around the steps that are not rigorous.

Heuristic Probabilistic Proof of Theorem 2.1. Let {p_k}_{k=1}^∞ be an enumeration of the primes. In the spirit described in the first paragraph of the chapter, if we pick a positive integer "at random," then the "probability" of it being divisible by the prime number p_k is 1/p_k. (Of course, this is true also with p_k replaced by an arbitrary positive integer.) If we pick two positive integers "independently," then the "probability" that they are both divisible by p_k is (1/p_k)(1/p_k) = 1/p_k^2, by "independence." So the "probability" that at least one of them is not divisible by p_k is 1 − 1/p_k^2. The "probability" that a "randomly" selected positive integer is divisible by the two distinct primes, p_j and p_k, is 1/(p_j p_k) = (1/p_j)(1/p_k). (The reader should check that this "holds" more generally if p_j and p_k are replaced by an arbitrary pair of relatively prime positive integers, but not otherwise.) Thus, the events of being divisible by p_j and being divisible by p_k are "independent." Now two "randomly" selected positive integers are relatively prime if and only if, for every k, at least one of the integers is not divisible by p_k. But since the "probability" that at least one of them is not divisible by p_k is 1 − 1/p_k^2, and since being divisible by a prime p_j and being divisible by a different prime p_k are "independent" events, the "probability" that the two "randomly" selected positive integers are such that, for every k, at least one of them is not divisible by p_k is ∏_{k=1}^∞ (1 − 1/p_k^2). Thus, this should be the "probability" that two "randomly" selected positive integers are relatively prime. □

Rigorous Probabilistic Proof of Theorem 2.1. For the probabilistic proof, the second alternative suggested in the second paragraph of the chapter will be more convenient. Thus, we choose an integer from [n] uniformly at random and then choose a second integer from [n] uniformly at random. Let Ω_n = [n]. The appropriate probability space on which to analyze the model described above is the space (Ω_n × Ω_n, P_n), where the probability measure P_n on Ω_n × Ω_n is the uniform measure; that is, P_n(A) = |A|/n^2, for any A ⊂ Ω_n × Ω_n. The point (i, j) ∈ Ω_n × Ω_n indicates that the integer i was chosen the first time and the integer j was chosen the second time. Let C_n denote the event that the two selected integers are relatively prime; that is,

C_n = {(i, j) ∈ Ω_n × Ω_n : gcd(i, j) = 1}.

Then the probability q_n that the two selected integers are relatively prime is

q_n = P_n(C_n) = |C_n|/n^2.
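Before making the heuristic rigorous, one can at least check it empirically: count the coprime pairs in [n] × [n] to estimate q_n, and compare with a truncated version of the product ∏ (1 − 1/p_k^2). (The full product is known to equal 6/π^2 ≈ 0.6079; the heuristic itself does not compute this value.) A Python sketch (mine):

```python
from math import gcd, pi

def coprime_fraction(n):
    """q_n = |C_n| / n^2: the fraction of pairs (i, j) in [n] x [n] with gcd 1."""
    return sum(gcd(i, j) == 1
               for i in range(1, n + 1) for j in range(1, n + 1)) / n ** 2

# truncated product over the primes below 100 (trial-division sieve)
primes = [p for p in range(2, 100) if all(p % d for d in range(2, p))]
product = 1.0
for p in primes:
    product *= 1 - 1 / p ** 2

print(coprime_fraction(500))       # about 0.608
print(product, 6 / pi ** 2)        # about 0.609 vs 0.6079...
```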

Let fpk g1 kD1 denote the prime numbers arranged in increasing order. (Any enumeration of the primes would do, but for the proof it is more convenient to 1 choose the increasing enumeration.) For each k 2 N, let BnIk denote the event that 2 the first integer chosen is divisible by pk and let BnIk denote the event that the second integer chosen is divisible by pk . That is, 1 2 BnIk D f.i; j / 2 n n W pk ji g; BnIk D f.i; j / 2 n n W pk jj g: 1 2 Note of course that the above sets are empty if pk > n. The event BnIk \ BnIk D f.i; j / 2 n n W pk ji and pk jj g is the event that both selected integers have pk as a factor. There are Πpnk  integers in n that are divisible by pk , namely, pk ; 2pk ;    ; Πpnk pk . Thus, there are Πpnk 2 pairs .i; j / 2 n n for which both coordinates are divisible by pk ; therefore,

1 2 Pn .BnIk \ BnIk /D

Πpnk 2 n2

:

(2.21)

1 2 n 1 2 Note that [1 kD1 .BnIk \ BnIk / D [kD1 .BnIk \ BnIk / is the event that the two selected integers have at least one common prime factor. (The equality above 1 2 and BnIk are clearly empty for k > n.) Consequently, follows from the fact that BnIk Cn can be expressed as

2 Relatively Prime Pairs and Square-Free Numbers

17

 c 1 2 1 2 c Cn D [nkD1 .BnIk \ BnIk / D \nkD1 .BnIk \ BnIk / ; where Ac WD n n  A denotes the complement of an event A  n n . Thus,   1 2 c \ BnIk / : Pn .Cn / D Pn \nkD1 .BnIk

(2.22)

Let R < n be a positive integer. We have 1 2 c 1 2 c n 1 2 \nkD1 .BnIk \ BnIk / D \R kD1 .BnIk \ BnIk /  [kDRC1 .BnIk \ BnIk / 1 2 c 1 2 c and, of course, \nkD1 .BnIk \ BnIk /  \R kD1 .BnIk \ BnIk / . Thus,

   n  1 2 c 1 2 Pn \R kD1 .BnIk \ BnIk /  Pn [kDRC1 .BnIk \ BnIk /      1 2 c 1 2 c \ BnIk /  Pn \R Pn \nkD1 .BnIk kD1 .BnIk \ BnIk / :

(2.23)

Using the sub-additivity property of probability measures for the first inequality below, and using (2.21) for the equality below, we have n n X X   1   1 2 2 Pn [nkDRC1 .BnIk \ BnIk /  Pn BnIk \ BnIk / D kDRC1

kDRC1

Πpnk 2 n2



1 X

1

p2 kDRC1 k

:

(2.24) Up until now, we have made no assumption on n. Q Now assume that pk jn, for k D 1;    ; R; that is, assume that n is a multiple of R kD1 pk . Denote the set of such n by DR ; that is, DR D fn 2 N W pk jn for k D 1;    ; Rg: 1 2 \ BnIk is the event that both selected integers are divisible Recall that the event BnIk 1 2 by k. We claim that if n 2 DR , then the events fBnIk \ BnIk gR kD1 are independent. That is, for any subset I  f1; 2;    ; Rg, one has

$$P_n\Big(\bigcap_{k\in I} (B^1_{n;k} \cap B^2_{n;k})\Big) = \prod_{k\in I} P_n(B^1_{n;k} \cap B^2_{n;k}), \quad \text{if } n \in D_R. \tag{2.25}$$

The proof of (2.25) is a straightforward counting exercise and is left as Exercise 2.4. If events $\{A_k\}_{k=1}^R$ are independent, then the complementary events $\{A_k^c\}_{k=1}^R$ are also independent; see Exercise A.3 in Appendix A. Thus, we conclude that

$$P_n\Big(\bigcap_{k=1}^R (B^1_{n;k} \cap B^2_{n;k})^c\Big) = \prod_{k=1}^R P_n\big((B^1_{n;k} \cap B^2_{n;k})^c\big), \quad \text{if } n \in D_R. \tag{2.26}$$


By (2.21), we have $P_n\big((B^1_{n;k} \cap B^2_{n;k})^c\big) = 1 - P_n(B^1_{n;k} \cap B^2_{n;k}) = 1 - \frac{[\frac{n}{p_k}]^2}{n^2}$, for any $n$. Thus, from the definition of $D_R$, we have

$$P_n\big((B^1_{n;k} \cap B^2_{n;k})^c\big) = 1 - \frac{1}{p_k^2}, \quad \text{if } n \in D_R. \tag{2.27}$$

From (2.22) to (2.24), (2.26), and (2.27), we conclude that

$$\prod_{k=1}^R \Big(1-\frac{1}{p_k^2}\Big) - \sum_{k=R+1}^\infty \frac{1}{p_k^2} \le P_n(C_n) \le \prod_{k=1}^R \Big(1-\frac{1}{p_k^2}\Big), \quad \text{for } R \in \mathbb{N} \text{ and } n \in D_R. \tag{2.28}$$

We now use (2.28) to obtain an estimate on $P_n(C_n)$ for general $n$. Let $n \ge \prod_{k=1}^R p_k$. Let $n'$ denote the largest integer in $D_R$ which is smaller than or equal to $n$, and let $n''$ denote the smallest integer in $D_R$ which is larger than or equal to $n$. Since $D_R$ is the set of positive multiples of $\prod_{k=1}^R p_k$, we obviously have

$$n' > n - \prod_{k=1}^R p_k \quad \text{and} \quad n'' < n + \prod_{k=1}^R p_k. \tag{2.29}$$

For any $n$, note that $n^2 P_n(C_n) = |C_n|$ is the number of pairs $(i,j) \in [n]\times[n]$ that are relatively prime. Obviously, the number of such pairs is increasing in $n$. Thus $(n')^2 P_{n'}(C_{n'}) \le n^2 P_n(C_n) \le (n'')^2 P_{n''}(C_{n''})$, or equivalently,

$$\Big(\frac{n'}{n}\Big)^2 P_{n'}(C_{n'}) \le P_n(C_n) \le \Big(\frac{n''}{n}\Big)^2 P_{n''}(C_{n''}). \tag{2.30}$$

Since $n', n'' \in D_R$, we conclude from (2.28)–(2.30) that

$$\Big(\frac{n-\prod_{k=1}^R p_k}{n}\Big)^2 \Big[\prod_{k=1}^R \Big(1-\frac{1}{p_k^2}\Big) - \sum_{k=R+1}^\infty \frac{1}{p_k^2}\Big] < P_n(C_n) < \Big(\frac{n+\prod_{k=1}^R p_k}{n}\Big)^2 \prod_{k=1}^R \Big(1-\frac{1}{p_k^2}\Big). \tag{2.31}$$

Letting $n \to \infty$ in (2.31), we obtain

$$\prod_{k=1}^R \Big(1-\frac{1}{p_k^2}\Big) - \sum_{k=R+1}^\infty \frac{1}{p_k^2} \le \liminf_{n\to\infty} P_n(C_n) \le \limsup_{n\to\infty} P_n(C_n) \le \prod_{k=1}^R \Big(1-\frac{1}{p_k^2}\Big). \tag{2.32}$$

Now (2.32) holds for arbitrary $R$; thus, letting $R \to \infty$, we conclude that

$$\lim_{n\to\infty} P_n(C_n) = \prod_{k=1}^\infty \Big(1-\frac{1}{p_k^2}\Big). \tag{2.33}$$


The celebrated Euler product formula states that

$$\prod_{k=1}^\infty \Big(1-\frac{1}{p_k^r}\Big)^{-1} = \sum_{n=1}^\infty \frac{1}{n^r}, \quad r > 1; \tag{2.34}$$

see Exercise 2.5. From (2.33), (2.34), and (2.13), we conclude that

$$\lim_{n\to\infty} q_n = \lim_{n\to\infty} P_n(C_n) = \frac{1}{\sum_{n=1}^\infty \frac{1}{n^2}} = \frac{6}{\pi^2}. \qquad \square$$



Exercise 2.1. Give a direct proof of Corollary 2.1. (Hint: The Euler $\phi$-function $\phi(n)$ counts the number of positive integers that are less than or equal to $n$ and relatively prime to $n$. Employ the sieve method, which from the point of view of set theory is the method of inclusion–exclusion. Start with a list of all $n$ integers between 1 and $n$ as potential members of the set of the $\phi(n)$ integers relatively prime to $n$. Let $\{p_j\}_{j=1}^m$ be the prime divisors of $n$. For any such $p_j$, the $\frac{n}{p_j}$ numbers $p_j, 2p_j, \dots, \frac{n}{p_j}p_j$ are not relatively prime to $n$, so we should strike these numbers from our list. When we do this for each $j$, the remaining numbers on the list are those numbers that are relatively prime to $n$, and the size of the list is $\phi(n)$. Now we haven't necessarily reduced the size of our list to $N_1 := n - \sum_{j=1}^m \frac{n}{p_j}$, because some of the numbers we have deleted might be multiples of two different primes, $p_i$ and $p_j$, in which case they were subtracted above twice. Thus we need to add back to $N_1$ all of the $\frac{n}{p_i p_j}$ multiples of $p_i p_j$, for $i \ne j$. That is, we now have $N_2 := N_1 + \sum_{i \ne j} \frac{n}{p_i p_j}$. Continue in this vein.)

Exercise 2.2. This exercise presents an alternative proof of Proposition 2.2:
(a) Show that the arithmetic function $\sum_{d \mid n} \phi(d)$ is multiplicative. Use the fact that $\phi$ is multiplicative; see Exercise 2.3.
(b) Show that $\sum_{d \mid n} \phi(d) = n$ when $n$ is a prime power.
(c) Conclude that Proposition 2.2 holds.

Exercise 2.3. The Chinese remainder theorem states that if $n$ and $m$ are relatively prime positive integers, and $a \in [n]$ and $b \in [m]$, then there exists a unique $c \in [nm]$ such that $c = a \bmod n$ and $c = b \bmod m$. (For a proof, see [27].) Use this to prove that the Euler $\phi$-function is multiplicative. Then use the fact that $\phi$ is multiplicative to prove (2.7).

Exercise 2.4. Prove (2.25).

Exercise 2.5. Prove the Euler product formula (2.34). (Hint: Let $N_\ell$ denote the set of positive integers all of whose prime factors are in the set $\{p_k\}_{k=1}^\ell$.
Using the fact that

$$\frac{1}{1-\frac{1}{p_k^r}} = \sum_{m=0}^\infty \frac{1}{p_k^{rm}},$$

for all $k \in \mathbb{N}$, first show that $\frac{1}{1-\frac{1}{p_1^r}} \cdot \frac{1}{1-\frac{1}{p_2^r}} = \sum_{n \in N_2} \frac{1}{n^r}$, and then show that $\prod_{k=1}^\ell \frac{1}{1-\frac{1}{p_k^r}} = \sum_{n \in N_\ell} \frac{1}{n^r}$, for any $\ell \in \mathbb{N}$.)

Exercise 2.6. Using Theorem 2.1, prove the following result: Let $d \in \mathbb{N}$ with $d \ge 2$. Choose two integers uniformly at random from $[n]$. As $n \to \infty$, the asymptotic probability that their greatest common divisor is $d$ is $\frac{6}{\pi^2 d^2}$.

Exercise 2.7. Give a probabilistic proof of Theorem 2.2.
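Both the $\frac{6}{\pi^2}$ limit of Theorem 2.1 and the $\frac{6}{\pi^2 d^2}$ limit of Exercise 2.6 can be checked numerically by counting pairs directly for a moderate $n$; the following sketch (the function name is ours) does the exact count:

```python
from math import gcd, pi

def gcd_fraction(n, d):
    """Exact fraction of pairs (i, j) in [n] x [n] with gcd(i, j) = d."""
    count = sum(1 for i in range(1, n + 1)
                for j in range(1, n + 1) if gcd(i, j) == d)
    return count / n**2

# Theorem 2.1: the coprime fraction tends to 6/pi^2 ~ 0.6079
print(gcd_fraction(500, 1), 6 / pi**2)
# Exercise 2.6: the fraction with gcd = 2 tends to 6/(4 pi^2) ~ 0.1520
print(gcd_fraction(500, 2), 6 / (pi**2 * 4))
```

At $n = 500$ the error term is already far below one percent, consistent with the $O(\frac{\log n}{n})$ rate implicit in (2.31).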

Chapter Notes

It seems that Theorem 2.1 was first proven by E. Cesàro in 1881. A good source for the results in this chapter is Nathanson's book [27]. See also the more advanced treatment of Tenenbaum [33], which contains many interesting and nontrivial exercises. The heuristic probabilistic proof of Theorem 2.1 is well known and can be found readily, including via a Google search. I am unaware of a rigorous probabilistic proof in the literature.

Chapter 3

A One-Dimensional Probabilistic Packing Problem

Consider $n$ molecules lined up in a row. From among the $n-1$ nearest neighbor pairs, select one pair at random and "bond" the two molecules together. Now from all the remaining nearest neighbor pairs, select one pair at random and bond the two molecules together. Continue like this until no nearest neighbor pairs remain. Let $M_{n;2}$ denote the random variable that counts the number of bonded molecules, and let $EM_{n;2}$ denote the expected value of $M_{n;2}$, that is, the average number of bonded molecules. The first thing we would like to do is to compute the limiting average fraction of bonded molecules: $\lim_{n\to\infty} \frac{EM_{n;2}}{n}$. Then we would like to show that $\frac{M_{n;2}}{n}$ is close to this limiting average with high probability as $n \to \infty$; that is, we would like to prove that $\frac{M_{n;2}}{n}$ satisfies the weak law of large numbers. Of course, by definition, $EM_{n;2} = \sum_{j=0}^n j P(M_{n;2} = j)$, where $P(M_{n;2} = j)$ is the probability that $M_{n;2}$ is equal to $j$. However, it would be fruitless to pursue this formula to evaluate $EM_{n;2}$ asymptotically, because the calculation of $P(M_{n;2} = j)$ is hopelessly complicated. We will solve the problem with the help of generating functions.

Actually, we will consider a slightly more general problem, where the pairs are replaced by $k$-tuples, for some $k \ge 2$. So the problem is as follows. There are $n$ molecules on a line. From among the $n-k+1$ nearest neighbor $k$-tuples, select one at random and "bond" the $k$ molecules together. Now from among all the remaining nearest neighbor $k$-tuples, select one at random and bond the $k$ molecules together. Continue like this until there are no nearest neighbor $k$-tuples left. Let $M_{n;k}$ denote the random variable that counts the number of bonded molecules, and let $EM_{n;k}$ denote the expected value of $M_{n;k}$. See Fig. 3.1. Here is our result.

Theorem 3.1. For each integer $k \ge 2$,

$$\lim_{n\to\infty} \frac{EM_{n;k}}{n} = k \exp\Big(-2\sum_{j=1}^{k-1}\frac{1}{j}\Big) \int_0^1 \exp\Big(2\sum_{j=1}^{k-1}\frac{s^j}{j}\Big)\,ds := p_k. \tag{3.1}$$

Furthermore, $\frac{M_{n;k}}{n}$ satisfies the weak law of large numbers; that is, for all $\epsilon > 0$,

R.G. Pinsky, Problems from the Discrete to the Continuous, Universitext, DOI 10.1007/978-3-319-07965-3__3, © Springer International Publishing Switzerland 2014


Fig. 3.1 A realization with $n = 21$ and $k = 3$ that gives $M_{21;3} = 15$

$$\lim_{n\to\infty} P\Big(\Big|\frac{M_{n;k}}{n} - p_k\Big| \ge \epsilon\Big) = 0. \tag{3.2}$$
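Theorem 3.1 is easy to probe by Monte Carlo simulation of the bonding process itself; for $k = 2$ the limit is $p_2 = 1 - e^{-2}$, derived below. The following sketch is our illustration (the function name, sample sizes, and seed are our choices, not from the text):

```python
import random
from math import exp

def bond_fraction(n, k, rng):
    """Run the sequential nearest-neighbor k-tuple bonding process; return M_{n,k}/n."""
    bonded = [False] * n
    while True:
        # start positions of k adjacent, currently unbonded molecules
        starts = [i for i in range(n - k + 1) if not any(bonded[i:i + k])]
        if not starts:
            return sum(bonded) / n
        i = rng.choice(starts)
        bonded[i:i + k] = [True] * k

rng = random.Random(0)
avg = sum(bond_fraction(500, 2, rng) for _ in range(50)) / 50
print(avg, 1 - exp(-2))  # the two values should be close (~0.865)
```

Averaging 50 runs at $n = 500$ already lands within a percent of $p_2$, illustrating both (3.1) and the concentration asserted in (3.2).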

Remark 1. Only when $k = 2$ can $p_k$ be calculated explicitly; one obtains $p_2 = 1 - e^{-2} \approx 0.865$. Numerical integration gives $p_3 \approx 0.824$, $p_4 \approx 0.804$, $p_5 \approx 0.792$, $p_{10} \approx 0.770$, $p_{100} \approx 0.750$, $p_{1000} \approx 0.748$, and $p_{10{,}000} \approx 0.748$. The expression $p_k$ seems surprisingly difficult to analyze. We suggest the following open problem to the reader.

Open Problem. Prove that $p_k$ is monotone decreasing and calculate $\lim_{k\to\infty} p_k$.

Remark 2. Any molecule that remains unbonded at the end of the nearest neighbor $k$-tuple bonding process occurs in a maximal row of $j$ unbonded molecules, for some $j \in [k-1]$. In the limit as $n \to \infty$, what fraction of molecules ends up in a maximal row of $j$ unbonded molecules? See Exercise 3.2. (In Fig. 3.1, numbering from left to right, molecules #4 and #8 occur in a maximal row of one unbonded molecule, while molecules #15, #16, #20, and #21 occur in a maximal row of two unbonded molecules.)
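The values quoted in Remark 1 can be reproduced by elementary quadrature applied to the integral in (3.1), using $p_k = k\int_0^1 \exp\big(-2\sum_{j=1}^{k-1}\frac{1-s^j}{j}\big)ds$ (the two exponential factors of (3.1) combined). A sketch with Simpson's rule (our choice of method):

```python
from math import exp

def p(k, m=2000):
    """Simpson's rule for p_k = k * int_0^1 exp(-2 sum_{j=1}^{k-1} (1 - s^j)/j) ds."""
    f = lambda s: exp(-2 * sum((1 - s**j) / j for j in range(1, k)))
    h = 1.0 / m  # m must be even for Simpson's rule
    total = f(0.0) + f(1.0) + sum((4 if i % 2 else 2) * f(i * h) for i in range(1, m))
    return k * h / 3 * total

for k in (2, 3, 4, 5):
    print(k, round(p(k), 3))  # compare with Remark 1: 0.865, 0.824, 0.804, 0.792
```

For $k = 2$ the quadrature agrees with the closed form $1 - e^{-2}$ to machine precision, and the computed values decrease in $k$, consistent with the open problem's conjecture.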

Proof. For notational convenience, let $H_n^{(k)} = EM_{n;k}$ and $L_n^{(k)} = EM_{n;k}^2$. To prove the theorem, it suffices to show that

$$EM_{n;k} = H_n^{(k)} = p_k n + o(n), \quad \text{as } n \to \infty, \tag{3.3}$$

and that

$$EM_{n;k}^2 = L_n^{(k)} = p_k^2 n^2 + o(n^2), \quad \text{as } n \to \infty. \tag{3.4}$$

This method of proof is known as the second moment method. It is clear that (3.1) follows from (3.3). An application of Chebyshev's inequality shows that (3.2) follows from (3.3) and (3.4). To see this, note that if $Z$ is a random variable with expected value $EZ$ and variance $\sigma^2(Z)$, then Chebyshev's inequality states that

$$P(|Z - EZ| \ge \delta) \le \frac{\sigma^2(Z)}{\delta^2}, \quad \text{for any } \delta > 0.$$

Also, $\sigma^2(Z) = EZ^2 - (EZ)^2$. We apply Chebyshev's inequality with $Z = \frac{M_{n;k}}{n}$. Using (3.3) and (3.4), we have

$$EZ = \frac{H_n^{(k)}}{n} = p_k + o(1), \quad \text{as } n \to \infty, \tag{3.5}$$


and

$$\sigma^2(Z) = \frac{L_n^{(k)}}{n^2} - \frac{(H_n^{(k)})^2}{n^2} = p_k^2 + o(1) - (p_k + o(1))^2 = o(1), \quad \text{as } n \to \infty.$$

Thus, we obtain for all $\delta > 0$,

$$P\Big(\Big|\frac{M_{n;k}}{n} - \frac{H_n^{(k)}}{n}\Big| \ge \delta\Big) \le \frac{o(1)}{\delta^2}, \quad \text{as } n \to \infty,$$

or, equivalently,

$$\lim_{n\to\infty} P\Big(\Big|\frac{M_{n;k}}{n} - \frac{H_n^{(k)}}{n}\Big| \ge \delta\Big) = 0, \quad \text{for all } \delta > 0. \tag{3.6}$$

We now show that (3.2) follows from (3.3) and (3.6). Fix $\epsilon > 0$. We have

$$\Big|\frac{M_{n;k}}{n} - p_k\Big| = \Big|\frac{M_{n;k}}{n} - \frac{H_n^{(k)}}{n} + \frac{H_n^{(k)}}{n} - p_k\Big| \le \Big|\frac{M_{n;k}}{n} - \frac{H_n^{(k)}}{n}\Big| + \Big|\frac{H_n^{(k)}}{n} - p_k\Big|.$$

For sufficiently large $n_\epsilon$, one has from (3.3) that $|\frac{H_n^{(k)}}{n} - p_k| \le \frac{\epsilon}{2}$, for $n \ge n_\epsilon$. Thus, for $n \ge n_\epsilon$, a necessary condition for $|\frac{M_{n;k}}{n} - p_k| \ge \epsilon$ is that $|\frac{M_{n;k}}{n} - \frac{H_n^{(k)}}{n}| \ge \frac{\epsilon}{2}$. Consequently,

$$P\Big(\Big|\frac{M_{n;k}}{n} - p_k\Big| \ge \epsilon\Big) \le P\Big(\Big|\frac{M_{n;k}}{n} - \frac{H_n^{(k)}}{n}\Big| \ge \frac{\epsilon}{2}\Big), \quad \text{for } n \ge n_\epsilon.$$

Now (3.2) follows from this and (3.6).

Our proofs of (3.3) and (3.4) will follow similar lines. Before commencing with the proof of (3.3), we trace its general architecture. Only the first step of the proof involves probability. In this step, we employ probabilistic reasoning to produce a recursion equation that gives $H_n^{(k)}$ in terms of $H_0^{(k)}, H_1^{(k)}, \dots, H_{n-k}^{(k)}$. In this form, the equation is not useful, because as $n \to \infty$ it gives $H_n^{(k)}$ in terms of a growing number of its predecessors. However, defining $S_n^{(k)} = \sum_{j=0}^n H_j^{(k)}$, and using the abovementioned recursion equation, we find that $S_n^{(k)}$ is given in terms of only two of its predecessors. We then construct the generating function $g(t)$ whose coefficients are $\{S_n^{(k)}\}_{n=0}^\infty$. Using the recursion equation for $S_n^{(k)}$, we show that $g$ solves a linear, first-order differential equation. We solve this differential equation to obtain an explicit formula for $g(t)$. This explicit formula reveals that $g$ possesses a singularity at $t = 1$. Exploiting this singularity allows us to evaluate $\lim_{n\to\infty} \frac{S_n^{(k)}}{n^2}$, and then a simple observation allows us to obtain $\lim_{n\to\infty} \frac{H_n^{(k)}}{n}$ from $\lim_{n\to\infty} \frac{S_n^{(k)}}{n^2}$.

We now commence with the proof of (3.3). Note that if we start with $n < k$ molecules, then none of them will get bonded. Thus,

$$H_n^{(k)} = 0, \quad \text{for } n = 0, \dots, k-1. \tag{3.7}$$

We now derive a recursion relation for $H_n^{(k)}$. The method we use is called first step analysis. We begin with a line of $n \ge k$ unbonded molecules, and in the first step, one of the nearest neighbor $k$-tuples is chosen at random and its $k$ molecules are bonded. In order from left to right, denote the original $n-k+1$ nearest neighbor $k$-tuples by $\{B_j\}_{j=1}^{n-k+1}$. If $B_j$ was chosen in the first step, then the original row now contains a row of $j-1$ unbonded molecules to the left of the bonded $k$-tuple $B_j$ and a row of $n+1-j-k$ unbonded molecules to the right of $B_j$. To complete the random bonding process, we choose random $k$-tuples from these two sub-rows until there are no more $k$-tuples to choose from. This gives us the following formula for the conditional expectation of $M_{n;k}$ given that $B_j$ was selected first: for $n \ge k$,

$$E(M_{n;k} \mid B_j \text{ selected first}) = k + E(M_{j-1;k} + M_{n+1-j-k;k}) = k + H_{j-1}^{(k)} + H_{n+1-j-k}^{(k)}. \tag{3.8}$$

Of course, for each $j \in [n-k+1]$, the probability that $B_j$ was chosen first is $\frac{1}{n-k+1}$. Thus, we obtain the formula

$$EM_{n;k} = H_n^{(k)} = \sum_{j=1}^{n-k+1} P(B_j \text{ selected first})\, E(M_{n;k} \mid B_j \text{ selected first}) = \frac{1}{n-k+1}\sum_{j=1}^{n-k+1}\big(k + H_{j-1}^{(k)} + H_{n+1-j-k}^{(k)}\big), \quad n \ge k.$$

We can rewrite this as

$$H_n^{(k)} = k + \frac{2}{n-k+1}\sum_{j=0}^{n-k} H_j^{(k)}, \quad n \ge k. \tag{3.9}$$

The above recursion equation is not useful directly, because it gives $H_n^{(k)}$ in terms of $n-k+1$ of its predecessors; we want a recursion equation that expresses a given term in terms of a fixed finite number of its predecessors. To that end, we define

$$S_n^{(k)} = \sum_{j=0}^n H_j^{(k)}. \tag{3.10}$$

Substituting this in (3.9) gives

$$H_n^{(k)} = k + \frac{2}{n-k+1} S_{n-k}^{(k)}, \quad n \ge k. \tag{3.11}$$


Writing (3.7) and (3.11) in terms of $\{S_n^{(k)}\}_{n=0}^\infty$, we obtain

$$S_n^{(k)} = 0, \quad \text{for } n = 0, \dots, k-1, \tag{3.12}$$

and

$$S_n^{(k)} - S_{n-1}^{(k)} = k + \frac{2}{n-k+1} S_{n-k}^{(k)}, \quad n \ge k. \tag{3.13}$$

This recursion equation has the potential to be useful, since it gives $S_n^{(k)}$ in terms of only two of its predecessors, $S_{n-1}^{(k)}$ and $S_{n-k}^{(k)}$. Of course, we have paid a price: we are now working with $S_n^{(k)}$ instead of $H_n^{(k)}$; but this will be dealt with easily. For convenience, we drop the superscript $k$ from $S_n^{(k)}$, $H_n^{(k)}$, and $L_n^{(k)}$ for the rest of the chapter, except in the statements of propositions. We rewrite (3.13) as

$$(n-k+1) S_n = (n-k+1) S_{n-1} + 2 S_{n-k} + k(n-k+1), \quad n \ge k. \tag{3.14}$$
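Before passing to generating functions, note that (3.7) and (3.11) already determine every $H_n$ exactly, which gives an independent numerical check that $H_n/n$ approaches $p_k$ (for $k = 2$, $p_2 = 1-e^{-2}$). A sketch, with names of our own choosing:

```python
from math import exp

def expected_bonded(n_max, k):
    """H_n = E M_{n,k} via (3.7) and (3.11): H_n = k + 2 S_{n-k}/(n-k+1), S_n = H_0 + ... + H_n."""
    H = [0.0] * (n_max + 1)
    S = [0.0] * (n_max + 1)
    for n in range(1, n_max + 1):
        if n >= k:
            H[n] = k + 2.0 * S[n - k] / (n - k + 1)
        S[n] = S[n - 1] + H[n]
    return H

h = expected_bonded(4000, 2)
print(h[4000] / 4000, 1 - exp(-2))  # H_n/n approaches p_2
```

Small cases are easy to verify by hand: $H_2 = H_3 = 2$ and $H_4 = \frac{10}{3}$, since with four molecules the middle pair is chosen first with probability $\frac13$, stranding the two end molecules.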

We now define the generating function for $\{S_n\}_{n=0}^\infty$ and use (3.14) to derive a linear, first-order differential equation that is satisfied by this generating function. The generating function $g(t)$ is defined by

$$g(t) = \sum_{n=0}^\infty S_n t^n = \sum_{n=k}^\infty S_n t^n, \tag{3.15}$$

where the second equality follows from (3.12). From the definitions, it follows that $H_n \le n$, and thus $S_n \le \frac12 n(n+1)$. Consequently, the sum on the right hand side of (3.15) converges for $|t| < 1$, with the convergence being uniform for $|t| \le \rho$, for any $\rho \in (0,1)$. It follows then that

$$g'(t) = \sum_{n=k}^\infty n S_n t^{n-1}, \quad |t| < 1. \tag{3.16}$$

Multiply equation (3.14) by $t^n$ and group the terms in the following way:

$$n S_n t^n - (k-1) S_n t^n = (n-1) S_{n-1} t^n - (k-2) S_{n-1} t^n + 2 S_{n-k} t^n + k(n-k+1) t^n.$$

Now summing the equation over all $n \ge k$, and appealing to (3.15), (3.16), and (3.12), we obtain the differential equation

$$t g'(t) - (k-1) g(t) = t^2 g'(t) - (k-2) t g(t) + 2 t^k g(t) + k t \sum_{n=k}^\infty n t^{n-1} - k(k-1) \sum_{n=k}^\infty t^n. \tag{3.17}$$

Since $\sum_{n=k}^\infty t^n = \frac{t^k}{1-t}$, and $\sum_{n=k}^\infty n t^{n-1}$ is the derivative of $\sum_{n=k}^\infty t^n$, it follows that $\sum_{n=k}^\infty n t^{n-1} = \big(\frac{t^k}{1-t}\big)' = \frac{(1-t) k t^{k-1} + t^k}{(1-t)^2}$. Using these facts and doing some algebra, which leads to many cancellations, we obtain

$$k t \sum_{n=k}^\infty n t^{n-1} - k(k-1) \sum_{n=k}^\infty t^n = \frac{k t^k}{(1-t)^2}. \tag{3.18}$$

Substituting this in (3.17), and doing a little algebra, we obtain

$$g'(t) = \frac{(k-1) - (k-2)t + 2t^k}{t(1-t)}\, g(t) + \frac{k\, t^{k-1}}{(1-t)^3}, \quad 0 < t < 1. \tag{3.19}$$

Note that we have excluded $t = 0$ because we have divided by $t$. There are two singularities in the above equation: one at $t = 0$ and one at $t = 1$. The singularity at $t = 0$ is removable; indeed, $g(0) = 0$, so the first term on the right hand side of (3.19) can be defined at 0. The singularity at 1, on the other hand, is authentic, and actually contains the solution to our problem; we will just need to "unzip" it. The linear, first-order differential equation in (3.19) is written in the form $g'(t) = a(t) g(t) + b(t)$, where

$$a(t) = \frac{(k-1) - (k-2)t + 2t^k}{t(1-t)}, \qquad b(t) = \frac{k\, t^{k-1}}{(1-t)^3}. \tag{3.20}$$

Let $\epsilon \in (0,1)$ and rewrite the differential equation as

$$\Big(g(t)\, e^{-\int_\epsilon^t a(s)\,ds}\Big)' = b(t)\, e^{-\int_\epsilon^t a(s)\,ds}.$$

Integrating from $\epsilon$ to $t \in (\epsilon, 1)$ gives

$$g(t)\, e^{-\int_\epsilon^t a(r)\,dr} = g(\epsilon) + \int_\epsilon^t b(s)\, e^{-\int_\epsilon^s a(r)\,dr}\,ds, \quad t \in (\epsilon, 1),$$

which we rewrite as

$$g(t) = g(\epsilon)\, e^{\int_\epsilon^t a(r)\,dr} + \int_\epsilon^t b(s)\, e^{\int_s^t a(r)\,dr}\,ds, \quad t \in (\epsilon, 1). \tag{3.21}$$

Since $\lim_{t\to 0} t\, a(t) = k-1$, there exists a $t_0 > 0$ such that $a(t) \le \frac{k-\frac12}{t}$, for $0 < t \le t_0$. Thus, for $\epsilon < t_0$, one has

$$e^{\int_\epsilon^{t_0} a(r)\,dr} \le e^{\int_\epsilon^{t_0} \frac{k-\frac12}{r}\,dr} = \Big(\frac{t_0}{\epsilon}\Big)^{k-\frac12}.$$

3 Probabilistic Packing Problem

27

By (3.15), we have $g(\epsilon) = O(\epsilon^k)$ as $\epsilon \to 0$. Therefore,

$$\lim_{\epsilon\to 0} g(\epsilon)\, e^{\int_\epsilon^t a(r)\,dr} = \lim_{\epsilon\to 0} g(\epsilon)\, e^{\int_\epsilon^{t_0} a(r)\,dr}\, e^{\int_{t_0}^t a(r)\,dr} \le e^{\int_{t_0}^t a(r)\,dr} \lim_{\epsilon\to 0} g(\epsilon) \Big(\frac{t_0}{\epsilon}\Big)^{k-\frac12} = 0.$$

Thus, letting $\epsilon \to 0$ in (3.21) gives

$$g(t) = \int_0^t b(s)\, e^{\int_s^t a(r)\,dr}\,ds, \quad 0 \le t < 1. \tag{3.22}$$

Using partial fractions, one finds that

$$\frac{(k-1) - (k-2)r}{r(1-r)} = \frac{k-1}{r} + \frac{1}{1-r}.$$

We also have

$$\frac{r^{k-1}}{1-r} = \frac{1}{1-r} - (1 + r + \cdots + r^{k-2}).$$

Thus, we can rewrite $a(r)$ from (3.20) as

$$a(r) = \frac{k-1}{r} + \frac{3}{1-r} - 2(1 + r + \cdots + r^{k-2}).$$

We then obtain

$$\int_s^t a(r)\,dr = \Big[(k-1)\log r - 3\log(1-r) - 2\sum_{j=1}^{k-1}\frac{r^j}{j}\Big]_{r=s}^{r=t},$$

and thus

$$e^{\int_s^t a(r)\,dr} = t^{k-1}(1-t)^{-3} e^{-2\sum_{j=1}^{k-1}\frac{t^j}{j}} \cdot s^{1-k}(1-s)^3 e^{2\sum_{j=1}^{k-1}\frac{s^j}{j}}. \tag{3.23}$$

Substituting this in (3.22) and recalling the definition of $b$ from (3.20), we obtain

$$g(t) = \frac{t^{k-1} e^{-2\sum_{j=1}^{k-1}\frac{t^j}{j}}}{(1-t)^3} \int_0^t k\, e^{2\sum_{j=1}^{k-1}\frac{s^j}{j}}\,ds. \tag{3.24}$$

We see that $g$ has a third-order singularity at $t = 1$. We proceed to "unzip" this singularity to reveal the answer to our problem. We have the following proposition, which connects the limiting behavior of $H_n$ with that of $S_n$.


Proposition 3.1. $\lim_{n\to\infty} \frac{H_n^{(k)}}{n} = \ell$ if and only if $\lim_{n\to\infty} \frac{S_n^{(k)}}{n^2} = \frac{\ell}{2}$.

Proof. The proof is immediate from (3.11). $\square$

And we have the following proposition, which connects the limiting behavior of $S_n$ with the singularity of $g$ at $t = 1$.

Proposition 3.2. If $\lim_{n\to\infty} \frac{S_n^{(k)}}{n^2} = L$, then $\lim_{t\to 1^-} (1-t)^3 g(t) = 2L$.

Proof. Since $\lim_{n\to\infty} \frac{S_n}{n^2} = L$, we also have $\lim_{n\to\infty} \frac{S_n}{n(n-1)} = L$. Let $\epsilon > 0$. Choose $n_0$ such that $|\frac{S_n}{n(n-1)} - L| \le \epsilon$, for $n > n_0$. Then, recalling (3.15), we have

$$\sum_{n=0}^{n_0} S_n t^n + (L-\epsilon)\sum_{n=n_0+1}^\infty n(n-1) t^n \le g(t) \le \sum_{n=0}^{n_0} S_n t^n + (L+\epsilon)\sum_{n=n_0+1}^\infty n(n-1) t^n. \tag{3.25}$$

Now

$$\sum_{n=0}^\infty n(n-1) t^n = t^2\Big(\sum_{n=0}^\infty t^n\Big)'' = t^2\Big(\frac{1}{1-t}\Big)'' = \frac{2t^2}{(1-t)^3},$$

so

$$\sum_{n=n_0+1}^\infty n(n-1) t^n = \frac{2t^2}{(1-t)^3} - \sum_{n=0}^{n_0} n(n-1) t^n.$$

Substituting this latter equality in (3.25), multiplying by $(1-t)^3$, and letting $t \to 1^-$, we obtain

$$2L - 2\epsilon \le \liminf_{t\to 1^-}(1-t)^3 g(t) \le \limsup_{t\to 1^-}(1-t)^3 g(t) \le 2L + 2\epsilon.$$

As $\epsilon > 0$ is arbitrary, the proposition follows. $\square$

Proposition 3.3. limn!1

Sn n2

exists.

Proof. Rewriting the recursion equation for Sn in (3.13) so that only Sn appears on Sn1 the left hand side, then dividing both sides by n2 and subtracting .n1/ 2 from both sides, we have Sn Sn1 k Sn1 Sn1 2Snk D  D 2C 2  C 2 n2 .n  1/2 n n .n  1/2 n .n  k C 1/ k 2n  1 2Snk D  2 Sn1 C 2 2 2 n n .n  1/ n .n  k C 1/ k 2 2n  1 2Sn1  .HnkC1 C    CHn1 / D  Sn1 C 2 n2 n2 .n  1/2 n .n  k C 1/ n2 .n  k C 1/ k .2k  5/n C 3  k 2 Sn1  2 .HnkC1 C    C Hn1 /: C 2 2 2 n n .n  1/ .n  k C 1/ n .n  k C 1/ (3.26) As already noted, from the definitions, we have Hl  l and Sl  12 l.l C 1/. Thus, there exists a C > 0 such that ˇ .2k  5/n C 3  k ˇ ˇ ˇSn1  C and n2 .n  1/2 .n  k C 1/ n2 n2 .n

2 C .HnkC1 C    C Hn1 /  2 :  k C 1/ n

(3.27)

This shows that the right hand side of (3.26) is O. n12 / and thus so is the left hand  Sn  P Sn1 side. Consequently, the telescopic series 1 nD2 n2  .n1/2 is convergent. Since X  Sj Sj 1  Sn ; D  2 n2 j .j  1/2 j D2 n

we conclude that limn!1

Sn n2



exists.

By Propositions 3.1 and 3.3, ` WD limn!1 and 3.2 (with L D 2` ), it follows that

Hn n

exists. Then by Propositions 3.1

lim .1  t /3 g.t / D `:

t!1

30

3 Probabilistic Packing Problem

However, from the explicit formula for $g$ in (3.24), we have

$$\lim_{t\to 1^-}(1-t)^3 g(t) = k\, e^{-2\sum_{j=1}^{k-1}\frac{1}{j}} \int_0^1 e^{2\sum_{j=1}^{k-1}\frac{s^j}{j}}\,ds = p_k.$$

Thus, $\ell = p_k$, completing the proof of (3.3).

We now turn to the proof of (3.4). We derive a formula by the method used to obtain (3.8). Recall the discussion preceding (3.8). Note that, conditioned on $B_j$ being chosen on the first step, the final state of the $j-1$ molecules to the left of $B_j$ and the final state of the $n+1-j-k$ molecules to the right of $B_j$ are independent of one another. Let $M_{j-1;k;1}$ and $M_{n+1-j-k;k;2}$ be independent random variables distributed according to the distributions of $M_{j-1;k}$ and $M_{n+1-j-k;k}$, respectively. Then, similar to (3.8), we have

$$E(M_{n;k}^2 \mid B_j \text{ selected first}) = E(k + M_{j-1;k;1} + M_{n+1-j-k;k;2})^2 = k^2 + L_{j-1} + L_{n+1-j-k} + 2k H_{j-1} + 2k H_{n+1-j-k} + 2 H_{j-1} H_{n+1-j-k}, \tag{3.28}$$

where the last term comes from the fact that the independence gives $E M_{j-1;k;1} M_{n+1-j-k;k;2} = E M_{j-1;k;1}\, E M_{n+1-j-k;k;2}$. Thus, similar to the passage from (3.8) to (3.9), we have

$$L_n = k^2 + \frac{2}{n-k+1}\sum_{j=0}^{n-k} L_j + \frac{4k}{n-k+1}\sum_{j=0}^{n-k} H_j + \frac{2}{n-k+1}\sum_{j=0}^{n-k} H_j H_{n-k-j}, \quad n \ge k. \tag{3.29}$$

We simplify the above recursion relation by defining

$$R_n = \sum_{j=0}^n L_j.$$

Of course, we have $L_n = 0$, for $n = 0, \dots, k-1$, and thus,

$$R_n = 0, \quad \text{for } n = 0, \dots, k-1. \tag{3.30}$$

Recalling (3.10), we can now rewrite (3.29) in the form

$$R_n = R_{n-1} + k^2 + \frac{2}{n-k+1} R_{n-k} + \frac{4k}{n-k+1} S_{n-k} + \frac{2}{n-k+1}\sum_{j=0}^{n-k} H_j H_{n-k-j}, \quad n \ge k. \tag{3.31}$$


Proposition 3.4.

$$\lim_{n\to\infty} \frac{1}{n^3}\sum_{j=0}^{n-k} H_j^{(k)} H_{n-k-j}^{(k)} = \frac{p_k^2}{6}.$$

Proof. Let $\epsilon > 0$. Since $\lim_{n\to\infty} \frac{H_n}{n} = p_k$, we can find an $n_\epsilon$ such that $(p_k - \epsilon)n \le H_n \le (p_k + \epsilon)n$, for $n > n_\epsilon$. Thus $(p_k - \epsilon)^2$

$$\lim_{n\to\infty} P\Big(\frac{O_{2n}^0}{2n} > \epsilon\Big) = 0, \quad \text{for all } \epsilon > 0. \tag{4.10}$$

We leave this as Exercise 4.3. In light of this, it follows that (4.9) would also hold if we had defined $O_{2n}^+$ in an asymmetric fashion, as the number of steps up to $2n$ for which the random walk is nonnegative: $|\{k \in [2n] : S_k \ge 0\}|$.

Our approach to proving the above two theorems will be completely combinatorial rather than probabilistic. Generating functions will play a seminal role. A random walk path of length $m$ is a path $\{x_j\}_{j=0}^m$ which satisfies

$$x_0 = 0; \qquad x_j - x_{j-1} = \pm 1,\ j \in [m]; \tag{4.11}$$

see Fig. 4.1. Since a random walk path has two choices at each step, there are $2^m$ random walk paths of length $m$. The probability that the simple, symmetric random walk behaves in a certain way up until time $m$ is simply the number of random walk paths that behave in that certain way divided by $2^m$.

4 Arcsine Laws for Random Walk

Fig. 4.2 A Dyck path of length 16

Our basic combinatorial object, upon which our results will be developed, is the Dyck path. A Dyck path of length $2n$ is a nonnegative random walk path $\{x_j\}_{j=0}^{2n}$ of length $2n$ which returns to 0 at step $2n$; that is, in addition to satisfying (4.11) with $m = 2n$, it also satisfies the following conditions:

$$x_j \ge 0,\ j \in [2n]; \qquad x_{2n} = 0. \tag{4.12}$$

See Fig. 4.2. We use generating functions to determine the number of Dyck paths. Let $d_n$ denote the number of Dyck paths of length $2n$. We also define $d_0 = 1$.

Proposition 4.2. The number of Dyck paths of length $2n$ is given by

$$d_n = \frac{1}{n+1}\binom{2n}{n}, \quad n \ge 1.$$

Remark. The number $C_n := \frac{1}{n+1}\binom{2n}{n}$ is known as the $n$th Catalan number.

Proof. We derive a recursion formula for $\{d_n\}_{n=0}^\infty$. A primitive Dyck path of length $2k$ is a Dyck path $\{x_j\}_{j=0}^{2k}$ of length $2k$ which satisfies $x_j > 0$ for $j = 1, \dots, 2k-1$. Let $v_k$ denote the number of primitive Dyck paths of length $2k$. Every Dyck path of length $2n$ returns to 0 for the first time at $2k$, for some $k \in [n]$. Consider a Dyck path of length $2n$ that returns to 0 for the first time at $2k$. The part of the path from time 0 to time $2k$ is a primitive Dyck path of length $2k$, and the part of the path from time $2k$ to $2n$ is an arbitrary Dyck path of length $2n-2k$. (In Fig. 4.2, the Dyck path of length 16 is composed of an initial primitive Dyck path of length 6, followed by a Dyck path of length 10.) This reasoning yields the recurrence relation

$$d_n = \sum_{k=1}^n v_k d_{n-k}, \quad n \ge 1. \tag{4.13}$$

Now we claim that

$$v_k = d_{k-1}, \quad k \ge 1. \tag{4.14}$$


Indeed, a primitive Dyck path $\{x_j\}_{j=0}^{2k}$ must satisfy $x_1 = 1$; $x_j \ge 1$, for $j \in [2k-2]$; $x_{2k-1} = 1$; $x_{2k} = 0$. Thus, letting $y_j = x_{j+1} - 1$, $0 \le j \le 2k-2$, it follows that $\{y_j\}_{j=0}^{2k-2}$ is a Dyck path. Of course, this analysis can be reversed. This shows that there is a 1-1 correspondence between primitive Dyck paths of length $2k$ and arbitrary Dyck paths of length $2(k-1)$, proving (4.14). From (4.13) and (4.14) we obtain the Dyck path recursion formula

$$d_n = \sum_{k=1}^n d_{k-1} d_{n-k}. \tag{4.15}$$

Let

$$D(x) = \sum_{n=0}^\infty d_n x^n \tag{4.16}$$

be the generating function for $\{d_n\}_{n=0}^\infty$. Since there are $2^{2n}$ random walk paths of length $2n$, we have the trivial estimate $d_n \le 2^{2n} = 4^n$. Thus, the power series defining $D(x)$ is absolutely convergent for $|x| < \frac14$. The product of two absolutely convergent power series $\sum_{n=0}^\infty a_n x^n$ and $\sum_{n=0}^\infty b_n x^n$ is $\sum_{n=0}^\infty c_n x^n$, where $c_n = \sum_{j=0}^n a_j b_{n-j}$. Thus, if in (4.15) the term $d_{k-1}$ were $d_k$ instead, and the summation started from $k = 0$ instead of from $k = 1$, then we would have had $D^2(x) = D(x)$. As it is, we "correct" for these deficiencies by multiplying by $x$ and adding 1: it is easy to check that (4.16) and (4.15) give

$$D(x) = x D^2(x) + 1. \tag{4.17}$$

Solving this quadratic equation in $D$ gives $D(x) = \frac{1 \pm \sqrt{1-4x}}{2x}$. Since we know from (4.16) that $D(0) = 1$, we conclude that the generating function for $\{d_n\}_{n=0}^\infty$ is given by

$$D(x) = \frac{1 - \sqrt{1-4x}}{2x}, \quad |x| < \frac14.$$

Now $(1-4x)^{\frac12}\big|_{x=0} = 1$, $\big((1-4x)^{\frac12}\big)'\big|_{x=0} = -2$, and

$$\frac{1}{n!}\big((1-4x)^{\frac12}\big)^{(n)}\Big|_{x=0} = -\frac{2^n}{n!}\prod_{j=1}^{n-1}(2j-1) = -\frac{2^n (2n-2)!}{n!\, 2^{n-1} (n-1)!} = -\frac{2}{2n-1}\binom{2n-1}{n}, \quad \text{for } n \ge 2; \tag{4.18}$$

thus, the Taylor series for $\sqrt{1-4x}$ is given by

$$\sqrt{1-4x} = 1 - 2x - \sum_{n=2}^\infty \frac{2}{2n-1}\binom{2n-1}{n} x^n. \tag{4.19}$$

The coefficient of $x^{n+1}$ in (4.19) is $-\frac{2}{2n+1}\binom{2n+1}{n+1} = -\frac{2}{n+1}\binom{2n}{n}$. Using this along with (4.18) and (4.19), we conclude that

$$D(x) = \sum_{n=0}^\infty \frac{1}{n+1}\binom{2n}{n} x^n, \quad |x| < \frac14. \tag{4.20}$$

From (4.20) and (4.16) it follows that $d_n = \frac{1}{n+1}\binom{2n}{n}$. $\square$
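The recursion (4.15) and the closed form of Proposition 4.2 can be cross-checked directly for the first several values of $n$; a short sketch:

```python
from math import comb

d = [1]  # d_0 = 1
for n in range(1, 13):
    # Dyck path recursion (4.15): d_n = sum_{k=1}^n d_{k-1} d_{n-k}
    d.append(sum(d[k - 1] * d[n - k] for k in range(1, n + 1)))

# Proposition 4.2: d_n is the nth Catalan number
assert all(d[n] == comb(2 * n, n) // (n + 1) for n in range(13))
print(d[:7])  # the Catalan numbers 1, 1, 2, 5, 14, 42, 132
```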

The proof of the proposition gives us the following corollary.

Corollary 4.1. The generating function for the sequence $\{d_n\}_{n=0}^\infty$, which counts Dyck paths, is given by

$$D(x) = \frac{1 - \sqrt{1-4x}}{2x}, \quad |x| < \frac14.$$

Let $w_n$ denote the number of nonnegative random walk paths of length $2n$. The difference between such a path and a Dyck path is that for such a path there is no requirement that it return to 0 at time $2n$. We also define $w_0 = 1$. We now calculate $\{w_n\}_{n=0}^\infty$ by deriving a recursion formula which involves $\{d_n\}_{n=0}^\infty$.

Proposition 4.3. The number $w_n$ of nonnegative random walk paths of length $2n$ is given by

$$w_n = \binom{2n}{n}, \quad n \ge 1. \tag{4.21}$$

Remark. The number of random walk paths of length $2n$ that return to 0 at time $2n$ is also given by $\binom{2n}{n}$, since to obtain such a path we must choose $n$ jumps of $+1$ and $n$ jumps of $-1$. Thus, we have the following somewhat surprising corollary.

Corollary 4.2. $P(S_1 \ge 0, \dots, S_{2n} \ge 0) = P(S_{2n} = 0)$.

Proof of Proposition 4.3. Of course, every nonnegative random walk path of length $2n+2$, when restricted to its first $2n$ steps, constitutes a nonnegative random walk path of length $2n$. A nonnegative random walk path of length $2n$ which does not return to 0 at time $2n$, that is, which is not a Dyck path, can be extended in four different ways to create a nonnegative random walk path of length $2n+2$. On the


other hand, a nonnegative random walk path of length $2n$ which is a Dyck path can only be extended in two different ways to create a nonnegative random walk path of length $2n+2$. Thus, we have the relation

$$w_{n+1} = 4(w_n - d_n) + 2 d_n = 4 w_n - 2 d_n, \quad n \ge 0. \tag{4.22}$$
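For small $n$, both Proposition 4.3 and the relation (4.22) can be verified by enumerating all $2^{2n}$ sign sequences; a sketch (the function name is ours):

```python
from itertools import product
from math import comb

def counts(n):
    """Count nonnegative paths (w_n) and Dyck paths (d_n) of length 2n by enumeration."""
    w = d = 0
    for steps in product((1, -1), repeat=2 * n):
        s, nonneg = 0, True
        for x in steps:
            s += x
            if s < 0:
                nonneg = False
                break
        if nonneg:
            w += 1
            if s == 0:
                d += 1
    return w, d

for n in range(1, 6):
    w, dck = counts(n)
    assert w == comb(2 * n, n)                  # Proposition 4.3
    assert dck == comb(2 * n, n) // (n + 1)     # Proposition 4.2
    assert counts(n + 1)[0] == 4 * w - 2 * dck  # recursion (4.22)
```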

Let

$$W(x) = \sum_{n=0}^\infty w_n x^n$$

be the generating function for $\{w_n\}_{n=0}^\infty$. As with the power series defining $D(x)$, it is clear that the power series defining $W(x)$ converges for $|x| < \frac14$. Multiply equation (4.22) by $x^n$ and sum over $n$ from 0 to $\infty$. On the left side we obtain $\sum_{n=0}^\infty w_{n+1} x^n = \frac1x(W(x) - 1)$, and on the right hand side we obtain $4W(x) - 2D(x)$. From the resulting equation, $\frac1x(W(x) - 1) = 4W(x) - 2D(x)$, we obtain

$$W(x) = \frac{1 - 2x D(x)}{1 - 4x}. \tag{4.23}$$

Substituting for $D(x)$ in (4.23) from Corollary 4.1, we obtain

$$W(x) = \frac{1}{\sqrt{1-4x}}, \quad |x| < \frac14.$$

Since $\frac{1}{\sqrt{1-4x}} = \sum_{n=0}^\infty \binom{2n}{n} x^n$, this gives $w_n = \binom{2n}{n}$, proving (4.21). $\square$

Proof of Theorem 4.1. By Stirling's formula (see Exercise 4.4), for any $\epsilon > 0$,

$$\frac{\binom{2k}{k}\binom{2n-2k}{n-k}}{2^{2n}} \sim \frac{1}{\pi\sqrt{k(n-k)}}, \quad \text{uniformly over } \epsilon n \le k \le (1-\epsilon)n, \text{ as } n \to \infty. \tag{4.24}$$
1 1 ; uniformly over n  k  .1  /n; as n ! 1: p  k.n  k/ (4.24)

Using (4.24) and (4.6), we have for 0 < a < b < 1

1 

Œnb X

q

kDŒnaC1

2k 2n2k 

Œnb X

.2n/

L P .a < 0  b/ D 2n

k

kDŒnaC1

nk 22n



Œnb X

1 1 D p  k.n  k/ kDŒnaC1

1

1 ; as n ! 1: k k n .1  / n n

(4.25)

But the last term on the right hand side of (4.25) is a Riemann sum for R 1 b p 1 dx. Thus, letting n ! 1 in (4.25) gives  a x.1x/ .2n/

L 1 lim P .a < 0  b/ D n!1 2n 

Z a

b

p p 1 2 2 dx D arcsin b  arcsin a; p   x.1  x/

for 0 < a < b < 1; which is equivalent to (4.7). This completes the proof of Theorem 4.1. We now turn to the proof of Theorem 4.2.



4 Arcsine Laws for Random Walk

45

Proof of Theorem 4.2. We need to prove (4.8). Of course, (4.9) follows from (4.8) C just like (4.7) followed from (4.6). Recalling the symmetric definition of O2n , for the purpose of this proof, we will refer to S2k as “positive” if either S2k > 0 or S2k D 0 and S2k1 > 0. Let cn;k denote the number of random walk paths of length 2n which are positive at exactly 2k steps. Since there are 22n random walk paths of length 2n, in order to prove (4.8), we need to prove that ! ! 2k 2n  2k cn;k D ; k D 0; 1; : : : ; n: (4.26) k nk     , and by symmetry, cn;0 D 2n ; thus, (4.26) By Proposition 4.3, we have cn;n D 2n n n holds for k D 0; n. C Consider now k 2 Œn  1. A random walk path that satisfies O2n D 2k must return to 0 before step 2n. Consider the first return to 0. If the path was positive before the first return to 0, then the first return to 0 must occur at step 2j , for some j 2 Œk (for otherwise, the path would be positive for more than 2k steps). If the path was negative before the first return to 0, then the first return to 0 must occur at step 2j , for some j 2 Œn  k (for otherwise the path would be positive for fewer than 2k steps). In light of these facts, and recalling that vj D dj 1 is the number of primitive Dyck paths of length 2j , it follows that for j 2 Œk, the number of random walk paths of length 2n which start out positive, return to 0 for the first time at step 2j , and are positive for exactly 2k steps is equal to dj 1 cnj;kj , Similarly, for j 2 Œn  k, the number of random walk paths of length 2n which start out negative, return to 0 for the first time at step 2j , and are positive for exactly 2k steps is equal to dj 1 cnj;k . Thus, we obtain the recursion relation cn;k D

k X

dj 1 cnj;kj C

j D1

nk X

dj 1 cnj;k ; k 2 Œn  1:

(4.27)

j D1

  Let en WD 2n , n  0. As follows from the remark after Proposition 4.3, for n n  1, en is the number of random walk paths of length 2n that are equal to 0 at step 2n. We derive a recursion formula for fen g1 nD0 . A random walk path of length 2n which is equal to 0 at step 2n must return to 0 for the first time at step 2k, for some k 2 Œn. The number of random walk paths of length 2n which are equal to 0 at time 2n and which return to 0 for the first time at step 2k is equal to 2vk enk D 2dk1 enk . Consequently, we obtain the recursion formula en D

n X

2dk1 enk :

(4.28)

kD1

We can now prove (4.26) by considering (4.27) and (4.28) and applying induction. To prove (4.26) we need to show that for all n  1, cn;k D ek enk ; for k D 0; 1; : : : ; n:

(4.29)

46

4 Arcsine Laws for Random Walk

When n D 1, (4.29) clearly holds. We now assume that (4.29) holds for all n  n0 and prove that it also holds for n D n0 C 1. When n D n0 C 1 and k D 0 or k D n0 C 1, we already know that (4.29) holds. So we need to show that (4.29) holds for n D n0 C 1 and k 2 Œn0 . Using (4.27) for the first equality, using the inductive assumption for the second equality, and using (4.28) for the third equality, we have cn0 C1;k D

k X

dj 1 cn0 C1j;kj C

j D1 k X j D1

dj 1 ekj en0 C1k C

n0X C1k

dj 1 cn0 C1j;k D

j D1 n0X C1k

dj 1 ek en0 C1kj D

j D1

1 1 ek en0 C1k C en0 C1k ek D ek en0 C1k ; 2 2

(4.30)

which proves that (4.29) holds for n D n0 C 1 and completes the proof of Theorem 4.2.  Exercise 4.1. This exercise completes the proof of Proposition 4.1. We proved that with probability one, the simple, symmetric random walk on Z visits 0 infinitely often. (a) For fixed x 2 Z, use the fact that with probability one the random walk visits 0 infinitely often to show that with probability one the random walk visits x infinitely often. (Hint: Every time the process returns to 0, it has probability . 12 /jxj of moving directly to x in jxj steps.) (b) Show that with probability one the random walk visits every x 2 Z infinitely often. Exercise 4.2. In this exercise, you will prove that ET0 D 1, where T0 is the first return time to 0. We can consider the random walk starting from any j 2 Z, rather than just from 0. When we start the random walk from j , denote the corresponding probabilities and expectations by Pj and Ej . Fix n  1 and consider starting the random walk from some j 2 f0; 1; : : : ; ng. Let T0;n denote the first nonnegative time that the random walk is at 0 or n. (a) Define g.j / D Ej T0;n . By analyzing what happens on the first step, show that g solves the difference equation g.j / D 1 C 12 g.j C 1/ C 12 g.j  1/, for j D 1; : : : ; n  1. Note that one has the boundary conditions g.0/ D g.n/ D 0. (b) Use (a) to show that Ej T0;n D j.n  j /. (Hint: Write the difference equation in the form g.j C 1/  g.j / D g.j /  g.j  1/  2.) (c) In particular, (b) gives E1 T0;n D n  1. From this, conclude that ET0 D 1. O0

Exercise 4.3. Prove (4.10): $\lim_{n\to\infty} P\big(\frac{O_{2n}^0}{2n} > \epsilon\big) = 0$, for all $\epsilon > 0$. (Hint: Represent $O_{2n}^0$ by $O_{2n}^0 = \sum_{j=1}^{2n} 1_{\{S_j = 0\}}$, where $1_{\{S_j = 0\}}$ is as in the proof of Proposition 4.1. From this representation, show that $\lim_{n\to\infty} E \frac{O_{2n}^0}{2n} = 0$. Conclude from this that (4.10) holds.)

Exercise 4.4. Use Stirling's formula to prove (4.24). That is, show that for any $\epsilon, \delta > 0$, there exists an $n_{\epsilon,\delta}$ such that if $n \ge n_{\epsilon,\delta}$, then
$$1 - \delta \le \binom{2k}{k}\binom{2n-2k}{n-k}\, 2^{-2n}\, \pi\sqrt{k(n-k)} \le 1 + \delta,$$
for all $k$ satisfying $\epsilon n \le k \le (1-\epsilon)n$.

Exercise 4.5. If one considers a simple, symmetric random walk $\{S_k\}_{k=0}^{2n}$ up to time $2n$, the probability of seeing any particular one of the $2^{2n}$ random walk paths of length $2n$ is equal to $2^{-2n}$. Recall from the remark after Proposition 4.3 that there are $\binom{2n}{n}$ random walk paths of length $2n$ that return to 0 at time $2n$. It follows from symmetry that conditioned on $S_{2n} = 0$, the probability of seeing any particular one of the $\binom{2n}{n}$ random walk paths of length $2n$ which return to 0 at time $2n$ is equal to $\binom{2n}{n}^{-1}$.
(a) Let $p \in (0,1) \setminus \{\frac{1}{2}\}$ and consider the simple random walk on $\mathbb{Z}$ which jumps one unit to the right with probability $p$ and one unit to the left with probability $1-p$. Denote the random walk by $\{S_n^{(p)}\}_{n=0}^\infty$. Consider this random walk up to time $2n$. For each particular random walk path of length $2n$, calculate the probability of seeing this path. The answer now depends on the path.
(b) Conditioned on $S_{2n}^{(p)} = 0$, show that the probability of seeing any particular one of the $\binom{2n}{n}$ random walk paths of length $2n$ which return to 0 at time $2n$ is equal to $\binom{2n}{n}^{-1}$.

Exercise 4.6. Let $0 \le j \le m$. Consider the random walk $\{S_n^{(p)}\}_{n=0}^\infty$ as in Exercise 4.5, with $p \in (0,1)$, but starting from $j$, and denote probabilities by $P_j$. Let $T_{0,m}^{(p)}$ denote the first nonnegative time that this random walk is at 0 or at $m$. Use the method of Exercise 4.2 (analyzing what happens on the first step) to calculate $P_j\big(S^{(p)}_{T_{0,m}^{(p)}} = 0\big)$, that is, the probability that starting from $j$, the random walk reaches 0 before it reaches $m$. (Hint: The calculation in the case $p = \frac{1}{2}$ needs to be treated separately.)

Chapter Notes

The arcsine law in Theorem 4.2 was first proven by P. Lévy in 1939 in the context of Brownian motion, which is a continuous-time and continuous-path version of the simple, symmetric random walk. The proof of Theorem 4.2 is due to K.L. Chung and W. Feller. One can find a proof in volume 1 of Feller's classic text in probability [19].


One can also find there a proof of Theorem 4.1. Our proofs of these theorems are a little different from Feller’s proofs. As expected, the proofs in Feller’s book have a probabilistic flavor. We have taken a more combinatorial/counting approach via generating functions. Proposition 4.3 and Corollary 4.2 can be derived alternatively via the “reflection principle”; see [19]. For a nice little book on random walks from the point of view of electrical networks, see Doyle and Snell [15]; for a treatise on random walks, see the book by Spitzer [32].

Chapter 5

The Distribution of Cycles in Random Permutations

In this chapter we study the limiting behavior of the total number of cycles and of the number of cycles of fixed length in random permutations of $[n]$ as $n \to \infty$. This class of problems springs from a classical question in probability called the envelope matching problem. You have $n$ letters and $n$ addressed envelopes. If you randomly place one letter in each envelope, what is the asymptotic probability as $n \to \infty$ that no letter is in its correct envelope?

Let $S_n$ denote the set of permutations of $[n]$. Of course, $S_n$ is a group, but the group structure will not be relevant for our purposes. For us, a permutation $\sigma \in S_n$ is simply a 1-1 map of $[n]$ onto $[n]$. The notation $\sigma_j$ will be used to denote the image of $j \in [n]$ under this map. We have $|S_n| = n!$. Let $P_n^U$ denote the uniform probability measure on $S_n$. That is, $P_n^U(A) = \frac{|A|}{n!}$, for any subset $A \subset S_n$. If $\sigma_j = j$, then $j$ is called a fixed point for the permutation $\sigma$. Let $D_n \subset S_n$ denote the set of permutations that do not fix any points; that is, $\sigma \in D_n$ if $\sigma_j \ne j$, for all $j \in [n]$. Such permutations are called derangements. The classical envelope matching problem then asks for $\lim_{n\to\infty} P_n^U(D_n)$.

The standard way to solve the envelope matching problem is by the method of inclusion–exclusion. Define $G_i = \{\sigma \in S_n : \sigma_i = i\}$. (We suppress the dependence of $G_i$ on $n$ since $n$ is fixed in this discussion.) Then the complement $D_n^c$ of $D_n$ is given by $D_n^c = \cup_{i=1}^n G_i$, and the inclusion–exclusion principle states that
$$P(\cup_{i=1}^n G_i) = \sum_{i=1}^n P(G_i) - \sum_{1 \le i < j \le n} P(G_i \cap G_j) + \cdots$$

For $\sigma \in S_n$, let $C_j^{(n)}(\sigma)$ denote the number of cycles of length $j$ in $\sigma$, and let $N^{(n)}(\sigma) = \sum_{j=1}^n C_j^{(n)}(\sigma)$ denote the total number of cycles. For $\theta \in (0,\infty)$, let $P_n^{(\theta)}$ denote the probability measure on $S_n$ with $P_n^{(\theta)}(\{\sigma\})$ proportional to $\theta^{N^{(n)}(\sigma)}$, with normalizing constant $K_n(\theta)$; the case $\theta = 1$ is the uniform measure $P_n^U$. Theorem 5.1 states that under $P_n^{(\theta)}$, $\frac{N^{(n)}}{\theta \log n}$ converges to 1 in probability as $n \to \infty$.

A random variable $Z$ is distributed according to the Poisson distribution with parameter $\lambda > 0$ ($Z \sim \text{Pois}(\lambda)$) if
$$P(Z = j) = e^{-\lambda}\, \frac{\lambda^j}{j!}, \quad \text{for } j = 0, 1, \ldots$$
The discrete random variables $\{X_i\}_{i=1}^j$ are called independent if $P(X_1 = x_1, \ldots, X_j = x_j) = \prod_{i=1}^j P(X_i = x_i)$, for all choices of $\{x_i\}_{i=1}^j \subset \mathbb{R}$. In the sequel, $Z_\lambda$ will denote a random variable distributed according to $\text{Pois}(\lambda)$, and it will always be assumed that $\{Z_{\lambda_i}\}_{i=1}^j$ are independent for distinct $\{\lambda_i\}_{i=1}^j$. We will prove a weak convergence result for small cycles.

Theorem 5.2. Let $\theta \in (0,\infty)$. Let $j$ be a positive integer. Under the measure $P_n^{(\theta)}$, the distribution of the random vector $(C_1^{(n)}, C_2^{(n)}, \ldots, C_j^{(n)})$ converges weakly to the distribution of $(Z_\theta, Z_{\frac{\theta}{2}}, \ldots, Z_{\frac{\theta}{j}})$. That is,
$$\lim_{n\to\infty} P_n^{(\theta)}\big(C_1^{(n)} = m_1, C_2^{(n)} = m_2, \ldots, C_j^{(n)} = m_j\big) = \prod_{i=1}^j e^{-\frac{\theta}{i}}\, \frac{(\frac{\theta}{i})^{m_i}}{m_i!}, \qquad m_i \ge 0,\ i = 1, \ldots, j. \tag{5.4}$$

Remark. Let $j$ be a positive integer and let $1 \le k_1 < k_2 < \cdots < k_j$. In Exercise 5.7 the reader is asked to show that by making a small change in the proof of Theorem 5.2, one has
$$\lim_{n\to\infty} P_n^{(\theta)}\big(C_{k_1}^{(n)} = m_1, C_{k_2}^{(n)} = m_2, \ldots, C_{k_j}^{(n)} = m_j\big) = \prod_{i=1}^j e^{-\frac{\theta}{k_i}}\, \frac{(\frac{\theta}{k_i})^{m_i}}{m_i!}, \qquad m_i \ge 0,\ i = 1, \ldots, j. \tag{5.5}$$
In particular, for any fixed $j$, the distribution of $C_j^{(n)}$ converges weakly to the $\text{Pois}(\frac{\theta}{j})$ distribution. Actually, (5.5) can be deduced directly from (5.4); see Exercises 5.2 and 5.3.

Our proofs of these two theorems will be very combinatorial, through the method of generating functions. The use of purely probabilistic reasoning will be rather minimal. For the proofs of the two theorems, we will need to evaluate the normalizing constant $K_n(\theta)$. Of course, this is trivial in the case of the uniform measure, that is, the case $\theta = 1$. Let $s(n,k)$ denote the number of permutations in $S_n$ that have exactly $k$ cycles. From the definition of $K_n(\theta)$, we have
$$K_n(\theta) = \sum_{k=1}^n s(n,k)\, \theta^k. \tag{5.6}$$


Proposition 5.2. $K_n(\theta) = \theta^{(n)}$, where $\theta^{(n)} := \theta(\theta+1)\cdots(\theta+n-1)$.

Remark. The numbers $s(n,k)$ are called unsigned Stirling numbers of the first kind. Proposition 5.2 and (5.6) show that they arise as the coefficients of the polynomials $q_n(\theta) := \theta^{(n)} = \theta(\theta+1)\cdots(\theta+n-1)$.

Proof. There are $(n-1)!$ permutations in $S_n$ that contain only one cycle and one permutation in $S_n$ that contains $n$ cycles:
$$s(n,1) = (n-1)!,\qquad s(n,n) = 1. \tag{5.7}$$
We prove the following recursion relation:
$$s(n+1,k) = n\, s(n,k) + s(n,k-1), \qquad n \ge 2,\ 2 \le k \le n. \tag{5.8}$$
Note that (5.7) and (5.8) uniquely determine $s(n,k)$ for all $n \ge 1$ and all $k \in [n]$. To create a permutation $\sigma' \in S_{n+1}$, we can start with a permutation $\sigma \in S_n$ and then take the number $n+1$ and either insert it into one of the existing cycles of $\sigma$ or let it stand alone as a cycle of its own. If we insert $n+1$ into one of the existing cycles, then $\sigma'$ will have $k$ cycles if and only if $\sigma$ has $k$ cycles. There are $n$ possible locations in which one can place the number $n+1$ and preserve the number of cycles. (The reader should verify this.) Thus, from each permutation in $S_n$ with $k$ cycles, we can construct $n$ permutations in $S_{n+1}$ with $k$ cycles. If, on the other hand, we let $n+1$ stand alone in its own cycle, then $\sigma'$ will have $k$ cycles if and only if $\sigma$ has $k-1$ cycles. Thus, from each permutation in $S_n$ with $k-1$ cycles, we can construct one permutation in $S_{n+1}$ with $k$ cycles. Now (5.8) is the mathematical expression of this verbal description.

Let $c_{n,k}$ denote the coefficient of $\theta^k$ in $q_n(\theta) = \theta(\theta+1)\cdots(\theta+n-1)$. Clearly $c_{n,1} = (n-1)!$ and $c_{n,n} = 1$, for $n \ge 1$. Writing $q_{n+1}(\theta) = q_n(\theta)(\theta + n)$, one sees that $c_{n+1,k} = n\, c_{n,k} + c_{n,k-1}$, for $n \ge 2$, $2 \le k \le n$. Thus, $c_{n,k}$ satisfies the same recursion relation (5.8) and the same boundary condition (5.7) as does $s(n,k)$. We conclude that $c_{n,k} = s(n,k)$. The proposition follows from this along with (5.6). □

In light of Proposition 5.2, from now on, we write the probability measure $P_n^{(\theta)}$ in the form
$$P_n^{(\theta)}(\{\sigma\}) = \frac{\theta^{N^{(n)}(\sigma)}}{\theta^{(n)}}.$$
We now set the stage to prove Theorem 5.1. The probability generating function $P_X(s)$ of a random variable $X$ taking nonnegative integral values is defined by
$$P_X(s) = E s^X = \sum_{i=0}^\infty s^i P(X = i), \qquad |s| \le 1.$$


The probability generating function uniquely determines the distribution; indeed, $\frac{1}{i!}\frac{d^i P_X(s)}{ds^i}\big|_{s=0} = P(X = i)$. Let $P_{N^{(n)}}(s;\theta)$ denote the probability generating function for the random variable $N^{(n)}$ under $P_n^{(\theta)}$:
$$P_{N^{(n)}}(s;\theta) = \sum_{i=1}^n s^i\, P_n^{(\theta)}(N^{(n)} = i).$$
Recalling that $s(n,i)$ denotes the number of permutations in $S_n$ with $i$ cycles, it follows that
$$P_n^{(\theta)}(N^{(n)} = i) = \frac{\theta^i s(n,i)}{\theta^{(n)}}.$$
Using this with (5.6) and Proposition 5.2 gives
$$P_{N^{(n)}}(s;\theta) = \sum_{i=1}^n \frac{s^i \theta^i s(n,i)}{\theta^{(n)}} = \frac{(s\theta)^{(n)}}{\theta^{(n)}} = \frac{s\theta(s\theta+1)\cdots(s\theta+n-1)}{\theta(\theta+1)\cdots(\theta+n-1)} = \prod_{i=1}^n \Big(\frac{\theta}{\theta+i-1}\, s + \frac{i-1}{\theta+i-1}\Big). \tag{5.9}$$
A random variable $X$ is distributed according to the Bernoulli distribution with parameter $p \in [0,1]$ if $P(X=1) = p$ and $P(X=0) = 1-p$. We write $X \sim \text{Ber}(p)$. The probability generating function for such a random variable is $ps + 1 - p$. Now let $\{X_{\frac{\theta}{\theta+i-1}}\}_{i=1}^n$ be independent random variables, where $X_{\frac{\theta}{\theta+i-1}} \sim \text{Ber}(\frac{\theta}{\theta+i-1})$. Let $Z_{n,\theta} = \sum_{i=1}^n X_{\frac{\theta}{\theta+i-1}}$. Then the probability generating function for $Z_{n,\theta}$ is given by
$$P_{Z_{n,\theta}}(s) = E s^{Z_{n,\theta}} = E s^{\sum_{i=1}^n X_{\frac{\theta}{\theta+i-1}}} = \prod_{i=1}^n E s^{X_{\frac{\theta}{\theta+i-1}}} = \prod_{i=1}^n \Big(\frac{\theta}{\theta+i-1}\, s + \frac{i-1}{\theta+i-1}\Big). \tag{5.10}$$
For the third equality above we have used the fact that the expected value of a product of independent random variables is equal to the product of their expected values. From (5.9), (5.10), and the uniqueness of the probability generating function, we obtain the following proposition.

Proposition 5.3. Under $P_n^{(\theta)}$, the distribution of $N^{(n)}$ is equal to the distribution of $\sum_{i=1}^n X_{\frac{\theta}{\theta+i-1}}$, where $\{X_{\frac{\theta}{\theta+i-1}}\}_{i=1}^n$ are independent random variables, and $X_{\frac{\theta}{\theta+i-1}} \sim \text{Ber}(\frac{\theta}{\theta+i-1})$.
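Proposition 5.3 can be verified exactly for small $n$ with rational arithmetic: the distribution $P_n^{(\theta)}(N^{(n)} = i) = \theta^i s(n,i)/\theta^{(n)}$ must coincide with the law of the sum of the independent Bernoulli variables. A sketch (the helper names are ours):

```python
from fractions import Fraction

def dist_from_stirling(n, theta):
    """P_n^{(theta)}(N^{(n)} = i) = theta^i s(n,i) / theta^{(n)},
    with s(n,i) read off from q_n(theta) = theta(theta+1)...(theta+n-1)."""
    coeffs = [Fraction(0), Fraction(1)]
    for m in range(1, n):
        new = [Fraction(0)] * (len(coeffs) + 1)
        for k, c in enumerate(coeffs):
            new[k] += m * c
            new[k + 1] += c
        coeffs = new
    norm = sum(c * theta ** k for k, c in enumerate(coeffs))   # theta^{(n)}
    return [c * theta ** k / norm for k, c in enumerate(coeffs)]

def dist_from_bernoullis(n, theta):
    """Law of the sum of independent Ber(theta/(theta+i-1)), i = 1..n."""
    dist = [Fraction(1)]
    for i in range(1, n + 1):
        p = theta / (theta + i - 1)
        new = [Fraction(0)] * (len(dist) + 1)
        for j, q in enumerate(dist):
            new[j] += q * (1 - p)
            new[j + 1] += q * p
        dist = new
    return dist

theta = Fraction(3, 2)
agree = dist_from_stirling(6, theta) == dist_from_bernoullis(6, theta)
```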


Remark. As an alternative way of arriving at the result in the proposition, there is a nice probabilistic construction of uniformly random permutations ($\theta = 1$) that immediately yields the result, and the construction can be amended to cover the case of general $\theta$. See Exercise 5.4.

We now use Proposition 5.3 and Chebyshev's inequality to prove the first theorem.

Proof of Theorem 5.1. Let $Z_{n,\theta} = \sum_{i=1}^n X_{\frac{\theta}{\theta+i-1}}$. By Proposition 5.3, it suffices to show that
$$\lim_{n\to\infty} P\Big(\Big|\frac{Z_{n,\theta}}{\theta\log n} - 1\Big| \ge \epsilon\Big) = 0, \qquad \text{for all } \epsilon > 0. \tag{5.11}$$
If $X_p \sim \text{Ber}(p)$, then the expected value of $X_p$ is $EX_p = p$, and the variance is $\text{Var}(X_p) = p(1-p)$. Since the expectation is linear, we have $EZ_{n,\theta} = \sum_{i=1}^n \frac{\theta}{\theta+i-1}$. By considering the above sum simultaneously as an upper Riemann sum and as a lower Riemann sum of appropriate integrals, we have
$$\theta\big(\log(n+\theta) - \log\theta\big) = \int_0^n \frac{\theta}{\theta+x}\, dx \le EZ_{n,\theta} \le 1 + \int_0^{n-1} \frac{\theta}{\theta+x}\, dx = 1 + \theta\big(\log(n-1+\theta) - \log\theta\big).$$
Since $\log(n+\theta) = \log n + \log(1 + \frac{\theta}{n})$ and $\log(n-1+\theta) = \log n + \log(1 + \frac{\theta-1}{n})$, the above inequality immediately yields
$$EZ_{n,\theta} = \theta\log n + O(1), \qquad \text{as } n \to \infty. \tag{5.12}$$
Since the variance of a sum of independent random variables is the sum of the variances of the random variables, we have $\text{Var}(Z_{n,\theta}) = \sum_{i=1}^n \frac{\theta(i-1)}{(\theta+i-1)^2}$. Similar to the integral estimate above for the expectation, we have
$$\text{Var}(Z_{n,\theta}) \le \sum_{i=2}^n \frac{\theta}{\theta+i-1} \le \theta\Big(1 + \int_1^{n-1} \frac{1}{x}\, dx\Big) = \theta + \theta\log(n-1). \tag{5.13}$$
Using (5.12) for the last inequality below, we have for sufficiently large $n$
$$P\Big(\Big|\frac{Z_{n,\theta}}{\theta\log n} - 1\Big| \ge \epsilon\Big) = P\big(|Z_{n,\theta} - \theta\log n| \ge \epsilon\theta\log n\big) = P\big(|Z_{n,\theta} - EZ_{n,\theta} + EZ_{n,\theta} - \theta\log n| \ge \epsilon\theta\log n\big) \le P\big(|Z_{n,\theta} - EZ_{n,\theta}| \ge \epsilon\theta\log n - |EZ_{n,\theta} - \theta\log n|\big) \le P\Big(|Z_{n,\theta} - EZ_{n,\theta}| \ge \frac{1}{2}\epsilon\theta\log n\Big). \tag{5.14}$$
Applying Chebyshev's inequality to the last term in (5.14), it follows from (5.13) and (5.14) that for sufficiently large $n$,
$$P\Big(\Big|\frac{Z_{n,\theta}}{\theta\log n} - 1\Big| \ge \epsilon\Big) \le \frac{\theta + \theta\log(n-1)}{\frac{1}{4}\epsilon^2\theta^2\log^2 n}. \tag{5.15}$$
Now (5.11) follows from (5.15). □
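The estimate (5.12) behind the proof is easy to probe numerically: by Proposition 5.3, $EN^{(n)} = \sum_{i=1}^n \frac{\theta}{\theta+i-1}$, and the difference $EN^{(n)} - \theta\log n$ should stay in a bounded band as $n$ grows. A quick sketch (the function name is ours):

```python
from math import log

def expected_cycles(n, theta):
    """E N^{(n)} = sum_{i=1}^n theta/(theta+i-1) (Proposition 5.3)."""
    return sum(theta / (theta + i - 1) for i in range(1, n + 1))

theta = 2.0
# (5.12): the O(1) term; these differences should remain bounded
diffs = [expected_cycles(10 ** k, theta) - theta * log(10 ** k)
         for k in (2, 3, 4)]
```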

We now develop a framework that will lead to the proof of Theorem 5.2. Given a positive integer $n$ and given a collection $\{a_i\}_{i=1}^n$ of nonnegative integers satisfying $\sum_{i=1}^n i a_i = n$, let $c_n(a_1,\ldots,a_n)$ denote the number of permutations $\sigma \in S_n$ with cycle type $(a_1,\ldots,a_n)$. From the definition of $P_n^{(\theta)}$, we have
$$P_n^{(\theta)}\big(C_1^{(n)} = a_1, C_2^{(n)} = a_2, \ldots, C_n^{(n)} = a_n\big) = \theta^{\sum_{i=1}^n a_i}\, \frac{c_n(a_1,\ldots,a_n)}{\theta^{(n)}}.$$
To prove Theorem 5.2, we need to analyze $P_n^{(\theta)}(C_1^{(n)} = m_1, \ldots, C_j^{(n)} = m_j)$, for large $n$ and fixed $j$. We have
$$P_n^{(\theta)}\big(C_1^{(n)} = m_1, \ldots, C_j^{(n)} = m_j\big) = \sum_{\substack{\sum_{i=1}^j i m_i + \sum_{i=j+1}^n i a_i = n \\ a_{j+1} \ge 0, \ldots, a_n \ge 0}} P_n^{(\theta)}\big(C_1^{(n)} = m_1, \ldots, C_j^{(n)} = m_j, C_{j+1}^{(n)} = a_{j+1}, \ldots, C_n^{(n)} = a_n\big) = \sum_{\substack{\sum_{i=1}^j i m_i + \sum_{i=j+1}^n i a_i = n \\ a_{j+1} \ge 0, \ldots, a_n \ge 0}} \theta^{\sum_{i=1}^j m_i + \sum_{i=j+1}^n a_i}\, \frac{c_n(m_1,\ldots,m_j,a_{j+1},\ldots,a_n)}{\theta^{(n)}}. \tag{5.16}$$
We calculate $c_n(a_1,\ldots,a_n)$ by direct combinatorial reasoning.

Lemma 5.1.
$$c_n(a_1,\ldots,a_n) = \frac{n!}{\prod_{i=1}^n i^{a_i}\, a_i!}.$$

Remark. From the lemma and (5.16), we obtain
$$P_n^{(\theta)}\big(C_1^{(n)} = m_1, C_2^{(n)} = m_2, \ldots, C_j^{(n)} = m_j\big) = \frac{n!}{\theta^{(n)}}\, \prod_{i=1}^j \frac{(\frac{\theta}{i})^{m_i}}{m_i!} \sum_{\substack{\sum_{i=1}^j i m_i + \sum_{i=j+1}^n i a_i = n \\ a_{j+1} \ge 0, \ldots, a_n \ge 0}}\ \prod_{i=j+1}^n \frac{(\frac{\theta}{i})^{a_i}}{a_i!}.$$
The sum on the right hand side above is a real mess; however, a sophisticated application of generating functions in conjunction with the lemma will allow us to evaluate the right hand side of (5.16) indirectly.

Proof of Lemma 5.1. First we separate out $a_1$ numbers for 1-cycles, $2a_2$ numbers for 2-cycles, $\ldots$, $(n-1)a_{n-1}$ numbers for $(n-1)$-cycles, and finally the last $n a_n$ numbers for $n$-cycles. The number of ways of doing this is
$$\binom{n}{a_1}\binom{n-a_1}{2a_2}\binom{n-a_1-2a_2}{3a_3}\cdots\binom{n-a_1-\cdots-(n-1)a_{n-1}}{na_n} = \frac{n!}{a_1!\,(2a_2)!\cdots(na_n)!}.$$
The $a_1$ numbers selected for 1-cycles need no further differentiation. The $2a_2$ numbers selected for 2-cycles must be separated out into $a_2$ pairs. Of course the order of the pairs is irrelevant, so the number of ways of doing this is
$$\frac{1}{a_2!}\binom{2a_2}{2}\binom{2a_2-2}{2}\cdots\binom{4}{2}\binom{2}{2} = \frac{(2a_2)!}{a_2!\,(2!)^{a_2}}.$$

The $3a_3$ numbers selected for 3-cycles must be separated out into $a_3$ triplets, and then each such triplet must be ordered in a cycle. The number of ways of separating the $3a_3$ numbers into triplets is
$$\frac{1}{a_3!}\binom{3a_3}{3}\binom{3a_3-3}{3}\cdots\binom{6}{3}\binom{3}{3} = \frac{(3a_3)!}{a_3!\,(3!)^{a_3}}.$$
Each such triplet can be ordered into a cycle in $(3-1)!$ ways. Thus, we conclude that the $3a_3$ numbers can be arranged into $a_3$ 3-cycles in $((3-1)!)^{a_3}\frac{(3a_3)!}{a_3!(3!)^{a_3}}$ ways. Continuing like this, we obtain
$$c_n(a_1,\ldots,a_n) = \frac{n!}{a_1!\,(2a_2)!\cdots(na_n)!}\cdot\frac{(2a_2)!}{a_2!\,(2!)^{a_2}}\cdot\frac{((3-1)!)^{a_3}(3a_3)!}{a_3!\,(3!)^{a_3}}\cdots\frac{((n-1)!)^{a_n}(na_n)!}{a_n!\,(n!)^{a_n}} = \frac{n!}{a_1!\,a_2!\cdots a_n!\; 2^{a_2} 3^{a_3}\cdots n^{a_n}}. \qquad\square$$
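Lemma 5.1 can be confirmed by enumerating $S_n$ for small $n$ and tallying cycle types; the sketch below (with our own helper names) does this for $n = 5$.

```python
from collections import Counter
from itertools import permutations
from math import factorial, prod

def cycle_type(perm):
    """Cycle type (a_1,...,a_n) of perm, where perm[i] is the image of i."""
    n = len(perm)
    seen = [False] * n
    counts = [0] * (n + 1)
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            counts[length] += 1
    return tuple(counts[1:])

def lemma_formula(a):
    """c_n(a_1,...,a_n) = n! / prod_i (i^{a_i} a_i!)  (Lemma 5.1)."""
    n = sum(i * ai for i, ai in enumerate(a, start=1))
    return factorial(n) // prod(i ** ai * factorial(ai)
                                for i, ai in enumerate(a, start=1))

n = 5
observed = Counter(cycle_type(p) for p in permutations(range(n)))
match = all(observed[a] == lemma_formula(a) for a in observed)
```

For example, the single $n$-cycle type $(0,0,0,0,1)$ gives $5!/5 = 24$ permutations, as expected.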

We now turn to generating functions. Consider an infinite dimensional vector $x = (x_1, x_2, \ldots)$, and for any positive integer $n$, define $x^{(n)} = (x_1,\ldots,x_n)$. For $a = (a_1,\ldots,a_n)$, let $x^a = (x^{(n)})^a := x_1^{a_1}\cdots x_n^{a_n}$. Let $T(\sigma)$ denote the cycle type of $\sigma \in S_n$. Define the cycle index of $S_n$, $n \ge 1$, by


$$\zeta_n(x) = \zeta_n(x^{(n)}) = \frac{1}{n!}\sum_{\sigma \in S_n} x^{T(\sigma)} = \frac{1}{n!}\sum_{\substack{\sum_{i=1}^n i a_i = n \\ a_1 \ge 0, \ldots, a_n \ge 0}} c_n(a)\, x^a.$$
We also define $\zeta_0(x) = 1$. We now consider (formally for the moment) the generating function for $\zeta_n(\theta x)$:
$$G^{(\theta)}(x,t) = \sum_{n=0}^\infty \zeta_n(\theta x)\, t^n, \qquad x = (x_1, x_2, \ldots).$$
Using Lemma 5.1, we can obtain a very nice representation for $G^{(\theta)}$, as well as a domain on which its defining series converges. Let $\|x\|_\infty := \sup_{n\ge 1} |x_n|$.

Proposition 5.4.
$$G^{(\theta)}(x,t) = \exp\Big(\theta\sum_{i=1}^\infty \frac{x_i t^i}{i}\Big), \qquad \text{for } |t| < 1,\ \|x\|_\infty < \infty.$$

Proof. Consider $t \in [0,1)$ and $x$ with $x_j \ge 0$ for all $j$, and $\|x\|_\infty < \infty$. Using Lemma 5.1 and the definition of $\zeta_n(x)$, we have
$$G^{(\theta)}(x,t) = \sum_{n=0}^\infty \sum_{\substack{\sum_{j=1}^n j a_j = n \\ a_1 \ge 0,\ldots,a_n \ge 0}} \frac{c_n(a)(\theta x)^a t^n}{n!} = \sum_{n=0}^\infty \sum_{\substack{\sum_{j=1}^n j a_j = n \\ a_1 \ge 0,\ldots,a_n \ge 0}} \frac{(\theta x_1)^{a_1}\cdots(\theta x_n)^{a_n}\, t^n}{\prod_{i=1}^n i^{a_i} a_i!} = \sum_{n=0}^\infty \sum_{\substack{\sum_{j=1}^n j a_j = n \\ a_1 \ge 0,\ldots,a_n \ge 0}}\ \prod_{i=1}^n \frac{\big(\frac{\theta x_i t^i}{i}\big)^{a_i}}{a_i!} = \sum_{a_1 \ge 0,\, a_2 \ge 0,\ldots}\ \prod_{i=1}^\infty \frac{\big(\frac{\theta x_i t^i}{i}\big)^{a_i}}{a_i!} = \prod_{i=1}^\infty e^{\frac{\theta x_i t^i}{i}} = \exp\Big(\theta\sum_{i=1}^\infty \frac{x_i t^i}{i}\Big). \tag{5.17}$$
The right hand side above converges for $t$ and $x$ in the range specified at the beginning of the proof. Since all of the summands in sight are nonnegative, it follows that the series defining $G^{(\theta)}$ is convergent in this range. For $t$ and $x$ in the range specified in the statement of the proposition, the above calculation shows that there is absolute convergence and hence convergence. □

We now exploit the formula for $G^{(\theta)}(x,t)$ in Proposition 5.4 in a clever way. Recall that


$$\log(1-t) = -\sum_{i=1}^\infty \frac{t^i}{i}. \tag{5.18}$$
For $x = (x_1, x_2, \ldots)$ and a positive integer $j$, let $x^{(j);1} = (x_1,\ldots,x_j,1,1,\ldots)$. In other words, $x^{(j);1}$ is the infinite dimensional vector which coincides with $x$ in its first $j$ places and has 1 in all of its other places. From Proposition 5.4 and (5.18) we have
$$G^{(\theta)}(x^{(j);1},t) = \exp\Big(\theta\sum_{i=1}^\infty \frac{t^i}{i}\Big)\exp\Big(\theta\sum_{i=1}^j \frac{(x_i-1)t^i}{i}\Big) = \frac{1}{(1-t)^\theta}\exp\Big(\theta\sum_{i=1}^j \frac{(x_i-1)t^i}{i}\Big). \tag{5.19}$$

We will need the following lemma.

Lemma 5.2. Let $\theta \in (0,\infty)$. Let $\sum_{i=0}^\infty b_i$ be a convergent series, and assume that
$$\frac{1}{(1-t)^\theta}\sum_{i=0}^\infty b_i t^i = \sum_{i=0}^\infty \beta_i t^i, \qquad |t| < 1.$$
If $\theta > 1$, also assume that $\sum_{i=0}^\infty |b_i| < \infty$. If $\theta \in (0,1)$, also assume that $\sum_{i=0}^\infty s^i |b_i| < \infty$, for some $s > 1$. Then
$$\lim_{n\to\infty} \frac{n!}{\theta^{(n)}}\, \beta_n = \sum_{i=0}^\infty b_i.$$

Proof. Since $\frac{d^n}{dt^n}\big(\frac{1}{(1-t)^\theta}\big)\big|_{t=0} = \theta(\theta+1)\cdots(\theta+n-1) = \theta^{(n)}$, the Taylor expansion for $\frac{1}{(1-t)^\theta}$ is given by
$$\frac{1}{(1-t)^\theta} = \sum_{n=0}^\infty \frac{\theta^{(n)}}{n!}\, t^n, \tag{5.20}$$
where for convenience we have defined $\theta^{(0)} = 1$. Thus, the Taylor expansion for $\frac{1}{(1-t)^\theta}\sum_{i=0}^\infty b_i t^i$ is given by
$$\frac{1}{(1-t)^\theta}\sum_{i=0}^\infty b_i t^i = \sum_{n=0}^\infty d_n t^n,$$
where $d_n = \sum_{i=0}^n b_i \frac{\theta^{(n-i)}}{(n-i)!}$. Therefore, by the assumption in the lemma, we have
$$\beta_n = \sum_{i=0}^n b_i\, \frac{\theta^{(n-i)}}{(n-i)!}.$$
If $\theta = 1$, then $k! = \theta^{(k)}$, for all $k$. Consequently the above equation reduces to $\beta_n = \sum_{i=0}^n b_i$, and thus the statement of the lemma holds. When $\theta \ne 1$, then using the additional assumptions on $\{b_i\}_{i=0}^\infty$, we can show that
$$\lim_{n\to\infty} \frac{n!}{\theta^{(n)}} \sum_{i=0}^n b_i\, \frac{\theta^{(n-i)}}{(n-i)!} = \sum_{i=0}^\infty b_i, \tag{5.21}$$
which finishes the proof of the lemma. The reader is guided through a proof of (5.21) in Exercise 5.5. □

We can now give the proof of Theorem 5.2.

Proof of Theorem 5.2. From (5.19) and the original definition of $G^{(\theta)}(x,t)$, we have
$$\frac{1}{(1-t)^\theta}\exp\Big(\theta\sum_{i=1}^j \frac{(x_i-1)t^i}{i}\Big) = \sum_{n=0}^\infty \zeta_n(\theta x^{(j);1})\, t^n. \tag{5.22}$$

Considering $x$ and $\theta$ as constants, we apply Lemma 5.2 to (5.22). In terms of the lemma, we have
$$\beta_n = \zeta_n(\theta x^{(j);1}) \quad \text{and} \quad \exp\Big(\theta\sum_{i=1}^j \frac{(x_i-1)t^i}{i}\Big) = \sum_{i=0}^\infty b_i t^i. \tag{5.23}$$
In order to be able to apply the lemma for all $\theta > 0$, we need to show that $\sum_{i=0}^\infty s^i |b_i| < \infty$, for some $s > 1$. Define $\{\bar{b}_i\}_{i=0}^\infty$ by
$$\exp\Big(\theta\sum_{i=1}^j \frac{|x_i-1|\,t^i}{i}\Big) = \sum_{i=0}^\infty \bar{b}_i t^i. \tag{5.24}$$
Since all of the coefficients in the sum in the exponent on the left hand side of (5.24) are nonnegative, we have $\bar{b}_i \ge |b_i| \ge 0$, for all $i$. The reader is asked to prove this in Exercise 5.6. The function on the left hand side of (5.24) is real analytic for all $t \in \mathbb{R}$ (and complex analytic for all complex $t$); consequently, its power series on the right hand side converges for all $t \in \mathbb{R}$. From this and the nonnegativity of $\bar{b}_i$, it follows that $\sum_{i=0}^\infty s^i \bar{b}_i < \infty$, for all $s \ge 0$, and then, since $|b_i| \le \bar{b}_i$, we conclude that $\sum_{i=0}^\infty s^i |b_i| < \infty$, for all $s \ge 0$. By definition, from (5.23), we have
$$\sum_{i=0}^\infty b_i = \exp\Big(\theta\sum_{i=1}^j \frac{x_i-1}{i}\Big). \tag{5.25}$$


Consider now
$$\frac{n!}{\theta^{(n)}}\, \beta_n = \frac{n!}{\theta^{(n)}}\, \zeta_n(\theta x^{(j);1}) = \frac{1}{\theta^{(n)}}\sum_{\substack{\sum_{i=1}^n i a_i = n \\ a_1 \ge 0,\ldots,a_n \ge 0}} c_n(a)\,(\theta x^{(j);1})^a. \tag{5.26}$$
For any given $j$-vector $(m_1,\ldots,m_j)$ with nonnegative integral entries, the coefficient of $x_1^{m_1} x_2^{m_2}\cdots x_j^{m_j}$ in (5.26) is
$$\frac{1}{\theta^{(n)}}\sum_{\substack{\sum_{i=1}^j i m_i + \sum_{i=j+1}^n i a_i = n \\ a_{j+1}\ge 0,\ldots,a_n \ge 0}} \theta^{\sum_{i=1}^j m_i + \sum_{i=j+1}^n a_i}\, c_n(m_1,\ldots,m_j,a_{j+1},\ldots,a_n).$$

But by (5.16), this is exactly $P_n^{(\theta)}\big(C_1^{(n)} = m_1, C_2^{(n)} = m_2, \ldots, C_j^{(n)} = m_j\big)$. By Lemma 5.2, $\lim_{n\to\infty} \frac{n!}{\theta^{(n)}}\beta_n$ exists, and this is true for every choice of $x$ and $\theta$; thus, we conclude that
$$\lim_{n\to\infty} \frac{n!}{\theta^{(n)}}\, \beta_n = \lim_{n\to\infty} \frac{n!}{\theta^{(n)}}\, \zeta_n(\theta x^{(j);1}) = \sum_{m_1 \ge 0,\ldots,m_j \ge 0} p_{m_1,\ldots,m_j}(\theta)\, x_1^{m_1}\cdots x_j^{m_j}, \tag{5.27}$$
where
$$p_{m_1,\ldots,m_j}(\theta) = \lim_{n\to\infty} P_n^{(\theta)}\big(C_1^{(n)} = m_1, C_2^{(n)} = m_2, \ldots, C_j^{(n)} = m_j\big). \tag{5.28}$$

j X xi  1 /D i iD1 m

X

m

pm1 ;:::;mj . / x1m1    xj j :

(5.29)

1 0;:::;mj 0

m

On the one hand, (5.29) shows that the coefficient of x1m1    xj j in the Taylor Pj expansion about x D 0 of the function exp. iD1 xi i1 / is pm1 ;:::;mj . /. On the other hand, by Taylor’s formula, this coefficient is equal to  Pj @m1 CCmj exp. iD1 1 mj 1 m1 Š    mj Š @m x1    @xj

xi 1 / i

 jxD0 D

j j j mi X Y 1 Y mi 1 . / exp. / . / D : e i i m1 Š    mj Š i iD1 i mi Š iD1 iD1

(5.30)


Thus, from (5.28)–(5.30), we conclude that
$$\lim_{n\to\infty} P_n^{(\theta)}\big(C_1^{(n)} = m_1, C_2^{(n)} = m_2, \ldots, C_j^{(n)} = m_j\big) = \prod_{i=1}^j e^{-\frac{\theta}{i}}\,\frac{(\frac{\theta}{i})^{m_i}}{m_i!},$$
completing the proof of Theorem 5.2. □
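For $\theta = 1$ and $j = 1$, $m_1 = 0$, Theorem 5.2 recovers the answer to the envelope matching problem: $P_n^U(D_n) \to e^{-1}$. This can be checked exactly, since the number of derangements is given by inclusion–exclusion. A sketch (the function name is ours):

```python
from math import comb, exp, factorial

def prob_no_fixed_points(n):
    """P_n^U(D_n): proportion of derangements, by inclusion-exclusion."""
    derangements = sum((-1) ** i * comb(n, i) * factorial(n - i)
                       for i in range(n + 1))
    return derangements / factorial(n)

# Theorem 5.2 with theta = 1, j = 1, m_1 = 0 predicts the limit e^{-1}
gap = abs(prob_no_fixed_points(12) - exp(-1))
```

Already at $n = 12$ the probability agrees with $e^{-1}$ to many decimal places, reflecting the alternating-series error bound $1/(n+1)!$.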

Exercise 5.1. Verify that Proposition 5.1 follows from the definition of $P_n^{(\theta)}$ along with Proposition 5.2 and Lemma 5.1.

Exercise 5.2. Show that (5.4) is equivalent to
$$\lim_{n\to\infty} P_n^{(\theta)}\big(C_1^{(n)} \le m_1, C_2^{(n)} \le m_2, \ldots, C_j^{(n)} \le m_j\big) = \sum_{0 \le r_1 \le m_1,\ldots,\,0 \le r_j \le m_j}\ \prod_{i=1}^j e^{-\frac{\theta}{i}}\,\frac{(\frac{\theta}{i})^{r_i}}{r_i!}, \qquad m_i \ge 0,\ i = 1,\ldots,j. \tag{5.31}$$

Exercise 5.3. In this exercise you will show directly that (5.5) follows from (5.4).
(a) Fix an integer $j \ge 2$. Use (5.31) to show that for any $\epsilon > 0$, there exists an $N$ such that if $n \ge N$ and $m \ge N$, then
$$P_n^{(\theta)}\big(C_i^{(n)} > m,\ \text{for some } i \in [j]\big) < \epsilon. \tag{5.32}$$

(b) From (5.31) and (5.32), deduce that (5.31) also holds if some of the $m_i$ are equal to $\infty$.
(c) Prove that (5.5) follows from (5.4).

Exercise 5.4. This exercise gives an alternative probabilistic proof of Proposition 5.3. A uniformly random ($\theta = 1$) permutation $\sigma \in S_n$ can be constructed in the following manner via its cycles. We begin with the number 1. Now we randomly choose a number from $[n]$. If we chose $j$, then we declare that $\sigma_1 = j$. This is the first stage of the construction. If $j \ne 1$, then we randomly choose a number from $[n] - \{j\}$. If we chose $k$, then we declare that $\sigma_j = k$. This is the second stage of the construction. If $k \ne 1$, then we randomly choose a number from $[n] - \{j,k\}$. We continue like this until we finally choose 1, which closes the cycle. For example, if after $k$ we chose 1, then the permutation would contain the cycle $(1\,j\,k)$. Once we close a cycle, we begin again, starting with the smallest number that has not yet been used. We continue like this for $n$ stages, at which point the permutation has been defined completely.
(a) The above construction has $n$ stages. Show that the probability of completing a cycle on the $j$th stage is $\frac{1}{n+1-j}$. Thus, letting
$$X_j^{(n)} = \begin{cases} 1, & \text{if a cycle was completed at stage } j; \\ 0, & \text{otherwise}; \end{cases}$$
it follows that $X_j^{(n)} \sim \text{Ber}(\frac{1}{n+1-j})$.
(b) Argue that $\{X_j^{(n)}\}_{j=1}^n$ are independent.
(c) Show that the number of cycles $N^{(n)}$ can be represented as $N^{(n)} = \sum_{j=1}^n X_j^{(n)}$, thereby proving Proposition 5.3 in the case $\theta = 1$.
(d) Let $\theta \in (0,\infty)$. Amend the above construction as follows. At any stage $j$, close the cycle with probability $\frac{\theta}{n+\theta-j}$, and choose any other particular number that has not yet been used with probability $\frac{1}{n+\theta-j}$. Show that this construction yields a permutation distributed according to $P_n^{(\theta)}$, and use the above reasoning to prove Proposition 5.3 for all $\theta > 0$.

Exercise 5.5. (a) Show that if $\sum_{i=0}^\infty |b_i| < \infty$ and the triangular array $\{c_{n,i} : i = 0,1,\ldots,n;\ n = 0,1,\ldots\}$ is bounded and satisfies $\lim_{n\to\infty} c_{n,n-i} = 1$, for all $i$, then $\lim_{n\to\infty} \sum_{i=0}^n b_i c_{n,n-i} = \sum_{i=0}^\infty b_i$. Then use this to prove (5.21) in the case that $\theta > 1$.
(b) Show that if $\theta \in (0,1)$, then $\frac{n!}{\theta^{(n)}}\,\frac{\theta^{(n-i)}}{(n-i)!} \le \big(\frac{n}{n-i}\big)^{1-\theta}$, if $i < n$. Also, $\frac{n!}{\theta^{(n)}}\,\frac{\theta^{(0)}}{0!} \le \frac{n^{1-\theta}}{\theta}$, where we recall $\theta^{(0)} = 1$.
(c) Show that if $\sum_{i=0}^\infty |b_i| s^i < \infty$, where $s > 1$, then $|b_i| \le s^{-i}$, for all large $i$.
(d) For $\theta \in (0,1)$, prove (5.21) as follows. Break the sum $\frac{n!}{\theta^{(n)}}\sum_{i=0}^n b_i \frac{\theta^{(n-i)}}{(n-i)!}$ into three parts: from $i = 0$ to $i = N$, from $i = N+1$ to $i = [\frac{n}{2}]$, and from $i = [\frac{n}{2}]+1$ to $i = n$. Use the reasoning in the proof of (a) to show that by choosing $N$ sufficiently large, the limit as $n \to \infty$ of the first part can be made arbitrarily close to $\sum_{i=0}^\infty b_i$. Use the fact that $\sum_{i=0}^\infty |b_i| < \infty$ to show that by choosing $N$ sufficiently large, the $\limsup_{n\to\infty}$ of the second part can be made arbitrarily small. Use (b) and (c) to show that the limit as $n \to \infty$ of the third part is 0.

Exercise 5.6. Prove that $\bar{b}_i \ge |b_i|$, where $\{b_i\}_{i=0}^\infty$ and $\{\bar{b}_i\}_{i=0}^\infty$ are defined in (5.23) and (5.24).

Exercise 5.7. Make a small change in the proof of Theorem 5.2 to show that (5.5) holds.

Exercise 5.8. Consider the uniform probability measure $P_n^{(1)}$ on $S_n$ and let $E_n^{(1)}$ denote the expectation under $P_n^{(1)}$. Let $X_n = X_n(\sigma)$ be the random variable denoting the number of nearest neighbor pairs in the permutation $\sigma \in S_n$, and let $Y_n = Y_n(\sigma)$ be the random variable denoting the number of nearest neighbor triples in $\sigma \in S_n$. (A nearest neighbor pair for $\sigma$ is a pair $k, k+1$, with $k \in [n-1]$, such that $\sigma_i = k$ and $\sigma_{i+1} = k+1$, for some $i \in [n-1]$, and a nearest neighbor triple is a triple $(k, k+1, k+2)$, with $k \in [n-2]$, such that $\sigma_i = k$, $\sigma_{i+1} = k+1$ and $\sigma_{i+2} = k+2$, for some $i \in [n-2]$.)


(a) Show that $E_n^{(1)} X_n = \frac{n-1}{n}$, for all $n$. (Hint: Represent $X_n$ as the sum of the indicator random variables $\{I_k\}_{k=1}^{n-1}$, where $I_k(\sigma)$ is equal to 1 if $k, k+1$ is a nearest neighbor pair in $\sigma$ and is equal to 0 otherwise.) It can be shown that the distribution of $X_n$ converges weakly to the Pois(1) distribution as $n \to \infty$; see [17].
(b) Show that $\lim_{n\to\infty} E_n^{(1)} Y_n = 0$ and conclude that $\lim_{n\to\infty} P_n^{(1)}(Y_n = 0) = 1$.
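A brute-force enumeration is a convenient way to experiment with $E_n^{(1)} X_n$ in Exercise 5.8 for small $n$ (the helper names below are ours, not the book's):

```python
from itertools import permutations

def nearest_neighbor_pairs(perm):
    """Number of k with sigma_i = k and sigma_{i+1} = k+1 for some i."""
    return sum(1 for i in range(len(perm) - 1) if perm[i + 1] == perm[i] + 1)

def expected_pairs(n):
    """E_n^{(1)} X_n by averaging over all of S_n."""
    perms = list(permutations(range(1, n + 1)))
    return sum(nearest_neighbor_pairs(p) for p in perms) / len(perms)

values = {n: expected_pairs(n) for n in (2, 3, 4, 5, 6)}
```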

Chapter Notes

In this chapter we investigated the limiting distribution as $n \to \infty$ of the random vector denoting the number of cycles of lengths 1 through $j$ in a random permutation from $S_n$. It is very interesting and more challenging to investigate the limiting distribution of the random vector denoting the $j$ longest cycles or, alternatively, the $j$ shortest cycles. For everything you want to know about cycles in random permutations, and lots of references, see the book by Arratia et al. [6]. Our approach in this chapter was almost completely combinatorial, through the use of generating functions. Such methods are used occasionally in [6], but the emphasis there is on more sophisticated probabilistic analysis. Our method is similar to the generating function approach of Wilf in [34], which deals only with the case $\theta = 1$. For an expository account of the intertwining of combinatorial objects with stochastic processes, see the lecture notes of Pitman [30].

Chapter 6

Chebyshev’s Theorem on the Asymptotic Density of the Primes

Let $\pi(n)$ denote the number of primes that are no larger than $n$; that is,
$$\pi(n) = \sum_{p \le n} 1,$$
where here and elsewhere in this chapter and the next two, the letter $p$ in a summation denotes a prime. Euclid proved that there are infinitely many primes: $\lim_{n\to\infty} \pi(n) = \infty$. The asymptotic density of the primes is 0; that is,
$$\lim_{n\to\infty} \frac{\pi(n)}{n} = 0.$$
The prime number theorem gives the leading order asymptotic behavior of $\pi(n)$. It states that
$$\lim_{n\to\infty} \frac{\pi(n)\log n}{n} = 1.$$
This landmark result was proved in 1896 independently by J. Hadamard and by C.J. de la Vallée Poussin. Their proofs used contour integration and Cauchy's theorem from analytic function theory. A so-called "elementary" proof, that is, a proof that does not use analytic function theory, was given by P. Erdős and A. Selberg in 1949. Although their proof uses only elementary methods, it is certainly more involved than the proofs of Hadamard and de la Vallée Poussin. We will not prove the prime number theorem in this book.

In this chapter we prove a precursor of the prime number theorem, due to Chebyshev in 1850. Chebyshev was the first to prove that $\pi(n)$ grows on the order $\frac{n}{\log n}$. Chebyshev's methods were ingenious but entirely elementary. Given the truly elementary nature of his approach, it is quite impressive how close his result is to the prime number theorem. Here is Chebyshev's result.

R.G. Pinsky, Problems from the Discrete to the Continuous, Universitext, DOI 10.1007/978-3-319-07965-3__6, © Springer International Publishing Switzerland 2014

67


Theorem 6.1 (Chebyshev).
$$0.693 \approx \log 2 \le \liminf_{n\to\infty} \frac{\pi(n)\log n}{n} \le \limsup_{n\to\infty} \frac{\pi(n)\log n}{n} \le \log 4 \approx 1.386.$$
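For concreteness, one can compute $\frac{\pi(n)\log n}{n}$ for moderate $n$ with a sieve; the values sit comfortably between Chebyshev's bounds. A sketch (not from the book):

```python
from math import log

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

ratios = {n: len(primes_up_to(n)) * log(n) / n
          for n in (100, 1000, 10 ** 4, 10 ** 5)}
```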

Chebyshev’s result is not the type of result we are emphasizing in this book, since it is not an exact asymptotic result but rather only an estimate. We have included the result because we will need it to prove Mertens’ theorems in Chap. 7, and one of Mertens’ theorems will be used to prove the Hardy–Ramanujan theorem in Chap. 8. Define Chebyshev’s -function by .n/ D

X

log p:

(6.1)

pn

Chebyshev realized that an understanding of the asymptotic behavior of .n/ allows one to infer the asymptotic behavior of .n/ (and vice versa), and that the direct asymptotic analysis of the function is much more tractable than that of the function , because the sum of logarithms is the logarithm of the product. Indeed, note that .n/ D log

Y

p:

(6.2)

pn

We will give an exceedingly simple proof of the following result, which links the asymptotic behavior of $\vartheta$ to that of $\pi$.

Proposition 6.1.
(i) $\liminf_{n\to\infty} \frac{\vartheta(n)}{n} = \liminf_{n\to\infty} \frac{\pi(n)\log n}{n}$;
(ii) $\limsup_{n\to\infty} \frac{\vartheta(n)}{n} = \limsup_{n\to\infty} \frac{\pi(n)\log n}{n}$.

Proof. We have the trivial inequality
$$\vartheta(n) = \sum_{p \le n} \log p \le \pi(n)\log n.$$
Dividing this by $n$ and letting $n \to \infty$, we obtain
$$\liminf_{n\to\infty} \frac{\vartheta(n)}{n} \le \liminf_{n\to\infty} \frac{\pi(n)\log n}{n}; \qquad \limsup_{n\to\infty} \frac{\vartheta(n)}{n} \le \limsup_{n\to\infty} \frac{\pi(n)\log n}{n}. \tag{6.3}$$
We have for $\epsilon \in (0,1)$,
$$\vartheta(n) \ge \sum_{n^{1-\epsilon} < p \le n} \log p \ge (1-\epsilon)(\log n)\big(\pi(n) - \pi(n^{1-\epsilon})\big) \ge (1-\epsilon)(\log n)\big(\pi(n) - n^{1-\epsilon}\big).$$
Dividing by $n$, letting $n \to \infty$, and then letting $\epsilon \to 0$ yields the inequalities complementary to (6.3), and the proposition follows. □

Define Chebyshev's $\psi$-function by
$$\psi(n) = \sum_{p^k \le n,\ k \ge 1} \log p. \tag{6.8}$$

Proposition 6.2.
(i) $\liminf_{n\to\infty} \frac{\psi(n)}{n} = \liminf_{n\to\infty} \frac{\vartheta(n)}{n}$;
(ii) $\limsup_{n\to\infty} \frac{\psi(n)}{n} = \limsup_{n\to\infty} \frac{\vartheta(n)}{n}$.

Proof. We have $p^k > n$ for every prime $p$ and every $k > \big[\frac{\log n}{\log 2}\big]$; thus
$$\psi(n) - \vartheta(n) = \sum_{p^k \le n,\ k \ge 2} \log p = \sum_{k=2}^{[\frac{\log n}{\log 2}]}\ \sum_{p \le n^{\frac{1}{k}}} \log p = \sum_{k=2}^{[\frac{\log n}{\log 2}]} \vartheta\big([n^{\frac{1}{k}}]\big) \le \frac{\log n}{\log 2}\, \vartheta\big([n^{\frac{1}{2}}]\big). \tag{6.10}$$
Now trivially, $\vartheta(k) = \sum_{p \le k} \log p \le k\log k$. Using this with (6.10) gives
$$\psi(n) - \vartheta(n) \le \frac{(\log n)^2\, n^{\frac{1}{2}}}{2\log 2}. \tag{6.11}$$
From (6.11) it follows that
$$\liminf_{n\to\infty} \frac{\vartheta(n)}{n} \ge \liminf_{n\to\infty} \frac{\psi(n)}{n}; \qquad \limsup_{n\to\infty} \frac{\vartheta(n)}{n} \ge \limsup_{n\to\infty} \frac{\psi(n)}{n}. \tag{6.12}$$
The proposition follows from (6.9) and (6.12). □

6 Chebyshev’s Theorem

71

Remark. The bound obtained in (6.11) can be improved by replacing the trivial bound on θ, namely θ(k) ≤ k log k, by the bound θ(n) ≤ (log 4)n obtained from Theorem 6.2.

Theorem 6.3.

  liminf_{n→∞} ψ(n)/n ≥ log 2.

We will carry out a lower-bound analysis of ψ. This will be somewhat more involved than the upper-bound analysis but still entirely elementary. For n ∈ ℕ and p a prime, let v_p(n) denote the largest exponent k such that p^k | n. One calls v_p(n) the p-adic value of n. It follows from the definition of v_p that any positive integer n can be written as

  n = Π_p p^{v_p(n)}.   (6.13)

In Exercise 6.1 the reader is asked to prove the following simple formula:

  v_p(mn) = v_p(m) + v_p(n), m, n ∈ ℕ.   (6.14)

From (6.14) it follows that

  v_p(n!) = Σ_{m=1}^n v_p(m).   (6.15)

We will need the following result.

Proposition 6.3.

  v_p(n!) = Σ_{k=1}^∞ ⌊n/p^k⌋.

Proof. We can write

  Σ_{m=1}^n v_p(m) = Σ_{m=1}^n Σ_{k≥1: p^k|m} 1 = Σ_{k=1}^∞ |{1 ≤ m ≤ n : p^k | m}| = Σ_{k=1}^∞ ⌊n/p^k⌋,

and the result follows from (6.15). □

Since (2n)! = (2n choose n)(n!)², it follows from (6.14) that

  (2n choose n) = Π_{p≤2n} p^{v_p((2n)!) − 2v_p(n!)}.   (6.17)

By Proposition 6.3, each exponent in (6.17) is a sum of terms ⌊2n/p^k⌋ − 2⌊n/p^k⌋, and each such term equals 0 or 1. Moreover, the term vanishes if p^k > 2n, that is, if k > log 2n / log p; thus

  0 ≤ v_p((2n)!) − 2v_p(n!) ≤ ⌊log 2n / log p⌋.   (6.19)

From (6.17) and (6.19) we have the estimate

  (2n choose n) ≤ Π_{p≤2n} p^{⌊log 2n / log p⌋}.   (6.20)

On the other hand we have the easy estimate

  (2n choose n) ≥ 2^{2n} / (2n).   (6.21)

To prove (6.21), note that the middle binomial coefficient (2n choose n) maximizes (2n choose k) over k ∈ [2n]. The reader is asked to prove this in Exercise 6.2. Thus, we have

  2^{2n} = (1 + 1)^{2n} = Σ_{k=0}^{2n} (2n choose k) = 2 + Σ_{k=1}^{2n−1} (2n choose k) ≤ 2 + (2n − 1)(2n choose n) ≤ 2n (2n choose n).
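Proposition 6.3, Legendre's formula, can be checked mechanically. A minimal illustrative sketch (the function names are ours) compares the formula with a direct factorization of n!:

```python
from math import factorial

def vp_factorial(n, p):
    """Legendre's formula: v_p(n!) = sum_{k>=1} floor(n / p^k)."""
    total, pk = 0, p
    while pk <= n:
        total += n // pk
        pk *= p
    return total

def vp(m, p):
    """Largest exponent e with p^e dividing m, by repeated division."""
    e = 0
    while m % p == 0:
        m //= p
        e += 1
    return e

print(vp_factorial(50, 3))  # 50//3 + 50//9 + 50//27 = 16 + 5 + 1 = 22
```

A direct check such as `vp_factorial(50, 3) == vp(factorial(50), 3)` confirms the two computations agree.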

6 Chebyshev’s Theorem

73

From (6.20) and (6.21), we conclude that

  2^{2n} / (2n) ≤ Π_{p≤2n} p^{⌊log 2n / log p⌋},

or, equivalently,

  2n log 2 − log 2n ≤ Σ_{p≤2n} ⌊log 2n / log p⌋ log p.   (6.22)

Recalling from (6.8) that ψ(2n) = Σ_{p^k≤2n, k≥1} log p, it follows that the summand log p appears in ψ(2n) one time for each k ≥ 1 that satisfies p^k ≤ 2n; that is, the summand log p appears ⌊log 2n / log p⌋ times. Thus, the right-hand side of (6.22) is equal to ψ(2n), giving the inequality

  ψ(2n) ≥ 2n log 2 − log 2n.   (6.23)

Of course then we also have

  ψ(2n + 1) ≥ 2n log 2 − log 2n.   (6.24)

Dividing (6.23) by 2n and dividing (6.24) by 2n + 1, and letting n → ∞, we conclude that

  liminf_{n→∞} ψ(n)/n ≥ log 2,

which completes the proof of the theorem. □

We can now prove Chebyshev's theorem in one line.

Proof of Theorem 6.1. The upper bound follows from Theorem 6.2 and part (ii) of Proposition 6.1, while the lower bound follows from Theorem 6.3, part (i) of Proposition 6.2, and part (i) of Proposition 6.1. □

Exercise 6.1. Prove (6.14): v_p(mn) = v_p(m) + v_p(n), m, n ∈ ℕ.

Exercise 6.2. Prove that (2n choose n) = max_{k∈[2n]} (2n choose k).

Exercise 6.3. Bertrand's postulate states that for each positive integer n, there exists a prime in the interval (n, 2n). This result was first proven by Chebyshev. Use the upper and lower bounds obtained in this chapter for Chebyshev's ψ-function to prove the following weak form of Bertrand's postulate: for every ε > 0, there exists an n₀(ε) such that for every n ≥ n₀(ε) there exists a prime in the interval (n, (2 + ε)n).


Chapter Notes

Chebyshev also proved that if lim_{n→∞} π(n) log n / n exists, then this limit must be equal to 1. For a proof, see Tenenbaum's book [33]. Late in his life, in a letter, Gauss recollected that in the early 1790s, when he was 15 or 16, he conjectured the prime number theorem; however, he never published the conjecture. The theorem was also conjectured by Dirichlet in 1838. For some references for further reading, see the notes at the end of Chap. 8.

Chapter 7

Mertens’ Theorems on the Asymptotic Behavior of the Primes

Given a sequence of positive numbers {a_n}_{n=1}^∞ satisfying lim_{n→∞} a_n = ∞, one way to measure the rate at which the sequence approaches ∞ is to consider the rate at which the series Σ_{j=1}^n 1/a_j grows. For a_j = j, it is well known that the harmonic series Σ_{j=1}^n 1/j satisfies Σ_{j=1}^n 1/j = log n + O(1) as n → ∞. How does the harmonic series of the primes behave? The goal of this chapter is to prove a theorem known as Mertens' second theorem.

Theorem 7.1.

  Σ_{p≤n} 1/p = log log n + O(1), as n → ∞.

Mertens' second theorem will play a key role in the proof of the Hardy–Ramanujan theorem in Chap. 8. For our proof of Mertens' second theorem, we will need a result known as Mertens' first theorem.

Theorem 7.2.

  Σ_{p≤n} (log p)/p = log n + O(1), as n → ∞.

We now prove Mertens' two theorems.

Proof of Mertens' first theorem. We will analyze the asymptotic behavior of log n! in two different ways. Comparing the two results will prove the theorem. First we show that

  log n! = n log n + O(n), as n → ∞.   (7.1)
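Both of Mertens' theorems assert that an explicit difference stays bounded, which is easy to watch numerically. In this illustrative sketch (helper names are ours), the second difference in fact settles near the Meissel–Mertens constant, approximately 0.2615, a standard fact not proved in the text:

```python
from math import log

def primes_upto(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, b in enumerate(sieve) if b]

for n in (10**3, 10**4, 10**5):
    ps = primes_upto(n)
    first = sum(log(p) / p for p in ps) - log(n)     # O(1) by Theorem 7.2
    second = sum(1 / p for p in ps) - log(log(n))    # O(1) by Theorem 7.1
    print(n, first, second)
```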


We note that (7.1) follows from Stirling's formula: n! ∼ n^n e^{−n} √(2πn). However, we certainly don't need such a precise estimate of n! to obtain (7.1). We give a quick direct proof of (7.1). Consider an integer m ≥ 2 and x ∈ [m−1, m]. Integrating the inequality log(m−1) ≤ log x ≤ log m over x ∈ [m−1, m] gives

  log(m−1) ≤ ∫_{m−1}^m log x dx ≤ log m,

which we rewrite as

  0 ≤ log m − ∫_{m−1}^m log x dx ≤ log m − log(m−1).

Summing this inequality from m = 2 to m = n, and noting that the resulting series on the right-hand side is telescopic, we obtain

  0 ≤ log n! − ∫_1^n log x dx ≤ log n.   (7.2)

An integration by parts shows that ∫_1^n log x dx = n log n − n + 1. Substituting this in (7.2) gives

  n log n − n + 1 ≤ log n! ≤ n log n − n + 1 + log n,

which completes the proof of (7.1).

To analyze log n! in another way, we utilize the function v_p(n) introduced in Chap. 6. Recall that v_p(n), the p-adic value of n, is equal to the largest exponent k such that p^k | n, and that by the definition of v_p, we have n = Π_p p^{v_p(n)} = Π_{p≤m} p^{v_p(n)}, for any integer m that is greater than or equal to the largest prime divisor of n. Recall that Proposition 6.3 states that

  v_p(n!) = Σ_{k=1}^∞ ⌊n/p^k⌋.

Thus, we have

  n! = Π_{p≤n} p^{v_p(n!)} = Π_{p≤n} p^{Σ_{k=1}^∞ ⌊n/p^k⌋},

and

  log n! = Σ_{p≤n} Σ_{k=1}^∞ ⌊n/p^k⌋ log p = Σ_{p≤n} ⌊n/p⌋ log p + Σ_{p≤n} Σ_{k=2}^∞ ⌊n/p^k⌋ log p.   (7.3)

7 Mertens’ Theorems

77

We now analyze the two terms on the right-hand side of (7.3), beginning with the second term. We have

  Σ_{k=2}^∞ ⌊n/p^k⌋ ≤ n Σ_{k=2}^∞ 1/p^k = n (1/p²)/(1 − 1/p) = n/(p(p−1)).

Thus, we obtain

  Σ_{p≤n} Σ_{k=2}^∞ ⌊n/p^k⌋ log p ≤ n Σ_{p≤n} (log p)/(p(p−1)) ≤ Cn,   (7.4)

for some constant C > 0, the latter inequality following from the fact that Σ_{p≤n} (log p)/(p(p−1)) < Σ_{m=2}^∞ (log m)/(m(m−1)) < ∞. We write the first term on the right-hand side of (7.3) as

  Σ_{p≤n} ⌊n/p⌋ log p = n Σ_{p≤n} (log p)/p − Σ_{p≤n} (n/p − ⌊n/p⌋) log p.   (7.5)

Recalling that Theorem 6.2 gives θ(n) ≤ (log 4)n, we can estimate the second term on the right-hand side of (7.5) by

  0 ≤ Σ_{p≤n} (n/p − ⌊n/p⌋) log p ≤ Σ_{p≤n} log p = θ(n) ≤ (log 4)n.   (7.6)

From (7.3)–(7.6), we conclude that

  log n! = n Σ_{p≤n} (log p)/p + O(n), as n → ∞.   (7.7)

Comparing (7.1) with (7.7) allows us to conclude that Σ_{p≤n} (log p)/p = log n + O(1), completing the proof of Mertens' first theorem. □

In order to use Mertens’ first theorem to prove his second theorem, we need to introduce Abel summation, a tool that is used extensively in number theory. Abel summation is a discrete version of integration by parts. It appears in a variety of guises, the following of which is the most suitable in the present context. Proposition 7.1 (Abel Summation). Let j0 ; n 2 Z with j0 < n. Let a W Œj0 ; n \ PŒt Z ! R, and let A W Œj0 ; n ! R be defined by A.t / D kDj0 a.k/. Let f W Œj0 ; n ! R be continuously differentiable. Then X j0 1: log t

We use Abel summation in the form (7.9) with j₀ = 2. By Mertens' first theorem, we have

  A(t) = Σ_{k=2}^{⌊t⌋} a(k) = Σ_{p≤⌊t⌋} (log p)/p = log t + O(1), as t → ∞.   (7.13)

Thus, we obtain from (7.9) and (7.13),

  Σ_{p≤n} 1/p = Σ_{p≤n} ((log p)/p)(1/log p) = Σ_{2≤r≤n} a(r) f(r) = A(n) f(n) − ∫_2^n A(t) f′(t) dt
    = (log n + O(1))/log n + ∫_2^n (log t + O(1))/(t (log t)²) dt.   (7.14)

We have

  ∫_2^n 1/(t log t) dt = log log t |_2^n = log log n − log log 2,

and since ∫ 1/(t (log t)²) dt = −1/log t, we have

  ∫_2^∞ 1/(t (log t)²) dt < ∞.

Using these two facts in (7.14) gives

  Σ_{p≤n} 1/p = log log n + O(1), as n → ∞,   (7.15)

completing the proof of Mertens' second theorem. □
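Proposition 7.1 can be tested numerically. In the sketch below (an illustration; the function names are ours) we take a(k) = 1 and f(t) = log t, so A(t) = ⌊t⌋ and the identity reads Σ_{k=2}^n log k = n log n − ∫_1^n ⌊t⌋/t dt; the integral is approximated by a midpoint Riemann sum.

```python
import math

def abel_check(a, f, fprime, j0, n, steps=200000):
    """Numerically verify Abel summation:
       sum_{j0 < k <= n} a(k) f(k) = A(n) f(n) - int_{j0}^{n} A(t) f'(t) dt,
       with A(t) = sum_{j0 <= k <= floor(t)} a(k)."""
    A, run = {}, 0.0
    for k in range(j0, n + 1):
        run += a(k)
        A[k] = run
    lhs = sum(a(k) * f(k) for k in range(j0 + 1, n + 1))
    h = (n - j0) / steps
    # A(t) is piecewise constant, equal to A[floor(t)]
    integral = sum(A[min(n, int(j0 + (i + 0.5) * h))] * fprime(j0 + (i + 0.5) * h) * h
                   for i in range(steps))
    return lhs, A[n] * f(n) - integral

# a(k) = 1, f(t) = log t:  sum_{k=2}^{n} log k = n log n - int_1^n floor(t)/t dt
lhs, rhs = abel_check(lambda k: 1.0, math.log, lambda t: 1.0 / t, 1, 500)
print(lhs, rhs)  # agree up to the Riemann-sum discretization error
```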


Exercise 7.1. (a) Use Mertens’ first theorem and Abel summation to prove that X log2 p p

pn

D

1 log2 n C O.log n/: 2

P P 2 (Hint: Write pn logp p D 1rn a.r/ log r, where a.r/ is as in the proof of Mertens’ second theorem.) (b) Use induction and the result in (a) to prove that X logk p pn

p

D

1 logk n C O.logk1 n/; k

for all positive integers k. Exercise 7.2. Proposition 6.1Pin Chap. 6 showed that the two statements, P log p  n and .n/ D pn 1  logn n , can easily be derived one from the pn other. The prime number theorem cannot be derived P from Mertens’ second theorem. Derive Mertens’ second theorem in the form pn p1  log log n from the prime number theorem, .n/  logn n . (Hint: Use Abel summation.)

Chapter Notes

The two theorems in this chapter were proven by F. Mertens in 1874. For some references for further reading, see the notes at the end of Chap. 8.

Chapter 8

The Hardy–Ramanujan Theorem on the Number of Distinct Prime Divisors

Let ω(n) denote the number of distinct prime divisors of n; that is,

  ω(n) = Σ_{p|n} 1.

Thus, for example, ω(1) = 0, ω(2) = 1, ω(9) = 1, ω(60) = 3. The values of ω(n) obviously fluctuate wildly as n → ∞, since ω(p) = 1, for every prime p. However, there are not very many prime numbers, in the sense that the asymptotic density of the primes is 0. In this chapter we prove the Hardy–Ramanujan theorem, which in colloquial language states that "almost every" integer n has "approximately" log log n distinct prime divisors. The meaning of "almost every" is that the asymptotic density of those integers n for which the number of distinct prime divisors is not "approximately" log log n is zero. The meaning of "approximately" is that the actual number of distinct prime divisors of n falls in the interval [log log n − (log log n)^{1/2+δ}, log log n + (log log n)^{1/2+δ}], where δ > 0 is arbitrarily small.

Theorem 8.1 (Hardy–Ramanujan). For every δ > 0,

  lim_{N→∞} |{n ∈ [N] : |ω(n) − log log n| ≤ (log log n)^{1/2+δ}}| / N = 1.   (8.1)

Remark. From the proof of the theorem, it is very easy to infer that the statement of the theorem is equivalent to the following statement: For every δ > 0,

  lim_{N→∞} |{n ∈ [N] : |ω(n) − log log N| ≤ (log log N)^{1/2+δ}}| / N = 1.


While the statement of the theorem is probably more aesthetically pleasing than this latter statement, the latter statement is more practical. Thus, for example, take δ = 0.1. Then for sufficiently large n, a very high percentage of the positive integers up to the astronomical number N = e^{e^n} will have between n − n^{0.6} and n + n^{0.6} distinct prime factors. Let n = 10⁹. We leave it to the interested reader to estimate the O(1) terms appearing in the proofs of Mertens' theorems, to keep track of how they appear in the proof of the Hardy–Ramanujan theorem below, and to conclude that over ninety percent of the positive integers up to N = e^{e^{10⁹}} have between 10⁹ − (10⁹)^{0.6} and 10⁹ + (10⁹)^{0.6} distinct prime factors. That is, over ninety percent of the positive integers up to e^{e^{10⁹}} have between 10⁹ − 251,188 and 10⁹ + 251,188 distinct prime factors.

Our proof of the Hardy–Ramanujan theorem will have a probabilistic flavor. For any positive integer N, let P_N denote the uniform probability measure on [N]; that is, P_N({j}) = 1/N, for j ∈ [N]. Then we may think of the distinct prime divisor function ω = ω(n) as a random variable on the space [N] with the probability measure P_N. For the sequel, note that when we write P_N(ω ∈ A), where A ⊂ [N], what we mean is

  P_N(ω ∈ A) = P_N({n ∈ [N] : ω(n) ∈ A}) = |{n ∈ [N] : ω(n) ∈ A}| / N.

Let E_N denote the expected value with respect to the measure P_N. The expected value of ω is given by

  E_N ω = (1/N) Σ_{n=1}^N ω(n).   (8.2)

The second moment of ω is given by

  E_N ω² = (1/N) Σ_{n=1}^N ω²(n).   (8.3)

The variance Var_N(ω) of ω is defined by

  Var_N(ω) = E_N(ω − E_N ω)² = E_N ω² − (E_N ω)².   (8.4)

We will prove the Hardy–Ramanujan theorem by applying Chebyshev's inequality to the random variable ω:

  P_N(|ω − E_N ω| ≥ λ) ≤ Var_N(ω)/λ², for λ > 0.   (8.5)


In order to implement this, we need to calculate E_N ω and Var_N(ω) or, equivalently, E_N ω and E_N ω². The next two theorems give the asymptotic behavior as N → ∞ of E_N ω and of E_N ω². The proofs of these two theorems will use Mertens' second theorem.

Theorem 8.2.

  E_N ω = log log N + O(1), as N → ∞.

Remark. Recall the definition of the average order of an arithmetic function, given in the remark following the number-theoretic proof of Theorem 2.1. Theorem 8.2 shows that the average order of ω, the function counting the number of distinct prime divisors, is given by the function log log n.

Proof. From the definition of the distinct prime divisor function we have

  Σ_{n=1}^N ω(n) = Σ_{n=1}^N Σ_{p|n} 1 = Σ_{p≤N} Σ_{n≤N: p|n} 1 = Σ_{p≤N} ⌊N/p⌋ = N Σ_{p≤N} 1/p − Σ_{p≤N} (N/p − ⌊N/p⌋).   (8.6)

The second term above satisfies the inequality

  0 ≤ Σ_{p≤N} (N/p − ⌊N/p⌋) ≤ Σ_{p≤N} 1 = π(N) ≤ N.   (8.7)

(We could use Chebyshev's theorem (Theorem 6.1) to get the better bound O(N/log N) on the right-hand side above, but that wouldn't improve the order of the final bound we obtain for E_N ω.) Mertens' second theorem (Theorem 7.1) gives

  Σ_{p≤N} 1/p = log log N + O(1), as N → ∞.   (8.8)

From (8.6)–(8.8), we obtain

  Σ_{n=1}^N ω(n) = N log log N + O(N), as N → ∞,

and dividing this by N gives

  E_N ω = log log N + O(1), as N → ∞,   (8.9)

completing the proof of the theorem. □
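Theorem 8.2 can be observed at small scale. The illustrative sieve below (helper names are ours) computes ω(n) for all n ≤ N with one pass per prime, then compares the empirical mean with log log N:

```python
from math import log

def omega_upto(N):
    """omega(n) for 0 <= n <= N: every prime p increments all of its multiples."""
    w = [0] * (N + 1)
    for p in range(2, N + 1):
        if w[p] == 0:                  # no smaller prime divides p, so p is prime
            for m in range(p, N + 1, p):
                w[m] += 1
    return w

N = 10**5
w = omega_upto(N)
print(w[60], w[9])                     # 3 and 1, as in the examples above
print(sum(w[1:]) / N, log(log(N)))    # E_N omega vs. log log N
```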


Theorem 8.3.

  E_N ω² = (log log N)² + O(log log N), as N → ∞.

Remark. To prove the Hardy–Ramanujan theorem, we only need the upper bound

  E_N ω² ≤ (log log N)² + O(log log N), as N → ∞.   (8.10)

Proof. We have

  ω²(n) = (Σ_{p|n} 1)² = (Σ_{p₁|n} 1)(Σ_{p₂|n} 1) = Σ_{p₁p₂|n, p₁≠p₂} 1 + Σ_{p|n} 1 = Σ_{p₁p₂|n, p₁≠p₂} 1 + ω(n).   (8.11)

Thus,

  Σ_{n=1}^N ω²(n) = Σ_{n=1}^N Σ_{p₁p₂|n, p₁≠p₂} 1 + Σ_{n=1}^N ω(n).   (8.12)

The second term on the right-hand side of (8.12) can be estimated by Theorem 8.2, giving

  Σ_{n=1}^N ω(n) = N E_N ω = N log log N + O(N), as N → ∞.   (8.13)

To estimate the first term on the right-hand side of (8.12), we write

  Σ_{n=1}^N Σ_{p₁p₂|n, p₁≠p₂} 1 = Σ_{p₁p₂≤N, p₁≠p₂} Σ_{n≤N: p₁p₂|n} 1 = Σ_{p₁p₂≤N, p₁≠p₂} ⌊N/(p₁p₂)⌋
    = N Σ_{p₁p₂≤N, p₁≠p₂} 1/(p₁p₂) − Σ_{p₁p₂≤N, p₁≠p₂} (N/(p₁p₂) − ⌊N/(p₁p₂)⌋).   (8.14)

The number of ordered pairs of distinct primes (p₁, p₂) such that p₁p₂ ≤ N is of course equal to twice the number of such unordered pairs {p₁, p₂}. The fundamental theorem of arithmetic states that each integer has a unique factorization into primes; thus, if p₁p₂ = p₃p₄, then necessarily {p₁, p₂} = {p₃, p₄}. Consequently the number of unordered pairs {p₁, p₂} such that p₁p₂ ≤ N is certainly no greater than N. Thus, the second term on the right-hand side of (8.14) satisfies

  0 ≤ Σ_{p₁p₂≤N, p₁≠p₂} (N/(p₁p₂) − ⌊N/(p₁p₂)⌋) ≤ 2N.   (8.15)


Using Mertens’ second theorem for the second inequality below, we bound from above the summation in the first term on the right hand side of (8.14) by X p1 p2 N p1 ¤p2

X 1  2 1 /2  log log N C O.1/ ; as N ! 1: . p1 p2 p pN

(8.16)

From (8.12)–(8.16), we conclude that (8.10) holds. To complete the proof of the theorem, we need to show (8.10) with the reverse inequality. The easiest way to do this is to note simply that the variance is a nonnegative quantity. Thus,  2 EN ! 2  .EN !/2 D log log N C O.1/ D .log log N /2 C O.log log N /; where the first equality follows from Theorem 8.2. For an alternative proof, see Exercise 8.1.  We now use Chebyshev’s inequality along with the estimates in Theorems 8.2 and 8.3 to prove the Hardy–Ramanujan theorem. Proof of Theorem 8.1. From Theorems 8.2 and 8.3 we have  2 VarN .!/DEN ! 2 .EN !/2 D.log log N /2 CO.log log N / log log N C O.1/ D O.log log N /; as N ! 1:

(8.17)

Theorem 8.2 gives EN ! D log log N C RN ; where RN is bounded as N ! 1:

(8.18)

1

Applying Chebyshev’s inequality with  D .log log N / 2 Cı , where ı > 0, we obtain from (8.5), (8.17), and (8.18)   O.log log N / 1 PN j!  log log N  RN j  .log log N / 2 Cı  ; as N ! 1: .log log N /1C2ı Thus,   1 lim PN j!  log log N  RN j  .log log N / 2 Cı D 1:

N !1

(8.19)

Translating (8.19) back to the notation in the statement of the theorem, we have for every ı > 0 1

jfn 2 ŒN  W j!.n/  log log N  RN j  .log log N / 2 Cı gj D 1: N !1 N lim

(8.20)


The main difference between (8.20) and the statement of the Hardy–Ramanujan theorem is that log log N appears in (8.20) and log log n appears in (8.1). Because log log x is such a slowly varying function, this difference is not very significant. The remainder of the proof consists of showing that if (8.20) holds for all δ > 0, then (8.1) also holds for all δ > 0. Fix an arbitrary δ > 0. Using the fact that (8.20) holds with δ replaced by δ/2, we will show that (8.1) holds for δ. This will then complete the proof of the theorem.

The term R_N in (8.20) may vary with N, but it is bounded in absolute value, say by M. For N^{1/2} ≤ n ≤ N, we have

  0 ≤ log log N − log log n ≤ log log N − log log N^{1/2} = log 2.   (8.21)

Therefore, writing ω(n) − log log n = (ω(n) − log log N − R_N) + (log log N − log log n) + R_N, the triangle inequality and (8.21) give

  |ω(n) − log log n| ≤ |ω(n) − log log N − R_N| + log 2 + M, for N^{1/2} ≤ n ≤ N.   (8.22)

Using (8.20) with δ replaced by δ/2, along with (8.22) and the fact that lim_{N→∞} N^{1/2}/N = 0, we have

  lim_{N→∞} |{n ∈ [N] : |ω(n) − log log n| ≤ (log log N)^{1/2+δ/2} + log 2 + M}| / N = 1.   (8.23)

By (8.21), it follows that (log log n)^{1/2+δ} ≥ (log log N − log 2)^{1/2+δ}, for N^{1/2} ≤ n ≤ N. Clearly, we have

  (log log N − log 2)^{1/2+δ} ≥ (log log N)^{1/2+δ/2} + log 2 + M, for sufficiently large N.

Thus,

  (log log n)^{1/2+δ} ≥ (log log N)^{1/2+δ/2} + log 2 + M, for N^{1/2} ≤ n ≤ N and sufficiently large N.   (8.24)

From (8.23), (8.24), and the fact that lim_{N→∞} N^{1/2}/N = 0, we conclude that

  lim_{N→∞} |{n ∈ [N] : |ω(n) − log log n| ≤ (log log n)^{1/2+δ}}| / N = 1.   □

Exercise 8.1. Prove the lower bound

  E_N ω² ≥ (log log N)² + O(log log N)

by using (8.12)–(8.15) and an inequality that begins with Σ_{p₁p₂≤N, p₁≠p₂} 1/(p₁p₂) ≥ Σ_{p₁,p₂≤√N, p₁≠p₂} 1/(p₁p₂).
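Using a sieve for ω one can also watch the concentration asserted by the Hardy–Ramanujan theorem: already at N = 10⁵, the vast majority of integers have ω(n) within (log log n)^{1/2+δ} of log log n. (Illustrative sketch; the choice δ = 1/4 is arbitrary, and the helper name is ours.)

```python
from math import log

def omega_upto(N):
    """omega(n) for 0 <= n <= N via a sieve."""
    w = [0] * (N + 1)
    for p in range(2, N + 1):
        if w[p] == 0:
            for m in range(p, N + 1, p):
                w[m] += 1
    return w

N, delta = 10**5, 0.25
w = omega_upto(N)
good = sum(1 for n in range(3, N + 1)
           if abs(w[n] - log(log(n))) <= log(log(n)) ** (0.5 + delta))
print(good / N)  # fraction of n satisfying the Hardy-Ramanujan bound
```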

Exercise 8.2. Let Ω(n) denote the number of prime divisors of n, counted with repetitions. Thus, if the prime factorization of n is given by n = Π_{i=1}^m p_i^{k_i}, then ω(n) = m, but Ω(n) = Σ_{i=1}^m k_i. Use the method of proof in Theorem 8.2 to prove that

  E_N Ω = (1/N) Σ_{n=1}^N Ω(n) = log log N + O(1), as N → ∞.

Exercise 8.3. Let d(n) denote the number of divisors of n. Thus, d(12) = 6 because the divisors of 12 are 1, 2, 3, 4, 6, 12. Show that

  (1/n) Σ_{j=1}^n d(j) = log n + O(1).

This shows that the average order of the divisor function is the function log n. Recall from the remark after Theorem 8.2 that the average order of ω(n), the function counting the number of distinct prime divisors, is the function log log n. (Hint: We have d(k) = Σ_{m|k} 1, so Σ_{k=1}^n d(k) = Σ_{k∈[n]} Σ_{m|k} 1 = Σ_{m∈[n]} Σ_{k∈[n]: m|k} 1.)

Chapter Notes

The theorem of G. H. Hardy and S. Ramanujan was proved in 1917. The proof we give is along the lines of the 1934 proof of P. Turán, which is much simpler than the original proof. For more on multiplicative number theory and primes, the subject of the material in Chaps. 6–8, the reader is referred to Nathanson's book [27] and to the more advanced treatment of Tenenbaum in [33]. In [27] one can find a proof of the prime number theorem by "elementary" methods. For very accessible books on analytic number theory and a proof of the prime number theorem using analytic function theory, see, for example, Apostol's book [5] or Jameson's book [25]. For a somewhat more advanced treatment, see the book of Montgomery and Vaughan [26]. One can also find a proof of the prime number theorem using analytic function theory, as well as a whole trove of sophisticated material, in [33].

Chapter 9

The Largest Clique in a Random Graph and Applications to Tampering Detection and Ramsey Theory

9.1 Graphs and Random Graphs: Basic Definitions

A finite graph G is a pair (V, E), where V is a finite set of vertices and E is a subset of V^(2), the set of unordered pairs of elements of V. The elements of E are called edges. (This is what graph theorists call a simple graph: there are no loops (edges connecting a vertex to itself) and there are no multiple edges (more than one edge connecting the same pair of vertices).) If x, y ∈ V and the pair {x, y} ∈ E, then we say that an edge joins the vertices x and y; otherwise, we say that there is no edge joining x and y. If |V| = n, then |V^(2)| = (n choose 2) = (1/2)n(n−1). The size of the graph is the number of vertices it contains, that is, |V|. We will identify the vertex set V of a graph of size n with [n]. The graph G = (V, E) with |V| = n and E = V^(2) is called the complete graph of size n and is henceforth denoted by K_n. This graph has n vertices and an edge connects every one of the (1/2)n(n−1) pairs of vertices. See Fig. 9.1.

For a graph G = (V, E) of size n, a clique of size k ∈ [n] is a complete subgraph K of G of size k; that is, K = (V_K, E_K), where V_K ⊂ V, |V_K| = k and E_K = V_K^(2). See Fig. 9.2.

Consider the vertex set V = [n]. Now construct the edge set E ⊂ [n]^(2) in the following random fashion. Let p ∈ (0, 1). For each pair {x, y} ∈ [n]^(2), toss a coin with probability p of heads and 1−p of tails. If heads occurs, include the pair {x, y} in E, and if tails occurs, do not include it in E. Do this independently for every pair {x, y} ∈ [n]^(2). Denote the resulting random edge set by E_n(p). The resulting random graph is sometimes called an Erdős–Rényi graph; it will be denoted by G_n(p) = ([n], E_n(p)). In this chapter, the generic notation P for probability and E for expectation will be used throughout.
To get a feeling for how many edges one expects to see in the random graph, attach to each of the N := (1/2)n(n−1) potential edges a random variable which is equal to 1 if the edge exists in the random set of edges E_n(p) and is equal to 0 if the edge does not exist in E_n(p). Denote these random variables by {W_m}_{m=1}^N. The random variables are distributed according to the Bernoulli distribution with


Fig. 9.1 The complete graph with 5 vertices, G = K₅

Fig. 9.2 A graph with 10 vertices and 13 edges. The largest clique is the one of size 4, formed by the vertices {4, 5, 6, 7}

parameter p; that is, P(W_m = 1) = 1 − P(W_m = 0) = p. Thus, the expectation and the variance of W_m are given by E W_m = p and σ²(W_m) = p(1−p). Let S_N = Σ_{m=1}^N W_m denote the number of edges in the random graph. By the linearity of the expectation, one has E S_N = Np. Because edges have been selected independently, the random variables {W_m}_{m=1}^N are independent. Thus, the variance of S_N is the sum of the variances of {W_m}_{m=1}^N; that is, σ²(S_N) = Np(1−p). Therefore, Chebyshev's inequality gives

  P(|S_N − Np| ≥ N^{(1+ε)/2}) ≤ Np(1−p)/N^{1+ε}.

Consequently, for any ε > 0, one has lim_{N→∞} P(|S_N − Np| ≥ N^{(1+ε)/2}) = 0. Thus, for any ε > 0 and large n (depending on ε), with high probability the Erdős–Rényi graph G_n(p) will have (1/2)n²p + O(n^{1+ε}) edges.

The main question we address in this chapter is this: how large is the largest complete subgraph, that is, the largest clique, in G_n(p), as n → ∞? We study this question in Sect. 9.2. In Sect. 9.3 we apply the results of Sect. 9.2 to a problem in tampering detection. In Sect. 9.4, we discuss Ramsey theory for cliques in graphs and use random graphs to give a bound on the size of a fundamental deterministic quantity.
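This concentration of the edge count is easy to see by simulation (illustrative sketch; the helper names are ours):

```python
import random

def gnp_edges(n, p, rng):
    """Number of edges in one sample of the Erdos-Renyi graph G_n(p)."""
    return sum(1 for i in range(n) for j in range(i + 1, n) if rng.random() < p)

rng = random.Random(0)
n, p = 400, 0.5
N = n * (n - 1) // 2            # number of potential edges
m = gnp_edges(n, p, rng)
print(m, N * p)                 # m is within a few hundred of Np = 39900
```

The standard deviation is √(Np(1−p)) ≈ 141 here, so the sampled count lands very close to its mean.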


9.2 The Size of the Largest Clique in a Random Graph

Let L_{n,p} be the random variable denoting the size of the largest clique in G_n(p). Let log^{(2)}_{1/p} n := log_{1/p} log_{1/p} n.

Theorem 9.1. Let L_{n,p} denote the size of the largest clique in the Erdős–Rényi graph G_n(p). Then

  lim_{n→∞} P(L_{n,p} ≥ 2 log_{1/p} n − c log^{(2)}_{1/p} n) = 0, if c < 2;  and = 1, if c > 2.

Remark. Despite the increasing randomness and disorder in G_n(p) as n grows, the theorem shows that L_{n,p} behaves almost deterministically: with probability approaching 1 as n → ∞, the size of the largest clique will be very close to 2 log_{1/p} n − 2 log^{(2)}_{1/p} n. In fact, it is known that for each n, there exists a value d_n such that lim_{n→∞} P(L_{n,p} equals either d_n or d_n + 1) = 1. That is, with probability approaching 1 as n → ∞, L_{n,p} is restricted to two specific values. The proof of this is similar to the proof of Theorem 9.1 but a little more delicate; see [9]. We have chosen the formulation in Theorem 9.1 in particular because it is natural for the topic discussed in Sect. 9.3.

Let N_{n,p}(k) be the random variable denoting the number of cliques of size k in the random graph G_n(p). We will always assume tacitly that the argument of N_{n,p} is a positive integer. Of course it follows from Theorem 9.1 that lim_{n→∞} P(N_{n,p}(k_n) = 0) = 1, if k_n ≥ 2 log_{1/p} n − c log^{(2)}_{1/p} n, for some c < 2. We say then that the random variable N_{n,p}(k_n) converges in probability to 0 as n → ∞. The proof of Theorem 9.1 will actually show that if k_n ≤ 2 log_{1/p} n − c log^{(2)}_{1/p} n, for some c > 2, then lim_{n→∞} P(N_{n,p}(k_n) > M) = 1, for any M ∈ ℝ. We say then that the random variable N_{n,p}(k_n) converges in probability to ∞ as n → ∞. We record this as a corollary.

Corollary 9.1.
i. If k_n ≥ 2 log_{1/p} n − c log^{(2)}_{1/p} n, for some c < 2, then N_{n,p}(k_n) converges to 0 in probability; that is,

  lim_{n→∞} P(N_{n,p}(k_n) = 0) = 1;

ii. If k_n ≤ 2 log_{1/p} n − c log^{(2)}_{1/p} n, for some c > 2, then N_{n,p}(k_n) converges to ∞ in probability; that is,

  lim_{n→∞} P(N_{n,p}(k_n) > M) = 1, for all M ∈ ℝ.
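The dichotomy in Corollary 9.1 is driven by the expected clique count (n choose k) p^{(k choose 2)}, computed in (9.2) below, which collapses from astronomically large to astronomically small as k passes roughly 2 log_{1/p} n − 2 log^{(2)}_{1/p} n. A small illustrative computation (helper name is ours):

```python
from math import comb, log

def expected_cliques(n, k, p):
    """E N_{n,p}(k) = C(n, k) * p^C(k, 2)."""
    return comb(n, k) * p ** comb(k, 2)

n, p = 10**6, 0.5
print(2 * log(n) / log(1 / p))       # 2 log_{1/p} n is about 39.9
for k in (25, 30, 35, 40):
    print(k, expected_cliques(n, k, p))
```

The expectation crosses 1 between k = 30 and k = 35 here, consistent with the second-order term −2 log^{(2)}_{1/p} n in the theorem.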


Proof of Theorem 9.1. The number of cliques of size k_n in the complete graph K_n is (n choose k_n); denote these cliques by {K_j^n : j = 1, …, (n choose k_n)}. Let I_{K_j^n} be the indicator random variable defined to be equal to 1 or 0, according to whether the clique K_j^n is or is not contained in the random graph G_n(p). Then we can represent the random variable N_{n,p}(k_n), denoting the number of cliques of size k_n in the random graph G_n(p), as

  N_{n,p}(k_n) = Σ_{j=1}^{(n choose k_n)} I_{K_j^n}.   (9.1)

Let P(K_j^n) denote the probability that the clique K_j^n is contained in G_n(p), that is, the probability that the edges of the clique K_j^n are all contained in the random edge set E_n(p) of G_n(p). Since each clique K_j^n contains (k_n choose 2) edges, we have

  P(K_j^n) = p^{(k_n choose 2)}.

The expected value E I_{K_j^n} of I_{K_j^n} is given by E I_{K_j^n} = P(K_j^n). Thus, the expected value of N_{n,p}(k_n) is given by

  E N_{n,p}(k_n) = Σ_{j=1}^{(n choose k_n)} E I_{K_j^n} = (n choose k_n) p^{(k_n choose 2)}.   (9.2)

We will first prove that if c < 2, then

  lim_{n→∞} P(L_{n,p} ≥ 2 log_{1/p} n − c log^{(2)}_{1/p} n) = 0.   (9.3)

We have

  E N_{n,p}(k_n) ≥ P(N_{n,p}(k_n) ≥ 1) = P(L_{n,p} ≥ k_n),

where the equality follows from the fact that a clique of size l contains sub-cliques of size j for all j ∈ [l]. Thus, to prove (9.3) it suffices to prove that

  lim_{n→∞} E N_{n,p}(2 log_{1/p} n − c_n log^{(2)}_{1/p} n) = 0,   (9.4)

where 0 ≤ c_n ≤ c < 2, for all n. (We have written c_n instead of c in (9.4) because we need the argument of N_{n,p} to be an integer.) This approach to proving (9.3) is known as the first moment method.
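The first moment method rests on computing E N_{n,p}(k_n) via the asymptotics (n choose k) ∼ n^k/k! for k = o(√n), proved next; this approximation is already excellent at modest sizes, as a quick illustrative check shows:

```python
from math import comb, factorial, exp

n, k = 10**6, 50                       # k is well below sqrt(n) = 1000
ratio = comb(n, k) * factorial(k) / n**k
print(ratio)                           # the ratio prod_{j<k}(1 - j/n)
print(exp(-k * (k - 1) / (2 * n)))     # its leading-order approximation
```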


To prove (9.4), we need the following lemma.

Lemma 9.1. If k_n = o(n^{1/2}), as n → ∞, then

  (n choose k_n) ∼ n^{k_n}/k_n!, as n → ∞.

Proof. We have that (n choose k_n) = n(n−1)⋯(n−k_n+1)/k_n!. Thus, to prove the lemma we need to show

  lim_{n→∞} n(n−1)⋯(n−k_n+1)/n^{k_n} = 1,

or, equivalently,

  lim_{n→∞} Σ_{j=1}^{k_n−1} log(1 − j/n) = 0.   (9.5)

Letting f(x) = −log(1−x), and applying Taylor's remainder theorem in the form f(x) = f(0) + f′(x*(x))x, for x > 0, where x*(x) ∈ (0, x), we have

  0 ≤ −log(1−x) ≤ 2x, for 0 ≤ x ≤ 1/2.

Thus, for n sufficiently large so that k_n/n ≤ 1/2, we have

  0 ≤ −Σ_{j=1}^{k_n−1} log(1 − j/n) ≤ Σ_{j=1}^{k_n−1} 2j/n = (k_n−1)k_n/n.

Letting n → ∞ in the above equation, and using the assumption that k_n = o(n^{1/2}), we obtain (9.5). □

We can now prove (9.4). Let k_n = 2 log_{1/p} n − c_n log^{(2)}_{1/p} n, where 0 ≤ c_n ≤ c < 2, for all n. Stirling's formula gives

  k_n! ∼ k_n^{k_n} e^{−k_n} √(2πk_n), as n → ∞.

Using this with Lemma 9.1 and (9.2), we have

  E N_{n,p}(k_n) = (n choose k_n) p^{k_n(k_n−1)/2} ∼ (n^{k_n}/k_n!) p^{k_n(k_n−1)/2} ∼ n^{k_n} p^{k_n(k_n−1)/2}/(k_n^{k_n} e^{−k_n} √(2πk_n)), as n → ∞,

9 The Largest Clique in a Random Graph and Applications

and thus log 1 ENn;p .kn / D log 1 p

p

! kn n p. 2 /  kn

1 1 1 kn log 1 n  kn2 C kn  kn log 1 kn C kn log 1 e  log 1 2kn ; as n ! 1: p p p p 2 2 2 (9.6) Note that .2/

.2/

log 1 kn D log 1 .2 log 1 n  cn log 1 p

p

p

p

cn log 1 n    p D n/ D log 1 .log 1 n/ 2  p p log 1 n p

.2/

cn log 1 n   p .2/ .2/ log 1 n C log 1 2  D log 1 n C O.1/; as n ! 1: p log 1 n p p

(9.7)

p

Substituting for kn and using (9.7), we have 1 .2/ kn log 1 n  kn2  kn log 1 kn D .2 log 1 n  cn log 1 n/ log 1 n p p p p 2 p 1 .2/ .2/ .2/ .2 log 1 n  cn log 1 n/2  .2 log 1 n  cn log 1 n/.log 1 n C O.1// D p p 2 p p p   .2/ (9.8) .cn  2/.log 1 n/ log 1 n C O log 1 n : p

p

p

Since 12 kn C kn log 1 e  12 log 1 2kn D O.log 1 n/, it follows from (9.6), (9.8), and p p p the fact that 0  cn  c < 2 that .2/

lim log 1 ENn;p .2 log 1 n  cn log 1 n/ D 1:

n!1

p

p

p

Thus, (9.4) holds, completing the proof of (9.3). We now prove that if c > 2, then .2/

lim P .Ln;p  2 log 1 n  c log 1 n/ D 1:

n!1

p

(9.9)

p

The analysis in the above paragraph shows that if cn  c > 2, for all n, then .2/

lim ENn;p .2 log 1 n  cn log 1 n/ D 1:

n!1

p

(9.10)

p

The first moment method used above exploits the fact that (9.4) implies (9.3). Now (9.10) does not imply (9.9). To prove (9.9), we employ the second moment

9.2 The Size of the Largest Clique

95

method. (This method was also used in Chap. 3 and Chap. 8.) The variance of Nn;p .kn / is given by  2    2 2 Var Nn;p .kn / D E Nn;p .kn /  ENn;p .kn / D ENn;p .kn /  ENn;p .kn / : (9.11) .2/

Our goal now is to show that if kn D 2 log 1 n  cn log 1 n with cn  c > 2, for all p

p

n, then    2  Var Nn;p .kn / D o ENn;p .kn / ; as n ! 1:

(9.12)

Chebyshev’s inequality gives for any > 0     Var Nn;p .kn / P jNn;p .kn /  ENn;p .kn /j  jENn;p .kn /j   2 : 2 ENn;p .kn /

(9.13)

Thus, (9.12) and (9.13) yield lim P .j

n!1

Nn;p .kn /  1j < / D 1; for all > 0: ENn;p .kn /

(9.14)

From (9.14) and (9.10), it follows that lim P .Nn;p .kn / > M / D 1; for all M 2 R:

n!1

(9.15)

In particular then, (9.9) follows from (9.15). Thus, the proof of the theorem will be complete when we prove (9.12), or, in light of (9.11), when we prove that   2 2  2 ENn;p (9.16) .kn / D ENn;p .kn / C o .ENn;p .kn / ; as n ! 1:   We relabel the cliques fKjn W j D 1; : : : ; knn g, of size kn in Kn according to the vertices that are contained in each clique. Thus, we write Kin1 ;i2 ;:::;ikn to denote the clique whose vertices are i1 ; i2 ; : : : ; ikn . The representation for Nn;p .kn / in (9.1) becomes X Nn;p .kn / D IKin ;i ;:::;i : (9.17) 1i1 0, if  ¤ 0 . Let 0 > . For any  > 0, we have P .Sn  0 n/  exp.0 n/E exp.Sn /;

(10.13)

since $\exp(\lambda(S_n - \theta_0 n)) \ge 1$ on the event $\{S_n \ge \theta_0 n\}$. We can represent the random variable $S_n$ as $S_n = \sum_{j=1}^n B_j$, where the $\{B_j\}_{j=1}^n$ are independent and identically distributed Bernoulli random variables with parameter $\theta$; that is, $P(B_j = 1) = 1 - P(B_j = 0) = \theta$. Using the fact that these random variables are independent and identically distributed, we have

$$E\exp(\lambda S_n) = E\exp\Big(\lambda\sum_{j=1}^n B_j\Big) = \prod_{j=1}^n E\exp(\lambda B_j) = \big(E\exp(\lambda B_1)\big)^n = \big(\theta e^\lambda + 1 - \theta\big)^n. \qquad (10.14)$$

Thus, from (10.13), we obtain the inequality

$$P(S_n \ge \theta_0 n) \le \Big(e^{-\lambda\theta_0}\big(\theta e^\lambda + 1 - \theta\big)\Big)^n, \text{ for all } n \ge 1 \text{ and all } \lambda > 0. \qquad (10.15)$$

The function $f(\lambda) := e^{-\lambda\theta_0}(\theta e^\lambda + 1 - \theta)$, $\lambda \ge 0$, satisfies $f(0) = 1$, $\lim_{\lambda\to\infty} f(\lambda) = \infty$, and $f'(0) = -\theta_0 + \theta < 0$. Consequently, $f$ possesses a global minimum at some $\lambda_0 > 0$, and $f(\lambda_0) \in (0,1)$ [indeed, $f(\lambda_0) > 0$ since $f > 0$, and $f(\lambda_0) < f(0) = 1$ since $f'(0) < 0$]. In Exercise 10.5, the reader is asked to show that $f(\lambda_0) = \big(\frac{\theta}{\theta_0}\big)^{\theta_0}\big(\frac{1-\theta}{1-\theta_0}\big)^{1-\theta_0}$. Note now that $\Lambda(\theta_0,\theta)$, defined in the statement of the proposition, is equal to $-\log f(\lambda_0)$. Thus, $\Lambda(\theta_0,\theta) > 0$, for $\theta_0 > \theta$, and $e^{-\Lambda(\theta_0,\theta)} = f(\lambda_0)$. Substituting $\lambda = \lambda_0$ in (10.15) gives

$$P(S_n \ge \theta_0 n) \le e^{-\Lambda(\theta_0,\theta)n}.$$

Finally, since $\Lambda(\theta_0,\theta) = \Lambda(1-\theta_0, 1-\theta)$, it follows that $\Lambda(\theta_0,\theta) > 0$, if $\theta \ne \theta_0$. $\square$
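The bound just proved is easy to check numerically. The following Python sketch (our code, not the book's; the function names are ours) computes the rate $\Lambda(\theta_0,\theta) = \theta_0\log\frac{\theta_0}{\theta} + (1-\theta_0)\log\frac{1-\theta_0}{1-\theta}$ and compares $e^{-\Lambda(\theta_0,\theta)n}$ with the exact binomial upper tail, using only the standard library.

```python
import math

def Lambda(theta0, theta):
    # Relative entropy of Bernoulli(theta0) with respect to Bernoulli(theta);
    # this equals -log f(lambda_0) from the proof above.
    return (theta0 * math.log(theta0 / theta)
            + (1 - theta0) * math.log((1 - theta0) / (1 - theta)))

def binom_upper_tail(n, theta, k):
    # Exact P(S_n >= k) for S_n ~ Bin(n, theta).
    return sum(math.comb(n, j) * theta**j * (1 - theta)**(n - j)
               for j in range(k, n + 1))

n, theta, theta0 = 200, 0.3, 0.45
chernoff = math.exp(-Lambda(theta0, theta) * n)
exact = binom_upper_tail(n, theta, math.ceil(theta0 * n))
print(exact <= chernoff)  # True: P(S_n >= theta0*n) <= e^{-Lambda*n}
```

The symmetry $\Lambda(\theta_0,\theta) = \Lambda(1-\theta_0,1-\theta)$ used at the end of the proof can be checked with the same function.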

10.4 Proof of Theorem 10.1 In this section, and also in Sect. 10.6, we will use tacitly the following facts, which are left to the reader in Exercise 10.6:

10 Giant Component in a Sparse Random Graph

1. If $X_i \sim \mathrm{Bin}(n_i, p)$, $i = 1,2$, and $n_1 > n_2$, then $P(X_1 \ge k) \ge P(X_2 \ge k)$, for all integers $k \ge 0$.
2. If $X_i \sim \mathrm{Bin}(n, p_i)$, $i = 1,2$, and $p_1 > p_2$, then $P(X_1 \ge k) \ge P(X_2 \ge k)$, for all integers $k \ge 0$.

We assume that $p_n = \frac{c}{n}$ with $c \in (0,1)$. From the analysis in Sect. 10.2, we have seen that for $x \in [n]$, the size of the connected component of $G_n(p_n)$ containing $x$ is given by $T = \min\{t \ge 0 : Y_t = 0\}$. (As noted in Sect. 10.2, the quantities $T$ and $Y_t$ depend on $x$ and $n$, but this dependence is suppressed in the notation.) Let $\hat Y_t$ be a random variable distributed according to the distribution $\mathrm{Bin}(n-1, 1-(1-\frac{c}{n})^t)$. Then from Lemma 10.1,

$$P(T > t) \le P(Y_t > 0) = P(\hat Y_t > t-1) = P(\hat Y_t \ge t).$$

(The inequality above is not an equality because we have continued the definition of $Y_t$ past the time $T$.) Let $\bar Y_t$ be a random variable distributed according to the distribution $\mathrm{Bin}(n-1, \frac{tc}{n})$. By Taylor's remainder formula, $(1-x)^t \ge 1-tx$, for $x \ge 0$ and $t$ a positive integer. Thus, $\frac{tc}{n} \ge 1-(1-\frac{c}{n})^t$, and consequently $P(\hat Y_t \ge t) \le P(\bar Y_t \ge t)$. Thus, we have

$$P(T > t) \le P(\bar Y_t \ge t). \qquad (10.16)$$

If $S_{n,t} \sim \mathrm{Bin}(n, \frac{tc}{n})$ as in Proposition 10.1, then $P(\bar Y_t \ge t) \le P(S_{n,t} \ge t)$. Using this with (10.16) and Proposition 10.1, we conclude that there exists a $\gamma > 0$ such that

$$P(T > t) \le e^{-\gamma t}, \quad t \ge 0,\ n \ge 1. \qquad (10.17)$$

Let $\kappa > 0$ satisfy $\kappa\gamma > 1$. Then from (10.17) we have

$$P(T > \kappa\log n) \le e^{-\gamma\kappa\log n} = n^{-\gamma\kappa}. \qquad (10.18)$$

We have proven that the probability that the connected component containing $x$ is larger than $\kappa\log n$ is no greater than $n^{-\gamma\kappa}$. There are $n$ vertices in $G_n(p_n)$; thus the probability that at least one of them is in a connected component larger than $\kappa\log n$ is certainly no larger than $n\cdot n^{-\gamma\kappa} = n^{1-\gamma\kappa} \to 0$ as $n\to\infty$. This completes the proof of Theorem 10.1. $\square$
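Theorem 10.1 can be probed by simulation. The sketch below (our code; the parameter choices are illustrative) samples $G_n(c/n)$ with $c < 1$ and measures the largest component by breadth-first search; for $n = 2000$ and $c = 0.5$ the result is a small multiple of $\log n$, nowhere near order $n$.

```python
import random
from collections import deque

def largest_component(n, c, seed=0):
    # Sample the Erdos-Renyi graph G_n(c/n) and return the size of its
    # largest connected component, found by BFS.
    rng = random.Random(seed)
    p = c / n
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen = [False] * n
    best = 0
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        size, queue = 1, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    size += 1
                    queue.append(v)
        best = max(best, size)
    return best

# Subcritical case c = 0.5 < 1: the largest component is tiny compared with n.
print(largest_component(2000, 0.5))
```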

10.5 The Galton–Watson Branching Process

We define a random population process in discrete time. Let $\{q_n\}_{n=0}^\infty$ be a nonnegative sequence satisfying $\sum_{n=0}^\infty q_n = 1$. We will refer to $\{q_n\}_{n=0}^\infty$ as the offspring distribution of the process. Consider an initial particle alive at time $t = 0$ and set


Fig. 10.2 A realization of a branching process that becomes extinct at $n = 5$

$X_0 = 1$ to indicate that the size of the initial population is 1. At time $t = 1$, this particle gives birth to a random number of offspring and then dies. For each $n \in \mathbb{Z}^+$, the probability that there were $n$ offspring is $q_n$. Let $X_1$ denote the population size at time 1, namely the number of offspring of the initial particle. In general, at any time $t \ge 1$, all of the $X_{t-1}$ particles alive at time $t-1$ give birth to random numbers of offspring and die. The new number of particles is $X_t$. The numbers of offspring of the different particles throughout all the generations are assumed independent of one another and are all distributed according to the same offspring distribution $\{q_n\}_{n=0}^\infty$. The random population process $\{X_t\}_{t=0}^\infty$ is called a Galton–Watson branching process. Clearly, if $X_t = 0$ for some $t$, then $X_r = 0$ for all $r \ge t$. If this occurs, we say the process becomes extinct; otherwise we say that the process survives. See Fig. 10.2. If $q_0 = 0$, then the probability of survival is 1. Otherwise, there is a positive probability of extinction, since at any time $t \ge 1$, there is a positive probability (namely $q_0^{X_{t-1}}$) that all of the particles die without leaving any offspring, in which case $X_t = 0$. The most fundamental question we can ask about this process is whether it has a positive probability of surviving.

Let $W$ be a random variable distributed according to the offspring distribution: $P(W = n) = q_n$. Let

$$\mu = EW = \sum_{n=0}^\infty n q_n$$

denote the mean number of offspring of a particle. It is easy to show that $EX_{t+1} = \mu EX_t$ (Exercise 10.7), from which it follows that $EX_t = \mu^t$, $t \ge 0$. From this, it follows that if $\mu < 1$, then $\lim_{t\to\infty} EX_t = 0$. Since $EX_t \ge P(X_t \ge 1)$, it follows that $\lim_{t\to\infty} P(X_t \ge 1) = 0$, which means that the process has probability 1 of extinction. The fact that $EX_t$ is growing exponentially in $t$ when $\mu > 1$ would suggest, but not prove, that for $\mu > 1$ the probability of extinction is less than 1. In fact, we can use the method of generating functions to prove the following result. Define

$$\phi(s) = \sum_{n=0}^\infty q_n s^n, \quad s \in [0,1]. \qquad (10.19)$$


The function $\phi(s)$ is the probability generating function for the distribution $\{q_n\}_{n=0}^\infty$.

Theorem 10.3. Consider a Galton–Watson branching process with offspring distribution $\{q_n\}_{n=0}^\infty$, where $q_0 > 0$. Let $\mu = \sum_{n=0}^\infty n q_n \in [0,\infty]$ denote the mean number of offspring of a particle.

(i) If $\mu \le 1$, then the Galton–Watson process becomes extinct with probability 1.
(ii) If $\mu > 1$, then the Galton–Watson process becomes extinct with probability $\alpha \in (0,1)$, where $\alpha$ is the unique root $s \in (0,1)$ of the equation $\phi(s) = s$.

Proof. If $q_0 + q_1 = 1$, then necessarily $\mu < 1$. Thus, it follows from the paragraph before the statement of the theorem that extinction occurs with probability 1. Assume now that $q_0 + q_1 < 1$. Since the power series for $\phi(s)$ converges uniformly for $s \in [0, 1-\epsilon]$, for any $\epsilon > 0$, it follows that we can differentiate term by term to get

$$\phi'(s) = \sum_{n=0}^\infty n q_n s^{n-1} \ge 0, \qquad \phi''(s) = \sum_{n=0}^\infty n(n-1) q_n s^{n-2} \ge 0, \qquad 0 \le s < 1.$$

In particular then, since $q_0 + q_1 < 1$, $\phi$ is a strictly convex function on $[0,1]$, and consequently, so is $\psi(s) := \phi(s) - s$. We have $\psi(0) = q_0 > 0$ and $\psi(1) = 0$. Also, $\lim_{s\to 1^-}\psi'(s) = \lim_{s\to 1^-}\phi'(s) - 1 = \mu - 1$. Since $\psi$ is strictly convex, it follows that if $\mu \le 1$, then $\psi'(s) < 0$ for $s \in [0,1)$, and consequently $\psi(s) > 0$, for $s \in [0,1)$. However, if $\mu > 1$, then $\psi'(s) > 0$ for $s < 1$ and sufficiently close to 1. Using this along with the strict convexity and the fact that $\psi(0) > 0$ and $\psi(1) = 0$, it follows that there exists a unique $\alpha \in (0,1)$ such that $\psi(\alpha) = 0$ and that $\psi(s) > 0$, for $s \in (0,\alpha)$, and $\psi(s) < 0$, for $s \in (\alpha,1)$. (The reader should verify this.) We have thus shown that the smallest root $\alpha \in [0,1]$ of the equation $\phi(z) = z$ satisfies

$$\alpha \in (0,1), \text{ if } \mu > 1; \qquad \alpha = 1, \text{ if } \mu \le 1.$$

Furthermore, in the case $\mu > 1$, one has

$$\phi(s) > s, \text{ for } s \in [0,\alpha); \qquad \phi(s) < s, \text{ for } s \in (\alpha,1). \qquad (10.20)$$

Now let $\eta_t := P(X_t = 0)$ denote the probability that extinction has occurred by time $t$. Of course, $\eta_0 = 0$. We claim that

$$\eta_t = \phi(\eta_{t-1}), \text{ for } t \ge 1. \qquad (10.21)$$

To prove this, first note that when $t = 1$, (10.21) says that $\eta_1 = \phi(0) = q_0$, which is of course true. Now consider $t > 1$. We first calculate $P(X_t = 0 \mid X_1 = n)$, the probability that $X_t = 0$, conditioned on $X_1 = n$. By the conditioning, at time $t = 1$, there are $n$ particles, and each of these particles will contribute independently to the


population size $X_t$ at time $t$, through $t-1$ generations of branching. In order to have $X_t = 0$, each of these $n$ "new" branching processes must become extinct by time $t-1$. The probability that any one of them becomes extinct by time $t-1$ is, by definition, $\eta_{t-1}$. By the independence, it follows that the probability that they all become extinct by time $t-1$ is $\eta_{t-1}^n$. We have thus proven that

$$P(X_t = 0 \mid X_1 = n) = \eta_{t-1}^n.$$

Since $P(X_1 = n) = q_n$, we conclude that

$$\eta_t = P(X_t = 0) = \sum_{n=0}^\infty P(X_1 = n)P(X_t = 0 \mid X_1 = n) = \sum_{n=0}^\infty q_n \eta_{t-1}^n = \phi(\eta_{t-1}),$$

proving (10.21). From its definition, $\eta_t$ is nondecreasing, and $\eta_{\mathrm{ext}} := \lim_{t\to\infty} \eta_t$ is the extinction probability. Letting $t\to\infty$ in (10.21) gives

$$\eta_{\mathrm{ext}} = \phi(\eta_{\mathrm{ext}}). \qquad (10.22)$$

It follows immediately from (10.22) and (10.20) that $\eta_{\mathrm{ext}} = 1$, if $\mu \le 1$. If $\mu > 1$, then there are two roots $s \in [0,1]$ of the equation $\phi(s) = s$, namely $s = \alpha$ and $s = 1$. If $\eta_{\mathrm{ext}} = 1$, then $\eta_t > \alpha$ for sufficiently large $t$, and then by (10.20) and (10.21), for such $t$, we have $\eta_{t+1} = \phi(\eta_t) < \eta_t$, which contradicts the fact that $\eta_t$ is nondecreasing. Thus, we conclude that $\eta_{\mathrm{ext}} = \alpha$. $\square$

At one point in the proof of Theorem 10.2, we will use the above result on the extinction probability of a Galton–Watson branching process. However, we will need to consider this process in an alternative form. In the original formulation, at time $t$, the entire population of size $X_{t-1}$ that was alive at time $t-1$ reproduces and dies, and then $X_t$ is the new population size. In other words, time $t$ referred to the $t$th generation of particles. In our alternative formulation, at each time $t$, only one of the particles that was alive at time $t-1$ reproduces and dies. Thus, as before, we have $X_0 = 1$ to denote that we start with a single particle, and $X_1$ denotes the number of offspring that the original particle produces before it dies. At time $t = 2$, instead of having all $X_1$ particles reproduce and die simultaneously, we choose (arbitrarily) just one of the particles that was alive at time $t = 1$ and have it reproduce and die. Then $X_2$ is equal to the new total population. We continue in this way, at each step choosing just one of the particles that was alive at the previous step. Since in any case the number of offspring of any particle is independent of the number of offspring of the other particles, it is clear that this new process has the same extinction probability as the original one.
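The iteration (10.21) and the fixed-point equation (10.22) are directly computable. The sketch below (our code; the tolerance and function name are ours) iterates $\eta_t = \phi(\eta_{t-1})$ for a Poisson($c$) offspring law, whose probability generating function is $\phi(s) = e^{c(s-1)}$; the iterates increase to the extinction probability $\alpha$.

```python
import math

def extinction_probability(c, tol=1e-12, max_iter=10000):
    # Iterate eta_t = phi(eta_{t-1}) with phi(s) = e^{c(s-1)}, the PGF of the
    # Poisson(c) offspring distribution, starting from eta_0 = 0.
    # The iterates increase to alpha, the smallest fixed point of phi.
    eta = 0.0
    for _ in range(max_iter):
        nxt = math.exp(c * (eta - 1))
        if abs(nxt - eta) < tol:
            return nxt
        eta = nxt
    return eta

alpha = extinction_probability(2.0)
print(alpha)       # alpha solves e^{c(s-1)} = s with 0 < alpha < 1
print(1 - alpha)   # the survival probability
```

As a cross-check, $z = 1-\alpha$ should satisfy $1 - e^{-cz} - z = 0$, which is the form the fixed-point equation takes in Sect. 10.6.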


10.6 Proof of Theorem 10.2

We assume that $p_n = \frac{c}{n}$, with $c > 1$. From the analysis in Sect. 10.2, we have seen that for $x \in [n]$, the size of the connected component of $G_n(p_n)$ containing $x$ is given by $T = \min\{t \ge 0 : Y_t = 0\}$. Consider a Galton–Watson branching process $\{X_t\}_{t=0}^\infty$ in the alternative form described at the end of Sect. 10.5, and let the offspring distribution be the Poisson distribution with parameter $c$; that is, $q_m = e^{-c}\frac{c^m}{m!}$. The probability generating function of this distribution is given by

$$\phi(s) = \sum_{m=0}^\infty q_m s^m = \sum_{m=0}^\infty e^{-c}\frac{c^m}{m!}s^m = e^{c(s-1)}. \qquad (10.23)$$

The expected number of offspring is equal to $c$. Since $c > 1$, it follows from Theorem 10.3 that the extinction time $T_{\mathrm{ext}} = \inf\{t \ge 1 : X_t = 0\}$ satisfies

$$P(T_{\mathrm{ext}} < \infty) = \alpha, \qquad (10.24)$$

where $\alpha \in (0,1)$ is the unique solution $s \in (0,1)$ to the equation $\phi(s) = s$, that is, to the equation $e^{c(s-1)} = s$. Substituting $z = 1-s$ in this equation, this becomes: $1-\alpha$ is the unique root $z \in (0,1)$ of the equation

$$1 - e^{-cz} - z = 0. \qquad (10.25)$$

Let $\{W_t\}_{t=1}^\infty$ be a sequence of independent, identically distributed random variables distributed according to the Poisson distribution with parameter $c$. If $X_{t-1} \ne 0$, then $W_t$ will serve as the number of offspring of the particle chosen for reproduction and death from among the $X_{t-1}$ particles alive at time $t-1$. Then we may represent the process $\{X_t\}_{t=0}^\infty$ by $X_0 = 1$ and

$$X_t = X_{t-1} + W_t - 1, \quad 1 \le t \le T_{\mathrm{ext}}. \qquad (10.26)$$

If $T_{\mathrm{ext}} < \infty$, then of course $X_t = 0$ for all $t \ge T_{\mathrm{ext}}$. For any fixed $t \ge 1$, as soon as one knows the values of $\{W_s\}_{s=1}^t$, one knows the values of $\{X_s\}_{s=1}^t$. (We note that it might happen that these values of $\{W_s\}_{s=1}^t$ result in $X_{s_0} = 0$ for some $s_0 < t$, in which case the values of $\{W_s\}_{s=s_0+1}^t$ are superfluous for determining the values of $\{X_s\}_{s=1}^t$.) If $\bar r := \{r_s\}_{s=1}^t$ are the values obtained for $\{W_s\}_{s=1}^t$, let $\bar l := \{l_s\}_{s=1}^t$ denote the corresponding values for $\{X_s\}_{s=1}^t$. We write $\bar l = \bar l(\bar r)$. Note that $T_{\mathrm{ext}} \ge t$ occurs if and only if $l_s > 0$, for $0 \le s \le t-1$, or, equivalently, if and only if $l_{t-1} > 0$.

Now consider the process $\{Y_t\}_{t=0}^\infty$ introduced in Sect. 10.2. Recall that $T$ is equal to the smallest $t$ for which $Y_t = 0$. Note from (10.3) and (10.26) that $\{Y_t\}_{t=0}^T$ is defined recursively in a way very similar to the way $\{X_t\}_{t=0}^{T_{\mathrm{ext}}}$ is defined. The difference is that the independent sequence of random variables $\{W_t\}_{t=1}^\infty$


distributed according to the Poisson distribution with parameter $c$ is replaced by the sequence $\{Z_t\}_{t=1}^\infty$. The distribution of these latter random variables is given by (10.4) and (10.5). (As noted in Sect. 10.2, $Y_t$, $T$, and $Z_t$ depend on $x$ and $n$, but that dependence has been suppressed in the notation.) Because the form of the recursion formula is the same for $\{X_s\}_{s=1}^{T_{\mathrm{ext}}}$ and $\{Y_s\}_{s=1}^T$, and because $X_0 = Y_0 = 1$, it follows that if $\bar r = \{r_s\}_{s=1}^t$ are the values obtained for $\{Z_s\}_{s=1}^t$, and if $\bar l(\bar r)$ satisfies $l_{t-1} > 0$, then $\bar l(\bar r)$ are the corresponding values for $\{Y_s\}_{s=1}^t$.

Since the random variables $\{W_s\}_{s=1}^t$ are independent, we have

$$P\big(\{W_s\}_{s=1}^t = \bar r\big) = \prod_{s=1}^t P(W_s = r_s). \qquad (10.27)$$

By (10.4) and by the conditional independence condition (10.5), if $l_{t-1} > 0$, we have

$$P\big(\{Z_s\}_{s=1}^t = \bar r\big) = \prod_{s=1}^t P(Z_s = r_s \mid Y_{s-1} = l_{s-1}), \qquad (10.28)$$

where, for convenience, we define $l_0 = 1$. By (10.4), the distribution of $Z_s$, conditioned on $Y_{s-1} = l_{s-1}$, is given by $\mathrm{Bin}(n - s - l_{s-1} + 1, \frac{c}{n})$. Since $\lim_{n\to\infty}(n - s - l_{s-1} + 1)\frac{c}{n} = c$, it follows from the Poisson approximation to the binomial distribution (see Proposition A.3 in Appendix A) that

$$\lim_{n\to\infty} P(Z_s = r_s \mid Y_{s-1} = l_{s-1}) = P(W_s = r_s).$$

Thus, we conclude from (10.27) and (10.28) that for any fixed $t$,

$$\lim_{n\to\infty} P\big(\{Z_s\}_{s=1}^t = \bar r\big) = P\big(\{W_s\}_{s=1}^t = \bar r\big),$$

for all $\bar r = \{r_s\}_{s=1}^t$ for which $l_{t-1}(\bar r) > 0$; and, consequently, that for any fixed $t$,

$$\lim_{n\to\infty} P\big(\{Y_s\}_{s=1}^t = \bar l\big) = P\big(\{X_s\}_{s=1}^t = \bar l\big), \text{ for all } \bar l = \{l_s\}_{s=1}^t \text{ satisfying } l_{t-1} > 0. \qquad (10.29)$$

Since $T_{\mathrm{ext}}$, the extinction time for $\{X_t\}_{t=0}^\infty$, is the smallest $t$ for which $X_t = 0$, and $T_{\mathrm{ext}} \ge t$ is equivalent to $l_{t-1} > 0$, and since $T$ is the smallest $t$ for which $Y_t = 0$, it follows from (10.29) that

$$\lim_{n\to\infty} P(T \ge t) = P(T_{\mathrm{ext}} \ge t), \text{ for any fixed } t \ge 1. \qquad (10.30)$$
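The Poisson approximation underlying (10.29) can be illustrated numerically: the $\mathrm{Bin}(n-1, \frac{c}{n})$ law approaches the Poisson($c$) law as $n\to\infty$. A minimal check in Python (our code; truncating at $k < 40$ assumes the neglected tail is negligible, which it is for $c = 1.5$):

```python
import math

def binom_pmf(n, p, k):
    # P(X = k) for X ~ Bin(n, p), computed exactly
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(c, k):
    # P(W = k) for W ~ Poisson(c)
    return math.exp(-c) * c**k / math.factorial(k)

c = 1.5
for n in (10**2, 10**4, 10**6):
    # Total variation distance over a truncated range of k
    dist = 0.5 * sum(abs(binom_pmf(n - 1, c / n, k) - poisson_pmf(c, k))
                     for k in range(40))
    print(n, dist)  # shrinks as n grows
```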


From (10.24), we have $\lim_{t\to\infty} P(T_{\mathrm{ext}} \le t) = \alpha$; thus, for any $\epsilon > 0$, there exists an integer $\tau_\epsilon$ such that $P(T_{\mathrm{ext}} \le t) \in (\alpha - \frac{\epsilon}{2}, \alpha)$, if $t \ge \tau_\epsilon$. It then follows from (10.30) that there exists an $n_{1,\epsilon} = n_{1,\epsilon}(t)$ such that

$$P(T \le t) \in (\alpha - \epsilon, \alpha + \epsilon), \text{ if } t \ge \tau_\epsilon \text{ and } n \ge n_{1,\epsilon}(t). \qquad (10.31)$$

We now analyze the probabilities $P(\epsilon n \le T \le (1-\alpha-\epsilon)n)$ and $P((1-\alpha+\epsilon)n \le T \le n)$. We will show that these probabilities are very small. From (10.25), it follows that $1 - e^{-cz} > z$, for $z \in (0, 1-\alpha)$, and $1 - e^{-cz} < z$, for $z \in (1-\alpha, 1]$. Consequently, for $\epsilon > 0$, choosing $\delta = \delta(\epsilon)$ sufficiently small, we have

$$1 - e^{-cz} - 2\delta > z, \text{ for } z \in (\epsilon, 1-\alpha-\epsilon); \qquad 1 - e^{-cz} + 2\delta < z, \text{ for } z \in (1-\alpha+\epsilon, 1]. \qquad (10.32)$$

Since $T$ is the smallest $t$ for which $Y_t = 0$, we have $\{\epsilon n \le T \le (1-\alpha-\epsilon)n\} \subset \bigcup_{\epsilon n \le t \le (1-\alpha-\epsilon)n}\{Y_t \le 0\}$. (Recall that $Y_t$ has also been defined recursively for $t > T$ and can take on negative values for such $t$.) Thus, letting $\hat Y_t$ be the random variable distributed according to the distribution $\mathrm{Bin}(n-1, 1-(1-\frac{c}{n})^t)$, it follows from Lemma 10.1 that

$$P\big(\epsilon n \le T \le (1-\alpha-\epsilon)n\big) \le \sum_{\epsilon n \le t \le (1-\alpha-\epsilon)n} P(\hat Y_t \le t-1). \qquad (10.33)$$

One has $\lim_{n\to\infty}(1-\frac{c}{n})^{bn} = e^{-cb}$, uniformly over $b$ in a bounded set. (The reader should verify this by taking the logarithm of $(1-\frac{c}{n})^{bn}$ and applying Taylor's formula.) Applying this with $b = \frac{t}{n}$, with $0 \le t \le n$, it follows that $(1-\frac{c}{n})^t - e^{-\frac{ct}{n}}$ is small for large $n$, uniformly over $t \in [0,n]$. Thus, for $\delta = \delta(\epsilon)$, which has been defined above, there exists an $n_{2,\delta} = n_{2,\delta(\epsilon)}$ such that $1-(1-\frac{c}{n})^t \ge 1 - e^{-\frac{ct}{n}} - \delta$, for $n \ge n_{2,\delta}$ and $0 \le t \le n$. Let $\bar Y_t$ be a random variable distributed according to the distribution $\mathrm{Bin}(n-1, 1-e^{-\frac{ct}{n}}-\delta)$. Then $P(\hat Y_t \le t-1) \le P(\bar Y_t \le t-1)$, if $n \ge n_{2,\delta}$. Using this with (10.33), we obtain

$$P\big(\epsilon n \le T \le (1-\alpha-\epsilon)n\big) \le \sum_{\epsilon n \le t \le (1-\alpha-\epsilon)n} P(\bar Y_t \le t-1), \quad n \ge n_{2,\delta}. \qquad (10.34)$$

Every $t$ in the summation on the right-hand side of (10.34) is of the form $t = b_n n$, with $\epsilon \le b_n \le 1-\alpha-\epsilon$. Thus, it follows from (10.32) that $1 - e^{-\frac{ct}{n}} - \delta = 1 - e^{-cb_n} - \delta \ge b_n + \delta$. We now apply part (ii) of Proposition 10.2 with $n-1$ in place of $n$, with $\theta = 1 - e^{-cb_n} - \delta$, and with $\theta_0 = b_n$. Note that $\theta$ and $\theta_0$ are bounded away from 0 and from 1 as $n$ varies and as $t$ varies over the above range. Also, we have $\theta \ge \theta_0 + \delta$. Consequently, there exists a constant $\Lambda_\epsilon > 0$ such that $\Lambda(\theta_0, \theta) \ge \Lambda_\epsilon$, for all $\theta, \theta_0$ as above. Thus, we have for $n \ge n_{2,\delta}$,


$$P(\bar Y_t \le t-1) = P(\bar Y_t \le b_n n - 1) \le P\big(\bar Y_t \le b_n(n-1)\big) \le e^{-\Lambda_\epsilon(n-1)}. \qquad (10.35)$$

From (10.34) and (10.35) we conclude that

$$P\big(\epsilon n \le T \le (1-\alpha-\epsilon)n\big) \le (1-\alpha)n\, e^{-\Lambda_\epsilon(n-1)}, \quad n \ge n_{2,\delta(\epsilon)}. \qquad (10.36)$$

A very similar analysis shows that

$$P\big((1-\alpha+\epsilon)n \le T \le n\big) \le \alpha n\, e^{-\Lambda_\epsilon(n-1)}, \quad n \ge n_{3,\delta(\epsilon)}, \qquad (10.37)$$

for some $n_{3,\delta} = n_{3,\delta(\epsilon)}$. This is left to the reader as Exercise 10.8.

We now analyze the probability $P(t < T < \epsilon n)$, for fixed $t$. As in (10.33), we have

$$P(t < T < \epsilon n) \le \sum_{t < s < \epsilon n} P(\hat Y_s \le s-1). \qquad (10.38)$$

Let $\tilde Y_s$ be a random variable distributed according to the distribution $\mathrm{Bin}(n-1, (1-\frac{c}{n})^s)$; then $n-1-\hat Y_s$ has the same distribution as $\tilde Y_s$, and so

$$P(\hat Y_s \le s-1) = P(\tilde Y_s \ge n-s). \qquad (10.39)$$

For any $\lambda > 0$,

$$P(\tilde Y_s \ge n-s) \le e^{-\lambda(n-s)}\, E e^{\lambda\tilde Y_s}. \qquad (10.40)$$

We can represent the random variable $\tilde Y_s$ as $\tilde Y_s = \sum_{j=1}^{n-1} B_j$, where the $\{B_j\}_{j=1}^{n-1}$ are independent and identically distributed Bernoulli random variables with parameter $(1-\frac{c}{n})^s$; that is, $P(B_j = 1) = 1 - P(B_j = 0) = (1-\frac{c}{n})^s$. Using the fact that these random variables are independent and identically distributed, we have

$$E e^{\lambda\tilde Y_s} = \prod_{j=1}^{n-1} E e^{\lambda B_j} = \Big((1-\tfrac{c}{n})^s e^\lambda + 1 - (1-\tfrac{c}{n})^s\Big)^{n-1}. \qquad (10.41)$$

Thus, from (10.40) and (10.41), we obtain

$$P(\tilde Y_s \ge n-s) \le e^{-\lambda(n-s)}\Big((1-\tfrac{c}{n})^s e^\lambda + 1 - (1-\tfrac{c}{n})^s\Big)^{n-1}. \qquad (10.42)$$


We now substitute $n = Ms$ in (10.42) to obtain

$$P(\tilde Y_s \ge n-s) \le e^{-\lambda s(M-1)}\Big((1-\tfrac{c}{Ms})^s(e^\lambda - 1) + 1\Big)^{Ms-1} \le e^{-\lambda s(M-1)}\Big((1-\tfrac{c}{Ms})^s(e^\lambda - 1) + 1\Big)^{Ms} = e^{s\big[-\lambda(M-1) + M\log\big((1-\frac{c}{Ms})^s(e^\lambda-1)+1\big)\big]},$$

for all $\lambda > 0$.  (10.43)

We will show that for an appropriate choice of $\lambda > 0$, the expression in the square brackets above is negative and bounded away from 0 for all $s \ge 1$ and sufficiently large $M$. Let

$$f_{s,M}(\lambda) := -\lambda(M-1) + M\log\Big((1-\tfrac{c}{Ms})^s(e^\lambda - 1) + 1\Big).$$

Then $f_{s,M}(0) = 0$ and

$$f'_{s,M}(\lambda) = -(M-1) + \frac{M(1-\frac{c}{Ms})^s e^\lambda}{(1-\frac{c}{Ms})^s(e^\lambda - 1) + 1}. \qquad (10.44)$$



For any fixed $\lambda$, defining $g(y) = \frac{y e^\lambda}{y(e^\lambda - 1) + 1}$, for $y > 0$, it is easy to check that $g'(y) > 0$; therefore, $g$ is increasing. The last term on the right-hand side of (10.44) is $M g(y)$, with $y = (1-\frac{c}{Ms})^s$. Since $1 - x \le e^{-x}$, for $x \ge 0$, we have $(1-\frac{c}{Ms})^s \le e^{-\frac{c}{M}}$, if $n = Ms \ge c$, and thus the last term on the right-hand side of (10.44) is bounded from above by $M g(e^{-\frac{c}{M}})$, independent of $s$, for $s \ge \frac{c}{M}$. Thus, from (10.44), we have

$$f'_{s,M}(\lambda) \le -(M-1) + \frac{M e^{-\frac{c}{M}} e^\lambda}{e^{-\frac{c}{M}}(e^\lambda - 1) + 1} = -M + 1 + \frac{M e^\lambda}{e^\lambda - 1 + e^{\frac{c}{M}}} = 1 + M\,\frac{1 - e^{\frac{c}{M}}}{e^\lambda - 1 + e^{\frac{c}{M}}}, \quad \text{for all } s \ge \frac{c}{M}.$$

Since $\lim_{M\to\infty} M\,\frac{1 - e^{\frac{c}{M}}}{e^\lambda - 1 + e^{\frac{c}{M}}} = -c e^{-\lambda}$, uniformly over $\lambda \in [0,1]$, and since $c > 1$, it follows that there exist a $\lambda_0 > 0$ and an $M_0$ such that if $\lambda \in [0,\lambda_0]$ and $M \ge M_0$, then $f'_{s,M}(\lambda) \le \frac{1-c}{2}$, for all $s \ge 1$. It then follows that $f_{s,M}(\lambda_0) \le -\frac{\lambda_0(c-1)}{2}$, for all $M \ge M_0$ and $s \ge 1$. Choosing $\lambda = \lambda_0$ in (10.43) and using this last inequality for $f_{s,M}(\lambda_0)$, we conclude that

$$P(\tilde Y_s \ge n-s) \le e^{-\frac{\lambda_0(c-1)}{2}s}, \quad \text{for } n \ge M_0 s,\ s \ge 1. \qquad (10.45)$$


From (10.38), (10.39), and (10.45), we obtain the estimate

$$P(t < T < \epsilon n) \le \sum_{s > t} e^{-\frac{\lambda_0(c-1)}{2}s}, \quad \text{for } \epsilon \le \frac{1}{M_0}. \qquad (10.46)$$

Combining (10.31), (10.36), (10.37), and (10.46), we conclude that for all sufficiently small $\epsilon > 0$, there exist $t_\epsilon > 0$ and $n_\epsilon > 0$ such that

$$\alpha - \epsilon \le P(T \le t_\epsilon) \le \alpha + \epsilon;$$
$$P\big(T > t_\epsilon,\ T \notin ((1-\alpha-\epsilon)n, (1-\alpha+\epsilon)n)\big) \le \epsilon;$$
$$1 - \alpha - 2\epsilon \le P\big(T \in ((1-\alpha-\epsilon)n, (1-\alpha+\epsilon)n)\big) \le 1 - \alpha + \epsilon, \qquad (10.47)$$

for all $n \ge n_\epsilon$. (The third set of inequalities above is a consequence of the first two sets of inequalities.)

We recall that the above estimates have been obtained when $p = \frac{c}{n}$, with $c > 1$, and where $1-\alpha = 1-\alpha(c)$ is the unique root $z \in (0,1)$ of the equation $1 - e^{-cz} - z = 0$. The reader can check that the above estimates hold uniformly for $c \in [c_1, c_2]$, for any $1 < c_1 < c_2$. Thus, consider as before a fixed $c > 1$ and $\alpha = \alpha(c)$, and let $\delta > 0$ satisfy $c - \delta > 1$. For $c' \in [c-\delta, c]$, let $\alpha' := \alpha(c')$. Then for all $\epsilon > 0$, there exist a $t_\epsilon > 0$ and an $n_\epsilon > 0$ such that for all $n \ge n_\epsilon$ and all $c' \in [c-\delta, c]$, one has for the graph $G(n, \frac{c'}{n})$,

$$\alpha' - \epsilon \le P(T \le t_\epsilon) \le \alpha' + \epsilon;$$
$$P\big(T > t_\epsilon,\ T \notin ((1-\alpha'-\epsilon)n, (1-\alpha'+\epsilon)n)\big) \le \epsilon;$$
$$1 - \alpha' - 2\epsilon \le P\big(T \in ((1-\alpha'-\epsilon)n, (1-\alpha'+\epsilon)n)\big) \le 1 - \alpha' + \epsilon. \qquad (10.48)$$

Return now to our graph $G(n, \frac{c}{n})$, with $n$ considerably larger than the $n_\epsilon$ in (10.48). (We will quantify "considerably larger" a bit later on.) Recall that we started out by choosing arbitrarily some vertex $x$ in the graph $G(n, \frac{c}{n})$, and then applied our algorithm, obtaining $T$, which is the size of the connected component containing $x$. Call this the first step in a "game." If it results in $T \le t_\epsilon$, say that a "draw" occurred on the first step. If it results in $(1-\alpha-\epsilon)n < T < (1-\alpha+\epsilon)n$, say that a "win" occurred on the first step. Otherwise, say that a "loss" occurred on the first step. If a win or a loss occurs on this first step, we stop the procedure and say that the game ended in a win or loss, respectively. If a draw occurs, then consider the remaining $n-T$ vertices that are not in the connected component containing $x$, and


consider the corresponding edges. This gives a graph of size $n' = n - T$. Note that by the definition of the algorithm, there is no pair of points in this new graph that has already been checked by the algorithm. Therefore, the conditional edge probabilities for this new graph, conditioned on having implemented the algorithm, are as before, namely $\frac{c}{n}$, independently for each edge. This edge probability can be written as $p_{n'} = \frac{c'}{n'}$, where $c' = \frac{n-T}{n}c$. Now $T \le t_\epsilon$. Thus, if $n \ge n_\epsilon$ is sufficiently large, then $c' \in [c-\delta, c]$ and $n' = n - T \ge n_\epsilon$, so the estimates (10.48) (with $n$ replaced by $n'$) will hold for this new graph, which has $n'$ vertices and edge probabilities $p_{n'} = \frac{c'}{n'}$. Choose an arbitrary vertex $x_1$ from this new graph and repeat the above algorithm on the new graph. Let $T_1$ denote the random variable $T$ for this second step. If a win or a loss occurs on the second step of the game, then we stop the game and say that the game ended in a win or a loss, respectively. (Of course, here we define win, loss, and draw in terms of $T_1$, $n'$, and $\alpha'$ instead of $T$, $n$, and $\alpha$. However, the same $t_\epsilon$ is used.) If a draw occurs on this second step, then we consider the $n' - T_1 = n - T - T_1$ vertices that are neither in the connected component of $x$ nor of $x_1$. We continue like this for a maximum of $M$ steps, where $M$ is chosen sufficiently large to satisfy $\big(\alpha(c-\delta) + \epsilon\big)^M < \epsilon$. (We work with $\epsilon > 0$ sufficiently small so that $\alpha(c-\delta) + \epsilon < 1$.) The reason for this choice of $M$ will become clear below. If after $M$ steps, a win or a loss has not occurred, then we declare that the game has ended in a draw. Note that the smallest possible graph size that can ever be used in this game is $n - t_\epsilon(M-1)$. The smallest modified value of $c$ that can ever be used is $\frac{n - t_\epsilon(M-1)}{n}c$. We can now quantify what we meant when we said at the outset of this paragraph that we are choosing $n$ "considerably larger" than $n_\epsilon$. We choose $n$ sufficiently large so that $n - t_\epsilon(M-1) \ge n_\epsilon$ and so that $\frac{n - t_\epsilon(M-1)}{n}c \ge c - \delta$.
Thus, the estimates in (10.48) are valid for all of the steps of the game. It is easy to check that $\alpha = \alpha(c)$ is decreasing for $c > 1$. Thus, if the game ends in a win, then there is a connected component of size between $(1-\alpha(c-\delta)-\epsilon)n$ and $(1-\alpha(c)+\epsilon)n$. What is the probability that the game ends in a win? Let $W$ denote the event that the game ends in a win, let $D$ denote the event that it ends in a draw, and let $L$ denote the event that it ends in a loss. We have

$$P(W) = 1 - P(L) - P(D). \qquad (10.49)$$

The game ends in a draw if there was a draw on $M$ consecutive steps. Since on any given step the probability of a draw is no greater than $\alpha(c-\delta) + \epsilon$, the probability of obtaining $M$ consecutive draws is no greater than $\big(\alpha(c-\delta) + \epsilon\big)^M$; so by the choice of $M$, we have

$$P(D) \le \big(\alpha(c-\delta) + \epsilon\big)^M < \epsilon. \qquad (10.50)$$

Let $D^c$ denote the complement of $D$; that is, $D^c = W \cup L$. Obviously, we have $L = L \cap D^c$. Then we have

$$P(L) = P(L \cap D^c) = P(D^c)P(L \mid D^c) \le P(L \mid D^c). \qquad (10.51)$$


If one played a game with three possible outcomes on each step (win, loss, or draw) with respective nonzero probabilities $p'$, $q'$, and $r'$, and the outcomes of all the steps were independent of one another, and one continued to play step after step until either a win or a loss occurred, then the probability of a win would be $\frac{p'}{p'+q'}$ and the probability of a loss would be $\frac{q'}{p'+q'}$ (Exercise 10.9). Conditioned on $D^c$, our game essentially reduces to this game. However, the probabilities of win, loss, and draw are not exactly fixed, but can vary a little according to (10.48). Thus, we can conclude that

$$P(L \mid D^c) \le \frac{\epsilon}{1 - \alpha(c-\delta) - 2\epsilon + \epsilon} = \frac{\epsilon}{1 - \alpha(c-\delta) - \epsilon}. \qquad (10.52)$$

From (10.49)–(10.52) we obtain

$$P(W) \ge 1 - \epsilon - \frac{\epsilon}{1 - \alpha(c-\delta) - \epsilon}. \qquad (10.53)$$

In conclusion, we have demonstrated the following. Consider any $c > 1$ and any $\delta > 0$ such that $c - \delta > 1$. Then for each sufficiently small $\epsilon > 0$ and sufficiently large $n$ depending on $\epsilon$, with probability at least $1 - \epsilon - \frac{\epsilon}{1-\alpha(c-\delta)-\epsilon}$ there will exist a connected component of $G(n, \frac{c}{n})$ of size between $(1-\alpha(c-\delta)-\epsilon)n$ and $(1-\alpha(c)+\epsilon)n$.

If the connected component above, which has been shown to exist with probability close to 1 and which is of size around $(1-\alpha)n$, is in fact with probability close to 1 the largest connected component, then the above estimates prove (10.1), since by (10.25) the $\beta$ defined in the statement of the theorem is in fact $1-\alpha$. Thus, to complete the proof of (10.1) and (10.2), it suffices to prove that with probability approaching 1 as $n\to\infty$, every other component of $G(n, \frac{c}{n})$ is of size $O(\log n)$, as $n\to\infty$. In fact, we will prove here the weaker result that with probability approaching 1 as $n\to\infty$, every other component is of size $o(n)$ as $n\to\infty$. In Exercise 10.10, the reader is guided through a proof that every other component is of size $O(\log n)$.

To prove that every other component is of size $o(n)$ with probability approaching 1 as $n\to\infty$, assume to the contrary. Then for an unbounded sequence of $n$'s, the following holds. As above, with probability at least $1 - \epsilon - \frac{\epsilon}{1-\alpha(c-\delta)-\epsilon}$, there will exist a connected component of $G(n, \frac{c}{n})$ of size between $(1-\alpha(c-\delta)-\epsilon)n$ and $(1-\alpha(c)+\epsilon)n$, and by our assumption, for some $\rho > 0$, with probability at least $\rho$, there will be another connected component of size at least $\rho n$. We may take $\rho < 1-\alpha(c-\delta)-\epsilon$. But if this were true, then at the first step of our algorithm, when we randomly selected a vertex $x$, the probability that it would be in a connected component of size at least $\rho n$ would be at least

$$\Big(1 - \epsilon - \frac{\epsilon}{1-\alpha(c-\delta)-\epsilon}\Big)\frac{(1-\alpha(c-\delta)-\epsilon)n}{n} + \rho\,\frac{\rho n}{n}.$$

For $\epsilon$ and $\delta$ sufficiently small, this number will be larger than $1 - \alpha(c) + \frac{\rho^2}{2}$, in which case the algorithm would have to give $P(T \le t_\epsilon) < \alpha(c) - \frac{\rho^2}{2}$. However, for $\epsilon > 0$ sufficiently small, this contradicts the first line of (10.47). $\square$
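The conclusion of Theorem 10.2 can likewise be probed by simulation: for $c > 1$, the largest component of $G_n(c/n)$ occupies a fraction of the vertices close to $\beta(c) = 1 - \alpha(c)$. A sketch (our code; the graph size and seed are illustrative):

```python
import math
import random
from collections import deque

def giant_fraction(n, c, seed=1):
    # Sample G_n(c/n); return (size of largest component) / n, found by BFS.
    rng = random.Random(seed)
    p = c / n
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen = [False] * n
    best = 0
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        size, queue = 1, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    size += 1
                    queue.append(v)
        best = max(best, size)
    return best / n

def beta(c, iters=200):
    # beta(c) = 1 - alpha(c), where alpha(c) is the smallest root of
    # e^{c(s-1)} = s, computed by iterating the Poisson(c) PGF.
    eta = 0.0
    for _ in range(iters):
        eta = math.exp(c * (eta - 1))
    return 1 - eta

c, n = 2.0, 2000
g = giant_fraction(n, c)
print(g, beta(c))  # the simulated fraction should be close to beta(c)
```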

Exercise 10.1. This exercise refers to Remark 3 after Theorem 10.2. Prove that for any $\epsilon > 0$ and large $n$, the number of edges of $G_n(\frac{c}{n})$ is equal to $\frac{1}{2}cn + O(n^{\frac{1}{2}+\epsilon})$ with high probability. Show directly that $\beta(c) \le \frac{c}{2}$, for $1 < c < 2$, where $\beta(c)$ is as in Theorem 10.2.

Exercise 10.2. Let $D_n$ denote the number of disconnected vertices in the Erdős–Rényi graph $G_n(p_n)$. For this exercise, it will be convenient to represent $D_n$ as a sum of indicator random variables. Let $D_{n,i}$ be equal to 1 if the vertex $i$ is disconnected and equal to 0 otherwise. Then $D_n = \sum_{i=1}^n D_{n,i}$.
(a) Calculate $ED_n$.
(b) Calculate $ED_n^2$. (Hint: Write $ED_n^2 = E\big(\sum_{i=1}^n D_{n,i}\big)\big(\sum_{j=1}^n D_{n,j}\big)$.)

Exercise 10.3. In this exercise, you are guided through a proof of the result noted in Remark 2 after Theorem 10.2, namely that: if $p_n = \frac{\log n + c_n}{n}$, then as $n\to\infty$, the probability that the Erdős–Rényi graph $G_n(p_n)$ possesses at least one disconnected vertex approaches 0 if $\lim_{n\to\infty} c_n = \infty$, while for any $M$, the probability that it possesses at least $M$ disconnected vertices approaches 1 if $\lim_{n\to\infty} c_n = -\infty$.
Let $D_n$ be as in Exercise 10.2, with $p_n = \frac{\log n + c_n}{n}$.
(a) Use Exercise 10.2(a) to show that $\lim_{n\to\infty} ED_n$ equals 0 if $\lim_{n\to\infty} c_n = \infty$ and equals $\infty$ if $\lim_{n\to\infty} c_n = -\infty$. (Hint: Consider $\log ED_n$ and note that by Taylor's remainder theorem, $\log(1-x) = -x - \frac{1}{(1-x^*)^2}\frac{x^2}{2}$, for $0 < x < 1$, where $x^* = x^*(x)$ satisfies $0 < x^* < x$.)
(b) Use (a) to show that if $\lim_{n\to\infty} c_n = \infty$, then $\lim_{n\to\infty} P(D_n = 0) = 1$.
(c) Use Exercise 10.2(b) to calculate $ED_n^2$.
(d) Show that if $\lim_{n\to\infty} c_n = -\infty$, then the variance $\sigma^2(D_n)$ satisfies $\sigma^2(D_n) = o\big((ED_n)^2\big)$. (Hint: Recall that $\sigma^2(D_n) = ED_n^2 - (ED_n)^2$.)
(e) Use Chebyshev's inequality with (a) and (d) to conclude that if $\lim_{n\to\infty} c_n = -\infty$, then for any $M$, $\lim_{n\to\infty} P(D_n \ge M) = 1$.

Exercise 10.4. Recall from Chap. 5 that the probability generating function $P_X(s)$ of a nonnegative random variable $X$ taking integral values is defined by

$$P_X(s) = E s^X = \sum_{i=0}^\infty s^i P(X = i).$$

The probability generating function of a random variable $X$ uniquely characterizes its distribution, because $\frac{P_X^{(i)}(0)}{i!} = P(X = i)$.


(a) Let $X \sim \mathrm{Bin}(n,p)$. Show that $P_X(s) = (ps + 1 - p)^n$.
(b) Let $Z \sim \mathrm{Bin}(n,p)$, and let $Y \sim \mathrm{Bin}(Z, p')$, by which is meant that conditioned on $Z = m$, the random variable $Y$ is distributed according to $\mathrm{Bin}(m, p')$. Calculate $P_Y(s)$ by writing

$$P_Y(s) = E s^Y = \sum_{m=0}^n E(s^Y \mid Z = m)P(Z = m),$$

and conclude that $Y \sim \mathrm{Bin}(n, pp')$. Conclude from this that (10.7) and (10.9) imply (10.6).

Exercise 10.5. Let $f(\lambda) = e^{-\lambda\theta_0}(\theta e^\lambda + 1 - \theta)$, with $0 < \theta < \theta_0 < 1$. Show that $\inf_{\lambda \ge 0} f(\lambda)$ is attained at some $\lambda_0 > 0$ and that $f(\lambda_0) = \big(\frac{\theta}{\theta_0}\big)^{\theta_0}\big(\frac{1-\theta}{1-\theta_0}\big)^{1-\theta_0} \in (0,1)$.

Exercise 10.6. If $X \sim \mathrm{Bin}(n,p)$, then $X$ can be represented as $X = \sum_{i=1}^n B_i$, where $\{B_i\}_{i=1}^n$ are independent and identically distributed random variables distributed according to the Bernoulli distribution with parameter $p$; that is, $P(B_i = 1) = 1 - P(B_i = 0) = p$.
(a) Use the above representation to prove that if $X_i \sim \mathrm{Bin}(n_i, p)$, $i = 1,2$, and $n_1 > n_2$, then

$$P(X_1 \ge k) \ge P(X_2 \ge k), \text{ for all integers } k \ge 0, \qquad (10.54)$$

and that if $X_i \sim \mathrm{Bin}(n, p_i)$, $i = 1,2$, and $p_1 > p_2$, then

$$P(X_1 \ge k) \ge P(X_2 \ge k), \text{ for all integers } k \ge 0. \qquad (10.55)$$

(Hint: For (10.54), represent $X_1$ using the random variables $\{B_i\}_{i=1}^{n_1}$ and represent $X_2$ using the first $n_2$ of these very same random variables. For (10.55), let $\{U_i\}_{i=1}^n$ be independent and identically distributed random variables, distributed according to the uniform distribution on $[0,1]$; that is, $P(a \le U_i \le b) = b - a$, for $0 \le a < b \le 1$. Define random variables $\{B_i^{(1)}\}_{i=1}^n$ and $\{B_i^{(2)}\}_{i=1}^n$ by the formulas

$$B_i^{(1)} = \begin{cases} 1, & \text{if } U_i \le p_1; \\ 0, & \text{if } U_i > p_1, \end{cases} \qquad B_i^{(2)} = \begin{cases} 1, & \text{if } U_i \le p_2; \\ 0, & \text{if } U_i > p_2. \end{cases}$$

Now represent $X_1$ and $X_2$ through $\{B_i^{(1)}\}_{i=1}^n$ and $\{B_i^{(2)}\}_{i=1}^n$, respectively. This method is called coupling.)
(b) Prove (10.54) and (10.55) directly from the fact that if $X \sim \mathrm{Bin}(n,p)$, then for $0 \le k \le n$, one has $P(X \ge k) = \sum_{j=k}^n \binom{n}{j} p^j (1-p)^{n-j}$.


Exercise 10.7. If $\{X_t\}_{t=0}^\infty$ is a Galton–Watson branching process of the type described at the beginning of Sect. 10.5, show that $EX_{t+1} = \mu EX_t$, where $\mu$ is the mean number of offspring of a particle. (Hint: Use induction and conditional expectation.)

Exercise 10.8. Prove (10.37) by the method used to prove (10.36).

Exercise 10.9. Prove that if one plays a game with three possible outcomes on each step (win, loss, or draw) with respective nonzero probabilities $p'$, $q'$, and $r'$, and the outcomes of all the steps are independent of one another, and one continues to play step after step until either a win or a loss occurs, then the probability of a win is $\frac{p'}{p'+q'}$ and the probability of a loss is $\frac{q'}{p'+q'}$.

Exercise 10.10. In the proof of Theorem 10.2, after the algorithm for finding the connected component of a vertex was implemented a maximum of $M$ times, and a component with size around $(1-\alpha)n$ was found with probability close to 1, the final paragraph of the proof of the theorem gave a proof that with probability approaching 1 as $n\to\infty$, all other components are of size $o(n)$ as $n\to\infty$. To prove the stronger result, as in the statement of Theorem 10.2, that with probability approaching 1 as $n\to\infty$ all other components are of size $O(\log n)$, consider starting the algorithm all over again after the component of size around $(1-\alpha)n$ has been discovered. The number of vertices left is around $n' = \alpha n$ and the edge probability is still $\frac{c}{n}$, which we can write as $\frac{C}{n'}$ with $C \approx c\alpha$. If $C < 1$, then the method of proof of Theorem 10.1 shows that with probability approaching 1 as $n\to\infty$ all components are of size $O(\log n') = O(\log n)$ as $n\to\infty$. To show that $C < 1$, it suffices to show that $c\alpha < 1$. To prove this, use the following facts: (1) $xe^{-x}$ increases in $[0,1)$ and decreases in $(1,\infty)$, so for $c > 1$, there exists a unique $d \in (0,1)$ such that $de^{-d} = ce^{-c}$; (2) $\alpha = e^{c(\alpha-1)}$.
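The claim of Exercise 10.9 is easy to check by simulation (our sketch; the particular probabilities are illustrative):

```python
import random

def play_until_decided(p_win, p_loss, rng):
    # Repeat independent steps with P(win) = p_win, P(loss) = p_loss,
    # P(draw) = 1 - p_win - p_loss, until a win or a loss occurs.
    while True:
        u = rng.random()
        if u < p_win:
            return True
        if u < p_win + p_loss:
            return False

rng = random.Random(7)
p_win, p_loss, trials = 0.2, 0.1, 20000
wins = sum(play_until_decided(p_win, p_loss, rng) for _ in range(trials))
print(wins / trials)  # close to p_win / (p_win + p_loss) = 2/3
```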

Chapter Notes

The context in which Theorems 10.1 and 10.2 were originally proven by Erdős and Rényi in 1960 [18] is a little different from the context presented here. Let $N := \binom{n}{2}$. Define $G(n, M)$, $0 \le M \le N$, to be the random graph with $n$ vertices and exactly $M$ edges, where the $M$ edges are selected uniformly at random from the $N$ possible edges. One can consider an evolving random graph $\{G(n,t)\}_{t=0}^{N}$. By definition, $G(n,0)$ is the graph on $n$ vertices with no edges. Then sequentially, given $G(n,t)$, for $0 \le t \le N-1$, one obtains the graph $G(n,t+1)$ by choosing at random from the complete graph $K_n$ one of the edges that is not in $G(n,t)$ and adjoining it to $G(n,t)$. Erdős and Rényi looked at evolving graphs of the form $G(n,t_n)$, with $t_n = [\frac{cn}{2}]$. They showed that if $c < 1$, then with probability approaching 1 as $n \to \infty$, the largest component of $G(n,t_n)$ is of size $O(\log n)$, while if $c > 1$, then with probability approaching 1 as $n \to \infty$ there is one component of size approximately $\beta(c) \cdot n$, and all other components are of size $O(\log n)$. To see how
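The phase transition described here can be observed directly by sampling $G(n, M)$ with $M = [\frac{cn}{2}]$ edges for $c$ on either side of 1; the union-find sketch below is illustrative (function name is ours):

```python
import random
from collections import Counter

def largest_component(n, m, seed):
    """Largest component size of G(n, M): n vertices and m uniformly
    chosen distinct edges, merged with a small union-find."""
    rng = random.Random(seed)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    edges = set()
    while len(edges) < m:
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            edges.add((min(u, v), max(u, v)))
    for u, v in edges:
        parent[find(u)] = find(v)
    return max(Counter(find(i) for i in range(n)).values())

n = 4000
sub = largest_component(n, int(0.5 * n / 2), seed=7)   # c = 0.5 < 1
sup = largest_component(n, int(1.5 * n / 2), seed=7)   # c = 1.5 > 1
# sub stays logarithmically small; sup contains a component of order n.
```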

10.6 Proof of Theorem 10.2


this connects up to the version given in this chapter, note that the expected number of edges in the graph $G_n(\frac{c}{n})$ is $\frac{c}{n}\binom{n}{2} = \frac{c(n-1)}{2}$. A detailed study of the borderline case, when $t_n \sim \frac{n}{2}$ as $n \to \infty$, was undertaken by Bollobás [8]. Our proofs of Theorems 10.1 and 10.2 are along the lines of the method sketched briefly in the book of Alon and Spencer [2]. We are not aware in the literature of a complete proof of Theorems 10.1 and 10.2 with all the details.

The large deviations bound in Proposition 10.2 is actually tight. That is, in part (i), where $\mu_0 > \mu$, for any $\epsilon > 0$, one has for sufficiently large $n$, $P(S_n \ge \mu_0 n) \ge e^{-(I(\mu_0,\mu)+\epsilon)n}$. Thus, in particular, $\lim_{n\to\infty} \frac{1}{n}\log P(S_n \ge \mu_0 n) = -I(\mu_0,\mu)$. Similarly, in part (ii), where $\mu_0 < \mu$, $\lim_{n\to\infty} \frac{1}{n}\log P(S_n \le \mu_0 n) = -I(\mu_0,\mu)$.

Consider two probability measures, $\mu$ and $\mu_0$, defined on a finite or countably infinite set $A$. Then $H(\mu_0;\mu) := \sum_{x \in A} \mu_0(x)\log\frac{\mu_0(x)}{\mu(x)}$ is called the relative entropy of $\mu_0$ with respect to $\mu$. It plays a fundamental role in the theory of large deviations. In the case that $A$ is a two-point set, say $A = \{0,1\}$, and $\mu(\{1\}) = 1 - \mu(\{0\}) = \mu$ and $\mu_0(\{1\}) = 1 - \mu_0(\{0\}) = \mu_0$, one has $H(\mu_0;\mu) = I(\mu_0,\mu)$. For more on large deviations, see the book by Dembo and Zeitouni [13]. For some basic results on the Galton–Watson branching process, using probabilistic methods, see the advanced probability textbook of Durrett [16]. Two standard texts on branching processes are the books of Harris [24] and of Athreya and Ney [7].
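The relative entropy of two finitely supported measures is immediate to compute; a minimal sketch (function names are ours), including the two-point special case that recovers the Bernoulli rate function:

```python
import math

def relative_entropy(mu0, mu):
    """H(mu0; mu) = sum_x mu0(x) log(mu0(x)/mu(x)) for distributions
    given as dicts over a common finite set; 0 log 0 is taken as 0."""
    return sum(p0 * math.log(p0 / mu[x]) for x, p0 in mu0.items() if p0 > 0)

def I_two_point(m0, m):
    """Rate function for the mean of Bernoulli(m) variables: the
    relative entropy of the two-point measure with mean m0 with
    respect to the one with mean m."""
    return relative_entropy({1: m0, 0: 1 - m0}, {1: m, 0: 1 - m})
```

As expected for an entropy, $H(\mu_0;\mu) \ge 0$, with equality exactly when $\mu_0 = \mu$.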

Appendix A

A Quick Primer on Discrete Probability

In this appendix, we develop some basic ideas in discrete probability theory. We note from the outset that some of the definitions given here are no longer correct in the setting of continuous probability theory.

Let $\Omega$ be a finite or countably infinite set, and let $2^\Omega$ denote the set of subsets of $\Omega$. An element $A \in 2^\Omega$ is simply a subset of $\Omega$, but in the language of probability it is called an event. A probability measure on $\Omega$ is a function $P: 2^\Omega \to [0,1]$ satisfying $P(\emptyset) = 0$, $P(\Omega) = 1$, and which is $\sigma$-additive; that is, for any $1 \le N \le \infty$, one has $P(\cup_{n=1}^{N} A_n) = \sum_{n=1}^{N} P(A_n)$, whenever the events $\{A_n\}_{n=1}^{N}$ are disjoint. From this $\sigma$-additivity, it follows that $P$ is uniquely determined by $\{P(\{x\})\}_{x\in\Omega}$. Using the $\sigma$-additivity on disjoint events, it is not hard to prove that $P$ is $\sigma$-sub-additive on arbitrary events; that is, $P(\cup_{n=1}^{N} A_n) \le \sum_{n=1}^{N} P(A_n)$, for arbitrary events $\{A_n\}_{n=1}^{N}$. See Exercise A.1. The pair $(\Omega, P)$ is called a probability space.

If $C$ and $D$ are events and $P(C) > 0$, then the conditional probability of $D$ given $C$ is denoted by $P(D|C)$ and is defined by

$$P(D|C) = \frac{P(C \cap D)}{P(C)}.$$

Note that $P(\cdot\,|C)$ is itself a probability measure on $\Omega$. Two events $C$ and $D$ are called independent if $P(C \cap D) = P(C)P(D)$. Clearly then, $C$ and $D$ are independent if either $P(C) = 0$ or $P(D) = 0$. If $P(C), P(D) > 0$, it is easy to check that independence is equivalent to either of the following two equalities: $P(D|C) = P(D)$ or $P(C|D) = P(C)$. Consider a collection $\{C_n\}_{n=1}^{N}$ of events, with $1 \le N \le \infty$. This collection of events is said to be independent if for any finite subset $\{C_{n_j}\}_{j=1}^{m}$ of the events, one has $P(\cap_{j=1}^{m} C_{n_j}) = \prod_{j=1}^{m} P(C_{n_j})$.

Let $(\Omega, P)$ be a probability space. A function $X: \Omega \to \mathbb{R}$ is called a (discrete, real-valued) random variable. For $B \subset \mathbb{R}$, we write $\{X \in B\}$ to denote the event $X^{-1}(B) = \{\omega \in \Omega : X(\omega) \in B\}$, the inverse image of $B$. When considering the probability of the event $\{X \in B\}$ or the event $\{X = x\}$, we write $P(X \in B)$ or $P(X = x)$, instead of $P(\{X \in B\})$ or $P(\{X = x\})$. The distribution of the random

R.G. Pinsky, Problems from the Discrete to the Continuous, Universitext, DOI 10.1007/978-3-319-07965-3, © Springer International Publishing Switzerland 2014
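The definitions above can be made concrete on a small finite $\Omega$; a minimal sketch using exact rational arithmetic (the die example and function names are ours):

```python
from fractions import Fraction

# A probability measure on a finite Omega is determined by P({x});
# events are subsets, and P(D | C) = P(C and D) / P(C).
omega = {1, 2, 3, 4, 5, 6}
weight = {x: Fraction(1, 6) for x in omega}   # fair die

def prob(event):
    return sum(weight[x] for x in event)

def cond_prob(d, c):
    return prob(c & d) / prob(c)

even = {2, 4, 6}
at_least_4 = {4, 5, 6}
```

Here `cond_prob(even, at_least_4)` equals $P(\{4,6\})/P(\{4,5,6\}) = 2/3$, and `prob` is automatically $\sigma$-additive because it sums point masses.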



A Primer on Discrete Probability

variable $X$ is the probability measure $\mu_X$ on $\mathbb{R}$ defined by $\mu_X(B) = P(X \in B)$, for $B \subset \mathbb{R}$. The function $p_X(x) := P(X = x)$ is called the probability function or the discrete density function for $X$. The expected value or expectation $EX$ of a random variable $X$ is defined by

$$EX = \sum_{x\in\mathbb{R}} x\, P(X = x) = \sum_{x\in\mathbb{R}} x\, p_X(x), \quad \text{if } \sum_{x\in\mathbb{R}} |x|\, P(X = x) < \infty.$$

Note that the set of $x \in \mathbb{R}$ for which $P(X = x) > 0$ is either finite or countably infinite; thus, these summations are well defined. We frequently denote $EX$ by $\mu$. If $P(X \ge 0) = 1$ and the condition above in the definition of $EX$ does not hold, then we write $EX = \infty$. In the sequel, when we say that the expectation of $X$ "exists," we mean that $\sum_{x\in\mathbb{R}} |x|\, P(X = x) < \infty$.

Given a function $\psi: \mathbb{R} \to \mathbb{R}$ and a random variable $X$, we can define a new random variable $Y = \psi(X)$. One can calculate $EY$ according to the definition of expectation above or in the following equivalent way:

$$EY = \sum_{x\in\mathbb{R}} \psi(x)\, P(X = x), \quad \text{if } \sum_{x\in\mathbb{R}} |\psi(x)|\, P(X = x) < \infty.$$
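The equivalence of the two ways of computing $EY$ for $Y = \psi(X)$ can be checked exactly on a small example (the distribution below is an arbitrary illustration):

```python
from fractions import Fraction
from collections import defaultdict

# Probability function p_X for a random variable taking values -1, 0, 2
p_X = {-1: Fraction(1, 4), 0: Fraction(1, 4), 2: Fraction(1, 2)}
psi = lambda x: x * x

# E psi(X) computed directly from p_X ...
e_direct = sum(psi(x) * p for x, p in p_X.items())

# ... and via the probability function of Y = psi(X)
p_Y = defaultdict(Fraction)
for x, p in p_X.items():
    p_Y[psi(x)] += p
e_via_Y = sum(y * p for y, p in p_Y.items())
```

Both computations give $EY = 1\cdot\frac14 + 0\cdot\frac14 + 4\cdot\frac12 = \frac94$.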

For $n \in \mathbb{N}$, the $n$th moment of $X$ is defined by

$$EX^n = \sum_{x\in\mathbb{R}} x^n\, P(X = x), \quad \text{if } \sum_{x\in\mathbb{R}} |x|^n\, P(X = x) < \infty.$$

If $\mu = EX$ exists, then one defines the variance of $X$, denoted by $\sigma^2$ or $\sigma^2(X)$ or $\text{Var}(X)$, by

$$\sigma^2 = E(X - \mu)^2 = \sum_{x\in\mathbb{R}} (x - \mu)^2\, P(X = x).$$

Of course, it is possible to have $\sigma^2 = \infty$. It is easy to check that

$$\sigma^2(X) = EX^2 - \mu^2. \tag{A.1}$$

Chebyshev's inequality is a fundamental inequality involving the expected value and the variance.

Proposition A.1 (Chebyshev's Inequality). Let $X$ be a random variable with expectation $\mu$ and finite variance $\sigma^2$. Then for all $\lambda > 0$,

$$P(|X - \mu| \ge \lambda) \le \frac{\sigma^2}{\lambda^2}.$$



Proof.

$$P(|X - \mu| \ge \lambda) = \sum_{x\in\mathbb{R}:\, |x-\mu|\ge\lambda} P(X = x) \le \sum_{x\in\mathbb{R}:\, |x-\mu|\ge\lambda} \frac{(x-\mu)^2}{\lambda^2}\, P(X = x) \le \sum_{x\in\mathbb{R}} \frac{(x-\mu)^2}{\lambda^2}\, P(X = x) = \frac{\sigma^2}{\lambda^2}. \qquad\square$$
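Chebyshev's inequality can be verified exactly for a concrete three-point distribution (chosen here for illustration); for this particular distribution and $\lambda = 1$ the bound is attained:

```python
from fractions import Fraction

# Exact check of Chebyshev's inequality for one discrete distribution.
p = {0: Fraction(1, 8), 1: Fraction(3, 4), 2: Fraction(1, 8)}
mu = sum(x * q for x, q in p.items())                # = 1
var = sum((x - mu) ** 2 * q for x, q in p.items())   # = 1/4
lam = Fraction(1)
tail = sum(q for x, q in p.items() if abs(x - mu) >= lam)   # = 1/4
# tail <= var / lam**2, with equality for this distribution.
```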

Let $\{X_j\}_{j=1}^{n}$ be a finite collection of random variables on a probability space $(\Omega, P)$. We call $X = (X_1, \ldots, X_n)$ a random vector. The joint probability function of these random variables, or equivalently, the probability function of the random vector, is given by

$$p_X(x) = p_X(x_1, \ldots, x_n) := P(X_1 = x_1, \ldots, X_n = x_n) = P(X = x), \quad x_i \in \mathbb{R},\ i = 1, \ldots, n,$$

where $x = (x_1, \ldots, x_n)$. It follows that $\sum_{x_j \in \mathbb{R},\ j \in [n] \setminus \{i\}} p_X(x) = P(X_i = x_i)$. For any function $H: \mathbb{R}^n \to \mathbb{R}$, we define

$$EH(X) = \sum_{x\in\mathbb{R}^n} H(x)\, p_X(x), \quad \text{if } \sum_{x\in\mathbb{R}^n} |H(x)|\, p_X(x) < \infty.$$

In particular then, if $EX_j$ exists, it can be written as $EX_j = \sum_{x\in\mathbb{R}^n} x_j\, p_X(x)$. Similarly, if $EX_k$ exists, for all $k$, then we have

$$E\sum_{k=1}^{n} c_k X_k = \sum_{x\in\mathbb{R}^n} \Big(\sum_{k=1}^{n} c_k x_k\Big) p_X(x) = \sum_{k=1}^{n} c_k \Big(\sum_{x\in\mathbb{R}^n} x_k\, p_X(x)\Big).$$

It follows from this that the expectation is linear; that is, if $EX_k$ exists for $k = 1, \ldots, n$, then

$$E\sum_{k=1}^{n} c_k X_k = \sum_{k=1}^{n} c_k\, EX_k,$$

for any real numbers $\{c_k\}_{k=1}^{n}$.

Let $\{X_j\}_{j=1}^{N}$ be a collection of random variables on a probability space $(\Omega, P)$, where $1 \le N \le \infty$. The random variables are called independent if for every finite $n \le N$, one has



$$P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) = \prod_{j=1}^{n} P(X_j = x_j), \quad \text{for all } x_j \in \mathbb{R},\ j = 1, 2, \ldots, n.$$

Let $\{f_i\}_{i=1}^{n}$ be real-valued functions with $f_i$ defined at least on the set $\{x \in \mathbb{R} : P(X_i = x) > 0\}$. Assume that $E|f_i(X_i)| < \infty$, for $i = 1, \ldots, n$. From the definition of independence it is easy to show that if $\{X_j\}_{j=1}^{n}$ are independent, then

$$E\prod_{i=1}^{n} f_i(X_i) = \prod_{i=1}^{n} Ef_i(X_i). \tag{A.2}$$

The variance is of course not linear. However, the variance of a sum of independent random variables is equal to the sum of the variances of the random variables: if $\{X_i\}_{i=1}^{n}$ are independent random variables, then

$$\sigma^2\Big(\sum_{i=1}^{n} X_i\Big) = \sum_{i=1}^{n} \sigma^2(X_i). \tag{A.3}$$

It suffices to prove (A.3) for $n = 2$ and then use induction. Let $\mu_i = EX_i$, $i = 1, 2$. We have

$$\sigma^2(X_1 + X_2) = E\big(X_1 + X_2 - E(X_1 + X_2)\big)^2 = E\big((X_1 - \mu_1) + (X_2 - \mu_2)\big)^2 = E(X_1 - \mu_1)^2 + E(X_2 - \mu_2)^2 + 2E(X_1 - \mu_1)(X_2 - \mu_2) = \sigma^2(X_1) + \sigma^2(X_2),$$

where the last equality follows because (A.2) shows that $E(X_1 - \mu_1)(X_2 - \mu_2) = E(X_1 - \mu_1)\, E(X_2 - \mu_2) = 0$.

Chebyshev's inequality and (A.3) allow for an exceedingly short proof of an important result—the weak law of large numbers for sums of independent, identically distributed (IID) random variables.

Theorem A.1. Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of independent, identically distributed random variables and assume that their common variance $\sigma^2$ is finite. Denote their common expectation by $\mu$. Let $S_n = \sum_{j=1}^{n} X_j$. Then for any $\epsilon > 0$,

$$\lim_{n\to\infty} P\Big(\Big|\frac{S_n}{n} - \mu\Big| \ge \epsilon\Big) = 0.$$

Proof. We have $ES_n = n\mu$, and since the random variables are independent and identically distributed, it follows from (A.3) that $\sigma^2(S_n) = n\sigma^2$. Now applying Chebyshev's inequality to $S_n$ with $\lambda = n\epsilon$ gives



$$P(|S_n - n\mu| \ge n\epsilon) \le \frac{n\sigma^2}{(n\epsilon)^2},$$

which proves the theorem. $\square$
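The weak law can be seen numerically: for IID fair-die rolls ($\mu = 3.5$), the probability that the sample mean deviates from $\mu$ by at least $\epsilon$ shrinks as $n$ grows. A Monte Carlo sketch (all parameters and names below are illustrative):

```python
import random

def deviation_prob(n, trials, eps, rng):
    """Empirical P(|S_n/n - mu| >= eps) for IID fair-die rolls,
    where mu = 3.5."""
    bad = 0
    for _ in range(trials):
        s = sum(rng.randint(1, 6) for _ in range(n))
        if abs(s / n - 3.5) >= eps:
            bad += 1
    return bad / trials

rng = random.Random(3)
p10 = deviation_prob(10, 2000, 0.5, rng)
p1000 = deviation_prob(1000, 2000, 0.5, rng)
# p1000 is far smaller than p10, as the weak law predicts.
```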

Remark. The weak law of large numbers is a first moment result. It holds even without the finite variance assumption, but the proof is much more involved.

The above weak law of large numbers is actually a particular case of the following weak law of large numbers.

Proposition A.2. Let $\{Y_n\}_{n=1}^{\infty}$ be random variables. Assume that

$$\sigma^2(Y_n) = o\big((EY_n)^2\big), \text{ as } n \to \infty.$$

Then for any $\epsilon > 0$,

$$\lim_{n\to\infty} P\Big(\Big|\frac{Y_n}{EY_n} - 1\Big| \ge \epsilon\Big) = 0.$$

Proof. By Chebyshev's inequality, we have

$$P(|Y_n - EY_n| \ge \epsilon |EY_n|) \le \frac{\sigma^2(Y_n)}{(\epsilon\, EY_n)^2}. \qquad\square$$

If $X$ and $Y$ are random variables on a probability space $(\Omega, P)$, and if $P(X = x) > 0$, then the conditional probability function of $Y$ given $X = x$ is defined by

$$p_{Y|X}(y|x) := P(Y = y \mid X = x) = \frac{P(X = x, Y = y)}{P(X = x)}.$$

The conditional expectation of $Y$ given $X = x$ is defined by

$$E(Y \mid X = x) = \sum_{y\in\mathbb{R}} y\, P(Y = y \mid X = x) = \sum_{y\in\mathbb{R}} y\, p_{Y|X}(y|x), \quad \text{if } \sum_{y\in\mathbb{R}} |y|\, P(Y = y \mid X = x) < \infty.$$

It is easy to verify that

$$EY = \sum_{x\in\mathbb{R}} E(Y \mid X = x)\, P(X = x),$$

where $E(Y \mid X = x)P(X = x) := 0$, if $P(X = x) = 0$.
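The identity $EY = \sum_x E(Y \mid X = x)\,P(X = x)$ can be checked exactly on a small joint distribution (the table below is an arbitrary illustration):

```python
from fractions import Fraction

# Joint probability function of (X, Y); the four pairs carry all the mass.
joint = {(0, 1): Fraction(1, 4), (0, 3): Fraction(1, 4),
         (1, 1): Fraction(1, 6), (1, 6): Fraction(1, 3)}

p_X = {}
for (x, y), p in joint.items():
    p_X[x] = p_X.get(x, Fraction(0)) + p

def cond_exp_Y(x):
    """E(Y | X = x) computed from the joint probability function."""
    return sum(y * p for (xx, y), p in joint.items() if xx == x) / p_X[x]

ey_direct = sum(y * p for (_, y), p in joint.items())
ey_via_cond = sum(cond_exp_Y(x) * p_X[x] for x in p_X)
```

Both routes give $EY = \frac{19}{6}$.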



A random variable $X$ that takes on only two values—0 and 1, with $P(X = 1) = p$ and $P(X = 0) = 1 - p$, for some $p \in [0,1]$—is called a Bernoulli random variable. One writes $X \sim \text{Ber}(p)$. It is trivial to check that $EX = p$ and $\sigma^2(X) = p(1-p)$.

Let $n \in \mathbb{N}$ and let $p \in [0,1]$. A random variable $X$ satisfying

$$P(X = j) = \binom{n}{j} p^j (1-p)^{n-j}, \quad j = 0, 1, \ldots, n,$$

is called a binomial random variable, and one writes $X \sim \text{Bin}(n,p)$. The random variable $X$ can be thought of as the number of "successes" in $n$ independent trials, where on each trial there are two possible outcomes—"success" and "failure"—and the probability of "success" is $p$ on each trial. Letting $\{Z_i\}_{i=1}^{n}$ be independent, identically distributed random variables distributed according to $\text{Ber}(p)$, it follows that $X$ can be realized as $X = \sum_{i=1}^{n} Z_i$. From the formula for the expected value and variance of a Bernoulli random variable, and from the linearity of the expectation and (A.3), the above representation immediately yields $EX = np$ and $\sigma^2(X) = np(1-p)$.

A random variable $X$ satisfying

$$P(X = n) = e^{-\lambda} \frac{\lambda^n}{n!}, \quad n = 0, 1, \ldots,$$

where $\lambda > 0$, is called a Poisson random variable, and one writes $X \sim \text{Pois}(\lambda)$. One can check easily that $EX = \lambda$ and $\sigma^2(X) = \lambda$.

Proposition A.3 (Poisson Approximation to the Binomial Distribution). For $n \in \mathbb{N}$ and $p \in [0,1]$, let $X_{n,p} \sim \text{Bin}(n,p)$. For $\lambda > 0$, let $X \sim \text{Pois}(\lambda)$. Then

$$\lim_{n\to\infty,\, p\to 0,\, np\to\lambda} P(X_{n,p} = j) = P(X = j), \quad j = 0, 1, \ldots. \tag{A.4}$$

Proof. By assumption, we have $p = \frac{\lambda_n}{n}$, where $\lim_{n\to\infty} \lambda_n = \lambda$. We have

$$P(X_{n,p} = j) = \binom{n}{j} \Big(\frac{\lambda_n}{n}\Big)^j \Big(1 - \frac{\lambda_n}{n}\Big)^{n-j} = \frac{\lambda_n^j}{j!} \cdot \frac{n(n-1)\cdots(n-j+1)}{n^j} \Big(1 - \frac{\lambda_n}{n}\Big)^{n-j};$$

thus,

$$\lim_{n\to\infty,\, p\to 0,\, np\to\lambda} P(X_{n,p} = j) = e^{-\lambda} \frac{\lambda^j}{j!} = P(X = j). \qquad\square$$
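Proposition A.3 can be observed numerically by comparing the two probability functions with $np = \lambda$ held fixed (the values of $\lambda$ and $n$ below are illustrative):

```python
import math

def binom_pmf(n, p, j):
    return math.comb(n, j) * p ** j * (1 - p) ** (n - j)

def pois_pmf(lam, j):
    return math.exp(-lam) * lam ** j / math.factorial(j)

lam = 2.0
# Largest pointwise gap over j = 0..9, for n = 10, 100, 1000 with p = lam/n.
errs = [max(abs(binom_pmf(n, lam / n, j) - pois_pmf(lam, j))
            for j in range(10))
        for n in (10, 100, 1000)]
# errs shrinks toward 0 as n grows with np = lam fixed.
```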



Equation (A.4) is an example of weak convergence of random variables or distributions. In general, if $\{X_n\}_{n=1}^{\infty}$ are random variables with distributions $\{\mu_{X_n}\}_{n=1}^{\infty}$, and $X$ is a random variable with distribution $\mu_X$, then we say that $X_n$ converges weakly to $X$, or $\mu_{X_n}$ converges weakly to $\mu_X$, if $\lim_{n\to\infty} P(X_n \le x) = P(X \le x)$, for all $x \in \mathbb{R}$ for which $P(X = x) = 0$, or equivalently, if $\lim_{n\to\infty} \mu_{X_n}((-\infty, x]) = \mu_X((-\infty, x])$, for all $x \in \mathbb{R}$ for which $\mu_X(\{x\}) = 0$. Thus, for example, if $P(X_n = \frac{1}{n}) = P(X_n = 1 + \frac{1}{n}) = \frac{1}{2}$, for $n = 1, 2, \ldots$, and $P(X = 0) = P(X = 1) = \frac{1}{2}$, then $X_n$ converges weakly to $X$ since $\lim_{n\to\infty} P(X_n \le x) = P(X \le x)$, for all $x \in \mathbb{R} \setminus \{0, 1\}$. See also Exercise A.4.

Exercise A.1. Use the $\sigma$-additivity property of probability measures on disjoint sets to prove $\sigma$-sub-additivity on arbitrary sets; that is, $P(\cup_{n=1}^{N} A_n) \le \sum_{n=1}^{N} P(A_n)$, for arbitrary events $\{A_n\}_{n=1}^{N}$, where $1 \le N \le \infty$. (Hint: Rewrite $\cup_{n=1}^{N} A_n$ as a disjoint union $\cup_{n=1}^{N} B_n$, by letting $B_1 = A_1$, $B_2 = A_2 - A_1$, $B_3 = A_3 - A_2 - A_1$, etc.)

Exercise A.2. Prove that $P(A_1 \cup A_2) = P(A_1) + P(A_2) - P(A_1 \cap A_2)$, for arbitrary events $A_1, A_2$. Then prove more generally that for any finite $n$ and arbitrary events $\{A_k\}_{k=1}^{n}$, one has

$$P(\cup_{k=1}^{n} A_k) = \sum_{i} P(A_i) - \sum_{1 \le i < j \le n} P(A_i \cap A_j) + \cdots$$
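The inclusion–exclusion formula of Exercise A.2 can be checked exactly on a concrete space (the three events below are an arbitrary illustration):

```python
from fractions import Fraction
from itertools import combinations

# Uniform measure on {1,...,12} with three overlapping events.
omega = range(1, 13)
P = lambda ev: Fraction(len(ev), 12)
A = [{x for x in omega if x % 2 == 0},   # even numbers
     {x for x in omega if x % 3 == 0},   # multiples of 3
     {x for x in omega if x <= 4}]       # small numbers

union = P(set().union(*A))
# Alternating sum over all nonempty sub-collections, grouped by size k.
incl_excl = sum((-1) ** (k + 1) *
                sum(P(set.intersection(*c)) for c in combinations(A, k))
                for k in range(1, len(A) + 1))
```

Both quantities equal $\frac{9}{12} = \frac{3}{4}$ here.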

Fix $L > 0$ and write

$$\bar\phi(t) = \bar\phi_L(t) + \frac{1}{\sqrt{2\pi}}\, T_L^+(t) + \frac{1}{\sqrt{2\pi}}\, T_L^-(t), \tag{C.6}$$

where

$$\bar\phi_L(t) = \frac{1}{\sqrt{2\pi}} \int_{-L}^{L} e^{-t\, g(z/\sqrt{t})}\, dz$$

and

$$T_L^+(t) = \int_{L}^{\infty} e^{-t\, g(z/\sqrt{t})}\, dz, \qquad T_L^-(t) = \int_{-\sqrt{t}}^{-L} e^{-t\, g(z/\sqrt{t})}\, dz.$$

From Taylor’s remainder formula it follows that for any > 0 and sufficiently small v, one has 1 1 .1  /v 2  g.v/  .1 C /v 2 : 2 2 Thus, limt!1 tg. pz t / D 12 z2 , uniformly over z 2 ŒL; L; consequently, 1 lim N L .t / D p t!1 2 0 p   Since t g. pz t / D t 1  TLC .t /

1 1C pz



D

t

p p tz t Cz

Z

L

1 2

e  2 z d z:

(C.7)

L

is increasing in z, we have

$$T_L^+(t) \le \int_{L}^{\infty} e^{-t\, g(\frac{L}{\sqrt{t}}) - \frac{\sqrt{t}\,L}{\sqrt{t}+L}(z-L)}\, dz = \frac{\sqrt{t}+L}{\sqrt{t}\,L}\, e^{-t\, g(\frac{L}{\sqrt{t}})} = \frac{\sqrt{t}+L}{\sqrt{t}\,L}\, e^{-t[\frac{L}{\sqrt{t}} - \log(1+\frac{L}{\sqrt{t}})]}.$$

By Taylor's formula, we have $\log(1+\frac{L}{\sqrt{t}}) = \frac{L}{\sqrt{t}} - \frac{L^2}{2t} + O(t^{-\frac{3}{2}})$ as $t \to \infty$; thus,

$$\limsup_{t\to\infty} T_L^+(t) \le \frac{1}{L}\, e^{-\frac{1}{2}L^2}. \tag{C.8}$$


C Proof of Stirling’s Formula

A very similar argument gives

$$\limsup_{t\to\infty} T_L^-(t) \le \frac{1}{L}\, e^{-\frac{1}{2}L^2}. \tag{C.9}$$

Now from (C.6)–(C.9), we obtain

$$\frac{1}{\sqrt{2\pi}} \int_{-L}^{L} e^{-\frac{1}{2}z^2}\, dz \le \liminf_{t\to\infty} \bar\phi(t) \le \limsup_{t\to\infty} \bar\phi(t) \le \frac{1}{\sqrt{2\pi}} \int_{-L}^{L} e^{-\frac{1}{2}z^2}\, dz + \frac{2}{L\sqrt{2\pi}}\, e^{-\frac{1}{2}L^2}.$$

Since $\bar\phi(t)$ is independent of $L$, letting $L \to \infty$ above gives (C.5). $\square$
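The convergence proved here can be observed numerically: the ratio of $n!$ to $\sqrt{2\pi n}\,(n/e)^n$ tends to 1. (The known refinement $1 + \frac{1}{12n} + O(n^{-2})$ explains the rate, though it is not proved in this appendix.)

```python
import math

def stirling_ratio(n):
    """n! divided by the Stirling approximation sqrt(2 pi n) (n/e)^n."""
    return math.factorial(n) / (math.sqrt(2 * math.pi * n) * (n / math.e) ** n)

ratios = [stirling_ratio(n) for n in (5, 20, 100)]
# The ratios decrease toward 1 as n grows.
```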

Appendix D

An Elementary Proof of $\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$

The standard way to prove the identity in the title of this appendix is via Fourier series. We give a completely elementary proof, following [1]. Consider the double integral

$$I = \int_0^1 \int_0^1 \frac{1}{1-xy}\, dx\, dy. \tag{D.1}$$

(Actually, the expression on the right-hand side of (D.1) is an improper integral, because the integrand blows up at $(x,y) = (1,1)$. Thus, $\int_0^1\int_0^1 \frac{1}{1-xy}\, dx\, dy := \lim_{\epsilon\to 0^+} \int_0^{1-\epsilon}\int_0^{1-\epsilon} \frac{1}{1-xy}\, dx\, dy$. Since the integrand is nonnegative, there is no problem applying the standard rules of calculus directly to $\int_0^1\int_0^1 \frac{1}{1-xy}\, dx\, dy$.)

On the one hand, expanding the integrand in a geometric series and integrating term by term gives

$$I = \int_0^1\!\!\int_0^1 \sum_{n=0}^{\infty} (xy)^n\, dx\, dy = \sum_{n=0}^{\infty} \int_0^1\!\!\int_0^1 x^n y^n\, dx\, dy = \sum_{n=0}^{\infty} \Big(\int_0^1 x^n\, dx\Big)\Big(\int_0^1 y^n\, dy\Big) = \sum_{n=0}^{\infty} \frac{1}{(n+1)^2} = \sum_{n=1}^{\infty} \frac{1}{n^2}. \tag{D.2}$$

(The interchanging of the order of the integration and the summation is justified by the fact that all the summands are nonnegative.)

On the other hand, consider the change of variables $u = \frac{y+x}{2}$, $v = \frac{y-x}{2}$. This transformation rotates the square $[0,1]\times[0,1]$ clockwise by $45^{\circ}$ and shrinks its sides by the factor $\sqrt{2}$. The new domain is $\{(u,v) : 0 \le u \le \frac{1}{2},\ -u \le v \le u\} \cup \{(u,v) : \frac{1}{2} \le u \le 1,\ u-1 \le v \le 1-u\}$. The Jacobian $\frac{\partial(x,y)}{\partial(u,v)}$ of the transformation is equal to 2, so the area element $dx\, dy$ gets replaced by $2\, du\, dv$. The function $\frac{1}{1-xy}$ becomes $\frac{1}{1-u^2+v^2}$. Since the function and the domain are symmetric with respect to the $u$-axis, we have



$$I = 4\int_0^{1/2}\Big(\int_0^{u} \frac{dv}{1 - u^2 + v^2}\Big)\, du + 4\int_{1/2}^{1}\Big(\int_0^{1-u} \frac{dv}{1 - u^2 + v^2}\Big)\, du.$$

Using the integration formula $\int \frac{dx}{x^2 + a^2} = \frac{1}{a}\arctan\frac{x}{a}$, we obtain

$$I = 4\int_0^{1/2} \frac{1}{\sqrt{1-u^2}}\,\arctan\frac{u}{\sqrt{1-u^2}}\, du + 4\int_{1/2}^{1} \frac{1}{\sqrt{1-u^2}}\,\arctan\frac{1-u}{\sqrt{1-u^2}}\, du.$$

Now the derivative of $g(u) := \arctan\frac{u}{\sqrt{1-u^2}}$ is $\frac{1}{\sqrt{1-u^2}}$, and the derivative of $h(u) := \arctan\frac{1-u}{\sqrt{1-u^2}} = \arctan\sqrt{\frac{1-u}{1+u}}$ is $-\frac{1}{2}\,\frac{1}{\sqrt{1-u^2}}$. Thus, we conclude that

$$I = 4\int_0^{1/2} g(u)\, g'(u)\, du - 8\int_{1/2}^{1} h(u)\, h'(u)\, du = 2g^2(u)\Big|_0^{1/2} - 4h^2(u)\Big|_{1/2}^{1} =$$
$$2\Big(\arctan^2\frac{1}{\sqrt{3}} - \arctan^2 0\Big) - 4\Big(\arctan^2 0 - \arctan^2\frac{1}{\sqrt{3}}\Big) = 6\arctan^2\frac{1}{\sqrt{3}} = 6\Big(\frac{\pi}{6}\Big)^2 = \frac{\pi^2}{6}. \tag{D.3}$$

Comparing (D.2) and (D.3) gives

$$\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}. \qquad\square$$
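The identity can be sanity-checked numerically: by comparison with $\int \frac{dx}{x^2}$, the tail $\sum_{n>N} \frac{1}{n^2}$ lies strictly between $\frac{1}{N+1}$ and $\frac{1}{N}$, so the partial sum up to $N$ undershoots $\pi^2/6$ by roughly $1/N$:

```python
import math

def partial_sum(N):
    """Partial sum of 1/n^2 up to n = N."""
    return sum(1.0 / n ** 2 for n in range(1, N + 1))

target = math.pi ** 2 / 6
err = target - partial_sum(10000)
# err is the tail of the series, pinned between 1/10001 and 1/10000.
```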

References

1. Aigner, M., Ziegler, G.: Proofs from the Book, 4th edn. Springer, Berlin (2010)
2. Alon, N., Spencer, J.: The Probabilistic Method, 3rd edn. Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley, Hoboken (2008)
3. Alon, N., Krivelevich, M., Sudakov, B.: Finding a large hidden clique in a random graph. In: Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms (San Francisco, CA, 1998), pp. 594–598. ACM, New York (1998)
4. Andrews, G.: The Theory of Partitions, reprint of the 1976 original. Cambridge University Press, Cambridge (1998)
5. Apostol, T.: Introduction to Analytic Number Theory. Undergraduate Texts in Mathematics. Springer, New York (1976)
6. Arratia, R., Barbour, A.D., Tavaré, S.: Logarithmic Combinatorial Structures: A Probabilistic Approach. EMS Monographs in Mathematics. European Mathematical Society, Zürich (2003)
7. Athreya, K., Ney, P.: Branching Processes, reprint of the 1963 original [Springer, Berlin]. Dover Publications, Inc., Mineola (2004)
8. Bollobás, B.: The evolution of random graphs. Trans. Am. Math. Soc. 286, 257–274 (1984)
9. Bollobás, B.: Modern Graph Theory. Graduate Texts in Mathematics, vol. 184. Springer, New York (1998)
10. Bollobás, B.: Random Graphs, 2nd edn. Cambridge Studies in Advanced Mathematics, vol. 73. Cambridge University Press, Cambridge (2001)
11. Brauer, A.: On a problem of partitions. Am. J. Math. 64, 299–312 (1942)
12. Conlon, D.: A new upper bound for diagonal Ramsey numbers. Ann. Math. 170, 941–960 (2009)
13. Dembo, A., Zeitouni, O.: Large Deviations Techniques and Applications, 2nd edn. Springer, New York (1998)
14. Diaconis, P., Freedman, D.: An elementary proof of Stirling's formula. Am. Math. Mon. 93, 123–125 (1986)
15. Doyle, P., Snell, J.L.: Random Walks and Electric Networks. Carus Mathematical Monographs, vol. 22. Mathematical Association of America, Washington (1984)
16. Durrett, R.: Probability: Theory and Examples, 4th edn. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge (2010)
17. Dwass, M.: The number of increases in a random permutation. J. Combin. Theor. Ser. A 15, 192–199 (1973)
18. Erdős, P., Rényi, A.: On the evolution of random graphs. Magyar Tud. Akad. Mat. Kutató Int. Közl 5, 17–61 (1960)
19. Feller, W.: An Introduction to Probability Theory and Its Applications, 3rd edn, vol. I. Wiley, New York (1968)





20. Flajolet, P., Sedgewick, R.: Analytic Combinatorics. Cambridge University Press, Cambridge (2009)
21. Flory, P.J.: Intramolecular reaction between neighboring substituents of vinyl polymers. J. Am. Chem. Soc. 61, 1518–1521 (1939)
22. Graham, R., Rothschild, B., Spencer, J.: Ramsey Theory, 2nd edn. Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley, New York (1990)
23. Hardy, G.H., Ramanujan, S.: Asymptotic formulae in combinatory analysis. Proc. London Math. Soc. 17, 75–115 (1918)
24. Harris, T.: The Theory of Branching Processes, corrected reprint of the 1963 original [Springer, Berlin]. Dover Publications, Inc., Mineola (2002)
25. Jameson, G.J.O.: The Prime Number Theorem. London Mathematical Society Student Texts, vol. 53. Cambridge University Press, Cambridge (2003)
26. Montgomery, H., Vaughan, R.: Multiplicative Number Theory. I. Classical Theory. Cambridge Studies in Advanced Mathematics, vol. 97. Cambridge University Press, Cambridge (2007)
27. Nathanson, M.: Elementary Methods in Number Theory. Graduate Texts in Mathematics, vol. 195. Springer, New York (2000)
28. Page, E.S.: The distribution of vacancies on a line. J. Roy. Stat. Soc. Ser. B 21, 364–374 (1959)
29. Pinsky, R.: Detecting tampering in a random hypercube. Electron. J. Probab. 18, 1–12 (2013)
30. Pitman, J.: Combinatorial stochastic processes. Lectures from the 32nd Summer School on Probability Theory held in Saint-Flour, 7–24 July 2002. Lecture Notes in Mathematics, vol. 1875. Springer, Berlin (2006)
31. Rényi, A.: On a one-dimensional problem concerning random space filling (Hungarian; English summary). Magyar Tud. Akad. Mat. Kutató Int. Közl. 3, 109–127 (1958)
32. Spitzer, F.: Principles of Random Walk, 2nd edn. Graduate Texts in Mathematics, vol. 34. Springer, New York (1976)
33. Tenenbaum, G.: Introduction to Analytic and Probabilistic Number Theory. Cambridge Studies in Advanced Mathematics, vol. 46. Cambridge University Press, Cambridge (1995)
34. Wilf, H.: Generating Functionology, 3rd edn. A K Peters, Ltd., Wellesley (2006)

Index

A
Abel summation, 77
arcsine distribution, 37
average order, 13

B
Bernoulli random variable, 138
binomial random variable, 138
branching process – see Galton–Watson branching process, 117

C
Chebyshev's ψ-function, 70
Chebyshev's θ-function, 68
Chebyshev's inequality, 134
Chebyshev's theorem, 68
Chinese remainder theorem, 19
clique, 89
coloring of a graph, 104
composition of an integer, 5
cycle index, 58
cycle type, 51

D
derangement, 49
Dyck path, 40

E
Erdős–Rényi graph, 89
Euler φ-function, 11
Euler product formula, 19
Ewens sampling formula, 52
expected value, 134
extinction, 117

F
Fibonacci sequence, 142
finite graph, 89

G
Galton–Watson branching process, 117
generating function, 141
giant component, 110

H
Hardy–Ramanujan theorem, 81

I
independent events, 133
independent random variables, 135

L
large deviations, 113

M
Mertens' theorems, 75
Möbius function, 8
Möbius inversion, 10
multiplicative function, 9

P
p-adic, 71
partition of an integer, 1
Poisson approximation to the binomial distribution, 138
Poisson random variable, 138
prime number theorem, 67
probabilistic method, 107
probability generating function, 54
probability space, 133

R
Ramsey number, 105
random variable, 133
relative entropy, 115, 131
restricted partition of an integer, 1

S
sieve method, 19
simple, symmetric random walk, 35
square-free integer, 8
Stirling numbers of the first kind, 54
Stirling's formula, 145
survival, 117

T
tampering detection, 99
total variation distance, 99

V
variance, 134

W
weak convergence, 139
weak law of large numbers, 136, 137