On Stein's Method for Infinitely Divisible Laws with Finite First Moment [1st ed.]
 978-3-030-15016-7;978-3-030-15017-4

Table of contents :
Front Matter ....Pages i-xi
Introduction (Benjamin Arras, Christian Houdré)....Pages 1-2
Preliminaries (Benjamin Arras, Christian Houdré)....Pages 3-12
Characterization and Coupling (Benjamin Arras, Christian Houdré)....Pages 13-29
General Upper Bounds by Fourier Methods (Benjamin Arras, Christian Houdré)....Pages 31-56
Solution to Stein’s Equation for Self-Decomposable Laws (Benjamin Arras, Christian Houdré)....Pages 57-75
Applications to Sums of Independent Random Variables (Benjamin Arras, Christian Houdré)....Pages 77-88
Back Matter ....Pages 89-104

Citation preview

SPRINGER BRIEFS IN PROBABILIT Y AND MATHEMATIC AL STATISTICS

Benjamin Arras Christian Houdré

On Stein’s Method for  Infinitely Divisible Laws  with Finite First Moment

SpringerBriefs in Probability and Mathematical Statistics Editor-in-Chief Mark Podolskij, University of Aarhus, Aarhus C, Denmark Series Editors Nina Gantert, Technische Universität München, Münich, Germany Richard Nickl, University of Cambridge, Cambridge, UK Sandrine Péché, Univirsité Paris Diderot, Paris, France Gesine Reinert, University of Oxford, Oxford, UK Mathieu Rosenbaum, Université Pierre et Marie Curie, Paris, France Wei Biao Wu, University of Chicago, Chicago, IL, USA

SpringerBriefs present concise summaries of cutting-edge research and practical applications across a wide spectrum of fields. Featuring compact volumes of 50 to 125 pages, the series covers a range of content from professional to academic. Briefs are characterized by fast, global electronic dissemination, standard publishing contracts, standardized manuscript preparation and formatting guidelines, and expedited production schedules. Typical topics might include: - A timely report of state-of-the art techniques - A bridge between new research results, as published in journal articles, and a contextual literature review - A snapshot of a hot or emerging topic - Lecture of seminar notes making a specialist topic accessible for non-specialist readers -SpringerBriefs in Probability and Mathematical Statistics showcase topics of current relevance in the field of probability and mathematical statistics Manuscripts presenting new results in a classical field, new field, or an emerging topic, or bridges between new results and already published works, are encouraged. This series is intended for mathematicians and other scientists with interest in probability and mathematical statistics. All volumes published in this series undergo a thorough refereeing process. The SBPMS series is published under the auspices of the Bernoulli Society for Mathematical Statistics and Probability.

More information about this series at http://www.springer.com/series/14353

Benjamin Arras Christian Houdré •

On Stein’s Method for Infinitely Divisible Laws with Finite First Moment

123

Benjamin Arras Laboratoire Paul Painlevé University of Lille Nord de France Villeneuve-d’Ascq, France

Christian Houdré School of Mathematics Georgia Institute of Technology Atlanta, GA, USA

ISSN 2365-4333 ISSN 2365-4341 (electronic) SpringerBriefs in Probability and Mathematical Statistics ISBN 978-3-030-15016-7 ISBN 978-3-030-15017-4 (eBook) https://doi.org/10.1007/978-3-030-15017-4 Library of Congress Control Number: 2019933697 Mathematics Subject Classification (2010): 60E07, 60E10, 60F05, 47D03, 47D07, 11K65 © The Author(s), under exclusive license to Springer Nature Switzerland AG 2019 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

To Charlotte and Alice À CMH

Acknowledgements

Both authors would like to thank Lennart Bondesson for his kind email exchanges on the infinite divisibility of the powers of a normal random variable and to thank Nathan Ross for his comments and bibliographical pointers. Both authors also thank Sorbonne Université for its hospitality while part of this research was carried out. Christian Houdré would like to thank the Institute for Mathematical Sciences of the National University of Singapore for its hospitality and support as well as the organizers of the 2015 Workshop on New Directions in Stein’s Method for their invitation. His participation in this workshop ultimately led to the present collaborative work. His research was supported in part by the grants #246283 and #524678 from the Simons Foundation. This material is also based, in part, upon work supported by the National Science Foundation under Grant No. 1440140, while Christian Houdré was in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the fall semester of 2017.

vii

Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

1

. . . .

3 3 6 10

3 Characterization and Coupling . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

13

4 General Upper Bounds by Fourier Methods . . . . . . . . . . . . . . . . . . .

31

5 Solution to Stein’s Equation for Self-Decomposable Laws . . . . . . . .

57

6 Applications to Sums of Independent Random Variables . . . . . . . . .

77

Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

89

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

99

2 Preliminaries . . . . . . . . . . . . . 2.1 Stein’s Method . . . . . . . . 2.2 Infinite Divisibility . . . . . 2.3 Some Probability Metrics

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

ix

Abstract

We present, in a unified way, a Stein methodology for infinitely divisible laws (without Gaussian component) having finite first moment. Via a covariance representation, we obtain a characterizing non-local Stein operator which boils down to classical operators in many specific examples. Thanks to this characterizing operator, we introduce various extensions of size-bias and zero-bias distributions and prove that these notions are closely linked to infinite divisibility. When combined with standard Fourier techniques, these extensions also allow to obtain explicit rates of convergence for compound Poisson approximation, in particular, toward the astable distribution. Finally, in the setting of nondegenerate self-decomposable laws and for instance stable ones, we solve, by semigroup techniques, the Stein equation induced by the characterizing nonlocal operator. This further leads to quantitative bounds in weak limit theorems, for sums of independent random variables, originating in very classical settings.







Keywords Infinite Divisibility Self-decomposability Stable Laws Stein’s Method Stein–Tikhomirov’s Method Weak Limit Theorems Rates of Convergence Kolmogorov Distance Wasserstein Distance Smooth Wasserstein Distance













xi

Chapter 1

Introduction

Since its inception in the normal setting (see [85]), Stein’s method of approximation has enjoyed tremendous successes in both theory and applications. Starting with Chen’s [30] initial extension to the Poisson case the method has been developed for various distributions such as compound Poisson, geometric, negative binomial, exponential, or Laplace, to name but a few. (We refer the reader to Chen, Goldstein and Shao [31] or Ross [81] for good introductions to the method, as well as more precise and complete references.) The methodology developed for the distributions just mentioned is often ad hoc, the fundamental equation changing from one law to another and it is therefore not always easy to see their common underlying thread/approach. There is, however, a large class of random variables for which a common methodology is possible. The class we have in mind is the infinitely divisible one, and it is the purpose of these notes to study Stein’s method in this context. Our results will, in particular, provide a common framework for all the examples mentioned above in addition to presenting a bountiful of new ones. The content of the manuscript is as follows: the next chapter presents background material on the basics, of Stein’s method, infinite divisibility, and probability metrics, which are used throughout. In Chap. 3, Theorem 3.1 provides a functional characterization of infinitely divisible laws from which distance estimates follow. Various comparisons with previously known situations are also made. Then, Corollary 3.5 and Proposition 3.8 respectively extend the notions of size-bias and zero-bias to infinitely divisible distributions whose Lévy measure satisfies moments conditions. Chapter 4 shows, via Fourier methods, how the new characterization of the previous chapter leads to rates of convergence results, in either Kolmogorov or smooth Wasserstein distance, in particular for the compound Poisson approximations of infinitely divisible distributions. In Chap. 5, the corresponding Stein’s equation is put forward, solved, and its properties studied when the target limiting law is self-decomposable, e.g., α-stable, 1 < α < 2. Solving the equation associated with such target laws relies mainly on semigroup techniques which are reminiscent of the Gaussian approach developed © The Author(s), under exclusive license to Springer Nature Switzerland AG 2019 B. Arras and C. Houdré, On Stein’s Method for Infinitely Divisible Laws with Finite First Moment, SpringerBriefs in Probability and Mathematical Statistics, https://doi.org/10.1007/978-3-030-15017-4_1

1

2

1 Introduction

in [9, 48]. Chapter 5 ends with Remark 5.9 where Stein’s method for the Poisson law, viewed as a particular case of discrete self-decomposable law, is described. This leads to a further extension where a Stein’s equation for the convolution between a self-decomposable law and a Poisson law is briefly discussed. In Chap. 6, the Stein methodology developed to this point is used to obtain quantitative approximation results in classical weak limit theorems for sums of independent random variables. This leads to Theorem 6.2, the main result of the chapter, which is afterward complemented with some more explicit corollaries. In particular, in Theorems 6.7 and 6.8, explicit rates of convergence are first obtained for a limit theorem originating in extreme value theory and then for a nonstandard limit theorem connected to number theory. These notes end with a brief discussion of further extensions of our ideas and results to be developed in future work.

Chapter 2

Preliminaries

2.1 Stein’s Method Nowadays, Stein’s method is a powerful tool to quantify limit theorems appearing in probability theory and, since its introduction in a Gaussian setting, it has been extended to many probability distributions beginning with the Poisson one. The method rests upon basic principles, briefly recalled next, which are in some sense universal and which partially explain its successes. First, let Z ∼ N (0, 1) be a standard normal random variable and let X be a random variable whose distribution will be compared to the one of Z . It is by now standard that the law of Z is completely characterized via the following covariance identity: EZ f (Z ) = E f  (Z ),

(2.1)

since/if the above holds true for all absolutely continuous f for which both E|Z f (Z )| and E| f  (Z )| are finite. Then, the main idea of Stein’s method goes as follows: the distribution of X is close to the one of Z if, for sufficiently many test functions f , EX f (X ) ≈ E f  (X ). Next, to make this idea more concrete, the maneuver is to introduce an equation capturing the information on the standard normal distribution and, to this end, the fundamental equation is given by f h 1 (x) − x f h 1 (x) = h 1 (x) − Eh 1 (Z ), x ∈ R, where h 1 belongs to an appropriate class of test functions linked to probability metrics of interest. Assuming the existence of a solution for this differential equation and, integrating it with respect to the law of X , provides the following representation:

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2019 B. Arras and C. Houdré, On Stein’s Method for Infinitely Divisible Laws with Finite First Moment, SpringerBriefs in Probability and Mathematical Statistics, https://doi.org/10.1007/978-3-030-15017-4_2

3

4

2 Preliminaries

  |Eh 1 (X ) − Eh 1 (Z )| = E f h 1 (X ) − X f h 1 (X ) . This last equality is at the core of Stein’s method: the standard normal law is completely encoded into a differential equation so that the problem of bounding the quantity |Eh 1 (X ) − Eh 1 (Z )| only depends on the regularity of the solution f h 1 , and on the structure of the law of X . Before going further into the description of the method, let us first recall the following result which makes precise the lines of reasoning presented above. This result is taken from [81, Corollary 3.38]. Theorem 2.1 Let Z ∼ N (0, 1) and let X be a random variable with finite expectation. Then,   sup |P (X ≤ x) − P (Z ≤ x)| ≤ sup E f  (X ) − X f (X ) , x∈R

f ∈F

where F is the set √ of absolutely continuous functions on R such that  f ∞ := supx∈R | f (x)| ≤ π/2,  f  ∞ ≤ 2, and such that for all u, v, w ∈ R,   √ |(w + u) f (w + u) − (w + v) f (w + v)| ≤ |w| + 2π/4 (|u| + |v|) . In the Poisson case, let W ∼ P(λ) be a Poisson random variable with parameter λ > 0 and let Y be a random variable whose distribution is to be compared to the one of W . Now, W is characterized via EW f (W ) = λE f (W + 1), since/if this equality holds true for all bounded function f defined on N. Then, again, Y is close, in distribution, to W if, for sufficiently many test functions f , EY f (Y ) ≈ λE f (Y + 1). Next, the fundamental equation is now a difference equation which is given by λ f h 2 (n + 1) − n f h 2 (n) = h 2 (n) − Eh 2 (W ), n ∈ N, where now h 2 belongs to an appropriate class of test functions linked to probability metrics of interest. This provides the following representation:   |Eh 2 (W ) − Eh 2 (Y )| = Eλ f h 2 (Y + 1) − Y f h 2 (Y ) , leading, for example (e.g., see [81, Theorem 4.5]), to: Theorem 2.2 Let W ∼ P(λ), λ > 0 and let Y be an integer-valued random variable with mean λ. Then,

2.1 Stein’s Method

5

sup |P (Y ∈ A) − P (W ∈ A)| ≤ sup |E λ f (Y + 1) − Y f (Y )| , A⊂N

f ∈F

  where F is the set of functions such that  f ∞ := supn∈N | f (n)| ≤ min 1, λ−1/2  and  f (· + 1) − f (·)∞ ≤ (1 − e−λ )/λ ≤ min 1, λ−1 . As mentioned above, once the fundamental equation is solved, the next step is usually to exploit the regularity of the solution and the structure of the random variables X or Y . Indeed, Stein’s method has proven to be fruitful when the following structures are involved: sums of independent random variables, sums of dependent random variables with local dependencies, exchangeable pairs, size-bias (or zerobias) distributions combined, in a subtle manner, with coupling techniques, regular functionals of random variables (e.g., Malliavin regular functionals). For good literature surveys on the subject as well as standard references, together with persuasive applications, the reader is referred to [10, 13, 27, 28, 31, 35, 63, 72, 81, 86]. Let us briefly describe, next, a way to obtain, from Theorem 2.1, relevant bounds from which explicit rates of convergence are achievable. This method is, by now, classical and is called the bounded zero-bias coupling technique (see [31, Sect. 5.1]). The main point of this approach is to use a distributional transformation linked to the normal distribution together with a bounded coupling. More precisely, let X be a random variable with mean zero and unit variance. The zero-bias transform X ∗ of X is defined, for all differentiable function f for which the following expectations exist, by: EX f (X ) = E f  (X ∗ ). Then, assuming that one can find a coupling of X and X ∗ such that |X − X ∗ | ≤ δ a.s., for some δ > 0, the following result holds true, e.g., see [31, Theorem 5.1] or [81, Theorem 3.39]. Theorem 2.3 Let Z ∼ N (0, 1) and let X be a mean zero random variable with unit variance. Let X ∗ be defined on the same probability space as X and be such that |X − X ∗ | ≤ δ a.s., for some δ > 0. Then, 

1 sup |P (X ≤ x) − P (Z ≤ x)| ≤ 1 + √ + 2π x∈R



2π 4

 δ.

(2.2)

√ Now, if X = nk=1 X k / n, where the X k , k ≥ 1, are i.i.d. zero mean variance one random variables, which moreover are bounded by C, for some C > 0 indepen∗ dent of n, then it √ is possible to build a coupling √ between X and X such that ∗ |X − X | ≤ 2C/ n. Combined with (2.2), the 1/ n-rate is then met. Besides the Gaussian and the Poisson laws, Stein’s method has been successfully developed for many univariate distributions such as gamma, Laplace, beta, or semicircular, to name but a few. As put forward in the introduction, the main objective of

6

2 Preliminaries

the current notes is to show that a common Stein methodology is possible for a large class of probability distributions namely, the infinitely divisible one. Indeed, those distributions form a natural class for which a generalized central limit theorem holds and it includes a multitude of well-known distributions, in addition to the Gaussian and the Poisson ones. Moreover, while being naturally linked to limiting theorems for sums of independent summands, a systematic treatment of Stein’s method in that setting has not yet appeared in the literature. (Although, as we found out after our first research posting on this topic, this potential subject of investigation is already mentioned by Charles Stein in [86, Lecture XV, Summary, page 158].)

2.2 Infinite Divisibility Infinite divisibility is a classical subject which has been studied in depth, by many authors, and which is, nowadays, a standard part of probability theory. It was introduced to study the arithmetic of independent random variables and to understand the limiting laws of their sums which was accomplished with great success, e.g., see [45, 59, 62, 66, 77, 84, 87]. Besides its connections to limit theorems, infinite divisibility has also proved fruitful in diverse areas of pure and applied mathematics such as probabilistic number theory [18], analytic number theory [64, 69, 70], Lévy processes and their stochastic calculus as well as their applications to mathematical finance and physics [2, 14, 16, 34]. Let us now recall the definition of infinite divisibility and present some of its main consequences. Definition 2.4 A random variable X is infinitely divisible (ID) if, for each n ≥ 1, there exists X 1,n , . . . , X n,n independent and identically distributed such that X =d X 1,n + · · · + X n,n ,

(2.3)

where =d is short for equality in distribution. As already mentioned, infinitely divisible distributions are intimately linked to weak limit theorems. Indeed, the set of infinitely divisible distributions is closed with respect to the weak convergence of probability measures (see [84, Lemma 7.8] as well as Remark 4.4 (v) below), while the following is a standard limit theorem for row sums of null arrays (see, e.g., [84, Theorem 9.3]). Theorem 2.5 Let (rn )n≥1 be a sequence of integers, rn ≥ 1, such that limn→+∞ rn = +∞. Let {Z n,k : k = 1, . . . , rn , n = 1, 2, . . . } be such that Z n,1 , . . . , Z n,rn , n ≥ 1, are independent and such that, for all ε > 0,    lim max P  Z n,k  ≥ ε = 0.

n→+∞1≤k≤rn

(2.4)

2.2 Infinite Divisibility

7

n Let Sn := rk=1 Z n,k , n ≥ 1. If, for some cn ∈ R, n ≥ 1, the distribution of Sn + cn converges, as n → +∞, to a distribution μ, then μ is infinitely divisible. Recall that the condition (2.4) is equivalent to   lim max ϕ Z n,k (t) − 1 = 0,

n→+∞1≤k≤rn

(2.5)

uniformly in t on any compact subset of R, where ϕ Z n,k is the characteristic function of Z n,k (see [77, Lemma 3.1]). Next, and clearly, if ϕ is the characteristic function of X , then Definition 2.4 is equivalent to the existence, for each n ≥ 1, of a characteristic function ϕn such that ϕ(t) = (ϕn (t))n , t ∈ R. Now, thanks to this fundamental property, the characteristic function of an infinitely divisible random variable has a specific representation which is central to the whole theory. This result, known as the Lévy–Khintchine representation, asserts that X is infinitely divisible if and only if its characteristic function ϕ is given, for all t ∈ R, by

+∞ 2 2t itu + (e − 1 − itu1|u|≤1 )ν(du) , (2.6) ϕ(t) = exp itb − σ 2 −∞ for some

+∞ b ∈ R, σ ≥ 0 and a positive Borel measure ν on R such that ν({0}) = 0 and −∞ (1 ∧ u 2 )ν(du) < +∞. This is then abbreviated as X ∼ I D(b, σ 2 , ν). The measure ν is called the Lévy measure of X , and X is said to be without Gaussian component (or to be purely Poissonian or purely non-Gaussian) whenever σ 2 = 0. (We refer the reader to Sato [84], for a good introduction to infinitely divisible laws and Lévy processes.) The representation (2.6) is the one which will mainly be used throughout these notes with the (unique) generating triplet (b, σ 2 , ν). However, other types of representations are also possible and two of them are presented next. First,

if ν is such that |u|≤1 |u|ν(du) < +∞, then (2.6) becomes

+∞ t2 ϕ(t) = exp itb0 − σ 2 + (eitu − 1)ν(du) , 2 −∞

(2.7)

where b0 = b − |u|≤1 uν(du) is called the drift of X . This representation is crypti cally expressed as X ∼ I D(b0 , σ 2 , ν)0 . Second, if ν is such that |u|>1 |u|ν(du) < +∞, then (2.6) becomes

+∞ t2 ϕ(t) = exp itb1 − σ 2 + (eitu − 1 − itu)ν(du) , 2 −∞

(2.8)

where b1 = b + |u|>1 uν(du) is called the center of X . In turn, this last representation is now cryptically written as X ∼ I D(b1 , σ 2 , ν)1 . In fact, b1 = EX as, for any p > 0,

8

2 Preliminaries

E|X | p < +∞ is equivalent to |u|>1 |u| p ν(du) < +∞. Also, for any r > 0, Eer |X |
1 er |u| ν(du) < +∞. Various choices of generating triplets (b, σ 2 , ν) provide various classes of infinitely divisible laws. The triplet (b, 0, 0) corresponds to a degenerate random variable, (b, σ 2 , 0) to a normal one with mean b and variance σ 2 , the choice (λ, 0, λδ1 ), where λ > 0 and where δ1 is the Dirac measure at 1, corresponds to a Poisson random variable with parameter λ. For ν finite, with the choice b0 = b − |u|≤1 uν(du) = 0, σ 2 = 0 and further setting ν(du) = ν(R)ν0 (du), where ν0 is a Borel probability measure on R, (2.7) becomes

ϕ(t) = exp ν(R)

+∞ −∞

(eitu − 1)ν0 (du) ,

(2.9)

i.e., X is compound Poisson: X ∼ CP(ν(R), ν0 ). Next, let X ∼ N Bin 0 (r, p), i.e., let X be negative binomial with support the nonnegative integers and let P(X = k) =

(r + k) r p (1 − p)k , k = 0, 1, 2, . . . (r )k!

−1 k where r > 0 and 0 < p < 1. Then, X ∼ I D(b, 0, ν) with ν(du) = r ∞ k=1 k q δk (du) and b0 = b − |u|≤1 uν(du) = 0, i.e., b = rq, and so EX = rq/ p, where as usual q = 1 − p. If instead, X ∼ N Bin(r, p), i.e., if P(X = k) =

(r + k − 1) r p (1 − p)k−1 , k = 1, 2, . . . (r )(k − 1)!

−1 k then X ∼ I D(b, 0, ν) with b = 1 + rq and ν(du) = r ∞ k=1 k q δk (du) and so EX = r/ p. If X has a Gamma distribution with parameters α > 0 and β > 0, i.e., if X has density β α (α)−1 x α−1 e−βx 1(0,+∞) (x), x ∈ R, then X ∼ I D(b, 0, ν) with ν(du) = αe−βu u −1 1(0,+∞) (u)du and b0 = 0, i.e.,

1 b = 0 αe−βu du = α(1 − e−β )/β. If X is the standard Laplace distribution with −1 −|u| density e−|x| /2, x ∈ R,

then X ∼ I D(b, 0, ν) where ν(du) = |u| e du, u = 0 and b0 = 0, i.e., b = |u|≤1 ue−|u| |u|−1 du = 0. More generally, if X has a two-sided exponential distribution with parameters α > 0 and β > 0, i.e., if X has density αβ(α + β)−1 (e−αx 1[0+∞) (x) + eβx 1(−∞,0) (x)), x ∈ R, then, once more, X ∼ I D(b, 0, ν) with Lévy measure   ν(du) = e−αu u −1 1(0,+∞) (u) − eβu u −1 1(−∞,0) (u) du,

2.2 Infinite Divisibility

9

and b0 = 0, i.e., b = |u|≤1 uν(du) = α−1 (1 − e−α ) − β −1 (1 − e−β ). Finally, if X

∞ is a stable random variable, then ν is given by ν(B) = S 0 σ(dξ) 0 1 B (r ξ) r dr 1+α , 0 < α < 2, where σ (the spherical component of ν) is a finite positive measure on the unit sphere S 0 of R (S 0 = {−1, 1}) and where B is a Borel set of R. Therefore, if X is an α-stable random variable, its Lévy measure is given by

1 1 ν(du) := c1 α+1 1(0,+∞) (u) + c2 1+α 1(−∞,0) (u) du, u |u|

(2.10)

where c1 , c2 ≥ 0 are such that c1 + c2 > 0. The symmetric case corresponds to c1 = c2 and b = 0, which we write as X ∼ SαS. The class of infinitely divisible distributions is vast and also includes, Student’s t-distribution, the Pareto distribution, the F-distribution, the Gumbel distribution to name but a few ones (see [87, B] for more examples). Besides these classical examples let us mention that any log-convex density on (0, +∞) is infinitely divisible and so are many classes of log-concave measures (see [84, Chap. 10]). It is also noteworthy that the cube or more generally any positive power q ≥ 3 of a (centered) normal random variable is infinitely divisible. Indeed, when q is a nonnegative integer, i.e., q ∈ N, such that q ≥ 3, this is a direct consequence of [19, Theorem 7.3.6] while, for q ∈ R+ \ N with q ≥ 3, this follows from an adaptation of the proof of [19, Theorem 7.3.6] together with the fact that the probability density function of an α-stable distribution on (0, +∞) with index α ≤ 1/2 is hyperbolically completely monotone (see [20]). On the negative side, our framework does not encompass the Stein methodology developed in [40] for the binomial law since, as well known, a nondegenerate bounded random variable cannot be infinitely divisible. For a more recent example of an unbounded continuous law for which a Stein’s method has been developed but for which our approach does not apply, see Remark 3.4 (i). An important subclass of infinitely divisible distributions, which will play a substantial role in our study (see Chap. 5), is formed by the self-decomposable ones. Recall that a random variable X is self-decomposable (SD) if, for any c ∈ (0, 1), there exists a random variable X c independent of X such that X =d cX + X c .

(2.11)

Many of the examples of infinitely divisible distributions with density presented, above, are self-decomposable (again, see Chap. 5, for examples and standard properties of self-decomposability). As illustrated next, self-decomposable distributions are also fundamentally linked to limit theorems for sums of independent uniformly asymptotically negligible summands (see [84, Theorem 15.3]). Theorem 2.6 Let μ be a probability measure on R. Let (Z k )k≥1 be a sequence of independent random variables and let bn > 0, n ≥ 1, be such that lim max P (|bn Z k | ≥ ε) = 0.

n→+∞1≤k≤n

(2.12)

10

2 Preliminaries

Let cn ∈ R, n ≥ 1, be such that the distribution of bn nk=1 Z k + cn converges to μ, as n tends to +∞. Then, μ is self-decomposable. (In case the limit μ is nondegenerate, then necessarily bn → 0 and bn+1 /bn → 1, n → +∞.) Conversely, if X is selfdecomposable, one can always find (Z k )k≥1 , (bn )n≥1 and (cn )n≥1 , as above and also satisfying (2.12), such that, as n → +∞, (bn nk=1 Z k + cn )n≥1 converges in law toward X . A narrower class of ID laws, still SD, is formed by the stable ones. The characteristic function, ϕ, of a stable random variable has already been described via its Lévy measure, see (2.10), requiring also that σ 2 = 0. Another characterization asserts that ϕ is such that, for any a > 0, there exist b > 0 and c ∈ R satisfying, for all t ∈ R, ϕ(t)a = ϕ(bt)eict . A requirement, more akin to (2.11), for the stability of the random variable X , is that for any a1 > 0, a2 > 0, there exist a > 0 and b ∈ R such that a1 X + a2 X  =d a X + b, where X  is an independent copy of X . As far as weak convergence, to a stable law, is concerned, the following result is central (see [84, Theorem 15.7]). Theorem 2.7 Let μ be a probability measure on R. Let (Z k )k≥1 be a sequence of independent and identically distributed random variables. Then, μ is a stable probability measure if and only if there exist bn > 0 and cn ∈ R, n ≥ 1, such that, as n tends to +∞, the distribution of bn nk=1 Z k + cn converges toward μ.

2.3 Some Probability Metrics At this point, let us introduce various probability metrics, quantifying weak convergence theorems, which are of use throughout these notes. First, very classically, the Kolmogorov distance is: d K (X, Y ) : = sup |P(X ≤ x) − P(Y ≤ x)| = sup |Eh(X ) − Eh(Y )| , x∈R

h∈H

where H := {h : R → R, h = 1(−∞,x] , x ∈ R}. In a similar vein, the total variation distance, quantifying Poisson-convergence, is given by: dT V (X, Y ) : = sup |P(X ∈ A) − P(Y ∈ A)| = sup |Eh(X ) − Eh(Y )|, A∈B(R)

h∈H

where B(R) are the Borel subsets of R, and where H := {h : R → R, h = 1 A , A ∈ B(R)}. Next, recall that the smooth Wasserstein distance, dWr , r ≥ 0, is given by

2.3 Some Probability Metrics

11

dWr (X, Y ) = sup |Eh(X ) − Eh(Y )|,

(2.13)

h∈Hr

where Hr is the set of continuous functions which are r -times continuously differentiable and such that h (k) ∞ ≤ 1, for all 0 ≤ k ≤ r , where h (0) = h, and where h (k) , k ≥ 1, is the kth derivative of h, while  · ∞ is the corresponding supremum norm. At first, note that, by an approximation argument, d K (X, Y ) ≤ dW0 (X, Y ). Next, for r ≥ 1, another approximation argument (see, e.g., Appendix A of [4] or Lemma A.3 of the Appendix) shows that the smooth Wasserstein distance dWr can also be represented as dWr (X, Y ) =

sup

h∈Cc∞ (R)∩Hr

|E h(X ) − E h(Y )|,

(2.14)

where now, Cc∞ (R) is the space of compactly supported, infinitely differentiable functions on R. Moreover, by Lemma A.4, for r ≥ 2, √  dWr −1 (X, Y ) ≤ 3 2 dWr (X, Y ), and, also for r ≥ 2, −1 1  √  rk=1   1 2k−1 dW1 (X, Y ) ≤ 3 2 dWr (X, Y ) 2r −1 .

(2.15)

Another very natural distance metrizing the weak convergence of probability measures (see [39]) is the Fortet–Mourier distance given, for any two random variables X and Y , by d F M (X, Y ) =

sup |Eh(X ) − Eh(Y )|, h∈B Li p(1)

where B Li p(1) is the space of bounded Lipschitz functions, endowed with the norm  ·  B Li p := max( · ∞ ,  ·  Li p ), such that  ·  B Li p ≤ 1, where  ·  Li p is given, for any h Lipschitz, by  h  Li p := supx= y |h(x) − h(y)|/|x − y|. Lemma A.5 of the Appendix ensures that dW1 (X, Y ) = d F M (X, Y ). In [39], the bounded Lipschitz distance, d B L , is defined as d B L (X, Y ) := sup |Eh(X ) − Eh(Y )| , h∈H

where H is the set of bounded Lipschitz functions h on R such that h∞ + h Li p ≤ 1. Clearly, d B L (X, Y ) ≤ dW1 (X, Y ) ≤ 2 d B L (X, Y ),

12

2 Preliminaries

so that both metrics are equivalent, and equivalent to convergence in distribution (see [39, Theorem 11.3.3]). Also, Lemma A.6, via a further approximation argument, ensures that when the law of X has a bounded density h X with supremum norm h X ∞ ,

h X ∞  dW1 (X, Y ). d K (X, Y ) ≤ 1 + 2 This last inequality, combined with (2.15), leads to r −1

1 h X ∞  √  k=1 21k  3 2 d K (X, Y ) ≤ 1 + dWr (X, Y ) 2r , 2

r ≥ 2. Furthermore, the smooth Wasserstein distances and the classical Wasserstein distances are ordered in the following way: dWr (X, Y ) ≤ dW1 (X, Y ) ≤ W1 (X, Y ) ≤ W p (X, Y ),

(2.16)

for all r ≥ 1 and where, for any p ≥ 1, and any two random variables X and Y , each having finite absolute pth moment, W pp (X, Y ); = inf E|X − Y | p ,

(2.17)

where the infimum is taken over the set of probability measures on R × R with marginals, respectively, given by the law of X and the law of Y . Recall finally that convergence in Wasserstein- p distance is equivalent to convergence in law and convergence of the absolute pth moments (see, e.g., [93, Theorem 6.9]). In the rest of this text, the terminology Lévy measure is used to denote a positive Borel measure on R which is atomless at the origin and which integrates out the function f (x) = min(1, x 2 ). Moreover, following [84], for a real function f , the terminology “increasing” signifies that f (s) ≤ f (t) for s < t, while “decreasing” that f (s) ≥ f (t) for s < t. When equality is not allowed, strictly increasing or strictly decreasing are instead used. We also write ln for the natural logarithm. Finally, all our random variables are assumed to live on the same (rich enough) probability space (, F, P).

Chapter 3

Characterization and Coupling

We are now ready to present our first result which characterizes ID laws via a functional equality. This functional equality involves Lipschitz functions and we have to agree on what is meant by “Lipschitz”. Below, the functions we consider need not be defined on the whole of R but just on a subset of R containing R X , the range of X ∼ I D(b, 0, ν), and R X + Sν , where Sν is the support of ν. For example, if X is a Poisson random variable, a Lipschitz function (with Lipschitz constant 1) is then defined on N and is such that | f (n + 1) − f (n)| ≤ 1, for all n ∈ N. But, as well known, a Lipschitz function f defined on a subset S of R can be extended, to the whole of R, without increasing its Lipschitz semi-norm  f  Li p . (This can be done in various ways, e.g., for any x ∈ R, let f˜(x) = inf z∈S ( f (z) + |x − z|). Then, for any y ∈ R, f˜(x) ≤ inf z∈S ( f (z) + |x − y| + |y − z|) = |x − y| + f˜(y). Another extension is given via f¯(x) = supz∈S ( f (z) − |x − z|).) Now, below, in the integral representations and as integrands, f and f˜ are indistinguishable. Therefore, and since we do not wish to distinguish between, say, discrete and continuous random variables, in the sequel, Lipschitz will be understood in the classical sense, i.e., f ∈ Li p with Lipschitz constant C > 0, if | f (x) − f (y)| ≤ C|x − y|, for all x, y ∈ R, and f could then be viewed as the Lipschitz extension f˜. Throughout the text, the space of real-valued Lipschitz functions defined on some domain D is denoted by Li p(D), while the space of bounded Lipschitz ones is denoted by B Li p(D). Endowed with the norm  ·  B Li p , B Li p(D) is a Banach space. Finally, we denote the closed unit ball of Li p(D) by Li p(1), and, similarly, B Li p(1) denotes the closed unit ball of B Li p(D). Theorem 3.1 Let X be a random variable such that E|X| < +∞. Let b ∈ R and let +∞ ν be a positive Borel measure on R such that ν({0}) = 0, −∞ (1 ∧ u 2 )ν(du) < +∞ and |u|>1 |u|ν(du) < +∞. Then,  E X f (X ) − b f (X ) −



+∞ −∞

 ( f (X + u) − f (X )1|u|≤1 )uν(du) = 0,

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2019 B. Arras and C. Houdré, On Stein’s Method for Infinitely Divisible Laws with Finite First Moment, SpringerBriefs in Probability and Mathematical Statistics, https://doi.org/10.1007/978-3-030-15017-4_3

(3.1) 13

14

3 Characterization and Coupling

for all bounded Lipschitz function f if and only if X ∼ I D(b, 0, ν). Proof Note at first that, by the assumption on ν and f , the left-hand side of (3.1) is well defined and that, throughout the proof, interchanges of integrals and expectations are perfectly justified. The direct part of the statement is, in fact, a particular case of a covariance representation obtained in [55, Proposition 2]. Indeed, if X ∼ I D(b, 0, ν) and if f and g are two bounded Lipschitz functions, then Cov( f (X ), g(X )) =

 1 0

E

 +∞ −∞

( f (X z + u) − f (X z ))(g(Yz + u) − g(Yz ))ν(du)dz,

(3.2) where (X z , Yz ) is a two-dimensional ID vector with characteristic function defined by ϕz (t, s) = (ϕ(t)ϕ(s))1−z ϕ(t + s)z , for all z ∈ [0, 1], all s, t ∈ R and where ϕ is the characteristic function of X . In other words, (X z , Yz ) ∼ I D(bz , 0, νz ) where bz = (b, b) and νz = zν1 + (1 − z)ν0 , z ∈ [0, 1], where ν0 (dv, dw) = ν(dv)δ0 (dw) + δ0 (dv)ν(dw) is concentrated on the two main axes of R2 while ν1 (dv, dw) is the push-forward of ν to the main diagonal of R2 . Since X z =d Yz =d X where, again, =d stands for equality in distribution, taking g(y) = y (which is possible by first taking g R (y) = y1|y|≤R + R1 y≥R − R1 y≤−R for R > 0 and then passing to the limit), (3.2) becomes  EX f (X ) − EX E f (X ) = E

+∞ −∞

( f (X + u) − f (X ))uν(du).

(3.3)

To pass from (3.3) to (3.1), just note that since E|X  | < +∞, differentiating the characteristic function of X , shows that EX = b + |u|>1 uν(du). To prove the converse part of the equivalence, i.e., that (3.1), when valid for all bounded Lipschitz functions f , implies that X ∼ I D(b, 0, ν), it is enough to apply (3.1) to sines and cosines or equivalently to complex exponential functions and then to identify the corresponding characteristic function. For any s ∈ R, let f (x) = eisx , x ∈ R, then (3.1) becomes  EX eis X − bEeis X = Eeis X

+∞ −∞

(eisu − 1|u|≤1 )uν(du).

(3.4)

Setting ϕ(s) = Eeis X , (3.4) rewrites as   ϕ (s) = iϕ(s) b +

+∞ −∞

 (eisu − 1|u|≤1 )uν(du) .

Integrating out the real and imaginary parts of (3.5) leads, for any t ≥ 0, to:

(3.5)

3 Characterization and Coupling

15

   t  +∞ ϕ(t) = exp itb + i (eisu − 1|u|≤1 )uν(du)ds 0 −∞    +∞  t isu (e − 1|u|≤1 )u ds ν(du) = exp itb + i −∞ 0    +∞ itu (e − 1 − itu1|u|≤1 )ν(du) . = exp itb + −∞

A similar computation for t ≤ 0 finishes the proof.



Remark 3.2 (i) Both the statement and the proof of Theorem 3.1 carry over to X ∼ I D(b, σ 2 , ν). The corresponding version of (3.1) which characterizes X is then    +∞ ( f (X + u) − f (X )1|u|≤1 )uν(du) = 0. E X f (X ) − b f (X ) − σ 2 f (X ) − −∞

(3.6) In particular, if ν = 0, (3.6) is the well-known characterization of the normal law with mean b = EX and variance σ 2 . (ii) There are other ways to restate Theorem 3.1 for X such that E|X | < +∞. For example, if X ∼ I D(b, 0, ν), then  Cov(X, f (X )) = E

+∞ −∞

( f (X + u) − f (X ))uν(du).

(3.7)

Conversely, if (3.7) is satisfied for all bounded Lipschitz functions f , then X ∼ +∞ I D(b, 0, ν), where b = EX − −∞ u1|u|>1 ν(du). In case |u|≤1 |u|ν(du) < +∞, a further characterizing representation is   EX f (X ) − b −



|u|≤1



uν(du) E f (X ) = E

+∞ −∞

f (X + u)uν(du),

(3.8)

or equivalently,  EX f (X ) − b0 E f (X ) = E

+∞ −∞

f (X + u)uν(du),

(3.9)

i.e.,   EX f (X ) − EX −

+∞

−∞

  uν(du) E f (X ) = E

+∞ −∞

f (X + u)uν(du). (3.10)

Let us now specialize (3.1), (3.7) and (3.9) to various cases, some known and some new. Examples 3.3 (i) Of course, if X ∼ I D(λ, 0, λδ1 ), i.e., when X is a Poisson random variable with parameter λ = EX > 0, then (3.1) becomes the familiar

16

3 Characterization and Coupling

EX f (X ) = EX E f (X + 1).

(3.11)

More generally, if ν(du) = cδ1 (du), then X ∼ I D(b = EX, 0, cδ1 ).  k (ii) If X ∼ N Bin 0 (r, p), then, as indicated before, b0 = 0, ν(du) = r +∞ k=1 q δk (du)/k, with q = 1 − p, and so (3.9) becomes EX f (X ) = r E

∞ 

f (X + k)q k

k=1

= rqE f (X + 1) + r E

∞ 

f (X + k)q k

k=2

= rqE f (X + 1) +

∞  ∞ 

f ( j + k)r

k=2 j=0

= rqE f (X + 1) +

∞ 

= rqE f (X + 1) +

−2

f () pr q 

r  (r + k) (r ) k=0 k!

f () pr q 

1 (r +  − 1) (r ) ( − 2)!

=2 ∞ 

(r + j) r j k p q q (r ) j!

=2

= rqE f (X + 1) + qEX f (X + 1),

(3.12)

since (t + 1) = t(t), t > 0. Hence, (3.12) is exactly the negative binomial characterizing identity obtained in [21]. (iii) If X ∼ N Bin(r, p), (3.1) becomes EX f (X ) = E f (X ) + r E

∞ 

f (X + k)q k

k=1

= E f (X ) +

∞  ∞ 

f ( j + k)r

k=1 j=1

= E f (X ) +

∞ 

= E f (X ) +

=2

−1

f () pr q −1

r  (r + k − 1) (r ) k=1 (k − 1)!

f () pr q −1

1 (r +  − 1) (r ) ( − 2)!

=2 ∞ 

(r + j − 1) r j−1 k p q q (r )( j − 1)!

= E f (X ) + qE((r + X − 1) f (X + 1)),

(3.13)

which, in view of the previous example, is exactly the expected characterizing identity since X − 1 ∼ N Bin 0 (r, p).

3 Characterization and Coupling

17

(iv) If X ∼ CP(ν(R), ν0 ), then (3.1) (or (3.8)–(3.10)) becomes  EX f (X ) = E

+∞ −∞

 f (X + u)uν(du) = ν(R)E

+∞ −∞

f (X + u)uν0 (du), (3.14)

and (3.14) is the characterizing identity for the compound Poisson law given in [11]. (v) If X is the standard Laplace distribution with density e−|x| /2, x ∈ R, then 1 −1 −|u| ν(du) = |u| e du, u = 0, b = 0, −1 uν(du) = 0, and (3.1) (or (3.8)–(3.10)) becomes  +∞ f (X + u)sign(u)e−|u| du EX f (X ) = E −∞

= 2E ( f (X + L) sign(L))  +∞ =E ( f (X + u) − f (X − u))e−u du 0

= E( f (X + Y ) − f (X − Y )),

(3.15)

where sign(u) = u/|u|, u = 0, sign(0) = 0, where L is a standard Laplace random variable independent of X , while Y is a standard exponential random variable independent of X . In [79], a Stein-type operator for the Laplace distribution is introduced as E f (X ) − f (0) = E f (X ), for f twice differentiable on R and such that  f ∞ ,  f ∞ , and  f ∞ are all finite. Let now, g be a differentiable function on R such that g∞ < +∞ and g ∞ < +∞. Then, from (3.15) and Fubini theorem, 

+∞

(g(X + u) − g(X − u))e−u du   +∞  +u =E g (X + t)dt e−u du 0 −u    +∞ g (X + t) e−u du dt =E

EX g(X ) = E

0

−∞

u≥|t|

= 2E g (X + Y ), where Y is a standard Laplace random variable independent of X . (vi) If X is a Gamma random variable with parameters α > 0 and β > 0, then, see [37, 43, 65, 78], for f “nice”, E((β X − α) f (X )) = EX f (X ).

(3.16)

18

3 Characterization and Coupling

But, X is infinitely divisible with ν(du) = α1(0,+∞) (u) exp(−βu)/udu, and it follows from (3.1) that 1 − e−β E f (X ) β  ∞ + αE ( f (X + u) − f (X )1|u|≤1 )e−βu du.

EX f (X ) = α

(3.17)

0

Equivalently from (3.8)–(3.10), since b0 = 0, and since EX = α/β,  EX f (X ) = E = αE

+∞

f (X + u)uν(du)

−∞  +∞

f (X + u)e−βu du

0

α E f (X + Y ) β = EX E f (X + Y ), =

where Y is an exponential random variable, with parameter β, independent of X . Thus, Theorem 3.1 implies the existence of an additive size-bias (see, e.g., [31, Sect. 2]) distribution for the gamma distribution. Moreover, it says that the only probability measure which has an additive exponential size-bias distribution is the gamma one. (vii) Let 0 < θ < +∞ and let X be a generalized Dickman random variable with parameter θ defined through its characteristic function by   ϕ(t) = exp θ

1 0

 eitu − 1 du , u

for all t ∈ R. Thanks to Theorem 3.1, for all bounded Lipschitz function f 

1

EX f (X ) = θE f (X ) + θ

( f (X + u) − f (X ))du

0

= θE f (X + U ), where U is a uniform random variable on [0, 1] independent of X . Note, in particular, that one recovers the characterizing identity for the generalized Dickman distribution of [5, Chap. 4.2]. (viii) To complement this very partial list, let us consider an example where the literature is sparse (for the symmetric case, see [4, 94]), namely, the stable case. At first, let X be a symmetric α-stable random variable with α ∈ (1, 2), i.e., let X ∼ SαS. Then, b = 0 and (3.1) becomes

3 Characterization and Coupling

 EX f (X ) = E 

+∞

−∞

19

 

f (X + u) − f (X )1{|u|≤1} uν(du)

du (−u)α −∞   +∞  du f (X + u) − f (X )1{|u|≤1} α +E u 0  +∞  du = cE f (X + u) − f (X − u) α , u 0

=c −E

0



f (X + u) − f (X )1{|u|≤1}

and, therefore, the previous integral is a fractional operator acting on the test function f . Let us develop this point a bit more by adopting the notation of [83, Sect. 5.4]. The Marchaud fractional derivatives, of order β, of (a sufficiently nice function) f are defined by  +∞ f (x) − f (x − u) β du, (1 − β) 0 u 1+β  +∞ f (x) − f (x + u) β β du. D− ( f )(x) := (1 − β) 0 u 1+β β

D+ ( f )(x) :=

Note that the above operators are well defined for bounded Lipschitz functions as soon as β ∈ (0, 1). Then, in a more compact form α−1 EX f (X ) = Cα E(Dα−1 + ( f )(X ) − D− ( f )(X )),

(3.18)

where Cα = c(2 − α)/(α − 1). Now, for X ∼ SαS, [1, Proposition 3.2] or [94, Theorem 4.1] put forward the following characterizing equation: α

EX f (X ) = αE 2 f (X ),

(3.19)

where α/2 is the fractional Laplacian defined via α 2

 f (x) := dα

 R

f (x + u) − f (x) du, |u|1+α

where dα = (1 + α) sin(πα)/(2π cos(απ/2)) and the previous integral has to be understood as the Cauchy principal value, if it exists, namely,  R

f (x + u) − f (x) du = lim+ ε→0 |u|1+α

 R\[−ε,ε]

f (x + u) − f (x) du. |u|1+α

In particular, when the function f is twice continuously differentiable on R with first and second derivatives bounded, the fractional Laplacian admits the following representation which is more suited in this situation:

20

3 Characterization and Coupling α



 2 f (x) = dα

R

f (x + y) − f (x) − y f (x) dy. |y|1+α

On the right-hand side of (3.18), taking f (nice enough) as a test function leads to α−1 Dα−1 + ( f )(x) − D− ( f )(x) :=

α−1 (2 − α)



+∞

0

f (x + u) − f (x − u) du. uα

Moreover, α



f (x + y) − f (x) − y f (x) dy |y|1+α R    f (x + y) − f (x) − y f (x) f (x + y) − f (x) − y f (x) = dα dy + dy |y|1+α |y|1+α R+ R−    f (x + y) − f (x) − y f (x) f (x − y) − f (x) + y f (x) = dα dy + dy |y|1+α |y|1+α R+ R+  f (x + y) − f (x) + f (x − y) − f (x) dy = dα |y|1+α R+    y dy f (x + t) − f (x − t)dt = dα 1+α |y| 0 R+  dα dt = f (x + t) − f (x − t) α , (3.20) α R+ t

 2 f (x) = dα

showing, for X ∼ SαS, the equivalence of the two characterizing identities (3.18) and (3.19). For the general stable case with Lévy measure given, with c1 = c2 , by (2.10) and with b = 0, then in a straightforward manner, α−1 EX f (X ) := c2,α E(Dα−1 + ( f )(X )) − c1,α E(D− ( f )(X )) +

(c1 − c2 ) E f (X ), α−1 (3.21)

where c1,α = c1

(2 − α) , α−1

c2,α = c2

(2 − α) . α−1

(3.22)

(ix) Another class of infinitely divisible distributions which is of particular interest in a Malliavin calculus framework is the class of second-order Wiener chaoses. As it is well known, if X belongs to this class and if =d denotes equality in distribution, then X =d

+∞  k=1

λk (Z k2 − 1),

3 Characterization and Coupling

21

where (Z k )k≥1 is a sequence of iid standard normal random variables and where the sequence of reals (λk )k≥1 is square summable. Equivalently, the characteristic function of X is given by  ϕ(t) = exp

+∞ −∞

 (e

itu

− 1 − itu)ν(du) ,

where ⎛ ⎞ ⎞ −u/(2λ) −u/(2λ)   e e ν(du) ⎝ ⎠ 1(0,+∞) (u) + ⎝ ⎠ 1(−∞,0) (u), = du 2u 2(−u) ⎛

λ∈+

(3.23)

λ∈−

with  + = {λk : λk > 0} and − = {λk : λk < 0}. Thus, X ∼ I D(b, 0, ν) with b = − |u|>1 uν(du) and ν as in (3.23). The corresponding characterizing identity is therefore  EX f (X ) = E

+∞ −∞

( f (X + u) − f (X ))uν(du),

(3.24)

since also EX = 0. Remark 3.4 (i) In [68], the authors developed a Stein’s method √  for the two-sided Maxwell distribution whose density is given by f (x) = x 2 exp −x 2 /2 / 2π, for all x ∈ R. Now, since the Hermite functions are the eigenvectors of the Fourier transform, one can easily compute the characteristic function of the two-sided Maxwell distribution, and it is given by ϕ(t) = (1 − t 2 ) exp(−t 2 /2), for all t ∈ R. Since ϕ vanishes at t = ±1, the two-sided Maxwell distribution is not infinitely divisible. Moreover, the one-sided Maxwell distribution whose density is equal to 2x 2 exp(−x 2 )/ (3/2), for all x > 0, is not infinitely divisible either (see [87, Appendix B, Sect. 3, p 521]). (ii) The Tracy–Widom distribution (see, e.g., [90]) is another important distribution for which, unfortunately, our approach does not apply since it is not infinitely divisible (see [38]), and for which a Stein’s methodology still needs to be developed. As a first corollary to Theorem 3.1, the following characterizing identities result extends the notion of additive size-bias distribution to infinitely divisible probability  +1 measure with finite nonzero mean and such that −1 |u|ν(du) < +∞. Corollary 3.5 Let X be a nondegenerate random variable such that E|X | < +∞. Let ν be a Lévy measure such that 

+∞

−∞

and let b0 = b −

1 −1

|u|ν(du) < +∞,

uν(du), b ∈ R. Assume further that

(3.25)

22

3 Characterization and Coupling

m± 0 = max(±b0 , 0) +

 R

ν˜± (du) = 0

(3.26)

with ν(du) ˜ = uν(du). Then, − + − EX f (X ) = m + 0 E f (X + Y ) − m 0 E f (X + Y ),

(3.27)

for all bounded Lipschitz functions f , where the random variables Y + , Y − and X are independent with Y + and Y − having respective law μY ± (du) =

b0± ν˜± (du) , ± δ0 (du) + m0 m± 0

(3.28)

with b0+ = max(b0 , 0) and b0− = − min(b0 , 0), if and only if X ∼ I D(b, 0, ν). Proof Let f be a bounded Lipschitz function. By (3.9),  EX f (X ) = b0 E f (X ) + E

+∞ −∞

f (X + u)uν(du).

(3.29)

Now since E|X | < +∞ and thanks to (3.25), ν(du) ˜ = uν(du) is a finite signed measure and so its Jordan decomposition is given by ν(du) ˜ = ν˜+ (du) − ν˜− (du) = u1(0,+∞) (u)ν(du) − (−u)1(−∞,0) (u)ν(du). Therefore, (3.29) becomes  +∞  0 EX f (X ) = b0+ E f (X ) − b0− E f (X ) + E f (X + u)ν˜ + (du) − E f (X + u)ν˜ − (du). −∞

0

Now, + m+ 0 = b0 + − m− 0 = b0 +

 

R

R

ν˜+ (du) = b0+ + ν˜− (du) = b0− +



+∞

uν(du), 

0 0 −∞

(−u)ν(du),

− (b0 = b0+ − b0− and m + 0 − m 0 = EX ) and, therefore, introducing the random vari+ − ables Y and Y proves the direct implication. The converse implication follows directly from Theorem 3.1 or by first taking f (·) = eit· , t ∈ R, in (3.27) and then, as previously done in the proof of Theorem 3.1, by solving a differential equation.  + Remark 3.6 (i) If m − 0 = 0 and m 0 = 0, Corollary 3.5 remains valid when replacing the identity (3.27) by + EX f (X ) = m + 0 E f (X + Y ),

3 Characterization and Coupling

23

for all bounded Lipschitz functions f . A similar proposition also holds true when + m− 0 = 0 and m 0 = 0. (ii) When X ∼ I D(b, 0, ν) is nonnegative, then necessarily the support of its Lévy 1 measure is in (0, +∞), the condition 0 udν(u) < +∞ is automatically satisfied b0 ≥ 0 (see [84, Theorem 24.11]). In this context, for EX < +∞, b0 = EX − and +∞ uν(du), m 0 = m + 0 = EX and so, when EX > 0, the characterizing identity 0 (3.27) becomes EX f (X ) = EX E f (X + Y ),

(3.30)

where Y is a random variable independent of X whose law is given by μY (du) =

b0 u1(0,+∞) (u) δ0 (du) + ν(du). EX EX

(3.31)

This agrees with the standard notion of size-bias distribution for finite mean nonnegative random variable (see, e.g., [7]) and recovers and extends a result there. The pair (Y + , Y − ) in (3.27) will be called the additive size-bias pair associated with X. (iii) For X ≥ 0 with finite first moment, there is a natural relationship between sizebias distribution and equilibrium distribution with respect to X (see [76]). Corollary 3.5 also leads to an extension of this relationship. Namely, for X as in the corollary and f bounded Lipschitz, E f (X ) − f (0) = EX f (U X ), − + − = m+ 0 E f (U (X + Y )) − m 0 E f (U (X + Y )), where U is a uniform random variable on [0, 1] independent of X , Y + and Y − . (iv) For a nonnegative integer-valued random variable with finite mean X , [82] introduces another distributional transformation in the following way: the random variable X ∗r is said to have the r-equilibrium distribution if X ∗r =d Ur,X s , where r > 0, where X s has the size-bias distribution and where the random variable Ur,n has the distribution of the number of white balls drawn in n − 1 draws in a standard Pólya urn scheme starting with r white balls and a single black ball. Now, let X ∼ I D(b, 0, ν) be a nonnegative integer-valued random variable with finite first moment. Then, by (3.30), for all f bounded and Lipschitz EX f (X ) = EX E f (X + Y ), where Y is a random variable independent of X whose law is given by (3.31). Then, as in [82, Lemma 2.1], one shows that EX E f (X + Y ) = EX ED (r ) f (X ∗r ), with X ∗r =d Ur,X +Y and D (r ) ( f )(k) = (k/r + 1) f (k + 1) − k/r f (k), for all k ≥ 1, and so

24

3 Characterization and Coupling

EX f (X ) = EX ED (r ) f (X ∗r ). (v) Let us consider a nontrivial example for which the assumption (3.25) is not satisfied. Let X be a second-order Wiener chaos randomvariable such that, for all  k ≥ 1, λk > 0, with further k≥1 λk = +∞ (recall that k≥1 λ2k < +∞). Then, for the Lévy measure ν N given via ν N (du) = 1(0,+∞) (u)

  N exp − u  2λk k=1

2u

du,

we have 

+1 −1

|u|ν N (du) =

N  k=1

       N 1 1 λk 1 − exp − λk −→ +∞, ≥ 1 − exp − N →+∞ 2λk 2λmax k=1

 +1 where λmax = maxk≥1 λk . So, by monotone convergence, −1 |u|ν(du) = +∞. In particular, the Rosenblatt distribution belongs to this class of second Wiener chaos random variables, since asymptotically λk ∼ C D k D−1 , for some C D > 0 and D ∈ (0, 1/2) (see Theorem 3.2 of [92]).

k→+∞

As seen in some of the examples presented above, characterizing identities can involve local operators, e.g., derivatives, while our generic characterization is nonlocal, involving difference operators. Let us explain, next, how to pass from one to the other also encouraging the reader to contemplate how this passage is linked to the notion of zero-bias distribution (see [47]) with an additive structure. Remark 3.7 As just indicated, let us present a general methodology valid for X ∼ I D(b, 0, ν) such that X ≥ 0 and 0 < EX < +∞, to pass from the nonlocal characterization of Theorem 3.1 to a local characterization. Again, since X ≥ 0, then neces1 sarily the support of ν is in (0, +∞), 0 uν(du) < +∞ and b0 ≥ 0 (see [84]). Hence,  +∞ from the finite mean assumption and for all v > 0, η(v) = v uν(du) < +∞. Therefore, denoting by μ the law of X , for any bounded Lipschitz function f ,  +∞ Cov(X, f (X )) = E ( f (X + u) − f (X ))uν(du) 0   +∞  +∞  u = f (x + v)dv uν(du)μ(d x) 0 0 0  +∞  +∞ = f (x + v)η(v)dvμ(d x) 0 0  +∞ = f (y)(η ∗ μ)(dy),

(3.32)

0

where η ∗ μ is the convolution of the law μ with the positive Borel measure η(dv) = η(v)dv. Since η ∗ μ is absolutely continuous with respect to the Lebesgue measure,

3 Characterization and Coupling

25

denoting its Radon–Nikodym derivative by h, then h(y) = (3.32) becomes 

+∞

Cov(X, f (X )) =

y 0

η(y − v)μ(dv), and

f (y)h(y)dy.

(3.33)

0

In particular, when X has an exponential distribution, then h(y) = ye−y and (3.33) becomes the classical relation Cov(X, f (X )) = EX f (X ).

(3.34)

In general, probability law, it is a positive measure, not necessarily finite,  +∞ η ∗ μ is nota+∞ since 0 η(v)dv = 0 u 2 ν(du). In case X is nondegenerate with EX 2 < +∞,  +∞ i.e., 1 u 2 ν(du) < +∞, (3.32) can be rewritten as Cov(X, f (X )) = η((0, +∞))E f (X + Y ),

(3.35)

 +∞ where η((0, +∞)) = 0 u 2 ν(du) < +∞, and Y , with law η/η((0, +∞)), is independent of X . In view of our previous corollary, it is a simple matter to modify the above arguments in case the condition X ≥ 0 is not satisfied. The corresponding result is then given by the following proposition, whose proof is briefly sketched and whose statement is, again, also related to the notion of zero-bias distribution (see [47]). Proposition 3.8 Let X be a nondegenerate random variable such that EX 2 < +∞. Let b ∈ R, and let ν = 0 be a Lévy measure such that  |u|>1

u 2 ν(du) < +∞.

(3.36)

Then,  Cov(X, f (X )) =

+∞

−∞



u ν(du) E f (X + Y ), 2

(3.37)

for all bounded Lipschitz functions f , where the random variables X and Y are independent, with the law of Y given by η(u) du, μY (du) =  +∞ 2 −∞ u ν(du) and where η is defined, for all v ∈ R, by η(v) := η+ (v)1(0,+∞) (v) + η− (v)1(−∞,0) (v),

26

3 Characterization and Coupling

with η+ and η− , respectively, defined on (0, +∞) and on (−∞, 0) via  η+ (v) =



+∞

η− (v) =

uν(du), v

v −∞

(−u)ν(du),

if and only if X ∼ I D(b, 0, ν). Proof Let us first sketch the proof of the direct implication. If μ denotes the law of X , then from Theorem 3.1 and our hypotheses,  Cov(X, f (X )) =E  =

+∞

( f (X + u) − f (X ))uν(du)

−∞ +∞  +∞

−∞

=

+∞

+∞ 

−∞ +∞



0



−∞



+∞



−∞ +∞

+  =

+∞

−∞

0



+∞



−∞ +∞

+  =

=

+∞

−∞







+ 

( f (x + u) − f (x))uν(du)μ(d x)

0

+∞

−∞

−∞



+∞

−∞

0

( f (x + u) − f (x))uν(du)μ(d x)  f (x + v)dv uν(du)dμ(d x)

−∞ u 0 0 −∞



0

 f (x + v)dv (−u)ν(du)μ(d x)

u

f (x + v)η+ (v)dvμ(d x) 

0

f (x + v)η− (v)dvdμ(d x) −∞  f (x + v) η+ (v)1(0,+∞) (v)  + η− (v)1(−∞,0) (v) dvμ(d x) f (x + v)η(v)dvμ(d x).

(3.38)

The conclusion then easily follows by the very definition of Y and the assumption (3.36). The converse implication is a direct consequence of the converse part of Theorem 3.1 or, as before, follows by taking f (·) = eit· , t ∈ R in (3.37).  Remark 3.9 (i) The previous proposition can, in particular, be applied to the twosided exponential distribution with parameters α > 0 and β > 0. In this case,  the Lévy measure is given by ν(du) = e−αu /u1(0,+∞) (u) − eβu /u1(−∞,0) (u) du. Then, the condition (3.36) is readily satisfied, and the law of Y has the following density: f Y (t) =

 −αt  e α2 β 2 eβt 1 1 (t) + (t) . (0,+∞) (−∞,0) α2 + β 2 α β

(3.39)

3 Characterization and Coupling

27

(ii) As done in Corollary 3.5, Proposition 3.8 extends the notion of zero-bias distribution to all infinitely divisible nondegenerate distributions with finite variance. The random variable Y in (3.37) will be called the extended zero-bias distribution associated with X . (iii) Another possible writing for (3.37), more in line with (3.27), is Cov(X, f (X )) = η+ ((0, +∞)) E f (X + Y + ) + η− ((−∞, 0)) E f (X + Y − ), (3.40) where Y + and Y − have respective law η+ (u)1(0,+∞) (u) du, η+ ((0, +∞)) η− (u)1(−∞,0) (u) μY − (du) = du, η− ((−∞, 0))

μY + (du) =

and where 

+∞

η+ ((0, +∞)) = 0

 u 2 ν(du), η− ((−∞, 0)) :=

0

u 2 ν(du).

−∞

Remark 3.10 In the Stein’s method literature, the size-bias and the zero-bias distributions are powerful tools which have been efficiently used in several situations. Classically, they have been used in conjunction with coupling techniques to produce quantitative results for Poisson and normal approximations (see, e.g., [31, 81]). More recently, these two concepts, combined again with coupling techniques, have also been used to prove concentration inequalities (see, e.g., [6, 15, 44, 46]). Note that Corollary 3.5 and Proposition 3.8 are characterizing results for nondegenerate infinitely divisible distributions whose Lévy measures satisfy appropriate moment conditions. These results suggest the introduction of extended notions of size-bias and zero-bias for nondegenerate random variables with finite first and second moments, respectively. For example, they could be of the type EX f (X ) = EX + E f ( X˜ + ) − EX − E f ( X˜ − ), for some random variables X˜ + , X˜ − and would reduce to (3.27), i.e., to an additive framework, in the infinitely divisible case. In particular, thanks to these (covariance) representations, new methodologies to obtain concentration results for infinitely divisible distributions might be reachable and would, for example, complement [53, 54]. Finally, the reader is referred to [7] which emphasizes connections between size-bias distribution and several themes of probability theory. It is important to note that the stable distributions with α ∈ (1, 2) do satisfy neither the assumptions of Corollary 3.5 nor those of Proposition 3.8. Nevertheless, our next

28

3 Characterization and Coupling

result which is a mixture of the two previous ones characterizes infinitely divisible distributions with finite first moment, and in particular, the stable ones. For this purpose, we introduce the following functions, respectively, well defined on (0, 1) and on (−1, 0):  η+ (v) =



1

uν(du), v

η− (v) =

v

−1

(−u)ν(du),

and note that since ν is a Lévy measure, for all v ∈ (0, 1),  vη+ (v) ≤

1



1

u ν(du) ≤ 2

v

u 2 ν(du) < +∞,

0

and similarly for η− . Proposition 3.11 Let X be a nondegenerate random variable such that E|X | < +∞. Let b ∈ R and let ν be a Lévy measure such that  0
1

|u|ν(du) < +∞, and

1 −1

u 2 ν(du) > 0.

(3.41)

Then, Cov(X, f (X )) =

 1 −1

 u 2 ν(du)

E f (X + U ) + mE f (X + V+ ) − mE f (X + V− ),

(3.42) for all bounded Lipschitz functions f , where m = m + + m − with m ± defined via 

+∞

m+ =

 uν(du), m − =

1

−1 −∞

(−u)ν(du),

and where the random variables X , U , V+ and V− are independent, with the laws of U , V+ and V− , respectively, given by η+ (u)1(0,1) (u) + η− (u)1(−1,0) (u) du,  +1 2 −1 u ν(du) m− u μV+ (du) = δ0 (u) + 1(1,+∞) (u)ν(du), m m m+ −u δ0 (u) + 1(−∞,−1) (u)ν(du), μV− (du) = m m

μU (du) =

if and only if X ∼ I D(b, 0, ν). Proof First, let X ∼ I D(b, 0, ν), and denote its law by μ. Then, from Theorem 3.1, for any bounded Lipschitz function,

3 Characterization and Coupling

29

 Cov(X, f (X )) = E  =E

+∞ −∞

|u|≤1

( f (X + u) − f (X ))uν(du)

( f (X + u) − f (X ))uν(du)



+E

|u|>1

( f (X + u) − f (X ))uν(du).

To continue, let us perform steps similar to those of Proposition 3.8 and Corollary 3.5 for, respectively, the first and second terms of the previous sum. For the first one,  E

 |u|≤1

1

( f (X + u) − f (X ))uν(du) = E

( f (X + u) − f (X ))uν(du)

0



+E

−∞



+

−∞



+

−∞

 =

−∞

+∞  +∞ −∞

 f (x + v)dv (−u)ν(du)μ(d x)

u

f (x + v)η+ (v)dvμ(d x)

0 +∞  0

−∞ −1 +∞  +∞

 =

 f (x + v)dv uν(du)dμ(d x)

0 0 +∞  0  0

−∞ −1 +∞  1

 =

( f (X + u) − f (X ))uν(du)

−1 +∞  1  u

 =

0

−∞

f (x + v)η− (v)dvdμ(d x)

 f (x + v) η+ (v)1(0,1) (v)  + η− (v)1(−1,0) (v) dvμ(d x) f (x + v)η(v)dvμ(d x).

For the second term,  E

 |u|>1

( f (X + u) − f (X ))uν(du) = E  =E

 |u|>1 +∞ 1

f (X + u)uν(du) − E f (X ) 

|u|>1

uν(du)

−1

f (X + u)uν(du) − E f (X + u)(−u)ν(du) −∞  − E f (X ) uν(du) |u|>1

= m (E f (X + V+ ) − E f (X + V− )) .

The conclusion then easily follows from the very definition of U , V+ and V− and the assumption (3.41). The converse implication is a direct consequence of the converse part of Theorem 3.1 or, as before, follows by taking f (·) = eit· , t ∈ R in (3.42). 

Chapter 4

General Upper Bounds by Fourier Methods

The Fourier methodology developed in [91] to study the Stein’s equation in the Gaussian setting, often nowadays referred to as the Stein–Tikhomirov method, has been extended in [4] to provide rates of convergence in Kolmogorov or in smooth Wasserstein distance for sequences (X n )n≥1 converging toward X ∞ . This approach leads to quantitative estimates when X ∞ is a second-order Wiener chaos, or the generalized Dickman distribution or even the symmetric α-stable one. Corollary 3.5, or Proposition 3.8, or even the stable characterizing identities of the previous chapter allow extensions of the aforementioned estimates to classes of infinitely divisible sequences. The forthcoming results are general and have a non-empty intersection with those on the Dickman distribution presented in [4]. Theorem 4.1 Let X n ∼ I D(bn , 0, νn ), n ≥ 1, be a sequence of nondegenerate random variables converging in law toward the nondegenerate X ∞ ∼ I D(b∞ , 0, ν∞ ), with also E|X n | < +∞, E|X ∞ | < +∞ and 

+1

−1

 |u|νn (du) < +∞,

+1

−1

|u|ν∞ (du) < +∞,

(4.1)

n ≥ 1. Further, for all t ∈ R, let  |ϕ∞ (t)| 0

|t|

ds ≤ C∞ |t| p∞ , |ϕ∞ (s)|

(4.2)

where ϕ∞ is the characteristic function of X ∞ and where C∞ > 0, p∞ ≥ 1. Let the law of X ∞ be absolutely continuous with respect to the Lebesgue measure and have a bounded density. Then, 1

 d K (X n , X ∞ ) ≤ C∞ np∞ +2 ,

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2019 B. Arras and C. Houdré, On Stein’s Method for Infinitely Divisible Laws with Finite First Moment, SpringerBriefs in Probability and Mathematical Statistics, https://doi.org/10.1007/978-3-030-15017-4_4

31

32

4 General Upper Bounds by Fourier Methods

where + n − ∞ − n =|(m n0 )+ − (m ∞ 0 ) | + |(m 0 ) − (m 0 ) | + + + ∞ − − − + (m ∞ 0 ) E|Yn − Y∞ | + (m 0 ) E|Yn − Y∞ |, ± ± where (m n0 )± , Yn± and (m ∞ 0 ) , Y∞ are the quantities defined in Corollary 3.5, respec > 0 depends on the supremum tively, associated with X n and X ∞ , and where C∞ norm of the density of X ∞ but is independent of n.

Proof From Corollary 3.5 applied to X n and X ∞ , let n ± ∞ ± ± (t), ± n (t) := (m 0 ) ϕYn± (t) − (m 0 ) ϕY∞ ∞ + ∞ − S∞ (t) := (m 0 ) ϕY∞+ (t) − (m 0 ) ϕY∞− (t),

εn (t) := ϕn (t) − ϕ∞ (t), ± . Now, thanks to where ϕYn± and ϕY∞± are the characteristic functions of Yn± and Y∞ it. the identity (3.27) applied to the test functions f (·) = e ,

 1 d ϕ∞ (t) = ϕ∞ (t)S∞ (t), i dt  1 d − ϕn (t) = ϕn (t)S∞ (t) + ϕn (t)+ n (t) − ϕn (t)n (t). i dt Subtracting these last two expressions, recalling also that the characteristic function of an ID law never vanishes, leads to:  d εn (t) d − εn (t) = (ϕ∞ (t)) + iϕn (t)(+ n (t) − n (t)), dt ϕ∞ (t) dt since S∞ (t) =

d 1 (ϕ∞ (t)). iϕ∞ (t) dt

Then, straightforward computations imply that for all t ≥ 0:  εn (t) = iϕ∞ (t) 0

t

 ϕn (s)  + (n (s) − − n (s)) ds, ϕ∞ (s)

− and similarly for t ≤ 0. Let us next bound the difference, + n (s) − n (s). First, − |+ n (s) − n (s)| ≤ I + I I + I I I + I V,

4 General Upper Bounds by Fourier Methods

33

where + I := |(m n0 )+ − (m ∞ 0 ) |, − I I := |(m n0 )− − (m ∞ 0 ) |, + ∞ + + + + (s)| ≤ (m I I I := (m ∞ 0 ) |ϕYn+ (s) − ϕY∞ 0 ) |s|E|Yn − Y∞ |, ∞ − ∞ − − − I V := (m 0 ) |ϕYn− (s) − ϕY∞− (s)| ≤ (m 0 ) |s|E|Yn − Y∞ |.

Hence, 

 ds  n + + n − ∞ − |(m 0 ) − (m ∞ 0 ) | + |(m 0 ) − (m 0 ) | |ϕ (s)| ∞ 0  t  |s|ds  ∞ + + − − − (m 0 ) E|Yn+ − Y∞ + |ϕ∞ (t)| | + (m ∞ 0 ) E|Yn − Y∞ | . |ϕ (s)| ∞ 0

|εn (t)| ≤ |ϕ∞ (t)|

t

Then, using (4.2), together with the definition of n , entails |εn (t)| ≤ C∞ (|t| p∞ + |t| p∞ +1 )n .

(4.3)

Since the law of X ∞ has a bounded density, applying the classical Esseen inequality (see, e.g., [77, Theorem 5.1]) gives, for all T > 0,  d K (X n , X ∞ ) ≤ C1

T −T

|εn (t)| h ∞ ∞ dt + C2 , |t| T

(4.4)

where C1 and C2 are positive (absolute) constants, while h ∞ ∞ is the essential supremum of the density h ∞ of the law of X ∞ . Next, plugging (4.3) into (4.4), it follows that   C d K (X n , X ∞ ) ≤ C1 T p∞ + T p∞ +1 n + 2 . T 1

The choice T = (1/n ) p∞ +2 concludes the proof.



Remark 4.2 (i) Let us briefly discuss the growth condition on the limiting characteristic function, namely, the requirement that for all t ∈ R,  L(ϕ)(t) := |ϕ(t)| 0

|t|

ds ≤ C|t| p , |ϕ(s)|

(4.5)

for some C > 0 and p ≥ 1. When the limiting distribution is the standard normal one, the functional L(ϕ) is the Dawson integral associated with the normal. It decreases to zero at infinity, and for any t ∈ R,

34

4 General Upper Bounds by Fourier Methods

L(ϕ)(t) := e

2

− t2



|t|

2|t| . 1 + t2

s2

e 2 ds ≤

0

(4.6)

Various behaviors are possible for this (generalized Dawson) functional, see [4]. As detailed below, in a general gamma setting, (4.5) holds true with p = 1 while in the stable case (see Lemma 10 of Appendix B in [4]), for 1 < α < 2, and t > 0,  α L(ϕ)(t) ≤ t 1−α /c + Ce−ct ,

(4.7)

 c−1/α α where C = 0 ecs ds and c = c1 + c2 as given in (2.10). In particular, C ≤ e/c1/α . Moreover, for t small, (4.7) can be replaced by L(ϕ)(t) ≤ |t|. Then, for some constant C  > 0 only depending on α and c, and for all t ∈ R, |t| . 1 + |t|α

L(ϕ)(t) ≤ C 

(4.8)

For the compound Poisson case with finite Lévy measure ν, Lϕ(t) = e

ν(R)

 +∞ −∞

(cos(ut)−1)ν0 (du)



|t|

eν(R)

 +∞ −∞

(1−cos(us))ν0 (du)

ds ≤ e2ν(R) |t|, t ∈ R.

0

For the generalized Dickman distribution as considered in [4], a linear growth can also be obtained from the corresponding characteristic function. (ii) As it is well known, e.g., see [81],  (4.9) d K (X n , X ∞ ) ≤ 2h ∞ ∞ W1 (X n , X ∞ ), where again h ∞ ∞ is the supremum norm of h ∞ , the bounded density of the law of X ∞ , and where W1 is the Wasserstein-1 distance as given in (2.17) which also admits the following well-known representation: W1 (X, Y ) = sup |Eh(X ) − Eh(Y )|, h∈Li p(1)

for X, Y random variables with finite first moment. Therefore, to go beyond the bounded density case, e.g., to consider discrete limiting laws, it is natural to explore convergence rates in (smooth) Wasserstein. Under uniform (exponential) integrability, such issues can be tackled. For example, instead of the bounded density assumption, let, for some λ > 0 and α ∈ (0, 1], α

sup E eλ|X n | < +∞, , i.e., sup n≥1

n≥1

 |u|>1

α

eλ|u| νn (du) < +∞,

(4.10)

4 General Upper Bounds by Fourier Methods

35

then,  n | ln n | 2α , dW p∞ +2 (X n , X ∞ ) ≤ C∞ 1

(4.11)

 and n are as in the previous theorem. The proof of (4.11) uses the where C∞ pointwise estimate (4.3) combined with the assumption (4.10) and with the statement and conclusion of [4, Theorem 1].

Proposition 3.8 also provides quantitative upper bounds in Kolmogorov distance. This is the content of the next proposition whose statement is similar to that of Theorem 4.1. Proposition 4.3 Let X n ∼ I D(bn , 0, νn ), n ≥ 1, be a sequence of nondegenerate random variables converging in law toward X ∞ ∼ I D(b∞ , 0, ν∞ ) (nondegenerate) and such that E|X n |2 < +∞, E|X ∞ |2 < +∞, n ≥ 1. Let also, for all t ∈ R,  |ϕ∞ (t)| 0

|t|

ds ≤ C∞ |t| p∞ , |ϕ∞ (s)|

(4.12)

where ϕ∞ is the characteristic function of X ∞ and where C∞ > 0, p∞ ≥ 1. Let the law of X ∞ be absolutely continuous with respect to the Lebesgue measure and have a bounded density, then 1

 np∞ +3 , d K (X n , X ∞ ) ≤ C∞

(4.13)

n = |ηn − η∞ | + |EX n − EX ∞ | + E|Yn − Y∞ |,

(4.14)

where

where Yn and Y∞ are the random variables defined in Proposition 3.8, respectively, associated with X n and X ∞ , where  ηn :=

+∞ −∞

 u νn (du), 2

η∞ :=

+∞ −∞

u 2 ν∞ (du),

 > 0 is independent of n. and where C∞

Proof The proof of this proposition is very similar to the proof of Theorem 4.1 and so it is only sketched. From Proposition 3.8 applied to X n and X ∞ , let m ∞ := EX ∞ , m n := EX n , n (t) := t (η∞ ϕY∞ (t) − ηn ϕYn (t)) + i(m n − m ∞ ), R∞ (t) := −tη∞ ϕY∞ (t) + im ∞ , ε(t) := ϕn (t) − ϕ∞ (t).

36

4 General Upper Bounds by Fourier Methods

Next, thanks to the identity (3.37) applied to the test functions f (·) = eit. ,  d ϕn (t) = R∞ (t)ϕn (t) + ϕn (t)n (t), dt  d ϕ∞ (t) = R∞ (t)ϕ∞ (t). dt Subtracting the last two expressions, it follows that  d εn (t) d εn (t) = (ϕ∞ (t)) + ϕn (t)n (t). dt ϕ∞ (t) dt Then, straightforward computations imply that for all t ≥ 0,  εn (t) = ϕ∞ (t) 0

t

ϕn (s) n (s)ds, ϕ∞ (s)

and similarly for t ≤ 0. Combining this last expression with (4.12) and with standard estimates give |εn (t)| ≤ C∞ (|t| p∞ + |t| p∞ +1 + |t| p∞ +2 )n . Finally, proceeding as in the end of the proof of Theorem 4.1 concludes the proof.  ± Remark 4.4 (i) Since the random variables (Yn± , Y∞ ) (resp. (Yn , Y∞ )) in Theorem 4.1 (resp. Proposition 4.3) are independent of (X n , X ∞ ), one can choose any of their couplings. In particular, n in Theorem 4.1 can be replaced by + n − ∞ − n = |(m n0 )+ − (m ∞ 0 ) | + |(m 0 ) − (m 0 ) | + + + ∞ − − − + (m ∞ 0 ) W1 (Yn , Y∞ ) + (m 0 ) W1 (Yn , Y∞ ).

(4.15)

Similarly, the quantity n of Proposition 4.3 can be replaced by n = |ηn − η∞ | + |EX n − EX ∞ | + W1 (Yn , Y∞ ).

(4.16)

(ii) Recall that the Wasserstein-1 distance between two random variables X and X˜ both having finite first moment, and respective law μ and μ˜ can also be represented as  +∞ |Fμ (t) − Fμ˜ (t)|dt, (4.17) W1 (X, X˜ ) = −∞

where Fμ and Fμ˜ are the respective cumulative distribution functions of μ and μ. ˜ Combining the above with Proposition 3.8, (4.16) becomes

4 General Upper Bounds by Fourier Methods 

37 

 νn (dv) ν∞ (dv) − dt ηn η∞ −∞ −∞      +∞  +∞  0 ν∞ (dv) ν∞ (dv) νn (dv) 2 νn (dv) + + v(v ∧ t) − v − dt. ηn η∞ ηn η∞ 0 0 −∞ 

n = |ηn − η∞ | + |EX n − EX ∞ | +

0

t

(−v)(t − v)

(4.18)

2 (iii)

Next, for the second-order chaoses X n = +∞ k=1 λn,k (Z k − 1)/2, n ≥ 1 and +∞ 2 X ∞ = k=1 λ∞,k (Z k − 1)/2, with λn,k > 0 and λ∞,k > 0, for all k ≥ 1, n in (4.18) becomes  n = 2



+∞ +∞

0

(λ2∞,k e

− 2λ t

∞,k

− λ2n,k e

n,k ) dt.

− 2λt

k=1

(4.19)

Similar computations can be done using (4.15) and (4.17). (iv) Again, the Kolmogorov distance can be replaced by a smooth Wasserstein one. (Replacing also the bounded density assumption.) Indeed, if for some λ > 0 and α α ∈ (0, 1], sup E eλ|X n | < +∞, then n≥1  n | ln n | 2α , dW p∞ +3 (X n , X ∞ ) ≤ C∞ 1

as easily seen by simple modifications of the techniques presented above. (v) Any sequence of infinitely divisible random variables converging in law has a limiting distribution which is itself infinitely divisible, e.g., [84, Lemma 7.8]. It is thus natural to ask for conditions for such convergence as well as for quantitative versions of it. On this subject, [84, Theorem 8.7] provides necessary and sufficient conditions ensuring the weak convergence of sequences of infinitely divisible distributions. Namely, it requires that, as n → +∞,  βn = bn +

+∞

−∞

  u c(u) − 1|u|≤1 νn (du) −→ β∞ ,

and that u 2 c(u)dνn =⇒ u 2 c(u)dν∞ , for some bounded continuous function c from R to R such that c(u) = 1 + o(|u|), as |u| → 0, and c(u) = O(1/|u|), as |u| → +∞. Therefore, Theorem 4.1 and Proposition 4.3 provide quantitative versions of these results. The previous results do not encompass the case of the stable distributions since neither (4.1) nor E|X ∞ |2 < +∞ are satisfied. To obtain quantitative convergence results toward more general ID distributions, let us present a result valid for some

38

4 General Upper Bounds by Fourier Methods

classes of self-decomposable laws. Again, below and elsewhere, we follow [84] and use the terminology increasing or decreasing in a non-strict sense. Proposition 4.5 Let X n ∼ I D(bn , 0, νn ), n ≥ 1, be a sequence of nondegenerate random variables converging in law toward X ∞ ∼ I D(b∞ , 0, ν∞ ) (nondegenerate) and such that E|X n | < +∞, n ≥ 1, E|X ∞ | < +∞, and let ψ1,n (u) ψ2,n (−u) 1(0,+∞) (u)du + 1(−∞,0) (u)du, u (−u) ψ1,∞ (u) ψ2,∞ (−u) ν∞ (du) := 1(0,+∞) (u)du + 1(−∞,0) (u)du, u (−u) νn (du) :=

where ψ1,n , ψ2,n , ψ1,∞ and ψ2,∞ are nonnegative decreasing functions on (0, +∞). Let also, for all t ∈ R,  |ϕ∞ (t)| 0

|t|

ds ≤ C∞ |t| p∞ , |ϕ∞ (s)|

(4.20)

where ϕ∞ is the characteristic function of X ∞ and where C∞ > 0, p∞ ≥ 1. Finally, let the law of X ∞ be absolutely continuous with respect to the Lebesgue measure and have a bounded density. Then, 1

 (n ) p∞ +2 , d K (X n , X ∞ ) ≤ C∞

where n =|EX n − EX ∞ | + +

 0 −1

 1 0

|u||ψ1,n (u) − ψ1,∞ (u)|du +

|u||ψ2,n (−u) − ψ2,∞ (−u)|du +

 −1 −∞

 +∞ 1

|ψ1,n (u) − ψ1,∞ (u)|du

|ψ2,n (−u) − ψ2,∞ (−u)|du,

 and where C∞ > 0 is independent of n.

Proof Again, this proof is very similar to the proof of Theorem 4.1 and so it is only sketched. Let m n := EX n ,

m ∞ := EX ∞ ,  +∞  +∞ n (t) := m n − m ∞ + (eitu − 1)uνn (du) − (eitu − 1)uν∞ (du), −∞ −∞  +∞ itu (e − 1)uν∞ (du), S∞ (t) := m ∞ + −∞

εn (t) := ϕn (t) − ϕ∞ (t).

4 General Upper Bounds by Fourier Methods

39

Applying the identity (3.7) to X n and X ∞ with f (·) = eit. gives d (ϕn (t)) = in (t)ϕn (t) + i S∞ (t)ϕn (t), dt d (ϕ∞ (t)) = i S∞ (t)ϕ∞ (t), dt and thus, d ϕ (t) (εn (t)) = ∞ εn (t) + in (t)ϕn (t). dt ϕ∞ (t) Therefore, for all t ≥ 0,  εn (t) = iϕ∞ (t) 0

t

ϕn (s) n (s)ds, ϕ∞ (s)

and similarly for t ≤ 0. Let us now bound the quantity n (·)  +∞  +∞ isu isu (e − 1)uνn (du) − (e − 1)uν∞ (du) |n (s)| ≤ |m n − m ∞ | + −∞ −∞   1 ≤ 2(1 + |s|) |m n − m ∞ | + + |u||ψ1,n (u) − ψ1,∞ (u)|du 0



+∞

+  +

1 −1 −∞

|ψ1,n (u) − ψ1,∞ (u)|du +



0 −1

|u||ψ2,n (−u) − ψ2,∞ (−u)|du

 |ψ2,n (−u) − ψ2,∞ (−u)|du ,

≤ 2(1 + |s|)n . This implies |εn (t)| ≤ C∞ (|t| p∞ + |t| p∞ +1 )n . To conclude the proof, proceed as in the end of the proof of Theorem 4.1.



Remark 4.6 Recalling (4.8), note that the stable distributions do satisfy the assumptions of Proposition 4.5. However, the very specific properties of their Lévy measure entail detailed computations in order to reach a precise rate of convergence. To illustrate how our methodology can be adapted to obtain a rate of convergence toward a stable law, we present an example pertaining to the domain of normal attraction of the symmetric α-stable distribution. Let 1 < α < 2 and let c := (1 − α)/(2(2 − α) cos(απ/2)) and λ := (2c)1/α . Then, denote by f 1 (x) := α(2λ)−1 (1 + |x|/λ)−α−1 the density of the Pareto law with parameters α > 0 and λ > 0. As well known, this random variable is infinitely divisible, see [87, Chap. IV,

40

4 General Upper Bounds by Fourier Methods

Example 11.6], and belongs to the domain of normal attraction of the symmetric αstable distribution, [77]. Our version of the Pareto density differs from the one considered in [33, 60, 94] given by f (x) := αλα /(2|x|α+1 )1|x|>λ , which is not infinitely divisible. Indeed, since f is symmetric, its characteristic function ϕ is real-valued, and by standard computations, α



ϕ(s) = 1 − s + α 0

1

1 − cos(λys) αs 2 λ2 α , dy ≤ 1 − s + y α+1 2(2 − α)

s > 0. Now, it is not difficult to see that the above right-hand side can take negative values, e.g., for α = 3/2 and s = 2, so ϕ would then have to vanish, contradicting infinite divisibility. Proposition 4.7 Let (ξi )i≥1 of iid random variables such that ξ1 ∼ f 1 .

nbe a sequence ξi /n 1/α , n ≥ 1, and let X ∞ ∼ SαS have characterFor 1 < α < 2, let X n = i=1 istic function ϕ∞ (t) = exp(−|t|α ), t ∈ R. Then, d K (X n , X ∞ ) ≤

C n α −1 2

,

(4.21)

for some C > 0 which depends only on α. Proof Let ϕn be the characteristic function of X n , n ≥ 1. Adopting the notation of the proof of Proposition 4.5, for all t ≥ 0,  εn (t) = iϕ∞ (t) 0

t

ϕn (s) n (s)ds, ϕ∞ (s)

and similarly for t ≤ 0. But thanks to the identity (3.7) applied to X n and X ∞ with f (·) = eis. , in (s) =

ϕn (s) ϕ∞ (s) − . ϕn (s) ϕ∞ (s)

Moreover, for s > 0, ϕ∞ (s) = −αs α−1 . ϕ∞ (s) By standard computations, for s > 0,  +∞

dy cos(λs(y − 1)) α+1 y

  +∞   1 1 − cos(λsy) sin(λsz) α = cos(λs) 1 − s + αdy + sin(λs) αdz y α+1 z α+1 0 1

ϕ1 (s) = α

1

= 1 − s α + ψ1 (s) + ψ2 (s) + ψ3 (s),

4 General Upper Bounds by Fourier Methods

41

where 

1 − cos(λsy) αdy, y α+1 0   +∞  cos(λsz) ψ2 (s) = (cos(λs) − 1) αdz , z α+1 1 1

ψ1 (s) =

and  ψ3 (s) = sin(λs)

+∞ 1

 sin(λsz) αdz . z α+1

Then, ϕn (s)

1

= n 1− α

ϕn (s)



ϕ1



ϕ1



s 1 nα



s 1 nα

−αs α−1 + n 1− α ψ1 1

=

1−

sα n





s 1 nα

+ ψ1



s 1 nα

    1 1 + n 1− α ψ2 s1 + n 1− α ψ3 s1 nα   n α   , s s + ψ2 1 + ψ3 1 nα



implying that 1

n (s) =

−αs α−1 + n 1− α ψ1 1− 1 α

=

n 1− ψ1



sα n

 s n

1 α



 s n

1 α



+ ψ1

s n 1 α

1 α

+ n 1− ψ2

    1 1 s s + n 1− α ψ2 + n 1− α ψ3 1 1 nα nα      + αs α−1 s s + ψ2 + ψ3 1 1 

 s n

1 α

n

α

1

+ n 1− α ψ3

1−

sα n

+ ψ1

n

 s



n

1 α

s n

1 α

α



       s s s + αs α−1 ψ1 + ψ2 + ψ3 1 1 1 nα nα nα      . s s + ψ2 + ψ3 1 1 −αs

2α−1

n

n

α

n

α

Before, bounding the quantity εn let us provide bounds on the functions ψ1 , ψ2 , and ψ3 and their derivatives. For s > 0, |ψ1 (s)| ≤ C1 s 2 , |ψ2 (s)| ≤ C2 s 2 , |ψ3 (s)| ≤ C3 s 2 ,

(4.22)

and |ψ1 (s)| ≤ C4 s, |ψ2 (s)| ≤ C5 (s + s 2 ), |ψ3 (s)| ≤ C6 s, for some strictly positive constants, Ci , i = 1, ..., 6, depending only on α. Therefore, for t > 0,

42

4 General Upper Bounds by Fourier Methods   n−1 s ϕ       1 1 s s s s 2α−1 nα 1− α1  1− α1  n 1− α1 ψ  + n + n −α |εn (t)| ≤ |ϕ∞ (t)| ψ ψ 1 2 3 1 1 1 |ϕ (s)| n ∞ 0 nα nα nα        s s s ds + ψ2 + ψ3 + αs α−1 ψ1 1 1 1 nα nα nα   s    t ϕn−1 1 1 s s α+1 s2 s 2α−1 nα ≤ C|ϕ∞ (t)| + 2 ds, + 3 + 2 |ϕ∞ (s)| n 0 n α −1 n α −1 nα 

t

and so 1 α

1 α



|εn (n t)| ≤ Cn|ϕ∞ (n t)| 0

t

  |ϕn−1 1 (u)| u + u 2 + u 2α−1 + u α+1 du. 1 |ϕ∞ (n α u)|

 1/α  Let us now detail how to bound the ratio |ϕn−1 u |. For 0 < u ≤ t 1 (u)|/|ϕ∞ n |ϕn−1 α 1 (u)|  1  ≤ enu +(n−1) ln ϕ1 (u) |ϕ∞ n α u | ≤ enu

α

+(n−1)(−u α +ψ1 (u)+ψ2 (u)+ψ3 (u))

≤ een(ψ1 (u)+ψ2 (u)+ψ3 (u)) . By (4.22), we can choose η ∈ (0, 1) such that 0 < C(η) = max (|ψ1 (u)| + u∈(0,η)

|ψ2 (u)| + |ψ3 (u)|)/u α < 1, since α ∈ (1, 2). Then, for 0 < u ≤ t ≤ η, |ϕn−1 α 1 (u)|  1  ≤ eenC(η)u , |ϕ∞ n α u | which implies that, for 0 < t ≤ η < 1, |εn (n α t)| ≤ Cne−n(1−C(η))t 1

α

2  t + t 3 + t 2α + t α+2 .

A similar bound can also be obtained for −η ≤ t < 0. Setting T := n 1/α η, applying Esseen’s inequality, and if h α denotes the density of the SαS-law, we finally get  +n 1/α η |εn (t)| h α ∞ dt + C2 1 |t| −n 1/α η nαη 1  +η h α ∞ |εn (n α t)| dt + C2 ≤ C1 1 |t| −η nαη  +η   α h α ∞ ≤ C1 Cne−n(1−C(η))t t + t 2 + t 2α−1 + t α+1 dt + C2 1 0 nαη

d K (X n , X ∞ ) ≤ C1

4 General Upper Bounds by Fourier Methods ≤ C1

 n α1 η

α



≤ Cη,α

1 2 n α −1

Cη,α,h α 2

n α −1

+

1 3 n α −1

t 2

n α −1

0





e−(1−C(η))t

+

43

+

1 1 + 2 n nα

t2 3

n α −1

+

+ C2

t α+1 t 2α−1 + 2 n nα

dt + C2

h α ∞ 1

nαη

h α ∞ 1

nαη

,

for some Cη,α,h α > 0 depending only on η, on α and h α ∞ . This concludes the proof of the proposition.  Remark 4.8 The above result has to be compared with the ones available in the literature but for other types of Pareto laws. Very recently, and via Stein’s method, a rate of convergence in Wasserstein-1 and for symmetric stable limiting laws is obtained in [94]. When specialized to the Pareto law with density f (x) := αλα /(2|x|α+1 )1|x|>λ , described in the previous remark, this rate is of order n −(2/α−1) (see [94]), which via the inequality (4.9) provides a rate of the order n −(1/α−1/2) in Kolmogorov distance. Moreover, a convergence rate of order n −(2/α−1) in Kolmogorov distance is known to hold for the same Pareto law (see, e.g., [33] and references therein). The results of [50] also imply a rate of convergence of order n −(2/α−1) in Kolmogorov distance for the Pareto law considered in Proposition 4.7 (see [50, Corrolary 1]). At a different level, the rate n −(2/α−1) also appears when one considers the convergence, in supremum norm, of the corresponding densities toward the stable density, see [60]. Finally, note also that Proposition 4.7 is a special case of [57, Theorem 1.2] proved with different techniques. Remark 4.9 Analyzing the proof of Proposition 4.7, it is clearly possible to generalize the previous result beyond the Pareto case, to more general distributions pertaining to the domain of normal attraction of the symmetric α–stable distribution. Indeed, consider distribution functions of the form (c + a(x)) , xα (c + a(−x)) ∀x < 0, F(x) = , (−x)α ∀x > 0, F(x) = 1 −

where the function a defined on (0, +∞) is such that lim a(x) = 0, and where x→+∞

c = (1 − α)/(2(2 − α) cos(απ/2)). Moreover, let a be bounded and continuous on (0, +∞) and be such that lim xa(x) < +∞. Then, by straightforward compux→+∞

tations 

+∞ −∞

eit x d F(x) = 1 − t α + ψ(t),

where ψ is a real-valued function satisfying for all t ∈ R, |ψ(t)| ≤ C1 |t|2 and |ψ  (t)| ≤ C2 |t|, for some C1 > 0 and C2 > 0, two constants only depending

44

4 General Upper Bounds by Fourier Methods

on α and a. Assuming further that the probability measure associated the

n with 1/α ξi /n , distribution function F is infinitely divisible, it follows that for X n = i=1 with ξi iid random variables such that ξ1 ∼ F, d K (X n , X ∞ ) ≤

C n α −1 2

,

where X ∞ ∼ SαS and where C > 0 only depends on α and a. A further simple adaptation of the proofs of the previous results leads to explicit rates of convergence for the compound Poisson approximation of some classes of infinitely divisible distributions. The next two results give Berry–Esseen-type bounds. Theorem 4.10 Let X ∼ I D(b, 0, ν) be nondegenerate such that E|X | < ∞ and such that  +1 |u|ν(du) < ∞. (4.23) −1

Let its law be absolutely continuous with respect to the Lebesgue measure with a bounded density, and let its characteristic function, ϕ, be such that, for all t ∈ R,  |ϕ(t)| 0

|t|

ds ≤ C|t| p , |ϕ(s)|

(4.24)

where C > 0 and p ≥ 1. Finally, let X n , n ≥ 1, be compound Poisson random variables each with characteristic function,    1 ϕn (t) := exp n (ϕ(t)) n − 1 .

(4.25)

Then, d K (X n , X ) ≤ C 

2 1    p+2  p+2  +∞ 1 |b0 | + |u|ν(du) , n −∞

(4.26)

where C  > 0 depends on the supremum norm of the density of X , but is independent of n. Proof Clearly, ϕn (t) → ϕ(t), for all t ∈ R, and so the sequence (X n )n≥1 converges in distribution toward X . Then, adopting the notations of the proof of Theorem 4.1,  1 d − ϕn (t) = ϕn (t)S(t) + ϕn (t)+ n (t) − ϕn (t)n (t). i dt

4 General Upper Bounds by Fourier Methods

45

Moreover, thanks to (4.25), 1  ϕ (t)   1 d ϕn (t) = ϕ(t) n ϕn (t) = i S(t) ϕ(t) n ϕn (t), dt ϕ(t) since S(t) =

1 d (ϕ(t)). iϕ(t) dt

Thus, − + n (t) − n (t) = S(t)



ϕ(t)

 n1

 −1 ,

which implies that  0

  1 ϕn (s) S(s) ϕ(s) n − 1 ds. ϕ(s)



t

t

εn (t) = iϕ(t) Therefore, |εn (t)| ≤ |ϕ(t)|

0

 1 1 |S(s)| ϕ(s) n − 1|ds. |ϕ(s)|

(4.27)

Next, by the very definition of S, Corollary 3.5, and straightforward computations, − S(s) = m + 0 ϕY + (s) − m 0 ϕY − (s)      +∞  +∞ + − isu isu = b0 + e ν˜+ (du) − b0 + e ν˜− (du) −∞ −∞  +∞ eisu ν(du). ˜ = b0 + −∞

Hence,

 |S(s)| ≤ |b0 | +

+∞ −∞

|u|ν(du),

(4.28)

and further straightforward computations lead to    +∞   n1 |s| |b0 | + |u|ν(du) . | ϕ(s) − 1| ≤ n −∞

(4.29)

Combining (4.24) and (4.27)–(4.29) gives  2  +∞ 1 |b0 | + |εn (t)| ≤ C |u|ν(du) |t| p+1 . n −∞

(4.30)

46

4 General Upper Bounds by Fourier Methods

To conclude the proof of this theorem, proceed as in the end of the proof of Theorem 4.1.  Proposition 4.11 Let X ∼ I D(b, 0, ν) be nondegenerate and such that E|X |2 < ∞. Let its law be absolutely continuous with respect to the Lebesgue measure with a bounded density, and let its characteristic function, ϕ, be such that, for all t ∈ R,  |ϕ(t)|

|t|

0

ds ≤ C|t| p , |ϕ(s)|

(4.31)

where C > 0 and p ≥ 1. Finally, let X n , n ≥ 1, be compound Poisson random variables each with characteristic function,    1 ϕn (t) := exp n (ϕ(t)) n − 1 .

(4.32)

Then, 2 1    p+4  p+4  +∞ 1 2 d K (X n , X ) ≤ C |EX | + u ν(du) , n −∞



(4.33)

where C  > 0 is independent of n. Proof The proof is similar to the proof of Theorem 4.10 and so is only sketched. Clearly, ϕn (t) → ϕ(t), for all t ∈ R and so the sequence (X n )n≥1 converges in distribution toward X . Then, with the previous notations,  d ϕn (t) = R(t)ϕn (t) + n (t)ϕn (t). dt Moreover, thanks to (4.32), 1  ϕ (t)   1 d ϕn (t) = ϕ(t) n ϕn (t) = R(t) ϕ(t) n ϕn (t), dt ϕ(t) since R(t) =

1 d (ϕ(t)). ϕ(t) dt

Thus, n (t) = R(t)

  1 ϕ(t) n − 1 ,

4 General Upper Bounds by Fourier Methods

47

which implies that 

  1 ϕn (s) R(s) ϕ(s) n − 1 ds, ϕ(s)

t

εn (t) = ϕ(t) 0

and therefore,  |εn (t)| ≤ |ϕ(t)|

t

0

 1 1 |R(s)| ϕ(s) n − 1|ds. |ϕ(s)|

(4.34)

By the very definition of R, Proposition 3.8, and straightforward computations,  R(s) = −s  =i

+∞

−∞ +∞ 

 u ν(du) ϕY (s) + iEX 2

 eisu − 1 uν(du) + iEX.

−∞

Hence,  |R(s)| ≤ |EX | + |s|

+∞

−∞

u 2 ν(du).

Moreover, further straightforward computations lead to    +∞  1 |s| |EX | + |s| v 2 ν(dv) . | ϕ(s) n − 1| ≤ n −∞

(4.35)

Combining (4.31) and (4.34)–(4.35) gives  2  +∞ 1 2 |EX | + |t| u ν(du) |t| p+1 , |εn (t)| ≤ C n −∞  2  +∞ 1 |EX | + u 2 ν(du) (|t| p+1 + |t| p+2 + |t| p+3 ). ≤C n −∞ To conclude the proof of this proposition, proceed as in the end of the proof of Proposition 4.10. 

Remark 4.12 (i) Under the condition  A := sup s∈R

+∞ −∞

(eisu − 1)uν(du) < ∞,

48

4 General Upper Bounds by Fourier Methods

the upper bound on the Kolmogorov distance in Proposition 4.11 becomes 1   p+2 2 1 (|EX | + A) p+2 , n

d K (X n , X ) ≤ C 

(4.36)

which is comparable to the one obtained in Theorem 4.10 and is, for instance, verified in case the Lévy measure of X satisfies the assumptions of Theorem 4.10. (ii) Once again, versions of Theorem 4.10 and of Proposition 4.11 can be derived for the smooth Wasserstein distance. Let us develop this claim a little bit more, and assume that X has finite exponential moments, namely, that Eeλ|X | is finite for some λ > 0. This condition implies that the characteristic function ϕ is analytic in a horizontal strip of the complex plane containing the real axis. Then, by the very definition of ϕn and the use of the Lévy–Raikov Theorem (see, e.g., [66, Theorem 10.1.1]), it follows that ϕn is analytic in at least the same horizontal strip. Moreover, in this strip, still by its very definition, (ϕn )n≥1 converges pointwise toward ϕ. Hence, the random variables eη|X n | (for some η > 0) are uniformly integrable. Therefore, if Eeλ|X | is finite and if the assumptions (4.23) and (4.24) hold true, (4.30) and [4, Theorem 1] lead to √ dW p+2 (X n , X ) ≤ C

ln n n



 |b0 | +

+∞ −∞

2 |u|ν(du)

,

(4.37)

for some constant C only depending on the limiting distribution. We now present some examples illustrating the applicability of our methods, as developed to this point, by verifying the validity of various hypotheses. (i) The gamma random variable with parameters α ≥ 1 and β > 0 satisfies the assumptions of Theorem 4.10. Indeed, (4.23) and the boundedness of the density are automatically verified, and moreover  |ϕ(t)| 0

|t|

ds ≤ |t|, |ϕ(s)|

for all t ∈ R.

q(ii) Let 2q ≥ 3 and let (λ1 , ..., λq ) be q nonzero distinct reals. Let X := k=1 λk (Z k − 1), where the {Z i , i = 1, ..., q} are iid standard normal random variables. Clearly, X is infinitely divisible and its Lévy measure is given by ⎛ ⎛ ⎞ ⎞ e−u/(2λ) e−u/(2λ) ν(du) ⎠ 1(0,+∞) (u) + ⎝ ⎠ 1(−∞,0) (u), := ⎝ du 2u 2(−u) λ∈+

λ∈−

where + = {λk : λk > 0} and − = {λk : λk < 0}have finite cardinality, and so q the condition (4.23) is verified. Moreover, ϕ(t) := j=1 e−itλ j /(1 − 2itλ j )1/2 and thus

4 General Upper Bounds by Fourier Methods

49

1 1   q4 ≤ |ϕ(t)| ≤  q , 1 + 4λ2max t 2 1 + 4λ2min t 2 4 with λmax = max |λk | and λmin = min |λk |. This readily implies that X has a bounded k≥1

k≥1

density and that, for all t ∈ R,

 |ϕ(t)|

|t|

0

ds ≤ C|t|, |ϕ(s)|

where C := sup

q

(1 + 4λ2max t 2 ) 4 (1 + 4λ2min t 2 )

t∈R

 =

q 4

λmax λmin

q/2 .

(iv) More generally, let (λk )k≥1

be an absolutely summable sequence such that 2 |λk | = 0, for all k ≥ 1. Let X := +∞ k=1 λk (Z k − 1) where (Z k )k≥1 is a sequence of iid standard normal random variables. Since (λk )k≥1 is absolutely summable, the condition (4.23) is verified. Let us now fix N ≥ 3 and assume that the absolute values of the eigenvalues (λk )k≥1 are indexed in decreasing order, i.e., |λ1 | ≥ |λ2 | ≥ ... ≥ |λ N | ≥ .... Then, for all t ∈ R, ψ N (t) ψ N (t)   N4 ≤ |ϕ(t)| ≤  N , 1 + 4t 2 |λ1 |2 1 + 4t 2 |λ N |2 4 where +∞ 

1

k=N +1

(1 + 4t 2 λ2k ) 4

ψ N (t) :=

1

.

Since 0 ≤ ψ N (t) ≤ 1, it is clear that X has a bounded density. Moreover, for each N , ψ N is a decreasing function, thus,  |ϕ(t)| 0

|t|

ds ≤ C|t|, |ϕ(s)|

(4.38)

with C := sup t∈R

q

(1 + 4λ21 t 2 ) 4 q

(1 + 4λ2N t 2 ) 4

 =

λ1 λN

q/2 .

The next theorem pertains to quantitative convergence results inside the second Wiener chaos. [73, Theorem 3.1] puts forward the fact that a sequence of

50

4 General Upper Bounds by Fourier Methods

second-order Wiener chaos random variables converging in law, necessarily converges toward a random variable which is the sum of a centered Gaussian random variable (possibly degenerate) and of an independent second-order Wiener chaos random variable. It is, therefore, natural to consider the following instances of convergence in law: +∞

λn,k (Z k2 − 1) =⇒

n→+∞

k=1

+∞

λ∞,k (Z k2 − 1).

k=1

To study this issue, below, 1 denotes the space of absolutely summable

real-valued sequences and for any such sequence λ = (λk )k≥1 ∈ 1 , let λ1 := +∞ k=1 |λk |. Theorem 4.13 Let (λn )n≥1 be a sequence of elements of 1 , converging (in  · 1 ) ∈ 1 . Moreover, let |λ∞,k | = 0 and |λn,k | = 0, for all k ≥ 1, n ≥ 1, and toward λ∞

+∞

+∞ 2 2 2 λ = λ = 1/2. Next, set X = further, let +∞ n k=1 n,k k=1 ∞,k k=1 λn,k (Z k − 1), n ≥

+∞ 1, X ∞ = k=1 λ∞,k (Z k2 − 1), and let + − − n := |λ+ n 1 − λ∞ 1 | + |λn 1 − λ∞ 1 | + − − + λ+ n − λ∞ 1 + λn − λ∞ 1 , ± ± where ± n := {λn,k , k ≥ 1} = {λn,k , λn,k > 0 (< 0)}, and similarly for ∞ . Then,

  n . d K (X n , X ∞ ) ≤ C∞

(4.39)

  n | ln n |, dW2 (X n , X ∞ ) ≤ C∞

(4.40)

and

  , C∞ depending only on X ∞ . for some positive constants C∞

Proof Since, for each n ≥ 1, λn is absolutely summable and since so is λ∞ , the conditions (4.1) of Theorem 4.1 are satisfied. Then, from the proof of Theorem 4.1 and, for all t ≥ 0,  εn (t) = iϕ∞ (t) 0

t

 ϕn (s)  + n (s) − − n (s) ds. ϕ∞ (s)

− Let us compute the quantities + n and n . By definition (see Corollary 3.5),

 b0n = −

+∞ −∞

uνn (du) = −

λ∈+ n

λ+

λ∈− n

(−λ),

(4.41)

4 General Upper Bounds by Fourier Methods

51

and ν˜n+ (du) =

1 −u 1 −u e 2λ 1(0,+∞) (u)du, ν˜n− (du) = e 2λ 1(−∞,0) (u)du, 2 2 + − λ∈n

λ∈n

implying that (m n0 )+ = (m n0 )− = λn 1 , ⎛ ⎞ λ 1 ⎝ n + ⎠, ϕYn+ (t) = (b0 ) + (m n0 )+ 1 − 2itλ + λ∈n ⎛ ⎞ −λ ⎠ 1 ⎝ n − ϕYn− (t) = (b0 ) + . (m n0 )− 1 − 2itλ − λ∈n

Then, after some straightforward computations, − n ∞ + n (s) − n (s) = b0 − b0 +

+

λ∈− ∞

λ∈+ n

λ λ − 1 − 2isλ 1 − 2isλ + λ∈∞

−λ −λ − . 1 − 2isλ 1 − 2isλ − λ∈n

Therefore, − + + − − |+ n (s) − n (s)| ≤|λn 1 − λ∞ 1 | + |λn 1 − λ∞ 1 | + − − + λ+ n − λ∞ 1 + λn − λ∞ 1 .

Combining the previous bound with (4.38) and (4.41) entails |εn (t)| ≤ C∞ |t|n . Finally, proceeding as in the proof of Theorem 4.1 gives   n . d K (X n , X ∞ ) ≤ C∞ As for the upper bound on the smooth Wasserstein distance, recall the following tail property of the second Wiener chaoses: there exists K > 0 such that for all unit variance X in the second Wiener chaos, and for all x > 2: P(|X | > x) ≤ exp (−K x) ,

52

4 General Upper Bounds by Fourier Methods

(see, e.g., [56, Theorem 6.7]). This tail estimate implies that sup E eη|X n | < ∞,

E eη∞ |X ∞ | < ∞,

n≥1

for some η, η∞ > 0, and [4, Theorem 1] finishes the proof of the theorem.



Remark 4.14 In the previous theorem, one could also consider (λn )n≥1 such that, for each n ≥ 1, there exists kn ≥ 1 (converging to +∞ with n) such that for all + 1, λn,k = 0. The quantity n would k = 1, ..., kn , |λn,k | = 0 and for all k ≥ kn

then depend on the remainder term, Rn := +∞ k=kn +1 |λ∞,k |. To conclude this chapter on the compound Poisson approximation of infinitely divisible distributions let us consider the stable case. Clearly, and as already indicated, an α-stable random variable satisfies neither the hypotheses of Corollary 3.5 nor those of Proposition 3.8. Nevertheless, the identities (3.18) and (3.21) lead to our next result. Theorem 4.15 Let α ∈ (1, 2) and let X be an α–stable random variable with Lévy measure given by (2.10) where c1 , c2 ≥ 0 are such that c1 + c2 > 0, and with characteristic function ϕ. For each n ≥ 1, let X n be a compound Poisson random variables with characteristic function    1 ϕn (t) := exp n (ϕ(t)) n − 1 .

(4.42)

Then, d K (X n , X ) ≤ C

1 1

n 1+α

,

(4.43)

where C > 0 depends only on α, c1 and c2 . Proof Thanks to (3.21) with f (·) = eit. ,   +∞   +∞ d c1 − c2 −itu du itu du (ϕ(t)) = iϕ(t) c2 . (1 − e ) α − c1 (1 − e ) α + dt u u α−1 0 0 (4.44) Next, setting   S(t) := c2 0

+∞

(1 − e

−itu

du ) α − c1 u

 0

+∞

du c1 − c2 (1 − e ) α + u α−1 itu

(4.42) gives d ϕ (t) 1 1 (ϕn (t)) = (ϕ(t)) n ϕn (t) = i S(t)(ϕ(t)) n ϕn (t). dt ϕ(t)

 ,

4 General Upper Bounds by Fourier Methods

53 1

Introducing the quantity n (t) := S(t)((ϕ(t)) n − 1), d (ϕn (t)) = i S(t)ϕn (t) + iϕn (t)n (t). dt Subtracting (4.44) from (4.45) and setting εn (t) = ϕn (t) − ϕ(t) lead to d (εn (t)) = i S(t)εn (t) + iϕn (t)n (t). dt Thus, for t ≥ 0,  εn (t) = iϕ(t)

t

0

ϕn (s) n (s)ds, ϕ(s)

and similarly for t ≤ 0. Let us now bound S. For s > 0, 

 +∞ du du |c1 − c2 | + c |1 − eisu | α + 1 α u u α−1 0 0 2−α |c 2 − c | 1 2 + , ≤ (c1 + c2 )|s|α−1 (2 − α)(α − 1) α−1

|S(s)| ≤ c2

+∞

|1 − e−isu |

using  0

+∞

|1 − e

isu

  +∞ du du +2 α−1 uα 0 u 2 2−α 2 . ≤ |s|α−1 (2 − α)(α − 1)

du | α ≤ |s|α−1 u



2

Moreover,  |s|  |c1 − c2 | 22−α 1 α−1 n (ϕ(s)) ≤ (c + , − 1 + c )|s| 1 2 n (2 − α)(α − 1) α−1 and    πα sgn(s) , ϕ(s) := exp isEX − c|s|α 1 − iβ tan 2 with c = c1 + c2 and β = (c1 − c2 )/(c1 + c2 ). This implies that, for t ≥ 0,

(4.45)

54

4 General Upper Bounds by Fourier Methods

|εn (t)| ≤ ≤ ≤ ≤

 2  t |c1 − c2 | 1 22−α 1 α−1 |ϕ(t)| (c1 + c2 )s + |s|ds n (2 − α)(α − 1) α−1 0 |ϕ(s)| 2   |c1 − c2 | 22−α 1 −ct α t cs α α−1 e + e s (c1 + c2 )s ds n (2 − α)(α − 1) α−1 0  2    t |c1 − c2 | 22−α 2 α α (c1 + c2 ) + (t + t 2α−1 ) e−ct ecs ds n (2 − α)(α − 1) α−1 0  2 2−α |c1 − c2 | (t 2 + t 2α ) 2 2C (c1 + c2 ) + , n (2 − α)(α − 1) α−1 1 + tα

where (4.8) is used to obtain the last inequality and where C > 0 only depends on α, c1 and c2 . Therefore, with a similar argument for t ≤ 0, it follows that, for all t ∈ R, |εn (t)| ≤

C  (t 2 + |t|2α ) , n 1 + |t|α

for some C  > 0 depending only on α and c. To conclude the proof of this theorem, we proceed as in the end of the proof of Theorem 4.10: by Esseen inequality,  d K (X n , X ) ≤ C1

T −T

|εn (t)| h α ∞ dt + C2 , |t| T

where C1 > 0, C2 > 0, while h α is the density of the stable distribution. Thus, d K (X n , X ) ≤

C1 n



T

0

t dt + 1 + tα

 0

T

 t 2α−1 h α ∞ . dt + C2 1 + tα T

Next, the idea is to exploit the different behaviors of the functions t → t/(1 + t α ) and t → t 2α−1 /(1 + t α ) at 0 and at infinity to optimize in T appearing on the right-hand side of the previous inequality. Thus, for T ≥ 1,  0

T

t dt ≤ 1 + tα

 0

ε

t dt + 1 + tα

 ε

T

t dt 1 + tα

ε2 ≤ + ε1−α T 2 2 ≤ C T α+1 , where we optimized in ε in the last line and where C > 0 only depends on α. Similarly,  0

T

t 2α−1 dt ≤ C T α . 1 + tα

4 General Upper Bounds by Fourier Methods

55

Thus,  h α ∞ C α 2 T + T α+1 + C2 . n T

d K (X n , X ) ≤ 1

Finally, choosing T = n 1+α finishes the proof of the theorem.



For the symmetric α-stable distribution, and in view of the proof of Proposition 4.7, the rate of convergence obtained above can be improved to n 1/α . This is the content of the next result. Theorem 4.16 Let α ∈ (1, 2) and let X ∼ SαS with characteristic function ϕ(t) = exp(−|t|α ), t ∈ R. Let X n , n ≥ 1, be a compound Poisson random variable with characteristic function    1 ϕn (t) := exp n (ϕ(t)) n − 1 . Then, d K (X n , X ) ≤

C 1



,

(4.46)

where C > 0 only depends on α. Proof From the proof of Proposition 4.7 (with its notations), for t ≥ 0,  εn (t) = iϕ(t)

t

0

ϕn (s) n (s)ds. ϕ(s)

Moreover, for all 0 ≤ s ≤ t, |n (s)| ≤

s 2α−1 , n

hence,  1   t ϕ (n α1 s) 1 n 2α−1 ds. εn (n α t) ≤ Cn ϕ n α t |s| 1 α ϕ(n s) 0     We next detail how to bound the ratio ϕn n 1/α s /ϕ n 1/α s . For 0 ≤ s ≤ t ≤ 1,  1  ϕ n α s n   ≤ exp (n(exp(−s α ) − 1 + s α )) . 1 ϕ nαs

56

4 General Upper Bounds by Fourier Methods

Now, pick η ∈ (0, 1) such that 0 < C(η) = max (exp(−s α ) − 1 + s α )/s α < 1. s∈(0,η)

Then,  1  α εn n α t ≤ Cne−n(1−C(η))t t 2α . A similar bound can also be obtained for t ≤ 0. Setting T := n 1/α η, and applying Esseen’s inequality, we finally get d K (X n , X ∞ ) ≤

C1

≤ C1 ≤ C1 ≤

C1

  

1

+n α η 1 α

−n η +η −η +η

|εn (t)| h α ∞ dt + C2 1 |t| nαη 1

|εn (n α t)| h α ∞ dt + C2 1 |t| nαη α

Cne−n(1−C(η))t t 2α−1 dt + C2

0



1

nαη

e−(1−C(η))t

0

h α ∞ Cη,α + C2 1 ≤ n nαη Cη,α,h α ≤ , 1 nα

α

h α ∞ 1

nαη

t 2α−1 h α ∞ dt + C2 1 n ηn α

for some Cη,α,h α > 0, depending only on η, α and on h α ∞ , where, again, h α is the (bounded) density of the SαS-law. This concludes the proof of the theorem. 

Chapter 5

Solution to Stein’s Equation for Self-Decomposable Laws

Having found in Chap. 3 that the operator Agen given for all f ∈ B Li p(R), by  Agen f (x) = x f (x) − b f (x) −

+∞ −∞

( f (x + u) − f (x)1|u|≤1 )uν(du),

characterizes X ∼ I D(b, 0, ν), the usual next step in Stein’s method is now to show that for any h ∈ H (a class of nice functions), the equation Agen f (x) = h(x) − Eh(X )

(5.1)

has a solution f h which also belongs to a class of nice functions. Of course for X ∼ I D(b, σ 2 , ν), the integral operator Agen becomes an integro-differential operator given by Agen f (x) = x f (x) − σ 2 f  (x) − b f (x) −



+∞

−∞

( f (x + u) − f (x)1|u|≤1 )uν(du)).

Then, when interested in comparing the law of some random variable Y with the law of X , one needs to estimate sup |Eh(Y ) − Eh(X )| = sup |EAgen f h (Y )|. h∈H

h∈H

In the sequel, we develop a semigroup methodology to solve a corresponding Stein equation for nondegenerate self-decomposable laws on R. Semigroup methods have been initiated in [9, 48] and mainly developed for multivariate normal approximation or for diffusions approximation. To start with, recall that, by definition, X , with characteristic function ϕ, is self-decomposable if for any γ ∈ (0, 1),

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2019 B. Arras and C. Houdré, On Stein’s Method for Infinitely Divisible Laws with Finite First Moment, SpringerBriefs in Probability and Mathematical Statistics, https://doi.org/10.1007/978-3-030-15017-4_5

57

58

5 Solution to Stein’s Equation for Self-Decomposable Laws

ϕγ (t) :=

ϕ(t) , ϕ(γt)

(5.2)

t ∈ R, is itself a characteristic function ([84, Definition 15.1]). Recall also that nondegenerate self-decomposable laws are infinitely divisible, absolutely continuous with respect to the Lebesgue measure (see [84, Proposition 15.5] and [87, Chap. V, Sect. 6, Theorem 6.14]) and moreover closed under convolution. The class of selfdecomposable distributions comprises many of the infinitely divisible ones. To name but a few, the stable distributions, the gamma distributions, the second Wiener chaostype distributions, the Laplace distribution, the generalized Dickman distribution, the double-Pareto distribution with r > 1, the log-normal distribution, the logistic distribution, the Student distribution with r > 0, the generalized inverse Gaussian, the generalized hyperbolic distributions (see, e.g., [49]), the half-Cauchy distribution (see, e.g., [36]), the Weibull distribution with parameter 0 < α ≤ 1 and the generalized gamma distribution with parameters (r, α) such that r > 0, |α| ≤ 1 α = 0 are all self-decomposable. We refer the reader to [84, 87] for more examples and properties of self-decomposable distributions. To continue, and as usual, denote by S(R) the Schwartz space of infinitely differentiable rapidly decreasing real-valued functions defined on R, and by F the Fourier transform operator given, for f ∈ S(R), by  F( f )(ξ) =

+∞ −∞

f (x)e−i xξ d x.

(5.3)

 |u|β dν(u) < +∞ .

(5.4)

For X ∼ I D(b, 0, ν), let also   β ∗ := sup β ≥ 1 :

|u|>1

For X ∼ I D(b, 0, ν) nondegenerate and self-decomposable with law μ X , let f X be its Radon–Nikodym derivative with respect to the Lebesgue measure, let S( f X ) = {x ∈ R, 0 ≤ f X (x) < +∞} and let N ( f X ) = {x ∈ R, f X (x) = 0}. Thanks to [84, Theorem 28.4], f X is continuous on S( f X ) (actually on R or on R \ {b0 }, if b0 exists). The integrability properties of the measure |u|ν(du) on {|u| ≤ 1}, ensure that the following alternatives hold true, e.g., see [84, Chap. 5, Sect. 24]:  • If |u|≤1 |u|ν(du) < +∞, then the support of μ X (denoted by Supp(μ X )) is either [b0, +∞) or (−∞, b0 ] or R. • If |u|≤1 |u|ν(du) = +∞, then the support of μ X is R. Before solving the Stein equation (5.1), let us start with the following proposition. Proposition 5.1 Let X ∼ I D(b, 0, ν) be self-decomposable with law μ X , characteristic function ϕ and such that E|X | < ∞. Let (Ptν )t≥0 be the family of operators defined, for all t ≥ 0 and for all f ∈ S(R), via

5 Solution to Stein’s Equation for Self-Decomposable Laws

Ptν ( f )(x) =

1 2π



+∞

−∞

F( f )(ξ)eiξxe

59 −t

ϕ(ξ) dξ. ϕ(e−t ξ)

(5.5)

Then, μ X is invariant for (Ptν )t≥0 , and (Ptν )t≥0 extends to a C0 -semigroup on L p (μ X ), with 1 ≤ p ≤ β ∗ . Its generator A is defined for all f ∈ S(R) and for all x ∈ R by A( f )(x) =

1 2π



+∞

−∞

  F( f )(ξ)eiξx (iξ) −x + EX +

= (EX − x) f  (x) +

+∞ −∞

 iuξ  e − 1 uν(du) dξ (5.6)



+∞

−∞



 f  (x + u) − f  (x) uν(du).

(5.7)

Proof First, it is easy to see that for any f ∈ S(R)  +∞ lim Ptν ( f )(x) = f (x)μ X (d x), P0ν ( f )(x) = f (x), t→+∞ −∞  +∞  +∞ ν Pt ( f )(x)μ X (d x) = f (x)μ X (d x). −∞

−∞

Next, let s, t ≥ 0 and f ∈ S(R). Then, on the one hand, ν Pt+s ( f )(x) =

1 2π



+∞

−∞

F( f )(ξ)eiξe

−(t+s)

x

ϕ(ξ) dξ, ϕ(e−(t+s) ξ)

while, on the other hand,  +∞ ϕ(ξ) 1 −t dξ F(Psν ( f ))(ξ)eiξe x 2π −∞ ϕ(e−t ξ)  +∞ 1 ϕ(es ξ) iξe−t x ϕ(ξ) e dξ = es F( f )(es ξ) 2π −∞ ϕ(ξ) ϕ(e−t ξ)  +∞ ϕ(ξ) 1 −(t+s) x dξ, = F( f )(ξ)eiξe 2π −∞ ϕ(e−(t+s) ξ)

Ptν (Psν ( f ))(x) =

ϕ(et ξ) . The semigroup property is therefore verϕ(ξ) ified on S(R). Now, let t ∈ (0, 1) and let f ∈ S(R). Then,

since F(Ptν ( f ))(ξ) = et F( f )(et ξ)

 1 ν Pt ( f )(x) − f (x) = t



+∞ −∞

F( f )(ξ)eiξx

 dξ 1 iξx(e−t −1) ϕ(ξ) e − 1 . −t t ϕ(e ξ) 2π

60

5 Solution to Stein’s Equation for Self-Decomposable Laws

But, by Lemma A.1 of the Appendix,    +∞  iuξ  1 iξx(e−t −1) ϕ(ξ) e − 1 = −x + EX + e − 1 uν(du) (iξ). lim t→0+ t ϕ(e−t ξ) −∞ Moreover, applying Lemma A.2 of the Appendix, for t ∈ (0, 1),





1 iξx(e−t −1) ϕ(ξ)

≤ C(1 + |ξ|)(E|X | + |x| + |ξ| + 1),

e − 1

t −t ϕ(e ξ) for some constant C > 0, independent of t. Thus, lim+

t→0

 1 ν Pt ( f )(x) − f (x) = A( f )(x), t

and, therefore, the generator of (Ptν )t≥0 on S(R) is indeed A. Now, let t ≥ 0, let f ∈ S(R) and let 1 ≤ p ≤ β ∗ . Since X is self-decomposable, there exists, for each t ≥ 0, a probability measure μt such that ϕ(ξ) = ϕ(e−t ξ)



+∞

−∞

eiuξ μt (du),

(5.8)

and thus, Ptν (

 f )(x) =

+∞ −∞

f (u + e−t x)μt (du).

(5.9)

The previous representation allows to extend the semigroup to Cb (R), the space of bounded continuous functions on R endowed with the supremum norm, and Ptν (Cb (R)) ⊂ Cb (R). Therefore, Ptν is a contraction semigroup on Cb (R) such that for all f ∈ Cb (R) and all t ≥ 0  R

Ptν ( f )(x)μ X (d x) =

 R

f (x)μ X (d x),

(5.10)

and moreover such that, for all f ∈ Cb (R), and all x ∈ R, lim Ptν ( f )(x) = f (x).

t→0+

(5.11)

Indeed, one can check the invariance property (5.10) on Cb (R) by noting that the probability measures (μ X ⊗ μt ) ◦ ψt−1 , with ψt (x, y) = e−t x + y, and μ X are the same. Similarly, one can check the pointwise convergence property (5.11) by using −t the fact that, by the Lévy continuity theorem, μt ◦ ϕ−1 t,x , where ϕt,x (u) = e x + u, + for all u ∈ R, converges weakly toward δx , as t → 0 . The semigroup property of (Ptν )t≥0 on Cb (R) follows by similar arguments, and also,

5 Solution to Stein’s Equation for Self-Decomposable Laws



+∞ −∞



ν

P ( f )(x) p μ X (d x) ≤



+∞



−∞ +∞

t



−∞

61

Ptν (| f | p )(x)μ X (d x) | f (x)| p μ X (d x),

finishing the proof of our claim. Finally, a standard approximation argument concludes the proof of the proposition.  Remark 5.2 (i) It is important to note that the representation (5.9) also allows to extend the semigroup to the space of continuous functions on R vanishing at ± infinity. Moreover, this extension is a Feller semigroup (e.g., one can apply [80, Chap. III, Proposition 2.4]). (ii) Self-decomposable distributions are naturally associated with Ornstein– Uhlenbeck-type Markov processes. Indeed, thanks to [84, Theorem 17.5], for any self-decomposable distributions μ, one can find an Ornstein–Uhlenbeck-type Markov process such that μ is its invariant measure. Hence, from a heuristic point of view, it seems legitimate to implement a semigroup methodology to solve a Stein equation associated with a self-decomposable law. (iii) There is a natural connection between generalized Mehler semigroups and self– decomposable probability measures. Indeed, for any probability measure on R, let D(μ) := {c ∈ [0, 1] : μ = Tc (μ) ∗ μc , for some μc ∈ M1 (R)}, with M1 (R) the set of probability measures on R, with Tc (μ)(B) = μ(B/c), for any Borel set B, and with the convention that T0 (μ) = δ0 . The set D(μ) is a closed multiplicative sub-semigroup containing 0 and 1. When μ is self-decomposable, this set is exactly [0, 1]; it contains in particular the one parameter semigroup (e−t )t≥0 . Note that μt = μe−t , for all t ≥ 0 with μt given by (5.8), which is well defined when μ is self-decomposable. The family of probability measures (μt )t≥0 then satisfies the following measure-valued cocycles (see [52]): μs+t = μt ∗ Te−t (μs ), s, t > 0. This convolution equality readily implies the semigroup property for the family of operators (Ptν )t≥0 of Lemma 5.1. For further information regarding the connection between decomposability of probability measures and generalized Mehler semigroups, the reader is referred to [58] and the references therein. (iv) When ν is the Lévy measure of a symmetric α-stable distribution, the generator of the semigroup (Ptν )t≥0 boils down to A( f )(x) = −x f  (x) + 



= −x f (x) + c

+∞



−∞  +∞ 0

 f  (x + u) − f  (x) uν(du) 

f  (x + u) − f  (x − u)

 du uα

,

62

5 Solution to Stein’s Equation for Self-Decomposable Laws

which is, thanks to (3.20), proportional to the one considered in [94]. (v) For the generalized Dickman distribution, a Stein methodology has been developed in [17]. One of their Stein equations is (see [17, Equation (98)]) x  f (x) + f h (x) − f h (x + 1) = h(x) − Eh(X ), θ h

(5.12)

where X is a generalized Dickman random variable with parameter θ > 0 and h belongs to the set of functions which are Lipschitz with Lipschitz derivative and with both Lipschitz constants at most one. Note that, by Proposition 5.1 and Example 3.1 (vii), the generator of the semigroup (Ptν )t≥0 boils down to A( f )(x) = (θ − x) f  (x) +





+∞

−∞



 f  (x + u) − f  (x) uν(du)

= −x f (x) + θ( f (x + 1) − f (x)), which is clearly proportional to the differential-delay operator considered in [17, Equation (98)]. (vi) Let ν be the Lévy measure of the standard Laplace distribution, i.e., let ν(du) = |u|−1 e−|u| du, u = 0, then the generator of the associated semigroup boils down to A( f )(x) = −x f  (x) +



+∞



 f  (x + u) − f  (x − u) e−u du,

0

and the corresponding Stein’s equation to −x f h (x)



+∞

+ 0



 f h (x + u) − f h (x − u) e−u du = h(x) − Eh(X ), x ∈ R, (5.13)

where X is the standard Laplace random variable. A Stein’s method has also been developed for the Laplace distribution in [79]. There, the fundamental equation is − f h (x) + f h (x) = h(x) − Eh(X ), x ∈ R, which does not seem to be immediately comparable to (5.13). (vii) A Stein’s method for exponential approximation is developed in [29, 42, 76]. There, the associated Stein equations read as f h (x) − f h (x) = h(x) − Eh(X ), x ≥ 0, x f h (x) − (x − 1) f h (x) = h(x) − Eh(X ), x > 0, where now X is an exponential random variable with parameter 1, and h is an appropriate test function (not necessarily smooth). Since the Lévy measure of the

5 Solution to Stein’s Equation for Self-Decomposable Laws

63

exponential distribution is ν(du) = u −1 e−u 1(0,+∞) (u)du, and thanks to Proposition 5.1, in our case the associated Stein equation is given, for all x ∈ R, by (1 − x)

f h (x)

 +

+∞

0



 f h (x + u) − f h (x) e−u du = h(x) − Eh(X ),

which is a nonlocal differential equation on the whole real line. (viii) In [43, 65, 78], the following Stein equation has been introduced and analyzed, to study gamma approximation: x f h (x) + (α − βx) f h (x) = h(x) − Eh(X α,β ), x > 0, where X α,β is a gamma random variable with parameters α > 0, β > 0 and where h belongs to a suitable class of test functions. Also, in [37], the following version of the gamma Stein equation is used: x f h (x) + (α − βx) f h (x) = h(x) − Eh(X α,β ), x ∈ R, but with the substantial difference that the above equation is now solved on the whole real line. In particular, in [37, Theorem 2.1], the following bounds on the solution are obtained when the test function h is continuously differentiable on R and when both h and h  are Lipschitz:   1 1 h  ∞ ,  f h ∞ ≤ 4β max 1, h  ∞ + 2h  ∞ .  f h ∞ ≤ β −1 h  ∞ ,  f h ∞ ≤ 2 max 1, α α

Recalling that the Lévy measure of the gamma distribution is ν(du) = αe−βu u −1 1(0,+∞) (u)du, the gamma Stein equation inferred from Proposition 5.1 is 

α −x β



f h (x) +

 +∞  0

 f h (x + u) − f h (x) αe−βu du = h(x) − Eh(X α,β ), x ∈ R,

which is, once again, a nonlocal differential equation on the whole real line. Moreover, as shown below, when the test function h is twice continuously differentiable on R and such that h  ∞ < +∞, h  ∞ < +∞ then, the following bounds, which are uniform in α > 0 and β > 0, hold true:  f h ∞ ≤ h  ∞ ,  f h ∞ ≤

h  ∞ . 2

In particular, these bounds do not explode when α → 0+ . (ix) Recall that since μ X , the law of X , is nondegenerate and self-decomposable, its Lévy measure admits the following representation ν(du) = ψ(u)|u|−1 du, u = 0, where ψ is a nonnegative function increasing on (−∞, 0) and decreasing on (0, +∞) ([84, Corollary 15.11]). Then, the probability measure μt (defined via (5.8)) is infinitely divisible. Denoting by νt the Lévy measure corresponding to μt , one easily

64

5 Solution to Stein’s Equation for Self-Decomposable Laws

checks that νt (du) =

ψ(u) − ψ(et u) du. |u|

With the help of the previous proposition, we now wish to solve the Stein equation associated with the operator A. More precisely, for any Lipschitz function or bounded Lipschitz function h, we wish to solve the following integro-differential equation: 

(EX − x) f (x) +



+∞



−∞

 f  (x + u) − f  (x) uν(du) = h(x) − Eh(X ).

(5.14)

Using classical semigroup theory ([75, Chap. 2] or [41, Chap. 1]), the first step is to prove that 

+∞



0

 Ptν (h)(x) − Eh(X ) dt

is well defined when h is a (bounded) Lipschitz function. At first, let h be a continuously differentiable function on R such that h∞ ≤ 1 and h  ∞ ≤ 1. Since

 +∞

 +∞



h(y + e−t x)μt (dy) − h(y)μ X (dy)

|Ptν (h)(x) − Eh(X )| =

−∞ −∞  +∞ ≤ |h(y + e−t x) − h(y)|μt (dy) + dW1 (μt , μ X ) −∞ −t

≤ e |x| + dW1 (μt , μ X ), we need to estimate the rate at which μt converges to μ X in smooth Wasserstein-1 distance. We begin by estimating the rate at which μt converges toward μ X in smooth Wasserstein-2 distance. Proposition 5.3 Let X ∼ I D(b, 0, ν) be nondegenerate self-decomposable with law μ X , and characteristic function ϕ, and moreover such that E|X | < ∞. Let X t , t ≥ 0, be random variables each having characteristic function ϕt (ξ) =

ϕ(ξ) , ϕ(e−t ξ)

ξ ∈ R.

(5.15)

Then, dW2 (X t , X ) ≤ Ce− 4 , t

for t > 0 and for some C > 0 independent of t.

(5.16)

5 Solution to Stein’s Equation for Self-Decomposable Laws

65

Proof One can apply [3, Theorem A.1] with d = 1 to get the exponential decay of dW2 (X t , X ). However, let us describe the proof in the univariate setting. Let X t be a random variable with law μt given via (5.8). Then, X t =d (1 − e−t )EX + X t1 + X t2 ,

(5.17)

where X t1 and X t2 are independent and, respectively, defined, for all ξ ∈ R and for all t > 0, via 1

Eeiξ X t = exp



|u|≤1

eiuξ − 1 − iuξ νt (du),



2

Eeiξ X t = exp



|u|>1

eiuξ − 1 − iuξ νt (du),

where νt is the Lévy measure of X t . Moreover, from [67, inequality 13] −t

E |X t | ≤ (1 − e )E |X | +

 |u| νt (du) 2

|u|≤1

21

 +2

|u|≥1

|u|νt (du),

(5.18)

which implies, in particular, that supt>0 E|X t | < +∞, since νt (B) ≤ ν(B), for all Borel sets B. Let us next estimate the difference between ϕt and ϕ, the respective characteristic functions of μt and μ X . For t ≥ 0,



ϕ(ξ)

|1 − ϕ(e−t ξ)|

|ϕt (ξ) − ϕ(ξ)| ≤

ϕ(e−t ξ)

≤ E|X ||ξ|e−t . Now, let g be an infinitely differentiable function with compact support contained in the interval [−2R, 2R], for some R > 1. Then by Fourier inversion and Fubini theorem, for all t > 0,  1 |F(g)(ξ)||ξ|dξ |Eg(X ) − Eg(X t )| ≤ e−t E|X | 2π R  1 (1 + |ξ|)3 ≤ e−t E|X | |F(g)(ξ)| |ξ|dξ 2π R (1 + |ξ|)3     1 |ξ|dξ −t 3 ≤ e E|X |sup |F(g)(ξ)|(1 + |ξ| ) . 2π R (1 + |ξ|)3 ξ∈R Moreover, for all p ≥ 2    sup |F(g)(ξ)|(1 + |ξ| p ) ≤ C R g∞ + g ( p) ∞ , ξ∈R

for some C > 0. Thus, for all t > 0   ˜ −t E|X |R g∞ + g (3) ∞ , |Eg(X ) − Eg(X t )| ≤ Ce

(5.19)

66

5 Solution to Stein’s Equation for Self-Decomposable Laws

with C˜ > 0. Now, let h ∈ Cc∞ (R) ∩ H3 . Let  R be a compactly supported infinitely differentiable function on R whose support is contained in [−2R, 2R], with values in [0, 1] and such that  R (x) = 1, for all x such that |x| ≤ R. Then, for all t > 0 |Eh(X ) − Eh(X t )| ≤|Eh(X ) R (X ) − Eh(X t ) R (X t )| + |Eh(X )(1 −  R (X ))| + |Eh(X t )(1 −  R (X t ))|. Now, note that  |Eh(X t )(1 −  R (X t ))| ≤

R

(1 −  R (x))dμt (x)

≤ P (|X t | ≥ R) 1 ≤ sup E|X t |, R t>0 which is finite by the first part of the proof. A similar bound holds true for |Eh(X )(1 −  R (X ))|. Moreover, from (5.19), |Eh(X ) − Eh(X t )| ≤

  C1 + C˜1 e−t E|X |R h R ∞ + (h R )(3) ∞ , R

for some constants C1 , C˜1 > 0. Now, h R ∞ ≤ 1, and, by taking for  R an appropriate scaling of a bump function , (h R )(3) ∞ ≤ D, for some D > 0 independent of R and h. Then,  |Eh(X ) − Eh(X t )| ≤ C2

1 −t + Re E|X | . R

for some C2 > 0. Choosing R = et/2 , for all t > 0, it follows that dW3 (X, X t ) ≤ C˜ 2 e− 2 , t

for some C˜ 2 > 0. Using Lemma A.4 with r = 3, dW2 (X, X t ) ≤ C3 e− 4 , t

for some C3 > 0. This concludes the proof of the proposition.



5 Solution to Stein’s Equation for Self-Decomposable Laws

67

Thanks to Lemma A.4 with r = 2, it is possible to link the smooth Wasserstein-2 distance to the smooth Wasserstein-1 distance, e.g., √ dW1 (X, Y ) ≤ 3 2 dW2 (X, Y ), for any two random variables X and Y . Therefore, combining Proposition 5.3 with Lemma A.4, with r = 2, yields |Ptν (h)(x) − Eh(X )| ≤ e−t |x| + Ce− 8 , t

(5.20)

 +∞ which implies that 0 |Ptν (h)(x) − Eh(X )|dt < +∞, ensuring the well definiteness of the function  +∞  ν  Pt (h)(x) − Eh(X ) dt, x ∈ R. (5.21) f h (x) = − 0

Let us now study the regularity of f h . Lemma 5.4 Let h be a continuously differentiable function such that h∞ ≤ 1 and h  ∞ ≤ 1. Then, f h is differentiable on R and  f h ∞ ≤ 1. Proof Since  d  ν P (h)(x) = e−t dx t



+∞ −∞

h  (xe−t + y)μt (dy),

it is clear that f h is differentiable and that  f h ∞ ≤ 1.



When h is a bounded Lipschitz function, the existence of higher order derivatives for f h is linked to the behavior at ±∞ of the characteristic function of the probability measure μt as well as to heat kernel estimates for the density of μt . To illustrate these ideas, let us provide some examples. (i) Let X be a gamma random variable with parameters (α, 1). Then, for all ξ ∈ R, α



−2t 2 2

ϕ(ξ)

= 1+e ξ

,

ϕ(e−t ξ)

1 + ξ2

is a decreasing function of ξ on R+ and thus,



ϕ(ξ)

≤ 1. e−αt ≤

ϕ(e−t ξ)

Similar upper and lower bounds hold for more general probability laws pertaining to the second Wiener chaos.

68

5 Solution to Stein’s Equation for Self-Decomposable Laws

(ii) Let X be a Dickman random variable with parameter θ = 1, namely, let the characteristic function of X be given, for all ξ ∈ R, by 

1

ϕ(ξ) = exp 0

eiξu − 1 du u



(see [4]). Then, for all ξ ∈ R, ϕ(ξ) = exp ϕ(e−t ξ)



1

e

iuξ 1

0

− eiuξ(e u

−t

−1)

 du .

Using standard asymptotic expansion for the cosine integral [74, Formulae 6.12.3, 6.12.4 and 6.2.20, Chap. 6], for all ξ ∈ R,



−t

ϕ(ξ)

1 + |ξ|e−t

≤ C2 1 + |ξ|e , ≤

C1 1 + |ξ| ϕ(e−t ξ)

1 + |ξ| for some C1 > 0, C2 > 0 independent of t. (iii) Let X be a SαS random variable with α ∈ (1, 2). Then, for all ξ ∈ R, ϕ(ξ) −αt α = e−(1−e )|ξ| . −t ϕ(e ξ) By Fourier inversion, μt admits a smooth density q such that, for all k ≥ 0, q

(k)

1 (t, y) = 2π



+∞ −∞



 ϕ(ξ) ei yξ (iξ)k dξ. ϕ(e−t ξ)

Moreover, denoting by pα (t, y) the probability transition density function of a one-dimensional α-stable Lévy process and noting that q(t, y) = pα (1 − e−αt , y), Lemma 2.2 of [32] implies that 1 − e−αt |q  (t, y)| ≤ C α+2 . 1 −αt α (1 − e ) + |y| As in the proof of Proposition 4.3 in [94], this last inequality implies that the function f h admits a second-order derivative uniformly bounded such that  f h ∞ ≤ Cα h  ∞ , for some constant Cα > 0, only depending on α. The previous examples point out that for some specific target laws the semigroup solution to the Stein equation (5.14) with h bounded Lipschitz, might not reach second-order differentiability. Nevertheless, if h is a C 2 (R)-function such that

5 Solution to Stein’s Equation for Self-Decomposable Laws

69

h∞ , h  ∞ , h  ∞ ≤ 1, then f h is twice differentiable with  f h ∞ ≤

1 . 2

In this instance, it is possible to partially transform the nonlocal part of the Stein operator into an operator acting on second derivatives. This is the purpose of the next lemma whose proof is very similar to the one of [94, Lemma 4.6] and, as such, is only sketched.  Lemma 5.5 Let ν be a Lévy measure such that |u|>1 |u|ν(du) < +∞. Let f be a twice continuously differentiable function with first and second derivatives bounded. Then, for all N > 0, and all x ∈ R, 

+∞ −∞







( f (x + u) − f (x))uν(du) =

+N −N

K ν (t, N ) f  (x + t)dt + R N (x), (5.22)

where K ν (t, N ) and R N (x) are, respectively, given by  N  t uν(du) + 1[−N ,0] (t) (−u)ν(du), K ν (t, N ) = 1[0,N ] (t) t −N  R N (x) = ( f  (x + u) − f  (x))uν(du). |u|>N

Proof For N > 0 and x ∈ R,  +∞ −∞

( f  (x + u) − f  (x))uν(du) =

 N 0

( f  (x + u) − f  (x))uν(du) +

 0 −N

( f  (x + u) − f  (x))uν(du) + R N (x).

(5.23)

For the first term on the right-hand side of (5.23), 

N

( f  (x + u) − f  (x))uν(du) =

0

 

N 0

u

f  (x + t)dt uν(du)

0 N

=



f  (x + t)



0

N

uν(du) dt,

(5.24)

t

while similar computations for the second term lead to 

0 −N

( f  (x + u) − f  (x))uν(du) =



0 −N

f  (x + t)



t

−N

(−u)ν(du) dt. (5.25)

70

5 Solution to Stein’s Equation for Self-Decomposable Laws

Combining (5.24) and (5.25) gives 

+∞ −∞







( f (x + u) − f (x))uν(du) =

+N −N

K ν (t, N ) f  (x + t)dt + R N (x). 

Next, we study the regularity properties of the nonlocal part of the Stein operator. For this purpose, set  T ( f )(x) :=

+∞



−∞

 f  (x + u) − f  (x) uν(du).

(5.26)

 Proposition 5.6 Let ν be a Lévy measure such that |u|>1 |u|ν(du) < ∞. Let f be a twice continuously differentiable function such that  f  ∞ < +∞ and  f  ∞ < +∞. (i) If |u|≤1 |u|ν(du) < +∞, then T ( f ) Li p < +∞.  (ii) If |u|≤1 |u|ν(du) = +∞ and if there exist γ, β > 0 and C1 , C2 > 0 such that for any R > 0,  |u|>R

|u|ν(du) ≤



C1 , Rγ

|u|≤R

|u|2 ν(du) ≤ C2 R β ,

then, sup

|T ( f )(x) − T ( f )(y)| β

|x − y| β+γ

x= y

< +∞.

Proof Let us start with (i). Let x, y ∈ R, x = y. Then,



|T ( f )(x) − T ( f )(y)| ≤





 f  (x + u) − f  (y + u) − f  (x) + f  (y) uν(du)

−∞  +∞  ≤ 2 f ∞ |x − y| |u|ν(du), +∞



−∞

showing that  T ( f ) Li p ≤ 2

+∞ −∞

|u|ν(du) f  ∞ .

Let us next prove (ii). Let R > 0 and let x, y ∈ R, x = y. Then,

(5.27)

5 Solution to Stein’s Equation for Self-Decomposable Laws

71

 +∞



   |T ( f )(x) − T ( f )(y)| ≤

f (x + u) − f  (y + u) − f  (x) + f  (y) uν(du)

−∞ 



f (x + u) − f  (y + u) − f  (x) + f  (y) |u|ν(du) ≤ |u|≤R



+

|u|>R





f (x + u) − f  (y + u) − f  (x) + f  (y) |u|ν(du).

≤ 2 f  ∞





 |u|≤R

|u|2 ν(du) + |x − y|



|x − y| . ≤ 2C f  ∞ R β + Rγ

|u|>R

|u|ν(du)

1

Choosing R = |x − y| γ+β entails, β

|T ( f )(x) − T ( f )(y)| ≤ 2C f  ∞ |x − y| β+γ , concluding the proof of the proposition.



Remark 5.7 Above, when ν is the Lévy measure of a symmetric α-stable probability distribution, the exponents γ and β are, respectively, equal to α − 1 and 2 − α and so β/(γ + β) = 2 − α which is exactly the right order of Hölderian regularity needed for optimal approximation results as in [94]. To end this chapter, we solve the Stein equation (5.14) for h infinitely differentiable with compact support such that h∞ ≤ 1, h  ∞ ≤ 1 and h  ∞ ≤ 1. This solution ensures, as shown in the results of the next chapter, via the representation (2.14), the existence of quantitative bounds on the smooth Wasserstein-2 distance. Note also that, from the Fourier approach in defining the semigroup (Ptν )t≥0 , the function f h is a solution to the Stein equation (5.14) on the whole real line. Lemma 5.8 Let X ∼ I D(b, 0, ν) be nondegenerate, self-decomposable and such that E|X | < ∞. Let h ∈ Cc∞ (R) be such that h∞ ≤ 1, h  ∞ ≤ 1 and h  ∞ ≤ 1. Let f h be the function given by (5.21). Then, for all x ∈ R, (EX − x)

f h (x)

 +

+∞ −∞



 f h (x + u) − f h (x) uν(du) = h(x) − Eh(X ). (5.28)

Proof Let h ∈ Cc∞ (R) be such that h∞ ≤ 1, h  ∞ ≤ 1 and h  ∞ ≤ 1 and let f h be given by (5.21). Let hˆ := h − Eh(X ) and let ψ ∈ S(R). Now, by Fourier arguments as in the proof of Proposition 5.1 d P ν (h); ψ = A(Ptν (h)); ψ, dt t where  f ; g =

 +∞ −∞

f (x)g(x)d x. Thus, integrating from 0 to +∞,

(5.29)

72

5 Solution to Stein’s Equation for Self-Decomposable Laws



+∞

0

d P ν (h); ψdt = dt t



+∞

0

A(Ptν (h)); ψdt.

(5.30)

Let us first deal with the left-hand side of (5.30). By definition,  0

+∞

d P ν (h); ψdt = lim Ptν (h); ψ − lim+ Ptν (h); ψ. t→+∞ t→0 dt t

Straightforward applications of the dominated convergence theorem therefore imply that  +∞ d P ν (h); ψdt = E h(X ) − h; ψ. dt t 0 To conclude let us deal with the right-hand-side of (5.30). First, note that A(Ptν (h)) = ˆ Thus, A(Ptν (h)). 

+∞ 0

 A(Ptν (h)); ψdt = 

+∞ 0

ˆ A(Ptν (h))dt; ψ

= −A( f h ); ψ, where the interchange of integrals is justified by Fubini theorem. Then, for all ψ ∈ S(R), A( f h ) + E h(X ) − h; ψ = 0. Now, for x ∈ R and for 0 < ε ≤ 1, let ψε ∈ S(R) be defined, for all y ∈ R, by ψε (y) := √

 (x − y)2 . exp − 2ε2 2πε 1

By the dominated convergence theorem and the regularity of f h and of h, it follows that lim A( f h ) + E h(X ) − h; ψε  = A( f h )(x) + E h(X ) − h(x) = 0,

ε→0+

finishing the proof of the lemma.



Remark 5.9 (i) The methodology developed in this chapter can also be developed for some discrete infinitely divisible distributions such as for X ∼ P(λ), a Poisson random variable with parameter λ > 0. Indeed, very classically, or from Theorem 3.1, one can infer the following Stein equation: λ( f (n + 1) − f (n)) + (λ − n) f (n) = h(n) − Eh(X ), n ∈ N,

(5.31)

5 Solution to Stein’s Equation for Self-Decomposable Laws

73

with h a measurable test function such that E|h(X )| < +∞. Now, assume that f can be written as a discrete gradient, namely, f (n) = g(n) − g(n − 1), for all n ≥ 1. Then, (5.31) boils down to λ(g(n + 1) − g(n) − (g(n) − g(n − 1))) + (λ − n)(g(n) − g(n − 1)) = h(n) − Eh(X ), n ≥ 1,

(5.32)

which is the discrete analog of the continuous equation (5.14) displayed above. In particular, the left-hand side of (5.32) can be identified as the generator of the (birth–death rate process) M/M/∞ queue with parameters λ and 1 (see, e.g., [8, 22, 24]). Thanks to [24, Equation 1.2], its semigroup admits the following Mehler-type representation:  Pt (h)(n) := Eh

n 

 Z k (t) + Yt , n ∈ N, t ≥ 0,

(5.33)

k=1

with h a bounded function on N, Yt a Poisson random variable with parameter λ(1 − e−t ) and (Z i (t))1≤i≤n iid Bernoulli random variables with parameter e−t independent of Yt , for all t > 0. Indeed, using once again Fourier techniques to express the generator of the M/M/∞ queue as a pseudo-differential operator, it is readily seen that (Pt )t>0 given by (5.33) is a semigroup of operators whose infinitesimal generator is given by the left-hand side of (5.32). To end this analogy, note that a notion of discrete self-decomposability has been introduced and studied (see, e.g., [87, Chap. V, Sect. 4]) to circumvent the fact that discrete random variables are not stable with respect to multiplication by a scalar. Now, a discrete random variable Y taking nonnegative values is called discrete self-decomposable, if for any γ ∈ (0, 1), Y =d γ ◦ Y + Yγ ,

(5.34)

Y where γ ◦ Y =d k=1 Z i (γ), with (Z i (γ))i≥1 a sequence of iid Bernoulli random variable with parameter γ independent of Y , and where Yγ is a random variable independent of γ ◦ Y . Discrete self-decomposable distributions are, in particular, infinitely divisible (see [87, Chap. V, Sect. 4, Theorem 4.7]) and, for instance, both the Poisson and the negative binomial distribution are discrete self-decomposable. Indeed, for any γ ∈ (0, 1), the Poisson distribution satisfies the following equality in law: X =d

X 

Z k (γ) + X γ ,

(5.35)

k=1

with X γ , independent of {X, (Z i (γ))i≥1 }, having a Poisson distribution with parameter λ(1 − γ). As previously, taking γ = e−t in (5.35) explains the Mehler representation of the semigroup of the M/M/∞ queue with parameters (λ, 1).

74

5 Solution to Stein’s Equation for Self-Decomposable Laws

(ii) In a rather similar fashion, let us consider the Stein equation for the negative binomial distribution with parameters (r, p) as in Example 3.1 (ii). For X ∼ N Bin 0 (r, p) and for h a measurable test function such that E|h(X )| < +∞, then (1 − p) (r + n) f (n + 1) − n f (n) = h(n) − Eh(X ), n ≥ 0.

(5.36)

As before, setting f (n) = g(n) − g(n − 1), (5.36) boils down to, (1 − p) (r + n) (g(n + 1) − g(n)) − n(g(n) − g(n − 1)) = h(n) − Eh(X ), n ≥ 1.

(5.37)

Again the left-hand side of (5.37) is the generator of an immigration–birth–death process with constant immigration rate r (1 − p) and per capita birth and death rates 1 − p and 1 (see, e.g., [12]). Moreover, the negative binomial distribution is discrete self-decomposable since, for all γ ∈ (0, 1), X =d γ ◦ X + X γ , where X γ and γ ◦ X are independent and X γ has a probability generating function given by Ez

 1− p r = γ + (1 − γ) , 0 ≤ z ≤ 1, 1 − pz



(see [87, Chap. V, Sect. 4, Example 4.6]). (iii) To finish this chapter, let us briefly explain how self-decomposability and its discrete version, say in the Poisson case, can be combined in a single encompassing framework. Let X be a nondegenerate self-decomposable random variable with finite first moment, Lévy measure ν and characteristic function ϕ and let Y be a Poisson random variable with parameter λ > 0 independent of X . Then, let us define, for all f ∈ S (R), all t ≥ 0, all n ∈ N and all x ∈ R Pt ( f )(x + n) =

1 2π

 R

F ( f )(ξ)eiξxe

−t

n  iξ  ϕ(ξ) iξ −t −t −t e e + 1 − e eλ(1−e ) e −1 dξ. ϕ(e−t ξ)

(5.38)

In particular, Pt ( f )(x + n) admits the following stochastic representation:  Pt ( f )(x + n) = E f

xe

−t

+

n 

 Z k (t) + Yt + X t , x ∈ R, n ∈ N, t ≥ 0,

k=1

where {Z 1 (t), . . . , Z n (t), Yt , X t } is a collection of independent random variables and where, for 1 ≤ i ≤ n, Z i (t) is a Bernoulli random variable with parameter e−t , Yt is a Poisson random variable with parameter λ(1 − e−t ) and X t has characteristic function ϕt given, for all ξ ∈ R, by ϕt (ξ) = ϕ(ξ)/ϕ(e−t ξ). Moreover, from (5.38), one has, for all n ∈ N and all x ∈ R,

5 Solution to Stein’s Equation for Self-Decomposable Laws

75

 lim Pt ( f )(x + n) =

t→+∞

R

f (x) (μ X ∗ μY ) (d x),

lim Pt ( f )(x + n) = f (x + n),

t→0+

where X ∼ μ X and Y ∼ μY . Now, using (5.38), lim+ (Pt ( f )(x + n) − f (x + n)) /t t→0

can be computed. Indeed, note that, for all x ∈ R and all ξ ∈ R,

   iuξ  1 iξx(e−t −1) ϕ(ξ) lim e − 1 = −iξx + iξEX + iξ e − 1 uν(du), t→0+ t ϕ(e−t ξ) R and, for all n ∈ N and all ξ ∈ R, n  iξ  1 iξ −t −t e e + 1 − e−t eλ(1−e ) e −1 e−inξ − 1 = n e−iξ − 1 + λ eiξ − 1 . t→0+ t lim

Thus, for all f ∈ S (R), all n ∈ N and all x ∈ R,     Pt ( f )(x + n) − f (x + n) = (EX − x) f  (x + n) + f (x + n + u) − f  (x + n) uν(du) + t t→0 R + n( f (x + n − 1) − f (x + n)) + λ( f (x + n + 1) − f (x + n)). lim

(5.39) From (5.39), one can infer the following Stein equation associated with the law of X + Y , for all n ∈ N and all x ∈ R: (EX − x) f h (x + n) +

 R

   f h (x + n + u) − f h (x + n) uν(du)

+ n( f h (x + n − 1) − f h (x + n)) + λ( f h (x + n + 1) − f h (x + n)) = h(x + n) − Eh(X + Y ),

where h is a suitable test function such that E |h(X + Y )| < +∞. In particular, one can retrieve the Stein’s equation associated with the law of X (resp. the law of Y ) by taking λ = 0 and n = 0 (resp. EX = 0, x = 0, ν = 0) and thus recovering (5.14) (resp. (5.32)).

Chapter 6

Applications to Sums of Independent Random Variables

It is well known that self-decomposable laws naturally appear as limiting laws for, rather general, sums of independent random variables (see [59, 62]). Indeed, let (Z k )k≥1 be a sequence of independent random variables, let (bn )n≥1 be a sequence of strictly positive reals, let (cn )n≥1 be a sequence of reals, and finally let Sn := bn

n 

Z k + cn .

(6.1)

k=1

Recall that if {bn Z k , k = 1, ..., n, n ≥ 1} is a null array, i.e., if for all ε > 0, lim max P (|bn Z k | > ε) = 0,

n→+∞1≤k≤n

(6.2)

then whenever (Sn )n≥1 converges in law, its limit has to be self-decomposable (see [84, Theorem 15.3]). (In case the limit is nondegenerate, then necessarily bn → 0 and bn+1 /bn → 1, as n → +∞.) Conversely, if X is self-decomposable, one can always find (Z k )k≥1 , (bn )n≥1 and (cn )n≥1 as above, also satisfying (6.2) and such that (Sn )n≥1 converges in law toward X . In the sequel, we quantitatively revisit this result with the help of the Stein methodology developed in the previous chapters. First, we need the following straightforward lemma. Lemma 6.1 Let (Z k )k≥1 , (bn )n≥1 and (cn )n≥1 be as above, let E|Z k | < ∞ for all k ≥ 1 and let f ∈ Li p(R). Then, for all n ≥ 1, ⎛ ESn f  (Sn ) = ⎝cn + bn

n 

⎞ EZ k ⎠ E f  (Sn ) + bn

k=1

+ bn

n  k=1 n 

E1bn |Z k |≤N Z˜ k ( f  (Sn ) − f  (Sn,k )) E1bn |Z k |>N Z˜ k ( f  (Sn ) − f  (Sn,k )),

k=1

(6.3) © The Author(s), under exclusive license to Springer Nature Switzerland AG 2019 B. Arras and C. Houdré, On Stein’s Method for Infinitely Divisible Laws with Finite First Moment, SpringerBriefs in Probability and Mathematical Statistics, https://doi.org/10.1007/978-3-030-15017-4_6

77

78

6 Applications to Sums of Independent Random Variables

where Sn,k = Sn − bn Z k , Z˜ k = Z k − EZ k and N ≥ 1. Moreover, if f is twice continuously differentiable with  f  ∞ < +∞ and  f  ∞ < +∞, then ⎛ ESn f  (Sn ) = ⎝cn + bn

n 

⎞ EZ k ⎠ E f  (Sn ) + bn

k=1

n 

E1bn |Z k |>N Z k ( f  (Sn ) − f  (Sn,k ))

k=1

+

n  +∞  k=1 −∞ n 

EK k (t, N ) f  (Sn,k + t)dt

EZ k E( f  (Sn ) − f  (Sn,k )), (6.4)

− bn

k=1

where K k (t, N ) = Ebn Z k 1bn |Z k |≤N (10≤t≤bn Z k − 1bn Z k ≤t≤0 ).

(6.5)

Proof For f Lipschitz, the first part of the lemma follows from E Z˜ k f  (Sn ) = E Z˜ k ( f  (Sn ) − f  (Sn,k )),

(6.6)

which is valid for all k ≥ 1, since Z˜ k and Sn,k are independent and since E Z˜ k = 0. For f twice continuously differentiable with  f  ∞ < +∞ and  f  ∞ < +∞, the result follows by computations similar to the proof of [94, Lemma 4.5] using also Lemma 5.5 above.  In the next theorem, X ∼ I D(b, 0, ν) is nondegenerate, self-decomposable, and such that E|X | < +∞. Further, (Sn )n≥1 is as in (6.1) with E|Z k | < +∞, for all k ≥ 1. From the results of the previous chapter (Lemma 5.8), for any h ∈ Cc∞ (R) with h∞ ≤ 1, h  ∞ ≤ 1 and h  ∞ ≤ 1,    +∞  

 |Eh(Sn ) − Eh(X )| = E (EX − Sn ) f h (Sn ) + f h (Sn + u) − f h (Sn ) uν(du)  . −∞

(6.7)

The next results upperbound (6.7).

Theorem 6.2 (i) Let |u|≤1 |u|ν(du) < +∞. Then, for n, N ≥ 1,      n n   b  +∞  n   EZ k ) + E|Z k | |u|ν(du) dW2 (Sn , X ) ≤ EX − (cn + bn   n k=1 −∞ k=1  n n  1  |EZ k |E|Z k | + 2 |u|ν(du) + 2bn E|Z k |1|bn Z k |>N + bn2 2 k=1 |u|>N k=1

6 Applications to Sums of Independent Random Variables

1 + 2 k=1 n



+N −N

   K ν (t, N )   − K k (t, N ) dt.  n

79

(6.8)

(ii) Let |u|≤1 |u|ν(du) = +∞ and let there exist γ > 0, β > 0 and C1 > 0, C2 > 0 such that for R > 0 

C1 |u|ν(du) ≤ γ , R |u|>R

 |u|≤R

|u|2 ν(du) ≤ C2 R β .

(6.9)

Then, for n, N ≥ 1,   β   n n    β bnβ+γ    dW2 (Sn , X ) ≤ EX − (cn + bn EZ k ) + Cγ,β E|Z k | β+γ   n k=1 k=1  n n  1 2 + bn |EZ k |E|Z k | + 2 |u|ν(du) + 2bn E|Z k |1|bn Z k |>N 2 k=1 |u|>N k=1   n   1  +N  K ν (t, N ) − K k (t, N ) dt, +  2 k=1 −N n for some Cγ,β > 0 only depending on γ and β.

Proof Let us start with the proof of (i). Assume that |u|≤1 |u|ν(du) < +∞. Let h ∈ Cc∞ (R) be such that h∞ ≤ 1, h  ∞ ≤ 1 and h  ∞ ≤ 1. Then,    +∞  

 |Eh(Sn ) − Eh(X )| = E (EX − Sn ) f h (Sn ) + f h (Sn + u) − f h (Sn ) uν(du)  −∞  ⎛  ⎞   n      ⎝ ⎠ ≤ E E X − (cn + bn EZ k ) f h (Sn )   k=1     n       + E b n Z˜ k ( f h (Sn,k + Z k bn ) − f h (Sn,k )) − T ( f h )(Sn )   k=1   ⎛ ⎞      n n     1  ≤ E X − ⎝cn + bn EZ k ⎠ + E T ( f h )(Sn,k ) − T ( f h )(Sn )    n k=1  k=1     n n    1 + E bn Z˜ k ( f h (Sn,k + Z k bn ) − f h (Sn,k )) − T ( f h )(Sn,k ) n   k=1 k=1  ⎛ ⎞    +∞  n n    bn ≤ E X − ⎝cn + bn EZ k ⎠ + |u|ν(du) E|Z k | n −∞   k=1 k=1     n n    1    ˜ Z k ( f h (Sn,k + Z k bn ) − f h (Sn,k )) − + E bn T ( f h )(Sn,k ) , n   k=1 k=1

(6.10)

where we have successively used Lemma 5.4 and Proposition 5.6 (i). Next, we need to bound

80

6 Applications to Sums of Independent Random Variables

  n n     1     I := Ebn T ( f h )(Sn,k ) . Z˜ k ( f h (Sn,k + Z k bn ) − f h (Sn,k )) −   n k=1 k=1 At first,   n        I ≤ E bn EZ k ( f h (Sn,k + Z k bn ) − f h (Sn,k ))   k=1   n n     1   + E bn Z k ( f h (Sn,k + Z k bn ) − f h (Sn,k )) − T ( f h )(Sn,k )   n k=1 k=1 1 2 b |EZ k |E|Z k | 2 n k=1   n n    1     Z k ( f h (Sn,k + Z k bn ) − f h (Sn,k )) − T ( f h )(Sn,k ) . + Ebn   n k=1 k=1 n



Let N ≥ 1. Now, from Lemma 5.5, n n  +N 1 1  T ( f h )(Sn,k ) = K ν (t, N ) f h (Sn,k + t)dt + R N (Sn,k ) , n k=1 n k=1 −N and moreover,  n 1 E|R N (Sn,k )| ≤ 2 |u|ν(du). n k=1 |u|>N Therefore, 1 2 b |EZ k |E|Z k | + 2 2 n n

I ≤

k=1

 |u|>N

 n   |u|ν(du) + Ebn Z k ( f h (Sn,k + Z k bn ) − f h (Sn,k )) k=1

 n   1  +N − K ν (t, N ) f h (Sn,k + t)dt , n −N k=1

and from Lemma 6.1, Ebn

n  k=1

Z k ( f h (Sn,k + Z k bn ) − f h (Sn,k )) = bn

n  k=1

+

E1bn |Z k |>N Z k ( f h (Sn,k + Z k bn ) − f h (Sn,k )) n  +∞  k=1 −∞

EK k (t, N ) f h (Sn,k + t)dt.

6 Applications to Sums of Independent Random Variables

81

Then, I ≤

 n n  1 2 |EZ k |E|Z k | + 2 |u|ν(du) + 2bn E|Z k |1|bn Z k |>N bn 2 k=1 |u|>N k=1   n   1  +N  K ν (t, N )  dt. (6.11) − K + (t, N ) k   2 k=1 −N n

Combining (6.10) and (6.11) proves part (i). For the proof of (ii), proceed in a similar way.  Remark 6.3 (i) Using Lemma A.4, it is possible to transfer the bounds on dW2 (Sn , X ) obtained above to bounds on dW1 (Sn , X ). (ii) The approach just presented generalizes the methodology developed for the symmetric α-stable distribution in [94]. Note that in this case, due to the regularizing properties of the probability transition density function of the one-dimensional αstable Lévy process, it is possible to obtain quantitative upper bounds in Wasserstein1 distance. At the level of the semigroup solution to the Stein equation, this can be viewed by a gain of one order of differentiability with a test function h only Lipschitz. In this regard, Theorem 6.2 can be compared with Theorem 2.1 of [94] (from which several quantitative convergence results follow). (iii) As indicated in [94], the quantity 1 2 k=1 n



+N −N

   K ν (t, N )   − K k (t, N ) dt  n

might be seen as the L 1 -analog of the L 2 -Stein discrepancy considered, e.g., in [25, 26, 61, 71]. However, the technique developed in the current chapter is much more reminiscent of the K -function approach exposed, e.g., in [31, Sect. 2.3.1] when dealing with sums of independent summands in a Gaussian setting. (iv) To illustrate the abstract bounds obtained in the previous theorem, let us consider a canonical example associated with nondegenerate self-decomposable laws (see [84, proof of Theorem 15.3, (ii)]). Let X ∼ I D(b, 0, ν) be nondegenerate selfdecomposable with E|X | < ∞, X ≥ 0 a.s. and let ϕ be its characteristic function. For all n ≥ 1, set cn = 0 and bn = 1/n. Further, let (Z m )m≥1 be independent random variables defined via their characteristic functions, given for all m ≥ 1 and ξ ∈ R by  +∞ ϕ((m + 1)ξ) km (u) iuξ = exp iξEX + du , ϕm (ξ) := (e − 1 − iuξ) ϕ(mξ) u 0 (6.12) with km (u) = k(u/(m + 1)) − k(u/m) and k given by ν(du)/du = k(u)/|u|, thanks to [84, Corollary 15.11]. Then, the sequence (Sn )n≥1 defined by (6.1)

82

6 Applications to Sums of Independent Random Variables

converges in distribution to X . Moreover, it is possible to extract, for this example, explicit rates of convergence for some ofthe terms present on the right-hand side of (6.8). First, note that EX − (cn + bn nm=1 EZ m ) = 0, for all n ≥ 1. Next, consider the terms defined, for all n ≥ 1, by bn (I I ) := n



n 

  E|Z m |

m=1

+∞

−∞

n 1  |u|ν(du) , (I I I ) := bn2 |EZ m |E|Z m |. 2 m=1

Thanks to [84, Theorem 24.11], Z m ≥ 0 a.s. for all m ≥ 1, thus for all n ≥ 1 (I I ) ≤

EX n



+∞

0

(EX )2 . uν(du) , (I I I ) ≤ 2n

The last three terms on the right-hand side of (6.8) depend, respectively, on the tail properties of the Lévy measure ν of X , on P(Z m ≥ x), as x → +∞, and on a refined analysis of the L 1 -Stein discrepancy combined with a good choice of the truncation parameter N . Next, we consider the important case where (Z k )k≥1 is a sequence of independent and identically distributed random variables such that E|Z k | < +∞, k ≥ 1. In this situation, from [84, Theorem 15.7], the limiting self-decomposable law is actually stable. Recall that an infinitely divisible probability measure, μ, is stable if, for any a > 0, there exist b > 0 and c ∈ R such that, for all t ∈ R, ϕ(t)a = ϕ(bt)eict , where ϕ is the characteristic function of μ. Also, by [84, Theorem 14.3(ii)], any nondegenerate stable distribution with index α ∈ (0, 2) has a Lévy measure given by (2.10). Note finally that, when ν is given by (2.10) with α ∈ (1, 2), |u|≤1 |u|ν(du) = +∞ and that, for any R > 0,  |u|>R

|u|ν(du) =

c1 + c2 1 , α − 1 R α−1

 |u|≤R

|u|2 ν(du) =

c1 + c2 2−α R . 2−α

Corollary 6.4 Let X be a nondegenerate stable random variable with index α ∈ (1, 2) and Lévy measure given by (2.10). Let (Sn )n≥1 be as in (6.1) with (Z k )k≥1 independent and identically distributed and with E|Z 1 | < +∞. Then, for n, N ≥ 1, n dW2 (Sn , X ) ≤ |EX − (cn + nbn EZ 1 )| + Cα (bn )2−α E |Z 1 |2−α + bn2 |EZ 1 |E|Z 1 | 2  1 +N c1 + c2 1 |K ν (t, N ) − n K 1 (t, N )| dt, + 2nbn E|Z 1 |1|bn Z 1 |>N + +2 α − 1 N α−1 2 −N

(6.13)

where Cα > 0 only depends on α, c1 and c2 , and where, for all t ∈ R, t = 0 and N ≥ 1, n ≥ 1,

6 Applications to Sums of Independent Random Variables

83

t 1−α − N 1−α N 1−α − (−t)1−α + c2 1[−N ,0] (t) , α−1 1−α K 1 (t, N ) = Ebn Z 1 1bn |Z 1 |≤N (10≤t≤bn Z 1 − 1bn Z 1 ≤t≤0 ).

K ν (t, N ) = c1 1[0,N ] (t)

Proof This is a direct application of Theorem 6.2 (ii) (β = 2 − α and γ = α − 1) together with the fact that the random variables Z k are identically distributed. Moreover, thanks to (2.10) with α ∈ (1, 2) and to Lemma 5.5, for all t ∈ R, t = 0, and for all N ≥ 1 K ν (t, N ) = c1 1[0,N ] (t)

t 1−α − N 1−α N 1−α − (−t)1−α + c2 1[−N ,0] (t) . α−1 1−α 

This concludes the proof of the corollary.

Remark 6.5 Assuming further properties of the tails behavior of the law of Z 1 as done, for example, in [94, Theorem 2.6 and Corollary 2.7], it is possible to extract explicit rates of convergence from the right-hand side of (6.13). For more details, the reader is referred to the proofs of [94, Theorem 2.6 and Corollary 2.7]. Finally, as ultimate example, let now (Z n,k )1≤k≤n, n≥1 be a null array (see, e.g., [84, Definition 9.2]), namely, for each fixed n ≥ 1, Z n,1 , ..., Z n,n are independent random variables and, for all ε > 0,

lim max P |Z n,k | > ε = 0.

n→+∞1≤k≤n

(6.14)

Let (Sn )n≥1 be the sequence of row sums associated with this triangular array, namely, for all n ≥ 1 let Sn =

n 

Z n,k .

(6.15)

k=1

A classical result of Khintchine (see, e.g., [84, Theorem 9.3]) asserts that if for some cn ∈ R, n ≥ 1, Sn + cn converges in distribution to X , then X is infinitely divisible. Thus, by a straightforward adaptation of the proofs of Lemma 6.1 and of Theorem 6.2, the following result for sums from null arrays holds true. Theorem 6.6 Let X ∼ I D(b, 0, ν) be nondegenerate, self-decomposable and such that E|X | < ∞. Let (Sn )n≥1 be given by (6.15) with E|Z n,k | < ∞, for all n ≥ 1, for all k = 1, ..., n and let cn ∈ R, for all n ≥ 1. (i) Let |u|≤1 |u|ν(du) < +∞. Then, for n, N ≥ 1,

84

6 Applications to Sums of Independent Random Variables

  ⎞ ⎛    +∞ n n    1  dW2 (Sn + cn , X ) ≤ EX − (cn + EZ n,k ) + ⎝ E|Z n,k |⎠ |u|ν(du) −∞   n k=1 k=1  n n  1 |EZ n,k |E|Z n,k | + 2 |u|ν(du) + 2 E|Z n,k |1|Z n,k |>N + 2 |u|>N k=1 k=1   n   1  +N  K ν (t, N )  dt, + (t, N ) − K k   2 n −N k=1

where, for all t ∈ R, for all N ≥ 1 and for all 1 ≤ k ≤ n K k (t, N ) = EZ n,k 1|Z n,k |≤N (10≤t≤Z n,k − 1 Z n,k ≤t≤0 ).

(ii) Let |u|≤1 |u|ν(du) = +∞ and let there exist γ > 0, β > 0 and C1 > 0, C2 > 0 such that for any R > 0  |u|>R

|u|ν(du) ≤



C1 , Rγ

|u|≤R

|u|2 ν(du) ≤ C2 R β .

Then, for n, N ≥ 1,   ⎞ ⎛   n n β    Cγ,β  ⎝ dW2 (Sn + cn , X ) ≤ EX − (cn + EZ n,k ) + E|Z n,k | β+γ ⎠ n   k=1 k=1  n n   1 |EZ n,k |E|Z n,k | + 2 |u|ν(du) + 2 E|Z n,k |1|Z n,k |>N + 2 |u|>N k=1 k=1   n   1  +N  K ν (t, N )  dt, − K + (t, N ) k   2 n −N k=1

for some Cγ,β > 0 only depending on γ and β. To conclude this chapter, let us present some concrete examples from extreme value theory and number theory for which explicit rates of convergence can be found. To start with extreme value theory, let (Yk )k≥1 be a sequence of i.i.d. exponential random variables with parameter 1. Then, for any n ≥ 1, set, Sn =

n  Yk k=1

k

− ln n.

(6.16)

 As it is well known, max1≤k≤n Yk =d nk=1 k −1 Yk and Sn converges in distribution to a Gumbel random variable X , whose distribution function is given by F(x) := exp (− exp(−x)), for all x ∈ R (see [87, Chap. IV, Example 11.1]). Moreover, the Lévy–Khintchine representation of the characteristic function of X is given by

6 Applications to Sums of Independent Random Variables

 ϕ(t) = exp itγ +

+∞

itu e − 1 − itu

0

85

e−u du , t ∈ R, u(1 − e−u )

where γ is the Euler constant (see [87, Chap. IV, Example 11.10]), and so X is a nondegenerate self-decomposable random variable with finite first moment. Note also that  +∞  1 π2 uν(du) = +∞ and u 2 ν(du) = , 6 0 0 where ν is the Lévy measure of X . The following result provides an explicit rate of convergence which is reminiscent of the one contained in [51]. Theorem 6.7 Let (Sn )n≥1 be defined by (6.16). Let X be a Gumbel random variable with distribution function F(x) = exp (− exp(−x)), for x ∈ R. Then, for all n ≥ 1 dW2 (Sn , X ) ≤

C n

for some C > 0 independent of n. Proof Let h ∈ Cc∞ (R) be such that h∞ ≤ 1, h  ∞ ≤ 1 and h  ∞ ≤ 1 and let f h be given by Lemma 5.8. Then, for all n ≥ 1    +∞  

 e−u  |Eh(Sn ) − Eh(X )| = E(γ − Sn ) f h (Sn ) + du f h (Sn + u) − f h (Sn ) −u )  (1 − e 0   ≤ |γ − ESn | + E(ESn − Sn ) f h (Sn )   +∞ 

 e−u  du + f h (Sn + u) − f h (Sn ) −u )  . (1 − e 0

First, note that, for all n ≥ 1   n   1  C1  |γ − ESn | = γ + ln n − ≤  k n k=1 for some C1 > 0 not depending on n. Set Sn,k = Sn − k −1 Yk , for n ≥ 1 and 1 ≤ k ≤ n. Now, for all n ≥ 1, ESn f h (Sn ) =

n  1 k=1

=

k

EYk f h (Sn,k + k −1 Yk ) − ln n E f h (Sn ),

 n  1 k=1

k

0

+∞

e−u E f h (Sn + k −1 u)du − ln n E f h (Sn ),

86

6 Applications to Sums of Independent Random Variables

using that Yk is independent of Sn,k and Theorem 3.1 applied to Yk . Then, for all n≥1    E(ESn − Sn ) f  (Sn ) + h 

+∞ 0

      n 1 +∞  e−u  = E du f h (Sn ) (1 − e−u )   k 0 k=1   +∞ 

 e−u f h (Sn + u) − f h (Sn ) − f h (Sn + k −1 u) e−u du + du  −u (1 − e ) 0   +∞  +∞ 



 1 − e−nu f h (Sn ) − f h (Sn + u) f h (Sn + u) du + = E u −1 e 0 0   1 du  − f h (Sn ) u (e − 1)    +∞     1 +∞ e−nu

 e−nu = E du  ≤ udu. f h (Sn + u) − f h (Sn ) u e −1 2 eu − 1



f h (Sn + u) − f h (Sn )



0

0

Next, [74, Formula 25.11.25], for all n ≥ 1  0

+∞

e−nu udu = ζ(2, n + 1), eu − 1

 −2 where ζ is the Hurwitz zeta function defined by ζ(2, n + 1) = +∞ k=0 (k + n + 1) , for all n ≥ 1. The asymptotic expansion [74, 25.11.43] concludes the proof.  The next example deals with the fluctuations of a probabilistic model on certain type of integers which was first introduced in [23] and further studied in [4, 17]. Let ( p j ) j≥1 be an enumeration in increasing order of the prime numbers with p1 = 2. Let (X k )k≥1 be a sequence of independent Bernoulli random variables such that, for each k ≥ 1, P(X k = 1) = 1/(1 + pk ) and P(X k = 0) = pk /(1 + pk ). For each n ≥ 1, let now Sn be given by Sn =

n 1  (ln pk )X k . ln pn k=1

(6.17)

By [23, Theorem 1], Sn converges in law, as n tends to infinity, toward a Dickman distributed random variable with parameter θ = 1. The next theorem refines this result. Its proof rests in part upon a technical lemma which is contained in [4] but, for the sake of completeness, a proof is also provided in the Appendix. Theorem 6.8 Let (Sn )n≥1 be defined by (6.17). Let X be a Dickman distributed random variable with parameter θ = 1. Then, for all n ≥ 2 dW2 (Sn , X ) ≤

C , ln n

for some C > 0 independent of n. Proof The beginning of the proof is similar to the one of Theorem 6.2 (i), noting that the Lévy measure of the Dickman distribution is given by ν(du) = 1(0,1) (u)du/u.

6 Applications to Sums of Independent Random Variables

87

Let h ∈ Cc∞ (R) be such that h∞ ≤ 1, h  ∞ ≤ 1 and h  ∞ ≤ 1 and let f h be given by Lemma 5.8. Then, setting bn = (ln pn )−1 , Z k = (ln pk )X k , Z˜ k = Z k − EZ k and cn = 0,     n n  b  +∞   n   |Eh(Sn ) − Eh(X )| ≤ E X − cn + bn EZ k  + |u|ν(du) E|Z k |   n −∞ k=1 k=1   n n    1   + E bn T ( f h )(Sn,k ) , Z˜ k ( f h (Sn,k + Z k bn ) − f h (Sn,k )) −   n k=1 k=1     n n n   b +∞   1    n ≤ E X − cn + bn EZ k  + |u|ν(du) E|Z k | + bn2 |EZ k |E|Z k |   n 2 −∞ k=1 k=1 k=1   n n    1   + Ebn Z k ( f h (Sn,k + Z k bn ) − f h (Sn,k )) − T ( f h )(Sn,k ) . (6.18)   n k=1

k=1

Let us begin by upper bounding the first three terms on the right-hand side of (6.18). For all n ≥ 2, using Lemma A.7,      n n     1  ln pk     EZ k  ≤ 1 − E X − cn + bn ,    ln pn k=1 1 + pk  k=1 ≤

C1 ln n

for some C1 > 0 independent of n. Moreover, for all n ≥ 2 bn n



+∞

−∞

|u|ν(du)

 n

E|Z k | ≤

k=1



n 1  ln pk , n ln pn k=1 1 + pk

C3 , n

for some C3 > 0 independent of n. Similarly, for all n ≥ 2 2 n n  ln pk 1 1 1 2 . bn |EZ k |E|Z k | = = O 2 k=1 2(ln pn )2 k=1 (1 + pk ) (ln n)2   To conclude, let us deal with Ebn nk=1 Z k ( f h (Sn,k + Z k bn ) − f h (Sn,k )) − nk=1 T ( f h )(Sn,k )/n , for all n ≥ 2. First, note that, for all n ≥ 1 E

n n 

1 1  1  T ( f h )(Sn,k ) = E f h (Sn,k + u) − f h (Sn,k ) du = E f h (Sn,J + U ) − f h (Sn,J ) , n n 0 k=1

k=1

where U and J are independent random variables, independent of (X k )k≥1 , and uniformly distributed on [0, 1] and on {1, . . . , n}, respectively. Moreover, by the very definition of I in Lemma A.7, one has

88

6 Applications to Sums of Independent Random Variables

Ebn

n 

Z k ( f h (Sn,k + Z k bn ) − f h (Sn,k )) = ESn E



k=1

f h



ln p I Sn,I + ln pn





− f h (Sn,I ) .

Then, for all n ≥ 1   n n    1 Ebn Z k ( f h (Sn,k + Z k bn ) − f h (Sn,k )) − T ( f h )(Sn,k )  n k=1 k=1     ln p I − f h (Sn,I ) − E f h (Sn,J + U ) − f h (Sn,J ) , ≤ ESn E f h Sn,I + ln pn             ln p I ≤ 2ESn − 1 + E f h (Sn,I ) − f h (Sn,J ) + E f h Sn,I + − f h (Sn,J + U ). ln pn

Let us deal with these three terms separately. First, as previously, for all n ≥ 2, |ESn − 1| ≤

C1 . ln n

Now, for all n ≥ 2,      E f (Sn,I ) − f  (Sn,J ) ≤ 1 E  X I ln p I − X J ln p J h h 2  ln pn ln pn

  1  ≤ E X I ln p I + 1 E X J ln p J .  2 ln pn 2 ln pn

But, by the respective definitions of I and J , n  1 (ln pk )2 1 , = O ESn (ln pn )2 k=1 (1 + pk )2 (ln n)2 n X J ln p J 1  ln pk 1 E = . =O ln pn n ln pn k=1 1 + pk n

E

X I ln p I ln pn



=

Finally, for all n ≥ 1,          E f Sn,I + ln p I − f  (Sn,J + U ) ≤ 1 E U − ln p I  + 1 E X I ln p I + 1 E X J ln p J . h   h   ln pn 2 ln pn 2 ln pn 2 ln pn

Using, once more, Lemma A.7 completes the proof of the upper bound.



We finish our manuscript by briefly addressing, among many others, three possible extensions and generalizations of our current work which will be presented elsewhere. A first possible direction of future research is to solve the Stein equation associated with general infinitely divisible distributions with finite first moment (not only the self-decomposable target laws). A second possible direction of research to which our methods are amenable is the study of extensions to multivariate (and even infinitedimensional) settings of the results presented here. (In this vein, see [3].) A third direction would be to attempt at removing the finite first moment assumption which is present throughout our hypotheses.

Appendix

This chapter is devoted to the proof of seven technical results used in the previous chapters. Lemma A.1 Let X ∼ I D(b, 0, ν) be self-decomposable, with characteristic function ϕ, and such that E|X | < ∞. Let x, ξ ∈ R. Then,      +∞  iuξ  1 iξx(e−t −1) ϕ(ξ) e − 1 = −x + EX + lim e − 1 uν(du) (iξ). t→0+ t ϕ(e−t ξ) −∞ Proof Let x, ξ ∈ R. First, it is clear that 1 −t lim+ (eiξx(e −1) − 1) = −iξx. t→0 t

(A.1)

Next, since X is infinitely divisible with finite first moment, for all t ≥ 0,  +∞ ϕ(ξ) −t = eiξEX (1−e ) e −∞ −t ϕ(e ξ)

 −t eiuξ −eiuξe −iuξ(1−e−t ) ν(du)

.

Now, from (A.1), 1 −t lim (eiξEX (1−e ) − 1) = iξEX, t

t→0+

and moreover, lim+ e

t→0

 +∞  iuξ iuξe−t −e −iuξ(1−e−t ) ν(du) −∞ e

= 1.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2019 B. Arras and C. Houdré, On Stein’s Method for Infinitely Divisible Laws with Finite First Moment, SpringerBriefs in Probability and Mathematical Statistics, https://doi.org/10.1007/978-3-030-15017-4

89

90

Appendix

So, to finish the proof, one needs to show that  +∞    +∞ −t  iuξ  1 −∞ eiuξ −eiuξe −iuξ(1−e−t ) ν(du) e e − 1 uν(du)(iξ). −1 = lim t→0+ t −∞ For this purpose, let us show that lim+

t→0

1 t



+∞ −∞

  −t eiuξ − eiuξe − iuξ(1 − e−t ) ν(du) =

+∞

−∞

 iuξ  e − 1 uν(du)(iξ). (A.2)

For the real part of (A.2), one wants to prove lim+

t→0

1 t



+∞

−∞

  cos(uξ) − cos(uξe−t ) ν(du) = −



+∞

sin(uξ)uν(du)ξ.

(A.3)

−∞

But, for all u ∈ R, lim

t→0+

 1 cos(uξ) − cos(uξe−t ) = −ξu sin(uξ). t

(A.4)

Hence, a straightforward application of the dominated convergence theorem shows (A.3). The imaginary part of (A.2) can be treated in a similar fashion.  Lemma A.2 For any t ≥ 0, let Yt be an ID random variable such that E|Yt | < +∞, EYt = 0, and with Lévy measure νt (du) =

ψ(u) − ψ(et u) du, |u|

where ψ is a nonnegative function increasing on (−∞, 0) and decreasing on (0, +∞) and such that   |u|ψ(u)du < ∞, ψ(u)du < ∞. |u|≤1

|u|>1

Then, for all ξ ∈ R and for all t ∈ (0, 1),

1

iξYt Ee − 1 ≤ Cψ (|ξ| + |ξ|2 ), t

(A.5)

for some Cψ > 0 only depending on ψ. Proof Let ξ = 0 and t ∈ (0, 1). First, since E|Yt | < +∞ and since Yt has zero mean, EeiξYt = e

 +∞ −∞

(eiuξ −1−iuξ )νt (du) .

Appendix

91

Then,





ω∈[0,|ξ|]

 +∞ iuξ



−∞ (e −1−iuξ )νt (du) − 1 ≤ |ξ| max

e

+∞



u e

iuω

−∞



 − 1 νt (du)

.

Moreover, for ω ∈ [0, |ξ|]







+∞ −∞



  u eiuω − 1 νt (du)

≤ |ω|



|u|≤1

|u|2 νt (du) + 2

|u|νt (du).

|u|>1

  Let us bound the two terms |u|≤1 |u|2 νt (du) and |u|>1 |u|νt (du) present above. We only consider these integrals on (0, +∞) since similar arguments provide the same type of bounds on (−∞, 0). For the first term, 

1







1

u ψ(u) − ψ(e u) du = t

0



 uψ(u)du −

0

uψ(et u)du 0

1

=

1

uψ(u)du − e−2t

0

= −e−2t



et

uψ(u)du 0



et

uψ(u)du + (1 − e−2t )



1

1

uψ(u)du 0

≤ sup |uψ(u)|(et − 1) + (1 − e−2t ) 10 be a regularization of h by convolution such that

h − h ε ∞ ≤ ε, h (k) ε ∞ ≤ 1, 0 ≤ k ≤ r. Let φ ∈ Cc∞ (R) be an even function with values in [0, 1] and such that φ(x) = 1, for x ∈ [−1, 1]. Let M ≥ 1 and set φ M (x) = φ(x/M), for all x ∈ R. Next, denote by h M,ε the Cc∞ (R) function defined, for all x ∈ R, by h M,ε (x) = φ M (x)h ε (x). Then,







|E h(X ) − E h(Y )| ≤ E h M,ε (X ) − E h M,ε (Y ) + E h(X ) − h M,ε (X ) + E h(Y ) − h M,ε (Y )





≤ Eh M,ε (X ) − Eh M,ε (Y ) + E |h(X ) − h ε (X )| + E h ε (X ) − h M,ε (X )



+ E |h(Y ) − h ε (Y )| + E h ε (Y ) − h M,ε (Y )









≤ E h M,ε (X ) − E h M,ε (Y ) + 2ε + E h ε (X ) − h M,ε (X ) + E h ε (Y ) − h M,ε (Y )

 



|1 − φ M (x)| μ X (d x) + |1 − φ M (y)| μY (dy). ≤ E h M,ε (X ) − E h M,ε (Y ) + 2ε + R

R

Now, choosing M ≥ 1 large enough so that 

 R

|1 − φ M (x)| μ X (d x) +

R

|1 − φ M (y)| μY (dy) ≤ 2ε,

it follows that, for such M ≥ 1,



|Eh(X ) − Eh(Y )| ≤ Eh M,ε (X ) − Eh M,ε (Y ) + 4ε. Moreover, for all x ∈ R, for all M ≥ 1 and for all ε > 0 |h M,ε (x)| ≤ 1, while, for all x ∈ R and for all 1 ≤ k ≤ r , |h (k) M,ε (x)|

k   k ( p) p) |h (k− ≤ (x)||φ M (x)| ε p p=0 ⎛ ⎞ k 1 ⎠ ≤ ⎝1 + C k , p M p=1

Appendix

93

for some Ck > 0 which only depends on k and on φ. Thus, ⎛

⎞ r 1 ⎠ |Eh(X ) − Eh(Y )| ≤ ⎝1 + Cr |Eh(X ) − Eh(Y )| + 4ε, sup M p h∈Cc∞ (R)∩Hr p=1 for some appropriate constant Cr > 0 only depending on r > 0 and on φ. Letting  first M → +∞ and then ε → 0+ gives the result. Lemma A.4 Let X and Y be two random variables. Then, for r ≥ 2, √  dWr −1 (X, Y ) ≤ 3 2 dWr (X, Y ). Moreover, −1 1  √ rk=1   1 2k−1 dWr (X, Y ) 2r −1 , dW1 (X, Y ) ≤ 3 2

for r ≥ 2. Proof Let r ≥ 2, let h be an element of Hr −1 and let ε > 0. Assume that dWr (X, Y ) = 0. Let h ε be defined, for all x ∈ R, by  h ε (x) :=

+∞ −∞

  dy y2 h(x − y) exp − 2 √ . 2ε 2πε

Then, for all 0 ≤ k ≤ r − 1,

h (k) ε ∞ ≤ 1 and h − h ε ∞ ≤ ε. ) Moreover, by an integration by parts, h (r ε ∞ ≤ 1/ε. Thus,

|Eh(X ) − Eh(Y )| ≤ 2ε + |Eh ε (X ) − Eh ε (Y )| . Choosing ε ∈ (0, 1) implies 1 |Eh(X ) − Eh(Y )| ≤ 2ε + dWr (X, Y ). ε Now, taking ε = leads to



dWr (X, Y )/(2(1 + dWr (X, Y ))), which clearly belongs to (0, 1), 

|Eh(X ) − Eh(Y )| ≤

2dWr (X, Y ) + 1 + dWr (X, Y )

   2 1 + dWr (X, Y ) dWr (X, Y ).

Finally, since dWr (X, Y ) ≤ 2 and since 1/(1 + dWr (X, Y )) ≤ 1,

94

Appendix

|Eh(X ) − Eh(Y )| ≤

  2dWr (X, Y ) + 6dWr (X, Y ),

which implies that √  dWr −1 (X, Y ) ≤ 3 2 dWr (X, Y ). 

A recursive argument concludes the proof. Lemma A.5 Let X and Y be two random variables. Then, dW1 (X, Y ) = d F M (X, Y ). Proof First, since H1 ⊂ B Li p(1), dW1 (X, Y ) ≤ d F M (X, Y ).

Let h be in B Li p(1), let ε > 0 and let h ε be a regularization of h by convolution such that

h ε ∞ ≤ 1,

h − h ε ∞ ≤ ε,

h ε Li p(1) ≤ 1.

Then, |Eh(X ) − Eh(Y )| ≤ 2ε + |Eh ε (X ) − Eh ε (Y )| ≤ 2ε + dW1 (X, Y ), since h ε belongs to H1 . Then, letting ε → 0 gives d F M (X, Y ) ≤ dW1 (X, Y ), which concludes the proof of the lemma.



The next lemma shows that the Kolmogorov distance and the smooth Wasserstein-1 distance are naturally related. Lemma A.6 Let X be a random variable with a bounded density h X and let Y be a random variable. Then,  

h X ∞  dW1 (X, Y ). d K (X, Y ) ≤ 1 + 2 Proof If dW1 (X, Y ) = 0, there is nothing to prove. If dW1 (X, Y ) ≥ 1, then since d K (X, Y ) ≤ 1,  

h X ∞  d K (X, Y ) ≤ 1 ≤ 1 + dW1 (X, Y ). 2

Appendix

95

So, let us assume that 0 < dW1 (X, Y ) < 1. Let x ∈ R and let gx be the indicator function of the set (−∞, x]. Let ε ∈ (0, 1) and gx,ε be the continuous function which is equal to 1 on (−∞, x], to 0 on [x + ε, +∞) and which is linear and decreasing in between. Then, Egx (Y ) − Egx (X ) = Egx (Y ) − Egx,ε (X ) + Egx,ε (X ) − Egx (X ) ≤ Egx,ε (Y ) − Egx,ε (X ) + Egx,ε (X ) − Egx (X ). Let us start with the second difference. By definition and the boundedness of the density of X ,  Egx,ε (X ) − Egx (X ) =

  ε gx,ε (y) − gx (y) h X (y)dy ≤ h X ∞ . 2 R

Now, note (1)

gx,ε ∞ ≤ 1, gx,ε

∞ ≤ ε−1 ,

so that the function εgx,ε belongs to B Li p(1) since ε ∈ (0, 1). Therefore, by Lemma A.5, Egx,ε (Y ) − Egx,ε (X ) ≤

1 dW (Y, X ). ε 1

Thus, Egx (Y ) − Egx (X ) ≤ Now, taking ε =



ε 1 dW (X, Y ) + h X ∞ . ε 1 2

dW1 (X, Y ) < 1 gives,

 

h X ∞  Egx (Y ) − Egx (X ) ≤ 1 + dW1 (X, Y ). 2 The same inequality holds for Egx (X ) − Egx (Y ), using the continuous function g˜ x,ε which is equal to 1 on (−∞, x − ε], to 0 on [x, +∞) and which is linear and decreasing in between.  Lemma A.7 Let ( pn )n≥1 be an increasing enumeration of the prime numbers starting at 2 and let (X n )n≥1 be a sequence of independent random variables Bernoulli distributed such that, for each k ≥ 1, P(X k = 1) = 1/(1 + pk ) and P(X k = 0) = pk /(1 + pk ). Let (Sn )n≥1 be given by, Sn = nk=1 (ln pk )X k / ln pn , for all n ≥ 1, and let I be a discrete random variable independent of (X n )n≥1 and with values in {1, ..., n}, such that, for all k ∈ {1, ..., n}, P (I = k) =

ln pk . ln pn ESn (1 + pk )

96

Appendix

Finally, let U be a uniform random variable on [0, 1] independent of (X n )n≥1 . Then, for all n ≥ 2



n

1 ln pk

C1

1 −



ln pn k=1 1 + pk ln n for some C1 > 0 independent of n. Moreover, there exists a well-chosen coupling between U and I such that, for all n ≥ 2



ln p I

C2

≤ E U − ,

ln pn ln n

(A.7)

for some C2 > 0 independent of n. Proof We divide the proof into two steps. Step 1: To start, let us bound the term |ESn − 1|, for all n ≥ 2. First, 1 ESn − 1 = ln pn



n k=1

 ln pk − ln pn . ( pk + 1)

Next by the prime number theorem [88, Sect. 1.10], ln pn = ln n + O(|ln ln n|), with the usual definition of O. Moreover, by Mertens first theorem (see, e.g., [88, Proposition 1.51] or [89, Theorem 1.8]) n n n ln pk ln pk ln pk − ln pn = − ln pn − p + 1 p p ( pk + 1) k k=1 k k=1 k=1 k

= O(1). Thus,  ESn − 1 = O

1 ln n

 ,

and so we are done with Step 1. Step 2: To continue the proof of the lemma, let us deal with (A.7). For this purpose, set F0 = 0 and for 1 ≤ j ≤ n, Fj =

j ln pk 1 . ESn ln pn k=1 1 + pk

Appendix

97

Note that 0 ≤ F j ≤ 1, for all 0 ≤ j ≤ n. Now, consider the coupling between U and I defined, for all 1 ≤ j ≤ n by I = j if U ∈ [F j−1 , F j ]. Note that U is independent of Sn since I is. Moreover,





n



ln p I

ln p I



= E U − P(I = j)E U − ln pn

ln pn

j=1



I = j .

To continue, we need to control the following quantity:



 

ln p j



ln p j



q j := max F j−1 − , Fj − , ln pn

ln pn



for 1 ≤ j ≤ n. Let us bound F j − 1≤ j ≤n ln p j 1 Fj − = ln pn ln pn



ln p j

, ln pn

for 1 ≤ j ≤ n. For all n ≥ 2 and for all

 j    j ln pk 1 ln pk 1 . − ln p j + −1 1 + p ln p ES p +1 k n n k=1 k=1 k

Thus, using again Mertens first theorem and standard inequalities gives





F j − ln p j ≤ C1 + C2

ln pn ln pn ln pn







ESn − 1





+ C3 ESn − 1 ,

ES

ES

n n

for some constants C1 > 0, C2 > 0, and C3 > 0 independent of n and j. Similarly, using the fact that F j − F j−1 = P(I = j),





F j−1 − ln p j ≤ C1 + C2

ln pn ln pn ln pn







ESn − 1





+ C3 ESn − 1 + P(I = j).

ES

ES

n n

Then, for all 1 ≤ j ≤ n qj ≤

C1 C2 + ln pn ln pn







ESn − 1





+ C3 ESn − 1 + P(I = j).

ES

ES

n n

The previous bounds imply that





ln p I

1 1

≤C E U − + ln pn

ln pn ln pn





 n

ESn − 1 ESn − 1



+

+ P(I = j)2 .

ES ES

n n j=1

98

Appendix

But n

P(I = j)2 ≤

j=1

C , (ln pn )2

for some C > 0 independent of n. Combining both steps together and using the prime number theorem lead to



 

1 ln p I



=O , E U − ln pn

ln n which concludes the proof of the lemma.



References

1. S. Albeverio, S. Rüdinger, J.L. Wu, Invariant measures and symmetry property of Lévy type operators. Potential Anal. 13(2), 147–168 (2000) 2. D. Applebaum, Lévy Processes and Stochastic Calculus (Cambridge University Press, Cambridge, 2009) 3. B. Arras, C. Houdré, On Stein’s method for multivariate self-decomposable laws with finite first moment. Electron. J. Probab. (2019), arXiv:1809.02050 4. B. Arras, G. Mijoule, G. Poly, Y. Swan, A new approach to the Stein-Tikhomirov method: with applications to the second Wiener chaos and Dickman convergence (2017), arXiv:1605.06819 5. R. Arratia, A.D. Barbour, S. Tavaré, Logarithmic Combinatorial Structures: A Probabilistic Approach (European Mathematical Society, 2003) 6. R. Arratia, P. Baxendale, Bounded size bias coupling: a Gamma function bound, and universal Dickman-function behavior. Probab. Theory Relat. Fields 162, 411–429 (2015) 7. R. Arratia, L. Goldstein, F. Kochman, Size bias for one and all. Probab. Surv. 16, 1–61 (2019) 8. A.D. Barbour, Stein’s method and Poisson process convergence. J. Appl. Probab. 25A, 175–184 (1988) 9. A.D. Barbour, Stein’s method for diffusion approximations. Probab. Theory Relat. Fields. 84(3), 297–322 (1990) 10. A.D. Barbour, L.H.Y. Chen, An Introduction to Stein’s Method. Lecture Notes Series, Institute for Mathematical Sciences, National University of Singapore, vol. 4 (Singapore University Press, Singapore, 2005) 11. A.D. Barbour, L.H.Y. Chen, W.-L. Loh, Compound Poisson approximations for non-negative random variables via Stein’s method. Ann. Probab. 20(4), 1843–1866 (1992) 12. A.D. Barbour, H.L. Gan, A. Xia, Stein factors for negative binomial approximation in Wasserstein distance. Bernoulli 21(2), 1002–1013 (2015) 13. A.D. Barbour, L. Holst, S. Janson, Poisson Approximation (Oxford Science Publications, Oxford) 14. O.E. Barndorff-Nielsen, T. Mikosch, S.I. Resnick, Lévy Processes: Theory and Applications (Springer Science and Business Media, New York, 2001) 15. J. Bartroff, L. Goldstein, Ü. Islak, Bounded size biased couplings, log concave distributions and concentration of measure for occupancy models. Bernoulli 24(4B), 3283–3317 (2018) 16. J. Bertoin, Lévy Processes, vol. 121 (Cambridge University Press, Cambridge, 1998) 17. C. Bhattacharjee, L. Goldstein, Dickman approximation in simulation, summations and perpetuities. Bernoulli (2018), arXiv:16.08192v5 18. P. Billingsley, The probability theory of additive arithmetic functions. Ann. Probab. 2(5), 749– 791 (1974)

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2019 B. Arras and C. Houdré, On Stein’s Method for Infinitely Divisible Laws with Finite First Moment, SpringerBriefs in Probability and Mathematical Statistics, https://doi.org/10.1007/978-3-030-15017-4

99

100

References

19. L. Bondesson, Generalized Gamma Convolutions and Related Classes of Distributions and Densities (Lectures Notes in Statistics (Springer, Berlin, 1992) 20. P. Bosch, T. Simon, A proof of Bondesson’s conjecture on stable densities. Ark. Mat. 54(1), 31–38 (2016) 21. T.C. Brown, M.J. Phillips, Negative binomial approximations with Stein’s method. Methodol. Comput. Appl. Probab. 1(4), 407–421 (1999) 22. T.C. Brown, A. Xia, Stein’s method and birth-death processes. Ann. Probab. 29, 1373–1403 (2001) 23. F. Cellarosi, Y.G. Sinai, Non-standard limit theorems in number theory, Prokhorov and Contemporary Probability Theory (Springer, Berlin, 2013), pp. 197–213 24. D. Chafaï, A. Joulin, Intertwining and commutation relations for birth-death processes. Bernoulli 19(5), 1855–1879 (2013) 25. S. Chatterjee, A new method of normal approximation. Ann. Probab. 36(4), 1584–1610 (2008) 26. S. Chatterjee, Fluctuations of eigenvalues and second order Poincaré inequalities. Probab. Theory Relat. Fields 143, 1–40 (2009) 27. S. Chatterjee, A short survey of Stein’s method. Proc. ICM IV 1–24, (2014) 28. S. Chatterjee, P. Diaconis, E. Meckes, Exchangeable pairs and Poisson approximation. Probab. Surv. 2, 64–106 (2005) 29. S. Chatterjee, J. Fulman, A. Röllin, Exponential approximation by Stein’s method and spectral graph theory. Lat. Am. J. Probab. Math. Stat. 8, 197–223 (2011) 30. L.H.Y. Chen, Poisson approximation for dependent trials. Ann. Probab. 3(3), 534–545 (1975) 31. L.H.Y. Chen, L. Goldstein, Q.M. Shao, Normal Approximation by Stein’s Method Probability and Its Application (Springer, Heidelberg, 2011) 32. Z.-Q. Chen, X. Zhang, Heat kernels and analyticity of non-symmetric jump diffusion semigroups. Probab. Theory Relat. Fields 165, 267–312 (2016) 33. G. Christoph, W. Wolf, Convergence Theorems with a Stable Limit Law (Akademie-Verlag, Berlin, 1993) 34. R. Cont, P. Tankov, Financial Modelling with Jump Processes (Chapman and Hall/CRC, 2004) 35. P. Diaconis, S. Holmes, Stein’s Method: Expository Lectures and Applications. IMS Lecture Notes-Monograph Series 46, (2004) 36. A. Diédhiou, On the self-decomposability of the half-Cauchy distribution. J. Math. Anal. Appl. 220, 42–64 (1998) 37. C. Döbler, G. Peccati, The Gamma Stein equation and noncentral de Jong theorems. Bernoulli 24(4B), 3384–3421 (2018) 38. J.A. Domínguez-Molina, The Tracy-Widom distribution is not infinitely divisible. Stat. Probab. Lett. 123, 56–60 (2017) 39. R.M. Dudley, Real Analysis and Probability, 2nd edn. (Cambridge University Press, Cambridge, 2002) 40. W. Ehm, Binomial approximation to the Poisson binomial distribution. Stat. Probab. Lett. 11, 7–16 (1991) 41. S.N. Ethier, T.G. Kurtz, Markov Processes: Characterization and Convergence, vol. 282 (Wiley, New York, 2009) 42. J. Fulman, N. Ross, Exponential approximation and Stein’s method of exchangeable pairs. Lat. Am. J. Probab. Math. Stat. 10(1), 1–13 (2013) 43. R.E. Gaunt, A.M. Pickett, G. Reinert, Chi-square approximation by Stein’s method with application to Pearson’s statistic. Ann. Appl. Probab. 27(2), 720–756 (2017) 44. S. Ghosh, L. Goldstein, Concentration of measures via size-biased couplings. Probab. Theory Relat. Fields 149, 271–278 (2011) 45. B.V. Gnedenko, A.N. Kolmogorov, Limit Distributions for Sums of Independent Random Variables (Addison-Wesley Publishing Company, Cambridge, 1954) 46. L. Goldstein, Ü. Islak, Concentration inequalities via zero bias couplings. Stat. Probab. Lett. 86, 17–23 (2014) 47. L. Goldstein, G. 
Reinert, Stein’s method and the zero bias transformation with application to simple random sampling. Ann. Appl. Probab. 7(4), 935–952 (1997)

References

101

48. F. Götze, On the rate of convergence in the multivariate CLT. Ann. Probab. 19(2), 724–739 (1991)
49. C. Halgreen, Self-decomposability of the generalized inverse Gaussian and hyperbolic distributions. Z. Wahrscheinlichkeitstheorie verw. Gebiete 47, 13–17 (1979)
50. P. Hall, Two-sided bounds on the rate of convergence to a stable law. Probab. Theory Relat. Fields 57(3), 349–364 (1981)
51. W.J. Hall, J.A. Wellner, The rate of convergence in law of the maximum of an exponential sample. Stat. Neerlandica 33, 151–154 (1979)
52. K.H. Hofmann, Z.J. Jurek, Some analytic semigroups occurring in probability theory. J. Theor. Probab. 9(3), 745–763 (1996)
53. C. Houdré, Remarks on deviation inequalities for functions of infinitely divisible random vectors. Ann. Probab. 30(3), 1223–1237 (2002)
54. C. Houdré, P. Marchal, On the concentration of measure phenomenon for stable and related random vectors. Ann. Probab. 32(2), 1496–1508 (2004)
55. C. Houdré, V. Pérez-Abreu, D. Surgailis, Interpolation, correlation identities and inequalities for infinitely divisible variables. J. Fourier Anal. Appl. 4(6), 651–668 (1998)
56. S. Janson, Gaussian Hilbert Spaces, vol. 129 (Cambridge University Press, Cambridge, 1997)
57. A. Janssen, D.M. Mason, On the rate of convergence of sums of extremes to a stable law. Probab. Theory Relat. Fields 86(2), 253–264 (1990)
58. Z.J. Jurek, On relations between Urbanik and Mehler semigroups. Probab. Math. Stat. 29(2), 297–308 (2009)
59. A.Ya. Khintchine, Limit Laws for Sums of Independent Random Variables (ONTI, Moscow–Leningrad, 1938) (in Russian)
60. R. Kuske, J.B. Keller, Rate of convergence to a stable law. SIAM J. Appl. Math. 61(4), 1308–1323 (2000)
61. M. Ledoux, I. Nourdin, G. Peccati, Stein’s method, logarithmic Sobolev inequality and transport inequalities. Geom. Funct. Anal. 25, 256–306 (2015)
62. P. Lévy, Théorie de l’addition des variables aléatoires, 1st edn. (Gauthier-Villars, Paris, 1937)
63. C. Ley, G. Reinert, Y. Swan, Stein’s method for comparison of univariate distributions. Probab. Surv. 14, 1–52 (2017)
64. G.D. Lin, C.-Y. Hu, The Riemann zeta distribution. Bernoulli 7(5), 817–828 (2001)
65. H.M. Luk, Stein’s method for the Gamma distribution and related statistical applications, Ph.D. thesis, University of Southern California, 1994
66. E. Lukacs, Characteristic Functions, 2nd edn. (Griffin, London, 1970)
67. M.B. Marcus, J. Rosinski, L¹-norm of infinitely divisible random vectors and certain stochastic integrals. Electron. Commun. Probab. 6, 15–29 (2001)
68. I.W. McKeague, E. Peköz, Y. Swan, Stein’s method and approximating the quantum harmonic oscillator. Bernoulli (to appear)
69. T. Nakamura, A modified Riemann zeta distribution in the critical strip. Proc. Am. Math. Soc. 143(2), 897–905 (2015)
70. T. Nakamura, A complete Riemann zeta distribution and the Riemann hypothesis. Bernoulli 21(1), 604–617 (2015)
71. I. Nourdin, G. Peccati, Stein’s method on Wiener chaos. Probab. Theory Relat. Fields 145, 75–118 (2009)
72. I. Nourdin, G. Peccati, Normal Approximations with Malliavin Calculus: From Stein’s Method to Universality (Cambridge University Press, Cambridge, 2012)
73. I. Nourdin, G. Poly, Convergence in law in the second Wiener/Wigner chaos. Electron. Commun. Probab. 36, 1–12 (2012)
74. F.W.J. Olver, D.W. Lozier, R.F. Boisvert, C.W. Clark, NIST Handbook of Mathematical Functions (Cambridge University Press, Cambridge, 2010)
75. J.R. Partington, Linear Operators and Linear Systems: An Analytical Approach to Control Theory (Cambridge University Press, Cambridge, 2004)
76. E.A. Peköz, A. Röllin, New rates for exponential approximation and the theorems of Rényi and Yaglom. Ann. Probab. 39(2), 587–608 (2011)
77. V.V. Petrov, Limit Theorems of Probability Theory (Oxford University Press, Oxford, 1995)
78. A. Pickett, Rates of convergence of Chi-square approximations via Stein’s method, Ph.D. thesis, University of Oxford, 2004
79. J. Pike, H. Ren, Stein’s method and the Laplace distribution. ALEA Lat. Am. J. Probab. Math. Stat. 11(1), 571–587 (2014)
80. D. Revuz, M. Yor, Continuous Martingales and Brownian Motion, vol. 293, 3rd edn. (Springer, Berlin, 1999)
81. N.F. Ross, Fundamentals of Stein’s method. Probab. Surv. 8, 210–293 (2011)
82. N.F. Ross, Power laws in preferential attachment graphs and Stein’s method for the negative binomial distribution. Adv. Appl. Prob. 45, 876–893 (2013)
83. S.G. Samko, A.A. Kilbas, O.I. Marichev, Fractional Integrals and Derivatives (Gordon and Breach Science Publishers, Yverdon, 1993)
84. K.-I. Sato, Lévy Processes and Infinitely Divisible Distributions, corrected printing with supplements (Cambridge University Press, Cambridge, 2015)
85. C. Stein, A bound for the error in the normal approximation to the distribution of a sum of dependent random variables, in Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (1972), pp. 583–602
86. C. Stein, Approximate Computation of Expectations. Institute of Mathematical Statistics Lecture Notes–Monograph Series, vol. 7 (1986)
87. F.W. Steutel, K. Van Harn, Infinite Divisibility of Probability Distributions on the Real Line (CRC Press, 2003)
88. T. Tao, V.H. Vu, Additive Combinatorics, vol. 105 (Cambridge University Press, Cambridge, 2006)
89. G. Tenenbaum, Introduction to Analytic and Probabilistic Number Theory. Graduate Studies in Mathematics, vol. 163, 3rd edn. (American Mathematical Society, Providence, 2015)
90. C.A. Tracy, H. Widom, Distribution functions for largest eigenvalues and their applications, in Proceedings of the ICM, vol. I (2002), pp. 587–596
91. A.N. Tikhomirov, On the convergence rate in the central limit theorem for weakly dependent random variables. Theory Probab. Math. Stat. 25(4), 790–809 (1981)
92. M. Veillette, M.S. Taqqu, Properties and numerical evaluation of the Rosenblatt distribution. Bernoulli 19(3), 982–1005 (2013)
93. C. Villani, Optimal Transport, Old and New, vol. 338 (Springer, Berlin, 2009)
94. L. Xu, Approximation of stable law in Wasserstein-1 distance by Stein’s method. Ann. Appl. Probab. (2019), arXiv:1709.00805v3

Index

A
Absolutely continuous, 3, 4, 24, 31, 35, 38, 44, 46, 58
Additive size-bias, 18, 21, 23
α-stable distribution, 9

B
Ball (closed unit ball), 13
Banach space, 13
Berry–Esseen-type bounds, 44
Borel measure, 7, 12, 13, 24
Borel set, 9, 61, 65
Bounded Lipschitz distance, 11

C
Characteristic function, 7, 10, 14, 18, 21, 31–35, 38, 40, 44, 46, 48, 52, 55, 57, 58, 64, 65, 67, 74, 81, 82, 89
Compound Poisson, 1, 8, 17, 34, 44, 46, 52, 55
Coupling, 5, 27, 36, 96, 97
Covariance, 3, 14, 27
Cumulative distribution functions, 36

D
Dawson integral, 33
Decreasing, 12, 38, 49, 58, 63, 67, 90, 95
Discrete gradient, 73
Discrete self-decomposable, 2, 73, 74
Domain of normal attraction, 39, 40, 43

E
Esseen inequality, 33, 54
Exponential random variable, 17, 18, 62, 84
Extreme value theory, 2, 84

F
Fortet–Mourier distance, 11
Fourier transform, 21, 58
Fractional Laplacian, 19

G
Gamma distribution, 8, 18, 58, 63
Generalized Dickman distribution, 18, 31, 34, 58, 62
Generator, 59–62, 73, 74
Gumbel distribution, 9

H
Hurwitz zeta function, 86

I
Immigration–birth–death process, 74
Increasing, 12, 13, 38, 63, 86, 90, 95
Infinite divisibility, 1, 6, 40
Integro-differential equation, 64

K
Kolmogorov distance, 10, 35, 37, 43, 48, 94

L
Laplace distribution, 8, 17, 58, 62
Lévy–Khintchine representation, 7, 84
Lévy measure, 1, 7–10, 12, 20, 21, 23, 24, 26–28, 34, 39, 48, 52, 61–63, 65, 69–71, 74, 82, 85, 86, 90
Lévy–Raikov Theorem, 48
Lipschitz functions, 11, 13–15, 19, 22, 23, 25, 28
Lipschitz semi-norm, 13

M
Malliavin, 5, 20
Mehler semigroups, 61

N
Negative binomial, 1, 8, 16, 73, 74
Nonlocal, 24, 63, 69, 70
Normal law, 4, 15
Null arrays, 6, 77, 83
Number theory, 2, 6, 84

P
Pareto density, 40
Pareto distribution, 9, 58
Poisson, 1–5, 8, 13, 15, 27, 72–74
Prime numbers, 86, 95, 96, 98
Probability metrics, 1, 3, 4, 10

R
Radon–Nikodym derivatives, 25, 58
Range/support, 8, 13, 23, 24, 58, 65, 66, 71
Rosenblatt distribution, 24
Row sums, 6, 83

S
Schwartz space, 58
Second-order Wiener chaoses, 20
Self-decomposable, 1, 9, 10, 38, 57, 58, 61, 63, 71, 77, 81–83, 85, 88, 89
Semigroup, 1, 57, 59–61, 64, 68, 71, 73, 81
Size-bias distribution, 18, 21, 23, 27
Smooth Wasserstein distance, 1, 10–12, 48, 51
Stable distribution, 27, 37, 39, 54, 58, 82
Stein equation, 57, 58, 61–64, 68, 71, 72, 74, 81, 88
Stein methodology, 2, 6, 9, 62, 77
Symmetric α-stable distribution, 39, 40, 43, 55, 61, 81

T
Total variation distance, 10
Tracy–Widom distribution, 21
Two-sided exponential, 8, 26
Two-sided Maxwell distribution, 21

W
Wasserstein distance, 12, 34, 36, 81
Weak convergence, 6, 10, 11, 37

Z
Zero-bias distribution, 25, 27