Topics in Infinitely Divisible Distributions and Lévy Processes, Revised Edition [1st ed. 2019] 978-3-030-22699-2, 978-3-030-22700-5


Table of contents :
Front Matter ....Pages i-viii
Classes Lm and Their Characterization (Alfonso Rocha-Arteaga, Ken-iti Sato)....Pages 1-26
Classes Lm and Ornstein–Uhlenbeck Type Processes (Alfonso Rocha-Arteaga, Ken-iti Sato)....Pages 27-59
Selfsimilar Additive Processes and Stationary Ornstein–Uhlenbeck Type Processes (Alfonso Rocha-Arteaga, Ken-iti Sato)....Pages 61-75
Multivariate Subordination (Alfonso Rocha-Arteaga, Ken-iti Sato)....Pages 77-106
Inheritance in Multivariate Subordination (Alfonso Rocha-Arteaga, Ken-iti Sato)....Pages 107-119
Back Matter ....Pages 121-135


SPRINGER BRIEFS IN PROBABILITY AND MATHEMATICAL STATISTICS

Alfonso Rocha-Arteaga Ken-iti Sato

Topics in Infinitely Divisible Distributions and Lévy Processes Revised Edition


SpringerBriefs in Probability and Mathematical Statistics

Editor-in-Chief
Mark Podolskij, University of Aarhus, Aarhus, Denmark

Series Editors
Nina Gantert, Technische Universität München, Munich, Germany
Richard Nickl, University of Cambridge, Cambridge, UK
Sandrine Péché, Université Paris Diderot, Paris, France
Gesine Reinert, University of Oxford, Oxford, UK
Mathieu Rosenbaum, Université Pierre et Marie Curie, Paris, France
Wei Biao Wu, University of Chicago, Chicago, IL, USA

SpringerBriefs present concise summaries of cutting-edge research and practical applications across a wide spectrum of fields. Featuring compact volumes of 50 to 125 pages, the series covers a range of content from professional to academic. Briefs are characterized by fast, global electronic dissemination, standard publishing contracts, standardized manuscript preparation and formatting guidelines, and expedited production schedules. Typical topics might include:

• A timely report of state-of-the-art techniques
• A bridge between new research results, as published in journal articles, and a contextual literature review
• A snapshot of a hot or emerging topic
• Lecture or seminar notes making a specialist topic accessible for non-specialist readers

SpringerBriefs in Probability and Mathematical Statistics showcase topics of current relevance in the field of probability and mathematical statistics. Manuscripts presenting new results in a classical field, a new field, or an emerging topic, or bridges between new results and already published works, are encouraged. This series is intended for mathematicians and other scientists with interest in probability and mathematical statistics. All volumes published in this series undergo a thorough refereeing process. The SBPMS series is published under the auspices of the Bernoulli Society for Mathematical Statistics and Probability.

More information about this series at http://www.springer.com/series/14353

Alfonso Rocha-Arteaga • Ken-iti Sato

Topics in Infinitely Divisible Distributions and Lévy Processes Revised Edition


Alfonso Rocha-Arteaga Facultad de Ciencias Físico-Matemáticas Universidad Autónoma de Sinaloa Culiacán, México

Ken-iti Sato Hachiman-yama 1101-5-103 Tenpaku-ku, Nagoya, Japan

ISSN 2365-4333          ISSN 2365-4341 (electronic)
SpringerBriefs in Probability and Mathematical Statistics
ISBN 978-3-030-22699-2          ISBN 978-3-030-22700-5 (eBook)
https://doi.org/10.1007/978-3-030-22700-5

Mathematics Subject Classification: 60G51, 60E07, 60G18

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2019

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

Lévy processes are mathematical models of random phenomena that evolve in time, with applications in many branches of science and in modern areas such as finance and risk theory. They are stochastic processes with independent and stationary increments over disjoint time intervals, continuous in probability, starting at zero, and with sample paths that are right continuous with left limits. They are closely connected with infinitely divisible distributions, which are the distributions having an nth convolution root for each n. Infinitely divisible distributions are described by the Lévy–Khintchine representation of their Fourier transforms (or characteristic functions). Selfdecomposable and stable distributions are major subclasses of the class of infinitely divisible distributions. The class of selfdecomposable distributions includes that of stable distributions, and furthermore, there is a decreasing chain of classes of distributions Lm, m = 0, 1, . . . , ∞, from the class L0 of selfdecomposable distributions to the class L∞ generated by stable distributions through convolution and convergence.

This book deals with topics in the area of Lévy processes and infinitely divisible distributions, such as Ornstein–Uhlenbeck type processes, selfsimilar additive processes, and multivariate subordination; it focuses on developing them around the Lm classes. The prerequisites for this book are textbooks on probability theory on the level of Billingsley [13] (2012), Chung [19] (1974), Gnedenko and Kolmogorov [26] (1968), or Loève [58] (1977, 1978).

There are five chapters in this book. Chapter 1 studies the basic properties of Lm, giving in particular two characterizations of them: the first shows each member of Lm as a limit of distributions type equivalent to partial sums of a sequence of independent random variables, and the second gives a special form of its Lévy–Khintchine representation.
Chapter 2 introduces Ornstein–Uhlenbeck type processes generated by a Lévy process and a constant c > 0 through stochastic integrals based on Lévy processes. Necessary and sufficient conditions are given on the generating Lévy process for the OU type process to have a limit distribution in the class Lm. A mapping from a class of distributions of the generating Lévy processes to the class of selfdecomposable distributions is defined; then it is expressed by


an improper stochastic integral. Its relationship with the Lm classes is studied. Chapter 3 establishes the correspondence between selfsimilar additive processes and selfdecomposable distributions and makes a close inspection of a transformation named after Lamperti, which transforms c-selfsimilar additive processes and stationary OU type processes with parameter c into each other. Chapters 4 and 5 treat multivariate subordination. Subordination is a procedure of combining two independent Lévy processes, one of which is increasing in R+ = [0, ∞) and replaces the time parameter of the other. The resulting process is a new Lévy process. Chapter 4 generalizes it to multivariate subordination, where one is a K-parameter Lévy process and the other is a K-valued Lévy process, K being a cone in RN. The relation between the Lévy–Khintchine representations involved is given when K = RN+. Furthermore, the subordination of K-parameter convolution semigroups is shown for a general cone K. Chapter 5 studies the properties inherited by the subordinated process in multivariate subordination. The strictly stable and Lm properties are inherited by the subordinated process from the subordinator when the subordinand is strictly stable.

This work began with Sato's visit to CIMAT, Guanajuato, in January–February 2001. It was published by Comunicaciones del CIMAT in 2001 and then by Sociedad Matemática Mexicana and Instituto de Matemáticas de la UNAM in 2003 in the Aportaciones Matemáticas collection, Investigación series, number 17. In this new edition, we have made a full revision of the previous publication. New material reflecting the advances in the understanding of the topics has been added. At the same time, many parts have been rewritten in an attempt to make them as close to self-contained as possible. Theorems, lemmas, propositions, and remarks were reorganized; some were deleted, and others were newly added.
The pages on various extensions of Lm and Sα at the end of the previous edition were deleted; some of the comments there were moved to other places.

We thank Víctor Pérez-Abreu for his constant encouragement to us in the work for the old and new editions of this book. We also thank the anonymous reviewers for their valuable suggestions during our preparation for this edition.

Culiacán, México
Nagoya, Japan
April 2019

Alfonso Rocha-Arteaga
Ken-iti Sato

Contents

1 Classes Lm and Their Characterization ..... 1
  1.1 Basic Properties and Characterization by Limit Theorems ..... 1
  1.2 Characterization in Lévy–Khintchine Representation ..... 11
  Notes ..... 24
2 Classes Lm and Ornstein–Uhlenbeck Type Processes ..... 27
  2.1 Stochastic Integrals Based on Lévy Processes ..... 28
  2.2 Ornstein–Uhlenbeck Type Processes and Limit Distributions ..... 34
  2.3 Relations to Classes Lm, Sα, and S⁰α ..... 50
  Notes ..... 57
3 Selfsimilar Additive Processes and Stationary Ornstein–Uhlenbeck Type Processes ..... 61
  3.1 Selfsimilar Additive Processes and Class L0 ..... 61
  3.2 Lamperti Transformation and Stationary Ornstein–Uhlenbeck Type Processes ..... 66
  Notes ..... 74
4 Multivariate Subordination ..... 77
  4.1 Subordinators and Subordination ..... 78
  4.2 Subordination of Cone-Parameter Lévy Processes ..... 86
  4.3 Case of Cone RN+ ..... 90
  4.4 Case of General Cone K ..... 98
  Notes ..... 102
5 Inheritance in Multivariate Subordination ..... 107
  5.1 Inheritance of Lm Property and Strict Stability ..... 107
  5.2 Operator Generalization ..... 113
  Notes ..... 118

Bibliography ..... 121
Notation ..... 129
Index ..... 131

Chapter 1

Classes Lm and Their Characterization

Selfdecomposable distributions are extensions of stable distributions and form a subclass of the class of infinitely divisible distributions. In this chapter we will introduce, between the class L0 of selfdecomposable distributions and the class S of stable distributions, a chain of subclasses called Lm, m = 1, . . . , ∞:

L0 ⊃ L1 ⊃ L2 ⊃ · · · ⊃ L∞ ⊃ S.

In Sect. 1.1 basic properties are proved and the class Lm for 1 ≤ m < ∞ is characterized as the class of limit distributions of partial sums of independent random variables whose distributions belong to Lm−1. The class L∞ is defined as the intersection of the classes Lm over all finite m.

1.1 Basic Properties and Characterization by Limit Theorems

Definition 1.8 Let {Znk : k = 1, . . . , rn; n = 1, 2, . . .} be a double array of random variables on Rd such that rn → ∞ and, for each n, the variables Zn1, . . . , Znrn are independent. It is called a null array if, for every ε > 0,

lim_{n→∞} max_{1≤k≤rn} P[|Znk| > ε] = 0.   (1.1)

The sums Sn = Σ_{k=1}^{rn} Znk, n = 1, 2, . . . , are called the row sums.
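For illustration (not from the text), consider the array Znk = Xk/n, 1 ≤ k ≤ rn = n, with Xk independent Exp(1) random variables; then P[|Znk| > ε] = e^{−nε} for every k, so (1.1) holds. A short numerical sketch:

```python
import math

# Row-wise tail probabilities for the array Z_{nk} = X_k / n with X_k ~ Exp(1):
# P[|Z_{nk}| > eps] = P[X_k > n*eps] = exp(-n*eps), the same for every k in row n,
# so max_{1<=k<=n} P[|Z_{nk}| > eps] = exp(-n*eps) -> 0 as n -> infinity.
def row_max_tail(n: int, eps: float) -> float:
    return math.exp(-n * eps)

eps = 0.1
tails = [row_max_tail(n, eps) for n in (1, 10, 100, 1000)]
print(tails)  # strictly decreasing toward 0
```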

Remark 1.9 In Definition 1.8 the condition (1.1) can be replaced by

lim_{n→∞} max_{1≤k≤rn} |μ̂_{Znk}(z) − 1| = 0,   z ∈ Rd.   (1.2)

Indeed, to see that (1.2) implies (1.1) for d = 1, see that, for each μ ∈ P(R) and a > 0,

a⁻¹ ∫_{−a}^{a} (1 − μ̂(z)) dz = a⁻¹ ∫_{−a}^{a} ∫ (1 − e^{izx}) μ(dx) dz = 2 ∫ (1 − (sin ax)/(ax)) μ(dx)
  ≥ 2 ∫_{|x|≥2/a} (1 − (a|x|)⁻¹) μ(dx) ≥ μ({x : |x| ≥ 2/a}).

The case of general d is reduced to the case d = 1 since, for X = (Xj)1≤j≤d, we have P[|X| > ε] ≤ Σ_{j=1}^{d} P[|Xj| > ε/√d] and μ̂_{Xj}(z) is the value of μ̂_X on the jth axis. To see that (1.1) implies (1.2), notice that, for μ ∈ P(Rd),

|μ̂(z) − 1| ≤ ∫ |e^{i⟨z,x⟩} − 1| μ(dx) ≤ 2 ∫_{|x|>ε} μ(dx) + |z|ε

for every ε > 0.
The following theorem is one of the fundamental results on infinitely divisible distributions.

Theorem 1.10 Let {Znk} be a null array on Rd with row sums Sn. If, for some cn ∈ Rd, n = 1, 2, . . . , the distribution of Sn − cn converges to some μ ∈ P, then μ ∈ I D.


This is shown in Theorem 9.3 of [93] or, for d = 1, in [19, 26, 58].² Conversely, any μ ∈ I D is the limit of the distribution of the row sums of a null array, as is seen from the definition of infinite divisibility.

² The property (1.1) for Znk is called infinitesimal by Gnedenko and Kolmogorov [26] (1968) and uniformly asymptotically negligible by Loève [58] (1977, 1978). Chung [19] (1974) uses the word holospoudic double array. We follow Feller [24] (1971) in using the word "null array".

Now we introduce selfdecomposable distributions.

Definition 1.11 A distribution μ ∈ P is selfdecomposable if, for every b ∈ (1, ∞), there is ρb ∈ P such that

μ̂(z) = μ̂(b⁻¹z) ρ̂b(z).   (1.3)

Sometimes it is called of class L. Let L0 = L0(Rd) be the class of selfdecomposable distributions on Rd. Gaussian distributions on Rd and Γ-distributions and Cauchy distributions on R are examples of selfdecomposable distributions.
In order to analyse L0, let us use the following operation K that makes a subclass of P from a subclass of P.

Definition 1.12 For any subclass Q of P, define a subclass K(Q) of P as follows: K(Q) is the totality of μ ∈ P such that there are independent random variables Z1, Z2, . . . on Rd, bn > 0, and cn ∈ Rd satisfying
(a) L(bn Σ_{k=1}^{n} Zk − cn) → μ as n → ∞,
(b) {bn Zk : k = 1, 2, . . . , n; n ∈ N} is a null array,
(c) L(Zk) ∈ Q for each k.

It follows from Theorem 1.10 that K(Q) ⊂ I D.

Proposition 1.13 Let μ ∈ L0. Then ρb in (1.3) is uniquely determined by μ and b, and both μ and ρb are in I D.

Proof Let μ ∈ L0. Then μ̂ has no zero. Indeed, if it has a zero, then there is z0 such that μ̂(z0) = 0 and μ̂(z) ≠ 0 for |z| < |z0|, and hence we have ρ̂b(z0) = 0 for all b > 1 from (1.3) and

1 = Re(1 − ρ̂b(z0)) ≤ 4 Re(1 − ρ̂b(2⁻¹z0)) = 4 Re(1 − μ̂(2⁻¹z0)/μ̂(2⁻¹b⁻¹z0)) → 0

as b ↓ 1, which is absurd. Here we have used the fact that, for any μ ∈ P,

Re(1 − μ̂(2z)) = ∫ (1 − cos⟨2z, x⟩) μ(dx) = 2 ∫ (1 − cos²⟨z, x⟩) μ(dx) ≤ 4 ∫ (1 − cos⟨z, x⟩) μ(dx) = 4 Re(1 − μ̂(z)).


It follows from μ̂(z) ≠ 0 for all z that ρ̂b (and hence ρb itself) in (1.3) is uniquely determined by μ and b.
Now let us see that μ and ρb are in I D. Let Z1, Z2, . . . be independent random variables with μ̂_{Zk}(z) = ρ̂_{(k+1)/k}((k+1)z) = μ̂((k+1)z)/μ̂(kz). Then Sn = Σ_{k=1}^{n} Zk satisfies

E[e^{i⟨z, n⁻¹Sn⟩}] = ∏_{k=1}^{n} μ̂((k+1)n⁻¹z)/μ̂(kn⁻¹z) = μ̂((n+1)n⁻¹z)/μ̂(n⁻¹z) → μ̂(z)

as n → ∞. We see that {n⁻¹Zk : k = 1, . . . , n; n ∈ N} is a null array, since

max_{1≤k≤n} |μ̂((k+1)n⁻¹z)/μ̂(kn⁻¹z) − 1| = max_{1≤k≤n} |μ̂((k+1)n⁻¹z) − μ̂(kn⁻¹z)| / |μ̂(kn⁻¹z)| → 0,

which is (1.2). Hence μ ∈ K(P) ⊂ I D with bn = n⁻¹ and cn = 0.
In order to prove ρb ∈ I D, let mj and nj be positive integers such that mj < nj, nj/mj → b, and mj → ∞ as j → ∞. Let

Wn = n⁻¹Sn,   Uj = nj⁻¹ Σ_{k=1}^{mj} Zk,   Vj = nj⁻¹ Σ_{k=mj+1}^{nj} Zk.

Then μWn → μ, μ̂_{Wnj} = μ̂_{Uj} μ̂_{Vj}, and Uj = mj nj⁻¹ Wmj. Thus

|μ̂_{Uj}(z) − μ̂(mj nj⁻¹ z)| = |μ̂_{Wmj}(mj nj⁻¹ z) − μ̂(mj nj⁻¹ z)| ≤ max_{|w|≤|z|} |μ̂_{Wmj}(w) − μ̂(w)| → 0

as j → ∞ and hence μ̂_{Uj}(z) → μ̂(b⁻¹z). It follows that μ̂_{Vj}(z) → ρ̂b(z) for each z. Since {nj⁻¹Zk : k = mj + 1, . . . , nj; j ∈ N} is a null array, we see that ρb ∈ I D, using Theorem 1.10. □

Definition 1.14 For m = 1, 2, 3, . . . , Lm = Lm(Rd) is recursively defined as follows: μ ∈ Lm if and only if for every b > 1 there is ρb ∈ Lm−1 such that μ̂(z) = μ̂(b⁻¹z) ρ̂b(z). Sometimes μ ∈ Lm is called m+1 times selfdecomposable in view of Definition 1.11, and this naming is made more precise in Theorem 2.29 proved later.

It is immediate that L0 ⊃ Lm for all m ≥ 1. Next, we prove that these classes form a nested sequence; thus, intersection over all Lm will give the limiting class.

Proposition 1.15 I D ⊃ L0 ⊃ L1 ⊃ L2 ⊃ · · · .

Proof We already have I D ⊃ L0 ⊃ L1. Suppose that Lm ⊃ Lm+1. Then Lm+1 ⊃ Lm+2 follows from the definition. □
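As a concrete check of Definition 1.11 (an illustration, not from the text): for the exponential distribution Exp(1), with μ̂(z) = 1/(1 − iz), the factor ρ̂b(z) = μ̂(z)/μ̂(b⁻¹z) is the characteristic function of the mixture b⁻¹δ0 + (1 − b⁻¹)Exp(1), so Exp(1) is in L0. A numerical sketch:

```python
# Characteristic function of Exp(1): mu_hat(z) = 1 / (1 - i z).
mu_hat = lambda z: 1.0 / (1.0 - 1j * z)

# For each b > 1 the factor rho_hat_b(z) = mu_hat(z) / mu_hat(z / b) should itself
# be a characteristic function; for Exp(1) it is that of the explicit mixture
# (1/b) * delta_0 + (1 - 1/b) * Exp(1).
mix_hat = lambda z, b: 1.0 / b + (1.0 - 1.0 / b) * mu_hat(z)

for b in (1.5, 2.0, 10.0):
    for z in (-3.0, -0.7, 0.0, 0.7, 3.0):
        lhs = mu_hat(z) / mu_hat(z / b)
        assert abs(lhs - mix_hat(z, b)) < 1e-12
print("Exp(1) satisfies (1.3) with an explicit rho_b")
```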


Definition 1.16 L∞ = L∞(Rd) = ∩_{m=0}^{∞} Lm(Rd).

Remark 1.17 Trivial distributions δc on Rd belong to L∞, since δ̂c(z) = δ̂c(b⁻¹z) δ̂_{(1−b⁻¹)c}(z) for all b > 1.

The classes Lm have the following properties.

Proposition 1.18 Let m ∈ {0, 1, 2, . . . , ∞}.
(i) If μ1 and μ2 are in Lm, then μ1 ∗ μ2 ∈ Lm.
(ii) If μn ∈ Lm and μn → μ, then μ ∈ Lm.
(iii) If μ1 = L(X) ∈ Lm and μ2 = L(aX + c) with a ∈ R and c ∈ Rd, then μ2 ∈ Lm.
(iv) If μ ∈ Lm, then, for any t ∈ [0, ∞), μt∗ ∈ Lm.

Proof Let us define L−1 = I D and prove (i)–(iv) for m = −1, 0, 1, 2, . . . by induction. The assertions (i)–(iv) for L−1 are shown in Proposition 1.5. Now we assume that, for a given m ≥ 0, (i)–(iv) are true with m replaced by m − 1 and show (i)–(iv) for Lm. For μj in L0, the corresponding ρb in (1.3) is denoted by ρj,b.
(i) If μ1, μ2 ∈ Lm, then ρ1,b, ρ2,b ∈ Lm−1 and ρb for μ1 ∗ μ2 equals ρ1,b ∗ ρ2,b, which is in Lm−1.
(ii) If μn ∈ Lm and μn → μ, then ρn,b ∈ Lm−1 and

ρ̂n,b(z) = μ̂n(z)/μ̂n(b⁻¹z) → μ̂(z)/μ̂(b⁻¹z),

which is continuous in z and is the characteristic function of some ρb ∈ P. The induction hypothesis implies ρb ∈ Lm−1. It follows that μ ∈ Lm.
(iii) Let μ1 = L(X) ∈ Lm and μ2 = L(aX + c). Then ρ1,b ∈ Lm−1 and

μ̂2(z) = μ̂1(az) e^{i⟨c,z⟩} = μ̂1(b⁻¹az) ρ̂1,b(az) e^{i⟨c,z⟩}
  = μ̂1(b⁻¹az) e^{i⟨c, b⁻¹z⟩} e^{i⟨(1−b⁻¹)c, z⟩} ρ̂1,b(az)
  = μ̂2(b⁻¹z) δ̂_{(1−b⁻¹)c}(z) ρ̂1,b(az).

We see μ2 ∈ Lm since δ̂_{(1−b⁻¹)c}(z) ρ̂1,b(az) is the characteristic function of a distribution in Lm−1.
(iv) Let μ ∈ Lm. Then ρb ∈ Lm−1. It follows from (1.3) that (log μ̂)(z) = (log μ̂)(b⁻¹z) + (log ρ̂b)(z). Hence μ̂(z)t = μ̂(b⁻¹z)t ρ̂b(z)t. Since ρbt∗ ∈ Lm−1, we obtain μt∗ ∈ Lm.
The assertions (i)–(iv) for m = ∞ are consequences of the assertions for m < ∞. □

Next we introduce stable distributions and prove that they are contained in the class L∞. In the next section it will be shown that L∞ is the smallest class closed under convolution and convergence that contains the stable distributions.


Definition 1.19 A distribution μ ∈ P is called stable if, for each n ∈ N, there are an ∈ (0, ∞) and cn ∈ Rd satisfying

μ̂(z)ⁿ = μ̂(an z) e^{i⟨cn, z⟩},   z ∈ Rd.   (1.4)

A distribution μ ∈ P is called strictly stable if μ is stable and (1.4) holds with cn = 0. The class of stable or strictly stable distributions, respectively, on Rd is denoted by S = S(Rd) or S⁰ = S⁰(Rd). An underlying concept in this definition is type equivalence. Two distributions L(X) and L(Y) are said to be type equivalent if L(Y) = L(aX + c) for some a > 0 and c ∈ Rd.

Remark 1.20 If μ ∈ S, then μ ∈ I D, because

μ̂(z) = (μ̂(an⁻¹z) e^{−i⟨an⁻¹n⁻¹cn, z⟩})ⁿ.

If μ ∈ S, then μs∗ ∈ S for all s ≥ 0. Indeed, rewrite (1.4) as n(log μ̂)(z) = (log μ̂)(an z) + i⟨cn, z⟩ and then, multiplying by s, obtain n(log μ̂s)(z) = (log μ̂s)(an z) + i⟨scn, z⟩. Similarly, if μ ∈ S⁰, then μs∗ ∈ S⁰ for all s ≥ 0.

For stable distributions the following Propositions 1.21 and 1.22 hold. Their detailed proofs are found in Section 13 and the solution of Exercise E18.4 of [93].³

³ The proof of Proposition 1.22 uses the Lévy–Khintchine representation in Theorem 1.28. But Theorem 1.28 is shown independently of the theory of stable distributions.

Proposition 1.21 A distribution μ ∈ P is in S if and only if μ ∈ I D and, for every t ∈ (0, ∞), there exist at ∈ (0, ∞) and ct ∈ Rd such that

μ̂t(z) = μ̂(at z) e^{i⟨ct, z⟩},   z ∈ Rd.   (1.5)

A distribution μ ∈ P is in S⁰ if and only if μ ∈ I D and, for every t ∈ (0, ∞), there is at ∈ (0, ∞) such that

μ̂t(z) = μ̂(at z),   z ∈ Rd.   (1.6)

Proposition 1.22 If μ ∈ S and μ is not trivial, then at and ct in (1.5) are unique for every t ∈ (0, ∞) and there is α ∈ (0, 2] such that at = t^{1/α} for all t ∈ (0, ∞). If μ ∈ S⁰ and μ ≠ δ0, then at in (1.6) is unique for every t ∈ (0, ∞) and there is α ∈ (0, 2] such that at = t^{1/α} for all t ∈ (0, ∞).
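The scaling in Propositions 1.21 and 1.22 can be checked numerically for two standard strictly stable laws (an illustration, not from the text): the standard Gaussian, with μ̂(z) = e^{−z²/2} and α = 2, and the standard Cauchy, with μ̂(z) = e^{−|z|} and α = 1; in both cases (1.6) holds with at = t^{1/α}:

```python
import math

# Strict alpha-stability in the form (1.6): mu_hat(z)**t == mu_hat(t**(1/alpha) * z).
gauss = lambda z: math.exp(-z * z / 2.0)   # standard Gaussian cf, alpha = 2
cauchy = lambda z: math.exp(-abs(z))       # standard Cauchy cf,  alpha = 1

for t in (0.3, 1.0, 2.5):
    for z in (-2.0, -0.5, 0.1, 1.7):
        assert abs(gauss(z) ** t - gauss(math.sqrt(t) * z)) < 1e-12
        assert abs(cauchy(z) ** t - cauchy(t * z)) < 1e-12
print("scaling relation (1.6) holds with a_t = t**(1/alpha)")
```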


Definition 1.23⁴ Let α ∈ (0, 2]. A distribution μ ∈ P(Rd) is called α-stable or stable with index α if μ ∈ I D and, for every t ∈ (0, ∞), there is ct ∈ Rd such that

μ̂t(z) = μ̂(t^{1/α} z) e^{i⟨ct, z⟩}.   (1.7)

A distribution μ ∈ P(Rd) is called strictly α-stable or strictly stable with index α if μ ∈ I D and, for every t ∈ (0, ∞),

μ̂t(z) = μ̂(t^{1/α} z).   (1.8)

The class of α-stable or strictly α-stable distributions, respectively, on Rd is denoted by Sα = Sα(Rd) or S⁰α = S⁰α(Rd). A distribution μ ∈ P(Rd) is called degenerate if there are c ∈ Rd and a linear subspace V of Rd with dim(V) ≤ d − 1 such that μ(c + V) = 1. Otherwise μ is called nondegenerate. The class S2 is the class of (possibly degenerate) Gaussian distributions.

⁴ This definition of α-stability and strict α-stability is slightly different from that of [93], where trivial distributions do not have index in stability and δ0 does not have index in strict stability. Thus neither (1.9) nor (1.10) is true if Sα and S⁰α are, respectively, the classes of α-stable and strictly α-stable distributions in the sense of [93].

Proposition 1.24 If α, α′ ∈ (0, 2] and α ≠ α′, then

Sα ∩ Sα′ = {δc : c ∈ Rd}   and   S⁰α ∩ S⁰α′ = {δ0}.   (1.9)

Any trivial distribution is strictly 1-stable. For every α ≠ 1 and c ≠ 0, we have δc ∉ S⁰α. We have

S = ∪_{α∈(0,2]} Sα   and   S⁰ = ∪_{α∈(0,2]} S⁰α.   (1.10)

If μ ∈ Sα, then μs∗ ∈ Sα for all s ≥ 0. If μ ∈ S⁰α, then μs∗ ∈ S⁰α for all s ≥ 0.

Proof This follows from Proposition 1.22 and Definition 1.23. The assertion on μs∗ is proved similarly to Remark 1.20. □

Proposition 1.25 L∞ ⊃ S.

Proof Let μ ∈ S. Then, for some α ∈ (0, 2], (1.7) holds. Given b > 1, let t = b^{−α} < 1. Then

μ̂(z) = μ̂(z)^{1−t} μ̂(b⁻¹z) e^{i⟨c, z⟩} = μ̂(b⁻¹z) ρ̂b(z),

where c = ct and ρb is given by ρ̂b(z) = μ̂(z)^{1−t} e^{i⟨c, z⟩}. Hence μ ∈ L0. Now, by Proposition 1.18 and Remark 1.17, ρb ∈ L0. The last arguments are recursively applied to yield


ρb ∈ L0 ⇒ μ ∈ L1 ⇒ ρb ∈ L1 ⇒ μ ∈ L2 ⇒ ρb ∈ L2 ⇒ μ ∈ L3 ⇒ · · · .

Thus μ ∈ Lm for every m ≥ 0. □
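The recursion in this proof can be made concrete for the standard Cauchy law (an illustration, not from the text): with α = 1 and t = b⁻¹, the factor ρ̂b is again a rescaled Cauchy characteristic function, so each step of the chain is explicit:

```python
import math

# Standard Cauchy cf: mu_hat(z) = exp(-|z|), strictly 1-stable, so c_t = 0.
# With t = b**(-alpha) = 1/b, the factorization of Proposition 1.25 reads
#   mu_hat(z) = mu_hat(z/b) * mu_hat(z)**(1 - t),
# and rho_hat_b(z) = mu_hat(z)**(1-t) = exp(-(1 - 1/b)|z|) is again Cauchy-type.
mu_hat = lambda z: math.exp(-abs(z))

for b in (1.2, 2.0, 7.0):
    t = 1.0 / b
    for z in (-4.0, -1.0, 0.5, 2.3):
        assert abs(mu_hat(z) - mu_hat(z / b) * mu_hat(z) ** (1 - t)) < 1e-12
print("rho_b for the Cauchy law is again stable, so the recursion applies at every step")
```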

Using the operation K of Definition 1.12, we characterize the classes Lm for m = 0, 1, 2, . . . , ∞ as classes of limit distributions. This is a main theorem in this section.

Theorem 1.26
(i) L0 = K(P) = K(I D).
(ii) Lm = K(Lm−1) for m = 1, 2, . . .
(iii) L∞ = K(L∞) and L∞ is the greatest class Q that satisfies Q = K(Q).

Proof (i) Proposition 1.13 and its proof show that L0 ⊂ K(I D) ⊂ K(P). Hence it is enough to show that K(P) ⊂ L0. Suppose that μ ∈ K(P) with Zk, bn, and cn being those of Definition 1.12. Then μ ∈ I D. If μ is trivial, then μ ∈ L0, as is shown by Remark 1.17. So we assume that μ is non-trivial. Then we have bn → 0 and bn/bn+1 → 1 as n → ∞; see Lemma 15.4 of [93] for details. Thus log bn → −∞ and log bn − log bn+1 → 0. Then for every b > 1 there are sequences of positive integers {mj}, {nj} going to infinity such that mj < nj and log bmj − log bnj → log b, that is, bmj/bnj → b. Let

Wn = bn Σ_{k=1}^{n} Zk + cn,
Uj = bnj Σ_{k=1}^{mj} Zk + bnj bmj⁻¹ cmj,
Vj = bnj Σ_{k=mj+1}^{nj} Zk + cnj − bnj bmj⁻¹ cmj.

Then μWn → μ, Wnj = Uj + Vj, and

μ̂_{Wnj}(z) = μ̂_{Uj}(z) μ̂_{Vj}(z)   (1.11)

by independence of {Zk}. Since Uj = bnj bmj⁻¹ Wmj, we have μ̂_{Uj}(z) → μ̂(b⁻¹z) as j → ∞, as in the last part of the proof of Proposition 1.13. Now it follows from (1.11) that, for each z, μ̂_{Vj}(z) tends to μ̂(z)/μ̂(b⁻¹z), which is continuous in z. Hence there is ρb ∈ P such that ρ̂b(z) = μ̂(z)/μ̂(b⁻¹z). Therefore μ ∈ L0.
(ii) Let m be a positive integer. Let μ ∈ Lm. Then ρb in (1.3) is in Lm−1. Let us show that μ ∈ K(Lm−1). Since μ ∈ I D, μ̂ has no zero and ρ̂b(z) = μ̂(z)/μ̂(b⁻¹z) for b > 1. Let {Zk} be independent random variables with μ̂_{Zk}(z) = ρ̂_{(k+1)/k}((k+1)z). Then L(Zk) ∈ Lm−1 by Proposition 1.18 (iii). The same argument as in the proof of Proposition 1.13 now shows that μ ∈


K(Lm−1) with bn = n⁻¹ and cn = 0 in Definition 1.12. The converse part of (ii), saying that K(Lm−1) ⊂ Lm, is proved in almost the same way as K(P) ⊂ L0 in (i); simply note that L(Zk) ∈ Lm−1 implies that L(Vj) ∈ Lm−1 and ρb ∈ Lm−1 by the use of Proposition 1.18.
(iii) We have Lm = K(Lm−1) ⊃ K(L∞) for all m and, in consequence, L∞ ⊃ K(L∞). Conversely, if μ ∈ L∞, then μ ∈ K(L∞). Indeed, we have ρb ∈ L∞. Let us take Zk as in the proof of the converse part of (ii) and see that L(n⁻¹ Σ_{k=1}^{n} Zk) tends to μ, where the distribution of Zk is in L∞ and {Zk/n} is a null array. This means that μ ∈ K(L∞). Hence L∞ = K(L∞). Finally, let Q = K(Q). Then Q ⊂ K(P) = L0. Now we obtain Q = K(Q) ⊂ K(L0) = L1. Repetition of this argument using (ii) yields Q ⊂ Lm for every m. Therefore Q ⊂ L∞. □

Characterization of S as a class of limit distributions is as follows.

Theorem 1.27 A distribution μ ∈ P is stable if and only if μ ∈ K(Q) for some Q consisting of one element. In this case {Zk} in Definition 1.12 are independent identically distributed random variables on Rd and K(Q) is the class of distributions type equivalent to μ.

The "only if" part of this theorem is a direct consequence of the definition of stability. For the "if" part, see Theorem 15.7 of [93] or, for d = 1, [24, 26, 58].
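Theorem 1.27 is illustrated by the classical central limit theorem. The following numerical sketch (not from the text) shows the characteristic functions of normalized sums of iid Uniform(−1, 1) variables, a one-element Q in Definition 1.12, approaching the Gaussian (2-stable) limit:

```python
import math

# cf of Uniform(-1,1): phi(z) = sin(z)/z.  With b_n = 1/sqrt(n/3) (Var = 1/3),
# the cf of b_n * (Z_1 + ... + Z_n) is phi(b_n z)**n, which tends to exp(-z^2/2).
phi = lambda z: math.sin(z) / z if z != 0 else 1.0

def normalized_sum_cf(z: float, n: int) -> float:
    bn = 1.0 / math.sqrt(n / 3.0)
    return phi(bn * z) ** n

z = 1.2
errors = [abs(normalized_sum_cf(z, n) - math.exp(-z * z / 2)) for n in (10, 100, 10000)]
print(errors)  # decreasing toward 0
```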

1.2 Characterization in Lévy–Khintchine Representation

One of the basic results on infinitely divisible distributions is the following Lévy–Khintchine representation of their characteristic functions ([93] Theorem 8.1).

Theorem 1.28 If μ ∈ I D(Rd), then

μ̂(z) = exp( −(1/2)⟨z, Az⟩ + i⟨γ, z⟩ + ∫_{Rd} (e^{i⟨z,x⟩} − 1 − i⟨z, x⟩ 1_{|x|≤1}(x)) ν(dx) ),   (1.12)

where

A is a symmetric nonnegative-definite d × d matrix,   (1.13)
ν is a measure on Rd with ν({0}) = 0 and ∫_{Rd} (|x|² ∧ 1) ν(dx) < ∞,   (1.14)
γ is a vector in Rd,   (1.15)

and A, ν, and γ are uniquely determined by μ. Conversely, if A, ν, and γ satisfy (1.13), (1.14), and (1.15), then there exists a unique μ ∈ I D satisfying (1.12).
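As a sanity check of (1.12) (an illustration, not from the text): the Poisson distribution with mean c has A = 0, ν = cδ1, and γ = c; the integrand at the single atom x = 1 is e^{iz} − 1 − iz, the compensator cancels against γ, and (1.12) reduces to exp(c(e^{iz} − 1)), which matches the characteristic function computed directly from the probability mass function:

```python
import cmath
import math

# Right-hand side of (1.12) for d = 1 with A = 0, nu = c*delta_1, gamma = c.
c = 1.3
lk = lambda z: cmath.exp(1j * c * z + c * (cmath.exp(1j * z) - 1 - 1j * z))

def cf_from_pmf(z, terms=80):
    # direct characteristic function: sum_n e^{izn} e^{-c} c^n / n!
    return sum(cmath.exp(1j * z * n) * math.exp(-c) * c ** n / math.factorial(n)
               for n in range(terms))

for z in (-2.0, 0.0, 0.9, 3.1):
    assert abs(lk(z) - cf_from_pmf(z)) < 1e-10
print("(1.12) with (A, nu, gamma) = (0, c*delta_1, c) gives the Poisson characteristic function")
```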


Definition 1.29 Given μ ∈ I D, we call A and ν in Theorem 1.28 the Gaussian covariance matrix of μ and the Lévy measure of μ, respectively. (If d = 1, then we often call A the Gaussian variance of μ.) We call the triplet (A, ν, γ) the generating triplet of μ; sometimes we simply call it the triplet of μ. In case ν satisfies ∫_{Rd} (|x| ∧ 1) ν(dx) < ∞, γ⁰ ∈ Rd defined by

γ⁰ = γ − ∫_{|x|≤1} x ν(dx)   (1.16)

is called the drift of μ and (1.12) is written as

μ̂(z) = exp( −(1/2)⟨z, Az⟩ + i⟨γ⁰, z⟩ + ∫_{Rd} (e^{i⟨z,x⟩} − 1) ν(dx) ).   (1.17)

A distribution μ ∈ I D satisfying (1.17) with A = 0, γ⁰ = 0, and ν(Rd) < ∞ is called compound Poisson. A distribution μ ∈ I D is Gaussian if ν = 0; μ ∈ I D is non-Gaussian if ν ≠ 0; μ ∈ I D is purely non-Gaussian if A = 0 and ν ≠ 0. For d = 1, μ ∈ I D(R) is Poisson with parameter c > 0 if A = 0, γ⁰ = 0, and ν = cδ1, which means μ({n}) = e^{−c} cⁿ/(n!) for n = 0, 1, . . .. Notice that a Gaussian distribution on Rd is possibly degenerate, that is, supported on some (d − 1)-dimensional hyperplane in Rd. Here a 0-dimensional hyperplane means a one-point set.

Remark 1.30 Theorem 1.28 shows that, if μ ∈ I D, then

(log μ̂)(z) = −(1/2)⟨z, Az⟩ + i⟨γ, z⟩ + ∫_{Rd} g(z, x) ν(dx),   (1.18)

where

g(z, x) = e^{i⟨z,x⟩} − 1 − i⟨z, x⟩ 1_{|x|≤1}(x).   (1.19)

Indeed, since g(z, x) is continuous in z and satisfies g(0, x) = 0 and

|g(z, x)| ≤ (1/2)|z|²|x|² 1_{|x|≤1}(x) + 2 · 1_{|x|>1}(x),   (1.20)

the dominated convergence theorem shows that the right-hand side of (1.18) is continuous in z and vanishes at z = 0.

Definition 1.31 Let {Xt : t ≥ 0} be a stochastic process on Rd defined on a probability space (Ω, F, P). It is called a Lévy process if it satisfies the following:
(i) X0 = 0 almost surely.
(ii) For any 0 ≤ t0 < · · · < tn, the random variables Xt0, Xt1 − Xt0, . . . , Xtn − Xtn−1 are independent.
(iii) The distribution of Xs+t − Xs does not depend on s.


(iv) For every t ≥ 0 and ε > 0, P(|Xs − Xt| ≥ ε) → 0 as s → t.
(v) Xt(ω) is right continuous with left limits in t, almost surely.⁵

We call {Xt} an additive process if it satisfies (i), (ii), (iv), and (v). Further, we call {Xt} an additive process in law if it satisfies (i), (ii), and (iv). We call (ii) and (iii), respectively, the independent increment property and the stationary increment property. Sometimes we refer to (iv) as stochastic continuity.
If {Xt} is a Lévy process on Rd, then L(Xt) is in I D(Rd) for every t ≥ 0 and L(Xt) = μt∗, where μ = L(X1). Further, any joint distribution L((Xtj)1≤j≤n) for n ∈ N and 0 ≤ t1 < t2 < · · · < tn is infinitely divisible on Rnd and determined by μ.⁶ To see this, notice that Xtj = Xt1 + (Xt2 − Xt1) + · · · + (Xtj − Xtj−1), whose distribution is determined by {μt∗ : t ≥ 0}, and that (Xtj)1≤j≤n is a linear transformation of (Xtj − Xtj−1)1≤j≤n with t0 = 0. Conversely, for every μ ∈ I D(Rd) there exists a Lévy process {Xt} such that L(X1) = μ. This correspondence between I D(Rd) and the class of Lévy processes on Rd is one-to-one and onto if two Lévy processes identical in law are identified. We use the words generating triplet, Gaussian covariance matrix, Lévy measure, and drift of a Lévy process {Xt} for those of L(X1). A Lévy process {Xt} is called selfdecomposable, of class Lm, stable, α-stable, strictly stable, strictly α-stable, Gaussian, purely non-Gaussian, Poisson, or compound Poisson, respectively, if so is L(Xt) for all t ≥ 0. Evidently, this condition is equivalent to saying "if so is L(Xt) for some t > 0". For a Lévy process {Xt} on Rd, it is known that Xt(ω) is continuous in t almost surely if and only if it is Gaussian. A Lévy process {Xt} on Rd with generating triplet (I, 0, 0) is called the Brownian motion on Rd. Here I is the d × d identity matrix.
A basic fact on additive processes is the following.

Theorem 1.32
(i) Let {Xt} be an additive process on Rd. Then L(Xs) and L(Xt − Xs) are in I D(Rd) for all 0 ≤ s ≤ t.
(ii) Let {Xt} and {X′t} be additive processes on Rd such that Xt =ᵈ X′t for every t ≥ 0. Then {Xt} =ᵈ {X′t}. Namely, for any n ∈ N and any choice of 0 ≤ t1 < t2 < · · · < tn, (Xtj)1≤j≤n =ᵈ (X′tj)1≤j≤n. Moreover, L((Xtj)1≤j≤n) ∈ I D(Rnd).

Proof (i) We can prove L(Xs) ∈ I D as a consequence of Theorem 1.10. To see L(Xt − Xs) ∈ I D, note that {Xt+s − Xs : t ≥ 0} is an additive process. (ii) This is shown similarly to the case of Lévy processes above. □

⁵ We say that a statement S(ω) involving ω is true almost surely (or a.s.) if there is Ω0 ∈ F with P[Ω0] = 1 such that S(ω) is true for all ω ∈ Ω0.
⁶ Cone-parameter Lévy processes in Chap. 4 do not have these properties (see Example 4.36).


A remarkable result on path behaviour of an additive process is the Lévy–Itô decomposition, which clarifies the meanings of the Lévy measure and the Gaussian covariance matrix ([93] Theorems 19.2, 19.3).

The totality of compound Poisson distributions has the following property, which is useful in the theory of infinitely divisible distributions and Lévy processes.

Theorem 1.33 The totality of compound Poisson distributions on R^d is dense in ID(R^d) in the topology of weak convergence.

See [93] Corollary 8.8 for a proof.

Now let us turn to selfdecomposable distributions. Let S = {ξ ∈ R^d : |ξ| = 1}, the unit sphere in R^d. The following result is proved in [93] Theorem 15.10. Notice that selfdecomposability imposes no restriction on A and γ in the generating triplet (A, ν, γ).

Theorem 1.34 Let μ ∈ ID(R^d) with Lévy measure ν. Then μ ∈ L_0 if and only if

ν(B) = ∫_S λ(dξ) ∫_0^∞ 1_B(rξ) (k_ξ(r)/r) dr,  B ∈ B(R^d),  (1.21)

where λ is a finite measure on S and k_ξ(r) is nonnegative, decreasing in r ∈ (0, ∞), and measurable in ξ ∈ S.

Remark 1.35 The measure λ and the function k_ξ(r) in Theorem 1.34 are not uniquely determined by μ ∈ L_0. Suppose that ν ≠ 0. Then we can choose λ and k_ξ(r) satisfying the additional conditions that λ(S) = 1, k_ξ(r) is right continuous in r, and

∫_0^∞ (r² ∧ 1) k_ξ(r) (dr/r) = c,  (1.22)

where c = ∫ (|x|² ∧ 1) ν(dx) > 0. If both λ, k_ξ(r) and λ′, k′_ξ(r) satisfy (1.21) and these additional conditions, then λ′ = λ and k′_ξ(r) = k_ξ(r) for λ-a.e. ξ. Henceforth we always choose λ and k_ξ(r) satisfying these additional conditions and call λ the spherical component of ν and k_ξ(r) the k-function of ν (or μ). Define

h_ξ(u) = k_ξ(e^u),  u ∈ R.  (1.23)

We call h_ξ(u) the h-function of ν (or μ). The equality (1.22) is written as

∫_{−∞}^∞ (e^{2u} ∧ 1) h_ξ(u) du = c.  (1.24)

In the remaining case ν = 0 (that is, μ is Gaussian), we can choose k_ξ(r) = 0 in (1.21), and the k-function and the h-function of μ are defined to be 0. Now we have, for any μ ∈ L_0,

μ̂(z) = exp( −½⟨z, Az⟩ + i⟨γ, z⟩ + ∫_S λ(dξ) ∫_0^∞ ( e^{i⟨z,rξ⟩} − 1 − i⟨z, rξ⟩ 1_{(0,1]}(r) ) (k_ξ(r)/r) dr ).  (1.25)

In one dimension (d = 1) we have S = {+1, −1}, λ({+1}) + λ({−1}) = 1, and k-functions k_ξ(r) for ξ = +1, −1. Hence, letting k(x) = λ({+1}) k_{+1}(x) for x > 0 and k(x) = λ({−1}) k_{−1}(−x) for x < 0, we have, for any μ ∈ L_0(R),

μ̂(z) = exp( −½Az² + iγz + ∫_R ( e^{izx} − 1 − izx 1_{[−1,1]}(x) ) (k(x)/|x|) dx ),  (1.26)

with k(x) ≥ 0, decreasing and right continuous on (0, ∞), increasing and left continuous on (−∞, 0), and ∫_{−∞}^∞ (x² ∧ 1)(k(x)/|x|) dx < ∞. This representation is unique. Sometimes we call k(x) in (1.26) the k-function of μ for d = 1.

Remark 1.36 The representation of the Lévy measure in Remark 1.35 is a special case of a polar decomposition. In general, if ν is the Lévy measure of μ ∈ ID(R^d) with ∫_{R^d} (|x|² ∧ 1) ν(dx) = c > 0, then

ν(B) = ∫_S λ(dξ) ∫_0^∞ 1_B(rξ) ν_ξ(dr),  (1.27)

where λ is a probability measure on S, ν_ξ is a σ-finite measure on (0, ∞) with ∫_0^∞ (r² ∧ 1) ν_ξ(dr) = c for each ξ, and ν_ξ is measurable in ξ (that is, ν_ξ(B) is measurable in ξ for each B ∈ B((0, ∞))). Moreover, if both λ, ν_ξ and λ′, ν′_ξ give this representation, then λ′ = λ and ν′_ξ = ν_ξ for λ-a.e. ξ. The proof is an application of the existence (and uniqueness) theorem for conditional distributions.

Definition 1.37 For ε > 0, Δ_ε is the difference operator, Δ_ε f(u) = f(u + ε) − f(u). The iteration of Δ_ε n times is denoted by Δ_ε^n. Hence

Δ_ε^n f(u) = Σ_{j=0}^n (−1)^{n−j} (n choose j) f(u + jε).

Define Δ_ε^0 f = f. For n ∈ Z_+ we say that f(u), u ∈ R, is monotone of order n if (−1)^j Δ_ε^j f ≥ 0 for ε > 0 and j = 0, 1, . . . , n. We say that f(u), u ∈ R, is completely monotone if (−1)^j Δ_ε^j f ≥ 0 for ε > 0 and j ∈ Z_+.

Lemma 1.38
(i) Let n ≥ 1. A function f(u) is monotone of order n if and only if, for all ε > 0 and j = 0, 1, . . . , n − 1, (−1)^j Δ_ε^j f is decreasing.


(ii) Let n ≥ 2. A function f(u) is monotone of order n if and only if f ∈ C^{n−2}, (−1)^j f^{(j)} ≥ 0 for j = 0, 1, . . . , n − 2, and (−1)^{n−2} f^{(n−2)} is decreasing and convex.
(iii) A function f(u) is completely monotone if and only if f ∈ C^∞ and (−1)^j f^{(j)} ≥ 0 for j = 0, 1, . . . .

See Widder [138] pp. 144–151 (1946) for a proof. A consequence of (ii) is that, if f ∈ C^n and (−1)^j f^{(j)} ≥ 0 for j = 0, 1, . . . , n, then f is monotone of order n.

Now we give a characterization of the class L_m(R^d) in terms of h-functions.

Theorem 1.39
(i) Let m ∈ {0, 1, . . . }. Then μ ∈ L_m if and only if μ ∈ L_0 and its h-function h_ξ(u) is monotone of order m + 1 for λ-a.e. ξ.
(ii) μ ∈ L_∞ if and only if μ ∈ L_0 and h_ξ(u) is completely monotone for λ-a.e. ξ.

Proof (i) The assertion is true for m = 0 owing to Theorem 1.34. Let m ≥ 1. We show the validity of the assertion for m, assuming that it is valid for m − 1. Let μ ∈ L_m with Lévy measure ν. If ν = 0, then its h-function 0 is monotone of any order. Let ν ≠ 0. We have μ̂(z) = μ̂(b^{−1}z) ρ̂_b(z) for every b > 1, and it follows from (1.25) that ρ_b has Lévy measure

ν_b(B) = ∫_S λ(dξ) ∫_0^∞ 1_B(rξ) (k_ξ(r) − k_ξ(br)) (dr/r),  B ∈ B(R^d).

Let a_b(ξ) = ∫_0^∞ (r² ∧ 1)(k_ξ(r) − k_ξ(br)) r^{−1} dr. We have 0 < a_b(ξ) < c, where c is as in Remark 1.35. The spherical component λ_b and the h-function h_{b,ξ} of ρ_b are as follows: λ_b(dξ) = c_b^{−1} a_b(ξ) λ(dξ), where c_b is a constant that makes λ_b a probability measure, k_{b,ξ}(r) = c_b (a_b(ξ))^{−1} (k_ξ(r) − k_ξ(br)), and

h_{b,ξ}(u) = k_{b,ξ}(e^u) = c_b (a_b(ξ))^{−1} ( h_ξ(u) − h_ξ(u + log b) ).  (1.28)

Since ρ_b ∈ L_{m−1}, h_{b,ξ} is monotone of order m for λ-a.e. ξ by the induction hypothesis. Hence

(−1)^j ( Δ_ε^j h_ξ(u) − Δ_ε^j h_ξ(u + log b) ) = (−1)^j c_b^{−1} a_b(ξ) Δ_ε^j h_{b,ξ}(u) ≥ 0

for j = 0, 1, . . . , m. Choosing b = e^ε, we see that (−1)^j Δ_ε^j h_ξ(u) ≥ 0 for j = 1, 2, . . . , m + 1, and therefore h_ξ is monotone of order m + 1 for λ-a.e. ξ.

Conversely, suppose that μ ∈ L_0 and h_ξ is monotone of order m + 1 for λ-a.e. ξ. Then, by Lemma 1.38 (i), (−1)^j Δ_ε^j h_ξ(u) is decreasing in u for j = 0, 1, . . . , m. Hence it follows from (1.28) that h_{b,ξ}(u) is monotone of order m. Now we have ρ_b ∈ L_{m−1} from the induction hypothesis. Thus μ ∈ L_m.
(ii) This is an immediate consequence of (i) and the definition of L_∞. □

Next we consider the class L_∞.
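The difference operators of Definition 1.37 make the monotonicity conditions of Theorem 1.39 easy to probe numerically. The following sketch is our illustration (not from the book): it checks (−1)^j Δ_ε^j f ≥ 0 on a sample grid for f(u) = e^{−αu}, which is completely monotone, and for f(u) = e^{−e^u}, the h-function of the Γ-distribution with t = q = 1 treated in Example 1.45 below, which is monotone of order 1 but not of order 2. The grid and tolerances are arbitrary choices.

```python
import math

def delta_n(f, u, eps, n):
    # n-th iterated forward difference: sum_{j=0}^n (-1)^(n-j) C(n,j) f(u + j*eps)
    return sum((-1) ** (n - j) * math.comb(n, j) * f(u + j * eps) for j in range(n + 1))

def monotone_of_order(f, n, grid, epsilons):
    # sample-grid check of (-1)^j Delta_eps^j f >= 0 for j = 0, ..., n
    # (a necessary-condition spot check, not a proof)
    return all((-1) ** j * delta_n(f, u, e, j) >= -1e-12
               for j in range(n + 1) for u in grid for e in epsilons)

grid = [x / 10 for x in range(-30, 31)]
eps = [0.1, 0.5, 1.0]

h_cm = lambda u: math.exp(-0.7 * u)        # e^{-alpha u}: completely monotone
assert monotone_of_order(h_cm, 6, grid, eps)

h_gamma = lambda u: math.exp(-math.exp(u))  # h-function of the Gamma law, t = q = 1
assert monotone_of_order(h_gamma, 1, grid, eps)       # decreasing: order 1 holds
assert not monotone_of_order(h_gamma, 2, grid, eps)   # order 2 fails (cf. Example 1.45)
```

The failure at order 2 is exactly the non-convexity of e^{−e^u} on (−∞, 0) used in Example 1.45.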


Lemma 1.40
(i) If h(u) is a completely monotone function on R, then there is a unique measure Γ on [0, ∞) such that

h(u) = ∫_{[0,∞)} e^{−αu} Γ(dα),  u ∈ R.  (1.29)

Conversely, if a measure Γ on [0, ∞) is such that ∫_{[0,∞)} e^{−αu} Γ(dα) < ∞ for all u ∈ R, then the function h defined by (1.29) is completely monotone on R.
(ii) Let 0 < c < ∞. The function h(u) in (i) satisfies

∫_{−∞}^∞ (e^{2u} ∧ 1) h(u) du = c  (1.30)

if and only if the corresponding measure Γ is concentrated on the open interval (0, 2) and satisfies

∫_{(0,2)} ( 1/α + 1/(2 − α) ) Γ(dα) = c.  (1.31)

(iii) Let h_ξ(u), ξ ∈ S, be completely monotone in u ∈ R for each ξ and let Γ_ξ be the measure corresponding to h_ξ in (i). Then h_ξ is measurable in ξ if and only if Γ_ξ is measurable in ξ.

Proof (i) Suppose that h(u) is completely monotone on R. Then by Bernstein's theorem (see Widder [138] (1946) or Feller [24] (1971)) there is, for each u_0, a unique finite measure Γ_{u_0} on [0, ∞) such that h(u_0 + u) = ∫_{[0,∞)} e^{−αu} Γ_{u_0}(dα) for u ≥ 0. The uniqueness implies that, if u_0 < u_1, then Γ_{u_1}(dα) = e^{−α(u_1−u_0)} Γ_{u_0}(dα). Hence Γ defined by Γ(dα) = e^{αu_0} Γ_{u_0}(dα) does not depend on u_0 and satisfies (1.29). The uniqueness of Γ comes from the uniqueness in Bernstein's theorem. Conversely, if Γ is a measure satisfying ∫_{[0,∞)} e^{−αu} Γ(dα) < ∞ for all u ∈ R, then the function h defined by (1.29) is completely monotone on R, since we can change the order of differentiation and integration to obtain

(−1)^n (d/du)^n h(u) = ∫_{[0,∞)} e^{−αu} α^n Γ(dα) ≥ 0.

(ii) Suppose that (1.30) holds. Then Γ({0}) = 0 since h(u) → 0 as u → ∞. Moreover, we have

∫_{(0,∞)} Γ(dα) ∫_{−∞}^0 e^{−u(α−2)} du = ∫_{−∞}^0 e^{2u} h(u) du < ∞.


Since ∫_{−∞}^0 e^{−u(α−2)} du = ∞ for α ≥ 2, this implies that Γ is concentrated in (0, 2). Now we have (1.31), since

c = ∫_{−∞}^0 e^{2u} h(u) du + ∫_0^∞ h(u) du
  = ∫_{(0,2)} Γ(dα) ( ∫_{−∞}^0 e^{u(2−α)} du + ∫_0^∞ e^{−αu} du )
  = ∫_{(0,2)} ( 1/(2 − α) + 1/α ) Γ(dα).

The "if" part of (ii) is shown similarly.
(iii) The inversion formula for Laplace transforms (see Widder [138] p. 295 (1946) or Feller [24] p. 440 (1971)) says that

∫_0^β Γ_ξ(dα) = lim_{u→∞} Σ_{m≤βu} ((−u)^m / m!) h_ξ^{(m)}(u)  (1.32)

for all β > 0 satisfying Γ_ξ({β}) = 0. Hence, for any α_0 ≥ 0, Γ_ξ([0, α_0]) is the limit of the right-hand side of (1.32) as β ↓ α_0. This shows the "only if" part of the assertion. The "if" part follows from the equation (1.29) for h_ξ. □

Theorem 1.41
(i) If μ ∈ L_∞, then

μ̂(z) = exp( −½⟨z, Az⟩ + i⟨γ, z⟩ + ∫_{(0,2)} Γ(dα) ∫_S λ_α(dξ) ∫_0^∞ ( e^{i⟨z,rξ⟩} − 1 − i⟨z, rξ⟩ 1_{(0,1]}(r) ) dr/r^{1+α} ),  (1.33)

where A is a nonnegative-definite symmetric d × d matrix, γ ∈ R^d, Γ is a measure on the interval (0, 2) satisfying

∫_{(0,2)} ( 1/α + 1/(2 − α) ) Γ(dα) < ∞,  (1.34)

λ_α is a probability measure on S for each α, and λ_α is measurable in α. These A, γ, and Γ are uniquely determined by μ; λ_α is determined by μ up to α of Γ-measure 0.
(ii) Given A, γ, Γ, and λ_α satisfying the conditions above, we can find μ ∈ L_∞ satisfying (1.33).
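Condition (1.34) mirrors (1.31): for a point mass Γ = δ_α the h-function is e^{−αu} and the integral in (1.30) equals 1/α + 1/(2 − α). A small numerical check of this computation (our illustration; plain midpoint sums and truncation levels are arbitrary choices):

```python
import math

def riemann(f, a, b, n=100_000):
    # composite midpoint rule on [a, b]
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def lhs(alpha):
    # ∫_{-inf}^{inf} (e^{2u} ∧ 1) e^{-alpha u} du, split at u = 0 and truncated at ±80
    neg = riemann(lambda u: math.exp((2 - alpha) * u), -80.0, 0.0)  # e^{2u} e^{-alpha u}
    pos = riemann(lambda u: math.exp(-alpha * u), 0.0, 80.0)
    return neg + pos

for alpha in (0.3, 1.0, 1.7):
    assert abs(lhs(alpha) - (1 / alpha + 1 / (2 - alpha))) < 1e-3
```

As α approaches 0 or 2 the value blows up, which is why (1.34) excludes mass at the endpoints.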


Proof (i) Suppose that μ ∈ L_∞ with Lévy measure ν and h-function h_ξ. Let Γ_ξ be the measure in Lemma 1.40 corresponding to h_ξ. Then Γ_ξ is measurable in ξ and concentrated in (0, 2), since h_ξ satisfies (1.24). Thus we have

h_ξ(u) = ∫_{(0,2)} e^{−αu} Γ_ξ(dα),  (1.35)

∫_{(0,2)} ( 1/α + 1/(2 − α) ) Γ_ξ(dα) = c.  (1.36)

Let λ be the spherical component of ν. If ν = 0, then let Γ = 0. If ν ≠ 0, then we can find a measure Γ on (0, 2) satisfying (1.34) and a probability measure λ_α measurable in α such that

∫_{(0,2)} Γ(dα) ∫_S λ_α(dξ) f(α, ξ) = ∫_S λ(dξ) ∫_{(0,2)} Γ_ξ(dα) f(α, ξ)  (1.37)

for every nonnegative measurable function f(α, ξ). In fact, it suffices to apply the existence theorem of conditional distributions to the probability measure given by c^{−1} ( α^{−1} + (2 − α)^{−1} ) λ(dξ) Γ_ξ(dα) on (0, 2) × S. Now we have

∫_{R^d} f(x) ν(dx) = ∫_{(0,2)} Γ(dα) ∫_S λ_α(dξ) ∫_0^∞ f(rξ) r^{−α−1} dr  (1.38)

for every nonnegative measurable function f (x), since (1.21) shows 

ν(B) = ∫_S λ(dξ) ∫_0^∞ 1_B(rξ) (h_ξ(log r)/r) dr,  (1.39)

from which follows 



ν(B) = ∫_S λ(dξ) ∫_0^∞ 1_B(rξ) r^{−1} dr ∫_{(0,2)} r^{−α} Γ_ξ(dα)
  = ∫_S λ(dξ) ∫_{(0,2)} Γ_ξ(dα) ∫_0^∞ 1_B(rξ) r^{−α−1} dr

by use of (1.35) and Fubini's theorem. It follows that (1.38) is valid for every complex-valued ν-integrable function f. Letting f(x) = e^{i⟨z,x⟩} − 1 − i⟨z, x⟩ 1_{(0,1]}(|x|) for each z, we get (1.33). The proof of the uniqueness assertion in (i) will be given after the proof of (ii).
(ii) Suppose that A, γ, Γ, and λ_α are given and satisfy the conditions in (i). Let c be the value of the integral in (1.34). Then we can find a probability measure λ on S and a measure Γ_ξ on (0, 2) measurable in ξ such that (1.36)


and (1.37) are satisfied. Define the function h_ξ by (1.35) and let k_ξ(r) = h_ξ(log r). Define a measure ν by (1.21). Then we can show the equality (1.38) and ∫ (|x|² ∧ 1) ν(dx) = c. Now Theorem 1.28 says that there is μ ∈ ID with generating triplet (A, ν, γ) and Theorem 1.34 says that this μ is in L_0. Moreover, Theorem 1.39 says that μ ∈ L_∞, since h_ξ(u) is completely monotone in u ∈ R by Lemma 1.40. This μ satisfies (1.33).

Let us see the uniqueness of A, γ, Γ, and λ_α in (i). The expression (1.33) says that μ has generating triplet (A, ν, γ) with ν expressible by (1.38). The construction procedure in the proof of (ii) gives λ and h_ξ, which express ν as in (1.39). Hence λ and h_ξ(log r) are determined by ν, as stated in Remark 1.35. The h_ξ determines Γ_ξ. Now Γ and λ_α are determined by λ and Γ_ξ. □

Characterization of α-stable and strictly α-stable distributions on R^d is as follows ([93] Theorems 14.1, 14.3, and 14.7).

Theorem 1.42
(i) Let 0 < α < 2. If μ ∈ S_α, then μ ∈ ID with generating triplet (A, ν, γ) satisfying A = 0 and

ν(B) = c ∫_S λ(dξ) ∫_0^∞ 1_B(rξ) r^{−α−1} dr  for every B ∈ B(R^d),  (1.40)

with a probability measure λ on S and a nonnegative constant c. Conversely, for any λ, c, and γ, there is μ ∈ S_α having generating triplet (0, ν, γ) with ν satisfying (1.40).
(ii) The class S_2 is the totality of Gaussian distributions.
(iii) Let 0 < α ≤ 2 and μ ∈ S_α. Then μ ∈ S_α^0 if and only if one of the following is satisfied:

(a) 0 < α < 1 and γ_0 = 0;
(b) α = 1 and ∫_S ξ λ(dξ) = 0;
(c) 1 < α < 2 and γ + ∫_{|x|>1} x ν(dx) = 0;⁷
(d) α = 2 and γ = 0.

We see from (1.40) that μ ∈ S_α with 0 < α < 2 if and only if μ ∈ L_∞ and, in its representation (1.33), A = 0 and Γ((0, 2) \ {α}) = 0.

Theorem 1.43 The class L_∞ is the smallest class containing S and closed under convolution and convergence.

⁷ If μ ∈ S_α with 1 < α < 2, then ∫_{|x|>1} |x| ν(dx) < ∞, which is shown by (1.40). For any μ ∈ ID, ∫_{|x|>1} |x| ν(dx) < ∞ and ∫_{R^d} |x| μ(dx) < ∞ are equivalent and, in this case, ∫_{R^d} x μ(dx) = γ + ∫_{|x|>1} x ν(dx) ([93] Example 25.12).


Proof It is already shown that the class L_∞ contains S and that it is closed under convolution and convergence (Propositions 1.18 and 1.25). Let Q be a class containing S and closed under convolution and convergence. Let μ ∈ L_∞ and look at the representation (1.33) of μ̂. We will prove that μ ∈ Q. Suppose γ = 0 and A = 0. First, assume that Γ is supported on [ε, 2 − ε] for some positive ε. Let M(dα dξ) = Γ(dα) λ_α(dξ). It is a finite measure on [ε, 2 − ε] × S. Choose M_n such that they converge to M and that each M_n is supported on E_n × S, where E_n is a finite set in [ε, 2 − ε]. We have the expression M_n(dα dξ) = Γ_n(dα) λ_{n,α}(dξ), where Γ_n is supported on E_n. Let, for each z,

f_z(α, ξ) = ∫_0^∞ ( e^{i⟨z,rξ⟩} − 1 − i⟨z, rξ⟩ 1_{(0,1]}(r) ) dr/r^{1+α}

and let μ_n be such that μ̂_n(z) = exp( ∫_{(0,2)} Γ_n(dα) ∫_S λ_{n,α}(dξ) f_z(α, ξ) ). As μ_n is the convolution of a finite number of stable distributions, it belongs to Q. Since f_z(α, ξ) is continuous in (α, ξ), μ̂_n(z) converges to μ̂(z). Hence μ ∈ Q.

Next consider a general μ in L_∞. Restrict Γ to [n^{−1}, 2 − n^{−1}] and let μ_n be the resulting distribution. Then μ_n ∈ Q, as the Gaussian factor of μ_n is in Q. Since ∫_{(0,2)} Γ(dα) ∫_S λ_α(dξ) |f_z(α, ξ)| is finite, μ̂_n(z) tends to μ̂(z). Hence μ ∈ Q. This concludes the proof. □

Remark 1.44 Many properties of selfdecomposable distributions are known. Unimodality of all μ ∈ L_0(R) was proved by Yamazato [143] (1978) after several unsuccessful papers. In general, a measure η on R is called unimodal (with mode c) if η = aδ_c + f(x)dx, where 0 ≤ a ≤ ∞ and f(x) is increasing on (−∞, c) and decreasing on (c, ∞). By the result of Yamazato, unimodality of all μ ∈ S(R) was also established for the first time. Singularity of the density of μ ∈ L_0(R) and its degree of smoothness were determined by Sato and Yamazato [108] (1978). Absolute continuity of non-trivial μ ∈ L_0(R) is easily observed. Indeed, if A ≠ 0, then it follows because μ has a nondegenerate Gaussian factor; if A = 0 and ν ≠ 0, then it follows from absolute continuity of ν combined with ν(R) = ∞. This argument does not work for μ ∈ L_0(R^d) with d ≥ 2, since ν is not always absolutely continuous even if μ is nondegenerate. However, it is shown by Sato [85] (1982) that any nondegenerate μ ∈ L_0(R^d) is absolutely continuous. See [93] for more accounts.

Example 1.45 Let μ_{t,q} be the Γ-distribution on R_+ = [0, ∞) with shape parameter t > 0 and scale parameter q > 0, that is,

μ_{t,q}(dx) = (q^t / Γ(t)) x^{t−1} e^{−qx} 1_{(0,∞)}(x) dx.  (1.41)

When t = 1, μ_{1,q} is called the exponential distribution with parameter q. Let us prove that μ_{t,q} is in L_0(R) but not in L_1(R). This fact was observed in [8] p. 182. The Laplace transform of μ_{t,q} is


L μ_{t,q}(u) = ∫_0^∞ e^{−ux} μ_{t,q}(dx) = (q^t / Γ(t)) ∫_0^∞ x^{t−1} e^{−(u+q)x} dx
  = (q^t / (u+q)^t) · ((u+q)^t / Γ(t)) ∫_0^∞ x^{t−1} e^{−(u+q)x} dx = (1 + u/q)^{−t},  u ≥ 0.

On the other hand,

log(1 + u/q) = ∫_q^{u+q} dy/y = ∫_0^u dy/(q + y) = ∫_0^u dy ∫_0^∞ e^{−(q+y)x} dx
  = ∫_0^∞ e^{−qx} dx ∫_0^u e^{−yx} dy = ∫_0^∞ (1 − e^{−ux}) (e^{−qx}/x) dx.

Hence

L μ_{t,q}(u) = exp( t ∫_0^∞ (e^{−ux} − 1) (e^{−qx}/x) dx ),  u ≥ 0.

Extending this equality to the left half plane {w ∈ C : Re(w) ≤ 0} by analyticity in the interior and continuity up to the boundary, we get, for w = iz with z ∈ R,

μ̂_{t,q}(z) = exp( t ∫_0^∞ (e^{izx} − 1) (e^{−qx}/x) dx ).  (1.42)
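The Frullani-type integral identity behind (1.42) is easy to confirm numerically for real arguments (our illustrative sketch; parameter values, the midpoint rule, and the truncation at 60 are arbitrary choices):

```python
import math

def riemann(f, a, b, n=200_000):
    # composite midpoint rule on [a, b]; midpoints are strictly positive when a = 0
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

t, q, u = 2.5, 1.5, 0.7
# closed form: L mu_{t,q}(u) = (1 + u/q)^(-t)
closed = (1 + u / q) ** (-t)
# exponent: t * ∫_0^∞ (e^{-ux} - 1) e^{-qx} / x dx  (integrand is bounded near 0)
integral = riemann(lambda x: (math.exp(-u * x) - 1) * math.exp(-q * x) / x, 0.0, 60.0)
assert abs(math.exp(t * integral) - closed) < 1e-4
```

The same exponent with e^{−ux} replaced by e^{izx} gives the characteristic function (1.42).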

This is a special case of (1.17). Thus μ_{t,q} ∈ ID with Lévy measure ν_{t,q}(dx) = t x^{−1} e^{−qx} 1_{(0,∞)}(x) dx. It follows from Remark 1.35 that μ_{t,q} ∈ L_0 with k-function k(x) = t e^{−qx} 1_{(0,∞)}(x) for fixed t, q. The h-function defined by h(u) = k(e^u), u ∈ R, equals t e^{−q e^u}. Then

h′(u) = −tq e^u e^{−q e^u},   h″(u) = t ( q² e^{2u} − q e^u ) e^{−q e^u}.

Note that h″(u) < 0 for u < −log q. The h-function is not monotone of order 2 by Lemma 1.38 (ii) because it is not convex on (−∞, −log q). Therefore, by Theorem 1.39 (i), μ_{t,q} ∉ L_1.

The Lévy process {X_t} on R with L(X_1) being the exponential distribution μ_{1,q} is called the Γ-Lévy process or Γ-process with parameter q. This process satisfies L(X_t) = μ_{t,q}, which follows from (1.42). Thus the time parameter and the shape parameter coincide. It is a selfdecomposable process but not a process of class L_1.

Example 1.46 The distribution

μ(dx) = c_0 exp(abx − c e^{ax}) dx

(1.43)


on R with positive parameters a, b, c and c_0 = a c^b / Γ(b) is discussed in Linnik and Ostrovskii [57] p. 52 and p. 361 (1977) (see also [93] E18.19). Here we have made a change of parametrization. It is shown that μ is infinitely divisible with Gaussian variance A = 0 and Lévy measure

ν(dx) = |x|^{−1} e^{abx} (1 − e^{ax})^{−1} 1_{(−∞,0)}(x) dx.  (1.44)

It follows that μ ∈ L_0. The support of μ is the whole line, but the positive tail of μ is much lighter than the negative tail. (In general the support of a measure η on R^d is defined to be the set of x such that η(G) > 0 for every open set G containing x.) Another such example is an α-stable distribution with α ∈ [1, 2) and Lévy measure |x|^{−α−1} 1_{(−∞,0)}(x) dx. These belong to the class of infinitely divisible distributions that some people call spectrally negative.

The distribution μ has an interesting connection with the Γ-process {Z_t} with parameter q. Since

P[ log Z_t ≤ u ] = (q^t / Γ(t)) ∫_{−∞}^u exp(tx − q e^x) dx,

log Z_t has distribution μ with a = 1, b = t, and c = q. It is proved by Akita and Maejima [1] (2002) that L(log Z_t) is in L_1 for t ≥ 1/2 and in L_2 for t ≥ 1. The parameter a of μ represents scaling. Indeed, if X is a random variable with distribution μ, then aX has distribution (c_0/a) exp(bx − c e^x) dx. Therefore μ ∈ L_1 for b ≥ 1/2 and μ ∈ L_2 for b ≥ 1. We have μ ∉ L_∞ for any choice of a, b, c. The proof is as follows. From (1.43) we see that ∫ |x|^α μ(dx) < ∞ for any α > 0, which is equivalent to ∫_{|x|>1} |x|^α ν(dx) < ∞ ([93] Theorem 25.3). But an arbitrary non-Gaussian μ ∈ L_∞ with Lévy measure ν satisfies ∫ |x|^α ν(dx) = ∞ whenever α ∈ (0, 2) satisfies Γ((0, α]) > 0 for the corresponding measure Γ in (1.38).

Example 1.47 The distribution

μ(dx) = (π cosh x)^{−1} dx

(1.45)

was found by Lévy as the distribution of the stochastic area of two-dimensional Brownian motion. He showed that μ is infinitely divisible with generating triplet (0, ν, 0), where ν(dx) = (2x sinh x)^{−1} dx (see [93] Example 15.15). We see from this that μ is selfdecomposable with k-function k(x) = 2^{−1} |sinh x|^{−1}. Let us show that μ ∈ L_1(R). Since k(x) = k(−x), it is enough to consider the function h(u) = 2^{−1} (sinh(e^u))^{−1} for u ∈ R. Differentiating twice, we have


h′(u) = −2^{−1} e^u cosh(e^u) (sinh(e^u))^{−2} < 0,
h″(u) = 2^{−1} e^u (sinh(e^u))^{−3} ( e^u + 2^{−1} sinh(2e^u) ( e^u coth(e^u) − 1 ) ).

Let f(x) = x coth x − 1. If we can show that f(x) > 0 for x > 0, then h″(u) > 0 and μ ∈ L_1 by Theorem 1.39 and Lemma 1.38. It suffices to show x(e^{2x} + 1) > e^{2x} − 1 for x > 0. Let g(x) = x(e^{2x} + 1) − e^{2x} + 1. Then g′(x) = 1 − e^{2x} + 2xe^{2x} and g″(x) = 4xe^{2x}. Thus g(0) = 0, g′(0) = 0, and g″(x) > 0 for x > 0. Hence g(x) > 0 for x > 0. Thus f(x) is positive for x > 0. Noting that ∫_{|x|>1} |x|^α ν(dx) < ∞ for any α > 0, we have μ ∉ L_∞ for the same reason as in Example 1.46. But the highest m for which μ belongs to L_m is not known.

Remark 1.48 Analysis of path behaviour of Lévy processes is thoroughly made by the Lévy–Itô decomposition of sample functions. Some basic facts from this decomposition are the following. Let {X_t} be a Lévy process on R^d with generating triplet (A, ν, γ). Call s > 0 a jump time of X_t(ω) if X_s(ω) ≠ X_{s−}(ω) ({X_t} has a jump at s). Then:
(i) ∫_{|x|≤1} ν(dx) < ∞ if and only if the number of jump times of X_t(ω) in any finite time interval is finite a.s.
(ii) ∫_{|x|≤1} ν(dx) = ∞ if and only if the jump times are countable and dense in (0, ∞) a.s.
(iii) ∫_{|x|≤1} |x| ν(dx) < ∞ if and only if the sum of |X_s(ω) − X_{s−}(ω)| over all jump times s in any finite time interval is finite a.s.
(iv) ∫_{|x|≤1} |x| ν(dx) = ∞ if and only if the sum of |X_s(ω) − X_{s−}(ω)| over all jump times s in any finite time interval is infinite a.s.
(v) A ≠ 0 or ∫_{|x|≤1} |x| ν(dx) = ∞ if and only if X_t(ω) has infinite total variation in any finite time interval a.s.
(vi) A = 0 and ∫_{|x|≤1} |x| ν(dx) < ∞ if and only if X_t(ω) is a linear non-random function plus the sum of all jumps in (0, t] a.s.
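The sign computations in Example 1.47 can be double-checked numerically. In this illustrative sketch (ours, not from the book), h2 implements the displayed formula for h″ and is compared against a central second difference:

```python
import math

def h(u):
    # h(u) = (1/2) / sinh(e^u), the h-function of Example 1.47
    return 0.5 / math.sinh(math.exp(u))

def h2(u):
    # closed form of h''(u) derived in Example 1.47
    x = math.exp(u)
    return 0.5 * x / math.sinh(x) ** 3 * (x + 0.5 * math.sinh(2 * x) * (x / math.tanh(x) - 1))

# f(x) = x coth x - 1 > 0 for x > 0, hence the bracket (and h'') is positive
for x in (0.1, 0.5, 1.0, 3.0):
    assert x / math.tanh(x) - 1 > 0

# the closed form is positive and matches a numerical second difference
for u in (-2.0, -1.0, 0.0, 1.0):
    eps = 1e-4
    num = (h(u + eps) - 2 * h(u) + h(u - eps)) / eps ** 2
    assert h2(u) > 0 and abs(num - h2(u)) / h2(u) < 1e-3
```

Positivity of h″ everywhere is what makes h monotone of order 2 and hence μ ∈ L_1, in contrast with the Γ-distribution of Example 1.45.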

Notes

During the pioneering studies of infinitely divisible and stable distributions, together with Lévy, additive, and stable processes, in the 1920s and 1930s, selfdecomposable distributions were introduced, without the name, by Lévy, with characterizations similar to Theorem 1.26 (i) and Theorem 1.34. This was in answer to a question raised by Khintchine, who named them distributions of class L. Expositions of the results were given in the books by Lévy [54] (1937), Khintchine [46] (1938), Gnedenko and Kolmogorov [26] (1968), and Loève [58] (1977, 1978).

The classes L_m, m = 1, 2, . . . , and L_∞ were introduced by Urbanik [129] (1972b), [130] (1973), as a decreasing sequence of classes which possess stronger


similarity in some sense to the class of stable distributions as m increases. Then Kumar and Schreiber [49] (1978), [50] (1979) and Thu [122] (1979) followed. Theorems 1.26, 1.39, and 1.41 are a reformulation and extension of Urbanik's theory by Sato [84] (1980). The definition (1.23) of the h-function follows Sato [101] (2010); it is different from that of [84] and from the former edition of this book. The meaning of "monotone of order m" is also different from that of [84] (1980). Selfdecomposable and stable distributions and processes are extensively studied and many results are known; see Bertoin [12] (1996) and Sato [93, 95] (1999, 2001b).

If μ ∈ P(R^d) satisfies (1.3) for some b ∈ (1, ∞) and some ρ_b ∈ ID(R^d), then μ ∈ ID and it is called semi-selfdecomposable with b as a span. If μ ∈ ID(R^d) satisfies (1.7) for some α ∈ (0, 2) and some b ∈ (1, ∞), then μ is called α-semi-stable with b as a span. If a Lévy process {X_t} is such that L(X_1) is semi-selfdecomposable (resp. α-semi-stable) with b as a span, then {X_t} is called a semi-selfdecomposable process (resp. an α-semi-stable process) with b as a span. Results are extended to these classes in weaker forms. Some results on semi-selfdecomposable distributions are found in Maejima and Sato [60] (2003) and Maejima and Ueda [64] (2009).

Unimodality and non-unimodality can have time evolution. Among Lévy processes on R some have time evolution from unimodal to non-unimodal and some from non-unimodal to unimodal. Furthermore, for any α ∈ (0, 1), there is an α-semi-stable subordinator (see Definition 4.1 for subordinator) with b as a span such that, for some t_1 > 0 and t_2 > 0, L(X_t) is unimodal for t = b^n t_1 for all n ∈ Z and non-unimodal for t = b^n t_2 for all n ∈ Z. Notice that both b^n t_1 and b^n t_2 accumulate at 0 and ∞. Time evolution in modality of several Lévy processes on R with explicit Lévy measures is investigated; see Sato [91] (1997) and Watanabe [134, 136] (1999, 2001).

A measure η on R^d is called discrete if there is a countable set C ∈ B(R^d) such that η(R^d \ C) = 0; continuous if η({x}) = 0 for all x ∈ R^d; absolutely continuous if η(B) = 0 for all B ∈ B(R^d) having Lebesgue measure 0; singular if there is B ∈ B(R^d) with Lebesgue measure 0 such that η(R^d \ B) = 0; and continuous singular if it is continuous and singular. Concerning these continuity properties, there are two basic theorems. Let μ ∈ ID(R^d) with generating triplet (A, ν, γ). The continuity theorem says that μ is continuous if and only if A ≠ 0 or ν(R^d) = ∞ (see [93] Theorem 27.4). The Hartman–Wintner theorem in [32] (1942) says that, if A = 0, ν is discrete, and ν(R^d) = ∞, then μ is either absolutely continuous or continuous singular (see [93] Theorem 27.16).

Orey [69] (1968) showed the following. If {X_t} is a Lévy process on R with generating triplet (A, ν, γ) such that A = 0 and ν = Σ_{n=1}^∞ a_n^{−α} δ_{a_n} with a_n = 2^{−c^n}, where 0 < α < 2, c ∈ N, and c > 2/(2 − α), then L(X_t) is continuous singular for every t > 0. The proof uses the Hartman–Wintner theorem and the Riemann–Lebesgue theorem (that is, if μ ∈ P(R^d) is absolutely continuous, then |μ̂(z)| → 0 as |z| → ∞). This seems to be the simplest example of such a Lévy process and of a continuous singular distribution in ID(R); both finite and infinite cases of ∫_{(0,1)} x ν(dx) are possible, since ∫_{(0,1)} x^α ν(dx) = ∞ and ∫_{(0,1)} x^{α′} ν(dx) < ∞ for α′ > α (see [93] E 29.12).

There exists a Lévy process on R such that, for some t_0 ∈ (0, ∞), L(X_t) is continuous singular for all t ∈ (0, t_0) and absolutely continuous for all t ∈ (t_0, ∞). This is a remarkable time evolution of a qualitative distributional property of a Lévy process; it is a sort of phase transition. Two methods of construction of such a Lévy process are known; see Tucker [126] (1965), Sato [90] (1994), and Watanabe [135, 136] (2000, 2001); the Lévy process constructed by either method has discrete Lévy measure ν.

Intuitively speaking, {X_t} is a Lévy process {Z_t} with L(Z_1) = ρ to which a force is exerted toward the origin with magnitude c times the distance to the origin. Such a process is constructed as a unique solution of a stochastic integral equation involving {Z_t} and c. The Ornstein–Uhlenbeck type process has a limit distribution under an integrability condition on the Lévy measure ν_ρ of ρ. Specifically, if ν_ρ satisfies

∫_{|x|>2} log |x| ν_ρ(dx) < ∞,  (2.1)

then, as t → ∞, L (Xt ) converges to a selfdecomposable distribution μ on Rd . Conversely, every selfdecomposable distribution μ appears as the limit distribution of some Ornstein–Uhlenbeck type process. If condition (2.1) does not hold, L (Xt ) does not tend to any distribution as t tends to ∞.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2019 A. Rocha-Arteaga, K. Sato, Topics in Infinitely Divisible Distributions and Lévy Processes, SpringerBriefs in Probability and Mathematical Statistics, https://doi.org/10.1007/978-3-030-22700-5_2


2 Classes Lm and Ornstein–Uhlenbeck Type Processes

In Sect. 2.3 it is shown that there is a one-to-one and onto correspondence between the Lévy processes of class Lm−1 satisfying (2.1) on the one hand, and the distributions of class Lm which appear as the limit distributions of the Ornstein– Uhlenbeck type processes on the other hand. This correspondence preserves α-stability.

2.1 Stochastic Integrals Based on Lévy Processes

In this section let {Z_t : t ≥ 0} be a Lévy process on R^d with L(Z_1) = ρ and

Ee^{i⟨z,Z_t⟩} = ρ̂(z)^t = e^{tψ_ρ(z)},  (2.2)

where ψ_ρ(z) is the distinguished logarithm of ρ̂. Further, let (A_ρ, ν_ρ, γ_ρ) be the generating triplet of {Z_t}. Sometimes {Z_t} is denoted by {Z_t^{(ρ)}}, which expresses that L(Z_1) = ρ. Let us define the stochastic integral of a bounded measurable function defined on a bounded closed interval in R_+ based on {Z_t}. We will describe the characteristic function of the integral.

Definition 2.1 Let 0 ≤ t_0 < t_1 < ∞. A function f(s) on [t_0, t_1] is called a step function if there are a finite number of points t_0 = s_0 < s_1 < · · · < s_n = t_1 such that

f(s) = Σ_{j=1}^n a_j 1_{[s_{j−1}, s_j)}(s)  (2.3)

with some a_1, . . . , a_n ∈ R. When f(s) is a step function of this form, define

∫_{t_0}^{t_1} f(s) dZ_s = Σ_{j=1}^n a_j (Z_{s_j} − Z_{s_{j−1}}).  (2.4)

It follows that ∫_{t_0}^{t_1} f(s) dZ_s has an infinitely divisible distribution and

E exp( i⟨z, ∫_{t_0}^{t_1} f(s) dZ_s⟩ ) = Π_{j=1}^n E exp( i a_j ⟨z, Z_{s_j} − Z_{s_{j−1}}⟩ )
  = Π_{j=1}^n exp( (s_j − s_{j−1}) ψ_ρ(a_j z) ) = exp( Σ_{j=1}^n (s_j − s_{j−1}) ψ_ρ(a_j z) )
  = exp ∫_{t_0}^{t_1} ψ_ρ(f(s) z) ds.  (2.5)
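Formula (2.5) can be sanity-checked by simulation in the Brownian case, where ρ = N(0, 1) and ψ_ρ(z) = −z²/2. This is an illustrative sketch only; the step function, the evaluation point z, and the sample size are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# step function f = a_j on [s_{j-1}, s_j): s = (0, 0.5, 1.5, 2.0), a = (1, -2, 0.5)
s = np.array([0.0, 0.5, 1.5, 2.0])
a = np.array([1.0, -2.0, 0.5])

# Brownian motion: psi_rho(z) = -z^2/2, so (2.5) reads
# E exp(i z * integral) = exp(-z^2/2 * sum_j a_j^2 (s_j - s_{j-1}))
z = 0.8
exact = np.exp(-z ** 2 / 2 * np.sum(a ** 2 * np.diff(s)))

# Monte Carlo via (2.4): integral = sum_j a_j (Z_{s_j} - Z_{s_{j-1}}),
# with independent increments ~ N(0, s_j - s_{j-1})
n = 200_000
incr = rng.normal(0.0, np.sqrt(np.diff(s)), size=(n, 3))
integral = incr @ a
empirical = np.mean(np.exp(1j * z * integral))

assert abs(empirical - exact) < 0.01
```

Replacing the normal increments by increments of any other Lévy process changes only ψ_ρ in the exact formula.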


Proposition 2.2 Let f(s) be a real-valued bounded measurable function on [t_0, t_1] such that there are uniformly bounded step functions f_n(s), n = 1, 2, . . . , on [t_0, t_1] satisfying f_n → f almost everywhere. Then ∫_{t_0}^{t_1} f_n(s) dZ_s converges to an R^d-valued random variable X in probability. The limit X does not depend on the choice of f_n almost surely. The law of X is infinitely divisible and represented as

Ee^{i⟨z,X⟩} = exp ∫_{t_0}^{t_1} ψ_ρ(f(s) z) ds.  (2.6)

Using this result, we give

Definition 2.3 The R^d-valued random variable X in Proposition 2.2 is called the stochastic integral of f on [t_0, t_1] based on the Lévy process {Z_s} = {Z_s^{(ρ)}} and is denoted by

X = ∫_{t_0}^{t_1} f(s) dZ_s = ∫_{t_0}^{t_1} f(s) dZ_s^{(ρ)}.  (2.7)

Proof of Proposition 2.2 Since ψ_ρ is continuous and ψ_ρ(0) = 0, ψ_ρ((f_n(s) − f_m(s))z) tends to 0 for almost every s, as n, m → ∞. Then

∫_{t_0}^{t_1} ψ_ρ((f_n(s) − f_m(s)) z) ds → 0

as n, m → ∞. Hence by (2.5)

∫_{t_0}^{t_1} f_n(s) dZ_s − ∫_{t_0}^{t_1} f_m(s) dZ_s = ∫_{t_0}^{t_1} (f_n(s) − f_m(s)) dZ_s → 0

in probability. It follows that there exists a random variable X which is the limit in probability of ∫_{t_0}^{t_1} f_n(s) dZ_s. The law of X is infinitely divisible, since the laws of ∫_{t_0}^{t_1} f_n(s) dZ_s are. Moreover,

∫_{t_0}^{t_1} ψ_ρ(f_n(s) z) ds → ∫_{t_0}^{t_1} ψ_ρ(f(s) z) ds

by Lebesgue's bounded convergence theorem. Then, by (2.5),

E exp( i⟨z, ∫_{t_0}^{t_1} f_n(s) dZ_s⟩ ) → exp ∫_{t_0}^{t_1} ψ_ρ(f(s) z) ds.

Hence we have (2.6). To see that the limit X does not depend on approximating sequences of f, let f_n(s) → f(s) and g_n(s) → f(s) a.e., both boundedly. Then

E exp( i⟨z, ∫_{t_0}^{t_1} (f_n − g_n) dZ_s⟩ ) = exp ∫_{t_0}^{t_1} ψ_ρ((f_n(s) − g_n(s)) z) ds → 1

as n → ∞, showing that ∫_{t_0}^{t_1} f_n dZ_s − ∫_{t_0}^{t_1} g_n dZ_s → 0 in probability. □

Proposition 2.4 If $f(s)$ is a real-valued bounded measurable function on $[t_0,t_1]$, then $\int_{t_0}^{t_1}f(s)\,dZ_s$ is definable in the sense of (2.7) and we have (2.6).

Proof By Proposition 2.2, it is enough to show the existence of uniformly bounded step functions $f_n(s)$ such that $f_n(s)\to f(s)$ a.e. Let $|f(s)|\le C$. By Lusin's theorem ([31, p. 243]), for each $n$ there is a closed set $F_n\subset[t_0,t_1]$ such that $[t_0,t_1]\setminus F_n$ has Lebesgue measure less than $2^{-n}$ and the restriction of $f$ to $F_n$ is continuous. Then, by Urysohn's theorem in general topology, there is a continuous function $g_n$ on $[t_0,t_1]$ with $|g_n(s)|\le C$ such that $g_n=f$ on $F_n$. We can choose step functions $f_n$ on $[t_0,t_1]$ such that $|f_n(s)-g_n(s)|<2^{-n}$ and $|f_n(s)|\le C$. Let $G=\bigcap_{k=1}^{\infty}\bigcup_{n=k}^{\infty}\big([t_0,t_1]\setminus F_n\big)$. Then $G$ has Lebesgue measure 0. If $s\notin G$, then $s\in\bigcup_{k=1}^{\infty}\bigcap_{n=k}^{\infty}F_n$ and $f_n(s)\to f(s)$, since
$$|f_n(s)-f(s)|=|f_n(s)-g_n(s)|<2^{-n}$$
for all large $n$. $\square$
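The step-function integrals underlying Propositions 2.2 and 2.4 are easy to experiment with numerically. The sketch below is our own illustration, not from the book: it samples a compound Poisson path — a Lévy process whose values on a grid can be simulated exactly — and integrates a step function against its increments; `compound_poisson_path`, `step_integral`, and all parameters are hypothetical choices.

```python
import numpy as np

# Sketch: the stochastic integral of a step function against a Lévy path
# reduces to the finite sum  sum_j f(s_{j-1}) (Z_{s_j} - Z_{s_{j-1}}).
rng = np.random.default_rng(0)

def compound_poisson_path(grid, rate=2.0):
    """Exact samples of a compound Poisson process with standard normal
    jumps at the times in `grid`, normalized so that Z_{t_0} = 0."""
    counts = rng.poisson(rate * np.diff(grid))
    incr = np.array([rng.normal(size=k).sum() for k in counts])
    return np.concatenate([[0.0], np.cumsum(incr)])

def step_integral(f, grid, Z):
    """Integral of the step function taking value f(s_{j-1}) on
    [s_{j-1}, s_j) against the increments of Z."""
    return float(np.sum(f(grid[:-1]) * np.diff(Z)))

grid = np.linspace(0.0, 1.0, 201)
Z = compound_poisson_path(grid)

# f identically 1: the sum telescopes to Z_{t_1} - Z_{t_0}.
assert abs(step_integral(lambda s: np.ones_like(s), grid, Z) - (Z[-1] - Z[0])) < 1e-9
# Linearity in the integrand, as for any integral against increments.
a = step_integral(lambda s: s, grid, Z)
b = step_integral(lambda s: s ** 2, grid, Z)
ab = step_integral(lambda s: s + s ** 2, grid, Z)
assert abs((a + b) - ab) < 1e-9
```

Refining the grid and letting step functions $f_n\to f$ boundedly mirrors the approximation scheme of Propositions 2.2 and 2.4.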

Proposition 2.5 Let $f(s)$ be a locally bounded, measurable function on $[0,\infty)$. Then there is an additive process $\{X_t\colon t\ge 0\}$ on $\mathbb{R}^d$ such that, for every $t>0$,
$$P\Big[X_t=\int_0^t f(s)\,dZ_s\Big]=1. \tag{2.8}$$

Proof Let $Y_0=0$ and $Y_t=\int_0^t f(s)\,dZ_s$ for $t>0$. If $0\le t_0<t_1<t_2$, then
$$\int_{t_0}^{t_1}f(s)\,dZ_s+\int_{t_1}^{t_2}f(s)\,dZ_s=\int_{t_0}^{t_2}f(s)\,dZ_s\qquad\text{a.s.,}$$
as is proved from the case of step functions. This combined with the independent increment property of $\{Z_t\}$ proves that $\{Y_t\}$ has independent increments. If $t_n\downarrow t$, then
$$Ee^{i\langle z,Y_{t_n}-Y_t\rangle}=Ee^{i\langle z,\int_t^{t_n}f(s)\,dZ_s\rangle}=\exp\int_t^{t_n}\psi_\rho(f(s)z)\,ds\to 1$$
and, similarly, if $t>0$ and $t_n\uparrow t$, then $Ee^{i\langle z,Y_t-Y_{t_n}\rangle}\to 1$. Hence $\{Y_t\}$ is stochastically continuous. This shows that $\{Y_t\}$ is an additive process in law in Definition 1.31. Now $\{Y_t\}$ has a modification $\{X_t\}$ which is an additive process, by Theorem 11.5 of [93]. $\square$

2.1 Stochastic Integrals Based on Lévy Processes


Remark 2.6 Henceforth, $\int_0^t f(s)\,dZ_s$ is understood to be the modification $X_t$ in Proposition 2.5. For $0<t_0\le t$, $\int_{t_0}^t f(s)\,dZ_s$ is understood to be $X_t-X_{t_0}$.

We need a Fubini type theorem for stochastic integrals and ordinary integrals in order to prove the existence of Ornstein–Uhlenbeck type processes. We establish the following proposition.

Proposition 2.7 Let $f(s)$ and $g(s)$ be bounded measurable functions on $[t_0,t_1]$. Then
$$\int_{t_0}^{t_1}g(s)\,ds\int_{t_0}^{s}f(u)\,dZ_u=\int_{t_0}^{t_1}f(u)\,dZ_u\int_u^{t_1}g(s)\,ds\qquad\text{a.s.,} \tag{2.9}$$
which is another writing of
$$\int_{t_0}^{t_1}g(s)\Big(\int_{t_0}^{s}f(u)\,dZ_u\Big)ds=\int_{t_0}^{t_1}f(u)\Big(\int_u^{t_1}g(s)\,ds\Big)dZ_u\qquad\text{a.s.}$$

Proof Let $X=X(f,g)$ and $Y=Y(f,g)$ denote the left- and the right-hand sides of (2.9), respectively. Existence of $Y$ follows from Proposition 2.4. For any additive process $\{X_t\}$, $X_t(\omega)$ is bounded and measurable in $t\in[t_0,t_1]$ for almost every $\omega$ by virtue of (v) in Definition 1.31. Hence $X$ is definable by Proposition 2.5.

Step 1. We show that
$$Ee^{i\langle z,X\rangle}=Ee^{i\langle z,Y\rangle}=\exp\int_{t_0}^{t_1}\psi_\rho\Big(f(u)\int_u^{t_1}g(s)\,ds\;z\Big)\,du. \tag{2.10}$$
The second equality is evident from (2.6). Let us calculate $Ee^{i\langle z,X\rangle}$. Let $t_{n,k}=t_0+k2^{-n}(t_1-t_0)$ for $n=1,2,\dots$ and $k=0,1,\dots,2^n$. For $s\in[t_0,t_1)$, define $\lambda_n(s)=t_{n,k}$ if $t_{n,k-1}\le s<t_{n,k}$. Let
$$X_n=\int_{t_0}^{t_1}g(s)\,ds\int_{t_0}^{\lambda_n(s)}f(u)\,dZ_u.$$
Since $\int_{t_0}^{s}f(u)\,dZ_u$ is right continuous and locally bounded in $s$ a.s., $X_n$ tends to $X$ a.s. as $n\to\infty$. Hence $Ee^{i\langle z,X_n\rangle}\to Ee^{i\langle z,X\rangle}$. We have
$$X_n=\sum_{k=1}^{2^n}c_k\int_{t_0}^{t_{n,k}}f(u)\,dZ_u=\int_{t_0}^{t_1}\sum_{k=1}^{2^n}c_k 1_{[t_0,t_{n,k})}(u)f(u)\,dZ_u\qquad\text{a.s.}$$

with $c_k=\int_{t_{n,k-1}}^{t_{n,k}}g(s)\,ds$. Thus, by (2.6),
$$Ee^{i\langle z,X_n\rangle}=\exp\int_{t_0}^{t_1}\psi_\rho\Big(\sum_{k=1}^{2^n}c_k 1_{[t_0,t_{n,k})}(u)f(u)\,z\Big)\,du$$
$$=\exp\int_{t_0}^{t_1}\psi_\rho\Big(\sum_{k=1}^{2^n}1_{[t_0,t_{n,k})}(u)\int_{t_{n,k-1}}^{t_{n,k}}g(s)\,ds\,f(u)\,z\Big)\,du$$
$$=\exp\int_{t_0}^{t_1}\psi_\rho\Big(\int_{\lambda_n(u)-2^{-n}(t_1-t_0)}^{t_1}g(s)\,ds\,f(u)\,z\Big)\,du,$$
which tends to the rightmost member of (2.10) as $n\to\infty$.

Step 2. Let us show that $X=Y$ a.s., assuming that $f$ and $g$ are step functions. Without loss of generality, we can assume that
$$f(s)=\sum_{j=1}^N a_j 1_{[s_{j-1},s_j)}(s),\qquad g(s)=\sum_{j=1}^N b_j 1_{[s_{j-1},s_j)}(s)$$
with $t_0=s_0<s_1<\dots<s_N=t_1$. First we prepare the identity
$$\int_{t_0}^{t_1}s\,dZ_s=t_1 Z_{t_1}-t_0 Z_{t_0}-\int_{t_0}^{t_1}Z_s\,ds\qquad\text{a.s.} \tag{2.11}$$

Define $t_{n,k}$ and $\lambda_n(s)$ as in Step 1. Since $\lambda_n(s)$, $n=1,2,\dots$, are step functions and $\lambda_n(s)\to s$, we have $\int_{t_0}^{t_1}\lambda_n(s)\,dZ_s\to\int_{t_0}^{t_1}s\,dZ_s$ in probability. Notice that
$$\int_{t_0}^{t_1}\lambda_n(s)\,dZ_s=\sum_{k=1}^{2^n}t_{n,k}\big(Z_{t_{n,k}}-Z_{t_{n,k-1}}\big)=\sum_{k=1}^{2^n}t_{n,k}Z_{t_{n,k}}-\sum_{k=0}^{2^n-1}\big(t_{n,k}+2^{-n}(t_1-t_0)\big)Z_{t_{n,k}}$$
$$=t_1 Z_{t_1}-t_0 Z_{t_0}-\sum_{k=0}^{2^n-1}2^{-n}(t_1-t_0)Z_{t_{n,k}}$$
$$=t_1 Z_{t_1}-t_0 Z_{t_0}-\int_{t_0}^{t_1}Z_{\lambda_n(s)}\,ds+2^{-n}(t_1-t_0)\big(Z_{t_1}-Z_{t_0}\big)$$
$$\to t_1 Z_{t_1}-t_0 Z_{t_0}-\int_{t_0}^{t_1}Z_s\,ds\qquad\text{a.s.}$$
as $n\to\infty$, since $Z_{\lambda_n(s)}\to Z_s$ boundedly on $[t_0,t_1)$ a.s. This proves (2.11). Now
$$X=\sum_{k=1}^N b_k\int_{s_{k-1}}^{s_k}\Big(\int_{t_0}^{s}f(u)\,dZ_u\Big)ds=\sum_{k=1}^N b_k\int_{s_{k-1}}^{s_k}\sum_{j=1}^N a_j\big(Z_{s\wedge s_j}-Z_{s\wedge s_{j-1}}\big)\,ds=I_1,\ \text{say.}$$

Since
$$\int_{s_{k-1}}^{s_k}\big(Z_{s\wedge s_j}-Z_{s\wedge s_{j-1}}\big)\,ds=\begin{cases}0 & \text{for }k\le j-1,\\[2pt] \int_{s_{k-1}}^{s_k}\big(Z_s-Z_{s_{k-1}}\big)\,ds & \text{for }k=j,\\[2pt] \big(Z_{s_j}-Z_{s_{j-1}}\big)(s_k-s_{k-1}) & \text{for }k\ge j+1,\end{cases}$$
we have
$$I_1=\sum_{j=1}^N a_j\Big(b_j\int_{s_{j-1}}^{s_j}\big(Z_s-Z_{s_{j-1}}\big)\,ds+\sum_{k=j+1}^N b_k\big(Z_{s_j}-Z_{s_{j-1}}\big)(s_k-s_{k-1})\Big)$$
$$=\sum_{j=1}^N a_j\Big(-b_j\int_{s_{j-1}}^{s_j}s\,dZ_s+\big(Z_{s_j}-Z_{s_{j-1}}\big)\Big(b_j s_j+\sum_{k=j+1}^N b_k(s_k-s_{k-1})\Big)\Big).$$

Here we used (2.11). On the other hand,
$$Y=\sum_{j=1}^N a_j\int_{s_{j-1}}^{s_j}dZ_u\int_u^{t_1}g(s)\,ds=\sum_{j=1}^N a_j\int_{s_{j-1}}^{s_j}\Big(b_j(s_j-u)+\sum_{k=j+1}^N b_k(s_k-s_{k-1})\Big)\,dZ_u$$
$$=\sum_{j=1}^N a_j\Big(b_j\int_{s_{j-1}}^{s_j}(s_j-u)\,dZ_u+\big(Z_{s_j}-Z_{s_{j-1}}\big)\sum_{k=j+1}^N b_k(s_k-s_{k-1})\Big).$$

Therefore $X=Y$ a.s. whenever $f$ and $g$ are step functions.

Step 3. We show $X=Y$ a.s. when $f$ is bounded measurable and $g$ is a step function. By Proposition 2.4 there are uniformly bounded step functions $f_n$ such that $f_n\to f$ a.e. on $[t_0,t_1]$. Let $X_n=X(f_n,g)$ and $Y_n=Y(f_n,g)$. We have $X_n=Y_n$ a.s. by Step 2. Since $X-X_n=X(f-f_n,g)$ and $Y-Y_n=Y(f-f_n,g)$, Step 1 gives
$$Ee^{i\langle z,X-X_n\rangle}=Ee^{i\langle z,Y-Y_n\rangle}=\exp\int_{t_0}^{t_1}\psi_\rho\Big(\big(f(u)-f_n(u)\big)\int_u^{t_1}g(s)\,ds\,z\Big)\,du,$$

which tends to 1 as $n\to\infty$. It follows that $X_n\to X$ and $Y_n\to Y$ in probability. Therefore $X=Y$ a.s.

Step 4. Now let us show $X=Y$ a.s. when $f$ and $g$ are bounded measurable functions. Choose uniformly bounded step functions $g_n$ such that $g_n\to g$ a.e. on $[t_0,t_1]$. Let $\widetilde X_n=X(f,g_n)$ and $\widetilde Y_n=Y(f,g_n)$. Then $\widetilde X_n=\widetilde Y_n$ by Step 3 and, by the same method as in Step 3, we can show that $\widetilde X_n\to X$ and $\widetilde Y_n\to Y$ in probability. Hence (2.9) is proved. $\square$

Example 2.8 Let $f(s)$ be of class $C^1$ on $[t_0,t_1]$. As an example of the use of Proposition 2.7, let us show the integration-by-parts formula
$$\int_{t_0}^{t_1}f(u)\,dZ_u=f(t_1)Z_{t_1}-f(t_0)Z_{t_0}-\int_{t_0}^{t_1}Z_s f'(s)\,ds\qquad\text{a.s.} \tag{2.12}$$

Indeed, notice that
$$\int_{t_0}^{t_1}f(u)\,dZ_u=-\int_{t_0}^{t_1}dZ_u\int_u^{t_1}f'(s)\,ds+f(t_1)\int_{t_0}^{t_1}dZ_u$$
$$=-\int_{t_0}^{t_1}f'(s)\,ds\int_{t_0}^{s}dZ_u+f(t_1)\int_{t_0}^{t_1}dZ_u\qquad\text{(by Proposition 2.7)}$$
$$=-\int_{t_0}^{t_1}f'(s)\big(Z_s-Z_{t_0}\big)\,ds+f(t_1)\big(Z_{t_1}-Z_{t_0}\big)$$
$$=-\int_{t_0}^{t_1}f'(s)Z_s\,ds+\big(f(t_1)-f(t_0)\big)Z_{t_0}+f(t_1)\big(Z_{t_1}-Z_{t_0}\big),$$
which is the right-hand side of (2.12).
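Formula (2.12) has an exact discrete counterpart, summation by parts, so it can be checked directly on a simulated path. A minimal sketch (our own illustration; the Brownian driving increments and the choice $f(s)=e^{-s}$ are arbitrary). Note that the Riemann sum for $\int Z_s f'(s)\,ds$ appears as the sum of $Z_{s_j}\big(f(s_j)-f(s_{j-1})\big)$.

```python
import numpy as np

# Discrete analogue of (2.12): summation by parts is exact,
#   sum_j f(s_{j-1}) (Z_{s_j} - Z_{s_{j-1}})
#     = f(t1) Z_{t1} - f(t0) Z_{t0} - sum_j Z_{s_j} (f(s_j) - f(s_{j-1})).
rng = np.random.default_rng(1)

s = np.linspace(1.0, 3.0, 501)               # grid on [t0, t1] = [1, 3]
dZ = rng.normal(0.0, np.sqrt(np.diff(s)))    # Brownian increments, a Lévy path
Z = np.concatenate([[0.0], np.cumsum(dZ)])   # normalized so Z_{t0} = 0

f = np.exp(-s)                               # a C^1 integrand

lhs = float(np.sum(f[:-1] * np.diff(Z)))
rhs = float(f[-1] * Z[-1] - f[0] * Z[0] - np.sum(Z[1:] * np.diff(f)))
assert abs(lhs - rhs) < 1e-9
```

Refining the grid makes the right-hand sum converge to the integral $\int_{t_0}^{t_1}Z_s f'(s)\,ds$ of (2.12).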

2.2 Ornstein–Uhlenbeck Type Processes and Limit Distributions

Let $\{Z_t\colon t\ge 0\}=\{Z_t^{(\rho)}\colon t\ge 0\}$ be a Lévy process on $\mathbb{R}^d$ with $\mathcal{L}(Z_1)=\rho$. Let $(A_\rho,\nu_\rho,\gamma_\rho)$ be the generating triplet of $\rho$ and let $\psi_\rho(z)$ be the distinguished logarithm of $\widehat\rho$. Let $J$ be a random variable on $\mathbb{R}^d$. In the following we always assume that $J$ and $\{Z_t\}$ are independent. Given $c\in\mathbb{R}$, consider the equation
$$X_t=J+Z_t-c\int_0^t X_s\,ds,\qquad t\ge 0. \tag{2.13}$$

A stochastic process $\{X_t\colon t\ge 0\}$ is said to be a solution of (2.13) if $X_t(\omega)$ is right continuous with left limits in $t$ and satisfies (2.13) a.s.

Proposition 2.9 Equation (2.13) has an almost surely unique solution $\{X_t\}$ and, almost surely,
$$X_t=e^{-ct}J+e^{-ct}\int_0^t e^{cs}\,dZ_s,\qquad t\ge 0. \tag{2.14}$$

Proof Define $X_t$ by (2.14). Then
$$c\int_0^t X_s\,ds=Jc\int_0^t e^{-cs}\,ds+c\int_0^t e^{-cs}\,ds\int_0^s e^{cu}\,dZ_u$$
$$=J\big(1-e^{-ct}\big)+c\int_0^t e^{cu}\,dZ_u\int_u^t e^{-cs}\,ds=J\big(1-e^{-ct}\big)+\int_0^t\big(1-e^{-c(t-u)}\big)\,dZ_u$$
$$=J-X_t+Z_t,$$
where we have applied Proposition 2.7. Therefore (2.13) holds.

Let us prove the uniqueness of the solution of (2.13). Suppose that $X_t^1(\omega)$ and $X_t^2(\omega)$ satisfy (2.13). For a fixed $\omega$ define a bounded function $f(t)$ on $[t_0,t_1]$ by $f(t)=X_t^1(\omega)-X_t^2(\omega)$. Then we have $f(t)=-c\int_0^t f(s)\,ds$ and
$$f(t)=-c\int_0^t\Big(-c\int_0^s f(u)\,du\Big)\,ds=(-c)^2\int_0^t\Big(\int_u^t ds\Big)f(u)\,du=(-c)^2\int_0^t(t-s)f(s)\,ds.$$
By induction we get
$$f(t)=\frac{(-c)^n}{(n-1)!}\int_0^t(t-s)^{n-1}f(s)\,ds\qquad\text{for }n=1,2,3,\dots$$
Since $\sum_{n=1}^{\infty}|c|^n(t-s)^{n-1}/(n-1)!$ is finite, $|c|^n(t-s)^{n-1}/(n-1)!$ tends to 0 uniformly in $s\in[0,t]$ as $n\to\infty$. Hence $f(t)=0$. This concludes the proof. $\square$

Let us define a Markov process, using conditional probability.

Definition 2.10 Let $T$ be an interval in $\mathbb{R}$. Let $\{X_t\colon t\in T\}$ be a stochastic process on $\mathbb{R}^d$ defined on a probability space $(\Omega,\mathcal{F},P)$. Let, for $t\in T$, $\mathcal{F}_t$ be the $\sigma$-algebra generated by $\{X_s\colon s\in T\cap(-\infty,t]\}$ (that is, $\mathcal{F}_t$ is the smallest sub-$\sigma$-algebra of $\mathcal{F}$ that contains $\{\omega\in\Omega\colon X_s(\omega)\in B\}$ for all $s\in T\cap(-\infty,t]$ and $B\in\mathcal{B}(\mathbb{R}^d)$). Then $\{X_t\colon t\in T\}$ is called a Markov process if, for every $s,t\in T$ with $s<t$,

$$P[X_t\in B\mid\mathcal{F}_s]=P[X_t\in B\mid\sigma(X_s)]\qquad\text{a.s. for }B\in\mathcal{B}(\mathbb{R}^d), \tag{2.15}$$
where $\sigma(X_s)$ is the $\sigma$-algebra generated by $X_s$. In Chap. 3 we will encounter Markov processes with $T=\mathbb{R}$. But in this chapter we assume $T=[0,\infty)$ from now on. A readable textbook on conditional probability is Billingsley [13] (2012).

We begin with the definition of transition functions.

Definition 2.11 A function $P_{s,t}(x,B)$ with $0\le s\le t<\infty$, $x\in\mathbb{R}^d$, $B\in\mathcal{B}(\mathbb{R}^d)$ is called a transition function on $\mathbb{R}^d$ if it is a probability measure with respect to $B$ for fixed $s,t,x$, a measurable function of $x$ for fixed $s,t,B$, $P_{s,s}(x,B)=\delta_x(B)$, and
$$\int_{\mathbb{R}^d}P_{s,t}(x,dy)P_{t,u}(y,B)=P_{s,u}(x,B)\qquad\text{for }0\le s\le t\le u. \tag{2.16}$$
If, in addition, $P_{s+h,t+h}(x,B)$ does not depend on $h$, then we write $P_t(x,B)=P_{0,t}(x,B)$ and call it a temporally homogeneous transition function. In this case (2.16) becomes
$$\int_{\mathbb{R}^d}P_s(x,dy)P_t(y,B)=P_{s+t}(x,B)\qquad\text{for }s\ge 0,\ t\ge 0. \tag{2.17}$$

Definition 2.12 When a transition function $P_{s,t}(x,B)$ is given, a Markov process $\{X_t\colon t\ge 0\}$ satisfying
$$P[X_t\in B\mid\sigma(X_s)]=P_{s,t}(X_s,B)\qquad\text{a.s.} \tag{2.18}$$
for any fixed $s,t$ with $s<t$ and $B\in\mathcal{B}(\mathbb{R}^d)$ is called a Markov process with transition function $P_{s,t}(x,B)$. When a temporally homogeneous transition function $P_t(x,B)$ is given, a Markov process $\{X_t\}$ satisfying
$$P[X_{s+t}\in B\mid\sigma(X_s)]=P_t(X_s,B)\qquad\text{a.s.} \tag{2.19}$$
for any fixed $s,t$ with $s<s+t$ and $B$ is called a temporally homogeneous Markov process with transition function $P_t(x,B)$.

Proposition 2.13 Let $\rho\in ID$ and $c\in\mathbb{R}$. Define $P_t(x,B)$ by
$$\int_{\mathbb{R}^d}e^{i\langle z,y\rangle}P_t(x,dy)=\exp\Big(i\langle e^{-ct}x,z\rangle+\int_0^t\psi_\rho(e^{-cs}z)\,ds\Big). \tag{2.20}$$
Then $P_t(x,B)$ is a temporally homogeneous transition function. It is a distribution in $ID$ for fixed $t,x$. The process $\{X_t\}$ of Proposition 2.9 is a temporally homogeneous Markov process with transition function $P_t(x,B)$ and initial distribution $\mathcal{L}(J)$.
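A quick sanity check of the semigroup property (2.17) in the simplest degenerate case: for the deterministic Lévy process $Z_t=\gamma t$, formula (2.14) with $X_0=x$ gives $X_t=e^{-ct}x+\gamma(1-e^{-ct})/c$, and $P_t(x,\cdot)$ is the point mass at this value, so (2.17) reduces to a flow property. The sketch below is our own toy illustration; `phi` and its parameters are hypothetical choices.

```python
import math

# For Z_t = gamma * t (deterministic drift), (2.14) with X_0 = x gives
#   X_t = e^{-ct} x + gamma (1 - e^{-ct}) / c,
# and the Chapman-Kolmogorov identity (2.17) becomes:
# evolving for s and then for t equals evolving for s + t.

def phi(t, x, c=0.7, gamma=1.3):
    """Deterministic OU-type flow started at x, run for time t."""
    return math.exp(-c * t) * x + gamma * (1.0 - math.exp(-c * t)) / c

x0 = 2.0
s, t = 0.4, 1.1
assert abs(phi(t, phi(s, x0)) - phi(s + t, x0)) < 1e-12
```

The same two-step composition, with the random integral term added independently at each step as in (2.21) below, is what makes $\{X_t\}$ Markov with transition function (2.20).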


Proof For every $s\in[0,t]$ we have, from (2.14),
$$X_t=e^{-c(t-s)}X_s+e^{-ct}\int_s^t e^{cu}\,dZ_u^{(\rho)}. \tag{2.21}$$
Because $e^{-ct}\int_s^t e^{cu}\,dZ_u^{(\rho)}$ and $\{X_u\colon u\le s\}$ are independent, the identity (2.21) shows that $\{X_t\colon t\ge 0\}$ is a Markov process satisfying (2.15) and (2.18) if we define $P_{s,t}(x,B)$ by
$$P_{s,t}(x,B)=P\Big[e^{-c(t-s)}x+e^{-ct}\int_s^t e^{cu}\,dZ_u^{(\rho)}\in B\Big]$$
(use Proposition 1.16 of [93] for a proof). This $P_{s,t}(x,B)$ is infinitely divisible and, by (2.6),
$$\int_{\mathbb{R}^d}e^{i\langle z,y\rangle}P_{s,t}(x,dy)=\exp\Big(i\big\langle z,e^{-c(t-s)}x\big\rangle+\int_s^t\psi_\rho\big(e^{-ct}e^{cu}z\big)\,du\Big)$$
$$=\exp\Big(i\big\langle z,e^{-c(t-s)}x\big\rangle+\int_0^{t-s}\psi_\rho\big(e^{-cv}z\big)\,dv\Big).$$
Thus $P_{s,t}(x,B)$ depends only on $t-s$. Let $P_t(x,B)=P_{0,t}(x,B)$. Then (2.20) is satisfied. Once the expression is known, it is easy to check (2.17) since, for $\eta(B)=\int P_s(x,dy)P_t(y,B)$, we have
$$\widehat\eta(w)=\int P_s(x,dy)\exp\Big(i\langle e^{-ct}y,w\rangle+\int_0^t\psi_\rho(e^{-cu}w)\,du\Big)$$
$$=\exp\Big(i\langle e^{-cs-ct}x,w\rangle+\int_0^s\psi_\rho(e^{-cu-ct}w)\,du+\int_0^t\psi_\rho(e^{-cu}w)\,du\Big)$$
$$=\exp\Big(i\langle e^{-c(s+t)}x,w\rangle+\int_0^{s+t}\psi_\rho(e^{-cu}w)\,du\Big)=\int P_{s+t}(x,dy)\,e^{i\langle y,w\rangle}.$$
It follows that $P_t(x,B)$ is a temporally homogeneous transition function. The assertion is proved. $\square$

We introduce the following definition.

Definition 2.14 Let $\rho\in ID$ and $c\in\mathbb{R}$. A stochastic process $\{X_t\colon t\ge 0\}$ on $\mathbb{R}^d$ is called a wide-sense Ornstein–Uhlenbeck type (or OU type) process generated by $\rho$ and $c$, or generated by $\{Z_t^{(\rho)}\}$ and $c$, if it is a temporally homogeneous Markov process with transition function $P_t(x,B)$ of (2.20). It is called an Ornstein–Uhlenbeck type (or OU type) process if, in addition, $c>0$. The OU type process generated by

Brownian motion and $c>0$ is called an Ornstein–Uhlenbeck process.¹ Sometimes the process $\{Z_t^{(\rho)}\}$ is called the background driving Lévy process.

¹ Some authors use the name Ornstein–Uhlenbeck process for our Ornstein–Uhlenbeck type process.

Proposition 2.13 shows that from any initial distribution we can construct the wide-sense OU type process generated by $\rho$ and $c$. An alternative construction is to use the fact that, from any initial distribution and transition function, the Kolmogorov extension theorem gives the desired Markov process. The finite-dimensional distributions of the process are expressed by the initial distribution and transition function.

Remark 2.15 Let $\rho\in ID$ and $c\in\mathbb{R}$. Let $P_t(x,B)$ be the temporally homogeneous transition function satisfying (2.20). Then $\rho$ and $c$ are determined by $P_t(x,B)$. Indeed, let $P_t'(x,B)$ be defined by
$$\int e^{i\langle z,y\rangle}P_t'(x,dy)=\exp\Big(i\langle e^{-c't}x,z\rangle+\int_0^t\psi_{\rho'}(e^{-c's}z)\,ds\Big) \tag{2.22}$$
with some $\rho'\in ID$ and $c'\in\mathbb{R}$. Suppose $P_t(x,B)=P_t'(x,B)$ for all $t,x,B$. Then
$$i\langle e^{-ct}x,z\rangle+\int_0^t\psi_\rho(e^{-cs}z)\,ds=i\langle e^{-c't}x,z\rangle+\int_0^t\psi_{\rho'}(e^{-c's}z)\,ds \tag{2.23}$$
for all $z$. Differentiation gives
$$-ice^{-ct}\langle x,z\rangle+\psi_\rho(e^{-ct}z)=-ic'e^{-c't}\langle x,z\rangle+\psi_{\rho'}(e^{-c't}z).$$
Letting $t=0$, we have
$$-ic\langle x,z\rangle+\psi_\rho(z)=-ic'\langle x,z\rangle+\psi_{\rho'}(z). \tag{2.24}$$
Hence, letting $x=0$, we have $\psi_\rho=\psi_{\rho'}$, that is, $\rho=\rho'$. Hence $c=c'$.

Proposition 2.16 Suppose that $\{X_t\}$ is a wide-sense OU type process generated by $\rho\in ID$ and $c\in\mathbb{R}$ and, at the same time, by $\rho'\in ID$ and $c'\in\mathbb{R}$. Let $\eta=\mathcal{L}(X_0)$.
(i) If at least one of $\rho$ and $\eta$ is non-trivial, then $c=c'$ and $\rho=\rho'$.
(ii) Assume that $\rho=\delta_\gamma$ and $\eta=\delta_{\gamma_\eta}$ for some $\gamma,\gamma_\eta\in\mathbb{R}^d$. Then $\rho'=\delta_{\gamma'}$ for some $\gamma'$ and the following are true:
(1) if $c\ne 0$ and $\gamma/c\ne\gamma_\eta$, then $c'=c$ and $\gamma'=\gamma$;
(2) if $c\ne 0$ and $\gamma/c=\gamma_\eta$, then either $c'\ne 0$, $\gamma'/c'=\gamma_\eta$, or $c'=0$, $\gamma'=0$;
(3) if $c=0$ and $c'\ne 0$, then $\gamma=0$ and $\gamma'/c'=\gamma_\eta$;
(4) if $c=0$ and $c'=0$, then $\gamma=\gamma'$.
We have $c\ne c'$ or $\gamma\ne\gamma'$ if and only if $\{c,\gamma,c',\gamma'\}$ satisfies one of the following conditions:
(5) $c\ne c'$, $c\ne 0$, $c'\ne 0$, $\gamma/c=\gamma'/c'=\gamma_\eta$;
(6) $c\ne 0$, $c'=0$, $\gamma/c=\gamma_\eta$, $\gamma'=0$;
(7) $c=0$, $c'\ne 0$, $\gamma=0$, $\gamma'/c'=\gamma_\eta$.

Proof The process $\{X_t\}$ is a temporally homogeneous Markov process with transition functions $P_t(x,B)$ and $P_t'(x,B)$ of (2.20) and (2.22).
(i) Assume that at least one of $\rho$ and $\eta$ is non-trivial. Let $\eta_1=\mathcal{L}(X_1)$. Then $\eta_1$ is non-trivial. We have $P[X_1\in B_1,\,X_{1+t}\in B]=\int_{B_1}\eta_1(dx)P_t(x,B)=\int_{B_1}\eta_1(dx)P_t'(x,B)$ for all $B_1$. Hence, for fixed $B$ and $t$, $P_t(x,B)=P_t'(x,B)$ for $\eta_1$-a.e. $x$. Since $\mathcal{B}(\mathbb{R}^d)$ is countably generated, we have, for fixed $t$, $P_t(x,dy)=P_t'(x,dy)$ for $\eta_1$-a.e. $x$. Thus (2.23) holds for all $z$. It follows from the continuity in $t$ that there is $G\in\mathcal{B}(\mathbb{R}^d)$ with $\eta_1(G)=1$ such that (2.23) holds for all $x\in G$, $t$, and $z$. Hence we have (2.24) for all $x\in G$ and $z$. That is, $\psi_{\rho'}(z)-\psi_\rho(z)=i(c'-c)\langle x,z\rangle$ for all $x\in G$ and $z$. Since $G$ is not a singleton, there are distinct points $x_1,x_2$ in $G$ such that $i(c'-c)\langle x_1,z\rangle=i(c'-c)\langle x_2,z\rangle$ for all $z$. It follows that $c'=c$. Then we obtain $\rho'=\rho$.
(ii) Since $\rho=\delta_\gamma$, $\int_0^t e^{-cu}\,dZ_u^{(\rho)}$ equals $(1-e^{-ct})\gamma/c$ if $c\ne 0$ and equals $t\gamma$ if $c=0$. Since $\eta=\delta_{\gamma_\eta}$ and $P_t(\gamma_\eta,B)=P\big[e^{-ct}\gamma_\eta+\int_0^t e^{-cu}\,dZ_u^{(\rho)}\in B\big]$, $\{X_t\}$ is a non-random process,
$$X_t=\begin{cases}e^{-ct}\gamma_\eta+(1-e^{-ct})\gamma/c & \text{if }c\ne 0,\\ \gamma_\eta+t\gamma & \text{if }c=0,\end{cases}\qquad
\frac{dX_t}{dt}=\begin{cases}ce^{-ct}(\gamma/c-\gamma_\eta) & \text{if }c\ne 0,\\ \gamma & \text{if }c=0.\end{cases}$$
At the same time, these expressions of $X_t$ and $dX_t/dt$ hold with $\gamma$ and $c$ replaced by $\gamma'$ and $c'$. All assertions follow from comparison of the two expressions. $\square$

Lévy processes do not have limit distributions as $t\to\infty$ except in the case of the zero process. But, in the case of OU type processes, a drift force toward the origin of magnitude proportional to the distance from the origin is at work, so they can be conjectured to have limit distributions. The conjecture is true only if they do not have too many big jumps.

Theorem 2.17 Let $c>0$ be fixed.
(i) Let $\{Z_t\}=\{Z_t^{(\rho)}\}$ be a Lévy process on $\mathbb{R}^d$ with generating triplet $(A_\rho,\nu_\rho,\gamma_\rho)$, and let $\psi_\rho=\log\widehat\rho$. Let $\{X_t\}$ be an OU type process generated by $\{Z_t\}$ and $c$. Assume that


$$\int_{|x|>2}\log|x|\,\nu_\rho(dx)<\infty. \tag{2.25}$$
Then
$$\mathcal{L}(X_t)\to\mu\qquad\text{as }t\to\infty \tag{2.26}$$
for some $\mu\in\mathcal{P}$, and this $\mu$ does not depend on $\mathcal{L}(X_0)$. Moreover,
$$\int_0^\infty\big|\psi_\rho(e^{-cs}z)\big|\,ds<\infty, \tag{2.27}$$
$$\widehat\mu(z)=\exp\int_0^\infty\psi_\rho(e^{-cs}z)\,ds, \tag{2.28}$$
and $\mu$ belongs to $L_0(\mathbb{R}^d)$. The generating triplet $(A,\nu,\gamma)$ of $\mu$ is as follows:
$$A=(2c)^{-1}A_\rho, \tag{2.29}$$
$$\nu(B)=\int_{\mathbb{R}^d}\nu_\rho(dx)\int_0^\infty 1_B(e^{-cs}x)\,ds,\qquad B\in\mathcal{B}(\mathbb{R}^d), \tag{2.30}$$
$$\gamma=c^{-1}\gamma_\rho+c^{-1}\int_{|x|>1}\frac{x}{|x|}\,\nu_\rho(dx). \tag{2.31}$$
(ii) For any $\mu\in L_0(\mathbb{R}^d)$, there exists a unique $\rho\in ID$ with generating triplet $(A_\rho,\nu_\rho,\gamma_\rho)$ satisfying (2.25) such that $\mu$ is the limit distribution (2.26) of the OU type process $\{X_t\}$ generated by $\rho$ and $c$ with arbitrary $\mathcal{L}(X_0)$. Using $\lambda$ and $k_\xi(r)$ in Theorem 1.34 and Remark 1.35 for the Lévy measure $\nu$ of $\mu$, we have
$$\nu_\rho(B)=-c\int_S\lambda(d\xi)\int_0^\infty 1_B(r\xi)\,dk_\xi(r). \tag{2.32}$$
(iii) In the set-up of (i), assume
$$\int_{|x|>2}\log|x|\,\nu_\rho(dx)=\infty \tag{2.33}$$
instead of (2.25). Then $\mathcal{L}(X_t)$ does not tend to any distribution as $t\to\infty$ and, moreover, for any $a>0$,
$$\sup_{x,y}P_t(x,D_a(y))\to 0\qquad\text{as }t\to\infty, \tag{2.34}$$
where $D_a(y)=\{z\colon|z-y|\le a\}$.
(Clearly, (2.27), (2.28), (2.30) are rewritten to
$$\int_0^\infty\big|\psi_\rho(e^{-s}z)\big|\,ds<\infty, \tag{2.35}$$
$$\widehat\mu(z)=\exp\Big(c^{-1}\int_0^\infty\psi_\rho(e^{-s}z)\,ds\Big), \tag{2.36}$$
$$\nu(B)=c^{-1}\int_{\mathbb{R}^d}\nu_\rho(dx)\int_0^\infty 1_B(e^{-s}x)\,ds,\qquad B\in\mathcal{B}(\mathbb{R}^d), \tag{2.37}$$
respectively. Sometimes we will use this form.)

Proof (i) From (2.14), the characteristic function of $X_t$ is
$$Ee^{i\langle z,X_t\rangle}=Ee^{i\langle e^{-ct}z,X_0\rangle}\exp\int_0^t\psi_\rho(e^{-cs}z)\,ds. \tag{2.38}$$

Let $g(z,x)=e^{i\langle z,x\rangle}-1-i\langle z,x\rangle 1_{\{|x|\le 1\}}(x)$ as in (1.19). Then
$$g(e^{-cs}z,x)=g(z,e^{-cs}x)+i\langle z,e^{-cs}x\rangle 1_{\{1<|x|\le e^{cs}\}}(x). \tag{2.39}$$
Hence
$$\int_0^t\psi_\rho(e^{-cs}z)\,ds=-\tfrac12\langle z,A_t z\rangle+\int_{\mathbb{R}^d}g(z,x)\,\widetilde\nu_t(dx)+i\langle\widetilde\gamma_t,z\rangle, \tag{2.40}$$
where
$$A_t=\Big(\int_0^t e^{-2cs}\,ds\Big)A_\rho, \tag{2.41}$$
$$\widetilde\nu_t(B)=\int_{\mathbb{R}^d}\nu_\rho(dx)\int_0^t 1_B(e^{-cs}x)\,ds,\qquad B\in\mathcal{B}(\mathbb{R}^d), \tag{2.42}$$
$$\widetilde\gamma_t=c^{-1}(1-e^{-ct})\gamma_\rho+\int_{|x|>1}\nu_\rho(dx)\int_0^t e^{-cs}x\,1_{\{|e^{-cs}x|\le 1\}}\,ds. \tag{2.43}$$
Observe that, as $t\to\infty$, $A_t$ tends to $A$ of (2.29) and $\widetilde\nu_t(B)$ increases to $\nu(B)$ of (2.30). We have $\int(|x|^2\wedge 1)\,\nu(dx)<\infty$, because
$$\int_{|x|\le 1}|x|^2\,\nu(dx)=\int_{\mathbb{R}^d}\nu_\rho(dx)\int_0^\infty\big|e^{-cs}x\big|^2 1_{\{|e^{-cs}x|\le 1\}}\,ds=(2c)^{-1}\int_{\mathbb{R}^d}\big(|x|^2\wedge 1\big)\,\nu_\rho(dx),$$
$$\int_{|x|>1}\nu(dx)=\int_{\mathbb{R}^d}\nu_\rho(dx)\int_0^\infty 1_{\{|e^{-cs}x|>1\}}\,ds=c^{-1}\int_{|x|>1}\log|x|\,\nu_\rho(dx).$$


Since $\widetilde\nu_t$ is absolutely continuous with respect to $\nu$ with $d\widetilde\nu_t/d\nu$ increasing to 1 as $t\to\infty$ and since $|g(z,x)|\le\tfrac12|z|^2|x|^2 1_{\{|x|\le 1\}}(x)+2\cdot 1_{\{|x|>1\}}(x)$, we see that
$$\int_{\mathbb{R}^d}g(z,x)\,\widetilde\nu_t(dx)\to\int_{\mathbb{R}^d}g(z,x)\,\nu(dx)\qquad\text{as }t\to\infty.$$
Moreover, since
$$\int_{|x|>1}\nu_\rho(dx)\int_0^\infty e^{-cs}|x|\,1_{\{|x|\le e^{cs}\}}\,ds=\int_{|x|>1}|x|\,\nu_\rho(dx)\int_{c^{-1}\log|x|}^\infty e^{-cs}\,ds=c^{-1}\int_{|x|>1}\nu_\rho(dx),$$
the dominated convergence theorem gives
$$\widetilde\gamma_t\to c^{-1}\gamma_\rho+\int_{|x|>1}\nu_\rho(dx)\int_0^\infty e^{-cs}x\,1_{\{|x|\le e^{cs}\}}\,ds=c^{-1}\gamma_\rho+c^{-1}\int_{|x|>1}\frac{x}{|x|}\,\nu_\rho(dx),$$
which is $\gamma$ of (2.31). Let $\mu$ denote the infinitely divisible distribution with triplet $(A,\nu,\gamma)$. Then we obtain $\exp\int_0^t\psi_\rho(e^{-cs}z)\,ds\to\widehat\mu(z)$ as $t\to\infty$. That is, $\mathcal{L}(X_t)\to\mu$. To show (2.27), notice that
$$\big|\psi_\rho(e^{-cs}z)\big|\le\tfrac12 e^{-2cs}\langle z,A_\rho z\rangle+e^{-cs}|\gamma_\rho||z|+\tfrac12|z|^2\int_{|x|\le e^{cs}}\big|e^{-cs}x\big|^2\,\nu_\rho(dx)$$
$$+2\int_{|x|>e^{cs}}\nu_\rho(dx)+|z|\int_{1<|x|\le e^{cs}}\big|e^{-cs}x\big|\,\nu_\rho(dx),$$
$$\int_0^\infty\big|\psi_\rho(e^{-cs}z)\big|\,ds\le(4c)^{-1}\langle z,A_\rho z\rangle+c^{-1}|\gamma_\rho||z|+\tfrac12|z|^2\int_{|x|\le 1}|x|^2\,\nu(dx)$$
$$+2\int_{|x|>1}\nu(dx)+c^{-1}|z|\int_{|x|>1}\nu_\rho(dx).$$
In order to show that $\mu\in L_0$, observe that, for $b>1$,
$$\widehat\mu(b^{-1}z)=\exp\int_0^\infty\psi_\rho(e^{-cs}b^{-1}z)\,ds=\exp\int_{(\log b)/c}^\infty\psi_\rho(e^{-cs}z)\,ds.$$


We can write
$$\frac{\widehat\mu(z)}{\widehat\mu(b^{-1}z)}=\exp\int_0^t\psi_\rho(e^{-cs}z)\,ds\qquad\text{with }t=(\log b)/c,$$
which is the characteristic function of $P_t(0,dy)$ in Proposition 2.13. Hence $\mu\in L_0$.

(ii) Let $\mu\in L_0$ with generating triplet $(A,\nu,\gamma)$. Then Theorem 1.34 and Remark 1.35 say that
$$\nu(B)=\int_S\lambda(d\xi)\int_0^\infty 1_B(r\xi)\,\frac{k_\xi(r)}{r}\,dr,\qquad B\in\mathcal{B}(\mathbb{R}^d), \tag{2.44}$$
where $\lambda$ is a probability measure on $S$ and $k_\xi(r)$ is nonnegative, right continuous, decreasing in $r\in(0,\infty)$, and measurable in $\xi\in S$. Define a measure $\nu_\rho$ by (2.32). To prove that $\nu_\rho$ is the Lévy measure of some $\rho\in ID$ satisfying (2.25), we will show that
$$\int_{|x|\le 2}|x|^2\,\nu_\rho(dx)+\int_{|x|>2}\log|x|\,\nu_\rho(dx)<\infty. \tag{2.45}$$
Let
$$l(u)=\int_0^u(r^2\wedge 1)\,\frac{dr}{r}=\begin{cases}(1/2)u^2, & 0\le u\le 1,\\ 1/2+\log u, & u>1.\end{cases}$$
Below we use the following fact (see [93] Lemma 17.6). In general, if $l(r)$ and $k(r)$ are nonnegative right continuous functions on $(0,\infty)$ such that $k(r)$ is decreasing with $k(\infty)=0$ and $l(r)$ is increasing with $l(0+)=0$, then
$$\int_{(0,\infty)}l(r)\,dk(r)=-\int_{(0,\infty)}k(r)\,dl(r), \tag{2.46}$$
including the case of $\infty$. Now it follows from the definition of $l$ and $\nu_\rho$ that
$$\int_{\mathbb{R}^d}l(|x|)\,\nu_\rho(dx)=-c\int_S\lambda(d\xi)\int_0^\infty l(r)\,dk_\xi(r)=c\int_S\lambda(d\xi)\int_0^\infty k_\xi(r)\,dl(r)$$
$$=c\int_S\lambda(d\xi)\int_0^\infty(r^2\wedge 1)\,\frac{k_\xi(r)}{r}\,dr=c\int_{\mathbb{R}^d}\big(|x|^2\wedge 1\big)\,\nu(dx)<\infty,$$


on the one hand. We have
$$\int_{\mathbb{R}^d}l(|x|)\,\nu_\rho(dx)=\int_{|x|\le 1}\tfrac12|x|^2\,\nu_\rho(dx)+\int_{|x|>1}\Big(\tfrac12+\log|x|\Big)\,\nu_\rho(dx)$$
$$\ge\tfrac18\int_{|x|\le 2}|x|^2\,\nu_\rho(dx)+\int_{|x|>2}\log|x|\,\nu_\rho(dx)$$
on the other hand. Therefore (2.45) follows.

Now, for $B\in\mathcal{B}(\mathbb{R}^d)$, apply (2.46) to (2.44). Then
$$\nu(B)=-\int_S\lambda(d\xi)\int_0^\infty dk_\xi(r)\int_0^r 1_B(u\xi)\,\frac{du}{u}$$
$$=-\int_S\lambda(d\xi)\int_0^\infty dk_\xi(r)\int_0^\infty 1_B(e^{-s}r\xi)\,ds=c^{-1}\int_{\mathbb{R}^d}\nu_\rho(dy)\int_0^\infty 1_B(e^{-s}y)\,ds,$$
including the case of $\infty$. That is, (2.30) holds.

Next, define $A_\rho$ and $\gamma_\rho$ by (2.29) and (2.31), respectively. Then the OU type process generated by $\rho$ with generating triplet $(A_\rho,\nu_\rho,\gamma_\rho)$ and $c$ has $\mu$ as limit distribution. This proves (2.26). We next show the uniqueness of the generating triplet. Suppose that two processes of OU type with common $c$ have the limit distribution $\mu$. Let $\{Z_t^1\}$ and $\{Z_t^2\}$ be their background driving Lévy processes with $\rho_j=\mathcal{L}(Z_1^j)$ and $\psi_j=\log\widehat\rho_j$, $j=1,2$. By (2.28) we have $\int_0^\infty\psi_1(e^{-cs}z)\,ds=\int_0^\infty\psi_2(e^{-cs}z)\,ds$. Replacing $z$ by $e^{-ct}z$, we obtain $\int_t^\infty\psi_1(e^{-cs}z)\,ds=\int_t^\infty\psi_2(e^{-cs}z)\,ds$. Hence $\int_0^t\psi_1(e^{-cs}z)\,ds=\int_0^t\psi_2(e^{-cs}z)\,ds$. Differentiation at $t=0$ leads to $\psi_1(z)=\psi_2(z)$.

(iii) We assume (2.33). Let $\widetilde\nu_t$ be the Lévy measure of $P_t(x,dy)$; $\widetilde\nu_t$ does not depend on $x$. Then Proposition 2.13 and (2.42) give
$$\int_{|x|>a}\widetilde\nu_t(dx)=\int_{|x|>a}\Big(t\wedge\frac1c\log\frac{|x|}{a}\Big)\,\nu_\rho(dx)\to\infty\qquad\text{as }t\to\infty \tag{2.47}$$
for any $a>0$. It follows that, for any $x\in\mathbb{R}^d$, $P_t(x,dy)$ does not tend to a probability measure as $t\to\infty$. Indeed, if on the contrary $P_t(x_0,dy)$ tends to some probability measure $\mu$ for some $x_0$, then $\mu$ is infinitely divisible (Proposition 1.5) and its Lévy measure $\nu$ satisfies, for any bounded continuous function $f$ vanishing on a neighbourhood of 0, $\int f(x)\,\widetilde\nu_t(dx)\to\int f(x)\,\nu(dx)$ (see [93] Theorem 8.7), which contradicts (2.47). Since (2.20) and (2.38) give
$$Ee^{i\langle z,X_t\rangle}=Ee^{i\langle z,e^{-ct}X_0\rangle}\int e^{i\langle z,y\rangle}P_t(0,dy),$$


it follows that $\mathcal{L}(X_t)$ does not tend to any probability measure as $t\to\infty$. We use (2.47) and Lemma 2.18 given below to obtain
$$P_t(x,D_a(y))\le C_d\Big(\int_{|u|>a/\pi}\widetilde\nu_t(du)\Big)^{-1/2}\to 0\qquad\text{as }t\to\infty,$$
for any $a$, $x$, and $y$, with a constant $C_d$ depending only on $d$. Thus we get the assertion (2.34). This finishes the proof of (iii), provided that Lemma 2.18 is true. $\square$

The following lemma is an interesting estimate of $\mu(B)$ for $\mu\in ID$ and a bounded Borel set $B$ by the tail of the Lévy measure $\nu$ of $\mu$. It is written in Hengartner and Theodorescu [33] (1973) for $d=1$ and called LeCam's estimate; it is extended to general $d$ by Sato and Yamazato [110] (1984).

Lemma 2.18 Let $I(x,a)=[x_1-a,x_1+a]\times\dots\times[x_d-a,x_d+a]$, a cube in $\mathbb{R}^d$ with centre $x=(x_j)_{1\le j\le d}$. Let $\mu\in ID(\mathbb{R}^d)$ with Lévy measure $\nu$. Then
$$\mu(I(x,a))\le C_d\Big(\int_{|y|>a/\pi}\nu(dy)\Big)^{-1/2}, \tag{2.48}$$
where $C_d$ is a constant which depends only on $d$.

Proof First we show that, for any $\mu\in\mathcal{P}(\mathbb{R}^d)$, $x\in\mathbb{R}^d$, $a>0$, and $b>0$ with $b\le\pi/a$,
$$\mu(I(x,a))\le\Big(\frac\pi2\Big)^{2d}b^{-d}\int_{I(0,b)}|\widehat\mu(z)|\,dz. \tag{2.49}$$
Let $f(u)=\Big(\dfrac{\sin(u/2)}{u/2}\Big)^2$ and $h(v)=(1-|v|)1_{\{|v|\le 1\}}(v)$. Then
$$f(u)=\int_{-\infty}^{\infty}e^{iuv}h(v)\,dv,\qquad h(v)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-iuv}f(u)\,du.$$
For $x,z\in\mathbb{R}^d$, let $\widetilde f(x)=\prod_{j=1}^d f(x_j)$ and $\widetilde h(z)=\prod_{j=1}^d h(z_j)$. Then, for every $x\in\mathbb{R}^d$ and $b>0$,
$$\int\widetilde f(b(y-x))\,\mu(dy)=\int\mu(dy)\int e^{i\langle b(y-x),z\rangle}\prod_{j=1}^d h(z_j)\,dz=b^{-d}\int e^{-i\langle x,z\rangle}\widehat\mu(z)\,\widetilde h(b^{-1}z)\,dz.$$
Since $f(u)\ge(2/\pi)^2$ for $|u|\le\pi$, it follows that
$$b^{-d}\int_{I(0,b)}|\widehat\mu(z)|\,dz\ge\int\widetilde f(b(y-x))\,\mu(dy)\ge\int_{I(x,a)}\widetilde f(b(y-x))\,\mu(dy)\ge\Big(\frac2\pi\Big)^{2d}\mu(I(x,a))$$
if $ab\le\pi$, that is, (2.49).

Now let $\mu\in ID$ with Lévy measure $\nu$. We claim that, for any $b>0$,
$$b^{-d}\int_{I(0,b)}|\widehat\mu(z)|\,dz\le C_d'\Big(\int_{|y|>1/b}\nu(dy)\Big)^{-1/2}, \tag{2.50}$$
where $C_d'$ is a constant which depends only on $d$. We have
$$|\widehat\mu(z)|\le\exp\Big(\int\mathrm{Re}\,g(z,y)\,\nu(dy)\Big)\le\exp\Big(-\int_{|y|>1/b}\big(1-\cos\langle z,y\rangle\big)\,\nu(dy)\Big).$$
Let $V=\int_{|y|>1/b}\nu(dy)$. If $V=0$, then (2.50) is trivial. Suppose that $V>0$, and let $\widetilde\nu(dy)=V^{-1}1_{\{|y|>1/b\}}(y)\,\nu(dy)$. Since
$$|\widehat\mu(z)|\le\int e^{-V(1-\cos\langle z,y\rangle)}\,\widetilde\nu(dy)$$
by Jensen's inequality for the convex function $e^{-u}$, we have
$$\int_{I(0,b)}|\widehat\mu(z)|\,dz\le\int_{|y|>1/b}F(y)\,\widetilde\nu(dy)\qquad\text{with }F(y)=\int_{|z|\le\sqrt d\,b}e^{-V(1-\cos\langle z,y\rangle)}\,dz.$$
We fix $y\ne 0$ and consider an orthogonal transformation that carries $y/|y|$ to $e_1=(\delta_{1j})_{1\le j\le d}$. Then
$$F(y)=\int_{|z|\le\sqrt d\,b}e^{-V(1-\cos(z_1|y|))}\,dz.$$
Let $E_k=\{z\in\mathbb{R}^d\colon|z|\le\sqrt d\,b\text{ and }2\pi k/|y|<z_1\le 2\pi(k+1)/|y|\}$ and $n=\big[\sqrt d\,b|y|/2\pi\big]$ with brackets denoting the integer part. Then
$$F(y)=2\sum_{k=0}^{n}\int_{E_k}e^{-V(1-\cos(z_1|y|))}\,dz\le 2(n+1)\int_{E'}dz_2\cdots dz_d\int_0^{2\pi/|y|}e^{-V(1-\cos(z_1|y|))}\,dz_1$$
$$=4C_d''\,b^{d-1}(n+1)|y|^{-1}\int_0^\pi e^{-V(1-\cos u)}\,du,$$
where $E'=\{z'\in\mathbb{R}^{d-1}\colon|z'|\le\sqrt d\,b\}$ and $C_d''$ is the volume of the ball with radius $\sqrt d$ in $\mathbb{R}^{d-1}$. Using $1-\cos u\ge 2\pi^{-2}u^2$ for $0\le u\le\pi$, we have
$$\int_0^\pi e^{-V(1-\cos u)}\,du\le\int_0^\infty e^{-2V\pi^{-2}u^2}\,du=CV^{-1/2}$$
with an absolute constant $C$. Noting that
$$\sup_{|y|>1/b}(n+1)|y|^{-1}=\sup_{|y|>1/b}\big(\big[\sqrt d\,b|y|/2\pi\big]+1\big)|y|^{-1}=bC_d'''$$
with a constant $C_d'''$ depending only on $d$, we obtain (2.50). Taking $b=\pi/a$ and combining (2.49) and (2.50), we get (2.48). $\square$



t

ecs dZs + (e−ct − e−c(t−1) )



t−1

ecs dZs

0

t−1

and the two terms on the right are independent. Hence       iz,Xt −Xt−1    Ee  ≤ E exp iz, e−ct

    t    ecs dZs   = exp ψρ (e−ct+cs z)ds  t−1 t−1    1  1   = exp ψρ (e−cs z)ds  = exp Re ψρ (e−cs z)ds < 1, 0

t

0

if z = z0 . This is a contradiction. It is natural to consider Theorem 2.17 as a result on transformation of ρ to μ. Thus we give the following two definitions and one remark. Definition 2.20 The class I Dlog = I Dlog (Rd ) is the collection of ρ ∈ I D(Rd ) such that its Lévy measure νρ satisfies |x|>2 log |x|νρ (dx) < ∞ or, equivalently,  |x|>2 log |x|ρ(dx) < ∞; see [93] Theorem 25.3 and Proposition 25.4.

48

2 Classes Lm and Ornstein–Uhlenbeck Type Processes

Definition 2.21 Let c > 0. For ρ ∈ I D(Rd ) define a mapping (c) (ρ) = μ if the OU type process {Xt } generated by ρ and c, starting from an arbitrary initial distribution, satisfies L(Xt ) → μ as t → ∞. Let D((c) ) and R((c) ), respectively, denote the domain (of definition) and the range of (c) . That is, R((c) ) = {(c) (ρ) : ρ ∈ D((c) )}. Remark 2.22 Theorem 2.17 says that D((c) ) = I Dlog , R((c) ) = L0 , (c) is one-to-one, and (c) (ρ) does not depend on X0 . The mapping (c) depends on c, but neither D((c) ) nor R((c) ) depends on c. In general we write ψμ = log  μ for μ ∈ I D. Proposition 2.23 Let ρ, ρ  ∈ I D and c, c > 0. ∞ (i) If μ = (c) (ρ), then ψμ (z) = 0 ψρ (e−cs z)ds. (ii) (c) (ρ ∗ ρ  ) = (c) (ρ) ∗ (c) (ρ  ). (iii) (c) (ρ t∗ ) = ((c) (ρ))t∗ for t ≥ 0.  (iv) (c ) (ρ) = ((c) (ρ))(c/c )∗ .  (v) If (c) (ρ) = (c ) (ρ  ), then ρ  = ρ (c /c)∗ . (vi) If ρ = δγρ , then (c) (ρ) = δγρ /c . Proof (i) This is shown in Theorem 2.17.   ), and μ =  (ρ ∗ ρ  ). Then ψ  (z) = (ii) Let (c) (ρ), μ = (c) μ  ∞(c) (ρ −cs  ∞ μ = −cs −cs z))ds = ψ + ψ  (z).  (e  (e ψ z)ds = (ψ (e z) + ψ ρ μ ρ∗ρ ρ μ 0 0 ∞ (iii) Let μ = (c) (ρ) and μt = (c) (ρ t∗ ). Then ψμt (z) = 0 ψρ t∗ (e−cs z)ds = ∞ t 0 ψρ (e−cs z)ds. ∞ (iv) Let μc = (c) (ρ) and μc = (c ) (ρ). Then ψμc (z) = 0 ψρ (e−cs z)ds = ∞  ∞ (1/c) 0 ψρ (e−s z)ds and ψμc (z) = (1/c ) 0 ψρ (e−s z)ds. Hence  ψμc (z) = (c/c )ψμc (z).   (v) We have (c ) (ρ (c /c)∗ ) = ((c ) (ρ))(c /c)∗ = (c) (ρ), using (iv). Hence  (c ) (ρ (c /c)∗ ) = (c ) (ρ  ). Then use the  ∞one-to-one property. (vi) If ρ = δγρ , then ψρ (z) = iγρ , z and 0 ψρ (e−cs z)ds = iγρ , z/c.  Another equivalent definition of (c) is by the distribution of an improper stochastic integral. t (ρ) Definition 2.24 If the limit in probability of t0 f (s)dZs as t → ∞ exists, then ∞  ∞ (ρ) (ρ) the limit is denoted by t0 f (s)dZs and we say that t0 f (s)dZs is definable. The limit is called the improper stochastic integral of f . 
Proposition 2.25 Let f (s) be measurable and locally bounded on [0, ∞) (that is, bounded on every bounded interval [0, t]). Then the improper stochastic integral ∞ t (ρ) is definable if and only if 0 ψρ (f (s)z)ds is convergent in C as 0 f (s)dZs t → ∞ for each z ∈ Rd . In this case Ee

iz,X

= exp

 lim

t→∞ 0

t

ψρ (f (s)z)ds

 for X = 0



f (s)dZs(ρ) .

(2.51)

2.2 Ornstein–Uhlenbeck Type Processes and Limit Distributions

49

Proof The “only if” part follows from Propositions 2.2 and 2.4, since convergence in probability implies convergence in distribution. To see the “if” part, notice that   E exp i z,

t1 t0

 f (s)dZs(ρ)

 = exp

t1

ψρ (f (s)z)ds → 1,

t0 , t1 → ∞.

t0



  t (ρ)  It follows that, for every ε > 0, P  t01 f (s)dZs  > ε → 0 as t0 , t1 → ∞. t (ρ)  Hence 0 f (s)dZs converges in probability as t → ∞. Definition 2.26 In the set-up of Proposition 2.25, the improper stochastic integral mapping f is defined as follows: 



f (ρ) = L 0

f (s)dZs(ρ)

,

  ∞ (ρ) D(f ) = ρ ∈ I D : 0 f (s)dZs is definable . Lemma 2.27 Let {Yt : t ≥ 0} be an additive process in law on Rd . Then, Yt → Y∞ in probability for some Y∞ as t → ∞ if and only if L(Yt ) → μ for some μ ∈ P as t → ∞. In this case, L(Y∞ ) = μ. Proof The “only if” part is well-known. Let us prove the “if” part. Suppose that L(Yt ) → μ. Let ρs,t = L(Yt − Ys ) for 0 ≤ s ≤ t < ∞. Then ρ0,t = ρ0,s ∗ ρs,t , ρs,t ∈ I D, and μ ∈ I D. Their characteristic functions have no zero. Hence ρ s,t (z) = ρ 0,t (z)/ ρ0,s (z) →  μ(z)/ μ(z) = 1 =  δ0 (z) as s, t → ∞. Thus, for any ε > 0, P [|Yt − Ys | > ε] → 0 as s, t → ∞. This implies that Yt is convergent in probability ([19] Exercise 4.2.6).  Proposition 2.28 Fix c > 0. Let (c) be as defined in Definition 2.21. If f (s) = e−cs , then f = (c) , which means that D(f ) = D((c) ) and that f (ρ) = (c) (ρ) for ρ ∈ D(f ). We also have ∞   D((c) ) = ρ ∈ I D : 0 |ψρ (e−cs z)|ds < ∞ for z ∈ C .

(2.52)

t (ρ) Proof Let Yt = 0 e−cs dZs . Then {Yt } is an additive process in law on Rd t and L(Yt ) has characteristic function exp 0 ψρ (e−cs z)ds (Proposition 2.4). Hence L(Yt ) equals Pt (0, dy) (Proposition 2.13). That is, L(Yt ) = L(Xt ) whenever J = 0. Thus we obtain f = (c) for f (s) = e−cs by Theorem 2.17 and Lemma 2.27. To see (2.52), let M be the right-hand side of (2.52). Then D((c) ) = I Dlog ⊂ M from t Theorem 2.17. If ρ ∈ M, then 0 ψρ (e−cs z)ds is convergent as t → ∞ for each z, ∞ and hence 0 f (s)dZs is definable by Proposition 2.25. Therefore D((c) ) ⊃ M. 

50

2 Classes Lm and Ornstein–Uhlenbeck Type Processes

Incidentally, for some f (s) decreasing to 0 as s → ∞, we have ∞   D(f )  ρ ∈ I D : 0 |ψρ (f (s)z)|ds < ∞ for z ∈ C (see Sato [101] (1984)).

2.3 Relations to Classes $L_m$, $S_\alpha$, and $S_\alpha^0$

We clarify the relation of the mapping $\Phi^{(c)}$ in Definition 2.21 with the classes $L_m$, $S$, $S^0$, $S_\alpha$, and $S_\alpha^0$ introduced in Sect. 1.1.

Theorem 2.29 Let $c > 0$.
(i) Let $m \in \{0, 1, \ldots, \infty\}$. Then $\Phi^{(c)}$ maps $L_{m-1} \cap ID_{\log}$ onto $L_m$ one-to-one. Here we understand $L_{-1} = ID$.
(ii) Let $0 < \alpha \le 2$. Then, $\rho \in S_\alpha$ if and only if $\rho \in ID_{\log}$ and $\Phi^{(c)}(\rho) \in S_\alpha$. Further, $\rho \in S_\alpha^0$ if and only if $\rho \in ID_{\log}$ and $\Phi^{(c)}(\rho) \in S_\alpha^0$.
(iii) If $\rho \in S_\alpha$, then $\rho \in ID_{\log}$ and
$$\Phi^{(c)}(\rho) = \rho^{1/(\alpha c)*} * \delta_\gamma \quad \text{for some } \gamma \in \mathbb{R}^d. \tag{2.53}$$
Conversely, if $\rho$ is a non-trivial distribution in $ID_{\log}$ and
$$\Phi^{(c)}(\rho) = \rho^{t*} * \delta_\gamma \quad \text{for some } t > 0 \text{ and } \gamma \in \mathbb{R}^d, \tag{2.54}$$
then $1/(tc) \le 2$ and $\rho \in S_{1/(tc)}$. The above two sentences remain true with $S_\alpha$, $S_{1/(tc)}$, and $\gamma$ replaced by $S_\alpha^0$, $S_{1/(tc)}^0$, and $0$, respectively.

Proof (i) Let $\rho \in ID_{\log}$ and $\Phi^{(c)}(\rho) = \mu$. Then $\mu \in L_0$ and, for each $b > 1$, there is $\eta_b \in ID$ such that $\widehat{\mu}(z) = \widehat{\mu}(b^{-1}z)\,\widehat{\eta}_b(z)$. This $\eta_b$ satisfies
$$\widehat{\eta}_b(z) = \exp\int_0^{(1/c)\log b} \psi_\rho(e^{-cs}z)\,ds, \tag{2.55}$$
as is seen at the end of the proof of Theorem 2.17 (i). If $\mu \in L_m$, then $\eta_b \in L_{m-1}$ and
$$\widehat{\eta}_b(z)^{c/\log b} = \exp\left[\frac{c}{\log b}\int_0^{(1/c)\log b}\psi_\rho(e^{-cs}z)\,ds\right] = \exp\int_0^1 \psi_\rho(b^{-u}z)\,du \to \exp\psi_\rho(z) = \widehat{\rho}(z) \quad \text{as } b \downarrow 1,$$


proving that $\rho \in L_{m-1}$ by the use of Proposition 1.18.
Conversely, let $\rho \in L_{m-1} \cap ID_{\log}$. Recall that the right-hand side of (2.55) is the characteristic function of $\int_0^{(1/c)\log b} e^{-cs}\,dZ_s^{(\rho)}$. If $f(s)$ is a step function, then $\mathcal{L}\bigl(\int_0^t f(s)\,dZ_s^{(\rho)}\bigr) \in L_{m-1}$, see (2.4). Then, it follows from Propositions 1.18, 2.2, and 2.4 that $\eta_b \in L_{m-1}$. Therefore $\mu \in L_m$. Now we see that $\Phi^{(c)}$ maps $L_{m-1}$ onto $L_m$. The one-to-one property is known in Theorem 2.17 (ii).

(ii) By Definition 1.23, a distribution $\rho$ in $ID$ is $\alpha$-stable if and only if, for any $t > 0$, there is $\gamma_{\rho,t} \in \mathbb{R}^d$ such that
$$t\psi_\rho(z) = \psi_\rho(t^{1/\alpha}z) + i\langle\gamma_{\rho,t}, z\rangle. \tag{2.56}$$
We note that any stable distribution is in $ID_{\log}$, which follows from (1.40). If $\rho \in S_\alpha$ and $\Phi^{(c)}(\rho) = \mu$, then, by (2.28),
$$\widehat{\mu}(z)^t = \exp\left[t\int_0^\infty \psi_\rho(e^{-cs}z)\,ds\right] = \exp\int_0^\infty \left[\psi_\rho(t^{1/\alpha}e^{-cs}z) + i\langle\gamma_{0,t}, e^{-cs}z\rangle\right]ds = \widehat{\mu}(t^{1/\alpha}z)\,e^{i\langle\gamma_t, z\rangle}$$
with $\gamma_t = (1/c)\gamma_{0,t}$, which shows that $\mu \in S_\alpha$. Conversely, assume that $\mu \in S_\alpha$, $\rho \in ID_{\log}$, and $\mu = \Phi^{(c)}(\rho)$. Then $\widehat{\mu}(z)^t = \widehat{\mu}(t^{1/\alpha}z)e^{i\langle\gamma_t,z\rangle}$ with some $\gamma_t$, and hence, by (2.28),
$$t\int_0^\infty \psi_\rho(e^{-cs}z)\,ds = \int_0^\infty \psi_\rho(e^{-cs}t^{1/\alpha}z)\,ds + i\langle\gamma_t, z\rangle$$
for all $z \in \mathbb{R}^d$. Replacing $z$ by $e^{-cu}z$ and making change of variables, we get
$$t\int_u^\infty \psi_\rho(e^{-cs}z)\,ds = \int_u^\infty \psi_\rho(e^{-cs}t^{1/\alpha}z)\,ds + i\langle e^{-cu}\gamma_t, z\rangle.$$
Differentiation in $u$ gives $t\psi_\rho(e^{-cu}z) = \psi_\rho(e^{-cu}t^{1/\alpha}z) + ice^{-cu}\langle\gamma_t, z\rangle$. Letting $u \downarrow 0$, we have $t\psi_\rho(z) = \psi_\rho(t^{1/\alpha}z) + ic\langle\gamma_t, z\rangle$, that is, $\rho \in S_\alpha$. The argument above simultaneously shows that $\rho \in S_\alpha^0$ if and only if $\mu \in S_\alpha^0$.

(iii) Let $\rho \in S_\alpha \subset ID_{\log}$. We will show (2.53). Since (2.56) holds for all $t > 0$, we have
$$\int_0^\infty \psi_\rho(e^{-cs}z)\,ds = \int_0^\infty \left[e^{-\alpha cs}\psi_\rho(z) - i\langle\gamma_{0,\exp(-\alpha cs)}, z\rangle\right]ds.$$
It follows that
$$\int_0^\infty \psi_\rho(e^{-cs}z)\,ds = \frac{1}{\alpha c}\,\psi_\rho(z) + i\langle\gamma, z\rangle$$
with $\gamma = -\lim_{u\to\infty}\int_0^u \gamma_{0,\exp(-\alpha cs)}\,ds$, where the existence of the limit comes from the finiteness of $\int_0^\infty \psi_\rho(e^{-cs}z)\,ds$ and $\int_0^\infty e^{-\alpha cs}\psi_\rho(z)\,ds$. By (2.28) this gives (2.53).
Conversely, suppose that $\rho$ is non-trivial, belongs to $ID_{\log}$, and satisfies (2.54). This means that
$$\int_0^\infty \psi_\rho(e^{-cs}z)\,ds = t\psi_\rho(z) + i\langle\gamma, z\rangle.$$
Let $u \in \mathbb{R}$ and replace $z$ by $e^{-cu}z$ to obtain
$$\int_u^\infty \psi_\rho(e^{-cv}z)\,dv = t\psi_\rho(e^{-cu}z) + ie^{-cu}\langle\gamma, z\rangle.$$
Fix $z$ for a while and denote $f(u) = ice^{-cu}\langle\gamma, z\rangle$ and $g(u) = \psi_\rho(e^{-cu}z)$ for $u \in \mathbb{R}$. Then we see that $g$ is differentiable and $g(u) = -tg'(u) + f(u)$. Hence
$$\frac{d}{du}\left(e^{u/t}g(u)\right) = e^{u/t}g'(u) + \frac{1}{t}e^{u/t}g(u) = \frac{1}{t}e^{u/t}f(u),$$
that is,
$$e^{u/t}g(u) = \int_0^u \frac{1}{t}e^{v/t}f(v)\,dv + g(0) \quad \text{for } u \in \mathbb{R}.$$
Now we have
$$\psi_\rho(e^{-cu}z) = e^{-u/t}\int_0^u \frac{1}{t}e^{v/t}\,ice^{-cv}\langle\gamma, z\rangle\,dv + e^{-u/t}\psi_\rho(z) = i\langle\eta_u, z\rangle + e^{-u/t}\psi_\rho(z)$$
with some $\eta_u \in \mathbb{R}^d$. Thus, for every $s > 0$, there is $\gamma_{0,s}$ such that $s\psi_\rho(z) = \psi_\rho(s^{tc}z) + i\langle\gamma_{0,s}, z\rangle$ for $z \in \mathbb{R}^d$. Since $\rho$ is non-trivial, this shows that $(tc)^{-1} \le 2$ and $\rho$ is $(tc)^{-1}$-stable by Propositions 1.21 and 1.22. The argument above also proves the last assertion in (iii) concerning strict stability. $\square$
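For a concrete check of (2.53) (an added illustration, not from the original text), assume $\rho$ is strictly $\alpha$-stable and rotation invariant with $\psi_\rho(z) = -|z|^\alpha$:

```latex
% Assumption: \psi_\rho(z) = -|z|^\alpha (strictly \alpha-stable, rotation invariant).
\log\widehat{\Phi^{(c)}(\rho)}(z)
  = \int_0^\infty \psi_\rho(e^{-cs}z)\,ds
  = -|z|^\alpha \int_0^\infty e^{-\alpha c s}\,ds
  = \frac{1}{\alpha c}\,\psi_\rho(z),
% that is, \Phi^{(c)}(\rho) = \rho^{1/(\alpha c)*}: formula (2.53) with \gamma = 0,
% in agreement with the strictly stable case of Theorem 2.29 (iii).
```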


Remark 2.30 If $\rho$ is stable and $\mu = \Phi^{(c)}(\rho)$, then there are $a > 0$ and $\gamma \in \mathbb{R}^d$ satisfying $\widehat{\mu}(z) = \widehat{\rho}(az)e^{i\langle\gamma,z\rangle}$. But the converse is not true. See Wolfe [139] (1982a).

Remark 2.31 By introducing a stronger convergence concept in $ID_{\log}$ and using the usual weak convergence in $L_0$, the mapping $\Phi^{(c)}$ and its inverse are continuous. See Sato and Yamazato [110] (1984).

Let us give another formulation of the relation of $\Phi^{(c)}$ with the classes $L_m$. For $m \in \mathbb{N}$, let $(\Phi^{(c)})^m$ be the $m$th iteration of $\Phi^{(c)}$. That is, $(\Phi^{(c)})^1 = \Phi^{(c)}$ and, for $m \ge 2$, $(\Phi^{(c)})^m$ is defined by $(\Phi^{(c)})^m(\rho) = \Phi^{(c)}((\Phi^{(c)})^{m-1}(\rho))$ if $\rho \in D((\Phi^{(c)})^{m-1})$ and $(\Phi^{(c)})^{m-1}(\rho) \in D(\Phi^{(c)})$. The domain and the range of $(\Phi^{(c)})^m$, $D((\Phi^{(c)})^m)$ and $R((\Phi^{(c)})^m)$, are described below.

Theorem 2.32 Fix $c > 0$ and let $m$ be a positive integer. For $\mu$ and $\rho$ in $ID(\mathbb{R}^d)$ we write $\psi = \log\widehat{\mu}$, $\psi_\rho = \log\widehat{\rho}$, $(A, \nu, \gamma)$ for the triplet of $\mu$, and $(A_\rho, \nu_\rho, \gamma_\rho)$ for the triplet of $\rho$. Then,
$$D((\Phi^{(c)})^m) = \left\{\rho \in ID \colon \int_{|x|>2}(\log|x|)^m\,\nu_\rho(dx) < \infty\right\}, \tag{2.57}$$
$$R((\Phi^{(c)})^m) = L_{m-1}. \tag{2.58}$$
If $\rho \in D((\Phi^{(c)})^m)$ and $\mu = (\Phi^{(c)})^m(\rho)$, then
$$\int_0^\infty s^{m-1}\left|\psi_\rho(e^{-cs}z)\right|ds < \infty, \tag{2.59}$$
$$\psi(z) = \int_0^\infty \frac{s^{m-1}}{(m-1)!}\,\psi_\rho(e^{-cs}z)\,ds, \tag{2.60}$$
$$A = (2c)^{-m}A_\rho, \tag{2.61}$$
$$\nu(B) = \int_{\mathbb{R}^d}\nu_\rho(dx)\int_0^\infty \frac{s^{m-1}}{(m-1)!}\,1_B(e^{-cs}x)\,ds, \quad B \in \mathcal{B}(\mathbb{R}^d), \tag{2.62}$$
$$\gamma = c^{-m}\gamma_\rho + c^{-m}\int_{|x|>1}\frac{x}{|x|}\sum_{j=0}^{m-1}\frac{(\log|x|)^j}{j!}\,\nu_\rho(dx), \tag{2.63}$$
and the correspondence of $\mu$ and $\rho$ is one-to-one.

Proof By induction. When $m = 1$, the statements reduce to Theorem 2.17. Let $m \ge 2$. Assume that the assertions are true for $m - 1$ in place of $m$. Let us show the assertions for $m$.


Step 1. We assume $\int_{|x|>2}(\log|x|)^m\,\nu_\rho(dx) < \infty$. Noting that $\rho \in ID_{\log}$, write $\mu_1 = \Phi^{(c)}(\rho)$, $\psi_1 = \log\widehat{\mu}_1$, and $(A_1, \nu_1, \gamma_1)$ for the triplet of $\mu_1$. Then, using (2.30), we get
$$\begin{aligned}
\int_{|x|>2}(\log|x|)^{m-1}\,\nu_1(dx) &= c^{-1}\int_{\mathbb{R}^d}\nu_\rho(dx)\int_0^\infty \left(\log|e^{-s}x|\right)^{m-1} 1_{\{|e^{-s}x|>2\}}\,ds \\
&= c^{-1}\int_{|x|>2}\nu_\rho(dx)\int_0^{\log(|x|/2)}(\log|x| - s)^{m-1}\,ds \\
&= (cm)^{-1}\int_{|x|>2}\left[(\log|x|)^m - (\log 2)^m\right]\nu_\rho(dx) < \infty. \end{aligned} \tag{2.64}$$
It follows that $\mu_1 \in D((\Phi^{(c)})^{m-1})$. Thus $\rho \in D((\Phi^{(c)})^m)$. Let $\mu = (\Phi^{(c)})^m(\rho)$. Since $\mu_1 \in L_0$, repeated application of Theorem 2.29 (i) shows that $\mu \in L_{m-1}$.
Let us show (2.59). Define $\eta$ by
$$\eta(B) = \int_{\mathbb{R}^d}\nu_\rho(dx)\int_0^\infty s^{m-1}\,1_B(e^{-s}x)\,ds.$$
Then
$$\begin{aligned}
\int_{|x|\le 1}|x|^2\,\eta(dx) &= \int_{\mathbb{R}^d}\nu_\rho(dx)\int_0^\infty s^{m-1}\left|e^{-s}x\right|^2 1_{\{|e^{-s}x|\le 1\}}\,ds \\
&= \int_{|x|\le 1}|x|^2\nu_\rho(dx)\int_0^\infty s^{m-1}e^{-2s}\,ds + \int_{|x|>1}|x|^2\nu_\rho(dx)\int_{\log|x|}^\infty s^{m-1}e^{-2s}\,ds,
\end{aligned}$$
which is finite, since
$$\int_{\log|x|}^\infty s^{m-1}e^{-2s}\,ds \sim 2^{-1}|x|^{-2}(\log|x|)^{m-1} \quad \text{as } |x| \to \infty.$$
Moreover
$$\int_{|x|>1}\eta(dx) = \int_{\mathbb{R}^d}\nu_\rho(dx)\int_0^\infty s^{m-1}\,1_{\{|e^{-s}x|>1\}}\,ds = m^{-1}\int_{|x|>1}(\log|x|)^m\,\nu_\rho(dx) < \infty.$$
Hence, writing
$$\psi_\rho(e^{-cs}z) = -\tfrac{1}{2}e^{-2cs}\langle z, A_\rho z\rangle + ie^{-cs}\langle\gamma_\rho, z\rangle + \int_{\mathbb{R}^d} g(z, e^{-cs}x)\,\nu_\rho(dx) + i\int_{\{1<|x|\le e^{cs}\}}\langle z, e^{-cs}x\rangle\,\nu_\rho(dx)$$
and integrating each term against $\frac{s^{m-1}}{(m-1)!}\,ds$ over $(0, \infty)$, we obtain, with the help of the finiteness shown for $\eta$, the bound (2.59) and the formulas (2.60)–(2.63). In particular, the last term of (2.63) comes from
$$c^{-m}\int_{|x|>1}x\,\nu_\rho(dx)\int_{\log|x|}^\infty \frac{s^{m-1}}{(m-1)!}\,e^{-s}\,ds = c^{-m}\int_{|x|>1}\frac{x}{|x|}\sum_{j=0}^{m-1}\frac{(\log|x|)^j}{j!}\,\nu_\rho(dx).$$

Step 2. Suppose that we are given $\rho \in ID$ satisfying $\int_{|x|>2}(\log|x|)^m\,\nu_\rho(dx) = \infty$. We claim that $\rho \notin D((\Phi^{(c)})^m)$. From the definition we have $ID_{\log} = D(\Phi^{(c)}) \supset D((\Phi^{(c)})^2) \supset \cdots$. Hence, if $\rho \notin ID_{\log}$, then $\rho \notin D((\Phi^{(c)})^m)$. So we may and do assume that $\rho \in ID_{\log}$. Let $\mu_1 = \Phi^{(c)}(\rho)$ and $\nu_1$ be the Lévy measure of $\mu_1$. Then the same calculus as in (2.64) shows that $\int_{|x|>2}(\log|x|)^{m-1}\,\nu_1(dx) = \infty$. Hence $\mu_1 \notin D((\Phi^{(c)})^{m-1})$ from the induction hypothesis. It follows that $\rho \notin D((\Phi^{(c)})^m)$. Combined with Step 1, this proves (2.57).

Step 3. Let $\mu$ be an arbitrary member of $L_{m-1}$. We claim that $\mu \in R((\Phi^{(c)})^m)$. Since $L_{m-1} \subset L_0 = R(\Phi^{(c)})$, there is $\mu_{m-1}$ such that $\Phi^{(c)}(\mu_{m-1}) = \mu$. Hence, by virtue of Theorem 2.29 (i), $\mu_{m-1} \in L_{m-2} \cap ID_{\log}$. Then $\mu_{m-1} \in R((\Phi^{(c)})^{m-1})$ from the induction hypothesis. Hence $\mu \in R((\Phi^{(c)})^m)$. Step 1 and this prove (2.58).
We have the one-to-one property in the theorem since $\Phi^{(c)}$ is one-to-one. $\square$

Remark 2.33 The description (2.57) of $D((\Phi^{(c)})^m)$ can be written as
$$D((\Phi^{(c)})^m) = \left\{\rho \in ID \colon \int_{|x|>2}(\log|x|)^m\,\rho(dx) < \infty\right\}. \tag{2.66}$$
See [93] Theorem 25.3 and Proposition 25.4 for a proof. The class $L_{m-1}$ is directly expressed as the range of an improper stochastic integral mapping as follows.


Theorem 2.34 Let $m \in \mathbb{N}$. Fix a constant $c_m > 0$. Let $f_m(s) = e^{-(c_m s)^{1/m}}$ and consider the mapping $\Phi_{f_m}$ in Definition 2.26. Then $D(\Phi_{f_m})$ and $R(\Phi_{f_m})$ do not depend on $c_m$. If $\Phi^{(c)}$ is with parameter $c$ satisfying $c_m = m!\,c^m$, then
$$\Phi_{f_m} = (\Phi^{(c)})^m, \tag{2.67}$$
which means that $D(\Phi_{f_m}) = D((\Phi^{(c)})^m)$ and that $\Phi_{f_m}(\rho) = (\Phi^{(c)})^m(\rho)$ for $\rho \in D(\Phi_{f_m})$.

Proof As before, let $\psi_\rho(z) = (\log\widehat{\rho})(z)$. Then
$$D(\Phi_{f_m}) = \left\{\rho \in ID \colon \int_0^\infty |\psi_\rho(f_m(s)z)|\,ds < \infty \text{ for } z \in \mathbb{R}^d\right\}. \tag{2.68}$$
This is shown in (2.52) for $m = 1$. For general $m \in \mathbb{N}$, Proposition 2.25 says (2.68) with "=" replaced by "⊃". The proof of (2.68) with "=" replaced by "⊂" is given in Theorem 6.3 (iii) of [101] or Proposition 4.3 of [99]. It follows from (2.68) via change of variables in integration that $D(\Phi_{f_m})$ does not depend on the choice of $c_m$.
Now let us show $D(\Phi_{f_m}) = D((\Phi^{(c)})^m)$. Choosing $c_m = m!\,c^m$, we have
$$\int_0^\infty |\psi_\rho(f_m(s)z)|\,ds = \int_0^\infty \frac{u^{m-1}}{(m-1)!}\,|\psi_\rho(e^{-cu}z)|\,du,$$
by change of variables $s = (m!)^{-1}u^m$. Hence $D(\Phi_{f_m}) \supset D((\Phi^{(c)})^m)$ by Theorem 2.32 and (2.68).
Conversely, suppose $\rho \in D(\Phi_{f_m})$. It follows from (2.68) that we can define
$$\psi_j(z) = \int_0^\infty \frac{u^{j-1}}{(j-1)!}\,\psi_\rho(e^{-cu}z)\,du, \quad j = 1, \ldots, m.$$
Then, for $j = 2, \ldots, m$, we see
$$\begin{aligned}
\int_0^\infty |\psi_{j-1}(e^{-cs}z)|\,ds &= \int_0^\infty ds\left|\int_0^\infty \frac{u^{j-2}}{(j-2)!}\,\psi_\rho(e^{-c(u+s)}z)\,du\right| \\
&\le \int_0^\infty ds\int_s^\infty \frac{(v-s)^{j-2}}{(j-2)!}\,|\psi_\rho(e^{-cv}z)|\,dv = \int_0^\infty \frac{v^{j-1}}{(j-1)!}\,|\psi_\rho(e^{-cv}z)|\,dv < \infty, \\
\int_0^\infty \psi_{j-1}(e^{-cs}z)\,ds &= \int_0^\infty \frac{v^{j-1}}{(j-1)!}\,\psi_\rho(e^{-cv}z)\,dv = \psi_j(z).
\end{aligned}$$
It follows from Theorem 2.17 and Proposition 2.28 that there is $\mu_j$ satisfying $(\log\widehat{\mu}_j)(z) = \psi_j(z)$ and that $\mu_{j-1} \in D(\Phi^{(c)})$ and $\Phi^{(c)}(\mu_{j-1}) = \mu_j$ for $j = 1, \ldots, m$. Hence $\rho \in D((\Phi^{(c)})^m)$ and $(\Phi^{(c)})^m(\rho) = \Phi_{f_m}(\rho)$. $\square$
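The change of variables in the proof above can be written out for the smallest nontrivial case $m = 2$ (an added illustration):

```latex
% For m = 2: c_2 = 2!\,c^2 = 2c^2 and f_2(s) = e^{-(2c^2 s)^{1/2}}.
% Putting s = u^2/2 (so ds = u\,du and (2c^2 s)^{1/2} = cu) gives
\int_0^\infty \bigl|\psi_\rho\bigl(e^{-(2c^2 s)^{1/2}}z\bigr)\bigr|\,ds
  = \int_0^\infty u\,\bigl|\psi_\rho(e^{-cu}z)\bigr|\,du ,
% which is the kernel u^{m-1}/(m-1)! = u appearing in (\Phi^{(c)})^2.
```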


Notes

The formula (2.30) of the Lévy measures of selfdecomposable distributions was found by Urbanik [127] (1969), although he did not recognize its probabilistic meaning. The representation of the class of selfdecomposable distributions as the class of limit distributions of OU type processes (Theorem 2.17) was discovered by Wolfe [139, 140] (1982a, 1982b) and, immediately after that, the papers Sato and Yamazato [109, 110] (1983, 1984), Jurek and Vervaat [45] (1983), and Gravereaux [28] (1982) were worked out. Wolfe's paper [139] (1982a) was submitted in October 1979. In their paper [108] (1978) studying fine properties of the density of a general selfdecomposable distribution on R, Sato and Yamazato used an integro-differential equation which showed that such a density is the stationary distribution density of an OU type process. Most of these results were obtained in the form of operator generalization, which will be touched upon in Sect. 5.2.
In the case where {Zt} is an increasing Lévy process on R, the limit theorem in Theorem 2.17 was obtained earlier by Çinlar and Pinsky [20, 21] (1971, 1972) in storage theory. They treated a more general equation than (2.13), where cXs is replaced by a release function r(Xs) > 0. Theorem 2.29 was given by [45] (1983) and [109] (1983) for (i) on Lm and by [139] (1982a) and [45] (1983) for (ii) on S and S0. Theorem 2.32 on Lm was due to Jurek [40] (1983b) in a different formulation. The representation of Lm as the range of an improper stochastic integral in Theorem 2.34 is given in [40] (1983b). Another expression of Lm(R) is made by Graversen and Pedersen [29] (2011).
Any OU type process on Rd having a limit distribution (that is, satisfying (2.25)) is recurrent. But there are recurrent OU type processes without limit distribution. Also there are transient OU type processes.
Indeed, suppose that d = 1 and that {Xt} is an OU type process generated by c > 0 and ρ with Lévy measure ν(dx) = 1{|x|>2}(x) b|x|⁻¹(log|x|)^(−α−1) dx with α > 0, b > 0. Then (2.25) is satisfied if and only if α > 1; if α < 1, then {Xt} is transient; if α = 1 and 2b > c, then {Xt} is transient; if α = 1 and 2b ≤ c, then {Xt} is recurrent ([110] (1984)). A general criterion of recurrence and transience was given by Shiga [114] (1990) for d = 1 and by Sato, Watanabe, and Yamazato [105] (1994) for d ≥ 2.
Study of the improper stochastic integral mapping $\Phi_f$ in Definition 2.26 was developed in Jurek [42] (1988), Maejima and Sato [60] (2003), Sato [96–101] (2004, 2006a, 2006b, 2006c, 2009, 2010), Barndorff-Nielsen, Maejima, and Sato [7] (2006), Barndorff-Nielsen, Rosiński, and Thorbjørnsen [10] (2008), Maejima [59] (2015), and others. One of the results is the continuous-parameter interpolation and extrapolation of the mappings $\Phi_{f_m}$, $m \in \mathbb{N}$, with $f_m(s) = e^{-c(m!\,s)^{1/m}}$ in Theorem 2.34 to the mappings $\Phi_{f_p}$, $p \in (0, \infty)$, with $f_p(s) = e^{-c(\Gamma(p+1)s)^{1/p}}$. This is connected with fractional integrals of Riemann, Liouville, and others. It is true that $\Phi_{f_{p+q}} = \Phi_{f_q}\Phi_{f_p}$ for $p, q \in (0, \infty)$. Thu [123, 124] (1982, 1984) and [125] (1986) are pioneering works in this direction. The concept of function on R monotone of


order n ∈ N in Definition 1.37 is extended to the concept of function monotone of order p ∈ (0, ∞) and it is shown that $\mu \in R(\Phi_{f_p})$ if and only if μ ∈ L0 and its h-function $h_\xi(u)$ is monotone of order p in u ∈ R. Another extension of the concept of selfdecomposability is to introduce the concept of function on (0, ∞) monotone of order n ∈ N similarly to Definition 1.37, to extend it to monotone of order p ∈ (0, ∞), and, for some function $g_p$, to show that $\mu \in R(\Phi_{g_p})$ if and only if μ ∈ L0 and its k-function $k_\xi(r)$ is monotone of order p in r ∈ (0, ∞). The limiting class $\bigcap_p R(\Phi_{g_p})$ equals the Thorin class on Rd, which will be explained below. The processes in Rosiński [83] (2007) are closely connected with distributions in the scheme of [101] (2010). The limit of the ranges of iterations of $\Phi_f$ is studied by Maejima and Sato [61] (2009) and Sato [102] (2011); they are extensions of the fact $L_\infty = \bigcap_{m=1}^\infty R((\Phi^{(c)})^m)$.
After the study of the roles of Gaussian, stable, infinitely divisible, selfdecomposable, and compound Poisson distributions in the early stage of modern probability theory, new sufficient conditions for infinite divisibility, new explicit distributions in ID appearing in stochastic processes, and new subclasses of ID were obtained. This was from the 1960s on by Goldie [27] (1967), Steutel [116, 117] (1967, 1970), Thorin [120, 121] (1977a, 1977b), Bondesson [15] (1981), Barndorff-Nielsen and Halgreen [6] (1977), Barndorff-Nielsen [4] (1978), Halgreen [30] (1979), and others. For one-dimensional distributions Bondesson [16] (1992) and Steutel and van Harn [118] (2004) are monographs with detailed information. Some of the subclasses can be expressed as $R(\Phi_f)$. We mention two important classes: Thorin class T = T(Rd) and Goldie–Steutel–Bondesson class B = B(Rd). Class T is the Rd-version of the class GGC (generalized Γ-convolutions) introduced by Thorin in the course of his proof of infinite divisibility of Pareto and log-normal distributions.
Class B is the Rd-version of the class gcmed (generalized convolutions of mixtures of exponential distributions) studied by Bondesson. A distribution μ ∈ P(Rd) is in T if and only if $\widehat{\mu}(z)$ is of the form (1.25) with $k_\xi(r)$ being completely monotone in r ∈ (0, ∞); μ is in B if and only if its Lévy measure is of the form (1.27) with $\nu_\xi$ having completely monotone density on (0, ∞) for each ξ. Let us define a mapping $\Psi$ by $\Psi = \Phi_f$ with $f(s)$ satisfying $s = \int_{f(s)}^\infty u^{-1}e^{-u}\,du$. Then we can prove $D(\Psi) = ID_{\log}$. Further, let us define a different type of improper stochastic integral $\int_{0+}^{t_0} f(s)\,dZ_s$ with a finite $t_0$ as the limit in probability of $\int_t^{t_0} f(s)\,dZ_s$ as $t \to 0+$.
The mapping $\Upsilon(\rho) = \mathcal{L}\bigl(\int_{0+}^1 \log(1/s)\,dZ_s^{(\rho)}\bigr)$ was introduced by Barndorff-Nielsen and Thorbjørnsen [11] (2004) in an area called free probability. It is easy to show that $D(\Upsilon) = ID$. Barndorff-Nielsen, Maejima, and Sato [7] (2006) showed that $\Phi^{(1)}\Upsilon = \Upsilon\Phi^{(1)} = \Psi$, $B = R(\Upsilon)$, and $T = \Phi^{(1)}(B) = \Upsilon(L_0) = R(\Psi)$. Several works followed this result. James, Roynette, and Yor [37] (2008) contains another characterization and examples of class T.
Alf and O'Connor [2] (1977) showed that μ ∈ ID(R) has Lévy measure unimodal with mode 0 if and only if $(\log\widehat{\mu})(z) = z^{-1}\int_0^z (\log\widehat{\rho})(u)\,du$ for some ρ ∈ ID, which is equivalent to $\mu = \mathcal{L}\bigl(\int_0^1 s\,dZ_s^{(\rho)}\bigr)$. O'Connor [67, 68] (1979a, 1979b) called such μ as of class U and studied some classes with a continuous


parameter which satisfy a relation similar to selfdecomposability (1.3). Jurek [41– 43] (1985, 1988, 1989) obtained similar results and proved their stochastic integral representation. They are contained in the scheme of [101] (2010) and the Rd -version of O’Connor’s U is called Jurek class U = U (Rd ). Further, Maejima and Ueda [65] (2010) found a temporally inhomogeneous extension of OU type processes related to those continuous-parameter classes.

Chapter 3

Selfsimilar Additive Processes and Stationary Ornstein–Uhlenbeck Type Processes

Selfsimilar processes on Rd are those stochastic processes whose finite-dimensional distributions are such that any change of time scale has the same effect as some change of spatial scale. Under the conditions of stochastic continuity and being non-zero, the relation of the two scale changes is expressed by a positive number c called the exponent. A Lévy process is selfsimilar if and only if it is strictly stable. In Sect. 3.1 selfsimilar additive processes are studied. It is shown that the class of those processes on Rd exactly corresponds to the class L0(Rd). By a transformation named after Lamperti, selfsimilar processes correspond to stationary processes. In the case of selfsimilar additive processes this is realized by the correspondence to stationary OU type processes. Those results will be given in Sect. 3.2.

3.1 Selfsimilar Additive Processes and Class L0

It will be shown that, for any selfsimilar additive process {Xt} on Rd, the distribution of Xt is selfdecomposable for every t, that is, L(Xt) ∈ L0(Rd). Conversely, if μ belongs to the class L0(Rd), then for every c > 0, there is, uniquely in law, a c-selfsimilar additive process {Xt} on Rd with L(X1) = μ. As a consequence, there are many additive processes which are selfsimilar.

Definition 3.1 A stochastic process $\{X_t \colon t \ge 0\}$ on $\mathbb{R}^d$ is selfsimilar if, for any $a > 0$, there is $b > 0$ such that $\{X_{at} \colon t \ge 0\} \overset{d}{=} \{bX_t \colon t \ge 0\}$.

Theorem 3.2 Let $\{X_t \colon t \ge 0\}$ be a selfsimilar, stochastically continuous, non-zero process on $\mathbb{R}^d$ with $X_0 = 0$ a.s. Then $b$ in the definition above is uniquely determined by $a$ and there is $c > 0$ such that, for any $a > 0$, $b = a^c$.

See Theorem 13.11 and Remark 13.13 of [93].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2019 A. Rocha-Arteaga, K. Sato, Topics in Infinitely Divisible Distributions and Lévy Processes, SpringerBriefs in Probability and Mathematical Statistics, https://doi.org/10.1007/978-3-030-22700-5_3


Definition 3.3 A stochastic process $\{X_t \colon t \ge 0\}$ on $\mathbb{R}^d$ is called a $c$-selfsimilar process if it is stochastically continuous, starts from 0, and satisfies $\{X_{at} \colon t \ge 0\} \overset{d}{=} \{a^cX_t \colon t \ge 0\}$ for all $a > 0$. The number $c > 0$ is called the exponent of selfsimilarity. (If in addition $\{X_t\}$ is non-zero, then its exponent $c$ is uniquely determined. But, if $\{X_t\}$ is a zero process, it is $c$-selfsimilar for all $c > 0$.)

We study selfsimilar additive processes on $\mathbb{R}^d$. Recall that if $\{X_t \colon t \ge 0\}$ is an additive process, then $\mathcal{L}(X_t) \in ID$ for any $t \ge 0$ (Theorem 1.32). First, let us consider time change by powers of $t$.

Proposition 3.4 If $\{X_t\}$ is a $c$-selfsimilar additive process on $\mathbb{R}^d$, then, for any $\kappa > 0$, $\{X_{t^\kappa}\}$ is a $c\kappa$-selfsimilar additive process.

Proof We can prove that $\{X_{t^\kappa}\}$ is an additive process from the assumption that $\{X_t\}$ is additive. If $\{X_t\}$ is $c$-selfsimilar, then $\{X_{(at)^\kappa}\} = \{X_{a^\kappa t^\kappa}\} \overset{d}{=} \{a^{c\kappa}X_{t^\kappa}\}$ and thus $\{X_{t^\kappa}\}$ is $c\kappa$-selfsimilar. $\square$

This shows that exponents are not important for the study of properties of selfsimilar additive processes because we can freely change the exponent $c$ by a simple time change. The following theorem establishes the relation between selfsimilar additive processes and selfdecomposable distributions.

Theorem 3.5
(i) Let $\{X_t \colon t \ge 0\}$ be a selfsimilar additive process on $\mathbb{R}^d$. Then $\mathcal{L}(X_t) \in L_0(\mathbb{R}^d)$ for all $t \ge 0$.
(ii) For any $\mu \in L_0(\mathbb{R}^d)$ and any $c > 0$, there is a unique (in law) $c$-selfsimilar additive process $\{X_t\}$ on $\mathbb{R}^d$ such that $\mathcal{L}(X_1) = \mu$.

Proof (i) Let $c$ be the exponent of selfsimilarity of $\{X_t\}$. For $0 \le s \le t$ let $\mu_t$ and $\mu_{s,t}$ be the distributions of $X_t$ and $X_t - X_s$, respectively. We have
$$\widehat{\mu}_t(z) = \widehat{\mu}_s(z)\,\widehat{\mu}_{s,t}(z), \qquad \widehat{\mu}_s(z) = \widehat{\mu}_t\left((s/t)^c z\right) \tag{3.1}$$
by the independent increments and by $X_s = X_{(s/t)t} \overset{d}{=} (s/t)^cX_t$. Given $b > 1$ choose $0 < s < t$ such that $b = (s/t)^{-c}$. Then the identity above shows that $\mu_t \in L_0$ for $t > 0$. Since $\mathcal{L}(X_0) = \delta_0$, $\mu_0 \in L_0$ is evident.
(ii) Suppose that $\mu \in L_0$ and $c > 0$ are given. Then $\widehat{\mu}(z) \ne 0$ and, for every $b > 1$, there is a unique $\eta_b \in ID$ such that $\widehat{\mu}(z) = \widehat{\mu}(b^{-1}z)\,\widehat{\eta}_b(z)$. Next define $\mu_t$ and $\mu_{s,t}$ by $\mu_0 = \delta_0$,
$$\widehat{\mu}_t(z) = \widehat{\mu}(t^c z) \quad \text{for } t > 0, \qquad \widehat{\mu}_{s,t}(z) = \widehat{\eta}_{(t/s)^c}(t^c z) \quad \text{for } 0 < s < t$$


and $\mu_{0,t} = \mu_t$ for $t > 0$. We have $\mu_t = \mu_s * \mu_{s,t}$ for $0 \le s < t$, since
$$\widehat{\mu}(t^c z) = \widehat{\mu}(s^c z)\,\widehat{\eta}_{(t/s)^c}(t^c z) \quad \text{for } 0 < s < t.$$
Therefore $\widehat{\mu}_{s,t}(z) = \widehat{\mu}_t(z)/\widehat{\mu}_s(z)$. It follows that, for $0 \le r < s < t$,
$$\widehat{\mu}_{r,s}(z)\,\widehat{\mu}_{s,t}(z) = \frac{\widehat{\mu}_s(z)}{\widehat{\mu}_r(z)}\,\frac{\widehat{\mu}_t(z)}{\widehat{\mu}_s(z)} = \frac{\widehat{\mu}_t(z)}{\widehat{\mu}_r(z)} = \widehat{\mu}_{r,t}(z),$$
that is, $\mu_{r,s} * \mu_{s,t} = \mu_{r,t}$. Now Kolmogorov's extension theorem applies and we can construct, on some probability space, a process $\{X_t \colon t \ge 0\}$ such that, for $0 \le t_0 < t_1 < \cdots < t_n$ and $B_0, \ldots, B_n \in \mathcal{B}(\mathbb{R}^d)$,
$$P[X_{t_0} \in B_0, \ldots, X_{t_n} \in B_n] = \int\mu_{t_0}(dx_0)1_{B_0}(x_0)\int\mu_{t_0,t_1}(dx_1)1_{B_1}(x_0+x_1)\cdots\int\mu_{t_{n-1},t_n}(dx_n)1_{B_n}(x_0+\cdots+x_{n-1}+x_n).$$
We have $\mathcal{L}(X_t) = \mu_t$. In particular, $\mathcal{L}(X_1) = \mu$. The process $\{X_t\}$ starts at 0 a.s., has independent increments, and is stochastically continuous because $\mu_{s,t} \to \delta_0$ as $s \uparrow t$ or $t \downarrow s$. Indeed, $\widehat{\eta}_b(z) \to 1$ uniformly in any neighbourhood of 0 as $b \downarrow 1$. Therefore it is an additive process in law. By choosing a modification, it is an additive process ([93] Theorem 11.5). Moreover, we have from the definition of $\mu_t$ that
$$X_{at} \overset{d}{=} a^cX_t \quad \text{for } t \ge 0.$$
This implies that $\{X_{at}\}$ and $\{a^cX_t\}$ have a common system of finite-dimensional distributions, since both are additive processes. Here we have used Theorem 1.32 (ii). Hence $\{X_t\}$ is $c$-selfsimilar. Since $X_t \overset{d}{=} t^cX_1$, the process $\{X_t\}$ with the properties required is unique in law, again by Theorem 1.32 (ii). $\square$

Proposition 3.6
(i) Let $0 < \alpha \le 2$. Suppose that $\{X_t\}$ is a strictly $\alpha$-stable process on $\mathbb{R}^d$. Then $\{X_t\}$ is $(1/\alpha)$-selfsimilar.
(ii) Let $c > 0$. Suppose that $\{X_t\}$ is a non-zero $c$-selfsimilar Lévy process on $\mathbb{R}^d$. Then $c \ge 1/2$ and $\{X_t\}$ is a strictly $(1/c)$-stable process.

Proof (i) Let $\mu = \mathcal{L}(X_1)$. Then $\mathcal{L}(X_t) = \mu^t \in S_\alpha^0$, that is, $\widehat{\mu}^{ta}(z) = \widehat{\mu}^t(a^{1/\alpha}z)$. Hence $X_{at} \overset{d}{=} a^{1/\alpha}X_t$. Since $\{X_{at}\}$ and $\{a^{1/\alpha}X_t\}$ are both Lévy processes, it follows that $\{X_{at}\} \overset{d}{=} \{a^{1/\alpha}X_t\}$.


(ii) We have $X_{at} \overset{d}{=} a^cX_t$ for all $t \ge 0$. In particular $\mu = \mathcal{L}(X_1)$ satisfies $\widehat{\mu}^a(z) = \widehat{\mu}(a^c z)$. Since $\mu \ne \delta_0$, Proposition 1.22 implies that $c = 1/\alpha$ with $0 < \alpha \le 2$. Thus $\mu \in S_\alpha^0$. Hence $\{X_t\}$ is a strictly $\alpha$-stable process. $\square$

Notice that, in the case of a selfsimilar Lévy process, the exponent of selfsimilarity is important. If $\{X_t\}$ is a non-zero Lévy process and $\kappa \ne 1$, then the time change in Proposition 3.4 transforms $\{X_t\}$ into a non-Lévy additive process.

Example 3.7 Let $0 < \alpha \le 2$ and $\mu \in S_\alpha(\mathbb{R}^d)$. Let $\{X_t \colon t \ge 0\}$ be the $(1/\alpha)$-selfsimilar additive process with $\mathcal{L}(X_1) = \mu$ in Theorem 3.5. Let $\{Y_t \colon t \ge 0\}$ be the Lévy process with $\mathcal{L}(Y_1) = \mu$.
(i) Suppose that $\mu \in S_\alpha^0$. Then $\{X_t\} \overset{d}{=} \{Y_t\}$.
(ii) Suppose that $\mu \in S_\alpha \setminus S_\alpha^0$. Then $\{X_t\}$ and $\{Y_t\}$ are not identical in law and they are related as follows. If $\alpha \ne 1$, then
$$\{X_t\} \overset{d}{=} \left\{Y_t + \left(t^{1/\alpha} - t\right)\tau\right\}, \tag{3.2}$$
where $\tau \ne 0$. Here $\tau$ is the drift of $\mu$ for $0 < \alpha < 1$ and the mean of $\mu$ for $1 < \alpha \le 2$. If $\alpha = 1$, then
$$\{X_t\} \overset{d}{=} \left\{Y_t - (t\log t)\,c_1\int_S \xi\,\lambda(d\xi)\right\}, \tag{3.3}$$
where $c_1$ and $\lambda$ denote $c$ and $\lambda$ in the expression (1.40) of the Lévy measure $\nu$ of $\mu$, and we have $\int_S \xi\,\lambda(d\xi) \ne 0$.
Here is a proof. The identity in law in (i) and the first assertion in (ii) follow from Proposition 3.6. Let us prove the remaining part, using Theorem 1.42. Let $\mu_t = \mathcal{L}(X_t)$. Then $\widehat{\mu}_t(z) = \widehat{\mu}(t^{1/\alpha}z)$. We have $\mathcal{L}(Y_t) = \mu^t$. Let $\alpha \ne 1$. Then $\widehat{\mu}(z) = \widehat{\eta}(z)e^{i\langle\tau,z\rangle}$, $\tau \ne 0$, and $\eta \in S_\alpha^0$. Hence
$$\widehat{\mu}_t(z) = \widehat{\eta}(t^{1/\alpha}z)e^{it^{1/\alpha}\langle\tau,z\rangle} = \widehat{\eta}(z)^te^{it^{1/\alpha}\langle\tau,z\rangle} = \widehat{\mu}(z)^te^{i(t^{1/\alpha}-t)\langle\tau,z\rangle},$$
that is, $X_t \overset{d}{=} Y_t + (t^{1/\alpha} - t)\tau$. Let $\alpha = 1$. Then,
$$\widehat{\mu}(z) = \exp\left[c_1\int_S\lambda(d\xi)\int_0^\infty\left(e^{ir\langle\xi,z\rangle} - 1 - ir\langle\xi,z\rangle 1_{(0,1]}(r)\right)\frac{dr}{r^2} + i\langle\gamma,z\rangle\right]$$
with $\int_S\xi\,\lambda(d\xi) \ne 0$. Since $\widehat{\mu}_t(z) = \widehat{\mu}(tz)$, we have
$$\widehat{\mu}_t(z) = \exp\left[c_1\int_S\lambda(d\xi)\int_0^\infty\left(e^{iu\langle\xi,z\rangle} - 1 - iu\langle\xi,z\rangle 1_{(0,1]}\left(\tfrac{u}{t}\right)\right)\frac{t\,du}{u^2} + it\langle\gamma,z\rangle\right].$$


Noting that
$$1_{(0,1]}\left(\frac{u}{t}\right) = \begin{cases} 1_{(0,1]}(u) - 1_{(t,1]}(u) & \text{for } t < 1, \\ 1_{(0,1]}(u) + 1_{(1,t]}(u) & \text{for } t > 1, \end{cases}$$
we obtain
$$\widehat{\mu}_t(z) = \widehat{\mu}(z)^t\exp\left[-i(t\log t)\left\langle c_1\int_S\xi\,\lambda(d\xi),\, z\right\rangle\right],$$
that is, $X_t \overset{d}{=} Y_t - (t\log t)\,c_1\int_S\xi\,\lambda(d\xi)$. Now we get (3.2) for $\alpha \ne 1$ and (3.3) for $\alpha = 1$, using Theorem 1.32 (ii), since both sides are additive processes.

When $\{X_t\}$ is a selfsimilar additive process, the distribution of $X_t$ is selfdecomposable. But its joint distributions (finite-dimensional distributions) are not always selfdecomposable. Let us give conditions for joint distributions of $\{X_t\}$ to be selfdecomposable and, furthermore, conditions for them to be of class $L_m$.

Theorem 3.8 Let $\mu \in L_0(\mathbb{R}^d)$ such that $\mu \ne \delta_0$. Let $c > 0$ and let $\{X_t\}$ be the $c$-selfsimilar additive process with $\mathcal{L}(X_1) = \mu$. Let $m \in \{0, 1, \ldots, \infty\}$. Then the following six conditions are equivalent. Here we understand $L_{-1} = ID$ and $L_{\infty-1} = L_\infty$.
(a) $\mu \in L_m(\mathbb{R}^d)$.
(b) $\mathcal{L}(X_t) \in L_m(\mathbb{R}^d)$ for all $t \ge 0$.
(c) $\mathcal{L}((X_{t_k})_{k=1,2}) \in L_{m-1}(\mathbb{R}^{2d})$ for all $t_1, t_2 \ge 0$.
(d) $\mathcal{L}(c_1X_{t_1} + c_2X_{t_2}) \in L_{m-1}(\mathbb{R}^d)$ for all $t_1, t_2 \ge 0$ and $c_1, c_2 \in \mathbb{R}$.
(e) $\mathcal{L}((X_{t_k})_{1\le k\le n}) \in L_{m-1}(\mathbb{R}^{nd})$ for all $n \in \mathbb{N}$ and $t_1, \ldots, t_n \ge 0$.
(f) $\mathcal{L}(c_1X_{t_1} + \cdots + c_nX_{t_n}) \in L_{m-1}(\mathbb{R}^d)$ for all $n \in \mathbb{N}$, $t_1, \ldots, t_n \ge 0$, and $c_1, \ldots, c_n \in \mathbb{R}$.

Let us prepare

Lemma 3.9 Let $m \in \{0, 1, \ldots, \infty\}$.
(i) Let $X$ be a random variable on $\mathbb{R}^{d_1}$ and $F$ a real $d_2 \times d_1$ matrix. If $\mathcal{L}(X) \in L_m(\mathbb{R}^{d_1})$, then $\mathcal{L}(FX) \in L_m(\mathbb{R}^{d_2})$.
(ii) Let $d_1, \ldots, d_n \in \mathbb{N}$ and $d = d_1 + \cdots + d_n$. Let $X_j$ be a random variable on $\mathbb{R}^{d_j}$ for each $j$. If $X_1, \ldots, X_n$ are independent and if $\mathcal{L}(X_j) \in L_m(\mathbb{R}^{d_j})$ for each $j$, then $\mathcal{L}((X_j)_{1\le j\le n}) \in L_m(\mathbb{R}^d)$.

Proof (i) Let $Y = FX$, $\mu = \mathcal{L}(X)$, and $\mu_Y = \mathcal{L}(Y)$. Suppose that $\mu \in L_0$. For any $b > 1$, $\widehat{\mu}(z) = \widehat{\mu}(b^{-1}z)\widehat{\eta}_b(z)$ with some $\eta_b \in ID$. Let $F'$ be the transpose of $F$. Since $\widehat{\mu}_Y(z) = \widehat{\mu}(F'z)$, we have $\widehat{\mu}_Y(z) = \widehat{\mu}_Y(b^{-1}z)\widehat{\eta}_b(F'z)$. Hence $\mu_Y \in L_0$. This proves the assertion for $m = 0$. By induction we can prove it for $m = 1, 2, \ldots$. The validity for $m = \infty$ follows from this.
(ii) Let $X_1, \ldots, X_n$ be independent, $X = (X_j)_{1\le j\le n}$, $\mu_j = \mathcal{L}(X_j)$, and $\mu = \mathcal{L}(X)$. Assume $\mu_j \in L_0$ for each $j$. Then $\widehat{\mu}_j(z_j) = \widehat{\mu}_j(b^{-1}z_j)\widehat{\eta}_{j,b}(z_j)$ for $z_j \in \mathbb{R}^{d_j}$ with some $\eta_{j,b} \in ID$. For $z = (z_j)_{1\le j\le n}$ with $z_j \in \mathbb{R}^{d_j}$,
$$\widehat{\mu}(z) = \prod_{j=1}^n\widehat{\mu}_j(z_j) = \prod_{j=1}^n\widehat{\mu}_j(b^{-1}z_j)\widehat{\eta}_{j,b}(z_j) = \widehat{\mu}(b^{-1}z)\widehat{\eta}_b(z),$$
where $\widehat{\eta}_b(z) = \prod_{j=1}^n\widehat{\eta}_{j,b}(z_j)$. Hence the assertion is true for $m = 0$. It is true for $m = 1, 2, \ldots$ by induction. Thus it is true also for $m = \infty$. $\square$

Proof of Theorem 3.8 Conditions (a) and (b) are equivalent, because $\mathcal{L}(X_t) = \mathcal{L}(t^cX_1)$ for $t > 0$ and $\mathcal{L}(X_0) = \delta_0$. Let $0 \le s < t$ and let $\mu_t = \mathcal{L}(X_t)$ and $\mu_{s,t} = \mathcal{L}(X_t - X_s)$. Then (3.1) shows that $\mu_t \in L_m$ if and only if $\mu_{s,t} \in L_{m-1}$ for $0 < s < t$. Now let us prove that (b)⇒(e)⇒(f)⇒(d)⇒(b) and that (e)⇒(c)⇒(d).
(b)⇒(e): We have $\mathcal{L}(X_t - X_s) \in L_{m-1}$ for all $0 \le s \le t$. Given $0 = t_0 \le t_1 \le \cdots \le t_n$, we see that $X_{t_k} - X_{t_{k-1}}$, $k = 1, \ldots, n$, are independent and hence, by Lemma 3.9 (ii), $\mathcal{L}((X_{t_k} - X_{t_{k-1}})_{1\le k\le n}) \in L_{m-1}$. Since there is some $n \times n$ matrix $F$ satisfying $(X_{t_k})_{1\le k\le n} = F(X_{t_k} - X_{t_{k-1}})_{1\le k\le n}$, we see $\mathcal{L}((X_{t_k})_{1\le k\le n}) \in L_{m-1}$ by Lemma 3.9 (i).
(e)⇒(f): This follows from Lemma 3.9 (i), since $c_1X_{t_1} + \cdots + c_nX_{t_n} = F(X_{t_k})_{1\le k\le n}$ with some $1 \times n$ matrix $F$.
(f)⇒(d): (d) is a special case of (f).
(d)⇒(b): Since $X_t - X_s$ is a special case of $c_1X_{t_1} + c_2X_{t_2}$, we have $\mu_{s,t} \in L_{m-1}$ for $0 < s < t$.
(e)⇒(c): (c) is a special case of (e).
(c)⇒(d): Use Lemma 3.9 (i) as in the proof that (e)⇒(f). $\square$

Remark 3.10 The result in Theorem 3.8 on selfsimilar additive processes is quite different from that for Lévy processes. Let $m \in \{0, 1, \ldots, \infty\}$. If $\{Y_t\}$ is a Lévy process on $\mathbb{R}^d$ with $\mathcal{L}(Y_1) \in L_m(\mathbb{R}^d)$, then $\mathcal{L}((Y_{t_k})_{1\le k\le n}) \in L_m(\mathbb{R}^{nd})$ for every $n \in \mathbb{N}$ and $t_1, \ldots, t_n$ in $[0, \infty)$. This follows from Proposition 1.18 (iv) and Lemma 3.9.
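The construction of Theorem 3.5 (ii) can be made completely explicit in the Gaussian case (an added worked example; the choice $\mu = N(0, I_d)$ is ours, not the text's):

```latex
% Assumption: \mu = N(0, I_d) \in L_0(\mathbb{R}^d). Then for the c-selfsimilar
% additive process with \mathcal{L}(X_1) = \mu,
\widehat{\mu}_t(z) = \widehat{\mu}(t^c z) = \exp\bigl(-\tfrac12\,t^{2c}|z|^2\bigr),
% so X_t is centered Gaussian with covariance t^{2c} I_d, and the
% independent increments satisfy
\operatorname{Cov}(X_t - X_s) = \bigl(t^{2c} - s^{2c}\bigr)\,I_d \qquad (0 \le s \le t).
% For c = 1/2 this is Brownian motion; for c \ne 1/2 it is a c-selfsimilar
% additive Gaussian process that is not a Lévy process (cf. Proposition 3.6).
```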

3.2 Lamperti Transformation and Stationary Ornstein–Uhlenbeck Type Processes

We define stationary processes as follows. Notice that the set of time parameters is the whole line $\mathbb{R}$.

Definition 3.11 A stochastic process $\{Y_s \colon s \in \mathbb{R}\}$ on $\mathbb{R}^d$ is called a stationary process if
$$\{Y_s \colon s \in \mathbb{R}\} \overset{d}{=} \{Y_{s+u} \colon s \in \mathbb{R}\} \quad \text{for } u \in \mathbb{R},$$

that is, if shift of time preserves the system of finite-dimensional distributions.
We use the following simple fact.

Lemma 3.12
(i) Let $(X_j)_{1\le j\le N}$ and $(X_j')_{1\le j\le N}$ be random variables on $\mathbb{R}^N$ with a common law. Let $Q$ be an $N' \times N$ real matrix. Then $Q(X_j)_{1\le j\le N} \overset{d}{=} Q(X_j')_{1\le j\le N}$.
(ii) Let $\{X_j \colon 1 \le j \le N\}$ and $\{X_j' \colon 1 \le j \le N\}$ be families of random variables on $\mathbb{R}^d$ such that $(X_j)_{1\le j\le N} \overset{d}{=} (X_j')_{1\le j\le N}$. Then, for any $a_1, \ldots, a_N \in \mathbb{R}$, we have $(a_jX_j)_{1\le j\le N} \overset{d}{=} (a_jX_j')_{1\le j\le N}$.

Proof (i) Since $\langle z, Qx\rangle = \langle Q'z, x\rangle$ for $z \in \mathbb{R}^{N'}$ and $x \in \mathbb{R}^N$, where $Q'$ denotes the transpose of $Q$, we have
$$Ee^{i\langle z, Q(X_j)_{1\le j\le N}\rangle} = Ee^{i\langle Q'z, (X_j)_{1\le j\le N}\rangle} = Ee^{i\langle Q'z, (X_j')_{1\le j\le N}\rangle} = Ee^{i\langle z, Q(X_j')_{1\le j\le N}\rangle}.$$
(ii) We can consider $\{X_j \colon 1 \le j \le N\}$ and $\{X_j' \colon 1 \le j \le N\}$ as random variables on $\mathbb{R}^{Nd}$. Choosing a suitable $Nd \times Nd$ matrix $Q$, we see that the assertion is a special case of (i). $\square$

= Eeiz,Q(Xj )1≤j ≤N  . (ii) We can consider {Xj : 1 ≤ j ≤ N } and {Xj : 1 ≤ j ≤ N } as random variables on RN d . Choosing a suitable Nd × Nd matrix Q, we see that the assertion is a special case of (i).  In order to make the situation clear, let us introduce some classes of stochastic processes. We identify two stochastic processes which are identical in law. Given c > 0, let Ssc = Ssc (Rd ) be the class of c-selfsimilar processes on Rd , which are defined in Definition 3.3. Let St = St(Rd ) be the class of stochastically continuous stationary processes on Rd . Unlike a selfsimilar process, a general process in St does not have any “exponent”. Theorem 3.13 (i) Let c > 0 and let {Xt : t ≥ 0} ∈ Ssc . Let {Ys : s ∈ R} be the process defined by Ys = e−cs Xexp s

for

s ∈ R.

(3.4)

Then {Ys : s ∈ R} ∈ St. (ii) Let {Ys : s ∈ R} ∈ St. Let c > 0 and let {Xt : t ≥ 0} be defined by X0 = 0 Then {Xt : t ≥ 0} ∈ Ssc .

and

Xt = t c Ylog t

for t > 0.

(3.5)


Proof (i) We have $\{X_{at} \colon t \ge 0\} \overset{d}{=} \{a^cX_t \colon t \ge 0\}$ for all $a > 0$. Hence $\{X_t \colon t \ge 0\} = \{X_{aa^{-1}t} \colon t \ge 0\} \overset{d}{=} \{a^cX_{a^{-1}t} \colon t \ge 0\}$. Hence $\{X_t \colon t \ge 0\} \overset{d}{=} \{e^{-cu}X_{(\exp u)t} \colon t \ge 0\}$ for all $u \in \mathbb{R}$. Using Lemma 3.12, we obtain
$$\{Y_s \colon s \in \mathbb{R}\} = \{e^{-cs}X_{\exp s} \colon s \in \mathbb{R}\} \overset{d}{=} \{e^{-cs}e^{-cu}X_{(\exp u)(\exp s)} \colon s \in \mathbb{R}\} = \{Y_{s+u} \colon s \in \mathbb{R}\} \quad \text{for all } u,$$
which shows that $\{Y_s\}$ is a stationary process. Its stochastic continuity follows from that of $\{X_t\}$.
(ii) We have $\{Y_s \colon s \in \mathbb{R}\} \overset{d}{=} \{Y_{s+u} \colon s \in \mathbb{R}\}$ for all $u$. Then, choosing $u = \log a$ and using Lemma 3.12, we have
$$\{X_{at} \colon t > 0\} = \{a^ct^cY_{\log(at)} \colon t > 0\} \overset{d}{=} \{a^ct^cY_{\log t} \colon t > 0\} = \{a^cX_t \colon t > 0\}$$
for all $a > 0$. The stochastic continuity of $\{X_t\}$ follows, for $t > 0$, from that of $\{Y_s\}$ and, for $t = 0$, from $X_t \overset{d}{=} t^cY_0 \to 0$ in probability as $t \downarrow 0$. $\square$

Definition 3.14 Let $\Lambda_c$ denote the mapping of $\{X_t \colon t \ge 0\}$ to $\{Y_s \colon s \in \mathbb{R}\}$ in Theorem 3.13 (i) and let $\Lambda_c^*$ denote the mapping of $\{Y_s \colon s \in \mathbb{R}\}$ to $\{X_t \colon t \ge 0\}$ in Theorem 3.13 (ii). Both $\Lambda_c$ and $\Lambda_c^*$ are called the Lamperti transformation.

Remark 3.15 It is straightforward to see that $\Lambda_c^* \circ \Lambda_c$ is the identity mapping of $Ss_c$ and that $\Lambda_c \circ \Lambda_c^*$ is the identity mapping of $St$, where the symbol $\circ$ denotes composition of mappings. Hence, $\Lambda_c$ is a one-to-one mapping from $Ss_c$ onto $St$, $\Lambda_c^*$ is a one-to-one mapping from $St$ onto $Ss_c$, and
$$\Lambda_c^* = (\Lambda_c)^{-1}, \qquad \Lambda_c = (\Lambda_c^*)^{-1}.$$


for a > 0 by Lemma 3.12. Clearly, {X''_t} is stochastically continuous for t > 0. Since X''_t =^d t^{cκ} X_1 → 0 in probability as t ↓ 0, {X''_t} is stochastically continuous also at t = 0. □

Definition 3.17 Let I_c^{(κ)} denote the mapping of {X_t} to {X'_t} and J_c^{(κ)} the mapping of {X_t} to {X''_t} in Proposition 3.16. Let F^{(κ)} with κ > 0 denote the mapping of {Y_s : s ∈ R} ∈ St to {Y_{κs} : s ∈ R} ∈ St.

Proposition 3.4 shows that, if {X_t} is a c-selfsimilar additive process, then I_c^{(κ)}({X_t}) is a cκ-selfsimilar additive process. However, the mapping J_c^{(κ)} does not have this property, as we will see in Proposition 3.27.

Proposition 3.18 Let c > 0 and κ > 0. The mappings I_c^{(κ)} and J_c^{(κ)} are one-to-one from Ss_c onto Ss_{cκ}. The mapping F^{(κ)} is one-to-one from St onto St. We have

(I_c^{(κ)})^{-1} = I_{cκ}^{(1/κ)},  (J_c^{(κ)})^{-1} = J_{cκ}^{(1/κ)},  (F^{(κ)})^{-1} = F^{(1/κ)}  (3.6)

for their inverses.

Proof It is enough to verify

I_{cκ}^{(1/κ)} ∘ I_c^{(κ)} = J_{cκ}^{(1/κ)} ∘ J_c^{(κ)} = identity mapping of Ss_c,
I_c^{(κ)} ∘ I_{cκ}^{(1/κ)} = J_c^{(κ)} ∘ J_{cκ}^{(1/κ)} = identity mapping of Ss_{cκ},
F^{(1/κ)} ∘ F^{(κ)} = F^{(κ)} ∘ F^{(1/κ)} = identity mapping of St.

These are easily shown from the definitions. □
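Pathwise, I_c^{(κ)} is a deterministic time change and J_c^{(κ)} a deterministic space scaling, so the inverse relations in (3.6) can be checked on any test path. The following sketch (the test function and parameter values are our own illustrative choices, not from the text) verifies the first two identities:

```python
# Pathwise check of the inverse relations (3.6).
# I_c^{(kappa)} maps a path x(t) to x(t^kappa);
# J_c^{(kappa)} maps a path x(t) to t^{c*kappa - c} * x(t).
c, kappa = 0.5, 3.0

def I(f, kappa):
    return lambda t: f(t ** kappa)

def J(f, c, kappa):
    return lambda t: t ** (c * kappa - c) * f(t)

x = lambda t: t ** 2 + 1.0                        # arbitrary test path

y = I(I(x, kappa), 1.0 / kappa)                   # I_{c kappa}^{(1/kappa)} after I_c^{(kappa)}
w = J(J(x, c, kappa), c * kappa, 1.0 / kappa)     # J_{c kappa}^{(1/kappa)} after J_c^{(kappa)}

for t in (0.5, 1.0, 2.0):
    print(abs(y(t) - x(t)), abs(w(t) - x(t)))     # both differences vanish
```

The check is purely algebraic: the outer map undoes the time change (or cancels the power of t) introduced by the inner one, which is exactly the content of (3.6).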

Now we can examine Remark 3.15 in more detail.

Proposition 3.19 Let c > 0 and κ > 0. Then

Λ'_{cκ} ∘ Λ_c = J_c^{(κ)},  (3.7)
Λ'_{cκ} ∘ F^{(κ)} ∘ Λ_c = I_c^{(κ)}.  (3.8)

Proof Let {X_t} ∈ Ss_c and {Y_s} = Λ_c({X_t}) = {e^{-cs} X_{exp s}}. To see (3.7), note that Λ'_{cκ}({Y_s}) = {t^{cκ} Y_{log t}} = {t^{cκ} t^{-c} X_t} = J_c^{(κ)}({X_t}). To see (3.8), note that {Y'_s} = F^{(κ)}({Y_s}) = {Y_{κs}} = {e^{-cκs} X_{exp(κs)}} and that Λ'_{cκ}({Y'_s}) = {t^{cκ} Y'_{log t}} = {t^{cκ} t^{-cκ} X_{t^κ}} = I_c^{(κ)}({X_t}). □
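A quick numerical illustration of the Lamperti transformation (a sketch; the choice of Brownian motion, c = 1/2, and the simulation parameters are ours): for a standard Brownian motion {B_t}, which is (1/2)-selfsimilar, the transformed process Y_s = e^{−s/2} B_{exp s} has Cov(Y_s, Y_{s+u}) = e^{−s/2} e^{−(s+u)/2} min(e^s, e^{s+u}) = e^{−u/2}, independent of s, as stationarity requires — the Gaussian Ornstein–Uhlenbeck autocovariance.

```python
import math
import random

random.seed(5)

def lamperti_cov(s, u, n=200_000):
    """Monte Carlo estimate of Cov(Y_s, Y_{s+u}) for Y_s = e^{-s/2} B_{exp(s)},
    where {B_t} is standard Brownian motion."""
    a, b = math.exp(s), math.exp(s + u)
    acc = 0.0
    for _ in range(n):
        b1 = random.gauss(0.0, math.sqrt(a))            # B_a ~ N(0, a)
        b2 = b1 + random.gauss(0.0, math.sqrt(b - a))   # B_b = B_a + independent increment
        acc += math.exp(-s / 2) * b1 * math.exp(-(s + u) / 2) * b2
    return acc / n

u = 1.0
# Estimates for two different s should both be close to e^{-u/2}.
print(lamperti_cov(0.0, u), lamperti_cov(2.0, u), math.exp(-u / 2))
```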

Next we show that the Lamperti transformation preserves the Markov property. Given the two processes {X_t : t ≥ 0} and {Y_s : s ∈ R} as in Theorem 3.13, let F^X_t, t ≥ 0, and F^Y_s, s ∈ R, be the σ-algebras generated by {X_{t'} : t' ≤ t} and by {Y_{s'} : s' ≤ s}, respectively.


Theorem 3.20 Let c > 0. Let {X_t : t ≥ 0} ∈ Ss_c, {Y_s : s ∈ R} ∈ St, and {Y_s} = Λ_c({X_t}). Then F^Y_s = F^X_{exp s} for s ∈ R and F^X_t = F^Y_{log t} for t > 0. If one of the processes {X_t} and {Y_s} is a Markov process, then the other is also a Markov process. If {X_t} is a Markov process with transition function P^X_{t_1,t_2}(x, B), then {Y_s} is a Markov process with transition function

P^Y_{s_1,s_2}(y, B) = P^X_{exp s_1, exp s_2}(e^{cs_1} y, e^{cs_2} B).  (3.9)

If {Y_s} is a Markov process with transition function P^Y_{s_1,s_2}(y, B), then {X_t} is a Markov process with transition function

P^X_{t_1,t_2}(x, B) = P^Y_{log t_1, log t_2}(t_1^{-c} x, t_2^{-c} B).  (3.10)

Proof Since Y_s = e^{-cs} X_{exp s} and X_t = t^c Y_{log t}, the asserted relation of F^Y_s and F^X_t is straightforward. Suppose that {X_t} is a Markov process with transition function P^X_{t_1,t_2}(x, B) (see Definition 2.12). The defining property is, for t_1 < t_2,

P({X_{t_2} ∈ B} ∩ H) = E[P^X_{t_1,t_2}(X_{t_1}, B) 1_H]  for H ∈ F^X_{t_1}, B ∈ B(R^d).

Let s_1 < s_2 and let F(y, B) = P^X_{exp s_1, exp s_2}(e^{cs_1} y, e^{cs_2} B). Then, for H ∈ F^Y_{s_1},

P({Y_{s_2} ∈ B} ∩ H) = P({X_{exp s_2} ∈ e^{cs_2} B} ∩ H) = E[P^X_{exp s_1, exp s_2}(X_{exp s_1}, e^{cs_2} B) 1_H] = E[F(Y_{s_1}, B) 1_H].

Hence {Y_s : s ∈ R} is a Markov process with transition function P^Y_{s_1,s_2}(y, B) equal to F(y, B). The converse assertion is proved similarly. □

Now we are ready to consider the case where {X_t : t ≥ 0} is a selfsimilar additive process. This is related to the process defined below.

Definition 3.21 Let ρ ∈ ID(R^d) and c > 0. A stochastic process {Y_s : s ∈ R} on R^d is called a stationary Ornstein–Uhlenbeck type process generated by ρ and c if it is a stationary process and, at the same time, a Markov process such that, for any s ∈ R, {Y_{s+t} : t ≥ 0} is the Ornstein–Uhlenbeck type process generated by ρ and c in Sect. 2.2. Here L(Y_s) does not depend on s; it is called the stationary distribution of {Y_s : s ∈ R}.

Remark 3.22 Let {Y_s : s ∈ R} be a stationary OU type process generated by ρ and c. If {Y_s} is a non-constant process (that is, if L(Y_0) is not a δ-distribution), then it follows from Proposition 2.16 that ρ and c are uniquely determined. In this case ρ automatically belongs to ID_log and L(Y_s) = Φ^{(c)}(ρ) for all s ∈ R. Indeed, let μ = L(Y_s). For s_1 < s_2 and B ∈ B(R^d), we have

μ(B) = P[Y_{s_2} ∈ B] = ∫_{R^d} μ(dy) P^Y_{s_2−s_1}(y, B).  (3.11)


If ρ ∈ ID \ ID_log, then, letting s_2 → ∞, we obtain from (3.11) and Theorem 2.17 (iii) that μ(B) = 0 for every bounded Borel set B, which is absurd. Thus we have ρ ∈ ID_log = D(Φ^{(c)}). Letting s_2 → ∞ again in (3.11) and using Theorem 2.17 (i), we see that μ = Φ^{(c)}(ρ). Hence {Y_s : s ∈ R} is the unique (in law) stationary OU type process generated by ρ ∈ ID_log and c > 0. Conversely, for any ρ ∈ ID_log and c > 0 we can construct a stationary OU type process generated by ρ and c.

Theorem 3.23 Fix c > 0. Let {X_t : t ≥ 0} ∈ Ss_c, {Y_s : s ∈ R} ∈ St, and {Y_s} = Λ_c({X_t}). Assume that L(X_1) is non-trivial and that {Y_s} is a non-constant process. Then {X_t : t ≥ 0} is a c-selfsimilar additive process in law with L(X_1) = μ if and only if {Y_s : s ∈ R} is a stationary Ornstein–Uhlenbeck type process generated by ρ ∈ ID_log and c satisfying Φ^{(c)}(ρ) = L(Y_s) = μ.

We prepare the following fact.

Proposition 3.24
(i) Let {X_t : t ≥ 0} be an additive process in law on R^d with L(X_{t_1}) = μ_{t_1} and L(X_{t_2} − X_{t_1}) = μ_{t_1,t_2} for 0 ≤ t_1 ≤ t_2. Then {X_t : t ≥ 0} is a Markov process with transition function P^X_{t_1,t_2}(x, B) = μ_{t_1,t_2}(B − x) and

∫_{R^d} e^{i⟨z, x'⟩} P^X_{t_1,t_2}(x, dx') = \hat{μ}_{t_1,t_2}(z) e^{i⟨x, z⟩} = (\hat{μ}_{t_2}(z) / \hat{μ}_{t_1}(z)) e^{i⟨x, z⟩}.  (3.12)

(ii) Let {X_t : t ≥ 0} be a stochastically continuous Markov process on R^d with initial distribution δ_0 and transition function P^X_{t_1,t_2}(x, B) spatially homogeneous in the sense that P^X_{t_1,t_2}(x, B) = P^X_{t_1,t_2}(0, B − x) for all x and B. Then {X_t : t ≥ 0} is an additive process in law.

Proof
(i) We have, for 0 ≤ t_1 < t_2 < · · · < t_n, X_{t_n} = X_{t_1} + (X_{t_2} − X_{t_1}) + · · · + (X_{t_n} − X_{t_{n−1}}), and the right-hand side is the sum of independent summands. It follows that {X_t} is a Markov process with transition function P^X_{t_1,t_2}(x, B) = μ_{t_1,t_2}(B − x), since P[X_{t_1} ∈ B_1, X_{t_2} ∈ B_2] = E[1_{B_1}(X_{t_1}) μ_{t_1,t_2}(B_2 − X_{t_1})]. The expression (3.12) follows from this.
(ii) Define μ_t = L(X_t) and μ_{t_1,t_2} for 0 ≤ t_1 < t_2 by μ_{t_1,t_2}(B) = P^X_{t_1,t_2}(0, B). Then

P^X_{t_1,t_2}(x, B) = P^X_{t_1,t_2}(0, B − x) = μ_{t_1,t_2}(B − x) = ∫ μ_{t_1,t_2}(dx_2) 1_B(x + x_2).

It follows that, for 0 ≤ t_1 < t_2 < · · · < t_n,

P[X_{t_1} ∈ B_1, . . . , X_{t_n} ∈ B_n] = ∫ · · · ∫ μ_{t_1}(dx_1) 1_{B_1}(x_1) μ_{t_1,t_2}(dx_2) 1_{B_2}(x_1 + x_2) · · · μ_{t_{n−1},t_n}(dx_n) 1_{B_n}(x_1 + · · · + x_n).


Then we can prove that X_{t_1}, X_{t_2} − X_{t_1}, . . . , X_{t_n} − X_{t_{n−1}} are independent; see Theorem 10.4 of [93] for details. □

Proof of Theorem 3.23 Suppose that {X_t : t ≥ 0} is a c-selfsimilar additive process in law with L(X_1) = μ and μ is not a δ-distribution. Then μ ∈ L_0 by Theorem 3.5. By Proposition 3.24, {X_t} is a Markov process with transition function P^X_{t_1,t_2}(x, B) having characteristic function (\hat{μ}(t_2^c z) / \hat{μ}(t_1^c z)) e^{i⟨x, z⟩}, since X_t =^d t^c X_1. Hence {Y_s : s ∈ R} is a Markov process with transition function (3.9) in Theorem 3.20 and L(Y_s) = L(Y_0) = μ. Let η(B) = P^X_{exp s_1, exp s_2}(e^{cs_1} y, B). Then

∫_{R^d} e^{i⟨z, y'⟩} P^Y_{s_1,s_2}(y, dy') = \hat{η}(e^{−cs_2} z)
= (\hat{μ}(e^{cs_2} e^{−cs_2} z) / \hat{μ}(e^{cs_1} e^{−cs_2} z)) exp(i⟨e^{cs_1} y, e^{−cs_2} z⟩)
= (\hat{μ}(z) / \hat{μ}(e^{−c(s_2−s_1)} z)) exp(i⟨e^{−c(s_2−s_1)} y, z⟩)
= exp( i⟨e^{−c(s_2−s_1)} y, z⟩ + ∫_0^∞ ψ_ρ(e^{−cu} z) du − ∫_0^∞ ψ_ρ(e^{−cu−c(s_2−s_1)} z) du )
= exp( i⟨e^{−c(s_2−s_1)} y, z⟩ + ∫_0^{s_2−s_1} ψ_ρ(e^{−cu} z) du ).

In the above we have used Theorem 2.17 (ii) for μ ∈ L_0 and chosen ρ ∈ ID_log such that \hat{μ}(z) = exp( ∫_0^∞ ψ_ρ(e^{−cu} z) du ). The formula that we have obtained coincides with (2.20). Hence {Y_s} is a non-constant stationary OU type process generated by ρ and c with stationary distribution μ.

Conversely, suppose that {Y_s : s ∈ R} is a non-constant stationary OU type process generated by ρ ∈ ID_log and c with Φ^{(c)}(ρ) = L(Y_s) = μ. We know that {X_t} is c-selfsimilar and L(X_1) = L(Y_0) = μ. All we have to show is that {X_t} is an additive process. Since {Y_s} = Λ_c({X_t}), we know from Theorem 3.20 that {X_t} is a Markov process and (3.9) and (3.10) hold. We claim that P^X_{t_1,t_2}(x, B) = P^X_{t_1,t_2}(0, B − x) for all t_1, t_2, x, and B. Let ζ(B) = P^Y_{log t_1, log t_2}(t_1^{−c} x, B). Then it follows from (3.10) that ∫ e^{i⟨z, x'⟩} P^X_{t_1,t_2}(x, dx') = \hat{ζ}(t_2^c z). We know, from (2.20),

\hat{ζ}(z) = exp( i⟨e^{−c(log t_2 − log t_1)} t_1^{−c} x, z⟩ + ∫_0^{log t_2 − log t_1} ψ_ρ(e^{−cu} z) du ).

Hence

∫ e^{i⟨z, x'⟩} P^X_{t_1,t_2}(x, dx') = exp( i⟨x, z⟩ + ∫_0^{log t_2 − log t_1} ψ_ρ(e^{−c(u−log t_2)} z) du ).

It follows that

∫ e^{i⟨z, x'⟩} P^X_{t_1,t_2}(x, dx') = e^{i⟨x, z⟩} ∫ e^{i⟨z, x'⟩} P^X_{t_1,t_2}(0, dx').


This means that P^X_{t_1,t_2}(x, B) = P^X_{t_1,t_2}(0, B − x), as we claimed. Now, using Proposition 3.24 (ii), we conclude that {X_t} is an additive process in law. □

Corollary 3.25 Any stationary OU type process {Y_s : s ∈ R} on R^d has stationary distribution L(Y_s) in L_0. Conversely, any distribution in L_0(R^d) is the stationary distribution of some stationary OU type process.

Let us prove the following additional results.

Proposition 3.26 Let {Y_s} be a non-constant stationary OU type process generated by some ρ ∈ ID_log and c > 0 and let {X_t} = Λ'_{c'}({Y_s}) with c' > 0 different from c. Then {X_t} is a c'-selfsimilar process with non-trivial L(X_1), but {X_t} is not an additive process in law.

Proof By Theorem 3.13, {X_t} is a c'-selfsimilar process with non-trivial L(X_1) and {Y_s} = Λ_{c'}({X_t}). Suppose that {X_t} is an additive process in law. Then, by Theorem 3.23, {Y_s} is a non-constant stationary OU type process generated by (Φ^{(c')})^{−1}(μ) and c', where μ = L(Y_0). On the other hand, {Y_s} is a stationary OU type process generated by ρ = (Φ^{(c)})^{−1}(μ) and c. This contradicts the uniqueness of the generator in Proposition 2.16 (i). Therefore {X_t} is not an additive process in law. □

Proposition 3.27 Let {X_t} be a c-selfsimilar additive process in law with non-trivial L(X_1). Let {X''_t} = J_c^{(κ)}({X_t}) with κ ∈ (0, 1) ∪ (1, ∞), where J_c^{(κ)} is the mapping in Definition 3.17. Then {X''_t} is a cκ-selfsimilar process with L(X''_1) = L(X_1), but {X''_t} is not an additive process in law.

Proof This is another expression of Proposition 3.26. Apply that proposition to {Y_s} = Λ_c({X_t}) and {X''_t} = Λ'_{cκ}({Y_s}) and notice Λ'_{cκ} ∘ Λ_c = J_c^{(κ)} in Proposition 3.19. □

Example 3.28 Let us directly check Proposition 3.27 for Brownian motion {X_t} on R. Recalling that {X_t} is (1/2)-selfsimilar, let {X''_t} = J_{1/2}^{(κ)}({X_t}) = {t^{(κ−1)/2} X_t} with κ ∈ (0, 1) ∪ (1, ∞). In order to prove that {X''_t} is not an additive process in law, it is enough to show that X''_1 and X''_2 − X''_1 are not independent. Let us check variances. We have Var X''_1 = Var X_1 = 1 and

Var(X''_2 − X''_1) = Var(2^{(κ−1)/2} X_2 − X_1) = Var(2^{(κ−1)/2}(X_2 − X_1) + (2^{(κ−1)/2} − 1) X_1)
= 2^{κ−1} + (2^{(κ−1)/2} − 1)^2 = 2^κ − 2^{(κ−1)/2+1} + 1.

If X''_1 and X''_2 − X''_1 were independent, then Var X''_2 = Var X''_1 + Var(X''_2 − X''_1) = 2^κ − 2^{(κ−1)/2+1} + 2, which contradicts Var X''_2 = Var(2^{(κ−1)/2} X_2) = 2^κ.
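The variance computation in Example 3.28 is easy to confirm numerically (a sketch; κ = 2 and the sample size are our own illustrative choices):

```python
import math
import random

random.seed(7)

kappa, n = 2.0, 200_000
inc, x2pp = [], []
for _ in range(n):
    x1 = random.gauss(0.0, 1.0)                  # X_1 ~ N(0, 1)
    x2 = x1 + random.gauss(0.0, 1.0)             # X_2 = X_1 + independent N(0, 1)
    xpp1 = 1.0 ** ((kappa - 1) / 2) * x1         # X''_1 = 1^{(kappa-1)/2} X_1
    xpp2 = 2.0 ** ((kappa - 1) / 2) * x2         # X''_2 = 2^{(kappa-1)/2} X_2
    inc.append(xpp2 - xpp1)
    x2pp.append(xpp2)

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

predicted = 2 ** kappa - 2 ** ((kappa - 1) / 2 + 1) + 1
print(var(inc), predicted)          # Var(X''_2 - X''_1): empirical vs formula
print(var(x2pp), 2 ** kappa)        # Var X''_2 = 2^kappa
print(1 + predicted, 2 ** kappa)    # sum of variances differs: increments dependent
```

The last line exhibits the failure of additivity of variances, i.e., the dependence of X''_1 and X''_2 − X''_1 asserted in Proposition 3.27.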


Remark 3.29 Let {X_t : t ≥ 0} and {Y_s : s ∈ R} be a non-trivial c-selfsimilar additive process in law and a non-constant stationary OU type process generated by ρ ∈ ID_log and c, respectively, with {Y_s} = Λ_c({X_t}). Then the background driving Lévy process {Z_u} = {Z_u^{(ρ)}} of {Y_s} is constructed from {X_t} in the following way. Fix an arbitrary r ∈ R. Define Z_u = ∫_{exp r}^{exp(r+u)} t^{−c} dX_t for u ≥ 0, where the stochastic integral based on an additive process on R^d is defined similarly to Definition 2.3 (see Sato [96] (2004)). Then {Z_u : u ≥ 0} is a Lévy process. Indeed, it is clearly an additive process in law and, moreover, for 0 ≤ u_1 < u_2 and p > 0,

Z_{u_2+p} − Z_{u_1+p} = ∫_{exp(r+u_1+p)}^{exp(r+u_2+p)} t^{−c} dX_t = ∫_{exp(r+u_1)}^{exp(r+u_2)} (at)^{−c} dX_{at} =^d ∫_{exp(r+u_1)}^{exp(r+u_2)} t^{−c} dX_t = Z_{u_2} − Z_{u_1}

with a = e^p, since {a^{−c} X_{at}} =^d {X_t}. We can prove, for 0 ≤ u_1 ≤ u_2,

∫_{u_1}^{u_2} f(u) dZ_u = ∫_{exp(r+u_1)}^{exp(r+u_2)} f(log t − r) t^{−c} dX_t

for all locally bounded measurable functions f on [0, ∞), since this holds for step functions. Hence ∫_0^s e^{cu} dZ_u = ∫_{exp r}^{exp(r+s)} e^{−cr} dX_u = e^{−cr}(X_{exp(r+s)} − X_{exp r}). It follows that

Y_{r+s} = e^{−cs} Y_r + e^{−cs} ∫_0^s e^{cu} dZ_u  for s ≥ 0,

which is a special case of (2.14), showing that {Z_u} is the background driving Lévy process of the OU type process {Y_{r+s} : s ≥ 0}. Note that Y_r and {Z_u} are independent.
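For Brownian motion the construction in Remark 3.29 can be made concrete: with c = 1/2 and X = B a standard Brownian motion, Z_u = ∫_{e^r}^{e^{r+u}} t^{−1/2} dB_t is Gaussian with variance ∫_{e^r}^{e^{r+u}} t^{−1} dt = u, so the background driving Lévy process is again a standard Brownian motion. A numerical sketch (r, u, the discretization, and the sample size are our own illustrative choices):

```python
import math
import random

random.seed(9)

r, u, m, n = 0.3, 1.0, 300, 2_000
lo, hi = math.exp(r), math.exp(r + u)
dt = (hi - lo) / m

# The integrand t^{-1/2} is deterministic, so Var(Z_u) = integral of t^{-1} dt = u.
riemann = sum(1.0 / (lo + (j + 0.5) * dt) * dt for j in range(m))

# Monte Carlo check via a forward (Ito-type) sum approximating Z_u.
acc = 0.0
for _ in range(n):
    z = 0.0
    for j in range(m):
        t = lo + j * dt
        z += t ** -0.5 * random.gauss(0.0, math.sqrt(dt))   # t^{-1/2} dB_t
    acc += z * z

print(riemann, acc / n, u)   # both close to u = Var(Z_u)
```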

Notes Selfsimilar additive processes were studied by Sato [88, 89] (1990, 1991) for the first time. Theorem 3.5 was proved in [89] in a more general form called wide-sense operator selfsimilar additive processes, which are related to Q-selfdecomposable distributions in Sect. 5.2. Since then, selfsimilar additive processes have been employed as models in mathematical finance under the name Sato processes, such as in Carr et al. [17] (2005), Eberlein and Madan [23] (2009), and Kokholm and Nicolato [48] (2010). Theorem 3.8 was given by Maejima et al. [62] (2000). Path behaviours of selfsimilar additive processes have been studied by Watanabe [132] (1996), Sato and Yamamuro [106, 107] (1998, 2000), Yamamuro [141, 142] (2000a, 2000b), and Watanabe and Yamamuro [137] (2010). One of the results is that any nondegenerate selfsimilar additive process on R^d is transient if d ≥ 3, like any nondegenerate Lévy process.

Lemma 3.9 (i) says that any linear transformation of a distribution in L_m belongs to L_m. In addition to this fact, there is μ = L(X) ∈ L_m(R^d) \ L_{m+1}(R^d) such that, for every d' × d real matrix F with 1 ≤ d' < d, L(FX) is in L_{m+1}(R^{d'}). This statement is true for m = −2, −1, 0, 1, . . . < ∞ with the understanding that L_{−1} = ID and L_{−2} = P. This is by Sato [92] (1998) for m = −1 and by Maejima et al. [63] (1999) for m ≥ 0. For m = −2, see the references in [92], where an explicit construction of μ using a "signed Lévy measure" is given.

The two representations of selfdecomposable distributions in selfsimilar additive processes (Theorem 3.5) and in stationary OU type processes (Corollary 3.25) were


originally proved separately. The former edition of this book constructed stationary OU type processes with time parameter in (−∞, ∞) by stochastic integrals. But those two kinds of processes are connected by a transformation introduced by Lamperti [53] (1962), which is now called the Lamperti transformation. This fact was pointed out by Jeanblanc, Pitman, and Yor [38] (2002), where examples induced by Bessel processes were examined. Maejima and Sato [60] (2003) extended this connection to the processes called semi-selfsimilar additive processes and semi-stationary OU type processes, in the representations of semi-selfdecomposable distributions. In the present edition we have made a close inspection of the Lamperti transformation and constructed stationary OU type processes via this transformation.

As a typical non-stable example, let μ be the exponential distribution on R with mean 1 and let {Y_t} and {X_t} be the Lévy process and the 1-selfsimilar additive process, respectively, with L(Y_1) = L(X_1) = μ. Then the jump times of Y_t(ω) are countable and dense in (0, ∞) a.s., while the path of X_t(ω) is a step function of t ∈ (ε, ∞) for any ε > 0 a.s. and the jump times of X_t(ω) accumulate only at t = 0 a.s. Moreover, lim_{t↓0} t^{−a} Y_t(ω) = 0 a.s. for any a > 0, while lim sup_{t↓0} X_t(ω)/(t log|log t|) = 1 a.s. Thus the path behaviours of {X_t} and {Y_t} are greatly different. See Sato [89] (1991).

An active area related to selfsimilarity is the study of positive selfsimilar Markov processes and their connection with Lévy processes. See Chapter 13 of Kyprianou [51] (2014) for definitions and examples.

Chapter 4

Multivariate Subordination

Subordination of stochastic processes consists of transforming a stochastic process {X_t} into another one through a random time change by an increasing process {Z_t}, where {X_t} and {Z_t} are assumed to be independent. Subordination of a Lévy process {X_t} on R^d by an increasing Lévy process {Z_t} on R is studied in books such as Feller [24] (1971) and Sato [93] (1999). In this case subordination produces a Lévy process: it introduces a new Lévy process {Y_t} by defining Y_t = X_{Z_t}. In Sect. 4.1 we recall the Lévy–Khintchine representation of the characteristic function of an increasing Lévy process and the description of the generating triplet of {Y_t} in terms of those of {X_t} and {Z_t}. Further, a characterization of Lévy processes on a cone K in R^N is given. They are called K-valued subordinators, an extension of the notion of an increasing Lévy process.

In Sect. 4.2, the concept of a Lévy process is expanded by replacing the time parameter t in [0, ∞) by a parameter s in a cone K in R^N. A natural generalization of the stationary independent increment property, assuming stationary independent increments along K-increasing sequences, leads to the notion of a K-parameter Lévy process on R^d. Certain continuity conditions are added in the definition. The concept of subordination is extended to substitution of s by a K-valued subordinator {Z_t}. It is proved that Y_t = X_{Z_t} is a Lévy process.

The positive orthant R^N_+ is a cone. A deeper study in the case K = R^N_+ is made in Sect. 4.3. Joint distributions of {X_s : s ∈ K} are examined and the relations of the generating triplets involved in subordination are clarified. The case of general cones K is examined in Sect. 4.4.

The notion of cone-parameter convolution semigroups {μ_s : s ∈ K} on R^d is introduced in Sect. 4.2. Here a K-parameter convolution semigroup is a family of distributions on R^d that satisfies μ_{s^1+s^2} = μ_{s^1} ∗ μ_{s^2} for s^1, s^2 ∈ K and has a

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2019 A. Rocha-Arteaga, K. Sato, Topics in Infinitely Divisible Distributions and Lévy Processes, SpringerBriefs in Probability and Mathematical Statistics, https://doi.org/10.1007/978-3-030-22700-5_4


continuity property. In Sect. 4.4 we discuss whether a given K-parameter convolution semigroup generates a K-parameter Lévy process and, if it does, whether it does so uniquely. The situation is complicated, but we touch on typical results and examples. Further, subordination of cone-parameter convolution semigroups is presented.

4.1 Subordinators and Subordination

Basic results on subordination of a Lévy process on R^d by an increasing Lévy process on R are presented here, but their proofs are omitted (they can be found in the book [93]). Increasing Lévy processes on R are often called subordinators. Further, in this section, we consider K-valued subordinators, that is, K-valued (or K-increasing) Lévy processes.

Definition 4.1 A Lévy process {Z_t : t ≥ 0} on R is called increasing if Z_t(ω) is increasing in t, almost surely. An increasing Lévy process on R is called a subordinator.

Theorem 4.2 A Lévy process {Z_t : t ≥ 0} on R with generating triplet (A_Z, ν_Z, γ_Z) is a subordinator if and only if

A_Z = 0, ν_Z((−∞, 0)) = 0, ∫_{(0,1]} x ν_Z(dx) < ∞, and γ_Z^0 ≥ 0,  (4.1)

where γ_Z^0 = γ_Z − ∫_{(0,1]} x ν_Z(dx) is the drift of {Z_t}. For any w ∈ C with Re w ≤ 0,

E[e^{wZ_t}] = e^{tΨ(w)},  (4.2)

Ψ(w) = γ_Z^0 w + ∫_{(0,∞)} (e^{ws} − 1) ν_Z(ds).  (4.3)

The generating triplet (AY , νY , γY ) of {Yt } is as follows:

(4.4)

4.1 Subordinators and Subordination

79

AY = γZ0 AX ,  μs (B)νZ (ds) + γZ0 νX (B) νY (B) = 

(0,∞)

γY =

for B ∈ B(Rd \ {0}),

(4.6)

 νZ (ds)

(0,∞)

(4.5)

|x|≤1

xμs (dx) + γZ0 γX .

(4.7)

If γ_Z^0 = 0 and ∫_{(0,1]} s^{1/2} ν_Z(ds) < ∞, then A_Y = 0, ∫_{|x|≤1} |x| ν_Y(dx) < ∞, and the drift of {Y_t} is zero.

This is Theorem 30.1 of [93]. It is a special case of Theorems 4.23 and 4.41, of which we will give a full proof.

Definition 4.4 The procedure in Theorem 4.3 of making {Y_t} from {Z_t} and {X_t} is called (Bochner's) subordination. We say that {Y_t} is subordinate to {X_t} by {Z_t}. Sometimes we call {X_t} the subordinand and {Y_t} the subordinated process.

In the proof of Theorem 4.3 the following fact is essential.

Lemma 4.5 Let {X_t} be a Lévy process on R^d. Then there are constants C(ε), C_1, C_2, C_3 such that

P[|X_t| > ε] ≤ C(ε) t for ε > 0,
E[|X_t|^2; |X_t| ≤ 1] ≤ C_1 t,
|E[X_t; |X_t| ≤ 1]| ≤ C_2 t,
E[|X_t|; |X_t| ≤ 1] ≤ C_3 t^{1/2}.

This is Lemma 30.3 of [93]. It is a special case of d = N = 1 in Lemma 4.40.
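Formula (4.4) is easy to test by simulation. The sketch below subordinates a standard Brownian motion on R by a gamma subordinator, so that Ψ(w) = −log(1 − w/q) and log μ̂(z) = −z²/2, giving E e^{izY_t} = (1 + z²/(2q))^{−t} (the variance gamma case); q, t, z and the sample size are our own illustrative choices:

```python
import math
import random

random.seed(3)

q, t, z, n = 2.0, 1.5, 1.0, 200_000

acc = 0.0
for _ in range(n):
    zt = random.gammavariate(t, 1.0 / q)      # subordinator value: L(Z_t) = Gamma(shape t, rate q)
    y = random.gauss(0.0, math.sqrt(zt))      # Y_t = X_{Z_t} with X standard Brownian motion
    acc += math.cos(z * y)                    # law of Y_t is symmetric, so E e^{izY_t} is real

empirical = acc / n
predicted = (1.0 + z * z / (2.0 * q)) ** (-t)  # e^{t Psi(log mu_hat(z))}
print(empirical, predicted)
```

The empirical characteristic function agrees with the closed form from (4.4) up to Monte Carlo error.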

A stochastic process {X_t} on R^d is called rotation invariant if {X_t} =^d {U X_t} for every d × d orthogonal matrix U.

Example 4.6 Let {X_t} be the Brownian motion on R^d and {Z_t} a non-trivial α-stable subordinator, 0 < α < 1, in Theorem 4.3. By Theorems 4.2 and 1.42, {Z_t} has characteristic function

E e^{izZ_1} = exp( c_1 ∫_{(0,∞)} (e^{izx} − 1) x^{−1−α} dx + iγ^0 z )

with 0 < α < 1, γ^0 ≥ 0, and c_1 > 0. Or, equivalently, we can write

E e^{izZ_1} = exp( −c|z|^α (1 − i tan(πα/2) sgn z) + iγ^0 z )


with c = c_1 α^{−1} Γ(1 − α) cos(πα/2) > 0. Thus an α-stable process on R with parameters (α, β, τ, c) of Definition 14.16 of [93] is a subordinator if and only if 0 < α < 1, β = 1, τ = γ^0 ≥ 0. The function Ψ(w) in (4.3) for w = −u ≤ 0 is given by Ψ(−u) = −c'u^α − γ^0 u with c' = c_1 α^{−1} Γ(1 − α) [93, Example 24.12]. Thus {Z_t} is a non-trivial strictly α-stable subordinator if and only if, in addition, γ^0 = 0; see Theorem 1.42 (iii).

Now let {Z_t} be a non-trivial strictly α-stable subordinator. Then Ψ(−u) = −c'u^α. From log \hat{μ}(z) = −(1/2)|z|^2 we get

E e^{i⟨z, Y_t⟩} = exp( −(t/2^α) c' |z|^{2α} )

by (4.4). Hence {Y_t} is a rotation invariant 2α-stable process. See [93], Theorem 14.14, for a characterization of rotation invariant stable distributions.

Example 4.7 Let, in Theorem 4.3, {X_t} be a Lévy process on R^d and {Z_t} a Γ-process with parameter q > 0. Define a measure V^q on R^d by L(Y_1) = q V^q. Then V^q is called the q-potential measure of {X_t}. We have

Ψ(−u) = ∫_0^∞ (e^{−ux} − 1) x^{−1} e^{−qx} dx = −log(1 + u/q), u ≥ 0;

see Example 1.45. The subordination Y_t = X_{Z_t} gives

E e^{i⟨z, Y_t⟩} = e^{−t log(1 − q^{−1} log \hat{μ}(z))} = (1 − q^{−1} log \hat{μ}(z))^{−t}, z ∈ R^d,

P(Y_t ∈ B) = (q^t / Γ(t)) ∫_0^∞ P(X_s ∈ B) s^{t−1} e^{−qs} ds.

Hence L(Y_1) = q V^q, where V^q(B) = ∫_0^∞ μ^s(B) e^{−qs} ds. In particular, if d = 1 and {X_t} is a Poisson process with parameter c > 0, then, for each t > 0, E e^{−uX_t} = e^{tc(e^{−u}−1)} and

E e^{−uY_t} = e^{−t log(1 − q^{−1} c(e^{−u} − 1))} = p^t (1 − (1 − p) e^{−u})^{−t}, u ≥ 0,

where p = q/(c + q). Hence Y_t has the negative binomial distribution with parameters t and q/(c + q). If d = 1 and {X_t} is a symmetric α-stable process with E e^{izX_t} = e^{−t|z|^α}, 0 < α ≤ 2, then

E e^{izY_1} = (1 + q^{−1}|z|^α)^{−1}, z ∈ R.

Here L(Y_1) is called a Linnik distribution or geometric stable distribution.

Definition 4.8 A subset K of R^N is a cone if it is a nonempty closed convex set, is not {0}, and satisfies
(a) if s ∈ K and a ≥ 0, then as ∈ K,
(b) if s ∈ K and −s ∈ K, then s = 0.¹

Notice that a cone is closed under multiplication by nonnegative reals and does not contain any straight line through 0. Assume, in the remainder of this section, that K is a cone in R^N. Then it determines a partial order.

Definition 4.9 Write s^1 ≤_K s^2 if s^2 − s^1 ∈ K. A sequence {s^n}_{n=1,2,...} ⊂ R^N is K-increasing if s^n ≤_K s^{n+1} for each n; K-decreasing if s^{n+1} ≤_K s^n for each n. A mapping f from [0, ∞) into R^N is K-increasing if f(t_1) ≤_K f(t_2) for t_1 < t_2; K-decreasing if f(t_2) ≤_K f(t_1) for t_1 < t_2.

Lemma 4.10 A cone K has the following properties.
(i) If s^1 ∈ K and s^2 ∈ K, then s^1 + s^2 ∈ K.
(ii) K does not contain any straight line.
(iii) There is an (N − 1)-dimensional linear subspace H of R^N such that, for any s ∈ K, (s + H) ∩ K is a bounded set.
(iv) If {s^n}_{n=1,2,...} is a K-decreasing sequence in K, then it is convergent.

Proof (i) Notice that s^1 + s^2 = 2((1/2)s^1 + (1/2)s^2).
(ii) Suppose that a straight line {s^0 + as^1 : a ∈ R}, s^1 ≠ 0, is contained in K. Then K ∋ n^{−1}(s^0 + ns^1) → s^1. Hence s^1 ∈ K. Similarly −s^1 ∈ K, since K ∋ n^{−1}(s^0 − ns^1) → −s^1. Hence K contains the straight line {as^1 : a ∈ R}, contradicting that K is a cone.
(iii) Let us admit the fact that there is an (N − 1)-dimensional linear subspace H of R^N such that K ∩ H = {0} (this fact is evident if N = 1 or 2 or if K = R^N_+; in the general case, books on convex analysis (e.g. Rockafellar [82] (1970)) are helpful in giving a proof). We can choose γ ≠ 0 such that H = {u : ⟨u, γ⟩ = 0} and K \ {0} ⊂ {u : ⟨u, γ⟩ > 0}. We claim that (s + H) ∩ K is bounded for any s ∈ K. Suppose that there are s^n ∈ (s + H) ∩ K with |s^n| → ∞. Then ⟨s^n, γ⟩ > 0 and ⟨s^n − s, γ⟩ = 0. A subsequence of |s^n|^{−1}s^n tends to some point u ∈ K with |u| = 1. From ⟨|s^n|^{−1}s^n − |s^n|^{−1}s, γ⟩ = 0 we have ⟨u, γ⟩ = 0, which contradicts K ∩ H = {0}.

¹ In the previous edition this was called a "proper cone", as condition (b) is imposed. Here we follow [71] and [72].


(iv) We use H and γ in the proof of (iii). Let {s^n}_{n=1,2,...} be a K-decreasing sequence in K. Let K_1 = {u ∈ K : ⟨u − s^1, γ⟩ ≤ 0}. Then K_1 is bounded. Indeed, if there are u^n ∈ K_1 with |u^n| → ∞, then a limit point v of |u^n|^{−1}u^n satisfies |v| = 1, v ∈ K, and ⟨v, γ⟩ ≤ 0, which is absurd. Now let us show that {s^n} is bounded. If |s^n| → ∞, then |s^1 + s^n| → ∞, |s^1 − s^n| → ∞, and s^1 + s^n, s^1 − s^n ∈ K, and hence, for all large n, s^1 + s^n ∉ K_1 and s^1 − s^n ∉ K_1, which means that ⟨s^n, γ⟩ > 0 and ⟨−s^n, γ⟩ > 0, a contradiction. Similarly, if a subsequence {s^{n(k)}} of {s^n} satisfies |s^{n(k)}| → ∞, we have a contradiction. Hence {s^n} is bounded. If two subsequences {s^{n(k)}} and {s^{m(l)}} tend to u and v, respectively, then v − u ∈ K since s^{m(l)} − s^{n(k)} ∈ K for n(k) > m(l), and similarly u − v ∈ K, which shows u = v since K does not contain any straight line through 0. Therefore {s^n} is convergent. □

Let (Θ, B, ρ) be a σ-finite measure space. We say that a collection {N(B) : B ∈ B} of random variables with values in Z_+ ∪ {+∞} is a Poisson random measure on Θ with intensity measure ρ if the following are satisfied: (a) for all B, N(B) has Poisson distribution with mean ρ(B); (b) if B_1, . . . , B_n are disjoint, then N(B_1), . . . , N(B_n) are independent; and (c) for every ω, N(·, ω) is a measure on Θ. Here we understand that a random variable X has Poisson distribution with mean 0 if X = 0 a.s., and Poisson distribution with mean ∞ if X = ∞ a.s.

Now let us extend Theorem 4.2 to higher dimensions.

Theorem 4.11 Let {Z_t : t ≥ 0} be a Lévy process on R^N with generating triplet (A, ν, γ). Then the following three conditions are equivalent.
(a) For any fixed t ≥ 0, Z_t ∈ K a.s.
(b) Almost surely, Z_t(ω) is K-increasing in t.
(c) The generating triplet satisfies

A = 0, ν(R^N \ K) = 0, ∫_{|x|≤1} |x| ν(dx) < ∞, and γ^0 ∈ K,  (4.8)

where γ 0 is the drift of {Zt }. Proof First, let us check the equivalence of (a) and (b). If (b) holds, then Zt = Zt − Z0 ∈ K a. s. and (a) holds. If (a) holds, then, for 0 ≤ s ≤ t, P [Zt − Zs ∈ K] = P [Zt−s ∈ K] = 1, hence P [Zt − Zs ∈ K for all s, t in Q ∩ [0, ∞) with s ≤ t] = 1, and thus (b) holds by right continuity of sample functions and by closedness of K. Let us show that (c) implies (a). Assume (c). By the Lévy–Itô decomposition of sample functions in Theorem 19.3 of [93],


Z_t(ω) = lim_{n→∞} ∫_{(0,t]×{|x|>1/n}} x J(d(s, x), ω) + tγ^0  a.s.,

 where, for B ∈ B (0, ∞) × (RN \ {0}) , J (B, ω) is defined to be the number of s such that (s, Zs (ω) − Zs− (ω)) ∈ B. It is known that J (B) is a Poisson random measure with intensity measure being the product of  Lebesgue measure on (0, ∞) and ν on RN \ {0}. Here we have used A = 0 and |x|≤1 |x|ν(dx) < ∞. It follows from ν(RN \ K) = 0 that 

E ∫_{(0,t]×({|x|>1/n}\K)} J(d(s, x)) = tν({|x| > 1/n} \ K) = 0

and hence ∫_{(0,t]×{|x|>1/n}} x J(d(s, x), ω) is the sum of a finite number of x in K for each ω. This, combined with γ^0 ∈ K and with (i) of Lemma 4.10 and closedness of K, shows that Z_t ∈ K a.s.

Conversely, assume (b) and let us show (c). We use a part of the general Lévy–Itô decomposition. Since all jumps Z_s − Z_{s−} are in K, we have

ν(R^N \ K) = E[J((0, 1] × (R^N \ K))] = 0.

We deal with ω such that Z_t(ω) is K-increasing in t and Z_0(ω) = 0, and we omit ω. If 0 ≤ s < t, then Z_{t−} − Z_s = lim_{ε↓0} Z_{t−ε} − Z_s ∈ K. Hence, if 0 < s_1 < · · · < s_n ≤ t, then

Z_t − Σ_{k=1}^{n} (Z_{s_k} − Z_{s_k−}) = Z_t − Z_{s_n} + Σ_{k=2}^{n} (Z_{s_k−} − Z_{s_{k−1}}) + Z_{s_1} ∈ K.

Let Z_t^{(n)} = ∫_{(0,t]×{|x|>1/n}} x J(d(s, x)). It is the sum of jumps with size bigger than 1/n up to time t. It follows that

Z_t − Z_t^{(n)} ∈ K and that

(Z_t − Z_t^{(n)}) − (Z_t − Z_t^{(n+1)}) = Z_t^{(n+1)} − Z_t^{(n)} ∈ K,

that is, Z_t − Z_t^{(n)} is a K-decreasing sequence in K. Hence, by (iv) of Lemma 4.10, Z_t − Z_t^{(n)} is convergent. Define Z_t^1 = lim_{n→∞} Z_t^{(n)} and Z_t^2 = Z_t − Z_t^1. We see that Z_t^1 and Z_t^2 take values in K. We claim that

x_n = ∫_{1/n<|x|≤1} x ν(dx) is convergent as n → ∞.

Indeed,

E exp(i⟨z, Z_t^{(n)}⟩) = exp( t ∫_{|x|>1/n} (e^{i⟨z,x⟩} − 1) ν(dx) )
= exp( t ∫_{1/n<|x|≤1} (e^{i⟨z,x⟩} − 1 − i⟨z, x⟩) ν(dx) + t ∫_{|x|>1} (e^{i⟨z,x⟩} − 1) ν(dx) + it⟨z, x_n⟩ ).

Remark 4.20 It follows from property (i) of Definition 4.19 that μ_0 = δ_0. Indeed, we have μ_0 = μ_0 ∗ μ_0 and thus \hat{μ}_0(z) = \hat{μ}_0(z)^2, which implies \hat{μ}_0(z) = 0 or 1 for each z. We then have \hat{μ}_0(z) = 1 for all z, since \hat{μ}_0(0) = 1 and \hat{μ}_0 is continuous. Therefore μ_0 = δ_0. It also follows from (i) that μ_s ∈ ID(R^d) for all s ∈ K because, for each n ∈ N, n^{−1}s ∈ K and μ_s = (μ_{n^{−1}s})^n.

Lemma 4.21 Let {X_s : s ∈ K} be a K-parameter Lévy process on R^d and let μ_s = L(X_s). Then the following are true.
(i) {μ_s : s ∈ K} is a K-parameter convolution semigroup on R^d.
(ii) {X_{ts^0} : t ≥ 0} is a Lévy process on R^d for any s^0 ∈ K.
(iii) μ_s is infinitely divisible for all s ∈ K.

Proof (i) L(X_{s^1+s^2}) = L((X_{s^1} − X_0) + (X_{s^1+s^2} − X_{s^1})) = L(X_{s^1} − X_0) ∗ L(X_{s^1+s^2} − X_{s^1}) = L(X_{s^1}) ∗ L(X_{s^2}), since X_{s^1} − X_0 and X_{s^1+s^2} − X_{s^1} are independent, L(X_{s^1+s^2} − X_{s^1}) = L(X_{s^2} − X_0), and X_0 = 0 a.s. Since X_{ts} → X_0 = 0 in probability as t ↓ 0, μ_{ts} → δ_0 as t ↓ 0.
(ii) Fix s^0 ∈ K. If 0 ≤ t_0 < · · · < t_n with n ≥ 2, then X_{t_{j+1}s^0} − X_{t_j s^0}, j =

0, . . . , n − 1, are independent. If 0 ≤ s < t, then X_{ts^0} − X_{ss^0} =^d X_{(t−s)s^0}. Note that lim_{t'→t} P[|X_{t's^0} − X_{ts^0}| > ε] = 0 and X_{0s^0} = 0 a.s. Finally, almost surely X_{ts^0} is right continuous with left limits in t, from property (e).
(iii) This follows from (i) by Remark 4.20. □

Remark 4.22 The {μ_s : s ∈ K} in Lemma 4.21 is called the K-parameter convolution semigroup induced by the K-parameter Lévy process {X_s : s ∈ K}. Conversely, is it true that any K-parameter convolution semigroup on R^d is induced by some K-parameter Lévy process on R^d? The answer is negative, as will be shown in Theorem 4.48.

Now let us give an analogue of the first half of Theorem 4.3 on subordination.

Theorem 4.23 Let {Z_t : t ≥ 0} be a K-valued subordinator and {X_s : s ∈ K} a K-parameter Lévy process on R^d. Suppose that they are independent. Define Y_t = X_{Z_t}. Then {Y_t : t ≥ 0} is a Lévy process on R^d.

Proof Let n ≥ 2 and let f^1, . . . , f^{n−1} be bounded measurable functions from R^d to R, and let 0 ≤ t_1 ≤ · · · ≤ t_n. Let s^k ∈ K, k = 1, . . . , n, with s^1 ≤_K s^2 ≤_K · · · ≤_K s^n, and let




G(s^1, . . . , s^n) = E[ Π_{k=1}^{n−1} f^k(X_{s^{k+1}} − X_{s^k}) ].

Since X_{s^{k+1}} − X_{s^k}, k = 1, . . . , n − 1, are independent, we have

G(s^1, . . . , s^n) = Π_{k=1}^{n−1} E[f^k(X_{s^{k+1}} − X_{s^k})].

Next, let g^k(s) = E[f^k(X_s)] for s ∈ K. Since X_{s^{k+1}} − X_{s^k} =^d X_{s^{k+1}−s^k}, we have E[f^k(X_{s^{k+1}} − X_{s^k})] = g^k(s^{k+1} − s^k). It follows that

G(s^1, . . . , s^n) = Π_{k=1}^{n−1} g^k(s^{k+1} − s^k).

We use the standard argument for independence (such as Proposition 1.16 of [93]). As {X_s} and {Z_t} are independent, we obtain

E[ Π_{k=1}^{n−1} f^k(Y_{t_{k+1}} − Y_{t_k}) ] = E[G(Z_{t_1}, . . . , Z_{t_n})] = Π_{k=1}^{n−1} E[g^k(Z_{t_{k+1}} − Z_{t_k})],

(4.12) noting that Zt1 , . . . , Ztn make a K-increasing sequence. Choosing f j = 1 for all j = k and using that {Zt } has stationary increments, we see that       E f k Ytk+1 − Ytk = E g k Ztk+1 − Ztk       = E g k Ztk+1 −tk = E f k Ytk+1 −tk .

(4.13)

Equations (4.12) and (4.13) say that {Yt } has independent increments and stationary increments, respectively. Evidently Y0 = 0 a. s. Since Zt is right continuous with left limits in t and since {Xs } has property (e) in Definition 4.18, Yt is right continuous with left limits in t a. s. Here notice that, when tn < t and tn ↑ t, we have Ztn ≤K Ztn+1 and Ztn → Zt− , but Ztn can be equal to Zt− ; even if Ztn = Zt− for large n, Ytn = XZtn is convergent. Now L(Yt − Ys ) = L(Yt−s ) → δ0 as t ↓ s or s ↑ t, which shows stochastic continuity of {Yt }. Thus {Yt } is a Lévy process on Rd .  Definition 4.24 Let M ≥ 2 and let K be a cone in RM . Let {Zt : t ≥ 0} be a K-valued subordinator, {Xs : s ∈ K} a K-parameter Lévy process on Rd , and assume that they are independent. The procedure in Theorem 4.23 of making {Yt }


from {X_s} and {Z_t} is called multivariate subordination. Here {X_s}, {Z_t}, and {Y_t} are called, respectively, subordinand, subordinating (or subordinator), and subordinated.

The following lemma is useful in considering examples.

Lemma 4.25 Let {X_s^1 : s ∈ K}, …, {X_s^n : s ∈ K} be independent K-parameter Lévy processes on R^d. Let X_s = X_s^1 + ⋯ + X_s^n. Then {X_s : s ∈ K} is a K-parameter Lévy process on R^d.

Proof It is straightforward to check the defining properties of a K-parameter Lévy process. □

Let us introduce the notions of strong basis and weak basis of a cone K in R^M. The smallest linear subspace of R^M that contains K is called the linear subspace generated by K.

Definition 4.26 Let K be a cone in R^M. If {e^1, …, e^N} is a linearly independent system in K such that K = {s_1 e^1 + ⋯ + s_N e^N : s_1 ≥ 0, …, s_N ≥ 0}, then {e^1, …, e^N} is called a strong basis of K. If {e^1, …, e^N} is a basis of the linear subspace generated by K and e^1, …, e^N are in K, then {e^1, …, e^N} is called a weak basis of K and K is called an N-dimensional cone. We say that K is nondegenerate if it is M-dimensional.

Example 4.27 In one dimension there are only two cones in R, the intervals [0, ∞) and (−∞, 0], and each of them has a strong basis. In two dimensions, let us identify R^2 with the complex plane C. Then, for any choice of θ_1 ∈ R and θ_2 ∈ (θ_1, θ_1 + π), the sector {z = re^{iθ} : r ≥ 0 and θ ∈ [θ_1, θ_2]} is a cone in C and it has a strong basis {e^{iθ_1}, e^{iθ_2}}. Any nondegenerate cone in C is of this form. In R^3 any triangular cone is a nondegenerate cone having a strong basis, but no non-triangular nondegenerate cone has a strong basis. Any cone K in R^M has a weak basis. If N = M, then the nonnegative orthant R_+^N has the standard basis {e^1, …, e^N} of R^N as a strong basis.

Remark 4.28 Let K be a cone with strong basis {e^1, …, e^N}. For k = 1, 2 let s^k = s_1^k e^1 + ⋯ + s_N^k e^N with s_1^k, …, s_N^k ∈ R. Then s^1 ≤_K s^2 if and only if s_j^1 ≤ s_j^2 for j = 1, …, N. Thus, in this sense, the partial order determined by K is equivalent to the componentwise order.

Definition 4.29 Let K be an N-dimensional cone in R^M and L the linear subspace generated by K. Let K′ be a cone in R^{M′}. We say that K and K′ are isomorphic if there exists a linear transformation T from L to R^{M′} such that dim(TL) = N and TK = K′. We call T an isomorphism and we denote by T^{−1} its inverse defined on TL.
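The procedure of Definition 4.24 can be sketched numerically. The following minimal Monte Carlo simulation is our own construction, not from the text: it takes K = R²_+, a subordinand X_s given by the sum of two independent Brownian motions, and a subordinator Z_t with independent gamma components (rates and seed are arbitrary choices). Conditionally on an increment dZ = (dZ_1, dZ_2) of Z, the increment of Y = X_{Z_t} is N(0, dZ_1 + dZ_2) in distribution, which is what the sketch uses.

```python
import numpy as np

def sample_Y_path(T=1.0, n=1000, rates=(2.0, 3.0), seed=0):
    """Sample Y_t = X_{Z_t} on a grid of [0, T] for K = R_+^2.

    Z_t has two independent gamma-subordinator components; the
    subordinand is X_s = V^1_{s_1} + V^2_{s_2} with independent
    Brownian motions V^1, V^2 (hypothetical illustrative choice)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    # gamma-process increments: Gamma(rate_j * dt, scale 1), independent in j and in time
    dZ = np.column_stack([rng.gamma(r * dt, 1.0, size=n) for r in rates])
    # Brownian subordinand: given dZ, the Y-increment is N(0, dZ_1 + dZ_2)
    dY = rng.normal(0.0, np.sqrt(dZ.sum(axis=1)))
    return np.concatenate(([0.0], np.cumsum(dY)))

path = sample_Y_path()
```

By Theorem 4.23 the sampled path is (a discretization of) a one-parameter Lévy process, even though the subordinand carries a two-dimensional parameter.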


From this definition we see that K′ is an N-dimensional cone. Further, u^1 ≤_{K′} u^2 if and only if T^{−1}u^1 ≤_K T^{−1}u^2. A system {e^1, …, e^N} is a strong basis (resp. weak basis) of K′ if and only if {T^{−1}e^1, …, T^{−1}e^N} is a strong basis (resp. weak basis) of K.

Proposition 4.30
(a) Any N-dimensional cone K with a strong basis is isomorphic to R_+^N.
(b) For any strong basis {e^1, …, e^N} of K there is an isomorphism given by (a) such that each s_1 e^1 + ⋯ + s_N e^N ∈ K is identified with (s_j)_{1≤j≤N} ∈ R_+^N.

Proof (a) Let {e^1, …, e^N} be a strong basis of K. Denote by {f^1, …, f^N} the standard basis of R^N. Consider any bijective mapping from the strong basis {e^1, …, e^N} of K to the strong basis {f^1, …, f^N} of R_+^N. It is readily seen that this mapping extends uniquely to a one-to-one linear transformation from the linear space generated by K onto R^N. Assertion (b) follows from (a) by considering the particular mapping e^j ↦ f^j, j = 1, …, N. □

Remark 4.31 If {e1 , . . . , eN } and {f 1 , . . . , f N } are both strong bases of K, then they are identical up to scaling and permutation. See Proposition 2.4 of [71] for a proof.

4.3 Case of Cone R_+^N

In this section we assume K = R_+^N, a cone in R^N. Denote the unit vectors e^k = (δ_{kj})_{1≤j≤N} for k = 1, …, N, where δ_{kj} = 1 or 0 according as k = j or not. Thus {e^1, …, e^N} is the standard basis of R^N and, at the same time, the strong basis of the cone R_+^N. The partial order s^1 ≤_K s^2 in R^N for K = R_+^N is equivalent to the componentwise order s_j^1 ≤ s_j^2, j = 1, …, N, for s^k = (s_j^k)_{1≤j≤N} = s_1^k e^1 + ⋯ + s_N^k e^N, k = 1, 2. First we give various examples of R_+^N-parameter Lévy processes and of R_+^N-parameter convolution semigroups. Then joint distributions of R_+^N-parameter Lévy processes are considered. Finally, the generating triplets appearing in multivariate subordination are described.
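For K = R_+^N the partial order is just the componentwise order, which is trivial to check directly. A minimal sketch (function names are ours, not from the text):

```python
import numpy as np

def leq_K(s1, s2):
    """Partial order s1 <=_K s2 for the cone K = R_+^N:
    s2 - s1 must lie in K, i.e. be componentwise nonnegative."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    return bool(np.all(s2 - s1 >= 0.0))

def is_K_increasing(seq):
    """Check that a finite sequence s^1, ..., s^n in R_+^N is K-increasing."""
    return all(leq_K(a, b) for a, b in zip(seq, seq[1:]))
```

Note that ≤_K is only a partial order: for instance, (1, 0) and (0, 1) are not comparable in either direction.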

Example 4.32 Let {V_t : t ≥ 0} be a Lévy process on R^d. Fix c = (c_j)_{1≤j≤N} ∈ R_+^N. Define

\[ X_s = V_{\langle c, s\rangle} = V_{c_1 s_1 + \cdots + c_N s_N} \qquad \text{for } s = (s_j)_{1\le j\le N} \in R_+^N. \tag{4.14} \]

Then {X_s : s ∈ R_+^N} is an R_+^N-parameter Lévy process on R^d. To show this, write K = R_+^N. If s^1 ≤_K s^2 ≤_K ⋯ ≤_K s^n with s^1, …, s^n ∈ K, then X_{s^{k+1}} − X_{s^k} = V_{⟨c,s^{k+1}⟩} − V_{⟨c,s^k⟩}, k = 1, …, n−1, are independent, since




⟨c, s^{k+1}⟩ − ⟨c, s^k⟩ ≥ 0. If s^1 ≤_K s^2 and s^3 ≤_K s^4 are such that s^2 − s^1 = s^4 − s^3, then

\[ X_{s^2} - X_{s^1} = V_{\langle c,s^2\rangle} - V_{\langle c,s^1\rangle} \overset{d}{=} V_{\langle c,\,s^2-s^1\rangle} = V_{\langle c,\,s^4-s^3\rangle} \overset{d}{=} X_{s^4} - X_{s^3}. \]

If s′ ∈ K and s′ → s, then X_{s′} = V_{⟨c,s′⟩} → V_{⟨c,s⟩} = X_s in probability. If {s^k}_{k≥1} is a K-decreasing sequence converging to s ∈ K, then |X_{s^k} − X_s| = |V_{⟨c,s^k⟩} − V_{⟨c,s⟩}| → 0, since ⟨c, s^k − s⟩ → 0. If {s^k}_{k≥1} is K-increasing, s^k ≠ s, and s^k → s, then ⟨c, s^k⟩ ≤ ⟨c, s^{k+1}⟩, ⟨c, s^k⟩ ≤ ⟨c, s⟩, and ⟨c, s^k⟩ → ⟨c, s⟩, and hence X_{s^k} = V_{⟨c,s^k⟩} converges to V_{⟨c,s⟩−} or V_{⟨c,s⟩}. Thus X_s is K-right continuous with K-left limits a.s. Finally, X_0 = V_{⟨c,0⟩} = 0 a.s.
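Example 4.32 can be realized numerically by sampling a single Lévy path and reading it at the collapsed times ⟨c, s⟩. Below is a minimal sketch under our own choices (a Brownian path, piecewise evaluation on a grid, and arbitrary constants):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 10.0, 10_000
dt = T / n
# one Brownian path V on [0, T]; V[k] approximates V at time k*dt
V = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))
c = np.array([1.0, 2.0])  # fixed c in R_+^2

def X(s):
    """X_s = V_{<c,s>} of (4.14), read off the discretized path
    (only valid while <c, s> <= T)."""
    t = float(np.dot(c, np.asarray(s, float)))
    return V[int(round(t / dt))]
```

Reading one path at nondecreasing times ⟨c, s^k⟩ is exactly why increments along a K-increasing parameter sequence are independent here.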

Example 4.33 Let {V_t^j : t ≥ 0}, j = 1, …, N, be independent Lévy processes on R^d. Define

\[ V_s = V^1_{s_1} + V^2_{s_2} + \cdots + V^N_{s_N} \qquad \text{for } s = (s_j)_{1\le j\le N} \in R_+^N. \tag{4.15} \]

Then {V_s : s ∈ R_+^N} is an R_+^N-parameter Lévy process on R^d. Indeed, for each j, {V^j_{s_j} : s ∈ R_+^N} is an R_+^N-parameter Lévy process, as it is a special case of Example 4.32 with c = e^j. Hence {V_s : s ∈ R_+^N} is an R_+^N-parameter Lévy process by Lemma 4.25.

Example 4.34 For each j = 1, …, N, let {U_t^j : t ≥ 0} be a Lévy process on R^{d_j}. Assume that they are independent. Let d = d_1 + ⋯ + d_N. Define

\[ U_s = (U^j_{s_j})_{1\le j\le N} \qquad \text{for } s = (s_j)_{1\le j\le N} \in R_+^N, \tag{4.16} \]

that is, U_s is the direct product of U^j_{s_j}, j = 1, …, N. Then {U_s : s ∈ R_+^N} is an R_+^N-parameter Lévy process on R^d. Indeed, for each k, let {X_s^k : s ∈ R_+^N} be the process defined as X_s^k = (X^{k,j}_{s_j})_{1≤j≤N} for s = (s_j)_{1≤j≤N} with X^{k,j}_{s_j} = 0 in R^{d_j} for j ≠ k and X^{k,k}_{s_k} = U^k_{s_k}. Then {X_s^k : s ∈ R_+^N} is an R_+^N-parameter Lévy process on R^d by the same proof as for each term of (4.15) in Example 4.33. The processes {X_s^k}, k = 1, …, N, are independent, U_s = X_s^1 + ⋯ + X_s^N, and Lemma 4.25 applies.

Example 4.35 Let {μ_s : s ∈ R²_+} be the collection of distributions on R such that, for each s = (s_1, s_2)′, μ_s is the Gaussian distribution on R with mean zero and variance s_1 + s_2. Here ′ means transpose as in Notation. Then {μ_s} is an R²_+-parameter convolution semigroup. Let {V_t^j : t ≥ 0}, j = 1, 2, 3, be independent Brownian motions on R. Let {X_s^1 : s ∈ R²_+} and {X_s^2 : s ∈ R²_+} be defined, respectively, by X_s^1 = V^1_{s_1} + V^2_{s_2} and X_s^2 = V^3_{s_1+s_2} for every s = (s_1, s_2)′. Then clearly both {X_s^1} and {X_s^2} are R²_+-parameter Lévy processes and they induce a common R²_+-parameter convolution semigroup {μ_s}, that is, L(X_s^1) = L(X_s^2) = μ_s for every s ∈ R²_+. Nevertheless,

\[ L\big((X^1_{e^1}, X^1_{e^2})'\big) = L\big((V^1_1 + V^2_0,\; V^1_0 + V^2_1)'\big) = L\big((V^1_1, V^2_1)'\big) \neq L\big((V^3_1, V^3_1)'\big) = L\big((X^2_{e^1}, X^2_{e^2})'\big). \]



Joint distributions of cone-parameter Lévy processes are not necessarily infinitely divisible, as is shown by the following example.

Example 4.36 Let {μ_s : s ∈ R²_+}, {X_s^1 : s ∈ R²_+}, {X_s^2 : s ∈ R²_+}, and e^1, e^2 be as in Example 4.35. Let X_s = X^1_{Us} + X^2_{(1−U)s}, where U is a Bernoulli random variable with 0 < p = P(U = 1) < 1 and P(U = 0) = 1 − p, such that U is independent of {{X_s^1}, {X_s^2}}. Then {X_s : s ∈ R²_+} is an R²_+-parameter Lévy process which induces the R²_+-parameter convolution semigroup {μ_s} by Proposition 3.9 of [72]. The joint distribution L((X_{e^1}, X_{e^2})′) of {X_s} is not infinitely divisible by Remark 3.11 of [72].

Theorem 4.37 Let us write K = R_+^N. Let {X_s : s ∈ K} be a K-parameter Lévy process on R^d. Define X_t^j = X_{te^j} and let {V_t^j : t ≥ 0}, j = 1, …, N, be independent Lévy processes on R^d such that {V_t^j} and {X_t^j} have the same distribution for each j. Define {V_s : s ∈ K} by

\[ V_{s_1 e^1 + \cdots + s_N e^N} = V^1_{s_1} + \cdots + V^N_{s_N}. \tag{4.17} \]

Then, for every n ∈ N and s^1, …, s^n ∈ K satisfying

\[ s^1 \le_K s^2 \le_K \cdots \le_K s^n, \tag{4.18} \]

we have

\[ (X_{s^k})_{1\le k\le n} \overset{d}{=} (V_{s^k})_{1\le k\le n}. \tag{4.19} \]

Proof We claim that

\[ X_s \overset{d}{=} V_s \qquad \text{for } s \in K. \tag{4.20} \]

Indeed, for s = (s_j)_{1≤j≤N} = s_1 e^1 + ⋯ + s_N e^N ∈ K,

\[ X_s = X_{s_1 e^1} + \big(X_{s_1 e^1 + s_2 e^2} - X_{s_1 e^1}\big) + \cdots + \big(X_s - X_{s_1 e^1 + \cdots + s_{N-1} e^{N-1}}\big). \]

The right-hand side is the sum of N independent terms by the definition of a K-parameter Lévy process. Further,

\[ X_{s_1 e^1} = X^1_{s_1} \overset{d}{=} V^1_{s_1}, \qquad X_{s_1 e^1 + s_2 e^2} - X_{s_1 e^1} \overset{d}{=} X_{s_2 e^2} = X^2_{s_2} \overset{d}{=} V^2_{s_2}, \]

and so on. Hence we obtain (4.20) from (4.17). Now we claim that (4.19) holds for s^1, …, s^n ∈ K satisfying (4.18). In order to prove this, it is enough to prove

\[ \big(X_{s^k} - X_{s^{k-1}}\big)_{1\le k\le n} \overset{d}{=} \big(V_{s^k} - V_{s^{k-1}}\big)_{1\le k\le n}, \tag{4.21} \]

where s^0 = 0, since there is an n × n matrix T such that

\[ \big(X_{s^k}\big)_{1\le k\le n} = T\big(X_{s^k} - X_{s^{k-1}}\big)_{1\le k\le n}, \qquad \big(V_{s^k}\big)_{1\le k\le n} = T\big(V_{s^k} - V_{s^{k-1}}\big)_{1\le k\le n}. \]

Since {V_s} also is a K-parameter Lévy process by Example 4.33, the components of each side of (4.21) are independent. Furthermore,

\[ X_{s^k} - X_{s^{k-1}} \overset{d}{=} X_{s^k - s^{k-1}} \overset{d}{=} V_{s^k - s^{k-1}} \overset{d}{=} V_{s^k} - V_{s^{k-1}} \]

by virtue of (4.20). Hence we have (4.21). Therefore (4.19) holds. □
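In the Gaussian case both sides of (4.19), and its failure for non-comparable parameters, can be seen from covariances alone. Take N = 2, X_s = W_{s_1+s_2} (Example 4.32 with c = (1, 1) and a standard Brownian motion W) and V_s = V^1_{s_1} + V^2_{s_2}. A small deterministic check (our construction):

```python
import numpy as np

def cov_X(s, t):
    """Cov(X_s, X_t) for X_s = W_{s_1+s_2} with one standard Brownian motion W."""
    return min(s[0] + s[1], t[0] + t[1])

def cov_V(s, t):
    """Cov(V_s, V_t) for V_s = V^1_{s_1} + V^2_{s_2} with independent standard BMs."""
    return min(s[0], t[0]) + min(s[1], t[1])

s1, s2 = (0.5, 0.25), (1.0, 2.0)   # s1 <=_K s2: covariances agree, as (4.19) predicts
assert np.isclose(cov_X(s1, s2), cov_V(s1, s2))

e1, e2 = (1.0, 0.0), (0.0, 1.0)    # not comparable: joint laws differ
assert cov_X(e1, e2) == 1.0 and cov_V(e1, e2) == 0.0
```

The second pair of assertions previews Remark 4.39: joint distributions outside K-increasing sequences are not determined by the marginal processes.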

Corollary 4.38 Let {X_s : s ∈ R_+^N} be an R_+^N-parameter Lévy process on R^d and define X_t^j = X_{te^j}. Then

\[ E e^{i\langle z, X_s\rangle} = \prod_{j=1}^N E e^{i\langle z, X^j_{s_j}\rangle} \qquad \text{for } s = (s_j)_{1\le j\le N},\; z \in R^d. \tag{4.22} \]

Proof This is an expression of (4.19) for n = 1. □
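The factorization (4.22) can be checked in closed form in the Gaussian setting of Example 4.35, where each marginal X^j_{s_j} is N(0, s_j) and X_s is N(0, s_1 + ⋯ + s_N). A toy verification (the evaluation point and parameters are our choices):

```python
import numpy as np

def cf_joint(z, s):
    """E exp(izX_s) for X_s = V^1_{s_1} + ... + V^N_{s_N} with standard BMs:
    X_s is N(0, s_1 + ... + s_N)."""
    return np.exp(-0.5 * sum(s) * z**2)

def cf_marginal(z, sj):
    """E exp(izX^j_{s_j}) with X^j_{s_j} ~ N(0, s_j)."""
    return np.exp(-0.5 * sj * z**2)

z, s = 0.7, (0.3, 1.2, 2.5)
lhs = cf_joint(z, s)
rhs = np.prod([cf_marginal(z, sj) for sj in s])
assert np.isclose(lhs, rhs)  # the factorization (4.22)
```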





Remark 4.39 In the notation of Theorem 4.37, the joint distribution L((X_{s^k})_{1≤k≤n}) of a K-parameter Lévy process {X_s : s ∈ K} is determined by L(X_{e^j}), j = 1, …, N, as long as (4.18) is satisfied. In particular, for each s, L(X_s) is determined by L(X_{e^j}), j = 1, …, N. However, general joint distributions are not determined by L(X_{e^j}), j = 1, …, N. For example, suppose that X_s = W_{s_1 + ⋯ + s_N} for s = (s_j)_{1≤j≤N} ∈ K with a Lévy process {W_t : t ≥ 0}, as in Example 4.32 with c_j = 1, j = 1, …, N. Then X_{e^1} = X_{e^2} = ⋯ = X_{e^N}, while V_{e^1}, V_{e^2}, …, V_{e^N} are independent. Thus L((X_{e^j})_{1≤j≤N}) ≠ L((V_{e^j})_{1≤j≤N}) except in the trivial case, although L(X_{e^j}) = L(V_{e^j}) for j = 1, …, N.

Let us extend Lemma 4.5 to R_+^N-parameter Lévy processes.

Lemma 4.40 Let {X_s : s ∈ R_+^N} be an R_+^N-parameter Lévy process on R^d. Then there are constants C(ε), C_1, C_2, C_3 such that

\[ P[|X_s| > \varepsilon] \le C(\varepsilon)|s| \quad \text{for } \varepsilon > 0, \tag{4.23} \]
\[ E[|X_s|^2;\, |X_s| \le 1] \le C_1 |s|, \tag{4.24} \]
\[ |E[X_s;\, |X_s| \le 1]| \le C_2 |s|, \tag{4.25} \]
\[ E[|X_s|;\, |X_s| \le 1] \le C_3 |s|^{1/2}. \tag{4.26} \]

Proof We use Theorem 4.37. Since X_s and V_s have the same distribution, it is enough to show the estimates for V_s. Notice that Σ_{j=1}^N |s_j| ≤ C|s| for some constant C > 0. The proofs of (4.23) and (4.24) use Lemma 4.5 for V^j_{s_j} as follows:

\[ P[|V_s| > \varepsilon] = P\Big[\Big|\sum_{j=1}^N V^j_{s_j}\Big| > \varepsilon\Big] \le P\big[|V^j_{s_j}| > \varepsilon/N \text{ for some } j\big] \le \sum_{j=1}^N P\big[|V^j_{s_j}| > \varepsilon/N\big] \le \widetilde C(\varepsilon) \sum_{j=1}^N s_j \]

for some constant \widetilde C(ε), and

\[ E\big[|V_s|^2;\, |V_s| \le 1\big] \le E\Big[\Big|\sum_{j=1}^N V^j_{s_j}\Big|^2;\, |V^j_{s_j}| \le 1\ \forall j\Big] + P\big[|V^j_{s_j}| > 1 \text{ for some } j\big] \le N^2 \sum_{j=1}^N E\big[|V^j_{s_j}|^2;\, |V^j_{s_j}| \le 1\big] + \sum_{j=1}^N P\big[|V^j_{s_j}| > 1\big] \le \widetilde C_1 \sum_{j=1}^N s_j \]

for some constant \widetilde C_1.

To prove (4.25), we denote the kth component by the superscript (k). We have

\[ \big|E[V_s;\, |V_s| \le 1]\big| \le \sum_{k=1}^N \big|E\big[iV_s^{(k)};\, |V_s| \le 1\big]\big| = \sum_{k=1}^N |I_{k1} + I_{k2} + I_{k3}|, \]

where

\[ I_{k1} = E\big[e^{iV_s^{(k)}} - 1\big], \quad I_{k2} = -E\big[e^{iV_s^{(k)}} - 1;\, |V_s| > 1\big], \quad I_{k3} = -E\big[e^{iV_s^{(k)}} - 1 - iV_s^{(k)};\, |V_s| \le 1\big]. \]

We have

\[ I_{k1} = E\big[e^{i\sum_{j=1}^N V^{j(k)}_{s_j}} - 1\big] = E\big[e^{i\sum_{j=1}^N V^{j(k)}_{s_j}} - e^{i\sum_{j=1}^{N-1} V^{j(k)}_{s_j}}\big] + \cdots + E\big[e^{iV^{1(k)}_{s_1}} - 1\big] \]

and hence

\[ |I_{k1}| \le \sum_{j=1}^N \big|E\big[e^{iV^{j(k)}_{s_j}} - 1\big]\big| = \sum_{j=1}^N \big|\big(E\, e^{iV^{j(k)}_1}\big)^{s_j} - 1\big| \le \widetilde C_2 \sum_{j=1}^N s_j \]

for some constant \widetilde C_2. As we have |I_{k2}| ≤ 2P[|V_s| > 1] ≤ 2C(1)|s| and

\[ |I_{k3}| \le \tfrac12 E\big[(V_s^{(k)})^2;\, |V_s| \le 1\big] \le \tfrac12 E\big[|V_s|^2;\, |V_s| \le 1\big] \le \tfrac12 C_1 |s| \]

by (4.23) and (4.24), we now obtain (4.25). Finally,

\[ E\big[|V_s|;\, |V_s| \le 1\big] \le \big(E\big[|V_s|^2;\, |V_s| \le 1\big]\big)^{1/2} \le C_1^{1/2} |s|^{1/2} \]

by Schwarz's inequality. □

Let us give a description of the generating triplets appearing in multivariate subordination in the case of the cone R_+^N.

Theorem 4.41 Let {Y_t : t ≥ 0} be a Lévy process on R^d obtained by multivariate subordination from an R_+^N-parameter Lévy process {X_s : s ∈ R_+^N} on R^d and an R_+^N-valued subordinator {Z_t : t ≥ 0} as in Theorem 4.23. Let X_t^j = X_{te^j}.

(i) The characteristic function of Y_t is

\[ E e^{i\langle z, Y_t\rangle} = e^{t\Psi_Z(\psi_X(z))}, \qquad z \in R^d, \tag{4.27} \]

where Ψ_Z is the function of (4.11) in Remark 4.14 and

\[ \psi_X(z) = (\psi_X^j(z))_{1\le j\le N}, \qquad \psi_X^j(z) = \log \widehat{L(X_1^j)}(z). \tag{4.28} \]

(ii) Let ν_Z and γ_Z^0 = (γ_{Z,j}^0)_{1≤j≤N} be the Lévy measure and the drift of {Z_t} and let (A_X^j, ν_X^j, γ_X^j) be the generating triplet of {X_t^j}. Let μ_s = L(X_s). Then the generating triplet (A_Y, ν_Y, γ_Y) of {Y_t} is as follows:

\[ A_Y = \sum_{j=1}^N \gamma_{Z,j}^0 A_X^j, \tag{4.29} \]

\[ \nu_Y(B) = \int_{R_+^N} \mu_s(B)\, \nu_Z(ds) + \sum_{j=1}^N \gamma_{Z,j}^0 \nu_X^j(B), \qquad B \in \mathcal{B}(R^d \setminus \{0\}), \tag{4.30} \]

\[ \gamma_Y = \int_{R_+^N} \nu_Z(ds) \int_{|x|\le 1} x\, \mu_s(dx) + \sum_{j=1}^N \gamma_{Z,j}^0 \gamma_X^j. \tag{4.31} \]


(iii) If ∫_{|s|≤1} |s|^{1/2} ν_Z(ds) < ∞ and γ_Z^0 = 0, then A_Y = 0, ∫_{|x|≤1} |x| ν_Y(dx) < ∞, and the drift γ_Y^0 of {Y_t} is zero.

Proof (i) Let {V_t^j : t ≥ 0}, j = 1, 2, …, N, and {V_s : s ∈ R_+^N} be the processes defined in Theorem 4.37. Then, by (4.22) of Corollary 4.38,

\[ E e^{i\langle z, X_s\rangle} = \prod_{j=1}^N E e^{i\langle z, X^j_{s_j}\rangle} = \prod_{j=1}^N e^{s_j \psi_X^j(z)} = e^{\langle s, \psi_X(z)\rangle} \tag{4.32} \]

for z ∈ R^d and s ∈ R_+^N. Use the standard argument for independence (as in Proposition 1.16 of [93]). We get

\[ E e^{i\langle z, Y_t\rangle} = E\Big[\big(E e^{i\langle z, X_s\rangle}\big)\big|_{s=Z_t}\Big] = E e^{\langle Z_t, \psi_X(z)\rangle} = e^{t\Psi_Z(\psi_X(z))} \]

for z ∈ R^d by (4.10), since Re⟨ψ_X(z), s⟩ = Σ_{j=1}^N (Re ψ_X^j(z)) s_j ≤ 0. This is (4.27).

(ii) Let z ∈ R^d. Let K = R_+^N. We have

\[ E e^{i\langle z, Y_t\rangle} = e^{t\Psi_Z(\psi_X(z))} = \exp\Big[ t\Big( \langle \gamma_Z^0, \psi_X(z)\rangle + \int_K \big(e^{\langle \psi_X(z), s\rangle} - 1\big)\, \nu_Z(ds) \Big) \Big] \]

by (4.11), since Re⟨ψ_X(z), s⟩ ≤ 0. Notice that

\[ \langle \gamma_Z^0, \psi_X(z)\rangle = \sum_{j=1}^N \gamma_{Z,j}^0 \psi_X^j(z) = \sum_{j=1}^N \gamma_{Z,j}^0 \Big( -\tfrac12 \langle z, A_X^j z\rangle + i\langle \gamma_X^j, z\rangle + \int_{R^d} g(z,x)\, \nu_X^j(dx) \Big) \]

with g(z, x) = e^{i⟨z,x⟩} − 1 − i⟨z, x⟩1_{{|x|≤1}}(x). Hence

\[ \langle \gamma_Z^0, \psi_X(z)\rangle = -\tfrac12 \Big\langle z, \Big(\sum_{j=1}^N \gamma_{Z,j}^0 A_X^j\Big) z \Big\rangle + i\Big\langle \sum_{j=1}^N \gamma_{Z,j}^0 \gamma_X^j,\; z \Big\rangle + \int_{R^d} g(z,x)\, \Big(\sum_{j=1}^N \gamma_{Z,j}^0 \nu_X^j\Big)(dx). \]

Next it follows from (4.32) that

\[ \int_K \big(e^{\langle \psi_X(z), s\rangle} - 1\big)\, \nu_Z(ds) = \int_K \big(E e^{i\langle z, X_s\rangle} - 1\big)\, \nu_Z(ds) = \int_K \nu_Z(ds) \int_{R^d} \big(e^{i\langle z,x\rangle} - 1\big)\, \mu_s(dx) \]
\[ = \int_K \nu_Z(ds) \int_{R^d} g(z,x)\, \mu_s(dx) + i\Big\langle z, \int_K \nu_Z(ds) \int_{|x|\le 1} x\, \mu_s(dx) \Big\rangle. \]

Here we used (4.25) of Lemma 4.40 and ∫_{|s|≤1} |s| ν_Z(ds) < ∞. Define \widetilde ν by \widetilde ν(B) = ∫_K μ_s(B \ {0}) ν_Z(ds), B ∈ B(R^d). Then, by (4.23) and (4.24) of Lemma 4.40,

\[ \int_{|x|\le 1} |x|^2\, \widetilde \nu(dx) \le C_1 \int_K |s|\, \nu_Z(ds) < \infty, \qquad \int_{|x|>1} \widetilde \nu(dx) \le C(1) \int_K |s|\, \nu_Z(ds) < \infty. \]

Hence

\[ \int_K \big(e^{\langle \psi_X(z), s\rangle} - 1\big)\, \nu_Z(ds) = \int_{R^d} g(z,x)\, \widetilde \nu(dx) + i\Big\langle \int_K \nu_Z(ds) \int_{|x|\le 1} x\, \mu_s(dx),\; z \Big\rangle. \]

Thus we get (4.29)–(4.31). (iii) Assume |s|≤1 |s|1/2 νZ (ds) < ∞ and γZ0 = 0. Then AY = 0 by (4.29), 

 |x|≤1



|x|νY (dx) =

νZ (ds) K

|x|≤1

|x|μs (dx) < ∞

by (4.26) of Lemma 4.40, and  γY0 = γY −

|x|≤1

 xνY (dx) =

by (4.30) and (4.31).

 νZ (ds)

K

 |x|≤1

xμs (dx) −

|x|≤1

xνY (dx) = 0 

Remark 4.42 Theorem 4.41 shows that the distribution of {Yt } subordinate to {Xs } by {Zt } is determined by the distributions of {Xt1 }, . . . , {XtN }, and {Zt }, although the joint distributions of {Xs } are not in general determined by {Xt1 }, . . . , {XtN } as Remark 4.39 says. This is because relevant joint distributions of {Xs } are only those with RN + -increasing sequences of parameters and they are determined by {Xt1 }, . . . , {XtN } as in Theorem 4.37.
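For N = 1 formula (4.27) reduces to the classical subordination formula, which can be sanity-checked by Monte Carlo. The sketch below (our construction) subordinates a standard Brownian motion by a gamma subordinator with shape rate a and rate b, for which Ψ_Z(w) = −a log(1 − w/b) is the standard log-Laplace exponent; parameters, evaluation point, and seed are arbitrary choices.

```python
import numpy as np

a, b, z = 1.5, 2.0, 0.8                 # gamma subordinator: Z_t ~ Gamma(shape a*t, rate b)
psi_X = -0.5 * z**2                     # log-characteristic exponent of standard BM at z
rhs = np.exp(-a * np.log(1.0 - psi_X / b))   # e^{Psi_Z(psi_X(z))} at t = 1, per (4.27)

rng = np.random.default_rng(2)
Z1 = rng.gamma(a, 1.0 / b, size=200_000)     # samples of Z_1 (numpy uses shape/scale)
lhs = np.mean(np.exp(psi_X * Z1))            # E[E[e^{izY_1} | Z_1]] = E e^{psi_X(z) Z_1}
assert abs(lhs - rhs) < 0.01
```

The resulting law of Y_1 is the variance gamma distribution, a standard example of ordinary subordination.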


4.4 Case of General Cone K

In order to make clear the relation between K-parameter Lévy processes and K-parameter convolution semigroups for a cone K, we introduce the notion of generativeness and related ones for K-parameter convolution semigroups.

Definition 4.43 Let K be a cone in R^M. Let {μ_s : s ∈ K} be a K-parameter convolution semigroup. A K-parameter Lévy process in law {X_s : s ∈ K} is said to be associated with {μ_s : s ∈ K} if L(X_s) = μ_s. We say that {μ_s : s ∈ K} is generative if there is a K-parameter Lévy process in law associated with it; otherwise, {μ_s : s ∈ K} is non-generative. We say that {μ_s : s ∈ K} is unique-generative if it is generative and any two K-parameter Lévy processes in law {X_s^1 : s ∈ K} and {X_s^2 : s ∈ K} associated with it have a common distribution as processes. If {μ_s : s ∈ K} is generative but not unique-generative, we say that it is multiple-generative.

Remark 4.44 Existence of multiple-generative K-parameter convolution semigroups on R^d follows from Example 4.35 and Remark 4.39.

In the case of a cone K with a strong basis, the main results on generativeness and unique-generativeness of K-parameter convolution semigroups are contained in the following theorem. We call a subset L of R^d an additive subgroup of R^d if x − y ∈ L for all x, y ∈ L.

Theorem 4.45 Let K be a cone in R^M with a strong basis {e^1, …, e^N} and let {μ_s : s ∈ K} be a K-parameter convolution semigroup on R^d. Let V_s = V^1_{s_1} + ⋯ + V^N_{s_N} for s = s_1 e^1 + ⋯ + s_N e^N in K, where {V_t^j : t ≥ 0}, j = 1, …, N, are independent Lévy processes satisfying L(V_1^j) = μ_{e^j} for j = 1, …, N. Then:
(i) {μ_s : s ∈ K} is generative. In particular, {V_s : s ∈ K} is a K-parameter Lévy process associated with {μ_s}.
(ii) The following three statements are equivalent:
(a) {μ_s : s ∈ K} is unique-generative.
(b) Any K-parameter Lévy process in law {X_s : s ∈ K} associated with {μ_s : s ∈ K} has the same distribution as {V_s : s ∈ K}.
(c) For any K-parameter Lévy process in law {X_s : s ∈ K} associated with {μ_s : s ∈ K} we have X_s = X_{s_1 e^1} + ⋯ + X_{s_N e^N} almost surely for s = s_1 e^1 + ⋯ + s_N e^N ∈ K.
(iii) For j = 1, …, N let L_j be an additive subgroup of R^d such that L_j ∈ B(R^d). Assume that L_j ∩ L_k = {0} for all distinct j, k. If μ_{te^j}(L_j) = 1 for all t ≥ 0 and j = 1, …, N, then {μ_s} is unique-generative.

The proof of (i) is similar to that of Theorem 4.37 for K = R_+^N. Assertion (ii) can be proved by the same idea. The proof of (iii) is rather technical; see Theorem 4.2 of [72].


Example 4.46 Let K = R²_+ and let {e^1, e^2} be the standard basis of R². Let L_1 = Q^d and L_2 = (cQ)^d with c ∈ R \ Q. Let {μ_s : s ∈ R²_+} be the compound Poisson convolution semigroup on R^d such that, for each j, the Lévy measure of μ_{e^j} is concentrated on L_j. Then {μ_s} is unique-generative, as Theorem 4.45 (iii) applies. This is Example 4.4 of [72].

The following definition and the two subsequent theorems are concerned with the existence of non-generative cone-parameter convolution semigroups.

Definition 4.47 Let d ≥ 2 and let S_d^+ be the set of symmetric nonnegative-definite d × d matrices. Each s = (s_{jk})_{j,k=1}^d ∈ S_d^+ is determined by its lower triangle (s_{jk})_{j≤k} with d(d+1)/2 entries. The set S_d^+ is thereby identified with a subset of R^{d(d+1)/2}, and S_d^+ is a nondegenerate cone in R^{d(d+1)/2}. For s ∈ S_d^+ let μ_s be the Gaussian distribution on R^d with mean zero and covariance matrix s. Then, obviously, {μ_s : s ∈ S_d^+} is an S_d^+-parameter convolution semigroup on R^d. It is called the canonical S_d^+-parameter convolution semigroup.

Theorem 4.48 The canonical S_d^+-parameter convolution semigroup on R^d is non-generative.

This can be rephrased as follows: in the sense of cone-parameter Lévy processes, there is no Brownian motion on R^d with parameter in S_d^+. Theorem 4.48 is a consequence of the following more general theorem. We say that a K-parameter convolution semigroup {μ_s : s ∈ K} is trivial if μ_s is trivial for all s ∈ K.

Theorem 4.49 Let d ≥ 2 and let K = S_d^+. Let {μ_s : s ∈ K} be a non-trivial K-parameter convolution semigroup on R^d such that ∫ |x|² μ_s(dx) < ∞ and the covariance matrix v_s of μ_s satisfies v_s ≤_K s for all s ∈ K. Then {μ_s : s ∈ K} is non-generative.

This is by Pedersen and Sato [72, Theorem 4.1] (2004). In their proof, assertion (iii) on unique-generativeness in Theorem 4.45 is effectively used.

Remark 4.50 The cone S_d^+ has no strong basis. A proof is given by Theorem 4.48 combined with Theorem 4.45.

Some sufficient conditions for generativeness of K-parameter convolution semigroups for a general cone K are known.

Theorem 4.51 Let K be a cone in R^M and let {μ_s : s ∈ K} be a K-parameter convolution semigroup on R^d. Then the following two statements are true.
(i) If d = 1, then {μ_s : s ∈ K} is generative.
(ii) If μ_s is purely non-Gaussian for all s ∈ K, then {μ_s : s ∈ K} is generative.

See Theorems 5.2 and 5.3 of [72].

We have shown in Theorem 4.23 that, given a K-parameter Lévy process and a K-valued subordinator for a cone K, we can perform multivariate subordination. However, Theorem 4.41 shows that, at least for K = R_+^N, the distribution of the subordinated Lévy process is determined by the induced K-parameter convolution


semigroup and the distribution of the K-valued subordinator. This suggests that it is more natural to study subordination of K-parameter convolution semigroups than subordination of K-parameter Lévy processes. This study was made by Pedersen and Sato [71] (2003), where the description of the generating triplets of the subordinated convolution semigroup is given for a general cone K. Next we recall these results. First, we give some basic properties of K-parameter convolution semigroups.

Proposition 4.52 Let K be a cone in R^M. Let {e^1, …, e^N} be a weak basis of K. Then any K-parameter convolution semigroup {μ_s : s ∈ K} on R^d has the following properties.
(i) μ_s ∈ ID(R^d) for each s ∈ K, and μ_{ts} = μ_s^{t∗} for t ≥ 0.
(ii) For s = s_1 e^1 + ⋯ + s_N e^N ∈ K we have

\[ \widehat{\mu_s}(z) = (\widehat{\mu_{e^1}})^{s_1}(z) \cdots (\widehat{\mu_{e^N}})^{s_N}(z), \qquad z \in R^d, \tag{4.33} \]

where (\widehat{μ_{e^j}})^{s_j}(z) = exp{s_j (log \widehat{μ_{e^j}})(z)} as defined in Definition 1.3.
(iii) Let (A_s, ν_s, γ_s) be the generating triplet of μ_s. For s = s_1 e^1 + ⋯ + s_N e^N ∈ K we have A_s = s_1 A_{e^1} + ⋯ + s_N A_{e^N}, ν_s = s_1 ν_{e^1} + ⋯ + s_N ν_{e^N}, and γ_s = s_1 γ_{e^1} + ⋯ + s_N γ_{e^N}.
(iv) For any sequence {s^n}_{n=1,2,…} with |s^n − s^0| → 0 as n → ∞, we have μ_{s^n} → μ_{s^0} as n → ∞.

Recall that some of s_1, …, s_N may be negative unless {e^1, …, e^N} is a strong basis.

Proof (i) For each n ∈ N, μ_s = (μ_{(1/n)s})^{n∗}. Hence μ_s ∈ ID(R^d) and μ_{(1/n)s} = μ_s^{(1/n)∗}. Thus μ_{(m/n)s} = μ_s^{(m/n)∗}. Since \widehat{μ}_{ts} is right-continuous in t by property (ii) of Definition 4.19, we have \widehat{μ}_{ts} = (\widehat{μ}_s)^t.
(ii) Write s_j = s_j^+ − s_j^−, where s_j^+ = s_j ∨ 0 and s_j^− = −(s_j ∧ 0). We have s = u − v, where u = s_1^+ e^1 + ⋯ + s_N^+ e^N ∈ K and v = s_1^− e^1 + ⋯ + s_N^− e^N ∈ K. Then μ_s ∗ μ_v = μ_u. Hence, by (4.33),

\[ \widehat{\mu_s}(z) = \frac{\widehat{\mu_u}(z)}{\widehat{\mu_v}(z)} = \frac{(\widehat{\mu_{e^1}})^{s_1^+}(z) \cdots (\widehat{\mu_{e^N}})^{s_N^+}(z)}{(\widehat{\mu_{e^1}})^{s_1^-}(z) \cdots (\widehat{\mu_{e^N}})^{s_N^-}(z)}, \]

which is (4.33).


(iii) This follows from (4.33) by use of Exercise 12.2 of [93].
(iv) This also follows from (4.33), since |s^n − s^0| → 0 implies s_j^n → s_j^0 for j = 1, …, N. □

Let us proceed to a generalization of subordination. For any measure μ on R^d and any μ-integrable function f, we write μ(f) = ∫_{R^d} f(x) μ(dx).

Theorem 4.53 Let K_1 be an N_1-dimensional cone in R^{M_1} and K_2 an N_2-dimensional cone in R^{M_2}. Let {μ_u : u ∈ K_2} be a K_2-parameter convolution semigroup on R^d and {ρ_s : s ∈ K_1} a K_1-parameter convolution semigroup on R^{M_2} such that ρ_s(K_2) = 1 for all s ∈ K_1. Define a distribution σ_s on R^d by

\[ \sigma_s(f) = \int_{K_2} \mu_u(f)\, \rho_s(du) \tag{4.34} \]

for all bounded continuous functions f on R^d. Then the family {σ_s : s ∈ K_1} is a K_1-parameter convolution semigroup on R^d.

See Theorem 4.3 of Pedersen and Sato [71] (2003). Notice that μ_u(f) in (4.34) is continuous in u by virtue of Proposition 4.52 (iv).

Definition 4.54 The above procedure for obtaining {σ_s : s ∈ K_1} is called subordination of {μ_u : u ∈ K_2} by {ρ_s : s ∈ K_1}. The convolution semigroups {μ_u : u ∈ K_2}, {ρ_s : s ∈ K_1}, and {σ_s : s ∈ K_1} are called, respectively, subordinand, subordinating (or subordinator), and subordinated.

The following theorem reduces to Theorem 4.3 in the case of ordinary subordination K_1 = K_2 = R_+. In the case of multivariate subordination K_1 = R_+ and K_2 = K it reduces to Theorem 4.41.
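The mixture (4.34) is easy to compute when ρ_s is discrete. In the toy sketch below (all choices ours) K_2 = [0, ∞), μ_u = N(0, u), and ρ_s is a two-point distribution, so σ_s(f) is just a weighted sum:

```python
import numpy as np

# rho_s: a two-point distribution on K_2 = [0, infinity)
points, weights = np.array([0.5, 2.0]), np.array([0.3, 0.7])

def mu_u_of_f(u, z):
    """mu_u(f) for f(x) = exp(izx) and mu_u = N(0, u), i.e. the
    characteristic function of mu_u at z."""
    return np.exp(-0.5 * u * z**2)

def sigma_s_of_f(z):
    """sigma_s(f) = integral of mu_u(f) over rho_s(du), per (4.34)."""
    return float(np.dot(weights, mu_u_of_f(points, z)))

# sigma_s is again a distribution: its characteristic function is 1 at z = 0
assert np.isclose(sigma_s_of_f(0.0), 1.0)
```

The result is a two-component Gaussian mixture, the discrete analogue of the subordinated semigroup in Theorem 4.53.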

Theorem 4.55 Let {μ_u : u ∈ K_2}, {ρ_s : s ∈ K_1}, and {σ_s : s ∈ K_1} be the subordinand, subordinating, and subordinated convolution semigroups in Theorem 4.53. Let {h^1, …, h^{N_2}} be a weak basis of K_2. Let (A_μ^j, ν_μ^j, γ_μ^j) be the generating triplet of μ_{h^j} for j = 1, …, N_2. Let ν_{ρ_s} and γ_{ρ_s}^0 be the Lévy measure and the drift of ρ_s for s ∈ K_1, and decompose γ_{ρ_s}^0 = γ_{ρ_s,1}^0 h^1 + ⋯ + γ_{ρ_s,N_2}^0 h^{N_2}. Let R be the orthogonal projection from R^{M_2} onto the linear subspace L_2 generated by K_2 and let T be the linear transformation from R^{M_2} onto R^{N_2} defined by Tu = (u_j)_{1≤j≤N_2}, where Ru = u_1 h^1 + ⋯ + u_{N_2} h^{N_2}. Then:

(i) For any s ∈ K_1, the characteristic function of σ_s is

\[ \int_{R^d} e^{i\langle z, x\rangle} \sigma_s(dx) = e^{\Psi_s^\rho(w)}, \qquad z \in R^d, \]


where

\[ \Psi_s^\rho(w) = \langle T\gamma_{\rho_s}^0, w\rangle + \int_{K_2} \big(e^{\langle w, Tu\rangle} - 1\big)\, \nu_{\rho_s}(du), \qquad w = (w_j)_{1\le j\le N_2}, \]

\[ w_j = -\tfrac12 \langle z, A_\mu^j z\rangle + \int_{R^d} g(z,x)\, \nu_\mu^j(dx) + i\langle \gamma_\mu^j, z\rangle, \]

with g(z, x) of (1.19).

(ii) For any s ∈ K_1, the generating triplet (A_{σ_s}, ν_{σ_s}, γ_{σ_s}) of σ_s is as follows:

\[ A_{\sigma_s} = \sum_{j=1}^{N_2} \gamma_{\rho_s,j}^0 A_\mu^j, \]

\[ \nu_{\sigma_s}(B) = \int_{K_2} \mu_u(B)\, \nu_{\rho_s}(du) + \sum_{j=1}^{N_2} \gamma_{\rho_s,j}^0 \nu_\mu^j(B), \qquad B \in \mathcal{B}(R^d \setminus \{0\}), \]

\[ \gamma_{\sigma_s} = \int_{K_2} \nu_{\rho_s}(du) \int_{|x|\le 1} x\, \mu_u(dx) + \sum_{j=1}^{N_2} \gamma_{\rho_s,j}^0 \gamma_\mu^j. \]

(iii) Fix s ∈ K_1. If ∫_{K_2 ∩ {|u|≤1}} |u|^{1/2} ν_{ρ_s}(du) < ∞ and γ_{ρ_s}^0 = 0, then A_{σ_s} = 0, ∫_{|x|≤1} |x| ν_{σ_s}(dx) < ∞, and the drift γ_{σ_s}^0 of σ_s is zero.

This is Theorem 4.4 of Pedersen and Sato [71] (2003). Recall that some γ_{ρ_s,j}^0 may be negative unless {h^1, …, h^{N_2}} is a strong basis of K_2. The description of the generating triplet of the subordinated process in the multivariate subordination of Definition 4.24, based on Theorem 4.23 for a general cone K, is obtained as the special case of Theorem 4.55 with K_1 = [0, ∞).

Notes

The idea of subordination originated with Bochner and was expounded in his book [14] (1955). Cone-parameter Lévy processes and their subordination have been studied in [8, 72, 74] (2001, 2004, 2005). When K = R_+^N, K-parameter Lévy processes and their subordination were introduced in [8] (2001); in this case all results in this chapter on multivariate subordination are found in [8] (2001), but the proof of Theorem 4.41 has been simplified. Example 4.15 is from [8] (2001); several other examples of constructions of N-variate subordinators are also contained in that paper. In the case of a general cone K, all results in this chapter on K-parameter convolution semigroups are found in [71, 72] (2003, 2004); Theorem 4.23 is from there.


Furthermore, Examples 4.35 and 4.36, Definition 4.47, and related results are from [72] (2004). In Sect. 4.2 the notion of subordination has been extended to the case of parameters in a general cone K in R^N. We mention that Bochner [14] (1955) already considered processes with parameter in a cone, under the name of multidimensional time variable. The cone-parameter Lévy processes and their subordination studied in [8] (2001) and [72] (2004) are intimately connected with the cone-parameter convolution semigroups and their subordination introduced by Pedersen and Sato [71] (2003). The description of the generating triplet of the subordinated process in multivariate subordination of an R_+^N-parameter Lévy process is made in [8] (2001); see Theorem 4.41. More generally, in the case of subordination of cone-parameter convolution semigroups this description is made for all cones K in R^N in [71] (2003); see Theorem 4.55. Furthermore, since any cone in R^N with a strong basis is isomorphic to R_+^N, the results of [8] (2001) apply to any cone in R^N with a strong basis; see Proposition 4.30.

Subordination and the description of the generating triplet of the subordinated process have also been studied in the infinite-dimensional case. The paper [74] (2005) deals with subordination of cone-parameter Lévy processes with values in the Banach space of trace-class operators of a separable Hilbert space by a subordinator with values in the same Banach space, and describes the generating triplet of the subordinated process similarly to (4.29)–(4.31). Pérez-Abreu and Rocha-Arteaga [73] (2003) studied subordination of a Banach-space-valued Lévy process by a real-valued subordinator and clarified the generating triplet of the subordinated Banach-space-valued Lévy process.

In the Gaussian case the multiparameter Brownian motion {B_s : s ∈ R^N} and the Brownian sheet {W_s : s ∈ R^N} have been discussed for a long time. We mention Lévy [55] (1948), Chentsov [18] (1957), and McKean [66] (1963) for the former and Orey and Pruitt [70] (1973) and Khoshnevisan and Shi [47] (1999) for the latter. When the parameter s is restricted to a cone K not isomorphic to [0, ∞), neither {B_s : s ∈ K} nor {W_s : s ∈ K} is a K-parameter Lévy process. Likewise, the two-parameter Lévy processes in Vares [131] (1983) and Lagaize [52] (2001) are not K-parameter Lévy processes in our sense. But probabilistic potential theory for the R_+^N-parameter Lévy process in Example 4.33 with {V_t^j}, j = 1, …, N, being symmetric Lévy processes was studied by Hirsch [34] (1995), and, in the case where {V_t^j} is a Brownian motion on R^d for each j, Khoshnevisan and Shi [47] (1999) called it the (N, d) additive Brownian motion and studied its capacity.

Theorem 4.11 in Sect. 4.1 on Lévy processes taking values in a cone is by Skorohod [115] (1991). In the terminology of Theorem 4.11, for a cone K in R^N, a K-increasing Lévy process on R^N is determined by an infinitely divisible distribution concentrated on K. An infinitely divisible distribution μ on R^N with generating triplet (A, ν, γ) is concentrated on K if and only if

\[ A = 0, \quad \nu\big(R^N \setminus K\big) = 0, \quad \int_{|x|\le 1} |x|\, \nu(dx) < \infty, \quad \text{and} \quad \gamma^0 \in K, \tag{4.35} \]


where γ^0 = γ − ∫_{|x|≤1} x ν(dx) is the drift of μ. Furthermore, in this case, μ has the Laplace transform Lμ(u), defined for u in the dual cone K′ = {u ∈ R^N : ⟨u, x⟩ ≥ 0 for all x ∈ K}, given by

\[ L\mu(u) = \exp\Big[ -\langle \gamma^0, u\rangle + \int_K \big(e^{-\langle u, x\rangle} - 1\big)\, \nu(dx) \Big]. \tag{4.36} \]

The existence of such representations for cone-increasing Lévy processes on infinite-dimensional spaces is not guaranteed in general. They are intrinsically related to functional-analytic aspects, specifically to the geometry of the cone. In fact, this result has been extended to infinite dimensions as follows. Let B denote a separable Banach space with norm ‖·‖. Let B(B) be the class of Borel sets in B. Measures and distributions on B are understood to be defined on B(B). Terms such as B-valued random variable, its distribution, independence, convolution, infinite divisibility, cone, and K-increasing sequence are defined in the same way as in the case of RN. Denote by B′ the strong topological dual of B. For any cone K we define the dual cone K′ = {f ∈ B′ : f(s) ≥ 0 for all s ∈ K}. Let μ be a distribution on B. The mapping μ̂ : B′ → C defined as μ̂(f) = ∫_B e^{if(x)} μ(dx) for f ∈ B′ is called the characteristic functional of μ. The mapping Lμ : K′ → R defined as Lμ(f) = ∫_B e^{−f(x)} μ(dx) for f ∈ K′ is called the Laplace transform of μ. With any infinitely divisible distribution μ on B, a triplet of parameters (A, ν, γ) from the Lévy–Khintchine representation of μ̂ is associated. Here A is related to the Gaussian part of μ, ν is a σ-finite measure on B\{0} such that the mapping f ↦ exp{∫_B (e^{if(x)} − 1 − if(x)1_{‖x‖≤1}(x)) ν(dx)}, f ∈ B′, is the characteristic functional of some distribution on B, and γ ∈ B; see Araujo and Giné [3, Theorem 6.3.2] (1980) and Linde [56, Theorem 5.7.3] (1986). The triplet (A, ν, γ) is unique and it is called the generating triplet of μ; and ν is called the Lévy measure of μ. A sequence {sn}n=1,2,… in a cone K is called K-majorized if there exists s ∈ K such that sn ≤K s for n ≥ 1. A cone K is said to be regular if every K-increasing and K-majorized sequence in K is norm convergent. A cone K is called normal if for each y ∈ K there is a constant λ > 0 such that 0 ≤K x ≤K y implies ‖x‖ ≤ λ‖y‖.
Let c0 be the Banach space of real sequences converging to zero with the supremum norm. Let c0+ be the cone in c0 of nonnegative sequences converging to zero. The following result is interesting in comparison with Theorem 4.11. Let K be a normal cone in a separable Banach space B. Then the following three statements are equivalent.
(i) The cone K is regular.
(ii) Every infinitely divisible distribution μ concentrated on K has characteristic functional

μ̂(f) = exp( if(γ0) + ∫_K (e^{if(x)} − 1) ν(dx) ),   f ∈ B′,   (4.37)

where γ0 ∈ K and ν is the Lévy measure concentrated on K satisfying ν({0}) = 0 and ∫_K (1 ∧ |f(x)|) ν(dx) < ∞ for f ∈ B′.
(iii) Every infinitely divisible distribution μ concentrated on K has Laplace transform

Lμ(f) = exp( −f(γ0) + ∫_K (e^{−f(x)} − 1) ν(dx) ),   f ∈ K′,   (4.38)

where γ0 ∈ K and ν is the Lévy measure concentrated on K satisfying ν({0}) = 0 and ∫_K (1 ∧ f(x)) ν(dx) < ∞ for f ∈ K′. This is by Pérez-Abreu and Rosiński [77, Theorem 1 and Remark 2] (2007). They also found that c0 contains two normal cones such that one is regular and the other is not. Thus, the existence of representations (4.37) and (4.38) indeed depends on the cone. The two representations generalize (4.35) and (4.36), respectively, to Banach spaces. Dettweiler [22] (1976) showed (4.38) for cone-valued infinitely divisible random variables when K is a normal regular cone in a locally convex topological vector space. A stochastic process {Xt : t ≥ 0} on B is called a Lévy process if it has independent and stationary increments, is stochastically continuous with respect to the norm, and starts at 0 almost surely and if, almost surely, Xt(ω) is right continuous with left limits in t with respect to the norm. For any B-valued Lévy process {Xt : t ≥ 0} the distribution L(X1) is infinitely divisible and the generating triplet (A, ν, γ) of L(X1) is called the generating triplet of {Xt}; and ν is called the Lévy measure of {Xt}. Henceforth let K be a cone in a separable Banach space B. A process is a K-increasing Lévy process on B if and only if it is a K-valued Lévy process. The proof of this fact is like that of the equivalence between (a) and (b) of Theorem 4.11. In general separable Banach spaces the equivalence between (c) and (a)–(b) of Theorem 4.11 is not true. We say that a K-increasing Lévy process on B is a K-valued subordinator or simply a K-subordinator. The existence of representation (4.37) for a K-subordinator was given (for different subclasses of cones) by Gihman and Skorohod [25] (1975), Pérez-Abreu and Rocha-Arteaga [74, 75] (2005, 2006), and Rocha-Arteaga [81] (2006). A K-subordinator {Zt : t ≥ 0} is called a regular K-subordinator if μ = L(Z1) has representation (4.37).
Theorem 18 of [75] (2006) shows that the concepts of K-subordinator and regular K-subordinator coincide for a wide subclass of normal regular cones (normal cones not isomorphic with c0+). However, there are regular K-subordinators of unbounded variation; this is the case when ∫_{‖x‖≤1} x ν(dx) is a Pettis integral but not a Bochner integral, [75, p. 48] (2006). The paper [81] (2006) shows that, for a class of Banach spaces, the representations (4.37) and (4.38) are completely analogous to (4.35) and (4.36) for a cone in RN and regular subordinators share many properties with real subordinators, for instance the bounded variation property. Also, [81] (2006) contains examples of construction of subordinators in a class of cones with strong bases.


Representation (4.37) in the case of cone-valued additive processes on the duals of nuclear Fréchet spaces was proved by Pérez-Abreu et al. [76] (2005). Barndorff-Nielsen and Pérez-Abreu [9] (2008) studied matrix subordinators with values in the cone Sd+ of d × d symmetric nonnegative-definite matrices. In particular, they introduced and characterized a class of infinitely divisible distributions on the open subcone of positive-definite matrices of Sd+ ([9, Theorem 4.3]). This class is an analogue of the Goldie–Steutel–Bondesson class of infinitely divisible distributions on R+ studied by Bondesson [15] (1981). Concerning the Goldie–Steutel–Bondesson class of distributions on Rd, see Notes in Chap. 2.

Γ-distributions on R+ are infinitely divisible. Pérez-Abreu and Stelzer [78] (2014) introduced Γ-distributions on a cone in RN and studied the class of generalized Γ-convolutions on a cone. Also, examples of nonnegative-definite matrix Γ-distributions are introduced.

Chapter 5

Inheritance in Multivariate Subordination

We now study inheritance of the Lm property or of strict stability from the subordinator to the subordinated process in multivariate subordination. In order to observe this inheritance, we have to assume strict stability of the distribution at each s ∈ K of a K-parameter subordinand {Xs : s ∈ K}. Section 5.1 gives results and examples. Section 5.2 discusses a generalization where the defining condition of selfdecomposability or stability for distributions on Rd involves a d × d matrix Q. This is called operator generalization.

5.1 Inheritance of Lm Property and Strict Stability

We begin with the following theorem and examples in the usual subordination.

Theorem 5.1 Suppose that {Xt : t ≥ 0} is a strictly α-stable process on Rd, {Zt : t ≥ 0} is a subordinator, and they are independent. Let {Yt : t ≥ 0} be a Lévy process on Rd constructed from {Xt} by subordination by {Zt}.
(i) If {Zt} is selfdecomposable, then {Yt} is selfdecomposable.
(ii) More generally, let m ∈ {0, 1, …, ∞}. If {Zt} is of class Lm(R), then {Yt} is of class Lm(Rd).
(iii) If {Zt} is strictly β-stable, then {Yt} is strictly αβ-stable.

Halgreen [30] (1979) and Ismail and Kelker [36] (1979) proved part of these results. Proof of Theorem 5.1 will be given as a special case of Theorem 5.9.

Example 5.2 Let 0 < α < 1. Let {Yt} be the Lévy process on R subordinate to a strictly α-stable increasing process {Xt} on R with Ee^{−uXt} = e^{−tu^α}, u ≥ 0, by a Γ-process {Zt} with EZ1 = 1. Then

P[Y1 ≤ x] = 1 − Eα(−x^α),   x ≥ 0,

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2019 A. Rocha-Arteaga, K. Sato, Topics in Infinitely Divisible Distributions and Lévy Processes, SpringerBriefs in Probability and Mathematical Statistics, https://doi.org/10.1007/978-3-030-22700-5_5


where Eα(x) is the Mittag–Leffler function Eα(x) = Σ_{n=0}^∞ x^n / Γ(nα + 1), and

P[Yt ≤ x] = Σ_{n=0}^∞ ( (−1)^n Γ(t + n) / (n! Γ(t) Γ(1 + α(t + n))) ) x^{α(t+n)},   x ≥ 0.
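The series above can be checked numerically. The following sketch (function names are ours, not from the text) evaluates the Mittag–Leffler function by direct summation; for α = 1 the series reduces to e^x, which serves as a sanity check.

```python
import math

def mittag_leffler(alpha, x, terms=100):
    """Partial sum of E_alpha(x) = sum_{n>=0} x^n / Gamma(n*alpha + 1)."""
    return sum(x**n / math.gamma(n * alpha + 1) for n in range(terms))

def cdf_Y1(alpha, x):
    """P[Y1 <= x] = 1 - E_alpha(-x^alpha) for x >= 0 (Example 5.2)."""
    return 1.0 - mittag_leffler(alpha, -(x ** alpha))

# Sanity check: E_1(x) = e^x.
assert abs(mittag_leffler(1.0, 1.0) - math.e) < 1e-9
# The Mittag-Leffler distribution function increases from 0 toward 1.
vals = [cdf_Y1(0.5, x) for x in (0.0, 0.5, 1.0, 2.0, 5.0)]
assert vals[0] == 0.0 and all(a < b for a, b in zip(vals, vals[1:]))
```

The truncation at 100 terms is more than enough for the arguments used here, since Γ(nα + 1) grows faster than any geometric sequence.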

By Theorem 5.1, L(Yt) is selfdecomposable. See Pillai [79] (1990) or Sato [93, E 34.4] (1999).

Example 5.3 Let 0 < α ≤ 2. Let {Yt} be the Lévy process subordinate to a symmetric α-stable process {Xt} on R with Ee^{izXt} = e^{−t|z|^α} by a Γ-process {Zt} with EZ1 = 1/q, q > 0. Then

Ee^{izYt} = (1 + q^{−1}|z|^α)^{−t},   z ∈ R,

where L(Y1) is a Linnik distribution or geometric stable distribution (Example 4.7). Theorem 5.1 shows that L(Yt) is selfdecomposable.

In the definitions and examples below, we use γ, δ, λ, χ, and ψ for parameters of some special distributions, keeping a customary usage.

Definition 5.4 The distribution

μγ,δ(dx) = (2π)^{−1/2} δ e^{γδ} x^{−3/2} e^{−(δ²x^{−1} + γ²x)/2} 1_{(0,∞)}(x) dx

with parameters γ > 0, δ > 0 is called inverse Gaussian distribution. The Laplace transform Lμγ,δ(u), u ≥ 0, of μγ,δ is

Lμγ,δ(u) = ∫_{(0,∞)} e^{−ux} μγ,δ(dx) = exp( −δ( √(2u + γ²) − γ ) )
= exp( 2^{−1}π^{−1/2} δ ∫_0^∞ (e^{−(2u+γ²)x} − 1) x^{−3/2} dx + γδ )
= exp( 2^{−1}π^{−1/2} δ ∫_0^∞ (e^{−2ux} − 1) x^{−3/2} e^{−γ²x} dx )
= exp( (2π)^{−1/2} δ ∫_0^∞ (e^{−ux} − 1) x^{−3/2} e^{−γ²x/2} dx ).

The last formula shows that μγ,δ is infinitely divisible with Lévy measure density

(2π)^{−1/2} δ x^{−3/2} e^{−γ²x/2}

on (0, ∞). Hence μγ,δ is selfdecomposable by Theorem 1.34. For every λ ∈ R we denote by Kλ the modified Bessel function of order λ given by (4.9), (4.10) of [93, p. 21].
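The closed-form Laplace transform can be verified numerically. The sketch below (helper names are ours) integrates the inverse Gaussian density of Definition 5.4 by a midpoint rule and compares the result with exp(−δ(√(2u + γ²) − γ)).

```python
import math

def ig_density(x, gamma, delta):
    """Inverse Gaussian density from Definition 5.4."""
    return ((2 * math.pi) ** -0.5 * delta * math.exp(gamma * delta)
            * x ** -1.5 * math.exp(-(delta**2 / x + gamma**2 * x) / 2))

def laplace_numeric(u, gamma, delta, h=1e-3, upper=60.0):
    """Midpoint-rule approximation of the Laplace transform integral."""
    n = int(upper / h)
    return sum(math.exp(-u * (k + 0.5) * h) * ig_density((k + 0.5) * h, gamma, delta)
               for k in range(n)) * h

def laplace_closed(u, gamma, delta):
    """Closed form exp(-delta*(sqrt(2u + gamma^2) - gamma))."""
    return math.exp(-delta * (math.sqrt(2 * u + gamma**2) - gamma))

# The density integrates to 1 (u = 0) and matches the closed form at u = 0.5.
assert abs(laplace_numeric(0.0, 1.0, 1.0) - 1.0) < 1e-3
assert abs(laplace_numeric(0.5, 1.0, 1.0) - laplace_closed(0.5, 1.0, 1.0)) < 1e-3
```

The e^{−δ²/(2x)} factor suppresses the x^{−3/2} singularity at 0, so a naive quadrature starting near zero is adequate here.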


Example 5.5 Let {Zt} be a subordinator with L(Z1) being the inverse Gaussian μγ,δ. Then L(Zt) = μγ,tδ. Let {Yt} be the Lévy process subordinate to Brownian motion {Xt} on R by {Zt}. Then

P[Yt ∈ B] = ∫_B dx ∫_0^∞ (2πs)^{−1/2} e^{−x²/(2s)} μγ,tδ(ds)
= (2π)^{−1} tδ e^{tγδ} ∫_B dx ∫_0^∞ s^{−2} e^{−((x² + t²δ²)/(2s)) − (γ²s/2)} ds
= (4π)^{−1} tγ²δ e^{tγδ} ∫_B dx ∫_0^∞ u^{−2} e^{−(γ²(x² + t²δ²)/(4u)) − u} du
= ∫_B (γ e^{tγδ}/π) ( K1( tγδ √(1 + (x/(tδ))²) ) / √(1 + (x/(tδ))²) ) dx,

where K1 is the modified Bessel function of order 1. This shows that L(Yt) is a special case of the normal inverse Gaussian distribution defined by Barndorff-Nielsen [5] (1997). By Theorem 5.1, it is selfdecomposable. By Theorem 4.3 its characteristic function is

Ee^{izYt} = e^{tΨ(−z²/2)} = exp( −tδ( √(z² + γ²) − γ ) )

with Ψ(w) = −δ( √(−2w + γ²) − γ ).

Definition 5.6 The distribution

μλ,χ,ψ(dx) = c x^{λ−1} e^{−(χx^{−1} + ψx)/2} 1_{(0,∞)}(x) dx

is called generalized inverse Gaussian distribution with parameters λ, χ, ψ. Here c is a normalizing constant. The domain of the parameters is given by {λ < 0, χ > 0, ψ ≥ 0}, {λ = 0, χ > 0, ψ > 0}, and {λ > 0, χ ≥ 0, ψ > 0}. The Laplace transform Lμλ,χ,ψ(u), u ≥ 0, of μλ,χ,ψ is

Lμλ,χ,ψ(u) = ( ψ/(ψ + 2u) )^{λ/2} Kλ( √(χ(ψ + 2u)) ) / Kλ( √(χψ) )   if χ > 0 and ψ > 0,
Lμλ,χ,ψ(u) = 2^{1+(λ/2)} Kλ( √(2χu) ) / ( Γ(−λ)(χu)^{λ/2} )   if λ < 0, χ > 0, and ψ = 0,

where Kλ is the modified Bessel function of order λ. It is known that μλ,χ,ψ is infinitely divisible and, moreover, selfdecomposable [93, E. 34.13]. It belongs to the smaller class GGC called generalized Γ-convolutions, which means that it is


the limit of a sequence of convolutions of Γ-distributions. See Halgreen [30] (1979). Concerning this class, see Notes at the end of Chap. 2.

In order to extend Theorem 5.1 to multivariate subordination, we prepare two lemmas.

Lemma 5.7 Let K be a cone in RN and {Xs : s ∈ K} a K-parameter Lévy process on Rd. Let 0 < α ≤ 2. Then L(Xs) ∈ S0α if and only if Xts =d t^{1/α} Xs for every t > 0.

Proof Let μs = L(Xs). The meaning of μs ∈ S0α is that μs ∈ ID and μ̂s(z)^t = μ̂s(t^{1/α}z) for t > 0. See Definition 1.23 and Proposition 1.22. Since, by Lemma 4.21, {Xts : t ≥ 0} is a Lévy process, μ̂ts(z) = μ̂s(z)^t. Hence the condition is written as Xts =d t^{1/α} Xs. □

Lemma 5.8 Let {Zt} be a K-valued subordinator such that L(Zt) ∈ L0(RN) for t ≥ 0. Let Ψ(w) be the function in (4.11). For b > 1 define Ψb(w) by

Ψ(w) = Ψ(b^{−1}w) + Ψb(w).   (5.1)

Then e^{tΨb(iz)}, z ∈ RN, is the characteristic function of a K-valued subordinator {Zt^{(b)}}. Let m ≥ 1. Then L(Zt) ∈ Lm for t ≥ 0 if and only if L(Zt^{(b)}) ∈ Lm−1 for t ≥ 0 and b > 1.

Proof Let μ = L(Z1) with generating triplet (A, ν, γ). Its characteristic function is μ̂(z) = e^{Ψ(iz)}, z ∈ RN. If b > 1, then by selfdecomposability there is a distribution ρb such that

μ̂(z) = μ̂(b^{−1}z) ρ̂b(z).

Let μb be such that μ̂b(z) = μ̂(b^{−1}z). Then μ = μb ∗ ρb and, by Proposition 1.13, μb and ρb are in ID. Let (Ãb, ν̃b, γ̃b) and (Ab, νb, γb) be the generating triplets of μb and ρb, respectively. Then A = Ãb + Ab, ν = ν̃b + νb, and γ = γ̃b + γb. Hence νb ≤ ν. By Theorem 4.11, A = 0, ν(RN\K) = 0, ∫_{|s|≤1} |s| ν(ds) < ∞, and γ0 ∈ K. Therefore νb(RN\K) = 0 and ∫_{|s|≤1} |s| νb(ds) < ∞. Also Ab = 0, as 0 ≤ ⟨z, Ab z⟩ ≤ ⟨z, Az⟩ = 0. Further, their drifts are related as γ0 = γ̃b0 + γb0 and γ̃b0 = b^{−1}γ0. Thus γb0 = (1 − b^{−1})γ0 ∈ K. Then, by Theorem 4.11, a Lévy process {Zt^{(b)}} with L(Z1^{(b)}) = ρb is a K-valued subordinator. Its characteristic function equals (ρ̂b^t)(z) = e^{tΨb(iz)}. Finally, L(Zt) is of class Lm if and only if, for each b > 1, ρb ∈ Lm−1, that is, L(Zt^{(b)}) is of class Lm−1. □

Theorem 5.9 Let K be a cone in RN and 0 < α ≤ 2. Let {Zt : t ≥ 0} be a K-valued subordinator and {Xs : s ∈ K} a K-parameter Lévy process on Rd such that L(Xs) ∈ S0α for all s ∈ K. Assume that they are independent. Let {Yt : t ≥ 0} be the Lévy process on Rd constructed from {Xs} and {Zt} by multivariate subordination of Definition 4.24.


(i) If {Zt} is selfdecomposable, then {Yt} is selfdecomposable.
(ii) Let m ∈ {0, 1, …, ∞}. If {Zt} is of class Lm(RN), then {Yt} is of class Lm(Rd).
(iii) Let 0 < β ≤ 1. If L(Zt) ∈ S0β for all t ≥ 0, then L(Yt) ∈ S0αβ for all t ≥ 0.

Proof Let μs = L(Xs).
(i) Let {Zt} be selfdecomposable. Then L(Zt) ∈ L0 for all t ≥ 0. Using Lemma 5.8 and its notation, we have

Zt =d b^{−1}Zt + Zt^{(b)},

where b^{−1}Zt and Zt^{(b)} are independent. Then,

Ee^{i⟨z,Yt⟩} = Ee^{i⟨b^{−1/α}z, Yt⟩} E[μ̂_{Zt^{(b)}}(z)].   (5.2)

Indeed we have, using Lemma 4.21 (i) and Lemma 5.7,

Ee^{i⟨z,Yt⟩} = E[(Ee^{i⟨z,Xs⟩})|_{s=Zt}] = E[μ̂_{Zt}(z)] = E[μ̂_{b^{−1}Zt + Zt^{(b)}}(z)]
= E[μ̂_{b^{−1}Zt}(z) μ̂_{Zt^{(b)}}(z)] = E[μ̂_{b^{−1}Zt}(z)] E[μ̂_{Zt^{(b)}}(z)]
= E[μ̂_{Zt}(b^{−1/α}z)] E[μ̂_{Zt^{(b)}}(z)],

which is the right-hand side of (5.2). Notice that b^{1/α} can be an arbitrary real bigger than 1 and E[μ̂_{Zt^{(b)}}(z)] is the characteristic function of a subordinated process by Lemma 5.8. This shows that {Yt} is selfdecomposable.
(ii) By induction. If m = 0, then the assertion is true by (i). Suppose that the assertion is true for m − 1 in place of m. Let {Zt} be of class Lm, that is, L(Zt) ∈ Lm for t ≥ 0. Then {Zt^{(b)}} is a K-valued subordinator of class Lm−1 by Lemma 5.8. Hence E[μ̂_{Zt^{(b)}}(z)] is a characteristic function of class Lm−1. Thus L(Yt) ∈ Lm.
(iii) Let L(Zt) ∈ S0β for t ≥ 0. Then Zat =d a^{1/β}Zt. Therefore, using Lemma 5.7,

Ee^{i⟨z,Yat⟩} = E[(Ee^{i⟨z,Xs⟩})|_{s=Zat}] = E[(Ee^{i⟨z,Xs⟩})|_{s=a^{1/β}Zt}]
= E[μ̂_{a^{1/β}Zt}(z)] = E[μ̂_{Zt}(a^{1/(αβ)}z)] = E[e^{i⟨z, a^{1/(αβ)}Yt⟩}].

Thus Yat =d a^{1/(αβ)}Yt for any a > 0. □


When d = 1, Theorem 5.1 can be generalized to the case where {Xt : t ≥ 0} is Brownian motion with non-zero drift on R. This is 2-stable, but not strictly 2-stable. So the assumption in Theorem 5.1 is not satisfied. Nevertheless, selfdecomposability is inherited as follows.

Theorem 5.10 Let {Xt : t ≥ 0} be Brownian motion with drift γ on R. That is,

Ee^{izXt} = e^{t(−(z²/2) + iγz)},   z ∈ R.

Let {Yt} be a Lévy process subordinate to {Xt} by {Zt}. If {Zt} is selfdecomposable, then {Yt} is selfdecomposable. See Sato [94] (2001a).

Remark 5.11 There arises the question whether Theorem 5.10 can be extended to the case where {Xt} is an α-stable, not strictly α-stable process with 0 < α < 2 on R. Ramachandran's paper [80] (1997) contains an answer to this question.¹ Namely, if 1 < α < 2, then there are an α-stable, not strictly α-stable process {Xt} on R and a selfdecomposable subordinator {Zt} such that the Lévy process {Yt} subordinate to {Xt} by {Zt} is not selfdecomposable. Specifically, Ramachandran shows that if Ee^{izXt} = e^{t(−c|z|^α + iγz)} with 1 < α < 2, c > 0, and γ ≠ 0 and {Zt} is a Γ-process with parameter q > 0 (a special case of Example 4.7), then {Yt} is not selfdecomposable. The question in the case 0 < α ≤ 1 is still open in the authors' knowledge.

Remark 5.12 If d ≥ 2, then the situation is quite different and Theorem 5.10 cannot be generalized. It is known that, for d ≥ 2, a Lévy process {Yt} on Rd subordinate to Brownian motion with drift, {Xt}, by a selfdecomposable subordinator {Zt} is not necessarily selfdecomposable. Even if L(Z1) is a generalized Γ-convolution, {Yt} is not necessarily selfdecomposable.

Definition 5.13 The distribution

μ(dx) = c exp( −a√(1 + x²) + bx ) dx

on R with parameters a, b satisfying a > 0 and |b| < a, or a scale change of this distribution, is called hyperbolic distribution. Here c is a normalizing constant. The distribution

μ(dx) = c ( √(1 + x²) )^{λ−(1/2)} K_{λ−(1/2)}( a√(1 + x²) ) e^{bx} dx

on R or its scale change, where c is a normalizing constant, is called generalized hyperbolic distribution. Here the domain of parameters is given by {λ ≥ 0, a > 0, |b| < a} and {λ < 0, a > 0, |b| ≤ a}. This distribution reduces to the hyperbolic distribution if λ = 1.

¹ Z. J. Jurek kindly called the authors' attention to the paper [80] on this point.

Example 5.14 Let {Xt} be Brownian motion with drift γ being zero or non-zero and let {Zt} be the subordinator with L(Z1) being generalized inverse Gaussian μλ,χ,ψ with λ = 1, χ > 0, ψ > 0. Let us calculate the distribution at t = 1 for the Lévy process {Yt} subordinate to {Xt} by {Zt}:

P[Y1 ∈ B] = c ∫_0^∞ e^{−(χs^{−1} + ψs)/2} ds ∫_B (2πs)^{−1/2} e^{−(x−sγ)²/(2s)} dx
= ( c/√(ψ + γ²) ) ∫_B e^{−√((ψ+γ²)(χ+x²)) + γx} dx

by the calculation in Example 2.13 of [93]. Hence L(Y1) is a hyperbolic distribution with a = √(χ(ψ + γ²)) and b = √χ γ. More generally, if we assume that L(Z1) is generalized inverse Gaussian μλ,χ,ψ, then L(Y1) is a generalized hyperbolic distribution. For a proof, use the formula (30.28) of [93] for modified Bessel functions. It follows from Theorem 5.1 (if γ = 0) and Theorem 5.10 (if γ ≠ 0) that generalized hyperbolic distributions are selfdecomposable.
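The computation in Example 5.14 can be verified numerically: the mixture of N(sγ, s) densities under the generalized inverse Gaussian weight with λ = 1 is proportional to e^{−√((ψ+γ²)(χ+x²))+γx}. The sketch below (helper names are ours) checks that the ratio of the two expressions is constant in x and equals 1/√(ψ + γ²).

```python
import math

def mixture(x, gamma, chi, psi, h=1e-3, upper=60.0):
    """Unnormalized density of Y1: GIG(1, chi, psi) weight mixed with N(s*gamma, s)."""
    n = int(upper / h)
    total = 0.0
    for k in range(n):
        s = (k + 0.5) * h
        total += (math.exp(-(chi / s + psi * s) / 2)
                  * (2 * math.pi * s) ** -0.5
                  * math.exp(-(x - s * gamma) ** 2 / (2 * s)))
    return total * h

def hyperbolic_shape(x, gamma, chi, psi):
    """exp(-sqrt((psi+gamma^2)(chi+x^2)) + gamma*x), the claimed shape."""
    return math.exp(-math.sqrt((psi + gamma**2) * (chi + x**2)) + gamma * x)

# The ratio is constant in x and equals 1/sqrt(psi + gamma^2) up to quadrature error.
r1 = mixture(0.0, 0.5, 1.0, 1.0) / hyperbolic_shape(0.0, 0.5, 1.0, 1.0)
r2 = mixture(1.5, 0.5, 1.0, 1.0) / hyperbolic_shape(1.5, 0.5, 1.0, 1.0)
assert abs(r1 - r2) < 1e-3 * r1
assert abs(r1 - 1 / math.sqrt(1.0 + 0.25)) < 1e-3
```

Since only the shape is compared, the unknown normalizing constant c plays no role in the check.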

5.2 Operator Generalization

For distributions on Rd, d ≥ 2, the concepts of stability, selfdecomposability, and Lm property are generalized to the situation where multiplication by positive real numbers is replaced by multiplication by matrices of the form b^Q. For a set J ⊂ R let MJ(d) be the set of real d × d matrices all of whose eigenvalues have real parts in J. Let Q ∈ M(0,∞)(d).

Definition 5.15 A distribution μ on Rd is called Q-selfdecomposable if, for every b > 1, there is ρb ∈ P(Rd) such that

μ̂(z) = μ̂(b^{−Q′}z) ρ̂b(z),   z ∈ Rd,   (5.3)

where Q′ is the transpose of Q and b^{−Q′} is a d × d matrix defined by

b^{−Q′} = e^{−(log b)Q′} = Σ_{n=0}^∞ (n!)^{−1} (−log b)^n (Q′)^n.
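The matrix power b^{−Q′} can be computed directly from this series. The sketch below (pure-Python helpers of our own) truncates the series and checks it against the exact entrywise powers for a diagonal Q.

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_scale(A, c):
    return [[c * x for x in row] for row in A]

def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def b_power(Q, b, terms=60):
    """b^{-Q} = exp(-(log b) Q) via the power series of Definition 5.15."""
    n = len(Q)
    acc = [[float(i == j) for j in range(n)] for i in range(n)]  # identity: the n = 0 term
    term = [row[:] for row in acc]
    for k in range(1, terms):
        term = mat_scale(mat_mul(term, Q), -math.log(b) / k)  # (-log b)^k Q^k / k!
        acc = mat_add(acc, term)
    return acc

# For a diagonal Q the series reduces to entrywise powers b^{-q_i}.
M = b_power([[0.5, 0.0], [0.0, 2.0]], 3.0)
assert abs(M[0][0] - 3.0 ** -0.5) < 1e-12
assert abs(M[1][1] - 3.0 ** -2.0) < 1e-12
```

For non-diagonal Q the same routine applies, since the series converges for every real matrix.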

The class of all Q-selfdecomposable distributions on Rd is denoted by L0(Q). For m = 1, 2, … the class Lm(Q) is defined to be the class of distributions μ on Rd such that, for every b > 1, there exists ρb ∈ Lm−1(Q) satisfying (5.3). Define L∞(Q) = ⋂_{m≥0} Lm(Q). These classes are defined for Q ∈ M(0,∞)(d) and m = 0, 1, …, ∞.

Proposition 5.16 The classes just introduced form nested classes

ID ⊃ L0(Q) ⊃ L1(Q) ⊃ ⋯ ⊃ L∞(Q).   (5.4)

Proof can be given analogously to the proofs of Propositions 1.13 and 1.15. See Jurek [39] (1983a) and Sato and Yamazato [111] (1985).

Definition 5.17 A distribution μ on Rd is called Q-stable if, for every n ∈ N, there is c ∈ Rd such that

μ̂(z)^n = μ̂(n^{Q′}z) e^{i⟨c,z⟩},   z ∈ Rd.   (5.5)

It is called strictly Q-stable if, for all n,

μ̂(z)^n = μ̂(n^{Q′}z),   z ∈ Rd.   (5.6)

Let SQ be the class of Q-stable distributions on Rd. Let S0Q be the class of strictly Q-stable distributions on Rd. Here we are using the usual terminology, but it is not harmonious with the usage of the word α-stable; μ is α-stable if and only if it is (α^{−1}I)-stable, where I is the identity matrix. Similarly to the α-stable case, we have the following.

Proposition 5.18 A distribution μ is Q-stable if and only if μ ∈ ID and, for every t > 0, there is c ∈ Rd such that

μ̂(z)^t = μ̂(t^{Q′}z) e^{i⟨c,z⟩}.   (5.7)

A distribution μ is strictly Q-stable if and only if μ ∈ ID and, for every t > 0,

μ̂(z)^t = μ̂(t^{Q′}z).   (5.8)

Proof is like that of Proposition 1.21. Remark 5.19 If μ ∈ SQ for some Q ∈ M(0,∞) (d), then μ is called operator stable and sometimes Q is called exponent of operator stability of μ. But, in general, Q is not uniquely determined by μ; see Hudson and Mason [35] (1981) and Sato [86] (1985). If μ ∈ L0 (Q) for some Q ∈ M(0,∞) (d), then μ is called operator selfdecomposable. Remark 5.20 Operator stable and operator selfdecomposable distributions appear in a natural way when we study limit theorems for sums of a sequence of independent random vectors, allowing linear transformations (matrix multiplications) of partial sums. Basic papers are Sharpe [113] (1969) and Urbanik [128] (1972a).


Proposition 5.21 Suppose that μ is Q-stable and nondegenerate on Rd. Then Q must be in M[1/2,∞)(d) and, moreover, any eigenvalue of Q with real part 1/2 is a simple root of the minimal polynomial of Q; μ is Gaussian if and only if Q ∈ M{1/2}(d); μ is purely non-Gaussian if and only if Q ∈ M(1/2,∞)(d).

This is by Sharpe [113] (1969).

Definition 5.22 For Q ∈ M(0,∞)(d), let S(Q) denote the union of SaQ over all a > 0; let S0(Q) denote the union of S0aQ over all a > 0.

The relation with S and S0 in Definition 1.19 is that S = S(I) and S0 = S0(I). The class S(Q) is a subclass of L∞(Q). Moreover, we have the following.

Proposition 5.23 The class L∞(Q) is the smallest class containing S(Q) and closed under convolution and weak convergence.

See Sato and Yamazato [111] (1985) for a proof.

Definition 5.24 A Lévy process {Xt : t ≥ 0} is called Q-selfdecomposable, Q-stable, or of class Lm(Q), respectively, if L(X1) (or, equivalently, L(Xt) for every t ≥ 0) is Q-selfdecomposable, Q-stable, or of class Lm(Q).

Here are results on the inheritance of operator selfdecomposability, Lm(Q) property, and strict operator stability in some cases. These partially extend Theorem 5.9. Propositions 5.21 and 5.23 are not used in the proof. Let N and d be positive integers satisfying d ≥ N ≥ 1. Let dj, 1 ≤ j ≤ N, be positive integers such that d1 + ⋯ + dN = d. Every x ∈ Rd is expressed as x = (xj)1≤j≤N with xj ∈ Rdj. We call xj the jth component-block of x. The jth component-block of Xt is denoted by (Xt)j. As in Sect. 4.3, we use the unit vectors ek = (δkj)1≤j≤N, k = 1, …, N, in RN.

Theorem 5.25 Suppose that {Xs : s ∈ RN+} is a given RN+-parameter Lévy process on Rd with the following structure: for each j = 1, …, N,

(Xtej)k = 0   for all k ≠ j.   (5.9)

Suppose that {Zt : t ≥ 0} is a given RN+-valued subordinator and let {Yt : t ≥ 0} be a Lévy process on Rd obtained by multivariate subordination from {Xs} and {Zt}. That is, {Xs} and {Zt} are independent and Yt = XZt. Let Qj ∈ M[1/2,∞)(dj) and cj > 0 for 1 ≤ j ≤ N, and let C = diag(c1, …, cN). Assume that, for each j, L((Xtej)j) is strictly Qj-stable. Let D = diag(c1Q1, …, cNQN) ∈ M(0,∞)(d).

(i) If {Zt : t ≥ 0} is C-selfdecomposable, then {Yt : t ≥ 0} is D-selfdecomposable. (ii) More generally, let m ∈ {0, 1, . . . , ∞}. If {Zt : t ≥ 0} is of class Lm (C) on RN , then {Yt : t ≥ 0} is of class Lm (D) on Rd .


(iii) If {Zt : t ≥ 0} is strictly C-stable, then {Yt : t ≥ 0} is strictly D-stable.

Here diag(c1, …, cN) denotes the diagonal matrix with diagonal entries c1, …, cN; diag(c1Q1, …, cNQN) denotes the blockwise diagonal matrix with diagonal blocks c1Q1, …, cNQN.

Proof We use Theorem 4.41. Let Xt^j = Xtej. Let ψX^j(z) = (log ρ̂j)(z) with ρj = L(X1^j) for z ∈ Rd, and ψX(z) = (ψX^j(z))1≤j≤N. Let μj = L((X1^j)j) ∈ P(Rdj). Then it follows from (5.9) that

e^{tψX^j(z)} = Ee^{i⟨z,Xt^j⟩} = Ee^{i⟨zj,(Xt^j)j⟩} = μ̂j(zj)^t,

where z = (zj)1≤j≤N ∈ Rd with zj ∈ Rdj. Thus

ψX(z) = (log μ̂j(zj))1≤j≤N.

We have

μ̂j(zj)^a = μ̂j(a^{Qj′}zj),   a > 0,

by the strict Qj-stability of μj. Hence

a^C ψX(z) = (a^{cj} log μ̂j(zj))1≤j≤N = (log μ̂j(a^{cjQj′}zj))1≤j≤N.   (5.10)

(i) Assume {Zt : t ≥ 0} is C-selfdecomposable. Let ΨZ be the function Ψ in (4.11) for {Zt}. For b > 1 and w = (wj)1≤j≤N ∈ CN with Re wj ≤ 0, define ΨZ,b(w) by

ΨZ(w) = ΨZ(b^{−C}w) + ΨZ,b(w).

Similarly to the proof of Proposition 1.13, we can show that e^{ΨZ,b(iu)}, u ∈ RN, is an infinitely divisible characteristic function. Further, as in Lemma 5.8, there is an RN+-valued subordinator {Zt^{(b)}} such that Ee^{i⟨u,Zt^{(b)}⟩} = e^{tΨZ,b(iu)}. In the proof note that γb0 = (I − b^{−C})γ0 = diag(1 − b^{−c1}, …, 1 − b^{−cN})γ0 ∈ RN+. Now we have

Ee^{i⟨z,Yt⟩} = e^{tΨZ(ψX(z))} = e^{tΨZ(b^{−C}ψX(z))} e^{tΨZ,b(ψX(z))}

and

b^{−C}ψX(z) = (log μ̂j(b^{−cjQj′}zj))1≤j≤N = ψX(b^{−D′}z)

by (5.10), since

b^{−D′}z = diag(b^{−c1Q1′}, …, b^{−cNQN′})z = (b^{−cjQj′}zj)1≤j≤N.

Hence

Ee^{i⟨z,Yt⟩} = E[exp(i⟨b^{−D′}z, Yt⟩)] e^{tΨZ,b(ψX(z))}.

As the second factor in the right-hand side is the characteristic function of a subordinated process, we see that L(Yt) is D-selfdecomposable.
(ii) By induction similar to (ii) of Theorem 5.9.
(iii) Assume that {Zt} is strictly C-stable, that is, aΨZ(w) = ΨZ(a^C w). Then, for a > 0,

Ee^{i⟨z,Yat⟩} = e^{atΨZ(ψX(z))} = e^{tΨZ(a^C ψX(z))}

and, as above,

a^C ψX(z) = ψX(a^{D′}z).

Hence

Ee^{i⟨z,Yat⟩} = E[exp(i⟨a^{D′}z, Yt⟩)],

which shows D-stability of {Yt}. □

Remark 5.26 Let Q ∈ M(0,∞)(d) and let SQ = {ξ ∈ Rd : |ξ| = 1 and |r^Q ξ| > 1 for every r > 1}. Then any x ∈ Rd\{0} is uniquely expressed as x = r^Q ξ with r > 0 and ξ ∈ SQ. Notice that SI is the unit sphere S but SQ ≠ S for some Q. Let μ ∈ ID with generating triplet (A, ν, γ). Then μ ∈ L0(Q) if and only if QA + AQ′ is nonnegative-definite and

ν(B) = ∫_{SQ} λ(dξ) ∫_0^∞ 1B(r^Q ξ) (kξ(r)/r) dr,   B ∈ B(Rd),

where λ is a finite measure on SQ and kξ(r) is nonnegative, decreasing in r ∈ (0, ∞), and measurable in ξ ∈ SQ. Under the assumption that α > 0, Q ∈ M(α/2,∞)(d), and μ is purely non-Gaussian, we can show that μ ∈ S_{α^{−1}Q} if and only if

ν(B) = ∫_{SQ} λ(dξ) ∫_0^∞ 1B(r^Q ξ) r^{−α−1} dr,   B ∈ B(Rd),

where λ is a finite measure on SQ; this statement does not exclude the possibility that S_{α^{−1}Q} is the set of trivial distributions. It follows that, if {Zt} is a non-trivial (α^{−1}Q)-stable Rd+-valued subordinator, then Q is strongly restricted. For example then, under the additional assumption that d = 2 and Q is of the real Jordan normal form, Q cannot be

( q1 1 ; 0 q1 )   nor   ( q1 −q2 ; q2 q1 )

with q1 > 0, q2 > 0, and thus Q must be of the form

( q1 0 ; 0 q2 )

Notes Halgreen [30] (1979) and Ismail and Kelker [36] (1979) proved assertion (i) of Theorem 5.1 in the case where {Xt } is Brownian motion on R. Assertion (iii) of Theorem 5.1 was essentially known to Bochner [14] (1955). Theorem 5.25 was given in Barndorff-Nielsen et al. [8] (2001), but we have given a simpler proof. Assertion (ii) of Theorem 5.1 is a special case of Theorem 5.25 (ii) with N = 1 and Q = Q1 = (1/α)I . Theorem 5.9 was shown by Pedersen and Sato [71] (2003) for subordination of cone-parameter convolution semigroups on Rd . Examples 5.5 and 5.14 are from Barndorff-Nielsen and Halgreen [6] (1977) and Halgreen [30] (1979); see also Bondesson [16] (1992). Theorems 5.9 and 5.10 were extended to subordination of cone-parameter convolution semigroups, respectively, by Pedersen and Sato [71] (2003) and by Sato [100] (2009). Theorem 5.10 was proved in Sato [94] (2001a). Earlier Halgreen [30] (1979) and Shanbhag and Sreehari [112] (1979) proved it under the condition that L(Z1 ) is a generalized -convolution. Remark 5.12 is by Takano [119] (1989,1990). Characterization and many related results on general distributions in S(Q) and Lm (Q) are discussed in Sharpe [113] (1969), Urbanik [128] (1972a), Sato and Yamazato [111] (1985), and Jurek and Mason [44] (1993). For characterization of distributions in S0 (Q), see Sato [87] (1987). For Q ∈ M(0,∞) (d) with d ≥ 2, consider Eq. (2.13) with c replaced by Q. Then we can extend the notion of Ornstein–Uhlenbeck type process generated by ρ ∈ I D(Rd ) and c > 0 to that generated by ρ and Q in a natural way; the extended process is also called Ornstein–Uhlenbeck type process frequently, but let us call it


a Q-OU type process. Connections of distributions in Lm(Q), m = 0, 1, …, ∞, with Q-OU type processes are parallel to those of Lm with OU type processes in Chaps. 2 and 3 and the proofs are similar; in fact it was done simultaneously in many papers. However, it was a harder problem to find a criterion of recurrence and transience for Q-OU type processes; it was solved by Sato et al. [104] (1996) and Watanabe [133] (1998).

Bibliography

1. Akita, K., & Maejima, M. (2002). On certain self-decomposable self-similar processes with independent increments. Statistics & Probability Letters, 59, 53–59.
2. Alf, C., & O'Connor, T. A. (1977). Unimodality of the Lévy spectral function. Pacific Journal of Mathematics, 69, 285–290.
3. Araujo, A., & Giné, E. (1980). The central limit theorem for real and Banach valued random variables. New York: Wiley.
4. Barndorff-Nielsen, O. E. (1978). Hyperbolic distributions and distributions on hyperbolae. Scandinavian Journal of Statistics, 5, 151–157.
5. Barndorff-Nielsen, O. E. (1997). Normal inverse Gaussian distributions and stochastic volatility modelling. Scandinavian Journal of Statistics, 24, 1–13.
6. Barndorff-Nielsen, O. E., & Halgreen, C. (1977). Infinite divisibility of the hyperbolic and generalized inverse Gaussian distributions. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 38, 309–311.
7. Barndorff-Nielsen, O. E., Maejima, M., & Sato, K. (2006). Some classes of infinitely divisible distributions admitting stochastic integral representations. Bernoulli, 12, 1–33.
8. Barndorff-Nielsen, O. E., Pedersen, J., & Sato, K. (2001). Multivariate subordination, selfdecomposability and stability. Advances in Applied Probability, 33, 160–187.
9. Barndorff-Nielsen, O. E., & Pérez-Abreu, V. (2008). Matrix subordinators and related upsilon transformations. Theory of Probability and its Applications, 52, 1–23.
10. Barndorff-Nielsen, O. E., Rosiński, J., & Thorbjørnsen, S. (2008). General ϒ transformations. ALEA Latin American Journal of Probability and Mathematical Statistics, 4, 131–165.
11. Barndorff-Nielsen, O. E., & Thorbjørnsen, S. (2004). A connection between free and classical infinite divisibility. Infinite Dimensional Analysis, Quantum Probability and Related Topics, 7, 573–590.
12. Bertoin, J. (1996). Lévy processes. Cambridge: Cambridge University Press.
13. Billingsley, P. (2012). Probability and measure (Anniversary ed.). New Jersey: Wiley.
14. Bochner, S. (1955). Harmonic analysis and the theory of probability. Berkeley: University of California Press.
15. Bondesson, L. (1981). Classes of infinitely divisible distributions and densities. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 57, 39–71. Correction and addendum 59, 277 (1982).
16. Bondesson, L. (1992). Generalized gamma convolutions and related classes of distribution densities. Lecture notes in statistics (Vol. 76). New York: Springer.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2019 A. Rocha-Arteaga, K. Sato, Topics in Infinitely Divisible Distributions and Lévy Processes, SpringerBriefs in Probability and Mathematical Statistics, https://doi.org/10.1007/978-3-030-22700-5


Bibliography

17. Carr, P., Geman, H., Madan, D. B., & Yor, M. (2005). Pricing options on realized variance. Finance and Stochastics, 9, 453–475. 18. Chentsov, N. N. (1957). Lévy’s Brownian motion of several parameters and generalized white noise. Theory of Probability and its Applications, 2, 265–266. 19. Chung, K. L. (1974). A course in probability theory (2nd ed.). Orlando, FL: Academic Press. 20. Çinlar, E., & Pinsky, M. (1971). A stochastic integral in storage theory. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 17, 227–240. 21. Çinlar, E., & Pinsky, M. (1972). On dams with additive inputs and a general release rule. Journal of Applied Probability, 9, 422–429. 22. Dettweiler, E. (1976). Infinitely divisible measures on the cone of an ordered locally convex vector space. Annales scientifiques de l’Université de Clermont, 14, 61, 11–17. 23. Eberlein, E., & Madan, D. B. (2009). Sato processes and the valuation of structured products. Quantitative Finance, 9, 1, 27–42. 24. Feller, W. (1971). An introduction to probability theory and its applications (2nd ed., Vol. 2). New York: Wiley. 25. Gihman, I. I., & Skorohod, A. V. (1975). The theory of stochastic processes (Vol. II). Berlin: Springer. 26. Gnedenko, B. V., & Kolmogorov, A. N. (1968). Limit distributions for sums of independent random variables (2nd ed.). Reading, MA: Addison-Wesley (Russian original 1949). 27. Goldie, C. (1967). A class of infinitely divisible random variables. Proceedings of the Cambridge Philosophical Society, 63, 1141–1143. 28. Gravereaux, J. B. (1982). Probabilité de Lévy sur R^d et équations différentielles stochastiques linéaires, Séminaire de Probabilité 1982 (Université de Rennes I, Publications des Séminaires de Mathématiques), 1–42. 29. Graversen, S. E., & Pedersen, J. (2011). Representations of Urbanik’s classes and multiparameter Ornstein–Uhlenbeck processes. Electronic Communications in Probability, 16, 200–212. 30. Halgreen, C. (1979).
Self-decomposability of the generalized inverse Gaussian and hyperbolic distributions. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 47, 13–17. 31. Halmos, P. R. (1950). Measure theory. Princeton, NJ: Van Nostrand. 32. Hartman, P., & Wintner, A. (1942). On the infinitesimal generators of integral convolutions. American Journal of Mathematics, 64, 273–298. 33. Hengartner, W., & Theodorescu, R. (1973). Concentration functions. New York: Academic Press. 34. Hirsch, F. (1995). Potential theory related to some multiparameter processes. Potential Analysis, 4, 245–267. 35. Hudson, W. N., & Mason, J. D. (1981). Operator-stable distribution on R^2 with multiple exponents. Annals of Probability, 9, 482–489. 36. Ismail, M. E. H., & Kelker, D. H. (1979). Special functions, Stieltjes transforms and infinite divisibility. SIAM Journal on Mathematical Analysis, 10, 884–901. 37. James, L. F., Roynette, B., & Yor, M. (2008). Generalized gamma convolutions, Dirichlet means, Thorin measures, with explicit examples. Probability Surveys, 5, 346–415. 38. Jeanblanc, M., Pitman, J., & Yor, M. (2002). Self-similar processes with independent increments associated with Lévy and Bessel processes. Stochastic Processes and their Applications, 100, 223–231. 39. Jurek, Z. J. (1983a). Limit distributions and one-parameter groups of linear operators on Banach spaces. Journal of Multivariate Analysis, 13, 578–604. 40. Jurek, Z. J. (1983b). The class Lm(Q) of probability measures on Banach spaces. Bulletin of the Polish Academy of Sciences Mathematics, 31, 51–62. 41. Jurek, Z. J. (1985). Relations between the s-selfdecomposable and selfdecomposable measures. Annals of Probability, 13, 592–608. 42. Jurek, Z. J. (1988). Random integral representations for classes of limit distributions similar to Lévy class L0. Probability Theory and Related Fields, 78, 473–490. 43. Jurek, Z. J. (1989).
Random integral representations for classes of limit distributions similar to Lévy class L0, II. Nagoya Mathematical Journal, 114, 53–64.


44. Jurek, Z. J., & Mason, J. D. (1993). Operator-limit distributions in probability theory. New York: Wiley. 45. Jurek, Z. J., & Vervaat, W. (1983). An integral representation for self-decomposable Banach space valued random variables. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 62, 247–262. 46. Khintchine, A. Ya. (1938). Limit laws for sums of independent random variables. Moscow: ONTI (in Russian). 47. Khoshnevisan, D., & Shi, Z. (1999). Brownian sheet and capacity. Annals of Probability, 27, 1135–1159. 48. Kokholm, T., & Nicolato, E. (2010). Sato processes in default modelling. Applied Mathematical Finance, 17(5), 377–397. 49. Kumar, A., & Schreiber, B. M. (1978). Characterization of subclasses of class L probability distributions. Annals of Probability, 6, 279–293. 50. Kumar, A., & Schreiber, B. M. (1979). Representation of certain infinitely divisible probability measures on Banach spaces. Journal of Multivariate Analysis, 9, 288–303. 51. Kyprianou, A. E. (2014). Fluctuations of Lévy processes with applications (2nd ed.). Berlin: Springer. 52. Lagaize, S. (2001). Hölder exponent for a two-parameter Lévy process. Journal of Multivariate Analysis, 77, 270–285. 53. Lamperti, J. (1962). Semi-stable stochastic processes. Transactions of the American Mathematical Society, 104, 62–78. 54. Lévy, P. (1937). Théorie de l’Addition des Variables Aléatoires. Paris: Gauthier-Villars. (2e éd. 1954). 55. Lévy, P. (1948). Processus Stochastiques et Mouvement Brownien. Paris: Gauthier-Villars. (2e éd. 1965). 56. Linde, W. (1986). Probability in Banach spaces: Stable and infinitely divisible distributions. Berlin: Wiley. 57. Linnik, J. V., & Ostrovskii, I. V. (1977). Decomposition of random variables and vectors. Providence, RI: American Mathematical Society. 58. Loève, M. (1977, 1978). Probability theory (4th ed., Vols. I and II). New York: Springer (1st ed., 1955). 59. Maejima, M. (2015). Classes of infinitely divisible distributions and examples.
In Lévy Matters V. Lecture notes in mathematics (Vol. 2149, pp. 1–65). Cham: Springer. 60. Maejima, M., & Sato, K. (2003). Semi-Lévy processes, semi-selfsimilar additive processes, and semi-stationary Ornstein–Uhlenbeck type processes. Journal of Mathematics of Kyoto University, 43, 609–639. 61. Maejima, M., & Sato, K. (2009). The limits of nested subclasses of several classes of infinitely divisible distributions are identical with the closure of the class of stable distributions. Probability Theory and Related Fields, 145, 119–142. 62. Maejima, M., Sato, K., & Watanabe, T. (2000). Distributions of selfsimilar and semiselfsimilar processes with independent increments. Statistics & Probability Letters, 47, 395–401. 63. Maejima, M., Suzuki, K., & Tamura, Y. (1999). Some multivariate infinitely divisible distributions and their projections. Probability and Mathematical Statistics, 19, 421–428. 64. Maejima, M., & Ueda, Y. (2009). Stochastic integral characterizations of semiselfdecomposable distributions and related Ornstein–Uhlenbeck type processes. Communications on Stochastic Analysis, 3, 349–367. 65. Maejima, M., & Ueda, Y. (2010). α-self-decomposable distributions and related Ornstein–Uhlenbeck type processes. Stochastic Processes and their Applications, 120, 2363–2389. 66. McKean, H. P., Jr. (1963). Brownian motion with a several dimensional time. Theory of Probability and its Applications, 8, 357–378. 67. O’Connor, T. A. (1979a). Infinitely divisible distributions with unimodal Lévy spectral functions. Annals of Probability, 7, 494–499.


68. O’Connor, T. A. (1979b). Infinitely divisible distributions similar to class L distributions. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 50, 265–271. 69. Orey, S. (1968). On continuity properties of infinitely divisible distribution functions. Annals of Mathematical Statistics, 39, 936–937. 70. Orey, S., & Pruitt, W. E. (1973). Sample functions of the N-parameter Wiener process. Annals of Probability, 1, 138–163. 71. Pedersen, J., & Sato, K. (2003). Cone-parameter convolution semigroups and their subordination. Tokyo Journal of Mathematics, 26, 503–525. 72. Pedersen, J., & Sato, K. (2004). Relations between cone-parameter Lévy processes and convolution semigroups. Journal of the Mathematical Society of Japan, 56, 541–559. 73. Pérez-Abreu, V., & Rocha-Arteaga, A. (2003). Lévy processes in Banach spaces: Distributional properties and subordination. Stochastic Models, Contemporary Mathematics Series of the American Mathematical Society, 336, 225–235. 74. Pérez-Abreu, V., & Rocha-Arteaga, A. (2005). Covariance-parameter Lévy processes in the space of trace-class operators. Infinite Dimensional Analysis, Quantum Probability and Related Topics, 8, 33–54. 75. Pérez-Abreu, V., & Rocha-Arteaga, A. (2006). On the Lévy–Khintchine representation of Lévy processes in cones of Banach spaces. Publicaciones Matemáticas del Uruguay, 11, 41–55. 76. Pérez-Abreu, V., Rocha-Arteaga, A., & Tudor, C. (2005). Cone-additive processes in duals of nuclear Fréchet spaces. Random Operators and Stochastic Equations, 13, 353–368. 77. Pérez-Abreu, V., & Rosiński, J. (2007). Representation of infinitely divisible distributions on cones. Journal of Theoretical Probability, 20, 535–544. 78. Pérez-Abreu, V., & Stelzer, R. (2014). Infinitely divisible multivariate and matrix gamma distributions. Journal of Multivariate Analysis, 130, 155–175. 79. Pillai, R. N. (1990). On Mittag–Leffler functions and related distributions.
Annals of the Institute of Statistical Mathematics, 42, 157–161. 80. Ramachandran, B. (1997). On geometric-stable laws, a related property of stable processes, and stable densities of exponent one. Annals of the Institute of Statistical Mathematics, 49, 299–313. 81. Rocha-Arteaga, A. (2006). Subordinators in a class of Banach spaces. Random Operators and Stochastic Equations, 14, 1–14. 82. Rockafellar, R. T. (1970). Convex analysis. Princeton, NJ: Princeton University Press. 83. Rosiński, J. (2007). Tempering stable processes. Stochastic Processes and their Applications, 117, 677–707. 84. Sato, K. (1980). Class L of multivariate distributions and its subclasses. Journal of Multivariate Analysis, 10, 207–232. 85. Sato, K. (1982). Absolute continuity of multivariate distributions of class L. Journal of Multivariate Analysis, 12, 89–94. 86. Sato, K. (1985). Lectures on multivariate infinitely divisible distributions and operator-stable processes. Technical Report Series, Laboratory for Research in Statistics and Probability, Carleton University and University of Ottawa, No. 54, Ottawa. 87. Sato, K. (1987). Strictly operator-stable distributions. Journal of Multivariate Analysis, 22, 278–295. 88. Sato, K. (1990). Distributions of class L and self-similar processes with independent increments. In T. Hida, H. H. Kuo, J. Potthoff & L. Streit (Eds.), White noise analysis. Mathematics and applications (pp. 360–373). Singapore: World Scientific. 89. Sato, K. (1991). Self-similar processes with independent increments. Probability Theory and Related Fields, 89, 285–300. 90. Sato, K. (1994). Time evolution of distributions of Lévy processes from continuous singular to absolutely continuous. Research Bulletin, College of General Education, Nagoya University, Series B, 38, 1–11.


91. Sato, K. (1997). Time evolution of Lévy processes. In N. Kono & N.-R. Shieh (Eds.), Trends in Probability and Related Analysis, Proceedings SAP ’96 (pp. 35–82). Singapore: World Scientific. 92. Sato, K. (1998). Multivariate distributions with selfdecomposable projections. Journal of the Korean Mathematical Society, 35, 783–791. 93. Sato, K. (1999). Lévy processes and infinitely divisible distributions. Cambridge: Cambridge University Press. 94. Sato, K. (2001a). Subordination and selfdecomposability. Statistics & Probability Letters, 54, 317–324. 95. Sato, K. (2001b). Basic results on Lévy processes. In O. E. Barndorff-Nielsen, T. Mikosch & S. I. Resnick (Eds.), Lévy processes, theory and applications (pp. 3–37). Boston: Birkhäuser. 96. Sato, K. (2004). Stochastic integrals in additive processes and application to semi-Lévy processes. Osaka Journal of Mathematics, 41, 211–236. 97. Sato, K. (2006a). Additive processes and stochastic integrals. Illinois Journal of Mathematics, 50, 825–851. 98. Sato, K. (2006b). Two families of improper stochastic integrals with respect to Lévy processes. ALEA Latin American Journal of Probability and Mathematical Statistics, 1, 47–87. 99. Sato, K. (2006c). Monotonicity and non-monotonicity of domains of stochastic integral operators. Probability and Mathematical Statistics, 26, 23–39. 100. Sato, K. (2009). Selfdecomposability and semi-selfdecomposability in subordination of cone-parameter convolution semigroups. Tokyo Journal of Mathematics, 32, 81–90. 101. Sato, K. (2010). Fractional integrals and extensions of selfdecomposability. In Lévy Matters I. Lecture notes in mathematics (Vol. 2001, pp. 1–91). Cham: Springer. 102. Sato, K. (2011). Description of limits of ranges of iterations of stochastic integral mappings of infinitely divisible distributions. ALEA Latin American Journal of Probability and Mathematical Statistics, 8, 1–17. 103. Sato, K. (2013). Lévy processes and infinitely divisible distributions (Revised ed.).
Cambridge: Cambridge University Press. 104. Sato, K., Watanabe, T., Yamamuro, K., & Yamazato, M. (1996). Multidimensional process of Ornstein–Uhlenbeck type with nondiagonalizable matrix in linear drift terms. Nagoya Mathematical Journal, 141, 45–78. 105. Sato, K., Watanabe, T., & Yamazato, M. (1994). Recurrence conditions for multidimensional processes of Ornstein–Uhlenbeck type. Journal of the Mathematical Society of Japan, 46, 245–265. 106. Sato, K., & Yamamuro, K. (1998). On selfsimilar and semi-selfsimilar processes with independent increments. Journal of the Korean Mathematical Society, 35, 207–224. 107. Sato, K., & Yamamuro, K. (2000). Recurrence-transience for self-similar additive processes associated with stable distributions. Acta Applicandae Mathematica, 63, 375–384. 108. Sato, K., & Yamazato, M. (1978). On distribution functions of class L. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 43, 273–308. 109. Sato, K., & Yamazato, M. (1983). Stationary processes of Ornstein–Uhlenbeck type. In K. Itô & J. V. Prokhorov (Eds.), Probability theory and mathematical statistics, Fourth USSR–Japan symposium, proceedings 1982. Lecture notes in mathematics (No. 1021, pp. 541–551). Berlin: Springer. 110. Sato, K., & Yamazato, M. (1984). Operator-selfdecomposable distributions as limit distributions of processes of Ornstein–Uhlenbeck type. Stochastic Processes and their Applications, 17, 73–100. 111. Sato, K., & Yamazato, M. (1985). Completely operator-selfdecomposable distributions and operator-stable distributions. Nagoya Mathematical Journal, 97, 71–94. 112. Shanbhag, D. N., & Sreehari, M. (1979). An extension of Goldie’s result and further results in infinite divisibility. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 47, 19–25.


113. Sharpe, M. (1969). Operator-stable probability distributions on vector groups. Transactions of the American Mathematical Society, 136, 51–65. 114. Shiga, T. (1990). A recurrence criterion for Markov processes of Ornstein–Uhlenbeck type. Probability Theory and Related Fields, 85, 425–447. 115. Skorohod, A. V. (1991). Random processes with independent increments. Dordrecht: Kluwer Academic Publishers (Russian original 1986). 116. Steutel, F. W. (1967). Note on the infinite divisibility of exponential mixtures. Annals of Mathematical Statistics, 38, 1303–1305. 117. Steutel, F. W. (1970). Preservation of infinite divisibility under mixing and related topics. Mathematical Centre Tracts, No. 33. Amsterdam: Mathematisch Centrum. 118. Steutel, F. W., & van Harn, K. (2004). Infinite divisibility of probability distributions on the real line. New York: Marcel Dekker. 119. Takano, K. (1989/1990). On mixtures of the normal distribution by the generalized gamma convolutions. Bulletin of the Faculty of Science, Ibaraki University. Series A, 21, 29–41. Correction and addendum, 22, 49–52. 120. Thorin, O. (1977a). On the infinite divisibility of the Pareto distribution. Scandinavian Actuarial Journal, 1977, 31–40. 121. Thorin, O. (1977b). On the infinite divisibility of the lognormal distribution. Scandinavian Actuarial Journal, 1977, 121–148. 122. Thu, N. V. (1979). Multiply self-decomposable probability measures on Banach spaces. Studia Mathematica, 66, 161–175. 123. Thu, N. V. (1982). Universal multiply self-decomposable probability measures on Banach spaces. Probability and Mathematical Statistics, 3, 71–84. 124. Thu, N. V. (1984). Fractional calculus in probability. Probability and Mathematical Statistics, 3, 173–189. 125. Thu, N. V. (1986). An alternative approach to multiply self-decomposable probability measures on Banach spaces. Probability Theory and Related Fields, 72, 35–54. 126. Tucker, H. G. (1965).
On a necessary and sufficient condition that an infinitely divisible distribution be absolutely continuous. Transactions of the American Mathematical Society, 118, 316–330. 127. Urbanik, K. (1969). Self-decomposable probability distributions on R^m. Applicationes Mathematicae, 10, 91–97. 128. Urbanik, K. (1972a). Lévy’s probability measures on Euclidean spaces. Studia Mathematica, 44, 119–148. 129. Urbanik, K. (1972b). Slowly varying sequences of random variables. Bulletin de l’Académie Polonaise des Sciences, Série des Sciences Mathématiques, Astronomiques et Physiques, 20, 679–682. 130. Urbanik, K. (1973). Limit laws for sequences of normed sums satisfying some stability conditions. In P. R. Krishnaiah (Ed.), Multivariate analysis-III (pp. 225–237). New York: Academic Press. 131. Vares, M. E. (1983). Local times for two-parameter Lévy processes. Stochastic Processes and their Applications, 15, 59–82. 132. Watanabe, T. (1996). Sample function behavior of increasing processes of class L. Probability Theory and Related Fields, 104, 349–374. 133. Watanabe, T. (1998). Sato’s conjecture on recurrence conditions for multidimensional processes of Ornstein–Uhlenbeck type. Journal of the Mathematical Society of Japan, 50, 155–168. 134. Watanabe, T. (1999). On Bessel transforms of multimodal increasing Lévy processes. Japanese Journal of Mathematics, 25, 227–256. 135. Watanabe, T. (2000). Absolute continuity of some semi-selfdecomposable distributions and self-similar measures. Probability Theory and Related Fields, 117, 387–405. 136. Watanabe, T. (2001). Temporal change in distributional properties of Lévy processes. In O. E. Barndorff-Nielsen, T. Mikosch & S. I. Resnick (Eds.), Lévy processes, theory and applications (pp. 89–107). Boston: Birkhäuser.


137. Watanabe, T., & Yamamuro, K. (2010). Limsup behaviors of multi-dimensional selfsimilar processes with independent increments. ALEA Latin American Journal of Probability and Mathematical Statistics, 7, 79–116. 138. Widder, D. V. (1946). The Laplace transform. Princeton, NJ: Princeton University Press. 139. Wolfe, S. J. (1982a). On a continuous analogue of the stochastic difference equation X_n = ρX_{n−1} + B_n. Stochastic Processes and their Applications, 12, 301–312. 140. Wolfe, S. J. (1982b). A characterization of certain stochastic integrals (Tenth Conference on Stochastic Processes and Their Applications, Contributed Papers). Stochastic Processes and their Applications, 12, 136. 141. Yamamuro, K. (2000a). Transience conditions for self-similar additive processes. Journal of the Mathematical Society of Japan, 52, 343–362. 142. Yamamuro, K. (2000b). On recurrence for self-similar additive processes. Kodai Mathematical Journal, 23, 234–241. 143. Yamazato, M. (1978). Unimodality of infinitely divisible distribution functions of class L. Annals of Probability, 6, 523–531.

Notation

$\mathbb{R}$, $\mathbb{Q}$, $\mathbb{Z}$, $\mathbb{N}$, and $\mathbb{C}$ are the sets of real numbers, rational numbers, integers, positive integers, and complex numbers, respectively. $\mathbb{R}_+ = [0, \infty)$ and $\mathbb{Z}_+ = \{0, 1, 2, \ldots\}$.

$\mathbb{R}^d$ is the $d$-dimensional Euclidean space and elements of $\mathbb{R}^d$ are column vectors $x = (x_j)_{1 \le j \le d}$. The inner product is $\langle x, y \rangle = \sum_{j=1}^d x_j y_j$ for $x = (x_j)_{1 \le j \le d}$ and $y = (y_j)_{1 \le j \le d}$. The norm is $|x| = \langle x, x \rangle^{1/2}$. $S = \{\xi \in \mathbb{R}^d : |\xi| = 1\}$ is the unit sphere in $\mathbb{R}^d$.

$\mathbb{C}^N$ is the set of $N$-tuples of complex numbers and elements of $\mathbb{C}^N$ are column vectors $w = (w_j)_{1 \le j \le N}$. For any $w = (w_j)_{1 \le j \le N}$ and $v = (v_j)_{1 \le j \le N}$ in $\mathbb{C}^N$, we define $\langle w, v \rangle = \sum_{j=1}^N w_j v_j$. This is not the Hermitian inner product. For any $N \times N$ matrix $F$, the transpose of $F$ is denoted by $F'$. Thus $(w_1, \ldots, w_N)'$ denotes the transpose of the row vector $(w_1, \ldots, w_N)$.

For any Borel set $C$ in $\mathbb{R}^d$, $\mathcal{B}(C)$ is the class of Borel subsets of $C$. For any subset $C$ of $\mathbb{R}^d$, $1_C(x)$ is the indicator function of $C$.

$P = P(\mathbb{R}^d)$ is the class of probability measures (distributions) on $\mathbb{R}^d$. That is, $\mu \in P(\mathbb{R}^d)$ is a countably additive mapping from $\mathcal{B}(\mathbb{R}^d)$ into $[0, 1]$ satisfying $\mu(\mathbb{R}^d) = 1$. $ID = ID(\mathbb{R}^d)$ is the class of infinitely divisible distributions on $\mathbb{R}^d$. $S = S(\mathbb{R}^d)$ is the class of stable distributions on $\mathbb{R}^d$. $S_\alpha = S_\alpha(\mathbb{R}^d)$ is the class of $\alpha$-stable distributions on $\mathbb{R}^d$.

$\widehat{\mu}(z) = \int_{\mathbb{R}^d} e^{i \langle z, x \rangle} \mu(dx)$, $z \in \mathbb{R}^d$, is the characteristic function of $\mu \in P(\mathbb{R}^d)$.

$(\mu_1 * \mu_2)(B) = \int_{\mathbb{R}^d \times \mathbb{R}^d} 1_B(x + y)\, \mu_1(dx)\, \mu_2(dy)$, for $B \in \mathcal{B}(\mathbb{R}^d)$, is the convolution of $\mu_1$ and $\mu_2$ in $P(\mathbb{R}^d)$. For $\mu \in P$ and $n \in \mathbb{N}$, $\mu^n$ is the $n$-fold convolution of $\mu$. For $\mu \in ID$ and $t \in \mathbb{R}_+$, $\mu^t$ is defined by $\widehat{\mu^t}(z) = e^{t (\log \widehat{\mu})(z)}$, where $\log \widehat{\mu}$ is the distinguished logarithm of $\widehat{\mu}$. Sometimes $\mu^n$ and $\mu^t$ are written as $\mu^{n*}$ and $\mu^{t*}$, respectively.

$\mathcal{L}(X)$ is the distribution (law) of a random variable $X$ on $\mathbb{R}^d$. $X \overset{d}{=} Y$ means that two random variables $X$ and $Y$ have a common distribution, that is, they are identical in law: $\mathcal{L}(X) = \mathcal{L}(Y)$. $\{X_t\} \overset{d}{=} \{Y_t\}$ means that two stochastic processes $\{X_t\}$ and $\{Y_t\}$ are identical in law, that is, have a common system of finite-dimensional distributions. Note that $X_t \overset{d}{=} Y_t$ simply means that, for each $t$, $X_t$ and $Y_t$ have a common distribution.

For any $\mu \in P(\mathbb{R}^d)$ with $\int |x|^2 \mu(dx) < \infty$, the covariance matrix of $\mu$ is the nonnegative-definite matrix $(\mathrm{cov}(X_i, X_j))_{i,j=1}^d$ for a random variable $(X_j)_{1 \le j \le d}$ such that $\mathcal{L}((X_j)_{1 \le j \le d}) = \mu$.

For $\mu_n$ ($n = 1, 2, \ldots$) and $\mu$ in $P$, $\mu_n \to \mu$ means weak convergence of $\mu_n$ to $\mu$, that is, $\int f(x) \mu_n(dx) \to \int f(x) \mu(dx)$ for all bounded continuous functions $f$. When $\mu_n$ ($n = 1, 2, \ldots$) and $\mu$ are finite measures on $\mathbb{R}^d$, the convergence $\mu_n \to \mu$ is defined in the same way.

$\delta_c$ is a distribution concentrated at $c$; it is called a trivial distribution. A random variable $X$ is trivial if $\mathcal{L}(X)$ is trivial. A stochastic process $\{X_t\}$ is a trivial process if $X_t$ is trivial for all $t$; it is a zero process if $\mathcal{L}(X_t) = \delta_0$ for all $t \ge 0$.

The words increasing and decreasing are used in the wide sense allowing flatness. Unless specifically mentioned, measurable means Borel measurable.
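The conventions above for $\mu^n$, $\mu^t$, and $\widehat{\mu}$ can be checked numerically in a small sketch. The following Python/NumPy snippet (ours, not from the book; the function names are illustrative) verifies, in the Poisson case, that the $n$-fold convolution of $\mu^{1/n}$ recovers $\mu$, and that characteristic functions multiply under convolution.

```python
import numpy as np

def poisson_pmf(lam, kmax):
    """pmf of Poisson(lam) on {0, 1, ..., kmax}."""
    p = np.empty(kmax + 1)
    p[0] = np.exp(-lam)
    for k in range(1, kmax + 1):
        p[k] = p[k - 1] * lam / k  # recursion p_k = p_{k-1} * lam / k
    return p

def convolve(p, q):
    """Convolution of two pmfs on Z_+, truncated to len(p).

    Entries k < len(p) of np.convolve(p, q) are exact, since they
    only involve indices j <= k of p and q."""
    return np.convolve(p, q)[: len(p)]

def chf(p, z):
    """Characteristic function sum_k p_k e^{izk} of a truncated pmf."""
    return np.sum(p * np.exp(1j * z * np.arange(len(p))))

lam, n, kmax = 3.0, 4, 60
mu = poisson_pmf(lam, kmax)        # mu = Poisson(3), infinitely divisible
root = poisson_pmf(lam / n, kmax)  # mu^{1/n} = Poisson(3/4)

# n-fold convolution of mu^{1/n} recovers mu (infinite divisibility)
conv = root
for _ in range(n - 1):
    conv = convolve(conv, root)
assert np.max(np.abs(conv - mu)) < 1e-12

# hat(mu)(z) = hat(mu^{1/n})(z)^n: convolution multiplies characteristic functions
z = 0.7
assert abs(chf(mu, z) - chf(root, z) ** n) < 1e-12
```

The truncation at `kmax = 60` is harmless here because the tail mass of Poisson(3) beyond 60 is far below floating-point precision.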

Index

A Additive process, 13 in law, 13 Additive subgroup, 98 Akita, K., 23, 121 Alf, C., 58, 121 Araujo, A., 104, 121

B Barndorff-Nielsen, O.E., 57, 58, 102, 106, 118, 121 Bernstein’s theorem, 17 Bertoin, J., 25, 121 Billingsley, P., v, 36, 121 Bochner, S., 103, 118, 121 Bondesson, L., 58, 118, 121 Brownian motion, 13

C Carr, P., 74, 122 Characteristic functional, 2, 104, 129 Chentsov, N.N., 103, 122 Chung, K.L., v, 2, 122 Çinlar, E., 57, 122 Class B (Goldie-Steutel-Bondesson), 58 of generalized Γ-convolutions, 58, 109 ID, 1, 129 ID_log, 47 K(Q), 5 L, 5

L0, 5 L0(Q), 113 Lm, 6 Lm(Q), 113 L∞, 7 L∞(Q), 114 P, 1, 129 S, 8, 129 Sα, 9, 129 S0, 8 S0α, 9 SQ, 114 S0Q, 114 S(Q), 115 S0(Q), 115 Ssc, 67 St, 67 T (Thorin), 58 U (Jurek), 59 Component-block, 115 Cone, 81, 104 dual in a Banach space, 104 in R^N, 104 isomorphic, 89 linear subspace generated by, 89 N-dimensional, 89 nondegenerate, 89 normal, 104 regular, 104 R^N_+, 84 S_d^+, 99 strong basis of, 89 weak basis of, 89


Convergence in P, 2 Convolution, 129 Convolution semigroup canonical, 99 generative, 98 induced by a Lévy process, 87 K-parameter, 87 multiple-generative, 98 non-generative, 98 trivial, 99 unique-generative, 98 Covariance matrix, 130

D Decreasing, 130 Dettweiler, E., 105, 122 Distinguished logarithm, 2, 129 nth root, 2 tth power, 2 Distribution α-stable, 9 compound Poisson, 12 degenerate, 9 δc concentrated at c, 130 exponential, 21

Γ (gamma), 21 Gaussian, 12 Gaussian degenerate, 12 generalized hyperbolic, 112 generalized inverse Gaussian, 109 geometric stable, 81, 108 hyperbolic, 112 infinitely divisible, 1, 104 inverse Gaussian, 108 Linnik, 81, 108 m + 1 times selfdecomposable, 6 μ^{n*} (or μ^n), 3, 129 μ^{t*} (or μ^t), 3, 129 nondegenerate, 9 non-Gaussian, 12 normal inverse Gaussian, 109 operator selfdecomposable, 114 operator stable, 114 of a random variable, 3 Poisson, 12 purely non-Gaussian, 12 Q-selfdecomposable, 113 Q-stable, 114 selfdecomposable, 5 spectrally negative, 23 stable, 8

strictly α-stable, 9 strictly Q-stable, 114 strictly stable, 8 trivial, 130 Drift of an infinitely divisible law, 12 of a Lévy process, 13

E Eberlein, E., 74, 122 Exponent of selfsimilarity, 62

F Feller, W., 5, 17, 18, 122 Fubini type theorem, 31 Function completely monotone, 15 K-decreasing, 81 K-increasing, 81 with K-left limits, 86 K-right continuous, 86 Mittag-Leffler, 108 modified Bessel, 109, 113 monotone of order n, 15 step, 28

G Gaussian covariance matrix, 12, 13 Gaussian variance, 12 Geman, H., 74, 122 Generating triplet (A, ν, γ ) of infinitely divisible law, 12, 104 of a Lévy process, 13, 105 Gihman, I.I., 105, 122 Giné, E., 104, 121 Gnedenko, B.V., v, 2, 5, 24, 122 Goldie, C., 58, 122 Gravereaux, J.B., 57, 122 Graversen, S.E., 57, 122

H Halgreen, C., 58, 107, 118, 121, 122 Halmos, P.R., 30, 122 Hartman, P., 25, 122 Hartman–Wintner theorem, 25 Hengartner, W., 45, 122 Hirsch, F., 103, 122 Hudson, W.N., 114, 122

I Identical in law random variables, 129 stochastic processes, 3, 130 Increasing, 130 Independent increments (property), 13 Index, of a stable distribution, 9 Integration-by-parts formula, 34 Ismail, M.E.H., 107, 118, 122 Isomorphism, 89 J James, L.F., 58, 122 Jeanblanc, M., 75, 122 Jensen’s inequality, 46 Jurek, Z.J., 57, 59, 114, 118, 122, 123 K Kelker, D.H., 107, 118, 122 Khintchine, A.Ya., 24, 123 Khoshnevisan, D., 103, 123 Kokholm, T., 74, 123 Kolmogorov, A.N., v, 2, 5, 24, 122 Kolmogorov’s extension theorem, 63 Kumar, A., 25, 123 Kyprianou, A.E., 75, 123

L Lagaize, S., 103, 123 Lamperti, J., 75, 123 Lamperti transformation, 68, 75 Laplace transform, 78, 104 Law of X (L(X)), 3, 129 LeCam’s estimate, 45 Lévy–Itô decomposition, 14, 24, 82 Lévy–Khintchine representation, 11, 77, 104 Lévy measure, 12 h-function of, 14 k-function of, 14, 15 of a Lévy process, 13, 105 of an infinitely divisible distribution, 12, 104 spherical component of, 14 Lévy, P., 24, 103, 123 Lévy process, 12, 105 α-stable, 13 background driving, 38 compound Poisson, 13

Γ (gamma), 22 Gaussian, 13 jump of, 24

jump time of, 24 K-increasing (or K-valued), 84 K-parameter, 86 K-parameter (in law), 87 of Lm class, 13 of Lm(Q) class, 115 Poisson, 13 purely non-Gaussian, 13 Q-selfdecomposable, 115 Q-stable, 115 selfdecomposable, 13 stable, 13 strictly α-stable, 13 strictly stable, 13 Linde, W., 104, 123 Linnik, J.V., 23, 123 Loève, M., 2, 5, 24, 123 Lusin’s theorem, 30

M Madan, D.B., 74, 122 Maejima, M., 23, 25, 58, 59, 74, 75, 121, 123 Mapping F^{(κ)}, 69 I_c^{(κ)}, 69 J_c^{(κ)}, 69 c, 68 c, 68 (c), 48 f, 49 Markov process, 35 positive selfsimilar, 75 temporally homogeneous with transition function, 36 Mason, J.D., 114, 118, 122, 123 McKean, H.P. Jr., 103, 123 Measurable, 130 Measure absolutely continuous, 25 continuous, 25 continuous singular, 25 discrete, 25 intensity, 82 q-potential, 80 singular, 25 support of, 23 Modified Bessel function, 86, 108

N Nicolato, E., 74, 123 Null array, 4

O O’Connor, T.A., 58, 121, 123, 124 Orey, S., 25, 103, 124 Ornstein–Uhlenbeck process, 38 Ornstein–Uhlenbeck type process (or OU type process), 37 generated by ρ and c, 37 generated by {Z_t^{(ρ)}} and c, 37 of wide-sense, 37 stationary, 70 Ostrovskii, I.V., 23, 123 P Pedersen, J., 57, 102, 118, 121, 122, 124 Pérez-Abreu, V., 102, 103, 105, 106, 121, 124 Pillai, R.N., 108, 124 Pinsky, M., 57, 122 Pitman, J., 75, 122 Poisson random measure, 82 Probability space, 3 Pruitt, W.E., 103, 124 R Ramachandran, B., 112, 124 Riemann–Lebesgue theorem, 25 Rocha-Arteaga, A., 102, 103, 105, 106, 124 Rockafellar, R.T., 81, 124 Rosiński, J., 57, 58, 105, 121, 124 Rotation invariant, 79 Row sums, 4 Roynette, B., 58, 122 S Sato, K., 25, 26, 53, 57, 58, 74, 75, 102, 114, 115, 118, 119, 121, 123–125 Schreiber, B.M., 25, 123 Selfsimilar process, 61, 62 Sequence K-decreasing, 81 K-increasing, 81 K-majorized, 104 Shanbhag, D.N., 118, 125 Sharpe, M., 114, 118, 126 Shi, Z., 103, 123 Shiga, T., 126 Skorohod, A.V., 103, 105, 122, 126 Sreehari, M., 118, 125

Stationary distribution, 70 increments (property), 13 OU type process, 70 process, 66 Stelzer, R., 106, 124 Steutel, F.W., 58, 126 Stochastic continuity, 13 Stochastic integral, 29 definable, 30, 48 improper, 48, 58 mapping f, 49 Stochastic process, 3 solution, 35 trivial, 130 Subordinand, 79, 89, 101 Subordinate/subordinated, 79, 89, 101 Subordinating, 89, 101 Subordination Bochner’s, 79 multivariate, 89 of convolution semigroups, 101 Subordinator, 78, 89, 101 K-valued, 84, 105 N-variate, 84 regular, 105 Suzuki, K., 74, 123

T Takano, K., 118, 126 Tamura, Y., 74, 123 Theodorescu, R., 45, 122 Thorbjørnsen, S., 57, 58, 121 Thorin, O., 58, 126 Thu, N.V., 25, 57, 126 Transition function, 36 temporally homogeneous, 36 Tucker, H.G., 26, 126 Tudor, C., 106, 124 Type equivalent, 8

U Ueda, Y., 25, 59, 123 Unit sphere, 14, 129 Urbanik, K., 24, 57, 114, 118, 126 Urysohn’s theorem, 30

V van Harn, K., 58, 126 Vares, M.E., 103, 126 Vervaat, W., 57, 123

W Watanabe, T., 25, 26, 57, 74, 119, 123, 125–127 Weak convergence, 2, 130 Widder, D.V., 16–18, 127

Wintner, A., 25, 122 Wolfe, S.J., 53, 57, 127 Y Yamamuro, K., 74, 119, 125, 127 Yamazato, M., 21, 53, 57, 114, 115, 118, 119, 125, 127 Yor, M., 58, 74, 75, 122 Z Zero process, 130