This book is intended for university seniors and graduate students majoring in probability theory or mathematical finance.



Monographs in Mathematical Economics 3

Shigeo Kusuoka

Stochastic Analysis

Monographs in Mathematical Economics Volume 3

Editor-in-Chief Toru Maruyama, Professor Emeritus, Keio University, Tokyo, Japan Series Editors Shigeo Kusuoka, Graduate School of Mathematical Sciences, The University of Tokyo, Tokyo, Japan Jean-Michel Grandmont, CREST-CNRS, Malakoff CX, France R. Tyrrell Rockafellar, Dept. Mathematics, University of Washington, Seattle, WA, USA

More information about this series at http://www.springer.com/series/13278


Shigeo Kusuoka Graduate School of Mathematical Sciences The University of Tokyo Tokyo, Japan

ISSN 2364-8279 ISSN 2364-8287 (electronic) Monographs in Mathematical Economics ISBN 978-981-15-8863-1 ISBN 978-981-15-8864-8 (eBook) https://doi.org/10.1007/978-981-15-8864-8 © Springer Nature Singapore Pte Ltd. 2020 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, speciﬁcally the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microﬁlms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a speciﬁc statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional afﬁliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

To Hiroko

Preface

Martingale theory is now regarded as a foundation of the theory of stochastic processes. It was created essentially by Doob [2]. He refined Kolmogorov's ideas on sums of independent random variables and discovered various kinds of inequalities in martingale theory. He also answered the question of when one can regard a stochastic process as a continuous process. Martingale theory became connected with the stochastic integrals and stochastic differential equations of Itô [5], and made great progress in the 1970s. In particular, the French school established a deep theory including discontinuous martingales. However, the theory of continuous martingales is the most important for applications. The term "stochastic analysis" is often used for the theory of continuous martingales, particularly in Japan.

There are various kinds of textbooks on stochastic analysis. However, from the viewpoint of applications (especially to finance), the content of these textbooks is either insufficient or overlong. I try to discuss stochastic analysis concisely and rigorously without omitting proofs. I try to use only elementary knowledge of functional analysis and to discuss almost all results in an L² framework. For example, the Doob–Meyer decomposition theorem concerning submartingales is proved only in the case that the submartingales are square integrable. Linear algebra and measure theory are necessary as preliminary knowledge. However, the measure theory used in this book is summarized in Chap. 1.

Tokyo, Japan
June 2020

Shigeo Kusuoka

Acknowledgements The contents of this book are based on courses of lectures on stochastic analysis that I gave at The University of Tokyo. I thank the students who attended and listened to my lectures patiently. I would also like to thank the anonymous referees who gave me valuable advice for improving the manuscript. Finally, I express my gratitude to Toru Maruyama for encouraging me for a long time to publish this book.


Contents

1 Preparations from Probability Theory .... 1
  1.1 Review .... 1
  1.2 Lᵖ-Space .... 8
  1.3 Conditional Expectation .... 11
  1.4 Jensen's Inequality for Conditional Expectations .... 17
  1.5 Some Remarks .... 19
2 Martingale with Discrete Parameter .... 21
  2.1 Definition of Martingale .... 21
  2.2 Doob's Inequality .... 22
  2.3 Stopping Time .... 24
  2.4 Doob's Decomposition and Martingale Transformation .... 26
  2.5 Upcrossing Number and Downcrossing Number .... 31
  2.6 Uniform Integrability .... 35
  2.7 Lévy's Theorem .... 39
  2.8 A Remark on Doob's Decomposition .... 40
3 Martingale with Continuous Parameter .... 43
  3.1 Several Notions on Stochastic Processes .... 43
  3.2 D-Modification .... 45
  3.3 Doob's Inequality and Stopping Times .... 48
  3.4 Doob–Meyer Decomposition .... 53
  3.5 Quadratic Variation .... 61
  3.6 Continuous Local Martingale .... 66
  3.7 Brownian Motion .... 73
  3.8 Optimal Stopping Time .... 79
4 Stochastic Integral .... 87
  4.1 Spaces of Stochastic Processes .... 87
  4.2 Continuous Semi-martingales .... 95
  4.3 Itô's Formula .... 100
5 Applications of Stochastic Integral .... 105
  5.1 Characterization of Brownian Motion .... 105
  5.2 Representation of Continuous Local Martingale .... 110
  5.3 Girsanov Transformation .... 112
  5.4 Moment Inequalities .... 118
  5.5 Itô's Representation Theorem .... 121
  5.6 Property of Brownian Motion .... 127
  5.7 Tanaka's Formula .... 132
6 Stochastic Differential Equation .... 135
  6.1 Itô's Stochastic Differential Equation and Euler–Maruyama Approximation .... 135
  6.2 Definition of Stochastic Differential Equation .... 148
  6.3 Uniqueness of a Solution to Martingale Problem .... 156
  6.4 Time Homogeneous Stochastic Differential Equation .... 159
  6.5 Smoothness of Solutions to Stochastic Differential Equations .... 163
7 Application to Finance .... 179
  7.1 Basic Situation and Dynamical Portfolio Strategy .... 179
  7.2 Black–Scholes Model .... 183
  7.3 General Case .... 189
  7.4 American Derivative .... 199
8 Appendices .... 203
  8.1 Dynkin's Lemma .... 203
  8.2 Convex Function .... 204
  8.3 L²-Weakly Compact .... 206
  8.4 Generalized Inverse Matrix .... 208
  8.5 Proof of Theorem 5.6.1 .... 209
  8.6 Gronwall's Inequality .... 214
References .... 215
Index .... 217

Notation

σ{C}: Proposition 1.1.5
Function 1_A: 1_A(x) = 1 if x ∈ A; 1_A(x) = 0 if x ∉ A
a ∧ b = min{a, b}
a ∨ b = max{a, b}
B(Rⁿ): Borel algebra on Rⁿ
G₁ ∨ G₂ = σ{G₁ ∪ G₂}
L^p_G = L^p(Ω, G, P) = {f; f : Ω → R is G-measurable, E[|f|^p] < ∞}
ess.sup_{λ∈Λ} X_λ: Section 1.5, last line
Z_{≥0}: the family of integers greater than or equal to 0
N₀: the family of all A ∈ F such that P(A) = 0 or 1
D([0, ∞)): the family of functions w : [0, ∞) → R such that lim_{s↓t} w(s) = w(t), t ∈ [0, ∞), and lim_{s↑t} w(s) exists for all t ∈ (0, ∞)
D_n, n ≥ 1, D: D_n = {k/2ⁿ; k ∈ Z_{≥0}}, D = ⋃_{n=1}^∞ D_n
M²: Section 3.4
A^{+,2}: Section 3.4
M_cb: Section 3.5
M̄²_c: Section 4.1
L⁰: Section 4.1
L²(A): Section 4.1
C_b(R^N): the family of all bounded continuous functions defined on R^N
‖f‖_∞ = sup_{x∈R^N} |f(x)|, f ∈ C_b(R^N)
Cⁿ(R^N), n ≥ 1: the family of all n-times continuously differentiable functions defined on R^N
Cⁿ_b(R^N), n ≥ 1: {f ∈ Cⁿ(R^N); (∂/∂x₁)^{i₁} ··· (∂/∂x_N)^{i_N} f ∈ C_b(R^N), i₁ + ··· + i_N ≤ n}
C^∞(R^N) = ⋂_{n=1}^∞ Cⁿ(R^N)
C^∞_b(R^N) = ⋂_{n=1}^∞ Cⁿ_b(R^N)
C^∞_0(R^N) = {f ∈ C^∞(R^N); there is an R > 0 such that f(x) = 0 for all x ∈ R^N with |x| > R}
δ_ij: Kronecker's delta, δ_ij = 1 if i = j; δ_ij = 0 if i ≠ j

Chapter 1

Preparations from Probability Theory

1.1 Review

We review fundamental results in probability theory, including notation, in this section.

Definition 1.1.1 We say that G is a σ-algebra over a set S, if the following four conditions are satisfied.
(1) G is a family of subsets of S.
(2) ∅ ∈ G.
(3) If A ∈ G, then A^c = S \ A ∈ G.
(4) If A_n ∈ G, n = 1, 2, . . ., then ⋃_{n=1}^∞ A_n ∈ G.

We say that (S, G) is a measurable space, if S is a set and G is a σ-algebra over S.

Proposition 1.1.1 Suppose that G is a σ-algebra over a set S. Then we have the following.
(1) S ∈ G.
(2) If A, B ∈ G, then A ∪ B, A ∩ B, A \ B ∈ G.
(3) If A_n ∈ G, n = 1, 2, . . ., then ⋂_{n=1}^∞ A_n ∈ G.

Proposition 1.1.2 Assume that C is a family of subsets of a set S. Let 𝒢 be the set of all σ-algebras over S which include C, and let

    σ{C} = ⋂_{G ∈ 𝒢} G.

Then σ{C} is a σ-algebra over S. Moreover, σ{C} ⊂ G holds for any σ-algebra G over S which includes C.

σ{C} is called the σ-algebra generated by C.
As for the set of real numbers R, the σ-algebra over R generated by {[a, b); a, b ∈ R, a < b} is called the Borel algebra over R. We denote by B(R) the Borel algebra over R.

Also, as for [−∞, ∞], the σ-algebra over [−∞, ∞] generated by {[a, b); a, b ∈ [−∞, ∞], a < b} is called the Borel algebra over [−∞, ∞]. We denote by B([−∞, ∞]) the Borel algebra over [−∞, ∞].

Definition 1.1.2 We say that μ is a measure on a measurable space (S, G), if the following three conditions are satisfied.
(1) μ is a mapping from G into [0, ∞].
(2) μ(∅) = 0.
(3) If A_n ∈ G, n = 1, 2, . . ., are mutually disjoint, i.e., A_n ∩ A_m = ∅, n ≠ m, then

    μ(⋃_{n=1}^∞ A_n) = Σ_{n=1}^∞ μ(A_n).
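As a concrete sanity check, the countable additivity of Definition 1.1.2 (3) can be verified exactly on a discrete measure. The following sketch uses rational arithmetic; the truncated geometric weights are an illustrative choice, not part of the text.

```python
from fractions import Fraction

# A sanity check of Definition 1.1.2 (3) (countable additivity) on a
# discrete measure, computed exactly with rationals.  The truncated
# geometric weights below are an illustrative choice, not from the text.
N = 20
weights = {k: Fraction(1, 2 ** k) for k in range(1, N + 1)}
total = sum(weights.values())                       # normalizing constant
P = lambda A: sum(weights[k] for k in A) / total    # probability of A

# Mutually disjoint sets A_n = {n}: P of the union equals the sum.
union = set(range(1, N + 1))
assert P(union) == sum(P({n}) for n in union) == 1

# Another disjoint decomposition: even and odd outcomes.
evens = {k for k in union if k % 2 == 0}
assert P(evens) + P(union - evens) == P(union)
```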

Definition 1.1.3 We say that (Ω, F, P) is a probability space, if the following three conditions are satisfied.
(1) Ω is a set.
(2) (Ω, F) is a measurable space, i.e., F is a σ-algebra over Ω.
(3) P is a measure on the measurable space (Ω, F) such that P(Ω) = 1.

We fix a probability space (Ω, F, P) in this chapter. We say that G is a sub-σ-algebra, if G is a σ-algebra over Ω such that G ⊂ F.

Definition 1.1.4 (1) Let (S_i, B_i), i = 1, 2, be measurable spaces. We say that a mapping f : S₁ → S₂ is B₁/B₂ measurable, if f^{−1}(A) ∈ B₁ for any A ∈ B₂.
(2) Let (S, B) be a measurable space. We say that X is an S-valued random variable, if X is an F/B measurable mapping from Ω into S.
(3) We say that X is a random variable, if X is an F/B(R) measurable mapping from Ω into R.

We often admit −∞ and ∞ as values of a random variable. In this case, we say that X : Ω → [−∞, ∞] is a random variable, if X is an F/B([−∞, ∞]) measurable mapping.
We define a sub-σ-algebra σ{X_λ; λ ∈ Λ} for a family of random variables X_λ : Ω → [−∞, ∞], λ ∈ Λ, by

    σ{X_λ; λ ∈ Λ} = σ{ ⋃_{λ∈Λ} {X_λ^{−1}(A); A ∈ B([−∞, ∞])} }.

σ{X_λ; λ ∈ Λ} is a sub-σ-algebra. We call this sub-σ-algebra σ{X_λ; λ ∈ Λ} the sub-σ-algebra generated by the family of random variables X_λ : Ω → [−∞, ∞], λ ∈ Λ. Here we remark that the set Λ can be an uncountable set.
Let X : Ω → [−∞, ∞] be a random variable and let G be a sub-σ-algebra. We say that a random variable X is G-measurable, if X^{−1}(A) ∈ G for any A ∈ B([−∞, ∞]). So a random variable X is G-measurable, if and only if σ{X} ⊂ G.
If a random variable X : Ω → [−∞, ∞] is non-negative, i.e., X(ω) ≥ 0, ω ∈ Ω, then the integral of X with respect to the measure P

    ∫_Ω X(ω)P(dω) ∈ [0, ∞]

is well-defined. We say that a random variable X : Ω → [−∞, ∞] is integrable, if

    ∫_Ω |X(ω)|P(dω) < ∞.

If a random variable X is integrable, then we can define the integral of X with respect to the measure P

    ∫_Ω X(ω)P(dω) ∈ R.

In probability theory, we use a special notation. We denote by E[X] the integral of a random variable X, i.e.,

    E[X] = ∫_Ω X(ω)P(dω).

We call E[X] the expectation of the random variable X. Also we denote E[1_A X] by E[X, A] for A ∈ F. Here 1_A : Ω → R is the random variable given by the following:

    1_A(ω) = 1, if ω ∈ A;  1_A(ω) = 0, if ω ∈ Ω \ A.

In the case that we handle more than one probability space or probability measure, we denote E[X] by E^P[X], and denote E[X, A] by E^P[X, A], in order to make clear under which probability measure we are working.
For a constant c ∈ R, we denote by c the random variable Y defined by Y(ω) = c, ω ∈ Ω.
We also use the following notation in probability theory. For a statement Q(ω) for ω ∈ Ω, we denote by {Q} the set {ω ∈ Ω; Q(ω)}. Also, if {Q} ∈ F, we denote P({Q}) by P(Q) and denote E[X, {Q}] by E[X, Q] for a random variable X. Also, we say that Q a.s. (Q is true almost surely), if P({Q}) = 1. For example, if Y is a random variable, {Y > y} is {ω ∈ Ω; Y(ω) > y}, and P(Y > y), E[X, Y > y] are P({Y > y}), E[X, {Y > y}], respectively. Also, Y > y a.s. means that P(Y > y) = P({ω ∈ Ω; Y(ω) > y}) = 1.
Expectations have the following properties.

Proposition 1.1.3 (1) E[1] = 1. If X, Y are non-negative random variables, and if a, b ∈ [0, ∞], then aX + bY is a non-negative random variable, and

    E[aX + bY] = aE[X] + bE[Y].

(2) (Linearity) If X, Y are integrable random variables, and a, b ∈ R, then aX + bY is an integrable random variable, and

    E[aX + bY] = aE[X] + bE[Y].

(3) If X is a non-negative random variable, and A_n ∈ F, n = 1, 2, . . ., are mutually disjoint, i.e., A_n ∩ A_m = ∅, n ≠ m, then

    E[X, ⋃_{n=1}^∞ A_n] = Σ_{n=1}^∞ E[X, A_n].
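On a finite probability space the notation E[X] and E[X, A] = E[1_A X], and the properties above, can be checked exactly. The fair-die space and the random variables below are illustrative choices, not from the text.

```python
from fractions import Fraction

# Exact check of E[X] and E[X, A] = E[1_A X], and of Proposition
# 1.1.3 (2)-(3), on a finite probability space: a fair die.  The
# space and the random variables here are illustrative choices.
omega = range(6)
P = {w: Fraction(1, 6) for w in omega}          # uniform probability measure
X = {w: Fraction(w + 1) for w in omega}         # the face shown, 1..6
Y = {w: Fraction((w + 1) ** 2) for w in omega}  # the square of the face

E = lambda Z: sum(Z[w] * P[w] for w in omega)       # E[Z]
E_on = lambda Z, A: sum(Z[w] * P[w] for w in A)     # E[Z, A] = E[1_A Z]

assert E(X) == Fraction(7, 2)                   # E[X] = 3.5 for a fair die

# (2) Linearity: E[2X + 3Y] = 2E[X] + 3E[Y].
aXbY = {w: 2 * X[w] + 3 * Y[w] for w in omega}
assert E(aXbY) == 2 * E(X) + 3 * E(Y)

# (3) Additivity over disjoint events: the sets A_n = {n} partition omega.
assert sum(E_on(X, {n}) for n in omega) == E(X)
```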

Proposition 1.1.4 Suppose that X_n, n = 1, 2, . . ., are non-negative random variables.
(1) E[Σ_{n=1}^∞ X_n] = Σ_{n=1}^∞ E[X_n].
(2) (Monotone convergence theorem) If X_{n+1}(ω) ≥ X_n(ω), ω ∈ Ω, n = 1, 2, . . ., then

    E[lim_{n→∞} X_n] = lim_{n→∞} E[X_n].

(3) (Fatou's lemma) E[liminf_{n→∞} X_n] ≤ liminf_{n→∞} E[X_n].

Proposition 1.1.5 (Dominated convergence theorem) Suppose that X_n, n = 1, 2, . . ., are random variables satisfying the following conditions.
(i) There exists a constant M such that |X_n(ω)| ≤ M, ω ∈ Ω.
(ii) There exists a random variable X_∞ such that X_n(ω) → X_∞(ω), n → ∞, for all ω ∈ Ω.
Then

    lim_{n→∞} E[X_n] = E[X_∞].

We say that a family of sets A is a π-system, if A₁ ∩ A₂ ∈ A for any A₁, A₂ ∈ A. We give the proof of the following proposition in Appendix 8.1.

Proposition 1.1.6 Let μ₁ and μ₂ be measures on a measurable space (S, B), and A be a subset of B. Suppose that A is a π-system. Suppose moreover that the following two conditions are satisfied.
(i) μ₁(S) = μ₂(S) < ∞.
(ii) μ₁(A) = μ₂(A) for any A ∈ A.
Then μ₁(B) = μ₂(B) for any B ∈ σ{A}.

Definition 1.1.5 (1) We say that a family of subsets {A_λ}_{λ∈Λ} of F is independent, if

    P(A₁ ∩ A₂ ∩ · · · ∩ A_n) = P(A₁) · · · P(A_n)

for any n ≥ 2, any distinct λ₁, . . ., λ_n ∈ Λ, and any A_j ∈ A_{λ_j}, j = 1, . . ., n.
(2) We say that random variables X_λ, λ ∈ Λ, are independent, if {σ{X_λ}}_{λ∈Λ} is independent.

Proposition 1.1.7 Let n ≥ 2.
(1) If a family of subsets {A₁, A₂, . . ., A_n} of F is independent and if A₁ is a π-system, then the family {σ{A₁}, A₂, . . ., A_n} is independent.
(2) If a family of subsets {A₁, A₂, . . ., A_n} of F is independent and if A_i, i = 1, . . ., n, are π-systems, then {σ{A_i}; i = 1, . . ., n} is independent.

Proof (1) Let us take C_m ∈ A_m, m = 2, . . ., n, and fix them. Let μ₁ and μ₂ be measures on (Ω, F) given by

    μ₁(B) = P(B ∩ ⋂_{m=2}^n C_m),   μ₂(B) = P(B) P(⋂_{m=2}^n C_m),   B ∈ F.

Then we see that μ₁(Ω) = μ₂(Ω) = P(⋂_{m=2}^n C_m) < ∞. Also, we see from the assumption on independence that μ₁(A) = μ₂(A) for any A ∈ A₁. So by Proposition 1.1.6 we see that μ₁(B) = μ₂(B) for any B ∈ σ{A₁}. This proves Assertion (1).
Assertion (2) is an easy consequence of Assertion (1).

The following is an easy fact. However, we often use it, and so we state it as a proposition.

Proposition 1.1.8 Let ρ_n : [0, ∞] → [0, n], n = 1, 2, . . ., be given by

    ρ_n(t) = (k − 1)/2ⁿ,  t ∈ [(k − 1)/2ⁿ, k/2ⁿ),  k = 1, . . ., n2ⁿ;
    ρ_n(t) = n,  t ∈ [n, ∞].

Then ρ_n(t) ≤ ρ_{n+1}(t) ≤ t, n ≥ 1, t ∈ [0, ∞], and ρ_n(t) ↑ t, n → ∞, for any t ∈ [0, ∞].

We have the following concerning independent random variables.

Proposition 1.1.9 Let n ≥ 2.
(1) If X_m, m = 1, . . ., n, are independent non-negative random variables, then

    E[∏_{m=1}^n X_m] = ∏_{m=1}^n E[X_m].

(2) If X_m, m = 1, . . ., n, are independent integrable random variables, then ∏_{m=1}^n X_m is integrable, and

    E[∏_{m=1}^n X_m] = ∏_{m=1}^n E[X_m].

Proof (1) For any r ≥ 1, we have

    E[∏_{m=1}^n ρ_r(X_m)] = Σ_{i₁,...,i_n=0}^{r2^r} (∏_{m=1}^n i_m/2^r) P(⋂_{m=1}^n {ρ_r(X_m) = i_m/2^r}).

Since {ρ_r(X_m) = i_m/2^r} ∈ σ{X_m}, we see from the definition of independence that

    E[∏_{m=1}^n ρ_r(X_m)] = Σ_{i₁,...,i_n=0}^{r2^r} ∏_{m=1}^n (i_m/2^r) P(ρ_r(X_m) = i_m/2^r) = ∏_{m=1}^n E[ρ_r(X_m)].

Letting r → ∞, we have Assertion (1).
Assertion (2) is an easy consequence of Assertion (1).

For real numbers a, b, we use the following notation:

    a ∨ b = max{a, b},   a ∧ b = min{a, b}.
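The dyadic approximation ρ_n of Proposition 1.1.8 and the product rule of Proposition 1.1.9 can both be illustrated numerically. This is a sketch, not a proof: the distributions of X and Y below are arbitrary illustrative choices, and the Monte Carlo comparison uses a loose tolerance.

```python
import math
import random

def rho(n, t):
    """The dyadic lower approximation of Proposition 1.1.8:
    rho_n(t) = floor(2^n t)/2^n for t < n, capped at n for t >= n."""
    if t >= n:
        return n
    return math.floor(t * 2 ** n) / 2 ** n

# Monotone approximation from below, as the proposition states.
t = math.pi
vals = [rho(n, t) for n in range(1, 25)]
assert all(v <= w <= t for v, w in zip(vals, vals[1:]))
assert abs(vals[-1] - t) < 1e-6

# Proposition 1.1.9 (illustration, not a proof): for independent
# non-negative X, Y we expect E[XY] = E[X]E[Y].  Monte Carlo check
# with a loose tolerance, since this is only a random simulation.
random.seed(0)
samples = [(random.expovariate(1.0), random.uniform(0, 2))
           for _ in range(200_000)]
mean = lambda xs: sum(xs) / len(xs)
exy = mean([x * y for x, y in samples])
ex = mean([x for x, _ in samples])
ey = mean([y for _, y in samples])
assert abs(exy - ex * ey) < 0.05
```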

Proposition 1.1.10 Suppose that X_n, n = 1, 2, . . ., are non-negative random variables such that P(X_n < ∞) = 1, n ≥ 1. If

    Σ_{n=1}^∞ E[X_n ∧ 1] < ∞,

then

    Σ_{n=1}^∞ X_n < ∞ a.s.

In particular, X_n → 0, n → ∞, a.s.

Proof By Proposition 1.1.4, we see that

    E[Σ_{n=1}^∞ (X_n ∧ 1)] = Σ_{n=1}^∞ E[X_n ∧ 1] < ∞.

So we see that Σ_{n=1}^∞ (X_n ∧ 1) < ∞ a.s. Therefore X_n ≥ 1 holds only for finitely many n. So we see that Σ_{n=1}^∞ X_n < ∞ a.s.
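A minimal simulation sketch of the proposition, under an assumed example family not taken from the text: X_n equals n with probability 2^{-n} and 0 otherwise, so that Σ E[X_n ∧ 1] = Σ 2^{-n} < ∞ and the sums Σ X_n should be finite on every sampled path.

```python
import random

# Illustration of Proposition 1.1.10 (not its proof): independent
# X_n = n with probability 2^{-n}, else 0, so E[X_n ∧ 1] = 2^{-n}
# and the series of these expectations sums to 1.  The conclusion is
# that sum_n X_n < infinity a.s.; in simulation, every sampled path
# has only finitely many nonzero terms and a small total.
random.seed(1)

def sample_path_sum(n_terms=60):
    return sum(n if random.random() < 2.0 ** -n else 0
               for n in range(1, n_terms + 1))

sums = [sample_path_sum() for _ in range(10_000)]
assert all(s < float("inf") for s in sums)
# Late terms almost never fire, so the path sums stay small.
assert max(sums) < 100
```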

We recall the notions of convergence in probability theory.

Definition 1.1.6 Let X be a random variable, and X_n, n = 1, 2, . . ., be a sequence of random variables.
(1) We say that the sequence of random variables X_n, n = 1, 2, . . ., converges to a random variable X in probability, if P(|X| < ∞) = 1 and if

    P(|X − X_n| > ε) → 0,  n → ∞,

for any ε > 0.
(2) We say that the sequence of random variables X_n, n = 1, 2, . . ., converges to a random variable X with probability one, if P(|X| < ∞) = 1 and if

    P(lim_{n→∞} |X − X_n| = 0) = 1.

Proposition 1.1.11 Let X be a random variable, X_n, n = 1, 2, . . ., be a sequence of random variables, and p ∈ [1, ∞). Then the following two conditions are equivalent.
(1) The sequence of random variables X_n, n = 1, 2, . . ., converges to the random variable X in probability.
(2) E[|X − X_n|^p ∧ 1] → 0, n → ∞.

Proof Assume that Assertion (1) holds. Let us take an arbitrary ε ∈ (0, 1]. Then we have

    E[|X − X_n|^p ∧ 1] = E[|X − X_n|^p ∧ 1, |X − X_n| > ε] + E[|X − X_n|^p ∧ 1, |X − X_n| ≤ ε]
    ≤ P(|X − X_n| > ε) + ε^p.

Therefore we see that

    limsup_{n→∞} E[|X − X_n|^p ∧ 1] ≤ ε^p.

Since ε ∈ (0, 1] is arbitrary, we obtain Assertion (2).
Let us assume Assertion (2). Then for any ε ∈ (0, 1], we have

    ε^p P(|X − X_n| > ε) ≤ E[|X − X_n|^p ∧ 1, |X − X_n| > ε] ≤ E[|X − X_n|^p ∧ 1] → 0,  n → ∞.

Therefore we obtain Assertion (1).

Proposition 1.1.12 Let X, Y be random variables, and let X_n, n = 1, 2, . . ., and Y_n, n = 1, 2, . . ., be sequences of random variables. Suppose that the sequence X_n, n = 1, 2, . . ., converges to X in probability, and that the sequence Y_n, n = 1, 2, . . ., converges to Y in probability. Then we have the following.
(1) For any a, b ∈ R, the sequence of random variables aX_n + bY_n, n = 1, 2, . . ., converges to aX + bY in probability.
(2) The sequence of random variables X_n Y_n, n = 1, 2, . . ., converges to XY in probability.

Proof Since the proof of Assertion (1) is easy, we prove Assertion (2) only. Note that

    |XY − X_n Y_n| ≤ |X||Y − Y_n| + |Y||X − X_n| + |X − X_n||Y − Y_n|
    ≤ |X||Y − Y_n| + |Y||X − X_n| + |X − X_n|² + |Y − Y_n|².

Since (a + b) ∧ 1 ≤ (a ∧ 1) + (b ∧ 1) for any a, b ≥ 0, we see that for any m ≥ 1

    E[|XY − X_n Y_n| ∧ 1]
    = E[|XY − X_n Y_n| ∧ 1, |X| + |Y| > m] + E[|XY − X_n Y_n| ∧ 1, |X| + |Y| ≤ m]
    ≤ P(|X| + |Y| > m) + E[(|X||Y − Y_n| ∧ 1) + (|Y||X − X_n| ∧ 1), |X| + |Y| ≤ m]
      + E[(|X − X_n|² ∧ 1) + (|Y − Y_n|² ∧ 1), |X| + |Y| ≤ m]
    ≤ P(|X| + |Y| > m) + mE[|Y − Y_n| ∧ 1] + mE[|X − X_n| ∧ 1] + E[|X − X_n|² ∧ 1] + E[|Y − Y_n|² ∧ 1].

Therefore we obtain

    limsup_{n→∞} E[|XY − X_n Y_n| ∧ 1] ≤ P(|X| + |Y| > m).

Since m ≥ 1 is arbitrary, we see that E[|XY − X_n Y_n| ∧ 1] → 0, n → ∞. This implies Assertion (2) by the previous proposition.

1.2 L^p-Space

For any sub-σ-algebras G₁ and G₂, we denote σ{G₁ ∪ G₂} by G₁ ∨ G₂.

Definition 1.2.1 For any sub-σ-algebra G, we define L^p_G = L^p(Ω, G, P), p ∈ [1, ∞), to be the set of G-measurable random variables X such that E[|X|^p] < ∞. We denote L^p_F by L^p for simplicity of notation.

Proposition 1.2.1 (1) If X, Y ∈ L^p_G and a ∈ R, then X + Y and aX belong to L^p_G. Therefore L^p_G is a real vector space.
(2) Suppose that X_n ∈ L^p_G, n = 1, 2, . . ., and that E[|X_n − X_m|^p] → 0, n, m → ∞. Then there exists X_∞ ∈ L^p_G satisfying the following.
(i) There exists a subsequence {n_k}_{k=1}^∞ such that the sequence of random variables X_{n_k}, k = 1, 2, . . ., converges to X_∞ with probability one.
(ii) E[|X_∞ − X_n|^p] → 0, n → ∞.

Proof (1) It is obvious that aX ∈ L^p_G. Also we see that

    E[|X + Y|^p] ≤ E[(2 max{|X|, |Y|})^p] ≤ 2^p E[max{|X|^p, |Y|^p}] ≤ 2^p (E[|X|^p] + E[|Y|^p]) < ∞.

So we see that X + Y ∈ L^p_G.
(2) From the assumption, there is an m_k ≥ 1 for each k ≥ 1 such that

    E[|X_n − X_m|^p] ≤ 2^{−(p+1)k},  n, m ≥ m_k.

Let n_k = k + m₁ + · · · + m_k. Then we see that

    E[|X_{n_{k+1}} − X_{n_k}|^p] ≤ 2^{−(p+1)k},  k = 1, 2, . . . .

So we see that

    E[Σ_{k=1}^∞ 2^{kp} |X_{n_{k+1}} − X_{n_k}|^p] ≤ Σ_{k=1}^∞ 2^{kp} E[|X_{n_{k+1}} − X_{n_k}|^p] ≤ 1.

Let

    Ω₀ = {ω ∈ Ω; Σ_{k=1}^∞ 2^{kp} |X_{n_{k+1}}(ω) − X_{n_k}(ω)|^p < ∞}.

Then we see that Ω₀ ∈ G and that P(Ω₀) = 1. Moreover, for any ω ∈ Ω₀ we have

    Σ_{ℓ=1}^∞ |X_{n_{ℓ+1}}(ω) − X_{n_ℓ}(ω)| ≤ Σ_{ℓ=1}^∞ 2^{−ℓ} (Σ_{k=1}^∞ 2^{kp} |X_{n_{k+1}}(ω) − X_{n_k}(ω)|^p)^{1/p} < ∞.

Therefore we see that {X_{n_k}(ω)}_{k=1}^∞ is a Cauchy sequence for any ω ∈ Ω₀. Let X_∞ be given by

    X_∞(ω) = lim_{k→∞} X_{n_k}(ω), ω ∈ Ω₀;  X_∞(ω) = 0, otherwise.

Then it is obvious that X_∞ is G-measurable, and X_{n_k}, k = 1, 2, . . ., converges to X_∞ with probability one. Also, we see by Fatou's lemma that for any n ≥ n_k

    E[|X_∞ − X_n|^p] = E[lim_{ℓ→∞} |X_{n_ℓ} − X_n|^p] ≤ liminf_{ℓ→∞} E[|X_{n_ℓ} − X_n|^p] ≤ 2^{−(p+1)k}.

Therefore we see that E[|X_∞ − X_n|^p] → 0, n → ∞.

Let p ∈ [1, ∞), and let X_∞ ∈ L^p and X_n ∈ L^p, n = 1, 2, . . . . We say that the sequence of random variables X_n, n = 1, 2, . . ., converges to X_∞ in the L^p sense, if E[|X_∞ − X_n|^p] → 0, n → ∞.

Proposition 1.2.2 Suppose that X, Y ∈ L²_G. Then we have the following.
(1) |E[XY]| ≤ E[|XY|] ≤ E[X²]^{1/2} E[Y²]^{1/2}.
(2) E[(X + Y)²]^{1/2} ≤ E[X²]^{1/2} + E[Y²]^{1/2}.
(3) E[(X − Y)²] + E[(X + Y)²] = 2(E[X²] + E[Y²]).

Proof (1) Suppose that E[X²] = 0. Then we see that X = 0 a.s. and that XY = 0 a.s. So we obtain E[|XY|] = 0. This implies our assertion. So we may assume that E[X²] > 0. Note that E[|XY|] ≤ E[X² + Y²] < ∞. Let t = E[|XY|]/E[X²]. Then we see that

    E[X²]E[Y²] − E[|XY|]² = E[X²](E[X²]t² − 2E[|XY|]t + E[Y²]) = E[X²]E[(t|X| − |Y|)²] ≥ 0.

This implies Assertion (1). Also, we see that

    E[(X + Y)²] = E[X²] + 2E[XY] + E[Y²] ≤ E[X²] + 2E[X²]^{1/2}E[Y²]^{1/2} + E[Y²] = (E[X²]^{1/2} + E[Y²]^{1/2})².

This implies Assertion (2). Assertion (3) follows from an easy computation.

Proposition 1.2.3 Let X be a random variable such that E[X²] < ∞, and let G be a sub-σ-algebra. Then there exists Y ∈ L²_G satisfying the following.
(1) E[|X − Y|²] = inf{E[|X − Z|²]; Z ∈ L²_G}.
(2) E[Y, A] = E[X, A] for any A ∈ G.

Proof Let c = inf{E[|X − Z|²]; Z ∈ L²_G}. Then there are Z_n ∈ L²_G, n = 1, 2, . . ., such that E[|X − Z_n|²] → c, n → ∞. By Proposition 1.2.2 (3), we see that

    0 ≤ E[|Z_n − Z_m|²] = E[((X − Z_n) − (X − Z_m))²]
    = 2(E[(X − Z_n)²] + E[(X − Z_m)²]) − E[((X − Z_n) + (X − Z_m))²]
    = 2(E[(X − Z_n)²] + E[(X − Z_m)²]) − 4E[(X − (1/2)(Z_n + Z_m))²]
    ≤ 2(E[(X − Z_n)²] + E[(X − Z_m)²]) − 4c.

Here we use the fact that (1/2)(Z_n + Z_m) ∈ L²_G. Therefore we see that E[|Z_n − Z_m|²] → 0, n, m → ∞. So by Proposition 1.2.1 we see that there exists Y ∈ L²_G such that E[|Y − Z_n|²] → 0, n → ∞. Then we see that

    E[|X − Y|²] = E[((X − Z_n) − (Y − Z_n))²] ≤ (E[|X − Z_n|²]^{1/2} + E[|Y − Z_n|²]^{1/2})² → c,  n → ∞.

This implies that E[|X − Y|²] ≤ c. By the definition of c we see that E[|X − Y|²] ≥ c. So we obtain E[|X − Y|²] = c.
Let A ∈ G. Then 1_A ∈ L²_G. Let f : R → R be the function given by f(t) = E[(X − (Y − t1_A))²]. By Assertion (1), we see that f(t) has a minimum at t = 0. So we see that

    0 = f′(0) = 2E[(X − Y)1_A] = 2(E[X, A] − E[Y, A]).

These imply our assertions.

Remark Let us define an equivalence relation ∼ in L² by

    X ∼ Y ⇔ X = Y a.s.,  X, Y ∈ L².

Then the quotient space L²(Ω, F, P) = L²/∼ becomes a Hilbert space.
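On a finite probability space the random variable Y of Proposition 1.2.3 can be computed exactly when G is generated by a partition: Y is the P-weighted average of X over each block. The space, X, and the partition below are illustrative assumptions; the sketch checks both defining properties.

```python
from fractions import Fraction

# Exact computation of the Y of Proposition 1.2.3 on a finite space,
# assuming G is generated by a partition (illustrative setup).
omega = range(6)
P = {w: Fraction(1, 6) for w in omega}
X = {w: Fraction((w + 1) ** 2) for w in omega}     # X(w) = (w+1)^2
blocks = [{0, 2, 4}, {1, 3, 5}]                     # partition generating G

def cond_exp(X, blocks):
    """Block-wise P-weighted average of X: constant on each block."""
    Y = {}
    for B in blocks:
        avg = sum(X[w] * P[w] for w in B) / sum(P[w] for w in B)
        for w in B:
            Y[w] = avg
    return Y

Y = cond_exp(X, blocks)
E_on = lambda Z, A: sum(Z[w] * P[w] for w in A)

# Property (2): E[Y, A] = E[X, A] for the generators A of G.
for B in blocks:
    assert E_on(Y, B) == E_on(X, B)

# Property (1): no block-constant Z has smaller L^2 distance to X.
err = lambda Z: sum((X[w] - Z[w]) ** 2 * P[w] for w in omega)
for c0 in range(40):
    for c1 in range(40):
        Z = {w: Fraction(c0 if w in blocks[0] else c1) for w in omega}
        assert err(Y) <= err(Z)
```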

1.3 Conditional Expectation

Let (Ω, F, P) be a probability space. Then we have the following.

Theorem 1.3.1 Let G be a sub-σ-algebra, and X be a non-negative random variable. Then there exists a non-negative random variable Y satisfying the following two conditions.
(1) Y is G-measurable.
(2) E[Y, B] = E[X, B] for any B ∈ G.
Moreover, if Y′ is another non-negative random variable satisfying Conditions (1) and (2), then Y = Y′ a.s.

Proof Let X_n = X ∧ n, n = 0, 1, 2, . . . . Since E[X_n²] < ∞, by Proposition 1.2.3 there are Y_n ∈ L²_G, n = 1, 2, . . ., such that E[Y_n, B] = E[X_n, B] for any B ∈ G. Let Y₀ = 0. Then Y₀ ∈ L²_G and E[Y₀, B] = E[X₀, B] = 0 for any B ∈ G.
Let A_{n,m} = {Y_n − Y_m < 0} for n > m ≥ 0. Then we see that A_{n,m} ∈ G. Since X_n ≥ X_m, we see that

    E[Y_n − Y_m, A_{n,m}] = E[X_n − X_m, A_{n,m}] ≥ 0.

This shows that P(Y_n − Y_m < 0) = 0, n > m ≥ 0, and so P(Y_n ≥ Y_m) = 1. Let Ω₀ = ⋂_{n=0}^∞ {Y_{n+1} ≥ Y_n} and let Y = lim_{n→∞} 1_{Ω₀} Y_n. Since X_n, n = 1, 2, . . ., and 1_{Ω₀} Y_n, n = 1, 2, . . ., are non-decreasing sequences of non-negative random variables, we see that for B ∈ G

    E[Y, B] = lim_{n→∞} E[1_{Ω₀} Y_n, B] = lim_{n→∞} E[Y_n, B] = lim_{n→∞} E[X_n, B] = E[X, B].

This implies our first assertion.
Suppose that Y′ is a non-negative random variable satisfying Conditions (1) and (2). Let A_n = {Y′ ≥ Y + 1/n, Y ≤ n}, n ≥ 1. Then we see that A_n ∈ G and E[Y, A_n] < ∞. Therefore we see that

    E[Y, A_n] + (1/n)P(A_n) ≤ E[Y, A_n] + E[Y′ − Y, A_n] = E[Y′, A_n] = E[X, A_n] = E[Y, A_n].

This implies that P(A_n) = 0. Note that

    ⋃_{n=1}^∞ A_n = {Y′ > Y}.

So we see that P(Y′ > Y) = 0. Since we can prove P(Y > Y′) = 0 similarly, we obtain Y = Y′ a.s.

We denote by E[X|G] the random variable Y in the above theorem. We also prove the following.

Corollary 1.3.1 Let G be a sub-σ-algebra, and X be an integrable random variable. Then there exists an integrable random variable Y satisfying the following.
(1) Y is G-measurable.
(2) E[Y, B] = E[X, B] for any B ∈ G.
Moreover, if Y′ is another integrable random variable satisfying Conditions (1) and (2), then Y = Y′ a.s.

Proof Let X⁺ and X⁻ be the non-negative random variables given by X⁺ = max{X, 0} and X⁻ = max{−X, 0}. Then we see that E[X⁺] < ∞ and E[X⁻] < ∞. Since

    E[E[X⁺|G] + E[X⁻|G]] = E[E[X⁺|G] + E[X⁻|G], Ω] = E[X⁺ + X⁻, Ω] < ∞,

we see that E[X⁺|G] < ∞ and E[X⁻|G] < ∞ a.s. Therefore, letting Y = E[X⁺|G] − E[X⁻|G], we see that Y is an integrable random variable satisfying Conditions (1) and (2).
Suppose that Y′ is an integrable random variable satisfying Conditions (1) and (2). Then we see that {Y > Y′} ∈ G, and that

    E[Y − Y′, {Y > Y′}] = E[Y, {Y > Y′}] − E[Y′, {Y > Y′}] = E[X, {Y > Y′}] − E[X, {Y > Y′}] = 0.

This implies that P(Y > Y′) = 0. Similarly, we see that P(Y′ > Y) = 0. So we obtain P(Y = Y′) = 1.

We denote also by E[X|G] the integrable random variable Y in Corollary 1.3.1. Note that E[X|G] is determined only almost surely, in either case that X is a non-negative random variable or X is an integrable random variable. We call E[X|G] the conditional expectation of X given the sub-σ-algebra G. For any A ∈ F, we denote E[1_A|G] by P(A|G) and we call this the conditional probability of the event A given the sub-σ-algebra G.

Proposition 1.3.1 Let X and Y be non-negative random variables, a ≥ 0, and G and H be sub-σ-algebras. Then we have the following.
(1) E[X|{∅, Ω}] = E[X].
(2) E[aX|G] = aE[X|G] a.s. and E[X + Y|G] = E[X|G] + E[Y|G] a.s.
(3) If X ≤ Y a.s., then E[X|G] ≤ E[Y|G] a.s.
(4) If H ⊂ G, then E[E[X|G]|H] = E[X|H] a.s.

Proof We leave the proof of Assertions (1), (2) and (3) as exercises to the reader, since they are easy. We only prove Assertion (4). It is obvious that E[E[X|G]|H] is H-measurable. Suppose that A ∈ H. Then, since A ∈ G, we see that

    E[E[E[X|G]|H], A] = E[E[X|G], A] = E[X, A].

So we have Assertion (4) by Theorem 1.3.1.

Proposition 1.3.2 Let X_n, n = 1, 2, . . ., be non-negative random variables, and G be a sub-σ-algebra. Then we have the following.
(1) Suppose that X_n ≤ X_{n+1} a.s. for any n = 1, 2, . . . . Then

    E[lim_{n→∞} X_n|G] = lim_{n→∞} E[X_n|G] a.s.

(2) E[liminf_{n→∞} X_n|G] ≤ liminf_{n→∞} E[X_n|G] a.s.

Proof (1) By the previous proposition (3), we see that E[X_n|G] ≤ E[X_{n+1}|G] a.s., n = 1, 2, . . . . Therefore we see that for any B ∈ G

    E[lim_{n→∞} X_n, B] = lim_{n→∞} E[X_n, B] = lim_{n→∞} E[E[X_n|G], B] = E[lim_{n→∞} E[X_n|G], B]

by the monotone convergence theorem. This shows Assertion (1).


(2) Let Y_n = inf_{k≥n} X_k, n = 1, 2, . . .. Then we see that Y_n ≤ Y_{n+1}, n = 1, 2, . . .. Since X_n ≥ Y_n, we see that E[X_n|G] ≥ E[Y_n|G] a.s. Therefore we have

E[lim inf_{n→∞} X_n | G] = E[lim_{n→∞} Y_n | G] = lim_{n→∞} E[Y_n|G] ≤ lim inf_{n→∞} E[X_n|G] a.s.

This shows Assertion (2).

Proposition 1.3.3 Let X and Y be non-negative random variables, and G be a sub-σ-algebra. Assume moreover that Y is G-measurable. Then E[YX|G] = YE[X|G] a.s.

Proof Suppose that A, B ∈ G. Then we see that 1_A E[X|G] is G-measurable. Since

E[1_A E[X|G], B] = E[E[X|G], A ∩ B] = E[X, A ∩ B] = E[1_A X, B],

we see that E[1_A X|G] = 1_A E[X|G] a.s. Let ρ_n : [0, ∞] → [0, n], n = 1, 2, . . ., be as in Proposition 1.1.8. Then ρ_n(Y) is G-measurable, and ρ_n(Y) ↑ Y, n → ∞. Since

ρ_n(Y) = ∑_{k=1}^{n2^n} ((k−1)/2^n) 1_{Y^{−1}([(k−1)/2^n, k/2^n))} + n 1_{Y^{−1}([n,∞])},

we easily see that E[ρ_n(Y)X|G] = ρ_n(Y)E[X|G]. Letting n → ∞, we have our assertion by Proposition 1.3.2.

The following is an easy consequence of Proposition 1.3.1.

Proposition 1.3.4 Let X and Y be integrable random variables, a ∈ R, and G and H be sub-σ-algebras. Then we have the following.
(1) E[X|{∅, Ω}] = E[X].
(2) E[aX|G] = aE[X|G] a.s. and E[X + Y|G] = E[X|G] + E[Y|G] a.s.
(3) If X ≤ Y a.s., then E[X|G] ≤ E[Y|G] a.s.
(4) If H ⊂ G, then E[E[X|G]|H] = E[X|H] a.s.

Proposition 1.3.5 Let X_∞, X_n, n = 1, 2, . . ., be integrable random variables, and G be a sub-σ-algebra. If X_n, n = 1, 2, . . ., converges to X_∞ as n → ∞ in L1-sense, then E[X_n|G], n = 1, 2, . . ., converges to E[X_∞|G] as n → ∞ in L1-sense.

Proof Since −|X_∞ − X_n| ≤ X_∞ − X_n ≤ |X_∞ − X_n|, we see by Proposition 1.3.4 that

−E[|X_∞ − X_n| | G] ≤ E[X_∞ − X_n | G] ≤ E[|X_∞ − X_n| | G] a.s.

So we see that


|E[X_∞|G] − E[X_n|G]| ≤ E[|X_∞ − X_n| | G] a.s.

Therefore we see that

E[|E[X_∞|G] − E[X_n|G]|] ≤ E[|X_∞ − X_n|] → 0, n → ∞.

So we obtain our assertion.

The following is an easy consequence of Proposition 1.3.3.

Proposition 1.3.6 Let X and Y be random variables, and G be a sub-σ-algebra. Suppose that Y is G-measurable. If X and XY are integrable, then E[YX|G] = YE[X|G] a.s.

Proposition 1.3.7 Let (S_i, B_i), i = 1, 2, be measurable spaces, and X_i, i = 1, 2, be S_i-valued random variables, i.e., X_i is an F/B_i-measurable map from Ω into S_i for each i = 1, 2. Let G be a sub-σ-algebra, and f : S_1 × S_2 → [−∞, ∞] be a B_1 ⊗ B_2-measurable function such that f(s_1, s_2) ≥ 0, (s_1, s_2) ∈ S_1 × S_2. Suppose that σ{X_2} and G ∨ σ{X_1} are independent. Then

E[f(X_1, X_2)|G] = E[h_f(X_1)|G].

Here h_f : S_1 → [0, ∞] is a function given by h_f(x) = E[f(x, X_2)], x ∈ S_1. Note that h_f : S_1 → [0, ∞] is B_1-measurable by Fubini's theorem.

Proof Step 1. We show our assertion in the case that f = 1_B, B ∈ B_1 ⊗ B_2. Here B_1 ⊗ B_2 is the σ-algebra over S_1 × S_2 given by B_1 ⊗ B_2 = σ{A_1 × A_2; A_1 ∈ B_1, A_2 ∈ B_2}. Let G ∈ G and let μ_1 and μ_2 be measures on (S_1 × S_2, B_1 ⊗ B_2) given by

μ_1(B) = E[1_B(X_1, X_2) 1_G], μ_2(B) = E[h_{1_B}(X_1) 1_G], B ∈ B_1 ⊗ B_2.

Now let A ⊂ B_1 ⊗ B_2 be given by A = {A_1 × A_2; A_1 ∈ B_1, A_2 ∈ B_2}. It is easy to see that A is a π-system. For any A_1 ∈ B_1 and A_2 ∈ B_2 we see that

h_{1_{A_1×A_2}}(x) = E[1_{A_1×A_2}(x, X_2)] = 1_{A_1}(x) E[1_{A_2}(X_2)].


So by Proposition 1.1.9 we see that

μ_1(A_1 × A_2) = E[1_{A_2}(X_2) 1_{A_1}(X_1) 1_G] = E[1_{A_2}(X_2)] E[1_{A_1}(X_1) 1_G] = E[E[1_{A_2}(X_2)] 1_{A_1}(X_1) 1_G] = E[h_{1_{A_1×A_2}}(X_1) 1_G] = μ_2(A_1 × A_2).

Also, we see that μ_1(S_1 × S_2) = P(G) and μ_2(S_1 × S_2) = P(G). Therefore by Proposition 1.1.6 we see that

μ_1(B) = μ_2(B), B ∈ σ{A} = B_1 ⊗ B_2.

This shows that our assertion is valid in the case that f = 1_B, B ∈ B_1 ⊗ B_2.

Step 2. General Case. Let f_n = ρ_n ∘ f, n ≥ 1. Then by Step 1 we easily see that E[ρ_n(f(X_1, X_2))|G] = E[h_{f_n}(X_1)|G]. Letting n → ∞, we have our assertion.

Proposition 1.3.8 Let X be a non-negative random variable, and let G and H be sub-σ-algebras. Suppose that σ{X} ∨ G and H are independent. Then E[X|G ∨ H] = E[X|G] a.s.

Proof We show our assertion in the case that X is bounded, first. Let A = {G ∩ H; G ∈ G, H ∈ H}. Then A is a π-system. Let μ_1 and μ_2 be measures on G ∨ H given by

μ_1(B) = E[X, B], μ_2(B) = E[E[X|G], B], B ∈ G ∨ H.

Then for any G ∈ G and H ∈ H we see by Proposition 1.1.9 that

μ_1(G ∩ H) = E[X 1_G 1_H] = E[X 1_G] E[1_H] = E[E[X|G] 1_G 1_H] = μ_2(G ∩ H).

So we see that

μ_1(B) = μ_2(B), B ∈ σ{A} = G ∨ H.

Therefore by the definition of conditional expectations we see that E[X|G ∨ H] = E[X|G] a.s. For a general non-negative random variable X we see that


E[X|G ∨ H] = lim_{n→∞} E[X ∧ n|G ∨ H] = lim_{n→∞} E[X ∧ n|G] = E[X|G] a.s.

So we have our assertion.

If we let G = {∅, Ω} in the previous proposition, we have the following corollary.

Corollary 1.3.2 Let X be a non-negative random variable, and H be a sub-σ-algebra. If σ{X} and H are independent, then E[X|H] = E[X] a.s.

Remark The conditional expectation E[X|G] is determined only almost surely. So equalities and inequalities for conditional expectations hold only almost surely. However, we omit "a.s." from now on throughout the book for simplicity of notation.

1.4 Jensen's Inequality for Conditional Expectations

Definition 1.4.1 Let −∞ ≤ a < b ≤ ∞. We say that ϕ : (a, b) → R is a convex function, if ϕ(λx + (1 − λ)y) ≤ λϕ(x) + (1 − λ)ϕ(y) for any x, y ∈ (a, b) and λ ∈ [0, 1].

Proposition 1.4.1 Let −∞ ≤ a < b ≤ ∞, and ϕ : (a, b) → R be a convex function. Then we have the following.
(1) ϕ : (a, b) → R is a continuous function.
(2) For any x ∈ (a, b), there is a c ∈ R such that

ϕ(y) ≥ ϕ(x) + c(y − x), y ∈ (a, b).

We give the proof of Proposition 1.4.1 in Appendix 8.2.

Corollary 1.4.1 Let ϕ : R → R be a convex function. Then there are (a_n, b_n) ∈ R × R, n = 1, 2, . . ., such that

ϕ(x) = sup_n (a_n x + b_n), x ∈ R.

Proof By Proposition 1.4.1, we see that for any z ∈ Q there is a c_z ∈ R such that

ϕ(x) ≥ ϕ(z) + c_z(x − z), x ∈ R.

Let ψ : R → R be given by


ψ(x) = sup{ϕ(z) + c_z(x − z); z ∈ Q}, x ∈ R.

Then it is easy to see that ψ is a convex function, and that

ψ(x) = ϕ(x), x ∈ Q.

Since both ϕ and ψ are continuous, we see that ψ = ϕ.

Proposition 1.4.2 (Jensen's inequality) Let X be an integrable random variable, G be a sub-σ-algebra, and ϕ : R → R be a convex function.
(1) If E[|ϕ(X)|] < ∞, then E[ϕ(X)|G] ≥ ϕ(E[X|G]).
(2) If ϕ(x) ≥ 0, x ∈ R, then E[ϕ(X)|G] ≥ ϕ(E[X|G]).

Proof Let (a_n, b_n) ∈ R × R, n = 1, 2, . . ., be as in Corollary 1.4.1. Then

E[ϕ(X)|G] ≥ E[a_n X + b_n|G] = a_n E[X|G] + b_n.

Taking sup_n, we have our assertion.

Let p ∈ [1, ∞). Then ϕ(x) = |x|^p, x ∈ R, is a convex function, and so we have the following by Jensen's inequality.

Corollary 1.4.2 Let p ∈ [1, ∞), X be a non-negative random variable, and G be a sub-σ-algebra. Then E[X^p|G] ≥ E[X|G]^p. In particular, E[X^p] ≥ E[X]^p.

Proof By Proposition 1.4.2, we see that E[(X ∧ n)^p|G] ≥ E[X ∧ n|G]^p for any n ≥ 1. So letting n → ∞, we have the first part. Letting G = {∅, Ω}, we have the latter part.


1.5 Some Remarks

Proposition 1.5.1 (Hölder's inequality) Let 1 < p, q < ∞ be such that 1/p + 1/q = 1, and let X and Y be non-negative random variables. Then

E[XY] ≤ E[X^p]^{1/p} E[Y^q]^{1/q}.

Proof Our assertion is obvious if E[Y^q] = ∞. So we may assume that E[Y^q] < ∞.
Step 1. The case that Y(ω) > 0 for all ω ∈ Ω and that E[Y^q] = 1. Let Q(A) = E[Y^q, A], A ∈ F. Then Q is a probability measure on (Ω, F). So by Corollary 1.4.2 we see that

E[XY] = E_Q[XY^{1−q}] ≤ E_Q[X^p Y^{p(1−q)}]^{1/p} = E[X^p]^{1/p} = E[X^p]^{1/p} E[Y^q]^{1/q}.

This implies our assertion.
Step 2. General case. Let Y_n = E[(Y + 1/n)^q]^{−1/q} (Y + 1/n), n = 1, 2, . . .. Then we see that Y_n(ω) > 0 for all ω ∈ Ω and that E[Y_n^q] = 1. So we see by the result in Step 1 that

E[XY] ≤ E[X(Y + 1/n)] = E[XY_n] E[(Y + 1/n)^q]^{1/q} ≤ E[X^p]^{1/p} E[(Y + 1/n)^q]^{1/q}.

So letting n → ∞, we have our assertion.

Proposition 1.5.2 Let {X_λ}_{λ∈Λ} be a family of random variables satisfying the following condition.
(Condition) For any λ_1, λ_2 ∈ Λ, there is a λ_3 ∈ Λ such that X_{λ_1} ∨ X_{λ_2} ≤ X_{λ_3} a.s.
Then there exists a sequence {λ_n}_{n=1}^∞ ⊂ Λ satisfying the following two conditions.
(1) X_{λ_n} ≤ X_{λ_{n+1}} a.s., n = 1, 2, . . ..
(2) Let X̃ = lim_{n→∞} X_{λ_n}. Then X_λ ≤ X̃ a.s. for any λ ∈ Λ.

Proof Let Z_λ = arctan X_λ, λ ∈ Λ. Here we regard tan : [−π/2, π/2] → [−∞, ∞] as a bijective increasing function, and we regard arctan as its inverse function. So arctan is an increasing function from [−∞, ∞] onto [−π/2, π/2]. Then we see that a = sup{E[Z_λ]; λ ∈ Λ} ≤ π/2, and so there is a sequence {λ̃_n}_{n=1}^∞ ⊂ Λ such that E[Z_{λ̃_n}] → a, n → ∞. Let λ_1 = λ̃_1. Also let us take λ_n ∈ Λ, n = 2, 3, . . ., inductively so that X_{λ̃_n} ∨ X_{λ_{n−1}} ≤ X_{λ_n} a.s. Then it is obvious that

Z_{λ̃_n} ∨ Z_{λ_{n−1}} ≤ Z_{λ_n} a.s., n = 2, 3, . . . .


Let Z̃ = lim_{n→∞} Z_{λ_n}. Then we see that Z_{λ_n} ↑ Z̃, n → ∞, a.s. So we see that E[Z̃] = a. Also, the following claim holds.
Claim. Z_λ ≤ Z̃ a.s. for any λ ∈ Λ.
Suppose that there is a λ_0 ∈ Λ such that the inequality Z_{λ_0} ≤ Z̃ a.s. is not valid. Then we see that

lim_{n→∞} E[Z_{λ_0} ∨ Z_{λ_n}] = E[Z_{λ_0} ∨ Z̃] = E[Z̃] + E[(Z_{λ_0} − Z̃) ∨ 0] > a.

So we see that there is an n ≥ 1 such that E[Z_{λ_0} ∨ Z_{λ_n}] > a. However, from the assumption we see that there is a λ̃ ∈ Λ such that X_{λ_0} ∨ X_{λ_n} ≤ X_{λ̃} a.s. So we see that E[Z_{λ̃}] ≥ E[Z_{λ_0} ∨ Z_{λ_n}] > a. This contradicts the definition of a. So we have the claim.
Let X̃ = tan Z̃. Then we see that X_{λ_n} ↑ X̃ a.s., and that X_λ ≤ X̃ a.s. for any λ ∈ Λ.

Corollary 1.5.1 Let {X_λ}_{λ∈Λ} be a family of random variables. Then there exists a random variable X̃ satisfying the following.
(1) X_λ ≤ X̃ a.s. for all λ ∈ Λ.
(2) If Y is a random variable such that X_λ ≤ Y a.s. for any λ ∈ Λ, then X̃ ≤ Y a.s.

Proof Let Λ̃ be the family of non-empty finite subsets of Λ, and let X̄_A, A ∈ Λ̃, be a random variable given by X̄_A = max{X_λ; λ ∈ A}. Then we see that

X̄_{A_1} ∨ X̄_{A_2} = X̄_{A_1∪A_2}, A_1, A_2 ∈ Λ̃.

Therefore by the previous proposition we see that there exist a sequence {A_n}_{n=1}^∞ ⊂ Λ̃ and a random variable X̃ such that X̄_{A_n} ↑ X̃, n → ∞, and X̄_A ≤ X̃ a.s. for any A ∈ Λ̃. It is obvious that X̃ satisfies (1) and (2).

We denote X̃ in Corollary 1.5.1 by ess.sup_{λ∈Λ} X_λ in this book.
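Hölder's inequality (Proposition 1.5.1) also holds for the empirical measure of any finite sample, which gives a quick numerical check. The snippet below is my own illustration; the conjugate pair p = 3, q = 3/2 and the exponential samples are arbitrary choices, not the book's.

```python
import random

random.seed(0)
p, q = 3.0, 1.5            # conjugate exponents: 1/p + 1/q = 1
n = 10_000
xs = [random.expovariate(1.0) for _ in range(n)]
ys = [random.expovariate(2.0) for _ in range(n)]

E = lambda vals: sum(vals) / len(vals)          # empirical expectation
lhs = E([x * y for x, y in zip(xs, ys)])
rhs = E([x ** p for x in xs]) ** (1 / p) * E([y ** q for y in ys]) ** (1 / q)
assert lhs <= rhs   # Hoelder's inequality for the empirical measure
```

Since the empirical distribution is itself a probability measure, the inequality is guaranteed for every sample, not just on average.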

Chapter 2

Martingale with Discrete Parameter

2.1 Definition of Martingale

Let T be an ordered set. We only consider the case that T is a subset of [−∞, ∞] in this book. In almost all cases, T is {0, 1, 2, . . . , m}, m ≥ 1, or Z0 = {0, 1, 2, . . .}, or [0, T], T > 0, or [0, ∞), although we also consider the case that T is {n ∈ Z; n ≤ 0}. Let (Ω, F, P) be a probability space.

Definition 2.1.1 We say that {F_t}_{t∈T} is a filtration, if the following two conditions are satisfied.
(1) F_t, t ∈ T, are sub-σ-algebras.
(2) F_s ⊂ F_t for any s, t ∈ T with s < t.

Definition 2.1.2 Let {F_t}_{t∈T} be a filtration. We say that X = {X_t}_{t∈T} is an {F_t}_{t∈T}-martingale, if the following two conditions are satisfied.
(1) X_t is an F_t-measurable integrable random variable for each t ∈ T.
(2) If s, t ∈ T, s < t, then E[X_t|F_s] = X_s.

Definition 2.1.3 Let {F_t}_{t∈T} be a filtration. We say that X = {X_t}_{t∈T} is an {F_t}_{t∈T}-submartingale (supermartingale), if the following two conditions are satisfied.
(1) X_t is an F_t-measurable integrable random variable for each t ∈ T.
(2) If s, t ∈ T, s < t, then E[X_t|F_s] ≥ (≤) X_s.

In the case that it is obvious what filtration {F_t}_{t∈T} we handle, we call {F_t}_{t∈T}-martingales (sub-, supermartingales) martingales (sub-, supermartingales) for simplicity.

The following is obvious.

Proposition 2.1.1 (1) If X = {X_t}_{t∈T} and Y = {Y_t}_{t∈T} are martingales, and if a, b ∈ R, then aX + bY = {aX_t + bY_t}_{t∈T} is a martingale.


(2) If X = {X_t}_{t∈T} and Y = {Y_t}_{t∈T} are submartingales (supermartingales), and if a, b ≥ 0, then aX + bY = {aX_t + bY_t}_{t∈T} is also a submartingale (supermartingale).

Also, we have the following.

Proposition 2.1.2 (1) Let X = {X_t}_{t∈T} be a martingale, and ϕ : R → R be a convex function. Suppose moreover that E[|ϕ(X_t)|] < ∞ for all t ∈ T. Then {ϕ(X_t)}_{t∈T} is a submartingale.
(2) Let X = {X_t}_{t∈T} be a submartingale, and ϕ : R → R be a non-decreasing convex function. Suppose moreover that E[|ϕ(X_t)|] < ∞ for all t ∈ T. Then {ϕ(X_t)}_{t∈T} is a submartingale.

Proof In both cases, by Jensen's inequality we see that for any t, s ∈ T with s < t

E[ϕ(X_t)|F_s] ≥ ϕ(E[X_t|F_s]) ≥ ϕ(X_s).

Therefore we have our assertions.
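These definitions can be made concrete on the symmetric ±1 random walk, whose natural filtration has finitely many atoms, so conditional expectations are exact averages over continuations. The Python sketch below is my own illustration (not the book's): it checks the martingale property of S_n and, via Proposition 2.1.2 with ϕ = |·|, the submartingale property of |S_n|.

```python
from fractions import Fraction
from itertools import product

# Paths of a symmetric +-1 random walk up to time N; F_n is generated
# by the first n steps, so E[. |F_n] averages over the remaining steps.
N = 4

def S(path, n):                      # S_n = sum of the first n steps
    return sum(path[:n])

def cond_exp(f, prefix):
    # exact E[f | F_n] on the atom determined by `prefix` (first n steps)
    futures = [prefix + rest
               for rest in product([-1, 1], repeat=N - len(prefix))]
    return sum(Fraction(f(p)) for p in futures) / len(futures)

for n in range(N):
    for prefix in product([-1, 1], repeat=n):
        # martingale property: E[S_N | F_n] = S_n
        assert cond_exp(lambda p: S(p, N), prefix) == S(prefix, n)
        # |S_n| is a submartingale (Proposition 2.1.2 with phi = |.|)
        assert cond_exp(lambda p: abs(S(p, N)), prefix) >= abs(S(prefix, n))
```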

2.2 Doob's Inequality

First, we consider the case that m is an integer, T = {0, 1, . . . , m} and our filtration is {F_n}_{n=0}^m.

Proposition 2.2.1 Let X = {X_n}_{n=0}^m be a submartingale. Then

λP(max_{k=0,...,m} X_k > λ) ≤ E[X_m, max_{k=0,...,m} X_k > λ] ≤ E[X_m ∨ 0], λ > 0.

Proof Let A_k = ∩_{l=0}^{k−1} {X_l ≤ λ} ∩ {X_k > λ}, k = 0, 1, . . . , m. Then it is easy to see that A_k ∈ F_k, k = 0, . . . , m, and the A_k's are mutually disjoint. Also, we see that

{max_{k=0,...,m} X_k > λ} = ∪_{k=0}^m A_k.

Then we see that

λP(max_{k=0,...,m} X_k > λ) = λ ∑_{k=0}^m P(A_k) ≤ ∑_{k=0}^m E[X_k, A_k] ≤ ∑_{k=0}^m E[E[X_m|F_k], A_k] = ∑_{k=0}^m E[X_m, A_k] = E[X_m, max_{k=0,...,m} X_k > λ].


This implies our assertion.

Theorem 2.2.1 (Doob's inequality) Let M = {M_n}_{n=0}^m be a martingale. Then for any p ∈ (1, ∞)

E[max_{k=0,1,...,m} |M_k|^p]^{1/p} ≤ (p/(p−1)) E[|M_m|^p]^{1/p}.

Proof Our assertion is obvious if E[|M_m|^p] = ∞. So we assume that E[|M_m|^p] < ∞. Let Z = max_{k=0,1,...,m} |M_k|. Then by Jensen's inequality, we see that

E[Z^p] ≤ ∑_{k=0}^m E[|M_k|^p] = ∑_{k=0}^m E[|E[M_m|F_k]|^p] ≤ ∑_{k=0}^m E[|M_m|^p] < ∞.

Let X_n = |M_n|, n = 0, 1, . . . , m. Then by Proposition 2.1.2, we see that X = {X_n}_{n=0}^m is a submartingale. So by Proposition 2.2.1 we see that

λP(Z > λ) ≤ E[|M_m|, Z > λ], λ > 0.

Then we see that

E[Z^p] = E[∫_0^∞ 1_{Z>λ} pλ^{p−1} dλ] = p ∫_0^∞ λ^{p−1} P(Z > λ) dλ ≤ p ∫_0^∞ λ^{p−2} E[|M_m|, Z > λ] dλ = p E[|M_m| ∫_0^∞ 1_{Z>λ} λ^{p−2} dλ] = (p/(p−1)) E[|M_m| Z^{p−1}] ≤ (p/(p−1)) E[|M_m|^p]^{1/p} E[Z^p]^{(p−1)/p}.

Here we use Hölder's inequality (Proposition 1.5.1). Since E[Z^p] < ∞, we have our assertion.

Now let us consider the case that T = Z0 and our filtration is {F_n}_{n=0}^∞. We say that M = {M_n}_{n=0}^∞ is a square integrable martingale, if M is a martingale and if E[M_n^2] < ∞, n = 0, 1, . . ..

Proposition 2.2.2 If M = {M_n}_{n=0}^∞ is a square integrable martingale, then

E[(M_n − M_m)^2|F_m] = E[M_n^2|F_m] − M_m^2, n ≥ m ≥ 1.

In particular,

E[M_n^2] = E[M_0^2] + ∑_{k=1}^n E[(M_k − M_{k−1})^2], n ≥ 0.


Proof For any n ≥ m ≥ 1, we see that

E[(M_n − M_m)^2|F_m] = E[M_n^2|F_m] − 2E[M_m M_n|F_m] + E[M_m^2|F_m] = E[M_n^2|F_m] − M_m^2.

This implies the first assertion. The latter assertion is obvious from this.

Proposition 2.2.3 If M = {M_n}_{n=0}^∞ is a square integrable martingale such that sup_{n≥0} E[M_n^2] < ∞, then there exists a σ{∪_{n=0}^∞ F_n}-measurable random variable M_∞ such that M_n → M_∞ a.s., n → ∞, and E[(M_∞ − M_n)^2] → 0, n → ∞. Moreover, M_n = E[M_∞|F_n].

Proof E[M_n^2] is non-decreasing in n, and so there is an a ≥ 0 such that E[M_n^2] → a, n → ∞. For any N ≥ 1, {M_{N+n} − M_N}_{n=0}^∞ is an {F_{N+n}}_{n=0}^∞-martingale. So by Doob's inequality (Theorem 2.2.1) we see that

E[sup_{n,m≥N} |M_n − M_m|^2] ≤ 4E[sup_{n≥0} |M_{N+n} − M_N|^2] = 4 lim_{L→∞} E[max_{n=0,...,L} |M_{N+n} − M_N|^2] ≤ 16 lim_{L→∞} E[|M_{N+L} − M_N|^2] = 16(lim_{L→∞} E[M_{N+L}^2] − E[M_N^2]) = 16(a − E[M_N^2]).

So we see that E[lim_{N→∞} sup_{n,m≥N} |M_n − M_m|^2] = 0. Let Ω_0 = {lim_{N→∞} sup_{n,m≥N} |M_n − M_m| = 0}. Then P(Ω_0) = 1. Moreover, {M_n(ω)}_{n=0}^∞ is a Cauchy sequence for any ω ∈ Ω_0. So we can define M_∞ = lim_{n→∞} 1_{Ω_0} M_n, and we see that M_n → M_∞ a.s., n → ∞. Also, we see that

E[|M_∞ − M_n|^2] = E[lim_{m→∞} |1_{Ω_0} M_m − 1_{Ω_0} M_n|^2] ≤ lim inf_{m→∞} E[1_{Ω_0} |M_m − M_n|^2] = lim_{m→∞} (E[M_m^2] − E[M_n^2]) = a − E[M_n^2] → 0, n → ∞.

Therefore by Proposition 1.3.5 we see that

E[M_∞|F_n] = lim_{m→∞} E[M_m|F_n] (L1-convergence) = M_n.

This implies our assertion.
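For small horizons, Doob's inequality can be checked exactly by enumerating every path of the ±1 walk. The sketch below is my own illustration (not the book's), with p = 2, so the constant (p/(p−1))^p is 4; all expectations are exact rationals.

```python
from fractions import Fraction
from itertools import product

# Exact check of Doob's L^2 inequality, E[max_k M_k^2] <= 4 E[M_m^2],
# for the +-1 random walk martingale, by enumerating all 2^m paths.
m = 10
lhs = Fraction(0)
rhs = Fraction(0)
n_paths = 2 ** m
for steps in product([-1, 1], repeat=m):
    partial, peak = 0, 0
    for s in steps:
        partial += s
        peak = max(peak, partial * partial)
    lhs += Fraction(peak, n_paths)               # E[max_k |M_k|^2]
    rhs += Fraction(partial * partial, n_paths)  # E[|M_m|^2]

assert rhs == m            # E[M_m^2] = m for the +-1 walk
assert rhs <= lhs <= 4 * rhs   # maximal inequality with (p/(p-1))^2 = 4
```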

2.3 Stopping Time

From now on, we mainly consider the case that T = Z0 = {0, 1, 2, . . .} in this chapter. Also, we fix a filtration {F_n}_{n=0}^∞ = {F_n}_{n∈Z0}.


Definition 2.3.1 We say that σ is an ({F_n}_{n=0}^∞-)stopping time, if σ is a mapping from Ω into Z0 ∪ {∞} such that

{σ = n} ∈ F_n, n = 0, 1, . . . .

Proposition 2.3.1 (1) Suppose that τ is a mapping from Ω into Z0 ∪ {∞}. Then τ is a stopping time, if and only if

{τ ≤ n} ∈ F_n, n = 0, 1, . . . .

(2) Let m ∈ Z0 ∪ {∞} and let τ ≡ m, i.e., τ(ω) = m, ω ∈ Ω. Then τ is a stopping time.
(3) If σ and τ are stopping times, then σ ∧ τ, σ ∨ τ, and σ + τ are stopping times. Here σ ∧ τ is the mapping defined by (σ ∧ τ)(ω) = σ(ω) ∧ τ(ω), ω ∈ Ω. The definitions of σ ∨ τ and σ + τ are similar.

Proof We have Assertion (1) from the definition of filtration and the fact that

{τ ≤ n} = ∪_{k=0}^n {τ = k} and {τ = n} = {τ ≤ n} \ {τ ≤ n − 1}.

We have Assertions (2) and (3) from the definition of stopping times, Assertion (1) and the fact that

{σ ∧ τ ≤ n} = {σ ≤ n} ∪ {τ ≤ n}, {σ ∨ τ ≤ n} = {σ ≤ n} ∩ {τ ≤ n},

and

{σ + τ = n} = ∪_{k=0}^n ({σ = k} ∩ {τ = n − k}).

We define a subset F_τ of F for each stopping time τ by

F_τ = {A ∈ F; A ∩ {τ = n} ∈ F_n for any n ∈ Z0}.

We leave the proof of the following proposition to the reader as an exercise.

Proposition 2.3.2 Let τ be a stopping time.
(1) Let A ∈ F. Then A ∈ F_τ, if and only if A ∩ {τ ≤ n} ∈ F_n for any n ∈ Z0.
(2) F_τ is a sub-σ-algebra.
(3) τ is F_τ-measurable.
(4) Let m ∈ Z0. If τ ≡ m, then F_τ = F_m.

Proposition 2.3.3 Let σ and τ be stopping times.
(1) If A ∈ F_σ, then A ∩ {σ ≤ τ} ∈ F_τ and A ∩ {σ ≤ τ} ∈ F_{σ∧τ}.


(2) If σ(ω) ≤ τ(ω) for all ω ∈ Ω, then F_σ ⊂ F_τ.
(3) F_σ ∩ F_τ = F_{σ∧τ}.
(4) {τ < σ}, {τ = σ}, {τ > σ} ∈ F_{σ∧τ}.
(5) If A ∈ F_σ, then A ∩ {σ = τ} ∈ F_{σ∧τ}.

Proof (1) For any n ∈ Z0 we see that

(A ∩ {σ ≤ τ}) ∩ {τ = n} = (A ∩ {σ ≤ n}) ∩ {τ = n} ∈ F_n.

So we see that A ∩ {σ ≤ τ} ∈ F_τ. Then by this result we see that A ∩ {σ ≤ τ} = A ∩ {σ ≤ (σ ∧ τ)} ∈ F_{σ∧τ}. This implies Assertion (1).
Assertion (2) follows from Assertion (1), since {σ ≤ τ} = Ω.
Suppose that A ∈ F_σ ∩ F_τ. Then by Assertion (1) we see that A = (A ∩ {σ ≤ τ}) ∪ (A ∩ {τ ≤ σ}) ∈ F_{σ∧τ}. So F_σ ∩ F_τ ⊂ F_{σ∧τ}. Also, by Assertion (2) we see that F_{σ∧τ} ⊂ F_σ ∩ F_τ. So we have Assertion (3).
Letting A = Ω in Assertion (1), we see that {σ ≤ τ}, {τ ≤ σ} ∈ F_{σ∧τ}. This implies Assertion (4).
Assertion (5) follows from Assertions (1) and (4).
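A canonical example of a stopping time is the first time an adapted process hits a level: whether τ = n can be decided from the first n coordinates alone. The snippet below is my own illustration (the level a = 2 and horizon N = 8 are arbitrary choices): it verifies this measurability by brute force, changing the steps after time τ and checking that τ does not change.

```python
from itertools import product

# First hitting time of level a for a +-1 walk: tau = inf{n : S_n = a}.
def tau(steps, a=2):
    s = 0
    for n, step in enumerate(steps, start=1):
        s += step
        if s == a:
            return n
    return float("inf")          # never hit within the horizon

N = 8
for steps in product([-1, 1], repeat=N):
    t = tau(steps)
    if t != float("inf"):
        # {tau = t} is determined by the first t steps: any change of
        # the later steps leaves the hitting time unchanged.
        for tail in product([-1, 1], repeat=N - t):
            assert tau(steps[:t] + tail) == t
```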

2.4 Doob's Decomposition and Martingale Transformation

In this chapter, we call a sequence of random variables X = {X_n}_{n=0}^∞ a stochastic process.

Definition 2.4.1 We say that a stochastic process X = {X_n}_{n=0}^∞ is ({F_n}_{n=0}^∞-)adapted, if the random variable X_n is F_n-measurable for all n ∈ Z0.

It is obvious that martingales are adapted.

Definition 2.4.2 We say that a stochastic process X = {X_n}_{n=0}^∞ is ({F_n}_{n=0}^∞-)predictable, if X_0 = 0 and the random variable X_n is F_{n−1}-measurable for all n ≥ 1.

Theorem 2.4.1 Let {X_n}_{n=0}^∞ be an adapted process such that E[|X_n|] < ∞, n = 0, 1, 2, . . .. Then we have the following.
(1) There exist a martingale {M_n}_{n=0}^∞ with M_0 = 0 and a predictable process {A_n}_{n=0}^∞ such that X_n = X_0 + M_n + A_n, n ≥ 0. Moreover, if there are a martingale {M̃_n}_{n=0}^∞ with M̃_0 = 0 and a predictable process {Ã_n}_{n=0}^∞ such that X_n = X_0 + M̃_n + Ã_n, n ≥ 0, then M̃_n = M_n a.s. and Ã_n = A_n a.s., n ≥ 0.


We call this decomposition Doob's decomposition.
(2) If {X_n}_{n=0}^∞ is a submartingale, then the predictable process {A_n}_{n=0}^∞ in Doob's decomposition of X is a non-decreasing process, i.e., A_{n+1} ≥ A_n a.s., n = 0, 1, 2, . . ..

Proof (1) Let D_n = X_n − E[X_n|F_{n−1}] and Z_n = E[X_n|F_{n−1}] − X_{n−1}, n ≥ 1. Then we see that D_n is F_n-measurable and E[D_n|F_{n−1}] = 0, and that Z_n is F_{n−1}-measurable, n ≥ 1. Also, we see that D_n + Z_n = X_n − X_{n−1}, n ≥ 1. Let

M_n = ∑_{k=1}^n D_k, and A_n = ∑_{k=1}^n Z_k, n ≥ 0.

Then we see that M_0 = 0 and {M_n}_{n=0}^∞ is a martingale, and that {A_n}_{n=0}^∞ is predictable. Also, we see that X_n = X_0 + M_n + A_n. Suppose that M̃_n and Ã_n satisfy the condition. Then we see that Ã_0 = A_0 = 0, and that

Ã_n − Ã_{n−1} = E[Ã_n − Ã_{n−1}|F_{n−1}] = E[X_n − X_{n−1}|F_{n−1}] − E[M̃_n − M̃_{n−1}|F_{n−1}] = E[X_n|F_{n−1}] − X_{n−1} = A_n − A_{n−1}.

So we see that Ã_n = A_n a.s., n ≥ 1. This implies Assertion (1).
(2) If {X_n}_{n=0}^∞ is a submartingale, we easily see that Z_n ≥ 0 a.s. So {A_n}_{n=0}^∞ is non-decreasing.

Proposition 2.4.1 Let τ be a stopping time and {X_n}_{n=0}^∞ be an adapted stochastic process. Then a random variable 1{τ

Proposition 2.6.3 A family {X_λ}_{λ∈Λ} of random variables is uniformly integrable, if and only if the following two conditions are satisfied.
(i) sup_{λ∈Λ} E[|X_λ|] < ∞.
(ii) For any ε > 0, there exists a δ > 0 such that

sup_{λ∈Λ} E[|X_λ|, A] < ε

for any A ∈ F such that P(A) < δ.

Proof Assume that {X_λ}_{λ∈Λ} is uniformly integrable. Take an ε > 0 and fix it. Then there is an n ≥ 1 such that

sup_{λ∈Λ} E[|X_λ|, |X_λ| ≥ n] < ε/2.

Then we see that

sup_{λ∈Λ} E[|X_λ|] ≤ sup_{λ∈Λ} E[|X_λ|, |X_λ| ≥ n] + sup_{λ∈Λ} E[|X_λ|, |X_λ| < n] < ε/2 + n < ∞.

Let δ = ε/(2n). Then

E[|X_λ|, A] = E[|X_λ|, A ∩ {|X_λ| ≥ n}] + E[|X_λ|, A ∩ {|X_λ| < n}] ≤ E[|X_λ|, |X_λ| ≥ n] + nP(A)

for any A ∈ F. So if P(A) < δ, then

sup_{λ∈Λ} E[|X_λ|, A] < ε.

Therefore we see that conditions (i) and (ii) are satisfied.
Assume that the conditions (i) and (ii) are satisfied. Then

M = sup_{λ∈Λ} E[|X_λ|] < ∞.

Therefore we see that

P(|X_λ| ≥ n) ≤ (1/n) E[|X_λ|] ≤ M/n.

Let ε > 0. Let δ > 0 be as in Condition (ii). Then if we take a natural number n > M/δ, noting that M/n < δ, we see that


E[|X_λ|, |X_λ| ≥ n] ≤ ε, λ ∈ Λ.

Therefore we see that {X_λ}_{λ∈Λ} is uniformly integrable.

Proposition 2.6.4 If {X_λ}_{λ∈Λ} and {Y_λ}_{λ∈Λ} are uniformly integrable families of random variables, then {X_λ + Y_λ}_{λ∈Λ} is uniformly integrable.

Proof By Proposition 2.6.3, we see that for any ε > 0 there exists a δ > 0 for which

E[|X_λ|, A] ≤ ε/2, E[|Y_λ|, A] ≤ ε/2, λ ∈ Λ,

for any A ∈ F such that P(A) ≤ δ. Then we see that E[|X_λ + Y_λ|, A] ≤ ε, λ ∈ Λ, for any A ∈ F such that P(A) ≤ δ. So by Proposition 2.6.3 we have our assertion.

Proposition 2.6.5 Let X be an integrable random variable, and {G_λ}_{λ∈Λ} be a family of sub-σ-algebras. Then {E[X|G_λ]}_{λ∈Λ} is uniformly integrable.

Proof Since E[|X|, |X| ≥ n] → 0, n → ∞, we see that {X} is uniformly integrable. Let X_λ = E[X|G_λ], λ ∈ Λ. Then X_λ is G_λ-measurable. Note that

P(|X_λ| ≥ n) ≤ (1/n) E[|X_λ|] = (1/n) E[|E[X|G_λ]|] ≤ (1/n) E[E[|X||G_λ]] = (1/n) E[|X|].

This implies that

sup_{λ∈Λ} P(|X_λ| ≥ n) → 0, n → ∞.

Also, we have

E[|X_λ|, |X_λ| ≥ n] ≤ E[E[|X||G_λ], |X_λ| ≥ n] = E[|X|, |X_λ| ≥ n].

So by Proposition 2.6.3, we see that

sup_{λ∈Λ} E[|X_λ|, |X_λ| ≥ n] → 0, n → ∞.

Theorem 2.6.1 Let X_n, n = 1, 2, . . ., and X be integrable random variables. Then the following two conditions are equivalent.
(1) E[|X − X_n|] → 0, n → ∞.
(2) {X_n}_{n=1}^∞ is uniformly integrable, and X_n converges to X in probability.

Proof First we prove that Assertion (1) implies Assertion (2). For any ε > 0 we see that

P(|X − X_n| > ε) ≤ (1/ε) E[|X − X_n|] → 0, n → ∞.

So we see that X_n converges to X in probability. Let Y_n = X_n − X, n = 1, 2, . . .. It is sufficient to prove that {Y_n}_{n=1}^∞ is uniformly integrable. Let us take an arbitrary ε > 0 and fix it. Then from the assumption, we see that there is an m_0 ≥ 1 such that

E[|Y_m|] ≤ ε/2, m ≥ m_0.

Since ∑_{k=1}^{m_0} |Y_k| is integrable, we see that there is an n_0 ≥ 1 such that

E[∑_{k=1}^{m_0} |Y_k|, ∑_{k=1}^{m_0} |Y_k| ≥ n] ≤ ε/2, n ≥ n_0.

So we see that for any n ≥ n_0,

sup_m E[|Y_m|, |Y_m| ≥ n] ≤ sup_{m≥m_0} E[|Y_m|] + ∑_{k=1}^{m_0} E[|Y_k|, |Y_k| ≥ n] ≤ sup_{m≥m_0} E[|Y_m|] + E[∑_{k=1}^{m_0} |Y_k|, ∑_{k=1}^{m_0} |Y_k| ≥ n] ≤ ε.

So we see that {Y_n}_{n=1}^∞ is uniformly integrable.

Now let us assume (2). Let Y_m = X_m − X, m = 1, 2, . . .. Then {Y_m}_{m=1}^∞ is also uniformly integrable. Let n ≥ 1. For any ε > 0 we see that

E[|Y_m|, |Y_m| ≤ n] ≤ E[|Y_m|, ε ≤ |Y_m| ≤ n] + E[|Y_m|, |Y_m| < ε] ≤ n P(|Y_m| ≥ ε) + ε.

Therefore we see that

lim sup_{m→∞} E[|Y_m|, |Y_m| ≤ n] ≤ ε.

Since ε > 0 is arbitrary, we see that

lim_{m→∞} E[|Y_m|, |Y_m| ≤ n] = 0.

Then we see that

lim sup_{m→∞} E[|Y_m|] ≤ lim sup_{m→∞} E[|Y_m|, |Y_m| ≤ n] + sup_m E[|Y_m|, |Y_m| ≥ n] = sup_m E[|Y_m|, |Y_m| ≥ n].

Letting n → ∞, we see that E[|Y_m|] → 0, m → ∞. So we obtain Assertion (1).
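The equivalence in Theorem 2.6.1 fails without uniform integrability. The classical counterexample (my rendering, not taken from the book) is X_n = n·1_{(0,1/n)} on Ω = (0, 1) with Lebesgue measure: X_n → 0 everywhere, yet E[X_n] = 1 for all n, and the tail expectations do not tend to zero uniformly in n. All quantities below are computed exactly.

```python
from fractions import Fraction

# On Omega = (0, 1) with Lebesgue measure, let X_n = n on (0, 1/n) and
# 0 elsewhere. Then X_n -> 0 everywhere, but E[X_n] = 1 for all n.
def E_tail(n, m):
    # E[|X_n|, |X_n| >= m]: all the mass n * (1/n) = 1 if n >= m, else 0
    return Fraction(1) if n >= m else Fraction(0)

def E(n):
    return Fraction(n) * Fraction(1, n)   # E[X_n] = n * P((0, 1/n)) = 1

for m in [10, 100, 1000]:
    # sup_n E[|X_n|, |X_n| >= m] = 1: the family is NOT uniformly integrable
    assert max(E_tail(n, m) for n in range(1, 5000)) == 1

# and indeed L1 convergence to the a.s. limit 0 fails: E[|X_n - 0|] = 1
assert all(E(n) == 1 for n in range(1, 100))
```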


2.7 Lévy's Theorem

Theorem 2.7.1 Let X be an integrable random variable.
(1) Suppose that F_n, n = 1, 2, . . ., is a non-decreasing sequence of sub-σ-algebras, i.e., F_1 ⊂ F_2 ⊂ F_3 ⊂ · · ·. Let F_∞ be the sub-σ-algebra given by F_∞ = σ{∪_{n=1}^∞ F_n}. Then

E[|E[X|F_∞] − E[X|F_n]|] → 0, n → ∞,

and

P(lim_{n→∞} |E[X|F_∞] − E[X|F_n]| = 0) = 1.

(2) Suppose that G_n, n = 1, 2, . . ., is a non-increasing sequence of sub-σ-algebras, i.e., G_1 ⊃ G_2 ⊃ G_3 ⊃ · · ·. Let G_∞ be the sub-σ-algebra given by G_∞ = ∩_{n=1}^∞ G_n. Then

E[|E[X|G_∞] − E[X|G_n]|] → 0, n → ∞,

and

P(lim_{n→∞} |E[X|G_∞] − E[X|G_n]| = 0) = 1.

The above-mentioned theorem is called Lévy's theorem.

Proof We prove Assertion (1) first. Let X_n = E[X|F_n], n = 1, 2, . . .. Then {X_n}_{n=1}^∞ is an {F_n}_{n=1}^∞-martingale. Since

E[X_n ∨ 0] ≤ E[|E[X|F_n]|] ≤ E[E[|X||F_n]] = E[|X|],

we see by Theorem 2.5.1 that there is an F_∞-measurable random variable X_∞ such that

P(lim_{n→∞} |X_∞ − X_n| = 0) = 1.

By Proposition 2.6.5 we see that {X_n}_{n=1}^∞ is uniformly integrable. So we see that E[|X_∞ − X_n|] → 0. It is easy to see that for any A ∈ F_m, m ≥ 1, E[X_n, A] = E[X, A], n ≥ m. So we see that E[X_∞, A] = lim_{n→∞} E[X_n, A] = E[X, A]. Since ∪_{m=1}^∞ F_m is a π-system, we see that X_∞ = E[X|F_∞]. These imply Assertion (1).
We can show Assertion (2) similarly by using Theorem 2.5.2.

We can show the following proposition similarly by using Theorems 2.5.1 and 2.6.1.

Proposition 2.7.1 Let {F_n}_{n=0}^∞ be a filtration, and let F_∞ = σ{∪_{n=0}^∞ F_n}. Let {M_n}_{n=0}^∞ be an {F_n}_{n=0}^∞-martingale. Then the following are equivalent.


(1) {M_n}_{n=0}^∞ is uniformly integrable.
(2) There is an F_∞-measurable integrable random variable X such that M_n = E[X|F_n], n ≥ 0.
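Lévy's upward theorem can be visualized with the dyadic filtration on [0, 1): conditioning on F_n replaces X = f(ω) by its average over the dyadic interval of length 2^{−n} containing ω. The sketch below is my own numerical illustration (integrals are approximated on a fine grid, so it is only a picture of the theorem, not a proof): it shows the L1 distance between X and E[X|F_n] shrinking as n grows.

```python
# Levy's theorem on Omega = [0, 1): F_n is generated by the dyadic
# intervals [k/2^n, (k+1)/2^n). For X = f(omega), E[X|F_n] is the
# average of f over the interval containing omega.
M = 2 ** 12
f = lambda w: w * w                     # an integrable choice of X
xs = [(i + 0.5) / M for i in range(M)] # midpoint grid on (0, 1)

def cond_exp(n, w):
    k = int(w * 2 ** n)                 # dyadic interval containing w
    lo, hi = k / 2 ** n, (k + 1) / 2 ** n
    pts = [x for x in xs if lo <= x < hi]
    return sum(f(x) for x in pts) / len(pts)

def l1_error(n):
    # approximate E[|X - E[X|F_n]|]
    return sum(abs(f(w) - cond_exp(n, w)) for w in xs) / M

errs = [l1_error(n) for n in range(0, 7)]
assert all(errs[i + 1] < errs[i] for i in range(len(errs) - 1))
```

For a smooth f the error decays roughly like 2^{−n}, consistent with the L1 convergence E[|E[X|F_∞] − E[X|F_n]|] → 0 in Theorem 2.7.1.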

2.8 A Remark on Doob's Decomposition

Concerning Doob's decomposition, we have the following estimate.

Theorem 2.8.1 Let {X_n}_{n=0}^∞ be a submartingale, and X_n = X_0 + M_n + A_n be its Doob's decomposition, i.e., {M_n}_{n=0}^∞ is a martingale and {A_n}_{n=0}^∞ is a predictable non-decreasing process with M_0 = A_0 = 0. Moreover, we assume that E[|X_n|^2] < ∞, n = 0, 1, 2, . . .. Then we have

E[A_n^2] ≤ 16 E[max{X_n − X_k; k = 0, 1, . . . , n}^2], n ≥ 0.

Proof Note that A_n − A_{n−1} = E[X_n − X_{n−1}|F_{n−1}] ≤ E[|X_n| + |X_{n−1}| | F_{n−1}], and so we see that E[A_n^2] < ∞, n ≥ 0. Let Y_n = max{X_n − X_k; k = 0, 1, . . . , n}. Then we see that

M_n − M_k + A_n − A_k = X_n − X_k ≤ Y_n, k = 0, 1, . . . , n.

Therefore we have

E[A_n − A_k|F_k] ≤ E[Y_n|F_k], k = 0, 1, . . . , n.

On the other hand, we have

(A_n − A_{k−1})^2 − (A_n − A_k)^2 = (2A_n − A_k − A_{k−1})(A_k − A_{k−1}) ≤ 2(A_n − A_{k−1})(A_k − A_{k−1}).

Therefore we see that

E[A_n^2] ≤ 2 ∑_{k=1}^n E[(A_k − A_{k−1}) E[A_n − A_{k−1}|F_{k−1}]].   (2.2)

Since E[A_n − A_{k−1}|F_{k−1}] ≤ E[Y_n|F_{k−1}] ≤ max{E[Y_n|F_l]; l = 0, 1, . . . , n}, k = 1, . . . , n, we see by Eq. (2.2) and Doob's inequality that

E[A_n^2] ≤ 2 E[A_n max{E[Y_n|F_l]; l = 0, 1, . . . , n}] ≤ 2 E[A_n^2]^{1/2} E[max{E[Y_n|F_l]; l = 0, 1, . . . , n}^2]^{1/2} ≤ 4 E[A_n^2]^{1/2} E[Y_n^2]^{1/2}.


This implies our assertion.

By a similar argument we have the following, although we will not use it.

Theorem 2.8.2 Let {X_n}_{n=0}^∞ be a submartingale, and X_n = X_0 + M_n + A_n be its Doob's decomposition. Let p ∈ (1, ∞). We assume moreover that E[|X_n|^p] < ∞, n = 0, 1, 2, . . .. Then there exists a C_p ∈ (0, ∞) depending only on p ∈ (1, ∞) such that

E[A_n^p] ≤ C_p E[max{X_n − X_k; k = 0, 1, . . . , n}^p], n ≥ 0.

Proof Since A_k − A_{k−1} ≤ E[|X_k| + |X_{k−1}| | F_{k−1}], we see that E[A_k^p] < ∞, k = 1, 2, . . . , n. Let Y_n = max{X_n − X_k; k = 0, 1, . . . , n}. Then we have

E[A_n − A_k|F_k] ≤ E[Y_n|F_k], k = 0, 1, . . . , n.   (2.3)

On the other hand, we have

(A_n − A_{k−1})^p − (A_n − A_k)^p = ∫_0^1 (d/dt)(A_n − A_k + t(A_k − A_{k−1}))^p dt = p ∫_0^1 (A_n − A_k + t(A_k − A_{k−1}))^{p−1} (A_k − A_{k−1}) dt ≤ p (A_n − A_{k−1})^{p−1} (A_k − A_{k−1}).

So we see that

E[A_n^p] ≤ p ∑_{k=1}^n E[(A_k − A_{k−1}) E[(A_n − A_{k−1})^{p−1}|F_{k−1}]].   (2.4)

Case 1. The case that p ∈ (1, 2]. By Eq. (2.3) and the fact that 1/p + (p−1)/p = 1, we have

E[(A_n − A_{k−1})^{p−1}|F_{k−1}] ≤ E[A_n − A_{k−1}|F_{k−1}]^{p−1} ≤ E[Y_n|F_{k−1}]^{p−1} ≤ max{E[Y_n|F_j]; j = 0, 1, . . . , n}^{p−1}.

Then by Eq. (2.4), we see that

E[A_n^p] ≤ p E[A_n max{E[Y_n|F_j]; j = 0, 1, . . . , n}^{p−1}] ≤ p E[A_n^p]^{1/p} E[max{E[Y_n|F_j]; j = 0, 1, . . . , n}^p]^{(p−1)/p} ≤ p (p/(p−1))^{p−1} E[A_n^p]^{1/p} E[Y_n^p]^{(p−1)/p}.


Therefore we have

E[A_n^p] ≤ p^{p/(p−1)} (p/(p−1))^p E[Y_n^p].

Case 2. The case that p ∈ (2, ∞). Note that 1/(2p−3) + (2p−1)(2p−4)/(2(2p−3)) = p − 1. Then we have by Eq. (2.3)

E[(A_n − A_{k−1})^{p−1}|F_{k−1}] ≤ E[A_n − A_{k−1}|F_{k−1}]^{1/(2p−3)} E[(A_n − A_{k−1})^{(2p−1)/2}|F_{k−1}]^{(2p−4)/(2p−3)} ≤ max{E[Y_n|F_j]; j = 0, 1, . . . , n}^{1/(2p−3)} × max{E[A_n^{(2p−1)/2}|F_j]; j = 0, 1, . . . , n}^{(2p−4)/(2p−3)}.

So by Eq. (2.4) we have

E[A_n^p] ≤ p E[A_n max{E[Y_n|F_j]; j = 0, 1, . . . , n}^{1/(2p−3)} × max{E[A_n^{(2p−1)/2}|F_j]; j = 0, 1, . . . , n}^{(2p−4)/(2p−3)}] ≤ p E[A_n^p]^{1/p} E[max{E[Y_n|F_j]; j = 0, 1, . . . , n}^p]^{1/(p(2p−3))} × E[max{E[A_n^{(2p−1)/2}|F_j]; j = 0, 1, . . . , n}^{2p/(2p−1)}]^{(2p−1)(2p−4)/(2p(2p−3))} ≤ p (p/(p−1))^{1/(2p−3)} E[A_n^p]^{1/p} E[Y_n^p]^{1/(p(2p−3))} × ((2p/(2p−1))/(2p/(2p−1) − 1))^{(2p−4)/(2p−3)} E[A_n^p]^{(2p−1)(2p−4)/(2p(2p−3))}.

Here we use the fact that

1/p + 1/(p(2p−3)) + (2p−1)(2p−4)/(2p(2p−3)) = 1.

So we see that

E[A_n^p] ≤ p^{p(2p−3)} (p/(p−1))^p (2p)^{2p(p−2)} E[Y_n^p].
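The Doob decomposition estimated in this section can be computed explicitly in small cases. For the submartingale X_n = S_n² built from the ±1 walk, the predictable increment is E[X_n − X_{n−1}|F_{n−1}] = 1, so A_n = n and M_n = S_n² − n. The Python sketch below is my own illustration (not the book's) and confirms this by exact enumeration of all paths.

```python
from fractions import Fraction
from itertools import product

# Doob decomposition (Theorem 2.4.1) for the submartingale X_n = S_n^2,
# S the +-1 random walk: Z_n = E[X_n|F_{n-1}] - X_{n-1} = 1, so A_n = n.
N = 5

def cond_exp(f, prefix):
    # exact E[f|F_n] on the atom given by the first n steps
    tails = list(product([-1, 1], repeat=N - len(prefix)))
    return sum(Fraction(f(prefix + t)) for t in tails) / len(tails)

X = lambda path, n: sum(path[:n]) ** 2

for n in range(1, N + 1):
    for prefix in product([-1, 1], repeat=n - 1):
        # increment of the predictable part: E[X_n - X_{n-1} | F_{n-1}] = 1
        Zn = cond_exp(lambda p: X(p, n), prefix) - X(prefix, n - 1)
        assert Zn == 1
        # hence M_n = X_n - n is a martingale: E[M_n|F_{n-1}] = M_{n-1}
        Mn_given = cond_exp(lambda p: X(p, n) - n, prefix)
        assert Mn_given == X(prefix, n - 1) - (n - 1)
```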

Chapter 3

Martingale with Continuous Parameter

3.1 Several Notions on Stochastic Processes

Let (Ω, F, P) be a complete probability space, that is, any subset B of Ω for which there is an A ∈ F such that B ⊂ A and P(A) = 0 belongs to F. Let N0 be the set of A ∈ F such that P(A) = 0 or 1. Then N0 is a sub-σ-algebra. In this section, we consider the case that T = [0, ∞).

Definition 3.1.1 We say that a filtration {F_t}_{t∈[0,∞)} satisfies the standard condition, if the following two conditions are satisfied.
(1) N0 ⊂ F_0.
(2) (Right continuity) ∩_{s>t} F_s = F_t for any t ≥ 0.
We say that (Ω, F, P, {F_t}_{t∈[0,∞)}) is a standard filtered probability space, if (Ω, F, P) is a complete probability space and the filtration {F_t}_{t∈[0,∞)} satisfies the standard condition.

We assume that (Ω, F, P, {F_t}_{t∈[0,∞)}) is a standard filtered probability space from now on. A stochastic process X = {X_t}_{t∈[0,∞)} is a family of random variables. We regard X as a function defined on [0, ∞) × Ω. We introduce several notions in the following in order to analyze stochastic processes.

Definition 3.1.2 (1) We say that a stochastic process X : [0, ∞) × Ω → R is ({F_t}_{t∈[0,∞)}-)adapted, if X(t, ·) : Ω → R is F_t-measurable for any t ∈ [0, ∞).
(2) We say that a stochastic process X : [0, ∞) × Ω → R is progressively measurable, if X|_{[0,T]×Ω} : [0, T] × Ω → R is B([0, T]) ⊗ F_T-measurable for any T ∈ [0, ∞). Here B([0, T]) is the Borel algebra on [0, T], and B([0, T]) ⊗ F_T is the product σ-algebra of B([0, T]) and F_T.

As for a stochastic process X : [0, ∞) × Ω → R, we denote X(t, ω), t ∈ [0, ∞), ω ∈ Ω, by X_t(ω) or X(t)(ω). We denote by X_t or X(t) the random variable X(t, ·) : Ω → R for t ∈ [0, ∞).



Proposition 3.1.1 (1) If a stochastic process X : [0, ∞) × Ω → R is adapted, and if X(·, ω) : [0, ∞) → R is right continuous for all ω ∈ Ω, then X is progressively measurable.
(2) If a stochastic process X : [0, ∞) × Ω → R is adapted, and if X(·, ω) : [0, ∞) → R is left continuous for all ω ∈ Ω, then X is progressively measurable.

Proof Since the proofs of Assertions (1) and (2) are almost the same, we prove Assertion (1) only. Let T ≥ 0. Let X_n : [0, T] × Ω → R, n ≥ 1, be given by

X_n(t, ω) = X( T ∧ ([2^n t] + 1)/2^n, ω ),  t ∈ [0, T], ω ∈ Ω.

Note that 2^{-n}([2^n t] + 1) ↓ t, n → ∞. So we see that X_n(t, ω) → X(t, ω), n → ∞. Since X_n is B([0, T]) ⊗ F_T-measurable, we see that X|_{[0,T]×Ω} : [0, T] × Ω → R is also B([0, T]) ⊗ F_T-measurable. Therefore X is progressively measurable.

Let D([0, ∞)) be the set of all functions w : [0, ∞) → R satisfying the following condition (D).
(D) w(t) = lim_{s↓t} w(s) for all t ∈ [0, ∞), and w(t−) = lim_{s↑t} w(s) exists for all t ∈ (0, ∞).

Definition 3.1.3 (1) We say that a stochastic process X : [0, ∞) × Ω → R is a D-process, if X(·, ω) : [0, ∞) → R belongs to D([0, ∞)) for any ω ∈ Ω.
(2) We say that a stochastic process X : [0, ∞) × Ω → R is a continuous process, if X(·, ω) : [0, ∞) → R is continuous for any ω ∈ Ω.

Let us recall the definitions of martingales.

Definition 3.1.4 We say that X : [0, ∞) × Ω → R is an ({F_t}_{t∈[0,∞)}-)martingale, if X is an ({F_t}_{t∈[0,∞)}-)adapted process,

E[|X_t|] < ∞,  t ≥ 0,

and E[X_t | F_s] = X_s for any t > s ≥ 0. Also, we say that X : [0, ∞) × Ω → R is an ({F_t}_{t∈[0,∞)}-)sub-(super-)martingale, if X is an ({F_t}_{t∈[0,∞)}-)adapted process,

E[|X_t|] < ∞,  t ≥ 0,

and E[X_t | F_s] ≥ (≤) X_s for any t > s ≥ 0. We give a simple remark in the following.
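The dyadic approximation used in the proof of Proposition 3.1.1 is easy to check numerically. The following sketch (an illustration, not part of the text) verifies that s_n(t) = ([2^n t] + 1)/2^n decreases to t from above:

```python
import math

def dyadic_upper(t, n):
    # s_n(t) = ([2^n t] + 1) / 2^n, the approximation from the proof of
    # Proposition 3.1.1; it satisfies t < s_n(t) <= t + 2^{-n} and s_n(t) ↓ t.
    return (math.floor(2 ** n * t) + 1) / 2 ** n

t = 0.3
approx = [dyadic_upper(t, n) for n in range(1, 30)]
# the sequence is non-increasing in n ...
assert all(a >= b for a, b in zip(approx, approx[1:]))
# ... and squeezed between t and t + 2^{-n}
assert all(t < dyadic_upper(t, n) <= t + 2 ** -n for n in range(1, 30))
```

Since each s_n(t) is constant on dyadic intervals, X_n(t, ω) = X(T ∧ s_n(t), ω) is jointly measurable, which is the point of the proof.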


Proposition 3.1.2 Let X : [0, ∞) × Ω → R be an ({F_t}_{t∈[0,∞)}-)adapted process such that

E[|X_t|] < ∞,  t ≥ 0.

Let {t_n}_{n=0}^∞ be an increasing sequence of non-negative numbers such that t_0 = 0, t_{n−1} < t_n, n ≥ 1, and t_n → ∞, n → ∞. Suppose that for any n ≥ 1

E[X_t | F_s] = X_s,  s, t ∈ [t_{n−1}, t_n], s < t.

Then X : [0, ∞) × Ω → R is an ({F_t}_{t∈[0,∞)}-)martingale.

Proof It is sufficient to prove the following claim.
Claim. E[X_t | F_s] = X_s for any t > s ≥ 0 such that t ∈ [t_{n−1}, t_n], s ∈ [t_{m−1}, t_m], n ≥ m ≥ 1.
We prove this claim by induction in n − m. From the assumption we see that the claim is valid in the case that n − m = 0. Assume that the claim is valid in the case that n − m = k ≥ 0. Suppose that n − m = k + 1. Since t_m ∈ [t_m, t_{m+1}] and n − (m + 1) = k, we see that

E[X_t | F_s] = E[ E[X_t | F_{t_m}] | F_s ] = E[X_{t_m} | F_s] = X_s.

So the claim is shown.

In this book, we use the following notation. Δ_n, n ≥ 1, and Δ are the sets defined by

Δ_n = { k/2^n ; k ∈ Z_{≥0} },  n ≥ 1,

and

Δ = ⋃_{n=1}^∞ Δ_n.

3.2 D-Modification

In this book, we mainly handle submartingales which are D-processes. We give the reason why we may consider that submartingales are D-processes.

Proposition 3.2.1 Let w̃ : Δ → R be a real-valued function defined in Δ. Assume that

sup_{n≥1} sup_{K≥1} U_K( {w̃(2^{-n}k)}_{k=0}^K ; a, b ) < ∞

for any a, b ∈ Q with a < b, and that

sup_{t∈Δ} |w̃(t)| < ∞.

Then we have the following. (1) For any t ∈ [0, ∞)


the limit

lim_{s↓t, s∈Δ} w̃(s) ∈ (−∞, ∞)

exists, and for any t ∈ (0, ∞) the limit

lim_{s↑t, s∈Δ} w̃(s) ∈ (−∞, ∞)

exists.
(2) Let us define w : [0, ∞) → R by

w(t) = lim_{s↓t, s∈Δ} w̃(s),  t ∈ [0, ∞).

Then w ∈ D([0, ∞)).

Proof (1) Suppose that there is a t ∈ [0, ∞) such that

limsup_{s↓t, s∈Δ} w̃(s) ≠ liminf_{s↓t, s∈Δ} w̃(s).

Then there are a, b ∈ Q, a < b, such that

liminf_{s↓t, s∈Δ} w̃(s) < a < b < limsup_{s↓t, s∈Δ} w̃(s).

Then we see that there are s_m ∈ Δ, m = 1, 2, . . . , such that s_1 > s_2 > s_3 > s_4 > · · · and w̃(s_{2m−1}) > b, w̃(s_{2m}) < a, m = 1, 2, . . ..
Let N = sup_{n≥1} sup_{K≥1} U_K({w̃(2^{-n}k)}_{k=0}^K ; a, b) < ∞. Then there are n ≥ 1 and K ≥ 1 such that s_1, s_2, . . . , s_{2N+4} ∈ Δ_n and s_1 < 2^{-n}K. This implies a contradiction. So we obtain the first assertion in (1). The proof of the second assertion of (1) is similar.
Assertion (2) is an easy consequence of Assertion (1).

Proposition 3.2.2 Let (Ω, F, P, {F_t}_{t∈[0,∞)}) be a standard filtered probability space. Also, let {X_t}_{t∈[0,∞)} be a submartingale such that {X_t}_{t∈[0,∞)} is uniformly integrable. If E[X_t] is right continuous in t ∈ [0, ∞), i.e.,

lim_{s↓t} E[X_s] = E[X_t],  t ∈ [0, ∞),

then there is a D-process X̃ : [0, ∞) × Ω → R such that

P(X_t = X̃_t) = 1,  t ∈ [0, ∞).

We call this D-process X̃ a D-modification of the stochastic process X.

Proof Since {X_t}_{t∈[0,∞)} is uniformly integrable, we see that

sup_{t∈[0,∞)} E[|X_t|] < ∞.

For any n, N ≥ 1, {X_{k2^{-n}}}_{k=0}^{N2^n} is an {F_{k2^{-n}}}_{k=0}^{N2^n}-submartingale. So by Proposition 2.5.2 we see that for any a < b, a, b ∈ Q,

E[ U_{N2^n}( {X_{k2^{-n}}}_{k=0}^{N2^n} ; a, b ) ] ≤ (b − a)^{-1} E[(X_N − a) ∨ 0].

So we see that

E[ sup_{n≥1} sup_{K≥1} U_K( {X_{2^{-n}k}}_{k=0}^K ; a, b ) ] ≤ (b − a)^{-1} sup_{t∈[0,∞)} E[|X_t| + |a|] < ∞.

Also, by Corollary 2.4.1 we see that for any λ > 0

P( sup_{k=0,...,N2^n} |X_{k2^{-n}}| > λ ) ≤ λ^{-1}( E[|X_0|] + 2E[|X_N|] ) ≤ 3λ^{-1} sup_{t∈[0,∞)} E[|X_t|].

This implies that

P( sup_{t∈Δ} |X_t| > λ ) ≤ 3λ^{-1} sup_{t∈[0,∞)} E[|X_t|],  λ > 0.

So we see that P(sup_{t∈Δ} |X_t| = ∞) = 0. Let Ω_0 ∈ F be given by

Ω_0 = { sup_{t∈Δ} |X_t| < ∞ } ∩ ⋂_{a,b∈Q, a<b} { sup_{n≥1} sup_{K≥1} U_K( {X_{2^{-n}k}}_{k=0}^K ; a, b ) < ∞ }.

3.3 Doob's Inequality and Stopping Times

Theorem 3.3.1 Let M = {M_t}_{t∈[0,∞)} be a martingale which is a D-process. Then we have the following.
(1) For any T > 0 and p ∈ (1, ∞)

E[ sup_{t∈[0,T]} |M_t|^p ] ≤ ( p/(p − 1) )^p E[|M_T|^p].

(2) For any T > 0 and λ > 0

λ P( sup_{t∈[0,T]} |M_t| > λ ) ≤ E[|M_T|].

Proof It is easy to see that {M_{kT/2^n}}_{k=0}^{2^n} is an {F_{kT/2^n}}_{k=0}^{2^n}-martingale for any n ≥ 1. Then by Doob's inequality (Theorem 2.2.1) we see that

E[ max_{k=0,...,2^n} |M_{kT/2^n}|^p ] ≤ ( p/(p − 1) )^p E[|M_T|^p].

Since

max_{k=0,...,2^n} |M_{kT/2^n}|^p ↑ sup_{t∈[0,T]} |M_t|^p,  n → ∞,

we have Assertion (1). We can show Assertion (2) similarly by using Proposition 2.2.1.

Definition 3.3.1 We say that σ is an ({F_t}_{t∈[0,∞)}-)stopping time, if σ is a mapping from Ω to [0, ∞] and if

{σ ≤ t} ∈ F_t,  t ∈ [0, ∞).
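Doob's L^p inequality in Theorem 3.3.1 (1) can be illustrated by Monte Carlo for a simple discrete martingale. The following sketch (an illustration under a simple-random-walk assumption, not part of the text) checks the p = 2 bound E[max_k |M_k|²] ≤ 4 E[|M_n|²]:

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_paths = 200, 5000

# symmetric +-1 random walk: a martingale with M_0 = 0
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
M = np.cumsum(steps, axis=1)

lhs = np.mean(np.max(np.abs(M), axis=1) ** 2)   # E[max_k |M_k|^2], estimated
rhs = 4 * np.mean(M[:, -1] ** 2)                # (p/(p-1))^p E[|M_n|^2] with p = 2

assert lhs <= rhs  # Doob's L^2 inequality (holds with a large empirical margin)
```

In this example E[|M_n|²] = n exactly, so the bound 4n is far from tight, which matches the general picture.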

3.3 Doob’s Inequality and Stopping Times

49

Proposition 3.3.1 For any σ : Ω → [0, ∞] the following are equivalent.
(1) σ is a stopping time.
(2) {σ < t} ∈ F_t for any t ∈ [0, ∞).

Proof If (1) is valid,

{σ < t} = ⋃_{n=1}^∞ { σ ≤ t − 1/n } ∈ F_t,

and so we have (2). On the other hand, if (2) is valid,

{σ ≤ t} = ⋂_{n=1}^∞ { σ < t + 1/n } ∈ ⋂_{s>t} F_s = F_t,

and so we have (1).

(3) If A ∈ F_σ, then A ∩ {τ > σ} ∈ F_{σ∧τ}.
(4) If A ∈ F_σ, then A ∩ {σ = τ} ∈ F_{σ∧τ}.

Proof Let σ_n = ρ̃_n ∘ σ and τ_n = ρ̃_n ∘ τ for n ≥ 1. Suppose that A ∈ F_σ. Since σ ≤ σ_n, we see that A ∈ F_{σ_n}. Then by Propositions 2.3.3 and 3.3.5, we see that A ∩ {σ_n ≤ τ_n} ∈ F_{σ_n ∧ τ_n}. Since {σ ≤ τ} = ⋂_{n=m}^∞ {σ_n ≤ τ_n}, m ≥ 1, we have Assertion (1) by Proposition 3.3.6. The proofs of Assertions (2) and (3) are similar to the proof of Proposition 2.3.3. Assertion (4) follows from Assertions (1) and (3).

Proposition 3.3.10 Suppose that X : [0, ∞) × Ω → R is an adapted D-process.

A_t is non-decreasing in t, and so we see that

P(A_{t−} = A_t) = 1,  t > 0.   (3.5)

Let N_t^{(n,k)}, t ≥ 0, n, k ≥ 1, be given by

N_t^{(n,k)} = E[ A_{2^{-n}k} − E[A_{2^{-n}k} | F_{2^{-n}(k−1)}] | F_t ],  t ∈ [0, ∞).

Since N^{(n,k)} is a martingale, we may assume that N^{(n,k)} is a D-process. Note that N_t^{(n,k)} = N_{2^{-n}k}^{(n,k)}, t ≥ k2^{-n}, and N_t^{(n,k)} = 0, t ∈ [0, 2^{-n}(k − 1)). So by Eq. (3.5) we see that

E[ ∫_0^∞ (N_t^{(n,k)} − N_{t−}^{(n,k)}) dA_t ] = E[ ∫_{(2^{-n}(k−1), 2^{-n}k)} (N_t^{(n,k)} − N_{t−}^{(n,k)}) dA_t ] = 0.

Now let us define a D-process A^{(n)}, n ≥ 1, by

A_t^{(n)} = N_t^{(n,k)} + E[A_{2^{-n}k} | F_{2^{-n}(k−1)}] = E[A_{2^{-n}k} | F_t],  t ∈ [2^{-n}(k − 1), 2^{-n}k), k = 1, 2, . . . .

Let A_∞ = sup_t A_t. Then E[A_∞²] < ∞, and

0 ≤ A_t^{(n)} ≤ E[A_∞ | F_t],  t ∈ [0, ∞).

60

3 Martingale with Continuous Parameter

∞

E

(n) (A(n) − A ) d A t = 0. t t−

(3.6)

0

Now let ρ̃_n : [0, ∞) → Δ_n be given by

ρ̃_n(t) = 2^{-n}k,  t ∈ [(k − 1)2^{-n}, 2^{-n}k), k = 1, 2, . . . .

Then we see that ρ̃_n(t) ↓ t, n → ∞, and

A_t^{(n)} = E[A_{ρ̃_n(t)} | F_t],  t ∈ [0, ∞).

This shows that A_t^{(n)} ↓ A_t, n → ∞. Now suppose that σ is a finite stopping time. Recalling that σ_n = ρ̃_n ∘ σ is a stopping time, we see that

E[A_{ρ̃_n(σ)}] = Σ_{k=1}^∞ E[ A_{2^{-n}k}, σ ∈ [2^{-n}(k − 1), 2^{-n}k) ]
= Σ_{k=1}^∞ E[ N_{2^{-n}k}^{(n,k)} + E[A_{2^{-n}k} | F_{2^{-n}(k−1)}], σ ∈ [2^{-n}(k − 1), 2^{-n}k) ]
= Σ_{k=1}^∞ E[ N_σ^{(n,k)} + E[A_{2^{-n}k} | F_{2^{-n}(k−1)}], σ ∈ [2^{-n}(k − 1), 2^{-n}k) ]
= Σ_{k=1}^∞ E[ A_σ^{(n)}, σ ∈ [2^{-n}(k − 1), 2^{-n}k) ] = E[A_σ^{(n)}].

Here we use the fact that {σ ∈ [2^{-n}(k − 1), 2^{-n}k)} ∈ F_σ. Let ε > 0, and let τ_n, n = 1, 2, . . . , be stopping times given by

τ_n = inf{ t ≥ 0 ; A_t^{(n)} − A_t > ε }.

It is obvious that τ_n is non-decreasing in n. So we see that there is a stopping time τ such that τ_n ↑ τ, n → ∞. For any m ≥ 1 we have

ε P( sup_{t∈[0,m−1]} |A_t^{(n)} − A_t| > ε ) ≤ E[ A_{τ_n∧m}^{(n)} − A_{τ_n∧m} ] = E[A_{ρ̃_n(τ_n∧m)}] − E[A_{τ_n∧m}].

Since A is regular, ρ̃_n(τ_n ∧ m) → τ ∧ m, n → ∞, and τ_n ∧ m → τ ∧ m, n → ∞, we see that

P( sup_{t∈[0,m−1]} |A_t^{(n)} − A_t| > ε ) → 0,  n → ∞.

On the other hand, we see that

3.4 Doob–Meyer Decomposition


E[A_{m−2} | F_t] ≤ A_t ≤ A_t^{(n)} ≤ E[A_∞ | F_t],  t ≥ m − 1, n ≥ 1,

and so we have

E[ sup_{t∈[m−1,∞)} |A_t^{(n)} − A_t|² ] ≤ 4E[|A_∞ − A_{m−2}|²] → 0,  m → ∞.

These imply that

E[ sup_{t∈[0,∞)} |A_t^{(n)} − A_t|² ] → 0,  n → ∞.

Combining this with Eq. (3.6), we see that

0 = E[ ∫_0^∞ (A_t − A_{t−}) dA_t ] = E[ Σ_t (A_t − A_{t−})² ].

This shows that A is continuous with probability 1.

The following is an easy consequence of the previous proposition.

Corollary 3.4.1 If X is a submartingale and is a continuous process such that E[sup_{t∈[0,∞)} X_t²] < ∞, then there are a continuous process M ∈ M̄_2 and a continuous process A ∈ Ā_{+,2} such that X_t = X_0 + M_t + A_t, t ≥ 0.

Proposition 3.4.4 If X is a continuous process and is an L²-non-decreasing process, then X is an L²-natural non-decreasing process.

Proof Since X is a submartingale, there are a continuous process M ∈ M̄_2 and a continuous process A ∈ Ā_2 such that X = M + A. However, by Proposition 3.4.1 we see that X = A. This shows our assertion.

Remark In this book, we show the Doob–Meyer theorem for square integrable submartingales. See Ikeda–Watanabe [4], Karatzas–Shreve [6], Revuz–Yor [7], and Rogers–Williams [8] for a general case.
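The continuous-time decomposition above is built from the discrete Doob decomposition X_k = N_k + A_k with predictable compensator A_k = Σ_{j≤k} E[X_j − X_{j−1} | F_{j−1}]. A minimal numerical sketch (an illustration with a simple random walk, not from the text): for X_k = S_k² with S a symmetric ±1 walk, E[X_k − X_{k−1} | F_{k−1}] = 1, so A_k = k and N_k = S_k² − k is a martingale.

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_paths = 50, 20000

steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
S = np.cumsum(steps, axis=1)
X = S ** 2                                   # submartingale X_k = S_k^2

# Doob decomposition: since (S_k - S_{k-1})^2 = 1 always, the compensator is A_k = k
A = np.arange(1, n_steps + 1, dtype=float)   # predictable and non-decreasing
N = X - A                                    # candidate martingale part

assert np.all(np.diff(A) > 0)                # compensator is increasing here
# the martingale part has mean 0 at every time, up to Monte Carlo error
assert np.max(np.abs(N.mean(axis=0))) < 4.0
```

The continuous-time proof above replaces the exact conditional expectations by the dyadic approximations A^{(n)}.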

3.5 Quadratic Variation

Let (Ω, F, P, {F_t}_{t∈[0,∞)}) be a standard filtered probability space. Let M_b^c = M_b^c({F_t}_{t∈[0,∞)}) be the set of {F_t}_{t∈[0,∞)}-martingales M = {M_t}_{t≥0} satisfying the following two conditions.
(i) M is a continuous process and M_0 = 0.
(ii) There is a K ∈ (0, ∞) such that |M(t, ω)| ≤ K, t ∈ [0, ∞), ω ∈ Ω.

Proposition 3.5.1 Let M ∈ M_b^c. Then we have the following.
(1) E[ sup_{s,t∈[0,∞), |s−t|≤δ} |M_t − M_s| ] → 0, δ ↓ 0.
(2) sup_{n≥1} E[( Σ_{k=1}^∞ (M_{k/2^n} − M_{(k−1)/2^n})² )²] < ∞ and

Σ_{k=1}^∞ E[(M_{k/2^n} − M_{(k−1)/2^n})⁴] → 0,  n → ∞.

Proof By the assumption, there is a K > 0 such that sup_{t≥0} |M_t| ≤ K. Then for any δ ∈ (0, 1] and m ≥ 1, we have

E[ sup_{s,t∈[0,∞), |s−t|≤δ} |M_t − M_s| ] ≤ E[ sup_{s,t∈[0,m+1], |s−t|≤δ} |M_t − M_s| ] + 2E[ sup_{t≥m} |M_t − M_m| ].

Since M_t, t ∈ [0, m + 1], is uniformly continuous, we see that for any ω ∈ Ω

sup_{s,t∈[0,m+1], |s−t|≤δ} |M_t − M_s| → 0,  δ ↓ 0.

Since sup_{t,s∈[0,m+1]} |M_t − M_s| ≤ 2K, we see that

E[ sup_{s,t∈[0,m+1], |s−t|≤δ} |M_t − M_s| ] → 0,  δ ↓ 0.

By Doob's inequality (Theorem 3.3.1) we see that

E[ sup_{t≥m} |M_t − M_m| ] ≤ 2 sup_{t≥m} E[|M_t − M_m|²]^{1/2} = 2( sup_{t≥m} E[M_t²] − E[M_m²] )^{1/2}.

So we see that E[sup_{t≥m} |M_t − M_m|] → 0, m → ∞, and we obtain Assertion (1).
Let a_n = Σ_{k=1}^∞ E[(M_{k/2^n} − M_{(k−1)/2^n})⁴], n ≥ 1. Then we see that

a_n ≤ 4K² Σ_{k=1}^∞ E[(M_{k/2^n} − M_{(k−1)/2^n})²] ≤ 4K² sup_{k≥1} E[M_{k/2^n}²] ≤ 4K⁴.

Also, we see that

E[( Σ_{k=1}^m (M_{k/2^n} − M_{(k−1)/2^n})² )²]
= 2 Σ_{k=1}^m E[ E[ Σ_{ℓ=k+1}^m (M_{ℓ/2^n} − M_{(ℓ−1)/2^n})² | F_{k/2^n} ] (M_{k/2^n} − M_{(k−1)/2^n})² ] + Σ_{k=1}^m E[(M_{k/2^n} − M_{(k−1)/2^n})⁴]
≤ 2 Σ_{k=1}^m E[ E[M_{m/2^n}² − M_{k/2^n}² | F_{k/2^n}] (M_{k/2^n} − M_{(k−1)/2^n})² ] + a_n
≤ 8K² Σ_{k=1}^m E[(M_{k/2^n} − M_{(k−1)/2^n})²] + 4K⁴ ≤ 12K⁴.

This implies the first part of Assertion (2). Moreover, we have

a_n ≤ E[ ( sup_{k≥1} |M_{k/2^n} − M_{(k−1)/2^n}|² ) Σ_{k=1}^∞ (M_{k/2^n} − M_{(k−1)/2^n})² ]
≤ E[ sup_{k≥1} |M_{k/2^n} − M_{(k−1)/2^n}|⁴ ]^{1/2} E[ ( Σ_{k=1}^∞ (M_{k/2^n} − M_{(k−1)/2^n})² )² ]^{1/2}
≤ ( (2K)³ E[ sup_{k≥1} |M_{k/2^n} − M_{(k−1)/2^n}| ] )^{1/2} (12K⁴)^{1/2}.

So we have the second part of Assertion (2).

We define a stochastic process Q^{(n)}(X, Y) = {Q^{(n)}(X, Y)_t}_{t≥0}, n ≥ 1, for stochastic processes X = {X_t}_{t≥0} and Y = {Y_t}_{t≥0} by

Q^{(n)}(X, Y)_t = Σ_{k=1}^∞ (X_{(k/2^n)∧t} − X_{((k−1)/2^n)∧t})(Y_{(k/2^n)∧t} − Y_{((k−1)/2^n)∧t}),  t ≥ 0.   (3.7)

Let M ∈ M_b^c. Then X_t = M_t² is a bounded submartingale and is a continuous process. Therefore by Corollary 3.4.1, we see that there is a continuous process ⟨M⟩ ∈ Ā_{+,2} such that M_t² − ⟨M⟩_t belongs to M̄_2.

Proposition 3.5.2 Let M ∈ M_b^c. Then {Q^{(n)}(M, M)_t − ⟨M⟩_t}_{t∈[0,∞)} is a martingale and is a continuous process. Moreover,

E[ sup_{t≥0} (Q^{(n)}(M, M)_t − ⟨M⟩_t)² ] → 0,  n → ∞.

64

3 Martingale with Continuous Parameter (n) (n) 2 2 Nk/2 n − N (k−1)/2n = Mk/2n − E[Mk/2n |F(k−1)/2n ] 2 2 2 2 = Mk/2 n − M(k−1)/2n + E[(Mk/2n − M(k−1)/2n ) |F(k−1)/2n ],

and so 2 2 Mk/2n − M(k−1)/2n = (Mk/2 n − N k/2n ) − (M(k−1)/2n − N (k−1)/2n ) (n) (n) = − (Nk/2n − Nk/2 n ) + (N (k−1)/2n − N (k−1)/2n )

+ E[(Mk/2n − M(k−1)/2n )2 |F(k−1)/2n ]. Therefore we see that (Q (n) (M, M)k/2n − Mk/2n ) − (Q (n) (M, M)(k−1)/2n − M(k−1)/2n ) (n) (n) 2 = (Nk/2n − Nk/2 n ) − (N (k−1)/2n − N (k−1)/2n ) + (Mk/2n − M(k−1)/2n )

− E[(Mk/2n − M(k−1)/2n )2 |F(k−1)/2n ]. So we see that E[((Q (n) (M, M)k/2n − Mk/2n ) − (Q (n) (M, M)(k−1)/2n − M(k−1)/2n ))2 ] (n) (n) 2 2E[((Nk/2n − Nk/2 n ) − (N (k−1)/2n − N (k−1)/2n )) ]

+ 8E[(Mk/2n − M(k−1)/2n )4 ]. This implies that E[ sup (Q (n) (M, M)t − Mt )2 ] t∈[0,∞)

4 sup E[(Q (n) (M, M)k/2n − Mk/2n )2 ] k0

8 sup E[(N

k/2n

k0

−

(n) 2 Nk/2 n) ]

+ 32

∞

E[(Mk/2n − M(k−1)/2n )4 ].

k=1

Therefore by Theorem 3.4.1 and Proposition 3.5.1 we have our assertion. We define a continuous process M, N for M, N ∈ Mcb by M, N t =

1 (M + N t − M − N t ), 4

t ∈ [0, ∞).

Proposition 3.5.3 Let M, N ∈ Mcb . (1) M, N = N , M, and {Mt Nt − M, N t }t∈[0,∞) is a martingale. (2) {Q (n) (M, N )t − M, N t }t∈[0,∞) is a continuous process and is a martingale. Moreover,

3.5 Quadratic Variation

65

E[sup(Q (n) (M, N )t − M, N t )2 ] → 0,

n → ∞.

t0

Proof The first equation in Assertion (1) is obvious. Since Mt Nt − M, N t =

1 1 ((Mt + Nt )2 − M + N t ) − ((Mt − Nt )2 − M − N t ), 4 4

{Mt Nt − M, N t }t∈[0,∞) is a martingale. Assertion (2) follows from the fact that Q (n) t (M, N ) =

1 (n) (Q t (M + N , M + N ) − Q (n) t (M − N , M − N )). 4

We define a stochastic process X τ : [0, ∞) × → R for an adapted D-process X : [0, ∞) × → R and a stopping time τ : → [0, ∞] by X τ (t, ω) = X (t ∧ τ (ω), ω) = X t∧τ (ω),

t ∈ [0, ∞), ω ∈ .

Then it is easy to see that X τ is an adapted D-process. We remark that for any adapted D-process X and any stopping times τ and σ we have (X τ )σ = X τ ∧σ . If M ∈ Mcb and if τ : → [0, ∞] is a stopping time, M τ ∈ Mcb because of Proposition 3.3.12. Then we have the following. Proposition 3.5.4 Let M, N ∈ Mcb , and τ : → [0, ∞] be a stopping time. Then M, N τ = M τ , N τ = M, N τ . Proof Note that if τ (ω) ∈ [(k − 1)2−n , k2−n ], k 1, ω ∈ , then sup |Q (n) (M, N τ )t − Q (n) (M, N )τt |

t∈[0,∞)

(

sup

s,t∈[(k−1)2−n ,k2−n ]

|Mt − Ms |)(

sup

|Nt − Ns |).

sup

|Nt − Ns |4 ]1/2 → 0,

s,t∈[(k−1)2−n ,k2−n ]

Therefore we see that E[ sup |Q (n) (M, N τ )t − Q (n) (M, N )τt |2 ] t∈[0,∞)

E[

sup s,t∈[0,∞),|s−t|2−n

|Mt − Ms |4 ]1/2 E[

s,t∈[0,∞),|s−t|2−n

66

3 Martingale with Continuous Parameter

as n → ∞. So by Proposition 3.5.3 we see that M, N τ = M, N τ . The second equation follows from the fact that M τ , N τ = M, N τ τ = (M, N τ )τ = M, N τ .

3.6 Continuous Local Martingale Definition 3.6.1 (1) We say that {τn }∞ n=1 is a sequence of non-decreasing stopping times, if τn , n = 1, 2, . . . , are stopping times, τk (ω) τk+1 (ω), k = 1, 2, . . . , and τn (ω) → ∞, n → ∞, for all ω ∈ . (2) We say that a stochastic process M : [0, ∞) × → R is a continuous local martingale, if M is an adapted continuous process, and if there is a sequence of c τn non-decreasing stopping times {τn }∞ n=1 such that M − M0 ∈ Mb for all n 1. (3) We say that a stochastic process A : [0, ∞) × → R is an adapted nondecreasing continuous process, if A is an adapted continuous process, and if A(·, ω) : [0, ∞) → R is an non-decreasing function for all ω ∈ . (4) We say that a stochastic process A : [0, ∞) × → R is an adapted continuous process with finite variation, if there are adapted continuous non-decreasing processes Ai , i = 0, 1, such that A = A1 − A0 . In this book, we denote by Mcloc the set of continuous local martingales M with M0 = 0, denote by A+,c the set of adapted non-decreasing processes A with A0 = 0, and denote by Ac the set of adapted continuous processes with finite variation with A0 = 0. Proposition 3.6.1 If M : [0, ∞) × → R is a continuous process and is a martingale with M0 = 0, then M is a continuous local martingale. Proof Let τn = inf{t 0; |Mt | n} ∧ n, n 1. Then {τn }∞ n=1 is a sequence of non-decreasing stopping times. By Proposition 3.3.12 we see that M τn ∈ Mcb , and so we have our assertion. ∞ Proposition 3.6.2 (1) If {τn }∞ n=1 and {σn }n=1 are sequences of non-decreasing stop∞ ping times, then {τn ∧ σn }n=1 is also a sequence of non-decreasing stopping times. (2) Let {τn }∞ n=1 be a sequence of non-decreasing stopping times and let X n : [0, ∞) × → R, n = 1, 2, . . . , be adapted continuous process such that X nτn = τn , n = 1, 2, . . . , with probability 1. Then there exists an adapted continuous X n+1 process X : [0, ∞) × → R such that X τn = X nτn a.s., n = 1, 2, . . .. 
Moreover, if X˜ : [0, ∞) × → R is an adapted continuous process such that X˜ τn = X nτn a.s., n = 1, 2, . . . , then X = X˜ .

Proof Assertion (1) is obvious. We prove Assertion (2). Let Ω_n ∈ F be given by

Ω_n = { X_n^{τ_n}(t) = X_{n+1}^{τ_n}(t) for all t ∈ [0, ∞) },  n = 1, 2, . . . .

Also, let Ω_0 = ⋂_{n=1}^∞ Ω_n. Then we see that P(Ω_0) = 1. Let ω ∈ Ω_0. If T > 0 and τ_n(ω) ≥ T, n ≥ 1, then

X_{m+1}(t, ω) = X_{m+1}^{τ_m}(t, ω) = X_m^{τ_m}(t, ω) = X_m(t, ω),  t ∈ [0, T]

for any m ≥ n. Therefore we see that X_m(t, ω) = X_n(t, ω), m ≥ n, for any t ∈ [0, T]. Since τ_n(ω) → ∞, n → ∞, we see that for any T > 0

X_m(·, ω)|_{[0,T]} = X_n(·, ω)|_{[0,T]},  m ≥ n,

for sufficiently large n ≥ 1. Let X : [0, ∞) × Ω → R be given by

X(t, ω) = lim_{n→∞} X_n(t, ω), ω ∈ Ω_0,  and  X(t, ω) = 0, ω ∈ Ω \ Ω_0.

Then we see that X is an adapted continuous process and that X_t^{τ_n}(ω) = X_n^{τ_n}(t, ω), t ∈ [0, ∞), ω ∈ Ω_0. The uniqueness is obvious.

Proposition 3.6.3 (1) M_loc^c and A^c are vector spaces.
(2) Let τ be a stopping time. If M ∈ M_loc^c and A ∈ A^c, then M^τ ∈ M_loc^c and A^τ ∈ A^c.

Proof Suppose that X, Y ∈ M_loc^c. Then there are sequences of non-decreasing stopping times {τ_n}_{n=1}^∞ and {σ_n}_{n=1}^∞ such that X^{τ_n}, Y^{σ_n} ∈ M_b^c, n = 1, 2, . . .. Then we see that (X + Y)^{τ_n∧σ_n} = (X^{τ_n})^{σ_n} + (Y^{σ_n})^{τ_n} ∈ M_b^c. So we see that X + Y ∈ M_loc^c. We leave the proof of the remaining assertions to the reader as an exercise.

Proposition 3.6.4 If X ∈ M_loc^c ∩ A^c, then X = 0.

Proof From the assumption, we see that there are A_0, A_1 ∈ A^{+,c} such that X = A_1 − A_0. Since X is a continuous local martingale, there is a sequence of non-decreasing stopping times {τ_n}_{n=1}^∞ such that X^{τ_n} ∈ M_b^c, n ≥ 1. Let

σ_n = inf{ t ≥ 0 ; A_0(t) + A_1(t) ≥ n },  n = 1, 2, . . . .

Then {σ_n}_{n=1}^∞ is a sequence of non-decreasing stopping times and A_0^{σ_n}(t) + A_1^{σ_n}(t) ≤ n, t ≥ 0. Let ρ_n = τ_n ∧ σ_n, n = 1, 2, . . .. Then {ρ_n}_{n=1}^∞ is also a sequence of non-decreasing stopping times. Then we see that X^{ρ_n} = (X^{τ_n})^{σ_n} ∈ M_b^c and that A_0^{ρ_n}(t) + A_1^{ρ_n}(t) ≤ n, t ≥ 0.
Since A_1^{ρ_n} − A_0^{ρ_n} ∈ M̄_2, we see by Proposition 3.4.1 that X^{ρ_n} = A_1^{ρ_n} − A_0^{ρ_n} = 0, n ≥ 1. This implies that X = 0.
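Localizing sequences of the kind used above are concrete objects. The following sketch (an illustration with a discrete random-walk path, not from the text) builds τ_n = inf{t ≥ 0 ; |M_t| ≥ n}, as in the proof of Proposition 3.6.1, and checks that the stopped path stays bounded by n:

```python
import numpy as np

rng = np.random.default_rng(3)
path = np.cumsum(rng.choice([-1.0, 1.0], size=10_000))  # a martingale path

def first_hit(path, n):
    # tau_n = first index with |M_k| >= n (len(path) if the level is never hit)
    hits = np.nonzero(np.abs(path) >= n)[0]
    return int(hits[0]) if hits.size else len(path)

for n in (5, 10, 20):
    tau = first_hit(path, n)
    stopped = path[: tau + 1] if tau < len(path) else path
    # with steps of size 1 the stopped path cannot overshoot the level n
    assert np.max(np.abs(stopped)) <= n
```

Because the levels increase, the hitting times τ_n are non-decreasing in n, and τ_n → ∞ along a path that stays finite on compacts.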

Proposition 3.6.5 Let M ∈ M_loc^c.
(1) There exists a ⟨M⟩ ∈ A^{+,c} such that {M_t² − ⟨M⟩_t}_{t∈[0,∞)} is a local martingale. Moreover, such a ⟨M⟩ ∈ A^{+,c} is unique.
(2) For any T > 0 and ε > 0

P( sup_{t∈[0,T]} |Q^{(n)}(M, M)_t − ⟨M⟩_t| > ε ) → 0,  n → ∞.

Proof (1) The uniqueness follows from Proposition 3.6.4. So we show the existence. There is a sequence of non-decreasing stopping times {τ_n}_{n=1}^∞ such that M^{τ_n} ∈ M_b^c, n ≥ 1. Let A_n = ⟨M^{τ_n}⟩, n ≥ 1. Then by Proposition 3.5.4 we see that

A_{n+1}^{τ_n} = ⟨M^{τ_{n+1}}⟩^{τ_n} = ⟨(M^{τ_{n+1}})^{τ_n}⟩ = ⟨M^{τ_n}⟩ = A_n,

and so we see that A_{n+1}^{τ_n} = A_n^{τ_n}. Therefore by Proposition 3.6.2 we see that there is an adapted continuous process A such that A^{τ_n} = A_n, n = 1, 2, . . .. Since A_n ∈ A^{+,c}, n = 1, 2, . . . , we see that A ∈ A^{+,c}.
Let X = M² − A. Then X is an adapted continuous process such that X_0 = 0. Let n ≥ 1,

σ_n = inf{ t ≥ 0 ; |X_t| > n },

and let ρ_n = τ_n ∧ σ_n. Then {ρ_n}_{n=1}^∞ is a sequence of non-decreasing stopping times. Moreover, we see that

X^{ρ_n} = ( (M^{τ_n})² − A^{τ_n} )^{σ_n} = ( (M^{τ_n})² − ⟨M^{τ_n}⟩ )^{σ_n}.

So we see that X^{ρ_n} ∈ M_b^c, and so X is a continuous local martingale. Letting ⟨M⟩ = A, we have Assertion (1).
(2) By Proposition 3.5.2 we see that for any m ≥ 1

E[ sup_{t∈[0,∞)} (Q^{(n)}(M^{τ_m}, M^{τ_m})_t − ⟨M^{τ_m}⟩_t)² ] → 0,  n → ∞.

Then we see that

P( sup_{t∈[0,T]} |Q^{(n)}(M, M)_t − ⟨M⟩_t| > ε )
≤ P( sup_{t∈[0,T]} |Q^{(n)}(M^{τ_m}, M^{τ_m})_t − A_t^{τ_m}| > ε, τ_m > T + 2 ) + P(τ_m ≤ T + 2)
≤ (1/ε²) E[ sup_{t∈[0,∞)} (Q^{(n)}(M^{τ_m}, M^{τ_m})_t − ⟨M^{τ_m}⟩_t)² ] + P(τ_m ≤ T + 2).

Therefore

limsup_{n→∞} P( sup_{t∈[0,T]} |Q^{(n)}(M, M)_t − ⟨M⟩_t| > ε ) ≤ P(τ_m ≤ T + 2).

Letting m → ∞, we have Assertion (2).


For any M, N ∈ M_loc^c we define ⟨M, N⟩ : [0, ∞) × Ω → R by

⟨M, N⟩ = (1/4)( ⟨M + N⟩ − ⟨M − N⟩ ).

Then we have the following as an easy consequence.

Proposition 3.6.6 Let M, N ∈ M_loc^c.
(1) ⟨M, N⟩ ∈ A^c, and {M_t N_t − ⟨M, N⟩_t}_{t∈[0,∞)} is a continuous local martingale.
(2) For any T > 0 and ε > 0

P( sup_{t∈[0,T]} |Q^{(n)}(M, N)_t − ⟨M, N⟩_t| > ε ) → 0,  n → ∞.

Also, we have the following.

Proposition 3.6.7 Let M, N ∈ M_loc^c and τ : Ω → [0, ∞] be a stopping time. Then ⟨M, N^τ⟩ = ⟨M^τ, N^τ⟩ = ⟨M, N⟩^τ.

For any continuous process X : [0, ∞) × Ω → R we define a continuous process X* by the following:

X*(t, ω) = sup_{s∈[0,t]} |X(s, ω)|,  t ∈ [0, ∞), ω ∈ Ω.

It is not easy to see when a continuous local martingale is a martingale in general. However, we have the following result.

Proposition 3.6.8 Let M ∈ M_loc^c.
(1) If E[M*(t)] < ∞, t ≥ 0, then M is a martingale.
(2) E[⟨M⟩(t)] ≤ E[M*(t)²] ≤ 4E[⟨M⟩(t)],  t ∈ [0, ∞).
In particular, if E[⟨M⟩(t)] < ∞ for all t > 0, then M and {M(t)² − ⟨M⟩(t) ; t ≥ 0} are martingales.
(3) If E[⟨M⟩(t)] < ∞, t > 0, then

E[M(t)²] = E[⟨M⟩(t)],  t > 0.

Proof There is a sequence of non-decreasing stopping times {τ_n}_{n=1}^∞ such that M^{τ_n} ∈ M_b^c, n = 1, 2, . . .. It is obvious that (M^{τ_n})* = (M*)^{τ_n}.
(1) Since |M(t) − M^{τ_n}(t)| ≤ 2M*(t), we see that E[|M(t) − M^{τ_n}(t)|] → 0, n → ∞. This implies that M is a martingale.
(2) Note that E[M^{τ_n}(t)²] = E[⟨M^{τ_n}⟩(t)]. So we see that

E[⟨M⟩(t)] = lim_{n→∞} E[⟨M⟩^{τ_n}(t)] = lim_{n→∞} E[M^{τ_n}(t)²] ≤ E[M*(t)²].

Also, by Doob's inequality (Theorem 3.3.1) we see that

E[M*(t)²] = lim_{n→∞} E[ sup_{s∈[0,t]} |M^{τ_n}(s)|² ] ≤ 4 lim_{n→∞} E[|M^{τ_n}(t)|²] = 4 lim_{n→∞} E[⟨M⟩^{τ_n}(t)] = 4E[⟨M⟩(t)].

These imply the first part of Assertion (2). Moreover, we see that

(M² − ⟨M⟩)*(t) ≤ M*(t)² + ⟨M⟩(t).

So Assertion (1) implies the second part of Assertion (2). Suppose that E[M*(t)²] < ∞. Then we see that

E[M(t)²] = lim_{n→∞} E[M^{τ_n}(t)²] = lim_{n→∞} E[⟨M⟩^{τ_n}(t)] = E[⟨M⟩(t)].

This implies Assertion (3).

The following is an immediate consequence of the above-mentioned proposition.

Corollary 3.6.1 If M ∈ M_loc^c and ⟨M⟩ = 0, then M = 0.

Proposition 3.6.9 We define ‖A‖ : [0, ∞) × Ω → R for A ∈ A^c by

‖A‖(t, ω) = lim_{n→∞} Σ_{k=1}^∞ |A(t ∧ k2^{-n}, ω) − A(t ∧ ((k − 1)2^{-n}), ω)|,  t ∈ [0, ∞), ω ∈ Ω.

Then ‖A‖ ∈ A^{+,c}. We call ‖A‖ the total variation process of A.

Proof There are A_0, A_1 ∈ A^{+,c} such that A = A_0 − A_1. Let A_2 = A_0 + A_1. Then A_2 ∈ A^{+,c} and we see that

|A(t ∧ k2^{-n}) − A(t ∧ ((k − 1)2^{-n}))| ≤ A_2(t ∧ k2^{-n}) − A_2(t ∧ ((k − 1)2^{-n})).

Let

‖A‖^{(n)}(t) = Σ_{k=1}^∞ |A(t ∧ k2^{-n}) − A(t ∧ (k − 1)2^{-n})|,  n ≥ 1.

Then ‖A‖^{(n)}(t) ≤ A_2(t), t ≥ 0. For any t ∈ Δ_m, m ≥ 1, ‖A‖^{(n)}(t), n ≥ m, is non-decreasing in n. Therefore lim_{n→∞} ‖A‖^{(n)}(t, ω) exists for all t ∈ Δ. It is easy to see that

|‖A‖^{(n)}(t) − ‖A‖^{(n)}(s)| ≤ A_2(t) − A_2(s),  n ≥ m,

for any t ∈ [0, ∞), s ∈ Δ_m, and m ≥ 1 with t ≥ s. Therefore we see that ‖A‖(t) = lim_{n→∞} ‖A‖^{(n)}(t, ω) exists for all t ∈ [0, ∞), and that 0 ≤ ‖A‖(t) − ‖A‖(s) ≤ A_2(t) − A_2(s) for any s, t ∈ Δ, s ≤ t. Therefore we see that ‖A‖(s) ≤ ‖A‖(t) for any t > s ≥ 0, and that ‖A‖(t) − ‖A‖(s) ≤ A_2(t) − A_2(s) for any t > s ≥ 0. These imply that ‖A‖ ∈ A^{+,c}.
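The dyadic approximation of the total variation process is easy to compute directly. The following sketch (an illustration with a deterministic tent-shaped path, not from the text) checks that ‖A‖^{(n)}(2) recovers the true variation 2:

```python
def A(t):
    # tent path: up on [0, 1], down on [1, 2]; its total variation on [0, 2] is 2
    return t if t <= 1.0 else 2.0 - t

def dyadic_variation(T, n):
    # ||A||^{(n)}(T) = sum_k |A(T ∧ k 2^-n) - A(T ∧ (k-1) 2^-n)|
    h = 2.0 ** -n
    k, total, prev = 1, 0.0, A(0.0)
    while (k - 1) * h < T:
        cur = A(min(T, k * h))
        total += abs(cur - prev)
        prev, k = cur, k + 1
    return total

assert abs(dyadic_variation(2.0, 10) - 2.0) < 1e-9
```

Here the kink at t = 1 is a dyadic point, so the dyadic sums are exact for every n ≥ 1; for a general A they increase to the variation as in the proof.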


The following inequality is often called the Kunita–Watanabe inequality.

Proposition 3.6.10 Let M, N ∈ M_loc^c, and let ϕ : [0, ∞) × Ω → R and ψ : [0, ∞) × Ω → R be progressively measurable processes. Then we have

∫_0^∞ |ϕ_t ψ_t| d‖⟨M, N⟩‖(t) ≤ ( ∫_0^∞ ϕ_t² d⟨M⟩_t )^{1/2} ( ∫_0^∞ ψ_t² d⟨N⟩_t )^{1/2}  a.s.

Proof For any r ∈ R we see that

⟨M + rN⟩_t = ⟨M⟩_t + 2r⟨M, N⟩_t + r²⟨N⟩_t,  t ≥ 0,

with probability 1. Let

Ω_0 = ⋂_{r,t∈Q, t≥0} { ⟨M + rN⟩_t = ⟨M⟩_t + 2r⟨M, N⟩_t + r²⟨N⟩_t }.

Then P(Ω_0) = 1. For any ω ∈ Ω, t, s ∈ [0, ∞), and r ∈ R let

F(t, s, r; ω) = (⟨M⟩_t(ω) − ⟨M⟩_s(ω)) + 2r(⟨M, N⟩_t(ω) − ⟨M, N⟩_s(ω)) + r²(⟨N⟩_t(ω) − ⟨N⟩_s(ω)).

Then it is obvious that (t, s, r) ∈ [0, ∞)² × R → F(t, s, r; ω) is continuous. If ω ∈ Ω_0, then

F(t, s, r; ω) = ⟨M + rN⟩_t(ω) − ⟨M + rN⟩_s(ω) ≥ 0,  t > s ≥ 0, t, s, r ∈ Q,

and so we see that

F(t, s, r; ω) ≥ 0,  t > s ≥ 0, r ∈ R.

Therefore we have

(⟨M, N⟩_t(ω) − ⟨M, N⟩_s(ω))² ≤ (⟨M⟩_t(ω) − ⟨M⟩_s(ω))(⟨N⟩_t(ω) − ⟨N⟩_s(ω)),  t > s ≥ 0, ω ∈ Ω_0.

So we see that for any 0 ≤ t_0 < t_1 < · · · < t_m

Σ_{k=1}^m |⟨M, N⟩_{t_k}(ω) − ⟨M, N⟩_{t_{k−1}}(ω)|
≤ Σ_{k=1}^m (⟨M⟩_{t_k}(ω) − ⟨M⟩_{t_{k−1}}(ω))^{1/2} (⟨N⟩_{t_k}(ω) − ⟨N⟩_{t_{k−1}}(ω))^{1/2}
≤ ( Σ_{k=1}^m (⟨M⟩_{t_k}(ω) − ⟨M⟩_{t_{k−1}}(ω)) )^{1/2} ( Σ_{k=1}^m (⟨N⟩_{t_k}(ω) − ⟨N⟩_{t_{k−1}}(ω)) )^{1/2}
= (⟨M⟩_{t_m}(ω) − ⟨M⟩_{t_0}(ω))^{1/2} (⟨N⟩_{t_m}(ω) − ⟨N⟩_{t_0}(ω))^{1/2}.

Therefore we see that

‖⟨M, N⟩‖(t, ω) − ‖⟨M, N⟩‖(s, ω) ≤ (⟨M⟩_t(ω) − ⟨M⟩_s(ω))^{1/2} (⟨N⟩_t(ω) − ⟨N⟩_s(ω))^{1/2},  t > s ≥ 0, ω ∈ Ω_0.

Let us take an arbitrary ω ∈ Ω_0 and fix it. Suppose that f : [0, ∞) → R and g : [0, ∞) → R are functions such that there are 0 = t_0 < t_1 < · · · < t_n for which f(t) = f(t_{k−1}), g(t) = g(t_{k−1}), t ∈ [t_{k−1}, t_k), k = 1, . . . , n, and f(t) = g(t) = 0, t ∈ [t_n, ∞). Then we see that

∫_0^∞ |f(t)g(t)| d‖⟨M, N⟩‖(t, ω)
= Σ_{k=1}^n |f(t_{k−1})g(t_{k−1})| ( ‖⟨M, N⟩‖(t_k, ω) − ‖⟨M, N⟩‖(t_{k−1}, ω) )
≤ Σ_{k=1}^n |f(t_{k−1})| (⟨M⟩_{t_k}(ω) − ⟨M⟩_{t_{k−1}}(ω))^{1/2} |g(t_{k−1})| (⟨N⟩_{t_k}(ω) − ⟨N⟩_{t_{k−1}}(ω))^{1/2}
≤ ( Σ_{k=1}^n f(t_{k−1})² (⟨M⟩_{t_k}(ω) − ⟨M⟩_{t_{k−1}}(ω)) )^{1/2} ( Σ_{k=1}^n g(t_{k−1})² (⟨N⟩_{t_k}(ω) − ⟨N⟩_{t_{k−1}}(ω)) )^{1/2}
= ( ∫_0^∞ f(t)² d⟨M⟩_t )^{1/2} ( ∫_0^∞ g(t)² d⟨N⟩_t )^{1/2}.

Then by the usual argument in measure theory we see that for any measurable functions f : [0, ∞) → R and g : [0, ∞) → R,

∫_0^∞ |f(t)g(t)| d‖⟨M, N⟩‖(t, ω) ≤ ( ∫_0^∞ f(t)² d⟨M⟩_t )^{1/2} ( ∫_0^∞ g(t)² d⟨N⟩_t )^{1/2}

for any ω ∈ Ω_0. This implies our assertion.
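The pathwise estimates in the proof are discrete Cauchy–Schwarz bounds on increments, which can be checked directly. The following sketch (an illustration with simulated increments, not from the text) verifies that |Q^{(n)}(M, N)_1| ≤ Q^{(n)}(M, M)_1^{1/2} Q^{(n)}(N, N)_1^{1/2}:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 1024

# dyadic increments of two independent Brownian-type paths on [0, 1]
dM = rng.normal(0.0, np.sqrt(1.0 / N), size=N)
dN = rng.normal(0.0, np.sqrt(1.0 / N), size=N)

q_mn = np.sum(dM * dN)       # Q^{(n)}(M, N)_1
q_mm = np.sum(dM ** 2)       # Q^{(n)}(M, M)_1
q_nn = np.sum(dN ** 2)       # Q^{(n)}(N, N)_1

# Cauchy-Schwarz: always true for the discrete sums
assert abs(q_mn) <= np.sqrt(q_mm * q_nn)
```

For independent paths the cross sum q_mn is in addition close to 0, reflecting ⟨M, N⟩ = 0 in that case.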


3.7 Brownian Motion

We say that a probability measure μ on (R, B(R)) is a Gaussian distribution (of mean m and variance v, m ∈ R, v ≥ 0), if the characteristic function of μ is given by

∫_R exp(√−1 ξx) μ(dx) = exp( √−1 mξ − (v/2)ξ² ),  ξ ∈ R.

When v > 0, the Gaussian distribution of mean m and variance v is a normal distribution and its mean is m and its variance is v. If μ is the Gaussian distribution of mean m and variance 0, then μ({m}) = 1.

Definition 3.7.1 Let X be a family of random variables. We say that X is a Gaussian system, if the following is valid. For any n ≥ 1, a_1, . . . , a_n ∈ R and X_1, . . . , X_n ∈ X, the probability law of a_1 X_1 + · · · + a_n X_n is a Gaussian distribution.

Proposition 3.7.1 Suppose that X_n, n ∈ Z_{≥0}, are independent random variables and that each probability law of X_n is a Gaussian distribution. Let H̃ be the set of all linear combinations of X_n, n ∈ Z_{≥0}, and let H be the set of all random variables which are given as L²-limits of sequences of random variables in H̃. Then H is a Gaussian system.

Proof Suppose that the mean and the variance of X_n, n = 1, 2, . . . , are m_n and v_n respectively. Then for any a_n ∈ R, n = 1, 2, . . . ,

E[ exp( √−1 ξ Σ_{k=1}^n a_k X_k ) ] = Π_{k=1}^n E[exp(√−1 ξ a_k X_k)] = exp( √−1 Σ_{k=1}^n a_k m_k ξ − (1/2) Σ_{k=1}^n a_k² v_k ξ² ).

Therefore the probability law of Σ_{k=1}^n a_k X_k is a Gaussian distribution.
Suppose that Y_n, n = 1, 2, . . . , are random variables whose probability laws are Gaussian distributions of mean m_n ∈ R and variance v_n ≥ 0, n = 1, 2, . . .. Also, suppose that Y is a random variable such that E[(Y − Y_n)²] → 0, n → ∞. Then Y has a second moment. Let m be the mean of Y and v be the variance of Y. Then we see that m_n → m, v_n → v, n → ∞. Therefore we see that

E[exp(√−1 ξY)] = lim_{n→∞} E[exp(√−1 ξY_n)] = lim_{n→∞} exp( √−1 m_n ξ − (v_n/2)ξ² ) = exp( √−1 mξ − (v/2)ξ² ).

So the probability law of Y is a Gaussian distribution. These imply our assertions.
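The characteristic-function identity above can be checked empirically. The following sketch (an illustration, not from the text) compares the empirical value of E[exp(√−1 ξ(a₁X₁ + a₂X₂))] for independent standard normals with the closed form exp(−(a₁² + a₂²)ξ²/2):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
X1, X2 = rng.standard_normal(n), rng.standard_normal(n)

a1, a2, xi = 0.7, -1.3, 0.9
Y = a1 * X1 + a2 * X2                       # a linear combination: still Gaussian

empirical = np.mean(np.exp(1j * xi * Y))    # characteristic function, estimated
exact = np.exp(-(a1 ** 2 + a2 ** 2) * xi ** 2 / 2)  # mean 0, variance a1^2 + a2^2

assert abs(empirical - exact) < 0.02
```

The agreement is a sample-size effect: the estimator has standard error of order n^{-1/2}, far below the tolerance used here.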


Definition 3.7.2 We say that a stochastic process B : [0, ∞) × Ω → R^d is a d-dimensional Brownian motion (also called a Wiener process), if each component B^i, i = 1, . . . , d (B(t, ω) = (B¹(t, ω), . . . , B^d(t, ω)), t ∈ [0, ∞), ω ∈ Ω), is a continuous process and the following two conditions are satisfied.
(1) B^i(t_k) − B^i(t_{k−1}), i = 1, . . . , d, k = 1, . . . , n, are independent for any n ≥ 2, 0 = t_0 < t_1 < · · · < t_n.
(2) B(0) = 0, and the probability law of B^i(t) − B^i(s) is the Gaussian distribution of mean 0 and variance t − s for any t > s ≥ 0, i = 1, . . . , d.

Proposition 3.7.2 Let Ω = [0, 1), F be the family of Borel sets included in [0, 1), and let P be the Lebesgue measure on [0, 1). Then for any d ≥ 1, there exists a d-dimensional Brownian motion on the probability space (Ω, F, P).

Proof We only give a sketch of the proof. Let η_n : Ω → {0, 1}, n ∈ Z_{≥1}, be given by

η_n(ω) = [2^n ω] − 2[2^{n−1} ω],  ω ∈ [0, 1).

Here [x], x ∈ R, denotes the maximal integer less than or equal to x. Note that

{ω ∈ Ω ; η_1(ω) = i_1, η_2(ω) = i_2, . . . , η_n(ω) = i_n} = [ Σ_{k=1}^n i_k/2^k, Σ_{k=1}^n i_k/2^k + 1/2^n ),  n ≥ 1, i_1, . . . , i_n = 0, 1.

So we see that

P(η_n = 0) = P(η_n = 1) = 1/2,  n = 1, 2, . . . ,

and η_n, n ∈ Z_{≥1}, are independent random variables. Also, we see that

ω = Σ_{k=1}^∞ 2^{-k} η_k(ω),  ω ∈ [0, 1).

Let Z_n : Ω → [0, 1], n ∈ Z_{≥1}, be given by

Z_n(ω) = Σ_{k=1}^∞ 2^{-k} η_{2^n(2k+1)}(ω),  ω ∈ Ω.

Then we see that Z_n, n ∈ Z_{≥1}, are independent random variables whose probability law is the Lebesgue measure on (0, 1). Let Φ : R → (0, 1) be the normal distribution function, that is,
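The digit manipulations above are concrete. The following sketch (an illustration, not from the text) computes η_n(ω) = [2^n ω] − 2[2^{n−1} ω] and checks the binary expansion ω = Σ 2^{-k} η_k(ω):

```python
import math

def eta(omega, n):
    # n-th binary digit of omega in [0, 1)
    return math.floor(2 ** n * omega) - 2 * math.floor(2 ** (n - 1) * omega)

omega = 0.6181640625  # a dyadic rational, so finitely many nonzero digits
digits = [eta(omega, n) for n in range(1, 31)]
assert all(d in (0, 1) for d in digits)

# the partial sums of the binary expansion recover omega
recovered = sum(d * 2.0 ** -k for k, d in enumerate(digits, start=1))
assert abs(recovered - omega) < 2.0 ** -30
```

Selecting the digits along disjoint index sets, as Z_n does, produces independent uniform random variables from a single one.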


Φ(x) = (1/√(2π)) ∫_{−∞}^x exp(−y²/2) dy,  x ∈ R.

Then Φ : R → (0, 1) is a strictly increasing continuous function and is bijective. Therefore there is a continuous inverse function Φ^{-1} : (0, 1) → R. Let W_{n,m,i}, n ∈ Z_{≥0}, m ∈ Z_{≥1}, i = 1, . . . , d, be given by W_{n,m,i} = Φ^{-1}(Z_{2^n 3^m 5^i}). Then W_{n,m,i}, n ∈ Z_{≥0}, m ∈ Z_{≥1}, i = 1, . . . , d, is a family of independent random variables and the probability law of each random variable is the standard normal distribution.
Let us define ψ_{n,m} : [0, ∞) → R, n ∈ Z_{≥0}, m ∈ Z_{≥1}, by the following:

ψ_{0,m}(t) = 1, t ∈ [m − 1, m), and ψ_{0,m}(t) = 0, t ∉ [m − 1, m), m ≥ 1;

ψ_{n,m}(t) = 2^{(n−1)/2},  t ∈ [ (2m − 2)/2^n, (2m − 1)/2^n ),
ψ_{n,m}(t) = −2^{(n−1)/2},  t ∈ [ (2m − 1)/2^n, 2m/2^n ),
ψ_{n,m}(t) = 0,  t ∉ [2^{-(n−1)}(m − 1), 2^{-(n−1)}m),  n, m ≥ 1.

Then {ψ_{n,m} ; n ≥ 0, m ≥ 1} is a complete orthonormal basis of L²([0, ∞), dt). Let ϕ_{n,m} : [0, ∞) → R, n ∈ Z_{≥0}, m ∈ Z_{≥1}, be given by

ϕ_{n,m}(t) = ∫_0^t ψ_{n,m}(s) ds,  t ∈ [0, ∞), n ≥ 0, m ≥ 1.

Then it is easy to see that

|ϕ_{n,m}(t)| ≤ 2^{-n/2},  t ∈ [0, ∞), n ≥ 0, m ≥ 1,
ϕ_{0,m}(t) = 0,  t ∈ [0, m − 1], m ≥ 1,
ϕ_{n,m}(t) = 0,  t ∉ [2^{-(n−1)}(m − 1), 2^{-(n−1)}m), n, m ≥ 1.

Let a_m ∈ R, m ≥ 1. Then we see that Σ_{m=1}^∞ a_m ϕ_{n,m}(t) is a finite summation. Moreover, we see that

| Σ_{m=1}^∞ a_m ϕ_{0,m}(t) | ≤ Σ_{m=1}^N |a_m|,  t ∈ [0, N],

and

| Σ_{m=1}^∞ a_m ϕ_{n,m}(t) | ≤ 2^{-n/2} max{|a_m| ; m = 1, . . . , 2^{n−1}N},  t ∈ [0, N], n ≥ 1,

for any N ∈ Z_{≥1}. Now let us define a stochastic process X_r^i : [0, ∞) × Ω → R, r ≥ 1, i = 1, . . . , d, by

X_r^i(t) = Σ_{n=0}^r Σ_{m=1}^∞ W_{n,m,i} ϕ_{n,m}(t),  t ∈ [0, ∞).

It is obvious that X_r^i is a continuous process. Also, we see that

(d/dt) X_r^i(t) = Σ_{n=0}^r Σ_{m=1}^∞ W_{n,m,i} ψ_{n,m}(t)  a.e. t.

Let H be the set of all bounded measurable functions h : [0, ∞) → R such that the support of h is a compact set. For each h ∈ H, let Y_r^i(h), r ≥ 1, i = 1, . . . , d, be given by

Y_r^i(h) = ∫_0^∞ h(t) (d/dt) X_r^i(t) dt.

Then the distribution law of Y_r^i(h) is a Gaussian distribution, and Y_r^1(h_1), Y_r^2(h_2), . . . , Y_r^d(h_d) are independent for any h_1, . . . , h_d ∈ H. Also, we see that E[Y_r^i(h)] = 0, and

E[Y_r^i(h)²] = E[ ( Σ_{n=0}^r Σ_m W_{n,m,i} ∫_0^∞ h(t)ψ_{n,m}(t) dt )² ] = Σ_{n=0}^r Σ_m ( ∫_0^∞ h(t)ψ_{n,m}(t) dt )².

Since {ψ_{n,m} ; n ≥ 0, m ≥ 1} is a complete orthonormal basis of L²([0, ∞), dt), we see that

E[Y_r^i(h)²] → ∫_0^∞ h(t)² dt,  r → ∞.

For any N, r ∈ Z_{≥1} we see that

$$\sup_{t\in[0,N]}|X_{r+1}^i(t) - X_r^i(t)| = \sup_{t\in[0,N]}\Big|\sum_{m=1}^{\infty} W_{r+1,m,i}\,\varphi_{r+1,m}(t)\Big| \le 2^{-r/2}\max\{|W_{r+1,m,i}|;\ m = 1,\ldots,2^r N\}.$$
This implies that
$$E[\sup_{t\in[0,N]}|X_{r+1}^i(t) - X_r^i(t)|^4] \le 2^{-2r}E[\max\{|W_{r+1,m,i}|^4;\ m = 1,\ldots,2^r N\}] \le 2^{-2r}\sum_{m=1}^{2^r N}E[|W_{r+1,m,i}|^4] = 3N2^{-r}.$$
So we see that
$$E\Big[\sum_{r=1}^{\infty}\sup_{t\in[0,N]}|X_{r+1}^i(t) - X_r^i(t)|\Big] \le \sum_{r=1}^{\infty}E[\sup_{t\in[0,N]}|X_{r+1}^i(t) - X_r^i(t)|^4]^{1/4} < \infty.$$

Let $\Omega_0 \in \mathcal{F}$ be given by
$$\Omega_0 = \bigcap_{i=1}^{d}\bigcap_{N=1}^{\infty}\Big\{\sum_{r=1}^{\infty}\sup_{t\in[0,N]}|X_{r+1}^i(t) - X_r^i(t)| < \infty\Big\}.$$
Then $P(\Omega_0) = 1$, and for each $\omega \in \Omega_0$ the functions $X_r^i(\cdot,\omega)$, $r \ge 1$, converge uniformly on $[0,N]$ for every $N \ge 1$. Let $B^i(t,\omega) = \lim_{r\to\infty} X_r^i(t,\omega)$ for $\omega \in \Omega_0$, and $B^i(t,\omega) = 0$ for $\omega \in \Omega\setminus\Omega_0$. Then $B = (B^1,\ldots,B^d)$ is a continuous process, and the computations above show that $B$ is a $d$-dimensional Brownian motion.

Proposition 3.7.3 Let $\tilde{\mathcal{G}}_t = \sigma\{B(s);\ s \in [0,t]\} \vee \mathcal{N}_0$, $t \ge 0$. Then $(\Omega, \mathcal{F}, P, \{\tilde{\mathcal{G}}_t\}_{t\in[0,\infty)})$ is a standard filtered probability space, and for any $t \ge 0$, $\sigma\{B(s) - B(t);\ s \in [t,\infty)\}$ and $\tilde{\mathcal{G}}_t$ are independent.

Proof It is obvious that $\{\tilde{\mathcal{G}}_t\}_{t\in[0,\infty)}$ is a filtration and $\mathcal{N}_0 \subset \tilde{\mathcal{G}}_0$. Let $\tilde{\mathcal{G}}_{t+} = \bigcap_{s>t}\tilde{\mathcal{G}}_s$. Also, let $\mathcal{H}_t = \sigma\{B(s) - B(t);\ s \in [t,\infty)\}$. By the definition of Brownian motion, we see that $\mathcal{G}_t$ and $\mathcal{H}_t$ are independent for any $t \ge 0$.

Now let us take $t \ge 0$ and fix it. It is easy to see that $\tilde{\mathcal{G}}_s$ and $\mathcal{H}_s$ are independent for any $s > t$. So we see that $\tilde{\mathcal{G}}_{t+}$ and $\mathcal{H}_s$ are independent for any $s > t$. Since $\bigcup_{s>t}\mathcal{H}_s$ is a $\pi$-system, and $B(s) - B(t) = \lim_{r\downarrow t}(B(s) - B(r))$, we see that $\sigma\{\bigcup_{s>t}\mathcal{H}_s\} = \mathcal{H}_t$, and so by Proposition 1.1.4 we see that $\tilde{\mathcal{G}}_{t+}$ and $\mathcal{H}_t$ are independent.

Let $\mathcal{D} = \{D \in \mathcal{F};\ E[1_D|\tilde{\mathcal{G}}_{t+}] = X$ a.s. for some $\tilde{\mathcal{G}}_t$-measurable random variable $X\}$. It is easy to check that $\mathcal{D}$ is a Dynkin class. Let $A \in \mathcal{N}_0$, $B \in \mathcal{G}_t$, and $C \in \mathcal{H}_t$. Then we see that $1_A = 0$ a.s. or $1_A = 1$ a.s. So we see that
$$E[1_{A\cap B\cap C}|\tilde{\mathcal{G}}_{t+}] = E[1_A 1_B 1_C|\tilde{\mathcal{G}}_{t+}] = 1_A E[1_B 1_C|\tilde{\mathcal{G}}_{t+}] = 1_A 1_B E[1_C].$$
Let $\mathcal{K} = \{A\cap B\cap C;\ A \in \mathcal{N}_0,\ B \in \mathcal{G}_t,\ C \in \mathcal{H}_t\}$. Then we see that $\mathcal{K}$ is a $\pi$-system and $\mathcal{K} \subset \mathcal{D}$. So by Theorem 8.1.1 in Appendix 8.1, we see that $\sigma\{\mathcal{K}\} \subset \mathcal{D}$. In particular, we see that $\tilde{\mathcal{G}}_{t+} \subset \sigma\{\mathcal{N}_0 \cup \mathcal{G}_t \cup \mathcal{H}_t\} \subset \sigma\{\mathcal{K}\}$.

Suppose that $C \in \tilde{\mathcal{G}}_{t+}$. Then there is a $\tilde{\mathcal{G}}_t$-measurable random variable $X$ such that $E[1_C|\tilde{\mathcal{G}}_{t+}] = X$ a.s. Since $E[1_C|\tilde{\mathcal{G}}_{t+}] = 1_C$ a.s., we see that $1_C = X$ a.s. Let $A = \{1_C \neq X\}$. Then $P(A) = 0$. Note that $\{X = 1\}\setminus A \subset C \subset \{X = 1\}\cup A$. Then we see that $C \in \tilde{\mathcal{G}}_t$. So we see that $\tilde{\mathcal{G}}_{t+} = \tilde{\mathcal{G}}_t$. This implies our assertion.

Let $(\Omega, \mathcal{F}, P, \{\mathcal{F}_t\}_{t\in[0,\infty)})$ be a standard filtered probability space.

Definition 3.7.3 We say that a stochastic process $B : [0,\infty)\times\Omega \to \mathbf{R}^d$ is a $d$-dimensional $\{\mathcal{F}_t\}_{t\in[0,\infty)}$-Brownian motion if each component $B^i$, $i = 1,\ldots,d$, of $B(t,\omega) = (B^1(t,\omega),\ldots,B^d(t,\omega))$, $t \in [0,\infty)$, $\omega \in \Omega$, is an adapted continuous process and satisfies the following conditions.
(1) $B(t) - B(s)$ and $\mathcal{F}_s$ are independent for any $t > s \ge 0$.

(2) $B(0) = 0$, and the probability law of $B(t) - B(s)$ is a normal distribution with mean $0$ and covariance matrix $(t-s)I_d$.

The following is an immediate consequence of Proposition 3.7.3.

Proposition 3.7.4 (1) If $B : [0,\infty)\times\Omega \to \mathbf{R}^d$ is a $d$-dimensional $\{\mathcal{F}_t\}_{t\in[0,\infty)}$-Brownian motion, then $B$ is a $d$-dimensional Brownian motion.
(2) Let $B : [0,\infty)\times\Omega \to \mathbf{R}^d$ be a $d$-dimensional Brownian motion, and let $\tilde{\mathcal{G}}_t = \sigma\{B(s);\ s \in [0,t]\}\vee\mathcal{N}_0$, $t \ge 0$. Then $(\Omega, \mathcal{F}, P, \{\tilde{\mathcal{G}}_t\}_{t\in[0,\infty)})$ is a standard filtered probability space, and $B$ is a $d$-dimensional $\{\tilde{\mathcal{G}}_t\}_{t\in[0,\infty)}$-Brownian motion.

Proposition 3.7.5 Let $B$ be a $d$-dimensional $\{\mathcal{F}_t\}_{t\in[0,\infty)}$-Brownian motion.
(1) $B^i$, $i = 1,\ldots,d$, is a continuous process and is a martingale.
(2) $\{B^i(t)B^j(t) - \delta_{ij}t;\ t \in [0,\infty)\}$, $i,j = 1,\ldots,d$, is a continuous process and is a martingale.
(3) $\langle B^i, B^j\rangle(t) = \delta_{ij}t$, $t \in [0,\infty)$, $i,j = 1,\ldots,d$.
Here $\delta_{ij}$, $i,j = 1,\ldots,d$, is Kronecker's delta, i.e., $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ if $i \neq j$.

Proof It is obvious that they are continuous processes. For any $t > s \ge 0$, we see that
$$E[B^i(t)|\mathcal{F}_s] = E[B^i(s)|\mathcal{F}_s] + E[B^i(t) - B^i(s)|\mathcal{F}_s] = B^i(s) + E[B^i(t) - B^i(s)] = B^i(s).$$
So we have Assertion (1). For $t > s \ge 0$ we see that
$$E[(B^i(t)B^j(t) - \delta_{ij}t) - (B^i(s)B^j(s) - \delta_{ij}s)|\mathcal{F}_s]$$
$$= E[(B^i(t) - B^i(s))(B^j(t) - B^j(s)) - \delta_{ij}(t-s)|\mathcal{F}_s] + B^j(s)E[B^i(t) - B^i(s)|\mathcal{F}_s] + B^i(s)E[B^j(t) - B^j(s)|\mathcal{F}_s]$$
$$= E[(B^i(t) - B^i(s))(B^j(t) - B^j(s))] - \delta_{ij}(t-s) = 0.$$
So we have Assertion (2). By Assertion (2), we see that $B^i(t)B^j(t) - \delta_{ij}t$ is a continuous local martingale. Since $\delta_{ij}t \in \mathcal{A}^c$, we have Assertion (3).
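The construction of this section can be sketched numerically. The following is a minimal illustration (not from the text) of the partial sums $X_r^i(t) = \sum_{n=0}^{r}\sum_m W_{n,m}\varphi_{n,m}(t)$, with the Schauder functions $\varphi_{n,m}$ written in closed form; the function names `phi` and `brownian_partial_sum` are of course hypothetical helpers, and the level-wise truncation of the $m$-sum relies on the disjoint supports noted above.

```python
import random

def phi(n, m, t):
    """Schauder function phi_{n,m}(t) = int_0^t psi_{n,m}(s) ds in closed form,
    following the Haar-type functions psi_{n,m} of the text."""
    if n == 0:
        # psi_{0,m} = 1 on [m-1, m), so phi_{0,m} ramps linearly from 0 to 1
        return min(max(t - (m - 1), 0.0), 1.0)
    a = (2 * m - 2) / 2.0 ** n   # left endpoint of the support
    c = (2 * m - 1) / 2.0 ** n   # midpoint, where the tent peaks
    b = (2 * m) / 2.0 ** n       # right endpoint
    h = 2.0 ** ((n - 1) / 2.0)   # height of psi_{n,m}
    if t <= a or t >= b:
        return 0.0
    return h * (t - a) if t <= c else h * (b - t)

def brownian_partial_sum(r, t, coeffs, rng):
    """X_r(t) = sum_{n<=r} sum_m W_{n,m} phi_{n,m}(t); at a fixed t only finitely
    many m contribute, since the level-n supports are disjoint.  The dict coeffs
    caches the i.i.d. standard normal draws W_{n,m}."""
    def w(n, m):
        if (n, m) not in coeffs:
            coeffs[(n, m)] = rng.gauss(0.0, 1.0)
        return coeffs[(n, m)]
    x = 0.0
    for n in range(r + 1):
        if n == 0:
            ms = range(1, int(t) + 2)
        else:
            m0 = int(t * 2 ** (n - 1)) + 1   # index of the tent containing t
            ms = range(max(1, m0 - 1), m0 + 2)
        for m in ms:
            x += w(n, m) * phi(n, m, t)
    return x
```

As $r \to \infty$ the partial sums converge uniformly on compacts, which is exactly the estimate $E[\sum_r \sup_{[0,N]}|X_{r+1}^i - X_r^i|] < \infty$ above; note also that at $t = 1$ every tent function vanishes, so $X_r(1) = W_{0,1}$ for every $r$.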

3.8 Optimal Stopping Time

Let $(\Omega, \mathcal{F}, P, \{\mathcal{F}_t\}_{t\in[0,\infty)})$ be a standard filtered probability space. Let $X : [0,\infty)\times\Omega \to \mathbf{R}$ be an adapted continuous process such that $E[\sup_{t\in[0,\infty)} X(t)^2] < \infty$.

For any stopping times $\sigma_i : \Omega \to [0,\infty]$, $i = 0,1$, such that $\sigma_0 \le \sigma_1$, we denote by $\mathcal{S}_{\sigma_0}^{\sigma_1}$ the set of all stopping times $\tau : \Omega \to [0,\infty]$ for which $\sigma_0 \le \tau \le \sigma_1$. Let us take a $T \in (0,\infty)$ and fix it.

Proposition 3.8.1 Suppose that $\sigma \in \mathcal{S}_0^T$. Then for any $\tau_1, \tau_2 \in \mathcal{S}_\sigma^T$ there is a $\tau_3 \in \mathcal{S}_\sigma^T$ such that $E[X_{\tau_1}|\mathcal{F}_\sigma]\vee E[X_{\tau_2}|\mathcal{F}_\sigma] \le E[X_{\tau_3}|\mathcal{F}_\sigma]$ a.s.

Proof Let $\tau_1, \tau_2 \in \mathcal{S}_\sigma^T$. Let $A = \{E[X_{\tau_1}|\mathcal{F}_{\tau_1\wedge\tau_2}] \ge E[X_{\tau_2}|\mathcal{F}_{\tau_1\wedge\tau_2}]\}$. Then we see that $A \in \mathcal{F}_{\tau_1\wedge\tau_2}$. Let $\tau_3 : \Omega \to [0,T]$ be given by
$$\tau_3(\omega) = \begin{cases}\tau_1(\omega), & \text{if } \omega \in A,\\ \tau_2(\omega), & \text{otherwise.}\end{cases}$$
Then we see that $\sigma \le \tau_1\wedge\tau_2 \le \tau_3 \le T$ and that
$$\{\tau_3 < t\} = (\{\tau_1\wedge\tau_2 < t\}\cap A\cap\{\tau_1 < t\}) \cup (\{\tau_1\wedge\tau_2 < t\}\cap(\Omega\setminus A)\cap\{\tau_2 < t\}) \in \mathcal{F}_t.$$
So we see that $\tau_3 \in \mathcal{S}_\sigma^T$. Also, we see that
$$E[X_{\tau_3}|\mathcal{F}_{\tau_1\wedge\tau_2}] = E[1_A X_{\tau_1} + (1-1_A)X_{\tau_2}|\mathcal{F}_{\tau_1\wedge\tau_2}] = 1_A E[X_{\tau_1}|\mathcal{F}_{\tau_1\wedge\tau_2}] + (1-1_A)E[X_{\tau_2}|\mathcal{F}_{\tau_1\wedge\tau_2}] = E[X_{\tau_1}|\mathcal{F}_{\tau_1\wedge\tau_2}]\vee E[X_{\tau_2}|\mathcal{F}_{\tau_1\wedge\tau_2}].$$
So we have our assertion.

For any $\sigma \in \mathcal{S}_0^T$ let $\tilde{U}_\sigma$ be defined by $\tilde{U}_\sigma = \operatorname{ess.sup}\{E[X_\tau|\mathcal{F}_\sigma];\ \tau \in \mathcal{S}_\sigma^T\}$. Since $|\tilde{U}_\sigma| \le E[\sup_{[0,T]}|X(t)|\,|\mathcal{F}_\sigma]$, we see by Proposition 2.6.5 that $\{\tilde{U}_\sigma;\ \sigma \in \mathcal{S}_0^T\}$ is uniformly integrable.

Proposition 3.8.2 Suppose that $\sigma_n \in \mathcal{S}_0^T$, $n = 1,2,\ldots,\infty$, and that $\sigma_n \downarrow \sigma_\infty$ a.s. as $n \to \infty$. Then $\tilde{U}_{\sigma_\infty} \ge E[\tilde{U}_{\sigma_n}|\mathcal{F}_{\sigma_\infty}]$ a.s. and $E[\tilde{U}_{\sigma_n}] \to E[\tilde{U}_{\sigma_\infty}]$, $n \to \infty$.

Proof By Propositions 1.5.2 and 3.8.1, we see that there are $\tau_{n,m} \in \mathcal{S}_{\sigma_n}^T$, $m = 1,2,\ldots$, $n = 1,2,\ldots,\infty$, such that $E[X(\tau_{n,m})|\mathcal{F}_{\sigma_n}] \uparrow \tilde{U}_{\sigma_n}$ a.s., $m \to \infty$. Then we see that
$$\tilde{U}_{\sigma_\infty} \ge \lim_{m\to\infty}E[X(\tau_{n,m})|\mathcal{F}_{\sigma_\infty}] = E[\tilde{U}_{\sigma_n}|\mathcal{F}_{\sigma_\infty}] \quad \text{a.s.,}$$

and so we have the first part of our assertion. Also, we see that
$$E[\tilde{U}_{\sigma_\infty}] = E\big[\lim_{m\to\infty}E[X(\tau_{\infty,m})|\mathcal{F}_{\sigma_\infty}]\big] = \lim_{m\to\infty}E[X(\tau_{\infty,m})]$$
and that
$$\varliminf_{n\to\infty}E[\tilde{U}_{\sigma_n}] \ge \varliminf_{n\to\infty}E[X(\tau_{\infty,m}\vee\sigma_n)] = E[X(\tau_{\infty,m})].$$
So we see that
$$\varliminf_{n\to\infty}E[\tilde{U}_{\sigma_n}] \ge E[\tilde{U}_{\sigma_\infty}].$$
Since
$$\varlimsup_{n\to\infty}E[\tilde{U}_{\sigma_n}] \le E[\tilde{U}_{\sigma_\infty}],$$

we have the second part of our assertion.

Proposition 3.8.3 There is an adapted D-process $U : [0,\infty)\times\Omega \to \mathbf{R}$ satisfying the following.
(1) $\{U(t);\ t \in [0,\infty)\}$ is a supermartingale, $U(t) = \tilde{U}_t$, $t \in [0,T]$, a.s., and $U(t) = X(T)$, $t > T$.
(2) With probability 1,
$$|U(t)| \le E[\sup_{s\in[0,T]}|X(s)|\,|\mathcal{F}_t], \qquad t \in [0,T].$$
In particular,
$$E[\sup_{t\in[0,T]}U(t)^2] \le 4E[\sup_{t\in[0,T]}X(t)^2].$$

Proof Let $\tilde{U}_t = X(T)$ for $t > T$. Then we see by Proposition 3.8.2 that $\{\tilde{U}_t;\ t \in [0,\infty)\}$ is a supermartingale and that $E[\tilde{U}_t]$ is right continuous in $t$. So by Proposition 3.2.2 we have Assertion (1).

By Propositions 1.5.2 and 3.8.1, we see that there are $\tau_n \in \mathcal{S}_t^T$ such that $E[X(\tau_n)|\mathcal{F}_t] \to \tilde{U}_t$, $n \to \infty$. Then we see that
$$|\tilde{U}_t| = \lim_{n\to\infty}|E[X(\tau_n)|\mathcal{F}_t]| \le E[\sup_{t\in[0,T]}|X(t)|\,|\mathcal{F}_t].$$
So by Doob's inequality we see that
$$E[\sup_{t\in[0,T]}U(t)^2] \le E\big[\big(\sup_{t\in[0,T]\cap\mathbf{Q}}|\tilde{U}_t|\big)^2\big] \le 4E\big[\big(\sup_{t\in[0,T]}|X(t)|\big)^2\big].$$
These imply Assertion (2).

Proposition 3.8.4 Let $\sigma \in \mathcal{S}_0^T$. If there are $n \ge 1$ and $0 = t_0 < t_1 < \cdots < t_n$ such that $\sigma \in \{t_0, t_1, \ldots, t_n\}$ a.s., then $U(\sigma) = \tilde{U}_\sigma$ a.s.

Proof Note that
$$U(\sigma) = \sum_{k=0}^{n} 1_{\{\sigma = t_k\}}\tilde{U}_{t_k}.$$

By Propositions 1.5.2 and 3.8.1 we see that there are $\tilde\tau_{k,m} \in \mathcal{S}_{t_k}^T$, $k = 0,1,\ldots,n$, $m = 1,2,\ldots$, such that
$$E[X(\tilde\tau_{k,m})|\mathcal{F}_{t_k}] \uparrow \tilde{U}_{t_k}, \quad m \to \infty, \qquad k = 0,1,\ldots,n.$$
Let $\tau_m$ be given by
$$\tau_m = \sum_{k=0}^{n} 1_{\{\sigma = t_k\}}\tilde\tau_{k,m}.$$
Then it is easy to see that $\tau_m \in \mathcal{S}_\sigma^T$. Also, by Proposition 3.3.9 (4) we see that $1_{\{\sigma = t_k\}}E[X(\tilde\tau_{k,m})|\mathcal{F}_{t_k}] = 1_{\{\sigma = t_k\}}E[X(\tilde\tau_{k,m})|\mathcal{F}_\sigma]$. So we see that
$$U(\sigma) = \sum_{k=0}^{n}\lim_{m\to\infty}1_{\{\sigma = t_k\}}E[X(\tilde\tau_{k,m})|\mathcal{F}_{t_k}] = \sum_{k=0}^{n}1_{\{\sigma = t_k\}}\lim_{m\to\infty}E[X(\tau_m)|\mathcal{F}_\sigma] = \lim_{m\to\infty}E[X(\tau_m)|\mathcal{F}_\sigma] \le \tilde{U}_\sigma.$$
On the other hand, by Propositions 1.5.2 and 3.8.1 we see that there are $\eta_m \in \mathcal{S}_\sigma^T$, $m = 1,2,\ldots$, such that
$$E[X(\eta_m)|\mathcal{F}_\sigma] \uparrow \tilde{U}_\sigma, \quad m \to \infty.$$
Then we see that
$$\tilde{U}_\sigma = \sum_{k=0}^{n}1_{\{\sigma = t_k\}}\lim_{m\to\infty}E[X(\eta_m)|\mathcal{F}_\sigma] = \sum_{k=0}^{n}1_{\{\sigma = t_k\}}\lim_{m\to\infty}E[X(t_k\vee\eta_m)|\mathcal{F}_{t_k}] \le U(\sigma).$$

These imply our assertion.

Proposition 3.8.5 Let $\sigma \in \mathcal{S}_0^T$. Then $U(\sigma) = \tilde{U}_\sigma = \operatorname{ess.sup}\{E[X_\tau|\mathcal{F}_\sigma];\ \tau \in \mathcal{S}_\sigma^T\}$ a.s.

Proof Let $\sigma \in \mathcal{S}_0^T$. Let $\hat\rho_n : [0,\infty) \to [0,T]$, $n = 1,2,\ldots$, be given by
$$\hat\rho_n(t) = \frac{kT}{2^n}\wedge T, \qquad t \in \Big[\frac{(k-1)T}{2^n}, \frac{kT}{2^n}\Big),\ k = 1,2,\ldots,$$
and let $\sigma_n = \hat\rho_n\circ\sigma$, $n = 1,2,\ldots$. Then we see that $\sigma_n \in \mathcal{S}_\sigma^T$, $n = 1,2,\ldots$, and $\sigma_n \downarrow \sigma$, $n \to \infty$. By Proposition 3.8.4 we see that $U(\sigma_n) = \tilde{U}_{\sigma_n}$, and so by Proposition 3.8.2 we see that
$$E[\tilde{U}_\sigma] = \lim_{n\to\infty}E[U(\sigma_n)] = E[U(\sigma)]$$
and
$$\tilde{U}_\sigma \ge \lim_{n\to\infty}E[U(\sigma_n)|\mathcal{F}_\sigma] = U(\sigma).$$

These imply our assertion.

Proposition 3.8.6 $U$ is regular.

Proof Suppose that $\sigma_n \in \mathcal{S}_0^T$, $n = 1,2,\ldots,\infty$, and that $\sigma_n \to \sigma_\infty$, $n \to \infty$. Since $U$ is a D-process, we see that $E[U(\sigma_\infty\vee\sigma_n)] \to E[U(\sigma_\infty)]$, $n \to \infty$. By Propositions 1.5.2 and 3.8.1, we see that there are $\tau_{n,m} \in \mathcal{S}_{\sigma_\infty\wedge\sigma_n}^T$, $n,m = 1,2,\ldots$, such that
$$E[X(\tau_{n,m})|\mathcal{F}_{\sigma_\infty\wedge\sigma_n}] \uparrow U(\sigma_\infty\wedge\sigma_n), \quad m \to \infty,\ n = 1,2,\ldots.$$
So we see that
$$E[U(\sigma_\infty)] \le E[U(\sigma_\infty\wedge\sigma_n)] = \lim_{m\to\infty}E[X(\tau_{n,m})].$$
Let us take an arbitrary $\varepsilon > 0$. Then we see that
$$E[X(\tau_{n,m})] = E[X(\tau_{n,m}\vee\sigma_\infty)] + E[X(\tau_{n,m}) - X(\tau_{n,m}\vee\sigma_\infty)]$$
$$\le E[U(\sigma_\infty)] + E[|X(\tau_{n,m}) - X(\tau_{n,m}\vee\sigma_\infty)|;\ |\sigma_\infty - \sigma_n| > \varepsilon] + E[|X(\tau_{n,m}) - X(\tau_{n,m}\vee\sigma_\infty)|;\ |\sigma_\infty - \sigma_n| \le \varepsilon]$$
$$\le E[U(\sigma_\infty)] + 2P(|\sigma_\infty - \sigma_n| > \varepsilon)^{1/2}E[\sup_{t\in[0,T]}X(t)^2]^{1/2} + E\big[\sup_{s,t\in[0,T],|s-t|\le\varepsilon}|X(t) - X(s)|\big].$$
So we see that
$$0 \le E[U(\sigma_\infty\wedge\sigma_n)] - E[U(\sigma_\infty)] \le 2P(|\sigma_\infty - \sigma_n| > \varepsilon)^{1/2}E[\sup_{t\in[0,T]}X(t)^2]^{1/2} + E\big[\sup_{s,t\in[0,T],|s-t|\le\varepsilon}|X(t) - X(s)|\big],$$

$\varepsilon > 0$, $n = 1,2,\ldots$, and so we see that
$$E[U(\sigma_\infty\wedge\sigma_n)] \to E[U(\sigma_\infty)], \qquad n \to \infty.$$
Therefore we have
$$|E[U(\sigma_n)] - E[U(\sigma_\infty)]| = |E[U(\sigma_\infty\wedge\sigma_n)] + E[U(\sigma_\infty\vee\sigma_n)] - 2E[U(\sigma_\infty)]| \to 0, \qquad n \to \infty.$$
In the case that $\sigma_n$, $n = 1,2,\ldots,\infty$, are general finite stopping times and $\sigma_n \to \sigma_\infty$, $n \to \infty$, we see that
$$|E[U(\sigma_n)] - E[U(\sigma_\infty)]| = |E[U(\sigma_n\wedge T)] - E[U(\sigma_\infty\wedge T)]| \to 0, \qquad n \to \infty.$$

So we see that $U$ is regular.

From the above-mentioned results, we have the following.

Theorem 3.8.1 Let $X : [0,\infty)\times\Omega \to \mathbf{R}$ be an adapted continuous process such that $E[\sup_{t\in[0,\infty)}X(t)^2] < \infty$. Then we have the following.
(1) There are a D-process $U : [0,\infty)\times\Omega \to \mathbf{R}$, $M \in \tilde{\mathcal{M}}^2$, and $A \in \tilde{\mathcal{A}}^{+,2}$ satisfying the following.
(i) $E[\sup_{t\in[0,\infty)}U(t)^2] < \infty$.
(ii) $U$ is a supermartingale, and $A$ is a continuous process such that
$$U(t) = U(0) + M(t) - A(t), \qquad t \in [0,\infty). \tag{3.8}$$
(iii) $U(\sigma) = \operatorname{ess.sup}\{E[X_\tau|\mathcal{F}_\sigma];\ \tau \in \mathcal{S}_\sigma^T\}$ a.s. for any $\sigma \in \mathcal{S}_0^T$.
(2) Let $c = \sup\{E[X(\tau)];\ \tau \in \mathcal{S}_0^T\}$. Then there is a $\sigma \in \mathcal{S}_0^T$ such that $E[X(\sigma)] = c$.

Proof Assertion (1) follows from Propositions 3.4.3, 3.8.5 and 3.8.6. Note that $E[U(0)] = \sup\{E[X(\tau)];\ \tau \in \mathcal{S}_0^T\} = c$. Let $\sigma_0 \in \mathcal{S}_0^T$ be given by
$$\sigma_0 = \inf\{t > 0;\ A(t) > 0\}\wedge T.$$
Since $A$ is a continuous process, we see that $A(\sigma_0) = 0$. Therefore we see that $E[U(\sigma_0)] = E[U(0)]$. For any $\tau \in \mathcal{S}_0^T$ we see that
$$E[X(\tau)] = E[E[X(\tau)|\mathcal{F}_\tau]] \le E[U(\tau)] = E[U(0)] - E[A(\tau)]. \tag{3.9}$$
By Propositions 1.5.2 and 3.8.1, we see that there are $\tau_n \in \mathcal{S}_{\sigma_0}^T$ such that $E[X(\tau_n)|\mathcal{F}_{\sigma_0}] \to U(\sigma_0)$, $n \to \infty$. By Eq. (3.9) we see that for any $\varepsilon > 0$
$$E[A(\sigma_0 + \varepsilon);\ \tau_n \ge \sigma_0 + \varepsilon] \le E[A(\tau_n)] \le E[U(\sigma_0)] - E[X(\tau_n)] \to 0, \qquad n \to \infty.$$
Note that if $\sigma_0 < T$, then $A(\sigma_0 + \varepsilon) > 0$. Therefore we see that $\tau_n \to \sigma_0$ in probability as $n \to \infty$. So by Theorem 2.6.1 we see that
$$E[X(\sigma_0)] = \lim_{n\to\infty}E[X(\tau_n)] = E[U(0)].$$
This proves our theorem.

Suppose that $\sigma \in \mathcal{S}_0^T$ satisfies Assertion (2) in Theorem 3.8.1. Then by Eq. (3.9), we see that $\sigma \le \sigma_0$ and $E[X(\sigma)] = E[U(\sigma)]$, which implies that $X(\sigma) = U(\sigma)$.
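The supermartingale $U$ of Theorem 3.8.1 has a transparent discrete analogue: on a finite time grid, $U$ is the Snell envelope, computed by backward induction as $U_n = \max(X_n, E[U_{n+1}|\mathcal{F}_n])$, and the analogue of $\sigma_0$ is the first time $U$ touches the payoff. The sketch below (not from the text) works on a symmetric binomial lattice; `snell_envelope` and its `payoff(n, s)` interface are hypothetical illustrations.

```python
def snell_envelope(payoff, T):
    """Backward induction for the discrete Snell envelope on a symmetric
    binomial lattice (steps +-1 with probability 1/2 each):
        U_T = X_T,   U_n = max(X_n, E[U_{n+1} | F_n]).
    payoff(n, s) is the reward for stopping at time n with walk position s.
    Returns (U, value) with U[(n, s)] the envelope and
    value = U[(0, 0)] = sup over stopping times of E[payoff]."""
    U = {}
    for s in range(-T, T + 1, 2):
        U[(T, s)] = payoff(T, s)
    for n in range(T - 1, -1, -1):
        for s in range(-n, n + 1, 2):
            cont = 0.5 * (U[(n + 1, s + 1)] + U[(n + 1, s - 1)])
            U[(n, s)] = max(payoff(n, s), cont)
    return U, U[(0, 0)]
```

For example, with one step and payoff $X_n = \max(S_n, 0)$ the value is $\max(0, \tfrac12) = \tfrac12$; with a martingale payoff $X_n = S_n$ the value is $0$, matching optional sampling. The optimal rule stops at the first $n$ with $U_n = X_n$, mirroring $\sigma_0 = \inf\{t > 0;\ A(t) > 0\}\wedge T$.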

Chapter 4

Stochastic Integral

4.1 Spaces of Stochastic Processes

Let $(\Omega, \mathcal{F}, P, \{\mathcal{F}_t\}_{t\in[0,\infty)})$ be a standard filtered probability space throughout this section. Let $\bar{\mathcal{M}}_2^c$ be the set of martingales $M$ such that $M$ is a continuous process, $M(0) = 0$, and $E[\sup_{t\in[0,\infty)}M(t)^2] < \infty$. It is obvious that $\bar{\mathcal{M}}_2^c \subset \bar{\mathcal{M}}^2$ and that $\bar{\mathcal{M}}_2^c$ is a vector space. We define $\|M\|_{\mathcal{M}}$ for $M \in \bar{\mathcal{M}}_2^c$ by
$$\|M\|_{\mathcal{M}} = \sup_{t\in[0,\infty)}E[M(t)^2]^{1/2}.$$
It is easy to see that if $\|M\|_{\mathcal{M}} = 0$ for $M \in \bar{\mathcal{M}}_2^c$, then $P(M(t) = 0$ for all $t \in [0,\infty)) = 1$, and so $M = 0$. It is easy to see that $\|\cdot\|_{\mathcal{M}}$ is a norm on $\bar{\mathcal{M}}_2^c$.

Proposition 4.1.1 $\bar{\mathcal{M}}_2^c$ is complete relative to the norm $\|\cdot\|_{\mathcal{M}}$. In other words, if $M_n \in \bar{\mathcal{M}}_2^c$, $n = 1,2,\ldots$, satisfy $\|M_n - M_m\|_{\mathcal{M}} \to 0$, $n,m \to \infty$, then there exists an $M_\infty \in \bar{\mathcal{M}}_2^c$ such that $\|M_\infty - M_n\|_{\mathcal{M}} \to 0$, $n \to \infty$.

Proof By Theorem 3.3.1, we see that
$$E[\sup_{t\in[0,\infty)}|M_n(t) - M_m(t)|^2] \le 4\sup_{t\in[0,\infty)}E[|M_n(t) - M_m(t)|^2] \to 0$$
as $n,m \to \infty$. Therefore for each $k \ge 1$ there is an $\tilde{n}_k \ge 1$ such that
$$E[\sup_{t\in[0,\infty)}|M_n(t) - M_m(t)|^2] \le 2^{-2k}, \qquad n,m \ge \tilde{n}_k.$$
Let $n_k = \sum_{\ell=1}^{k}\tilde{n}_\ell$, $k = 1,2,\ldots$. Then we see that
$$E[\sup_{t\in[0,\infty)}|M_{n_{k+1}}(t) - M_{n_k}(t)|^2]^{1/2} \le 2^{-k}, \qquad k = 1,2,\ldots.$$


Then we have
$$E\Big[\sum_{k=1}^{\infty}\sup_{t\in[0,\infty)}|M_{n_{k+1}}(t) - M_{n_k}(t)|\Big] \le 1.$$
Let $\Omega_0 \in \mathcal{F}$ be given by
$$\Omega_0 = \Big\{\sum_{k=1}^{\infty}\sup_{t\in[0,\infty)}|M_{n_{k+1}}(t) - M_{n_k}(t)| < \infty\Big\}.$$
Then $P(\Omega_0) = 1$. Also, we see that for any $\omega \in \Omega_0$
$$\sup_{t\in[0,\infty)}|M_{n_\ell}(t,\omega) - M_{n_k}(t,\omega)| \to 0, \qquad k,\ell \to \infty.$$
Let $M_\infty : [0,\infty)\times\Omega \to \mathbf{R}$ be given by
$$M_\infty(t,\omega) = \begin{cases}\lim_{k\to\infty}M_{n_k}(t,\omega), & \omega \in \Omega_0,\\ 0, & \omega \in \Omega\setminus\Omega_0.\end{cases}$$
Then $M_\infty$ is a continuous process, and by Fatou's lemma we see that
$$E[\sup_{t\in[0,\infty)}|M_\infty(t) - M_m(t)|^2] \le \varliminf_{\ell\to\infty}E[\sup_{t\in[0,\infty)}|M_{n_\ell}(t) - M_m(t)|^2] \le 2^{-2k}, \qquad m \ge n_k,\ k = 1,2,\ldots.$$
So we see that $M_\infty \in \bar{\mathcal{M}}_2^c$. Also, we see that
$$\|M_\infty - M_m\|_{\mathcal{M}} \le E[\sup_{t\in[0,\infty)}|M_\infty(t) - M_m(t)|^2]^{1/2} \to 0, \qquad m \to \infty.$$

These imply our assertion.

Let $\mathcal{L}_0$ be the set of $\xi : [0,\infty)\times\Omega \to \mathbf{R}$ satisfying the following condition.
(L0) There are $n \ge 1$, $0 = t_0 < \cdots < t_n$, and $\mathcal{F}_{t_k}$-measurable bounded random variables $\eta_k$, $k = 0,1,\ldots,n-1$, such that
$$\xi(t,\omega) = \sum_{k=1}^{n}\eta_{k-1}(\omega)1_{(t_{k-1},t_k]}(t), \qquad t \in [0,\infty),\ \omega \in \Omega.$$
For $A \in \mathcal{A}^{+,c}$ let $\mathcal{L}^2(A)$ denote the set of progressively measurable processes $\xi : [0,\infty)\times\Omega \to \mathbf{R}$ such that
$$\|\xi\|_{2,A} = E\Big[\int_0^{\infty}\xi(t)^2\,dA(t)\Big]^{1/2} < \infty.$$

$\|\cdot\|_{2,A}$ is a semi-norm on $\mathcal{L}^2(A)$. Then we have the following.

Proposition 4.1.2 Let $A \in \mathcal{A}^{+,c}$ and $\xi \in \mathcal{L}^2(A)$. Assume that $E[\sup_{t\in[0,\infty)}A(t)] < \infty$. Then there are $\xi_n \in \mathcal{L}_0$, $n = 1,2,\ldots$, such that $\|\xi - \xi_n\|_{2,A} \to 0$, $n \to \infty$.

Before proving Proposition 4.1.2, we recall the following results. We leave the proof of the following to the reader as an exercise.

Proposition 4.1.3 (1) Let $T > 0$, and let $f : [0,\infty) \to \mathbf{R}$ be a bounded measurable function such that $f(t) = 0$, $t > T$. Also, let $f_n : [0,\infty) \to \mathbf{R}$, $n = 1,2,\ldots$, be given by $f_n(t) = n\int_{(t-1/n)\vee 0}^{t} f(s)\,ds$, $t \ge 0$. Then
$$\int_0^{\infty}|f(t) - f_n(t)|^2\,dt \to 0, \qquad n \to \infty.$$
(2) Let $\phi : [0,\infty) \to [0,\infty)$ be a strictly increasing continuous function such that $\phi(0) = 0$ and $\phi(t) \to \infty$, $t \to \infty$. Let $T > 0$ and let $f : [0,\infty) \to \mathbf{R}$ be a bounded measurable function such that $f(t) = 0$, $t > T$. Then
$$\int_0^{\infty}f(t)\,d\phi(t) = \int_0^{\infty}f(\phi^{-1}(t))\,dt.$$

Proof (of Proposition 4.1.2) Let $\tilde\xi_n(t) = ((\xi(t)\vee(-n))\wedge n)1_{[0,n]}(t)$, $n = 1,2,\ldots$. Then it is easy to see that $\tilde\xi_n \in \mathcal{L}^2(A)$ and $\|\xi - \tilde\xi_n\|_{2,A} \to 0$, $n \to \infty$. So we may assume that $\xi$ is bounded and that there is a $T > 0$ such that $\xi(t) = 0$, $t > T$.

Let $\tilde{A}(t) = A(t) + t$, $t \in [0,\infty)$. Then $\tilde{A} \in \mathcal{A}^{+,c}$. Also, we see that for all $\omega \in \Omega$, $\tilde{A}(\cdot,\omega) : [0,\infty) \to [0,\infty)$ is a strictly increasing continuous function, $\tilde{A}(0,\omega) = 0$, and $\tilde{A}(t,\omega) \to \infty$, $t \to \infty$. Let $\tilde{A}^{-1}(\cdot,\omega)$ be the inverse function of $\tilde{A}(\cdot,\omega)$ for each $\omega \in \Omega$. Also let $\hat\xi_n : [0,\infty)\times\Omega \to \mathbf{R}$, $n \ge 1$, be given by
$$\hat\xi_n(t) = n\int_{\tilde{A}^{-1}((\tilde{A}(t)-1/n)\vee 0)}^{t}\xi(s)\,d\tilde{A}(s) = n\int_{(\tilde{A}(t)-1/n)\vee 0}^{\tilde{A}(t)}\xi(\tilde{A}^{-1}(s))\,ds.$$
Then $\hat\xi_n$ is a continuous process. Since
$$\tilde{A}^{-1}((\tilde{A}(t)-1/n)\vee 0) = \inf\{s \in [0,t];\ \tilde{A}(s) = (\tilde{A}(t)-1/n)\vee 0\},$$
we see that $\hat\xi_n$ is adapted and $\hat\xi_n(t) = 0$, $t > T + 1/n$. Therefore we see that
$$\|\xi - \hat\xi_n\|_{2,A}^2 = E\Big[\int_0^{\infty}(\xi(t)-\hat\xi_n(t))^2\,dA(t)\Big] \le E\Big[\int_0^{\infty}(\xi(t)-\hat\xi_n(t))^2\,d\tilde{A}(t)\Big]$$
$$= E\Big[\int_0^{\infty}(\xi(\tilde{A}^{-1}(t))-\hat\xi_n(\tilde{A}^{-1}(t)))^2\,dt\Big] = E\Big[\int_0^{\infty}\Big(\xi(\tilde{A}^{-1}(t)) - n\int_{(t-1/n)\vee 0}^{t}\xi(\tilde{A}^{-1}(s))\,ds\Big)^2 dt\Big] \to 0, \qquad n \to \infty.$$

Let $\xi_{n,m} : [0,\infty)\times\Omega \to \mathbf{R}$, $n,m \ge 1$, be given by
$$\xi_{n,m}(t) = \sum_{k=1}^{\infty}\hat\xi_n((k-1)2^{-m})1_{((k-1)2^{-m},\,k2^{-m}]}(t).$$
Then we see that
$$\|\hat\xi_n - \xi_{n,m}\|_{2,A} \to 0, \qquad m \to \infty.$$
These imply our assertion.

We define $I_0(\xi, M) : [0,\infty)\times\Omega \to \mathbf{R}$ for $\xi \in \mathcal{L}_0$ and $M \in \mathcal{M}_b^c$ by the following. If $\xi$ is given by
$$\xi(t,\omega) = \sum_{k=1}^{n}\eta_{k-1}(\omega)1_{(t_{k-1},t_k]}(t)$$
for $n \ge 1$, $0 = t_0 < \cdots < t_n$, and $\mathcal{F}_{t_k}$-measurable bounded random variables $\eta_k$, $k = 0,1,\ldots,n-1$, we let
$$I_0(\xi, M)(t,\omega) = \sum_{k=1}^{n}\eta_{k-1}(M(t_k\wedge t) - M(t_{k-1}\wedge t)), \qquad t \in [0,\infty),\ \omega \in \Omega.$$
Since it is easy to see that
$$I_0(\xi, M)(t,\omega) = \lim_{n\to\infty}\sum_{k=1}^{\infty}\xi((k-1)2^{-n})(M(k2^{-n}\wedge t) - M((k-1)2^{-n}\wedge t)),$$

we see that the definition of $I_0(\xi, M)$ does not depend on the representation of $\xi$.

Proposition 4.1.4 Let $\xi \in \mathcal{L}_0$ and $M \in \mathcal{M}_b^c$.
(1) $I_0(\xi, M) \in \bar{\mathcal{M}}_2^c$. Therefore $I_0$ is a bilinear form from $\mathcal{L}_0\times\mathcal{M}_b^c$ into $\bar{\mathcal{M}}_2^c$.
(2) $\|I_0(\xi, M)\|_{\mathcal{M}} = \|\xi\|_{2,\langle M\rangle}$.
(3) If $\tau : \Omega \to [0,\infty]$ is a stopping time, $I_0(\xi, M)^\tau = I_0(\xi, M^\tau)$.
(4) For any $N \in \mathcal{M}_b^c$,
$$\langle I_0(\xi, M), N\rangle(t) = \int_0^{t}\xi(s)\,d\langle M, N\rangle(s)$$
and $I_0(\xi, M)N - \langle I_0(\xi, M), N\rangle$ is a martingale.

Proof Let $\xi \in \mathcal{L}_0$ and $M \in \mathcal{M}_b^c$. Then there are $n \ge 1$, $0 = t_0 < \cdots < t_n$, and $\mathcal{F}_{t_k}$-measurable bounded random variables $\eta_k$, $k = 0,1,\ldots,n-1$, such that
$$\xi(t,\omega) = \sum_{k=1}^{n}\eta_{k-1}(\omega)1_{(t_{k-1},t_k]}(t).$$

Suppose that $t_{m-1} \le s \le t \le t_m$, $m = 1,\ldots,n$. Then we see that
$$E[I_0(\xi, M)(t)|\mathcal{F}_s] = \sum_{k=1}^{m-1}\eta_{k-1}(M(t_k) - M(t_{k-1})) + \eta_{m-1}E[M(t) - M(t_{m-1})|\mathcal{F}_s] = I_0(\xi, M)(s).$$
Also, we see that $I_0(\xi, M)(t) = I_0(\xi, M)(t_n)$, $t \ge t_n$. Therefore we see that $I_0(\xi, M)(t)$ is a martingale. It is obvious that $I_0(\xi, M)(0) = 0$ and $I_0(\xi, M)$ is a continuous process. Also, we see that
$$E[I_0(\xi, M)(t_n)^2] = \sum_{k=1}^{n}E[(I_0(\xi, M)(t_k) - I_0(\xi, M)(t_{k-1}))^2] = \sum_{k=1}^{n}E[\eta_{k-1}^2 E[(M(t_k) - M(t_{k-1}))^2|\mathcal{F}_{t_{k-1}}]]$$
$$= \sum_{k=1}^{n}E[\eta_{k-1}^2(\langle M\rangle(t_k) - \langle M\rangle(t_{k-1}))] = E\Big[\int_0^{\infty}\xi(t)^2\,d\langle M\rangle(t)\Big] = \|\xi\|_{2,\langle M\rangle}^2.$$
So we obtain Assertions (1) and (2). Since we see that
$$I_0(\xi, M)^\tau(t,\omega) = \sum_{k=1}^{n}\eta_{k-1}(M(t_k\wedge t\wedge\tau(\omega)) - M(t_{k-1}\wedge t\wedge\tau(\omega))) = \sum_{k=1}^{n}\eta_{k-1}(M^\tau(t_k\wedge t) - M^\tau(t_{k-1}\wedge t)) = I_0(\xi, M^\tau)(t,\omega),$$
we obtain Assertion (3).

Let $N \in \mathcal{M}_b^c$. If $t_{m-1} \le s \le t \le t_m$, $m = 1,\ldots,n$, we see that
$$E[I_0(\xi, M)(t)N(t)|\mathcal{F}_s] - I_0(\xi, M)(s)N(s) = E[(I_0(\xi, M)(t) - I_0(\xi, M)(s))(N(t) - N(s))|\mathcal{F}_s]$$
$$= \eta_{m-1}E[(M(t) - M(s))(N(t) - N(s))|\mathcal{F}_s] = \eta_{m-1}E[\langle M, N\rangle(t) - \langle M, N\rangle(s)|\mathcal{F}_s] = E\Big[\int_s^{t}\xi(r)\,d\langle M, N\rangle(r)\,\Big|\,\mathcal{F}_s\Big].$$
Therefore we see that
$$E\Big[I_0(\xi, M)(t)N(t) - \int_0^{t}\xi(r)\,d\langle M, N\rangle(r)\,\Big|\,\mathcal{F}_s\Big] = I_0(\xi, M)(s)N(s) - \int_0^{s}\xi(r)\,d\langle M, N\rangle(r),$$

and so we see that $\{I_0(\xi, M)(t)N(t) - \int_0^{t}\xi(r)\,d\langle M, N\rangle(r);\ t \ge 0\}$ is a martingale. This implies Assertion (4).
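For simple integrands the definition of $I_0(\xi, M)$ and the isometry $E[I_0(\xi,M)(t_n)^2] = \|\xi\|_{2,\langle M\rangle}^2$ can be checked by direct enumeration. The sketch below (not from the text) uses a two-step $\pm 1$ martingale, for which $\langle M\rangle(t_k) = k$; the names `elementary_integral` and `isometry_check` are hypothetical helpers.

```python
import itertools

def elementary_integral(eta, times, M, t):
    """I_0(xi, M)(t) = sum_k eta_{k-1} (M(t_k ^ t) - M(t_{k-1} ^ t)) for the
    simple process xi = sum_k eta_{k-1} 1_{(t_{k-1}, t_k]}; M is a path t -> M(t)."""
    total = 0.0
    for k in range(1, len(times)):
        total += eta[k - 1] * (M(min(times[k], t)) - M(min(times[k - 1], t)))
    return total

def isometry_check(eta0, eta1_of_first_step):
    """Exhaustive check on the two-step +-1 martingale (four equally likely
    paths): eta_0 is constant, eta_1 may depend on the first step, i.e. it is
    F_{t_1}-measurable.  Returns (E[I_0], E[I_0^2])."""
    vals = []
    for s1, s2 in itertools.product([1.0, -1.0], repeat=2):
        path = {0.0: 0.0, 1.0: s1, 2.0: s1 + s2}
        eta = [eta0, eta1_of_first_step(s1)]
        vals.append(elementary_integral(eta, [0.0, 1.0, 2.0], path.__getitem__, 2.0))
    return sum(vals) / 4.0, sum(v * v for v in vals) / 4.0
```

With $\eta_0 = 2$ and $\eta_1 = 1$ after an up-step, $3$ after a down-step, the four integral values are $3, 1, 1, -5$: the mean is $0$ (the martingale property of Assertion (1)) and the second moment is $9 = E[\eta_0^2]\cdot 1 + E[\eta_1^2]\cdot 1$, the isometry of Assertion (2).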

Let $M \in \mathcal{M}_b^c$. Then by Propositions 3.5.1 and 3.5.2 we see that $\sup_{t\in[0,\infty)}E[\langle M\rangle(t)] < \infty$. By Proposition 4.1.2 we see that for any $\xi \in \mathcal{L}^2(\langle M\rangle)$ there are $\xi_n \in \mathcal{L}_0$, $n = 1,2,\ldots$, such that $\|\xi - \xi_n\|_{2,\langle M\rangle} \to 0$. Then we see by Proposition 4.1.4 (2) that
$$\|I_0(\xi_n, M) - I_0(\xi_m, M)\|_{\mathcal{M}} = \|\xi_n - \xi_m\|_{2,\langle M\rangle} \to 0, \qquad n,m \to \infty.$$
Therefore by Proposition 4.1.1 we see that there is an $N \in \bar{\mathcal{M}}_2^c$ such that
$$\|N - I_0(\xi_n, M)\|_{\mathcal{M}} \to 0, \qquad n \to \infty.$$
If $\xi_n' \in \mathcal{L}_0$, $n = 1,2,\ldots$, satisfy $\|\xi - \xi_n'\|_{2,\langle M\rangle} \to 0$, then we see that
$$\|I_0(\xi_n, M) - I_0(\xi_n', M)\|_{\mathcal{M}} = \|\xi_n - \xi_n'\|_{2,\langle M\rangle} \to 0, \qquad n \to \infty,$$
and so the limit $N$ does not depend on the choice of the approximating sequence $\{\xi_n\}_{n=1}^{\infty}$. We denote this $N$ by $I_M(\xi)$. Then $I_M$ is a mapping from $\mathcal{L}^2(\langle M\rangle)$ into $\bar{\mathcal{M}}_2^c$. By this argument and Proposition 4.1.4 (2) it is easy to see that
$$\|I_M(\xi)\|_{\mathcal{M}} = \|\xi\|_{2,\langle M\rangle}, \qquad \xi \in \mathcal{L}^2(\langle M\rangle).$$

Moreover, we have the following.

Proposition 4.1.5 (1) Let $M \in \mathcal{M}_b^c$, $\xi \in \mathcal{L}^2(\langle M\rangle)$, and let $\tau : \Omega \to [0,\infty]$ be a stopping time. Then $I_M(\xi)^\tau = I_{M^\tau}(\xi) = I_{M^\tau}(\xi 1_{(0,\tau)}(\cdot))$.
(2) Let $M, N \in \mathcal{M}_b^c$ and $\xi \in \mathcal{L}^2(\langle M\rangle)$. Then we see that
$$\langle I_M(\xi), N\rangle(t) = \int_0^{t}\xi(s)\,d\langle M, N\rangle(s), \qquad t \in [0,\infty).$$

Proof Let $\xi \in \mathcal{L}^2(\langle M\rangle)$ and $\xi_n \in \mathcal{L}_0$, $n = 1,2,\ldots$, be such that $\|\xi - \xi_n\|_{2,\langle M\rangle} \to 0$, $n \to \infty$. Then we have $\|\xi - \xi_n\|_{2,\langle M^\tau\rangle} \to 0$, $n \to \infty$. Note that for any $N \in \bar{\mathcal{M}}_2^c$
$$\|N^\tau\|_{\mathcal{M}}^2 \le E[\sup_{t\in[0,\infty)}|N_t^\tau|^2] \le E[\sup_{t\in[0,\infty)}|N_t|^2] \le 4\|N\|_{\mathcal{M}}^2.$$
Therefore we see that $\|I_M(\xi)^\tau - I_0(\xi_n, M)^\tau\|_{\mathcal{M}} \to 0$ and $\|I_{M^\tau}(\xi) - I_0(\xi_n, M^\tau)\|_{\mathcal{M}} \to 0$ as $n \to \infty$. So we see that $I_M(\xi)^\tau = I_{M^\tau}(\xi)$. Since $\|\xi - \xi 1_{(0,\tau)}(\cdot)\|_{2,\langle M^\tau\rangle} = 0$, we obtain Assertion (1).

Let $N \in \mathcal{M}_b^c$. Then we see that
$$E[|(I_M(\xi)(t) - I_0(\xi_n, M)(t))N(t)|] \le E[(I_M(\xi)(t) - I_0(\xi_n, M)(t))^2]^{1/2}E[N(t)^2]^{1/2}$$
$$\le \|I_M(\xi) - I_0(\xi_n, M)\|_{\mathcal{M}}\|N\|_{\mathcal{M}} = \|\xi - \xi_n\|_{2,\langle M\rangle}\|N\|_{\mathcal{M}} \to 0, \qquad n \to \infty.$$
On the other hand, by Proposition 3.6.10 we see that
$$E\Big[\Big|\int_0^{t}\xi(s)\,d\langle M, N\rangle(s) - \int_0^{t}\xi_n(s)\,d\langle M, N\rangle(s)\Big|\Big] \le E\Big[\int_0^{t}|\xi(s) - \xi_n(s)|\,d\|\langle M, N\rangle\|(s)\Big]$$
$$\le E\Big[\Big(\int_0^{t}|\xi(s) - \xi_n(s)|^2\,d\langle M\rangle(s)\Big)^{1/2}\Big(\int_0^{t}1\,d\langle N\rangle(s)\Big)^{1/2}\Big] \le \|\xi - \xi_n\|_{2,\langle M\rangle}E[\langle N\rangle(t)]^{1/2} \to 0, \qquad n \to \infty.$$
Since $\{I_0(\xi_n, M)(t)N(t) - \int_0^{t}\xi_n(s)\,d\langle M, N\rangle(s);\ t \in [0,\infty)\}$ is a martingale, we see that $\{I_M(\xi)(t)N(t) - \int_0^{t}\xi(s)\,d\langle M, N\rangle(s);\ t \in [0,\infty)\}$ is a martingale. So we obtain Assertion (2).

For each $A \in \mathcal{A}^{+,c}$, let $\mathcal{L}^{2,loc}(A)$ be the set of progressively measurable processes $\xi : [0,\infty)\times\Omega \to \mathbf{R}$ such that
$$P\Big(\int_0^{T}\xi(t)^2\,dA(t) < \infty\ \text{for any}\ T > 0\Big) = 1.$$

Then we have the following.

Theorem 4.1.1 Let $M \in \mathcal{M}_{loc}^c$, that is, $M$ is a continuous local martingale with $M_0 = 0$. Then for any $\xi \in \mathcal{L}^{2,loc}(\langle M\rangle)$ there is an $X \in \mathcal{M}_{loc}^c$ satisfying the following:
$$\langle X, N\rangle(t) = \int_0^{t}\xi(s)\,d\langle M, N\rangle(s), \qquad t \ge 0, \tag{4.1}$$
for any $N \in \mathcal{M}_{loc}^c$. Moreover, if $X' \in \mathcal{M}_{loc}^c$ satisfies the condition (4.1), then $P(X'(t) = X(t)$ for all $t \in [0,\infty)) = 1$.

Remark The right-hand side of Equation (4.1) is finite with probability one, because we see by Proposition 3.6.10 that
$$\Big|\int_0^{t}\xi(s)\,d\langle M, N\rangle(s)\Big| \le \Big(\int_0^{t}\xi(s)^2\,d\langle M\rangle(s)\Big)^{1/2}\langle N\rangle(t)^{1/2}.$$

Proof First we show the uniqueness. Suppose that $X, X' \in \mathcal{M}_{loc}^c$ satisfy the condition (4.1). Then for any $N \in \mathcal{M}_{loc}^c$, $\langle X - X', N\rangle = 0$. Since $X - X' \in \mathcal{M}_{loc}^c$, we see that $\langle X - X'\rangle = \langle X - X', X - X'\rangle = 0$. So by Corollary 3.6.1 we see that $X = X'$.

Next we show the existence. Let $\tau_n$, $n \ge 1$, be stopping times given by
$$\tau_n = \inf\Big\{t \ge 0;\ \int_0^{t}\xi(s)^2\,d\langle M\rangle(s) + |M_t| + \langle M\rangle(t) > n\Big\}\wedge n.$$
Then $\{\tau_n\}_{n=1}^{\infty}$ is an increasing sequence of stopping times. It is easy to see that $M^{\tau_n} \in \mathcal{M}_b^c$ and $\xi 1_{(0,\tau_n)} \in \mathcal{L}^2(\langle M^{\tau_n}\rangle)$ for any $n \ge 1$. Let $X_n = I_{M^{\tau_n}}(\xi 1_{(0,\tau_n)}) \in \bar{\mathcal{M}}_2^c$. Then we see by Proposition 4.1.5 that $X_{n+1}^{\tau_n} = X_n$. Therefore by Proposition 3.6.2 there is an adapted continuous process $X$ such that $X^{\tau_n} = X_n$. Then we see that $X$ is a continuous local martingale.

Let $N \in \mathcal{M}_{loc}^c$. Then there is an increasing sequence of stopping times $\{\sigma_n\}_{n=1}^{\infty}$ such that $N^{\sigma_n} \in \mathcal{M}_b^c$. Then we see that
$$\langle X, N\rangle^{\tau_n\wedge\sigma_n}(t) = \langle X^{\tau_n}, N^{\sigma_n}\rangle(t) = \langle X_n, N^{\sigma_n}\rangle(t) = \int_0^{t}\xi(s)1_{(0,\tau_n)}(s)\,d\langle M^{\tau_n}, N^{\sigma_n}\rangle(s)$$
$$= \int_0^{t}\xi(s)1_{(0,\tau_n)}(s)\,d\langle M, N\rangle^{\tau_n\wedge\sigma_n}(s) = \int_0^{t\wedge\tau_n\wedge\sigma_n}\xi(s)\,d\langle M, N\rangle(s).$$
So letting $n \to \infty$, we see that
$$\langle X, N\rangle(t) = \int_0^{t}\xi(s)\,d\langle M, N\rangle(s).$$

We denote $X$ in Theorem 4.1.1 by $\xi\bullet M$. We often denote $X(t)$ by $\int_0^{t}\xi(s)\,dM(s)$, also. We have the following by Theorem 4.1.1, its proof and Proposition 3.6.8.

Proposition 4.1.6 (1) Let $M \in \mathcal{M}_{loc}^c$, $\xi \in \mathcal{L}^{2,loc}(\langle M\rangle)$, and $\tau$ be a stopping time. If $M^\tau \in \mathcal{M}_b^c$ and $\xi \in \mathcal{L}^2(\langle M^\tau\rangle)$, then $(\xi\bullet M)^\tau = I_{M^\tau}(\xi)$.
(2) Suppose that $M_1, M_2 \in \mathcal{M}_{loc}^c$ and $\xi_i \in \mathcal{L}^{2,loc}(\langle M_i\rangle)$, $i = 1,2$. Then
$$\langle\xi_1\bullet M_1, \xi_2\bullet M_2\rangle(t) = \int_0^{t}\xi_1(s)\xi_2(s)\,d\langle M_1, M_2\rangle(s), \qquad t \ge 0.$$
(3) Let $M \in \mathcal{M}_{loc}^c$ and $\xi \in \mathcal{L}^{2,loc}(\langle M\rangle)$. If

$$E\Big[\int_0^{t}\xi(s)^2\,d\langle M\rangle(s)\Big] < \infty$$
for all $t \ge 0$, then $\xi\bullet M$ is a martingale and
$$E[(\xi\bullet M)(t)^2] = E\Big[\int_0^{t}\xi(s)^2\,d\langle M\rangle(s)\Big] < \infty.$$

Proposition 4.1.7 Let $M \in \mathcal{M}_{loc}^c$, $\xi, \eta \in \mathcal{L}^{2,loc}(\langle M\rangle)$, $T > 0$ and $A \in \mathcal{F}$. If
$$\xi(t,\omega) = \eta(t,\omega), \qquad (t,\omega) \in [0,T]\times A,$$
then
$$(\xi\bullet M)(T) = (\eta\bullet M)(T) \qquad \text{a.s. on } A,$$
i.e., $P(A\cap\{(\xi\bullet M)(T) \neq (\eta\bullet M)(T)\}) = 0$.

Proof Let $\tau$ be a stopping time given by
$$\tau = \inf\Big\{t \ge 0;\ \int_0^{t}(\xi(s) - \eta(s))^2\,d\langle M\rangle(s) > 0\Big\}.$$

Then we see that $A \subset \{\tau \ge T\}$. Also, we see that
$$(\xi\bullet M)^\tau(T) = \int_0^{T}1_{[0,\tau)}(t)\xi(t)\,dM(t) = \int_0^{T}1_{[0,\tau)}(t)\eta(t)\,dM(t) = (\eta\bullet M)^\tau(T).$$
So we have our assertion.

4.2 Continuous Semi-martingales

Definition 4.2.1 We say that a stochastic process $X : [0,\infty)\times\Omega \to \mathbf{R}$ is a continuous semi-martingale if there are $M \in \mathcal{M}_{loc}^c$ and $A \in \mathcal{A}^c$ such that $X(t,\omega) = X(0,\omega) + M(t,\omega) + A(t,\omega)$, $t \in [0,\infty)$, $\omega \in \Omega$.

For any continuous semi-martingale $X$ the decomposition $X = X_0 + M + A$, $M \in \mathcal{M}_{loc}^c$, $A \in \mathcal{A}^c$, is unique because of Proposition 3.6.4. We say that a progressively measurable process $\xi$ is $X$-integrable if
$$P\Big(\int_0^{T}\xi(t)^2\,d\langle M\rangle(t) + \int_0^{T}|\xi(t)|\,d\|A\|(t) < \infty\ \text{for any}\ T > 0\Big) = 1.$$

Here $\|A\|$ is the total variation process given in Proposition 3.6.9. If $\xi$ is $X$-integrable, we can define $\xi\bullet M$ and $\int_0^{t}\xi(s)\,dA(s)$, $t \in [0,\infty)$. Moreover, we see that $\xi\bullet M \in \mathcal{M}_{loc}^c$ and $\int_0^{\cdot}\xi(s)\,dA(s) \in \mathcal{A}^c$. So we define a continuous semi-martingale $\xi\bullet X$ by
$$(\xi\bullet X)(t) = (\xi\bullet M)(t) + \int_0^{t}\xi(s)\,dA(s), \qquad t \in [0,\infty).$$
We often denote $(\xi\bullet X)(t)$ by $\int_0^{t}\xi(s)\,dX(s)$. Note that any adapted continuous process is $X$-integrable for any continuous semi-martingale $X$.

Let $X$ and $Y$ be continuous semi-martingales such that $X = X_0 + M + A$, $Y = Y_0 + N + \tilde{A}$, $M, N \in \mathcal{M}_{loc}^c$, and $A, \tilde{A} \in \mathcal{A}^c$. Then we define $\langle X, Y\rangle \in \mathcal{A}^c$ by $\langle X, Y\rangle = \langle M, N\rangle$.

Proposition 4.2.1 Let $X$ be a continuous semi-martingale such that $X = X_0 + M + A$, $M \in \mathcal{M}_{loc}^c$, $A \in \mathcal{A}^c$. Suppose that $\xi, \xi_n$, $n = 1,2,\ldots$, are $X$-integrable progressively measurable processes and that
$$P\Big(\int_0^{T}|\xi(t) - \xi_n(t)|^2\,d\langle M\rangle(t) + \int_0^{T}|\xi(t) - \xi_n(t)|\,d\|A\|(t) > \varepsilon\Big) \to 0, \qquad n \to \infty,$$
for any $T > 0$ and $\varepsilon > 0$. Then
$$P\Big(\sup_{t\in[0,T]}|(\xi\bullet X)(t) - (\xi_n\bullet X)(t)| \ge \varepsilon\Big) \to 0, \qquad n \to \infty,$$

for any $T > 0$ and $\varepsilon > 0$.

Proof Let
$$\eta_n(t) = \Big(\int_0^{t}|\xi(s) - \xi_n(s)|^2\,d\langle M\rangle(s)\Big)^{1/2} + \int_0^{t}|\xi(s) - \xi_n(s)|\,d\|A\|(s), \qquad t \ge 0.$$
Then we see that $P(\eta_n(T) > \delta) \to 0$, $n \to \infty$, for any $\delta > 0$ and $T > 0$. Let
$$\tau_{n,m} = \inf\Big\{t > 0;\ \eta_n(t) > \frac{1}{m}\Big\}, \qquad n,m = 1,2,\ldots.$$
Then we see that for any $T > 0$

$$E[\sup_{t\in[0,T]}|(\xi\bullet X)^{\tau_{n,m}}(t) - (\xi_n\bullet X)^{\tau_{n,m}}(t)|]$$
$$\le E[\sup_{t\in[0,T]}|((\xi - \xi_n)\bullet M^{\tau_{n,m}})(t)|^2]^{1/2} + E\Big[\sup_{t\in[0,T]}\Big|\int_0^{t\wedge\tau_{n,m}}(\xi(s) - \xi_n(s))\,dA(s)\Big|\Big]$$
$$\le 2E\Big[\int_0^{T\wedge\tau_{n,m}}|\xi(t) - \xi_n(t)|^2\,d\langle M\rangle(t)\Big]^{1/2} + E\Big[\int_0^{T\wedge\tau_{n,m}}|\xi(t) - \xi_n(t)|\,d\|A\|(t)\Big] \le \frac{3}{m}.$$
Therefore we see that for any $T > 0$ and $\varepsilon > 0$
$$P\Big(\sup_{t\in[0,T]}|(\xi\bullet X)(t) - (\xi_n\bullet X)(t)| > \varepsilon\Big) \le P(\tau_{n,m} < T) + \frac{1}{\varepsilon}E[\sup_{t\in[0,T]}|(\xi\bullet X)^{\tau_{n,m}}(t) - (\xi_n\bullet X)^{\tau_{n,m}}(t)|] \le P(\eta_n(T) \ge 1/m) + \frac{3}{m\varepsilon}.$$
So we see that
$$\varlimsup_{n\to\infty}P\Big(\sup_{t\in[0,T]}|(\xi\bullet X)(t) - (\xi_n\bullet X)(t)| > \varepsilon\Big) \le \frac{3}{m\varepsilon}, \qquad m \ge 1.$$

This implies our assertion.

Proposition 4.2.2 Let $X$ be a continuous semi-martingale and $\xi$ be an adapted continuous process. Then
$$P\Big(\sup_{t\in[0,T]}\Big|(\xi\bullet X)(t) - \sum_{k=1}^{\infty}\xi((k-1)2^{-n})(X(k2^{-n}\wedge t) - X((k-1)2^{-n}\wedge t))\Big| \ge \varepsilon\Big) \to 0$$
as $n \to \infty$ for any $T > 0$ and $\varepsilon > 0$.

Proof Let $\xi_n$, $n = 1,2,\ldots$, be progressively measurable processes given by
$$\xi_n(t) = \sum_{k=1}^{\infty}\xi((k-1)2^{-n})1_{((k-1)2^{-n},\,k2^{-n}]}(t).$$
Then we see that
$$\int_0^{T}|\xi(t) - \xi_n(t)|^2\,d\langle M\rangle(t) + \int_0^{T}|\xi(t) - \xi_n(t)|\,d\|A\|(t) \to 0 \quad \text{a.s.,}\ n \to \infty.$$

So our assertion follows from the previous proposition.

Proposition 4.2.3 Let $X$ and $Y$ be continuous semi-martingales and $\xi$ be an adapted continuous process. Then for any $T > 0$ and $\varepsilon > 0$,
$$P\Big(\sum_{k=1}^{\infty}\Big|X\Big(\frac{k}{2^n}\wedge T\Big) - X\Big(\frac{k-1}{2^n}\wedge T\Big)\Big|^3 \ge \varepsilon\Big) \to 0, \qquad n \to \infty,$$
and
$$P\Big(\Big|\sum_{k=1}^{\infty}\xi\Big(\frac{k-1}{2^n}\Big)\Big\{\Big(X\Big(\frac{k}{2^n}\wedge T\Big) - X\Big(\frac{k-1}{2^n}\wedge T\Big)\Big)\Big(Y\Big(\frac{k}{2^n}\wedge T\Big) - Y\Big(\frac{k-1}{2^n}\wedge T\Big)\Big) - \Big(\langle X,Y\rangle\Big(\frac{k}{2^n}\wedge T\Big) - \langle X,Y\rangle\Big(\frac{k-1}{2^n}\wedge T\Big)\Big)\Big\}\Big| \ge \varepsilon\Big) \to 0, \qquad n \to \infty.$$

Proof Let $X = X_0 + M + A$, $Y = Y_0 + N + \tilde{A}$, $M, N \in \mathcal{M}_{loc}^c$, $A, \tilde{A} \in \mathcal{A}^c$. Then we have
$$\sum_{k=1}^{\infty}\Big|X\Big(\frac{k}{2^n}\wedge T\Big) - X\Big(\frac{k-1}{2^n}\wedge T\Big)\Big|^3 \le \Big(\sup_{s,t\in[0,T],|s-t|\le 2^{-n}}|X(t) - X(s)|\Big)\sum_{k=1}^{\infty}\Big(X\Big(\frac{k}{2^n}\wedge T\Big) - X\Big(\frac{k-1}{2^n}\wedge T\Big)\Big)^2$$
$$\le \Big(\sup_{s,t\in[0,T],|s-t|\le 2^{-n}}|X(t) - X(s)|\Big)\Big\{2\sum_{k=1}^{\infty}\Big(M\Big(\frac{k}{2^n}\wedge T\Big) - M\Big(\frac{k-1}{2^n}\wedge T\Big)\Big)^2 + 2\sum_{k=1}^{\infty}\Big(A\Big(\frac{k}{2^n}\wedge T\Big) - A\Big(\frac{k-1}{2^n}\wedge T\Big)\Big)^2\Big\}$$
$$\le 2\Big(\sup_{s,t\in[0,T],|s-t|\le 2^{-n}}|X(t) - X(s)|\Big)\big(Q^{(n)}(M,M)(T) + \|A\|(T)^2\big).$$
Therefore we have the first assertion by Propositions 1.1.12 and 3.6.5.

Let $\tau_m$, $m = 1,2,\ldots$, be given by
$$\tau_m = \inf\{t \ge 0;\ |\xi(t)| + |M(t)| + |N(t)| + \|A\|(t) + \|\tilde{A}\|(t) > m\}\wedge m.$$
Then we see that $\{\tau_m\}_{m=1}^{\infty}$ is an increasing sequence of stopping times, and $M^{\tau_m}, N^{\tau_m} \in \mathcal{M}_b^c$. Let $\xi_n$, $n = 1,2,\ldots$, be progressively measurable processes given by
$$\xi_n(t) = \sum_{k=1}^{\infty}\xi((k-1)2^{-n})1_{((k-1)2^{-n},\,k2^{-n}]}(t).$$

4.2 Continuous Semi-martingales

99

Then we see that |ξn 1[0,τm ] | m. Note that

k−1 k k−1 τm τm ξ X ∧T −X ∧T 2n 2n 2n

k k−1 m X τm ∧ T − X τm ∧ T . 2n 2n Let

L (m,n) (t) = Q (n) (M τm , N τm )(t) − M τm , N τm (t),

t 0.

Then we see that L (m,n) is a martingale by Proposition 3.5.3, and that E[ sup |L (m,n) (t)|2 ] → 0, t∈[0,T ]

n → ∞, T > 0, m 1.

Let us define Rn ( X˜ , Y˜ ) ∞

k−1 ˜ k ∧ T − Y˜ k − 1 ∧ T ˜ k ∧ T − X˜ k − 1 ∧ T Y X = ξ 2n 2n 2n 2n 2n k=1

k k−1 . − X˜ , Y˜ n ∧ T − X˜ , Y˜

∧ T n 2 2

for any continuous semi-martingales X˜ and Y˜ . Then we see that Rn (X τm , Y τm ) |(ξn • L (m,n) )(T )| ∞

k k−1 k−1 τm τm ˜ τm k ∧ T − A˜ τm k − 1 ∧ T X ξ ∧ T − X ∧ T + A 2n 2n 2n 2n 2n k=1 ∞

k k−1 k k−1 k−1 + Aτm ξ ∧ T − Aτm ∧T N τm ∧ T − N τm ∧T 2n 2n 2n 2n 2n k=1

|(ξn • L (m,n) )(T )| + m( + m(

sup s,t∈[0,T ],|s−t|2−n

sup s,t∈[0,T ],|s−t|2−n

|X τm (t) − X τm (s)|) A˜ τm (T )

|N τm (t) − N τm (s)|)Aτm (T ).

It is obvious that sup s,t∈[0,T ],|s−t|2−n

and sup s,t∈[0,T ],|s−t|2−n

|X τm (t) − X τm (s)| A˜ τm (T ) → 0,

|N τm (t) − N τm (s)|Aτm (T ) → 0,

100

4 Stochastic Integral

as n → ∞. Also, we see that E[|(ξn • L (m,n) )(T )|2 ] =

∞

E[(ξ((k − 1)2−n )2 (L (m,n) (k2−n ) − L (m,n) ((k − 1)2−n ))2 ]

k=1

m2

∞

E[(L (m,n) (k2−n ) − L (m,n) ((k − 1)2−n ))2 ] = m 2 E[L (m,n) (T )2 ] → 0, n → ∞.

k=1

Therefore we see that P(Rn (X τm , Y τm ) ε) → 0,

n → ∞,

and so we see that lim P(Rn (X, Y ) ε) lim (P(τm < T ) + P(Rn (X τm , Y τm ) ε)) = P(τm < T ). n→∞

n→∞

Letting m → ∞, we have our assertion. Proposition 4.2.4 Let X : [0, ∞) × → R be an adapted continuous process, and τn {τn }∞ n=1 be an increasing sequence of stopping times. Assume that X is a continuous semi-martingale for all n 1. Then X is also a continuous semi-martingale. Proof From the assumption there are Mn ∈ Mcloc , An ∈ Ac , n = 1, 2, . . . , such that τn n + Aτn+1 = (Mn+1 + An+1 )τn X τn (t) = X (0) + Mn (t) + An (t), t 0. Since Mn+1 τn n = Mn + An , we see by Proposition 3.6.4 that Mn+1 = Mn , and Aτn+1 = An , n = 1, 2, . . . So by Proposition 3.6.2 (2) we see that there are adapted continuous processes M and A such that Mn = M τn and Aτn = An . It is easy to see that M ∈ Mcloc and A ∈ Ac . Also, we see that X (t) = X (0) + M(t) + A(t), t 0. Therefore X is a continuous semi-martingale.

4.3 Itô’s Formula Proposition 4.3.1 Let N 1, X i , i = 1, 2, . . . , N , be continuous semimartingales, and F ∈ C0∞ (R N ). Then a continuous process Y given by Y (t) = F(X 1 (t), . . . , X N (t)), t 0, is a continuous semi-martingale, and Y (t) =Y (0) +

N i=1

+

t 0

∂F 1 (X (s), . . . , X N (s)) d X i (s) ∂xi

N 1 t ∂2 F (X 1 (s), . . . , X N (s)) d X i , X j (s) 2 i, j=1 0 ∂ x i ∂ x j

(4.2)

4.3 Itô’s Formula

101

t 0. Proof Let us denote (X 1 (t), . . . , X N (t)) by X (t). Let ξi , ξi j , i, j = 1, . . . , N , be bounded adapted continuous processes given by ξi (t) =

∂F ∂2 F (X (t)) and ξ (t) = (X (t)), i j ∂xi ∂xi ∂x j

t 0.

Note that for any x, y ∈ R N we see that F(y)

= F(x) + = F(x) +

1

0 N i=1

d F(x + t (y − x)) dt dt

t1 2 1 ∂F d i i (x)(y − x ) + dt F(x + t (y − x)) dt 1 2 ∂xi 0 dt 0

N N ∂F 1 t ∂2 F i i = F(x) + (x)(y − x ) + (x)(y i − x i )(y j − x j ) d X i , X j

i j ∂xi 2 0 ∂x ∂x i=1

i, j=1

+ R(y, x),

where

1

R(y, x) = 0

t1

dt1

dt2 0

0

t2

d3 F(x + t (y − x)) dt . dt 3

It is obvious that |R(y, ; x)| C0

N

|y i − x i |3 ,

x, y ∈ R N ,

i=1

where C0 =

N

∂3 F sup i j k (z) . N ∂x ∂x ∂x

i, j,k=1 z∈R

Let us take an arbitrary T > 0 and fix it. Then we see that Y (T ) − Y (0) ∞ = (F(X ((k2−n ) ∧ T )) − F(X (((k − 1)2−n ) ∧ T ))) k=1

=

N i=1

Here

(n) I1,i (T )

N N 1 (n) 1 (n) + I (T ) + I (T ) + R (n) (T ). 2 i, j=1 2,i j 2 i, j=1 3,i j

$$I^{(n)}_{1,i}(T) = \sum_{k=1}^\infty \xi_i((k-1)2^{-n}) \big( X^i((k2^{-n}) \wedge T) - X^i(((k-1)2^{-n}) \wedge T) \big),$$

$$I^{(n)}_{2,ij}(T) = \sum_{k=1}^\infty \xi_{ij}((k-1)2^{-n}) \big( \langle X^i, X^j\rangle((k2^{-n}) \wedge T) - \langle X^i, X^j\rangle(((k-1)2^{-n}) \wedge T) \big),$$

$$\begin{aligned}
I^{(n)}_{3,ij}(T) = \sum_{k=1}^\infty \xi_{ij}((k-1)2^{-n}) \big\{ &\big( X^i((k2^{-n}) \wedge T) - X^i(((k-1)2^{-n}) \wedge T) \big) \big( X^j((k2^{-n}) \wedge T) - X^j(((k-1)2^{-n}) \wedge T) \big) \\
&- \big( \langle X^i, X^j\rangle((k2^{-n}) \wedge T) - \langle X^i, X^j\rangle(((k-1)2^{-n}) \wedge T) \big) \big\},
\end{aligned}$$

and

$$R^{(n)}(T) = \sum_{k=1}^\infty R\big( X((k2^{-n}) \wedge T),\, X(((k-1)2^{-n}) \wedge T) \big).$$

Noting that

$$|R^{(n)}(T)| \leq C_0 \sum_{i=1}^N \sum_{k=1}^\infty |X^i((k2^{-n}) \wedge T) - X^i(((k-1)2^{-n}) \wedge T)|^3,$$

we see by Propositions 4.2.2 and 4.2.3 that as $n \to \infty$

$$I^{(n)}_{1,i}(T) \to \int_0^T \frac{\partial F}{\partial x^i}(X(s))\, dX^i(s), \qquad I^{(n)}_{2,ij}(T) \to \int_0^T \frac{\partial^2 F}{\partial x^i \partial x^j}(X(s))\, d\langle X^i, X^j\rangle(s),$$

and

$$|I^{(n)}_{3,ij}(T)| + |R^{(n)}(T)| \to 0.$$

Here all limits are in the sense of convergence in probability. So we obtain Equation (4.2) in the case that $t = T$. This implies our assertion. ∎

Theorem 4.3.1 Let $N \geq 1$, let $X^i$, $i = 1, 2, \ldots, N$, be continuous semi-martingales, and let $F \in C^2(\mathbb{R}^N)$. Then the continuous process $Y$ given by $Y(t) = F(X^1(t), \ldots, X^N(t))$, $t \geq 0$, is a continuous semi-martingale, and


$$Y(t) = Y(0) + \sum_{i=1}^N \int_0^t \frac{\partial F}{\partial x^i}(X^1(s), \ldots, X^N(s))\, dX^i(s) + \frac{1}{2} \sum_{i,j=1}^N \int_0^t \frac{\partial^2 F}{\partial x^i \partial x^j}(X^1(s), \ldots, X^N(s))\, d\langle X^i, X^j\rangle(s) \qquad (4.3)$$

for any t 0. ∞ N N Proof Let us take a ϕ ∈ C0 (R N) such that ϕ 0, ϕ(x) = 1, x ∈ [−1/4, 1/4] , and RN ϕ(x)d x = 1. Let Fn : R → R be given by

Fn (x) = ϕ

1 N x n ϕ(n(x − y))F(y) dy, n RN

x ∈ RN .

Then Fn ∈ C0∞ (R N ) and so by Proposition 4.3.1 we have Fn (X 1 (t), . . . , X N (t)) = Fn (X 1 (0), . . . , X N (0)) +

N i=1

t 0

∂ Fn 1 (X (s), . . . , X N (s))d X i (s) ∂xi

N 1 t ∂ 2 Fn + (X 1 (s), . . . , X N (s))d X i , X j (s). 2 i, j=1 0 ∂ x i ∂ x j

Then we see that Fn (x) → F(x), and

∂ Fn ∂F (x) → i (x), i ∂x ∂x

∂ 2 Fn ∂2 F (x) → (x) ∂xi ∂x j ∂xi ∂x j

n → ∞,

n → ∞,

uniformly in x on compacts. Therefore this observation and Proposition 4.2.1 imply our assertion. Equation (4.3) is called Itô’s formula. We can extend the above theorem to the following. Corollary 4.3.1 Let N 1 and X i , i = 1, 2, . . . , N , be continuous semimartingales. Let D be an open subset in R N . We assume that (X 1 (t, ω), . . . , X N (t, ω)) ∈ D for all t ∈ [0, ∞) and ω ∈ . Then for any F ∈ C 2 (D) an adapted continuous process Y given by Y (t) = F(X 1 (t), . . . , X N (t)), t 0, is a continuous semi-martingale, and

$$Y(t) = Y(0) + \sum_{i=1}^N \int_0^t \frac{\partial F}{\partial x^i}(X^1(s), \ldots, X^N(s))\, dX^i(s) + \frac{1}{2} \sum_{i,j=1}^N \int_0^t \frac{\partial^2 F}{\partial x^i \partial x^j}(X^1(s), \ldots, X^N(s))\, d\langle X^i, X^j\rangle(s)$$

for $t \geq 0$.

Proof If $D = \mathbb{R}^N$, then we have our assertion by the previous theorem. So we may assume that $D \neq \mathbb{R}^N$. Also, $D$ is not an empty set. Then there is a sequence $U_n$, $n = 1, 2, \ldots$, of non-empty bounded open subsets such that $\bar{U}_n \subset U_{n+1}$, $n = 1, 2, \ldots$, and $\bigcup_{n=1}^\infty U_n = D$. Here $\bar{U}_n$ is the closure of $U_n$. Also, there are $\varphi_n \in C^\infty(\mathbb{R}^N)$ such that $\varphi_n = 1$ on $\bar{U}_{3n-2}$ and $\varphi_n = 0$ on $\mathbb{R}^N \setminus U_{3n-1}$. Let $\tau_n$, $n = 1, 2, \ldots$, be stopping times given by

$$\tau_n = \inf\{t \geq 0;\ (X^1(t), \ldots, X^N(t)) \in \mathbb{R}^N \setminus \bar{U}_{3n}\}.$$

Then $\{\tau_n\}_{n=1}^\infty$ is an increasing sequence of stopping times. Let $F_n : \mathbb{R}^N \to \mathbb{R}$, $n = 1, 2, \ldots$, be given by $F_n(x) = 1_D(x)\varphi_{n+1}(x)F(x)$, $x \in \mathbb{R}^N$. Then it is easy to see that $F_n \in C^2(\mathbb{R}^N)$ and $Y^{\tau_n}(t) = F_n((X^1)^{\tau_n}(t), \ldots, (X^N)^{\tau_n}(t))$, $t \geq 0$. So by Proposition 4.2.4, $Y$ is a continuous semi-martingale. Also, we see that

$$Y^{\tau_n}(t) = Y(0) + \sum_{i=1}^N \int_0^t 1_{[0,\tau_n)}(s)\frac{\partial F}{\partial x^i}(X^1(s), \ldots, X^N(s))\, dX^i(s) + \frac{1}{2} \sum_{i,j=1}^N \int_0^t 1_{[0,\tau_n)}(s)\frac{\partial^2 F}{\partial x^i \partial x^j}(X^1(s), \ldots, X^N(s))\, d\langle X^i, X^j\rangle(s).$$

So letting $n \to \infty$, we have our assertion. ∎
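Itô's formula (4.3) can be checked numerically in its simplest one-dimensional case, $F(x) = x^2$ applied to a Brownian motion, where it reads $B(T)^2 = 2\int_0^T B\, dB + T$. The following is only an illustrative sketch (not part of the book): it assumes NumPy, and the Euler grid, seed, and tolerance are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 200_000, 1.0
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)          # Brownian increments
B = np.concatenate(([0.0], np.cumsum(dB)))    # B(0) = 0

# Left-endpoint (Ito) sums for F(x) = x^2:
#   B(T)^2 = 2 * int_0^T B dB + <B>(T),  with <B>(T) = T
ito_integral = np.sum(B[:-1] * dB)
lhs = B[-1] ** 2
rhs = 2.0 * ito_integral + T
assert abs(lhs - rhs) < 0.05
```

The discrepancy between the two sides is exactly $\sum_k (\Delta B_k)^2 - T$, which is how Proposition 4.2.3 (convergence of quadratic variation sums) enters the proof above.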

Chapter 5

Applications of Stochastic Integral

We assume that $(\Omega, \mathcal{F}, P, \{\mathcal{F}_t\}_{t\in[0,\infty)})$ is a standard filtered probability space throughout this chapter.

5.1 Characterization of Brownian Motion

Theorem 5.1.1 Let $d \geq 1$. Assume that $M^i \in \mathcal{M}^c_{loc}$, $i = 1, 2, \ldots, d$, satisfy

$$\langle M^i, M^j\rangle(t) = \delta_{ij}t, \qquad t \geq 0,\ i, j = 1, \ldots, d.$$

Then $M = (M^1, \ldots, M^d)$ is a $d$-dimensional $\{\mathcal{F}_t\}_{t\in[0,\infty)}$-Brownian motion.

Proof Let us take a $\xi = (\xi_1, \ldots, \xi_d) \in \mathbb{R}^d$ and fix it for a while. Let $f : \mathbb{R}^{1+d} \to \mathbb{C}$ be given by

$$f(x_0, x_1, \ldots, x_d) = \exp\Big( \sqrt{-1}\sum_{i=1}^d \xi_i x_i + \frac{|\xi|^2}{2}x_0 \Big).$$

Then it is easy to see that

$$\frac{\partial f}{\partial x_0} + \frac{1}{2}\sum_{i=1}^d \frac{\partial^2 f}{\partial x_i^2} = 0.$$

By Itô’s formula, we have


$$\begin{aligned}
f(t, M^1(t), \ldots, M^d(t)) - 1 &= \sum_{i=1}^d \int_0^t \frac{\partial f}{\partial x_i}(s, M^1(s), \ldots, M^d(s))\, dM^i(s) + \int_0^t \frac{\partial f}{\partial x_0}(s, M^1(s), \ldots, M^d(s))\, ds \\
&\quad + \frac{1}{2}\sum_{i,j=1}^d \int_0^t \frac{\partial^2 f}{\partial x_i \partial x_j}(s, M^1(s), \ldots, M^d(s))\, d\langle M^i, M^j\rangle(s) \\
&= \sum_{i=1}^d \int_0^t \frac{\partial f}{\partial x_i}(s, M^1(s), \ldots, M^d(s))\, dM^i(s).
\end{aligned}$$

Here we applied Itô’s formula to the real part and the imaginary part. Since $|f(t, M^1(t), \ldots, M^d(t))| \leq \exp(t|\xi|^2/2)$, we see that the real part and the imaginary part of $f(t, M^1(t), \ldots, M^d(t)) - 1$, $t \geq 0$, are martingales. So we see that for $t > s \geq 0$

$$E[f(t, M^1(t), \ldots, M^d(t)) \mid \mathcal{F}_s] = f(s, M^1(s), \ldots, M^d(s)).$$

Therefore we see that

$$E\Big[ \exp\Big( \sqrt{-1}\sum_{i=1}^d \xi_i(M^i(t) - M^i(s)) \Big) \,\Big|\, \mathcal{F}_s \Big] = \exp\Big( -\frac{|\xi|^2}{2}(t - s) \Big).$$

Let us take an arbitrary $A \in \mathcal{F}_s$ such that $P(A) > 0$. Let $P_A$ be a probability measure on $(\Omega, \mathcal{F})$ given by

$$P_A(B) = \frac{P(A \cap B)}{P(A)}, \qquad B \in \mathcal{F}.$$

Then we see that

$$E^{P_A}\Big[ \exp\Big( \sqrt{-1}\sum_{i=1}^d \xi_i(M^i(t) - M^i(s)) \Big) \Big] = \exp\Big( -\frac{|\xi|^2}{2}(t - s) \Big), \qquad \xi \in \mathbb{R}^d.$$

So we see that the probability law of $M(t) - M(s)$ under $P_A$ is a normal distribution whose mean is $0$ and whose covariance matrix is $(t - s)I_d$. Therefore we see that for any $C \in \mathcal{B}(\mathbb{R}^d)$

$$P_A(M(t) - M(s) \in C) = \Big( \frac{1}{2\pi(t-s)} \Big)^{d/2} \int_C \exp\Big( -\frac{|x|^2}{2(t-s)} \Big)\, dx = P(M(t) - M(s) \in C),$$

and so


$$P(\{M(t) - M(s) \in C\} \cap A) = P(\{M(t) - M(s) \in C\})P(A).$$

This shows that $M(t) - M(s)$ and $\mathcal{F}_s$ are independent. So we see that $M$ is a $d$-dimensional $\{\mathcal{F}_t\}_{t\in[0,\infty)}$-Brownian motion. ∎

Proposition 5.1.1 Let $B : [0, \infty) \times \Omega \to \mathbb{R}^d$ be a $d$-dimensional $\{\mathcal{F}_t\}_{t\in[0,\infty)}$-Brownian motion, and let $\tau : \Omega \to [0, \infty]$ be an $\{\mathcal{F}_t\}_{t\in[0,\infty)}$-stopping time such that $P(\tau < \infty) = 1$. Let $\mathcal{G}_t = \mathcal{F}_{\tau+t}$, $t \in [0, \infty)$, and let $\tilde{B} : [0, \infty) \times \Omega \to \mathbb{R}^d$ be given by

$$\tilde{B}(t, \omega) = B(\tau(\omega) + t, \omega) - B(\tau(\omega), \omega), \qquad t \geq 0,\ \omega \in \Omega.$$

Then $\{\mathcal{G}_t\}_{t\in[0,\infty)}$ satisfies the standard condition and $\tilde{B}$ is a $d$-dimensional $\{\mathcal{G}_t\}_{t\in[0,\infty)}$-Brownian motion.

Proof By Proposition 3.3.6 we see that $\{\mathcal{G}_t\}_{t\in[0,\infty)}$ satisfies the standard condition. Let $\sigma_n$, $n \geq 1$, be stopping times given by

$$\sigma_n = \inf\Big\{ t \geq 0;\ \sum_{i=1}^d |B^i(t)| \geq n \Big\} \wedge n.$$

Let $M^i_n = (B^i)^{\sigma_n}$, $i = 1, \ldots, d$, $n \geq 1$. Then $M^i_n \in \mathcal{M}^c_b$. Let $N^{i,j}_n$, $i, j = 1, \ldots, d$, $n \geq 1$, be given by

$$N^{i,j}_n(t) = M^i_n(t)M^j_n(t) - \langle M^i_n, M^j_n\rangle(t) = M^i_n(t)M^j_n(t) - \delta_{ij}(t \wedge \sigma_n).$$

Then $N^{i,j}_n \in \mathcal{M}^c_b$. Now let $\tilde{M}^i_n$, $\tilde{N}^{i,j}_n$, $i, j = 1, \ldots, d$, $n \geq 1$, be given by

$$\tilde{M}^i_n(t, \omega) = M^i_n(\tau(\omega) + t, \omega) - M^i_n(\tau(\omega), \omega)$$

and

$$\tilde{N}^{i,j}_n(t, \omega) = N^{i,j}_n(\tau(\omega) + t, \omega) - N^{i,j}_n(\tau(\omega), \omega).$$

Then by Proposition 3.3.13 we see that $\tilde{M}^i_n, \tilde{N}^{i,j}_n \in \mathcal{M}^c_b(\{\mathcal{G}_t\}_{t\in[0,\infty)})$. Here $\mathcal{M}^c_b(\{\mathcal{G}_t\}_{t\in[0,\infty)})$ is the set of continuous, bounded $\{\mathcal{G}_t\}_{t\in[0,\infty)}$-martingales starting from $0$. Let $\tilde{\sigma}_n = (\sigma_n - \tau) \vee 0$. Then we see that

$$\{\tilde{\sigma}_n < t\} = \{\sigma_n \leq \tau\} \cup \{\sigma_n < \tau + t\} \in \mathcal{F}_{\tau+t}.$$

So $\tilde{\sigma}_n$ are $\{\mathcal{G}_t\}_{t\in[0,\infty)}$-stopping times. Also, it is easy to see that $(\tilde{B}^i)^{\tilde{\sigma}_n} = \tilde{M}^i_n$. So we see that $\tilde{B}^i$, $i = 1, \ldots, d$, are continuous local martingales with respect to the filtration $\{\mathcal{G}_t\}_{t\in[0,\infty)}$. Since

$$\tilde{M}^i_n(t)\tilde{M}^j_n(t) - \delta_{ij}(t \wedge \tilde{\sigma}_n) = \tilde{N}^{i,j}_n(t) - M^i_n(\tau)\tilde{M}^j_n(t) - M^j_n(\tau)\tilde{M}^i_n(t),$$

we see that $\langle \tilde{B}^i, \tilde{B}^j\rangle^{\tilde{\sigma}_n}(t) = \langle \tilde{M}^i_n, \tilde{M}^j_n\rangle(t) = \delta_{ij}(t \wedge \tilde{\sigma}_n)$ with respect to the filtration $\{\mathcal{G}_t\}_{t\in[0,\infty)}$. So by Theorem 5.1.1 we have our assertion. ∎
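Theorem 5.1.1 (Lévy's characterization) can be illustrated numerically. Below, a local martingale is built by rotating a 2-dimensional Brownian motion through an adapted angle $\theta(t) = B^1(t)$; its quadratic variation is $\int_0^t (\cos^2\theta + \sin^2\theta)\, ds = t$, so the theorem predicts it is again a Brownian motion. This is only a sketch under our own discretization and seed choices, assuming NumPy.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 10_000, 200, 1.0
dt = T / n_steps
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps, 2))
B1 = np.cumsum(dB[:, :, 0], axis=1)

# theta(t) = B^1(t); using left endpoints keeps the integrand adapted,
# and cos^2 + sin^2 = 1 gives <M>(t) = t exactly, even on the grid.
theta = np.concatenate([np.zeros((n_paths, 1)), B1[:, :-1]], axis=1)
M_T = np.sum(np.cos(theta) * dB[:, :, 0] + np.sin(theta) * dB[:, :, 1], axis=1)

# Levy's theorem predicts M(T) ~ N(0, T)
assert abs(M_T.mean()) < 0.04
assert abs(M_T.var() - T) < 0.06
```

Each increment of $M$ is conditionally $N(0, dt)$ given the past, so the sample mean and variance of $M(T)$ match those of $N(0, T)$ up to Monte-Carlo error.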

Theorem 5.1.2 Let $M \in \mathcal{M}^c_{loc}$. We assume that $\lim_{t\to\infty}\langle M\rangle(t) = \infty$ with probability one. Then there is a 1-dimensional Brownian motion $B : [0, \infty) \times \Omega \to \mathbb{R}$ and $\Omega_0 \in \mathcal{F}$ such that $P(\Omega_0) = 1$ and

$$M(t, \omega) = B(\langle M\rangle(t, \omega), \omega), \qquad (t, \omega) \in [0, \infty) \times \Omega_0.$$

Proof Let $T(r)$, $r \geq 0$, be stopping times given by $T(r) = \inf\{t \geq 0;\ \langle M\rangle(t) > r\}$. Let $\Omega_1 \in \mathcal{F}$ be given by

$$\Omega_1 = \{\omega \in \Omega;\ \lim_{t\to\infty}\langle M\rangle(t, \omega) = \infty\}.$$

Then from the assumption we see that $P(\Omega_1) = 1$. For each $\omega \in \Omega_1$, $T(\cdot, \omega) : [0, \infty) \to [0, \infty)$ is non-decreasing and right-continuous, and satisfies $T(r, \omega) \to \infty$, $r \to \infty$. Since $E[\langle M^{T(r)}\rangle(t)] \leq r$, we see by Proposition 3.6.8 that $M^{T(r)}$ and $M^{T(r)}(t)^2 - \langle M^{T(r)}\rangle(t)$, $t \geq 0$, are martingales. Let $\mathcal{G}_t = \mathcal{F}_{T(t)}$, $t \geq 0$. By Proposition 3.3.6 we see that the filtration $\{\mathcal{G}_t\}_{t\in[0,\infty)}$ satisfies the standard condition. Moreover, by Proposition 3.3.5, we see that

$$E[M(T(r_1)) \mid \mathcal{G}_{r_0}] = E[M^{T(r_1)}(T(r_1)) \mid \mathcal{F}_{T(r_0)}] = M^{T(r_1)}(T(r_0)) = M(T(r_0))$$

for $r_1 > r_0 \geq 0$. So we see that $\{M(T(r));\ r \geq 0\}$ is a $\{\mathcal{G}_r\}_{r\in[0,\infty)}$-martingale. Since $\langle M\rangle(T(r)) = r$ a.s., we similarly see that $\{M(T(r))^2 - r;\ r \geq 0\}$ is a $\{\mathcal{G}_r\}_{r\in[0,\infty)}$-martingale. Moreover, we see that $\langle M^{T(0)}\rangle = \langle M\rangle^{T(0)} = 0$. So we see that $M(T(0)) = 0$ a.s.

Let us take an $s > 0$ and fix it for a while. Let $\tau(s)$ be a stopping time given by $\tau(s) = \inf\{t > s;\ \langle M\rangle(t) > \langle M\rangle(s)\}$, and let $N \in \mathcal{M}^c_{loc}$ be given by $N = M^{\tau(s)} - M^s$. Then

$$\langle N\rangle = \langle M^{\tau(s)}\rangle - 2\langle M^{\tau(s)}, M^s\rangle + \langle M^s\rangle = \langle M\rangle^{\tau(s)} - 2\langle M\rangle^s + \langle M\rangle^s = \langle M\rangle^{\tau(s)} - \langle M\rangle^s = 0.$$

So we see that $M^{\tau(s)} = M^s$ a.s. For $s_1 > s_0 \geq 0$, let $A_{s_0,s_1} = \{M(t) = M(s_0) \text{ for all } t \in [s_0, s_1]\}$ and $B_{s_0,s_1} = \{\langle M\rangle(s_1) = \langle M\rangle(s_0)\}$. Then we see that

$$P(A^c_{s_0,s_1} \cap B_{s_0,s_1}) \leq P(\{M^{s_1} = M^{s_0}\}^c \cap \{\tau(s_0) \geq s_1\}) \leq P(\{M^{\tau(s_0)} = M^{s_0}\}^c) = 0.$$

So we see that $P(A_{s_0,s_1} \cup B^c_{s_0,s_1}) = 1$. Now let $\Omega_0 \in \mathcal{F}$ be given by

$$\Omega_0 = \Omega_1 \cap \{M(T(0)) = 0\} \cap \bigcap_{s_1 > s_0 \geq 0,\ s_0, s_1 \in \mathbb{Q}} (A_{s_0,s_1} \cup B^c_{s_0,s_1}).$$

Then we see that $P(\Omega_0) = 1$. Suppose that $\omega \in \Omega_0$, $t_1 > t_0 \geq 0$, and $\langle M\rangle(t_1, \omega) = \langle M\rangle(t_0, \omega)$. Then for any $s_0 < s_1$, $s_0, s_1 \in [t_0, t_1] \cap \mathbb{Q}$, we see that $\omega \in B_{s_0,s_1}$ and so $\omega \in A_{s_0,s_1}$. So we see that $M(t, \omega) = M(s_0, \omega)$, $t \in [s_0, s_1]$. Since $[t_0, t_1] \cap \mathbb{Q}$ is dense in $[t_0, t_1]$, we see that

$$M(t, \omega) = M(t_0, \omega), \qquad t \in [t_0, t_1].$$

Suppose that $\omega \in \Omega_0$. Then $\langle M\rangle(T(r, \omega), \omega) = r$, $r \geq 0$. Therefore for any $r > 0$ we see that $\langle M\rangle(T(r-, \omega), \omega) = r = \langle M\rangle(T(r, \omega), \omega)$. This implies that $M(T(r-, \omega), \omega) = M(T(r, \omega), \omega)$. So we see that for any $\omega \in \Omega_0$, $M(T(r, \omega), \omega)$, $r \in [0, \infty)$, is continuous in $r$. Let $B : [0, \infty) \times \Omega \to \mathbb{R}$ be given by

$$B(t, \omega) = \begin{cases} M(T(t, \omega), \omega), & \text{if } \omega \in \Omega_0, \\ 0, & \text{otherwise}. \end{cases}$$

Then $B$ is a continuous process. Moreover, we see that $B(0) = M(T(0)) = 0$, and that $\{B(t);\ t \geq 0\}$ and $\{B(t)^2 - t;\ t \geq 0\}$ are $\{\mathcal{G}_t\}_{t\in[0,\infty)}$-martingales. So $B$ is a continuous local martingale and $\langle B\rangle(t) = t$, $t \geq 0$, with respect to the filtration $\{\mathcal{G}_t\}_{t\in[0,\infty)}$. So $B$ is a 1-dimensional $\{\mathcal{G}_t\}_{t\in[0,\infty)}$-Brownian motion. Since $\langle M\rangle$ is constant, and hence $M$ is constant, on $[t, T(\langle M\rangle(t, \omega), \omega)]$ for $\omega \in \Omega_0$, we see that $M(t) = B(\langle M\rangle(t))$ on $\Omega_0$, $t \geq 0$. This completes the proof. ∎

5.2 Representation of Continuous Local Martingale

Theorem 5.2.1 Let $M_i \in \mathcal{M}^c_{loc}$, $i = 1, \ldots, N$, and let $B : [0, \infty) \times \Omega \to \mathbb{R}^d$ be a $d$-dimensional $\{\mathcal{F}_t\}_{t\in[0,\infty)}$-Brownian motion. Assume that there are progressively measurable processes $\Phi_{ik} : [0, \infty) \times \Omega \to \mathbb{R}$, $i = 1, \ldots, N$, $k = 1, \ldots, d$, such that

$$P\Big( \sum_{i=1}^N \sum_{k=1}^d \int_0^T \Phi_{ik}(t)^2\, dt < \infty \Big) = 1, \qquad T > 0,$$

$$\langle M_i, M_j\rangle(t) = \sum_{k=1}^d \int_0^t \Phi_{ik}(s)\Phi_{jk}(s)\, ds, \qquad i, j = 1, \ldots, N,$$

and

$$\langle M_i, B^k\rangle = 0, \qquad i = 1, \ldots, N,\ k = 1, \ldots, d.$$

Then there is a $d$-dimensional $\{\mathcal{F}_t\}_{t\in[0,\infty)}$-Brownian motion $\tilde{B} : [0, \infty) \times \Omega \to \mathbb{R}^d$ such that

$$M_i(t) = \sum_{k=1}^d \int_0^t \Phi_{ik}(s)\, d\tilde{B}^k(s), \qquad t \in [0, \infty),\ i = 1, \ldots, N.$$

Proof We use the results on generalized inverse matrices in Appendix 8.4. Let $\Phi : [0, \infty) \times \Omega \to M_{N,d}$ be given by

$$\Phi(t, \omega) = (\Phi_{i,k}(t, \omega))_{i=1,\ldots,N,\ k=1,\ldots,d}, \qquad (t, \omega) \in [0, \infty) \times \Omega,$$

and let

$$\Psi_{k,i}(t, \omega) = (\psi_{N,d}(\Phi(t, \omega)))_{k,i}, \qquad k = 1, \ldots, d,\ i = 1, \ldots, N.$$

Then $\Psi_{k,i} : [0, \infty) \times \Omega \to \mathbb{R}$, $k = 1, \ldots, d$, $i = 1, \ldots, N$, are progressively measurable. Let $N^k \in \mathcal{M}^c_{loc}$, $k = 1, \ldots, d$, be given by

$$N^k(t) = \sum_{i=1}^N \int_0^t \Psi_{k,i}(s)\, dM_i(s), \qquad k = 1, \ldots, d.$$

Then we see that $\langle N^k, B^\ell\rangle = 0$, $k, \ell = 1, \ldots, d$. Let $A_{k\ell}$, $k, \ell = 1, \ldots, d$, be progressively measurable processes given by

$$(A_{k\ell}(t, \omega))_{k,\ell=1,\ldots,d} = I_d - \psi_{N,d}(\Phi(t, \omega))\Phi(t, \omega).$$

Also, let $\tilde{B}^k \in \mathcal{M}^c_{loc}$, $k = 1, \ldots, d$, be given by

$$\tilde{B}^k(t) = N^k(t) + \sum_{\ell=1}^d \int_0^t A_{k\ell}(s)\, dB^\ell(s), \qquad t \in [0, \infty).$$

Then we see that

$$\langle \tilde{B}^k, \tilde{B}^\ell\rangle(t) = \langle N^k, N^\ell\rangle(t) + \sum_{r=1}^d \int_0^t A_{kr}(s)A_{\ell r}(s)\, ds = \sum_{i,j=1}^N \sum_{r=1}^d \int_0^t \Psi_{k,i}(s)\Psi_{\ell,j}(s)\Phi_{i,r}(s)\Phi_{j,r}(s)\, ds + \sum_{r=1}^d \int_0^t A_{kr}(s)A_{\ell r}(s)\, ds.$$

Noting that $\psi_{N,d}(\Phi(s))\Phi(s)$ is an orthogonal projection matrix, we see that

$$\sum_{i,j=1}^N \sum_{r=1}^d \Psi_{k,i}(s)\Psi_{\ell,j}(s)\Phi_{i,r}(s)\Phi_{j,r}(s) + \sum_{r=1}^d A_{kr}(s)A_{\ell r}(s) = \delta_{k\ell}, \qquad k, \ell = 1, \ldots, d.$$

Therefore we see that $\tilde{B}$ is a $d$-dimensional $\{\mathcal{F}_t\}_{t\in[0,\infty)}$-Brownian motion. Let

$$\tilde{M}_i(t) = \sum_{k=1}^d \int_0^t \Phi_{i,k}(s)\, d\tilde{B}^k(s).$$

Then we see that

$$\tilde{M}_i(t) = \sum_{k=1}^d \sum_{j=1}^N \int_0^t \Phi_{i,k}(s)\Psi_{k,j}(s)\, dM_j(s) + \sum_{k,\ell=1}^d \int_0^t \Phi_{i,k}(s)A_{k\ell}(s)\, dB^\ell(s).$$

Since $\Phi(s)\psi_{N,d}(\Phi(s))\Phi(s) = \Phi(s)$, we see that

$$\tilde{M}_i(t) = \sum_{j=1}^N \int_0^t (\Phi(s)\psi_{N,d}(\Phi(s)))_{i,j}\, dM_j(s).$$

Therefore we see that for $i, h = 1, \ldots, N$

$$\langle M_i - \tilde{M}_i, M_h\rangle(t) = \langle M_i, M_h\rangle(t) - \sum_{j=1}^N \int_0^t (\Phi(s)\psi_{N,d}(\Phi(s)))_{i,j}\, d\langle M_j, M_h\rangle(s) = \sum_{k=1}^d \int_0^t (\Phi(s) - \Phi(s)\psi_{N,d}(\Phi(s))\Phi(s))_{i,k}\Phi_{h,k}(s)\, ds = 0.$$

This implies that $\langle M_i - \tilde{M}_i\rangle = 0$. So we see that $M_i = \tilde{M}_i$. ∎
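The matrix identities driving this proof can be checked at a single fixed $(s, \omega)$. The sketch below assumes that $\psi_{N,d}$ is the Moore–Penrose generalized inverse (our reading of Appendix 8.4, which we do not have in front of us) and uses NumPy; the dimensions and seed are our own choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 3, 5
Phi = rng.normal(size=(N, d))   # the matrix Phi(s, omega) at one fixed (s, omega)
Psi = np.linalg.pinv(Phi)       # Moore-Penrose inverse, d x N
P = Psi @ Phi                   # d x d orthogonal projection
A = np.eye(d) - P

assert np.allclose(P @ P, P) and np.allclose(P, P.T)   # projection matrix
assert np.allclose(Phi @ Psi @ Phi, Phi)               # Phi psi(Phi) Phi = Phi
assert np.allclose(P @ P.T + A @ A.T, np.eye(d))       # gives <B~k, B~l>(t) = delta_kl t
```

The last assertion is exactly the pointwise identity that makes $\langle \tilde{B}^k, \tilde{B}^\ell\rangle(t) = \delta_{k\ell}t$, so that Theorem 5.1.1 applies to $\tilde{B}$.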

5.3 Girsanov Transformation

Proposition 5.3.1 Let $M \in \mathcal{M}^c_{loc}$, $c \in (0, \infty)$ and $T > 0$. Suppose that

$$\inf_{t\in[0,T]} M(t) \geq -c.$$

Then $E[M(t) \mid \mathcal{F}_s] \leq M(s)$ for any $s, t \in [0, T]$ with $t > s$.

Proof From the assumption, there is an increasing sequence $\{\tau_n\}_{n=1}^\infty$ of stopping times such that $M^{\tau_n} \in \mathcal{M}^c_b$, $n = 1, 2, \ldots$ Then we see that

$$E[M^{\tau_n}(t) + c \mid \mathcal{F}_s] = M^{\tau_n}(s) + c, \qquad t > s \geq 0.$$

Therefore, by Fatou's lemma for conditional expectations, we see that

$$E[M(t) + c \mid \mathcal{F}_s] = E[\lim_{n\to\infty}(M^{\tau_n}(t) + c) \mid \mathcal{F}_s] \leq \liminf_{n\to\infty} E[M^{\tau_n}(t) + c \mid \mathcal{F}_s] = M(s) + c, \qquad t > s \geq 0.$$

This implies our assertion. ∎

We define an adapted continuous process $\mathcal{E}(X) : [0, \infty) \times \Omega \to [0, \infty)$ for each continuous semi-martingale $X$ by

$$\mathcal{E}(X)(t) = \exp\Big( X(t) - \frac{1}{2}\langle X\rangle(t) \Big), \qquad t \in [0, \infty).$$


Proposition 5.3.2 For any $M \in \mathcal{M}^c_{loc}$

$$\mathcal{E}(M)(t) = 1 + \int_0^t \mathcal{E}(M)(s)\, dM(s), \qquad t \in [0, \infty).$$

Therefore $\mathcal{E}(M) - 1 \in \mathcal{M}^c_{loc}$. Also, $\{\mathcal{E}(M)(t);\ t \in [0, \infty)\}$ is a supermartingale. In particular, $E[\mathcal{E}(M)(t)] \leq 1$, $t \in [0, \infty)$.

Proof Let $f : \mathbb{R}^2 \to \mathbb{R}$ be given by $f(x, y) = \exp(x - y/2)$. Then $f \in C^2(\mathbb{R}^2)$. So by Itô’s formula, it is easy to see that

$$\mathcal{E}(M)(t) = f(M(t), \langle M\rangle(t)) = 1 + \int_0^t \mathcal{E}(M)(s)\, dM(s).$$

So we see that $\mathcal{E}(M) - 1 \in \mathcal{M}^c_{loc}$. Since $\mathcal{E}(M)(t)$ is non-negative, we see that $\mathcal{E}(M)(t) - 1 \geq -1$, $t \in [0, \infty)$. So we see by Proposition 5.3.1 that $E[\mathcal{E}(M)(t) \mid \mathcal{F}_s] \leq \mathcal{E}(M)(s)$. ∎
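For $M = \theta B$ with constant $\theta$, the stochastic exponential is $\mathcal{E}(\theta B)(t) = \exp(\theta B(t) - \theta^2 t/2)$, and the supermartingale property holds with equality: $E[\mathcal{E}(\theta B)(t)] = 1$. A minimal Monte-Carlo sketch (assuming NumPy; sample size, seed and tolerance are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, T, theta = 200_000, 1.0, 1.5
B_T = rng.normal(0.0, np.sqrt(T), n_paths)

# E(theta*B)(T) = exp(theta*B(T) - theta^2*T/2), since <theta*B>(T) = theta^2*T
Z = np.exp(theta * B_T - 0.5 * theta**2 * T)
assert abs(Z.mean() - 1.0) < 0.05
```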

Proposition 5.3.3 Let $M \in \mathcal{M}^c_{loc}$ and $T > 0$. If $E[\mathcal{E}(M)(T)] = 1$, then

$$E[\mathcal{E}(M)(T) \mid \mathcal{F}_t] = \mathcal{E}(M)(t), \qquad t \in [0, T].$$

Proof From the previous proposition, we see that

$$E[\mathcal{E}(M)(T) \mid \mathcal{F}_t] \leq \mathcal{E}(M)(t), \qquad t \in [0, T]. \qquad (5.1)$$

So we see that

$$0 \leq E[\mathcal{E}(M)(t) - E[\mathcal{E}(M)(T) \mid \mathcal{F}_t]] = E[\mathcal{E}(M)(t)] - E[\mathcal{E}(M)(T)] \leq 0.$$

Therefore we have our assertion. ∎

Proposition 5.3.4 Let $\rho : \Omega \to (0, \infty)$ be a random variable such that $E[\rho] = 1$ and $\rho(\omega) > 0$ for all $\omega \in \Omega$. Let $Q$ be a probability measure on $(\Omega, \mathcal{F})$ given by

$$Q(A) = E[\rho, A], \qquad A \in \mathcal{F}.$$

Also, let $\rho_t$, $t \in [0, \infty)$, be random variables given by $\rho_t = E[\rho \mid \mathcal{F}_t]$.


Then we have the following.
(1) $\rho_t > 0$ a.s. for all $t \geq 0$. Also, for any non-negative random variable $X$,

$$E^Q[X \mid \mathcal{F}_t] = \rho_t^{-1}E[\rho X \mid \mathcal{F}_t], \qquad t \in [0, \infty).$$

(2) Let $X : [0, \infty) \times \Omega \to \mathbb{R}$ be an $\{\mathcal{F}_t\}_{t\in[0,\infty)}$-adapted stochastic process. Then the following two conditions are equivalent.
(i) $X$ is an $\{\mathcal{F}_t\}_{t\in[0,\infty)}$-martingale with respect to the probability measure $Q$.
(ii) $\{\rho_t X_t;\ t \in [0, \infty)\}$ is an $\{\mathcal{F}_t\}_{t\in[0,\infty)}$-martingale with respect to the probability measure $P$.

Proof Since $\rho > 0$, we see that $\rho_t > 0$ a.s. Note that $\rho_t^{-1}E[\rho X \mid \mathcal{F}_t]$ is $\mathcal{F}_t$-measurable. Then for any $A \in \mathcal{F}_t$

$$E^Q[\rho_t^{-1}E[\rho X \mid \mathcal{F}_t], A] = E[\rho(E[\rho \mid \mathcal{F}_t]^{-1}E[\rho X \mid \mathcal{F}_t]1_A)] = E[E[\rho \mid \mathcal{F}_t](E[\rho \mid \mathcal{F}_t]^{-1}E[\rho X \mid \mathcal{F}_t]1_A)] = E[E[\rho X \mid \mathcal{F}_t]1_A] = E[\rho X 1_A] = E^Q[X, A].$$

This implies Assertion (1).

Let us show Assertion (2). Note that $E^Q[|X_t|] = E[\rho|X_t|] = E[\rho_t|X_t|]$. So we see that $X_t$ is $Q$-integrable if and only if $\rho_t X_t$ is $P$-integrable. Also, for $t > s \geq 0$, we see that

$$E^Q[X_t \mid \mathcal{F}_s] = \rho_s^{-1}E[\rho X_t \mid \mathcal{F}_s] = \rho_s^{-1}E[E[\rho X_t \mid \mathcal{F}_t] \mid \mathcal{F}_s] = \rho_s^{-1}E[\rho_t X_t \mid \mathcal{F}_s].$$

Therefore we see that

$$E^Q[X_t - X_s \mid \mathcal{F}_s] = \rho_s^{-1}E[\rho_t X_t - \rho_s X_s \mid \mathcal{F}_s].$$

This implies Assertion (2). ∎

The following theorem is called Girsanov’s theorem.

Theorem 5.3.1 Let $B : [0, \infty) \times \Omega \to \mathbb{R}^d$ be a $d$-dimensional $\{\mathcal{F}_t\}_{t\in[0,\infty)}$-Brownian motion. Let $M \in \mathcal{M}^c_{loc}$ and $T > 0$, and suppose that $E[\mathcal{E}(M)(T)] = 1$. Let $Q$ be a probability measure on $(\Omega, \mathcal{F})$ given by

$$Q(A) = E[\mathcal{E}(M)(T), A], \qquad A \in \mathcal{F}.$$

Also, let $\tilde{B} : [0, \infty) \times \Omega \to \mathbb{R}^d$ be given by

$$\tilde{B}^i = B^i - \langle B^i, M\rangle^T, \qquad i = 1, \ldots, d.$$


Then $\tilde{B}$ is a $d$-dimensional $\{\mathcal{F}_t\}_{t\in[0,\infty)}$-Brownian motion under the probability measure $Q$.

Proof By Proposition 5.3.3, we have $E[\mathcal{E}(M)(T) \mid \mathcal{F}_t] = \mathcal{E}(M^T)(t)$. Let $\{\tau_n;\ n \geq 1\}$ be an increasing sequence of stopping times given by

$$\tau_n = \inf\Big\{ t \geq 0;\ \sum_{i=1}^d (|B^i(t)| + |\tilde{B}^i(t)|) + |M(t)| + \langle M\rangle(t) \geq n \Big\}.$$

Then by Itô’s formula, we see that for $n \geq m \geq 1$

$$\begin{aligned}
\mathcal{E}((M^T)^{\tau_n})(t)(\tilde{B}^i)^{\tau_m}(t) &= \int_0^t \mathcal{E}((M^T)^{\tau_n})(s)(\tilde{B}^i)^{\tau_m}(s)\, d(M^T)^{\tau_n}(s) + \int_0^t \mathcal{E}((M^T)^{\tau_n})(s)\, d(\tilde{B}^i)^{\tau_m}(s) \\
&\quad + \int_0^t \mathcal{E}((M^T)^{\tau_n})(s)\, d\langle (M^T)^{\tau_n}, (\tilde{B}^i)^{\tau_m}\rangle(s) \\
&= \int_0^t \mathcal{E}((M^T)^{\tau_n})(s)(\tilde{B}^i)^{\tau_m}(s)\, d(M^T)^{\tau_n}(s) + \int_0^t \mathcal{E}((M^T)^{\tau_n})(s)\, d(B^i)^{\tau_m}(s).
\end{aligned}$$

So we see that $Y^i_{n,m} = \mathcal{E}((M^T)^{\tau_n})(\tilde{B}^i)^{\tau_m}$ is a continuous local martingale. Since it is easy to see that $Y^i_{n,m}$ is bounded, we see that $Y^i_{n,m}$ is a martingale. Also, we see that

$$E[|\mathcal{E}(M^T)(t)(\tilde{B}^i)^{\tau_m}(t) - Y^i_{n,m}(t)|] \leq m\,E[|\mathcal{E}(M^T)(t) - \mathcal{E}(M^T)^{\tau_n}(t)|].$$

Since

$$\mathcal{E}(M^T)^{\tau_n}(t) = E[\mathcal{E}(M)(T) \mid \mathcal{F}_{T\wedge\tau_n\wedge t}],$$

we see by Proposition 2.6.5 and Theorem 2.6.1 that $E[|\mathcal{E}(M^T)(t) - \mathcal{E}(M^T)^{\tau_n}(t)|] \to 0$, $n \to \infty$. So we see that $\mathcal{E}(M^T)(\tilde{B}^i)^{\tau_m}$ is a martingale. By this fact and Proposition 5.3.4, we see that $(\tilde{B}^i)^{\tau_m}$ is a $Q$-martingale. So we see that $\tilde{B}^i$, $i = 1, \ldots, d$, is a $Q$-continuous local martingale with $\tilde{B}^i(0) = 0$. By Proposition 4.2.3, we see that

$$P(|Q^{(n)}(\tilde{B}^i, \tilde{B}^j)(t) - \delta_{ij}t| \geq \varepsilon) \to 0, \qquad n \to \infty,\ i, j = 1, \ldots, d,\ t \geq 0,\ \varepsilon > 0.$$

So we see that

$$Q(|Q^{(n)}(\tilde{B}^i, \tilde{B}^j)(t) - \delta_{ij}t| \geq \varepsilon) \to 0, \qquad n \to \infty,\ i, j = 1, \ldots, d,\ t \geq 0,\ \varepsilon > 0.$$

5 Applications of Stochastic Integral

This implies that B˜ i , B˜ j (t) = δi j t under the probability measure Q. So by Theorem 5.1.1 we have our assertion. Proposition 5.3.5 Let M ∈ Mcloc and T > 0. Assume that

1 E exp M(T ) < ∞. 2 Then for any α ∈ (0, 1)

E[E(α M)(T )] = 1.

Proof Let us take an α ∈ (0, 1) and fix it. Let us take a β ∈ (0, 1) such that (1 + β)α 2 < 1. Since p 1+β − 1 → 1 + β, lim p↓1 p−1 there is a p ∈ (1, ∞) such that p 1+β ( p 1+β − 1)α 2 1 < . 2( p − 1) 2 Let q = p/( p − 1). Now let τn , n 1, be a stopping time given by τn = inf{t 0; |M(t)| + M(t) > n}. Then E(α M τn ) is a bounded martingale. So we see that E[E(α M τn )(T )] = E[E(α M τn )(0)] = 1. So we see that τn

E(α M )(T )

p1+β

p 1+β ( p 1+β − 1)α 2 τn M (T ) = E( p α M )(T ) exp 2 1+β 1+β

− 1)α 2 p p (p 1+β τn τn M (T ) = E( p α M )(T ) exp q 2( p − 1)

p M(T ) . E( p 1+β α M τn )(T ) exp 2q 1+β

τn

Therefore we see that τn

E(α M )(T )

pβ

E( p

1+β

τn

α M )(T )

1 M(T ) . exp 2q

1/ p

5.3 Girsanov Transformation

117

This implies that

1/q 1 β E[E(α M τn )(T ) p ] E[E( p 1+β α M τn )(T )]1/ p E exp M(T ) 2

1/q 1 M(T ) E exp . 2 So from the assumption, we see that β

sup E[E(α M τn )(T ) p ] < ∞. n

Therefore by Corollary 2.6.1, we see that {E(α M τn )(T ); n 1} is uniformly integrable. Since E(α M τn )(T ) converges to E(α M)(T ), we see by Theorem 2.6.1 that E[E(α M)(T )] = lim E[E(α M τn )(T )] = 1. n→∞

The following theorem is called Novikov’s theorem. Theorem 5.3.2 Let M ∈ Mcloc and T > 0. If

1 M(T ) < ∞, E exp 2 then E[E(M)(T )] = 1. Proof Let α ∈ (0, 1). Then E(α M)(T ) = E(M)(T )α exp

α(1 − α) M(T ) . 2

Therefore we see by previous proposition that 1−α α M(T ) 1 = E[E(α M)(T )] E[E(M)(T )]α E exp 2

1−α 1 M(T ) E[E(M)(T )]α E exp . 2 Then we see that

1 M(T ) E[E(M)(T )] E exp 2

1−1/α .


Letting $\alpha \uparrow 1$, we see that $E[\mathcal{E}(M)(T)] \geq 1$. This fact and Proposition 5.3.2 imply our assertion. ∎

When we use Girsanov’s theorem (Theorem 5.3.1), we have to verify the assumption $E[\mathcal{E}(M)(T)] = 1$. We often use Novikov’s theorem (Theorem 5.3.2) to show it. We give such an example in Chap. 7.
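Girsanov's theorem can be illustrated in the simplest case $M = \theta B$ with constant $\theta$ (Novikov's condition is clearly satisfied, since $\langle M\rangle(T) = \theta^2 T$ is deterministic): under $Q$, $\tilde{B}(t) = B(t) - \theta t$ should again be a standard Brownian motion. The Monte-Carlo sketch below assumes NumPy; sample size, seed and tolerances are our own choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, T, theta = 400_000, 1.0, 0.7
B_T = rng.normal(0.0, np.sqrt(T), n_paths)

# Radon-Nikodym density E(theta*B)(T) and the shifted endpoint B~(T) = B(T) - theta*T
density = np.exp(theta * B_T - 0.5 * theta**2 * T)
B_tilde_T = B_T - theta * T

# Under Q (i.e., weighting by the density), B~(T) should be N(0, T)
q_mean = np.mean(density * B_tilde_T)
q_var = np.mean(density * B_tilde_T**2)
assert abs(q_mean) < 0.02
assert abs(q_var - T) < 0.05
```

Here the change of measure is realized by weighting each sample with the density rather than by resampling, which is exactly how $Q(A) = E[\mathcal{E}(M)(T), A]$ acts.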

5.4 Moment Inequalities

Theorem 5.4.1 Let $p \in (0, \infty)$. Then there are $c_p, C_p \in (0, \infty)$ such that

$$c_p E[\langle M\rangle(T)^{p/2}] \leq E[M^*(T)^p] \leq C_p E[\langle M\rangle(T)^{p/2}]$$

for any $M \in \mathcal{M}^c_{loc}$ and $T \in [0, \infty)$, where $M^*(T) = \sup_{t\in[0,T]}|M(t)|$.

Proof Let $\tau_n$, $n \geq 1$, be stopping times given by $\tau_n = \inf\{t \geq 0;\ |M(t)| + \langle M\rangle(t) > n\}$. Then we see that $(M^{\tau_n})^*(T) \uparrow M^*(T)$, $n \to \infty$, and $\langle M^{\tau_n}\rangle(T) \uparrow \langle M\rangle(T)$, $n \to \infty$. So we may assume that $M \in \mathcal{M}^c_b$ and $\langle M\rangle$ is bounded.

Case 1. $p \in [2, \infty)$. Let $f(x) = |x|^p$, $x \in \mathbb{R}$. Then we see that $f \in C^2(\mathbb{R})$. Also, we see that $f'(x) = p\,\mathrm{sgn}(x)|x|^{p-1}$ and $f''(x) = p(p-1)|x|^{p-2}$. So by Itô’s formula we see that

$$|M(t)|^p = \int_0^t f'(M(s))\, dM(s) + \frac{p(p-1)}{2}\int_0^t |M(s)|^{p-2}\, d\langle M\rangle(s).$$

Therefore we see that

$$E[|M(T)|^p] = \frac{p(p-1)}{2}E\Big[ \int_0^T |M(s)|^{p-2}\, d\langle M\rangle(s) \Big] \leq \frac{p(p-1)}{2}E[M^*(T)^{p-2}\langle M\rangle(T)] \leq \frac{p(p-1)}{2}E[M^*(T)^p]^{1-2/p}E[\langle M\rangle(T)^{p/2}]^{2/p}.$$

By Doob’s inequality (Theorem 3.3.1) we see that


$$E[M^*(T)^p] \leq \Big(\frac{p}{p-1}\Big)^p E[|M(T)|^p] \leq \Big(\frac{p}{p-1}\Big)^p \frac{p(p-1)}{2}E[M^*(T)^p]^{1-2/p}E[\langle M\rangle(T)^{p/2}]^{2/p}.$$

This implies the existence of $C_p$.

Let $N(t) = \int_0^t \langle M\rangle(s)^{(p-2)/4}\, dM(s)$. Then we see that

$$\langle N\rangle(t) = \int_0^t \langle M\rangle(s)^{(p-2)/2}\, d\langle M\rangle(s) = \frac{2}{p}\langle M\rangle(t)^{p/2}.$$

So by Itô’s formula we see that

$$M(t)\langle M\rangle(t)^{(p-2)/4} = N(t) + \int_0^t M(s)\, d(\langle M\rangle(s)^{(p-2)/4}).$$

This implies that

$$|N(t)| \leq M^*(t)\langle M\rangle(t)^{(p-2)/4} + \int_0^t M^*(t)\, d(\langle M\rangle(s)^{(p-2)/4}) = 2M^*(t)\langle M\rangle(t)^{(p-2)/4}.$$

So we see that

$$\frac{2}{p}E[\langle M\rangle(T)^{p/2}] = E[\langle N\rangle(T)] = E[N(T)^2] \leq 4E[M^*(T)^p]^{2/p}E[\langle M\rangle(T)^{p/2}]^{1-2/p}.$$

This implies the existence of $c_p$.

Case 2. $p \in (0, 2)$. Let us take an arbitrary $\varepsilon \in (0, 1]$. Let $N(t) = \int_0^t (\varepsilon + \langle M\rangle(s))^{(p-2)/4}\, dM(s)$. Then by Itô’s formula we see that

$$N(t)(\varepsilon + \langle M\rangle(t))^{(2-p)/4} = \int_0^t (\varepsilon + \langle M\rangle(s))^{(2-p)/4}\, dN(s) + \int_0^t N(s)\, d((\varepsilon + \langle M\rangle(s))^{(2-p)/4}) = M(t) + \int_0^t N(s)\, d((\varepsilon + \langle M\rangle(s))^{(2-p)/4}).$$

So we see that

$$|M(t)| \leq N^*(t)(\varepsilon + \langle M\rangle(t))^{(2-p)/4} + \int_0^t N^*(t)\, d((\varepsilon + \langle M\rangle(s))^{(2-p)/4}) \leq 2N^*(t)(\varepsilon + \langle M\rangle(t))^{(2-p)/4}.$$

Therefore we see that


$$M^*(t) \leq 2N^*(t)(\varepsilon + \langle M\rangle(t))^{(2-p)/4}.$$

Noting that $p/2 + (2-p)/2 = 1$, we see that

$$\begin{aligned}
E[M^*(t)^p] &\leq 2^p E[N^*(t)^p(\varepsilon + \langle M\rangle(t))^{p(2-p)/4}] \leq 2^p E[N^*(t)^2]^{p/2}E[(\varepsilon + \langle M\rangle(t))^{p/2}]^{1-p/2} \\
&\leq 2^{2p}E[\langle N\rangle(t)]^{p/2}E[(\varepsilon + \langle M\rangle(t))^{p/2}]^{1-p/2} = 4^p E\Big[ \int_0^t (\varepsilon + \langle M\rangle(s))^{(p-2)/2}\, d\langle M\rangle(s) \Big]^{p/2}E[(\varepsilon + \langle M\rangle(t))^{p/2}]^{1-p/2} \\
&= 4^p\Big(\frac{2}{p}\Big)^{p/2}E[(\varepsilon + \langle M\rangle(t))^{p/2} - \varepsilon^{p/2}]^{p/2}E[(\varepsilon + \langle M\rangle(t))^{p/2}]^{1-p/2}.
\end{aligned}$$

So letting $\varepsilon \downarrow 0$, we see that

$$E[M^*(t)^p] \leq \Big(\frac{32}{p}\Big)^{p/2}E[\langle M\rangle(t)^{p/2}].$$

This implies the existence of $C_p$.

Also, we see that

$$\langle M\rangle(t)^{p/2} = (\langle M\rangle(t)(\varepsilon + M^*(t))^{-(2-p)})^{p/2}(\varepsilon + M^*(t))^{p(2-p)/2}.$$

So by Hölder’s inequality we see that

$$E[\langle M\rangle(t)^{p/2}] \leq E[\langle M\rangle(t)(\varepsilon + M^*(t))^{-(2-p)}]^{p/2}E[(\varepsilon + M^*(t))^p]^{1-p/2}.$$

Let $N(t) = \int_0^t (\varepsilon + M^*(s))^{-(2-p)/2}\, dM(s)$. Then we see that

$$\langle N\rangle(t) = \int_0^t (\varepsilon + M^*(s))^{-(2-p)}\, d\langle M\rangle(s) \geq \langle M\rangle(t)(\varepsilon + M^*(t))^{-(2-p)}.$$

So we see that

$$\langle M\rangle(t) \leq \langle N\rangle(t)(\varepsilon + M^*(t))^{2-p}.$$

By Itô’s formula we have

$$M(t)(\varepsilon + M^*(t))^{-(2-p)/2} = \int_0^t (\varepsilon + M^*(s))^{-(2-p)/2}\, dM(s) + \int_0^t M(s)\, d((\varepsilon + M^*(s))^{-(2-p)/2}) = N(t) - \frac{2-p}{2}\int_0^t M(s)(\varepsilon + M^*(s))^{-(4-p)/2}\, dM^*(s).$$


So we see that

$$\begin{aligned}
|N(t)| &\leq |M(t)|(\varepsilon + M^*(t))^{-(2-p)/2} + \frac{2-p}{2}\int_0^t M^*(s)(\varepsilon + M^*(s))^{-(4-p)/2}\, dM^*(s) \\
&\leq M^*(t)(\varepsilon + M^*(t))^{-(2-p)/2} + \frac{2-p}{2}\int_0^t (\varepsilon + M^*(s))^{-(2-p)/2}\, dM^*(s) \\
&\leq (\varepsilon + M^*(t))^{p/2} + \frac{2-p}{p}(\varepsilon + M^*(t))^{p/2} = \frac{2}{p}(\varepsilon + M^*(t))^{p/2}.
\end{aligned}$$

Therefore we see that

$$E[\langle N\rangle(t)] = E[N(t)^2] \leq \Big(\frac{2}{p}\Big)^2 E[(\varepsilon + M^*(t))^p],$$

and so we have

$$E[\langle M\rangle(t)^{p/2}] \leq E[\langle N\rangle(t)]^{p/2}E[(\varepsilon + M^*(t))^p]^{1-p/2} \leq \Big(\frac{2}{p}\Big)^p E[(\varepsilon + M^*(t))^p].$$

Letting $\varepsilon \downarrow 0$, we see the existence of $c_p$. ∎
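Theorem 5.4.1 says that $E[M^*(T)^p]$ and $E[\langle M\rangle(T)^{p/2}]$ are comparable up to constants depending only on $p$. A rough Monte-Carlo sketch for $M(t) = \int_0^t B\, dB$ and $p = 1$ (assuming NumPy; the grid, seed, and the very loose bracket below are ours, not sharp constants):

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_steps, T, p = 5_000, 400, 1.0, 1.0
dt = T / n_steps
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.cumsum(dB, axis=1)
B_prev = np.concatenate([np.zeros((n_paths, 1)), B[:, :-1]], axis=1)

# M(t) = int_0^t B dB, so <M>(t) = int_0^t B(s)^2 ds
M = np.cumsum(B_prev * dB, axis=1)
qv_T = np.sum(B_prev**2, axis=1) * dt

lhs = np.mean(np.max(np.abs(M), axis=1) ** p)   # E[M*(T)^p]
rhs = np.mean(qv_T ** (p / 2))                  # E[<M>(T)^(p/2)]
ratio = lhs / rhs
assert 0.1 < ratio < 10.0   # the two sides are comparable, as the theorem asserts
```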

5.5 Itô’s Representation Theorem

Let $(\Omega, \mathcal{F}, P)$ be a complete probability space, and let $B : [0, \infty) \times \Omega \to \mathbb{R}^d$ be a $d$-dimensional Brownian motion. Let $\mathcal{G}_t = \sigma\{\sigma\{B(s);\ s \leq t\} \cup \mathcal{N}_0\}$, $t \geq 0$. Then by Proposition 3.7.4 we see that $(\Omega, \mathcal{F}, P, \{\mathcal{G}_t\}_{t\in[0,\infty)})$ is a standard filtered probability space and $B$ is a $\{\mathcal{G}_t\}_{t\in[0,\infty)}$-Brownian motion. In this section we consider all notions over the standard filtered probability space $(\Omega, \mathcal{F}, P, \{\mathcal{G}_t\}_{t\in[0,\infty)})$.

Let $A(t) = t$. Then we see that $\langle B^i\rangle = A$, $i = 1, \ldots, d$. Let

$$I(\xi) = \sum_{i=1}^d \xi_i \bullet B^i$$

for each $\xi = (\xi_1, \ldots, \xi_d) \in \mathcal{L}^2(A)^d$. Then we see that $I(\xi) \in \tilde{\mathcal{M}}^c_2$, and that

$$\sup_{t\in[0,\infty)} E[I(\xi)(t)^2] = \sum_{i=1}^d E\Big[ \int_0^\infty \xi_i(t)^2\, dt \Big].$$

For each $T > 0$, $L^2_T = L^2(\Omega, \mathcal{G}_T, P)$ becomes a Hilbert space. Let $\mathcal{I}^2_T = \{I(\xi)(T);\ \xi \in \mathcal{L}^2(A)^d\}$. Then $\mathcal{I}^2_T$ is a vector subspace of $L^2_T$. (Precisely speaking, we have to consider its quotient space.) Since we have

$$E[I(\xi)(T)I(\tilde{\xi})(T)] = E\Big[ \int_0^T \xi(t)\cdot\tilde{\xi}(t)\, dt \Big],$$

we may consider $\mathcal{I}^2_T$ as a closed vector subspace. For any $\xi \in \mathcal{L}^2(A)^d$ and a stopping time $\tau$ we see that $I(\xi)(T \wedge \tau) = I(1_{[0,\tau]}\xi)(T)$. So we see that $I(\xi)(T \wedge \tau) \in \mathcal{I}^2_T$.

Proposition 5.5.1 Let $X \in L^2_T$ be such that $E[X] = 0$. Let $M(t) = E[X \mid \mathcal{G}_t]$, $t \geq 0$. If $E[XZ] = 0$ for all $Z \in \mathcal{I}^2_T$, then $M(t)I(\xi)(t)$, $t \geq 0$, is a martingale for any $\xi \in \mathcal{L}^2(A)^d$.

Proof Let $t \in [0, T]$ and $C \in \mathcal{G}_t$. Let $\tau : \Omega \to [0, \infty)$ be given by $\tau(\omega) = t$, $\omega \in \Omega \setminus C$, and $\tau(\omega) = T$, $\omega \in C$. Then it is easy to see that $\tau$ is a stopping time. Moreover, we see that

$$I(\xi)(T \wedge \tau) = 1_C I(\xi)(T) + (1 - 1_C)I(\xi)(t) = 1_C(I(\xi)(T) - I(\xi)(t)) + I(\xi)(t).$$

So we see that

$$E[M(T)I(\xi)(T), C] - E[M(t)I(\xi)(t), C] = E[X1_C I(\xi)(T)] - E[E[X \mid \mathcal{G}_t]1_C I(\xi)(t)] = E[XI(\xi)(T \wedge \tau)] - E[XI(\xi)(T \wedge t)] = 0.$$

So we see that $E[M(T)I(\xi)(T) \mid \mathcal{G}_t] = M(t)I(\xi)(t)$, $t \leq T$. If $t > s \geq T$, then $E[M(t)I(\xi)(t) \mid \mathcal{G}_s] = XE[I(\xi)(t) \mid \mathcal{G}_s] = M(s)I(\xi)(s)$. These imply our assertion. ∎

Proposition 5.5.2 Let $X$ be a bounded $\mathcal{G}_T$-measurable random variable such that $E[X] = 0$. Assume that $E[XZ] = 0$ for all $Z \in \mathcal{I}^2_T$. Then $X = 0$ a.s.

Proof From the assumption there is an $a > 0$ such that $|X| \leq a$. Let $\rho$ be a random variable given by

$$\rho = 1 + \frac{1}{2a}X.$$

Then $\rho$ is a bounded $\mathcal{G}_T$-measurable random variable, $\rho \geq 1/2$, and $E[\rho] = 1$. Also, from the assumption it is easy to see that $E[\rho Z] = E[Z]$, $Z \in \mathcal{I}^2_T$. Let $\rho(t) = E[\rho \mid \mathcal{G}_t]$. For any $\xi \in \mathcal{L}^2(A)^d$, we see that $\rho(t)I(\xi)(t) = I(\xi)(t) + (1/2a)E[X \mid \mathcal{G}_t]I(\xi)(t)$, and so we see that $\rho(t)I(\xi)(t)$ is a martingale by the previous proposition. Let $Q$ be a probability measure on $(\Omega, \mathcal{F})$ given by $Q(C) = E[\rho, C]$, $C \in \mathcal{F}$. Then by Proposition 5.3.4 we see that $I(\xi)$ is a $Q$-martingale for any $\xi \in \mathcal{L}^2(A)^d$. Note that

$$\exp\Big( \sqrt{-1}\sum_{i=1}^d \eta_i B^i(t) - \frac{|\eta|^2}{2}t \Big) - 1 = \sum_{i=1}^d \int_0^t \sqrt{-1}\,\eta_i\exp\Big( \sqrt{-1}\sum_{j=1}^d \eta_j B^j(s) - \frac{|\eta|^2}{2}s \Big)\, dB^i(s)$$

for any $\eta \in \mathbb{R}^d$ and $t \geq 0$. So we see that

$$\exp\Big( \sqrt{-1}\sum_{i=1}^d \eta_i B^i(t \wedge T) - \frac{|\eta|^2}{2}(t \wedge T) \Big) - 1, \qquad t \geq 0,$$

is a $Q$-martingale for any $\eta \in \mathbb{R}^d$. Then similarly to the proof of Theorem 5.1.1, we see that $B$ is a Brownian motion under the probability $Q$. So for any $n \geq 1$, $0 \leq t_1 < t_2 < \cdots < t_n$, and bounded Borel measurable functions $f : (\mathbb{R}^d)^n \to \mathbb{R}$ we see that

$$E[(\rho - 1)f(B(t_1), \ldots, B(t_n))] = E^Q[f(B(t_1), \ldots, B(t_n))] - E[f(B(t_1), \ldots, B(t_n))] = 0.$$

Since $\rho - 1$ is $\mathcal{G}_T$-measurable, we see that $\rho - 1 = 0$ a.s., and so $X = 0$ a.s. ∎

Theorem 5.5.1 (1) $L^2_T = \{c + Y;\ c \in \mathbb{R},\ Y \in \mathcal{I}^2_T\}$ for any $T > 0$. That is, if $X \in L^2_T$ and $E[X] = 0$, then $X \in \mathcal{I}^2_T$.
(2) For any $\{\mathcal{G}_t\}_{t\in[0,\infty)}$-martingale $M$, there is a continuous process $\tilde{M}$ such that $M(t) = \tilde{M}(t)$ a.s. for all $t \geq 0$.

Proof (1) First, we assume that $X$ is a bounded $\mathcal{G}_T$-measurable random variable such that $E[X] = 0$. Since $\mathcal{I}^2_T$ is a closed vector subspace of $L^2_T$, we see that there are $Y \in L^2_T$ and $\tilde{\xi} \in \mathcal{L}^2(A)^d$ such that $X = Y + I(\tilde{\xi})(T)$ and $E[YZ] = 0$ for all $Z \in \mathcal{I}^2_T$. Then we see that $E[Y] = E[X] - E[I(\tilde{\xi})(T)] = 0$. Let $M(t) = E[Y \mid \mathcal{G}_t]$, $t \geq 0$. Then by Corollary 3.2.1 we may assume that $M(t)$ is a D-process. Also, by Proposition 5.5.1 we see that $M(t)I(\xi)(t)$, $t \geq 0$, is a martingale and a D-process for any $\xi \in \mathcal{L}^2(A)^d$. Let $\tau_n$, $n \geq 1$, be an increasing sequence of stopping times given by $\tau_n = \inf\{t \geq 0;\ |I(\tilde{\xi})(t)| > n\} \wedge n$. Then we see that $M(\tau_n \wedge T) = E[X \mid \mathcal{G}_{\tau_n\wedge T}] - I(\tilde{\xi})(\tau_n \wedge T)$, and so $M(\tau_n \wedge T)$ is bounded. For any $\xi \in \mathcal{L}^2(A)^d$ we see that


$$E[M(\tau_n \wedge T)I(\xi)(T)] = E[M(\tau_n \wedge T)E[I(\xi)(T) \mid \mathcal{G}_{\tau_n\wedge T}]] = E[M(\tau_n \wedge T)I(\xi)(\tau_n \wedge T)] = 0.$$

So by Proposition 5.5.2 we see that $M(\tau_n \wedge T) = 0$ a.s. Letting $n \to \infty$, we see that $Y = 0$. This shows that $X = I(\tilde{\xi})(T) \in \mathcal{I}^2_T$.

Now assume that $X \in L^2_T$. Let $X_n = (X \wedge n) \vee (-n) - E[(X \wedge n) \vee (-n)]$, $n \geq 1$. Then $X_n$ is bounded and $E[X_n] = 0$. So we see that $X_n \in \mathcal{I}^2_T$. Since $E[(X_n - (X - E[X]))^2] \to 0$, $n \to \infty$, we see that $X - E[X] \in \mathcal{I}^2_T$. This implies Assertion (1).

(2) Let $M$ be a $\{\mathcal{G}_t\}_{t\in[0,\infty)}$-martingale. Then $M(t) = E[M(T) \mid \mathcal{G}_t]$, $t \in [0, T]$, for any $T > 0$. For each $n \geq 1$ let $X_{T,n} = (M(T) \wedge n) \vee (-n)$. Since $X_{T,n} \in L^2_T$, we see that there is a $\xi_{T,n} \in \mathcal{L}^2(A)^d$ such that $X_{T,n} = E[X_{T,n}] + I(\xi_{T,n})(T)$. Let $N_{T,n} = E[X_{T,n}] + I(\xi_{T,n})$. Then $N_{T,n}$ is a continuous process. Moreover, we see that

$$E[|M(t) - N_{T,n}(t)|] = E[|E[M(T) - X_{T,n} \mid \mathcal{G}_t]|] \leq E[|M(T) - X_{T,n}|] \to 0, \qquad n \to \infty,\ t \in [0, T].$$

By Theorem 3.3.1 (2) we see that for any $\varepsilon > 0$

$$\varepsilon P\Big( \sup_{t\in[0,T]}|N_{T,n}(t) - N_{T,m}(t)| > \varepsilon \Big) \leq E[|N_{T,n}(T) - N_{T,m}(T)|] = E[|X_{T,n} - X_{T,m}|] \to 0, \qquad n, m \to \infty.$$

Therefore for each $k \geq 1$ there is an $\tilde{n}_k$ such that

$$P\Big( \sup_{t\in[0,T]}|N_{T,n}(t, \omega) - N_{T,m}(t, \omega)| > 2^{-k} \Big) \leq 2^{-k}, \qquad n, m \geq \tilde{n}_k,$$

i.e.,

$$E\Big[ 1_{[2^{-k},\infty)}\Big( \sup_{t\in[0,T]}|N_{T,n}(t, \omega) - N_{T,m}(t, \omega)| \Big) \Big] \leq 2^{-k}, \qquad n, m \geq \tilde{n}_k.$$

Let $n_k = \sum_{\ell=1}^k \tilde{n}_\ell$ and let $\Omega_0 \in \mathcal{F}$ be given by

$$\Omega_0 = \Big\{ \omega \in \Omega;\ \sum_{k=1}^\infty 1_{[2^{-k},\infty)}\Big( \sup_{t\in[0,T]}|N_{T,n_k}(t, \omega) - N_{T,n_{k+1}}(t, \omega)| \Big) < \infty \Big\}.$$

Then we see that $P(\Omega_0) = 1$ and

$$\sup_{t\in[0,T]}|N_{T,n_k}(t, \omega) - N_{T,n_\ell}(t, \omega)| \to 0, \qquad k, \ell \to \infty,\ \omega \in \Omega_0.$$

Let $N_T : [0, \infty) \times \Omega \to \mathbb{R}$ be given by

$$N_T(t, \omega) = \begin{cases} \lim_{k\to\infty} N_{T,n_k}(t \wedge T, \omega), & \text{if } \omega \in \Omega_0, \\ 0, & \text{otherwise}. \end{cases}$$

Then $N_T$ is a continuous process and $M(t) = N_T(t)$ a.s. for any $t \in [0, T]$. So we have Assertion (2) by Proposition 3.6.2. ∎

Let $\mathcal{L}^2_{loc}$ be the set of progressively measurable processes $\xi : [0, \infty) \times \Omega \to \mathbb{R}$ such that

$$P\Big( \int_0^T |\xi(t)|^2\, dt < \infty \ \text{for all } T > 0 \Big) = 1.$$

Note that $\xi \in \mathcal{L}^2_{loc}$ if and only if $\xi$ is $B^i$-integrable for each $i = 1, \ldots, d$. Then we have the following.

Corollary 5.5.1 For any $\{\mathcal{G}_t\}_{t\in[0,\infty)}$-continuous local martingale $M$ there are $\xi_i \in \mathcal{L}^2_{loc}$, $i = 1, \ldots, d$, and $c \in \mathbb{R}$ such that

$$M(t) = c + \sum_{i=1}^d \int_0^t \xi_i(s)\, dB^i(s), \qquad t \in [0, \infty).$$

Proof Since $M(0)$ is $\mathcal{G}_0$-measurable, we see that there is a $c \in \mathbb{R}$ such that $M(0) = c$ a.s. So there is an increasing sequence of stopping times $\tau_n$, $n = 1, 2, \ldots$, such that $M^{\tau_n} - c \in \mathcal{M}^c_b(\{\mathcal{G}_t\}_{t\in[0,\infty)})$. Then we see by Theorem 5.5.1 that there are $\xi_n \in \mathcal{L}^2(A)^d$, $n \geq 1$, such that $M^{\tau_n} = c + I(\xi_n)$. Since

$$I(\xi_n) = M^{\tau_n} - c = (M^{\tau_{n+1}} - c)^{\tau_n} = I(1_{[0,\tau_n)}(\cdot)\xi_{n+1}),$$

we see that

$$1_{[0,\tau_n)}(t)\xi_{n+1}(t) = 1_{[0,\tau_n)}(t)\xi_n(t), \qquad dt \otimes dP\text{-a.e. } (t, \omega).$$

Let

$$\xi(t) = 1_{[0,\tau_1)}(t)\xi_1(t) + \sum_{n=1}^\infty 1_{[\tau_n,\tau_{n+1})}(t)\xi_{n+1}(t).$$

Then we see that $M^{\tau_n} = (c + I(\xi))^{\tau_n}$, $n \geq 1$. So we have our assertion. ∎

Proposition 5.5.3 Let $M \in \mathcal{M}^c_{loc}(\{\mathcal{G}_t\}_{t\in[0,\infty)})$ and $T > 0$. Assume that $E[\mathcal{E}(M)(T)] = 1$. Let $Q$ be a probability measure on $(\Omega, \mathcal{F})$ given by

$$Q(A) = E[\mathcal{E}(M)(T), A], \qquad A \in \mathcal{F}.$$

Also, let $\tilde{B} : [0, \infty) \times \Omega \to \mathbb{R}^d$ be given by

5 Applications of Stochastic Integral

B˜ i = B i − B i , MT ,

i = 1, . . . , d.

Then for any Q-integrable random variable Y there are ξi ∈ L2loc , i = 1, . . . , d, such that E Q [Y |Gt ] = E Q [Y ] +

d i=1

t

ξi (s) d B˜ i (s),

t ∈ [0, ∞).

0

Proof Let Y (t) = E Q [Y |Gt ], t ∈ [0, ∞). Then Y (t), t ∈ [0, ∞) is a {Gt }t∈[0,∞) -Qmartingale. So by Proposition 5.3.4 we see that E(M T (t))Y (t) is a P-martingale. By Theorem 5.5.1 we may assume that Y (t) is a continuous process. Note that M T is a P-continuous local martingale. By Theorem 5.5.1 and Corollary 5.5.1 we see that there are ξ˜i ∈ L2loc , ηi ∈ L2loc , i = 1, . . . , d, and c ∈ R such that E(M )(t)Y (t) = c + T

d i=1

and M (t) = T

d i=1

t

t

ξ˜i (s) d B i (s),

0

ηi (s) d B i (s),

t ∈ [0, ∞).

0

So we see that B˜ i (t) = B i (t) −

t

ηi (s) ds,

i = 1, . . . , d, t ∈ [0, ∞).

0

Then by Itô’s formula we see that

1 E(M T )(t)−1 = exp −M T (t) + M T (t) 2 t t E(M T )(s)−1 d M T (s) + E(M T )(s)−1 dM T (s) =1− 0

=1−

0

d i=1

t 0

and so we see that

E(M T )(s)−1 ηi (s) d B i (s) +

d i=1

0

t

E(M T )(s)−1 ηi (s)2 ds,

5.5 Itô’s Representation Theorem

127

$$Y(t)=\mathcal{E}(M^T)(t)^{-1}\big(\mathcal{E}(M^T)(t)Y(t)\big)$$
$$=c+\sum_{i=1}^d\int_0^t\mathcal{E}(M^T)(s)^{-1}\tilde\xi_i(s)\,dB^i(s)-\sum_{i=1}^d\int_0^tY(s)\eta_i(s)\,dB^i(s)$$
$$\quad+\sum_{i=1}^d\int_0^tY(s)\eta_i(s)^2\,ds-\sum_{i=1}^d\int_0^t\mathcal{E}(M^T)(s)^{-1}\eta_i(s)\tilde\xi_i(s)\,ds$$
$$=c+\sum_{i=1}^d\int_0^t\mathcal{E}(M^T)(s)^{-1}\tilde\xi_i(s)\,d\tilde B^i(s)-\sum_{i=1}^d\int_0^tY(s)\eta_i(s)\,d\tilde B^i(s).$$
Since c = Y(0) = E^Q[Y | G_0], we see that c = E^Q[Y]. So we have our assertion.
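The stochastic exponential used above, $\mathcal{E}(M)(t) = \exp(M(t) - \frac12\langle M\rangle(t))$, can be sanity-checked numerically in the simplest case M = B. The sketch below is my own illustration (the quadrature setup and all numerical values are assumptions, not from the text): under dQ = E(B)(T) dP the drifted process B̃(t) = B(t) − t should have Q-mean 0 and Q-variance T at t = T, as in Proposition 5.5.3.

```python
import math

# Sanity check (not from the text): with M = B and dQ = exp(B(T) - T/2) dP,
# Girsanov-type reasoning says B~(t) = B(t) - t is a Q-Brownian motion, so
# E_Q[B~(T)] = 0 and E_Q[B~(T)^2] = T.  Since B(T) ~ N(0, T) under P, these are
# plain Gaussian integrals, evaluated here by the trapezoidal rule.

def gaussian_expectation(g, var, lo=-12.0, hi=12.0, n=200001):
    """Trapezoidal approximation of E[g(b)] for b ~ N(0, var)."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        b = lo + i * h
        w = 1.0 if 0 < i < n - 1 else 0.5
        density = math.exp(-b * b / (2 * var)) / math.sqrt(2 * math.pi * var)
        total += w * g(b) * density * h
    return total

T = 1.0
mean_Q = gaussian_expectation(lambda b: math.exp(b - T / 2) * (b - T), T)
second_Q = gaussian_expectation(lambda b: math.exp(b - T / 2) * (b - T) ** 2, T)
print(mean_Q)    # close to 0: the Q-mean of B~(T)
print(second_Q)  # close to T = 1: the Q-variance of B~(T)
```

Both values can also be computed in closed form (E[e^b b] = e^{1/2}, E[e^b b²] = 2e^{1/2} for b ~ N(0,1)), which is what the quadrature reproduces.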

5.6 Property of Brownian Motion

Proposition 5.6.1 Let B : [0,∞)×Ω → R be a 1-dimensional {F_t}_{t∈[0,∞)}-Brownian motion. Let σ_a : Ω → [0,∞], a > 0, be a stopping time given by σ_a = inf{t > 0; B(t) = a}. Then P(σ_a < ∞) = 1 and
$$E[\exp(-\lambda\sigma_a)]=\exp(-\sqrt{2\lambda}\,a),\qquad\lambda>0.$$

Proof Let λ > 0 and f ∈ C²(R²) be given by
$$f(t,x)=\exp(-\lambda t+\sqrt{2\lambda}\,x),\qquad t,x\in\mathbb{R}.$$
Then by Itô's formula we see that
$$f(t,B(t))=f(0,0)+\int_0^t\frac{\partial f}{\partial x}(s,B(s))\,dB(s)+\int_0^t\Big(\frac{\partial f}{\partial t}(s,B(s))+\frac12\frac{\partial^2f}{\partial x^2}(s,B(s))\Big)ds=1+\int_0^t\frac{\partial f}{\partial x}(s,B(s))\,dB(s).$$
So we see that
$$f(t\wedge\sigma_a,B(t\wedge\sigma_a))=1+\int_0^t1_{[0,\sigma_a]}(s)\frac{\partial f}{\partial x}(s,B(s))\,dB(s).$$


Note that
$$\Big|1_{[0,\sigma_a]}(s)\frac{\partial f}{\partial x}(s,B(s))\Big|=1_{[0,\sigma_a]}(s)\sqrt{2\lambda}\exp(-\lambda s+\sqrt{2\lambda}B(s))\le\sqrt{2\lambda}\exp(\sqrt{2\lambda}\,a).$$
So we see that
$$E[\exp(-\lambda(t\wedge\sigma_a)+\sqrt{2\lambda}B(t\wedge\sigma_a))]=1.$$
Also, note that if σ_a = ∞, then
$$\exp(-\lambda(t\wedge\sigma_a)+\sqrt{2\lambda}B(t\wedge\sigma_a))\le\exp(-\lambda t+\sqrt{2\lambda}\,a).$$
So we see that
$$1=\lim_{t\to\infty}E[\exp(-\lambda(t\wedge\sigma_a)+\sqrt{2\lambda}B(t\wedge\sigma_a)),\ \{\sigma_a<\infty\}\cup\{\sigma_a=\infty\}]=E[\exp(-\lambda\sigma_a+\sqrt{2\lambda}\,a),\ \sigma_a<\infty].$$
Therefore we see that
$$E[\exp(-\lambda\sigma_a),\ \sigma_a<\infty]=\exp(-\sqrt{2\lambda}\,a).$$
Letting λ ↓ 0, we see that P(σ_a < ∞) = 1. These imply our assertion.

Let h_d : (0,∞) → R, d ≥ 1, be given by
$$h_d(s)=\begin{cases}s,&\text{if }d=1,\\ \log s,&\text{if }d=2,\\ -s^{-(d-2)},&\text{if }d\ge3.\end{cases}$$
Also, let f_d : R^d\{0} → R be given by f_d(x) = h_d(|x|), x ∈ R^d\{0}. Then it is easy to see that

$$\sum_{i=1}^d\frac{\partial^2f_d}{(\partial x^i)^2}(x)=0,\qquad x\in\mathbb{R}^d\setminus\{0\}.$$

Proposition 5.6.2 Let d ≥ 1 and B : [0,∞)×Ω → R^d be a d-dimensional {F_t}_{t∈[0,∞)}-Brownian motion. Let x_0 ∈ R^d\{0} and τ_a : Ω → [0,∞], a ≥ 0, be a stopping time given by τ_a = inf{t ≥ 0; |x_0 + B(t)| = a}.
(1) If R > |x_0| > r > 0, then


$$P(\tau_r<\tau_R)=\frac{h_d(|x_0|)-h_d(R)}{h_d(r)-h_d(R)}\qquad\text{and}\qquad P(\tau_r>\tau_R)=\frac{h_d(r)-h_d(|x_0|)}{h_d(r)-h_d(R)}.$$
(2) If d = 1, 2, then for any r ∈ (0, |x_0|), P(τ_r < ∞) = 1.
(3) If d ≥ 3, then for any r ∈ (0, |x_0|)
$$P(\tau_r<\infty)=\frac{h_d(|x_0|)}{h_d(r)}=\Big(\frac{r}{|x_0|}\Big)^{d-2}.$$
(4) If d ≥ 2, then P(τ_0 < ∞) = 0.

Proof (1) Let R > |x_0| > r > 0. Then there is an f̃ ∈ C²(R^d) such that
$$\tilde f(x)=f_d(x),\qquad r/2<|x|<2R.$$
Then we see that
$$\tilde f(x_0+B(t))=\tilde f(x_0)+M(t)+A(t),$$
where
$$M(t)=\sum_{k=1}^d\int_0^t\frac{\partial\tilde f}{\partial x^k}(x_0+B(s))\,dB^k(s)\qquad\text{and}\qquad A(t)=\frac12\int_0^t\sum_{i=1}^d\frac{\partial^2\tilde f}{(\partial x^i)^2}(x_0+B(s))\,ds.$$
Note that
$$M^{\tau_r\wedge\tau_R}(t)=\sum_{k=1}^d\int_0^t1_{[0,\tau_r\wedge\tau_R]}(s)\frac{\partial f_d}{\partial x^k}(x_0+B(s))\,dB^k(s),$$
and
$$A^{\tau_r\wedge\tau_R}(t)=\frac12\int_0^t1_{[0,\tau_r\wedge\tau_R]}(s)\sum_{i=1}^d\frac{\partial^2f_d}{(\partial x^i)^2}(x_0+B(s))\,ds=0.$$


Since
$$\sup\Big\{\Big|\frac{\partial f_d}{\partial x^i}(x)\Big|;\ x\in\mathbb{R}^d,\ r\le|x|\le R\Big\}<\infty,$$
we see that E[M^{τ_r∧τ_R}(t)] = 0. So we see that
$$E[f_d(x_0+B(t\wedge\tau_r\wedge\tau_R))]=f_d(x_0)=h_d(|x_0|).$$
By Proposition 5.6.1 we have P(τ_R < ∞) = 1. Therefore we see that
$$E[f_d(x_0+B(t\wedge\tau_r\wedge\tau_R))]=E[f_d(x_0+B(t\wedge\tau_r\wedge\tau_R)),\ \tau_r<\tau_R]+E[f_d(x_0+B(t\wedge\tau_r\wedge\tau_R)),\ \tau_r>\tau_R]$$
$$\to h_d(r)P(\tau_r<\tau_R)+h_d(R)P(\tau_r>\tau_R),\qquad t\to\infty.$$
This implies that h_d(r)P(τ_r<τ_R) + h_d(R)P(τ_r>τ_R) = h_d(|x_0|). Also, we have P(τ_r<τ_R) + P(τ_r>τ_R) = 1. These show Assertion (1).
(2) If d = 1, 2, then we see that h_d(R) → ∞, R → ∞. Since τ_R → ∞, R → ∞, we see that
$$P(\tau_r<\infty)=\lim_{R\to\infty}P(\tau_r<\tau_R)=1.$$
(3) If d ≥ 3, then h_d(R) → 0, R → ∞. So we see that
$$P(\tau_r<\infty)=\lim_{R\to\infty}P(\tau_r<\tau_R)=\frac{h_d(|x_0|)}{h_d(r)}.$$
(4) If d ≥ 2, then |h_d(r)| → ∞, r ↓ 0. So we see that
$$P(\tau_0<\tau_R)=\lim_{r\downarrow0}P(\tau_r<\tau_R)=0.$$
Therefore we see that
$$P(\tau_0<\infty)=\lim_{R\to\infty}P(\tau_0<\tau_R)=0.$$
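The Laplace transform in Proposition 5.6.1 can also be checked numerically. The sketch below is my own illustration (not from the text): it uses the classical hitting-time density of one-dimensional Brownian motion, $P(\sigma_a\in dt)=a(2\pi t^3)^{-1/2}e^{-a^2/(2t)}\,dt$, which follows from the reflection principle, and integrates $e^{-\lambda t}$ against it by a plain Riemann sum.

```python
import math

# Numerical check of Proposition 5.6.1: E[exp(-lam * sigma_a)] = exp(-sqrt(2*lam)*a).
# The hitting-time density a / sqrt(2*pi*t^3) * exp(-a^2/(2t)) comes from the
# reflection principle (a known fact, assumed here, not derived in this sketch).

def laplace_sigma(a, lam, t_max=80.0, n=400001):
    h = t_max / n
    total = 0.0
    for i in range(1, n + 1):          # start at t = h > 0; the density -> 0 as t -> 0
        t = i * h
        density = a / math.sqrt(2 * math.pi * t ** 3) * math.exp(-a * a / (2 * t))
        total += math.exp(-lam * t) * density * h
    return total

a, lam = 1.0, 0.5
approx = laplace_sigma(a, lam)
exact = math.exp(-math.sqrt(2 * lam) * a)   # = e^{-1} for these values
print(approx, exact)
```

The truncation at t_max is harmless because e^{-λt} kills the heavy t^{-3/2} tail of the density.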

Remark Let d = 2 and x_0 ∈ R², x_0 ≠ 0. Let Ω_0 = {x_0 + B(t) ≠ 0 for all t ≥ 0}. Then by Proposition 5.6.2 (4) we see that P(Ω_0) = 1. So Ω_0 ∈ F_0. So let M :


[0,∞)×Ω → R be given by M(t) = log|x_0 + B(t)| − log|x_0| on Ω_0, and M(t) = 0 on Ω\Ω_0. Then M is a continuous process. By Corollary 4.3.1 we see that
$$M(t)=\sum_{i=1}^2\int_0^t1_{\Omega_0}\frac{x_0^i+B^i(s)}{|x_0+B(s)|^2}\,dB^i(s),$$
and so we see that M is a continuous local martingale. It is easy to see that
$$\langle M\rangle(t)=\int_0^t\frac{1}{|x_0+B(s)|^2}\,ds.$$
However, we see that
$$E[\langle M\rangle(t)]=\int_0^tds\int_{\mathbb{R}^2}\frac{1}{2\pi s}\,\frac{1}{|y|^2}\exp\Big(-\frac{|y-x_0|^2}{2s}\Big)dy=\infty.$$
Therefore by Proposition 3.6.8 we see that
$$E\Big[\max_{t\in[0,T]}M(t)^2\Big]=\infty.$$
On the other hand, we see that for p ∈ (1,∞)
$$E[|M(t)|^p]=\frac{1}{2\pi t}\int_{\mathbb{R}^2}\big|\log|y|-\log|x_0|\big|^p\exp\Big(-\frac{|y-x_0|^2}{2t}\Big)dy<\infty.$$
If M is a martingale, then we see by Theorem 3.3.1 that
$$E\Big[\max_{t\in[0,T]}M(t)^2\Big]\le4E[M(T)^2]<\infty,$$
and we have a contradiction. Therefore we see that M is not a martingale.
By this example, we see the following.
(1) Even if M is a continuous local martingale and E[|M(t)|^p] < ∞, p ∈ (0,∞), t ≥ 0, M may not be a martingale.
(2) In Doob's inequality (Theorem 3.3.1), we cannot replace the assumption "M is a martingale" by "M is a continuous local martingale."
Also, the following result is shown by Dudley [3].

Theorem 5.6.1 Let B be a 1-dimensional {F_t}_{t∈[0,∞)}-Brownian motion, T > 0, ε > 0, and X be an F_T-measurable random variable. Then there is a progressively measurable process ξ : [0,∞)×Ω → R satisfying the following.


(1) $\int_0^\infty\xi(t)^2\,dt<\infty$ a.s.
(2) $\int_0^T\xi(t)\,dB(t)=X$ a.s.
(3) $P\big(\{X=0\}\setminus\{\int_0^T\xi(t)^2\,dt=0\}\big)<\varepsilon$.

By this result we see that any random variable can be described by a stochastic integral. If E[X] ≠ 0 in the above theorem, then $M(t)=\int_0^t\xi(s)\,dB(s)$ is a continuous local martingale but is not a martingale. We will prove this theorem in Appendix 8.5.

5.7 Tanaka's Formula

Proposition 5.7.1 Let X be a continuous local martingale. Let ψ : R → R be a bounded right continuous function. Let continuous functions ψ̃ : R → R and f : R → R be given by
$$\tilde\psi(x)=\int_0^x\psi(y)\,dy,\qquad f(x)=\int_0^x\tilde\psi(y)\,dy,\qquad x\in\mathbb{R}.$$
Then
$$\frac12\int_0^t\psi(X_s)\,d\langle X,X\rangle_s=f(X_t)-f(X_0)-\int_0^t\tilde\psi(X_s)\,dX_s,\qquad t\ge0.$$

Proof Let us take a φ ∈ C_0^∞(R) such that φ ≥ 0, φ(x) = 0 for x ≥ 0, and $\int_{-\infty}^0\varphi(x)\,dx=1$. Let ψ_n : R → R, n ≥ 1, be given by
$$\psi_n(x)=\int_{\mathbb{R}}n\varphi(n(x-y))\psi(y)\,dy,\qquad x\in\mathbb{R}.$$
Since ψ is bounded and right continuous, we see that {ψ_n(x); n ≥ 1, x ∈ R} is bounded and that ψ_n(x) → ψ(x), n → ∞, x ∈ R. Let
$$\tilde\psi_n(x)=\int_0^x\psi_n(y)\,dy\qquad\text{and}\qquad f_n(x)=\int_0^x\tilde\psi_n(y)\,dy,\qquad x\in\mathbb{R},\ n\ge1.$$
Then we see that ψ̃_n(x) → ψ̃(x), n → ∞, for all x ∈ R, and f_n(x) → f(x) uniformly on compacts as n → ∞. Also, by Itô's formula we see that
$$\frac12\int_0^t\psi_n(X_s)\,d\langle X,X\rangle_s=f_n(X_t)-f_n(X_0)-\int_0^t\tilde\psi_n(X_s)\,dX_s,\qquad t\ge0.$$


So letting n → ∞, we obtain our assertion.

The formula in the following proposition is called Tanaka's formula.

Proposition 5.7.2 Let X be a continuous semi-martingale. Then we have the following.
(1) $$\lim_{a\downarrow0}\frac{1}{2a}\int_0^t1_{[-a,a)}(X_s)\,d\langle X,X\rangle_s=|X_t|-|X_0|-\int_0^t\mathrm{sign}(X_s)\,dX_s,\qquad t\ge0.$$
Here
$$\mathrm{sign}(x)=\begin{cases}-1,&\text{if }x<0,\\ 0,&\text{if }x=0,\\ 1,&\text{if }x>0.\end{cases}$$
(2) $$\lim_{a\downarrow0}\frac{1}{2a}\int_0^t1_{[0,a)}(X_s)\,d\langle X,X\rangle_s=(X_t\vee0)-(X_0\vee0)-\int_0^t1_{(0,\infty)}(X_s)\,dX_s,\qquad t\ge0.$$

Proof Since the proofs of Assertions (1) and (2) are similar, we only prove Assertion (1). Let ψ_a = a^{-1}1_{[-a,a)}, a > 0. Then ψ_a is bounded and right continuous. Since
$$\tilde\psi_a(x)=\int_0^x\psi_a(y)\,dy=\begin{cases}(a^{-1}x)\wedge1,&\text{if }x\ge0,\\ (a^{-1}x)\vee(-1),&\text{if }x<0,\end{cases}$$
we see that ψ̃_a(x) → sign(x), a ↓ 0, x ∈ R, and |ψ̃_a(x)| ≤ 1, x ∈ R. So we see that
$$f_a(x)=\int_0^x\tilde\psi_a(y)\,dy\to|x|,\qquad a\downarrow0,\ \text{uniformly on compacts in }x\in\mathbb{R}.$$
Note that
$$\frac12\int_0^t\psi_a(X_s)\,d\langle X,X\rangle_s=f_a(X_t)-f_a(X_0)-\int_0^t\tilde\psi_a(X_s)\,dX_s,\qquad t\ge0.$$
Therefore letting a ↓ 0, we have our assertion.

Let {B(t)}_{t≥0} be a 1-dimensional {F_t}_{t≥0}-Brownian motion. Let L^x : [0,∞)×Ω → R, x ∈ R, be given by
$$L^x(t)=|B_t-x|-|x|-\int_0^t\mathrm{sign}(B(s)-x)\,dB(s),\qquad t\ge0.$$


Since $E[\int_0^t1_{\{y\}}(B(s))\,ds]=0$, t > 0, y ∈ R, we have the following equality by Proposition 5.7.2:
$$L^x(t)=\lim_{a\downarrow0}\frac{1}{2a}\int_0^t1_{(x-a,x+a)}(B(s))\,ds,\qquad x\in\mathbb{R},\ t\in[0,\infty).$$
L^x is called the local time. The following result is known.

Proposition 5.7.3 For any continuous function f : R → R
$$\int_0^tf(B(s))\,ds=\int_{\mathbb{R}}f(x)L^x(t)\,dx\quad\text{a.s.},\qquad t\ge0.\tag{5.2}$$
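The occupation-density formula (5.2) can be illustrated on a discretized Brownian path: estimate L^x(t) by the occupation average $(2a)^{-1}\int_0^t 1_{(x-a,x+a)}(B(s))\,ds$ on a grid of points x, and compare $\int f(x)L^x(t)\,dx$ with the direct time integral $\int_0^t f(B(s))\,ds$. The sketch below is my own (the seed, step sizes, window width a and test function f are arbitrary choices, not from the text).

```python
import math, random

# Illustration of (5.2): int_0^t f(B(s)) ds  ~  sum_k f(x_k) * (occupation time near x_k),
# where the occupation time of the window [x_k - a, x_k + a) estimates 2a * L^{x_k}(t).
random.seed(7)

T, n_steps = 1.0, 200000
dt = T / n_steps
sqdt = math.sqrt(dt)
path, b = [0.0], 0.0
for _ in range(n_steps):
    b += sqdt * random.gauss(0.0, 1.0)   # discretized Brownian path
    path.append(b)

f = lambda x: x * x                       # any continuous test function

# Left-hand side of (5.2): time integral along the path.
lhs = sum(f(x) * dt for x in path[:-1])

# Right-hand side: windows [x_k - a, x_k + a) with centers x_k = 2*a*k tile the line,
# so sum_k f(x_k) * (occupation time of the k-th window) approximates the x-integral.
a = 0.01
occupation = {}
for x in path[:-1]:
    k = math.floor((x + a) / (2 * a))
    occupation[k] = occupation.get(k, 0.0) + dt
rhs = sum(f(2 * a * k) * occ for k, occ in occupation.items())
print(lhs, rhs)   # the two integrals agree up to discretization error
```

The only approximation error on the right comes from f varying inside windows of width 2a, so the two sums agree to order a.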

Chapter 6

Stochastic Differential Equation

There is no single definition of a stochastic differential equation; many kinds of definitions are used, depending on the application. In this book we consider stochastic differential equations of Markov type only.

6.1 Itô's Stochastic Differential Equation and Euler–Maruyama Approximation

Let (Ω, F, P, {F_t}_{t∈[0,∞)}) be a standard filtered probability space, and B : [0,∞)×Ω → R^d be a d-dimensional {F_t}_{t∈[0,∞)}-Brownian motion. Let N ≥ 1. We say that η : [0,∞)×Ω → R^N is progressively measurable if η^i, i = 1, ..., N, are progressively measurable and η(t) = (η¹(t), ..., η^N(t)), t ≥ 0. If η : [0,∞)×Ω → R^N is progressively measurable and
$$P\Big(\sum_{i=1}^N\int_0^T\eta^i(t)^2\,dt<\infty\text{ for all }T>0\Big)=1,$$
then we can define η^i • B^k, i = 1, ..., N, k = 1, ..., d. So we define an adapted continuous process η • B^k : [0,∞)×Ω → R^N, k = 1, ..., d, by η • B^k = (η¹ • B^k, ..., η^N • B^k), i.e.,
$$\int_0^t\eta(s)\,dB^k(s)=\Big(\int_0^t\eta^1(s)\,dB^k(s),\ \dots,\ \int_0^t\eta^N(s)\,dB^k(s)\Big).$$
Let σ_k : [0,∞)×R^N → R^N, k = 0, 1, ..., d, be Borel measurable functions. The notion of stochastic differential equations was given by Itô in 1942 [5]. First we consider stochastic differential equations following his idea.

© Springer Nature Singapore Pte Ltd. 2020
S. Kusuoka, Stochastic Analysis, Monographs in Mathematical Economics 3,
https://doi.org/10.1007/978-981-15-8864-8_6


Let us consider a stochastic differential equation
$$dX(t)=\sum_{k=1}^d\sigma_k(t,X(t))\,dB^k(t)+\sigma_0(t,X(t))\,dt.\tag{6.1}$$
This equation itself does not make any sense. Let ξ be an F_0-measurable R^N-valued random variable. Itô defined a solution X to the stochastic differential equation (6.1) with an initial condition X(0) = ξ as follows. A solution X is an adapted continuous process X : [0,∞)×Ω → R^N such that
$$X(t)=\xi+\sum_{k=1}^d\int_0^t\sigma_k(s,X(s))\,dB^k(s)+\int_0^t\sigma_0(s,X(s))\,ds,\qquad t\ge0.\tag{6.2}$$
We call Eq. (6.2) Itô's stochastic differential equation in this book, since we give a different definition of stochastic differential equations in the next section.
On the other hand, we can define an adapted continuous process X_h : [0,∞)×Ω → R^N for each h > 0 recurrently by the following:
$$X_h(0)=\xi,$$
$$X_h(t)=X_h((n-1)h)+\sum_{k=1}^d\Big(\frac1h\int_{(n-1)h}^{nh}\sigma_k(s,X_h((n-1)h))\,ds\Big)(B^k(t)-B^k((n-1)h))$$
$$\qquad+\Big(\frac1h\int_{(n-1)h}^{nh}\sigma_0(s,X_h((n-1)h))\,ds\Big)(t-(n-1)h),\qquad t\in((n-1)h,nh],\ n\ge1.\tag{6.3}$$

It is not clear whether a solution X satisfying Eq. (6.2) exists, but we can define X h always, for example, in the case that σk ’s are bounded. We call X h the Euler– Maruyama approximation to Eq. (6.2). Proposition 6.1.1 Let σk : [0, ∞) × R N → R N , k = 0, 1, . . . , d, be continuous functions satisfying the following. (Assumption) There is a K ∈ (0, ∞) such that |σk (t, x) − σk (t, y)| K |x − y|,

t ∈ [0, ∞), x, y ∈ R N , k = 0, 1, . . . , d, (6.4)

and |σk (t, x)| K ,

t ∈ [0, ∞), x ∈ R N , k = 0, 1, . . . , d.

Then for any F0 -measurable R N -valued random variable ξ such that E[|ξ |2 ] < ∞ there exists an adapted continuous process X : [0, ∞) × → R N such that

6.1 Itô’s Stochastic Differential Equation and Euler–Maruyama Approximation

E[ sup |X (t)|2 ] < ∞,

137

T ∈ [0, ∞)

t∈[0,T ]

(6.5)

and X satisfies Eq. (6.2). Moreover, if X˜ satisfies Eqs. (6.2) and (6.5), then P(X (t) = X˜ (t), t ∈ [0, ∞)) = 1. Proof Let S be the set of adapted continuous processes Y : [0, ∞) × → R N such that E[supt∈[0,T ] |Y (t)|2 ] < ∞ for all T > 0. Let (Y ) : [0, ∞) × → R N , Y ∈ S, be an adapted continuous process given by (Y )i (t) = ξ i +

d t k=1 0

σki (s, Y (s)) d B k (s) +

t 0

σ0i (s, Y (s)) ds, t 0, i = 1, . . . , N .

Then from the assumption and Proposition 3.6.8 we see that ⎡

E ⎣ sup

d

t∈[0,T ]

4E

k=1

d

T 0

k=1

t 0

2 ⎤ σki (s, Y (s)) d B k (s) ⎦

|σki (s, Y (s))|2

ds 4T K 2 , T 0, i = 0, . . . , N ,

and E

t

sup

t∈[0,T ]

0

2 σ0i (s, Y (s)) ds

T

E

2 |σ0 (s, Y (s))| ds

T 2 K 2.

0

So we see that (Y ) ∈ S for any Y ∈ S. By Proposition 3.6.8 we see that for any Y1 , Y2 ∈ S E[ sup |(Y1 )(t) − (Y2 )(t)|2 ] t∈[0,T ]

2

N i=1

⎡

E ⎣ sup

t∈[0,T ]

T

+ 2E 0

8

N i=1

0

k=1

0

t

2 ⎤ (σki (s, Y1 (s)) − σki (s, Y2 (s))) d B k (s) ⎦

|σ0i (s, Y1 (s)) T

E

d

d k=1

2 −

σ0i (s, Y2 (s))| ds

(σki (s, Y1 (s))

−

σki (s, Y2 (s)))2

ds

138

6 Stochastic Differential Equation

i 2 i ds σ0 (s, Y1 (s)) − σ0 (s, Y2 (s)) 0 d |σk (s, Y1 (s)) − σk (s, Y2 (s))|2 ds

+ 2T E

T

= 8E

T

0

k=1 T

+ 2T E

|σ0 (s, Y1 (s)) − σ0 (s, Y2 (s))| ds 2

0

2(4d + T )K E

T

|Y1 (s) − Y2 (s)| ds

2

2

0

T

2(4d + T )K 2

E[ sup |Y1 (r ) − Y2 (r )|2 ds]. r ∈[0,s]

0

Let us define X n ∈ S, inductively by X 0 = 0 and X n = (X n−1 ), n = 1, 2, . . .. Then we see that for any T ∈ (0, ∞) E[ sup |X n+1 (s) − X n (s)|2 ] s∈[0,t]

t

2(4d + T )K 2

E[ sup |X n (r ) − X n−1 (r )|2 ] ds, t ∈ [0, T ], n = 1, 2, . . . . r ∈[0,s]

0

Therefore we can easily show E[ sup |X n+1 (s) − X n (s)|2 ] s∈[0,t]

(2(4d + T )K 2 )n

tn E[ sup |X 1 (s)|2 ], t ∈ [0, T ], n = 0, 1, . . . . n! s∈[0,T ]

by induction in n. This implies that ∞ n=0

Let

E[ sup |X n+1 (t) − X n (t)|2 ]1/2 < ∞, t∈[0,T ]

0 = ω ∈ ;

∞

T > 0.

sup |X n+1 (t) − X n (t)| < ∞, m = 1, 2, . . . .

n=0 t∈[0,m]

Then we see that P(0 ) = 1 and so 0 ∈ F0 . Let X : [0, ∞) × → R N be an adapted continuous process given by X (t, ω) =

lim X n (t, ω), if ω ∈ 0 ,

n→∞

0,

if ω ∈ / 0 .

6.1 Itô’s Stochastic Differential Equation and Euler–Maruyama Approximation

139

Then we see that lim E[ sup |X (t) − X n (t)|2 ]1/2 = 0,

n→∞

t∈[0,T ]

T > 0.

This implies that X ∈ S. Also, we see that lim E[ sup |(X )(t) − X n+1 (t)|2 ]1/2 = 0,

n→∞

T > 0.

t∈[0,T ]

So we see that P((X )(t) = X (t), t ∈ [0, ∞)) = 1. This implies the first part of the assertion. Suppose that X˜ also satisfies Eqs. (6.2) and (6.5). Then we see that X˜ ∈ S and ( X˜ ) = X˜ . So we see that E[ sup |X (t) − X˜ (t)|2 ] = E[ sup |(X )(t) − ( X˜ )(t)|2 ] t∈[0,T ]

t∈[0,T ]

2N (4d + T )K 2 0

T

E[( sup |X (r ) − X˜ (r )|)2 ] ds. r ∈[0,s]

Therefore by Gronwall’s inequality in Appendix 8.6 we see that E[ sup |X (s) − X˜ (s)|2 ] = 0,

t ∈ [0, ∞).

s∈[0,t]

This completes the proof. Proposition 6.1.2 Let σk : [0, ∞) × R N → R N , k = 0, 1, . . . , d, be Borel measurable functions, and assume that there is a K ∈ (0, ∞) for which (Assumption) in the previous proposition is satisfied. Also, let ξ be an F0 -measurable R N -valued random variable such that E[|ξ |2 ] < ∞. Then by the previous proposition we see that there exists an adapted continuous process X : [0, ∞) × → R N satisfying Eq. (6.2). Let X h be an adapted continuous process defined by Eq. (6.3). Then for any T > 0 there is a C T ∈ (0, ∞) such that E[ sup |X (t) − X h (t)|2 ] C T (h + h (T )), t∈[0,T ]

h ∈ (0, 1].

Here h (T ) =

d k=0

T

E 0

ϕh (t)+h 2 1 (σk (t, X (t)) − σk (s, X (s)) ds) dt . h ϕh (t)

and ϕh : [0, ∞) → [0, ∞) is a function inductively defined by ϕh (0) = 0, ϕh (t) = (n − 1)h, t ∈ ((n − 1)h, nh], n = 1, 2, . . .. In particular, for any T > 0

140

6 Stochastic Differential Equation

E[ sup |X (t) − X h (t)|2 ] → 0,

h ↓ 0.

t∈[0,T ]

Proof Let σh,k : [0, ∞) × R N → R N , k = 0, 1, . . . , d, be given by

1 h

σh,k (t, x) =

ϕh (t)+h ϕh (t)

σk (s, x) ds,

(t, x) ∈ [0, ∞) × R N .

Then we see that |σh,k (t, x) − σh,k (t, y)| K |x − y|,

t ∈ [0, ∞), x, y ∈ R N ,

and |σh,k (t, x)| K ,

t ∈ [0, ∞), x ∈ R N

for any h ∈ (0, 1], and k = 0, 1, . . . , d. Also, we see that X h (t) = ξ +

d

t

0

k=1

t

σh,k (s, X h (ϕh (s))) d B (s) + k

σh,0 (s, X h (ϕh (s))) ds.

0

Let Z h (t) = E[sups∈[0,t] |X (s) − X h (s)|2 ]. First we see that for t ∈ [0, ∞) E[|X h (t) − X h (ϕh (t))|2 ]1/2 2 1/2 d t k E σh,k (s, X h (ϕh (s))) d B (s) ϕh (t)

k=1

+ E

t

ϕh (t)

d

2 1/2 σh,0 (s, X h (ϕh (s)) ds 1/2

t

E

k=1

ϕh (t)

σh,k (s, X h (ϕh (s)))2 ds

+ K h (d + 1)K h 1/2 .

Also, we see that X (t) − X h (t) =

d

+ +

(σh,k (s, X (s)) − σh,k (s, X h (s))) d B k (s)

0

k=1

t

t

(σh,0 (s, 0 d t

X (s)) − σh,0 (s, X h (s))) ds

(σh,k (s, X h (s)) − σh,k (s, X h (ϕh (s)))) d B k (s)

k=1

0

6.1 Itô’s Stochastic Differential Equation and Euler–Maruyama Approximation

+ +

t

(σh,0 (s, 0 d t

141

X h (s)) − σh,0 (s, X h (ϕh (s)))) ds

(σk (s, X (s)) − σh,k (s, X (s))) d B k (s)

0

k=1 t

(σ0 (s, X (s)) − σh,0 (s, X (s))) ds.

+

0

Therefore by Doob’s inequality we see that for t ∈ [0, T ] Z h (t)

1/2

d k=1

1/2

t

|σh,k (s, X (s)) − σh,k (s, X h (s))| ds 2

2E 0

t

+E

2 1/2

|σh,0 (s, X (s)) − σh,0 (s, X h (s))| ds

0

+

d

t

2E

1/2 |σh,k (s, X h (s)) − σh,k (s, X h (ϕh (s)))|2 ds

0

k=1

t

+E

2 1/2

|σh,0 (s, X h (s)) − σh,0 (s, X h (ϕh (s))| ds

0

+2

d

t

E 0

k=1

t

+E

2 1/2

|σ0 (s, X (s)) − σh,0 (s, X (s))| ds

0

(2d + T

1/2 |σk (s, X (s)) − σh,k (s, X (s))|2 ds

1/2

1/2

t

)K

E[|X (s) − X h (s)| ] ds 2

0

+ (2d + T 1/2 )K

E |X h (s) − X h (ϕh (s))|2 ds

t

1/2

0

+ (2 + T 1/2 )

d k=0

E

t

1/2 |σk (s, X (s)) − σh,k (s, X (s))|2 ds

.

0

This implies that Z h (t) 9(2d + T 1/2 )2 K 2

t 0

Z h (s) ds + 9(2d + T 1/2 )2 (d + 1)2 K 2 hT + 9(2 + T 1/2 )2 h (T )

for any t ∈ [0, T ]. Therefore by Gronwall’s inequality in Appendix 8.6 we see that

142

6 Stochastic Differential Equation

Z h (T ) (9(2d + T 1/2 )2 (d + 1)2 K 2 hT + 9(2 + T 1/2 )2 h (T )) exp(9(2d + T 1/2 )2 K 2 T ).

So we have the first inequality. We see that for any bounded measurable function ψ : [0, ∞) → R and T > 0

T

ψ(t) −

0

ϕh (t)+h 2 1 ψ(s) ds dt → 0, h ϕh (t)

h ↓ 0.

This implies that the second inequality. Proposition 6.1.3 Let N 1, and σ j,k : [0, ∞) × R N → R N , j = 1, 2, k = 0, 1, . . . , d, be Borel measurable functions. Assume that there are R, K ∈ (0, ∞) such that σ1,k (t, x) = σ2,k (t, x),

t ∈ [0, R], x ∈ B R , k = 0, 1, . . . , d,

and |σ1,k (t, x) − σ2,k (t, y)| K |x − y|,

t ∈ [0, R], x, y ∈ B R , k = 0, 1, . . . , d.

Here B R = {x ∈ R N ; |x| R}. Let ξ j , j = 1, 2, be F0 -measurable R N -valued random variables. Moreover, let X ( j) : [0, ∞) × → R N , j = 1, 2, be adapted continuous processes such that X ( j) (t) = ξ j +

d k=1 0

t

σ j,k (s, X ( j) (s)) d B k (s) +

0

t

σ j,0 (s, X ( j) (s)) ds, t 0, j = 1, 2.

Let τ R = inf{t 0, |X (1) (t)| ∨ |X (2) (t)| R} ∧ R. Then

τR τR (t) = X (2) (t), t ∈ [0, ∞)}) = 1. P({ξ1 = ξ2 } ∪ {X (1)

Proof Let A = {ξ1 = ξ2 } and B = {|ξ1 | R}. Then we see that A, B ∈ F0 . So by the assumption we see that i i )τ R (t) − (X (1) )τ R (t)) 1 A ((X (2) d t i i = 1 A∩B 1[0,τ R ] (s)(σ2,k (s, X (2) (s)) − σ1,k (s, X (1) (s))) d B k (s) 0

k=1

+ 0

t

i i 1 A∩B 1[0,τ R ] (s)(σ2,0 (s, X (2) (s)) − σ1,0 (s, X (1) (s))) ds.

Therefore by Proposition 3.6.8

6.1 Itô’s Stochastic Differential Equation and Euler–Maruyama Approximation

143

τR τR E[ sup 1 A |X (2) (t) − X (1) (t)|2 ] t∈[0,T ]

2

N

d

i=1

k=1

+2

N

T

i i 1 A∩B 1[0,τ R ] (s)(σ2,k (s, X (2) (s)) − σ1,k (s, X (1) (s)))2 ds

4E 0

T

E 0

i=1

2

i 1 A∩B 1[0,τ R ] (s)|σ2,0 (s,

2(4d + T )K E

T

2

0

T

2(4d + R)K 2 0

Since

τR 1 A |X (2) (s)

−

X (2) (s)) −

τR X (1) (s)|2

i σ1,0 (s,

X (1) (s))| ds

ds

τR τR E[ sup 1 A |X (2) (s) − X (1) (s)|2 ] dt. s∈[0,t]

τR τR (t) − X (1) (t)|2 ] (2R)2 < ∞, E[ sup 1 A |X (2) t∈[0,T ]

we see by Gronwall’s inequality that τR τR (t) − X (1) (t)|2 ] = 0, E[ sup 1 A |X (2) t∈[0,T ]

T > 0.

This implies our assertion. Theorem 6.1.1 Let N 1. Assume that σk : [0, ∞) × R N → R N , k = 0, 1, . . . , d, are measurable functions satisfying the following local Lipschitz continuous condition (LLC). (LLC) For any R > 0 there is a K R ∈ (0, ∞) such that |σk (t, x) − σk (t, y)| K R |x − y|,

t ∈ [0, R], x, y ∈ B R , k = 0, 1, . . . , d.

Here B R = {x ∈ R N ; |x| R}. We assume moreover that for any T > 0 there is a C T ∈ (0, ∞) such that sup |σk (t, 0)| C T ,

t∈[0,T ]

k = 0, 1, . . . , d.

Let ξ be an F0 -measurable R N -valued random variable. Then we have the following. (1) There is a family X n : [0, ∞) × → R N , n = 1, 2, . . . , of adapted continuous processes satisfying the following conditions. (i) Let τn , n = 1, 2, . . . , be stopping times given by τn = inf{t 0; |X n (t)| n} ∧ n,

n = 1, 2, . . . .

τn = X n and P(τn τn+1 ) = 1, n = 1, 2, . . .. Then X n+1 (ii) For each n = 1, 2, . . . , X n satisfies

144

6 Stochastic Differential Equation

X n (t)

d

=ξ+

k=1 t

t

1[0,τn ] (s)σk (s, X n (s)) d B k (s)

0

1[0,τn ] (s)σ0 (s, X n (s)) ds, t 0.

+ 0

(2) If stopping times τn , n = 1, 2, . . . , given in (1) satisfy P(limn→∞ τn = ∞) = 1, then there exists a unique adapted continuous process X : [0, ∞) × → R N satisfying X (t) = ξ +

d k=1

t

t

σk (s, X (s)) d B (s) + k

0

σ0 (s, X (s)) ds, t 0.

(6.6)

0

Moreover, for any T > 0 and ε > 0 P( sup |X (t) − X h (t)| > ε) → 0, t∈[0,T ]

h ↓ 0.

Proof Let An = {|ξ | n} ∈ F0 , n 1. Let ϕ : R → R be given by ϕ(s) = ((1 − s) ∧ 1) ∨ 0, s ∈ R. Then we see that ϕ(s) = 1, s 0, ϕ(s) = 0, s 1, and |ϕ(s) − ϕ(s )| |s − s |, s, s ∈ R. Let σn,k : [0, ∞) × R N → R N , n 1, k = 0, 1, . . . , d, be given by σn,k (t, x) = 1[0,n+1] (t)ϕ(|x| − n)σk (t, x). Then we see that |σn,k (t, x)| Cn+1 + K n+1 (n + 1), |σn,k (t, x) − σn,k (t, y)| 2(n + 1)(Cn+1 + K n+1 )|x − y|, x, y ∈ R N , t ∈ [0, ∞),

and σn+1,k (t, x) = σn,k (t, x),

x ∈ Bn , t ∈ [0, n], n = 1, 2, . . . .

So by Proposition 3.6.2 we see that there are adapted continuous processes X˜ n : [0, ∞) × → R N , n 1, such that X˜ n (t) = 1 An ξ +

d k=1

t

σn,k (s, X˜ n (s)) d B k (s) +

0

t

σn,0 (s, X˜ n (s)) ds, t 0.

0

Let σ˜ n be a stopping time given by σ˜ n = inf{t 0; | X˜ n (t)| ∨ | X˜ n+1 (t)| n} ∧ n.

6.1 Itô’s Stochastic Differential Equation and Euler–Maruyama Approximation

145

Then by Proposition 6.1.3 we see that σ˜ n = X˜ nσ˜ n }) = 1. P(( \ An ) ∪ { X˜ n+1

Let σn be a stopping time given by σn = inf{t 0; | X˜ n (t)| n} ∧ n. Then we see that

σn = X˜ nσn }) = 1. P(( \ An ) ∪ { X˜ n+1

Let X n : [0, ∞) × → R N be a adapted continuous process given by X n (t) = (1 − 1 An )ξ + 1 An X˜ nσn (t). Let τn = inf{t 0; |X n (t)| n} ∧ n, n = 1, 2, . . .. Then we see that X n (0) = ξ, and that τn = 1 An σn . So we easily see that X n ’s and τn ’s satisfy conditions (i) and (ii). So we obtain Assertion (1). Note that P(τn τn+1 ) = 1, n = 1, 2, . . .. So if P(limn→∞ τn = ∞) = 1, then it is easy to see that there is a sequence of increasing stopping times τ˜n , n = 1, 2, . . . , such that τ˜n = τn a.s., n = 1, 2, . . .. Therefore by Proposition 3.6.2 we see that there is an adapted continuous process X : [0, ∞) × → R N such that X τn = X n , n = 1, 2, . . .. Then we see that X satisfies Eq. (6.6). The uniqueness follows from Proposition 6.1.3. Now let us define an adapted continuous process X˜ n,h : [0, ∞) × → R N , h > 0, n 1, inductively by X˜ n,h (0) = ξ, X˜ n,h (t) = X˜ n,h ((n − 1)h) d nh 1 ˜ + σn,k (s, X n,h ((n − 1)h)) ds (B k (t) − B k ((n − 1)h)) h (n−1)h k=1 nh 1 + σn,0 (s, X˜ n,h ((n − 1)h)) ds (t − (n − 1)h), h (n−1)h t ∈ ((n − 1)h, nh], n = 1, 2, . . . . Let us take an arbitrary T ∈ Z1 and fix it. Then by Proposition 6.1.2 we see that E[ sup

t∈[0,T +1]

| X˜ n (t) − X˜ n,h (t)|2 , An ] → 0,

Let Dn ∈ F , n 1, given by Dn = { sup

t∈[0,T +1]

|X (t)| n}.

h ↓ 0.

146

6 Stochastic Differential Equation

For C1 , C2 ∈ F , we say that C1 ⊂ C2 a.s. if P(C1 \ C2 ) = 0. By Proposition 6.1.3 we see that Dn−1 ⊂ { sup

t∈[0,T +1]

|X (t) − X˜ n (t)| = 0} a.s., n T + 3,

and {

sup

t∈[0,T +1]

| X˜ n,h (t)| < n} ⊂ { sup |X h (t) − X˜ n,h (t)| = 0} a.s., n T + 1, h ∈ (0, 1]. t∈[0,T ]

So we see that Dn−1 ∩ { sup

t∈[0,T +1]

| X˜ n (t) − X˜ n,h (t)| ε} ⊂ { sup |X (t) − X h (t)| ε} a.s. t∈[0,T ]

for any n T + 3, h ∈ (0, 1] and ε ∈ (0, 1). Since Dn−1 ⊂ An a.s., we see that P( sup |X (t) − X h (t)| > ε) P({ t∈[0,T ]

sup

t∈[0,T +1]

| X˜ n (t) − X˜ n,h (t)| > ε} ∩ An ) + P( \ Dn−1 ),

for any n T + 2, h ∈ (0, 1] and ε ∈ (0, 1). So we see that for any ε ∈ (0, 1) lim P( sup |X (t) − X h (t)| > ε) P( \ Dn−1 ). h↓0

t∈[0,T ]

Since P( \ Dn−1 ) → 0, n → ∞, we have the last assertion. Corollary 6.1.1 Let N 1. Assume that σk : [0, ∞) × R N → R N k = 0, 1, . . . , d, are continuous and satisfy the local Lipschitz continuous condition (LLC). Moreover, we assume the following growth condition (G). (G) There is a C ∈ (0, ∞) such that 1 |σk (t, x)|2 + σ0i (t, x)x i C(1 + |x|2 ), 2 k=1 i=1 N

N

x ∈ R N , t ∈ [0, ∞).

Then there exists a unique adapted continuous process X : [0, ∞) × → R N satisfying Eq. (6.6). Moreover, it holds that P( sup |X (t) − X h (t)| > ε) → 0, t∈[0,T ]

h ↓ 0,

for any T > 0 and ε > 0. Proof Let X n , τn be as in Theorem 6.1.1. Let c : [0, ∞) × R N → R be a Borel measurable function given by

6.1 Itô’s Stochastic Differential Equation and Euler–Maruyama Approximation

c(t, x) =

N N 1 |σk (t, x)|2 + σ0i (t, x)x i C(1 + |x|2 ), 2 k=1

147

x ∈ R N , t ∈ [0, ∞).

i=1

Also, let Ar = {|ξ | r } ∈ F0 , r > 0. Then by Itô’s formula we see that for r n 1 Ar |X nτn (t)|2 = 1 Ar |ξ | + 2 2

N d k=1 i=1

t

+2

0

t

1 Ar 1(0,τn ] (s)X ni (s)σki (s, X n (s)) d B k (s)

1 Ar 1(0,τn ] (s)c(s, X n (s)) ds.

0

Since

t 0

1 Ar 1(0,τn ] (s)(X ni (s)σki (s, X n (s)))2 ds tn 2

sup

(s,x)∈[0,t]×Bn

|σk (s, x)|2 ,

we see by the assumptions that for any T > 0 t E[|X nτn (t)|2 , Ar ] = E[|ξ |2 , Ar ] + 2 E[1 Ar 1(0,τn ] (s)c(s, X n (s))] ds 0 t E[C(1 + |X nτn (s)|2 ), Ar ] ds r2 + 2 0 t 2 E[|X nτn (s)|2 , Ar ] ds, t ∈ [0, T ]. (r + 2T C) + 2C 0

Then by Gronwall’s inequality we see that E[|X nτn (t)|2 , Ar ] (r 2 + 2T C) exp(2Ct), t ∈ [0, T ]. So we see that P(Ar ∩ {τn T }) n −2 E[|X nτn (T )|2 , Ar ] n −2 (r 2 + 2T C) exp(2C T ), r n. Letting n → ∞, we see that P(Ar ∩ { lim τn T }) = 0, r > 0, T > 0. n→∞

Letting r ↑ ∞ furthermore, we see that P( lim τn T ) = 0, T > 0. n→∞

148

6 Stochastic Differential Equation

So we see that P(limn→∞ τn = ∞) = 1. Therefore by Theorem 6.1.1 we have our assertion.

6.2 Definition of Stochastic Differential Equation Let N 1, d 1, and σk : [0, ∞) × R N → R N , k = 0, 1, . . . , d, be Borel measurable functions. Let us consider a stochastic differential equation d X (t) =

d

σk (t, X (t))dwk (t) + σ0 (t, X (t)) dt.

(6.7)

k=1

Definition 6.2.1 We say that X is a solution to the stochastic differential equation (6.7), if there is a standard filtered probability space (, F , P, {Ft }t∈[0,∞) ), d-dimensional {Ft }t∈[0,∞) -Brownian motion B : [0, ∞) × → Rd , and an N dimensional adapted continuous process X : [0, ∞) × → R N such that P

d k=1

T

t

|σk (t, X (t))| dt + 2

0

|σ0 (t, X (t))| dt < ∞ = 1, T > 0,

0

and X (t) = X (0) +

d k=1

t

σk (s, X (s))d B k (s) ds +

0

t

σ0 (s, X (s)) ds, t ∈ [0, ∞).

0

Let W N be the set of all continuous functions w : [0, ∞) → R N , and let dW : W × W N → [0, ∞) be a metric function on W N given by N

dW (w1 , w2 ) =

∞ k=1

2−k ∧ max |w1 (t) − w2 (t)|, t∈[0,k]

w1 , w2 ∈ W N .

Then it is easy to see that (W N , dW ) is a separable complete metric space. Let B(W N ) be the Borel algebra over W N , i.e., B(W N ) is a σ -algebra over W N generated by the family of open sets in W N . Also, let Bt (W N ), t ∈ [0, ∞), be a sub-σ -algebra of B(W N ) given by Bt (W N ) = σ {{{w ∈ W N ; w(s) ∈ A}; s ∈ [0, t], A ∈ B(R N )}}. It is easy to show that B(W N ) = σ { t∈[0,∞) Bt (W N )}. Let a i j : [0, ∞) × R N → R, i, j = 1, . . . , N , and bi : [0, ∞) × R N → R, i = 1, . . . , N , be Borel measurable functions such that N × N matrix (a i j (t, x))i, j=1,...,N

6.2 Definition of Stochastic Differential Equation

149

is a non-negative definite symmetric matrix for all (t, x) ∈ [0, ∞) × R N . We define a measurable function L f : [0, ∞) × R N → R for any f ∈ C0∞ (R N ) by (L f )(t, x) =

N N ∂2 f ∂f 1 ij a (t, x) i j (x) + bi (t, x) i (x). 2 i, j=1 ∂x ∂x ∂ x i=1

(6.8)

Note that L can be regarded as a linear operator. Definition 6.2.2 Let x ∈ R N . We say that a probability measure μ on (W N , B(W N )) is a solution to the martingale problem starting from x with an infinitesimal generator L , if the following three conditions are satisfied. (1) μ(w(0) = x) = 1. (2) For any T > 0 and f ∈ C0∞ (R N ) Eμ

T

|(L f )(t, w(t))| dt < ∞.

0

(3) For any f ∈ C0∞ (R N )

t

f (w(t)) −

(L f )(s, w(s)) ds; t ∈ [0, ∞)

0

is a {Bt (W N )}t∈[0,∞) -martingale under μ. Let ν be a probability measure on (W N , B(W N )). Let N ν = {B ⊂ W N ; there is an A ∈ B(W N ) such that B ⊂ A, ν(A) = 0}, and let F ν = σ {B(W N ) ∪ N ν }. Then it is easy to show that the probability measure ν can be extended to a probability measure ν˜ on a measurable space (W N , F ν ) uniquely. Let σ {Bs (W N ) ∪ N ν }. Ftν = s>t

Then (W N , F ν , ν, ˜ {Ftν }t∈[0,∞) ) is a standard filtered probability space. Proposition 6.2.1 Let ν be a probability measure on (W N , B(W N )), and (W N , F ν , ν˜ , {Ftν }t∈[0,∞) ) be the above-mentioned standard filtered probability space. Assume that ⎞ ⎛ T N N ⎝ |a i j (s, w(s))| + |bi (t, w(t))|⎠ dt < ∞, ν˜ − a.s.w. 0

i, j=1

i=1

for any T > 0. Assume moreover that

150

6 Stochastic Differential Equation

t

f (w(t)) −

(L f )(s, w(s)) ds; t ∈ [0, ∞)

0

is an {Ftν }t∈[0,∞) -continuous local martingale for any quadratic function f on R N . Let t bi (s, w(s)) ds, t ∈ [0, ∞), i = 1, 2, . . . , N . M i (t, w) = wi (t) − wi (0) − 0

Then M i , i = 1, . . . , N , are continuous local martingales such that

t

M i , M j (t) =

a i j (s, w(s)) ds, ν˜ − a.s.w.,

i, j = 1, . . . , N .

(6.9)

0

Moreover,

T

f (w(t)) − f (w(0)) −

(L f )(t, w(t)) dt; t ∈ [0, ∞)

0

is an {Ftν }t∈[0,∞) continuous local martingale for any f ∈ C ∞ (R N ). Proof Let f (x) = x i , i = 1, . . . , N . Then we see that L f (t, x) = bi (t, x). So from the assumption we see that M i is a local continuous martingale. Note that

t

wi (t) = wi (0) + M i (t) +

bi (s, w(s)) ds,

i = 1, . . . , N .

0

So we see that wi , i = 1, . . . , N , are continuous semi-martingales. Therefore by Itô’s formula we see that for any f ∈ C ∞ (R N ) f (w(t)) = f (w(0)) +

N i=1

= f (w(0)) + +

N

1 2 i, j=1

t 0

0

N t i=1

t

0

N ∂f 1 t ∂2 f i (w(s)) dw (s) + (w(s)) dwi , w j (s) ∂xi 2 i, j=1 0 ∂ x i ∂ x j

∂f i (w(s)) d M (s) + ∂xi i=1 N

t

bi (s, w(s))

0

∂2 f (w(s)) dM i , M j (s). ∂xi ∂x j

Let f (x) = x i x j , i, j = 1, . . . , N . Then we see that L f (t, x) =

N k=1

bk (t, x)

∂f (x) + a i j (t, x), ∂xk

∂f (w(s)) ds ∂xi

6.2 Definition of Stochastic Differential Equation

151

and so we see that

t

f (w(t)) − f (w(0)) − =

N k=1

L f (s, w(s)) ds

0 t 0

∂f (w(s)) d M k (s) + M i , M j (t) − ∂xk

t

a i j (s, w(s)) ds.

0

Then by the assumption we see that

t

M , M (t) − i

j

a i j (s, w(s)) ds,

t 0,

0

is a continuous local martingale. So by Proposition 3.6.4 we have Eq. (6.9). Then by Eq. (6.9) we see that f (w(t)) = f (w(0)) +

N

t 0

k=1

∂f (w(s)) d M k (s) + ∂xk

t

(L f )(s, w(s)) ds

0

for any f ∈ C ∞ (R N ). This implies the last assertion. There is a strong relation between stochastic differential equations and martingale problems. Proposition 6.2.2 Let σk : [0, ∞) × R N → R N , k = 0, 1, . . . , d, be Borel measurable functions. Let a i j : [0, ∞) × R N → R, i, j = 1, . . . , N , and bi : [0, ∞) × R N → R, i = 1, . . . , N , be Borel measurable functions given by a i j (t, x) =

d

j

σki (t, x)σk (t, x) and bi (t, x) = σ0i (t, x)

k=1

for any (t, x) ∈ [0, ∞) × R N , i, j = 1, . . . , N . Also, let L be defined by Eq. (6.8). Let x ∈ R N . Then we have the following. (1) Let X : [0, ∞) × → R N be a solution to the stochastic differential equation (6.7) such that X (0) = x and E

T

|L f (t, X (t))| dt < ∞

0

for any T > 0 and f ∈ C0∞ (R N ). Let X : → W N be given by ( X (ω))(t) = X (t, ω), t ∈ [0, ∞), ω ∈ , and let ν be a probability measure on (W N , B(W N )) N given by ν(A) = P(−1 X (A)), A ∈ B(W ). Then ν is a solution to the martingale problem starting from x with an infinitesimal generator L .

152

6 Stochastic Differential Equation

(2) Let ν be a solution to the martingale problem starting from x with an infinitesimal generator L . Then there is a standard filtered probability space (, F , P, {Ft }t∈[0,∞) ), d-dimensional {Ft }t∈[0,∞) -Brownian motion B : [0, ∞) × → Rd and an N -dimensional adapted continuous process X : [0, ∞) × → R N satisfying the following. (i) X is a solution to the stochastic differential equation (6.7) and X (0) = x. (ii) Let ν be the probability measure on (W N , B(W N )) given by (1). Then ν = μ. Proof We say that X is a solution to the stochastic differential equation (6.7), if there is a standard filtered probability space (, F , P, {Ft }t∈[0,∞) ), d-dimensional {Ft }t∈[0,∞) -Brownian motion B : [0, ∞) × → Rd , d

X (t) = x +

t

0

k=1

t

σk (s, X (s)) d B (s) + k

σ0 (s, X (s)) ds.

0

Let f ∈ C0∞ (R N ) and fix it. By Itô’s formula we see that f (X (t)) = f (x) + N f (t) +

t

(L f )(s, X (s)) ds,

0

where

N d

N f (t) =

k=1 i=1

0

t

σki (s, X (s))

∂f (X (s)) d B k (s). ∂xi

By easy calculation we see that N

a i j (t, x)

i, j=1

∂f ∂f (x) j (x) = (L( f 2 ))(t, x) − 2 f (x)(L f )(t, x). ∂xi ∂x

Since N (t) = f

d k=1

=

t

0

=

t

t

0

(σki (s, X (s))

∂f (X (s)))2 ds ∂xi

⎞ ∂ f ∂ f ⎝ a i j (s, X (s)) i (X (s)) j (X (s))⎠ ds ∂x ∂x i, j=1 ⎛

N

(L( f 2 )(s, X (s)) − 2 f (X (s))(L f )(s, X (s)) ds,

0

we see that E[N f (t)] < ∞, t 0. So N f is a square integrable martingale. Let M f : [0, ∞) × W N → R be given by

6.2 Definition of Stochastic Differential Equation

t

M f (t, w) = f (w(t)) −

(L f )(s, w(s)) ds,

153

t ∈ [0, ∞), w ∈ W N .

0

Let s 0 and A ∈ Bs (W N ). Since −1 X (A) ∈ Fs , we see that E ν [M f (t), A] = E

f (X (t)) −

t

L f (r, X (r )) dr, −1 X (A)

0 −1 f ν f = E[E[ f (x) + N (t)|Fs ], X (A)] = E[ f (x) + N f (s), −1 X (A)] = E [M (s), A].

Since M f (s) is Bs (W N )-measurable, we see that E ν [M f (t)|Bs (W N )] = M f (s). So we see that ν is a solution to the martingale problem. (2) Let B([0, 1)) be a Borel algebra over [0, 1), and let γ be the Lebesgue measure on [0, 1). Then ([0, 1), B([0, 1)), γ ) is a probability space. By Proposition 3.7.2 there is a d-dimensional Brownian motion Bˆ on this probability space ([0, 1), B([0, 1)), γ ). Let = W N × [0, 1), F˜ = B(W N ) ⊗ B([0, 1)) (product σ -algebra) and P˜ = ˜ is a probability space. μ ⊗ γ . Then (, F˜ , P) ˜ Let N0 = Let (, F , P) be the completion of the probability space (, F˜ , P). {B ∈ F ; P(B) = 0 or 1} and let Ft =

ˆ ); r ∈ [0, s]}} ∪ N0 } σ {{A × [0, 1); A ∈ Bs (W N )} ∪ {W N × C; C ∈ σ { B(r

s>t

for each t ∈ [0, ∞). Then (, F , P, {Ft }t∈[0,∞) ) is a standard filtered probability space. Let B : [0, ∞) × → Rd be given by ˆ z), B(t, (w, z)) = B(t,

(w, z) ∈ = W N × [0, 1), t ∈ [0, ∞).

Then it is easy to see that B is a d-dimensional {Ft }t∈[0,∞) -Brownian motion. Suppose that M˜ is a martingale and is a continuous process on the standard ˜ = 0. filtered probability space (W N , B(W N ), μx , {Bt (W N )}t∈[0,∞) ) such that M(0) Let M : [0, ∞) × → R be given by ˜ w), M(t, (w, z)) = M(t,

(w, z) ∈ = W N × [0, 1), t ∈ [0, ∞).

Then it is easy to see that M : [0, ∞) × → R N is an {Ft }t∈[0,∞) -martingale and is a continuous process. Therefore by Proposition 3.6.1 we see that M is a continuous local martingale. Also, we see by easy consideration that M, B k = 0, k = 1, . . . , d. Now let X : [0, ∞) × → R N be given by X (t, (w, z)) = w(t),

(w, z) ∈ = W N × [0, 1), t ∈ [0, ∞).

Then X is an {Ft }t∈[0,∞) -adapted continuous process.

154

6 Stochastic Differential Equation

Let M f : [0, ∞) × → R, f ∈ C ∞ (R N ), be given by

t

M (t) = f (X (t)) − f (x) − f

L f (r, X (r )) dr,

t ∈ [0, ∞).

0

If f ∈ C0∞ (R N ), then M f is an {Ft }t∈[0,∞) -martingale. So we see that M f , f ∈ C ∞ (R N ), is a continuous local martingale. In particular, letting f (x) = x i , i = 1, . . . , N , we see that

t

M i (t) = X i (t) − x i −

bi (r, X (r )) dr,

t ∈ [0, ∞), i = 1, . . . , d,

0

are continuous local martingales. Also, we see that M i , B k = 0, i = 1, . . . , N , k = 1, . . . , d. Letting f (x) = x i x j , i, j = 1, . . . , N , we see by similar argument in the proof of Proposition 6.2.1 that

t

M i , M j (t) = =

d k=1

0

a i j (r, X (r )) dr

0 t

j

σki (r, X (r ))σk (r, X (r )) dr, i, j = 1, . . . , N , t ∈ [0, ∞).

So by Theorem 5.2.1 there is a d-dimensional {Ft }t∈[0,∞) -Brownian motion B˜ such that M i (t) =

d

t

0

k=1

σki (r, X (r )) d B˜ k (r ),

i = 1, . . . , N , t ∈ [0, ∞).

So we see that X (t) = x +

d k=1

0

t

σk (r, X (r )) d B˜ k (r ) +

t

σ0 (r, X (r )) dr, t ∈ [0, ∞).

0

Therefore X is a solution to the stochastic differential equation (6.7). For any A ∈ B(W N ) we see that P(−1 X (A)) = (μ ⊗ γ )(A × [0, 1)) = μ(A). So we see that ν = μ.

6.2 Definition of Stochastic Differential Equation

155

These imply Assertion (2). Let us explain why the definition of stochastic differential equations is not simple. Let σ : R → R be given by σ (x) =

−1, if x < 0, 1, if x 0.

Then we consider the following stochastic differential equation: d X (t) = σ (X (t)) dw(t),

X (0) = 0.

(6.10)

Let {B(t)}t0 be a 1-dimensional {Ft }t∈[0,∞) -Brownian motion, and let

t

Z (t) =

σ (B(s)) d B(s).

0

Note that

t

Z , Z t =

σ (B(s))2 ds = t.

0

So Z (t) is also a 1-dimensional {Ft }t0 -Brownian motion. Moreover, we see that B(0) = 0, t t 2 σ (B(s)) d B(s) = σ (B(s)) d Z (s). B(t) = 0

0

So letting X (t) = B(t), t 0, we see that X is a solution to the stochastic differential equation (6.10). Now let Gt , t ∈ [0, ∞), be given by Gt =

σ {σ {Z (r ); r ∈ [0, s]} ∪ N0 },

s>t

where N0 = {A; P(A) = 0 or 1}. Then by Proposition 3.7.3 we see that (, F , P, {Gt }t0 ) is a standard filtered probability space. By Theorem 5.1.1, we see that Z is a {Gt }t0 -Brownian motion. Suppose that X is a {Gt }t0 -adapted continuous process such that

t

X (t) = 0

Then we see that

σ (X (s)) d Z (s).

156

6 Stochastic Differential Equation

X, X t =

t

σ (X (s))2 ds = t.

0

Therefore X (t) is also a {Gt }t0 -Brownian motion, and so P(X (t) = x) = 0, x ∈ R, t > 0. By Proposition 5.7.2, we see that t σ (X (s))2 d Z (s) = σ (X (s)) d X (s) 0 0 t 1 1[0,a) (|X (s)|) ds. = |X (t)| − lim a↓0 2a 0

Z (t) =

t

So we see that Z (t) is σ {σ {|X (s)|; s ∈ [0, t]} ∪ N0 }-measurable. Therefore we see that Gt ⊂ σ {σ {|X (s)|; s ∈ [0, t]} ∪ N0 }. Then we can easily show that X (t) = E[X (t)|Gt ] = E[E[X (t)|σ {|X (s)|; s ∈ [0, t]}]|Gt ] = 0 a.s. This is a contradiction. By the above argument, we see that if we take (, F , P, {Gt }t0 ) as a basic standard filtered probability space and if we take Z as a basic Brownian motion, then there does not exist X such that t σ (X (s)) d Z (s), t 0, X (t) = 0

in this probability space. So if we take Itô’s idea on stochastic differential equations, the existence of solutions depends on the choice of a standard filtered probability space. Since we want to discuss the existence of solutions only on the conditions of coefficient functions not depending on the choice of probability spaces, we define a solution to a stochastic differential equation by Definition 6.2.1. We also express stochastic differential equations like Eq. (6.7) without mentioning probability spaces. However, Itô’s definition is useful in the case that the coefficients satisfy a good condition. So we will use both ideas in the following.

6.3 Uniqueness of a Solution to Martingale Problem Let N 1. We assume that σk : [0, ∞) × R N → R N , k = 0, 1, . . . , d, are continuous functions satisfying the following two conditions, local Lipschitz continuity condition (LLC) and growth condition (G).

6.3 Uniqueness of a Solution to Martingale Problem

157

(LLC) For any R > 0 there exists a K R ∈ (0, ∞) such that |σk (t, x) − σk (t, y)| K R |x − y|,

t ∈ [0, R], x, y ∈ B R , k = 0, 1, . . . , d.

(G) There is a C ∈ (0, ∞) such that 1 |σk (t, x)|2 + σ0i (t, x)x i C(1 + |x|2 ), 2 k=1 i=1 N

N

x ∈ R N , t ∈ [0, ∞).

We also define continuous functions a i j : [0, ∞) × R N → R, i, j = 1, . . . , N , b : [0, ∞) : R N → R, i, j = 1, . . . , N , by i

a i j (t, x) =

d

j

σki (t, x)σk (t, x) and bi (t, x) = σ0i (t, x)

k=1

for any t ∈ [0, ∞), x ∈ R N , i, j = 1, . . . , N , and define a second order linear partial differential operator L by L f (t, x) =

d d 1 ij ∂2 f ∂f a (t, x) i j (x) + bi (t, x) i (x), x ∈ R N , f ∈ C 2 (R N ). 2 ∂x ∂x ∂x i, j=1

i=1

Then we have the following. Proposition 6.3.1 For any x ∈ R N there exists a unique solution to the martingale problem starting at x with an infinitesimal generator L . Proof First, note that ⎛ ⎝

sup t∈[0,T ],x∈R N ,|x|R

N

|a i j (t, x)| +

i, j=1

N

⎞ |bi (t, x)|⎠ < ∞

i=1

for any T > 0 and R > 0. Therefore we see that sup t∈[0,T ],x∈R N

|(L f )(t, x)| < ∞

for any T > 0 and f ∈ C0∞ (R N ). By Corollary 6.1.1 and Proposition 6.2.2 we see that a solution to the martingale problem exists. So we show the uniqueness of a solution. Suppose that νi , i = 1, 2, are solutions to the martingale problem starting at x with an infinitesimal generator L . Then we see by Proposition 6.2.2 that for each i = 1, 2, there is a standard filtered probability space ((i) , F (i) , P (i) , {Ft(i) }t∈[0,∞) ), a d-dimensional Ft(i) -Brownian motion {B (i) (t); t ∈ [0, ∞)}, and an Ft(i) -adapted

158

6 Stochastic Differential Equation

continuous process X (i) : [0, ∞) × (i) → R N such that X (i) satisfies Itô’s stochastic differential equation (i)

X (t) = x +

d

t

(i)

σk (t, X (s)) d B

(i),k

(s) +

0

k=1

t

σ0 (X (i) (s)) ds, t 0

0

and that the probability law of X (i) (·) in W N is νi , i = 1, 2. Let Fh : [0, ∞) × W d → R N , h ∈ (0, 1], be inductively given by ˜ = x, Fh (0, w) and ˜ Fh (t, w) = Fh ((n − 1)h, w) ˜ d 1 nh + σk (s, Fh ((n − 1)h, w)) ˜ ds (w˜ k (t) − w˜ k ((n − 1)h)) h (n−1)h k=1 nh 1 + σ0 (s, Fh ((n − 1)h, w)) ˜ ds (t − (n − 1)h), h (n−1)h t ∈ ((n − 1)h, nh], n = 1, 2, . . . Then for each i = 1, 2, {Fh (t, , B (i) (·)); t ∈ [0, ∞)} is the Euler–Maruyama approximation of X (i) . So by Corollary 6.1.1 we see that P (i) ( sup |X (i) (t) − Fh (t, B (i) (·))| > ε) → 0, h ↓ 0. t∈[0,T ]

Therefore we see that (i)

(i)

E νi [ f (w)] = E P [ f (X (i) (·))] = lim E P [ f (Fh (·, B (i) ))] h↓0

for any bounded continuous function f : W d → R. Since Brownian motions B (1) and B (2) have the same probability law, we see that (1)

(2)

E P [ f (Fh (·, B (1) ))] = E P [ f (Fh (·, B (2) ))] for any bounded continuous function f : W d → R. Therefore we see that E ν1 [ f (w)] = E ν2 [ f (w)] for any bounded continuous function f : W d → R, and so we see that ν1 = ν2 .

6.3 Uniqueness of a Solution to Martingale Problem

159

Remark In this book, we do not mention the notions, strong solutions and pathwise uniqueness of a stochastic differential equation. By using these notions we can show the uniqueness of a solution to a martingale problem under more general assumptions. See Ikeda–Watanabe [4] for this topic.

6.4 Time Homogeneous Stochastic Differential Equation Let σk : R N → R N , k = 0, 1, 2, . . . , d, be continuous functions satisfying the Lipschitz continuous condition, i.e., there is a C0 ∈ (0, ∞) such that |σk (x) − σk (y)| C0 |x − y|,

x, y ∈ R N , k = 0, 1, . . . , d.

Let a i j : R N → R, bi : R N → R, i, j = 1, . . . , N , be a continuous function given by a i j (x) =

d

j

σki (x)σk (x) and bi (x) = σ0i (x), x ∈ R N , i, j = 1, . . . , N .

k=1

Also, let L be a second order differential operator given by L f (x) =

d d 1 ij ∂2 f ∂f a (x) i j (x) + bi (x) i (x), x ∈ R N , f ∈ C 2 (R N ). 2 i, j=1 ∂x ∂x ∂x i

Then by Proposition 6.3.1 we see that there is a unique solution μx to the martingale problem starting from x ∈ R N with an infinitesimal generator L . Let Cb (R N ) be the set of all bounded continuous functions defined in R N . Let Pt f : R N → R, t ∈ [0, ∞), f ∈ Cb (R N ), be given by (Pt f )(x) = E μx [ f (w(t))],

x ∈ R N , f ∈ Cb (R N ).

Also we define a norm · ∞ on Cb (R N ) by f ∞ = sup | f (x)|,

f ∈ Cb (R N ).

x∈R N

Let (, F , P, {Ft }t∈[0,∞) ) be a standard filtered probability space and {B(t); t ∈ [0, ∞)} be a d-dimensional {Ft }t∈[0,∞) -Brownian motion. Then by Corollary 6.1.1 we see that there is a unique solution to Itô’s stochastic differential equation X (t, x) = x +

d k=1

0

t

t

σk (X (s, x)) d B k (s) + 0

σ0 (X (s, x)) ds, t 0

160

6 Stochastic Differential Equation

for each x ∈ R N . Then by Proposition 6.2.2 we see that (Pt f )(x) = E[ f (X (t, x))], t 0, x ∈ R N ,

f ∈ Cb (R N ).

Proposition 6.4.1 (1) For any T > 0 there is a C T ∈ (0, ∞) such that E[ sup |X (t, x) − X (t, y)|2 ] C T |x − y|2 , x, y ∈ R N . t∈[0,T ]

(2) Pt , t ∈ [0, ∞), are linear operators defined in Cb (R N ) and Pt f (x) → f (x), t ↓ 0, for any x ∈ R N . Proof We show Assertion (1). Note that X (t, x) − X (t, y) d t =x−y+ (σk (X (s, x)) − σk (X (s, y))) d B k (s) k=1

t

+

0

(σ0 (X (s, x)) − σ0 (X (s, y))) ds.

0

So we see that E[ sup |X (s, x) − X (s, y)|2 ]1/2 s∈[0,t]

|x − y| +

t 1/2 d 4N E |σk (X (s, x)) − σ (X (s, y))|2 ds 0

k=1

t

+E

2 1/2

|σ0 (X (s, x)) − σ0 (X (s, y))| ds

0

|x − y| + (2d N

1/2

+t

1/2

t

)C0 E

. 1/2

|X (s, x) − X (s, y)| ds 2

.

0

Therefore we see that for any t ∈ [0, T ] E[ sup |X (s, x) − X (s, y)|2 ] s∈[0,t]

2|x − y|2 + 2(2d N 1/2 + T 1/2 )2 C02

0

t

E[ sup |X (r, x) − X (r, y)|2 ] ds. r ∈[0,s]

So we have Assertion (1) from Gronwall’s inequality. We show Assertion (2). It is obvious that Pt is a linear operator. Suppose that f ∈ Cb (R N ). Then it is easy to see that Pt f is a bounded function. Let us take arbitrary ε ∈ (0, 1] and R > 1. Then we see that

6.4 Time Homogeneous Stochastic Differential Equation

161

|Pt f (y) − Pt f (x)| E[| f (X (t, y)) − f (X (t, x))|] E[| f (X (t, y)) − f (X (t, x))|, |X (t, y) − X (t, x)| ε, |X (t, x)| R] + E[| f (X (t, y)) − f (X (t, x))|, |X (t, y) − X (t, x)| > ε or |X (t, x)| > R] 1 δ(ε, R) + 2 f ∞ 2 E[|X (t, y) − X (t, x)|2 ] + P(|X (t, x)| > R) , ε where δ(ε, R) = sup{| f (z) − f (˜z )|; z, z˜ ∈ R N , |z − z˜ | ε, |z| R}. Therefore we see that lim |Pt f (y) − Pt f (x)| δ(ε, R) + 2 f ∞ P(|X (t, x)| > R).

y→x

Since lim δ(ε, R) = 0 and lim P(|X (t, x)| > R) = 0, ε↓0

R→∞

we see that Pt f (y) → Pt f (x), y → x, and so Pt f is a continuous function. Since X (t, x) is a continuous process, we see that X (t, x) → x, t ↓ 0. Therefore we see that t ↓ 0. Pt f (x) = E[ f (X (t, x))] → f (x), These imply Assertion (2). Let ξ be an F0 -measurable R N -valued random variable, and let us consider Itô’s stochastic differential equation X (t) = ξ +

d k=1

t

t

σk (X (s)) d B k (s) +

0

σ0 (X (s)) ds,

t 0.

0

By Corollary 6.1.1 there is a unique solution to this equation. Then we have the following. Proposition 6.4.2 Let τ be a finite Ft -stopping time. Then E[ f (X (τ + t))|Fτ ] = (Pt f )(X (τ )),

t ∈ [0, ∞), f ∈ Cb (R N ).

Proof Let h ∈ (0, 1], and let us define Fh : [0, ∞) × R N × W d → R N inductively by

162

6 Stochastic Differential Equation

Fh (0, x, w) ˜ = x, Fh (t, x, w) ˜ = Fh ((n − 1)h, x, w) ˜ +

d

σk (Fh ((n − 1)h, x, w))( ˜ w˜ k (t) − w˜ k ((n − 1)h))

k=1

˜ − (n − 1)h), t ∈ ((n − 1)h, nh], n 1. + σ0 (Fh ((n − 1)h, x, w))(t Then as we saw in the proof of Proposition 6.3.1, (Pt f )(x) = lim E[ f (Fh (t, x, B(·)))], h↓0

t 0, x ∈ R N , f ∈ Cb (R N ).

˜ = B(τ + t) − B(τ ), t 0. Then by ProposiLet Gt = Fτ +t , t 0, and B(t) ˜ tion 5.1.1 we see that B(t), t 0, is a d-dimensional {Gt }t∈[0,∞) -Brownian motion. Let X˜ (t) = X (τ + t), t ∈ [0, ∞). Then X˜ is Gt -adapted. Although we need some arguments, we can show that X˜ (t) = X (τ ) +

d k=1

t

σk ( X˜ (s)) d B˜ k (s) +

0

t

σ0 ( X˜ (s)) ds,

t 0.

0

So by Corollary 6.1.1 we see that ˜ → 0, h ↓ 0 E[| f ( X˜ (t)) − f (Fh (t, X (τ ), B(·)))|]

t 0, f ∈ Cb (R N ).

Therefore we see that ˜ E[|E[ f ( X˜ (t))|G0 ] − E[ f (Fh (t, X (τ ), B(·)))|G 0 ]|] → 0, h ↓ 0. On the other hand, we see by Proposition 1.3.8 that ˜ ˜ E[ f (Fh (t, X (τ ), B(·)))|G 0 ] = E[ f (Fh (t, x, B(·)))]|x=X (τ ) , and so we see that ˜ E[ f (Fh (t, X (τ ), B(·)))|G 0 ] → (Pt f )(X (τ )), h ↓ 0, a.s. These imply our assertion. Proposition 6.4.3 (1) For any t, s 0, Ps Pt = Pt+s is a linear operator in Cb (R N ). (2) For any f ∈ C02 (R N )

t

(Pt f )(x) = f (x) + 0

(Ps L f )(x) ds,

t 0, x ∈ R N ,

6.4 Time Homogeneous Stochastic Differential Equation

and

163

1 lim (Ph f (x) − f (x)) = L f (x), h↓0 h

x ∈ RN .

Proof (1) Let us apply the previous proposition for ξ = x and τ = s. Then we see that f ∈ Cb (R N ). E[ f (X (t + s, x))|Fs ] = (Pt f )(X (s, x)) So we see that (Pt+s f )(x) = E[E[ f (X (t + s, x))|Fs ]] = E[(Pt f )(X (s, x))] = (Ps (Pt f ))(x). This implies Assertion (1). From the definition of the martingale problem we see that E μx

t

f (w(t)) −

L f (w(s)) ds = E μx [ f (w(0))] = f (x)

0

for f ∈ C02 (R N ). So we have the first part of Assertion (2). The second part follows from the fact that 1 1 (Ph f (x) − f (x)) = h h

h

(Ps L f )(x) ds.

0

Suppose that f ∈ C02 (R N ) and Pt f is smooth enough. Then we can expect that 1 1 lim (Pt+h f (x) − Pt f (x)) = lim (Ph (Pt f )(x) − Pt f (x)) = L Pt f (x). h↓0 h h↓0 h However, this equation is not obvious in general. We can justify this idea under more assumptions on σk as we see in the next section.

6.5 Smoothness of Solutions to Stochastic Differential Equations In this section we consider time homogeneous stochastic differential equations only. First, we make some preparations. Let N 1, and let g(t, x) =

1 2π t

N /2

|x|2 , exp − 2t

t ∈ (0, ∞), x ∈ R N .

164

6 Stochastic Differential Equation

Then we see that N ∂ 1 ∂ 2 g(t, x) = g(t, x), ∂t 2 i=1 ∂ x i

t ∈ (0, ∞), x ∈ R N .

We can show the following by easy computation. Proposition 6.5.1 (1) g(t, x) = t −N /2 g(1, t −1/2 x), t ∈ (0, ∞), x ∈ R N . N (2) For any multi-index α = (α1 , . . . , α N ) ∈ Z0 |α| ∂ |α| −(N +|α|)/2 ∂ g g(t, x) = t (1, t −1/2 x), t ∈ (0, ∞), x ∈ R N . ∂xα ∂xα

Here |α| = α1 + · · · + α N . N and p ∈ [0, ∞) (3) For any multi-index α ∈ Z0 |α| p 1/ p ∂ dx g(t, x) α RN ∂ x p |α| 1/ p ∂ g dx = t −((1−1/ p)N +|α|)/2 (1, x) , t ∈ (0, ∞). α RN ∂ x

Let G t f ∈ C ∞ (R N ), t ∈ [0, ∞), f ∈ C0∞ (R N ), be given by (G 0 f )(x) = f (x) and (G t f )(x) =

RN

g(t, x − y) f (y) dy, x ∈ R N , t ∈ (0, ∞).

Then we see the following by easy computation. N Proposition 6.5.2 For any multi-index α ∈ Z0 , f ∈ C0∞ (R N ), t ∈ (0, ∞) and x ∈

RN

∂ |α| ∂ |α| f ∂ |α| g (x) = (G f )(x) = G (t, x − y) f (y) dy, t t α ∂xα ∂xα RN ∂ x

and ∂ ∂t

∂ |α| f Gt ∂xα

1 (x) = 2 i=1 d

RN

∂g ∂ ∂ |α| (t, x − y) f (y) dy. ∂xi ∂ yi ∂ yα

N and p ∈ Proposition 6.5.3 (1) For any f ∈ C0∞ (R N ), any multi-index α ∈ Z0 (N , ∞),

6.5 Smoothness of Solutions to Stochastic Differential Equations

|α| ∂ f sup α (x) x∈R N ∂ x −(N +|α|)/2 C0,α s + C1,α, p s

( p−N )/(2 p)

165

| f (x)| d x

RN d i=1

RN

p 1/ p ∂ ∂ |α| . ∂ x i ∂ x α f (x) d x

Here C0,α C1,α, p

|α| ∂ g = sup α (1, y) , y∈R N ∂ y q 1/q ∂g 1 2p dy = (1, y) , 1 2 p−N RN ∂ y

and q = p/( p − 1). N , f ∈ C ∞ (R N ), ψ ∈ C0∞ (R N ), p ∈ (N , ∞) and (2) For any multi-index α ∈ Z0 s>0 |α| ∂ sup α (ψ(x) f (x)) x∈R N ∂ x C0,α s −(N +|α|)/2 | f (x)||ψ(x)| d x RN ⎞ ⎛ |β| ∂ ψ ⎟ ⎜ sup (x)⎠ + C1,α, p s ( p−N )/(2 p) N 2|α|+1 ⎝ β ∂ x N x∈R N β∈Z0 ,|β||α|+1

⎛ ⎜ ×⎝

N β∈Z0 ,|β||α|+1

Dψ

⎞ 1/ p |β| p ∂ f ⎟ ⎠. ∂ x β (x) d x

Here Dψ is an open set given by Dψ =

! N β∈Z0

x ∈ RN ;

∂ |β| ψ (x) = 0 . ∂xβ

Proof By Propositions 6.5.1 and 6.5.2 we see that

166

6 Stochastic Differential Equation

|α| ∂ ∂ x α f (x) |α| s ∂ f ∂ ∂ |α| f = G s (x) − (x) dr Gr α ∂xα ∂r ∂ x 0 |α| ∂ g α (s, x − y) | f (y)| dy RN ∂ x d ∂g ∂ ∂ |α| 1 s + dr (r, x − y) f (y) dy i i α 2 i=1 0 ∂y ∂y RN ∂ x |α| ∂ g s −(N +|α|)/2 sup α (1, y) | f (y)| dy RN y∈R N ∂ x q 1/q p 1/ p d s ∂g ∂ ∂ |α| 1 dy + dr (r, y) f (y) dy . i i α 2 i=1 ∂y ∂y 0 RN ∂ y RN Then we see that

s

dr 0

RN

q 1/q s ∂g −(1+N / p)/2 = dr r ∂ x i (r, y) dy RN 0

q 1/q ∂g ∂ x i (1, y) dy

for all i = 1, . . . , N . This implies Assertion (1). Since we see that ∂ jN ∂ j1 · · · (ψ(x) f (x)) (∂ x 1 ) j1 (∂ x N ) jN j1 jN j1 jN ∂ j1 −k1 ∂ jN −k N ··· ··· ··· ψ (x) = k1 kN (∂ x 1 ) j1 −k1 (∂ x N ) jN −k N k1 =0 k N =0 ∂ k1 ∂ kN × · · · f (x), (∂ x 1 )k1 (∂ x N )k N we obtain Assertion (2). Also, we have the following. N Proposition 6.5.4 For any multi-indices α, β ∈ Z0 , with |α| |β| 2, there is a N polynomial Pα,β (yγ ; |γ | |α| − 1) in yγ ∈ R N , γ ∈ Z0 , |γ | |α| − 1, satisfying the following. N For any F, G ∈ C ∞ (R N ; R N ) and any multi-indices α ∈ Z0 with |α| 1,

6.5 Smoothness of Solutions to Stochastic Differential Equations

∂ |α| F(G(x)) ∂xα N ∂F ∂ |α| G i (G(x)) (x) = ∂xi ∂xα i=1 +

N β∈Z0 ,2|β||α|

∂ |β| F (G(x))Pα,β ∂xβ

167

∂ |γ | G (x); |γ | |α| − 1 , x ∈ R N . ∂xγ

Proof If |α| = 1, then ∂F ∂G i ∂ F(G(x)) = (G(x)) (x). ∂x j ∂xi ∂x j i=1 N

So our assertion is valid if we let Pα,β = 0. In the case that |α| 2 we can easily show our assertion by induction in |α|. From now on, we assume that σk : R N → R N , k = 0, 1, . . . , d, satisfy the following assumption. (S) σk ∈ C ∞ (R N ; R N ), k = 0, 1, . . . , d, and |α| ∂ sup α σk (x) < ∞ x∈R N ∂ x N for any k = 0, 1, . . . , d, and any multi-index α ∈ Z0 . Let (, F , P, {Ft }t∈[0,∞) ) be a standard filtered probability space, and B : [0, ∞) × → Rd be a d-dimensional {Ft }t∈[0,∞) -Brownian motion. Let B 0 (t) = t, t ∈ [0, ∞), for simplicity of equations. Let us consider Itô’s stochastic differential equation

X (t, x) = x +

d k=1

t

t

σk (X (s, x)) d B (s) + k

0

σ0 (X (s, x)) ds,

t 0.

0

(6.11) Note that we can express this equation as follows: X (t, x) = x +

d k=0

t

σk (X (s, x)) d B k (s).

0

By Assumption (S) and Corollary 6.1.1 there exists a unique solution X (·, x) to this equation for each x ∈ R N . Let X h : [0, ∞) × R N × → R N , h ∈ (0, 1], be given inductively by

168

6 Stochastic Differential Equation

X h (0, x) = x, X h (t, x) = X h ((n − 1)h, x) +

d

σk (X h ((n − 1)h, x))(B k (t) − B k ((n − 1)h)),

(6.12)

k=0

t ∈ ((n − 1)h, nh], n = 1, 2, . . .. Then X h (·, x) is the Euler–Maruyama approximation of the solution X (·, x) to the stochastic differential equation (6.11). So by Assumption (S) and Proposition 6.1.2 we see that E[ sup |X (t, x) − X h (t, x)|2 ] → 0, t∈[0,T ]

h ↓ 0,

T > 0, x ∈ R N .

It is obvious from the definition of X h that X h (·, ·, ω) : [0, ∞) × R N → R N is continuous for each h ∈ (0, 1] and ω ∈ . Moreover, X h (t, ·, ω) : R N → R N is smooth for each h ∈ (0, 1], t ∈ [0, ∞) and ω ∈ , and ∂ |α| X h (·, ·, ω) : [0, ∞) × R N → R N ∂xα N is continuous for any multi-index α ∈ Z0 and any ω ∈ . Let ϕh : [0, ∞) → [0, ∞) be given by ϕh (0) = 0, ϕh (t) = (n − 1)h, t ∈ ((n − 1)h, nh], n = 1, 2, . . .. Then by Eq. (6.12) we see that

X h (t, x) = x +

d k=0

t

σk (X h (ϕh (s), x)) d B k (s), t 0.

0

Here we use a stochastic integral to simplify notation. However, this equation has a meaning for each ω. N Then by Proposition 6.5.4 we see that for each α ∈ Z0 ∂ |α| X h (t, x) ∂xα |α| d N t ∂σk ∂ |α| ∂ x + (X h (ϕh (s), x)) α X hi (ϕh (s), x) d B k (s) = α i ∂x ∂x ∂x k=0 i=1 0 t |β| d ∂ σk + (X h (ϕh (s), x)) ∂xβ 0 N k=0 β∈Z0 ,2|β||α|

× Pα,β

∂ |γ | i X ((ϕh (s), x); |γ | |α| − 1) ∂xγ h

First we show the following.

d B k (s).

(6.13)

6.5 Smoothness of Solutions to Stochastic Differential Equations

169

i, j

Proposition 6.5.5 Suppose that ak : [0, ∞) × → R, bki : [0, ∞) × → R, k = 0, 1, . . . , d, i, j = 1, . . . , N , are progressively measurable processes such that there exists a K ∈ (0, ∞) for which i, j

|ak (t)| K a.s.

t ∈ [0, ∞), k = 0, 1, . . . , d, i, j = 1, . . . , N .

Also, we assume that Z : [0, ∞) × → R N is an adapted continuous process such that d N

Z i (t) = Z i (0) +

k=0 j=1

t 0

i, j

ak (s)Z j (ϕh (s)) d B k (s) +

d

t 0

k=0

bki (s) d B k (s),

for any t ∈ [0, ∞) and i = 1, . . . , N . Then for any p ∈ (2, ∞) E[ max |Z (t)| p ] t∈[0,T ] 2 N p

E[|Z (0)| ]

p+1

p 1/ p

d + (1 + C p )(1 + T )E

T

1/ p p |bk (s)| ds p

0

k=0

× exp(2 p (d + 1)N p+2 K (1 + C p )(1 + T )) p T ),

T ∈ [0, ∞).

Here C p ∈ (0, ∞) is a constant appearing in Theorem 5.4.1. Proof Let p ∈ (2, ∞) and T > 0. Then we see that E[ max |Z i (t)| p ]1/ p t∈[0,T ]

E[|Z (0)| ] i

N d

+

p 1/ p

k=1 j=1

t p 1/ p i, j j k E max ak (s)Z (ϕh (s)) d B (s) t∈[0,T ]

0

t p 1/ p d i, j j + E max a0 (s)Z (ϕh (s)) ds t∈[0,T ]

j=1

0

k=1

0

t t p 1/ p p 1/ p d i k i + E max bk (s) d B (s) + E max b0 (s) ds t∈[0,T ] t∈[0,T ] E[|Z (0)| ] i

+

d

+

k=1

d N

T

E 0

T

Cp E 0

k=1 j=1

j=1 d

+

p 1/ p

0

i, j |ak (t)Z j (ϕh (t))|2

p/2 1/ p dt

p 1/ p

i, j |a0 (t)Z j (ϕh (t))| dt

Cp E 0

T

p/2 1/ p |bki (t)|2

dt

+E 0

T

p 1/ p |b0i (t)| dt

170

6 Stochastic Differential Equation

E[|Z i (0)| p ]1/ p +

d N

K C p E T p/2−1

d

K E T p−1

+

Cp E T

T

p/2−1 0

k=1

1/ p

s∈[0,t]

s∈[0,t]

max |Z j (s)| p dt

0

j=1 d

T

max |Z j (s)| p dt

0

k=1 j=1

+

1/ p

T

1/ p |bki (t)| p

+E T

dt

T

p−1 0

1/ p |b0i (t)| p

.

dt

Therefore we see that E[ max |Z i (t)| p ]1/ p t∈[0,T ]

T

E[|Z (0)| p ]1/ p + N (d + 1)K (C p + 1)(1 + T )E +

d

(C p + 1)(1 + T )E

T

0

1/ p |bk (t)| p dt

1/ p max |Z (s))| p dt

s∈[0,t]

,

0

k=0

i = 1, . . . , N . This implies that E[ max |Z (s)| p ] s∈[0,t]

E[(N max max |Z i (s)|) p )] N p+1 max E[ max |Z i (s)| p ] s∈[0,t] i=1,...,N s∈[0,t] i=1,...,N T 1/ p p d (C p + 1)(1 + T )E |bk (t)| p dt 2 p N p+1 E[|Z (0)| p ]1/ p + k=0

0

t

+ 2 p N p+1 (d + 1)N K (1 + C p )(1 + T ) 0

E[ max |Z (s))| p ] ds r ∈[0,s]

for any t ∈ [0, T ], T > 0. So by Grownwall’s inequality we have E[ max |Z (s)| p ] s∈[0,t] 2 p N p+1 E[|Z (0)| p ]1/ p +

d

T

N (1 + C p )(1 + T )E

k=0

× exp(2 p (d + 1)N p+2 K (1 + C p )(1 + T ) p t) for all t ∈ [0, T ]. This implies our assertion. Proposition 6.5.6 (1) For any T > 0 and p ∈ (1, ∞)

0

1/ p p |bk (s)| p ds

6.5 Smoothness of Solutions to Stochastic Differential Equations

171

E[ sup |X h (t, x) − x| p ] < ∞.

sup

t∈[0,T ]

h∈(0,1],x∈R N

(2) For any T > 0, p ∈ (2, ∞) and any multi-index α with |α| 1 sup

E

h∈(0,1],x∈R N

α p ∂ sup α X h (t, x) < ∞. ∂x

t∈[0,T ]

Proof Since X h (t, x) − x =

d k=0

t

σk (X h (ϕh (s), x)) d B k (s)

t 0,

0

we have Assertion (1) from Assumption (S) and Theorem " 5.4.1. "N k We show Assertion (2) by induction in |α|. Let K = dk=0 i=1 supx∈R N | ∂σ (x)|. ∂ xi Then in the case that |α| = 1 we see that ∂ X h (t, x) ∂xi N t d ∂σk ∂ ∂ j (X h (ϕh (s), x)) i X h (ϕh (s), x) d B k (s), = ix+ j ∂x ∂ x ∂ x k=0 j=1 0 and so we have our assertion by Proposition 6.5.5. Now let us assume that our assertion is valid for any multi-index α with |α| n, n 1. Now suppose that α is a multi-index such that |α| = n + 1. Then by Eq. (6.13) we see that

=

∂ |α| X h (t, x) ∂xα d N t ∂σk 0

k=0 i=1

+

d k=0

0

t

∂x

(X h (ϕh (s), x)) i

∂ |α| i X (ϕh (s), x) d B k (s) ∂xα h

(α) bh,k (s, x) d B k (s),

where (α) bh,k (t, x) =

N ,2|β||α| β∈Z0

∂ |β| σk (X h (ϕh (t), x))Pα,β ∂xβ

and Pα,β ’s are as in Proposition 6.5.4.

∂ |γ | i X (ϕ (t), x); |γ | |α| − 1 , h ∂xγ h

172

6 Stochastic Differential Equation

Then from the assumption of induction and Assumption (S) we see that for any T > 0 and p ∈ (2, ∞) sup x∈R N ,h∈(0,1]

(α) E[ sup |bh,k (t, x)| p ] < ∞,

k = 0, 1, . . . , d.

t∈[0,T ]

So by Proposition 6.5.4 we see that our assertion is valid for α. This completes the induction. Proposition 6.5.7 (1) For any T > 0 sup E[ sup |X (t, x) − X h (t, x)|2 ] → 0, t∈[0,T ]

x∈R N

h ↓ 0.

(2) For any T > 0 and p ∈ (2, ∞) sup E[ sup |X (t, x) − X h (t, x)| p ] → 0, t∈[0,T ]

x∈R N

h ↓ 0.

Proof Since the proof of Assertion (1) is similar to that of Proposition 6.1.2, we give only a sketch of the proof. Since X (t, x) − X h (t, x) = =

d

d

t

(σk (X (s, x)) − σk (X h (ϕh (s), x))) d B k (s)

k=0 0 t

k=1 0

+ +

t

(σk (X (s, x)) − σk (X (ϕh (s), x))) d B k (s) +

(σ0 (X (s, x)) − σ0 (X (ϕh (s), x))) ds

0

d

t

(σk (X (ϕh (s), x)) − σk (X h (ϕh (s), x))) d B k (s)

k=1 0 t

(σ0 (X (ϕh (s), x)) − σ0 (X h (ϕh (s), x))) ds,

0

we see that E[ sup |X (s, x) − X h (s, x)|2 ] s∈[0,t]

2(4 + t)

d

|σk (X (s, x)) − σk (X (ϕh (s), x))| ds 2

E 0

k=0

+ 2(4 + t)

t

d

Let K ∈ [0, ∞) be given by

|σk (X (ϕh (s), x)) − σk (X h (ϕh (s), x))| ds . 2

E

k=0

t

0

6.5 Smoothness of Solutions to Stochastic Differential Equations

K =

d

sup |σk (x)| +

N k=0 x∈R

d N

173

∂σk sup i (x) . ∂x N

k=0 i=1 x∈R

Then we see that E[ sup |X (s, x) − X h (s, x)|2 ] s∈[0,t]

2(4 + t)(d + 1)K

2

+ 2(4 + t)(d + 1)

0 t

t

E[|X (s, x) − X (ϕh (s), x)|2 ] ds sup E[|X (s, x) − X h (s, x)|2 ] ds.

0 r ∈[0,s]

Since we can easily show that E[|X (s, x) − X (ϕh (s), x)|2 ] h(d + 1)K 2 ,

s ∈ [0, ∞), h ∈ (0, 1], x ∈ R N ,

we have Assertion (1) from Grownwall’s inequality. Note that 1 p−2 1 1 1 + = . 2 p − 1 2p p − 1 p Then we see that E[ sup |X (t, x) − X h (t, x)| p ]1/ p t∈[0,T ]

E[ sup |X (t, x) − X h (t, x)|2 ]1/2( p−1) t∈[0,T ]

× E[( sup |X (t, x) − x| + sup |X h (t, x) − x|)2 p ]( p−2)(2 p( p−1)) . t∈[0,T ]

t∈[0,T ]

So we have Assertion (2) from Assertion (1) and Proposition 6.5.6. Theorem 6.5.1 Taking Assumption (S), for any multi-index α, T > 0 and p ∈ (1, ∞) sup E x∈R N

α p ∂ sup α (X h (t, x) − X h (t, x)) → 0, ∂x

t∈[0,T ]

h, h ↓ 0.

(6.14)

Also, for any R > 0 E

sup

α p ∂ sup α (X h (t, x) − X h (t, x)) → 0, ∂x

x∈R N ,|x|R t∈[0,T ]

h, h ↓ 0.

(6.15)

174

6 Stochastic Differential Equation

In particular, there exists X : [0, ∞) × R N × → R N satisfying the following. (1) For any t ∈ [0, ∞) and ω ∈ , X (t, ·, ω) : R N → R N is smooth and (∂ α /∂ x α ) X (t, x, ω) is continuous in (t, x) ∈ [0, ∞) × R N for any multi-index α. Furthermore, for any T > 0 and p ∈ (1, ∞) sup E

α p ∂ sup α (X (t, x) − x) < ∞. ∂x

t∈[0,T ]

x∈R N

(2) X (t, x) is Ft -measurable for any t ∈ [0, ∞) and x ∈ R N . Moreover, for any x ∈ RN d t X (t, x) = x + σk (X (s, x)) d B k (s), t ∈ [0, ∞). k=0

0

Proof By Proposition 6.5.6 we see that

sup h,h ∈(0,1],x∈R N

|α| p ∂ E sup α (X h (t, x) − X h (t, x)) < ∞ t∈[0,T ] ∂ x

for any T > 0 and p ∈ (2, ∞). Also, by Proposition 6.5.7 we see that for any p ∈ (1, ∞) h, h ↓ 0. sup E[ sup |X h (t, x) − X h (t, x)| p ] → 0, x∈R N

t∈[0,T ]

By Proposition 6.5.3 (2) we see that for any ψ ∈ C0∞ (R N ) and s > 0 |α| p 1/ p ∂ E sup sup α (ψ(x)(X h (t, x) − X h (t, x))) x∈R N t∈[0,T ] ∂ x ⎛

1/ p

C0,α s −(N +|α|)/2 ⎝

E RN

sup |X h (t, x) − X h (t, x)| p

t∈[0,T ]

⎞ |ψ(x)| d x ⎠

+ C1,α, p s ( p−N )/(2 p) N 2|α|+1 C˜ ψ,α ⎛ ⎝ × d x E sup N β∈Z0 ,|β||α|+1

Dψ

Here C˜ ψ,α =

⎞ |β| p 1/ p ∂ ⎠. β (X h (t, x) − X h (t, x)) t∈[0,T ] ∂ x

N β∈Z0 ,|β||α|+1

|β| ∂ ψ sup (x) . β x∈R N ∂ x

Let us take a ψ0 ∈ C0∞ (R N ) such that 0 ψ0 1 and ψ0 (x) = 1, |x| 1, and fix it for a while. Let x0 ∈ R N and let ψ(x) = ψ0 (x − x0 ). Since C˜ ψ,α = C˜ ψ0 ,α , we

6.5 Smoothness of Solutions to Stochastic Differential Equations

175

see that E

α p 1/ p ∂ sup α (X h (t, ·) − X h (t, ·))(x0 ) ∂x

t∈[0,T ]

C0,α s −(N +|α|)/2 |Dψ0 | sup E[ sup |X h (t, x) − X h (t, x)| p ]1/ p x∈R N

+ C1,α, p s

×

N β∈Z0 ,|β||α|+1

t∈[0,T ]

N2 C˜ ψ0 ,α |Dψ0 | |β| p 1/ p ∂ sup E sup β (X h (t, x) − X h (t, x)) . t∈[0,T ] ∂ x x∈R N

( p−N )/(2 p)

|α|+1

Here |Dψ0 | is the volume of Dψ0 . Note that the constant in the above inequality is independent of x0 . So letting h, h ↓ 0, and then letting s ↓ 0, we obtain Eq. (6.14). Let ψ(x) = ψ0 ( n1 x), n 1. Then we see that E

$$E\Big[\sup_{x \in \mathbb{R}^N,\ |x| \le n}\ \sup_{t \in [0,T]}\Big|\frac{\partial^{|\alpha|}}{\partial x^{\alpha}}\big(X_h(t,x) - X_{h'}(t,x)\big)\Big|^{p}\Big]^{1/p} \le E\Big[\sup_{x \in \mathbb{R}^N}\sup_{t \in [0,T]}\Big|\frac{\partial^{|\alpha|}}{\partial x^{\alpha}}\big(\psi(x)(X_h(t,x) - X_{h'}(t,x))\big)\Big|^{p}\Big]^{1/p}$$
$$\le C_{0,\alpha}\, s^{-(N+|\alpha|)/2}\,|D_{\psi}|\,\sup_{x \in \mathbb{R}^N} E\Big[\sup_{t \in [0,T]}|X_h(t,x) - X_{h'}(t,x)|^{p}\Big]^{1/p}$$
$$+\ C_{1,\alpha,p}\, s^{(p-N)/(2p)}\, N 2^{|\alpha|+1}\, \tilde C_{\psi_0,\alpha}\,|D_{\psi}| \sum_{\beta \in \mathbb{Z}_{\ge 0}^N,\ |\beta| \le |\alpha|+1}\ \sup_{x \in \mathbb{R}^N} E\Big[\sup_{t \in [0,T]}\Big|\frac{\partial^{|\beta|}}{\partial x^{\beta}}\big(X_h(t,x) - X_{h'}(t,x)\big)\Big|^{p}\Big]^{1/p}.$$

So by a similar argument to the proof of Eq. (6.14) we see that
$$E\Big[\sup_{x \in \mathbb{R}^N,\ |x| \le n}\ \sup_{t \in [0,T]}\Big|\frac{\partial^{|\alpha|}}{\partial x^{\alpha}}\big(X_h(t,x) - X_{h'}(t,x)\big)\Big|^{p}\Big] \to 0, \qquad h, h' \downarrow 0.$$
So we obtain Eq. (6.15). By Eq. (6.15) we see that for each n ≥ 1 there is an h_n ∈ (0, 1/n) such that
$$\sum_{\alpha \in \mathbb{Z}_{\ge 0}^N,\ |\alpha| \le n} E\Big[\sup_{x \in \mathbb{R}^N,\ |x| \le n}\ \sup_{t \in [0,n]}\Big|\frac{\partial^{|\alpha|}}{\partial x^{\alpha}}\big(X_h(t,x) - X_{h'}(t,x)\big)\Big|^{2^n}\Big] \le 2^{-n}, \qquad h, h' \in (0, h_n].$$

Let h̃_n = min{h_1, …, h_n} and let Ω_0 ∈ F be given by
$$\Omega_0 = \Big\{\omega \in \Omega;\ \sum_{n=1}^{\infty}\ \sum_{\alpha \in \mathbb{Z}_{\ge 0}^N,\ |\alpha| \le n}\ \sup_{x \in \mathbb{R}^N,\ |x| \le n}\ \sup_{t \in [0,n]}\Big|\frac{\partial^{|\alpha|}}{\partial x^{\alpha}}\big(X_{\tilde h_{n+1}}(t,x,\omega) - X_{\tilde h_n}(t,x,\omega)\big)\Big| < \infty\Big\}.$$
Then we see that P(Ω_0) = 1. Moreover, if we let X : [0, ∞) × R^N × Ω → R^N be given by
$$X(t,x,\omega) = \begin{cases} \lim_{n \to \infty} X_{\tilde h_n}(t,x,\omega), & \omega \in \Omega_0,\\ 0, & \text{otherwise},\end{cases} \qquad t \in [0,\infty),\ x \in \mathbb{R}^N,$$
then we easily see that X satisfies our conditions.

Let C_b^∞(R^N) be the set of all smooth functions f defined on R^N such that
$$\sup_{x \in \mathbb{R}^N}\Big|\frac{\partial^{|\alpha|}}{\partial x^{\alpha}} f(x)\Big| < \infty$$
for any multi-index α ∈ Z_{≥0}^N. Then we have the following as a corollary to the above theorem.

Corollary 6.5.1 Under Assumption (S), if f ∈ C_b^∞(R^N; R), then P_t f ∈ C_b^∞(R^N; R) for any t ∈ [0, ∞). Moreover, (t, x) ↦ (P_t f)(x) is smooth in (t, x) ∈ (0, ∞) × R^N, and
$$\frac{\partial}{\partial t}(P_t f)(x) = (L P_t f)(x), \qquad (t,x) \in (0,\infty) \times \mathbb{R}^N.$$

Proof Note that
$$P_t f(x) = E[f(X(t,x))], \qquad (t,x) \in [0,\infty) \times \mathbb{R}^N.$$
So by Theorem 6.5.1 we see that for any multi-index α ∈ Z_{≥0}^N
$$\frac{\partial^{|\alpha|}}{\partial x^{\alpha}}(P_t f)(x) = E\Big[\frac{\partial^{|\alpha|}}{\partial x^{\alpha}} f(X(t,x))\Big].$$
Therefore we see that P_t f ∈ C_b^∞(R^N; R). Also, by Itô's formula
$$P_t f(x) = f(x) + \int_0^t (P_s L f)(x)\,ds.$$
Thus we have
$$\frac{\partial}{\partial t}(P_t f)(x) = (P_t L f)(x).$$
So we see that (P_t f)(x) is smooth in (t, x) ∈ (0, ∞) × R^N. The last equation follows from the argument at the end of Sect. 6.4.
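As a numerical sanity check of the corollary, take the simplest special case X(t, x) = x + B(t) (one driving Brownian motion, unit diffusion coefficient, no drift — a made-up choice purely for illustration), so that P_t f(x) = E[f(x + B(t))] is the heat semigroup and L = ½ d²/dx². A minimal sketch comparing ∂_t P_t f with L P_t f by finite differences:

```python
import math

def heat_semigroup(f, t, x, n=4000, span=10.0):
    # P_t f(x) = E[f(x + B(t))]: numerical integration against the N(0, t) law
    s = math.sqrt(t)
    h = 2.0 * span / n
    total = 0.0
    for k in range(n + 1):
        y = -span + k * h
        w = h * (0.5 if k in (0, n) else 1.0)  # trapezoid weights
        total += w * f(x + s * y) * math.exp(-0.5 * y * y) / math.sqrt(2.0 * math.pi)
    return total

f, t, x = math.sin, 1.0, 0.7
eps = 1e-4
# d/dt P_t f(x) by a central difference in t
dPdt = (heat_semigroup(f, t + eps, x) - heat_semigroup(f, t - eps, x)) / (2 * eps)
# L P_t f(x) = (1/2) d^2/dx^2 P_t f(x) by a central difference in x
hx = 1e-3
LPtf = 0.5 * (heat_semigroup(f, t, x + hx) - 2 * heat_semigroup(f, t, x)
              + heat_semigroup(f, t, x - hx)) / (hx * hx)
assert abs(dPdt - LPtf) < 1e-4  # the Kolmogorov equation d/dt P_t f = L P_t f
```

For f = sin the closed form is P_t sin(x) = e^{−t/2} sin x, so both finite-difference values should be close to −½ e^{−t/2} sin x.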

Chapter 7

Application to Finance

7.1 Basic Situation and Dynamical Portfolio Strategy

In this section we discuss option pricing theory in the following situation. We assume that 1 + N kinds of securities, the 0-th through the N-th, are traded in an exchange market, and we regard the market prices of the securities as random variables. As the mathematical setting, let a standard filtered probability space (Ω, F, P, {F_t}_{t∈[0,∞)}) be given; F_t, t ∈ [0, ∞), expresses the information common to all market participants at time t. We take the time parameter to be continuous and assume that we can trade securities in the market at any time t ∈ [0, ∞). We denote by S^i(t) the price of the i-th security, i = 0, 1, …, N, at time t. Since everybody can observe the price S^i(t) at time t, we regard S^i(t) as an F_t-measurable random variable for each i = 0, 1, …, N and t ∈ [0, ∞). Prices are not required to be integers; they may be arbitrary real numbers. Also, we assume that S^i, i = 0, 1, …, N, are continuous semi-martingales. Furthermore, we assume the following.

(1) No dividend is paid to security holders. Mathematical models and discussions become complicated when dividends exist, so we do not handle such cases in this book.

(2) Anybody can buy or sell the i-th security, i = 0, 1, …, N, at any time t ∈ [0, ∞) at the price S^i(t), in any volume, which may be any real number. In reality both sellers and buyers are needed to trade securities: if we try to sell a large amount of a security, we have to offer a low price to find buyers, which affects the market price. We ignore such effects in our model.

(3) We can hold any amount of each security, which may be any real number, without any constraint or cost. In particular, we can hold a negative amount without cost, so short selling is possible without any constraint or cost.

This last assumption is unnatural, because the total amount of any security is fixed, so one cannot sell a security beyond the total amount; moreover, in the real world one has to pay some cost for short selling. However, when a financial institution handles derivatives, the amount of securities it has to trade to hedge each derivative is not very large, and if the institution holds securities for other purposes, no short selling need take place at all. For these reasons, Assumption (3) is not so unnatural as long as we consider trading for hedging each derivative separately. Of course, the total amount of securities that a financial institution trades can become very large, so one also has to consider the risk on the institution's total assets; this requires a quite different approach, and we do not handle that problem in this book.

Now let us consider what we can do through trading in the market. Take a strategy in which we hold ξ^i(t) units of the i-th security at time t for each i = 0, 1, …, N. Since a list of assets which a person holds is called a portfolio, we call such a strategy a portfolio strategy ξ = (ξ^0, ξ^1, …, ξ^N). In principle ξ^i(t) need not be F_t-measurable, since we may decide the amount ξ^i(t) by coin tossing; however, for simplicity we assume in this book that ξ^i(t) is F_t-measurable. First we consider the case that ξ^i ∈ L_0, i = 0, 1, …, N. Then there are n ≥ 1, 0 = t_0 < t_1 < ⋯ < t_n < ∞, and bounded F_{t_{k-1}}-measurable random variables η^i_k, k = 1, …, n, i = 0, …, N, such that
$$\xi^i(t) = \sum_{k=1}^n 1_{(t_{k-1}, t_k]}(t)\,\eta^i_k, \qquad t \in [0,\infty),\ i = 0,1,\dots,N.$$

In this strategy, we change the portfolio only at the times t_k, k = 0, 1, …, n, and the amount of cash c_k (yen) which we gain at time t_k is given by
$$c_k = -\sum_{i=0}^N (\eta^i_{k+1} - \eta^i_k)\,S^i(t_k),$$
where η^i_0 = 0 and η^i_{n+1} = 0, i = 0, 1, …, N. If c_k is negative, this means that we pay −c_k (yen). Note that
$$\sum_{k=0}^m c_k = \sum_{i=0}^N\Big(\sum_{k=0}^m \eta^i_k S^i(t_k) - \sum_{k=1}^{m+1}\eta^i_k S^i(t_{k-1})\Big) = \sum_{i=0}^N \sum_{k=1}^m \eta^i_k\,\big(S^i(t_k) - S^i(t_{k-1})\big) - \sum_{i=0}^N \eta^i_{m+1} S^i(t_m),$$
m = 0, 1, …, n. So letting t_{n+1} = ∞, we see that the total gain G(t) of cash up to time t ∈ [t_m, t_{m+1}), m = 0, 1, …, n, is given by
$$G(t) = \sum_{k=0}^m c_k = \sum_{i=0}^N \int_0^t \xi^i(s)\,dS^i(s) - \sum_{i=0}^N\big(\eta^i_{m+1}(S^i(t) - S^i(t_m)) + \eta^i_{m+1}S^i(t_m)\big) = \sum_{i=0}^N \int_0^t \xi^i(s)\,dS^i(s) - \sum_{i=0}^N \xi^i(t+)\,S^i(t).$$
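The summation-by-parts identity for the cumulative cash flows can be checked directly. A minimal numerical sketch with made-up prices and holdings (all values hypothetical):

```python
import random

random.seed(0)
n, N = 6, 2  # rebalancing times t_0 < ... < t_n and securities 0..N (made up)

# hypothetical prices S[i][k] = S^i(t_k) and holdings eta[i][k] = eta_k^i
S = [[1.0 + random.random() for _ in range(n + 1)] for _ in range(N + 1)]
eta = [[0.0] + [random.gauss(0.0, 1.0) for _ in range(n)] + [0.0]
       for _ in range(N + 1)]  # eta_0^i = eta_{n+1}^i = 0

for m in range(n + 1):
    # direct sum of the cash flows c_k = -sum_i (eta_{k+1}^i - eta_k^i) S^i(t_k)
    lhs = sum(-sum((eta[i][k + 1] - eta[i][k]) * S[i][k] for i in range(N + 1))
              for k in range(m + 1))
    # summation by parts: stochastic-integral term minus liquidation term
    rhs = (sum(eta[i][k] * (S[i][k] - S[i][k - 1])
               for i in range(N + 1) for k in range(1, m + 1))
           - sum(eta[i][m + 1] * S[i][m] for i in range(N + 1)))
    assert abs(lhs - rhs) < 1e-10
```

For m = n the liquidation term vanishes (η^i_{n+1} = 0), so the total cash gained is exactly the discrete stochastic integral, which is what the continuous-time formula for G(t) expresses.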

Since G(t) is just a simple summation of cash amounts received at distinct times, it does not by itself have an important economic meaning. Now let us assume the following.

(4) P(min_{t∈[0,T]} S^0(t) > 0) = 1 for all T > 0.

Then we see by Corollary 4.3.1 that S^0(t)^{-1}, t ∈ [0, ∞), is a continuous semi-martingale. We use the 0-th security price as a numeraire; that is, we measure security prices and cash at time t relative to the 0-th security price S^0(t). Let S̃^i(t) = S^0(t)^{-1}S^i(t), i = 1, …, N, t ∈ [0, ∞). Then the discounted value of the cash which we gain at time t_k by a change of portfolio is given by
$$S^0(t_k)^{-1}c_k = -(\eta^0_{k+1} - \eta^0_k) - \sum_{i=1}^N (\eta^i_{k+1} - \eta^i_k)\,\tilde S^i(t_k).$$
So, by an argument similar to that for G(t), the total discounted gain G̃(t) up to time t ∈ [0, ∞) is given by
$$\tilde G(t) = \sum_{i=1}^N \int_0^t \xi^i(s)\,d\tilde S^i(s) - \xi^0(t+) - \sum_{i=1}^N \xi^i(t+)\,\tilde S^i(t).$$
Let us consider portfolio strategies such that no gain or loss takes place by the change of portfolio at any time t ∈ (0, ∞). Such a strategy is called a self-financing strategy. A portfolio strategy is self-financing if and only if G̃(t) = G̃(0), t ∈ [0, ∞). In this case we see that
$$\xi^0(t+) = \xi^0(0+) + \sum_{i=1}^N \xi^i(0+)\tilde S^i(0) - \sum_{i=1}^N \xi^i(t+)\tilde S^i(t) + \sum_{i=1}^N \int_0^t \xi^i(s)\,d\tilde S^i(s), \qquad t \in (0,\infty).$$
So we see that ξ = (ξ^0, …, ξ^N) is a self-financing strategy if and only if

$$\xi^0(t) = \xi^0(0+) + \sum_{i=1}^N \xi^i(0+)\tilde S^i(0) - \sum_{i=1}^N \xi^i(t)\tilde S^i(t) + \sum_{i=1}^N \int_0^t \xi^i(s)\,d\tilde S^i(s), \qquad t \in (0,\infty). \tag{7.1}$$
The gain c_0 = G(0) at time 0 is given by
$$c_0 = -S^0(0)\Big(\xi^0(0+) + \sum_{i=1}^N \xi^i(0+)\tilde S^i(0)\Big).$$

If we take a self-financing strategy ξ = (ξ^0, …, ξ^N) and clear the portfolio at time T, then we gain V(T) (yen) given by
$$V(T) = \sum_{i=0}^N \xi^i(T)S^i(T) = S^0(T)\Big(\xi^0(T) + \sum_{i=1}^N \xi^i(T)\tilde S^i(T)\Big) = S^0(T)\Big(-S^0(0)^{-1}c_0 + \sum_{i=1}^N \int_0^T \xi^i(t)\,d\tilde S^i(t)\Big). \tag{7.2}$$
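Equations (7.1)–(7.2) have exact discrete analogues: if the 0-th holding is defined from the others by the discrete version of Eq. (7.1), every rebalancing is costless and the terminal value matches Eq. (7.2). A sketch (all prices and holdings made up):

```python
import random

random.seed(3)
n, N = 8, 2
# made-up positive prices S[i][k] = S^i(t_k); the 0-th security is the numeraire
S = [[(1.0 + 0.01 * i) * (1.0 + 0.1 * random.random()) ** k for k in range(n + 1)]
     for i in range(N + 1)]
St = [[S[i][k] / S[0][k] for k in range(n + 1)] for i in range(N + 1)]  # discounted

# arbitrary holdings eta[i][k] of security i on (t_{k-1}, t_k], i = 1..N
eta = [[0.0] * (n + 1) for _ in range(N + 1)]
for i in range(1, N + 1):
    for k in range(1, n + 1):
        eta[i][k] = random.gauss(0.0, 1.0)
eta[0][1] = random.gauss(0.0, 1.0)  # the initial 0-th holding is a free choice

def gain(i, k):
    # discounted gain sum_{l<=k} eta^i_l (S~^i(t_l) - S~^i(t_{l-1}))
    return sum(eta[i][l] * (St[i][l] - St[i][l - 1]) for l in range(1, k + 1))

# discrete analogue of Eq. (7.1): eta^0 is determined by the other holdings
for k in range(2, n + 1):
    eta[0][k] = (eta[0][1] + sum(eta[i][1] * St[i][0] for i in range(1, N + 1))
                 - sum(eta[i][k] * St[i][k - 1] for i in range(1, N + 1))
                 + sum(gain(i, k - 1) for i in range(1, N + 1)))

# self-financing: rebalancing at each t_k, k = 1..n-1, costs nothing
for k in range(1, n):
    cost = sum((eta[i][k + 1] - eta[i][k]) * S[i][k] for i in range(N + 1))
    assert abs(cost) < 1e-9

# Eq. (7.2): clearing at t_n, with c_0 = -sum_i eta^i_1 S^i(t_0)
c0 = -sum(eta[i][1] * S[i][0] for i in range(N + 1))
V = sum(eta[i][n] * S[i][n] for i in range(N + 1))
rhs = S[0][n] * (-c0 / S[0][0] + sum(gain(i, n) for i in range(1, N + 1)))
assert abs(V - rhs) < 1e-9
```

This illustrates the point made below: once c and ξ^1, …, ξ^N are given, ξ^0 and V(T) are completely determined.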

We have assumed up to now that ξ^i ∈ L_0, i = 0, 1, …, N. However, Eq. (7.2) is well-defined as soon as ξ^i is an S̃^i-integrable progressively measurable process for each i = 1, …, N. For this reason, and by the argument in Chap. 4, Sect. 4.2, we extend the concept of a self-financing strategy as follows. Let c be an F_0-measurable random variable, let ξ^i, i = 1, …, N, be S̃^i-integrable progressively measurable processes, and let ξ^0 be the continuous semi-martingale given by
$$\xi^0(t) = -S^0(0)^{-1}c - \sum_{i=1}^N \xi^i(t)\tilde S^i(t) + \sum_{i=1}^N \int_0^t \xi^i(s)\,d\tilde S^i(s), \qquad t \in (0,\infty).$$

Then it is obvious that Eq. (7.1) is satisfied. We regard (c, ξ^0, …, ξ^N) as an (extended) self-financing portfolio strategy. If we take this portfolio strategy, we gain c (yen) at time 0, and no cash-flow appears at any time t > 0.

Let T > 0. If we take this strategy (c, ξ^0, …, ξ^N) up to time T and clear the portfolio at time T, then we gain cash V(T) (yen) at time T given by
$$V(T) = \sum_{i=0}^N \xi^i(T)S^i(T) = S^0(T)\Big(-S^0(0)^{-1}c + \sum_{i=1}^N \int_0^T \xi^i(t)\,d\tilde S^i(t)\Big). \tag{7.3}$$
As long as we consider only self-financing strategies, once an F_0-measurable random variable c and S̃^i-integrable progressively measurable processes ξ^i, i = 1, …, N, are given, ξ^0 is determined and V(T) is expressed in terms of them. Now we consider a concrete model.

7.2 Black–Scholes Model

Letting N = 1, we consider two kinds of securities. Let B = {B(t); t ∈ [0, ∞)} be a 1-dimensional {F_t}_{t∈[0,∞)}-Brownian motion. Let r > 0, σ > 0 and μ ∈ R, and let us assume that the price processes S^i, i = 0, 1, of the i-th security are given by the following stochastic differential equations:
$$dS^0(t) = r S^0(t)\,dt, \qquad dS^1(t) = \sigma S^1(t)\,dB(t) + \mu S^1(t)\,dt,$$
$$S^0(0) = S_0^0 > 0, \qquad S^1(0) = S_0^1 > 0.$$
Here S_0^0 and S_0^1 are positive numbers. The 0-th security is a riskless security, and we can regard r as an interest rate. The 1-st security is a risky security such as a stock.

Now let T > 0 and K > 0, and let us assume that the following derivative is traded in a derivative market: "the right to sell one unit of the 1-st security at time T at the price K (yen)." Strictly speaking, this derivative is a contract between a derivative seller and a derivative buyer such that, if the derivative buyer exercises the right at time T, the derivative seller has to buy one unit of the 1-st security at K (yen) from him. The price of this derivative is the value of this contract. If S^1(T) > K at time T, the derivative buyer can sell the 1-st security at a price higher than K, so it would be pointless for him to exercise the right. On the other hand, if S^1(T) < K at time T, the derivative buyer can buy the 1-st security at S^1(T) in the security market and sell it to the derivative seller at K, gaining the profit K − S^1(T); so the derivative buyer will naturally exercise the right in this case. Let Y = (K − S^1(T)) ∨ 0. Then we may regard this derivative as a contract under which the derivative seller pays Y to the derivative buyer at time T.

Let G_t = σ{σ{B(s); s ≤ t} ∪ N_0}, t ≥ 0. Then we see that G_t ⊂ F_t, t ∈ [0, ∞). Note that S^0(T)^{-1}Y is a G_T-measurable bounded random variable in this case. Now let us consider the price of this derivative from the side of the derivative seller. If he does nothing up to time T, he bears the risk of a payment depending on the price S^1(T). We consider how he can hedge this risk by trading securities in the security market. It is easy to see that
$$S^0(t) = S_0^0 \exp(rt)$$

and
$$S^1(t) = S_0^1 \exp\Big(\sigma B(t) + \Big(\mu - \frac{\sigma^2}{2}\Big)t\Big).$$
Therefore we see that
$$\tilde S^1(t) = \frac{S_0^1}{S_0^0}\exp\Big(\sigma B(t) + \Big(\mu - r - \frac{\sigma^2}{2}\Big)t\Big).$$
Let c be a real number and ξ^1 an S̃^1-integrable progressively measurable process, and let
$$\xi^0(t) = -S^0(0)^{-1}c - \xi^1(t)\tilde S^1(t) + \int_0^t \xi^1(s)\,d\tilde S^1(s), \qquad t \in (0,\infty).$$

If he takes a self-financing portfolio strategy such that he holds ξ^i(t) units of the i-th security at time t ∈ (0, ∞), i = 0, 1, then he gains c at time 0 and there is no cash-flow at any time t ∈ (0, ∞), as stated in the previous section. If he takes this portfolio strategy and clears the portfolio at time T, then he gains c at time 0 and V(T) at time T; moreover, there is no cash flow at any time t ∈ (0, T), and no profit or loss takes place after time T because he no longer holds a portfolio. Here
$$V(T) = S^0(T)\Big(-S^0(0)^{-1}c + \int_0^T \xi^1(t)\,d\tilde S^1(t)\Big).$$
Suppose that
$$Y = V(T) = S^0(T)\Big(-S^0(0)^{-1}c + \int_0^T \xi^1(t)\,d\tilde S^1(t)\Big).$$
If the derivative seller takes this self-financing portfolio strategy, takes −c (yen) from the derivative buyer at time 0, and gives Y (yen) to the derivative buyer at time T, then the cash-flow is 0 at all times in [0, ∞). That is, if the derivative seller receives −c, the initial cost of the portfolio strategy, as the price of this derivative, then he can replicate the derivative by trading securities in the market, and so he incurs no loss by selling this derivative. We call such a self-financing portfolio strategy a replication strategy for the derivative Y at maturity T, and we call −c the replication cost. Now let
$$\tilde B(t) = B(t) + \sigma^{-1}(\mu - r)(t \wedge T), \qquad t \in [0,\infty).$$
Then we see that

$$\tilde S^1(t) = \frac{S_0^1}{S_0^0}\exp\Big(\sigma\tilde B(t) - \frac{\sigma^2}{2}t\Big) = \frac{S_0^1}{S_0^0}\Big(1 + \int_0^t \sigma\exp\Big(\sigma\tilde B(s) - \frac{\sigma^2}{2}s\Big)\,d\tilde B(s)\Big) = \tilde S^1(0) + \int_0^t \sigma\tilde S^1(s)\,d\tilde B(s), \qquad t \in [0,T].$$
Let M(t) = −σ^{-1}(μ − r)B(t), t ∈ [0, ∞). If we define a probability measure Q on (Ω, F) by
$$Q(A) = E[\mathcal{E}(M)(T), A] = E\Big[\exp\Big(-\sigma^{-1}(\mu - r)B(T) - \frac{(\mu - r)^2}{2\sigma^2}T\Big),\ A\Big], \qquad A \in \mathcal{F},$$
then we see by Theorem 5.3.1 that B̃ is an F_t-Brownian motion under Q. So S̃^1(t ∧ T) is an F_t-continuous local martingale under Q. It is obvious that G_t = σ{σ{B̃(s); s ≤ t} ∪ N_0}. Since E^Q[(S^0(T)^{-1}Y)^2] < ∞, we see by Theorem 5.5.1 that there is a G_t-progressively measurable process η such that
$$E^Q[S^0(T)^{-1}Y\,|\,\mathcal{F}_t] = E^Q[S^0(T)^{-1}Y] + \int_0^t \eta(s)\,d\tilde B(s), \qquad t \in [0,T],$$
and
$$E^Q\Big[\int_0^T \eta(t)^2\,dt\Big] < \infty.$$

Let c = −S^0(0)E^Q[S^0(T)^{-1}Y], let ξ^1(t) = 1_{[0,T]}(t)(σS̃^1(t))^{-1}η(t), t ∈ (0, ∞), and let
$$\xi^0(t) = -S^0(0)^{-1}c - \xi^1(t)\tilde S^1(t) + \int_0^t \xi^1(s)\,d\tilde S^1(s), \qquad t \in (0,\infty).$$
Then the portfolio strategy (c, ξ) is self-financing and
$$V(T) = S^0(T)\Big(-S^0(0)^{-1}c + \int_0^T \xi^1(t)\,d\tilde S^1(t)\Big) = S^0(T)\Big(E^Q[S^0(T)^{-1}Y] + \int_0^T \eta(s)\,d\tilde B(s)\Big) = S^0(T)\,E^Q[S^0(T)^{-1}Y\,|\,\mathcal{F}_T] = Y.$$
So S^0(0)E^Q[S^0(T)^{-1}Y] is the replication cost for Y, and it is the price of this derivative. This is the conclusion reached by Black, Scholes and Merton.

However, there is a serious problem, as follows. We used Theorem 5.5.1 in the above argument. But if we use Theorem 5.6.1, we see that for any c ∈ R there is a B̃-integrable progressively measurable process η̂ such that
$$S^0(T)^{-1}Y + S^0(0)^{-1}c = \int_0^T \hat\eta(t)\,d\tilde B(t).$$

So if we let ξ̂^1(t) = (σS̃^1(t))^{-1}η̂(t), t ∈ [0, ∞), and let
$$\hat\xi^0(t) = -S^0(0)^{-1}c - \hat\xi^1(t)\tilde S^1(t) + \int_0^t \hat\xi^1(s)\,d\tilde S^1(s), \qquad t \in [0,\infty),$$
then (c, ξ̂^0, ξ̂^1) is a self-financing portfolio strategy and V(T) = Y. So we would conclude that at whatever price a derivative seller sells, he can find a self-financing strategy whose replication cost is less than that price. This is a strange conclusion, and so we consider that the feasible self-financing strategies must be restricted.

Many paradoxes have appeared in the history of probability theory. One of them is the following. Consider a gambling game of coin tossing: we bet some amount of money and toss a coin; if the head appears, we receive twice the amount of money which we bet, and if the tail appears, we lose all the money which we bet. In this game we can take the following strategy: we first bet one thousand yen, we bet two thousand yen the next time, and so on, betting 2^n thousand yen on the n-th toss until a head appears. Since a head eventually appears, as long as this game is fair we always gain one thousand yen. So there is a betting strategy by which we always win in this gambling game. However, we understand intuitively that we cannot follow such a strategy. Historically many reasons have been given for this; one of them is the following. If the tail appears 30 times in a row, we have to bet 2^30 thousand yen the next time, which is more than one trillion yen. If we lose 50 times in a row, the amount we have to bet exceeds the total assets in the world. So we cannot continue this gambling game. In mathematical finance, many people adopt this idea.

If we take a self-financing strategy (c, ξ) and clear our portfolio at time t ∈ (0, T), then the cash we gain is given by
$$\xi^0(t)S^0(t) + \xi^1(t)S^1(t) = S^0(t)\Big(-S^0(0)^{-1}c + \int_0^t \xi^1(s)\,d\tilde S^1(s)\Big).$$
If this value is negative, short selling is taking place, because the prices of securities are positive in our model; so if we have to clear our portfolio at time t, we will be left with debt. We consider short selling to be limited, so there must be an a > 0 such that
$$\xi^0(t)S^0(t) + \xi^1(t)S^1(t) \ge -a \qquad \text{for all } t \in [0,T].$$
Based on this idea, we give the following definition.

Definition 7.2.1 We say that (c, (ξ^0, ξ^1)) is an admissible self-financing strategy if the following two conditions are satisfied.

(i) c ∈ R and ξ^1 is an S̃^1-integrable progressively measurable process, and
$$\xi^0(t) = -S^0(0)^{-1}c - \xi^1(t)\tilde S^1(t) + \int_0^t \xi^1(s)\,d\tilde S^1(s), \qquad t \in [0,\infty).$$
(ii) For any T > 0 there exists an a > 0 such that
$$\xi^0(t)S^0(t) + \xi^1(t)S^1(t) \ge -aS^0(t), \qquad t \in [0,T].$$
Condition (ii) is slightly different from the explanation so far, but we adopt this definition and continue the argument.

Suppose that (ĉ, (ξ̂^0, ξ̂^1)) is an admissible self-financing portfolio strategy which replicates Y at time T. Let
$$N(t) = \int_0^{t \wedge T} \hat\xi^1(s)\,d\tilde S^1(s), \qquad t \in [0,\infty).$$
Then N is a continuous local martingale under Q, and
$$N(t) = S^0(0)^{-1}\hat c + \hat\xi^0(t) + \hat\xi^1(t)\tilde S^1(t) \ge S^0(0)^{-1}\hat c - a, \qquad t \in [0,T].$$

So by Proposition 5.3.1 we see that N is a supermartingale under Q, and hence E^Q[N(T)] ≤ E^Q[N(0)] = 0. Therefore we see that
$$S^0(0)E^Q[S^0(T)^{-1}Y] = E^Q\Big[-\hat c + S^0(0)\int_0^T \hat\xi^1(s)\,d\tilde S^1(s)\Big] \le -\hat c.$$
So we see that the initial cost −ĉ is greater than or equal to S^0(0)E^Q[S^0(T)^{-1}Y].

On the other hand, since S^0(T)^{-1}Y is bounded, there is an a > 0 such that −a ≤ S^0(T)^{-1}Y ≤ a. Let (c, ξ) be the self-financing portfolio strategy obtained above by following the idea of Black, Scholes and Merton. Then we see that
$$-a \le E^Q[S^0(T)^{-1}Y\,|\,\mathcal{F}_t] = -S^0(0)^{-1}c + \int_0^t \xi^1(s)\,d\tilde S^1(s),$$
and so we see that
$$\xi^0(t)S^0(t) + \xi^1(t)S^1(t) = S^0(t)\Big(-S^0(0)^{-1}c + \int_0^t \xi^1(s)\,d\tilde S^1(s)\Big) \ge -aS^0(t), \qquad t \in (0,T].$$
So (c, ξ) is admissible. We can justify Black, Scholes and Merton's idea in this way.
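For the put Y = (K − S^1(T)) ∨ 0, the price S^0(0)E^Q[S^0(T)^{-1}Y] obtained above has a closed form (the Black–Scholes put formula), and it can be checked by Monte Carlo simulation under Q, using S^1(T) = S^1_0 exp(σB̃(T) + (r − σ²/2)T) with B̃(T) ~ N(0, T) under Q. A sketch with made-up parameters:

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_put(s0, k, r, sigma, t):
    # closed-form price of Y = (K - S^1(T)) v 0 under the measure Q
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return k * math.exp(-r * t) * norm_cdf(-d2) - s0 * norm_cdf(-d1)

random.seed(42)
s0, k, r, sigma, t, n = 100.0, 100.0, 0.05, 0.2, 1.0, 200000
disc = math.exp(-r * t)  # S^0(0)/S^0(T) with S^0(t) = S^0(0) e^{rt}
total = 0.0
for _ in range(n):
    bt = random.gauss(0.0, math.sqrt(t))       # B~(T) under Q
    st = s0 * math.exp(sigma * bt + (r - 0.5 * sigma ** 2) * t)
    total += disc * max(k - st, 0.0)
mc_price = total / n
assert abs(mc_price - bs_put(s0, k, r, sigma, t)) < 0.15
```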

Now let us consider the price of this derivative from the side of the derivative buyer. We may regard him as the seller of a derivative contract under which he pays −Y (yen) at time T. Since −S^0(T)^{-1}Y is a G_T-measurable bounded random variable, we see that if he receives −S^0(0)E^Q[S^0(T)^{-1}Y] (yen) at time 0, then he can replicate −Y at time T by an admissible self-financing portfolio strategy. So we conclude by this argument that the price of this derivative is S^0(0)E^Q[S^0(T)^{-1}Y] (yen).

Remark Let us consider a general derivative contract under which the seller pays Y (yen) to the buyer at time T.
(1) As long as Y is G_T-measurable and S^0(T)^{-1}Y is bounded, the price of this derivative is S^0(0)E^Q[S^0(T)^{-1}Y] (yen).
(2) If P(S^0(T)^{-1}Y < −a) > 0 for every a > 0, then there is no admissible self-financing portfolio strategy which replicates Y at time T. For this reason the definition of admissibility may be too strong; however, no better definition of "admissible" is known so far. When finance theory is applied, people in financial institutions take the price of the derivative to be S^0(0)E^Q[S^0(T)^{-1}Y] even if S^0(T)^{-1}Y is not bounded.
(3) The following "suicide strategy" is an admissible self-financing strategy. By Theorem 5.6.1 we see that there is a progressively measurable process η such that
$$\int_0^T \eta(t)\,d\tilde B(t) = -S^0(T)^{-1}.$$

Let
$$Z(t) = \int_0^{t \wedge T} \eta(s)\,d\tilde B(s), \qquad t \in [0,\infty).$$
Then Z is a continuous semi-martingale such that Z(0) = 0 and Z(T) = −S^0(T)^{-1}. Let
$$\tau = \min\{t \in [0,\infty);\ Z(t) = -S^0(t)^{-1}\}.$$
Then τ is a stopping time such that τ ≤ T. Also, we see that Z^τ(t) ≥ −S^0(t)^{-1}, t ∈ [0,T]. Let ξ^1(t) = 1_{[0,τ]}(t)(σS̃^1(t))^{-1}η(t), and
$$\xi^0(t) = -\xi^1(t)\tilde S^1(t) + \int_0^t \xi^1(s)\,d\tilde S^1(s), \qquad t \in (0,T].$$
Then
$$\xi^0(t)S^0(t) + \xi^1(t)S^1(t) = S^0(t)\int_0^t \xi^1(s)\,d\tilde S^1(s) = S^0(t)Z^\tau(t) \ge -1, \qquad t \in [0,T].$$

So (0, (ξ^0, ξ^1)) is an admissible self-financing portfolio strategy with initial cost 0. When we clear this portfolio at time T, we gain
$$\xi^0(T)S^0(T) + \xi^1(T)S^1(T) = S^0(T)Z^\tau(T) = -S^0(T)S^0(\tau)^{-1} < 0.$$
So we always lose at time T. Thus there is no admissible self-financing strategy with which we always win at time T, but there is an admissible self-financing strategy with which we always lose at time T.
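The doubling strategy from the coin-tossing paradox recalled earlier in this section can be sketched in a few lines: it nets exactly one unit whenever a head eventually appears, but the required stake grows like 2^n, which is the intuition behind restricting strategies by admissibility:

```python
import random

random.seed(7)

def doubling_gain(flips):
    # bet 1, 2, 4, ... (thousand yen) until the first head; a win returns
    # twice the stake just paid, a loss forfeits the stake
    stake, net = 1, 0
    for heads in flips:
        if heads:
            return net + stake  # 2*stake received minus the stake paid
        net -= stake
        stake *= 2
    return net  # no head appeared within the horizon

for _ in range(100):
    flips = [random.random() < 0.5 for _ in range(64)]
    if any(flips):
        assert doubling_gain(flips) == 1  # we always win exactly one unit
# ...but after 30 straight tails the next stake already exceeds a trillion yen
assert 2 ** 30 * 1000 > 10 ** 12
```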

7.3 General Case

We consider a more general model than the Black–Scholes model. Let (Ω, F, P) be a complete probability space and B = (B^1, …, B^d) a d-dimensional Brownian motion. Let F_t = σ{σ{B(s); s ≤ t} ∪ N_0}, t ≥ 0. Then by Proposition 3.7.3 we see that {F_t}_{t∈[0,∞)} is a filtration satisfying the standard condition, and B is an {F_t}_{t∈[0,∞)}-Brownian motion. Let S^i, i = 0, 1, …, N, be {F_t}_{t∈[0,∞)}-continuous semi-martingales satisfying the following condition.

(S-1) P(min_{t∈[0,T]} S^0(t) > 0) = 1 for any T > 0.

Then by Corollary 4.3.1 we see that S̃^i(t) = S^0(t)^{-1}S^i(t), t ≥ 0, i = 1, …, N, are continuous semi-martingales. Moreover, we assume the following.

(S-2) There are progressively measurable processes σ^i_j, b^i, i = 1, …, N, j = 1, 2, …, d, such that
$$P\Big(\int_0^T \sigma^i_j(t)^2\,dt < \infty\Big) = 1, \qquad T > 0,\ i = 1,\dots,N,\ j = 1,\dots,d,$$
$$P\Big(\int_0^T |b^i(t)|\,dt < \infty\Big) = 1, \qquad T > 0,\ i = 1,\dots,N,$$
and
$$\tilde S^i(t) = \tilde S^i(0) + \sum_{j=1}^d \int_0^t \sigma^i_j(s)\,dB^j(s) + \int_0^t b^i(s)\,ds, \qquad t \ge 0,\ i = 1,\dots,N.$$

Now we regard S^i, i = 0, 1, …, N, as security price processes, and we assume the following as before.

(1) There is no dividend from these securities.
(2) We can buy or sell the i-th security, i = 0, 1, …, N, at any time t ∈ [0, ∞), in any amount, at the price S^i(t).

(3) We can hold any real amount of each security without any constraint or cost.

Suppose that ξ^i, i = 1, …, N, are S̃^i-integrable progressively measurable processes. Then we see that
$$\sum_{i=1}^N \int_0^t \xi^i(s)\,d\tilde S^i(s) = \sum_{j=1}^d \int_0^t \Big(\sum_{i=1}^N \sigma^i_j(s)\xi^i(s)\Big)\,dB^j(s) + \int_0^t \Big(\sum_{i=1}^N b^i(s)\xi^i(s)\Big)\,ds.$$
So we extend the range of self-financing portfolio strategies so that progressively measurable processes (ξ^1(t), …, ξ^N(t)) are allowed as a self-financing portfolio strategy if
$$P\Big(\sum_{j=1}^d \int_0^T \Big(\sum_{i=1}^N \sigma^i_j(t)\xi^i(t)\Big)^2 dt + \int_0^T \Big|\sum_{i=1}^N b^i(t)\xi^i(t)\Big|\,dt < \infty\Big) = 1$$

for any T > 0. Then we have the following definition.

Definition 7.3.1 We say that (c, ξ) is an admissible self-financing portfolio strategy if the following two conditions are satisfied.
(i) c ∈ R and ξ = (ξ^0, ξ^1, …, ξ^N) is an R^{1+N}-valued stochastic process such that ξ^i, i = 1, …, N, are progressively measurable,
$$P\Big(\sum_{j=1}^d \int_0^T \Big(\sum_{i=1}^N \sigma^i_j(t)\xi^i(t)\Big)^2 dt + \int_0^T \Big|\sum_{i=1}^N b^i(t)\xi^i(t)\Big|\,dt < \infty\Big) = 1,$$
and
$$\xi^0(t) = -S^0(0)^{-1}c - \sum_{i=1}^N \xi^i(t)\tilde S^i(t) + \sum_{j=1}^d \int_0^t \Big(\sum_{i=1}^N \sigma^i_j(s)\xi^i(s)\Big)\,dB^j(s) + \int_0^t \Big(\sum_{i=1}^N b^i(s)\xi^i(s)\Big)\,ds, \qquad t \in [0,\infty).$$
(ii) For any T > 0 there is an a > 0 such that
$$\sum_{i=0}^N \xi^i(t)S^i(t) \ge -aS^0(t), \qquad t \in [0,T].$$

The above strategy is a self-financing portfolio strategy whose initial cost is −c and in which we hold ξ^i(t) units of the i-th security at time t, i = 1, …, N. Let
$$V^{(c,\xi)}(t) = \sum_{i=0}^N \xi^i(t)S^i(t).$$
Then, similarly to the argument used to derive Eq. (7.3), we see that
$$S^0(t)^{-1}V^{(c,\xi)}(t) = -S^0(0)^{-1}c + \sum_{j=1}^d \int_0^t \Big(\sum_{i=1}^N \sigma^i_j(s)\xi^i(s)\Big)\,dB^j(s) + \int_0^t \Big(\sum_{i=1}^N b^i(s)\xi^i(s)\Big)\,ds, \qquad t \in [0,\infty), \tag{7.4}$$

and the cash we get at time T is V^{(c,ξ)}(T) (yen) if we take the self-financing portfolio strategy (c, ξ) up to time T and clear our portfolio at time T.

Definition 7.3.2 (1) We say that ξ = (ξ^0, ξ^1, …, ξ^N) is an arbitrage strategy if (0, ξ) is an admissible self-financing portfolio strategy and there is a T > 0 such that P(V^{(0,ξ)}(T) ≥ 0) = 1 and P(V^{(0,ξ)}(T) > 0) > 0.
(2) We say that a security price system is non-arbitrage if there is no arbitrage strategy.

If there is an arbitrage strategy, the security market is out of equilibrium and the security price system will change. So it is necessary that the security price system be non-arbitrage.

Let Λ be the family of {F_t}_{t≥0}-continuous local martingales ρ satisfying the following two conditions.
(i) ρ(0) = 1, P(min_{t∈[0,T]} ρ(t) > 0) = 1 and E[ρ(T)] = 1 for any T > 0.
(ii) ρ(t)S̃^i(t), i = 1, …, N, are {F_t}_{t≥0}-continuous local martingales.

Let us note the following.

Proposition 7.3.1 Suppose that ρ is an {F_t}_{t≥0}-continuous local martingale such that ρ(0) = 1, P(min_{t∈[0,T]} ρ(t) > 0) = 1 and E[ρ(T)] = 1 for any T > 0. Then we have the following.

(1) There are η^ρ_j ∈ L²_loc, j = 1, 2, …, d, such that ρ = E(M) and
$$M(t) = \sum_{j=1}^d \int_0^t \eta^\rho_j(s)\,dB^j(s), \qquad t \in [0,\infty).$$
(2) ρ ∈ Λ if and only if
$$\sum_{j=1}^d \sigma^i_j(t)\eta^\rho_j(t) + b^i(t) = 0, \qquad dt \otimes dP\text{-a.e. }(t,\omega),\ i = 1,\dots,N.$$

Proof Let M be the continuous local martingale given by
$$M(t) = \int_0^t \frac{1}{\rho(s)}\,d\rho(s), \qquad t \in [0,\infty).$$
Then by Itô's formula we see that
$$\log\rho(t) = \int_0^t \frac{1}{\rho(s)}\,d\rho(s) - \frac12\int_0^t \frac{1}{\rho(s)^2}\,d\langle\rho\rangle(s) = M(t) - \frac12\langle M\rangle(t).$$
So we see that ρ = E(M). By Corollary 5.5.1 we see that there are η_j ∈ L²_loc, j = 1, 2, …, d, such that
$$M(t) = \sum_{j=1}^d \int_0^t \eta_j(s)\,dB^j(s), \qquad t \ge 0.$$

This implies Assertion (1). Also, by Itô's formula we see that
$$\rho(t)\tilde S^i(t) = \tilde S^i(0) + \int_0^t \rho(s)\tilde S^i(s)\,dM(s) + \int_0^t \rho(s)\,d\tilde S^i(s) + \int_0^t \rho(s)\,d\langle M, \tilde S^i\rangle(s)$$
$$= \tilde S^i(0) + \int_0^t \rho(s)\tilde S^i(s)\,dM(s) + \sum_{j=1}^d \int_0^t \rho(s)\sigma^i_j(s)\,dB^j(s) + \int_0^t \rho(s)b^i(s)\,ds + \sum_{j=1}^d \int_0^t \rho(s)\eta_j(s)\sigma^i_j(s)\,ds.$$
So ρ(t)S̃^i(t), i = 1, …, N, are continuous local martingales if and only if
$$\int_0^t \rho(s)\Big(\sum_{j=1}^d \sigma^i_j(s)\eta_j(s) + b^i(s)\Big)\,ds = 0 \quad \text{a.s.}, \qquad t \in [0,\infty),\ i = 1,\dots,N.$$
So we have Assertion (2).
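In the simplest special case N = d = 1 with constant coefficients σ^1_1(t) ≡ σ and b^1(t) ≡ b (a made-up configuration purely for illustration), Assertion (2) says that ρ = E(ηB) puts ρS̃^1 among the local martingales exactly when ση + b = 0. A quick Monte Carlo sanity check that E[ρ(T)S̃^1(T)] = S̃^1(0) under that condition:

```python
import math
import random

random.seed(11)
sigma, b, T = 0.3, 0.12, 1.0   # made-up constant coefficients
eta = -b / sigma               # the drift-killing condition sigma*eta + b = 0
s0, n = 2.0, 400000            # S~^1(0) and the number of samples

total = 0.0
for _ in range(n):
    bt = random.gauss(0.0, math.sqrt(T))
    s_tilde = s0 + sigma * bt + b * T               # S~^1(T)
    rho = math.exp(eta * bt - 0.5 * eta * eta * T)  # E(eta B)(T)
    total += rho * s_tilde
assert abs(total / n - s0) < 0.02  # rho * S~^1 keeps constant expectation
```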

Proposition 7.3.2 For each ρ ∈ Λ and T > 0 we define a probability measure Q^ρ_T on (Ω, F) by
$$Q^\rho_T(A) = E^P[\rho(T), A], \qquad A \in \mathcal{F}.$$
Then S^0(t ∧ T)^{-1}V^{(c,ξ)}(t ∧ T), t ∈ [0, ∞), is a Q^ρ_T-continuous local martingale and a Q^ρ_T-supermartingale for any admissible self-financing portfolio strategy (c, ξ).

Proof Let B̃^j(t), t ∈ [0, ∞), j = 1, …, d, be given by
$$\tilde B^j(t) = B^j(t) - \int_0^{t \wedge T} \eta^\rho_j(s)\,ds, \qquad t \in [0,\infty),\ j = 1,\dots,d.$$

By Theorem 5.3.1 we see that B̃ = (B̃^1, …, B̃^d) is a d-dimensional {F_t}_{t≥0}-Brownian motion under Q^ρ_T. Also, we see that
$$S^0(t)^{-1}V^{(c,\xi)}(t) = -S^0(0)^{-1}c + \sum_{j=1}^d \int_0^t \Big(\sum_{i=1}^N \sigma^i_j(s)\xi^i(s)\Big)\,d\tilde B^j(s), \qquad t \in [0,T].$$
So we see that S^0(t ∧ T)^{-1}V^{(c,ξ)}(t ∧ T) is a Q^ρ_T-continuous local martingale. Since (c, ξ) is assumed to be admissible, there is an a > 0 such that S^0(t ∧ T)^{-1}V^{(c,ξ)}(t ∧ T) ≥ −a, t ∈ [0,T]. So by Proposition 5.3.1 we see that it is a Q^ρ_T-supermartingale.

Proposition 7.3.3 If Λ ≠ ∅, then the security price system is non-arbitrage.

Proof Let ρ ∈ Λ. Suppose that ξ = (ξ^i)_{i=0,1,…,N} is an arbitrage strategy. Then there is a T > 0 such that P(V^{(0,ξ)}(T) ≥ 0) = 1 and P(V^{(0,ξ)}(T) > 0) > 0. Then we see that Q^ρ_T(V^{(0,ξ)}(T) ≥ 0) = 1 and Q^ρ_T(V^{(0,ξ)}(T) > 0) > 0. Since S^0(t ∧ T)^{-1}V^{(0,ξ)}(t ∧ T) is a Q^ρ_T-supermartingale by Proposition 7.3.2, we see that
$$0 = E^{Q^\rho_T}[S^0(0)^{-1}V^{(0,\xi)}(0)] \ge E^{Q^\rho_T}[S^0(T)^{-1}V^{(0,\xi)}(T)].$$
Since S^0(T)^{-1}V^{(0,ξ)}(T) is non-negative, we see that Q^ρ_T(S^0(T)^{-1}V^{(0,ξ)}(T) = 0) = 1.

So we see that P(V^{(0,ξ)}(T) = 0) = 1. This contradicts the assumption. So the security price system is non-arbitrage.

Delbaen [1] showed that Λ ≠ ∅ if a security price system is non-arbitrage.

Corollary 7.3.1 Assume that Λ ≠ ∅. Let Y be an F_T-measurable random variable such that S^0(T)^{-1}Y is bounded. Suppose that there is an admissible self-financing portfolio strategy (c, ξ) such that V^{(c,ξ)}(T) ≥ Y. Then S^0(0)E[ρ(T)S^0(T)^{-1}Y] ≤ −c for all ρ ∈ Λ.

Proof Since S^0(t ∧ T)^{-1}V^{(c,ξ)}(t ∧ T) is a Q^ρ_T-supermartingale, we see that
$$-S^0(0)^{-1}c = E^{Q^\rho_T}[S^0(0)^{-1}V^{(c,\xi)}(0)] \ge E^{Q^\rho_T}[S^0(T)^{-1}V^{(c,\xi)}(T)] \ge E^{Q^\rho_T}[S^0(T)^{-1}Y] = E[\rho(T)S^0(T)^{-1}Y].$$
This implies our assertion.

Proposition 7.3.4 Assume that Λ ≠ ∅. Let Y be an F_T-measurable random variable such that S^0(T)^{-1}Y is bounded. Suppose that there is a c ∈ R such that S^0(0)E[ρ(T)S^0(T)^{-1}Y] = −c for all ρ ∈ Λ. Then there is an admissible self-financing portfolio strategy (c, ξ) such that V^{(c,ξ)}(T) = Y.

We make some preparations to prove this proposition. Let us take a ρ̄ ∈ Λ and fix it. Let Q̄ = Q^{ρ̄}_T, and let
$$\bar B^j(t) = B^j(t) - \int_0^{t \wedge T} \eta^{\bar\rho}_j(s)\,ds, \qquad t \in [0,\infty),\ j = 1,\dots,d.$$

Then B̄ = (B̄^1, …, B̄^d) is a d-dimensional {F_t}_{t∈[0,∞)}-Brownian motion under Q̄. Note that
$$\tilde S^i(t) = \tilde S^i(0) + \sum_{j=1}^d \int_0^t \sigma^i_j(s)\,d\bar B^j(s), \qquad i = 1,\dots,N,\ t \in [0,T].$$
Recall that S^0(T)^{-1}Y is bounded. Let Z(t) = E^{Q̄}[S^0(T)^{-1}Y | F_t], t ∈ [0,T]. Then by Proposition 5.5.3 we see that there are α_j ∈ L²_loc, j = 1, 2, …, d, such that
$$Z(t) = E^{\bar Q}[S^0(T)^{-1}Y] + \sum_{j=1}^d \int_0^t \alpha_j(s)\,d\bar B^j(s), \qquad t \in [0,T].$$
Let α(t) be the d-dimensional-column-vector-valued stochastic process and Σ(t) the N × d-matrix-valued stochastic process given by α(t) = (α_1(t), …, α_d(t)) and Σ(t) = {σ^i_j(t)}_{i=1,…,N, j=1,…,d}, t ∈ [0,T]. Then we have the following.
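The next proposition uses the map ψ_{N,d} of Appendix 8.4, for which ψ_{N,d}(Σ(t))Σ(t) is an orthogonal projection of R^d. As a concrete illustration, for a full-row-rank matrix Σ this projection onto the row space can be realized as Σ^⊤(ΣΣ^⊤)^{-1}Σ (an assumption made here for the sketch; Appendix 8.4 contains the book's definition), and the normalized residual that plays the role of δ(t) can be built explicitly:

```python
import random

random.seed(5)
N, d = 2, 4  # made-up dimensions with N < d
Sigma = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(N)]
alpha = [random.gauss(0.0, 1.0) for _ in range(d)]

# Gram matrix Sigma Sigma^T (2x2 here) and its explicit inverse
G = [[sum(Sigma[i][k] * Sigma[j][k] for k in range(d)) for j in range(N)]
     for i in range(N)]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[G[1][1] / det, -G[0][1] / det], [-G[1][0] / det, G[0][0] / det]]

def project(v):
    # P v = Sigma^T (Sigma Sigma^T)^{-1} Sigma v, projection onto the row space
    w = [sum(Sigma[i][k] * v[k] for k in range(d)) for i in range(N)]
    u = [sum(Ginv[i][j] * w[j] for j in range(N)) for i in range(N)]
    return [sum(Sigma[i][k] * u[i] for i in range(N)) for k in range(d)]

Pa = project(alpha)
resid = [alpha[k] - Pa[k] for k in range(d)]  # (I_d - P) alpha
norm = sum(x * x for x in resid) ** 0.5
delta = [x / norm for x in resid] if norm > 0 else [0.0] * d

# Sigma delta = 0: sum_j sigma^i_j delta_j = 0 for every i, as in the proof
for i in range(N):
    assert abs(sum(Sigma[i][j] * delta[j] for j in range(d))) < 1e-9
# (alpha, (I - P) alpha) = |(I - P) alpha|^2, the identity used in the proof
assert abs(sum(a * r for a, r in zip(alpha, resid)) - norm ** 2) < 1e-9
```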

Proposition 7.3.5 Under the assumptions of Proposition 7.3.4,
$$(I_d - \psi_{N,d}(\Sigma(t))\Sigma(t))\,\alpha(t) = 0 \qquad \text{for } dt \otimes dP\text{-a.e. }(t,\omega) \in (0,T) \times \Omega.$$
Here we use the notation given in Appendix 8.4.

Proof Note that ψ_{N,d}(Σ(t))Σ(t) is an orthogonal projection in R^d. Let δ(t) be the d-dimensional-column-vector-valued stochastic process given by
$$\delta(t) = |(I_d - \psi_{N,d}(\Sigma(t))\Sigma(t))\alpha(t)|^{-1}\,(I_d - \psi_{N,d}(\Sigma(t))\Sigma(t))\alpha(t)$$
if |(I_d − ψ_{N,d}(Σ(t))Σ(t))α(t)| > 0, and δ(t) = 0 if |(I_d − ψ_{N,d}(Σ(t))Σ(t))α(t)| = 0. Then we see that |δ(t)| ≤ 1 and that ψ_{N,d}(Σ(t))Σ(t)δ(t) = δ(t)... more precisely, ψ_{N,d}(Σ(t))Σ(t)δ(t) = 0, so Σ(t)δ(t) = 0, i.e.,
$$\sum_{j=1}^d \sigma^i_j(t)\delta_j(t) = 0, \qquad i = 1,\dots,N.$$
For each ε > 0, let
$$N^\varepsilon(t) = \varepsilon \sum_{j=1}^d \int_0^t 1_{(0,T)}(s)\,\delta_j(s)\,d\bar B^j(s), \qquad t \in [0,\infty).$$

Then we see that N^ε is a Q̄-continuous local martingale and that ⟨N^ε⟩(t) ≤ ε²t. Therefore by Proposition 5.3.3 and Theorem 5.3.2 we see that E(N^ε)(t), t ≥ 0, is a Q̄-martingale. So by Proposition 5.3.4 we see that E(N^ε)(t)ρ̄(t) is a P-martingale. Note that
$$\log\big(\mathcal{E}(N^\varepsilon)(t)\bar\rho(t)\big) = N^\varepsilon(t) - \frac12\langle N^\varepsilon\rangle(t) + \sum_{j=1}^d \int_0^t \eta^{\bar\rho}_j(s)\,dB^j(s) - \frac12\sum_{j=1}^d \int_0^t \eta^{\bar\rho}_j(s)^2\,ds$$
$$= \sum_{j=1}^d \int_0^t \big(\eta^{\bar\rho}_j(s) + \varepsilon\delta_j(s)\big)\,dB^j(s) - \frac12\sum_{j=1}^d \int_0^t \big(\eta^{\bar\rho}_j(s) + \varepsilon\delta_j(s)\big)^2\,ds, \qquad t \in [0,T],$$
and that
$$\sum_{j=1}^d \sigma^i_j(t)\big(\eta^{\bar\rho}_j(t) + \varepsilon\delta_j(t)\big) + b^i(t) = 0, \qquad i = 1,\dots,N,\ t \in [0,T].$$
So we see that E(N^ε)ρ̄ ∈ Λ.


From the assumption we see that
$$0 = E[\mathcal{E}(N^\varepsilon)(T)\rho(T)S^0(T)^{-1}Y] - E[\rho(T)S^0(T)^{-1}Y]$$
$$= E^{\bar{Q}}[\mathcal{E}(N^\varepsilon)(T)S^0(T)^{-1}Y] - E^{\bar{Q}}[S^0(T)^{-1}Y] = E^{\bar{Q}}[(\mathcal{E}(N^\varepsilon)(T) - 1)(Z(T) - Z(0))]$$
$$= E^{\bar{Q}}\Big[\Big(\varepsilon\sum_{j=1}^d \int_0^T \mathcal{E}(N^\varepsilon)(t)\delta_j(t)\, d\bar{B}^j(t)\Big)\Big(\sum_{j=1}^d \int_0^T \alpha_j(t)\, d\bar{B}^j(t)\Big)\Big]$$
$$= \varepsilon E^{\bar{Q}}\Big[\int_0^T \mathcal{E}(N^\varepsilon)(t)\Big(\sum_{j=1}^d \delta_j(t)\alpha_j(t)\Big)\, dt\Big].$$
Note that if $|(I_d - \psi_{N,d}(\Sigma(t))\Sigma(t))\alpha(t)| = 0$ then $\sum_{j=1}^d \delta_j(t)\alpha_j(t) = 0$, and that if $|(I_d - \psi_{N,d}(\Sigma(t))\Sigma(t))\alpha(t)| > 0$ then
$$\sum_{j=1}^d \delta_j(t)\alpha_j(t) = |(I_d - \psi_{N,d}(\Sigma(t))\Sigma(t))\alpha(t)|^{-1}(\alpha(t), (I_d - \psi_{N,d}(\Sigma(t))\Sigma(t))\alpha(t))_{\mathbb{R}^d} = |(I_d - \psi_{N,d}(\Sigma(t))\Sigma(t))\alpha(t)|.$$
So we see that
$$0 = E^{\bar{Q}}\Big[\int_0^T \mathcal{E}(N^\varepsilon)(t)\,|(I_d - \psi_{N,d}(\Sigma(t))\Sigma(t))\alpha(t)|\, dt\Big].$$

This implies our assertion. $\square$

Now let us prove Proposition 7.3.4. Let $\xi(t) = \psi_{d,N}(\Sigma(t)^*)\alpha(t)$. Since $\psi_{N,d}(\Sigma(t))\Sigma(t) = \Sigma(t)^*\psi_{d,N}(\Sigma(t)^*)$ and, by Proposition 7.3.5, $\psi_{N,d}(\Sigma(t))\Sigma(t)\alpha(t) = \alpha(t)$, we see that $\Sigma(t)^*\xi(t) = \alpha(t)$. Therefore we see that
$$\sum_{i=1}^N \sigma^i_j(t)\xi^i(t) = \alpha_j(t), \qquad j = 1, \dots, d,\ t \ge 0.$$
Also, we see that
$$\sum_{i=1}^N b^i(t)\xi^i(t) = -\sum_{i=1}^N \Big(\sum_{j=1}^d \sigma^i_j(t)\eta^{\bar\rho}_j(t)\Big)\xi^i(t) = -\sum_{j=1}^d \eta^{\bar\rho}_j(t)\alpha_j(t).$$
Therefore we see that
$$P\Big(\sum_{j=1}^d \int_0^t \Big(\sum_{i=1}^N \sigma^i_j(s)\xi^i(s)\Big)^2 ds + \int_0^t \Big|\sum_{i=1}^N b^i(s)\xi^i(s)\Big|\, ds < \infty\Big) = 1$$


and that
$$Z(t) = E^{\bar{Q}}[S^0(T)^{-1}Y] + \sum_{j=1}^d \int_0^t \Big(\sum_{i=1}^N \sigma^i_j(s)\xi^i(s)\Big) dB^j(s) + \int_0^t \Big(\sum_{i=1}^N b^i(s)\xi^i(s)\Big) ds, \qquad t \in [0, T].$$
From the assumption, there is an $a > 0$ such that $S^0(T)^{-1}Y \ge -a$. So we see that $Z(t) = E^{\bar{Q}}[S^0(T)^{-1}Y \mid \mathcal{F}_t] \ge -a$, $t \in [0, T]$. Recall that $c = -E^{\bar{Q}}[S^0(T)^{-1}Y]$. Let
$$\xi^0(t) = -S^0(0)^{-1}c - \sum_{i=1}^N \xi^i(t)\tilde{S}^i(t) + \sum_{j=1}^d \int_0^t \Big(\sum_{i=1}^N \sigma^i_j(s)\xi^i(s)\Big) dB^j(s) + \int_0^t \Big(\sum_{i=1}^N b^i(s)\xi^i(s)\Big) ds, \qquad t \in [0, \infty).$$
Then we see that $(c, \xi)$ is an admissible self-financing portfolio strategy and that $V^{(c,\xi)}(T) = Y$. This proves Proposition 7.3.4. $\square$

By combining these results, we obtain the following theorem.

Theorem 7.3.1 Assume that $\mathcal{R} \neq \emptyset$. Let $Y$ be an $\mathcal{F}_T$-measurable random variable such that $S^0(T)^{-1}Y$ is bounded.
(1) If there is an admissible self-financing portfolio strategy $(c, \xi)$ such that $V^{(c,\xi)}(T) = Y$, then $E[\rho(T)S^0(T)^{-1}Y] = -c$ for all $\rho \in \mathcal{R}$.
(2) If $E[\rho(T)S^0(T)^{-1}Y]$ takes the same value for all $\rho \in \mathcal{R}$, then there is an admissible self-financing portfolio strategy $(c, \xi)$ such that $V^{(c,\xi)}(T) = Y$ and $c = -E[\rho(T)S^0(T)^{-1}Y]$ for all $\rho \in \mathcal{R}$.

Let $Y$ be an $\mathcal{F}_T$-measurable random variable such that $S^0(T)^{-1}Y$ is bounded. We assume that the following contract is traded in a market: the derivative buyer receives $Y$ (yen) from the derivative seller at time $T > 0$. Let us assume that $\mathcal{R} \neq \emptyset$, and suppose that $E[\rho(T)S^0(T)^{-1}Y]$ takes the same value for all $\rho \in \mathcal{R}$. Then by Theorem 7.3.1 we see that if the derivative seller is paid $S^0(0)E[\rho(T)S^0(T)^{-1}Y]$ (yen) at time 0, then he can replicate the cash $Y$ (yen) with probability 1 without any further cost through an admissible self-financing portfolio strategy. Therefore if the price of this contract is more than $S^0(0)E[\rho(T)S^0(T)^{-1}Y]$ (yen), one can gain a profit by selling this contract. Also, by a similar argument, if the price of this contract is less than $S^0(0)E[\rho(T)S^0(T)^{-1}Y]$ (yen), one can gain a profit by buying this contract. So we can conclude that the price of this contract has to be $S^0(0)E[\rho(T)S^0(T)^{-1}Y]$ (yen).
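In the simplest discrete setting this pricing-by-replication argument can be carried out by hand. The sketch below prices a contract in a one-period binomial market: the price equals the numeraire-discounted expectation of the payoff under the risk-neutral measure, and a replicating portfolio with exactly that set-up cost exists. All numerical data (rate, up/down factors, strike) are illustrative assumptions, not taken from the text:

```python
# One-period binomial sketch of the pricing rule: price = S0(0) * E_Q[ S0(T)^{-1} Y ].
# Market data (r, u, d, S(0)) and the payoff are illustrative choices.
import math

r, u, d = 0.05, 1.2, 0.9          # riskless rate, up/down factors (assumed numbers)
S0_0, S_0 = 1.0, 100.0            # numeraire S^0(0) and risky asset S(0)
S0_T = S0_0 * (1 + r)             # numeraire at time T

K = 100.0                          # strike of an illustrative call payoff Y = (S(T) - K)^+
payoff = {"up": max(S_0 * u - K, 0.0), "down": max(S_0 * d - K, 0.0)}

# Unique risk-neutral probability q: the discounted risky asset is a Q-martingale.
q = ((1 + r) - d) / (u - d)

# Price as the expectation of the discounted payoff under Q.
price = S0_0 * (q * payoff["up"] / S0_T + (1 - q) * payoff["down"] / S0_T)

# Replication: holdings (xi0 in the numeraire, xi in the stock) matching Y in
# both states; the set-up cost must equal the price.
xi = (payoff["up"] - payoff["down"]) / (S_0 * (u - d))
xi0 = (payoff["up"] - xi * S_0 * u) / S0_T
cost = xi0 * S0_0 + xi * S_0

assert math.isclose(price, cost)
print(price)
```

Any price other than `cost` would allow riskless profit by selling or buying the contract and running the replicating portfolio, which is exactly the argument above.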


In the case that $E[\rho(T)S^0(T)^{-1}Y]$ takes different values depending on $\rho \in \mathcal{R}$, we cannot determine the price of this contract by a replication argument. If $\mathcal{R}$ has only one element $\rho$, the price of this contract is $S^0(0)E[\rho(T)S^0(T)^{-1}Y]$ (yen), and $Y$ can be replicated as long as $Y$ is $\mathcal{F}_T$-measurable and $S^0(T)^{-1}Y$ is bounded. We say that the market is complete if any of these contracts can be replicated.

Let us consider the following condition.

(S-3) $N = d$, and $\det(\sigma^i_j(t, \omega))_{i,j=1,\dots,N} \neq 0$ $dt \otimes dP$-a.e. Moreover, there is a progressively measurable process $\gamma$ such that
$$\sum_{j=1}^N \sigma^i_j(t)\gamma_j(t) + b^i(t) = 0, \quad dt \otimes dP\text{-a.e.},\ i = 1, \dots, N,$$
and
$$E\Big[\exp\Big(\frac{1}{2}\sum_{j=1}^N \int_0^T \gamma_j(t)^2\, dt\Big)\Big] < \infty, \qquad T > 0.$$

Proposition 7.3.6 Assume that (S-3) is satisfied. Then $\mathcal{R}$ has a unique element $\rho$, and $\eta^\rho = \gamma$.

Proof Let $N(t) = \sum_{j=1}^d \int_0^t \gamma_j(s)\, dB^j(s)$, $t \ge 0$, and let $\rho_0 = \mathcal{E}(N)$. Then by Theorem 5.3.2 we see that $\rho_0$ is a martingale. Also, we see that
$$\langle \rho_0, \tilde{S}^i \rangle(t) = \sum_{j=1}^N \int_0^t \rho_0(s)\sigma^i_j(s)\gamma_j(s)\, ds.$$
Therefore we see that
$$\rho_0(t)\tilde{S}^i(t) = \tilde{S}^i(0) + \int_0^t \tilde{S}^i(s)\, d\rho_0(s) + \int_0^t \rho_0(s)\, d\tilde{S}^i(s) + \langle \rho_0, \tilde{S}^i \rangle(t)$$
$$= \tilde{S}^i(0) + \int_0^t \tilde{S}^i(s)\, d\rho_0(s) + \sum_{j=1}^d \int_0^t \rho_0(s)\sigma^i_j(s)\, dB^j(s) + \int_0^t \rho_0(s)\Big(\sum_{j=1}^N \sigma^i_j(s)\gamma_j(s) + b^i(s)\Big) ds.$$
This shows that $\rho_0 \in \mathcal{R}$. On the other hand, if $\rho \in \mathcal{R}$, then we see by Proposition 7.3.4 that
$$\sum_{j=1}^N \sigma^i_j(t)\eta^\rho_j(t) + b^i(t) = 0, \quad dt \otimes dP\text{-a.e.},\ i = 1, \dots, N.$$
Since $(\sigma^i_j(t, \omega))_{i,j=1,\dots,N}$ is a regular matrix for $dt \otimes dP$-a.e. $(t, \omega)$, we see that $\eta^\rho(t, \omega) = \gamma(t, \omega)$ for $dt \otimes dP$-a.e. $(t, \omega)$, and so we see that $\rho = \rho_0$. This implies our assertion. $\square$


7.4 American Derivative

We consider the same situation as in the previous section, i.e., $(\Omega, \mathcal{F}, P)$ is a complete probability space, $B = (B^1, \dots, B^d)$ is a $d$-dimensional Brownian motion, $\mathcal{F}_t = \sigma\{\sigma\{B(s);\ s \le t\} \cup \mathcal{N}_0\}$, $t \ge 0$, $S^i$, $i = 0, 1, \dots, N$, are $\{\mathcal{F}_t\}_{t\in[0,\infty)}$-continuous semi-martingales, and Conditions (S-1) and (S-2) are satisfied. We also assume that Condition (S-3) is satisfied. Then we see that $\mathcal{R}$ has a unique element $\rho$.

Let $Z : [0, \infty) \times \Omega \to \mathbb{R}$ be a continuous progressively measurable process and $T > 0$. We assume that the following contract is traded in a market.

(A) If the holder exercises his right at a time $t \in [0, T]$, then he will receive $Z(t)$ (yen) from the writer.

This type of contract is called an American derivative. Let $Y(t) = Z(t) \vee 0$. When $Z(t)$ is negative, the holder will not exercise the right at time $t$; he may simply give up the right. So we consider contract (A) to be equivalent to the following contract (A').

(A') If the holder exercises his right at a time $t \in [0, T]$, then he will receive $Y(t)$ (yen) from the writer.

We consider the price of this contract. Let $Q_T$ be the probability measure on $(\Omega, \mathcal{F})$ given by
$$Q_T(A) = E[\rho(T),\, A], \qquad A \in \mathcal{F}.$$
Also, let $X$ be the continuous progressively measurable process given by $X(t) = S^0(t \wedge T)^{-1}Y(t \wedge T)$, $t \in [0, \infty)$. Note that $X(t) \ge 0$. Now we make the following assumption (SA).

(SA) There is a $C_0 \in (0, \infty)$ such that $X(t) \le C_0$, $t \in [0, T]$, with probability 1.

Then we see that
$$E^{Q_T}\big[\sup_{t\in[0,T]} X(t)^2\big] < \infty.$$
Let $b = \sup\{E^{Q_T}[X(\tau)];\ \tau \in \mathcal{S}_0^T\}$. Then by Theorem 3.8.1 we see that there is a square integrable $\{\mathcal{F}_t\}_{t\in[0,\infty)}$-$Q_T$-martingale $M$ such that $M(0) = 0$ and
$$X(t) \le b + M(t), \qquad t \in [0, T],$$
with probability 1. By Proposition 5.5.3 we see that $M$ is a continuous martingale. Let $\sigma_0$ be the stopping time given by
$$\sigma_0 = T \wedge \inf\{t \in [0, T];\ M(t) > C_0 - b\}$$
and let $\tilde{M} = M^{\sigma_0}$. Then we see that $\tilde{M}(t) \le C_0 - b$, $t \in [0, T]$, and also we easily see that
$$X(t) \le b + \tilde{M}(t), \qquad t \in [0, T],$$
with probability 1. So we see that $\tilde{M}(t) \ge -b$, $t \in [0, T]$.


Now let
$$\bar{B}^j(t) = B^j(t) - \int_0^{t \wedge T} \eta^{\bar\rho}_j(s)\, ds, \qquad t \in [0, \infty),\ j = 1, \dots, d.$$
Then we see that $\bar{B} = (\bar{B}^1, \dots, \bar{B}^d)$ is a $d$-dimensional Brownian motion under $Q_T$, and that
$$\tilde{S}^i(t) = \tilde{S}^i(0) + \sum_{j=1}^N \int_0^t \sigma^i_j(s)\, d\bar{B}^j(s), \qquad i = 1, \dots, N,\ t \in [0, T].$$
By Proposition 5.5.3 we see that there are $\alpha_j \in \mathcal{L}^2_{loc}$, $j = 1, 2, \dots, N$, such that
$$\tilde{M}(t) = \sum_{j=1}^N \int_0^t \alpha_j(s)\, d\bar{B}^j(s).$$

By Assumption (S-3) we see that there are progressively measurable processes $\xi^i$, $i = 1, \dots, N$, such that
$$\sum_{i=1}^N \sigma^i_j(t)\xi^i(t) = \alpha_j(t), \qquad j = 1, \dots, N,\ \text{a.e. } (t, \omega) \in [0, T] \times \Omega.$$
So we see that
$$\tilde{M}(t) = \sum_{i=1}^N \int_0^t \xi^i(s)\, d\tilde{S}^i(s), \qquad t \in [0, T].$$

Also let
$$\xi^0(t) = b - \sum_{i=1}^N \xi^i(t)\tilde{S}^i(t) + \sum_{j=1}^d \int_0^t \Big(\sum_{i=1}^N \sigma^i_j(s)\xi^i(s)\Big) dB^j(s) + \int_0^t \Big(\sum_{i=1}^N b^i(s)\xi^i(s)\Big) ds, \qquad t \in [0, \infty),$$
and $c = -S^0(0)b$. Then we see that $(c, \xi)$ is an admissible self-financing portfolio strategy such that $S^0(t)^{-1}V^{(c,\xi)}(t) = b + \tilde{M}(t)$, $t \in [0, T]$. So we see that
$$Y(t) \le V^{(c,\xi)}(t), \qquad t \in [0, T].$$


Suppose that the seller of the American derivative (A) receives $-c = S^0(0)b$ (yen) at time 0 and takes the admissible self-financing portfolio strategy $(c, \xi)$. When the buyer exercises the American derivative at a time $t \in [0, T]$, the seller will not make any loss if he clears his portfolio at the same time $t$. So if the price of the American derivative (A) is more than $S^0(0)b$, then one can gain a profit with probability 1 by selling it.

Now let us consider the situation of the buyer of contract (A). By Theorem 3.8.1 we see that there is a stopping time $\tau \in \mathcal{S}_0^T$ such that $E^{Q_T}[X(\tau)] = b$. Let $N(t) = -E^{Q_T}[X(\tau) \mid \mathcal{F}_t]$, $t \ge 0$. Then we see that $N(\tau) = -X(\tau)$. Since $N$ is a bounded $Q_T$-martingale, by an argument similar to that in Sect. 7.3 we see that there is an admissible self-financing portfolio strategy $(c, \xi)$ such that $c = S^0(0)b$,
$$N(t) = -b + \sum_{i=1}^N \int_0^t \xi^i(s)\, d\tilde{S}^i(s), \qquad t \in [0, T],$$
and
$$\xi^0(t) = -b - \sum_{i=1}^N \xi^i(t)\tilde{S}^i(t) + \sum_{j=1}^d \int_0^t \Big(\sum_{i=1}^N \sigma^i_j(s)\xi^i(s)\Big) dB^j(s) + \int_0^t \Big(\sum_{i=1}^N b^i(s)\xi^i(s)\Big) ds, \qquad t \in [0, \infty).$$
Note that $S^0(t)^{-1}V^{(c,\xi)}(t) = N(t)$, $t \in [0, T]$. Suppose that a person buys the American derivative (A) at the price $S^0(0)b$ (yen), takes the admissible self-financing portfolio strategy $(c, \xi)$ and clears his portfolio at time $\tau$. Then the total cash he has to pay at time 0 is $S^0(0)b + V^{(c,\xi)}(0) = S^0(0)b - S^0(0)b = 0$ (yen). Suppose moreover that he gives up the American derivative (A) if $X(\tau) = -N(\tau) = 0$ and that he exercises it at time $\tau$ if $X(\tau) > 0$. Then he will receive $Y(\tau) = S^0(\tau)X(\tau)$ (yen) at time $\tau$ from the American derivative (A). So the total cash he has to pay at time $\tau$ is also 0 (yen). By this argument we see that if the price of the American derivative (A) is less than $S^0(0)b$ (yen), then one can gain a profit by buying it.

By the above arguments we obtain the following conclusion. If Assumptions (S-1), (S-2) and (S-3) are satisfied, and if there is a $C_0 > 0$ such that $\sup_{t\in[0,T]} S^0(t)^{-1}Z(t) \le C_0$ with probability 1, then the price $p$ of the American derivative (A) is given by
$$p = S^0(0)\sup\{E^{Q_T}[S^0(\tau)^{-1}(Z(\tau) \vee 0)];\ \tau \in \mathcal{S}_0^T\}.$$
Also, as long as the price of the American derivative (A) is $p$, both the buyer and the seller can avoid risk by taking a suitable admissible self-financing portfolio strategy.
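The formula for $p$ is an optimal stopping problem. In a discrete binomial model the supremum over stopping times is computed by backward induction on the Snell envelope: at each node one takes the maximum of the immediate discounted payoff and the conditional expectation of continuing. A minimal sketch; all model parameters are illustrative assumptions, and the contract is an American put:

```python
# Snell-envelope sketch of American pricing: p = sup over stopping times of
# E_Q[discounted payoff]. Binomial model parameters are illustrative choices.
r, u, d, S0, K, n = 0.05, 1.2, 0.9, 100.0, 100.0, 3   # assumed data; K: put strike
q = ((1 + r) - d) / (u - d)                            # risk-neutral up-probability

def payoff(s):                     # American put payoff; Z(t) v 0 is automatic here
    return max(K - s, 0.0)

# Time-0-discounted Snell envelope V[j] at step i, node with j up-moves.
V = [payoff(S0 * u**j * d**(n - j)) / (1 + r)**n for j in range(n + 1)]
for i in range(n - 1, -1, -1):
    V = [max(payoff(S0 * u**j * d**(i - j)) / (1 + r)**i,   # exercise now
             q * V[j + 1] + (1 - q) * V[j])                 # continue
         for j in range(i + 1)]
american = V[0]

# European counterpart (exercise only at T) for comparison: tau = T is one
# admissible stopping time, so the supremum must dominate it.
E = [payoff(S0 * u**j * d**(n - j)) / (1 + r)**n for j in range(n + 1)]
for i in range(n - 1, -1, -1):
    E = [q * E[j + 1] + (1 - q) * E[j] for j in range(i + 1)]
european = E[0]

assert american >= european
print(american, european)
```

With these parameters early exercise is optimal in the lowest nodes, so the American value strictly exceeds the European one, mirroring the strict supremum over stopping times.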

Chapter 8

Appendices

8.1 Dynkin's Lemma

We prove Proposition 1.1.6 in this section.

Definition 8.1.1 We say that a family $\mathcal{C}$ of subsets of a set $S$ is a $\pi$-system if $A \cap B \in \mathcal{C}$ for any $A, B \in \mathcal{C}$.

Definition 8.1.2 We say that a family $\mathcal{D}$ of subsets of a set $S$ is a Dynkin class over $S$ if the following three conditions are satisfied.
(1) $S \in \mathcal{D}$.
(2) If $A, B \in \mathcal{D}$ and $A \subset B$, then $B \setminus A \in \mathcal{D}$.
(3) If $A_1, A_2, \dots \in \mathcal{D}$ is a non-decreasing sequence of sets, i.e., $A_1 \subset A_2 \subset A_3 \subset \cdots$, then $\bigcup_{n=1}^\infty A_n \in \mathcal{D}$.

Proposition 8.1.1 Let $\mathcal{A}$ be a family of subsets of a set $S$. Then the following are equivalent.
(1) $\mathcal{A}$ is a $\sigma$-algebra over $S$.
(2) $\mathcal{A}$ is a $\pi$-system and a Dynkin class over $S$.

Proof It is obvious that (1) implies (2). Suppose that (2) holds. Then for any $A, B \in \mathcal{A}$ we see that $A \cup B = S \setminus ((S \setminus A) \cap (S \setminus B)) \in \mathcal{A}$. Suppose that $A_n \in \mathcal{A}$, $n = 1, 2, \dots$. Let $B_n = \bigcup_{k=1}^n A_k$, $n = 1, 2, \dots$. Then we see that $B_n \in \mathcal{A}$ and $B_1 \subset B_2 \subset \cdots$. So we see that $\bigcup_{n=1}^\infty A_n = \bigcup_{n=1}^\infty B_n \in \mathcal{A}$. This implies that $\mathcal{A}$ is a $\sigma$-algebra. $\square$

Proposition 8.1.2 (1) Suppose that $\mathcal{D}_\lambda$, $\lambda \in \Lambda$, are Dynkin classes over a set $S$. Then $\bigcap_{\lambda \in \Lambda} \mathcal{D}_\lambda$ is also a Dynkin class over $S$.
(2) Let $\mathcal{C}$ be a family of subsets of a set $S$. Then there is a minimum Dynkin class $d\{\mathcal{C}\}$ over the set $S$ which contains $\mathcal{C}$, i.e., $d\{\mathcal{C}\}$ is a Dynkin class over $S$ containing $\mathcal{C}$, and if a Dynkin class $\mathcal{D}$ over $S$ contains $\mathcal{C}$, then $d\{\mathcal{C}\} \subset \mathcal{D}$.

© Springer Nature Singapore Pte Ltd. 2020
S. Kusuoka, Stochastic Analysis, Monographs in Mathematical Economics 3, https://doi.org/10.1007/978-981-15-8864-8_8


Proof Assertion (1) is obvious. Let $\mathbf{D}$ be the set of all Dynkin classes over $S$ containing $\mathcal{C}$. Then we see that $2^S \in \mathbf{D}$ and that $\mathcal{D} = \bigcap_{\mathcal{A} \in \mathbf{D}} \mathcal{A}$ is a Dynkin class over $S$, and so it is the minimum Dynkin class. $\square$

Theorem 8.1.1 (Dynkin) Let $S$ be a set and $\mathcal{C}$ a family of subsets of $S$. Assume that $\mathcal{C}$ is a $\pi$-system. If $\mathcal{D}$ is a Dynkin class over the set $S$ such that $\mathcal{C} \subset \mathcal{D}$, then $\sigma\{\mathcal{C}\} \subset \mathcal{D}$.

Proof It is obvious that $d\{\mathcal{C}\} \subset \mathcal{D}$. Let $\mathcal{D}_A = \{D \in d\{\mathcal{C}\};\ D \cap A \in d\{\mathcal{C}\}\}$ for $A \in d\{\mathcal{C}\}$. Then it is easy to show the following two claims.
Claim 1. $\mathcal{D}_A$ is a Dynkin class over $S$ for any $A \in d\{\mathcal{C}\}$.
Claim 2. $\mathcal{C} \subset \mathcal{D}_A$ for any $A \in \mathcal{C}$.
Then we see that $d\{\mathcal{C}\} \subset \mathcal{D}_A$ for any $A \in \mathcal{C}$. So we see that if $A \in \mathcal{C}$ and $B \in d\{\mathcal{C}\}$, then $A \cap B \in d\{\mathcal{C}\}$. Therefore we see that $\mathcal{C} \subset \mathcal{D}_B$ for any $B \in d\{\mathcal{C}\}$. Then we see that $d\{\mathcal{C}\} \subset \mathcal{D}_B$ for any $B \in d\{\mathcal{C}\}$. Therefore we see that $A \cap B \in d\{\mathcal{C}\}$ for any $A, B \in d\{\mathcal{C}\}$, and so $d\{\mathcal{C}\}$ is a $\pi$-system. This implies that $d\{\mathcal{C}\}$ is a $\sigma$-algebra over $S$, and so we see that $\sigma\{\mathcal{C}\} \subset d\{\mathcal{C}\} \subset \mathcal{D}$. $\square$

Now let us prove Proposition 1.1.6. Recall that $(S, \mathcal{B})$ is a measurable space and that $\mu_1$ and $\mu_2$ are measures on $(S, \mathcal{B})$. Let us define $\mathcal{D} \subset \mathcal{B}$ by $\mathcal{D} = \{D \in \mathcal{B};\ \mu_1(D) = \mu_2(D)\}$. Then from the assumption we see that $S \in \mathcal{D}$. Also, it is easy to see that $\mathcal{D}$ is a Dynkin class over $S$. Since $\mathcal{A} \subset \mathcal{D}$ and $\mathcal{A}$ is a $\pi$-system, we see by Theorem 8.1.1 that $\sigma\{\mathcal{A}\} \subset \mathcal{D}$. This proves Proposition 1.1.6.

Example Let us consider the case $S = \mathbb{R}$. It is obvious that $\mathcal{C} = \{(-\infty, x];\ x \in \mathbb{R}\}$ is a $\pi$-system. Note that $\sigma\{\mathcal{C}\} = \mathcal{B}(\mathbb{R})$. Therefore if two finite measures $\mu_1$ and $\mu_2$ on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ satisfy
$$\mu_1((-\infty, x]) = \mu_2((-\infty, x]), \qquad x \in \mathbb{R},$$
then
$$\mu_1(A) = \mu_2(A), \qquad A \in \mathcal{B}(\mathbb{R}).$$
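For a finite base set, the uniqueness argument of Proposition 1.1.6 can be checked by brute force: generate the $\sigma$-algebra from a $\pi$-system and verify that two measures agreeing on the $\pi$-system (and on $S$) agree on every generated set, even though they may differ on sets outside it. A small sketch; the base set, the $\pi$-system and the two measures are made-up test data:

```python
# Brute-force check of the pi-system/Dynkin uniqueness argument on a finite set:
# two measures agreeing on a pi-system C (and on S) agree on sigma{C}.
S = frozenset(range(4))
C = [frozenset({0, 1}), frozenset({0, 1, 2})]   # a pi-system: closed under intersection

def sigma(generators):
    """Smallest sigma-algebra on S containing the generators.

    On a finite set it suffices to close under complement and pairwise union.
    """
    algebra = {frozenset(), S} | set(generators)
    while True:
        new = set(algebra)
        for A in algebra:
            new.add(S - A)
            for B in algebra:
                new.add(A | B)
        if new == algebra:
            return algebra
        algebra = new

def measure(mu, A):
    return sum(mu[x] for x in A)

# Two different point-mass measures that agree on every member of C and on S,
# but disagree on {0} (which does not belong to sigma{C}).
mu1 = {0: 0.10, 1: 0.30, 2: 0.20, 3: 0.40}
mu2 = {0: 0.25, 1: 0.15, 2: 0.20, 3: 0.40}

assert all(abs(measure(mu1, A) - measure(mu2, A)) < 1e-12 for A in C)
assert all(abs(measure(mu1, A) - measure(mu2, A)) < 1e-12 for A in sigma(C))
print(len(sigma(C)))
```

The generated $\sigma$-algebra here has atoms $\{0,1\}$, $\{2\}$, $\{3\}$, so it contains 8 sets and cannot separate the points 0 and 1; that is exactly why the two measures may differ on $\{0\}$ while agreeing on all of $\sigma\{\mathcal{C}\}$.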

8.2 Convex Function

We prove Proposition 1.4.1 in this section. Let $-\infty \le a < b \le \infty$ and let $\varphi : (a, b) \to \mathbb{R}$ be a convex function. Then we have the following.


Proposition 8.2.1 (1) If $a < x_1 < x_2 < x_3 < b$, then
$$\frac{\varphi(x_2) - \varphi(x_1)}{x_2 - x_1} \le \frac{\varphi(x_3) - \varphi(x_2)}{x_3 - x_2}.$$
(2) $\dfrac{d^-\varphi}{dx}(x) = \lim_{y \uparrow x} \dfrac{\varphi(x) - \varphi(y)}{x - y}$ and $\dfrac{d^+\varphi}{dx}(x) = \lim_{y \downarrow x} \dfrac{\varphi(y) - \varphi(x)}{y - x}$ exist for any $x \in (a, b)$. Moreover, for any $a < u < x < v < b$,
$$\frac{\varphi(x) - \varphi(u)}{x - u} \le \frac{d^-\varphi}{dx}(x) \le \frac{d^+\varphi}{dx}(x) \le \frac{\varphi(v) - \varphi(x)}{v - x}.$$
(3) $\varphi : (a, b) \to \mathbb{R}$ is continuous.

Proof From the definition of convex functions, we see that for $a < x_1 < x_2 < x_3 < b$
$$((x_3 - x_2) + (x_2 - x_1))\varphi(x_2) = (x_3 - x_1)\varphi\Big(\frac{x_2 - x_1}{x_3 - x_1}x_3 + \frac{x_3 - x_2}{x_3 - x_1}x_1\Big) \le (x_2 - x_1)\varphi(x_3) + (x_3 - x_2)\varphi(x_1).$$
This implies Assertion (1). Also, we see that for $a < x_1 < x_2 < x_3 < b$
$$\frac{\varphi(x_3) - \varphi(x_1)}{x_3 - x_1} = \frac{x_3 - x_2}{x_3 - x_1}\cdot\frac{\varphi(x_3) - \varphi(x_2)}{x_3 - x_2} + \frac{x_2 - x_1}{x_3 - x_1}\cdot\frac{\varphi(x_2) - \varphi(x_1)}{x_2 - x_1}.$$
Therefore we see by Assertion (1) that for $a < u_1 < u_2 \le x \le v_1 < v_2 < b$
$$\frac{\varphi(x) - \varphi(u_1)}{x - u_1} \le \frac{\varphi(x) - \varphi(u_2)}{x - u_2} \le \frac{\varphi(v_1) - \varphi(x)}{v_1 - x} \le \frac{\varphi(v_2) - \varphi(x)}{v_2 - x}.$$
Assertion (2) follows from this by an easy argument. By Assertion (2) we see that $-\infty < \frac{d^-\varphi}{dx}(x) \le \frac{d^+\varphi}{dx}(x) < \infty$ for any $x \in (a, b)$, so the difference quotients of $\varphi$ are bounded near each point, and Assertion (3) follows.

$\lambda_{r+1} = \cdots = \lambda_n = 0$ such that $K = UCU^{-1}$ is an $n \times n$ diagonal matrix whose diagonal elements are $\lambda_1, \dots, \lambda_n$. Then for any $\varepsilon > 0$, $UC(C + \varepsilon I_n)^{-2}U^{-1}$ is an $n \times n$ diagonal matrix whose diagonal elements are $\lambda_1(\lambda_1 + \varepsilon)^{-2}, \dots, \lambda_r(\lambda_r + \varepsilon)^{-2}, \lambda_{r+1}, \dots, \lambda_n$. Therefore $\hat{K} = \lim_{\varepsilon \downarrow 0} UC(C + \varepsilon I_n)^{-2}U^{-1}$ exists and is an $n \times n$ diagonal matrix whose diagonal elements are $\lambda_1^{-1}, \dots, \lambda_r^{-1}, \lambda_{r+1}, \dots, \lambda_n$. So we see that $\hat{C} = \lim_{\varepsilon \downarrow 0} C(C + \varepsilon I_n)^{-2}$ exists and that $\hat{C} = U^{-1}\hat{K}U$. Since $K\hat{K} = \hat{K}K$, $K\hat{K}K = K$, $\hat{K}K\hat{K} = \hat{K}$ and $K\hat{K}$ is an $n \times n$ diagonal matrix, we have our assertion. $\square$

Now let $M_{n,m}$, $n, m \ge 1$, be the set of all $n \times m$ real matrices. Then $M_{n,m}$ is an $nm$-dimensional vector space, and so we identify it with an $nm$-dimensional Euclidean space. In particular, we can consider the Borel algebra $\mathcal{B}(M_{n,m})$ of $M_{n,m}$ and regard $(M_{n,m}, \mathcal{B}(M_{n,m}))$ as a measurable space. Also, we see that $A \in M_{n,m}$ can be regarded as a linear operator from $\mathbb{R}^m$ to $\mathbb{R}^n$ by $x \in \mathbb{R}^m \mapsto Ax \in \mathbb{R}^n$. Here $Ax = (\sum_{j=1}^m a_{ij}x_j)_{i=1,\dots,n}$ for $A = (a_{ij})_{i=1,\dots,n,\ j=1,\dots,m}$ and $x = (x_j)_{j=1,\dots,m}$.

First we recall the following basic fact in linear algebra.

Proposition 8.4.2 Let $A \in M_{n,m}$. Then $(Au, v)_{\mathbb{R}^n} = (u, A^*v)_{\mathbb{R}^m}$ for any $u \in \mathbb{R}^m$ and $v \in \mathbb{R}^n$, and $\ker A^* = (\mathrm{Image}\, A)^\perp$. Here $A^*$ is the transposed matrix of $A$, $\ker A^* = \{v \in \mathbb{R}^n;\ A^*v = 0\}$, $\mathrm{Image}\, A = \{Au;\ u \in \mathbb{R}^m\}$ and $V^\perp = \{u \in \mathbb{R}^n;\ (u, v)_{\mathbb{R}^n} = 0 \text{ for all } v \in V\}$.

Proof It is obvious that $(Au, v)_{\mathbb{R}^n} = (u, A^*v)_{\mathbb{R}^m}$ for any $u \in \mathbb{R}^m$ and $v \in \mathbb{R}^n$. Note that $w \in (\mathrm{Image}\, A)^\perp$ if and only if $(w, Au)_{\mathbb{R}^n} = (A^*w, u)_{\mathbb{R}^m} = 0$ for all $u \in \mathbb{R}^m$. So we see that $\ker A^* = (\mathrm{Image}\, A)^\perp$. $\square$

Proposition 8.4.3 Let us define $\psi_{n,m} : M_{n,m} \to M_{m,n}$ by
$$\psi_{n,m}(A) = \lim_{\varepsilon \downarrow 0} A^*(AA^*)(AA^* + \varepsilon I_n)^{-2}, \qquad A \in M_{n,m}.$$


Then $\psi_{n,m}$ is measurable and the following holds for any $A \in M_{n,m}$.
(1) $A\psi_{n,m}(A)A = A$.
(2) $A\psi_{n,m}(A)$ is an orthogonal projection onto $\mathrm{Image}\, A$ in $\mathbb{R}^n$, i.e., $A\psi_{n,m}(A)$ is an $n \times n$ symmetric real matrix such that $(A\psi_{n,m}(A))^2 = A\psi_{n,m}(A)$ and $\mathrm{Image}(A\psi_{n,m}(A)) = \mathrm{Image}\, A$.
(3) $\psi_{n,m}(A)A$ is an orthogonal projection onto $\mathrm{Image}\, A^*$ in $\mathbb{R}^m$, i.e., $\psi_{n,m}(A)A$ is an $m \times m$ symmetric real matrix such that $(\psi_{n,m}(A)A)^2 = \psi_{n,m}(A)A$ and $\mathrm{Image}(\psi_{n,m}(A)A) = \mathrm{Image}\, A^*$.

Proof Note that $A \mapsto A^*(AA^*)(AA^* + \varepsilon I_n)^{-2}$ is a continuous map from $M_{n,m}$ to $M_{m,n}$. Let $A \in M_{n,m}$ and $C = AA^*$. Then $C$ is an $n \times n$ symmetric real matrix and
$$\psi_{n,m}(A) = \lim_{\varepsilon \downarrow 0} A^*C(C + \varepsilon I_n)^{-2} = A^*\hat{C}.$$

Since $\psi_{n,m}$ is a limit of continuous maps, we see that it is a measurable map. Note that $\ker A = (\mathrm{Image}\, A^*)^\perp$ in $\mathbb{R}^m$. Let $x \in \ker A$ and $y \in (\ker A)^\perp = \mathrm{Image}\, A^*$. Then there is a $z \in \mathbb{R}^n$ such that $y = A^*z$. So we see that
$$A\psi_{n,m}(A)A(x + y) = C\hat{C}AA^*z = C\hat{C}Cz = Cz = AA^*z = A(x + y).$$
This implies Assertion (1).

Since $A\psi_{n,m}(A) = C\hat{C}$, we see that $A\psi_{n,m}(A)$ is a symmetric matrix. By Assertion (1), we see that $(A\psi_{n,m}(A))^2 = A\psi_{n,m}(A)A\psi_{n,m}(A) = A\psi_{n,m}(A)$. So $A\psi_{n,m}(A)$ is an orthogonal projection. Also, by Assertion (1) we see that $\mathrm{Image}(A\psi_{n,m}(A)) = \mathrm{Image}\, A$. Therefore we have Assertion (2).

Note that $(\psi_{n,m}(A)A)^2 = \psi_{n,m}(A)A\psi_{n,m}(A)A = \psi_{n,m}(A)A$. So we see that $\psi_{n,m}(A)A$ is an orthogonal projection. Also, we see that $\mathrm{Image}(\psi_{n,m}(A)A) \subset \mathrm{Image}\, A^* = (\ker A)^\perp$. Since $A(I_m - \psi_{n,m}(A)A) = 0$, we see that $\mathrm{Image}(I_m - \psi_{n,m}(A)A) \subset \ker A$. So we see that $\mathrm{Image}\, A^* = (\ker A)^\perp \subset \mathrm{Image}(\psi_{n,m}(A)A)$. These imply Assertion (3). $\square$

$\psi_{n,m}(A)$ is called a generalized inverse matrix of the matrix $A$.
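The defining limit in Proposition 8.4.3 can be checked numerically: for a small $\varepsilon > 0$, the matrix $A^*(AA^*)(AA^* + \varepsilon I_n)^{-2}$ already satisfies the identity $A\psi_{n,m}(A)A = A$ up to a tiny error. A pure-Python sketch for an illustrative full-rank $2 \times 3$ matrix (the entries are arbitrary test data):

```python
# Numerical check of psi_{n,m}(A) ~ A^T (A A^T)(A A^T + eps I)^{-2} for small eps,
# and of the identity A psi(A) A = A from Proposition 8.4.3.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 1.0]]            # an arbitrary rank-2 test matrix (n=2, m=3)

eps = 1e-9
C = matmul(A, transpose(A))                      # C = A A^T, a 2x2 matrix
Ceps = [[C[i][j] + (eps if i == j else 0.0) for j in range(2)] for i in range(2)]
inv_sq = matmul(inv2(Ceps), inv2(Ceps))          # (C + eps I)^{-2}
psi = matmul(transpose(A), matmul(C, inv_sq))    # ~ psi(A), a 3x2 matrix

ApsiA = matmul(A, matmul(psi, A))                # should reproduce A
err = max(abs(ApsiA[i][j] - A[i][j]) for i in range(2) for j in range(3))
assert err < 1e-6
print(err)
```

Since this $A$ has full row rank, $C$ is invertible and $\psi(A)$ reduces to $A^*C^{-1}$, a right inverse of $A$; the $\varepsilon$-regularization only matters when $AA^*$ is singular.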

8.5 Proof of Theorem 5.6.1

Let $(\Omega, \mathcal{F}, P, \{\mathcal{F}_t\}_{t\in[0,\infty)})$ be a standard filtered probability space, and let $B : [0, \infty) \times \Omega \to \mathbb{R}$ be a 1-dimensional $\{\mathcal{F}_t\}_{t\in[0,\infty)}$ Brownian motion. We make some preparations for the proof of Theorem 5.6.1.

Proposition 8.5.1 Let $T_1 > T_0 \ge 0$. Then there is a D-process $\eta$ satisfying the following.
(1) $\eta(t) = 0$ a.s. for $t \in [0, T_0) \cup [T_1, \infty)$, and $\eta(t)$ is $\sigma\{\sigma\{B(s) - B(T_0);\ s \in [T_0, t]\} \cup \mathcal{N}_0\}$-measurable for $t \in [T_0, T_1)$.
(2) $\int_{T_0}^{T_1} \eta(t)^2\, dt < \infty$ a.s. and
$$\int_0^{T_1} \eta(t)\, dB(t) = 1 \quad \text{a.s.}$$
(3) For any $\lambda > 0$,
$$E\Big[\exp\Big(-\lambda \int_{T_0}^{T_1} \eta(t)^2\, dt\Big)\Big] = \exp(-\sqrt{2\lambda}).$$

Proof Let $\gamma : [0, \infty) \to [0, T_1 - T_0)$ be given by
$$\gamma(t) = (T_1 - T_0) - \frac{T_1 - T_0}{t(T_1 - T_0) + 1}, \qquad t \in [0, \infty).$$
Since
$$\frac{1}{T_1 - T_0 - \gamma(t)} - \frac{1}{T_1 - T_0} = t,$$
we see that
$$\int_0^{\gamma(t)} \frac{1}{(T_1 - T_0 - s)^2}\, ds = t, \qquad t \in [0, \infty).$$
Let $\mathcal{G}_t = \sigma\{\sigma\{B(s) - B(T_0);\ s \in [T_0, T_0 + \gamma(t)]\} \cup \mathcal{N}_0\}$, $t \in [0, \infty)$. Then we see by Theorem 5.1.1 that
$$Z(t) = \int_{T_0}^{T_0 + \gamma(t)} \frac{1}{T_1 - s}\, dB(s), \qquad t \in [0, \infty),$$
is a $\{\mathcal{G}_t\}_{t\in[0,\infty)}$ Brownian motion. Let $\tau = \inf\{t > 0;\ Z(t) = 1\}$. Then by Proposition 5.6.1 we see that $P(\tau < \infty) = 1$ and $E[\exp(-\lambda\tau)] = \exp(-\sqrt{2\lambda})$, $\lambda > 0$. Let
$$\tilde{Z}(t) = \int_0^t 1_{[T_0, T_1)}(s)\frac{1}{T_1 - s}\, dB(s), \qquad t \in [0, T_1).$$
Then we see that $\tilde{Z}(T_0 + \gamma(\tau)) = 1$ and $\tilde{Z}(t) < 1$, $t \in [0, T_0 + \gamma(\tau))$. So we see that
$$T_0 + \gamma(\tau) = \inf\{t \in [0, T_1);\ \tilde{Z}(t) = 1\} \wedge T_1.$$
Therefore we see that $T_0 + \gamma(\tau)$ is an $\{\mathcal{F}_t\}$-stopping time. Now let
$$\eta(t) = 1_{[T_0, T_0 + \gamma(\tau))}(t)\frac{1}{T_1 - t}, \qquad t \in [0, \infty).$$
Then we see that
$$\int_{T_0}^{T_1} \eta(t)^2\, dt = \int_0^{\gamma(\tau)} \frac{1}{(T_1 - T_0 - s)^2}\, ds = \tau.$$

These imply Assertions (1), (2) and (3). $\square$

Now let us prove Theorem 5.6.1. Let $A = \{X = 0\} \in \mathcal{F}_T$. Also, let $Y_t = E[\arctan X \mid \mathcal{F}_t]$ and $Z_t = E[1_A \mid \mathcal{F}_t]$, $t \in [0, \infty)$. Then we see that $-\pi/2 < Y_t < \pi/2$ and $0 \le Z_t \le 1$, $t \in [0, \infty)$, with probability 1. Also, we see that $Y_t \to \arctan X$ and $Z_t \to 1_A$ as $t \uparrow T$ with probability 1. Let $A_t = \{Z_t \ge 1/2\}$. Then we see that $|1_{A_t} - 1_A| \le 2|Z_t - 1_A|$. So we see that $1_{A_t} \to 1_A$ a.s. as $t \uparrow T$. Let $X(t) = 1_{\Omega \setminus A_t} \tan Y_t$, $t \in [0, T]$. Then we see that $X(t) \to X$ a.s. as $t \uparrow T$. So there are $\tilde{T}_n \in [0, T)$ for each $n \ge 1$ such that
$$E[|X(t) - X| \wedge 1] \le 2^{-4n-1} \quad \text{and} \quad E[|1_{A_t} - 1_A|] \le 2^{-n-1}, \qquad t \in [\tilde{T}_n, T).$$
Let
$$T_n = \max\{\tilde{T}_1, \dots, \tilde{T}_n\} + \frac{1}{n}(T - \max\{\tilde{T}_1, \dots, \tilde{T}_n\}).$$
Then we see that $\{T_n\}_{n=1}^\infty$ is a strictly increasing sequence such that $T_n \to T$, $n \to \infty$. Then we see that
$$E[|X(T_{n+1}) - X(T_n)| \wedge 1] \le E[|X(T_{n+1}) - X| \wedge 1] + E[|X(T_n) - X| \wedge 1] \le 2^{-4n}, \qquad n \ge 1.$$
By Proposition 8.5.1, there are D-processes $\eta_n$, $n = 1, 2, \dots$, satisfying the following four conditions.
(i) $\eta_n(t) = 0$, $t \in [0, T_n) \cup [T_{n+1}, \infty)$.
(ii) $\eta_n(t)$ is $\sigma\{\sigma\{B(s) - B(T_n);\ s \in [T_n, t]\} \cup \mathcal{N}_0\}$-measurable for $t \in [T_n, T_{n+1})$.
(iii) $\int_0^T \eta_n(t)^2\, dt < \infty$ a.s. and $\int_0^T \eta_n(t)\, dB(t) = 1$ a.s.
(iv) $E[\exp(-\lambda \int_{T_n}^{T_{n+1}} \eta_n(t)^2\, dt)] = \exp(-\sqrt{2\lambda})$ for any $\lambda > 0$.
Note that


$$E\Big[\Big((X(T_k) - X(T_{k-1}))^2 \int_{T_k}^{T_{k+1}} \eta_k(t)^2\, dt\Big) \wedge 1\Big]$$
$$\le P(|X(T_k) - X(T_{k-1})| \ge 2^{-2k}) + E\Big[\Big(2^{-4k}\int_{T_k}^{T_{k+1}} \eta_k(t)^2\, dt\Big) \wedge 1\Big]$$
$$\le 2^{2k}E[|X(T_k) - X(T_{k-1})| \wedge 1] + 2^{-2k}P\Big(\int_{T_k}^{T_{k+1}} \eta_k(t)^2\, dt < 2^{2k}\Big) + P\Big(\int_{T_k}^{T_{k+1}} \eta_k(t)^2\, dt \ge 2^{2k}\Big)$$
$$\le 2^{-2k+4} + 2^{-2k} + \frac{1}{1 - e^{-1}}\Big(1 - E\Big[\exp\Big(-2^{-2k}\int_{T_k}^{T_{k+1}} \eta_k(t)^2\, dt\Big)\Big]\Big)$$
$$\le 2^{-2k+5} + 2(1 - \exp(-2^{-k+1/2})).$$

So we see that
$$\sum_{k=2}^\infty E\Big[\Big((X(T_k) - X(T_{k-1}))^2 \int_{T_k}^{T_{k+1}} \eta_k(t)^2\, dt\Big) \wedge 1\Big] < \infty.$$
Therefore by Proposition 1.1.10 we see that
$$\sum_{k=2}^\infty (X(T_k) - X(T_{k-1}))^2 \int_{T_k}^{T_{k+1}} \eta_k(t)^2\, dt < \infty \quad \text{a.s.}$$

Now let
$$\xi_m(t) = X(T_m)\eta_m(t) + \sum_{k=m+1}^\infty (X(T_k) - X(T_{k-1}))\eta_k(t), \qquad t \in [0, \infty)$$
for each $m \ge 1$. Then we see that $\xi_m$, $m \ge 1$, are progressively measurable and that
$$\int_0^\infty \xi_m(t)^2\, dt = X(T_m)^2 \int_{T_m}^{T_{m+1}} \eta_m(t)^2\, dt + \sum_{k=m+1}^\infty (X(T_k) - X(T_{k-1}))^2 \int_{T_k}^{T_{k+1}} \eta_k(t)^2\, dt < \infty \quad \text{a.s.},\ m \ge 1.$$
Moreover, we see that


$$\int_0^T \xi_m(t)\, dB(t) = \lim_{n\to\infty} \int_0^{T_n} \xi_m(t)\, dB(t)$$
$$= \lim_{n\to\infty}\Big(X(T_m)\int_0^{T_{m+1}} \eta_m(t)\, dB(t) + \sum_{k=m+1}^{n-1}(X(T_k) - X(T_{k-1}))\int_0^{T_{k+1}} \eta_k(t)\, dB(t)\Big)$$
$$= \lim_{n\to\infty} X(T_n) = X.$$

By the definition of $\xi_m$ we see that
$$\Big\{\int_0^T \xi_m(t)^2\, dt = 0\Big\} \supset \{X(T_m) = 0\} \cap \bigcap_{k=m+1}^\infty \{X(T_k) - X(T_{k-1}) = 0\} \supset \bigcap_{k=m}^\infty A_{T_k}.$$
So we see that
$$P\Big(A \setminus \Big\{\int_0^T \xi_m(t)^2\, dt = 0\Big\}\Big) \le \sum_{k=m}^\infty E[|1_{A_{T_k}} - 1_A|] \le 2^{-m}.$$

Since $m$ is arbitrary, we have our theorem. $\square$

Remark Let $\mathcal{F}_t = \sigma\{\sigma\{B(s);\ s \in [0, t]\} \cup \mathcal{N}_0\}$. Let $f(x) = -1_{(-\infty,-1)}(x) + 1_{(1,\infty)}(x)$, $x \in \mathbb{R}$, and let $X = f(B(1))$. Since $E[X] = 0$, we see by Theorem 5.5.1 that there uniquely exists a progressively measurable process $\eta : [0, \infty) \times \Omega \to \mathbb{R}$ such that $E[\int_0^1 \eta(t)^2\, dt] < \infty$ and
$$E[X \mid \mathcal{F}_t] = \int_0^t \eta(s)\, dB(s), \qquad t \in [0, 1].$$
Then by Proposition 1.3.7 we see that $E[X \mid \mathcal{F}_t] = E[f(B(1) - B(t) + x)]|_{x=B(t)}$, and
$$E[f(B(1) - B(t) + x)] = \Big(\frac{1}{2\pi(1 - t)}\Big)^{1/2} \int_1^\infty \Big(\exp\Big(-\frac{(z - x)^2}{2(1 - t)}\Big) - \exp\Big(-\frac{(z + x)^2}{2(1 - t)}\Big)\Big) dz$$
for any $t \in (0, 1)$. Therefore we see that $P(E[X \mid \mathcal{F}_t] \neq 0) = 1$ for any $t \in (0, 1)$. So by Proposition 4.1.7 we see that
$$P\Big(\int_0^1 \eta(t)^2\, dt = 0\Big) = 0.$$
Since $P(X = 0) > 0$, we see that this $\eta$ is quite different from the $\xi$ in Theorem 5.6.1.


8.6 Gronwall's Inequality

The following proposition is called Gronwall's inequality.

Proposition 8.6.1 Let $T > 0$ and $a, b \in [0, \infty)$. If an integrable Borel measurable function $f : [0, T] \to [0, \infty)$ satisfies
$$f(t) \le a + b\int_0^t f(s)\, ds, \qquad t \in [0, T],$$
then
$$f(t) \le a\exp(bt), \qquad t \in [0, T].$$

Proof Let $M = a + b\int_0^T f(s)\, ds < \infty$. Then from the assumption we see that $f(t) \le M$, $t \in [0, T]$. We will prove the following claim by induction.

Claim. For any $n \ge 1$,
$$f(t) \le a\sum_{k=0}^{n-1}\frac{(bt)^k}{k!} + M\frac{(bt)^{n-1}}{(n-1)!}, \qquad t \in [0, T].$$

We see that the claim holds in the case $n = 1$ from the assumption. Suppose that the claim holds in the case $n = m$. Then we see that
$$f(t) \le a + b\int_0^t \Big(a\sum_{k=0}^{m-1}\frac{(bs)^k}{k!} + M\frac{(bs)^{m-1}}{(m-1)!}\Big) ds = a + a\sum_{k=0}^{m-1}\frac{(bt)^{k+1}}{(k+1)!} + M\frac{(bt)^m}{m!}.$$
So the claim holds in the case $n = m + 1$. Letting $n \to \infty$, we see that
$$f(t) \le a\sum_{k=0}^\infty \frac{(bt)^k}{k!} = a\exp(bt), \qquad t \in [0, T].$$
This proves our assertion. $\square$
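The induction in the proof can be replayed mechanically with polynomial coefficients: iterating $g \mapsto a + b\int_0^t g(s)\,ds$ starting from the constant bound $a + M$ (the Claim for $n = 1$) reproduces exactly the bounds in the Claim, and these converge to $a e^{bt}$. A short sketch; $a$, $b$, $T$, $M$ are arbitrary test values:

```python
# Replaying the Gronwall induction with polynomial coefficients: each iteration
# g -> a + b * integral(g) yields the Claim's bound, whose coefficients are
# a*b^k/k! for k < n, with the a- and M-terms combined in the top coefficient.
import math

a, b, T, M = 1.0, 2.0, 1.0, 10.0      # arbitrary test values

def iterate(coeffs):
    """g(t) = sum coeffs[k] * t^k  ->  a + b * int_0^t g(s) ds, in coefficients."""
    return [a] + [b * c / (k + 1) for k, c in enumerate(coeffs)]

g = [a + M]                           # the Claim for n = 1: f(t) <= a + M
for n in range(1, 31):
    g = iterate(g)
    # claimed form after n steps: a * sum_{k<n} (bt)^k/k! + (a + M)(bt)^n/n!
    claim = ([a * b**k / math.factorial(k) for k in range(n)]
             + [(a + M) * b**n / math.factorial(n)])
    assert all(abs(x - y) < 1e-9 for x, y in zip(g, claim))

value_at_T = sum(c * T**k for k, c in enumerate(g))
assert abs(value_at_T - a * math.exp(b * T)) < 1e-9   # bounds tend to a*exp(bT)
print(value_at_T)
```

The leftover $M$-term has coefficient $(a+M)b^n/n!$, which vanishes as $n \to \infty$; that is exactly why the constant $M$ disappears from the final bound $a e^{bt}$.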

References

1. Delbaen, F.: Representing martingale measures when asset prices are continuous and bounded. Math. Financ. 2, 107-130 (1992)
2. Doob, J.L.: Stochastic Processes. Wiley, New York (1953)
3. Dudley, R.M.: Wiener functionals as Itô integrals. Ann. Prob. 5, 140-141 (1977)
4. Ikeda, N., Watanabe, S.: Stochastic Differential Equations and Diffusion Processes (North-Holland Mathematical Library). North-Holland, Amsterdam (1981)
5. Itô, K.: Differential equations determining a Markoff process (original Japanese: Zenkoku Sizyo Sugaku Danwakai-si). J. Pan-Japan Math. Coll. no. 1077 (1942). English translation in Selected Papers (Springer Collected Works in Mathematics). Springer, Berlin (1987)
6. Karatzas, I., Shreve, S.: Brownian Motion and Stochastic Calculus (Graduate Texts in Mathematics), 2nd corrected edn. Springer, Berlin (1998)
7. Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion (Grundlehren der mathematischen Wissenschaften), 3rd edn. Springer, Berlin (1999)
8. Rogers, L.C.G., Williams, D.: Diffusions, Markov Processes, and Martingales. Volume 1: Foundations (Cambridge Mathematical Library), 2nd edn. Cambridge University Press, Cambridge (2000)
9. Williams, D.: Probability with Martingales. Cambridge University Press, Cambridge (1991)


Index

A  Adapted, 26; Adapted continuous process with finite variation, 66; Adapted non-decreasing continuous process, 66; Admissible, 186, 190; American derivative, 199; Arbitrage, 191

B  Borel algebra, 1

C  Complete, 43, 198; Continuous local martingale, 66; Continuous process, 44; Continuous semi-martingale, 95; Convergence in L^p-sense, 10; Convergence in probability, 7; Convergence with probability one, 7; Convex function, 204

D  Discounted value, 181; D-modification, 46; Doob's inequality, 23; Doob-Meyer decomposition, 55; D-process, 44

E  Euler-Maruyama approximation, 136; Expectation, 3

F  Filtration, 21

G  Generalized inverse matrix, 209; Girsanov's theorem, 114; Gronwall's inequality, 214; Growth condition, 146

I  Independent, 5; Integrable, 3; Itô's formula, 103; Itô's stochastic differential equation, 136

L  Lévy's theorem, 39; Local Lipschitz continuous condition, 143; Local time, 134

M  Martingale, 21; Martingale transformation, 28

N  Non-arbitrage, 191; Novikov's theorem, 117; Numeraire, 181

P  π-system, 4, 203; Portfolio strategy, 180; Predictable, 26

R  Random variable, 2; Regular, 59; Replication cost, 184; Replication strategy, 184

S  Self-financing strategy, 181; Sequence of non-decreasing stopping times, 66; σ-algebra generated by C, 1; Square integrable martingale, 23; Standard condition, 43; Standard filtered probability space, 43; Stochastic differential equation, 148; Stopping time, 25, 48; Submartingale, 21; Sub-σ-algebra, 2; Suicide strategy, 188; Supermartingale, 21

U  Uniformly integrable, 35; Upcrossing number, 31

W  Weak compactness, 206

X  X-integrable, 95