

Dmitrii S. Silvestrov American-Type Options

De Gruyter Studies in Mathematics

Edited by
Carsten Carstensen, Berlin, Germany
Nicola Fusco, Napoli, Italy
Fritz Gesztesy, Columbia, Missouri, USA
Niels Jacob, Swansea, United Kingdom
Karl-Hermann Neeb, Erlangen, Germany

Volume 57

Dmitrii S. Silvestrov

American-Type Options Stochastic Approximation Methods Volume 2

DE GRUYTER

Mathematics Subject Classification 2010: 91G20, 91G60, 60G40, 60J05, 60J10, 60G40, 60G15, 60J22, 60J10, 60G15, 65C40, 62L15

Author
Prof. Dr. Dmitrii S. Silvestrov
Stockholm University
Department of Mathematics
SE-106 91 Stockholm
Sweden
[email protected]

ISBN 978-3-11-032968-1
e-ISBN (PDF) 978-3-11-032984-1
e-ISBN (EPUB) 978-3-11-038990-6
Set-ISBN 978-3-11-032985-8
ISSN 0179-0986

Library of Congress Cataloging-in-Publication Data
A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2015 Walter de Gruyter GmbH, Berlin/Munich/Boston
Printing and binding: CPI books GmbH, Leck
♾ Printed on acid-free paper
Printed in Germany
www.degruyter.com

Preface

This book is the 2nd volume of the monograph on stochastic approximation methods for American-type options. The 1st volume, Silvestrov (2014), was devoted to stochastic approximation methods for models of American-type options with general pay-off functions, for discrete time Markov log-price processes. In the 2nd volume, we systematically present stochastic approximation methods for American-type options with general pay-off functions, for continuous time Markov log-price processes.

The principal novelty of our studies is that we systematically consider continuous time multivariate modulated Markov log-price processes and general pay-off functions, which can depend not only on the log-price but also on an additional stochastic modulating index component with a general phase space. We also impose minimal conditions of smoothness on transition probabilities and pay-off functions, minimal moment compactness conditions on log-price processes, and restrictions on the rate of growth of pay-off functions in the log-price arguments.

The book contains ten chapters.

In Chapters 1 and 2, we present results about reward approximations for American-type options with general pay-off functions, for autoregressive and autoregressive stochastic volatility log-price processes. These chapters are included in the 2nd volume in order to balance the sizes of the two volumes of the monograph.

In Chapter 3, we introduce models of multivariate modulated Markov log-price processes and consider the main examples of such processes, including log-price processes with independent increments and diffusion log-price processes; define the basic objects connected with American-type options, namely, reward functions, optimal expected rewards and optimal stopping times; and present some typical examples of options such as call and put options, exchange-of-assets options, and their portfolios.

In Chapter 4, we give upper bounds for reward functions and moments of random rewards for American-type options with general pay-off functions, which have no more than a polynomial rate of growth in the price arguments, for multivariate modulated Markov log-price processes. We also specify these results for log-price processes with independent increments, diffusion log-price processes and their time-skeleton, martingale, and trinomial-tree approximations, as well as for mean-reverse diffusion log-price processes.

In Chapter 5, we give asymptotically uniform explicit upper bounds for the distance between optimal expected rewards for American-type options with general pay-off functions, for multivariate modulated Markov log-price processes, and the corresponding optimal expected rewards for embedded discrete time modulated Markov log-price processes based on skeleton partitions of the time interval.


These upper bounds play a key role in obtaining the results about convergence of the reward approximation algorithms studied in this book.

In Chapter 6, we present results about convergence of time-space-skeleton reward approximations, for continuous time multivariate modulated Markov log-price processes, approximated, in this case, by atomic Markov chains whose transition probabilities and initial distributions are concentrated on finite sets of skeleton points. These results are essentially used for obtaining convergence of rewards for continuous time models. At the same time, they have their own value, since American-type options for the approximating embedded discrete time log-price processes can be interpreted as Bermudan-type options for the corresponding continuous time log-price processes.

In Chapters 7 and 8, we present our main convergence results for rewards of American-type options with general perturbed pay-off functions, for perturbed multivariate modulated Markov log-price processes. It is important that we impose minimal conditions of smoothness on the limiting transition probabilities and pay-off functions. For the basic case, where the transition probabilities have densities with respect to some pivotal Lebesgue-type measure, it is usually required that the sets of weak discontinuity for the limiting transition probabilities are zero sets with respect to the above pivotal measures. As far as pay-off functions are concerned, we impose on them Lipschitz-type conditions, which are weaker than the conditions involving derivatives of pay-off functions usually used in integro-differential approximation methods. Chapter 7 presents convergence results for general multivariate modulated Markov log-price processes and multivariate log-price processes with independent increments, including Gaussian processes with independent increments. Chapter 8 presents convergence results for diffusion log-price processes and their skeleton, martingale, and trinomial-tree approximations, as well as some results about reward approximations for mean-reverse diffusion log-price processes.

In Chapter 9, we show how results about reward approximations for American-type options can be applied to simpler European-type options and to more complex knockout American-type options; present reward convergence results for the American-type options which appear in the model of reselling of European options; and introduce and present results about convergence of rewards for American-type options with random pay-off.

In Chapter 10, we present some results of experimental numerical studies of reward stochastic approximation algorithms based on the theoretical results for discrete and continuous time models of American-type options presented in the 1st and 2nd volumes of the book.

We should also point out that the plan of the 2nd volume, announced in the 1st volume, was slightly changed. It was decided to concentrate the presentation on results related to stochastic approximation methods for rewards of American-type options, which play a central role in our research studies. Some results that are interesting


in their own right, concerning optimal stopping times, in particular the multi-threshold structure of optimal stopping domains, and Monte Carlo reward approximation algorithms, were excluded in order to satisfy requirements concerning the size of the book. However, we give references to papers containing the corresponding results.

The bibliography, which contains more than 300 references, is also preceded by brief remarks. It should be noted that the bibliography for the 2nd volume just supplements the bibliography for the 1st volume, which contains about 600 references.

The presentation of material in the present book is organized in a way that will hopefully be appreciated by readers. Each chapter has a preamble, in which the main results are outlined and the content of the chapter by sections is presented. Each section is broken up into titled subsections.

I would also like to comment on the notation system used in the book. Throughout the text I make use of several basic classes of conditions. Conditions that belong to a specific class are denoted by the same letter. For example, the letter B is used for conditions imposed on pay-off functions, the letter C for moment compactness conditions imposed on log-price processes, and so forth. Conditions belonging to a specific class have subscripts numbering the conditions in the class. A list of all conditions is given in an index. Chapters and sections have, respectively, a single and a double numeration. Formulas also have a double numeration. For example, label (1.2) refers to formula 2 in Chapter 1. Subsections, theorems, lemmas, definitions and remarks have a triple numeration. For example, Theorem 1.2.3 means Theorem 3 in Section 1.2. References to chapters, sections, lemmas, theorems, formulas or conditions from the 1st volume are indexed by the symbol ∗; for example, the symbol B∗ refers to condition B in the 1st volume, and the reference Theorem 1.2.3∗ means Theorem 3 in Section 1.2 of the 1st volume of the book.

I hope that the publication of this 2nd volume of the monograph, with its additional bibliography of works related to problems of stochastic approximation methods for American-type options and related problems, will be, together with the 1st volume, a useful contribution to the continuing intensive studies in this active area of financial mathematics. In addition to its employment for research and reference purposes, the book can be used in special courses related to stochastic approximation methods and option pricing and as complementary reading in general courses on stochastic processes. In this respect, it may be useful for specialists as well as doctoral and advanced undergraduate students.

I am much indebted to Dr. Evelina Silvestrova for her continuous encouragement and support of various aspects of my work on this book. I would like to thank my collaborators at different stages of my research studies in the area, Professor Alexander Kukush, Professor Anatoliy Malyarenko, Professor Raimondo Manca, Dr. Henrik Jönson, Dr. Robin Lundgren, Dr. Evelina Silvestrova, and Dr. Fredrik Srenberg, for the fruitful cooperation.


I would also like to thank all my colleagues at the Department of Mathematics, Stockholm University, and at the School of Education, Culture and Communication, Mälardalen University, for creating the inspiring research environment and friendly atmosphere which so stimulated my work. I would also like to thank the Riksbankens Jubileumsfond for the important and stimulating support of my research in this area and of the work on this book.

Stockholm, September 2014

Dmitrii Silvestrov

Contents

Preface v

1 Reward approximations for autoregressive log-price processes (LPP) 1
1.1 Markov Gaussian LPP 2
1.2 Autoregressive LPP 11
1.3 Autoregressive moving average LPP 22
1.4 Modulated Markov Gaussian LPP 33
1.5 Modulated autoregressive LPP 43
1.6 Modulated autoregressive moving average LPP 58

2 Reward approximations for autoregressive stochastic volatility LPP 74
2.1 Nonlinear autoregressive stochastic volatility LPP 75
2.2 Autoregressive conditional heteroskedastic LPP 87
2.3 Generalized autoregressive conditional heteroskedastic LPP 100
2.4 Modulated nonlinear autoregressive stochastic volatility LPP 113
2.5 Modulated autoregressive conditional heteroskedastic LPP 127
2.6 Modulated generalized autoregressive conditional heteroskedastic LPP 142

3 American-type options for continuous time Markov LPP 157
3.1 Markov LPP 157
3.2 LPP with independent increments 162
3.3 Diffusion LPP 167
3.4 American-type options for Markov LPP 171

4 Upper bounds for option rewards for Markov LPP 180
4.1 Upper bounds for rewards for Markov LPP 180
4.2 Asymptotically uniform conditions of compactness for Markov LPP 194
4.3 Upper bounds for rewards for LPP with independent increments 206
4.4 Upper bounds for rewards for diffusion LPP 215
4.5 Upper bounds for rewards for mean-reverse diffusion LPP 239

5 Time-skeleton reward approximations for Markov LPP 256
5.1 Lipschitz-type conditions for pay-off functions 257
5.2 Time-skeleton approximations for optimal expected rewards 270
5.3 Time-skeleton approximations for reward functions 287
5.4 Time-skeleton reward approximations for LPP with independent increments 294
5.5 Time-skeleton reward approximations for diffusion LPP 298

6 Time-space-skeleton reward approximations for Markov LPP 308
6.1 Time-space-skeleton reward approximations for Markov LPP 309
6.2 Time-space-skeleton reward approximations for LPP with independent increments 329
6.3 Time-space-skeleton reward approximations for diffusion LPP 354

7 Convergence of option rewards for continuous time Markov LPP 371
7.1 Convergence of rewards for continuous time Markov LPP 372
7.2 Convergence of rewards for LPP with independent increments 376
7.3 Convergence of rewards for univariate Gaussian LPP with independent increments 391
7.4 Convergence of rewards for multivariate Gaussian LPP with independent increments 405

8 Convergence of option rewards for diffusion LPP
8.1 Convergence of rewards for time-skeleton approximations of diffusion LPP
8.2 Convergence of rewards for martingale-type approximations of diffusion LPP
8.3 Convergence of rewards for trinomial-tree approximations of diffusion LPP
8.4 Rewards approximations for mean-reverse diffusion LPP

9 European, knockout, reselling and random pay-off options 456
9.1 Reward approximations for European-type options 457
9.2 Reward approximations for knockout American-type options 463
9.3 Reward approximations for reselling options 469
9.4 Reward approximations for American-type options with random pay-off 484

10 Results of experimental studies 496
10.1 Binomial- and trinomial-tree reward approximations for discrete time models 496
10.2 Skeleton reward approximations for discrete time models
10.3 Reward approximations for continuous time models
10.4 Reward approximation algorithms for Markov LPP

Bibliographical Remarks 526
Bibliography 531
Index 549





1 Reward approximations for autoregressive log-price processes (LPP)

In Chapter 1, we present results about space-skeleton approximations for rewards of American-type options for autoregressive log-price processes with Gaussian noise terms. These approximations are based on approximations of rewards for autoregressive log-price processes with Gaussian noise terms by the corresponding rewards of American-type options for atomic autoregressive-type Markov chains, whose transition probabilities and initial distributions are concentrated on finite sets of skeleton points. The rewards for the approximating atomic Markov chains can be effectively computed using the backward recurrence relations presented in Chapter 3∗. The space-skeleton reward approximations also require a special fitting of the transition probabilities and initial distributions of the approximating processes to the corresponding transition probabilities and initial distributions of the approximated processes. Convergence of the approximating rewards can be proven using the general convergence results presented in Chapters 5∗–8∗.

In Section 1.1, we give results about convergence of space-skeleton approximations for rewards of American-type options for multivariate Markov Gaussian log-price processes with linear drift and bounded volatility coefficients. In Section 1.2, we give results about convergence of space-skeleton approximations for rewards of American-type options for autoregressive log-price processes with Gaussian noise terms. In Section 1.3, we give results about convergence of space-skeleton approximations for rewards of American-type options for autoregressive moving average log-price processes with Gaussian noise terms. In Section 1.4, we give results about convergence of space-skeleton approximations for rewards of American-type options for multivariate modulated Markov Gaussian log-price processes with linear drift and bounded volatility coefficients. In Section 1.5, we give results about convergence of space-skeleton approximations for rewards of American-type options for modulated autoregressive log-price processes with Gaussian noise terms. In Section 1.6, we give results about convergence of space-skeleton approximations for rewards of American-type options for modulated autoregressive moving average log-price processes with Gaussian noise terms.

The main results, presented in Theorems 1.1.1–1.6.2, are new.


1.1 Markov Gaussian LPP

In this section, we present results about space-skeleton approximations for rewards of American-type options for Markov Gaussian log-price processes with linear drift and constant diffusion coefficients.

1.1.1 Upper bounds for rewards of Markov Gaussian log-price processes with linear drift and bounded volatility

Here and henceforth, we interpret all vectors as column vectors even if they are written as row vectors.

Let us consider multivariate log-price processes with Gaussian noise terms defined by the following vector stochastic difference equation,
$$\vec Y_{0,n} - \vec Y_{0,n-1} = \vec\lambda_n + \Lambda_n \vec Y_{0,n-1} + \Sigma_n \vec W^0_n, \quad n = 1, 2, \ldots, \tag{1.1}$$
where: (a) $\vec Y_{0,0} = (Y_{0,0,1}, \ldots, Y_{0,0,k})$ is a random vector taking values in the space $\mathbb R^k$; (b) $\vec W^0_n = (W^0_{n,1}, \ldots, W^0_{n,k})$, $n = 1, 2, \ldots$ is a sequence of $k$-dimensional i.i.d. random vectors, which have a standard multivariate normal distribution with $\mathsf E W^0_{1,i} = 0$, $\mathsf E W^0_{1,i} W^0_{1,j} = I(i = j)$, $i, j = 1, \ldots, k$; (c) the random vector $\vec Y_{0,0}$ and the random sequence $\vec W^0_n$, $n = 1, 2, \ldots$ are independent; (d) $\vec\lambda_n = (\lambda_{n,1}, \ldots, \lambda_{n,k})$, $n = 1, 2, \ldots$ are $k$-dimensional vectors with real-valued components; (e) $\Lambda_n = \|\lambda_{n,i,j}\|$, $n = 1, 2, \ldots$ and $\Sigma_n = \|\sigma_{n,i,j}\|$, $n = 1, 2, \ldots$ are $k \times k$ matrices with real-valued elements.

This model is a particular case of the multivariate modulated Gaussian random walk with drift coefficient $\vec\mu_{0,n}(\vec y, x)$ and volatility coefficient $\Sigma_{0,n}(\vec y, x)$ introduced in Sections 1.2∗ and 4.5∗. In this case the modulating index is absent and, therefore, the drift coefficients $\vec\mu_{0,n}(\vec y, x) \equiv \vec\mu_n(\vec y) = (\mu_{n,i}(\vec y), i = 1, \ldots, k)$ and the volatility coefficients $\Sigma_{0,n}(\vec y, x) \equiv \Sigma_n(\vec y) = \|\sigma_{n,i,j}(\vec y)\|$ do not depend on the index argument $x$ and have the specific form, namely, for $\vec y \in \mathbb R^k$, $n = 1, 2, \ldots$,
$$\vec\mu_n(\vec y) = \vec\lambda_n + \Lambda_n \vec y, \qquad \Sigma_n(\vec y) = \Sigma_n. \tag{1.2}$$
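As a simple numerical illustration, the recursion (1.1) can be simulated directly. The following Python sketch does this for a small example; the dimension, time horizon and coefficient values are illustrative assumptions only.

```python
# A minimal sketch of simulating the recursion (1.1):
# Y_n - Y_{n-1} = lambda_n + Lambda_n Y_{n-1} + Sigma_n W_n, n = 1, ..., N.
# The concrete coefficient values below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
k, N = 2, 5                                                # dimension and number of steps
lam = [np.array([0.01, 0.0]) for _ in range(N)]            # vectors lambda_n
Lam = [np.array([[-0.1, 0.0], [0.0, -0.2]]) for _ in range(N)]   # matrices Lambda_n
Sig = [0.3 * np.eye(k) for _ in range(N)]                  # matrices Sigma_n

def simulate_path(y0):
    """Return the path Y_0, ..., Y_N of the Markov Gaussian log-price process (1.1)."""
    path = [np.asarray(y0, dtype=float)]
    for n in range(N):
        w = rng.standard_normal(k)                         # standard Gaussian noise W_{n+1}
        y_prev = path[-1]
        path.append(y_prev + lam[n] + Lam[n] @ y_prev + Sig[n] @ w)
    return np.array(path)

print(simulate_path([0.0, 0.0]))
```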

Lemma 1.1.1. Let the Markov Gaussian log-price process $\vec Y_{0,n}$ be given by the stochastic difference equation (1.1). In this case, condition G4∗ (for the model without index component) holds, i.e., there exist constants $0 < K_1 < \infty$ and $0 \le K_{2,l} < \infty$, $l = 1, \ldots, k$ such that
$$\max_{0 \le n \le N-1,\; i,j = 1, \ldots, k}\ \sup_{\vec y \in \mathbb R^k} \frac{|\mu_{n+1,i}(\vec y)| + \sigma^2_{n+1,i,j}(\vec y)}{1 + \sum_{l=1}^k K_{2,l} |y_l|} < K_1. \tag{1.3}$$


Proof. The following inequality obviously holds, for every $\vec y \in \mathbb R^k$, $i, j = 1, \ldots, k$, $n = 0, \ldots, N-1$,
$$|\mu_{n+1,i}(\vec y)| + \sigma^2_{n+1,i,j}(\vec y) = \Big|\lambda_{n+1,i} + \sum_{l=1}^k \lambda_{n+1,i,l}\, y_l\Big| + \sigma^2_{n+1,i,j}$$

$$|\phi_{0,n}(\vec y)| \le M_1 + \sum_{i:\, \gamma_i = 0} M_{2,i} + \sum_{i:\, \gamma_i > 0} M_{2,i} \exp\Big\{\Big(\sum_{j=1}^k A_{N-n,j}(\bar\beta_i)\, |y_j|\Big)\frac{\gamma_i}{\beta_i}\Big\}. \tag{1.9}$$

Remark 1.1.1. The explicit formulas for the constants $M_1$ and $M_{2,i} = M_{2,i}(\beta_i)$, $i = 1, \ldots, k$ take, according to the formulas given in Remark 4.5.5∗, the following form,


$$M_1 = L_1, \qquad M_{2,i}(\beta_i) = L_1 L_{2,i}\, I(\gamma_i = 0) + L_1 L_{2,i} \Big(1 + 2k\, e^{K_1 \sum_{l=1}^k (A_{N-1,l}(\bar\beta_i) + \frac{1}{2} k^2 A_{N-1,l}(\bar\beta_i))\, N}\Big)^{\frac{\gamma_i}{\beta_i}} I(\gamma_i > 0). \tag{1.10}$$

In this case the optimal expected reward is defined by the formula,
$$\Phi_0 = \sup_{\tau_{0,0} \in \mathcal M^{(0)}_{\max,0,N}} \mathsf E\, g(\tau_{0,0}, e^{\vec Y_{0,\tau_{0,0}}}) = \mathsf E\, \phi_0(\vec Y_{0,0}). \tag{1.11}$$

In this case, condition D6[β̄]∗ formulated in Section 4.5∗ should be replaced by the following condition assumed to hold for a vector parameter β̄ = (β₁, …, β_k) with non-negative components:

D1[β̄]: $\mathsf E \exp\{\sum_{j=1}^k A_{N,j}(\bar\beta_i)\, |Y_{0,0,j}|\} < K_{3,i}$, $i = 1, \ldots, k$, for some $1 < K_{3,i} < \infty$, $i = 1, \ldots, k$.

Theorem 4.5.4∗ takes in this case the following form.

Theorem 1.1.2. Let the Markov Gaussian log-price process $\vec Y_{0,n}$ be given by the vector stochastic difference equation (1.1). Let also conditions B1[γ̄] and D1[β̄] hold for some vector parameters γ̄ = (γ₁, …, γ_k) and β̄ = (β₁, …, β_k) such that $0 \le \gamma_i \le \beta_i < \infty$, $i = 1, \ldots, k$. Then, there exists a constant $0 \le M_3 < \infty$ such that the following inequality takes place,
$$|\Phi_0| \le M_3. \tag{1.12}$$

Remark 1.1.2. The explicit formula for the constant $M_3$ takes, according to the formulas given in Remark 4.5.6∗, the following form,
$$M_3 = L_1 + \sum_{i:\, \gamma_i = 0} L_1 L_{2,i} + \sum_{i:\, \gamma_i > 0} L_1 L_{2,i} \Big(1 + 2k\, e^{K_1 \sum_{l=1}^k (A_{N-1,l}(\bar\beta_i) + \frac{1}{2} k^2 A_{N-1,l}(\bar\beta_i))\, N}\Big)^{\frac{\gamma_i}{\beta_i}} K_{3,i}^{\frac{\gamma_i}{\beta_i}}. \tag{1.13}$$

1.1.2 Space-skeleton approximations for option rewards of Markov Gaussian log-price processes with linear drift and bounded volatility coefficients

The Markov process $\vec Y_{0,n}$ has the Gaussian transition probabilities $P_{0,n}(\vec y, A) = \mathsf P\{\vec Y_{0,n} \in A / \vec Y_{0,n-1} = \vec y\}$ defined for $\vec y \in \mathbb R^k$, $A \in \mathcal B_k$, $n = 1, 2, \ldots$ by the following relation,
$$P_{0,n}(\vec y, A) = \mathsf P\{\vec y + \vec\lambda_n + \Lambda_n \vec y + \Sigma_n \vec W^0_n \in A\}. \tag{1.14}$$

Let us construct the corresponding space-skeleton approximating processes $\vec Z_{\varepsilon,n}$, $n = 0, 1, \ldots$, for $\varepsilon \in (0, \varepsilon_0]$, according to the algorithm described in Subsection 8.4.2∗.


Let $m^-_{\varepsilon,n,i} \le m^+_{\varepsilon,n,i}$, $i = 1, \ldots, k$, $n = 0, 1, \ldots$ be integer numbers, $\delta_{\varepsilon,n,i} > 0$ and $\lambda_{\varepsilon,n,i} \in \mathbb R_1$, $i = 1, \ldots, k$, $n = 0, 1, \ldots$. Let us also define index sets $\mathbb L_{\varepsilon,n}$, $n = 0, 1, \ldots$,
$$\mathbb L_{\varepsilon,n} = \{\bar l = (l_1, \ldots, l_k),\ l_i = m^-_{\varepsilon,n,i}, \ldots, m^+_{\varepsilon,n,i},\ i = 1, \ldots, k\}. \tag{1.15}$$

First, the skeleton intervals $I_{\varepsilon,n,i,l}$ should be constructed for $l_i = m^-_{\varepsilon,n,i}, \ldots, m^+_{\varepsilon,n,i}$, $i = 1, \ldots, k$, $n = 0, 1, \ldots$,
$$I_{\varepsilon,n,i,l} = \begin{cases} (-\infty,\ \delta_{\varepsilon,n,i}(m^-_{\varepsilon,n,i} + \tfrac{1}{2})] + \lambda_{\varepsilon,n,i} & \text{if } l = m^-_{\varepsilon,n,i}, \\ (\delta_{\varepsilon,n,i}(l - \tfrac{1}{2}),\ \delta_{\varepsilon,n,i}(l + \tfrac{1}{2})] + \lambda_{\varepsilon,n,i} & \text{if } m^-_{\varepsilon,n,i} < l < m^+_{\varepsilon,n,i}, \\ (\delta_{\varepsilon,n,i}(m^+_{\varepsilon,n,i} - \tfrac{1}{2}),\ \infty) + \lambda_{\varepsilon,n,i} & \text{if } l = m^+_{\varepsilon,n,i}. \end{cases} \tag{1.16}$$

Then the skeleton cubes $\hat I_{\varepsilon,n,\bar l}$ should be constructed for $\bar l \in \mathbb L_{\varepsilon,n}$, $n = 0, 1, \ldots$,
$$\hat I_{\varepsilon,n,\bar l} = I_{\varepsilon,n,1,l_1} \times \cdots \times I_{\varepsilon,n,k,l_k}. \tag{1.17}$$

Second, the skeleton points $y_{\varepsilon,n,i,l}$ should be defined for $l = m^-_{\varepsilon,n,i}, \ldots, m^+_{\varepsilon,n,i}$, $i = 1, \ldots, k$, $n = 0, 1, \ldots$,
$$y_{\varepsilon,n,i,l} = l\,\delta_{\varepsilon,n,i} + \lambda_{\varepsilon,n,i}, \tag{1.18}$$
and the vector skeleton points $\vec y_{\varepsilon,n,\bar l}$ should be defined for $\bar l \in \mathbb L_{\varepsilon,n}$, $n = 0, 1, \ldots$,
$$\vec y_{\varepsilon,n,\bar l} = (y_{\varepsilon,n,1,l_1}, \ldots, y_{\varepsilon,n,k,l_k}). \tag{1.19}$$

Third, the skeleton functions $h_{\varepsilon,n,i}(y)$, $y \in \mathbb R_1$ should be defined for $i = 1, \ldots, k$, $n = 0, 1, \ldots$,
$$h_{\varepsilon,n,i}(y) = y_{\varepsilon,n,i,l} \quad \text{if } y \in I_{\varepsilon,n,i,l},\ m^-_{\varepsilon,n,i} \le l \le m^+_{\varepsilon,n,i}, \tag{1.20}$$

and the vector skeleton functions $\hat h_{\varepsilon,n}(\vec y)$, $\vec y = (y_1, \ldots, y_k) \in \mathbb R^k$, should be defined for $n = 0, 1, \ldots$,
$$\hat h_{\varepsilon,n}(\vec y) = (h_{\varepsilon,n,1}(y_1), \ldots, h_{\varepsilon,n,k}(y_k)). \tag{1.21}$$

The corresponding space-skeleton approximating Markov process $\vec Y_{\varepsilon,n}$ is defined, for every $\varepsilon \in (0, \varepsilon_0]$, by the following stochastic transition dynamic relation,
$$\begin{cases} \vec Y_{\varepsilon,n} = \hat h_{\varepsilon,n}\big(\hat h_{\varepsilon,n-1}(\vec Y_{\varepsilon,n-1}) + \vec\lambda_n + \Lambda_n \hat h_{\varepsilon,n-1}(\vec Y_{\varepsilon,n-1}) + \Sigma_n \vec W^0_n\big), & n = 1, 2, \ldots, \\ \vec Y_{\varepsilon,0} = \hat h_{\varepsilon,0}(\vec Y_{0,0}). \end{cases} \tag{1.22}$$
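The skeleton construction (1.15)–(1.21) amounts to rounding each coordinate to a grid point, with the two extreme intervals collecting the tails. The following Python sketch illustrates this for a single coordinate; the grid parameters are illustrative assumptions only, and boundary points are assigned according to the left-open, right-closed convention of (1.16).

```python
# A minimal sketch of the one-dimensional skeleton grid (1.18) and the skeleton
# (rounding) function (1.20).  The grid parameters below are illustrative assumptions.
import numpy as np

def skeleton_points(m_minus, m_plus, delta, lam):
    """Skeleton points y_l = l*delta + lam, l = m_minus, ..., m_plus (relation (1.18))."""
    return np.arange(m_minus, m_plus + 1) * delta + lam

def skeleton_function(y, m_minus, m_plus, delta, lam):
    """h(y): find the index l with y in I_l (left-open, right-closed), return y_l."""
    l = np.ceil((np.asarray(y, dtype=float) - lam) / delta - 0.5)  # interval index
    l = np.clip(l, m_minus, m_plus)                                # end intervals are half-lines
    return l * delta + lam

# Example: grid with step delta = 0.1 centred at lam = 0.0, indices -3, ..., 3.
print(skeleton_points(-3, 3, 0.1, 0.0))
print(skeleton_function([-1.0, 0.04, 0.17, 0.6], -3, 3, 0.1, 0.0))
```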

The log-price process $\vec Y_{\varepsilon,n}$ defined, for every $\varepsilon \in (0, \varepsilon_0]$, by the nonlinear dynamic transition relation (1.22) is a skeleton atomic Markov chain, with the phase space $\mathbb R^k$ and one-step transition probabilities $P_{\varepsilon,n}(\vec y, A) = \mathsf P\{\vec Y_{\varepsilon,n} \in A / \vec Y_{\varepsilon,n-1} = \vec y\}$ defined for $\vec y \in \mathbb R^k$, $A \in \mathcal B_k$, $n = 1, 2, \ldots$ by the following relation,
$$P_{\varepsilon,n}(\vec y, A) = \sum_{\vec y_{\varepsilon,n,\bar l} \in A} P_{0,n}(\hat h_{\varepsilon,n-1}(\vec y), \hat I_{\varepsilon,n,\bar l}) = \sum_{\vec y_{\varepsilon,n,\bar l} \in A} \mathsf P\{\hat h_{\varepsilon,n-1}(\vec y) + \vec\lambda_n + \Lambda_n \hat h_{\varepsilon,n-1}(\vec y) + \Sigma_n \vec W^0_n \in \hat I_{\varepsilon,n,\bar l}\}. \tag{1.23}$$

It is useful to note that the transition probabilities $P_{\varepsilon,n}(\vec y, A)$ are not Gaussian. However, the summands in the above summation formula are Gaussian probabilities.

It is also useful to note that the skeleton function $\hat h_{\varepsilon,n}(\cdot)$ takes values in the set $\mathbb F_{\varepsilon,n} = \{\vec y_{\varepsilon,n,\bar l}, \bar l \in \mathbb L_{\varepsilon,n}\}$, for every $n = 0, 1, \ldots$. This implies that $\hat h_{\varepsilon,n}(\vec Y_{\varepsilon,n}) = \vec Y_{\varepsilon,n}$ if $\vec Y_{\varepsilon,n} \in \mathbb F_{\varepsilon,n}$. Moreover, relations (1.22) and (1.23) imply that $\mathsf P\{\vec Y_{\varepsilon,n} \in \mathbb F_{\varepsilon,n}, n = 0, 1, \ldots\} = 1$. This relation, it seems, should imply that one can replace the random vectors $\hat h_{\varepsilon,n-1}(\vec Y_{\varepsilon,n-1})$ by the random vectors $\vec Y_{\varepsilon,n-1}$ in relation (1.22). This is, however, not so. The transition probabilities $P_{\varepsilon,n}(\vec y, A)$ should be defined for all points $\vec y \in \mathbb R^k$, including points which do not belong to the sets $\mathbb F_{\varepsilon,n}$. That is why the transition dynamic relation (1.22) and formula (1.23) are given in the above form.

As far as the initial distribution $P_{\varepsilon,0}(A) = \mathsf P\{\vec Y_{\varepsilon,0} \in A\}$, $A \in \mathcal B_k$ is concerned, it takes, for every $\varepsilon \in (0, \varepsilon_0]$, the following form,
$$P_{\varepsilon,0}(A) = \sum_{\vec y_{\varepsilon,0,\bar l} \in A} P_{0,0}(\hat I_{\varepsilon,0,\bar l}) = \sum_{\vec y_{\varepsilon,0,\bar l} \in A} \mathsf P\{\hat h_{\varepsilon,0}(\vec Y_{0,0}) \in \hat I_{\varepsilon,0,\bar l}\}. \tag{1.24}$$

We consider the model with a pay-off function $g(n, e^{\vec y})$, $\vec y = (y_1, \ldots, y_k) \in \mathbb R^k$, which does not depend on $\varepsilon$.

Let us also recall, for $\varepsilon \in (0, \varepsilon_0]$, the class $\mathcal M^{(\varepsilon)}_{\max,n,N}$ of all Markov moments $\tau_{\varepsilon,n}$ for the log-price process $\vec Y_{\varepsilon,n}$ such that (a) $n \le \tau_{\varepsilon,n} \le N$, (b) the event $\{\tau_{\varepsilon,n} = m\} \in \mathcal F^{(\varepsilon)}_{n,m} = \sigma[\vec Y_{\varepsilon,l}, n \le l \le m]$, $n \le m \le N$.

In this case, the reward functions are defined for the log-price process $\vec Y_{\varepsilon,n}$ by the following relation, for $\vec y \in \mathbb R^k$ and $n = 0, 1, \ldots, N$,
$$\phi_{\varepsilon,n}(\vec y) = \sup_{\tau_{\varepsilon,n} \in \mathcal M^{(\varepsilon)}_{\max,n,N}} \mathsf E_{\vec y,n}\, g(\tau_{\varepsilon,n}, e^{\vec Y_{\varepsilon,\tau_{\varepsilon,n}}}). \tag{1.25}$$

τε,n ∈Mmax,n,N

Probability measures Pε,n (~ y , A), ~ y ∈ Rk , n = 1, 2, . . . are concentrated on finite sets, for every ε ∈ (0, ε0 ]. This obviously implies that the rewards functions |φε,n (~ y )| < ∞, ~ y ∈ Rk , n = 1, 2, . . . for every ε ∈ (0, ε0 ]. It is useful also to note that, by the definition, φε,N (~ y ) = g(N, e~y ), ~ y ∈ Rk . ¯ ˆ By the definition of sets Iε,n,¯l , l ∈ Lε,n , n = 0, 1, . . ., for every ~ y ∈ Rk and n = 0, 1, . . ., there exists the unique ¯ lε,n (~ y ) = (lε,n,1 (~ y ), . . . , lε,n,k (~ y )) ∈ Lε,n such that ~ y ∈ ˆIε,n,¯lε,n (~y) .

8

1

Reward approximations for autoregressive LPP

The following lemma is the direct corollary of Lemma 8.4.2∗ and relation (8.124)∗ . ~0,n is given by the Lemma 1.1.2. Let the Markov Gaussian log-price process Y vector stochastic difference equation (1.1), while the corresponding approximating ~ε,n is defined, for every ε ∈ (0, ε0 ], by the space-skeleton Markov log-price process Y stochastic transition dynamic relation (1.22). Then the reward functions φε,n (~ y) and φε,n+m (~ yε,n+m,¯l ), ¯ l ∈ Lε,n+m , m = 1, . . . N − n are, for every ε ∈ (0, ε0 ], ~ y ∈ Rk and n = 0, . . . , N − 1, the unique solution for the following recurrence finite system of linear equations,  φε,N (~ yε,N,¯l ) = g(N, e~yε,N,l¯),      ¯  l ∈ Lε,N ,      φε,n+m (~yε,n+m,¯l ) = max g(n + m, e~yε,n+m,l¯),    P   yε,n+m+1,¯l0 ) ¯ l0 ∈Lε,n+m+1 φε,n+m+1 (~ (1.26)   ×P0,n+m+1 (~ yε,n+m,¯l , ˆIε,n+m+1,¯l0 ) ,     ¯  l ∈ Lε,n+m , m = N − n − 1, . . . , 1,       φε,n (~ y ) = max g(n, e~y ),    P   yε,n+1,¯l0 )P0,n+1 (~ yε,n,¯lε,n (~y) , ˆIε,n+1,¯l0 ) , ¯ l0 ∈Lε,n+1 φε,n+1 (~ (ε)

while the optimal expected reward Φε = Φε (Mmax,0,N ) is defined by the following formula, for every ε ∈ (0, ε0 ], X Φε = P0,0 (ˆIε,0,¯l ) φε,0 (~ yε,0,¯l ). (1.27) ¯ l∈Lε,0

Obviously, |Φε | < ∞, for every ε ∈ (0, ε0 ].

1.1.3 Convergence of option reward functions for space skeleton approximations for Markov Gaussian log-price processes with linear drift and bounded volatility coefficients Let now formulate conditions of convergence for the above reward functions. Let us introduce special shorten notations for the maximal and the minimal skeleton points, for i = 1, . . . , k, n = 0, 1, . . . and ε ∈ (0, ε0 ], ± zε,n,i = δε,n,i m± ε,n,i + λε,n,i .

(1.28)

We impose the following condition on the parameters of the space skeleton model: ± N1 : (a) δε,n,i → 0 as ε → 0, for i = 1, . . . , k, n = 0, 1, . . . , N ; (b) ±zε,n,i →∞ ± as ε → 0, for i = 1, . . . , k, n = 0, 1, . . . , N ; (c) ±zε,n,i , n = 0, . . . , N is nondecreasing sequences, for every i = 1, . . . , k and ε ∈ (0, ε0 ].

1.1

Markov Gaussian LPP

9

The moment condition, which provide the existence of appropriate upper bounds for reward functions is condition B1 [¯ γ ], which we assume to hold for the payoff function g(n, e~y ), ~ y = (y1 , . . . , yk ) ∈ Rk , for some vector parameter γ¯ = (γ1 , . . . , γk ) with non-negative components. Let now formulate the corresponding convergence conditions. We assume that the following reduced variant of continuity condition I7∗ (for the model without index component) holds: I1 : There exist sets Y0n ∈ Bk , n = 0, . . . , N such that function g(n, e~y ) is continuous in points ~ y ∈ Y0n , for every n = 0, . . . , N . We assume that the following condition of weak continuity holds: ~ n0 ∈ Y0n } = 0, for ~ y0 ∈ Y0n−1 , n = 1, . . . , N , where O1 : P{~ y0 + ~λn + Λn ~ y0 + Σn W 0 Yn , n = 0, . . . , N are sets penetrating condition I1 . 2 ~ n0 ≡ 0 and P{~ Remark 1.1.3. If maxi,j=1,...,k σn,i,j = 0, then Σn W y0 + ~λn + 0 2 0 > Λn ~ y0 ∈ Yn } = 0, if and only if ~ y0 + ~λn + Λn ~ y0 ∈ Yn . If maxi,j=1,...,k σn,i,j 0 ~ n = Σn W ~ n a Gaussian random vector, with at least one component 0 then W possessing a non-zero variance. The set Rr,k,n = {~ y : ~ y = Σn w, ~ w ~ ∈ Rk } is an Euclidean hyper-subspace of Rk , with a dimension 1 ≤ r ≤ k. The probability ~ n , with respect to the Lebesgue measure density function of the random vector W Lr,k,n (A) in the hyper-subspace Rr,k,n , is concentrated and strictly positive in ~ n0 ∈ Y0n } = 0, if and points ~ y ∈ Rr,k,n . This implies that P{~ y0 + ~λn + Λn ~ y0 + Σn W 0 0 0 only if Lr,k,n (Yr,k,n,y0 ) = 0, where Yr,k,n,y0 = (Yn − ~ y0 − ~λn − Λn ~ y0 ) ∩ Rr,k,n is 0 the cut of the set Yn − ~ y0 − ~λn − Λn ~ y0 by the hyper-subspace Rr,k,n . In particular, if det(Σn ) 6= 0, then r = k, the hyper-subspace Rk,k,n = Rk , and the Lebesgue measure Lk,k,n (A) = Lk (A) is the Lebesgue measure in the space Rk . In this case, 0 the above probability equals 0 if and only if Lk (Yk,k,n,y0 ) = 0.

The following theorem is a corollary of Theorem 8.4.7∗ . ~0,n is given by Theorem 1.1.3. Let the Markov Gaussian log-price process Y the vector stochastic difference equation (1.1), while the corresponding approximat~ε,n is defined, for every ε ∈ (0, ε0 ], by ing space-skeleton Markov log-price process Y the stochastic transition dynamic relation (1.22). Let also conditions B1 [¯ γ ] holds for some vector parameter γ¯ = (γ1 , . . . , γk ) with non-negative components, and, also, conditions N1 , I1 , and O1 hold. Then, for every n = 0, 1, . . . , N , the following relation takes place for any ~ yε → ~ y0 ∈ Y0n , φε,n (~ yε ) → φ0,n (~ y0 ) as ε → 0.

(1.29)

Proof. As follows from the transition dynamic relations (1.1) and (1.22), the space skeleton approximation model considered in Theorem 1.1.3 is a particular case of the space skeleton approximation model considered in Theorem 8.4.7∗ . The

10

1

Reward approximations for autoregressive LPP

difference is that the index component is absent. One can always to add in the model the virtual index component X0,n with one-point phase space X = {x0 }. Condition N1 is a variant of condition N2∗ , for the model with the above virtual index component. Condition B1 [¯ γ ] is a reduced version of condition B1 [¯ γ ]∗ , for the model without index component. Condition I1 is a reduced variant of condition I7∗ , for the case, where the corresponding pay-off function does not depend on index component. Lemma 1.1.1 implies that the reduced version of condition G4∗ (for the model ~0,n . without index component) holds for the Markov Gaussian log-price process Y Since functions µ ~ n (~ y ) = ~λn + Λn ~ y and Σn (~ y ) = Σn are continuous in ~ y ∈ Rk , the reduced variant of condition O5∗ (a) (for the model with a virtual index component) holds for sets Z00n = Rk × {x0 }. Condition O5∗ (b) can be omitted in this case. Condition O5∗ (c) takes the form of condition O1 . Thus, all conditions of Theorem 8.4.7∗ hold. By applying this theorem we get the convergence relation (1.29). 

1.1.4 Convergence of optimal expected rewards for space skeleton approximations of Markov Gaussian log-price processes with linear drift and bounded volatility coefficients Let us now give conditions for convergence for optimal expected rewards Φε in the above space skeleton approximation model. We shall apply Theorem 8.4.8∗ . ¯ = (A1 (β), ¯ . . . , Ak (β)) ¯ ~ β) ¯ ∗ , with the function A( In this case, condition D24 [β] given by relations (1.7) and (1.8), should be assumed to hold for some vector parameter β¯ = (β1 , . . . , βk ) with non-negative components and the corresponding vectors β¯i = (βi,1 , . . . , βi,k ) with components βi,j = I(i = j), i, j = 1, . . . , k. Condition K15∗ should be replaced by the following condition imposed on the ~0,0 ∈ A}: initial distribution P0,0 (A) = P{Y K1 :P0,0 (Y00 ) = 1, where Y00 is the set introduced in conditions I1 . The following theorem is a corollary of Theorem 8.4.8∗ . ~0,n is defined Theorem 1.1.4. Let the Markov Gaussian log-price process Y by the vector stochastic difference equation (1.1), while the approximating space~ε,n is defined, for every ε ∈ (0, ε0 ], by the skeleton Markov log-price process Y ¯ holds for dynamic transition relation (1.22). Let also conditions B1 [¯ γ ] and D1 [β] some vector parameters γ¯ = (γ1 , . . . , γk ) and β¯ = (β1 , . . . , βk ) such that, for every i = 1, . . . , k, either βi > γi > 0 or βi = γi = 0, and also conditions N1 , I1 , O1 , and K1 hold. Then, the following relation takes place, Φε → Φ0 as ε → 0.

(1.30)

1.2 Autoregressive LPP

11

¯ and K1 are just re-formulation, respectively, of conProof. Conditions D1 [β] ¯ ditions D19 [β]∗ and K15∗ used in Theorem 8.4.8∗ . Other conditions of this theorem also holds that was pointed out in the proof of Theorem 1.1.3. By applying Theorem 8.4.8∗ we get convergence relation (1.30). 

1.2 Autoregressive LPP In this section, we present results concerned space skeleton approximations for rewards of American-type options for autoregressive (AR) log-price processes with Gaussian noise terms.

1.2.1 Upper bounds for rewards of autoregressive log-price processes Let us now consider an inhomogeneous in time autoregressive type model with Gaussian noise terms, where the log-price process Yn is given by the following autoregressive stochastic difference equation, Yn − Yn−1 = an,0 + an,1 Yn−1 + · · · + an,p Yn−p + σn Wn , n = 1, 2, . . . ,

(1.31)

~0 = (Y0 , . . . , Y−p+1 ) is a p-dimensional random vector with real-valued where: (a) Y components; (b) W1 , W2 , . . . is a sequence of real-valued i.i.d. standard normal ~0 and the variables with mean value 0 and variance 1; (c) the random vector Y random sequence W1 , W2 , . . . are independent; (d) p is a positive integer number; (e) an,0 , an,1 , . . ., an,p , σn , n = 1, 2, . . . are real-valued constants. An autoregressive AR(p) model with Gaussian noise terms is a particular case of the above model, with coefficients a0 , a1 , . . ., ap , σ independent on n. As was shown in Subsection 1.3.4∗ , the above autoregressive type model can be imbedded in the model of multivariate Markov Gaussian log-price processes introduced in Subsection 1.1.1. Let us define the p-dimensional log-price process, ~0,n = (Y0,n,1 , . . . , Y0,n,p ) = (Yn , . . . , Yn−p+1 ), n = 0, 1, . . . . Y

(1.32)

We can always assume that the sequence of random variables Wn , n = 1, 2, . . . is the sequence of the first components of the sequence of p-dimensional i.i.d. 0 0 ~ n0 = (Wn,1 , . . . , Wn,p ), n = 1, 2, . . ., with standard Gaussian random vectors W 0 0 0 EW1,i = 0, EW1,i W1,j = I(i = j), i, j = 1, . . . , p, i.e., 0 Wn = Wn,1 , n = 1, 2, . . .

(1.33)

~ n = (Wn,1 , . . ., Let also of consider p-dimensional i.i.d. Gaussian vectors W Wn,p ) = (Wn , 0, . . . , 0), n = 1, 2, . . . introduced in Subsection 1.1.1.

12

1

Reward approximations for autoregressive LPP

~ n can be obtained as a linear transformation of vectors W ~ n0 , namely, Vectors W 0 ~ ~ Wn = ΣWn , n = 1, 2, . . ., where p × p matrix Σ = kσi,j k has elements σi,j = I(i = 1)I(j = 1), i, j = 1, . . . , p. The stochastic dynamic equation (1.31) can be re-written in the equivalent form of vector stochastic difference equation,   Y0,n,1 − Y0,n−1,1 = an,0 + an,1 Y0,n−1,1      + · · · + an,p Y0,n−1,p + σn Wn,1 ,    Y0,n,2 − Y0,n−1,2 = Y0,n−1,1 − Y0,n−1,2 + Wn,2 , (1.34)  . . . . . .      Y0,n,p − Y0,n−1,p = Y0,n−1,p−1 − Y0,n−1,p + Wn,p ,    n = 1, 2, . . . . Taking into account the above remarks, one can re-write the stochastic difference equation (1.34) in the following matrix form, ~0,n − Y ~0,n−1 = ~λn + Λn Y ~0,n−1 + Σn W ~ n0 , n = 1, 2, . . . , Y

(1.35)

where p-dimensional vectors ~λn = (λn,1 , . . . , λn,p ), n = 1, 2, . . . and p × p matrices Λn = kλn,i,j k, n = 1, 2, . . . are defined by the following relations,     an,1 an,2 . . . . . . . . . . . . an,p an,0  1  0  −1 0 ... ... ... 0     ~λn =  (1.36)  .  , Λn =  . .. .. .. .. ..  ,  ..  ..  . . . . .  0 ... ... ... 0 1 −1 0 and p × p matrices Σn = kσn,i,j k, n = 1, 2, . . . are defined by the following relation,

   Σn =  

σn 0 .. . 0

0 0 .. . 0

... ... ...

0 0 .. . 0

   . 

(1.37)

The stochastic difference equation (1.35) is a particular case of the stochastic difference equation (1.1). In this case, parameter k = p. Therefore, by Lemma 1.1.1, condition G4∗ holds, for example, with constants, K4 = 1 +

2 max (|an+1,0 | + σn+1 ),

0≤n≤N −1

max0≤n≤N −1, i=1,...,p |an+1,i | , 2 1 + max0≤n≤N −1 (|an+1,0 | + σn+1 ) 1 , l = 2, . . . , p, = 2 1 + max0≤n≤N −1 (|an+1,0 | + σn+1 )

K5,1 = K5,l

(1.38)

which replace, respectively, constants K26∗ and K27,l∗ , l = 1, . . . , k penetrating condition G4∗ (or constants K1 and K2,l , l = 1, . . . , k given in relation (1.5)).

1.2

Autoregressive LPP

13

Thus, Theorems 1.1.1 and 1.1.2 can be applied to the vector autoregressive ~0,n given by the vector stochastic difference Markov Gaussian log-price process Y equation (1.35) that yields, in fact, the upper bounds for reward functions and optimal expected rewards for the autoregressive type log-price processes with Gaussian noise terms Yn given by the autoregressive stochastic difference equation (1.31). It is natural to assume for an autoregressive log-price process that the corresponding pay-off function depend on the corresponding sequence of values for log-prices. We consider real-valued measurable pay-off function g(n, e~y ), ~ y = (y1 , . . ., yp ) ∈ Rp , n = 0, 1, . . ., and assume that the following condition, replacing condition B1 [¯ γ ], holds for some vector parameter γ¯ = (γ1 , . . . , γp ) with non-negative components: ~ ey )| P|g(n, < L3 , for some 0 < L3 < ∞ and B2 [¯ γ ]: max0≤n≤N sup~y∈R p γ |y | p

1+

0 ≤ L4,i < ∞, i = 1, . . . , p.

i=1

L4,i e

i

i

(0)

Let us Mmax,p,n,N be the class of all stopping times τ0,n for the process Yn (0)

such that (a) n ≤ τ0,n ≤ N , (b) event {τ0,n = m} ∈ Fp,n,m = σ[Yl , n − p + 1 ≤ l ≤ m], n ≤ m ≤ N . (0) (0) Obviously, the class Mmax,p,n,N coincides with the class Mmax,n,N of all ~0,l = (Y0,l,1 , . . . , Y0,l,p ) = (Yl , Markov moments τ0,n for the Markov process Y (0) ~0,l , n ≤ . . . , Yl−p+1 ) such that (a) n ≤ τ0,n ≤ N , (b) event {τ0,n = m} ∈ Fn,m = σ[Y l ≤ m], n ≤ m ≤ N . In this case, the reward functions are defined for the log-price process Yn ~0,n ) by the following relation, for ~ (its equivalent vector version Y y ∈ Rp and n = 0, 1, . . . , N , φ0,n (~ y) =

sup

~

E~y,n g(τ0,n , eYτ0,n ).

(1.39)

(0) τ0,n ∈Mmax,n,N

¯ = (A1 (β), ¯ . . . , Ap (β)), ¯ β¯ = (β1 , . . . , βp ), β1 , . . . , βp ~ β) In this case, functions A( ≥ 0, penetrating formulation of Theorems 1.1.1 and 1.1.2, has the following components, p X 1 ¯ = K4 K5,j (1.40) Aj (β) (βl + p2 βl2 ), j = 1, . . . , p. 2 l=1

¯ generates a sequence of functions A ¯ = (An,1 (β), ¯ ..., ~ β) ~ n (β) Function A( ¯ An,p (β)), n = 0, 1, . . . from the class Ap by the following recurrence relation, for any β¯ = (β1 , . . . , βp ), βi ≥ 0, i = 1, . . . , p, ¯ = ~ n (β) A



β¯ ¯ + A( ¯ ~ n−1 (β) ~ A ~ n−1 (β)) A

for n = 0, for n = 1, 2, . . . .

(1.41)

14

1

Reward approximations for autoregressive LPP

Let us also define vectors β¯i = (βi,1 , . . . , βi,p ) with components βi,j = βj I(j = i), i, j = 1, . . . , p. Theorem 1.1.1 takes in this case the following form. Theorem 1.2.1. Let the autoregressive log-price process Yn and its equiva~0,n are given, respectively, by the stochastic difference equalent vector version Y tions (1.31) and (1.35). Let also condition B2 [¯ γ ] holds for some vector parameter γ¯ = (γ1 , . . . , γp ) with γi ≥ 0, i = 1, . . . , p. Then, for any vector parameter β¯ = (β1 , . . . , βp ) with components βi ≥ γi , i = 1, . . . , p, there exist constants 0 ≤ M4 , M5,i = M5,i (βi ) < ∞, i = 1, . . . , p such that the reward functions φn (~ y) satisfy the following inequalities for ~ y = (y1 , . . . , yp ) ∈ Rp , 0 ≤ n ≤ N , X |φ0,n (~ y )| ≤ M4 + M5,i i: γi =0

+

X i: γi >0

p X γi M5,i exp{( AN −n,j (β¯i )|yj |) }. βi

(1.42)

j=1

Remark 1.2.1. The explicit formulas for the constants M4 and M5,i , i = 1, . . . , p take, according formulas given in Remark 1.1.1, the following form, M4 = L3 , M5,i = L3 L4,i I(γi = 0) + L17 L18,i (1 Pp γ K (A (β¯ )+ 1 p2 AN −1,l (β¯i )) N βi ) i I(γi > 0). + 2p e 4 l=1 N −1,l i 2

(1.43)

In this case, the optimal expected reward is defined by the formula, Φ0 =

sup

~ ~0,0 ). Eg(τ0,0 , eY0,τ0,0 ) = Eφ0 (Y

(1.44)

(0) τ0,0 ∈Mmax,p,0,N

¯ should be replaced in this case by the following condition Condition D1 [β] assumed to hold for vector parameter β¯ = (β1 , . . . , βp ) with non-negative components: P ¯ E exp{ p AN,j (β¯i )|Y0,0,j |} < K6,i , i = 1, . . . , p, for some 1 < K6,i < D2 [β]: j=1

∞, i = 1, . . . , p. Theorem 1.1.2 takes in this case the following form. Theorem 1.2.2. Let the autoregressive log-price process Yn and its equivalent ~0,n are given, respectively, by the stochastic difference equations vector version Y ¯ hold for some vector pa(1.31) and (1.35). Let also conditions B2 [¯ γ ] and D2 [β] rameters γ¯ = (γ1 , . . . , γk ) and β¯ = (β1 , . . . , βk ) such that 0 ≤ γi ≤ βi < ∞, i = 1, . . . , p. Then, there exists a constant 0 ≤ M6 < ∞ such that the following inequality takes place, |Φ0 | ≤ M6 . (1.45)

1.2

Autoregressive LPP

15

Remark 1.2.2. The explicit formula for the constant M6 takes, according formulas given in Remark 1.1.2, the following form, X X M6 = L17 + L3 L4,i + L3 L4,i (1 i:γi =0

+ 2p e

K4

Pp l=1

i:γi >0 γ (AN −1,l (β¯i )+ 12 p2 AN −1,l (β¯i )) N βi

)

i

γi β

K6,ii .

(1.46)

1.2.2 Space-skeleton approximations for option rewards of autoregressive log-price processes ~0,n has the Gaussian transition probabilities P0,n (~ The Markov process Y y , A) = ~0,n ∈ A/Y ~0,n−1 = ~ P{Y y } defined for ~ y ∈ Rp , A ∈ Bp , n = 1, 2, . . . by the following relation, ~ n0 ∈ A}, P0,n (~ y , A) = P{~ y + ~λn + Λn ~ y + Σn W

(1.47)

where the vector coefficients ~λn , n = 1, 2, . . . and matrix coefficients Λn , Σn , n = 1, 2, . . . are given by relations (1.36) and (1.37). ~0,n given by the stochastic difference equation (1.35) is The log-price process Y a particular case of the multivariate Markov Gaussian log-price process considered in Subsection 1.1.1, with parameter k = p and characteristics given by relations (1.36) and (1.37). Therefore, Lemma 1.1.1 and Theorems 1.1.3 and 1.1.4 can be applied to the above autoregressive log-price processes with Gaussian noise terms given by the stochastic difference equation (1.31) or equivalently by the vector stochastic difference equation (1.35). In this case, some simplification in the construction of the corresponding skeleton approximation model can be achieved due to specific shift structure of the stochastic difference equation (1.35), which is well seen from the equivalent form (1.34) of this equation. + Let m− ε,n ≤ mε,n be integer numbers, δε,n > 0 and λε,n ∈ R1 , for n = −p + 1, −p + 2, . . .. ± In this case, one can use parameters m± ε,n,i = mε,n−i+1 , δε,n,i = δε,n−i+1 , and λε,n,i = λε,n−i+1 , for i = 1, . . . , p, n = 0, 1, . . .. In this case, the index sets Lε,n , n = 0, 1, . . . have the following form, Lε,n = {¯ l = (l1 , . . . , lp ), + li = m− ε,n,i , . . . , mε,n,i , i = 1, . . . , p} = {¯ l = (l1 , . . . , lp ), + li = m− ε,n−i+1 , . . . , mε,n−i+1 , i = 1, . . . , p}.

(1.48)

Other elements of the space skeleton approximation should be also defined with the use of the above shift construction.

16

1

Reward approximations for autoregressive LPP

First, the skeleton intervals Iε,n,l should be n = −p + 1, . . .,  − 1   (−∞, δε,n (mε,n + 2 )] + λε,n Iε,n,l = (δε,n (l − 12 ), δε,n (l + 21 )] + λε,n   1 (δε,n (m+ ε,n − 2 ), ∞) + λε,n

+ defined for l = m− ε,n , . . . , mε,n ,

if l = m− ε,n , + if m− ε,n < l < mε,n ,

if l =

(1.49)

m+ ε,n ,

and then skeleton cubes ˆIε,n,¯l should be defined for ¯ l ∈ Lε,n , n = 0, 1, . . ., ˆI ¯ = Iε,n,1,l × · · · × Iε,n,p,l 1 p ε,n,l = Iε,n,l1 × · · · × Iε,n−p+1,lp .

(1.50)

+ Second, skeleton points yε,n,l should be defined for l = m− ε,n , . . . , mε,n , n = −p + 1, . . ., yε,n,l = lδε,n + λε,n , (1.51)

and vector skeleton points ~ yε,n,¯l should be defined for ¯ l ∈ Lε,n , n = 0, 1, . . ., ~ yε,n,¯l = (yε,n,1,l1 , . . . , yε,n,p,lp ) = (yε,n,l1 , . . . , yε,n−p+1,lp ).

(1.52)

Third, skeleton functions, hε,n (y), y ∈ R1 should be defined for n = −p+1, . . ., hε,n (y) =



yε,n,l

+ if y ∈ Iε,n,l , m− ε,n ≤ l ≤ mε,n ,

(1.53)

ˆ ε,n (~ and vector skeleton functions h y ), ~ y = (y1 , . . ., yp ) ∈ Rp , should be defined n = 0, 1, . . ., ˆ ε,n (~ h y ) = (hε,n,1 (y1 ), . . . , hε,n,p (yp )) = (hε,n (y1 ), . . . , hε,n−p+1 (yp )).

(1.54)

~ε,n are defined, The corresponding space skeleton approximating processes Y for every ε ∈ (0, ε0 ], by the following vector stochastic transition dynamic relation,

 ~   Yε,n n   ~ Yε,0

ˆ ε,n h ˆ ε,n−1 (Y ˆ ε,n−1 (Y ~ε,n−1 ) + ~λn + Λn h ~ε,n−1 ) + Σn W ~ n0 =h = 1, 2, . . . , ˆ ε,0 (Y ~0,0 ), =h

 (1.55)

where the characteristics ~λn , Λn , and Σn , n = 1, . . . , N are defined by relations (1.36) and (1.37). The vector stochastic transition dynamic relation (1.55) can be re-written in the form of following equivalent transition dynamic relation given for components

1.2

of the log-price   Yε,n,1             Yε,n,2     ... Yε,n,p      n      Yε,0,1     ...    Yε,0,p

Autoregressive LPP

17

~ε,n , process Y  = hε,n hε,n−1 (Yε,n−1,1 ) + an,0 + an,1 hε,n−1 (Yε,n−1,1 )  + · · · + an,p hε,n−p (Yε,n−1,p ) + σn Wn , = hε,n−1 (Yε,n−1,1 ), ... = hε,n−p+1 (Yε,n−1,p−1 ), = 1, 2, . . . ,

(1.56)

= hε,0 (Y0,0,1 ), ... = hε,−p+1 (Y0,0,p ).

~ε,n defined, for every ε ∈ (0, ε0 ], by the nonlinear dyThe log-price process Y namic transition relation (1.91) is a skeleton atomic Markov chain, with the phase ~ε,n ∈ A/Y ~ε,n−1 = space Rp and one-step transition probabilities Pε,n (~ y , A) = P{Y ~ y } defined for ~ y ∈ Rp , A ∈ Bp , n = 1, 2, . . . by the following relation, X ˆ ε,n−1 (~ Pε,n (~ y , A) = P0,n (h y ), ˆIε,n,¯l ) ~ yε,n,l¯∈A

X

=

ˆ ε,n−1 (~ ¯ n + Λn h ˆ ε,n−1 (~ P{h y) + λ y)

~ yε,n,l¯∈A

~ n0 ∈ ˆI ¯}, + Σn W ε,n,l

(1.57)

where the characteristics ~λn , Λn , and Σn , n = 1, . . . , N are defined by relations (1.36) and (1.37). ~ε,0 ∈ A}, A ∈ Bp is concerned, As far as the initial distribution Pε,0 (A) = P{Y it takes, for every ε ∈ (0, ε0 ], the following form, X X ˆ ε,0 (Y ~0,0 ) ∈ ˆI ¯}. Pε,0 (A) = P0,0 (ˆI ¯) = P{h (1.58) ε,0,l

~ yε,0,l¯∈A

ε,n,l

~ yε,0,l¯∈A

We consider the model with a pay-off function g(n, e~y ), ~ y = (y1 , . . . , yp ) ∈ Rp do not depend on ε. (ε) Let us also recall, for ε ∈ (0, ε0 ], the class Mmax,n,N of all Markov moments ~ε,n such that (a) n ≤ τε,n ≤ N , (b) event {τε,n = τε,n for the log-price process Y (ε) ~ε,l , n ≤ l ≤ m], n ≤ m ≤ N . m} ∈ Fn,m = σ[Y ~ε,n by In this case, the reward functions are defined for the log-price process Y the following relation, for ~ y ∈ Rp and n = 0, 1, . . . , N , φε,n (~ y) =

sup (ε) τε,n ∈Mmax,n,N

~

E~y,n g(τε,n , eYε,τε,n ).

(1.59)

18

1

Reward approximations for autoregressive LPP

Probability measures Pε,n (~ y , A), ~ y ∈ Rp , n = 1, 2, . . . are concentrated on finite sets, for every ε ∈ (0, ε0 ]. This obviously implies that the rewards functions |φε,n (~ y )| < ∞, ~ y ∈ Rp , n = 1, 2, . . . for every ε ∈ (0, ε0 ]. It is useful also to note that, by the definition, φε,N (~ y ) = g(N, e~y ), ~ y ∈ Rp . An analogue of Lemma 1.1.2 can be formulated for the above model. However, the corresponding system of linear equations can be simplified taking into account the specific shift structure of the stochastic difference equation (1.35) written in equivalent form (1.34). By the definition of sets ˆIε,n,¯l , ¯ l ∈ Lε,n , n = 0, 1, . . ., for every ~ y ∈ Rp and n = 0, 1, . . ., there exists the unique ¯ lε,n (~ y ) = (lε,n,1 (~ y ), . . . , lε,n,p (~ y )) ∈ Lε,n such that ~ y ∈ ˆIε,n,¯lε,n (~y) . The following lemma is the direct corollary of Lemma 1.1.2. ~0,n is defined by the Lemma 1.2.1. Let the autoregressive log-price process Y vector stochastic difference equation (1.35), while the corresponding approximat~ε,n is defined, for every ε ∈ (0, ε0 ], by the ing space-skeleton log-price process Y stochastic transition dynamic relation (1.55). Then the reward functions φε,n (~ y) and φn+m (~ yε,n+m,¯l ), ¯ l ∈ Lε,n+m , m = 1, . . . N − n are, for every ε ∈ (0, ε0 ], ~ y ∈ Rp and n = 0, . . . , N −1, the unique solution for the following recurrence finite system of linear equations,

 φε,N (~ yε,N,¯l ) = g(N, e~yε,N,l ),     ¯  l = (l1 , . . . , lp ) ∈ Lε,N ,      φε,n+m (~ yε,n+m,¯l ) = max g(n + m, e~yε,n+m,l¯),     Pm+ε,n+m+1    φε,n+m+1 (~ yε,n+m+1,(l10 ,l1 ,...,lp−1 ) )  l10 =m−  ε,n+m+1     ×P{yε,n+m,l1 + an+m+1,0 + an+m+1,1 yε,n+m,l1      + · · · + an+m+1,p yε,n+m−p+1,lp   +σn+m+1 Wn+m+1 ∈ Iε,n+m+1,l10 } ,     ¯  l = (l1 , . . . , lp ) ∈ Lε,n+m , m = N − n − 1, . . . , 1,       φε,n (~ y ) = max g(n, e~y ),     Pm+ε,n+1   φε,n+1 (~ yε,n+1,(l10 ,lε,n,1 (~y),...,lε,n,p−1 (~y)) )   l10 =m− ε,n+1      ×P{yε,n,lε,n,1 (~y) + an+1,0 + an+1,1 yε,n,lε,n,1 (~y)     + · · · + an+1,p yε,n−p+1,lε,n,p (~y) + σn+1 Wn+1 ∈ Iε,n+1,l10 } , (ε)

(1.60)

while the optimal expected reward Φε = Φε (Mmax,0,N ) is defined by the following formula, for every ε ∈ (0, ε0 ], X Φε = P0,0 (ˆIε,0,¯l ) φε,0 (~ yε,0,¯l ). (1.61) ¯ l∈Lε,0

1.2

19

Autoregressive LPP

Proof. The following system of linear equations for the reward functions φε,n (~ y ) and φn+m (~ yε,n+m,¯l ), ¯ l ∈ Lε,n+m , m = 1, . . . N − n is the variant of the system of linear equations (1.26) given in Lemma 1.1.2,  φε,N (~ yε,N,¯l ) = g(N, e~yε,N,l¯),      ¯  l ∈ Lε,N ,     ~ y   φε,n+m (~yε,n+m,¯l ) = max g(n + m, e ε,n+m,l¯),   P   yε,n+m+1,¯l0 ) ¯ l0 ∈Lε,n+m+1 φε,n+m+1 (~ (1.62)  ˆ    ×P0,n+m+1 (~yε,n+m,¯l , Iε,n+m+1,¯l0 ) ,     ¯l ∈ Lε,n+m , m = N − n − 1, . . . , 1,     φε,n (~y ) = max g(n, e~y ),      P  yε,n+1,¯l0 )P0,n+1 (~ yε,n,¯lε,n (~y) , ˆIε,n+1,¯l0 ) . ¯ l0 ∈Lε,n+1 φε,n+1 (~ The system of linear equations (1.62) can be re-written in simpler form taking into account shift features of the dynamic transition relation (1.55) written in the equivalent form (1.56). Indeed, according the above relations, we get the following formula, for ¯ l = (l1 , . . . , lp ) ∈ Lε,n+m , ¯ l0 = (l10 , . . . , lp0 ) ∈ Lε,n+m+1 , P0,n+m+1 (~ yε,n+m,¯l , ˆIε,n+m+1,¯l0 ) = P{yε,n+m,l1 + an+m+1,0 + an+m+1,1 yε,n+m,l1 + · · · + an+m+1,p yε,n+m−p+1,lp + σn+m+1 Wn+m+1 ∈ Iε,n+m+1,l10 } × I(yε,n+m,l1 ∈ Iε,n+m,l20 ) × · · · × I(yε,n+m−p+2,lp−1 ∈ Iε,n+m−p+2,lp0 ) = P{yε,n+m,l1 + an+m+1,0 + an+m+1,1 yε,n+m,l1 + · · · + an+m+1,p yε,n+m−p+1,lp + σn+m+1 Wn+m+1 ∈ Iε,n+m+1,l10 } × I(l1 = l20 ) × · · · × I(lp−1 = lp0 ).

(1.63)

Relation (1.63) implies that the system of linear equations (1.62) can be rewritten in the simpler form, where multivariate sums over vector indices ¯ l0 ∈ Lε,n+m+1 , m = N − n, . . . , 0 are replaced in the corresponding equations by uni+ variate sums over indices l10 = m− ε,n+m+1 , . . . , mε,n+m+1 , m = N − n − 1, . . . , 1. By doing this, we get write down the system of linear equations (1.60).  Obviously, |Φε | < ∞, for every ε ∈ (0, ε0 ].

1.2.3 Convergence of option reward functions for space skeleton approximations for option rewards of autoregressive log-price processes Let now formulate conditions of convergence for the above reward functions. We shall apply Theorem 1.1.3.

20

1

Reward approximations for autoregressive LPP

Let us introduce special shorten notations for the maximal and the minimal skeleton points, for n = −p + 1, . . . and ε ∈ (0, ε0 ], ± zε,n = δε,n m± ε,n + λε,n .

(1.64)

We impose the following condition on the parameters of the space skeleton model: ± N2 : (a) δε,n → 0 as ε → 0, for n = −p + 1, . . . , N ; (b) ±zε,n → ∞ as ε → 0, for ± n = −p+1, . . . , N ; (c) ±zε,n , n = −p+1, . . . , N are non-decreasing sequences, for every ε ∈ (0, ε0 ].

The moment condition, which provide the existence of appropriate upper bounds for reward functions is condition B2 [¯ γ ], which we assume to hold for the payoff function g(n, e~y ), ~ y = (y1 , . . . , yp ) ∈ Rp , for some vector parameter γ¯ = (γ1 , . . . , γp ) with non-negative components. Let now formulate the corresponding convergence conditions. We assume that the following variant of continuity condition I1 holds: I2 : There exist sets Y0n ∈ Bp , n = 0, . . . , N such that function g(n, e~y ) is continuous in points ~ y ∈ Y0n , for every n = 0, . . . , N . We assume that the following variant of condition of weak continuity O1 , in which characteristics ~λn , Λn , Σn , n = 1, 2, . . . are given by relations (1.36) and (1.37), holds: ~ n0 ∈ Y0n } = 0, for ~ y0 ∈ Y0n−1 , n = 1, . . . , N , where O2 : P{~ y0 + ~λn + Λn ~ y0 + Σn W Y0n , n = 0, . . . , N are sets penetrating condition I2 . A remark analogous to Remark 1.1.3 can be made about the condition O2 . 0 ~ n0 ≡ 0 and P{~ y0 + ~λn +Λn ~ y0 ∈ Yn } = 0, if Remark 1.2.3. If σn2 = 0, then Σn W 0 ~ n = Σn W ~ n0 = (σn Wn,1 and only if ~ y0 + ~λn + Λn ~ y0 ∈ Y0n . If σn2 > 0 then W , 0, . . . , 0) is a Gaussian random vector (which should be considered as a column vector with p components). The set R1,p,n = {~ y:~ y = Σn w, ~ w ~ ∈ Rp } = {~ y = (y1 , 0, . . . , 0), y1 ∈ R1 } is a one-dimensional Euclidean hyper-subspace of Rp . The probability density ~ n , with respect to Lebesgue measure L1,p,n (A) function of the random vector W in the hyper-subspace R1,p,n , is concentrated and strictly positive in points ~ y ∈ ~ n0 ∈ Y0n } = 0, if and only if R1,p,n . This implies that P{~ y0 + ~λn + Λn ~ y0 + Σn W 0 0 0 L1,p,n (Y1,p,n,y0 ) = 0, where Y1,p,n,y0 = (Yn − ~ y0 − ~λn − Λn ~ y0 ) ∩ R1,p,n is the cut of 0 ~ the set Yn − ~ y0 − λn − Λn ~ y0 by the hyper-subspace R1,p,n . The following theorem is the direct corollary of Theorem 1.1.3. ~0,n is defined by the Theorem 1.2.3. Let the autoregressive log-price process Y vector stochastic difference equation (1.35), while the corresponding approximat~ε,n is defined, for every ε ∈ (0, ε0 ], by the ing space-skeleton log-price process Y

1.2

Autoregressive LPP

21

stochastic transition dynamic relation (1.55). Let also conditions B2 [¯ γ ] holds for some vector parameter γ¯ = (γ1 , . . . , γp ) with non-negative components, and, also, conditions N2 , I2 , and O2 hold. Then, for every n = 0, 1, . . . , N , the following relation takes place for any ~ yε → ~ y0 ∈ Y0n , φε,n (~ yε ) → φ0,n (~ y0 ) as ε → 0.

(1.65)

~0,n and its space skeleton approximation logProof. The log-price process Y ~ε,n defined, respectively, in (1.35) and (1.55) are particular variants price process Y of the corresponding log-price processes defined, respectively, in (1.1), and (1.22). Conditions B2 [¯ γ ], N2 , I2 , and O2 are just variants of conditions B1 [¯ γ ], N1 , I1 , and O1 for the model considered in the present subsection. Thus all condition of Theorem 1.1.3 hold. By applying this theorem we get the convergence relation (1.65). 

1.2.4 Convergence of optimal expected rewards for space skeleton approximations of autoregressive log-price processes Let us now give conditions for convergence for optimal expected rewards Φε in the above space skeleton approximation model. We shall apply Theorem 1.1.4. ¯ = (A1 (β), ¯ . . . , Ap (β)) ¯ ~ β) ¯ with the function A( In this case, condition D2 [β], given by relation (1.40), should be assumed to hold for some vector parameter β¯ = (β1 , . . . , βp ) with non-negative components and the corresponding vectors β¯i = (βi,1 , . . . , βi,p ) with components βi,j = I(i = j), i, j = 1, . . . , p. Condition K1 should be replaced by the following condition imposed on the ~0,0 ∈ A}: initial distribution P0,0 (A) = P{Y K2 : P0,0 (Y00 ) = 1, where Y00 is the set introduced in conditions I2 . The following theorem is a corollary of Theorem 1.1.4. ~0,n is defined by the Theorem 1.2.4. Let the autoregressive log-price process Y vector stochastic difference equation (1.35), while the corresponding approximating ~ε,n is defined, for every ε ∈ (0, ε0 ], by the dynamic space-skeleton log-price process Y ¯ holds for some transition relation (1.55). Let also conditions B2 [¯ γ ] and D2 [β] vector parameters γ¯ = (γ1 , . . . , γk ) and β¯ = (β1 , . . . , βk ) such that, for every i = 1, . . . , k, either βi > γi > 0 or βi = γi = 0, and also conditions N2 , I2 , O2 , and K2 hold. Then, the following relation takes place, Φε → Φ0 as ε → 0.

(1.66)

¯ and K2 are just re-formulation, respectively, of conProof. Conditions D2 [β] ¯ ditions D1 [β] and K1 used in Theorem 1.1.4. Other conditions of this theorem also holds that was pointed out in the proof of Theorem 1.2.3. By applying Theorem 1.1.4 we get convergence relation (1.66). 

22

1

Reward approximations for autoregressive LPP

1.3 Autoregressive moving average LPP In this section, we present results concerned space skeleton approximations for rewards of American-type options for autoregressive moving average (ARMA) log-price processes with Gaussian noise terms.

1.3.1 Upper bounds for rewards of autoregressive moving average type log-price processes Let us now consider an inhomogeneous in time autoregressive moving average type model with Gaussian noise terms, where the log-price process Yn is given by the following stochastic difference equation, Yn − Yn−1 = an,0 + an,1 Yn−1 + · · · + an,p Yn−p + bn,1 Wn−1 + · · · + bn,q Wn−q + σn Wn , n = 1, 2, . . . ,

(1.67)

~0 = (Y0 , . . . , Y−p+1 , W0 , . . . , W−q+1 ) is a (p + q)-dimensional random where: (a) Y vector with real-valued components; (b) W1 , W2 , . . . is a sequence of real-valued i.i.d. standard normal variables with mean value 0 and variance 1; (c) the random ~0 and the random sequence W1 , W2 , . . . are independent; (d) p and q are vector Y positive integer numbers; (e) an,0 , an,1 , . . . , an,p , bn,1 , . . . , bn,q , σn , n = 1, 2, . . . are real-valued constants. An autoregressive moving average ARM A(p, q) model with Gaussian noise terms is a particular case of the above model, with coefficients a0 , a1 , . . . , ap , b1 , . . . , bq , σ independent on n. As was shown in Subsection 1.3.5∗ , the above autoregressive type model can be imbedded in the model of multivariate log-price processes introduced in Subsection 1.1.1. Let us introduce the (p + q)-dimensional vector process, ~0,n = (Y0,n,1 , . . . , Y0,n,p , Y0,n,p+1 , . . . , Y0,n,p+q ) Y = (Yn , . . . , Yn−p+1 , Wn , . . . , Wn−q+1 ), n = 0, 1, . . .

(1.68)

We can always assume that the sequence of random variables Wn , n = 1, 2, . . . is the sequence of the first components of the sequence of (p + q)-dimensional i.i.d. 0 0 ~ n0 = (Wn,1 , . . . , Wn,p+q ), n = 1, 2, . . ., with standard Gaussian random vectors W 0 0 0 EW1,i = 0, EW1,i W1,j = I(i = j), i, j = 1, . . . , p + q, i.e., 0 Wn = Wn,1 , n = 1, 2, . . .

(1.69)

~ n = (Wn,1 , Let also of consider (p + q)-dimensional i.i.d. Gaussian vectors W Wn,2 , . . . , Wn,p , Wn,p+1 , Wn,p+2 , . . . , Wn,p+q ) = (Wn , 0, . . . 0, Wn , 0, . . ., 0), n = 1, 2, . . ..

1.3

23

Autoregressive moving average LPP

~ n can be obtained as a linear transformation of vectors W ~ n0 , namely, Vectors W 0 ~ ~ Wn = ΣWn , n = 1, 2, . . ., where (p + q) × (p + q) matrix Σ = kσi,j k, has elements σi,j = (I(i = 1) + I(i = p + 1))I(j = 1), i, j = 1, . . . , p + q. The stochastic difference equation (1.67) can be re-written in the equivalent form of vector stochastic difference equation,   Y0,n,1 − Y0,n−1,1 = an,0 + an,1 Y0,n−1,1 + · · · + an,p Y0,n−1,p     + bn,1 Y0,n−1,p+1 + · · · + bn,q Y0,n−1,p+q     + σn Wn,1 ,      Y − Y = Y0,n−1,1 − Y0,n−1,2 + Wn,2 , 0,n,2 0,n−1,2     . . . . ..  (1.70) Y0,n,p − Y0,n−1,p = Y0,n−1,p−1 − Y0,n−1,p + Wn,p ,    Y0,n,p+1 − Y0,n−1,p+1 = − Y0,n−1,p+1 + Wn,p+1 ,     Y0,n,p+2 − Y0,n−1,p+2 = Y0,n−1,p+1 − Y0,n−1,p+2 + Wn,p+2 ,     ... ...      Y0,n,p+q − Y0,n−1,p+q = Y0,n−1,p+q−1 − Y0,n−1,p+q + Wn,p+q ,    n = 1, 2, . . . . Taking into account the above remarks, one can re-write the stochastic difference equation (1.70) in the following matrix form, ~0,n − Y ~0,n−1 = ~λn + Λn Y ~0,n−1 + Σn W ~ n0 , n = 1, 2, . . . , Y

(1.71)

where ~λn , n = 1, 2, . . . are (p + q)-dimensional vectors and Σn , n = 1, 2, . . . are (p + q) × (p + q) matrices of the following form,    

     ~λn =       

an,0 0 .. . .. . .. . .. . 0

             , Σn =           

σn 0 .. . 0 1 0 .. . 0

0 0 .. . 0 0 0 .. . 0

... ...

... ... ...

...

0 0 .. . 0 0 0 .. . 0

      ,     

(1.72)

and Λn , n = 1, 2, . . . are (p + q) × (p + q) matrices of the following form, Λn =

       =     

an,1 1 .. . 0 0 0 .. . 0

(1.73) an,2 −1 .. . ... ... ... .. . ...

... 0 .. . ... ... ... .. . ...

... ...

... ... ...

...

... ... .. . 0 ... ... .. . ...

... ... .. . 1 ... ... .. . ...

an,p 0 .. . −1 0 0 .. . 0

bn,1 0 .. . 0 −1 1 .. . 0

bn,2 ... .. . ... 0 −1 .. . ...

... ... .. . ... ... 0 .. . ...

... ...

... ... ...

...

... ... .. . ... ... ... .. . 0

... ... .. . ... ... ... .. . 1

bn,q 0 .. . 0 0 0 .. . −1

       .     

24

1

Reward approximations for autoregressive LPP

The stochastic difference equation (1.71) is a particular case of the stochastic difference equation (1.1). Therefore, by Lemma 1.1.1, condition G4∗ (in which one should take parameter k = p + q) holds, for example, with constants, K107 = 1 +

2 max (|an+1,0 | + σn+1 ),

0≤n≤N −1

max0≤n≤N −1, i=1,...,p,j=1,...,q (|an+1,i | ∨ |bn+1,j |) , 2 1 + max0≤n≤N −1 (|an+1,0 | + σn+1 ) 1 , = 2 1 + max0≤n≤N −1 (|an+1,0 | + σn+1 )

K108,1 = K108,l

l = 2, . . . , p + q,

(1.74)

which replace, respectively, constants K26∗ and K27,l∗ , l = 1, . . . , k penetrating condition G4∗ (or constants K1 and K2,l , l = 1, . . . , k given in relation (1.5)). Thus, Theorems 1.1.1 and 1.1.2 can be applied to the vector autoregressive ~0,n given by the vector stochastic difference Markov Gaussian log-price process Y equation (1.71) that yields, in fact, the upper bounds for reward functions and optimal expected rewards for the autoregressive moving average type log-price processes with Gaussian noise terms Yn given by the autoregressive stochastic difference equation (1.67). It is natural to assume for autoregressive moving average models of log-price processes that the corresponding pay-off functions also depend on the corresponding sequences of values for log-prices and noise terms. In this case, we consider real-valued measurable pay-off functions g(n, e~y ), ~ y= (y1 , . . . , yp+q ) ∈ Rp+q , n = 0, 1, . . ., which are assumed to satisfy the following condition, for some vector parameter γ¯ = (γ1 , . . . , γp+q ) with non-negative components: B3 [¯ γ ]: max0≤n≤N sup~y∈Rp

y ~

1+

e P|g(n, p+q i=1

)|

L6,i eγi |yi |

< L5 , for some 0 < L5 < ∞ and

0 ≤ L6,i < ∞, i = 1, . . . , p + q. (0)

Let us Mmax,p,q,n,N be the class of all stopping times τ0,n for the process Yn (0)

such that (a) n ≤ τ0,n ≤ N , (b) event {τ0,n = m} ∈ Fp,q,n,m = σ[Yl0 , n − p + 1 ≤ l0 ≤ m, Wl00 , n − q + 1 ≤ l00 < m], n ≤ m ≤ N . (0) (0) Obviously, the class Mmax,p,q,n,N coincides with the class Mmax,n,N of all ~0,r = (Y0,r,1 , . . . , Y0,r,p+q ) = Markov moments τ0,n for the Markov process Y (Yr , . . . , Yr−p+1 , Wr , . . ., Wr−q+1 ) such that (a) n ≤ τ0,n ≤ N , (b) event (0) ~0,l , n ≤ l ≤ m], n ≤ m ≤ N . {τ0,n = m} ∈ Fn,m = σ[Y In this case, the reward functions are defined for the log-price process Yn ~0,n ) by the following relation, for ~ (its equivalent vector version Y y ∈ Rp+q and n = 0, 1, . . . , N ,

1.3

φ0,n (~ y) =

Autoregressive moving average LPP

sup

~

E~y,n g(τ0,n , eY0,τ0,n ).

25

(1.75)

(0)

τ0,n ∈Mmax,n,N

¯ = (A1 (β), ¯ . . . , Ap+q (β)), ¯ β¯ = (β1 , . . . , βp+q ), ~ β) In this case, functions A( β1 , . . ., βp+q ≥ 0, penetrating formulation of Theorems 1.1.1 and 1.1.2, has the following components, ¯ = K7 K8,j Aj (β)

p+q X 1 (βl + (p + q)2 βl2 ), j = 1, . . . , p + q. 2

(1.76)

l=1

¯ generates a sequence of functions A ¯ = (An,1 (β), ¯ ..., ~ β) ~ n (β) Function A( ¯ n = 0, 1, . . . from the class Ap+q by the following recurrence relation, An,p+q (β)), for any β¯ = (β1 , . . . , βp+q ), βi ≥ 0, i = 1, . . . , p + q, ¯ = ~ n (β) A



β¯ ¯ + A( ¯ ~ n−1 (β) ~ A ~ n−1 (β)) A

for n = 0, for n = 1, 2, . . . .

(1.77)

Let us also define vectors β¯i = (βi,1 , . . . , βi,p+q ) with components βi,j = βj I(j = i), i, j = 1, . . . , p + q. Theorem 1.1.1 takes in this case the following form. Theorem 1.3.1. Let the autoregressive moving average log-price process Yn ~0,n are given, respectively, by the stochastic and its equivalent vector version Y difference equations (1.67) and (1.71). Let also condition B3 [¯ γ ] holds for some vector parameter γ¯ = (γ1 , . . . , γp+q ) with γi ≥ 0, i = 1, . . . , p + q. Then, for any vector parameter β¯ = (β1 , . . . , βp+q ) with components βi ≥ γi , i = 1, . . . , p + q, there exist constants 0 ≤ M7 , M8,i = M8,i (βi ) < ∞, i = 1, . . . , p + q such that the reward functions φn (~ y ) satisfy the following inequalities for ~ y = (y1 , . . . , yp+q ) ∈ Rp+q , 0 ≤ n ≤ N , X |φ0,n (~ y )| ≤ M7 + M8,i i: γi =0

+

X i: γi >0

p+q X γi M8,i exp{( AN −n,j (β¯i )|yj |) }. βi

(1.78)

j=1

Remark 1.3.1. The explicit formulas for the constants M7 and M8,i (βi ), i = 1, . . . , p + q take, according formulas given in Remark 1.1.1, the following form, M7 = L5 , M8,i (βi ) = L5 L6,i I(γi = 0) + L5 L6,i (1 + 2(p+q) × Pp+q γ K (A (β¯ )+ 1 (p+q)2 A2N −1,l (β¯i )) N βi × e 104 l=1 N −1,l i 2 ) i I(γi > 0),

(1.79)

26

1

Reward approximations for autoregressive LPP

where vectors β¯i = (βi,1 , . . . , βi,p+q ) = (β1 I(i = 1), . . . , βp+q I(i = p + q)), i = 1, . . . , p + q. ¯ should be replaced in this case by the foloowing condition Condition D2 [β] assumed to hold for vector parameter β¯ = (β1 , . . . , βp+q ) with non-negative components: P ¯ E exp{ p+q AN,j (β¯i )|Y0,0,j |} < K9,i , i = 1, . . . , p + q, for some 1 < K9,i < D3 [β]: j=1

∞, i = 1, . . . , p + q. In this case the optimal expected reward is defined by the formula, Φ0 =

~ ~0,0 ). Eg(τ0,0 , eY0,τ0,0 ) = Eφ0 (Y

sup

(1.80)

(0)

τ0,0 ∈Mmax,0,N

The following theorem gives conditions and the upper bound for the optimal expected reward Φ0 . Theorem 1.3.2. Let the autoregressive moving average log-price process Yn ~0,n are given, respectively, by the stochastic and its equivalent vector version Y ¯ hold difference equations (1.67) and (1.71). Let also conditions B3 [¯ γ ] and D3 [β] and 0 ≤ γi ≤ βi < ∞, i = 1, . . . , p + q. Then, there exists a constant 0 ≤ M9 < ∞ such that the following inequality takes place, |Φ0 | ≤ M9 .

(1.81)

Remark 1.3.2. The explicit formula for the constant M9 takes, according formulas given in Remark 1.1.2, the following form, X X M9 = L11 + L11 L12,i + L11 L12,i (1 i:γi =0 K7

+ 2(p+q) e

i:γi >0

Pp+q l=1

γ (AN −1,l (β¯i )+ 12 (p+q)2 A2N −1,l (β¯i )) N βi

)

i

γi β

K9,ii .

(1.82)

1.3.2 Space skeleton approximations for option reward functions for autoregressive moving average log-price processes ~0,n has the Gaussian transition probabilities P0,n (~ The Markov process Y y , A) = ~0,n ∈ A/Y ~0,n−1 = ~ P{Y y } defined for ~ y ∈ Rp+q , A ∈ Bp+q , n = 1, 2, . . . by the following relation, ~ n0 ∈ A}. P0,n (~ y , A) = P{~ y + ~λn + Λn ~ y + Σn W

(1.83)

where the vector coefficients ~λn , n = 1, 2, . . . and matrix coefficients Λn , Σn , n = 1, 2, . . . are given by relations (1.72) and (1.73). ~0,n given by the stochastic difference The autoregressive log-price process Y equation (1.71) is a particular case of the multivariate Markov Gaussian log-price

1.3

27

Autoregressive moving average LPP

process considered in Subsection 1.1.1, with parameter k = p+q and characteristics given by relations (1.72) and (1.73). Therefore, Lemma 1.1.2 and Theorems 1.1.3 and 1.1.4 can be applied to the above autoregressive log-price processes with Gaussian noise terms given by the stochastic difference equation (1.67) or equivalently by the vector stochastic difference equation (1.71). In this case, some simplification in the construction of the corresponding skeleton approximation model can be achieved due to specific shift structure of the stochastic difference equation (1.71), which is well seen from the equivalent form (1.70) of this equation. + Let m− ε,n ≤ mε,n be integer numbers, δε,n > 0 and λε,n ∈ R1 , for n = − max(p, q) + 1, − max(p, q) + 2, . . .. ± In this case, one can use parameters m± ε,n,i = mε,n−i+1 , δε,n,i = δε,n−i+1 , ± ± λε,n,i = λε,n−i+1 , for i = 1, . . . , p and mε,n,i = mε,n−i+p+1 , δε,n,i = δε,n−i+p+1 , λε,n,i = λε,n−i+p+1 , for i = p + 1, . . . , p + q, for n = 0, 1, . . .. In this case, the index sets Lε,n , n = 0, 1, . . . take the following form, + Lε,n = {¯ l = (l1 , . . . , lp+q ), m− ε,n,i ≤ li ≤ mε,n,i , i = 1, . . . , p + q} + = {¯ l = (l1 , . . . , lp+q ), m− ε,n−i+1 ≤ li ≤ mε,n−i+1 , i = 1, . . . , p, + m− ε,n−i+p+1 ≤ li ≤ mε,n−i+p+1 , i = p + 1, . . . , p + q}.

(1.84)

Other elements of the space skeleton approximation should be also defined with the use of the above shift construction. + First, the skeleton intervals Iε,n,l should be defined for l = m− ε,n , . . . , mε,n , n = − max(p, q) + 1, . . .,  − 1 if l = m−  ε,n ,  (−∞, δε,n (mε,n + 2 )] + λε,n − 1 1 (1.85) Iε,n,l = (δε,n (l − 2 ), δε,n (l + 2 )] + λε,n if mε,n < l < m+ ε,n ,   + + 1 (δε,n (mε,n − 2 ), ∞) + λε,n if l = mε,n , and then skeleton cubes ˆIε,n,¯l should be defined for ¯ l ∈ Lε,n , n = 0, 1, . . ., ˆI ¯ = Iε,n,1,l × · · · × Iε,n,p,l × Iε,n,p+1,l 1 p p+1 · · · × Iε,n,p+q,lp+q ε,n,l = Iε,n,l1 × · · · × Iε,n−p+1,lp × Iε,n,lp+1 × · · · × Iε,n−q+1,lp+q .

(1.86)

+ Second, skeleton points yε,n,l should be defined for l = m− ε,n , . . . , mε,n , n = − max(p, q) + 1, . . ., yε,n,l = lδε,n + λε,n , (1.87)

and vector skeleton points ~ yε,n,¯l should be defined for ¯ l ∈ Lε,n , n = 0, 1, . . ., ~ yε,n,¯l = (yε,n,1,l1 , . . . , yε,n,p,lp , yε,n,p+1,lp+1 , . . . , yε,n,p+q,lp+q ) = (yε,n,l1 , . . . , yε,n−p+1,lp , yε,n,lp+1 , . . . , yε,n−q+1,lp+q ).

(1.88)

28

1

Reward approximations for autoregressive LPP

Third, skeleton functions, hε,n (y), y ∈ R1 should be defined for n = − max(p, q) + 1, . . .,  + hε,n (y) = yε,n,l if y ∈ Iε,n,l , m− (1.89) ε,n ≤ l ≤ mε,n , ˆ ε,n (~ and vector skeleton functions h y ), ~ y = (y1 , . . . , yp+q ) ∈ Rp+q , should be defined n = 0, 1, . . ., ˆ ε,n (~ h y ) = (hε,n,1 (y1 ), . . . , hε,n,p (yp ), hε,n,p+1 (yp+1 ), . . . , hε,n,p+q (yp+q )) = (hε,n (y1 ), . . . , hε,n−p+1 (yp ), hε,n (yp+1 ), . . . , hε,n−q+1 (yp+q )).

(1.90)

~ε,n are defined, The corresponding space skeleton approximating processes Y for every ε ∈ (0, ε0 ], by the following stochastic transition dynamic relation,   0 ˆ ˆ ~ ˆ ~ ~ ~ ~  Y = h h ( Y ) + λ + Λ h ( Y ) + Σ W ε,n ε,n ε,n−1 ε,n−1 n n ε,n−1 ε,n−1 n  n  n    ~ Yε,0

= 1, 2, . . . , ˆ ε,0 (Y ~0,0 ), =h

(1.91)

where the characteristics ~λn , Λn , and Σn , n = 1, . . . , N are defined by relations (1.72) and (1.73), or by the following equivalent transition dynamic relation given in the form of system of transition relations the system for the components of the ~ε,n , log-price process Y    = hε,n hε,n−1 (Yε,n−1,1 ) + an,0 + an,1 hε,n−1 (Yε,n−1,1 )  Yε,n,1      + · · · + an,p hε,n−p (Yε,n−1,p ) + bn,1 hε,n−1 (Yε,n−1,p+1 )        + · · · + bn,q hε,n−q (Yε,n−1,p+q ) + σn Wn ,       Yε,n,2 = hε,n−1 (Yε,n−1,1 ),      . . . ...     = hε,n−p+1 (Yε,n−1,p−1 ),   Yε,n,p     Yε,n,p+1 = hε,n (Wn ),     Yε,n,p+2 = hε,n−1 (Yε,n−1,p+1 ), (1.92) ... ...     Yε,n,p+q = hε,n−q+1 (Yε,n−1,p+q−1 ),       n = 1, 2, . . . ,      Yε,0,1 = hε,0 (Y0,0,1 ),      ... ...      Y = hε,−p+1 (Y0,0,p ), ε,0,p      Yε,0,p+1 = hε,0 (Y0,0,p+1 ),      ... ...    Yε,0,p+q = hε,−q+1 (Y0,0,p+q ).

1.3

Autoregressive moving average LPP

29

~ε,n defined, for every ε ∈ (0, ε0 ], by the nonlinear The log-price process Y dynamic transition relation (1.92) is a skeleton atomic Markov chain, with the ~ε,n ∈ phase space Rp+q and one-step transition probabilities Pε,n (~ y , A) = P{Y ~ A/Yε,n−1 = ~ y } defined for ~ y ∈ Rp+q , A ∈ Bp+q , n = 1, 2, . . . by the following stochastic transition dynamic relation, X ˆ ε,n−1 (~ Pε,n (~ y , A) = P0,n (h y ), ˆIε,n,¯l ) ~ yε,n,l¯∈A

X

=

ˆ ε,n−1 (~ ¯ n + Λn h ˆ ε,n−1 (~ P{h y) + λ y)

~ yε,n,l¯∈A

~ n0 ∈ ˆI ¯}, + Σn W ε,n,l

(1.93)

where the characteristics ~λn , Λn , and Σn , n = 1, . . . , N are defined by relations (1.72) and (1.73). ~ε,0 ∈ A}, A ∈ Bp+q is concerned, As far as the initial distribution Pε,0 (A) = P{Y it takes, for every ε ∈ (0, ε0 ], the following form, X X ˆ ε,0 (Y ~0,0 ) ∈ ˆI ¯}. Pε,0 (A) = P0,0 (ˆIε,0,¯l ) = P{h (1.94) ε,n,l ~ yε,0,l¯∈A

~ yε,0,l¯∈A

We assume the model with a pay-off function g(n, e~y ), ~ y = (y1 , . . . , yp+q ) ∈ Rp+q do not depend on ε. (ε) Let us also recall, for ε ∈ (0, ε0 ], the class Mmax,n,N of all Markov moments ~ε,n such that (a) n ≤ τε,n ≤ N , (b) event {τε,n = τε,n for the log-price process Y (ε) ~ m} ∈ Fn,m = σ[Yε,l , n ≤ l ≤ m], n ≤ m ≤ N . ~ε,n by In this case, the reward functions are defined for the log-price process Y the following relation, for ~ y ∈ Rp+q and n = 0, 1, . . . , N , φε,n (~ y) =

sup

~

E~y,n g(τε,n , eYε,τε,n ).

(1.95)

(ε)

τε,n ∈Mmax,n,N

Probability measures Pε,n (~ y , A), ~ y ∈ Rp+q , n = 1, 2, . . . are concentrated on finite sets, for every ε ∈ (0, ε0 ]. This obviously implies that the rewards functions |φε,n (~ y )| < ∞, ~ y ∈ Rp+q , n = 1, 2, . . . for every ε ∈ (0, ε0 ]. It is useful also to note that, by the definition, φε,N (~ y ) = g(N, e~y ), ~ y ∈ Rp+q . An analogue of Lemma 1.1.2 can be formulated for the above model. However, the corresponding system of linear equations can be simplified taking into account the specific shift structure of the dynamic relations (1.91), which is well seen from the equivalent form (1.92). By the definition of sets ˆIε,n,¯l , ¯ l ∈ Lε,n , n = 0, 1, . . ., for every ~ y ∈ Rp+q and ¯ n = 0, 1, . . ., there exists the unique lε,n (~ y ) = (lε,n,1 (~ y ), . . . , lε,n,p+q (~ y )) ∈ Lε,n such that ~ y ∈ ˆIε,n,¯lε,n (~y) . The following lemma is the direct corollary of Lemma 1.1.2.

30

1

Reward approximations for autoregressive LPP

~0,n is Lemma 1.3.1. Let the autoregressive moving average log-price process Y defined by the vector stochastic difference equation (1.71), while the corresponding ~ε,n is defined, for every ε ∈ (0, ε0 ], approximating space-skeleton log-price process Y by the stochastic transition dynamic relation (1.91). Then the reward functions φε,n (~ y ) and φn+m (~ yε,n+m,¯l ), ¯ l ∈ Lε,n+m , m = 1, . . . N − n are, for every ε ∈ (0, ε0 ], ~ y ∈ Rp+q and n = 0, . . . , N − 1, the unique solution for the following recurrence finite system of linear equations,  ~ yε,N,l¯ ),   φε,N (~yε,N,¯l ) = g(N, e    ¯  l = (l1 , . . . , lp+q ) ∈ Lε,N ,      φε,n+m (~ yε,n+m,¯l ) = max g(n + m, e~yε,n+m,l¯),    +  Pmε,n+m+1   0 φε,n+m+1 (~ yε,n+m+1,(l10 ,l1 ,...,lp−1 ,lp+1  ,lp+1 ,...,lp+q ) ) 0  l10 ,lp+1 =m−  ε,n+m+1     ×P{yε,n+m,l1 + an+m+1,0 + an+m+1,1 yε,n+m,l1      + · · · + an+m+1,p yε,n+m−p+1,lp + bn+m+1,1 yε,n+m,lp+1       + · · · + bn+m+1,q yε,n+m−q+1,lp+q    0 + σn+m+1 Wn+m+1 ∈ Iε,n+m+1,l10 , Wn+m+1 ∈ Iε,n+m+1,lp+1 } , (1.96)    ¯l = (l1 , . . . , lp+q ) ∈ Lε,n+m , m = N − n − 1, . . . , 1,       φ (~ y ) = max g(n, e~y ),   ε,n +   Pmε,n+1   φε,n+1 (~ yε,n+1,(l10 ,lε,n,1 (~y),...,lε,n,p−1 (~y),  0  l10 ,lp+1 =m−  ε,n+1     0 lp+1 ,lε,n,p+1 (~ y ),...,lε,n,p+q−1 (~ y )) ) × P{yε,n,lε,n,1 (~ y ) + an+1,0       + an+1,1 yε,n,lε,n,1 (~y) + · · · + an+1,p yε,n−p+1,lε,n,p (~y)      + bn+1,1 yε,n,lε,n,p+1 (~y) + · · · + bn+1,q yε,n−q+1,lε,n,p+q (~y)      0 + σn+1 Wn+1 ∈ Iε,n+1,l10 , Wn+1 ∈ Iε,n+m+1,lp+1 } , (ε)

while the optimal expected reward Φε = Φε (Mmax,0,N ) is defined by the following formula, for every ε ∈ (0, ε0 ], X Φε = P0,0 (ˆIε,0,¯l ) φε,0 (~ yε,0,¯l ). (1.97) ¯ l∈Lε,0

Proof of Lemma 1.3.1 is analogous to the proof of Lemma 1.2.1. Obviously, |Φε | < ∞, for every ε ∈ (0, ε0 ].

1.3.3 Convergence of option reward functions for space skeleton approximations for option reward functions for autoregressive moving average log-price processes with Gaussian noise terms Let now formulate conditions of convergence for the above reward functions. We shall apply Theorem 1.3.3.

1.3

Autoregressive moving average LPP

31

Let us introduce special shorten notations for the maximal and the minimal skeleton points, for n = − max(p, q) + 1, . . . and ε ∈ (0, ε0 ], ± zε,n = δε,n m± ε,n + λε,n .

(1.98)

We impose the following condition on the parameters of the space skeleton model: ± N3 : (a) δε,n → 0 as ε → 0, for n = − max(p, q) + 1, . . . , N ; (b) ±zε,n → ∞ as ± ε → 0, for n = − max(p, q) + 1, . . . , N ; (c) ±zε,n , n = − max(p, q) + 1, . . . , N are non-decreasing sequences, for every ε ∈ (0, ε0 ].

The moment condition, which provide the existence of appropriate upper bounds for reward functions is condition B3 [¯ γ ], which we assume to hold for the pay-off function g(n, e~y ), ~ y = (y1 , . . . , yp+q ) ∈ Rp+q , for some vector parameter γ¯ = (γ1 , . . . , γp+q ) with non-negative components. Let now formulate the corresponding convergence conditions. We assume that the following variant of continuity condition I1 holds: I3 : There exists sets Y0n ∈ Bp+q , n = 0, . . . , N such that function g(n, e~y ) is continuous in points ~ y ∈ Y0n , for every n = 0, . . . , N . We assume that the following variant of condition of weak continuity O1 , in which characteristics ~λn , Λn , Σn , n = 1, 2, . . . are given by relations (1.36) and (1.37), holds: ~ n0 ∈ Y0n } = 0, for ~ y0 ∈ Y0n−1 , n = 1, . . . , N , where O3 : P{~ y0 + ~λn + Λn ~ y0 + Σn W Y0n , n = 0, . . . , N are sets penetrating condition I3 . A remark analogous to Remark 1.1.3 can be made about the condition O3 . 0 ~ n0 = (0, . . . , 0, Wn,1 , 0, . . . , 0) is a GausRemark 1.3.3. If σn2 = 0, then Σn W sian random vector (which should be considered as a column vector with p + q components) with the only (p + 1)-th non-zero component. The set R1,p+q,n = {~ y : ~ y = Σn w, ~ w ~ ∈ Rp+q } = {~ y = (0, . . . , 0, yp+1 , 0, . . . , 0), yp+1 ∈ R1 } is a one-dimensional Euclidean hyper-subspace of Rp+q . The probability density func~ n , with respect to Lebesgue measure L1,p+q,n (A) tion of the random vector W in the hyper-subspace R1,p+q,n , is concentrated and strictly positive in points ~ n0 ∈ Y0n } = 0, if and only ~ y ∈ R1,p+q,n . This implies that P{~ y0 + ~λn + Λn ~ y0 + Σn W 0 0 0 if L1,p+q,n (Y1,p+q,n,y0 ) = 0, where Y1,p+q,n,y0 = (Yn − ~ y0 − ~λn − Λn ~ y0 ) ∩ R1,p+q,n 0 ~ is the cut of the set Yn − ~ y0 − λn − Λn ~ y0 by the hyper-subspace R1,p+q,n . If 0 0 ~ n = Σn W ~ n0 = (σn Wn,1 σn2 > 0 then W , 0, . . . , 0, Wn,1 , 0, . . . , 0) is a Gaussian random vector (which should be considered as a column vector with p+q components) with two the first and (p + 1)-th non-zero components. The set R1,p+q,n = {~ y : ~ y = Σn w, ~ w ~ ∈ Rp+q } = {~ y = (y1 , 0, . . . , 0, yp+1 , 0, . . . , 0), σn−1 y1 = yp+1 ∈ R1 } is again a one-dimensional Euclidean hyper-subspace of Rp+q . The probability

32

1

Reward approximations for autoregressive LPP

~ n , with respect to Lebesgue measure density function of the random vector W L1,p+q,n (A) in the hyper-subspace R1,p+q,n , is concentrated and strictly positive ~ n0 ∈ Y0n } = 0, if and in points ~ y ∈ R1,p+q,n . This implies that P{~ y0 +~λn +Λn ~ y0 +Σn W 0 0 0 only if L1,p+q,n (Y1,p+q,n,y0 ) = 0, where Y1,p+q,n,y0 = (Yn −~ y0 −~λn −Λn ~ y0 )∩R1,p+q,n 0 ~ is the cut of the set Yn − ~ y0 − λn − Λn ~ y0 by the hyper-subspace R1,p+q,n . The following theorem is the direct corollary of Theorem 1.1.3. Theorem 1.3.3. Let the mixed autoregressive moving average log-price pro~0,n is defined by the vector stochastic difference equation (1.71), while the cess Y ~ε,n is defined, for corresponding approximating space-skeleton log-price process Y every ε ∈ (0, ε0 ], by the stochastic transition dynamic relation (1.91). Let also conditions B3 [¯ γ ] holds for some vector parameter γ¯ = (γ1 , . . . , γp+q ) with nonnegative components, and, also, conditions N3 , I3 , and O3 hold. Then, for every n = 0, 1, . . . , N , the following relation takes place for any ~ yε → ~ y0 ∈ Y0n , φε,n (~ yε ) → φ0,n (~ y0 ) as ε → 0.

(1.99)

~0,n and its space skeleton approximating logProof. The log-price process Y ~ price process Yε,n defined, respectively, in (1.71) and (1.91) are particular variants of the corresponding log-price processes defined, respectively, in (1.1), and (1.22). Conditions B3 [¯ γ ], N3 , I3 , and O3 are just variants of conditions B1 [¯ γ ], N1 , I1 , and O1 for the model considered in the present subsection. Thus all condition of Theorem 1.1.3 hold. By applying this theorem we get the convergence relation (1.99). 

1.3.4 Convergence of optimal expected rewards for space skeleton approximations of autoregressive moving average log-price processes Let us now give conditions for convergence for optimal expected rewards Φε in the above space skeleton approximation model. We shall apply Theorem 1.1.4. ¯ = (A1 (β), ¯ . . . , Ap (β)) ¯ ~ β) ¯ with the function A( In this case, condition D1 [β], given by relation (1.40), should be assumed to hold for some vector parameter β¯ = (β1 , . . . , βp ) with non-negative components and the corresponding vectors β¯i = (βi,1 , . . . , βi,p ) with components βi,j = I(i = j), i, j = 1, . . . , p. Condition K1 should be replaced by the following condition imposed on the ~0,0 ∈ A}: initial distribution P0,0 (A) = P{Y K3 : P0,0 (Y00 ) = 1, where Y00 is the set introduced in conditions I3 . The following theorem is a corollary of Theorem 1.1.4. Theorem 1.3.4. Let the mixed autoregressive moving average log-price pro~0,n is defined by the vector stochastic difference equation (1.71), while the cess Y ~ε,n is defined, for corresponding approximating space-skeleton log-price process Y

33

1.4 Modulated Markov Gaussian LPP

every ε ∈ (0, ε0 ], by the stochastic transition dynamic relation (1.91). Let also ¯ holds for some vector parameters γ¯ = (γ1 , . . . , γp+q ) conditions B3 [¯ γ ] and D3 [β] and β¯ = (β1 , . . . , βp+q ) such that, for every i = 1, . . . , p + q either βi > γi > 0 or βi = γi = 0, and also conditions N3 , I3 , O3 and K3 hold. Then, the following relation takes place, Φε → Φ0 as ε → 0. (1.100) ¯ and K3 are just re-formulation, respectively, of conProof. Conditions D3 [β] ¯ ditions D1 [β] and K1 used in Theorem 1.1.4. Other conditions of this theorem also holds that was pointed out in the proof of Theorem 1.3.3. By applying Theorem 1.1.4 we get convergence relation (1.66). 

1.4 Modulated Markov Gaussian LPP In this section, we present results concerned space skeleton approximations for rewards of American type options for modulated Markov Gaussian log-price processes with linear drift and constant diffusion coefficients.

1.4.1 Upper bounds for rewards of modulated Markov Gaussian log-price processes with linear drift and bounded volatility coefficients Let us consider a modulated multivariate Markov Gaussian log-price process ~ 0,n = (Y ~0,n , X0,n ) given by the vector modulated stochastic difference equation, Z

 ~ ~0,n−1 = ~λn (X0,n−1 ) Y0,n − Y    ~0,n−1 + Σn (X0,n−1 )W ~ n0 , + Λn (X0,n−1 )Y  X = Cn (X0,n−1 , Un ),   0,n n = 1, 2, . . . ,

(1.101)

~ 0,0 = (Y ~0,0 , X0,0 ) is a random vector taking values in the space Z = where: (a) Z Rk × X, where X is a Polish (complete, separable, metric) space with a metric ~ n0 , Un ), n = 1, 2, . . . dX (x0 , x00 ) and Borel σ-algebra of measurable subsets BX ; (b) (W is a sequence of i.i.d. random vectors taking values in the space Rk × U (where U is a measurable space with σ-algebra of measurable subsets BU ), moreover, 0 0 ~ n0 = (Wn,1 W , . . . , Wn,k ), n = 1, 2, . . . is a sequence of k-dimensional i.i.d. random 0 vectors, which have a standard multivariate normal distribution with EW1,i = 0 0 0, EW1,i W1,j = I(i = j), i, j = 1, . . . , k, while random variables U1 , U2 , . . . have a ~ n0 = w}, ~ n = 1, 2, . . .; (c) regular conditional distribution Gw~ (A) = P{Un ∈ A/W ~ 0,0 = (Y ~0,0 , X ~ 0,0 ) and the random sequence (W ~ n0 , Un ), n = the random vector Z 1, 2, . . . are independent; (d) ~λn (x) = (λn,1 (x), . . . , λn,k (x)), n = 1, 2, . . . are kdimensional vector functions acting from the space X to the space Rk ; (e) Λn (x) =

34

1

Reward approximations for autoregressive LPP

kλn,i,j (x)k, n = 1, 2, . . . and Σn (x) = kσn,i,j (x)k, n = 1, 2, . . . are k × k matrix functions wich elements are real-valued measurable functions defined on the space X; (f) Cn (x, u), n = 1, 2, . . . are measurable functions acting from the space X × U to the space X. This model was introduced in Subsection 1.4.4∗ and it is a particular case of the modulated multivariate Markov Gaussian log-price processes considered in Subsection 4.5.1∗ . In this case, the drift coefficient µ ~ n (~ y , x) = (µn,i (~ y , x), i = 1, . . . , k) and diffusion coefficient Σn (~ y , x) = kσn,i,j (~ y , x)k, have the particular form, namely, for (~ y , x) ∈ Rk × X, n = 1, 2, . . ., µ ~ n (~ y , x) = ~λn (x) + Λn (x)~ y , Σn (~ y , x) = Σn (x).

(1.102)

Let us assume that the following condition holds:

 2 G1 : max0≤n≤N −1, i,j=1,...,k supx∈X |λn+1,i (x)| + |λn+1,i,j (x)| + σn+1,i,j (x) < K10 , for some 0 < K10 < ∞. Lemma 1.4.1 Let the multivariate modulated Markov Gaussian log-price process ~0,n is given by the stochastic difference equation (1.101). Let also condition G1 Y holds. Then condition G4∗ holds for vector coefficients µ ~ n (~ y , x) and matrix coefficients Σn (~ y , x) defined by relation (1.102), i.e., there exist constants 0 < K11 < ∞ and 0 ≤ K12,l < ∞, l = 1, . . . , k such that, 2 (~ y , x) |µn+1,i (~ y , x)| + σn+1,i,j < K11 . Pk 0≤n≤N −1, i,j=1,...,k ~ z =(~ y ,x)∈Z 1 + l=1 K12,l |yl |

max

sup

(1.103)

Proof. Obviously, for every (~ y , x) ∈ Rk × X, i, j = 1, . . . , k, n = 0, . . . , N − 1, 2 (|µn+1,i (~ y , x)| + σn+1,i,j (~ y , x))

= (|λn+1,i (x) +

k X

2 λn+1,i,l (x) yl )| + σn+1,i,j (x))

l=1

0

k X γi AN −n,j (β¯i )|yj |) }. M11,i exp{( βi

(1.109)

j=1

Remark 1.4.1. The explicit formulas for the constants M10 and M11,i = M11,i (βi ), i = 1, . . . , k take, according formulas given in Remark 4.5.5∗ , the following form, M10 = L7 , M11,i (βi ) = L7 L8,i I(γi = 0) + L7 L8,i (1 Pk γ K (A (β¯ )+ 1 k2 AN −1,l (β¯i )) N βi + 2k e 11 l=1 N −1,l i 2 ) i I(γi > 0).

(1.110)

In this case the optimal expected reward is defined by the formula, Φ0 =

sup

~

Eg(τ0 , eY0,τ0 , X0,τ0 )

τ0 ∈Mmax,0,N

~0,0 , X0,0 ). = Eφ0 (Y

(1.111)

¯ formulated in Section 1.4.5∗ should be replaced In this case, condition D6 [β] by the following condition assumed to hold for vector parameter β¯ = (β1 , . . . , βk ) with non-negative components: P ¯ E exp{ k AN,j (β¯i )|Y0,0,j |} < K13,i , i = 1, . . . , k, for some 1 < K13,i < D4 [β]: j=1

∞, i = 1, . . . , k. Theorem 4.5.4∗ takes in this case the following form. Theorem 1.4.2. Let the multivariate modulated Markov Gaussian log-price ~0,n is given by the vector modulated stochastic difference equation (1.101). process Y ¯ hold for some vector Let condition G1 holds and, also, conditions B4 [¯ γ ] and D4 [β] ¯ parameters γ¯ = (γ1 , . . . , γk ) and β = (β1 , . . . , βk ) such that 0 ≤ γi ≤ βi < ∞, i =

1.4

Modulated Markov Gaussian LPP

37

1, . . . , k. Then, there exists a constant 0 ≤ M12 < ∞ such that the following inequality takes place, |Φ0 | ≤ M12 . (1.112) Remark 1.4.2. The explicit formula for the constant M12 takes, according formulas given in Remark 4.5.6∗ , the following form, X X M12 = L7 + L7 L8,i + L7 L8,i (1 i:γi =0 k K11

+2 e

i:γi >0

Pk l=1

γ

(AN −1,l (β¯i )+ 21 k2 AN −1,l (β¯i )) N βi

)

i

γi β

i K13,i .

(1.113)

1.4.2 Space-skeleton approximations for option rewards of modulated Markov Gaussian log-price processes with linear drift and constant diffusion coefficients ~ 0,n has transition probabilities The multivariate modulated Markov process Z ~ 0,n ∈ A/Z ~ 0,n−1 = ~z} defined for ~z = (~ P0,n (~z, A) = P{Z y , x) ∈ Z = Rk × X, A ∈ BZ , n = 1, 2, . . . by the following relation, ~ n0 , Cn (x, Un )) ∈ A}. P0,n (~z, A) = P{(~ y + ~λn (x) + Λn (x)~ y + Σn (x)W

(1.114)

Let us construct the corresponding space skeleton approximating processes ~ Zε,n , n = 0, 1, . . ., for ε ∈ (0, ε0 ], according the algorithm described in Subsections 7.3.2∗ and 8.4.2∗ . Here and henceforth, we assume that X is a Polish metric space, i.e., a complete, separable, metric space. + Let m− ε,n,j ≤ mε,n,j , j = 0, . . . , k, n = 0, 1, . . . be integer numbers, δε,n,i > 0 and λε,n,i ∈ R1 , i = 1, . . . , k, n = 0, 1, . . .. Let us also define index sets Lε,n , n = 0, 1, . . ., + Lε,n = {¯ l = (l0 , l1 , . . . , lk ), lj = m− ε,n.j , . . . , mε,n,j , j = 0, . . . , k}.

(1.115)

First, the skeleton intervals Iε,n,i,l should be constructed for li = m− ε,n,i , . . ., i = 1, . . . , k, n = 0, 1, . . . , N ,  − 1 if l = m−  ε,n,i ,  (−∞, δε,n,i (mn,i + 2 )] + λε,n,i

m+ ε,n,i ,

Iε,n,i,l =

(δε,n,i (l − 1 ), δε,n,i (l + 1 )] + λε,n,i

2 2   (δ + 1 ε,n,i (mn,i − 2 ), ∞) + λε,n,i

+ if m− ε,n,i < l < mε,n,i ,

if l =

(1.116)

m+ ε,n,i .

+ Then skeleton cubes Iε,n,l1 ,...,lk = Iε,n,1,l1 ×· · ·×Iε,n,k,lk , li = m− ε,n,i , . . . , mε,n,i , i = 1, . . . , k, n = 0, 1, . . . should be defined. By the definition, the skeleton points yε,n,i,li = li δε,n,i + λε,n,i ∈ Iε,n,i,li and, + thus, vector points (yε,n,1,l1 , . . . , yε,n,k,lk ) ∈ Iε,n,l1 ,...,lk , for li = m− ε,n,i , . . . , mε,n,i , i = 1, . . . , k, n = 0, 1, . . ..

38

1

Reward approximations for autoregressive LPP

+ Second, non-empty sets Jε,n,l ∈ BX , l = m− ε,n,0 , . . . , mε,n,0 , n = 0, 1, . . . , N , 0 00 such that (a) Jε,n,l0 ∩Jε,n,l00 = ∅, l 6= l , n = 0, 1, . . .; (b) X = ∪m− ≤l≤m+ Jε,n,l , ε,n,0 ε,n,0 n = 0, 1, . . ., should be constructed. Recall that one of our model assumptions is that X is a Polish space. In this case it is natural to assume that there exist "large" sets Kε,n , n = 0, 1, . . . and + "small" non-empty sets Kε,n,l ∈ BX , l = m− ε,n,0 , . . . , mε,n,0 , n = 0, 1, . . ., such that 0 00 (c) Kε,n,l0 ∩ Kε,n,l00 = ∅, l 6= l , n = 0, 1, . . .; (d) ∪m− ≤l≤m+ Kε,n,l = Kε,n , n = ε,n,0 ε,n,0 0, 1, . . .. The exact sense of epithets "large" and "small" used above is clarified in condition N2 formulated below. Then, the sets Jε,n,l can be defined in the following way, for n = 0, 1, . . ., ( + Kε,n,l if m− ε,n,0 ≤ l < mε,n,0 , Jε,n,l = (1.117) Kε,n,m+ ∪ Kε,n if l = m+ ε,n,0 . ε,n,0

The standard case is, where X = {x0l , l0 = m− , . . . , m+ } is a finite set and metric dX (xl00 , xl000 ) = I(xl00 6= xl000 ). In this case, the simplest choice m0± ε,n = m± , n = −r + 1, . . . and to take sets Kε,n = X, n = 0, 1, . . . and sets Kε,n,l = Jε,n,l0 = {xl0 }, l0 = m− , . . . , m+ , n = −r + 1, . . .. + Finally skeleton points xε,n,l ∈ Kε,n,l , l = m− ε,n,0 , . . . , mε,n,0 , n = 0, 1, . . . should be chosen. Third, skeleton sets Aε,n,¯l and skeleton points, ~zε,n,¯l ∈ Aε,n,¯l can be defined + for ¯ l ∈ Lε,n = {¯ l = (l0 , . . . , lk ), lj = m− ε,n,j , . . . , mε,n,j , j = 0, . . . , k}, n = 0, 1, . . ., in the following way, Aε,n,¯l = Iε,n,l1 ,...,lk × Jε,n,l0 , (1.118) and ~zε,n,¯l = (~ yε,n,¯l , xε,n,¯l ) = ((yε,n,1,l1 , . . . , yε,n,k,lk ), xε,n,l0 ).

(1.119)

Fourth, skeleton functions, hε,n,i (y), y ∈ R1 , i = 1, . . . , k, n = 0, 1, . . . , N should be defined by the following formulas, hε,n,i (y) =



yε,n,i,l

+ if y ∈ Iε,n,i,l , m− ε,n,i < l < mε,n,i ,

(1.120)

and skeleton functions hε,n,0 (x), x ∈ X, n = 0, 1, . . ., should be defined by the following formula, hε,n,0 (x) =



xε,n,l

+ if x ∈ Jε,n,l , m− ε,n,0 ≤ l ≤ mε,n,0 .

(1.121)

ˆ ε,n (~ Finally, vector skeleton functions h y ), ~ y = (y1 , . . ., yk ) ∈ Rk , n = 0, 1, . . . , N should be defined by the following formula, ˆ ε,n (~ h y ) = (hε,n,1 (y1 ), . . . , hε,n,k (yk )),

(1.122)

1.4

Modulated Markov Gaussian LPP

39

¯ ε,n (~z), ~z = (~ and then vector skeleton functions h y , x) ∈ Z, n = 0, 1, . . . , N can be defined by the following formula, ¯ ε,n (~ ˆ ε,n (~ h y ) = (h y ), hε,n,0 (x)).

(1.123)

~ ε,n = (Y ~ε,n , Xε,n ) The corresponding space skeleton approximating processes Z are defined by the following stochastic transition dynamic relation,   ˆ ε,n−1 (Y ˆ ε,n h ~ε,n−1 ) + ~λn (hε,n−1,0 (X0,n−1 ))   Y~ε,n = h      ˆ ε,n−1 (Y ~0,n−1 ) + Λn (hε,n−1,0 (X0,n−1 ))h       ~ n0 , + Σn (hε,n−1,0 (X0,n−1 ))W (1.124)     X = h C (h (X ), U ) , ε,n ε,n,0 0,n ε,n−1,0 ε,n−1 0,n      n = 1, 2, . . . ,     ~ ˆ ε,0 (Y ~0,0 ), Xε,0 = hε,0,0 (X0,0 ). Yε,0 = h ~ ε,n is, for every ε ∈ (0, ε0 ], a skeleton The approximating log-price process Z atomic Markov chain, with one-step transition probabilities Pε,n (~z, A), ~z = (~ y , x) ∈ Z, A ∈ BZ , n = 1, 2, . . . given by the following formula, X ¯ ε,n−1 (~z), A ¯) Pε,n (~z, A) = P0,n (h ε,n,l ~ zε,n,l¯∈A

X

=

ˆ ε,n−1 (~ P{ h y ) + ~λn (hε,n−1,0 (x))

~ zε,n,l¯∈A

ˆ ε,n−1 (~ + Λn (hε,n−1,0 (x))h y) ~ n0 , + Σn (hε,n−1,0 (x))W

 hε,n,0 (C0,n (hε,n−1,0 (x), U0,n ) ∈ Aε,n,¯l }.

(1.125)

~ ε,0 ∈ A}, A ∈ BZ is concerned, As far as the initial distribution Pε,0 (A) = P{Z it takes, for every ε ∈ (0, ε0 ], the following form, X Pε,0 (A) = P0,0 (Aε,0,¯l ) ~ zε,0,l¯∈A

=

X

 ˆ ε,0 (Y ~0,0 ), hε,0,0 (X0,0 ) ∈ A ¯}. P{ h ε,n,l

(1.126)

~ zε,0,l¯∈A

We assume that the pay-off functions g(n, e~y , x), ~ y = (y1 , . . . , yk ) ∈ Rk , x ∈ X, n = 0, 1, . . . do not depend on ε. (ε) Let us also recall the class Mmax,n,N of all Markov moments τε,n for the log(ε) ~ ε,n such that (a) n ≤ τε,n ≤ N , (b) event {τε,n = m} ∈ Fn,m price process Z = ~ ε,l , n ≤ l ≤ m], n ≤ m ≤ N . σ[Z

40

1

Reward approximations for autoregressive LPP

~ ε,n by In this case, the reward functions are defined for the log-price process Z the following relation, for ~z = (~ y , x) ∈ Z and n = 0, 1, . . . , N , φε,n (~ y , x) =

sup

~

E(~y,x),n g(τε,n , eYε,τε,n , Xε,τε,n ).

(1.127)

(ε) τε,n ∈Mmax,n,N

Probability measures Pε,n (~z, A), ~z ∈ Z, n = 1, 2, . . . are concentrated on finite sets, for every ε ∈ (0, ε0 ]. This obviously implies that the rewards functions |φε,n (~ y , x)| < ∞, ~z = (~ y , x) ∈ Z, n = 1, 2, . . . for every ε ∈ (0, ε0 ]. It is also useful to note that φε,N (~ y , x) = g(N, e~y , x), (~ y , x) ∈ Z. ¯ By the definition of sets Aε,n,¯l , l ∈ Lε,n , n = 0, 1, . . ., for every ~z = (~ y , x) ∈ Z = Rk × X and n = 0, 1, . . ., there exists the unique ¯ lε,n (~z) = (lε,n,0 (~z), lε,n,1 (~z), . . . , lε,n,k (~z)) ∈ Lε,n such that ~z ∈ Aε,n,¯lε,n (~z) . The following lemma is the direct corollary of Lemma 8.4.2∗ and relation (8.124)∗ . Lemma 1.4.2. Let the multivariate modulated Markov Gaussian log-price ~ 0,n is defined by the vector modulated stochastic difference equation process Z (1.101), while the corresponding approximating space-skeleton log-price process ~ ε,n is defined, for every ε ∈ (0, ε0 ], by the stochastic transition dynamic relaZ tion (1.124). Then the reward functions φε,n (~ y , x) and φn+m (~ yε,n+m,¯l , xε,n+m,¯l ), ¯ l ∈ Lε,n+m , m = 1, . . . N − n are, for every ε ∈ (0, ε0 ], ~z = (~ y , x) ∈ Z and n = 0, . . . , N − 1, the unique solution for the following recurrence finite system of linear equations,  ~ yε,N,l¯ , xε,N,¯l ),   φε,N (~yε,N,¯l , xε,N,¯l ) = g(N, e     ¯l ∈ Lε,N ,       φε,n+m (~ yε,n+m,¯l , xε,n+m,¯l ) = max g(n + m, e~yε,n+m,l¯, xε,n+m,¯l ),     P   yε,n+m+1,¯l0 , xε,n+m+1,¯l0 )  ¯ l0 ∈Lε,n+m+1 φε,n+m+1 (~     ×P0,n+m+1 (~zε,n+m,¯l , Aε,n+m+1,¯l0 ) , (1.128)    ¯  l ∈ Lε,n+m , m = N − n − 1, . . . , 1,       φε,n (~ y ) = max g(n, e~y ),     P    yε,n+1,¯l0 , xε,n+1,¯l0 ) ¯  l0 ∈Lε,n+1 φε,n+1 (~      ×P0,n+1 (~zε,n,¯lε,n (~z) , Aε,n+1,¯l0 ) , (ε)

while the optimal expected reward Φε = Φε (Mmax,0,N ) is defined by the following formula, for every ε ∈ (0, ε0 ], X Φε = P0,0 (Aε,0,¯l ) φε,0 (~ yε,0,¯l , xε,0,¯l ). (1.129) ¯ l∈Lε,0

1.4

Modulated Markov Gaussian LPP

41

Obviously, |Φε | < ∞, for every ε ∈ (0, ε0 ].

1.4.3 Convergence of option reward functions for space skeleton approximations for modulated Markov Gaussian log-price processes with linear drift and bounded volatility coefficients Let now formulate conditions of convergence for the reward functions for modulated Markov Gaussian log-price processes with linear drift and constant diffusion coefficients. We impose on parameters of the space skeleton model condition N2∗ , introduced in Subsection 7.3.2∗ . It takes the following form: ± N4 : (a) δε,n,j → 0 as ε → 0, for j = 1, . . . , k, n = 0, 1, . . . , N ; (b) ±zε,n,j →∞ ± as ε → 0, for j = 1, . . . , k, n = 0, 1, . . . , N ; (c) ±zε,n,j , n = 0, 1, . . . , N are non-decreasing sequences, for every j = 1, . . . , k and ε ∈ (0, ε0 ]; (d) for any x ∈ X and d > 0, there exists εx,d ∈ (0, ε0 ] such that the ball Rd (x) ⊆ Kε,n , n = 0, . . . , N , for ε ∈ (0, εx,d ]; (e) sets Kε,n,l have diameters dε,n,l = supx0 ,x00 ∈Kε,n,l dX (x0 , x00 ) such that dε,n = maxm− ≤l≤m+ dε,n,l → 0 as ε → ε,n,0 ε,n,0 0, n = 0, . . . , N .

Let us formulate the corresponding moment conditions, which provide the existence of appropriate upper bounds for reward functions. We assume that condition B4 [¯ γ ]∗ holds for the pay-off functions g(n, e~y , x), ~z = (~ y , x) ∈ Z, for some vector parameter γ¯ = (γ1 , . . . , γk ) with non-negative components. Also, we assume that condition G1 holds. Let now formulate the corresponding convergence conditions. We assume that condition I7∗ , introduced in Subsection 7.3.2∗ , holds. It takes the following form: I4 : There exists sets Z0n ∈ BZ , n = 0, . . . , N such that function g(n, e~y , x) is continuous in points ~z = (~ y , x) ∈ Z0n for every n = 0, . . . , N . Also, we assume that the following analogue of condition O5∗ holds: O4 : There exist sets Xn ∈ BX , n = 0, . . . , N and Un ∈ BU , n = 1, . . . , N such that: (a) ~λn (xε ) → ~λn (x0 ), Λn (xε ) → Λn (x0 ), Σn (xε ) → Σn (x0 ) and Cn (xε , u) → Cn (x0 , u) as ε → 0, for any xε → x0 ∈ Xn−1 as ε → 0, u ∈ Un , n = 1, . . . , N ; ~ n0 , (b) P{Un ∈ Un } = 1, n = 1, . . . , N ; (c) P{ ~ y0 +~λn (x0 )+Λn (x0 )~ y0 +Σn (x0 )W  0 00 0 00 y0 , x0 ) ∈ Zn−1 ∩ Zn−1 and Cn (x0 , Un ) ∈ Zn ∪ Zn } = 0 for every ~z0 = (~ n = 1, . . . , N , where Z0n , n = 0, . . . , N are sets introduced in condition I7 and Z00n = Rk × Xn , n = 0, . . . , N .

42

1

Reward approximations for autoregressive LPP

The following theorem is a corollary of Theorem 8.4.7∗ . Theorem 1.4.3. Let the multivariate modulated Markov Gaussian log-price ~ 0,n is defined by the vector modulated stochastic difference equation process Z (1.101), while the corresponding approximating space-skeleton log-price process ~ ε,n is defined, for every ε ∈ (0, ε0 ], by the dynamic transition relation (1.124). Z Let also conditions B4 [¯ γ ] holds for some vector parameter γ¯ = (γ1 , . . . , γk ) with non-negative components, and, also, conditions G1 , N4 , I4 , and O4 hold. Then, for every n = 0, 1, . . . , N , the following relation takes place for any ~zε = (~ yε , xε ) → ~z0 = (~ y0 , x0 ) ∈ Z0n ∩ Z00n , φε,n (~ yε , xε ) → φ0,n (~ y0 , x0 ) as ε → 0.

(1.130)

~ 0,n defined by the transition dynamic relation Proof. The log-price process Z (1.101) is a particular case of the Markov Gaussian log-price process defined in Subsection 8.4.2∗ and Theorem 1.4.3 is a particular case of Theorem 8.4.7∗ . Indeed, Conditions N4 , B4 [¯ γ ], and I4 , which are just re-formulations of conditions N2∗ , B1 [¯ γ ]∗ , and I7∗ , are used in both Theorem 1.4.3 and Theorem 8.4.7∗ . By Lemma 1.4.1, condition G1 implies that condition G4∗ holds. Condition O4 is the variant of condition O5∗ for the log-price process defined by the transition dynamic relation (1.101). Thus, all conditions of Theorem 8.4.7∗ hold. By applying this theorem, we get the convergence relation (1.130). 

1.4.4 Convergence of optimal expected rewards for space skeleton approximations of modulated Markov Gaussian log-price processes with linear drift and bounded volatility coefficients Let us now give conditions for convergence for optimal expected rewards Φε in the above space skeleton approximation model. We shall apply Theorem 8.4.8∗ . ¯ = (A1 (β), ¯ . . . , Ak (β)) ¯ ~ β) ¯ with the function A( In this case, condition D4 [β], given by relations (2.189) and (1.108), should be assumed to hold for some vector parameter β¯ = (β1 , . . . , βk ) with non-negative components and the corresponding vectors β¯i = (βi,1 , . . . , βi,k ) with components βi,j = I(i = j), i, j = 1, . . . , k. Condition K15∗ should be replaced by the following condition imposed on the ~ 0,0 ∈ A}: initial distribution P0,0 (A) = P{Z K4 : P0,0 (Z00 ) = 1, where Z00 is the set introduced in conditions I4 . The following theorem is a corollary of Theorem 8.4.8∗ . Theorem 1.4.4. Let the multivariate modulated Markov Gaussian log-price ~ 0,n is defined by the vector modulated stochastic difference equation process Z (1.101), while the corresponding approximating space-skeleton log-price process

43

1.5 Modulated autoregressive LPP

~ ε,n is defined, for every ε ∈ (0, ε0 ], by the stochastic transition dynamic relation Z ¯ holds for some vector parameters (1.124). Let also conditions B4 [¯ γ ] and D4 [β] γ¯ = (γ1 , . . . , γk ) and β¯ = (β1 , . . . , βk ) such that, for every i = 1, . . . , k either βi > γi > 0 or βi = γi = 0, and also conditions G1 , N4 , I4 , O4 , and K4 hold. Then, the following relation takes place, Φε → Φ0 as ε → 0.

(1.131)

Proof. Theorem 1.4.4 is a corollary of Theorem 8.4.8∗ . ¯ and K4 are just re-formulation, respectively, of conditions Conditions D4 [β] ¯ ∗ and K15∗ used in Theorem 8.4.8∗ . Other conditions of this theorem also D19 [β] holds that was pointed out in the proof of Theorem 1.4.3. By applying Theorem 8.4.8∗ we get convergence relation (1.131). 

1.5 Modulated autoregressive LPP In this section, we present results concerned space skeleton approximations for rewards of American type options for modulated autoregressive (AR) log-price processes with Gaussian noise terms.

1.5.1 Upper bounds for rewards of modulated autoregressive type log-price processes Let us now consider a modulated autoregressive type model with Gaussian noise ~ n = (Y ~n , Xn ) is given by the folterms, where the modulated log-price process Z lowing modulated stochastic difference equation,  Yn − Yn−1 = an,0 (Xn−1 , . . . , Xn−r )     +an,1 (Xn−1 , . . . , Xn−r )Yn−1     + · · · + an,p (Xn−1 , . . . , Xn−r )Yn−p (1.132) + σn (Xn−1 , . . . , Xn−r )Wn ,      Xn = Cn (Xn−1 , . . . , Xn−r , Un ),    n = 1, 2, . . . , ~0 = (Y0 , . . . , Y−p+1 ) is a p-dimensional random vector with real-valued where: (a) Y ~ 0 = (X0 , . . . , X−r+1 ) is a r-dimensional random vector with components; (b) X components taking values in the space X(r) (X(r) = X × · · · × X is the r times product of a Polish space X with a metric dX (x0 , x00 ) and Borel σ-algebra of measurable subsets BX ); (c) (W1 , U1 ), (W2 , U2 ) . . . is a sequence of i.i.d. random vectors taking values in the space R1 ×U, moreover, W1 , W2 , . . . is a sequence of real-valued i.i.d. standard normal variables with mean value 0 and variance 1 while random

44

1

Reward approximations for autoregressive LPP

variables U1 , U2 , . . . have a regular conditional distribution Gw (A) = P{Un ∈ ~ 0 = (Y ~0 , X ~ 0 ) and the random A/Wn = w}, n = 1, 2, . . .; (d) the random vector Z sequence (W1 , U1 ), (W2 , U2 ) . . . are independent; (e) p is a positive integer number; (f) an,i (~x) = an,i (x1 , . . . , xr ), σn (~x) = σn (x1 , . . . , xr ), ~x = (x1 , . . . , xr ) ∈ X(r) , i = 1, . . . , r, n = 1, 2, . . . are measurable functions acting from the space X(r) to the space R1 ; (g) Cn (~x, u) = Cn (x1 , . . . , xr , u), (~x, u) ∈ X(r) × U, n = 1, 2, . . . are measurable functions acting from the space X(r) × U to the space X. The above autoregressive type model can be imbedded in the model of modulated multivariate log-price Markov Gaussian processes introduced in Subsection 1.4.1. Let us define the p-dimensional log-price process, ~0,n = (Y0,n,1 , . . . , Y0,n,p ) Y = (Yn , . . . , Yn−p+1 ), n = 0, 1, . . . .

(1.133)

and the r-dimensional index process, ~ 0,n = (X0,n,1 , . . . , X0,n,r ) X = (Xn , . . . , Xn−r+1 ), n = 0, 1, . . . .

(1.134)

~ 0,n has Note that, is this case, parameter k = p and the index component X (r) the phase space X . We can always assume that the sequence of random variables Wn , n = 1, 2, . . . is the sequence of the first components of the sequence of p-dimensional i.i.d. 0 0 ~ n0 = (Wn,1 standard Gaussian random vectors W , . . . , Wn,p ), n = 1, 2, . . ., with 0 0 0 EW1,i = 0, EW1,i W1,j = I(i = j), i, j = 1, . . . , p, i.e., 0 Wn = Wn,1 , n = 1, 2, . . . .

(1.135)

Let us also of consider p-dimensional i.i.d. Gaussian vectors ~ n = (Wn,1 , . . . , Wn,p ) W = (Wn , 0, . . . , 0), n = 1, 2, . . . .

(1.136)

~ n can be obtained as a linear transformation of vectors W ~ n0 , namely, Vectors W 0 ~ n = ΣW ~ n , n = 1, 2, . . ., where Σ is the p × p matrix, which has the following W form, Σ = kσi,j k = kI(i = 1)I(j = 1)k.

(1.137)

The stochastic dynamic equation (1.132) can be re-written in the equivalent form of vector modulated stochastic difference equation,

1.5

 Y0,n,1 − Y0,n−1,1                          Y0,n,2 − Y0,n−1,2     ...  Y0,n,p − Y0,n−1,p        X0,n,1       X0,n,2       ...      X0,n,r     n

Modulated autoregressive LPP

45

= an,0 (X0,n−1,1 , . . . , X0,n−1,r ) + an,1 (X0,n−1,1 , . . . , X0,n−1,r )Y0,n−1,1 + · · · + an,p (X0,n−1,1 , . . . , X0,n−1,r )Y0,n−1,p + σn (X0,n−1,1 , . . . , X0,n−1,r )Wn,1 , = Y0,n−1,1 − Y0,n−1,2 + Wn,2 , ... = Y0,n−1,p−1 − Y0,n−1,p + Wn,p ,

(1.138)

= Cn (X0,n−1,1 , . . . , X0,n−1,r , Un ), = X0,n−1,1 , ... = X0,n−1,r−1 , = 1, 2, . . . .

Taking into account the above remarks, one can re-write the vector modulated stochastic difference equation (1.138) in the following matrix form,

 ~ ~0,n−1 = ~λn (X ~ 0,n−1 ) Y0,n − Y    ~ 0,n−1 )Y ~0,n−1 + Σn (X ~ 0,n−1 )W ~ n0 , +Λn (X ~ ¯n (X ~ 0,n−1 , Un ),  X =C   0,n n = 1, 2, . . . ,

(1.139)

where p-dimensional vector functions ~λn (~x), ~x = (x1 , . . . , xr ) ∈ X(r) , n = 1, 2, . . . and p × p matrix functions Σn (~x) = kσn,i,j (~x)k, ~x = (x1 , . . . , xr ) ∈ X(r) , n = 1, 2, . . . are defined by the following relations,

  ~λn (~x) =   

an,0 (~x) 0 .. . 0





     , Σn (~x) =   

σn (~ x) 0 .. . 0

0 0 .. . 0

... ... ...

0 0 .. . 0

   , 

(1.140)

and p × p matrix functions Λn (~x) = kλn,i,j (~x)k, ~x = (x1 , . . . , xr ) ∈ X(r) , n = 1, 2, . . . are defined by the following relation,

   Λn (~ x) =  

an,1 (~x) 1 .. . 0

an,2 (~x) −1 .. . ...

... 0 .. . ...

... ... ...

... ... .. . 0

... ... .. . 1

an,p (~x) 0 .. . −1

   , 

(1.141)

46

1

Reward approximations for autoregressive LPP

~ n (~x, u), ~x = (x1 , . . . , xr ) ∈ X(r) , u ∈ U, n = while r-dimensional vector functions C 1, 2, . . . are defined by the following relation,   Cn (~x, u)   x1  ~ n (~x, u) =  (1.142) C .  ..   . xr−1 The above model of modulated autoregressive type model with Gaussian noise terms is a particular case of the model of modulated Markov Gaussian log-price processes with linear drift and constant diffusion characteristics introduced in Subsection 1.4.1, and the stochastic difference equation (1.139) is a particular case of the stochastic difference equation (1.101). Note that, in this case, parameter k = p, and the role of the index component ~ 0,n , which has the phase space X(r) . is played by the process X Let us assume that the following condition holds:  2 G2 : max0≤n≤N −1, i=1,...,p sup~x∈X(r) |an+1,0 (~x)| + |an+1,i (~x)| + σn+1 (~x) < K14 , for some 0 < K14 < ∞, Condition G2 obviously implies that condition G1 holds with constant K14 replacing constant K10 penetrating condition G1 . By Lemma 1.4.1, this implies that condition G4∗ holds with constants,  2 (~x) K15 = 1 + max sup |an+1,0 (~x)| + σn+1 0≤n≤N −1 ~ x∈X(r)

≤ 1 + K14 , K16,1 =

max0≤n≤N −1,i=1,...,p sup~x∈X(r) |an+1,i (~x)|)  2 1 + max0≤n≤N −1 sup~x∈X(r) |an+1,0 (~x)| + σn+1 (~x)

≤ 1 + K14 , K16,l =

1  2 1 + max0≤n≤N −1 sup~x∈X(r) |an+1,0 (~x)| + σn+1 (~x)

≤ 1 + K14 , l = 2, . . . , p,

(1.143)

which replace, respectively, constants K26∗ and K27,l∗ , l = 1, . . . , k penetrating condition G4∗ (or constants K11 and K12,l , l = 1, . . . , k given in relation (1.105)). Thus, Theorems 1.4.1 – 1.4.4 can be applied to the above the model of modulated autoregressive type log-price processes with Gaussian noise terms Zn = (Yn , Xn ) given by the modulated stochastic difference equation (1.132) or its equiv~ 0,n = (Y ~0,n , X ~ 0,n ) given by the vector modulated stochastic alent vector version Z difference equation (1.139). In this case, we consider pay-of functions g(n, e~y , ~x), ~ y = (y1 , . . . , yp ) ∈ Rp , ~x = (x1 , . . . , xr ) ∈ X(r) , n = 0, 1, . . ., which are real-valued measurable func-

1.5

47

Modulated autoregressive LPP

tions assumed to satisfy the following condition, for some vector parameter γ¯ = (γ1 , . . . , γp ) with non-negative components: B5 [¯ γ ]: max0≤n≤N sup(~y,~x)∈Rp ×X(r)

y ~

e P|g(n, p

1+

0 ≤ L10,i < ∞, i = 1, . . . , p.

i=1

,~ x)| L10,i eγi |yi |

< L9 , for some 0 < L9 < ∞,

(0)

Let us Mmax,p,r,n,N be the class of all stopping times τ0,n for the process (0)

Zn = (Yn , Xn ) such that (a) n ≤ τ0,n ≤ N , (b) event {τ0,n = m} ∈ Fp,r,n,m = σ[Yl0 , n − p + 1 ≤ l0 ≤ m, Xl00 , n − r + 1 ≤ l00 ≤ m], n ≤ m ≤ N . (0) (0) Obviously, the class Mmax,p,r,n,N coincides with the class Mmax,n,N of all ~ 0,l = (Y ~0,l , X ~ 0,l ), where Y ~0,l = Markov moments τ0,n for the Markov process Z ~ (Yl , . . . , Yl−p+1 ) and X0,l = (Xl , . . . , Xl−r+1 ), such that (a) n ≤ τ0,n ≤ N , (b) (0) ~ 0,l , n ≤ l ≤ m], n ≤ m ≤ N . event {τ0,n = m} ∈ Fn,m = σ[Z In this case, the reward functions are defined for the modulated log-price ~ 0,n = (Y ~0,n , X ~ 0,n )) by the process Zn = (Yn , Xn ) (its vector equivalent version Z (r) following relation, for (~ y , ~x) ∈ Rp × X and n = 0, 1, . . . , N , φ0,n (~ y , ~x) =

sup

~ ~ 0,τ0,n ). E(~y,~x),n g(τ0,n , eY0,τ0,n , X

(1.144)

(0) τn ∈Mmax,n,N

¯ = (A1 (β), ¯ . . . , Ap (β)), ¯ β¯ = (β1 , . . . , βp ), β1 , . . . , βp ~ β) In this case, function A( ≥ 0, penetrating formulation of Theorems 4.5.3∗ and 4.5.4∗ , has the following components, ¯ = K15 K16,j Aj (β)

p X 1 (βl + p2 βl2 ), j = 1, . . . , p. 2

(1.145)

l=1

¯ generates a sequence of functions A ¯ = (An,1 (β), ¯ . . ., ~ β) ~ n (β) Function A( ¯ An,p (β)), n = 0, 1, . . . from the class Ap by the following recurrence relation, for any β¯ = (β1 , . . . , βp ), βi ≥ 0, i = 1, . . . , p,  β¯ for n = 0, ¯ = ~ n (β) A (1.146) ¯ ¯ ~ ~ ~ An−1 (β) + A(An−1 (β)) for n = 1, 2, . . . . Recall also vectores β¯i = (βi,1 , . . . , βi,p ) with components βi,j = βj I(j = i), i, j = 1, . . . , p. Theorem 1.4.1 takes in this case the following form. Theorem 1.5.1. Let a modulated autoregressive log-price process Zn = ~ 0,n = (Y ~0,n , X ~ 0,n ) are given, respec(Yn , Xn ) and its equivalent vector version Z tively, by the modulated stochastic difference equations (1.132) and (1.139). Let condition G2 holds and, also, condition B5 [¯ γ ] holds for some vector parameter γ¯ = (γ1 , . . . , γp ) with γi ≥ 0, i = 1, . . . , p. Then, for any vector parameter β¯ = (β1 , . . . , βp ) with components βi ≥ γi , i = 1, . . . , p, there exist constants

48

1

Reward approximations for autoregressive LPP

0 ≤ M13 , M14,i = M14,1 (βi ), i = 1, . . . , p < ∞ such that the reward functions φn (~ y , ~x) satisfy the following inequalities for ~ y = (y1 , . . . , yp ) ∈ Rp , ~x = (x1 , . . . , xr ) ∈ X(r) , 0 ≤ n ≤ N , X |φ0,n (~ y , ~x)| ≤ M13 + M14,i i: γi =0

+

X

M14,i exp{(

p X j=1

i: γi >0

γi AN −n,j (β¯i )|yj |) }. βi

(1.147)

Remark 1.5.1. The explicit formulas for the constants M13 and M14,i (βi ), i = 1, . . . , p take, according formulas given in Remark 1.4.1, the following form, M13 = L9 , M14,i (βi ) = L9 L10,i I(γi = 0) + L9 L10,i (1 Pp γ K (A (β¯ )+ 1 p2 A2N −1,l (β¯i )) N βi + 2p e 15 l=1 N −1,l i 2 ) i I(γi > 0).

(1.148)

where vectors β¯i = (βi,1 , . . . , βi,p ) = (β1 I(i = 1), . . . , βp I(i = p)), i = 1, . . . , p. ¯ should be replaced in this case by the following condition, Condition D5 [β] assumed to hold for vector parameter β¯ = (β1 , . . . , βp ) with non-negative components: P ¯ E exp{ p AN,j (β¯i )|Y0,0,j |} < K17,i , i = 1, . . . , p, for some 1 < K17,i D5 [β]: j=1 < ∞, i = 1, . . . , p. In this case the optimal expected reward is defined by the formula, Φ0 =

sup

~

~ 0,τ0 ) Eg(τ0 , eY0,τ0 , X

(0) τ0 ∈Mmax,0,N

~0,0 , X ~ 0,0 ). = Eφ0 (Y

(1.149)

The following theorem gives conditions and the upper bound for the optimal expected reward Φ0 . Theorem 1.5.2. Let a modulated autoregressive log-price process Zn = ~ 0,n = (Y ~0,n , X ~ 0,n ) are given, respec(Yn , Xn ) and its eqiuvalent vector version Z tively, by the modulated stochastic difference equations (1.132) and (1.139). Let ¯ hold and 0 ≤ γi ≤ condition G2 holds and, also, conditions B5 [¯ γ ] and D5 [β] βi < ∞, i = 1, . . . , p. Then, there exists a constant 0 ≤ M15 < ∞ such that the following inequality takes place, |Φ0 | ≤ M15 .

(1.150)

1.5

Modulated autoregressive LPP

49

Remark 1.5.2. The explicit formula for the constant M15 takes, according formulas given in Remark 1.4.2, the following form, X X M15 = L9 + L9 L10,i + L9 L10,i (1 i:γi =0 p K15

+2 e

i:γi >0

Pp l=1

γ

(AN −1,l (β¯i )+ 12 p2 A2N −1,l (β¯i )) N βi

)

i

γi β

i . K17,i

(1.151)

1.5.2 Space skeleton approximations for option rewards of modulated autoregressive log-price processes ~ 0,n has transition probabilities P0,n (~z, A) = P{Z ~ 0,n ∈ The Markov process Z (r) ~ 0,n−1 = ~z} defined for ~z = (~ A/Z y , ~x) ∈ Z = Rp × X , A ∈ BZ , n = 1, 2, . . . by the following relation,

 ~ n0 , C ~ n (~x, Un ) ∈ A}. P0,n (~z, A) = P{ ~ y + ~λn (~x) + Λn (~x)~ y + Σn (~x)W

(1.152)

Lemma 1.4.2 can be reformulated for the above modulated autoregressive log-price processes with Gaussian noise terms given by the stochastic difference equation (1.132) and its vector version given by the he stochastic difference equation (1.139). Some simplifications in formulations can, however, be achieved due to specific shifting structure of these equations. We assume that X is a Polish space with a metric dX (x0 , x00 ). In this case, X(r) is 1 also a Polish space with the metric dX(r) (~x0 , ~x00 ) = (d2X (x01 , x001 ) + · · · + d2X (x0r , x00r )) 2 , for ~x0 = (x01 , . . . , x0r ), ~x00 = (x001 , . . . , x00r ) ∈ X(r) . + Let m− ε,n ≤ mε,n be integer numbers, δε,n > 0 and λε,n ∈ R1 , for n = −p + 1, −p + 2, . . .. ± In this case, one can use parameters m± ε,n,i = mε,n−i+1 , δε,n,i = δε,n−i+1 , and λε,n,i = λε,n−i+1 , for i = 1, . . . , p, n = 0, 1, . . .. In this case, the index sets Lε,n , n = 0, 1, . . . take the following form, + Lε,n = {¯ l = (l1 , . . . , lp ), li = m− ε,n,i , . . . , mε,n,i , i = 1, . . . , p} + = {¯ l = (l1 , . . . , lp ), li = m− ε,n−i+1 , . . . , mε,n−i+1 , i = 1, . . . , p}.

(1.153)

Other elements of the space skeleton approximation should be also defined with the use of the above shift construction. + First, the skeleton intervals Iε,n,l should be defined for l = m− ε,n , . . . , mε,n , n = −p + 1, . . .,  − 1 if l = m−  ε,n ,  (−∞, δε,n (mε,n + 2 )] + λε,n − 1 1 Iε,n,l = (1.154) (δε,n (l − 2 ), δε,n (l + 2 )] + λε,n if mε,n < l < m+ ε,n ,   + + 1 (δε,n (mε,n − 2 ), ∞) + λε,n if l = mε,n ,

50

1

Reward approximations for autoregressive LPP

and then skeleton cubes ˆIε,n,¯l should be defined for ¯ l ∈ Lε,n , n = 0, 1, . . ., ˆI ¯ = Iε,n,1,l × · · · × Iε,n,p,l 1 p ε,n,l = Iε,n,l1 × · · · × Iε,n−p+1,lp .

(1.155)

+ Second, skeleton points yε,n,l should be defined for l = m− ε,n , . . . , mε,n , n = −p + 1, . . ., yε,n,l = lδε,n + λε,n , (1.156)

and vector skeleton points ~ yε,n,¯l should be defined for ¯ l ∈ Lε,n , n = 0, 1, . . ., ~ yε,n,¯l = (yε,n,1,l1 , . . . , yε,n,p,lp ) = (yε,n,l1 , . . . , yε,n−p+1,lp ).

(1.157)

0+ Let m0− ε,n ≤ mε,n , n = −r + 1, −r + 2, . . . be integer numbers. l0 = (l10 , . . . , lr0 ), li0 = m0− Let us also define index sets L0ε,n = {¯ ε,n−i+1 , . . ., 0+ mε,n−i+1 , i = 1, . . . , r}, n = 0, 1, . . .. 0+ Third, non-empty sets Jε,n,l0 ∈ BX , l0 = m0− ε,n , . . . , mε,n , n = −r + 1, . . ., such 00 000 0+ Jε,n,l0 , that (a) Jε,n,l00 ∩ Jε,n,l000 = ∅, l 6= l , n = −r + 1, . . .; (b) X = ∪m0− 0 ε,n ≤l ≤mε,n n = −r + 1, . . ., should be constructed. 0+ Non-empty sets Kε,n , n = −r+1, . . . and Kε,n,l ∈ BX , l = m0− ε,n,0 , . . . , mε,n,0 , n = 00 000 −r+1, . . . should be chosen such that (c) Kε,n,l00 ∩Kε,n,l000 = ∅, l 6= l , n = 0, 1, . . .; 0+ Kε,n,l0 = Kε,n , n = −r + 1, . . .. (d) ∪m0− 0 ε,n ≤l ≤mε,n

The standard case is, where X = {x0l , l0 = m− , . . . , m+ } is a finite set and metrics dX (xl00 , xl000 ) = I(xl00 6= xl000 ). In this case, the simplest choice m0± ε,n = m± , n = −r + 1, . . . and to take sets Kε,n,l0 = Jε,n,l0 = {xl0 }, l0 = m− , . . . , m+ , n = −r + 1, . . .. Sets Jε,n,l can be defined in the following way, for n = −r + 1, . . ., ( 0+ if m0− Kε,n,l0 ε,n,0 ≤ l < mε,n,0 , Jε,n,l0 = (1.158) Kε,n,m0+ ∪ Kε,n if l0 = m0+ ε,n,0 . ε,n,0

and skeleton cubes ˆ Jε,n,¯l0 should be defined for ¯ l0 = (l10 , . . . , lr0 ) ∈ L0ε,n , n = 0, 1, . . ., ˆ Jε,n,¯l0 = Jε,n,l10 × · · · × Jε,n−r+1,lr0 .

(1.159)

The difference in notations (between the above sets ˆ Jε,n,¯l and sets Jε,n,l used in Subsection 11.4.2) can be removed by a natural re-numeration of indices, 0− − 0− 0− − namely, (m0− n , . . ., mε,n−r+1 ) ↔ mε,n,0 , . . . , (mε,n + 1, . . . , mε,n−r+1 ) ↔ mε,n + Q r 0+ + + + − 1, . . ., (m0+ ε,n , . . . , mε,n−r+1 ) ↔ mε,n , where mε,n − mε,n + 1 = j=1 (mε,n−j+1 − −mε,n−j+1 + 1), for n = 0, 1, . . .. The simplest variant is to choose integers ±m± ε,n ≥ 0, n = 0, 1, . . .. 0+ Fourth, skeleton points xε,n,l0 should be chosen for l0 = m0− ε,n,0 , . . . , mε,n,0 , n = −r + 1, . . . such that xε,n,l0 ∈ Jε,n,l0 , (1.160)

1.5

Modulated autoregressive LPP

51

and vector skeleton points ~xε,n,¯l0 should be defined for ¯ l0 = (l10 , . . . , lr0 ) ∈ L0ε,n , n = 0, 1, . . ., ~xε,n,¯l0 = (xε,n,l10 , . . . , xε,n−r+1,lr0 ). (1.161) Fifth, skeleton sets Aε,n,¯l,¯l0 and skeleton points, ~zε,n,¯l,¯l0 ∈ Aε,n,¯l,¯l0 can be defined, for ¯ l ∈ Lε,n , ¯ l0 ∈ L0ε,n , n = 0, 1, . . ., in the following way, Aε,n,¯l,¯l0 = ˆIε,n,¯l × ˆ Jε,n,¯l0 ,

(1.162)

~zε,n,¯l,¯l0 = (~ yε,n,¯l , ~xε,n,¯l0 ).

(1.163)

and Sixth, skeleton functions, hε,n (y), y ∈ R1 should be defined for n = −p + 1, . . ., hε,n (y) =



yε,n,l

+ if y ∈ Iε,n,l , m− ε,n ≤ l ≤ mε,n ,

(1.164)

ˆ ε,n (~ and vector skeleton functions h y ), ~ y = (y1 , . . ., yp ) ∈ Rp , should be defined for n = 0, 1, . . ., ˆ ε,n (~ h y ) = (hε,n,1 (y1 ), . . . , hε,n,p (yp )) = (hε,n (y1 ), . . . , hε,n−p+1 (yp )).

(1.165)

Seventh, skeleton functions h0ε,n (x), x ∈ X, n = −r + 1, . . ., should be defined, h0ε,n (x) =



xε,n,l0

0+ 0 if x ∈ Jε,n,l0 , m0− ε,n,0 ≤ l ≤ mε,n,0 .

(1.166)

ˆ 0ε,n (~x), ~x = (x1 , . . . , xr ) ∈ X(r) , should be defined and vector skeleton functions h for n = 0, 1, . . ., ˆ 0ε,n (~ h x) = (h0ε,n,1 (x1 ), . . . , h0ε,n,r (xr )) = (h0ε,n (x1 ), . . . , h0ε,n−r+1 (xr )).

(1.167)

¯ ε,n (~z), ~z = (~ Eights, vector skeleton functions h y , ~x) ∈ Z = Rp × X(r) , n = 0, 1, . . ., should be defined, ¯ ε,n (~z) = (h ˆ ε,n (~ ˆ 0ε,n (~x)). h y ), h

(1.168)

~ ε,n = (Y ~ε,n , X ~ ε,n ) The corresponding space skeleton approximating processes Z are defined by the following stochastic transition dynamic relation,   ˆ ε,n h ˆ ε,n−1 (Y ˆ 0ε,n−1 (X ~ε,n−1 ) + ~λn (h ~ ε,n−1 ))   Y~ε,n = h      ˆ 0ε,n−1 (X ˆ ε,n−1 (Y ~ ε,n−1 ))h ~ε,n−1 ), +Λn (h       ˆ 0ε,n−1 (X ~ ε,n−1 )) W ~ n0 , + Σn (h (1.169)    ˆ 0ε,n C ˆ 0ε,n−1 (X ~ ε,n = h ~ 0,n (h ~ ε,n−1 ), Un ) ,  X      n = 1, 2, . . . ,     ~ ˆ ε,n (Y ˆ 0ε,0 (X ~0,0 ), X ~ ε,0 = h ~ 0,0 ), Yε,0 = h

52

1

Reward approximations for autoregressive LPP

or by the following equivalent transition dynamic relation given in the form of ~ ε,n = system of transition relations for the components of the log-price process Z ~ε,n , X ~ ε,n ), n = 0, 1, . . ., (Y

                                                                                        

Yε,n,1

 ˆ 0ε,n−1 (X ~ ε,n−1 )) = hε,n hε,n−1 (Yε,n−1,1 ) + an,0 (h ˆ 0ε,n−1 (X ~ ε,n−1 ))hε,n−1 (Yε,n−1,1 ) + an,1 (h ˆ 0ε,n−1 (X ~ ε,n−1 ))hε,n−p (Yε,n−1,p ) + · · · + an,p (h  ˆ 0ε,n−1 (X ~ ε,n−1 ))Wn , + σn (h

Yε,n,2 ... Yε,n,p

= hε,n−1 (Yε,n−1,1 ), ... = hε,n−p+1 (Yε,n−1,p−1 ),

Xn,1

ˆ 0ε,n−1 (X ~ ε,n−1 ), Un ), = Cn (h

Xn,2 ... Xn,r

= Xn−1,1 , ... = Xn−1,r−1 ,

n

(1.170)

= 1, 2, . . . ,

Yε,0,1 ... Yε,0,p

= hε,0 (Y0,0,0 ), ... = hε,−p+1 (Y0,0,p ),

Xε,0,1 ... Xε,0,r

= h0ε,0 (X0,0,0 ), ... = h0ε,−r+1 (X0,0,r ).

~ ε,n defined, for every ε ∈ (0, ε0 ], by the nonlinear The log-price process Z dynamic transition relation (1.169) is a skeleton atomic Markov chain, with the phase space Z = Rp × X(r) and one-step transition probabilities Pε,n (~z, A) = P{~zε,n ∈ A/~zε,n−1 = ~z} defined for ~z ∈ Z, A ∈ BZ , n = 1, 2, . . . by the following relation, X ¯ ε,n−1 (~z), A ¯ ¯0 ) Pε,n (~z, A) = P0,n (h ε,n,l,l ~ zε,n,l, yε,n,l¯,xε,n,l¯0 )∈A ¯ l¯0 =(~

=

X

ˆ ε,n−1 (~ P{ h y)

~ zε,n,l, yε,n,l¯,xε,n,l¯0 )∈A ¯ l¯0 =(~

ˆ 0ε,n−1 (x)) + Λn (h ˆ 0ε,n−1 (x))h ˆ ε,n−1 (~ + ~λn (h y )) ˆ 0ε,n−1 (x))W ~ n0 , + Σn (h

 ˆ 0ε,n C ˆ 0ε,n−1 (X ~ n (h ~ ε,n−1 ), Un ) ∈ A ¯ ¯0 }. h ε,n,l,l

(1.171)

1.5

Modulated autoregressive LPP

53

~ ε,0 ∈ A}, A ∈ BZ is concerned, As far as the initial distribution Pε,0 (A) = P{Z it takes, for every ε ∈ (0, ε0 ], the following form, X Pε,0 (A) = P0,0 (Aε,0,¯l,¯l0 ) ~ zε,0,l¯∈A

X

=

 ˆ ε,0 (Y ˆ 0ε,0 (X ~0,0 ), h ~ 0,0 ) ∈ A ¯ ¯0 }. P{ h ε,n,l,l

(1.172)

~ zε,0,l¯∈A

We assume that the pay-off functions g(n, e~y , ~x), ~ y = (y1 , . . . , yp ) ∈ Rp , ~x = (x1 , . . ., xr ) ∈ X(r) do not depend on ε. (ε) Let us also recall the class Mmax,n,N of all Markov moments τε,n for the log(ε) ~ ε,n such that (a) n ≤ τε,n ≤ N , (b) event {τε,n = m} ∈ Fn,m price process Z = ~ σ[Zε,l , n ≤ l ≤ m], n ≤ m ≤ N . ~ ε,n by In this case, the reward functions are defined for the log-price process Z the following relation, for ~z = (~ y , x) ∈ Z and n = 0, 1, . . . , N , φε,n (~ y , x) =

sup

~ ~ ε,τε,n ). E(~y,~x),n g(τε,n , eYε,τε,n , X

(1.173)

(ε)

τε,n ∈Mmax,n,N

Probability measures Pε,n (~z, A), ~z ∈ Z = Rp × X(r) , n = 1, 2, . . . are concentrated on finite sets, for every ε ∈ (0, ε0 ]. This obviously implies that the rewards functions |φε,n (~ y , ~x)| < ∞, ~z = (~ y , ~x) ∈ Z, n = 1, 2, . . . for every ε ∈ (0, ε0 ]. It is also useful to note that φε,N (~ y , x) = g(N, e~y , ~x), (~ y , ~x) ∈ Z. An analogue of Lemma 1.4.2 can be formulated for the above model. However, the corresponding system of linear equations can be simplified taking into account the specific shift structure of the dynamic relations (1.169), which is well seen from the equivalent form (1.170). By the definition of sets Aε,n,¯l,¯l0 , ¯ l ∈ Lε,n , ¯ l0 ∈ L0ε,n , n = 0, 1, . . ., there exist 0 0 ¯ the unique lε,n (~z) = (lε,n,1 (~z), . . . , lε,n,p (~z)) ∈ Lε,n and ¯ lε,n (~z) = (lε,n,1 (~z), . . . , 0 0 (r) lε,n,r (~z)) ∈ Lε,n , for every ~z = (~ y , ~x) ∈ Z = Rp × X and n = 0, 1, . . ., such that ~z ∈ Aε,n,¯lε,n (~z),¯lε,n 0 (~ z) . The following lemma is the direct corollary of Lemma 1.4.2. ~ 0,n is deLemma 1.5.1. Let the modulated autoregressive log-price process Z fined by the vector modulated stochastic difference equation (1.139), while the ~ ε,n is defined, for corresponding approximating space-skeleton log-price process Z every ε ∈ (0, ε0 ], by the stochastic transition dynamic relation (1.169). Then the reward functions φε,n (~ y , ~x) and φn+m (~ yε,n+m,¯l , ~xε,n+m,¯l0 ), ¯ l ∈ Lε,n+m , ¯ l0 ∈ 0 (r) Lε,n+m , m = 1, . . . N − n are, for every ε ∈ (0, ε0 ], ~z = (~ y , ~x) ∈ Z = Rp × X and n = 0, . . . , N − 1, the unique solution for the following recurrence finite system of linear equations,

54

1

Reward approximations for autoregressive LPP

 φε,N (~ yε,N,¯l , ~xε,N,¯l0 ) = g(N, e~yε,N,l , ~xε,N,¯l0 ),       ¯  l = (l1 , . . . , lp ) ∈ Lε,N , ¯ l0 = (l10 , . . . , lr0 ) ∈ L0ε,N ,         φε,n+m (~ yε,n+m,¯l , ~xε,n+m,¯l0 ) = max g(n + m, e~yε,n+m,l¯, ~xε,n+m,¯l0 ),      Pm0+ Pm+ε,n+m+1  ε,n+m+1   φε,n+m+1 (~ yε,n+m+1,(l100 ,l1 ,...,lp−1 ) , −  00 l1000 =m0− l1 =mε,n+m+1  ε,n+m+1      0 ~xε,n+m+1,(l1000 ,l10 ,...,lr−1  ))      ×P{yε,n+m,l1 + an+m+1,0 (~xε,n+m,¯l0 )       + an+m+1,1 (~xε,n+m,¯l0 )yε,n+m,l1        + · · · + an+m+1,p (~xε,n+m,¯l0 )yε,n+m−p+1,lp       + σn+m+1 (~xε,n+m,¯l0 )Wn+m+1 ∈ Iε,n+m+1,l100 ,         Cn+m+1 (~xε,n+m,¯l0 , Un+m+1 ) ∈ Jε,n+m+1,l1000 } ,   0 0 0 0  ¯l = (l1 , . . . , lp ) ∈ Lε,n+m , ¯l = (l1 , . . . , lp ) ∈ Lε,n+m ,      m = N − n − 1, . . . , 1,         φ (~ y , ~ x ) = max g(n, e~y , ~x),  ε,n      Pm+ε,n+1 Pm0+  ε,n+1   φε,n+1 (~ yε,n+1,(l100 ,lε,n,1 (~z),...,lε,n,p−1 (~z)) , − 00  l1 =mε,n+1 l1000 =m0−  ε,n+1      ~xε,n+1,(l000 ,l0 (~z),...,l0  (~ z )) ) 1 ε,n,1  ε,n,r−1      ×P{yε,n,lε,n,1 (~z) + an+1,0 (~xε,n,¯lε,n 0  (~ z) )       + an+1,1 (~xε,n,¯lε,n 0 z) (~ z ) )yε,n,lε,n,1 (~       + · · · + an+1,p (~xε,n,¯lε,n  0 z) (~ z ) )yε,n−p+1,lε,n,p (~       + σn+1 (~xε,n,¯lε,n 0  (~ z ) )Wn+1 ∈ Iε,n+1,l100 ,        Cn+1 (~xε,n,¯lε,n 0 (~ z ) , Un+1 ) ∈ Jε,n+1,l1000 } ,

(1.174)

(ε)

while the optimal expected reward Φε = Φε (Mmax,0,N ) is defined by the following formula, for every ε ∈ (0, ε0 ], X Φε = P0,0 (Aε,0,¯l,¯l0 ) φε,0 (~ yε,0,¯l , ~xε,0,¯l0 ). (1.175) ¯ l∈Lε,0 , ¯ l0 ∈L0ε,0

Proof. The following system of linear equations for reward functions φε,n (~ y , ~x) and φn+m (~ yε,n,¯l , ~xε,n,¯l0 ), ¯ l ∈ Lε,n+m , ¯ l0 ∈ L0ε,n+m m = 1, . . . N − n is the variant of

1.5

Modulated autoregressive LPP

the system of linear equations (1.128) given in Lemma 1.4.2,  φε,N (~ yε,N,¯l , ~xε,N,¯l0 ) = g(N, e~yε,N,l¯, ~xε,N,¯l0 ),       ¯  l ∈ Lε,N , ¯ l0 ∈ L0ε,N       φε,n+m (~ yε,n+m,¯l , ~xε,n+m,¯l0 ) = max g(n + m, e~yε,n+m,l¯, ~ yε,n+m,¯l0 ),     P P   yε,n+m+1,¯l00 , ~xε,n+m+1,¯l000 )  ¯ ¯ l000 ∈L0ε,n+m+1 φε,n+m+1 (~ l00 ∈Lε,n+m+1     ×P0,n+m+1 (~zε,n+m,¯l,¯l0 , Aε,n+m+1,¯l00 ,¯l000 ) ,     ¯l ∈ Lε,n+m , ¯l0 ∈ L0ε,n+m , m = N − n − 1, . . . , 1,        φε,n (~ y , ~x) = max g(n, e~y , ~x),    P P    yε,n+1,¯l00 , ~xε,n+1,¯l000 ) ¯ ¯  l000 ∈L0ε,n+1 φε,n+1 (~ l00 ∈Lε,n+1       ×P (~z ,A ) . 0,n+1

0 ε,n,¯ lε,n (~ z ),¯ lε,n (~ z)

55

(1.176)

ε,n+1,¯ l00 ,¯ l000

The system of linear equations (1.176) can be re-written in simpler form taking into account shift features of the dynamic transition relation (1.169) or its equivalent form (1.170). Indeed, according the above relations, we get the following formula, for ¯ l = 0 0 0 0 00 00 00 ¯ ¯ (l1 , . . . , lp ) ∈ Lε,n+m , l = (l1 , . . . , lr ) ∈ Lε,n+m and l = (l1 , . . . , lp ) ∈ Lε,n+m+1 , ¯ l000 = (l1000 , . . . , lr000 ) ∈ L0ε,n+m+1 , P0,n+m+1 (~zε,n+m,¯l,¯l0 , Aε,n+m+1,¯l00 ,¯l000 ) = P{yε,n+m,l1 + an+m+1,0 (~xε,n+m,¯l0 ) + an+m+1,1 (~xε,n+m,¯l0 )yε,n+m,l1 + · · · + an+m+1,p (~xε,n+m,¯l0 )yε,n+m−p+1,lp + σn+m+1 (~xε,n+m,¯l0 )Wn+m+1 ∈ Iε,n+m+1,l100 , Cn+m+1 (~xε,n+m,¯l0 , Un+m+1 ) ∈ Jε,n+m+1,l1000 } × I(yε,n+m,l1 ∈ Iε,n+m,l200 ) × · · · × I(yε,n+m−p+2,lp−1 ∈ Iε,n+m−p+2,lp00 ) × I(xε,n+m,l10 ∈ Jε,n+m,l2000 ) 0 × · · · × I(xε,n+m−r+2,lr−1 ∈ Jε,n+m−r+2,lr000 )

= P{ yε,n+m,l1 + an+m+1,0 (~ xε,n+m,¯l0 ) + an+m+1,1 (~xε,n+m,¯l0 )yε,n+m,l1 + · · · + an+m+1,p (~xε,n+m,¯l0 )yε,n+m−p+1,lp + σn+m+1 (~xε,n+m,¯l0 )Wn+m+1 ∈ Iε,n+m+1,l100 ,

56

1

Reward approximations for autoregressive LPP

Cn+m+1 (~xε,n+m,¯l0 , Un+m+1 ) ∈ Jε,n+m+1,l1000 } × I(l1 = l200 ) × · · · × I(lp−1 = lp00 ) 0 × I(l10 = l2000 ) × · · · × I(lr−1 = lr000 ).

(1.177)

Relation (1.177) implies that the system of linear equations (1.176) can be re-written in the simpler form, where multivariate sums over vector indices ¯ l00 ∈ 000 0 ¯ Lε,n+m+1 , l ∈ Lε,n+m+1 , m = N − n, . . . , 0 are replaced in the corresponding + 000 equations by univariate sums over indices l100 = m− ε,n+m+1 , . . . , mε,n+m+1 , l1 = 0− 0+ mε,n+m+1 , . . . , mε,n+m+1 , m = N − n, . . . , 0. By doing this, we get write down the system of linear equations (1.174).  Obviously, |Φε | < ∞, for every ε ∈ (0, ε0 ].

1.5.3 Convergence of space skeleton approximations for option rewards of modulated autoregressive log-price processes with Gaussian noise terms Let now formulate conditions of convergence for reward functions, for modulated autoregressive log-price processes with Gaussian noise terms. Let us introduce special shorten notations for the maximal and the minimal skeleton points, for n = −p + 1, . . . and ε ∈ (0, ε0 ], ± zε,n = δε,n m± ε,n + λε,n .

(1.178)

We impose the following condition on the parameters of the space skeleton model: ± N5 : (a) δε,n → 0 as ε → 0, for n = −p + 1, . . . , N ; (b) ±zε,n → ∞ as ε → 0, ± for n = −p + 1, . . . , N ; (c) ±zε,n , n = −p + 1, . . . , N are non-decreasing sequences, for every ε ∈ (0, ε0 ]; (d) for any x ∈ X and d > 0, there exists εx,d ∈ (0, ε0 ] such that the ball Rd (x) ⊆ Kε,n , n = 0, . . . , N , for ε ∈ (0, εx,d ]; (e) sets Kε,n,l have diameters dε,n,l = supx0 ,x00 ∈Kε,n,l dX (x0 , x00 ) such that dε,n = maxm− ≤l≤m+ dε,n,l → 0 as ε → 0, n = 0, . . . , N . ε,n,0

ε,n,0

Let us formulate the corresponding moment conditions, which provide the existence of appropriate upper bounds for reward functions. We assume that condition B5 [¯ γ ] holds for the payoff functions g(n, e~y , ~x), (r) ~z = (~ y , ~x) ∈ Z = Rp × X , for some vector parameter γ¯ = (γ1 , . . . , γp ) with non-negative components. Also, we assume that condition G2 holds. Let now formulate the corresponding convergence conditions. We assume that the following condition holds: I5 : There exists sets Z0n ∈ BZ , n = 0, . . . , N such that function g(n, e~y , ~x) is continuous in points ~z = (~ y , ~x) ∈ Z0n for every n = 0, . . . , N .

1.5

Modulated autoregressive LPP

57

Condition O4 takes, in the case where characteristics ~λn (~x), Λn (~x), Σn (~x) and ~ Cn (~x, u) given by relations (1.140) – (1.142), the following form: O5 : There exist sets Xn ∈ BX(r) , n = 0, . . . , N and Un ∈ BU , n = 1, . . . , N such that: (a) an,i (~xε ) → an,i (~x0 ), i = 0, . . . , p, σn (~xε ) → σn (~x0 ), Cn (~xε , u) → Cn (~x0 , u) as ε → 0, for any ~xε → ~x0 ∈ Xn−1 as ε → 0, u ∈ Un , n = 1, . . . , N ; ~ n0 , (b) P{Un ∈ Un } = 1, n = 1, . . . , N ; (c) P{ ~ y0 +~λn (~x0 )+Λn (~x0 )~ y0 +Σn (~x0 )W  0 00 0 00 ~ n (~x0 , Un ) ∈ Zn ∪ Zn } = 0 for every ~z0 = (~ y0 , ~x0 ) ∈ Zn−1 ∩ Zn−1 and C n = 1, . . . , N , where Z0n , n = 0, . . . , N are sets introduced in condition I14 and Z00n = Rk × Xn , n = 0, . . . , N . The following theorem is a corollary of Theorem 1.4.3. ~ 0,n is deTheorem 1.5.3. Let the modulated autoregressive log-price process Z fined by the vector modulated stochastic difference equation (1.139), while the cor~ ε,n is defined, for every responding approximating space-skeleton log-price process Z ε ∈ (0, ε0 ], by the stochastic transition dynamic relation (1.169). Let also conditions B5 [¯ γ ] holds for some vector parameter γ¯ = (γ1 , . . . , γk ) with non-negative components, and, also, conditions G2 , N5 , I5 , and O5 hold. Then, for every n = 0, 1, . . . , N , the following relation takes place for any ~zε = (~ yε , ~xε ) → ~z0 = (~ y0 , ~ x0 ) ∈ Z0n ∩ Z00n , φε,n (~ yε , ~xε ) → φ0,n (~ y0 , ~x0 ) as ε → 0.

(1.179)

~ 0,n and its space skeleton approximation logProof. The log-price process Z ~ ε,n defined, respectively, in (1.139) and (1.169) are particular variprice process Z ants of the corresponding log-price processes defined, respectively, in (1.101), and (1.124). Conditions B5 [¯ γ ], N5 , and I5 are, respectively, variants of conditions B4 [¯ γ ], N4 , and I4 , for the case where the role of the index phase space X is played by the space X(r) . Conditions G2 implies that conditions G1 holds and condition O5 implies that condition O4 holds, for the model given by the transition dynamic relations (1.139) and (1.169). Thus, all conditions of Theorem 1.4.3 hold. By applying this theorem, we get the convergence relation (1.179). 

1.5.4 Convergence of optimal expected rewards for space skeleton approximaions of modulated autoregressive log-price processes Let us now give conditions for convergence for optimal expected rewards Φε . We shall apply Theorem 1.4.4. ¯ = (A1 (β), ¯ . . . , Ap (β)) ¯ ~ β) ¯ with the function A( In this case, condition D5 [β], given by relations (1.145) and (2.190), should be assumed to hold for some vector

58

1

Reward approximations for autoregressive LPP

parameter β¯ = (β1 , . . . , βp ) with non-negative components and the corresponding vectors β¯i = (βi,1 , . . . , βi,p ), where βi,j = I(i = j), i, j = 1, . . . , p. Condition K4 should be replaced by the following condition imposed on the ~ 0,0 ∈ A}: initial distribution P0,0 (A) = P{Z K5 : P0,0 (Z00 ) = 1, where Z00 is the set introduced in conditions I5 . The following theorem is a corollary of Theorem 1.4.4. ~ 0,n is Theorem 1.5.4. Let the modulated autoregressive log-price process Z defined by the vector modulated stochastic difference equation (1.139), while the ~ ε,n is defined, for corresponding approximating space-skeleton log-price process Z every ε ∈ (0, ε0 ], by the stochastic transition dynamic relation (1.169). Let also ¯ holds for some vector parameters γ¯ = (γ1 , . . . , γp ) and conditions B5 [¯ γ ] and D5 [β] β¯ = (β1 , . . . , βp ) such that, for every i = 1, . . . , p either βi > γi > 0 or βi = γi = 0, and also conditions G2 , N5 , I5 , O5 , and K5 hold. Then, the following relation takes place, Φε → Φ0 as ε → 0. (1.180) ¯ and K5 are just re-formulation, respectively, of conProof. Conditions D5 [β] ¯ and K4 used in Theorem 1.4.4. Other conditions of this theorem ditions D4 [β] also holds that was pointed out in the proof of Theorem 1.5.3. By applying Theorem 1.4.4 we get convergence relation (1.180). 

1.6 Modulated autoregressive moving average LPP In this section, we present results concerned space skeleton approximations for rewards of American type options for modulated autoregressive moving average (ARMA) log-price processes with Gaussian noise terms.

1.6.1 Upper bounds for rewards of mixed modulated autoregressive moving average type log-price processes with Gaussian noise terms Let us consider an inhomogeneous in time modulated mixed autoregressive moving average type model with Gaussian noise terms, where the log-price process Yn is given by the following modulated stochastic difference equation,  Yn − Yn−1 = an,0 (Xn−1 , . . . , Xn−r )     +an,1 (Xn−1 , . . . , Xn−r )Yn−1 + · · · + an,p (Xn−1 , . . . , Xn−r )Yn−p     + bn,1 (Xn−1 , . . . , Xn−r )Wn−1 + · · · + bn,q (Xn−1 , . . . , Xn−r )Wn−q (1.181) + σn (Xn−1 , . . . , Xn−r )Wn ,      Xn = Cn (Xn−1 , . . . , Xn−r , Un ),    n = 1, 2, . . . ,

1.6

Modulated autoregressive moving average LPP

59

~0 = (Y0 , . . . , Y−p+1 , W0 , . . . , W−q+1 ) is a (p + q)-dimensional random where: (a) Y ~ 0 = (X0 , . . . , X−r+1 ) is a r-dimensional vector with real-valued components; (b) X random vector with components taking values in the space X(r) (X(r) = X×· · ·×X is the r times product of a Polish space X with a metric dX (x0 , x00 ) and Borel σalgebra of measurable subsets BX ); (c) (W1 , U1 ), (W2 , U2 ), . . . is a sequence of i.i.d. random vectors taking values in the space R1 × U, moreover, W1 , W2 , . . . is a sequence of real-valued i.i.d. standard normal variables with mean value 0 and variance 1, while random variables U1 , U2 , . . . have a regular conditional distribution ~ 0 = (Y ~0 , X ~ 0) Gw (A) = P{Un ∈ A/Wn = w}, n = 1, 2, . . .; (d) the random vector Z and the random sequence (W1 , U1 ), (W2 , U2 ) . . . are independent; (e) p and r are positive integer numbers; (f) an,i (~x) = an,i (x1 , . . . , xr ), bn,j (~x) = bn,j (x1 , . . . , xr ), σn (~x) = σn (x1 , . . . , xr ), ~x = (x1 , . . . , xr ) ∈ X(r) , i = 1, . . . , p, j = 1, . . . , q, n = 1, 2, . . . are measurable functions acting from the space X(r) to the space R1 ; (g) Cn (~x, u) = Cn (x1 , . . . , xr , u), (~x, u) ∈ X(r) × U, n = 1, 2, . . . are measurable functions acting from the space X(r) × U to the space U. The above autoregressive type model can be imbedded in the model of modulated multivariate log-price Markov Gaussian processes introduced in Subsection 1.4.1. Let us introduce the (p + q)-dimensional vector process, ~0,n = (Y0,n,1 , . . . , Y0,n,p , Y0,n,p+1 , . . . , Y0,n,p+q ) Y = (Yn , . . . , Yn−p+1 , Wn , . . . , Wn−q+1 ), n = 0, 1, . . . ,

(1.182)

and r-dimensional vector index process, ~ 0,n = (X0,n,1 , . . . , X0,n,r ) X = (Xn , . . . , Xn−r+1 ), n = 0, 1, . . . .

(1.183)

We can always assume that the sequence of random variables Wn , n = 1, 2, . . . is the sequence of the first components of the sequence of (p + q)-dimensional i.i.d. 0 0 ~ n0 = (Wn,1 standard Gaussian random vectors W , . . . , Wn,p+q ), n = 1, 2, . . ., with 0 0 0 EW1,i = 0, EW1,i W1,j = I(i = j), i, j = 1, . . . , p + q, i.e., 0 Wn = Wn,1 , n = 1, 2, . . . .

(1.184)

Let us consider again (p + q)-dimensional i.i.d. Gaussian vectors ~ n = (Wn,1 , Wn,2 , . . . , Wn,p , Wn,p+1 , Wn,p+2 , . . . , Wn,p+q ) W = (Wn , 0, . . . 0, Wn , 0, . . . , 0), n = 1, 2, . . . .

(1.185)

~ n can be obtained as a linear transformation It is useful to note that vectors W 0 0 ~ ~ ~ of vectors Wn , namely, Wn = ΣWn , n = 1, 2, . . ., where (p + q) × (p + q) matrix Σ = kσi,j k, has elements σi,j = (I(i = 1) + I(i = p + 1))I(j = 1), i, j = 1, . . . , p + q.

60

1

Reward approximations for autoregressive LPP

The stochastic difference equation (1.181) can be re-written in the equivalent form of vector stochastic difference equation,  Y0,n,1 − Y0,n−1,1 = an,0 (X0,n−1,1 , . . . , X0,n−1,r )       + (an,1 (X0,n−1,1 , . . . , X0,n−1,r ) − 1)Y0,n−1,1        + · · · + an,p (X0,n−1,1 , . . . , X0,n−1,r )Y0,n−1,p         + bn,1 (X0,n−1,1 , . . . , X0,n−1,r )Y0,n−1,p+1     + · · · + bn,q (X0,n−1,1 , . . . , X0,n−1,r )Y0,n−1,p+q       + σn (X0,n−1,1 , . . . , X0,n−1,r )Wn,1 ,        Y0,n,2 − Y0,n−1,2 = Y0,n−1,1 − Y0,n−1,2 + Wn,2 ,       ... ...      Y0,n,p − Y0,n−1,p = Y0,n−1,p−1 − Y0,n−1,p + Wn,p , (1.186)  Y0,n,p+1 − Y0,n−1,p+1 = − Y0,n−1,p+1 + Wn,p+1 ,       Y0,n,p+2 − Y0,n−1,p+2 = Y0,n−1,p−1 − Y0,n−1,p+2 + Wn,p+2 ,       ... ...       Y0,n,p+q − Y0,n−1,p+q = Y0,n−1,p+q−1 − Y0,n−1,p+q + Wn,p+q ,        X0,n,1 = Cn (X0,n−1,1 , . . . , X0,n−1,r , Un ),       X0,n,2 = X0,n−1,1 ,       ... ...       X0,n,r = X0,n−1,r−1 ,      n = 1, 2, . . . . Taking into account the above remarks, one can re-write the stochastic difference equation (1.186) in the following matrix form,

 ~0,n − Y ~0,n−1 = ~λn (X ~ 0,n−1 )+ Y      ~ 0,n−1 )Y ~0,n−1 + Σn (X ~ 0,n−1 )W ~ n0 ,  Λn (X      

~ 0,n X

¯n (X ~ 0,n−1 , Un ), =C

n

= 1, 2, . . . ,

(1.187)

where (p + q)-dimensional vector functions ~λn (~x), ~x = (x1 , . . . , xr ) ∈ X(r) , n = 1, 2, . . . and (p+q)×(p+q) matrix functions Σn (~x) = kσn,i,j (~x)k, ~x = (x1 , . . . , xr ) ∈ X(r) , n = 1, 2, . . . and are defined by the following relations,

1.6

        ~λn (~ x) =       

an,0 (~x) 0 .. . .. . .. . .. . 0

61

Modulated autoregressive moving average LPP





               , Σn (~x) =             

σn (~ x) 0 .. . 0 1 0 .. . 0

0 0 .. . 0 0 0 .. . 0

... ... ... ... ... ...

0 0 .. . 0 0 0 .. . 0

        ,      

(1.188)

and (p + q) × (p + q) matrix functions Λn (~x) = kλn,i,j (~x)k, ~x = (x1 , . . . , xr ) ∈ X(r) , n = 1, 2, . . . are defined by the following relation, where shorten notations an,i = an,i (~x), i = 0, . . . , p and bn,j = bn,j (~x), j = 1, . . . , q are used in order to display the matrix functions Λn (~x), Λn (~ x) =

       =     

an,1 1 .. . 0 0 0 .. . 0

(1.189) an,2 −1 .. . ... ... ... .. . ...

... 0 .. . ... ... ... .. . ...

... ...

... ... ...

...

... ... .. . 0 ... ... .. . ...

... ... .. . 1 ... ... .. . ...

an,p 0 .. . −1 0 0 .. . 0

bn,1 0 .. . 0 −1 1 .. . 0

bn,2 ... .. . ... 0 −1 .. . ...

... ... .. . ... ... 0 .. . ...

... ...

... ... ...

...

... ... .. . ... ... ... .. . 0

... ... .. . ... ... ... .. . 1

bn,q 0 .. . 0 0 0 .. . −1

       ,     

~ n (~x, u), ~x = (x1 , . . . , xr ) ∈ X(r) , u ∈ U, n = while r-dimensional vector functions C 1, 2, . . . are defined by the following relations,

  ~ n (~x, u) =  C  

Cn (~x, u) x1 .. . xr−1

   . 

(1.190)

The above model of modulated autoregressive type model with Gaussian noise terms is a particular case of the model of modulated Markov Gaussian log-price processes with linear drift and constant diffusion characteristics introduced in Subsection 1.4.1, and the modulated stochastic difference equation (1.187) is a particular case of the modulated stochastic difference equation, (1.101). Note that, in this case, parameter k = p + q, and the role of the index com~ 0,n , which has the phase space X(r) . ponent is played by the process X Let us assume that the following condition holds: G3 : max0≤n≤N −1, i=1,...,p,j=1,...,q sup~x∈X(r) |an+1,0 (~x)| + |an+1,i (~x)| + |bn+1,j (~x)| +  2 σn+1 (x) < K18 , for some 0 < K18 < ∞.

62

1

Reward approximations for autoregressive LPP

Condition G3 obviously implies that condition G1 holds with constant K18 replacing constant K10 penetrating condition G1 . By Lemma 1.4.1, this implies that condition G4∗ holds, with constants,  2 K19 = 1 + max sup |an+1,0 (~ x)| + σn+1 (~x) 0≤n≤N −1 ~ x∈X(r)

≤ 1 + K115 ,

(1.191)

and K20,1 =

max0≤n≤N −1,i=1,...,p,j=1,...,q sup~x∈X(r) (|an+1,i (~x)| ∨ |bn+1,j (~x)|)  2 1 + max0≤n≤N −1 sup~x∈X(r) |an+1,0 (~x)| + σn+1 (~x)

≤ 1 + K18 , K20,l =

1 2 1 + max0≤n≤N −1 sup~x∈X(r) |an+1,0 (~x)| + σn+1 (~x)



≤ 1 + K18 , l = 2, . . . , p + q,

(1.192)

which can replace, respectively, constants K26∗ and K27,l∗ , l = 1, . . . , k penetrating condition G4∗ (or constants K11 and K12,l , l = 1, . . . , k given in relation (1.105)). Thus, Theorems 1.4.1 – 1.4.4 can be applied to the above the model of modulated autoregressive type log-price processes with Gaussian noise terms Zn = (Yn , Xn ) given by the modulated stochastic difference equation (1.181) or its equiv~ 0,n = (Y ~0,n , X ~ 0,n ) given by the vector modulated stochastic alent vector version Z difference equation (1.187). In this case, we consider pay-of functions g(n, e~y , ~x), ~ y = (y1 , . . . , yp+q ) ∈ Rp+q , ~x = (x1 , . . . , xr ) ∈ X(r) , n = 0, 1, . . ., which are real-valued measurable functions assumed to satisfy the following condition, for some vector parameter γ¯ = (γ1 , . . . , γp+q ) with non-negative components: B6 [¯ γ ]: max0≤n≤N sup(~y,~x)∈Rp+q ×X(r)

y ~

1+

e P|g(n, p+q i=1

,~ x)|

L12,i eγi |yi |

< L11 , for some 0 < L11
0

p+q X γi M17,i exp{( AN −n,j (β¯i )|yj |) }. βi

(1.196)

j=1

Remark 1.6.1. The explicit formulas for the constants M16 and M17,i (βi , i = 1, . . . , p + q take, according formulas given in Remark 1.4.1, the following form, M16 = L11 , M17,i = L11 L12,i I(γi = 0) + L23 L24,i (1 Pp+q γ K (A (β¯ )+ 1 (p+q)2 A2N −1,l (β¯i )) N βi + 2p+q e 19 l=1 N −1,l i 2 ) i I(γi > 0).

(1.197)

64

1

Reward approximations for autoregressive LPP

where vectors β¯i = (βi,1 , . . . , βi,p+q ) = (β1 I(i = 1), . . . , βp+q I(i = p + q)), i = 1, . . . , p + q. ¯ should be replaced in this case by the following condition, Condition D4 [β] assumed to hold for vector parameter β¯ = (β1 , . . . , βp+q ) with non-negative components: P ¯ E exp{ p+q AN,j (β¯i )|Y0,0,j |} < K21,i , i = 1, . . . , p, for some 1 < K21,i D6 [β]: j=1

< ∞, i = 1, . . . , p + q. In this case the optimal expected reward is defined by the formula, Φ0 =

~ ~ 0,τ0 ) Eg(τ0 , eY0,τ0 , X

sup (0)

τ0 ∈Mmax,0,N

~0,0 , X ~ 0,0 ). = Eφ0 (Y

(1.198)

Theorem 1.4.2 takes in this case the following form. Theorem 1.6.2. Let a modulated autoregressive log-price process Zn = ~ n = (Y ~n , X ~ n ) are given, respectively, (Yn , Xn ) and its eqiuvalent vector version Z by the modulated stochastic difference equations (1.181) and (1.187). Let condition ¯ hold and 0 ≤ γi ≤ βi < ∞, i = G3 holds and, also, conditions B6 [¯ γ ] and D6 [β] 1, . . . , p + q. Then, there exists a constant 0 ≤ M18 < ∞ such that the following inequality takes place, |Φ0 | ≤ M18 . (1.199) Remark 1.6.2. The explicit formula for the constant M78 takes, according formulas given in Remark 1.4.2, the following form, X X M18 = L11 + L11 L12,i + L11 L12,i (1 i:γi =0

+ 2p+q e

K19

i:γi >0

Pp+q l=1

γ (AN −1,l (β¯i )+ 12 (p+q)2 A2N −1,l (β¯i )) N βi

)

i

γi β

i K21,i .

(1.200)

1.6.2 Space skeleton approximations for option rewards of modulated autoregressive moving average log-price processes ~ 0,n has transition probabilities P0,n (~z, A) The modulated Markov log-process Z ~ 0,n ∈ A/Z ~ 0,n−1 = ~z} defined for ~z = (~ = P{Z y , ~x) ∈ Z = Rp+q × X(r) , A ∈ BZ , n = 1, 2, . . . by the following relation, ~ n0 , P0,n (~z, A) = P{ ~ y + ~λn (~x) + Λn (~x)~ y + Σn (~x)W  ~ n (~x, Un ) ∈ A}. C

(1.201)

Lemma 1.4.2 and Theorems 1.4.3 and 1.4.4 can be applied to the above modulated autoregressive log-price processes with Gaussian noise terms given by the

1.6

65

Modulated autoregressive moving average LPP

modulated autoregressive type stochastic difference equation (1.187) or by the equivalent vector modulated autoregressive type stochastic difference equation (1.181). Some simplifications in formulations can, however, be achieved due to specific features of the equation (1.187). Let us construct the corresponding skeleton approximation model taking into account the shift structure of the stochastic difference equation (1.186) defining ~ ε,n . components the vector log-price processes Z + − Let mε,n ≤ mε,n be integer numbers, δε,n > 0 and λε,n ∈ R1 , for n = − max(p, q) + 1, − max(p, q) + 2, . . .. ± In this case, one can use parameters m± ε,n,i = mε,n−i+1 , δε,n,i = δε,n−i+1 , ± ± λε,n,i = λε,n−i+1 , for i = 1, . . . , p and mε,n,i = mε,n−i+p+1 , δε,n,i = δε,n−i+p+1 , λε,n,i = λε,n−i+p+1 , for i = p + 1, . . . , p + q, for n = 0, 1, . . .. In this case, the index sets Lε,n , n = 0, 1, . . . takes the following form, + Lε,n = {¯ l = (l1 , . . . , lp+q ), m− ε,n,i ≤ li ≤ mε,n,i , i = 1, . . . , p + q} + = {¯ l = (l1 , . . . , lp+q ), m− ε,n−i+1 ≤ li ≤ mε,n−i+1 , i = 1, . . . , p, + m− ε,n−i+p+1 ≤ li ≤ mε,n−i+p+1 , i = p + 1, . . . , p + q}.

(1.202)

+ First, the skeleton intervals Iε,n,l should be defined for l = m− ε,n , . . . , mε,n , n = − max(p, q) + 1, . . .,  − 1 if l = m−  ε,n ,  (−∞, δε,n (mε,n + 2 )] + λε,n − 1 1 (1.203) Iε,n,l = (δε,n (l − 2 ), δε,n (l + 2 )] + λε,n if mε,n < l < m+ ε,n ,   + + 1 (δε,n (mε,n − 2 ), ∞) + λε,n if l = mε,n ,

and then skeleton cubes ˆIε,n,¯l should be defined for ¯ l ∈ Lε,n , n = 0, 1, . . ., ˆI ¯ = Iε,n,1,l × · · · × Iε,n,p,l 1 p ε,n,l × Iε,n,p+1,lp+1 · · · × Iε,n,p+q,lp+q = Iε,n,l1 × · · · × Iε,n−p+1,lp × Iε,n,lp+1 × · · · × Iε,n−q+1,lp+q .

(1.204)

+ Second, skeleton points yε,n,l should be defined for l = m− ε,n , . . . , mε,n , n = − max(p, q) + 1, . . ., yε,n,l = lδε,n + λε,n , (1.205)

and vector skeleton points ~ yε,n,¯l should be defined for ¯ l ∈ Lε,n , n = 0, 1, . . ., ~ yε,n,¯l = (yε,n,1,l1 , . . . , yε,n,p,lp , yε,n,p+1,lp+1 , . . . , yε,n,p+q,lp+q ) = (yε,n,l1 , . . . , yε,n−p+1,lp , yε,n,lp+1 , . . . , yε,n−q+1,lp+q ).

(1.206)

66

1

Reward approximations for autoregressive LPP

0+ Let m0− ε,n ≤ mε,n , n = −r + 1, −r + 2, . . . be integer numbers. Let us also define index sets, for n = 0, 1, . . .,

L0ε,n = {¯ l0 = (l10 , . . . , lr0 ), 0+ li0 = m0− ε,n−i+1 , . . . , mε,n−i+1 , i = 1, . . . , r}.

(1.207)

0+ Third, non-empty sets Jε,n,l0 ∈ BX , l0 = m0− ε,n , . . . , mε,n , n = −r + 1, . . ., such 00 000 0+ Jε,n,l0 , that (a) Jε,n,l00 ∩ Jε,n,l000 = ∅, l 6= l , n = −r + 1, . . .; (b) X = ∪m0− 0 ε,n ≤l ≤mε,n n = −r + 1, . . ., should be constructed. Sets Kε,n , n = −r + 1, . . ., and non-empty sets Kε,n,l ∈ BX , l = m0− ε,n,0 , . . . , 0+ 00 000 mε,n,0 , n = −r + 1, . . . such that (c) Kε,n,l00 ∩ Kε,n,l000 = ∅, l 6= l , n = 0, 1, . . .; (d) 0+ Kε,n,l0 = Kε,n , n = −r + 1, . . .. ∪m0− 0 ε,n ≤l ≤mε,n

The standard case is, where X = {x0l , l0 = m− , . . . , m+ } is a finite set and metrics dX (xl00 , xl000 ) = I(xl00 6= xl000 ). In this case, the simplest choice m0± ε,n = m± , n = −r + 1, . . . and to take sets Kε,n,l0 = Jε,n,l0 = {xl0 }, l0 = m− , . . . , m+ , n = −r + 1, . . .. Sets Jε,n,l can be defined in the following way, for n = −r + 1, . . .,  0+  Kε,n,l0 if m0− ε,n,0 ≤ l < mε,n,0 , (1.208) Jε,n,l0 =  Kε,n,m0+ ∪ Kε,n if l0 = m0+ ε,n,0 , ε,n,0

and skeleton cubes ˆ Jε,n,¯l0 should be defined for ¯ l0 = (l10 , . . . , lr0 ) ∈ L0ε,n , n = 0, 1, . . ., ˆ Jε,n,¯l0 = Jε,n,l10 × · · · × Jε,n−r+1,lr0 .

(1.209)

The difference in notations (between the above sets ˆ Jε,n,¯l and sets Jε,n,l used in Subsection 1.4.2) can be removed by a natural re-numeration of indices, namely 0− − 0− 0+ 0− − (m0− n , . . ., mε,n−r+1 ) ↔ mε,n,0 , . . . , (mε,n +1, . . . , mε,n−r+1 ) ↔ mε,n +1, . . ., (mε,n , + . . . , m0+ ε,n−r+1 ) ↔ mε,n , where, for n = 0, 1, . . ., − m+ ε,n − mε,n + 1 =

r Y

− (m+ ε,n−j+1 − mε,n−j+1 + 1).

(1.210)

j=1

The simplest variant is to choose integers ±m± ε,n ≥ 0, n = 0, 1, . . .. 0+ Fourth, skeleton points xε,n,l0 should be chosen for l0 = m0− ε,n,0 , . . . , mε,n,0 , n = −r + 1, . . . such that xε,n,l0 ∈ Jε,n,l0 , (1.211) and vector skeleton points ~xε,n,¯l0 should be defined for ¯ l0 = (l10 , . . . , lr0 ) ∈ L0ε,n , n = 0, 1, . . ., ~xε,n,¯l0 = (xε,n,l10 , . . . , xε,n−r+1,lr0 ). (1.212) Fifth, skeleton sets Aε,n,¯l,¯l0 and skeleton points, ~zε,n,¯l,¯l0 ∈ Aε,n,¯l,¯l0 can be defined, for ¯ l ∈ Lε,n , ¯ l0 ∈ L0ε,n , n = 0, 1, . . ., in the following way, Aε,n,¯l,¯l0 = ˆIε,n,¯l × ˆ Jε,n,¯l0 ,

(1.213)

1.6

Modulated autoregressive moving average LPP

67

and ~zε,n,¯l,¯l0 = (~ yε,n,¯l , ~xε,n,¯l0 ).

(1.214)

Sixth, skeleton functions, hε,n (y), y ∈ R1 should be defined for n = − max(p, q) +1, . . .,  + hε,n (y) = yε,n,l if y ∈ Iε,n,l , m− (1.215) ε,n ≤ l ≤ mε,n , ˆ ε,n (~ and vector skeleton functions h y ), ~ y = (y1 , . . . , yp+q ) ∈ Rp+q , should be defined n = 0, 1, . . ., ˆ ε,n (~ h y ) = (hε,n,1 (y1 ), . . . , hε,n,p (yp ), hε,n,p+1 (yp+1 ), . . . , hε,n,p+q (yp+q )) = (hε,n (y1 ), . . . , hε,n−p+1 (yp ), hε,n (yp+1 ), . . . , hε,n−q+1 (yp+q )).

(1.216)

Seventh, skeleton functions h0ε,n (x), x ∈ X, n = −r + 1, . . ., should be defined, h0ε,n (x) =



xε,n,l0

0+ 0 if x ∈ Jε,n,l0 , m0− ε,n,0 ≤ l ≤ mε,n,0 ,

(1.217)

ˆ 0ε,n (~x), ~x = (x1 , . . . , xr ) ∈ X(r) , should be defined, and vector skeleton functions h for n = 0, 1, . . ., ˆ 0ε,n (~x) = (h0ε,n,1 (x1 ), . . . , h0ε,n,r (xr )) h = (h0ε,n (x1 ), . . . , h0ε,n−r+1 (xr )).

(1.218)

¯ ε,n (¯ Eights, vector skeleton functions h z ), z¯ = (¯ y , ~x) ∈ Z = Rp+q × X(r) , n = 0, 1, . . ., should be defined, ¯ ε,n (~z) = (h ˆ ε,n (~ ˆ 0ε,n (~x)). h y ), h

(1.219)

~ ε,n = (Y ~ε,n , X ~ ε,n ) The corresponding space skeleton approximating processes Z are defined by the following stochastic transition dynamic relation,   ˆ ε,n h ˆ ε,n−1 (Y ˆ 0ε,n−1 (X ~ε,n = h ~ε,n−1 ) + ~λn (h ~ ε,n−1 ))  Y      ˆ 0ε,n−1 (X ˆ ε,n−1 (Y ~ ε,n−1 ))h ~ε,n−1 ),  +Λn (h       ˆ 0ε,n−1 (X ~ ε,n−1 )) W ~ n0 ,  + Σn (h   (1.220) ˆ 0ε,n C ˆ 0ε,n−1 (X ~ ε,n = h ~ 0,n (h ~ ε,n−1 ), Un ) , X      n = 1, 2, . . . ,      ˆ ε,n (Y ~ ~0,0 ),  Yε,0 = h    ˆ 0ε,0 (X ~ ε,0 = h ~ 0,0 ), X or by the following equivalent stochastic transition dynamic relation given in the form of system of stochastic transition relations for the components of the log-price

68

1

Reward approximations for autoregressive LPP

~ ε,n = (Y ~ε,n , X ~ ε,n ), n = 0, 1, . . ., process Z   ˆ 0ε,n−1 (X ~ ε,n−1 ))  Yε,n,1 = hε,n hε,n−1 (Yε,n−1,1 ) + an,0 (h      ˆ 0ε,n−1 (X ~ ε,n−1 ))hε,n−1 (Yε,n−1,1 )  + an,1 (h     0 ˆ ε,n−1 (X ~ ε,n−1 ))hε,n−p (Yε,n−1,p )  + · · · + an,p (h     0  ˆ ε,n−1 (X ~ ε,n−1 ))hε,n−1 (Yn−1,p+1 )  + bn,1 (h     0 ˆ ε,n−1 (X ~ ε,n−1 ))hε,n−q (Yn−1,p+q )  + · · · + bn,q (h       ˆ0 ~  + σ n (hε,n−1 (Xε,n−1 ))Wn ,       Yε,n,2 = hε,n−1 (Yε,n−1,1 ),      . . . ...     Y = hε,n−p+1 (Yε,n−1,p−1 ), ε,n,p      Yn,p+1 = hε,n (Wn ),      Yn,p+2 = hε,n−1 (Yn−1,p+1 ),      . . . ...    Yn,p+q = hε,n−q+1 (Yn−1,p+q−1 ),  Xn,1 ˆ 0ε,n−1 (X ~ ε,n−1 ), Un ),  = Cn (h      X = Xn−1,1 ,   n,2   ... ...      X = Xn−1,r−1 , n,r      n = 1, 2, . . . ,      Yε,0,1 = hε,0 (Y0,0,1 ),     . . . ...      Y = hε,−p+1 (Y0,0,p ), ε,0,p      Yε,0,p+1 = hε,0 (Y0,0,p+1 ),      ... ...      Yε,0,p+q = hε,−q+1 (Y0,0,p+q ),      Xε,0,1 = h0ε,0 (X0,0,1 ),     ... ...    Xε,0,r = h0ε,−r+1 (X0,0,r ).

(1.221)

~ ε,n defined, for every ε ∈ (0, ε0 ], by the nonlinear The log-price process Z dynamic transition relation (1.169) is a skeleton atomic Markov chain, with the phase space Z = Rp+q × X(r) and one-step transition probabilities Pε,n (~z, A) = P{~zε,n ∈ A/~zε,n−1 = ~z} defined for ~z ∈ Z, A ∈ BZ , n = 1, 2, . . . by the following relation, X ¯ ε,n−1 (~z), A ¯ ¯0 ) Pε,n (~z, A) = P0,n (h ε,n,l,l ~ zε,n,l, yε,n,l¯,xε,n,l¯0 )∈A ¯ l¯0 =(~

1.6

Modulated autoregressive moving average LPP

X

=

69

ˆ ε,n−1 (~ ˆ 0ε,n−1 (x)) P{ h y ) + ~λn (h

~ zε,n,l, yε,n,l¯,xε,n,l¯0 )∈A ¯ l¯0 =(~

ˆ 0ε,n−1 (x))h ˆ ε,n−1 (~ ˆ 0ε,n−1 (x))W ~ n0 , + Λn (h y )) + Σn (h  ˆ 0ε,n C ˆ 0ε,n−1 (X ~ n (h ~ ε,n−1 ), Un ) ∈ A ¯ ¯0 }. h ε,n,l,l

(1.222)

~ ε,0 ∈ A}, A ∈ BZ is concerned, As far as the initial distribution Pε,0 (A) = P{Z it takes, for every ε ∈ (0, ε0 ], the following form, X Pε,0 (A) = P0,0 (Aε,0,¯l,¯l0 ) ~ zε,0,l¯∈A

X

=

 ˆ ε,0 (Y ˆ 0ε,0 (X ~0,0 ), h ~ 0,0 ) ∈ A ¯ ¯0 }. P{ h ε,n,l,l

(1.223)

~ zε,0,l¯∈A

We assume that the pay-off functions g(n, e~y , ~x), ~ y = (y1 , . . . , yp+q ) ∈ (r) Rp+q , ~x = (x1 , . . ., xr ) ∈ X do not depend on ε. (ε) Let us also recall the class Mmax,n,N of all Markov moments τε,n for the log(ε) ~ ε,n such that (a) n ≤ τε,n ≤ N , (b) event {τε,n = m} ∈ Fn,m price process Z = ~ ε,l , n ≤ l ≤ m], n ≤ m ≤ N . σ[Z ~ ε,n by In this case, the reward functions are defined for the log-price process Z the following relation, for n = 0, 1, . . . , N and ~z = (~ y , ~x) ∈ Z, φε,n (~ y , ~x) =

sup

~

~ ε,τε,n ). E(~y,~x),n g(τε,n , eYε,τε,n , X

(1.224)

(ε) τε,n ∈Mmax,n,N

Probability measures Pε,n (~z, A), ~z ∈ Z = Rp+q × X(r) , n = 1, 2, . . . are concentrated on finite sets, for every ε ∈ (0, ε0 ]. This obviously implies that the rewards functions |φε,n (~ y , ~x)| < ∞, ~z = (~ y , ~x) ∈ Z, n = 1, 2, . . . for every ε ∈ (0, ε0 ]. It is also useful to note that φε,N (~ y , x) = g(N, e~y , ~x), (~ y , ~x) ∈ Z. An analogue of Lemma 1.4.2 can be formulated for the above model. However, the corresponding system of linear equations can be simplified taking into account the specific shift structure of the dynamic relations (1.220), which is well seen from the equivalent form (1.221). By the definition of sets Aε,n,¯l,¯l0 , ¯ l ∈ Lε,n , ¯ l0 ∈ L0ε,n , n = 0, 1, . . ., there exist 0 0 the unique ¯ lε,n (~z) = (lε,n,1 (~z), . . . , lε,n,p+q (~z)) ∈ Lε,n and ¯ lε,n (~z) = (lε,n,1 (~z), . . . , 0 0 (r) lε,n,r (~z)) ∈ Lε,n , for every ~z = (~ y , ~x) ∈ Z = Rp+q × X and n = 0, 1, . . ., such that ~z ∈ Aε,n,¯lε,n (~z),¯lε,n 0 (~ z) . The following lemma is the direct corollary of Lemma 1.4.2. Lemma 1.6.1. Let the modulated autoregressive moving average log-price pro~ 0,n is defined by the vector modulated stochastic difference equation cess process Z (1.187), while the corresponding approximating space-skeleton log-price process ~ ε,n is defined, for every ε ∈ (0, ε0 ], by the stochastic transition dynamic relaZ tion (1.220). Then the reward functions φε,n (~ y , ~x) and φn+m (~ yε,n+m,¯l , ~xε,n+m,¯l0 ),

70

1

Reward approximations for autoregressive LPP

¯ l ∈ Lε,n+m , ¯ l0 ∈ L0ε,n+m , m = 1, . . . N − n are, for every ε ∈ (0, ε0 ] ~z = (~ y , ~x) ∈ Z = (r) Rp+q × X and n = 0, . . . , N − 1, the unique solution for the following recurrence finite system of linear equations,  y ~ε,N,l ,~ xε,N,¯l0 ),   φε,N (~yε,N,¯l , ~xε,N,¯l0 ) = g(N, e   0 0 0 0 ¯ ¯   l = (l1 , . . . , lp+q ) ∈ Lε,N , l = (l1 , .. . , lr ) ∈ Lε,N ,    y ~  φε,n+m (~y  xε,n+m,¯l0 ), xε,n+m,¯l0 ) = max g(n + m, e ε,n+m,l¯, ~ ε,n+m,¯ l, ~      Pm0+ Pm+  ε,n+m+1 ε,n+m+1  φε,n+m+1 (~ yε,n+m+1,(l00 ,l1 ,...,lp−1 ,  − 00 ,l00  1 l000 =m0− l =m  1 ε,n+m+1 1 p+1 ε,n+m+1     ,l01 ,...,l0r−1 ) )  l00p+1 ,lp+1 ,...,lp+q ) , ~xε,n+m+1,(l000 1     ×P{ yε,n+m,l1 + an+m+1,0 (~ xε,n+m,¯l0 )      + an+m+1,1 (~ xε,n+m,¯l0 )yε,n+m,l1      + · · · + a (~ xε,n+m,¯l0 )yε,n+m−p+1,lp n+m+1,p     + bn+m+1,1 (~ xε,n+m,¯l0 )yε,n+m,lp+1      + · · · + bn+m+1,p (~ xε,n+m,¯l0 )yε,n+m−q+1,lp+q      + σn+m+1 (~ xε,n+m,¯l0 )Wn+m+1 ∈ Iε,n+m+1,l00 ,   1    Wn+m+1 ∈ Iε,n+m+1,l00 ,   p+1       C (~ x , U n+m+1 ε,n+m,¯ n+m+1 ) ∈ Jε,n+m+1,l000 } , l0  1 ¯

¯0

0

0

0

l = (l1 , . . . , lp+q ) ∈ Lε,n+m , l = (l1 , . . . , lp+q ) ∈ Lε,n+m ,     m = N − n − 1, . . . , 1,        φε,n (~ y, ~ x) = max g(n, ey~ , ~ x),     + 0+  Pmε,n+1 Pmε,n+1   φε,n+1 (~ yε,n+1,(l00 ,lε,n,1 (~z),...,lε,n,p−1 (~z)) ,   1 l00 ,l00 =m− l000 =m0−  1 p+1 ε,n+1 1 ε,n+1     xε,n+1,(l000 ,l0  l00 ,l (~ z ),...,lε,n,p+q (~ z )) , ~ (~ z ),...,l0ε,n,r−1 (~ z )) ) p+1 ε,n,p+1 1 ε,n,1     ×P{yε,n,lε,n,1 (~z) + an+1,0 (~ xε,n,¯l0 (~z) )   ε,n     + a (~ x )y 0 ¯ n+1,1 z) ε,n,lε,n (~ z ) ε,n,lε,n,1 (~      + · · · + a (~ x )y 0 ¯ n+1,p z)  ε,n,lε,n (~ z ) ε,n−p+1,lε,n,p (~     + bn+1,1 (~ xε,n+m,¯l0 )yε,n,lp+1      + · · · + bn+1,p (~ xε,n,¯l0 )yε,n−q+1,lp+q      + σ (~ x , 0 ¯ n+1  ε,n,lε,n (~ z ) )Wn+1 ∈ Iε,n+1,l00 1     Wn+1 ∈ Iε,n+1,l00 ,   p+1      } ,  Cn+1 (~xε,n,¯l0ε,n (~z) , Un+1 ) ∈ Jε,n+1,l000 1 (ε)

(1.225)

while the optimal expected reward Φε = Φε (Mmax,0,N ) is defined by the following formula, for every ε ∈ (0, ε0 ], X Φε = P0,0 (Aε,0,¯l,¯l0 ) φε,0 (~ yε,0,¯l , ~xε,0,¯l0 ). (1.226) ¯ l∈Lε,0 , ¯ l0 ∈L0ε,0

1.6

71

Modulated autoregressive moving average LPP

Proof of Lemma 1.6.1 is analogous to the proof of Lemma 1.5.1 Obviously, |Φε | < ∞, for every ε ∈ (0, ε0 ].

1.6.3 Convergence of space skeleton approximations for option rewards of modulated autoregressive moving average log-price processes Let now formulate conditions of convergence for the reward functions for modulated autoregressive moving average log-price processes with Gaussian noise terms. We shall apply Theorem 1.4.3. Let us introduce special shorten notations for the maximal and the minimal skeleton points, for n = − max(p, q) + 1, . . . and ε ∈ (0, ε0 ], ± zε,n = δε,n m± ε,n + λε,n .

(1.227)

We impose the following modification of condition N4 on the parameters of the space skeleton model: ± N6 : (a) δε,n → 0 as ε → 0, for n = − max(p, q) + 1, . . . , N ; (b) ±zε,n → ∞ as ± ε → 0, for n = − max(p, q) + 1, . . . , N ; (c) ±zε,n , n = − max(p, q) +1, . . . , N are non-decreasing sequences, for every ε ∈ (0, ε0 ]; (d) for any x ∈ X and d > 0, there exists εx,d ∈ (0, ε0 ] such that the ball Rd (x) ⊆ Kε,n , n = 0, . . . , N , for ε ∈ (0, εx,d ]; (e) sets Kε,n,l have diameters dε,n,l = supx0 ,x00 ∈Kε,n,l dX (x0 , x00 ) such that dε,n = maxm− ≤l≤m+ dε,n,l → 0 as ε → 0, n = 0, . . . , N . ε,n,0

ε,n,0

Let us first formulate the corresponding moment conditions, which provide the existence of appropriate upper bounds for reward functions. We assume that condition B6 [¯ γ ] holds for the payoff functions g(n, e~y , ~x), (r) ~z = (~ y , ~x) ∈ Z = Rp+q × X , for some vector parameter γ¯ = (γ1 , . . . , γp+q ) with non-negative components. Let now formulate the corresponding convergence conditions. We assume that the following condition holds: I6 : There exists sets Z0n ∈ BZ , n = 0, . . . , N such that function g(n, e~y , ~x) is continuous in points ~z = (~ y , ~x) ∈ Z0n for every n = 0, . . . , N . Also, we assume that the following analogue of condition O4 , with character~ n (~x, u) given by relations (1.188) – (1.190), holds: istics ~λn (~ x), Λn (~ x), Σn (~x) and C O6 : There exist sets Xn ∈ BX(r) , n = 0, . . . , N and Un ∈ BU , n = 1, . . . , N such that: (a) an,i (~xε ) → an,i (~x0 ), i = 0, . . . , p, bn,j (~xε ) → bn,j (~x0 ), j = 1, . . . , q, σn (~xε ) → σn (~x0 ), Cn (~xε , u) → Cn (~x0 , u) as ε → 0, for any ~xε → ~x0 ∈ Xn−1 as ε → 0, u ∈ Un , n = 1, . . . , N ; (b) P{Un ∈ Un } = 1, n = 1, . . . , N ; (c)  ~ n0 , C ~ n (~x0 , Un ) ∈ Z0n ∪ Z00n } = 0 for every P{ ~ y0 + ~λn (~ x0 ) + Λn (~x0 )~ y0 + Σn (~x0 )W

72

1

Reward approximations for autoregressive LPP

~z0 = (~ y0 , ~x0 ) ∈ Z0n−1 ∩ Z00n−1 and n = 1, . . . , N , where Z0n , n = 0, . . . , N are sets introduced in condition I12 and Z00n = Rk × Xn , n = 0, . . . , N . The following theorem is a corollary of Theorem 1.4.3. Theorem 1.6.3. Let the modulated autoregressive moving average log-price ~ 0,n is defined by the vector modulated stochastic difference equaprocess process Z tion (1.187), while the corresponding approximating space-skeleton log-price pro~ ε,n is defined, for every ε ∈ (0, ε0 ], by the stochastic transition dynamic cess Z relation (1.220). Let also conditions B6 [¯ γ ] holds for some vector parameter γ¯ = (γ1 , . . . , γk ) with non-negative components, and, also, conditions G3 , N6 , I6 , and O6 hold. Then, for every n = 0, 1, . . . , N , the following relation takes place for any ~zε = (~ yε , ~xε ) → ~z0 = (~ y0 , ~x0 ) ∈ Z0n ∩ Z00n , φε,n (~ yε , ~xε ) → φ0,n (~ y0 , ~x0 ) as ε → 0.

(1.228)

~ 0,n and its space skeleton approximation logProof. The log-price process Z ~ price process Zε,n defined, respectively, in (1.187) and (1.220) are particular variants of the corresponding log-price processes defined, respectively, in (1.101), and (1.124). Conditions B6 [¯ γ ], N6 , and I6 are, respectively, variants of conditions B4 [¯ γ ], N4 , and I4 , for the case where the role of the index phase space X is played by the space X(r) . Conditions G3 imply that conditions G1 holds and condition O6 imply that condition O4 holds, for the model given by the stochastic transition dynamic relations (1.187) and (1.220). Thus, all conditions of Theorem 1.4.3 hold. By applying this theorem, we get the convergence relation (1.228). 

1.6.4 Convergence of optimal expected rewards for space skeleton approximations of modulated autoregressive moving average log-price processes Let us now give conditions for convergence for optimal expected rewards Φε in the above space skeleton approximation model. We shall apply Theorem 1.4.4. ¯ = (A1 (β), ¯ . . . , Ap+q (β)) ¯ ~ β) ¯ with the function A( In this case, condition D6 [β], given by relations (1.194) and (1.195), should be assumed to hold for some vector parameter β¯ = (β1 , . . . , βp+q ) with non-negative components and the corresponding vectors β¯i = (βi,1 , . . . , βi,p+q ) with components βi,j = I(i = j), i, j = 1, . . . , p + q. Condition K4 should be replaced by the following condition imposed on the ~ 0,0 ∈ A}: initial distribution P0,0 (A) = P{Z K6 : P0,0 (Z00 ) = 1, where Z00 is the set introduced in conditions I6 .

1.6

Modulated autoregressive moving average LPP

73

The following theorem is a corollary of Theorem 1.4.4. Theorem 1.6.4. Let the modulated autoregressive moving average log-price ~ 0,n is defined by the vector modulated stochastic difference equaprocess process Z tion (1.187), while the corresponding approximating space-skeleton log-price pro~ ε,n is defined, for every ε ∈ (0, ε0 ], by the stochastic transition dynamic cess Z ¯ holds for some vector paramrelation (1.220). Let also conditions B6 [¯ γ ] and D6 [β] ¯ eters γ¯ = (γ1 , . . . , γp+q ) and β = (β1 , . . . , βp+q ) such that, for every i = 1, . . . , p+q either βi > γi > 0 or βi = γi = 0, and also conditions G3 , N6 , I6 , O6 and K6 hold. Then, the following relation takes place, Φε → Φ0 as ε → 0.

(1.229)

¯ and Proof. Theorem 1.6.4 is a corollary of Theorem 1.4.4. Conditions D6 [β] ¯ K6 are just re-formulation, respectively, of conditions D4 [β] and K4 used in Theorem 1.4.4. Other conditions of this theorem also holds that was pointed out in the proof of Theorem 1.6.3. By applying Theorem 1.4.4. we get convergence relation (1.229). 

2 Reward approximations for autoregressive stochastic volatility LPP In Chapter 2, we present results about space-skeleton approximations for rewards of American type options for autoregressive stochastic volatility log-price processes with Gaussian noise terms. These approximations are based on approximations of rewards for autoregressive stochastic volatility log-price processes with Gaussian noise terms by the corresponding rewards for American type options for autoregressive atomic type Markov chains, which transition probabilities and initial distributions are concentrated on finite sets of skeleton points. The rewards for approximating atomic Markov chains can be effectively computed using backward recurrence relations presented in Chapter 3∗ . The space skeleton approximations do also require special fitting of transition probabilities and initial distributions for approximating processes to the corresponding transition probabilities and initial distributions for approximated processes. Convergence of the approximating rewards can be proven using the general convergence results presented in Chapters 5∗ – 8∗ . In Section 2.1, we give results about convergence of space skeleton approximations for rewards of American-type options for nonlinear autoregressive stochastic volatility log-price processes with linear drift and bounded volatility coefficients. In Section 2.2, we give results about convergence of space skeleton approximations for rewards of American-type options for autoregressive conditional heteroskedastic log-price processes with Gaussian noise terms. In Section 2.3, we give results about convergence of space skeleton approximations for rewards of American-type options for generalized autoregressive conditional heteroskedastic log-price processes with Gaussian noise terms. In Section 2.4, we give results about convergence of space skeleton approximations for rewards of American-type options for modulated nonlinear autoregressive stochastic volatility log-price processes with linear drift and bounded volatility coefficients. In Section 2.5, we give results about convergence of space skeleton approximations for rewards of American-type options for modulated autoregressive conditional heteroskedastic log-price processes with Gaussian noise terms. In Section 2.6, we give results about convergence of space skeleton approximations for rewards of American-type options for modulated generalized autoregressive conditional heteroskedastic log-price processes with Gaussian noise terms. The main results, presented in Theorems 2.1.1–2.6.2, are new.

2.1 Nonlinear autoregressive stochastic volatility LPP

In this section, we present results about space-skeleton approximations for rewards of American-type options for nonlinear autoregressive stochastic volatility log-price processes with Gaussian noise terms.

2.1.1 Upper bounds for rewards of nonlinear autoregressive stochastic volatility log-price processes

Let us consider a nonlinear autoregressive stochastic volatility log-price process with Gaussian noise terms, which is defined by the following stochastic difference equation,
\[
Y_n - Y_{n-1} = A'_n(Y_{n-1}, \ldots, Y_{n-k}) + A''_n(Y_{n-1}, \ldots, Y_{n-k}) W_n, \quad n = 1, 2, \ldots, \tag{2.1}
\]
where: (a) $\vec{Y}_0 = (Y_0, \ldots, Y_{-k+1})$ is a $k$-dimensional random vector with real-valued components; (b) $W_1, W_2, \ldots$ is a sequence of real-valued i.i.d. standard normal variables with mean value 0 and variance 1; (c) the random vector $\vec{Y}_0$ and the random sequence $W_1, W_2, \ldots$ are independent; (d) $k$ is a positive integer number; (e) $A'_n(\vec{y}) = A'_n(y_1, \ldots, y_k)$, $A''_n(\vec{y}) = A''_n(y_1, \ldots, y_k)$, $\vec{y} = (y_1, \ldots, y_k) \in \mathbb{R}^k$, $n = 1, 2, \ldots$ are measurable functions acting from the space $\mathbb{R}^k$ to $\mathbb{R}^1$.

Let us introduce the $k$-dimensional vector process,
\[
\vec{Y}_{0,n} = (Y_{0,n,1}, \ldots, Y_{0,n,k}) = (Y_n, \ldots, Y_{n-k+1}), \quad n = 0, 1, \ldots. \tag{2.2}
\]
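To make the recursion (2.1) concrete, the following minimal simulation sketch (not part of the original text) generates a path of the log-price process for user-supplied drift and volatility functions $A'_n$ and $A''_n$; all function names and the example coefficients are illustrative assumptions.

```python
import numpy as np

def simulate_lpp(A_prime, A_dprime, y_init, N, rng=None):
    """Simulate relation (2.1):
    Y_n - Y_{n-1} = A'_n(Y_{n-1},...,Y_{n-k}) + A''_n(Y_{n-1},...,Y_{n-k}) W_n,
    with W_1, W_2, ... i.i.d. standard normal variables.

    A_prime, A_dprime : callables (n, y_vec) -> float, y_vec = (Y_{n-1},...,Y_{n-k})
    y_init            : initial values (Y_0, Y_{-1}, ..., Y_{-k+1})
    """
    rng = rng or np.random.default_rng()
    k = len(y_init)
    y = list(y_init)                 # y[0] = Y_{n-1}, ..., y[k-1] = Y_{n-k}
    path = [y[0]]
    for n in range(1, N + 1):
        y_vec = tuple(y[:k])
        w = rng.standard_normal()
        y_new = y_vec[0] + A_prime(n, y_vec) + A_dprime(n, y_vec) * w
        y = [y_new] + y[:k - 1]      # shift the k-step memory window
        path.append(y_new)
    return np.array(path)

# Example with hypothetical coefficients: contracting drift, bounded volatility.
path = simulate_lpp(lambda n, y: -0.1 * y[0],
                    lambda n, y: 0.3 / (1.0 + abs(y[0])),
                    y_init=(0.0, 0.0), N=50)
```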

We can always assume that the sequence of random variables $W_n$, $n = 1, 2, \ldots$ is the sequence of the first components of a sequence of $k$-dimensional i.i.d. standard Gaussian random vectors $\vec{W}'_n = (W'_{n,1}, \ldots, W'_{n,k})$, $n = 1, 2, \ldots$, with $\mathsf{E}W'_{1,i} = 0$, $\mathsf{E}W'_{1,i}W'_{1,j} = I(i = j)$, $i, j = 1, \ldots, k$, i.e.,
\[
W_n = W'_{n,1}, \quad n = 1, 2, \ldots. \tag{2.3}
\]
Let us also consider again the $k$-dimensional i.i.d. Gaussian vectors
\[
\vec{W}_n = (W_{n,1}, \ldots, W_{n,k}) = (W_n, 0, \ldots, 0), \quad n = 1, 2, \ldots. \tag{2.4}
\]
As was mentioned in Subsection 5.3.2$^*$, the vectors $\vec{W}_n$ can be obtained as a linear transformation of the vectors $\vec{W}'_n$, namely, $\vec{W}_n = \Sigma\vec{W}'_n$, $n = 1, 2, \ldots$, where the $k \times k$ matrix $\Sigma = \|\sigma_{i,j}\|$ has elements $\sigma_{i,j} = I(i = 1)I(j = 1)$, $i, j = 1, \ldots, k$.

The stochastic transition dynamic relation (2.1) can be re-written in the following equivalent form, as the system of stochastic difference equations,
\[
\begin{cases}
Y_{0,n,1} - Y_{0,n-1,1} = A'_n(Y_{0,n-1,1}, \ldots, Y_{0,n-1,k}) + A''_n(Y_{0,n-1,1}, \ldots, Y_{0,n-1,k}) W_{n,1}, \\
Y_{0,n,2} - Y_{0,n-1,2} = Y_{0,n-1,1} - Y_{0,n-1,2} + W_{n,2}, \\
\quad \cdots \\
Y_{0,n,k} - Y_{0,n-1,k} = Y_{0,n-1,k-1} - Y_{0,n-1,k} + W_{n,k}, \\
n = 1, 2, \ldots.
\end{cases} \tag{2.5}
\]
Taking into account the above remarks, one can re-write the system of stochastic difference equations (2.5) in the following form of a vector stochastic difference equation,
\[
\vec{Y}_{0,n} - \vec{Y}_{0,n-1} = \vec{\mu}_n(\vec{Y}_{0,n-1}) + \Sigma_n(\vec{Y}_{0,n-1})\vec{W}'_n, \quad n = 1, 2, \ldots, \tag{2.6}
\]
where the $k$-dimensional vector functions $\vec{\mu}_n(\vec{y}) = (\mu_{n,1}(\vec{y}), \ldots, \mu_{n,k}(\vec{y}))$, $\vec{y} = (y_1, \ldots, y_k) \in \mathbb{R}^k$, $n = 1, 2, \ldots$ and the $k \times k$ matrix functions $\Sigma_n(\vec{y}) = \|\sigma_{n,i,j}(\vec{y})\|$, $\vec{y} = (y_1, \ldots, y_k) \in \mathbb{R}^k$, $n = 1, 2, \ldots$ are defined by the following relations,
\[
\vec{\mu}_n(\vec{y}) =
\begin{pmatrix}
A'_n(\vec{y}) \\ y_1 - y_2 \\ \vdots \\ y_{k-1} - y_k
\end{pmatrix}, \quad
\Sigma_n(\vec{y}) =
\begin{pmatrix}
A''_n(\vec{y}) & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & 0
\end{pmatrix}. \tag{2.7}
\]

This model is a particular case of the multivariate Markov Gaussian log-price process without the index component introduced in Sections 1.4$^*$ and 4.5$^*$.

Let us assume that the following condition holds:

$\mathbf{G}_4$: $\displaystyle \max_{0 \le n \le N-1}\ \sup_{\vec{y} \in \mathbb{R}^k} \frac{|A'_{n+1}(\vec{y})| + (A''_{n+1}(\vec{y}))^2}{1 + \sum_{l=1}^{k} K_{23,l}|y_l|} < K_{22}$, for some $0 < K_{22} < \infty$ and $0 \le K_{23,1}, \ldots, K_{23,k} < \infty$.

Lemma 2.1.1. Let the multivariate Markov Gaussian log-price process $\vec{Y}_{0,n}$ be given by the vector stochastic difference equation (2.6) and let condition $\mathbf{G}_4$ hold. In this case, condition $\mathbf{G}_4^*$ (for the model without index component) holds, i.e., there exist constants $0 < K_{24} < \infty$ and $0 \le K_{25,l} < \infty$, $l = 1, \ldots, k$ such that,
\[
\max_{0 \le n \le N-1,\ i,j = 1, \ldots, k}\ \sup_{\vec{y} \in \mathbb{R}^k} \frac{|\mu_{n+1,i}(\vec{y})| + \sigma^2_{n+1,i,j}(\vec{y})}{1 + \sum_{l=1}^{k} K_{25,l}|y_l|} < K_{24}. \tag{2.8}
\]

Proof. Condition $\mathbf{G}_4$ implies that the following inequality holds for any $\vec{y} = (y_1, \ldots, y_k) \in \mathbb{R}^k$, $n = 0, \ldots, N-1$, $i, j = 1, \ldots, k$,
\[
\begin{aligned}
|\mu_{n+1,i}(\vec{y})| + \sigma^2_{n+1,i,j}(\vec{y})
& = \begin{cases}
|A'_{n+1}(\vec{y})| + (A''_{n+1}(\vec{y}))^2 & \text{if } i = 1, \\
|y_{i-1} - y_i| & \text{if } i = 2, \ldots, k,
\end{cases} \\
& < \begin{cases}
K_{22} + \sum_{l=1}^{k} K_{22}K_{23,l}|y_l| & \text{if } i = 1, \\
1 + |y_{i-1}| + |y_i| & \text{if } i = 2, \ldots, k,
\end{cases} \\
& \le K_{22} \vee 1 + \sum_{l=1}^{k}(K_{22}K_{23,l} \vee 1)|y_l|.
\end{aligned} \tag{2.9}
\]
Inequality (2.9) implies that condition $\mathbf{G}_4^*$ holds, for example, with the following finite constants,
\[
K_{24} = K_{22} \vee 1, \quad K_{25,l} = (K_{22}K_{23,l} \vee 1)/(K_{22} \vee 1), \quad l = 1, \ldots, k, \tag{2.10}
\]

which can replace, respectively, the constants $K_{26}^*$ and $K_{27,l}^*$, $l = 1, \ldots, k$ penetrating condition $\mathbf{G}_4^*$. $\square$

Thus, Theorems 4.5.3$^*$ and 4.5.4$^*$ can be applied to the above nonlinear autoregressive stochastic volatility type log-price process $Y_{0,n}$ given by the stochastic transition dynamic relation (2.1), or to its equivalent vector version $\vec{Y}_{0,n}$ given by the vector stochastic transition dynamic relation (2.6). These theorems yield the upper bounds, respectively, for the reward functions and the optimal expected rewards for these log-price processes.

We consider pay-off functions $g(n, e^{\vec{y}})$, $\vec{y} = (y_1, \ldots, y_k) \in \mathbb{R}^k$, $n = 0, 1, \ldots$, which are real-valued measurable functions and do not depend on an index argument. In this case, condition $\mathbf{B}_1[\bar{\gamma}]^*$ used in Theorems 4.5.3$^*$ and 4.5.4$^*$ can be replaced by condition $\mathbf{B}_1[\bar{\gamma}]$, formulated in Section 1.1. This condition should be assumed to hold for some vector parameter $\bar{\gamma} = (\gamma_1, \ldots, \gamma_k)$ with non-negative components.

Let $\mathcal{M}^{(0)}_{\max,k,n,N}$ be the class of all stopping times $\tau_{0,n}$ for the process $Y_n$ such that (a) $n \le \tau_{0,n} \le N$, (b) the event $\{\tau_{0,n} = m\} \in \mathcal{F}^{(0)}_{k,n,m} = \sigma[Y_l, n-k+1 \le l \le m]$, $n \le m \le N$.

Let us also recall the class $\mathcal{M}^{(0)}_{\max,n,N}$ of all Markov moments $\tau_{0,n}$ for the log-price process $\vec{Y}_{0,n}$ such that (a) $n \le \tau_{0,n} \le N$, (b) the event $\{\tau_{0,n} = m\} \in \mathcal{F}^{(0)}_{n,m} = \sigma[\vec{Y}_{0,l}, n \le l \le m]$, $n \le m \le N$.

Obviously, the class $\mathcal{M}^{(0)}_{\max,k,n,N}$ coincides with the class $\mathcal{M}^{(0)}_{\max,n,N}$.

In this case, the reward functions for the log-price process $\vec{Y}_{0,n}$ are defined by the following relation, for $\vec{y} \in \mathbb{R}^k$ and $n = 0, 1, \ldots, N$,
\[
\phi_{0,n}(\vec{y}) = \sup_{\tau_{0,n} \in \mathcal{M}^{(0)}_{\max,n,N}} \mathsf{E}_{\vec{y},n}\, g(\tau_{0,n}, e^{\vec{Y}_{0,\tau_{0,n}}}). \tag{2.11}
\]

The function $\vec{A}(\bar{\beta}) = (A_1(\bar{\beta}), \ldots, A_k(\bar{\beta}))$, $\bar{\beta} = (\beta_1, \ldots, \beta_k)$, $\beta_1, \ldots, \beta_k \ge 0$, penetrating the formulation of Theorems 4.5.3$^*$ and 4.5.4$^*$, has the following components,
\[
A_j(\bar{\beta}) = K_{24}K_{25,j}\sum_{l=1}^{k}\big(\beta_l + \tfrac{1}{2}k^2\beta_l^2\big), \quad j = 1, \ldots, k. \tag{2.12}
\]
The function $\vec{A}(\bar{\beta})$ generates a sequence of functions $\vec{A}_n(\bar{\beta}) = (A_{n,1}(\bar{\beta}), \ldots, A_{n,k}(\bar{\beta}))$, $n = 0, 1, \ldots$ from the class $\mathcal{A}_k$ by the following recurrence relation, for any $\bar{\beta} = (\beta_1, \ldots, \beta_k)$, $\beta_i \ge 0$, $i = 1, \ldots, k$,
\[
\vec{A}_n(\bar{\beta}) =
\begin{cases}
\bar{\beta} & \text{for } n = 0, \\
\vec{A}_{n-1}(\bar{\beta}) + \vec{A}(\vec{A}_{n-1}(\bar{\beta})) & \text{for } n = 1, 2, \ldots.
\end{cases} \tag{2.13}
\]
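Since the upper bounds below are expressed through the recursively defined functions $\vec{A}_n(\bar{\beta})$, it may help to see the recursion (2.12)–(2.13) computed numerically. The sketch below is illustrative only; the constants $K_{24}$ and $K_{25,l}$ are placeholders that in practice come from (2.10).

```python
import numpy as np

def A_bar(beta, K24, K25, k):
    """Components (2.12): A_j(beta) = K24 * K25_j * sum_l (beta_l + 0.5*k^2*beta_l^2)."""
    s = np.sum(beta + 0.5 * k**2 * beta**2)
    return K24 * K25 * s            # K25 is the vector (K25_1, ..., K25_k)

def A_n(beta, n, K24, K25):
    """Recurrence (2.13): A_0(beta) = beta, A_n(beta) = A_{n-1}(beta) + A(A_{n-1}(beta))."""
    k = len(beta)
    a = np.asarray(beta, dtype=float)
    for _ in range(n):
        a = a + A_bar(a, K24, K25, k)
    return a

# Example with illustrative constants.
print(A_n(beta=np.array([0.1, 0.0]), n=3, K24=1.0, K25=np.array([1.0, 1.0])))
```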

Recall also the vectors $\bar{\beta}_i = (\beta_{i,1}, \ldots, \beta_{i,k})$ with components $\beta_{i,j} = \beta_j I(j = i)$, $i, j = 1, \ldots, k$.

Theorem 4.5.3$^*$ takes in this case the following form.

Theorem 2.1.1. Let the multivariate Markov Gaussian log-price process $\vec{Y}_{0,n}$ be given by the vector stochastic difference equation (2.6). Let condition $\mathbf{G}_4$ hold and, also, condition $\mathbf{B}_1[\bar{\gamma}]$ hold for some vector parameter $\bar{\gamma} = (\gamma_1, \ldots, \gamma_k)$ with $\gamma_i \ge 0$, $i = 1, \ldots, k$. Then, for any vector parameter $\bar{\beta} = (\beta_1, \ldots, \beta_k)$ with components $\beta_i \ge \gamma_i$, $i = 1, \ldots, k$, there exist constants $0 \le M_{19}, M_{20,i} = M_{20,i}(\beta_i) < \infty$, $i = 1, \ldots, k$ such that the reward functions $\phi_{0,n}(\vec{y})$ satisfy the following inequalities for $\vec{y} = (y_1, \ldots, y_k) \in \mathbb{R}^k$, $0 \le n \le N$,
\[
|\phi_{0,n}(\vec{y})| \le M_{19} + \sum_{i:\,\gamma_i = 0} M_{20,i} + \sum_{i:\,\gamma_i > 0} M_{20,i}\exp\Big\{\Big(\sum_{j=1}^{k} A_{N-n,j}(\bar{\beta}_i)|y_j|\Big)\frac{\gamma_i}{\beta_i}\Big\}. \tag{2.14}
\]

Remark 2.1.1. The explicit formulas for the constants $M_{19}$ and $M_{20,i} = M_{20,i}(\beta_i)$, $i = 1, \ldots, k$ take, according to the formulas given in Remark 4.5.5$^*$, the following form,
\[
M_{19} = L_1, \quad M_{20,i}(\beta_i) = L_1 L_{2,i} I(\gamma_i = 0)
+ L_1 L_{2,i}\big(1 + 2k\, e^{K_{24}\sum_{l=1}^{k}(A_{N-1,l}(\bar{\beta}_i) + \frac{1}{2}k^2 A^2_{N-1,l}(\bar{\beta}_i))}\big)^{N\frac{\gamma_i}{\beta_i}} I(\gamma_i > 0). \tag{2.15}
\]

In this case, the optimal expected reward is defined by the formula,
\[
\Phi_0 = \sup_{\tau_{0,0} \in \mathcal{M}^{(0)}_{\max,0,N}} \mathsf{E}\, g(\tau_{0,0}, e^{\vec{Y}_{0,\tau_{0,0}}}) = \mathsf{E}\,\phi_0(\vec{Y}_{0,0}). \tag{2.16}
\]

In this case, condition $\mathbf{D}_6[\bar{\beta}]$ formulated in Section 1.4.5 should be replaced by the following condition, assumed to hold for a vector parameter $\bar{\beta} = (\beta_1, \ldots, \beta_k)$ with non-negative components:

$\mathbf{D}_7[\bar{\beta}]$: $\mathsf{E}\exp\{\sum_{j=1}^{k} A_{N,j}(\bar{\beta}_i)|Y_{0,0,j}|\} < K_{26,i}$, $i = 1, \ldots, k$, for some $1 < K_{26,i} < \infty$, $i = 1, \ldots, k$.

Theorem 4.5.4$^*$ takes in this case the following form.

Theorem 2.1.2. Let the multivariate Markov Gaussian log-price process $\vec{Y}_{0,n}$ be given by the vector stochastic difference equation (2.6). Let condition $\mathbf{G}_4$ hold and, also, conditions $\mathbf{B}_1[\bar{\gamma}]$ and $\mathbf{D}_7[\bar{\beta}]$ hold for some vector parameters $\bar{\gamma} = (\gamma_1, \ldots, \gamma_k)$ and $\bar{\beta} = (\beta_1, \ldots, \beta_k)$ such that $0 \le \gamma_i \le \beta_i < \infty$, $i = 1, \ldots, k$. Then, there exists a constant $0 \le M_{21} < \infty$ such that the following inequality takes place,
\[
|\Phi_0| \le M_{21}. \tag{2.17}
\]

Remark 2.1.2. The explicit formula for the constant $M_{21}$ takes, according to the formulas given in Remark 4.5.6$^*$, the following form,
\[
M_{21} = L_1 + \sum_{i:\,\gamma_i = 0} L_1 L_{2,i} + \sum_{i:\,\gamma_i > 0} L_3 L_{4,i}\big(1 + 2k\, e^{K_{24}\sum_{l=1}^{k}(A_{N-1,l}(\bar{\beta}_i) + \frac{1}{2}k^2 A^2_{N-1,l}(\bar{\beta}_i))}\big)^{N\frac{\gamma_i}{\beta_i}} K_{26,i}^{\frac{\gamma_i}{\beta_i}}. \tag{2.18}
\]

2.1.2 Space-skeleton approximations for option rewards of nonlinear autoregressive stochastic volatility log-price processes

The transition dynamic relation (2.6) shows that the vector log-price process $\vec{Y}_{0,n}$ is a particular case of the multi-dimensional Markov Gaussian log-price processes considered in Subsection 8.4.2$^*$.

The Markov process $\vec{Y}_{0,n}$ has Gaussian transition probabilities $P_{0,n}(\vec{y}, A) = \mathsf{P}\{\vec{Y}_{0,n} \in A / \vec{Y}_{0,n-1} = \vec{y}\}$ defined for $\vec{y} \in \mathbb{R}^k$, $A \in \mathcal{B}_k$, $n = 1, 2, \ldots$ by the following relation,
\[
P_{0,n}(\vec{y}, A) = \mathsf{P}\{\vec{y} + \vec{\mu}_n(\vec{y}) + \Sigma_n(\vec{y})\vec{W}'_n \in A\}, \tag{2.19}
\]
where the vector functions $\vec{\mu}_n(\vec{y})$ and matrix functions $\Sigma_n(\vec{y})$ are defined in relation (2.7).

It is useful to re-write the system of stochastic difference equations (2.5) in the form of a system of dynamic transition relations for the components of the log-price process $\vec{Y}_{0,n}$. In this case, relation (2.5) takes the following form (where the identical term $Y_{0,n-1,i}$ is cancelled in the transition dynamic relation for the $i$-th component, and the fact that the noise terms $W_{n,i} = 0$ for $i = 2, \ldots, k$ is used),
\[
\begin{cases}
Y_{0,n,1} = Y_{0,n-1,1} + A'_n(Y_{0,n-1,1}, \ldots, Y_{0,n-1,k}) + A''_n(Y_{0,n-1,1}, \ldots, Y_{0,n-1,k})W_n, \\
Y_{0,n,2} = Y_{0,n-1,1}, \\
\quad \cdots \\
Y_{0,n,k} = Y_{0,n-1,k-1}, \\
n = 1, 2, \ldots.
\end{cases} \tag{2.20}
\]

Let us construct the corresponding skeleton approximating processes $\vec{Y}_{\varepsilon,n}$, for $\varepsilon \in (0, \varepsilon_0]$, according to the algorithm described in Subsection 8.2.4$^*$.

Let $m^-_{\varepsilon,n} \le m^+_{\varepsilon,n}$, $n = -k+1, \ldots, N$ be integer numbers, $\delta_{\varepsilon,n} > 0$ and $\lambda_{\varepsilon,n} \in \mathbb{R}^1$, $n = -k+1, \ldots$. Let us also define the index sets $\mathbb{L}_{\varepsilon,n}$, $n = 0, 1, \ldots$,
\[
\mathbb{L}_{\varepsilon,n} = \{\bar{l} = (l_1, \ldots, l_k): \ l_i = m^-_{\varepsilon,n-i+1}, \ldots, m^+_{\varepsilon,n-i+1}, \ i = 1, \ldots, k\}. \tag{2.21}
\]
First, the skeleton intervals $I_{\varepsilon,n,l}$ should be constructed for $l = m^-_{\varepsilon,n}, \ldots, m^+_{\varepsilon,n}$, $n = -k+1, \ldots$,
\[
I_{\varepsilon,n,l} =
\begin{cases}
(-\infty, \delta_{\varepsilon,n}(m^-_{\varepsilon,n} + \frac{1}{2})] + \lambda_{\varepsilon,n} & \text{if } l = m^-_{\varepsilon,n}, \\
(\delta_{\varepsilon,n}(l - \frac{1}{2}), \delta_{\varepsilon,n}(l + \frac{1}{2})] + \lambda_{\varepsilon,n} & \text{if } m^-_{\varepsilon,n} < l < m^+_{\varepsilon,n}, \\
(\delta_{\varepsilon,n}(m^+_{\varepsilon,n} - \frac{1}{2}), \infty) + \lambda_{\varepsilon,n} & \text{if } l = m^+_{\varepsilon,n},
\end{cases} \tag{2.22}
\]
and the skeleton cubes $I_{\varepsilon,n,\bar{l}}$ should be defined for $\bar{l} \in \mathbb{L}_{\varepsilon,n}$, $n = 0, 1, \ldots$,
\[
I_{\varepsilon,n,\bar{l}} = I_{\varepsilon,n,l_1} \times \cdots \times I_{\varepsilon,n-k+1,l_k}. \tag{2.23}
\]
Second, the skeleton points $y_{\varepsilon,n,l}$ should be defined for $l = m^-_{\varepsilon,n}, \ldots, m^+_{\varepsilon,n}$, $n = -k+1, \ldots$,
\[
y_{\varepsilon,n,l} = l\delta_{\varepsilon,n} + \lambda_{\varepsilon,n}, \tag{2.24}
\]
and the vector skeleton points $\vec{y}_{\varepsilon,n,\bar{l}}$ should be defined for $\bar{l} \in \mathbb{L}_{\varepsilon,n}$, $n = 0, 1, \ldots$,
\[
\vec{y}_{\varepsilon,n,\bar{l}} = (y_{\varepsilon,n,l_1}, \ldots, y_{\varepsilon,n-k+1,l_k}). \tag{2.25}
\]
Third, the skeleton functions $h_{\varepsilon,n}(y)$, $y \in \mathbb{R}^1$ should be defined for $n = -k+1, \ldots$,
\[
h_{\varepsilon,n}(y) = y_{\varepsilon,n,l} \quad \text{if } y \in I_{\varepsilon,n,l}, \ m^-_{\varepsilon,n} \le l \le m^+_{\varepsilon,n}, \tag{2.26}
\]
and the vector skeleton functions $\hat{h}_{\varepsilon,n}(\vec{y})$, $\vec{y} = (y_1, \ldots, y_k) \in \mathbb{R}^k$, should be defined for $n = 0, 1, \ldots$,
\[
\hat{h}_{\varepsilon,n}(\vec{y}) = (h_{\varepsilon,n}(y_1), \ldots, h_{\varepsilon,n-k+1}(y_k)). \tag{2.27}
\]
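In words, $h_{\varepsilon,n}$ rounds a value to the nearest point of the grid $\{l\delta_{\varepsilon,n} + \lambda_{\varepsilon,n}\}$ and sends the two unbounded tail intervals to the extreme grid points. The following minimal sketch (illustrative only, and ignoring the exact half-open endpoint convention of (2.22)) shows this projection:

```python
import numpy as np

def h_skeleton(y, delta, lam, m_minus, m_plus):
    """Skeleton function of the type (2.26): project y onto the grid
    F = {l*delta + lam : l = m_minus, ..., m_plus}, sending the two unbounded
    end intervals to the extreme grid points."""
    l = int(np.rint((y - lam) / delta))        # nearest grid index
    l = max(m_minus, min(m_plus, l))           # truncate to [m_minus, m_plus]
    return l * delta + lam

# Example: grid with step 0.1 centred at 0, indices -10..10.
print(h_skeleton(0.234, delta=0.1, lam=0.0, m_minus=-10, m_plus=10))  # about 0.2
print(h_skeleton(5.0,   delta=0.1, lam=0.0, m_minus=-10, m_plus=10))  # 1.0 (right tail)
```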

The corresponding space-skeleton approximating processes $\vec{Y}_{\varepsilon,n}$ should be defined, for every $\varepsilon \in (0, \varepsilon_0]$, by the following vector stochastic transition dynamic relation,
\[
\begin{cases}
\vec{Y}_{\varepsilon,n} = \hat{h}_{\varepsilon,n}\big(\hat{h}_{\varepsilon,n-1}(\vec{Y}_{\varepsilon,n-1}) + \vec{\mu}_n(\hat{h}_{\varepsilon,n-1}(\vec{Y}_{\varepsilon,n-1})) + \Sigma_n(\hat{h}_{\varepsilon,n-1}(\vec{Y}_{\varepsilon,n-1}))\vec{W}'_n\big), \quad n = 1, 2, \ldots, \\
\vec{Y}_{\varepsilon,0} = \hat{h}_{\varepsilon,0}(\vec{Y}_{0,0}),
\end{cases} \tag{2.28}
\]
or by the following equivalent transition dynamic relation, given in the form of a system of transition relations for the components of the log-price process $\vec{Y}_{\varepsilon,n}$,
\[
\begin{cases}
Y_{\varepsilon,n,1} = h_{\varepsilon,n}\big(h_{\varepsilon,n-1}(Y_{\varepsilon,n-1,1}) + A'_n(h_{\varepsilon,n-1}(Y_{\varepsilon,n-1,1}), \ldots, h_{\varepsilon,n-k}(Y_{\varepsilon,n-1,k})) \\
\qquad\qquad + A''_n(h_{\varepsilon,n-1}(Y_{\varepsilon,n-1,1}), \ldots, h_{\varepsilon,n-k}(Y_{\varepsilon,n-1,k}))W_n\big), \\
Y_{\varepsilon,n,2} = h_{\varepsilon,n-1}(Y_{\varepsilon,n-1,1}), \\
\quad \cdots \\
Y_{\varepsilon,n,k} = h_{\varepsilon,n-k+1}(Y_{\varepsilon,n-1,k-1}), \\
n = 1, 2, \ldots, \\
Y_{\varepsilon,0,1} = h_{\varepsilon,0}(Y_{0,0}), \\
\quad \cdots \\
Y_{\varepsilon,0,k} = h_{\varepsilon,-k+1}(Y_{0,-k+1}).
\end{cases} \tag{2.29}
\]
The log-price process $\vec{Y}_{\varepsilon,n}$ is, for every $\varepsilon \in [0, \varepsilon_0]$, a Markov chain with the phase space $\mathbb{R}^k$ and one-step transition probabilities $P_{\varepsilon,n}(\vec{y}, A) = \mathsf{P}\{\vec{Y}_{\varepsilon,n} \in A / \vec{Y}_{\varepsilon,n-1} = \vec{y}\}$, $\vec{y} \in \mathbb{R}^k$, $A \in \mathcal{B}_k$, $n = 1, 2, \ldots$.

Moreover, the log-price process $\vec{Y}_{\varepsilon,n}$ is, for every $\varepsilon \in (0, \varepsilon_0]$, a skeleton atomic Markov chain. Its transition probabilities are determined by the one-step transition probabilities of the Markov chain $\vec{Y}_{0,n}$ via the following formula,
\[
P_{\varepsilon,n}(\vec{y}, A) = \sum_{\vec{y}_{\varepsilon,n,\bar{l}} \in A} P_{0,n}(\hat{h}_{\varepsilon,n-1}(\vec{y}), I_{\varepsilon,n,\bar{l}})
= \sum_{\vec{y}_{\varepsilon,n,\bar{l}} \in A} \mathsf{P}\{\hat{h}_{\varepsilon,n-1}(\vec{y}) + \vec{\mu}_n(\hat{h}_{\varepsilon,n-1}(\vec{y})) + \Sigma_n(\hat{h}_{\varepsilon,n-1}(\vec{y}))\vec{W}'_n \in I_{\varepsilon,n,\bar{l}}\}. \tag{2.30}
\]
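Because the noise term is Gaussian and only the first component of the state is random, the one-step probabilities entering (2.30) (and the recursion (2.33) below) reduce to differences of normal distribution functions over the skeleton intervals (2.22). A minimal sketch of this computation, assuming SciPy's standard normal distribution function; the helper names are ours:

```python
from math import inf
from scipy.stats import norm

def interval_bounds(l, delta, lam, m_minus, m_plus):
    """Endpoints of the skeleton interval I_{eps,n,l} in (2.22)."""
    a = -inf if l == m_minus else delta * (l - 0.5) + lam
    b = +inf if l == m_plus  else delta * (l + 0.5) + lam
    return a, b

def step_prob(y1, drift, vol, l, delta, lam, m_minus, m_plus):
    """P{ y1 + A'(y) + A''(y) W in I_{eps,n,l} } for a standard normal W.
    Here drift = A'(y) and vol = A''(y) are already evaluated at the current state."""
    a, b = interval_bounds(l, delta, lam, m_minus, m_plus)
    m = y1 + drift
    if vol == 0.0:                       # degenerate case: the step is deterministic
        return float(a < m <= b)
    return norm.cdf((b - m) / vol) - norm.cdf((a - m) / vol)
```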

It is useful to note that the skeleton functions $h_{\varepsilon,n}(\cdot)$ project the values $Y_{\varepsilon,n}$ onto the set of skeleton points $F_{\varepsilon,n} = \{l\delta_{\varepsilon,n} + \lambda_{\varepsilon,n}, \ l = m^-_{\varepsilon,n}, \ldots, m^+_{\varepsilon,n}\}$ in the way described above in relation (2.26). This relation also implies that $h_{\varepsilon,n}(Y_{\varepsilon,n}) = Y_{\varepsilon,n}$ if $Y_{\varepsilon,n} \in F_{\varepsilon,n}$, since $h_{\varepsilon,n}(y) = y$ for $y \in F_{\varepsilon,n}$. Moreover, relations (2.28) – (2.29) imply that $\mathsf{P}\{Y_{\varepsilon,n} \in F_{\varepsilon,n}, n = 0, 1, \ldots\} = 1$. For this reason it may seem that one could replace the random vectors $\hat{h}_{\varepsilon,n-1}(\vec{Y}_{\varepsilon,n-1})$ by the random vectors $\vec{Y}_{\varepsilon,n-1}$ in relation (2.28). This is, however, not so. The transition probabilities $P_{\varepsilon,n}(\vec{y}, A)$ should be defined for all points $\vec{y} = (y_1, \ldots, y_k) \in \mathbb{R}^k$, including points which do not belong to the sets $F_{\varepsilon,n} \times \cdots \times F_{\varepsilon,n-k+1}$. That is why the transition dynamic relation (2.28) and formula (2.30) are given in the above form.

It is also useful to note that the stochastic transition dynamic relations (2.28) and (2.29) reduce, respectively, to the stochastic transition dynamic relations (2.6) and (2.20), if one replaces the skeleton functions $\hat{h}_{\varepsilon,n}(\vec{y})$ and $h_{\varepsilon,n}(y)$ in the above two relations, respectively, by the limiting skeleton functions $\hat{h}_{0,n}(\vec{y}) \equiv \vec{y}$ and $h_{0,n}(y) \equiv y$.

As far as the initial distribution $P_{\varepsilon,0}(A) = \mathsf{P}\{\vec{Y}_{\varepsilon,0} \in A\}$, $A \in \mathcal{B}_k$ is concerned, it takes, for every $\varepsilon \in (0, \varepsilon_0]$, the following form,
\[
P_{\varepsilon,0}(A) = \sum_{\vec{y}_{\varepsilon,0,\bar{l}} \in A} P_{0,0}(I_{\varepsilon,0,\bar{l}}) = \sum_{\vec{y}_{\varepsilon,0,\bar{l}} \in A} \mathsf{P}\{\hat{h}_{\varepsilon,0}(\vec{Y}_{0,0}) \in I_{\varepsilon,0,\bar{l}}\}. \tag{2.31}
\]

We consider the model with a pay-off function $g(n, e^{\vec{y}})$, $\vec{y} = (y_1, \ldots, y_k) \in \mathbb{R}^k$, which does not depend on $\varepsilon$.

Let us also recall, for $\varepsilon \in (0, \varepsilon_0]$, the class $\mathcal{M}^{(\varepsilon)}_{\max,n,N}$ of all Markov moments $\tau_{\varepsilon,n}$ for the log-price process $\vec{Y}_{\varepsilon,n}$ such that (a) $n \le \tau_{\varepsilon,n} \le N$, (b) the event $\{\tau_{\varepsilon,n} = m\} \in \mathcal{F}^{(\varepsilon)}_{n,m} = \sigma[\vec{Y}_{\varepsilon,l}, n \le l \le m]$, $n \le m \le N$.

In this case, the reward functions are defined for the log-price process $\vec{Y}_{\varepsilon,n}$ by the following relation, for $\vec{y} \in \mathbb{R}^k$ and $n = 0, 1, \ldots, N$,
\[
\phi_{\varepsilon,n}(\vec{y}) = \sup_{\tau_{\varepsilon,n} \in \mathcal{M}^{(\varepsilon)}_{\max,n,N}} \mathsf{E}_{\vec{y},n}\, g(\tau_{\varepsilon,n}, e^{\vec{Y}_{\varepsilon,\tau_{\varepsilon,n}}}). \tag{2.32}
\]

The probability measures $P_{\varepsilon,n}(\vec{y}, A)$, $\vec{y} \in \mathbb{R}^k$, $n = 1, 2, \ldots$ are concentrated on finite sets, for every $\varepsilon \in (0, \varepsilon_0]$. This obviously implies that the reward functions $|\phi_{\varepsilon,n}(\vec{y})| < \infty$, $\vec{y} \in \mathbb{R}^k$, $n = 1, 2, \ldots$, for every $\varepsilon \in (0, \varepsilon_0]$.

It is also useful to note that $\phi_{\varepsilon,N}(\vec{y}) = g(N, e^{\vec{y}})$, $\vec{y} \in \mathbb{R}^k$.

An analogue of Lemma 8.4.2$^*$ can be formulated for the above model. However, the corresponding system of linear equations can be simplified taking into account the specific shift structure of the stochastic difference equation (2.28), written in the equivalent form (2.29).

By the definition of the sets $I_{\varepsilon,n,\bar{l}}$, $\bar{l} \in \mathbb{L}_{\varepsilon,n}$, $n = 0, 1, \ldots$, for every $\vec{y} \in \mathbb{R}^k$ and $n = 0, 1, \ldots$, there exists the unique $\bar{l}_{\varepsilon,n}(\vec{y}) = (l_{\varepsilon,n,1}(\vec{y}), \ldots, l_{\varepsilon,n,k}(\vec{y})) \in \mathbb{L}_{\varepsilon,n}$ such that $\vec{y} \in I_{\varepsilon,n,\bar{l}_{\varepsilon,n}(\vec{y})}$.

Lemma 2.1.2. Let the multivariate Markov Gaussian log-price process $\vec{Y}_{0,n}$ be defined by the vector stochastic difference equation (2.6), while the corresponding approximating space-skeleton log-price process $\vec{Y}_{\varepsilon,n}$ is defined, for every $\varepsilon \in (0, \varepsilon_0]$, by the dynamic transition relation (2.28). Then the reward functions $\phi_{\varepsilon,n}(\vec{y})$ and $\phi_{\varepsilon,n+m}(\vec{y}_{\varepsilon,n+m,\bar{l}})$, $\bar{l} \in \mathbb{L}_{\varepsilon,n+m}$, $m = 1, \ldots, N-n$ are, for every $\varepsilon \in (0, \varepsilon_0]$, $\vec{y} \in \mathbb{R}^k$ and $n = 0, \ldots, N-1$, the unique solution of the following recurrence finite system of linear equations,
\[
\begin{cases}
\phi_{\varepsilon,N}(\vec{y}_{\varepsilon,N,\bar{l}}) = g(N, e^{\vec{y}_{\varepsilon,N,\bar{l}}}), \quad \bar{l} = (l_1, \ldots, l_k) \in \mathbb{L}_{\varepsilon,N}, \\[4pt]
\phi_{\varepsilon,n+m}(\vec{y}_{\varepsilon,n+m,\bar{l}}) = \max\Big(g(n+m, e^{\vec{y}_{\varepsilon,n+m,\bar{l}}}), \\
\qquad \sum_{l'_1 = m^-_{\varepsilon,n+m+1}}^{m^+_{\varepsilon,n+m+1}} \phi_{\varepsilon,n+m+1}(\vec{y}_{\varepsilon,n+m+1,(l'_1,l_1,\ldots,l_{k-1})}) \\
\qquad \times \mathsf{P}\{y_{\varepsilon,n+m,l_1} + A'_{n+m+1}(y_{\varepsilon,n+m,l_1}, \ldots, y_{\varepsilon,n+m-k+1,l_k}) \\
\qquad\quad + A''_{n+m+1}(y_{\varepsilon,n+m,l_1}, \ldots, y_{\varepsilon,n+m-k+1,l_k})W_{n+m+1} \in I_{\varepsilon,n+m+1,l'_1}\}\Big), \\
\qquad \bar{l} = (l_1, \ldots, l_k) \in \mathbb{L}_{\varepsilon,n+m}, \ m = N-n-1, \ldots, 1, \\[4pt]
\phi_{\varepsilon,n}(\vec{y}) = \max\Big(g(n, e^{\vec{y}}), \sum_{l'_1 = m^-_{\varepsilon,n+1}}^{m^+_{\varepsilon,n+1}} \phi_{\varepsilon,n+1}(\vec{y}_{\varepsilon,n+1,(l'_1,l_{\varepsilon,n,1}(\vec{y}),\ldots,l_{\varepsilon,n,k-1}(\vec{y}))}) \\
\qquad \times \mathsf{P}\{y_{\varepsilon,n,l_{\varepsilon,n,1}(\vec{y})} + A'_{n+1}(y_{\varepsilon,n,l_{\varepsilon,n,1}(\vec{y})}, \ldots, y_{\varepsilon,n-k+1,l_{\varepsilon,n,k}(\vec{y})}) \\
\qquad\quad + A''_{n+1}(y_{\varepsilon,n,l_{\varepsilon,n,1}(\vec{y})}, \ldots, y_{\varepsilon,n-k+1,l_{\varepsilon,n,k}(\vec{y})})W_{n+1} \in I_{\varepsilon,n+1,l'_1}\}\Big),
\end{cases} \tag{2.33}
\]
while the optimal expected reward $\Phi_\varepsilon = \Phi_\varepsilon(\mathcal{M}^{(\varepsilon)}_{\max,0,N})$ is defined by the following formula, for every $\varepsilon \in (0, \varepsilon_0]$,
\[
\Phi_\varepsilon = \sum_{\bar{l} \in \mathbb{L}_{\varepsilon,0}} P_{0,0}(I_{\varepsilon,0,\bar{l}})\,\phi_{\varepsilon,0}(\vec{y}_{\varepsilon,0,\bar{l}}). \tag{2.34}
\]

Proof. The following system of linear equations for the reward functions $\phi_{\varepsilon,n}(\vec{y})$ and $\phi_{\varepsilon,n+m}(\vec{y}_{\varepsilon,n+m,\bar{l}})$, $\bar{l} \in \mathbb{L}_{\varepsilon,n+m}$, $m = 1, \ldots, N-n$ is the variant of the system of linear equations given in Lemma 8.4.2$^*$,
\[
\begin{cases}
\phi_{\varepsilon,N}(\vec{y}_{\varepsilon,N,\bar{l}}) = g(N, e^{\vec{y}_{\varepsilon,N,\bar{l}}}), \quad \bar{l} \in \mathbb{L}_{\varepsilon,N}, \\[4pt]
\phi_{\varepsilon,n+m}(\vec{y}_{\varepsilon,n+m,\bar{l}}) = \max\Big(g(n+m, e^{\vec{y}_{\varepsilon,n+m,\bar{l}}}), \\
\qquad \sum_{\bar{l}' \in \mathbb{L}_{\varepsilon,n+m+1}} \phi_{\varepsilon,n+m+1}(\vec{y}_{\varepsilon,n+m+1,\bar{l}'})\, P_{0,n+m+1}(\vec{y}_{\varepsilon,n+m,\bar{l}}, I_{\varepsilon,n+m+1,\bar{l}'})\Big), \\
\qquad \bar{l} \in \mathbb{L}_{\varepsilon,n+m}, \ m = N-n-1, \ldots, 1, \\[4pt]
\phi_{\varepsilon,n}(\vec{y}) = \max\Big(g(n, e^{\vec{y}}), \sum_{\bar{l}' \in \mathbb{L}_{\varepsilon,n+1}} \phi_{\varepsilon,n+1}(\vec{y}_{\varepsilon,n+1,\bar{l}'})\, P_{0,n+1}(\vec{y}_{\varepsilon,n,\bar{l}_{\varepsilon,n}(\vec{y})}, I_{\varepsilon,n+1,\bar{l}'})\Big).
\end{cases} \tag{2.35}
\]
The system of linear equations (2.35) can be re-written in a much simpler form, taking into account the specific features of the dynamic transition relations (2.20) and (2.29).

Indeed, according to the above relations, we get the following formula, for $\bar{l} = (l_1, \ldots, l_k) \in \mathbb{L}_{\varepsilon,n+m}$, $\bar{l}' = (l'_1, \ldots, l'_k) \in \mathbb{L}_{\varepsilon,n+m+1}$,
\[
\begin{aligned}
P_{0,n+m+1}(\vec{y}_{\varepsilon,n+m,\bar{l}}, I_{\varepsilon,n+m+1,\bar{l}'})
& = \mathsf{P}\{y_{\varepsilon,n+m,l_1} + A'_{n+m+1}(y_{\varepsilon,n+m,l_1}, \ldots, y_{\varepsilon,n+m-k+1,l_k}) \\
& \qquad + A''_{n+m+1}(y_{\varepsilon,n+m,l_1}, \ldots, y_{\varepsilon,n+m-k+1,l_k})W_{n+m+1} \in I_{\varepsilon,n+m+1,l'_1}\} \\
& \qquad \times I(y_{\varepsilon,n+m,l_1} \in I_{\varepsilon,n+m,l'_2}) \times \cdots \times I(y_{\varepsilon,n+m-k+2,l_{k-1}} \in I_{\varepsilon,n+m-k+2,l'_k}) \\
& = \mathsf{P}\{y_{\varepsilon,n+m,l_1} + A'_{n+m+1}(y_{\varepsilon,n+m,l_1}, \ldots, y_{\varepsilon,n+m-k+1,l_k}) \\
& \qquad + A''_{n+m+1}(y_{\varepsilon,n+m,l_1}, \ldots, y_{\varepsilon,n+m-k+1,l_k})W_{n+m+1} \in I_{\varepsilon,n+m+1,l'_1}\} \\
& \qquad \times I(l_1 = l'_2) \times \cdots \times I(l_{k-1} = l'_k).
\end{aligned} \tag{2.36}
\]

Relation (2.36) implies that the system of linear equations (2.35) can be re-written in the simpler form (2.33), where the multivariate sums over the vector indices $\bar{l}' \in \mathbb{L}_{\varepsilon,n+m+1}$, $m = N-n-1, \ldots, 1$ are replaced in the corresponding equations by univariate sums over the indices $l'_1 = m^-_{\varepsilon,n+m+1}, \ldots, m^+_{\varepsilon,n+m+1}$, $m = N-n-1, \ldots, 1$. $\square$

Obviously, $|\Phi_\varepsilon| < \infty$, for every $\varepsilon \in (0, \varepsilon_0]$.
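As a simplified illustration of the backward recursion of the type (2.33), the following sketch computes a skeleton reward function for the one-lag case $k = 1$ on a fixed grid. It is not the book's algorithm verbatim (which carries a $k$-component state); all names are ours and SciPy is assumed available.

```python
import numpy as np
from scipy.stats import norm

def skeleton_rewards(g, A1, A2, grid, N):
    """Backward recursion (one-lag case k = 1):
    phi_N(y) = g(N, y);  phi_n(y) = max( g(n, y), sum_l phi_{n+1}(y_l) * p_n(y, l) ),
    where p_n(y, l) = P{ y + A1(n+1, y) + A2(n+1, y) W in I_l } and the I_l are
    the skeleton intervals around the grid points.

    g      : pay-off g(n, y), y a log-price
    A1, A2 : drift and volatility functions A1(n, y), A2(n, y)
    grid   : increasing array of skeleton points y_l
    """
    edges = np.concatenate(([-np.inf], (grid[:-1] + grid[1:]) / 2, [np.inf]))
    phi = np.array([g(N, y) for y in grid], dtype=float)     # phi_N on the grid
    for n in range(N - 1, -1, -1):
        new_phi = np.empty_like(phi)
        for i, y in enumerate(grid):
            m, s = y + A1(n + 1, y), A2(n + 1, y)
            if s > 0:
                p = np.diff(norm.cdf((edges - m) / s))        # one-step probabilities
            else:                                             # deterministic step
                p = np.zeros(len(grid))
                p[np.searchsorted(edges, m, side='left') - 1] = 1.0
            new_phi[i] = max(g(n, y), np.dot(p, phi))
        phi = new_phi
    return phi     # approximate reward function phi_{eps,0} on the grid
```

For $k > 1$ the same recursion runs over vectors of grid indices $\bar{l}$, with only the first component being random, exactly as in (2.33).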

2.1.3 Convergence of option reward functions for space-skeleton approximations of nonlinear autoregressive stochastic volatility log-price processes

Let us now formulate conditions of convergence for the above reward functions.

Let us introduce a special shortened notation for the maximal and the minimal skeleton points, for $n = -k+1, \ldots, N$ and $\varepsilon \in (0, \varepsilon_0]$,
\[
z^{\pm}_{\varepsilon,n} = \delta_{\varepsilon,n} m^{\pm}_{\varepsilon,n} + \lambda_{\varepsilon,n}. \tag{2.37}
\]
We impose the following condition on the parameters of the space skeleton model:

$\mathbf{N}_7$: (a) $\delta_{\varepsilon,n} \to 0$ as $\varepsilon \to 0$, for $n = -k+1, \ldots, N$; (b) $\pm z^{\pm}_{\varepsilon,n} \to \infty$ as $\varepsilon \to 0$, for $n = -k+1, \ldots, N$; (c) $\pm z^{\pm}_{\varepsilon,n}$, $n = -k+1, \ldots, N$ are non-decreasing sequences, for every $\varepsilon \in (0, \varepsilon_0]$.

Let us first formulate the corresponding moment conditions, which provide the existence of appropriate upper bounds for the reward functions.

We assume that condition $\mathbf{B}_1[\bar{\gamma}]$ holds for the pay-off functions $g(n, e^{\vec{y}})$, $\vec{y} = (y_1, \ldots, y_k) \in \mathbb{R}^k$, for some vector parameter $\bar{\gamma} = (\gamma_1, \ldots, \gamma_k)$ with non-negative components.

We also assume that condition $\mathbf{G}_4$ holds for the transition dynamic functions $A'_n(\vec{y})$, $A''_n(\vec{y})$, $\vec{y} = (y_1, \ldots, y_k) \in \mathbb{R}^k$, $n = 1, 2, \ldots, N$, i.e., the functions $A'_n(\vec{y})$ and $(A''_n(\vec{y}))^2$ have no more than a linear rate of growth in the arguments $y_i$, $i = 1, \ldots, k$.

Let us now formulate the corresponding convergence conditions.

We assume that the continuity condition $\mathbf{I}_1$ holds for the pay-off function $g(n, e^{\vec{y}})$.

We also assume that the following variant of the weak continuity condition $\mathbf{O}_5^*$ holds, in which the vector functions $\vec{\mu}_n(\vec{y})$, $\vec{y} \in \mathbb{R}^k$, $n = 1, \ldots, N$ and the matrix functions $\Sigma_n(\vec{y})$, $\vec{y} \in \mathbb{R}^k$, $n = 1, \ldots, N$ are defined in relation (2.7):

$\mathbf{O}_7$: (a) There exist sets $\mathrm{Y}''_n \in \mathcal{B}_k$, $n = 0, \ldots, N$ such that the functions $A'_n(\vec{y})$, $A''_n(\vec{y})$ are continuous for $\vec{y} = (y_1, \ldots, y_k) \in \mathrm{Y}''_{n-1}$, for $n = 1, \ldots, N$; (b) $\mathsf{P}\{\vec{y}_0 + \vec{\mu}_n(\vec{y}_0) + \Sigma_n(\vec{y}_0)\vec{W}'_n \in \bar{\mathrm{Y}}'_n \cup \bar{\mathrm{Y}}''_n\} = 0$, for $\vec{y}_0 \in \mathrm{Y}'_{n-1} \cap \mathrm{Y}''_{n-1}$, $n = 1, \ldots, N$, where $\mathrm{Y}'_n$, $n = 0, \ldots, N$ are the sets penetrating condition $\mathbf{I}_1$.

A remark analogous to Remark 1.1.3 can be made about condition $\mathbf{O}_7$.

Remark 2.1.3. If $A''_n(\vec{y}_0) = 0$, then $\Sigma_n(\vec{y}_0)\vec{W}'_n \equiv 0$ and $\mathsf{P}\{\vec{y}_0 + \vec{\mu}_n(\vec{y}_0) \in \bar{\mathrm{Y}}'_n \cup \bar{\mathrm{Y}}''_n\} = 0$ if and only if $\vec{y}_0 + \vec{\mu}_n(\vec{y}_0) \in \mathrm{Y}'_n \cap \mathrm{Y}''_n$. If $A''_n(\vec{y}_0) > 0$, then $\Sigma_n(\vec{y}_0)\vec{W}'_n = (A''_n(\vec{y}_0)W'_{n,1}, 0, \ldots, 0)$ is a Gaussian random vector (which should be considered as a $k$-dimensional column vector). The set of $k$-dimensional vectors $\mathbb{R}_{1,k,n} = \{\vec{y} = (y_1, 0, \ldots, 0): y_1 \in \mathbb{R}^1\}$ is a one-dimensional Euclidean hyper-subspace of $\mathbb{R}^k$. The probability density function of the random vector $\Sigma_n(\vec{y}_0)\vec{W}'_n$, with respect to the Lebesgue measure $L_{1,k,n}(A)$ in the hyper-subspace $\mathbb{R}_{1,k,n}$, is concentrated and strictly positive at points $\vec{y} \in \mathbb{R}_{1,k,n}$. This implies that $\mathsf{P}\{\vec{y}_0 + \vec{\mu}_n(\vec{y}_0) + \Sigma_n(\vec{y}_0)\vec{W}'_n \in \bar{\mathrm{Y}}'_n \cup \bar{\mathrm{Y}}''_n\} = 0$ if and only if $L_{1,k,n}(\bar{\mathrm{Y}}'_{1,k,n,y_0}) = 0$ and $L_{1,k,n}(\bar{\mathrm{Y}}''_{1,k,n,y_0}) = 0$, where $\bar{\mathrm{Y}}'_{1,k,n,y_0} = (\bar{\mathrm{Y}}'_n - \vec{y}_0 - \vec{\mu}_n(\vec{y}_0)) \cap \mathbb{R}_{1,k,n}$ is the cut of the set $\bar{\mathrm{Y}}'_n - \vec{y}_0 - \vec{\mu}_n(\vec{y}_0)$ by the hyper-subspace $\mathbb{R}_{1,k,n}$, and $\bar{\mathrm{Y}}''_{1,k,n,y_0} = (\bar{\mathrm{Y}}''_n - \vec{y}_0 - \vec{\mu}_n(\vec{y}_0)) \cap \mathbb{R}_{1,k,n}$ is the cut of the set $\bar{\mathrm{Y}}''_n - \vec{y}_0 - \vec{\mu}_n(\vec{y}_0)$ by the hyper-subspace $\mathbb{R}_{1,k,n}$.

The following theorem is a direct corollary of Theorem 8.4.7$^*$.

Theorem 2.1.3. Let the multivariate Markov Gaussian log-price process $\vec{Y}_{0,n}$ be defined by the vector stochastic difference equation (2.6), while the corresponding approximating space-skeleton log-price process $\vec{Y}_{\varepsilon,n}$ is defined, for every $\varepsilon \in (0, \varepsilon_0]$, by the dynamic transition relation (2.28). Let also condition $\mathbf{B}_1[\bar{\gamma}]$ hold for some vector parameter $\bar{\gamma} = (\gamma_1, \ldots, \gamma_k)$ with non-negative components, and, also, let conditions $\mathbf{G}_4$, $\mathbf{N}_7$, $\mathbf{I}_1$, and $\mathbf{O}_7$ hold. Then, for every $n = 0, 1, \ldots, N$, the following relation takes place for any $\vec{y}_\varepsilon \to \vec{y}_0 \in \mathrm{Y}'_n \cap \mathrm{Y}''_n$,
\[
\phi_{\varepsilon,n}(\vec{y}_\varepsilon) \to \phi_{0,n}(\vec{y}_0) \ \text{as} \ \varepsilon \to 0. \tag{2.38}
\]

Proof. As follows from the transition dynamic relation (2.6), the space skeleton approximation model considered in Theorem 2.1.3 is a particular case of the space skeleton approximation model considered in Theorem 8.4.7$^*$. The difference is that the index component is absent. One can always add to the model the virtual index component $X_{0,n}$ with the one-point phase space $\mathbb{X} = \{x_0\}$.

Condition $\mathbf{N}_7$ is a variant of condition $\mathbf{N}_2^*$.

Condition $\mathbf{B}_1[\bar{\gamma}]$ is a reduced version of condition $\mathbf{B}_1[\bar{\gamma}]^*$, for the model without index component.

Condition $\mathbf{I}_1$ is a reduced variant of condition $\mathbf{I}_7^*$, for the case where the corresponding pay-off function does not depend on the index component.

Condition $\mathbf{G}_4$ is a reduced variant of condition $\mathbf{G}_4^*$ for the corresponding vector drift and matrix diffusion parameters defined by relation (2.7), for the model without modulating index component.

Condition $\mathbf{O}_7$ is just a reduced variant of condition $\mathbf{O}_5^*$, for the corresponding vector drift and matrix diffusion parameters defined by relation (2.7), for the model without modulating index component given by the stochastic transition dynamic relation (2.6). In this case, the sets $\mathrm{Y}''_n \times \{x_0\}$, $n = 0, \ldots, N$ can play the role of the sets $\mathrm{Z}''_n$, $n = 0, \ldots, N$ penetrating condition $\mathbf{O}_5^*$.

Thus, all conditions of Theorem 8.4.7$^*$ hold. By applying this theorem we get the convergence relation (2.38). $\square$

2.1.4 Convergence of optimal expected rewards for space-skeleton approximations of nonlinear autoregressive stochastic volatility log-price processes

Let us now give conditions of convergence for the optimal expected rewards $\Phi_\varepsilon$ in the above space skeleton approximation model. We shall apply Theorem 8.4.8$^*$.

In this case, condition $\mathbf{D}_7[\bar{\beta}]$, with the function $\vec{A}(\bar{\beta}) = (A_1(\bar{\beta}), \ldots, A_k(\bar{\beta}))$ given by relations (2.12) and (2.13), should be assumed to hold for some vector parameter $\bar{\beta} = (\beta_1, \ldots, \beta_k)$ with non-negative components and the corresponding vectors $\bar{\beta}_i = (\beta_{i,1}, \ldots, \beta_{i,k})$ with components $\beta_{i,j} = \beta_j I(i = j)$, $i, j = 1, \ldots, k$.

Condition $\mathbf{K}_{15}^*$ should be replaced by the following condition imposed on the initial distribution $P_{0,0}(A) = \mathsf{P}\{\vec{Y}_{0,0} \in A\}$:

$\mathbf{K}_7$: $P_{0,0}(\mathrm{Y}'_0 \cap \mathrm{Y}''_0) = 1$, where $\mathrm{Y}'_0$ and $\mathrm{Y}''_0$ are the sets introduced, respectively, in conditions $\mathbf{I}_1$ and $\mathbf{O}_7$.

The following theorem is a corollary of Theorem 8.4.8$^*$.

Theorem 2.1.4. Let the multivariate Markov Gaussian log-price process $\vec{Y}_{0,n}$ be defined by the vector stochastic difference equation (2.6), while the corresponding approximating space-skeleton log-price process $\vec{Y}_{\varepsilon,n}$ is defined, for every $\varepsilon \in (0, \varepsilon_0]$, by the dynamic transition relation (2.28). Let also conditions $\mathbf{B}_1[\bar{\gamma}]$ and $\mathbf{D}_7[\bar{\beta}]$ hold for some vector parameters $\bar{\gamma} = (\gamma_1, \ldots, \gamma_k)$ and $\bar{\beta} = (\beta_1, \ldots, \beta_k)$ such that, for every $i = 1, \ldots, k$, either $\beta_i > \gamma_i > 0$ or $\beta_i = \gamma_i = 0$, and let also conditions $\mathbf{G}_4$, $\mathbf{N}_7$, $\mathbf{I}_1$, $\mathbf{O}_7$, and $\mathbf{K}_7$ hold. Then, the following relation takes place,
\[
\Phi_\varepsilon \to \Phi_0 \ \text{as} \ \varepsilon \to 0. \tag{2.39}
\]
Proof. Theorem 2.1.4 is a corollary of Theorem 8.4.8$^*$.

Conditions $\mathbf{D}_7[\bar{\beta}]$ and $\mathbf{K}_7$ are just re-formulations, respectively, of conditions $\mathbf{D}_{19}[\bar{\beta}]^*$ and $\mathbf{K}_{15}^*$ used in Theorem 8.4.8$^*$. The other conditions of this theorem also hold, as was pointed out in the proof of Theorem 2.1.3. By applying Theorem 8.4.8$^*$ we get the convergence relation (2.39). $\square$

2.2 Autoregressive conditional heteroskedastic LPP

In this section, we present results concerning space-skeleton approximations for rewards of American-type options for autoregressive conditional heteroskedastic (ARCH) log-price processes.

2.2.1 Upper bounds for rewards of autoregressive conditional heteroskedastic log-price processes

A particular variant of the nonlinear autoregressive stochastic volatility log-price process model considered in Section 2.1 is an inhomogeneous in time autoregressive conditional heteroskedastic type model given by the following stochastic difference equation,
\[
Y_n - Y_{n-1} = a_{n,0} + a_{n,1}(Y_{n-1} - f_{n,1}Y_{n-2}) + \cdots + a_{n,p-1}(Y_{n-p+1} - f_{n,p-1}Y_{n-p}) + g_\kappa(\sigma_n)W_n, \quad n = 1, 2, \ldots, \tag{2.40}
\]
where
\[
\sigma_n = \big(d_{n,0} + d_{n,1}(Y_{n-1} - e_{n,1}Y_{n-2})^2 + \cdots + d_{n,p-1}(Y_{n-p+1} - e_{n,p-1}Y_{n-p})^2\big)^{\frac{1}{2}}, \quad n = 1, 2, \ldots, \tag{2.41}
\]
and: (a) $\vec{Y}_0 = (Y_0, \ldots, Y_{-p+1})$ is a $p$-dimensional random vector with real-valued components; (b) $W_1, W_2, \ldots$ is a sequence of real-valued i.i.d. standard normal variables with mean value 0 and variance 1; (c) the random vector $\vec{Y}_0$ and the random sequence $W_1, W_2, \ldots$ are independent; (d) $p$ is a positive integer number; (e) $a_{n,0}, a_{n,1}, \ldots, a_{n,p-1}$, $n = 1, 2, \ldots$ are real-valued constants; (f) $d_{n,0}, d_{n,1}, \ldots, d_{n,p-1}$, $n = 1, 2, \ldots$ are non-negative constants; (g) $e_{n,1}, \ldots, e_{n,p-1}$, $f_{n,1}, \ldots, f_{n,p-1}$, $n = 1, 2, \ldots$ are constants taking values in the interval $[0, 1]$; (h) $g_\kappa(\cdot)$ is a function from the class $\mathbf{G}_\kappa$ for some $\kappa \ge 0$.

Recall that $\mathbf{G}_\kappa = \{g_\kappa(\cdot)\}$ is, for every $\kappa \ge 0$, the class of measurable functions acting from $[0, \infty)$ to $[0, \infty)$ such that $\sup_{y \ge 0} \frac{g_\kappa(y)}{1 + y^\kappa} < \infty$.

Let us restrict consideration to two well-known models, namely CIR($p$) (Cox, Ingersoll, Ross) type models, where the parameter $\kappa = \frac{1}{2}$, and AR($p$)/ARCH($p$) type models, where the parameter $\kappa = 1$.

The case where the function $g_{\frac{1}{2}}(y) \equiv \sqrt{y}$, the parameter $p = 2$ and the coefficients $a_{n,0} = a_0$, $a_{n,1} = a_1$, $f_{n,1} = 0$, $d_{n,0} = 0$, $d_{n,1} = \sigma^2$, $e_{n,1} = 0$ corresponds to the standard discrete time CIR(2) model. In this case, the stochastic difference equation (2.40) takes the following form,
\[
Y_n - Y_{n-1} = a_0 + a_1 Y_{n-1} + \sigma\sqrt{Y_{n-1}}\,W_n, \quad n = 1, 2, \ldots. \tag{2.42}
\]
The case where the function $g_1(y) \equiv y$, the parameter $p = 2$ and the coefficients $a_{n,0} = a_0$, $a_{n,1} = a_1$, $f_{n,1} = 1$, $d_{n,0} = d_0$, $d_{n,1} = d_1$, $e_{n,1} = 1$ corresponds to the standard AR(2)/ARCH(2) type model. In this case, the stochastic difference equation (2.40) takes the following form,
\[
Y_n - Y_{n-1} = a_0 + a_1(Y_{n-1} - Y_{n-2}) + \big(d_0 + d_1(Y_{n-1} - Y_{n-2})^2\big)^{\frac{1}{2}} W_n, \quad n = 1, 2, \ldots. \tag{2.43}
\]
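Both special cases fit the generic recursion (2.1). The following standalone sketch (not part of the original text; all coefficient values are illustrative assumptions) generates sample paths of the discrete-time CIR(2) dynamics (2.42) and the AR(2)/ARCH(2) dynamics (2.43):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100

# CIR(2)-type dynamics (2.42): Y_n - Y_{n-1} = a0 + a1*Y_{n-1} + sigma*sqrt(Y_{n-1})*W_n.
a0, a1, sigma = 0.05, -0.1, 0.2
y = 1.0
cir = [y]
for n in range(1, N + 1):
    # the max(., 0) guard is ours: the discrete-time recursion may leave the positive half-line
    y = y + a0 + a1 * y + sigma * np.sqrt(max(y, 0.0)) * rng.standard_normal()
    cir.append(y)

# AR(2)/ARCH(2)-type dynamics (2.43):
# Y_n - Y_{n-1} = a0 + a1*(Y_{n-1}-Y_{n-2}) + sqrt(d0 + d1*(Y_{n-1}-Y_{n-2})^2)*W_n.
d0, d1 = 0.01, 0.5
y_prev, y_curr = 0.0, 0.0
arch = [y_curr]
for n in range(1, N + 1):
    incr = y_curr - y_prev
    step = a0 + a1 * incr + np.sqrt(d0 + d1 * incr ** 2) * rng.standard_normal()
    y_prev, y_curr = y_curr, y_curr + step
    arch.append(y_curr)
```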

First, let us consider the case of CIR($p$) type models with the parameter $\kappa = \frac{1}{2}$.

Let us introduce the $p$-dimensional vector process,
\[
\vec{Y}_{0,n} = (Y_{0,n,1}, \ldots, Y_{0,n,p}) = (Y_n, \ldots, Y_{n-p+1}), \quad n = 1, 2, \ldots. \tag{2.44}
\]

We can always assume that the sequence of random variables $W_n$, $n = 1, 2, \ldots$ is the sequence of the first components of a sequence of $p$-dimensional i.i.d. standard Gaussian random vectors $\vec{W}'_n = (W'_{n,1}, \ldots, W'_{n,p})$, $n = 1, 2, \ldots$, with $\mathsf{E}W'_{1,i} = 0$, $\mathsf{E}W'_{1,i}W'_{1,j} = I(i = j)$, $i, j = 1, \ldots, p$, i.e.,
\[
W_n = W'_{n,1}, \quad n = 1, 2, \ldots. \tag{2.45}
\]
Let us also consider the $p$-dimensional i.i.d. Gaussian vectors,
\[
\vec{W}_n = (W_{n,1}, \ldots, W_{n,p}) = (W_n, 0, \ldots, 0), \quad n = 1, 2, \ldots. \tag{2.46}
\]
The vectors $\vec{W}_n$ can be obtained as a linear transformation of the vectors $\vec{W}'_n$, namely, $\vec{W}_n = \Sigma\vec{W}'_n$, $n = 1, 2, \ldots$, where the $p \times p$ matrix $\Sigma$ has the following form,
\[
\Sigma = \|\sigma_{i,j}\| = \|I(i = 1)I(j = 1)\|. \tag{2.47}
\]

In this case, the stochastic difference equation (2.5) takes the following form,
\[
\begin{cases}
Y_{0,n,1} - Y_{0,n-1,1} = a_{n,0} + a_{n,1}(Y_{0,n-1,1} - f_{n,1}Y_{0,n-1,2}) + \cdots + a_{n,p-1}(Y_{0,n-1,p-1} - f_{n,p-1}Y_{0,n-1,p}) \\
\qquad + g_{\frac{1}{2}}\big(\big(d_{n,0} + d_{n,1}(Y_{0,n-1,1} - e_{n,1}Y_{0,n-1,2})^2 + \cdots + d_{n,p-1}(Y_{0,n-1,p-1} - e_{n,p-1}Y_{0,n-1,p})^2\big)^{\frac{1}{2}}\big)W_{n,1}, \\
Y_{0,n,2} - Y_{0,n-1,2} = Y_{0,n-1,1} - Y_{0,n-1,2} + W_{n,2}, \\
\quad \cdots \\
Y_{0,n,p} - Y_{0,n-1,p} = Y_{0,n-1,p-1} - Y_{0,n-1,p} + W_{n,p}, \\
n = 1, 2, \ldots,
\end{cases} \tag{2.48}
\]

while the vector stochastic difference equation (2.6) takes the following vector form,
\[
\vec{Y}_{0,n} - \vec{Y}_{0,n-1} = \vec{\mu}_n(\vec{Y}_{0,n-1}) + \Sigma_n(\vec{Y}_{0,n-1})\vec{W}'_n, \quad n = 1, 2, \ldots, \tag{2.49}
\]
where the $p$-dimensional vector functions $\vec{\mu}_n(\vec{y}) = (\mu_{n,1}(\vec{y}), \ldots, \mu_{n,p}(\vec{y}))$, $\vec{y} = (y_1, \ldots, y_p) \in \mathbb{R}^p$, $n = 1, 2, \ldots$ and the $p \times p$ matrix functions $\Sigma_n(\vec{y}) = \|\sigma_{n,i,j}(\vec{y})\|$, $\vec{y} = (y_1, \ldots, y_p) \in \mathbb{R}^p$, $n = 1, 2, \ldots$ are defined by the following relations,
\[
\vec{\mu}_n(\vec{y}) =
\begin{pmatrix}
A'_n(\vec{y}) \\ y_1 - y_2 \\ \vdots \\ y_{p-1} - y_p
\end{pmatrix}, \quad
\Sigma_n(\vec{y}) =
\begin{pmatrix}
A''_n(\vec{y}) & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & 0
\end{pmatrix}, \tag{2.50}
\]
where $k = p$ and the functions $A'_n(\vec{y})$, $A''_n(\vec{y})$, $\vec{y} = (y_1, \ldots, y_p) \in \mathbb{R}^p$, $n = 1, 2, \ldots$ are given by the following relation,
\[
\begin{aligned}
A'_n(\vec{y}) & = A'_n(y_1, \ldots, y_p) = a_{n,0} + a_{n,1}(y_1 - f_{n,1}y_2) + \cdots + a_{n,p-1}(y_{p-1} - f_{n,p-1}y_p), \\
A''_n(\vec{y}) & = A''_n(y_1, \ldots, y_p) = g_{\frac{1}{2}}\big(\big(d_{n,0} + d_{n,1}(y_1 - e_{n,1}y_2)^2 + \cdots + d_{n,p-1}(y_{p-1} - e_{n,p-1}y_p)^2\big)^{\frac{1}{2}}\big).
\end{aligned} \tag{2.51}
\]

The above log-price process $\vec{Y}_{0,n}$ is a particular variant of the nonlinear autoregressive stochastic volatility log-price processes considered in Section 2.1. Note that in this case $k = p$.

Lemma 2.2.1. Let the autoregressive conditional heteroskedastic log-price process $Y_{0,n}$ and its equivalent vector version $\vec{Y}_{0,n}$ be given, respectively, by the stochastic difference equations (2.40) and (2.49), and let the parameter $\kappa = \frac{1}{2}$. In this case, condition $\mathbf{G}_4$ holds, i.e., there exist constants $0 < K_{27} < \infty$ and $0 \le K_{28,l} < \infty$, $l = 1, \ldots, p$ such that,
\[
\max_{0 \le n \le N-1}\ \sup_{\vec{y} \in \mathbb{R}^p} \frac{|A'_{n+1}(\vec{y})| + (A''_{n+1}(\vec{y}))^2}{1 + \sum_{l=1}^{p} K_{28,l}|y_l|} < K_{27}. \tag{2.52}
\]

Proof. The function $g_{\frac{1}{2}}(\cdot) \in \mathbf{G}_{\frac{1}{2}}$. Thus, there exists a constant $L = L(g_{\frac{1}{2}}(\cdot)) \in (0, \infty)$ such that
\[
|g_{\frac{1}{2}}(y)| < L(1 + |y|^{\frac{1}{2}}), \quad y \ge 0. \tag{2.53}
\]
Using relation (2.53) we get the following inequality, for every $\vec{y} = (y_1, \ldots, y_p) \in \mathbb{R}^p$, $n = 1, \ldots, N$,
\[
\begin{aligned}
|A'_n(y_1, \ldots, y_p)| & + (A''_n(y_1, \ldots, y_p))^2 \\
& < |a_{n,0}| + |a_{n,1}|(|y_1| + |y_2|) + \cdots + |a_{n,p-1}|(|y_{p-1}| + |y_p|) \\
& \quad + 2L^2\big(1 + \big(d_{n,0} + d_{n,1}(|y_1| + |y_2|)^2 + \cdots + d_{n,p-1}(|y_{p-1}| + |y_p|)^2\big)^{\frac{1}{2}}\big) \\
& \le |a_{n,0}| + 2L^2(1 + \sqrt{d_{n,0}}) + (|a_{n,1}| + 2L^2\sqrt{d_{n,1}})|y_1| \\
& \quad + (|a_{n,1}| + |a_{n,2}| + 2L^2(\sqrt{d_{n,1}} + \sqrt{d_{n,2}}))|y_2| \\
& \quad + \cdots + (|a_{n,p-2}| + |a_{n,p-1}| + 2L^2(\sqrt{d_{n,p-2}} + \sqrt{d_{n,p-1}}))|y_{p-1}| \\
& \quad + (|a_{n,p-1}| + 2L^2\sqrt{d_{n,p-1}})|y_p| \\
& \le K_{27}\big(1 + \sum_{l=1}^{p} K_{28,l}|y_l|\big),
\end{aligned} \tag{2.54}
\]
where
\[
K_{27} = \max_{0 \le n \le N-1}\big(|a_{n,0}| + 2L^2(1 + \sqrt{d_{n,0}})\big), \tag{2.55}
\]
\[
K_{28,l} =
\begin{cases}
\dfrac{\max_{0 \le n \le N-1}\big(|a_{n,1}| + 2L^2\sqrt{d_{n,1}}\big)}{\max_{0 \le n \le N-1}\big(|a_{n,0}| + 2L^2(1 + \sqrt{d_{n,0}})\big)} & \text{if } l = 1, \\[12pt]
\dfrac{\max_{0 \le n \le N-1}\big(|a_{n,l-1}| + |a_{n,l}| + 2L^2(\sqrt{d_{n,l-1}} + \sqrt{d_{n,l}})\big)}{\max_{0 \le n \le N-1}\big(|a_{n,0}| + 2L^2(1 + \sqrt{d_{n,0}})\big)} & \text{if } l = 2, \ldots, p-1, \\[12pt]
\dfrac{\max_{0 \le n \le N-1}\big(|a_{n,p-1}| + 2L^2\sqrt{d_{n,p-1}}\big)}{\max_{0 \le n \le N-1}\big(|a_{n,0}| + 2L^2(1 + \sqrt{d_{n,0}})\big)} & \text{if } l = p.
\end{cases}
\]
It follows from inequality (2.54) that condition $\mathbf{G}_4$ holds, for example, with the constants $K_{27}$ and $K_{28,l}$, $l = 1, \ldots, p$, which can replace, respectively, the constants $K_{22}$ and $K_{23,l}$, $l = 1, \ldots, k$ (recall that, in this case, $k = p$) penetrating condition $\mathbf{G}_4$. $\square$

Thus, Theorems 2.1.1 and 2.1.2 can be applied to the above CIR($p$) type log-price process $Y_n$ given by the stochastic difference equation (2.40). These theorems yield the upper bounds, respectively, for the reward functions and the optimal expected rewards for these log-price processes.

It is natural to assume, for autoregressive models of log-price processes, that the corresponding pay-off functions also depend on the corresponding sequences of values of the log-prices.

In this case, we consider pay-off functions $g(n, e^{\vec{y}})$, $\vec{y} = (y_1, \ldots, y_p) \in \mathbb{R}^p$, $n = 0, 1, \ldots$, which are real-valued measurable functions assumed to satisfy condition $\mathbf{B}_2[\bar{\gamma}]$, for some vector parameter $\bar{\gamma} = (\gamma_1, \ldots, \gamma_p)$ with non-negative components.

Let $\mathcal{M}^{(0)}_{\max,p,n,N}$ be the class of all stopping times $\tau_{0,n}$ for the process $Y_n$ such that (a) $n \le \tau_{0,n} \le N$, (b) the event $\{\tau_{0,n} = m\} \in \mathcal{F}^{(0)}_{p,n,m} = \sigma[Y_l, n-p+1 \le l \le m]$, $n \le m \le N$.

Obviously, the class $\mathcal{M}^{(0)}_{\max,p,n,N}$ coincides with the class $\mathcal{M}^{(0)}_{\max,n,N}$ of all Markov moments $\tau_{0,n}$ for the Markov process $\vec{Y}_{0,l} = (Y_{0,l,1}, \ldots, Y_{0,l,p}) = (Y_l, \ldots, Y_{l-p+1})$ such that (a) $n \le \tau_{0,n} \le N$, (b) the event $\{\tau_{0,n} = m\} \in \mathcal{F}^{(0)}_{n,m} = \sigma[\vec{Y}_{0,l}, n \le l \le m]$, $n \le m \le N$.

In this case, the reward functions are defined for the log-price process $Y_n$ (its equivalent vector version $\vec{Y}_{0,n}$) by the following relation, for $\vec{y} \in \mathbb{R}^p$ and $n = 0, 1, \ldots, N$,
\[
\phi_{0,n}(\vec{y}) = \sup_{\tau_{0,n} \in \mathcal{M}^{(0)}_{\max,n,N}} \mathsf{E}_{\vec{y},n}\, g(\tau_{0,n}, e^{\vec{Y}_{0,\tau_{0,n}}}). \tag{2.56}
\]

In this case, the function $\vec{A}(\bar{\beta}) = (A_1(\bar{\beta}), \ldots, A_p(\bar{\beta}))$, $\bar{\beta} = (\beta_1, \ldots, \beta_p)$, $\beta_1, \ldots, \beta_p \ge 0$, penetrating the formulation of Theorems 2.1.1 and 2.1.2, has the following components,
\[
A_j(\bar{\beta}) = K_{27}K_{28,j}\sum_{l=1}^{p}\big(\beta_l + \tfrac{1}{2}p^2\beta_l^2\big), \quad j = 1, \ldots, p. \tag{2.57}
\]
The function $\vec{A}(\bar{\beta})$ generates a sequence of functions $\vec{A}_n(\bar{\beta}) = (A_{n,1}(\bar{\beta}), \ldots, A_{n,p}(\bar{\beta}))$, $n = 0, 1, \ldots$ from the class $\mathcal{A}_p$ that are defined by the following recurrence relation, for any $\bar{\beta} = (\beta_1, \ldots, \beta_p)$, $\beta_i \ge 0$, $i = 1, \ldots, p$,
\[
\vec{A}_n(\bar{\beta}) =
\begin{cases}
\bar{\beta} & \text{for } n = 0, \\
\vec{A}_{n-1}(\bar{\beta}) + \vec{A}(\vec{A}_{n-1}(\bar{\beta})) & \text{for } n = 1, 2, \ldots.
\end{cases} \tag{2.58}
\]

Theorem 2.1.1 takes in this case the following form.

Theorem 2.2.1. Let the autoregressive conditional heteroskedastic log-price process $Y_n$ and its equivalent vector version $\vec{Y}_{0,n}$ be given, respectively, by the stochastic difference equation (2.40) and the vector stochastic difference equation (2.49), with the parameter $\kappa = \frac{1}{2}$. Let also condition $\mathbf{B}_2[\bar{\gamma}]$ hold for some vector parameter $\bar{\gamma} = (\gamma_1, \ldots, \gamma_p)$ with $\gamma_i \ge 0$, $i = 1, \ldots, p$. Then, for any vector parameter $\bar{\beta} = (\beta_1, \ldots, \beta_p)$ with components $\beta_i \ge \gamma_i$, $i = 1, \ldots, p$, there exist constants $0 \le M_{22}, M_{23,1}, \ldots, M_{23,p} < \infty$ such that the reward functions $\phi_{0,n}(\vec{y})$ satisfy the following inequalities for $\vec{y} = (y_1, \ldots, y_p) \in \mathbb{R}^p$, $0 \le n \le N$,
\[
|\phi_{0,n}(\vec{y})| \le M_{22} + \sum_{i:\,\gamma_i = 0} M_{23,i} + \sum_{i:\,\gamma_i > 0} M_{23,i}\exp\Big\{\Big(\sum_{j=1}^{p} A_{N-n,j}(\bar{\beta}_i)|y_j|\Big)\frac{\gamma_i}{\beta_i}\Big\}. \tag{2.59}
\]

Remark 2.2.1. The explicit formulas for the constants $M_{22}$ and $M_{23,i}$, $i = 1, \ldots, p$ take, according to the formulas given in Remark 2.1.1, the following form,
\[
M_{22} = L_3, \quad M_{23,i} = L_3 L_{4,i} I(\gamma_i = 0)
+ L_3 L_{4,i}\big(1 + 2p\, e^{K_{27}\sum_{l=1}^{p}(A_{N-1,l}(\bar{\beta}_i) + \frac{1}{2}p^2 A^2_{N-1,l}(\bar{\beta}_i))}\big)^{N\frac{\gamma_i}{\beta_i}} I(\gamma_i > 0), \tag{2.60}
\]

where the vectors $\bar{\beta}_i = (\beta_{i,1}, \ldots, \beta_{i,p}) = (\beta_1 I(i = 1), \ldots, \beta_p I(i = p))$, $i = 1, \ldots, p$.

Condition $\mathbf{D}_7[\bar{\beta}]$ should be replaced in this case by the following condition, assumed to hold for a vector parameter $\bar{\beta} = (\beta_1, \ldots, \beta_p)$ with non-negative components:

$\mathbf{D}_8[\bar{\beta}]$: $\mathsf{E}\exp\{\sum_{j=1}^{p} A_{N,j}(\bar{\beta}_i)|Y_{0,j}|\} < K_{29,i}$, $i = 1, \ldots, p$, for some $1 < K_{29,i} < \infty$, $i = 1, \ldots, p$.

In this case, the optimal expected reward is defined by the formula,
\[
\Phi_0 = \sup_{\tau_{0,0} \in \mathcal{M}^{(0)}_{\max,0,N}} \mathsf{E}\, g(\tau_{0,0}, e^{\vec{Y}_{0,\tau_{0,0}}}) = \mathsf{E}\,\phi_0(\vec{Y}_{0,0}). \tag{2.61}
\]

Theorem 2.1.2 takes in this case the following form.

Theorem 2.2.2. Let the autoregressive conditional heteroskedastic log-price process $Y_n$ and its equivalent vector version $\vec{Y}_{0,n}$ be given, respectively, by the stochastic difference equation (2.40) and the vector stochastic difference equation (2.49), with the parameter $\kappa = \frac{1}{2}$. Let also conditions $\mathbf{B}_2[\bar{\gamma}]$ and $\mathbf{D}_8[\bar{\beta}]$ hold and $0 \le \gamma_i \le \beta_i < \infty$, $i = 1, \ldots, p$. Then, there exists a constant $0 \le M_{24} < \infty$ such that the following inequality takes place,
\[
|\Phi_0| \le M_{24}. \tag{2.62}
\]

Remark 2.2.2. The explicit formula for the constant $M_{24}$ takes, according to the formulas given in Remark 2.1.2, the following form,
\[
M_{24} = L_3 + \sum_{i:\,\gamma_i = 0} L_3 L_{4,i} + \sum_{i:\,\gamma_i > 0} L_{17}L_{18,i}\big(1 + 2p\, e^{K_{27}\sum_{l=1}^{p}(A_{N-1,l}(\bar{\beta}_i) + \frac{1}{2}p^2 A^2_{N-1,l}(\bar{\beta}_i))}\big)^{N\frac{\gamma_i}{\beta_i}} K_{29,i}^{\frac{\gamma_i}{\beta_i}}. \tag{2.63}
\]

Let us now consider the standard AR(2)/ARCH(2) model with the parameters $a_0 = d_0 = 0$, $a_1 = 0$, $d_1 = \sigma^2 > 0$, $f_1 = e_1 = 1$. In this case, the stochastic difference equation (2.43) takes the following form,
\[
Y_n - Y_{n-1} = \sigma|Y_{n-1} - Y_{n-2}|W_n, \quad n = 1, 2, \ldots. \tag{2.64}
\]
For simplicity, let us also assume that the initial values $Y_0 = y_0$ and $Y_{-1} = y_{-1}$ are non-random constants such that $h = h(y_0, y_{-1}) = y_0 - y_{-1} \neq 0$. In this case, equation (2.64) yields the following explicit formulas, for $n = 1, 2, \ldots$,
\[
Y_n - Y_{n-1} = \sigma^n|h|W_n\prod_{i=1}^{n-1}|W_i|. \tag{2.65}
\]
Summation in relation (2.65) yields the following explicit formula, for $n = 1, 2, \ldots$,
\[
Y_n = Y_0 + \sum_{j=1}^{n}\sigma^j|h|W_j\prod_{i=1}^{j-1}|W_i|. \tag{2.66}
\]
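A quick numerical check of (2.64)–(2.66) (a sketch with arbitrary parameter values; not part of the original text): the recursion and the explicit product formula produce the same path when driven by the same noise.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, y0, y_minus1, n_steps = 0.5, 1.0, 0.8, 6
h = y0 - y_minus1
W = rng.standard_normal(n_steps + 1)          # W[1], ..., W[n_steps] are used

# Recursion (2.64): Y_n = Y_{n-1} + sigma*|Y_{n-1} - Y_{n-2}|*W_n.
y_prev, y_curr = y_minus1, y0
for n in range(1, n_steps + 1):
    y_prev, y_curr = y_curr, y_curr + sigma * abs(y_curr - y_prev) * W[n]

# Explicit formula (2.66): Y_n = Y_0 + sum_j sigma^j*|h|*W_j*prod_{i<j}|W_i|.
y_explicit = y0 + sum(sigma**j * abs(h) * W[j] * np.prod(np.abs(W[1:j]))
                      for j in range(1, n_steps + 1))

print(y_curr, y_explicit)   # the two values agree up to rounding error
```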

Let us prove that in this case, for any $\beta > 0$ and $n = 3, 4, \ldots$,
\[
\mathsf{E}e^{\beta Y_n} = \infty. \tag{2.67}
\]
Take $n = 3$. Since $W_1$, $W_2$ and $W_3$ are independent standard normal variables, we get, by first integrating out $W_3$,
\[
\begin{aligned}
\mathsf{E}e^{\beta Y_3} & = e^{\beta y_0}\,\mathsf{E}e^{\beta(\sigma^3|h|W_3|W_2||W_1| + \sigma^2|h|W_2|W_1| + \sigma|h|W_1)} \\
& = e^{\beta y_0}\,\mathsf{E}e^{\frac{1}{2}\beta^2\sigma^6 h^2 W_2^2 W_1^2 + \beta\sigma^2|h|W_2|W_1| + \beta\sigma|h|W_1} \\
& = \frac{e^{\beta y_0}}{2\pi}\int_{-\infty}^{\infty}\Big(\int_{-\infty}^{\infty} e^{\frac{1}{2}\beta^2\sigma^6 h^2 u^2 v^2 + \beta\sigma^2|h|u|v| + \beta\sigma|h|v}\, e^{-\frac{u^2}{2}}\,du\Big)e^{-\frac{v^2}{2}}\,dv \\
& \ge \frac{e^{\beta y_0}}{2\pi}\int_{\frac{1}{\beta\sigma^3|h|}}^{\infty}\Big(\int_{-\infty}^{\infty} e^{\frac{1}{2}\beta^2\sigma^6 h^2 u^2 v^2 + \beta\sigma^2|h|u|v| + \beta\sigma|h|v}\, e^{-\frac{u^2}{2}}\,du\Big)e^{-\frac{v^2}{2}}\,dv = \infty,
\end{aligned} \tag{2.68}
\]
since, for every $v \ge 1/(\beta\sigma^3|h|)$, the coefficient $\frac{1}{2}\beta^2\sigma^6 h^2 v^2 \ge \frac{1}{2}$ and, therefore, the inner integral in $u$ is infinite.

In the case $n > 3$, the proof of relation (2.67) is analogous.

Let us now consider the standard American call option, which has the pay-off function $g(n, e^y) = e^{-\rho n}[e^y - K]_+$, where $\rho, K > 0$, and a maturity $N \ge 3$. In this case, for any $\vec{y}_0 = (y_0, y_{-1})$ such that $h = h(y_0, y_{-1}) = y_0 - y_{-1} \neq 0$,
\[
\mathsf{E}_{\vec{y}_0,0}\, e^{-\rho N}[e^{Y_N} - K]_+ \ge e^{-\rho N}\mathsf{E}_{\vec{y}_0,0}\, e^{Y_N} - e^{-\rho N}K = \infty. \tag{2.69}
\]
Since $\tau \equiv N$ is a particular case of a Markov stopping time, the corresponding reward function, for any $\vec{y}_0 = (y_0, y_{-1})$ such that $h = h(y_0, y_{-1}) = y_0 - y_{-1} \neq 0$, satisfies
\[
\phi_0(\vec{y}_0) \ge \mathsf{E}_{\vec{y}_0,0}\, e^{-\rho N}[e^{Y_N} - K]_+ = \infty. \tag{2.70}
\]

Therefore, the reward function does not exist in this case.

Relation (2.67) also implies that the reward function for an American-type option does not exist for any pay-off function which has a power rate of growth at infinity. This is because the stochastic volatility in the above model has too great a rate of growth as a function of the corresponding log-price.

A similar situation takes place for general AR($p$)/ARCH($p$) type models with the smoothing function $g_1(\cdot)$. In this case, the corresponding volatility has a linear rate of growth as a function of the log-price, which is again too great a rate and causes the same effect of non-existence of the corresponding reward functions for call-type options.

The exception is the case of bounded pay-off functions, which is typical for put-type options. In this case, the corresponding reward functions obviously do exist for any general AR($p$)/ARCH($p$) type model given by the stochastic difference equation (2.40), with any smoothing function $g_\kappa(\cdot)$ and parameter $\kappa > 0$.

2.2.2 Space-skeleton approximations for option rewards of autoregressive conditional heteroskedastic log-price processes

As in Subsection 2.2.1, we restrict consideration to CIR($p$) type models, where the parameter $\kappa = \frac{1}{2}$. Note that the function $g_{\frac{1}{2}}(y) = \sqrt{y}$, corresponding to the standard CIR model, belongs to the class $\mathbf{G}_{\frac{1}{2}}$.

The Markov log-price process $\vec{Y}_{0,n}$ has the Gaussian transition probabilities $P_{0,n}(\vec{y}, A) = \mathsf{P}\{\vec{Y}_{0,n} \in A / \vec{Y}_{0,n-1} = \vec{y}\}$ defined for $\vec{y} \in \mathbb{R}^p$, $A \in \mathcal{B}_p$, $n = 1, 2, \ldots$ by the following relation,
\[
P_{0,n}(\vec{y}, A) = \mathsf{P}\{\vec{y} + \vec{\mu}_n(\vec{y}) + \Sigma_n(\vec{y})\vec{W}'_n \in A\}, \tag{2.71}
\]

where the vector coefficients $\vec{\mu}_n(\vec{y})$, $n = 1, 2, \ldots$ and the matrix coefficients $\Sigma_n(\vec{y})$, $n = 1, 2, \ldots$ are given by relations (2.51) and (2.50).

The log-price process $\vec{Y}_{0,n}$ given by the stochastic difference equation (2.49) is a particular case of the multivariate Markov Gaussian log-price process considered in Subsection 2.1.1, with the parameter $k = p$ and characteristics given by relations (2.51) and (2.50).

Therefore, Lemma 2.1.2 and Theorems 2.1.3 and 2.1.4 can be applied to the log-price process $\vec{Y}_{0,n}$. In this case, some simplification in the construction of the corresponding skeleton approximation model can be achieved due to the specific shift structure of the vector stochastic difference equation (2.49), which is well seen from the equivalent form (2.48) of this equation.

The construction of the corresponding skeleton approximations is analogous to that described for autoregressive log-price processes in Subsection 1.2.2.

Let $m^-_{\varepsilon,n} \le m^+_{\varepsilon,n}$ be integer numbers, $\delta_{\varepsilon,n} > 0$ and $\lambda_{\varepsilon,n} \in \mathbb{R}^1$, for $n = -p+1, -p+2, \ldots$.

± In this case, one can use parameters m± ε,n,i = mε,n−i+1 , δε,n,i = δε,n−i+1 , and λε,n,i = λε,n−i+1 , for i = 1, . . . , p, n = 0, 1, . . .. In this case, the index sets Lε,n , n = 0, 1, . . . have the following form, + Lε,n = {¯ l = (l1 , . . . , lp ), li = m− ε,n,i , . . . , mε,n,i , i = 1, . . . , p} + = {¯ l = (l1 , . . . , lp ), li = m− ε,n−i+1 , . . . , mε,n−i+1 , i = 1, . . . , p}.

(2.72)

Other elements of the space skeleton approximation should be also defined with the use of the above shift construction. + First, the skeleton intervals Iε,n,l should be defined for l = m− ε,n , . . . , mε,n , n = −p + 1, . . .,  − 1 if l = m−  ε,n ,  (−∞, δε,n (mε,n + 2 )] + λε,n  Iε,n,l =

  

(δε,n (l − 12 ), δε,n (l + 21 )] + λε,n

+ if m− ε,n < l < mε,n ,

1 (δε,n (m+ ε,n − 2 ), ∞) + λε,n

if l = m+ ε,n ,

(2.73)

and then skeleton cubes ˆIε,n,¯l should be defined for ¯ l ∈ Lε,n , n = 0, 1, . . ., ˆI ¯ = Iε,n,1,l × · · · × Iε,n,p,l = Iε,n,l × · · · × Iε,n−p+1,l . 1 p p 1 ε,n,l

(2.74)

+ Second, skeleton points yε,n,l should be defined for l = m− ε,n , . . . , mε,n , n = −p + 1, . . ., yε,n,l = lδε,n + λε,n , (2.75)

and vector skeleton points ~ yε,n,¯l should be defined for ¯ l ∈ Lε,n , n = 0, 1, . . ., ~ yε,n,¯l = (yε,n,1,l1 , . . . , yε,n,p,lp ) = (yε,n,l1 , . . . , yε,n−p+1,lp ).

(2.76)

Third, skeleton functions, hε,n (y), y ∈ R1 should be defined for n = −p+1, . . ., hε,n (y) =



yε,n,l

+ if y ∈ Iε,n,l , m− ε,n ≤ l ≤ mε,n ,

(2.77)

ˆ ε,n (~ and vector skeleton functions h y ), ~ y = (y1 , . . ., yp ) ∈ Rp , should be defined n = 0, 1, . . ., ˆ ε,n (~ h y ) = (hε,n,1 (y1 ), . . . , hε,n,p (yp )) = (hε,n (y1 ), . . . , hε,n−p+1 (yp )).

(2.78)

~ε,n are defined, The corresponding space skeleton approximating processes Y for every ε ∈ (0, ε0 ], by the following vector stochastic transition dynamic relation,

 ~   Yε,n

ˆ ε,n h ˆ ε,n−1 (Y ˆ ε,n−1 (Y ~ε,n−1 ) + µ ~ε,n−1 )) =h ~ n (h  0 ˆ ε,n−1 (Y ~ε,n−1 ))W ~ n , n = 1, 2, . . . , + Σn (h

  ~ Yε,0

ˆ ε,0 (Y ~0,0 ), =h

(2.79)

96

2

Autoregressive stochastic volatility LPP

where the vector coefficients µ ~ n (~ y ), n = 1, 2, . . . and matrix coefficients Σn (~ y ), n = 1, 2, . . . are given by relations (2.51) and (2.50). The vector stochastic transition dynamic relation (2.79) can be re-written in the form of following equivalent transition dynamic relation given for components ~ε,n , of the log-price process Y    Yε,n,1 = hε,n hε,n−1 (Yε,n−1,1 )       +A0n (hε,n−1 (Yε,n−1,1 ), . . . , hε,n−p (Yε,n−1,p ))     00   +A (h (Y ), . . . , h (Y ))W , ε,n−1 ε,n−1,1 ε,n−p ε,n−1,p n n        Yε,n,2 = hε,n−1 (Yε,n−1,1 ), ... ... (2.80)   Yε,n,p = hε,n−p+1 (Yε,n−1,p−1 ),     n = 1, 2, . . . ,      Yε,0,1 = hε,0 (Y0,0,1 ),      ... ...    Y =h (Y ), ε,0,p

ε,−p+1

0,0,p

where functions A0n (~ y ) = A0n (y1 , . . . , yp ), A00n (~ y ) = A00n (y1 , . . . , yp ), n = 1, 2, . . . are given by relation (2.51). ~ε,n defined, for every ε ∈ (0, ε0 ], by the nonlinear dyThe log-price process Y namic transition relation (1.91) is a skeleton atomic Markov chain, with the phase ~ε,n ∈ A/Y ~ε,n−1 = space Rp and one-step transition probabilities Pε,n (~ y , A) = P{Y ~ y } defined for ~ y ∈ Rp , A ∈ Bp , n = 1, 2, . . . by the following relation, X ˆ ε,n−1 (~ Pε,n (~ y , A) = P0,n (h y ), ˆIε,n,¯l ) ~ yε,n,l¯∈A

=

X

ˆ ε,n−1 (~ ˆ ε,n−1 (Y ~ε,n−1 )) P{h y) + µ ~ n (h

~ yε,n,l¯∈A

ˆ ε,n−1 (Y ~ε,n−1 ))W ~ n0 ∈ ˆI ¯}, + Σn (h ε,n,l

(2.81)

where the vector coefficients µ ~ n (~ y ), n = 1, 2, . . . and matrix coefficients Σn (~ y ), n = 1, 2, . . . are given by relations (2.51) and (2.50). ~ε,0 ∈ A}, A ∈ Bp is concerned, As far as the initial distribution Pε,0 (A) = P{Y it takes, for every ε ∈ (0, ε0 ], the following form, X X ˆ ε,0 (Y ~0,0 ) ∈ ˆI ¯}. Pε,0 (A) = P0,0 (ˆI ¯) = P{h (2.82) ε,0,l

~ yε,0,l¯∈A

ε,n,l

~ yε,0,l¯∈A

We consider the model with a pay-off functions g(n, e~y ), ~ y = (y1 , . . . , yp ) ∈ Rp do not depend on ε. (ε) Let us also recall, for ε ∈ (0, ε0 ], the class Mmax,n,N of all Markov moments ~ε,n such that (a) n ≤ τε,n ≤ N , (b) event {τε,n = τε,n for the log-price process Y (ε) ~ m} ∈ Fn,m = σ[Yε,l , n ≤ l ≤ m], n ≤ m ≤ N .

2.2

Autoregressive conditional heteroskedastic LPP

97

~ε,n by In this case, the reward functions are defined for the log-price process Y the following relation, for ~ y ∈ Rp and n = 0, 1, . . . , N , φε,n (~ y) =

sup

~

E~y,n g(τε,n , eYε,τε,n ).

(2.83)

(ε) τε,n ∈Mmax,n,N

Probability measures Pε,n (~ y , A), ~ y ∈ Rp , n = 1, 2, . . . are concentrated on finite sets, for every ε ∈ (0, ε0 ]. This obviously implies that the rewards functions |φε,n (~ y )| < ∞, ~ y ∈ Rp , n = 1, 2, . . . for every ε ∈ (0, ε0 ]. It is useful to note that φε,N (~ y ) = g(N, e~y ), ~ y ∈ Rp . An analogue of Lemma 2.1.2 can be formulated for the above model. However, the corresponding system of linear equations can be simplified taking into account the specific shift structure of the stochastic difference equation (2.79) written in equivalent form (2.80). By the definition of sets ˆIε,n,¯l , ¯ l ∈ Lε,n , n = 0, 1, . . ., for every ~ y ∈ Rp and ¯ n = 0, 1, . . ., there exists the unique lε,n (~ y ) = (lε,n,1 (~ y ), . . . , lε,n,p (~ y )) ∈ Lε,n such that ~ y ∈ ˆIε,n,¯lε,n (~y) . The following lemma is the direct corollary of Lemma 2.1.2. Lemma 2.2.2. Let the autoregressive conditional heteroskedastic log-price pro~0,n is given, respectively, by the stochastic difference equation (2.49), while cess Y ~ε,n is defined, the corresponding space skeleton approximating log-price process Y for every ε ∈ (0, ε0 ], by the stochastic transition dynamic relation (2.79). Then the reward functions φε,n (~ y ) and φn+m (~ yε,n+m,¯l ), ¯ l ∈ Lε,n+m , m = 1, . . . N − n are, for every ε ∈ (0, ε0 ], ~ y ∈ Rp and n = 0, . . . , N − 1, the unique solution for the following recurrence finite system of linear equations,  φε,N (~ yε,N,¯l ) = g(N, e~yε,N,l¯),      ¯  l = (l1 , . . . , lp ) ∈ Lε,N ,      φε,n+m (~ yε,n+m,¯l ) = max g(n + m, e~yε,n+m,l¯),     Pm+ε,n+m+1   φε,n+m+1 (~ yε,n+m+1,(l10 ,l1 ,...,lp−1 ) )   l10 =m−  ε,n+m+1     ×P{yε,n+m,l1 + A0n+m+1 (yε,n+m,l1 , . . . , yε,n+m−p+1,lp )    (2.84) +A00n+m+1 (yε,n+m,l1 , . . . , yε,n+m−p+1,lp )Wn+m+1 ∈ Iε,n+m+1,l10 } ,    ¯  l = (l1 , . . . , lp ) ∈ Lε,n+m , m = N − n − 1, . . . , 1,      φε,n (~y ) = max g(n, e~y ),     Pm+ε,n+1   φε,n+1 (~ yε,n+1,(l10 ,lε,n,1 (~y),...,lε,n,p−1 (~y) )   l10 =m−  ε,n+1     ×P{yε,n,lε,n,1 (~y) + A0n+1 (yε,n,lε,n,1 (~y) , . . . , yε,n−p+1,lε,n,p (~y) )      +A00n+1 (yε,n,lε,n,1 (~y) , . . . , yε,n−p+1,lε,n,p (~y) )Wn+1 ∈ Iε,n+1,l10 } , where functions A0n (y1 , . . . , p), A00n (y1 , . . . , yp ), n = 1, 2, . . . are given by relation (ε) (2.51), while the optimal expected reward Φε = Φε (Mmax,0,N ) is defined as by the

98

2

Autoregressive stochastic volatility LPP

following formula, for every ε ∈ (0, ε0 ], X P0,0 (Iε,0,¯l ) φε,0 (~ yε,0,¯l ). Φε =

(2.85)

¯ l∈Lε,0

Proof of Lemma 2.2.2 is analogous to the proof of Lemma 2.1.2. Obviously, |Φε | < ∞, for every ε ∈ (0, ε0 ].

2.2.3 Convergence of option reward functions for space-skeleton approximations of autoregressive conditional heteroskedastic log-price processes In this case, the space skeleton approximations are constructed in the way analogous to those used in Section 1.2 and 2.1 for autoregressive and nonlinear autoregressive stochastic volatility log-price processes with Gaussian noise terms. In particular, we use the same structural skeleton condition N2 and impose the same conditions B2 [¯ γ ], and I2 on the pay-off functions. We also assume to hold the following variant of condition of weak continuity O7 , in which vector functions µ ~ n (~ y ), ~ y ∈ Rk , n = 1, . . . , N and matrix functions Σn (~ y ), ~ y ∈ Rk , n = 1, . . . , N and are defined in relation (2.50), where functions A0n (~ y ) and A00n (~ y ) are given by relation (2.51): O8 : (a) There exist sets Y00n ∈ Bp , n = 0, . . . , N such that function A00n (~ y ) are continuous for ~ y = (y1 , . . . , yp ) ∈ Y00n−1 , for n = 1, . . . , N , (b) P{~ y0 + µn (~ y0 ) + 0 00 0 00 0 ~ y0 ∈ Yn−1 ∩ Yn−1 , n = 1, . . . , N , where Σn (~ y0 )Wn ∈ Yn ∪ Yn } = 0, for ~ Y0n , n = 0, . . . , N are sets penetrating condition I2 . Remark 2.2.3. In this case, functions A0n (~ y ) are continuous. This is why, these functions are not involved in formulation of condition O8 (a). If also assume that g 21 (y) is continuous function then functions A00n (~ y ) is also continuous. In this case, sets Y00n = Rp , n = 0, . . . , N . A remark analogous to Remark 2.1.3 can be made about the condition O8 . ~ n0 ≡ 0 and P{~ Remark 2.2.4. If A00n (~ y0 ) = 0, then Σn (~ y0 )W y0 + µn (~ y0 ) ∈ 00 00 0 00 00 ~ n0 = Yn ∪Yn } = 0, if and only if ~ y0 +~ µn (~ y0 ) ∈ Yn ∩Yn . If An (~ y0 ) > 0 then Σn (~ y0 )W 0 (A00n (~ y0 )Wn,1 , 0, . . . , 0) is a Gaussian random vector (which should be considered as a p-dimensional column vector). The set of p-dimensional vectors R1,p,n = {~ y= (y1 , 0, . . . , 0) : y1 ∈ R1 } is a one-dimensional Euclidean hyper-subspace of Rp . ~ n0 , with respect The probability density function of the random vector Σn (~ y0 )W to Lebesgue measure L1,p,n (A) in the hyper-subspace R1,p,n , is concentrated and ~ n0 ∈ strictly positive in points ~ y ∈ R1,p,n . This implies that P{~ y0 +~ µn (~ y0 )+Σn (~ y0 )W 00 00 0 00 Yn ∪ Yn } = 0, if and only if L1,p,n (Y1,p,n,y0 ) = 0 and L1,p,n (Y1,p,n,y0 ) = 0, where 0 0 0 Y1,p,n,y0 = (Yn − ~ y0 − µ ~ n (~ y0 )) ∩ R1,p,n is the cut of the set Yn − ~ y0 − µ ~ n (~ y0 ) by 00 00 the hyper-subspace R1,p,n and Y1,p,n,y0 = (Yn − ~ y0 − µ ~ n (~ y0 )) ∩ R1,p,n is the cut of 00 the set Yn − ~ y0 − µ ~ n (~ y0 ) by the hyper-subspace R1,p,n .

2.2

Autoregressive conditional heteroskedastic LPP

99

Remark 2.2.5. It g 12 (y) > 0 for y > 0, then A00n (~ y ) > 0, ~ y ∈ Rp if dn,0 > 0. The following theorem is the direct corollary of Theorem 2.1.3 for the above model of autoregressive conditional heteroskedastic log-price processes with Gaussian noise terms. Theorem 2.2.3. Let the autoregressive conditional heteroskedastic log-price ~0,n are given, respectively, by the process Yn and its equivalent vector version Y stochastic difference equation (2.40) and the vector stochastic difference equation (2.49), and parameter κ = 12 , while the corresponding approximating space-skeleton ~ε,n , is defined, for every ε ∈ (0, ε0 ], by the dynamic transition log-price process, Y relation (2.79). Let also conditions B2 [¯ γ ] holds for some vector parameter γ¯ = (γ1 , . . . , γp ) with non-negative components, and, also, conditions N2 , I2 , and O8 hold. Then, for every n = 0, 1, . . . , N , the following relation takes place for any ~ yε → ~ y0 ∈ Y0n ∩ Y00n , φε,n (~ yε ) → φ0,n (~ y0 ) as ε → 0. (2.86) Proof. Conditions N2 , B2 [¯ γ ], I2 , and O8 are variants of conditions, respecy ) and tively, N7 , B1 [¯ γ ], I1 , and O7 , for the case, where k = p and functions A0n (~ y ) have the specific form given in relation (2.51). A00n (~ Thus all conditions of Theorem 2.1.3 hold. By applying this theorem we get the asymptotic relation (2.86). 

2.2.4 Convergence of optimal expected rewards for space skeleton approximations of autoregressive conditional heteroskedastic log-price processes Let us now give conditions for convergence for optimal expected rewards Φε in the above space skeleton approximation model. We shall apply Theorem 2.1.4. ¯ = (A1 (β), ¯ . . . , Ap (β)) ¯ ~ β) ¯ with the function A( In this case, condition D31 [β], given by relations (2.57) and (2.58), should be assumed to hold for some vector parameter β¯ = (β1 , . . . , βp ) with non-negative components and the corresponding vectors β¯i = (βi,1 , . . . , βi,p ) with components βi,j = I(i = j), i, j = 1, . . . , p. Condition K7 should be replaced by the following condition imposed on the ~0,0 ∈ A}: initial distribution P0,0 (A) = P{Y K8 : P0,0 (Y00 ∩ Y000 ) = 1, where Y00 and Y000 are the sets introduced, respectively, in conditions I2 and O8 . The following theorem is a corollary of Theorem 2.1.4. Theorem 2.2.4. Let the autoregressive conditional heteroskedastic log-price ~0,n are given, respectively, by the process Yn and its equivalent vector version Y stochastic difference equation (2.40) and the vector stochastic difference equation (2.49), and parameter κ = 21 , while the corresponding approximating space-skeleton

100

2

Autoregressive stochastic volatility LPP

log-price process $\vec Y_{\varepsilon,n}$ is defined, for every $\varepsilon \in (0, \varepsilon_0]$, by the dynamic transition relation (2.79). Let also conditions $\mathbf{B}_2[\bar\gamma]$ and $\mathbf{D}_8[\bar\beta]$ hold for some vector parameters $\bar\gamma = (\gamma_1, \ldots, \gamma_p)$ and $\bar\beta = (\beta_1, \ldots, \beta_p)$ such that, for every $i = 1, \ldots, p$, either $\beta_i > \gamma_i > 0$ or $\beta_i = \gamma_i = 0$, and let also conditions $\mathbf{N}_2$, $\mathbf{I}_2$, $\mathbf{O}_8$, and $\mathbf{K}_8$ hold. Then, the following relation takes place,
\[
\Phi_\varepsilon \to \Phi_0 \text{ as } \varepsilon \to 0. \tag{2.87}
\]

Proof. Theorem 2.2.4 is a corollary of Theorem 2.1.4. Conditions $\mathbf{D}_8[\bar\beta]$ and $\mathbf{K}_8$ are just re-formulations, respectively, of conditions $\mathbf{D}_7[\bar\beta]$ and $\mathbf{K}_7$ used in Theorem 2.1.4. The other conditions of this theorem also hold, as was pointed out in the proof of Theorem 2.2.3. By applying Theorem 2.1.4 we get the convergence relation (2.87). $\Box$

2.3 Generalized autoregressive conditional heteroskedastic LPP

In this section, we present results concerning space-skeleton approximations for rewards of American-type options for generalized autoregressive conditional heteroskedastic (GARCH) log-price processes.

2.3.1 Upper bounds for rewards of generalized autoregressive conditional heteroskedastic log-price processes

Let us consider a nonlinear autoregressive stochastic volatility log-price process with Gaussian noise terms, which is defined by the following stochastic difference equation,
\[
Y_{0,n} - Y_{0,n-1} = g_\kappa(\sigma_n)W_n, \quad n = 1, 2, \ldots, \tag{2.88}
\]
where
\[
\sigma_n = \big(d_{n,0} + d_{n,1}(Y_{0,n-1} - e_{n,1}Y_{0,n-2})^2 + \cdots + d_{n,p-1}(Y_{0,n-p+1} - e_{n,p-1}Y_{0,n-p})^2 + b_{n,1}\sigma_{n-1}^2 + \cdots + b_{n,q}\sigma_{n-q}^2\big)^{\frac12}, \quad n = 1, 2, \ldots, \tag{2.89}
\]

and: (a) $\vec Y_{0,0} = (Y_{0,0}, \ldots, Y_{0,-p+1}, \sigma_0, \ldots, \sigma_{-q+1})$ is a $(p+q)$-dimensional random vector with real-valued first $p$ components and non-negative last $q$ components; (b) $W_1, W_2, \ldots$ is a sequence of real-valued i.i.d. standard normal variables with mean value $0$ and variance $1$; (c) the random vector $\vec Y_{0,0}$ and the random sequence $W_1, W_2, \ldots$ are independent; (d) $p$ and $q$ are positive integer numbers; (e)


$d_{n,0}, \ldots, d_{n,p-1}$, $n = 1, 2, \ldots$ and $b_{n,1}, \ldots, b_{n,q}$, $n = 1, 2, \ldots$ are non-negative constants; (f) $e_{n,1}, \ldots, e_{n,p-1}$, $n = 1, 2, \ldots$ are constants taking values in the interval $[0, 1]$; (g) $g_\kappa(\cdot)$ is a function from the class $\mathcal{G}_\kappa$, for some $\kappa \geq 0$.

For the reasons presented in Section 2.2.1, we again restrict consideration to CIR(p)-type models, where the parameter $\kappa = \frac12$.

Let us introduce the $(p+q)$-dimensional vector process,
\[
\vec Y_{0,n} = (Y_{0,n,1}, \ldots, Y_{0,n,p}, Y_{0,n,p+1}, \ldots, Y_{0,n,p+q}) = (Y_n, \ldots, Y_{n-p+1}, \sigma_n, \ldots, \sigma_{n-q+1}), \quad n = 1, 2, \ldots. \tag{2.90}
\]
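Before turning to the equivalent vector form, a short numerical sketch may help the reader experiment with the recursion (2.88)-(2.89). It is only an illustration: the choice $g_{\frac12}(\sigma) = \sqrt{\sigma}$ as a representative of the class $\mathcal{G}_{\frac12}$, the time-independence of the coefficients, and all numerical values are assumptions made for this example, not part of the text.

```python
import numpy as np

def simulate_garch_log_price(N, d, b, e, y_init, sigma_init, g=np.sqrt, seed=0):
    """Simulate Y_0,...,Y_N following relations (2.88)-(2.89).

    d: coefficients d_0,...,d_{p-1} (taken time-independent in this sketch);
    b: coefficients b_1,...,b_q; e: coefficients e_1,...,e_{p-1};
    y_init: (Y_0, Y_{-1}, ..., Y_{-p+1}); sigma_init: (sigma_0, ..., sigma_{-q+1});
    g: a function from the class G_{1/2}; g = sqrt is an assumed example.
    """
    rng = np.random.default_rng(seed)
    p, q = len(d), len(b)
    y = list(y_init)          # y[0] = Y_{n-1}, y[1] = Y_{n-2}, ...
    sig = list(sigma_init)    # sig[0] = sigma_{n-1}, ...
    path = [y[0]]
    for _ in range(N):
        # the expression under the square root in relation (2.89)
        s2 = d[0]
        for i in range(1, p):
            s2 += d[i] * (y[i - 1] - e[i - 1] * y[i]) ** 2
        for j in range(1, q + 1):
            s2 += b[j - 1] * sig[j - 1] ** 2
        sigma_n = np.sqrt(s2)
        w = rng.standard_normal()
        y_new = y[0] + g(sigma_n) * w        # relation (2.88) with kappa = 1/2
        y = [y_new] + y[:-1]
        sig = [sigma_n] + sig[:-1]
        path.append(y_new)
    return np.array(path)

# example usage with small hypothetical parameters
path = simulate_garch_log_price(
    N=250, d=[0.01, 0.05, 0.03], b=[0.9], e=[0.5, 0.5],
    y_init=[0.0, 0.0, 0.0], sigma_init=[0.02])
print(path[:5])
```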

We can always assume that the sequence of random variables $W_n$, $n = 1, 2, \ldots$ is the sequence of the first components of a sequence of $(p+q)$-dimensional i.i.d. standard Gaussian random vectors $\vec W'_n = (W'_{n,1}, \ldots, W'_{n,p+q})$, $n = 1, 2, \ldots$, with $\mathsf{E}W'_{1,i} = 0$, $\mathsf{E}W'_{1,i}W'_{1,j} = I(i = j)$, $i, j = 1, \ldots, p+q$, i.e.,
\[
W_n = W'_{n,1}, \quad n = 1, 2, \ldots. \tag{2.91}
\]

Let us also consider $(p+q)$-dimensional i.i.d. Gaussian vectors $\vec W_n = (W_{n,1}, \ldots, W_{n,p+q}) = (W_n, 0, \ldots, 0)$, $n = 1, 2, \ldots$.

Vectors $\vec W_n$ can be obtained as a linear transformation of the vectors $\vec W'_n$, namely, $\vec W_n = \Sigma\vec W'_n$, $n = 1, 2, \ldots$, where the $(p+q) \times (p+q)$ matrix $\Sigma = \|\sigma_{i,j}\|$ has elements $\sigma_{i,j} = I(i = 1)I(j = 1)$, $i, j = 1, \ldots, p+q$.

The stochastic difference equation (2.88) can be re-written in the following equivalent form of a nonlinear autoregressive stochastic difference equation,
\[
Y_{0,n} - Y_{0,n-1} = A'_n(Y_{0,n-1}, \ldots, Y_{0,n-p-q}) + A''_n(Y_{0,n-1}, \ldots, Y_{0,n-p-q})W_n, \quad n = 1, 2, \ldots, \tag{2.92}
\]

where the transition dynamic functions $A'_n(\vec y)$ and $A''_n(\vec y)$ are given, for $\vec y \in \mathbb{R}^{p+q}$, $n = 1, 2, \ldots$ by the following relation,
\[
A'_n(\vec y) = A'_n(y_1, \ldots, y_{p+q}) = 0, \qquad A''_n(\vec y) = A''_n(y_1, \ldots, y_{p+q}) = g_{\frac12}(B_n(\vec y)), \tag{2.93}
\]

where
\[
B_n(\vec y) = B_n(y_1, \ldots, y_{p+q}) = \big(d_{n,0} + d_{n,1}(y_1 - e_{n,1}y_2)^2 + \cdots + d_{n,p-1}(y_{p-1} - e_{n,p-1}y_p)^2 + b_{n,1}y_{p+1}^2 + \cdots + b_{n,q}y_{p+q}^2\big)^{\frac12}, \tag{2.94}
\]

as well as in the form of the following vector stochastic difference equation,
\[
\vec Y_{0,n} - \vec Y_{0,n-1} = \vec\mu_n(\vec Y_{0,n-1}) + \Sigma_n(\vec Y_{0,n-1})\vec W'_n, \quad n = 1, 2, \ldots, \tag{2.95}
\]


where the $(p+q)$-dimensional vector functions $\vec\mu_n(\vec y) = (\mu_{n,1}(\vec y), \ldots, \mu_{n,p+q}(\vec y))$, $\vec y = (y_1, \ldots, y_{p+q}) \in \mathbb{R}^{p+q}$, $n = 1, 2, \ldots$ and the $(p+q) \times (p+q)$ matrix functions $\Sigma_n(\vec y) = \|\sigma_{n,i,j}(\vec y)\|$, $\vec y = (y_1, \ldots, y_{p+q}) \in \mathbb{R}^{p+q}$, $n = 1, 2, \ldots$ are defined by the following relations,
\[
\vec\mu_n(\vec y) =
\begin{pmatrix}
0 \\ y_1 - y_2 \\ \vdots \\ y_{p-1} - y_p \\ B_n(\vec y) - y_{p+1} \\ y_{p+1} - y_{p+2} \\ \vdots \\ y_{p+q-1} - y_{p+q}
\end{pmatrix},
\qquad
\Sigma_n(\vec y) =
\begin{pmatrix}
A''_n(\vec y) & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & 0
\end{pmatrix}. \tag{2.96}
\]
The above log-price process $\vec Y_{0,n}$ is a particular variant of the nonlinear autoregressive stochastic volatility log-price processes considered in Section 2.1, as well as of the multivariate Markov Gaussian log-price processes without index component considered in Section 4.5$^*$. Note that, in this case, $k = p+q$.

Lemma 2.3.1. Let the generalized autoregressive conditional heteroskedastic log-price process $Y_{0,n}$ and its equivalent vector version $\vec Y_{0,n}$ be given, respectively, by the stochastic difference equations (2.88) and (2.95), with parameter $\kappa = \frac12$. In this case, condition $\mathbf{G}_4$ holds, i.e., there exist constants $0 < K_{30} < \infty$ and $0 \leq K_{31,l} < \infty$, $l = 1, \ldots, p+q$ such that
\[
\max_{0 \leq n \leq N-1}\ \sup_{\vec y \in \mathbb{R}^{p+q}} \frac{|A'_{n+1}(\vec y)| + (A''_{n+1}(\vec y))^2}{1 + \sum_{l=1}^{p+q} K_{31,l}|y_l|} < K_{30}. \tag{2.97}
\]

Proof. Since $A'_{n+1}(\vec y) \equiv 0$, this term disappears in inequality (2.97).

The following inequalities take place, for every $\vec y = (y_1, \ldots, y_{p+q}) \in \mathbb{R}^{p+q}$, $n = 1, 2, \ldots$,
\[
\begin{aligned}
B_n(\vec y) &\leq \big(d_{n,0} + d_{n,1}(|y_1| + |y_2|)^2 + \cdots + d_{n,p-1}(|y_{p-1}| + |y_p|)^2 + b_{n,1}y_{p+1}^2 + \cdots + b_{n,q}y_{p+q}^2\big)^{\frac12} \\
&\leq \sqrt{d_{n,0}} + \sqrt{d_{n,1}}(|y_1| + |y_2|) + \cdots + \sqrt{d_{n,p-1}}(|y_{p-1}| + |y_p|) + \sqrt{b_{n,1}}|y_{p+1}| + \cdots + \sqrt{b_{n,q}}|y_{p+q}| \\
&\leq \sqrt{d_{n,0}} + \sqrt{d_{n,1}}|y_1| + (\sqrt{d_{n,1}} + \sqrt{d_{n,2}})|y_2| + \cdots + (\sqrt{d_{n,p-2}} + \sqrt{d_{n,p-1}})|y_{p-1}| + \sqrt{d_{n,p-1}}|y_p| \\
&\quad + \sqrt{b_{n,1}}|y_{p+1}| + \cdots + \sqrt{b_{n,q}}|y_{p+q}|.
\end{aligned} \tag{2.98}
\]


It follows from relation (2.98) that, for every $\vec y = (y_1, \ldots, y_{p+q}) \in \mathbb{R}^{p+q}$, $n = 1, 2, \ldots$,
\[
\begin{aligned}
|B_n(\vec y)| &< 1 + \sqrt{d_{n,0}} + (1 + \sqrt{d_{n,1}})|y_1| + (1 + \sqrt{d_{n,1}} + \sqrt{d_{n,2}})|y_2| + \cdots \\
&\quad + (1 + \sqrt{d_{n,p-2}} + \sqrt{d_{n,p-1}})|y_{p-1}| + (1 + \sqrt{d_{n,p-1}})|y_p| + (1 + \sqrt{b_{n,1}})|y_{p+1}| + \cdots + (1 + \sqrt{b_{n,q}})|y_{p+q}|.
\end{aligned} \tag{2.99}
\]
Also, using inequality (2.53) for the function $g_{\frac12}(y)$ and relation (2.98), we get, for every $\vec y = (y_1, \ldots, y_{p+q}) \in \mathbb{R}^{p+q}$, $n = 1, 2, \ldots$,
\[
\begin{aligned}
(A''_n(\vec y))^2 &< 2L^2\Big(1 + \big(d_{n,0} + d_{n,1}(|y_1| + |y_2|)^2 + \cdots + d_{n,p-1}(|y_{p-1}| + |y_p|)^2 + b_{n,1}y_{p+1}^2 + \cdots + b_{n,q}y_{p+q}^2\big)^{\frac12}\Big) \\
&\leq 2L^2\big(1 + \sqrt{d_{n,0}} + \sqrt{d_{n,1}}(|y_1| + |y_2|) + \cdots + \sqrt{d_{n,p-1}}(|y_{p-1}| + |y_p|) + \sqrt{b_{n,1}}|y_{p+1}| + \cdots + \sqrt{b_{n,q}}|y_{p+q}|\big) \\
&\leq 2L^2(1 + \sqrt{d_{n,0}}) + 2L^2\sqrt{d_{n,1}}|y_1| + 2L^2(\sqrt{d_{n,1}} + \sqrt{d_{n,2}})|y_2| + \cdots \\
&\quad + 2L^2(\sqrt{d_{n,p-2}} + \sqrt{d_{n,p-1}})|y_{p-1}| + 2L^2\sqrt{d_{n,p-1}}|y_p| + 2L^2\sqrt{b_{n,1}}|y_{p+1}| + \cdots + 2L^2\sqrt{b_{n,q}}|y_{p+q}|.
\end{aligned} \tag{2.100}
\]
Relations (2.99) and (2.100) readily imply that inequality (2.97) holds with the constants,
\[
K_{30} = \max_{0 \leq n \leq N-1}(1 + 2L^2)(1 + \sqrt{d_{n+1,0}}),
\]
\[
K_{31,l} =
\begin{cases}
\dfrac{\max_{0 \leq n \leq N-1}\sqrt{d_{n+1,1}}}{\max_{0 \leq n \leq N-1}(1 + \sqrt{d_{n+1,0}})} & \text{if } l = 1, \\[2mm]
\dfrac{\max_{0 \leq n \leq N-1}(\sqrt{d_{n+1,l-1}} + \sqrt{d_{n+1,l}})}{\max_{0 \leq n \leq N-1}(1 + \sqrt{d_{n+1,0}})} & \text{if } l = 2, \ldots, p-1, \\[2mm]
\dfrac{\max_{0 \leq n \leq N-1}\sqrt{d_{n+1,p-1}}}{\max_{0 \leq n \leq N-1}(1 + \sqrt{d_{n+1,0}})} & \text{if } l = p, \\[2mm]
\dfrac{\max_{0 \leq n \leq N-1}\sqrt{b_{n+1,l-p}}}{\max_{0 \leq n \leq N-1}(1 + \sqrt{d_{n+1,0}})} & \text{if } l = p+1, \ldots, p+q.
\end{cases} \tag{2.101}
\]

It follows from inequality (2.54) that condition $\mathbf{G}_4$ also holds, for example, with the constant $K_{30}$ and constants $K_{31,l} = 1$, $l = 1, \ldots, p+q$, which can replace, respectively, the constants $K_{22}$ and $K_{23,l}$, $l = 1, \ldots, k$ penetrating condition $\mathbf{G}_4$ (recall that, in this case, $k = p+q$).

Thus, Theorems 2.1.1 and 2.1.2 can be applied to the above log-price process $Y_n$ given by the stochastic difference equation (2.88), which yields the upper bounds,


respectively, for the reward functions and optimal expected rewards given in these theorems for these log-price processes.

In this case, we consider pay-off functions $g(n, e^{\vec y})$, $\vec y = (y_1, \ldots, y_{p+q}) \in \mathbb{R}^{p+q}$, $n = 0, 1, \ldots$, which are real-valued measurable functions assumed to satisfy condition $\mathbf{B}_3[\bar\gamma]$, for some vector parameter $\bar\gamma = (\gamma_1, \ldots, \gamma_{p+q})$ with non-negative components.

Let $\mathcal{M}^{(0)}_{\max,p,q,n,N}$ be the class of all stopping times $\tau_{0,n}$ for the process $Y_n$

such that (a) $n \leq \tau_{0,n} \leq N$, (b) the event $\{\tau_{0,n} = m\} \in \mathcal{F}^{(p,q)}_{n,m} = \sigma[Y_{l'}, n-p+1 \leq l' \leq m,\ \sigma_{l''}, n-q+1 \leq l'' \leq m]$, $n \leq m \leq N$.

Obviously, the class $\mathcal{M}^{(0)}_{\max,p,q,n,N}$ coincides with the class $\mathcal{M}^{(0)}_{\max,n,N}$ of all Markov moments $\tau_{0,n}$ for the Markov process $\vec Y_{0,l} = (Y_{0,l,1}, \ldots, Y_{0,l,p+q}) = (Y_l, \ldots, Y_{l-p+1}, \sigma_l, \ldots, \sigma_{l-q+1})$ such that (a) $n \leq \tau_{0,n} \leq N$, (b) the event $\{\tau_{0,n} = m\} \in \mathcal{F}_{n,m} = \sigma[\vec Y_{0,l}, n \leq l \leq m]$, $n \leq m \leq N$.

In this case, the reward functions are defined for the log-price process $Y_n$ (its equivalent vector version $\vec Y_{0,n}$) by the following relation, for $\vec y \in \mathbb{R}^{p+q}$ and $n = 0, 1, \ldots, N$,
\[
\varphi_{0,n}(\vec y) = \sup_{\tau_{0,n} \in \mathcal{M}^{(0)}_{\max,n,N}} \mathsf{E}_{\vec y, n}\, g\big(\tau_{0,n}, e^{\vec Y_{0,\tau_{0,n}}}\big). \tag{2.102}
\]
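As a concrete illustration of a pay-off to which such reward functionals can be applied (this example is ours, not part of the text), a standard American put on the current price component, $g(n, e^{\vec y}) = \max(K - e^{y_1}, 0)$, is bounded by the hypothetical strike $K$ and therefore satisfies growth conditions of the type $\mathbf{B}_3[\bar\gamma]$ with $\bar\gamma = (0, \ldots, 0)$:

```python
import numpy as np

def put_payoff(n, s, strike=100.0):
    """American put pay-off g(n, s) = max(strike - s_1, 0) applied to the price
    vector s = e^{y}; only the first (current price) component is used.
    The strike value is a hypothetical example parameter."""
    return max(strike - s[0], 0.0)

y = np.array([np.log(95.0), np.log(97.0), np.log(99.0)])  # log-price history
print(put_payoff(0, np.exp(y)))   # 5.0
```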

In this case, the functions $\vec A(\bar\beta) = (A_1(\bar\beta), \ldots, A_{p+q}(\bar\beta))$, $\bar\beta = (\beta_1, \ldots, \beta_{p+q})$, $\beta_1, \ldots, \beta_{p+q} \geq 0$, penetrating the formulation of Theorems 4.5.3 and 4.5.4, have the following components,
\[
A_j(\bar\beta) = K_{30}K_{31,j}\sum_{l=1}^{p+q}\Big(\beta_l + \frac12(p+q)^2\beta_l^2\Big), \quad j = 1, \ldots, p+q. \tag{2.103}
\]
The function $\vec A(\bar\beta)$ generates a sequence of functions $\vec A_n(\bar\beta) = (A_{n,1}(\bar\beta), \ldots, A_{n,p+q}(\bar\beta))$, $n = 0, 1, \ldots$ from the class $\mathcal{A}_{p+q}$ that are defined by the following recurrence relation, for any $\bar\beta = (\beta_1, \ldots, \beta_{p+q})$, $\beta_i \geq 0$, $i = 1, \ldots, p+q$,
\[
\vec A_n(\bar\beta) =
\begin{cases}
\bar\beta & \text{for } n = 0, \\
\vec A_{n-1}(\bar\beta) + \vec A(\vec A_{n-1}(\bar\beta)) & \text{for } n = 1, 2, \ldots.
\end{cases} \tag{2.104}
\]
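The components $A_j(\bar\beta)$ of relation (2.103) and the recurrence (2.104) are elementary to evaluate numerically; the following minimal sketch does exactly that (the constants $K_{30}$, $K_{31,l}$ and the parameter values are hypothetical placeholders).

```python
import numpy as np

def A_vec(beta, K30, K31):
    """Components A_j(beta) = K30*K31[j]*sum_l (beta_l + 0.5*(p+q)^2*beta_l^2),
    as in relation (2.103)."""
    k = len(beta)                                   # k = p + q
    s = np.sum(np.asarray(beta) + 0.5 * k**2 * np.asarray(beta)**2)
    return K30 * np.asarray(K31) * s

def A_seq(beta, n, K30, K31):
    """Recurrence (2.104): A_0 = beta, A_m = A_{m-1} + A(A_{m-1})."""
    a = np.asarray(beta, dtype=float)
    for _ in range(n):
        a = a + A_vec(a, K30, K31)
    return a

# hypothetical constants for illustration only
beta = np.array([0.1, 0.0, 0.05])
print(A_seq(beta, n=3, K30=1.2, K31=[1.0, 0.8, 0.5]))
```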

Theorem 2.1.1 takes in this case the following form.

Theorem 2.3.1. Let the generalized autoregressive conditional heteroskedastic log-price process $Y_{0,n}$ and its equivalent vector version $\vec Y_{0,n}$ be given, respectively, by the stochastic difference equations (2.88) and (2.95), with parameter $\kappa = \frac12$. Let also condition $\mathbf{B}_3[\bar\gamma]$ hold for some vector parameter $\bar\gamma = (\gamma_1, \ldots, \gamma_{p+q})$ with $\gamma_i \geq 0$, $i = 1, \ldots, p+q$. Then, for any vector parameter $\bar\beta = (\beta_1, \ldots, \beta_{p+q})$ with components $\beta_i \geq \gamma_i$, $i = 1, \ldots, p+q$, there exist constants $0 \leq M_{25}, M_{26,1}, \ldots, M_{26,p+q} < \infty$ such that the reward functions $\varphi_{0,n}(\vec y)$ satisfy the following inequalities for $\vec y =$


$(y_1, \ldots, y_{p+q}) \in \mathbb{R}^{p+q}$, $0 \leq n \leq N$,
\[
|\varphi_{0,n}(\vec y)| \leq M_{25} + \sum_{i:\, \gamma_i = 0} M_{26,i} + \sum_{i:\, \gamma_i > 0} M_{26,i}\exp\Big\{\frac{\gamma_i}{\beta_i}\sum_{j=1}^{p+q} A_{N-n,j}(\bar\beta_i)|y_j|\Big\}. \tag{2.105}
\]

Remark 2.3.1. The explicit formulas for the constants $M_{25}$ and $M_{26,i}$, $i = 1, \ldots, p+q$ take, according to the formulas given in Remark 2.1.1, the following form,
\[
M_{25} = L_5, \qquad
M_{26,i} = L_5L_{6,i}I(\gamma_i = 0) + L_5L_{6,i}\Big(1 + 2^{p+q} e^{K_{30}\sum_{l=1}^{p+q}\big(A_{N-1,l}(\bar\beta_i) + \frac12(p+q)^2A^2_{N-1,l}(\bar\beta_i)\big)N}\Big)^{\frac{\gamma_i}{\beta_i}}I(\gamma_i > 0), \tag{2.106}
\]
where the vectors $\bar\beta_i = (\beta_{i,1}, \ldots, \beta_{i,p+q}) = (\beta_1I(i = 1), \ldots, \beta_{p+q}I(i = p+q))$, $i = 1, \ldots, p+q$.

Condition $\mathbf{D}_7[\bar\beta]$ should be replaced in this case by the following condition, assumed to hold for a vector parameter $\bar\beta = (\beta_1, \ldots, \beta_{p+q})$ with non-negative components:

$\mathbf{D}_9[\bar\beta]$: $\mathsf{E}\exp\{\sum_{j=1}^{p+q} A_{N,j}(\bar\beta_i)|Y_{0,0,j}|\} < K_{32,i}$, $i = 1, \ldots, p+q$, for some $1 < K_{32,i} < \infty$, $i = 1, \ldots, p+q$.

In this case the optimal expected reward is defined by the formula,
\[
\Phi_0 = \sup_{\tau_{0,0} \in \mathcal{M}^{(0)}_{\max,0,N}} \mathsf{E}\, g\big(\tau_{0,0}, e^{\vec Y_{0,\tau_{0,0}}}\big) = \mathsf{E}\,\varphi_0(\vec Y_{0,0}). \tag{2.107}
\]

Theorem 2.1.2 takes in this case the following form.

Theorem 2.3.2. Let the generalized autoregressive conditional heteroskedastic log-price process $Y_{0,n}$ and its equivalent vector version $\vec Y_{0,n}$ be given, respectively, by the stochastic difference equations (2.88) and (2.95), with parameter $\kappa = \frac12$. Let also conditions $\mathbf{B}_3[\bar\gamma]$ and $\mathbf{D}_9[\bar\beta]$ hold and $0 \leq \gamma_i \leq \beta_i < \infty$, $i = 1, \ldots, p+q$. Then, there exists a constant $0 \leq M_{27} < \infty$ such that the following inequality takes place,
\[
|\Phi_0| \leq M_{27}. \tag{2.108}
\]

Remark 2.3.2. The explicit formula for the constant $M_{27}$ takes, according to the formulas given in Remark 2.1.2, the following form,
\[
M_{27} = L_5 + \sum_{i:\, \gamma_i = 0} L_5L_{6,i} + \sum_{i:\, \gamma_i > 0} L_5L_{6,i}\Big(1 + 2^{p+q} e^{K_{30}\sum_{l=1}^{p+q}\big(A_{N-1,l}(\bar\beta_i) + \frac12(p+q)^2A^2_{N-1,l}(\bar\beta_i)\big)N}\Big)^{\frac{\gamma_i}{\beta_i}} K_{32,i}^{\frac{\gamma_i}{\beta_i}}. \tag{2.109}
\]
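The constant of relation (2.109) can be assembled mechanically from the ingredients introduced above; the sketch below shows one way of doing so (every input is a placeholder to be supplied by the model, and the vectors $A_{N-1,l}(\bar\beta_i)$ are assumed to have been computed, for example by the recurrence (2.104)).

```python
import numpy as np

def M27(N, L5, L6, K30, K32, gammas, betas, A_Nm1):
    """Assemble the bound constant of relation (2.109).
    L6, K32, gammas, betas: sequences indexed by i = 1..p+q (placeholders);
    A_Nm1[i]: the vector (A_{N-1,l}(beta_i), l = 1,...,p+q) for each i."""
    k = len(gammas)
    total = L5
    for i in range(k):
        if gammas[i] == 0:
            total += L5 * L6[i]
        else:
            a = np.asarray(A_Nm1[i], dtype=float)
            expo = K30 * np.sum(a + 0.5 * k**2 * a**2) * N
            r = gammas[i] / betas[i]
            total += L5 * L6[i] * (1 + 2**k * np.exp(expo)) ** r * K32[i] ** r
    return total
```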


2.3.2 Space-skeleton approximations for option rewards of generalized autoregressive conditional heteroskedastic log-price processes

The Markov log-price process $\vec Y_{0,n}$ has the Gaussian transition probabilities $P_{0,n}(\vec y, A) = \mathsf{P}\{\vec Y_{0,n} \in A / \vec Y_{0,n-1} = \vec y\}$ defined for $\vec y \in \mathbb{R}^{p+q}$, $A \in \mathcal{B}_{p+q}$, $n = 1, 2, \ldots$ by the following relation,
\[
P_{0,n}(\vec y, A) = \mathsf{P}\{\vec y + \vec\mu_n(\vec y) + \Sigma_n(\vec y)\vec W'_n \in A\}, \tag{2.110}
\]

where the vector coefficients $\vec\mu_n(\vec y)$, $n = 1, 2, \ldots$ and the matrix coefficients $\Sigma_n(\vec y)$, $n = 1, 2, \ldots$ are given by relations (2.96) and (2.93).

The log-price process $\vec Y_{0,n}$ given by the stochastic difference equation (2.95) is a particular case of the multivariate Markov Gaussian log-price process considered in Subsection 2.1.1, with parameter $k = p+q$ and characteristics given by relations (2.96) and (2.93).

It is useful to re-write the vector transition dynamic relation (2.95) in the form of a system of dynamic transition relations for the components of the log-price process $\vec Y_{0,n}$. In this case, relation (2.95) takes the following form (where the corresponding cancellation of the identical term $Y_{0,n-1,i}$ is made in the transition dynamic relation for the $i$-th component, and the fact that the noise terms $W_{n,i} = 0$ for $i = 2, \ldots, p+q$ is used),
\[
\begin{cases}
Y_{0,n,1} = Y_{0,n-1,1} + A''_n(Y_{0,n-1,1}, \ldots, Y_{0,n-1,p+q})W_{n,1}, \\
Y_{0,n,2} = Y_{0,n-1,1}, \\
\quad\cdots \\
Y_{0,n,p} = Y_{0,n-1,p-1}, \\
Y_{0,n,p+1} = B_n(Y_{0,n-1,1}, \ldots, Y_{0,n-1,p+q}), \\
Y_{0,n,p+2} = Y_{0,n-1,p+1}, \\
\quad\cdots \\
Y_{0,n,p+q} = Y_{0,n-1,p+q-1}, \\
n = 1, 2, \ldots.
\end{cases} \tag{2.111}
\]
Lemma 2.1.2 and Theorems 2.1.3 and 2.1.4 can be applied to the log-price processes $\vec Y_{0,n}$.

In this case, some simplification in the construction of the corresponding skeleton approximation model can be achieved due to the specific shift structure of the stochastic difference equation (2.95), which is well seen from the equivalent form (2.111) of this equation.

The construction of the corresponding skeleton approximations is analogous to that described for autoregressive log-price processes in Subsection 11.3.2.

Let $m^-_{\varepsilon,n} \leq m^+_{\varepsilon,n}$ be integer numbers, $\delta_{\varepsilon,n} > 0$ and $\lambda_{\varepsilon,n} \in \mathbb{R}_1$, for $n = -\max(p,q)+1, -\max(p,q)+2, \ldots$.


In this case, one can use parameters $m^\pm_{\varepsilon,n,i} = m^\pm_{\varepsilon,n-i+1}$, $\delta_{\varepsilon,n,i} = \delta_{\varepsilon,n-i+1}$, $\lambda_{\varepsilon,n,i} = \lambda_{\varepsilon,n-i+1}$, for $i = 1, \ldots, p$ and $m^\pm_{\varepsilon,n,i} = m^\pm_{\varepsilon,n-i+p+1}$, $\delta_{\varepsilon,n,i} = \delta_{\varepsilon,n-i+p+1}$, $\lambda_{\varepsilon,n,i} = \lambda_{\varepsilon,n-i+p+1}$, for $i = p+1, \ldots, p+q$, for $n = 0, 1, \ldots$.

In this case, the index sets $\mathbb{L}_{\varepsilon,n}$, $n = 0, 1, \ldots$ take the following form,
\[
\begin{aligned}
\mathbb{L}_{\varepsilon,n} &= \{\bar l = (l_1, \ldots, l_{p+q}),\ m^-_{\varepsilon,n,i} \leq l_i \leq m^+_{\varepsilon,n,i},\ i = 1, \ldots, p+q\} \\
&= \{\bar l = (l_1, \ldots, l_{p+q}),\ m^-_{\varepsilon,n-i+1} \leq l_i \leq m^+_{\varepsilon,n-i+1},\ i = 1, \ldots, p,\ m^-_{\varepsilon,n-i+p+1} \leq l_i \leq m^+_{\varepsilon,n-i+p+1},\ i = p+1, \ldots, p+q\}.
\end{aligned} \tag{2.112}
\]

Other elements of the space-skeleton approximation should also be defined with the use of the above shift construction.

First, the skeleton intervals $I_{\varepsilon,n,l}$ should be defined for $l = m^-_{\varepsilon,n}, \ldots, m^+_{\varepsilon,n}$, $n = -\max(p,q)+1, \ldots$,
\[
I_{\varepsilon,n,l} =
\begin{cases}
(-\infty, \delta_{\varepsilon,n}(m^-_{\varepsilon,n} + \frac12)] + \lambda_{\varepsilon,n} & \text{if } l = m^-_{\varepsilon,n}, \\
(\delta_{\varepsilon,n}(l - \frac12), \delta_{\varepsilon,n}(l + \frac12)] + \lambda_{\varepsilon,n} & \text{if } m^-_{\varepsilon,n} < l < m^+_{\varepsilon,n}, \\
(\delta_{\varepsilon,n}(m^+_{\varepsilon,n} - \frac12), \infty) + \lambda_{\varepsilon,n} & \text{if } l = m^+_{\varepsilon,n},
\end{cases} \tag{2.113}
\]
and then the skeleton cubes $\hat I_{\varepsilon,n,\bar l}$ should be defined for $\bar l \in \mathbb{L}_{\varepsilon,n}$, $n = 0, 1, \ldots$,
\[
\hat I_{\varepsilon,n,\bar l} = I_{\varepsilon,n,1,l_1} \times \cdots \times I_{\varepsilon,n,p,l_p} \times I_{\varepsilon,n,p+1,l_{p+1}} \times \cdots \times I_{\varepsilon,n,p+q,l_{p+q}} = I_{\varepsilon,n,l_1} \times \cdots \times I_{\varepsilon,n-p+1,l_p} \times I_{\varepsilon,n,l_{p+1}} \times \cdots \times I_{\varepsilon,n-q+1,l_{p+q}}. \tag{2.114}
\]
Second, the skeleton points $y_{\varepsilon,n,l}$ should be defined for $l = m^-_{\varepsilon,n}, \ldots, m^+_{\varepsilon,n}$, $n = -\max(p,q)+1, \ldots$,
\[
y_{\varepsilon,n,l} = l\delta_{\varepsilon,n} + \lambda_{\varepsilon,n}, \tag{2.115}
\]
and the vector skeleton points $\vec y_{\varepsilon,n,\bar l}$ should be defined for $\bar l \in \mathbb{L}_{\varepsilon,n}$, $n = 0, 1, \ldots$,
\[
\vec y_{\varepsilon,n,\bar l} = (y_{\varepsilon,n,1,l_1}, \ldots, y_{\varepsilon,n,p,l_p}, y_{\varepsilon,n,p+1,l_{p+1}}, \ldots, y_{\varepsilon,n,p+q,l_{p+q}}) = (y_{\varepsilon,n,l_1}, \ldots, y_{\varepsilon,n-p+1,l_p}, y_{\varepsilon,n,l_{p+1}}, \ldots, y_{\varepsilon,n-q+1,l_{p+q}}). \tag{2.116}
\]

Third, the skeleton functions $h_{\varepsilon,n}(y)$, $y \in \mathbb{R}_1$ should be defined for $n = -\max(p,q)+1, \ldots$,
\[
h_{\varepsilon,n}(y) = y_{\varepsilon,n,l} \quad \text{if } y \in I_{\varepsilon,n,l},\ m^-_{\varepsilon,n} \leq l \leq m^+_{\varepsilon,n}, \tag{2.117}
\]
and the vector skeleton functions $\hat h_{\varepsilon,n}(\vec y)$, $\vec y = (y_1, \ldots, y_{p+q}) \in \mathbb{R}^{p+q}$, should be defined for $n = 0, 1, \ldots$,
\[
\hat h_{\varepsilon,n}(\vec y) = (h_{\varepsilon,n,1}(y_1), \ldots, h_{\varepsilon,n,p}(y_p), h_{\varepsilon,n,p+1}(y_{p+1}), \ldots, h_{\varepsilon,n,p+q}(y_{p+q})) = (h_{\varepsilon,n}(y_1), \ldots, h_{\varepsilon,n-p+1}(y_p), h_{\varepsilon,n}(y_{p+1}), \ldots, h_{\varepsilon,n-q+1}(y_{p+q})). \tag{2.118}
\]
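A minimal one-dimensional sketch of this skeleton construction may be useful: grid points $y_{\varepsilon,n,l} = l\delta_{\varepsilon,n} + \lambda_{\varepsilon,n}$ as in relation (2.115) and a projection function implementing relation (2.117), with the outer intervals of relation (2.113) treated as half-lines. The grid parameters below are illustrative, and boundary points are assigned up to the half-open interval convention.

```python
import numpy as np

def skeleton_points(m_minus, m_plus, delta, lam):
    """Grid points y_{eps,n,l} = l*delta + lam, l = m_minus,...,m_plus (2.115)."""
    return {l: l * delta + lam for l in range(m_minus, m_plus + 1)}

def h_skeleton(y, m_minus, m_plus, delta, lam):
    """Projection h_{eps,n}(y) of relation (2.117): y is mapped to the grid point
    whose interval I_{eps,n,l} of relation (2.113) contains it."""
    l = int(np.round((y - lam) / delta))         # nearest grid index
    l = min(max(l, m_minus), m_plus)             # outer intervals are half-lines
    return l * delta + lam

pts = skeleton_points(-5, 5, 0.1, 0.0)
print(h_skeleton(0.234, -5, 5, 0.1, 0.0))   # -> 0.2
print(h_skeleton(9.9, -5, 5, 0.1, 0.0))     # -> 0.5 (right half-line)
```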


The corresponding space-skeleton approximating processes $\vec Y_{\varepsilon,n}$ are defined, for every $\varepsilon \in (0, \varepsilon_0]$, by the following dynamic relation,
\[
\begin{cases}
\vec Y_{\varepsilon,n} = \hat h_{\varepsilon,n}\big(\hat h_{\varepsilon,n-1}(\vec Y_{\varepsilon,n-1}) + \vec\mu_n(\hat h_{\varepsilon,n-1}(\vec Y_{\varepsilon,n-1})) + \Sigma_n(\hat h_{\varepsilon,n-1}(\vec Y_{\varepsilon,n-1}))\vec W'_n\big), \quad n = 1, 2, \ldots, \\
\vec Y_{\varepsilon,0} = \hat h_{\varepsilon,0}(\vec Y_{0,0}),
\end{cases} \tag{2.119}
\]
where the vector functions $\vec\mu_n(\vec y)$ and matrix functions $\Sigma_n(\vec y)$, $n = 1, \ldots, N$ are defined by relation (2.96), or by the following equivalent transition dynamic relation given in the form of a system of transition relations for the components of the log-price process $\vec Y_{\varepsilon,n}$,
\[
\begin{cases}
Y_{\varepsilon,n,1} = h_{\varepsilon,n}\big(h_{\varepsilon,n-1}(Y_{\varepsilon,n-1,1}) + A''_n(h_{\varepsilon,n-1}(Y_{\varepsilon,n-1,1}), \ldots, h_{\varepsilon,n-p}(Y_{\varepsilon,n-1,p}), h_{\varepsilon,n-1}(Y_{\varepsilon,n-1,p+1}), \ldots, h_{\varepsilon,n-q}(Y_{\varepsilon,n-1,p+q}))W_{n,1}\big), \\
Y_{\varepsilon,n,2} = h_{\varepsilon,n-1}(Y_{\varepsilon,n-1,1}), \\
\quad\cdots \\
Y_{\varepsilon,n,p} = h_{\varepsilon,n-p+1}(Y_{\varepsilon,n-1,p-1}), \\
Y_{\varepsilon,n,p+1} = h_{\varepsilon,n}\big(B_n(h_{\varepsilon,n-1}(Y_{\varepsilon,n-1,1}), \ldots, h_{\varepsilon,n-p}(Y_{\varepsilon,n-1,p}), h_{\varepsilon,n-1}(Y_{\varepsilon,n-1,p+1}), \ldots, h_{\varepsilon,n-q}(Y_{\varepsilon,n-1,p+q}))\big), \\
Y_{\varepsilon,n,p+2} = h_{\varepsilon,n-1}(Y_{\varepsilon,n-1,p+1}), \\
\quad\cdots \\
Y_{\varepsilon,n,p+q} = h_{\varepsilon,n-q+1}(Y_{\varepsilon,n-1,p+q-1}), \\
n = 1, 2, \ldots, \\
Y_{\varepsilon,0,1} = h_{\varepsilon,0}(Y_{0,0,1}), \\
\quad\cdots \\
Y_{\varepsilon,0,p} = h_{\varepsilon,-p+1}(Y_{0,0,p}), \\
Y_{\varepsilon,0,p+1} = h_{\varepsilon,0}(Y_{0,0,p+1}), \\
\quad\cdots \\
Y_{\varepsilon,0,p+q} = h_{\varepsilon,-q+1}(Y_{0,0,p+q}).
\end{cases} \tag{2.120}
\]
The log-price process $\vec Y_{\varepsilon,n}$ defined, for every $\varepsilon \in (0, \varepsilon_0]$, by the nonlinear dynamic transition relation (2.119) is a skeleton atomic Markov chain, with the phase space $\mathbb{R}^{p+q}$ and one-step transition probabilities $P_{\varepsilon,n}(\vec y, A) = \mathsf{P}\{\vec Y_{\varepsilon,n} \in A / \vec Y_{\varepsilon,n-1} = \vec y\}$ defined for $\vec y \in \mathbb{R}^{p+q}$, $A \in \mathcal{B}_{p+q}$, $n = 1, 2, \ldots$ by the following


relation,
\[
P_{\varepsilon,n}(\vec y, A) = \sum_{\vec y_{\varepsilon,n,\bar l} \in A} P_{0,n}\big(\hat h_{\varepsilon,n-1}(\vec y), \hat I_{\varepsilon,n,\bar l}\big) = \sum_{\vec y_{\varepsilon,n,\bar l} \in A} \mathsf{P}\big\{\hat h_{\varepsilon,n-1}(\vec y) + \vec\mu_n(\hat h_{\varepsilon,n-1}(\vec y)) + \Sigma_n(\hat h_{\varepsilon,n-1}(\vec y))\vec W'_n \in \hat I_{\varepsilon,n,\bar l}\big\}, \tag{2.121}
\]

where the vector functions $\vec\mu_n(\vec y)$ and matrix functions $\Sigma_n(\vec y)$, $n = 1, \ldots, N$ are defined by relation (2.96).

As far as the initial distribution $P_{\varepsilon,0}(A) = \mathsf{P}\{\vec Y_{\varepsilon,0} \in A\}$, $A \in \mathcal{B}_{p+q}$ is concerned, it takes, for every $\varepsilon \in (0, \varepsilon_0]$, the following form,
\[
P_{\varepsilon,0}(A) = \sum_{\vec y_{\varepsilon,0,\bar l} \in A} P_{0,0}(\hat I_{\varepsilon,0,\bar l}) = \sum_{\vec y_{\varepsilon,0,\bar l} \in A} \mathsf{P}\big\{\hat h_{\varepsilon,0}(\vec Y_{0,0}) \in \hat I_{\varepsilon,0,\bar l}\big\}. \tag{2.122}
\]
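Since only the first coordinate of the approximating chain is random, each probability appearing in relation (2.121) reduces, for the GARCH model, to the probability that a Gaussian variable with mean $h_{\varepsilon,n-1}(y_1)$ and standard deviation $A''_n(\cdot)$ falls into a one-dimensional skeleton interval, which can be computed from normal distribution functions. A sketch under these assumptions (interval endpoints follow relation (2.113); the degenerate case of zero standard deviation is handled separately):

```python
from math import erf, sqrt, inf

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def interval_prob(mean, std, left, right):
    """P{ mean + std*W in (left, right] } for a standard normal W."""
    if std == 0.0:
        return float(left < mean <= right)
    hi = 1.0 if right == inf else normal_cdf((right - mean) / std)
    lo = 0.0 if left == -inf else normal_cdf((left - mean) / std)
    return hi - lo

def transition_row(y1, a_second, delta, lam, m_minus, m_plus):
    """Probabilities that y1 + A''*W falls in each skeleton interval
    I_{eps,n,l}, l = m_minus,...,m_plus, of relation (2.113)."""
    probs = {}
    for l in range(m_minus, m_plus + 1):
        left = -inf if l == m_minus else delta * (l - 0.5) + lam
        right = inf if l == m_plus else delta * (l + 0.5) + lam
        probs[l] = interval_prob(y1, a_second, left, right)
    return probs

row = transition_row(y1=0.0, a_second=0.15, delta=0.1, lam=0.0,
                     m_minus=-5, m_plus=5)
print(sum(row.values()))   # ~1.0, since the intervals partition the real line
```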

We assume that the pay-off function $g(n, e^{\vec y})$, $\vec y = (y_1, \ldots, y_{p+q}) \in \mathbb{R}^{p+q}$ does not depend on $\varepsilon$.

Let us also recall, for $\varepsilon \in (0, \varepsilon_0]$, the class $\mathcal{M}^{(\varepsilon)}_{\max,n,N}$ of all Markov moments $\tau_{\varepsilon,n}$ for the log-price process $\vec Y_{\varepsilon,n}$ such that (a) $n \leq \tau_{\varepsilon,n} \leq N$, (b) the event $\{\tau_{\varepsilon,n} = m\} \in \mathcal{F}^{(\varepsilon)}_{n,m} = \sigma[\vec Y_{\varepsilon,l}, n \leq l \leq m]$, $n \leq m \leq N$.

In this case, the reward functions are defined for the log-price process $\vec Y_{\varepsilon,n}$ by the following relation, for $\vec y \in \mathbb{R}^{p+q}$ and $n = 0, 1, \ldots, N$,
\[
\varphi_{\varepsilon,n}(\vec y) = \sup_{\tau_{\varepsilon,n} \in \mathcal{M}^{(\varepsilon)}_{\max,n,N}} \mathsf{E}_{\vec y, n}\, g\big(\tau_{\varepsilon,n}, e^{\vec Y_{\varepsilon,\tau_{\varepsilon,n}}}\big). \tag{2.123}
\]

Probability measures $P_{\varepsilon,n}(\vec y, A)$, $\vec y \in \mathbb{R}^{p+q}$, $n = 1, 2, \ldots$ are concentrated on finite sets, for every $\varepsilon \in (0, \varepsilon_0]$. This obviously implies that, for every $\varepsilon \in (0, \varepsilon_0]$ and $\vec y \in \mathbb{R}^{p+q}$, $n = 1, 2, \ldots$, the reward functions satisfy
\[
|\varphi_{\varepsilon,n}(\vec y)| < \infty. \tag{2.124}
\]
It is useful to note that $\varphi_{\varepsilon,N}(\vec y) = g(N, e^{\vec y})$, $\vec y \in \mathbb{R}^{p+q}$.

An analogue of Lemma 2.1.2 can be formulated for the above model. However, the corresponding system of linear equations can be simplified taking into account the specific shift structure of the dynamic relation (2.119), which is well seen from the equivalent form (2.120).

By the definition of the sets $\hat I_{\varepsilon,n,\bar l}$, $\bar l \in \mathbb{L}_{\varepsilon,n}$, $n = 0, 1, \ldots$, for every $\vec y \in \mathbb{R}^{p+q}$ and $n = 0, 1, \ldots$, there exists the unique $\bar l_{\varepsilon,n}(\vec y) = (l_{\varepsilon,n,1}(\vec y), \ldots, l_{\varepsilon,n,p+q}(\vec y)) \in \mathbb{L}_{\varepsilon,n}$ such that $\vec y \in \hat I_{\varepsilon,n,\bar l_{\varepsilon,n}(\vec y)}$.


The following lemma is a direct corollary of Lemma 2.1.2.

Lemma 2.3.2. Let the generalized autoregressive conditional heteroskedastic log-price process $\vec Y_{0,n}$ be given by the stochastic difference equation (2.95), while the corresponding approximating space-skeleton log-price process $\vec Y_{\varepsilon,n}$ is defined, for every $\varepsilon \in (0, \varepsilon_0]$, by the dynamic transition relation (2.119). Then the reward functions $\varphi_{\varepsilon,n}(\vec y)$ and $\varphi_{\varepsilon,n+m}(\vec y_{\varepsilon,n+m,\bar l})$, $\bar l \in \mathbb{L}_{\varepsilon,n+m}$, $m = 1, \ldots, N-n$ are, for every $\varepsilon \in (0, \varepsilon_0]$, $\vec y \in \mathbb{R}^{p+q}$ and $n = 0, \ldots, N-1$, the unique solution of the following recurrence finite system of linear equations,
\[
\begin{cases}
\varphi_{\varepsilon,N}(\vec y_{\varepsilon,N,\bar l}) = g(N, e^{\vec y_{\varepsilon,N,\bar l}}), \quad \bar l = (l_1, \ldots, l_{p+q}) \in \mathbb{L}_{\varepsilon,N}, \\[1mm]
\varphi_{\varepsilon,n+m}(\vec y_{\varepsilon,n+m,\bar l}) = \max\Big(g(n+m, e^{\vec y_{\varepsilon,n+m,\bar l}}), \\
\quad \sum_{l'_1, l'_{p+1} = m^-_{\varepsilon,n+m+1}}^{m^+_{\varepsilon,n+m+1}} \varphi_{\varepsilon,n+m+1}\big(\vec y_{\varepsilon,n+m+1,(l'_1, l_1, \ldots, l_{p-1}, l'_{p+1}, l_{p+1}, \ldots, l_{p+q-1})}\big) \\
\qquad \times \mathsf{P}\{y_{\varepsilon,n+m,l_1} + A''_{n+m+1}(y_{\varepsilon,n+m,l_1}, \ldots, y_{\varepsilon,n+m-p+1,l_p}, y_{\varepsilon,n+m,l_{p+1}}, \ldots, y_{\varepsilon,n+m-q+1,l_{p+q}})W_{n+m+1} \in I_{\varepsilon,n+m+1,l'_1}\} \\
\qquad \times I\big(B_{n+m+1}(y_{\varepsilon,n+m,l_1}, \ldots, y_{\varepsilon,n+m-p+1,l_p}, y_{\varepsilon,n+m,l_{p+1}}, \ldots, y_{\varepsilon,n+m-q+1,l_{p+q}}) \in I_{\varepsilon,n+m+1,l'_{p+1}}\big)\Big), \\
\quad \bar l = (l_1, \ldots, l_{p+q}) \in \mathbb{L}_{\varepsilon,n+m}, \ m = N-n-1, \ldots, 1, \\[1mm]
\varphi_{\varepsilon,n}(\vec y) = \max\Big(g(n, e^{\vec y}), \sum_{l'_1, l'_{p+1} = m^-_{\varepsilon,n+1}}^{m^+_{\varepsilon,n+1}} \varphi_{\varepsilon,n+1}\big(\vec y_{\varepsilon,n+1,(l'_1, l_{\varepsilon,n,1}(\vec y), \ldots, l_{\varepsilon,n,p-1}(\vec y), l'_{p+1}, l_{\varepsilon,n,p+1}(\vec y), \ldots, l_{\varepsilon,n,p+q-1}(\vec y))}\big) \\
\qquad \times \mathsf{P}\{y_{\varepsilon,n,l_{\varepsilon,n,1}(\vec y)} + A''_{n+1}(y_{\varepsilon,n,l_{\varepsilon,n,1}(\vec y)}, \ldots, y_{\varepsilon,n-p+1,l_{\varepsilon,n,p}(\vec y)}, y_{\varepsilon,n,l_{\varepsilon,n,p+1}(\vec y)}, \ldots, y_{\varepsilon,n-q+1,l_{\varepsilon,n,p+q}(\vec y)})W_{n+1} \in I_{\varepsilon,n+1,l'_1}\} \\
\qquad \times I\big(B_{n+1}(y_{\varepsilon,n,l_{\varepsilon,n,1}(\vec y)}, \ldots, y_{\varepsilon,n-p+1,l_{\varepsilon,n,p}(\vec y)}, y_{\varepsilon,n,l_{\varepsilon,n,p+1}(\vec y)}, \ldots, y_{\varepsilon,n-q+1,l_{\varepsilon,n,p+q}(\vec y)}) \in I_{\varepsilon,n+1,l'_{p+1}}\big)\Big),
\end{cases} \tag{2.125}
\]
where the functions $B_n(y_1, \ldots, y_{p+q})$ and $A''_n(y_1, \ldots, y_{p+q})$, $n = 1, 2, \ldots$ are given by relation (2.93), while the optimal expected reward $\Phi_\varepsilon = \Phi_\varepsilon(\mathcal{M}^{(\varepsilon)}_{\max,0,N})$ is defined by the following formula, for every $\varepsilon \in (0, \varepsilon_0]$,
\[
\Phi_\varepsilon = \sum_{\bar l \in \mathbb{L}_{\varepsilon,0}} P_{0,0}(\hat I_{\varepsilon,0,\bar l})\, \varphi_{\varepsilon,0}(\vec y_{\varepsilon,0,\bar l}). \tag{2.126}
\]

Proof of Lemma 2.3.2 is analogous to the proof of Lemma 2.1.2. Obviously, |Φε | < ∞, for every ε ∈ (0, ε0 ].
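Lemma 2.3.2 reduces the computation of the rewards to a finite backward recursion: at the terminal moment the reward equals the pay-off, at earlier moments it is the maximum of the pay-off and the transition-weighted average of the next-step rewards, and the optimal expected reward is the average of $\varphi_{\varepsilon,0}$ over the initial distribution, as in relation (2.126). The following schematic sketch implements this recursion for generic inputs; the pay-off, the state lists and the transition probabilities are placeholders to be supplied by the concrete skeleton model.

```python
def backward_rewards(payoff, states_by_time, trans_prob, N):
    """phi[n][state] computed by the recursion of Lemma 2.3.2:
    phi_N = payoff(N, .), phi_n = max(payoff(n, .), sum_j p(n+1, ., j)*phi_{n+1}[j]).

    payoff(n, state)      : pay-off value g(n, e^{y_state});
    states_by_time[n]     : list of skeleton states at moment n;
    trans_prob(n, s, s2)  : one-step probability from state s at n-1 to s2 at n.
    """
    phi = {N: {s: payoff(N, s) for s in states_by_time[N]}}
    for n in range(N - 1, -1, -1):
        phi[n] = {}
        for s in states_by_time[n]:
            cont = sum(trans_prob(n + 1, s, s2) * phi[n + 1][s2]
                       for s2 in states_by_time[n + 1])
            phi[n][s] = max(payoff(n, s), cont)
    return phi

def optimal_expected_reward(phi0, init_prob):
    """Relation (2.126): average of phi_{eps,0} over the initial distribution."""
    return sum(init_prob[s] * v for s, v in phi0.items())
```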


2.3.3 Convergence of option reward functions for generalized autoregressive conditional heteroskedastic log-price processes

In this case, the space-skeleton approximations are constructed in a way analogous to that used in Sections 1.3 and 2.1 for autoregressive moving average log-price processes with Gaussian noise terms. In particular, we use the same structural skeleton condition $\mathbf{N}_3$ and impose the same conditions $\mathbf{B}_3[\bar\gamma]$ and $\mathbf{I}_3$ on the pay-off functions.

We also impose the following condition of weak continuity on the vector functions $\vec\mu_n(\vec y)$, $\vec y \in \mathbb{R}^{p+q}$, $n = 1, \ldots, N$ and matrix functions $\Sigma_n(\vec y)$, $\vec y \in \mathbb{R}^{p+q}$, $n = 1, \ldots, N$ given by relation (2.96), where the functions $A'_n(\vec y)$ and $A''_n(\vec y)$ are given by relation (2.93):

$\mathbf{O}_9$: (a) There exist sets $\mathbb{Y}''_n \in \mathcal{B}_{p+q}$, $n = 0, \ldots, N$ such that the function $A''_n(\vec y)$ is continuous for $\vec y = (y_1, \ldots, y_{p+q}) \in \mathbb{Y}''_{n-1}$, for $n = 1, \ldots, N$; (b) $\mathsf{P}\{\vec y_0 + \vec\mu_n(\vec y_0) + \Sigma_n(\vec y_0)\vec W'_n \in \bar{\mathbb{Y}}'_n \cup \bar{\mathbb{Y}}''_n\} = 0$, for $\vec y_0 \in \mathbb{Y}'_{n-1} \cap \mathbb{Y}''_{n-1}$, $n = 1, \ldots, N$, where $\mathbb{Y}'_n$, $n = 0, \ldots, N$ are the sets penetrating condition $\mathbf{I}_3$.

Remark 2.3.3. In this case, the functions $A'_n(\vec y)$ are continuous. This is why these functions are not involved in the formulation of condition $\mathbf{O}_9$ (a). If we also assume that $g_{\frac12}(y)$ is a continuous function, then the functions $A''_n(\vec y)$ are also continuous. In this case, the sets $\mathbb{Y}''_n = \mathbb{R}^{p+q}$, $n = 0, \ldots, N$.

A remark analogous to Remark 2.2.3 can be made about condition $\mathbf{O}_9$.

Remark 2.3.4. If $A''_n(\vec y_0) = 0$, then $\Sigma_n(\vec y_0)\vec W'_n \equiv 0$ and $\mathsf{P}\{\vec y_0 + \vec\mu_n(\vec y_0) \in \bar{\mathbb{Y}}'_n \cup \bar{\mathbb{Y}}''_n\} = 0$ if and only if $\vec y_0 + \vec\mu_n(\vec y_0) \in \mathbb{Y}'_n \cap \mathbb{Y}''_n$. If $A''_n(\vec y_0) > 0$, then $\Sigma_n(\vec y_0)\vec W'_n = (A''_n(\vec y_0)W'_{n,1}, 0, \ldots, 0)$ is a Gaussian random vector (which should be considered as a $(p+q)$-dimensional column vector). The set of $(p+q)$-dimensional vectors $\mathbb{R}_{1,p+q,n} = \{\vec y = (y_1, 0, \ldots, 0): y_1 \in \mathbb{R}_1\}$ is a one-dimensional Euclidean hyper-subspace of $\mathbb{R}^{p+q}$. The probability density function of the random vector $\Sigma_n(\vec y_0)\vec W'_n$, with respect to the Lebesgue measure $L_{1,p+q,n}(A)$ in the hyper-subspace $\mathbb{R}_{1,p+q,n}$, is concentrated and strictly positive at points $\vec y \in \mathbb{R}_{1,p+q,n}$. This implies that $\mathsf{P}\{\vec y_0 + \vec\mu_n(\vec y_0) + \Sigma_n(\vec y_0)\vec W'_n \in \bar{\mathbb{Y}}'_n \cup \bar{\mathbb{Y}}''_n\} = 0$ if and only if $L_{1,p+q,n}(\bar{\mathbb{Y}}'_{1,p+q,n,y_0}) = 0$ and $L_{1,p+q,n}(\bar{\mathbb{Y}}''_{1,p+q,n,y_0}) = 0$, where $\bar{\mathbb{Y}}'_{1,p+q,n,y_0} = (\bar{\mathbb{Y}}'_n - \vec y_0 - \vec\mu_n(\vec y_0)) \cap \mathbb{R}_{1,p+q,n}$ is the cut of the set $\bar{\mathbb{Y}}'_n - \vec y_0 - \vec\mu_n(\vec y_0)$ by the hyper-subspace $\mathbb{R}_{1,p+q,n}$ and $\bar{\mathbb{Y}}''_{1,p+q,n,y_0} = (\bar{\mathbb{Y}}''_n - \vec y_0 - \vec\mu_n(\vec y_0)) \cap \mathbb{R}_{1,p+q,n}$ is the cut of the set $\bar{\mathbb{Y}}''_n - \vec y_0 - \vec\mu_n(\vec y_0)$ by the hyper-subspace $\mathbb{R}_{1,p+q,n}$.

Remark 2.3.5. If $g_{\frac12}(y) > 0$ for $y > 0$, then $A''_n(\vec y) > 0$, $\vec y \in \mathbb{R}^{p+q}$, if $d_{n,0} > 0$.

The following theorem is a direct corollary of Theorem 2.1.3 for the above model of generalized autoregressive conditional heteroskedastic log-price processes with Gaussian noise terms.


Theorem 2.3.3. Let the generalized autoregressive conditional heteroskedastic log-price process $Y_{0,n}$ and its equivalent vector version $\vec Y_{0,n}$ be given, respectively, by the stochastic difference equations (2.88) and (2.95), with parameter $\kappa = \frac12$, while the corresponding approximating space-skeleton log-price process $\vec Y_{\varepsilon,n}$ is defined, for every $\varepsilon \in (0, \varepsilon_0]$, by the dynamic transition relation (2.119). Let also condition $\mathbf{B}_3[\bar\gamma]$ hold for some vector parameter $\bar\gamma = (\gamma_1, \ldots, \gamma_{p+q})$ with non-negative components, and let conditions $\mathbf{N}_3$, $\mathbf{I}_3$, and $\mathbf{O}_9$ hold. Then, for every $n = 0, 1, \ldots, N$, the following relation takes place for any $\vec y_\varepsilon \to \vec y_0 \in \mathbb{Y}'_n \cap \mathbb{Y}''_n$,
\[
\varphi_{\varepsilon,n}(\vec y_\varepsilon) \to \varphi_{0,n}(\vec y_0) \text{ as } \varepsilon \to 0. \tag{2.127}
\]

Proof. Conditions $\mathbf{N}_3$, $\mathbf{B}_3[\bar\gamma]$, $\mathbf{I}_3$, and $\mathbf{O}_9$ are variants of conditions, respectively, $\mathbf{N}_7$, $\mathbf{B}_1[\bar\gamma]$, $\mathbf{I}_1$, and $\mathbf{O}_7$, for the case where $k = p+q$, the index component is absent, and the vector functions $\vec\mu_n(\vec y)$ and matrix functions $\Sigma_n(\vec y)$ have the specific form given by relation (2.96). Thus, all conditions of Theorem 2.1.3 hold. By applying this theorem we get the asymptotic relation (2.127). $\Box$

2.3.4 Convergence of optimal expected rewards for space-skeleton approximations of generalized autoregressive conditional heteroskedastic log-price processes

Let us now give conditions for convergence of the optimal expected rewards $\Phi_\varepsilon$ in the above space-skeleton approximation model. We shall apply Theorem 2.1.4.

In this case, condition $\mathbf{D}_9[\bar\beta]$, with the function $\vec A(\bar\beta) = (A_1(\bar\beta), \ldots, A_{p+q}(\bar\beta))$ given by relations (2.103) and (2.104), should be assumed to hold for some vector parameter $\bar\beta = (\beta_1, \ldots, \beta_{p+q})$ with non-negative components, and the corresponding vectors $\bar\beta_i = (\beta_{i,1}, \ldots, \beta_{i,p+q})$ with components $\beta_{i,j} = \beta_jI(i = j)$, $i, j = 1, \ldots, p+q$.

Condition $\mathbf{K}_7$ should be replaced by the following condition imposed on the initial distribution $P_{0,0}(A) = \mathsf{P}\{\vec Y_{0,0} \in A\}$:

$\mathbf{K}_9$: $P_{0,0}(\mathbb{Y}'_0 \cap \mathbb{Y}''_0) = 1$, where $\mathbb{Y}'_0$ and $\mathbb{Y}''_0$ are the sets introduced, respectively, in conditions $\mathbf{I}_3$ and $\mathbf{O}_9$.

The following theorem is a corollary of Theorem 2.1.4.

Theorem 2.3.4. Let the generalized autoregressive conditional heteroskedastic log-price process $Y_{0,n}$ and its equivalent vector version $\vec Y_{0,n}$ be given, respectively, by the stochastic difference equations (2.88) and (2.95), with parameter $\kappa = \frac12$, while the corresponding approximating space-skeleton log-price process $\vec Y_{\varepsilon,n}$ is defined, for every $\varepsilon \in (0, \varepsilon_0]$, by the dynamic transition relation (2.119). Let also conditions $\mathbf{B}_3[\bar\gamma]$ and $\mathbf{D}_9[\bar\beta]$ hold for some vector parameters $\bar\gamma = (\gamma_1, \ldots, \gamma_{p+q})$ and $\bar\beta =$


$(\beta_1, \ldots, \beta_{p+q})$ such that, for every $i = 1, \ldots, p+q$, either $\beta_i > \gamma_i > 0$ or $\beta_i = \gamma_i = 0$, and let also conditions $\mathbf{N}_3$, $\mathbf{I}_3$, $\mathbf{O}_9$, and $\mathbf{K}_9$ hold. Then, the following relation takes place,
\[
\Phi_\varepsilon \to \Phi_0 \text{ as } \varepsilon \to 0. \tag{2.128}
\]
Proof. Theorem 2.3.4 is a corollary of Theorem 2.1.4. Conditions $\mathbf{D}_9[\bar\beta]$ and $\mathbf{K}_9$ are just re-formulations, respectively, of conditions $\mathbf{D}_7[\bar\beta]$ and $\mathbf{K}_7$ used in Theorem 2.1.4. The other conditions of this theorem also hold, as was pointed out in the proof of Theorem 2.3.3. By applying Theorem 2.1.4 we get the convergence relation (2.128). $\Box$

2.4 Modulated nonlinear autoregressive stochastic volatility LPP

In this section, we present results concerning space-skeleton approximations for rewards of American-type options for modulated nonlinear autoregressive stochastic volatility log-price processes with Gaussian noise terms.

2.4.1 Upper bounds for rewards of modulated nonlinear autoregressive stochastic volatility log-price processes

Let us consider a modulated nonlinear autoregressive stochastic volatility log-price process $Z_n = (Y_n, X_n)$ with Gaussian noise terms, which is defined by the following modulated stochastic difference equation,
\[
\begin{cases}
Y_n - Y_{n-1} = A'_n(Y_{n-1}, \ldots, Y_{n-k}, X_{n-1}, \ldots, X_{n-r}) + A''_n(Y_{n-1}, \ldots, Y_{n-k}, X_{n-1}, \ldots, X_{n-r})W_n, \\
X_n = C_n(X_{n-1}, \ldots, X_{n-r}, U_n), \\
n = 1, 2, \ldots,
\end{cases} \tag{2.129}
\]
where: (a) $\vec Y_0 = (Y_0, \ldots, Y_{-k+1})$ is a $k$-dimensional random vector with real-valued components; (b) $\vec X_0 = (X_0, \ldots, X_{-r+1})$ is an $r$-dimensional random vector taking values in the space $\mathbb{X}^{(r)}$ ($\mathbb{X}^{(r)} = \mathbb{X} \times \cdots \times \mathbb{X}$ is the $r$-fold product of a Polish space $\mathbb{X}$ with a metric $d_{\mathbb{X}}(x', x'')$ and Borel $\sigma$-algebra of measurable subsets $\mathcal{B}_{\mathbb{X}}$); (c) $(W_1, U_1), (W_2, U_2), \ldots$ is a sequence of i.i.d. random vectors taking values in the space $\mathbb{R}_1 \times \mathbb{U}$; moreover, $W_1, W_2, \ldots$ is a sequence of real-valued i.i.d. standard normal variables with mean value $0$ and variance $1$, while the random variables $U_1, U_2, \ldots$ have a regular conditional distribution $G_w(A) = \mathsf{P}\{U_n \in A / W_n = w\}$, $n = 1, 2, \ldots$; (d) the random vector $\vec Z_0 = (\vec Y_0, \vec X_0)$ and the random sequence


$(W_1, U_1), (W_2, U_2), \ldots$ are independent; (e) $k$ and $r$ are positive integer numbers; (f) $A'_n(\vec y, \vec x) = A'_n(y_1, \ldots, y_k, x_1, \ldots, x_r)$, $A''_n(\vec y, \vec x) = A''_n(y_1, \ldots, y_k, x_1, \ldots, x_r)$, $\vec y = (y_1, \ldots, y_k) \in \mathbb{R}^k$, $\vec x = (x_1, \ldots, x_r) \in \mathbb{X}^{(r)}$, $n = 1, 2, \ldots$ are measurable functions acting from the space $\mathbb{R}^k \times \mathbb{X}^{(r)}$ to $\mathbb{R}_1$; (g) $C_n(\vec x, u) = C_n(x_1, \ldots, x_r, u)$, $\vec x = (x_1, \ldots, x_r) \in \mathbb{X}^{(r)}$, $u \in \mathbb{U}$, $n = 1, 2, \ldots$ are measurable functions acting from the space $\mathbb{X}^{(r)} \times \mathbb{U}$ to $\mathbb{X}$.

Let us introduce the $k$-dimensional vector process,
\[
\vec Y_{0,n} = (Y_{0,n,1}, \ldots, Y_{0,n,k}) = (Y_n, \ldots, Y_{n-k+1}), \quad n = 1, 2, \ldots, \tag{2.130}
\]

and the $r$-dimensional vector process,
\[
\vec X_{0,n} = (X_{0,n,1}, \ldots, X_{0,n,r}) = (X_n, \ldots, X_{n-r+1}), \quad n = 1, 2, \ldots. \tag{2.131}
\]

We can always assume that the sequence of random variables $W_n$, $n = 1, 2, \ldots$ is the sequence of the first components of a sequence of $k$-dimensional i.i.d. standard Gaussian random vectors $\vec W'_n = (W'_{n,1}, \ldots, W'_{n,k})$, $n = 1, 2, \ldots$, with $\mathsf{E}W'_{1,i} = 0$, $\mathsf{E}W'_{1,i}W'_{1,j} = I(i = j)$, $i, j = 1, \ldots, k$, i.e.,
\[
W_n = W'_{n,1}, \quad n = 1, 2, \ldots. \tag{2.132}
\]

Let us also consider again $k$-dimensional i.i.d. Gaussian vectors $\vec W_n = (W_{n,1}, \ldots, W_{n,k}) = (W_n, 0, \ldots, 0)$, $n = 1, 2, \ldots$.

Vectors $\vec W_n$ can be obtained as a linear transformation of the vectors $\vec W'_n$, namely, $\vec W_n = \Sigma\vec W'_n$, $n = 1, 2, \ldots$, where the $k \times k$ matrix $\Sigma = \|\sigma_{i,j}\|$ has elements $\sigma_{i,j} = I(i = 1)I(j = 1)$, $i, j = 1, \ldots, k$.

The modulated stochastic difference equation (2.129) can be re-written in the following equivalent vector form,
\[
\begin{cases}
\vec Y_{0,n} - \vec Y_{0,n-1} = \vec\mu_n(\vec Y_{0,n-1}, \vec X_{0,n-1}) + \Sigma_n(\vec Y_{0,n-1}, \vec X_{0,n-1})\vec W'_n, \\
\vec X_{0,n} = \vec C_n(\vec X_{0,n-1}, U_n), \\
n = 1, 2, \ldots,
\end{cases} \tag{2.133}
\]
where the $k$-dimensional vector functions $\vec\mu_n(\vec y, \vec x) = (\mu_{n,1}(\vec y, \vec x), \ldots, \mu_{n,k}(\vec y, \vec x))$, $\vec y = (y_1, \ldots, y_k) \in \mathbb{R}^k$, $\vec x = (x_1, \ldots, x_r) \in \mathbb{X}^{(r)}$, $n = 1, 2, \ldots$ and $k \times k$ matrix functions $\Sigma_n(\vec y, \vec x) = \|\sigma_{n,i,j}(\vec y, \vec x)\|$, $\vec y = (y_1, \ldots, y_k) \in \mathbb{R}^k$, $\vec x = (x_1, \ldots, x_r) \in \mathbb{X}^{(r)}$, $n = 1, 2, \ldots$ are defined by the following relations,

\[
\vec\mu_n(\vec y, \vec x) =
\begin{pmatrix}
A'_n(\vec y, \vec x) \\ y_1 - y_2 \\ \vdots \\ y_{k-1} - y_k
\end{pmatrix},
\qquad
\Sigma_n(\vec y, \vec x) =
\begin{pmatrix}
A''_n(\vec y, \vec x) & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & 0
\end{pmatrix}, \tag{2.134}
\]


while the $r$-dimensional vector functions $\vec C_n(\vec x, u) = (C_{n,1}(\vec x, u), \ldots, C_{n,r}(\vec x, u))$, $\vec x = (x_1, \ldots, x_r) \in \mathbb{X}^{(r)}$, $u \in \mathbb{U}$, $n = 1, 2, \ldots$ are defined by the following relations,
\[
\vec C_n(\vec x, u) =
\begin{pmatrix}
C_n(\vec x, u) \\ x_1 \\ \vdots \\ x_{r-1}
\end{pmatrix}. \tag{2.135}
\]
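To make the modulated dynamics (2.129) and (2.133) concrete, the following sketch simulates one path with a two-state modulating index. The specific functions $A'_n$, $A''_n$, $C_n$, the independence of $U_n$ and $W_n$, and all numbers are illustrative assumptions for this example only, not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def A_prime(y_hist, x_hist):      # drift function A'_n (illustrative)
    return 0.0

def A_second(y_hist, x_hist):     # volatility function A''_n (illustrative)
    return 0.2 if x_hist[0] == 0 else 0.4

def C(x_hist, u):                 # index update C_n: a 2-state switch (illustrative)
    stay = 0.9
    return x_hist[0] if u < stay else 1 - x_hist[0]

def simulate(N, k=2, r=1, y0=None, x0=None):
    y = list(y0 or [0.0] * k)     # (Y_{n-1}, ..., Y_{n-k})
    x = list(x0 or [0] * r)       # (X_{n-1}, ..., X_{n-r})
    path = [(y[0], x[0])]
    for _ in range(N):
        w, u = rng.standard_normal(), rng.uniform()
        y_new = y[0] + A_prime(y, x) + A_second(y, x) * w   # relation (2.129)
        x_new = C(x, u)
        y = [y_new] + y[:-1]
        x = [x_new] + x[:-1]
        path.append((y_new, x_new))
    return path

print(simulate(5))
```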

~ 0,n = Relation (2.133) shows that the modulated vector log-price process Z ~ ~ (Y0,n , X0,n ) is a particular case of the modulated multivariatel Markov Gaussian log-price processes considered in Sections 4.5∗ and 8.4∗ . Note that, in this case, the role of the index component is played by the process ~ X0,n , which has the phase space X(r) . Let us assume that the following condition holds: G5 : max0≤n≤N −1 sup(~y,~x)∈Rk ×X(r)

|A0n+1 (~ y ,~ x)|+(A00 y ,~ x))2 n+1 (~ 1+

Pk l=1

K34,l |yl |

< K33 , for some 0
0

k X γi AN −n,j (β¯i )|yj |) }. M29,i exp{( βi

(2.142)

j=1

Remark 2.4.1. The explicit formulas for the constants M28 and M29,i (βi ), i = 1, . . . , p take, according formulas given in Remark 4.5.5∗ , the following form, M28 = L25 , M29,i (βi ) = L13 L14,i I(γi = 0) + L13 L14,i (1 Pk γ K (A (β¯ )+ 1 k2 A2N −1,l (β¯i )) N βi + 2k e 35 l=1 N −1,l i 2 ) i I(γi > 0).

(2.143)

where vectors β¯i = (βi,1 , . . . , βi,k ) = (β1 I(i = 1), . . . , βk I(i = k)), i = 1, . . . , k. ¯ ∗ should be replaced in this case by the following condition, Condition D3 [β] assumed to hold for vector parameter β¯ = (β1 , . . . , βk ) with non-negative components: P ¯ E exp{ k AN,j (β¯i )|Y0,0,j |} < K37,i , i = 1, . . . , k, for some 1 < K37,i D10 [β]: j=1

< ∞, i = 1, . . . , k. In this case the optimal expected reward is defined by the formula, Φ0 =

sup

~ ~ 0,τ0 ) = Eφ0 (Y ~0,0 , X ~ 0,0 ). Eg(τ0 , eY0,τ0 , X

(2.144)

(0) τ0 ∈Mmax,0,N

Theorem 4.5.4∗ takes in this case the following form. Theorem 2.4.2. Let the modulated autoregressive stochastic volatility log~ 0,n = (Y ~0,n , X ~ 0,n ) price process Zn = (Yn , Xn ) and its equivalent vector version Z are given, respectively, by the modulated stochastic difference equations (2.129) and ¯ hold and (2.133). Let condition G5 holds and, also, conditions B7 [¯ γ ] and D10 [β] 0 ≤ γi ≤ βi < ∞, i = 1, . . . , k. Then, there exists a constant 0 ≤ M30 < ∞ such that the following inequality takes place, |Φ0 | ≤ M30 .

(2.145)

118

2

Autoregressive stochastic volatility LPP

Remark 2.4.2. The explicit formula for the constant M90 takes, according formulas given in Remark 4.5.6∗ , the following form, X X M30 = L13 + L13 L14,i + L13 L14,i (1 i:γi =0 k K35

+2 e

Pk l=1

i:γi >0 γ

(AN −1,l (β¯i )+ 21 k2 A2N −1,l (β¯i )) N βi

)

i

γi β

i . K37,i

(2.146)

2.4.2 Space-skeleton approximations for option rewards of modulated nonlinear autoregressive stochastic volatility log-price processes ~ 0,n has Gaussian transition probabilities P0,n (~ The Markov process Z y , A) = ~ ε,n ∈ A/Z ~ ε,n−1 = ~z} defined for ~z = (~ P{Z y , ~x) ∈ Z = Rk × X(r) , A ∈ BZ , n = 1, 2, . . . by the following relation,

 ~ n0 , C ~ n (~x, Un ) ∈ A}. P0,n (~z, A) = P{ ~ y+µ ~ n (~ y ) + Σn (~ y )W

(2.147)

It is useful to re-write the vector transition dynamic relation (2.133) in the form of the system of dynamic transition relations for the components of the log~ 0,n . In this case, relation (2.133) takes the following form (where the price process Z corresponding cancelation of the identical term Y0,n−1,i is made in the transition dynamic relation for the i-th component as well as the fact that the noise terms Wn,i = 0 for i = 2, . . . , k),  Y0,n,1 = Y0,n−1,1 + A0n (Y0,n−1,1 , . . . , Y0,n−1,k , X0,n−1,1 , . . . , X0,n−1,r )       + A00 n (Y0,n−1,1 , . . . , Y0,n−1,k , X0,n−1,1 , . . . , X0,n−1,r )Wn ,      Y0,n,2 = Y0,n−1,1 ,     . . . ...    Y0,n,k = Y0,n−1,k−1 , (2.148)

                 

X0,n,1

= Cn (X0,n−1,1 , . . . , X0,n−1,r , Un ),

X0,n,2 ... X0,n,r

= X0,n−1,1 , ... = X0,n−1,r−1 ,

n

= 1, 2, . . . .

Let us construct the corresponding skeleton approximating log-price processes ~ Zε,n , for ε ∈ (0, ε0 ], according the algorithm described in Subsection 1.4.2. + Let m− ε,n ≤ mε,n , n = −k + 1, −k + 2, . . . be integer numbers, δε,n > 0 and λε,n ∈ R1 , n = −k + 1, −k + 2, . . .. Let us also define index sets Lε,n , n = 0, 1, . . .. + Lε,n = {¯ l = (l1 , . . . , lk ), li = m− ε,n−i+1 , . . . , mε,n−i+1 , i = 1, . . . , k}.

(2.149)

2.4

Modulated nonlinear autoregressive stochastic volatility LPP

119

+ First, the skeleton intervals Iε,n,l should be constructed for l = m− ε,n , . . . , mε,n , n = −k + 1, . . .,  − 1 if l = m−  ε,n ,  (−∞, δε,n (mε,n + 2 )] + λε,n − 1 1 (2.150) Iε,n,l = (δε,n (l − 2 ), δε,n (l + 2 )] + λε,n if mε,n < l < m+ ε,n ,   + + 1 (δε,n (mε,n − 2 ), ∞) + λε,n if l = mε,n ,

and skeleton cubes Iε,n,¯l should be defined for ¯ l = (l1 , . . . , lk ) ∈ Lε,n , n = 0, 1, . . ., Iε,n,¯l = Iε,n,l1 × · · · × Iε,n−k+1,lk .

(2.151)

Then, skeleton points yε,n,l = lδε,n + λε,n should be defined for l = m− ε,n , . . ., n = −k + 1, . . ., and vector skeleton points ~ yε,n,¯l should be defined for ¯ l = (l1 , . . . , lk ) ∈ Lε,n , n = 0, 1, . . ., m+ ε,n ,

~ yε,n,¯l = (yε,n,l1 , . . . , yε,n−k+1,lk ).

(2.152)

0+ Let m0− ε,n ≤ mε,n , n = −r + 1, −r + 2, . . . be integer numbers. Let us also define index sets L0ε,n , n = 0, 1, . . ., 0+ L0ε,n = {¯ l0 = (l10 , . . . , lr0 ), li0 = m0− ε,n−i+1 , . . . , mε,n−i+1 , i = 1, . . . , r}.

(2.153)

0+ Second, sets Jε,n,l0 ∈ BX , l0 = m0− ε,n , . . . , mε,n , n = −r + 1, . . ., such that (a) 00 000 0+ Jε,n,l0 , n = Jε,n,l00 ∩ Jε,n,l000 = ∅, l 6= l , n = −r + 1, . . .; (b) X = ∪m0− 0 ε,n ≤l ≤mε,n −r + 1, . . ., should be constructed. Remind that one of our model assumption is the X is a Polish space. Sets Kε,n , n = −r + 1, . . ., should be chosen and then non-empty sets Kε,n,l ∈ BX , l = 0+ 00 000 m0− ε,n,0 , . . . , mε,n,0 , n = −r + 1, . . . such that (c) Kε,n,l00 ∩ Kε,n,l000 = ∅, l 6= l , n = 0+ Kε,n,l0 = Kε,n , n = −r + 1, . . .. 0, 1, . . .; (d) ∪m0− 0 ε,n ≤l ≤mε,n

The standard case is, where X = {x0l , l0 = m− , . . . , m+ } is a finite set and metrics dX (xl00 , xl000 ) = I(xl00 6= xl000 ). In this case, the simplest choice m0± ε,n = m± , n = −r + 1, . . . and to take sets Jε,n,l0 = {xl0 }, l0 = m− , . . . , m+ , n = −r + 1, . . .. Sets Jε,n,l can be defined in the following way, for n = −r + 1, . . ., ( 0+ Kε,n,l0 if m0− ε,n,0 ≤ l < mε,n,0 , (2.154) Jε,n,l0 = Kε,n,m0+ ∪ Kε,n if l0 = m0+ ε,n,0 , ε,n,0

and skeleton cubes Jε,n,¯l0 should be defined for ¯ l0 = (l10 , . . . , lr0 ) ∈ L0ε,n , n = 0, 1, . . ., Jε,n,¯l0 = Jε,n,l10 × · · · × Jε,n−r+1,lr0 .

(2.155)

0+ Then skeleton points xε,n,l0 ∈ Jε,n,l0 should be chosen for l0 = m0− ε,n,0 , . . . , mε,n,0 , ¯ n = −r + 1, . . ., and vector skeleton points ~xε,n,¯l0 should be defined for l0 = (l10 , . . . , lr0 ) ∈ L0ε,n , n = 0, 1, . . .,

~xε,n,¯l0 = (xε,n,l10 , . . . , xε,n−r+1,lr0 ).

(2.156)

120

2

Autoregressive stochastic volatility LPP

Third, skeleton sets Aε,n,¯l,¯l0 and skeleton points, ~zε,n,¯l,¯l0 ∈ Aε,n,¯l,¯l0 can be defined, for ¯ l ∈ Lε,n , ¯ l0 ∈ L0ε,n , n = 0, 1, . . ., in the following way, Aε,n,¯l,¯l0 = Iε,n,¯l × Jε,n,¯l0 ,

(2.157)

~zε,n,¯l,¯l0 = (~ yε,n,¯l , ~xε,n,¯l0 ).

(2.158)

and Fourth, skeleton functions, hε,n (y), y ∈ R1 , n = −k + 1, . . . should be defined by the following formulas, hε,n (y) =



yε,n,l

+ if y ∈ Iε,n,l , m− ε,n < l < mε,n ,

(2.159)

and skeleton functions h0ε,n (x), x ∈ X, n = −r + 1, . . ., should be defined by the following formula, h0ε,n (x) =



xε,n,l0

0+ 0 if x ∈ Jε,n,l0 , m0− ε,n,0 ≤ l ≤ mε,n,0 .

(2.160)

ˆ ε,n (~ Finally, vector skeleton functions h y ), ~ y = (y1 , . . ., yk ) ∈ Rk , n = 0, 1, . . . should be defined by the following formula, ˆ ε,n (~ h y ) = (hε,n (y1 ), . . . , hε,n−k+1 (yk )),

(2.161)

ˆ 0ε,n (~x), ~x = (x1 , . . . , xr ) ∈ X(r) , n = 0, 1, . . . should and vector skeleton functions h be defined by the following formula, ˆ 0ε,n (~x) = (h0ε,n (x1 ), . . . , h0ε,n−r+1 (xr )). h

(2.162)

~ ε,n = The corresponding space skeleton approximating log-price processes Z ~ (Yε,n , Xε,n ) are defined by the following dynamic relations,   ˆ ε,n h ˆ ε,n−1 (Y ~ε,n = h ~ε,n−1 )  Y       ˆ ε,n−1 (Y ˆ 0ε,n−1,0 (X ~ ε,n−1 )) ~ε,n−1 ), h  +µ ~ n (h        ˆ ε,n−1 (Y ˆ 0ε,n−1 (X ~ε,n−1 ), h ~ ε,n−1 )) W ~ n0 ,  + Σn (h 

               

~ ε,n X

 ˆ 0ε,n,0 C ˆ 0ε,n−1 (X ¯0,n (h ~ ε,n−1 ), U0,n ) , = h

n ~ Yε,0

= 1, 2, . . . , ˆ ε,n (Y ~0,0 ), =h

~ ε,0 X

ˆ 0ε,0 (X ~ 0,0 ), =h

(2.163)

or by the following equivalent transition dynamic relation given in the form of system of transition relations the system for the components of the log-price process

2.4

Modulated nonlinear autoregressive stochastic volatility LPP

~ ε,n = (Y ~ε,n , X ~ ε,n ), n = 0, 1, . . ., Z    Y = h ε,n,1 ε,n hε,n−1 (Yε,n−1,1 )      + A0n (hε,n−1 (Yε,n−1,1 ), . . . , hε,n−k (Yε,n−1,k ),      h0ε,n−1 (Xε,n−1,1 ), . . . , h0ε,n−r (Xε,n−1,r ))      + A00n (hε,n−1 (Yε,n−1,1 ), . . . , hε,n−k (Yε,n−1,k )       h0ε,n−1 (Xε,n−1,1 ), . . . , h0ε,n−r (Xε,n−1,r ))Wn ,       Yε,n,2 = hε,n−1 (Yε,n−1,1 ),     ... ...      Yε,n,k = hε,n−k+1 (Yε,n−1,k−1 ),     Xn,1 = Cn (Xn−1,1 , . . . , Xn−1,r , Un ),

                                       

Xn,2 ... Xn,r

= Xn−1,1 , ... = Xn−1,r−1 ,

n

121

(2.164)

= 1, 2, . . . ,

Yε,0,1 ... Yε,0,−k+1 Xε,0,1 ... Xε,0,−r+1

= hε,0 (Y0,0 ), ... = hε,−k+1 (Y0,−k+1 ), = h0ε,0 (X0,0 ), ... = h0ε,−r+1 (X0,−r+1 ).

~ ε,n is, for every ε ∈ (0, ε0 ], a Markov chain, with the The log-price process Z (r) phase space Z = Rk × X and one-step transition probabilities Pε,n (~z, A) = ~ ε,n ∈ A/Z ~ ε,n−1 = ~z}, ~z ∈ Z, A ∈ BZ , n = 1, 2, . . .. P{Z ~ ε,n is, for every ε ∈ (0, ε0 ] is a skeleton Moreover, the log-price process Z atomic Markov chain. Its transition probabilities are determined by transition ~ 0,n via the following formula, probabilities of the Markov chain Z X ˆ ε,n−1 (~ Pε,n (~z, A) = P0,n ((h y ), ~ zε,n,l, yε,n,l¯,~ xε,n,l¯0 )∈A ¯ l¯0 =(~

=

ˆ 0ε,n−1 (~x)), A ¯ ¯0 ) h ε,n,l,l X ˆ ε,n−1 (~ P{(h y) ~ zε,n,l, yε,n,l¯,~ xε,n,l¯0 )∈A ¯ l¯0 =(~

¯ ε,n−1 (~ ˆ 0ε,n−1 (~x)) +µ ~ n (h y ), h ˆ ε,n−1 (~ ˆ 0ε,n−1 (~x)), + Σn (h y ), h ˆ 0ε,n−1 (X ~ n0 , C ¯0,n (h ~ ε,n−1 ), U0,n )) ∈ A ¯ ¯0 }. W ε,n,l,l

(2.165)

It is useful to note that skeleton functions hε,n (·) project values Yε,n to the set + of skeleton points Fε,n = {lδε,n + λε,n , l = m− ε,n , . . . , mε,n } in the way described above in relation (2.159).

122

2

Autoregressive stochastic volatility LPP

This relation also implies that hε,n (Yε,n ) = Yε,n if Yε,n ∈ Fε,n since hε,n (y) = y for y ∈ Fε,n . Analogously, skeleton functions h0ε,n (·) project values Xε,n to the set of skele0+ ton points F0ε,n = {xε,n,l0 , l0 = m0− ε,n , . . . , mε,n } in the way described above in relation (2.160). This relation also implies that h0ε,n (Xε,n ) = Xε,n if Xε,n ∈ F0ε,n since h0ε,n (x) = ~ε,n , X ~ ε,n ) ∈ x for x ∈ F0ε,n . Moreover, relations (2.163) – (2.164) imply that P{(Y 0 Fε,n × Fε,n , , n = 0, 1, . . .} = 1. ˆ ε,n−1 (Y ~ε,n−1 ) by That is why, seems, one replace in relation random vectors h 0 ˆ ~ ~ ε,n−1 in ~ random vectors Yε,n−1 and vectors hε,n−1 (Xε,n−1 ) by random vectors X relation (2.164). This is, however, not so. Transition probabilities Pε,n (~z, A) should be defined for all points ~z = (~ y , ~x), ~ y = (y1 , . . . , yk ) ∈ Rk , ~x = (x1 , . . . , xr ) ∈ X(r) including points ~ y , which do not belong to sets Fε,n × · · · × Fε,n−k+1 , and points ~x, which do not belong to sets F0ε,n × · · · × F0ε,n−r+1 . That is why the transition dynamic relation (2.164) and formula (2.165) are given in the above form. It is also useful to note that transition dynamic relations (2.163) and (2.164) reduce, respectively, to transition dynamic relations (2.133) and (2.148), if to reˆ ε,n (~ ˆ 0ε,n (~x) and hε,n (~ place skeleton functions h y ), h y ), h0ε,n (x) in the above two reˆ 0,n (~ ˆ 00,n (~x) ≡ lations, respectively, by limiting skeleton functions h y ) ≡ (y, . . . , y), h (x, . . . , x) and h0,n (y) ≡ 0, h00,n (x) ≡ x. ~ ε,0 ∈ A}, A ∈ BZ is concerned, As far as the initial distribution Pε,0 (A) = P{Z it takes, for every ε ∈ (0, ε0 ], the following form, X Pε,0 (A) = P0,0 (Aε,0,¯l,¯l0 ) ~ zε,0,l, yε,0,l¯,~ xε,0,l¯0 )∈A ¯ l¯0 =(~

X

=

ˆ ε,0 (Y ˆ 0ε,0 (X ~0,0 ), h ~ 0,0 )) ∈ A ¯ ¯0 }. P{(h ε,n,l,l

(2.166)

~ zε,0,l, yε,0,l¯,~ xε,0,l¯0 )∈A ¯ l¯0 =(~

We assume that the pay-off function g(n, e~y , ~x), ~ y = (y1 , . . . , yk ) ∈ Rk , ~x = (r) (x1 , . . ., xr ) ∈ X do not depend on ε. (ε) Let us also recall the class Mmax,n,N of all Markov moments τε,n for the log(ε) ~ ε,n such that (a) n ≤ τε,n ≤ N , (b) event {τε,n = m} ∈ Fn,m price process Z = ~ ε,l , n ≤ l ≤ m], n ≤ m ≤ N . σ[Z ~ ε,n by In this case, the reward functions are defined for the log-price process Z the following relation, for ~z = (~ y , ~x) ∈ Z and n = 0, 1, . . . , N , φε,n (~ y , ~x) =

sup

~

~ ε,τε,n ). E(~y,~x),n g(τε,n , eYε,τε,n , X

(2.167)

(ε) τε,n ∈Mmax,n,N

Probability measures Pε,n (~z, A), ~z ∈ Z = Rk × X(r) , n = 1, 2, . . . are concentrated on finite sets, for every ε ∈ (0, ε0 ].

2.4

Modulated nonlinear autoregressive stochastic volatility LPP

123

This obviously implies that, for every ε ∈ (0, ε0 ] and ~z = (~ y , ~x) ∈ Z, n = 1, 2, . . ., the rewards functions, |φε,n (~ y , ~x)| < ∞.

(2.168)

It is also useful to note that φε,N (~ y , x) = g(N, e~y , ~x), (~ y , ~x) ∈ Z. An analogue of Lemma 8.4.2∗ can be formulated for the above model. However, the corresponding system of linear equations can be simplified taking into account the specific shift structure of the dynamic relation (2.163), which is well seen from the equivalent form (2.164). By the definition of sets Iε,n,¯l , ¯ l ∈ Lε,n , n = 0, 1, . . . and Jε,n,¯l0 , ¯ l0 ∈ L0ε,n , n = (r) 0, 1, . . ., for every ~ y ∈ Rk , ~x ∈ X and n = 0, 1, . . ., there exist the unique ¯ lε,n (~ y) = 0 0 0 (lε,n,1 (~ y ), . . . , lε,n,k (~ y )) ∈ Lε,n and ¯ lε,n (~x) = (lε,n,1 (~x), . . . , lε,n,r (~x)) ∈ L0ε,n such that ~ y ∈ Iε,n,¯lε,n (~y) and ~x ∈ Jε,n,¯lε,n 0 (~ x) . The following lemma is the direct corollary of Lemma 8.4.2∗ . Lemma 2.4.2. Let a modulated nonlinear autoregressive stochastic volatil~0,n is defined by the modulated stochastic difference equaity log-price process Y tion (2.133), while the corresponding approximating space-skeleton log-price pro~ε,n is defined, for every ε ∈ (0, ε0 ], by the dynamic transition relation cess Y (2.163). Then the reward functions φε,n (~ y , ~x) and φn+m (~ yε,n+m,¯l , ~xε,n+m,¯l0 ), ¯ l ∈ 0 0 Lε,n+m , ¯ l ∈ Lε,n+m , m = 1, . . . N − n are, for every ε ∈ (0, ε0 ], ~ y ∈ Rk , ~x ∈ X(r) and n = 0, . . . , N −1, the unique solution for the following recurrence finite system of linear equations,  φε,N (~ yε,N,¯l , ~ xε,N,¯l0 ) = g(N, ey~ε,N,l , ~ xε,N,¯l0 ),     0 0 ¯ ¯  l = (l1 , . . . , lk ) ∈ Lε,N , l = (l1 , . . . , lr0 ) ∈ L0ε,N ,      y ~   φ (~ y , ~ x ) = max g(n + m, e ε,n+m,l¯, ~ xε,n+m,¯l0 ), 0 ¯ ¯ ε,n+m ε,n+m,l ε,n+m,l     + 0+  Pmε,n+m+1 Pmε,n+m+1   φε,n+m+1 (~ yε,n+m+1,(l00 ,l1 ,...,lk−1 ) ,  −  1 l00 =m l000 =m0−  1 1 ε,n+m+1 ε,n+m+1   0  ~ x ) × P{y + A (~ y xε,n+m,¯l0 ) 000 0 0  ε,n+m,l1 ε,n+m+1,(l1 ,l1 ,...,lr−1 ) l, ~ n+m+1 ε,n+m,¯      + A00 yε,n+m,¯l , ~ xε,n+m,¯l0 )Wn+m+1 ∈ Iε,n+m+1,l00 ,  n+m+1 (~ 1       Cn+m+1 (~ xε,n+m,¯l0 , Un+m+1 ) ∈ Jε,n+m+1,l000 } ,   1  ¯ l = (l1 , . . . , lk ) ∈ Lε,n+m , ¯ l0 = (l10 , . . . , l0 ) ∈ L0ε,n+m ,

k    m = N − n − 1, . . . , 1,       φε,n (~ y, ~ x) = max g(n, ey~ , ~ x),       Pm+ Pm0+  ε,n+1 ε,n+1  φε,n+1 (~ yε,n+1,(l00 ,lε,n,1 (~z),...,lε,n,k−1 (~z)) ,  00 =m− 1  l l000 =m0− 1 ε,n+1 1 ε,n+1     ~ xε,n+1,(l000 ,l0  z) (~ z ),...,l0ε,n,r−1 (~ z )) ) × P{yε,n,lε,n,1 (~  1 ε,n,1    0  +An+1 (~ yε,n,¯lε,n (~z) , ~ xε,n,¯l0 (~z) )  ε,n    00  + An+1 (~ yε,n,¯lε,n (~z) , ~ xε,n,¯l0 (~z) )Wn+1 ∈ Iε,n+1,l00   ε,n 1      Cn+1 (~x ¯0 } , ε,n,lε,n (~ z ) , Un+1 ) ∈ Jε,n+1,l000 1

(2.169)

124

2

Autoregressive stochastic volatility LPP

(ε)

while the optimal expected reward Φε = Φε (Mmax,0,N ) is defined as by the following formula, for every ε ∈ (0, ε0 ], X Φε = P0,0 (Aε,0,¯l,¯l0 ) φε,0 (~ yε,0,¯l , ~xε,0,¯l0 ). (2.170) ¯ l ∈ Lε,0 ,¯ l0 ∈ L0ε,0

Proof. The following system of linear equations for reward functions φε,n (~ y , ~x) and φn+m (~ yε,n,¯l , ~xε,n,¯l0 ), ¯ l ∈ Lε,n+m , ¯ l0 ∈ L0ε,n+m m = 1, . . . N − n is the variant of the system of linear equations given in Lemma 8.4.2∗ ,  ~ yε,N,l¯ , ~xε,N,¯l0 ),   φε,N (~yε,N,¯l , ~xε,N,¯l0 ) = g(N, e   0 0  ¯ ¯ l ∈ L , l ∈ L  ε,N ε,N    ~ yε,n+m,l¯  φε,n+m (~y  , ~ x ,~ yε,n+m,¯l0 ),  ε,n+m,¯ l ε,n+m,¯ l0 ) = max g(n + m, e   P P   yε,n+m+1,¯l00 , ~xε,n+m+1,¯l000 ) ¯ ¯  l00 ∈Lε,n+m+1 l000 ∈L0ε,n+m+1 φε,n+m+1 (~    ×P0,n+m+1 (~zε,n+m,¯l,¯l0 , Aε,n+m+1,¯l00 ,¯l000 ) , (2.171)   0 0  ¯ ¯ l ∈ Lε,n+m , l ∈ Lε,n+m , m = N − n − 1, . . . , 1,       φε,n (~ y , ~x) = max g(n, e~y , ~x),    P P   yε,n+1,¯l00 , ~xε,n+1,¯l000 ) ¯ ¯  l00 ∈Lε,n+1 l000 ∈L0ε,n+1 φε,n+1 (~      ×P zε,n,¯lε,n (~z),¯lε,n 0 0,n+1 (~ (~ z ) , Aε,n+1,¯ l00 ,¯ l000 ) . The system of linear equations (2.171) can be re-written in simpler form taking into account shift features of the dynamic transition relation (2.148) and (2.164). Indeed, according the above relations, we get the following formula, for ¯ l = (l1 , . . . , lk ) ∈ Lε,n+m , ¯ l0 = (l10 , . . . , lr0 ) ∈ L0ε,n+m and ¯ l00 = (l100 , . . . , lk00 ) ∈ Lε,n+m+1 , ¯ l000 = (l1000 , . . . , lr000 ) ∈ L0ε,n+m+1 , P0,n+m+1 (~zε,n+m,¯l,¯l0 , Aε,n+m+1,¯l00 ,¯l000 ) = P{yε,n+m,l1 + A0n+m+1 (~ yε,n+m,¯l , ~xε,n+m,¯l0 ) + A00n+m+1 (~ yε,n+m,¯l , ~xε,n+m,¯l0 )Wn+m+1 ∈ Iε,n+m+1,l100 , Cn+m+1 (~xε,n+m,¯l0 , Un+m+1 ) ∈ Jε,n+m+1,l1000 } × I(yε,n+m,l1 ∈ Iε,n+m,l200 ) × · · · × I(yε,n+m−p+2,lp−1 ∈ Iε,n+m−p+2,lp00 ) × I(xε,n+m,l10 ∈ Jε,n+m,l2000 ) 0 × · · · × I(xε,n+m−r+2,lr−1 ∈ Jε,n+m−r+2,lr000 )

= P{yε,n+m,l1 + A0n+m+1 (~ yε,n+m,¯l ), ~xε,n+m,¯l0 ) + A00n+m+1 (~ yε,n+m,¯l , ~xε,n+m,¯l0 )Wn+m+1 ∈ Iε,n+m+1,l100 ,

2.4

Modulated nonlinear autoregressive stochastic volatility LPP

125

Cn+m+1 (~xε,n+m,¯l0 , Un+m+1 ) ∈ Jε,n+m+1,l1000 } × I(l1 = l200 ) × · · · × I(lp−1 = lp00 ) 0 × I(l10 = l2000 ) × · · · × I(lr−1 = lr000 ).

(2.172)

Relation (2.172) implies that the system of linear equations (2.171) can be re-written in the simpler form, where multivariate sums over vector indices ¯ l00 ∈ 000 0 ¯ Lε,n+m+1 , l ∈ Lε,n+m+1 , m = N − n, . . . , 0 are replaced in the corresponding + 00 equations by univariate sums over indices l100 , lp+1 = m− ε,n+m+1 , . . . , mε,n+m+1 , 0− 0+ 000 l1 = mε,n+m+1 , . . . , mε,n+m+1 , m = N − n, . . . , 0. By doing this, we get write down the system of linear equations (2.169).  Obviously, |Φε | < ∞, for every ε ∈ (0, ε0 ].

2.4.3 Convergence of option reward functions for modulated nonlinear autoregressive stochastic volatility log-price processes Let now formulate conditions of convergence for the above log-reward functions. Let us introduce special shorten notations for the maximal and the minimal skeleton points, for n = −k + 1, . . . , N and ε ∈ (0, ε0 ], ± zε,n = δε,n m± ε,n + λε,n .

(2.173)

We impose the following condition on the parameters of the space skeleton model: ± N8 : (a) δε,n → 0 as ε → 0, for n = −k + 1, . . . , N ; (b) ±zε,n → ∞ as ε → 0, ± for n = −k + 1, . . . , N ; (c) ±zε,n , n = −k + 1, . . . , N are non-decreasing sequences, for every ε ∈ (0, ε0 ]; (d) for any x ∈ X and d > 0, there exists εx,d ∈ (0, ε0 ] such that the ball Rd (x) ⊆ Kε,n , n = 0, . . . , N , for ε ∈ (0, εx,d ]; (e) sets Kε,n,l have diameters dε,n,l = supx0 ,x00 ∈Kε,n,l dX (x0 , x00 ) such that dε,n = maxm− ≤l≤m+ dε,n,l → 0 as ε → 0, n = 0, . . . , N . ε,n,0

ε,n,0

Let now formulate the corresponding convergence conditions. We assume that the following condition holds: I7 : There exists sets Z0n ∈ BZ , n = 0, . . . , N such that function g(n, e~y , ~x) is continuous at points ~z = (~ y , ~x) ∈ Z0n for every n = 0, . . . , N . Also, we assume that the following weak convergence condition, with charac~ n (~x, u) given by relations (2.134) – (2.135), holds: teristics µ ~ n (~ y , ~x), Σn (~ y , ~x), and C O10 : There exist sets Z00n ∈ BZ , n = 0, . . . , N and Un ∈ BU , n = 1, . . . , N such that: (a) A0n (~ yε , ~xε ) → A0n (~ y0 , ~x0 ), A00n (~ yε , ~xε ) → A00n (~ y0 , ~x0 ), Cn (~xε , u) → Cn (~x0 , u) as ε → 0, for any ~zε = (~ yε , ~xε ) → ~z0 = (~ y0 , ~x0 ) ∈ Z00n−1 as

126

2

Autoregressive stochastic volatility LPP

ε → 0, u ∈ Un , n = 1, . . . , N ; (b) P{Un ∈ Un } = 1, n = 1, . . . , N ; (c)  ~ n0 , C ~ n (~x0 , U0,n ) ∈ Z0n ∪ Z00n } = 0 for every P{ ~ y0 + µ ~ n (~ y0 , ~x0 ) + Σn (~ y0 , ~x0 )W ~z0 = (~ y0 , ~x0 ) ∈ Z0n−1 ∪ Z00n−1 and n = 1, . . . , N , where Z0n , n = 0, . . . , N are sets introduced in condition I7 . The following theorem is a corollary of Theorem 8.4.7∗ . Theorem 2.4.3. Let a modulated nonlinear autoregressive stochastic volatil~0,n is defined by the vector modulated stochastic difference ity log-price process Y equation (2.133), while the corresponding approximating space-skeleton log-price ~ε,n is defined, for every ε ∈ (0, ε0 ], by the dynamic transition relation process Y (2.163). Let also conditions B7 [¯ γ ] holds for some vector parameter γ¯ = (γ1 , . . . , γk ) with non-negative components, and, also, conditions G5 , N8 , I7 , and O10 hold. Then, for every n = 0, 1, . . . , N , the following relation takes place for any ~zε = (~ yε , ~ xε ) → ~z0 = (~ y0 , ~x0 ) ∈ Z0n ∩ Z00n , φε,n (~ yε , ~xε ) → φ0,n (~ y0 , ~x0 ) as ε → 0.

(2.174)

Proof. As follows from the transition dynamic relations (2.133) and (2.163), the space skeleton approximation model considered in Theorem 2.4.3 is a particular case of the space skeleton approximation model considered in Theorem 8.4.7∗ . Conditions N8 , B7 [¯ γ ], and I7 are variants of conditions N2∗ , B1 [¯ γ ]∗ , and I7∗ used in Theorem 8.4.7∗ . By Lemma 2.4.1 condition G5 implies that condition G4∗ holds. Condition O10 is the variant of condition O5∗ , for the above model considered in Theorem 8.4.7∗ . Thus, all conditions of Theorem 8.4.7∗ hold. By applying this theorem, we get the convergence relation (2.174). 

2.4.4 Convergence of optimal expected rewards for space skeleton approximations of modulated nonlinear autoregressive stochastic volatility log-price processes Let us now give conditions for convergence for optimal expected rewards Φε in the above space skeleton approximation model. We shall apply Theorem 8.4.8∗ . ¯ = (A1 (β), ¯ . . . , Ak (β)) ¯ ~ β) ¯ with the function A( In this case, condition D10 [β], given by relations (2.140) and (2.141), should be assumed to hold for some vector parameter β¯ = (β1 , . . . , βp ) with non-negative components and the corresponding vectors β¯i = (βi,1 , . . . , βi,k ) with components βi,j = I(i = j), i, j = 1, . . . , k. Condition K15∗ should be replaced by the following condition imposed on the ~ 0,0 ∈ A}: initial distribution P0,0 (A) = P{Z K10 : P0,0 (Z00 ∩ Z000 ) = 1, where Z00 and Z000 are sets introduced, respectively, in conditions I7 and O10 .

2.5 Modulated autoregressive conditional heteroskedastic LPP

127

The following theorem is a corollary of Theorem 8.4.8∗ . Theorem 2.4.4. Let a modulated nonlinear autoregressive stochastic volatil~0,n is defined by the vector modulated stochastic difference ity log-price process Y equation (2.133), while the corresponding approximating space-skeleton log-price ~ε,n is defined, for every ε ∈ (0, ε0 ], by the dynamic transition relation process Y ¯ holds for some vector parameters (2.163). Let also conditions B7 [¯ γ ] and D10 [β] ¯ γ¯ = (γ1 , . . . , γk ) and β = (β1 , . . . , βk ) such that, for every i = 1, . . . , k either βi > γi > 0 or βi = γi = 0, and also conditions G5 , N8 , I7 , O10 , and K10 hold. Then, the following relation takes place, Φε → Φ0 as ε → 0.

(2.175)

¯ and Proof. Theorem 2.4.4 is a corollary of Theorem 8.4.8∗ . Conditions D10 [β] ¯ K10 are just re-formulations, respectively, of conditions D19 [β]∗ and K15∗ used in Theorem 8.4.8∗ . Other conditions of these theorem also hold that was pointed out in the proof of Theorem 2.4.3. By applying Theorem 8.4.8∗ we get convergence relation (2.175). 

2.5 Modulated autoregressive conditional heteroskedastic LPP

In this section, we present results concerning space-skeleton approximations for rewards of American-type options for modulated autoregressive conditional heteroskedastic (ARCH) log-price processes with Gaussian noise terms.

2.5.1 Space-skeleton approximations for modulated autoregressive conditional heteroskedastic log-price processes

Let us consider a modulated autoregressive conditional heteroskedastic log-price process with Gaussian noise terms, which is defined by the following modulated stochastic difference equation,

\[
\left\{
\begin{aligned}
Y_n - Y_{n-1} &= a_{n,0}(X_{n-1}, \ldots, X_{n-r}) \\
&\quad + a_{n,1}(X_{n-1}, \ldots, X_{n-r})\,(Y_{n-1} - f_{n,1}(X_{n-1}, \ldots, X_{n-r})Y_{n-2}) \\
&\quad + \cdots + a_{n,p-1}(X_{n-1}, \ldots, X_{n-r})\,(Y_{n-p+1} - f_{n,p-1}(X_{n-1}, \ldots, X_{n-r})Y_{n-p}) + g_\kappa(\sigma_n)W_n, \\
X_n &= C_n(X_{n-1}, \ldots, X_{n-r}, U_n), \qquad n = 1, 2, \ldots,
\end{aligned}
\right.
\tag{2.176}
\]


where

\[
\begin{aligned}
\sigma_n = \big(d_{n,0}(X_{n-1}, \ldots, X_{n-r}) &+ d_{n,1}(X_{n-1}, \ldots, X_{n-r})(Y_{n-1} - e_{n,1}(X_{n-1}, \ldots, X_{n-r})Y_{n-2})^2 \\
&+ \cdots + d_{n,p-1}(X_{n-1}, \ldots, X_{n-r})(Y_{n-p+1} - e_{n,p-1}(X_{n-1}, \ldots, X_{n-r})Y_{n-p})^2\big)^{\frac{1}{2}}, \quad n = 1, 2, \ldots,
\end{aligned}
\tag{2.177}
\]

and (a) $\vec Y_0 = (Y_0, \ldots, Y_{-p+1})$ is a $p$-dimensional random vector with real-valued components; (b) $\vec X_0 = (X_0, \ldots, X_{-r+1})$ is an $r$-dimensional random vector with components taking values in the space $\mathbb{X}$; (c) $(W_1, U_1), (W_2, U_2), \ldots$ is a sequence of i.i.d. random vectors taking values in the space $\mathbb{R}^1 \times \mathbb{U}$; moreover, $W_1, W_2, \ldots$ is a sequence of real-valued i.i.d. standard normal variables with mean value $0$ and variance $1$, while the random variables $U_1, U_2, \ldots$ have a regular conditional distribution $G_w(A) = \mathsf{P}\{U_n \in A / W_n = w\}$, $n = 1, 2, \ldots$; (d) the random vector $\vec Z_0 = (\vec Y_0, \vec X_0)$ and the random sequence $(W_1, U_1), (W_2, U_2), \ldots$ are independent; (e) $p$ and $r$ are positive integer numbers; (f) $a_{n,i}(\vec x) = a_{n,i}(x_1, \ldots, x_r)$, $\vec x = (x_1, \ldots, x_r) \in \mathbb{X}^{(r)}$, $i = 0, \ldots, p-1$, $n = 1, 2, \ldots$ are measurable functions acting from the space $\mathbb{X}^{(r)}$ to the space $\mathbb{R}^1$; (g) $d_{n,i}(\vec x) = d_{n,i}(x_1, \ldots, x_r)$, $\vec x = (x_1, \ldots, x_r) \in \mathbb{X}^{(r)}$, $i = 0, \ldots, p-1$, $n = 1, 2, \ldots$ are measurable functions acting from the space $\mathbb{X}^{(r)}$ to the interval $[0, \infty)$; (h) $e_{n,i}(\vec x) = e_{n,i}(x_1, \ldots, x_r)$, $f_{n,i}(\vec x) = f_{n,i}(x_1, \ldots, x_r)$, $\vec x = (x_1, \ldots, x_r) \in \mathbb{X}^{(r)}$, $i = 1, \ldots, p-1$, $n = 1, 2, \ldots$ are measurable functions acting from the space $\mathbb{X}^{(r)}$ to the interval $[0, 1]$; (i) $C_n(\vec x, u) = C_n(x_1, \ldots, x_r, u)$, $(\vec x, u) \in \mathbb{X}^{(r)} \times \mathbb{U}$, $n = 1, 2, \ldots$ are measurable functions acting from the space $\mathbb{X}^{(r)} \times \mathbb{U}$ to the space $\mathbb{X}$; (j) $g_\kappa(\cdot)$ is a function from the class $\mathcal{G}_\kappa$, for some $\kappa \ge 0$.

As in Section 2.2, we restrict consideration to CIR($p$)-type models, where the parameter $\kappa = \frac{1}{2}$.

The above model is a particular case of the model considered in Section 2.4. In this case, $k = p$, and the components of the corresponding modulated nonlinear autoregressive stochastic volatility log-price process are

\[
\vec Y_{0,n} = (Y_{n,1}, \ldots, Y_{n,p}) = (Y_n, \ldots, Y_{n-p+1}), \quad n = 0, 1, \ldots,
\]

(2.178)

and

\[
\vec X_{0,n} = (X_{n,1}, \ldots, X_{n,r}) = (X_n, \ldots, X_{n-r+1}), \quad n = 0, 1, \ldots.
\]

(2.179)
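To make the recursion (2.176)--(2.177) concrete, the following sketch simulates one trajectory of the modulated log-price process for $p = 2$, $r = 1$ lags, a two-state modulating index and the smoothing transformation $g_{1/2}(y) = \sqrt{y}$; all coefficient functions and numerical values below are illustrative assumptions and are not taken from the model conditions.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 50                                   # number of time steps

# illustrative coefficient functions for p = 2, r = 1, X = {0, 1}
def a(i, x):  return [0.01, 0.30][i] * (1.0 if x == 0 else 0.5)   # a_{n,0}, a_{n,1}
def d(i, x):  return [0.02, 0.10][i] * (1.0 if x == 0 else 2.0)   # d_{n,0}, d_{n,1}
def f1(x):    return 0.5                                          # f_{n,1}
def e1(x):    return 0.5                                          # e_{n,1}
def C(x, u):  return 1 - x if u < 0.1 else x                      # X_n = C_n(X_{n-1}, U_n)

g_half = np.sqrt   # smoothing transformation g_{1/2}, assumed to be the square root

Y, Y_prev, X = 0.0, 0.0, 0
for n in range(1, N + 1):
    W, U = rng.standard_normal(), rng.uniform()
    sigma = np.sqrt(d(0, X) + d(1, X) * (Y - e1(X) * Y_prev) ** 2)   # (2.177) with p = 2
    drift = a(0, X) + a(1, X) * (Y - f1(X) * Y_prev)                 # drift part of (2.176)
    Y, Y_prev = Y + drift + g_half(sigma) * W, Y                     # (2.176)
    X = C(X, U)
print("terminal log-price:", Y, "terminal regime:", X)
\end{verbatim}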

We can always assume that the sequence of random variables Wn , n = 1, 2, . . . is the sequence of the first components of the sequence of p-dimensional i.i.d.


0 0 ~ n0 = (Wn,1 standard Gaussian random vectors W , . . . , Wn,k ), n = 1, 2, . . ., with 0 0 0 EW1,i = 0, EW1,i W1,j = I(i = j), i, j = 1, . . . , p, i.e., 0 Wn = Wn,1 , n = 1, 2, . . . .

(2.180)

~ n = (Wn,1 , Let also of consider again k-dimensional i.i.d. Gaussian vectors W . . ., Wn,k ) = (Wn , 0, . . . , 0), n = 1, 2, . . .. ~ n can be obtained as a linear transformation of vectors W ~ n0 , namely, Vectors W ~ n = ΣW ~ n0 , n = 1, 2, . . ., where p × p matrix Σ = kσi,j k has elements σi,j = I(i = W 1)I(j = 1), i, j = 1, . . . , p. The modulated stochastic difference equation (2.129) can be re-written in the following equivalent vector form,  ~0,n−1 , X ~ 0,n−1 ) + Σn (Y ~0,n−1 , X ~ 0,n−1 )W ~ n0 , ~ n (Y  Y~0,n − Y~0,n−1 = µ ~ 0,n ~ n (X ~ 0,n−1 , Un ), (2.181) X =C  n = 1, 2, . . . , where p-dimensional vectors functions µ ~ n (~ y , ~x) = (µn,1 (~ y , ~x), . . . , µn,p (~ y , ~x)), ~ y= (r) (y1 , . . ., yp ) ∈ Rp , ~x = (x1 , . . . , xr ) ∈ X , n = 1, 2, . . . and p × p matrix functions Σn (~ y , ~x) = kσn,i,j (~ y , ~x)k, ~ y = (y1 , . . . , yp ) ∈ Rp , ~x = (x1 , . . . , yr ) ∈ X(r) , n = 1, 2, . . . are defined by the following relations,   00   y , ~x) 0 . . . 0 An (~ y , ~x) A0n (~   y1 − y2  0 0 ... 0      (2.182) µ ~ n (~ y , ~x) =   , Σn (~y , ~x) =  .. ..  , .. ..    . . .  . 0 0 ... 0 yk−1 − yp where functions A0n (~ y , ~x), A00n (~ y , ~x), ~ y = (y1 , . . . , yp ) ∈ Rp , ~x = (x1 , . . . , xr ) ∈ X(r) , n = 1, 2, . . . are given by the following relations, A0n (~ y , ~x) = A0n (y1 , . . . , yp , x1 , . . . , xr ) = an,0 (x1 , . . . , xr ) + an,1 (x1 , . . . , xr )(y1 − fn,1 (x1 , . . . , xr )y2 ) + · · · + an,p−1 (x1 , . . . , xr )(yp−1 − fn,p−1 (x1 , . . . , xr )yp ), A00n (~ y , ~x)

= A00n (y1 , . . . , yp , x1 , . . . , xr )

= g 21 ( dn,0 (x1 , . . . , xr ) + dn,1 (x1 , . . . , xr )(y1 − en,1 (x1 , . . . , xr )y2 )2 1 (2.183) + · · · + dn,p−1 (x1 , . . . , xr )(yp−1 − en,p−1 (x1 , . . . , xr )yp )2 2 ), ~ n (~x, u) = (Cn,1 (~x, u), . . . , Cn,r (~x, u)), ~x = (x1 , while r-dimensional functions C (r) . . . , xr ) ∈ X , u ∈ U, n = 1, 2, . . . are defined by the following relation,   Cn (~x, u)   x1  ~ n (~x, u) =  C (2.184)  . ..   . xr−1


Let us assume that the following condition holds:

$G_6$: $\max_{0 \le n \le N-1,\, i = 0, \ldots, p-1} \sup_{\vec x \in \mathbb{X}^{(r)}} \big(|a_{n+1,i}(\vec x)| + \sqrt{d_{n+1,i}(\vec x)}\big) < K_{38}$, for some $0 < K_{38} < \infty$.

Lemma 2.5.1. Let the modulated autoregressive conditional heteroskedastic log-price process $Y_n$ and its equivalent vector version $\vec Y_{0,n}$ are given, respectively, by the modulated stochastic difference equations (2.176) and (2.181), parameter $\kappa = \frac{1}{2}$, and condition $G_6$ holds. In this case, condition $G_5$ holds, i.e., there exist constants $0 < K_{39} < \infty$ and $0 \le K_{40,l} < \infty$, $l = 1, \ldots, p$ such that,

\[
\max_{0 \le n \le N-1} \ \sup_{(\vec y, \vec x) \in \mathbb{R}^k \times \mathbb{X}^{(r)}} \frac{|A'_{n+1}(\vec y, \vec x)| + (A''_{n+1}(\vec y, \vec x))^2}{1 + \sum_{l=1}^{p} K_{40,l}|y_l|} < K_{39}.
\tag{2.185}
\]

Proof. Since the function $g_{\frac{1}{2}}(\cdot) \in \mathcal{G}_{\frac{1}{2}}$, inequality (2.53) holds for this function. Using it and condition $G_6$ we get the following inequality, for every $\vec y = (y_1, \ldots, y_p) \in \mathbb{R}^p$, $n = 1, \ldots, N$,

\[
\begin{aligned}
|A'_n(\vec y, \vec x)| + (A''_n(\vec y, \vec x))^2
&< |a_{n,0}(\vec x)| + |a_{n,1}(\vec x)|(|y_1| + |y_2|) + \cdots + |a_{n,p-1}(\vec x)|(|y_{p-1}| + |y_p|) \\
&\quad + 2L^2\Big(1 + \big(d_{n,0}(\vec x) + d_{n,1}(\vec x)(|y_1| + |y_2|)^2 + \cdots + d_{n,p-1}(\vec x)(|y_{p-1}| + |y_p|)^2\big)^{\frac{1}{2}}\Big) \\
&\le |a_{n,0}(\vec x)| + 2L^2(1 + \sqrt{d_{n,0}(\vec x)}) + \big(|a_{n,1}(\vec x)| + 2L^2\sqrt{d_{n,1}(\vec x)}\big)|y_1| \\
&\quad + \big(|a_{n,1}(\vec x)| + |a_{n,2}(\vec x)| + 2L^2(\sqrt{d_{n,1}(\vec x)} + \sqrt{d_{n,2}(\vec x)})\big)|y_2| + \cdots \\
&\quad + \big(|a_{n,p-2}(\vec x)| + |a_{n,p-1}(\vec x)| + 2L^2(\sqrt{d_{n,p-2}(\vec x)} + \sqrt{d_{n,p-1}(\vec x)})\big)|y_{p-1}| \\
&\quad + \big(|a_{n,p-1}(\vec x)| + 2L^2\sqrt{d_{n,p-1}(\vec x)}\big)|y_p| \\
&\le K_{39}\Big(1 + \sum_{l=1}^{p} K_{40,l}|y_l|\Big),
\end{aligned}
\tag{2.186}
\]

where the constants $0 < K_{39} < \infty$ and $0 \le K_{40,l} < \infty$, $l = 1, \ldots, p$ are given by the following formulas,

\[
K_{39} = \max_{0 \le n \le N-1} \sup_{\vec x \in \mathbb{X}^{(r)}} \big(|a_{n,0}(\vec x)| + 2L^2(1 + \sqrt{d_{n,0}(\vec x)})\big),
\]

and

\[
K_{40,l} =
\begin{cases}
\dfrac{\max_{0 \le n \le N-1} \sup_{\vec x \in \mathbb{X}^{(r)}} \big(|a_{n,1}(\vec x)| + 2L^2\sqrt{d_{n,1}(\vec x)}\big)}
      {\max_{0 \le n \le N-1} \sup_{\vec x \in \mathbb{X}^{(r)}} \big(|a_{n,0}(\vec x)| + 2L^2(1 + \sqrt{d_{n,0}(\vec x)})\big)}
& \text{if } l = 1, \\[3ex]
\dfrac{\max_{0 \le n \le N-1} \sup_{\vec x \in \mathbb{X}^{(r)}} \big(|a_{n,l-1}(\vec x)| + |a_{n,l}(\vec x)| + 2L^2(\sqrt{d_{n,l-1}(\vec x)} + \sqrt{d_{n,l}(\vec x)})\big)}
      {\max_{0 \le n \le N-1} \sup_{\vec x \in \mathbb{X}^{(r)}} \big(|a_{n,0}(\vec x)| + 2L^2(1 + \sqrt{d_{n,0}(\vec x)})\big)}
& \text{if } l = 2, \ldots, p-1, \\[3ex]
\dfrac{\max_{0 \le n \le N-1} \sup_{\vec x \in \mathbb{X}^{(r)}} \big(|a_{n,p-1}(\vec x)| + 2L^2\sqrt{d_{n,p-1}(\vec x)}\big)}
      {\max_{0 \le n \le N-1} \sup_{\vec x \in \mathbb{X}^{(r)}} \big(|a_{n,0}(\vec x)| + 2L^2(1 + \sqrt{d_{n,0}(\vec x)})\big)}
& \text{if } l = p.
\end{cases}
\tag{2.187}
\]

It follows from inequality (2.186) that condition $G_5$ holds, for example, with the constants $K_{39}$ and $K_{40,l}$, $l = 1, \ldots, p$, which can replace, respectively, the constants $K_{33}$ and $K_{34,l}$, $l = 1, \ldots, k$ (recall that, in this case, $k = p$) penetrating condition $G_5$. $\Box$

Thus, Theorems 2.4.1 and 2.4.2 can be applied to the above model of modulated log-price processes $Z_n = (Y_n, X_n)$ given by the modulated stochastic difference equation (2.176) or its equivalent vector version $\vec Z_{0,n} = (\vec Y_{0,n}, \vec X_{0,n})$ given by the vector modulated stochastic difference equation (2.181).

In this case, we consider pay-off functions $g(n, e^{\vec y}, \vec x)$, $\vec y = (y_1, \ldots, y_p) \in \mathbb{R}^p$, $\vec x = (x_1, \ldots, x_r) \in \mathbb{X}^{(r)}$, $n = 0, 1, \ldots$, which are real-valued measurable functions assumed to satisfy condition $B_5[\bar\gamma]$, for some vector parameter $\bar\gamma = (\gamma_1, \ldots, \gamma_p)$ with non-negative components.

Let $\mathcal{M}^{(0)}_{\max,p,r,n,N}$ be the class of all stopping times $\tau_{0,n}$ for the process

$Z_n = (Y_n, X_n)$ such that (a) $n \le \tau_{0,n} \le N$, (b) event $\{\tau_{0,n} = m\} \in \mathcal{F}^{(0)}_{p,r,n,m} = \sigma[Y_{l'}, n-p+1 \le l' \le m, X_{l''}, n-r+1 \le l'' \le m]$, $n \le m \le N$.

Obviously, the class $\mathcal{M}^{(0)}_{\max,p,r,n,N}$ coincides with the class $\mathcal{M}^{(0)}_{\max,n,N}$ of all Markov moments $\tau_{0,n}$ for the Markov process $\vec Z_{0,l} = (\vec Y_{0,l}, \vec X_{0,l})$, where $\vec Y_{0,l} = (Y_l, \ldots, Y_{l-p+1})$ and $\vec X_{0,l} = (X_l, \ldots, X_{l-r+1})$, such that (a) $n \le \tau_{0,n} \le N$, (b) event $\{\tau_{0,n} = m\} \in \mathcal{F}^{(0)}_{n,m} = \sigma[\vec Z_{0,l}, n \le l \le m]$, $n \le m \le N$.

In this case, the reward functions are defined for the modulated log-price process $Z_n = (Y_n, X_n)$ (its vector equivalent version $\vec Z_{0,n} = (\vec Y_{0,n}, \vec X_{0,n})$) by the following relation, for $(\vec y, \vec x) \in \mathbb{R}^p \times \mathbb{X}^{(r)}$ and $n = 0, 1, \ldots, N$,

\[
\phi_{0,n}(\vec y, \vec x) = \sup_{\tau_{0,n} \in \mathcal{M}^{(0)}_{\max,n,N}} \mathsf{E}_{(\vec y, \vec x),n}\, g(\tau_{0,n}, e^{\vec Y_{0,\tau_{0,n}}}, \vec X_{0,\tau_{0,n}}).
\tag{2.188}
\]

In this case, the function $\vec A(\bar\beta) = (A_1(\bar\beta), \ldots, A_p(\bar\beta))$, $\bar\beta = (\beta_1, \ldots, \beta_p)$, $\beta_1, \ldots, \beta_p \ge 0$, penetrating the formulation of Theorems 2.4.1 and 2.4.2, has the following components,

\[
A_j(\bar\beta) = K_{39} K_{40,j} \sum_{l=1}^{p} \Big(\beta_l + \frac{1}{2} p^2 \beta_l^2\Big), \quad j = 1, \ldots, p.
\tag{2.189}
\]

Function $\vec A(\bar\beta)$ generates a sequence of functions $\vec A_n(\bar\beta) = (A_{n,1}(\bar\beta), \ldots, A_{n,p}(\bar\beta))$, $n = 0, 1, \ldots$ from the class $\mathcal{A}_p$ by the following recurrence relation, for any $\bar\beta = (\beta_1, \ldots, \beta_p)$, $\beta_i \ge 0$, $i = 1, \ldots, p$,

\[
\vec A_n(\bar\beta) =
\begin{cases}
\bar\beta & \text{for } n = 0, \\
\vec A_{n-1}(\bar\beta) + \vec A(\vec A_{n-1}(\bar\beta)) & \text{for } n = 1, 2, \ldots.
\end{cases}
\tag{2.190}
\]
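As a numerical illustration of the recursion (2.189)--(2.190), the following sketch computes the vectors $\vec A_n(\bar\beta)$; the values of $p$, $K_{39}$, $K_{40,l}$ and $\bar\beta$ are arbitrary assumptions chosen only for the example.

\begin{verbatim}
import numpy as np

# illustrative constants entering (2.189)
p = 3
K39 = 0.5
K40 = np.array([1.0, 0.8, 0.6])                  # K_{40,1}, ..., K_{40,p}

def A_map(beta):
    # component map (2.189): A_j(beta) = K39 * K40_j * sum_l (beta_l + 0.5 * p^2 * beta_l^2)
    s = np.sum(beta + 0.5 * p ** 2 * beta ** 2)
    return K39 * K40 * s

def A_n(beta, n):
    # recurrence (2.190): A_0(beta) = beta, A_n(beta) = A_{n-1}(beta) + A(A_{n-1}(beta))
    a = np.asarray(beta, dtype=float)
    for _ in range(n):
        a = a + A_map(a)
    return a

beta = np.array([0.1, 0.0, 0.2])
print(A_n(beta, 5))     # the vector A_5(beta) entering the bounds (2.191) and (2.192)
\end{verbatim}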

Recall also the vectors $\bar\beta_i = (\beta_{i,1}, \ldots, \beta_{i,p})$ with components $\beta_{i,j} = \beta_j I(j = i)$, $i, j = 1, \ldots, p$.

Theorem 2.4.1 takes in this case the following form.

Theorem 2.5.1. Let the modulated autoregressive conditional heteroskedastic log-price process $Y_n$ and its equivalent vector version $\vec Y_{0,n}$ be given, respectively, by the modulated stochastic difference equations (2.176) and (2.181), and parameter $\kappa = \frac{1}{2}$. Let condition $G_6$ hold and, also, condition $B_5[\bar\gamma]$ hold for some vector parameter $\bar\gamma = (\gamma_1, \ldots, \gamma_p)$ with $\gamma_i \ge 0$, $i = 1, \ldots, p$. Then, for any vector parameter $\bar\beta = (\beta_1, \ldots, \beta_p)$ with components $\beta_i \ge \gamma_i$, $i = 1, \ldots, p$, there exist constants $0 \le M_{31}, M_{32,i} = M_{32,i}(\beta_i) < \infty$, $i = 1, \ldots, p$ such that the reward functions $\phi_{0,n}(\vec y, \vec x)$ satisfy the following inequalities, for $\vec y = (y_1, \ldots, y_p) \in \mathbb{R}^p$, $\vec x = (x_1, \ldots, x_r) \in \mathbb{X}^{(r)}$, $0 \le n \le N$,

\[
|\phi_{0,n}(\vec y, \vec x)| \le M_{31} + \sum_{i:\, \gamma_i = 0} M_{32,i}
+ \sum_{i:\, \gamma_i > 0} M_{32,i} \exp\Big\{\Big(\sum_{j=1}^{p} A_{N-n,j}(\bar\beta_i)|y_j|\Big)\frac{\gamma_i}{\beta_i}\Big\}.
\tag{2.191}
\]

Remark 2.5.1. The explicit formulas for the constants $M_{31}$ and $M_{32,i}(\beta_i)$, $i = 1, \ldots, p$ take, according to the formulas given in Remark 2.1.1, the following form,

\[
\begin{aligned}
M_{31} &= L_9, \\
M_{32,i}(\beta_i) &= L_9 L_{10,i} I(\gamma_i = 0)
+ L_9 L_{10,i}\Big(1 + 2^p e^{K_{39}\sum_{l=1}^{p}(A_{N-1,l}(\bar\beta_i) + \frac{1}{2}p^2 A^2_{N-1,l}(\bar\beta_i))N}\Big)^{\frac{\gamma_i}{\beta_i}} I(\gamma_i > 0),
\end{aligned}
\tag{2.192}
\]

where the vectors $\bar\beta_i = (\beta_{i,1}, \ldots, \beta_{i,p}) = (\beta_1 I(i = 1), \ldots, \beta_p I(i = p))$, $i = 1, \ldots, p$.

Condition $D_{10}[\bar\beta]$ should be replaced in this case by the following condition assumed to hold for a vector parameter $\bar\beta = (\beta_1, \ldots, \beta_p)$ with non-negative components:

$D_{11}[\bar\beta]$: $\mathsf{E}\exp\{\sum_{j=1}^{p} A_{N,j}(\bar\beta_i)|Y_{0,j}|\} < K_{41,i}$, $i = 1, \ldots, p$, for some $1 < K_{41,i} < \infty$, $i = 1, \ldots, p$.


In this case, the optimal expected reward is defined by the formula,

\[
\Phi_0 = \sup_{\tau_{0,0} \in \mathcal{M}^{(0)}_{\max,0,N}} \mathsf{E}\, g(\tau_{0,0}, e^{\vec Y_{0,\tau_{0,0}}}) = \mathsf{E}\, \phi_0(\vec Y_{0,0}).
\tag{2.193}
\]

Theorem 2.4.2 takes in this case the following form.

Theorem 2.5.2. Let the modulated autoregressive conditional heteroskedastic log-price process $Y_n$ and its equivalent vector version $\vec Y_{0,n}$ be given, respectively, by the modulated stochastic difference equations (2.176) and (2.181), and parameter $\kappa = \frac{1}{2}$. Let condition $G_6$ hold and, also, conditions $B_5[\bar\gamma]$ and $D_{11}[\bar\beta]$ hold and $0 \le \gamma_i \le \beta_i < \infty$, $i = 1, \ldots, p$. Then, there exists a constant $0 \le M_{33} < \infty$ such that the following inequality takes place,

\[
|\Phi_0| \le M_{33}.
\tag{2.194}
\]

Remark 2.5.2. The explicit formula for the constant $M_{33}$ takes, according formulas given in Remark 2.4.2, the following form,

\[
M_{33} = L_{21} + \sum_{i:\, \gamma_i = 0} L_9 L_{10,i}
+ \sum_{i:\, \gamma_i > 0} L_9 L_{10,i}\Big(1 + 2^p e^{K_{39}\sum_{l=1}^{p}(A_{N-1,l}(\bar\beta_i) + \frac{1}{2}p^2 A^2_{N-1,l}(\bar\beta_i))N}\Big)^{\frac{\gamma_i}{\beta_i}} K_{41,i}^{\frac{\gamma_i}{\beta_i}}.
\tag{2.195}
\]

2.5.2 Convergence of option reward functions for space skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes

The Markov process $\vec Z_{0,n}$ has Gaussian transition probabilities $P_{0,n}(\vec z, A) = \mathsf{P}\{\vec Z_{0,n} \in A / \vec Z_{0,n-1} = \vec z\}$ defined for $\vec z = (\vec y, \vec x) \in \mathbb{Z} = \mathbb{R}^p \times \mathbb{X}^{(r)}$, $A \in \mathcal{B}_\mathbb{Z}$, $n = 1, 2, \ldots$ by the following relation,

\[
P_{0,n}(\vec z, A) = \mathsf{P}\{(\vec y + \vec\mu_n(\vec y, \vec x) + \Sigma_n(\vec y, \vec x)\vec W'_n,\ \vec C_n(\vec x, U_n)) \in A\},
\tag{2.196}
\]

where the vector function µ ~ n (~ y , ~x), the matrix functions Σn (~ y , ~x) and the vector ~ n (~x, u) are given by relations (2.182) and (2.184). functions C It is useful to re-write the vector stochastic transition dynamic relation (2.133) in the form of the system of dynamic transition relations for the components of ~ 0,n . In this case, relation (2.133) takes the following form the log-price process Z (where the corresponding cancelation of the identical term Y0,n−1,i is made in the transition dynamic relation for the i-th component as well as the fact that the


noise terms $W_{n,i} = 0$ for $i = 2, \ldots, p$),

\[
\left\{
\begin{aligned}
Y_{0,n,1} &= Y_{0,n-1,1} + A'_n(Y_{0,n-1,1}, \ldots, Y_{0,n-1,p}, X_{0,n-1,1}, \ldots, X_{0,n-1,r}) \\
&\quad + A''_n(Y_{0,n-1,1}, \ldots, Y_{0,n-1,p}, X_{0,n-1,1}, \ldots, X_{0,n-1,r})\,W_{n,1}, \\
Y_{0,n,2} &= Y_{0,n-1,1}, \\
&\ \ \vdots \\
Y_{0,n,p} &= Y_{0,n-1,p-1}, \\
X_{0,n,1} &= C_n(X_{0,n-1,1}, \ldots, X_{0,n-1,r}, U_n), \\
X_{0,n,2} &= X_{0,n-1,1}, \\
&\ \ \vdots \\
X_{0,n,r} &= X_{0,n-1,r-1}, \qquad n = 1, 2, \ldots.
\end{aligned}
\right.
\tag{2.197}
\]

~ ε,n , for Let us construct the corresponding skeleton approximating processes Z ε ∈ (0, ε0 ], according the algorithm described in Subsection 11.5.2. + Let m− ε,n ≤ mε,n be integer numbers, δε,n > 0 and λε,n ∈ R1 , for n = −p + 1, −p + 2, . . .. ± In this case, one can use parameters m± ε,n,i = mε,n−i+1 , δε,n,i = δε,n−i+1 , and λε,n,i = λε,n−i+1 , for i = 1, . . . , p, n = 0, 1, . . .. In this case, the index sets Lε,n , n = 0, 1, . . . take the following form, + Lε,n = {¯ l = (l1 , . . . , lp ), li = m− ε,n,i , . . . , mε,n,i , i = 1, . . . , p} + = {¯ l = (l1 , . . . , lp ), li = m− ε,n−i+1 , . . . , mε,n−i+1 , i = 1, . . . , p}.

(2.198)

Other elements of the space skeleton approximation should be also defined with the use of the above shift construction.

First, the intervals $I_{\varepsilon,n,l}$ should be defined for $l = m^-_{\varepsilon,n}, \ldots, m^+_{\varepsilon,n}$, $n = -p+1, \ldots$,

\[
I_{\varepsilon,n,l} =
\begin{cases}
(-\infty, \delta_{\varepsilon,n}(m^-_{\varepsilon,n} + \frac{1}{2})] + \lambda_{\varepsilon,n} & \text{if } l = m^-_{\varepsilon,n}, \\
(\delta_{\varepsilon,n}(l - \frac{1}{2}), \delta_{\varepsilon,n}(l + \frac{1}{2})] + \lambda_{\varepsilon,n} & \text{if } m^-_{\varepsilon,n} < l < m^+_{\varepsilon,n}, \\
(\delta_{\varepsilon,n}(m^+_{\varepsilon,n} - \frac{1}{2}), \infty) + \lambda_{\varepsilon,n} & \text{if } l = m^+_{\varepsilon,n},
\end{cases}
\tag{2.199}
\]

and then skeleton cubes $\hat I_{\varepsilon,n,\bar l}$ should be defined for $\bar l \in \mathbb{L}_{\varepsilon,n}$, $n = 0, 1, \ldots$,

\[
\hat I_{\varepsilon,n,\bar l} = I_{\varepsilon,n,1,l_1} \times \cdots \times I_{\varepsilon,n,p,l_p}
= I_{\varepsilon,n,l_1} \times \cdots \times I_{\varepsilon,n-p+1,l_p}.
\tag{2.200}
\]

Second, skeleton points $y_{\varepsilon,n,l}$ should be defined for $l = m^-_{\varepsilon,n}, \ldots, m^+_{\varepsilon,n}$, $n = -p+1, \ldots$,

\[
y_{\varepsilon,n,l} = l\delta_{\varepsilon,n} + \lambda_{\varepsilon,n},
\tag{2.201}
\]


and vector skeleton points ~ yε,n,¯l should be defined for ¯ l ∈ Lε,n , n = 0, 1, . . ., ~ yε,n,¯l = (yε,n,1,l1 , . . . , yε,n,p,lp ) = (yε,n,l1 , . . . , yε,n−p+1,lp ).

(2.202)
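A minimal sketch of the index sets and skeleton points (2.198), (2.201), (2.202) follows: for a given time $n$ it enumerates $\mathbb{L}_{\varepsilon,n}$ under the shift convention $m^\pm_{\varepsilon,n,i} = m^\pm_{\varepsilon,n-i+1}$ and forms the vector skeleton points $\vec y_{\varepsilon,n,\bar l}$; the grid parameters are illustrative assumptions.

\begin{verbatim}
import itertools
import numpy as np

# illustrative per-time grid parameters (taken constant in n for simplicity)
def m_minus(n): return -5
def m_plus(n):  return 5
def delta(n):   return 0.1
def lam(n):     return 0.0

def y_point(n, l):
    # skeleton point y_{eps,n,l} = l * delta_{eps,n} + lambda_{eps,n}, cf. (2.201)
    return l * delta(n) + lam(n)

def index_set(n, p):
    # index set L_{eps,n} of (2.198): the i-th index ranges over the grid of time n - i + 1
    ranges = [range(m_minus(n - i + 1), m_plus(n - i + 1) + 1) for i in range(1, p + 1)]
    return list(itertools.product(*ranges))

def y_vector(n, lbar):
    # vector skeleton point (2.202): the i-th component uses the grid of time n - i + 1
    return np.array([y_point(n - i + 1, l) for i, l in enumerate(lbar, start=1)])

L = index_set(0, p=2)
print(len(L), y_vector(0, L[0]))   # 121 index vectors; the first point is (-0.5, -0.5)
\end{verbatim}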

0+ Let m0− ε,n ≤ mε,n , n = −r + 1, −r + 2, . . . be integer numbers. Let us also define index sets L0ε,n = {¯ l0 = (l10 , . . . , lr0 ), li0 = m0− ε,n−i+1 , . . ., 0+ mε,n−i+1 , i = 1, . . . , r}, n = 0, 1, . . .. 0+ Third, non-empty sets Jε,n,l0 ∈ BX , l0 = m0− ε,n , . . . , mε,n , n = −r + 1, . . ., such 0+ Jε,n,l0 , that (a) Jε,n,l00 ∩ Jε,n,l000 = ∅, l00 6= l000 , n = −r + 1, . . .; (b) X = ∪m0− 0 ε,n ≤l ≤mε,n n = −r + 1, . . ., should be constructed. 0+ Non-empty sets Kε,n , n = −r+1, . . . and Kε,n,l ∈ BX , l = m0− ε,n,0 , . . . , mε,n,0 , n = 00 000 −r+1, . . . should be chosen such that (c) Kε,n,l00 ∩Kε,n,l000 = ∅, l 6= l , n = 0, 1, . . .; 0+ Kε,n,l0 = Kε,n , n = −r + 1, . . .. (d) ∪m0− 0 ε,n ≤l ≤mε,n

The standard case is, where X = {x0l , l0 = m− , . . . , m+ } is a finite set and metrics dX (xl00 , xl000 ) = I(xl00 6= xl000 ). In this case, the simplest choice m0± ε,n = ± 0 − + m , n = −r + 1, . . . and to take sets Kε,n,l0 = Jε,n,l0 = {xl0 }, l = m , . . . , m , n = −r + 1, . . .. Sets Jε,n,l can be defined in the following way, for n = −r + 1, . . ., ( 0+ Kε,n,l0 if m0− ε,n,0 ≤ l < mε,n,0 , Jε,n,l0 = (2.203) Kε,n,m0+ ∪ Kε,n if l0 = m0+ ε,n,0 , ε,n,0

and skeleton cubes ˆ Jε,n,¯l0 should be defined for ¯ l0 = (l10 , . . . , lr0 ) ∈ L0ε,n , n = 0, 1, . . ., ˆ Jε,n,¯l0 = Jε,n,l10 × · · · × Jε,n−r+1,lr0 .

(2.204)

The difference in notations (between the above sets ˆ Jε,n,¯l and sets Jε,n,l used in Subsection 2.4.2) can be removed by a natural re-numeration of indices, 0− − 0− 0− − namely, (m0− n , . . ., mε,n−r+1 ) ↔ mε,n,0 , . . . , (mε,n + 1, . . . , mε,n−r+1 ) ↔ mε,n + Qr 0+ + 0+ + + − 1, . . ., (mε,n , . . . , mε,n−r+1 ) ↔ mε,n , where mε,n − mε,n + 1 = j=1 (mε,n−j+1 −m− ε,n−j+1 + 1), for n = 0, 1, . . .. The simplest variant is to choose integers ±m± ε,n ≥ 0, n = 0, 1, . . .. 0+ Fourth, skeleton points xε,n,l0 should be chosen for l0 = m0− ε,n,0 , . . . , mε,n,0 , n = −r + 1, . . . such that xε,n,l0 ∈ Jε,n,l0 , (2.205) and vector skeleton points ~xε,n,¯l0 should be defined for ¯ l0 = (l10 , . . . , lr0 ) ∈ L0ε,n , n = 0, 1, . . ., (2.206) ~xε,n,¯l0 = (xε,n,l10 , . . . , xε,n−r+1,lr0 ). Fifth, skeleton sets Aε,n,¯l,¯l0 and skeleton points, ~zε,n,¯l,¯l0 ∈ Aε,n,¯l,¯l0 can be defined, for ¯ l ∈ Lε,n , ¯ l0 ∈ L0ε,n , n = 0, 1, . . ., in the following way, Aε,n,¯l,¯l0 = ˆIε,n,¯l × ˆ Jε,n,¯l0 ,

(2.207)


and ~zε,n,¯l,¯l0 = (~ yε,n,¯l , ~xε,n,¯l0 ).

(2.208)

Sixth, skeleton functions $h_{\varepsilon,n}(y)$, $y \in \mathbb{R}^1$ should be defined for $n = -p+1, \ldots$,

\[
h_{\varepsilon,n}(y) = y_{\varepsilon,n,l} \quad \text{if } y \in I_{\varepsilon,n,l},\ m^-_{\varepsilon,n} \le l \le m^+_{\varepsilon,n},
\tag{2.209}
\]

and vector skeleton functions $\hat h_{\varepsilon,n}(\vec y)$, $\vec y = (y_1, \ldots, y_p) \in \mathbb{R}^p$, should be defined for $n = 0, 1, \ldots$,

\[
\hat h_{\varepsilon,n}(\vec y) = (h_{\varepsilon,n,1}(y_1), \ldots, h_{\varepsilon,n,p}(y_p)) = (h_{\varepsilon,n}(y_1), \ldots, h_{\varepsilon,n-p+1}(y_p)).
\tag{2.210}
\]
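Below is a sketch of the skeleton functions (2.209)--(2.210): $h_{\varepsilon,n}$ maps a real value to the skeleton point of the interval containing it (the two extreme intervals are unbounded), and $\hat h_{\varepsilon,n}$ applies the shifted one-dimensional functions componentwise; all grid parameters are illustrative assumptions.

\begin{verbatim}
import numpy as np

def h(y, m_minus, m_plus, delta, lam):
    # one-dimensional skeleton function (2.209): h_{eps,n}(y) = y_{eps,n,l} for y in I_{eps,n,l}
    l = int(np.round((y - lam) / delta))
    l = min(max(l, m_minus), m_plus)      # the boundary intervals absorb the tails
    return lam + delta * l

def h_vec(y_vec, params):
    # vector skeleton function (2.210): the i-th component uses the parameters of time n - i + 1
    return np.array([h(y, *prm) for y, prm in zip(y_vec, params)])

# illustrative parameters (m_minus, m_plus, delta, lam) for times n and n - 1 (p = 2)
params = [(-10, 10, 0.1, 0.0), (-20, 20, 0.2, 0.0)]
print(h_vec(np.array([0.97, -3.14]), params))   # -> [ 1.  -3.2]
\end{verbatim}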

Seventh, skeleton functions $h'_{\varepsilon,n}(x)$, $x \in \mathbb{X}$, $n = -r+1, \ldots$, should be defined,

\[
h'_{\varepsilon,n}(x) = x_{\varepsilon,n,l'} \quad \text{if } x \in J_{\varepsilon,n,l'},\ m'^-_{\varepsilon,n,0} \le l' \le m'^+_{\varepsilon,n,0},
\tag{2.211}
\]

and vector skeleton functions $\hat h'_{\varepsilon,n}(\vec x)$, $\vec x = (x_1, \ldots, x_r) \in \mathbb{X}^{(r)}$, should be defined for $n = 0, 1, \ldots$,

\[
\hat h'_{\varepsilon,n}(\vec x) = (h'_{\varepsilon,n,1}(x_1), \ldots, h'_{\varepsilon,n,r}(x_r)) = (h'_{\varepsilon,n}(x_1), \ldots, h'_{\varepsilon,n-r+1}(x_r)).
\tag{2.212}
\]

Eighth, skeleton functions $\bar h_{\varepsilon,n}(\vec z)$, $\vec z = (\vec y, \vec x) \in \mathbb{Z} = \mathbb{R}^p \times \mathbb{X}^{(r)}$, $n = 0, 1, \ldots$, should be defined,

\[
\bar h_{\varepsilon,n}(\vec z) = (\hat h_{\varepsilon,n}(\vec y), \hat h'_{\varepsilon,n}(\vec x)).
\tag{2.213}
\]

The corresponding space skeleton approximating processes $\vec Z_{\varepsilon,n} = (\vec Y_{\varepsilon,n}, \vec X_{\varepsilon,n})$ are defined by the following vector stochastic transition dynamic relation,

\[
\left\{
\begin{aligned}
\vec Y_{\varepsilon,n} &= \hat h_{\varepsilon,n}\big(\hat h_{\varepsilon,n-1}(\vec Y_{\varepsilon,n-1}) + \vec\mu_n(\hat h_{\varepsilon,n-1}(\vec Y_{\varepsilon,n-1}), \hat h'_{\varepsilon,n-1}(\vec X_{\varepsilon,n-1})) \\
&\qquad\qquad + \Sigma_n(\hat h_{\varepsilon,n-1}(\vec Y_{\varepsilon,n-1}), \hat h'_{\varepsilon,n-1}(\vec X_{\varepsilon,n-1}))\,\vec W'_n\big), \\
\vec X_{\varepsilon,n} &= \hat h'_{\varepsilon,n}\big(\bar C_{0,n}(\hat h'_{\varepsilon,n-1}(\vec X_{\varepsilon,n-1}), U_{0,n})\big), \qquad n = 1, 2, \ldots, \\
\vec Y_{\varepsilon,0} &= \hat h_{\varepsilon,0}(\vec Y_{0,0}), \\
\vec X_{\varepsilon,0} &= \hat h'_{\varepsilon,0}(\vec X_{0,0}),
\end{aligned}
\right.
\tag{2.214}
\]

where the vector function $\vec\mu_n(\vec y, \vec x)$, the matrix functions $\Sigma_n(\vec y, \vec x)$ and the vector functions $\vec C_n(\vec x, u)$ are given by relations (2.182) and (2.184).

The vector stochastic transition dynamic relation (2.214) can be re-written in the following equivalent form as a system of stochastic transition dynamic


relations,

                                                                                

 Yε,n,1 = hε,n hε,n−1 (Yε,n−1,1 ) + A0n (hε,n−1 (Yε,n−1,1 ), . . . , hε,n−p (Yε,n−1,p ), h0ε,n−1 (Xε,n−1,1 ), . . . , h0ε,n−r (Xε,n−1,r )) + A00n (hε,n−1 (Yε,n−1,1 ), . . . , hε,n−p (Yε,n−1,k )  h0ε,n−1 (Xε,n−1,1 ), . . . , h0ε,n−r (Xε,n−1,r ))Wn , Yε,n,2 = hε,n−1 (Yε,n−1,1 ), ... ... Yε,n,k = hε,n−p+1 (Yε,n−1,p−1 ), Xn,1 = Cn (Xn−1,1 , . . . , Xn−1,r , Un ), Xn,2 = Xn−1,1 , ... ... Xn,r = Xn−1,r−1 ,

(2.215)

n = 1, 2, . . . , Yε,0,1 = hε,0 (Y0,0 ), ... ... Yε,0,−p+1 = hε,−p+1 (Y0,−p+1 ), Xε,0,1 = h0ε,0 (X0,0 ), ... ... Xε,0,−r+1 = h0ε,−r+1 (X0,−r+1 ),

where the functions $A'_n(\vec y, \vec x)$, $A''_n(\vec y, \vec x)$ are given by relation (2.183).

The log-price process $\vec Z_{\varepsilon,n}$ is, for every $\varepsilon \in (0, \varepsilon_0]$, a nonlinear autoregressive Markov chain, with the phase space $\mathbb{Z} = \mathbb{R}^p \times \mathbb{X}^{(r)}$ and one-step transition probabilities $P_{\varepsilon,n}(\vec z, A) = \mathsf{P}\{\vec Z_{\varepsilon,n} \in A / \vec Z_{\varepsilon,n-1} = \vec z\}$, $\vec z \in \mathbb{Z}$, $A \in \mathcal{B}_\mathbb{Z}$, $n = 1, 2, \ldots$.

Moreover, the log-price process $\vec Z_{\varepsilon,n}$ is, for every $\varepsilon \in (0, \varepsilon_0]$, a skeleton Markov chain. Its transition probabilities are determined by the transition probabilities of the Markov chain $\vec Z_{0,n}$ via the following formula,

\[
\begin{aligned}
P_{\varepsilon,n}(\vec z, A) &= \sum_{\vec z_{\varepsilon,n,\bar l,\bar l'} = (\vec y_{\varepsilon,n,\bar l},\, \vec x_{\varepsilon,n,\bar l'}) \in A} P_{0,n}\big((\hat h_{\varepsilon,n-1}(\vec y), \hat h'_{\varepsilon,n-1}(\vec x)), A_{\varepsilon,n,\bar l,\bar l'}\big) \\
&= \sum_{\vec z_{\varepsilon,n,\bar l,\bar l'} = (\vec y_{\varepsilon,n,\bar l},\, \vec x_{\varepsilon,n,\bar l'}) \in A}
\mathsf{P}\big\{\big(\hat h_{\varepsilon,n-1}(\vec y) + \vec\mu_n(\hat h_{\varepsilon,n-1}(\vec y), \hat h'_{\varepsilon,n-1}(\vec x))
+ \Sigma_n(\hat h_{\varepsilon,n-1}(\vec y), \hat h'_{\varepsilon,n-1}(\vec x))\,\vec W'_n, \\
&\qquad\qquad\qquad\qquad \bar C_{0,n}(\hat h'_{\varepsilon,n-1}(\vec x), U_{0,n})\big) \in A_{\varepsilon,n,\bar l,\bar l'}\big\}.
\end{aligned}
\tag{2.216}
\]
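Conditionally on the current skeleton state, the first component in (2.215) is Gaussian, so the probabilities entering formula (2.216) reduce to increments of the normal distribution function over the intervals $I_{\varepsilon,n+1,l}$ of (2.199). The following sketch computes this one-dimensional distribution; the drifted value, the volatility and the grid are illustrative assumptions.

\begin{verbatim}
import numpy as np
from scipy.stats import norm

def first_component_probs(y_drifted, vol, m_minus, m_plus, delta, lam):
    """P{ y_drifted + vol * W in I_{eps,n+1,l} }, l = m_minus, ..., m_plus, W standard normal.

    Here y_drifted = h(y_1) + A'_{n+1}(...) and vol = A''_{n+1}(...), cf. (2.215)-(2.216).
    """
    # interval endpoints of (2.199); the first and the last interval are unbounded
    edges = lam + delta * (np.arange(m_minus, m_plus + 2) - 0.5)
    edges[0], edges[-1] = -np.inf, np.inf
    cdf = norm.cdf((edges - y_drifted) / vol)
    return np.diff(cdf)                  # probabilities of the intervals I_{eps,n+1,l}

probs = first_component_probs(0.05, 0.2, -10, 10, 0.1, 0.0)
print(probs.sum(), probs.argmax())       # the probabilities sum to 1, mass is centred near 0.05
\end{verbatim}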


As far as the initial distribution $P_{\varepsilon,0}(A) = \mathsf{P}\{\vec Z_{\varepsilon,0} \in A\}$, $A \in \mathcal{B}_\mathbb{Z}$ is concerned, it takes, for every $\varepsilon \in (0, \varepsilon_0]$, the following form,

\[
P_{\varepsilon,0}(A) = \sum_{\vec z_{\varepsilon,0,\bar l,\bar l'} = (\vec y_{\varepsilon,0,\bar l},\, \vec x_{\varepsilon,0,\bar l'}) \in A} P_{0,0}(A_{\varepsilon,0,\bar l,\bar l'})
= \sum_{\vec z_{\varepsilon,0,\bar l,\bar l'} = (\vec y_{\varepsilon,0,\bar l},\, \vec x_{\varepsilon,0,\bar l'}) \in A}
\mathsf{P}\big\{(\hat h_{\varepsilon,0}(\vec Y_{0,0}), \hat h'_{\varepsilon,0}(\vec X_{0,0})) \in A_{\varepsilon,0,\bar l,\bar l'}\big\}.
\tag{2.217}
\]

We assume that the pay-off functions $g(n, e^{\vec y}, \vec x)$, $\vec y = (y_1, \ldots, y_p) \in \mathbb{R}^p$, $\vec x = (x_1, \ldots, x_r) \in \mathbb{X}^{(r)}$ do not depend on $\varepsilon$.

Let us also recall the class $\mathcal{M}^{(\varepsilon)}_{\max,n,N}$ of all Markov moments $\tau_{\varepsilon,n}$ for the log-price process $\vec Z_{\varepsilon,n}$ such that (a) $n \le \tau_{\varepsilon,n} \le N$, (b) event $\{\tau_{\varepsilon,n} = m\} \in \mathcal{F}^{(\varepsilon)}_{n,m} = \sigma[\vec Z_{\varepsilon,l}, n \le l \le m]$, $n \le m \le N$.

In this case, the reward functions are defined for the log-price process $\vec Z_{\varepsilon,n}$ by the following relation, for $\vec z = (\vec y, \vec x) \in \mathbb{Z}$ and $n = 0, 1, \ldots, N$,

\[
\phi_{\varepsilon,n}(\vec y, \vec x) = \sup_{\tau_{\varepsilon,n} \in \mathcal{M}^{(\varepsilon)}_{\max,n,N}} \mathsf{E}_{(\vec y, \vec x),n}\, g(\tau_{\varepsilon,n}, e^{\vec Y_{\varepsilon,\tau_{\varepsilon,n}}}, \vec X_{\varepsilon,\tau_{\varepsilon,n}}).
\tag{2.218}
\]

Probability measures $P_{\varepsilon,n}(\vec z, A)$, $\vec z \in \mathbb{Z} = \mathbb{R}^p \times \mathbb{X}^{(r)}$, $n = 1, 2, \ldots$ are concentrated on finite sets, for every $\varepsilon \in (0, \varepsilon_0]$. This obviously implies that, for every $\varepsilon \in (0, \varepsilon_0]$ and $\vec z = (\vec y, \vec x) \in \mathbb{Z}$, $n = 1, 2, \ldots$, the reward functions

\[
|\phi_{\varepsilon,n}(\vec y, \vec x)| < \infty.
\tag{2.219}
\]

It is also useful to note that φε,N (~ y , x) = g(N, e~y , ~x), (~ y , ~x) ∈ Z. An analogue of Lemma 2.4.2 can be formulated for the above model. However, the corresponding system of linear equations can be simplified taking into account the specific shift structure of the dynamic relation (2.214), which is well seen from the equivalent form (2.215). By the definition of sets Iε,n,¯l , ¯ l ∈ Lε,n , n = 0, 1, . . . and Jε,n,¯l0 , ¯ l0 ∈ L0ε,n , n = (r) 0, 1, . . ., for every ~ y ∈ Rp , ~x ∈ X and n = 0, 1, . . ., there exist the unique ¯ lε,n (~ y) = 0 0 0 0 ¯ (lε,n,1 (~ y ), . . . , lε,n,p (~ y )) ∈ Lε,n and lε,n (~x) = (lε,n,1 (~x), . . . , lε,n,r (~x)) ∈ Lε,n such that ~ y ∈ Iε,n,¯lε,n (~y) and ~x ∈ Jε,n,¯lε,n 0 (~ x) . The following lemma is the direct corollary of Lemma 2.4.2. Lemma 2.5.2. Let a modulated autoregressive conditional heteroskedastic log~0,n is defined by the vector modulated stochastic difference equaprice process Y tion (2.181), while the corresponding approximating space-skeleton log-price pro~ε,n is defined, for every ε ∈ (0, ε0 ], by the dynamic transition relation cess Y (2.214). Then the reward functions φε,n (~ y , ~x) and φn+m (~ yε,n+m,¯l , ~xε,n+m,¯l0 ), ¯ l ∈ 0 0 ¯ Lε,n+m , l ∈ Lε,n+m , m = 1, . . . N − n are, for every ε ∈ (0, ε0 ], ~ y ∈ Rp , ~x ∈ X(r) and n = 0, . . . , N −1, the unique solution for the following recurrence finite system


of linear equations,  xε,N,¯l0 ), φε,N (~ yε,N,¯l , ~ xε,N,¯l0 ) = g(N, ey~ε,N,l , ~       ¯l = (l1 , . . . , lp ) ∈ Lε,N , ¯l0 = (l10 , . . . , lr0 ) ∈ L0ε,N ,       y ~ε,n+m,l¯  φ (~ y , ~ x ,~ xε,n+m,¯l0 ),  0 ) = max g(n + m, e ¯ ¯ ε,n+m ε,n+m, l ε,n+m, l      Pm0+ Pm+  ε,n+m+1 ε,n+m+1   φε,n+m+1 (~ yε,n+m+1,(l00 ,l1 ,...,lp−1 ,) , −  00 1 =m0− l000 =m ,l l00  1 ε,n+m+1 ε,n+m+1 1 p+1     0  yε,n+m,¯l , ~ xε,n+m,¯l0 )  ,l01 ,...,l0r−1 ) ) × P{yε,n+m,l1 + An+m+1 (~  ~xε,n+m+1,(l000 1     + A00 yε,n+m,¯l , ~ xε,n+m,¯l0 )Wn+m+1 ∈ Iε,n+m+1,l00 ,  n+m+1 (~  1        Cn+m+1 (~ xε,n+m,¯l0 , Un+m+1 ) ∈ Jε,n+m+1,l000 } ,  1    ¯ l = (l1 , . . . , lk ) ∈ Lε,n+m , ¯ l0 = (l10 , . . . , lk0 ) ∈ L0ε,n+m ,


(2.220)

   m = N − n − 1, . . . , 1,         φ (~ y , ~ x ) = max g(n, ey~ , ~ x), ε,n     + 0+  Pmε,n+1 Pmε,n+1   φε,n+1 (~ yε,n+1,(l00 ,lε,n,1 (~z),...,lε,n,k−1 (~z)) ,  −  1 l00 =m l000 =m0−  1 ε,n+1 1 ε,n+1      ~ xε,n+1,(l000 ,l0 z) (~ z ),...,l0ε,n,r−1 (~ z )) ) × P{yε,n,lε,n,1 (~  1 ε,n,1     0  +An+1 (~ yε,n,¯lε,n (~z) , ~ xε,n,¯l0 (~z) )   ε,n    00  + An+1 (~ yε,n,¯lε,n (~z) , ~ xε,n,¯l0 (~z) )Wn+1 ∈ Iε,n+1,l00   ε,n 1        } ,  Cn+1 (~xε,n,¯l0ε,n (~z) , Un+1 ) ∈ Jε,n+1,l000 1 (ε)

while the optimal expected reward $\Phi_\varepsilon = \Phi_\varepsilon(\mathcal{M}^{(\varepsilon)}_{\max,0,N})$ is defined by the following formula, for every $\varepsilon \in (0, \varepsilon_0]$,

\[
\Phi_\varepsilon = \sum_{\bar l \in \mathbb{L}_{\varepsilon,0},\, \bar l' \in \mathbb{L}'_{\varepsilon,0}} P_{0,0}(A_{\varepsilon,0,\bar l,\bar l'})\, \phi_{\varepsilon,0}(\vec y_{\varepsilon,0,\bar l}, \vec x_{\varepsilon,0,\bar l'}).
\tag{2.221}
\]

Obviously, |Φε | < ∞, for every ε ∈ (0, ε0 ].
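The recurrence system (2.220) and formula (2.221) amount to backward induction over the finite skeleton state space. A generic sketch is given below; the skeleton states are simply indexed $0, 1, 2, \ldots$, and the pay-off arrays, transition matrices and initial distribution are placeholders that the user has to supply (the toy numbers are illustrative).

\begin{verbatim}
import numpy as np

def skeleton_rewards(payoff, transitions, initial):
    """Backward induction (2.220) and optimal expected reward (2.221).

    payoff[n]      : array of g(n, z) over the skeleton states, n = 0, ..., N
    transitions[n] : row-stochastic matrix of P_{eps,n+1}(z, z'), n = 0, ..., N - 1
    initial        : initial distribution P_{eps,0} over the skeleton states
    """
    N = len(payoff) - 1
    phi = np.asarray(payoff[N], dtype=float)          # phi_{eps,N} = g(N, .)
    for n in range(N - 1, -1, -1):
        continuation = transitions[n] @ phi           # conditional expectation of phi_{eps,n+1}
        phi = np.maximum(payoff[n], continuation)     # optimal stopping recursion
    return phi, float(initial @ phi)                  # phi_{eps,0} and Phi_eps

# toy example: 3 skeleton states, N = 2, uniform transitions (all numbers illustrative)
pay = [np.array([0.0, 1.0, 2.0])] * 3
P = [np.full((3, 3), 1.0 / 3.0)] * 2
phi0, Phi = skeleton_rewards(pay, P, np.array([0.5, 0.3, 0.2]))
print(phi0, Phi)
\end{verbatim}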

2.5.3 Convergence of reward functions for space skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes

In this case, the space skeleton approximations are constructed in the way analogous to those used in Sections 1.5 and 2.4 for autoregressive log-price processes with Gaussian noise terms.


In particular, we use the same structural skeleton condition N5 and impose the same conditions B5 [¯ γ ], and I5 on the pay-off functions. The model under consideration is also a particular case of the model considered in Section 2.4. Condition O10 should be specified taking into account the concrete form of ~ n (~x, u) are defined by the dynamic transition functions µ ~ n (~ y, ~ x), Σn (~ y , ~x) and C 0 relations (2.182) and (2.184), in which functions An (~ y , ~x) and A00n (~ y , ~x) are given in relation (2.183). For simplicity, we restrict consideration by the case, where the smoothing transformation function g 21 (y) is continuous. Let assume that the following condition holds: O11 : There exist sets Xn ∈ BX(r) , n = 0, . . . , N and Un ∈ BU , n = 1, . . . , N such that: (a) an,i (~xε ) → an,i (~x0 ), dn,i (~xε ) → dn,i (~x0 ), en,j (~xε ) → en,j (~x0 ) fn,j (~ xε ) → fn,j (~x0 ) Cn (~xε , u) → Cn (~x0 , u) as ε → 0, for any ~xε → ~x0 ∈ Xn−1 as ε → 0, i = 0, . . . , p, j = 1, . . . , p, u ∈ Un , n = 1, . . . , N ; (b1 ) P{Un ∈ Un } = 1, n = 1, . . . , N ; (b2 ) g 12 (y), y ≥ 0 is a continuous function; (c) P{ ~ y0 +  0 00 0 ~ ~ µ ~ n (~ y0 , ~x0 ) + Σn (~ y0 , ~x0 )Wn , Cn (~x0 , U0,n ) ∈ Zn ∪ Zn } = 0 for every ~z0 = (~ y0 , ~ x0 ) ∈ Z0n−1 ∪ Z00n−1 and n = 1, . . . , N , where Z0n , n = 0, . . . , N are sets introduced in condition I14 and Z00n = Rk × Xn , n = 0, . . . , N . The following theorem is a corollary of Theorem 2.4.3. Theorem 2.5.3. Let a modulated autoregressive conditional heteroskedastic ~0,n is defined by the vector modulated stochastic stochastic log-price processes Y difference equation (2.181) and parameter κ = 12 , while the corresponding approx~ε,n is defined, for every ε ∈ (0, ε0 ], by imating space-skeleton log-price process Y the dynamic transition relation (2.214), where the corresponding dynamic transiy , ~x) are given by relation (2.183). Let also cony , ~x) and A00n (~ tion functions A0n (~ ditions B5 [¯ γ ] holds for some vector parameter γ¯ = (γ1 , . . . , γp ) with non-negative components, and, also, conditions G6 , N5 , I5 , and O11 hold. Then, for every n = 0, 1, . . . , N , the following relation takes place for any ~zε = (~ yε , ~xε ) → ~z0 = (~ y0 , ~ x0 ) ∈ Z0n ∩ Z00n , φε,n (~ yε , ~xε ) → φ0,n (~ y0 , ~x0 ) as ε → 0.

(2.222)

Proof. The space skeleton approximation model considered in Theorem 2.5.3 is a particular case of the space skeleton approximation model considered in Theorem 2.4.3. Conditions N5 , B5 [¯ γ ] and I5 coincide with conditions, respectively, N8 , B7 [¯ γ] and I7 if k = p. As was shown in Subsection 2.5.1, condition G6 implies, in this case, that condition G5 holds.


Condition O11 is the variant of condition O10 , for the above model considered in Theorem 2.5.1. Indeed, conditions O11 (a) and (b1 ) – (b2 ) imply that conditions O10 (a) and (b) hold for functions A0n (~ y , ~x) and A00n (~ y , ~x) given in relations (2.183), for the sets Xn and Un penetrating condition O11 . Thus, all conditions of Theorem 2.4.3 hold. By applying this theorem, we get the convergence relation (2.222). 

2.5.4 Convergence of optimal expected rewards for space skeleton approximations of modulated of modulated autoregressive conditional heteroskedastic log-price processes Let us now give conditions for convergence for optimal expected rewards Φε in the above space skeleton approximation model. ¯ = (A1 (β), ¯ . . . , Ap (β)) ¯ ~ β) ¯ with the function A( In this case, condition D11 [β], given by relations (2.189) and (2.190), should be assumed to hold for some vector parameter β¯ = (β1 , . . . , βp ), with non-negative components, and the corresponding vectors β¯i = (βi,1 , . . . , βi,p ), with components βi,j = I(i = j), i, j = 1, . . . , p. Condition K10 should be replaced by the following condition imposed on the ~ 0,0 ∈ A}: initial distribution P0,0 (A) = P{Z K11 : P0,0 (Z00 ∩ Z000 ) = 1, where Z00 and Z000 are sets introduced, respectively, in conditions I5 and O11 . The following theorem is a corollary of Theorem 2.4.4. Theorem 2.5.4. Let a modulated autoregressive conditional heteroskedastic ~0,n is defined by the vector modulated stochastic stochastic log-price processes Y difference equation (2.181) and parameter κ = 21 , while the corresponding approx~ε,n is defined, for every ε ∈ (0, ε0 ], by imating space-skeleton log-price process Y the dynamic transition relation (2.214), where the corresponding dynamic transition functions A0n (~ y , ~x) and A00n (~ y , ~x) are given by relation (2.183). Let also con¯ holds for some vector parameters γ¯ = (γ1 , . . . , γp ) and ditions B5 [¯ γ ] and D11 [β] β¯ = (β1 , . . . , βp ) such that, for every i = 1, . . . , p either βi > γi > 0 or βi = γi = 0, and also conditions G6 , N5 , I5 , O11 and K11 hold. Then, the following relation takes place, Φε → Φ0 as ε → 0. (2.223) Proof. Theorem 2.5.4 is a corollary of Theorem 2.4.4. ¯ and K11 are just re-formulation, respectively, of conditions Conditions D11 [β] ¯ and K10 used in Theorem 2.4.4. D10 [β] Other conditions of these theorem also holds that was pointed out in the proof of Theorem 2.5.3.


By applying Theorem 2.4.4 we get convergence relation (2.223). 

2.6 Modulated generalized autoregressive conditional heteroskedastic LPP

In this section, we present results concerning space-skeleton approximations for rewards of American-type options for modulated generalized autoregressive conditional heteroskedastic (GARCH) log-price processes with Gaussian noise terms.

2.6.1 Upper bounds for option rewards of modulated generalized autoregressive conditional heteroskedastic log-price processes

Let us consider a modulated generalized autoregressive conditional heteroskedastic log-price process with Gaussian noise terms, which is defined by the following modulated stochastic difference equation,

\[
Y_n - Y_{n-1} = g_\kappa(\sigma_n)W_n, \quad n = 1, 2, \ldots,
\tag{2.224}
\]

where

\[
\begin{aligned}
\sigma_n = \big(d_{n,0}(X_{n-1}, \ldots, X_{n-r}) &+ d_{n,1}(X_{n-1}, \ldots, X_{n-r})(Y_{n-1} - e_{n,1}(X_{n-1}, \ldots, X_{n-r})Y_{n-2})^2 \\
&+ \cdots + d_{n,p-1}(X_{n-1}, \ldots, X_{n-r})(Y_{n-p+1} - e_{n,p-1}(X_{n-1}, \ldots, X_{n-r})Y_{n-p})^2 \\
&+ b_{n,1}(X_{n-1}, \ldots, X_{n-r})\sigma^2_{n-1} + \cdots + b_{n,q}(X_{n-1}, \ldots, X_{n-r})\sigma^2_{n-q}\big)^{\frac{1}{2}}, \quad n = 1, 2, \ldots,
\end{aligned}
\tag{2.225}
\]

~0 = (Y0 , . . . , Y−p+1 , σ0 , . . . , σ−q+1 ) is a (p + q)-dimensional random vecand (a) Y ~ 0 = (X0 , . . . , X−r+1 ) is a r-dimensional tor with real-valued components; (b) X random vector with components taking values in the space X; (c) (W1 , U1 ), (W2 , U2 ), . . . is a sequence of i.i.d. random vectors taking values in the space R1 × U, moreover, W1 , W2 , . . . is a sequence of real-valued i.i.d. standard normal variables with mean value 0 and variance 1, while random variables U1 , U2 , . . . have a conditional distribution Gw (A) = P{Un ∈ A/Wn = w}; (d) the random vector ~ 0 = (Y ~0 , X ~ 0 ) and the random sequence (W1 , U1 ), (W2 , U2 ) . . . are independent; Z (e) p and r are positive integer numbers; (f) dn,i (~x) = dn,i (x1 , . . . , xr ), bn,j (~x) = bn,j (x1 , . . . , xr ), ~x = (x1 , . . . , xr ) ∈ X(r) , i = 0, . . . , p − 1, j = 1, . . . , q, n = 1, 2, . . . are measurable functions acting from the space X(r) to the interval [0, ∞); (g) en,i (~ x) = en,i (x1 , . . . , xr ), ~x = (x1 , . . . , xr ) ∈ X(r) , i = 1, . . . , p − 1, n = 1, 2, . . .


are measurable functions acting from the space X(r) to the interval [0, 1]; (h) Cn (~x, u) = Cn (x1 , . . . , xr , u), (~x, u) ∈ X(r) × U, n = 1, 2, . . . are measurable functions acting from the space Rr × U to the space X; (i) gκ (·) is a function from the class Gκ , for some κ ≥ 0. As is Section 2.3, we restrict consideration by CIR(p) type models, where parameter κ = 21 . The above model is a particular case of the model considered in Section 2.4. In the this case, k = p+q, and the components of corresponding modulated nonlinear autoregressive stochastic volatility log-price process, ~0,n = (Y0,n,1 , . . . , Y0,n,p , Y0,n,p+1 , . . . , Y0,n,p+q ) Y = (Yn , . . . , Yn−p+1 , σn , . . . , σn−q+1 ), n = 1, 2, . . . ,

(2.226)

and a r-dimensional stochastic process, ~ n = (Xn,1 , . . . , Xn,r ) X = (Xn , . . . , Xn−r+1 ), n = 0, 1, . . . .

(2.227)
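For concreteness, the following sketch simulates the modulated GARCH recursion (2.224)--(2.225) for $p = 2$, $q = 1$, $r = 1$ and the smoothing transformation $g_{1/2}(y) = \sqrt{y}$; all coefficient functions and numerical values are illustrative assumptions.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 50

# illustrative coefficient functions for p = 2, q = 1, r = 1, X = {0, 1}
def d(i, x):  return [0.02, 0.10][i] * (1.0 if x == 0 else 2.0)   # d_{n,0}, d_{n,1}
def e1(x):    return 0.5                                          # e_{n,1}
def b1(x):    return 0.3                                          # b_{n,1}
def C(x, u):  return 1 - x if u < 0.1 else x                      # modulating index transition

g_half = np.sqrt   # smoothing transformation g_{1/2}, assumed to be the square root

Y, Y_prev, sigma_prev, X = 0.0, 0.0, 0.1, 0
for n in range(1, N + 1):
    W, U = rng.standard_normal(), rng.uniform()
    # sigma_n from (2.225) with p = 2, q = 1
    sigma = np.sqrt(d(0, X) + d(1, X) * (Y - e1(X) * Y_prev) ** 2 + b1(X) * sigma_prev ** 2)
    Y, Y_prev = Y + g_half(sigma) * W, Y                           # (2.224)
    sigma_prev, X = sigma, C(X, U)
print("terminal log-price:", Y, "terminal sigma:", sigma_prev)
\end{verbatim}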

We can always assume that the sequence of random variables Wn , n = 1, 2, . . . is the sequence of the first components of the sequence of (p + q)-dimensional i.i.d. 0 0 ~ n0 = (Wn,1 standard Gaussian random vectors W , . . . , Wn,p+q ), n = 1, 2, . . ., with 0 0 0 EW1,i = 0, EW1,i W1,j = I(i = j), i, j = 1, . . . , p + q, i.e., 0 Wn = Wn,1 , n = 1, 2, . . . .

(2.228)

Let us also consider $(p+q)$-dimensional i.i.d. Gaussian vectors,

\[
\vec W_n = (W_{n,1}, \ldots, W_{n,p+q}) = (W_n, 0, \ldots, 0), \quad n = 1, 2, \ldots.
\tag{2.229}
\]

~ n can be obtained as a linear transformation of vectors W ~ n0 , namely, Vectors W 0 ~ ~ Wn = ΣWn , n = 1, 2, . . ., where (p + q) × (p + q) matrix Σ takes the following form, Σ = kσi,j k = kI(i = 1)I(j = 1), i, j = 1, . . . , p + qk. (2.230) The modulated stochastic difference equation (2.224) can be re-written in the following equivalent form of modulated nonlinear stochastic difference equation, ~0,n−1 , X ~ 0,n−1 ) Y0,n − Y0,n−1 = A0n (Y ~0,n−1 , X ~ 0,n−1 )Wn , n = 1, 2, . . . , + A00n (Y

(2.231)

y , ~x) and A00n (~ y , ~x) are given, for ~ y= where the transition dynamic functions A0n (~ (r) (y1 , . . . , yp+q ) ∈ Rp+q , ~x = (x1 , . . . , xr ) ∈ X , n = 1, 2, . . . by the following relation, A0n (~ y , ~x) = 0, A00n (~ y , ~x) = g 21 (Bn (~ y , ~x)), (2.232)


where

\[
B_n(\vec y, \vec x) = \big(d_{n,0}(\vec x) + d_{n,1}(\vec x)(y_1 - e_{n,1}(\vec x)y_2)^2 + \cdots + d_{n,p-1}(\vec x)(y_{p-1} - e_{n,p-1}(\vec x)y_p)^2
+ b_{n,1}(\vec x)y^2_{p+1} + \cdots + b_{n,q}(\vec x)y^2_{p+q}\big)^{\frac{1}{2}},
\tag{2.233}
\]

as well as in the following equivalent vector form,  ~ ~0,n−1 + µ ~0,n−1 , X ~ 0,n−1 ) Y0,n = Y ~ n (Y    ~0,n−1 , X ~ 0,n−1 )W ~ n0 ,  +Σn (Y

 ~  X   0,n

~ n (X ~ 0,n−1 , Un ), =C n = 1, 2, . . . ,

where (p + q)-dimensional vectors functions µ ~ n (~ y , ~x) = (µn,1 (~ y , ~x), . . . , µn,p+q (~ y, ~x)), ~ y = (y1 , . . ., yp+q ) ∈ Rp+q , ~x = (x1 , . . . , xr ) ∈ X(r) , n = 1, 2, . . . and (p + q) × (p + q) matrix functions Σn (~ y , ~x) = kσn,i,j (~ y , ~x)k, ~ y = (y1 , . . . , yp+q ) ∈ Rp+q , ~x ∈ X(r) , n = 1, 2, . . . are defined by the following relations,   00   0

 y1 − y2  ..   .  yp−1 − yp  µ ~ n (~ y, ~ x) =   Bn (~y, ~x) − yp+1  yp+2 − yp+1   ..  .

             , Σn (~y, ~x) =           

yp+q−1 − yp+q

An (~ y, ~ x) 0 .. . .. . .. . .. . 0

0 0 .. . .. . .. . .. . 0

... ...

...

0 0 .. . .. . .. . .. . 0

      ,     

(2.234)

~ n (~x, u) = (Cn,1 (~x, u), . . . , Cn,r (~x, u)), ~x = (x1 , while r-dimensional functions C (r) . . . , xr ) ∈ X , u ∈ U, n = 1, 2, . . . are defined by the following relations,

  ~ n (~x, u) =  C  

Cn (~x, u) x1 .. . xr−1

   . 

(2.235)

Let us assume that the following condition holds: p p  G7 : max0≤n≤N −1, i=0,...,p−1,j=1,...,q sup~x∈X(r) dn+1,i (~x) + bn+1,j (~x) < K42 , for some 0 < K42 < ∞. The following lemma is a particular variant of Lemma 2.4.2. Lemma 2.6.2 Let the modulated generalized autoregressive conditional het~0,n are given, eroskedastic log-price process Y0,n and its equivalent vector version Y respectively, by the modulated stochastic difference equations (2.224) and (2.234),


parameter $\kappa = \frac{1}{2}$, and condition $G_7$ holds. In this case, condition $G_5$ holds, i.e., there exist constants $0 < K_{43} < \infty$ and $0 \le K_{44,l} < \infty$, $l = 1, \ldots, p+q$ such that,

\[
\max_{0 \le n \le N-1} \ \sup_{(\vec y, \vec x) \in \mathbb{R}^{p+q} \times \mathbb{X}^{(r)}} \frac{|A'_{n+1}(\vec y, \vec x)| + (A''_{n+1}(\vec y, \vec x))^2}{1 + \sum_{l=1}^{p+q} K_{44,l}|y_l|} < K_{43}.
\tag{2.236}
\]

Proof. Since A0n+1 (~ y , ~x) ≡ 0. this term disappear in inequality (2.236). The following inequalities take place, for every ~ y = (y1 , . . . , yp+q ) ∈ Rp+q , ~x ∈ X(r) , n = 1, 2, . . ., Bn (~ y , ~x) ≤ (dn,0 (~x) + dn,1 (~x)(|y1 | + |y2 |)2 + · · · + dn,p−1 (~x)(|yp−1 | + |yp |)2

 12 2 2 + bn,1 (~x)yp+1 + · · · + bn,q (~x)yp+q p p ≤ dn,0 (~x) + dn,1 (~x)(|y1 | + |y2 |) + · · · p + dn,p−1 (~x)(|yp−1 | + |yp |) p p  + bn,1 (~x)|yp+1 | + · · · + bn,q (~x)|yp+q | p p ≤ dn,0 (~x) + dn,1 (~x)|y1 | p p + ( dn,1 (~x) + dn,2 (~x))|y2 | + · · · p p p + ( dn,p−2 (~x) + dn,p−1 (~x))|yp−1 | + dn,p−1 (~x)|yp | p p + bn,1 (~x)|yp+1 | + · · · + bn,q (~x)|yp+q |. (2.237) It follows from relation (2.237) that, for every ~ y = (y1 , . . . , yp+q ) ∈ Rp+q , ~x ∈ X(r) , n = 1, 2, . . ., p |Bn (~ y , ~x)| < 1 + dn,0 (~x) p p p + (1 + dn,1 (~x))|y1 | + (1 + dn,1 (~x) + dn,2 (~x))|y2 | p p + · · · + (1 + dn,p−2 (~x) + dn,p−1 (~x))|yp−1 | p p + (1 + dn,p−1 (~x))|yp | + (1 + bn,1 (~x))|yp+1 | p + · · · + (1 + bn,q (~x))|yp+q |. (2.238) Also, using inequality (2.53) for function g 21 (y), and relation (2.237) we get, for every ~ y = (y1 , . . . , yp+q ) ∈ Rp+q , ~x ∈ X(r) , n = 1, 2, . . ., (A00n (~ y , ~x)2 < 2L2 1 + (dn,0 (~x) + dn,1 (~x)(|y1 | + |y2 |)2 + · · · + dn,p−1 (~x)(|yp−1 | + |yp |)2

 12  2 2 + bn,1 (~x)yp+1 + · · · + bn,q (~x)yp+q p p ≤ 2L2 1 + dn,0 + dn,1 (~x)(|y1 | + |y2 |)


p + · · · + dn,p−1 (~x)(|yp−1 | + |yp |) p p  + bn,1 (~x)|yp+1 | + · · · + bn,q (~x)|yp+q | p p ≤ 2L2 (1 + dn,0 (~x)) + 2L2 dn,1 (~x)|y1 | p p + 2L2 ( dn,1 (~x) + dn,2 (~x))|y2 | p p + · · · + 2L2 ( dn,p−2 (~x) + dn,p−1 (~x))|yp−1 | p p + 2L2 dn,p−1 (~x)|yp | + 2L2 bn,1 (~x)|yp+1 | p + · · · + 2L2 bn,q (~x)|yp+q |.

(2.239)

Condition G7 and relations (2.99) – (2.100) readily imply that inequality (2.97) holds with the constants 0 < K43 < ∞ and 0 ≤ K44,l < ∞, l = 1, . . . , p + q given by the following formulas, p K43 = max sup (1+2L2 )(1+ dn+1,0 (~x)), 0≤n≤N −1 ~ x∈X(r)

K44,l =

                         

max0≤n≤N −1 sup~ x∈X(r)

√ d (~ x) √n+1,1

max0≤n≤N −1 sup~ (1+ x∈X(r)

dn+1,0 (~ x))

if l = 1, √

max0≤n≤N −1 sup~ ( x∈X(r)



dn,l−1 (~ x)+



max0≤n≤N −1 sup~ (1+ x∈X(r)

dn+1,l (~ x))

dn+1,0 (~ x))

if l = 2, . . . , p − 1, √  max0≤n≤N −1 sup~x∈X(r) dn+1,p−1 (~x)   √   max0≤n≤N −1 sup~ x)) (1+ dn+1,0 (~  x∈X(r)      if l = p,    √   max0≤n≤N −1 sup~ bn+1,l (~ x)  x∈X(r)    max0≤n≤N −1 sup (r) (1+√dn+1,0 (~x))  ~ x∈X     if l = p + 1, . . . , p + q.

(2.240)

It follows from inequality (2.54) that condition G5 holds, for example, with constants K43 and K44,l = 1, l = 1, . . . , p + q, which can replace, respectively, constants K33 and K34,l , l = 1, . . . , k (recall that, in this case, k = p + q) penetrating condition G5 .  Thus, Theorems 2.4.1 and 2.4.2 can be applied to the above log-price process Yn given by the transition dynamic relations (2.224) – (2.225) that yield the upper bounds, respectively, for reward functions and optimal expected rewards given in these theorems for these log-price processes. In this case, we consider pay-of functions g(n, e~y ), ~ y = (y1 , . . . , yp+q ) ∈ Rp+q , n = 0, 1, . . ., which are real-valued measurable functions assumed to satisfy condition B6 [¯ γ ], for some vector parameter γ¯ = (γ1 , . . . , γp+q ) with non-negative components.


Let $\mathcal{M}^{(0)}_{\max,p,q,r,n,N}$ be the class of all stopping times $\tau_{0,n}$ for the process $Z_n = (Y_n, X_n)$ such that (a) $n \le \tau_{0,n} \le N$, (b) event $\{\tau_{0,n} = m\} \in \mathcal{F}^{(0)}_{p,q,r,n,m} = \sigma[Y_{l'}, n-p+1 \le l' \le m, W_{l''}, n-q+1 \le l'' < m, X_{l'''}, n-r+1 \le l''' \le m]$, $n \le m \le N$.

Obviously, the class $\mathcal{M}^{(0)}_{\max,p,q,r,n,N}$ coincides with the class $\mathcal{M}^{(0)}_{\max,n,N}$ of all Markov moments $\tau_{0,n}$ for the Markov process $\vec Z_{0,l} = (\vec Y_{0,l}, \vec X_{0,l})$, where $\vec Y_{0,l} = (Y_l, \ldots, Y_{l-p+1}, W_l, \ldots, W_{l-q+1})$ and $\vec X_{0,l} = (X_l, \ldots, X_{l-r+1})$, such that (a) $n \le \tau_{0,n} \le N$, (b) event $\{\tau_{0,n} = m\} \in \mathcal{F}^{(0)}_{n,m} = \sigma[\vec Z_{0,l}, n \le l \le m]$, $n \le m \le N$.

In this case, the reward functions are defined for the modulated log-price process $Z_n = (Y_n, X_n)$ (its vector equivalent version $\vec Z_{0,n} = (\vec Y_{0,n}, \vec X_{0,n})$) by the following relation, for $(\vec y, \vec x) \in \mathbb{R}^{p+q} \times \mathbb{X}^{(r)}$ and $n = 0, 1, \ldots, N$,

\[
\phi_{0,n}(\vec y, \vec x) = \sup_{\tau_{0,n} \in \mathcal{M}^{(0)}_{\max,n,N}} \mathsf{E}_{(\vec y, \vec x),n}\, g(\tau_{0,n}, e^{\vec Y_{0,\tau_{0,n}}}, \vec X_{0,\tau_{0,n}}).
\tag{2.241}
\]

In this case, the function $\vec A(\bar\beta) = (A_1(\bar\beta), \ldots, A_{p+q}(\bar\beta))$, $\bar\beta = (\beta_1, \ldots, \beta_{p+q})$, $\beta_1, \ldots, \beta_{p+q} \ge 0$, penetrating the formulation of Theorems 2.4.1 and 2.4.2, has the following components,

\[
A_j(\bar\beta) = K_{43} K_{44,j} \sum_{l=1}^{p+q} \Big(\beta_l + \frac{1}{2}(p+q)^2 \beta_l^2\Big), \quad j = 1, \ldots, p+q.
\tag{2.242}
\]

The function $\vec A(\bar\beta)$ generates a sequence of functions $\vec A_n(\bar\beta) = (A_{n,1}(\bar\beta), \ldots, A_{n,p+q}(\bar\beta))$, $n = 0, 1, \ldots$ from the class $\mathcal{A}_{p+q}$ that are defined by the following recurrence relation, for any $\bar\beta = (\beta_1, \ldots, \beta_{p+q})$, $\beta_i \ge 0$, $i = 1, \ldots, p+q$,

\[
\vec A_n(\bar\beta) =
\begin{cases}
\bar\beta & \text{for } n = 0, \\
\vec A_{n-1}(\bar\beta) + \vec A(\vec A_{n-1}(\bar\beta)) & \text{for } n = 1, 2, \ldots.
\end{cases}
\tag{2.243}
\]

Theorem 2.4.1 takes in this case the following form.

Theorem 2.6.1. Let the modulated generalized autoregressive conditional heteroskedastic log-price process $Y_n$ and its equivalent vector version $\vec Y_{0,n}$ be given, respectively, by the modulated stochastic difference equations (2.224) and (2.234), and parameter $\kappa = \frac{1}{2}$. Let condition $G_7$ hold and, also, condition $B_6[\bar\gamma]$ hold for some vector parameter $\bar\gamma = (\gamma_1, \ldots, \gamma_{p+q})$ with $\gamma_i \ge 0$, $i = 1, \ldots, p+q$. Then, for any vector parameter $\bar\beta = (\beta_1, \ldots, \beta_{p+q})$ with components $\beta_i \ge \gamma_i$, $i = 1, \ldots, p+q$, there exist constants $0 \le M_{34}, M_{35,i} = M_{35,i}(\beta_i) < \infty$, $i = 1, \ldots, p+q$ such that the reward functions $\phi_{0,n}(\vec y, \vec x)$ satisfy the following inequalities, for $\vec y = (y_1, \ldots, y_{p+q}) \in \mathbb{R}^{p+q}$, $\vec x = (x_1, \ldots, x_r) \in \mathbb{X}^{(r)}$, $0 \le n \le N$,

\[
|\phi_{0,n}(\vec y, \vec x)| \le M_{34} + \sum_{i:\, \gamma_i = 0} M_{35,i}
+ \sum_{i:\, \gamma_i > 0} M_{35,i} \exp\Big\{\Big(\sum_{j=1}^{p+q} A_{N-n,j}(\bar\beta_i)|y_j|\Big)\frac{\gamma_i}{\beta_i}\Big\}.
\tag{2.244}
\]


Remark 2.6.1. The explicit formulas for the constants $M_{34}$ and $M_{35,i}$, $i = 1, \ldots, p+q$ take, according formulas given in Remark 2.4.1, the following form,

\[
\begin{aligned}
M_{34} &= L_{11}, \\
M_{35,i} &= L_{11} L_{12,i} I(\gamma_i = 0)
+ L_{11} L_{12,i}\Big(1 + 2^{p+q} e^{K_{43}\sum_{l=1}^{p+q}(A_{N-1,l}(\bar\beta_i) + \frac{1}{2}(p+q)^2 A^2_{N-1,l}(\bar\beta_i))N}\Big)^{\frac{\gamma_i}{\beta_i}} I(\gamma_i > 0),
\end{aligned}
\tag{2.245}
\]

where the vectors $\bar\beta_i = (\beta_{i,1}, \ldots, \beta_{i,p+q}) = (\beta_1 I(i = 1), \ldots, \beta_{p+q} I(i = p+q))$, $i = 1, \ldots, p+q$.

Condition $D_{10}[\bar\beta]$ should be replaced in this case by the following condition assumed to hold for a vector parameter $\bar\beta = (\beta_1, \ldots, \beta_{p+q})$ with non-negative components:

$D_{12}[\bar\beta]$: $\mathsf{E}\exp\{\sum_{j=1}^{p+q} A_{N,j}(\bar\beta_i)|Y_{0,j}|\} < K_{45,i}$, $i = 1, \ldots, p+q$, for some $1 < K_{45,i} < \infty$, $i = 1, \ldots, p+q$.

In this case, the optimal expected reward is defined by the formula,

\[
\Phi_0 = \sup_{\tau_{0,0} \in \mathcal{M}^{(0)}_{\max,0,N}} \mathsf{E}\, g(\tau_{0,0}, e^{\vec Y_{0,\tau_{0,0}}}) = \mathsf{E}\, \phi_0(\vec Y_{0,0}).
\tag{2.246}
\]

Theorem 2.4.2 takes in this case the following form.

Theorem 2.6.2. Let the modulated generalized autoregressive conditional heteroskedastic log-price process $Y_n$ and its equivalent vector version $\vec Y_{0,n}$ be given, respectively, by the modulated stochastic difference equations (2.224) and (2.234), and parameter $\kappa = \frac{1}{2}$. Let condition $G_7$ hold and, also, conditions $B_6[\bar\gamma]$ and $D_{12}[\bar\beta]$ hold and $0 \le \gamma_i \le \beta_i < \infty$, $i = 1, \ldots, p+q$. Then, there exists a constant $0 \le M_{36} < \infty$ such that the following inequality takes place,

\[
|\Phi_0| \le M_{36}.
\tag{2.247}
\]

Remark 2.6.2. The explicit formula for the constant $M_{36}$ takes, according formulas given in Remark 2.4.2, the following form,

\[
M_{36} = L_{11} + \sum_{i:\, \gamma_i = 0} L_{11} L_{12,i}
+ \sum_{i:\, \gamma_i > 0} L_{11} L_{12,i}\Big(1 + 2^{p+q} e^{K_{43}\sum_{l=1}^{p+q}(A_{N-1,l}(\bar\beta_i) + \frac{1}{2}(p+q)^2 A^2_{N-1,l}(\bar\beta_i))N}\Big)^{\frac{\gamma_i}{\beta_i}} K_{45,i}^{\frac{\gamma_i}{\beta_i}}.
\tag{2.248}
\]

2.6.2 Space-skeleton approximations for option rewards of modulated generalized autoregressive conditional heteroskedastic log-price processes ~ 0,n has Gaussian transition probabilities P0,n (~ The Markov process Z y , A) = ~ ~ P{Zε,n ∈ A/Zε,n−1 = ~z} defined for ~z = (~ y , ~x) ∈ Z = Rp+q × X(r) , A ∈ BZ , n =


1, 2, . . . by the following relation, ~ n0 , P0,n (~ y , A) = P{ ~ y+µ ~ n (~ y ) + Σn (~ y )W  ~ n (~x, Un ) ∈ A}. C

(2.249)

where the vector function µ ~ n (~ y , ~x), the matrix functions Σn (~ y , ~x) and the vector ~ n (~x, u) are given by relations (2.234) and (2.235). functions C It is useful to re-write the vector stochastic differential equation (2.234) in the form of the system of stochastic transition dynamic relations for the components ~ 0,n . In this case, relation (2.234) takes the following form of the log-price process Z (where the corresponding cancelation of the identical term Y0,n−1,i is made in the transition dynamic relation for the i-th component as well as the fact that the noise terms Wn,i = 0 for i = 2, . . . , p + q),  Y = Y0,n−1,1   0,n,1   00   + An (Yε,n−1,1 , . . . , Yε,n−1,p+q , Xε,n−1,1 , . . . , Xε,n−1,r )Wn,1 ,     Y0,n,2 = Y0,n−1,1 ,      ... ...     Y = Y , 0,n,p 0,n−1,p−1        Y0,n,p+1 = Bn (Yε,n−1,1 , . . . , Yε,n−1,p+q , Xε,n−1,1 , . . . , Xε,n−1,r ),   Y0,n,p+2 = Y0,n−1,p+1 , (2.250)  ... ...     Y0,n,p+q = Y0,n−1,p+q−1 ,       X0,n,1 = Cn (X0,n−1,1 , . . . , X0,n−1,r , Un ),      X0,n,2 = X0,n−1,1 ,     ... ...      X0,n,r = X0,n−1,r−1 ,    n = 1, 2, . . . . Let us construct the corresponding skeleton approximation model taking into account the shift structure of the stochastic transition dynamic relation (2.250) ~ ε,n . defining components the vector log-price processes Z − + Let mε,n ≤ mε,n be integer numbers, δε,n > 0 and λε,n ∈ R1 , for n = − max(p, q) + 1, − max(p, q) + 2, . . .. ± In this case, one can use parameters m± ε,n,i = mε,n−i+1 , δε,n,i = δε,n−i+1 , ± λε,n,i = λε,n−i+1 , for i = 1, . . . , p and m± ε,n,i = mε,n−i+p+1 , δε,n,i = δε,n−i+p+1 , λε,n,i = λε,n−i+p+1 , for i = p + 1, . . . , p + q, for n = 0, 1, . . .. In this case, the index sets Lε,n , n = 0, 1, . . . takes the following form, + Lε,n = {¯ l = (l1 , . . . , lp+q ), m− ε,n,i ≤ li ≤ mε,n,i , i = 1, . . . , p + q} + = {¯ l = (l1 , . . . , lp+q ), m− ε,n−i+1 ≤ li ≤ mε,n−i+1 , i = 1, . . . , p, + m− ε,n−i+p+1 ≤ li ≤ mε,n−i+p+1 , i = p + 1, . . . , p + q}.

(2.251)


+ First, the skeleton intervals Iε,n,l should be defined for l = m− ε,n , . . . , mε,n , n = − max(p, q) + 1, . . .,  − 1 if l = m−  ε,n ,  (−∞, δε,n (mε,n + 2 )] + λε,n − 1 1 Iε,n,l = (2.252) (δε,n (l − 2 ), δε,n (l + 2 )] + λε,n if mε,n < l < m+ ε,n ,   + + 1 (δε,n (mε,n − 2 ), ∞) + λε,n if l = mε,n ,

and then skeleton cubes ˆIε,n,¯l should be defined for ¯ l ∈ Lε,n , n = 0, 1, . . ., ˆI ¯ = Iε,n,1,l × · · · × Iε,n,p,l 1 p ε,n,l × Iε,n,p+1,lp+1 · · · × Iε,n,p+q,lp+q = Iε,n,l1 × · · · × Iε,n−p+1,lp × Iε,n,lp+1 × · · · × Iε,n−q+1,lp+q .

(2.253)

+ Second, skeleton points yε,n,l should be defined for l = m− ε,n , . . . , mε,n , n = − max(p, q) + 1, . . ., yε,n,l = lδε,n + λε,n , (2.254)

and vector skeleton points ~ yε,n,¯l should be defined for ¯ l ∈ Lε,n , n = 0, 1, . . ., ~ yε,n,¯l = (yε,n,1,l1 , . . . , yε,n,p,lp , yε,n,p+1,lp+1 , . . . , yε,n,p+q,lp+q ) = (yε,n,l1 , . . . , yε,n−p+1,lp , yε,n,lp+1 , . . . , yε,n−q+1,lp+q ).

(2.255)

0+ Let m0− ε,n ≤ mε,n , n = −r + 1, −r + 2, . . . be integer numbers. Let us also define index sets L0ε,n = {¯ l0 = (l10 , . . . , lr0 ), li0 = m0− ε,n−i+1 , . . ., 0+ mε,n−i+1 , i = 1, . . . , r}, n = 0, 1, . . .. 0+ Third, non-empty sets Jε,n,l0 ∈ BX , l0 = m0− ε,n , . . . , mε,n , n = −r + 1, . . ., such 0+ Jε,n,l0 , that (a) Jε,n,l00 ∩ Jε,n,l000 = ∅, l00 6= l000 , n = −r + 1, . . .; (b) X = ∪m0− 0 ε,n ≤l ≤mε,n n = −r + 1, . . ., should be constructed. Sets Kε,n , n = −r + 1, . . ., and non-empty sets Kε,n,l ∈ BX , l = m0− ε,n,0 , . . . , 0+ mε,n,0 , n = −r + 1, . . . such that (c) Kε,n,l00 ∩ Kε,n,l000 = ∅, l00 6= l000 , n = 0, 1, . . .; (d) 0+ Kε,n,l0 = Kε,n , n = −r + 1, . . .. ∪m0− 0 ε,n ≤l ≤mε,n

The standard case is, where X = {x0l , l0 = m− , . . . , m+ } is a finite set and metrics dX (xl00 , xl000 ) = I(xl00 6= xl000 ). In this case, the simplest choice m0± ε,n = m± , n = −r + 1, . . . and to take sets Kε,n,l0 = Jε,n,l0 = {xl0 }, l0 = m− , . . . , m+ , n = −r + 1, . . .. Sets Jε,n,l can be defined in the following way, for n = −r + 1, . . ., ( 0+ Kε,n,l0 if m0− ε,n,0 ≤ l < mε,n,0 , Jε,n,l0 = (2.256) Kε,n,m0+ ∪ Kε,n if l0 = m0+ ε,n,0 , ε,n,0


and skeleton cubes ˆ Jε,n,¯l0 should be defined for ¯ l0 = (l10 , . . . , lr0 ) ∈ L0ε,n , n = 0, 1, . . ., ˆ Jε,n,¯l0 = Jε,n,l10 × · · · × Jε,n−r+1,lr0 .

(2.257)

The difference in notations (between the above sets ˆ Jε,n,¯l and sets Jε,n,l used in Subsection 2.4.3) can be removed by a natural re-numeration of indices, namely 0− − 0− 0− − 0+ (m0− n , . . ., mε,n−r+1 ) ↔ mε,n,0 , . . . , (mε,n +1, . . . , mε,n−r+1 ) ↔ mε,n +1, . . ., (mε,n , + . . . , m0+ ε,n−r+1 ) ↔ mε,n , where for n = 0, 1, . . ., − m+ ε,n − mε,n + 1 =

r Y

− (m+ ε,n−j+1 − mε,n−j+1 + 1).

(2.258)

j=1

The simplest variant is to choose integers ±m± ε,n ≥ 0, n = 0, 1, . . .. Fourth, skeleton points xε,n,l0 ∈ Jε,n,l0 should be chosen for l0 = m0− ε,n,0 , . . ., 0+ mε,n,0 , n = −r+1, . . . such that and vector skeleton points ~xε,n,¯l0 should be defined for ¯ l0 = (l10 , . . . , lr0 ) ∈ L0ε,n , n = 0, 1, . . ., ~xε,n,¯l0 = (xε,n,l10 , . . . , xε,n−r+1,lr0 ).

(2.259)

Fifth, skeleton sets Aε,n,¯l,¯l0 and skeleton points, ~zε,n,¯l,¯l0 ∈ Aε,n,¯l,¯l0 can be defined, for ¯ l ∈ Lε,n , ¯ l0 ∈ L0ε,n , n = 0, 1, . . ., in the following way, Aε,n,¯l,¯l0 = ˆIε,n,¯l × ˆ Jε,n,¯l0 ,

(2.260)

~zε,n,¯l,¯l0 = (~ yε,n,¯l , ~xε,n,¯l0 ).

(2.261)

and Sixth, skeleton functions, hε,n (y), y ∈ R1 should be defined for n = − max(p, q) +1, . . .,  + (2.262) hε,n (y) = yε,n,l if y ∈ Iε,n,l , m− ε,n ≤ l ≤ mε,n , ˆ ε,n (~ and vector skeleton functions h y ), ~ y = (y1 , . . . , yp+q ) ∈ Rp+q , should be defined n = 0, 1, . . ., ˆ ε,n (~ h y ) = (hε,n,1 (y1 ), . . . , hε,n,p (yp ), hε,n,p+1 (yp+1 ), . . . , hε,n,p+q (yp+q )) = (hε,n (y1 ), . . . , hε,n−p+1 (yp ), hε,n (yp+1 ), . . . , hε,n−q+1 (yp+q )).

(2.263)

Seventh, skeleton functions h0ε,n (x), x ∈ X, n = −r + 1, . . ., should be defined, h0ε,n (x) =



xε,n,l0

0+ 0 if x ∈ Jε,n,l0 , m0− ε,n,0 ≤ l ≤ mε,n,0 ,

(2.264)

ˆ 0ε,n (~x), ~x = (x1 , . . . , xr ) ∈ X(r) , should be defined, and vector skeleton functions h for n = 0, 1, . . ., ˆ 0ε,n (~x) = (h0ε,n,1 (x1 ), . . . , h0ε,n,r (xr )) h = (h0ε,n (x1 ), . . . , h0ε,n−r+1 (xr )).

(2.265)


Eighth, vector skeleton functions $\bar h_{\varepsilon,n}(\vec z)$, $\vec z = (\vec y, \vec x) \in \mathbb{Z} = \mathbb{R}^{p+q} \times \mathbb{X}^{(r)}$, $n = 0, 1, \ldots$, should be defined,

\[
\bar h_{\varepsilon,n}(\vec z) = (\hat h_{\varepsilon,n}(\vec y), \hat h'_{\varepsilon,n}(\vec x)).
\tag{2.266}
\]

~ ε,n = (Y ~ε,n , X ~ ε,n ) The corresponding space skeleton approximating processes Z are defined by the following vector stochastic transition dynamic relation,   ˆ ε,n h ˆ ε,n−1 (Y ~ε,n = h ~ε,n−1 )  Y      ˆ ε,n−1 (Y ˆ 0ε,n−1,0 (X ~ε,n−1 ), h ~ ε,n−1 ))  +µ ~ n (h       ˆ ε,n−1 (Y ˆ 0ε,n−1 (X ~ ε,n−1 )) W ~ n0 , ~ε,n−1 ), h  + Σn (h   (2.267) ˆ 0ε,n−1 (X ˆ 0ε,n,0 C ~ ε,n−1 ), U0,n ) , ¯0,n (h ~ ε,n = h X       n = 1, 2, . . . ,    ˆ ε,n (Y ~ ~0,0 ),  Y =h  ε,0    ~ 0 ˆ ε,0 (X ~ 0,0 ), Xε,0 = h where the vector function µ ~ n (~ y , ~x), the matrix functions Σn (~ y , ~x) and the vector ~ n (~x, u) are given by relations (2.234) and (2.235). functions C The vector stochastic transition dynamic relation (2.267) can be re-written in the following equivalent form as a system of stochastic transition dynamic relations,    Yε,n,1 = hε,n hε,n−1 (Yε,n−1,1 )      + A00n (hε,n−1 (Yε,n−1,1 ), . . . , hε,n−p (Yε,n−1,p ),       hε,n−1 (Yε,n−1,p+1 ), . . . , hε,n−q (Yε,n−1,p+q ))Wn,1      Yε,n,2 = hε,n−1 (Yε,n−1,1 ),      ... ...     Y = hε,n−p+1  ε,n,p   (Yε,n−1,p−1 ),    Yε,n,p+1 = hε,n Bn (hε,n−1 (Yε,n−1,1 ), . . . , hε,n−p (Yε,n−1,p ),        hε,n−1 (Yε,n−1,p+1 ), . . . , hε,n−q (Yε,n−1,p+q )) ,  (2.268) Yε,n,p+2 = hε,n−1 (Yε,n−1,p+1 ),     ... ...     Yε,n,p+q = hε,n−q+1 (Yε,n−1,p+q−1 ),      n = 1, 2, . . . ,     Y = hε,0 (Y0,0,1 ), ε,0,1     . . . . . .      Yε,0,p = hε,−p+1 (Y0,0,p ),     Yε,0,p+1 = hε,0 (Y0,0,p+1 ),     ... ...    Yε,0,p+q = hε,−q+1 (Y0,0,p+q ),


where functions A00n (~ y , ~x), Bn (~ y , ~x) are given by relations (2.232) and (2.233). ~ ε,n is, for every ε ∈ (0, ε0 ], a nonlinear autoregressive The log-price process Z Markov chain„ with the phase space Z = Rp+q × X(r) and one-step transition ~ ε,n ∈ A/Z ~ ε,n−1 = ~z}, ~z = (~ probabilities Pε,n (~z, A) = P{Z y , ~x) ∈ Z, A ∈ BZ , n = 1, 2, . . .. ~ ε,n is, for every ε ∈ (0, ε0 ] is a skeleton Moreover, the log-price process Z atomic Markov chain. Its transition probabilities are determined by transition ~ 0,n via the following formula, probabilities of the Markov chain Z X ˆ ε,n−1 (~ ˆ 0ε,n−1 (~x)), A ¯ ¯0 ) Pε,n (~z, A) = P0,n ((h y ), h ε,n,l,l ~ zε,n,l, yε,n,l¯,~ xε,n,l¯0 )∈A ¯ l¯0 =(~

X

=

ˆ ε,n−1 (~ ¯ ε,n−1 (~ ˆ 0ε,n−1 (~x)) P{(h y) + µ ~ n (h y ), h

~ zε,n,l, yε,n,l¯,~ xε,n,l¯0 )∈A ¯ l¯0 =(~

ˆ ε,n−1 (~ ˆ 0ε,n−1 (~x)) W ~ n0 , + Σn (h y ), h ˆ 0ε,n−1 (X ¯0,n (h ~ ε,n−1 ), U0,n )) ∈ A ¯ ¯0 }. C ε,n,l,l

(2.269)

~ ε,0 ∈ A}, A ∈ BZ is concerned, As far as the initial distribution Pε,0 (A) = P{Z it takes, for every ε ∈ (0, ε0 ], the following form, X Pε,0 (A) = P0,0 (Aε,0,¯l,¯l0 ) ~ zε,0,l, yε,0,l¯,~ xε,0,l¯0 )∈A ¯ l¯0 =(~

X

=

ˆ ε,0 (Y ˆ 0ε,0 (X ~0,0 ), h ~ 0,0 )) ∈ A ¯ ¯0 }. P{(h ε,n,l,l

(2.270)

~ zε,0,l, yε,0,l¯,~ xε,0,l¯0 )∈A ¯ l¯0 =(~

We assume that the pay-off function g(n, e~y , ~ x), ~ y = (y1 , . . . , yp+q ) ∈ Rp+q , ~x = (x1 , . . ., xr ) ∈ X(r) do not depend on ε. (ε) Let us also recall the class Mmax,n,N of all Markov moments τε,n for the log(ε) ~ ε,n such that (a) n ≤ τε,n ≤ N , (b) event {τε,n = m} ∈ Fn,m price process Z = ~ ε,l , n ≤ l ≤ m], n ≤ m ≤ N . σ[Z ~ ε,n by In this case, the reward functions are defined for the log-price process Z the following relation, for ~z = (~ y , ~x) ∈ Z and n = 0, 1, . . . , N , φε,n (~ y , ~x) =

sup

~

~ ε,τε,n ). E(~y,~x),n g(τε,n , eYε,τε,n , X

(2.271)

(ε) τε,n ∈Mmax,n,N

Probability measures Pε,n (~z, A), ~z ∈ Z = Rp+q × X(r) , n = 1, 2, . . . are concentrated on finite sets, for every ε ∈ (0, ε0 ]. This obviously implies that the rewards functions |φε,n (~ y , ~x)| < ∞, ~z = (~ y , ~x) ∈ Z, n = 1, 2, . . . for every ε ∈ (0, ε0 ]. It is also useful to note that φε,N (~ y , x) = g(N, e~y , ~x), (~ y , ~x) ∈ Z. An analogue of Lemma 2.4.2 can be formulated for the above model. However, the corresponding system of linear equations can be simplified taking into account the specific shift structure of the dynamic relation (2.267), which is well seen from the equivalent form (2.268).


By the definition of the sets $I_{\varepsilon,n,\bar l}$, $\bar l \in L_{\varepsilon,n}$, $n = 0, 1, \ldots$ and $J_{\varepsilon,n,\bar l'}$, $\bar l' \in L'_{\varepsilon,n}$, $n = 0, 1, \ldots$, for every $\vec y \in R^{p+q}$, $\vec x \in X^{(r)}$ and $n = 0, 1, \ldots$, there exist the unique $\bar l_{\varepsilon,n}(\vec y) = (l_{\varepsilon,n,1}(\vec y), \ldots, l_{\varepsilon,n,p+q}(\vec y)) \in L_{\varepsilon,n}$ and $\bar l{}'_{\varepsilon,n}(\vec x) = (l'_{\varepsilon,n,1}(\vec x), \ldots, l'_{\varepsilon,n,r}(\vec x)) \in L'_{\varepsilon,n}$ such that $\vec y \in I_{\varepsilon,n,\bar l_{\varepsilon,n}(\vec y)}$ and $\vec x \in J_{\varepsilon,n,\bar l{}'_{\varepsilon,n}(\vec x)}$.

The following lemma is a direct corollary of Lemma 2.4.2.

Lemma 2.6.2. Let the modulated generalized autoregressive conditional heteroskedastic log-price process $\vec Y_{0,n}$ be given by the modulated stochastic difference equation (2.234), while the corresponding approximating space-skeleton log-price process $\vec Y_{\varepsilon,n}$ is defined, for every $\varepsilon \in (0, \varepsilon_0]$, by the stochastic transition dynamic relation (2.267). Then the reward functions $\phi_{\varepsilon,n}(\vec y, \vec x)$ and $\phi_{\varepsilon,n+m}(\vec y_{\varepsilon,n+m,\bar l}, \vec x_{\varepsilon,n+m,\bar l'})$, $\bar l \in L_{\varepsilon,n+m}$, $\bar l' \in L'_{\varepsilon,n+m}$, $m = 1, \ldots, N-n$ are, for every $\varepsilon \in (0, \varepsilon_0]$, $\vec y \in R^{p+q}$, $\vec x \in X^{(r)}$ and $n = 0, \ldots, N-1$, the unique solution of the following recurrence finite system of linear equations,
$$
\begin{cases}
\phi_{\varepsilon,N}(\vec y_{\varepsilon,N,\bar l}, \vec x_{\varepsilon,N,\bar l'}) = g(N, e^{\vec y_{\varepsilon,N,\bar l}}, \vec x_{\varepsilon,N,\bar l'}),\\
\quad \bar l = (l_1, \ldots, l_{p+q}) \in L_{\varepsilon,N},\ \bar l' = (l'_1, \ldots, l'_r) \in L'_{\varepsilon,N},\\[4pt]
\phi_{\varepsilon,n+m}(\vec y_{\varepsilon,n+m,\bar l}, \vec x_{\varepsilon,n+m,\bar l'}) = \max\Big( g(n+m, e^{\vec y_{\varepsilon,n+m,\bar l}}, \vec x_{\varepsilon,n+m,\bar l'}),\\
\quad \sum_{l''_1 = m^-_{\varepsilon,n+m+1}}^{m^+_{\varepsilon,n+m+1}} \sum_{l'''_1 = m'^-_{\varepsilon,n+m+1}}^{m'^+_{\varepsilon,n+m+1}}
\phi_{\varepsilon,n+m+1}\big(\vec y_{\varepsilon,n+m+1,(l''_1, l_1, \ldots, l_{p-1}, l''_{p+1}, l_{p+1}, \ldots, l_{p+q-1})},\ \vec x_{\varepsilon,n+m+1,(l'''_1, l'_1, \ldots, l'_{r-1})}\big)\\
\quad \times \mathsf P\{ y_{\varepsilon,n+m,l_1} + A''_{n+m+1}(\vec y_{\varepsilon,n+m,\bar l}, \vec x_{\varepsilon,n+m,\bar l'}) W_{n+m+1} \in I_{\varepsilon,n+m+1,l''_1},\ C_{n+m+1}(\vec x_{\varepsilon,n+m,\bar l'}, U_{n+m+1}) \in J_{\varepsilon,n+m+1,l'''_1}\}\\
\quad \times \mathrm I\big( B_{n+m+1}(\vec y_{\varepsilon,n+m,\bar l}, \vec x_{\varepsilon,n+m,\bar l'}) \in I_{\varepsilon,n+m+1,l''_{p+1}}\big)\Big),\\
\quad \bar l = (l_1, \ldots, l_{p+q}) \in L_{\varepsilon,n+m},\ \bar l' = (l'_1, \ldots, l'_r) \in L'_{\varepsilon,n+m},\\
\quad m = N-n-1, \ldots, 1,\\[4pt]
\phi_{\varepsilon,n}(\vec y, \vec x) = \max\Big( g(n, e^{\vec y}, \vec x),\\
\quad \sum_{l''_1 = m^-_{\varepsilon,n+1}}^{m^+_{\varepsilon,n+1}} \sum_{l'''_1 = m'^-_{\varepsilon,n+1}}^{m'^+_{\varepsilon,n+1}}
\phi_{\varepsilon,n+1}\big(\vec y_{\varepsilon,n+1,(l''_1, l_{\varepsilon,n,1}(\vec z), \ldots, l_{\varepsilon,n,p-1}(\vec z), l''_{p+1}, l_{\varepsilon,n,p+1}(\vec z), \ldots, l_{\varepsilon,n,p+q-1}(\vec z))},\ \vec x_{\varepsilon,n+1,(l'''_1, l'_{\varepsilon,n,1}(\vec z), \ldots, l'_{\varepsilon,n,r-1}(\vec z))}\big)\\
\quad \times \mathsf P\{ y_{\varepsilon,n,l_{\varepsilon,n,1}(\vec z)} + A''_{n+1}(\vec y_{\varepsilon,n,\bar l_{\varepsilon,n}(\vec z)}, \vec x_{\varepsilon,n,\bar l{}'_{\varepsilon,n}(\vec z)}) W_{n+1} \in I_{\varepsilon,n+1,l''_1},\ C_{n+1}(\vec x_{\varepsilon,n,\bar l{}'_{\varepsilon,n}(\vec z)}, U_{n+1}) \in J_{\varepsilon,n+1,l'''_1}\}\\
\quad \times \mathrm I\big( B_{n+1}(\vec y_{\varepsilon,n,\bar l_{\varepsilon,n}(\vec z)}, \vec x_{\varepsilon,n,\bar l{}'_{\varepsilon,n}(\vec z)}) \in I_{\varepsilon,n+1,l''_{p+1}}\big)\Big),
\end{cases}
\qquad (2.272)
$$

where the functions $A'_n(\vec y, \vec x)$, $A''_n(\vec y, \vec x)$ are given by relation (2.233), while the optimal expected reward $\Phi_\varepsilon = \Phi_\varepsilon(\mathcal M^{(\varepsilon)}_{\max,0,N})$ is given, for every $\varepsilon \in (0, \varepsilon_0]$, by the following formula,
$$
\Phi_\varepsilon = \sum_{\bar l \in L_{\varepsilon,0},\, \bar l' \in L'_{\varepsilon,0}} P_{0,0}(A_{\varepsilon,0,\bar l,\bar l'})\, \phi_{\varepsilon,0}(\vec y_{\varepsilon,0,\bar l}, \vec x_{\varepsilon,0,\bar l'}). \qquad (2.273)
$$

Obviously, |Φε | < ∞, for every ε ∈ (0, ε0 ].
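To make the backward structure of the system (2.272) and formula (2.273) concrete, the following minimal Python sketch computes reward functions and the optimal expected reward by backward induction for an optimal stopping problem on a finite-state Markov chain. The chain, pay-off and initial distribution are illustrative toy assumptions, not the GARCH skeleton constructed in this section; the sketch only shows the generic recursion "reward = max(pay-off, expected continuation value)".

```python
import numpy as np

# Backward induction for reward functions on a finite skeleton, mirroring the
# structure of the system (2.272) and formula (2.273).  All model ingredients
# below (transition matrices, pay-off, initial distribution) are toy examples.

rng = np.random.default_rng(0)
N = 5                # number of time steps
S = 7                # number of skeleton states (atoms)

# One-step transition matrices P[n][i, j] = P{state j at n+1 | state i at n}.
P = [rng.dirichlet(np.ones(S), size=S) for _ in range(N)]

# Pay-off g(n, state); an arbitrary bounded toy pay-off.
g = rng.uniform(0.0, 1.0, size=(N + 1, S))

# phi[N] = g(N, .), phi[n] = max(g(n, .), P_n phi[n+1]).
phi = np.empty((N + 1, S))
phi[N] = g[N]
for n in range(N - 1, -1, -1):
    continuation = P[n] @ phi[n + 1]
    phi[n] = np.maximum(g[n], continuation)

# Optimal expected reward: average of phi[0] over the initial distribution,
# as in formula (2.273).
p0 = rng.dirichlet(np.ones(S))
Phi = p0 @ phi[0]
print("reward functions at n = 0:", phi[0])
print("optimal expected reward:", Phi)
```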


2.6.3 Convergence of option reward functions for space skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes

In this case, the space skeleton approximations are constructed in a way analogous to those used in Sections 1.6 and 2.4 for autoregressive moving average log-price processes with Gaussian noise terms. In particular, we use the same structural skeleton condition N6 and impose the same conditions B6[$\bar\gamma$] and I6 on the pay-off functions.

The model under consideration is a particular case of the model considered in Section 2.4. Condition O10 should be specified taking into account the concrete form of the dynamic transition functions $\vec\mu_n(\vec y, \vec x)$, $\Sigma_n(\vec y, \vec x)$ and $\vec C_n(\vec x, u)$, which are defined by relations (2.234) and (2.235), in which the functions $B'_n(\vec y, \vec x)$ and $B''_n(\vec y, \vec x)$ are given by relation (2.233).

For simplicity, we restrict consideration to the case, where the smoothing transformation function $g_{1/2}(y)$ is continuous.

Let us assume that the following condition holds:

O12: There exist sets $\mathrm X_n \in \mathcal B_{X^{(r)}}$, $n = 0, \ldots, N$ and $\mathrm U_n \in \mathcal B_U$, $n = 1, \ldots, N$ such that: (a) $d_{n,i}(\vec x_\varepsilon) \to d_{n,i}(\vec x_0)$, $e_{n,j}(\vec x_\varepsilon) \to e_{n,j}(\vec x_0)$, $b_{n,l}(\vec x_\varepsilon) \to b_{n,l}(\vec x_0)$, $C_n(\vec x_\varepsilon, u) \to C_n(\vec x_0, u)$ as $\varepsilon \to 0$, for any $\vec x_\varepsilon \to \vec x_0 \in \mathrm X_{n-1}$ as $\varepsilon \to 0$, $i = 0, \ldots, p$, $j = 1, \ldots, p$, $l = 1, \ldots, q$, $u \in \mathrm U_n$, $n = 1, \ldots, N$; (b$_1$) $\mathsf P\{U_n \in \mathrm U_n\} = 1$, $n = 1, \ldots, N$; (b$_2$) $g_{1/2}(y)$, $y \ge 0$ is a continuous function; (c) $\mathsf P\{(\vec y_0 + \vec\mu_n(\vec y_0, \vec x_0) + \Sigma_n(\vec y_0, \vec x_0)\vec W{}'_n,\ \vec C_n(\vec x_0, U_{0,n})) \in \mathrm Z'_n \cup \mathrm Z''_n\} = 0$ for every $\vec z_0 = (\vec y_0, \vec x_0) \in \mathrm Z'_{n-1} \cup \mathrm Z''_{n-1}$ and $n = 1, \ldots, N$, where $\mathrm Z'_n$, $n = 0, \ldots, N$ are the sets introduced in condition I6 and $\mathrm Z''_n = R^k \times \mathrm X_n$, $n = 0, \ldots, N$.

The following theorem is a corollary of Theorem 2.4.3.

Theorem 2.6.3. Let the modulated generalized autoregressive conditional heteroskedastic log-price process $\vec Y_{0,n}$ be given by the modulated stochastic difference equation (2.234), while the corresponding approximating space-skeleton log-price process $\vec Y_{\varepsilon,n}$ is defined, for every $\varepsilon \in (0, \varepsilon_0]$, by the stochastic transition dynamic relation (2.267), with parameter $\kappa = \frac{1}{2}$. Let also condition B6[$\bar\gamma$] hold for some vector parameter $\bar\gamma = (\gamma_1, \ldots, \gamma_{p+q})$ with non-negative components, and, also, conditions G7, N6, I6, and O12 hold. Then, for every $n = 0, 1, \ldots, N$, the following relation takes place for any $\vec z_\varepsilon = (\vec y_\varepsilon, \vec x_\varepsilon) \to \vec z_0 = (\vec y_0, \vec x_0) \in \mathrm Z'_n \cap \mathrm Z''_n$,
$$
\phi_{\varepsilon,n}(\vec y_\varepsilon, \vec x_\varepsilon) \to \phi_{0,n}(\vec y_0, \vec x_0) \ \text{ as } \ \varepsilon \to 0. \qquad (2.274)
$$

Proof. The space skeleton approximation model considered in Theorem 2.6.3 is a particular case of the model considered in Theorem 2.4.3.


Conditions N6, B6[$\bar\gamma$], and I6 are just variants of conditions, respectively, N8, B7[$\bar\gamma$] and I7, if $k = p + q$. As was shown in Subsection 2.6.1, condition G7 implies, in this case, that condition G5 holds. Condition O12 is the variant of condition O10 for the model considered in Theorem 2.6.1. Indeed, conditions O12 (a) and (b$_1$)–(b$_2$) imply that conditions O10 (a) and (b) hold, for the sets $\mathrm X_n$ and $\mathrm U_n$ penetrating condition O12.

Thus, all conditions of Theorem 2.4.3 hold. By applying this theorem, we get the convergence relation (2.222). $\square$

2.6.4 Convergence of optimal expected rewards for space skeleton approximations of modulated autoregressive conditional heteroskedastic log-price processes

Let us now give conditions for convergence of the optimal expected rewards $\Phi_\varepsilon$ in the above space skeleton approximation model. We shall apply Theorem 2.4.4.

In this case, condition D12[$\bar\beta$], with the function $\vec A(\bar\beta) = (A_1(\bar\beta), \ldots, A_p(\bar\beta))$ given by relations (2.242) and (2.243), should be assumed to hold for some vector parameter $\bar\beta = (\beta_1, \ldots, \beta_{p+q})$ with non-negative components and the corresponding vectors $\bar\beta_i = (\beta_{i,1}, \ldots, \beta_{i,p+q})$ with components $\beta_{i,j} = \mathrm I(i = j)$, $i, j = 1, \ldots, p+q$.

Condition K10 should be replaced by the following condition imposed on the initial distribution $P_{0,0}(A) = \mathsf P\{\vec Z_{0,0} \in A\}$:

K12: $P_{0,0}(\mathrm Z'_0 \cap \mathrm Z''_0) = 1$, where $\mathrm Z'_0$ and $\mathrm Z''_0$ are the sets introduced, respectively, in conditions I6 and O12.

The following theorem is a corollary of Theorem 2.4.4.

Theorem 2.6.4. Let the modulated generalized autoregressive conditional heteroskedastic log-price process $\vec Y_{0,n}$ be given by the modulated stochastic difference equation (2.234), while the corresponding approximating space-skeleton log-price process $\vec Y_{\varepsilon,n}$ is defined, for every $\varepsilon \in (0, \varepsilon_0]$, by the stochastic transition dynamic relation (2.267), with parameter $\kappa = \frac{1}{2}$. Let also conditions B6[$\bar\gamma$] and D12[$\bar\beta$] hold for some vector parameters $\bar\gamma = (\gamma_1, \ldots, \gamma_{p+q})$ and $\bar\beta = (\beta_1, \ldots, \beta_{p+q})$ such that, for every $i = 1, \ldots, p+q$, either $\beta_i > \gamma_i > 0$ or $\beta_i = \gamma_i = 0$, and also conditions N6, G7, I6, O12 and K12 hold. Then, the following relation takes place,
$$
\Phi_\varepsilon \to \Phi_0 \ \text{ as } \ \varepsilon \to 0. \qquad (2.275)
$$

Proof. Theorem 2.6.4 is a corollary of Theorem 2.4.4. Conditions D12[$\bar\beta$] and K12 are just re-formulations, respectively, of conditions D10[$\bar\beta$] and K10 used in Theorem 2.4.4. The other conditions of this theorem also hold, as was pointed out in the proof of Theorem 2.6.3. By applying Theorem 2.4.4, we get the convergence relation (2.275). $\square$

3 American-type options for continuous time Markov LPP

In Chapter 3, we introduce models of continuous time multivariate modulated Markov log-price and price processes as well as define American-type options for such processes.

In Section 3.1, we introduce continuous time multivariate modulated Markov log-price and price processes and consider general questions connected with such processes, namely phase spaces, spaces of trajectories, filtrations, etc. In particular, we discuss and comment on the purpose of introducing a stochastic modulating index component and consider different variants of modulation. In Section 3.2, we present models of continuous time price processes with independent increments, including Lévy type log-price and price processes. In Section 3.3, we consider models of diffusion log-price and price processes. In Section 3.4, we define American-type options for continuous time multivariate modulated Markov log-price processes with pay-off functions depending on both price and index components and introduce the basic objects connected with American-type options studied in the book, namely, reward functions and optimal expected rewards.

3.1 Markov LPP In this section, we introduce models of continuous time multivariate modulated Markov type price processes that are studied in this book.

3.1.1 Continuous time log-price and price processes

Let $R^k$ be a $k$-dimensional Euclidean space and $B_k$ the corresponding Borel σ-algebra of subsets of $R^k$. Let also $X$ be a measurable space with a σ-algebra of measurable subsets $B_X$. A typical example is where $X$ is a complete, separable, metric space (Polish space) with a metric $d_X(x', x'')$ and $B_X$ is the corresponding Borel σ-algebra of subsets of $X$ (the minimal σ-algebra containing all balls $R_d(x) = \{x' \in X : d_X(x, x') \le d\}$ in the space $X$).

Let us consider the space $Z = R^k \times X$ with the σ-algebra of measurable subsets $B_Z = B_k \times B_X$, which is the minimal σ-algebra containing all sets $B \times C$, where $B \in B_k$, $C \in B_X$. If $X$ is a Polish space with a metric $d_X(x', x'')$, then the space $Z = R^k \times X$ is also a Polish space, with the natural metric $d_Z(\vec z{\,}', \vec z{\,}'') = \sqrt{|\vec y{\,}' - \vec y{\,}''|^2 + d_X(x', x'')^2}$,


$\vec z{\,}' = (\vec y{\,}', x')$, $\vec z{\,}'' = (\vec y{\,}'', x'') \in Z$. Let $B_Z$ be the corresponding Borel σ-algebra of subsets of $Z$.

We also use the space $R^+_k = \{\vec s = (s_1, \ldots, s_k) : s_1, \ldots, s_k > 0\}$, which is the $k$-dimensional product of the interval $R^+_1 = (0, \infty)$, and the space $V = R^+_k \times X$. The space $V$ is also a Polish space with the metric $d_V(\vec v{\,}', \vec v{\,}'') = \sqrt{|\vec s{\,}' - \vec s{\,}''|^2 + d_X(x', x'')^2}$, $\vec v{\,}' = (\vec s{\,}', x')$, $\vec v{\,}'' = (\vec s{\,}'', x'') \in V$. The corresponding Borel σ-algebras for the spaces $R^+_k$ and $V$ are denoted, respectively, by $B^+_k$ and $B_V$.

Let us consider a stochastic process $\vec Y(t) = (Y_1(t), \ldots, Y_k(t))$, $t \ge 0$ with the phase space $R^k$, a stochastic process $X(t)$, $t \ge 0$ with the phase space $X$, and the stochastic process $\vec Z(t) = (\vec Y(t), X(t))$, $t \ge 0$ with the phase space $Z$. We shall always assume that the processes $\vec Y(t)$ and $X(t)$ and, thus, $\vec Z(t)$ as well, are defined on the same probability space $\langle \Omega, \mathcal F, \mathsf P \rangle$.

We also consider the stochastic process $\vec S(t) = (S_1(t), \ldots, S_k(t))$, $t \ge 0$ connected with the stochastic process $\vec Y(t) = (Y_1(t), \ldots, Y_k(t))$, $t \ge 0$ by the following formulas,
$$
\vec S(t) = e^{\vec Y(t)}, \quad \vec Y(t) = \ln \vec S(t), \quad t \ge 0. \qquad (3.1)
$$
Here and henceforth, the notations $e^{\vec y} = (e^{y_1}, \ldots, e^{y_k})$, $(y_1, \ldots, y_k) \in R^k$ and $\ln \vec s = (\ln s_1, \ldots, \ln s_k)$, $\vec s = (s_1, \ldots, s_k) \in R^+_k$ are used.

Let us also consider the stochastic process $\vec V(t) = (\vec S(t), X(t))$, $t \ge 0$. By definition, the processes $\vec Y(t)$, $X(t)$, $\vec Z(t)$, $\vec S(t)$, and $\vec V(t)$ have, respectively, the phase spaces $R^k$, $X$, $Z$, $R^+_k$, and $V$.

We interpret $\vec Y(t)$ as a vector log-price process with continuous time, $X(t)$ as a continuous time stochastic index modulating the log-price process $\vec Y(t)$, and $\vec Z(t)$ as an extended log-price process supplemented by the additional stochastic index component $X(t)$. Respectively, $\vec S(t)$ is interpreted as a vector price process and $\vec V(t)$ as an extended price process supplemented by the additional index component $X(t)$.

We prefer to use log-price processes, rather than price processes, as the initial objects, because log-price processes usually have an additive structure of increments, which is simpler and more convenient for analysis, while price processes have a more complex multiplicative structure of increments. The character of increments mentioned above also explains the use of the exponential transformation, which connects log-price processes with the corresponding price processes.

3.1.2 Spaces of trajectories for log-price and price processes

Let us also introduce the spaces $D_Z$ and $D_V$ of càdlàg functions, that is, the spaces of functions defined on the interval $[0, \infty)$ and taking values,


respectively, in the spaces $Z$ and $V$, which are continuous from the right and possess limits from the left.

The standard assumption, which we always make and which usually holds in applications, is that the process $\vec Z(t)$ is a càdlàg process, i.e., its trajectories $\langle \vec Z(t, \omega), t \ge 0 \rangle$ are càdlàg functions for all $\omega \in \Omega$. Obviously, in this case, $\vec V(t)$ is also a càdlàg process.

Note that the case, where the process $\vec Z(t)$ is an almost sure càdlàg process, can be reduced to the case, where the process $\vec Z(t)$ is a càdlàg process. Indeed, replacing the trajectories of the a.s. càdlàg process $\vec Z(t)$, which are not càdlàg functions, by a constant trajectory, one gets a new càdlàg process $\vec Z{\,}'(t)$ stochastically equivalent to the process $\vec Z(t)$ in the sense that $\mathsf P\{\vec Z(t) = \vec Z{\,}'(t), t \ge 0\} = 1$.

The main problem studied in the book is connected with stopping of the extended log-price process $\vec Z(t)$ and the corresponding price process $\vec V(t)$ at some random time $\tau = \tau(\omega) \ge 0$, defined on the same probability space $\langle \Omega, \mathcal F, \mathsf P \rangle$ where the above processes are defined, and with finding optimal stopping times according to some criteria of optimality.

The assumption that the process $\vec Z(t)$ is a càdlàg process automatically implies that $\vec Z(\tau)$ and $\vec V(\tau)$ are random variables, for any random time $\tau \ge 0$.

3.1.3 Information filtrations generated by log-price and price processes

Let us denote by $\mathcal F^Y_t = \sigma[\vec Y(s), 0 \le s \le t]$, $t \ge 0$ the natural filtration generated by the log-price process $\vec Y(t)$ and by $\mathcal F^Z_t = \sigma[\vec Z(s), 0 \le s \le t]$, $t \ge 0$ the natural filtration generated by the process $\vec Z(t)$.

It is useful to note that the natural filtration $\mathcal F^S_t = \sigma[\vec S(s), 0 \le s \le t]$, $t \ge 0$ generated by the process $\vec S(t)$ coincides with the filtration $\mathcal F^Y_t$, and the natural filtration $\mathcal F^V_t = \sigma[\vec V(s), 0 \le s \le t]$, $t \ge 0$ generated by the process $\vec V(t)$ coincides with the filtration $\mathcal F^Z_t$.

By definition, $\mathcal F^Y_t \subseteq \mathcal F^Z_t$, $t \ge 0$, i.e., the filtration $\mathcal F^Z_t$ is an extension of the filtration $\mathcal F^Y_t$.

The component $X(t)$ represents additional market information which becomes available at moment $t$, in addition to the value of the log-price process $\vec Y(t)$ or the corresponding price process $\vec S(t)$.

This information can be supplied by some additional price process $X(t)$, which is observable but not included in the vector log-price process $\vec Y(t)$ or the vector price process $\vec S(t)$. Another variant is where $X(t)$ represents the stochastic dynamics of some parameters of price processes, for example, stochastic volatility. The third variant is that $X(t)$ is, indeed, a market index, for example, a global price index “controlling” market prices, or a jump process representing


some market regime index, for example, indicating growing, declining, or stable market situation.

3.1.4 Markov log-price and price processes

In what follows, we always assume that $\vec Z(t)$ is an inhomogeneous in time càdlàg Markov process with a phase space $Z$, an initial distribution $P(A) = \mathsf P\{\vec Z(0) \in A\}$ and transition probabilities $P(t, \vec z, t+u, A) = \mathsf P\{\vec Z(t+u) \in A / \vec Z(t) = \vec z\}$.

Here and henceforth, $P(t, \vec z, t+u, A)$ is, as usual, assumed to be a probability measure in $A \in B_Z$ and a measurable function in the arguments $(t, \vec z, u) \in [0, \infty) \times Z \times [0, \infty)$.

Obviously, in this case, $\vec V(t)$ is also an inhomogeneous in time càdlàg Markov process with the phase space $V$, the initial distribution
$$
\dot P(B) = P(A_B),
$$

(3.2)

and one-step transition probabilities P˙ (t, ~v , t + u, B) = P (t, ~z~v , t + u, AB ),

(3.3)

where $\vec v = ((s_1, \ldots, s_k), x) \in V$, $B \in B_V$, $\vec z_{\vec v} = ((\ln s_1, \ldots, \ln s_k), x) \in Z$, $A_B = \{\vec z_{\vec v} : \vec v \in B\} \in B_Z$.

It is worth noting that the log-price process $\vec Y(t)$ and the corresponding price process $\vec S(t)$ themselves may not be Markov processes. Thus, in this case, the component $X(t)$ represents information which, in addition to the information supplied by the log-price process $\vec Y(t)$ or the price process $\vec S(t)$, makes the corresponding extended processes $\vec Z(t) = (\vec Y(t), X(t))$ or $\vec V(t) = (\vec S(t), X(t))$ Markov processes.

An important case is the homogeneous one, where the processes $\vec Z(t)$ and $\vec V(t)$ are homogeneous in time, i.e., the case where the transition probabilities $P(t, \vec z, t+u, A) = P(\vec z, u, A)$ and $\dot P(t, \vec v, t+u, A) = \dot P(\vec v, u, A)$ do not depend on time $t \ge 0$.

3.1.5 Modulated Markov log-price and price processes

The index component $X(t)$ can also be interpreted as a modulator of the corresponding log-price process $\vec Y(t)$ or the price process $\vec S(t)$.

To see this, let us first consider the case, where the phase space $X = \{1, 2, \ldots, m\}$ is a finite set. In this case, as usual, $B_X$ is the σ-field of all subsets of $X$.

Let us introduce conditional marginal transition probabilities for the index $X(t)$,
$$
P(t, x, t+u, x'/\vec y) = \mathsf P\{X(t+u) = x' / \vec Y(t) = \vec y, X(t) = x\} = P(t, (\vec y, x), t+u, Y \times \{x'\}),
$$


and conditional marginal transition probabilities for the log-price process $\vec Y(t)$,
$$
P(t, \vec y, t+u, B/x, x') = \mathsf P\{\vec Y(t+u) \in B / \vec Y(t) = \vec y, X(t) = x, X(t+u) = x'\}
= P(t, (\vec y, x), t+u, B \times \{x'\}) / P(t, x, t+u, x'/\vec y).
$$

(3.4)

The formula above determines this conditional transition probability in the case, where $P(t, x, t+u, x'/\vec y) \ne 0$. Otherwise, an arbitrary probability measure (as a function of $B$) can play the role of this conditional transition probability.

Transition probabilities $P(t, \vec z, t+u, A)$, $\vec z = (\vec y, x) \in Z$, $A \in B_Z$ are uniquely determined by their values on cylindric sets $A = B \times C$, where $B \in B_k$, $C \in B_X$. For such sets, these probabilities can be represented via the corresponding marginal transition probabilities introduced above,
$$
P(t, (\vec y, x), t+u, B \times C) = \sum_{x' \in C} P(t, \vec y, t+u, B/x, x') \, P(t, x, t+u, x'/\vec y). \qquad (3.5)
$$

This representation lets one consider the index $X(t)$ as a modulator of the log-price component $\vec Y(t)$. In fact, relation (3.5) shows that it is some kind of mutual cross-modulation of the processes $\vec Y(t)$ and $X(t)$.

There is, however, the important case where the index $X(t)$ modulates the log-price process $\vec Y(t)$ but not vice versa. This is the case, where the marginal transition probabilities for the index do not depend on the values of the log-price process, i.e.,
$$
P(t, x, t+u, x'/\vec y) = P(t, x, t+u, x'). \qquad (3.6)
$$
It is easy to show that in this case the index $X(t)$ is a Markov process with the phase space $X$ and transition probabilities $P(t, x, t+u, x')$.

Let us now consider the case, where $X$ is a general space with the σ-field of measurable subsets $B_X$. Let us introduce conditional marginal transition probabilities for the index $X(t)$,
$$
P(t, x, t+u, C/\vec y) = \mathsf P\{X(t+u) \in C / \vec Y(t) = \vec y, X(t) = x\} = P(t, (\vec y, x), t+u, Y \times C).
$$
Transition probabilities $P(t, \vec z, t+u, A)$, $\vec z = (\vec y, x) \in Z$, $A \in B_Z$ are uniquely determined by their values on cylindric sets $A = B \times C$, where $B \in B_k$, $C \in B_X$. By definition, $P(t, (\vec y, x), t+u, B \times C) \le P(t, (\vec y, x), t+u, Y \times C)$, $C \in B_X$, for every $(\vec y, x) \in Z$, $B \in B_k$. Thus, by the Radon–Nikodym theorem, $P(t, (\vec y, x), t+u, B \times C)$ can be represented in the following form,
$$
P(t, (\vec y, x), t+u, B \times C) = \int_C P(t, \vec y, t+u, B/x, x') \, P(t, x, t+u, dx'/\vec y), \qquad (3.7)
$$


where $P(t, \vec y, t+u, B/x, x')$ is the Radon–Nikodym derivative of the measure $P(t, (\vec y, x), t+u, B \times C)$, $C \in B_X$ with respect to the measure $P(t, (\vec y, x), t+u, Y \times C)$, $C \in B_X$.

Under some minor conditions, for example, if $X$ is a Polish metric space (complete, separable metric space) and $B_X$ is the corresponding Borel σ-field of subsets of $X$, there exists a regular variant of the derivative $P(t, \vec y, t+u, B/x, x')$, which is a measurable function in the variables $(x, x', \vec y) \in X \times X \times Y$ for every $B \in B_k$ and a probability measure in $B \in B_k$ for every $(x, x', \vec y) \in X \times X \times Y$. This derivative is a regular conditional distribution
$$
P(t, \vec y, t+u, B/x, x') = \mathsf P\{\vec Y(t+u) \in B / \vec Y(t) = \vec y, X(t) = x, X(t+u) = x'\}.
$$
One can consider the index $X(t)$ as a modulator of the log-price component $\vec Y(t)$. In fact, relation (3.7) shows that it is some kind of mutual cross-modulation of the processes $\vec Y(t)$ and $X(t)$.

There is, however, the important case where the index $X(t)$ modulates the log-price process $\vec Y(t)$ but not vice versa. This is the case, where the marginal transition probabilities for the index do not depend on the values of the log-price process, i.e., for all admissible values of the arguments,
$$
P(t, x, t+u, C/\vec y) = P(t, x, t+u, C).
$$

(3.8)

It is easy to show that in this case the index X(t) is a Markov process with the phase space X and one-step transition probabilities P (t, x, t + u, C).
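A minimal Python sketch may help make the one-way modulation of relations (3.5)–(3.6) concrete: the index is a finite Markov chain whose transitions do not depend on the log-price, while each log-price increment is sampled conditionally on the new index state. The concrete ingredients (two index states, Gaussian increments with index-dependent drift and volatility) are purely illustrative assumptions, not a model taken from the book.

```python
import numpy as np

# One-way modulation: first sample the index move from its own transition
# matrix, then sample the log-price increment given the new index state.
# All numbers below are illustrative toy assumptions.

rng = np.random.default_rng(1)

Q = np.array([[0.9, 0.1],      # index transition matrix P(x, x')
              [0.2, 0.8]])
mu = np.array([0.05, -0.02])   # drift of the log-price in index state x'
sigma = np.array([0.1, 0.3])   # volatility of the log-price in index state x'

def step(y, x, dt=1.0):
    """One transition of the modulated pair (Y, X)."""
    x_new = rng.choice(2, p=Q[x])                       # marginal index move
    y_new = y + mu[x_new] * dt + sigma[x_new] * np.sqrt(dt) * rng.standard_normal()
    return y_new, x_new

y, x = 0.0, 0
path = [(y, x)]
for _ in range(10):
    y, x = step(y, x)
    path.append((y, x))
print(path)
```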

3.2 LPP with independent increments In this section, we introduce different models of log-price and price processes with independent increments, including Lévy processes.

3.2.1 Log-price and price processes with independent increments

This is the model, where the $k$-dimensional log-price $\vec Y(t) = (Y_1(t), \ldots, Y_k(t))$, $t \ge 0$ is a càdlàg process with independent increments, which means mutual independence of the increments $\vec Y(t_n) - \vec Y(t_{n-1})$, $n = 1, 2, \ldots$ for any $0 = t_0 < t_1 < t_2 < \cdots$.

As far as the initial value $\vec Y(0)$ is concerned, it is usually assumed that it is a random vector independent of the stochastic process $\vec Y(t) - \vec Y(0)$, $t \ge 0$.

In this case, the model without an index component is considered. In fact, one can always define a "virtual" constant index component $X(t)$ which takes a constant value $x_0$ for every $t \ge 0$.

The process $\vec Y(t)$ is a càdlàg Markov process with transition probabilities which are connected with the distributions of increments $P(t, t+u, A)$ by the


following relation,
$$
P(t, \vec y, t+u, A) = P(t, t+u, A - \vec y) = \mathsf P\{\vec y + \vec Y(t+u) - \vec Y(t) \in A\}.
$$

(3.9)

3.2.2 Step-wise log-price and price processes with independent increments

The important case is, where $\vec Y(t)$ is a process of step sums of independent random vectors. Such a process is determined by a sequence of independent random vectors $\vec Y_n = (Y_{n,1}, \ldots, Y_{n,k})$, $n = 1, 2, \ldots$. This sequence represents the sequential jumps of the process $\vec Y(t)$. Also, a non-random sequence $0 = t_0 < t_1 < \cdots$ such that $t_n \to \infty$ as $n \to \infty$ should be given. This sequence represents the sequential moments of jumps of the process $\vec Y(t)$. The process $\vec Y(t)$ is defined by the following relation,
$$
\vec Y(t) = \sum_{t_n \le t} \vec Y_n, \quad t \ge 0. \qquad (3.10)
$$

In this case, the corresponding price process is given by the relation,
$$
\vec S(t) = e^{\vec Y(t)}, \quad t \ge 0.
$$

(3.11)
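The following minimal Python sketch illustrates the step-wise construction (3.10) and the corresponding price process (3.11): jumps occur at non-random moments $t_1 < t_2 < \cdots$ and the jump sizes are independent random vectors. The jump times and the Gaussian jump distribution are illustrative assumptions only.

```python
import numpy as np

# Step-wise process with independent increments, cf. relations (3.10)-(3.11).
# Jump moments and the Gaussian jump law below are illustrative assumptions.

rng = np.random.default_rng(2)

k = 2                                    # dimension of the log-price
t_jump = np.array([0.5, 1.0, 1.5, 2.0])  # non-random jump moments t_n
jumps = rng.normal(0.0, 0.1, size=(len(t_jump), k))  # jump vectors Y_n

def log_price(t):
    """Y(t) = sum of the jumps Y_n with t_n <= t, cf. (3.10)."""
    return jumps[t_jump <= t].sum(axis=0)

def price(t):
    """S(t) = exp(Y(t)), componentwise, cf. (3.11)."""
    return np.exp(log_price(t))

for t in [0.25, 0.75, 1.25, 2.5]:
    print(t, log_price(t), price(t))
```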

3.2.3 Lévy log-price and price processes

Another important case is, where the process $\vec Y(t)$ is a Lévy process, that is, a stochastically continuous càdlàg process with independent increments. In this case the Lévy–Khintchine representation takes place for the characteristic function of increments,
$$
\varphi_t(\vec s) = \mathsf E \exp\{i(\vec s, \vec Y(t) - \vec Y(0))\}
= \exp\Big\{ i(\vec\mu(t), \vec s) - \frac{1}{2}(\Sigma(t)\vec s, \vec s)
+ \int_{|\vec y| \le 1} \big(e^{i(\vec s, \vec y)} - 1 - i(\vec s, \vec y)\big)\,\Pi(t, d\vec y)
+ \int_{|\vec y| > 1} \big(e^{i(\vec s, \vec y)} - 1\big)\,\Pi(t, d\vec y) \Big\}, \quad \vec s \in R^k, \ t \ge 0. \qquad (3.12)
$$
The particular important case is that of a continuous Lévy process (a Gaussian process with independent increments), where $\Pi(t, A) \equiv 0$ and the characteristic function $\varphi^{(\delta)}_t(\vec s)$ takes the following form,
$$
\varphi^{(\delta)}_t(\vec s) = \mathsf E \exp\{i(\vec s, \vec Y(t) - \vec Y(0))\}
= \exp\Big\{ i(\vec\mu(t), \vec s) - \frac{1}{2}(\Sigma(t)\vec s, \vec s) \Big\}, \quad \vec s \in R^k, \ t \ge 0.
$$

(3.13)

The most important is the homogeneous case, where the distributions of the increments $\vec Y(t+u) - \vec Y(t)$ do not depend on $t \ge 0$,
$$
\mathsf P\{\vec Y(t+u) - \vec Y(t) \in A\} = P(u, A),
$$

(3.14)

and the triplet $\langle \vec\mu(\cdot), \Sigma(\cdot), \Pi(\cdot, \cdot) \rangle$ takes the following form,
$$
\vec\mu(t) = t\vec\mu, \quad \Sigma(t) = t\Sigma, \quad \Pi(t, A) = t\Pi(A), \quad A \in B_k, \ t \ge 0.
$$

(3.15)

In the Gaussian homogeneous case, the characteristic function of the process $\vec W(t) = \vec Y(t) - \vec Y(0)$, $t \ge 0$ takes the form,
$$
\varphi^{(\delta)}_t(\vec s) = \mathsf E \exp\{i(\vec s, \vec W(t))\}
= \exp\Big\{ t\big(i(\vec\mu, \vec s) - \frac{1}{2}(\Sigma\vec s, \vec s)\big) \Big\}, \quad \vec s \in R^k, \ t \ge 0,
$$

(3.16)

where (a) $\vec\mu = (\mu_1, \ldots, \mu_k) = (\mathsf E W_1(1), \ldots, \mathsf E W_k(1))$ is the mean vector; (b) $\Sigma = \|\sigma_{ij}\| = \|\mathsf E(W_i(1) - \mathsf E W_i(1))(W_j(1) - \mathsf E W_j(1))\|$ is the corresponding covariance matrix for the $k$-dimensional Gaussian random vector $\vec W(1) = (W_1(1), \ldots, W_k(1))$.

In this case, the process $\vec W(t)$, $t \ge 0$ is referred to as a $k$-dimensional Brownian motion (or as a Wiener process). The Brownian motion $\vec W(t)$ is standard if $\vec\mu = (0, \ldots, 0)$ and $\Sigma = \|\delta(i, j)\|$ is a unit matrix. In this case, the components $\langle W_i(t), t \ge 0 \rangle$, $i = 1, \ldots, k$ are independent one-dimensional Brownian motions.

As far as the price process $\vec S(t)$ is concerned, it is given in this case by the following relation,
$$
\vec S(t) = e^{\vec Y(t)}, \quad t \ge 0. \qquad (3.17)
$$


The process $\vec S(t)$ can be referred to as an exponential Lévy process. If the process $\vec W(t) = \vec Y(t) - \vec Y(0)$, $t \ge 0$ is a $k$-dimensional Brownian motion, the process $\vec S(t) = e^{\vec Y(t)} = e^{\vec Y(0) + \vec W(t)}$, $t \ge 0$ can be referred to as a $k$-dimensional exponential (geometric) Brownian motion.
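A minimal Python sketch of a $k$-dimensional exponential (geometric) Brownian motion: $\vec W(t)$ is simulated on a grid from the homogeneous Gaussian triplet $(\vec\mu, \Sigma)$ of relation (3.16), and the price is $\vec S(t) = e^{\vec Y(0) + \vec W(t)}$ as in (3.17). The drift vector, covariance matrix and grid are illustrative assumptions.

```python
import numpy as np

# Simulation of a 2-dimensional geometric Brownian motion from an assumed
# homogeneous Gaussian triplet (mu, Sigma), cf. relations (3.16)-(3.17).

rng = np.random.default_rng(3)

mu = np.array([0.03, -0.01])                 # mean vector of W(1)
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])             # covariance matrix of W(1)
y0 = np.log(np.array([100.0, 50.0]))         # initial log-price Y(0)

dt = 1.0 / 250
n_steps = 250
L = np.linalg.cholesky(Sigma)                # factor used to correlate noise

Y = np.empty((n_steps + 1, 2))
Y[0] = y0
for n in range(n_steps):
    dW = mu * dt + np.sqrt(dt) * (L @ rng.standard_normal(2))
    Y[n + 1] = Y[n] + dW

S = np.exp(Y)                                # price process S(t) = e^{Y(t)}
print("terminal prices:", S[-1])
```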

3.2.4 Log-price and price processes with independent increments modulated by semi-Markov indices

For simplicity, let us restrict consideration to the case where the phase space $X = \{1, 2, \ldots\}$ is a discrete set.

Let us first introduce a continuous time semi-Markov process $X(t)$, $t \ge 0$. Let us introduce a two-dimensional Markov chain $(J_n, T_n)$, $n = 0, 1, \ldots$ with a phase space $X \times [0, \infty)$, with an initial distribution $Q(x) = \mathsf P\{J_0 = x\} = \mathsf P\{J_0 = x, T_0 = 0\}$ and one-step transition probabilities $Q_n(x, y, t) = \mathsf P\{J_{n+1} = y, T_{n+1} \le t / J_n = x, T_n = s\}$, which do not depend on the current state of the second component. Such Markov chains are usually referred to as Markov renewal processes.

Then the semi-Markov process $X(t)$ can be defined in the following way,
$$
X(t) = J_{N(t)}, \quad t \ge 0,
$$

(3.18)

where
$$
M_n = T_1 + \cdots + T_n, \quad n = 0, 1, \ldots,
$$

(3.19)

and
$$
N(t) = \max(n : M_n \le t), \quad t \ge 0.
$$

(3.20)

Random variables $J_n$ are the values of the semi-Markov process $X(t)$ at the moments of jumps $M_n$, while random variables $T_n$ are the corresponding inter-jump times.

The assumption $Q_n(x, y, 0) = 0$, $x, y \in X$, $n = 0, 1, \ldots$ is also made in order to exclude instant jumps. Also, some conditions are usually assumed which imply the relation $\mathsf P\{N(t) < \infty, t \ge 0\} = 1$, i.e., exclude accumulation of an infinite number of jumps in finite time intervals. For example, the condition $Q_n(x, y, h) \le q < 1$, $x, y \in X$, $n = 0, 1, \ldots$ for some $h > 0$ implies the above property.

Note that we introduced above an inhomogeneous in time Markov renewal process $(J_n, T_n)$ and, respectively, the corresponding semi-Markov process $X(t)$ defined above is also inhomogeneous in time. The semi-Markov process $X(t)$ is homogeneous in time if the Markov renewal process $(J_n, T_n)$ is a homogeneous in time Markov chain, that is, if its transition probabilities $Q_n(x, y, t) = Q(x, y, t)$, $x, y \in X$, $t \ge 0$, $n = 0, 1, \ldots$ do not depend on $n$.

As is well known, the semi-Markov process $X(t)$, in general, is not a Markov process. However, one can add additional components to the process $X(t)$ which will make the corresponding process Markov.


Let us define the process which counts the time from the moment of the last jump before moment $t$ up to moment $t$,
$$
M(t) = t - M_{N(t)}, \quad t \ge 0.
$$

(3.21)

In this case, the process $\bar X(t) = (X(t), M(t), N(t))$, $t \ge 0$ is a homogeneous Markov process with the phase space $\bar{\mathrm X} = X \times [0, \infty) \times N$, where $N = \{0, 1, \ldots\}$.

If the Markov renewal process $(J_n, T_n)$ is homogeneous in time, then the involvement of the component $N(t)$ is not needed. In this case the process $\bar X(t) = (X(t), M(t))$, $t \ge 0$ is a homogeneous Markov process.

There also exists the case where the semi-Markov process $X(t)$ is itself a homogeneous in time Markov process. It is the case, where the transition probabilities $Q_n(x, y, t)$ have the special form $Q_n(x, y, t) = p(x, y)(1 - e^{-m(x)t})$, where $\|p(x, y)\|$ is an $m \times m$ stochastic matrix, and $m(x) > 0$, $x \in X$.

A process with independent increments $\vec Y(t)$ modulated by a semi-Markov index $X(t)$ can be defined by the following relation,
$$
\vec Y(t) = \vec Y(0) + \sum_{n=0}^{N(t)-1} \vec Y_{n, J_n}(T_{n+1}) + \vec Y_{N(t), X(t)}(M(t)), \quad t \ge 0,
$$

(3.22)

where: (a) $\vec Y(0)$ is a $k$-dimensional random vector; (b) $\vec Y_{n,x}(t)$, $t \ge 0$ is, for every $n = 0, 1, \ldots$ and $x \in X$, a $k$-dimensional càdlàg process with independent increments, with the initial state $\vec Y_{n,x}(0) \equiv 0$ and distributions of increments $P_{n,x}(t, t+u, A) = \mathsf P\{\vec Y_{n,x}(t+u) - \vec Y_{n,x}(t) \in A\}$, for $n = 0, 1, \ldots$, $x \in X$; (c) $(J_n, T_n)$, $n = 0, 1, \ldots$ is a Markov renewal process and $X(t)$, $t \ge 0$ is the corresponding semi-Markov process with a discrete set of states $X$ and transition probabilities $Q_n(x, y, t)$, $t \ge 0$, $x, y \in X$; (d) the random vector $\vec Y(0)$, the Markov renewal process $\langle (J_n, T_n), n = 0, 1, \ldots \rangle$ and the processes with independent increments $\langle \vec Y_{n,x}(t), t \ge 0 \rangle$, $n = 0, 1, \ldots$, $x \in X$ are mutually independent.

The following alternative recurrence relation can be used to define the process with independent increments $\vec Y(t)$ modulated by a semi-Markov index $X(t)$,
$$
\vec Y(t) = \vec Y(M_n) + \vec Y_{n, J_n}(t - M_n) \ \text{ for } \ M_n < t \le M_{n+1}, \quad n = 0, 1, \ldots. \qquad (3.23)
$$
In the case, where $\vec Y_{n,x}(t)$, $t \ge 0$ are Lévy processes, the corresponding distributions of increments $P_{n,x}(t, t+u, A)$ are uniquely determined by the corresponding triplets $\langle \vec\mu_{n,x}(\cdot), \Sigma_{n,x}(\cdot), \Pi_{n,x}(\cdot, \cdot) \rangle$, which may, in the general case, be functions of the parameters $n = 0, 1, \ldots$ and $x \in X$.

As far as the price process $\vec S(t)$ is concerned, it is given by the following formula,
$$
\vec S(t) = e^{\vec Y(t)}, \quad t \ge 0. \qquad (3.24)
$$


Note that the process $\vec Z(t) = (\vec Y(t), X(t))$ is not a Markov process. But the process $\bar Z(t) = (\vec Y(t), X(t), M(t), N(t))$ as well as the process $\bar V(t) = (\vec S(t), X(t), M(t), N(t))$ are Markov processes. Moreover, the processes $\bar Z(t)$ and $\bar V(t)$ are homogeneous in time Markov processes.

If the distributions of increments $P_{n,x}(t, t+u, A) = P_x(t, t+u, A)$ do not depend on $n$, and the transition probabilities $Q_n(x, y, t) = Q(x, y, t)$ also do not depend on $n$, then the processes $\bar Z(t) = (\vec Y(t), X(t), M(t))$ and $\bar V(t) = (\vec S(t), X(t), M(t))$ are homogeneous in time Markov processes.

Finally, if the distributions of increments $P_x(t, t+u, A) = P_x(u, A)$ do not depend on $t$, i.e., $\vec Y_{n,x}(t)$ are homogeneous in time processes with independent increments, and the transition probabilities $Q_n(x, y, t) = p(x, y)(1 - e^{-m(x)t})$, and, thus, the semi-Markov process $X(t)$ is a homogeneous in time Markov process, then the processes $\vec Z(t) = (\vec Y(t), X(t))$ and $\vec V(t) = (\vec S(t), X(t))$ are homogeneous in time Markov processes.
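A minimal Python sketch of the recurrence (3.23): between the jump moments $M_n$ of the index, the log-price evolves as a process with independent increments whose law depends on the current index state $J_n$. Here the inter-jump times are exponential (so the index is in fact the Markov special case mentioned above), the index has two states, and the within-regime increments are Brownian with regime-dependent drift and volatility; all of these concrete choices are illustrative assumptions.

```python
import numpy as np

# Simulation of a log-price with independent increments modulated by a
# two-state index, following the recurrence structure of relation (3.23).
# Regime parameters and intensities below are illustrative assumptions.

rng = np.random.default_rng(4)

p = np.array([[0.0, 1.0],      # embedded jump chain p(x, y)
              [1.0, 0.0]])
m = np.array([0.5, 1.5])       # jump intensities m(x)
mu = np.array([0.05, -0.10])   # regime-dependent drift
sigma = np.array([0.10, 0.30]) # regime-dependent volatility

def simulate(T=5.0, y0=0.0, x0=0, dt=0.01):
    t, y, x = 0.0, y0, x0
    times, ys, xs = [t], [y], [x]
    next_jump = rng.exponential(1.0 / m[x])
    while t < T:
        h = min(dt, next_jump, T - t)
        # increment of the regime-x process with independent increments
        y += mu[x] * h + sigma[x] * np.sqrt(h) * rng.standard_normal()
        t += h
        next_jump -= h
        if next_jump <= 0.0:               # index jumps at the moment M_{n+1}
            x = rng.choice(2, p=p[x])
            next_jump = rng.exponential(1.0 / m[x])
        times.append(t); ys.append(y); xs.append(x)
    return np.array(times), np.array(ys), np.array(xs)

times, ys, xs = simulate()
print("terminal log-price:", ys[-1], "terminal index:", xs[-1])
```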

3.3 Diffusion LPP In this section we present examples of diffusion and modulated diffusion log-price processes.

3.3.1 Diffusion log-price and price processes

We restrict consideration to the model, where the $k$-dimensional log-price process $\vec Y(t) = (Y_1(t), \ldots, Y_k(t))$, $t \ge 0$ is given by a system of stochastic differential equations,
$$
dY_i(t) = \mu_i(t, \vec Y(t))\, dt + \sum_{j=1}^l \sigma_{ij}(t, \vec Y(t))\, dW_j(t), \quad t \ge 0, \quad i = 1, 2, \ldots, k, \qquad (3.25)
$$
where (a) $\vec Y(0) = (Y_1(0), \ldots, Y_k(0))$ is a random vector; (b) $\vec W(t) = (W_1(t), \ldots, W_l(t))$, $t \ge 0$ is a multivariate Brownian motion with a mean vector $\vec\mu = (\mathsf E W_1(1), \ldots, \mathsf E W_l(1)) = (0, \ldots, 0)$ and a covariance matrix $\Sigma = \|\mathsf E W_i(1) W_j(1)\| = \|\sigma_{ij}\|$ such that $\sigma_{ii} = 1$, $i = 1, \ldots, l$; (c) the random vector $\vec Y(0)$ and the process $\vec W(t)$, $t \ge 0$ are independent; (d) $\mu_i(t, \vec y)$, $i = 1, \ldots, k$ are measurable functions acting from $[0, \infty) \times R^k$ to $R^1$; (e) $\sigma_{ij}(t, \vec y)$, $i = 1, \ldots, k$, $j = 1, \ldots, l$ are measurable functions acting from $[0, \infty) \times R^k$ into $R^1$.

A special case is, where the initial state of the process $\vec Y(0) = \vec y = (y_1, \ldots, y_k) \in R^k$ is a non-random vector. In this case, the notation $\vec Y_{\vec y}(t)$, $t \ge 0$ is used in order to indicate the corresponding initial state.


It is worth noting that in this case, the multivariate Brownian motion $\vec W(t)$, $t \ge 0$ generates the family of diffusion processes $\vec Y_{\vec y}(t)$ depending on the initial state $\vec y \in R^k$ and defined by the relation (3.25).

The definition given above slightly differs from the standard one, where it is usually assumed that the components of the vector Brownian motion $\vec W(t)$ are not correlated, i.e., the covariances $\sigma_{ij} = 0$, $i \ne j$. The model can be reduced to the standard one.

It is natural to assume that the components of the process $\vec W(t)$ are not linearly dependent. It is so if $\det(\Sigma) \ne 0$. The matrix $\Sigma$ is symmetric. In this case, there exists a symmetric matrix $\Pi = \|\pi_{ij}\| = \sqrt{\Sigma}$, which is a solution of the equation $\Sigma = \Pi^2$. This matrix also has $\det(\Pi) \ne 0$.

In this case, $\vec W{}'(t) = \Pi^{-1}\vec W(t)$, $t \ge 0$ is a standard $l$-dimensional Brownian motion with non-correlated components and $\vec W(t) = \Pi \vec W{}'(t)$, $t \ge 0$. Note also that the processes $\vec W(t)$ and $\vec W{}'(t)$ generate the same natural filtration.

The system of stochastic differential equations (3.25) can be rewritten in the equivalent form,
$$
dY_i(t) = \mu_i(t, \vec Y(t))\, dt + \sum_{j=1}^l \sigma'_{ij}(t, \vec Y(t))\, dW'_j(t), \quad t \ge 0, \quad i = 1, 2, \ldots, k, \qquad (3.26)
$$
where
$$
\sigma'_{ij}(t, \vec y) = \sum_{r=1}^l \sigma_{ir}(t, \vec y)\, \pi_{rj}, \quad i, j = 1, \ldots, l. \qquad (3.27)
$$

There are two important cases. The first one is where the noise terms $W_i(t)$, $t \ge 0$, $i = 1, \ldots, l$ are independent. The second one is, where the noise terms $W_i(t) = W_1(t)$, $t \ge 0$, $i = 1, \ldots, l$ are the same Brownian motion.

In the first case, the $l \times l$ matrix $\Pi = \|\mathrm I(i = j)\|$ is a unit matrix. In the second case, the $l \times l$ matrix $\Pi = \|\frac{1}{\sqrt l}\|$ and $\det(\Pi) = 0$ if $l > 1$. However, the model can, in the obvious way, be reduced to the case, where $l = 1$, with the coefficients $\tilde\sigma_{i1}(t, \vec y) = \sum_{j=1}^l \sigma_{ij}(t, \vec y)$, $i = 1, \ldots, k$. In the case $l = 1$, the $1 \times 1$ matrix $\Pi = \|1\|$, and $\det(\Pi) = 1$.

The standard conditions on the coefficients of equation (3.25) that provide the existence of a solution $\vec Y(t)$, $t \ge 0$ of this equation, which is a continuous Markov process adapted to the filtration $\mathcal F_t = \sigma[\vec Y(0), \vec W(s), s \le t]$, $t \ge 0$, are:

(1) for every $T, N > 0$ there exists a constant $K_{T,N} < \infty$ such that $|\mu_i(t, \vec x) - \mu_i(t, \vec y)|, |\sigma_{ij}(t, \vec x) - \sigma_{ij}(t, \vec y)| \le K_{T,N}|\vec x - \vec y|$ for $t \in [0, T]$, $|\vec x|, |\vec y| \le N$, $i = 1, \ldots, k$, $j = 1, \ldots, l$;

(2) for every $T > 0$ there exists a constant $K_T < \infty$ such that $|\mu_i(t, \vec x)|, |\sigma_{ij}(t, \vec x)| \le K_T(1 + |\vec x|)$, for $t \in [0, T]$, $\vec x \in R^k$, $i = 1, \ldots, k$, $j = 1, \ldots, l$.


The solution $\vec Y(t)$, $t \ge 0$ is a.s. unique, i.e., if $\vec Y{}'(t)$, $t \ge 0$ is another continuous Markov process, which is adapted to the filtration $\mathcal F_t$, $t \ge 0$, satisfies equation (3.25), and such that $\mathsf P\{\vec Y{}'(0) = \vec Y(0)\} = 1$, then $\mathsf P\{\vec Y{}'(t) = \vec Y(t), t \ge 0\} = 1$.

It is worth noting that the system (3.25) can be considered only on a finite interval $[0, T]$. In this case, conditions (1)–(2) that provide the existence of the a.s. unique solution should be required only for $(t, \vec x) \in [0, T] \times R^k$.

As far as the price process $\vec S(t)$ is concerned, it is given by the following relation,
$$
\vec S(t) = e^{\vec Y(t)}, \quad t \ge 0. \qquad (3.28)
$$
The price process $\vec S(t)$ is also a diffusion process, and the system (3.26) can be rewritten, using Itô's formula, for this process,
$$
dS_i(t) = S_i(t)\Big(\mu_i(t, \ln \vec S(t)) + \frac{1}{2}\sum_{j=1}^l \sigma'_{ij}(t, \ln \vec S(t))^2\Big)dt
+ S_i(t)\sum_{j=1}^l \sigma'_{ij}(t, \ln \vec S(t))\, dW'_j(t), \quad t \ge 0, \quad i = 1, 2, \ldots, k. \qquad (3.29)
$$
This is a system of linear stochastic differential equations. It obviously has an a.s. unique solution, which is a continuous Markov process adapted to the filtration $\mathcal F_t$, $t \ge 0$, in the case where the system (3.25) has an a.s. unique solution which is a continuous Markov process adapted to the filtration $\mathcal F_t$, $t \ge 0$.

It is worth noting that condition (1) for the process $\vec Y(t)$ implies that the corresponding variant of this condition also holds for the process $\vec S(t)$. However, condition (2) for the process $\vec Y(t)$ may not imply this condition to hold for the process $\vec S(t)$. There exists the important case where condition (2) holds for both processes $\vec Y(t)$ and $\vec S(t)$. It is the case where the following condition, stronger than (2), is assumed:

(3) for every $T > 0$ there exists a constant $K_T < \infty$ such that $|\mu_i(t, \vec x)|, |\sigma_{ij}(t, \vec x)| \le K_T$, for $t \in [0, T]$, $\vec x \in R^k$, $i = 1, \ldots, k$, $j = 1, \ldots, l$.

An important case is, where the following continuity condition, additional to conditions (1) and (2), holds for the coefficients of the system of stochastic differential equations (3.25):

(4) $\mu_i(t, \vec x)$, $\sigma_{ij}(t, \vec x)$, $i = 1, \ldots, k$, $j = 1, \ldots, l$ are continuous functions in $(t, \vec x) \in [0, \infty) \times R^k$.

In this case, the processes $\vec Y(t)$ and $\vec S(t)$ are diffusion processes.
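The following minimal Python sketch illustrates an Euler-type time discretization of the system (3.25), using the reduction (3.26)–(3.27): correlated Brownian components are generated as $\vec W = \Pi \vec W{}'$, where $\Pi$ is a symmetric square root of the covariance matrix $\Sigma$. The coefficient functions and parameters are illustrative assumptions chosen to satisfy conditions (1)–(2) and (4); this is not one of the book's approximation schemes, only an illustration of the model.

```python
import numpy as np
import numpy.linalg as la

# Euler-type discretization of the diffusion system (3.25), with correlated
# driving noise built via a symmetric square root Pi of Sigma, cf. (3.26)-(3.27).
# Coefficients and parameters below are illustrative assumptions.

def mu(t, y):
    return 0.1 - 0.5 * y            # componentwise linear drift

def sigma(t, y):
    return np.array([[0.2, 0.0],
                     [0.0, 0.3]])   # constant, bounded diffusion coefficients

Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])      # covariance matrix of the driving noise
eigval, eigvec = la.eigh(Sigma)
Pi = eigvec @ np.diag(np.sqrt(eigval)) @ eigvec.T   # symmetric square root

rng = np.random.default_rng(5)
T, n_steps = 1.0, 200
dt = T / n_steps
y = np.array([0.0, 0.0])
for n in range(n_steps):
    t = n * dt
    dW = Pi @ (np.sqrt(dt) * rng.standard_normal(2))   # correlated increments
    y = y + mu(t, y) * dt + sigma(t, y) @ dW
print("approximate Y(T):", y, "approximate S(T):", np.exp(y))
```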


3.3.2 Diffusion log-price processes modulated by semi-Markov indices

A Markov renewal process $(J_n, T_n)$, $n = 0, 1, \ldots$ and the corresponding continuous time semi-Markov process $X(t)$, $t \ge 0$ with a discrete phase space $X = \{1, 2, \ldots\}$ should first be introduced in the same way as in Subsection 3.2.4.

A diffusion process $\vec Y(t)$ modulated by a semi-Markov index $X(t)$ can be defined by the following recurrence relation,
$$
\vec Y(t) = \vec Y(0) + \vec Y_{\vec Y(M_n), n, J_n}(t - M_n) \ \text{ for } \ M_n < t \le M_{n+1}, \quad n = 0, 1, \ldots, \qquad (3.30)
$$
where: (a) $\vec Y(0)$ is a $k$-dimensional random vector; (b) $\langle \vec Y_{\vec y, n, x}(t), t \ge 0 \rangle$ is, for every $\vec y \in R^k$, $n = 0, 1, \ldots$, $x \in X$, a $k$-dimensional process, which has the initial state $\vec Y_{\vec y, n, x}(0) = \vec y$, $\vec y \in R^k$ and is a solution of the system of stochastic differential equations (3.25) with coefficients $\mu_{i,n,x}(t, \vec y)$, $i = 1, \ldots, k$, $\sigma_{ij,n,x}(t, \vec y)$, $i = 1, \ldots, k$, $j = 1, \ldots, l$, and a multivariate standard Brownian motion $\vec W_n(t) = (W_{1,n}(t), \ldots, W_{l,n}(t))$, $t \ge 0$ generating this system, for every $n = 0, 1, \ldots$, $x \in X$; (c) $\mu_{i,n,x}(t, \vec y)$, $i = 1, \ldots, k$, $n = 0, 1, \ldots$, $x \in X$ are measurable functions acting from $[0, \infty) \times R^k$ to $R^1$; (d) $\sigma_{ij,n,x}(t, \vec y)$, $i = 1, \ldots, k$, $j = 1, \ldots, l$, $n = 0, 1, \ldots$, $x \in X$ are measurable functions acting from $[0, \infty) \times R^k$ into $R^1$; (e) $(J_n, T_n)$, $n = 0, 1, \ldots$ is a Markov renewal process and $X(t)$, $t \ge 0$ is the corresponding semi-Markov process with a discrete set of states $X$ and transition probabilities $Q_n(x, y, t)$, $t \ge 0$, $n = 0, 1, \ldots$, $x, y \in X$; (f) the random vector $\vec Y(0)$, the processes $\langle \vec W_n(t), t \ge 0 \rangle$, $n = 0, 1, \ldots$ and the Markov renewal process $\langle (J_n, T_n), n = 0, 1, \ldots \rangle$ are mutually independent.

It is natural to assume additionally that: (g) the coefficients $\mu_{i,n,x}(t, \vec y)$, $i = 1, \ldots, k$, $\sigma_{ij,n,x}(t, \vec y)$, $i = 1, \ldots, k$, $j = 1, \ldots, l$ satisfy, for every $x \in X$, $n = 0, 1, \ldots$, the conditions (1), (2) and (4) formulated above, which guarantee, for every $n = 0, 1, \ldots$, $x \in X$, the existence of the unique solution of the system of stochastic differential equations (3.25), which is a diffusion process; (h) the transition probabilities $Q_n(x, y, t)$, $t \ge 0$, $x, y \in X$, $n = 0, 1, \ldots$ satisfy the conditions formulated above, which exclude instant jumps and the appearance of an infinite number of jumps in finite time intervals for the semi-Markov process $X(t)$.

In this case, one can refer to the process $\vec Y(t)$ as a diffusion process modulated by the semi-Markov index $X(t)$.

As far as the price process $\vec S(t)$ is concerned, it is given by the following formula,
$$
\vec S(t) = e^{\vec Y(t)}, \quad t \ge 0. \qquad (3.31)
$$
Note that the process $\vec Z(t) = (\vec Y(t), X(t))$ is not a Markov process. But the process $\bar Z(t) = (\vec Y(t), X(t), M(t), N(t))$ as well as the process $\bar V(t) = (\vec S(t), X(t), M(t), N(t))$ are Markov processes. Moreover, the processes $\bar Z(t)$ and $\bar V(t)$ are homogeneous in time Markov processes.


If the coefficients $\mu_{i,n,x}(t, \vec y) = \mu_{i,x}(t, \vec y)$, $i = 1, \ldots, k$, $n = 0, 1, \ldots$, $x \in X$ and $\sigma_{ij,n,x}(t, \vec y) = \sigma_{ij,x}(t, \vec y)$, $i = 1, \ldots, k$, $j = 1, \ldots, l$, $n = 0, 1, \ldots$, $x \in X$ do not depend on $n$, the covariance matrices $\Sigma_{n,x} = \Sigma_x = \|\sigma_{ij,x}\|$ do not depend on $n$, and the transition probabilities $Q_n(x, y, t) = Q(x, y, t)$ also do not depend on $n$, then the processes $\bar Z(t) = (\vec Y(t), X(t), M(t))$ and $\bar V(t) = (\vec S(t), X(t), M(t))$ are homogeneous in time Markov processes.

Finally, if the coefficients $\mu_{i,n,x}(t, \vec y) = \mu_{i,x}(\vec y)$, $i = 1, \ldots, k$, $n = 0, 1, \ldots$, $x \in X$ and $\sigma_{ij,n,x}(t, \vec y) = \sigma_{ij,x}(\vec y)$, $i = 1, \ldots, k$, $j = 1, \ldots, l$, $n = 0, 1, \ldots$, $x \in X$ do not depend on $n$ and $t$; the covariance matrices $\Sigma_{n,x} = \Sigma_x = \|\sigma_{ij,x}\|$ do not depend on $n$, and, thus, $\vec Y_{\vec y, n, x}(t)$ are homogeneous in time diffusion processes; the transition probabilities $Q_n(x, y, t) = p(x, y)(1 - e^{-m(x)t})$, and, thus, the semi-Markov process $X(t)$ is a homogeneous in time Markov process, then the processes $\vec Z(t) = (\vec Y(t), X(t))$ and $\vec V(t) = (\vec S(t), X(t))$ are homogeneous in time Markov processes.

3.4 American-type options for Markov LPP

In this section, we define American-type option contracts and present some general results concerning the pricing of such options for continuous time multivariate modulated Markov price processes.

3.4.1 American-type options for continuous time price processes

Let us consider a càdlàg Markov log-price process $\vec Z(t) = (\vec Y(t), X(t))$, $t \in [0, T]$ and the corresponding càdlàg Markov price process $\vec V(t) = (\vec S(t), X(t))$, $t \in [0, T]$. Recall that the log-price process $\vec Y(t)$ and the price process $\vec S(t)$ are connected by the relation $\vec S(t) = e^{\vec Y(t)}$, $t \in [0, T]$.

Let $\mathcal F_t = \sigma[\vec Z(s), 0 \le s \le t]$, $t \in [0, T]$ be the natural filtration generated by the process $\vec Z(t)$. It coincides with the natural filtration $\mathcal F_t = \sigma[\vec V(s), 0 \le s \le t]$, $t \in [0, T]$ generated by the process $\vec V(t)$.

We also consider a pay-off function $g(t, \vec s, x)$, which is assumed to be a real-valued function defined for $(t, \vec s, x) \in [0, \infty) \times R^+_k \times X$ and measurable in the argument $(t, \vec s, x)$. The pay-off function can also be expressed as $g(t, e^{\vec y}, x)$ and considered as a function defined for $(t, \vec y, x) \in [0, \infty) \times R^k \times X$.

An American-type option contract is an agreement between two parties, a seller and a buyer, which has an option price $C$ paid by the buyer to the seller. The option contract guarantees to the buyer the possibility to execute the contract at any moment $0 \le t \le T$ and to get in this case the pay-off (reward) $g(t, \vec S(t), X(t)) = g(t, e^{\vec Y(t)}, X(t))$. The parameter $T \in [0, \infty)$ is called a maturity.


We analyze the contract from the position of the buyer. In this case, the option contract can be analyzed either in the situation when the contract is not yet bought by the buyer or after the contract has already been bought by the buyer.

In the first case, one of the important questions is: what should be the fair price of the contract? In the second case, the goal of the buyer is to execute the contract in the optimal way according to some reasonable criterion of optimality. One such natural criterion is to execute the contract in the way that would maximize the expected pay-off (reward).

It is natural to assume that the decision of the buyer to execute or not the contract at some moment $0 \le t \le T$ should be based only on the information about the modulated price process up to moment $t$. This corresponds to the assumption that the "stopping" strategy of the buyer is defined by a random stopping time $\tau$ that is a Markov moment for the price process $\vec V(t) = (\vec S(t), X(t))$ as well as for the log-price process $\vec Z(t) = (\vec Y(t), X(t))$. This means that $\tau$ is a non-negative random variable such that $\{\tau > t\} \in \mathcal F_t$ for every $t \ge 0$. According to the definition of the option contract, the Markov moment $\tau$ defining the stopping strategy of the buyer should also satisfy the inequality $0 \le \tau \le T$.

If the buyer decides to execute the contract at a stopping time $\tau$, the corresponding pay-off (reward) will be the random variable $g(\tau, \vec S(\tau), X(\tau)) = g(\tau, e^{\vec Y(\tau)}, X(\tau))$.

3.4.2 Optimal expected rewards, reward functions and optimal stopping times

The processes $\vec Z(t)$ and $\vec V(t)$ generate the same natural filtration $\mathcal F_t$ and, therefore, the same class $\mathcal M_{\max,0,T}$ of all Markov moments $0 \le \tau \le T$.

The first natural object of interest is the optimal expected reward,
$$
\Phi(\mathcal M_{\max,0,T}) = \sup_{\tau_0 \in \mathcal M_{\max,0,T}} \mathsf E\, g(\tau_0, e^{\vec Y(\tau_0)}, X(\tau_0))
= \sup_{\tau_0 \in \mathcal M_{\max,0,T}} \mathsf E\, g(\tau_0, \vec S(\tau_0), X(\tau_0)). \qquad (3.32)
$$

We shall assume that the following condition holds:

A1: $\mathsf E \sup_{0 \le t \le T} |g(t, e^{\vec Y(t)}, X(t))| = \mathsf E \sup_{0 \le t \le T} |g(t, \vec S(t), X(t))| < \infty$.

In Chapter 4, we give effective conditions, formulated in terms of pay-off functions $g(t, e^{\vec y}, x)$ and transition probabilities for log-price processes $\vec Z(t)$, which are sufficient for condition A1 to hold.


Condition A1 guarantees that the following inequality holds,
$$
|\Phi(\mathcal M_{\max,0,T})| \le \sup_{\tau_0 \in \mathcal M_{\max,0,T}} \mathsf E\, |g(\tau_0, e^{\vec Y(\tau_0)}, X(\tau_0))|
\le \mathsf E \sup_{0 \le t \le T} |g(t, \vec S(t), X(t))| < \infty. \qquad (3.33)
$$

Thus, the reward functional $\Phi(\mathcal M_{\max,0,T})$ is well defined.

Let $\mathcal M_{\max,t,T}$ be, for every $t \in [0, T]$, the class of all Markov moments $\tau_t$ for the Markov process $\vec Z(s)$ such that (a) $t \le \tau_t \le T$, (b) event $\{\tau_t > s\} \in \mathcal F_{t,s} = \sigma[\vec Z(u), t \le u \le s]$, $t \le s \le T$.

The second natural object of interest is the reward function, defined for $(t, \vec z) \in [0, T] \times Z$,
$$
\phi_t(\vec z) = \phi_t(\vec y, x) = \sup_{\tau_t \in \mathcal M_{\max,t,T}} \mathsf E_{\vec z, t}\, g(\tau_t, e^{\vec Y(\tau_t)}, X(\tau_t)). \qquad (3.34)
$$

Here and henceforth, the notations $\mathsf P_{\vec z, t}$ and $\mathsf E_{\vec z, t}$ are used, respectively, for conditional probabilities and conditional expectations, under the condition $\vec Z(t) = \vec z$.

Let us assume that the following condition holds:

A2: $\mathsf E_{\vec z, t} \sup_{t \le s \le T} |g(s, e^{\vec Y(s)}, X(s))| < \infty$, for $\vec z \in Z$, $0 \le t \le T$.

In Chapter 4, we also give conditions, formulated in terms of pay-off functions $g(t, e^{\vec y}, x)$ and transition probabilities for log-price processes $\vec Z(t)$, which are sufficient for condition A2 to hold.

Condition A2 guarantees that the following relation holds, for $\vec z \in Z$, $0 \le t \le T$,
$$
|\phi_t(\vec z)| \le \sup_{\tau_t \in \mathcal M_{\max,t,T}} \mathsf E_{\vec z, t}\, |g(\tau_t, e^{\vec Y(\tau_t)}, X(\tau_t))|
\le \mathsf E_{\vec z, t} \sup_{t \le s \le T} |g(s, e^{\vec Y(s)}, X(s))| < \infty. \qquad (3.35)
$$

Thus, the reward function $\phi_t(\vec z)$ is also well defined.

Under model measurability assumptions imposed on transition probabilities of càdlàg Markov log-price processes and on pay-off functions, the reward function $\phi_t(\vec z)$ is measurable in the argument $(\vec z, t)$ and the following relation takes place, if both conditions A1 and A2 hold,
$$
\Phi(\mathcal M_{\max,0,T}) = \mathsf E\, \phi_0(\vec Z(0)).
$$

(3.36)

A third natural object of interest is optimal stopping times $\tau^*_t \in \mathcal M_{\max,t,T}$, $t \in [0, T]$ such that $\Phi(\mathcal M_{\max,0,T}) = \mathsf E\, g(\tau^*_0, e^{\vec Y(\tau^*_0)}, X(\tau^*_0))$ and $\phi_t(\vec z) = \mathsf E_{\vec z, t}\, g(\tau^*_t, e^{\vec Y(\tau^*_t)}, X(\tau^*_t))$, for $\vec z \in Z$, $0 \le t \le T$.

In this book, we concentrate our attention on the computational problems connected with estimation and computing of optimal expected rewards and reward functions.
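Since the optimal expected reward (3.32) is a supremum over stopping times, any fixed Markov moment yields a lower bound that can be estimated by simulation. The following Python sketch estimates $\mathsf E\, g(\tau, \vec S(\tau))$ by Monte Carlo for a simple threshold stopping rule; the model (a one-dimensional geometric Brownian motion without an index component), the put-type pay-off and the threshold rule are illustrative assumptions only, and this is not one of the book's approximation algorithms, merely an illustration of the objects appearing in (3.32)–(3.36).

```python
import numpy as np

# Monte Carlo estimate of E g(tau, S(tau)) for a fixed threshold stopping rule,
# which gives a lower bound for the optimal expected reward (3.32).
# Model, pay-off and threshold below are illustrative assumptions.

rng = np.random.default_rng(6)

s0, r, vol, T = 100.0, 0.03, 0.25, 1.0
K, barrier = 100.0, 85.0          # strike and exercise threshold
n_steps, n_paths = 250, 2000
dt = T / n_steps

def payoff(t, s):                  # g(t, s) = e^{-r t} [K - s]_+
    return np.exp(-r * t) * np.maximum(K - s, 0.0)

total = 0.0
for _ in range(n_paths):
    s, stopped = s0, False
    for n in range(1, n_steps + 1):
        s *= np.exp((r - 0.5 * vol ** 2) * dt
                    + vol * np.sqrt(dt) * rng.standard_normal())
        if s <= barrier:           # tau = first moment the price hits the barrier
            total += payoff(n * dt, s)
            stopped = True
            break
    if not stopped:                # otherwise exercise at maturity
        total += payoff(T, s)

print("lower bound for the optimal expected reward:", total / n_paths)
```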


Being restricted by the volume of the book, we leave beyond its scope materials concerning the structure of optimal stopping times and the related computational problems for American-type options, in particular, results about the multi-threshold structure of the optimal stopping domains. We just refer here to papers by the author and his collaborators related to these problems, namely Kukush and Silvestrov (1999, 2000, 2004), Jönsson (2001, 2005a, 2005b), Jönsson, Kukush, and Silvestrov (2002, 2004, 2005), and Lundgren (2007a, 2007b, 2010b). We also refer to the classical books by Chow, Robbins, and Siegmund (1971) and Shiryaev (1976), as well as to more recent works listed in the bibliographical remarks, where the reader can find general facts about optimal stopping times for continuous time Markov processes.

However, all necessary facts concerning the hitting structure of optimal stopping times and the related recurrence relations for reward functions for discrete time processes, which play an essential role in our approximation studies, are given in the first volume of the book, Silvestrov (2014).

3.4.3 Pay-off functions

In this book, we study American-type options with pay-off functions $g(t, \vec s, x)$ that have not more than polynomial rate of growth in the price argument $\vec s = (s_1, \ldots, s_k) \in R^+_k$ and are bounded in the arguments $0 \le t \le T$ and $x \in X$. This means that the following condition is assumed to hold for some vector parameter $\bar\gamma = (\gamma_1, \ldots, \gamma_k)$ with non-negative components:

$\dot{\mathrm B}_8[\bar\gamma]$: $\sup_{0 \le t \le T,\, (\vec s, x) \in V} \dfrac{|g(t, \vec s, x)|}{1 + \sum_{i=1}^k L_{16,i}(s_i \vee s_i^{-1})^{\gamma_i}} < L_{15}$, for some $0 < L_{15} < \infty$ and $0 \le L_{16,1}, \ldots, L_{16,k} < \infty$.

This condition can also be equivalently formulated as the assumption that the pay-off function, expressed as $g(t, e^{\vec y}, x)$, has not more than exponential rate of growth in the argument $\vec y = (y_1, \ldots, y_k) \in R^k$ and is bounded in the argument $x \in X$:

$\mathrm B_8[\bar\gamma]$: $\sup_{0 \le t \le T,\, (\vec y, x) \in Z} \dfrac{|g(t, e^{\vec y}, x)|}{1 + \sum_{i=1}^k L_{16,i}\, e^{\gamma_i |y_i|}} < L_{15}$, for some $0 < L_{15} < \infty$ and $0 \le L_{16,1}, \ldots, L_{16,k} < \infty$.

It is useful to note that the case γi = 0 corresponds to the model where the pay-off function is bounded in argument si , respectively, yi . Below, we present some standard examples of pay-off functions.

3.4.4 Pay-off functions for call and put type options In what follows, the notation [s]+ = sI(s ≥ 0) is used.


Let us first describe standard call and put options for univariate modulated price processes and then their different generalizations.

We give below formulas for pay-off functions in the more traditional terms of the arguments $t, \vec s, x$. One can always rewrite the corresponding formulas in terms of the arguments $t, \vec y, x$ using the transition relation,
$$
g(t, \vec s, x) = g(t, e^{\vec y}, x), \quad \vec s = e^{\vec y}.
$$

(3.37)

The corresponding measurability properties of the functions penetrating the examples below are assumed by default.

The one-dimensional modulated price process $\vec V(t) = (S(t), X(t))$ is considered. The pay-off functions that correspond, respectively, to the standard call and put options are defined for $t \in [0, T]$, $s \in R^+_1$, $x \in X$ by the formulas,
$$
g(t, s, x) = e^{-rt}[s - K]_+, \quad g(t, s, x) = e^{-rt}[K - s]_+,
$$

(3.38)

where $K \ge 0$ is a strike price for the option contract; $r \ge 0$ is a risk-free interest rate; $T \ge 0$ is the maturity.

The first generalization is where the parameters of the contract, namely, the interest rate and the strike price, depend on the index component modulating the price process. The pay-off functions that correspond, respectively, to the standard call and put options are defined for $t \in [0, T]$, $s \in R^+_1$, $x \in X$ by the formulas,
$$
g(t, s, x) = e^{-rt}[s - K(x)]_+, \quad g(t, s, x) = e^{-rt}[K(x) - s]_+,
$$

(3.39)

where $0 \le K(x) \le K < \infty$, $x \in X$ are strike prices; $r \ge 0$ is a risk-free interest rate for the corresponding values of the index component; $T$ is a maturity. It is worth noting that the above formula lets one consider models with a random interest rate.

The second generalization is to consider an inhomogeneous in time variant of such contracts. The pay-off functions that correspond, respectively, to the standard call and put options are defined for $t \in [0, T]$, $s \in R^+_1$, $x \in X$ by the formulas,
$$
g(t, s, x) = e^{-R_t}[s - K_t]_+, \quad g(t, s, x) = e^{-R_t}[K_t - s]_+,
$$

(3.40)

where $0 \le K_t \le K < \infty$, $t \in [0, T]$ are strike prices, $R_t \ge 0$ is the risk-free interest rate accumulated in the time interval $[0, t]$, $t \in [0, T]$, $T$ is a maturity.

The third example is where the contract is defined as a mixture of option contracts. In this case, the pay-off functions are defined for $t \in [0, T]$, $s \in R^+_1$, $x \in X$ by the formulas,
$$
g(t, s, x) = e^{-rt} \sum_{i=1}^m \big( a'_i[s - K'_i]_+ + a''_i[K''_i - s]_+ \big),
$$

(3.41)


where $K'_i, K''_i \ge 0$, $i = 1, \ldots, m$ are strike prices for the corresponding option contracts; $r \ge 0$ is a risk-free interest rate; $a'_i, a''_i \in R^1$, $i = 1, \ldots, m$ are weighting coefficients for the corresponding option contracts; $T$ is a maturity.

In the case of call options or a mixture of call options with non-negative weighting coefficients, the pay-off functions are non-negative and non-decreasing in $s$, while in the case of put options or a mixture of put options with non-negative weighting coefficients, the pay-off functions are non-negative and non-increasing in $s$. However, in the case of a mixture of call and put options, the non-negativity and monotonicity properties may not hold.

In the above models, condition $\dot{\mathrm B}_8[\gamma]$ obviously holds with $\gamma = 1$. Moreover, in the case of put options or a mixture (portfolio) of put options, the pay-off functions are bounded in $s$, i.e., condition $\dot{\mathrm B}_8[\gamma]$ also holds with $\gamma = 0$.

It is also possible to combine the option models described above by considering mixtures of option contracts, with parameters which are inhomogeneous in time functions of the index component. In this case, the pay-off functions are defined for $t \in [0, T]$, $s \in R^+_1$, $x \in X$ by the formulas,
$$
g(t, s, x) = e^{-R_t} \sum_{i=1}^m \big( a'_i(t, x)[s - K'_i(t, x)]_+ + a''_i(t, x)[K''_i(t, x) - s]_+ \big),
$$

(3.42)

where $0 \le K'_i(t, x), K''_i(t, x) \le K < \infty$, $t \in [0, T]$, $x \in X$, $i = 1, \ldots, m$ are strike prices; $R_t \ge 0$ are accumulated risk-free interest rates; $-A \le a'_i(t, x), a''_i(t, x) \le A < \infty$, $t \in [0, T]$, $x \in X$, $i = 1, \ldots, m$ are weighting coefficients for the corresponding option contracts; $T$ is a maturity.

The next generalization is connected with the model where the option contract is based on a mixture of price processes $S_{\vec c}(t) = (\vec S(t), \vec c)$, $t \in [0, T]$, where the notation $(\vec c, \vec s) = c_1 s_1 + \cdots + c_k s_k$ is used for the scalar product of the vectors $\vec c = (c_1, \ldots, c_k)$, $\vec s = (s_1, \ldots, s_k)$. In this case, the pay-off functions are defined for $t \in [0, T]$, $\vec s \in R^+_k$, $x \in X$ by the formulas,
$$
g(t, \vec s, x) = e^{-rt}[(\vec c, \vec s) - K]_+, \quad g(t, \vec s, x) = e^{-rt}[K - (\vec c, \vec s)]_+,
$$

(3.43)

where $K \ge 0$ is a strike price for the option contract; $r \ge 0$ is a risk-free interest rate; $c_i \ge 0$, $i = 1, \ldots, k$ are weighting coefficients for the corresponding components of the vector price process; $T$ is a maturity.

The generalizations described above, namely option contracts with parameters depending on the index component, inhomogeneous in time contracts, and mixtures of option contracts, can also be considered for the above model.

In all the above models, the pay-off functions are piecewise linear functions in $s$.
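The following minimal Python sketch collects the piecewise linear pay-off functions of this subsection: the standard call and put (3.38), a mixture of contracts in the spirit of (3.41), and the basket variant (3.43) based on the scalar product $(\vec c, \vec s)$. All parameter values are illustrative assumptions.

```python
import numpy as np

# Piecewise linear pay-off functions, cf. (3.38), (3.41) and (3.43).
# Strike prices, interest rate and weights below are illustrative assumptions.

def call(t, s, K=100.0, r=0.03):
    return np.exp(-r * t) * max(s - K, 0.0)

def put(t, s, K=100.0, r=0.03):
    return np.exp(-r * t) * max(K - s, 0.0)

def mixture(t, s, a1, K1, a2, K2, r=0.03):
    # a weighted combination of one call and one put, cf. (3.41) with m = 2
    return np.exp(-r * t) * (a1 * max(s - K1, 0.0) + a2 * max(K2 - s, 0.0))

def basket_call(t, s_vec, c_vec, K=100.0, r=0.03):
    # pay-off based on the mixture of prices (c, s), cf. (3.43)
    return np.exp(-r * t) * max(np.dot(c_vec, s_vec) - K, 0.0)

print(call(0.5, 112.0), put(0.5, 93.0))
print(mixture(0.5, 93.0, a1=1.0, K1=110.0, a2=0.5, K2=100.0))
print(basket_call(0.5, np.array([60.0, 55.0]), np.array([1.0, 1.0])))
```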


3.4.5 Nonlinear pay-off functions for call- and put-type options

In such a model, the pay-off functions that correspond, respectively, to the standard call and put options are defined for $t \in [0, T]$, $s \in R^+_1$, $x \in X$ by the formulas,
$$
g(t, s, x) = e^{-rt}[f(s) - K]_+, \quad g(t, s, x) = e^{-rt}[K - f(s)]_+,
$$

(3.44)

where $f(s) > 0$, $s \in R^+_1$ is some measurable transformation function such that $f(s) \le K'_1 + K'_2 s^\gamma$, where $0 \le K'_1, K'_2 < \infty$, $0 \le \gamma < \infty$; $r \ge 0$ is a risk-free interest rate; $T$ is the maturity. Typical examples are the functions $f(s) = a + bs + cs^2$, where $a, b \ge 0$, $c > 0$, or $f(s) = a s^\gamma$, where $a > 0$, $\gamma > 0$.

It is also worth noting that models with nonlinear transformation functions $f(s)$ can appear as the result of a transformation of a price or a log-price process. Let $h_t(s)$ be, for every $t \in [0, T]$, a strictly monotonic function performing a one-to-one mapping of the interval $R^+_1$ onto some open interval $(s^-_t, s^+_t)$, where $0 \le s^-_t < s^+_t \le \infty$. In this case, there exists the inverse function $f_t(s) = h_t^{-1}(s)$ such that $f_t(h_t(s)) = s$, $s \in R^+_1$.

In some cases, the price process $S(t)$, $t \ge 0$ can be simplified if one transforms it into the process $S^{(h)}(t) = h_t(S(t))$, $t \ge 0$. The random pay-off process $g(t, S(t), X(t))$, $t \ge 0$ is transformed in this case as $g(t, S(t), X(t)) = g(t, f_t(h_t(S(t))), X(t)) = g^{(f)}(t, S^{(h)}(t), X(t))$, $t \ge 0$, where $g^{(f)}(t, s, x) = g(t, f_t(s), x)$ is the transformed pay-off function.

The generalizations described above, namely option contracts with parameters depending on the index component and inhomogeneous in time contracts, can also be considered for the above model. In this case, the pay-off functions that correspond, respectively, to the standard call and put options are defined for $t \in [0, T]$, $s \in R^+_1$, $x \in X$ by the formulas,
$$
g(t, s, x) = e^{-R_t} \sum_{i=1}^m \big( a'_i(t, x)[f'_i(t, s, x) - K'_i(t, x)]_+ + a''_i(t, x)[K''_i(t, x) - f''_i(t, s, x)]_+ \big),
$$

(3.45)

where 0 ≤ Ki0 (t, x), Ki00 (t, x) ≤ K < ∞, t ∈ [0, T ], x ∈ X, i = 1, . . . , m are strike prices; Rt ≥ 0, t ∈ [0, T ], x ∈ X are accumulated risk-free interest rates; fi0 (t, s, x), fi00 (t, s, x) > 0, t ∈ [0, T ], s ∈ R+ 1 , x ∈ X, i = 1, . . . , m are transformation functions such that fi (t, s, x) ≤ K10 + K20 sγ , t ∈ [0, T ], s ∈ R+ 1 , x ∈ X, i = 1, . . . , m, where 0 ≤ K10 , K20 < ∞, 0 ≤ γ < ∞; −A ≤ a0i (t, x), a00i (t, x) ≤ A < ∞, [0, T ], x ∈ X, i = 1, . . . , m are weighting coefficients for the corresponding option contracts, T is a maturity. Generalization based on mixtures of price processes can also be considered for options with nonlinear payoff functions.
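As an illustration of such a contract, the following hedged Python sketch evaluates the nonlinear call and put pay-offs (3.44) for the power-type transformation $f(s) = a s^{\gamma}$ mentioned above; the numerical parameter values are assumptions chosen only for the example.

```python
import math

def nonlinear_call(t, s, K, r, a=1.0, gamma=1.5):
    """Call-type pay-off e^{-rt}[f(s) - K]_+ with f(s) = a * s**gamma, cf. (3.44)."""
    return math.exp(-r * t) * max(a * s**gamma - K, 0.0)

def nonlinear_put(t, s, K, r, a=1.0, gamma=1.5):
    """Put-type pay-off e^{-rt}[K - f(s)]_+ with f(s) = a * s**gamma."""
    return math.exp(-r * t) * max(K - a * s**gamma, 0.0)

# Illustrative values: f(s) = s**1.5 satisfies the polynomial growth bound with gamma = 1.5.
print(nonlinear_call(t=0.25, s=30.0, K=150.0, r=0.02))
print(nonlinear_put(t=0.25, s=20.0, K=150.0, r=0.02))
```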


3.4.6 Pay-off functions for exchange of assets contracts

Another type of option contract is connected with the model of exchange of assets.

In the simplest case, a two-dimensional price process $\vec{V}(t) = (\vec{S}(t), X(t))$, where $\vec{S}(t) = (S_1(t), S_2(t))$, is considered. In this case, the pay-off functions are defined for $t \in [0, T]$, $\vec{s} = (s_1, s_2) \in \mathbb{R}^+_2$, $x \in \mathbb{X}$ by the formula,
$$
g(t, \vec{s}, x) = e^{-rt}(s_1 - s_2), \tag{3.46}
$$
where $r \ge 0$ is a risk-free interest rate; $T$ is a maturity.

The first generalization is connected with the model of exchange of $k$ assets, where the modulated price process $\vec{V}(t) = (\vec{S}(t), X(t))$ has a $k$-dimensional price component $\vec{S}(t) = (S_1(t), \ldots, S_k(t))$. In this case, the pay-off functions are defined for $t \in [0, T]$, $\vec{s} = (s_1, \ldots, s_k) \in \mathbb{R}^+_k$, $x \in \mathbb{X}$ by the formula,
$$
g(t, \vec{s}, x) = e^{-rt} \sum_{i=1}^{k} a_i s_i, \tag{3.47}
$$
where $r \ge 0$ is a risk-free interest rate; $a_i \in \mathbb{R}_1$, $i = 1, \ldots, k$ are weighting coefficients for the corresponding components of the vector price process; $T$ is the maturity.

A further generalization is connected with the case where the weighting coefficients depend on the index component modulating the price process. In this case, the pay-off functions are defined for $t \in [0, T]$, $\vec{s} = (s_1, \ldots, s_k) \in \mathbb{R}^+_k$, $x \in \mathbb{X}$ by the formula,
$$
g(t, \vec{s}, x) = e^{-rt} \sum_{i=1}^{k} a_i(x) s_i, \tag{3.48}
$$
where $r \ge 0$ is a risk-free interest rate; $-A \le a_i(x) \le A < \infty$, $x \in \mathbb{X}$, $i = 1, \ldots, k$ are weighting coefficients for the corresponding components of the vector price process, depending on the values of the index component; $T$ is a maturity.

The generalization connected with an inhomogeneous-in-time variant of such contracts is also possible. In this case, the pay-off functions are defined for $t \in [0, T]$, $\vec{s} = (s_1, \ldots, s_k) \in \mathbb{R}^+_k$, $x \in \mathbb{X}$ by the formula,
$$
g(t, \vec{s}, x) = e^{-R_t} \sum_{i=1}^{k} a_i(t, x) s_i, \tag{3.49}
$$
where $R_t \ge 0$, $t \in [0, T]$ are accumulated risk-free interest rates; $-A \le a_i(t, x) \le A < \infty$, $t \in [0, T]$, $x \in \mathbb{X}$, $i = 1, \ldots, k$ are weighting coefficients for the corresponding components of the vector price process, depending on the values of the index component; $T$ is a maturity.

In the exchange-of-assets contracts, the pay-off functions are again linear functions in the argument $\vec{s}$. The non-negativity and monotonicity properties may not hold. In the above models, condition $\dot{\mathbf{B}}_8[\gamma]$ obviously holds with $\gamma = 1$.
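The following brief Python sketch evaluates an index-modulated exchange-of-assets pay-off of the form (3.48); the dictionary of weights keyed by the index value is a hypothetical example, not part of the model.

```python
import math

def exchange_payoff(t, s_vec, x, weights_by_index, r):
    """Pay-off e^{-rt} * sum_i a_i(x) * s_i for the exchange-of-assets contract, cf. (3.48)."""
    a = weights_by_index[x]                     # weighting coefficients a_i(x)
    return math.exp(-r * t) * sum(ai * si for ai, si in zip(a, s_vec))

# Illustrative usage: two assets, index component x in {"calm", "volatile"} (assumed labels).
weights = {"calm": (1.0, -1.0), "volatile": (0.5, -0.5)}
print(exchange_payoff(t=1.0, s_vec=(102.0, 98.0), x="calm", weights_by_index=weights, r=0.03))
```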

4 Upper bounds for option rewards for Markov LPP

In Chapter 4, we give upper bounds for rewards of American-type options for continuous time multivariate modulated Markov log-price processes. We consider models of options with pay-off functions which have not more than polynomial rate of growth in the price arguments, and multivariate modulated Markov log-price processes.

In Section 4.1, we give upper bounds for exponential moments of supremums for multivariate modulated Markov log-price processes, as well as upper bounds for optimal expected rewards and reward functions of American-type options for such processes.

In Section 4.2, we analyze and present different variants of asymptotically uniform conditions of exponential moment compactness for multivariate modulated Markov log-price processes. These conditions play the key role in obtaining the upper bounds for rewards pointed out above.

In Section 4.3, we present upper bounds for optimal expected rewards and reward functions of American-type options for multivariate log-price processes with independent increments.

In Section 4.4, we give upper bounds for optimal expected rewards and reward functions of American-type options for diffusion log-price processes and their skeleton, martingale-type and binomial-tree approximations.

In Section 4.5, we investigate and give upper bounds for optimal expected rewards and reward functions of American-type options for mean-reverse diffusion log-price processes.

The main results for multivariate modulated Markov log-price processes and multivariate log-price processes with independent increments are, respectively, given in Theorems 4.1.1–4.1.4 and 4.3.1–4.3.2. These results generalize results obtained in the papers Silvestrov, Jönsson, and Stenberg (2008, 2009), for univariate modulated Markov log-price processes, and Lundgren and Silvestrov (2009, 2011), Silvestrov and Lundgren (2011), for multivariate Markov log-price processes. The results for diffusion log-price processes, given in Theorems 4.4.1–4.4.6 and 4.5.1–4.5.4, have not been published earlier.

4.1 Upper bounds for rewards for Markov LPP

In this section, we give asymptotically uniform upper bounds for exponential moments of supremums for multivariate modulated Markov log-price processes and upper bounds for rewards of American-type options for multivariate modulated Markov log-price processes.

4.1.1 Upper bounds for supremums of log-price processes

Let $\vec{Z}(t) = (\vec{Y}(t), X(t))$, $t \ge 0$ be a continuous time càdlàg multivariate modulated Markov log-price process with the phase space $\mathbb{Z} = \mathbb{R}_k \times \mathbb{X}$ ($\mathbb{X}$ is a Polish space with a metric $d_{\mathbb{X}}(x', x'')$) and transition probabilities $P(t, \vec{z}, t+u, A)$. As usual, it is assumed that the transition probabilities $P(t, \vec{z}, t+u, A)$ are measurable in the argument $(t, \vec{z}, u)$.

Recall also that we use the notations $\mathsf{P}_{\vec{z},t}$ and $\mathsf{E}_{\vec{z},t}$, respectively, for conditional probabilities and expectations under the condition $\vec{Z}(t) = \vec{z}$.

Let us define the first-type modulus of exponential moment compactness for the components of the log-price process $\vec{Y}(t) = (Y_1(t), \ldots, Y_k(t))$, for $\beta, c, T \ge 0$, $i = 1, \ldots, k$,
$$
\Delta_\beta(Y_i(\cdot), c, T) = \sup_{0 \le t \le t+u \le t+c \le T} \; \sup_{\vec{z} \in \mathbb{Z}} \mathsf{E}_{\vec{z},t}\big( e^{\beta |Y_i(t+u) - Y_i(t)|} - 1 \big). \tag{4.1}
$$

We use the following first-type condition of exponential moment compactness for the log-price process, which is assumed to hold for some vector parameter $\bar{\beta} = (\beta_1, \ldots, \beta_k)$ with non-negative components:

$\mathbf{C}_1[\bar{\beta}]$: $\lim_{c \to 0} \Delta_{\beta_i}(Y_i(\cdot), c, T) = 0$, $i = 1, \ldots, k$.

It is useful to note that condition $\mathbf{C}_1[\bar{\beta}]$ implies that condition $\mathbf{C}_1[\bar{\beta}']$ holds for any vector parameter $\bar{\beta}' = (\beta_1', \ldots, \beta_k')$ such that $\beta_i' \le \beta_i$, $i = 1, \ldots, k$. Note also that $\Delta_0(Y_i(\cdot), c, T) \equiv 0$ and, thus, the relation in condition $\mathbf{C}_1[\bar{\beta}]$ holds automatically for $i$ such that $\beta_i = 0$.

The following lemma gives explicit upper bounds for conditional moments of supremums of the absolute values of the log-price processes $Y_i(t)$.

Lemma 4.1.1. Let condition $\mathbf{C}_1[\bar{\beta}]$ hold. Then there exist constants $M_{37,i} = M_{37,i}(\beta_i) < \infty$, $i = 1, \ldots, k$ such that the following inequalities take place for $\vec{z} = (\vec{y}, x) = ((y_1, \ldots, y_k), x) \in \mathbb{Z}$, $0 \le t \le T$, $i = 1, \ldots, k$,
$$
\mathsf{E}_{\vec{z},t} \exp\{\beta_i \sup_{t \le s \le T} |Y_i(s)|\} \le M_{37,i} \, e^{\beta_i |y_i|}. \tag{4.2}
$$

Proof. The case where $\beta_i = 0$ is trivial. Indeed, the quantities on the left and right hand sides in (4.2) are equal, respectively, to 1 and $M_{37,i}$. Thus, inequality (4.2) holds with $M_{37,i} = 1$. Let us, therefore, give the proof of inequality (4.2) for the case where $\beta_i > 0$.


Let us begin with the following remark. Condition $\mathbf{C}_1[\bar{\beta}]$ implies that for any constant $e^{-\beta_i} < R_{1,i} < 1$ one can choose $0 < c_{1,i} = c_{1,i}(\beta_i, R_{1,i}) \le T$ such that
$$
\frac{\Delta_{\beta_i}(Y_i(\cdot), c_{1,i}, T) + 1}{e^{\beta_i}} < R_{1,i}. \tag{4.3}
$$

The following equality holds for every $0 \le t \le u \le T$,
$$
{}_{\beta_i}S_i(t, u) = \exp\{\beta_i \sup_{t \le s \le u} |Y_i(s)|\} = \sup_{t \le s \le u} \exp\{\beta_i |Y_i(s)|\}. \tag{4.4}
$$
Note also that, by definition,
$$
{}_{\beta_i}S_i(t, t) = \exp\{\beta_i |Y_i(t)|\}. \tag{4.5}
$$
Let us also introduce the random variables,
$$
{}_{\beta_i}W_i[t', t''] = \sup_{t' \le t \le t''} \exp\{\beta_i |Y_i(t) - Y_i(t')|\}, \quad 0 \le t' \le t'' \le T. \tag{4.6}
$$

Define a partition $\Pi_{m,t} = \langle t = v_{m,0} < \cdots < v_{m,m} = T \rangle$ of the interval $[t, T]$ by the points $v_{m,n} = t + n(T-t)/m$, $n = 0, \ldots, m$.

Using equality (4.4) we get the following inequalities, for $n = 1, \ldots, m$ and $i = 1, \ldots, k$,
$$
\begin{aligned}
{}_{\beta_i}S_i(t, v_{m,n}) &\le {}_{\beta_i}S_i(t, v_{m,n-1}) + \sup_{v_{m,n-1} \le u \le v_{m,n}} \exp\{\beta_i |Y_i(u)|\} \\
&\le {}_{\beta_i}S_i(t, v_{m,n-1}) + \exp\{\beta_i |Y_i(v_{m,n-1})|\} \; {}_{\beta_i}W_i[v_{m,n-1}, v_{m,n}] \\
&\le {}_{\beta_i}S_i(t, v_{m,n-1}) \big( {}_{\beta_i}W_i[v_{m,n-1}, v_{m,n}] + 1 \big).
\end{aligned} \tag{4.7}
$$

We shall show below that the following inequality holds for $i = 1, \ldots, k$,
$$
\sup_{0 \le t' \le t'' \le t' + c_{1,i} \le T} \; \sup_{\vec{z} \in \mathbb{Z}} \mathsf{E}_{\vec{z},t'} \, {}_{\beta_i}W_i[t', t''] \le R_{2,i}, \tag{4.8}
$$
where
$$
R_{2,i} = \frac{e^{\beta_i}(e^{\beta_i} - 1) R_{1,i}}{1 - R_{1,i}} < \infty. \tag{4.9}
$$

Let us choose $m = [\frac{T}{c_{1,i}}] + 1$. In this case $v_{m,n} - v_{m,n-1} = \frac{T - t}{m} \le c_{1,i}$.

Now, using the Markov property of the process $\vec{Z}(t)$, condition $\mathbf{C}_1[\bar{\beta}]$, and relations (4.7)–(4.8), we get, for $\vec{z} \in \mathbb{Z}$, $0 \le t \le T$, and $n = 1, \ldots, m$,
$$
\begin{aligned}
\mathsf{E}_{\vec{z},t} \, {}_{\beta_i}S_i(t, v_{m,n}) &\le \mathsf{E}_{\vec{z},t} \Big( {}_{\beta_i}S_i(t, v_{m,n-1}) \, \mathsf{E}\big\{ {}_{\beta_i}W_i[v_{m,n-1}, v_{m,n}] + 1 \,/\, \vec{Z}(v_{m,n-1}) \big\} \Big) \\
&\le \mathsf{E}_{\vec{z},t} \, {}_{\beta_i}S_i(t, v_{m,n-1}) (R_{2,i} + 1) \le \cdots \\
&\le \mathsf{E}_{\vec{z},t} \, {}_{\beta_i}S_i(t, t) (R_{2,i} + 1)^n = e^{\beta_i |y_i|} (R_{2,i} + 1)^n.
\end{aligned} \tag{4.10}
$$

Finally, we get the following inequalities, for $\vec{z} \in \mathbb{Z}$, $0 \le t \le T$,
$$
\mathsf{E}_{\vec{z},t} \, {}_{\beta_i}S_i(t, v_{m,m}) = \mathsf{E}_{\vec{z},t} \exp\{\beta_i \sup_{t \le u \le T} |Y_i(u)|\} \le e^{\beta_i |y_i|} (R_{2,i} + 1)^{[\frac{T}{c_{1,i}}]+1}. \tag{4.11}
$$


Relation (4.11) implies that the inequality (4.2) holds with the constant
$$
M_{37,i} = (R_{2,i} + 1)^{[\frac{T}{c_{1,i}}]+1}. \tag{4.12}
$$

To show that relation (4.8), with $R_{2,i}$ given by (4.9), holds, we first note that relation (4.3) implies, for every $i = 1, \ldots, k$,
$$
\begin{aligned}
\sup_{0 \le t' \le t \le t'' \le t' + c_{1,i} \le T} \; \sup_{\vec{z} \in \mathbb{Z}} \mathsf{P}_{\vec{z},t}\{|Y_i(t'') - Y_i(t)| \ge 1\}
&\le \sup_{0 \le t' \le t \le t'' \le t' + c_{1,i} \le T} \; \sup_{\vec{z} \in \mathbb{Z}} \frac{\mathsf{E}_{\vec{z},t} \exp\{\beta_i |Y_i(t'') - Y_i(t)|\}}{e^{\beta_i}} \\
&\le \frac{\Delta_{\beta_i}(Y_i(\cdot), c_{1,i}, T) + 1}{e^{\beta_i}} \le R_{1,i} < 1.
\end{aligned} \tag{4.13}
$$

The process $Y_i(t)$ is not a Markov process, but $\vec{Z}(t)$ is a càdlàg Markov process. The following lemma presents a variant of the well-known Kolmogorov inequality for Markov processes. We formulate it in the form of a lemma. The proof is a slight modification of the standard proof for Markov processes, which can be found, for example, in Gikhman and Skorokhod (1971).

Lemma 4.1.2. Let the following condition hold for the process $\vec{Z}(t)$, for some $t' = t_0 < \cdots < t_n = t''$, $a > 0$, and $0 \le R < 1$:
$$
\sup_{\vec{z} \in \mathbb{Z}} \mathsf{P}_{\vec{z},t_k}\{|Y(t_n) - Y(t_k)| \ge a\} \le R, \quad k = 0, \ldots, n. \tag{4.14}
$$
Then, for any point $\vec{z}_0 \in \mathbb{Z}$ and $b \ge 0$,
$$
\mathsf{P}_{\vec{z}_0,t_0}\{\max_{1 \le k \le n} |Y(t_k) - Y(t_0)| \ge a + b\} \le \frac{1}{1-R} \, \mathsf{P}_{\vec{z}_0,t_0}\{|Y(t_n) - Y(t_0)| \ge b\}. \tag{4.15}
$$

Let us present shortly the proof of this lemma. Let us introduce, for $k = 1, \ldots, n$, the random events $A_k = \{|Y(t_r) - Y(t_0)| < a + b, r < k, \; |Y(t_k) - Y(t_0)| \ge a + b\}$ and $B_k = \{|Y(t_n) - Y(t_k)| < a\}$. Using the Markov property of the process $\vec{Z}(t)$ and condition (4.14) we get (a) $\mathsf{P}_{\vec{z}_0,t_0}(A_k \cap \bar{B}_k) \le \mathsf{E}_{\vec{z}_0,t_0}\{\chi(A_k) \mathsf{E}_{\vec{z}_0,t_0}\{\chi(\bar{B}_k)/\vec{Z}(t_k)\}\} \le \mathsf{P}_{\vec{z}_0,t_0}(A_k) R$. It is readily seen that (b) $A_k \cap B_k \subseteq \{|Y(t_n) - Y(t_0)| \ge b\}$. Relations (a) and (b) imply that (c) $\mathsf{P}_{\vec{z}_0,t_0}(A_k) \le \frac{1}{1-R} \mathsf{P}_{\vec{z}_0,t_0}(A_k \cap B_k) \le \frac{1}{1-R} \mathsf{P}_{\vec{z}_0,t_0}\{A_k, |Y(t_n) - Y(t_0)| \ge b\}$. On the other hand, $A_k$, $k = 1, \ldots, n$ are disjoint events and (d) $\cup_{k=1}^n A_k = \{\max_{1 \le k \le n} |Y(t_k) - Y(t_0)| \ge a + b\}$. Relations (c) and (d), and the disjointness of the events $A_k$, $k = 1, \ldots, n$, yield (e) $\mathsf{P}_{\vec{z}_0,t_0}\{\max_{1 \le k \le n} |Y(t_k) - Y(t_0)| \ge a + b\} = \sum_{k=1}^n \mathsf{P}_{\vec{z}_0,t_0}(A_k) \le \frac{1}{1-R} \sum_{k=1}^n \mathsf{P}_{\vec{z}_0,t_0}\{A_k, |Y(t_n) - Y(t_0)| \ge b\} \le \frac{1}{1-R} \mathsf{P}_{\vec{z}_0,t_0}\{|Y(t_n) - Y(t_0)| \ge b\}$. $\square$

Since $Y(t)$ is a càdlàg process and, therefore, also a separable process, Lemma 4.1.2 implies the analogous continuous time version of its proposition.
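The maximal inequality (4.15) is easy to check numerically. The following hedged NumPy sketch does so for a Gaussian random walk; the step variance, the number of steps, and the thresholds $a$, $b$ are arbitrary choices made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_paths, a, b = 20, 200_000, 1.5, 1.0

# Y(t_k) is a Gaussian random walk; increments are i.i.d. N(0, 0.1) (assumed for the example).
increments = rng.normal(0.0, np.sqrt(0.1), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)                  # Y(t_k) - Y(t_0), k = 1, ..., n

# R bounds sup_k P{|Y(t_n) - Y(t_k)| >= a}; for this walk the worst case is k = 0.
R = np.mean(np.abs(paths[:, -1]) >= a)

lhs = np.mean(np.max(np.abs(paths), axis=1) >= a + b)  # P{max_k |Y(t_k) - Y(t_0)| >= a + b}
rhs = np.mean(np.abs(paths[:, -1]) >= b) / (1.0 - R)   # (1/(1-R)) P{|Y(t_n) - Y(t_0)| >= b}
print(f"lhs = {lhs:.5f} <= rhs = {rhs:.5f}")
```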


Lemma 4.1.3. Let the following condition hold for the càdlàg Markov process $\vec{Z}(t)$, for some $0 \le t' \le t'' \le T$, $a > 0$ and $0 \le R < 1$:
$$
\sup_{\vec{z} \in \mathbb{Z}} \mathsf{P}_{\vec{z},t}\{|Y_i(t'') - Y_i(t)| \ge a\} \le R, \quad t' \le t \le t''. \tag{4.16}
$$
Then, for any vector $\vec{z} \in \mathbb{Z}$ and $b > 0$,
$$
\mathsf{P}_{\vec{z},t'}\{\sup_{t' \le t \le t''} |Y_i(t) - Y_i(t')| \ge a + b\} \le \frac{1}{1-R} \, \mathsf{P}_{\vec{z},t'}\{|Y_i(t'') - Y_i(t')| \ge b\}. \tag{4.17}
$$

To shorten the notations, denote the random variables
$$
W_i^+[t', t''] = \sup_{t' \le t \le t''} |Y_i(t) - Y_i(t')|, \qquad W_i[t', t''] = |Y_i(t'') - Y_i(t')|. \tag{4.18}
$$
Note that
$$
e^{\beta_i W_i^+[t', t'']} = {}_{\beta_i}W_i[t', t'']. \tag{4.19}
$$

Using (4.13) and Lemmas 4.1.2–4.1.3, we get, for every $0 \le t' \le t'' \le t' + c_{1,i} \le T$, $\vec{z} \in \mathbb{Z}$, $i = 1, \ldots, k$, and $b > 0$,
$$
\mathsf{P}_{\vec{z},t'}\{W_i^+[t', t''] \ge 1 + b\} \le \frac{1}{1 - R_{1,i}} \, \mathsf{P}_{\vec{z},t'}\{W_i[t', t''] \ge b\}. \tag{4.20}
$$

Relations (4.3) and (4.20) imply that, for every $0 \le t' \le t'' \le t' + c_{1,i} \le T$, $\vec{z} \in \mathbb{Z}$,
$$
\begin{aligned}
\mathsf{E}_{\vec{z},t'} e^{\beta_i W_i^+[t', t'']} &= 1 + \beta_i \int_0^\infty e^{\beta_i b} \mathsf{P}_{\vec{z},t'}\{W_i^+[t', t''] \ge b\} \, db \\
&\le 1 + \beta_i \int_0^1 e^{\beta_i b} \, db + \beta_i \int_1^\infty e^{\beta_i b} \mathsf{P}_{\vec{z},t'}\{W_i^+[t', t''] \ge b\} \, db \\
&= e^{\beta_i} + \beta_i \int_0^\infty e^{\beta_i (1+b)} \mathsf{P}_{\vec{z},t'}\{W_i^+[t', t''] \ge 1 + b\} \, db \\
&\le e^{\beta_i} + \frac{\beta_i e^{\beta_i}}{1 - R_{1,i}} \int_0^\infty e^{\beta_i b} \mathsf{P}_{\vec{z},t'}\{W_i[t', t''] \ge b\} \, db \\
&= e^{\beta_i} + \frac{e^{\beta_i}}{1 - R_{1,i}} \big( \mathsf{E}_{\vec{z},t'} e^{\beta_i W_i[t', t'']} - 1 \big) \\
&= \frac{e^{\beta_i} \big( \mathsf{E}_{\vec{z},t'} e^{\beta_i W_i[t', t'']} - R_{1,i} \big)}{1 - R_{1,i}} \\
&\le \frac{e^{\beta_i} \big( \Delta_{\beta_i}(Y_i(\cdot), c_{1,i}, T) + 1 - R_{1,i} \big)}{1 - R_{1,i}} \\
&\le \frac{e^{\beta_i}(e^{\beta_i} - 1) R_{1,i}}{1 - R_{1,i}} = R_{2,i}.
\end{aligned} \tag{4.21}
$$

Since inequality (4.21) holds for every $0 \le t' \le t'' \le t' + c_{1,i} \le T$, $\vec{z} \in \mathbb{Z}$, it implies relation (4.8). The proof of Lemma 4.1.1 is complete. $\square$

Remark 4.1.1. The explicit formulas for the constants $M_{37,i} = M_{37,i}(\beta_i)$, $i = 1, \ldots, k$ follow from the remark made in the beginning of the proof and formulas (4.9) and (4.12),
$$
M_{37,i} =
\begin{cases}
1 & \text{if } \beta_i = 0, \\
\Big( \dfrac{e^{\beta_i}(e^{\beta_i} - 1) R_{1,i}}{1 - R_{1,i}} + 1 \Big)^{[\frac{T}{c_{1,i}}]+1} & \text{if } \beta_i > 0,
\end{cases} \tag{4.22}
$$
where the constants $e^{-\beta_i} < R_{1,i} < 1$ and $0 < c_{1,i} = c_{1,i}(\beta_i, R_{1,i}) \le T$ are defined, for $i$ such that $\beta_i > 0$, in relation (4.3), which holds due to condition $\mathbf{C}_1[\bar{\beta}]$.
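The constant in (4.22) is fully explicit and easy to evaluate. The following hedged Python sketch computes $M_{37,i}$ from chosen values of $\beta_i$, $R_{1,i}$, $c_{1,i}$ and $T$; the numerical inputs are assumptions for illustration and, in an actual application, must be consistent with relation (4.3).

```python
import math

def m37(beta, R1, c1, T):
    """Explicit constant M_37 from formula (4.22) for one component.

    beta: exponent beta_i >= 0; R1: constant with exp(-beta) < R1 < 1;
    c1: step 0 < c1 <= T chosen so that relation (4.3) holds; T: maturity.
    """
    if beta == 0.0:
        return 1.0
    R2 = math.exp(beta) * (math.exp(beta) - 1.0) * R1 / (1.0 - R1)   # formula (4.9)
    return (R2 + 1.0) ** (math.floor(T / c1) + 1)                    # formula (4.12)

# Illustrative values (assumed to satisfy (4.3)):
print(m37(beta=0.5, R1=0.75, c1=0.1, T=1.0))
```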

4.1.2 Upper bounds for reward functions

Let us now obtain upper bounds for the reward functions of the log-price processes $\vec{Y}(t)$.

Let $g(t, e^{\vec{y}}, x)$ be a real-valued pay-off function, measurable in the argument $(t, \vec{y}, x)$ and defined on the space $[0, T] \times \mathbb{Z} = [0, T] \times \mathbb{R}_k \times \mathbb{X}$. We assume that condition $\mathbf{B}_8[\bar{\gamma}]$, which restricts the rate of growth of the pay-off functions, holds.

Lemma 4.1.4. Let conditions $\mathbf{B}_8[\bar{\gamma}]$ and $\mathbf{C}_1[\bar{\beta}]$ hold and $0 \le \gamma_i \le \beta_i < \infty$, $i = 1, \ldots, k$. Then, there exist constants $M_{38}, M_{39,i} = M_{39,i}(\beta_i, \gamma_i) < \infty$, $i = 1, \ldots, k$ such that the following inequalities take place for $0 \le t \le T$, $\vec{z} = ((y_1, \ldots, y_k), x) \in \mathbb{Z}$,
$$
\mathsf{E}_{\vec{z},t} \sup_{t \le s \le T} |g(s, e^{\vec{Y}(s)}, X(s))| \le M_{38} + \sum_{i=1}^{k} M_{39,i} e^{\gamma_i |y_i|}. \tag{4.23}
$$

Proof. Using condition $\mathbf{B}_8[\bar{\gamma}]$ we get the following inequality, for $0 \le t \le T$,
$$
\sup_{t \le s \le T} |g(s, e^{\vec{Y}(s)}, X(s))| \le \sup_{t \le s \le T} \Big( L_{15} + \sum_{i=1}^{k} L_{15} L_{16,i} e^{\gamma_i |Y_i(s)|} \Big) \le L_{15} + \sum_{i=1}^{k} L_{15} L_{16,i} \sup_{t \le s \le T} e^{\gamma_i |Y_i(s)|}. \tag{4.24}
$$


Now, using a well-known property of moments of random variables (Lyapunov's inequality) and Lemma 4.1.1, we get the following inequality, for $0 \le t \le T$, $\vec{z} \in \mathbb{Z}$,
$$
\begin{aligned}
\mathsf{E}_{\vec{z},t} \sup_{t \le s \le T} |g(s, e^{\vec{Y}(s)}, X(s))| &\le L_{15} + \sum_{i=1}^{k} L_{15} L_{16,i} \mathsf{E}_{\vec{z},t} \sup_{t \le s \le T} e^{\gamma_i |Y_i(s)|} \\
&\le L_{15} + \sum_{i : \gamma_i = 0} L_{15} L_{16,i} + \sum_{i : \gamma_i > 0} L_{15} L_{16,i} \Big( \mathsf{E}_{\vec{z},t} \big( \sup_{t \le s \le T} e^{\gamma_i |Y_i(s)|} \big)^{\frac{\beta_i}{\gamma_i}} \Big)^{\frac{\gamma_i}{\beta_i}} \\
&= L_{15} + \sum_{i : \gamma_i = 0} L_{15} L_{16,i} + \sum_{i : \gamma_i > 0} L_{15} L_{16,i} \Big( \mathsf{E}_{\vec{z},t} \sup_{t \le s \le T} e^{\beta_i |Y_i(s)|} \Big)^{\frac{\gamma_i}{\beta_i}} \\
&\le L_{15} + \sum_{i : \gamma_i = 0} L_{15} L_{16,i} + \sum_{i : \gamma_i > 0} L_{15} L_{16,i} \big( e^{\beta_i |y_i|} M_{37,i} \big)^{\frac{\gamma_i}{\beta_i}} \\
&= L_{15} + \sum_{i : \gamma_i = 0} L_{15} L_{16,i} + \sum_{i : \gamma_i > 0} L_{15} L_{16,i} M_{37,i}^{\frac{\gamma_i}{\beta_i}} e^{\gamma_i |y_i|}.
\end{aligned} \tag{4.25}
$$

Thus, inequality (4.23) holds with the constants $M_{38}$ and $M_{39,i}$, which are given by relation (4.25). The proof is complete. $\square$

Remark 4.1.2. The explicit formulas for the constants $M_{38}$ and $M_{39,i} = M_{39,i}(\beta_i, \gamma_i)$, $i = 1, \ldots, k$ follow from formulas (4.22) and (4.25),
$$
M_{38} = L_{15}, \qquad M_{39,i} =
\begin{cases}
L_{15} L_{16,i} & \text{if } \gamma_i = 0, \\
L_{15} L_{16,i} M_{37,i}^{\frac{\gamma_i}{\beta_i}} & \text{if } \gamma_i > 0.
\end{cases} \tag{4.26}
$$

Lemma 4.1.4 lets us get the following upper bounds for the log-reward functions of American-type options.

Theorem 4.1.1. Let conditions $\mathbf{B}_8[\bar{\gamma}]$ and $\mathbf{C}_1[\bar{\beta}]$ hold and $0 \le \gamma_i \le \beta_i < \infty$, $i = 1, \ldots, k$. Then, the log-reward functions $\phi_t(\vec{y}, x)$ satisfy the following inequalities, for $0 \le t \le T$, $\vec{z} = ((y_1, \ldots, y_k), x) \in \mathbb{Z}$,
$$
|\phi_t(\vec{y}, x)| \le M_{38} + \sum_{i=1}^{k} M_{39,i} e^{\gamma_i |y_i|}. \tag{4.27}
$$

Proof. Using Lemma 4.1.4 and the definition of the log-reward functions, we get the following inequalities, for $0 \le t \le T$, $\vec{z} = (\vec{y}, x) \in \mathbb{Z}$,
$$
\begin{aligned}
|\phi_t(\vec{y}, x)| &= \Big| \sup_{\tau_t \in \mathcal{M}_{\max,t,T}} \mathsf{E}_{\vec{z},t} \, g(\tau_t, e^{\vec{Y}(\tau_t)}, X(\tau_t)) \Big| \\
&\le \mathsf{E}_{\vec{z},t} \sup_{t \le s \le T} |g(s, e^{\vec{Y}(s)}, X(s))| \le M_{38} + \sum_{i=1}^{k} M_{39,i} e^{\gamma_i |y_i|},
\end{aligned} \tag{4.28}
$$
which proves the theorem. $\square$

4.1.3 Upper bounds for optimal expected rewards

We can get upper bounds for optimal expected rewards using the above upper bounds for reward functions and the averaging formulas connecting the reward functions and optimal expected rewards.

To do this, we impose the following moment condition on the initial distribution of the log-price process, assumed to hold for some vector parameter $\bar{\beta} = (\beta_1, \ldots, \beta_k)$ with non-negative components:

$\mathbf{D}_{13}[\bar{\beta}]$: $\mathsf{E} e^{\beta_i |Y_i(0)|} < K_{46,i}$, $i = 1, \ldots, k$, for some $0 < K_{46,i} < \infty$.

Lemma 4.1.5. Let conditions $\mathbf{C}_1[\bar{\beta}]$ and $\mathbf{D}_{13}[\bar{\beta}]$ hold. Then, there exist constants $M_{40,i} = M_{40,i}(\beta_i) < \infty$, $i = 1, \ldots, k$ such that the following inequality takes place, for $i = 1, \ldots, k$,
$$
\mathsf{E} \exp\{\beta_i \sup_{0 \le s \le T} |Y_i(s)|\} \le M_{40,i}. \tag{4.29}
$$

Proof. Using Lemma 4.1.1 and condition $\mathbf{D}_{13}[\bar{\beta}]$, we get, for $i = 1, \ldots, k$,
$$
\mathsf{E} \exp\{\beta_i \sup_{0 \le s \le T} |Y_i(s)|\} = \mathsf{E}\, \mathsf{E}\big\{ \exp\{\beta_i \sup_{0 \le s \le T} |Y_i(s)|\} / \vec{Z}(0) \big\} \le \mathsf{E}\, M_{37,i} e^{\beta_i |Y_i(0)|} \le M_{37,i} K_{46,i} = M_{40,i}. \tag{4.30}
$$
$\square$

Remark 4.1.3. The constants $M_{40,i} = M_{40,i}(\beta_i)$, $i = 1, \ldots, k$ are given by the following formulas,
$$
M_{40,i} = M_{37,i} K_{46,i}, \quad i = 1, \ldots, k. \tag{4.31}
$$

Lemma 4.1.6. Let conditions $\mathbf{B}_8[\bar{\gamma}]$, $\mathbf{C}_1[\bar{\beta}]$, and $\mathbf{D}_{13}[\bar{\beta}]$ hold and $0 \le \gamma_i \le \beta_i < \infty$, $i = 1, \ldots, k$. Then, there exists a constant $M_{41} = M_{41}(\bar{\beta}, \bar{\gamma}) < \infty$ such that the following inequality takes place,
$$
\mathsf{E} \sup_{0 \le t \le T} |g(t, e^{\vec{Y}(t)}, X(t))| \le M_{41}. \tag{4.32}
$$


Proof. Using Lemmas 4.1.4 and 4.1.5, we get,
$$
\begin{aligned}
\mathsf{E} \sup_{0 \le t \le T} |g(t, e^{\vec{Y}(t)}, X(t))| &\le \mathsf{E} \sup_{0 \le t \le T} \Big( L_{15} + \sum_{i=1}^{k} L_{15} L_{16,i} e^{\gamma_i |Y_i(t)|} \Big) \\
&\le L_{15} + \sum_{i=1}^{k} L_{15} L_{16,i} \, \mathsf{E} \, e^{\gamma_i \sup_{0 \le t \le T} |Y_i(t)|} \\
&\le L_{15} + \sum_{i : \gamma_i = 0} L_{15} L_{16,i} + \sum_{i : \gamma_i > 0} L_{15} L_{16,i} \big( \mathsf{E} \, e^{\beta_i \sup_{0 \le t \le T} |Y_i(t)|} \big)^{\frac{\gamma_i}{\beta_i}} \\
&\le L_{15} + \sum_{i : \gamma_i = 0} L_{15} L_{16,i} + \sum_{i : \gamma_i > 0} L_{15} L_{16,i} \big( M_{37,i} K_{46,i} \big)^{\frac{\gamma_i}{\beta_i}}.
\end{aligned} \tag{4.33}
$$

Relation (4.33) proves that inequality (4.32) holds with the constant $M_{41}$ given in (4.33). $\square$

Remark 4.1.4. The explicit formula for the constant $M_{41} = M_{41}(\bar{\beta}, \bar{\gamma})$ follows from the last inequality in (4.33),
$$
M_{41} = L_{15} + \sum_{i : \gamma_i = 0} L_{15} L_{16,i} + \sum_{i : \gamma_i > 0} L_{15} L_{16,i} (M_{37,i} K_{46,i})^{\frac{\gamma_i}{\beta_i}} = M_{38} + \sum_{i : \gamma_i = 0} M_{39,i} + \sum_{i : \gamma_i > 0} M_{39,i} K_{46,i}^{\frac{\gamma_i}{\beta_i}}. \tag{4.34}
$$

The following theorem gives the upper bound for the optimal expected reward $\Phi = \Phi(\mathcal{M}_{\max,0,T})$.

Theorem 4.1.2. Let conditions $\mathbf{B}_8[\bar{\gamma}]$, $\mathbf{C}_1[\bar{\beta}]$, and $\mathbf{D}_{13}[\bar{\beta}]$ hold and $0 \le \gamma_i \le \beta_i < \infty$, $i = 1, \ldots, k$. Then, the following inequality takes place,
$$
|\Phi| \le M_{41}. \tag{4.35}
$$

Proof. It follows from the definition of the functional $\Phi$ and Lemma 4.1.6 that
$$
|\Phi| \le \sup_{\tau_0 \in \mathcal{M}_{\max,0,T}} \mathsf{E} |g(\tau_0, e^{\vec{Y}(\tau_0)}, X(\tau_0))| \le \mathsf{E} \sup_{0 \le t \le T} |g(t, e^{\vec{Y}(t)}, X(t))| \le M_{41}. \tag{4.36}
$$
This inequality proves the theorem. $\square$


4.1.4 Asymptotically uniform upper bounds for supremums of log-price processes

In this subsection, we consider the model with perturbed price processes and perturbed pay-off functions. We give asymptotically uniform, with respect to the perturbation parameter, variants of the upper bounds for reward functions and optimal expected rewards. These results are used in theorems about convergence of reward functions and optimal expected rewards for models with perturbed log-price processes and pay-off functions.

Let us consider the model where $\vec{Z}_\varepsilon(t) = (\vec{Y}_\varepsilon(t), X_\varepsilon(t))$, $t \ge 0$ is a càdlàg multivariate modulated Markov log-price process with a phase space $\mathbb{Z} = \mathbb{R}_k \times \mathbb{X}$ ($\mathbb{X}$ is a Polish space), an initial distribution $P_\varepsilon(A)$ and transition probabilities $P_\varepsilon(t, \vec{z}, t+u, A)$. We assume that the process $\vec{Z}_\varepsilon(t)$ depends on some perturbation parameter $\varepsilon \in [0, \varepsilon_0]$, where $0 < \varepsilon_0 < \infty$. As usual, it is assumed that the transition probabilities $P_\varepsilon(t, \vec{z}, t+u, A)$ are measurable in the argument $(t, \vec{z}, u)$.

In approximation problems it is usually assumed that the price process for $\varepsilon = 0$ is the unperturbed process, while the price processes with $\varepsilon > 0$ are perturbed price processes. Usually the perturbed processes $\vec{Z}_\varepsilon(t)$ are simpler than the unperturbed process $\vec{Z}_0(t)$, and converge to it, in some sense, as the parameter $\varepsilon \to 0$.

In this context, we are interested in modifying the results concerning upper bounds for reward functions and optimal expected rewards in such a way that the corresponding upper bounds are valid asymptotically uniformly with respect to the parameter $\varepsilon \to 0$.

In what follows, we use the notations $\mathsf{P}_{\vec{z},t}$ and $\mathsf{E}_{\vec{z},t}$, respectively, for conditional probabilities and expectations under the condition $\vec{Z}_\varepsilon(t) = \vec{z}$.

Condition $\mathbf{C}_1[\bar{\beta}]$ should be replaced by the following condition, assumed to hold for some vector parameter $\bar{\beta} = (\beta_1, \ldots, \beta_k)$ with non-negative components:

$\mathbf{C}_2[\bar{\beta}]$: $\lim_{c \to 0} \overline{\lim}_{\varepsilon \to 0} \Delta_{\beta_i}(Y_{\varepsilon,i}(\cdot), c, T) = 0$, $i = 1, \ldots, k$.

The following lemma gives explicit asymptotically uniform upper bounds for moments of supremums of the absolute values of the perturbed log-price processes.

Lemma 4.1.7. Let condition $\mathbf{C}_2[\bar{\beta}]$ hold. Then there exist $\varepsilon_1 = \varepsilon_1(\bar{\beta}) \in (0, \varepsilon_0]$ and constants $M_{42,i} = M_{42,i}(\beta_i) < \infty$, $i = 1, \ldots, k$ such that the following inequalities take place for $\varepsilon \in [0, \varepsilon_1]$ and $0 \le t \le T$, $\vec{z} = (\vec{y}, x) = ((y_1, \ldots, y_k), x) \in \mathbb{Z}$, $i = 1, \ldots, k$,
$$
\mathsf{E}_{\vec{z},t} \exp\{\beta_i \sup_{t \le s \le T} |Y_{\varepsilon,i}(s)|\} \le M_{42,i} \, e^{\beta_i |y_i|}. \tag{4.37}
$$

Proof. The case where $\beta_i = 0$ is trivial. Indeed, the quantities on the left and right hand sides in (4.37) are equal, respectively, to 1 and $M_{42,i}$. Thus, inequality (4.37) holds with $M_{42,i} = 1$, for $\varepsilon \in [0, \varepsilon_0]$.

Let us, therefore, give the proof of inequality (4.37) for the case where $\beta_i > 0$. Let us begin with the following remark. Condition $\mathbf{C}_2[\bar{\beta}]$ implies that for any constant $e^{-\beta_i} < R_{3,i} < 1$ one can choose $0 < c_{2,i} = c_{2,i}(\beta_i, R_{3,i}) \le T$ such that
$$
\frac{\overline{\lim}_{\varepsilon \to 0} \Delta_{\beta_i}(Y_{\varepsilon,i}(\cdot), c_{2,i}, T) + 1}{e^{\beta_i}} < R_{3,i}. \tag{4.38}
$$
Then, by the definition of the upper limit, one can choose $0 < \varepsilon_1 = \varepsilon_1(\bar{\beta}) \le \varepsilon_0$ such that, for $\varepsilon \in [0, \varepsilon_1]$ and $1 \le i \le k$ such that $\beta_i > 0$,
$$
\frac{\Delta_{\beta_i}(Y_{\varepsilon,i}(\cdot), c_{2,i}, T) + 1}{e^{\beta_i}} < R_{3,i}. \tag{4.39}
$$
Relation (4.39) means that, for every $\varepsilon \in [0, \varepsilon_1]$, relation (4.3), with the constants $R_{3,i}$, $i = 1, \ldots, k$ replacing the constants $R_{1,i}$, $i = 1, \ldots, k$, holds for the process $\vec{Y}_\varepsilon(t)$.

The continuation of the proof, for every fixed $\varepsilon \in [0, \varepsilon_1]$ and $i$ such that $\beta_i > 0$, repeats word by word the proof of Lemma 4.1.1, with the only change being the replacement of the constants $R_{1,i}$ and $c_{1,i}$, respectively, by the constants $R_{3,i}$ and $c_{2,i}$. This yields that, for every $\varepsilon \in [0, \varepsilon_1]$, inequality (4.37) holds.

For every $i = 1, \ldots, k$, the constant $M_{37,i}$ appearing in inequality (4.2) in Lemma 4.1.1 is a function of the corresponding constant $R_{1,i}$. The explicit expression for this function is given in Remark 4.1.1. The same formula gives the expression for the constant $M_{42,i}$ as a function of the corresponding constant $R_{3,i}$. $\square$

Remark 4.1.5. The constants $M_{42,i} = M_{42,i}(\beta_i)$, $i = 1, \ldots, k$ are given by formulas which coincide, in fact, with formulas (4.22), where one should just replace the constants $R_{1,i}$ and $c_{1,i} = c_{1,i}(\beta_i, R_{1,i})$ by the constants $R_{3,i}$ and $c_{2,i} = c_{2,i}(\beta_i, R_{3,i})$,
$$
M_{42,i} =
\begin{cases}
1 & \text{if } \beta_i = 0, \\
\Big( \dfrac{e^{\beta_i}(e^{\beta_i} - 1) R_{3,i}}{1 - R_{3,i}} + 1 \Big)^{[\frac{T}{c_{2,i}}]+1} & \text{if } \beta_i > 0,
\end{cases} \tag{4.40}
$$
where the constants $e^{-\beta_i} < R_{3,i} < 1$ and $0 < c_{2,i} = c_{2,i}(\beta_i, R_{3,i}) \le T$ are defined, for $i$ such that $\beta_i > 0$, in relation (4.38).

Remark 4.1.6. The parameter $\varepsilon_1 = \varepsilon_1(\bar{\beta})$ is determined by relation (4.39).

4.1

Upper bounds for rewards for Markov LPP

191

Condition B8 [¯ γ ] should be replaced by the following condition assumed to hold for some vector parameter γ¯ = (γ1 , . . . , γk ) with non-negative components: B9 [¯ γ ]: limε→0 sup0≤t≤T,(~y,x)∈Z

y ~

1+

0 ≤ L18,i < ∞, i = 1, . . . , k.

P|gkε (t,e i=1

,x)|

L18,i eγi |yi |

< L17 , for some 0 < L17 < ∞,

The following lemma gives asymptotically uniform upper bounds for conditional expectations of supremums for perturbed pay-off processes. ¯ hold and 0 ≤ γi ≤ βi < ∞, i = Lemma 4.1.8. Let conditions B9 [¯ γ ] and C2 [β] ¯ ∈ (0, ε0 ] and constants M43 , M44,i = 1, . . . , k. Then, there exist ε2 = ε2 (¯ γ , β) M44,i (βi , γi ) < ∞, i = 1, . . . , k such that the following inequalities take place for ε ∈ [0, ε2 ] and 0 ≤ t ≤ T, ~z = ((y1 , . . . , yk ), x) ∈ Z, ~

E~z,t sup |gε (s, eYε (s) , Xε (s))| ≤ M43 + t≤s≤T

k X

M44,i eγi |yi | .

(4.41)

i=1

Proof. Condition B9 [¯ γ ] implies that there exists 0 < ε3 = ε3 (¯ γ ) ≤ ε0 such that for any ε ≤ ε3 , |gε (t, e~y , x)| < L17 . Pk γi |yi | 0≤t≤T,(~ y ,x)∈Z 1 + i=1 L18,i e sup

(4.42)

¯ γ¯ ) = ε1 ∧ ε3 . By the definition 0 < ε2 ≤ ε0 . Let now take ε2 = ε2 (β, For every ε ∈ [0, ε2 ], the following proof repeats the proof of Lemma 4.1.4. The only difference is that one should refer to Lemma 4.1.7 instead of Lemma 4.1.1 and to relation (4.42) instead of condition B8 [¯ γ ], when getting an inequality (4.41), for 0 ≤ t ≤ T, ~z ∈ Z.  Remark 4.1.8. The explicit formulas for the constants M43 and M44,i = M44,i (βi , γi ), i = 1, . . . , k analogous to the formula (9.125) take place, ( L17 L18,i if γi = 0, γi M43 = L17 , M44,i = (4.43) βi L17 L18,i M42,i if γi > 0. ¯ γ¯ ) = ε1 ∧ ε3 , where parameters ε1 Remark 4.1.9. The parameter ε2 = ε2 (β, and ε3 are determined, respectively, by relations (4.39) and (4.42). We consider the model, where the log-price and price processes and the pay-off function depend on a perturbation parameter ε ∈ [0, ε0 ]. In this case the corresponding reward functions also depend on this perturbation parameter. (ε) ~ ε (u), t ≤ u ≤ s], 0 ≤ t ≤ s ≤ T and In this case, let us denote by Ft,s = σ[Z (ε)

Mmax,t,T the class of all Markov moments τε,t for the Markov log-price process ~ ε (s) such that (a) t ≤ τε,t ≤ T , (b) event {τε,t > s} ∈ F (ε) , t ≤ s ≤ T . Z t,s

192

4

Upper bounds for options rewards for Markov LPP

¯ hold and 0 ≤ γi ≤ βi < ∞, i = Let assume that conditions B9 [¯ γ ] and C2 [β] 1, . . . , k. Lemma 4.1.8 implies that, for every 0 ≤ ε ≤ ε2 , the reward function for ~ ε (s) can be defined for every 0 ≤ t ≤ American option for the log-price process Z T, ~z = (~ y , x) ∈ Z, (ε)

(ε)

φt (~z) = φt (~ y , x) =

~

E~z,t gε (τε,t , eY (τε,t ) , X(τε,t )).

sup

(4.44)

(ε) τε,t ∈Mmax,t,T

Lemma 4.1.8 let us get the following asymptotically uniform upper bounds for the log-reward functions of American type options for perturbed log-price processes. ¯ hold and 0 ≤ γi ≤ βi < Theorem 4.1.3. Let conditions B9 [¯ γ ] and C2 [β] (ε)

∞, i = 1, . . . , k. Then, the log-reward functions φt (~ y , x) satisfy the following inequalities for ε ∈ [0, ε2 ] and 0 ≤ t ≤ T, ~z = (~ y , x) ∈ Z, (ε)

|φt (~ y , x)| ≤ M43 +

k X

M44,i eγi |yi | .

(4.45)

i=1

Proof. It is analogous to the proof of Theorem 4.1.1. Using Lemma 4.1.8 and the definition of the log-reward functions, we get the following inequalities for ε ∈ [0, ε2 ] and 0 ≤ t ≤ T, ~z = (~ y , x) ∈ Z, ~

(ε)

|φt (~ y , x)| ≤ E~z,t sup |gε (s, eYε (s) , Xε (s))| t≤s≤T

≤ M43 +

k X

M44,i eγi |yi | ,

(4.46)

i=1

which prove the theorem. 


193

¯ and D14 [β] ¯ hold. Then, there exist 0 < Lemma 4.1.9. Let condition, C2 [β], ¯ ε4 = ε4 (β) ≤ ε0 and constants M45,i = M45,i (βi ) < ∞, i = 1, . . . , k such that for the following inequality takes place, for i = 1, . . . , k, E exp{βi sup |Yε,i (s)|} ≤ M45,i .

(4.47)

0≤s≤T

¯ implies that there exists ε5 ∈ (0, ε0 ] such that for Proof. Condition D14 [β] ε ∈ [0, ε5 ] and i = 1, . . . , k, Eeβi Yεi (0) < K47,i .

(4.48)

Let us define ε4 = min(ε1 , ε5 ). By the definition, 0 < ε4 ≤ ε0 . For every ε ∈ [0, ε4 ], the following proof repeats the proof of Lemma 4.1.5. The only difference is that one should refer to Lemma 4.1.7 instead of Lemma ¯ when getting an inequality 4.1.1 and relation (4.48) instead of condition D13 [β], (4.47), for 0 ≤ t ≤ T, ~z ∈ Z.  Remark 4.1.10. The explicit formulas for the constants M45,i = M45,i (βi ), i = 1, . . . , k take place, M45,i = M42,i K47,i . (4.49) ¯ = min(ε1 , ε5 ), where parameters Remark 4.1.11. The parameter ε4 = ε4 (β) ε1 and ε5 are determined, respectively, by relations (4.39) and (4.48). The following lemma give asymptotically uniform explicit upper bound for the expectations of supremums for perturbed pay-off processes. ¯ and D14 [β] ¯ hold and 0 ≤ γi ≤ Lemma 4.1.10. Let conditions B9 [¯ γ ], C2 [β], ¯ βi < ∞, i = 1, . . . , k. Then, there exists ε6 = ε6 (β, γ¯ ) ∈ (0, ε0 ] and a constant ¯ γ¯ ) < ∞ such that for the following inequality takes place for ε ∈ M46 = M46 (β, [0, ε6 ], ~

E sup |gε (t, eYε (t) , Xε (t))| ≤ M46 .

(4.50)

0≤t≤T

¯ γ¯ ) = min(ε2 , ε5 ) = min(ε1 , ε3 , ε5 ). By the Proof. Let us define ε6 = ε6 (β, definition, 0 < ε6 ≤ ε0 . For every ε ∈ [0, ε6 ], the proof repeats the proof of Lemma 4.1.6. The only difference is that one should refer to Lemma 4.1.9 instead of Lemma 4.1.5, relation ¯ (4.42) instead of condition B1 [¯ γ ], and relation (4.48) instead of condition D13 [β], when getting an inequality (4.50), for 0 ≤ t ≤ T, ~z ∈ Z.  ¯ γ¯ ) takes Remark 4.1.12. The explicit formula for the constant M46 = M46 (β, place, γi X X βi . (4.51) M46 = M43 + M44,i + M44,i K47,i i:γi =0

i:γi >0

¯ γ¯ ) = min(ε1 , ε3 , ε5 ), where paRemark 4.1.13. The parameter ε6 = ε6 (β, rameters ε1 , ε3 , and ε5 are determined, respectively, by relations (4.39), (4.42), and (4.48).

194

4

Upper bounds for options rewards for Markov LPP

(ε) ~ ε (t) Let Mmax,0,T be the class of all Markov moments τε,0 for the process Z such that 0 ≤ τε,0 ≤ T . ~ ε (t) is defined as, The optimal expected reward for the log-price process Z (ε)

Φε = Φ(Mmax,0,T ) =

~

Egε (τε,0 , eY (τε,0 ) , X(τε,0 )).

sup

(4.52)

(ε)

τε,0 ∈Mmax,0,T

The following theorem follows from Lemma 4.1.10. It gives asymptotically uniform upper bounds for perturbed log-price processes. ¯ and D14 [β] ¯ hold and 0 ≤ γi ≤ Theorem 4.1.4. Let conditions B9 [¯ γ ], C2 [β], βi < ∞, i = 1, . . . , k. Then, the following inequality takes place for ε ∈ [0, ε6 ], |Φε | ≤ M46 .

(4.53)

Proof. It follows from the definition of the functional Φε and Lemma 4.1.10 that, for ε ∈ [0, ε6 ], |Φε | ≤

~

E|gε (τε,0 , eY (τε,0 ) , X(τε,0 ))|

sup (ε)

τε ∈Mmax,T ~

≤ E sup |gε (t, eYε (t) , Xε (t))| ≤ M46 .

(4.54)

0≤t≤T

This inequality proves the theorem. .

4.2 Asymptotically uniform conditions of compactness for Markov LPP In this section, we present various useful modifications of asymptotically uniform conditions of exponential moment compactness for multivariate modulated Markov log-price processes.

4.2.1 The first-type conditions of moment compactness for log-price processes The following representation formula takes place for the first-type modulus of exponential moment compactness ∆β (Yε,i (·), c, T ), ∆β (Yε,i (·), c, T ) =

sup

sup(E~z,t eβ|Yε,i (t+u)−Yε,i (t)| − 1)

0≤t≤t+u≤t+c≤T ~ z ∈Z

Z∞ =

sup

sup β

0≤t≤t+u≤t+c≤T ~ z ∈Z 0

eβh P~z,t {|Yε,i (t + u) − Yε,i (t)| ≥ h}dh.

(4.55)

4.2

195

Conditions of compactness for Makov LPP

¯ can be effecThis representation shows that the compactness condition C2 [β] tively used if the tail probabilities for increments |Yε,i (t + u) − Yε,i (t)| are given explicitly or can be effectively estimated. Let us define the the modulus of J-compactness for the components of the ~ε (t) = (Yε,1 (t), . . . , Yε,i (t)), for h, c, T ≥ 0, i = 1, . . . , k, log-price process Y ∆J (Yε,i (·), h, c, T ) =

sup P~z,t {|Yε,i (t + u) − Yε,i (t)| ≥ h}.

sup

(4.56)

0≤t≤t+u≤t+c≤T ~ z ∈Z

In what follows we use the following condition of J-compactness for càdlàg ~ε (t): log-price processes Y C3 : limc→0 limε→0 ∆J (Yε,i (·), h, c, T ) = 0, h > 0, i = 1, . . . , k. ¯ with βi > 0, i = 1, . . . , k implies that condiLemma 4.2.1. Condition C2 [β] tion C3 holds. Proof. Relation (4.55) and conditions of Lemma 16.2.1 imply that the following inequality takes place for i = 1, . . . , k and h > 0, ∆βi (Yε,i (·), c, T )

Zh ≥

sup

sup βi

eβi y P~z,t {|Yε,i (t + u) − Yε,i (t)| ≥ h0 }dh0

0≤t≤t+u≤t+c≤T z∈Z 0



sup

sup(eβi h − 1)P~z,t {|Yε,i (t + u) − Yε,i (t)| ≥ h}

0≤t≤t+u≤t+c≤T ~ z ∈Z

= (eβi h − 1)∆J (Yε,i (·), h, c, T ),

(4.57)

or equivalently ∆J (Yε,i (·), h, c, T ) ≤

∆βi (Yε,i (·), c, T ) . eβi h − 1

(4.58)

Relation (4.58) implies the proposition of the lemma.  Let us also define the first-type modulus of power moment compactness for ~ε (t) = (Yε,1 (t), . . . , Yε,k (t)), for c ≥ 0 and components of the log-price process Y α ≥ 1, Υα (Yε,i (·), c, T ) =

sup

sup E(|Yε,i (t + u) − Yε,i (t)| ∧ 1)α .

(4.59)

0≤t≤t+u≤t+c≤T ~ z ∈Z

The following representation formula takes place for the modulus of power moment compactness Υα (Yε,i (·), c, T ),

196

4

Upper bounds for options rewards for Markov LPP

Υα (Yε,i (·), c, T ) =

sup

sup E~z,t (|Yε,i (t + u) − Yε,i (t)| ∧ 1)α

0≤t≤t+u≤t+c≤T ~ z ∈Z

Z1 =

sup

sup α

hα−1 P~z,t {|Yε,i (t + u) − Yε,i (t)| ≥ h}dh.

(4.60)

0≤t≤t+u≤t+c≤T ~ z ∈Z 0

The following condition of power moment compactness for components of the ~ε (t) = (Yε,1 (t), . . . , Yε,k (t)), assumed to hold for some vector log-price process Y α ¯ = (α1 , . . . , αk ), with components αi ≥ 1, i = 1, . . . , k, is used in what follows: C4 [¯ α]: limc→0 limε→0 Υαi (Yε,i (·), c, T ) = 0, i = 1, . . . , k. Lemma 4.2.2. Condition C3 implies that condition C4 [¯ α] holds for any vector parameter α ¯ = (α1 , . . . , αk ), with components αi ≥ 1, i = 1, . . . , k. Proof. Relation (4.60) and conditions of Lemma 4.2.2 imply that the following inequality takes place, Υαi (Yε,i (·), c, T )

Z1 =

sup αi

sup

hαi −1 P~z,t {|Yε,i (t + u) − Yε,i (t)| ≥ h}dh

0≤t≤t+u≤t+c≤T ~ z ∈Z 0

Z1 ≤ αi

sup

sup P~z,t {|Yε,i (t + u) − Yε,i (t)| ≥ h}dh

0≤t≤t+u≤t+c≤T ~ z ∈Z 0

Z1 = αi

∆J (Yε,i (·), h, c, T )dh.

(4.61)

0

The proposition of the lemma follows in an obvious way from inequality (4.61).  ¯ with βi > 0, i = 1, . . . , k implies that conLemma 4.2.3. Condition C2 [β] dition C4 [¯ α] holds for any vector parameter α ¯ = (α1 , . . . , αk ), with components αi ≥ 1, i = 1, . . . , k. Proof. It is obvious that the following inequality holds for any α ≥ 1 and β > 0, hα−1 0 < Rα,β = sup βh < ∞. (4.62) h≥0 e Relation (4.55) and conditions of Lemma 4.2.3 imply that the following inequality takes place for i = 1, . . . , k,

4.2

Conditions of compactness for Makov LPP

197

∆βi (Yε,i (·), c, T )

Z1 ≥

sup βi

sup

eβi h P~z,t {|Yε,i (t + u) − Yε,i (t)| ≥ h}dh

0≤t≤t+u≤t+c≤T ~ z ∈Z 0



sup

sup

0≤t≤t+u≤t+c≤T ~ z ∈Z

Z1 × αi

βi αi Rαi ,βi

hαi −1 P~z,t {|Yε,i (t + u) − Yε,i (t)| ≥ h}dh

0

βi Υαi (Yε,i (·), c, T ), = αi Rαi ,βi

(4.63)

or, equivalently, Υαi (Yε,i (·), c, T ) ≤

αi Rαi ,βi ∆βi (Yε,i (·), c, T ). βi

(4.64)

This proposition of the lemma follows from inequality (4.64). 

4.2.2 The second-type conditions of moment compactness for log-price processes The moment modulus of compactness ∆β (Yε,i (·), c, T ) involves in the defining formula the absolute exponential moments E~z,t eβ|Yε,i (t+u)−Yε,i (t)| . For some models, such as log-price processes with independent increments, it is more convenient to use some modification of this modulus involving moment generating functions, i.e., usual exponential moments E~z,t eβ(Yε,i (t+u)−Yε,i (t)) . Let us define the second-type modulus exponential moment of compactness for ~ε (t) = (Yε,1 (t), . . . , Yε,k (t)), for β, c, T ≥ the components of the log-price process Y 0, i = 1, . . . , k, Ξ± β (Yε,i (·), c, T ) =

sup

sup |E~z,t e±β(Yε,i (t+u)−Yε,i (t)) − 1|.

(4.65)

0≤t≤t+u≤t+c≤T ~ z ∈Z

Let us assume that the following condition holds for or some vector parameter β¯ = (β1 , . . . , βk ) with non-negative components: ¯ limc→0 limε→0 Ξ± (Yε,i (·), c, T ) = 0, i = 1, . . . , k. E1 [β]: βi

¯ implies condition C2 [β] ¯ to hold. Lemma 4.2.4. Condition E1 [β]

198

4

Upper bounds for options rewards for Markov LPP

Proof. The following inequality takes place, for any vector β¯ = (β1 , . . . , βk ) with nonnegative components and every 0 ≤ t ≤ t + u ≤ T , ~z ∈ Z, E~z,t eβi |Yε,i (t+u)−Yε,i (t)| − 1 = (E~z,t eβi (Yε,i (t+u)−Yε,i (t)) − 1)I(Yε,i (t + u) ≥ Yε,i (t)) + (E~z,t e−βi (Yε,i (t+u)−Yε,i (t)) − 1)I(Yε,i (t + u) < Yε,i (t)) ≤ |E~z,t eβi (Yε,i (t+u)−Yε,i (t)) − 1| + |E~z,t e−βi (Yε,i (t+u)−Yε,i (t)) − 1|.

(4.66)

Relation (4.66) implies the following inequality holds, for any β, c, T ≥ 0, i = 1, . . . , k, − ∆β (Yε,i (·), c, T ) ≤ Ξ+ β (Yε,i (·), c, T ) + Ξβ (Yε,i (·), c, T ).

(4.67)

This inequality implies the proposition of the lemma.  Let now show that condition C3 , under some natural additional moment con¯ dition, implies condition C2 [β]. Let us introduce the functional that is the maximum of moment generating functions for increments of the log-price processes Yε,i (t), i = 1, . . . , k, defined for α ≥ 0, Ξα (Yε,i (·), T ) = sup sup E~z,t eα(Yε,i (t+u)−Yε,i (t)) . (4.68) 0≤t≤t+u≤T ~ z ∈Z

The following condition formulated in terms of this functional is assumed to hold for some vector parameter α ¯ = (α1 , . . . , αk ) with nonnegative components: E2 [¯ α]: limε→0 Ξ±αi (Yε,i (·), T ) < K48,i , i = 1, . . . , k, for some 1 < K48,i < ∞, i = 1, . . . , k. The following lemma gives effective sufficient conditions which imply the first¯ to hold. type condition of exponential moment compactness C2 [β] ¯ holds if, Lemma 4.2.4. Conditions C3 and E2 [¯ α] imply that condition C2 [β] for every i = 1, . . . , k, either 0 < βi < αi or 0 = βi = αi . Proof. The case, where βi = 0 is trivial. In this case the relation penetrating ¯ automatically holds. Thus, let us prove that this relation also condition C2 [β] holds for i such that βi > 0. Using Hölder inequality we get the following estimates, for every 0 ≤ t ≤ t + u ≤ T , ~z ∈ Z, E~z,t eβi |Yε,i (t+u)−Yε,i (t)| − 1 ≤ eβi h − 1 + E~z,t eβi |Yε,i (t+u)−Yε,i (t)|

4.2

199

Conditions of compactness for Makov LPP

× I(|Yε,i (t + u) − Yε,i (t)| ≥ h) βi

≤ eβi h − 1 + (E~z,t eαi |Yε,i (t+u)−Yε,i (t)| ) αi × (P~z,t {|Yε,i (t + u) − Yε,i (t)| ≥ h})

αi −βi αi

.

(4.69)

Using relation (4.69) we get the following inequality for the moment modulus of compactness, ∆βi (Yε,i (·), c, T ) ≤ eβi h − 1 +

 βi sup (E~z,t eαi |Yε,i (t+u)−Yε,i (t)| ) αi

sup

0≤t≤t+u≤t+c≤T ~ z ∈Z

× (P~z,t {|Yε,i (t + u) − Yε,i (t)| ≥ h}) ≤ eβi h − 1 + (

αi −βi αi

 βi

sup E~z,t eαi |Yε,i (t+u)−Yε,i (t)| ) αi

sup

0≤t≤t+u≤t+c≤T ~ z ∈Z

×(

sup P~z,t {|Yε,i (t + u) − Yε,i (t)| ≥ h})

sup

αi −βi αi



0≤t≤t+u≤t+c≤T ~ z ∈Z βi h

≤e

β

− 1 + (∆αi (Yε,i (·), c, T ) + 1) αi

× (∆J (Yε,i (·), h, c, T ))

αi −βi αi

.

(4.70)

Also, the following inequality takes place, for every 0 ≤ t ≤ t + u ≤ T , ~z ∈ Z, E~z,t eαi |Yε,i (t+u)−Yε,i (t)| = E~z,t eαi (Yε,i (t+u)−Yε,i (t)) I(Yε,i (t + u) ≥ Yε,i (t)) + E~z,t e−αi (Yε,i (t+u)−Yε,i (t)) I(Yε,i (t + u) < Yε,i (t)) ≤ E~z,t eαi (Yε,i (t+u)−Yε,i (t)) + E~z,t e−αi (Yε,i (t+u)−Yε,i (t)) .

(4.71)

Using relation (4.71) we get the following inequality, ∆αi (Yε,i (·), c, T ) + 1 ≤

sup

 sup E~z,t eαi (Yε,i (t+u)−Yε,i (t))

0≤t≤t+u≤t+c≤T ~ z ∈Z

+ E~z,t e−αi (Yε,i (t+u)−Yε,i (t))



≤ Ξαi (Yε,i (·), T ) + Ξ−αi (Yε,i (·), T ).

(4.72)

Finally, we get, using (4.70) and (4.72), the following inequality, ∆βi (Yε,i (·), c, T ) β

≤ eβi h − 1 + (Ξαi (Yε,i (·), T ) + Ξ−αi (Yε,i (·), T )) αi × (∆J (Yε,i (·), h, c, T ))

αi −βi αi

.

(4.73)

200

4

Upper bounds for options rewards for Markov LPP

For an arbitrary δ > 0 one can choose and fix h = h(δ) such that eβi h − 1 ≤ δ. Then, we get, passing to the limit as ε → 0 and then c → 0 in the inequality (4.73) and using conditions C3 and E2 [¯ α], lim lim ∆βi (Yε,i (·), c, T ) ≤ eβi h − 1 ≤ δ.

c→0 ε→0

(4.74)

¯ Due to an arbitrary choice of δ > 0 this relation implies that condition C2 [β] holds.  Remark 4.2.1. If condition C3 holds and also condition E2 [¯ α] holds for any vector parameter α ¯ = (α1 , . . . , αk ) with nonnegative components, then condition ¯ holds for any vector parameter β¯ = (β1 , . . . , βk ) with nonnegative compoC2 [β] nents. Indeed, in this case, one can always choose αi > βi , i = 1, . . . , k.

4.2.3 Conditions of moment compactness for index processes Let us also define the the modulus of J-compactness for the index component Xε (t), for h, c, T ≥ 0, ∆J (Xε (·), h, c, T ) =

sup P~z,t {dX (Xε (t + u), Xε (t)) ≥ h}.

sup

(4.75)

0≤t≤t+u≤t+c≤T ~ z ∈Z

In what follows we use the following J-compactness condition: C5 : limc→0 limε→0 ∆J (Xε (·), h, c, T ) = 0, h > 0. In what follows we also use then following modulus of power moment compactness for the index component Xε (t), for α > 0 and c ≥ 0, Υα (Xε (·), c, T ) =

sup

sup E~z,t dX (Xε (t + u), Xε (t))α .

(4.76)

0≤t≤t+u≤t+c≤T ~ z ∈Z

and the following moment compactness condition is assumed to hold for some α > 0: C6 [α]: limc→0 limε→0 Υα (Xε (·), c, T ) = 0. Lemma 4.2.5. Let condition C6 [α] holds for some α > 0. Then, condition C5 holds. Proof. If condition C6 [α] holds for some α > 0, then the following inequality takes place, for h > 0, ∆J (Xε (·), h, c, T ) =

sup

sup P~z,t {dX (Xε (t + u), Xε (t)) ≥ h}

0≤t≤t+u≤t+c≤T ~ z ∈Z



sup

sup

0≤t≤t+u≤t+c≤T ~ z ∈Z

=

Υα (Xε (·), c, T ) . hα

E~z,t dX (Xε (t + u), Xε (t))α hα (4.77)

4.2

201

Conditions of compactness for Makov LPP

This proposition of the lemma follows from inequality (4.77).  Let us introduce the functional that is an analogue of functional Ξα (Yε,i (·), T ) for the index component, Xε (t), for α ≥ 0, Ξα (Xε (·), T ) =

sup

sup E~z,t dX (Xε (t + u), Xε (t))α .

(4.78)

0≤t≤t+u≤T ~ z ∈Z

The following condition formulated in terms of this functional is assumed to hold for some α ≥ 0: E3 [α]: limε→0 Ξα (Xε (·), T ) < K49 , for some 0 < K49 < ∞. Lemma 4.2.6. Conditions C5 and E3 [α] imply condition C6 [β] to hold if 0 < β < α. Proof. Using Hölder inequality we get the following estimates, for every 0 ≤ t ≤ t + u ≤ T , ~z ∈ Z, E~z,t dX (Xε (t + u), Xε (t))β ≤ hβ + E~z,t dX (Xε (t + u), Xε (t))β I(dX (Xε (t + u), Xε (t)) ≥ h) β

≤ hβ + (E~z,t dX (Xε (t + u), Xε (t))α ) α × (P~z,t {dX (Xε (t + u), Xε (t)) ≥ h})

α−β α

.

(4.79)

Using relation (4.79), we get the following inequality β

Υβ (Xε (·), c, T ) ≤ hβ +

sup

sup E~z,t dX (Xε (t + u), Xε (t))α ) α

0≤t≤t+u≤t+c≤T ~ z ∈Z

× (P~z,t {dX (Xε (t + u), Xε (t)) ≥ h}) β

α−β α

≤ hβ + Ξα (Xε (·), T ) α ∆J (Xε (·), h, c, T )



α−β α

.

(4.80)

For an arbitrary δ > 0 one can choose and fix h = h(δ) such that hβ ≤ δ. Then, we get, passing to the limit as ε → 0 and then as c → 0 in the inequality (4.80) and using conditions C4 and E2 [α], lim lim Υα (Xε (·), c, T ) ≤ hβ ≤ δ.

c→0 ε→0

(4.81)

Due to an arbitrary choice of δ > 0 this relation implies that condition C6 [β] holds.  Remark 4.2.2. If condition C5 holds and also condition E3 [α] holds for any α ≥ 0, then condition C6 [β] holds for any β ≥ 0 with nonnegative components. Indeed, in this case, one can always choose α > β. Im what follows, the following two lemmas also are useful. Lemma 4.2.7. Let condition C6 [α] holds for some α ≥ 1. Then there exists ε7 = ε7 (α) ∈ (0, ε0 ] and a constant M47 = M47 (α) such that the following

202

4

Upper bounds for options rewards for Markov LPP

inequalities take place for ε ∈ [0, ε7 ], Ξα (Xε (·), T ) ≤ M47 .

(4.82)

Proof. It is resembles the proof of Lemmas 4.1.1 and 4.1.7. Condition C6 [α] implies that for any constant 0 < R4 < 1 one can choose 0 < c3 = c3 (α, R4 ) ≤ T such that lim Υα (Xε (·), c3 , T ) < R4 .

ε→0

(4.83)

Then, by the definition of upper limit, one can choose 0 < ε7 = ε7 (α) ≤ ε0 such that for ε ∈ [0, ε7 ], Υα (Xε (·), c3 , T ) < R4 . (4.84) Let us also introduce random variables, Wε,α [t0 , t00 ] = sup dX (Xε (t), Xε (t0 ))α , 0 ≤ t0 ≤ t00 ≤ T.

(4.85)

t0 ≤t≤t00

Define a partition Πm,t = ht = vm,0 < . . . < vm,m = T i of interval [t, T ] by points vm,n = t + n(T − t)/m, n = 0, . . . , m. The following obvious inequality takes place, for 0 ≤ t ≤ T , Wε,α [t, T ] ≤

m X

Wε,α [vm,k−1 , vm,k ].

(4.86)

k=1

We shall show below that the following inequality holds, for ε ∈ [0, ε7 ], and 0 ≤ t0 ≤ T , sup sup E~z,t0 Wε,α [t0 , t00 ] ≤ R5 , (4.87) 0≤t0 ≤t00 ≤t0 +c3 ≤T ~ z ∈Z

where

1 2α−2 (αR4α + R4 ) < ∞. (4.88) 1 − R4 −t Let us choose m = [ cT3 ] + 1. In this case vm,n − vm,n−1 = Tm ≤ c3 . Now, using condition C6 [α] and relation (4.86), we get, for ε ∈ [0, ε7 ] and ~z ∈ Z, 0 ≤ t ≤ T , T (4.89) E~z,t Wε,α (t, T ) = E~z,t sup dX (Xε (t), Xε (t0 ))α ≤ R5 ([ ] + 1). c 3 t≤u≤T

R5 = 1 +

Relation (4.89) implies that relation (4.82) holds with the constant, T ] + 1). (4.90) c3 To show that relation (4.87) holds, we get using relation (4.84) that, for ε ∈ [0, ε7 ], M47 = R5 ([

sup

sup P~z,t {dX (Xε (t00 ), Xε (t)) ≥ 1}

0≤t0 ≤t≤t00 ≤t0 +c≤T ~ z ∈Z



sup

sup E~z,t dX (Xε (t00 ), Xε (t))α

0≤t0 ≤t≤t00 ≤t0 +c≤T ~ z ∈Z

= Υα (Xε (·), c, T ) ≤ R4 < 1.

(4.91)

4.2

Conditions of compactness for Makov LPP

203

~ ε (t) is a càdlàg Markov The process Xε (t) is not a Markov process but Z process. The following lemma presents the variant of well known Kolmogorov inequality for Markov processes. We formulate it in the form of a lemma. The proof is a slight modification of the standard proof for Markov processes, which can be found, for example, in Gikhman and Skorokhod (1971). Lemma 4.2.8. Let the following condition holds for a càdlàg Markov process ~ ε (t) for some 0 ≤ t0 ≤ t00 ≤ T and a > 0 and 0 ≤ R < 1: Z sup P~z,t {dX (Xε (t00 ), Xε (t)) ≥ a} ≤ R.

(4.92)

~ z ∈Z

Then, for any vector ~z ∈ Z and b > 0, P~z,t0 { sup dX (Xε (t), Xε (t0 )) ≥ a + b} t0 ≤t≤t00



1 P~z,t0 {dX (Xε (t00 ), Xε (t0 )) ≥ b}. 1−R

(4.93)

The proof analogous to those shortly presented for Lemmas 4.1.2 and 4.1.3. Using Lemma 4.2.8, we get for every ε ∈ [0, ε7 ] and 0 ≤ t0 ≤ t00 ≤ t0 + c3 ≤ T, ~z ∈ Z, and b > 0, P~z,t0 {Wε,α [t0 , t00 ] ≥ 1 + b} ≤

1 P~z,t0 {dX (Xε (t0 ), Xε (t00 )) ≥ b}. 1 − R4

(4.94)

Relations (4.91) and (4.94) imply that for every ε ∈ [0, ε7 ] and 0 ≤ t0 ≤ t00 ≤ t + c3 ≤ T , ~z ∈ Z, 0

0

Z∞

00

E~z,t0 Wε,α [t , t ] = α

bα−1 P~z,t0 {Wε,α [t0 , t00 ] ≥ b}db

0

Z1 ≤α

b

α−1

Z∞ db + α

0

bα−1 P~z,t0 {Wε,α [t0 , t00 ] ≥ b}db

1

Z∞ = 1 + α (1 + b)α−1 P~z,t0 {Wε,α [t0 , t00 ] ≥ 1 + b}db 0

α ≤1+ 1 − R4

Z∞ (1 + b)α−1 P~z,t0 {dX (Xε (t0 ), Xε (t00 )) ≥ b}db 0

≤1+

α2α−2 1 − R4

Z∞ (1 + bα−1 )P~z,t0 {dX (Xε (t0 ), Xε (t00 )) ≥ b}db 0

2α−2 ≤1+ (αE~z,t0 dX (Xε (t0 ), Xε (t00 )) 1 − R4

204

4

Upper bounds for options rewards for Markov LPP

+ E~z,t0 dX (Xε (t0 ), Xε (t00 ))α ) 1 2α−2 (α(E~z,t0 dX (Xε (t0 ), Xε (t00 ))α ) α 1 − R4 + E~z,t0 dX (Xε (t0 ), Xε (t00 ))α )

≤1+

≤1+

1 2α−2 (αR4α + R4 ). 1 − R4

(4.95)

Since inequality (4.95) holds for every ε ∈ [0, ε7 ] and 0 ≤ t0 ≤ t00 ≤ t0 + c3 ≤ T , ~z ∈ Z, it imply relation (4.87). The proof of Lemma 4.2.7 is complete.  Remark 4.2.3. The explicit formula for constant M47 = M47 (α) follows from formulas (4.88) and (4.90), M47 = (1 +

1 2α−2 T (αR4α + R4 ))([ ] + 1), 1 − R4 c3

(4.96)

where constants 0 < R4 < 1 and 0 < c3 = c3 (α, R4 ) ≤ T are determined by in relation (4.83). Remark 4.2.4. Parameter ε7 = ε7 (α) is determined by relation (4.84). The following lemma is a simple corollary of Lemma 4.2.7. Lemma 4.2.9. Let condition C6 [α] holds for some α ≥ 1. Then, the following inequality takes place, for ε ∈ [0, ε7 ], E

sup 0≤t≤t+u≤T

dX (Xε (t), Xε (t + u))α ≤ M47 .

(4.97)

Proof. Relation (4.82) implies the following inequality, E

sup 0≤t+u≤T

dX (Xε (t), Xε (t + u))α

Z E~z,0

=

sup 0≤t+u≤T

~ ε (0) ∈ d~z} ≤ M47 . dX (Xε (t), Xε (t + u))α P{Z

(4.98)

Rk ×X

which proves the lemma. . In conclusion, let us formulate the lemma, which is direct corollary of Lemma 4.2.7 and, together with Lemma 4.2.5, gives some kind of inverse proposition to the proposition of Lemma 4.2.6. Lemma 4.2.10. If condition C6 [α] holds for some α ≥ 1, then condition E3 [α] holds.

4.2.4 Time-skeleton approximations for log-price processes As an example, let us consider a model of time-skeleton approximation for càdlàg ~ 0 (t) = (Y ~0 (t), X ~ 0 (t)). modulated Markov log-price process Z

4.2

Conditions of compactness for Makov LPP

205

Let Πε = h0 = tε,0 < tε,1 < · · · < tε,nε = T i be, for every ε ∈ (0, ε0 ] a partition of the interval [0, T ] such that (a) nε → ∞ as ε → 0, (b) ∆ε = max0≤n≤nε −1 ∆tε,n → ∆0 = 0 as ε → 0, where ∆tε,n = tε,n+1 − tε,n , n = 0, . . . , nε − 1. Let us introduce, for every ε ∈ (0, ε0 ], the step-wise time-skeleton approximation processes,  ~ 0 (tε,n ) for t ∈ [tε,n , tε,n+1 ), n = 0, . . . , nε − 1, Z ~ Zε (t) = (4.99) ~ 0 (T ) Z for t = T. ~ ε (t) is, for every ε ∈ (0, ε0 ], a càdlàg process. By the definition, the process Z Let us assume that the following condition of J-compactness holds for modu~0 (t): lated càdlàg log-price processes Y C7 : limc→0 ∆J (Y0,i (·), h, c, T ) = 0, h > 0, i = 1, . . . , k. ~ 0 (t), then condition C3 Lemma 4.2.7. If condition C7 holds for the process Z ~ ε (t). holds for the time-skeleton approximation processes Z Proof. It follows from the following inequality, which holds for every ε ∈ (0, ε0 ] and any h, c > 0, ∆J (Yε,i (·), h, c, T ) =

sup P~z,t {|Yε,i (t + u) − Yε,i (t)| ≥ h}

sup

0≤t≤t+u≤t+c≤T ~ z ∈Z



sup P~z,t {|Y0,i (t + u) − Y0,i (t)| ≥ h}

sup

0≤t≤t+u≤t+c+∆ε ≤T ~ z ∈Z

= ∆J (Y0,i (·), h, c + ∆ε , T ).

(4.100)

Let us assume that the following condition of moment boundedness holds, for some vector parameter α ¯ = (α1 , . . . , αk ) with nonnegative components: E4 [¯ α]: Ξαi (Y0,i (·), T ) < K50,i , i = 1, . . . , k, for some 1 < K50,i < ∞, i = 1, . . . , k. ~ 0 (t), then condition ¯ holds for the process Z Lemma 4.2.8. If condition C1 [β] ~ ¯ C2 [β] holds for the time-skeleton approximation processes Zε (t). Proof. It follows from the following inequality, which holds for every ε ∈ (0, ε0 ] and any c > 0, ∆β (Yε,i (·), c, T ) =

sup

sup E~z,t (eβ|Yε,i (t+u)−Yε,i (t)| − 1)

0≤t≤t+u≤t+c≤T ~ z ∈Z



sup

E~z,t (eβ|Y0,i (t+u)−Y0,i (t)| − 1)

0≤t≤t+u≤t+c+∆ε ≤T

= ∆β (Y0,i (·), c + ∆ε , T ).

(4.101)

206

4

Upper bounds for options rewards for Markov LPP

The proposition of the lemma follows from this inequality.  ~ 0 (t), then condition Lemma 4.2.9. If condition E4 [¯ α] holds for the process Z ~ E2 [¯ α] holds for the time-skeleton approximation processes Zε (t). Proof. It follows from the the following inequality, which holds for every ε ∈ (0, ε0 ], Ξα (Yε,i (·), T ) =

sup E~z,t eα(Y0,i (tε,n+m )−Y0,i (tε,n ))

sup

0≤tε,n ≤tε,n+m ≤T ~ z ∈Z



sup

sup E~z,t eα(Y0,i (t+u)−Y0,i (t))

0≤t≤t+u≤T ~ z ∈Z

= Ξα (Y0,i (·), T ).

(4.102)

Let us assume that the following condition of J-compactness holds for càdlàg index process X0 (t): C8 : limc→0 ∆J (X0 (·), h, c, T ) = 0, h > 0. Lemma 4.2.10. If condition C8 holds for the process X0 (t), then condition ~ ε (t). C5 holds for the time-skeleton approximation processes X Proof. It follows from the following inequality, which holds for every ε ∈ (0, ε0 ] and any h, c > 0, ∆J (Xε (·), h, c, T ) =

sup

sup P~z,t {dX (Xε (t + u), Xε (t)) ≥ h}

0≤t≤t+u≤t+c≤T ~ z ∈Z



sup

sup P~z,t {dX (X0 (t + u), X0 (t)) ≥ h}

0≤t≤t+u≤t+c+∆ε ≤T ~ z ∈Z

= ∆J (X0 (·), h, c + ∆ε , T ).

(4.103)

Relation (4.103) proves the lemma. 

4.3 Upper bounds for rewards for LPP with independent increments

In this section, we present results concerning upper bounds for option rewards of American-type options for log-price processes with independent increments.

4.3.1 Upper bounds for option rewards for log-price processes with independent increments

Let $\vec{Y}(t) = (Y_1(t), \ldots, Y_k(t))$, $t \ge 0$ be a càdlàg multivariate log-price process with independent increments.


The independent increments property means mutual independence of incre~ (tn ) − Y ~ (tn−1 ), n = 1, 2 . . . for any 0 = t0 < t1 < t2 < · · · . As far as initial ments Y ~ (0) is concerned, it is assumed that the random vector Y ~ (0) is independent value Y ~ ~ of the stochastic process Y (t) − Y (0), t ≥ 0. In this subsection, the model without index component is considered. In fact, one can always to define a virtual constant index component X(t) which takes the constant value x0 for every t ≥ 0. ~ (t), for 0 ≤ Let us introduce distribution of the increments for the process Y t ≤ t + u < ∞, A ∈ Bk , ~ (t + u) − Y ~ (t) ∈ A}. P (t, t + u, A) = P{Y

(4.104)

~ (t) is a càdlàg Markov process with transition probabilities The process Y connected with distributions of increments for this process P (t, t + u, A) by the following relation, for 0 ≤ t ≤ t + u < ∞, ~ y ∈ Rk , A ∈ B k , P (t, ~ y , t + u, A) = P (t, t + u, A − ~ y) ~ (t + u) − Y ~ (t) ∈ A}. = P{~ y+Y

(4.105)

Every component of a multivariate process process with independent increments is itself a process with independent increments. Thus, Yi (t) is a process with independent increments. Its distributions of increments are connected with transition probabilities of the vector process with ~ (t) by the following formula, independent increments Y Pi (t, t + u, Ai ) = P{Yi (t + u) − Yi (t) ∈ Ai } = P (t, t + u, A0i ),

(4.106)

B1 , A0i

where Ai ∈ = Ri−1 × Ai × Rk−i , i = 1, . . . , k. The first-type modulus of exponential moment compactness ∆βi (Yi (·), c, T ), defined in formula (4.65), takes in this case the following simpler form, ∆β (Yi (·), c, T ) = ∆0β (Yi (·), c, T ) =

E(eβ|Yi (t+u)−Yi (t)| − 1).

sup

(4.107)

0≤t≤t+u≤t+c≤T

¯ assumed to hold for some vector parameter Respectively, condition C1 [β], ¯ β = (β1 , . . . , βk ) with non-negative components, takes the following form: ¯ limc→0 ∆0β (Yi (·), c, T ) = 0, i = 1, . . . , k. C9 [β]: i

In this case the J-compactness modulus ∆J (Yi (·), h, c, T ) takes the following simpler form, ∆J (Yi (·), h, c, T ) = ∆0J (Yi (·), h, c, T ) =

sup 0≤t≤t+u≤t+c≤T

P{|Yi (t + u) − Yi (t)| ≥ h}.

(4.108)

208

4

Upper bounds for options rewards for Markov LPP

In this case, ∆0J (Yi (·), h, c, T ) is a modulus of uniform stochastic continuity for process Yi (t). Conditions C3 takes the following form: C10 : limc→0 ∆0J (Yi (·), h, c, T ) = 0, h > 0, i = 1, . . . , k. ~ (t). Condition C10 is condition of uniform stochastic continuity for processes Y ~ (t) is a càdlàg Lévy process, i.e., It is useful to note that, in this case, where Y it is a càdlàg stochastically continuous process with independent increments, then the condition of uniform stochastic continuity C10 automatically holds. The maximum of the moment generation function for the increments of the log-price process Ξα (Yi (·), T ), defined in formula (4.78), takes the following form, for α ∈ R1 , Ξα (Yi (·), T ) = Ξ0α (Yi (·), T ) =

Eeα(Yi (t+u)−Yi (t))

sup 0≤t≤t+u≤T

=

Eeα(Yi (t+u)−Yi (0)) . Eeα(Yi (t)−Yi (0))

sup 0≤t≤t+u≤T

(4.109)

The functional $\Xi_\alpha'(Y_i(\cdot), T)$ can take the value $+\infty$ if the exponential moment $\mathsf{E}\, e^{\alpha(Y_i(t+u) - Y_i(t))} = +\infty$ for some $0 \leq t \leq t+u \leq T$. Respectively, the quotient $\mathsf{E}\, e^{\alpha(Y_i(t+u) - Y_i(0))} / \mathsf{E}\, e^{\alpha(Y_i(t) - Y_i(0))}$ should be counted as $+\infty$ in such a case.

Condition $\mathbf{E}_2[\bar{\alpha}]$, assumed to hold for some vector parameter $\bar{\alpha} = (\alpha_1, \ldots, \alpha_k)$ with non-negative components, takes the following form:

$\mathbf{E}_5[\bar{\alpha}]$: $\Xi_{\pm\alpha_i}'(Y_i(\cdot), T) < K_{51,i}$, $i = 1, \ldots, k$, for some $1 < K_{51,i} < \infty$, $i = 1, \ldots, k$.

Let us assume that $\vec{Y}(t)$ is a Lévy process. Every component $Y_i(t)$ of the process $\vec{Y}(t)$ is then also a Lévy process. The Lévy–Khintchine representation takes place for the characteristic function of the increments $Y_i(t) - Y_i(0)$, for $0 \leq t \leq T$, $i = 1, \ldots, k$,

$$\varphi_{i,t}(s) = \mathsf{E} \exp\{is(Y_i(t) - Y_i(0))\} = \exp\Big\{ i\mu_i(t)s - \tfrac{1}{2}\sigma_{ii}(t)s^2 + \int_{|y|<1} (e^{isy} - 1 - isy)\,\Pi_i(t, dy) + \int_{|y|\geq 1} (e^{isy} - 1)\,\Pi_i(t, dy) \Big\}. \qquad (4.110)$$

The triplets of characteristics $\langle \vec{\mu}(\cdot), \Sigma(\cdot), \Pi(\cdot,\cdot) \rangle$ and $\langle \mu_i(\cdot), \sigma_{ii}(\cdot), \Pi_i(\cdot,\cdot) \rangle$, respectively, for the processes $\vec{Y}(t)$ and $Y_i(t)$ are connected in the following way: (a) $\mu_i(t)$ is the $i$-th component of the vector function $\vec{\mu}(t)$; (b) $\sigma_{ii}(t)$ is the $i$-th diagonal element of the matrix function $\Sigma(t)$; and (c) $\Pi_i(t, A_i)$ is a projection of the measure $\Pi(t, A)$ such that $\Pi_i(t, A_i) = \Pi(t, A_i')$, for $A_i \in \mathbb{B}_1$, $A_i' = \mathbb{R}^{i-1} \times A_i \times \mathbb{R}^{k-i}$.

Let us introduce the following condition, for $0 \leq t \leq T$ and $\alpha_i \geq 0$, $i = 1, \ldots, k$:

$\mathbf{E}_6(t, i, \alpha_i)$: $\int_{|y| \geq 1} e^{\alpha_i |y|}\, \Pi_i(t, dy) < K_{52,i}$, for some $1 < K_{52,i} < \infty$.

The following lemma gives the well-known explicit formula for the moment generating function of the Lévy process $Y_i(t)$.

Lemma 4.3.1. Let $Y_i(t)$ be a real-valued Lévy process. Then, condition $\mathbf{E}_6(t, i, \alpha_i)$ is necessary and sufficient for the existence of the exponential moments $\mathsf{E} \exp\{\alpha Y_i(t)\} < \infty$, $|\alpha| \leq \alpha_i$. Moreover, the following formula takes place for $-\alpha_i \leq \alpha \leq \alpha_i$,

$$\psi_{i,t}(\alpha) = \mathsf{E} \exp\{\alpha(Y_i(t) - Y_i(0))\} = \exp\Big\{ \mu_i(t)\alpha + \tfrac{1}{2}\sigma_{ii}(t)\alpha^2 + \int_{|y|<1} (e^{\alpha y} - 1 - \alpha y)\,\Pi_i(t, dy) + \int_{|y|\geq 1} (e^{\alpha y} - 1)\,\Pi_i(t, dy) \Big\}. \qquad (4.111)$$

either $\alpha_i > \beta_i > 0$ or $\alpha_i = \beta_i = 0$. This lemma allows one to replace condition $\mathbf{C}_{11}[\bar{\beta}]$ by conditions $\mathbf{C}_{12}$ and $\mathbf{E}_9[\bar{\alpha}]$ in Theorems 4.3.1 and 4.3.2.

Let us also consider the case where $\vec{Y}_{\varepsilon,i}(t)$ is, for every $\varepsilon \in [0, \varepsilon_0]$, a Lévy log-price process with a triplet of characteristics $\langle \mu_{\varepsilon,i}(\cdot), \sigma_{\varepsilon,ii}(\cdot), \Pi_{\varepsilon,i}(\cdot,\cdot) \rangle$. In this case, condition $\mathbf{E}_7[\bar{\alpha}]$ can be replaced by the following condition, assumed to hold for some vector parameter $\bar{\alpha} = (\alpha_1, \ldots, \alpha_k)$ with non-negative components:


$\mathbf{E}_{10}[\bar{\alpha}]$: $\lim_{\varepsilon \to 0} \Big[ \sup_{0 \leq t \leq t+u \leq T} |\mu_{\varepsilon,i}(t+u) - \mu_{\varepsilon,i}(t)| + \sigma_{\varepsilon,ii}(T) + \int_{|y| \leq 1} y^2\, \Pi_{\varepsilon,i}(T, dy) + \int_{|y| > 1} e^{\alpha_i |y|}\, \Pi_{\varepsilon,i}(T, dy) \Big] < K_{56,i}$, $i = 1, \ldots, k$, for some $1 < K_{56,i} < \infty$, $i = 1, \ldots, k$.

Lemma 4.3.7. Condition $\mathbf{E}_{10}[\bar{\alpha}]$ implies condition $\mathbf{E}_9[\bar{\alpha}]$ to hold.

Proof. It follows in an obvious way from the inequality (4.114) applied to the processes $\vec{Y}_{\varepsilon,i}(t)$. □

As an example, let us consider a càdlàg Lévy log-price process $\vec{Y}_0(t)$ and define its time-skeleton approximation processes.

Let $\Pi_\varepsilon = \langle 0 = t_{\varepsilon,0} < t_{\varepsilon,1} < \cdots < t_{\varepsilon,n_\varepsilon} = T \rangle$ be, for every $\varepsilon \in (0, \varepsilon_0]$, a partition of the interval $[0, T]$ such that (a) $n_\varepsilon \to \infty$ as $\varepsilon \to 0$, (b) $\Delta_\varepsilon = \max_{0 \leq n \leq n_\varepsilon - 1} \Delta t_{\varepsilon,n} \to \Delta_0 = 0$ as $\varepsilon \to 0$, where $\Delta t_{\varepsilon,n} = t_{\varepsilon,n+1} - t_{\varepsilon,n}$, $n = 0, \ldots, n_\varepsilon - 1$.

Let us introduce the step-wise time-skeleton approximation processes,

$$\vec{Y}_\varepsilon(t) = \begin{cases} \vec{Y}_0(t_{\varepsilon,n}) & \text{for } t \in [t_{\varepsilon,n}, t_{\varepsilon,n+1}),\ n = 0, \ldots, n_\varepsilon - 1, \\ \vec{Y}_0(T) & \text{for } t = T. \end{cases} \qquad (4.122)$$

By the definition, the process $\vec{Y}_\varepsilon(t)$ is, for every $\varepsilon \in (0, \varepsilon_0]$, a càdlàg process with independent increments.

Lemma 4.3.8. If $\vec{Y}_0(t)$ is a càdlàg Lévy process, then condition $\mathbf{C}_{12}$ holds for the time-skeleton approximation processes $\vec{Y}_\varepsilon(t)$.

Proof. It follows from the following inequality, which holds for every $\varepsilon \in (0, \varepsilon_0]$ and any $h, c > 0$,

$$\Delta_J'(Y_{\varepsilon,i}(\cdot), h, c, T) = \sup_{0 \leq t \leq t+u \leq t+c \leq T} \mathsf{P}\{|Y_{\varepsilon,i}(t+u) - Y_{\varepsilon,i}(t)| \geq h\} \leq \sup_{0 \leq t \leq t+u \leq t+c+\Delta_\varepsilon \leq T} \mathsf{P}\{|Y_{0,i}(t+u) - Y_{0,i}(t)| \geq h\} = \Delta_J'(Y_{0,i}(\cdot), h, c + \Delta_\varepsilon, T). \qquad (4.123)$$

In this case, condition $\mathbf{C}_{10}$ holds for the process $\vec{Y}_0(t)$ and, thus,

$$\lim_{c \to 0} \lim_{\varepsilon \to 0} \Delta_J'(Y_{0,i}(\cdot), h, c + \Delta_\varepsilon, T) = 0. \qquad (4.124)$$

Relation (4.124) proves the lemma. □

The following lemma is a corollary of Lemma 4.2.8.

Lemma 4.3.9. If condition $\mathbf{E}_{10}[\bar{\alpha}]$ holds for the process $\vec{Y}_0(t)$ and, for every $i = 1, \ldots, k$, either $\alpha_i > \beta_i > 0$ or $\alpha_i = \beta_i = 0$, then condition $\mathbf{C}_{11}[\bar{\beta}]$ holds for the time-skeleton approximation processes $\vec{Y}_\varepsilon(t)$.

Lemma 4.3.10. If condition $\mathbf{E}_5[\bar{\alpha}]$ holds for the càdlàg Lévy process $\vec{Y}_0(t)$, then condition $\mathbf{E}_2[\bar{\alpha}]$ holds for the time-skeleton approximation processes $\vec{Y}_\varepsilon(t)$.


Proof. It follows from the following inequality, which holds for every $\varepsilon \in (0, \varepsilon_0]$,

$$\Xi_\alpha'(Y_{\varepsilon,i}(\cdot), T) = \sup_{0 \leq t_{\varepsilon,n} \leq t_{\varepsilon,n+m} \leq T} \mathsf{E}\, e^{\alpha(Y_{0,i}(t_{\varepsilon,n+m}) - Y_{0,i}(t_{\varepsilon,n}))} \leq \sup_{0 \leq t \leq t+u \leq T} \mathsf{E}\, e^{\alpha(Y_{0,i}(t+u) - Y_{0,i}(t))} = \Xi_\alpha'(Y_{0,i}(\cdot), T). \qquad (4.125)$$
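As a computational illustration of the construction (4.122), the following Python sketch builds the step-wise time-skeleton approximation of a simulated càdlàg Lévy path (here, an assumed drift plus compound Poisson jumps; all parameters are hypothetical) on a coarse partition of $[0, T]$.

```python
import numpy as np

rng = np.random.default_rng(3)
T, drift, lam, jump_std = 1.0, 0.1, 5.0, 0.2

# A simulated cadlag Levy path Y_0 on a fine mesh: linear drift plus compound Poisson jumps.
fine = np.linspace(0.0, T, 2001)
jump_times = np.sort(rng.uniform(0.0, T, rng.poisson(lam * T)))
jump_sizes = rng.normal(0.0, jump_std, size=len(jump_times))
y0 = drift * fine + np.array([jump_sizes[jump_times <= t].sum() for t in fine])

# Partition Pi_eps of [0, T] and the values of Y_0 at the partition points.
grid = np.linspace(0.0, T, 11)                       # Delta_eps = 0.1
y0_at_grid = y0[np.searchsorted(fine, grid)]

def skeleton_value(t):
    """Step-wise time-skeleton approximation (4.122): Y_eps(t) equals
    Y_0(t_{eps,n}) for t in [t_{eps,n}, t_{eps,n+1}), and Y_0(T) at t = T."""
    if t >= T:
        return y0_at_grid[-1]
    return y0_at_grid[np.searchsorted(grid, t, side="right") - 1]

print(skeleton_value(0.37), skeleton_value(T))
```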

4.4 Upper bounds for rewards for diffusion LPP

In this section, we present upper bounds for rewards of American-type options for diffusion log-price processes.

4.4.1 Skeleton approximations for diffusion log-price processes

Let $\vec{Y}_0(t) = (Y_{0,1}(t), \ldots, Y_{0,k}(t))$, $t \in [0, T]$ be a $k$-dimensional continuous log-price process which is a solution of the following stochastic differential equation,

$$d\vec{Y}_0(t) = \vec{\mu}(t, \vec{Y}_0(t))\,dt + \Sigma(t, \vec{Y}_0(t))\,d\vec{W}(t), \quad t \in [0, T], \qquad (4.126)$$

where: (a) $\vec{Y}_0(0) = (Y_{0,1}(0), \ldots, Y_{0,k}(0))$ is a random vector taking values in the space $\mathbb{R}^k$; (b) $\vec{W}(t) = (W_1(t), \ldots, W_k(t))$, $t \in [0, T]$ is a standard $k$-dimensional Wiener process, i.e. a $k$-dimensional continuous Gaussian process with means $\mathsf{E} W_i(t) = 0$, $i = 1, \ldots, k$, $t \in [0, T]$ and covariances $\mathsf{E} W_i(t) W_j(s) = \min(t, s) I(i = j)$, $i, j = 1, \ldots, k$, $t, s \in [0, T]$; (c) the random vector $\vec{Y}_0(0)$ and the process $\vec{W}(t)$, $t \in [0, T]$ are independent; (d) $\vec{\mu}(t, \vec{y}) = (\mu_1(t, \vec{y}), \ldots, \mu_k(t, \vec{y}))$ is a $k$-dimensional vector function with components which are Borel functions acting from the space $[0, T] \times \mathbb{R}^k$ to the space $\mathbb{R}^1$; (e) $\Sigma(t, \vec{y}) = \|\sigma_{i,j}(t, \vec{y})\|$ is a $k \times k$ matrix function with components which are Borel functions acting from the space $[0, T] \times \mathbb{R}^k$ to the space $\mathbb{R}^1$.

We impose the following standard conditions on the coefficients of the stochastic differential equation (4.126):

$\mathbf{G}_8$: For any $0 < H < \infty$, there exists a constant $1 \leq K_{57,H} < \infty$ such that $\max_{1 \leq i,j \leq k} \sup_{0 \leq t \leq T,\, |\vec{y}'|, |\vec{y}''| \leq H,\, \vec{y}' \neq \vec{y}''} \frac{|\mu_i(t, \vec{y}') - \mu_i(t, \vec{y}'')| + |\sigma_{i,j}(t, \vec{y}') - \sigma_{i,j}(t, \vec{y}'')|}{|\vec{y}' - \vec{y}''|} < K_{57,H}$,

and

$\mathbf{G}_9$: $\max_{1 \leq i,j \leq k} \sup_{0 \leq t \leq T,\, \vec{y} \in \mathbb{R}^k} \frac{|\mu_i(t, \vec{y})| + |\sigma_{i,j}(t, \vec{y})|}{1 + |\vec{y}|} < K_{58}$, for some $1 \leq K_{58} < \infty$.


As is well known, conditions $\mathbf{G}_8$ and $\mathbf{G}_9$ guarantee the existence of a unique solution of the stochastic differential equation (4.126), which is a continuous Markov process $\vec{Y}_0(t)$ adapted to the filtration $\mathcal{F}_t = \sigma[\vec{Y}_0(0), \vec{W}(s), 0 \leq s \leq t]$, $t \in [0, T]$.

Our goal is to get upper bounds for exponential moments of the supremum of the log-price process $\vec{Y}_0(t)$. In order to do this, we use time-skeleton approximations of the process $\vec{Y}_0(t)$ by processes given by stochastic difference equations. We employ results by Gikhman and Skorokhod (1975, 1982).

Let $\Pi_\varepsilon = \langle 0 = t_{\varepsilon,0} < t_{\varepsilon,1} < \cdots < t_{\varepsilon,N_\varepsilon} = T \rangle$ be, for every $\varepsilon \in (0, \varepsilon_0]$, a partition of the interval $[0, T]$ such that (a) $N_\varepsilon \to \infty$ as $\varepsilon \to 0$, (b) $\Delta_\varepsilon = \max_{0 \leq n \leq N_\varepsilon - 1} \Delta t_{\varepsilon,n} \to 0$ as $\varepsilon \to 0$, where $\Delta t_{\varepsilon,n} = t_{\varepsilon,n+1} - t_{\varepsilon,n}$, $n = 0, \ldots, N_\varepsilon - 1$.

Let $\vec{Y}_{\varepsilon,n} = (Y_{\varepsilon,n,1}, \ldots, Y_{\varepsilon,n,k})$, $n = 0, \ldots, N_\varepsilon$ and $\vec{W}_{\varepsilon,n} = (W_{\varepsilon,n,1}, \ldots, W_{\varepsilon,n,k})$, $n = 0, \ldots, N_\varepsilon$ be, for every $\varepsilon \in (0, \varepsilon_0]$, two sequences of random $k$-dimensional vectors, and let $\Delta \vec{Y}_{\varepsilon,n} = \vec{Y}_{\varepsilon,n+1} - \vec{Y}_{\varepsilon,n}$, $n = 0, \ldots, N_\varepsilon - 1$ and $\Delta \vec{W}_{\varepsilon,n} = \vec{W}_{\varepsilon,n+1} - \vec{W}_{\varepsilon,n}$, $n = 0, \ldots, N_\varepsilon - 1$.

We assume that the above two random sequences are connected, for every $\varepsilon \in (0, \varepsilon_0]$, by the following stochastic difference equation,

$$\Delta \vec{Y}_{\varepsilon,n} = \vec{\mu}(t_{\varepsilon,n}, \vec{Y}_{\varepsilon,n}) \Delta t_{\varepsilon,n} + \Sigma(t_{\varepsilon,n}, \vec{Y}_{\varepsilon,n}) \Delta \vec{W}_{\varepsilon,n}, \quad n = 0, 1, \ldots, N_\varepsilon - 1, \qquad (4.127)$$

where: (a) $\vec{Y}_{\varepsilon,0} = \vec{Y}_0(0)$; (b) $\vec{W}_{\varepsilon,n} = \vec{W}(t_{\varepsilon,n})$, $n = 0, \ldots, N_\varepsilon$.

Let us also introduce the step-wise time-skeleton approximation processes,

$$\vec{Y}_\varepsilon(t) = \begin{cases} \vec{Y}_{\varepsilon,n} & \text{for } t \in [t_{\varepsilon,n}, t_{\varepsilon,n+1}),\ n = 0, \ldots, N_\varepsilon - 1, \\ \vec{Y}_{\varepsilon,N_\varepsilon} & \text{for } t = T. \end{cases} \qquad (4.128)$$

Note that $\vec{Y}_\varepsilon(t)$ is, for every $\varepsilon \in (0, \varepsilon_0]$, a Markov process.

The following additional Lipschitz-type condition imposed on the coefficients of the stochastic differential equation (4.126) is required for the corresponding approximation result concerning the time-skeleton approximation processes $\vec{Y}_\varepsilon(t)$:

$\mathbf{G}_{10}$: For any $0 \leq H < \infty$, there exists a non-negative and non-decreasing function $h_H(t)$, $t \geq 0$ such that $h_H(t) \to h_H(0) = 0$ as $0 \leq t \to 0$, and $\max_{1 \leq i,j \leq k} \sup_{|\vec{y}| \leq H} \big( |\mu_i(t, \vec{y}) - \mu_i(s, \vec{y})| + |\sigma_{i,j}(t, \vec{y}) - \sigma_{i,j}(s, \vec{y})| \big) \leq h_H(s - t)$, $0 \leq t \leq s \leq T$.

It is useful to note that conditions $\mathbf{G}_8$–$\mathbf{G}_{10}$ imply that the functions $\vec{\mu}(t, \vec{y})$ and $\Sigma(t, \vec{y})$ are continuous in the argument $(t, \vec{y}) \in [0, T] \times \mathbb{R}^k$. In this case $\vec{Y}_0(t)$ is a diffusion process.

As follows from results given in Gikhman and Skorokhod (1975, 1979, 1982), conditions $\mathbf{G}_8$–$\mathbf{G}_{10}$ imply that the distributions of the approximation processes $\vec{Y}_\varepsilon(\cdot) = \langle \vec{Y}_\varepsilon(s), s \in [0, T] \rangle$ and of the diffusion process $\vec{Y}_0(\cdot) = \langle \vec{Y}_0(s), s \in [0, T] \rangle$, considered as random variables taking values in the Polish space $D_{[0,T]}$ of $k$-dimensional càdlàg functions, weakly converge, i.e.,

$$\mathsf{P}\{\vec{Y}_\varepsilon(\cdot) \in \cdot\} \Rightarrow \mathsf{P}\{\vec{Y}_0(\cdot) \in \cdot\} \quad \text{as } \varepsilon \to 0. \qquad (4.129)$$


Moreover, the conditional distributions of the approximation processes $\vec{Y}_{\varepsilon,t}(\cdot) = \langle \vec{Y}_\varepsilon(s), s \in [t, T] \rangle$ and of the diffusion process $\vec{Y}_{0,t}(\cdot) = \langle \vec{Y}_0(s), s \in [t, T] \rangle$, considered as random variables taking values in the Polish space $D_{[t,T]}$ of $k$-dimensional càdlàg functions ($k$-dimensional random vectors in the case $t = T$), weakly converge, i.e., for every $\vec{y} \in \mathbb{R}^k$, $0 \leq t \leq T$,

$$\mathsf{P}_{\vec{y},t}\{\vec{Y}_{\varepsilon,t}(\cdot) \in \cdot\} \Rightarrow \mathsf{P}_{\vec{y},t}\{\vec{Y}_{0,t}(\cdot) \in \cdot\} \quad \text{as } \varepsilon \to 0. \qquad (4.130)$$
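A minimal numerical sketch of the scheme (4.127)–(4.128) may help fix ideas. The Python fragment below simulates one path of the step-wise time-skeleton approximation $\vec{Y}_\varepsilon(t)$ on a partition of $[0, T]$; the particular drift and volatility functions used here are hypothetical illustrations (chosen bounded, as in condition $\mathbf{G}_{12}$ below), not taken from the text.

```python
import numpy as np

def skeleton_path(mu, sigma, y0, t_grid, rng):
    """One path of the Euler-type scheme (4.127) on the partition t_grid,
    i.e. the values Y_{eps,n} of the step-wise process (4.128) at the grid points.
    mu(t, y) returns an R^k drift vector, sigma(t, y) a k x k volatility matrix."""
    y = np.array(y0, dtype=float)
    path = [y.copy()]
    for n in range(len(t_grid) - 1):
        dt = t_grid[n + 1] - t_grid[n]
        dw = rng.normal(0.0, np.sqrt(dt), size=len(y))   # increments of the Wiener process
        y = y + np.asarray(mu(t_grid[n], y)) * dt + np.asarray(sigma(t_grid[n], y)) @ dw
        path.append(y.copy())
    return np.array(path)

# Hypothetical univariate example with bounded coefficients.
rng = np.random.default_rng(0)
mu = lambda t, y: np.array([-0.5 * np.tanh(y[0])])
sigma = lambda t, y: np.array([[0.3]])
t_grid = np.linspace(0.0, 1.0, 101)
path = skeleton_path(mu, sigma, [0.0], t_grid, rng)
print(path[-1])
```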

Let us also consider more complex martingale-type time-skeleton approximations for the process $\vec{Y}_0(t)$.

Let $\vec{Y}_{\varepsilon,n}' = (Y_{\varepsilon,n,1}', \ldots, Y_{\varepsilon,n,k}')$, $n = 0, \ldots, N_\varepsilon$ and $\vec{W}_{\varepsilon,n}' = (W_{\varepsilon,n,1}', \ldots, W_{\varepsilon,n,k}')$, $n = 0, \ldots, N_\varepsilon$ be two sequences of random $k$-dimensional vectors, and let $\Delta \vec{Y}_{\varepsilon,n}' = \vec{Y}_{\varepsilon,n+1}' - \vec{Y}_{\varepsilon,n}'$, $n = 0, \ldots, N_\varepsilon - 1$ and $\Delta \vec{W}_{\varepsilon,n}' = \vec{W}_{\varepsilon,n+1}' - \vec{W}_{\varepsilon,n}'$, $n = 0, \ldots, N_\varepsilon - 1$.

We assume that the above two random sequences are connected, for every $\varepsilon \in (0, \varepsilon_0]$, by the following stochastic difference equation,

$$\Delta \vec{Y}_{\varepsilon,n}' = \vec{\mu}_\varepsilon(t_{\varepsilon,n}, \vec{Y}_{\varepsilon,n}') \Delta t_{\varepsilon,n} + \Sigma_\varepsilon(t_{\varepsilon,n}, \vec{Y}_{\varepsilon,n}') \Delta \vec{W}_{\varepsilon,n}', \quad n = 0, 1, \ldots, N_\varepsilon - 1, \qquad (4.131)$$

where: (a) $\vec{W}_{\varepsilon,0}' = \vec{0} = (0, \ldots, 0)$; (b) $\vec{W}_{\varepsilon,n}'$, $n = 0, \ldots, N_\varepsilon$ is, for every $\varepsilon \in (0, \varepsilon_0]$, a martingale with respect to the filtration $\mathcal{F}_{\varepsilon,n}' = \sigma[\vec{Y}_{\varepsilon,0}', \ldots, \vec{Y}_{\varepsilon,n}']$, $n = 0, \ldots, N_\varepsilon$, i.e., $\mathsf{E}\{\Delta W_{\varepsilon,n,i}' / \mathcal{F}_{\varepsilon,n}'\} = 0$, $i = 1, \ldots, k$, $n = 0, \ldots, N_\varepsilon - 1$; (c) $\mathsf{E}\{\Delta W_{\varepsilon,n,i}' \Delta W_{\varepsilon,n,j}' / \mathcal{F}_{\varepsilon,n}'\} = \Delta t_{\varepsilon,n} I(i = j)$, $i, j = 1, \ldots, k$, $n = 0, \ldots, N_\varepsilon - 1$; (d) $\vec{\mu}_\varepsilon(t, \vec{y}) = (\mu_{\varepsilon,1}(t, \vec{y}), \ldots, \mu_{\varepsilon,k}(t, \vec{y}))$ is a $k$-dimensional vector function with components which are Borel functions acting from the space $[0, T] \times \mathbb{R}^k$ to the space $\mathbb{R}^1$; (e) $\Sigma_\varepsilon(t, \vec{y}) = \|\sigma_{\varepsilon,i,j}(t, \vec{y})\|$ is a $k \times k$ matrix function with components which are Borel functions acting from the space $[0, T] \times \mathbb{R}^k$ to the space $\mathbb{R}^1$.

Let us also introduce the step-wise time-skeleton approximation processes,

$$\vec{Y}_\varepsilon'(t) = \begin{cases} \vec{Y}_{\varepsilon,n}' & \text{for } t \in [t_{\varepsilon,n}, t_{\varepsilon,n+1}),\ n = 0, \ldots, N_\varepsilon - 1, \\ \vec{Y}_{\varepsilon,N_\varepsilon}' & \text{for } t = T. \end{cases} \qquad (4.132)$$

We impose on the coefficients of the above stochastic difference equation the following natural condition, analogous to condition $\mathbf{G}_9$:

$\mathbf{G}_{11}$: $\lim_{\varepsilon \to 0} \max_{1 \leq i,j \leq k} \sup_{0 \leq t \leq T,\, \vec{y} \in \mathbb{R}^k} \frac{|\mu_{\varepsilon,i}(t, \vec{y})| + |\sigma_{\varepsilon,i,j}(t, \vec{y})|}{1 + |\vec{y}|} < K_{59}$, for some $1 \leq K_{59} < \infty$.

We also impose the following convergence condition on the coefficients of the above stochastic difference and differential equations:

$\mathbf{O}_{13}$: $\max_{1 \leq i,j \leq k} \sup_{0 \leq t \leq T,\, |\vec{y}| \leq H} \big( |\mu_{\varepsilon,i}(t, \vec{y}) - \mu_i(t, \vec{y})| + |\sigma_{\varepsilon,i,j}(t, \vec{y}) - \sigma_{i,j}(t, \vec{y})| \big) \to 0$ as $\varepsilon \to 0$, for any $0 \leq H < \infty$.

We also assume that the following variant of a Lindeberg-type condition holds for the above martingale sequence $\vec{W}_{\varepsilon,n}'$:

$\mathbf{O}_{14}$: $\mathsf{E} \sum_{n=0}^{N_\varepsilon - 1} I(|\Delta \vec{W}_{\varepsilon,n}'| > \varkappa) \cdot |\Delta \vec{W}_{\varepsilon,n}'|^2 \to 0$ as $\varepsilon \to 0$, for any $\varkappa > 0$.

Finally, the following condition of weak convergence of the distributions of the initial random vectors $\vec{Y}_{\varepsilon,0}'$ should be assumed:

$\mathbf{K}_{13}$: $\mathsf{P}\{\vec{Y}_{\varepsilon,0}' \in \cdot\} \Rightarrow \mathsf{P}\{\vec{Y}_0(0) \in \cdot\}$ as $\varepsilon \to 0$.

The following general approximation result obtained in Gikhman and Skorokhod (1982) states that conditions $\mathbf{G}_8$–$\mathbf{G}_{11}$, $\mathbf{O}_{13}$–$\mathbf{O}_{14}$ and $\mathbf{K}_{13}$ imply that the distributions of the approximation processes $\vec{Y}_\varepsilon'(\cdot) = \langle \vec{Y}_\varepsilon'(t), t \in [0, T] \rangle$ and of the diffusion process $\vec{Y}_0(\cdot) = \langle \vec{Y}_0(t), t \in [0, T] \rangle$, considered as random variables taking values in the Polish space $D_{[0,T]}$ of $k$-dimensional càdlàg functions, weakly converge, i.e.

$$\mathsf{P}\{\vec{Y}_\varepsilon'(\cdot) \in \cdot\} \Rightarrow \mathsf{P}\{\vec{Y}_0(\cdot) \in \cdot\} \quad \text{as } \varepsilon \to 0. \qquad (4.133)$$

It is also useful to point out the important case where $\vec{Y}_0(0) \equiv \vec{y}_0$ and $\vec{Y}_{\varepsilon,0}' \equiv \vec{y}_\varepsilon$ are $k$-dimensional non-random vectors. In this case condition $\mathbf{K}_{13}$ takes the following form:

$\mathbf{K}_{14}$: $\vec{y}_\varepsilon \to \vec{y}_0$ as $\varepsilon \to 0$.

Obviously, condition $\mathbf{K}_{14}$ automatically holds if $\vec{y}_\varepsilon \equiv \vec{y}_0$, for $\varepsilon \in [0, \varepsilon_0]$. In this case, the asymptotic relation (4.133) takes the form of the following asymptotic relation, locally uniform with respect to the initial state,

$$\mathsf{P}_{\vec{y}_\varepsilon, 0}\{\vec{Y}_\varepsilon'(\cdot) \in \cdot\} \Rightarrow \mathsf{P}_{\vec{y}_0, 0}\{\vec{Y}_0(\cdot) \in \cdot\} \quad \text{as } \varepsilon \to 0. \qquad (4.134)$$

In what follows, we consider an important particular case of the above model, where the random sequence $\Delta \vec{W}_{\varepsilon,n}'$, $n = 0, \ldots, N_\varepsilon - 1$ has the following specific Markov form,

$$\Delta \vec{W}_{\varepsilon,n}' = \vec{A}_{\varepsilon,n+1}(\vec{Y}_{\varepsilon,n}', U_{\varepsilon,n+1}), \quad n = 0, 1, \ldots, N_\varepsilon - 1, \qquad (4.135)$$

where: (f) $U_{\varepsilon,n+1}$, $n = 0, 1, \ldots, N_\varepsilon - 1$ is a sequence of independent random variables uniformly distributed in the interval $[0, 1]$; (g) the random vector $\vec{Y}_{\varepsilon,0}'$ and the sequence of random variables $U_{\varepsilon,n+1}$, $n = 0, 1, \ldots, N_\varepsilon - 1$ are independent; (h) $\vec{A}_{\varepsilon,n+1}(\vec{y}, u) = (A_{\varepsilon,n+1,1}(\vec{y}, u), \ldots, A_{\varepsilon,n+1,k}(\vec{y}, u))$ is, for every $n = 0, \ldots, N_\varepsilon - 1$, a $k$-dimensional vector function with components which are Borel functions acting from the space $\mathbb{R}^k \times [0, 1]$ to the space $\mathbb{R}^1$.

In this case, for every $\varepsilon \in (0, \varepsilon_0]$, the random sequence $\vec{Y}_{\varepsilon,n}'$ is a Markov chain and $\vec{Y}_\varepsilon'(t)$ is a Markov process.

In this case, one can apply the above asymptotic relation (4.134) to the conditional distributions of the approximation processes $\vec{Y}_{\varepsilon,t}'(\cdot) = \langle \vec{Y}_\varepsilon'(s), s \in [t, T] \rangle$ and of the diffusion process $\vec{Y}_{0,t}(\cdot) = \langle \vec{Y}_0(s), s \in [t, T] \rangle$, considered as random variables taking values in the Polish space $D_{[t,T]}$ of $k$-dimensional càdlàg functions ($k$-dimensional random vectors in the case $t = T$), and to write down the


following relation of weak convergence, for any $\vec{y}_\varepsilon \to \vec{y}_0 \in \mathbb{R}^k$, $0 \leq t \leq T$,

$$\mathsf{P}_{\vec{y}_\varepsilon, t}\{\vec{Y}_{\varepsilon,t}'(\cdot) \in \cdot\} \Rightarrow \mathsf{P}_{\vec{y}_0, t}\{\vec{Y}_{0,t}(\cdot) \in \cdot\} \quad \text{as } \varepsilon \to 0. \qquad (4.136)$$

It is useful to note that the asymptotic relations (4.129) and (4.130) are particular cases of the asymptotic relations (4.133), (4.134) and (4.136). Indeed, in this case it is readily seen that the model martingale and Markov assumptions formulated in (4.131) and (4.135), as well as the additional conditions $\mathbf{G}_{11}$, $\mathbf{O}_{13}$–$\mathbf{O}_{14}$, hold for the processes defined by relations (4.126)–(4.128) and satisfying conditions $\mathbf{G}_8$–$\mathbf{G}_{10}$.

Condition $\mathbf{K}_{13}$ also holds if the distribution $\mathsf{P}\{\vec{Y}_{\varepsilon,0} \in A\} \equiv \mathsf{P}\{\vec{Y}_0(0) \in A\}$, for every $\varepsilon \in (0, \varepsilon_0]$.

However, one can replace $\vec{Y}_{\varepsilon,0} = \vec{Y}_0(0)$ by any other random vectors $\vec{Y}_{\varepsilon,0}$ independent of the Wiener process $\vec{W}(t)$, $t \in [0, T]$, but with distributions $\mathsf{P}\{\vec{Y}_{\varepsilon,0} \in A\}$ depending on the parameter $\varepsilon$ and satisfying condition $\mathbf{K}_{13}$. In this case, the asymptotic relation (4.129) can be rewritten in a form analogous to (4.134).

As far as the asymptotic relation (4.130) is concerned, it can also be rewritten in the more general form analogous to (4.136), that is, for any $\vec{y}_\varepsilon \to \vec{y}_0$ as $\varepsilon \to 0$,

$$\mathsf{P}_{\vec{y}_\varepsilon, t}\{\vec{Y}_{\varepsilon,t}(\cdot) \in \cdot\} \Rightarrow \mathsf{P}_{\vec{y}_0, t}\{\vec{Y}_{0,t}(\cdot) \in \cdot\} \quad \text{as } \varepsilon \to 0. \qquad (4.137)$$
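A toy illustration of the Markov form (4.135) may be useful: the sketch below (an assumed example, not the book's construction) produces univariate increments $\Delta W_{\varepsilon,n}'$ from uniform random variables through a symmetric three-point law and checks numerically the martingale conditions (b) and (c).

```python
import numpy as np

def martingale_increment(dt, u, p=1.0 / 3.0):
    """Hypothetical univariate instance of the Markov form (4.135): the increment
    Delta W'_{eps,n} = A(y, u) is built from a uniform variable u via a symmetric
    three-point law taking values -d, +d, 0 with probabilities p, p, 1 - 2p.
    The choice d = sqrt(dt / (2 p)) gives E[dW] = 0 and E[dW^2] = dt, i.e. the
    martingale conditions (b) and (c)."""
    d = np.sqrt(dt / (2.0 * p))
    if u < p:
        return -d
    if u < 2.0 * p:
        return d
    return 0.0

rng = np.random.default_rng(7)
dt = 0.01
incs = np.array([martingale_increment(dt, u) for u in rng.uniform(size=100_000)])
print(incs.mean(), incs.var())   # approximately 0 and dt
```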

4.4.2 Upper bounds for option rewards for diffusion log-price processes with bounded characteristics and their time-skeleton approximations

Let us again consider the diffusion log-price process $\vec{Y}_0(t)$ defined by the stochastic differential equation (4.126). We assume that conditions $\mathbf{G}_8$ and $\mathbf{G}_{10}$ hold but replace condition $\mathbf{G}_9$ by the following stronger condition:

$\mathbf{G}_{12}$: $\max_{1 \leq i,j \leq k} \sup_{0 \leq t \leq T,\, \vec{y} \in \mathbb{R}^k} \big( |\mu_i(t, \vec{y})| + |\sigma_{i,j}(t, \vec{y})| \big) < K_{60}$, for some $1 \leq K_{60} < \infty$.

Lemma 4.4.1. Let the processes $\vec{Y}_0(t)$ and $\vec{Y}_\varepsilon(t)$ be given, respectively, by relations (4.126) and (4.128). Then, conditions $\mathbf{G}_8$, $\mathbf{G}_{10}$, and $\mathbf{G}_{12}$ imply that condition $\mathbf{C}_3$ holds for the above processes.

Proof. Relation (4.127) and the conditions of the lemma imply that, for every $\varepsilon \in (0, \varepsilon_0]$ and $0 \leq n \leq n + m \leq n_\varepsilon$, $\vec{y} \in \mathbb{R}^k$, $i = 1, \ldots, k$,

$$\mathsf{E}_{\vec{y},n}(Y_{\varepsilon,n+m,i} - Y_{\varepsilon,n,i}) = \sum_{r=n}^{n+m-1} \mathsf{E}_{\vec{y},n}\, \mu_i(t_{\varepsilon,r}, \vec{Y}_\varepsilon(t_{\varepsilon,r})) \Delta t_{\varepsilon,r}, \qquad (4.138)$$

and, thus,


$$|\mathsf{E}_{\vec{y},n}(Y_{\varepsilon,n+m,i} - Y_{\varepsilon,n,i})| \leq \sum_{r=n}^{n+m-1} \mathsf{E}_{\vec{y},n} |\mu_i(t_{\varepsilon,r}, \vec{Y}_\varepsilon(t_{\varepsilon,r}))| \Delta t_{\varepsilon,r} \leq K_{60} \sum_{r=n}^{n+m-1} \Delta t_{\varepsilon,r} = K_{60}(t_{\varepsilon,n+m} - t_{\varepsilon,n}) \leq K_{60} T. \qquad (4.139)$$

Using relations (4.127) and (4.139) and conditions of the lemma, we get that for every ε ∈ (0, ε0 ] and 0 ≤ n ≤ n + m ≤ nε , ~ y ∈ Rk , i = 1, . . . , k, E~y,n (Yε,n+m,i − Yε,n,i )2 = E~y,n (Yε,n+m−1,i − Yε,n,i )2 ~ε,n+m−1 )∆tε,n+m−1 + 2E~y,n (Yε,n+m−1,i − Yε,n,i )µi (tε,n+m−1 , Y  2 2 ~ε,n+m−1 )E{Wε,n+m−1,i /Fε,n+m−1 } + E~y,n σi,i (tε,n+m−1 , Y = E~y,n (Yε,n+m−1,i − Yε,n,i )2 ~ε,n+m−1 )∆tε,n+m−1 + 2E~y,n (Yε,n+m−1,i − Yε,n,i )µi (tε,n+m−1 , Y 2 ~ε,n+m−1 )∆tε,n+m−1 + E~y,n σi,i (tε,n+m−1 , Y

≤ E~y,n (Yε,n+m−1,i − Yε,n,i )2 2 + 2K60 (tε,n+m−1 − tε,n )K60 ∆tε,n+m−1 + K60 ∆tε,n+m−1 2 ≤ E~y,n (Yε,n+m−1,i − Yε,n,i )2 + K60 (2T + 1)∆tε,n+m−1 2 ≤ · · · ≤ K60 (2T + 1)(tε,m − tε,n ).

(4.140)

Let us denote by $n_{\varepsilon,t}$ the index $n$ such that $t_{\varepsilon,n} \leq t < t_{\varepsilon,n+1}$, for $t \in [0, T]$. Note that, by the definition, $t_{\varepsilon,n_{\varepsilon,t+u}} - t_{\varepsilon,n_{\varepsilon,t}} \leq u + \Delta_\varepsilon$, for every $0 \leq t \leq t+u \leq T$.

Relation (4.140) implies, under the conditions of the lemma, the following relation, for every $\varepsilon \in (0, \varepsilon_0]$ and $0 \leq t \leq t+u \leq t+c \leq T$, $\vec{y} \in \mathbb{R}^k$, $i = 1, \ldots, k$,

$$\mathsf{E}_{\vec{y},t}(Y_{\varepsilon,i}(t+u) - Y_{\varepsilon,i}(t))^2 \leq K_{60}^2(2T+1)(t_{\varepsilon,n_{\varepsilon,t+u}} - t_{\varepsilon,n_{\varepsilon,t}}) \leq K_{60}^2(2T+1)(c + \Delta_\varepsilon), \qquad (4.141)$$

and, thus, for every $\varepsilon \in (0, \varepsilon_0]$ and $0 \leq t \leq t+u \leq t+c \leq T$, $\vec{y} \in \mathbb{R}^k$, $i = 1, \ldots, k$,

$$\mathsf{P}_{\vec{y},t}\{|Y_{\varepsilon,i}(t+u) - Y_{\varepsilon,i}(t)| \geq h\} \leq \frac{1}{h^2}\, \mathsf{E}_{\vec{y},t}(Y_{\varepsilon,i}(t+u) - Y_{\varepsilon,i}(t))^2 \leq \frac{1}{h^2}\, K_{60}^2(2T+1)(c + \Delta_\varepsilon). \qquad (4.142)$$

Relations (4.130) and (4.142) imply, taking into account the continuity of the limiting process $\vec{Y}_0(t)$ and of the functionals $f_{t,t+u}(\vec{y}(\cdot)) = |y_i(t+u) - y_i(t)|$ at $k$-dimensional continuous functions $\vec{y}(\cdot) = \langle \vec{y}(t), t \in [0, T] \rangle$, that, for every $0 \leq t \leq t+u \leq t+c \leq T$, $\vec{y} \in \mathbb{R}^k$, $i = 1, \ldots, k$ and every $h > 0$ which is a point of continuity of the distribution function of the corresponding limiting random variable,

$$\lim_{\varepsilon \to 0} \mathsf{P}_{\vec{y},t}\{|Y_{\varepsilon,i}(t+u) - Y_{\varepsilon,i}(t)| \geq h\} = \mathsf{P}_{\vec{y},t}\{|Y_{0,i}(t+u) - Y_{0,i}(t)| \geq h\}. \qquad (4.143)$$

Relations (4.142) and (4.143) imply that, for every $0 \leq t \leq t+u \leq t+c \leq T$, $\vec{y} \in \mathbb{R}^k$, $i = 1, \ldots, k$ and $h > 0$,

$$\mathsf{P}_{\vec{y},t}\{|Y_{0,i}(t+u) - Y_{0,i}(t)| \geq h\} \leq \frac{1}{h^2}\, K_{60}^2(2T+1)c. \qquad (4.144)$$

Relations (4.142) and (4.144) imply, under the conditions of the lemma, the following relation, for every $\varepsilon \in [0, \varepsilon_0]$ and $i = 1, \ldots, k$,

$$\Delta_J(Y_{\varepsilon,i}(\cdot), h, c, T) \leq \frac{1}{h^2}\, K_{60}^2(2T+1)(c + \Delta_\varepsilon). \qquad (4.145)$$

Finally, relations (4.142) and (4.144) imply that, for every $h > 0$ and $i = 1, \ldots, k$,

$$\lim_{c \to 0} \lim_{\varepsilon \to 0} \Delta_J(Y_{\varepsilon,i}(\cdot), h, c, T) \leq \lim_{c \to 0} \lim_{\varepsilon \to 0} \frac{1}{h^2}\, K_{60}^2(2T+1)(c + \Delta_\varepsilon) = 0. \qquad (4.146)$$

The proof is complete. □

In the above model, which has an index component, the functional $\Xi_\alpha(Y_{\varepsilon,i}(\cdot), T)$ takes the following form, for $\varepsilon \in [0, \varepsilon_0]$,

$$\Xi_\alpha(Y_{\varepsilon,i}(\cdot), T) = \sup_{0 \leq t \leq t+u \leq T}\ \sup_{\vec{y} \in \mathbb{R}^k} \mathsf{E}_{\vec{y},t}\, e^{\alpha(Y_{\varepsilon,i}(t+u) - Y_{\varepsilon,i}(t))}. \qquad (4.147)$$

~0 (t) and Y ~ε (t) are given, respectively, by reLemma 4.4.2. Let the processes Y lations (4.126) and (4.128). Then, conditions G8 , G10 , and G12 imply that condition E2 [¯ α] holds for the above processes, for any vector parameter α ~ = (α1 , . . . , αk ) with non-negative components. Proof. Relations (4.131) – (4.128) and conditions of the lemma imply that, for every ε ∈ (0, ε0 ] and 0 ≤ n ≤ n + m ≤ nε , ~ y ∈ Rk and αi ∈ R1 , i = 1, . . . , k, E~y,n exp{αi (Yε,n+m,i − Yε,n,i )}  = E~y,n exp{αi (Yε,n+m−1,i − Yε,n,i ) ~ε,n+m−1 )∆tε,n+m−1 } × exp{αi µi (tε,n+m−1 , Y

~ε,n+m−1 )Wε,n+m−1,i /Fε,n+m−1 } × E{exp{αi σi,i (tε,n+m−1 , Y


 = E~y,n exp{αi (Yε,n+m−1,i − Yε,n,i ) ~ε,n+m−1 )∆tε,n+m−1 } × exp{αi µi (tε,n+m−1 , Y 1 2 ~ε,n+m−1 )∆tε,n+m−1 } × exp{ αi2 σi,i (tε,n+m−1 , Y 2 ≤ E~y,n {exp{αi (Yε,n+m−1,i − Yε,n,i ) 1 2 × exp{|αi |K60 ∆tε,n+m−1 + αi2 K60 ∆tε,n+m−1 } 2 1 2 ≤ · · · ≤ exp{(|αi |K60 + αi2 K60 )(tε,m − tε,n )} 2 1 2 ≤ exp{(|αi |K60 + αi2 K60 )T }. (4.148) 2 Relation (4.148) implies that, under conditions of the lemma, the following relation, for every ε ∈ (0, ε0 ] and 0 ≤ t ≤ t + u ≤ t + c ≤ T, ~ y ∈ Rk , α i ∈ R1 , i = 1, . . . , k, E~y,t exp{αi (Yε,i (t + u) − Yε,i (t))} 1 2 )(tε,nε,t+u − tε,nε,t )} ≤ exp{(|αi |K60 + αi2 K60 2 1 2 )T }. (4.149) ≤ exp{(|αi |K60 + αi2 K60 2 Relations (4.130) and (4.149) imply, taking into account continuity of the ~0 (t) and the functionals ft,t+u,±αi (~ limiting process Y y (·)) = e±αi (yi (t+u)−yi (t)) at k-dimensional continuous functions ~ y (·) = < ~ y (t), t ∈ [0, T ] >, that, for every 0 ≤ t ≤ t + u ≤ t + c ≤ T, ~ y ∈ Rk , αi ∈ R1 , i = 1, . . . , k and h ≥ 0, which are points of continuity for the distribution functions of the corresponding limiting random variables, lim P~y,t {exp{αi (Yε,i (t + u) − Yε,i (t)) ≤ h}

ε→0

= P~y,t {exp{αi (Y0,i (t + u) − Y0,i (t)) ≤ h}.

(4.150)

Relation (4.148) implies that, for every for every ε ∈ (0, ε0 ] and 0 ≤ t ≤ t+u ≤ t + c ≤ T, ~ y ∈ Rk , αi ∈ R1 , i = 1, . . . , k and any κ > 1, E~y,t (exp{αi (Yε,i (t + u) − Yε,i (t))})κ = E~y,t exp{καi (Yε,i (t + u) − Yε,i (t))} 1 2 ≤ exp{(|καi |K60 + κ2 αi2 K60 )T }. (4.151) 2 Relations (4.150) and (4.151) imply that, for every 0 ≤ t ≤ t + u ≤ t + c ≤ T, ~ y ∈ Rk , i = 1, . . . , k and αi ≥ 0, i = 1, . . . , k, lim E~y,t {exp{αi (Yε,i (t + u) − Yε,i (t))}

0 and the diffusion process Y0,t (·) = < Y0 (s), s ∈ [t, T ] >, considered as random variables taking values in the Polish space D[t,T ] of real-valued càdlàg functions (random variables in the case t = T ), for any yε → y0 ∈ R1 , 0 ≤ t ≤ T , i.e., 00 00 ~0,t Pyε ,t {Yε,t (·) ∈ ·} ⇒ Py0 ,t {Y (·) ∈ ·} as ε → 0. (4.198) In particular, relation (4.198) holds if yε ≡ y0 , for any y0 ∈ R1 . Condition C3 takes in this case the following form: C13 : limc→0 limε→0 ∆J (Yε00 (·), h, c, T ) = 0, for h > 0. ~ε00 (t) are given by relations (4.180) and Lemma 4.4.15. Let the processes Y (4.183). Then, conditions G15 , G17 , G18 , G19 and (4.188) and (4.190) imply that condition C13 holds for the above processes. Proof. The first part of the proof, connected with getting upper bounds for ∆J (Yε00 (·), h, c, T ) for ε > 0, repeats the corresponding part in the proof of Lemma 4.4.8. This proof involves only relations for conditional moments of the first and 0 (tε,n ) and Yε00 (tε,n ). In particular, second order analogous for random variables Yε,i analogs of relations (4.160) – (4.167) can be written down with the only replacement of constant K62 by constant K66 . Also analogue of relation (4.168) takes ~00 (t). Note place, since process Y000 (t) is just an univariate variant of the process Y 00 that due to this relation, weak convergence of processes Yε (t) is not required.  Lemma 4.4.16. Let the processes Yε00 (t) are given by relations (4.180) and (4.183). Then, conditions G15 , G17 , G18 , G19 and (4.188) and (4.190) imply that condition E2 [α] holds for the above processes, for any α ≥ 0. Proof. Relations (4.180) and (4.187) together with conditions of the lemma imply that, for every ε ∈ (0, ε22 ] and 0 ≤ n ≤ n + m ≤ Nε − 1, y ∈ R1 and α ∈ R1 , 00 00 Ey,n exp{α(Yε,n+m − Yε,n )}  00 00 = Ey,n exp{α(Yε,n+m−1 − Yε,n )}

00 × E{exp{αBε,n+1 (Yε,n , Uε,n+1 )}/Fε,n+m−1 }  00 00 = Ey,n exp{α(Yε,n+m−1 − Yε,n )}

+



00 pε,n+1,+ (Yε,n+m−1 ) √ −αδ ∆ε 00 e pε,n+1,− (Yε,n+m−1 )

× eαδ

∆ε

 00 + pε,n+1,◦ (Yε,n+m−1 )

 00 00 = Ey,n exp{α(Yε,n+m−1 − Yε,n )} 2 2 p α δ ∆ε + ···) × (1 + αδ ∆ε + 2!

236

4

Upper bounds for options rewards for Markov LPP 2 00 1 σ (tε,n+m−1 , Yε,n+m−1 )∆tε,n+m−1 × ( 2 δ 2 ∆ε 00 (µ(tε,n+m−1 , Yε,n+m−1 )∆tε,n+m−1 )2 + δ 2 ∆ε 00 µ(tε,n+m−1 , Yε,n+m−1 )∆tε,n+m−1 √ + ) δ ∆ε p α 2 δ 2 ∆ε + ···) + (1 − αδ ∆ε + 2! 2 00 1 σ (tε,n+m−1 , Yε,n+m−1 )∆tε,n+m−1 × ( 2 δ 2 ∆ε 00 (µ(tε,n+m−1 , Yε,n+m−1 )∆tε,n+m−1 )2 + δ 2 ∆ε 00 µ(tε,n+m−1 , Yε,n+m−1 )∆tε,n+m−1 √ − ) δ ∆ε 00 )∆tε,n+m−1 σ 2 (tε,n+m−1 , Yε,n+m−1 + (1 − 2 δ ∆ε 00 )∆tε,n+m−1 )2  (µ(tε,n+m−1 , Yε,n+m−1 ) − δ 2 ∆ε  00 00 = Ey,n exp{α(Yε,n+m−1 − Yε,n )}

α 2 δ 2 ∆ε α4 δ 4 ∆2ε + + ···) 2! 4! 00 σ 2 (tε,n+m−1 , Yε,n+m−1 )∆tε,n+m−1 ×( δ 2 ∆ε 00 (µ(tε,n+m−1 , Yε,n+m−1 )∆tε,n+m−1 )2 ) + 2 δ ∆ε √ p α3 δ 3 ( ∆ε )3 + (αδ ∆ε + + ···) 3! 00 µ(tε,n , Yε,n+m−1 )∆tε,n+m−1  √ × δ ∆ε  00 00 ≤ Ey,n exp{α(Yε,n+m−1 − Yε,n )} 1 2 α2 δ2 ∆ε 2 K66 (1 + T )∆tε,n+m−1 × 1+ α e 2  2 2 + |α|eα δ ∆ε K66 ∆tε,n+m−1  00 00 ≤ Ey,n exp{α(Yε,n+m−1 − Yε,n )}  2 2 1 2 (1 + T ) + |α|K66 )∆tε,n+m−1 × 1 + eα δ T ( α2 K66 2  00 00 ≤ Ey,n exp{α(Yε,n+m−1 − Yε,n )} 2 2 1 2 × exp{eα δ T ( α2 K66 (1 + T ) + |α|K66 )∆tε,n+m−1 } 2 α2 δ 2 T 1 2 2 ≤ · · · ≤ exp{e ( α K66 (1 + T ) + |α|K66 )(tε,n+m − tε,n )} 2 2 2 α δ T 1 2 2 ≤ exp{e ( α K66 (1 + T ) + |α|K66 )T }. (4.199) 2 × 1+(


Relation (4.199) implies that, for $\varepsilon \in (0, \varepsilon_{22}]$ and $\alpha \geq 0$,

$$\Xi_{\pm\alpha}(Y_\varepsilon''(\cdot), T) \leq \exp\Big\{ e^{\alpha^2\delta^2 T}\big( \tfrac{1}{2}\alpha^2 K_{66}(1+T) + |\alpha| K_{66} \big) T \Big\},$$

which, obviously, implies that, for $\alpha \geq 0$,

$$\lim_{\varepsilon \to 0} \Xi_{\pm\alpha}(Y_\varepsilon''(\cdot), T) \leq \exp\Big\{ e^{\alpha^2\delta^2 T}\big( \tfrac{1}{2}\alpha^2 K_{66}(1+T) + |\alpha| K_{66} \big) T \Big\}.$$
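The computation in (4.199) is driven by the one-step structure of the trinomial-tree approximation: from a state $y$ the increment equals $+\delta\sqrt{\Delta_\varepsilon}$, $-\delta\sqrt{\Delta_\varepsilon}$ or $0$ with probabilities $p_{\varepsilon,n+1,+}(y)$, $p_{\varepsilon,n+1,-}(y)$, $p_{\varepsilon,n+1,\circ}(y)$ chosen to match the first two conditional moments of the Euler increment. The Python sketch below reproduces this moment-matching step; the concrete formulas for the probabilities are reconstructed from the terms appearing in (4.199) and should be treated as an assumption rather than the book's exact definition.

```python
import numpy as np

def trinomial_step_probs(mu_val, sigma_val, dt, delta, d_eps):
    """One-step probabilities of a trinomial increment taking values
    +delta*sqrt(d_eps), -delta*sqrt(d_eps), 0, chosen so that the step has mean
    mu_val*dt and second moment sigma_val**2*dt + (mu_val*dt)**2
    (moment matching, reconstructed from the expansion in (4.199))."""
    step = delta * np.sqrt(d_eps)
    m1 = mu_val * dt                      # target mean of the increment
    m2 = sigma_val**2 * dt + m1**2        # target second moment
    p_plus = 0.5 * (m2 / step**2 + m1 / step)
    p_minus = 0.5 * (m2 / step**2 - m1 / step)
    p_zero = 1.0 - p_plus - p_minus
    return p_plus, p_minus, p_zero

# Hypothetical bounded coefficients; d_eps must dominate dt so that p_zero >= 0.
p_plus, p_minus, p_zero = trinomial_step_probs(mu_val=0.1, sigma_val=0.3,
                                               dt=0.001, delta=1.0, d_eps=0.01)
step = 1.0 * np.sqrt(0.01)
print(p_plus, p_minus, p_zero)
print(step * (p_plus - p_minus), step**2 * (p_plus + p_minus))  # ~ mean, 2nd moment
```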

replacing constants $K_{64,H}'$, $H > 0$. Also, conditions $\mathbf{G}_{23}$ and $\mathbf{G}_{24}$ imply that $|\sigma(t, y) - \sigma(t, 0)| \leq K_{70}(1 + |y|)$ and, thus, $|\sigma(t, y)| \leq |\sigma(t, 0)| + K_{70}(1 + |y|)$, for $t \in [0, T]$, $y \in \mathbb{R}^1$. Also, condition $\mathbf{G}_{23}$ implies that the function $\sigma(t, 0)$ is continuous and, thus, bounded on the interval $[0, T]$, i.e., $\sup_{0 \leq t \leq T} \sigma(t, 0) = K_{72} < \infty$. Therefore, conditions $\mathbf{G}_{20}$, $\mathbf{G}_{23}$ and $\mathbf{G}_{24}$ imply that condition $\mathbf{G}_{16}$ holds for the process $Y(t)$, with the constant $K_{73} = K_{69} + K_{70} + K_{71}$ replacing the constant $K_{65}$.

Conditions $\mathbf{G}_{23}$ and $\mathbf{G}_{24}$ imply that $|\sigma(t', y) - \sigma(t'', y)| \leq K_{70}|t' - t''|$, for $t', t'' \in [0, T]$, $y \in \mathbb{R}^1$. Thus conditions $\mathbf{G}_{20}$, $\mathbf{G}_{23}$ and $\mathbf{G}_{24}$ imply that condition $\mathbf{G}_{17}$ holds with the function $h_H''(t) = h_H'(t) + K_{70}|t|$.

Since conditions $\mathbf{G}_{20}$, $\mathbf{G}_{21}$, $\mathbf{G}_{23}$ and $\mathbf{G}_{24}$ imply that conditions $\mathbf{G}_{15}$, $\mathbf{G}_{16}$ and $\mathbf{G}_{17}$ hold, there exists a unique solution of the stochastic differential equation (4.209), which is a continuous Markov process $Y(t)$ adapted to the filtration $\mathcal{F}_t = \sigma[Y(0), W(s), 0 \leq s \leq t]$, $t \in [0, T]$. Moreover, $Y(t)$ is a diffusion process.

The diffusion process $Y(t)$, $t \in [0, T]$ can be transformed to the case of a diffusion process with unit volatility coefficient. Let us assume that the following additional conditions hold:

$\mathbf{G}_{25}$: $\sigma \leq \inf_{(t,y) \in [0,T] \times \mathbb{R}^1} \sigma(t, y) \leq \sup_{(t,y) \in [0,T] \times \mathbb{R}^1} \sigma(t, y) \leq \sigma'$, for some $0 < \sigma < \sigma' < \infty$.

$\mathbf{G}_{26}$: For any $0 < H < \infty$, there exists a constant $1 \leq K_{74,H} < \infty$ such that $\sup_{0 \leq t \leq T,\, |y'|, |y''| \leq H} \frac{|\sigma_y'(t, y') - \sigma_y'(t, y'')|}{|y' - y''|} < K_{74,H}$.

$\mathbf{G}_{27}$: For any $0 \leq H < \infty$, there exists a non-negative and non-decreasing function $h_H''(t)$, $t \geq 0$ such that $h_H''(t) \to h_H''(0) = 0$ as $0 \leq t \to 0$, and $\sup_{|y| \leq H} \big( |\sigma_t'(t', y) - \sigma_t'(t'', y)| + |\sigma_y'(t', y) - \sigma_y'(t'', y)| \big) \leq h_H''(|t' - t''|)$, $0 \leq t', t'' \leq T$.


Let us define the function $f(t, y)$, for $(t, y) \in [0, T] \times \mathbb{R}^1$,

$$f(t, y) = \int_0^y \frac{1}{\sigma(t, x)}\, dx. \qquad (4.210)$$

Here and henceforth, $\int_{y'}^{y''} = -\int_{y''}^{y'}$ if $y' \geq y''$.

Consider the transformed diffusion process,

$$\tilde{Y}(t) = f(t, Y(t)), \quad t \in [0, T]. \qquad (4.211)$$

The Itô formula applied to the process $\tilde{Y}(t) = f(t, Y(t))$ yields the following stochastic differential equation for this process,

$$d\tilde{Y}(t) = \tilde{\mu}(t, \tilde{Y}(t))\, dt + dW(t), \quad t \in [0, T], \qquad (4.212)$$

where $\tilde{\mu}(t, y)$ is given, for $(t, y) \in [0, T] \times \mathbb{R}^1$, by the following formula,

$$\tilde{\mu}(t, y) = -\int_0^y \frac{\sigma_t'(t, x)}{\sigma^2(t, x)}\, dx + \frac{\mu(t, y)}{\sigma(t, y)} - \frac{1}{2}\sigma_y'(t, y). \qquad (4.213)$$
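The transformation (4.210) and the transformed drift (4.213) are easy to evaluate numerically. The Python sketch below does this by simple quadrature for an assumed, purely illustrative volatility function $\sigma(t, y)$ (bounded away from zero, as required by condition $\mathbf{G}_{25}$); the finite-difference derivatives are a crude stand-in for the analytic ones.

```python
import numpy as np

# Hypothetical coefficients of (4.209): a mean-reverting drift and a smooth,
# bounded volatility with 0 < sigma_low <= sigma(t, y) <= sigma_high.
mu = lambda t, y: -1.5 * (y - 0.2)
sigma = lambda t, y: 0.3 + 0.1 / (1.0 + y * y)

def f_transform(t, y, n=2000):
    """f(t, y) = int_0^y dx / sigma(t, x), by the midpoint rule (4.210)."""
    xs = np.linspace(0.0, y, n)
    mid = 0.5 * (xs[:-1] + xs[1:])
    return np.sum((xs[1:] - xs[:-1]) / sigma(t, mid))

def mu_tilde(t, y, h=1e-5, n=2000):
    """Transformed drift (4.213), with sigma_t' and sigma_y' replaced by
    central finite differences."""
    xs = np.linspace(0.0, y, n)
    mid = 0.5 * (xs[:-1] + xs[1:])
    sigma_t = (sigma(t + h, mid) - sigma(t - h, mid)) / (2 * h)
    integral = np.sum((xs[1:] - xs[:-1]) * sigma_t / sigma(t, mid) ** 2)
    sigma_y = (sigma(t, y + h) - sigma(t, y - h)) / (2 * h)
    return -integral + mu(t, y) / sigma(t, y) - 0.5 * sigma_y

print(f_transform(0.5, 1.0), mu_tilde(0.5, 1.0))
```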

Conditions G20 and G23 – G26 imply that, for t ∈ [0, T ], |y 0 |, |y 00 | ≤ H < ∞, 0

Zy

00

00

|

|˜ µ(t, y ) − µ ˜(t, y )|
0. 2 + σ Also, conditions G21 and G23 – G26 imply that, for t ∈ [0, T ], y ∈ R1 ,

Zy |˜ µ(t, y)| ≤

|

σt0 (t, x) µ(t, y) 1 |dx + | | + |σy0 (t, y)| σ 2 (t, x) σ(t, y) 2

0

K70 K8 ≤ 2 |y| + (1 + |y|) + K9 (1 + |y|). σ σ

(4.215)

Relation (4.215) implies that condition G16 holds for process Y˜ (t), with conK69 stant K65 = Kσ70 2 + σ + K70 .

242

4

Upper bounds for options rewards for Markov LPP

Since conditions G15 and G16 hold for the diffusion process Y˜ (t), it is the unique solution of the stochastic differential equation (4.212). Finally, conditions G22 – G19 imply that for any t0 , t00 ∈ [0, T ], |y| ≤ H < ∞, 0

00

Zy |

|˜ µ(t , y) − µ ˜(t , y)| ≤

σt0 (t0 , x) − σt0 (t00 , x) |dx σ 2 (t0 , x)

0

Zy +

|σt0 (t00 , x)||

σ 2 (t00 , x) − σ 2 (t00 , x) |dx σ 2 (t0 , x)σ 2 (t00 , x)

0

µ(t0 , y) − µ(t00 , y) | σ(t0 , y) σ(t0 , y) − σ(t00 , y) | + |µ(t00 , y)|| σ(t0 , y)σ(t00 , y) 1 + |σy0 (t0 , y) − σy0 (t00 , y)| 2 H 2σ 0 K70 H 00 0 ≤ 2 h00H (|t0 − t00 |) + hH (|t − t00 |) σ σ4 1 K69 (1 + H) 00 0 + h0H (|t0 − t00 |) + hH (|t − t00 |) σ σ2 1 + h00H (|t0 − t00 |). 2 +|

(4.216)

Relation (4.216) implies that condition G17 holds for process Y˜ (t), with func2σ 0 K70 H 1 0 H tions h000 + K69 (1+H) + 12 )h00H (t), H > 0. H (t) = σ hH (t) + ( σ 2 + σ4 σ2 Let now assume that the following man-reverse type condition holds: G28 : There exists D > 0 such that µ ˜(t, y) ≤ 0, for y ≥ D, 0 ≤ t ≤ T and µ ˜(t, y) ≥ 0, for y ≤ −D, 0 ≤ t ≤ T . A classical example of mean-reverse diffusion process Y (t), satisfying conditions G20 – G27 and, also, the mean-reverse condition G28 , is where the drift and volatility functional coefficients µ(t, y) = −λ(y−η), σ(y, t) = σ, (t, y) ∈ [0, T ]×R1 , where constants λ ≤ 0, η ∈ R1 , σ > 0. In this case, the function f (t, y) = σy and the transformed diffusion process ˜ Y (t) has the drift and volatility functional coefficients µ ˜(t, y) = σ1 µ(t, y) = − σλ (y − η), σ ˜ (t, y) = 1, (t, y) ∈ [0, T ] × R1 , i.e., Y˜ (t) is also the mean-reverse diffusion process, for which conditions conditions G20 – G27 and the mean-reverse condition G28 hold. Ry 1 The transformation function f (t, y) = 0 σ(t,x) dx has, under conditions G23 R y σt0 (t,x) 0 – G27 , continuous first derivative ft (t, y) = 0 σ2 (t,x) dx in argument t ∈ [0, T ], for 1 every y ∈ R1 and continuous the first fy0 (t, y) = σ(t,y) and the second fy00 (t, y) = σy0 (t,y) σ 2 (t,y)

derivatives in y ∈ R1 , for every t ∈ [0, T ].

4.5

243

Upper bounds for rewards for mean-reverse diffusion LPP

The first derivative fy0 (t, y) satisfies the following inequalities, for every (t, y) ∈ [0, T ] × R1 , 1 1 0 < 0 ≤ fy0 (t, y) ≤ < ∞. (4.217) σ σ These inequalities imply that f (t, y) is continuous, strictly monotonically increasing function in y ∈ R1 , for every t ∈ [0, T ], which satisfy the following inequalities, for every (t, y) ∈ [0, T ] × R1 , −y y y −y I(y ≥ 0) + I(y < 0) ≤ f (t, y) ≤ I(y ≥ 0) + 0 I(y < 0). σ0 σ σ σ

(4.218)

Thus, for every t ∈ [0, T ], there exists the inverse function f = f −1 (t, y), y ∈ R1 for function f (t, y) as function of y ∈ R1 , which, for every t ∈ [0, T ], satisfy identity f −1 (t, f (t, y)) = f (t, f −1 (t, y)) = y, y ∈ R1 , and it is the unique solution of the equation, Zf 1 dx = y. (4.219) σ(t, x) 0 −1

It is readily seen that f (t, y) is continuous function in argument (t, y) ∈ [0, T ] × R1 , which have continuous derivatives in t ∈ [0, T ] and R1 in is also a strongly increasing function in y ∈ R1 , for every t ∈ [0, T ]. This function has, under conditions G23 – G27 , has continuous first derivative 0 −1 1 f y (t, y) = f 0 (t,f −1 (t,y)) in argument y ∈ R1 , for every t ∈ [0, T ]. y

This derivative satisfy the following inequalities, for every (t, y) ∈ [0, T ] × R1 , −1

0 < σ ≤ f 0 y (t, y) ≤ σ 0 < ∞.

(4.220)

These inequalities imply that fy−1 (t, y) is continuous, strictly monotonically increasing function in y ∈ R1 , for every t ∈ [0, T ], which satisfy the following inequalities, for every (t, y) ∈ [0, T ] × R1 , σyI(y ≥ 0) − σ 0 yI(y < 0) ≤ f −1 (t, y) ≤ σ 0 yI(y ≥ 0) − σyI(y < 0).

(4.221)

4.5.2 Upper bounds for exponential moments for supremums for univariate mean-reverse diffusion log-price processes Let us also assume that the following condition holds for some β ≥ 0: ˜

D19 [β]: Eeβ|Y (0)| < K75 , for some 1 < K75 < ∞. The following lemma takes place. Lemma 4.5.1. Let Y˜ (t) be a process given by the stochastic differential equation (4.212). Let also conditions G20 , G21 , G28 and D19 [β] hold for processes

244

4

Upper bounds for options rewards for Markov LPP

Y˜ (t). Then, there exists a constant 0 ≤ M67 = M67 (β) < ∞ such that the following inequality holds, E exp{β sup |Y˜ (s)|} ≤ M67 . (4.222) 0≤s≤T

Proof. Let us assume, for the moment, that the initial state Y˜ (0) = y0 is a constant. It is convenient to continue the drift functional coefficient µ ˜(t, y) = µ ˜(T, y) for t > T, y ∈ R1 and to consider the diffusion process Y˜ (t) as the diffusion process defined on interval [0, ∞), with the initial state y0 , the drift coefficient µ ˜(t, y) and the volatility coefficient 1. Let define also a constant D0 = D+d, where d > 0 and assume for the moment that |y0 | < D0 . Let us define the sequence of Markov moments τ0 = 0, τ1 = inf(t ≥ τ0 : Y (t) = 0 D ), τ2 = inf(t ≥ τ1 : Y (t) = D), τ3 = inf(t ≥ τ2 : Y (t) = D0 ), . . .. It is readily seen that, under conditions of Lemma 4.5.1, these moments are finite with probability 1 random variables. Let us also define the random variable νT = min(n ≥ 0 : τ2n ≤ T ). Let us show that νT is a finite with probability 1 random variable. Let us define a functional drift coefficient µ ˜0 (t, y) = µ ˜0 (y), which do not depend on argument t ∈ [0, ∞) and is given by the following formula, µ ˜0 (y) =



−K7000 (1 + y) −K7000 (1 + D)

for y ≥ D, for y ≤ D.

(4.223)

According relation (4.223), µ ˜0 (y) is a non-positive and non-increasing function, which obviously satisfy conditions G20 and G21 . Let us consider, for (u, y) ∈ [0, ∞) × R1 , diffusion processes Y˜y (t), t ∈ [u, ∞) and Y˜y0 (t), t ∈ [u, ∞) which is based on the Winner process W (t)−W (u), t ∈ [u, ∞) and which are solutions of the stochastic integral equations, respectively, Y˜y (t) = Rt Rt 0 0 y+ uµ ˜(t, Y˜y (s))ds + W (t) − W (u), t ∈ [u, ∞) and Y˜y0 (t) = y + u µ ˜ (Y˜y (s))ds + W (t) − W (u), t ∈ [u, ∞). The above processes are unique solutions for the corresponding stochastic integral equations since the drift functional coefficients µ ˜(t, y) and µ ˜0 (y) satisfy conditions G20 and G21 . The defining relation (4.223) implies that µ ˜0 (y) is a non-positive and nonincreasing function and that, for (t, y) ∈ [0, ∞) × R1 , µ ˜0 (y) ≤ µ ˜(t, y).

(4.224)

This implies that the processes Y˜ (t) and Y˜ 0 (t) are connected by the following relation, for every (u, y) ∈ [0, ∞) × R1 , P{Y˜y0 (t) ≤ Y˜y (t), t ∈ [u, ∞)} = 1.

(4.225)

4.5

Upper bounds for rewards for mean-reverse diffusion LPP

245

To see this, let us compare stochastic approximations for the processes Y˜y (t) given by relation Y˜y,0 (t) = y, t ∈ [u, ∞) and recurrent relations, for n = 0, 1, . . ., Y˜y,n+1 (t) = y +

Zt

µ ˜(t, Y˜y,n (s))ds + W (t) − W (u), t ∈ [u, ∞),

(4.226)

u 0 and Y˜y0 (t) given by relation Y˜y,0 (t) = y, t ∈ [u, ∞) and recurrent relations, for n = 0, 1, . . .,

0 Y˜y,n+1 (t) = y +

Zt

0 µ ˜0 (Y˜y,n (s))ds + W (t) − W (u), t ∈ [u, ∞).

(4.227)

u 0 (t) = Y˜y,0 (t) = y, t ∈ [u, ∞). Let us make By the above defining formulas, Y˜y,0 0 the induction assumption that Y˜y,n (t) ≤ Y˜y,n (t), t ∈ [u, ∞). Then, using that function µ ˜0 (y) is non-decreasing and relation (4.224), we get that, for t ∈ [u, ∞),

0 Y˜y,n+1 (t) = y +

Zt

0 µ ˜0 (Y˜y,n (s))ds + W (t) − W (u)

u

Zt ≤y+

µ ˜0 (Y˜y,n (s))ds + W (t) − W (u)

u

Zt ≤y+

µ ˜(s, Y˜y,n (s))ds + W (t) − W (u)

u

= Y˜y,n+1 (t).

(4.228)

As is known E(Y˜y (t) − Y˜y,n (t))2 → 0 as n → ∞, for every t ∈ [u, ∞) and 0 E(Y˜y0 (t) − Y˜y,n (t))2 → 0 as n → ∞, for every t ∈ [u, ∞). These relations and inequalities (4.228) imply that P{Y˜u0 (t) ≤ Y˜y (t)} = 1, for every t ∈ [u, ∞). Thus, relation (4.225) holds, since Y˜y (t) and Y˜y0 (t) are continuous processes. Note that τn , n = 0, 1, . . . are Markov moments for the process Y˜ (t). Let us introduce indicator random variables Ih,n = I(τ2n − τ2n−1 > h), n = 1, 2, . . . , h > 0. The process Y˜ 0 (t) is a homogeneous in time diffusion process. It is also a continuous process. Thus, there exists some h > 0 such that, for any u ∈ [0, ∞), P{

inf

u≤t≤u+h

Y˜D0 0 (t) > D} = P{ inf Y˜D0 0 (t) > D} 0≤t≤h

= ph > 0.

(4.229)

246

4

Upper bounds for options rewards for Markov LPP

Due to relation (4.225), the following relation takes place, for any en = 0, 1, n = 1, 2, . . . , m, m ≥ 1, P{Ih,n = en , n = 1, . . . , m − 1, Ih,m = 1} Z∞ = P{Ih,n = en , n = 1, . . . , m − 1, 0

τ2m−1 ∈ du}P{

inf

u≤t≤u+h

Y˜D0 (t) > D}

Z∞ ≥

P{Ih,n = en , n = 1, . . . , m − 1, 0

τ2m−1 ∈ du}P{

inf

u≤t≤u+h

Y˜D0 0 (t) > D}

Z∞ P{Ih,n = en , n = 1, . . . , m − 1, τ2m−1 ∈ du}ph

= 0

= P{Ih,n = en , n = 1, . . . , m − 1}ph .

(4.230)

Let denote An,m = {Ih,r = 0, r = n, . . . , m}, 1 ≤ n ≤ m < ∞. Relation (4.230) implies, in an obvious way, that P (An,m ) ≤ (1 − ph )m−n+1 , for any 1 ≤ n ≤ m < ∞. Pm Let also B(n, m, l) = { r=n Ir,h ≤ l}, 1 ≤ n ≤ m < ∞, l = 1, 2, . . .. Then, P (B1,(l+1)n,l ) ≤ P (∪lr=0 Arn+1,(r+1)n ) ≤ (l + 1)(1 − ph )n → 0 as n → ∞, for every l = 1, 2, . . .. Let l = [T /h]. It remain to note that P{νT ≥ (l + 1)n} = P{τ2(l+1)n ≤ T } (l+1)n

≤ P{

X

(τ2r − τ2r−1 ) ≤ T }

r=1

≤ P (B1,(l+1)n,l ) ≤ (l + 1)(1 − ph )n → 0 as n → ∞.

(4.231)

Obviously, Y (t) ≤ D0 for t ∈ [τ2n , τ2n+1 ), for n = 0, 1, . . .. Also, µ ˜(t, y) ≥ 0 for (t, y) ∈ [0, ∞) × [D, ∞) and, therefore, for n = 0, 1, . . ., 0

Y˜ (t) = D +

Zt

µ ˜(s, Y˜ (s))ds + W (t) − W (τ2n+1 )

τ2n+1 0

≥ D + W (t) − W (τ2n+1 ), t ∈ [τ2n+1 , τ2n+2 ), Let us introduce the process,  0 00 ˜ Y (s) = W (t) − W (τ2n+1 )

if s ∈ [τ2n , τ2n+1 ), n = 0, 1, . . . , if s ∈ [τ2n+1 , τ2n+2 ), n = 0, 1, . . . .

(4.232)

(4.233)

4.5

Upper bounds for rewards for mean-reverse diffusion LPP

247

The above remarks imply that Y˜ (t) ≤ D0 + Y˜ 00 (t), t ∈ [0, ∞).

(4.234)

Let also νT0 = min(n ≥ 0 : τ2n+1 ≤ T ). It is also a finite with probability 1 random variable, since νT0 ≤ νT + 1. Therefore, relation (4.234) implies that, with probability 1, sup Y˜ (t) ≤ D0 + sup Y˜ 00 (t) 0≤t≤T

0≤t≤T 0

≤ D + sup W (t) − min 0 W (τ2n+1 ) 0≤n≤νT

0≤t≤T

0

≤ D + sup W (t) − inf W (t) 0≤t≤T

0≤t≤T 0

= D + sup |W (t)|.

(4.235)

0≤t≤T

Note also, that the requirement |y0 | < D0 can be satisfied by choosing, for example, d = |y0 | + 1 and, thus, D0 = D + |y0 | + 1. The above remarks imply that, with probability 1, sup Y˜ (t) ≤ |y0 | + D + 1 + sup |W (t)|. 0≤t≤T

(4.236)

0≤t≤T

As known, and also as follows from Lemma 4.4.8, there exists, for every β ≥ 0, a constant 1 ≤ M68 = M68 (β, T ) < ∞ such that E exp{β sup |W (t)|} ≤ M68 .

(4.237)

0≤t≤T

Finally, we can get the following inequality, for every β ≥ 0, E exp{β sup Y˜ (t)} ≤ e|y0 | eD+1 M68 .

(4.238)

0≤t≤T

In the case, where the initial state Y (0) is a random variable satisfying condition D19 [β], one can get by integration of expression on the left and right hand sides in relation (4.238), the following inequality, E exp{β sup Y˜ (t)} ≤ K75 eD+1 M68 .

(4.239)

0≤t≤T

In analogous way, we can get the the following inequality, E exp{β sup −Y˜ (t)} = E exp{−β inf Y˜ (t)} 0≤t≤T

0≤t≤T D+1

≤ K75 e

M68 .

(4.240)

Since |Y˜ (t)| = max(Y˜ (t), −Y˜ (t)), relations (4.239) and (4.240) imply the final inequality,

248

4

Upper bounds for options rewards for Markov LPP

E exp{β sup |Y˜ (t)|} 0≤t≤T

= E exp{β sup max(Y˜ (t), −Y˜ (t))} 0≤t≤T

= E exp{β max( sup Y˜ (t), sup −Y˜ (t))} 0≤t≤T

0≤t≤T

= E max(exp{β sup Y˜ (t)}, exp{β sup −Y˜ (t)}) 0≤t≤T

0≤t≤T

≤ E exp{β sup Y˜ (t)} + E exp{β sup −Y˜ (t)} 0≤t≤T D+1

≤ 2K75 e

0≤t≤T

M68 .

(4.241)

The proof is complete.  Remark 4.5.1. Constant M67 is given by formula M67 = 2K75 eD+1 M68 (β, T ).

(4.242)

It is useful to note that M67 is completely determined by the threshold menreverse parameter D, parameter β and constant K75 , which is determined by the initial distribution. Also, the following simple estimate takes place, for β ≥ 0, E exp{β sup |W (t)|} ≤ E exp{β( sup W (t) ∨ sup −W (t))} 0≤t≤T

0≤t≤T

0≤t≤T

≤ E exp{β sup W (t)} + E exp{β sup −W (t)} 0≤t≤T β|W (T )|Ê

= 2e

0≤t≤T βW (T )

≤ 4e

= 4e

β2 T 2

.

(4.243)

and, thus, one can take, M68 (β, T ) = 4e

β2 T 2

.

(4.244)

Remark 4.5.2. Lemma 4.5.1 is valid for any mean-reverse diffusion process Y˜ (t) given by the stochastic differential equation (4.212) and satisfying conditions of this lemma. One should not assume that process Y˜ (t) is result of transformation of some initial process diffusion process Y (t). Let us return back to the model, where the processes Y (t) and Y˜ (t) are connected by relations Y˜ (t) = f (t, Y (t)), t ∈ [0, T ] and Y (t) = f −1 (t, Y˜ (t)), t ∈ [0, T ]. This let one easily to get inequalities analogous to (4.222) and (4.249) for process Y (t). In this case, it is natural to replace condition D19 [β] by analogous condition formulated in terms of initial state for the process Y (t): D20 [β]: Eeβ|Y (0)| < K76 , for some 1 < K76 < ∞.

4.5

Upper bounds for rewards for mean-reverse diffusion LPP

249

Lemma 4.5.2. Let Y (t) be a process given by the stochastic differential equation (4.209). Let also conditions G20 – G21 , G23 – G26 , G28 and D20 [β] hold. Then, there exists a constant 0 ≤ M69 = M69 (β) < ∞ such that the following inequality holds, σ E exp{β 0 sup |Y (s)|} ≤ M69 . (4.245) σ 0≤s≤T Proof. Condition D20 [β] and inequality (4.218) imply the following inequality, E exp{βσ|Y˜ (0)|} = E exp{β|f (0, Y (0))|} ≤ E exp{βσ

|Y (0))| } = E exp{β|Y (0))|} < K76 . σ

(4.246)

Thus, condition D19 [β 0 ] (with parameter β 0 = βσ and constant K76 replacing constant K75 penetrating the inequality in this condition) holds for process Y˜ (t). Using inequalities (4.218) and (4.221), we get the following inequality, σ sup |Y (s)|} σ 0 0≤s≤T σ = E exp{β 0 sup |f −1 (t, Y˜ (t))|} σ 0≤s≤T σ ≤ E exp{β 0 sup σ 0 |Y˜ (t)|} σ 0≤s≤T

E exp{β

= E exp{βσ sup |Y˜ (t)|} 0≤s≤T

≤ 2K76 e

D+1

M68 (βσ, T ).

(4.247)

The above inequality implies that inequality (4.245) holds,  Remark 4.5.3. Constant M69 (β) is given by formula M69 = 2K76 eD+1 M68 (βσ, T ).

(4.248)

By applying Lemma 4.5.1 to the diffusion log-price process Y˜t (s), s ∈ [0, T − t] with initial state y ∈ R1 and functional drift and volatility coefficients µ ˜(t + s, y) and 1, we get the following lemma. Lemma 4.5.3. Let Y˜ (t) be a process given by the stochastic differential equation (4.212). Let also conditions G20 , G21 and G28 hold for the processes Y˜ (t). Then, for every β ≥ 0, there exists a constant 0 ≤ M70 = M70 (β) < ∞ such that the following inequality holds, for every (t, y) ∈ [0, T ] × R1 , Ey,t exp{β sup |Y˜ (s)|} ≤ M70 eβ|y| .

(4.249)

t≤s≤T

Remark 4.5.4. In the case, where the initial distribution is concentrated in point y, condition D19 [β] holds for any β ≥ 0, with constant K75 = eβ|y| . Thus, constant M70 is given by formula M70 = 2M68 (β, T )eD+1 . It is useful to note

250

4

Upper bounds for options rewards for Markov LPP

that M70 is completely determined by the threshold men-reverse parameter D and parameter β. Remark 4.5.5. Lemma 4.5.3 is valid for any mean-reverse diffusion process Y˜ (t) given by the stochastic differential equation (4.212) and satisfying conditions of this lemma. One should not assume that process Y˜ (t) is result of transformation of some initial process diffusion process Y (t). By applying Lemma 4.5.2 to the diffusion log-price process Yt (s), s ∈ [0, T − t] with initial state y ∈ R1 and functional drift and volatility coefficients µ(t + s, y) and σ(t + s), we get the following lemma. Lemma 4.5.4. Let Y (t) be a process given by the stochastic differential equation (4.209). Let also conditions G20 – G21 , G23 – G26 , G28 hold. Then, for every β ≥ 0, there exists a constant 0 ≤ M71 = M71 (β) < ∞ such that the following inequality holds, for every (t, y) ∈ [0, T ] × R1 , Ey,t exp{β

σ sup |Y (s)|} ≤ M71 eβ|y| . σ 0 t≤s≤T

(4.250)

Proof. Using inequalities (4.218) and (4.221), and relations y˜ = f (0, y) and |˜ y | = |f (0, y)| ≤ |y| σ , we get the following inequality, for every (t, y) ∈ [0, T ] × R1 ,

Ey,t exp{β

σ sup |Y (s)|} σ 0 t≤s≤T

≤ Ey˜,t exp{βσ sup |Y˜ (t)|} 0≤s≤T D+1

≤ 2e

M68 (βσ, T )eβσ|˜y| ≤ 2eD+1 M68 (βσ, T )eβ|y| .

(4.251)

Remark 4.5.6. In the case, where the initial distribution is concentrated in point y, condition D20 [β] holds for any β ≥ 0, with constant K76 = eβ|y| . Thus, constant M71 is given by formula M71 = 2eD+1 M68 (βσ, T ). Remark 4.5.7. Relation (4.251) shows (taking into account that argument y˜ = f (0, y) runs over R1 when y runs over R1 ) that under conditions of Lemma 6.5.4, the following inequality holds for the transformed diffusion process Y˜ (t), for t ∈ [0, T ], y ∈ R1 , Ey,t exp{βσ sup |Y˜ (t)|} ≤ 2eD+1 M68 (βσ, T )eβσ|y| .

(4.252)

0≤s≤T

4.5.3 Upper bounds for option rewards for univariate mean-reverse diffusion log-price processes Recall that the processes Y (t) and Y˜ (t) are connected by relations Y˜ (t) = f (t, Y (t)), t ∈ [0, T ] and Y (t) = f −1 (t, Y˜ (t)), t ∈ [0, T ].

4.5

251

Upper bounds for rewards for mean-reverse diffusion LPP

Let g(t, ey ), (t, y) ∈ [0, T ] × R1 be a real-valued Borel pay-off function. Let Φ = supτ0 ∈Mmax,0,T Eg(τ0 , eY (τ0 ) ) be the corresponding optimal expected reward and φt (y) = supτt ∈Mmax,t,T Ey,t g(τt , eY (τt ) ) be, the reward function for the log-price process Y (t). ˜ ˜ = sup g (τ0 , eY (τ0 ) ) be the corresponding optimal exLet, also, Φ τ0 ∈Mmax,0,T E˜ ˜ pected reward and φ˜t (y) = supτt ∈Mmax,t,T Ey,t g˜(τt , eY (τt ) ) be, the reward function for the log-price process Y˜ (t). Since f (t, y) is continuous, strictly monotonically increasing function in y ∈ R1 , for every t ∈ [0, T ], processes Y (t) and Y˜ (t) = f (t, Y (t)) generate the same filtrations Ft = σ[Y (s), s ≤ t] = σ[Y˜ (s), s ≤ t], t ∈ [0, T ] and, thus, these processes generate the same classes of Markov stopping times Mmax,t,T , for t ∈ [0, T ]. Using the above facts, we can write down the following representation for optimal expected reward, Φ=

Eg(τ0 , eY (τ0 ) )

sup τ0 ∈Mmax,0,T

=

Eg(τ0 , ef

sup

−1

(τ0 ,f (τ0 ,Y (τ0 )))

)

τ0 ∈Mmax,0,T

=

˜ ˜ E˜ g (τ0 , eY (τ0 )) ) = Φ,

sup

(4.253)

τ0 ∈Mmax,0,T

where g˜(t, ey ) is the new transformed pay-off function defined by the following relation, (t, y) ∈ [0, T ] × R1 , g(t, ef

−1

(t,y)

) = g(t, ef

−1

(t, ln ey )

) = g˜(t, ey ).

(4.254)

Relation analogous to relation (4.253) can be also written down for reward functions, for (t, y) ∈ [0, T ] × R1 and y˜ = f (0, y), φt (y) =

sup

Ey,t g(τt , eY (τt ) )

τt ∈Mmax,t,T

=

sup

Ey,t g(τt , ef

−1

(τt ,f (τt ,Y (τt )))

)

τt ∈Mmax,t,T

=

sup

˜ y ). Ey˜,t g˜(τt , eY (τt ) ) = φ˜t (˜

(4.255)

τt ∈Mmax,t,T

Let assume that a pay-off function g(t, ey ) does not depend on parameter ε. In this case condition B11 [γ] should be replaced by the following condition, assumed to hold for some γ ≥ 0: B12 [γ]: sup0≤t≤T,y∈R1

|g(t,ey )| 1+L24 eγ|y|

< L23 , for some 0 < L23 < ∞, 0 ≤ L24 < ∞.

The following two lemmas are direct corollaries of Lemmas 4.4.10 and 4.4.11. Lemma 4.5.5. Let Y˜ (t) be a process given by the stochastic differential equation (4.212). Let conditions G20 – G21 , G23 – G26 , and G28 hold and also

252

4

Upper bounds for options rewards for Markov LPP

condition B12 [γ] and D19 [β] hold for some 0 ≤ γ ≤ β < ∞. Then, there exist a constant 0 ≤ M72 = M72 (γ, β) < ∞ such that the following inequality holds, ˜

E sup |g(s, eY (s) )| ≤ M72 .

(4.256)

0≤s≤T

Proof. Using condition B12 [γ] and inequality (4.222), we get the following inequality, ˜

E sup |g(s, eY (s)) |} 0≤s≤T ˜

≤ L23 + L23 L24 E sup eγ|Y (t))| 0≤s≤T

= L23 + L23 L24 E exp{γ sup |Y˜ (t))|} 0≤s≤T γ

≤ L23 + L23 L24 (E exp{β sup |Y˜ (t))|}) β 0≤s≤T γ

≤ M72 = L23 + L23 L24 (2K75 M68 (β, T )eD+1 ) β ,

(4.257)

which proves the lemma.  Remark 4.5.8. The constant M72 is given by formula, γ

M72 = L23 + L23 L24 (2K75 eD+1 M68 (β, T )) β .

(4.258)

Lemma 4.5.6. Let Y (t) be a process given by the stochastic differential equation (4.209). Let conditions G20 – G21 , G23 – G26 , and G28 hold, and also condition B12 [γ] and D20 [β] hold for some 0 ≤ γ ≤ β σσ0 < ∞. Then, there exist a constant 0 ≤ M73 = M73 (γ, β) < ∞ such that the following inequality holds, E sup |g(s, eY (s) )| ≤ M73 .

(4.259)

0≤s≤T

Proof. Using inequalities (4.221) and (4.245), we get the following inequality, E sup |g(s, eY (s) )| 0≤s≤T

≤ L23 + L23 L24 E sup eγ|Y (t)| 0≤s≤T

= L23 + L23 L24 E exp{γ sup |Y (t)|} 0≤s≤T

≤ L23 + L23 L24 (E exp{β

γÊσ 0 σ sup |Y (t))|}) βσ 0 σ 0≤s≤T

≤ M73 = L23 + L23 L24 (2K76 M68 (βσ, T )eD+1 )

γÊσ 0 βσ

.

(4.260)

An alternative proof of the lemma can be obtained by combyning transformations of the log-price process Y (t) given in relations (4.211) and the pay-off function g(t, ey ) given in relation (4.254).

4.5

Upper bounds for rewards for mean-reverse diffusion LPP

253

Using condition B12 [γ] and relation (4.254) we get the following inequality, which holds for any (t, y) ∈ [0, T ] × R1 , |˜ g (t, ey )| = |g(t, ef

−1

(t,y)

)|

≤ L23 + L23 L24 eγ|f

−1

(t,y)|

0

≤ L23 + L23 L24 eγσ |y| .

(4.261)

Thus, condition B12 [γ 0 ] (with parameter γ 0 = γσ 0 and constants L23 and L24 ) holds for the transformed pay-off function g˜(t, ey ). Also, as was shown in the proof of Lemma 4.5.2, condition D20 [β] implies that condition D19 [β 0 ] (with parameter β 0 = βσ and constant K76 replacing constant K75 penetrating this condition) holds for the process Y˜ (t). 0 Note that inequality γ ≤ βσ σ 0 is equivalent to inequality γσ ≤ βσ. Using inequality (4.256) given in Lemma 4.5.3 (for the case where cinditions B12 [γ] and D20 [β] are replaced, respectively, by conditions B12 [γ 0 ] (with constants L23 and L24 penetrating inequality in this condition) and D19 [β 0 ]) (with constant K76 replacing constant K75 penetrating inequality in this condition) we get the following inequality, E sup |g(s, eY (s) )| 0≤s≤T

= E sup |g(s, ef

−1

(t,f (t,Y (s)))

)|}

0≤s≤T ˜

0 = E sup |˜ g (s, eY (s) )| ≤ M73 ,

(4.262)

0≤s≤T 0 is given by the same formula as constant M73 , but with rewhere constant M73 placement parameters γ and β, respectively, by parameters γ 0 and β 0 , and constant K75 by constant K76 . 0 the expression given in relation This actually gives for for the constant M73 (4.247) for the constant M73 . 

Remark 4.5.9. The constant M73 is given by formula , γσ 0

M73 = L23 + L23 L24 (2K76 eD+1 M68 (βσ, T )) βσ .

(4.263)

Lemma 4.5.7. Let Y˜ (t) be a process given by the stochastic differential equation (4.212). Let also conditions G20 – G21 , G23 – G26 , G28 hold and also condition B12 [γ] holds for some γ ≥ 0. Then, for every β ≥ γ, there exist constants M74 , M75 = M75 (γ, β) < ∞ such that the following inequality holds, for every (t, y) ∈ [0, T ] × R1 , ˜

Ey,t sup |g(s, eY (s) )| ≤ M74 + M75 eγ|y| .

(4.264)

t≤s≤T

Proof. By applying Lemma 4.5.5 to the diffusion log-price process Y˜t (s), s ∈ [0, T − t], with initial state y ∈ R1 and functional drift and volatility coefficients

254

4

Upper bounds for options rewards for Markov LPP

µ ˜(t + s, y) and 1, and the pay-off function gt (s, ey ) = g(t + s, ey ), s ∈ [0, T − t] and taking into account Remark 4.5.3 we we get the following inequality, γ

˜

Ey,t sup |g(s, eY (s) )| ≤ L23 + L23 L24 (2eβ|y| eD+1 M68 (β, T )) β t≤s≤T

= M74 + M75 eγ|y| .

(4.265)

Remark 4.5.10. Constants M74 and M75 are given by the formulas, γ

M74 = L23 , M75 = L23 L24 (2M68 (β, T )eD+1 ) β .

(4.266)

Lemma 4.5.11. Let Y (t) be a process given by the stochastic differential equation (4.209). Let also conditions G20 – G21 , G23 – G26 , G28 hold and also condi0 tion B12 [γ] holds for some γ ≥ 0. Then, for every β ≥ γ σσ , there exist constants M76 , M77 = M77 (γ, β) < ∞ such that the following inequality holds, for every (t, y) ∈ [0, T ] × R1 , σ0

Ey,t sup |g(s, eY (s) )| ≤ M76 + M77 eγÊ σ |y| .

(4.267)

t≤s≤T

Proof. By applying Lemma 4.5.6 to the diffusion log-price process Yt (s), s ∈ [0, T − t], with initial state y ∈ R1 and functional drift and volatility coefficients µ(t+s, y) and σ(t+s), and the pay-off function gt (s, ey ) = g(t+s, ey ), s ∈ [0, T −t], and taking into account Remark 4.5.4 we we get the following inequality, Ey,t sup |g(s, eY (s) )| ≤ L23 + L23 L24 (2eβ|y| M68 (βσ, T )eD+1 )

γÊσ 0 βσ

t≤s≤T

= M76 + M77 eγ

Êσ 0 σ

|y|

.

(4.268)

Remark 4.5.12. Constants M76 and M77 are given by the formulas, M76 = L23 , M77 = L23 L24 (2eβ|y| M68 (βσ, T )eD+1 )

γÊσ 0 βσ

.

(4.269)

The following theorems are direct corollaries of Lemmas 4.5.5 and 4.5.7. Theorem 4.5.1. Let Y˜ (t) be a process given by the stochastic differential equation (4.212). Let conditions G20 – G21 , G23 – G24 , and G28 hold and also condition B12 [γ] and D19 [β] hold for some 0 ≤ γ ≤ β < ∞. Then, the following inequality holds, ˜ ≤ M72 . Φ (4.270) Theorem 4.5.2. Let Y (t) be a process given by the stochastic differential equation (4.209). Let conditions G20 – G21 , G23 – G24 , and G28 hold, and also condition B12 [γ] and D20 [β] hold for some 0 ≤ γ ≤ β σσ0 < ∞. Then, the following inequality holds, Φ ≤ M73 . (4.271)

4.5

Upper bounds for rewards for mean-reverse diffusion LPP

255

The following theorems are direct corollaries of Lemmas 4.5.6 and 4.5.8. Theorem 4.5.3. Let Y˜ (t) be a process given by the stochastic differential equation (4.212). Let also conditions G20 – G21 , G23 – G24 , and G28 hold and also condition B12 [γ] holds for some γ ≥ 0. Then, for every β ≥ γ, the following inequality holds, for every (t, y) ∈ [0, T ] × R1 , φ˜t (y) ≤ M74 + M75 eγ|y| .

(4.272)

Theorem 4.5.4. Let Y (t) be a process given by the stochastic differential equation (4.209). Let also conditions G20 – G21 , G23 – G24 , and G28 hold and 0 also condition B12 [γ] holds for some γ ≥ 0. Then, for every β ≥ γ σσ , the following inequality holds, for every (t, y) ∈ [0, T ] × R1 , φt (y) ≤ M76 + M77 eγ

Êσ 0 σ

|y|

.

(4.273)

5 Time-skeleton reward approximations for Markov LPP In Chapter 5, we presents our main results about time-skeleton reward approximations for continuous time multivariate modulated Markov log-price processes. We give asymptotically uniform explicit upper bounds for distance between optimal expected rewards and reward functions for American-type options with general pay-off functions and multivariate modulated Markov log-price processes and optimal expected rewards and reward functions for American-type options with general pay-off functions and embedded discrete time modulated Markov logprice processes based on skeleton partitions of time interval. These upper bounds play a key role in getting results about convergence of reward approximation algorithms studied in this book. In Section 5.1, we formulate a number variants of asymptotically uniform Lipschitz-type conditions for pay-off functions used in time-skeleton reward approximations. In Section 5.2, we give explicit upper bounds for distance between optimal expected rewards for continuous time multivariate modulated Markov log-price processes and the corresponding embedded discrete time Markov log-price processes. In Section 5.3, we present analogous results for reward functions of Americantype options. In Section 5.4, we give results about time-skeleton reward approximations for multivariate log-price processes with independent increments, In Section 5.5, we present results about time-skeleton reward approximations for diffusion log-price processes. Our main results about time-skeleton approximations are given in Theorems 5.2.1 and 5.3.1, for multivariate modulated Markov log-price processes, and in Theorems 5.4.1–5.4.2, for log-price processes with independent increments. These results generalize in several aspects results obtained in Silvestrov, Jönsson, and Stenberg (2008, 2009), for univariate modulated Markov log-price processes and in Lundgren and Silvestrov (2009, 2011) and Silvetsrov and Lundgren (2011), for multivariate Markov log-price processes. First, we consider multivariate modulated models, i.e. combine together multivariate and modulation aspects together. Second, we consider pay-off functions, which depend also of the index component. Fourth, we improve formulation of the corresponding conditions imposed on pay-off functions, in particular consider various Lipschitz-type conditions instead of stronger conditions imposed on derivatives of pay-off functions and moment compactness condition on log-price processes by giving them in a natural asymptotic form.

5.1 Lipschitz-type conditions for pay-off functions

257

Results about time-skeleton approximations for diffusion log-price processes and their time-skeleton, martingale and trinomial-tree approximations, given Theorems 5.5.1–5.5.6, are new.

5.1 Lipschitz-type conditions for pay-off functions In this section we formulate variants of asymptotically uniform Lipschitz-type conditions for pay-off functions which play an important role for getting asymptotically uniform estimates for time-skeleton reward approximations for multivariate modulated Markov LPP.

5.1.1 Asymptotically uniform Lipschitz-type conditions for pay-off functions expressed in terms of price arguments

Let us first formulate Lipschitz-type conditions for the pay-off functions $g_\varepsilon(t, \vec{s}, x)$ in terms of the price argument $\vec{s} = (s_1, \ldots, s_k) \in \mathbb{R}^+_k$ and the index argument $x \in \mathbb{X}$.

Let vectors $\vec{s} = (s_1, \ldots, s_k), \vec{s}\,' = (s'_1, \ldots, s'_k), \vec{s}\,'' = (s''_1, \ldots, s''_k) \in \mathbb{R}^+_k$ and, also, define vectors $\vec{s}\,'_i = (s_1, \ldots, s_{i-1}, s'_i, s_{i+1}, \ldots, s_k)$, $\vec{s}\,''_i = (s_1, \ldots, s_{i-1}, s''_i, s_{i+1}, \ldots, s_k)$, $i = 1, \ldots, k$.

The following Lipschitz-type condition is assumed to hold for some $k(k+2)$-dimensional vector parameter $\bar{\gamma} = (\gamma_{0,1}, \ldots, \gamma_{0,k}, \ldots, \gamma_{k+1,1}, \ldots, \gamma_{k+1,k})$ with non-negative components:

$\mathbf{B}_{13}[\bar{\gamma}]$: There exists $\varepsilon_{27} = \varepsilon_{27}(\bar{\gamma}) \in (0, \varepsilon_0]$ such that for every $\varepsilon \in [0, \varepsilon_{27}]$:

(a) $\sup_{0 \le t', t'' \le T,\, t' \ne t'',\, \vec{s} \in \mathbb{R}^+_k,\, x \in \mathbb{X}} \frac{|g_\varepsilon(t', \vec{s}, x) - g_\varepsilon(t'', \vec{s}, x)|}{(1 + \sum_{j=1}^k \dot{L}_{26,0,j} s_j^{\gamma_{0,j}})\,|t' - t''|} \le \dot{L}_{25,0}$, for some $0 \le \dot{L}_{25,0}, \dot{L}_{26,0,1}, \ldots, \dot{L}_{26,0,k} < \infty$;

(b) $\sup_{0 \le t \le T,\, \vec{s}\,'_i, \vec{s}\,''_i \in \mathbb{R}^+_k,\, \vec{s}\,'_i \ne \vec{s}\,''_i,\, x \in \mathbb{X}} \frac{|g_\varepsilon(t, \vec{s}\,'_i, x) - g_\varepsilon(t, \vec{s}\,''_i, x)|}{(1 + \sum_{j \ne i} \dot{L}_{26,i,j} s_j^{\gamma_{i,j}} + \dot{L}_{26,i,i}(s'_i \vee s''_i)^{\gamma_{i,i}})\,|s'_i - s''_i|} \le \dot{L}_{25,i}$, $i = 1, \ldots, k$, for some $0 \le \dot{L}_{25,i}, \dot{L}_{26,i,1}, \ldots, \dot{L}_{26,i,k} < \infty$, $i = 1, \ldots, k$;

(c) $\sup_{0 \le t \le T,\, \vec{s} \in \mathbb{R}^+_k,\, x', x'' \in \mathbb{X},\, x' \ne x''} \frac{|g_\varepsilon(t, \vec{s}, x') - g_\varepsilon(t, \vec{s}, x'')|}{(1 + \sum_{j=1}^k \dot{L}_{26,k+1,j} s_j^{\gamma_{k+1,j}})\, d_{\mathbb{X}}(x', x'')} \le \dot{L}_{25,k+1}$, for some $0 \le \dot{L}_{25,k+1}, \dot{L}_{26,k+1,1}, \ldots, \dot{L}_{26,k+1,k} < \infty$.

Note that the constants $\dot{L}_{25,j}, j = 0, \ldots, k+1$ can take the value $0$. Such values correspond to the cases where the pay-off function $g_\varepsilon(t, \vec{s}, x)$ does not depend on the corresponding arguments.

It is also possible to formulate a condition analogous to $\mathbf{B}_{13}[\bar{\gamma}]$, but based on the use of upper limits in the parameter $\varepsilon$:

$\mathbf{B}_{14}[\bar{\gamma}]$: The following asymptotic relations hold:

(a) $\overline{\lim}_{\varepsilon \to 0} \sup_{0 \le t', t'' \le T,\, t' \ne t'',\, \vec{s} \in \mathbb{R}^+_k,\, x \in \mathbb{X}} \frac{|g_\varepsilon(t', \vec{s}, x) - g_\varepsilon(t'', \vec{s}, x)|}{(1 + \sum_{j=1}^k \dot{L}_{28,0,j} s_j^{\gamma_{0,j}})\,|t' - t''|} \le \dot{L}_{27,0}$, for some $0 < \dot{L}_{27,0} < \infty$ and $0 \le \dot{L}_{28,0,1}, \ldots, \dot{L}_{28,0,k} < \infty$;

(b) $\overline{\lim}_{\varepsilon \to 0} \sup_{0 \le t \le T,\, \vec{s}\,'_i, \vec{s}\,''_i \in \mathbb{R}^+_k,\, \vec{s}\,'_i \ne \vec{s}\,''_i,\, x \in \mathbb{X}} \frac{|g_\varepsilon(t, \vec{s}\,'_i, x) - g_\varepsilon(t, \vec{s}\,''_i, x)|}{(1 + \sum_{j \ne i} \dot{L}_{28,i,j} s_j^{\gamma_{i,j}} + \dot{L}_{28,i,i}(s'_i \vee s''_i)^{\gamma_{i,i}})\,|s'_i - s''_i|} \le \dot{L}_{27,i}$, $i = 1, \ldots, k$, for some $0 < \dot{L}_{27,i} < \infty$ and $0 \le \dot{L}_{28,i,1}, \ldots, \dot{L}_{28,i,k} < \infty$, $i = 1, \ldots, k$;

(c) $\overline{\lim}_{\varepsilon \to 0} \sup_{0 \le t \le T,\, \vec{s} \in \mathbb{R}^+_k,\, x', x'' \in \mathbb{X},\, x' \ne x''} \frac{|g_\varepsilon(t, \vec{s}, x') - g_\varepsilon(t, \vec{s}, x'')|}{(1 + \sum_{j=1}^k \dot{L}_{28,k+1,j} s_j^{\gamma_{k+1,j}})\, d_{\mathbb{X}}(x', x'')} \le \dot{L}_{27,k+1}$, for some $0 < \dot{L}_{27,k+1} < \infty$ and $0 \le \dot{L}_{28,k+1,1}, \ldots, \dot{L}_{28,k+1,k} < \infty$.

Condition $\mathbf{B}_{14}[\bar{\gamma}]$ implies that there exists $\varepsilon_{28} = \varepsilon_{28}(\bar{\gamma}) \in (0, \varepsilon_0]$ such that all inequalities penetrating condition $\mathbf{B}_{13}[\bar{\gamma}]$ hold, for every $\varepsilon \in [0, \varepsilon_{28}]$.

The Lipschitz-type condition $\mathbf{B}_{13}[\bar{\gamma}]$ does not imply absolute continuity of the pay-off functions $g_\varepsilon(t, \vec{s}, x)$ in the arguments $t$ and $\vec{s}$. However, in the case where these functions are absolutely continuous, a condition sufficient for condition $\mathbf{B}_{13}[\bar{\gamma}]$ can be formulated in terms of their partial derivatives.

Let us assume that the following condition holds, for some $k(k+2)$-dimensional vector $\bar{\gamma}$ with non-negative components:

$\mathbf{B}_{15}[\bar{\gamma}]$: There exists $\varepsilon_{29} = \varepsilon_{29}(\bar{\gamma}) \in (0, \varepsilon_0]$ such that for every $\varepsilon \in [0, \varepsilon_{29}]$:

(a) the function $g_\varepsilon(t, \vec{s}, x)$ is absolutely continuous in $t$, with respect to the Lebesgue measure on $[0, T]$, for every fixed $(\vec{s}, x) \in \mathbb{R}^+_k \times \mathbb{X}$, and in $\vec{s}$, with respect to the Lebesgue measure on $\mathbb{R}^+_k$, for every fixed $(t, x) \in [0, T] \times \mathbb{X}$;

(b) for every $(\vec{s}, x) \in \mathbb{R}^+_k \times \mathbb{X}$, the absolute value of the partial derivative $\big|\frac{\partial g_\varepsilon(t, \vec{s}, x)}{\partial t}\big| \le (1 + \sum_{j=1}^k \dot{L}_{30,0,j} s_j^{\gamma_{0,j}}) \dot{L}_{29,0}$, for almost all $t \in [0, T]$, with respect to the Lebesgue measure on $[0, T]$, for some $0 \le \dot{L}_{29,0}, \dot{L}_{30,0,j} < \infty$, $j = 1, \ldots, k$;

(c) for every $(t, x) \in [0, T] \times \mathbb{X}$ and $i = 1, \ldots, k$, the absolute value of the partial derivative $\big|\frac{\partial g_\varepsilon(t, \vec{s}, x)}{\partial s_i}\big| \le (1 + \sum_{j=1}^k \dot{L}_{30,i,j} s_j^{\gamma_{i,j}}) \dot{L}_{29,i}$, for almost all $\vec{s} \in \mathbb{R}^+_k$, with respect to the Lebesgue measure on $\mathbb{R}^+_k$, for some $0 \le \dot{L}_{29,i}, \dot{L}_{30,i,j} < \infty$, $j = 1, \ldots, k$;

(d) for every $(t, \vec{s}) \in [0, T] \times \mathbb{R}^+_k$ and $x', x'' \in \mathbb{X}$, the following inequality holds, $|g_\varepsilon(t, \vec{s}, x') - g_\varepsilon(t, \vec{s}, x'')| \le (1 + \sum_{j=1}^k \dot{L}_{30,k+1,j} s_j^{\gamma_{k+1,j}}) \dot{L}_{29,k+1} d_{\mathbb{X}}(x', x'')$, for some $0 \le \dot{L}_{29,k+1}, \dot{L}_{30,k+1,j} < \infty$, $j = 1, \ldots, k$.

Lemma 5.1.1. Condition $\mathbf{B}_{15}[\bar{\gamma}]$ implies condition $\mathbf{B}_{13}[\bar{\gamma}]$ to hold, with the same parameters $\varepsilon_{27} = \varepsilon_{29}$ and $\dot{L}_{25,i} = \dot{L}_{29,i}$, $\dot{L}_{26,i,j} = \dot{L}_{30,i,j}$, $i = 0, \ldots, k+1$, $j = 1, \ldots, k$.

Proof. Let vectors $\vec{s} = (s_1, \ldots, s_k), \vec{s}\,' = (s'_1, \ldots, s'_k), \vec{s}\,'' = (s''_1, \ldots, s''_k) \in \mathbb{R}^+_k$ and, also, define vectors $\vec{s}_i(v) = (s_1, \ldots, s_{i-1}, v, s_{i+1}, \ldots, s_k)$, $v \in \mathbb{R}_+$, $i = 1, \ldots, k$ and $\vec{s}\,'_i = (s_1, \ldots, s_{i-1}, s'_i, s_{i+1}, \ldots, s_k)$, $\vec{s}\,''_i = (s_1, \ldots, s_{i-1}, s''_i, s_{i+1}, \ldots, s_k)$, $i = 1, \ldots, k$.

Let us, also, denote $s^+_i = s'_i \vee s''_i$, $s^-_i = s'_i \wedge s''_i$, $i = 1, \ldots, k$ and $t^+ = t' \vee t''$, $t^- = t' \wedge t''$.

The inequality penetrating condition $\mathbf{B}_{15}[\bar{\gamma}]$ (b) implies that the following inequality holds, for $\varepsilon \in [0, \varepsilon_{29}]$ and $t', t'' \in [0, T]$, $\vec{s} \in \mathbb{R}^+_k$, $x \in \mathbb{X}$,

$$|g_\varepsilon(t', \vec{s}, x) - g_\varepsilon(t'', \vec{s}, x)| \le \int_{t^-}^{t^+} \Big|\frac{\partial g_\varepsilon(t, \vec{s}, x)}{\partial t}\Big|\, dt \le \Big(1 + \sum_{j=1}^k \dot{L}_{30,0,j} s_j^{\gamma_{0,j}}\Big) \dot{L}_{29,0}\, |t' - t''|. \quad (5.1)$$

Inequality (5.1) is equivalent to the inequality penetrating condition $\mathbf{B}_{13}[\bar{\gamma}]$ (a), for the case where $\varepsilon_{27} = \varepsilon_{29}$ and $\dot{L}_{25,0} = \dot{L}_{29,0}$, $\dot{L}_{26,0,j} = \dot{L}_{30,0,j}$, $j = 1, \ldots, k$.

Also, the inequality penetrating condition $\mathbf{B}_{15}[\bar{\gamma}]$ (c) implies that the following inequality holds, for $\varepsilon \in [0, \varepsilon_{29}]$ and $t \in [0, T]$, $\vec{s} \in \mathbb{R}^+_k$, $x \in \mathbb{X}$, $i = 1, \ldots, k$,

$$|g_\varepsilon(t, \vec{s}\,'_i, x) - g_\varepsilon(t, \vec{s}\,''_i, x)| \le \int_{s^-_i}^{s^+_i} \Big|\frac{\partial g_\varepsilon(t, \vec{s}_i(v), x)}{\partial v}\Big|\, dv \le \int_{s^-_i}^{s^+_i} \Big(1 + \sum_{j \ne i} \dot{L}_{30,i,j} s_j^{\gamma_{i,j}} + \dot{L}_{30,i,i} v^{\gamma_{i,i}}\Big) \dot{L}_{29,i}\, dv \le \Big(1 + \sum_{j \ne i} \dot{L}_{30,i,j} s_j^{\gamma_{i,j}} + \dot{L}_{30,i,i}(s'_i \vee s''_i)^{\gamma_{i,i}}\Big) \dot{L}_{29,i}\, |s'_i - s''_i|. \quad (5.2)$$

Inequality (5.2) is equivalent to the inequality penetrating condition $\mathbf{B}_{13}[\bar{\gamma}]$ (b), for the case where $\varepsilon_{27} = \varepsilon_{29}$ and $\dot{L}_{25,i} = \dot{L}_{29,i}$, $\dot{L}_{26,i,j} = \dot{L}_{30,i,j}$, $i = 1, \ldots, k$, $j = 1, \ldots, k$.

Also, the inequality penetrating condition $\mathbf{B}_{15}[\bar{\gamma}]$ (d) is equivalent to the inequality penetrating condition $\mathbf{B}_{13}[\bar{\gamma}]$ (c). As above, $\varepsilon_{27} = \varepsilon_{29}$ and $\dot{L}_{25,k+1} = \dot{L}_{29,k+1}$, $\dot{L}_{26,k+1,j} = \dot{L}_{30,k+1,j}$, $j = 1, \ldots, k$. $\Box$

Let vectors $\vec{s}\,' = (s'_1, \ldots, s'_k), \vec{s}\,'' = (s''_1, \ldots, s''_k) \in \mathbb{R}^+_k$ and, also, vectors $\vec{s}_i = (s'_1, \ldots, s'_i, s''_{i+1}, \ldots, s''_k)$, $i = 1, \ldots, k$. By the definition, $\vec{s}_k = \vec{s}\,'$ and $\vec{s}_0 = \vec{s}\,''$.

Using condition $\mathbf{B}_{13}[\bar{\gamma}]$ we get the following inequality, for $\varepsilon \in [0, \varepsilon_{27}]$ and $(t', \vec{s}\,', x'), (t'', \vec{s}\,'', x'') \in [0, T] \times \mathbb{R}^+_k \times \mathbb{X}$,

$$|g_\varepsilon(t', \vec{s}\,', x') - g_\varepsilon(t'', \vec{s}\,'', x'')| \le |g_\varepsilon(t', \vec{s}\,'', x'') - g_\varepsilon(t'', \vec{s}\,'', x'')| + \sum_{i=1}^k |g_\varepsilon(t', \vec{s}_i, x'') - g_\varepsilon(t', \vec{s}_{i-1}, x'')| + |g_\varepsilon(t', \vec{s}\,', x') - g_\varepsilon(t', \vec{s}\,', x'')|$$
$$\le \Big(1 + \sum_{j=1}^k \dot{L}_{26,0,j}(s^+_j)^{\gamma_{0,j}}\Big) \dot{L}_{25,0}\, |t' - t''| + \sum_{i=1}^k \Big(1 + \sum_{j=1}^k \dot{L}_{26,i,j}(s^+_j)^{\gamma_{i,j}}\Big) \dot{L}_{25,i}\, |s'_i - s''_i| + \Big(1 + \sum_{j=1}^k \dot{L}_{26,k+1,j}(s^+_j)^{\gamma_{k+1,j}}\Big) \dot{L}_{25,k+1}\, d_{\mathbb{X}}(x', x''). \quad (5.3)$$

5.1.2 Asymptotically uniform Lipschitz-type conditions for pay-off functions expressed in terms of log-price arguments

A condition analogous to $\mathbf{B}_{13}[\bar{\gamma}]$ can be re-formulated in terms of the pay-off functions $g_\varepsilon(t, e^{\vec{y}}, x)$, i.e., with the use of the log-price arguments $y_i, i = 1, \ldots, k$.

Let us consider vectors $\vec{y} = (y_1, \ldots, y_k), \vec{y}\,' = (y'_1, \ldots, y'_k), \vec{y}\,'' = (y''_1, \ldots, y''_k) \in \mathbb{R}_k$ and vectors $\vec{y}\,'_i = (y_1, \ldots, y_{i-1}, y'_i, y_{i+1}, \ldots, y_k)$, $\vec{y}\,''_i = (y_1, \ldots, y_{i-1}, y''_i, y_{i+1}, \ldots, y_k)$, $i = 1, \ldots, k$. Let also $y^+_i = y'_i \vee y''_i$, $y^-_i = y'_i \wedge y''_i$, $i = 1, \ldots, k$, and $|y_i|^+ = |y'_i| \vee |y''_i|$, $|y_i|^- = |y'_i| \wedge |y''_i|$, $i = 1, \ldots, k$.

In what follows we use the following well-known inequality, which holds for any $y'_i, y''_i \in \mathbb{R}_1$ and $\alpha \ge 1$,

$$\Big|\frac{e^{\alpha y'_i} - e^{\alpha y''_i}}{\alpha}\Big| \le e^{\alpha y^+_i}\Big|\frac{1 - e^{-\alpha(y^+_i - y^-_i)}}{\alpha}\Big| \le e^{\alpha|y_i|^+}\,|y'_i - y''_i|. \quad (5.4)$$

Using inequality (5.4) and the inequality penetrating condition $\mathbf{B}_{13}[\bar{\gamma}]$ (b), we get the following inequalities, which hold for $\varepsilon \in [0, \varepsilon_{27}]$ and $(t, \vec{y}\,', x), (t, \vec{y}\,'', x) \in [0, T] \times \mathbb{R}_k \times \mathbb{X}$, $i = 1, \ldots, k$,

$$|g_\varepsilon(t, e^{\vec{y}\,'_i}, x) - g_\varepsilon(t, e^{\vec{y}\,''_i}, x)| \le \Big(1 + \sum_{j \ne i} \dot{L}_{26,i,j} e^{\gamma_{i,j} y_j} + \dot{L}_{26,i,i} e^{\gamma_{i,i} y^+_i}\Big) \dot{L}_{25,i}\, |e^{y'_i} - e^{y''_i}| \le \Big(1 + \sum_{j \ne i} \dot{L}_{26,i,j} e^{\gamma_{i,j}|y_j| + |y_i|^+} + \dot{L}_{26,i,i} e^{(\gamma_{i,i}+1)|y_i|^+}\Big) \dot{L}_{25,i}\, |y'_i - y''_i|. \quad (5.5)$$

Inequalities (5.5) show how analogues of conditions $\mathbf{B}_{13}[\bar{\gamma}]$–$\mathbf{B}_{15}[\bar{\gamma}]$ formulated in terms of log-price arguments should look.

The following Lipschitz-type condition, assumed to hold for some $k(k+2)$-dimensional vector $\bar{\gamma} = (\gamma_{0,1}, \ldots, \gamma_{0,k}, \ldots, \gamma_{k+1,1}, \ldots, \gamma_{k+1,k})$ with non-negative components, is mainly used in what follows:

$\mathbf{B}_{16}[\bar{\gamma}]$: There exists $\varepsilon_{30} = \varepsilon_{30}(\bar{\gamma}) \in (0, \varepsilon_0]$ such that for every $\varepsilon \in [0, \varepsilon_{30}]$:

(a) $\sup_{0 \le t', t'' \le T,\, t' \ne t'',\, \vec{y} \in \mathbb{R}_k,\, x \in \mathbb{X}} \frac{|g_\varepsilon(t', e^{\vec{y}}, x) - g_\varepsilon(t'', e^{\vec{y}}, x)|}{(1 + \sum_{j=1}^k L_{32,0,j} e^{\gamma_{0,j}|y_j|})\,|t' - t''|} \le L_{31,0}$, for some $0 \le L_{31,0}, L_{32,0,1}, \ldots, L_{32,0,k} < \infty$;

(b) $\sup_{0 \le t \le T,\, \vec{y}\,'_i, \vec{y}\,''_i \in \mathbb{R}_k,\, \vec{y}\,'_i \ne \vec{y}\,''_i,\, x \in \mathbb{X}} \frac{|g_\varepsilon(t, e^{\vec{y}\,'_i}, x) - g_\varepsilon(t, e^{\vec{y}\,''_i}, x)|}{(1 + \sum_{j \ne i} L_{32,i,j} e^{\gamma_{i,j}|y_j| + |y_i|^+} + L_{32,i,i} e^{(\gamma_{i,i}+1)|y_i|^+})\,|y'_i - y''_i|} \le L_{31,i}$, $i = 1, \ldots, k$, for some $0 \le L_{31,i}, L_{32,i,1}, \ldots, L_{32,i,k} < \infty$, $i = 1, \ldots, k$;

(c) $\sup_{0 \le t \le T,\, \vec{y} \in \mathbb{R}_k,\, x', x'' \in \mathbb{X},\, x' \ne x''} \frac{|g_\varepsilon(t, e^{\vec{y}}, x') - g_\varepsilon(t, e^{\vec{y}}, x'')|}{(1 + \sum_{j=1}^k L_{32,k+1,j} e^{\gamma_{k+1,j}|y_j|})\, d_{\mathbb{X}}(x', x'')} \le L_{31,k+1}$, for some $0 \le L_{31,k+1}, L_{32,k+1,1}, \ldots, L_{32,k+1,k} < \infty$.

The following condition is an analogue of condition $\mathbf{B}_{16}[\bar{\gamma}]$ based on the use of upper limits in the parameter $\varepsilon$:

$\mathbf{B}_{17}[\bar{\gamma}]$: The following asymptotic relations hold:

(a) $\overline{\lim}_{\varepsilon \to 0} \sup_{0 \le t', t'' \le T,\, t' \ne t'',\, \vec{y} \in \mathbb{R}_k,\, x \in \mathbb{X}} \frac{|g_\varepsilon(t', e^{\vec{y}}, x) - g_\varepsilon(t'', e^{\vec{y}}, x)|}{(1 + \sum_{j=1}^k L_{34,0,j} e^{\gamma_{0,j}|y_j|})\,|t' - t''|} \le L_{33,0}$, for some $0 \le L_{33,0}, L_{34,0,1}, \ldots, L_{34,0,k} < \infty$;

(b) $\overline{\lim}_{\varepsilon \to 0} \sup_{0 \le t \le T,\, \vec{y}\,'_i, \vec{y}\,''_i \in \mathbb{R}_k,\, \vec{y}\,'_i \ne \vec{y}\,''_i,\, x \in \mathbb{X}} \frac{|g_\varepsilon(t, e^{\vec{y}\,'_i}, x) - g_\varepsilon(t, e^{\vec{y}\,''_i}, x)|}{(1 + \sum_{j \ne i} L_{34,i,j} e^{\gamma_{i,j}|y_j| + |y_i|^+} + L_{34,i,i} e^{(\gamma_{i,i}+1)|y_i|^+})\,|y'_i - y''_i|} \le L_{33,i}$, $i = 1, \ldots, k$, for some $0 \le L_{33,i}, L_{34,i,1}, \ldots, L_{34,i,k} < \infty$, $i = 1, \ldots, k$;

(c) $\overline{\lim}_{\varepsilon \to 0} \sup_{0 \le t \le T,\, \vec{y} \in \mathbb{R}_k,\, x', x'' \in \mathbb{X},\, x' \ne x''} \frac{|g_\varepsilon(t, e^{\vec{y}}, x') - g_\varepsilon(t, e^{\vec{y}}, x'')|}{(1 + \sum_{j=1}^k L_{34,k+1,j} e^{\gamma_{k+1,j}|y_j|})\, d_{\mathbb{X}}(x', x'')} \le L_{33,k+1}$, for some $0 \le L_{33,k+1}, L_{34,k+1,1}, \ldots, L_{34,k+1,k} < \infty$.

Condition $\mathbf{B}_{17}[\bar{\gamma}]$ implies that there exists $\varepsilon_{31} = \varepsilon_{31}(\bar{\gamma}) \in (0, \varepsilon_0]$ such that all inequalities penetrating condition $\mathbf{B}_{16}[\bar{\gamma}]$ hold, for every $\varepsilon \in [0, \varepsilon_{31}]$. The only problem is that we should use positive constants $L_{33,j}, j = 0, \ldots, k$ in condition $\mathbf{B}_{17}[\bar{\gamma}]$. This is not natural in the case where the pay-off function $g_\varepsilon(t, \vec{s}, x)$ does not depend on $\varepsilon$. However, one can use, in this case, any positive constants $L_{33,j}, j = 0, \ldots, k$. In fact, this lets one get the same asymptotically uniform upper bounds for reward functionals, using condition $\mathbf{B}_{17}[\bar{\gamma}]$ instead of condition $\mathbf{B}_{16}[\bar{\gamma}]$.

Condition $\mathbf{B}_{16}[\bar{\gamma}]$ implies that the pay-off function $g_\varepsilon(t, e^{\vec{y}}, x)$ is continuous in the argument $(t, \vec{y}, x) \in [0, T] \times \mathbb{R}_k \times \mathbb{X}$, for every $\varepsilon \in [0, \varepsilon_{30}]$. Condition $\mathbf{B}_{17}[\bar{\gamma}]$ implies that the pay-off function $g_\varepsilon(t, e^{\vec{y}}, x)$ is continuous, for every $\varepsilon \in [0, \varepsilon_{31}]$.

Condition $\mathbf{B}_{16}[\bar{\gamma}]$ does not require absolute continuity of the pay-off functions $g_\varepsilon(t, e^{\vec{y}}, x)$ in the arguments $t$ and $\vec{y}$. In the case where these functions are absolutely continuous, a condition sufficient for condition $\mathbf{B}_{16}[\bar{\gamma}]$ can be formulated in terms of their partial derivatives.

Let us assume that the following Lipschitz-type condition holds, for some $k(k+2)$-dimensional vector $\bar{\gamma}$ with non-negative components:

$\mathbf{B}_{18}[\bar{\gamma}]$: There exists $\varepsilon_{32} = \varepsilon_{32}(\bar{\gamma}) \in (0, \varepsilon_0]$ such that for every $\varepsilon \in [0, \varepsilon_{32}]$:

(a) the function $g_\varepsilon(t, e^{\vec{y}}, x)$ is absolutely continuous in $t$, with respect to the Lebesgue measure on $[0, T]$, for every fixed $(\vec{y}, x) \in \mathbb{R}_k \times \mathbb{X}$, and in $\vec{y}$, with respect to the Lebesgue measure on $\mathbb{R}_k$, for every fixed $(t, x) \in [0, T] \times \mathbb{X}$;

(b) for every $(\vec{y}, x) \in \mathbb{R}_k \times \mathbb{X}$, the absolute value of the partial derivative $\big|\frac{\partial g_\varepsilon(t, e^{\vec{y}}, x)}{\partial t}\big| \le (1 + \sum_{j=1}^k L_{36,0,j} e^{\gamma_{0,j}|y_j|}) L_{35,0}$, for almost all $t \in [0, T]$, with respect to the Lebesgue measure on $[0, T]$, for some $0 \le L_{35,0}, L_{36,0,1}, \ldots, L_{36,0,k} < \infty$;

(c) for every $(t, x) \in [0, T] \times \mathbb{X}$ and $i = 1, \ldots, k$, the absolute value of the partial derivative $\big|\frac{\partial g_\varepsilon(t, e^{\vec{y}}, x)}{\partial y_i}\big| \le (1 + \sum_{j=1}^k L_{36,i,j} e^{\gamma_{i,j}|y_j| + |y_i|}) L_{35,i}$, for almost all $\vec{y} \in \mathbb{R}_k$, with respect to the Lebesgue measure on $\mathbb{R}_k$, for some $0 \le L_{35,i}, L_{36,i,1}, \ldots, L_{36,i,k} < \infty$;

(d) for every $(t, \vec{y}) \in [0, T] \times \mathbb{R}_k$ and $x', x'' \in \mathbb{X}$, the following inequality holds, $|g_\varepsilon(t, e^{\vec{y}}, x') - g_\varepsilon(t, e^{\vec{y}}, x'')| \le (1 + \sum_{j=1}^k L_{36,k+1,j} e^{\gamma_{k+1,j}|y_j|}) L_{35,k+1} d_{\mathbb{X}}(x', x'')$, for some $0 \le L_{35,k+1}, L_{36,k+1,1}, \ldots, L_{36,k+1,k} < \infty$.

Lemma 5.1.2. Condition $\mathbf{B}_{18}[\bar{\gamma}]$ implies condition $\mathbf{B}_{16}[\bar{\gamma}]$ to hold, with parameters $\varepsilon_{30} = \varepsilon_{32}$ and $L_{31,i} = L_{35,i}$, $L_{32,i,j} = L_{36,i,j}$, $j = 1, \ldots, k$, $i = 0, k+1$, and $L_{31,i} = 2L_{35,i}$, $L_{32,i,j} = L_{36,i,j}$, $j \ne i$, $L_{32,i,i} = 1 + L_{36,i,i}$, $i = 1, \ldots, k$.

Proof. Note, first of all, that Lebesgue integration is used below.

Let vectors $\vec{y} = (y_1, \ldots, y_k), \vec{y}\,' = (y'_1, \ldots, y'_k), \vec{y}\,'' = (y''_1, \ldots, y''_k) \in \mathbb{R}_k$ and, also, vectors $\vec{y}_i(u) = (y_1, \ldots, y_{i-1}, u, y_{i+1}, \ldots, y_k)$, $u \in \mathbb{R}_1$, $i = 1, \ldots, k$ and $\vec{y}\,'_i = (y_1, \ldots, y_{i-1}, y'_i, y_{i+1}, \ldots, y_k)$, $\vec{y}\,''_i = (y_1, \ldots, y_{i-1}, y''_i, y_{i+1}, \ldots, y_k)$, $i = 1, \ldots, k$.

Let us, also, denote $y^+_i = y'_i \vee y''_i$, $y^-_i = y'_i \wedge y''_i$, $|y_i|^+ = |y'_i| \vee |y''_i|$, $|y_i|^- = |y'_i| \wedge |y''_i|$, $i = 1, \ldots, k$ and $t^+ = t' \vee t''$, $t^- = t' \wedge t''$.

Let condition $\mathbf{B}_{18}[\bar{\gamma}]$ (b) hold, and let $\varepsilon \in [0, \varepsilon_{32}]$, $t', t'' \in [0, T]$, $\vec{y} \in \mathbb{R}_k$, $x \in \mathbb{X}$. In this case,

$$|g_\varepsilon(t', e^{\vec{y}}, x) - g_\varepsilon(t'', e^{\vec{y}}, x)| \le \int_{t^-}^{t^+} \Big|\frac{\partial g_\varepsilon(t, e^{\vec{y}}, x)}{\partial t}\Big|\, dt \le \Big(1 + \sum_{j=1}^k L_{36,0,j} e^{\gamma_{0,j}|y_j|}\Big) L_{35,0}\, |t' - t''|. \quad (5.6)$$

Inequality (5.6) implies that the inequality penetrating condition $\mathbf{B}_{16}[\bar{\gamma}]$ (a) holds, for $\varepsilon_{30} = \varepsilon_{32}$ and $L_{31,0} = L_{35,0}$, $L_{32,0,j} = L_{36,0,j}$, $j = 1, \ldots, k$.

Let condition $\mathbf{B}_{18}[\bar{\gamma}]$ (c) hold, and let $\varepsilon \in [0, \varepsilon_{32}]$, $t \in [0, T]$, $x \in \mathbb{X}$ and $\vec{y}\,'_i, \vec{y}\,''_i \in \mathbb{R}_k$. In this case,

$$|g_\varepsilon(t, e^{\vec{y}\,'_i}, x) - g_\varepsilon(t, e^{\vec{y}\,''_i}, x)| \le \int_{y^-_i}^{y^+_i} \Big|\frac{\partial g_\varepsilon(t, e^{\vec{y}_i(u)}, x)}{\partial u}\Big|\, du \le \int_{y^-_i}^{y^+_i} \Big(1 + \sum_{j \ne i} L_{36,i,j} e^{\gamma_{i,j}|y_j| + |u|} + L_{36,i,i} e^{(\gamma_{i,i}+1)|u|}\Big) L_{35,i}\, du$$
$$\le \Big(e^{|y_i|^+} + \sum_{j \ne i} L_{36,i,j} e^{\gamma_{i,j}|y_j|} e^{|y_i|^+} + L_{36,i,i} e^{(\gamma_{i,i}+1)|y_i|^+}\Big) L_{35,i}\, |y^+_i - y^-_i| \le \Big(1 + \sum_{j \ne i} L_{36,i,j} e^{\gamma_{i,j}|y_j| + |y_i|^+} + (1 + L_{36,i,i}) e^{(\gamma_{i,i}+1)|y_i|^+}\Big) L_{35,i}\, |y^+_i - y^-_i|. \quad (5.7)$$

Inequality (5.7) implies that the inequality penetrating condition $\mathbf{B}_{16}[\bar{\gamma}]$ (b) holds, for $\varepsilon_{30} = \varepsilon_{32}$ and $L_{31,i} = L_{35,i}$, $L_{32,i,j} = L_{36,i,j}$, $j \ne i$, $L_{32,i,i} = 1 + L_{36,i,i}$, $i = 1, \ldots, k$.

Also, the inequality penetrating condition $\mathbf{B}_{18}[\bar{\gamma}]$ (d) is equivalent to the inequality penetrating condition $\mathbf{B}_{16}[\bar{\gamma}]$ (c), for $\varepsilon_{30} = \varepsilon_{32}$ and $L_{31,k+1} = L_{35,k+1}$, $L_{32,k+1,j} = L_{36,k+1,j}$, $j = 1, \ldots, k$. $\Box$

Let vectors $\vec{y}\,' = (y'_1, \ldots, y'_k), \vec{y}\,'' = (y''_1, \ldots, y''_k) \in \mathbb{R}_k$ and let also vectors $\vec{y}_i = (y'_1, \ldots, y'_i, y''_{i+1}, \ldots, y''_k)$, $i = 1, \ldots, k$. By the definition, $\vec{y}_k = \vec{y}\,'$ and $\vec{y}_0 = \vec{y}\,''$.

If condition $\mathbf{B}_{16}[\bar{\gamma}]$ holds, then the following basic inequality takes place, for $\varepsilon \in [0, \varepsilon_{30}]$ and $(t', e^{\vec{y}\,'}, x'), (t'', e^{\vec{y}\,''}, x'') \in [0, T] \times \mathbb{R}_k \times \mathbb{X}$,

$$|g_\varepsilon(t', e^{\vec{y}\,'}, x') - g_\varepsilon(t'', e^{\vec{y}\,''}, x'')| \le |g_\varepsilon(t', e^{\vec{y}\,''}, x'') - g_\varepsilon(t'', e^{\vec{y}\,''}, x'')| + \sum_{i=1}^k |g_\varepsilon(t', e^{\vec{y}_i}, x'') - g_\varepsilon(t', e^{\vec{y}_{i-1}}, x'')| + |g_\varepsilon(t', e^{\vec{y}\,'}, x') - g_\varepsilon(t', e^{\vec{y}\,'}, x'')|$$
$$\le \Big(1 + \sum_{j=1}^k L_{32,0,j} e^{\gamma_{0,j}|y_j|^+}\Big) L_{31,0}\, |t' - t''| + \sum_{i=1}^k \Big(1 + \sum_{j \ne i} L_{32,i,j} e^{\gamma_{i,j}|y_j|^+ + |y_i|^+} + L_{32,i,i} e^{(\gamma_{i,i}+1)|y_i|^+}\Big) L_{31,i}\, |y'_i - y''_i| + \Big(1 + \sum_{j=1}^k L_{32,k+1,j} e^{\gamma_{k+1,j}|y_j|^+}\Big) L_{31,k+1}\, d_{\mathbb{X}}(x', x''). \quad (5.8)$$
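As a purely numerical illustration of a bound of the type appearing in condition $\mathbf{B}_{16}[\bar{\gamma}]$ (b), the following sketch (not part of the book's formal development; the pay-off, rate $r$ and strike $K$ are hypothetical) checks, over random samples, that the corresponding ratio stays bounded for a standard univariate call pay-off written in log-price form.

```python
import numpy as np

# Hypothetical univariate call pay-off in log-price form:
# g(t, e^y) = exp(-r*t) * max(e^y - K, 0), with assumed r, K.
r, K, T = 0.05, 1.0, 1.0

def g(t, y):
    return np.exp(-r * t) * np.maximum(np.exp(y) - K, 0.0)

# Ratio of a B16(b)-type bound with gamma_{1,1} = 0 and L_{32,1,1} = 1:
# |g(t, y') - g(t, y'')| / ((1 + e^{|y|^+}) |y' - y''|) should stay bounded.
rng = np.random.default_rng(0)
ratios = []
for _ in range(20000):
    t = rng.uniform(0.0, T)
    y1, y2 = rng.uniform(-3.0, 3.0, size=2)
    if y1 == y2:
        continue
    y_abs_plus = max(abs(y1), abs(y2))
    num = abs(g(t, y1) - g(t, y2))
    den = (1.0 + np.exp(y_abs_plus)) * abs(y1 - y2)
    ratios.append(num / den)

print("empirical sup of the ratio:", max(ratios))  # stays below 1 for this pay-off
```

The printed supremum stays below $1$ because $|e^{y'} - e^{y''}| \le e^{|y|^+}|y' - y''|$ and the call pay-off is a contraction of the price argument, which is exactly the mechanism behind inequality (5.5).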

5.1.3 Asymptotically uniform Lipschitz-type conditions and rates of growth for pay-off functions

In what follows, we use the following condition, which is, in fact, condition $\mathbf{B}_{2}[\bar{\gamma}]$ assumed to hold for the case where the $k$-dimensional vector $\bar{\gamma} = (\gamma, \ldots, \gamma)$ has identical components $\gamma \ge 0$:

$\mathbf{B}_{19}[\gamma]$: $\overline{\lim}_{\varepsilon \to 0} \sup_{0 \le t \le T,\, (\vec{y}, x) \in \mathbb{Z}} \frac{|g_\varepsilon(t, e^{\vec{y}}, x)|}{1 + \sum_{i=1}^k L_{38,i} e^{\gamma|y_i|}} < L_{37}$, for some $0 < L_{37} < \infty$, $0 \le L_{38,i} < \infty$, $i = 1, \ldots, k$.

Let us also introduce the following calibration condition:

$\mathbf{B}_{20}$: $\overline{\lim}_{\varepsilon \to 0} \sup_{0 \le t \le T,\, x \in \mathbb{X}} |g_\varepsilon(t, e^{\vec{y}_0}, x)| < L_{39}$, for some $\vec{y}_0 = (y_{0,1}, \ldots, y_{0,k}) \in \mathbb{R}_k$ and $0 < L_{39} < \infty$.

Condition $\mathbf{B}_{20}$ implies that there exists $\varepsilon_{33} \in (0, \varepsilon_0]$ such that the following inequality holds, for $\varepsilon \in [0, \varepsilon_{33}]$,

$$\sup_{0 \le t \le T,\, x \in \mathbb{X}} |g_\varepsilon(t, e^{\vec{y}_0}, x)| < L_{39}. \quad (5.9)$$

Let us denote

$$\gamma_\circ = \max_{1 \le i, j \le k} \gamma_{i,j} + 1 \le \gamma_* = \max_{1 \le j \le k} \gamma_{0,j} \vee \big(\max_{1 \le i, j \le k} \gamma_{i,j} + 1\big) \vee \max_{1 \le j \le k} \gamma_{k+1,j}. \quad (5.10)$$
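As a small computational aid (not from the book), the following sketch evaluates $\gamma_\circ$ and $\gamma_*$ from relation (5.10) for a hypothetical parameter array $(\gamma_{i,j})$ with $k = 2$.

```python
import numpy as np

# Evaluate gamma_circ and gamma_star from relation (5.10) for a hypothetical
# parameter array gamma[i][j], i = 0, ..., k+1, j = 1, ..., k (here k = 2).
gamma = np.array([
    [0.5, 0.5],   # gamma_{0,1}, gamma_{0,2}
    [1.0, 0.5],   # gamma_{1,1}, gamma_{1,2}
    [0.5, 1.0],   # gamma_{2,1}, gamma_{2,2}
    [0.0, 0.0],   # gamma_{k+1,1}, gamma_{k+1,2}
])

gamma_circ = gamma[1:-1].max() + 1                     # max over i,j = 1..k, plus 1
gamma_star = max(gamma[0].max(), gamma_circ, gamma[-1].max())

print(gamma_circ, gamma_star)   # here: 2.0 2.0
```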

Lemma 5.1.3. Let conditions $\mathbf{B}_{16}[\bar{\gamma}]$ and $\mathbf{B}_{20}$ hold. Then, condition $\mathbf{B}_{19}[\gamma]$ holds, for every $\gamma > \gamma_\circ$.

Proof. Obviously, for every $\gamma > \gamma_\circ$ and $i = 1, \ldots, k$,

$$L_i(\gamma) = \sup_{y_i \in \mathbb{R}_1} \frac{|y_i - y_{0,i}|}{e^{(\gamma - \gamma_\circ)|y_i|^+}} < \infty. \quad (5.11)$$

Let $\varepsilon_{34} = \varepsilon_{30}(\bar{\gamma}) \wedge \varepsilon_{33}$, where $\varepsilon_{30} = \varepsilon_{30}(\bar{\gamma})$ is the parameter penetrating condition $\mathbf{B}_{16}[\bar{\gamma}]$ and $\varepsilon_{33}$ is determined by relation (5.9).

Let $\vec{y} = (y_1, \ldots, y_k) \in \mathbb{R}_k$ and let $\vec{y}_0 = (y_{0,1}, \ldots, y_{0,k})$ be the point penetrating condition $\mathbf{B}_{20}$. Denote $|y_j|^+ = |y_j| \vee |y_{0,j}|$, $j = 1, \ldots, k$.

Note that, for every $i, j = 1, \ldots, k$,

$$\gamma_{i,j} + 1 + \gamma - \gamma_\circ = \gamma + \gamma_{i,j} - \max_{1 \le i, j \le k} \gamma_{i,j} \le \gamma. \quad (5.12)$$

Condition $\mathbf{B}_{16}[\bar{\gamma}]$ and inequalities (5.8) and (5.9) imply that, for every $\varepsilon \in [0, \varepsilon_{34}]$ and $(t, \vec{y}, x) \in [0, T] \times \mathbb{R}_k \times \mathbb{X}$,

$$|g_\varepsilon(t, e^{\vec{y}}, x) - g_\varepsilon(t, e^{\vec{y}_0}, x)| \le \sum_{i=1}^k \Big(1 + \sum_{j \ne i} L_{32,i,j} e^{\gamma_{i,j}|y_j|^+ + |y_i|^+} + L_{32,i,i} e^{(\gamma_{i,i}+1)|y_i|^+}\Big) L_{31,i}\, |y_i - y_{0,i}|$$
$$\le \sum_{i=1}^k \Big(1 + \sum_{j \ne i} L_{32,i,j} e^{\gamma_{i,j}|y_j|^+ + (1 + \gamma - \gamma_\circ)|y_i|^+} + L_{32,i,i} e^{(\gamma_{i,i}+1+\gamma-\gamma_\circ)|y_i|^+}\Big) L_{31,i} L_i(\gamma) \le \sum_{i=1}^k \Big(1 + \sum_{j=1}^k L_{32,i,j} e^{\gamma(|y_j|^+ \vee |y_i|^+)}\Big) L_{31,i} L_i(\gamma)$$
$$\le \sum_{i=1}^k \Big(1 + \sum_{j=1}^k L_{32,i,j} \big(e^{\gamma|y_j|} + e^{\gamma|y_{0,j}|} + e^{\gamma|y_i|} + e^{\gamma|y_{0,i}|}\big)\Big) L_{31,i} L_i(\gamma)$$
$$= \sum_{i=1}^k \Big(1 + \sum_{j=1}^k L_{32,i,j} \big(e^{\gamma|y_{0,j}|} + e^{\gamma|y_{0,i}|}\big)\Big) L_{31,i} L_i(\gamma) + \sum_{j=1}^k e^{\gamma|y_j|} \sum_{i=1}^k \big(L_{32,i,j} L_{31,i} L_i(\gamma) + L_{32,j,i} L_{31,j} L_j(\gamma)\big) = L'_{40} + \sum_{j=1}^k L'_{40,j} e^{\gamma|y_j|}, \quad (5.13)$$

where

$$L'_{40} = L'_{40}(\gamma) = \sum_{i=1}^k \Big(1 + \sum_{j=1}^k L_{32,i,j} \big(e^{\gamma|y_{0,j}|} + e^{\gamma|y_{0,i}|}\big)\Big) L_{31,i} L_i(\gamma), \qquad L'_{40,j} = L'_{40,j}(\gamma) = \sum_{i=1}^k \big(L_{32,i,j} L_{31,i} L_i(\gamma) + L_{32,j,i} L_{31,j} L_j(\gamma)\big), \ j = 1, \ldots, k. \quad (5.14)$$

Relations (5.9) and (5.13) imply that, for every $\varepsilon \in [0, \varepsilon_{34}]$ and $(t, \vec{y}, x) \in [0, T] \times \mathbb{R}_k \times \mathbb{X}$,

$$|g_\varepsilon(t, e^{\vec{y}}, x)| < |g_\varepsilon(t, e^{\vec{y}_0}, x)| + L'_{40} + \sum_{j=1}^k L'_{40,j} e^{\gamma|y_j|} < L_{39} + L'_{40} + \sum_{j=1}^k L'_{40,j} e^{\gamma|y_j|} \le \Big(1 + \sum_{j=1}^k L'_{38,j} e^{\gamma|y_j|}\Big) L'_{37}, \quad (5.15)$$

where

$$L'_{37} = L_{39} + L'_{40}, \qquad L'_{38,j} = \frac{L'_{40,j}}{L_{39} + L'_{40}}, \ j = 1, \ldots, k. \quad (5.16)$$

Equivalently, relation (5.15) means that the following inequality holds, for every $\varepsilon \in [0, \varepsilon_{34}]$,

$$\sup_{0 \le t \le T,\, (\vec{y}, x) \in \mathbb{Z}} \frac{|g_\varepsilon(t, e^{\vec{y}}, x)|}{1 + \sum_{i=1}^k L'_{38,i} e^{\gamma|y_i|}} < L'_{37}. \quad (5.17)$$

Relation (5.17) obviously implies that condition $\mathbf{B}_{19}[\gamma]$ holds. $\Box$

Remark 5.1.1. The constants $L_{37}, L_{38,j}, j = 1, \ldots, k$ penetrating the inequalities in condition $\mathbf{B}_{19}[\gamma]$ coincide with the corresponding constants $L'_{37}, L'_{38,j}, j = 1, \ldots, k$ given by formulas (5.16) and (5.17).

Remark 5.1.2. The parameter $\varepsilon_{34} = \varepsilon_{34}(\bar{\gamma})$ penetrating relation (5.17) is given by the formula $\varepsilon_{34} = \varepsilon_{30}(\bar{\gamma}) \wedge \varepsilon_{33}$, where $\varepsilon_{30} = \varepsilon_{30}(\bar{\gamma})$ is the parameter penetrating condition $\mathbf{B}_{16}[\bar{\gamma}]$ and the parameter $\varepsilon_{33}$ is determined by relation (5.9).

Remark 5.1.3. Condition $\mathbf{B}_{16}[\bar{\gamma}]$ can be replaced by condition $\mathbf{B}_{17}[\bar{\gamma}]$ in Lemma 5.1.3. In this case, inequality (5.17) also takes place, but for $\varepsilon \in [0, \tilde{\varepsilon}'_{34}]$ and with constants $\tilde{L}'_{37,i}, \tilde{L}'_{38,i,j}, i = 0, \ldots, k+1, j = 1, \ldots, k$. Here, $\tilde{\varepsilon}'_{34} = \varepsilon_{31}(\bar{\gamma}) \wedge \varepsilon_{33}$ and the above constants are computed by the same formulas as the constants $L'_{37,i}, L'_{38,i,j}, i = 0, \ldots, k+1, j = 1, \ldots, k$, but with the constants $L_{31,j}, L_{32,i,j}, i = 0, \ldots, k+1, j = 1, \ldots, k$ replaced by the constants $L_{33,j}, L_{34,i,j}, i = 0, \ldots, k+1, j = 1, \ldots, k$.

5.1.4 Weakened Lipschitz-type conditions for pay-off functions

In some models, pay-off functions satisfy only weakened variants of the Lipschitz-type conditions. One such model is considered in Section 10.3, where the derivative of the pay-off function with respect to the time argument t is unbounded in the neighborhood


of the maturity $T$ and, respectively, the inequality penetrating the basic condition $\mathbf{B}_{16}[\bar{\gamma}]$ (a) holds only in a weakened variant, where the quantity $|t' - t''|$ is replaced by the quantity $\sqrt{|t' - t''|}$, in the case where $t'' = T$. This makes it reasonable to formulate a weakened variant of the Lipschitz-type condition $\mathbf{B}_{16}[\bar{\gamma}]$.

It takes the form of the following condition, assumed to hold for a vector parameter $\bar{\gamma} = (\gamma_{0,1}, \ldots, \gamma_{0,k}, \ldots, \gamma_{k+1,1}, \ldots, \gamma_{k+1,k})$, with non-negative components, and a vector parameter $\bar{\nu} = (\nu_0, \ldots, \nu_{k+1})$, with components taking values in the interval $(0, 1]$:

$\mathbf{B}_{21}[\bar{\gamma}, \bar{\nu}]$: There exists $\varepsilon_{35} = \varepsilon_{35}(\bar{\gamma}, \bar{\nu}) \in (0, \varepsilon_0]$ such that for every $\varepsilon \in [0, \varepsilon_{35}]$:

(a) $\sup_{0 \le t', t'' \le T,\, t' \ne t'',\, \vec{y} \in \mathbb{R}_k,\, x \in \mathbb{X}} \frac{|g_\varepsilon(t', e^{\vec{y}}, x) - g_\varepsilon(t'', e^{\vec{y}}, x)|}{(1 + \sum_{j=1}^k L_{42,0,j} e^{\gamma_{0,j}|y_j|})\,|t' - t''|^{\nu_0}} \le L_{41,0}$, for some $0 \le L_{41,0}, L_{42,0,1}, \ldots, L_{42,0,k} < \infty$;

(b) $\sup_{0 \le t \le T,\, \vec{y}\,'_i, \vec{y}\,''_i \in \mathbb{R}_k,\, \vec{y}\,'_i \ne \vec{y}\,''_i,\, x \in \mathbb{X}} \frac{|g_\varepsilon(t, e^{\vec{y}\,'_i}, x) - g_\varepsilon(t, e^{\vec{y}\,''_i}, x)|}{(1 + \sum_{j \ne i} L_{42,i,j} e^{\gamma_{i,j}|y_j| + \nu_i|y_i|^+} + L_{42,i,i} e^{(\gamma_{i,i}+\nu_i)|y_i|^+})\,|y'_i - y''_i|^{\nu_i}} \le L_{41,i}$, $i = 1, \ldots, k$, for some $0 \le L_{41,i}, L_{42,i,1}, \ldots, L_{42,i,k} < \infty$, $i = 1, \ldots, k$;

(c) $\sup_{0 \le t \le T,\, \vec{y} \in \mathbb{R}_k,\, x', x'' \in \mathbb{X},\, x' \ne x''} \frac{|g_\varepsilon(t, e^{\vec{y}}, x') - g_\varepsilon(t, e^{\vec{y}}, x'')|}{(1 + \sum_{j=1}^k L_{42,k+1,j} e^{\gamma_{k+1,j}|y_j|})\, d_{\mathbb{X}}(x', x'')^{\nu_{k+1}}} \le L_{41,k+1}$, for some $0 \le L_{41,k+1}, L_{42,k+1,1}, \ldots, L_{42,k+1,k} < \infty$.

In the case where the vector $\bar{\nu} = (1, \ldots, 1)$, condition $\mathbf{B}_{21}[\bar{\gamma}, \bar{\nu}]$ reduces to condition $\mathbf{B}_{16}[\bar{\gamma}]$.

Analogously, condition $\mathbf{B}_{17}[\bar{\gamma}]$ can be modified. A weakened variant of this condition takes the following form:

$\mathbf{B}_{22}[\bar{\gamma}, \bar{\nu}]$: The following asymptotic relations hold:

(a) $\overline{\lim}_{\varepsilon \to 0} \sup_{0 \le t', t'' \le T,\, t' \ne t'',\, \vec{y} \in \mathbb{R}_k,\, x \in \mathbb{X}} \frac{|g_\varepsilon(t', e^{\vec{y}}, x) - g_\varepsilon(t'', e^{\vec{y}}, x)|}{(1 + \sum_{j=1}^k L_{44,0,j} e^{\gamma_{0,j}|y_j|})\,|t' - t''|^{\nu_0}} \le L_{43,0}$, for some $0 \le L_{43,0}, L_{44,0,1}, \ldots, L_{44,0,k} < \infty$;

(b) $\overline{\lim}_{\varepsilon \to 0} \sup_{0 \le t \le T,\, \vec{y}\,'_i, \vec{y}\,''_i \in \mathbb{R}_k,\, \vec{y}\,'_i \ne \vec{y}\,''_i,\, x \in \mathbb{X}} \frac{|g_\varepsilon(t, e^{\vec{y}\,'_i}, x) - g_\varepsilon(t, e^{\vec{y}\,''_i}, x)|}{(1 + \sum_{j \ne i} L_{44,i,j} e^{\gamma_{i,j}|y_j| + \nu_i|y_i|^+} + L_{44,i,i} e^{(\gamma_{i,i}+\nu_i)|y_i|^+})\,|y'_i - y''_i|^{\nu_i}} \le L_{43,i}$, $i = 1, \ldots, k$, for some $0 \le L_{43,i}, L_{44,i,1}, \ldots, L_{44,i,k} < \infty$, $i = 1, \ldots, k$;

(c) $\overline{\lim}_{\varepsilon \to 0} \sup_{0 \le t \le T,\, \vec{y} \in \mathbb{R}_k,\, x', x'' \in \mathbb{X},\, x' \ne x''} \frac{|g_\varepsilon(t, e^{\vec{y}}, x') - g_\varepsilon(t, e^{\vec{y}}, x'')|}{(1 + \sum_{j=1}^k L_{44,k+1,j} e^{\gamma_{k+1,j}|y_j|})\, d_{\mathbb{X}}(x', x'')^{\nu_{k+1}}} \le L_{43,k+1}$, for some $0 \le L_{43,k+1}, L_{44,k+1,1}, \ldots, L_{44,k+1,k} < \infty$.

Condition $\mathbf{B}_{22}[\bar{\gamma}, \bar{\nu}]$ implies that there exists $\varepsilon_{36} = \varepsilon_{36}(\bar{\gamma}, \bar{\nu}) \in (0, \varepsilon_0]$ such that all inequalities penetrating condition $\mathbf{B}_{21}[\bar{\gamma}, \bar{\nu}]$ hold, for every $\varepsilon \in [0, \varepsilon_{36}]$.

If condition $\mathbf{B}_{21}[\bar{\gamma}, \bar{\nu}]$ holds, then the following inequality, analogous to the key inequality (5.8), takes place, for $\varepsilon \in [0, \varepsilon_{35}]$ and $(t', e^{\vec{y}\,'}, x'), (t'', e^{\vec{y}\,''}, x'') \in [0, T] \times \mathbb{R}_k \times \mathbb{X}$,

$$|g_\varepsilon(t', e^{\vec{y}\,'}, x') - g_\varepsilon(t'', e^{\vec{y}\,''}, x'')| \le \Big(1 + \sum_{j=1}^k L_{42,0,j} e^{\gamma_{0,j}|y_j|^+}\Big) L_{41,0}\, |t' - t''|^{\nu_0} + \sum_{i=1}^k \Big(1 + \sum_{j \ne i} L_{42,i,j} e^{\gamma_{i,j}|y_j|^+ + \nu_i|y_i|^+} + L_{42,i,i} e^{(\gamma_{i,i}+\nu_i)|y_i|^+}\Big) L_{41,i}\, |y'_i - y''_i|^{\nu_i} + \Big(1 + \sum_{j=1}^k L_{42,k+1,j} e^{\gamma_{k+1,j}|y_j|^+}\Big) L_{41,k+1}\, d_{\mathbb{X}}(x', x'')^{\nu_{k+1}}. \quad (5.18)$$

Lemma 5.1.4. Let conditions $\mathbf{B}_{21}[\bar{\gamma}, \bar{\nu}]$ and $\mathbf{B}_{20}$ hold. Then, condition $\mathbf{B}_{19}[\gamma]$ holds, for every $\gamma > \gamma_\circ$.

Proof. Obviously, for every $\gamma > \gamma_\circ$ and $i = 1, \ldots, k$,

$$L_i(\gamma, \nu_i) = \sup_{y_i \in \mathbb{R}_1} \frac{|y_i - y_{0,i}|^{\nu_i}}{e^{(\gamma - \gamma_\circ)|y_i|^+}} < \infty. \quad (5.19)$$

Let $\varepsilon_{37} = \varepsilon_{35}(\bar{\gamma}, \bar{\nu}) \wedge \varepsilon_{33}$, where $\varepsilon_{35} = \varepsilon_{35}(\bar{\gamma}, \bar{\nu})$ is the parameter penetrating condition $\mathbf{B}_{21}[\bar{\gamma}, \bar{\nu}]$ and $\varepsilon_{33}$ is determined by relation (5.9).

Condition $\mathbf{B}_{21}[\bar{\gamma}, \bar{\nu}]$ and inequalities (5.18) and (5.9) imply that, for every $\varepsilon \in [0, \varepsilon_{37}]$ and $(t, \vec{y}, x) \in [0, T] \times \mathbb{R}_k \times \mathbb{X}$,

$$|g_\varepsilon(t, e^{\vec{y}}, x) - g_\varepsilon(t, e^{\vec{y}_0}, x)| \le \sum_{i=1}^k \Big(1 + \sum_{j \ne i} L_{42,i,j} e^{\gamma_{i,j}|y_j|^+ + \nu_i|y_i|^+} + L_{42,i,i} e^{(\gamma_{i,i}+\nu_i)|y_i|^+}\Big) L_{41,i}\, |y_i - y_{0,i}|^{\nu_i}$$
$$\le \sum_{i=1}^k \Big(1 + \sum_{j \ne i} L_{42,i,j} e^{\gamma_{i,j}|y_j|^+ + (\nu_i + \gamma - \gamma_\circ)|y_i|^+} + L_{42,i,i} e^{(\gamma_{i,i}+\nu_i+\gamma-\gamma_\circ)|y_i|^+}\Big) L_{41,i} L_i(\gamma, \nu_i) \le \sum_{i=1}^k \Big(1 + \sum_{j=1}^k L_{42,i,j} e^{\gamma(|y_j|^+ \vee |y_i|^+)}\Big) L_{41,i} L_i(\gamma, \nu_i)$$
$$\le \sum_{i=1}^k \Big(1 + \sum_{j=1}^k L_{42,i,j} \big(e^{\gamma|y_j|} + e^{\gamma|y_{0,j}|} + e^{\gamma|y_i|} + e^{\gamma|y_{0,i}|}\big)\Big) L_{41,i} L_i(\gamma, \nu_i)$$
$$= \sum_{i=1}^k \Big(1 + \sum_{j=1}^k L_{42,i,j} \big(e^{\gamma|y_{0,j}|} + e^{\gamma|y_{0,i}|}\big)\Big) L_{41,i} L_i(\gamma, \nu_i) + \sum_{j=1}^k e^{\gamma|y_j|} \sum_{i=1}^k \big(L_{42,i,j} L_{41,i} L_i(\gamma, \nu_i) + L_{42,j,i} L_{41,j} L_j(\gamma, \nu_j)\big) = L''_{40} + \sum_{j=1}^k L''_{40,j} e^{\gamma|y_j|}, \quad (5.20)$$

where

$$L''_{40} = L''_{40}(\gamma) = \sum_{i=1}^k \Big(1 + \sum_{j=1}^k L_{42,i,j} \big(e^{\gamma|y_{0,j}|} + e^{\gamma|y_{0,i}|}\big)\Big) L_{41,i} L_i(\gamma, \nu_i), \qquad L''_{40,j} = L''_{40,j}(\gamma) = \sum_{i=1}^k \big(L_{42,i,j} L_{41,i} L_i(\gamma, \nu_i) + L_{42,j,i} L_{41,j} L_j(\gamma, \nu_j)\big), \ j = 1, \ldots, k. \quad (5.21)$$

Relations (5.9) and (5.20) imply that, for every $\varepsilon \in [0, \varepsilon_{37}]$ and $(t, \vec{y}, x) \in [0, T] \times \mathbb{R}_k \times \mathbb{X}$,

$$|g_\varepsilon(t, e^{\vec{y}}, x)| < |g_\varepsilon(t, e^{\vec{y}_0}, x)| + L''_{40} + \sum_{j=1}^k L''_{40,j} e^{\gamma|y_j|} < L_{39} + L''_{40} + \sum_{j=1}^k L''_{40,j} e^{\gamma|y_j|} \le \Big(1 + \sum_{j=1}^k L''_{38,j} e^{\gamma|y_j|}\Big) L''_{37}, \quad (5.22)$$

where

$$L''_{37} = L_{39} + L''_{40}, \qquad L''_{38,j} = \frac{L''_{40,j}}{L_{39} + L''_{40}}, \ j = 1, \ldots, k. \quad (5.23)$$

Equivalently, relation (5.22) means that the following inequality holds, for every $\varepsilon \in [0, \varepsilon_{37}]$,

$$\sup_{0 \le t \le T,\, (\vec{y}, x) \in \mathbb{Z}} \frac{|g_\varepsilon(t, e^{\vec{y}}, x)|}{1 + \sum_{i=1}^k L''_{38,i} e^{\gamma|y_i|}} < L''_{37}. \quad (5.24)$$

Relation (5.24) obviously implies that condition $\mathbf{B}_{19}[\gamma]$ holds. $\Box$


Remark 5.1.4. In this case, the constants $L_{37}, L_{38,j}, j = 1, \ldots, k$ penetrating the inequalities in condition $\mathbf{B}_{19}[\gamma]$ coincide with the corresponding constants $L''_{37}, L''_{38,j}, j = 1, \ldots, k$ given by formulas (5.23) and (5.24).

Remark 5.1.5. The parameter $\varepsilon_{37} = \varepsilon_{37}(\bar{\gamma}, \bar{\nu})$ penetrating relation (5.24) is given by the formula $\varepsilon_{37} = \varepsilon_{35}(\bar{\gamma}, \bar{\nu}) \wedge \varepsilon_{33}$, where $\varepsilon_{35} = \varepsilon_{35}(\bar{\gamma}, \bar{\nu})$ is the parameter penetrating condition $\mathbf{B}_{21}[\bar{\gamma}, \bar{\nu}]$ and the parameter $\varepsilon_{33}$ is determined by relation (5.9).

Remark 5.1.6. Condition $\mathbf{B}_{21}[\bar{\gamma}, \bar{\nu}]$ can be replaced by condition $\mathbf{B}_{22}[\bar{\gamma}, \bar{\nu}]$ in Lemma 5.1.4. In this case, inequality (5.24) also takes place, but for $\varepsilon \in [0, \tilde{\varepsilon}''_{37}]$ and with constants $\tilde{L}''_{37,i}, \tilde{L}''_{38,i,j}, i = 0, \ldots, k+1, j = 1, \ldots, k$. Here, $\tilde{\varepsilon}''_{37} = \varepsilon_{36}(\bar{\gamma}, \bar{\nu}) \wedge \varepsilon_{33}$ and the above constants are computed by the same formulas as the constants $L''_{37,i}, L''_{38,i,j}, i = 0, \ldots, k+1, j = 1, \ldots, k$, but with the constants $L_{41,j}, L_{42,i,j}, i = 0, \ldots, k+1, j = 1, \ldots, k$ replaced by the constants $L_{43,j}, L_{44,i,j}, i = 0, \ldots, k+1, j = 1, \ldots, k$.

5.2 Time-skeleton approximations for optimal expected rewards

In this section, we give explicit and uniform, with respect to the perturbation parameter, estimates in so-called time-skeleton approximations for optimal expected rewards for continuous time modulated Markov log-price processes.

5.2.1 Inequalities connecting reward functionals for American and Bermudan options

Let us consider again the model, where $\vec{Z}_\varepsilon(t) = (\vec{Y}_\varepsilon(t), X_\varepsilon(t)), t \ge 0$ is a càdlàg multivariate modulated Markov log-price process with a phase space $\mathbb{Z} = \mathbb{R}_k \times \mathbb{X}$ ($\mathbb{X}$ is a Polish space with a metric $d_{\mathbb{X}}(x', x'')$), an initial distribution $P_\varepsilon(A)$ and transition probabilities $P_\varepsilon(t, \vec{z}, t+u, A)$. We assume that the process $\vec{Z}_\varepsilon(t)$ depends on some perturbation parameter $\varepsilon \in [0, \varepsilon_0]$, where $0 < \varepsilon_0 < \infty$. Also, we assume that the pay-off function $g_\varepsilon(t, e^{\vec{y}}, x)$ depends on the parameter $\varepsilon$ and is, for every $\varepsilon \in [0, \varepsilon_0]$, a measurable function acting from the space $[0, T] \times \mathbb{R}_k \times \mathbb{X}$ to $\mathbb{R}_1$.

Recall the class $\mathcal{M}^{(\varepsilon)}_{\max,0,T}$ of all Markov moments $0 \le \tau_{\varepsilon,0} \le T$ for the process $\vec{Z}_\varepsilon(t)$. We also consider different non-empty classes of Markov moments $\mathcal{M}^{(\varepsilon)}_{0,T} \subseteq \mathcal{M}^{(\varepsilon)}_{\max,0,T}$.

We are interested in the optimal expected reward for the log-price process $\vec{Z}_\varepsilon(t)$ over the class $\mathcal{M}^{(\varepsilon)}_{0,T}$,

$$\Phi(\mathcal{M}^{(\varepsilon)}_{0,T}) = \sup_{\tau_{\varepsilon,0} \in \mathcal{M}^{(\varepsilon)}_{0,T}} \mathsf{E}\, g_\varepsilon(\tau_{\varepsilon,0}, e^{\vec{Y}_\varepsilon(\tau_{\varepsilon,0})}, X_\varepsilon(\tau_{\varepsilon,0})). \quad (5.25)$$

Lemma 4.1.10 and Theorem 4.1.4 give effective sufficient conditions, under which there exists $\varepsilon_6 \in (0, \varepsilon_0]$ such that, for any class $\mathcal{M}^{(\varepsilon)}_{0,T} \subseteq \mathcal{M}^{(\varepsilon)}_{\max,0,T}$ and $\varepsilon \in [0, \varepsilon_6]$,

$$|\Phi(\mathcal{M}^{(\varepsilon)}_{0,T})| \le \sup_{0 \le t \le T} \mathsf{E}|g_\varepsilon(t, e^{\vec{Y}_\varepsilon(t)}, X_\varepsilon(t))| < \infty. \quad (5.26)$$

The case $T = 0$ is trivial, since, in this situation, any non-empty class of Markov moments $\mathcal{M}^{(\varepsilon)}_{0,0} = \mathcal{M}^{(\varepsilon)}_{\max,0,0}$ contains only one Markov moment $\tau_\varepsilon \equiv 0$ and, thus, $\Phi(\mathcal{M}^{(\varepsilon)}_{0,0}) = \Phi(\mathcal{M}^{(\varepsilon)}_{\max,0,0}) = \mathsf{E}\, g_\varepsilon(0, e^{\vec{Y}_\varepsilon(0)}, X_\varepsilon(0))$.

Let $T > 0$ and $N \ge 1$. Let $\Pi = \langle 0 = t_0 < t_1 < \cdots < t_N = T \rangle$ be a partition of the interval $[0, T]$ and

$$d(\Pi) = \max_{1 \le i \le N} (t_i - t_{i-1}). \quad (5.27)$$

In order to include in the model the case $T = 0$, let us interpret the set $\Pi = \{0\}$ as a partition of the interval $[0, 0]$ and define, in this case, $d(\Pi) = 0$.

Consider the class $\hat{\mathcal{M}}^{(\varepsilon)}_{\Pi,0,T}$ of all Markov moments from $\mathcal{M}^{(\varepsilon)}_{\max,0,T}$ which only take the values $0 = t_0, t_1, \ldots, t_N = T$, and the class $\mathcal{M}^{(\varepsilon)}_{\Pi,0,T}$ of all Markov moments $\tau_{\varepsilon,0}$ from $\hat{\mathcal{M}}^{(\varepsilon)}_{\Pi,0,T}$ such that the event $\{\omega : \tau_{\varepsilon,0}(\omega) = t_n\} \in \sigma[\vec{Z}_\varepsilon(t_0), \ldots, \vec{Z}_\varepsilon(t_n)]$, for $n = 0, \ldots, N$. By definition,

$$\mathcal{M}^{(\varepsilon)}_{\Pi,0,T} \subseteq \hat{\mathcal{M}}^{(\varepsilon)}_{\Pi,0,T} \subseteq \mathcal{M}^{(\varepsilon)}_{\max,0,T}. \quad (5.28)$$

Relations (5.26) and (5.28) imply that

$$-\infty < \Phi(\mathcal{M}^{(\varepsilon)}_{\Pi,0,T}) \le \Phi(\hat{\mathcal{M}}^{(\varepsilon)}_{\Pi,0,T}) \le \Phi(\mathcal{M}^{(\varepsilon)}_{\max,0,T}) < \infty. \quad (5.29)$$

272

5

Time-skeleton reward approximations for Markov LPP

~ ε (tn ), instead of the moments 0, . . . , N . This is jumps for the Markov chain Z done in order to synchronize the discrete and continuous time models. Thus, the (ε) optimization problem (5.84) for the class MΠ,0,T is really a problem of optimal expected reward for American type options in discrete time. In what follows. the following useful lemma is used. Let assume that the phase space Z = Z0 × Z00 is the product of two measurable spaces Z0 and Z00 (a σ-algebra BZ of measurable subsets of Z is the minimal σalgebra containing sets A0 × A00 , for any subsets A0 ∈ BZ0 0 and A00 ∈ BZ0000 , where BZ0 0 and BZ0000 are σ-algebras of measurable subsets, respectively, of spaces Z0 and Z00 ). ~ ε (tn ) = (Z ~ ε0 (tn ), Respectively, we assume that discrete time Markov process Z 00 00 0 0 00 ~ε (tn ), Xε (tn ))), n = 0, 1, . . . , N has two compo~ε (tn ), Xε (tn )), (Y ~ ε (tn )) = ((Y Z nents. ~ ε (tn ) ∈ A/Z ~ ε (tn−1 ) = ~z} Let us denote Pε,n (~z, A) = Pε,n (~z0 , ~z00 , A) = P{Z ~ ε (tn ). transition probabilities of the Markov process Z ~ y Let gε (tn , e , x) be a pay-off function, which is a real-valued measurable function of argument (t, ~ y , x) = (t, (~ y0 , ~ y 00 ), (x0 , x00 )) ∈ [0, T ] × Z. (ε) Let, also, φn (~z), ~z ∈ Z, n = 0, . . . , N and Φ(ε) be, respectively, the corresponding reward functions and the optimal expected reward. (ε) We assume that conditions of Theorems 4.1.3 hold, and, thus, |φn (~z)| < ∞, for ~z ∈ Z, n = 0, . . . , N , for ε ∈ (0, ε2 ], where ε2 is defined in Remark 4.1.9. (ε) Let, for every n = 0, . . . , N , MΠ,tn ,T be the class of all Markov moments ~ ε (tr ), which take values tn , . . . , tN = T and such that event τε,n , for the process Z ~ ~ ε (tN )], for every n ≤ m ≤ N . {τε,n = tm } ∈ σ[Zε (tm ), . . . , Z ∗ Let, also, τε,n be, for every n = 0, . . . , N , the optimal stopping time from the ∗ (ε) (ε) ∗ ∗ class MΠ,tn ,T such that φn (~z) = E~z,tn gε (τε,n , eYε (τε,n ) , Xε (τε,0 )), ~z ∈ Z. ∗ According Lemma 2.4.3∗ , the optimal stopping times τε,n exists and, moreover, ∗ ∗ it has the first hitting time structure, i.e. τε,n = min(r ≥ n, Zε,n ∈ Dε,n ) ∧ N, ∗ where Dε,n , n = 0, . . . , N are optimal stopping domains which have the following (ε) ∗ form Dε,n = {~z ∈ Z : gε (tn , e~y , x) ≥ φn (~z)}, n = 0, . . . , N .

Lemma 5.2.1 Let the transition probabilities Pε,n (~z0 , ~z00 , A) = Pε,n (~z0 , A), 0 n = 1, . . . , N and the pay-off functions gε (tn , e~y , x) = gε (tn , e~y , x0 ), n = 0, . . . , N depend only of the first component of the argument ~z = (~ y , x) = (~z0 , ~z00 ) = (ε) (ε) 0 0 0 0 0 00 ((~ y , x ), (~ y , x )) ∈ Z = Z × Z . Then the reward functions φn (~z) = φn (~z0 ), n = 0 00 0, . . . , N depend only of the first component of the argument ~z = (~z , ~z ) ∈ Z, and ∗ 0∗ the optimal stopping domains have the structure Dε,n = Dε,n × Z00 , n = 0, . . . , N , (ε) 0 0∗ 0 0 ~ y0 0 where Dε,n = {~z ∈ Z : gε (tn , e , x ) ≥ φn (~z )}, , n = 0, . . . , N . Proof. By Theorem 2.3.3∗ and conditions of the Lemma 5.2.1, the reward function, 0

φN (~z) = gε (tN , e~y , x) = gε (tN , e~y , x0 ) = φN (~z0 ), (ε)

(ε)

(5.30)

5.2

273

Time-skeleton approximations for optimal expected rewards

i.e., it depends only of the first component of the argument ~z = (~z0 , ~z00 ) ∈ Z. Respectively, the optimal stopping domain, 0∗ ∗ × Z00 , Dε,N = {~z ∈ Z : gε (tN , e~y , x) ≥ φN (~z)} = Dε,N (ε)

(5.31)

where 0

0∗ Dε,N = {~z0 ∈ Z0 : gε (tN , e~y , x0 ) ≥ φN (~z0 )} = Z0 . (ε)

(5.32)

Also, by Theorem 2.3.3∗ and conditions of the lemma, the reward function, Z  (ε) φN −1 (~z) = max gε (tN −1 , e~y , x), φN (~zN )Pε,N (~z, d~zN ) Z ~ y0

0

Z

 0 0 φN (~zN )Pε,N (~z0 , d~zN )

= max gε (tN −1 , e , x ), Z

=

(ε) φN −1 (~z0 ),

(5.33)

i.e., it also depend only of the first component of the argument ~z = (~z0 , ~z00 ) ∈ Z. 0 00 Note that we use notation ~zN = (~zN , ~zN ) ∈ Z for the argument of integration 0 00 ~zN = (~zN , ~zN ) ∈ Z in (5.33). Respectively, the optimal stopping domain, ∗ 0∗ 00 Dε,N z ∈ Z : gε (tN −1 , e~y , x) ≥ φN −1 (~z)} = Dε,N −1 = {~ −1 × Z , (ε)

(5.34)

where 0

0∗ Dε,N z 0 ∈ Z0 : gε (tN −1 , e~y , x0 ) ≥ φN −1 (~z0 )}. −1 = {~ (ε)

(5.35)

By repeating the recursive procedure described above, we prove the proposition of lemma concerned the reward functions and the optimal stopping domains for n = N − 2, . . . , 0.  The following lemma establishes a very useful equality between the reward (ε) ˆ (ε) ). functionals Φ(MΠ,0,T ) and Φ(M Π,0,T Lemma 5.1.2 Let Π be an arbitrary for any partition of interval [0, T ]. Let also conditions of Lemma 4.1.10 hold. Then, for any ε ∈ [0, ε6 ], (ε) ˆ (ε) ). Φ(MΠ,0,T ) = Φ(M Π,0,T

(5.36)

ˆ (ε) can be considProof. The optimization problem (5.84) for the class M Π,0,T ered as a problem of optimal expected reward for American type options with ~ ε (tn ) additional discrete time. To see this let us add to the random variables Z 0 ~ ~ components Zε (tn ) = {Zε (t), tn−1 < t ≤ tn } with the corresponding phase space ~ ε0 (0) we can take an arbitrary point Z0 endowed by the corresponding σ-field. As Z 0 ~ ε00 (tn ) = (Z ~ ε (tn ), Z ~ ε0 (tn )), with the in Z . Consider the extended Markov chain Z

274

5

Time-skeleton reward approximations for Markov LPP

phase space Z00 = Z × Z0 . As above, we slightly modify the standard definition and count moments t0 , . . . , tN as moments of jumps for the this Markov chain instead of moments 0, . . . , N . This is done in order to synchronize the discrete and continuous time models. ˜ (ε) the class of all Markov moments τε taking values t0 , . . . , tN , Denote by M Π,0,T ~ ε00 (tn ) and consider the reward functional, for the discrete time Markov chain Z (ε)

˜ Φ(M Π,0,T ) =

sup

~

Egε (τε , eYε (τε ) , Xε (τε )).

(5.37)

˜ (ε) τε ∈M Π,0,T

ˆ (ε) is It is readily seen that the optimization problem (5.84) for the class M Π,T equivalent to the optimization problem (5.37), i.e., (ε)

(ε)

ˆ ˜ Φ(M Π,0,T ) = Φ(MΠ,0,T ).

(5.38)

According Theorem 2.4.4∗ , the optimal stopping moment τε∗ exists, and the ~ ε00 (tn ). Moreover, the optimal Markov event {τε∗ = tn } depends only on the value Z 00 ~ ε00 (tn ) ∈ Dε,n moment has a hitting time structure that is τε∗ = min(tn : Z ) ∧ N, 00 00 where Dε,n , n = 0, . . . , N are measurable subsets of the phase space Z . The optimal stopping domains are determined by the transition probabilities ~ ε00 (tn ) and the corresponding pay-off function. of the extended Markov chain Z ~ ε00 (tn ) = (Z ~ ε (tn ), Z ~ ε0 (tn )) has transition However, the extended Markov chain Z ~ ε (tn ). Also the payprobabilities depending only on values of the first component Z ~ y off function gε (t, e , x) depend only of the first component ~z = (~ y , x) of argument ~z00 = (~z, ~z0 ) ∈ Z00 = Z × Z0 . Therefore, by Lemma 5.2.1, the optimal Markov moment has in this case the ~ ε (tn ) ∈ Dε,n ), where first hitting time structure of the form τε∗ = min(tn : Z Dε,n , n = 0, . . . , N are measurable subsets of the phase space of the first component Z. Therefore, for the optimal stopping moment τε∗ event {τε∗ = tn } depends only ~ ε (tn ), for every n = 0, . . . , N . Therefore, τε∗ ∈ M(ε) . Hence, on the value Z Π,0,T ∗

~ (ε) ˆ (ε) ). Φ(MΠ,0,T ) ≥ Egε (τε∗ , eYε (τε ) , Xε (τε∗ )) = Φ(M Π,0,T

(5.39)

Inequalities (5.29) and (5.39) imply equality (5.36). 

5.2.2 Time-skeleton approximations for optimal expected rewards for multivariate modulated Markov log-price processes ¯ holds for a kLet us now assume the following variant of condition C2 [β] ¯ dimensional vector β = (β, . . . , β) with identical components β ≥ 0: C14 [β]: limc→0 limε→0 ∆β (Yε,i (·), c, T ) = 0, i = 1, . . . , k.

5.2

5.2.2 Time-skeleton approximations for optimal expected rewards for multivariate modulated Markov log-price processes

Let us now assume that the following variant of condition $\mathbf{C}_{2}[\bar{\beta}]$ holds for a $k$-dimensional vector $\bar{\beta} = (\beta, \ldots, \beta)$ with identical components $\beta \ge 0$:

$\mathbf{C}_{14}[\beta]$: $\lim_{c \to 0} \overline{\lim}_{\varepsilon \to 0} \Delta_\beta(Y_{\varepsilon,i}(\cdot), c, T) = 0$, $i = 1, \ldots, k$.

275

Let us also assume that conditions B16 [¯ γ ] and B20 hold. These conditions imply that condition B19 [γ] holds for any γ > γ◦ . Let us now assume that β > γ∗ . Note that γ∗ ≥ γ◦ ≥ 1 are defined by relation (5.10). Let us also assume that, additionally to conditions B16 [¯ γ ], B20 and C14 [β], ¯ for a k-dimensional vector β¯ = (β, . . . , β) the following variant of condition D14 [β], with identical components, holds: D21 [β]: limε→0 Eeβ|Yε,i (0)| < K77,i , i = 1, . . . , k, for some 1 < K77,i < ∞, i = 1, . . . , k. By Lemma 4.1.10 and Theorem 4.1.4 (which should be applied for βi = β, i = 1, . . . , k and some γi = γ ∈ (γ∗ , β), i = 1, . . . , k), there exists ε6 = ε6 (β, γ) ∈ (0, ε0 ] (defined by the corresponding formulas in Remark 4.1.13, namely, ε6 = ε1 ∧ε3 ∧ε5 ) such that, for every ε ∈ [0, ε6 ], the optimal expected reward is finite, i.e., ~

(ε)

|Φ(Mmax,0,T )| ≤ E sup |gε (s, eYε (s) , Xε (s))| < ∞.

(5.40)

0≤s≤T

The following theorem gives a time-skeleton approximation for the reward functionals Φ(ε) (Mmax,0,T ), asymptotically uniform with respect to the perturbation parameter ε. Theorem 5.2.1. Let conditions B16 [¯ γ ], B20 , C14 [β], and D21 [β] hold, and β . Then, there γ∗ < β < ∞. Let also condition C6 [α∗ ] holds, for α∗ = β−γ ∗ exist ε38 = ε38 (β, γ¯ ) ∈ (0, ε0 ] and constants 0 ≤ M78 = M78 (β, γ∗ ), M79,1 = M79,1 (β, γ∗ ), . . . , M79,k = M79,k (β, γ∗ ), M80 = M80 (β, γ∗ ) < ∞ and c = c(β) > 0 such that the following time-skeleton approximation inequality holds, for any partition Π such that d(Π) ≤ c and ε ∈ [0, ε38 ], (ε)

(ε)

0 ≤ Φ(Mmax,0,T ) − Φ(MΠ,0,T ) ≤ M78 d(Π) +

k X

1

M79,i ∆β (Yε,i (·), d(Π), T ) α∗

i=1 1

+ M80 Υα∗ (Xε (·), d(Π), T ) α∗ .

(5.41)

Proof. Let us use Theorem 4.1.4 for the case, where parameters γi = γ ∈ (γ∗ , β), i = 1, . . . , k and βi = β, i = 1, . . . , k. According to this theorem, (ε)

|Φ(Mmax,0,T )| < ∞,

(5.42)

for ε ∈ [0, ε6 ], where ε6 = ε1 ∧ ε3 ∧ ε5 , is given by the corresponding formulas pointed out in Remark 4.1.13. However, in this case, one can take ε6 = ε39 , where ε39 = ε1 ∧ ε34 ∧ ε5 . Indeed, according Lemma 5.1.3, parameter ε34 = ε30 ∧ ε33 can, in this case play the role of parameter ε3 , since condition B20 [γ] is a particular variant of condition condition B2 [¯ γ ] used in Theorem 4.1.4. Parameters ε30 and ε33 are given by the corresponding formulas pointed out in Remark 5.1.2.

276

5

Time-skeleton reward approximations for Markov LPP

Note that parameter ε38 pointed out in Theorem 5.2.1 and explicitly defined below, and parameter ε39 are connected by inequality ε39 ≥ ε38 . Let us also choose c = c(β) = min(c2,1 , . . . , c2,k ) > 0,

(5.43)

where constants c2,i = c2,i (β), i = 1, . . . , k are determined by relation (4.38), for the case where parameters βi = β, i = 1, . . . , k. Let now assume that ε ∈ [0, ε38 ] and a partition Π = h0 = t0 < t1 < · · · < TN = T i is chosen in a way such that d(Π) ≤ c. (ε) For any Markov moment τε ∈ Mmax,0,T and the partition Π one can define the discretization of this moment,  tn , if tn−1 ≤ τε < tn , n = 1, . . . N, τε [Π] = T, if τε = T. Let us choose δ > 0 and let τε,δ be δ-optimal Markov moment in the class i.e.,

(ε) Mmax,0,T ,

~

(ε)

Egε (τε,δ , eYε (τε,δ ) , Xε (τε,δ )) ≥ Φ(Mmax,0,T ) − δ.

(5.44)

Such δ-optimal Markov moment always exists for any δ > 0, by the definition (ε) of the reward functional Φ(Mmax,0,T ). ˆ (ε) . This fact and relation By the definition, the Markov moment τε,δ [Π] ∈ M Π,0,T (5.36) given in Lemma 5.1.1 implies that ~

Egε (τε,δ [Π], eYε (τε,δ [Π]) , Xε (τε,δ [Π])) (ε)

(ε)

ˆ ≤ Φ(M Π,0,T ) = Φ(MΠ,0,T )

(5.45)

(ε)

≤ Φ(Mmax,0,T ). By the definition, τε,δ ≤ τε,δ [Π] ≤ τε,δ + d(Π).

(5.46)

Now inequalities (5.44) and (5.45) imply the following skeleton approximation inequality, (ε)

(ε)

0 ≤ Φ(Mmax,0,T ) − Φ(MΠ,0,T ) ~

≤ δ + Egε (τε,δ , eYε (τε,δ ) , Xε (τε,δ )) ~

− Egε (τε,δ [Π], eYε (τε,δ [Π]) , Xε (τε,δ [Π])) ~

≤ δ + E|gε (τε,δ , eYε (τε,δ ) , Xε (τε,δ )) ~

− gε (τε,δ [Π], eYε (τε,δ [Π]) , Xε (τε,δ [Π]))|.

(5.47)

5.2

Time-skeleton approximations for optimal expected rewards

277

In order to shorten notations, let us denote τ 0 = τε,δ , τ 00 = τε,δ [Π] and Yi0 = ~0 = Yε,i (τ 0 ), Yi00 = Yε,i (τ 00 ), |Yi |+ = |Yi0 | ∨ |Yi00 |, for i = 1, . . . , k. Also, let Y 0 0 00 00 00 0 0 00 00 ~ = (Y1 , . . . , Yk ). Finally, let X = Xε (τ ), X = Xε (τ ). (Y1 , . . . , Yk ), Y Obviously, for i = 1, . . . , k, |Yi |+ ≤ Wi = sup |Yε,i (u)|.

(5.48)

0≤u≤T

By substituting random variables introduced above in the inequality (5.8), we get that the following inequality holds, for ε ∈ [0, ε30 ], ~0

~ 00

|gε (τ 0 , eY , X 0 ) − gε (τ 00 , eY , X 00 )| = (L31,0 +

k X

+

L32,0,j L31,0 eγ0,j |Yj | )|τ 0 − τ 00 |

j=1 k k X X + + L32,i,j L31,i eγi,j |Yj | +|Yi | )|Yi0 − Yi00 | (L31,i + + j=1

i=1

+ (L31,k+1 +

k X

+

L32,k+1,j L31,k+1 eγk+1,j |Yj | )dX (X 0 , X 00 )

j=1

≤ (L31,0 +

k X

L32,0,j L31,0 eγ0,j Wj )|τ 0 − τ 00 |

j=1 k k X X + (L31,i + L32,i,j L31,i eγi,j Wj +Wi )|Yi0 − Yi00 | i=1

+ (L31,k+1 +

j=1 k X

L32,k+1,j L31,k+1 eγk+1,j Wj )dX (X 0 , X 00 ).

(5.49)

j=1

Note also that, by Lemma 4.1.9, for ε ∈ [0, ε4 ], where ε4 = ε1 ∧ ε5 , and i = 1, . . . , k, EeβWi = E exp{β sup |Yε,i (u)|} ≤ M45,i < ∞,

(5.50)

0≤u≤T

where constants M45,i = M45,i (β), i = 1, . . . , k are given by the corresponding formulas pointed out in Remark 4.1.10, which should be used for the case, where βi = β, i = 1, . . . , k. ∗ Now, we apply Hölder inequality, with parameters p = γβ∗ and q = β−γ and β γ∗ −1 1 then with parameters p = γ∗ and q = γ∗ , to the corresponding products of random variables on the right hand side in (5.49) and use relation (5.46). This allows us to write down the following inequalities for the expectation on the right hand side in (5.47), for ε ∈ [0, ε39 ], where ε39 = ε1 ∧ ε34 ∧ ε5 = ε4 ∧ ε34 , ~

~

E|gε (τε,δ , eYε (τε,δ ) , Xε (τε,δ )) − gε (τε,δ [Π], eYε (τε,δ [Π]) , Xε (τε,δ [Π]))| ~0

~ 00

= E|gε (τ 0 , eY , X 0 ) − gε (τ 0 , eY , X 00 )|

278

5

Time-skeleton reward approximations for Markov LPP

≤ (L31,0 +

k X

L32,0,j L31,0 Eeγ0,j Wj )d(Π)

j=1

+

k X

L31,i E|Yi0 − Yi00 |

i=1

+

k X k X

L32,i,j L31,i Eeγi,j Wj eWi |Yi0 − Yi00 |

i=1 j=1

+ L31,k+1 EdX (X 0 , X 00 ) +

k X

L32,k+1,j L31,k+1 Eeγk+1,j Wj dX (X 0 , X 00 )

j=1

≤ (L31,0 +

k X

L32,0,j L031,0 (Ee

γ0,j β γ∗

Wj

)

γ∗ β

)d(Π)

j=1

+

k X

β

L31,i (E|Yi0 − Yi00 | β−γ∗ )

β−γ∗ β

i=1

+

k X k X

γi,j β

L32,i,j L31,i ((Ee γ∗ −1 Wj )

γ∗ −1 γ

i=1 j=1 1

× (EeβWi ) γ∗ )

γ∗ β

β

(E|Yi0 − Yi00 | β−γ∗ ) β

+ L31,k+1 (EdX (X 0 , X 00 ) β−γ∗ ) +

k X

L32,k+1,j L31,k+1 (Ee

β−γ∗ β

β−γ∗ β

γk+1,j β γ

Wj

γ

β

) β (EdX (X 0 , X 00 ) β−γ∗ )

β−γ∗ β

j=1

≤ (L31,0 +

k X

L32,0,j L31,0 (EeβWj )

γ∗ β

)d(Π)

j=1

+

k k X X γ∗ γ∗ −1 1 (L31,i + L32,i,j L31,i ((EeβWj ) γ∗ (EeβWi ) γ∗ ) β ) i=1

×

j=1

(E|Yi0

β

− Yi00 | β−γ∗ )

+ (L31,k+1 +

k X

β−γ∗ β

L32,k+1,j L31,k+1 (EeβWj )

γ∗ β

)

j=1 β

× (EdX (X 0 , X 00 ) β−γ∗ ) ≤ (L31,0 +

k X

β−γ∗ β

γ∗ β L32,0,j L31,0 M45,j )d(Π)

j=1

+

k X

k X

i=1

j=1

(L31,i +

γ∗

β

β L32,i,j L31,i M45,j )(E|Yi0 − Yi00 | β−γ∗ )

β−γ∗ β

5.2

+ (L31,k+1 +

279

Time-skeleton approximations for optimal expected rewards

k X

γ∗

β

β )(EdX (X 0 , X 00 ) β−γ∗ ) L32,k+1,j L31,k+1 M45,j

β−γ∗ β

.

(5.51)

j=1

Let us now prove that, the following inequality takes place, for ε ∈ [0, ε4 ] and i = 1, . . . , k, β

β

E|Yi0 − Yi00 | β−γ∗ = E|Yε,i (τε,δ ) − Yε,i (τε,δ [Π])| β−γ∗ ≤ L(β, γ∗ )∆β (Yε,i (·), d(Π), T ), where

(5.52)

β

y β−γ∗ L(β, γ∗ ) = sup βy < ∞. e −1 y≥0

(5.53)

To get inequality (5.52), we employ the method from Silvestrov (1974, 2004) for estimation of moments for increments of stochastic processes stopped at Markov type stopping moments. Introduce the function,  tj+1 − t for tj ≤ t < tj+1 , j = 0, . . . , N − 1, (5.54) fΠ (t) = 0 for t = tN . The function fΠ (t) is continuous from the right on the interval [0, T ] and, for 0 ≤ t ≤ T, 0 ≤ fΠ (t) ≤ d(Π). (5.55) It follows from the definition of function fΠ (t) that τ 00 = τ 0 + fΠ (τ 0 ),

(5.56)

|Yε,i (τε,δ ) − Yε,i (τε,δ [Π])| = |Yε,i (τ 0 ) − Yε,i (τ 0 + fΠ (τ 0 ))|.

(5.57)

and ˜ m = h0 = vm,0 < · · · < vm,m = T i be the partition of interval [0, T ] by Let Π points vm,n = nT /m, n = 0, . . . , m, for m = 1, 2, . . .. Consider the random variables,  vm,n , if vm,n−1 ≤ τ 0 < vm,n , n = 1, . . . N, 0 ˜ τ [Πm ] = (5.58) T, if τ 0 = T. By the definition of these random variables, for any m = 1, 2, . . ., ˜ m ] ≤ τ 0 + T /m, τ 0 ≤ τ 0 [Π

(5.59)

a.s. 0 ˜ m ] −→ τ 0 [Π τ as m → ∞.

(5.60)

and, thus,

280

5

Time-skeleton reward approximations for Markov LPP

~ ε (t) = (Y ~ε (t), Xε (t)) is a càdlàg process, relations (5.59) and (5.60) Since the Z imply that the following relation holds, for i = 1, . . . , k, β

(m) ˜ m ]) − Yε,i (τ 0 [Π ˜ m ] + fΠ (τ 0 [Π ˜ m ]))| β−γ∗ Uε,i = |Yε,i (τ 0 [Π

(5.61)

β

a.s.

−→ Uε,i = |Yε,i (τ 0 ) − Yε,i (τ 0 + fΠ (τ 0 ))| β−γ∗ as m → ∞. (m)

Note also that Uε,i are non-negative random variables and the following inequality holds for any m = 1, . . . and i = 1, . . . , k, β

˜ m ])| + |Yε,i (τ 0 [Π ˜ m ] + fΠ (τ 0 [Π ˜ m ]))|) β−γ∗ Uε,i ≤ (|Yε,i (τ 0 [Π (m)

β

β

β

β

β

˜ m ])| β−γ∗ + |Yε,i (τ 0 [Π ˜ m ] + fΠ (τ 0 [Π ˜ m ]))| β−γ∗ ) ≤ 2 β−γ∗ −1 (|Yε,i (τ 0 [Π ≤ 2 β−γ∗ ( sup |Yε,i (u)|) β−γ∗ 0≤u≤T

≤2

β β−γ∗

β

L(β, γ∗ ) exp{β sup |Yε,i (u)|} = 2 β−γ∗ L(β, γ∗ )eβYi .

(5.62)

0≤u≤T

By relation (5.50), for ε ∈ [0, ε4 ] and i = 1, . . . , k, β

β

E2 β−γ∗ L(β, γ∗ )eβYi ≤ 2 β−γ∗ L(β, γ∗ )M9,i < ∞.

(5.63)

Relations (5.61) and (5.63) imply, by the Lebesgue theorem, that, for ε ∈ [0, ε4 ] and i = 1 . . . , k, (m) EUε,i → EUε,i as m → ∞. (5.64) (m)

Let us now estimate expectations EUε,i , i = 1, . . . , k. 0 00 In order to reduce notations, let us denote Yn+1,i = Yε,i (vm,n+1 ) and Yn+1,i = Yε,i (vm,n+1 ) + fΠ (vm,n+1 )). ~ ε (t), the random variSince τ 0 is a Markov moment for the Markov process Z β 0 00 0 ables χ(vm,n ≤ τ < vm,n+1 ) and |Yn+1,i −Yn+1,i | β−γ are conditionally independent ~ ε (vm,n+1 ). with respect to random vector Z Using this fact and inequality fΠ (vm,n+1 ) ≤ d(Π), we get, for ε ∈ [0, ε4 ] and i = 1 . . . , k, β

(m) ˜ m ]) − Yε,i (τ 0 [Π ˜ m ] + fΠ (τ 0 [Π ˜ m ]))| β−γ∗ EUε,i = E|Yε,i (τ 0 [Π

=

m−1 X

β

0 00 Eχ(vm,n ≤ τ 0 < vm,n+1 )|Yn+1,i − Yn+1,i | β−γ∗

n=0

=

m−1 X

 E χ(vm,n ≤ τ 0 < vm,n+1 )

n=0

β 0 00 ~ ε (vm,n+1 )} × E{|Yn+1,i − Yn+1,i | β−γ∗ /Z ≤

m−1 X n=0

P{vm,n ≤ τ 0 < vm,n+1 }

5.2

Time-skeleton approximations for optimal expected rewards

281

β

0 00 − Yn+1,i | β−γ∗ × sup E~z,vm,n+1 |Yn+1,i ~ z ∈Rk ×X



m−1 X

P{vm,n ≤ τ 0 < vm,n+1 }

n=0 0 00 − Yn+1,i |} × L(β, γ∗ ) sup E~z,vm,n+1 exp{β|Yn+1,i ~ z ∈Rk ×X



m−1 X

P{vm,n ≤ τ 0 < vm,n+1 }L(β, γ∗ )∆β (Yε,i (·), d(Π), T )

n=0

= L(β, γ∗ )∆β (Yε,i (·), d(Π), T ).

(5.65)

Relations (5.64) and (5.65) imply that, for ε ∈ [0, ε4 ] and i = 1 . . . , k, β

EUε,i = E|Yε,i (τ 0 ) − Yε,i (τ 0 + fΠ (τ 0 ))| β−γ∗ β

= E|Yε,i (τε,δ ) − Yε,i (τε,δ [Π])| β−γ∗ (m)

= lim Uε,i ≤ L(β, γ∗ )∆β (Yε,i (·), d(Π), T ). m→∞

(5.66)

Finally, let us use Lemma 4.2.7 and prove that, the following inequality takes place, for ε ∈ [0, ε7 ], where ε7 is defined in Remark 4.2.4, EdX (X 0 , X 00 )α∗ = EdX (Xε (τε,δ ), Xε (τε,δ [Π]))α∗ ≤ Υα∗ (Xε (·), d(Π), T ),

(5.67)

β . where α∗ = β−γ ∗ The proof is analogous to the proof of inequality (5.66). The following relation takes place,

dX (Xε, (τε,δ ), Xε (τε,δ [Π])| = dX (Xε,i (τ 0 ), Xε,i (τ 0 + fΠ (τ 0 ))).

(5.68)

~ ε (t) = (Y ~ε (t), Xε (t)) is a càdlàg process, relations (5.59) and (5.60) Since the Z imply that the following relation holds, ˜ m ]), Xε (τ 0 [Π ˜ m ] + fΠ (τ 0 [Π ˜ m ])))α∗ Vε(m) = dX (Xε (τ 0 [Π a.s.

−→ Vε = dX (Xε (τ 0 ), Xε (τ 0 + fΠ (τ 0 )))α∗ as m → ∞.

(5.69)

(m)

Note also that Vε are non-negative random variables and the following inequality holds for any m = 1, . . ., |Vε(m) | ≤

dX (Xε (t), Xε (t + u))α∗ .

(5.70)

dX (Xε (t), Xε (t + u))α∗ ≤ M47 ,

(5.71)

sup 0≤t≤t+u≤T

By Lemma 4.2.7, for ε ∈ [0, ε7 ], E

sup 0≤t≤t+u≤T

282

5

Time-skeleton reward approximations for Markov LPP

where M47 = M47 (α∗ ) is defined in Remark 4.2.3. Relations (5.69) and (5.71) imply, by the Lebesgue theorem, that, for ε ∈ [0, ε7 ], EVε(m) → EVε as m → ∞. (5.72) (m)

Let us now estimate expectations EVε . 0 00 In order to reduce notations, let us denote Xn+1 = Xε (vm,n+1 ) and Xn+1 = Xε (vm,n+1 + fΠ (vm,n+1 )). ~ ε (t), the random variSince τ 0 is a Markov moment for the Markov process Z α∗ 0 0 00 ables χ(vm,n ≤ τ < vm,n+1 ) and dX (Xn+1 , Xn+1 ) are conditionally independent ~ ε (vm,n+1 ). with respect to random vector Z Using this fact, Lemma 4.2.7 (for the case, where α = α∗ ) and inequality fΠ (vm,n+1 ) ≤ d(Π), we get, for ε ∈ [0, ε7 ], ˜ m ]), Xε (τ 0 [Π ˜ m ] + fΠ (τ 0 [Π ˜ m ])))α∗ EVε(m) = EdX (Xε (τ 0 [Π =

m−1 X

0 00 Eχ(vm,n ≤ τ 0 < vm,n+1 )dX (Xn+1 , Xn+1 )α∗

n=0

=

m−1 X

 E χ(vm,n ≤ τ 0 < vm,n+1 )

n=0

0 00 ~ ε (vm,n+1 )} × E{dX (Xn+1 , Xn+1 )α∗ /Z ≤

m−1 X

(5.73) P{vm,n ≤ τ 0 < vm,n+1 }

n=0 0 00 × sup E~z,vm,n+1 dX (Xn+1 , Xn+1 )α∗ ~ z ∈Rk ×X



m−1 X

P{vm,n ≤ τ 0 < vm,n+1 }Υα∗ (Xε (·), d(Π), T )

n=0

= Υα∗ (Xε (·), d(Π), T ). Relations (5.72) and (5.73) imply that, for ε ∈ [0, ε7 ], EVε = EdX (Xε (τ 0 ), Xε (τ 0 + fΠ (τ 0 )))α∗ = EdX (Xε (τε,δ ), Xε (τε,δ [Π]))α∗ = lim Vε(m) ≤ Υα∗ (Xε (·), d(Π), T ).

(5.74)

m→∞

~0

~ 00

Let us now substitute estimates for E|gε (τ 0 , eY , X 0 ) − gε (τ 0 , eY , X 00 )|, E|Yi0 − 00 Yi | , and EdX (X 0 , X 00 )α∗ and obtained, respectively, in relations (5.51), (5.52) and (5.67) in inequality (5.47). This yields the following inequality, for ε ∈ [0, ε38 ], where ε38 = ε4 ∧ ε34 ∧ ε7 = ε1 ∧ ε5 ∧ ε7 ∧ ε30 ∧ ε33 , β−γ∗ β

5.2

283

Time-skeleton approximations for optimal expected rewards

(ε)

(ε)

0 ≤ Φ(Mmax,0,T ) − Φ(MΠ,0,T ) ≤ δ + (L31,0 +

k X

γ∗ β )d(Π) L32,0,j L31,0 M45,j

j=1

+

k X

k X

i=1

j=1

(Li +

+ (L31,k+1 +

γ∗

1

β )(L(β, γ∗ )∆β (Yε,i (·), d(Π), T )) α∗ L32,i,j L31,i M45,j

k X

γ∗

1

β L32,k+1,j L31,k+1 M45,j )Υα∗ (Xε (·), d(Π), T ) α∗ .

(5.75)

j=1

Since, the expression on the right hand side of inequality (5.75) does not depend on δ, we can pass δ → 0 in inequality (5.75) and get the following inequality, for ε ∈ [0, ε38 ], (ε)

(ε)

0 ≤ Φ(Mmax,0,T ) − Φ(MΠ,0,T ) ≤ (L31,0 +

k X

γ∗ β L32,0,j L31,0 M45,j )d(Π)

j=1

+

k X

(L31,i +

i=1

× L(β, γ∗ )

k X

γ∗ β L32,i,j L31,i M45,j )

j=1 1 α∗

+ (L31,k+1 +

1

∆β (Yε,i (·), d(Π), T ) α∗

k X

γ∗ β ) L32,k+1,j L31,k+1 M45,j

j=1 1

× Υα∗ (Xε (·), d(Π), T ) α∗ = M78 d(Π) +

k X

1

M79,i ∆β (Yε,i (·), d(Π), T ) α∗

i=1 1

+ M80 Υα∗ (Xε (·), d(Π), T ) α∗ .

(5.76)

Inequality (5.76) is equivalent to inequality (5.41). The proof is complete.  Remark 5.2.1. Substitution in inequality (5.76) of constants M45,j = M45,j (β), j = 1, . . . , k, given in Remark 4.1.10 by formulas (4.40) and (4.49) used for βi = β, i = 1, . . . , k, and constants M47 = M47 (α∗ ), given in Remark 4.2.3 by formula (4.22) used for α = α∗ , yields the following formulas for constants M78 = M78 (β, γ∗ ), M79,1 = M79,1 (β, γ∗ ), . . . , M79,k = M79,k (β, γ∗ ) and M80 = M80 (β, γ∗ ),

284

5

Time-skeleton reward approximations for Markov LPP

M78 = L31,0 +

k X

γ∗ β , L32,0,j L31,0 M45,j

j=1

M79,i =

k X

k X

i=1

j=1

(L31,i +

γ∗

1

β L32,i,j L31,i M45,j )L(β, γ∗ ) α∗ ,

i = 1, . . . , k, M80 = (L31,k+1 +

k X

γ∗ β L32,k+1,j L31,k+1 M45,j ).

(5.77)

j=1

Here, constants M45,i = M45,i (β), i = 1, . . . , k are given by the following formulas, [ T ]+1 eβ (eβ − 1)R3,i M45,i = + 1 c2,i K77,i , (5.78) 1 − R3,i where constants e−β < R3,i < 1, i = 1, . . . , k, constants 0 < c2,i = c2,i (β, R3,i ) ≤ T, i = 1, . . . , k are determined by relation (4.38) used for βi = β, i = 1, . . . , k, and K77,i , i = 1, . . . , k are constants penetrating condition D21 [β] (replacing condition D14 [β]). Remark 5.2.2. Parameter ε38 = ε38 (β, γ¯ ) = ε1 ∧ ε5 ∧ ε7 ∧ ε30 ∧ ε33 , where parameter ε1 = ε1 (β) is determined by relation (4.39), parameter ε5 = ε5 (β) is determined by relation (4.48) used for βi = β, i = 1, . . . , k, parameter ε7 = ε7 (α∗ ) is determined by relation (4.84) used for α = α∗ , parameter ε30 = ε30 (¯ γ ) is a parameter penetrating condition B16 [¯ γ ], parameter ε33 is determined by condition B20 and relation (5.9). Remark 5.2.3. Parameter c = c(β) = min(c2,1 , . . . , c2,k ), where constants c2,i = c2,i (β, R3,i ), i = 1, . . . , k are determined by relation (4.38), for the case where parameters βi = β, i = 1, . . . , k. Remark 5.2.4. Values for constants R3,i , i = 1, . . . , k can be arbitrary chosen from the corresponding intervals. However, this choice affects not only the values of constants c2,i , i = 1, . . . , k, but also the value of parameter ε1 = ε1 (β) determined by relation (4.39). Analogously, values for constant R4 can be arbitrary chosen from the corresponding interval. However, this choice affects not only the value of constant c3 but also the value of parameter ε7 = ε7 (α∗ ) determined by relation (4.84). Remark 5.2.5. If the pay-off functions gε (t, e~y , x) do not depend on some of argument, t, y1 , . . . , yk or x then the corresponding constant, L31,0 , L31,1 , . . ., L31,k or L31,k+1 penetrating condition B16 [¯ γ ] equals 0 and, thus, the corresponding constant M78 , M79,1 , . . . , M79,k or M80 equals 0. In this case, the correspond1 1 ing term M78 d(Π), M79,1 ∆β (Yε,1 (·), d(Π), T ) α∗ , . . . , M79,k ∆β (Yε,k (·), d(Π), T ) α∗

5.2

Time-skeleton approximations for optimal expected rewards

285

1

or M80 Υα∗ (Xε (·), d(Π), T ) α∗ disappears from the right hand side expression in inequality (5.41). Also, the expression for parameter γ∗ given in formula (5.10) is simplified, since the maximum of γi,j over j = 1, . . . , k disappears in this expression for γ∗ for the corresponding indices i. Remark 5.2.6. If the pay-off functions gε (t, e~y , x) and the log-price processes ~ Yε (t) both do not depend on the index argument x, then, one can omit condition C6 [α∗ ] in Theorem 5.2.1. Indeed, in such case, one can always add to the process ~ε (t) a virtual index component Xε (t) with one-state phase space X = {x0 }. Y Condition C6 [α∗ ] automatically holds in this case. Also, as was pointed out in 1 Remark 5.2.5, the term M80 Υα∗ (Xε (·), d(Π), T ) α∗ disappears from the right hand side expression in inequality (5.41). Remark 5.2.7. Condition B16 [¯ γ ] can be replaced in Theorem 5.2.1 by condition B17 [¯ γ ]. In this case, parameter ε30 penetrating condition B16 [¯ γ ] should be replaced in formula ε38 = ε1 ∧ ε5 ∧ ε7 ∧ ε30 ∧ ε33 by parameter ε31 ∈ (0, ε0 ] such that all inequalities penetrating condition B16 [¯ γ ] hold, for every ε ∈ [0, ε31 ]. Remark 5.2.8. In the case, where B16 [¯ γ ] is replaced in Theorem 5.2.1 by condition B17 [¯ γ ] and the pay-off functions gε (t, e~y , x) do not depend on some of argument, t, y1 , . . . , yk or x, then the corresponding constant L33,0 , L33,1 , . . . , L33,k or L33,k+1 penetrating condition B17 [¯ γ ] can take any positive value, since the expression on left hand side of the corresponding inequality identically equals 0. The choice of this value does not affect the choice of parameter ε31 . Thus, inequality (5.41) can be written down with an arbitrary small value of the corresponding constant L33,0 , L33,1 , . . . , L33,k or L33,k+1 . This makes it possi1 ble to remove the corresponding term M78 d(Π), M79,1 ∆β (Yε,1 (·), d(Π), T ) α∗ , . . ., 1 1 M79,k ∆β (Yε,k (·), d(Π), T ) α∗ or M80 Υα∗ (Xε (·), d(Π), T ) α∗ from the right hand side expression in inequality (5.41). Remark 5.2.9. Condition B16 [¯ γ ] can also be replaced in Theorem 5.2.1 by condition B18 [¯ γ ] stronger than condition B16 [¯ γ ]. In this case, parameter ε30 penetrating condition B16 [¯ γ ] should be replaced in formula ε38 = ε1 ∧ ε5 ∧ ε7 ∧ ε30 ∧ ε33 , by parameter ε32 ∈ (0, ε0 ] such that all inequalities penetrating condition B18 [¯ γ] hold, for every ε ∈ [0, ε32 ], and constants L31,i , L32,i,j , i = 0, . . . , k + 1, j = 1, . . . , k should be replaced by constants L35,i , L36,i,j , i = 0, . . . , k + 1, j = 1, . . . , k, in inequality (5.41). Remark 5.2.10. Theorem 5.2.1 also covers the case, where T = 0. In tis case, both expressions on the left and right hand sides in inequality (5.41) equals 0. Remark 5.2.11. Inequality (5.41) presented in Theorem 5.2.1 acts for any partition Π with d(Π) ≤ c, for all ε ∈ [0, ε38 ]. This implies that inequality (5.41) holds, under conditions of Theorem 5.2.1, for any partition Πε with d(Πε ) ≤ c chosen specifically for every ε ∈ [0, ε38 ]. In particular, partitions Πε can be chosen such that d(Πε ) → 0 as ε → 0.


Condition B16[γ̄] can be replaced in Theorem 5.2.1 by the weaker condition B21[γ̄, ν̄]. Theorem 5.2.1 takes in this case the following form.

Theorem 5.2.2. Let conditions B21[γ̄, ν̄], B20, C14[β], and D21[β] hold, and γ∗ < β < ∞. Let also condition C6[νk+1 α∗] hold, for α∗ = β/(β − γ∗). Then, there exist ε40 = ε40(β, γ̄, ν̄) ∈ (0, ε0] and constants 0 ≤ M81 = M81(β, γ∗), M82,1 = M82,1(β, γ∗, ν1), ..., M82,k = M82,k(β, γ∗, νk), M83 = M83(β, γ∗, νk+1) < ∞ and c = c(β) > 0 such that the following time-skeleton approximation inequality holds, for any partition Π such that d(Π) ≤ c and ε ∈ [0, ε40],

0 ≤ Φ(M^{(ε)}_{max,0,T}) − Φ(M^{(ε)}_{Π,0,T}) ≤ M81 d(Π)^{ν0} + Σ_{i=1}^{k} M82,i ∆β(Yε,i(·), d(Π), T)^{νi/α∗} + M83 Υ_{νk+1 α∗}(Xε(·), d(Π), T)^{1/α∗}.   (5.79)

Proof. It repeats the proof of Theorem 5.2.1. We only briefly comment on the corresponding minor changes. First of all, using inequality (5.18), we can get the inequality analogous to (5.49), for ε ∈ [0, ε35],

|gε(τ′, e^{Y⃗′}, X′) − gε(τ″, e^{Y⃗″}, X″)| ≤ (L41,0 + Σ_{j=1}^{k} L42,0,j L41,0 e^{γ0,j Wj}) |τ′ − τ″|^{ν0}
+ Σ_{i=1}^{k} (L41,i + Σ_{j=1}^{k} L42,i,j L41,i e^{γi,j Wj + νi Wi}) |Yi′ − Yi″|^{νi}
+ (L41,k+1 + Σ_{j=1}^{k} L42,k+1,j L41,k+1 e^{γk+1,j Wj}) dX(X′, X″)^{νk+1},   (5.80)

and then the inequality analogous to (5.51), for ε ∈ [0, ε41], where ε41 = ε1 ∧ ε35 ∧ ε5 = ε4 ∧ ε35,

E|gε(τ′, e^{Y⃗′}, X′) − gε(τ′, e^{Y⃗″}, X″)| ≤ (L41,0 + Σ_{j=1}^{k} L42,0,j L41,0 M45,j^{γ∗/β}) d(Π)^{ν0}
+ Σ_{i=1}^{k} (L41,i + Σ_{j=1}^{k} L42,i,j L41,i M45,j^{(β(1−νi)+γ∗νi)/β}) × (E|Yi′ − Yi″|^{β/(β−γ∗)})^{(β−γ∗)νi/β}
+ (L41,k+1 + Σ_{j=1}^{k} L42,k+1,j L41,k+1 M45,j^{γ∗/β}) × (E dX(X′, X″)^{νk+1 β/(β−γ∗)})^{(β−γ∗)/β}.   (5.81)

The remaining part of the proof repeats the corresponding part of the proof of Theorem 5.2.1. □

Remark 5.2.12. Constants M81 = M81(β, γ∗, ν̄), M82,1 = M82,1(β, γ∗, ν̄), ..., M82,k = M82,k(β, γ∗, ν̄) and M83 = M83(β, γ∗, ν̄) are given by the following formulas,

M81 = L41,0 + Σ_{j=1}^{k} L42,0,j L41,0 M45,j^{γ∗/β},

M82,i = (L41,i + Σ_{j=1}^{k} L42,i,j L41,i M45,j^{(β(1−νi)+γ∗νi)/β}) L(β, γ∗)^{νi/α∗},  i = 1, ..., k,

M83 = L41,k+1 + Σ_{j=1}^{k} L42,k+1,j L41,k+1 M45,j^{γ∗/β}.   (5.82)

Remark 5.2.13. Parameter ε40 = ε40(β, γ̄, ν̄) is given by the following formula, ε40 = ε1 ∧ ε5 ∧ ε7 ∧ ε35 ∧ ε33.

5.3 Time-skeleton approximations for reward functions

In this section, we give explicit estimates, uniform with respect to the perturbation parameter, for so-called time-skeleton approximations of reward functions for continuous time modulated Markov log-price processes.

5.3.1 Time-skeleton approximations for reward functions for multivariate modulated Markov log-price processes

Let us consider some non-empty class of Markov moments M^{(ε)}_{t,T} ⊆ M^{(ε)}_{max,t,T} and define a reward function, for 0 ≤ t ≤ T, z⃗ = (y⃗, x) ∈ Z,

φ_t^{(ε)}(M^{(ε)}_{t,T}, y⃗, x) = sup_{τε,t ∈ M^{(ε)}_{t,T}} E_{z⃗,t} gε(τε,t, e^{Y⃗(τε,t)}, X(τε,t)).   (5.83)

Note that we also accept the case where t = T. In this case, any non-empty class of Markov moments M^{(ε)}_{T,T} = M^{(ε)}_{max,T,T} contains only one Markov moment τε ≡ T, and φ_T^{(ε)}(M^{(ε)}_{T,T}, y⃗, x) = φ_T^{(ε)}(y⃗, x) = gε(T, e^{y⃗}, x).


Lemma 4.1.8 and Theorem 4.1.3 give effective sufficient conditions under which there exists ε2 ∈ (0, ε0] such that, for any class M^{(ε)}_{t,T} ⊆ M^{(ε)}_{max,t,T} and 0 ≤ t ≤ T, z⃗ = (y⃗, x) ∈ Z, for ε ∈ [0, ε2],

|φ_t^{(ε)}(M^{(ε)}_{t,T}, y⃗, x)| ≤ sup_{0≤t≤T} E_{z⃗,t} |gε(t, e^{Y⃗ε(t)}, Xε(t))| < ∞.   (5.84)

Since M^{(ε)}_{t,T} ⊆ M^{(ε)}_{max,t,T}, the reward functions φ_t^{(ε)}(M^{(ε)}_{t,T}, y⃗, x) and φ_t^{(ε)}(y⃗, x) = φ_t^{(ε)}(M^{(ε)}_{max,t,T}, y⃗, x), for 0 ≤ t ≤ T, z⃗ = (y⃗, x) ∈ Z, are connected by the following inequalities, for 0 ≤ t ≤ T, z⃗ = (y⃗, x) ∈ Z,

−∞ < φ_t^{(ε)}(M^{(ε)}_{t,T}, y⃗, x) ≤ φ_t^{(ε)}(y⃗, x) < ∞.   (5.85)

Let us consider the case where the initial distribution Pε(A) = I(z⃗ ∈ A), A ∈ B_Z, is a singular probability measure concentrated in a point z⃗ = (y⃗, x) = ((y1, ..., yk), x) ∈ Z = R^k × X. Condition D21[β] automatically holds, in this case, for every point z⃗ ∈ Z, with constants K77,i = K78,i e^{β|yi|}, i = 1, ..., k, for any β ≥ 0 and any 1 < K78,i < ∞, i = 1, ..., k. In this case, Lemmas 4.1.8 and 4.1.10 have the same conditions.

The reward functional Φ(M^{(ε)}_{0,T}) coincides, in this case, with the reward function φ_0^{(ε)}(M^{(ε)}_{0,T}, y⃗, x), i.e.,

Φ(M^{(ε)}_{0,T}) = φ_0^{(ε)}(M^{(ε)}_{0,T}, y⃗, x).   (5.86)

Moreover, let us consider, for 0 ≤ t ≤ T, a shifted in time càdlàg Markov process Z⃗ε,t(s), s ≥ 0, with the phase space Z, the initial distribution Pε(A) = I(z⃗ ∈ A), the transition probabilities Pt(s, z⃗, s + u, A) = P(t + s, z⃗, t + s + u, A), and the shifted in time pay-off function gε,t(s, e^{y⃗}, x) = gε(t + s, e^{y⃗}, x), s ∈ [0, T − t].

Let us also denote by Φt(M^{(ε)}_{0,T−t}) the reward functional Φ(M^{(ε)}_{0,T−t}) for the process Z⃗ε,t(s) and the pay-off function gε,t(s, e^{y⃗}, x).

As follows from the above remarks and relation (5.86), the reward functional Φt(M^{(ε)}_{0,T−t}) coincides with the reward function φ_t^{(ε)}(M^{(ε)}_{t,T}, y⃗, x), i.e.,

Φt(M^{(ε)}_{0,T−t}) = φ_t^{(ε)}(M^{(ε)}_{t,T}, y⃗, x).   (5.87)

Let, as above, Π = ⟨0 = t0 < ··· < tN = T⟩ be a partition of the interval [0, T] and d(Π) = max_{1≤i≤N}(ti − ti−1). In order to include in the model the case T = 0, let us interpret the set Π = {0} as a partition of the interval [0, 0] and define, in this case, d(Π) = 0.

Let us take t ∈ Π (i.e., t = tm, for some m = 0, ..., N) and let M̂^{(ε)}_{Π,t,T} be the class of all Markov moments from M^{(ε)}_{max,t,T} which only take the values t ≤ tn ≤ T, and M^{(ε)}_{Π,t,T} be the class of all Markov moments τε,t from M̂^{(ε)}_{Π,t,T} such that the event {ω : τε,t(ω) = tn} ∈ σ[Z⃗ε(tr), t ≤ tr ≤ T], for t ≤ tn ≤ T. By definition,

M^{(ε)}_{Π,t,T} ⊆ M̂^{(ε)}_{Π,t,T} ⊆ M^{(ε)}_{max,t,T}.   (5.88)

Relations (5.91) and (5.88) and the above remarks imply that, for every t ∈ Π and z⃗ = (y⃗, x) ∈ Z,

−∞ < φ_t^{(ε)}(M^{(ε)}_{Π,t,T}, y⃗, x) ≤ φ_t^{(ε)}(M̂^{(ε)}_{Π,t,T}, y⃗, x) ≤ φ_t^{(ε)}(y⃗, x) < ∞.   (5.89)

The reward functions φ_t^{(ε)}(y⃗, x), φ_t^{(ε)}(M̂^{(ε)}_{Π,t,T}, y⃗, x), and φ_t^{(ε)}(M^{(ε)}_{Π,t,T}, y⃗, x) correspond to an American-type option in continuous time, a Bermudan-type option in continuous time, and an American-type option in discrete time, respectively. In the first two cases, the underlying price process is a continuous time Markov-type price process, while in the third case the corresponding price process is a discrete time Markov-type process.

The random variables Z⃗ε(tm), Z⃗ε(tm+1), ..., Z⃗ε(tN) are connected in a discrete time inhomogeneous Markov chain with the phase space Z, the initial distribution Pε(A), and transition probabilities Pε(tn, z⃗, tn+1, A). Note that we have slightly modified the standard definition of a discrete time Markov chain by considering the moments t = tm, ..., tN = T as the moments of jumps for the Markov chain Z⃗ε(tn), instead of the moments m, ..., N. This is done in order to synchronize the discrete and continuous time models. Thus, the optimization problem (5.84) for the class M^{(ε)}_{Π,0,T} is really a problem of optimal expected reward for American-type options in discrete time.

The following lemma, which is an analogue of Lemma 5.2.1, establishes a useful equality between the reward functions φ_t^{(ε)}(M^{(ε)}_{Π,t,T}, y⃗, x) and φ_t^{(ε)}(M̂^{(ε)}_{Π,t,T}, y⃗, x).

Lemma 5.3.1. Let Π be an arbitrary partition of the interval [0, T]. Let also the conditions of Lemma 4.1.8 hold. Then, for any t ∈ Π, z⃗ ∈ Z and ε ∈ [0, ε2],

φ_t^{(ε)}(M^{(ε)}_{Π,t,T}, y⃗, x) = φ_t^{(ε)}(M̂^{(ε)}_{Π,t,T}, y⃗, x).   (5.90)

Proof. It follows from Lemma 5.2.1 and relation (5.87), which lets one interpret values of reward functions as the corresponding values of optimal expected rewards. □
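As noted above, the optimization problem for the class M^{(ε)}_{Π,0,T} is an optimal stopping problem for a discrete time Markov chain, so the skeleton reward can be computed by a backward recursion over the partition points. The following minimal sketch illustrates this for a hypothetical one-dimensional chain on a finite grid of log-prices; the volatility, the grid, the Gaussian transition kernel and the put-type pay-off are illustrative assumptions, not objects defined in the text.

import numpy as np

# Hypothetical skeleton model: a one-dimensional log-price chain observed at
# the partition points t_0 < ... < t_N, discretized to a finite grid of
# log-price values, so the chain is defined by a transition matrix per step.
T, N = 1.0, 50                       # maturity and number of partition steps
t = np.linspace(0.0, T, N + 1)       # partition Pi
y = np.linspace(-2.0, 2.0, 201)      # grid of log-prices
sigma = 0.3                          # hypothetical volatility of the log-price

def transition_matrix(dt):
    """Gaussian transition kernel of the log-price, renormalized on the grid."""
    diff = y[None, :] - y[:, None]
    P = np.exp(-diff**2 / (2.0 * sigma**2 * dt))
    return P / P.sum(axis=1, keepdims=True)

def payoff(tn, y):
    """Hypothetical put-type pay-off g(t, e^y) = max(K - e^y, 0)."""
    K = 1.0
    return np.maximum(K - np.exp(y), 0.0)

# Backward recursion for the reward function at the partition points:
# at t_N it equals the pay-off, and at earlier points it is the maximum of
# immediate exercise and the conditional expectation of the next-step reward.
phi = payoff(t[N], y)
for n in range(N - 1, -1, -1):
    P = transition_matrix(t[n + 1] - t[n])
    continuation = P @ phi
    phi = np.maximum(payoff(t[n], y), continuation)

# phi now approximates the reward function at t_0 on the grid; its value at
# the initial log-price approximates the optimal expected skeleton reward.
print(phi[np.searchsorted(y, 0.0)])

Refining the grid and the partition in this sketch plays the same role as letting d(Π) → 0 in the time-skeleton approximation theorems of this chapter.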

5.3.2 Time-skeleton approximations for reward functions for multivariate modulated Markov log-price processes

Let us assume that conditions C14[β], B16[γ̄] and B20 hold and β > γ∗. Then, by Theorem 4.1.3 (which should be applied for βi = β, i = 1, ..., k and some γi = γ ∈ (γ∗, β), i = 1, ..., k), there exists ε2 = ε2(β, γ) ∈ (0, ε0] (defined by the corresponding formulas pointed out in Remark 4.1.7, namely, ε2 = ε1 ∧ ε3) such that, for every ε ∈ [0, ε2] and (t, y⃗, x) ∈ [0, T] × R^k × X, the log-reward function φ_t^{(ε)}(M^{(ε)}_{t,T}, y⃗, x) is finite, i.e., for 0 ≤ t ≤ T, z⃗ = (y⃗, x) ∈ Z,

|φ_t^{(ε)}(M^{(ε)}_{t,T}, y⃗, x)| ≤ E_{z⃗,t} sup_{t≤s≤T} |gε(s, e^{Y⃗ε(s)}, Xε(s))| < ∞.   (5.91)

The following theorem gives a time-skeleton approximation for the reward functions φ_t^{(ε)}(y⃗, x), asymptotically uniform with respect to the perturbation parameter ε.

Theorem 5.3.1. Let conditions B16[γ̄], B20, and C14[β] hold, and γ∗ < β < ∞. Let also condition C6[α∗] hold, for α∗ = β/(β − γ∗). Then, there exist ε42 = ε42(β, γ̄) ∈ (0, ε0] and functions 0 ≤ M84(y⃗) = M84(y⃗, β, γ∗), M85,1(y⃗) = M85,1(y⃗, β, γ∗), ..., M85,k(y⃗) = M85,k(y⃗, β, γ∗), M86(y⃗) = M86(y⃗, β, γ∗) < ∞, y⃗ ∈ R^k and c = c(β) > 0 such that the following time-skeleton approximation inequality holds, for any partition Π such that d(Π) ≤ c and t ∈ Π, z⃗ = (y⃗, x) = ((y1, ..., yk), x) ∈ R^k × X, ε ∈ [0, ε42],

0 ≤ φ_t^{(ε)}(y⃗, x) − φ_t^{(ε)}(M^{(ε)}_{Π,t,T}, y⃗, x) ≤ M84(y⃗) d(Π) + Σ_{i=1}^{k} M85,i(y⃗) ∆β(Yε,i(·), d(Π), T)^{1/α∗} + M86(y⃗) Υα∗(Xε(·), d(Π), T)^{1/α∗}.   (5.92)

5.3

Time-skeleton approximations for reward functions

291

Constants M78 , M79,i , i = 1, . . . , k and M80 should be replaced by constants M84 , M85,i , i = 1, . . . , k and M86 computed by the same formulas but with replacement of constants K77,i , i = 1, . . . , k by constants K78,i e|yi | , i = 1, . . . , k, where one can take any 1 < K78,i < ∞, i = 1, . . . , k. This let use, in fact, in the final form of inequality (5.92) values K78,i = 1, i = 1, . . . , k. Also parameter ε38 = ε1 ∧ ε5 ∧ ε7 ∧ ε30 ∧ ε12 should be replaced by parameter ε42 = ε1 ∧ ε7 ∧ ε30 ∧ ε33 . Indeed, parameter ε5 can be excluded from formula defining ε42 since, as was mentioned above, condition D21 [β] holds and, thus, one can take ε5 = ε0 .  Remark 5.3.1. Functions M84 (~ y ) = M84 (~ y , β, γ∗ ), M85,1 (~ y ) = M85,1 (~ y , β, γ∗ ), . . ., M85,k (~ y ) = M15,k (~ y , β, γ∗ ) and M86 (~ y ) = M86 (~ y , β, γ∗ ) are given by the following formulas, for ~ y ∈ Rk , M84 (~ y ) = L31,0 +

k X

γ∗ β (yj ), L31,0,j L31,0 M45,j

j=1 k k γ∗ X X 1 β L32,i,j L31,i M45,j (L31,i + M85,i (~ y) = (yj ))L(β, γ∗ ) α∗ , i=1

j=1

i = 1, . . . , k, M86 (~ y ) = (L31,k+1 +

k X

γ∗ β L32,k+1,j L31,k+1 M45,j (yj )),

(5.93)

j=1

where functions M45,i (yi ) = M45,i (yi , β), i = 1, . . . , k are given by the following formulas, for yi ∈ R1 , i = 1, . . . , k, M45,i (yi ) =

[ T ]+1 eβ (eβ − 1)R3,i + 1 c2,i e|yi | , 1 − R3,i

(5.94)

where constants e−β < R3,i < 1, i = 1, . . . , k, constants 0 < c2,i = c2,i (β, R3,i ) ≤ T, i = 1, . . . , k are determined by relation (4.38), which should be used for the case where βi = β, i = 1, . . . , k. Remark 5.3.2. Parameter ε42 = ε42 (β, γ¯ ) = ε1 ∧ ε7 ∧ ε30 ∧ ε33 , where parameter ε1 = ε1 (β) is determined by relation (4.39), parameter ε7 = ε7 (α∗ ) is determined by relation (4.84), which all should be used for the case where α = α∗ , parameter ε30 = ε30 (¯ γ ) is a parameter penetrating condition B16 [¯ γ ], parameter ε33 is determined by relation (5.9). Remark 5.3.3. Parameter c = c(β) = min(c2,1 , . . . , c2,k ), where constants c2,i = c2,i (β, R3,i ), i = 1, . . . , k are determined by relation (4.38), for the case where parameters βi = β, i = 1, . . . , k. Remark 5.3.4. Values for constants R3,i , i = 1, . . . , k can be arbitrary chosen from the corresponding intervals. However, this choice affects not only the values of

292

5

Time-skeleton reward approximations for Markov LPP

constants c2,i , i = 1, . . . , k, but also the value of parameter ε1 = ε1 (β) determined by relation (4.39). Analogously, values for constant R4 can be arbitrary chosen from the corresponding interval. However, this choice affects not only the value of constant c3 but also the value of parameter ε7 = ε7 (α∗ ) determined by relation (4.84). Remark 5.3.5. If the pay-off functions gε (t, e~y , x) do not depend on some of argument, t, y1 , . . . , yk or x then the corresponding constant, L31,0 , L31,1 , . . ., L31,k or L31,k+1 penetrating condition B16 [¯ γ ] equals 0 and, thus, M84 (~ y ), M85,1 (~ y ), . . . , M85,k (~ y ) or M86 (~ y ) is identically equals 0, for ~ y ∈ Rk . In this 1 case, the corresponding term M84 (~ y )d(Π), M85,1 (~ y )∆β (Yε,1 (·), d(Π), T ) α∗ , . . ., 1 1 M85,k (~ y )∆β (Yε,k (·), d(Π), T ) α∗ or M86 (~ y )Υα∗ (Xε (·), d(Π), T ) α∗ disappears from the right hand side expression in inequality (5.92). Also, the expression for parameter γ∗ given in formula (5.10) is simplified, since the maximums of γi,j over j = 1, . . . , k disappears in this formula for the corresponding indices i. Remark 5.3.6. If both, the pay-off functions gε (t, e~y , x) and the log-price ~ε (t), do not depend on the index argument x, then, one can omit processes Y condition C6 [α∗ ] in Theorem 5.3.1. Indeed, in such case, one can always to add ~ε (t) a virtual index component Xε (t) with one-state phase space to the process Y X = {x0 }. Condition C6 [α∗ ] automatically holds in this case. Also, as was pointed 1 out in Remark 5.3.5, the term M86 (~ y )Υα∗ (Xε (·), d(Π), T ) α∗ disappears from the right hand side expression in inequality (5.92). Remark 5.3.7. Condition B16 [¯ γ ] can be replaced in Theorem 5.3.1 by condition B17 [¯ γ ]. In this case, parameter ε30 penetrating condition B16 [¯ γ ] should be replaced in formula ε42 = ε42 (β, γ¯ ) = ε1 ∧ ε7 ∧ ε30 ∧ ε33 by parameter ε31 ∈ (0, ε0 ] such that all inequalities penetrating condition B16 [¯ γ ] hold, for every ε ∈ [0, ε31 ]. Remark 5.3.8. In the case, where B16 [¯ γ ] is replaced in Theorem 5.2.1 by condition B17 [¯ γ ] and the pay-off functions gε (t, e~y , x) do not depend on some of argument, t, y1 , . . . , yk or x, then the corresponding constant L33,0 , L33,1 , . . . , L33,k or L33,k+1 penetrating condition B17 [¯ γ ] can take any positive value, since the expression on left hand side of the corresponding inequality identically equals 0. The choice of this value does not affect the choice of parameter ε31 . Thus, inequality (5.41) can be written down with an arbitrary small value of the corresponding constant L33,0 , L33,1 , . . . , L33,k or L33,k+1 . This makes it possi1 ble to remove the corresponding term M84 d(Π), M85,1 ∆β (Yε,1 (·), d(Π), T ) α∗ , . . ., 1 1 M85,k ∆β (Yε,k (·), d(Π), T ) α∗ or M86 Υα∗ (Xε (·), d(Π), T ) α∗ from the right hand side expression in inequality (5.92). Remark 5.3.9. Condition B16 [¯ γ ] can also be replaced in Theorem 5.3.1 by condition B18 [¯ γ ] stronger than condition B16 [¯ γ ]. In this case, parameter ε30 penetrating condition B16 [¯ γ ] should be replaced in formula ε42 = ε1 ∧ ∧ε7 ∧ ε30 ∧ ε33 , by parameter ε32 ∈ (0, ε0 ] such that all inequalities penetrating condition B18 [¯ γ]

5.3

Time-skeleton approximations for reward functions

293

hold, for every ε ∈ [0, ε32 ], and constants L31,i , L32,i,j , i = 0, . . . , k + 1, j = 1, . . . , k should be replaced by constants L35,i , L36,i,j , i = 0, . . . , k + 1, j = 1, . . . , k, in inequality (5.92). Remark 5.3.10. Theorem 5.3.1 also covers the case, where t = T . In tis case, both expressions on the left and right hand sides in inequality (5.92) equals 0. Remark 5.3.11. Inequality (5.92) presented in Theorem 5.3.1 acts for any partition Π with d(Π) ≤ c, for all ε ∈ [0, ε42 ]. This implies that inequality (5.92) holds, under conditions of Theorem 5.3.1, for any partition Πε with d(Πε ) ≤ c chosen specifically for every ε ∈ [0, ε42 ]. In this case, if one would like to write down inequality (5.92) for given 0 ≤ t ≤ T , then point t should be included in partition Πε , for every ε ∈ [0, ε16 ]. In particular, partitions Πε can be chosen such that d(Πε ) → 0 as ε → 0. Remark 5.3.12. As was pointed out in the remarks made after Theorem (ε) (ε) 2.4.3∗ , the reward function φt (MΠn ,t,T , ~ y , x) is measurable as function of argument (~ y , x), for every t. Also, recurrence relations given in Theorem 2.3.3∗ readily imply that this reward function is also measurable as function of argument (t, ~ y , x), due to model assumptions about measurability of transition probabilities Pε (t, ~z, t + u, ·) as function of argument (t, ~z, u) and the pay-off function gε (t, e~y , x) as function of argument (t, ~ y , x). Theorems 5.3.1 implies that (ε) (ε) φt (MΠn ,t,T , ~ y , x) uniformly converge (for every ε ∈ [0, ε42 ]) the reward function (ε)

(ε)

φt (~ y , x) if d(Πn ) → 0 as n → 0. Thus, φt (~ y , x) is also a measurable function of argument (t, ~ y , x). Condition B16 [¯ γ ] can be replaced in Theorem 5.3.1 by weaker condition B21 [¯ γ , ν¯]. Theorem 5.3.1 takes in this case the following form. Theorem 5.3.2. Let conditions B16 [¯ γ , ν¯], B20 , and C14 [β] hold, and γ∗ < β . Then, there exist β < ∞. Let also condition C6 [νk+1 α∗ ] holds, for α∗ = β−γ ∗ ε43 = ε43 (β, γ¯ , ν¯) ∈ (0, ε0 ] and functions 0 ≤ M87 (~ y ) = M87 (~ y , β, γ∗ ), M88,1 (~ y) = M88,1 (~ y , β, γ∗ , ν1 ), . . . , M88,k (~ y ) = M88,k (~ y , β, γ∗ , νk ), M89 (~ y ) = M89 (~ y , β, γ∗ , νk+1 ) < ∞, ~ y ∈ Rk and c = c(β) > 0 such that the following time-skeleton approximation inequality holds, for any partition Π such that d(Π) ≤ c and t ∈ Π, ~z = (~ y , x) = ((y1 , . . . , yk ), x) ∈ Rk × X, ε ∈ [0, ε43 ], (ε)

(ε)

(ε)

0 ≤ φt (~ y , x) − φt (MΠ,t,T , ~ y , x) ν0

≤ M87 (~ y )d(Π)

+

k X

νi

M88,i (~ y ) ∆β (Yε,i (·), d(Π), T ) α∗

i=1 1

+ M89 (~ y )Υνk+1 α∗ (Xε (·), d(Π), T ) α∗ .

(5.95)

Remark 5.3.13. Functions M87 (~ y ) = M87 (~ y , β, γ∗ ), M88,1 (~ y ) = M88,1 (~ y , β, γ∗ , ν¯), . . ., M88,k (~ y ) = M88,k (~ y , β, γ∗ , ν¯) and M89 (~ y ) = M89 (~ y , β, γ∗ , ν¯) are given

294

5

Time-skeleton reward approximations for Markov LPP

by the following formulas, for ~ y ∈ Rk , M84 (~ y ) = L41,0 +

k X

γ∗ β (yj ), L41,0,j L41,0 M45,j

j=1

M85,i (~ y) =

k X

k X

i=1

j=1

(L41,i +

L42,i,j L41,i (M45,j (yj ))

β(1−νi )+γ∗ νi β

νi

)L(β, γ∗ ) α∗ ,

i = 1, . . . , k, M86 (~ y ) = (L41,k+1 +

k X

γ∗ β L42,k+1,j L41,k+1 M45,j (yj )).

(5.96)

j=1

Remark 5.3.14. Parameter ε43 = ε43 (β, γ¯ , ν¯) is given by the following formula ε43 = ε1 ∧ ε7 ∧ ε35 ∧ ε33 .

5.4 Time-skeleton reward approximations for LPP with independent increments In this section, we present results about time-skeleton reward approximations for log-price processes with independent increments.

5.4.1 Time-skeleton approximations for optimal expected rewards for multivariate log-price processes with independent increments ~ε (t) = (Yε,1 (t), . . . , Yε,k (t)), t ∈ [0, T ] be, for every ε ∈ Let, as in Subsection 4.3.2, Y [0, ε0 ] where 0 < ε0 < ∞, a càdlàg multivariate log-price process with independent ~ε (0) ∈ A} and distributions of increments with initial distribution Pε (A) = P{Y ~ ~ increments Pε (t, t + u, A) = P{Yε (t + u) − Yε (t) ∈ A}. We consider the model with log-price processes without index component. In this case, the first-type modulus of exponential moment compactness ∆β (Yε,i (·), c, T ) coincides with the simpler first-type modulus of exponential moment compactness ∆0β (Yε,i (·), c, T ). ¯ assumed to Condition C14 [β] takes in this case the form of condition C11 [β], ¯ hold for a k-dimensional vector β = (β, . . . , β) with identical components β ≥ 0: C15 [β]: limc→0 limε→0 ∆0β (Yε,i (·), c, T ) = 0, i = 1, . . . , k. Let introduce the following analogue of condition E9 [¯ α], assumed to hold for some vector parameter α ¯ = (α, . . . , α), with identical components α ≥ 0: E11 [α]: limε→0 Ξ0±α (Yε,i (·), T ) < K79,i , i = 1, . . . , k, for some 1 < K79,i < ∞, i = 1, . . . , k.

5.4

295

Time-skeleton approximations for LPP with independent increments

The following useful lemma is the direct corollary of Lemma 4.3.6. Lemma 5.4.1. Conditions C12 and E11 [α] imply condition C15 [β] to hold if, either α > β > 0 or α = β = 0. We also assume that gε (t, ey , x) = gε (t, ey ), (t, ~ y , x) ∈ [0, T ] × Rk × X be, for every ε ∈ [0, ε0 ], a pay-off function, which also do not depend on index argument x. In fact, we can always add as virtual index component Xε (t) with the degen~ε (t) and also to erated one-point phase space X = {x0 } to the log-price process Y y y consider the pay-off function gε (t, e ) = gε (t, e , x0 ) as a function defined on the space [0, T ] × Rk × X. In this case, conditions B16 [¯ γ ] – B20 take simpler forms. Condition B16 [¯ γ ] reduces to the following Lipschitz-type condition, assumed to hold for some k(k + 1)-dimensional vector γ¯ = (γ0,1 , . . ., γ0,k , . . . , γk,1 , . . ., γk,k ) with non-negative components: B23 [¯ γ ]: There exists ε43 = ε43 (¯ γ ) ∈ (0, ε0 ] such that for every ε ∈ [0, ε43 ]: 0 y ~ 00 y ~ P|gk ε (t ,e )−gε|y(t|γ,e )| 0 00 ≤ L45,0 , for some 0 ≤ (a) sup0≤t0 ,t00 ≤T, t0 6=t00 , ~y∈Rk j 0,j (1+

j=1

)|t −t |

L46,0,j e

L45,0 , L46,0,1 , . . . , L46,0,k < ∞; (b) sup0≤t≤T, ~yi0 ,

~ yi00 ∈Rk , ~ yi0 6=~ yi00

(1+

P j6=i

y ~0 y ~00 |gε (t,e i )−gε (t,e i )| + γi,j |yj |+|yi |+ L46,i,j e +L46,i,i e(γi,i +1)|yi | )|yi0 −yi00 |

≤ L45,i ,

i = 1, . . . , k, for some 0 ≤ L45,i , L46,i,1 , . . . , L46,i,k < ∞, i = 1, . . . , k. The following condition is an analogue of condition B23 [¯ γ ] based on the use of upper limits in parameter ε: B24 [¯ γ ]: The following asymptotic relations hold: 0 y ~ 00 y ~ P|gk ε (t ,e )−gε|y(t|γ,e )| (a) limε→0 sup0≤t0 ,t00 ≤T, t0 6=t0 , ~y∈Rk (1+

j=1

L48,0,j e

j

0,j )|t0 −t00 |



L47,0 , for

some 0 ≤ L47,0 , L48,0,1 , . . . , L48,0,k < ∞; (b) limε→0 sup0≤t≤T, ~yi0 ,

~ yi00 ∈Rk , ~ yi0 6=~ yi00

(1+

P j6=i

y ~0 y ~00 |gε (t,e i )−gε (t,e i )| + γi,j |yj |+|yi |+ L48,i,j e +L48,i,i e(γi,i +1)|yi | )|yi0 −yi00 |

≤ L47,i , i = 1, . . . , k, for some 0 ≤ L47,i , L48,i,1 , . . . , L48,i,k < ∞, i = 1, . . . , k. Condition B24 [¯ γ ] implies that there exists ε44 = ε44 (¯ γ ) ∈ (0, ε0 ] such that all inequalities penetrating condition B23 [¯ γ ] holds, for every ε ∈ [0, ε44 ]. Condition B23 [¯ γ ] does not require of the absolute continuity of the pay-off functions gε (t, e~y , x) in argument t and ~ y . In the case, where these functions are absolutely continuous, a condition sufficient for condition B23 [¯ γ ] can be formulated in terms of their partial derivatives. Let us assume that the following Lipschitz-type condition holds, for some k(k + 1)-dimensional vector γ¯ with non-negative components: B25 [¯ γ ]: There exists ε45 = ε45 (¯ γ ) ∈ (0, ε0 ] such that for every ε ∈ [0, ε45 ]:

296

5

Time-skeleton reward approximations for Markov LPP

(a) function gε (t, e~y ) is absolutely continuous in t, with respect to the Lebesgue measure on [0, T ], for every fixed ~ y ∈ Rk and in ~ y , with respect to the Lebesgue measure on Rk , for every fixed t ∈ [0, T ]; y ~

(t,e ) |≤ (b) for every ~ y ∈ Rk , the absolute value of the partial derivative | ∂gε ∂t Pk γ0,j |yj | (1 + j=1 L50,0,j e )L49,0 , for almost all t ∈ [0, T ], with respect to the Lebesgue measure on [0, T ], for some 0 ≤ L49,0 , L50,0,1 , . . ., L50,0,k < ∞; (c) for every t ∈ [0, T ] and i = 1, . . . , k, the absolute value of the partial ~ Pk (t,ey ) | ≤ (1 + j=1 L50,i,j eγi,j |yj |+|yi | )L49,i , for almost all derivative | ∂gε∂y i ~ y ∈ Rk , with respect to the Lebesgue measure on Rk , for some 0 ≤ L49,i , L50,i,1 , . . . , L50,i,k < ∞.

Condition B19 [γ], assumed to hold for some γ ≥ 0, reduces to the following form: B26 [γ]: limε→0 sup0≤t≤T,~y∈Rk

y ~

1+

P|gk ε (t,e

L52,i < ∞, i = 1, . . . , k.

i=1

)|

L52,i eγ|yi |

< L51 , for some 0 < L51 < ∞, 0 ≤

Condition B20 reduces to the following form: y0 = (y0,1 , . . . , y0,k ) ∈ Rk and B27 : limε→0 sup0≤t≤T |gε (t, e~y0 )| < L53 , for some ~ 0 < L53 < ∞. Condition B27 implies that there exists ε46 ∈ (0, ε0 ] such that the following inequality holds, for ε ∈ [0, ε46 ], sup |gε (t, e~y0 )| < L53 .

(5.97)

0≤t≤T

Finally parameters γ∗ take the following simpler form, γ◦ = max γi,j + 1 1≤i,j≤k

≤ γ∗ = max γ0,j ∨ ( max γi,j + 1). 1≤j≤k

1≤i,j≤k

(5.98)

The following lemma is the direct corollary of Lemma 5.1.3. Lemma 5.4.2. Let conditions B23 [¯ γ ] and B27 hold. Then, condition B26 [γ] holds, for every γ > γ◦ . Since the index component is absent, the moment compactness condition C6 [α∗ ], for the index component, can be omitted. ¯ assumed to Condition D21 [β] takes in this case the form of condition D3 [β] ¯ hold for a vector parameter β = (β, . . . , β) with identical components β ≥ 0: D22 [β]: limε→0 EeβYε,i (0) < K80,i , i = 1, . . . , k, for some 1 < K80,i < ∞, i = 1, . . . , k.

5.4

297

Time-skeleton approximations for LPP with independent increments

Theorem 5.2.1 takes the following form. Theorem 5.4.1. Let conditions B23 [¯ γ ], B27 , C15 [β], and D22 [β] hold, and γ∗ < β < ∞. Then, there exist ε47 = ε47 (β, γ¯ ) ∈ (0, ε0 ] and constants 0 ≤ M87 = M87 (β, γ∗ ), M88,1 = M88,1 (β, γ∗ ), . . . , M88,k = M88,k (β, γ∗ ) and c = c(β) > 0 such that the following time-skeleton approximation inequality holds, for any partition Π such that d(Π) ≤ c and ε ∈ [0, ε47 ], (ε)

(ε)

0 ≤ Φ(Mmax,0,T ) − Φ(MΠ,0,T ) k X

≤ M87 d(Π) +

M88,i ∆0β (Yε,i (·), d(Π), T )

β−γ∗ β

.

(5.99)

i=1

Remark 5.4.1. Constants M87 , M88,1 , . . . , M88,k and parameter ε47 can be computed using the corresponding formulas in Remarks 5.2.1 – 5.2.3, where they should replace parameters M78 , M79,1 , . . . , M79,k and parameter ε38 , with changes caused by the simplifications in conditions of Theorem 5.2.1 described above. Remark 5.4.2. Inequality (5.99) presented in Theorem 5.4.1 acts for any partition Π with d(Π) ≤ c, for all ε ∈ [0, ε47 ]. This implies that inequality (5.99) holds, under conditions of Theorem 5.4.1, for any partition Πε with d(Πε ) ≤ c chosen specifically for every ε ∈ [0, ε47 ], in particular, for partitions Πε such that d(Πε ) → 0 as ε → 0. A variant of this condition B23 [¯ γ ] with weaken Lipschitz-type inequalities takes the form of the following condition, assumed to hold for a vector parameter γ¯ = (γ0,1 , . . . , γ0,k , . . . , γk,1 , . . ., γk,k ), with non-negative components, and a vector parameter ν¯ = (ν0 , . . ., νk ), with components taking values in interval (0, 1]: B28 [¯ γ , ν¯]: There exists ε48 = ε48 (¯ γ , ν¯) ∈ (0, ε0 ] such that for every ε ∈ [0, ε48 ]: 0 y ~ 00 y ~ Pk|gε (t ,e )−g|yε (t|γ ,e )|0 00 ν ≤ L54,0 , for some (a) sup0≤t0 ,t00 ≤T, t0 6=t00 , ~y∈Rk j 0,j (1+

j=1

L55,0,j e

)|t −t |

0

0 ≤ L54,0 , L55,0,1 , . . . , L55,0,k < ∞; (b) sup 0≤t≤T, ~yi0 ,

~ yi00 ∈Rk , ~ yi0 6=~ yi00

y ~0 y ~00 i )−gε (t,e i | + + L55,i,j eγi,j |yj |+νi |yi | +L55,i,i e(γi,i +νi )|yi | )|yi0 −yi00 |νi

|gε (t,e

(1+

P j6=i



L54,i , i = 1, . . . , k, for some 0 ≤ L54,i , L55,i,1 , . . . , L55,i,k < ∞, i = 1, . . . , k. If vector ν¯ = (1, . . . , 1), condition B28 [¯ γ , ν¯] reduces to condition B23 [¯ γ ]. Remark 5.4.3. Condition B23 [¯ γ ] can be replaced in Theorem 5.4.1 by the β−γ∗ weaker condition B28 [¯ γ , ν¯]. Quantities d(Π) and ∆0β (Yε,i (·), d(Π), T ) β should (β−γ∗ )νi

be replaced, respectively, by d(Π)ν0 and ∆0β (Yε,i (·), d(Π), T ) β , in inequality (5.99) given in Theorem 5.4.1. Also constants M87 , M88,i , i = 1, . . . , k and parameter ε47 should be changed as it is described in Remarks 5.2.12 and 5.2.13.

298

5

Time-skeleton reward approximations for Markov LPP

5.4.2 Time-skeleton approximations for reward functions for multivariate log-price processes with independent increments In the case, where the index component is absent and the pay-off function does (ε) (ε) not depend of the index argument, the reward functions φt (~ y , x) = φt (~ y ) and (ε) (ε) (ε) (ε) φt (MΠ,t,T , ~ y , x) = φt (MΠ,t,T , ~ y ) also do not depend of x. Theorem 5.3.1 takes in this case the following form. Theorem 5.4.2. Let conditions B23 [¯ γ ], B27 and C15 [β] hold, and γ∗ < β < ∞. Then, there exist ε49 = ε49 (β, γ¯ ) ∈ (0, ε0 ] and functions 0 ≤ M89 (~ y) = M89 (~ y , β, γ∗ ), M90,1 (~ y ) = M90,1 (~ y , β, γ∗ ), . . . , M90,k (~ y ) = M90,k (~ y , β, γ∗ ) < ∞, ~ y ∈ Rk and c = c(β) > 0 such that the following time-skeleton approximation inequality holds, for any partition Π such that d(Π) ≤ c and t ∈ Π, ~ y = (y1 , . . . , yk ) ∈ Rk , ε ∈ [0, ε49 ], (ε)

(ε)

(ε)

0 ≤ φt (~ y ) − φt (MΠ,t,T , ~ y) ≤ M89 (~ y )d(Π) +

k X

M90,i (~ y ) ∆β (Yε,i (·), d(Π), T )

β−γ∗ β

.

(5.100)

i=1

Remark 5.4.4. Constants M89 , M90,1 , . . . , M90,k and parameter ε49 can be computed using the corresponding formulas in Remarks 5.3.1 – 5.3.3, where they should replace parameters M84 , M85,1 , . . . , M85,k and parameter ε42 , with changes caused by the simplifications of conditions in Theorem 5.3.1 described above. Remark 5.4.5. Inequality (5.100) presented in Theorem 5.4.2 acts for any partition Π with d(Π) ≤ c, for all ε ∈ [0, ε49 ]. This implies that inequality (5.100) holds, under conditions of Theorem 5.4.2, for any partition Πε with d(Πε ) ≤ c chosen specifically for every ε ∈ [0, ε49 ], in particular, for partitions Πε such that d(Πε ) → 0 as ε → 0. In this case, if one would like to write down inequality (5.100) for given 0 ≤ t ≤ T , then point t should be included in partition Πε , for every ε ∈ [0, ε49 ], in particular, for partitions Πε such that d(Πε ) → 0 as ε → 0. Remark 5.4.6. Condition B23 [¯ γ ] can be replaced in Theorem 5.4.2 by the β−γ∗ weaker condition B28 [¯ γ , ν¯]. Quantities d(Π) and ∆0β (Yε,i (·), d(Π), T ) β should (β−γ∗ )νi

be replaced, respectively, by d(Π)ν0 and ∆0β (Yε,i (·), d(Π), T ) β , in inequality (5.100) given in Theorem 5.4.2. Also constants M89 , M90,i , i = 1, . . . , k and parameter ε49 should be changed as it is described in Remarks 5.3.12 and 5.3.13.

5.5 Time-skeleton reward approximations for diffusion LPP In this section, we present results about time-skeleton reward approximations for diffusion log-price processes.

5.5

299

Time-skeleton reward approximations for diffusion LPP

5.5.1 Time-skeleton reward approximations for multivariate diffusion log-price processes with bounded characteristics and their time-skeleton approximations ~0 (t) be a diffusion process given by the stochastic Let, as in Subsection 5.4.2, Y ~ε (t) be, for every ε ∈ (0, ε0 ], a time-skeleton differential equation (4.126) and Y approximation log-price process given by the stochastic difference equation (4.127) and relation (4.128). We assume that conditions G8 , G10 , and G12 hold. We consider the model with log-price processes without index component. In this case, the first-type modulus of exponential moment compactness ∆β (Yε,i (·), c, T ) takes simpler form, ∆00β (Yε,i (·), c, T ) =

sup

sup E~y,t (eβ|Yε,i (t+u)−Yε,i (t)| − 1).

(5.101)

0≤t≤t+u≤t+c≤T ~ y ∈Rk

Condition C14 [β] takes in this case the form of the following simpler condition, assumed to hold for some β ≥ 0: C16 [β]: limc→0 limε→0 ∆00β (Yε,i (·), c, T ) = 0, i = 1, . . . , k. In this case, conditions G8 , G10 , and G12 imply, by Lemma 4.4.3, that con~ε (t), for any β ≥ 0. dition C16 [β] holds for processes Y Since the index component is absent, the moment compactness condition C6 [α∗ ], for the index component, can be omitted. Since the log-price processes have no index component, we also assume that gε (t, e~y ) be, for every ε ∈ [0, ε0 ], a pay-off function, which also does not depend on index argument x. In this case, conditions B16 [¯ γ ] – B21 [¯ γ , ν¯] reduce to simpler conditions B23 [¯ γ] – B28 [¯ γ , ν¯]. Also, parameter γ∗ is given, in this case, by relation (5.107). ~ε (0) = Y0 (0) for all ε ∈ (0, ε0 ], condition D16 [β] ¯ takes the form of Since Y the following simpler condition, assumed to hold for some vector parameter β¯ = (β, . . . , β) with identical components β ≥ 0: D23 [β]: Eeβ|Y0,i (0)| < K81,i , i = 1, . . . , k, for some 1 < K81,i < ∞, i = 1, . . . , k. Theorem 5.2.1 takes the following form. ~0 (t) be a diffusion process given by the stochastic difTheorem 5.5.1. Let Y ~ε (t) be, for every ε ∈ (0, ε0 ], the corresponding ferential equation (4.126) and Y approximating time-skeleton log-price process given by the stochastic difference equation (4.127) and relation (4.128). Let conditions G8 , G10 , G12 and, also, conditions B23 [¯ γ ], B27 D23 [β] hold, for some γ∗ < β < ∞. Then, there exist ε50 = ε50 (β, γ¯ ) ∈ (0, ε0 ] and constants 0 ≤ M91 = M91 (β, γ∗ ), M92,1 =

300

5

Time-skeleton reward approximations for Markov LPP

M92,1 (β, γ∗ ), . . . , M92,k = M92,k (β, γ∗ ) and c = c(β) > 0 such that the following time-skeleton approximation inequality holds, for any partition Π such that d(Π) ≤ c and ε ∈ [0, ε50 ], (ε)

(ε)

0 ≤ Φ(Mmax,0,T ) − Φ(MΠ,0,T ) ≤ M91 d(Π) +

k X

M92,i ∆00β (Yε,i (·), d(Π), T )

β−γ∗ β

.

(5.102)

i=1

Remark 5.5.1. Constants M91 , M92,1 , . . . , M92,k and parameter ε50 can be computed using the corresponding formulas in Remarks 5.2.1 – 5.2.3, where they should replace parameters M78 , M79,1 , . . . , M79,k and parameter ε38 , with changes caused by the simplifications of conditions in Theorem 5.2.1 described above. Remark 5.5.2. Inequality (5.102) presented in Theorem 5.5.1 acts for any partition Π with d(Π) ≤ c, for all ε ∈ [0, ε50 ]. This implies that inequality (5.99) holds, under conditions of Theorem 5.5.1, for any partition Πε with d(Πε ) ≤ c chosen specifically for every ε ∈ [0, ε50 ], in particular, for partitions Πε such that d(Πε ) → 0 as ε → 0. Remark 5.5.3. Condition B23 [¯ γ ] can be replaced in Theorem 5.5.1 by the β−γ∗ weaker condition B28 [¯ γ , ν¯]. The quantities d(Π) and ∆0β (Yε,i (·), d(Π), T ) β (β−γ∗ )νi

should be replaced, respectively, by d(Π)ν0 and ∆0β (Yε,i (·), d(Π), T ) β , in inequality (5.102) given in Theorem 5.5.1. Also constants M91 , M92,i , i = 1, . . . , k and parameter ε50 should be changed as this is described in Remarks 5.2.12 and 5.2.13. In the case, where the index component is absent and the pay-off function (ε) (ε) does not depend of the index argument, the reward functions φt (~ y , x) = φt (~ y) (ε) (ε) (ε) (ε) and φt (MΠ,t,T , ~ y , x) = φt (MΠ,t,T , ~ y ) also do not depend of x. Theorem 5.3.1 takes in this case the following form. ~0 (t) be a diffusion process given by the stochastic difTheorem 5.5.2. Let Y ~ε (t) be, for every ε ∈ (0, ε0 ], the correspondferential equation (4.126) and Y ing approximating time-skeleton log-price process given by the stochastic difference equation (4.127) and relation (4.128). Let conditions G8 , G10 , G12 and, also, conditions B23 [¯ γ ] and B27 hold, for some γ∗ < β < ∞. Then, there exist ε51 = ε51 (β, γ¯ ) ∈ (0, ε0 ] and constants 0 ≤ M93 (~ y ) = M93 (~ y , β, γ∗ ), M94,1 (~ y) = M94,1 (~ y , β, γ∗ ), . . . , M94,k (~ y ) = M94,k (~ y , β, γ∗ ) and c = c(β) > 0 such that the following time-skeleton approximation inequality holds, for any partition Π such that d(Π) ≤ c and t ∈ Π, ~ y = (y1 , . . . , yk ) ∈ Rk , ε ∈ [0, ε51 ], (ε)

(ε)

(ε)

0 ≤ φt (~ y ) − φt (MΠ,t,T , ~ y) ≤ M93 (~ y )d(Π) +

k X i=1

M94,i (~ y ) ∆00β (Yε,i (·), d(Π), T )

β−γ∗ β

.

(5.103)

5.5

301

Time-skeleton reward approximations for diffusion LPP

Remark 5.5.4. Constants M93 , M94,1 , . . . , M94,k and parameter ε51 can be computed using the corresponding formulas in Remarks 5.3.1 – 5.3.3, where they should replace parameters M84 , M85,1 , . . . , M85,k and parameter ε42 , with changes caused by the simplifications in conditions of Theorem 5.3.1 described above. Remark 5.5.5. Inequality (5.103) presented in Theorem 5.5.2 acts for any partition Π with d(Π) ≤ c, for all ε ∈ [0, ε51 ]. This implies that inequality (5.103) holds, under conditions of Theorem 5.5.2, for any partition Πε with d(Πε ) ≤ c chosen specifically for every ε ∈ [0, ε51 ], in particular, for partitions Πε such that d(Πε ) → 0 as ε → 0. In this case, if one would like to write down inequality (5.103) for given 0 ≤ t ≤ T , then point t should be included in partition Πε , for every ε ∈ [0, ε51 ], in particular, for partitions Πε such that d(Πε ) → 0 as ε → 0. Remark 5.5.6. Condition B23 [¯ γ ] can be replaced in Theorem 5.5.2 by the β−γ∗ weaker condition B28 [¯ γ , ν¯]. The quantities d(Π) and ∆0β (Yε,i (·), d(Π), T ) β (β−γ∗ )νi

should be replaced, respectively, by d(Π)ν0 and ∆0β (Yε,i (·), d(Π), T ) β , in inequality (5.103) given in Theorem 5.5.2. Also constants M93 , M94,i , i = 1, . . . , k and parameter ε51 should be changed as this is described in Remarks 5.3.12 and 5.3.13.

5.5.2 Time-skeleton reward approximations for multivariate diffusion log-price processes with bounded characteristics and their martingale-type approximations ~00 (t) = Y ~0 (t) be a diffusion process given by the Let, as in Subsection 4.4.3, Y ~ε0 (t) be, for every ε ∈ (0, ε0 ] a stochastic differential equation (4.126) and Y martingale-type approximation log-price process given by the stochastic difference equation (4.127) supplemented by relation (4.135). We assume that conditions G8 , G10 , G12 , G13 and G14 hold. Condition C14 [β] takes in this case the form of the following simpler condition, assumed to hold for some β ≥ 0: 0 C17 [β]: limc→0 limε→0 ∆00β (Yε,i (·), c, T ) = 0, i = 1, . . . , k.

In this case, conditions G8 , G10 , G12 , G13 and G14 imply, by Lemma 4.4.10, ~ε0 (t), for any β ≥ 0. that condition C17 [β] holds, for processes Y Since the index component is absent, the moment compactness condition C6 [α∗ ], for the index component, can be omitted. Since the log-price processes have no index component, we also assume that gε (t, ey , x) = gε (t, ey ) be, for every ε ∈ [0, ε0 ], a pay-off function, which also does not depend on index argument x. In this case, conditions B16 [¯ γ ] – B20 reduce to simpler conditions B23 [¯ γ] – B27 .

302

5

Time-skeleton reward approximations for Markov LPP

Also, parameter γ∗ is given, in this case, by relation (5.107). We also assume that the following condition, which is a variant of condition ¯ assumed to hold for some vector parameter β¯ = (β1 , . . . , βk ) with identical D17 [β], ~ε0 (t): components β ≥ 0, holds for the processes Y 0

D24 [β]: limε→0 Eeβ|Yε,i (0)| < K82,i , i = 1, . . . , k, where 1 < K82,i < ∞, i = 1, . . . , k. Theorem 5.2.1 takes the following form. ~00 (t) = Y ~0 (t) be a diffusion process given by the stochasTheorem 5.5.3. Let Y ~ε0 (t) be, for every ε ∈ (0, ε0 ] the corretic differential equation (4.126) and Y sponding approximating martingale-type log-price process given by the stochastic difference equation (4.127) supplemented by relation (4.135). Let also conditions G8 , G10 , G12 , G13 and G14 hold, and conditions B23 [¯ γ ], B27 and D24 [β] hold, and γ∗ < β < ∞. Then, there exist ε52 = ε52 (β, γ¯ ) ∈ (0, ε0 ] and constants 0 ≤ M95 = M95 (β, γ∗ ), M96,1 = M96,1 (β, γ∗ ), . . . , M96,k = M96,k (β, γ∗ ) and c = c(β) > 0 such that the following time-skeleton approximation inequality holds, for any partition Π such that d(Π) ≤ c and ε ∈ [0, ε52 ], (ε)

(ε)

0 ≤ Φ(Mmax,0,T ) − Φ(MΠ,0,T ) ≤ M95 d(Π) +

k X

0 M96,i ∆00β (Yε,i (·), d(Π), T )

β−γ∗ β

.

(5.104)

i=1

Remark 5.5.7. Constants M95 , M96,1 , . . . , M96,k and parameter ε52 can be computed using the corresponding formulas in Remarks 5.2.1 – 5.2.3, where they should replace parameters M78 , M79,1 , . . . , M79,k and parameter ε38 , with changes caused by the simplifications of conditions in Theorem 5.2.1 described above. Remark 5.5.8. Inequality (5.104) presented in Theorem 5.5.3 acts for any partition Π with d(Π) ≤ c, for all ε ∈ [0, ε52 ]. This implies that inequality (5.104) holds, under conditions of Theorem 5.5.3, for any partition Πε with d(Πε ) ≤ c chosen specifically for every ε ∈ [0, ε52 ], in particular, for partitions Πε such that d(Πε ) → 0 as ε → 0. Remark 5.5.9. Condition B23 [¯ γ ] can be replaced in Theorem 5.5.1 by the β−γ∗ weaker condition B28 [¯ γ , ν¯]. The quantities d(Π) and ∆0β (Yε,i (·), d(Π), T ) β (β−γ∗ )νi

should be replaced, respectively, by d(Π)ν0 and ∆0β (Yε,i (·), d(Π), T ) β , in inequality (5.104) given in Theorem 5.5.3. Also constants M95 , M96,i , i = 1, . . . , k and parameter ε52 should be changed as this is described in Remarks 5.2.12 and 5.2.13. In the case, where the index component is absent and the pay-off function (ε) (ε) does not depend of the index argument, the reward functions φt (~ y , x) = φt (~ y) (ε) (ε) (ε) (ε) y ) also do not depend of x. and φt (MΠ,t,T , ~ y , x) = φt (MΠ,t,T , ~ Theorem 5.3.1 takes in this case the following form.

5.5

Time-skeleton reward approximations for diffusion LPP

303

~00 (t) = Y ~0 (t) be a diffusion process given by the stochasTheorem 5.5.4. Let Y ~ε0 (t) be, for every ε ∈ (0, ε0 ] the correspondtic differential equation (4.126) and Y ing approximating martingale-type log-price process given by the stochastic difference equation (4.127) supplemented by relation (4.135). Let also conditions G8 , G10 , G12 , G13 and G14 hold, and conditions B23 [¯ γ ], B27 and D24 [β] hold, and γ∗ < β < ∞. Then, there exist ε53 = ε53 (β, γ¯ ) ∈ (0, ε0 ] and constants 0 ≤ M97 = M97 (~ y , β, γ∗ ), M98,1 = M98,1 (~ y , β, γ∗ ), . . . , M98,k = M98,k (~ y , β, γ∗ ) and c = c(β) > 0 such that the following time-skeleton approximation inequality holds, for any partition Π such that d(Π) ≤ c and t ∈ Π, ~ y = (y1 , . . . , yk ) ∈ Rk , ε ∈ [0, ε53 ], (ε)

(ε)

(ε)

0 ≤ φt (~ y ) − φt (MΠ,t,T , ~ y) ≤ M97 (~ y )d(Π) +

k X

0 M98,i (~ y ) ∆00β (Yε,i (·), d(Π), T )

β−γ∗ β

.

(5.105)

i=1

Remark 5.5.10. Constants M97 , M98,1 , . . . , M98,k and parameter ε53 can be computed using the corresponding formulas in Remarks 5.3.1 – 5.3.3, where they should replace parameters M84 , M85,1 , . . . , M85,k and parameter ε42 , with changes caused by the simplifications of conditions in Theorem 5.3.1 described above. Remark 5.5.11. Inequality (5.105) presented in Theorem 5.5.4 acts for any partition Π with d(Π) ≤ c, for all ε ∈ [0, ε53 ]. This implies that inequality (5.105) holds, under conditions of Theorem 5.5.4, for any partition Πε with d(Πε ) ≤ c chosen specifically for every ε ∈ [0, ε53 ], in particular, for partitions Πε such that d(Πε ) → 0 as ε → 0. In this case, if one would like to write down inequality (5.105) for given 0 ≤ t ≤ T , then point t should be included in partition Πε , for every ε ∈ [0, ε53 ], in particular, for partitions Πε such that d(Πε ) → 0 as ε → 0. Remark 5.5.12. Condition B23 [¯ γ ] can be replaced in Theorem 5.5.4 by β−γ∗ the weaker condition B28 [¯ γ , ν¯]. The quantities d(Π) and ∆0β (Yε,i (·), d(Π), T ) β (β−γ∗ )νi

should be replaced, respectively, by d(Π)ν0 and ∆0β (Yε,i (·), d(Π), T ) β , in inequality (5.105) given in Theorem 5.5.4. Also constants M97 , M98,i , i = 1, . . . , k and parameter ε53 should be changed as this is described in Remarks 5.3.12 and 5.3.13.

5.5.3 Time-skeleton reward approximations for optimal expected rewards for univariate diffusion log-price processes with bounded characteristics and their trinomial-tree approximations Let Y000 (t) be a diffusion process given by the stochastic differential equation (4.180) and Yε00 (t) be, for every ε ∈ (0, ε0 ], a trinomial-tree approximation log-price process given by the stochastic difference equation (4.181) and relation (4.183).

304

5

Time-skeleton reward approximations for Markov LPP

We assume that conditions G15 , G17 , G18 , G19 and (4.188) and (4.190) hold. The following condition, assumed to hold for some β ≥ 0, is just an univariate variant of condition C17 [β]: C18 [β]: limc→0 limε→0 ∆00β (Yε00 (·), c, T ) = 0. In this case, conditions G15 , G17 , G18 , G19 and (4.188) and (4.190) imply, by Lemma 4.4.17, that condition C18 [β] holds, for processes Yε00 (t), for any β ≥ 0. Since the index component is absent, the moment compactness condition C6 [α∗ ], for the index component, can be omitted. Sine the log-price processes have no index component, we also assume that gε (t, ey , x) = gε (t, ey ) be, for every ε ∈ [0, ε0 ], a pay-off function, which also does not depend on index argument x. In this case, conditions B23 [¯ γ ] – B27 take simpler forms since we consider the univariate case, where k = 1. Condition B23 [¯ γ ] reduces to the following Lipschitz-type condition, assumed to hold for some two-dimensional vector γ¯ = (γ0,1 , γ1,1 ) with non-negative components: B29 [¯ γ ]: There exists ε54 = ε54 (¯ γ ) ∈ (0, ε0 ] such that for every ε ∈ [0, ε54 ]: (a) sup0≤t0 ,t00 ≤T, t0 6=t00 , y∈R1 L57,0,1 < ∞;

~ |gε (t0 ,ey )−gε (t00 ,ey )| (1+L57,0,1 e|y|γ0,1 )|t0 −t00 |

≤ L56,0 , for some 0 ≤ L56,0 ,

0

(b) sup0≤t≤T, y0 ,y00 ∈R1 , y0 6=y00

00

|gε (t,ey )−gε (t,ey )| 0 00 (1+L57,1,1 e(γ1,1 +1)(|y |∨|y |) )|y 0 −y 00 |

≤ L56,1 , for some

0 ≤ L56,1 , L57,1,1 < ∞. The following condition is an analogue of condition B29 [¯ γ ] based on the use of upper limits in parameter ε: B30 [¯ γ ]: The following asymptotic relations hold: 0 y 00 y ε (t ,e )| (a) limε→0 sup0≤t0 ,t00 ≤T, t0 6=t0 , y∈R1 |gε (t ,e )−g ≤ L58,0 , for some 0 ≤ (1+L59,0,1 e|y|γ0,1 )|t0 −t00 | L58,0 , L59,0,1 ; 0

(b) limε→0 sup0≤t≤T, y0 , y00 ∈R1 , y0 6=y00

00

|gε (t,ey )−gε (t,ey )| 0 00 (1+L59,1,1 e(γ1,1 +1)(|y |∨|y |) )|y 0 −y 00 |

≤ L58,1 ,

for some 0 ≤ L58,1 , L59,1,1 < ∞. Condition B30 [¯ γ ] implies that there exists ε55 = ε55 (¯ γ ) ∈ (0, ε0 ] such that all inequalities penetrating condition B30 [¯ γ ] holds, for every ε ∈ [0, ε55 ]. Condition B29 [¯ γ ] does not require of the absolute continuity of the pay-off functions gε (t, ey ) in argument t and ~ y . In the case, where these functions are absolutely continuous, a condition sufficient for condition B29 [¯ γ ] can be formulated in terms of their partial derivatives. Let us assume that the following Lipschitz-type condition holds, for some two-dimensional vector γ¯ = (γ0,1 , γ1,1 ) with non-negative components: B31 [¯ γ ]: There exists ε56 = ε56 (¯ γ ) ∈ (0, ε0 ] such that for every ε ∈ [0, ε56 ]:

5.5

305

Time-skeleton reward approximations for diffusion LPP

(a) function gε (t, ey ) is absolutely continuous in t, with respect to the Lebesgue measure on [0, T ], for every fixed y ∈ R1 and in y, with respect to the Lebesgue measure on R1 , for every fixed t ∈ [0, T ]; (t,ey ) |≤ (b) for every y ∈ R1 , the absolute value of the partial derivative | ∂gε ∂t (1 + L61,0,j eγ0,1 |y| )L60,0 , for almost all t ∈ [0, T ] with respect to the Lebesgue measure on [0, T ], for some 0 ≤ L60,0 , L61,0,1 < ∞; (t,ey ) (c) for every t ∈ [0, T ], the absolute value of the partial derivative | ∂gε∂y |≤ (1 + L61,1,1 e(γ1,1 +1)|y| )L60,1 , for almost all y ∈ R1 with respect to the Lebesgue measure on R1 , for some 0 ≤ L60,1 , L61,1,1 < ∞. Condition B26 [γ], assumed to hold for some γ ≥ 0, reduces to condition B11 [γ] introduced in Subsection 4.4.4. Condition B27 reduces to the following form: B32 : limε→0 sup0≤t≤T |gε (t, ey0 )| < L62 , for some y0 ∈ R1 and 0 < L64 < ∞. Condition B32 implies that there exists ε57 ∈ (0, ε0 ] such that the following inequality holds, for ε ∈ [0, ε57 ], sup |gε (t, ey0 )| < L62 .

(5.106)

0≤t≤T

Finally parameters γ∗ take the following simpler form, γ◦ = γ1,1 + 1 ≤ γ∗ = γ0,1 ∨ (γ1,1 + 1).

(5.107)

The following lemma is the direct corollary of Lemma 5.1.3. Lemma 5.5.3. Let conditions B29 [¯ γ ] and B32 hold. Then, condition B11 [γ] holds, for every γ > γ◦ . Condition D17 [β], assumed to hold for the processes Yε00 (t), for some β ≥ 0, reduces to condition D18 [β] also introduced in Subsection 4.4.4. Theorem 5.2.1 takes the following form. Theorem 5.5.5. Let Y000 (t) be a diffusion process given by the stochastic differential equation (4.180) and Yε00 (t) be, for every ε ∈ (0, ε0 ], the corresponding approximating trinomial-tree log-price process given by the stochastic difference equation (4.181) and relation (4.183). Let conditions G15 , G17 , G18 , G19 and (4.188) and (4.190) hold, and conditions B29 [¯ γ ], B32 and D18 [β] hold, and γ∗ < β < ∞. Then, there exist ε58 = ε58 (β, γ¯ ) ∈ (0, ε0 ] and constants 0 ≤ M99 = M99 (β, γ∗ ), M100,1 = M100,1 (β, γ∗ ) < ∞ and c = c(β) > 0 such that the following time-skeleton approximation inequality holds, for any partition Π such that d(Π) ≤ c and ε ∈ [0, ε58 ], (ε)

(ε)

0 ≤ Φ(Mmax,0,T ) − Φ(MΠ,0,T ) ≤ M99 d(Π) + M100,1 ∆00β (Yε00 (·), d(Π), T )

β−γ∗ β

.

(5.108)

306

5

Time-skeleton reward approximations for Markov LPP

Remark 5.5.13. Constants M99 , M100,1 and parameter ε58 can be computed using the corresponding formulas in Remarks 5.2.1 – 5.2.3, where they should replace parameters M78 , M79,1 and parameter ε38 , with changes caused by the simplifications of conditions in Theorem 5.2.1 described above. Remark 5.5.14. Inequality (5.108) presented in Theorem 5.5.5 acts for any partition Π with d(Π) ≤ c, for all ε ∈ [0, ε58 ]. This implies that inequality (5.108) holds, under conditions of Theorem 5.5.5, for any partition Πε with d(Πε ) ≤ c chosen specifically for every ε ∈ [0, ε58 ], in particular, for partitions Πε such that d(Πε ) → 0 as ε → 0. A variant of this condition B28 [¯ γ ] with weaken Lipschitz-type inequalities takes the form of the following condition, assumed to hold for a bivariate vector parameter γ¯ = (γ0,1 , γ1,1 ), with non-negative components, and a vector parameter ν¯ = (ν0 , ν1 ), with components taking values in interval (0, 1]: B33 [¯ γ , ν¯]: There exists ε59 = ε59 (¯ γ , ν¯) ∈ (0, ε0 ] such that for every ε ∈ [0, ε59 ]: |gε (t0 ,ey )−gε (t00 ,ey )| ≤ L63,0 , for some 0 ≤ L63,0 , (a) sup0≤t0 ,t00 ≤T, t0 6=t00 , y∈R1 (1+L64,0,1 e|y|γ0,1 )|t0 −t00 |ν0 L64,0,1 < ∞; 0

(b) sup0≤t≤T, y0 ,y00 ∈R1 , y0 6=y00

00

|gε (t,ey )−gε (t,ey | 0 00 (1++L64,1,1 e(γ1,1 +ν1 )(|y |∨|y |) )|y 0 −y 00 |ν1

≤ L63,1 , for

some 0 ≤ L63,1 , L64,1,1 < ∞. Remark 5.5.15. Condition B29 [¯ γ ] can be replaced in Theorem 5.5.5 by β−γ∗ the weaker condition B33 [¯ γ , ν¯]. The quantities d(Π) and ∆00β (Yε00 (·), d(Π), T ) β (β−γ∗ )νi

should be replaced, respectively, by d(Π)ν0 and ∆00β (Yε00 (·), d(Π), T ) β , in inequality (5.108) given in Theorem 5.5.5. Also constants M99 , M100,1 and parameter ε58 should be changed as this is described in Remarks 5.2.12 and 5.2.13. In the case, where the index component is absent and the pay-off function (ε) (ε) does not depend of the index argument, the reward functions φt (y, x) = φt (y) (ε) (ε) (ε) (ε) and φt (MΠ,t,T , y, x) = φt (MΠ,t,T , y) also do not depend of x. Theorem 5.3.1 takes in this case the following form. Theorem 5.5.6. Let Y000 (t) be a diffusion process given by the stochastic differential equation (4.180) and Yε00 (t) be, for every ε ∈ (0, ε0 ], the corresponding approximating trinomial-tree log-price process given by the stochastic difference equation (4.181) and relation (4.183). Let conditions G15 , G17 , G18 , G19 and (4.188) and (4.190) hold, and conditions B29 [¯ γ ], B32 and D18 [β] hold, and γ∗ < β < ∞. Then, there exist ε60 = ε60 (β, γ¯ ) ∈ (0, ε0 ] and constants 0 ≤ M101 = M101 (y, β, γ∗ ), M102,1 = M102,1 (y, β, γ∗ ) < ∞ and c = c(β) > 0 such that the following time-skeleton approximation inequality holds, for any partition Π such that d(Π) ≤ c and t ∈ Π, y ∈ R1 , ε ∈ [0, ε60 ], (ε)

(ε)

(ε)

0 ≤ φt (y) − φt (MΠ,t,T , y) ≤ M100 (y)d(Π) + M101,1 (y) ∆00β (Yε00 (·), d(Π), T )

β−γ∗ β

.

(5.109)

5.5

Time-skeleton reward approximations for diffusion LPP

307

Remark 5.5.16. Constants M101 , M102,1 and parameter ε60 can be computed using the corresponding formulas in Remarks 5.3.1 – 5.3.3, where they should replace parameters M84 , M85,1 , . . . , M85,k and parameter ε42 , with chan ges caused by the simplifications of conditions oinTheorem 5.3.1 described above. Remark 5.5.17. Inequality (5.109) presented in Theorem 5.5.6 acts for any partition Π with d(Π) ≤ c, for all ε ∈ [0, ε60 ]. This implies that inequality (5.109) holds, under conditions of Theorem 5.5.6, for any partition Πε with d(Πε ) ≤ c chosen specifically for every ε ∈ [0, ε60 ], in particular, for partitions Πε such that d(Πε ) → 0 as ε → 0. In this case, if one would like to write down inequality (5.109) for given 0 ≤ t ≤ T , then point t should be included in partition Πε , for every ε ∈ [0, ε60 ], in particular, for partitions Πε such that d(Πε ) → 0 as ε → 0. Remark 5.5.18. Condition B23 [¯ γ ] can be replaced in Theorem 5.5.6 by β−γ∗ the weaker condition B33 [¯ γ , ν¯]. The quantities d(Π) and ∆00β (Yε00 (·), d(Π), T ) β (β−γ∗ )ν1

should be replaced, respectively, by d(Π)ν0 and ∆00β (Yε00 (·), d(Π), T ) β , in inequality (5.109) given in Theorem 5.5.6. Also constants M101 , M102,i , i = 1, . . . , k and parameter ε60 should be changed as this is described in Remarks 5.3.12 and 5.3.13.

6 Time-space-skeleton reward approximations for Markov LPP

In Chapter 6, we presents results about convergence of time-skeleton and timespace-skeleton reward approximations for continuous time multivariate modulated Markov log-price processes. Time-skeleton approximating log-price processes are obtained by discretization in time of continuous time log-price processes, while time-space-skeleton approximating log-price processes are obtained by two-step discretization, first in time and second in space (of log-prices) of continuous time log-price processes. As the result, we get approximations of rewards for a multivariate modulated Markov log-price process by the corresponding rewards for American type options for multivariate modulated atomic Markov chains, which transition probabilities and initial distributions are concentrated on finite sets of skeleton points. The rewards for approximating atomic Markov chains can be effectively computed using backward recurrence relations presented in Chapter 6 and based on the corresponding results given Chapters 2∗ – 3∗ and 7∗ – 8∗ . The space skeleton approximations do also require special fitting of transition probabilities and initial distributions for approximating Markov chains to the corresponding transition probabilities and initial distributions for approximated multivariate modulated Markov processes. Convergence of the approximating rewards is proven using the general convergence results presented in Chapters 5∗ – 8∗ . Results about convergence of time-skeleton and time-space-skeleton reward approximations have their own value since the corresponding rewards for American-type options for embedded discrete time log-price processes can be interpreted as Bermudian-type options for the corresponding continuous time log-price processes. In the framework of our research studies, these results play, together with results about time-skeleton rewards approximations given in Chapter 5, a very important role in getting results about convergence of option rewards for Americantype options for continuous time log-price processes, which we present in Chapters 7–10. In Section 6.1, we present results about convergence of time-skeleton and timespace-skeleton reward approximations for American-type options for multivariate modulated Markov log-price processes. In Section 6.2, we present results about convergence of time-skeleton and timespace-skeleton reward approximations for American-type options for multivariate log-price processes with independent increments.


In Section 6.3, we present results about convergence of time-skeleton and time-space-skeleton reward approximations for American-type options for diffusion log-price processes and their time-skeleton, martingale and trinomial-tree approximations.

Our main results are given in Theorems 6.1.1–6.1.8, for multivariate modulated Markov log-price processes, and in Theorems 6.2.1–6.2.8, for multivariate log-price processes with independent increments. The results presented in the above theorems are based on the corresponding results presented in Chapters 5–8 of the 1st volume of the book. They generalize in several aspects results obtained in Silvestrov, Jönsson, and Stenberg (2008, 2009), for univariate modulated Markov log-price processes, and in Lundgren and Silvestrov (2009, 2011) and Silvestrov and Lundgren (2011), for multivariate Markov log-price processes. First, we consider multivariate modulated models, i.e., we combine the multivariate and modulation aspects. Second, we consider pay-off functions which also depend on the index component. Results about convergence of time-skeleton and time-space-skeleton approximations for multivariate diffusion log-price processes and their time-skeleton, martingale and trinomial-tree approximations, presented in Theorems 6.3.1–6.3.8, are new.

6.1 Time-space-skeleton reward approximations for Markov LPP

In this section, we present our main results about convergence of time-space-skeleton reward approximations for embedded discrete time multivariate modulated Markov log-price processes. These results have their own value and also play a key role in the proofs of the corresponding convergence results for rewards for continuous time Markov log-price processes.

6.1.1 Convergence of time-skeleton reward approximations based on a given partition of time interval, for multivariate modulated Markov log-price processes

Let Z̄_ε(t) = (Ȳ_ε(t), X_ε(t)), t ∈ [0, T] be, for every ε ∈ [0, ε_0], a multivariate modulated Markov log-price process with a phase space Z = R^k × X, an initial distribution P_ε(A), and transition probabilities P_ε(t, z̄, t + u, A). Recall that the transition probabilities P_ε(t, z̄, t + u, ·) are assumed to be measurable functions in the argument (t, z̄, u). Let also g_ε(t, e^{ȳ}, x), (t, ȳ, x) ∈ [0, T] × R^k × X be, for every ε ∈ [0, ε_0], a real-valued measurable pay-off function.
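To fix ideas, the following small Python sketch simulates one path of a discrete-time analogue of such a modulated log-price process: a two-state index component X and Gaussian log-price increments whose drift and volatility depend on the current index state. The two-state Gaussian structure and all numerical values are illustrative assumptions, not part of the model described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state index (regime) component with transition matrix Q,
# and regime-dependent drift/volatility for a univariate log-price Y.
Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # assumed regime transition probabilities
mu = np.array([0.02, -0.01])        # assumed drifts per regime
sigma = np.array([0.1, 0.3])        # assumed volatilities per regime

T, N = 1.0, 50                      # horizon and number of time steps
dt = T / N

x = 0                               # initial regime X(0)
y = 0.0                             # initial log-price Y(0)
path = [(0.0, y, x)]
for n in range(1, N + 1):
    x = rng.choice(2, p=Q[x])                                   # modulate the regime
    y += mu[x] * dt + sigma[x] * np.sqrt(dt) * rng.standard_normal()
    path.append((n * dt, y, x))

# The triples (t_n, Y(t_n), X(t_n)) play the role of the embedded process Z(t_n) = (Y(t_n), X(t_n)).
print(path[-1])
```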


Let us formulate conditions of convergence for the optimal expected rewards Φ(M^{(ε)}_{Π,0,T}), for a given partition Π = ⟨0 = t_0 < t_1 < . . . < t_N = T⟩ of the interval [0, T].

We essentially use results obtained in the 1st volume of the book, in particular, results about convergence of space-skeleton approximations for rewards for the embedded discrete time multivariate modulated Markov log-price processes Z̄_ε(t_n) = (Ȳ_ε(t_n), X_ε(t_n)) = ((Y_{ε,1}(t_n), . . . , Y_{ε,k}(t_n)), X_ε(t_n)) and pay-off functions g_ε(t_n, e^{ȳ}, x), given in Chapters 5∗ – 8∗.

The following condition is an analogue of condition B5[γ̄]∗. It should be assumed to hold for some vector parameter γ̄ = (γ_1, . . . , γ_k) with non-negative components:

B34[γ̄]: lim_{ε→0} max_{0≤n≤N} sup_{z̄=(ȳ,x)∈Z} |g_ε(t_n, e^{ȳ}, x)| / (1 + Σ_{i=1}^{k} L_{66,i} e^{γ_i |y_i|}) < L_65, for some 0 < L_65 < ∞ and 0 ≤ L_{66,1}, . . . , L_{66,k} < ∞.
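As a concrete illustration (not taken from the text), the growth bound in condition B34[γ̄] can be checked numerically for a standard call-type pay-off. The sketch below, in Python, uses a hypothetical pay-off g(t, e^y) = max(e^{y_1} − K, 0) and the parameters γ_1 = 1, L_{66,1} = 1, for which the ratio in B34[γ̄] stays bounded; all names and values are illustrative assumptions.

```python
import numpy as np

# Hypothetical call-type pay-off on the first log-price component:
#   g(t, e^y) = max(e^{y1} - K, 0), with an illustrative strike K.
K = 1.0

def payoff(y1):
    return max(np.exp(y1) - K, 0.0)

# Condition B34[gamma] requires the ratio
#   |g(t_n, e^y, x)| / (1 + sum_i L_{66,i} * exp(gamma_i * |y_i|))
# to stay below a finite constant L_65, uniformly in n and y.
gamma_1, L_66_1 = 1.0, 1.0

ratios = []
for y1 in np.linspace(-10.0, 10.0, 2001):
    bound = 1.0 + L_66_1 * np.exp(gamma_1 * abs(y1))
    ratios.append(payoff(y1) / bound)

# For this pay-off the ratio never exceeds 1, so any L_65 > 1 works.
print("max ratio over the grid:", max(ratios))
```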
Choose c > 0 and then ε(c) ∈ (0, ε_0] such that, for ε ∈ [0, ε(c)] and i = 1, . . . , k,

    ∆_{β_i}(Y_{ε,i}(·), c, T) ≤ R.    (6.7)

Take an arbitrary integer 0 ≤ n ≤ N and consider the uniform partition ⟨t_n = u_0^{(m)} < . . . < u_m^{(m)} = t_{n+1}⟩ of the interval [t_n, t_{n+1}] by the points u_j^{(m)} = t_n + (t_{n+1} − t_n)j/m. Relation (6.7) and the Markov property of the processes Z̄_ε(t) imply that, for ε ∈ [0, ε(c)] and m = [(t_{n+1} − t_n)/c] + 1 (in this case (t_{n+1} − t_n)/m ≤ c), z̄ = (ȳ, x) ∈ R^k × X, j = 1, . . . , m and i = 1, . . . , k,

    E_{z̄,t_n} e^{β_i |Y_{ε,i}(u_j^{(m)}) − Y_{ε,i}(u_0^{(m)})|}
      ≤ E_{z̄,t_n} e^{β_i |Y_{ε,i}(u_{j−1}^{(m)}) − Y_{ε,i}(u_0^{(m)})|} e^{β_i |Y_{ε,i}(u_j^{(m)}) − Y_{ε,i}(u_{j−1}^{(m)})|}
      = E_{z̄,t_n} e^{β_i |Y_{ε,i}(u_{j−1}^{(m)}) − Y_{ε,i}(u_0^{(m)})|} E{ e^{β_i |Y_{ε,i}(u_j^{(m)}) − Y_{ε,i}(u_{j−1}^{(m)})|} / Z̄_ε(u_{j−1}^{(m)}) }
      ≤ E_{z̄,t_n} e^{β_i |Y_{ε,i}(u_{j−1}^{(m)}) − Y_{ε,i}(u_0^{(m)})|} (R + 1).    (6.8)

Finally, iterating inequality (6.8), we get, for ε ∈ [0, ε(c)], z̄ = (ȳ, x) ∈ R^k × X and i = 1, . . . , k,

    E_{z̄,t_n} e^{β_i |Y_{ε,i}(t_{n+1}) − Y_{ε,i}(t_n)|} = E_{z̄,t_n} e^{β_i |Y_{ε,i}(u_m^{(m)}) − Y_{ε,i}(u_0^{(m)})|} ≤ (R + 1)^m < ∞.    (6.9)

Thus, condition C19[β̄] holds. □

We also assume that condition D14[β̄] (introduced in Subsection 4.1.6) holds.

By Lemma 4.1.10, conditions B9[γ̄], C2[β̄] and D14[β̄], assumed to hold for some vector parameters β̄ and γ̄ such that γ_i ≤ β_i, i = 1, . . . , k, imply that there exists ε_6 = ε_6(β̄, γ̄) ∈ (0, ε_0] such that, for ε ∈ [0, ε_6] and any partition Π,

    |Φ(M^{(ε)}_{Π,0,T})| ≤ E sup_{0≤t≤T} |g_ε(t, e^{Ȳ_ε(t)}, X_ε(t))| < ∞.    (6.10)

We impose on the pay-off functions g_ε(t_n, e^{ȳ}, x) the following condition of locally uniform convergence, which is an analogue of condition I8[Π]:

I9: There exist sets Z′_t ∈ B_Z, t ∈ [0, T], such that g_ε(t, e^{ȳ_ε}, x_ε) → g_0(t, e^{ȳ_0}, x_0) as ε → 0 for any z̄_ε = (ȳ_ε, x_ε) → z̄_0 = (ȳ_0, x_0) ∈ Z′_t and t ∈ [0, T].

We also impose on the transition probabilities P_ε(t, z̄, t + u, A) = P{Z̄_ε(t + u) ∈ A / Z̄_ε(t) = z̄} of the discrete time log-price process Z̄_ε(t_n) the following condition of locally uniform weak convergence, which is an analogue of condition J1[Π]:

J2: There exist sets Z″_t ∈ B_Z, t ∈ [0, T] such that: (a) P_ε(t, z̄_ε, t + u, ·) ⇒ P_0(t, z̄_0, t + u, ·) as ε → 0, for any z̄_ε = (ȳ_ε, x_ε) → z̄_0 = (ȳ_0, x_0) ∈ Z″_t as ε → 0, and 0 ≤ t < t + u ≤ T; (b) P_0(t, z̄_0, t + u, Z′_{t+u} ∩ Z″_{t+u}) = 1, for every z̄_0 ∈ Z′_t ∩ Z″_t and 0 ≤ t < t + u ≤ T, where Z′_t, t ∈ [0, T] are the sets introduced in condition I9.

A typical example is where the sets Z′_t, Z″_t, t ∈ [0, T] are empty sets. Then condition J2 (b) obviously holds.

Another typical example is where the sets Z′_t, Z″_t, t ∈ [0, T] are at most finite or countable sets. Then the assumption that the measures P_0(t, z̄_0, t + u, A), z̄_0 ∈ Z′_t ∩ Z″_t have no atoms at points from the sets Z′_t, Z″_t, for every t ∈ [0, T], implies that condition J2 (b) holds.

One more example is where the measures P_0(t, z̄_0, t + u, A), z̄_0 ∈ Z′_t ∩ Z″_t, 0 ≤ t < t + u ≤ T are absolutely continuous with respect to some σ-finite measure P(A) on B_Z and P(Z′_t) = P(Z″_t) = 0, t ∈ [0, T]. This assumption also implies that condition J2 (b) holds.

Note also that the transition probabilities P_ε(t, z̄, t, A) = I(z̄ ∈ A). In this case, the measures I(z̄_ε ∈ ·) ⇒ I(z̄_0 ∈ ·) as ε → 0 for any z̄_ε → z̄_0 ∈ Z as ε → 0. Indeed, a Borel set A is a set of continuity for the measure I(z̄_0 ∈ A) if and only if z̄_0 ∉ ∂A. In this case, z̄_ε ∉ ∂A and I(z̄_ε ∈ A) = I(z̄_0 ∈ A) for all ε small enough. Also, I(z̄_0 ∈ A) = 1 for any z̄_0 ∈ A ∈ B_Z. Thus, the case 0 ≤ t = t + u ≤ T can also be included in condition J2.

Finally, we assume the following condition of weak convergence for the initial distributions P_ε(A) = P{Z̄_ε(0) ∈ A}, which is an analogue of condition K16 re-formulated in terms of the processes Z̄_ε(t):

K17: (a) P_ε(·) ⇒ P_0(·) as ε → 0; (b) P_0(Z′_0 ∩ Z″_0) = 1, where Z′_0 and Z″_0 are the sets introduced in conditions I9 and J2.

Theorem 6.1.3. Let conditions B9[γ̄], C2[β̄] and D14[β̄] hold and, for every i = 1, . . . , k, either 0 = γ_i = β_i or 0 < γ_i < β_i < ∞. Let also conditions I9, J2, and K17 hold. Then, the following asymptotic relation holds for any partition Π = ⟨0 = t_0 < t_1 < · · · < t_N = T⟩ of the interval [0, T],

    Φ(M^{(ε)}_{Π,0,T}) → Φ(M^{(0)}_{Π,0,T}) as ε → 0.    (6.11)

Proof. Let Π = ⟨0 = t_0 < t_1 < · · · < t_N = T⟩ be an arbitrary partition of the interval [0, T]. We shall check that the conditions of Theorem 6.1.1 hold.

It is obvious that condition B9[γ̄] implies that condition B34[γ̄] holds for any partition Π = ⟨0 = t_0 < t_1 < . . . < t_N = T⟩ of the interval [0, T], with constants L_65 = L_17 and L_{66,i} = L_{18,i}, i = 1, . . . , k.

By Lemma 6.1.1, condition C2[β̄] implies that condition C19[β̄] holds for any partition Π = ⟨0 = t_0 < t_1 < . . . < t_N = T⟩ of the interval [0, T].

It remains to note that condition K17 is just a variant of condition K16. □

Let us now formulate conditions of convergence for the reward functions φ_t^{(ε)}(M^{(ε)}_{Π,t,T}, ȳ, x), for an arbitrary partition Π = ⟨0 = t_0 < t_1 < . . . < t_N = T⟩ of the interval [0, T].

By Lemma 4.1.8, conditions B9[γ̄] and C2[β̄], assumed to hold for some vector parameters β̄ and γ̄ such that γ_i ≤ β_i, i = 1, . . . , k, imply that there exists ε_2 = ε_2(β̄, γ̄) ∈ (0, ε_0] such that, for ε ∈ [0, ε_2], any partition Π, t = t_n ∈ Π and z̄ = (ȳ, x) ∈ Z,

    |φ_t^{(ε)}(M^{(ε)}_{Π,t,T}, ȳ, x)| ≤ E_{z̄,t} sup_{t≤s≤T} |g_ε(s, e^{Ȳ_ε(s)}, X_ε(s))| < ∞.    (6.12)

Time-space-skeleton reward approximations for Markov LPP

The remarks made in Subsection 6.1.1, let us re-formulate Theorem 6.1.3 (ε) (ε) for reward functions φt (MΠ,t,T , ~ yε , xε ) which, in fact coincide with the optimal (ε)

expected rewards Φt (M0,T −t ) for the pay-off functions, log-price processes and initial distributions described in Subsection 6.1.1. ¯ hold and, for every i = Theorem 6.1.4. Let conditions B9 [¯ γ ] and C2 [β] 1, . . . , k, either 0 = γi = βi or 0 < γi < βi < ∞. Let also conditions I9 and J2 hold. Then, the following asymptotic relation holds for any partition Π = h0 = t0 < t1 · · · < tN = T i of the interval [0, T ], t = tn ∈ Π and ~zε → ~z0 ∈ Z0t ∩ Z00t as ε → 0, (ε) (ε) (0) (0) φt (MΠ,t,T , ~ yε , xε ) → φt (MΠ,t,T , ~ y0 , x0 ) as ε → 0. (6.13) Remark 6.1.1. If one would like to get asymptotical relation (6.11) and (6.13) only for some sequence of partitions Πr = h0 = tr,0 < · · · < tr,Nr i, r = 1, 2, . . . such that d(Πr ) = max1≤n≤Nr −1 (tr,n+1 − tr,n ) → 0 as r → ∞, one can require holding of asymptotic relations in condition J2 only for t < t + u, t, t + u ∈ Πr = htr,n , n = 0, . . . , Nr i, r = 1, 2, . . ..

6.1.3 Time-space-skeleton reward approximations for multivariate modulated Markov log-price processes ~ 0 (t) = (Y ~0 (t), X0 (t)), t ∈ Let us consider the model, where a log-price process Z [0, T ] does not depend on parameter ε and is a càdlàg Markov process with a phase space Z = Rk × X (X is a Polish space with a metric dX (x0 , x00 ), an initial distribution P0 (A) and transition probabilities P0 (t, ~z, t + u, A). Also, we assume that a pay-off function g0 (t, e~y , x) is a measurable function acting from the space [0, T ] × Z to R1 , which does not depend on parameter ε. Let us now construct space-skeleton approximations for the optimal expected (0) (0) (0) reward Φ(MΠ,0,T ) and the reward function φt (MΠ,t,T , ~ y , x), for a given partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ]. In fact, we should just reformulate in terms of the pay-off functions g0 (tn , e~y , ~ 0 (tn ) = x) and the discrete time multivariate modulated log-price processes Z ~0 (tn ), X0 (tn )) the corresponding results given in Chapter 7∗ . (Y ~ Π,0,n = (Y ~Π,0,n , XΠ,0,n ) = Z ~ 0 (tn ) = (Y ~0 (tn ), X0 (tn )), Let us use the notation Z ~ n = 0, . . . , N for Markov chain Z0 (tn ), in order to point out explicitly the partition Π = h0 = t0 < · · · < tN = T i used to construct this Markov chain. This Markov chain has the phase space Z and transition probabilities, ~ Π,0,n ∈ A/Z ~ Π,0,n−1 = ~z} PΠ,0,n (~z, A) = P{Z ~ 0 (tn ) ∈ A/Z ~ 0 (tn−1 ) = ~z} = P{Z = P0 (tn−1 , ~z, tn , A), and the initial distribution,

(6.14)

6.1

Markov LPP

317

~ Π,0,0 ∈ A} PΠ,0 (A) = P{Z ~ 0 (tn ) ∈ A} = P0 (A). = P{Z

(6.15)

Let us construct, for every ε ∈ (0, ε0 ], an approximating space-skeleton Markov ~ Π,ε,n = (Y ~Π,ε,n , XΠ,ε,n ), n = 0, . . . , N with the same phase space Z and chain Z ~ Π,ε,n ∈ A/Z ~ Π,ε,n−1 = ~z} and initial transition probabilities PΠ,ε,n (~z, A) = P{Z ~ distribution PΠ,ε (A) = P{ZΠ,ε,0 ∈ A} fitted in the special way to the transition ~ Π,0,n . probabilities and the initial distribution of the Markov chain Z − + Let mε,t,j ≤ mε,t,j , j = 0, . . . , k, t ∈ [0, T ] be integer numbers. Let us define sets of vector indices Lε,t , for t ∈ [0, T ], + Lε,t = {¯ l = (l0 , . . . , lk ), lj = m− ε,t,j , . . . , mε,t,j , j = 0, . . . , k}.

(6.16)

Let us choose δε,t,i > 0, λε,t,i ∈ R1 , i = 1, . . . , k, t ∈ [0, T ]. First, the skeleton intervals Iε,t,i,li should be constructed for li = m− ε,t,i , . . ., + mε,t,i , i = 1, . . . , k, t ∈ [0, T ],  − 1 if li = m−  ε,t,i ,  (−∞, δε,t,i (mε,t,i + 2 )] + λε,t,i − 1 1 Iε,t,i,li = (6.17) (δε,t,i (li − 2 ), δε,t,i (li + 2 )] + λε,t,i if mε,t,i < li < m+ ε,t,i ,   + + 1 (δε,t,i (mε,t,i − 2 ), ∞) + λε,t,i if li = mε,t,i . + Then, skeleton points yε,t,i,li = li δε,t,i +λε,t,i ∈ Iε,t,i,li , li = m− ε,t,i , . . . , mε,t,i , i = 1, . . . , k, t ∈ [0, T ] should be defined. + Second, sets Jε,t,l0 ∈ BX , l0 = m− ε,t,0 , . . . , mε,t,0 , t ∈ [0, T ], such that (a) Jε,t,l00 ∩ 0 00 Jε,t,l000 = ∅, l0 6= l0 , t ∈ [0, T ]; (b) X = ∪m− ≤l0 ≤m+ Jε,t,l0 , n = 0, 1, . . ., should be ε,t,0 ε,t,0 constructed. Remind that one of our model assumption is that X is a Polish space, i.e., a complete, separable, metric space. In this case, it is natural to assume that there exist "large" sets Kε,t , t ∈ [0, T ], and "small" non-empty sets Kε,t,l0 ∈ BX , l0 = + 0 00 m− ε,t,0 , . . . , mε,t,0 , t ∈ [0, T ] such that (c) Kε,t,l00 ∩ Kε,t,l000 = ∅, l0 6= l0 , t ∈ [0, T ]; (d) ∪m− ≤l0 ≤m+ Kε,t,l0 = Kε,t , t ∈ [0, T ]. The exact sense of the epithets "large" ε,t,0 ε,t,0 and "small" used above is clarified in condition N7 formulated below. Then, the skeleton sets Jε,t,l0 can be defined in the following way, for l0 = + m− ε,t,0 , . . ., mε,t,0 , t ∈ [0, T ], ( + Kε,t,l0 if m− ε,t,0 ≤ l0 < mε,t,0 , (6.18) Jε,t,l0 = Kε,t,m+ ∪ Kε,t if l = m+ ε,t,0 . ε,t,0

+ Then, skeleton points xε,t,l0 ∈ Jε,t,l0 , l0 = m− ε,t,0 , . . . , mε,t,0 , t ∈ [0, T ] should be chosen. The particular important case is, where the space X = {1, . . . , m} is a finite set and metrics dX (x, y) = I(x 6= y). In this case, the standard choice is to take

318

6

Time-space-skeleton reward approximations for Markov LPP

compacts Kε,t = X, t ∈ [0, T ]; one point sets Jε,t,l = Kε,t,l = {l}, 1 = m− ε,t,0 ≤ l ≤ m+ = m, t ∈ [0, T ]; and points x = l, l = 1, . . . , m, t ∈ [0, T ]. ε,t,l ε,t,0 If the space X = {1, 2, . . .} is a countable set and again metrics dX (x, y) = I(x 6= y), the standard choice is to take compacts Kε,t = {l0 : l0 ≤ mε }, t ∈ [0, T ] + and one point sets Kε,t,l0 = {l0 }, 1 = m− ε,n,0 ≤ l0 ≤ mε,n,0 = mε , n = 0, 1, . . .. − + In this case, sets Jε,n,l0 = {l0 }, 1 = mε,t,0 ≤ l0 < mε,t,0 = mε , t ∈ [0, T ], while Jε,t,mε = {l : l ≥ mε }, and points xε,t,l0 = l0 , l0 = 1, . . . , mε , t ∈ [0, T ]. Another important case is, where X = Rp . In this case one can always choose + as Kε,t and Kε,t,l0 , m− ε,t,0 ≤ l0 ≤ mε,t,0 , cubes satisfying the conditions imposed above on these sets. Third, the skeleton sets Aε,t,¯l = Iε,t,l1 × · · · × Iε,t,lk × Jε,t,l0 and vector skeleton points, ~zε,t,¯l ∈ Aε,t,¯l = (~ yε,t,¯l , xε,t,¯l ) = ((yε,t,1,l1 , . . . , yε,t,k,lk ), xε,t,l0 ) should be defined for ¯ l = (l0 , l1 , . . . , lk ) ∈ Lε,t , t ∈ [0, T ]. We construct, for every ε ∈ (0, ε0 ], an approximating space-skeleton atomic ~ Π,ε,n = (Y ~Π,ε,n , XΠ,ε,n ) as a Markov chain with the phase space Markov chain Z Z, the transition probabilities PΠ,ε,n (~z, A) and the initial distribution PΠ,ε (A) satisfying the following skeleton fitting conditions, which are analogues of the fitting condition L4∗ and M4∗ : L1 : For every ε ∈ (0, ε0 ], the transition probabilities PΠ,ε,n (~z, {~zε,tn ,¯l }) = PΠ,0,n (~zε,tn−1 ,¯l0 , Aε,tn ,¯l ), ~z ∈ Aε,tn−1 ,¯l0 , ¯ l0 ∈ Lε,tn−1 , ¯ l ∈ Lε,tn , n = 1, 2, . . . , N . and M1 : For every ε ∈ (0, ε0 ], the initial distribution PΠ,ε ({~zε,t0 ,¯l }) = PΠ,0 (Aε,t0 ,¯l ), ¯ l ∈ Lε,t0 . It is useful to note that the space-skeleton structure described above and used ~ Π,ε,n is determined by the set to construct space-skeleton atomic Markov chains Z ¯ of elements ΞΠ,ε = < ~zε,tn ,¯l , Aε,tn ,¯l , l ∈ Lε,tn , n = 0, . . . , N >. Let us define skeleton functions, hε,t,i (y), y ∈ R1 , for every ε ∈ (0, ε0 ] and i = 1, . . . , k, t ∈ [0, T ],  − − 1   δε,t,i mε,t,i + λε,t,i if y ≤ δε,t,j (mε,t,i + 2 ) + λε,t,i ,     δε,t,i l + λε,t,i if δε,t,i (l − 21 ) + λε,t,i < y   ≤ δε,t,i (l + 21 ) + λε,t,i ,

hε,t,i (y) =

        δ m+ + λ ε,t,i ε,t,i ε,t,i

(6.19)

m− ε,t,i if y >

< l < m+ ε,t,i , + δε,t,i (mε,t,i − 21 )

+ λε,t,i ,

and skeleton functions hε,t,0 (x), x ∈ X, for every ε ∈ (0, ε0 ] and t ∈ [0, T ], + hε,t,0 (x) = {xε,t,l if x ∈ Jε,t,l , m− ε,t,0 ≤ l ≤ mε,t,0 .

(6.20)

¯ ε,t (~z), ~z = (~ Finally, let us define vector skeleton functions h y , x) = ((y1 , . . ., yk ), x) ∈ Z, for every ε ∈ (0, ε0 ] and t ∈ [0, T ],

6.1

Markov LPP

¯ ε,t (~z) = (hε,t,1 (y1 ), . . . , hε,t,k (yk ), hε,t,0 (x)). h

319

(6.21)

The transition probabilities PΠ,ε,n (~z, A), ~z ∈ Z, A ∈ BZ , n = 1, 2, . . . , N take the following form, for every ε ∈ (0, ε0 ],

X

PΠ,ε,n (~z, A) =

¯ ε,tn−1 (~z), A PΠ,0,n (h ε,tn ,¯ l)

~ zε,tn ,l¯∈A

¯ ε,tn−1 (~z)}. ¯ ε,tn (Z ~ Π,0,n ) ∈ A/Z ~ Π,0,n−1 = h = P{h

(6.22)

~ Π,ε,0 ∈ A}, A ∈ BZ is conAs far as the initial distribution PΠ,ε,0 (A) = P{Z cerned, it takes the following form, for every ε ∈ (0, ε0 ], X PΠ,ε,0 (A) = PΠ,0,0 (Aε,t0 ,¯l ) ~ zε,0,l¯∈A

¯ ε,t0 (Z ~ 0 (0)) ∈ A}. = P{h

(6.23)

The quantity PΠ,ε,n (~z, A), defined in relation (6.22), is a measurable function in ~z and a probability measure as function of A and, thus, it can serve as a transition probabilities for a Markov chain. Also the quantity PΠ,ε,0 (A), defined in relation (6.23), is a probability measure and, thus, it can serve as an initial distribution for a Markov chain. ~ Π,ε,n can play any Markov chain The role of approximating log-price process Z with the phase space Z, the transition probabilities PΠ,ε,n (~z, A) defined in relation (6.22) and the initial distribution PΠ,ε,0 (A) defined in relation (6.23). Obviously, the transition probabilities PΠ,ε,n (~z, A) and the initial distribution PΠ,ε,0 (A) given by relations (6.22) and (6.23) satisfy the fitting conditions L1 and M1 . (ε) Let us denote MΠ,n,N of all Markov moments τε,n for the discrete time Markov ~ Π,ε,r , r = 0, 1, . . . , N , which (a) take values n, n + 1, . . . , N and log-price process Z ~ Π,ε,r , n ≤ r ≤ m], m = n, . . . , N . (b) {τε,n = m} ∈ σ[Z Let, also, φΠ,ε,n (~ y , x) be the corresponding reward function for the American option with the pay-off function g0 (tn , e~y , x), defined by the following relation, for ~z = (~ y , x) ∈ Z, n = 0, 1, . . . , N , φΠ,ε,n (~ y , x) =

~

E~z,n g0 (tτε,n , eYΠ,ε,τε,n , XΠ,ε,τε,n ).

sup

(6.24)

(ε) τε,n ∈MΠ,n,N

(ε)

Let also ΦΠ,ε = Φε (MΠ,0,N ) be the corresponding optimal expected reward, defined by the following relation, ΦΠ,ε =

sup (ε) τε,0 ∈MΠ,0,N

~

Eg0 (tτε,0 , eYΠ,ε,τε,0 , XΠ,ε,τε,0 ).

(6.25)

320

6

Time-space-skeleton reward approximations for Markov LPP

~ Π,ε,n have initial distributions and transiSince the atomic Markov chains Z tion probabilities concentrated at sets with finite numbers of points, the reward functionals |φΠ,ε,n (~ y , x)| < ∞, ~z = (~ y , x) ∈ Z, n = 0, 1, . . . , N and |ΦΠ,ε | < ∞, for every ε ∈ (0, ε0 ]. The following lemma is a variant of Lemma 7.3.1∗ . ~ Π,ε,n is a Lemma 6.1.2. Let, for every ε ∈ (0, ε0 ], the log-price process Z space-skeleton Markov chain with transition probabilities and initial distribution given in relations (6.22) and (6.23) and satisfying the fitting conditions L1 and M1 . Then, the log-reward functions φΠ,ε,n (~ y , x) and φΠ,ε,n+r (~ yε,tn+r ,¯l , xε,tn+r ,¯l ), ¯ y , x) ∈ Aε,tn ,¯l00 , ¯ l00 ∈ Lε,tn , n = l ∈ Lε,tn+r , r = 1, . . ., N − n are, for every ~z = (~ 0, . . . , N , the unique solution for the following finite recurrence system of linear equations,  ~ yε,t ,l¯ ¯   φΠ,ε,N (~yε,tN ,¯l , xε,tN ,¯l ) = g0 (tN , e N , xε,tN ,¯l ), l ∈ Lε,tN ,    ~ y  φ  yε,tn+r ,¯l , xε,tn+r ,¯l ) = max g0 (tn+r , e ε,tn+r ,l¯, xε,tn+r ,¯l ), Π,ε,n+r (~    P   yε,tn+r+1 ,¯l0 , xε,tn+r+1 ,¯l0 )  ¯  l0 ∈Lε,tn+r+1 φε,n+r+1 (~   (6.26) ×PΠ,0,n+r+1 (~zε,tn+r ,¯l , Aε,tn+r+1 ,¯l0 ) ,    ¯  l ∈ Lε,tn+r , r = N − n − 1, . . . , 1,       φΠ,ε,n (~ y , x) = max g0 (tn , e~y , x),     P   φΠ,ε,n+1 (~ y z ¯0 , x ¯0 )PΠ,0,n+1 (~ ¯00 , A ¯0 ) , ¯0 l ∈Lε,tn+1

ε,tn+1 ,l

ε,tn+1 ,l

ε,tn ,l

ε,tn+1 ,l

(ε)

while the optimal expected reward ΦΠ,ε = Φε (MΠ,0,N ) is given by the following formula, X ΦΠ,ε = φΠ,ε,0 (~ yε,t0 ,¯l , xε,t0 ,¯l ))PΠ,ε,0 (Aε,t0 ,¯l ). (6.27) ¯ l∈Lε,t0

Relations (6.26) and (6.27) give an effective algorithm for computing of the reward functions φΠ,ε,n (~ y , x) and optimal expected rewards ΦΠ,ε for approximating space-skeleton discrete time Markov log-price processes ZΠ,ε,n . However, it should be noted that the above algorithm includes also computing ~ y of values g0 (tn+r , e ε,tn+r ,l¯, xε,tn+r ,¯l ), for the pay-off function g0 (t, e~y , x) and values of probabilities P0,n+r+1 (~zε,tn+r ,¯l , Aε,tn+r+1 ,¯l0 ), for the Markov log-price process ~ 0 (t). Z In many cases, these quantities can be computed effectively. However, in some cases, this can be a difficult problem and alternative approximation methods can serve better. ¯ Let us consider the space-skeleton structure ΞΠ,ε˙ = < ~zε,t ˙ n, ˙ n ,¯ l , Aε,t ˙ n ,¯ l , l ∈ Lε,t n = 0, . . . , N >, for the fixed value of parameter ε = ε˙ determining the skeleton points and sets.

6.1

321

Markov LPP

~ ε (t) be, for every ε ∈ [0, ε0 ], respectively, the payLet us also gε (t, e~y , x) and Z off function and the Markov log-price process introduced in Subsection 6.1.1. We use them as approximations for the pay-off function g0 (t, e~y , x) and the log-price ~ 0 (t). process Z ~ (ε) , with the initial distribution P (ε) (A) The approximating Markov chain Z Π,ε,n ˙ Π,ε˙ (ε)

(ε)

(ε)

and transition probabilities PΠ,ε,n z , A), and quantities ΦΠ,ε˙ and φΠ,ε,n y , x) can ˙ (~ ˙ (~ be defined, for every ε ∈ [0, ε0 ], for the pay-off function gε (t, e~y , x) and the log~ ε (t), in the same way, as the Markov chain Z ~ Π,ε,n price process Z ˙ , with the initial distribution PΠ,ε˙ (A) and transition probabilities PΠ,ε,n z , A), and quantities ΦΠ,ε˙ ˙ (~ and φΠ,ε,n y , x) have been defined, for the pay-off function g0 (t, e~y , x) and the ˙ (~ ~ 0 (t). log-price process Z This requires to assume that the following fitting conditions holds: (ε)

L2 [ε]: ˙ For every ε ∈ [0, ε0 ], the transition probabilities PΠ,ε,n z , {~zε,t ˙ n ,¯ l }) = ˙ (~ (ε) 0 ¯ ¯ P (~z z∈A ¯), ~ ¯0 , l ∈ Lε,t ¯0 , A ˙ n , n = 1, 2, . . . , N . ˙ n−1 , l ∈ Lε,t Π,0,n

ε,t ˙ n−1 ,l

ε,t ˙ n ,l

ε,t ˙ n−1 ,l

and (ε)

(ε)

M2 [ε]: ˙ For every ε ∈ [0, ε0 ], the initial distribution PΠ,ε ({~zε,t ˙ 0 ,¯ l }) = PΠ,0 (Aε,t0 ,¯ l ), ¯ l ∈ Lε,t0 . (ε)

(ε)

In this case, quantities ΦΠ,ε˙ and φΠ,ε,n y , x) can be computed using the re˙ (~ currence algorithm given in Lemma 6.1.2. Let us assume that the following simpler analogues of conditions I8 [Π] and J1 [Π] hold: I10 [Π]: There exist sets Z˙ 0tn ∈ BZ , n = 0, . . . , N , such that the pay-off functions gε (tn , e~y , x) → g0 (tn , e~y , x) as ε → 0 for ~z = (~ y , x) ∈ Z˙ 0t and n = 0, . . . , N . n

and J3 [Π]: There exist sets Z˙ 00tn ∈ BZ , n = 0, . . . , N such that Pε (tn−1 , ~z, tn , ·) ⇒ P0 (tn−1 , ~z, tn , ·) as ε → 0, for ~z = (~ y , x) ∈ Z˙ 00tn−1 as ε → 0, and n = 1, . . . , N . Let Btn ,~z,tn+1 = {A ∈ BZ : P0 (tn , ~z, tn+1 , ∂A) = 0} be the σ-algebra of sets of continuity for the probability measure P0 (tn , ~z, tn+1 , A), for ~z ∈ Z, n = 1, . . . , N . Let us assume that the following condition holds: ˙0 ˙ 00 ¯ N9 [Π]: (a) ~zε,t , ˙ n , n = 0, . . . , N ; (b) Aε,t zε,t ˙ n ,¯ l ∈ Ztn ∩ Ztn , l ∈ Lε,t ˙ n+1 ,¯ l0 ∈ Btn ,~ ˙ n ,l¯,tn+1 0 ¯ ¯ l ∈ Lε,t ˙ n , l ∈ Lε,t ˙ n+1 , n = 0, . . . , N − 1. The following lemma, which is the direct corollary of Lemma 6.1.2, gives (ε) conditions for convergence of the reward functions φΠ,ε,n y , x). ˙ (~ Lemma 6.1.3. Let conditions L2 [ε], ˙ I10 [Π], J3 [Π], N9 [Π] hold. Then the following asymptotic relation takes place, for every ~z = (~ y , x) ∈ Z˙ 0tn , n = 0, . . . , N ,

322

6

Time-space-skeleton reward approximations for Markov LPP

(ε)

(0)

φΠ,ε,n y , x) → φΠ,ε,n y , x) as ε → 0. ˙ (~ ˙ (~

(6.28)

Let B = {A ∈ BZ : P0 (∂A) = 0} be the of σ-algebra of sets of continuity for the probability measure P0 (A). Let us assume that the following condition holds: ¯ K18 [Π]: (a) Pε (·) ⇒ P0 (·) as ε → 0; (b) A ¯ ∈ B, l ∈ Lε,t ˙ 0. ε,t ˙ 0 ,l

The following lemma give conditions for convergence of the optimal expected (ε) rewards ΦΠ,ε˙ . Lemma 6.1.3. Let conditions L2 [ε], ˙ M2 [ε], ˙ I13 [Π], J20 [Π], N9 [Π] and K18 [Π] hold. Then the following asymptotic relation takes place, (ε)

(0)

ΦΠ,ε˙ → ΦΠ,ε˙ as ε → 0.

(6.29)

Remark 6.1.2. Under conditions of Lemma 6.1.3, the values of rewards func(ε) (0) ¯ tions φΠ,ε,n yε,t yε,t ˙ n, n = l , xε,t ˙ n ,¯ l ) as ε → 0, for l ∈ Lε,t ˙ n ,¯ l , xε,t ˙ n ,¯ l ) → φΠ,ε,n ˙ n ,¯ ˙ (~ ˙ (~ 1, . . . , N , and, under conditions of Lemma 6.1.4, the above asymptotic relation also holds for ¯ l ∈ Lε,t ˙ 0. Remark 6.1.3. If one would like to get asymptotical relations given in Lemma 6.1.3, for some sequences of partitions Πm , m = 0, 1, . . . and some sequence of parameters ε˙l , l = 0, 1, . . . determining the sets of elements for the corresponding space-skeleton structures ΞΠm ,ε˙l , one should require holding of conditions of Lemmas 6.1.2 and 6.1.3 for all partitions Πm , m = 0, 1, . . . and all skeleton structures corresponding to parameters ε˙l , l = 0, 1, . . .. The numbers of skeleton points and skeleton sets involved in conditions N9 [Π] and K18 [Π] (b) are finite, and these numbers are not more than countable in the case considered in Remark 6.1.3. This always makes it possible to choose skeleton points and skeleton sets in such way that conditions N9 [Πm ] and K18 [Πm ] (b) hold for any given sequences of partitions Πm , m = 0, 1, . . . and parameters ε˙l , l = 0, 1, . . . determining the sets of elements for the corresponding skeleton structures ΞΠm ,ε˙l .

6.1.4 Convergence of time-space-skeleton reward approximations based on a given partition of time interval for multivariate modulated Markov log-price processes Let us formulate conditions of convergence for the rewards φΠ,ε,n (~ y , x) and optimal expected rewards ΦΠ,ε . These conditions, in fact, re-formulate in terms of the payoff functions g0 (tn , e~y , x) and the discrete time multivariate modulated Markov ~ 0 (tn ) = (Y ~0 (tn ), X0 (tn )) the corresponding conditions given log-price processes Z in Section 7.3∗ .

6.1

Markov LPP

323

Let us introduce special shorten notations for the maximal and the minimal skeleton points, for j = 1, . . . , k, t ∈ [0, T ] and ε ∈ (0, ε0 ], ± yε,t,j = δε,t,j m± ε,t,j + λε,t,j .

(6.30)

We impose the following on the parameters of the space-skeleton model defined above: ± → ∞ as N10 [Π]: (a) δε,t,j → 0 as ε → 0, for j = 1, . . . , k, t ∈ Π; (b) ±yε,t,j ± ε → 0, for j = 1, . . . , k, t ∈ Π; (c) ±yε,t,j , t ∈ Π are non-decreasing functions in argument t ∈ Π, for every j = 1, . . . , k and ε ∈ (0, ε0 ]; (d) for any x ∈ X and d > 0, there exists εx,d ∈ (0, ε0 ] such that the ball Rd (x) ⊆ Kε,t , t ∈ Π, for ε ∈ (0, εx,d ]; (e) sets Kε,t,l have diameters dε,t,l = supx0 ,x00 ∈Kε,t,l dX (x0 , x00 ) such that dε,t = maxm− ≤l≤m+ dε,t,l → 0 as ε → 0, t ∈ Π. ε,t,0

ε,t,0

− Note that condition N10 [Π] implies that δε,tn ,j (m+ ε,t,j −mε,tn ,j ) → ∞ as ε → 0, − for j = 1, . . . , k, tn ∈ Π, and m+ ε,tn ,j − mε,tn ,j → ∞ as ε → 0, for j = 1, . . . , k, tn ∈ Π. − Note, however, that condition N10 [Π] does not require that m+ ε,tn ,0 −mε,tn ,0 → ∞ as ε → 0, for tn ∈ Π. + Note also that that sets Kε,tn and Kε,tn ,l , m− ε,tn ,0 ≤ l ≤ mε,tn ,0 satisfying conditions N10 [Π] (d) and (e) can be always constructed for the most important cases, where X is a discrete space or an Euclidian space, or a product of such spaces. The role of sets Kε,tn ,l can be played, respectively, by one-point sets, cubes, or products of one-point sets and cubes. Also, it is worth to note that the standard choice of structural skeleton parameters penetrating condition N10 [Π] is where parameters δε,tn ,j = δε , m± ε,tn ,j = ±mε , j = 1, . . . , k, m± = ±m do not depend on t ∈ Π. ε,0 n ε,tn ,0 In this case, conditions N10 [Π] (a) – (b) require that δε → 0, mε → ∞ and δε mε → ∞ as ε → 0. ~ 0 (t) the following first-type condition We impose on the log-price process Z ¯ and of exponential moment compactness, which is a variant of condition C19 [β], ¯ should be assumed to hold for some vector parameter β = (β1 , . . . , βk ) with nonnegative components: ¯ ∆βi (Y0,i (t· , Π) < K87,i , i = 1, . . . , k for some 1 < K87,i < ∞, i = 1, . . . , k. C20 [β]:

Let us re-call the second-type modulus of exponential moment compactness for the components of the log-price process, Ξ± β (Y0,i (t· , Π) =

max

sup E~z,tn e±β(Y0,i (tn+1 )−Y0,i (tn )) .

0≤n≤N −1 ~ z ∈Z

(6.31)

As follows from Lemma 4.1.8∗ , the following condition implies condition ¯ to hold: C20 [β]

324

6

Time-space-skeleton reward approximations for Markov LPP

¯ Ξ± (Y0,i (t· , Π) < K88,i , i = 1, . . . , k, for some 1 < K88,i < ∞, i = 1, . . . , k. E13 [β]: β The following condition is an analogue of condition B34 [¯ γ ]. It should be assumed to hold for some vector parameter γ¯ = (γ1 , . . . , γk ) with non-negative components: B35 [¯ γ ]: max0≤n≤N sup~z=(~y,x)∈Z

y ~

1+

and 0 ≤ L68,1 , . . . , L68,k < ∞.

|g0 (tn ,e P k i=1

,x)|

L68,i eγi |yi |

< L67 , for some 0 < L67 < ∞

¯ assumed to hold As follows from Lemma 4.1.4∗ , conditions B35 [¯ γ ] and C20 [β], ¯ for some vector parameters γ¯ and β such that 0 ≤ γi ≤ βi , i = 1, . . . , k, imply that the reward function φΠ,0,n (~ y , x) is finite, i.e., for t = tn ∈ Π and ~z = (~ y , x) ∈ Z, ~

|φΠ,0,n (~ y , x)| ≤ E~z,tn sup |g0 (tr , eY0 (tr ) , X0 (tr ))| < ∞.

(6.32)

n≤r≤N

Condition I8 [Π] should be replaced by the following condition: I11 [Π]: There exist sets Z0tn ∈ BZ , n = 0, . . . , N such that function g0 (tn , e~y , x) is continuous in points ~z = (~ y , x) ∈ Z0tn , for every n = 0, . . . , N . Condition I11 [Π] means that g0 (tn , e~yε , xε ) → g0 (tn , e~y0 , x0 ) as ε → 0 for any ~zε = (~ yε , xε ) → ~z0 = (~ y0 , x0 ) ∈ Z0tn as ε → 0, for n = 0, . . . , N . Thus, condition I11 [Π] is, in fact, re-formulation of condition I8 [Π] for the case, where the pay-off function does not depend on ε. Condition J1 [Π] should be replaced by the following condition: J4 [Π]: There exist sets Z00tn ∈ BZ , n = 0, . . . , N such that: (a) PΠ,0,n (~zε , ·) = P0 (tn−1 , ~zε , tn , ·) ⇒ PΠ,0,n (~z0 , ·) = P0 (tn−1 , ~z0 , tn , ·) as ε → 0, for any ~zε → ~z0 ∈ Z00tn−1 as ε → 0, and n = 1, . . . , N ; (b) PΠ,0,n (~z0 , Z0tn ∩ Z00tn ) = P0 (tn−1 , ~z0 , tn , Z0tn ∩ Z00tn ) = 1, for every ~z0 ∈ Z0tn−1 ∩ Z00tn−1 , n = 1, . . . , N , where Z0tn , n = 0, . . . , N are sets introduced in condition I11 [Π]. Condition J4 [Π] (a) means that the transition probabilities P0 (tn−1 , ~z0 , tn , ·) are weakly continuous at points ~z ∈ Z00tn−1 for every n = 1, . . . , N . 0

00

Note also that condition J4 [Π] (b) holds if the sets Z n , Z n = ∅, n = 1, . . . , N , or these sets are finite or countable and measures P0 (tn−1 , ~z0 , tn , ·), ~z0 ∈ Z0tn−1 ∩ 0 00 Z00tn−1 have no atoms in points of the sets Z tn , Z tn , for n = 1, . . . , N , as well as in the case, where measures P0 (tn−1 , ~z0 , tn , ·), ~z0 ∈ Z0tn−1 ∩ Z00tn−1 , n = 1, . . . , N are absolutely continuous with respect to some σ-finite measure P (A) on the σ-algebra 0 00 BZ and P (Ztn ), P (Ztn ) = 0, for n = 1, . . . , N . The following theorem is a variant of Theorem 7.3.1∗ . Theorem 6.1.5. Let Π = h0 = t0 < t1 · · · < tN = T i be a partition of the in~ Π,0,n = Z ~ 0 (tn ) terval [0, T ] and multivariate modulated Markov log-price process Z ~ and the corresponding space-skeleton approximation Markov processes ZΠ,ε,n are

6.1

Markov LPP

325

¯ hold with vector defined as described above. Let also conditions B35 [¯ γ ] and C20 [β] ¯ parameters γ¯ = (γ1 , . . . , γk ) and β = (β1 , . . . , βk ) such that, for every i = 1, . . . , k, either βi > γi > 0 or βi = γi = 0, and also conditions L1 , N10 [Π], I11 [Π], and J4 [Π] hold. Then, the following relation takes place, for t = tn ∈ Π and ~zε = (~ yε , xε ) → ~z0 = (~ y0 , x0 ) ∈ Z0tn ∩ Z00tn as ε → 0, (0)

(0)

φΠ,ε,n (~ yε , xε ) → φΠ,0,n (~ y0 , x0 ) = φt (MΠ,t,T , ~ y0 , x0 ) as ε → 0.

(6.33)

¯ It also should be The following condition is an analogue of conditions D25 [β]. assumed to hold for some vector parameter β¯ = (β1 , . . . , βk ) with non-negative components: ¯ Eeβi |Y0,i (t0 )| < K89,i , i = 1, . . . , k, for some 1 < K89,i < ∞, i = 1, . . . , k. D26 [β]: ¯ and D26 [β], ¯ assumed As follows from Lemma 4.1.6∗ , conditions B35 [¯ γ ], C20 [β] to hold for some vector parameters γ¯ and β¯ such that 0 ≤ γi ≤ βi , i = 1, . . . , k, imply that the reward function ΦΠ,0 is finite, i.e., ~

|ΦΠ,0 | ≤ E sup |g0 (tn , eY0 (tn ) , X0 (tn ))| < ∞.

(6.34)

0≤n≤N

The following condition is a reduced form of condition K16 : K19 : PΠ,0,0 (Z0t0 ∩ Z00t0 ) = P0 (Z0t0 ∩ Z00t0 ) = 1, where Z0t0 and Z00t0 are the sets introduced in conditions I11 [Π] and J4 [Π]. The following theorem is a variant of Theorem 7.3.2∗ . Theorem 6.1.6. Let Π = h0 = t0 < t1 · · · < tN = T i be a partition of the in~ Π,0,n = Z ~ 0 (tn ) terval [0, T ] and multivariate modulated Markov log-price process Z ~ and the corresponding space-skeleton approximation Markov processes ZΠ,ε,n are ¯ and D26 [β] ¯ hold defined as described above. Let also conditions B35 [¯ γ ], C20 [β] ¯ with vector parameters γ¯ = (γ1 , . . . , γk ) and β = (β1 , . . . , βk ) such that, for every i = 1, . . . , k, either βi > γi > 0 or βi = γi = 0, and also conditions L1 , M1 , N10 [Π], I11 [Π], J4 [Π] and K19 hold. Then, the following relation takes place, (0)

ΦΠ,ε → ΦΠ,0 = Φ(MΠ,0,T ) as ε → 0.

(6.35)

6.1.5 Convergence of time-space-skeleton reward approximations based on an arbitrary partitions of time interval, for multivariate modulated Markov log-price processes Let us formulate conditions of convergence for the reward functions φΠ,ε,n (~ y , x) and the optimal expected rewards ΦΠ,ε , for an arbitrary partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ].

326

6

Time-space-skeleton reward approximations for Markov LPP

In fact, we should just formulate (in terms of the pay-off function g0 (t, e~y , x) ~ 0 (t) = and the continuous time multivariate modulated log-price processes Z ~0 (t), X0 (t))) conditions given in Theorems 6.1.5 and 6.1.6 in such way these (Y theorems could be applied for the case of an arbitrary chosen partition Π. First of all, we should replace fitting conditions L1 and M1 by the following more general conditions imposed on transition probabilities and initial distributions of approximating atomic Markov chains ZΠ,ε,n , which should hold for any partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ]: L3 : For any partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ], and every ε ∈ (0, ε0 ], the transition probabilities PΠ,ε,n (~z, {~zε,tn ,¯l }) = PΠ,0,n (~zε,tn−1 ,¯l0 , Aε,tn ,¯l ), ~z ∈ Aε,tn−1 ,¯l0 , ¯ l0 ∈ Lε,tn−1 , ¯ l ∈ Lε,tn , n = 1, 2, . . . , N . and M3 : For any partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ], and for every ε ∈ (0, ε0 ], the initial distribution PΠ,ε ({~zε,t0 ,¯l }) = PΠ,0 (Aε,t0 ,¯l ), ¯ l ∈ Lε,t0 . Note that, according condition M3 , we require that, for every ε ∈ (0, ε0 ], the approximating Markov chains ZΠ,ε,n have the same initial distribution for all partitions Π. We should replace fitting conditions N10 [Π] by the following more general condition: ± N11 : (a) δε,t,j → 0 as ε → 0, for j = 1, . . . , k, t ∈ [0, T ]; (b) ±yε,t,j → ∞ as ± ε → 0, for j = 1, . . . , k, t ∈ [0, T ]; (c) ±yε,t,j are non-decreasing functions in argument t ∈ [0, T ], for every j = 1, . . . , k and ε ∈ (0, ε0 ]; (d) for any x ∈ X and d > 0, there exists εx,d ∈ (0, ε0 ] such that the ball Rd (x) ⊆ Kε,t , t ∈ [0, T ], for ε ∈ (0, εx,d ]; (e) sets Kε,t,l have diameters dε,t,l = supx0 ,x00 ∈Kε,t,l dX (x0 , x00 ) such that dε,t = maxm− ≤l≤m+ dε,t,l → 0 as ε → 0, t ∈ [0, T ]. ε,t,0

ε,t,0

− Note that condition N11 implies that δε,t,j (m+ ε,t,j − mε,t,j ) → ∞ as ε → 0, for − + j = 1, . . . , k, t ∈ [0, T ], and mε,t,j −mε,t,j → ∞ as ε → 0, for j = 1, . . . , k, t ∈ [0, T ]. − Note, however, that condition N11 does not require that m+ ε,t,0 − mε,t,0 → ∞ as ε → 0, for t ∈ [0, T ]. + Note also that that sets Kε,t and Kε,t,l , m− ε,t,0 ≤ l ≤ mε,t,0 satisfying conditions N11 (d) and (e) can be always constructed for the most important cases, where X is a discrete space or an Euclidian space, or a product of such spaces. The role of sets Kε,t,l can be played, respectively, by one-point sets, cubes, or products of one-point sets and cubes. Also, it is worth to note that the standard choice of structural skeleton parameters penetrating condition N11 is where parameters δε,t,j = δε , m± ε,t,j = ±mε , j = 1, . . . , k, m± ε,t,0 = ±mε,0 do not depend on t ∈ [0, T ].

6.1

Markov LPP

327

In this case, conditions N11 (a) – (b) require that δε → 0, mε → ∞ and δε mε → ∞ as ε → 0. ~ 0 (t) the following first-type condition We impose on the log-price process Z ¯ and of exponential moment compactness, which is a variant of condition C1 [β], ¯ should be assumed to hold for some vector parameter β = (β1 , . . . , βk ) with nonnegative components: ¯ limc→0 ∆βi (Y0,i (·), c, T ) = 0, i = 1, . . . , k. C21 [β]: The following condition is an analogue of condition B1 [¯ γ ]. It should be assumed to hold for some vector parameter γ¯ = (γ1 , . . . , γk ) with non-negative components: B36 [¯ γ ]: sup0≤t≤T sup~z=(~y,x)∈Z 0 ≤ L70,1 , . . . , L70,k < ∞.

y ~

P|gk0 (t,e

1+

i=1

,x)|

L70,i eγi |yi |

< L69 , for some 0 < L69 < ∞ and

¯ assumed to hold As follows from Lemma 4.1.4, conditions B36 [¯ γ ] and C21 [β], ¯ for some vector parameters γ¯ and β such that 0 ≤ γi ≤ βi , i = 1, . . . , k, imply that the reward function φΠ,0,n (~ y , x) is finite, i.e., for any partition Π, t = tn ∈ Π and ~z = (~ y , x) ∈ Z, ~

|φΠ,0,n (~ y , x)| ≤ E~z,t sup |g0 (s, eY0 (s) , X0 (s))| < ∞.

(6.36)

t≤s≤T

Condition I11 [Π] should be replaced by the following condition: I12 : There exist sets Z0t ∈ BZ , t ∈ [0, T ] such that g0 (t, e~y , x) (as function in ~z) is continuous in points ~z = (~ y , x) ∈ Z0t , for every t ∈ [0, T ]. Condition J4 [Π] should be replaced by the following condition: J5 : There exist sets Z00t ∈ BZ , t ∈ [0, T ] such that: (a) P0 (t, ~zε , t + u, ·) ⇒ P0 (t, ~z0 , t+u, ·) as ε → 0, for any ~zε → ~z0 ∈ Z00t as ε → 0, and 0 ≤ t < t+u ≤ T ; (b) P0 (t, ~z0 , t + u, Z0t+u ∩ Z00t+u ) = 1, for every ~z0 ∈ Z0t ∩ Z00t , 0 ≤ t < t + u ≤ T , where Z0t , t ∈ [0, T ] are sets introduced in condition I12 . Condition J5 (a) means that the transition probabilities P0 (t, ~z0 , t + u, ·) are weakly continuous at points ~z ∈ Z00t , for every 0 ≤ t ≤ t + u ≤ T . 0 00 Note also that condition J5 (b) holds if the sets Z t , Z t = ∅, t ∈ [0, T ], or these sets are finite or countable and measures P0 (t, ~z0 , t + u, A), ~z0 ∈ Z0t ∩ Z00t have no 0 00 atoms in points of the sets Z t+u , Z t+u , for 0 ≤ t ≤ t+u ≤ T , as well as in the case, where measures P0 (t, ~z0 , t + u, A), ~z0 ∈ Z0t ∩ Z00t , 0 ≤ t ≤ t + u ≤ T are absolutely continuous with respect to some σ-finite measure P (A) on the σ-algebra BZ and 0 00 P (Zt ), P (Zt ) = 0, for t ∈ [0, T ]. The following theorem is a corollary of Theorem 6.1.5.

328

6

Time-space-skeleton reward approximations for Markov LPP

¯ hold and, for every i = Theorem 6.1.7. Let conditions B36 [¯ γ ], C21 [β] 1, . . . , k either 0 = γi = βi or 0 < γi < βi < ∞. Let also conditions L3 , N11 , I12 and J5 hold. Then, the following asymptotic relation holds for any partition Π = h0 = t0 < t1 · · · < tN = T i of the interval [0, T ], t = tn ∈ Π and ~zε → ~z0 ∈ Z0t ∩ Z00t as ε → 0, (0)

(0)

φΠ,ε,n (~ yε , xε ) → φΠ,0,n (~ y0 , x0 ) = φt (MΠ,t,T , ~ y0 , x0 ) as ε → 0.

(6.37)

Proof. Let Π = h0 = t0 < t1 · · · < tN = T i be an arbitrary partition of the interval [0, T ]. We shall check that conditions of Theorem 6.1.5 hold. It is obvious that condition B36 [¯ γ ] implies that condition B35 [¯ γ ] holds for any partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ] with constants L67 = L69 and L68,i = L70,i , i = 1, . . . , k. ¯ implies that condition C20 [β] ¯ holds for any partition Π = Condition C21 [β] h0 = t0 < t1 . . . < tN = T i of the interval [0, T ]. It follows from Lemma 6.1.1, which should be just applied to the model, where ¯ the log-price processes do not depend on parameter ε. In this case conditions C2 [β] ¯ ¯ ¯ and C19 [β] reduce, respectively, to conditions C21 [β] and C20 [β]. It remains to note that conditions L2 , N11 , I12 and J5 imply that, respectively, conditions L1 , N10 [Π], I11 [Π] and J4 [Π] hold for any partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ].  ¯ which is natural The following condition is also a variant of conditions D26 [β], ~ 0 (t). It should be assumed to to re-formulate, in this case, in terms of processes Z hold for some vector parameter β¯ = (β1 , . . . , βk ) with non-negative components: ¯ Eeβi |Y0,i (0)| < K90,i , i = 1, . . . , k, for some 1 < K90,i < ∞, i = 1, . . . , k. D27 [β]: ¯ and D27 [β], ¯ assumed As follows from Lemma 4.1.4, conditions B36 [¯ γ ], C21 [β] to hold for some vector parameters γ¯ and β¯ such that 0 ≤ γi ≤ βi , i = 1, . . . , k, imply that the optimal expected reward ΦΠ,0 is finite, i.e., for any partition Π, ~

|ΦΠ,0 | ≤ E sup |g0 (t, eY0 (t) , X0 (t))| < ∞.

(6.38)

0≤t≤T

¯ which, also, is The following condition is also a variant of conditions K19 [β], ~ natural to re-formulate, in this case, in terms of processes Z0 (t): K20 : P0 (Z00 ∩ Z000 ) = 1, where Z00 and Z000 are the sets introduced in conditions I12 and J5 . The following theorem is a corollary of Theorem 6.1.6. ¯ and D26 [β] ¯ hold and, for Theorem 6.1.8. Let conditions B36 [¯ γ ], C21 [β] every i = 1, . . . , k either 0 = γi = βi or 0 < γi < βi < ∞. Let also conditions L3 , M3 , N11 , I12 , J5 and K20 hold. Then, the following asymptotic relation holds for any partition Π = h0 = t0 < t1 · · · < tN = T i of the interval [0, T ], the following

6.2 Time-space-skeleton reward approximations for LPP with independent increments

329

asymptotic relation holds, (0)

ΦΠ,ε → ΦΠ,0 = Φ(MΠ,0,T ) as ε → 0.

(6.39)

Proof. Let Π = h0 = t0 < t1 · · · < tN = T i be an arbitrary partition of the interval [0, T ]. We shall check that conditions of Theorem 6.1.6 hold. ¯ As was pointed out in the proof of Theorem 6.1.7, conditions B36 [¯ γ ], C21 [β], ¯ L2 , N2 , I12 and J5 imply that, respectively, conditions B35 [¯ γ ], C20 [β], L1 , N10 [Π], I11 [Π] and J4 [Π] hold. ¯ is, just, a re-formulation of condition D25 [β] ¯ required in Condition D26 [β] Theorem 6.1.6. It remains to note that condition K20 coincides with condition K19 , with sets Z00 and Z000 are the sets introduced in conditions I12 and J5 . These sets play, in this case, the role of the corresponding sets in conditions I11 [Π] and J4 [Π]. The latter conditions are implied to hold, respectively, by conditions I12 and J5 .  Remark 6.1.4. If one would like to get asymptotical relation (6.37) and (6.39) only for some sequence of partitions Πr = {0 = tr,0 < · · · < tr,Nr }, r = 1, 2, . . . such that d(Πr ) = max1≤n≤Nr −1 (tr,n+1 − tr,n ) → 0 as r → ∞, one can require holding of asymptotic relations in condition J5 only for t < t + u, t, t + u ∈ Πr = {tr,n , n = 0, . . . , Nr }, r = 1, 2, . . ..

6.2 Time-space-skeleton reward approximations for LPP with independent increments In this section we present results about time-space-skeleton reward approximations for multivariate log-price processes with independent increments.

6.2.1 Convergence of time-skeleton reward approximations based on a given partition of time interval, for multivariate log-price processes with independent increments ~ε (t), t ∈ [0, T ] be, for every ε ∈ [0, ε0 ], a càdlàg log-price process with inLet Y dependent increments, with a phase space Rk , an initial distribution Pε (A), and distributions of increments Pε (t, t + s, A). Let also gε (t, e~y ) be, for every ε ∈ [0, ε0 ], a real-valued measurable pay-off function which do not depend on the index argument. Let us formulate conditions of convergence for optimal expected rewards (ε) Φ(MΠ,0,T ), for a given partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ].

330

6

Time-space-skeleton reward approximations for Markov LPP

The above model is a particular case of the model considered in Section 6.1. In this case, we consider log-price processes and pay-off function without index component. Also, processes are processes with independent increments which are a particular case of Markov processes. It is useful to note that, in this case, the corresponding embedded discrete P ~ ε,k , n = 0, 1, . . . , N is a random ~ε (tn ) = Y ~ε (t0 ) + n W time log-price process Y k=1 ~ε (t0 ) = Y ~ε (0) and independent random jumps W ~ ε,n = walk with the initial state Y ~ ~ Yε (tn ) − Yε (tn−1 ), n = 1, 2, . . . , N . The following condition is a reduced form of condition B34 [¯ γ ] for the model without index component. It should be assumed to hold for some vector parameter γ¯ = (γ1 , . . . , γk ) with non-negative components: B37 [¯ γ ]: limε→0 max0≤n≤N sup~y∈Rk and 0 ≤ L72,1 , . . . , L72,k < ∞.

y ~

P|gkε (tn ,e

1+

i=1

)|

L72,i eγi |yi |

< L71 , for some 0 < L71 < ∞

The first-type modulus of exponential moment compactness ∆β (Yε,i (t· , Π) takes the following simpler form, for β ≥ 0, ∆0β (Yε,i (t· , Π) =

max

0≤n≤N −1

Eeβ|Yε,i (tn+1 )−Yε,i (tn )| .

(6.40)

¯ The following condition is the corresponding reduced form of condition C15 [β]. ¯ It should be assumed to hold for some vector parameter β = (β1 , . . . , βk ) with nonnegative components: ¯ limε→0 ∆0β (Yε,i (t· , Π) < K91,i , i = 1, . . . , k for some 1 < K91,i < ∞, i = C22 [β]: i

1, . . . , k. The second-type modulus of exponential moment compactness Ξ± β (Yε,i (t· , Π) takes the following simpler form, for β ≥ 0, Ξ0± β (Yε,i (t· , Π) =

max

0≤n≤N −1

Ee±β(Yε,i (tn+1 )−Yε,i (tn )) .

(6.41)

¯ The following condition is the corresponding reduced form of condition E25 [β]. ¯ It should be assumed to hold for some vector parameter β = (β1 , . . . , βk ) with nonnegative components: ¯ limε→0 Ξ0± (Yε,i (t· , Π) < K92,i , i = 1, . . . , k, for some 1 < K92,i < ∞, E14 [β]: β

i = 1, . . . , k. ¯ implies that condition [C22 [β] ¯ As follows from Lemma 5.1.5∗ , condition E14 [β] holds. ¯ (the difThe following condition is a particular variant of condition D25 [β] ference is only that it is imposed on initial states of the embedded random walk Yε (t0 ) (instead of the initial states of log-price component of the embedded Markov

6.2

LPP with independent increments

331

~ ε (t0 ) = (Y ~ε (t0 ), Xε (t0 ))), assumed to hold for some vector parameter chains Z ¯ β = (β1 , . . . , βk ) with non-negative components: 0 0 ¯ limε→0 Eeβi Yεi (t0 ) < K85,i , i = 1, . . . , k, for some 0 < K85,i < ∞, i = D025 [β]: 1, . . . , k. ¯ and D025 [β] ¯ are, just, particular Since conditions conditions B37 [¯ γ ], C22 [β] ¯ γ¯ ) ∈ ¯ ¯ variants of conditions B34 [¯ γ ], C19 [β] and D25 [β], there exists ε63 = ε63 (β, (0, ε0 ] such that, for every ε ∈ [0, ε63 ], the optimal expected reward is finite, i.e., ~

(ε)

|Φ(MΠ,0,T )| ≤ E sup |gε (tn , eYε (tn ) ))| < ∞.

(6.42)

0≤n≤N

We impose on pay-off functions gε (tn , e~y ) the following condition of locally uniform convergence, which is a reduced form of condition I8 : I13 [Π]: There exist sets Ytn ∈ BRk , n = 0, . . . , N , such that gε (tn , e~yε ) → g0 (tn , e~y0 ) as ε → 0 for any ~ yε → ~ y0 ∈ Y0tn and n = 0, . . . , N . ~ε (tn ) − We also impose on distribution of increments Pε,n (tn−1 , tn , A) = P{Y ~ ~ yε (tn−1 ) ∈ A} of the discrete time log-price process Zε (tn ) the following condition of locally uniform weak convergence, which replaces, in this case, condition J1 [Π]: J6 [Π]: (a) Pε (tn−1 , tn , ·) ⇒ P0 (tn−1 , tn , ·) as ε → 0, for any n = 1, . . . , N ; (b) y ) = 1, for every ~ y0 ∈ Ytn−1 and n = 1, . . . , N , where P0 (tn−1 , tn , Ytn − ~ Ytn , n = 0, . . . , N are sets introduced in condition I13 . A typical example is where the sets Ytn , n = 1, . . . , N are empty sets. Then condition J6 [Π] (b) obviously holds. Another typical example is where sets Ytn , n = 1, . . . , N are at most finite or countable sets. Then the assumption that measures P0 (tn−1 , tn , A), n = 1, . . . , N have no atoms implies that condition J6 [Π] (b) holds. One more example is, where measures P0 (tn−1 , tn , A), n = 1, . . . , N are absolutely continuous with respect to the Lebesgue measure in Rk and sets Ytn , n = 1, . . . , N have zero Lebesgue measure. This assumption also implies that condition J6 [Π] (b) holds. ~ε (tn ) is a Markov chain with the phase The discrete time log-price process Y space Rk and transition probabilities, ~ε (tn ) − Y ~ε (tn−1 ) ∈ A}. Pε,n (~ y , A) = Pε (tn−1 , tn , A − ~ y ) = P{~ y+Y

(6.43)

~ ε (tn ) = (Y ~ε (tn ), Xε (tn )) with the It is a particular case of the Markov chain Z virtual index component Xε (tn ) ≡ x0 with a one-point phase space X0 = {x0 }. As follows from Lemma 5.3.2∗ , condition J6 [Π] implies condition J1 [Π] to hold, with sets Z0tn = Ytn × X0 , Z00tn = Rk × X0 , n = 1, . . . , N . ~ε (t0 ) The following condition imposed on the initial distributions Pε (A) = P{Y ∈ A} is the corresponding reduced form of condition K16 :

332

6

Time-space-skeleton reward approximations for Markov LPP

K21 : (a) Pε (·) ⇒ P0 (·) as ε → 0; (b) P0 (Yt0 ) = 1, where Yt0 is the set introduced in condition I13 [Π]. Theorem 6.1.1 takes in this case the following form. Theorem 6.2.1. Let Π = h0 = t0 < t1 · · · < tN = T i be a partition of the ¯ and D025 [β] ¯ hold and, for every interval [0, T ] such that conditions B37 [¯ γ ], C22 [β] i = 1, . . . , k, either 0 = γi = βi or 0 < γi < βi < ∞. Let also conditions I13 [Π], J6 [Π], and K21 hold for this partition.Then, the following asymptotic relation holds, (ε) (0) Φ(MΠ,0,T ) → Φ(MΠ,0,T ) as ε → 0. (6.44) Let us now formulate conditions of convergence for the reward functions for a given partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ]. (ε) (ε) Note that, in this case, the reward function φt (MΠ,t,T , ~ y ) also does not depend on the index argument x. ¯ are, just, particular variants of Since conditions conditions B37 [¯ γ ] and C22 [β] ¯ γ¯ ) ∈ (0, ε0 ] such that, for ¯ conditions B31 [¯ γ ] and C19 [β], there exists ε64 = ε64 (β, (ε) (ε) every ε ∈ [0, ε64 ], the reward function φt (MΠ,t,T , ~ y ) is finite, i.e., for t = tn ∈ Π and ~ y ∈ Rk , (ε) (ε) φt (MΠ,t,T , ~ y ),

(ε)

~

(ε)

|φt (MΠ,t,T , ~ y )| ≤ E~y,tn sup |gε (tr , eYε (tr ) )| < ∞.

(6.45)

n≤r≤N

Theorem 6.1.2 takes in this case the following form. Theorem 6.2.2. Let Π = h0 = t0 < t1 · · · < tN = T i be a partition of ¯ hold and, for every the interval [0, T ] such that conditions B37 [¯ γ ] and C22 [β] i = 1, . . . , k, either 0 = γi = βi or 0 < γi < βi < ∞. Let also conditions I13 [Π] and J6 [Π] hold for this partition. Then, the following asymptotic relation holds for t = tn ∈ Π and ~ yε → ~ y0 ∈ Ytn as ε → 0, (ε)

(ε)

(0)

(0)

φt (MΠ,t,T , ~ yε ) → φt (MΠ,t,T , ~ y0 ) as ε → 0.

(6.46)

6.2.2 Convergence of time-skeleton reward approximations based on arbitrary partitions of time interval, for multivariate log-price processes with independent increments Let us formulate conditions for convergence of rewards, for an arbitrary partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ]. In fact, we should just formulate (in terms of the pay-off functions gε (t, e~y ) and ~ε (t)) conditions the continuous time multivariate modulated log-price processes Y given in Theorems 6.2.1 and 6.2.2 in such way these theorems could be applied to the case of an arbitrary chosen partition Π.

6.2

LPP with independent increments

333

(ε)

Let us first do this for the optimal expected rewards Φ(MΠ,0,T ). We assume that conditions B10 [¯ γ ] (introduced in Subsection 4.3.3) holds for some vector parameter γ¯ = (γ1 , . . . , γk ) with non-negative components. ¯ (introduced in Subsection 4.3.2) Let us also assume that condition C11 [β] holds. ¯ implies that condition C22 [β] ¯ As follows from Lemma 6.1.1, condition C11 [β] holds for any partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ]. ¯ (introduced in Subsection 4.3.2) holds. We also assume that condition D15 [β] ¯ and D15 [β] ¯ (which are just reBy Lemma 4.1.10, conditions B10 [¯ γ ], C11 [β] ¯ and D14 [β]), ¯ assumed to duced forms, respectively, of conditions B9 [¯ γ ], C2 [β] hold for some vector parameters β¯ and γ¯ such that γi ≤ βi , i = 1, . . . , k, implies ¯ γ¯ ) ∈ (0, ε0 ] such that, for ε ∈ [0, ε6 ] and any partition Π, that there exists ε6 (β, ~

(ε)

|Φ(MΠ,0,T )| ≤ E sup |gε (t, eYε (t) )| < ∞.

(6.47)

0≤t≤T

We impose on pay-off functions gε (tn , e~y ) the following condition of locally uniform convergence, which is a reduced form of condition I9 [Π]: I14 : There exist sets Yt ∈ BRk , t ∈ [0, T ], such that the pay-off functions U

gε (t, e~y0 ) −→ g0 (t, e~y0 ) as ε → 0, i.e., gε (t, e~yε ) → g0 (t, e~y0 ) as ε → 0 for any ~ yε → ~ y0 ∈ Yt and t ∈ [0, T ]. ~ε (t + u) − We also impose on distributions of increments Pε (t, t + u, A) = P{Y ~ ~ Yε (t) ∈ A} of the log-price process Zε (t) the following condition of locally uniform weak convergence, which is a reduced form of condition J2 [Π]: J7 : (a) Pε (t, t + u, ·) ⇒ P0 (t, t + u, ·) as ε → 0, for any 0 ≤ t < t + u ≤ T ; (b) P0 (t, t + u, Yt+u − ~ y ) = 1, for every ~ y ∈ Yt and 0 ≤ t < t + u ≤ T , where Yt , t ∈ [0, T ] are sets introduced in condition I14 . A typical example is where the sets Yt , t ∈ [0, T ] are empty sets. Then condition J7 (b) obviously holds. Another typical example is where sets Yt , t ∈ [0, T ] are at most finite or countable sets. Then the assumption that measures P0 (t, t + u, A), t ∈ [0, T ] have no atoms implies that condition J7 (b) holds. One more example is where measures P0 (t, t + u, A), 0 ≤ t ≤ t + u ≤ T are absolutely continuous with respect to the Lebesgue measure on BRk and sets Yt , t ∈ [0, T ] have zero Lebesgue measure. This assumption also implies that condition J7 (b) holds. It should be noted that the most important is the model, where the limiting process Y0 (t) is a Lévy process. In this case, there exist many well known models, where infinitely divisible distributions P0 (t, t + u, A) have probability densities with respect to the Lebesgue measure. It also worth to note that necessary and sufficient conditions which provide holding of asymptotic relations Pε (t, t + u, ·) ⇒ P0 (t, t + u, ·) as ε → 0 for the case,

334

6

Time-space-skeleton reward approximations for Markov LPP

~ε (t) are processes of step-sums of independent random variables where processes Y ~0 (t) is a Lévy process are well known. or Lévy processes while the limiting process Y For example, one can find these conditions in the books by Skorokhod (1964, 1971) and Gikhman and Skorokhod (1971, 2004). Note also the transition probabilities Pε (t, ~z, t, A) = I(~z ∈ A). In this case, measures I(~zε ∈ ·) ⇒ I(~z0 ∈ ·) as ε → 0 for any ~zε → ~z0 ∈ Z as ε → 0. Indeed, a Borel set A is a set of continuity for the measure I(~z0 ∈ A) if and only if ~z0 ∈ / ∂A. In this case, ~zε ∈ / ∂A and I(~zε ∈ A) = I(~z0 ∈ A) for all ε small enough. Also, I(~z0 ∈ B) = 1 for any ~z0 ∈ B ∈ BZ . Thus, the case 0 ≤ t = t + u ≤ T can also be included in condition J7 . ~ε (t) is a Markov process with the phase The continuous time log-price process Y space Rk and transition probabilities, Pε (t, ~ y , t + u, A) = Pε (t, t + u, A − ~ y) ~ε (t + u) − Y ~ε (t) ∈ A}. = P{~ y+Y

(6.48)

It is a particular case of the Markov process $\vec Z_\varepsilon(t) = (\vec Y_\varepsilon(t), X_\varepsilon(t))$ with the virtual index component $X_\varepsilon(t) \equiv x_0$, which has a one-point phase space $\mathbb{X}_0 = \{x_0\}$. As follows from Lemma 5.3.2$^*$, condition ${\bf J}_{24}$ implies that condition ${\bf J}_{19}$ holds with the sets $\mathbb{Z}'_t = \mathbf{Y}_t \times \mathbb{X}_0$, $\mathbb{Z}''_t = \mathbb{R}^k \times \mathbb{X}_0$, $t \in [0, T]$.

The following condition, imposed on the initial distributions $P_\varepsilon(A) = \mathsf{P}\{\vec Y_\varepsilon(0) \in A\}$, is the corresponding reduced form of condition ${\bf K}_{17}$:

${\bf K}_{22}$: (a) $P_\varepsilon(\cdot) \Rightarrow P_0(\cdot)$ as $\varepsilon \to 0$; (b) $P_0(\mathbf{Y}_0) = 1$, where $\mathbf{Y}_0$ is the set introduced in condition ${\bf I}_{14}$.

Theorem 6.1.3 takes in this case the following form.

Theorem 6.2.3. Let conditions ${\bf B}_{10}[\bar\gamma]$, ${\bf C}_{11}[\bar\beta]$ and ${\bf D}_{15}[\bar\beta]$ hold and, for every $i = 1, \ldots, k$, either $0 = \gamma_i = \beta_i$ or $0 < \gamma_i < \beta_i < \infty$. Let also conditions ${\bf I}_{14}$, ${\bf J}_{7}$, and ${\bf K}_{22}$ hold. Then, the following asymptotic relation holds for any partition $\Pi = \langle 0 = t_0 < t_1 < \cdots < t_N = T \rangle$ of the interval $[0, T]$,
\[
\Phi(\mathcal{M}^{(\varepsilon)}_{\Pi,0,T}) \to \Phi(\mathcal{M}^{(0)}_{\Pi,0,T}) \ \text{as} \ \varepsilon \to 0. \tag{6.49}
\]
By Lemma 4.1.8, conditions ${\bf B}_{9}[\bar\gamma]$ and ${\bf C}_{2}[\bar\beta]$, assumed to hold for some vector parameters $\bar\beta$ and $\bar\gamma$ such that $\gamma_i \le \beta_i$, $i = 1, \ldots, k$, imply that there exists $\varepsilon_2(\bar\beta, \bar\gamma) \in (0, \varepsilon_0]$ such that, for $\varepsilon \in [0, \varepsilon_2]$, any partition $\Pi$, $t = t_n \in \Pi$ and $\vec z = (\vec y, x) \in \mathbb{Z}$,
\[
|\phi_t(\mathcal{M}^{(\varepsilon)}_{\Pi,t,T}, \vec y, x)| \le \mathsf{E}_{\vec z, t} \sup_{t \le s \le T} |g_\varepsilon(s, e^{\vec Y_\varepsilon(s)}, X_\varepsilon(s))| < \infty. \tag{6.50}
\]
By Lemma 4.1.8, conditions ${\bf B}_{10}[\bar\gamma]$ and ${\bf C}_{11}[\bar\beta]$ (which are just particular cases, respectively, of conditions ${\bf B}_{9}[\bar\gamma]$ and ${\bf C}_{2}[\bar\beta]$), assumed to hold for some vector parameters $\bar\beta$ and $\bar\gamma$ such that $\gamma_i \le \beta_i$, $i = 1, \ldots, k$, imply that there exists $\varepsilon_2(\bar\beta, \bar\gamma) \in (0, \varepsilon_0]$ such that, for $\varepsilon \in [0, \varepsilon_2]$, any partition $\Pi$, $t = t_n \in \Pi$ and $\vec y \in \mathbb{R}^k$,
\[
|\phi_t(\mathcal{M}^{(\varepsilon)}_{\Pi,t,T}, \vec y)| \le \mathsf{E}_{\vec y, t} \sup_{t \le s \le T} |g_\varepsilon(s, e^{\vec Y_\varepsilon(s)})| < \infty. \tag{6.51}
\]

Theorem 6.1.4 takes in this case the following form.

Theorem 6.2.4. Let conditions ${\bf B}_{10}[\bar\gamma]$ and ${\bf C}_{11}[\bar\beta]$ hold and, for every $i = 1, \ldots, k$, either $0 = \gamma_i = \beta_i$ or $0 < \gamma_i < \beta_i < \infty$. Let also conditions ${\bf I}_{14}$ and ${\bf J}_{7}$ hold. Then, the following asymptotic relation holds for any partition $\Pi = \langle 0 = t_0 < t_1 < \cdots < t_N = T \rangle$ of the interval $[0, T]$, $t = t_n \in \Pi$ and $\vec y_\varepsilon \to \vec y_0 \in \mathbf{Y}_t$ as $\varepsilon \to 0$,
\[
\phi_t(\mathcal{M}^{(\varepsilon)}_{\Pi,t,T}, \vec y_\varepsilon) \to \phi_t(\mathcal{M}^{(0)}_{\Pi,t,T}, \vec y_0) \ \text{as} \ \varepsilon \to 0. \tag{6.52}
\]
Remark 6.2.1. If one would like to get the asymptotic relations (6.49) and (6.52) only for some sequence of partitions $\Pi_r = \{0 = t_{r,0} < \cdots < t_{r,N_r}\}$, $r = 1, 2, \ldots$ such that $d(\Pi_r) = \max_{0 \le n \le N_r - 1}(t_{r,n+1} - t_{r,n}) \to 0$ as $r \to \infty$, it suffices to require that the asymptotic relations in condition ${\bf J}_{7}$ hold only for $t < t+u$, $t, t+u \in \Pi_r = \{t_{r,n}, n = 0, \ldots, N_r\}$, $r = 1, 2, \ldots$.

6.2.3 Time-space-skeleton reward approximations with fixed space-skeleton structure, for multivariate log-price processes with independent increments

Let us consider the model where the log-price process $\vec Y_0(t)$, $t \in [0, T]$ does not depend on the parameter $\varepsilon$ and is a càdlàg process with independent increments, with phase space $\mathbb{R}^k$, an initial distribution $P_0(A)$ and distributions of increments $P_0(t, t+u, A)$. Also, we assume that the pay-off function $g_0(t, e^{\vec y})$ is a measurable function acting from the space $[0, T] \times \mathbb{R}^k$ to $\mathbb{R}_1$, which does not depend on the parameter $\varepsilon$.

Let us construct space-skeleton approximations with fixed space-skeleton structure for the optimal expected reward $\Phi(\mathcal{M}^{(0)}_{\Pi,0,T})$ and the reward function $\phi_t(\mathcal{M}^{(0)}_{\Pi,t,T}, \vec y)$, for a given partition $\Pi = \langle 0 = t_0 < t_1 < \cdots < t_N = T \rangle$ of the interval $[0, T]$.

In fact, we should reformulate, in terms of the pay-off functions $g_0(t_n, e^{\vec y})$ and the discrete time log-price processes $\vec Y_0(t_n)$, the corresponding results given in Subsection 6.2.1.

We should take into account that the log-price process $\vec Y_0(t_n)$ and the pay-off function $g_0(t_n, e^{\vec y})$ have no index components. We should also take into account that the process $\vec Y_0(t_n)$ is a Markov chain. In fact, it is a random walk, i.e.,
\[
\vec Y_0(t_n) = \vec Y_0(t_0) + \sum_{k=1}^{n} \vec W_{0,k}, \quad n = 0, 1, \ldots, N, \tag{6.53}
\]
with the initial state $\vec Y_0(t_0) = \vec Y_0(0)$ and independent random jumps $\vec W_{0,n} = \vec Y_0(t_n) - \vec Y_0(t_{n-1})$, $n = 1, 2, \ldots, N$.

Let us use the notation $\vec Y_{\Pi,0,n} = \vec Y_0(t_n)$, $n = 0, \ldots, N$ and $\vec W_{\Pi,0,n} = \vec W_{0,n}$, $n = 1, \ldots, N$ for the Markov chain $\vec Y_0(t_n)$, in order to point out explicitly the partition $\Pi = \langle 0 = t_0 < \cdots < t_N = T \rangle$ used to construct this Markov chain.

This Markov chain has the phase space $\mathbb{R}^k$ and transition probabilities,
\[
P_{\Pi,0,n}(\vec y, A) = \mathsf{P}\{\vec Y_{\Pi,0,n} \in A / \vec Y_{\Pi,0,n-1} = \vec y\}
= \mathsf{P}\{\vec Y_0(t_n) \in A / \vec Y_0(t_{n-1}) = \vec y\}
= \mathsf{P}\{\vec W_{\Pi,0,n} \in A - \vec y\} = P_0(t_{n-1}, t_n, A - \vec y), \tag{6.54}
\]
and the initial distribution,
\[
P_{\Pi,0}(A) = \mathsf{P}\{\vec Y_{\Pi,0,0} \in A\} = \mathsf{P}\{\vec Y_0(t_0) \in A\} = P_0(A). \tag{6.55}
\]
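As a purely illustrative sketch of the embedded construction (6.53)–(6.55), the following Python fragment simulates the random walk $\vec Y_{\Pi,0,n}$ at the partition points from a user-supplied sampler of the increments; the Gaussian increment sampler and all numerical values are assumptions made for the example only.

```python
import numpy as np

def embedded_random_walk(y0, increment_sampler, partition, rng):
    """Simulate the embedded random walk Y_{Pi,0,n} = Y_0(t_n) of relation (6.53).

    y0                : initial state Y_0(0), array of shape (k,)
    increment_sampler : callable (t_prev, t_next, rng) -> array of shape (k,),
                        a draw of the increment W_{0,n} = Y_0(t_n) - Y_0(t_{n-1})
    partition         : increasing array [t_0, ..., t_N] with t_0 = 0
    """
    path = [np.asarray(y0, dtype=float)]
    for t_prev, t_next in zip(partition[:-1], partition[1:]):
        jump = increment_sampler(t_prev, t_next, rng)
        path.append(path[-1] + jump)          # random-walk recursion (6.53)
    return np.array(path)                      # shape (N + 1, k)

# Example: a two-dimensional Gaussian log-price process with independent increments
# (the drift and volatility values below are purely illustrative assumptions).
rng = np.random.default_rng(0)
gauss = lambda s, t, rng: rng.normal(loc=-0.01 * (t - s),
                                     scale=0.3 * np.sqrt(t - s), size=2)
path = embedded_random_walk([0.0, 0.0], gauss, np.linspace(0.0, 1.0, 11), rng)
```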

Let us construct, for every $\varepsilon \in (0, \varepsilon_0]$, an approximating space-skeleton Markov chain $\vec Y_{\Pi,\varepsilon,n}$, $n = 0, \ldots, N$ with the same phase space $\mathbb{R}^k$ and transition probabilities $P_{\Pi,\varepsilon,n}(\vec y, A) = \mathsf{P}\{\vec Y_{\Pi,\varepsilon,n} \in A / \vec Y_{\Pi,\varepsilon,n-1} = \vec y\}$ and initial distribution $P_{\Pi,\varepsilon}(A) = \mathsf{P}\{\vec Y_{\Pi,\varepsilon,0} \in A\}$, fitted in a special way to the transition probabilities and the initial distribution of the Markov chain $\vec Y_{\Pi,0,n}$.

Let $m^-_{\varepsilon,t,i} \le m^+_{\varepsilon,t,i}$, $i = 1, \ldots, k$, $t \in [0, T]$ be integer numbers. Let us define the sets of vector indices $\hat{\mathbb{L}}_{\varepsilon,t}$, for $t \in [0, T]$,
\[
\hat{\mathbb{L}}_{\varepsilon,t} = \{\bar l = (l_1, \ldots, l_k): \ l_i = m^-_{\varepsilon,t,i}, \ldots, m^+_{\varepsilon,t,i}, \ i = 1, \ldots, k\}. \tag{6.56}
\]

Let us choose $\delta_{\varepsilon,t,i} > 0$, $\lambda_{\varepsilon,t,i} \in \mathbb{R}_1$, $i = 1, \ldots, k$, $t \in [0, T]$.

First, the skeleton intervals $I_{\varepsilon,t,i,l}$ should be constructed, for $l = m^-_{\varepsilon,t,i}, \ldots, m^+_{\varepsilon,t,i}$, $i = 1, \ldots, k$, $t \in [0, T]$,
\[
I_{\varepsilon,t,i,l} =
\begin{cases}
(-\infty, \ \delta_{\varepsilon,t,i}(m^-_{\varepsilon,t,i} + \tfrac{1}{2})] + \lambda_{\varepsilon,t,i} & \text{if } l = m^-_{\varepsilon,t,i}, \\
(\delta_{\varepsilon,t,i}(l - \tfrac{1}{2}), \ \delta_{\varepsilon,t,i}(l + \tfrac{1}{2})] + \lambda_{\varepsilon,t,i} & \text{if } m^-_{\varepsilon,t,i} < l < m^+_{\varepsilon,t,i}, \\
(\delta_{\varepsilon,t,i}(m^+_{\varepsilon,t,i} - \tfrac{1}{2}), \ \infty) + \lambda_{\varepsilon,t,i} & \text{if } l = m^+_{\varepsilon,t,i}.
\end{cases} \tag{6.57}
\]
Then, the skeleton points $y_{\varepsilon,t,i,l_i} = l_i \delta_{\varepsilon,t,i} + \lambda_{\varepsilon,t,i} \in I_{\varepsilon,t,i,l_i}$, for $l_i = m^-_{\varepsilon,t,i}, \ldots, m^+_{\varepsilon,t,i}$, $i = 1, \ldots, k$, $t \in [0, T]$, should be defined.

Second, the skeleton sets $\hat A_{\varepsilon,t,\bar l}$ should be defined, for $\bar l = (l_1, \ldots, l_k) \in \hat{\mathbb{L}}_{\varepsilon,t}$, $t \in [0, T]$,
\[
\hat A_{\varepsilon,t,\bar l} = I_{\varepsilon,t,1,l_1} \times \cdots \times I_{\varepsilon,t,k,l_k}, \tag{6.58}
\]


and the skeleton points $\vec y_{\varepsilon,t,\bar l} = (y_{\varepsilon,t,1,l_1}, \ldots, y_{\varepsilon,t,k,l_k}) \in \hat A_{\varepsilon,t,\bar l}$ should be chosen, for $\bar l = (l_1, \ldots, l_k) \in \hat{\mathbb{L}}_{\varepsilon,t}$, $t \in [0, T]$.

We construct, for every $\varepsilon \in (0, \varepsilon_0]$, an approximating space-skeleton atomic Markov chain $\vec Y_{\Pi,\varepsilon,n}$ as a Markov chain with the phase space $\mathbb{R}^k$, the transition probabilities $P_{\Pi,\varepsilon,n}(\vec y, A)$ and the initial distribution $P_{\Pi,\varepsilon}(A)$ satisfying the following skeleton fitting conditions, which are analogues of the fitting conditions ${\bf L}_{1}$ and ${\bf M}_{1}$:

${\bf L}_{4}$: For every $\varepsilon \in (0, \varepsilon_0]$, the transition probabilities $P_{\Pi,\varepsilon,n}(\vec y, \{\vec y_{\varepsilon,t_n,\bar l}\}) = P_{\Pi,0,n}(\vec y_{\varepsilon,t_{n-1},\bar l'}, \hat A_{\varepsilon,t_n,\bar l})$, for $\vec y \in \hat A_{\varepsilon,t_{n-1},\bar l'}$, $\bar l' \in \hat{\mathbb{L}}_{\varepsilon,t_{n-1}}$, $\bar l \in \hat{\mathbb{L}}_{\varepsilon,t_n}$, $n = 1, 2, \ldots, N$.

and

${\bf M}_{4}$: For every $\varepsilon \in (0, \varepsilon_0]$, the initial distribution $P_{\Pi,\varepsilon}(\{\vec y_{\varepsilon,t_0,\bar l}\}) = P_{\Pi,0}(\hat A_{\varepsilon,t_0,\bar l})$, $\bar l \in \hat{\mathbb{L}}_{\varepsilon,t_0}$.

It is useful to note that the space-skeleton structure described above and used to construct the space-skeleton Markov chains $\vec Y_{\Pi,\varepsilon,n}$ is determined by the set of elements $\hat\Xi_{\Pi,\varepsilon} = \langle \vec y_{\varepsilon,t_n,\bar l}, \hat A_{\varepsilon,t_n,\bar l}, \bar l \in \hat{\mathbb{L}}_{\varepsilon,t_n}, n = 0, \ldots, N \rangle$.

Let us define skeleton functions $h_{\varepsilon,t,i}(y)$, $y \in \mathbb{R}_1$, for every $\varepsilon \in (0, \varepsilon_0]$ and $i = 1, \ldots, k$, $t \in [0, T]$,
\[
h_{\varepsilon,t,i}(y) =
\begin{cases}
\delta_{\varepsilon,t,i}\, m^-_{\varepsilon,t,i} + \lambda_{\varepsilon,t,i} & \text{if } y \le \delta_{\varepsilon,t,i}(m^-_{\varepsilon,t,i} + \tfrac{1}{2}) + \lambda_{\varepsilon,t,i}, \\
\delta_{\varepsilon,t,i}\, l + \lambda_{\varepsilon,t,i} & \text{if } \delta_{\varepsilon,t,i}(l - \tfrac{1}{2}) + \lambda_{\varepsilon,t,i} < y \le \delta_{\varepsilon,t,i}(l + \tfrac{1}{2}) + \lambda_{\varepsilon,t,i}, \ m^-_{\varepsilon,t,i} < l < m^+_{\varepsilon,t,i}, \\
\delta_{\varepsilon,t,i}\, m^+_{\varepsilon,t,i} + \lambda_{\varepsilon,t,i} & \text{if } y > \delta_{\varepsilon,t,i}(m^+_{\varepsilon,t,i} - \tfrac{1}{2}) + \lambda_{\varepsilon,t,i}.
\end{cases} \tag{6.59}
\]
Finally, let us define vector skeleton functions $\hat h_{\varepsilon,t}(\vec y)$, $\vec y = (y_1, \ldots, y_k) \in \mathbb{R}^k$, for every $\varepsilon \in (0, \varepsilon_0]$ and $t \in [0, T]$,
\[
\hat h_{\varepsilon,t}(\vec y) = (h_{\varepsilon,t,1}(y_1), \ldots, h_{\varepsilon,t,k}(y_k)). \tag{6.60}
\]
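To make the skeleton mapping concrete, here is a minimal Python sketch of the scalar skeleton function $h_{\varepsilon,t,i}$ of (6.59) and its vector version $\hat h_{\varepsilon,t}$ of (6.60): a coordinate is rounded to the nearest grid point $l\delta + \lambda$ and the index is truncated at $m^\pm$. All parameter values in the usage line are assumptions chosen only for illustration.

```python
import numpy as np

def h_skeleton(y, delta, lam, m_minus, m_plus):
    """Scalar skeleton function (6.59): round to the grid l*delta + lam,
    truncating the index at m_minus and m_plus.  Ties on cell boundaries are
    resolved by ordinary rounding, which differs from the half-open convention
    of (6.59) only on a Lebesgue-null set of points."""
    l = np.rint((y - lam) / delta)            # nearest grid index
    l = min(max(l, m_minus), m_plus)          # truncation at the extreme indices
    return l * delta + lam

def h_skeleton_vec(y_vec, deltas, lams, m_minus, m_plus):
    """Vector skeleton function (6.60): apply (6.59) coordinate-wise."""
    return np.array([h_skeleton(y, d, lam, mm, mp)
                     for y, d, lam, mm, mp
                     in zip(y_vec, deltas, lams, m_minus, m_plus)])

# Illustrative call with assumed parameters delta = 0.1, lambda = 0, m = +/-20.
print(h_skeleton_vec([0.237, -5.0], [0.1, 0.1], [0.0, 0.0], [-20, -20], [20, 20]))
```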

The transition probabilities $P_{\Pi,\varepsilon,n}(\vec y, A)$, $\vec y \in \mathbb{R}^k$, $A \in \mathcal{B}_{\mathbb{R}^k}$, $n = 1, 2, \ldots, N$ take the following form, for every $\varepsilon \in (0, \varepsilon_0]$,
\[
\begin{aligned}
P_{\Pi,\varepsilon,n}(\vec y, A) &= \sum_{\vec y_{\varepsilon,t_n,\bar l} \in A} P_{\Pi,0,n}(\hat h_{\varepsilon,t_{n-1}}(\vec y), \hat A_{\varepsilon,t_n,\bar l}) \\
&= \mathsf{P}\{\hat h_{\varepsilon,t_n}(\vec Y_{\Pi,0,n}) \in A / \vec Y_{\Pi,0,n-1} = \hat h_{\varepsilon,t_{n-1}}(\vec y)\} \\
&= \sum_{\vec y_{\varepsilon,t_n,\bar l} \in A} P_0(t_{n-1}, t_n, \hat A_{\varepsilon,t_n,\bar l} - \hat h_{\varepsilon,t_{n-1}}(\vec y)) \\
&= \mathsf{P}\{\hat h_{\varepsilon,t_n}(\vec W_{0,n} + \hat h_{\varepsilon,t_{n-1}}(\vec y)) \in A\}.
\end{aligned} \tag{6.61}
\]
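The following Python sketch assembles the fitted transition matrix of (6.61) between the skeleton grids at $t_{n-1}$ and $t_n$, given a routine that evaluates the increment probabilities $P_0(t_{n-1}, t_n, \hat A_{\varepsilon,t_n,\bar l} - \vec y')$ of the limiting process over skeleton cells; that routine, and the data layout used here, are assumptions of the example (for Gaussian increments such cell probabilities factorize into one-dimensional normal probabilities).

```python
import numpy as np

def transition_matrix(points_prev, cells_next, cell_prob):
    """Fitted transition matrix of (6.61) for one time step t_{n-1} -> t_n.

    points_prev : (M_prev, k) skeleton points at t_{n-1}
    cells_next  : list of M_next cells at t_n, each a list of k intervals (a_i, b_i]
    cell_prob   : callable(cell, y_prev) -> P_0(t_{n-1}, t_n, cell - y_prev)
    Returns a row-stochastic matrix P with P[i, j] = P_{Pi,eps,n}(y_i, {y_j}),
    which is exactly the fitting condition L4.
    """
    P = np.zeros((len(points_prev), len(cells_next)))
    for i, y_prev in enumerate(points_prev):
        for j, cell in enumerate(cells_next):
            P[i, j] = cell_prob(cell, y_prev)
    return P
```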


As far as the initial distribution $P_{\Pi,\varepsilon,0}(A) = \mathsf{P}\{\vec Y_{\Pi,\varepsilon,0} \in A\}$, $A \in \mathcal{B}_{\mathbb{R}^k}$ is concerned, it takes the following form, for every $\varepsilon \in (0, \varepsilon_0]$,
\[
P_{\Pi,\varepsilon,0}(A) = \sum_{\vec y_{\varepsilon,t_0,\bar l} \in A} P_{\Pi,0,0}(\hat A_{\varepsilon,t_0,\bar l}) = \mathsf{P}\{\hat h_{\varepsilon,t_0}(\vec Y_0(0)) \in A\}. \tag{6.62}
\]

The quantity $P_{\Pi,\varepsilon,n}(\vec y, A)$, defined in relation (6.61), is a measurable function in $\vec y$ and a probability measure as a function of $A$ and, thus, it can serve as a transition probability for a Markov chain. Also, the quantity $P_{\Pi,\varepsilon,0}(A)$, defined in relation (6.62), is a probability measure and, thus, it can serve as an initial distribution for a Markov chain.

The role of the approximating log-price process $\vec Y_{\Pi,\varepsilon,n}$ can be played by any Markov chain with the phase space $\mathbb{R}^k$, the transition probabilities $P_{\Pi,\varepsilon,n}(\vec y, A)$ defined in relation (6.61) and the initial distribution $P_{\Pi,\varepsilon,0}(A)$ defined in relation (6.62). Obviously, the transition probabilities $P_{\Pi,\varepsilon,n}(\vec y, A)$ and the initial distribution $P_{\Pi,\varepsilon,0}(A)$ given by relations (6.61) and (6.62) satisfy the fitting conditions ${\bf L}_{4}$ and ${\bf M}_{4}$.

Let us denote by $\mathcal{M}^{(\varepsilon)}_{\Pi,n,N}$ the class of all Markov moments $\tau_{\varepsilon,n}$ for the discrete time Markov log-price process $\vec Y_{\Pi,\varepsilon,r}$, $r = 0, 1, \ldots, N$, which (a) take values $n, n+1, \ldots, N$, and (b) satisfy $\{\tau_{\varepsilon,n} = m\} \in \sigma[\vec Y_{\Pi,\varepsilon,r}, n \le r \le m]$, $n \le m \le N$.

Let, also, $\phi_{\Pi,\varepsilon,n}(\vec y)$ be the corresponding reward function for the American option with the pay-off function $g_0(t_n, e^{\vec y})$, defined by the following relation, for $\vec y \in \mathbb{R}^k$, $n = 0, 1, \ldots, N$,
\[
\phi_{\Pi,\varepsilon,n}(\vec y) = \sup_{\tau_{\varepsilon,n} \in \mathcal{M}^{(\varepsilon)}_{\Pi,n,N}} \mathsf{E}_{\vec y, n}\, g_0(t_{\tau_{\varepsilon,n}}, e^{\vec Y_{\Pi,\varepsilon,\tau_{\varepsilon,n}}}). \tag{6.63}
\]
Let also $\Phi_{\Pi,\varepsilon} = \Phi_\varepsilon(\mathcal{M}^{(\varepsilon)}_{\Pi,0,N})$ be the corresponding optimal expected reward, defined by the following relation,
\[
\Phi_{\Pi,\varepsilon} = \sup_{\tau_{\varepsilon,0} \in \mathcal{M}^{(\varepsilon)}_{\Pi,0,N}} \mathsf{E}\, g_0(t_{\tau_{\varepsilon,0}}, e^{\vec Y_{\Pi,\varepsilon,\tau_{\varepsilon,0}}}). \tag{6.64}
\]
Since the atomic Markov chains $\vec Y_{\Pi,\varepsilon,n}$ have initial distributions and transition probabilities concentrated at sets with finite numbers of points, the reward functionals are finite, i.e., $|\phi_{\Pi,\varepsilon,n}(\vec y)| < \infty$, $\vec y \in \mathbb{R}^k$, $n = 0, 1, \ldots, N$ and $|\Phi_{\Pi,\varepsilon}| < \infty$, for every $\varepsilon \in (0, \varepsilon_0]$.

The following lemma is a variant of Lemma 6.1.2.

Lemma 6.2.1. Let, for every $\varepsilon \in (0, \varepsilon_0]$, the log-price process $\vec Y_{\Pi,\varepsilon,n}$ be a space-skeleton Markov chain with transition probabilities and initial distribution given in relations (6.61) and (6.62) and satisfying the fitting conditions ${\bf L}_{4}$ and ${\bf M}_{4}$. Then, the log-reward functions $\phi_{\Pi,\varepsilon,n}(\vec y)$ and $\phi_{\Pi,\varepsilon,n+r}(\vec y_{\varepsilon,t_{n+r},\bar l})$, $\bar l \in \hat{\mathbb{L}}_{\varepsilon,t_{n+r}}$, $r = 1, \ldots, N-n$ are, for every $\vec y \in \hat A_{\varepsilon,t_n,\bar l''}$, $\bar l'' \in \hat{\mathbb{L}}_{\varepsilon,t_n}$, $n = 0, \ldots, N$, the unique solution for


the following finite recurrence system of linear equations,
\[
\begin{cases}
\phi_{\Pi,\varepsilon,N}(\vec y_{\varepsilon,t_N,\bar l}) = g_0(t_N, e^{\vec y_{\varepsilon,t_N,\bar l}}), \quad \bar l \in \hat{\mathbb{L}}_{\varepsilon,t_N}, \\[4pt]
\phi_{\Pi,\varepsilon,n+r}(\vec y_{\varepsilon,t_{n+r},\bar l}) = \max\Big( g_0(t_{n+r}, e^{\vec y_{\varepsilon,t_{n+r},\bar l}}), \\[2pt]
\qquad \displaystyle\sum_{\bar l' \in \hat{\mathbb{L}}_{\varepsilon,t_{n+r+1}}} \phi_{\Pi,\varepsilon,n+r+1}(\vec y_{\varepsilon,t_{n+r+1},\bar l'})\, P_{\Pi,0,n+r+1}(\vec y_{\varepsilon,t_{n+r},\bar l}, \hat A_{\varepsilon,t_{n+r+1},\bar l'}) \Big), \\[2pt]
\qquad \bar l \in \hat{\mathbb{L}}_{\varepsilon,t_{n+r}}, \ r = N-n-1, \ldots, 1, \\[4pt]
\phi_{\Pi,\varepsilon,n}(\vec y) = \max\Big( g_0(t_n, e^{\vec y}), \ \displaystyle\sum_{\bar l' \in \hat{\mathbb{L}}_{\varepsilon,t_{n+1}}} \phi_{\Pi,\varepsilon,n+1}(\vec y_{\varepsilon,t_{n+1},\bar l'})\, P_{\Pi,0,n+1}(\vec y_{\varepsilon,t_n,\bar l''}, \hat A_{\varepsilon,t_{n+1},\bar l'}) \Big),
\end{cases} \tag{6.65}
\]
while the optimal expected reward $\Phi_{\Pi,\varepsilon} = \Phi_\varepsilon(\mathcal{M}^{(\varepsilon)}_{\Pi,0,N})$ is given by the following formula,
\[
\Phi_{\Pi,\varepsilon} = \sum_{\bar l \in \hat{\mathbb{L}}_{\varepsilon,t_0}} \phi_{\Pi,\varepsilon,0}(\vec y_{\varepsilon,t_0,\bar l})\, P_{\Pi,\varepsilon,0}(\hat A_{\varepsilon,t_0,\bar l}). \tag{6.66}
\]
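As a minimal illustration of the backward recurrence (6.65) and formula (6.66), the following Python sketch computes the skeleton reward functions and the optimal expected reward from precomputed pay-off values and fitted transition matrices. The argument names, and the assumption that the pay-offs and transition matrices have already been evaluated on the skeleton grids, are conventions of this example only.

```python
import numpy as np

def skeleton_rewards(payoffs, transitions, init_probs):
    """Backward induction (6.65)-(6.66) on a space-skeleton chain.

    payoffs     : list of length N+1; payoffs[n] is the array of g_0(t_n, e^y)
                  evaluated at the skeleton points of the grid at time t_n
    transitions : list of length N; transitions[n] is the matrix with entries
                  P_{Pi,0,n+1}(y_{eps,t_n,l}, A_{eps,t_{n+1},l'})
    init_probs  : array of P_{Pi,eps,0}(A_{eps,t_0,l}) over the grid at t_0
    Returns (phi, Phi): reward values on each grid and the value (6.66).
    """
    N = len(payoffs) - 1
    phi = [None] * (N + 1)
    phi[N] = np.asarray(payoffs[N], dtype=float)          # terminal condition
    for n in range(N - 1, -1, -1):
        continuation = transitions[n] @ phi[n + 1]        # conditional expectation
        phi[n] = np.maximum(payoffs[n], continuation)     # optimal stopping maximum
    Phi = float(np.dot(init_probs, phi[0]))               # formula (6.66)
    return phi, Phi
```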

Relations (6.65) and (6.66) give an effective algorithm for computing the reward functions $\phi_{\Pi,\varepsilon,n}(\vec y)$ and the optimal expected rewards $\Phi_{\Pi,\varepsilon}$ for the approximating space-skeleton discrete time Markov log-price processes $\vec Y_{\Pi,\varepsilon,n}$.

However, it should be noted that the above algorithm also includes computing the values $g_0(t_{n+r}, e^{\vec y_{\varepsilon,t_{n+r},\bar l}})$ of the pay-off function $g_0(t, e^{\vec y})$ and the values of the probabilities $P_{0,n+r+1}(\vec y_{\varepsilon,t_{n+r},\bar l}, \hat A_{\varepsilon,t_{n+r+1},\bar l'})$ for the Markov log-price process $\vec Y_0(t)$. In many cases, these quantities can be computed effectively. However, in some cases, this can be a difficult problem, and alternative approximation methods can serve better.

Let us consider the space-skeleton structure $\hat\Xi_{\Pi,\dot\varepsilon} = \langle \vec y_{\dot\varepsilon,t_n,\bar l}, \hat A_{\dot\varepsilon,t_n,\bar l}, \bar l \in \hat{\mathbb{L}}_{\dot\varepsilon,t_n}, n = 0, \ldots, N \rangle$, for a fixed value of the parameter $\varepsilon = \dot\varepsilon$ determining the skeleton points and sets.

Let also $g_\varepsilon(t, e^{\vec y})$ and $\vec Y_\varepsilon(t)$ be, for every $\varepsilon \in [0, \varepsilon_0]$, respectively, a pay-off function and a Markov log-price process introduced in Subsection 6.2.1. We use them as approximations for the pay-off function $g_0(t, e^{\vec y})$ and the log-price process $\vec Y_0(t)$.

The approximating Markov chain $\vec Y^{(\varepsilon)}_{\Pi,\dot\varepsilon,n}$, with the initial distribution $P^{(\varepsilon)}_{\Pi,\dot\varepsilon}(A)$ and transition probabilities $P^{(\varepsilon)}_{\Pi,\dot\varepsilon,n}(\vec y, A)$, and the quantities $\Phi^{(\varepsilon)}_{\Pi,\dot\varepsilon}$ and $\phi^{(\varepsilon)}_{\Pi,\dot\varepsilon,n}(\vec y)$ can be defined, for every $\varepsilon \in [0, \varepsilon_0]$, for the pay-off function $g_\varepsilon(t, e^{\vec y})$ and the log-price process $\vec Y_\varepsilon(t)$, in the same way as the Markov chain $\vec Y_{\Pi,\dot\varepsilon,n}$, with the initial distribution $P_{\Pi,\dot\varepsilon}(A)$ and transition probabilities $P_{\Pi,\dot\varepsilon,n}(\vec y, A)$, and the quantities $\Phi_{\Pi,\dot\varepsilon}$ and $\phi_{\Pi,\dot\varepsilon,n}(\vec y)$, have been defined for the pay-off function $g_0(t, e^{\vec y})$ and the log-price process $\vec Y_0(t)$.

This requires one to assume that the following fitting conditions hold:

${\bf L}_{5}[\dot\varepsilon]$: For every $\varepsilon \in [0, \varepsilon_0]$, the transition probabilities $P^{(\varepsilon)}_{\Pi,\dot\varepsilon,n}(\vec y, \{\vec y_{\dot\varepsilon,t_n,\bar l}\}) = P^{(\varepsilon)}_{\Pi,0,n}(\vec y_{\dot\varepsilon,t_{n-1},\bar l'}, \hat A_{\dot\varepsilon,t_n,\bar l})$, for $\vec y \in \hat A_{\dot\varepsilon,t_{n-1},\bar l'}$, $\bar l' \in \hat{\mathbb{L}}_{\dot\varepsilon,t_{n-1}}$, $\bar l \in \hat{\mathbb{L}}_{\dot\varepsilon,t_n}$, $n = 1, 2, \ldots, N$.


and

${\bf M}_{5}[\dot\varepsilon]$: For every $\varepsilon \in [0, \varepsilon_0]$, the initial distribution $P^{(\varepsilon)}_{\Pi,\dot\varepsilon}(\{\vec y_{\dot\varepsilon,t_0,\bar l}\}) = P^{(\varepsilon)}_{\Pi,0}(\hat A_{\dot\varepsilon,t_0,\bar l})$, $\bar l \in \hat{\mathbb{L}}_{\dot\varepsilon,t_0}$.


In this case, the quantities $\Phi^{(\varepsilon)}_{\Pi,\dot\varepsilon}$ and $\phi^{(\varepsilon)}_{\Pi,\dot\varepsilon,n}(\vec y)$ can be computed using the recurrence algorithm given in Lemma 6.2.1.

Let us assume that the following reduced analogues of conditions ${\bf I}_{14}$ and ${\bf J}_{7}$ hold:

${\bf I}_{15}[\Pi]$: There exist sets $\dot{\mathbf{Y}}_{t_n} \in \mathcal{B}_{\mathbb{R}^k}$, $n = 0, \ldots, N$, such that the pay-off functions $g_\varepsilon(t_n, e^{\vec y}) \to g_0(t_n, e^{\vec y})$ as $\varepsilon \to 0$, for $\vec y \in \dot{\mathbf{Y}}_{t_n}$ and $n = 0, \ldots, N$.

and

${\bf J}_{8}[\Pi]$: $P_\varepsilon(t_{n-1}, t_n, \cdot) \Rightarrow P_0(t_{n-1}, t_n, \cdot)$ as $\varepsilon \to 0$, for $n = 1, \ldots, N$.

Let $\mathcal{B}_{t_n,t_{n+1}} = \{A \in \mathcal{B}_{\mathbb{R}^k}: P_0(t_n, t_{n+1}, \partial A) = 0\}$ be the $\sigma$-algebra of sets of continuity for the probability measure $P_0(t_n, t_{n+1}, A)$, for $n = 0, \ldots, N-1$.

Let us assume that the following condition holds:

${\bf N}_{12}[\Pi]$: (a) $\vec y_{\dot\varepsilon,t_n,\bar l} \in \dot{\mathbf{Y}}_{t_n}$, $\bar l \in \hat{\mathbb{L}}_{\dot\varepsilon,t_n}$, $n = 0, \ldots, N$; (b) $\hat A_{\dot\varepsilon,t_{n+1},\bar l'} - \vec y_{\dot\varepsilon,t_n,\bar l} \in \mathcal{B}_{t_n,t_{n+1}}$,


$\bar l \in \hat{\mathbb{L}}_{\dot\varepsilon,t_n}$, $\bar l' \in \hat{\mathbb{L}}_{\dot\varepsilon,t_{n+1}}$, $n = 0, \ldots, N-1$.

The following lemma, which is a direct corollary of Lemma 6.1.3, gives conditions for convergence of the reward functions $\phi^{(\varepsilon)}_{\Pi,\dot\varepsilon,n}(\vec y)$.

Lemma 6.2.2. Let conditions ${\bf L}_{5}[\dot\varepsilon]$, ${\bf I}_{15}[\Pi]$, ${\bf J}_{8}[\Pi]$, ${\bf N}_{12}[\Pi]$ hold. Then the following asymptotic relation takes place for every $\vec y \in \dot{\mathbf{Y}}_{t_n}$, $n = 0, \ldots, N$,

(0)

φΠ,ε,n y ) → φΠ,ε,n y ) as ε → 0. ˙ (~ ˙ (~

(6.67)

Let B = {A ∈ BZ : P0 (∂A) = 0} be the of σ-algebra of sets of continuity for the probability measure P0 (A). Let us assume that the following condition holds: ¯ ˆ ˙ 0. ˆ K23 [Π]: (a) Pε (·) ⇒ P0 (·) as ε → 0; (b) A ε,t ˙ 0 ,¯ l ∈ B0 , l ∈ Lε,t The following lemma give conditions for convergence of the optimal expected (ε) rewards ΦΠ,ε˙ . Lemma 6.2.3. Let conditions L5 [ε], ˙ M5 [ε], ˙ I15 [Π], J8 [Π], N12 [Π] and K23 [Π] hold. Then the following asymptotic relation takes place, (ε)

(0)

ΦΠ,ε˙ → ΦΠ,ε˙ as ε → 0.

(6.68)

Remark 6.2.2. Under conditions of Lemma 6.2.2, the values of rewards func(ε) (0) ¯ ˆ ˙ n , n = 1, . . . , N , and, tions φΠ,ε,n yε,t yε,t ˙ n ,¯ l ) → φΠ,ε,n ˙ n ,¯ l ) as ε → 0, for l ∈ Lε,t ˙ (~ ˙ (~

6.2

LPP with independent increments

341

under conditions of lemma 6.2.3, the above asymptotic relation also holds for ¯ l ∈ ˆLε,t ˙ 0. Remark 6.2.3. If one would like to get asymptotical relations given in Lemma 6.2.2 and 6.2.3 for some sequences of partitions Πm , m = 0, 1, . . . and some sequence of parameters ε˙l , l = 0, 1, . . . determining the sets of elements for the corˆ Πm ,ε˙ , one should require holding of condiresponding space-skeleton structures Ξ l tions of Lemmas 6.2.2 and 6.2.3, for all partitions Πm , m = 0, 1, . . . and all skeleton structures corresponding to parameters ε˙l , l = 0, 1, . . .. The numbers of skeleton points and skeleton sets involved in conditions N12 [Π] and K23 [Π] (b) are finite, and these numbers are not more than countable in the case considered in Remark 6.2.3. This always makes it possible to choose skeleton points and skeleton sets in such way that conditions N12 [Πm ] and K23 [Πm ] (b) hold for any given sequences of partitions Πm , m = 0, 1, . . . and parameters ε˙l , l = 0, 1, . . . determining the sets ˆ Πm ,ε˙ . of elements for the corresponding skeleton structures Ξ l

6.2.4 Time-space-skeleton reward approximations with an additive space-skeleton structure, for multivariate log-price processes with independent increments ~0 (t), t ∈ [0, T ] is a process with independent increments taking values The process Y in the space Rk . This makes it possible to construct for this process alternative skeleton approximations with an alternative additive space-skeleton structure. Let us first construct space-skeleton approximations with additive space(0) skeleton structure for the optimal expected reward Φ(MΠ,0,T ) and the reward (0)

(0)

function φt (MΠ,t,T , ~ y ), for a given partition Π = {0 = t0 < t1 . . . < tN = T } of the interval [0, T ]. In fact, we should reformulate in terms of the pay-off functions g0 (tn , e~y ) and ~0 (tn ) the corresponding results given in the discrete time log-price processes Y Chapter 7∗ . Pn ~ ~0 (tn ) = Y ~0 (t0 ) + The embedded discrete time process Y k=1 W0,k , n = ~ε (t0 ) = Y ~ε (0) and independent 0, 1, . . . , N is a random walk with the initial state Y ~ 0,n = Y ~0 (tn ) − Y ~0 (tn−1 ), n = 1, 2, . . . , N . random jumps W ~ ~0 (tn ), n = 0, . . . , N for the random walk Let us use the notation YΠ,0,n = Y ~ ~ ~ Y0 (tn ) and WΠ,0,n = W0,n , n = 0, . . . , N in order to point out explicitly the partition Π = {0 = t0 < · · · < tN = T } used to construct this random walk. This random walk has the phase space Rk and jump distributions, ~ Π,0,n ∈ A} PΠ,0,n (A) = P{W = P0 (tn−1 , tn , A),

(6.69)


and the initial distribution, ~Π,0,0 ∈ A} PΠ,0 (A) = P{Y = P0 (A).

(6.70)

P ~ Π,ε,k , n = ~Π,ε,n = Y ~Π,ε,0 + n W Let us now consider the random walk Y k=1 ~ 0, 1, . . . , N is a random walk with the initial state YΠ,ε,0 and independent random ~ ε,n , n = 1, 2, . . . , N . As usual, we also assume that the initial state and jumps W jumps are independent. ~Π,ε,n has the same phase space Rk , distribuWe assume the the random walk Y ~ tions of jumps PΠ,ε,n (A) = P{WΠ,ε,n ∈ A} and an initial distribution PΠ,ε (A) = ~Π,ε,0 ∈ A} fitted in the special way to the distributions of jumps and the initial P{Y ~Π,0,n . distribution of the random walk Y − + Let mε,t,j ≤ mε,t,j , j = 0, . . . , k, t ∈ [0, T ] be integer numbers. 0 Let us define, the sets of vector indices ˆLε,t , for t ∈ [0, T ], ˆL0ε,t = {¯ l = (l1 , . . . , lk ), + lj = m− ε,t,j , . . . , mε,t,j , j = 1, . . . , k}.

(6.71)

Let us choose δε,i > 0, λε,t,i ∈ R1 , i = 1, . . . , k, t ∈ [0, T ]. First, the skeleton intervals Iε,t,i,l should be constructed for l = m− ε,t,i , . . ., m+ , i = 1, . . . , k, t ∈ [0, T ], ε,t,i

Iε,t,i,l

 − 1   (−∞, δε,i (mε,t,i + 2 )] + λε,t,i = (δε,i (l − 12 ), δε,i (l + 12 )] + λε,t,i   1 (δε,i (m+ ε,t,i − 2 ), ∞) + λε,i

if l = m− ε,t,i , + if m− ε,t,i < l < mε,t,i ,

if l =

(6.72)

m+ ε,t,i .

Then, skeleton points yε,t,i,li = li δε,i + λε,t,i ∈ Iε,t,i,li for li = m− ε,t,i , . . ., = 1, . . . , k, t ∈ [0, T ] should be defined. ˆ 0 ¯ = Iε,t,1,l × · · · × Iε,t,k,l and skeleton points, Second, the skeleton sets A 1 k ε,t,l 0 ˆ = (y , . . . , y ) ∈ A should be defined, for ¯ l = (l1 , . . . , lk ) ∈ ¯ ε,t,1,l ε,t,k,l ¯

m+ ε,t,i , i

~ yε,t,l 1 ˆL0ε,t , t ∈ [0, T ].

k

ε,t,l

Let us now assume that the following skeleton fitting conditions hold: ~ Π,ε,n = ~ L6 : For every ε ∈ (0, ε0 ], the distributions of jumps P{W y

ε,tn ,¯ l} = 0 0 0 ¯ ˆ ˆ ˆ ~ PΠ,ε,n ({~ yε,tn ,¯l }) = P{WΠ,0,n ∈ Aε,tn ,¯l } = PΠ,0,n (Aε,tn ,¯l ), l ∈ Lε,tn , n = 1, 2, . . . , N .

and ~Π,ε,0 = ~ M6 : For every ε ∈ (0, ε0 ], the initial distribution P{Y yε,t0 ,¯l } = PΠ,ε ({~ yε,t0 ,¯l }) 0 0 0 ¯ ˆ ˆ ˆ ~ = P{YΠ,0,0 ∈ A ¯} = PΠ,0 (A ¯), l ∈ Lε,t . ε,tn ,l

ε,t0 ,l

0


This is useful to note that the space-skeleton structure described above and ~Π,ε,n is determined by used to construct space-skeleton atomic Markov chains Y 0 0 0 ˆ Π,ε = < ~ ˆ the set of elements Ξ yε,tn ,¯l , A ,¯ l ∈ ˆLε,tn , n = 0, . . . , N >. ε,tn ,¯ l Let us define skeleton functions, hε,t,i (y), y ∈ R1 , for ε ∈ (0, ε0 ] and i = 1, . . . , k, t ∈ [0, T ],  1  if y ≤ δε,i (m− δε,i m−  ε,t,i + 2 ) + λε,t,i , ε,t,i + λε,t,i     if δε,i (l − 12 ) + λε,t,i < y   δε,i l + λε,t,i ≤ δε,i (l + 12 ) + λε,t,i ,

hε,t,i (y) =

        δ m+ + λ ε,i ε,t,i ε,t,i

(6.73)

m− ε,t,i if y >

< l < m+ ε,t,i , + δε,i (mε,t,i − 12 )

+ λε,t,i .

ˆ 0ε,t (~ Finally, let us define vector skeleton functions h y ), ~ y = (y1 , . . ., yk ) ∈ Rk , for ε ∈ (0, ε0 ] and t ∈ [0, T ], ˆ 0ε,t (~ h y ) = (hε,t,1 (y1 ), . . . , hε,t,k (yk )).

(6.74)

The jump distributions PΠ,ε,n (A), A ∈ BRk , n = 1, 2, . . . , N take the following form, for every ε ∈ (0, ε0 ],

PΠ,ε,n (A) =

X

ˆ 0 ¯) PΠ,0,n (A ε,tn ,l

~ yε,tn ,l¯∈A

=

X

ˆ 0ε,t (W ˆ 0 ¯) = P{h ~ Π,0,n ) ∈ A}. P0 (tn−1 , tn , A n ε,tn ,l

(6.75)

~ yε,tn ,l¯∈A

As far as the initial distribution PΠ,ε,0 (A), A ∈ BZ is concerned, it takes the following form, for every ε ∈ (0, ε0 ],

PΠ,ε (A) =

X

ˆ 0 ¯) PΠ,0,0 (A ε,t0 ,l

~ yε,0,l¯∈A

=

X

ˆ 0ε,t (Y ˆ 0 ¯) = P{h ~Π,0,n ) ∈ A}. P0 (A 0 ε,t0 ,l

(6.76)

~ yε,0,l¯∈A

The quantities PΠ,ε,n (A) and PΠ,ε (A), defined in relations (6.75) and (6.76) are probability measures as function of A and, thus, they can serve as, respectively, distribution of jumps and an initial distribution for a random walk. Obviously, the distributions of jumps PΠ,ε,n (A) and the initial distribution PΠ,ε (A) given by relations (6.75) and (6.76) satisfy conditions L10 and M10 . (ε) Let us denote MΠ,n,N of all Markov moments τε,n for the discrete time Markov ~Π,ε,r , r = 0, 1, . . . , N , which (a) take values n, n + 1, . . . , N , (b) log-price process Y ~ {τε,n = m} ∈ σ[YΠ,ε,r , n ≤ r ≤ m], n ≤ m ≤ N .


Let, also, φΠ,ε,n (~ y ) be the corresponding reward function for the American option with the pay-off function g0 (tn , e~y ), defined by the following relation, for ~ y ∈ Rk , n = 0, 1, . . . , N , φΠ,ε,n (~ y) =

~

E~y,n g0 (tτε,n , eYΠ,ε,τε,n ).

sup

(6.77)

(ε) τε,n ∈MΠ,n,N

(ε)

Let also ΦΠ,ε = Φε (Mmax,Π,0,N ) be the corresponding optimal expected reward, defined by the following relation, ΦΠ,ε =

sup

~

Eg0 (tτε,0 , eYΠ,ε,τε,0 ).

(6.78)

(ε)

τε,0 ∈MΠ,0,N

Since the atomic Markov chains $\vec Y_{\Pi,\varepsilon,n}$ have initial distributions and transition probabilities concentrated at sets with finite numbers of points, the reward functionals are finite, i.e., $|\phi_{\Pi,\varepsilon,n}(\vec y)| < \infty$, $\vec y \in \mathbb{R}^k$, $n = 0, 1, \ldots, N$ and $|\Phi_{\Pi,\varepsilon}| < \infty$, for every $\varepsilon \in (0, \varepsilon_0]$.

Let us denote, for $\bar l = (l_1, \ldots, l_k)$, $l_i = m^-_{\varepsilon,n,n+r,i}, \ldots, m^+_{\varepsilon,n,n+r,i}$, $i = 1, \ldots, k$,
\[
\vec y_{\varepsilon,n,n+r,\bar l} = (\delta_{\varepsilon,1} l_1 + \lambda_{\varepsilon,n,n+r,1}, \ldots, \delta_{\varepsilon,k} l_k + \lambda_{\varepsilon,n,n+r,k}), \tag{6.79}
\]
where, for $i = 1, \ldots, k$, $0 \le n < n + r < \infty$,
\[
m^\pm_{\varepsilon,n,n+r,i} = \sum_{l=n+1}^{n+r} m^\pm_{\varepsilon,t_l,i}, \qquad \lambda_{\varepsilon,n,n+r,i} = \sum_{l=n+1}^{n+r} \lambda_{\varepsilon,t_l,i}. \tag{6.80}
\]
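As a small illustration of the bookkeeping in (6.79)–(6.80), the sketch below accumulates the per-step skeleton parameters into the cumulative index bounds $m^\pm_{\varepsilon,n,n+r,i}$ and shifts $\lambda_{\varepsilon,n,n+r,i}$; the array layout and the toy parameter values are assumptions of the example.

```python
import numpy as np

def cumulative_skeleton_params(m_minus, m_plus, lam, n, r):
    """Cumulative additive-skeleton parameters of (6.80).

    m_minus, m_plus : arrays of shape (N + 1, k) with the per-step index bounds
                      m^-_{eps,t_l,i}, m^+_{eps,t_l,i}
    lam             : array of shape (N + 1, k) with the per-step shifts lambda_{eps,t_l,i}
    Returns the sums over l = n + 1, ..., n + r for each coordinate i.
    """
    sl = slice(n + 1, n + r + 1)
    return (m_minus[sl].sum(axis=0),       # m^-_{eps,n,n+r,i}
            m_plus[sl].sum(axis=0),        # m^+_{eps,n,n+r,i}
            lam[sl].sum(axis=0))           # lambda_{eps,n,n+r,i}

# Toy usage: N = 4 steps, k = 2 coordinates, constant bounds +/-10, zero shifts.
m = 10 * np.ones((5, 2), dtype=int)
print(cumulative_skeleton_params(-m, m, np.zeros((5, 2)), n=1, r=3))
```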

The following lemma is a variant of Lemma 7.4.2∗ . ~Π,ε,n is a space Lemma 6.2.4. Let, for every ε ∈ (0, ε0 ], the log-price process Y skeleton random walk with jump distributions and the initial distribution given in relations (6.75) and (6.76) and satisfying the fitting conditions L6 and M6 . Then, the log-reward functions φΠ,ε,n (~ y ) and φΠ,ε,n+r (~ y+~ yε,n,n+r,¯l ), ¯ l = (l1 , . . . , lk ), − + li = mε,n,n+r,i , . . . , mε,n,n+r,i , i = 1, . . . , k, r = 1, . . . N − n, are, for every ~ y ∈ Rk , n = 0, . . . , N , the unique solution for the following finite recurrence system of linear equations,  φε,N (~ y+~ yε,n,N,¯l ) = g0 (tN , e~y+~yε,n,N,l¯),     +  ¯  l = (l1 , . . . , lk ), lj = m−  ε,n,N,i , . . . , mε,n,N,i , i = 1, . . . , k,      φε,n+r (~ y+~ yε,n,n+r,¯l )      = max g0 (tn+r , e~y+~yε,n,n+r,l¯),    P    y+~ yε,n,n+r,¯l + ~ yε,n+r+1,¯l 0 ) ¯ l 0 ∈Lε,tn+r+1 φε,n+r+1 (~  (6.81) 0 ˆ  ×PΠ,0,n+r+1 (A ) ,  ε,tn+r+1 ,¯ l0    +  ¯ l = (l1 , . . . , lk ), li = m−  ε,n,n+r,j , . . . , mε,n,n+r,i ,      i = 1, . . . , k, r = N − n − 1, . . . , 1,       φε,n (~ y , x) = max g0 (tn , e~y ),     P  ˆ0  0 φε,n+1 (~ y+~ yε,n+1,¯l 0 )PΠ,0,n+1 (A ) , ¯ l 0 ∈ ˆL ε,tn+1 ,¯ l0 ε,tn+1



while the optimal expected reward $\Phi_{\Pi,\varepsilon} = \Phi_\varepsilon(\mathcal{M}^{(\varepsilon)}_{\Pi,0,N})$ is given by the following formula,
\[
\Phi_{\Pi,\varepsilon} = \sum_{\bar l \in \hat{\mathbb{L}}'_{\varepsilon,t_0}} \phi_{\Pi,\varepsilon,0}(\vec y_{\varepsilon,t_0,\bar l})\, P_{\Pi,\varepsilon,0}(\hat A'_{\varepsilon,t_0,\bar l}). \tag{6.82}
\]

The recurrence algorithms presented in Lemmas 6.2.1 and 6.2.4 are computationally very effective even for large values of the parameters $N$ and $m^+_{\varepsilon,t_r,j} - m^-_{\varepsilon,t_r,j}$, $j = 1, \ldots, k$, $r = n+1, \ldots, N$, which determine the number of equations, $N'_{\varepsilon,n,N} = \sum_{r=n+1}^{N} \prod_{j=1}^{k} (m^+_{\varepsilon,t_r,j} - m^-_{\varepsilon,t_r,j} + 1) + 1$ in system (6.65), and $N''_{\varepsilon,n,N} = \sum_{r=n+1}^{N} \prod_{j=1}^{k} (m^+_{\varepsilon,n,r,j} - m^-_{\varepsilon,n,r,j} + 1) + 1$ in system (6.81).

However, it should be noted that $N'_{\varepsilon,n,N}$ and $N''_{\varepsilon,n,N}$ depend in a multiplicative way on the dimension parameter $k$ and, thus, are sensitive to the value of this parameter.

The complexity of computing the probabilities $P_{\Pi,0,n+r+1}(\vec y_{\varepsilon,t_{n+r},\bar l}, \hat A_{\varepsilon,t_{n+r+1},\bar l'})$

and of the values $g_0(t_{n+r}, e^{\vec y_{\varepsilon,t_{n+r},\bar l}})$ of the pay-off function, entering system (6.65), and of the probabilities $P_{\Pi,0,n+r+1}(\hat A'_{\varepsilon,t_{n+r+1},\bar l'})$ and the values $g_0(t_{n+r}, e^{\vec y + \vec y_{\varepsilon,n,n+r,\bar l}})$ of the pay-off function, entering system (6.81), also plays an important role.

Note that the probabilities $P_{\Pi,0,n+r+1}(\vec y_{\varepsilon,t_{n+r},\bar l}, \hat A_{\varepsilon,t_{n+r+1},\bar l'})$, entering system (6.65), and the probabilities $P_{\Pi,0,n+r+1}(\hat A'_{\varepsilon,t_{n+r+1},\bar l'})$, entering system (6.81),

are both probabilities of cubes, respectively $\hat A_{\varepsilon,t_{n+r+1},\bar l'}$ or $\hat A'_{\varepsilon,t_{n+r+1},\bar l'}$, for the distributions of increments of the log-price process with independent increments $\vec Y_0(t)$.

Such probabilities can be computed effectively in many important cases, for example, for Gaussian distributions, in the case where $\vec Y_0(t)$ is a Gaussian process with independent increments, or for compound Poisson-normal distributions, in the case where $\vec Y_0(t)$ is a jump-diffusion process, and in many other models.

Relations (6.81) and (6.82) give an effective algorithm for computing the reward functions $\phi_{\Pi,\varepsilon,n}(\vec y)$ and the optimal expected rewards $\Phi_{\Pi,\varepsilon}$ for the approximating space-skeleton random walks $\vec Y_{\Pi,\varepsilon,n}$.

However, it should be noted that the above algorithm also includes computing values of the pay-off function $g_0(t, e^{\vec y})$ at the skeleton points and the values of the probabilities $P_{0,n+r+1}(\hat A'_{\varepsilon,t_{n+r+1},\bar l'})$ for the log-price process $\vec Y_0(t)$ with independent increments. In many cases, these quantities can be computed effectively. However, in some cases, this can be a difficult problem, and alternative approximation methods can serve better.

Let us consider the space-skeleton structure $\hat\Xi'_{\Pi,\dot\varepsilon} = \langle \vec y_{\dot\varepsilon,t_n,\bar l}, \hat A'_{\dot\varepsilon,t_n,\bar l}, \bar l \in \hat{\mathbb{L}}'_{\dot\varepsilon,t_n}, n = 0, \ldots, N \rangle$, for a fixed value of the parameter $\varepsilon = \dot\varepsilon$ determining the skeleton points and sets.

Let also $g_\varepsilon(t, e^{\vec y})$ and $\vec Y_\varepsilon(t)$ be, for every $\varepsilon \in [0, \varepsilon_0]$, respectively, a pay-off function and a Markov log-price process introduced in Subsection 6.2.1. We use them as approximations for the pay-off function $g_0(t, e^{\vec y})$ and the log-price process $\vec Y_0(t)$.
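For the Gaussian case mentioned above, the probability of a rectangular skeleton cell under an increment distribution with independent coordinates factorizes into one-dimensional normal probabilities. The following sketch computes such a cube probability; the assumption of coordinate-wise independent Gaussian increments with given means and standard deviations is made only for this example.

```python
import numpy as np
from scipy.stats import norm

def gaussian_cell_probability(lower, upper, mean, std):
    """P{ lower_i < W_i <= upper_i, i = 1..k } for an increment W with independent
    Gaussian coordinates W_i ~ N(mean_i, std_i**2); cells may be half-infinite."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    p_upper = norm.cdf(upper, loc=mean, scale=std)
    p_lower = norm.cdf(lower, loc=mean, scale=std)
    return float(np.prod(p_upper - p_lower))

# Illustrative call: a 2-dimensional cell (0, 0.1] x (-inf, 0.2] for a standard
# Gaussian increment (all numerical values are assumptions of the example).
print(gaussian_cell_probability([0.0, -np.inf], [0.1, 0.2], mean=0.0, std=1.0))
```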


~ (ε) , with the initial distribution P (ε) (A) The approximating random walk Y Π,ε,n ˙ Π,ε˙ (ε) ~ (ε) ∈ A}, and quantities Φ(ε) and and distributions of jumps PΠ,ε,n ˙ (A) = P{WΠ,ε,n ˙ Π,ε˙ (ε)

φΠ,ε,n y ) can be defined, for every ε ∈ [0, ε0 ], for the pay-off function gε (t, e~y ) and ˙ (~ ~ε (t), in the same way, as the random walk Y ~Π,ε,n the log-price process Y ˙ , with the ~ Π,ε,n initial distribution PΠ,ε˙ (A) and distributions of jumps PΠ,ε,n (A) = P{W ∈ ˙ ˙ A}, and quantities ΦΠ,ε˙ and φΠ,ε,n y ) have been defined, for the pay-off function ˙ (~ ~0 (t). g0 (t, e~y ) and the log-price process Y This requires to assume that the following fitting conditions holds: ~ (ε) = ~ L7 [ε]: ˙ For every ε ∈ |0, ε0 ], the distributions of jumps P{W y ¯} = Π,ε,n ˙

ε,t ˙ n ,l

0 (ε) ˆ 0 ¯), ¯ ˆ 0 ¯} = P (ε) (A ~ (ε) l ∈ ˆLε,t PΠ,ε,n yε,t ˙ n, n = ˙ n ,¯ l }) = P{WΠ,0,n ∈ Aε,t Π,0,n ˙ ({~ ε,t ˙ n ,l ˙ n ,l 1, 2, . . . , N .

and ~ (ε) = ~ M7 [ε]: ˙ For every ε ∈ [0, ε0 ], the initial distribution P{Y yε,t ˙ 0 ,¯ l} = Π,ε,0 ˙ 0 (ε) (ε) (ε) ˆ 0 0 ¯ ˆ ˆ ~ PΠ,ε˙ ({~ yε,t } = PΠ,0 (Aε,t ), l ∈ Lε,t ˙ 0. ˙ 0 ,¯ l }) = P{YΠ,0,0 ∈ Aε,t ˙ n ,¯ l ˙ 0 ,¯ l (ε)

(ε)

In this case, quantities ΦΠ,ε˙ and φΠ,ε,n y ) can be computed using the recur˙ (~ rence algorithm given in Lemma 6.2.4. Let us again assume that conditions I15 [Π] and J8 [Π] hold. Let Btn ,tn+1 = {A ∈ BRk : P0 (tn , tn+1 , ∂A) = 0}, n = 0, . . . , N − 1 be the σ-algebras of sets of continuity for the probability measures P0 (tn−1 , tn+1 , A). Let us assume that the following condition holds: ¯ ˆ 0˙ ˆ0 N13 [Π]: A , n = 0, . . . , N − 1. ¯ ∈ Btn ,tn+1 , l ∈ Lε,t ε,t ˙ n+1 ,l

n+1

Let us introduce, for n = 0, . . . , N sets, +

mε,n,n+r,j −n k ˙ ˙ Π,ε,n ˙ tn ∩ (∩N Y =Y ˙ r=1 ∩j=1 ∩l =m− j

ε,n,n+r,j

˙ tn+r − ~ (Y yε,n,n+r, ¯ ˙ l )).

(6.83)

The following lemma, which is the direct corollary of Lemma 6.2.4, gives (ε) conditions for convergence of the reward functions φΠ,ε,n y ). ˙ (~ Lemma 6.2.5. Let conditions L7 [ε], ˙ I15 [Π], J8 [Π], N13 [Π] hold Then the ˙ Π,ε,n following asymptotic relation takes place, for every ~ y∈Y ˙ , n = 0, . . . , N , (ε)

(0)

φΠ,ε,n y ) → φΠ,ε,n y ) as ε → 0. ˙ (~ ˙ (~

(6.84)

Let B = {A ∈ BZ : P0 (∂A) = 0} be the of σ-algebra of sets of continuity for the probability measure P0 (A). Let us assume that the following condition holds: ˆ 0 ¯ ∈ B0 , ¯ K24 [Π]: (a) Pε (·) ⇒ P0 (·) as ε → 0; (b) A l ∈ L0ε,t ˙ . ε,t ˙ 0 ,l

0


The following lemma give conditions for convergence of the optimal expected (ε) rewards ΦΠ,ε˙ . Lemma 6.2.6. Let conditions L7 [ε], ˙ M7 [ε], ˙ I15 [Π], J8 [Π], N13 [Π] and K24 [Π] hold. Then the following asymptotic relation takes place, (ε)

(0)

ΦΠ,ε˙ → ΦΠ,ε˙ as ε → 0.

(6.85)

Remark 6.2.4. Under conditions of Lemma 6.2.5, the values of rewards func(ε) (0) ¯ tions φΠ,ε,n y+~ yε,n,n+r, y+~ yε,n,n+r, ¯ ¯ ˙ l ) → φΠ,ε,n ˙ l ) as ε → 0, for l = (l1 , . . . , lk ), ˙ (~ ˙ (~ − + lj = mε,n,n+r,j , . . . , mε,n,n+r,j , j = 1, . . . , k, r = 1, . . . N − n, n = 1, . . . , N , and, (ε)

(0)

under conditions of Lemma 6.2.6, also φΠ,ε,0 yε,t yε,t l ) → φΠ,ε,0 ˙ 0 ,¯ l ) as ε → 0, for ˙ 0 ,¯ ˙ (~ ˙ (~ 0 ¯ l ∈ ˆLε,t . 0

Remark 6.2.5. If one would like to get asymptotical relations given in Lemmas 6.2.5 and 6.2.6 for some sequences of partitions Πm , m = 0, 1, . . . and some sequence of parameters ε˙l , l = 0, 1, . . . determining the sets of elements for the corˆ 0Π ,ε˙ , one should require holding of condiresponding space-skeleton structures Ξ m l tions of Lemmas 6.2.5 and 6.2.6, for all partitions Πm , m = 0, 1, . . . and all skeleton structures corresponding to parameters ε˙l , l = 0, 1, . . .. The numbers of skeleton points and skeleton sets involved in conditions N13 [Π] and K24 [Π] (b) are finite, and these numbers are not more than countable in the case considered in Remark 6.2.5. This always makes it possible to choose skeleton points and skeleton sets in such way that conditions N13 [Πm ] and K24 [Πm ] (b) hold for any given sequences of partitions Πm , m = 0, 1, . . . and parameters ε˙l , l = 0, 1, . . . determining the sets of elements for the corresponding skeleton structures ˆ 0Π ,ε˙ . Ξ m l

6.2.5 Convergence of time-space-skeleton reward approximations for a given partition of time interval, for multivariate log-price processes with independent increments Let us formulate conditions of convergence for the rewards φΠ,ε,n (~ y ) and optimal expected rewards ΦΠ,ε . These conditions, in fact, re-formulate in terms of the payoff functions g0 (tn , e~y , x) and the discrete time multivariate modulated log-price ~0 (tn ) the corresponding conditions given in process with independent increments Y Sections 7.3∗ and 7.4∗ . In the case of the space-skeleton approximation model with fixed spaceskeleton structure, the maximal and the minimal skeleton points are given by the following formulas, for j = 1, . . . , k, t ∈ [0, T ] and ε ∈ (0, ε0 ], ± yε,t,j = δε,t,j m± ε,t,j + λε,t,j .

(6.86)

In the case, we impose on parameters of the corresponding fixed skeleton structure the following condition:


± → ∞ as N14 [Π]: (a) δε,t,j → 0 as ε → 0, for j = 1, . . . , k, t ∈ Π; (b) ±yε,t,j ± ε → 0, for j = 1, . . . , k, t ∈ Π; (c) ±yε,t,j , t ∈ Π are non-decreasing functions in argument t ∈ Π, for every j = 1, . . . , k and ε ∈ (0, ε0 ]. − Note that condition N14 [Π] implies that δε,t,j (m+ ε,t,j − mε,t,j ) → ∞ as ε → 0, − + for j = 1, . . . , k, t ∈ Π, and mε,t,j − mε,t,j → ∞ as ε → 0, for j = 1, . . . , k, t ∈ Π. The standard choice of structural skeleton parameters penetrating condition ± N14 [Π] is where parameters δε,t,j = δε , m± ε,t,j = ±mε , j = 1, . . . , k, mε,t,0 = ±mε,0 do not depend on t ∈ Π. In this case, conditions N14 [Π] requires that δε → 0, mε → ∞ and δε mε → ∞ as ε → 0. In the case of the space-skeleton reward approximation model with additive skeleton structure, the maximal and the minimal skeleton points are given by the following formulas, for j = 1, . . . , k, t ∈ [0, T ] and ε ∈ (0, ε0 ], 0± yε,t,j = δε,j m± ε,t,j + λε,t,j .

(6.87)

In the case, we impose on parameters of the corresponding additive skeleton structure the following condition: 0± N15 [Π]: (a) δε,j → 0 as ε → 0, for j = 1, . . . , k; (b) ±yε,t,j → ∞ as ε → 0, for j = 1, . . . , k, t ∈ Π. − Note that condition N15 [Π] implies that δε,t,j (m+ ε,t,j − mε,t,j ) → ∞ as ε → 0, − + for j = 1, . . . , k, t ∈ Π, and mε,t,j − mε,t,j → ∞ as ε → 0, for j = 1, . . . , k, t ∈ Π. The standard choice of structural skeleton parameters penetrating condition ± N15 [Π] is where parameters δε,t,j = δε , m± ε,t,j = ±mε , j = 1, . . . , k, mε,t,0 = ±mε,0 do not depend on t ∈ Π. In this case, conditions N13 [Π] requires that δε → 0, mε → ∞ and δε mε → ∞ as ε → 0. ¯ takes the form of the following condition, assumed to hold Condition C22 [β] for some vector parameter β¯ = (β1 , . . . , βk ) with non-negative components: ¯ ∆0β (Y0,i (t· , Π) < K93,i , i = 1, . . . , k for some 1 < K93,i < ∞, i = 1, . . . , k. C23 [β]: i

Let us re-call the second-type modulus of exponential moment compactness for the components of the log-price process, Ξ0± β (Y0,i (t· , Π) =

max

0≤n≤N −1

Ee±β(Y0,i (tn+1 )−Y0,i (tn )) .

(6.88)

¯ As follows from Lemma 4.1.8, the following condition implies condition C23 [β] to hold: ¯ Ξ± (Y0,i (t· , Π) < K16,i , i = 1, . . . , k, for some 1 < K16,i < ∞, i = 1, . . . , k. E15 [β]: β


The following condition is a reduced form of condition B37 [¯ γ ] for the model without index component. It should be assumed to hold for some vector parameter γ¯ = (γ1 , . . . , γk ) with non-negative components: B38 [¯ γ ]: max0≤n≤N sup~y∈Rk

y ~

1+

0 ≤ L74,1 , . . . , L74,k < ∞.

P|gk0 (tn ,e i=1

)|

L74,i eγi |yi |

< L73 , for some 0 < L73 < ∞ and

¯ assumed to hold As follows from Lemma 4.1.4, conditions B38 [¯ γ ] and C22 [β], ¯ for some vector parameters γ¯ and β such that 0 ≤ γi ≤ βi , i = 1, . . . , k, imply that the reward function φΠ,0,n (~ y ) is finite, i.e., for t = tn ∈ Π and ~ y ∈ Rk , ~

|φΠ,0,n (~ y )| ≤ E~y,tn sup |g0 (tr , eY0 (tr ) )| < ∞.

(6.89)

n≤r≤N

Condition I13 [Π] takes the following simpler form: I16 [Π]: There exist sets Ytn ∈ BRk , n = 0, . . . , N such that function g0 (tn , e~y ) is continuous in points ~ y ∈ Ytn for every n = 0, . . . , N . Condition J6 [Π] takes the following simpler form: y ) = 1, for every ~ y ∈ Ytn−1 , n = 1, . . . , N , where Ytn , J9 [Π]: P0 (tn−1 , tn , Ytn − ~ n = 0, . . . , N are sets introduced in condition I16 [Π]. A typical example is where the sets Ytn , n = 1, . . . , N are empty sets. Then condition J9 [Π] obviously holds. Another typical example is where sets Ytn , n = 1, . . . , N are at most finite or countable sets. Then the assumption that measures P0 (tn−1 , tn , A), n = 1, . . . , N have no atoms implies that condition J9 [Π] holds. One more example is where measures P0 (tn−1 , tn , A), n = 1, . . . , N are absolutely continuous with respect to the Lebesgue measure in Rk and sets Ytn , n = 1, . . . , N have zero Lebesgue measure. This assumption also implies that condition J9 [Π] holds. ~Π,ε,n has no index component. However, one can add to The Markov chain Y tis Markov chain the virtual index component XΠ,ε,n ≡ x0 with a one-point phase space X0 = {x0 }. It should be also pointed out that conditions J9 [Π] and N14 [Π] (in the case of the approximation model with fixed skeleton structure) or J9 [Π] and N15 [Π] (in the case of the approximation model with additive skeleton structure) imply that condition J6 [Π] holds, with the sets with sets Z0tn = Ytn × X0 , Z00tn = Rk × X0 , n = 1, . . . , N . In the case of the approximation model with fixed skeleton structure, condition ˆ ε,tn (~ N14 [Π] implies that sup|~y|≤H |h y) − ~ y | → 0 as ε → 0, for any 0 ≤ H < ˆ ε,tn (W ~ 0,n + ∞ and n = 0, . . . , N . This obviously implies that random vectors h d ˆ ~ hε,tn−1 (~ yε )) −→ W0,n + ~ y0 as ε → 0 for any ~ yε → ~ y0 as ε → 0, for any ~ y0 ∈ Rk and n = 1, . . . , N . This relation implies (taking into account formula (6.61) for


~Π,ε,n ) that condition J6 [Π] (a) the transition probabilities of the Markov chain Y 00 holds, with the sets Ztn = Rk × X0 , n = 1, . . . , N . Also, condition J9 [Π] implies condition J6 [Π] (b) to hold that is readily seen from formula (6.54) for transition ~Π,0,n . probabilities of the Markov chain Y Analogously, in the case of the approximation model with additive skeleton ˆ 0ε,t (~ structure, condition N15 [Π] implies that sup|~y|≤H |h y) − ~ y | → 0 as ε → 0, for n any 0 ≤ H < ∞ and n = 0, . . . , N . This obviously implies that random vectors d ˆ 0ε,t (W ~ 0,n ) + ~ ~ 0,n + ~ h yε −→ W y0 as ε → 0 for any ~ yε → ~ y0 as ε → 0, for any ~ y0 ∈ Rk n and n = 1, . . . , N . This relation implies (taking into account formula (6.75) for the ~Π,ε,n ) that condition J6 [Π] (a) holds, distributions of jumps for the random walk Y 00 with the sets Ztn = Rk × X0 , n = 1, . . . , N . Also, condition J9 [Π] implies condition J6 [Π] (b) to hold that is readily seen from formula (6.69) for the distributions of ~Π,0,n . jumps for the random walk Y The following theorem is a variant of Theorems 7.4.5∗ and 6.1.5, in the case of approximation model with fixed skeleton structure, or Theorem 7.4.5∗ , in the case of approximation model with additive skeleton structure. Theorem 6.2.5. Let Π = h0 = t0 < t1 · · · < tN = T i be an arbitrary parti~Π,0,n = Y ~0 (tn ) represented by tion of the interval [0, T ] and the log-price process Y random walk and the corresponding space-skeleton approximation Markov log-price ~Π,ε,n are defined as described in Subsection 6.3.3 (Subsection 6.3.4). Let processes Y ¯ hold with vector parameters γ¯ = (γ1 , . . . , γk ) also conditions B38 [¯ γ ] and C22 [β] ¯ and β = (β1 , . . . , βk ) such that, for every i = 1, . . . , k, either βi > γi > 0 or βi = γi = 0, and also conditions L4 , N14 [Π] (L6 , N15 [Π]), I16 [Π], and J9 [Π] hold. Then, the following relation takes place, for any t = tn ∈ Π and ~ yε → ~ y0 ∈ Yt as ε → 0, (0) (0) φΠ,ε,n (~ yε ) → φΠ,0,n (~ y0 ) = φt (MΠ,t,T , ~ y0 ) as ε → 0. (6.90) ¯ (the difThe following condition is a particular variant of condition D26 [β] ference is only that it is imposed on initial states of the embedded random walk Yε (t0 ) instead of the initial states of log-price component of the embedded Markov ~ ε (t0 ) = (Y ~ε (t0 ), Xε (t0 ))), assumed to hold for some vector parameter chains Z ¯ β = (β1 , . . . , βk ) with non-negative components: 0 0 ¯ Eeβi |Y0,i (t0 )| < K89,i D026 [β]: , i = 1, . . . , k, for some 1 < K89,i < ∞, i = 1, . . . , k. ¯ and D026 [β], ¯ assumed As follows from Lemma 4.1.6, conditions B38 [¯ γ ], C22 [β] to hold for some vector parameters γ¯ and β¯ such that 0 ≤ γi ≤ βi , i = 1, . . . , k, imply that the reward function ΦΠ,0 is finite, i.e., ~

|ΦΠ,0 | ≤ E sup |g0 (tn , eY0 (tn ) )| < ∞. 0≤n≤N

The following condition is a reduced form of condition K21 :

(6.91)


K25 : PΠ,0,0 (Yt0 ) = P0 (Yt0 ) = 1, where Yt0 is the set introduced in conditions I16 [Π]. The following theorem is a variant of Theorems 7.3.2∗ and 6.1.6, in the case of approximation model with fixed skeleton structure, or Theorem 7.4.6∗ , in the case of approximation model with additive skeleton structure. Theorem 6.2.6. Let Π = h0 = t0 < t1 · · · < tN = T i be a partition of the ~Π,0,n = Y ~0 (tn ) represented by random walk interval [0, T ] and the log-price process Y and the corresponding space-skeleton approximation Markov log-price processes ~Π,ε,n are defined as described in Subsection 6.3.3 (Subsection 6.3.4). Let also Y ¯ and D026 [β] ¯ hold with vector parameters γ¯ = (γ1 , . . . , γk ) conditions B38 [¯ γ ], C22 [β] and β¯ = (β1 , . . . , βk ) such that, for every i = 1, . . . , k, either βi > γi > 0 or βi = γi = 0, and also conditions L4 , M4 , N14 [Π] (L6 , M6 , N15 [Π]), I16 [Π], J9 [Π] and K25 hold. Then, the following relation takes place, (0)

ΦΠ,ε → ΦΠ,0 = Φ(MΠ,0,T ) as ε → 0.

(6.92)

6.2.6 Convergence of time-space-skeleton reward approximations based on arbitrary partitions of time interval, for multivariate log-price processes with independent increments Let us formulate conditions of convergence for the reward functions φΠ,ε,n (~ y ) and the optimal expected rewards ΦΠ,ε , for an arbitrary partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ]. In fact, we should just formulate (in terms of the pay-off function g0 (t, e~y ) ~0 (t) with independent increments) and the continuous time log-price processes Y conditions given in Theorems 6.2.5 and 6.2.6 in such way these theorems could be applied to the case of an arbitrary chosen partition Π. In the case of approximation model with fixed skeleton structure, we should replace fitting conditions L4 and M4 by the following more general conditions imposed on transition probabilities and initial distributions of approximating atomic Markov chains YΠ,ε,n , which should hold for any partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ]: L8 : For any partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ], and every ε ∈ (0, ε0 ], the transition probabilities PΠ,ε,n (~ y , {~ yε,tn ,¯l }) = 0 ¯ ¯ ˆ ˆ ˆ ˆ PΠ,0,n (~ yε,tn−1 ,¯l0 , Aε,tn ,¯l ), ~ y ∈ Aε,tn−1 ,¯l0 , l ∈ Lε,tn−1 , l ∈ Lε,tn , n = 1, 2, . . . , N . and M8 :For any partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ], ˆ and for every ε ∈ (0, ε0 ], the initial distribution PΠ,ε ({~ yε,t0 ,¯l }) = PΠ,0 (A ε,t0 ,¯ l ), ¯ ˆ l ∈ Lε,t0 .


We should also replace fitting conditions N14 [Π] by the following more general condition: ± N16 : (a) δε,t,j → 0 as ε → 0, for j = 1, . . . , k, t ∈ [0, T ]; (b) ±yε,t,j → ∞ as ± ε → 0, for j = 1, . . . , k, t ∈ [0, T ]; (c) ±yε,t,j are non-decreasing functions in argument t ∈ [0, T ], for every j = 1, . . . , k and ε ∈ (0, ε0 ].

Note that, according condition M8 , the approximating Markov chains ZΠ,ε,n have the same initial distribution for all partitions Π, for every ε ∈ (0, ε0 ]. − Note that condition N16 implies that δε,t,j (m+ ε,t,j − mε,t,j ) → ∞ as ε → 0, for − j = 1, . . . , k, t ∈ [0, T ], and m+ ε,t,j −mε,t,j → ∞ as ε → 0, for j = 1, . . . , k, t ∈ [0, T ]. The standard choice of structural skeleton parameters penetrating condition ± N16 is where parameters δε,t,j = δε , m± ε,t,j = ±mε , j = 1, . . . , k, mε,t,0 = ±mε,0 do not depend on t ∈ [0, T ]. In this case, conditions N16 require that δε → 0, mε → ∞ and δε mε → ∞ as ε → 0. In the case of approximation model with additive skeleton structure, we should replace fitting conditions L6 and M6 by the following more general conditions, which should hold for any partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ]: L9 : For any partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ], ~ Π,ε,n = ~ and every ε ∈ (0, ε0 ], the distributions of jumps P{W yε,tn ,¯l } = 0 0 0 ¯ ˆ ˆ ˆ ~ PΠ,ε,n ({~ y ¯}) = P{WΠ,0,n ∈ A ¯} = PΠ,0,n (A ¯), l ∈ Lε,t , n = ε,tn ,l

ε,tn ,l

ε,tn ,l

n

1, 2, . . . , N . and M9 : For any partition Π = h0 = t0 < t1 . . . < tN = T i of the interval [0, T ], and for ~Π,ε,0 = ~ every ε ∈ (0, ε0 ], the initial distribution P{Y yε,t0 ,¯l } = PΠ,ε ({~ yε,t0 ,¯l }) = 0 0 0 ¯ ˆ ˆ ˆ ~ P{YΠ,0,0 ∈ A ¯} = PΠ,0 (A ¯), l ∈ Lε,t . ε,tn ,l

ε,t0 ,l

0

Also condition N15 [Π] takes the following more general form: 0± N17 : (a) δε,j → 0 as ε → 0, for j = 1, . . . , k; (b) ±yε,t,j → ∞ as ε → 0, for j = 1, . . . , k, t ∈ [0, T ].

Note that, according condition M9 , the approximating random walk ZΠ,ε,n have the same initial distribution for all partitions Π, for every ε ∈ (0, ε0 ]. − Note that condition N17 implies that δε,j (m+ ε,t,j − mε,t,j ) → ∞ as ε → 0, for − + j = 1, . . . , k, t ∈ [0, T ], and mε,t,j −mε,t,j → ∞ as ε → 0, for j = 1, . . . , k, t ∈ [0, T ]. Also, it is worth to note that the standard choice of structural skeleton parameters penetrating condition N17 is where parameters δε,j = δε , m± ε,t,j = ±mε , j = 1, . . . , k, m± = ±m do not depend on t ∈ [0, T ]. ε,0 ε,t,0

6.2

LPP with independent increments

353

In this case, conditions N17 (a) – (b) require that δε → 0, mε → ∞ and δε mε → ∞ as ε → 0. ¯ takes in this case the form of the following condition, asCondition C11 [β] sumed to hold for some vector parameter β¯ = (β1 , . . . , βk ) with non-negative components: ¯ limc→0 ∆0β (Y0,i (·), c, T ) = 0, i = 1, . . . , k. C24 [β]: i

The following condition is an analogue of condition B1 [¯ γ ] for the model without index component. It should be assumed to hold for some vector parameter γ¯ = (γ1 , . . . , γk ) with non-negative components: B39 [¯ γ ]: sup0≤t≤T sup~y∈Rk

y ~

Pk|g0 (t,e

1+

0 ≤ L78,1 , . . . , L78,k < ∞.

i=1

)|

L76,i eγi |yi |

< L75 , for some 0 < L75 < ∞ and

¯ assumed to hold As follows from Lemma 4.1.4, conditions B39 [¯ γ ] and C24 [β], ¯ for some vector parameters γ¯ and β such that 0 ≤ γi ≤ βi , i = 1, . . . , k, imply that the reward function φΠ,0,n (~ y ) is finite, i.e., for any partition Π, t = tn ∈ Π and ~ y ∈ Rk , ~ |φΠ,0,n (~ y )| ≤ E~y,t sup |g0 (s, eY0 (s) )| < ∞. (6.93) t≤s≤T

Condition I16 [Π] should be replaced by the following condition: I17 : There exist sets Yt ∈ BRk , t ∈ [0, T ] such that g0 (t, e~y ) (as function in ~ y ) is continuous in points ~ y ∈ Yt , for every t ∈ [0, T ]. Condition J9 takes the following simpler form: J10 : P0 (t, t + u, Yt+u − ~ y ) = 1, for every ~ y ∈ Yt , 0 ≤ t < t + u ≤ T , where Yt , t ∈ [0, T ] are sets introduced in condition I17 . A typical example is where the sets Yt , t ∈ [0, T ] are empty sets. Then condition J10 obviously holds. Another typical example is where sets Ytn , n = 1, . . . , N are at most finite or countable sets. Then the assumption that measures P0 (t, t + u, A), 0 ≤ t < t + u ≤ T have no atoms implies that condition J10 holds. One more example is where measures P0 (t, t + u, A), , 0 ≤ t < t + u ≤ T are absolutely continuous with respect to the Lebesgue measure in Rk and sets Yt , t ∈ [0, T ] have zero Lebesgue measure. This assumption also implies that condition J10 holds. The following theorem in the corollary of Theorem 6.2.5. Theorem 6.2.7. Let Π = h0 = t0 < t1 · · · < tN = T i be an arbitrary ~Π,0,n = Y ~0 (tn ) represented partition of the interval [0, T ] and the log-price process Y by random walk and the corresponding space-skeleton approximation Markov log-


~Π,ε,n are defined as described in Subsection 6.3.3 ( Subsection price processes Y ¯ hold and, for every i = 1, . . . , k either 0 = 6.3.4). Let conditions B39 [¯ γ ], C24 [β] γi = βi or 0 < γi < βi < ∞. Let also conditions L8 , N16 (L9 , N17 ), I27 and J10 hold. Then, the following asymptotic relation holds for any partition Π = h0 = t0 < t1 · · · < tN = T i of the interval [0, T ], t = tn ∈ Π and ~ yε → ~ y0 ∈ Ytn as ε → 0, φΠ,ε,n (~ yε ) → φΠ,0,n (~ y0 ) (0)

(0)

= φt (MΠ,t,T , ~ y0 ) as ε → 0.

(6.94)

¯ which is natural The following condition is also a variant of conditions D27 [β], ~ to re-formulate, in this case, in terms of processes Y0 (t). It should be assumed to hold for some vector parameter β¯ = (β1 , . . . , βk ) with non-negative components: ¯ Eeβi |Y0,i (0)| < K91,i , i = 1, . . . , k, for some 1 < K91,i < ∞, i = 1, . . . , k. D027 [β]: ¯ and D27 [β], ¯ assumed As follows from Lemma 4.3.6, conditions B39 [¯ γ ], C24 [β] ¯ to hold for some vector parameters γ¯ and β such that 0 ≤ γi ≤ βi , i = 1, . . . , k, imply that the optimal expected reward ΦΠ,0 is finite, i.e., for any partition Π, ~

|ΦΠ,0 | ≤ E sup |g0 (t, eY0 (t) )| < ∞.

(6.95)

0≤t≤T

The following condition is also a variant of conditions K25 , which, also, is ~ 0 (t): natural to re-formulate, in this case, in terms of processes Z K26 : P0 (Y0 ) = 1, where Y0 is the sets introduced in condition I17 . The following theorem in the corollary of Theorem 6.2.6. Theorem 6.2.8. Let Π = h0 = t0 < t1 · · · < tN = T i be an arbitrary partition ~Π,0,n = Y ~0 (tn ) represented by ranof the interval [0, T ] and the log-price process Y dom walk and the corresponding space-skeleton approximation Markov log-price ~Π,ε,n are defined as described in Subsection 6.3.3 (Subsection 6.3.4). processes Y ¯ and D027 [β] ¯ hold and, for every i = 1, . . . , k either Let conditions B39 [¯ γ ], C24 [β] 0 = γi = βi or 0 < γi < βi < ∞. Let also conditions L8 , M8 , N16 (L9 , M9 , N17 ), I17 , J10 and K26 hold. Then, the following asymptotic relation holds, (0)

ΦΠ,ε → ΦΠ,0 = Φ(MΠ,0,T ) as ε → 0.

(6.96)

6.3 Time-space-skeleton reward approximations for diffusion LPP

In this section, we present results about time-space-skeleton reward approximations for diffusion log-price processes.

6.3



6.3.1 Convergence of time-skeleton reward approximations for multivariate diffusion log-price processes ~0 (t) be a diffusion log-price process given by the Let, as in Subsection 4.4.2, Y ~ε (t) be, for every ε ∈ (0, ε0 ], a timestochastic differential equation (4.126) and Y skeleton approximation log-price process given by the stochastic difference equation (4.127) and relation (4.128). We assume that conditions G8 , G10 , and G12 hold. Let, also, gε (t, e~y ) be, for every ε ∈ [0, ε0 ], a real-valued measurable pay-off function which do not depend on the index argument. Let us formulate conditions of convergence for optimal expected rewards (ε) (ε) (ε) Φ(MΠ,0,T ) and reward functions φt (MΠ,t,T , ~ y ), for an arbitrary partition Π = h0 = t0 < t1 . . . < tN = T i of interval [0, T ]. The above model is a particular case of the model considered in Section 6.1. In ~ε (t) is, this case, we consider log-price processes without index component. Also, Y ~ for every ε ∈ [0, ε0 ], is a càdlàg Markov process and, moreover, Y0 (t) is a diffusion process. We assume that condition B10 [¯ γ ] (introduced in Subsection 4.3.3) holds for some vector parameter γ¯ = (γ1 , . . . , γk ) with non-negative components. ~ε (t) has no index component, the first-type modulus of Since the process Y exponential moment compactness ∆βi (Yε,i (·), c, T ) takes simpler form, ∆00βi (Yε,i (·), c, T ) =

sup

sup E~y,t (eβ|Yε,i (t+u)−Yε,i (t)| − 1).

(6.97)

0≤t≤t+u≤t+c≤T ~ y ∈Rk

¯ takes in this case the form of the following simpler condition, Condition C2 [β] assumed to hold for some vector parameter β¯ = (β1 , . . . , βk ) with non-negative components: ¯ limc→0 limε→0 ∆00β (Yε,i (·), c, T ) = 0, i = 1, . . . , k. C25 [β]: i

¯ By Lemma 4.4.3, conditions G8 , G10 , and G12 imply that condition C25 [β] ~ε (t), for any vector parameter β¯ = (β1 , . . . , βk ) with nonholds for processes Y ¯ which is, as was pointed negative components. By Lemma 6.1.1, condition C25 [β], ¯ ¯ holds, for above, a simpler form of condition C2 [β], implies that condition C19 [β] any partition Π of the interval [0, T ]. ~ε,0 = Y ~0 (0) for every ε ∈ (0, ε0 ], condition D14 [β] ¯ Since the initial state Y ¯ reduces to condition D16 [β] (introduced in Subsection 4.4.2), assumed to hold for some vector parameter β¯ = (β1 , . . . , βk ) with non-negative components. By Theorem 4.4.2, conditions G8 , G10 , G12 and conditions B10 [¯ γ ] and ¯ assumed to hold for some vector parameters β¯ and γ¯ such that 0 ≤ γi ≤ D16 [β], ¯ γ¯ ) ∈ (0, ε0 ] such that, βi < ∞, i = 1, . . . , k, imply that there exists ε14 = ε14 (β,


for ε ∈ [0, ε14 ] and any partition Π, ~

(ε)

|Φ(MΠ,0,T )| ≤ E sup |gε (t, eYε (t) )| < ∞.

(6.98)

0≤t≤T

We impose on pay-off functions gε (t, e~y ) the condition of locally uniform convergence I14 . ~ε (t) preResults about convergence of time-skeleton approximation processes Y sented in Subsection 4.4.1, in particular, relation (4.137), imply that, conditions G8 , G10 , G12 , the following relation holds, for any 0 ≤ t < t + u ≤ T and ~ yε → ~ y0 ∈ Rk as ε → 0, ~ε (t + u) ∈ ·/Y ~ε (t) = ~ Pε (t, ~ yε , t + u, ·) = P{Y yε } ~0 (t + u) ∈ ·/Y ~0 (t) = ~ ⇒ P0 (t, ~ y0 , t + u, ·) = P{Y y0 } as ε → 0.

(6.99)

~ε (t) is a particular case of the Markov process Z ~ ε (t) = The Markov process Y ~ε (t), Xε (t)) with the virtual index component Xε (t) ≡ x0 with a one-point phase (Y space X0 = {x0 }. Relation (7.75) implies that condition J2 (a) (its variant for the model without index component, with space Rk playing the role of sets Z00t = Rk × X0 , t ∈ [0, T ]) holds. Thus condition J2 can be reduced to the following simpler condition: J11 : P0 (t, ~ y , t + u, Yt+u ) = 1, for every ~ y ∈ Yt and 0 ≤ t < t + u ≤ T , where Yt , t ∈ [0, T ] are sets introduced in condition I14 . A typical example is where the sets Yt , t ∈ [0, T ] are empty sets. Then condition J11 obviously holds. Another typical example is where sets Yt , t ∈ [0, T ] are at most finite or countable sets. Then the assumption that measures P0 (t, ~ y , t + u, A), ~ y ∈ Rk , ∈ 0 ≤ t < t + u ≤ T have no atoms implies that condition J11 holds. One more example is where measures P0 (t, y, t + u, A), ~ y ∈ Rk , 0 ≤ t < t + u ≤ T are absolutely continuous with respect to the Lebesgue measure in Rk and sets Yt , t ∈ [0, T ] have zero Lebesgue measure. This assumption also implies that condition J11 holds. There exist many well known models, where distributions P0 (t, ~ y , t+u, A), ~ y∈ Rk , ∈ 0 ≤ t < t + u ≤ T have probability densities with respect to the Lebesgue measure. Condition J11 implies condition J2 to hold with sets Z0t = Yt × X0 , Z00t = Rk × X0 , t ∈ [0, T ]. ~ε,0 = Y ~0 (0) for every ε ∈ (0, ε0 ], condition K17 takes Since the initial state Y the form of the following condition imposed on the initial distributions P0 (A) = ~0 (0) ∈ A}: P{Y K27 : P0 (Y0 ) = 1, where Y0 is the set introduced in conditions I14 .

6.3



Theorem 6.1.3 takes, in this case, the following form. ~0 (t) be the diffusion process given by the stochastic Theorem 6.3.1. Let Y ~ε (t) be, for every ε ∈ (0, ε0 ] the corresponddifferential equation (4.126) and Y ing approximating time-skeleton log-price process given by the stochastic difference equation (4.127) and relation (4.128). Let conditions conditions B10 [¯ γ ], and ¯ D16 [β] hold and, for every i = 1, . . . , k, either 0 = γi = βi or 0 < γi < βi < ∞. Let also conditions G8 , G10 , G12 , I14 , J11 , and K27 hold. Then, the following asymptotic relation holds for any partition Π = h0 = t0 < t1 · · · < tN = T i of the interval [0, T ], (ε) (0) Φ(MΠ,0,T ) → Φ(MΠ,0,T ) as ε → 0. (6.100) By Lemma 4.4.5 and Theorem 4.4.1, conditions G8 , G10 , G12 and conditions B10 [¯ γ ], assumed to hold for some vector parameter γ¯ with non-negative components, imply that there exists ε13 = ε13 (¯ γ ) ∈ (0, ε0 ] such that, for ε ∈ [0, ε13 ] and any partition Π, t = tn ∈ Π and ~ y ∈ Rk , (ε)

~

(ε)

|φt (MΠ,t,T , ~ y )| ≤ E~y,t sup |gε (s, eYε (s) )| < ∞.

(6.101)

t≤s≤T

Theorem 6.1.4 takes, in this case, the following form. ~0 (t) be the diffusion process given by the stochastic Theorem 6.3.2. Let Y ~ε (t) be, for every ε ∈ (0, ε0 ] the corresponding differential equation (4.126) and Y approximating time-skeleton log-price process given by the stochastic difference equation (4.127) and relation (4.128). Let condition B10 [¯ γ ] holds, for some vector parameter γ¯ with non-negative components, and, also, conditions G8 , G10 , G12 , I14 and J11 hold. Then, the following asymptotic relation holds for any partition Π = h0 = t0 < t1 · · · < tN = T i of the interval [0, T ], t = tn ∈ Π and ~ yε → ~ y0 ∈ Yt as ε → 0, (ε) (ε) (0) (0) φt (MΠ,t,T , ~ yε ) → φt (MΠ,t,T , ~ y0 ) as ε → 0. (6.102)

6.3.2 Convergence of martingale-type reward approximations for diffusion-type log-price processes with bounded characteristics

Let, as in Subsection 4.4.3, $\vec{Y}'_0(t) = \vec{Y}_0(t)$ be a diffusion process given by the stochastic differential equation (4.126) and let $\vec{Y}'_\varepsilon(t)$ be, for every $\varepsilon \in (0, \varepsilon_0]$, a martingale-type approximation log-price process given by the stochastic difference equation (4.127), supplemented by the dynamic Markov relation (4.135).

We assume that conditions G$_8$, G$_{10}$, G$_{12}$, G$_{13}$, G$_{14}$, O$_{13}$ and O$_{14}$ hold. Let also $g_\varepsilon(t, e^{\vec{y}})$ be, for every $\varepsilon \in [0, \varepsilon_0]$, a real-valued measurable pay-off function which does not depend on the index argument.


Let us again formulate conditions of convergence for the optimal expected rewards $\Phi(\mathsf{M}^{(\varepsilon)}_{\Pi,0,T})$ and the reward functions $\phi_t(\mathsf{M}^{(\varepsilon)}_{\Pi,t,T}, \vec{y})$, for an arbitrary partition $\Pi = \langle 0 = t_0 < t_1 < \cdots < t_N = T \rangle$ of the interval $[0, T]$.

The above model is also a particular case of the model considered in Section 6.1. In this case, we consider log-price processes and pay-off functions without an index component. Also, $\vec{Y}'_\varepsilon(t)$ is, for every $\varepsilon \in [0, \varepsilon_0]$, a càdlàg Markov process and, moreover, $\vec{Y}'_0(t)$ is a diffusion process.

We assume that condition B$_{11}[\bar{\gamma}]$ (introduced in Subsection 4.3.3) holds for some vector parameter $\bar{\gamma} = (\gamma_1, \ldots, \gamma_k)$ with non-negative components.

By Lemma 4.4.10, conditions G$_8$, G$_{10}$, G$_{12}$, G$_{13}$ and G$_{14}$ imply that condition C$_{24}[\bar{\beta}]$ holds for the processes $\vec{Y}'_\varepsilon(t)$, for any vector parameter $\bar{\beta} = (\beta_1, \ldots, \beta_k)$ with non-negative components. By Lemma 6.1.1, condition C$_{25}[\bar{\beta}]$, which is, as was pointed out above, a simpler form of condition C$_2[\bar{\beta}]$, implies that condition C$_{19}[\bar{\beta}]$ holds, for any partition $\Pi$ of the interval $[0, T]$.

Condition D$_{14}[\bar{\beta}]$ reduces to condition D$_{17}[\bar{\beta}]$, assumed to hold for some vector parameter $\bar{\beta} = (\beta_1, \ldots, \beta_k)$ with non-negative components.

By Lemma 4.4.14 and Theorem 4.4.4, conditions G$_8$, G$_{10}$, G$_{12}$, G$_{13}$ and G$_{14}$ and conditions B$_{10}[\bar{\gamma}]$ and D$_{17}[\bar{\beta}]$, assumed to hold for some vector parameters $\bar{\beta}$ and $\bar{\gamma}$ such that $0 \le \gamma_i \le \beta_i < \infty$, $i = 1, \ldots, k$, imply that there exists $\varepsilon_{21} = \varepsilon_{21}(\bar{\beta}, \bar{\gamma}) \in (0, \varepsilon_0]$ such that, for $\varepsilon \in [0, \varepsilon_{21}]$ and any partition $\Pi$,
$$
|\Phi(\mathsf{M}^{(\varepsilon)}_{\Pi,0,T})| \le \mathsf{E} \sup_{0 \le t \le T} |g_\varepsilon(t, e^{\vec{Y}'_\varepsilon(t)})| < \infty. \tag{6.103}
$$

We impose on the pay-off functions $g_\varepsilon(t, e^{\vec{y}})$ the condition of locally uniform convergence I$_{14}$.

Results about convergence of the approximation processes $\vec{Y}'_\varepsilon(t)$ presented in Subsection 4.4.1, in particular relation (4.136), imply that, under conditions G$_8$, G$_{10}$, G$_{12}$, G$_{13}$, O$_{13}$ and O$_{14}$, the following relation holds, for any $0 \le t < t+u \le T$ and $\vec{y}_\varepsilon \to \vec{y}_0 \in \mathbb{R}^k$ as $\varepsilon \to 0$,
$$
P_\varepsilon(t, \vec{y}_\varepsilon, t+u, \cdot) = \mathsf{P}\{\vec{Y}'_\varepsilon(t+u) \in \cdot \,/\, \vec{Y}'_\varepsilon(t) = \vec{y}_\varepsilon\}
\Rightarrow P_0(t, \vec{y}_0, t+u, \cdot) = \mathsf{P}\{\vec{Y}'_0(t+u) \in \cdot \,/\, \vec{Y}'_0(t) = \vec{y}_0\} \ \text{as} \ \varepsilon \to 0. \tag{6.104}
$$

The Markov process $\vec{Y}'_\varepsilon(t)$ is a particular case of the Markov process $\vec{Z}_\varepsilon(t) = (\vec{Y}_\varepsilon(t), X_\varepsilon(t))$ with the virtual index component $X_\varepsilon(t) \equiv x_0$, which has a one-point phase space $\mathbb{X}_0 = \{x_0\}$. Relation (6.104) implies that condition J$_2$(a) (its variant for the model without an index component, with the space $\mathbb{R}^k$ playing the role of the sets $\mathbb{Z}''_t = \mathbb{R}^k \times \mathbb{X}_0$, $t \in [0, T]$) holds. Thus, condition J$_2$ can again be reduced to the simpler condition J$_{11}$.

Condition K$_{17}$ takes the form of the following condition imposed on the initial distributions $P_\varepsilon(A) = \mathsf{P}\{\vec{Y}'_\varepsilon(0) \in A\}$:


K$_{28}$: (a) $P_\varepsilon(\cdot) \Rightarrow P_0(\cdot)$ as $\varepsilon \to 0$; (b) $P_0(\mathbb{Y}_0) = 1$, where $\mathbb{Y}_0$ is the set introduced in condition I$_{14}$.

Theorem 6.1.3 takes, in this case, the following form.

Theorem 6.3.3. Let $\vec{Y}'_0(t) = \vec{Y}_0(t)$ be the diffusion process given by the stochastic differential equation (4.126) and let $\vec{Y}'_\varepsilon(t)$ be, for every $\varepsilon \in (0, \varepsilon_0]$, the corresponding approximating martingale-type log-price process given by the stochastic difference equation (4.127), supplemented by the dynamic Markov relation (4.135). Let conditions B$_{10}[\bar{\gamma}]$ and D$_{17}[\bar{\beta}]$ hold and, for every $i = 1, \ldots, k$, either $0 = \gamma_i = \beta_i$ or $0 < \gamma_i < \beta_i < \infty$. Let also conditions G$_8$, G$_{10}$, G$_{12}$, G$_{13}$, G$_{14}$, O$_{13}$, O$_{14}$, I$_{14}$, J$_{11}$, and K$_{28}$ hold. Then, the following asymptotic relation holds for any partition $\Pi = \langle 0 = t_0 < t_1 < \cdots < t_N = T \rangle$ of the interval $[0, T]$,
$$
\Phi(\mathsf{M}^{(\varepsilon)}_{\Pi,0,T}) \to \Phi(\mathsf{M}^{(0)}_{\Pi,0,T}) \ \text{as} \ \varepsilon \to 0. \tag{6.105}
$$

By Lemma 4.4.12 and Theorem 4.4.2, conditions G$_8$, G$_{10}$, G$_{12}$, G$_{13}$ and G$_{14}$ and condition B$_{10}[\bar{\gamma}]$, assumed to hold for some vector parameter $\bar{\gamma}$ with non-negative components, imply that there exists $\varepsilon_{19} = \varepsilon_{19}(\bar{\gamma}) \in (0, \varepsilon_0]$ such that, for $\varepsilon \in [0, \varepsilon_{19}]$ and any partition $\Pi$, $t = t_n \in \Pi$ and $\vec{y} \in \mathbb{R}^k$,
$$
|\phi_t(\mathsf{M}^{(\varepsilon)}_{\Pi,t,T}, \vec{y})| \le \mathsf{E}_{\vec{y},t} \sup_{t \le s \le T} |g_\varepsilon(s, e^{\vec{Y}'_\varepsilon(s)})| < \infty. \tag{6.106}
$$

Theorem 6.1.4 takes, in this case, the following form.

Theorem 6.3.4. Let $\vec{Y}'_0(t) = \vec{Y}_0(t)$ be the diffusion process given by the stochastic differential equation (4.126) and let $\vec{Y}'_\varepsilon(t)$ be, for every $\varepsilon \in (0, \varepsilon_0]$, the corresponding approximating martingale-type log-price process given by the stochastic difference equation (4.127), supplemented by the dynamic Markov relation (4.135). Let condition B$_3[\bar{\gamma}]$ hold, for some vector parameter $\bar{\gamma}$ with non-negative components, and let also conditions G$_8$, G$_{10}$, G$_{12}$, G$_{13}$, G$_{14}$, O$_{13}$, O$_{14}$, I$_{14}$ and J$_{11}$ hold. Then, the following asymptotic relation holds for any partition $\Pi = \langle 0 = t_0 < t_1 < \cdots < t_N = T \rangle$ of the interval $[0, T]$, $t = t_n \in \Pi$ and $\vec{y}_\varepsilon \to \vec{y}_0 \in \mathbb{Y}_t$ as $\varepsilon \to 0$,
$$
\phi_t(\mathsf{M}^{(\varepsilon)}_{\Pi,t,T}, \vec{y}_\varepsilon) \to \phi_t(\mathsf{M}^{(0)}_{\Pi,t,T}, \vec{y}_0) \ \text{as} \ \varepsilon \to 0. \tag{6.107}
$$

6.3.3 Convergence of trinomial-tree reward approximations for univariate diffusion log-price processes

Let, as in Subsection 4.4.4, $Y''_0(t)$ be a univariate diffusion process given by the stochastic differential equation (4.180) and let $Y''_\varepsilon(t)$ be, for every $\varepsilon \in (0, \varepsilon_0]$, a martingale-type time-skeleton approximation log-price process given by the stochastic difference equation (4.181) and relation (4.183).


We assume that conditions G$_{15}$, G$_{17}$, G$_{18}$, G$_{19}$ and relations (4.188) and (4.190) hold. Let also $g_\varepsilon(t, e^{y})$ be, for every $\varepsilon \in [0, \varepsilon_0]$, a real-valued measurable pay-off function which does not depend on the index argument.

Let us again formulate conditions of convergence for the optimal expected rewards $\Phi(\mathsf{M}^{(\varepsilon)}_{\Pi,0,T})$ and the reward functions $\phi_t(\mathsf{M}^{(\varepsilon)}_{\Pi,t,T}, y)$, for an arbitrary partition $\Pi = \langle 0 = t_0 < t_1 < \cdots < t_N = T \rangle$ of the interval $[0, T]$.

The above model is a particular case of the model considered in Section 6.1. In this case, we consider log-price processes and pay-off functions without an index component. Also, $Y''_\varepsilon(t)$ is, for every $\varepsilon \in [0, \varepsilon_0]$, a càdlàg Markov process and, moreover, $Y''_0(t)$ is a diffusion process.

We assume that condition B$_{11}[\gamma]$ (introduced in Subsection 4.3.4) holds for some parameter $\gamma \ge 0$.

By Lemma 4.4.17, conditions G$_{15}$, G$_{17}$, G$_{18}$, G$_{19}$ and relations (4.188) and (4.190) imply that condition C$_{25}[\bar{\beta}]$ (its univariate variant) holds for the processes $Y''_\varepsilon(t)$, for any $\bar{\beta} = \beta \ge 0$. By Lemma 6.1.1, condition C$_{25}[\bar{\beta}]$ (its univariate variant), which is, as was pointed out above, a simpler form of condition C$_2[\bar{\beta}]$ (its univariate variant), implies that condition C$_{19}[\bar{\beta}]$ (its univariate variant) holds, for any partition $\Pi$ of the interval $[0, T]$.

Condition D$_{14}[\bar{\beta}]$ (its univariate variant) reduces to condition D$_{18}[\beta]$, assumed to hold for some $\beta \ge 0$.

By Lemma 4.4.21 and Theorem 4.4.6, conditions G$_{15}$, G$_{17}$, G$_{18}$, G$_{19}$, (4.188), (4.190) and conditions B$_{11}[\gamma]$, D$_{18}[\beta]$, assumed to hold for some $0 \le \gamma \le \beta < \infty$, imply that there exists $\varepsilon_{26} = \varepsilon_{26}(\beta, \gamma) \in (0, \varepsilon_0]$ such that, for $\varepsilon \in [0, \varepsilon_{26}]$ and any partition $\Pi$,
$$
|\Phi(\mathsf{M}^{(\varepsilon)}_{\Pi,0,T})| \le \mathsf{E} \sup_{0 \le t \le T} |g_\varepsilon(t, e^{Y''_\varepsilon(t)})| < \infty. \tag{6.108}
$$

We impose on the pay-off functions $g_\varepsilon(t, e^{y})$ the following condition of locally uniform convergence, which is a univariate variant of condition I$_{14}$:

I$_{18}$: There exist sets $\mathbb{Y}_t \in \mathrm{B}_{\mathbb{R}^1}$, $t \in [0, T]$, such that $g_\varepsilon(t, e^{y_\varepsilon}) \to g_0(t, e^{y_0})$ as $\varepsilon \to 0$ for any $y_\varepsilon \to y_0 \in \mathbb{Y}_t$ and $t \in [0, T]$.

The process $Y''_\varepsilon(t)$ is a particular case of the Markov process $\vec{Z}_\varepsilon(t) = (\vec{Y}_\varepsilon(t), X_\varepsilon(t))$. It has a univariate log-price component and the virtual index component $X_\varepsilon(t) \equiv x_0$ with a one-point phase space $\mathbb{X}_0 = \{x_0\}$.

Results about convergence of the time-skeleton approximation processes presented in Subsection 4.4.1, in particular relation (4.136), imply that, under conditions G$_{15}$, G$_{17}$, G$_{18}$, G$_{19}$, (4.188) and (4.190), the following relation holds, for any $0 \le t < t+u \le T$ and $y_\varepsilon \to y_0 \in \mathbb{R}^1$ as $\varepsilon \to 0$,
$$
P_\varepsilon(t, y_\varepsilon, t+u, \cdot) = \mathsf{P}\{Y''_\varepsilon(t+u) \in \cdot \,/\, Y''_\varepsilon(t) = y_\varepsilon\}
\Rightarrow P_0(t, y_0, t+u, \cdot) = \mathsf{P}\{Y''_0(t+u) \in \cdot \,/\, Y''_0(t) = y_0\} \ \text{as} \ \varepsilon \to 0. \tag{6.109}
$$


Relation (6.109) implies that condition J$_2$(a) (its univariate variant for the model without an index component, with the space $\mathbb{R}^1$ playing the role of the sets $\mathbb{Z}''_t$, $t \in [0, T]$) holds. Thus, condition J$_2$ can again be reduced to the following simpler condition, which is a univariate variant of condition J$_{11}$:

J$_{12}$: $P_0(t, y, t+u, \mathbb{Y}_{t+u}) = 1$, for every $y \in \mathbb{Y}_t$ and $0 \le t < t+u \le T$, where $\mathbb{Y}_t$, $t \in [0, T]$ are the sets introduced in condition I$_{18}$.

Condition K$_{17}$ takes the form of the following condition imposed on the initial distributions $P_\varepsilon(A) = \mathsf{P}\{Y''_\varepsilon(0) \in A\}$:

K$_{29}$: (a) $P_\varepsilon(\cdot) \Rightarrow P_0(\cdot)$ as $\varepsilon \to 0$; (b) $P_0(\mathbb{Y}_0) = 1$, where $\mathbb{Y}_0$ is the set introduced in condition I$_{18}$.

Theorem 6.1.3 takes, in this case, the following form.

Theorem 6.3.5. Let $Y''_0(t)$ be the diffusion process given by the stochastic differential equation (4.180) and let $Y''_\varepsilon(t)$ be, for every $\varepsilon \in (0, \varepsilon_0]$, the corresponding approximating trinomial-tree log-price process given by the stochastic difference equation (4.181) and relation (4.183). Let conditions B$_{11}[\gamma]$ and D$_{18}[\beta]$ hold and either $\gamma = \beta = 0$ or $0 < \gamma < \beta < \infty$. Let also conditions G$_{15}$, G$_{17}$, G$_{18}$, G$_{19}$, (4.188), (4.190), I$_{18}$, J$_{12}$, and K$_{29}$ hold. Then, the following asymptotic relation holds for any partition $\Pi = \langle 0 = t_0 < t_1 < \cdots < t_N = T \rangle$ of the interval $[0, T]$,
$$
\Phi(\mathsf{M}^{(\varepsilon)}_{\Pi,0,T}) \to \Phi(\mathsf{M}^{(0)}_{\Pi,0,T}) \ \text{as} \ \varepsilon \to 0. \tag{6.110}
$$

By Lemma 4.4.17 and Theorem 4.4.5, conditions G$_{15}$, G$_{17}$, G$_{18}$, G$_{19}$, (4.188), (4.190) and condition B$_{11}[\gamma]$, assumed to hold for some $0 \le \gamma < \infty$, imply that there exists $\varepsilon_{24} = \varepsilon_{24}(\gamma) \in (0, \varepsilon_0]$ such that, for $\varepsilon \in [0, \varepsilon_{24}]$ and any partition $\Pi$, $t = t_n \in \Pi$ and $y \in \mathbb{R}^1$,
$$
|\phi_t(\mathsf{M}^{(\varepsilon)}_{\Pi,t,T}, y)| \le \mathsf{E}_{y,t} \sup_{t \le s \le T} |g_\varepsilon(s, e^{Y''_\varepsilon(s)})| < \infty. \tag{6.111}
$$

Theorem 6.1.4 takes, in this case, the following form.

Theorem 6.3.6. Let $Y''_0(t)$ be the diffusion process given by the stochastic differential equation (4.180) and let $Y''_\varepsilon(t)$ be, for every $\varepsilon \in (0, \varepsilon_0]$, the corresponding approximating trinomial-tree log-price process given by the stochastic difference equation (4.181) and relation (4.183). Let condition B$_4[\gamma]$ hold, for some parameter $\gamma \ge 0$, and let also conditions G$_{15}$, G$_{17}$, G$_{18}$, G$_{19}$, (4.188), (4.190), I$_{18}$ and J$_{12}$ hold. Then, the following asymptotic relation holds for any partition $\Pi = \langle 0 = t_0 < t_1 < \cdots < t_N = T \rangle$ of the interval $[0, T]$, $t = t_n \in \Pi$ and $y_\varepsilon \to y_0 \in \mathbb{Y}_t$ as $\varepsilon \to 0$,
$$
\phi_t(\mathsf{M}^{(\varepsilon)}_{\Pi,t,T}, y_\varepsilon) \to \phi_t(\mathsf{M}^{(0)}_{\Pi,t,T}, y_0) \ \text{as} \ \varepsilon \to 0. \tag{6.112}
$$
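For a univariate trinomial-tree approximation, the reward functions appearing in Theorems 6.3.5 and 6.3.6 can be computed by backward induction over the tree nodes. The following sketch is a generic illustration of that backward-induction step for an American-type pay-off; the jump size delta, the branch probabilities and the pay-off function g are hypothetical inputs, not the specific quantities of equations (4.181)–(4.183).

```python
import numpy as np

def trinomial_tree_reward(y0, delta, p_up, p_mid, p_down, g, times):
    """Backward induction for an American-type option reward on a
    recombining trinomial tree for the log-price.

    y0      : initial log-price
    delta   : spatial step of the tree (hypothetical constant jump size)
    p_up, p_mid, p_down : branch probabilities, summing to 1
    g       : pay-off function g(t, y) of time and log-price
    times   : partition 0 = t_0 < ... < t_N = T
    Returns the approximate optimal expected reward at (t_0, y0).
    """
    n_steps = len(times) - 1
    # Log-price levels reachable after n_steps moves of size delta.
    levels = y0 + delta * np.arange(-n_steps, n_steps + 1)
    rewards = np.array([g(times[-1], y) for y in levels])
    for n in range(n_steps - 1, -1, -1):
        levels = y0 + delta * np.arange(-n, n + 1)
        cont = p_down * rewards[:-2] + p_mid * rewards[1:-1] + p_up * rewards[2:]
        # American feature: maximum of immediate pay-off and continuation value.
        rewards = np.maximum([g(times[n], y) for y in levels], cont)
    return rewards[0]
```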


6.3.4 Time-space-skeleton reward approximations for diffusion log-price processes

Let us consider the model where a continuous log-price process $\vec{Y}_0(t)$, $t \in [0, T]$ does not depend on the parameter $\varepsilon$, has the phase space $\mathbb{R}^k$ and satisfies the stochastic differential equation (4.126).

We assume that conditions G$_8$–G$_{10}$ hold. This guarantees that $\vec{Y}_0(t)$ is the unique solution of the stochastic differential equation (4.126), which is a diffusion process adapted to the filtration $\mathcal{F}_t = \sigma[\vec{Y}_0(0), \vec{W}(s), 0 \le s \le t]$, $t \in [0, T]$.

Let $P_0(t, \vec{y}, t+u, A) = \mathsf{P}\{\vec{Y}_0(t+u) \in A / \vec{Y}_0(t) = \vec{y}\}$, $\vec{y} \in \mathbb{R}^k$, $A \in \mathrm{B}_{\mathbb{R}^k}$, $0 \le t \le t+u \le T$, and $P_0(A) = \mathsf{P}\{\vec{Y}_0(0) \in A\}$, $A \in \mathrm{B}_{\mathbb{R}^k}$ be, respectively, the transition probabilities and the initial distribution of the diffusion process $\vec{Y}_0(t)$, $t \in [0, T]$.

Also, we assume that the pay-off function $g_0(t, e^{\vec{y}})$ is a measurable function acting from the space $[0, T] \times \mathbb{R}^k$ to $\mathbb{R}^1$, which does not depend on the parameter $\varepsilon$.

Let us construct space-skeleton approximations with a fixed skeleton structure for the optimal expected reward $\Phi(\mathsf{M}^{(0)}_{\Pi,0,T})$ and the reward function $\phi_t(\mathsf{M}^{(0)}_{\Pi,t,T}, \vec{y})$, for an arbitrary partition $\Pi = \langle 0 = t_0 < t_1 < \cdots < t_N = T \rangle$ of the interval $[0, T]$.

In fact, we should reformulate, in terms of the pay-off function $g_0(t, e^{\vec{y}})$ and the discrete time log-price processes $\vec{Y}_0(t_n)$, the corresponding results given in Section 6.1.

Let us use the notation $\vec{Y}_{\Pi,0,n} = \vec{Y}_0(t_n)$, $n = 0, \ldots, N$ for the Markov chain $\vec{Y}_0(t_n)$, in order to point out explicitly the partition $\Pi = \langle 0 = t_0 < \cdots < t_N = T \rangle$ used to construct this Markov chain. This Markov chain has the phase space $\mathbb{R}^k$ and transition probabilities
$$
P_{\Pi,0,n}(\vec{y}, A) = \mathsf{P}\{\vec{Y}_{\Pi,0,n} \in A / \vec{Y}_{\Pi,0,n-1} = \vec{y}\}
= \mathsf{P}\{\vec{Y}_0(t_n) \in A / \vec{Y}_0(t_{n-1}) = \vec{y}\} = P_0(t_{n-1}, \vec{y}, t_n, A), \tag{6.113}
$$
and the initial distribution
$$
P_{\Pi,0}(A) = \mathsf{P}\{\vec{Y}_{\Pi,0,0} \in A\} = \mathsf{P}\{\vec{Y}_0(t_0) \in A\} = P_0(A). \tag{6.114}
$$

Let us construct, for every $\varepsilon \in (0, \varepsilon_0]$, an approximating space-skeleton Markov chain $\vec{Y}_{\Pi,\varepsilon,n}$, $n = 0, \ldots, N$ with the same phase space $\mathbb{R}^k$ and with transition probabilities $P_{\Pi,\varepsilon,n}(\vec{z}, A) = \mathsf{P}\{\vec{Y}_{\Pi,\varepsilon,n} \in A / \vec{Y}_{\Pi,\varepsilon,n-1} = \vec{y}\}$ and initial distribution $P_{\Pi,\varepsilon}(A) = \mathsf{P}\{\vec{Y}_{\Pi,\varepsilon,0} \in A\}$ fitted in a special way to the transition probabilities and the initial distribution of the Markov chain $\vec{Y}_{\Pi,0,n}$.

Let $m^-_{\varepsilon,t,j} \le m^+_{\varepsilon,t,j}$, $j = 1, \ldots, k$, $t \in [0, T]$ be integer numbers. Let us define the sets of vector indices $\tilde{\mathbb{L}}_{\varepsilon,t}$, for $t \in [0, T]$,
$$
\tilde{\mathbb{L}}_{\varepsilon,t} = \{\bar{l} = (l_1, \ldots, l_k): \ l_j = m^-_{\varepsilon,t,j}, \ldots, m^+_{\varepsilon,t,j}, \ j = 1, \ldots, k\}. \tag{6.115}
$$
Let us also choose $\delta_{\varepsilon,t,i} > 0$, $\lambda_{\varepsilon,t,i} \in \mathbb{R}^1$, $i = 1, \ldots, k$, $t \in [0, T]$.


First, the skeleton intervals $I_{\varepsilon,t,i,l_i}$ should be constructed, for $l_i = m^-_{\varepsilon,t,i}, \ldots, m^+_{\varepsilon,t,i}$, $i = 1, \ldots, k$, $t \in [0, T]$,
$$
I_{\varepsilon,t,i,l_i} =
\begin{cases}
(-\infty, \ \delta_{\varepsilon,t,i}(m^-_{\varepsilon,t,i} + \tfrac{1}{2})] + \lambda_{\varepsilon,t,i} & \text{if } l_i = m^-_{\varepsilon,t,i}, \\
(\delta_{\varepsilon,t,i}(l_i - \tfrac{1}{2}), \ \delta_{\varepsilon,t,i}(l_i + \tfrac{1}{2})] + \lambda_{\varepsilon,t,i} & \text{if } m^-_{\varepsilon,t,i} < l_i < m^+_{\varepsilon,t,i}, \\
(\delta_{\varepsilon,t,i}(m^+_{\varepsilon,t,i} - \tfrac{1}{2}), \ \infty) + \lambda_{\varepsilon,t,i} & \text{if } l_i = m^+_{\varepsilon,t,i}.
\end{cases} \tag{6.116}
$$
Then, the skeleton points $y_{\varepsilon,t,i,l_i} = l_i \delta_{\varepsilon,t,i} + \lambda_{\varepsilon,t,i} \in I_{\varepsilon,t,i,l_i}$, for $l_i = m^-_{\varepsilon,t,i}, \ldots, m^+_{\varepsilon,t,i}$, $i = 1, \ldots, k$, $t \in [0, T]$, should be defined.

Second, the skeleton sets $\tilde{A}_{\varepsilon,t,\bar{l}} = I_{\varepsilon,t,1,l_1} \times \cdots \times I_{\varepsilon,t,k,l_k}$ and the vector skeleton points $\vec{y}_{\varepsilon,t,\bar{l}} = (y_{\varepsilon,t,1,l_1}, \ldots, y_{\varepsilon,t,k,l_k}) \in \tilde{A}_{\varepsilon,t,\bar{l}}$ should be defined, for $\bar{l} = (l_1, \ldots, l_k) \in \tilde{\mathbb{L}}_{\varepsilon,t}$, $t \in [0, T]$.

We construct, for every $\varepsilon \in (0, \varepsilon_0]$, an approximating space-skeleton atomic Markov chain $\vec{Y}_{\Pi,\varepsilon,n}$ as a Markov chain with the phase space $\mathbb{R}^k$, the initial distribution $P_{\Pi,\varepsilon}(A)$ and the transition probabilities $P_{\Pi,\varepsilon,n}(\vec{y}, A)$ satisfying the following skeleton fitting conditions, which are simplifications of the fitting conditions L$_1$ and M$_1$:

L$_{10}$: For every $\varepsilon \in (0, \varepsilon_0]$, the transition probabilities $P_{\Pi,\varepsilon,n}(\vec{z}, \{\vec{y}_{\varepsilon,t_n,\bar{l}}\}) = P_{\Pi,0,n}(\vec{y}_{\varepsilon,t_{n-1},\bar{l}'}, \tilde{A}_{\varepsilon,t_n,\bar{l}})$, for $\vec{z} \in \tilde{A}_{\varepsilon,t_{n-1},\bar{l}'}$, $\bar{l}' \in \tilde{\mathbb{L}}_{\varepsilon,t_{n-1}}$, $\bar{l} \in \tilde{\mathbb{L}}_{\varepsilon,t_n}$, $n = 1, 2, \ldots, N$,

and

M$_{10}$: For every $\varepsilon \in (0, \varepsilon_0]$, the initial probabilities $P_{\Pi,\varepsilon}(\{\vec{y}_{\varepsilon,t_0,\bar{l}}\}) = P_{\Pi,0}(\tilde{A}_{\varepsilon,t_0,\bar{l}})$, $\bar{l} \in \tilde{\mathbb{L}}_{\varepsilon,t_0}$.

It is useful to note that the space-skeleton structure described above and used to construct the space-skeleton Markov chains $\vec{Y}_{\Pi,\varepsilon,n}$ is determined by the set of elements $\tilde{\Xi}_{\Pi,\varepsilon} = \langle \vec{y}_{\varepsilon,t_n,\bar{l}}, \tilde{A}_{\varepsilon,t_n,\bar{l}}, \bar{l} \in \tilde{\mathbb{L}}_{\varepsilon,t_n}, n = 0, \ldots, N \rangle$.

Let us define the skeleton functions $h_{\varepsilon,t,i}(y)$, $y \in \mathbb{R}^1$, for $\varepsilon \in (0, \varepsilon_0]$ and $i = 1, \ldots, k$, $t \in [0, T]$,
$$
h_{\varepsilon,t,i}(y) =
\begin{cases}
\delta_{\varepsilon,t,i}\, m^-_{\varepsilon,t,i} + \lambda_{\varepsilon,t,i} & \text{if } y \le \delta_{\varepsilon,t,i}(m^-_{\varepsilon,t,i} + \tfrac{1}{2}) + \lambda_{\varepsilon,t,i}, \\
\delta_{\varepsilon,t,i}\, l + \lambda_{\varepsilon,t,i} & \text{if } \delta_{\varepsilon,t,i}(l - \tfrac{1}{2}) + \lambda_{\varepsilon,t,i} < y \le \delta_{\varepsilon,t,i}(l + \tfrac{1}{2}) + \lambda_{\varepsilon,t,i}, \ m^-_{\varepsilon,t,i} < l < m^+_{\varepsilon,t,i}, \\
\delta_{\varepsilon,t,i}\, m^+_{\varepsilon,t,i} + \lambda_{\varepsilon,t,i} & \text{if } y > \delta_{\varepsilon,t,i}(m^+_{\varepsilon,t,i} - \tfrac{1}{2}) + \lambda_{\varepsilon,t,i}.
\end{cases} \tag{6.117}
$$
Finally, let us define the vector skeleton functions $\tilde{h}_{\varepsilon,t}(\vec{y})$, $\vec{y} = (y_1, \ldots, y_k) \in \mathbb{R}^k$, for $\varepsilon \in (0, \varepsilon_0]$ and $t \in [0, T]$,
$$
\tilde{h}_{\varepsilon,t}(\vec{y}) = (h_{\varepsilon,t,1}(y_1), \ldots, h_{\varepsilon,t,k}(y_k)). \tag{6.118}
$$
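In implementation terms, the skeleton function (6.117) is a clipped rounding of each coordinate onto the grid of skeleton points. The following minimal sketch illustrates this quantization map; the parameter names are chosen for readability and boundary ties are handled by numpy rounding, which differs only on a null set from the half-open convention of (6.117).

```python
import numpy as np

def skeleton_function(y, delta, lam, m_minus, m_plus):
    """Clipped rounding of a scalar log-price value onto the skeleton grid
    {l*delta + lam : m_minus <= l <= m_plus}, in the spirit of (6.117)."""
    l = np.clip(np.round((y - lam) / delta), m_minus, m_plus)
    return l * delta + lam

def vector_skeleton_function(y_vec, deltas, lams, m_minus, m_plus):
    """Coordinate-wise skeleton map, the analogue of (6.118)."""
    return np.array([
        skeleton_function(y, d, lam, lo, hi)
        for y, d, lam, lo, hi in zip(y_vec, deltas, lams, m_minus, m_plus)
    ])
```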


The transition probabilities $P_{\Pi,\varepsilon,n}(\vec{y}, A)$, $\vec{y} \in \mathbb{R}^k$, $A \in \mathrm{B}_{\mathbb{R}^k}$, $n = 1, 2, \ldots, N$ take the following form, for every $\varepsilon \in (0, \varepsilon_0]$,
$$
P_{\Pi,\varepsilon,n}(\vec{y}, A) = \sum_{\vec{y}_{\varepsilon,t_n,\bar{l}} \in A} P_{\Pi,0,n}(\tilde{h}_{\varepsilon,t_{n-1}}(\vec{y}), \tilde{A}_{\varepsilon,t_n,\bar{l}})
= \mathsf{P}\{\tilde{h}_{\varepsilon,t_n}(\vec{Y}_{\Pi,0,n}) \in A \,/\, \vec{Y}_{\Pi,0,n-1} = \tilde{h}_{\varepsilon,t_{n-1}}(\vec{y})\}
= \sum_{\vec{y}_{\varepsilon,t_n,\bar{l}} \in A} P_0(t_{n-1}, \tilde{h}_{\varepsilon,t_{n-1}}(\vec{y}), t_n, \tilde{A}_{\varepsilon,t_n,\bar{l}}). \tag{6.119}
$$
As far as the initial distribution $P_{\Pi,\varepsilon,0}(A) = \mathsf{P}\{\vec{Y}_{\Pi,\varepsilon,0} \in A\}$, $A \in \mathrm{B}_{\mathbb{R}^k}$ is concerned, it takes the following form, for every $\varepsilon \in (0, \varepsilon_0]$,
$$
P_{\Pi,\varepsilon,0}(A) = \sum_{\vec{y}_{\varepsilon,t_0,\bar{l}} \in A} P_{\Pi,0,0}(\tilde{A}_{\varepsilon,t_0,\bar{l}}) = \mathsf{P}\{\tilde{h}_{\varepsilon,t_0}(\vec{Y}_0(0)) \in A\}. \tag{6.120}
$$
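When the limiting transition probabilities admit a transition density, the cube probabilities appearing in (6.119) can be evaluated coordinate-wise. The sketch below assembles the fitted transition matrix between skeleton points for a one-dimensional example in which the transition kernel is approximated by a Gaussian law with hypothetical drift and volatility functions; this Gaussian kernel is an assumption made for illustration, not the exact kernel of the diffusion (4.126).

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(x, mean, std):
    # Gaussian CDF, used to integrate the approximating kernel over skeleton intervals.
    return 0.5 * (1.0 + erf((x - mean) / (std * sqrt(2.0))))

def fitted_transition_matrix(points, edges, t_prev, t_next, mu, sigma):
    """Transition matrix of a one-dimensional space-skeleton chain.

    points : skeleton points (assumed common to t_prev and t_next for simplicity)
    edges  : interval endpoints of the skeleton sets, len(edges) == len(points) + 1,
             with edges[0] = -inf and edges[-1] = +inf
    mu, sigma : hypothetical drift and volatility functions of (t, y)
    Entry (i, j) approximates the cube probability in (6.119), with the
    kernel replaced by a Gaussian approximation.
    """
    dt = t_next - t_prev
    matrix = np.zeros((len(points), len(points)))
    for i, y in enumerate(points):
        mean = y + mu(t_prev, y) * dt
        std = sigma(t_prev, y) * np.sqrt(dt)
        cdf = np.array([normal_cdf(e, mean, std) for e in edges])
        matrix[i, :] = np.diff(cdf)   # probability assigned to each skeleton interval
    return matrix
```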

The quantity $P_{\Pi,\varepsilon,n}(\vec{y}, A)$, defined in relation (6.119), is a measurable function in $\vec{y}$ and a probability measure as a function of $A$; thus, it can serve as a transition probability for a Markov chain. Also, the quantity $P_{\Pi,\varepsilon,0}(A)$, defined in relation (6.120), is a probability measure and, thus, it can serve as an initial distribution for a Markov chain.

The role of the approximating log-price process $\vec{Y}_{\Pi,\varepsilon,n}$ can be played by any Markov chain with the phase space $\mathbb{R}^k$, the transition probabilities $P_{\Pi,\varepsilon,n}(\vec{z}, A)$ defined in relation (6.119), and the initial distribution $P_{\Pi,\varepsilon,0}(A)$ defined in relation (6.120). Obviously, the transition probabilities $P_{\Pi,\varepsilon,n}(\vec{z}, A)$ and the initial distribution $P_{\Pi,\varepsilon,0}(A)$ given by relations (6.119) and (6.120) satisfy conditions L$_{10}$ and M$_{10}$.

Let us denote by $\mathsf{M}^{(\varepsilon)}_{\Pi,n,N}$ the class of all Markov moments $\tau_{\varepsilon,n}$ for the discrete time Markov log-price process $\vec{Y}_{\Pi,\varepsilon,r}$, $r = 0, 1, \ldots, N$, which (a) take values $n, n+1, \ldots, N$, and (b) satisfy $\{\tau_{\varepsilon,n} = m\} \in \sigma[\vec{Y}_{\Pi,\varepsilon,r}, n \le r \le m]$, $n \le m \le N$.

Let $\phi_{\Pi,\varepsilon,n}(\vec{y})$ be the corresponding reward function for the American option with the pay-off function $g_0(t_n, e^{\vec{y}})$, defined by the following relation, for $\vec{y} \in \mathbb{R}^k$, $n = 0, 1, \ldots, N$,
$$
\phi_{\Pi,\varepsilon,n}(\vec{y}) = \sup_{\tau_{\varepsilon,n} \in \mathsf{M}^{(\varepsilon)}_{\Pi,n,N}} \mathsf{E}_{\vec{y},n}\, g_0\big(t_{\tau_{\varepsilon,n}}, e^{\vec{Y}_{\Pi,\varepsilon,\tau_{\varepsilon,n}}}\big). \tag{6.121}
$$
Let also $\Phi_{\Pi,\varepsilon} = \Phi_\varepsilon(\mathsf{M}^{(\varepsilon)}_{\Pi,0,N})$ be the corresponding optimal expected reward, defined by the following relation,
$$
\Phi_{\Pi,\varepsilon} = \sup_{\tau_{\varepsilon,0} \in \mathsf{M}^{(\varepsilon)}_{\Pi,0,N}} \mathsf{E}\, g_0\big(t_{\tau_{\varepsilon,0}}, e^{\vec{Y}_{\Pi,\varepsilon,\tau_{\varepsilon,0}}}\big). \tag{6.122}
$$

Since the atomic Markov chains $\vec{Y}_{\Pi,\varepsilon,n}$ have initial distributions and transition probabilities concentrated on sets with finite numbers of points, the reward functionals are finite, i.e., $|\phi_{\Pi,\varepsilon,n}(\vec{y})| < \infty$, $\vec{y} \in \mathbb{R}^k$, $n = 0, 1, \ldots, N$, and $|\Phi_{\Pi,\varepsilon}| < \infty$, for every $\varepsilon \in (0, \varepsilon_0]$.


The following lemma is a variant of Lemma 6.1.2.

Lemma 6.3.1. Let, for every $\varepsilon \in (0, \varepsilon_0]$, the log-price process $\vec{Y}_{\Pi,\varepsilon,n}$ be a space-skeleton Markov chain with transition probabilities and initial distribution given in relations (6.119) and (6.120) and satisfying the fitting conditions L$_{10}$ and M$_{10}$. Then the reward functions $\phi_{\Pi,\varepsilon,n}(\vec{y})$ and $\phi_{\Pi,\varepsilon,n+r}(\vec{y}_{\varepsilon,t_{n+r},\bar{l}})$, $\bar{l} \in \tilde{\mathbb{L}}_{\varepsilon,t_{n+r}}$, $r = 1, \ldots, N-n$ are, for every $\vec{y} \in \tilde{A}_{\varepsilon,t_n,\bar{l}''}$, $\bar{l}'' \in \tilde{\mathbb{L}}_{\varepsilon,t_n}$, $n = 0, \ldots, N$, the unique solution of the following finite recurrence system of linear equations,
$$
\begin{cases}
\phi_{\Pi,\varepsilon,N}(\vec{y}_{\varepsilon,t_N,\bar{l}}) = g_0(t_N, e^{\vec{y}_{\varepsilon,t_N,\bar{l}}}), \quad \bar{l} \in \tilde{\mathbb{L}}_{\varepsilon,t_N}, \\[1ex]
\phi_{\Pi,\varepsilon,n+r}(\vec{y}_{\varepsilon,t_{n+r},\bar{l}}) = \max\Big( g_0(t_{n+r}, e^{\vec{y}_{\varepsilon,t_{n+r},\bar{l}}}), \\
\qquad \sum_{\bar{l}' \in \tilde{\mathbb{L}}_{\varepsilon,t_{n+r+1}}} \phi_{\Pi,\varepsilon,n+r+1}(\vec{y}_{\varepsilon,t_{n+r+1},\bar{l}'})\, P_{\Pi,0,n+r+1}(\vec{y}_{\varepsilon,t_{n+r},\bar{l}}, \tilde{A}_{\varepsilon,t_{n+r+1},\bar{l}'}) \Big), \\
\qquad\qquad \bar{l} \in \tilde{\mathbb{L}}_{\varepsilon,t_{n+r}}, \ r = N-n-1, \ldots, 1, \\[1ex]
\phi_{\Pi,\varepsilon,n}(\vec{y}) = \max\Big( g_0(t_n, e^{\vec{y}}), \\
\qquad \sum_{\bar{l}' \in \tilde{\mathbb{L}}_{\varepsilon,t_{n+1}}} \phi_{\Pi,\varepsilon,n+1}(\vec{y}_{\varepsilon,t_{n+1},\bar{l}'})\, P_{\Pi,0,n+1}(\vec{y}_{\varepsilon,t_n,\bar{l}''}, \tilde{A}_{\varepsilon,t_{n+1},\bar{l}'}) \Big),
\end{cases} \tag{6.123}
$$
while the optimal expected reward $\Phi_{\Pi,\varepsilon} = \Phi_\varepsilon(\mathsf{M}^{(\varepsilon)}_{\Pi,0,N})$ is given by the following formula,
$$
\Phi_{\Pi,\varepsilon} = \sum_{\bar{l} \in \tilde{\mathbb{L}}_{\varepsilon,t_0}} \phi_{\Pi,\varepsilon,0}(\vec{y}_{\varepsilon,t_0,\bar{l}})\, P_{\Pi,\varepsilon,0}(\tilde{A}_{\varepsilon,t_0,\bar{l}}). \tag{6.124}
$$

The recurrence algorithm presented in Lemma 6.3.1 is computationally very effective, even for large values of the parameters $N$ and $m^+_{\varepsilon,t_r,j} - m^-_{\varepsilon,t_r,j}$, $j = 1, \ldots, k$, $r = n+1, \ldots, N$, which determine the maximal number of equations $N_{\varepsilon,n,N} = \sum_{r=n+1}^{N} \prod_{j=1}^{k} (m^+_{\varepsilon,t_r,j} - m^-_{\varepsilon,t_r,j} + 1)$ in system (6.123). However, it should be noted that $N_{\varepsilon,n,N}$ depends in a multiplicative form on the dimension parameter $k$ and, thus, is sensitive to the value of this parameter.

The complexity of computing the probabilities $P_{\Pi,0,n+r+1}(\vec{y}_{\varepsilon,t_{n+r},\bar{l}}, \tilde{A}_{\varepsilon,t_{n+r+1},\bar{l}'})$ and the values of the pay-off function $g_0(t_{n+r}, e^{\vec{y}_{\varepsilon,t_{n+r},\bar{l}}})$ entering system (6.123) also plays an important role.

Note that the probabilities $P_{\Pi,0,n+r+1}(\vec{y}_{\varepsilon,t_{n+r},\bar{l}}, \tilde{A}_{\varepsilon,t_{n+r+1},\bar{l}'})$ entering system (6.123) are probabilities of the cubes $\tilde{A}_{\varepsilon,t_{n+r+1},\bar{l}'}$ for the corresponding transition probabilities of the diffusion process $\vec{Y}_0(t)$. Such probabilities can be effectively computed in a number of important models.

Relations (6.123) and (6.124) give an effective algorithm for computing the reward functions $\phi_{\Pi,\varepsilon,n}(\vec{y})$ and the optimal expected rewards $\Phi_{\Pi,\varepsilon}$ for the approximating space-skeleton discrete time Markov log-price processes $\vec{Y}_{\Pi,\varepsilon,n}$. However, it should be noted that the above algorithm also includes computing the values $g_0(t_{n+r}, e^{\vec{y}_{\varepsilon,t_{n+r},\bar{l}}})$ of the pay-off function $g_0(t, e^{\vec{y}})$ and the values of the probabilities $P_{\Pi,0,n+r+1}(\vec{y}_{\varepsilon,t_{n+r},\bar{l}}, \tilde{A}_{\varepsilon,t_{n+r+1},\bar{l}'})$ for the diffusion log-price process $\vec{Y}_0(t)$.
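As a concrete illustration of the recurrence (6.123) and the aggregation (6.124), the following sketch performs the backward induction over skeleton points, assuming that the cube probabilities of (6.119) have already been assembled into transition matrices (for instance by a routine such as the Gaussian sketch above); array shapes and names are illustrative assumptions.

```python
import numpy as np

def skeleton_backward_induction(payoff_grids, transition_matrices, initial_distribution):
    """Backward induction in the spirit of (6.123) and (6.124) on a space skeleton.

    payoff_grids        : list of arrays, payoff_grids[n][j] = g0(t_n, exp(y_j))
                          evaluated at the skeleton points of time t_n
    transition_matrices : list of N matrices; transition_matrices[n][i, j] is the
                          fitted probability of moving from skeleton point i at t_n
                          to skeleton set j at t_{n+1}
    initial_distribution: probabilities of the skeleton sets at t_0
    Returns (reward_grids, optimal_expected_reward).
    """
    n_steps = len(transition_matrices)
    rewards = [None] * (n_steps + 1)
    rewards[n_steps] = np.asarray(payoff_grids[n_steps], dtype=float)
    for n in range(n_steps - 1, -1, -1):
        continuation = transition_matrices[n] @ rewards[n + 1]
        # American-type reward: compare immediate pay-off with continuation value.
        rewards[n] = np.maximum(payoff_grids[n], continuation)
    optimal_expected_reward = float(initial_distribution @ rewards[0])
    return rewards, optimal_expected_reward
```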


In many cases, these quantities can be computed effectively. However, in some cases this can be a difficult problem, and alternative approximation methods can serve better.

Let us consider the space-skeleton structure $\tilde{\Xi}_{\Pi,\dot{\varepsilon}} = \langle \vec{y}_{\dot{\varepsilon},t_n,\bar{l}}, \tilde{A}_{\dot{\varepsilon},t_n,\bar{l}}, \bar{l} \in \tilde{\mathbb{L}}_{\dot{\varepsilon},t_n}, n = 0, \ldots, N \rangle$, for a fixed value of the parameter $\varepsilon = \dot{\varepsilon}$ determining the skeleton points and sets.

Let also $g_\varepsilon(t, e^{\vec{y}})$ and $\vec{Y}_\varepsilon(t)$ be, for every $\varepsilon \in [0, \varepsilon_0]$, respectively, a pay-off function and a Markov log-price process introduced in Subsection 6.3.1, 6.3.2, or 6.3.3. We use them as approximations for the pay-off function $g_0(t, e^{\vec{y}})$ and the log-price process $\vec{Y}_0(t)$.

The approximating Markov chain $\vec{Y}^{(\varepsilon)}_{\Pi,\dot{\varepsilon},n}$, with the initial distribution $P^{(\varepsilon)}_{\Pi,\dot{\varepsilon}}(A)$ and transition probabilities $P^{(\varepsilon)}_{\Pi,\dot{\varepsilon},n}(\vec{y}, A)$, and the quantities $\Phi^{(\varepsilon)}_{\Pi,\dot{\varepsilon}}$ and $\phi^{(\varepsilon)}_{\Pi,\dot{\varepsilon},n}(\vec{y})$ can be defined, for every $\varepsilon \in [0, \varepsilon_0]$, for the pay-off function $g_\varepsilon(t, e^{\vec{y}})$ and the log-price process $\vec{Y}_\varepsilon(t)$, in the same way as the Markov chain $\vec{Y}_{\Pi,\dot{\varepsilon},n}$, with the initial distribution $P_{\Pi,\dot{\varepsilon}}(A)$ and transition probabilities $P_{\Pi,\dot{\varepsilon},n}(\vec{y}, A)$, and the quantities $\Phi_{\Pi,\dot{\varepsilon}}$ and $\phi_{\Pi,\dot{\varepsilon},n}(\vec{y})$ have been defined for the pay-off function $g_0(t, e^{\vec{y}})$ and the log-price process $\vec{Y}_0(t)$.

This requires assuming that the following fitting conditions hold:

L$_{11}[\dot{\varepsilon}]$: For every $\varepsilon \in [0, \varepsilon_0]$, the transition probabilities $P^{(\varepsilon)}_{\Pi,\dot{\varepsilon},n}(\vec{y}, \{\vec{y}_{\dot{\varepsilon},t_n,\bar{l}}\}) = P^{(\varepsilon)}_{\Pi,0,n}(\vec{y}_{\dot{\varepsilon},t_{n-1},\bar{l}'}, \tilde{A}_{\dot{\varepsilon},t_n,\bar{l}})$, for $\vec{y} \in \tilde{A}_{\dot{\varepsilon},t_{n-1},\bar{l}'}$, $\bar{l}' \in \tilde{\mathbb{L}}_{\dot{\varepsilon},t_{n-1}}$, $\bar{l} \in \tilde{\mathbb{L}}_{\dot{\varepsilon},t_n}$, $n = 1, 2, \ldots, N$,

and

M$_{11}[\dot{\varepsilon}]$: For every $\varepsilon \in [0, \varepsilon_0]$, the initial distribution $P^{(\varepsilon)}_{\Pi,\dot{\varepsilon}}(\{\vec{y}_{\dot{\varepsilon},t_0,\bar{l}}\}) = P^{(\varepsilon)}_{\Pi,0}(\tilde{A}_{\dot{\varepsilon},t_0,\bar{l}})$, $\bar{l} \in \tilde{\mathbb{L}}_{\dot{\varepsilon},t_0}$.

In this case, the quantities $\Phi^{(\varepsilon)}_{\Pi,\dot{\varepsilon}}$ and $\phi^{(\varepsilon)}_{\Pi,\dot{\varepsilon},n}(\vec{y})$ can be computed using the recurrence algorithm given in Lemma 6.3.1.

We assume that condition I$_{15}[\Pi]$, which is a reduced version of condition I$_{10}[\Pi]$, holds.

Let $P_\varepsilon(t, \vec{y}, t+u, A) = \mathsf{P}\{\vec{Y}_\varepsilon(t+u) \in A / \vec{Y}_\varepsilon(t) = \vec{y}\}$ be the transition probabilities of the Markov process $\vec{Y}_\varepsilon(t)$.

We assume that the following reduced variant of condition J$_3[\Pi]$ holds:

J$_{12}[\Pi]$: There exist sets $\dot{\mathbb{Y}}_{t_n} \in \mathrm{B}_{\mathbb{R}^k}$, $n = 0, \ldots, N$ such that $P_\varepsilon(t_{n-1}, \vec{y}, t_n, \cdot) \Rightarrow P_0(t_{n-1}, \vec{y}, t_n, \cdot)$ as $\varepsilon \to 0$, for $\vec{y} \in \dot{\mathbb{Y}}_{t_{n-1}}$ and $n = 1, \ldots, N$.

As follows from the remarks related to relation (7.75), conditions G$_8$, G$_{10}$, and G$_{12}$ imply that condition J$_{12}[\Pi]$ holds for the approximation model considered in Subsection 6.3.1.


According to the remarks related to relation (6.104), conditions G$_8$, G$_{10}$, G$_{12}$, G$_{13}$, G$_{14}$, O$_{13}$ and O$_{14}$ imply that condition J$_{12}[\Pi]$ holds for the approximation model considered in Subsection 6.3.2.

Also, as follows from the remarks related to relation (7.75), conditions G$_{15}$, G$_{17}$, G$_{18}$, G$_{19}$, (4.188) and (4.190) imply that condition J$_{12}[\Pi]$ holds for the approximation model considered in Subsection 6.3.3.

Let $\mathcal{B}_{t_n,\vec{y},t_{n+1}} = \{A \in \mathrm{B}_{\mathbb{R}^k} : P_0(t_n, \vec{y}, t_{n+1}, \partial A) = 0\}$, $\vec{y} \in \mathbb{R}^k$ be the $\sigma$-algebra of sets of continuity for the probability measure $P_0(t_n, \vec{y}, t_{n+1}, A)$, for $n = 0, \ldots, N-1$.

Let us assume that the following condition holds:

N$_{18}[\Pi]$: (a) $\vec{y}_{\dot{\varepsilon},t_n,\bar{l}} \in \dot{\mathbb{Y}}_{t_n}$, $\bar{l} \in \tilde{\mathbb{L}}_{\dot{\varepsilon},t_n}$, $n = 0, \ldots, N$; (b) $\tilde{A}_{\dot{\varepsilon},t_{n+1},\bar{l}'} \in \mathcal{B}_{t_n,\vec{y}_{\dot{\varepsilon},t_n,\bar{l}},t_{n+1}}$, $\bar{l} \in \tilde{\mathbb{L}}_{\dot{\varepsilon},t_n}$, $\bar{l}' \in \tilde{\mathbb{L}}_{\dot{\varepsilon},t_{n+1}}$, $n = 0, \ldots, N-1$.

The following lemma, which is a direct corollary of Lemma 6.3.1, gives conditions for convergence of the reward functions $\phi^{(\varepsilon)}_{\Pi,\dot{\varepsilon},n}(\vec{y})$.

Lemma 6.3.2. Let conditions L$_{11}[\dot{\varepsilon}]$, I$_{15}[\Pi]$, J$_{12}[\Pi]$, and N$_{18}[\Pi]$ hold. Then the following asymptotic relation takes place, for every $\vec{y} \in \dot{\mathbb{Y}}_{t_n}$, $n = 0, \ldots, N$,
$$
\phi^{(\varepsilon)}_{\Pi,\dot{\varepsilon},n}(\vec{y}) \to \phi^{(0)}_{\Pi,\dot{\varepsilon},n}(\vec{y}) \ \text{as} \ \varepsilon \to 0. \tag{6.125}
$$

Let $\mathcal{B}_0 = \{A \in \mathrm{B}_{\mathbb{R}^k} : P_0(\partial A) = 0\}$ be the $\sigma$-algebra of sets of continuity for the probability measure $P_0(A)$.

Let us assume that the following condition holds:

K$_{30}[\Pi]$: (a) $P_\varepsilon(\cdot) \Rightarrow P_0(\cdot)$ as $\varepsilon \to 0$; (b) $\tilde{A}_{\dot{\varepsilon},t_0,\bar{l}} \in \mathcal{B}_0$, $\bar{l} \in \tilde{\mathbb{L}}_{\dot{\varepsilon},t_0}$.

The following lemma gives conditions for convergence of the optimal expected rewards $\Phi^{(\varepsilon)}_{\Pi,\dot{\varepsilon}}$.

Lemma 6.3.3. Let conditions L$_{11}[\dot{\varepsilon}]$, M$_{11}[\dot{\varepsilon}]$, I$_{15}[\Pi]$, J$_{12}[\Pi]$, N$_{18}[\Pi]$ and K$_{30}[\Pi]$ hold. Then the following asymptotic relation takes place,
$$
\Phi^{(\varepsilon)}_{\Pi,\dot{\varepsilon}} \to \Phi^{(0)}_{\Pi,\dot{\varepsilon}} \ \text{as} \ \varepsilon \to 0. \tag{6.126}
$$

Remark 6.3.2. It is useful to note that, under the conditions of Lemma 6.3.2, the values of the reward functions satisfy $\phi^{(\varepsilon)}_{\Pi,\dot{\varepsilon},n}(\vec{y}_{\dot{\varepsilon},t_n,\bar{l}}) \to \phi^{(0)}_{\Pi,\dot{\varepsilon},n}(\vec{y}_{\dot{\varepsilon},t_n,\bar{l}})$ as $\varepsilon \to 0$, for $\bar{l} \in \tilde{\mathbb{L}}_{\dot{\varepsilon},t_n}$, $n = 1, \ldots, N$, and, under the conditions of Lemma 6.3.3, the above asymptotic relation also holds for $\bar{l} \in \tilde{\mathbb{L}}_{\dot{\varepsilon},t_0}$.

Remark 6.3.3. If one would like to obtain the asymptotic relations given in Lemma 6.1.2 for some sequence of partitions $\Pi_m$, $m = 0, 1, \ldots$ and some sequence of parameters $\dot{\varepsilon}_l$, $l = 0, 1, \ldots$ determining the sets of elements for the corresponding space-skeleton structures $\tilde{\Xi}_{\Pi_m,\dot{\varepsilon}_l}$, one should require that the conditions of Lemmas 6.3.2 and 6.3.3 hold for all partitions $\Pi_m$, $m = 0, 1, \ldots$ and all skeleton structures corresponding to the parameters $\dot{\varepsilon}_l$, $l = 0, 1, \ldots$.


The numbers of skeleton points and skeleton sets involved in conditions N$_{18}[\Pi]$ and K$_{30}[\Pi]$(b) are finite, and these numbers are at most countable in the aggregated case described above. This always makes it possible to choose skeleton points and skeleton sets in such a way that conditions N$_{18}[\Pi_m]$ and K$_{30}[\Pi_m]$(b) hold for any given sequences of partitions $\Pi_m$, $m = 0, 1, \ldots$ and parameters $\dot{\varepsilon}_l$, $l = 0, 1, \ldots$ determining the sets of elements for the corresponding skeleton structures $\tilde{\Xi}_{\Pi_m,\dot{\varepsilon}_l}$.

6.3.5 Convergence of time-space-skeleton reward approximations with fixed skeleton structure, based on an arbitrary partition of the time interval, for diffusion log-price processes

Let us formulate conditions of convergence for the reward functions $\phi_{\Pi,\varepsilon,n}(\vec{y})$ and the optimal expected rewards $\Phi_{\Pi,\varepsilon}$, for an arbitrary partition $\Pi = \langle 0 = t_0 < t_1 < \cdots < t_N = T \rangle$ of the interval $[0, T]$.

In fact, we should just formulate, in terms of the pay-off function $g_0(t, e^{\vec{y}})$ and the continuous time diffusion log-price process $\vec{Y}_0(t)$, conditions sufficient for the corresponding conditions in Theorems 6.1.7 and 6.1.8.

We assume that conditions G$_8$, G$_{10}$ and G$_{12}$ hold. We should take into consideration that, in this case, we consider the model without an index component.

First of all, we should replace the fitting conditions L$_3$ and M$_3$ by the following simpler fitting conditions:

L$_{12}$: For any partition $\Pi = \langle 0 = t_0 < t_1 < \cdots < t_N = T \rangle$ of the interval $[0, T]$ and every $\varepsilon \in (0, \varepsilon_0]$, the transition probabilities $P_{\Pi,\varepsilon,n}(\vec{y}, \{\vec{y}_{\varepsilon,t_n,\bar{l}}\}) = P_{\Pi,0,n}(\vec{y}_{\varepsilon,t_{n-1},\bar{l}'}, \tilde{A}_{\varepsilon,t_n,\bar{l}})$, for $\vec{y} \in \tilde{A}_{\varepsilon,t_{n-1},\bar{l}'}$, $\bar{l}' \in \tilde{\mathbb{L}}_{\varepsilon,t_{n-1}}$, $\bar{l} \in \tilde{\mathbb{L}}_{\varepsilon,t_n}$, $n = 1, 2, \ldots, N$,

and

M$_{12}$: For any partition $\Pi = \langle 0 = t_0 < t_1 < \cdots < t_N = T \rangle$ of the interval $[0, T]$ and every $\varepsilon \in (0, \varepsilon_0]$, the initial probabilities $P_{\Pi,\varepsilon}(\{\vec{y}_{\varepsilon,t_0,\bar{l}}\}) = P_{\Pi,0}(\tilde{A}_{\varepsilon,t_0,\bar{l}})$, $\bar{l} \in \tilde{\mathbb{L}}_{\varepsilon,t_0}$.

Note that, according to condition M$_{12}$, we require that, for every $\varepsilon \in (0, \varepsilon_0]$, the approximating Markov chains $\vec{Y}_{\Pi,\varepsilon,n}$ have the same initial distribution for all partitions $\Pi$.

The skeleton structural condition N$_{11}$ should be replaced by the simpler condition N$_{16}$.

We assume that conditions G$_8$, G$_{10}$, and G$_{12}$ hold. By Lemmas 4.4.3 and 6.1.1, these conditions imply that the following condition holds for any vector parameter $\bar{\beta} = (\beta_1, \ldots, \beta_k)$ with non-negative components:


C$_{26}[\bar{\beta}]$: $\lim_{c \to 0} \Delta''_{\beta_i}(Y_{0,i}(\cdot), c, T) = 0$, $i = 1, \ldots, k$.

This condition is a particular case of condition C$_{21}[\bar{\beta}]$.

Condition B$_{36}[\bar{\beta}]$ should be replaced by the simpler condition B$_{39}[\bar{\gamma}]$, assumed to hold for some vector parameter $\bar{\gamma} = (\gamma_1, \ldots, \gamma_k)$ with non-negative components.

By Lemma 4.4.5 and Theorem 4.4.1, conditions G$_8$, G$_{10}$, G$_{12}$ and condition B$_{39}[\bar{\gamma}]$, assumed to hold for some vector parameter $\bar{\gamma}$ with non-negative components, imply that, for any partition $\Pi$, $t = t_n \in \Pi$ and $\vec{y} \in \mathbb{R}^k$,
$$
|\phi_{\Pi,0,n}(\vec{y})| \le \mathsf{E}_{\vec{y},t} \sup_{t \le s \le T} |g_0(s, e^{\vec{Y}_0(s)})| < \infty. \tag{6.127}
$$

Condition I$_{12}$ should be replaced by the simpler condition I$_{20}$.

As is well known, conditions G$_8$, G$_9$ (which is weaker than condition G$_{12}$) and G$_{10}$ imply that the transition probabilities $P_0(t, \vec{y}, t+u, \cdot)$ are weakly continuous in the argument $\vec{y} \in \mathbb{R}^k$, for every $0 \le t < t+u \le T$. This implies that condition J$_5$(a) (its variant for the model without an index component) holds. Thus, condition J$_5$ can be reduced to the following simpler condition:

J$_{13}$: $P_0(t, \vec{y}, t+u, \mathbb{Y}_{t+u}) = 1$, for every $\vec{y} \in \mathbb{Y}_t$, $0 \le t < t+u \le T$, where $\mathbb{Y}_t$, $t \in [0, T]$ are the sets introduced in condition I$_{20}$.

A typical example is where the complements $\bar{\mathbb{Y}}_t$, $t \in [0, T]$ are empty. Then condition J$_{13}$ obviously holds. Another typical example is where the sets $\bar{\mathbb{Y}}_{t_n}$, $n = 1, \ldots, N$ are at most finite or countable. Then the assumption that the measures $P_0(t, \vec{y}, t+u, A)$, $0 \le t < t+u \le T$ have no atoms implies that condition J$_{13}$ holds. One more example is where the measures $P_0(t, \vec{y}, t+u, A)$, $0 \le t < t+u \le T$ are absolutely continuous with respect to the Lebesgue measure in $\mathbb{R}^k$ and the sets $\bar{\mathbb{Y}}_t$, $t \in [0, T]$ have zero Lebesgue measure. This assumption also implies that condition J$_{13}$ holds.

Theorem 6.1.7 takes the following form.

Theorem 6.3.7. Let $\Pi = \langle 0 = t_0 < t_1 < \cdots < t_N = T \rangle$ be an arbitrary partition of the interval $[0, T]$, and let the time-skeleton log-price process $\vec{Y}_{\Pi,0,n} = \vec{Y}_0(t_n)$ and the corresponding approximating time-space-skeleton Markov log-price processes $\vec{Y}_{\Pi,\varepsilon,n}$ be defined as described in Subsection 6.3.4. Let condition B$_{39}[\bar{\gamma}]$ hold for some vector parameter $\bar{\gamma}$ with non-negative components. Let also conditions G$_8$, G$_{10}$, G$_{12}$, L$_{12}$, N$_{16}$, I$_{20}$ and J$_{13}$ hold. Then, the following asymptotic relation holds, for the partition $\Pi$ and any $t = t_n \in \Pi$ and $\vec{y}_\varepsilon \to \vec{y}_0 \in \mathbb{Y}_{t_n}$ as $\varepsilon \to 0$,
$$
\phi_{\Pi,\varepsilon,n}(\vec{y}_\varepsilon) \to \phi_{\Pi,0,n}(\vec{y}_0) = \phi_t(\mathsf{M}^{(0)}_{\Pi,t,T}, \vec{y}_0) \ \text{as} \ \varepsilon \to 0. \tag{6.128}
$$

Condition D$_{27}[\bar{\beta}]$ takes the form of condition D$_{16}[\bar{\beta}]$.


As follows from Lemma 4.4.7 and Theorem 4.4.2, conditions G$_8$, G$_{10}$, G$_{12}$ and conditions B$_{39}[\bar{\gamma}]$ and D$_{16}[\bar{\beta}]$, assumed to hold for some vector parameters $\bar{\beta}$ and $\bar{\gamma}$ such that $0 \le \gamma_i \le \beta_i < \infty$, $i = 1, \ldots, k$, imply that, for any partition $\Pi$,
$$
|\Phi_{\Pi,0}| \le \mathsf{E} \sup_{0 \le t \le T} |g_0(t, e^{\vec{Y}_0(t)})| < \infty. \tag{6.129}
$$

Finally, condition K$_{20}$ takes the form of the following condition imposed on the initial distribution $P_0(A) = \mathsf{P}\{\vec{Y}_0(0) \in A\}$:

K$_{31}$: $P_0(\mathbb{Y}_0) = 1$, where $\mathbb{Y}_0$ is the set introduced in condition I$_{20}$.

Theorem 6.1.8 takes the following form.

Theorem 6.3.8. Let $\Pi = \langle 0 = t_0 < t_1 < \cdots < t_N = T \rangle$ be an arbitrary partition of the interval $[0, T]$, and let the time-skeleton log-price process $\vec{Y}_{\Pi,0,n} = \vec{Y}_0(t_n)$ and the corresponding approximating time-space-skeleton log-price processes $\vec{Y}_{\Pi,\varepsilon,n}$ be defined as described in Subsection 6.3.4. Let conditions B$_{39}[\bar{\gamma}]$ and D$_{16}[\bar{\beta}]$ hold and, for every $i = 1, \ldots, k$, either $0 = \gamma_i = \beta_i$ or $0 < \gamma_i < \beta_i < \infty$. Let also conditions G$_8$, G$_{10}$, G$_{12}$, L$_{12}$, M$_{12}$, N$_{16}$, I$_{20}$, J$_{13}$ and K$_{31}$ hold. Then, the following asymptotic relation holds,
$$
\Phi_{\Pi,\varepsilon} \to \Phi_{\Pi,0} = \Phi(\mathsf{M}^{(0)}_{\Pi,0,T}) \ \text{as} \ \varepsilon \to 0. \tag{6.130}
$$

7 Convergence of option rewards for continuous time Markov LPP

In Chapter 7, we present results about convergence of option rewards for continuous time multivariate modulated Markov log-price processes. It is important that we impose minimal conditions of smoothness on the limiting transition probabilities and pay-off functions. In the basic case, where the transition probabilities have densities with respect to some pivotal Lebesgue-type measure, it is usually required that the sets of weak discontinuity of the limiting transition probabilities are zero sets with respect to the above pivotal measure. In fact, such assumptions allow the transition probabilities to be very irregular. For example, the above discontinuity sets can be countable sets dense in the corresponding phase spaces. Also, we impose Lipschitz-type conditions on pay-off functions. These conditions are weaker than conditions involving derivatives of pay-off functions, which are usually used in integro-differential approximation methods.

In Section 7.1, we present general limit theorems about convergence of rewards for American-type options with general pay-off functions and multivariate modulated Markov log-price processes.

In Section 7.2, we give theorems about convergence of rewards for American-type options with general pay-off functions and log-price processes with independent increments, as well as their time-skeleton and time-space-skeleton approximations.

In Section 7.3, we present results about convergence of rewards for American-type options with general pay-off functions and univariate Gaussian log-price processes with independent increments, as well as their binomial-tree approximations.

In Section 7.4, we present results about convergence of rewards for American-type options with general pay-off functions and multivariate Gaussian log-price processes with independent increments, as well as their binomial- and trinomial-tree approximations.

Our main results are given in Theorems 7.1.1–7.1.2, for multivariate modulated Markov log-price processes, Theorems 7.2.1–7.2.4, for log-price processes with independent increments, Theorems 7.3.1–7.3.4, for univariate Gaussian log-price processes with independent increments, and Theorems 7.4.1–7.4.4, for multivariate Gaussian log-price processes with independent increments.

The results presented in Chapter 7 are based on, and generalize in several aspects, the results obtained in Silvestrov, Jönsson, and Stenberg (2008, 2009), for univariate modulated Markov log-price processes, and in Lundgren and Silvestrov (2009, 2011) and Silvestrov and Lundgren (2011), for multivariate Markov log-price processes. First, we consider multivariate modulated models, i.e., we combine the multivariate and modulation aspects. Second, we consider pay-off functions which also depend on the index component. Third, we improve the formulation of the corresponding conditions imposed on pay-off functions by replacing conditions imposed on their derivatives by weaker Lipschitz-type conditions. The results presented in Theorems 7.2.1–7.2.4 and 7.4.1–7.4.4 are new.

7.1 Convergence of rewards for continuous time Markov LPP

In this section, we present our main results about convergence of rewards for continuous time multivariate modulated Markov log-price processes.

7.1.1 Convergence of optimal expected rewards for multivariate modulated Markov log-price processes

Let $\vec{Z}_\varepsilon(t) = (\vec{Y}_\varepsilon(t), X_\varepsilon(t))$, $t \in [0, T]$ be, for every $\varepsilon \in [0, \varepsilon_0]$, a càdlàg multivariate modulated Markov log-price process with a phase space $\mathbb{Z} = \mathbb{R}^k \times \mathbb{X}$ ($\mathbb{X}$ is a Polish space), an initial distribution $P_\varepsilon(A)$, and transition probabilities $P_\varepsilon(t, \vec{z}, t+u, A)$. Recall that we assume that $P_\varepsilon(t, \vec{z}, t+u, \cdot)$ is measurable in the argument $(t, \vec{z}, u)$.

Let also $g_\varepsilon(t, e^{\vec{y}}, x)$, $(t, \vec{y}, x) \in [0, T] \times \mathbb{R}^k \times \mathbb{X}$ be, for every $\varepsilon \in [0, \varepsilon_0]$, a real-valued measurable pay-off function.

Let us assume that conditions B$_{16}[\bar{\gamma}]$, B$_{20}$, C$_{14}[\beta]$, and D$_{21}[\beta]$ hold, and $\gamma_* < \beta < \infty$. By Lemma 4.1.10 and Theorem 4.1.4 (which should be applied for $\beta_i = \beta$, $i = 1, \ldots, k$ and some $\gamma_i = \gamma \in (\gamma_*, \beta)$, $i = 1, \ldots, k$), there exists $\varepsilon_6 = \varepsilon_6(\beta, \gamma) \in (0, \varepsilon_0]$ such that, for every $\varepsilon \in [0, \varepsilon_6]$, the optimal expected reward is finite, i.e.,
$$
|\Phi(\mathsf{M}^{(\varepsilon)}_{\max,0,T})| \le \mathsf{E} \sup_{0 \le s \le T} |g_\varepsilon(s, e^{\vec{Y}_\varepsilon(s)}, X_\varepsilon(s))| < \infty. \tag{7.1}
$$

Condition B$_{16}[\bar{\gamma}]$ implies that the functions $g_\varepsilon(t, e^{\vec{y}}, x)$ are continuous in the argument $(t, \vec{y}, x) \in [0, T] \times \mathbb{R}^k \times \mathbb{X}$, for $\varepsilon \in [0, \varepsilon_{30}]$ ($\varepsilon_{30}$ is defined in this condition).

The Lipschitz-type condition B$_{16}[\bar{\gamma}]$ lets us simplify the condition of locally uniform convergence I$_9$. Condition B$_{16}[\bar{\gamma}]$ implies that, if $g_\varepsilon(t_0, e^{\vec{y}_0}, x_0) \to g_0(t_0, e^{\vec{y}_0}, x_0)$ as $\varepsilon \to 0$, for some $(t_0, \vec{y}_0, x_0) \in [0, T] \times \mathbb{R}^k \times \mathbb{X}$, then the convergence at the point $(t_0, \vec{y}_0, x_0)$ is locally uniform, i.e., $g_\varepsilon(t_\varepsilon, e^{\vec{y}_\varepsilon}, x_\varepsilon) \to g_0(t_0, e^{\vec{y}_0}, x_0)$ as $\varepsilon \to 0$ for any $(t_\varepsilon, \vec{y}_\varepsilon, x_\varepsilon) \to (t_0, \vec{y}_0, x_0)$. This lets us formulate the convergence condition for pay-off functions in the following simpler form:

I$_{19}$: There exist sets $\mathbb{Z}'_t \in \mathrm{B}_{\mathbb{Z}}$, $t \in [0, T]$, such that the pay-off functions $g_\varepsilon(t, e^{\vec{y}_0}, x_0) \to g_0(t, e^{\vec{y}_0}, x_0)$ as $\varepsilon \to 0$ for any $\vec{z}_0 = (\vec{y}_0, x_0) \in \mathbb{Z}'_t$ and $t \in [0, T]$.


We also impose on the transition probabilities $P_\varepsilon(t, \vec{z}, t+u, A)$ of the log-price process $\vec{Z}_\varepsilon(t)$ the following condition of locally uniform weak convergence, which is an analogue of condition J$_2$:

J$_{14}$: There exist sets $\mathbb{Z}''_t \in \mathrm{B}_{\mathbb{Z}}$, $t \in [0, T]$ such that: (a) $P_\varepsilon(t, \vec{z}_\varepsilon, t+u, \cdot) \Rightarrow P_0(t, \vec{z}_0, t+u, \cdot)$ as $\varepsilon \to 0$, for any $\vec{z}_\varepsilon = (\vec{y}_\varepsilon, x_\varepsilon) \to \vec{z}_0 = (\vec{y}_0, x_0) \in \mathbb{Z}''_t$ as $\varepsilon \to 0$, and $0 \le t < t+u \le T$; (b) $P_0(t, \vec{z}_0, t+u, \mathbb{Z}'_{t+u} \cap \mathbb{Z}''_{t+u}) = 1$, for every $\vec{z}_0 \in \mathbb{Z}'_t \cap \mathbb{Z}''_t$ and $0 \le t < t+u \le T$, where $\mathbb{Z}'_t$, $t \in [0, T]$ are the sets introduced in condition I$_{19}$.

A typical example is where the sets $\bar{\mathbb{Z}}'_t, \bar{\mathbb{Z}}''_t$, $t \in [0, T]$ are empty. Then condition J$_{14}$(b) obviously holds. Another typical example is where the sets $\bar{\mathbb{Z}}'_t, \bar{\mathbb{Z}}''_t$, $t \in [0, T]$ are at most finite or countable. Then the assumption that the measures $P_0(t, \vec{z}_0, t+u, A)$, $\vec{z}_0 \in \mathbb{Z}'_t \cap \mathbb{Z}''_t$ have no atoms at points from the sets $\bar{\mathbb{Z}}'_t, \bar{\mathbb{Z}}''_t$, for every $t \in [0, T]$, implies that condition J$_{14}$(b) holds. One more example is where the measures $P_0(t, \vec{z}_0, t+u, A)$, $\vec{z}_0 \in \mathbb{Z}'_t \cap \mathbb{Z}''_t$, $0 \le t < t+u \le T$ are absolutely continuous with respect to some $\sigma$-finite measure $P(A)$ on $\mathrm{B}_{\mathbb{Z}}$ and $P(\bar{\mathbb{Z}}'_t) = P(\bar{\mathbb{Z}}''_t) = 0$, $t \in [0, T]$. This assumption also implies that condition J$_{14}$(b) holds.

Let us introduce the set of convergence,
$$
\mathbb{Z}^{(g)} = \{(t, \vec{y}, x) \in [0, T] \times \mathbb{R}^k \times \mathbb{X} : g_\varepsilon(t, e^{\vec{y}}, x) \to g_0(t, e^{\vec{y}}, x) \ \text{as} \ \varepsilon \to 0\}. \tag{7.2}
$$

Condition B$_{16}[\bar{\gamma}]$ implies that $\mathbb{Z}^{(g)}$ is a closed set and, thus, $\bar{\mathbb{Z}}^{(g)}$ is an open set. Let us also introduce the cuts of the set $\mathbb{Z}^{(g)}$, for $t \in [0, T]$,
$$
\mathbb{Z}^{(g)}_t = \{\vec{z} = (\vec{y}, x) \in \mathbb{Z} = \mathbb{R}^k \times \mathbb{X} : (t, \vec{y}, x) \in \mathbb{Z}^{(g)}\}. \tag{7.3}
$$
The sets $\mathbb{Z}^{(g)}_t$, $t \in [0, T]$ are also closed subsets of the space $\mathbb{Z} = \mathbb{R}^k \times \mathbb{X}$ and, thus, $\bar{\mathbb{Z}}^{(g)}_t$, $t \in [0, T]$ are open subsets of the space $\mathbb{Z} = \mathbb{R}^k \times \mathbb{X}$.

It is obvious that $\mathbb{Z}'_t \subseteq \mathbb{Z}^{(g)}_t$, $t \in [0, T]$.

Due to this relation, it would seem better to use the sets $\mathbb{Z}^{(g)}_t$ instead of the sets $\mathbb{Z}'_t$ in condition I$_{19}$. Since $P_0(t, \vec{z}_0, t+u, \mathbb{Z}'_{t+u} \cap \mathbb{Z}''_{t+u}) \le P_0(t, \vec{z}_0, t+u, \mathbb{Z}^{(g)}_{t+u} \cap \mathbb{Z}''_{t+u})$, $t \in [0, T]$, using the sets $\mathbb{Z}^{(g)}_t$ would seem to weaken condition J$_{14}$(b). However, this is not true. Indeed, the relation $P_0(t, \vec{z}_0, t+u, \mathbb{Z}'_{t+u} \cap \mathbb{Z}''_{t+u}) = 1$ should hold for every $\vec{z}_0 \in \mathbb{Z}'_t \cap \mathbb{Z}''_t$, and an extension of the sets $\mathbb{Z}'_t$ would therefore strengthen the condition imposed by this relation.

Therefore, using sets $\mathbb{Z}'_t \subseteq \mathbb{Z}^{(g)}_t$, $t \in [0, T]$ makes sense and lets one balance the convergence conditions I$_{19}$ and J$_{14}$.

Finally, we assume the following condition of weak convergence on the initial distributions $P_\varepsilon(A) = \mathsf{P}\{\vec{Z}_\varepsilon(0) \in A\}$:

K$_{32}$: (a) $P_\varepsilon(\cdot) \Rightarrow P_0(\cdot)$ as $\varepsilon \to 0$; (b) $P_0(\mathbb{Z}'_0 \cap \mathbb{Z}''_0) = 1$, where $\mathbb{Z}'_0$ and $\mathbb{Z}''_0$ are the sets introduced in conditions I$_{19}$ and J$_{14}$.


The following theorem presents our main convergence result. It gives conditions of convergence for the optimal expected reward $\Phi(\mathsf{M}^{(\varepsilon)}_{\max,0,T})$.

Theorem 7.1.1. Let conditions B$_{16}[\bar{\gamma}]$, B$_{20}$, C$_{14}[\beta]$, and D$_{21}[\beta]$ hold, with $\gamma_* < \beta < \infty$. Let also condition C$_6[\alpha_*]$ hold, for $\alpha_* = \frac{\beta}{\beta - \gamma_*}$, and let conditions I$_{19}$, J$_{14}$ and K$_{32}$ hold. Then, the following asymptotic relation takes place,
$$
\Phi(\mathsf{M}^{(\varepsilon)}_{\max,0,T}) \to \Phi(\mathsf{M}^{(0)}_{\max,0,T}) \ \text{as} \ \varepsilon \to 0. \tag{7.4}
$$

Proof. Let $\Pi_m = \langle 0 = t_{m,0} < t_{m,1} < \cdots < t_{m,N_m} = T \rangle$ be a sequence of partitions of the interval $[0, T]$ such that $d(\Pi_m) = \max_{1 \le n \le N_m}(t_{m,n} - t_{m,n-1}) \to 0$ as $m \to \infty$.

Since $d(\Pi_m) \to 0$ as $m \to \infty$ and we study the case where $\varepsilon \to 0$, we can assume that $d(\Pi_m) \le c$, $m = 1, 2, \ldots$ and $\varepsilon \le \varepsilon_{38}$, where $c = c(\beta)$ and $\varepsilon_{38}$ are defined in Theorem 5.2.1 (see Remarks 5.2.2 and 5.2.3). This ensures that the reward functionals $\Phi(\mathsf{M}^{(\varepsilon)}_{\max,0,T})$ and $\Phi(\mathsf{M}^{(\varepsilon)}_{\Pi_m,0,T})$, $m = 1, 2, \ldots$ are finite for $\varepsilon \in [0, \varepsilon_{38}]$.

Since we assume all conditions of Theorem 5.2.1 to hold, relation (5.41), given in this theorem, and the assumption $d(\Pi_m) \to 0$ as $m \to \infty$ imply that the following relation holds,
$$
\lim_{m \to \infty} \varlimsup_{\varepsilon \to 0} |\Phi(\mathsf{M}^{(\varepsilon)}_{\max,0,T}) - \Phi(\mathsf{M}^{(\varepsilon)}_{\Pi_m,0,T})| = 0. \tag{7.5}
$$

As was shown in Lemma 5.1.3, conditions B$_{16}[\bar{\gamma}]$ and B$_{20}$ imply that condition B$_{19}[\gamma]$ holds for any $\gamma \ge \gamma_\circ$ and, thus, it holds for $\gamma > \gamma_* \ge \gamma_\circ$ (these parameters are defined in relation (5.10)). Condition B$_{19}[\gamma_*]$ is just a variant of condition B$_2[\bar{\gamma}_*]$ for the vector parameter $\bar{\gamma} = (\gamma, \ldots, \gamma)$ with identical components.

Also, conditions C$_{14}[\beta]$ and D$_{21}[\beta]$ are just variants, respectively, of conditions C$_2[\bar{\beta}]$ and D$_{14}[\bar{\beta}]$, for the vector parameter $\bar{\beta} = (\beta, \ldots, \beta)$ with identical components $\beta \ge 0$.

Therefore, all conditions of Theorem 6.1.3, with the vector parameters $\bar{\gamma}_* = (\gamma_*, \ldots, \gamma_*)$ and $\bar{\beta} = (\beta, \ldots, \beta)$ such that $\gamma_* < \beta$, hold. Thus, relation (6.11), given in this theorem, can be applied to the functionals $\Phi(\mathsf{M}^{(\varepsilon)}_{\Pi_m,0,T})$, $m = 1, 2, \ldots$, which yields that the following relation holds, for every $m = 1, 2, \ldots$,
$$
\Phi(\mathsf{M}^{(\varepsilon)}_{\Pi_m,0,T}) \to \Phi(\mathsf{M}^{(0)}_{\Pi_m,0,T}) \ \text{as} \ \varepsilon \to 0. \tag{7.6}
$$

(0)

|Φ(Mmax,0,T ) − Φ(Mmax,0,T )| (ε)

(ε)

≤ |Φ(Mmax,0,T ) − Φ(MΠm ,0,T )| (0)

(0)

+ |Φ(Mmax,0,T ) − Φ(MΠm ,0,T )| (ε)

(0)

+ |Φ(MΠm ,0,T ) − Φ(MΠm ,0,T )|.

(7.7)

7.1

375

Convergence of rewards for continuous time Markov LPP

Using this inequality and relation (7.6), we get the following relation, for every m = 1, 2, . . ., (0)

(ε)

lim |Φ(Mmax,0,T ) − Φ(Mmax,0,T )|

ε→0

(ε)

(ε)

≤ lim |Φ(Mmax,0,T ) − Φ(MΠm ,0,T )| ε→0

(0)

(0)

+ |Φ(Mmax,0,T ) − Φ(MΠm ,0,T )|.

(7.8)

Finally, relation (7.5) implies (note that relation ε → 0 admit also the case where ε = 0) that the expression on the right hand side in (7.8) can be forced to take a value less then any δ > 0 by choosing the partition Πm with the diameter d(Πm ) small enough. This proves the asymptotic relation (7.4) and, thus, the proof of Theorem 7.1.1 is complete.  Remark 7.1.1. Condition B16 [¯ γ ] can be replaced in Theorem 7.1.1 by the weaker condition B21 [¯ γ , ν¯]. Indeed, inequality (5.79) given in Theorem 5.2.2 implies relation (7.5) to hold equally well with inequality (5.41) given in Theorem 5.2.1. Remark 7.1.2. Condition B16 [¯ γ ] can, also, be replaced in Theorem 7.1.1 by condition B17 [¯ γ ] or B18 [¯ γ ]. ~ ε (t) have no index component, Remark 7.1.3. If the log-price processes Z then condition C6 [α∗ ] can be omitted in Theorem 7.1.1.

7.1.2 Convergence of reward functions for multivariate modulated Markov log-price processes As was pointed above, we assume that conditions B16 [¯ γ ], B20 , C14 [β] hold, and γ∗ < β < ∞. By Lemma 4.1.8 and Theorem 4.1.3 (which should be applied for βi = β, i = 1, . . . , k and some γi = γ ∈ (γ∗ , β), i = 1, . . . , k), there exists ε2 = ε2 (β, γ) ∈ (0, ε0 ] such that, for every ε ∈ [0, ε2 ] and (t, ~ y , x) ∈ [0, T ] × Rk × X, the reward function (ε) φt (~ y , x) is finite, i.e., for 0 ≤ t ≤ T, ~z = (~ y , x) ∈ Z, ~

(ε)

|φt (~ y , x)| ≤ E~z,t sup |gε (s, eY (s) , X(s))| < ∞.

(7.9)

t≤s≤T

The following theorems present our main convergence result. It gives condi(ε) tions of convergence reward functions φt (~ y , x). Theorem 7.1.2. Let conditions B16 [¯ γ ], B20 , and C14 [β] hold, and γ∗ < β < β and conditions I19 and J14 ∞. Let also condition C6 [α∗ ] holds, for α∗ = β−γ ∗ hold. Then, for any t ∈ [0, T ], the following asymptotic relation takes place for any ~zε = (~ yε , xε ) → ~z0 = (~ y0 , x0 ) ∈ Z0t ∩ Z00t as ε → 0, (ε)

(0)

φt (~ yε , xε ) → φt (~ y0 , x0 ) as ε → 0.

(7.10)

376

7

Convergence of option rewards for continuous time Markov LPP

Proof. There are two ways to prove the theorem. The first one is to give the proof is analogous to the proof of Theorem 7.1.1. The only difference is that Theorems Theorem 7.3.1 and 6.1.4 should be us used instead of Theorems 5.2.1 and 6.1.3. However, we can use also Lemma 5.3.1. (ε) According to this lemma, the reward functional φt (~ yε , xε ), for the log-price ~ ε (s), s ∈ [0, T ] and the pay-off function gε (s, e~y , x), s ∈ [0, T ], ~z = (~ process Z y , x) ∈ (ε) Z, coincides with the reward functional Φ(Mmax,0,T −t ), for the shifted in time ~ ε,~z ,t (s), s ∈ [0, T − t] with the phase space Z, the initial càdlàg Markov process Z ε

distribution Pε,~zε ,t (A) = I(~zε ∈ A) concentrated in point ~zε , the transition probabilities Pε,t (s, ~z, s + u, A) = Pε (t + s, ~z, t + s + u, A), and the shifted in time pay-off function gε,t (s, e~y , x) = gε (t + s, e~y , x), s ∈ [0, T − t], ~z = (~ y , x) ∈ Z. Note that the trivial case t = T is also possible. (ε) (ε) In this case, φt (~ y , x) and the reward functional Φ(Mmax,0,0 ) computed for the above shifted log-price process and pay-off function coincide both with gε (T, e~yε , xε ). Note also that the role of the sets Z00 and Z000 (for the pay-off function gε (t, e~y , x) ~ ε (s)) is played by the sets Z0t and Z00t (for the shifted and the log-price process Z ~ ε,~z ,t (s)). pay-off function gε,t (s, e~y , x) and the shifted log-price process Z ε Condition D21 [β] holds, for the initial distributions Pε,~zε ,t (A) = I(~zε ∈ A), if ~zε = (~ yε , xε ) → ~z0 = (~ y0 , x0 ) ∈ Z as ε → 0. This holds for any β ≥ 0 and t ∈ [0, T ]. Condition K32 holds for the initial distributions Pε,~zε ,t (A) = I(~zε ∈ A) for any ~zε = (~ yε , xε ) → ~z0 = (~ y0 , x0 ) ∈ Z0t ∩ Z00t . This holds for any t ∈ [0, T ]. In the light of the above remarks, we can conclude that Theorem 7.1.2 is the direct corollary of Theorem 7.1.1.  Remark 7.1.3. Condition B16 [¯ γ ] can be replaced in Theorem 7.1.2 by the weaker condition B21 [¯ γ , ν¯]. Indeed, inequality (5.79) given in Theorem 5.2.2 implies relation (7.5) to hold equally well with inequality (5.41) given in Theorem 5.2.1. Remark 7.1.4. Condition B16 [¯ γ ] can, also, be replaced in Theorem 7.1.2 by condition B17 [¯ γ ] or B18 [¯ γ ]. ~ ε (t) have no index component, Remark 7.1.5. If the log-price processes Z then condition C6 [α∗ ] can be omitted in Theorem 7.1.2.

7.2 Convergence of rewards for LPP with independent increments

In this section, we present results about convergence of rewards for log-price processes with independent increments.


7.2.1 Convergence of rewards for multivariate log-price processes with independent increments

Let, as in Subsections 4.3.2 and 5.4.1, $\vec{Y}_\varepsilon(t) = (Y_{\varepsilon,1}(t), \ldots, Y_{\varepsilon,k}(t))$, $t \in [0, T]$ be, for every $\varepsilon \in [0, \varepsilon_0]$, a càdlàg multivariate log-price process with independent increments, with initial distribution $P_\varepsilon(A) = \mathsf{P}\{\vec{Y}_\varepsilon(0) \in A\}$ and distributions of increments $P_\varepsilon(t, t+u, A) = \mathsf{P}\{\vec{Y}_\varepsilon(t+u) - \vec{Y}_\varepsilon(t) \in A\}$.

We consider the model with log-price processes without an index component. In this case, the first-type modulus of exponential moment compactness $\Delta_\beta(Y_{\varepsilon,i}(\cdot), c, T)$ coincides with the simpler first-type modulus of exponential moment compactness $\Delta'_\beta(Y_{\varepsilon,i}(\cdot), c, T)$. Condition C$_{14}[\beta]$ takes, in this case, the form of condition C$_{15}[\beta]$, assumed to hold for some $\beta \ge 0$. By Lemma 5.4.1, condition C$_{15}[\beta]$ is implied by conditions C$_{12}$ and E$_{11}[\alpha]$ if either $\alpha > \beta > 0$ or $\alpha = \beta = 0$.

We also assume that $g_\varepsilon(t, e^{\vec{y}}, x) = g_\varepsilon(t, e^{\vec{y}})$, $(t, \vec{y}, x) \in [0, T] \times \mathbb{R}^k \times \mathbb{X}$ is, for every $\varepsilon \in [0, \varepsilon_0]$, a pay-off function which also does not depend on the index argument $x$. In fact, we can always add a virtual index component $X_\varepsilon(t)$ with the degenerate one-point phase space $\mathbb{X} = \{x_0\}$ to the log-price process $\vec{Y}_\varepsilon(t)$ and also consider the pay-off function $g_\varepsilon(t, e^{\vec{y}}) = g_\varepsilon(t, e^{\vec{y}}, x_0)$ as a function defined on the space $[0, T] \times \mathbb{R}^k \times \mathbb{X}$.

Conditions B$_{16}[\bar{\gamma}]$–B$_{21}[\bar{\gamma}, \bar{\nu}]$ take, in this case, the simpler forms of conditions B$_{23}[\bar{\gamma}]$–B$_{28}[\bar{\gamma}, \bar{\nu}]$. Since the index component is absent, the moment compactness condition C$_6[\alpha_*]$ for the index component can be omitted. Condition D$_{21}[\beta]$ can be replaced, in this case, by condition D$_{22}[\beta]$.

The process $\vec{Y}_\varepsilon(t)$ is a particular case of the process $\vec{Z}_\varepsilon(t)$ introduced in Subsection 7.1.1. Conditions B$_{23}[\bar{\gamma}]$, B$_{27}$ and C$_{15}[\beta]$ imply that the reward function $\phi^{(\varepsilon)}_t(\vec{y}) < \infty$, for $\varepsilon \in (0, \varepsilon_2]$ (see Subsection 7.1.1). If also condition D$_{22}[\beta]$ holds, then the optimal expected reward $\Phi(\mathsf{M}^{(\varepsilon)}_{\max,0,T}) < \infty$, for $\varepsilon \in (0, \varepsilon_6]$ (see Subsection 7.1.1).

Condition B$_{23}[\bar{\gamma}]$ implies that the functions $g_\varepsilon(t, e^{\vec{y}})$ are continuous for $\varepsilon \in [0, \varepsilon_{43}]$ ($\varepsilon_{43}$ is defined in this condition). Condition I$_{19}$ takes, in this case, the following simpler form:

I$_{20}$: There exist sets $\mathbb{Y}_t \in \mathrm{B}_{\mathbb{R}^k}$, $t \in [0, T]$, such that the pay-off functions $g_\varepsilon(t, e^{\vec{y}_0}) \to g_0(t, e^{\vec{y}_0})$ as $\varepsilon \to 0$ for any $\vec{y}_0 \in \mathbb{Y}_t$ and $t \in [0, T]$.

We also impose on the distributions of increments $P_\varepsilon(t, t+u, A)$ of the log-price process with independent increments $\vec{Y}_\varepsilon(t)$ the following condition of weak convergence, which is an analogue of condition J$_{14}$:


J$_{15}$: (a) $P_\varepsilon(t, t+u, \cdot) \Rightarrow P_0(t, t+u, \cdot)$ as $\varepsilon \to 0$, for $0 \le t < t+u \le T$; (b) $P_0(t, t+u, \mathbb{Y}_{t+u} - \vec{y}) = 1$, for every $\vec{y} \in \mathbb{Y}_t$ and $0 \le t < t+u \le T$, where $\mathbb{Y}_t$, $t \in [0, T]$ are the sets introduced in condition I$_{20}$.

Finally, we assume the following condition of weak convergence on the initial distributions $P_\varepsilon(A) = \mathsf{P}\{\vec{Y}_\varepsilon(0) \in A\}$, which is an analogue of condition K$_{32}$:

K$_{33}$: (a) $P_\varepsilon(\cdot) \Rightarrow P_0(\cdot)$ as $\varepsilon \to 0$; (b) $P_0(\mathbb{Y}_0) = 1$, where $\mathbb{Y}_0$ is the set introduced in condition I$_{20}$.

The following two theorems are direct corollaries of Theorems 7.1.1 and 7.1.2.

Theorem 7.2.1. Let conditions B$_{23}[\bar{\gamma}]$, B$_{27}$, C$_{15}[\beta]$, and D$_{22}[\beta]$ hold, with $\gamma_* < \beta < \infty$. Let also conditions I$_{20}$, J$_{15}$ and K$_{33}$ hold. Then, the following asymptotic relation takes place,
$$
\Phi(\mathsf{M}^{(\varepsilon)}_{\max,0,T}) \to \Phi(\mathsf{M}^{(0)}_{\max,0,T}) \ \text{as} \ \varepsilon \to 0. \tag{7.11}
$$

Theorem 7.2.2. Let conditions B$_{23}[\bar{\gamma}]$, B$_{27}$ and C$_{15}[\beta]$ hold, with $\gamma_* < \beta < \infty$. Let also conditions I$_{20}$ and J$_{15}$ hold. Then, for any $t \in [0, T]$, the following asymptotic relation takes place, for any $\vec{y}_\varepsilon \to \vec{y}_0 \in \mathbb{Y}_t$ as $\varepsilon \to 0$,
$$
\phi^{(\varepsilon)}_t(\vec{y}_\varepsilon) \to \phi^{(0)}_t(\vec{y}_0) \ \text{as} \ \varepsilon \to 0. \tag{7.12}
$$

Remark 7.2.1. Condition B$_{23}[\bar{\gamma}]$ can be replaced in Theorems 7.2.1 and 7.2.2 by the weaker condition B$_{28}[\bar{\gamma}, \bar{\nu}]$.

Remark 7.2.2. Condition B$_{23}[\bar{\gamma}]$ can also be replaced in Theorems 7.2.1 and 7.2.2 by condition B$_{24}[\bar{\gamma}]$ or B$_{25}[\bar{\gamma}]$.

7.2.2 Convergence of rewards for time-skeleton approximations of multivariate log-price processes with independent increments

Let $\vec{Y}_0(t) = (Y_{0,1}(t), \ldots, Y_{0,k}(t))$, $t \in [0, T]$ be a càdlàg Lévy log-price process (i.e., a càdlàg stochastically continuous process with independent increments), which does not depend on the parameter $\varepsilon$, and has a phase space $\mathbb{R}^k$, an initial distribution $P_0(A)$ and distributions of increments $P_0(t, t+u, A)$.

Also, we assume that the pay-off function $g_0(t, e^{\vec{y}})$ is a measurable function acting from the space $[0, T] \times \mathbb{R}^k$ to $\mathbb{R}^1$, which does not depend on the parameter $\varepsilon$.

Let $\Pi_\varepsilon = \langle 0 = t_{\varepsilon,0} < \cdots < t_{\varepsilon,N_\varepsilon} = T \rangle$ be, for every $\varepsilon \in (0, \varepsilon_0]$, a partition of the interval $[0, T]$ such that the following condition holds:

N$_{19}$: (a) $N_\varepsilon \to \infty$ as $\varepsilon \to 0$; (b) $d(\Pi_\varepsilon) = \max_{1 \le n \le N_\varepsilon}(t_{\varepsilon,n} - t_{\varepsilon,n-1}) \to 0$ as $\varepsilon \to 0$.


In what follows, we shall also use an extension $\Pi^{[t]}_\varepsilon$ of the partition $\Pi_\varepsilon$ by a given point $t \in [0, T]$. It is defined by the following relation, for every $\varepsilon \in (0, \varepsilon_0]$,
$$
\Pi^{[t]}_\varepsilon = \langle 0 = t'_{\varepsilon,0} < \cdots < t'_{\varepsilon,N'_\varepsilon} = T \rangle =
\begin{cases}
\langle 0 = t_{\varepsilon,0} < \cdots < t_{\varepsilon,n-1} < t_{\varepsilon,n} < \cdots < t_{\varepsilon,N_\varepsilon} = T \rangle
& \text{if } t \in \Pi_\varepsilon, \ t = t_{\varepsilon,n}, \text{ for some } n = 0, \ldots, N_\varepsilon, \\
\langle 0 = t_{\varepsilon,0} < \cdots < t_{\varepsilon,n-1} < t < t_{\varepsilon,n} < \cdots < t_{\varepsilon,N_\varepsilon} = T \rangle
& \text{if } t \notin \Pi_\varepsilon, \ t_{\varepsilon,n-1} < t < t_{\varepsilon,n}, \text{ for some } n = 1, \ldots, N_\varepsilon.
\end{cases} \tag{7.13}
$$
By the above definition, there exists a unique index $n_{\varepsilon,t} = \min(n \ge 0: t'_{\varepsilon,n} = t)$ in the extended partition $\Pi^{[t]}_\varepsilon$ such that $t'_{\varepsilon,n_{\varepsilon,t}} = t$.

Let us consider, for every $\varepsilon \in (0, \varepsilon_0]$, the step-wise time-skeleton approximation process,
$$
\vec{Y}_\varepsilon(t) =
\begin{cases}
\vec{Y}_0(t_{\varepsilon,n}) & \text{for } t \in [t_{\varepsilon,n}, t_{\varepsilon,n+1}), \ n = 0, \ldots, N_\varepsilon - 1, \\
\vec{Y}_0(T) & \text{for } t = T.
\end{cases} \tag{7.14}
$$
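In code, evaluating the step-wise process (7.14) at an arbitrary time reduces to looking up the last partition point not exceeding that time. A minimal sketch, with array names chosen for illustration:

```python
import numpy as np

def time_skeleton_value(t, partition, values):
    """Value of the step-wise time-skeleton process (7.14) at time t,
    given the values of Y_0 sampled at the partition points.

    partition : increasing array t_0 = 0, ..., t_N = T
    values    : array of the same length with Y_0(t_n) (possibly vectors)
    """
    # Index of the last partition point t_n <= t; at t = T this returns N.
    n = np.searchsorted(partition, t, side="right") - 1
    return values[n]
```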

By definition, the process $\vec{Y}_\varepsilon(t)$ is, for every $\varepsilon \in (0, \varepsilon_0]$, a càdlàg process with independent increments.

Every component $Y_{0,i}(t)$ of the càdlàg Lévy log-price process $\vec{Y}_0(t)$ is a real-valued càdlàg Lévy process. Let $\langle \mu_{0,i}(\cdot), \sigma_{0,ii}(\cdot), \Pi_{0,ii}(\cdot, \cdot) \rangle$ be the triplet of characteristics in the corresponding Lévy–Khintchine representation for the characteristic function of the process $Y_{0,i}(t)$ (see formula (4.110)).

Let us assume that the following condition, which is a variant of condition E$_7[\bar{\alpha}]$ for a vector parameter $\bar{\alpha} = (\alpha, \ldots, \alpha)$ with identical components, holds for some $\alpha \ge 0$:

E$_{16}[\alpha]$: $\int_{|y| \ge 1} e^{\alpha |y|}\, \Pi_{0,i}(T, dy) < K_{92,i}$, $i = 1, \ldots, k$, where $1 < K_{92,i} < \infty$, $i = 1, \ldots, k$.

By Lemmas 4.2.9 and 4.3.2, condition E$_{16}[\alpha]$ implies that condition E$_{11}[\alpha]$ holds for the processes with independent increments $\vec{Y}_\varepsilon(t)$.

Condition C$_{10}$ automatically holds for the càdlàg Lévy process $\vec{Y}_0(t)$ and, thus, by Lemma 4.2.7, the condition of J-compactness C$_{12}$ holds for the time-skeleton approximation processes $\vec{Y}_\varepsilon(t)$.

By Lemma 5.4.1 or 4.3.6, conditions C$_{12}$ and E$_{11}[\alpha]$ imply that the first-type condition of exponential moment compactness C$_{15}[\beta]$ holds if $\alpha > \beta > 0$ or $\alpha = \beta = 0$.

We consider the model where the pay-off function $g_0(t, e^{\vec{y}})$ does not depend on the parameter $\varepsilon$. This simplifies the corresponding conditions. In particular, condition B$_{23}[\bar{\gamma}]$ takes the form of the following simpler condition, assumed to hold for some $k(k+1)$-dimensional vector $\bar{\gamma} = (\gamma_{0,1}, \ldots, \gamma_{0,k}, \ldots, \gamma_{k,1}, \ldots, \gamma_{k,k})$ with non-negative components:


y ~

P|gk 0 (t ,e

B40 [¯ γ ]:(a) sup 0≤t0 ,t00 ≤T,

(1+ t0 6=t00 , ~ y ∈Rk L78,0,1 , . . . , L78,0,k
Let $\delta_{\varepsilon,i} > 0$, $\lambda_{\varepsilon,i} \in \mathbb{R}^1$, $i = 1, \ldots, k$ be chosen.

First, the skeleton intervals $I_{\varepsilon,i,l_i}$ should be constructed, for $l_i = m^-_{\varepsilon,i}, \ldots, m^+_{\varepsilon,i}$, $i = 1, \ldots, k$,
$$
I_{\varepsilon,i,l_i} =
\begin{cases}
(-\infty, \ \delta_{\varepsilon,i}(m^-_{\varepsilon,i} + \tfrac{1}{2})] + \lambda_{\varepsilon,i} & \text{if } l_i = m^-_{\varepsilon,i}, \\
(\delta_{\varepsilon,i}(l_i - \tfrac{1}{2}), \ \delta_{\varepsilon,i}(l_i + \tfrac{1}{2})] + \lambda_{\varepsilon,i} & \text{if } m^-_{\varepsilon,i} < l_i < m^+_{\varepsilon,i}, \\
(\delta_{\varepsilon,i}(m^+_{\varepsilon,i} - \tfrac{1}{2}), \ \infty) + \lambda_{\varepsilon,i} & \text{if } l_i = m^+_{\varepsilon,i}.
\end{cases} \tag{7.22}
$$
Then, the skeleton points $y_{\varepsilon,i,l_i} = l_i \delta_{\varepsilon,i} + \lambda_{\varepsilon,i} \in I_{\varepsilon,i,l_i}$, for $l_i = m^-_{\varepsilon,i}, \ldots, m^+_{\varepsilon,i}$, $i = 1, \ldots, k$, should be defined.

Second, the skeleton sets $A_{\varepsilon,\bar{l}} = I_{\varepsilon,1,l_1} \times \cdots \times I_{\varepsilon,k,l_k}$ and the vector skeleton points $\vec{y}_{\varepsilon,\bar{l}} = (y_{\varepsilon,1,l_1}, \ldots, y_{\varepsilon,k,l_k}) \in A_{\varepsilon,\bar{l}}$ should be defined, for $\bar{l} = (l_1, \ldots, l_k) \in \mathbb{L}_{\varepsilon,1}$.

Let us define the skeleton functions $h_{\varepsilon,i}(y)$, $y \in \mathbb{R}^1$, for $i = 1, \ldots, k$ and $\varepsilon \in (0, \varepsilon_0]$,
$$
h_{\varepsilon,i}(y) =
\begin{cases}
\delta_{\varepsilon,i}\, m^-_{\varepsilon,i} + \lambda_{\varepsilon,i} & \text{if } y \le \delta_{\varepsilon,i}(m^-_{\varepsilon,i} + \tfrac{1}{2}) + \lambda_{\varepsilon,i}, \\
\delta_{\varepsilon,i}\, l_i + \lambda_{\varepsilon,i} & \text{if } \delta_{\varepsilon,i}(l_i - \tfrac{1}{2}) + \lambda_{\varepsilon,i} < y \le \delta_{\varepsilon,i}(l_i + \tfrac{1}{2}) + \lambda_{\varepsilon,i}, \ m^-_{\varepsilon,i} < l_i < m^+_{\varepsilon,i}, \\
\delta_{\varepsilon,i}\, m^+_{\varepsilon,i} + \lambda_{\varepsilon,i} & \text{if } y > \delta_{\varepsilon,i}(m^+_{\varepsilon,i} - \tfrac{1}{2}) + \lambda_{\varepsilon,i}.
\end{cases} \tag{7.23}
$$
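The skeleton functions just defined are used below to quantize the initial value and the increments of the time-skeleton process, producing the time-space-skeleton process of relations (7.26) and (7.27). The following sketch illustrates that construction for given sampled values of the limiting process at the partition points; the clipped-rounding map is inlined and all array names are illustrative assumptions.

```python
import numpy as np

def time_space_skeleton_path(samples, delta, lam, m_minus, m_plus):
    """Quantized-increment (time-space-skeleton) path, in the spirit of (7.26)-(7.27).

    samples : array of shape (N + 1,) with Y_0(t_{eps,0}), ..., Y_0(t_{eps,N})
              for a univariate component (the multivariate case applies the
              map coordinate-wise)
    """
    def h(y):
        # Clipped rounding onto the grid {l*delta + lam : m_minus <= l <= m_plus}.
        l = np.clip(np.round((y - lam) / delta), m_minus, m_plus)
        return l * delta + lam

    increments = np.diff(samples)
    # Quantize the initial value and each increment, then accumulate the sums.
    path = h(samples[0]) + np.concatenate(([0.0], np.cumsum(h(increments))))
    return path
```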

Finally, let us define the vector skeleton functions $\hat{h}_\varepsilon(\vec{y})$, $\vec{y} = (y_1, \ldots, y_k) \in \mathbb{R}^k$, for $\varepsilon \in (0, \varepsilon_0]$,
$$
\hat{h}_\varepsilon(\vec{y}) = (h_{\varepsilon,1}(y_1), \ldots, h_{\varepsilon,k}(y_k)). \tag{7.24}
$$
The time-skeleton approximation log-price process $\vec{Y}_\varepsilon(t)$ can, for every $\varepsilon \in (0, \varepsilon_0]$, be represented in the following form,
$$
\vec{Y}_\varepsilon(t) = \vec{Y}_0(0) + \sum_{t_{\varepsilon,n} \le t} (\vec{Y}_0(t_{\varepsilon,n}) - \vec{Y}_0(t_{\varepsilon,n-1}))
= \vec{Y}_0(0) + \sum_{t_{\varepsilon,n} \le t} \vec{W}_{\varepsilon,n}, \quad t \in [0, T], \tag{7.25}
$$
where $\vec{W}_{\varepsilon,n} = (W_{\varepsilon,n,1}, \ldots, W_{\varepsilon,n,k}) = \vec{Y}_0(t_{\varepsilon,n}) - \vec{Y}_0(t_{\varepsilon,n-1})$, $n = 1, \ldots, N_\varepsilon$.

Let us introduce, for every $\varepsilon \in (0, \varepsilon_0]$, the time-space-skeleton approximation log-price process $\vec{Y}'_\varepsilon(t)$ defined by the following relation,
$$
\vec{Y}'_\varepsilon(t) = \hat{h}_\varepsilon(\vec{Y}_0(0)) + \sum_{t_{\varepsilon,n} \le t} \hat{h}_\varepsilon(\vec{Y}_0(t_{\varepsilon,n}) - \vec{Y}_0(t_{\varepsilon,n-1}))
= \hat{h}_\varepsilon(\vec{Y}_0(0)) + \sum_{t_{\varepsilon,n} \le t} \vec{W}'_{\varepsilon,n}, \quad t \in [0, T], \tag{7.26}
$$

384

7

Convergence of option rewards for continuous time Markov LPP

where 0 0 0 Wε,n = (Wε,n,1 , . . . , Wε,n,k )

ˆ ε (Y ~0 (tε,n ) − Y ~0 (tε,n−1 )), n = 1, . . . , Nε . =h

(7.27)

Let us assume that the following skeleton structural condition holds, for some α > 0: ± = δε,i m± N20 [α]: (a) δε,i → 0 as ε → 0, for i = 1, . . . , k; (b) yε,i ε,i + λε,i → ±∞ as ε → 0, for i = 1, . . . , k; (c) Nε δε,i → 0 as ε → 0, for i = 1, . . . , k; (d) ∓ e±αyε,i Nε → 0 as ε → 0, for i = 1, . . . , k. 0(ε)

(ε)

Let us denote by Φ0 (Mmax,0,T ) and φt (~ y ) be, respectively, the optimal expected reward and the reward function for the log-price with independent incre~ε0 (t) and the pay-off function g0 (t, e~y ). ments Y Since the process Yε0 (t), t ∈ [0, T ] takes a finite number of different values, The (ε) 0(ε) optimal expected reward Φ0 (Mmax,0,T ) and the reward function φt (~ y ), (t, ~ y) ∈ [0, T ] × Rk always takes finite values. The following two theorems let one approximate the optimal expected re(0) (0) ward Φ(Mmax,0,T ) and the reward function φt (~ y ) by the corresponding quan0(ε)

(ε)

tities Φ0 (Mmax,0,T ) and φt (~ y ) computed for the time-space-skeleton log-price ~ε0 (t) which are simpler than the time-skeleton log-price processes Y ~ε (t). processes Y Theorem 7.2.5. Let conditions B40 [¯ γ ], D28 [β] and E16 [α], for some γ∗ < β < α < ∞, and, also, condition N19 and N20 [α] hold. Then, the following asymptotic relation takes place, Φ0 (Mmax,0,T ) → Φ(Mmax,0,T ) as ε → 0. (ε)

(0)

(7.28)

Theorem 7.2.6. Let conditions B40 [¯ γ ] and E16 [α], for some γ∗ < α < ∞, and, also, condition N19 and N20 [α] hold. Then, for any t ∈ [0, T ], the following asymptotic relation takes place for any ~ yε → ~ y0 ∈ Rk as ε → 0, 0(ε)

(0)

φt (~ yε ) → φt (~ y0 ) as ε → 0.

(7.29)

Proof. In Theorem 7.2.5, inequality γ∗ < β < α < ∞ is assumed. In Theorem 7.2.6, we can choose β such that the above inequality holds. By Lemmas 4.2.6 and 4.3.2, condition E16 [α] (recall that α ≥ 0) implies that the following relation holds, for every ε ∈ (0, ε0 ] and i = 1, . . . , k, Ξ±α (Yε,i (·), T ) =

sup

Ee±α(Yε,i (t+u)−Yε,i (t))

0≤t≤t+u≤T

=

sup 0≤t≤t+u≤T

Y t 0 and i = 1, . . . , k, 0 ~ε,i ∆0J (Y (·), h, c, T ) =

~ε0 (t + u) − Y ~ε0 (t)| ≥ h} P{|Y

sup 0≤t≤t+u≤t+c≤T

h 0 ~ε0 (t) − Y ~ε (t)| ≥ h }. ~0,i ≤ ∆0J (Y (·), , c, T ) + 2 sup P{|Y 3 3 0≤t≤T

(7.36)

Relations (7.35) and (7.36) imply that the following relation holds, for every h > 0 and i = 1, . . . , k, h 0 0 ~0,i (·), h, c, T ) ≤ lim ∆0J (Y (·), , c, T ) = 0. lim lim ∆0J (Yε,i c→0 3

c→0 ε→0

(7.37)

Thus, condition C12 holds for the log-price processes with independent incre~ε0 (t). ments Y Let K be some constant such that K± (α) + 1 < K < ∞. The skeleton functions hε,i (y) also satisfy, for the following inequalities hold, for every n = 1, . . . , Nε , ε ∈ (0, ε0 ] and y ∈ R1 , i = 1, . . . , k, − − − hε,i (y) ≤ (y + δε,i )I(y ≥ yε,i ) + yε,i I(y < yε,i ) − − − ) + δε,i + yε,i I(y < yε,i ). ≤ yI(y ≥ yε,i

(7.38)

Using this inequality we get the following relation, for every n = 1, . . . , Nε , ε ∈ (0, ε0 ] and i = 1, . . . , k, α ≥ 0, 0

EeαWε,n,i = Eeαhε,i (Wε,n,i ) −

− − ≤ eαδε,i (EeαWε,n,i I(Wε,n,i ≥ yε,i ) + eαyε,i P{Wε,n,i < yε,i }) −

≤ eαδε,i (EeαWε,n,i + eαyε,i ) −

= eαδε,i (Eeα(Y0,i (tε,n )−Y0,i (tε,n−1 ) + eαyε,i ).

(7.39)

Let us denote Nε,t,t+u = nε (t + u) − nε (t), 0 ≤ t ≤ t + u ≤ T . Obviously, Nε,t,t+u ≤ Nε,0,T = Nε , 0 ≤ t ≤ t + u ≤ T .

7.2

387

Convergence of rewards for LPP with independent increments

Relations (7.30) and (7.39) imply that the following relation holds, for every ε ∈ (0, ε0 ] and i = 1, . . . , k, 0 Ξα (Yε,i (·), T ) =

0

0

Eeα(Yε,i (t+u)−Yε,i (t))

sup 0≤t≤t+u≤T

=

0≤t≤t+u≤T



0

Y

sup

EeαWε,n

t 0; (e) Yε (0) is a random variable independent of the random variables Wε,n , n = 1, . . . , Nε . We shall approximate the log-price Y0 (t) by the binomial-tree log-price processes Yε (t) in such way that the corresponding reward functionals converge. (ε) Let us denote Mmax,t,T the class of all Markov moments τε,t for the logprice process Yε (t), which (a) take values in the interval [t, T ], (b) {τε,t > s} ∈ σ[Yε (u), t ≤ u ≤ s], t ≤ s ≤ T . (ε) Let, also, φt (y) be the corresponding reward function for the American option, defined by the following relation, for (t, y) ∈ [0, T ] × R1 , (ε)

φt (y) =

Ey,t g0 (τε,t , eYε (τε,t ) ).

sup

(7.62)

(ε) τε,t ∈Mmax,t,T

(ε)

Let also Φε = Φε (Mmax,0,T ) be the corresponding optimal expected reward, defined by the following relation, Φε =

sup

Eg0 (τε,0 , eYε (τε,0 ) ).

(7.63)

(ε)

τε,0 ∈Mmax,0,T

Under condition that Y0 (t) = y the process Y0 (s), s ∈ [t, T ] takes not more than 2Nε + 1 different values and the maximal absolute value is not larger than (ε) |y| + Nε δε , the reward function φt (y) < ∞, for any (t, y) ∈ [0, T ] × R1 and Φε < ∞, if E|Yε (0)| < ∞. Let us assume that the following condition holds, for some β ≥ 0: D30 [β]: limε→0 Eeβ|Yε (0)| < K95 , for some 1 < K95 < ∞. This condition includes also condition D29 [β] (since relation ε → 0 also admits the case, where ε = 0) and imply that E|Yε (0)| < ∞ for ε ∈ (0, ε65 ], for some ε65 ∈ (0, ε0 ]. Since the pay-off function g0 (t, ey ) does not depend on parameter ε, condition I20 (its univariate variant) holds with sets Yt = R1 , t ∈ [0, T ]. Condition K33 , imposed on the initial distributions, takes in this case the following simpler form:

394

7

Convergence of option rewards for continuous time Markov LPP

K34 : Pε (·) = P{Yε (0) ∈ ·} ⇒ P0 (·) = P{Y0 (0) ∈ ·} as ε → 0. Note that condition K33 (b) can be omitted since set Y0 = R1 . In this case the following relations hold for the binomial-tree log-price processes Yε (t), for t ∈ [0, T ], µε (t) = E(Yε (t) − Yε (0)) Nε (t)

=

X

(λε,n + δε (pε,n,+ − pε,n,− ))

n=1 Nε (t)

=

X

Nε (t)

λε,n + δε

n=1

X

(2pε,n,+ − 1),

(7.64)

n=1

and σε2 (t) = Var(Yε (t) − Yε (0)) =

Nε (t)

Nε (t)

X

X

(δε2 − δε2 (pε,n,+ − pε,n,− )2 ) = δε2

n=1

4pε,n,+ (1 − pε,n,+ )

n=1 Nε (t)

= δε2 Nε (t) − δε2

X

(2pε,n,+ − 1)2 .

(7.65)

n=1

Let us assume that the following convergence condition holds: J16 : (a) max1≤n≤Nε |λε,n ± δε | → 0 as ε → 0; (b) sup0≤t≤T |µε (t) − µ0 (t)| → 0 as ε → 0; (c) σε2 (t) → σ02 (t) as ε → 0, for t ∈ [0, T ]. It is useful to note that condition J16 (a) is equivalent to assumption that (a)0 max1≤n≤Nε |λε,n | → 0 as ε → 0 and (a)00 δε → 0 as ε → 0. It is also useful to note that, since functions σε2 (t), t ∈ [0, T ] are non-decreasing, condition J16 (c) implies that sup |σε2 (t) − σ02 (t)| → 0 as ε → 0.

(7.66)

0≤t≤T

Since, function σ02 (t) is continuous and strictly increasing, relation (7.66) implies that the followig relation should hold for partitions Πε , d(Πε ) = max |tε,n − tε,n−1 | → 0 as ε → 0. 1≤n≤Nε

(7.67)

Indeed, if relation (7.67) does not holds, then there exists d > 0 and a sequence 0 < εr → 0 as r → ∞ such that tεr ,n − tεr ,n−1 ≥ d, for r = 1, 2, . . .. It is always possible to select a subsequence εrm , m = 1, 2, . . . from the sequence εr , such that tεr ,n → tn and tεr ,n−1 → tn−1 as m → ∞, for some 0 ≤ tn−1 , tn ≤ T Obviously,

7.3

Univariate Gaussian LPP with Independent Increments

395

tn − tn−1 ≥ d. Also, tεrm ,n−1 < tn−1 + d3 < tn − d3 < tεrm ,n for m large enough, say, m ≥ mn . But, in this case, |σε2 (tn − d3 ) − σε2 (tn−1 + d3 )| = 0, for m ≥ mn and, therefore, by relation (7.66), |σ02 (tn − d3 ) − σ02 (tn−1 + d3 )| = 0. This contradicts to the assumption about strict monotonicity of function σ02 (t). As known (see, for example, Skorokhod (1964)), condition J16 implies that processes Yε0 (·) = < Yε (t) − Yε (0), t ∈ [0, T ] > (considered as a random variables with values in the space of càdlàg functions D[0,T ] ) weakly converge as ε → 0. Let us denote, for ε ∈ [0, ε0 ] and 0 ≤ t ≤ t + u ≤ T , Pε (t, t + u, A) = P{Yε (t + u) − Yε (t) ∈ A}.

(7.68)

As well known, condition J16 (in which one can weaken condition (b) by replacing the uniform convergence in it by a point-wise convergence µε (t) → µ0 (t) as ε → 0, for t ∈ [0, T ]) implies that, for 0 ≤ t ≤ t + u ≤ T , Pε (t, t + u, ·) ⇒ P0 (t, t + u, ·) as ε → 0.

(7.69)

Relation (7.69) implies that condition J15 (a) (its univariate variant) holds, for the log-price processes Yε (t). Condition J15 (b) also holds, since sets Yt = R1 , t ∈ [0, T ]. The following relation holds, for h > 0, P{|Yε (t + u) − Yε (t)| ≥ h}

sup 0≤t≤t+u≤t+c≤T



sup 0≤t≤t+u≤t+c≤T

=

sup 0≤t≤t+u≤t+c≤T



sup 0≤t≤t+u≤t+c≤T

E|Yε (t + u) − Yε (t)|2 h2 Var(Yε (t + u) − Yε (t)) + (E(Yε (t + u) − Yε (t)))2 h2 σε2 (t + u) − σε2 (t) + (µε (t + u) − µε (t))2 . h2

(7.70)

Relation (7.70) and relations of uniform convergence given in condition J16 (a) and (7.66) imply that, for h > 0, lim lim ∆0J (Yε (·), c, h, T )

c→0 ε→0

= lim lim

sup

c→0 ε→0 0≤t≤t+u≤t+c≤T



P{|Yε (t + u) − Yε (t)| ≥ h}

1 lim lim sup (σε2 (t + u) − σε2 (t)) h2 c→0 ε→0 0≤t≤t+u≤t+c≤T 1 + 2 lim lim sup (µε (t + u) − µε (t))2 = 0. h c→0 ε→0 0≤t≤t+u≤t+c≤T

(7.71)

Relations (7.69) and (7.71) are well-known conditions of uniform convergence for càdlàg processes with independent increments Yε0 (·) = < Yε (t) − Yε (0), t ∈ [0, T ] >.

396

7

Convergence of option rewards for continuous time Markov LPP

Relation (7.71) also implies that condition C12 [α] (its univariate variant) holds, for any α ≥ 0, for the log-price processes Yε (t). Relation (7.65) implies that δε2 Nε ≥ σε2 (T ) and, thus, limε→0 δε2 Nε ≥ σ02 (T ) > 0. This relation and condition J16 (a) imply that Nε → ∞ as ε → 0. Let us also assume that the following natural structural condition holds: N21 : limε→0 δε2 Nε < K96 , where 1 < K96 < ∞. Let us introduce, for every ε ∈ (0, ε0 ] function, defined for t ∈ [0, T ] by the following relation, Nε (t) X λε (t) = λε,n . (7.72) n=1

We additionally assume that the following condition holds: J17 : sup0≤t≤T |λε (t) − λ0 (t)| → 0 as ε → 0, where λ0 (t), t ∈ [0, T ] is a continuous function such that λ0 (0) = 0. It is useful to note that relation (7.67) and condition J17 imply that max |λε,n |

1≤n≤Nε

= max |(λε (tε,n ) − λε (tε,n−1 )) − (λ0 (tε,n ) − λ0 (tε,n−1 ))| 1≤n≤Nε

+ max |λ0 (tε,n ) − λ0 (tε,n−1 )| 1≤n≤Nε

≤ 2 sup |λε (t) − λ0 (t)| 0≤t≤T

+

|λ0 (t) − λ0 (t + u))| → 0 as ε → 0.

sup

(7.73)

0≤t≤t+u≤t+d(Πε )≤T

Let us denote Nε (t, t + u) = Nε (t + u) − Nε (t), 0 ≤ t ≤ t + u ≤ T . The following inequality takes place for the moment generation functions of increments Yε (t + u) − Yε (t), 0 ≤ t ≤ t + u ≤ T , for every ε ∈ (0, ε0 ] and β ∈ R1 , Eeβ(Yε (t+u)−Yε (t)) Y  = eβλε,n eβδε pε,n,+ + e−βδε pε,n,− t. Relation (7.71) also implies that condition E8 [¯ α] holds for the log-price pro~ε (t), for any vector α cesses Y ¯ = (α1 , . . . , αk ) with non-negative components. 2 2 2 Nε ≥ Relation (7.111) implies that δε,i Nε ≥ σε,i (T ) and, thus, limε→0 δε,i 2 σ0,i (T ) > 0, for i = 1, . . . , k, This relation and condition J18 (a) imply that Nε → ∞ as ε → 0. Let us also assume that the following natural structural condition holds: 2 N23 : limε→0 δε,i Nε < K98,i , i = 1, . . . , k, for some 1 < K98,i < ∞, i = 1, . . . , k.

Let us introduce, for every ε ∈ (0, ε0 ] functions, defined for t ∈ [0, T ], i = 1, . . . , k by the following relation, Nε (t)

λε,i (t) =

X n=1

λε,n,i .

(7.118)

7.4

Multivariate Gaussian LPP with independent increments

409

We additionally assume that the following condition holds: J19 : sup0≤t≤T |λε,i (t) − λ0,i (t)| → 0 as ε → 0, where λ0,i (t), t ∈ [0, T ], i = 1, . . . , k are continuous functions such that λ0,i (0) = 0, i = 1, . . . , k. It is useful to note that relation (7.114) and condition J37 imply that, for i = 1, . . . , k, max |λε,n,i |

1≤n≤Ne

= max |(λε,i (tε,n ) − λε,i (tε,n−1 )) − (λ0,i (tε,n ) − λ0,i (tε,n−1 ))| 1≤n≤Ne

+ max |λ0,i (tε,n ) − λ0,i (tε,n−1 )| 1≤n≤Ne

≤ 2 sup |λε,i (t) − λ0,i (t)| 0≤t≤T

+

sup

|λ0,i (t) − λ0,i (t + u))| → 0 as ε → 0.

(7.119)

0≤t≤t+u≤t+d(Πε )≤T

Let us denote Nε (t, t + u) = Nε (t + u) − Nε (t), 0 ≤ t ≤ t + u ≤ T . The following inequality takes place for the moment generation functions of increments Yε,i (t + u) − Yε,i (t), 0 ≤ t ≤ t + u ≤ T, i = 1, . . . , k, for every ε ∈ (0, ε0 ] and β ∈ R1 , Eeβ(Yε,i (t+u)−Yε,i (t)) Y = eβλε,n,i eβδε,i pε,n,i,+ t 0 are defined by relation (4.186) aand m± ε are integer numbers such that the following structural condition holds: N27 : ±δε m± ε → ∞ as ε → 0. Let us introduce condition, which is an univariate variant of condition D23 [β] imposed on the diffusion log-price process Y000 (t): 00

D32 [β]: Eeβ|Y0

(0)|

< K99,i , i = 1, . . . , k, for some 1 < K99,i < ∞, i = 1, . . . , k.

The proof analogous to those given in relations (7.46) – (7.48) let one prove that conditions D32 [β] and N27 imply that condition D18 [β] holds for random variables Yε00 (0) given by relation (8.23). Note that condition N27 and relation (8.23) imply that condition K38 holds. Since the atomic Markov chains YΠ00ε ,n have initial distributions and transition probabilities concentrated at sets with finite numbers of points, reward functionals |φΠε ,ε,n (y)| < ∞, y ∈ R1 , n = 0, 1, . . . , Nε and |ΦΠε ,ε | < ∞, for every ε ∈ (0, ε0 ]. The following lemma is a variant of Lemma 3.4.4∗ . Lemma 8.3.1. Let YΠ00ε ,n be, for every ε ∈ (0, ε0 ], the binomial-tree approximation log-price process introduced above. Then, the log-reward functions φΠε ,ε,n (y) and φΠε ,ε,n+r (y + δε l), l = −r, . . . , r, r = 1, . . . Nε − n, are, for every y ∈ R1 , n = 0, . . . , Nε , the unique solution for the following finite recurrence system of linear equations,  y+δε l ),   φΠε ,ε,Nε (y + δε l) = g0 (tε,Nε , e     l = −(Nε − n), . . . , (Nε − n),       φ (y + δε l) = max g0 (tε,n+r , ey+δε l ),   Πε ,ε,n+r    φε,n+r+1 (y + δε (l + 1))pε,n+r+1,+ (y + δε l)   + φε,n+r+1 (y + δε l)pε,n+r+1,◦ (y + δε l)      + φε,n+r (y + δε (l − 1))pε,n+r+1,− (y + δε l) ,      l = −r, . . . , r, r = Nε − n − 1, . . . , 1,       φΠε ,ε,n (y) = max g0 (tε,n , ey ), φΠε ,ε,n+1 (y + δε )pε,n+1,+ (y)      + φΠε ,ε,n+1 (y)pε,n+1,◦ (y) + φΠε ,ε,n+1 (y − δε )pε,n+1,− (y) ,

(8.25)

440

8

Convergence of option rewards for diffusion LPP

(ε)

while the optimal expected reward ΦΠε ,ε = Φε (MΠε ,0,Nε ) is given by the following formula, m+ ε X φΠε ,ε,0 (δε l)P{Y0 (0) ∈ Iε,l }. (8.26) ΦΠε ,ε = l=m− ε

The number of equations in the system (9.90) is Nn,Nε = 1 + (2 + 1) + · · · + (2(Nε − n) + 1) = (Nε − n + 1)2 . The following two theorems are corollaries of Theorems 5.5.5, 5.5.6 and 8.3.1, 8.3.2. Theorem 8.3.3. Let Y000 (t) be the diffusion process given by the stochastic differential equation (4.180) and Yε00 (t) be, for every ε ∈ (0, ε0 ], the corresponding approximating trinomial-tree log-price process given by the stochastic difference equation (4.181) and relations (4.183) and (8.23). Let conditions G15 , G17 , G18 , G19 and (4.188), (4.190) hold, and conditions B29 [¯ γ ], B32 , D32 [β], N27 hold, and γ∗ < β < ∞. Let, also, conditions I21 , J22 hold. Then, the following asymptotic relation takes place, (0) ΦΠ,ε → Φ(Mmax,0,T ) as ε → 0. (8.27) Theorem 8.3.4. Y000 (t) be the diffusion process given by the stochastic differential equation (4.180) and Yε00 (t) be, for every ε ∈ (0, ε0 ], the corresponding approximating trinomial-tree log-price process given by the stochastic difference equation (4.181) and relation (4.183). Let conditions G15 , G17 , G18 , G19 and (4.188), (4.190) hold, and, also, conditions B29 [¯ γ ], B32 , I21 , J22 hold. Then, for every t ∈ [0, T ], the following asymptotic relation takes place for any yε → y0 ∈ R1 as ε → 0, (0) φΠ[t] ,ε,nε,t (yε ) → φt (y0 ) as ε → 0. (8.28) ε

Proof. Conditions of Theorem 8.3.3 include all conditions of Theorems 5.5.5 and 8.3.1. Thus, Theorem 5.5.5, Remark 5.5.10 and relation d(Πε ) → 0 as ε → 0, imply that the following relation holds, (ε)

(ε)

lim |Φ(Mmax,0,T ) − Φε (MΠε ,0,Nε )| = 0.

ε→0

(8.29)

Relation (8.19) given in Theorem 8.3.1 and relation (8.29) obviously imply relation (8.27) to hold. Analogously, conditions of Theorem 8.3.4 include all conditions of Theorem 5.5.6 and 8.3.2. Thus, Theorem 5.5.6, Remark 5.5.12 and relation d(Πε ) → 0 as ε → 0 imply that the following relation holds, (ε)

lim |φt (~ yε ) − φΠ[t] ,ε,nε,t (~ yε )| → 0 as ε → 0.

ε→0

ε

(8.30)

441

8.4 Rewards approximations for mean-reverse diffusion LPP

Relation (8.20) given in Theorem 8.3.2 and relation (8.30) obviously imply relation (8.28) to hold.  Remark 8.3.2. Condition B29 [¯ γ ] can be replaced in Theorems 8.3.3 and 8.3.4 by the weaker condition B33 [¯ γ , ν¯]. Remark 8.3.3. Condition B29 [¯ γ ] can be replaced in Theorems 8.3.3 and 8.3.4 by condition B30 [¯ γ ] or B31 [¯ γ ].

8.4 Rewards approximations for mean-reverse diffusion LPP In this section we present a model, which can serve as an illustration of trinomialtree reward approximations for univariate mean-reverse diffusion log-price processes and Gaussian log-price with independent increments.

8.4.1 Trinomial tree reward approximations for mean-reverse diffusion log-price processes Let us consider the model of log-price process introduced by Schwartz (1997) for modeling energy prices. The corresponding log-price process Y0 (t) is a diffusion process, which is a solution of the following stochastic differential equation, dY (t) = −α(Y (t) − Y (0))dt + νdW (t), t ∈ [0, T ],

(8.31)

where α, ν > 0, W (t) is a standard Brownian motion, and the initial state Y (0) = y0 is a real-valued constant. We assume that coefficients of the stochastic differential equation (8.31) satisfy conditions G15 and G16 . Thus, the diffusion process Y (t) is the unique solution of equation (8.31) adapted to the filtration Ft = σ[W (s), 0 ≤ s ≤ t], t ∈ [0, T ]. It is given by the following relation, Y (t) = y0 + νe

−αt

Zt

eαs dW (s), t ∈ [0, T ].

(8.32)

0

We consider an American call option with a pay-off function given by the following relation, for (t, y) ∈ [0, T ] × R1 , g(t, ey ) = e−rt [ey − K]+ ,

(8.33)

where K > 0 and r ≥ 0 are constants. In this case, the drift µ(t, y) = −α(y − y0 ) and the diffusion σ 2 (t, y) = ν 2 , for (t, y) ∈ [0, T ] × R1 .

442

8

Convergence of option rewards for diffusion LPP

Condition G10 of boundedness for the above coefficients does not hold. Thus, the trinomial-tree reward approximation model introduced in Subsection 4.4.3 can not be implied. However, there exists possibility to transform the log-price process Y (t) and the pay-off function g(t, ey ) is such way that the trinomial-tree reward approximations presented in Subsection 7.4.3 can be applied. Let us introduce the transformed pay-off function, α(T −t)

g0 (t, ey ) = g(t, ey0 +(y−y0 )e −rt

=e

)

y0 +(y−y0 )eα(T −t)

[e

− K]+ , t ∈ [0, T ], s

(8.34)

and the transformed log-price process, Y0 (t) = y0 + (Y (t) − y0 )eα(t−T ) = y0 + νe

−αT

Zt

eαs dW (s), t ∈ [0, T ].

(8.35)

0

The transformed pay-off function g0 (t, ey ) satisfy condition B29 [¯ γ ] (with twodimensional vector parameter γ¯ = (γ0,1 , γ1,1 ) = (1, eαT ), and condition B32 . In this case, parameter γ∗ = eαT + 1. The processes Y (t) and Y0 (t) generate the same natural filtration Ft = σ[Y (s), (0) 0 ≤ s ≤ t] = σ[Y0 (s), 0 ≤ s ≤ t], t ∈ [0, T ] and, therefore, the class Mmax,t,T of all Markov moments t ≤ τ0,t ≤ T such that event {τ0,t > s} ∈ σ[Y (u), t ≤ u ≤ T ], s ∈ [t, T ], is also the same for these processes, for every t ∈ [0, T ]. It follows from relations (8.35) and (8.34) that the following relation holds for (0) the optimal expected reward Φ(Mmax,0,T ), (0)

Φ(Mmax,0,T ) =

sup

Eg(τ0,0 , eY (τ0,0 ) )

(8.36)

Eg0 (τ0,0 , eY0 (τ0,0 ) ),

(8.37)

(0) τ0,0 ∈Mmax,0,T

=

sup (0) τ0,0 ∈Mmax,0,T

(0)

and the reward function φt (y), (t, y) ∈ [0, T ] × R1 , (0)

φt (y) =

sup

Ey,t g(τ0,t , eY (τ0,t ) )

(8.38)

Ey,t g0 (τ0,t , eY0 (τ0,t ) ).

(8.39)

(0) τ0,t ∈Mmax,t,T

=

sup (0) τ0,t ∈Mmax,t,T

The transformed log-price process Y0 (t) is inhomogeneous in time continuous Gaussian process with independent increments. This makes it possible to use the trinomial tree approximation model introduced in Subsection 7.4.3.

8.4

443

Rewards approximations for mean-reverse diffusion LPP

In this case, the process Y0 (t) is approximated by the step-wise process with independent increments, X Wε,n,1 , t ∈ [0, T ], (8.40) Yε (t) = y0 + tε,n ≤t

where: (a) Πε = h0 = tε,0 < · · · < tε,Nε i is a partition of interval [0, T ]; (b) Wε,n,1 , n = 1, . . . , Nε are independent random variables, which take, for every n = 1, . . . , Nε , values λε,n,1 + δε,1 , λε,n,1 and λε,n,1 − δε,1 , with probabilities, respectively, pε,n,+ , pε,n,◦ and pε,n,− . In this case, the drift function µ0 (t) and diffusion σ02 (t) are given by the formula, for t ∈ [0, T ], µ0 (t) = E(Y0 (t) − Y0 (0)) = Eνe−αT

Zt

eαs dW (s) = 0,

(8.41)

0

and σ02 (t) = E(Y0 (t) − Y0 (0))2 = E(νe

Zt

−αT

eαs dW (s))2

0

= ν 2 e−2αT

Zt

e2αs ds = ν 2

e2α(t−T ) − e−2αT . 2α

(8.42)

0

Let us use the simplest uniform partitions Πε of interval [0, T ] by points tε,n = = 0, . . . , Nε , where Nε → ∞ as ε → 0. In this case, condition N25 obviously holds. Also, the Lipschitz-type condition 2 R2 also holds for the variance function σ0,1 (t), and, thus, condition N24 also holds. In this case the following relation takes place, nT Nε , n

2 0 < σΠ = max (σ02 ( ε ,1 1≤n≤Nε

≤ max (ν 1≤n≤Nε

nT (n − 1)T ) − σ02 ( )) Nε Nε

2α( nT Nε −T ) 2e

− e−2αT e2α( − ν2 2α n

≤ max ν 2 e−2αT e2αT Nε · 1≤n≤Nε

2α NTε

1−e 2α



(n−1)T Nε

ν2T . Nε

−T )



− e−2αT

) (8.43)

Parameters of the approximation model should be computed according formulas pointed out in Lemma 7.4.2. Relation (8.43) let us choose parameter δε,1 using the following formula, √ ν T δε,1 = √ . (8.44) Nε

444

8

Convergence of option rewards for diffusion LPP

Relations and (8.43) and (8.44) let us choose probabilities pε,n,1,◦ and pε,n,1,± given by the following formulas, for n = 1, . . . , Nε , pε,n,1,◦ = 1 −

2 (n−1)T σ02 ( nT Nε ) − σ0 ( Nε ) 2 δε,1 nT

= 1 − (ν 2

e2α( Nε −T ) − e−2αT e2α( − ν2 ν2T 2α Nε

2αT

=1−e

n−Nε Nε

1 − e−

2αT Nε

(n−1)T Nε

−T )

− e−2αT

2 2α νNεT

≥ 0.

2αT Nε

)

(8.45)

and pε,n,1,± = =

1 σ02 (tε,n ) − σ02 (tε,n−1 ) 2 2 δε,1 nT e2α( Nε −T ) − ν2 2 2α νNεT

= e2αT

n−Nε Nε

e−2αT

− 2αT Nε

1−e

2αT Nε

−ν

2e

2α(

(n−1)T Nε

≥ 0.

−T )



− e−2αT

ν2T Nε

(8.46)

Finally, according relation (8.41), parameters λε,n,1 = 0, n = 1, . . . , Nε .

(8.47)

By Lemma 7.4.2, all conditions of Theorems 7.3.1 – 7.3.4 hold for processes Yε (t) and, therefore, the asymptotic relations given in these theorems take place for optimal expected rewards and reward functions of the log-price processes Yε (t). Alternatively, it possible to use the binomial-tree approximation model introduced in Subsection 7.3.3. In this case, the process Y0 (t) is approximated by the step-wise process with independent increments, X Yε (t) = y0 + Wε,n , t ∈ [0, T ], (8.48) tε,n ≤t

where: (a) Πε = h0 = tε,0 < · · · < tε,Nε i is a partition of interval [0, T ]; (b) Wε,n , n = 1, . . . , Nε are independent random variables, which take, for every n = 1, . . . , Nε , values λε,n + δε and λε,n − δε with probabilities, respectively, pε,n,+ and pε,n,− . Parameters of the approximation model should be computed according formulas pointed out in Lemma 7.3.2. In this case, general partitions Πε = h0 = tε,0 < · · · < tε,Nε = T i of interval [0, T ] are used such that d(Πε ) → 0 as ε → 0.

8.4

Rewards approximations for mean-reverse diffusion LPP

Parameter δε is given by the formula, r ν 2 (1 − e−2αT ) σ0 (T ) δε = √ = . 2αNε Nε

445

(8.49)

In this case, skeleton point tε,n , n = 1, . . . , Nε are the unique solutions of the following equations, nδε2 = σ02 (tε,n ) = ν 2

e2α(tε,n −T ) − e−2αT , 2α

(8.50)

which are given by the following formulas, tε,n =

ln( Nnε (e2αT − 1) + 1) 2α

, n = 1, . . . , Nε .

(8.51)

Probabilities pε,n,± take in this case the very simple form, pε,n,± =

1 , n = 1, . . . , Nε . 2

(8.52)

Finally, parameters λε,n are given by the following formulas, λε,n = µ0 (tε,n ) − µ0 (tε,n−1 ) = 0, n = 1, . . . , Nε .

(8.53)

By Lemma 7.3.2, all conditions of Theorems 7.3.1 – 7.3.4 hold for processes Yε (t) and, therefore, the asymptotic relations given in these theorems take place for optimal expected rewards and reward functions of the log-price processes Yε (t).

8.4.2 Approximation of rewards for diffusion log-price processes based on space truncation of drift and diffusion functional coefficients Let Y 0 (t), t ∈ [0, T ] and Y 00 (t), t ∈ [0, T ] be two univariate diffusion processes, which are solutions of two stochastic differential equations, dY 0 (t) = µ0 (t, Y 0 (t)) + σ 0 (t, Y 0 (t))dW (t), t ∈ [0, T ],

(8.54)

dY 00 (t) = µ00 (t, Y 00 (t)) + σ 00 (t, Y 00 (t))dW (t), t ∈ [0, T ],

(8.55)

and 0

00

where: (a) Y (0) = Y (0) = Y is a random variable taking values in the space R1 such that EY 2 < ∞; (b) W (t), t ∈ [0, T ] is a standard univariate Wiener process; (c) the random variable Y and the process W (t), t ∈ [0, T ] are independent; (d) µ0 (t, y) and µ00 (t, y) are Borel functions acting from the space [0, T ] × R1 to the space R1 , such that µ0 (t, y) = µ00 (t, y), for t ∈ [0, T ], |y| ≤ M ; (e) σ 0 (t, y) and σ 00 (t, y) are Borel functions acting from the interval [0, T ] × R1 to the interval [0, ∞), such that σ 0 (t, y) = σ 00 (t, y), for t ∈ [0, T ], |y| ≤ M .

446

8

Convergence of option rewards for diffusion LPP

We assume also that conditions G15 and G16 holds for both processes Y 0 (t) and Y 00 (t). This guarantees that diffusion processes Y 0 (t) and Y 00 (t) are the unique solutions, respectively, for equation (8.54) and (8.55) adapted to the filtration Ft = σ[Y, W (s), 0 ≤ s ≤ t], t ∈ [0, T ]. 0 00 Let us define hitting times τM = sup(t ≥ 0 : |Y 0 (t)| ≤ M ) ∧ T and τM = 00 0 00 sup(t ≥ 0 : |Y (t)| ≤ M ) ∧ T , where M ≥ 0, and let τM = τM ∧ τM We shall use well-known results, which can be found, for example in Gikhman and Skorokhod (1968) that, under model assumptions made in (8.54) – (8.55) and conditions G15 and G16 , 0 00 P{τM = τM = τM , sup |Y 0 (t) = Y 00 (t)| = 0} = 1.

(8.56)

0≤t≤τM

Let us also denote, for t ∈ [0, T ], 0 00 Yt,T = sup |Y 0 (s)|, Yt,T = sup |Y 00 (s)|. t≤s≤T

(8.57)

t≤s≤T

Let us assume that the following condition holds, for some β ≥ 0: C27 [β]: sup0≤t≤T,y∈R1

Ey,t e

βY 0 t,T

βY 00 ∨Ey,t e t,T β|y| e

≤ K100 , for some 1 ≤ K100 < ∞.

Let g(t, ey ), (t, y) ∈ [0, T ] × R1 be a real-valued Borel pay-off function. We assume that it satisfies condition B12 [γ] holds for the pay-off function g(t, ey ). Finally, let φ0t (y) and φ00t (y) be reward functions, respectively, for the log-price process Y 0 (t) and Y 00 (t), and the pay-off function g(t, ey ). As follows from proofs of Lemmas 4.1.4 and 4.1.6, conditions C27 [β] and B12 [γ] imply, under assumption that 0 ≤ γ ≤ β, that |φ0t (y)|, |φ00t (y)| < ∞, [0, T ] × R1 . Let us now replace the model assumption EY 2 < ∞, by stronger condition: D33 [β]: Eeβ|Y | ≤ K101 , for some 1 ≤ K101 < ∞. Let Φ0 and Φ00 be optimal expected rewards, respectively, for the log-price process Y 0 (t) and Y 00 (t) and the pay-off function g(t, ey ). As follows from proofs of Lemma 4.1.6, conditions B12 [γ], C27 [β] and D33 [β] imply that |Φ0 |, |Φ00 | < ∞. In what follows we shall use the following two lemmas. Lemma 8.4.1. Let the model assumptions made in (8.54) and conditions G15 and G16 hold for the diffusion log-price processes Y 0 (t) and Y 00 (t). Let, also, conditions B12 [γ] and C27 [β], D33 [β] hold, and 0 ≤ γ < β. Then, there exists a constant 0 ≤ M102 < ∞ such that the following inequality holds, |Φ0 − Φ00 | ≤ M102 e−(β−γ)M .

(8.58)

8.4

Rewards approximations for mean-reverse diffusion LPP

447

Proof. Let us take an arbitrary δ > 0. Let M0max,0,T and M00max,0,T be, respectively, classes of all Markov moments 0 ≤ τ 0 ≤ T and 0 ≤ τ 00 ≤ T adapted to filtration Ft0 = σ[Y 0 (t), 0 ≤ s ≤ t], t ∈ [0, T ] and Ft00 = σ[Y 00 (t), 0 ≤ s ≤ t], t ∈ [0, T ]. It is useful to note that the above tree filtrations are connected by relation Ft0 , Ft00 ⊆ Ft , t ∈ [0, T ]. By the definition of the optimal expected rewards Φ0 and Φ00 there exist δoptimal Markov moments τδ0 ∈ M0max,0,T and τδ00 ∈ M00max,0,T such that the following inequalities take place, 0

Φ0 − δ ≤ Eg(τδ0 , eτδ ) ≤ Φ0 ,

(8.59)

and 00

Φ00 − δ ≤ Eg(τδ00 , eτδ ) ≤ Φ00 . 0

(8.60) 00

(τδ0

(τδ00

) = f 00 (Y 00 (·)) )) = f 0 (Y 0 (·)) and g(τδ00 , eY Random variables g(τδ0 , eY 0 are random functionals defined on trajectories of processes Y (·) = < Y 0 (t), t ∈ [0, T ] > and Y 00 (·) = < Y 00 (t), t ∈ [0, T ] > considered as random variables taking values in the space of real-valued continuous functions C[0,T ] . This and relation (8.56) imply the following relation 0 00 Ef 0 (Y 0 (·))I(Y0,T ≤ M ) = Ef 00 (Y 00 (·))I(Y0,T ≤ M ).

(8.61)

Using conditions of the lemma, we get the following inequality, 0 E|f 0 (Y 0 (·)|I(Y0,T > M ) = E|g(τδ0 , eY

≤ E(L23 + L23 L24 eγ|Y

0

(τδ0 )|

0

(τδ0 )

0 )|I(Y0,T > M)

0 )I(Y0,T > M)

0

0 > M) ≤ L23 (1 + L24 )EeγY0,T I(Y0,T Z∞ 0 = L23 (1 + L24 ) eÊγy P{Y0,T ∈ dy} M −(β−γ)M

Z∞

≤ L23 (1 + L24 )e

0 eÊβy P{Y0,T ∈ dy}

M 0

−(β−γ)M

EeβY0,T Z∞ 0 −(β−γ)M = L93 (1 + L24 )e Ey,0 eβY0,T P{Y ∈ dy} ≤ L23 (1 + L24 )e

≤ L23 (1 + L24 )e−(β−γ)M

−∞ Z∞

K100 eβ|y| P{Y ∈ dy}

−∞

≤ L23 (1 + L24 )K100 K101 e−(β−γ)M = δM .

(8.62)

448

8

Convergence of option rewards for diffusion LPP

We can using conditions of the lemma, we can get the analogous inequality, E|f 00 (Y 00 (·)|I(YT00 > M ) ≤ δM .

(8.63)

Using relations (8.59) – (8.60) and (8.61) – (8.63), we get the following relation, Φ0 ≤ Eg(τδ0 , eY =

0

(τδ0 )

)+δ

0 0 Eg(τδ0 , eY (τδ ) )I(YT0

= Eg(τδ00 , eY

00

(τδ00 )

≤ Eg(τδ00 , eY

00

(τδ0 )

+ E|g(τδ0 , eY

0

≤ M ) + Eg(τδ0 , eY

0

(τδ0 )

)I(YT00 ≤ M ) + Eg(τδ0 , eY

) + E|g(τδ00 , eY

(τδ0 )

00

(τδ00 )

0

)I(YT0 > M ) + δ

(τδ0 )

)I(YT0 > M ) + δ

)|I(YT00 > M )

)|I(YT0 > M ) + δ

≤ Φ00 + 2δM + 2δ.

(8.64)

Since an arbitrary choice of δ > 0, inequality (8.64) implies that the following inequalities take place, Φ0 ≤ Φ00 + 2δM . (8.65) Analogously we can get the following inequality, Φ00 ≤ Φ0 + 2δM .

(8.66)

If Φ00 ≤ Φ0 then inequality (8.65) implies that Φ00 ≤ Φ0 ≤ Φ00 + 2δM and, thus, inequality (8.58) holds. Analogously, if Φ0 ≤ Φ00 then inequality (8.66) implies that Φ0 ≤ Φ00 ≤ Φ0 + 2δM and, again, inequality (8.58) holds.  Remark 8.4.1. The explicit expression for constant M41 is given by the following formula, which follows from relation (8.62), M102 = 2L23 (1 + L24 )K100 K101 .

(8.67)

Lemma 8.4.2. Let the model assumptions made in (8.54) and conditions G15 and G16 hold for the diffusion log-price processes Y 0 (t) and Y 00 (t). Let, also, conditions B12 [γ] and C27 [γ], hold and 0 ≤ γ < β. Then, there exists a constant 0 ≤ M103 < ∞ the following inequality holds, for t ∈ [0, T ], y ∈ R1 , |φ0t (y) − φ00t (y)| ≤ M103 eβ|y| e−(β−γ)M .

(8.68)

Proof The inequality (8.58) does not requires a separate proof, since the reward functions φ0t (y) and φ00t (y) coincide, respectively, with optimal expected rewards Φ0 and Φ00 , for the corresponding shifted log-price processes Yt0 (s) = Y 0 (t + s), s ∈ [0, T − t] and Yt00 (s) = Y 00 (t + s), s ∈ [0, T − t], the pay-off function gt (s, ey ) = g(t + s, ey ), s ∈ [0, T − t], y ∈ R1 and degenerated initial distribution I(y ∈ A), for every t ∈ [0, T ] (see, Subsection 5.3.1). In this case, condition D33 [β] holds, with constant K101 = eβ|y| . 

8.4

Rewards approximations for mean-reverse diffusion LPP

449

Remark 8.4.2. The explicit expression for constant M103 is given by the following formula, which follows from relation (8.67) and the last remark in the proof of Lemma 8.4.1, M103 = 2L23 (1 + L24 )K100 .

(8.69)

8.4.3 Asymptotic reward approximations for diffusion log-price processes based on the space truncation of drift and diffusion functional coefficients Let Yε (t), t ∈ [0, T ] be, for every ε ∈ [0, ε0 ], an univariate diffusion process, which is solution of the stochastic differential equation, dYε (t) = µε (t, Yε (t)) + σε (t, Yε (t))dW (t), t ∈ [0, T ],

(8.70)

where: (a) Yε (0) = Y0 (0) with probability 1, for every ε ∈ [0, ε0 ], where Y0 (0) is a random variable taking values in the space R1 ; (b) W (t), t ∈ [0, T ] is a standard univariate Wiener process; (c) the random variable Y0 (0) and the process W (t), t ∈ [0, T ] are independent; (d) µε (t, y) is a Borel function acting from the space [0, T ]× R1 to the space R1 ; (e) σε (t, y) is a Borel function acting from the interval [0, T ] × R1 to the interval [0, ∞). We assume also that conditions G15 and G16 hold for the process Y0 (t). This guarantees that the diffusion process Y0 (t) is the unique solution for equation (8.70), for ε = 0, which is adapted to the filtration Ft = σ[Y, W (s), 0 ≤ s ≤ t], t ∈ [0, T ]. We assume that coefficients µε (t, y) and σε (t, y) satisfy the following truncation condition: G29 : (a) µε (t, y) = µ0 (t, y), t ∈ [0, T ], |y| ≤ Mε ; (b) σε (t, y) = σ0 (t, y), t ∈ [0, T ], |y| ≤ Mε ; (c) 0 < Mε → ∞ as ε → 0. It is easily seen that condition G29 imply that, if condition G15 and G16 hold for process Y0 (t), then these conditions also hold for process Yε (t) for every ε ∈ (0, ε0 ]. This guarantees that the diffusion process Yε (t) is, for every ε ∈ (0, ε0 ], the unique solution for equation (8.54), which is adapted to the filtration Ft = σ[Y, W (s), 0 ≤ s ≤ t], t ∈ [0, T ]. Let us denote, for t ∈ [0, T ], Yε,t,T = sup |Yε (s)|, t≤s≤T

and assume that the following condition holds, for some β ≥ 0: C28 [β]: limε→0 sup0≤t≤T,y∈R1

Ey,t eβYε,t,T eβ|y|

< K102 , for some 1 ≤ K102 < ∞.

(8.71)

450

8

Convergence of option rewards for diffusion LPP

Condition C28 [β] implies that there exists ε67 ∈ (0, ε0 ] such that for any ε ∈ [0, ε67 ], Ey,t eβYε,t,T sup ≤ K102 . (8.72) eβ|y| 0≤t≤T,y∈R1 Let us g0 (t, ey ), (t, y) ∈ [0, T ] × R1 be a real-valued Borel pay-off function. We assume that it satisfies condition B12 [γ]. Let us also assume that that the following condition holds: D34 [β]: Eeβ|Y0 (0)| ≤ K103 , for some 1 ≤ K103 < ∞. (ε)

Let φt (y) be, for every ε ∈ (0, ε0 ], the reward function for the log-price process Yε (t) and the pay-off function g(t, ey ). As follows from Lemmas 4.1.4, conditions B12 [γ], C28 [β] imply, under as(ε) sumption that 0 ≤ γ ≤ β, that |φt (y)| < ∞, [0, T ] × R1 , for every ε ∈ [0, ε67 ]. Let Φε be, for every ε ∈ (0, ε0 ], the optimal expected reward for the log-price process Yε (t) and the pay-off function g(t, ey ). As follows from proofs of Lemma 4.1.6, conditions B12 [γ], C28 [β] and D34 [β] imply, under assumption that 0 ≤ γ ≤ β, that and |Φε | < ∞, for every ε ∈ [0, ε67 ]. The following theorems are corollaries of Lemmas 8.4.1 and 8.4.2. Theorem 8.4.1. Let the model assumptions made in (8.70) and conditions G15 , G16 and G29 hold for the diffusion log-price processes Yε (t). Let, also, conditions B12 [γ], C28 [β] and D34 [β] hold, and 0 ≤ γ < β. Then there exists a constant 0 ≤ M104 < ∞ such that the following inequality holds, for every ε ∈ (0, ε67 ], |Φ0 − Φε | ≤ M104 e−(β−γ)Mε → 0 as ε → 0.

(8.73)

Theorem 8.4.2. Let the model assumptions made in (8.70) and conditions G15 , G16 and G29 hold for the diffusion log-price processes Yε (t). Let, also, conditions B12 [γ] and C28 [β] hold, and 0 ≤ γ < β. Then, there exists a constant 0 ≤ M105 < ∞ such that the following inequality holds, for t ∈ [0, T ], y ∈ R1 and ε ∈ (0, ε67 ], |φt (y) − φt (y)| ≤ M105 eβ|y| e−(β−γ)Mε → 0 as ε → 0. (0)

(ε)

(8.74)

Proof. In order to get inequalities (8.73) and (8.74), one can apply Lemmas 8.4.1 and 8.4.2 to the pair of log-price processes Y0 (t) and Yε (t), for every ε ∈ (0, ε67 ]. In this case, processes Y0 (t) and Yε (t) play roles, respectively, of the processes Y 0 (t) and Y 00 (t). Relation (9.7) replaces condition C27 [β]. Condition D34 [β] replaces condition D33 [β]. Respectively, constant K102 replaces constant K100 and constant K103 replaces constant K101 . 

8.4

Rewards approximations for mean-reverse diffusion LPP

451

Remark 8.4.3. Constants M104 and M105 are given by the following formula, which follows from formula (8.67), M104 = 2L23 (1 + L24 )K102 K103 , M105 = 2L23 (1 + L24 )K102 .

(8.75)

Remark 8.4.4. Parameter ε67 is defined by relation (9.7). What is important that in the case, where condition G29 holds, the approximating log-price process Yε (t) has, for every ε ∈ (0, ε0 ], bounded drift and volatility functional coefficients even if the drift and volatility coefficients µ0 (t, y) and σ0 (t, y) are only locally bounded, i.e., bounded in every set [0, T ] × {y : |y| ≤ H}, 0 ≤ H < ∞. Indeed in this case, (|µε (t, y)| + σε (t, y)

sup 0≤t≤T,y∈R1

=

sup

(|µ0 (t, y)| + σ0 (t, y)) < ∞.

(8.76)

0≤t≤T,|y|≤Mε

This makes it possible to use asymptotic relations given in Theorems 8.1.1 – 8.3.4 (under conditions of these theorems) for computing approximative values for (ε) the optimal expected reward Φε and the reward function φt (y), for ε ∈ (0, ε0 ]. Asymptotic relations given in Theorems 8.4.1 and 8.4.2, let one use the approximative values for the optimal expected reward Φε and the reward function (ε) φt (y) for approximation of the the optimal expected reward Φ0 and the reward (0) function φt (y). Moreover, in the approximation inequalities given in Theorems 8.4.1 and 8.4.2 can, at least in principle, be used to find for a given δ > 0 a value εδ ∈ (0, ε0 ] such (0) (ε ) that |Φ0 − Φεδ | ≤ δ and |φt (y) − φt δ (y)| ≤ δ. Then, the approximations given in Theorems 8.1.1 – 8.3.4 can be applied to (ε ) find approximative values of quantities Φ0εδ and φt δ (y), which, in this case, will (0) serve as approximative δ-optimal values for the rewards Φ0 and φt (y).

8.4.4 Asymptotic reward approximations for mean-reverse diffusion log-price processes based on the space truncation of drift and diffusion functional coefficients Let us illustrate the approach described above by applying Theorems 8.4.1 and 8.4.2 to the model of mean-reverse diffusion log-price processes introduced in Subsection 4.5.1. Due to transformation relations (4.253) and (4.255) given in Subsection 4.5.2, we can restrict consideration by a diffusion log-price process Y0 (t), which has a drift coefficient µ0 (t, y) and a volatility coefficient 1.

452

8

Convergence of option rewards for diffusion LPP

We assume that conditions G20 and G21 (which in this case replace conditions G15 and G16 ) hold for the functional drift coefficient µ0 (t, y). These conditions take the following form: 0 G020 : For any 0 < H < ∞, there exists a constant 1 ≤ K68,H < ∞, such that

sup0≤t≤T,|y0 |,|y00 |≤H,y0 6=y00

|µ0 (t,y 0 )−µ0 (t,y 00 )| |y 0 −y 00 |

0 < K68,H .

and G021 : sup0≤t≤T,y∈R1

|µ0 (t,y)| 1+|y|

< K69 , for some 1 ≤ K69 < ∞.

In this case, Y0 (t) is a diffusion process given as the solution of the stochastic differential equation, dY0 (t) = µ0 (t, Y0 (t)) + dW (t), t ∈ [0, T ],

(8.77)

where: (a) W (t), t ∈ [0, T ] is a standard Wiener process; (b) random variable Y0 (0) and the process W (t), t ∈ [0, T ] are independent. Conditions G020 and G021 imply that the diffusion process Y0 (t) is the unique solution of the stochastic differential equation (8.77) adapted to the filtration Ft = σ[Y0 (0), W (s), 0 ≤ s ≤ t], t ∈ [0, T ]. Let us also assume that the mean-reverse condition G28 holds for the process Y0 (t). This condition takes the form: G028 : There exists D > 0 such that µ0 (t, y) ≤ 0, for y ≥ D, 0 ≤ t ≤ T and µ0 (t, y) ≥ 0, for y ≤ −D, 0 ≤ t ≤ T . Let us now define the truncated functional drift coefficient,  for t ∈ [0, T ], y > Mε ,  µ0 (t, Mε ) µε (t, y) = µ0 (t, y) for t ∈ [0, T ], |y| ≤ Mε ,  µ0 (t, −Mε ) for t ∈ [0, T ], y < Mε ,

(8.78)

where: 0 < Mε → ∞ as ε → 0. Obviously, condition G29 holds for the truncated drift coefficients µε (t, y) given by relation (8.78) and the volatility coefficient identically equals 1. Also conditions G020 and G021 holds for the truncated functional drift coefficient µε (t, y), for every ε ∈ [0, ε0 ], if these conditions hold for the the functional drift coefficient µ0 (t, y). Let now Yε (t), t ∈ [0, T ] be, for every ε ∈ (0, ε0 ], a diffusion process given as the unique solution of the stochastic differential equation, dYε (t) = µε (t, Yε (t)) + dW (t), t ∈ [0, T ],

(8.79)

which: (a) is based on the same (with process Y0 (t)) Wiener process W (t) and, also, (b) connected with the process Y0 (t) by the assumption that Yε (0) = Y0 (0) with probability 1, for every ε ∈ [0, ε0 ].

8.4

Rewards approximations for mean-reverse diffusion LPP

453

Since conditions G020 and G021 holds for the truncated functional drift coefficient µε (t, y), for every ε ∈ [0, ε0 ], the diffusion process Yε (t) is the unique solution of the stochastic differential equation (8.79) adapted to the filtration Ft = σ[Y0 (0), W (s), 0 ≤ s ≤ t], t ∈ [0, T ]. Since Mε → ∞ as ε → 0, there exists ε68 ∈ (0, ε0 ] such that, for ε ∈ (0, ε68 ], Mε ≥ D.

(8.80)

It is obvious that, if condition G028 (with the threshold mean-reverse parameter D) holds for the the functional drift coefficient µ0 (t, y), then condition G028 (with the same threshold mean-reverse parameter D) holds for the truncated functional drift coefficient µε (t, y), for every ε ∈ (0, ε68 ]. By Lemma 4.5.3, conditions G020 , G021 , and G028 imply that condition C28 [β] holds, for any β ≥ 0, for the processes Yε (t), for ε ∈ [0, ε68 ]. Indeed, Lemma 4.5.3 can be applied to the process Y0 (t) and, also, to the process Yε (t), for every ε ∈ (0, ε68 ]. Moreover, upper bounds Ey,t eβYε,t,T ≤ 2M68 (β, T )eD+1 eβ|y| , t ∈ [0, T ], y ∈ R1 (constant M68 (β, T ) is defined in (4.237)), given in Lemma 4.5.3, depend only of parameters D and β). Thus, condition C28 [β] holds, and the constant penetrating condition C28 [β] is given by formula K102 = 2M68 (β, T )eD+1 . Let now g(t, ey ), (t, y) ∈ [0, T ] × R1 be some real-valued Borel pay-off function, for which condition B12 [γ] holds, for some γ ≥ 0. The following two theorems, which are corollaries of Theorems 8.4.1 and 8.4.2, summarize the above remarks. Theorem 8.4.3. Let the model assumptions made in (8.77) – (8.80) hold for the mean-reverse diffusion log-price processes Yε (t) and conditions G020 , G021 and G028 hold. Let, also, conditions B12 [γ] and D34 [β] hold, and 0 ≤ γ < β. Then there exists a constant 0 ≤ M106 < ∞ such that the following inequality holds, for every ε ∈ (0, ε68 ], |Φ0 − Φε | ≤ M106 e−(β−γ)Mε → 0 as ε → 0.

(8.81)

Theorem 8.4.4. Let the model assumptions made in (8.77) – (8.80) hold for the mean-reverse diffusion log-price processes Yε (t) and conditions G020 , G021 and G028 hold. Let, also, conditions B12 [γ] holds and 0 ≤ γ < β. Then, there exists a constant 0 ≤ M107 < ∞ such that the following inequality holds for t ∈ [0, T ], y ∈ R1 and ε ∈ (0, ε68 ], |φt (y) − φt (y)| ≤ M107 eβ|y| e−(β−γ)Mε → 0 as ε → 0. (0)

(ε)

(8.82)

Remark 8.4.5. Constants M106 and M107 are given by the following formula, which follows from formula (8.67), M106 = 4L23 (1 + L24 )M68 (β, T )eD+1 K103 , M107 = 4L23 (1 + L24 )M68 (β, T )eD+1 .

(8.83)

454

8

Convergence of option rewards for diffusion LPP

Remark 8.4.6. Parameter ε68 is defined by relation (8.80). In conclusion, let us consider the case, where the process Y0 (t) = Y˜ (t) is the diffusion process with unit volatility coefficient, which is the result of transformation (4.211) of another diffusion process Y (t) given by the stochastic differential equation (4.209). Conditions G20 – G21 and G23 – G26 are involved for supporting the above transformation. As was shown in Subsection 4.5.1, if conditions G20 , G21 , G23 and G24 holds for the diffusion type log-price process Y (t), then conditions G20 , G21 hold for the transformed process Y˜ (t) = f (t, Y (t)) (the transformation function f (t, y) is defined in relation (4.210)). Also, as was pointed out in the proof of Lemma 4.5.2, if condition D34 [β] holds for the process Y (t), then condition D34 [βσ] holds for the transformed process Y˜ (t). Also, the pay-off function g0 (t, ey ) = g˜(t, ey ) (the transformed pay-off function g˜(t, ey ) is defined in relation (4.254)), which is the result of transformation (4.254) of another pay-off function g(t, ey ) is used. As was pointed out in the proof of Lemma 4.5.6, if condition B12 [γ] holds for the pay-off function g(t, ey ), then condition B12 [γ 0 ] holds for the transformed pay-off function g˜(t, ey ) (with parameter γ 0 = γσ 0 and the same constants L23 and L24 ). Finally we should take into account transformation relations (4.253) and (4.255) given in Subsection 4.5.2, according to which the corresponding reward functionals, for the log-price processes Y (t) and the pay off-function g(t, ey ) and the transformed log-price processes Y˜ (t) and the transformed pay-off function ˜ and φt (f −1 (0, y)) = φ˜t (y), t ∈ [0, T ], y ∈ g˜(t, ey ), are connected by relations Φ = Φ R1 . We can apply Theorems 8.4.3 and 8.4.4 to the process Y0 (t) = Y˜ (t), t ∈ [0, T ] and the corresponding truncated processes Yε (t) given by the stochastic differential equation (8.79). (0) ˜ = Φ0 and φ˜t (˜ In this case, reward functionals Φ y ) = φt (y), t ∈ [0, T ], y ∈ R1 , for the process Y0 (t) = Y˜ (t) and the pay-off function g˜(t, ey ), are approximated (ε) by the corresponding reward functionals Φε and φt (y), t ∈ [0, T ], y ∈ R1 , for the truncated process Yε (t) and the pay-off function g˜(t, ey ). The following two theorems, which are corollaries of Theorems 8.4.3 and 8.4.4, summarize the above remarks. Theorem 8.4.5. Let Y (t) be the mean-reverse diffusion log-price process given by the stochastic differential equation (4.209) and Y˜ (t) is the corresponding transformed mean-reverse diffusion log-price process given by transformation relation (4.211). Let conditions G20 – G21 , G23 – G26 and D34 [β] hold for the process Y (t) and condition G28 holds for the transformed process Y˜ (t) Let, also, condi-

8.4

Rewards approximations for mean-reverse diffusion LPP

455

tion B12 [γ] hold for the pay-off function g(t, ey ) and 0 ≤ γσ 0 < βσ. Then there exists a constant 0 ≤ M108 < ∞ such that the following inequality holds, for every ε ∈ (0, ε68 ], 0 |Φ − Φε | ≤ M108 e−(βσ−γσ )Mε → 0 as ε → 0. (8.84) Theorem 8.4.6. Let Y (t) be the mean-reverse diffusion log-price process given by the stochastic differential equation (4.209) and Y˜ (t) is the corresponding transformed mean-reverse diffusion log-price process given by transformation relation (4.211). Let conditions G20 – G21 , G23 – G26 hold for the process Y (t) and condition G28 holds for the transformed process Y˜ (t). Let, also, condition B12 [γ] hold for the pay-off function g(t, ey ) and 0 ≤ γσ 0 < βσ. Then, there exists a constant 0 ≤ M109 < ∞ such that the following inequality holds for t ∈ [0, T ], y ∈ R1 and ε ∈ (0, ε68 ], 0

|φt (f −1 (0, y)) − φt (y)| ≤ M109 eβσ|y| e−(βσ−γσ )Mε → 0 as ε → 0. (ε)

(8.85)

Remark 8.4.7. Constants M108 and M109 are given by the following formula, which follows from formula (8.83), M108 = 4L23 (1 + L24 )M68 (βσ, T )eD+1 K103 , M109 = 4L23 (1 + L24 )M68 (βσ, T )eD+1 .

(8.86)

9 European, knockout, reselling and random pay-off options In Chapter 9, we presents results about convergence of option rewards for European, knockout and reselling-type American-type options and American-type options with random pay-off. These models give examples of applications for reward approximation results presented in Chapters 4 – 8. They, also, show new directions for research studies on stochastic approximation methods for American-type options. In Section 9.1, we describe a scheme of embedding European-type options into the models of American-type options. This makes it possible to apply stochastic approximation methods, developed for American-type options to European-type options. In Section 9.2, we show, how knockout American-type options can be embedded into models of usual American-type options, by adding to the log-price processes additional knockout index component. This embedding makes it possible to expand reward approximation results obtained for usual American-type options to knockout American-type options. In Section 9.3, we consider the model of reselling of European options for classical model with a log-price process represented by Brownian motion correlated with a mean-reverse Ornstein-Uhlenbeck process representing stochastic implied volatility. It is shown that the reselling model can be interpreted, in this case, as an American-type option for the above bivariate diffusion process, with the payoff function given by the Black-Scholes formula for the price of initial European option. The binomial-trinomial-tree reward approximations are designed and their convergence is proved. In Section 9.4, we introduce American-type options with càdlàg random payoff functions and give some results about convergence of rewards for such options. This is a new prospective direction for research studies. The main reward approximation results are presented in Theorems 9.1.1–9.1.2, for European-type options, in Theorems 9.2.1–9.2.2, for knockout American-type options, in Theorems 9.3.1–9.3.2, for reselling options, and in Theorems 9.4.1– 9.4.8, for American-type options with random pay-off. Results, presented in Sections 9.1–9.2 were not published earlier. Section 9.3 is based on results of papers by Lundgren, Silvestrov and Kukush (2008) and Lundgren and Silvestrov (2011), Section 9.4 is based on results of papers by Silvestrov and Li (2013, 2015).

9.1 Reward approximations for European-type options

457

9.1 Reward approximations for European-type options In this section, we present results concerned reward approximations for Europeantype options.

9.1.1 Reward approximation for European- and American-type options for multivariate modulated Markov log-price processes ~ ε (t) = (Y ~ε (t), Xε (t)), t ∈ [0, T ] be, for every ε ∈ [0, ε0 ], a càdlàg multivariate Let Z modulated Markov log-price process with a phase space Z = Rk × X (X is a Polish space), an initial distribution Pε (A), and transition probabilities Pε (t, ~z, t + u, A). Remind that we assume that the above transition probabilities are measurable as function of argument (t, ~z, u). Let also gε (t, e~y , x), (t, ~ y , x) ∈ [0, T ] × Rk × X be, for every ε ∈ [0, ε0 ], a realvalued measurable pay-off function. Let us assume that T > 0 and let Π = h0 = t0 < t1 = T i be simplest of the interval [0, T ]. (ε) Let us recall the class Mmax,0,T of all Markov moments 0 ≤ τε ≤ T for the ~ ε (s) and the class M(ε) ˆ (ε) process Z of all Markov moments τε,0 from M Π,0,T

Π,0,T

~ ε (0)] which only take values 0 and T and that the event {ω : τε,0 (ω) = 0} ∈ σ[Z ~ ~ and {ω : τε,0 (ω) = T } ∈ σ[Zε (0), Zε (T )]. In this case, the reward functions for an American-type option are defined for ~z = (~ y , x) ∈ Z, (ε)

(ε)

φT (MΠ,0,T , ~ y , x) = gε (T, e~y , x),

(9.1)

and (ε)

~

(ε)

φ0 (MΠ,0,T , ~ y , x) = max gε (0, e~y , x), E~z,0 Egε (T, eYε (T ) , Xε (T )) Z  0 ~ y = max gε (0, e , x), gε (T, e~y , x0 )Pε (0, ~z, T, d~z0 ,

(9.2)

Z

where ~z0 = (~ y 0 , x0 ), while the optimal expected reward, Z (ε) (ε) Φε = Φ(MΠ,0,T ) = φ0 (~ y , x)Pε (d~z).

(9.3)

Z

At the same time, the reward function for the corresponding European option, (ε) (ε) φ˜T (MΠ,0,T , ~ y , x) = gε (T, e~y , x),

(9.4)

458

9

European, knockout, reselling and random pay-off options

and ~ (ε) (ε) y , x) = E~z,0 Egε (T, eYε (T ) , Xε (T ) φ˜0 (MΠ,0,T , ~ Z 0 = gε (T, e~y , x0 )Pε (0, ~z, T, d~z0 ),

(9.5)

Z

while the corresponding optimal expected reward, Z (ε) (ε) ˜ ε = Φ(M ˜ φ˜0 (~ y , x)Pε (d~z). Φ ) = Π,0,T

(9.6)

Z

Let us assume that the following condition holds, for some vector parameter ¯ β = (β1 , . . . , βk ) with non-negative components: ¯ limε→0 sup~z∈Z E~z,t eβi |Yε,i (T )−yi | < K104,i , i = 1, . . . , k, for some 1 ≤ C29 [β]: K104,i < ∞, i = 1, . . . , k. ¯ implies that there exists ε69 ∈ (0, ε0 ] such that for any Condition C29 [β] ε ∈ [0, ε69 ] and i = 1, . . . , k, sup ~ z ∈Z

E~z,t eβi |Yε,i (T )| ≤ K104,i . eβi |yi |

(9.7)

Let also assume that the following condition holds, for some vector parameter γ¯ = (γ1 , . . . , γk ) with non-negative components: B50 [¯ γ ]: limε→0 supy∈R1

~ ~ |gε (0,ey ,x)|∨|gε (T,ey ,x)|

1+

L96,i < ∞, i = 1, . . . , k.

Pk i=1

L96,i eγi |yi |

< L95 , for some 0 < L95 < ∞, 0 ≤

Condition B50 [¯ γ ] implies that there exists ε70 ∈ (0, ε0 ] such that for any ε ∈ [0, ε70 ], |gε (0, e~y , x)| ∨ |gε (T, e~y , x)| < L95 . (9.8) sup Pk y∈R1 1 + i=1 L96,i eγi |yi | Note that both relations (9.7) and (9.8) hold for ε ∈ (0, ε71 ], where ε71 = ε69 ∧ ε70 . The following lemma is a particular variant of Theorem 5.1.1∗ . ¯ hold and 0 ≤ γi ≤ βi for Lemma 9.1.1. Let conditions B50 [¯ γ ] and C29 [β] every i = 1, . . . , k. Then, there exist constants 0 ≤ M49 , M50,1 , . . . , M50,k < ∞, such that the following relation holds for every ε ∈ [0, ε71 ] and ~z = (~ y , x) ∈ Z, (ε)

(ε)

(ε)

(ε)

|φ0 (MΠ,0,T , ~ y , x)| ∨ |φT (MΠ,0,T , ~ y , x)| ≤ M110 +

k X

M111,i eγi |yi | .

(9.9)

i=1

Remark 9.1.1. The explicit expressions for constants M110 , M111,i , i = 1, . . . , k are given by the following formulas, which are variants of formulas given

9.1

European-type options

459

in Remark 5.1.6∗ , γi β

i I(γi > 0). M110 = L95 , M111,i = L95 L96,i I(γi = 0) + L95 L96,i K104,i

(9.10)

Let us also assume that the following condition holds, for some vector parameter β¯ = (β1 , . . . , βk ) with non-negative components: ¯ limε→0 Eeβi |Yε,i (0)| ≤ K105,i , i = 1. . . . , k, for some 1 ≤ K105,i < ∞, i = D35 [β]: 1, . . . , k. ¯ implies that there exists ε72 ∈ (0, ε0 ] such that for any Condition D35 [β] ε ∈ [0, ε72 ] and i = 1, . . . , k, Eeβi |Yε,i (0)| ≤ K105,i .

(9.11)

Note that all relations, (9.7), (9.9) and (9.11), hold for ε ∈ [0, ε73 ], where ε73 = ε69 ∧ ε70 ∧ ε72 . The following lemma is a particular variant of Theorem 5.1.3∗ . ¯ and D35 [β] ¯ hold and 0 ≤ γi ≤ βi Lemma 9.1.2 Let conditions B50 [¯ γ ], C29 [β] for every i = 1, . . . , k. Then, there exists constant 0 ≤ M112 < ∞, such that the following relation holds, for every ε ∈ [0, ε73 ], (ε)

|Φ(MΠ,0,T )| ≤ M112 .

(9.12)

Remark 9.1.2. The explicit expression for constant M112 is given by the following formulas, which are variants of formulas given in Remark 5.1.13∗ , M112 = L95 +

X

L95 L96,i +

i:γi =0

X

γi β

γi β

i i K105,i . L95 L96,i K104,i

(9.13)

i:γi >0

Let now replace the pay-off function gε (t, e~y , x) by a new pay-off function (ε) (ε) gˆε (t, e~y , x) in such way that the reward function φˆ0 (MΠ,0,T , ~ y , x), for a new (ε) (ε) American-type option, would coincide with the reward function φ˜ (M ,~ y, 0

Π,0,T

x), for the European-type option with the initial pay-off function gε (t, e~y , x). Inequality (9.9) given in Lemma 9.1.1 gives a hint to define a new pay-off function in the following way, for every ε ∈ [0, ε71 ], gˆε (t, e~y , x) =



−(M110 + gε (t, e~y , x)

Pk

i=1

M111,i eγi |yi | )

for t = 0, for t = T.

(9.14)

(ε) (ε) Note that the reward function φ˜0 (MΠ,0,T , ~ y , x) for the European-type option does not depend on the values of pay-off function gε (0, e~y , x) and, thus, the change of these values does not affect the values of the above reward function (ε) (ε) φ˜0 (MΠ,0,T , ~ y , x).

460

9

European, knockout, reselling and random pay-off options

Relation (9.2) and inequality (9.9) imply, under conditions of Lemma 10.1.1, that for every ε ∈ [0, ε34 ] and ~z = (~ y , x) ∈ Z, ~

(ε)

(ε)

y , x) E~z,0 gε (T, eYε (T ) , Xε (T ) = φ˜0 (MΠ,0,T , ~ ≥ gˆε (0, e~y , x),

(9.15)

and, thus, the following relation holds for the reward function of a new American type option, for ε ∈ (0, ε71 ] and ~z = (~ y , x) ∈ Z, (ε) (ε) φˆ0 (MΠ,0,T , ~ y , x)

 ~ = max gˆε (0, e~y , x), E~z,0 Eˆ gε (T, eYε (T ) , Xε (T ))  ~ = max gˆε (0, e~y , x), E~z,0 Egε (T, eYε (T ) , Xε (T )) ~

= E~z,0 gε (T, eYε (T ) , Xε (T )) (ε) (ε) = φ˜0 (MΠ,0,T , ~ y , x).

(9.16)

It is useful to note that the new pay-off functions gˆε (0, e~y , x) satisfies condition γi ˆ 95 = L95 and L ˆ 96,i = L96,i K βi ≥ L96,i , i = 1, . . . , k, B50 [¯ γ ], with new constants L 104,i but with the same vector parameter γ¯ . ˆ 21 = Since the pay-off function gε (T, e~y , x) was not changed and constants L ˆ L21 and L22,i ≥ L22,i , i = 1, . . . , k, relation (9.14) implies that, under condition B50 [¯ γ ], the following inequality holds,for any ε ∈ [0, ε70 ], sup y∈R1

|ˆ gε (0, e~y , x)| ∨ |ˆ gε (T, e~y , x)| < L95 . Pk 1 + i=1 L96,i eγi |yi |

(9.17)

Also, it is useful to note that the pay of function gε (0, e~y , x) can be chosen with only one assumption that it satisfy inequality penetrating condition B50 [¯ γ ]. ~ y ~ y The simplest variant is to choose gε (0, e , x) = gε (T, e , x), for ~z = (~ y , x) ∈ Z. This let one simplify condition B50 [¯ γ ] and replace it by the following condition, assumed to hold, for some vector parameter γ¯ = (γ1 , . . . , γk ) with non-negative components: B050 [¯ γ ]: limε→0 supy∈R1

y ~

P|gkε (T,e0

1+

L096,i < ∞, i = 1, . . . , k.

i=1

,x)|

L96,i eγi |yi |

< L095 , for some 0 < L095 < ∞, 0 ≤

Relation (9.16) also implies that, under conditions of Lemma 9.1.2, the following relation holds, for every ε ∈ [0, ε73 ], Z (ε) (ε) ˆ ˆ Φε = Φ(MΠ,0,T ) = φˆ0 (~ y , x)Pε (d~z) Z

Z = Z

(ε) (ε) ˜ ˜ φ˜0 (~ y , x)Pε (d~z) = Φ(M Π,0,T ) = Φε .

(9.18)

9.1

European-type options

461

Remark 9.1.3. It is useful to note that, in the case, where gε (0, e~y , x) ≥ 0, (~ y , x) ∈ Rk ×X, a new pay-off function can be defined, for t = 0, as gˆε (0, e~y , x) = 0, (~ y , x) ∈ Rk × X. An analogous method of embedding European-type options into Americantype options can be used for more complex models based on multi-step partitions Π = h0 = t0 < · · · < tN = T i of time interval [0, T ]. In the case, of real-valued pay-off functions one should replace the values payoff functions gε (tn , e~y , x), for moments t0 , . . . , tN −1 by new values gˆε (tn , e~y , x) in such way that the reward functions and the optimal expected reward for a new American-type option would coincide with the reward functions and the optimal expected reward for an initial European-type option with the values of pay-off function gε (tN , e~y , x), for tN = T . This can be done in the way analogous to those presented in relation (9.14) or, in the case of non-negative pay-off functions in the simpler way pointed out in Remark 9.1.3.

9.1.2 Convergence of reward approximations for European-type options for multivariate modulated Markov log-price processes Relations (9.16) and (9.18) let one apply reward approximation results for American-type options presented in the 1st and 2nd volume of the book related to reward functionals for European-type options, for embedded discrete ~ ε (tn ). time multivariate modulated Markov processes Z Moreover, in this case, the corresponding partitions have the simplest onestep structure, i.e., Π = {0 = t0 < t1 = T } that simplifies formulations of the corresponding results. Also, relation (9.16) let one to omit the max-operation in the recurrence for(ε) (ε) mula (9.2) for the reward function φˆ0 (MΠ,0,T , ~ y , x) for the new American-type option. This leads to further simplifications in formulations. Let us, for example, formulate variant of Theorems 6.1.1 and 6.1.2 adapted to the above one-step model and formulated in terms of reward functions and optimal expected rewards for European-type options. ¯ and D35 [β] ¯ are just variants of condiIn this case, conditions B050 [¯ γ ], C29 [β] ¯ and D25 [β] ¯ used in the above theorems. tions B34 [¯ γ ], C19 [β] We impose on pay-off functions gε (T, e~y , x) the following condition of locally uniform convergence, which is an analogue of condition I8 [Π]: I22 : There exists set Z0T ∈ BZ , such that the pay-off function gε (T, e~yε , xε ) → g0 (T, e~y0 , x0 ) as ε → 0 for any ~zε = (~ yε , xε ) → ~z0 = (~ y0 , x0 ) ∈ Z0T .

462

9

European, knockout, reselling and random pay-off options

Pk Note that the function gˆε (0, e~y , x) = −(M49 + i=1 M50,i eγi |yi | ) is continuous and does not depend on ε. That is why the corresponding set Z00 of locally uniform convergence (in this case, it is the set of continuity for the above function) coincides with space Z. ~ ε (T ) ∈ We also impose on transition probabilities Pε,n (0, ~z, T, A) = P{Z ~ A/Zε (0) = ~z} the following condition of locally uniform weak convergence, which is an analogue of condition J1 [Π]: J23 : There exist sets Z000 , Z00T ∈ BZ such that: (a) Pε (0, ~zε , T, ·) ⇒ P0 (0, ~z0 , T, ·) as ε → 0, for any ~zε = (~ yε , xε ) → ~z0 = (~ y0 , x0 ) ∈ Z000 as ε → 0; (b) P0 (0, ~z0 , T, Z0T ∩ Z00T ) = 1, for every ~z0 ∈ Z00T , where Z0T is the set introduced in condition I22 . 0

00

A typical example is where the sets ZT , ZT are empty sets. Then condition J23 (b) obviously holds. 0 00 Another typical example is where sets ZT , ZT are at most finite or countable sets. Then the assumption that measures P0 (0, ~z0 , T, A), ~z0 ∈ Z000 have no atoms at 0 00 points from sets ZT , ZT implies that condition J23 (b) holds. One more example is where measures P0 (0, ~z0 , T, A), ~z0 ∈ Z000 are absolutely 0 00 continuous with respect to some σ-finite measure P (A) on BZ and P (ZT ), P (ZT ) = 0. This assumption also implies that condition J23 (b) holds. Finally, we assume to hold the condition of weak convergence on initial dis~ ε (0) ∈ A}, which is an analogue condition K16 : tribution Pε (A) = P{Z K39 : (a) Pε (·) ⇒ P0 (·) as ε → 0; (b) P0 (Z000 ) = 1, where Z000 is the set introduced in condition J40 . The following two theorems, which are corollaries of Theorem 6.1.1 and 6.1.2, give conditions of convergence for rewards of European-type options. ¯ hold and, for every i = Theorem 9.1.1. Let conditions B050 [¯ γ ] and C29 [β] 1, . . . , k, either 0 = γi = βi or 0 < γi < βi < ∞. Let also conditions I22 and J23 hold.Then, the following asymptotic relation holds, for every ~zε = (~ yε , xε ) → ~z0 = (~ y0 , x0 ) ∈ Z000 as ε → 0, (ε) (ε) (0) (0) φ˜0 (MΠ,0,T , ~ y , x) → φ˜0 (MΠ,0,T , ~ y , x) as ε → 0.

(9.19)

¯ and D35 [β] ¯ hold and, for Theorem 9.1.2. Let conditions B050 [¯ γ ], C29 [β] every i = 1, . . . , k, either 0 = γi = βi or 0 < γi < βi < ∞. Let also conditions I22 , J23 , and K39 hold.Then, the following asymptotic relation holds, (ε) (0) ˜ ˜ Φ(M Π,0,T ) → Φ(MΠ,0,T ) as ε → 0.

(9.20)

9.2 Reward approximations for knockout American-type options

463

9.1.3 Other results about convergence of reward approximations for European-type options for multivariate modulated Markov log-price processes It worth to note, that Theorems 9.1.1 – 9.1.2 let one re-formulate in the same manner other reward convergence results given in Chapter 6, in terms of rewards for European-type options, in particular, the corresponding results about convergence rewards given in Theorems 6.2.1 – 6.2.2, for log-price processes with independent increments and in Theorems 6.3.1 – 6.3.6 for diffusion log-price processes, as well as results about convergence of space-skeleton reward approximations given in Lemmas 6.1.1 – 6.1.3 and Theorems 6.1.5 – 6.1.6, for general multivariate modulated Markov log-price processes, in Lemmas 6.2.1 – 6.2.6 and Theorems 6.2.5 – 6.2.6, for log-price processes with independent increments, and in Lemmas 6.3.1 – 6.3.3 and Theorems 6.3.7 – 6.3.8, for diffusion log-price processes. We would like also to note that results given in Chapters 4∗ – 8∗ , in the 1st ¯ and to realize the volume of the book makes it possible to weaken condition C29 [β] procedure of embedding European-type options in the special models of Americantype options, similar with described above, for models with unbounded exponential moments E~z,t eβi |Yε,i (T )−yi | . Also results about convergence of rewards for binomial- and trinomial-tree reward approximations given in Chapters 9∗ – 10∗ used for one-step variants of models (N = 1) considered in these chapters can be used in the corresponding procedure of embedding European-type options in the models of American-type options.

9.2 Reward approximations for knockout American-type options In this section, we present results concerned convergence of rewards for knockout American-type options.

9.2.1 Knockout American-type options ~ ε (t) = (Y ~ε (t), Xε (t)), t ∈ [0, T ] be, for every ε ∈ [0, ε0 ], a multivariate moduLet Z lated càdlàg log-price process with a phase space Z = Rk ×X, an initial distribution Pε (A), and transition probabilities Pε (t, ~z, t + u, A). We recall that it is assumed that X is a Polish space with a metric dX (x0 , x00 ), 0 00 x space with metric dZ (~z0 , ~z00 ) = p, x ∈ X. In this case, Z 0= Rk 0× X0 is 0also00a Polish 00 2 |~ y−~ y 00 |2 + dX (x0 , x00 ), ~z = (~ y , x ), e ), ~z = ((~ y , x00 ), e00 ) ∈ Z.

464

9

European, knockout, reselling and random pay-off options

Remind that we assume that the above transition probabilities are measurable as functions of argument (t, ~z, u). Let gε (t, e~y , x), (t, ~z) = (t, (~ y , x)) ∈ [0, T ] × Z be, for every ε ∈ [0, ε0 ], a realvalued measurable pay-off function of argument (t, ~z). Let also H = < Hε,t , t ∈ [0, T ] > be, for every ε ∈ [0, ε0 ], a so-called time-space knockout domain, which is a family of sets from σ-algebra BZ such that function I(~z ∈ Hε,t ), (t, ~z) ∈ [0, T ] × Z is a measurable function of argument (t, ~z). ~ ε (s) ∈ Hε,s ) ∧ T, t ∈ [0, T ] be knockout Let finally τε,t,H = inf(s ≥ t : Z stopping times. We would like to define, for every ε ∈ [0, ε0 ], a knockout American-type option ~ ε (t), with the pay-off function gε (t, e~y , x) and the timefor the log-price process Z space knockout domain H. (ε) Let Mmax,t,T be, for every t ∈ [0, T ], the class of all Markov moments τε,t for ~ ε (s) such that (a) t ≤ τε,t ≤ T , (b) {τε,t > s} ∈ σ[Z ~ ε (u), t ≤ u ≤ the process Z s], t ≤ s ≤ T . If a knockout American-type option is executed at the Markov stopping ~ (ε) time τε,t ∈ Mmax,t,T then the corresponding random reward is gε (τε,t , eYε (τε,t ) , Xε (τε,t ))I(τε,t < τε,t,H ). The corresponding knockout reward function is defined by the following relation, for ~z = (~ y , x) ∈ Z, 0 ≤ t ≤ T , (ε) y , x) H φt (~

=

~

E~z,t gε (τε,t , eYε (τε,t ) , Xε (τε,t ))I(τε,t < τε,t,H ),

sup

(9.21)

(ε) τε,t ∈Mmax,t,T

and the knockout optimal expected reward by the relation, H Φε

= =



(ε)

(ε)

(Mmax,0,T ) ~

Egε (τε,0 , eYε (τε,0 ) , Xε (τε,0 ))I(τε,0 < τε,0,H ).

sup

(9.22)

(ε)

τε,0 ∈Mmax,0,T

9.2.2 Imbedding into the model of ordinary American-type options In the model of the knockout American-type option, the random reward gε (τε,t , ~ ~ ε (s), t ≤ s ≤ τε,t . eYε (τε,t ) , X(τε,t )I(τε,t < τε,t,H ) depends on the whole trajectory Z However, this model can be imbedded in the model of ordinary American type ~ ε (t). option using an appropriate extension of the log-price process Z Let us introduce, for every ε ∈ (0, ε0 ] the additional index component, Xε0 (t) = I(τε,0,H > t), t ∈ [0, T ],

(9.23)

and an extended index component, ~ ε (t) = (Xε (t), Xε0 (t)), t ∈ [0, T ]. X

(9.24)

9.2

Knockout American-type options

465

~ ε (t) has the The process Xε0 (t) has the phase space {0, 1}, while the process X 0 phase space X = X × {0, 1}. ~ ε0 (t) = (Z ~ ε (t), X ~ ε (t)), t ∈ [0, T ] is also a càdlàg It is obvious that the process Z multivariate modulated Markov process, with the extended phase space Z0 = Z × X0 . 0 0 with the metric dZ0 (~z0 , ~z00 ) = p The space Z = Z × X is also a0 Polish space 0 0 2 |~ y−~ y 00 |2 + dX (x0 , x00 ) + d2 (e0 , e00 ), ~z = ((~ y , x ), e0 ), ~z00 = ((~ y 00 , x00 ), e00 ) ∈ Z0 , 0 00 0 00 where d(e , e ) = I(e 6= e ). Let us define domains Gt , 0 ≤ t ≤ T by the following relation, Gt = {(~z, e) : I(~z ∈ Ht , e = 1) = 0}.

(9.25)

The definition of the additional index component Xε0 (t) implies that the following relation holds, ~ ε0 (t) ∈ Gt , 0 ≤ t ≤ T } = 1. P{Z

(9.26)

This relation should be taken into account when defining the initial distribution and the transition probabilities for the extended Markov log-price process Zε0 (t). ~ ε0 (t) is defined by the following The initial distribution for the Markov process Z relation, for A ∈ BZ , e = 0, 1,  Pε (A ∩ H 0 ) if e = 1, Pε0 (A × {e}) = (9.27) Pε (A ∩ H0 ) if e = 0. ~ ε0 (t) are defined The transition probabilities for the Markov log-price process Z 0 0 by the following relation, for (~ y , x, e) ∈ Z , A ∈ BZ , e = 0, 1, 0 ≤ t ≤ t + u ≤ T , Pε0 (t, (~z, e), t + u, A × {e0 })

=

 P{Zε0 (t + u) ∈ A, τε,t,H > t + u/Zε0 (t) = (~z, e)}      if ~z ∈ H t , e = 1, e0 = 1,       P{Zε0 (t + u) ∈ A, τε,t,H ≤ t + u/Zε0 (t) = (~z, e)}      if ~z ∈ H t , e = 1, e0 = 0,       0 if ~z ∈ H t , e = 0, e0 = 1,                      

Pε (t, ~z, t + u, A) if ~z ∈ H t , e = 0, e0 = 0, 0

if ~z ∈ Ht , e = 1, e0 = 0,

I(~z ∈ A)

if ~z ∈ Ht , e = 1, e0 = 1,

0

if ~z ∈ Ht , e = 0, e0 = 1,

Pε (t, ~z, t + u, A) if ~z ∈ Ht , e = 0, e0 = 0.

(9.28)

466

9

European, knockout, reselling and random pay-off options

Here, it is taken into account that it is impossible for the extended log-price process Zε0 (t) to take values (~z, e) ∈ Gt . However, in order to consider the product Z0 = Z × {0, 1} as the phase space for process Zε0 (t), one should also define transition probabilities Pε0 (t, (~z, e), t + u, A × {e0 }) for such states. This can be done in arbitrary way that does not affect the finite-dimensional distributions for process Zε0 (t). In continuous time case, the simplest way is to define these transition probabilities as Pε0 (t, (~z, e), t + u, A × {e0 }) = I(~z ∈ A)I(e0 = 1), i.e., as the distribution concentrated in point (~z, 1) if ~z ∈ Ht , e = 1, and Pε0 (t, (~z, e), t + u, A × {e0 }) = Pε (t, ~z, t + u, A)I(e0 = 0) if ~z ∈ Z, e = 0. This is also consistent with the requirement that Zε0 (t) should be a càdlàg process. Let us also introduce, for every ε ∈ [0, ε0 ], the transformed knockout pay-off function defined by the following relation, for (~z, e) = ((~ y , x), e) ∈ Z0 , 0 ≤ t ≤ T , g˜ε (t, e~y , x, e) = gε (t, e~y , x)I(e = 1).

(9.29)

It is readily seen that, for every ε ∈ [0, ε0 ], the class of Markov stopping times (ε) ~ ε (t) and Z ~ ε0 (t), since the corresponding σMmax,t,T coincides for the processes Z (ε) ~ ε (s), t ≤ s ≤ t + u] = σ[Z ~ ε0 (s), t ≤ s ≤ t + u], for any algebras F = σ[Z t,t+u

0 ≤ t ≤ t + u ≤ T. Also, the following relation holds, for every ε ∈ [0, ε0 ], for the corresponding (ε) random rewards, for any τε,t ∈ Mmax,t,T , 0 ≤ t ≤ T , ~

~

gε (τε,t , eYτε,t , Xτε,t )I(τε,t < τε,t,H ) = g˜ε (τε,t , eYτε,t , Xτε,t , Xτ0 ε,t ).

(9.30)

Let us now introduce the reward functions for the ordinary American type ~ ε0 (t) with the pay-off function g˜ε (t, e~y , x, e), defined options for the price process Z for ~z 0 = (~z, e) = ((~ y , x), e) ∈ Z0 , 0 ≤ t ≤ T , (ε) φ˜t (~ y , x, e) =

~

E~z0 ,t g˜ε (τε,t , eYτε,t , Xτε,t , Xτ0 ε,t ),

sup

(9.31)

(ε)

τε,t ∈Mmax,t,T

and the optimal expected reward by the relation, ˜ε = Φ

sup

~

E˜ gε (τε,0 , eYτε,0 , Xτε,0 , Xτ0 ε,0 ).

(9.32)

(ε) τε,0 ∈Mmax,0,N

Relation (9.30) implies that the reward functions for the knockout American~ ε (t), the pay-off function gε (t, e~y , x), and type option (for the log-price process Z the knockout stopping time-space domain H) are connected with the reward func~ ε0 (t) and tions for the ordinary American-type option (for the log-price process Z ~ y 0 the payoff function g˜ε (t, e , x, e)) by the following relation, for ~z = (~z, x) = ((~ y , x), e) ∈ Z0 , 0 ≤ t ≤ T ,  (ε)   H φt (~y , x) if (~y , x) ∈ H t , e = 1, (ε) (9.33) y , x, e) = φ˜t (~ gε (t, e~y , x) if (~ y , x) ∈ Ht , e = 1,   0 if e = 0,

9.2

467

Knockout American-type options

and the corresponding optimal expected rewards by relation, ˜ε = Φ

H Φε .

(9.34)

9.2.3 Imbedding of discrete time knockout American-type options into the model of ordinary discrete time American-type options In analogous way, the imbedding of a knockout American-type option for multivariate modulated log-price process into the corresponding model of ordinary American-type option for multivariate modulated log-price process can be realized for discrete time options, It can be achieved just be replacing continuous time t ∈ [0, T ] by discrete time n = 0, 1, . . . , N in all above relations. Moreover, in the discrete time case, one need only define one-step transition probabilities for the corresponding extended Markov log-price process, i.e., write down the key relation (9.28) for t = n, t+u = n+1. In this case, the corresponding transition probabilities Pε0 (n, (~z, e), n + 1, A × {e0 }) = P{Zε0 (n + 1) ∈ A, τε,n,H > n+1/Zε0 (n) = (~z, e)} = Pε (n, ~z, n+1, A∩H n+1 ), for (~z, e) ∈ H n , e = 1, e0 = 1, i.e., they are expressed directly via the corresponding one-step transition probabilities of the initial discrete time Markov log-price processes. This essentially simplifies the imbedding procedure.

9.2.4 Convergence of reward approximations for knockout American-type options for multivariate modulated Markov log-price processes It is useful to compare the imbedding procedure for knockout American-type options for discrete and continuous time Markov log-price processes. As was pointed above, one-step transition probabilities for the extended discrete time Markov log-price processes are expressed directly via the corresponding one-step transition probabilities of the initial discrete time Markov log-price processes in the discrete time case. Also knockout pay-off functions are just products of the corresponding initial pay-off functions and indicators of knockout events. That is why, the corresponding reward approximation results for discrete time American-type options can be directly translated to discrete time knockout American-type options just by replacing transition probabilities for initial Markov log-price processes by transition probabilities for extended Markov log-price processes in corresponding weak convergence conditions for initial distributions and transition probabilities and exponential moment compactness conditions for increments of log-price processes. In the continuous time case, situation is more complicated. In this case, transition probabilities for the extended continuous time Markov log-price processes

468

9

European, knockout, reselling and random pay-off options

~ ε0 (t) are expressed via joint distributions of the initial log-price processes Z ~ ε (t) Z and the knockout stopping times τε,t,H . Thus, one should, first, get the corresponding relations of locally uniform weak convergence for transition probabilities of the extended continuous time Markov log-price processes, which are used in theorems about convergence of rewards presented in Chapters 4 – 8. Fortunately, in most cases conditions of locally uniform weak convergence of initial distributions and transition probabilities used in theorems presented in Chapters 4 – 8 and the corresponding conditions of exponential moment compactness for increments of log-price processes imply J-convergence of the càdlàg ~ ε (·). Markov processes Z In many cases, for example for some of barrier options, stopping times τε,t,H = ~ ε (·)) are a.s. J-continuous functionals with respect to the measure generated ft (Z ~ 0 (·) on σ-algebra BD by the corresponding limiting process Z of Borel sets in [0,T ] the space D[0,T ] . In such cases, the corresponding conditions of locally uniform weak convergence of transition probabilities usually hold also for the extended Markov log~ ε0 (t). price processes Z Also, conditions of exponential moment compactness for increments of the initial Markov log-price processes provide holding of these conditions for the extended log-price processes, since absolute values of increments for the initial Markov logprice processes stochastically majorize absolute values of the corresponding increments for the extended Markov log-price processes. As far as knockout pay-off functions are concerned, they are expressed directly via the initial pay-off functions and indicators of the corresponding knockout domains. This let one re-formulate conditions imposed on initial pay-off functions for the corresponding knockout pay-off functions just by replacing initial pay-off functions by the corresponding knockout pay-off functions in the above conditions. In particular, continuity conditions for initial pay-off functions are automatically translated to the knockout pay-off functions due to continuity of the corresponding transformation factors I(e = 1) in the discrete metric d(e0 , e00 ) = I(e0 6= e00 ) used in the phase space {0, 1} of the knockout index component Xε0 (t). As we think, a significant part of results about convergence and approximation of rewards for American-type options presented in Chapters 4 – 8 can be generalized to the models of knockout American-type options, in the way described above. Let us illustrate the above remarks by presented two typical examples. Let us consider the model of univariate Gaussian log-price processes Y0 (t) and binomial-tree approximation processes Yε (t) and a pay-off function g0 (t, ey ), introduced in Subsection 7.3.1. In this case, the index component Xε (t) is absent and k = 1. Thus the knock(ε) out reward function H φt (y) is the function of the argument y ∈ R1 .

9.3 Reward approximations for reselling options

469

Let us consider a barrier American option with knockout time-space domain H = < Ht = {y ∈ R1 : y ≥ h(t)}, t ∈ [0, T ] >, where h(t), t ∈ [0, T ] is a real-valued continuous function. The following two propositions (which we formulate as conjectures) translate Theorems 7.3.1 and 7.3.2 to the above model of barrier American type options. Proposition 9.2.1. Let conditions B45 [¯ γ ] and D30 [β] hold, and γ∗ < β < ∞. Let also conditions J16 , J17 , N21 and K34 (with distribution P0 (·) without an atom in point h(0)) hold. Then, the following asymptotic relation takes place, H Φε



H Φ0

as ε → 0.

(9.35)

Proposition 9.2.2. Let conditions B45 [¯ γ ], J16 , J17 and N21 hold. Then, for every t ∈ [0, T ], the following asymptotic relation takes place for any yε → y0 6= h(t) as ε → 0, (ε) (0) (9.36) H φt (yε ) → H φt (y0 ) as ε → 0. Remark 9.2.1. As in Theorems 7.3.1 and 7.3.2, condition B23 [¯ γ ] can be replaced in Propositions 9.2.1 and 9.2.2 by one of conditions, B28 [¯ γ , ν¯], B24 [¯ γ ] or B25 [¯ γ ]. It should be noted that a full-scale generalization of reward approximation results given in the present book to knockout American-type options does require of thorough research studies, which are beyond the framework of the present book. We would like to refer to the work by Lundgren (2010), where some results for discrete time knockout American-type options are obtained with the use of approach described above.

9.3 Reward approximations for reselling options In this section, we present results concerned reward approximations for models of reselling for European options.

9.3.1 Reselling of European options We consider the geometric Brownian motion as a price process given by the stochastic differential equation d ln S(t) = µdt + σdW1 (t), t ≥ 0,

(9.37)

where µ ∈ R, σ > 0; W1 (t) is a standard Brownian motion, and the initial state S(0) = s0 > 0 is a constant. It is also assumed that the continuously compounded interest model with a risk-free interest r > 0 is used.

470

9

European, knockout, reselling and random pay-off options

In this case, the price (at moment t and under condition that S(t) = S) for a European call option, with the pay-off function e−rT [S − K]+ , where K > 0 is a strike price and T > 0 is a maturity, and σ > 0 is a volatility, is given by the Black–Scholes formula, √ (9.38) C(t, S, σ) = SF (d) − Ke−r(T −t) F (d − σ T − t), where

√ S ln( K ) + r(T − t) σ T −t √ d = d(t, S, σ) = + , 2 σ T −t

and

Zx

1 F (x) = √ 2π

e−y

2

/2

dy.

(9.39)

(9.40)

−∞

It is well known that the market price of European option deviates from the theoretical price. One of the explanations is that an implied volatility σ(t) is used in formula (9.38) instead of σ. We use a model given by the mean-reverse Ornstein-Uhlenbeck process for stochastic implied volatility, d(ln σ(t) − ln σ) = −α(ln σ(t) − ln σ)dt + νdW2 (t), t ≥ 0,

(9.41)

where α, ν > 0, W2 (t) is also a standard Brownian motion, and the boundary condition is σ(0) = σ. ~ (t) = (W1 (t), W2 (t)) is the bivariate Finally, we assume that the process W Brownian motion with correlated components, i.e., EW1 (t)W2 (t) = ρt, t ≥ 0,

(9.42)

where ρ ∈ [−1, 1]. Note that the process (S(t), σ(t)) is a diffusion process. The use of the market price C(t, S(t), σ(t)) actualizes the problem for reselling of European option. In this case it is assumed that an owner of the option can resell the option at some stopping time from the class Mmax,0,T . This class includes all stopping times 0 ≤ τ0,0 ≤ T that are Markov moments with respect to the filtration Ft = σ[(S(s), σ(s)), 0 ≤ s ≤ t], t ∈ [0, T ] generated by the vector process (S(t), σ(t)). It is worth to note that the process σ(t) is indirectly observable as an implied volatility corresponding to the observable market price of an option. It should also be noted that the problem is considered under the assumption that the option are already bought. In this case, the owner of the option is only interested in optimal reselling of option and, therefore, in finding optimal expected reward for reselling the option, Φ(Mmax 0,T ) =

sup τ0 ∈Mmax,0,T

Ee−rτ0 C(τ0 , S(τ0 ), σ(τ0 )).

(9.43)

9.3

Reselling options

471

In this way, the problem of reselling for European option is imbedded in the problem of optimal execution of the American-type option, with the pay-off function e−rt C(t, S, σ), for the bivariate log-price process (S(t), σ(t)). In this model, there exists the unique solution to the system of stochastic differential equations (9.37) and (9.41) supplemented by the correlation relation (9.42). It is a diffusion process given by the following explicit formulas, ( 1 (t) S(t) = s0 eµt+σW ,t≥0 R t αs (9.44) e dW2 (s) νe−αt 0 , t ≥ 0, σ(t) = σe ~ (t) = (W1 (t), W2 (t)), t ≥ 0 is the bivariate Brownian motion defined in where W (9.37), (9.41), and (9.42). The object od our interest is the reward functional Φ(Mmax,0,T ) for Americantype option, with the pay-off function e−rt C(t, S, σ), for the bivariate diffusion process (S(t), σ(t)). The problem can be however reduced to the simpler model, with the bivariate log-price process, which is a bivariate exponential Gaussian process with independent increments. ~0 (t) = (S0,1 (t), S0,2 (t)) defined by the Let us consider the bivariate processes S following relation, ( S0,1 (t) = eln s0 +σW1 (t) , t ≥ 0,R t αs (9.45) e−α(T −t) ln σ+νe−αT e dW2 (s) 0 S0,2 (t) = e , t ≥ 0. ~0 (t), By the definition of process S ( S(t) = eµt S0,1 (t), t ≥ 0, σ(t)

= (S0,2 (t))e

α(T −t)

, t ≥ 0,

(9.46)

i.e., the process (S(t), σ(t)) is a non-random transformation of the process (S0,1 (t), S0,2 (t)) given by the above formulas. ~0 (t) is a bivariate continuous inhomogeneous in time exponential The process S Gaussian process with independent increments. In some sense, this process is simpler than the process (S(t), σ(t)). It is more suitable for construction of the corresponding tree approximations. α(T −t) Since transformation functions eµt s1 and se2 are continuous and strictly monotonic, respectively, in arguments s1 and s2 , filtration Ft = σ[(S(s), σ(s)), 0 ≤ s ≤ t], t ∈ [0, T ] generated by process (S(t), σ(t)) coincides with the filtration Ft = σ[(S0,1 (s), S0,2 (s)), 0 ≤ s ≤ t], t ∈ [0, T ] generated by process (S0,1 (t), S0,2 (t)). (0) Thus, the classes Mmax,0,T = Mmax,0,T , which includes all stopping times 0 ≤ τ0,0 ≤ T that are Markov moments adapted to the filtration Ft , t ∈ [0, T ], does not depend on which of the above bivariate process (S(t), σ(t)) or (S0,1 (t), S0,2 (t)) is taken as a generator of this filtration.

472

9

European, knockout, reselling and random pay-off options

Let us now define the transformed pay-off function, for 0 ≤ t ≤ T, ~s = (s1 , s2 ), s1 , s2 > 0, α(T −t)

g0 (t, ~s) = e−rt C(t, eµt s1 , se2

).

(9.47)

It follows from the remarks above that the reward functional, Φ(Mmax,0,T ) =

Ee−rτ0 C(τ0 , S(τ0 ), σ(τ0 ))

sup τ0 ∈Mmax,0,T

=

(0)

sup (0) τ0,0 ∈Mmax,0,T

~0 (τ0,0 )) = Φ(M Eg0 (τ0,0 , S max,0,T ).

(9.48)

(0)

Therefore, the reward functional Φ(Mmax,0,T ) = Φ(Mmax,0,T ) is the optimal expected reward for American type option with the pay-off function g0 (t, ~s) for ~ (0) (t). this bivariate exponential Gaussian process with independent increments S ~ Let us, also, consider the bivariate log-price process Y0 (t) = (Y0,1 (t), Y0,2 (t)), t ∈ [0, T ], with the components, ( Y0,1 (t) = ln s0 + σW1 (t), t ≥ 0, Rt (9.49) Y0,2 (t) = e−α(T −t) ln σ + νe−αT 0 eαs dW2 (s), t ≥ 0. ~0 (t) is connected with the process S ~0 (t) by the following relations, Process Y  S0,1 (t) = eY0,1 (t) , Y0,1 (t) = ln S0,1 (t), t ≥ 0, (9.50) S0,2 (t) = eY0,2 (t) , Y0,2 (t) = ln S0,2 (t), t ≥ 0. ~0 (t) is a bivariate continuous inhomogeneous in time Gaussian The process Y process with independent increments. Since the transformation function ey is continuous and strictly monotonic, ~0 (s), 0 ≤ s ≤ t], t ∈ [0, T ] generated filtration Ft = σ[(S(s), σ(s)), 0 ≤ s ≤ t] = σ[S ~ ~0 (s), 0 ≤ s ≤ by process (S(t), σ(t)) or S0 (t) also coincides with filtration Ft = σ[Y ~ t], t ∈ [0, T ] generated by process Y0 (t). (0) Thus, the class Mmax,0,T , which includes all stopping times 0 ≤ τ0,0 ≤ T that are Markov moments adopted to the filtration Ft , t ∈ [0, T ], does not depend on ~0 (t) or Y ~0 (t) is taken as a generator of this which of the above bivariate process S filtration. The pay-off function g0 (t, ~s) can be represented in the form for 0 ≤ t ≤ T, ~s = e~y , ~ y = (y1 , y2 ) ∈ R2 , g0 (t, ~s) = g0 (t, e~y ) = e−rt C(t, eµt ey1 , ee

α(T −t)

y2

).

(9.51)

Relation (9.48) can be re-written in the following form, Φ(Mmax,0,T ) =

sup

Ee−rτ0,0 C(τ0,0 , S(τ0,0 ), σ(τ0,0 ))

τ0,0 ∈Mmax,0,T

=

sup (0)

τ0,0 ∈Mmax,0,T (0)

~

Eg0 (τ0,0 , eY0 (τ0,0 ) )

= Φ(Mmax,0,T ).

(9.52)

9.3

473

Reselling options

9.3.2 Convergence of optimal expected rewards for binomial-trinomial-tree approximations in the model of reselling for European options Let Nε , ε ∈ (0, ε0 ] be positive integer numbers such that Nε → ∞ as ε → 0 and Πε = h0 = tε,0 < · · · < tε,Nε = T i be the corresponding uniform partitions of interval [0, T ] by points tε,n = nT Nε , n = 0, . . . , Nε . ~0 (t), t ∈ [0, T ] by a bivariate binomial-trinomialWe approximate the process Y ~ε (t) = (Yε,1 (t), Yε,2 (t)), t ∈ [0, T ], defined, for every tree approximation process Y ε ∈ (0, ε0 ], by the following relation, Nε (t)

~ε (t) = Yε (0) + Y

X

~ ε,n , W

(9.53)

n=1 ε ~ where: (a) Nε (t) = max(n ≥ 0 : tε,n ≤ t) = [ tN T ], t ∈ [0, T ]; (b) Yε (0) = Yε = ~ (Yε,1 , Yε,2 ) ∈ R2 is a constant vector; (c) Wε,n = (Wε,n,1 , Wε,n,2 ), n = 1, 2, . . . are, for every ε ∈ (0, ε0 ], independent random vectors which have the following structure,  (+δε,n,1 , +δε,n,2 ) pε,n,+,+ ,      (+δε,n,1 , 0) pε,n,+,◦ ,     (+δε,n,1 , −δε,n,2 ) pε,n,+,− ,  (9.54) (Wε,n,1 , Wε,n,2 ) = with prob.    (−δ , +δ ) p , ε,n,1 ε,n,2 ε,n,−,+     (−δ , 0) p ε,n,1 ε,n,−,◦ ,    (−δε,n,1 , −δε,n,2 ) pε,n,−,− .

where (d) δε,n,1 , δε,n,2 > 0, n = 1, . . . , Nε ; (e) pε,n,±,+ , pε,n,±,◦ , pε,n,±,− ≥ 0, pε,n,+,+ + pε,n,+,◦ + pε,n,±,− + pε,n,−,+ , pε,n,−,◦ + pε,n,−,− = 1, n = 1, . . . , Nε . ~ (ε) (t) is a bivariate binomial-trinomial-tree process with independent Process Y increments. (ε) Let us denote by Mmax,0,T the class of all Markov moments 0 ≤ τε,0 ≤ T for ~ε (t) and let, the process Y (ε)

Φ(Mmax,0,T ) =

sup

~

Eg0 (τε,0 , eYε (τε,0 ) ).

(9.55)

(ε)

τε,0 ∈Mmax,0,T

We are going to find conditions, under which all conditions of Theorem 7.2.1 (ε) (0) hold and, therefore, the optimal expected rewards Φ(Mmax,0,T ) → Φ(Mmax,0,T ) as ε → 0. Since, the pay-off function g0 (t, e~y ) does not depend on parameter ε, conditions B40 [¯ γ ] – B44 [¯ γ , ν¯] replace, in this case, conditions B23 [¯ γ ] – B28 [¯ γ , ν¯]. As well known, derivatives (Greeks) of function C(t, S, σ) with respect to 0 (d)σ √ arguments T , S an σ are, respectively, ∂C(t,∂tS, σ) = − SF − rKe−r(T −t) F (d − 2 T −t √ √ S, σ) S, σ) σ T − t), ∂C(t, = F (d) and ∂C(t, = SF 0 (d) T − t. ∂S ∂σ

474

9

European, knockout, reselling and random pay-off options

A simple technical calculations based on the above formulas yield that the α(T −t) y2 derivatives of function g0 (t, e~y ) = e−rt C(t, eµt ey1 , ee ) with respect to arguy ~

(t,e ) | ≤ (1 + L00,1 e2|y1 | + ments t, y1 and y2 satisfy the following inequalities | ∂g0∂t

L00,2 e2e

αT

|y2 | √ 1 ) T −t , 2eαT |y2 |

y ~

y ~

(t,e ) (t,e ) | ∂g0∂y | ≤ 1 + L01,1 e|y1 | and | ∂g0∂y | ≤ 1 + L02,1 e2|y1 | + 1 2

L02,2 e , for some constants 0 ≤ L0i,j < ∞, i = 1, 2, 3, j = 1, 2. Derivatives with respect to variables y1 and y2 are locally bounded, while derivative with respect to t is locally unbounded in the neighbourhood of T . That is why, the Lipschitz-type condition B40 [¯ γ ] does not hold. However, in this case, the weaken Lipschitz-type condition B44 [¯ γ , ν¯] holds, with parameters αT αT 1 γ¯ = (2, 2e , 1, 0, 2, 2e ) and ν¯ = ( 2 , 1, 1). In this case, parameter γ∗ = γ◦ = 2eαT + 1. The pay-off function g0 (t, e~y0 ) does not depend on ε and, obviously, for any ~ y0 = (y0,1 , y0,2 ) ∈ R2 , sup |g0 (t, e~y0 )| < |s0 |eµT e|y0,1 | + K < ∞.

(9.56)

0≤t≤T

Thus, condition B43 holds. By Lemma 5.1.4, conditions condition B44 [¯ γ , ν¯] and B43 imply that condition B42 [γ] holds for any γ > γ◦ . Since function g0 (t, e~y ) does not depend on parameter ε and is a continuous function in (t, ~ y ) ∈ [0, T ] × R2 , condition I20 holds for this function with sets Yt = R2 , t ∈ [0, T ]. Let also denote, for simplicity,  = NTε , for ε ∈ (0, ε0 ]. ~ε (t) to the bivariate Let us fit the bivariate binomial-trinomial-tree process Y ~0 (t) by fitting expectations, variances, and covariance components for process Y ~ε,n to the corresponding quantities for the increments Y ~0 (n) − random vectors Y ~ Y0 ((n − 1)), for every n = 1, . . . , Nε . The corresponding quantities are given by the following formulas, for n = 1, . . . , Nε , −αT

Zn

Eσ(W1 (n) − W1 ((n − 1))) = 0, E νe

eαs dW2 (s) = 0.

(9.57)

(n−1)

σ 2  = Var(σ(W1 (n) − W1 ((n − 1)))), Zn 2 −αT σn,ε = Var νe eαs dW2 (s) (9.58)

(n−1)

= ν 2 e−2αT

Zn (n−1)

e2αs ds = ν 2 e−2αT e2αn

1 − e−2α , 2α

9.3

Reselling options

475

and −αT

Zn

%n,ε = E σ(W1 (n) − W1 ((n − 1))) · νe

eαs dW2 (s)

(n−1)

= ρσνe

−αT

Zn

αs

e ds = σρνe

−αT αn 1

e

(9.59)

− e−α . α

(n−1)

The following system of 6N equations with 8N unknowns should be solved,    EWε,n,1 = δε,n,1 (2(pε,n,+,+ + pε,n,+,◦ + pε,n,+,− ) − 1) = 0,   2 2   VarWε,n,1 = δε,n,1 = σ ε,     EWε,n,2 = δε,n,2 (pε,n,+,+ + pε,n,−,+ − pε,n,+,− − pε,n,−,− ) = 0,    VarW 2 2 ε,n,2 = δε,n,2 (pε,n,+,+ + pε,n,−,+ + pε,n,+,− + pε,n,−,− ) = σn,ε , (9.60)  EWε,n,1 Wε,n,2 = δε,n,1 δε,n,2 (pε,n,+,+ + pε,n,−,−       −pε,n,−,+ − pε,+,− ) = %n,ε ,     pε,n,+,+ + pε,n,+,◦ + pε,n,−,+ pε,n,+,− + pε,n,−,◦ + pε,n,−,− + = 1,   n = 1, . . . , Nε .

It follows from the relations above that the only possible choice for δε,n,1 = √ σ . √ Let us try to find a solution for δε,n,2 in the form δε,n,2 = δn , where δn > 0, n = 1, . . . , N are parameters under our control, due to the fact that the number of unknowns in the system (9.60) exceeds the number of equations. It is also natural to take into account the symmetric property of the process ~0 (t) and to search for unknown probabilities satisfying the additional conditions, Y pε,n,+,+ = pε,n,−,− , pε,n,+,− = pε,n,−,+ , and pε,n,+,◦ = pε,n,−,◦ , for n = 1, . . . , N . In this case, it can be checked that the above system of equations (9.60) has the following solution, q  √ T   = σ δ = σ ε,n,1  Nε ,  q   √   δε,n,2 = δn  = δn NTε ,    2  %n,ε  p 1 σn,ε ε,n,+,+ = pε,n,−,− = 4 ( δ 2 ε + σδn  ), n (9.61) 2  %n,ε 1 σn,ε   p = p = ( − ), ε,n,+,− ε,n,−,+ 2ε  4 δn σδn    2  σn,ε  1  pε,n,+,◦ = pε,n,−,◦ = − 2 ,  2 2δn    n = 1, . . . , Nε . It is also necessary to find conditions under which solutions pε,n,+,+ , pε,n,+,◦ and pε,n,+,− of the system (9.61) are probabilities, i.e., 0 ≤ pε,n,+,+ , pε,n,+,◦ , pε,n,+,− ≤ 1, n = 1, . . . , Nε . This holds if and only if the following system of

476

9

European, knockout, reselling and random pay-off options

inequalities holds,  2 σn,ε   =  δn2       =   σ2

%n,ε n,ε =  2  ± σδ  δn  n      =    n = 1, . . . , N . ε

ν 2 e−2αT 2 δn

R n

ν 2 e−2αT 2 δn

e2α(n−1) e 2α−1 ≤ 1,

ν 2 e−2αT 2 δn

R n

(n−1)

e2αs ds 2α

(n−1)

e2αs ds ±

ρνe−αT δn 

α(n−1) α (e −1) νe−αT νe−αT e ( δn δn α

α

2α

α

R n (n−1)

eα(n−1) e

α

eαs ds

+1 2

(9.62)

± ρ) ≥ 0,

α

Taking the inequality e 2α−1 = e 2+1 e α−1 < ( e 2+1 )2 into account, we can conclude that the system of inequalities (9.62) holds if the following system of two-sided inequalities holds, ( −α −α νe−αT eαn 1+e2 ≤ δn ≤ ν|ρ|−1 e−αT eαn 1+e2 , (9.63) n = 1, . . . , Nε . Thus, a solution of the system (9.61) satisfying the system of inequalities (9.62) exists for any value −1 ≤ ρ ≤ 1. In what follows, that we consider a model with a M -bounded solution, which satisfies for some constant M ≥ 1 the following inequalities, ν ≤ δn ≤ νM, n = 1, . . . , Nε .

(9.64)

If |ρ| > 0, one can take M = |ρ|−1 . If |ρ| = 0, any number M ≥ 1 can be taken. The defining relation (9.54) implies that for any κ > 0 if ε is small enough, √ namely, if (σ ∧ M ) ε ≤ κ, then Nε X (P{|Wε,n,1 | > κ} + P{|Wε,n,2 | > κ}) = 0.

(9.65)

n=1

~ε (t), for ε ≥ 0 and 0 ≤ t ≤ T , Also, by the definition of processes Y EWε,1 (t) = 0, EWε,2 (t) = 0.

(9.66)

and, for every 0 ≤ t ≤ T , VarYε,1 (t) = [t/]σ 2 → VarY0,1 (t) = tσ 2 as 0 < ε → 0, 2 −2αT

[t/] Z

e2αs ds = ν 2 e−2αT

VarYε,2 (t) = ν e

e2α[t/] − 1 2α

0

→ VarY0,2 (t) = ν 2 e−2αT

e2αt − 1 as 0 < ε → 0, 2α

(9.67)

9.3

Reselling options

477

and

EYε,1 (t)Yε,2 (t) = ρσνe

−αT

[t/] Z

eαs ds = ρσνe−αT

eα[t/] − 1 α

0

→ EY0,1 (t)Y0,2 (t) = ρσνe−αT

(9.68)

αt

e

−1 as 0 < ε → 0. α

Let us denote, for 0 ≤ t ≤ t + u ≤ T, A ∈ BR2 , ~ε (t + u) − Y ~ε (t) ∈ A}. Pε (t, t + u, A) = P{Y

(9.69)

As known (see, for example, in Skorokhod (1964)) convergence relations (9.65) ~ε (t), t ∈ [0, T ] weakly to the process Y ~0 (t), t ∈ – (9.68) imply that the processes Y [0, T ] as ε → 0. This implies that the following relation holds, for 0 ≤ t ≤ t + u ≤ T , Pε (t, t + u, ·) ⇒ P0 (t, t + u, ·) as ε → 0.

(9.70)

Relation (9.69) implies that condition J15 (a) holds for the bivariate processes ~ε (t). Y Condition J15 (b) automatically holds, since sets Yt penetrating condition I20 , coincides with R2 , for all t ∈ [0, T ]. Since the functions given on the left hand side in (9.68) are monotone, and the corresponding limit functions are continuous, this convergence is also uniform in interval [0, T ] and, therefore, conditions of Ascoli-Arzelá theorem, in particular, condition of compactness in uniform topology holds as ε → 0. Relation (9.68) and the above remarks imply that the following relation holds, for h > 0 and i = 1, 2, lim lim ∆0J (Yε,i (·), c, h, T )

c→0 ε→0

= lim lim

sup

P{|Yε,i (t + u) − Yε,i (t)| ≥ h}

≤ lim lim

sup

Var(Yε,i (t + u) − Yε,i (t)) h2

c→0 ε→0 0≤t≤t+u≤t+c≤T

c→0 ε→0 0≤t≤t+u≤t+c≤T

+ lim lim

sup

c→0 ε→0 0≤t≤t+u≤t+c≤T

(E(Yε,i (t + u) − Yε,i (t)))2 = 0. h2

(9.71)

It is useful to note that relations (9.69) and (9.71) are just conditions of con~ε (t), t ∈ [0, T ] in uniform (U ) topology (see, for example, vergence for processes Y ~0 (t), t ∈ [0, T ]. in Skorokhod (1964)) to processes Y ~ε (t). Relation (9.71) means that condition C12 holds for processes Y β(Yε,i (t+s)−Yε,i (t)) The moment generating functions Ee , exists, for every 0 ≤ t ≤ t + u ≤ T and i = 1, 2 and any β ∈ R1 .

478

9

European, knockout, reselling and random pay-off options

In the case of the process Yε,1 (t), it takes the following form, E exp{βYε,1 (t + u) − Yε,1 (t))}

( =

(eβσ



1 2

+ e−βσ e

√  1 [(t+u)/]−[t/] 2)

β 2 σ2 u 2

if

ε > 0,

if

ε = 0.

(9.72)

Here, it is taken into account that for n = 1, . . . , N , pε,n,+,+ + pε,n,+,◦ + pε,n,+,− = pε,n,−,+ + pε,n,−,◦ + pε,n,−,− =

1 . 2

(9.73)

In the case of the process Yε,2 (t), it takes the following form, E exp{β(Yε,2 (t + u) − Yε,2 (t))}

=

√  Q[(t+u)/] βδ √ n pε,n,+ + pε,n,◦ + e−βδn  pε,n,− )   n=[t/]+1 (e 1

 

=

e2

 Q [(t+u)/]    n=[t/]+1 (1 +   

e

β 2 ν 2 e−2αT

R t+u

e2αv dv

t

√ 2 σn, βδn  2  (e 2δn

+ e−βδn

√ 

− 2))

if

if

ε > 0,

if

ε = 0.

ε > 0, (9.74)

1 2 2 −2αT 2β ν e

R t+u t

e2αv dv

if

ε = 0.

Here, it is taken into account that pε,n,± = pε,n,+,± + pε,n,−,± =

2 σn,ε , n = 1, . . . , N, 2δn2 

(9.75)

and pε,n,◦ = pε,n,+,◦ + pε,n,−◦ = 1 −

2 σn,ε , n = 1, . . . , N. δn2 

(9.76)

Using relation (9.72), we can get the following relation, for β ∈ R1 , Ξ±β (Yε,1 (·), T ) =

Ee±β(Yε,1 (t+u)−Yε,1 (t))

sup 0≤t≤t+u≤T √ βσ  1

≤ (e →e

β 2 σ2 T 2

2

+ e−βσ

√ 1 

2

T

)

< ∞ as ε → 0. σ2

(9.77)

Using relation (9.74), and inequalities 2δn, ≤ 12 and ν < δn ≤ M ν, n = 2 n 1, . . . , N , we can get the following relation, for β ∈ R1 ,

9.3

Ξ±β (Yε,2 (·), T ) =

Reselling options

479

Ee±β(Yε,2 (t+u)−Yε,2 (t))

sup 0≤t≤t+u≤T [T /]



Y

(1 +

n=1

2 √ √ σn, βδn  −βδn  (e + e − 2)) 2δn2 

√ √ 1 ≤ (1 + (eβM ν  + e−βM ν  − 2))T / 2

→e

β2 M 2 ν 2 T 2

< ∞ as ε → 0.

(9.78)

Relations (9.77) and (9.78) imply condition E11 [β] to hold, for any β ≥ 0. By Lemma 5.4.1, if condition C12 holds and condition E11 [β] holds for any β ≥ 0, then the condition of exponential moment compactness C15 [β] holds, for any β ≥ 0. ~ε (0) = Y ~ε = (Yε,1 , Yε,2 ), for every ε ∈ (0, ε0 ], In this case, the initial state Y −αT ~ ~ while Y0 (0) = Y0 = (ln s0 , e ln σ). Let us assume that the following condition holds: ~ε → Y ~0 as ε → 0. K40 : Y Obviously, condition K33 (a) holds in this case. Condition K33 (b) also holds, since sets Yt = R2 , t ∈ [0, T ]. Condition K40 also implies that condition D28 [β] holds for any β ≥ 0. Thus, one can always choose parameter β > γ∗ . Conditions B42 [γ] and C15 [β] used for some γ∗ < γ < β imply, by Lemma 4.1.10 and Theorem 4.1.4 (which should be applied for βi = β, i = 1, . . . , k and γi = γ, i = 1, . . . , k), imply that there exists ε6 = ε6 (β, γ) ∈ (0, ε0 ] (defined by the corresponding formulas in Remark 4.1.9 and 5.1.5) such that, for every ε ∈ [0, ε6 ], ~

(ε)

|Φ(Mmax,0,T )| ≤ E sup |g0 (s, eYε (s) )| < ∞.

(9.79)

0≤s≤T

Summarizing the remarks above, one can conclude that the conditions and, therefore, the statement of Theorem 7.2.1 (where, according Remark 7.2.1, condition B40 [¯ γ ] for the pay-off function g0 (t, e~y ) should be replaced by condition B44 [¯ γ , ν¯], with the corresponding parameters γ¯ , ν¯ and γ∗ , which have been pointed ~ε (t). By applying this theorem we get the the following above) hold for processes Y result. ~0 (t) be the bivariate Gaussan log-price process with inTheorem 9.3.1. Let Y ~ε (t) be, for every ε ∈ (0, ε0 ], dependent increments given by relation (9.49) and Y the corresponding approximating bivariate binomial-trinomial-tree log-process with independent increments given by relations (9.53) – (9.54), with parameters given by relation (9.61) and satisfying condition (9.64). Let the pay-off function g0 (t, e~y ) is

480

9

European, knockout, reselling and random pay-off options

given by relation (9.51). Let also condition K40 holds. Then, the following asymptotic relation takes place for the corresponding optimal expected rewards, (0)

(ε)

Φ(Mmax,0,T ) → Φ(Mmax,0,T ) as ε → 0.

(9.80)

(ε)

Let also Mmax,t,T be, for every ε ∈ [0, ε0 ], the class of all Markov moments ~ε (t) such that (a) t ≤ τε,t ≤ T , (b) {τε,0 > s} ∈ σ[Y ~ε (u), t ≤ τε,t for the process Y u ≤ s], t ≤ s ≤ T . Let us define the corresponding reward function, for (t, ~ y ) ∈ [0, T ] × R2 , (ε)

φt (~ y) =

~

E~y,t g0 (τε,t , eYε (τε,t ) ).

sup

(9.81)

(ε) τε,t ∈Mmax,t,T

Conditions B42 [γ] and C15 [β] used for some γ∗ < γ < β imply, by Theorem 4.1.3 (which should be applied for βi = β, i = 1, . . . , k and γi = γ, i = 1, . . . , k), imply that there exists ε2 = ε2 (β, γ) ∈ (0, ε0 ] (defined by the corresponding formulas pointed out in Remarks 4.1.7 and 5.1.5) such that, for every ε ∈ [0, ε2 ] and (t, ~ y ) ∈ [0, T ] × R2 , ~

(ε)

|φt (~ y )| ≤ E~y,t sup |g0 (s, eYε (s) )| < ∞.

(9.82)

t≤s≤T

(ε)

As was pointed in Subsection 5.2.1, the reward functional φt (~ y ) coincides (ε) with the reward functional Φ(Mmax,0,T −t ) for the càdlàg process with independent ~ε,t (s), s ∈ [0, T − t] with shifted distribution of increments Pε,t (s, s + increments Y u, A) = Pε (t + s, t + s + u, A), the shifted pay-off function g0,t (s, e~y ) = g0 (t + s, e~y ) and the initial distribution concentrated in point ~ y. This let one formulate the following theorem, which is a corollary of Theorems 7.2.2 and 9.3.1. ~0 (t) be the bivariate Gaussan log-price process with Theorem 9.3.2. Let Y ~ε (t) be, for every ε ∈ (0, ε0 ], independent increments given by relation (9.49) and Y the corresponding approximating bivariate binomial-trinomial-tree log-process with independent increments given by relations (9.53) – (9.54), with parameters given by relation (9.61) and satisfying condition (9.64), and the pay-off function g0 (t, e~y ) is given by relation (9.51). Then, the following asymptotic relation holds for the corresponding reward functions, for any ~ yε → ~ y0 ∈ R2 as ε → 0, (ε)

(0)

φt (~ yε ) → φt (~ y0 ) as ε → 0.

(9.83)

9.3.3 Convergence of binomial-trinomial-tree reward approximations in the model of reselling of European options We are going now to construct the corresponding bivariate binomial-trinomial tree (0) reward approximations for the optimal expected rewards Φ(Mmax,0,T ).

9.3

Reselling options

481

Let us try to find condition, which would make it possible to choose the jump values δn = δ, n = 1, 2, . . . , Nε independent of n. This would automatically provide a very important recombining condition to hold for the corresponding bivariate binomial-trinomial tree. In this case, the total number of nodes (as a function of the number of tree steps) in the bivariate tree would have not more than cubic rate of growth. The system of inequalities (9.63) takes in this case the form of the following system of two-sided inequalities, νe−αT eαn

1 + e−α 1 + e−α ≤ δ ≤ ν|ρ|−1 e−αT eαn , n = 1, . . . , Nε . 2 2

(9.84)

The inequality at the left hand side is the most strong for n = N , while the inequality at the right hand side is the most strong for n = 1. Thus, the system −α −α of inequalities (9.63) holds if ν 1+e2 ≤ u ≤ ν|ρ|−1 e−αT eα 1+e2 . Consequently, this inequality holds if the following stronger inequality ν ≤ u ≤ ν|ρ|−1 e−αT holds. The remarks above lead to the following condition: R4 : |ρ| < e−αT . If condition R4 holds then interval [ν, ν|ρ|−1 e−αT ] has non-zero length and one can choose any value u ∈ [ν, ν|ρ|−1 e−αT ]. Moreover, one can always choose a rational value of δ in this interval that is useful for numerical calculations. If the value of δ chosen as described above we have the following values for the corresponding parameters of the binomial-trinomial approximation model, q  √  δε,n,1 = σ  = σ NTε ,   q   √  T  δ = δ ε,n,2 = δ   Nε ,   2  σ %n,ε   pε,n,+,+ = pε,n,−,− = 4δn,ε 2  + 4σδ    −αT 2 −2αT −2α −α    = ν e e2αn 1−e + ρνe eαn 1−e , pε,n,+,− = pε,n,−,+            pε,n,+,◦ = pε,n,−,◦         n = 1, . . . , Nε .

= = = =

4u2 2α 4u 2 σn,ε %n,ε 4δ 2  − 4σδ −αT ν 2 e−2αT 2αn 1−e−2α e − ρνe4u 4δ 2 2α 2 σn,ε 1 2 − 2δ 2 ε 1 ν 2 e−2αT 2αn 1−e−2α e 2 − 2u2 2α ,

α

−α

eαn 1−eα

(9.85)

,

Let us consider the discrete time Markov time-space skeleton approximation log-price process YΠε ,n = Yε (tε,n ), n = 0, . . . , Nε . (ε) Let us denote MΠε ,n,Nε of all Markov moments τε,n for the discrete time Markov log-price process YΠε ,r , which take values n, n + 1, . . . , Nε . Let, also, φΠε ,ε,n (y) be the corresponding reward function for the American option with the pay-off function g0 (tε,n, ey ), defined by the following relation, for

482

9

European, knockout, reselling and random pay-off options

y ∈ R1 , n = 0, 1, . . . , Nε , φΠε ,ε,n (~ y) =

E~y,n g0 (tε,τε,n , eYε (tε,τε,n ) ).

sup

(9.86)

(ε) τε,n ∈MΠ ,n,N ε ε

(ε)

Let also ΦΠε ,ε = Φε (MΠε ,0,Nε ) be the corresponding optimal expected reward, defined by the following relation, ΦΠε ,ε =

sup

Eg0 (tε,τε,0 , eYε (tε,τε,0 ) ).

(9.87)

(ε) τε,0 ∈MΠ ,0,N ε ε

~Πε ,n have initial distributions and transition Since the atomic Markov chains Y probabilities concentrated at sets with finite numbers of points, reward functionals |φΠε ,ε,n (~ y )| < ∞, ~ y ∈ Rk , n = 0, 1, . . . , Nε and |ΦΠε ,ε | < ∞, for every ε ∈ (0, ε0 ]. ~Πε ,0 = Y ~ε (0) = ~ Since we consider the case, where the initial state Y yε ∈ R2 is constant vector, the following relation holds, for every ε ∈ (0, ε0 ], ΦΠε ,ε = φΠε ,ε,0 (~ yε ).

(9.88)

~Πε ,n ), n = 0, 1, . . . , Nε is a bivariate In this case the Markov chain (n, Y binomial-trinomial tree model with the initial node (0, ~ yε ) and (n + 1)(2n + 1) nodes of the form (n, ~ yε + ~ yn,l1 ,l2 ), where √ √ ~ yn,l1 ,l2 = ((2l1 − n)σ , l2 δ ), l1 = 0, 1, . . . , n, l2 = 0, ±1, . . . , ±n, n = 0, 1, . . . , Nε .

(9.89)

The following lemma is a variant of Lemma 3.2.2∗ . Lemma 9.3.1. Let YΠε ,n be, for every ε ∈ (0, ε0 ], the binomial tree approximation log-price process introduced above. Then, the log-reward functions φΠε ,ε,n (~ yε + ~ yn,l1 ,l2 ), l1 = 0, 1, . . . , n, l2 = 0, ±1, . . . , ±n, n = 0, 1, . . . , Nε , are, for every ~ yε ∈ R2 , n = 0, . . . , Nε , the unique solution for the following finite recurrence system of linear equations,   φΠε ,ε,n (~ yε + ~ yn,l1 ,l2 ) = g0 (T, e~yε +~yn,l1 ,l2 ),      l1 = 0, 1, . . . , Nε , l2 = 0, ±1, . . . , ±Nε ,     ~ yε +~ yn,l1 ,l2  )   φΠε ,ε,n (~yε + ~yn,l1 ,l2 ) = g0 (tε,n , e     ∨ φΠε ,ε,n+1 (~yε + ~yn,l1 +1,l2 +1 )pε,n,+,+     + φΠε ,ε,n+1 (~ yε + ~ yn+1,l1 +1,l2 )pε,n,+,◦ (9.90)  + φ (~ y y )p ε+~ ε,n,+,− Π ,ε,n+1 n+1,l +1,l −1  ε 1 2     + φΠε ,ε,n+1 (~ yε + ~ yn+1,l1 ,l2 +1 )pε,n,−,+       + φΠε ,ε,n+1 (~ yε + ~ yn+1,l1 ,l2 )pε,n,−,◦       + φ y y ε+~ Πε ,ε,n+1 (~ n+1,l1 ,l2 −1 )pε,n,−,− ,     l1 = 0, 1, . . . , n, l2 = 0, ±1, . . . , ±n, n = Nε − 1, . . . , 0.

9.3

Reselling options

483

It is useful to note that the system of equations Lε,n = 1+· · ·+(Nε +1)(2Nε + 1) = 16 (Nε + 1)(4Nε2 + 11Nε + 6) has a cubic rate of growth as function of Nε . The following theorem supplements the statement of Theorem 9.3.1. ~0 (t) be the bivariate Gaussan log-price process with Theorem 9.3.3. Let Y ~ε (t) be, for every ε ∈ (0, ε0 ], independent increments given by relation (9.49) and Y the corresponding approximating bivariate binomial-trinomial-tree log-process with independent increments given by relations (9.53) – (9.54), with parameters given by relation (9.85), and the pay-off function g0 (t, e~y ) is given by relation (9.51). Let also conditions K40 and R4 hold. Then, the following asymptotic relation takes place, (0) ΦΠε ,ε → Φ(Mmax,0,T ) as ε → 0. (9.91) (ε)

Proof. To estimate the difference Φ(Mmax,0,T ) − ΦΠε ,ε , we can use Theorem 7.3.2. In this case, d(Πε ) = , and the following inequalities take place, for β ≥ 0. ∆β (Yε,1 (·), , T ) = Eeβ|Wε,1,1 | − 1 ≤ eβσ

√ 

− 1,

(9.92)

and ∆β (Yε,2 (·), , T ) = max (Eeβ|Wε,n,2 | − 1) 1≤n≤N √ βδ 

≤e

− 1.

(9.93)

All conditions of Theorem 5.2.2 hold, since they are just a part of conditions Theorem 7.2.1. Inequality (5.95) given in Theorem 5.2.2 and applied for some β > γ∗ yields, in this case, the following relation, (ε)

0 ≤ Φ(Mmax,T ) − ΦΠε ,ε √ β−γ∗ √ ≤ M11  + M12,1 (eβσ  − 1) β + M12,2 (eβδ





− 1)

β−γ∗ β

) → 0 as ε → 0,

(9.94)

where constants M11 and M12,1 , M12,2 are given in Remark 5.2.12. Relations (9.80) and (9.94) imply relation (9.91) to hold.  Let us now comment condition R4 . It is, in fact, a condition of weak correlation between the noise terms of the price and stochastic volatility processes. The restriction imposed by this condition can be lightened by dividing the time interval into smaller parts and constructing a tree that has k different values of jumps on different time intervals under assumption that the following condition holds for some k ≥ 1:

484

9

European, knockout, reselling and random pay-off options

R5 [k]: |ρ| < e−

αT k

.

Let us, for example, shortly describe the case where k = 2. Condition R5 [k] guarantees the existence of two intervals with non-zero αT αT length, [νe− 2 , ν|ρ|−1 e−αT ] and [ν, ν|ρ|−1 e− 2 ] such that the constant values δn = δ(1) , n = 1, . . . , [ N2ε ] and δn = δ(2) , n = [ N2ε ] + 1, . . . , Nε can be chosen, respectively, from the first and the second interval, when choosing parameters of binomial-trinomial approximation model according relation (9.61) such that they satisfy relations (9.62). Moreover, these values can be chosen to be positive rational numbers. This makes it possible to represent them in the following form δ(1) = m1 δ and δ(2) = m2 δ, where δ is a positive rational number and m1 and m2 are positive integers. Note that the recombining condition hold for each subinterval. The corresponding tree for the trinomial component has, in this case, at most Nε,n = √ m1 min(n, [ N2ε ])+m2 (max(n, [ N2ε ])−[ N2ε ])+1 nodes located on the grid −Nε,n δ , √ √ √ . . . , −δ , 0, δ , . . . , Nε,n δ , after n steps. The corresponding bivariate binomial-trinomial tree has not more than (n + 1)Nε,n new nodes, after n-th step. This function has not more than the quadratic rate of growth as a function of n. Thus, the total number of nodes in the bivariate tree has not more than cubic rate of growth as a function of steps. A backward algorithm for finding the corresponding reward functions and convergence results are analogous to those presented above. In conclusion, we would like to note that the backward relations (9.90) given in Lemma 9.3.1 let one also find approximative optimal stopping time-space domains ∗ ∗ ~ε (tε,n ) ∈ = min(n ≥ 0 : Y , n = 0, 1, . . . , Nε such that stopping times τε,0 Dε,n ~



∗ ∗ , eYε (τε,0 ) ) = φΠε ,ε,0 (~ ) ∧ Nε possess the property that E~yε ,0 g0 (τε,0 yε ) = ΦΠε ,ε . Dε,n ∗ yε +~ yn,l1 ,l2 such that Domain Dε,n includes, for every n = 0, . . . , Nε , all points ~ φΠε ,ε,n (~ yε + ~ yn,l1 ,l2 ) = g0 (tε,n , e~yε +~yn,l1 ,l2 ).

9.4 Reward approximations for American-type options with random pay-off In this section, we present results about convergence of reward functions for American-type options with random pay-off functions.

9.4.1 American-type options with random pay-off There are at least two reasons to consider such option type contracts.

9.4

American-type options with random pay-off

485

First, this model can be connected with more complex two-stage option type contracts, where the pay-off functions can be chosen randomly from some class of possible pay-offs, at the first stage, and then some American-type option contract is realized with the chosen pay-off function, at the second stage. Second, the random choice of pay-off functions from some class of admissible pay-offs let one make results of the corresponding numerical experiments more representative. In order to be able to concentrate on the concept of options with random payoff, we restrict consideration by a simple discrete time model, where a log-price process is an univariate random walk and a pay-off function does not depend on parameter ε. Let us consider a family of log-price processes, which depend on some perturbation parameter ε ∈ [0, ε0 ] and are defined by the following stochastic transition dynamic relation, Yε,n+1 = Yε,n + Wε,n+1 , n = 0, 1, . . . ,

(9.95)

where: (a) Yε,0 is a real-valued random variable; (b) Wε,n , n = 1, 2, . . . , N is a sequence of real-valued, independent random variables; (c) the random variable Yε,0 and the sequence of random variables Wε,n , n = 1, 2, . . . , N are independent. Let us assume the following condition of moment exponential compactness holds for some β ≥ 0: C30 [β]: limε→0 max1≤n≤N Eeβ|Wε,n | < K106 , for some 1 ≤ K106 < ∞. Condition C30 [β] implies that there exist ε74 ∈ (0, ε0 ] such that, for ε ∈ [0, ε74 ], max Eeβ|Wε,n | < K106 .

1≤n≤N

(9.96)

Let also g(n, ey ), y ∈ R1 , n = 0, 1, . . . be real-valued Borel pay-off functions. A pay-off rate of growth condition takes in this case the form of the following condition assumed to hold for some γ ≥ 0: B51 [γ]: max0≤n≤N supy∈R1 0 ≤ L98 < ∞.

|g(n, ey )| 1+L98 eγ|y|

< L97 , for some constants 0 < L97 < ∞ and

Condition B51 [γ] implies that there exist ε75 ∈ (0, ε0 ] such that, for ε ∈ [0, ε75 ], max sup

0≤n≤N y∈R1 (ε)

|g(n, y)| < L97 . 1 + L98 eγ|y|

(9.97)

Let Mmax,n,N be, for every n = 0, . . . , N , the class of all Markov moments for the log-price process Yε,n such that (a) n ≤ τε,n ≤ N and (b) {τε,n = m} ∈ σ[Yε,r , n ≤ r ≤ m}, m = n, . . . , N .

486

9

European, knockout, reselling and random pay-off options

In this case the reward function of the American-type option is defined, for every y ∈ R1 , n = 0, . . . , N , by the following relation, φ(ε) n (y) =

Ey,n g(τε,n , eYε,τε,n ).

sup

(9.98)

(ε) τε,n ∈Mmax,n,N

and the optimal expected reward is defined by the following relation, Φε =

sup

Eg(τε,0 , eYε,τε,0 ).

(9.99)

(ε) τε,0 ∈Mmax,0,N

By Theorem 5.1.1∗ , conditions C30 [β] and B51 [γ], assumed to hold for some 0 ≤ γ ≤ β, imply that for ε ∈ [0, ε76 ], where ε76 = ε74 ∧ ε75 , and y ∈ R1 , n = 0, 1, . . . , N , γ|y| φ(ε) < ∞, (9.100) n (y) ≤ M113 + M114 e where



M113 = L97 , M114 = L97 L98 K106β .

(9.101)

Let additionally assume that the following condition holds, for some β ≥ 0: D36 [β]: limε→0 Eeβ|Yε (0)| ≤ K102 , for some 1 ≤ K107 < ∞. Condition D36 [β] implies that there exists ε77 ∈ (0, ε0 ] such that for any ε ∈ [0, ε77 ], Eeβ|Yε (0)| ≤ K107 . (9.102) By Theorem 5.1.3∗ , conditions C30 [β], D36 [β] and B51 [γ], assumed to hold for some 0 ≤ γ ≤ β, imply that for ε ∈ [0, ε78 ], where ε78 = ε74 ∧ ε75 ∧ ε77 , Φε ≤ M115 < ∞, where



(9.103) γ

β M115 = L97 + L97 L98 K106β K107 .

(9.104)

Let us now consider the model, where pay-off functions g(n, y) = g(n, y, ω 0 ), y ∈ R, n = 0, 1, . . . do not depend on parameter ε but are themselves random Borel (in y) functions defined on some probability space < Ω0 , F 0 , P 0 >. An analogue of a pay-off rate of growth condition can be reformulated in this case in the following way: |g(n, y)| 0 0 0 0 B52 [γ]: max0≤n≤N supy∈R1 1+L 00 eγ|y| < L , where L = L (ω ) is a random variable taking values in the interval (1, ∞) with probability 1, while L00 is a nonnegative constant.

Let us assume that the sequences of independent random variables Wε,n = Wε,n (ω 00 ), n = 1, 2, . . . , N are defined for all ε ∈ [0, ε0 ] on the same probability space < Ω00 , F 00 , P 00 >.

9.4

American-type options with random pay-off

487

If it is not so, we can always replace random variables Wε,n by random vari0 −1 ables Wε,n = Fε,n (ρn ), n = 1, . . . , N , where ρn , n = 1, . . . , N is a sequence of in−1 dependent random variables uniformly distributed in interval [0, 1] and Fε,n (v) = inf(u : Fε,n (v) > u), n = 1, . . . , N are inverse functions for the distribution functions Fε,n (v) = P{Wε,n ≤ v}, n = 1, . . . , N . d

0 0 Random variables Wε,n = Wε,n , n = 1, . . . , N , are independent, and are defined for all ε ∈ [0, ε0 ] on the same probability space < Ω00 , F 00 , P 00 >, where random variables ρn , n = 1, . . . , N are defined. Now, let us define the probability space < Ω, F, P >, where Ω = Ω0 × Ω00 , F = σ(F 0 × F 00 ) (that is the minimal σ-algebra of subsets of Ω containing all rectangles B ×C, B ∈ F 0 , C ∈ F 00 ), and the probability measure P (A) is a product measure on F, which is uniquely defined by its values on rectangles P (B × C) = P 0 (B) · P 00 (C), B ∈ F 0 , C ∈ F 00 , via the measure continuation theorem. The random variables g(n, y) = g(n, y, ω 0 ), y ∈ R, n = 0, 1, . . . , ω 0 ∈ Ω0 can be considered as functions of ω = (ω 0 , ω 00 ) ∈ Ω = Ω0 × Ω00 , which, in fact, are functions of ω 0 ∈ Ω0 . Analogously, the random variables Wε,n = Wε,n (ω 00 ), ω 00 ∈ Ω00 , n = 1, 2, . . . can be considered as functions of ω = (ω 0 , ω 00 ) ∈ Ω = Ω0 × Ω00 , which, in fact, are function of ω 00 ∈ Ω00 , for all n = 1, 2, . . . , N . The following condition is consistent with the construction described above:

Q1 : The family of random pay-of functions hg(n, y), y ∈ R, n = 0, . . . , N i and the family of random variables hWε,n , n = 1, 2, . . . , N i are independent, for every ε ∈ [0, ε0 ]. For simplicity we restrict consideration by the case of, where pay-off functions are càdlàg random processes in the log-price argument, i.e., the following condition holds: S1 : g(n, y), y ∈ R1 is, for every n = 0, 1, . . ., a càdlàg (continuous from the right and possessing limits from the left in every point y ∈ R) random function. Let us denote by Yg,n the random set of continuity points for the random function g(n, y), y ∈ R1 , for n = 0, 1, . . .. As well known, the random set Yg,n = R1 \ Yg,n of discontinuity points for the càdlàg random function g(n, y), y ∈ R is at most countable for every n = 0, 1, . . .. (ε) (ε) ˆε = Φ ˆ ε (ω 0 ), respectively, the Let us denote by φˆn (y) = φˆn (y, ω 0 ) and Φ reward function and the optimal expected reward, defined for the log-price process Yε,n , n = 0, 1, . . . , N for the pay-off function g(n, y, ω 0 ), y ∈ R, n = 0, . . . , N , for every ω 0 ∈ Ω0 . (ε) ˆ ε as random variables defined, for every ε ∈ We can consider φˆn (y) and Φ [0, ε0 ], on the probability space < Ω0 , F 0 , P 0 >.

488

9

European, knockout, reselling and random pay-off options

Let B ∈ F 0 be the set of ω 0 ∈ Ω0 such that L0 (ω 0 ) < ∞, where L0 is the random variable penetrating condition B52 [γ], i.e. B = {ω 0 ∈ Ω0 : L0 (ω 0 ) < ∞}.

(9.105)

Condition, B52 [γ] implies that P 0 (B) = 1. Condition B51 [γ] holds for functions g(n, ey , ω 0 ), n = 1, . . . , N for every ω 0 ∈ B. If, also, condition C30 [γ] holds and 0 ≤ γ ≤ β, then, by the above remarks related to relation (9.100), the following relation holds, for every ω 0 ∈ B and ε ∈ [0, ε74 ], φˆε,n (y, ω 0 ) < ∞. (9.106) If, additionally condition D36 [β] holds and 0 ≤ γ ≤ β, then, by the above remarks related to relation (9.100), the following relation holds, for every ω 0 ∈ B and ε ∈ [0, ε79 ], where ε79 = ε74 ∨ ε77 , ˆ ε (ω 0 ) < ∞. Φ

(9.107)

9.4.2 Convergence of rewards for American-type options with random pay-off Let us denote, for ε ∈ [0, ε0 ] and n = 1, 2, . . . , N , Pε,n (A) = P{Wε,n ∈ A}, A ∈ BR1 .

(9.108)

This let us assume that the following convergence condition holds: J24 : (a) Pε,n (·) ⇒ P0,n (·) as ε → 0; (b) distributions P0,n (A), n = 1, . . . , N have no atoms. Let us denote by Zg,n the set of points of stochastic continuity of the random function g(n, y), y ∈ R1 , for n = 0, 1, . . .. Condition S1 implies that the non-random sets Zg,n , n = 0, 1, . . . are at most countable subsets of R1 . The following theorem takes place. Theorem 9.4.1. Let conditions C30 [β] and B52 [γ] hold, and, either 0 = γ = β or 0 < γ < β. Let, also, conditions Q1 , S1 and J24 hold. Then, for every n = 0, . . . , N , the following asymptotic relation holds for any yε → y0 ∈ Zg,n , a.s. ˆ(0) φˆ(ε) n (yε ) −→ φn (y0 ) as ε → 0.

(9.109)

Proof. Let yˆε,n = yε,n (ω 0 ), n = 0, . . . , N, ε ∈ [0, ε0 ] be random variables dea.s. fined on a probability space < Ω0 , F 0 , P 0 > such that yˆε,n −→ yˆ0,n as ε → 0 for n = 0, . . . , N and yˆ0,n is, for every n = 0, . . . , N a random variable, which is a point of continuity with probability 1, for the random function g(n, ey ), y ∈ R1 .

9.4

American-type options with random pay-off

489

This implies that P 0 (A) = 1, where the random event A is defined by the following relation, A = {ω 0 ∈ Ω0 : y0,n (ω 0 ) ∈ Yg,n , n = 0, . . . , N }.

(9.110)

Let us prove that conditions of Theorem 5.3.1∗ holds for the pay-off function g(n, y, ω 0 ), y ∈ R1 and the log-price process Yε,n , n = 0, . . . , N , for every ω 0 ∈ A∩B, where the random events A and B are defined, respectively, in relations (9.110) and (9.105). This will imply that the following asymptotic relation takes place, for every ω 0 ∈ A ∩ B and n = 0, . . . , N , 0 0 0 0 ˆ(0) φˆ(ε) n (yε,n (ω ), ω ) → φn (y0,n (ω ), ω ) as ε → 0.

(9.111)

¯ ∗ (its univariate variant). Condition C30 [β] coincides with condition C7 [β] As was mentioned above condition B52 [γ] implies that condition B51 [γ] holds for pay-off functions g(n, ey , ω 0 ), for every ω 0 ∈ B. The latter condition is just, an univariate variant of condition B6 [¯ γ ]∗ . Condition S1 implies that the realization g(n, ey , ω 0 ), y ∈ R is a càdlàg function for every n = 0, 1, . . . , N . Thus, the set Yg,n (ω 0 ) of discontinuity points for the realization g(n, y, ω 0 ), y ∈ R1 is at most countable set, for every n = 0, 1, . . . , N . The following relation holds, for any n = 0, . . . , N and yε (ω 0 ) → y0 (ω 00 ) ∈ Yg,n (ω 0 ) as ε → 0, 0

0

g(n, eyε (ω ) , ω 0 ) → g(n, ey0 (ω ) , ω 0 ) as ε → 0.

(9.112)

Thus, condition I3∗ (its univariate variant) holds for the pay-off functions g(n, ey , ω 0 ), with sets Yn = Yg,n (ω 0 ), n = 0, . . . , N . Condition J24 (a) implies that condition J4∗ (a) holds. Also since Yg,n (ω 0 ), n = 0, . . . , N are at most countable sets, condition J41 (b) implies that condition J4∗ (b) also holds. Therefore, all conditions of Theorem 5.3.1∗ holds for the pay-off funnctions g(n, y, ω 0 ) and the log-price process Yε,n , for every ω 0 ∈ A ∩ B. By applying Theorem 5.3.1∗ to the model with the above pay-off function and the log-price processes, we get relation (9.111). Let us take an arbitrary n = 0, . . . , N and non-random yε → y0 ∈ Zg,n as ε → 0. Let us now define the random event, Cy0 = {ω 0 ∈ Ω0 : y0 ∈ Yg,n (ω 0 )}.

(9.113)

Every point of stochastic continuity for càdlàg random function g(n, ey ), y ∈ R1 is its point of continuity with probability 1. Thus, P (Cy0 ) = 1. That is why, relation (9.111) holds for every ω 0 ∈ A ∩ B ∩ Cy0 for yε → y0 as ε → 0.

490

9

European, knockout, reselling and random pay-off options

Since P (A ∩ B ∩ Cy0 ) = 1, this completes the proof.  Remark 9.4.1. Relation (9.111) proves that, under conditions of Theorem a.s (0) (ε) y0,n ) as ε → 0 holds, for every n = 0, . . . , N if: yε,n ) −→ φˆn (ˆ 9.4.1, relation φˆn (ˆ (a) and random variables yˆε,n , n = 0, . . . , N, ε ∈ [0, ε0 ] are defined on a probability a.s. space < Ω0 , F 0 , P 0 >; (b) yˆε,n −→ yˆ0,n as ε → 0 for n = 0, . . . , N ; and (c) yˆ0,n is, for every n = 0, . . . , N a random variable, which is a point of continuity with probability 1, for the random function g(n, ey ), y ∈ R1 . Remark 20.4.2. If g(n, ey ), y ∈ R1 is a continuous random function, then condition J24 (b) can be omitted, since, in this case, sets Yg,n (ω 0 ) = ∅, n = 0, . . . , N , and, thus, condition condition J4∗ (b) automatically holds. Let additionally assume that the following convergence condition holds for initial distribution Pε (A) = P{Yε (0) ∈ A}: K41 : (a) Pε (·) ⇒ P0 (·) as ε → 0; (b) the distribution P0 (A) has no atoms. The following theorem takes place. Theorem 9.4.2. Let conditions C30 [β], D36 [β] and B52 [γ] hold, and, either 0 = γ = β or 0 < γ < β. Let, also, conditions Q1 , S1 , J24 and K41 hold. Then, the following asymptotic relation holds, a.s. ˆ ˆ ε −→ Φ Φ0 as ε → 0.

(9.114)

Proof. In this case all conditions of Theorem 5.3.2∗ hold for the log-price processes Yε,n , n = 0, . . . , N and the pay-off function g(n, y, ω 0 ), y ∈ R1 , for every ω 0 ∈ A ∩ B ∩ D, where the random events A, B and D are defined, respectively, in relations (9.110), (9.105) and (9.106). ¯ Indeed, condition D36 [β] is, just, an univariate variant of condition D7 [β]. Condition K41 (a) is, just, a variant of condition K4 (a). Also, condition K41 (b) implies, in this case, that condition K4 (b) holds, since Yg,n (ω 0 ), n = 0, . . . , N are at most countable sets. Other conditions of Theorem 5.3.2∗ also hold that was checked in the proof of Theorem 9.4.1. By applying Theorem 5.3.2∗ to the model, with the pay-off function g(n, y, ω 0 ) and the log-price process Yε,n , we get the following asymptotic relation, for every ω 0 ∈ A ∩ B ∩ D, ˆ ε (ω 0 ) → Φ ˆ 0 (ω 0 ) as ε → 0. Φ (9.115) Since P (A ∩ B ∩ D) = 1, relation (9.115) proofs the theorem. 

9.4.3 Approximation of rewards based on skeleton approximations for a random pay-off Let us replace condition S1 by slightly stronger condition:

9.4

American-type options with random pay-off

491

S2 : g(n, y), y ∈ R1 is, for every n = 0, . . ., a nondecreasing càdlàg (continuous from the right and possessing limits from the left in every point y ∈ R) random function. Let us positive real numbers hε and positive integer numbers nε are chosen such that hε → 0 and hε nε → ∞ as ε → 0. Now let us approximate the random functions g(n, y) by random step-wise càdlàg functions gε (n, y) defined for every ε ≥ 0 by the following relation,

gε (n, y) =

 g(n, −hε nε )      g(n, hε (l + 1))     g(n, hε l)        g(n, hε nε )

if y < −hε nε , if hε l ≤ y < hε (l + 1), l = −nε , . . . , −1, if hε l ≤ y < hε (l + 1), l = 0, . . . , nε − 1, if y ≥ hε nε .

(9.116)

(ε) (ε) ˜ε = Φ ˜ ε (ω 0 ),indexΦ (optimal exLet us denote by φ˜n (y) = φ˜n (y, ω 0 ) and Φ ˜ ε respectively, the reward function and the optimal expected pected reward):!Φ reward, defined for the log-price process Yε,n , n = 0, 1, . . . , N for the pay-off function gε (n, y, ω 0 ), y ∈ R, n = 0, . . . , N , for every ω 0 ∈ Ω0 . (ε) ˜ ε as random variables defined for every ε ≥ 0 We can consider φ˜n (y) and Φ on the probability space < Ω0 , F 0 , P 0 >. As follows from the proof given below, namely, relation (9.119) the reward (ε) (ε) function |φ˜n (y, ω 0 )| < ∞ if |φˆn (y, ω 0 )| < ∞, i.e., it is so, if conditions C30 [β] and B52 [γ] holds and 0 ≤ γ ≤ β, for every ω 0 ∈ B and ε ∈ [0, ε74 ]. ˜ ε (ω 0 )| < ∞ Also, relation (9.119) imply that the optimal expected reward |Φ ˆ ε (ω 0 )| < ∞, i.e., it is so, if conditions C30 [β], D36 [β] and B52 [γ] hold and if |Φ 0 ≤ γ ≤ β, for every ω 0 ∈ B ∩ D and ε ∈ [0, ε79 ].

Theorem 9.4.3. Let conditions C30 [β] and B52 [γ] hold, and, either 0 = γ = β or 0 < γ < β. Let, also, conditions Q1 , S2 and J24 hold. Then, for every n = 0, . . . , N , the following asymptotic relation holds for any yε → y0 ∈ Zg,n , a.s. ˜(0) φ˜(ε) n (y) −→ φn (y) as ε → 0.

(9.117)

Proof. It is analogous to the proof of Theorem 9.4.1. Let again yˆε,n = yε,n (ω 0 ), n = 0, . . . , N, ε ∈ [0, ε0 ] be random variables defined a.s. on a probability space < Ω0 , F 0 , P 0 > such that yˆε,n −→ yˆ0,n as ε → 0 for n = 0, . . . , N and yˆ0,n is, for every n = 0, . . . , N a random variable, which is a point of continuity wit probability 1, for the random function g(n, ey ), y ∈ R1 . Let us prove that conditions of Theorem 5.3.1∗ holds for the pay-off functions gε (n, y, ω 0 ), y ∈ R1 and the log-price process Yε,n , n = 0, . . . , N for every ω 0 ∈ A ∩ B, where the random events A and B are defined, respectively, in relations (9.110) and (9.105).

492

9

European, knockout, reselling and random pay-off options

This will imply that the following asymptotic relation takes place, for every ω 0 ∈ A ∩ B and n = 0, . . . , N , 0 0 0 0 ˜(0) φ˜(ε) n (yε,n (ω ), ω ) → φn (y0,n (ω ), ω ) as ε → 0.

(9.118)

¯ ∗ (its univariate variant). Condition C30 [β] coincides with condition C7 [β] Condition S2 implies that the following inequalities hold, for every ε ∈ [0, ε0 ] and n = 0, 1, . . . , N , sup y∈R

|g(n, hε l)| |gε (n, y)| ≤ max −nε ≤l≤nε 1 + L00 eγ|hε l| 1 + L00 eγ|y| ≤ sup y∈R

|g(n, y)| . 1 + L00 eγ|y|

(9.119)

Therefore, condition B52 [γ] implies that the following relation holds, for ω 0 ∈ B and ε ∈ [0, ε0 ], max sup

0≤n≤N y∈R

|gε (n, y, ω 0 )| < L0 (ω 0 ) < ∞. 1 + L00 eγ|y|

(9.120)

Relation (9.120) imply that condition B6 [¯ γ ] (its univariate variant) holds for function g(n, ey , ω 0 ), for every ω 0 ∈ B. Let n = 0, . . . , N and yε (ω 0 ) → y0 (ω 0 ) ∈ Yg,n (ω 0 ). In this case, for ε ∈ [0, εd (ω 0 )], 0

0

|gε (n, eyε (ω ) , ω 0 ) − g(n, ey0 (ω ) , ω 0 )| 0

0

≤ sup |g(n, ey0 (ω ) ) − g(n, ey0 (ω )+u )|,

(9.121)

|u|≤d

and, thus, 0

0

lim |g(n, eyε (ω ) , ω 0 ) − g(n, ey0 (ω ) , ω 0 )|

ε→0

0

0

≤ sup |g(n, ey0 (ω ) ) − g(n, ey0 (ω )+u )| → 0 as d → 0.

(9.122)

|u|≤d

Relation (9.122) is an analogue of relation (9.112) given in the proof of Theorem 9.4.1. Thus, condition I3∗ (its univariate variant) holds for the pay-off functions g(n, ey , ω 0 ), with sets Yn = Yg,n (ω 0 ), n = 0, . . . , N . Condition J24 (a) implies that condition J4∗ (a) holds. Also since Yg,n (ω 0 ), n = 0, . . . , N are at most countable sets, condition J24 (b) implies that condition J4∗ (b) also holds. Therefore, all conditions of Theorem 5.3.1∗ holds for the pay-off functions g(n, y, ω 0 ) and the log-price process Yε,n , for every ω 0 ∈ A ∩ B. By applying this theorem to the model with the above pay-off function and the log-price processes, we get relation (9.118).

9.4

American-type options with random pay-off

493

The remaining proof repeats the corresponding part in the proof of Theorem 9.4.1. . Remark 9.4.3. Relation (9.118) proves that, under conditions of Theorem a.s (0) (ε) y0,n ) as ε → 0 holds, for every n = 0, . . . , N if: yε,n ) −→ φˆn (ˆ 9.4.3, relation φˆn (ˆ (a) random variables yˆε,n , n = 0, . . . , N, ε ∈ [0, ε0 ] are defined on a probability a.s. space < Ω0 , F 0 , P 0 >; (b) yˆε,n −→ yˆ0,n as ε → 0 for n = 0, . . . , N ; and (c) yˆ0,n is, for every n = 0, . . . , N a random variable, which is a point of continuity wit probability 1, for the random function g(n, ey ), y ∈ R1 . Remark 9.4.4. If g(n, ey ), y ∈ R1 is a continuous random function, then condition J24 (b) can be omitted, since, in this case, sets Yg,n (ω 0 ) = ∅, n = 0, . . . , N , and, thus, condition condition J4∗ (b) automatically holds. The following theorem, which is analogue of Theorem 9.4.2, also takes place. Theorem 9.4.4. Let conditions C30 [β], D36 [β] and B52 [γ] hold, and, either 0 = γ = β or 0 < γ < β. Let, also, conditions Q1 , S2 , J24 and K41 hold. Then, the following asymptotic relation holds, a.s. ˜ ˜ ε −→ Φ Φ0 as ε → 0.

(9.123)

9.4.4 Convergence of means for rewards of American-type options with random pay-off Let us now assume that the following condition, which is stronger than condition B52 [γ], holds: |g(n, y)| 0 0 0 0 B53 [γ]: max0≤n≤N supy∈R 1+L 00 eγ|y| < L , where L = L (ω ) is a random variable taking values in the interval (1, ∞) with probability 1 and such that EL0 < ∞, while L00 is a non-negative constant.

The following lemmas give upper bounds for means of the reward functions ˆ φε,n (y) and φ˜ε,n (y). Lemma 9.4.1. Let conditions C30 [β] and B53 [γ] hold, and 0 ≤ γ ≤ β. Let, also, conditions Q1 and S1 hold. Then, the following inequalities hold for ε ∈ [0, ε74 ] and any y ∈ R1 , n = 0, . . . , N , γ|y| E|φˆ(ε) < ∞. n (y)| ≤ M116 + M117 e

where,



M116 = EL0 , M117 = EL0 L00 K106β .

(9.124)

(9.125)

Lemma 9.4.2. Let conditions C30 [β] and B53 [γ] hold, and 0 ≤ γ ≤ β. Let, also, conditions Q1 and S2 hold. Then, the following inequalities hold for ε ∈ [0, ε74 ] and any y ∈ R1 , n = 0, . . . , N , γ|y| E|φ˜(ε) < ∞. n (y)| ≤ M116 + M117 e

(9.126)

494

9

European, knockout, reselling and random pay-off options

Proof. Condition B53 [γ] implies condition B52 [γ] to hold. Inequality (9.100) and condition B52 [γ] implies that, for every ω 0 ∈ B and ε ∈ [0, ε74 ], γ 0 0 0 0 0 00 N β γ|y| < ∞. (9.127) |φˆ(ε) n (y, ω )| ≤ L (ω ) + L (ω )L K106 e Taking into account that P (B) = 1 and computing expectations for random variables on the left and right in inequality (9.127), we get inequality (9.124). The proof of Lemma 9.4.2 is analogous.  The following lemmas give upper bounds for means of the reward functionals ˆ and Φ. ˜ Φ Lemma 9.4.3. Let conditions C30 [β], D36 [β] and B53 [γ] hold, and 0 ≤ γ ≤ β. Let, also, conditions Q1 and S1 hold. Then, the following inequalities hold, for ε ∈ [0, ε79 ], ˆ ε | ≤ M118 < ∞. E|Φ (9.128) where,



γ

β M118 = EL0 + EL0 L00 K106β K107 .

(9.129)

Lemma 9.4.4. Let conditions C30 [β], D36 [β] and B53 [γ] hold, and 0 ≤ γ ≤ β. Let, also, conditions Q1 and S2 hold. Then, the following inequalities holds, for ε ∈ [0, ε79 ], ˜ ε | ≤ M118 < ∞. E|Φ (9.130) Proof. Inequality (9.103) and condition B54 [γ] implies that, for every ω 0 ∈ B and ε ∈ [0, ε79 ], Nγ

γ

ˆ ε (ω 0 )| ≤ L0 (ω 0 ) + L0 (ω 0 )L00 K β K β < ∞. |Φ 106 107

(9.131)

Taking into account that P (B) = 1 and computing expectations for random variables on the left and right in inequality (9.131), we get inequality (9.128). The proof of Lemma 9.4.4 is analogous.  The following two theorems give conditions of convergence for the means of (ε) (ε) reward functions φˆn (y) and φ˜n (y). Theorem 9.4.5. Let conditions C30 [β] and B53 [γ] hold, and, either 0 = γ = β or 0 < γ < β. Let, also, conditions Q1 , S1 and J24 hold. Then, for every n = 0, . . . , N , the following asymptotic relation holds, for any yε → y0 ∈ Zg,n , ˆ(0) Eφˆ(ε) n (yε ) → Eφn (y0 ) as ε → 0.

(9.132)

Theorem 9.4.6. Let conditions C30 [β] and B53 [γ] hold, and, either 0 = γ = β or 0 < γ < β. Let, also, conditions Q1 , S2 and J24 hold. Then, for every n = 0, . . . , N , the following asymptotic relation holds, for any yε → y0 ∈ Zg,n , ˜(0) Eφ˜(ε) n (yε ) → Eφn (y0 ) as ε → 0.

(9.133)

9.4

American-type options with random pay-off

495

Proof. Inequality (9.127) given in the proof of Lemma 9.4.1 means that the Nγ random variable |φˆε,n (y)| ≤ L0 + L0 L00 K β eγ|y| a.s, for εÊ ∈ [0, ε74 ]. Nγ

106

The random variable L0 + L0 L00 K106β eγ|y| has, according Lemma 9.4.1, a finite expectation. This and asymptotic relation (9.109) given in Theorem 9.4.1 imply, by the Lebesgue theorem, relation (9.132). The proof of Theorem 9.4.6 is analogous.  The following two theorems give conditions of convergence for the means of ˆ ε and Φ ˜ ε. reward functions Φ Theorem 9.4.7. Let conditions C30 [β], D36 [β] and B53 [γ] hold, and, either 0 = γ = β or 0 < γ < β. Let, also, conditions Q1 , S1 , J24 and K41 hold. Then, the following asymptotic relation holds, ˆ ε → EΦ ˆ 0 as ε → 0. EΦ

(9.134)

Theorem 9.4.8. Let conditions C30 [β], D36 [β] and B53 [γ] hold, and, either 0 = γ = β or 0 < γ < β. Let, also, conditions Q1 , S2 , J24 and K41 hold. Then, the following asymptotic relation holds, ˜ ε → EΦ ˜ 0 as ε → 0. EΦ

(9.135)

Proof. Inequality (9.131) given in the proof of Lemma 9.4.3 means that the γ γ ˆ ε | ≤ L0 + L0 L00 K N β K β a.s., for εÊ ∈ [0, ε79 ]. random variable |Φ 106 107 Nγ

γ

β has, according Lemma 9.4.3, a finite The random variable L0 + L0 L00 K106β K107 expectation. This and asymptotic relation (9.114) given in Theorem 9.4.2 imply, by the Lebesgue theorem, relation (9.134). The proof of Theorem 9.4.8 is analogous. 

10 Results of experimental studies In this chapter, we presents some experimental results, which illustrate theoretical reward approximation results presented in two volumes of the present book. In Section 10.1, we present results of experimental studies of rates of convergence for binomial- and trinomial-tree reward approximations for American options for discrete time log-price processes represented by Gaussian random walks. In Section 10.2, we present analogous results for space-skeleton reward approximations for standard call American options, for discrete time log-price processes represented by Gaussian and compound Gaussian random walks and digital American options, with random pay-off represented by a compound Poisson-gamma processes, for discrete time log-price processes represented by Gaussian random walks. In Section 10.3 we present some illustrating examples concerned binomial- and trinomial-tree reward approximations for American-type options for continuous time models. These are, first, an exchange of assets option, for log-price process, which is a bivariate Brownian motion, second, a standard American call option, for log-price process represented by a mean-reverse Ornstein-Uhlenbeck process, and, third, a reselling option, with a log-price process represented by Brownian motion correlated with a mean-reverse Ornstein-Uhlenbeck process representing stochastic implied volatility. In Section 10.4, we describe and comment basic variants of two- and threesteps reward approximation algorithms based on results presented in two volumes of the present book. Experimental results presented in this chapter are from papers by Lundgren and Silvestrov (2009, 2011), Silvestrov and Lundgren (2011) and Silvestrov and Li (2013, 2015).

10.1 Binomial- and trinomial-tree reward approximations for discrete time models In this section, we present some results of numerical studies for binomial and trinomial reward approximation models.

10.1.1 Binomial and trinomial reward approximations for log-price processes represented by Gaussian random walks We assume that the log-price process Yn , n = 0, 1, . . . is a homogeneous Gaussian

10.1

Tree reward approximations for discrete time models

497

random walk given by the following stochastic transition dynamic relation, Yn+1 = Yn + Wn+1 , n = 0, 1, . . . ,

(10.1)

where: (a) Y0 ≡ y0 is a real-valued constant; (b) Wn , n = 1, 2, . . . are i.i.d. normal random variables with a mean µ ∈ R1 and a standard deviation (volatility) σ > 0. The pay-off function is a standard pay-off function for call option, g(n, ey ) = e−rn [ey − K]+ ,

(10.2)

where r ≥ 0 is a free interest rate and K > 0 is a strike price. Let Mmax,n,N be, for every n = 0, . . . , N , the class of all Markov moments τn for the process Yn such that (a) n ≤ τn ≤ N , (b) event {τn = m} ∈ σ[Yn , . . . , YN ] for every n ≤ m ≤ N . Let also φn (y), y ∈ R1 , n = 0, . . . , N and Φ be, respectively the reward functions and the optimal expected reward for the process Yn defined by the following relations, φn (y) = sup Ey,n g(τn , eYτn ), (10.3) τn ∈Mmax,n,N

and Φ=

sup

Eg(τ0 , eYτ0 ).

(10.4)

τ0 ∈Mmax,0,N

We use the following useful transformation formula for the reward function φn (y) and the optimal expected reward Φ moving the initial state and the trend to the pay-off function. Let Y0,n be the transformed random walk, given by the following stochastic transition dynamic relation, Y0,n+1 = Y0,n + W0,n+1 , n = 0, 1, . . . ,

(10.5)

where: (a) Y0,0 = Y0 − y0 ≡ 0; (b) W0,n = Wn − µ, n = 1, 2, . . .. Note that Y0,n = Yn − y0 − µn, n = 0, 1, . . . is also a Gaussian random walk, with the initial state Y0,0 = 0, the trend µ0 = 0 and the volatility σ0 = σ. Let also g0 (n, ey ) be the new pay-off function defined for y ∈ R1 , n = 0, 1, . . . by the following formula, g0 (n, ey ) = g(n, ey0 +µn ey ).

(10.6)

Processes Yn and Y0,n = Yn − y0 − µn generates the same classes of Markov (0) moments Mmax,n,N = Mmax,n,N . This implies the following relations, φn (y) =

sup

Ey,n g(τn , eYτn )

τn ∈Mmax,n,N

=

sup (0) τ0,n ∈Mmax n,N

E0,n g0 (τ0,n , eY0,τ0,n ) = φ(0) n (y),

(10.7)

498

10

Results of experimental studies

and Φ=

Eg(τ0 , eYτ0 )

sup τ0 ∈Mmax,0,N

=

Eg0 (τ0,0 , eY0,τ0,0 ) = Φ0 .

sup

(10.8)

(0) τ0,0 ∈Mmax,0,N

We approximate the log-price process Y0,n by the trinomial random walk Yε,n given, for every ε ∈ (0, ε0 ], by the following stochastic transition dynamic relation, Yε,n+1 = Yε,n + Wε,n+1 , n = 0, 1, . . . ,

(10.9)

where: (a) Yε,0 ≡ 0, for every ε ∈ (0, ε0 ]; (b) Wε,n = Wε,n,1 + · · · + Wε,n,˜rε , n = 1, 2, . . ., where Wε,n,k , n, k = 1, 2, . . . are i.i.d random variables taking values δε , 0 and −δε with probabilities, respectively, pε,+ , pε,◦ and pε,− . Here, jump values δε > 0, probabilities pε,+ , pε,◦ , pε,− ≥ 0 and pε,+ + pε,◦ + pε,− = 1, and parameter r˜ε are positive integer numbers. By the definition, Wε,1 is, for every ε ∈ [0, ε0 ], a trinomial random variable taking values lδε , l = −˜ rε , . . . , r˜ε with probabilities, P{Wε,1 = lδε }

X

=

l+ −l− = l, l◦ = r˜ε −l, l+ ,l◦ ,l− ≥0

r˜ε ! l l− ◦ pε,− . p + plε,◦ l+ !l◦ !l− ! ε,+

(10.10)

(ε)

Let us Mmax,n,N be, for every ε ∈ (0, ε0 ] and n = 0, . . . , N , the class of all Markov moments τε,n for the log-price process Yε,n such that (a) n ≤ τε,n ≤ N , (b) event {τε,n = m} ∈ σ[Yε,n , . . . , Yε,N ] for every n ≤ m ≤ N . (ε) Let also φn (y), y ∈ R1 , n = 0, . . . , N and Φε be, respectively the reward functions and the optimal expected reward for the process Yε,n defined by the following relations, φ(ε) n (y) =

sup

Ey,n g0 (τε,n , eYε,τε,n ),

(10.11)

Eg0 (τε,0 , eYε,τε,0 ).

(10.12)

(ε)

τε,n ∈Mmax,0,N

and Φε =

sup (ε)

τε,0 ∈Mmax,0,N

Note that the above approximation trinomial model reduces to the approximation binomial model if we choose probabilities pε,◦ ≡ 0. That is why, the binomial model does not require a separate definition. We chose parameters δε and r˜ε in the following form, 1 − p◦ rε σ 2 1 , r˜ε = [ ], δε = √ , 0 ≤ pε,◦ = p◦ < 1, pε,± = rε 2 1 − p◦ where rε are positive real numbers such that rε → ∞ as ε → 0.

(10.13)

10.1

Tree reward approximations for discrete time models

499

In this case, all conditions of Theorem 5.3.2∗ , which give condition for convergence of optimal expected rewards for log-price processes represented by random walks, hold for the log-price processes Yε,n and the pay-off functions g0 (n, ey ). Condition B6 [¯ γ ]∗ (its univariate variant) with parameter γ = 1 obviously holds for the pay-off function g0 (n, ey ). This pay-off function is continuous and, therefore, condition I3 -1 holds for this function, with sets Yn = R1 , n = 0, . . . , N . The following relation takes place, for any n = 1, . . . , N and any β ≥ 0, rε σ 2 [ 1−p  ] ◦ ∓β √1r 1 − p◦ ±β √1r 1 − p◦ ε ε + p◦ + e Ee±βWε,1 = e 2 2 rε σ 2     2 [ ] β 2 σ2 1−p (1 − p◦ )β 1 ◦ +o = 1+ → e 2 < ∞ as ε → 0. 2rε rε

(10.14)

¯ ∗ (its univariate This relation implies, by Lemma 5.1.10∗ , that condition C7 [β] variant) holds, for any β ≥ 0. ¯ ∗ (its univariate variant) obviously holds for any β ≥ 0. Condition D7 [β] Also, the following relations holds, for every ε ∈ (0, ε0 ], EWε,1 = 0. 2 VarWε,1 = r˜ε δε2 (1 − p◦ ) = [

rε σ 2 1 ] (1 − p◦ ) → σ 2 as ε → 0. 1 − p◦ rε

(10.15)

d

Relation (10.15) implies that Wε,n −→ W0,n as ε → 0, for n = 1, . . . , N , and, thus, condition J4∗ (its univariate variant) holds. Finally, condition K4∗ (its univariate variant) also holds. Thus, we can apply Theorem 5.3.2∗ that yields the following asymptotic relation, Φε → Φ0 = Φ as ε → 0. (10.16) The reward functional Φε can be computed, for every ε ∈ (0, ε0 ], using the backward recurrence algorithm described in Lemma 3.3.2∗ (its univariate variant for the model without index component). According to this lemma, the reward (ε) functions φn (δε l), l = −n˜ rε , . . . , n˜ rε and the optimal expected rewards Φε can be found using the following backward recurrence relations,  (ε)  φN (δε l) = g0 (N, eδε l ),        l = −N r˜ε , . . . , N r˜ε , (ε) (10.17) φn (δε l) = max g0 (n, eδε l ),    Pr˜ε (ε)  0 0   l0 =−˜ rε φn+1 (δε l + δε l )P{Wε,1 = δε l } ,    l = −n˜ rε , . . . , n˜ rε , n = N − 1, . . . , 0, and the following equality (holding, since Yε,0 ≡ 0), (ε)

Φε = φ0 (0).

(10.18)

500

10

Results of experimental studies

10.1.2 Experimental results for binomial and trinomial reward approximations We choose option parameters in "year" units, such that the maturity time T = 0.25 corresponds to a quarter of a year. We imbed the model in discrete time assuming that the option can be executed at moments lT N , l = 0, 1, . . . , N . This means, in fact, that we consider a Bermudian option, if the continuous time framework is used. We take the initial value of the price process as s0 = 10, which corresponds to the initial value y0 = ln 10 for the log-price process. We also take the yearly values for the trend, µY = −20%, and for the volatility, σY = 30%. We also take the risk free yearly interest rate rY = 5% and the strike price K = 10. We perform numerical experiments for two models with two different values of maturity parameter N = 100 and N = 1000, which approximately correspond T in calendar time, respectively, to one day and one hour, for one time period N . 2 According the above remarks, the one-period parameters µ, r and σ should be re-calculated from the corresponding yearly values by multiplying the values µY , rY and σY2 respectively by factor T N −1 , respectively, for N = 100 and N = 1000. Computations have been performed with double precision in MATLAB R07B, on a laptop with moderate characteristics that are 1.3 GHZ Intel Mobile Core 2 Duo SU7300 CPU and 4 GB of internal memory. The operation system is Windows 7 Home Premium 64 bit. We would also like to mention that the execution speed is improved significantly, by using vectors and matrix calculation in MATLAB computations, similar to those presented in Higham (2004). Let us first present the results of numerical experiments for the binomial approximation model, for which probability p◦ = 0. Figure 10.1 summarizes the results of computations for this model, for the case N = 100. This figure shows how the reward value changes with an increasing parameter rε . Y-axle shows the values of the optimal expected reward Φε , and X-axle shows the values of rε in the log-scale log10 rε . In this way we can get better overview of convergence for the optimal expected reward Φε . We choose the sequence of ε1 > ε2 > · · · > εk > · · · such that the corresponding sequence of parameters 1000 = rε0 < rε1 < · · · < rεk < · · · has the step ∆rεk = rεk − rεk−1 = 2000, k = 1, 2, . . .. The neighbor points in the sequence (rεk , φεk ,0 (y0 )), k = 1, 2, . . . have been connected by intercepts of strait lines in order to improve visualization of graphics. The benchmark approximative value (BAV) for Φ0 is achieved by taking a large value of rε = 2 × 106 . This benchmark approximative value, 0.451404, is shown with 6 digits after point that reduces the rounding error to the negligible

10.1

Tree reward approximations for discrete time models

501

Fig. 10.1. Rate of convergence for binomial-tree reward approximations

Table 10.1. Computing times for binomial-tree reward approximations N; Precision

100; 5%

100; 1%

BAV for Φ0

Reward value rε [rε σ 2 ] Time (sec) N; Precision Reward value rε [rε σ 2 ] Time (sec)

0.463035 3 × 104 7 0.08 1000; 5% 0.466016 3.4 × 105 8 1.06

0.454129 1.41 × 105 32 0.17 1000; 1% 0.454962 1.54 × 106 35 8.86

0.451404 2.0 × 106 450 11.9 BAV for Φ0 0.451609 1.0 × 107 225 517

level of 0.05 %. As shown in the Figure 10.1, values of rε roughly larger, respectively, than 3.0×104 or 1.41×105 guarantee that the deviation of computed reward values from the benchmark approximative value are less than, respectively, ±5% or ±1% of the benchmark approximative value. Finally, when rε moves toward 106 , the reward values are stabilized near the above benchmark approximative value with the deviation within the ±0.1% limits. This is consistent with the convergence relation given in the asymptotic relation (10.16). Table 10.1 shows the real computational times needed to get the corresponding reward values, with 5% and 1% precision in the binomial approximation model, respectively, with parameters N = 100 and N = 1000.

502

10

Results of experimental studies

Table 10.2. American- and European expected rewards for the binomial-tree model µY % 0.5 0.0 - 10.0 - 20.0 - 30.0

BAV for Φ0

BAV for Φ0,N

0.658308 0.651294 0.534689 0.451396 0.385771

0.658308 0.651294 0.521423 0.410706 0.318020

∆ 0.000000 0.000000 0.013266 0.040690 0.067751

∆% 0.00 0.00 2.49 9.01 17.56

Let us also explain the choice of trend parameter µY . In this case, the risk neutral value of trend, satisfying risk neutral condition µ∗Y = rY − σY2 /2, is µ∗Y = 0.5%. It is well known that for values of µY ≥ µ∗Y the optimal stopping strategy will be τ ≡ N . Table 10.2 shows the difference between the benchmark approximative values for the optimal expected reward Φ0 and the expected reward Φ0,N = Eg0 (N, eY0,N ) (for the simplest European type stopping time τ ≡ N ), for a series of values µY ≤ µ∗Y . In this case parameter N = 100. The corresponding benchmark approximative values are computed for the value of parameter rε = 2 × 106 , which stabilize the reward values Φε and Eg0 (N, eYε,N ) near the corresponding benchmark approximative values with deviations within the ±0.05% limits. It is worth to note that the expected rewards Φε,N = Eg0 (N, eYε,N ) corresponding to the stopping time τ ≡ N are computed with the use of the backward recurrence algorithm for American reward functions presented in asymptotic relation (10.16), but with the modified pay-off function gˆ0 (n, ey ) = g0 (n, ey )I(n = N ). The results presented in Table 10.2 show that the early execution, which makes difference between American and European type options, begins to play a meaningful role for negative values of µY about −10%. In particular, the benchmark approximative value for the optimal expected reward Φ0 exceeds the benchmark approximative value for the expected reward Φ0,N = Eg(N, eY0,N ) for about 10%, in the case where µY = −20%. This is about twice larger than the 5% lower accuracy limit used in our numerical experiments. It is not out of the picture to note that negative values of the trend parameter have the similar effects as could be caused by implementation dividends in the underlying model. The value µY = −20% chosen as the basic value for the trend parameter in presentation of results of our numerical experimental studies. The results are analogous for other values of µY . Let us now present the results of numerical experiments for the trinomial approximation model.

10.1

Tree reward approximations for discrete time models

503

Fig. 10.2. Rate of convergence for trinomial-tree reward approximations

Table 10.3. Computing times for trinomial-tree reward approximations N; Precision

100; 5%

100; 1%

BAV for Φ0

Reward value rε [rε σ 2 ] Time (sec) N; Precision Reward value rε [rε σ 2 ] Time (sec)

0.447541 9.0 × 103 6 0.22 1000; 5% 0.447926 9 × 104 6 1.81

0.447541 3.6 × 104 24 0.36 1000; 1% 0.447936 3.6 × 105 24 15

0.451408 7.2 × 105 474 78.5 BAV for Φ0 0.451714 2.0 × 106 135 810

Figure 10.2 summarizes the results of computations for the case N = 100 for the trinomial approximation model in the same way as in Figure 10.1 for the binomial approximation model. Below, we show results for the standard case with the probability of zerojump equal to 0.666666. Computations show that variation of this parameter in the limits separated of the extreme value 1, for example in the interval [0, 0.9], does not affect significantly the results. They are close to the results obtained for the binomial approximation model for small values of the above probability. Table 10.3 shows the real computational times used for computing of the corresponding reward values, with 5% and 1% precision in the trinomial approximation model, respectively, with parameters N = 100 and N = 1000.

504

10

Results of experimental studies

We also evaluated, at which level computational rounding errors can penetrate the computed reward values. MATLAB can perform computations with double or single precision, i.e., respectively, with 16 or 8 floating digits. The computations described above have been performed with the double precision. We repeated the same computations with the single precision. The result was that at the level less than 0.01% that is at the negligible level for computations of rewards with 1% precision.

10.2 Skeleton reward approximations for discrete time models In this section we present results of some experimental studies for space-skeleton reward approximations.

10.2.1 Skeleton reward approximations for log-price processes represented by random walks Let a log-price process Y0,n by a random walk given by the following stochastic transition dynamic relation, Y0,n+1 = Y0,n + W0,n+1 , n = 0, 1, . . . ,

(10.19)

where: (a) Y0,0 ≡ y0 is a real-valued constant; (b) W0,n , n = 1, 2, . . . are i.i.d. random variables. Let hε (y) be, for every ε ∈ (0, ε0 ], a skeleton functions defined by the following relation, r˜ε X hε (y) = lδε I(y ∈ Iε,l ), (10.20) l=−˜ rε

where (a) δε > 0 are positive real numbers; (b) r˜ε are positive integer numbers, and (c) Iε,l are skeleton intervals given by the following relation,

Iε,l

 if l = −˜ rε , rε + 21 )]   (−∞, δε (−˜ 1 1 = (δε (l − 2 ), δε (l + 2 )] if − r˜ε < l < r˜ε ,   (δε (˜ rε − 12 ), ∞) if l = r˜ε .

(10.21)

We approximate the log-price process Y0,n by a random walk Yε,n given, for every ε ∈ (0, ε0 ], by the following stochastic transition dynamic relation, Yε,n+1 = Yε,n + Wε,n+1 , n = 0, 1, . . . ,

(10.22)

where: (a) Yε,0 ≡ y0 , for every ε ∈ (0, ε0 ]; (b) Wε,n = hε (W0,n ), n = 1, 2, . . ..

10.2

Skeleton reward approximations for discrete time models

505

By the definition, Wε,n , n = 1, . . . , N are, for every ε ∈ (0, ε0 ], i.i.d. random variables taking values lδε , l = −˜ rε , . . . , r˜ε with probabilities, which are interval probabilities for random variables W0,1 , P{Wε,1 = lδε } = P{W0,1 ∈ Iε,l }, l = −˜ rε , . . . , r˜ε .

(10.23)

The pay-off function is again a standard pay-off function for call option, g(n, ey ) = e−rn [ey − K]+ , where r ≥ 0 is a free interest rate and K > 0 is a strike price. (ε) Let us Mmax,n,N be, for every ε ∈ [0, ε0 ] and n = 0, . . . , N , the class of all Markov moments τε,n for the log-price process Yε,n such that (a) n ≤ τε,n ≤ N , (b) event {τε,n = m} ∈ σ[Yε,n , . . . , Yε,N ] for every n ≤ m ≤ N . (ε) Let also φn (y), y ∈ R1 , n = 0, . . . , N and Φε be, respectively the reward functions and the optimal expected reward for the process Yε,n defined by the following relations, φ(ε) n (y) =

sup

Ey,n g(τε,n , eYε,τε,n ),

(10.24)

Eg(τε,0 , eYε,τε,0 ).

(10.25)

(ε)

τε,n ∈Mmax,0,N

and Φε =

sup (ε) τε,0 ∈Mmax,0,N

We chose parameters δε and r˜ε in the following form, 1 δε = √ , rε

r˜ε = [rε σ ˜ 2 ],

(10.26)

where σ ˜ > 0 is some scaling parameter, and rε are positive real numbers such that rε → ∞ as ε → 0. Let us assume that the following condition of exponential moment compactness holds, for some β > 1: E17 [β]: Ee±βW0,1 < K108 , for some 1 < K108 < ∞. In this case, all conditions of Theorem 7.4.2∗ , which give condition for convergence of optimal expected rewards for space-skeleton approximations of log-price processes represented by random walks, hold for the log-price processes Yε,n and the pay-off functions g(n, ey ). The fitting conditions L5∗ and M5∗ obviously holds. 2 The structural skeleton conditions N3∗ holds, since δε r˜ε = [r√ε σr˜ε ] → ∞ as ε → 0. Condition B4 [γ]∗ hold for the pay-off function g(n, ey ), with parameter γ = 1. Condition E10 [β] is a particular variant of condition E17 [β]∗ , which implies condition C13 [β]∗ to hold. Note that we assumed that β > 1. Condition D10 [β]∗ holds for any β ≥ 0 since, we assume that Yε,0 ≡ y0 , for every ε ∈ [0, ε0 ].

506

10

Results of experimental studies

Condition I6∗ holds in this case with the sets Y0n = R1 , n = 0, . . . , N , since function g(n, ey ) is continuous in argument y, for every n = 0, . . . , N . Condition J11∗ automatically holds, since sets Y0n = R1 , n = 0, . . . , N . Finally, condition K10∗ holds, since Yε,0 ≡ y0 , for every ε ∈ [0, ε0 ], and set Y00 = R1 . Thus, we can apply Theorem 7.4.2∗ that yields the following asymptotic relation, Φε → Φ0 as ε → 0. (10.27) The reward functional Φε can be computed, for every ε ∈ (0, ε0 ], using the backward recurrence algorithm described in Lemma 7.4.1∗ . According to this (ε) lemma, the reward functions φn (y0 + δε l), l = −n˜ rε , . . . , n˜ rε and the optimal expected rewards Φε can be found using the following backward recurrence relations,  (ε)  φN (y0 + δε l) = g0 (N, ey0 +δε l ),        l = −N r˜ε , . . . , N r˜ε , (ε) (10.28) φn (y0 + δε l) = max g0 (n, ey0 +δε l ),    Pr˜ε (ε)  0 0   l0 =−˜ rε φn+1 (y0 + δε l + δε l )P{Wε,1 = δε l } ,    l = −n˜ rε , . . . , n˜ rε , n = N − 1, . . . , 0, and the following equality (holding, since Yε,0 ≡ y0 ), (ε)

Φε = φ0 (y0 ).

(10.29)

Note that, in this case, probabilities given by relation (10.23) should be used in the backward recurrence relations (10.28). These probabilities can be effectively computed not only for the case, where Y0,n is a Gaussian random walk. For example, it can be done for various models, where Y0,n is a Lévy random walk, in particular, a discrete time analogue of some jump-diffusion process.

10.2.2 Experimental results for space-skeleton reward approximations for the Gaussian and compound Gaussian models Let us now present the results of numerical experiments for the skeleton approximation model. In order to compare the computational results for skeleton approximation model with those for binomial-trinomial approximations we consider the model of log-price process Y0,n represented by Gaussian random walk with the same initial value y0 and parameters µ and σ as for the model considered in Subsection 10.1.2 – 10.1.3. We also choose parameters of the corresponding skeleton approximation model given in relation (10.26), with the scale parameters σ ˜ = σ.

10.2

Skeleton reward approximations for discrete time models

507

Fig. 10.3. Rate of convergence for skeleton reward approximations

Table 10.4. Computing times for skeleton reward approximations N; Precision

100; 5%

100; 1%

BAV for Φ0

Reward value rε [rε σ 2 ] Time (sec) N; Precision Reward value rε [rε σ 2 ] Time (sec)

0.440469 1.6 × 104 4 0.11 1000; 5% 0.433613 1.8 × 105 4 1.6

0.452112 2.9 × 104 7 0.16 1000; 1% 0.448058 2.7 × 105 6 2.42

0.451451 2.0 × 106 450 62.3 BAV for Φ0 0.451925 1.0 × 107 225 2.26 × 103

In this case, Wε,n = hε,n (W0,n ), n = 1, . . . are are, for every ε ∈ [0, ε0 ], i.i.d. random variables taking values lδε , l = −rε,n , . . . , rε,n with probabilities, which are Gaussian interval probabilities. Condition E17 [β] obviously hold and, thus, the asymptotic relation (10.27) holds. Figure 10.3 summarises the results of computations for the case N = 100 for the skeleton approximation model, in the same way as Figure 10.1 makes this for the binomial approximation model. Table 10.4 shows the real computational times needed to get the corresponding reward values, with 5% and 1% precision in the skeleton approximation model, respectively, with parameters N = 100 and N = 1000.

508

10

Results of experimental studies

Fig. 10.4. Rate of convergence for skeleton reward approximations for a compound Gaussian model

The great advantage of skeleton approximations is their universality in comparison with binomial–trinomial approximations. The latter approximations can be used only for Gaussian models, while the skeleton approximation can also be used for wide classes of non-Gaussian models. Below, we show some experimental results about the rate of convergence for the skeleton approximation model for such case, where the underlying log-price process Y0,n is a compound Gaussian random walk. This means that the random jumps W0,n , n = 1, 2, . . . are i.i.d. random variables that can be represented in the following form, 0 W0,n = W0,n +

Nn X

00 W0,n,k , n = 1, 2, . . . ,

(10.30)

k=1 0 00 where: (a) W0,n , W0,n,k , Nn , n, k = 1, 2, . . . are independent random variables; (b) 0 W0,n , n = 1, 2, . . . are normal random variable with a mean value µ0 and a variance 00 σ 02 ; (c) W0,n,k , k = 1, 2, . . . are normal random variables with a mean value µ00 and a variance σ 002 ; (d) Nn , n = 1, 2, . . . a Poisson random variables with parameter λ. In this case the random variables Wε,n = hε,n (W0,n ), n = 1, . . . are, for every ε ∈ (0, ε0 ], random variables taking values lδε , l = −rε,n , . . . , rε,n with probabilities, which are compound Gaussian interval probabilities for random variables

10.2

Skeleton reward approximations for discrete time models

509

Table 10.5. Computing times for skeleton reward approximations for a compound Gaussian model N; Precision

100; 5%

100; 1%

BAV for Φ0

Reward value rε [rε σ 2 ] Time (sec) N; Precision Reward value rε [rε σ 2 ] Time (sec)

0.315473 7.4 × 104 17 0.25 1000; 5% 0.319221 7.8 × 105 18 10.6

0.327509 1.45 × 105 33 0.3 1000; 1% 0.330769 1.54 × 106 35 37.9

0.330528 2.0 × 106 450 68.5 BAV for Φ0 0.333658 1.0 × 107 225 2.38 × 103

W0,1 given by the following formula, P{Wε,1 = lδε } = P{W0,1 ∈ Iε,l } =

∞ X λk k=0

k!

e−λ P{y0 + µ0 + kµ00 +

p

σ 02 + kσ 002 · W ∈ Iε,l },

(10.31)

where W is a standard normal random variable with parameters 0 and 1. An usual assumption is that the mean value µ0 and the variance σ 02 for the 0 are comparable by values, respectively, with the Gaussian jump component W0,n 00 mean value λµ and the variance λσ 002 for the compound Gaussian component PNn 00 k=1 W0,n,k and also that the intensity of jumps λ is comparatively small. We perform numerical experiments for two models with the values of maturity parameter N = 100 and N = 1000. In order to be able to compare the results of numerical computations we choose the same standard pay-off functions g(n, y) = e−rn [ey − K]+ and parameters µY , σY2 , y0 and T, S, rY , as in Sections 7 and 8, then re-calculate the one-period parameters µ, r and σ 2 from the corresponding yearly values by multiplying the values µY , rY and σY2 by factor T N −1 , respectively, for N = 100 and N = 1000. Finally, we take parameters µ0 , σ 02 , µ00 , σ 002 , and λ, such that µ0 = λµ00 =

µ σ2 , σ 02 = λσ 002 = . 2 2

(10.32)

1 We also choose the value of parameter λ = 10 that automatically implies the 00 0 002 02 relations µ = 10µ and σ = 10σ . Finally, we choose the scale parameter σ ˜ = σ in the above skeleton approximation model. We omit the details connected with truncation of series in formula (10.31) and mention only that the truncation of all terms for k ≥ 5 cause changes in the corresponding reward values at the negligible level of 0.01%.

510

10

Results of experimental studies

Table 10.6. American- and European expected rewards for a compound Gaussian model µY % 0.5 0.0 - 10.0 - 20.0 - 30.0

BAV for Φ0

BAV for Φ0,N

0.663645 0.649564 0.448729 0.33053 0.252259

0.663645 0.649564 0.408980 0.240423 0.131195

∆ 0.000000 0.000000 0.039749 0.090107 0.121064

∆% 0.00 0.00 8.86 27.26 47.99

Figure 10.4 summarizes the results of computations for the case N = 100 for the skeleton approximation model in the same way, as Figure 10.1 makes this for the binomial approximation model. Table 10.5 shows the real computational times needed to get the corresponding reward values, with 5% and 1% precision in the skeleton approximation model, respectively, with parameters N = 100 and N = 1000. Both Figure 10.4 and Table 10.5 present results obtained for parameter µY = −20%. Table 10.6 shows, how this parameter impacts the benchmark approximative values for the optimal expected reward Φ0 and the expected reward Φ0,N = Eg0 (N, eY0,N ) for the simplest European type stopping time τ ≡ N , in the case of compound Gaussian model. Here, parameter N = 100.

10.2.3 Rate of convergence for skeleton reward approximation models with pay-off functions generated by a compound Poisson-gamma process In this subsection, we present some experimental results about the rate of convergence for the skeleton reward approximation models with a pay-off function generated by a compound Poisson-gamma process and a log-price process represented by a Gaussian random walk. In order to compare results with the experimental results for skeleton reward approximation models with the standard pay-off function considered in Subsections 10.1.3 – 10.1.4, we use the same model of log-price process Y0,n represented by Gaussian random walk, with the same initial value y0 and parameters µ and σ. We also choose the same scale parameter σ ˜ = σ in the skeleton approximation model as those used in Subsections 10.1.3 – 10.1.4. The principle difference is that we consider the option model, where the payoff function is not the standard one, g(n, ey ) = e−rn [ey − K]+ . Instead, we assume that the pay-off function is generated by a compound Poisson-gamma process, i.e.,

10.2

Skeleton reward approximations for discrete time models

511

it is given by the following relation, for n = 0, 1, . . ., g(n, ey ) = e−rn Z([ey − K]+ ∧ L), y ∈ R1 ,

(10.33)

where: (a) r > 0 is a risk-free interest rate, (b) K > 0 is a strike price, (c) Z(s), s ≥ 0 is a compound Poisson-gamma process with an intensity λ > 0 for the corresponding Poisson flow of jumps and a gamma probability density of jumps −x/β xα−1 , where α, β > 0; c) L > 0 is a pay-off truncation level. fα,β (x) = Γ(α)β αe In this case, realizations of the random pay-off process Z([s − K]+ ∧ L) are step-wise non-decreasing pay-off functions with a finite numbers of jumps. The total number of jumps has a Poisson distribution with parameter λL and the expected value EZ([s − K]+ ∧ L) = λαβ([s − K]+ ∧ L). If parameters λ and α, β are chosen so that λαβ = 1, then the average rate of growth per time unit for the random pay-off function is 1, in the time interval [K, K + L]. This corresponds with the case of standard pay-off function. The truncation level can be chosen large enough in order it would not affect the rewards. We choose for parameters r and K the same values as for the standard pay-off function considered in Subsections 10.1.1 – 10.1.4. Also, we choose the values T = 0.25 and N = 100 for the maturity. Finally, we choose Y0 = S, the intensity λ = 10 and parameters α = 0.2, β = 0.5. The option with a pay-off function given by a realization of the above compound Poisson-gamma process is a so-called digital American-type options. We take the value L = 50 for the truncation pay-off level. The probability that the price process S0,n = eY0,n will exceed the value K +L at some moment 0, . . . , N is less or equal than N P{Y0,N ≥ ln(K +L)} = N P{W ≥ ln(K+L)−µN √ }, where W is a standard normal random variable with parameters 0 σ N and 1. The above upper bound take the very small value, less than 2.7 × 10−30 , in the case where parameter values, in particular, the truncation level L, are chosen as it is indicated above. This makes negligible the influence of the pay-off truncation level L on the results of the corresponding reward calculations. Figure 10.5 shows the results of computations for the model with a pay-off function given by a typical realization of the Poisson-gamma process, with the above pointed parameters. In this case we used the step ∆rεk = rεk − rεk−1 = 10000, k = 1, 2, . . ., for calculating the data for drawing the graph presented in Figure 10.5. In this case, the benchmark approximative value (BAV) for Φ0 is achieved by taking a large value of rε = 4 × 106 . As shown in the Figure 10.5, values of rε roughly larger, respectively, than 2.0 × 104 or 1.6 × 105 guarantee that the deviation of computed reward values from the benchmark approximative value are less than, respectively, ±5% or ±1% of the benchmark approximative value. Finally, when rε moves toward 2.0 × 106 , the reward values are stabilized near

512

10

Results of experimental studies

Fig. 10.5. Skeleton reward approximations for models with random pay-off

the above benchmark approximative value with the deviation within the ±0.1% limits. This is consistent with the convergence relation given in Theorem 9.4.2. Finally, we made the computations for the above skeleton reward approximation model for 100 pay-off functions generated by the Poisson-gamma process, with the above pointed parameters. Table 10.7 shows the minimal, 25% quantile, the median, 75% quantile and the maximal values for parameters rε and [rε σ 2 ], for which 1% accuracy for the corresponding reward values is achieved, and the corresponding minimal, 25% quantile, the median, 75% quantile and maximal values for computing times. It is worth to note that the expected rewards corresponding to the stopping time τ ≡ N were also computed for the same 100 pay-off functions given by realizations of the Poisson-gamma process, with the above pointed parameters. In all cases, the benchmark approximative values for the reward function Φ0 exceed the expected rewards corresponding to the stopping time τ ≡ N not less than by 10%. The data given in Table 10.7 show that the skeleton reward approximation algorithm is computationally effective and stable in the class of digital options with pay-off functions generated by the compound Poisson-gamma process described above. Let us make some short concluding remarks. The stochastic lattice approximation models possess very good smoothing properties. The results presented in the paper, show that these approximations well converge even for very irregular discontinuous pay-off functions. Comparison of binomial–trinomial-tree and skeleton reward approximations based on numerical experiments show that all approximations have appropriate

10.3 Reward approximations for continuous time models

513

Table 10.7. Computing times for skeleton reward approximations for the model with random pay-off min-value rε [rε σ 2 ] Time (sec)

0.2 × 5 0.55

105

25% quantile 0.6 × 14 1.36

105

median 0.9 × 20 1.93

105

75% quantile 1.4 × 32 3.04

105

max-value 2.2 × 105 50 4.91

computing times. The binomial model has slightly shorter computing times for computing reward values with given precision. However, we would like to mention that trinomial-tree approximations also can be useful, for example, for multivariate and inhomogeneous in time models. In such models, binomial-tree approximations may possess not enough free parameters required for exact or asymptotic fitting of parameters. We can also conclude that the skeleton reward approximations have better stabilization properties than others. As mentioned above, skeleton approximations have a principal advantage in comparison with binomial–trinomial-tree approximations. The latter approximations can be used only for Gaussian models, while the skeleton approximation can also be used for wide classes of non-Gaussian models. Theoretical results presented in Section 10.4 show that the above stochastic approximations possess good convergence properties even for discontinuous and very irregular pay-off functions including models with random càdlàg pay-off functions. The prospective models with random pay-off functions is a new interesting area for research studies with many open problems. In particular, the choice of reasonable classes of random pay-offs is the problem, which requires special investigation. The experimental studies of convergence rates for option models with irregular discontinuous non-random and random pay-offs are also much more complicated than for the models with standard or non-standard but smooth pay-off functions. The first experimental results presented in the paper show that the skeleton reward approximations remain to be effective and computationally stable for discontinuous pay-off functions generated by compound Poisson processes.

10.3 Reward approximations for continuous time models

In this section, we present some experimental results concerning binomial- and trinomial-tree reward approximations for diffusion log-price processes and Gaussian log-price processes with independent increments.


10.3.1 Bivariate binomial-tree reward approximations for a model of exchange of assets

Let a log-price process $\vec Y_0(t) = (Y_{0,1}(t), Y_{0,2}(t))$, $t \in [0, T]$ be a bivariate homogeneous in time Gaussian log-price process with independent increments (a Brownian motion), with the initial state $\vec Y_0(0) = \vec y_0 = (y_{0,1}, y_{0,2}) \in \mathbb{R}^2$, which is a constant, expectations $\mathsf{E} Y_{0,i}(t) = \mu_{0,i} t$, $t \in [0, T]$, $i = 1, 2$, variances $\mathsf{Var}\, Y_{0,i}(t) = \sigma_{0,i}^2 t > 0$, $t \in [0, T]$, $i = 1, 2$, and covariances $\mathsf{E}(Y_{0,i}(1) - \mu_{0,i})(Y_{0,j}(1) - \mu_{0,j}) = \rho_{0,i,j}\, \sigma_{0,i} \sigma_{0,j}$, $i, j = 1, 2$, where $|\rho_{0,i,j}| \leq 1$, $i, j = 1, 2$.

We consider the so-called exchange-of-assets American-type option with the pay-off function defined by the following relation, for $\vec y = (y_1, y_2) \in \mathbb{R}^2$, $t \in [0, T]$,

$g(t, e^{\vec y}) = e^{-rt}(e^{y_1} - e^{y_2}),$   (10.34)

where $r > 0$ is a risk free interest rate and $T > 0$ is a maturity.

Let also $\vec Y_\varepsilon(t) = (Y_{\varepsilon,1}(t), Y_{\varepsilon,2}(t))$, $t \in [0, T]$ be, for every $\varepsilon \in (0, \varepsilon_0]$, a bivariate homogeneous in time binomial-tree log-price process with components $Y_{\varepsilon,i}(t) = y_{0,i} + \sum_{nT/N_\varepsilon \leq t} W_{\varepsilon,n,i}$, $t \in [0, T]$, $i = 1, 2$, where $\vec W_{\varepsilon,n} = (W_{\varepsilon,n,1}, W_{\varepsilon,n,2})$, $n = 1, 2, \ldots$ are i.i.d. bivariate random vectors taking the values $(\imath_1 \delta_{\varepsilon,1} + \lambda_{\varepsilon,1}, \imath_2 \delta_{\varepsilon,2} + \lambda_{\varepsilon,2})$ with probabilities $p_{\varepsilon,\imath_1,\imath_2}$, for $\imath_1, \imath_2 = +, -$.

We approximate the log-price process $\vec Y_0(t)$ by the log-price processes $\vec Y_\varepsilon(t)$, choosing the parameters $\delta_{\varepsilon,i}, \lambda_{\varepsilon,i}$, $i = 1, 2$ and $p_{\varepsilon,\imath_1,\imath_2}$, $\imath_1, \imath_2 = +, -$ for the approximating processes $\vec Y_\varepsilon(t)$ according to the formulas pointed out in Lemma 7.4.3.

Let $M^{(\varepsilon)}_{\max,t,T}$ be the class of all Markov moments $\tau_{\varepsilon,t}$ for the log-price process $\vec Y_\varepsilon(t)$ which (a) take values in the interval $[t, T]$, and (b) satisfy $\{\tau_{\varepsilon,t} > s\} \in \sigma[\vec Y_\varepsilon(u), t \leq u \leq s]$, $t \leq s \leq T$. Let, also, $\Phi_\varepsilon$ be the optimal expected reward defined by the following relation,

$\Phi_\varepsilon = \sup_{\tau_{\varepsilon,0} \in M^{(\varepsilon)}_{\max,0,T}} \mathsf{E}\, g(\tau_{\varepsilon,0}, e^{\vec Y_\varepsilon(\tau_{\varepsilon,0})}).$   (10.35)

As follows from Theorem 7.4.1, if the parameters for the log-price processes $\vec Y_\varepsilon(t)$ are chosen according to the formulas pointed out in Lemma 7.4.1, then $\Phi_\varepsilon \to \Phi_0$ as $\varepsilon \to 0$.

Moreover, let $\vec Y_{\Pi_\varepsilon,n} = \vec Y_\varepsilon(\frac{nT}{N_\varepsilon}) = \vec y_0 + \sum_{k=1}^{n} \vec W_{\varepsilon,k}$, $n = 0, \ldots, N_\varepsilon$ be, for every $\varepsilon \in (0, \varepsilon_0]$, the corresponding embedded discrete time random walk.

Let us denote by $M^{(\varepsilon)}_{\Pi_\varepsilon,n,N_\varepsilon}$ the class of all Markov moments $\tau_{\varepsilon,n}$ for the discrete time Markov log-price process $\vec Y_{\Pi_\varepsilon,r}$ which (a) take values $n, n+1, \ldots, N_\varepsilon$, and (b) satisfy $\{\tau_{\varepsilon,n} = m\} \in \sigma[\vec Y_{\Pi_\varepsilon,r}, n \leq r \leq m]$, $n \leq m \leq N_\varepsilon$. Let, also, $\phi_{\Pi_\varepsilon,\varepsilon,n}(\vec y)$ and $\Phi_{\Pi_\varepsilon,\varepsilon}$ be the corresponding reward functions and the optimal expected reward for the American option with the pay-off function $g(\frac{nT}{N_\varepsilon}, e^{\vec y})$, defined by the following relations, for $\vec y \in \mathbb{R}^2$, $n = 0, 1, \ldots, N_\varepsilon$,

$\phi_{\Pi_\varepsilon,\varepsilon,n}(\vec y) = \sup_{\tau_{\varepsilon,n} \in M^{(\varepsilon)}_{\Pi_\varepsilon,n,N_\varepsilon}} \mathsf{E}_{\vec y,n}\, g(t_{\varepsilon,\tau_{\varepsilon,n}}, e^{\vec Y_\varepsilon(t_{\varepsilon,\tau_{\varepsilon,n}})}),$   (10.36)

and

$\Phi_{\Pi_\varepsilon,\varepsilon} = \sup_{\tau_{\varepsilon,0} \in M^{(\varepsilon)}_{\Pi_\varepsilon,0,N_\varepsilon}} \mathsf{E}\, g(t_{\varepsilon,\tau_{\varepsilon,0}}, e^{\vec Y_\varepsilon(t_{\varepsilon,\tau_{\varepsilon,0}})}).$   (10.37)

By Theorem 7.4.3, if the parameters for the log-price processes $\vec Y_\varepsilon(t)$ are chosen according to the formulas pointed out in Lemma 7.4.1, then the following asymptotic relation holds,

$\Phi_{\Pi_\varepsilon,\varepsilon} \to \Phi_0$ as $\varepsilon \to 0$.   (10.38)

The reward functional $\Phi_{\Pi_\varepsilon,\varepsilon}$ can be computed, for every $\varepsilon \in (0, \varepsilon_0]$, using the backward recurrence algorithm described in Lemma 19.6.1. Let us denote, for $l_1, l_2 = 0, \ldots, n$, $n = 0, 1, \ldots$,

$\vec y_{\varepsilon,n,l_1,l_2} = ((2l_1 - n)\delta_{\varepsilon,1}, (2l_2 - n)\delta_{\varepsilon,2}) + n(\lambda_{\varepsilon,1}, \lambda_{\varepsilon,2}).$   (10.39)

According to Lemma 7.4.4, the reward functions $\phi_{\Pi_\varepsilon,\varepsilon,n}(\vec y_0 + \vec y_{\varepsilon,n,l_1,l_2})$, $l_1, l_2 = 0, \ldots, n$, $n = 0, \ldots, N_\varepsilon$ and the optimal expected reward $\Phi_{\Pi_\varepsilon,\varepsilon}$ can be found using the following backward recurrence relations, in which, by relation (10.39), a "+" jump in the $i$-th component moves the skeleton index $l_i$ to $l_i + 1$ while a "−" jump leaves it equal to $l_i$,

$\phi_{\Pi_\varepsilon,\varepsilon,N_\varepsilon}(\vec y_0 + \vec y_{\varepsilon,N_\varepsilon,l_1,l_2}) = g(T, e^{\vec y_0 + \vec y_{\varepsilon,N_\varepsilon,l_1,l_2}}), \quad l_1, l_2 = 0, \ldots, N_\varepsilon,$

$\phi_{\Pi_\varepsilon,\varepsilon,n}(\vec y_0 + \vec y_{\varepsilon,n,l_1,l_2}) = \max\big( g(\frac{nT}{N_\varepsilon}, e^{\vec y_0 + \vec y_{\varepsilon,n,l_1,l_2}}),$
$\quad \phi_{\Pi_\varepsilon,\varepsilon,n+1}(\vec y_0 + \vec y_{\varepsilon,n+1,l_1+1,l_2+1})\, p_{\varepsilon,+,+} + \phi_{\Pi_\varepsilon,\varepsilon,n+1}(\vec y_0 + \vec y_{\varepsilon,n+1,l_1+1,l_2})\, p_{\varepsilon,+,-}$
$\quad + \phi_{\Pi_\varepsilon,\varepsilon,n+1}(\vec y_0 + \vec y_{\varepsilon,n+1,l_1,l_2+1})\, p_{\varepsilon,-,+} + \phi_{\Pi_\varepsilon,\varepsilon,n+1}(\vec y_0 + \vec y_{\varepsilon,n+1,l_1,l_2})\, p_{\varepsilon,-,-} \big),$
$\quad l_1, l_2 = 0, \ldots, n, \; n = N_\varepsilon - 1, \ldots, 0,$   (10.40)

and the following equality (which holds since $\vec Y_{\Pi_\varepsilon,0} \equiv \vec y_0$),

$\Phi_{\Pi_\varepsilon,\varepsilon} = \phi_{\Pi_\varepsilon,\varepsilon,0}(\vec y_0).$   (10.41)
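As an illustration of the backward recurrence (10.40)–(10.41), the following Python sketch prices the exchange-of-assets option on the bivariate binomial tree. The fitting used here (jump sizes $\delta_{\varepsilon,i} = \sigma_{0,i}\sqrt{T/N}$, centering coefficients $\lambda_{\varepsilon,i} = \mu_{0,i}T/N$ and probabilities $p_{\varepsilon,\pm,\pm} = (1 \pm \rho_{0,1,2})/4$) is one standard moment-matching choice; it may differ from the particular fitting formulas of Lemmas 7.4.1 and 7.4.3, and the function name and interface are illustrative only.

```python
import numpy as np

def exchange_option_reward(y0, mu, sigma, rho, r, T, N):
    """Backward recurrence (10.40)-(10.41) on a bivariate binomial tree.

    Sketch only: jump sizes delta_i = sigma_i*sqrt(T/N), centering
    lambda_i = mu_i*T/N and probabilities p_{++} = p_{--} = (1+rho)/4,
    p_{+-} = p_{-+} = (1-rho)/4 are a generic moment-matching choice and
    may differ from the fitting formulas of Lemmas 7.4.1 and 7.4.3.
    """
    y0, mu, sigma = map(np.asarray, (y0, mu, sigma))
    dt = T / N
    delta = sigma * np.sqrt(dt)          # jump sizes delta_{eps,1}, delta_{eps,2}
    lam = mu * dt                        # centering coefficients lambda_{eps,i}
    p_pp = p_mm = (1 + rho) / 4          # probabilities p_{eps,+,+} and p_{eps,-,-}
    p_pm = p_mp = (1 - rho) / 4          # probabilities p_{eps,+,-} and p_{eps,-,+}

    def payoff(t, y1, y2):
        # pay-off function (10.34): g(t, e^y) = e^{-rt} (e^{y1} - e^{y2})
        return np.exp(-r * t) * (np.exp(y1) - np.exp(y2))

    def grid(n):
        # skeleton points y0 + y_{eps,n,l1,l2}, l1, l2 = 0, ..., n, see (10.39)
        l = np.arange(n + 1)
        y1 = y0[0] + (2 * l - n) * delta[0] + n * lam[0]
        y2 = y0[1] + (2 * l - n) * delta[1] + n * lam[1]
        return np.meshgrid(y1, y2, indexing="ij")

    # terminal condition at n = N
    phi = payoff(T, *grid(N))

    # backward recurrence (10.40): index l+1 corresponds to a "+" jump, index l to a "-" jump
    for n in range(N - 1, -1, -1):
        cont = (p_pp * phi[1:, 1:] + p_pm * phi[1:, :-1]
                + p_mp * phi[:-1, 1:] + p_mm * phi[:-1, :-1])
        phi = np.maximum(payoff(n * dt, *grid(n)), cont)

    return phi[0, 0]                     # relation (10.41)
```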

10.3.2 A numerical example for the model of exchange of assets

We choose the parameters for the approximating bivariate binomial-tree log-price process $\vec Y_\varepsilon(t)$ according to the second variant (c) presented in Lemma 7.4.1, with the simplest variant of the choice of the centering coefficients, $\lambda_{\varepsilon,i} = 0$, $i = 1, 2$, and with $N_\varepsilon \geq N$, according to relation (7.142). We consider the case where the holder of the option has the right to exchange asset 1 for asset 2. The option has maturity in 6 months.

516

10

Results of experimental studies

Fig. 10.6. Rewards for exchange of Asset 1 for Asset 2, with volatilities 0.05 ≤ σ1 ≤ 1 and 0.05 ≤ σ2 ≤ 1.

Asset 1 has initial price 10, with a drift estimated to be 0.02 and a volatility estimated to be 0.1 per year. Asset 2 has initial price 9.5, with a drift estimated to be 0.08 and a volatility estimated to be 0.35 per year. The correlation between the two assets is assumed to be ρ = 0.3. The risk free interest rate is assumed to be r = 0.04 for the time period of the contract. The studies show that the tree size N = 100 is sufficient: the expected reward for a tree of size N = 100 is 0.0850, which should be compared with the expected reward for a tree of size N = 150, which is 0.0858. Note that in this case the lower bound imposed on the parameter Nε by relation (7.142) holds. The calculation time for this tree size is 5.11 seconds on a 1.73 GHz Intel® Pentium-M processor with 1 GB internal memory, using Matlab®.
Figure 10.6 illustrates the reward for exchanging Asset 1 for Asset 2, where the parameters of the model are as above, except that the volatilities of the two assets vary on the interval [0.05, 1]. Note that, even for the minimal value of volatility 0.05, the lower bound N imposed on the parameter Nε by relation (7.142) is just 13. It is worth noting that for some combinations of volatilities the reward is negative, and thus the exchange is not profitable. This question does, however, require additional investigation.
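For the parameters of this example, a hypothetical call of the sketch given at the end of Subsection 10.3.1 could look as follows. Since that sketch uses a generic moment-matching variant rather than the exact fitting of Lemma 7.4.1, it is not guaranteed to reproduce the values 0.0850 and 0.0858 reported above.

```python
import numpy as np

# Illustrative call of the exchange_option_reward sketch from Subsection 10.3.1,
# with the parameter values of this example (assumed interface, see above).
reward = exchange_option_reward(
    y0=np.log([10.0, 9.5]),   # initial log-prices of assets 1 and 2
    mu=[0.02, 0.08],          # estimated drifts per year
    sigma=[0.10, 0.35],       # estimated volatilities per year
    rho=0.3,                  # correlation between the two assets
    r=0.04,                   # risk free interest rate
    T=0.5,                    # maturity of 6 months
    N=100,                    # tree size
)
print(reward)
```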


Fig. 10.7. Reward of an American option on a commodity assumed to follow the Schwartz model, with parameters 0.1 ≤ α ≤ 2.5 and 0.04 ≤ ν ≤ 1.

10.3.3 A numerical example for the Schwartz model

In this subsection, we present a numerical example for a standard American call option written on a commodity that is assumed to follow the Schwartz model. This model of a mean-reverse diffusion log-price process was considered in detail in Subsection 8.4.1, and we do not repeat this analysis. Note only that it included four steps. First, the pay-off function and the log-price process were transformed in such a way that the new transformed log-price process became a simpler inhomogeneous in time Gaussian process with independent increments. Second, this process was approximated by an inhomogeneous in time trinomial-tree process with independent increments with properly fitted expectations and variances of increments. Third, convergence of reward functions and optimal expected rewards was proved, as well as convergence of reward functions and optimal expected rewards for the corresponding approximating embedded discrete time log-price processes represented by trinomial random walks. Fourth, the backward recurrence algorithm for computing rewards for this discrete time model was given.
Figure 10.7 presents a numerical example based on the reward approximation procedure described above. The commodity is currently traded at 10 and has an estimated volatility of 0.25 and a mean reverting coefficient of α = 1. The option has a strike price K = 11 and maturity T = 0.5 years. Finally, the risk free interest rate is r = 0.04.


The studies show that the tree size N = 50 is sufficient: the expected reward for a tree of size N = 50 is 0.2802, which should be compared with the expected reward for a tree of size N = 100, which is 0.2863. The computing time for this tree size is 0.963 seconds on a 1.73 GHz Intel® Pentium-M processor with 1 GB internal memory, using Matlab®. For low values of the parameter ν the option reward reaches its minimum, and when the value of ν is high and the value of α is low the option reward reaches its maximum.
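The second and fourth steps listed above can be illustrated by the following generic sketch of a backward recurrence on a recombining trinomial tree for an inhomogeneous in time Gaussian log-price process with independent increments. The per-step probabilities below come from a standard matching of the increment expectation and variance to a fixed spatial jump size; the actual transformation of the Schwartz model and the fitting formulas of Subsection 8.4.1 are not reproduced here, so the function, its name and its parameters are illustrative assumptions rather than the algorithm of that subsection.

```python
import numpy as np

def trinomial_american_reward(payoff, y0, step_mean, step_var, delta, N, T):
    """Backward recurrence for an American-type option on a recombining
    trinomial tree approximating an inhomogeneous in time Gaussian log-price
    process with independent increments (generic moment-matching sketch).

    payoff(t, y)  -- discounted pay-off as a vectorized function of time and log-price
    step_mean[n]  -- expectation of the increment on step n (n = 0, ..., N-1)
    step_var[n]   -- variance of the increment on step n
    delta         -- spatial jump size; delta**2 should dominate
                     step_var[n] + step_mean[n]**2 for valid probabilities
    """
    dt = T / N
    # terminal nodes: y0 + k*delta, k = -N, ..., N
    k = np.arange(-N, N + 1)
    phi = payoff(T, y0 + k * delta)

    for n in range(N - 1, -1, -1):
        m, v = step_mean[n], step_var[n]
        # match the per-step mean m and variance v on jumps {+delta, 0, -delta}
        p_up = (v + m * m + m * delta) / (2 * delta ** 2)
        p_down = (v + m * m - m * delta) / (2 * delta ** 2)
        p_mid = 1.0 - p_up - p_down
        k = np.arange(-n, n + 1)
        y = y0 + k * delta
        # child nodes of node k at step n+1 are k+1, k, k-1
        cont = p_up * phi[2:] + p_mid * phi[1:-1] + p_down * phi[:-2]
        phi = np.maximum(payoff(n * dt, y), cont)

    return phi[0]

# Example (illustrative): an American call with strike K on e^{Y(t)} and
# discounted pay-off g(t, y) = exp(-r t) * max(exp(y) - K, 0) could use
#   payoff = lambda t, y: np.exp(-r * t) * np.maximum(np.exp(y) - K, 0.0)
```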

10.3.4 Numerical examples for the model of reselling of a European option

In this subsection, we present two numerical examples for the model of reselling of a European option. The corresponding model of a bivariate mean-reverse diffusion log-price process was considered in detail in Section 9.3, and we do not repeat this analysis. Note only that it included four steps. First, the pay-off function and the log-price process were transformed in such a way that the new transformed log-price process became a simpler bivariate inhomogeneous in time Gaussian process with independent increments. Second, this process was approximated by a bivariate inhomogeneous in time binomial-trinomial-tree process with independent increments with properly fitted expectations, variances and covariances of increments. Third, convergence of reward functions and optimal expected rewards was proved, as well as convergence of reward functions and optimal expected rewards for the corresponding approximating embedded discrete time log-price processes represented by bivariate inhomogeneous in time binomial-trinomial random walks. Fourth, the backward recurrence algorithm for computing rewards for this discrete time model was given.
We consider the model introduced in Section 9.3, with the following parameters. The risk free interest rate is r = 0.04. The price process S(t) has the initial value S(0) = 10, the drift parameter µ = 0.02 and the initial volatility σ = 0.2. Note that these values correspond to the risk neutral setting for the price process. We also assume the parameters of the mean-reverse volatility process σ(t) to be α = 1 for the mean reverting coefficient and ν = 0.2 for the volatility of volatility. The correlation coefficient connecting the noise terms of the price and stochastic volatility processes is ρ = 0.3. We consider a European call option with the strike price K = 10 and the time to maturity T = 0.5 years. Note that condition R4 holds for the chosen values of the parameters ρ, α and T. In this case, we can choose the jump value for the trinomial tree to be δ = ν.
Numerical studies show that the optimal expected reselling reward values stabilize well enough for N ≥ 15. For example, they take the values 0.9961 and 0.9955 for N = 15 and N = 50, respectively, for the model with the parameters pointed out above.


Fig. 10.8. The optimal expected reselling rewards for the models with parameters r = 0.04; S(0) = 10, µ = 0.02, σ = 0.2, 0.12 < α < 2.4, 0.05 < ν < 1, ρ = 0.3; and K = 10, T = 0.5.

Note that the time needed for the calculation of the approximate optimal expected reward value in the case N = 15 is about 2.5 seconds on a 1.73 GHz Intel® Pentium-M processor with 1 GB internal memory, using Matlab®.
The above reward values should be compared with the expected reward corresponding to reselling at maturity, which is equivalent to the execution of the option at maturity. In this case, the expected reward is 0.7228 for the above model. Thus, optimal reselling of the option before maturity increases the expected reward (for the model with the parameters pointed out above) by about 25%.
We also show in the figures below how the reselling reward depends upon the parameters of the price and stochastic volatility processes. Figure 10.8 shows how the optimal expected reselling reward depends upon the parameters α and ν. We let α vary in the interval (0.12, 2.4) and ν vary in the interval (0.05, 1). The other parameters take the same values as in the initial example above. In this case, condition R4 is not violated for any value of α in the above interval. We see that the optimal reselling reward is a decreasing function of α and an increasing function of ν. Figure 10.9 shows how the reselling reward depends upon the parameters µ and σ. We let µ vary in the interval (−0.5, 0.5) and σ vary in the interval (0.05, 1). The other parameters take the same values as in the initial example above. We see that the optimal reselling reward is an increasing function of both parameters µ and σ.


Fig. 10.9. The optimal expected reselling rewards for the models with parameters r = 0.04; S(0) = 10, −0.5 < µ < 0.5, 0.05 < σ < 1, α = 1, ν = 0.2, ρ = 0.3; and K = 10, T = 0.5.

10.4 Reward approximation algorithms for Markov LPP

In this section, we present two- and three-step schemes of reward approximations for American-type options for multivariate modulated Markov log-price processes.

10.4.1 A theoretical scheme for three-step time-space-skeleton reward approximations for Markov log-price processes

Let us suppose that we would like to compute approximative values for the optimal expected reward $\Phi(M^{(0)}_{\max,0,T})$ for an American-type option with a pay-off function $g_0(t, e^{\vec y}, x)$ and a multivariate modulated Markov log-price process $\vec Z_0(t) = (\vec Y_0(t), X_0(t))$.

We restrict the discussion to consideration of the optimal expected reward $\Phi(M^{(0)}_{\max,0,T})$, since the reward functions $\phi^{(0)}_t(\vec y, x)$ can be considered as particular variants of the optimal expected rewards.

(1) We can approximate $\Phi(M^{(0)}_{\max,0,T})$ by the values of the optimal expected rewards $\Phi(M^{(0)}_{\Pi_m,0,T})$, $m = 0, 1, \ldots$ computed for embedded time-skeleton log-price processes $\vec Z(t_{m,n})$ based on some sequence of partitions $\Pi_m = \langle 0 = t_{m,0} < \cdots < t_{m,N_m} = T \rangle$, $m = 0, 1, \ldots$ of the interval $[0, T]$ such that $d(\Pi_m) \to 0$ as $m \to \infty$.

This can be done using the theorems about time-skeleton reward approximations presented in Chapter 5 (Theorems 5.2.1 and 5.2.2, for general multivariate modulated Markov log-price processes, Theorems 5.4.1 and 5.4.2, for multivariate log-price processes with independent increments, and Theorems 5.5.1, 5.5.3 and 5.5.5, for diffusion log-price processes), which give conditions under which the following asymptotic relation takes place,

$\Phi(M^{(0)}_{\Pi_m,0,T}) \to \Phi(M^{(0)}_{\max,0,T})$ as $m \to \infty$.   (10.42)

(2) Approximative values for $\Phi(M^{(0)}_{\Pi_m,0,T})$ can be computed using the theorems given in Chapter 6. This can be done in two ways.

(2.1) One way is to approximate $\Phi(M^{(0)}_{\Pi_m,0,T})$ by space-skeleton reward approximations $\Phi_{\Pi_m,\dot\varepsilon_r}$, $r = 0, 1, \ldots$, computed for some appropriate sequence of space-skeleton structures $\Xi_{\Pi_m,\dot\varepsilon_r}$ with parameters $0 < \dot\varepsilon_r \to 0$ as $r \to \infty$, in a way providing an asymptotically dense covering by skeleton points and sets of the phase space $\mathbb{Z}$ (for the log-price process $\vec Z_0(t)$).

This can be done using the theorems about time-space-skeleton reward approximations given in Chapter 6 (Theorems 6.1.1 and 6.1.3, for multivariate modulated Markov log-price processes, Theorems 6.2.1, 6.2.3, 6.2.6 and 6.2.8, for multivariate log-price processes with independent increments, and Theorem 6.3.8, for diffusion log-price processes), which give conditions under which the following asymptotic relation takes place,

$\Phi_{\Pi_m,\dot\varepsilon_r} \to \Phi_{\Pi_m,0} = \Phi(M^{(0)}_{\Pi_m,0,T})$ as $r \to \infty$.   (10.43)

Also, the theorems about space-skeleton reward approximations given in Chapters 7∗ – 8∗ and Chapters 1 – 2 (respectively, for discrete time modulated Markov log-price processes and autoregressive type log-price processes) can be used.

(2.1.1) The values of the quantities $\Phi_{\Pi_m,\dot\varepsilon_r} = \Phi^{(0)}_{\Pi_m,\dot\varepsilon_r}$, $r = 0, 1, \ldots$ can be computed with the use of the corresponding recurrence algorithms presented in Chapter 6 (Lemmas 6.1.2, 6.2.1, 6.2.4 and 6.3.1) applied to the pay-off function $g_0(t, e^{\vec y}, x)$ and the multivariate modulated Markov log-price process $\vec Z_0(t)$.

(2.1.2) Alternatively, the quantities $\Phi_{\Pi_m,\dot\varepsilon_r} = \Phi^{(0)}_{\Pi_m,\dot\varepsilon_r}$ can be approximated by the quantities $\Phi^{(\varepsilon_l)}_{\Pi_m,\dot\varepsilon_r}$, $l = 0, 1, \ldots$, computed for some approximating pay-off functions $g_{\varepsilon_l}(t, e^{\vec y}, x)$ and approximating multivariate modulated Markov log-price processes $\vec Z_{\varepsilon_l}(t)$, for some sequence $0 < \varepsilon_l \to 0$ as $l \to \infty$.

This can be done using the lemmas about convergence of optimal expected rewards for space-skeleton approximations of embedded discrete time multivariate modulated Markov log-price processes also presented in Chapter 6 (Lemmas 6.1.3, 6.2.3, 6.2.5 and 6.3.2), which give conditions under which the following asymptotic relation takes place,

$\Phi^{(\varepsilon_l)}_{\Pi_m,\dot\varepsilon_r} \to \Phi^{(0)}_{\Pi_m,\dot\varepsilon_r}$ as $l \to \infty$.   (10.44)

(2.2.1) There is an alternative way, to approximate $\Phi(M^{(0)}_{\Pi_m,0,T})$ directly by the optimal expected rewards $\Phi(M^{(\varepsilon_n)}_{\Pi_m,0,T})$, $n = 0, 1, \ldots$, computed for some approximating pay-off functions $g_{\varepsilon_n}(t, e^{\vec y}, x)$ and approximating multivariate modulated Markov log-price processes $\vec Z_{\varepsilon_n}(t)$ and a sequence $0 < \varepsilon_n \to 0$ as $n \to \infty$.

This can be done using the theorems about convergence of optimal expected rewards for embedded discrete time multivariate modulated Markov log-price processes also presented in Chapter 6 (Theorems 6.1.1 and 6.1.3, for multivariate modulated Markov log-price processes, Theorems 6.2.1 and 6.2.3, for multivariate log-price processes with independent increments, and Theorems 6.3.1, 6.3.3 and 6.3.5, for space-skeleton approximations of diffusion processes), which give conditions under which the following asymptotic relation takes place,

$\Phi(M^{(\varepsilon_n)}_{\Pi_m,0,T}) \to \Phi(M^{(0)}_{\Pi_m,0,T})$ as $n \to \infty$.   (10.45)

Also, the results about convergence of rewards given in Chapters 5∗ – 10∗ (for discrete time modulated Markov log-price processes, log-price processes represented by modulated random walks, and binomial- and trinomial-tree approximations for discrete time Markov Gaussian log-price processes) can be used.

(2.2.2) Approximative values of the quantities $\Phi(M^{(\varepsilon_n)}_{\Pi_m,0,T})$, $n = 0, 1, \ldots$ can be computed by applying the theorems about convergence of space-skeleton reward approximations given in Chapters 7∗, 8∗ and Chapter 6 (Theorems 6.1.6, 6.1.8, 6.2.8 and 6.3.8).

(3) Alternatively, the values of the optimal expected reward $\Phi(M^{(0)}_{\max,0,T})$ can be approximated directly by the optimal expected rewards $\Phi(M^{(\varepsilon_n)}_{\max,0,T})$ or $\Phi(M^{(\varepsilon_n)}_{\Pi_{m_n},\varepsilon_n,0,T})$ computed for simpler approximating pay-off functions $g_{\varepsilon_n}(t, e^{\vec y}, x)$ and approximating multivariate modulated Markov log-price processes $\vec Z_{\varepsilon_n}(t)$ and a sequence $0 < \varepsilon_n \to 0$ as $n \to \infty$.

This can be done using the theorems about convergence of rewards for American-type options given in Chapters 7 – 9. These are Theorem 7.1.1, for multivariate modulated Markov log-price processes, Theorems 7.2.1, 7.2.3, 7.2.5 and 7.2.7, for multivariate log-price processes with independent increments, Theorems 7.3.1, 7.3.3, 7.4.1 and 7.4.3, for univariate and multivariate Gaussian processes with independent increments, and Theorems 8.1.1, 8.1.3, 8.2.1, 8.2.3, 8.3.1 and 8.3.3, for diffusion processes. These theorems give conditions under which the following asymptotic relations take place,

$\Phi(M^{(\varepsilon_n)}_{\max,0,T}) \to \Phi(M^{(0)}_{\max,0,T})$ as $n \to \infty$,   (10.46)

and

$\Phi(M^{(\varepsilon_n)}_{\Pi_{m_n},\varepsilon_n,0,T}) \to \Phi(M^{(0)}_{\max,0,T})$ as $n \to \infty$.   (10.47)

(3.1) The parameters of the approximating log-price processes $\vec Z_{\varepsilon_n}(t)$ can be chosen according to the formulas pointed out in Lemma 7.2.1, for time-space-skeleton processes with independent increments, and Lemmas 7.3.1, 7.3.2 and 7.4.1 – 7.4.3, for univariate and multivariate binomial- and trinomial-tree processes approximating Gaussian processes with independent increments.

(3.2) The values of the quantities $\Phi_{\Pi_{m_n},\varepsilon_n,0,T}$, $n = 0, 1, \ldots$ can be computed with the use of the corresponding recurrence algorithms presented in Chapters 7 and 8, namely, in Lemmas 7.2.1, 7.3.3 and 7.4.4, for log-price processes with independent increments, and Lemma 8.3.1, for diffusion log-price processes.

Also, the methods of reward approximation (which let one get asymptotic relations analogous to (10.46) and (10.47)) presented in Section 9.1, for mean-reverse diffusion log-price processes, Section 9.2, for knockout American-type options, Section 9.3, for reselling of American-type options, and Section 9.4, for American-type options with random pay-off, can be used.

10.4.2 Three-step time-space-skeleton reward approximation algorithms for Markov log-price processes

A practical implementation of the above theoretical approximation scheme requires realizing the approximation steps in repeated cycles, with a possible sequence of approximation steps in one cycle < (2.1.1) or (2.1.2) → (2.1) → (1) >, or, alternatively, < (2.2.1) or (2.2.2) → (2.1) → (1) >, or < (3.1) and (3.2) → (3) >.

Let us, for example, consider in more detail the approximation scheme with the sequence < (2.1.1) → (2.1) → (1) > of approximation steps in one cycle.

Let us assume that a sequence of partitions $\Pi_m$, $m = 0, 1, \ldots$ and space-skeleton structures $\Xi_{\Pi_m,\dot\varepsilon_r}$, $r = 1, 2, \ldots$ are chosen such that the quantities $\Phi_{\Pi_m,\dot\varepsilon_r}$, $r = 0, 1, \ldots$ can be computed (as solutions of the corresponding recurrence systems of linear equations) and the asymptotic relations (10.43) and (10.42) hold.

In order to compute approximative values for the quantity $\Phi(M^{(0)}_{\Pi_m,0,T})$, for $m = 0, 1, \ldots$, one should compute the sequential values $\Phi_{\Pi_m,\dot\varepsilon_0}, \Phi_{\Pi_m,\dot\varepsilon_1}, \ldots$ until the values $\Phi_{\Pi_m,\dot\varepsilon_r}$ become stabilized (due to the asymptotic relation (10.43)), with appropriately small relative errors $\Delta_{m,r} = |(\Phi_{\Pi_m,\dot\varepsilon_r} - \Phi_{\Pi_m,\dot\varepsilon_{r-1}})/\Phi_{\Pi_m,\dot\varepsilon_r}|$, for a reasonably long subsequence of values of the parameter $r = r'_m, r'_m + 1, \ldots, r''_m$. The last such value, $\hat\Phi(M^{(0)}_{\Pi_m,0,T}) = \Phi_{\Pi_m,\dot\varepsilon_{r''_m}}$, or the corresponding average value $\bar\Phi(M^{(0)}_{\Pi_m,0,T}) = (\Phi_{\Pi_m,\dot\varepsilon_{r'_m}} + \cdots + \Phi_{\Pi_m,\dot\varepsilon_{r''_m}})/(r''_m - r'_m + 1)$, can be used as an approximative value for the quantity $\Phi(M^{(0)}_{\Pi_m,0,T})$.

One should repeat computing the sequential values $\hat\Phi(M^{(0)}_{\Pi_m,0,T})$ or $\bar\Phi(M^{(0)}_{\Pi_m,0,T})$, for the parameter $m = 0, 1, \ldots$, until these values become stabilized (due to the asymptotic relation (10.42)), with appropriately small relative errors, respectively, $\hat\Delta_m = |(\hat\Phi(M^{(0)}_{\Pi_m,0,T}) - \hat\Phi(M^{(0)}_{\Pi_{m-1},0,T}))/\hat\Phi(M^{(0)}_{\Pi_m,0,T})|$ or $\bar\Delta_m = |(\bar\Phi(M^{(0)}_{\Pi_m,0,T}) - \bar\Phi(M^{(0)}_{\Pi_{m-1},0,T}))/\bar\Phi(M^{(0)}_{\Pi_m,0,T})|$, for a reasonably long subsequence of values of the parameter $m = m', \ldots, m''$. The last such value, $\hat\Phi(M^{(0)}_{\max,0,T}) = \hat\Phi(M^{(0)}_{\Pi_{m''},0,T})$, or the corresponding average value $\bar\Phi(M^{(0)}_{\max,0,T}) = (\hat\Phi(M^{(0)}_{\Pi_{m'},0,T}) + \cdots + \hat\Phi(M^{(0)}_{\Pi_{m''},0,T}))/(m'' - m' + 1)$, can be used as an approximative value for the optimal expected reward $\Phi(M^{(0)}_{\max,0,T})$.

10.4.3 Modified three-step time-space-skeleton reward approximation algorithms for Markov log-price processes

It is worth noting that the theorems from Chapter 5 (Theorems 5.2.1, 5.2.2, 5.4.1, 5.5.1, 5.5.3 and 5.5.5), which give explicit time-skeleton reward estimates, let one, in principle, change the order of steps in the time-space-skeleton reward approximation algorithm described above.

Indeed, let $\delta_n$, $n = 0, 1, \ldots$ be a sequence of positive numbers such that $\delta_n \to 0$ as $n \to \infty$, and let $\Pi_m$, $m = 0, 1, \ldots$ be a sequence of partitions of the interval $[0, T]$ such that $d(\Pi_m) \to 0$ as $m \to \infty$. The relations given in the theorems pointed out above let one find $m_n = m(\delta_n)$, $n = 0, 1, \ldots$ such that, for $n = 0, 1, \ldots$,

$0 \leq \Phi(M^{(0)}_{\max,0,T}) - \Phi(M^{(0)}_{\Pi_{m_n},0,T}) \leq \delta_n$.   (10.48)

At the second step, one should use the theorems about space-skeleton reward approximations given in Chapter 6 (Theorems 6.1.6, 6.1.8, 6.2.8 and 6.3.8). According to these theorems, one can construct, for every partition $\Pi_{m_n}$, $n = 0, 1, \ldots$, a sequence of space-skeleton reward approximations $\Phi_{\Pi_{m_n},\dot\varepsilon_r}$, $r = 0, 1, \ldots$ such that $0 < \dot\varepsilon_r \to 0$ as $r \to \infty$, $n = 0, 1, \ldots$ and, for every $n = 0, 1, \ldots$,

$\Phi_{\Pi_{m_n},\dot\varepsilon_r} \to \Phi_{\Pi_{m_n},0} = \Phi(M^{(0)}_{\Pi_{m_n},0,T})$ as $r \to \infty$.   (10.49)

A practical implementation of the above algorithm, based on the use of relations (10.48) and (10.49), can be the following. The theorems from Chapter 5 (Theorems 5.2.1, 5.4.1, 5.5.1, 5.5.3 and 5.5.5) let one find the quantity $m_n$, for every $n = 0, 1, \ldots$, as the minimal $m$ which satisfies the following inequality,

$M_{11}\, d(\Pi_m) + \sum_{i=1}^{k} M_{12,i}\, \Delta_\beta(Y_{0,i}(\cdot), d(\Pi_m), T)^{1/\alpha^*} + M_{13}\, \Upsilon_{\alpha^*}(X_0(\cdot), d(\Pi_m), T)^{1/\alpha^*} \leq \delta_n$.   (10.50)

In order to compute approximative values for the quantity $\Phi(M^{(0)}_{\Pi_{m_n},0,T})$, for $n = 0, 1, \ldots$, one should compute the sequential values $\Phi_{\Pi_{m_n},\dot\varepsilon_0}, \Phi_{\Pi_{m_n},\dot\varepsilon_1}, \ldots$, until the values $\Phi_{\Pi_{m_n},\dot\varepsilon_r}$ become stabilized, with appropriately small relative errors $\Delta_{m_n,r} = |(\Phi_{\Pi_{m_n},\dot\varepsilon_r} - \Phi_{\Pi_{m_n},\dot\varepsilon_{r-1}})/\Phi_{\Pi_{m_n},\dot\varepsilon_r}|$, for a reasonably long subsequence of values of the parameter $r = r'(m_n), r'(m_n) + 1, \ldots, r''(m_n)$. The last such value, $\hat\Phi(M^{(0)}_{\Pi_{m_n},0,T}) = \Phi_{\Pi_{m_n},\dot\varepsilon_{r''(m_n)}}$, or the corresponding average value $\bar\Phi(M^{(0)}_{\Pi_{m_n},0,T}) = (\Phi_{\Pi_{m_n},\dot\varepsilon_{r'(m_n)}} + \cdots + \Phi_{\Pi_{m_n},\dot\varepsilon_{r''(m_n)}})/(r''(m_n) - r'(m_n) + 1)$, can be used as an approximative value for the quantity $\Phi(M^{(0)}_{\Pi_{m_n},0,T})$.

These computations should be repeated for sequential values of the parameter $n$ until the approximate values of the relative errors $\hat\Delta_n = \delta_n/\hat\Phi(M^{(0)}_{\Pi_{m_n},0,T})$ or $\bar\Delta_n = \delta_n/\bar\Phi(M^{(0)}_{\Pi_{m_n},0,T})$ become stabilized at an appropriately small level, for a reasonably long subsequence of values of the parameter $n = n', \ldots, n''$. The last such value, $\hat\Phi(M^{(0)}_{\max,0,T}) = \hat\Phi(M^{(0)}_{\Pi_{m_{n''}},0,T})$, or the corresponding average value $\bar\Phi(M^{(0)}_{\max,0,T}) = (\hat\Phi(M^{(0)}_{\Pi_{m_{n'}},0,T}) + \cdots + \hat\Phi(M^{(0)}_{\Pi_{m_{n''}},0,T}))/(n'' - n' + 1)$, can be used as an approximative value for the optimal expected reward $\Phi(M^{(0)}_{\max,0,T})$.

It should be noted that the above second variant of the approximation algorithm can be effectively used only in the case where inequality (10.50) does not give too large values for the quantities $m_n$.

It would also be nice to have upper bounds for the rates of convergence in the asymptotic relations (10.43) and (10.49). In principle, such bounds in the form of $O(\cdot)$ can be obtained. In practice, such bounds would have a rather psychological than a real value, since bounds with explicit constants would be required for practical use. However, reward functionals for American-type options have a highly nonlinear character, and it should be expected to be difficult to get values for such constants admissible for practical use. Thus, one should accept the usual engineering approach described above, based on the use of convergence relations of the type (10.43) and (10.49).

The theorems about time-skeleton and space-skeleton reward approximations given in Chapters 5 – 9 let one also design two- and three-step reward approximation algorithms for finding approximative optimal values for the reward functions $\phi^{(0)}_t(\vec y, x)$, analogous to the approximations for the optimal expected reward $\Phi(M^{(0)}_{\max,0,T})$ described above.
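The repeated-cycle procedures described in Subsections 10.4.2 and 10.4.3 can be summarized by the following generic stopping-rule sketch. Here compute_skeleton_reward(m, r), the tolerance tol and the run length run_len are hypothetical placeholders: a concrete model would supply the quantity $\Phi_{\Pi_m,\dot\varepsilon_r}$ via the recurrence algorithms of Chapter 6, and the sketch only illustrates the nested stabilization over the space-skeleton parameter $\dot\varepsilon_r$ and the partition index $m$, not any particular algorithm of this book.

```python
def stabilized_value(values_iter, tol, run_len):
    """Return the last value of a sequence once the relative error
    |(v_k - v_{k-1}) / v_k| has stayed below tol for run_len consecutive steps."""
    prev, streak = None, 0
    for v in values_iter:
        if prev is not None and v != 0 and abs((v - prev) / v) < tol:
            streak += 1
            if streak >= run_len:
                return v
        else:
            streak = 0
        prev = v
    return prev  # sequence exhausted before stabilization


def three_step_reward(compute_skeleton_reward, tol=1e-3, run_len=3,
                      max_m=20, max_r=40):
    """Generic sketch of the repeated-cycle procedure of Subsection 10.4.2.

    compute_skeleton_reward(m, r) -- hypothetical user-supplied function
    returning the space-skeleton reward Phi_{Pi_m, eps_r}.
    """
    def phi_hat(m):
        # inner cycle: stabilize over the space-skeleton parameter eps_r
        return stabilized_value(
            (compute_skeleton_reward(m, r) for r in range(max_r)), tol, run_len)

    # outer cycle: stabilize over the partition index m
    return stabilized_value((phi_hat(m) for m in range(max_m)), tol, run_len)
```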

Bibliographical Remarks

The present book is the second volume of the monograph devoted to a systematic presentation of stochastic approximation methods for American-type options for Markov log-price processes. The main part of the bibliography, containing about 600 references, is given in the 1st volume, Silvestrov (2014). That bibliography is supplemented by bibliographical remarks, which also include historical notes. The present bibliography supplements the above one with the latest references to works devoted to option pricing and related topics, such as optimal stopping problems for stochastic processes. It mostly includes references to works which were published during the last three years and are not presented in the bibliography of the 1st volume.
First of all, we would like to refer to books specially devoted to option pricing and related problems, namely Miyahara (2012), Rüfenacht (2012), Kolesnik and Ratanov (2013), Pagès (2013), Guyon and Henry-Labordère (2014), Silvestrov (2014), and other books on mathematical finance and related topics, namely Benth, Crisan, Guasoni, Manolarakis, Muhle-Karbe, Nee and Protter (2013), Bernhard, Engwerda, Roorda, Schumacher, Kolokoltsov, Saint-Pierre and Aubin (2013), Campolongo, Jönsson and Schoutens (2013), Kijima (2013), Rosazza and Sgarra (2013), and Swishchuk and Islam (2013).
The structure of optimal stopping strategies, represented by first hitting times into the corresponding space-time domains, and related free boundary problems have been studied in Alcala Burgos (2012), Chen and Cheng (2012), Emmerling (2012), Golbabai, Ahmadian and Milev (2012), Jeunesse and Jourdain (2012), Leung, Sircar and Zariphopoulou (2012), Rehman, Hussain and Shashiashvili (2012), Chen, Cheng and Chadam (2013), Christensen and Irle (2013), Gutiérrez (2013), Howison, Reisinger and Witte (2013), Kim, Ahn and Choe (2013), Kim, I., Jang and Kim, K. (2013), Lamberton and Mikou (2013), Liping and Wanghui (2013), Babbin, Forsyth and Labahn (2014), Gapeev and Rodosthenous (2014), and Rodrigo (2014).
Analytical solutions for American-type options and related results can be found in Ekström and Tysk (2012), Goard (2012), Joshi and Staunton (2012), Kruse and Müller (2012), Peng (2012), Chiarella and Ziveyi (2013, 2014), Ekström, Hobson, Janson and Tysk (2013), Gapeev and Rodosthenous (2013), Yoon and Kim (2013), Carr, Fisher and Ruf (2014), Jerbi and Kharrat (2014), and Jun and Ku (2015).
Option valuation for multivariate price processes has been studied in Morozov and Khizhnyak (2011, 2012), Morozov and Muravei (2012), Šiška (2012), Chiarella and Ziveyi (2013, 2014), Bronstein, Pagès and Portès (2013), Wu (2013), and for modulated price processes in Boyarchenko and Levendorskiĭ (2013b), Foroush


Bastani, Ahmadi and Damircheli (2013), Khaliq, Kleefeld and Liu (2013), Babbin, Forsyth and Labahn (2014), and Elliott and Hamada (2014). Stochastic approximation methods including tree, time-skeleton and spaceskeleton type approximations for options and related securities for different classes of scalar and vector stochastic processes have been studies in Chen and Joshi (2012), Joshi and Staunton (2012), Boyarchenko and Levendorski˘ı (2013b), Bronstein, Pagès and Portès J. (2013), Howison, Reisinger and Witte (2013), Jin, Li, Tan, and Wu (2013), Lee, J. and Lee, Y. (2013), Pun and Wong (2013), Silvestrov and Li (2013, 2015), Deng and Peng (2014), and Lundengård, Ogutu, Silvestrov, S., and Weke (2014). Monte Carlo based approximation methods have been applied to option pricing problems including American options by Abbas-Turki and Lapeyre (2012), Chang, Lu and Chi (2012), Liang and Xu (2013), Mei and Clutter (2013), Zanger (2013), Chen and Liu (2014), Deng and Peng (2014), Ferreiro-Castilla, Kyprianou, Scheichl and Suryanarayana (2014), Nwozo and Fadugba (2014), and Yu and Liu (2014). Results on comparative numerical studies of different method of option pricing can be found in Guo, Kong, Li and Zhang (2012), Wang, Bernstein and Chesney (2012), Marín, Vargas and Pinzón Cardozo (2013), and Silvestrov and Li (2013, 2015). Difference methods, numerical solutions for partial differential equations associated with prices for American options, and related techniques are presented in works by Holmes and Yang (2012), Le, Cen and Xu (2012), Li and Kang (2012), Reisinger and Witte (2012), Šiška (2012), Sun, Shi and Dong (2012), Wang, Xu, Ma and Qiao (2012), Ballestra and Cecere (2013), Elliott and Siu (2013), Foroush, Ahmadi and Damircheli (2013), Huang, Cen and Le (2013), Khaliq, Kleefeld and Liu (2013), Khodja, Chau, Couturier, Bahi and Spitéri (2013), Lee, J. and Lee, Y. (2013), O’Sullivan, C. and O’Sullivan, S. (2013), Yoon, Kim and Choi (2013), Yousuf and Khaliq (2013), Zheng and Yin (2013a, 2013b), Zhu and Chen (2013), Chiarella and Ziveyi (2014), Company, Egorova, and Jódar (2014), and Zhang, Song and Luan (2014). Other approximation methods, based on PDE for option prices, integral representations, duality martingale relations, variational inequalities, linear programming and related optimization techniques, has been used in papers by Benothman and Trabelsi (2012), Memon (2012), Trabelsi (2012), Antonelli, Mancini and Pinar (2013), Ballestra and Pacelli (2013), Boyarchenko and Levendorski˘ı (2013a), Cheang, Chiarella and Ziogas (2013), Elliott and Siu (2013), Gutiérrez (2013), Huang, Forsyth and Labahn (2013), Pun and Wong (2013), Yin and Han (2013), Christensen (2014), Mahato (2014), Martín-Vaquero, Khaliq and Kleefeld (2014), Salmi, Toivanen and von Sydow (2014), Sun, Shi and Gu (2014), Sun, Shi and Wu (2014), Wu, Yang, Ma and Zhao (2014), and Yu and Yang (2014), and some papers on analysis of risks connected with option contracts Ye, Song and Wang

528

Bibliographical Remarks

(2012), Eberlein and Madan (2012), Madan and Schoutens (2013), and Melnikov and Smirnov (2014). Finally, we give references related to analytical and approximation methods for other type of options. A large number of works is devoted to European options. These are papers by Chen, Shen and Yu (2012), Dong (2012), Étoré and Gobet (2012), Cui and Liu (2012), Huang, Tao and Li (2012), Li and Melnikov (2012), Ma and Li (2012), Popovic and Goldsman (2012), Rana and Ahmad (2012), Sánchez and Olivares (2012), Shen (2012), Shi and Zhou (2012), Sun and Shi (2012), Wang (2012), Wei (2012), Wu, Zhang and Zhu (2012), Yan, D. and Yan, B. (2012), Yang, Yi, Song and Yang (2012), Yue (2012), Zhang and Wang (2012), Zhu, Badran and Lu (2012), Company, Jódar and Fakharany (2013), Dai and Liu (2013), Elbeleze, Kiliçman and Taib (2013), Hadji and Remita (2013), Hariharan, Padma and Pirabaharan (2013), Ivanov (2013), Lesmana and Wang (2013), Kumar, Tripathi and Kadalbajoo (2013), Leduc (2013), Liu and Li (2013), Ludkovski and Shen (2013), MacLean, Zhao and Ziemba (2013), Neri and Schneider (2013), Ngounda, Patidar and Pindza (2013), Ortiz-Gracia and Oosterlee (2013), Serghini, El hajaji, Mermri and Hilal (2013), Swishchuk and Islam (2013), Zhang and Li (2013), Ziveyi, Blackburn and Sherris (2013), Abbas-Turki and Lamberton (2014), Badescu, Elliott and Ortega (2014), Bohner, Sánchez and Rodríguez (2014), Dyrssen, Ekström and Tysk (2014), Eberlein and Glau (2014), Geng, Navon and Chen (2014), Ghandehari and Ranjbar (2014), Guo and Yuan (2014), Kim, Jeong and Shin (2014), Li, Shu and Kan (2014), Li and Wang (2014), Li, Zhou, Zhao and Ge (2014), Milstein and Spokoiny (2014), Nardon and Pianca (2014), Pindza and Patidar (2013), Pindza, Patidar and Ngounda (2014), Ragni (2014), Swishchuk, Tertychnyi and Elliott (2014), and Xiu (2014). Different types of barrier type options have been treated in Roşca, N. and Roşca, A. (2011), Wen, Huo and Deng (2011), Yang (2011), Ding, Huang and Zhao (2012), Howison (2012), Hu and Li (2012), Llemit and Escaner (2012), Obłój and Ulmer (2012), Nandakishore and Udayabaskaran (2012), Achtsis, Cools and Nuyens (2013), Altay, Gerhold, Haidinger and Hirhager (2013), Boyarchenko and Levendorski˘ı (2013a), Carrada-Herrera, Grudsky, Palomino-Jiménez and Porter (2013), Dong (2013), Ibrahim, O’Hara and Constantinou (2013), Jessen and Poulsen (2013), Jun and Ku (2013), Kato, Takahashi and Yamada (2013, 2014), Lin and Palmer (2013), Mijatović and Pistorius (2013), Muroi and Yamada (2013), O’Hara, Sophocleous and Leach (2013), Zhai, Bi and Zhang (2013), Appolloni, Gaudenzi and Zanette (2014), de Innocentis and Levendorski˘ı (2014), Elliott, Siu and Chan (2014), Ferreiro-Castilla, Kyprianou, Scheichl and Suryanarayana(2014), Götz, Escobar and Zagst (2014), Levendorski˘ı (2014), Sun, Shi and Gu (2014), Sun, Shi and Wu (2014), and Thakoor, Tangman and Yannick (2014).

Bibliographical Remarks

529

Asian option have been studied in Almeida and Vicente (2012), Cai and Kou (2012), Calin, Chang and Alshamary (2012), Dong and He (2012), Elahi and Aziz (2012), Kandilarov and Ševčovič (2012), Mudzimbabwe, Patidar and Witbooi (2012), Yu (2012), Zhu and Yu (2012), Cen, Le and Xu (2013a, 2013b), Dingeç and Hörmann (2013), Du, Liu and Gu (2013), Foschi, Pagliarani and Pascucci (2013), Fu, Zhang and Weng (2013), Moon and Kim (2013), Moussi, Lidouh and Nqi (2013), Pan, Zhou, Zhang and Han (2013), Patie (2013), Shen and He (2013), Yao (2013), Zhang and Oosterlee (2013, 2014), Ben and Kebaier (2014), Cai, Li and Shi (2014), Cassagnes, Chen and Ohashi (2014), Fan and Zhang (2014), Hackmann and Kuznetsov (2014), Hepperger (2014), Lee, Kim and Jang (2014), Kim and Wee (2014), Liu, Wu, Xu and Zhao (2014), Moussi, Lidouh and Nqi (2014), Novikov, Ling and Kordzakhia (2014), Shi and Yang (2014), Vecer (2014), Wang, Z., Wang, L., Wang, D. and Jin (2014), Wang, X. and Wang, Y. (2014), and Zhang, Pan, Zhou and Han (2014). Bermudian options have been treated in Jain and Oosterlee (2012), Baviera and Giada (2013), Boyle, Kolkiewicz and Tan (2013), Feng and Lin (2013), Alobaidi, Mansi and Mallier (2014), Imai (2014), Lim, Lee and Kim (2014), binary options in Guo and Li (2012), Zhang and Xu (2012), Thavaneswaran, Appadoo and Frank (2013), British options in Peskir and Samee (2011, 2013), Russian options in Allaart (2010), Ivanov (2010), Suzuki and Sawaki (2010), Glover, Peskir and Samee (2011), Scullard (2011), and other exotic options in Dia and Lamberton (2011), Ivanov and Shiryaev (2011), Agliardi (2012), Coqueret (2012), Goard (2012), Li, Kim and Kwon (2012), Boyarchenko and Levendorski˘ı (2013b), Kadalbajoo, Kumar and Tripathi (2013), Li (2013), Lu and Bao (2013), Pindza, Patidar and Ngounda (2013), Funahashi and Kijima (2014), and Yam, Yung and Zhou (2014). We also refer to works stressing underlying assets for the corresponding options, namely, Oshanin and Schehr (2012), Yang and Sun (2013), Álvarez-Díez, Baixauli-Soler and Belda-Ruiz (2014), Dai and Chiu (2014), for stock options, Zhang (2012), Zhang and Wang (2012), Zhang and Teo (2013), El hajaji, Hilal, Serghini and Mermri (2014), Motsepa, Khalique and Molati (2014), for bond options, Tapiero (2013), Fabozzi, Leccadito and Tunaru (2014), for equity options, Madan (2012), Cont and Deguest (2013), Cont and Kokholm (2013), Areal and Rodrigues (2014), Lin, Li and Lee (2014), Singh (2014), for index options, and Zhang, Grzelak and Oosterlee (2012), for commodity options. Pricing problems for American-type options is closely connected with optimal stopping problems for underlying stochastic price processes treated in the works by Chakrabarty and Guo (2012), Cheng and Riedel (2013), Chen, Song, Yi and Yin (2013), Kohler and Walk (2013), Li and Linetsky (2013), Seaquist (2013), Ye and Zhou (2013), Zhou (2013), Babbin, Forsyth and Labahn (2014), and Bayraktar and Zhou (2014).

530

Bibliographical Remarks

We refer here also to papers concerned optimal stopping for rewards for Markov-type processes and related topics that are Dayanik and Egami (2012), Davis and Cairns (2012), Erick (2012), Lempa (2012), Mishura and Schmidli (2012), Sheu and Tsai (2012), Wong, T. and Wong, H. (2012), Belomestny (2013), Lamberton and Zervos (2013), Mishura and Tomashik (2013), Presman (2013, 2014), Wu (2013), Alvarez and Matomäki (2014), Alvarez, Matomäki and Rakkolainen (2014), Assing, Jacka and Ocejo (2014), Crocce and Mordecki (2014), Ekren, Touzi and Zhang (2014), Gapeev and Rodosthenous (2014), Klimmek (2014), Kyprianou and Ott (2014), Li and Linetsky (2014), Makasu (2014a, 2014b), Pemy (2014), Quenez and Sulem (2014), Silvestrov, Manca, and Silvestrova, E. (2014), and Tanaka (2014). Finally we would like also to refer to some more general works on optimal stopping for stochastic processes that are Bayraktar and Xing (2012), Chen and Yi (2012), Christensen (2012), Lempa (2012), Øksendal and Sulem (2012), Sinel’nikov (2012), Stettner (2012), Baghery, Haadem, Øksendal and Turpin (2013), Brandejsky, de Saporta and Dufour (2013), Christensen, Salminen and Ta (2013), Cui (2013), Horiguchi and Piunovskiy (2013), Makasu (2013), Ott (2013), Xu and Zhou (2013), Bayer, Hoel, von Schwerin and Tempone (2014), Kijima and Siu (2014), Øksendal, Sulem and Zhang (2014), Sonin (2014), and Stockbridge (2014).

Bibliography


Abbas-Turki, L.A., Lamberton, D. European options sensitivity with respect to the correlation for multidimensional Heston models. Int. J. Theor. Appl. Finance, 17, no. 3, (2014), 1450015, 36 pp. Abbas-Turki, L.A., Lapeyre, B. American options by Malliavin calculus and nonparametric variance and bias reduction methods. SIAM J. Financial Math., 3, no. 1, (2012), 479–510. Achtsis, N., Cools, R., Nuyens, D. Conditional sampling for barrier option pricing under the LT method. SIAM J. Financial Math., 4, no. 1, (2013), 327–352. Agliardi, R. A comprehensive mathematical approach to exotic option pricing. Math. Methods Appl. Sci., 35, no. 11, (2012), 1256–1268. Alcala Burgos, J. Optimizing the Exercise Boundary for the Holder of an American Option over a Parametric Family. Ph.D. Thesis, New York University, 2012, 84 pp. Allaart, P.C. Optimal stopping rules for American and Russian options in a correlated random walk model. Stoch. Models, 26, no. 4, (2010), 594–616. Almeida, C., Vicente, J. Term structure movements implicit in Asian option prices. Quant. Finance, 12, no. 1, (2012), 119–134. Alobaidi, G., Mansi, S., Mallier, R. Numerical solution of an integral equation for perpetual Bermudan options. Int. J. Comput. Math., 91, no. 5, (2014), 1005–1011. Altay, S., Gerhold, S., Haidinger, R., Hirhager, K. Digital double barrier options: several barrier periods and structure floors. Int. J. Theor. Appl. Finance, 16, no. 8, (2013), 1350044, 14 pp. Alvarez, L.H.R., Matomäki, P. Optimal stopping of the maximum process. J. Appl. Probab., 51, no. 3, (2014), 818–836. Alvarez, L.H.R., Matomäki, P., Rakkolainen, T.A. A class of solvable optimal stopping problems of spectrally negative jump diffusions. SIAM J. Control Optim., 52, no. 4, (2014), 2224–2249. Álvarez-Díez, S., Baixauli-Soler, J.S., Belda-Ruiz, M. Are we using the wrong letters? An analysis of executive stock option Greeks. Cent. Eur. J. Oper. Res., 22, no. 2, (2014), 237–262. Antonelli, F., Mancini, C., Pinar, M. Calibrated American option pricing by stochastic linear programming. Optimization, 62, no. 11, (2013), 1433–1450. Appolloni, E., Gaudenzi, M., Zanette, A. The binomial interpolated lattice method for step double barrier options. Int. J. Theor. Appl. Finance, 17, no. 6, (2014), 1450035, 26 pp. Areal, N., Rodrigues, A. Discrete dividends and the FTSE-100 index options valuation. Quant. Finance, 14, no. 10, (2014), 1765–1784. Assing, S., Jacka, S., Ocejo, A. Monotonicity of the value function for a two-dimensional optimal stopping problem. Ann. Appl. Probab., 24, no. 4, (2014), 1554–1584. Babbin, J., Forsyth, P.A., Labahn, G. A comparison of iterated optimal stopping and local policy iteration for American options under regime switching. J. Sci. Comput., 58, no. 2, (2014), 409–430. Badescu, A., Elliott, R.J., Ortega, J.P. Quadratic hedging schemes for non-Gaussian GARCH models. J. Econom. Dynam. Control, 42 (2014), 13–32. Baghery, F., Haadem, S., Øksendal, B., Turpin, I. Optimal stopping and stochastic control differential games for jump diffusions. Stochastics, 85, no. 1, (2013), 85–97. Ballestra, L.V., Cecere, L. A numerical method to compute the volatility of the fractional Brownian motion implied by American options. Int. J. Appl. Math., 26, no. 2, (2013), 203–220.





Ballestra, L.V., Pacelli, G. Pricing European and American options with two stochastic factors: a highly efficient radial basis function approach. J. Econom. Dynam. Control, 37, no. 6, (2013), 1142–1167. Baviera, R., Giada, L. A perturbative approach to Bermudan options pricing with applications. Quant. Finance, 13, no. 2, (2013), 255–263. Bayer, C., Hoel, H., von Schwerin, E., Tempone, R. On nonasymptotic optimal stopping criteria in Monte Carlo simulations. SIAM J. Sci. Comput., 36, no. 2, (2014), A869– A885. Bayraktar, E., Xing, H. Regularity of the optimal stopping problem for jump diffusions. SIAM J. Control Optim., 50, no. 3, (2012), 1337–1357. Bayraktar, E., Zhou, Z. On controller-stopper problems with jumps and their applications to indifference pricing of American options. SIAM J. Financial Math., 5, no. 1, (2014), 20–49. Belomestny, D. Solving optimal stopping problems via empirical dual optimization. Ann. Appl. Probab., 23, no. 5, (2013), 1988–2019. Ben A.M., Kebaier, A.M. Monte Carlo for Asian options and limit theorems. Monte Carlo Methods Appl., 20, no. 3, (2014), 181–194. Benothman, L., Trabelsi, F. Asymptotic analysis of European and American options with jumps in the underlying. Int. J. Math. Oper. Res., 4, no. 5, (2012), 548–585. Benth, F.E., Crisan, D., Guasoni, P., Manolarakis, K., Muhle-Karbe, J., Nee, C., Protter, P. (Eds.) Paris-Princeton Lectures on Mathematical Finance 2013. Lecture Notes in Mathematics, 2081, Springer, Cham, 2013, ix+316 pp. Bernhard, P., Engwerda, J.C., Roorda, B., Schumacher, J.M., Kolokoltsov, V., SaintPierre, P., Aubin, J.P. The Interval Market Model in Mathematical Finance: GameTheoretic Methods. Static & Dynamic Game Theory: Foundations & Applications. Birkhäuser, New York, 2013, xiv+346 pp. Bohner, M., Sánchez, F.H.M., Rodríguez, S. European call option pricing using the Adomian decomposition method. Adv. Dyn. Syst. Appl., 9, no. 1, (2014), 75–85. Boyarchenko, M., Levendorski˘ı, S. (2013a) Efficient Laplace inversion, Wiener-Hopf factorization and pricing lookbacks. Int. J. Theor. Appl. Finance, 16, no. 3, (2013), 1350011, 40 pp. Boyarchenko, S., Levendorski˘ı, S. (2013b) American options in the Heston model with stochastic interest rate and its generalizations. Appl. Math. Finance, 20, no. 1, (2013), 26–49. Boyle, P.P., Kolkiewicz, A.W., Tan, K.S. Pricing Bermudan options using lowdiscrepancy mesh methods. Quant. Finance, 13, no. 6, (2013), 841–860. Brandejsky, A., de Saporta, B., Dufour, F. Optimal stopping for partially observed piecewise-deterministic Markov processes. Stoch. Process. Appl., 123, no. 8, (2013), 3201–3238. Bronstein, A. L., Pagès, G., Portès, J. Multi-asset American options and parallel quantization. Methodol. Comput. Appl. Probab., 15, no. 3, (2013), 547–561. Cai, N., Kou, S. Pricing Asian options under a hyper-exponential jump diffusion model. Oper. Res., 60, no. 1, (2012), 64–77. Cai, N., Li, C., Shi, C. Closed-form expansions of discretely monitored Asian options in diffusion models. Math. Oper. Res., 39, no. 3, (2014), 789–822. Calin, O., Chang, D.C., Alshamary, B. Mathematical modelling and analysis of Asian options with stochastic strike price. Appl. Anal., 91, no. 1, (2012), 91–104. Campolongo, F., Jönsson, H., Schoutens, W. Quantitative Assessment of Securitisation Deals. Springer Briefs in Finance. Springer, Heidelberg, 2013. xxii+112 pp. Carr, P., Fisher, T., Ruf, J. On the hedging of options on exploding exchange rates. Finance Stoch., 18, no. 1, (2014), 115–144.





Carrada-Herrera, R., Grudsky, S., Palomino-Jiménez, C., Porter, R.M. Asymptotics of European double-barrier option with compound Poisson component. Commun. Math. Anal., 14, no. 2, (2013), 40–66. Cassagnes, A., Chen, Y., Ohashi, H. Path integral pricing of outside barrier Asian options. Physica A: Stat. Mech. Appl., 394, (2014), 266–276. Cen, Z., Le, A., Xu, A. (2013a) An alternating-direction implicit difference scheme for pricing Asian options. J. Appl. Math., (2013), Art. ID 605943, 8 pp. Cen, Z., Le, A., Xu, A. (2013b) Finite difference scheme with a moving mesh for pricing Asian options. Appl. Math. Comput., 219, no. 16, (2013), 8667–8675. Chakrabarty, A., Guo, X. Optimal stopping times with different information levels and with time uncertainty. In: Zhang, T., Zhou, X. (Eds.) Stochastic Analysis and Applications to Finance. Interdisciplinary Mathematical Sciences, 13, World Scientific, Hackensack, NJ, (2012), 19–38. Chang, H., Lu, Z., Chi, X. Large-scale parallel simulation of high-dimensional American option pricing. J. Algorithms Comput. Technol., 6, no. 1, (2012), 1–16. Cheang, G.H.L., Chiarella, C., Ziogas, A. The representation of American options prices under stochastic volatility and jump-diffusion dynamics. Quant. Finance, 13, no. 2, (2013), 241–253. Chen, F., Shen, J., Yu, H. A new spectral element method for pricing European options under the Black-Scholes and Merton jump diffusion models. J. Sci. Comput., 52, no. 3, (2012), 499–518. Chen, N., Liu, Y. American option sensitivities estimation via a generalized infinitesimal perturbation analysis approach. Oper. Res., 62, no. 3, (2014), 616–632. Chen, T., Joshi, M. Truncation and acceleration of the Tian tree for the pricing of American put options. Quant. Finance, 12, no. 11, (2012), 1695–1708. Chen, X., Cheng, H. Regularity of the free boundary for the American put option. Discr. Contin. Dyn. Syst., Ser. B, 17, no. 6, (2012), 1751–1759. Chen, X., Cheng, H., Chadam, J. Nonconvexity of the optimal exercise boundary for an American put option on a dividend-paying asset. Math. Finance, 23, no. 1, (2013), 169–185. Chen, X.. Song, Q., Yi, F., Yin, G. Characterization of stochastic control with optimal stopping in a Sobolev space. Automat. J. IFAC, 49, no. 6, (2013), 1654–1662. Chen, X., Yi, F. A problem of singular stochastic control with optimal stopping in finite horizon. SIAM J. Control Optim., 50, no. 4, (2012), 2151–2172. Cheng, X., Riedel, F. Optimal stopping under ambiguity in continuous time. Math. Financ. Econ., 7, no. 1, (2013), 29–68. Chiarella, C., Ziveyi, J. American option pricing under two stochastic volatility processes. Appl. Math. Comput., 224, (2013), 283–310. Chiarella, C., Ziveyi, J. Pricing American options written on two underlying assets. Quant. Finance, 14, no. 3, (2014), 409–426. Chow, Y.S., Robbins, H., Siegmund, D. Great Expectations: The Theory of Optimal Stopping. Houghton Mifflin Company, Boston, 1971, xii+140 pp. Christensen, S. Phase-type distributions and optimal stopping for autoregressive processes. J. Appl. Probab., 49, no. 1, (2012), 22–39. Christensen, S. A method for pricing American options using semi-infinite linear programming. Math. Finance, 24, no. 1, (2014), 156–172. Christensen, S., Irle, A. American options with guarantee – class of two-sided stopping problems. Stat. Risk Model., 30, no. 3, (2013), 237–254. Christensen, S., Salminen, P., Ta, B.Q. Optimal stopping of strong Markov processes. Stoch. Process. Appl., 123, no. 3, (2013), 1138–1159.






Company, R., Egorova, V.N., Jódar, L. Solving American option pricing models by the front fixing method: numerical analysis and computing. Abstr. Appl. Anal., (2014), Art. ID 146745, 9 pp. Company, R., Jódar, L., Fakharany, M. Positive solutions of European option pricing with CGMY process models using double discretization difference schemes. Abstr. Appl. Anal., (2013), Art. ID 517480, 11 pp. Cont, R., Deguest, R. Equity correlations implied by index options: estimation and model uncertainty analysis. Math. Finance, 23, no. 3, (2013), 496Ð530. Cont, R., Kokholm, T. A consistent pricing model for index options and volatility derivatives. Math. Finance, 23, no. 2, (2013), 248–274. Coqueret, G. Exotic Options, Infinitely Divisible Distributions and Lévy Processes Theoretical and Applied Perspectives. PhD Thesis, ESSEC Business School, Paris, 2012. 92 pp. Crocce, F., Mordecki, E. Explicit solutions in one-sided optimal stopping problems for one-dimensional diffusions. Stochastics, 86, no. 3, (2014), 491–509. Cui, Z. Dynamic Optimal Asset Allocation with Optimal Stopping. PhD Thesis, Boston University, 2013, 195 pp. Cui, Y., Liu, B. Comparative analysis of stock price simulation and its European-styled call options. Appl. Math. Inf. Sci., 6, no. 3S, (2012), 887–891. Dai, T.S., Chiu, C.Y. Pricing barrier stock options with discrete dividends by approximating analytical formulae. Quant. Finance, 14, no. 8, (2014), 1367–1382. Dai, Y.L., Liu, L.X. An actuarial option pricing approach to European fixed strike lookback call option. J. Quant. Econom., 30, no. 3, (2013), 81–86. Davis, G.A., Cairns, R.D. Good timing: the economics of optimal stopping. J. Econom. Dynam. Contr., 36, no. 2, (2012), 255–265. Dayanik, S., Egami, M. Optimal stopping problems for asset management. Adv. Appl. Probab., 44, no. 3, (2012), 655–677. de Innocentis, M., Levendorski˘ı, S. Pricing discrete barrier options and credit default swaps under Lévy processes. Quant. Finance, 14, no. 8, (2014), 1337–1365. Deng, D., Peng, C. New methods with capped options for pricing American options. J. Appl. Math., (2014), Art. ID 176306, 7 pp. Detemple, J., Tian, W., Xiong, J. An optimal stopping problem with a reward constraint. Finance Stoch., 16, no. 3, (2012), 423–448. Dia, E.H.A., Lamberton, D. Connecting discrete and continuous lookback or hindsight options in exponential Lévy models. Adv. in Appl. Probab., 43, no. 4, (2011), 1136–1165. Dingeç, K.D., Hörmann, W. Control variates and conditional Monte Carlo for basket and Asian options. Insur. Math. Econom., 52, no. 3, (2013), 421–434. Ding, D., Huang, N., Zhao, J. An efficient algorithm for Bermudan barrier option pricing. Appl. Math. J. Chinese Univ., Ser. B, 27, no. 1, (2012), 49–58. Dong, Y. European option pricing in fractional jump diffusion markets. J. Phys. Sci., 16, (2012), 75–84. Dong, Y. Existence and uniqueness of parabolic problem arising in exponential double barrier options. J. Quant. Econom., 30, no. 1, (2013), 81–88. Dong, Y., He, X.S. Geometric average Asian option pricing model in the masked data case. Math. Pract. Theory, 42, no. 22, (2012), 229–234. Du, K., Liu, G., Gu, G. A class of control variates for pricing Asian options under stochastic volatility models. Int. J. Appl. Math., 43, no. 2, (2013), 45–53. Dyrssen, H., Ekström, E., Tysk, J. Pricing equations in jump-to-default models. Int. J. Theor. Appl. Finance, 17, no. 3, (2014), 1450019, 13 pp. Eberlein, E., Glau, K. Variational solutions of the pricing PIDEs for European options in Lévy models. Appl. Math. 
Finance, 21, no. 5, (2014), 417–450.

Bibliography

[88] [89] [90] [91] [92]

[93] [94] [95] [96] [97] [98] [99] [100] [101]

[88] Eberlein, E., Madan, D.B. Unbounded liabilities, capital reserve requirements and the taxpayer put option. Quant. Finance, 12, no. 5, (2012), 709–724.
[89] Ekström, E., Hobson, D., Janson, S., Tysk, J. Can time-homogeneous diffusions produce any distribution? Probab. Theory Related Fields, 155, no. 3-4, (2013), 493–520.
[90] Ekström, E., Tysk, J. Comparison of two methods for superreplication. Appl. Math. Finance, 19, no. 2, (2012), 181–193.
[91] Elahi, Y., Aziz, M.I.A. Efficient pricings for binomial Asian option under fuzzy environment. Far East J. Math. Sci., 63, no. 1, (2012), 133–140.
[92] Elbeleze, A.A., Kiliçman, A., Taib, B.M. Homotopy perturbation method for fractional Black-Scholes European option pricing equations using Sumudu transform. Math. Probl. Eng., (2013), Art. ID 524852, 7 pp.
[93] El hajaji, A., Hilal, K., Serghini, A., Mermri, E.B. Pricing American bond options using a cubic spline collocation method. Bol. Soc. Parana. Mat., 32, no. 2, (2014), 189–208.
[94] Ekren, I., Touzi, N., Zhang, J. Optimal stopping under nonlinear expectation. Stoch. Process. Appl., 124, no. 10, (2014), 3277–3311.
[95] Elliott, R.J., Hamada, A.S. Option pricing using a regime switching stochastic discount factor. Int. J. Theor. Appl. Finance, 17, no. 3, (2014), 1450020, 26 pp.
[96] Elliott, R.J., Siu, T.K. Reflected backward stochastic differential equations, convex risk measures and American options. Stoch. Anal. Appl., 31, no. 6, (2013), 1077–1096.
[97] Elliott, R.J., Siu, T.K., Chan, L. On pricing barrier options with regime switching. J. Comput. Appl. Math., 256, (2014), 196–210.
[98] Emmerling, T.J. Perpetual cancellable American call option. Math. Finance, 22, no. 4, (2012), 645–666.
[99] Erick, T.A. Optimal stopping under model uncertainty and the regularity of lower Snell envelopes. Quant. Finance, 12, no. 6, (2012), 865–871.
[100] Étoré, P., Gobet, E. Stochastic expansion for the pricing of call options with discrete dividends. Appl. Math. Finance, 19, no. 3, (2012), 233–264.
[101] Fabozzi, F.J., Leccadito, A., Tunaru, R.S. Extracting market information from equity options with exponential Lévy processes. J. Econom. Dynam. Control, 38, (2014), 125–141.
[102] Fan, Y., Zhang, H. The pricing of Asian options in uncertain volatility model. Math. Probl. Eng., (2014), Art. ID 786391, 16 pp.
[103] Feng, L., Lin, X. Pricing Bermudan options in Lévy process models. SIAM J. Financ. Math., 4, no. 1, (2013), 474–493.
[104] Ferreiro-Castilla, A., Kyprianou, A.E., Scheichl, R., Suryanarayana, G. Multilevel Monte Carlo simulation for Lévy processes based on the Wiener-Hopf factorisation. Stoch. Process. Appl., 124, no. 2, (2014), 985–1010.
[105] Foroush Bastani, A., Ahmadi, Z., Damircheli, D. A radial basis collocation method for pricing American options under regime-switching jump-diffusion models. Appl. Numer. Math., 65, (2013), 79–90.
[106] Foschi, P., Pagliarani, S., Pascucci, A. Approximations for Asian options in local volatility models. J. Comput. Appl. Math., 237, no. 1, (2013), 442–459.
[107] Fu, Y., Zhang, J.Z., Weng, Z.N. Application of Monte Carlo variance reduction method in pricing arithmetic average foreign exchange Asian option. Math. Pract. Theory, 43, no. 8, (2013), 15–22.
[108] Funahashi, H., Kijima, M. An extension of the chaos expansion approximation for the pricing of exotic basket options. Appl. Math. Finance, 21, no. 2, (2014), 109–139.
[109] Gapeev, P.V., Rodosthenous, N. Perpetual American options in a diffusion model with piecewise-linear coefficients. Stat. Risk Model., 30, no. 1, (2013), 1–20.
[110] Gapeev, P.V., Rodosthenous, N. Optimal stopping problems in diffusion-type models with running maxima and drawdowns. J. Appl. Probab., 51, no. 3, (2014), 799–817.


[111] Geng, J., Navon, I.M., Chen, X. Non-parametric calibration of the local volatility surface for European options using a second-order Tikhonov regularization. Quant. Finance, 14, no. 1, (2014), 73–85. [112] Ghandehari, M.A.M., Ranjbar, M. European option pricing of fractional version of the Black-Scholes model: approach via expansion in series. Int. J. Nonlin. Sci., 17, no. 2, (2014), 105–110. [113] Gikhman, I.I., Skorokhod, A.V. Theory of Random Processes. 1. Probability Theory and Mathematical Statistics, Nauka, Moscow, 1971, 664 pp. (English edition: The Theory of Stochastic Processes. 1. Fundamental Principles of Mathematical Sciences, 210, and Classics in Mathematics, Springer, New York (1974, 2004)). [114] Gikhman, I.I., Skorokhod, A.V. Theory of Random Processes. 3. Probability Theory and Mathematical Statistics, Nauka, Moscow, 1975, 496 pp. (English edition: The Theory of Stochastic Processes. III. Fundamental Principles of Mathematical Sciences, 232, and Classics in Mathematics, Springer, Berlin (1979, 2007)). [115] Gikhman, I.I., Skorokhod, A.V. Stochastic Differential Equations and their Applications. Naukova Dumka, Kiev, 1982. 611 pp. [116] Glover, K., Peskir, G., Samee, F. The British Russian option. Stochastics, 83, no. 4-6, (2011), 315–332. [117] Goard, J. Exact solutions for a strike reset put option and a shout call option. Math. Comput. Modell., 55, no. 5-6, (2012), 1787–1797. [118] Golbabai, A., Ahmadian, D., Milev, M. Radial basis functions with application to finance: American put option under jump diffusion. Math. Comput. Modell., 55, no. 3-4, (2012), 1354–1362. [119] Götz, B., Escobar, M., Zagst, R. Closed-form pricing of two-asset barrier options with stochastic covariance. Appl. Math. Finance, 21, no. 4, (2014), 363–397. [120] Guo, H., Li, K. The valuation of binary options on bonds. J. Nat. Sci. Nanjing Norm. Univ., 13, no. 2, (2012), 11–15. [121] Guo, Z.G., Kong, T., Li, P.F., Zhang, W. Numerical methods for pricing American options based on the optimal exercise boundary. J. Shandong Univ. Nat. Sci., 47, no. 3, (2012), 110–119. [122] Guo, Z., Yuan, H. Pricing European option under the time-changed mixed Brownianfractional Brownian model. Physica A: Stat. Mech. Appl., 406, (2014), 73–79. [123] Gutiérrez, Ó. American option valuation using first-passage densities. Quant. Finance, 13, no. 11, (2013), 1831–1843. [124] Guyon, J., Henry-Labordère, P. Nonlinear Option Pricing. Chapman & Hall/CRC Financial Mathematics Series, CRC Press, Boca Raton, FL, 2014, xxxviii+445 pp. [125] Hackmann, D., Kuznetsov, A. Asian options and meromorphic Lévy processes. Finance Stoch., 18, no. 4, (2014), 825–844. [126] Hadji, M.L., Remita, M.R. A numerical resolution of a European option value with a stochastic volatility. Appl. Math. Sci., 7, no. 17-20, (2013), 883–891. [127] Hariharan, G., Padma, S., Pirabaharan, P. An efficient wavelet based approximation method to time fractional Black-Scholes European option pricing problem arising in financial market. Appl. Math. Sci., 7, no. 69-72, (2013), 3445–3456. [128] Hepperger, P. Low-dimensional partial integro-differential equations for highdimensional Asian options. In: Kabanov, Y., Rutkowski, M., Zariphopoulou, T. (Eds.) Inspired by Finance, Springer, Cham, (2014), 331–348. [129] Higham, D.J. An introduction to financial option valuation. Mathematics, Stochastics and Computation. Cambridge University Press, Cambridge, 2004. xxii+273 pp. [130] Holmes, A.D., Yang, H. 
A front-fixing finite element method for the valuation of American put options on zero-coupon bonds. Int. J. Numer. Anal. Model., 9, no. 4, (2012), 777–792.


[131] Horiguchi, M., Piunovskiy, A.B. Optimal stopping model with unknown transition probabilities. Control Cybernet., 42, no. 3, (2013), 593–612. [132] Howison, S. Asymptotic approximations for Asian, European, and American options with discrete averaging or discrete dividend/coupon payments. SIAM J. Financial Math., 3, no. 1, (2012), 215–241. [133] Howison, S.D., Reisinger, C., Witte, J.H. The effect of nonsmooth payoffs on the penalty approximation of American options. SIAM J. Financ. Math., 4, no. 1, (2013), 539–574. [134] Huang, Y., Forsyth, P.A., Labahn, G. Inexact arithmetic considerations for direct control and penalty methods: American options under jump diffusion. Appl. Numer. Math., 72, (2013), 33–51. [135] Huang, W.L., Tao, X.X., Li, S.H. Pricing formulae for European options under the fractional Vasicek interest rate model. Acta Math. Sinica, 55, no. 2, (2012), 219–230. [136] Hu, W.W., Li, Z. Pricing value difference between barrier and vanilla options with binomial pricing method. J. Shanghai Jiaotong Univ., 46, no. 5, (2012), 825–831. [137] Huang, J., Cen, Z., Le, A. A finite difference scheme for pricing American put options under Kou’s jump-diffusion model. J. Funct. Spaces Appl., (2013), Art. ID 651573, 11 pp. [138] Ibrahim, S.N.I., O’Hara, J.G., Constantinou, N. Risk-neutral valuation of power barrier options. Appl. Math. Lett., 26, no. 6, (2013), 595–600. [139] Imai, J. Comparison of low discrepancy mesh methods for pricing Bermudan options under a Lévy process. Math. Comput. Simul., 100, (2014), 54–71. [140] Ivanov, R.V. Closed form pricing of European options for a family of normal-inverse Gaussian processes. Stoch. Models, 29, no. 4, (2013), 435–450. [141] Ivanov, R.V. On the optimal stopping problem for a composite Russian option. Avtomat. Telemekh., no. 8, (2010), 105–110 (English translation in Autom. Remote Contr., 71, no. 8, 1602–1607). [142] Ivanov, R.V., Shiryaev, A.N. On the duality principle for hedging strategies in diffusion models. Teor. Veroyatn. Primen., 56, no. 3, (2011), 417–448 (English translation in Theory Probab. Appl., 56, no. 3, 376–402). [143] Jain, S., Oosterlee, C.W. Pricing high-dimensional Bermudan options using the stochastic grid method. Int. J. Comput. Math., 89, no. 9, (2012), 1186–1211. [144] Jerbi, Y. Kharrat, M. Conditional expectation determination based on the J-process using Malliavin calculus applied to pricing American options. J. Stat. Comput. Simul., 84, no. 11, (2014), 2465–2473. [145] Jessen, C., Poulsen, R. Empirical performance of models for barrier option valuation. Quant. Finance, 13, no. 1, (2013), 1–11. [146] Jeunesse, M., Jourdain, B. Regularity of the American Put option in the Black-Scholes model with general discrete dividends. Stochastic Process. Appl., 122, no. 9, (2012), 3101–3125. [147] Jin, X., Li, X., Tan, H.H., Wu, Z. A computationally efficient state-space partitioning approach to pricing high-dimensional American options via dimension reduction. Eur. J. Oper. Res., 231, no. 2, (2013), 362–370. [148] Jönsson, H. Monte Carlo studies of American type options with discrete time. Theory Stoch. Process., 7(23), no. 1-2, (2001), 163–188. [149] Jönsson, H. (2005a) Convergence of reward functions in a discrete time optimal stopping problem: application to American type options. Research Report 2005-3, Department of Mathematics and Physics, Mälardalen University, 31 pp. [150] Jönsson, H. (2005b) Optimal Stopping Domains and Reward Functions for Discrete Time American Type Options. Doctoral Dissertation, No. 
22, Mälardalen University, vii+149 pp.


[151] Jönsson, H., Kukush, A.G., Silvestrov, D.S. Threshold structure of optimal stopping domains for American type options. Theory Stoch. Process., 8(24), no. 1-2, (2002), 169–177.
[152] Jönsson, H., Kukush, A.G., Silvestrov, D.S. Threshold structure of optimal stopping strategies for American type option I. Theor. Ĭmovirn. Mat. Stat., 71, (2004), 82–92. (English translation in Theory Probab. Math. Statist., 71, 93–103).
[153] Jönsson, H., Kukush, A.G., Silvestrov, D.S. Threshold structure of optimal stopping strategies for American type option II. Theor. Ĭmovirn. Mat. Stat., 72, (2005), 42–53. (English translation in Theory Probab. Math. Statist., 72, 47–58).
[154] Joshi, M., Staunton, M. On the analytical/numerical pricing of American put options against binomial tree prices. Quant. Finance, 12, no. 1, (2012), 17–20.
[155] Jun, D., Ku, H. Digital barrier option contract with exponential random time. IMA J. Appl. Math., 78, no. 6, (2013), 1147–1155.
[156] Jun, D., Ku, H. Analytic solution for American barrier options with two barriers. J. Math. Anal. Appl., 422, no. 1, (2015), 408–423.
[157] Kadalbajoo, M.K., Kumar, A., Tripathi, L.P. Application of radial basis function with L-stable Padé time marching scheme for pricing exotic option. Comput. Math. Appl., 66, no. 4, (2013), 500–511.
[158] Kandilarov, J.D., Ševčovič, D. Comparison of two numerical methods for computation of American type of the floating strike Asian option. In: Lirkov, I., Margenov, S., Wasniewski, J. (Eds.) Large-Scale Scientific Computing, Lecture Notes in Computer Science, 7116, Springer, Heidelberg, (2012), 558–565.
[159] Kato, T., Takahashi, A., Yamada, T. An asymptotic expansion formula for up-and-out barrier option price under stochastic volatility model. JSIAM Lett., 5, (2013), 17–20.
[160] Kato, T., Takahashi, A., Yamada, T. A Semigroup Expansion for Pricing Barrier Options. Int. J. Stoch. Anal., (2014), Art. ID 268086, 15 pp.
[161] Khaliq, A.Q.M., Kleefeld, B., Liu, R.H. Solving complex PDE systems for pricing American options with regime-switching by efficient exponential time differencing schemes. Numer. Meth. Part. Dif. Equ., 29, no. 1, (2013), 320–336.
[162] Khodja, L.Z., Chau, M., Couturier, R., Bahi, J., Spitéri, P. Parallel solution of American option derivatives on GPU clusters. Comput. Math. Appl., 65, no. 11, (2013), 1830–1848.
[163] Kijima, M. Stochastic Processes with Applications to Finance. Second edition. Chapman & Hall/CRC Financial Mathematics Series. CRC Press, Boca Raton, FL, 2013. xvi+327 pp.
[164] Kijima, M., Siu, C.C. On the first passage time under regime-switching with jumps. In: Kabanov, Y., Rutkowski, M., Zariphopoulou, T. (Eds.) Inspired by Finance, Springer, Cham, (2014), 387–410.
[165] Kim, B.J., Ahn, C., Choe, H.J. Direct computation for American put option and free boundary using finite difference method. Jpn. J. Ind. Appl. Math., 30, no. 1, (2013), 21–37.
[166] Kim, B., Wee, I.S. Pricing of geometric Asian options under Heston's stochastic volatility model. Quant. Finance, 14, no. 10, (2014), 1795–1809.
[167] Kim, I.J., Jang, B.G., Kim, K.T. A simple iterative method for the valuation of American options. Quant. Finance, 13, no. 6, (2013), 885–895.
[168] Kim, J., Jeong, D., Shin, D.H. A regime-switching model with the volatility smile for two-asset European options. Automatica J. IFAC, 50, no. 3, (2014), 747–755.
[169] Klimmek, M. Parameter dependent optimal thresholds, indifference levels and inverse optimal stopping problems. J. Appl. Probab., 51, no. 2, (2014), 492–511.
[170] Kohler, M., Walk, H. On data-based optimal stopping under stationarity and ergodicity. Bernoulli, 19, no. 3, (2013), 931–953.


[171] Kolesnik, A.D., Ratanov, N. Telegraph Processes and Option Pricing. Springer Briefs in Statistics, Springer, Heidelberg, 2013, xii+128 pp.
[172] Kruse, S., Müller, M. A summary on pricing American call options under the assumption of a lognormal framework in the Korn-Rogers model. Bull. Malays. Math. Sci. Soc., 35, no. 2A, (2012), 573–581.
[173] Kukush, A.G., Silvestrov, D.S. Optimal stopping strategies for American type options with discrete and continuous time. Theory Stoch. Process., 5(21), no. 1-2, (1999), 71–79.
[174] Kukush, A.G., Silvestrov, D.S. Structure of optimal stopping strategies for American type options. In: Uryasev, S. (Ed.) Probabilistic Constrained Optimization: Methodology and Applications. Nonconvex Optimization and Its Applications, 49, Kluwer, Dordrecht, 2000, 173–185.
[175] Kukush, A.G., Silvestrov, D.S. Optimal pricing of American type options with discrete time. Theory Stoch. Process., 10(26), no. 1-2, (2004), 72–96.
[176] Kumar, A., Tripathi, L.P., Kadalbajoo, M.K. A numerical study of European options under Merton's jump-diffusion model with radial basis function based finite differences method. Neural Parallel Sci. Comput., 21, no. 3-4, (2013), 293–304.
[177] Kyprianou, A., Ott, C. A capped optimal stopping problem for the maximum process. Acta Appl. Math., 129, (2014), 147–174.
[178] Lamberton, D., Mikou, M.A. Exercise boundary of the American put near maturity in an exponential Lévy model. Finance Stoch., 17, no. 2, (2013), 355–394.
[179] Lamberton, D., Zervos, M. On the optimal stopping of a one-dimensional diffusion. Electron. J. Probab., 18, no. 34, (2013), 49 pp.
[180] Le, A., Cen, Z., Xu, A. A robust upwind difference scheme for pricing perpetual American put options under stochastic volatility. Int. J. Comput. Math., 89, no. 9, (2012), 1135–1144.
[181] Leduc, G. A European option general first-order error formula. ANZIAM J., 54, no. 4, (2013), 248–272.
[182] Lee, M.K., Kim, J.H., Jang, K.H. Pricing arithmetic Asian options under hybrid stochastic and local volatility. J. Appl. Math., (2014), Art. ID 784386, 8 pp.
[183] Lee, J., Lee, Y. Tridiagonal implicit method to evaluate European and American options under infinite activity Lévy models. J. Comput. Appl. Math., 237, no. 1, (2013), 234–243.
[184] Lempa, J. Optimal stopping with random exercise lag. Math. Methods Oper. Res., 75, no. 3, (2012), 273–286.
[185] Lempa, J. Optimal stopping with information constraint. Appl. Math. Optim., 66, no. 2, (2012), 147–173.
[186] Lesmana, D.C., Wang, S. An upwind finite difference method for a nonlinear Black-Scholes equation governing European option valuation under transaction costs. Appl. Math. Comput., 219, no. 16, (2013), 8811–8828.
[187] Leung, T., Sircar, R., Zariphopoulou, T. Forward indifference valuation of American options. Stochastics, 84, no. 5-6, (2012), 741–770.
[188] Levendorskiĭ, S. Method of paired contours and pricing barrier options and CDSs of long maturities. Int. J. Theor. Appl. Finance, 17, no. 5, (2014), 1450033, 58 pp.
[189] Li, B. Look-Back Stopping Times and their Applications to Liquidation Risk and Exotic Options. PhD Thesis, University of Iowa, 2013. 172 pp.
[190] Li, H., Melnikov, A. On the polynomial-normal model and option pricing. In: Cohen, S.N., Madan, D., Siu, T.K., Yang, H. (Eds.) Stochastic Processes, Finance and Control. Advances in Statistics, Probability and Actuarial Science, 1, World Scientific, Hackensack, NJ, 2012, 285–302.
[191] Li, L., Linetsky, V. Optimal stopping and early exercise: an eigenfunction expansion approach. Oper. Res., 61, no. 3, (2013), 625–643.


[192] Li, L., Linetsky, V. Optimal stopping in infinite horizon: an eigenfunction expansion approach. Statist. Probab. Lett., 85, (2014), 122–128. [193] Li, J.Y., Kim, M.J., Kwon, R.H. A moment approach to bounding exotic options under regime switching. Optimization, 61, no. 10, (2012), 1253–1269. [194] Li, J., Shu, H., Kan, X. European option pricing with transaction costs in Lévy jump environment. Abstr. Appl. Anal. (2014), Art. ID 513496, 6 pp. [195] Li, Q., Zhou, Y., Zhao, X., Ge, X. Fractional order stochastic differential equation with application in European option pricing. Discr. Dyn. Nat. Soc., (2014), Art. ID 621895, 12 pp. [196] Li, W., Wang, S. A numerical method for pricing European options with proportional transaction costs. J. Global Optim., 60, no. 1, (2014), 59–78. [197] Li, Z.G., Kang, S.G. A finite volume element method for pricing American options. J. Systems Sci. Math. Sci., 32, no. 9, (2012), 1092–1108. [198] Liang, Y.J., Xu, C.L. Numerical method for American option pricing. Commun. Appl. Math. Comput., 27, no. 1, (2013), 101–113. [199] Lim, H., Lee, S., Kim, G. Efficient pricing of Bermudan options using recombining quadratures. J. Comput. Appl. Math., 271, (2014), 195–205. [200] Lin, J., Palmer, K. Convergence of barrier option prices in the binomial model. Math. Finance, 23, no. 2, (2013), 318–338. [201] Lin, T.J.C., Li, M.R., Lee, Y.S. Taiex index option model by using nonlinear differential equation. Math. Comput. Appl., 19, no. 1, (2014), 78–92. [202] Liping, S., Wanghui, Y. A free boundary problem coming from the perpetual American call options with utility. Europ. J. Appl. Math., 24, no. 2, (2013), 231–271. [203] Liu, W.G., Li, S.H. European option pricing model in a stochastic and fuzzy environment. Appl. Math. J. Chinese Univ., Ser. B, 28, no. 3, (2013), 321–334. [204] Liu, J., Wu, W., Xu, J., Zhao, H. An accurate binomial model for pricing American Asian option. J. Syst. Sci. Complex., 27, no. 5, (2014), 993–1007. [205] Llemit, D.G., Escaner, J.M.L., IV. Asymptotic expansion for the price of a UIP barrier option in a binomial tree model. Appl. Math. Sci., 6, no. 101-104, (2012), 5197–5205. [206] Lu, S.Q., Bao, S.X. Martingale analysis of the pricing of a class of exotic options in fractional Brownian motion environment. J. Hefei Univ. Technol. Nat. Sci., 36, no. 7, (2013), 875–878. [207] Ludkovski, M., Shen, Q. European option pricing with liquidity shocks. Int. J. Theor. Appl. Finance, 16, no. 7, (2013), 1350043, 30 pp. [208] Lundengård, K., Ogutu, C., Silvestrov, S., Weke, P. Asian options, jump-diffusion processes on a lattice, and Vandermonde matrices. In: Silvestrov, D., Martin-Löf, A. (Eds.) Modern Problems in Insurance Mathematics. European Actuarial Academy (EAA) Series, Springer, Cham, (2014), 335–364. [209] Lundgren, R. (2007a) Optimal stopping domains for discrete time knock out American options. In: Skiadas, C.H. (Ed.) Recent Advances in Stochastic Modelling and Data Analysis. World Scientific, Singapore, (2007), 613–620. [210] Lundgren, R. (2007b) Structure of optimal stopping domains for American options with knock out domains. Theory Stoch. Process., 13(29), no. 4, (2007), 98–129. [211] Lundgren, R. Convergence of Option Rewards. Doctoral Dissertation, No. 89, Mälardalen University, 2010, ix+163 pp. [212] Lundgren, R., Silvestrov, D. Convergence of option rewards for multivariate price processes. Research Report 2009:10, Department of Mathematics, Stockholm University, 2009, 53 pp. [213] Lundgren, R., Silvestrov, D. 
Optimal stopping and reselling of European options. In: Rykov, V., Balakrishan, N., Nikulin, M. (Eds.) Mathematical and Statistical Models and Methods in Reliability. Birkhäuser, Boston, (2011), 378–394.


[214] Lundgren, R., Silvestrov, D.S., Kukush, A.G. Reselling of options and convergence of option rewards. J. Numer. Appl. Math., 96(1), (2008), 149–172.
[215] Ma, Y., Li, Y. A uniform asymptotic expansion for stochastic volatility model in pricing multi-asset European options. Appl. Stoch. Models Bus. Ind., 28, no. 4, (2012), 324–341.
[216] MacLean, L.C., Zhao, Y., Ziemba, W.T. An endogenous volatility approach to pricing and hedging call options with transaction costs. Quant. Finance, 13, no. 5, (2013), 699–712.
[217] Madan, D.B. S&P 500 index option surface drivers and their risk neutral and real world quadratic covariations. In: Cohen, S.N., Madan, D., Siu, T.K., Yang, H. (Eds.) Stochastic Processes, Finance and Control. Advances in Statistics, Probability and Actuarial Science, 1, World Scientific, Hackensack, NJ, (2012), 317–345.
[218] Madan, D.B., Schoutens, W. Systemic risk tradeoffs and option prices. Insur. Math. Econom., 52, no. 2, (2013), 222–230.
[219] Mahato, A. The Inverse Volatility Problem for American Options. Ph.D. Thesis, University of Alabama, Birmingham, 2014, 153 pp.
[220] Makasu, C. Pareto optimality in a bicriterion optimal stopping problem. Sequent. Anal., 32, no. 4, (2013), 498–501.
[221] Makasu, C. (2014a) A bilevel programming approach to double optimal stopping. Appl. Math. Comput., 238, (2014), 393–396.
[222] Makasu, C. (2014b) Optimal stopping problem with a vector-valued reward function. Numer. Funct. Anal. Optim., 35, no. 6, (2014), 777–781.
[223] Marín, S.F.H., Vargas, Y.C., Pinzón Cardozo, M. Numerical comparison of pricing of European call options for mean reverting processes. Int. J. Res. Rev. Appl. Sci., 14, no. 2, (2013), 385–395.
[224] Martín-Vaquero, J., Khaliq, A.Q.M., Kleefeld, B. Stabilized explicit Runge-Kutta methods for multi-asset American options. Comput. Math. Appl., 67, no. 6, (2014), 1293–1308.
[225] Mei, B., Clutter, M.L. Valuing a timber harvest contract as a high-dimensional American call option via least-squares Monte Carlo simulation. Nat. Resour. Model., 26, no. 1, (2013), 111–129.
[226] Melnikov, A., Smirnov, I. Option pricing and CVaR hedging in the regime-switching telegraph market model. In: Silvestrov, D., Martin-Löf, A. (Eds.) Modern Problems in Insurance Mathematics. European Actuarial Academy (EAA) Series, Springer, Cham, (2014), 365–378.
[227] Memon, S. Finite element method for American option pricing: a penalty approach. Int. J. Numer. Anal. Model., Ser. B, 3, no. 3, (2012), 345–370.
[228] Mijatović, A., Pistorius, M. Continuously monitored barrier options under Markov processes. Math. Finance, 23, no. 1, (2013), 1–38.
[229] Milstein, G.N., Spokoiny, V. Construction of mean-self-financing strategies for European options under regime-switching. SIAM J. Financ. Math., 5, no. 1, (2014), 532–556.
[230] Mishura, Yu., Schmidli, H. Dividend barrier strategies in a renewal risk model with generalized Erlang interarrival times. N. Am. Actuar. J., 16, no. 4, (2012), 493–512.
[231] Mishura, Yu.S., Tomashik, V.V. An optimal stopping problem for random walks with polynomial reward functions. Theor. Ĭmovirn. Mat. Stat., 86, (2011), 138–149 (English translation in Theory Probab. Math. Statist., 86, 155–167).
[232] Miyahara, Y. Option Pricing in Incomplete Markets. Modeling Based on Geometric Lévy Processes and Minimal Entropy Martingale Measures. Series in Quantitative Finance, 3, Imperial College Press, London, 2012, xiv+185 pp.
[233] Moon, K.S., Kim, H. An improved binomial method for pricing Asian options. Commun. Korean Math. Soc., 28, no. 2, (2013), 397–406.


[234] Morozov, V.V., Khizhnyak, K.V. An upper bound on the value of an infinite American call option on two assets. Prikl. Mat. Inform., 39, (2011), 98–106 (English translation in Comput. Math. Model., 23, no. 4, 478–486).
[235] Morozov, V.V., Khizhnyak, K.V. An upper bound on the value of an infinite American call option on difference and sum of two assets. Prikl. Mat. Inform., 40, (2012), 61–69 (English translation in Comput. Math. Model., 24, no. 1, 54–64).
[236] Morozov, V.V., Muravei, D.L. A lower bound on the value of an infinite American call option on two assets. Prikl. Mat. Inform., 36, (2010), 99–106 (English translation in Comput. Math. Model., 23, no. 1, 79–87).
[237] Motsepa, T., Khalique, C.M., Molati, M. Group classification of a general bond-option pricing equation of mathematical finance. Abstr. Appl. Anal., (2014), Art. ID 709871, 10 pp.
[238] Moussi, A., Lidouh, A., Nqi, F.Z. Estimators of sensitivities of an Asian option: numerical analysis. Int. J. Math. Anal., 8, no. 17-20, (2014), 813–827.
[239] Moussi, A., Lidouh, A., Nqi, F.Z. Simulation of Asian options delta using a quadratic congruential pseudo-random numbers generator. Appl. Math. Sci., 7, no. 93-96, (2013), 4617–4629.
[240] Mudzimbabwe, W., Patidar, K.C., Witbooi, P.J. A reliable numerical method to price arithmetic Asian options. Appl. Math. Comput., 218, no. 22, (2012), 10934–10942.
[241] Muroi, Y., Yamada, T. Spectral binomial tree: new algorithms for pricing barrier options. J. Comput. Appl. Math., 249, (2013), 107–119.
[242] Nandakishore, L.V., Udayabaskaran, S. Pricing double barrier options with fluctuating volatility. Far East J. Math. Sci., 63, no. 1, (2012), 127–132.
[243] Nardon, M., Pianca, P. A behavioural approach to the pricing of European options. In: Corazza, M., Pizzi, C. (Eds.) Mathematical and Statistical Methods for Actuarial Sciences and Finance, Springer, Cham, (2014), 219–230.
[244] Neri, C., Schneider, L. A family of maximum entropy densities matching call option prices. Appl. Math. Finance, 20, no. 6, (2013), 548–577.
[245] Ngounda, E., Patidar, K.C., Pindza, E. Contour integral method for European options with jumps. Commun. Nonlinear Sci. Numer. Simul., 18, no. 3, (2013), 478–492.
[246] Novikov, A.A., Ling, T.G., Kordzakhia, N. Pricing of volume-weighted average options: analytical approximations and numerical results. In: Kabanov, Y., Rutkowski, M., Zariphopoulou, T. (Eds.) Inspired by Finance, Springer, Cham, (2014), 461–474.
[247] Nwozo, C.R., Fadugba, S.E. On the strength and accuracy of advanced Monte Carlo method for the valuation of American options. Int. J. Math. Comput., 25, no. 4, (2014), 26–41.
[248] Obłój, J., Ulmer, F. Performance of robust hedges for digital double barrier options. Int. J. Theor. Appl. Finance, 15, no. 1, (2012), 1250003, 34 pp.
[249] O'Hara, J.G., Sophocleous, C., Leach, P.G.L. Symmetry analysis of a model for the exercise of a barrier option. Commun. Nonlinear Sci. Numer. Simul., 18, no. 9, (2013), 2367–2373.
[250] Øksendal, B., Sulem, A. Singular stochastic control and optimal stopping with partial information of Itô-Lévy processes. SIAM J. Control Optim., 50, no. 4, (2012), 2254–2287.
[251] Øksendal, B., Sulem, A., Zhang, T. Singular control and optimal stopping of SPDEs, and backward SPDEs with reflection. Math. Oper. Res., 39, no. 2, (2014), 464–486.
[252] Ortiz-Gracia, L., Oosterlee, C.W. Robust pricing of European options with wavelets and the characteristic function. SIAM J. Sci. Comput., 35, no. 5, (2013), B1055–B1084.
[253] Oshanin, G., Schehr, G. Two stock options at the races: Black-Scholes forecasts. Quant. Finance, 12, no. 9, (2012), 1325–1333.


[254] O'Sullivan, C., O'Sullivan, S. Pricing European and American options in the Heston model with accelerated explicit finite differencing methods. Int. J. Theor. Appl. Finance, 16, no. 3, (2013), 1350015, 35 pp.
[255] Ott, C. Optimal stopping problems for the maximum process with upper and lower caps. Ann. Appl. Probab., 23, no. 6, (2013), 2327–2356.
[256] Pagès, G. Functional co-monotony of processes with applications to peacocks and barrier options. In: Donati-Martin, C., Lejay, A., Rouault, A. (Eds.) Séminaire de Probabilités XLV. Lecture Notes in Mathematics, 2078, Springer, Cham, (2013), 365–400.
[257] Pan, D., Zhou, S., Zhang, Y., Han, M. Asian option pricing with monotonous transaction costs under fractional Brownian motion. J. Appl. Math., (2013), Art. ID 352021, 6 pp.
[258] Patie, P. Asian options under one-sided Lévy models. J. Appl. Probab., 50, no. 2, (2013), 359–373.
[259] Pemy, M. Optimal stopping of Markov switching Lévy processes. Stochastics, 86, no. 2, (2014), 341–369.
[260] Peng, D.H. Pricing of perpetual American put options under fast diffusion processes. Acta Math. Sci., Ser. A, 32, no. 6, (2012), 1056–1062.
[261] Peskir, G., Samee, F. The British put option. Appl. Math. Finance, 18, no. 6, (2011), 537–563.
[262] Peskir, G., Samee, F. The British call option. Quant. Finance, 13, no. 1, (2013), 95–109.
[263] Pindza, E., Patidar, K.C. A comparative performance of exponential time differencing and implicit explicit methods for pricing European options under the Black-Scholes and Merton jump diffusion models. Rev. Bull. Calcutta Math. Soc., 21, no. 1, (2013), 51–70.
[264] Pindza, E., Patidar, K.C., Ngounda, E. Robust spectral method for numerical valuation of European options under Merton's jump-diffusion model. Numer. Meth. Part. Diff. Equ., 30, no. 4, (2014), 1169–1188.
[265] Pindza, E., Patidar, K.C., Ngounda, E. Implicit-explicit predictor-corrector methods combined with improved spectral methods for pricing European style vanilla and exotic options. Electron. Trans. Numer. Anal., 40, (2013), 268–293.
[266] Presman, E. Solution of the optimal stopping problem for one-dimensional diffusion based on a modification of the payoff function. In: Shiryaev, A.N., Varadhan, S.R.S., Presman, E.L. (Eds.) Prokhorov and Contemporary Probability Theory. Springer Proceedings in Mathematics & Statistics, 33, Springer, Heidelberg, (2013), 371–403.
[267] Presman, E. Solution of optimal stopping problem based on a modification of payoff function. In: Kabanov, Y., Rutkowski, M., Zariphopoulou, T. (Eds.) Inspired by Finance, Springer, Cham, (2014), 505–517.
[268] Popovic, R., Goldsman, D. On valuing and hedging European options when volatility is estimated directly. European J. Oper. Res., 218, no. 1, (2012), 124–131.
[269] Pun, C.S., Wong, H.Y. CEV asymptotics of American options. J. Math. Anal. Appl., 403, no. 2, (2013), 451–463.
[270] Quenez, M.C., Sulem, A. Reflected BSDEs and robust optimal stopping for dynamic risk measures with jumps. Stoch. Process. Appl., 124, no. 9, (2014), 3031–3054.
[271] Ragni, S. Rational Krylov methods in exponential integrators for European option pricing. Numer. Linear Algebra Appl., 21, no. 4, (2014), 494–512.
[272] Rana, U.S., Ahmad, A. Numerical solution of European call option with dividends and variable volatility. Appl. Math. Comput., 218, no. 11, (2012), 6242–6250.
[273] Rehman, N., Hussain, S., Shashiashvili, M. Continuity estimate of the optimal exercise boundary with respect to volatility for the American foreign exchange put option. J. Prime Res. Math., 8, (2012), 85–94.
[274] Reisinger, C., Witte, J.H. On the use of policy iteration as an easy way of pricing American options. SIAM J. Financial Math., 3, no. 1, (2012), 459–478.


[275] Rodrigo, M.R. Approximate ordinary differential equations for the optimal exercise boundaries of American put and call options. Europ. J. Appl. Math., 25, no. 1, (2014), 27–43.
[276] Rosazza, G.E., Sgarra, C. Mathematical Finance: Theory Review and Exercises. From Binomial Model to Risk Measures. Springer, Cham, 2013, x+285 pp.
[277] Roşca, N.C., Roşca, A.V. Applications of a combined Monte Carlo and quasi-Monte Carlo method to pricing barrier options. Acta Univ. Apulensis Math. Inform., 28, (2011), 71–86.
[278] Rüfenacht, N. Implicit Embedded Options in Life Insurance Contracts. Contributions to Management Science, Springer, Heidelberg, 2012, xxvi+170 pp.
[279] Salmi, S., Toivanen, J., von Sydow, L. An IMEX-scheme for pricing options under stochastic volatility models with jumps. SIAM J. Sci. Comput., 36, no. 5, (2014), B817–B834.
[280] Sánchez, F.H.M., Olivares, M.B. Numerical solution of pricing of European call option with stochastic volatility. Int. J. Res. Rev. Appl. Sci., 13, no. 3, (2012), 666–677.
[281] Scullard, M. The Russian Option in a Jump-Diffusion Model. PhD Thesis, University of California, San Diego, 2011, 99 pp.
[282] Seaquist, T.W. Optimal Stopping for Markov Modulated Itô-Diffusions with Applications to Finance. PhD Thesis, University of Texas at Arlington, 2013, 90 pp.
[283] Serghini, A., El Hajaji, A., Mermri, E.B., Hilal, K. Pricing of European options using a cubic spline collocation method. Int. J. Appl. Math. Stat., 47, no. 17, (2013), 15–28.
[284] Shen, M.X., He, C.L. Geometric average Asian option pricing in fractional Brownian environment. J. Shandong Univ. Nat. Sci., 48, no. 3, (2013), 48–52.
[285] Shen, Q. European Option Pricing in Illiquid Markets. PhD Thesis, University of California, Santa Barbara, 2012, 203 pp.
[286] Sheu, Y.C., Tsai, M.Y. On optimal stopping problems for matrix-exponential jump-diffusion processes. J. Appl. Probab., 49, no. 2, (2012), 531–548.
[287] Shi, G.P., Zhou, S.W. A binomial tree method for European option pricing with jump diffusion. Math. Theory Appl., 32, no. 1, (2012), 19–26.
[288] Shi, Q., Yang, X. Pricing Asian options in a stochastic volatility model with jumps. Appl. Math. Comput., 228, (2014), 411–422.
[289] Shiryaev, A.N. Sequential Analysis. Optimal Stopping Rules. Nauka, Moscow, 1976, 272 pp (English editions: Optimal Stopping Rules. Springer, New York, 1978 and Stochastic Modelling and Applied Probability, 8, Springer, Berlin, 2008).
[290] Silvestrov, D.S. Limit Theorems for Composite Random Functions. Vysshaya Shkola and Izdatelstvo Kievskogo Universiteta, Kiev, 1974, 318 pp.
[291] Silvestrov, D.S. Limit Theorems for Randomly Stopped Stochastic Processes. Probability and Its Applications, Springer, London, 2004, xiv+398 pp.
[292] Silvestrov, D.S. American-Type Options. Stochastic Approximation Methods. Vol. 1. De Gruyter Studies in Mathematics, 56, De Gruyter, Berlin, 2014, x+509 pp.
[293] Silvestrov, D., Jönsson, H., Stenberg, F. Convergence of option rewards for Markov type price processes modulated by stochastic indices I. Theor. Ĭmovirn. Mat. Stat., 79, (2008), 149–165 (English translation in Theory Probab. Math. Statist., 79, 153–170).
[294] Silvestrov, D., Jönsson, H., Stenberg, F. Convergence of option rewards for Markov type price processes modulated by stochastic indices II. Theor. Ĭmovirn. Mat. Stat., 80, (2009), 138–155 (English translation in Theory Probab. Math. Statist., 80, 153–172).
[295] Silvestrov, D., Li, Y. Lattice approximation methods for American type options. Research Report 2013:02, Department of Mathematics, Stockholm University, Sweden, 2013, 36 pp.
[296] Silvestrov, D., Li, Y. Stochastic approximation methods for American type options. Comm. Statist. Theory, Methods, (2015) (Forthcoming).


[297] Silvestrov, D., Lundgren, R. Convergence of option rewards for multivariate price processes. Theor. Ĭmovirn. Mat. Stat., 85, (2011), 102–116 (English translation in Theory Probab. Math. Statist., 85, 115–131).
[298] Silvestrov, D., Manca, R., Silvestrova, E. Computational algorithms for moments of accumulated Markov and semi-Markov rewards. Comm. Statist. Theory, Meth., 43, no. 7, (2014), 1453–1469.
[299] Sinel'nikov, S.S. On optimal stopping for Brownian motion with a negative drift. Teor. Veroyatn. Primen., 56, no. 2, (2011), 391–398 (English translation in Theory Probab. Appl., 56, no. 2, 343–350).
[300] Singh, V.K. Competency of Monte Carlo and Black-Scholes in pricing Nifty index options: a vis-à-vis study. Monte Carlo Methods Appl., 20, no. 1, (2014), 61–76.
[301] Šiška, D. Error estimates for approximations of American put option price. Comput. Methods Appl. Math., 12, no. 1, (2012), 108–120.
[302] Skorokhod, A.V. Random Processes with Independent Increments. Probability Theory and Mathematical Statistics, Nauka, Moscow, 278 pp (English edition: Random Processes with Independent Increments. Nat. Lending Library for Sci. and Tech., Boston Spa, 1971).
[303] Sonin, I.M. Optimal stopping of seasonal observations and projection of a Markov chain. In: Kabanov, Y., Rutkowski, M., Zariphopoulou, T. (Eds.) Inspired by Finance, Springer, Cham, (2014), 535–543.
[304] Stenberg, F. Semi-Markov Models for Insurance and Finance. Doctoral Dissertation, No. 38, Mälardalen University, 2006, vii+173 pp.
[305] Stettner, Ł. On general optimal stopping problems using penalty method. Demonstratio Math., 45, no. 2, (2012), 309–323.
[306] Stockbridge, R.H. Discussion of dynamic programming and linear programming approaches to stochastic control and optimal stopping in continuous time. Metrika, 77, no. 1, (2014), 137–162.
[307] Sun, Y.D., Shi, Y.M. Pricing European options on the modified Black-Scholes model. Appl. Math. J. Chinese Univ., Ser. A, 27, no. 1, (2012), 23–32.
[308] Sun, Y.D., Shi, Y.M., Dong, Y. A finite volume element method for perpetual American options. Appl. Math. J. Chinese Univ., Ser. A, 27, no. 3, (2012), 253–264.
[309] Sun, Y., Shi, Y., Gu, X. An integro-differential parabolic variational inequality arising from the valuation of double barrier American option. J. Syst. Sci. Complex., 27, no. 2, (2014), 276–288.
[310] Sun, Y., Shi, Y., Wu, M. Second-order integro-differential parabolic variational inequalities arising from the valuation of American option. J. Inequal. Appl., (2014), 2014:8, 14 pp.
[311] Suzuki, A., Sawaki, K. The valuation of Russian options for double exponential jump diffusion processes. Asia-Pac. J. Oper. Res., 27, no. 2, (2010), 227–242.
[312] Swishchuk, A., Tertychnyi, M., Elliott, R. Pricing currency derivatives with Markov-modulated Lévy dynamics. Insur. Math. Econom., 57, (2014), 67–76.
[313] Swishchuk, A., Islam, S. Random Dynamical Systems in Finance. CRC Press, Boca Raton, FL, 2013, xviii+339 pp.
[314] Swishchuk, A., Islam, M.S. Normal deviation and Poisson approximation of a security market by the geometric Markov renewal processes. Comm. Statist. Theory Meth., 42, no. 8, (2013), 1488–1501.
[315] Tanaka, T. Discrete time optimal stopping problems with fractional rewards. J. Inf. Optim. Sci., 35, no. 3, (2014), 291–306.
[316] Tapiero, O.J. A maximum (non-extensive) entropy approach to equity options bid-ask spread. Physica A: Stat. Mech. Appl., 392, no. 14, (2013), 3051–3060.


[317] Thakoor, N., Tangman, D.Y., Bhuruth, M. Efficient and high accuracy pricing of barrier options under the CEV diffusion. J. Comput. Appl. Math., 259, Part A (2014), 182–193. [318] Thavaneswaran, A., Appadoo, S.S., Frank, J. Binary option pricing using fuzzy numbers. Appl. Math. Lett., 26, no. 1, (2013), 65–72. [319] Trabelsi, F. Quadratic hedging and two-colours Rainbow American options. IAENG Int. J. Appl. Math., 42, no. 3, (2012), 142–151. [320] Vecer, J. Asian options on the harmonic average. Quant. Finance, 14, no. 8, (2014), 1315–1322. [321] Wang, L.P., Xu, Z.L., Ma, Q.H., Qiao, H.Y. Two kinds of finite difference schemes for pricing of American put options. Math. Pract. Theory, 42, no. 24, (2012), 33–38. [322] Wang, M., Bernstein, A., Chesney, M. An experimental study on real-options strategies. Quant. Finance, 12, no. 11, (2012), 1753–1772. [323] Wang, R. A new method for European option pricing with two stocks. J. Quant. Econom., 29, no. 2, (2012), 52–56. [324] Wang, X., Wang, Y. Hedging strategies for discretely monitored Asian options under Lévy processes. J. Ind. Manag. Optim., 10, no. 4, (2014), 1209–1224. [325] Wang, Z., Wang, L., Wang, D.S., Jin, Y. Optimal system, symmetry reductions and new closed form solutions for the geometric average Asian options. Appl. Math. Comput., 226, (2014), 598–605. [326] Wei, Y. Direct Analysis of Implied Volatility for European Options. PhD Thesis, Michigan State University. 2012. 165 pp. [327] Wen, X., Huo, H.F., Deng, G.H. Pricing of American barrier options in a fractional Brownian motion. J. Quant. Econ., 28, no. 3, (2011), 87–91. [328] Wong, T.W., Wong, H.Y. Stochastic volatility asymptotics of stock loans: valuation and optimal stopping. J. Math. Anal. Appl., 394, no. 1, (2012), 337–346. [329] Wu, H. Optimal stopping under g-expectation with constraints. Oper. Res. Lett., 41, no. 2, (2013), 164–171. [330] Wu, Q., Zhang, J.Z., Zhu, H.Y. Pricing of jump-diffusion European options with fixed ratio transaction cost. Math. Pract. Theory, 42, no. 1, (2012), 1–12. [331] Wu, X. Accurate numerical method for pricing two-asset American put options. J. Funct. Spaces Appl., (2013), Art. ID 189235, 7 pp. [332] Wu, X., Yang, W., Ma, C., Zhao, X. American option pricing under GARCH diffusion model: an empirical study. J. Syst. Sci. Complex., 27, no. 1, (2014), 193–207. [333] Xiu, D. Hermite polynomial based expansion of European option prices. J. Econometrics, 179, no. 2, (2014), 158–177. [334] Xu, Z.Q., Zhou, X.Y. Optimal stopping under probability distortion. Ann. Appl. Probab., 23, no. 1, (2013), 251–282. [335] Yam, S.C.P., Yung, S.P., Zhou, W. Game call options revisited. Math. Finance, 24, no. 1, (2014), 173–206. [336] Yan, D.Q., Yan, B.V. European option pricing for two jump-diffusion processes. Chinese J. Appl. Probab. Statist., 28, no. 2, (2012), 172–180. [337] Yang, H.J., Sun, G.P. Study on the stability of an artificial stock option market based on bidirectional conduction. Entropy, 15, no. 2, (2013), 700–720. [338] Yang, S.L. Numerical solutions for pricing double barrier options under jump diffusion. J. Quant. Econom., 28, no. 4, (2011), 86–89. [339] Yang, Z.J., Yi, H., Song, D.D., Yang, J.Q. Consumption utility-based indifference pricing of European options on nontradable underlying assets. J. Hunan Univ. Nat. Sci., 39, no. 12, (2012), 89–93. [340] Yao, N. Accurate pricing formulas for Asian options with jumps. J. Math., 33, no. 5, (2013), 819–824.


[341] Ye, F., Zhou, E. Optimal stopping of partially observable Markov processes: a filteringbased duality approach. IEEE Trans. Automat. Control, 58, no. 10, (2013), 2698–2704. [342] Ye, L., Song, G.F., Wang, J. Order models for call option of retailers based on risk appetite. J. Shanghai Univ. Nat. Sci., 18, no. 6, (2012), 656–660. [343] Yin, L., Han, L. Options strategies for international portfolios with overall risk management via multi-stage stochastic programming. Ann. Oper. Res., 206, (2013), 557–576. [344] Yue, T. Spectral Element Method for Pricing European Options and Their Greeks. PhD Thesis, Duke University. 2012, 272 pp. [345] Yoon, J.H., Kim, J.H. A closed-form analytic correction to the Black-Scholes-Merton price for perpetual American options. Appl. Math. Lett., 26, no. 12, (2013), 1146–1150. [346] Yoon, J.H., Kim, J.H., Choi, S.Y. Multiscale analysis of a perpetual American option with the stochastic elasticity of variance. Appl. Math. Lett., 26, no. 7, (2013), 670–675. [347] Yu, B.M. Two efficient parameterized boundaries for Večeř’s Asian option pricing PDE. Acta Math. Appl. Sin. Engl., 28, no. 4, (2012), 643–652. [348] Yousuf, M., Khaliq, A.Q.M. An efficient ETD method for pricing American options under stochastic volatility with nonsmooth payoffs. Numer. Meth. Part. Dif. Equ., 29, no. 6, (2013), 1864–1880. [349] Yu, X., Liu, Q. Canonical least-squares Monte Carlo valuation of American options: convergence and empirical pricing analysis. Math. Probl. Eng., (2014), Art. ID 763751, 13 pp. [350] Yu, X., Yang, L. Pricing American options using a nonparametric entropy approach. Discrete Dyn. Nat. Soc., (2014), Art. ID 369795, 16 pp. [351] Zanger, D.Z. Quantitative error estimates for a least-squares Monte Carlo algorithm for American option pricing. Finance Stoch., 17, no. 3, (2013), 503–534. [352] Zhai, Y., Bi, X., Zhang, S. Pricing barrier options under stochastic volatility framework. J. Syst. Sci. Complex., 26, no. 4, (2013), 609–618. [353] Zhang, B., Grzelak, L.A., Oosterlee, C.W. Efficient pricing of commodity options with early-exercise under the Ornstein-Uhlenbeck process. Appl. Numer. Math., 62, no. 2, (2012), 91–111. [354] Zhang, B., Oosterlee, C.W. Efficient pricing of European-style Asian options under exponential Lévy processes based on Fourier cosine expansions. SIAM J. Financ. Math., 4, no. 1, (2013), 399–426. [355] Zhang, B., Oosterlee, C.W. Pricing of early-exercise Asian options under Lévy processes based on Fourier cosine expansions. Appl. Numer. Math., 78, (2014), 14–30. [356] Zhang, C.E., Xu, Y. Pricing a binary option under fractional O-U process and stochastic interest rate. Math. Theory Appl., 32, no. 1, (2012), 27–31. [357] Zhang, J., Li, S. Maximal (minimal) conditional expectation and European option pricing with ambiguous return rate and volatility. Int. J. Approx. Reason., 54, no. 3, (2013), 393–403. [358] Zhang, K. Applying a power penalty method to numerically pricing American bond options. J. Optim. Theory Appl., 154, no. 1, (2012), 278–291. [359] Zhang, K., Teo, K.L. Convergence analysis of power penalty method for American bond option pricing. J. Global Optim., 56, no. 4, (2013), 1313–1323. [360] Zhang, K., Wang, S. Pricing American bond options using a penalty method. Automatica J. IFAC, 48, no. 3, (2012), 472–479. [361] Zhang, R., Song, H., Luan, N. Weak Galerkin finite element method for valuation of American options. Front. Math. China, 9, no. 2, (2014), 455–476. [362] Zhang, S.M., Wang, L.H. 
A fast Fourier transform technique for pricing European options with stochastic volatility and jump risk. Math. Probl. Eng., (2012), Art. ID 761637, 17 pp.


[363] Zhang, Y., Pan, D., Zhou, S.W., Han, M. Asian option pricing with transaction costs and dividends under the fractional Brownian motion model. J. Appl. Math., (2014), Art. ID 652954, 8 pp. [364] Zheng, N., Yin, J.F. (2013a) On the convergence of projected triangular decomposition methods for pricing American options with stochastic volatility. Appl. Math. Comput., 223, (2013), 411–422. [365] Zheng, N., Yin, J.F. (2013b) Modulus-based successive overrelaxation method for pricing American options. J. Appl. Math. Inform., 31, no. 5-6, (2013), 769–784. [366] Zhou, E. Optimal stopping under partial observation: near-value iteration. IEEE Trans. Automat. Control, 58, no. 2, (2013), 500–506. [367] Zhu, L.Z., Yu, J.W. Pricing of some Asian options with time-dependent parameters and fixed exercise prices. Math. Theory Appl., 32, no. 2, (2012), 14–19. [368] Zhu, S.P., Badran, A., Lu, X. A new exact solution for pricing European options in a two-state regime-switching economy. Comput. Math. Appl., 64, no. 8, (2012), 2744–2755. [369] Zhu, S.P., Chen, W.T., An inverse finite element method for pricing American options. J. Econom. Dynam. Control, 37, no. 1, (2013), 231–250. [370] Ziveyi, J., Blackburn, C., Sherris, M. Pricing European options on deferred annuities. Insur. Math. Econom., 52, no. 2, (2013), 300–311.

Index |~a| 164 (~a, ~b) 164 < Ω, F , P > 158, 487 < µi (·), σii (·), Πi (·, ·) > 209 164, 209 AR(p) 11 AR(p)/ARCH(p) 88 ARM A(p, q) 22 CIR(p) 88 Rd (x) 157 S (space): – N 166 – C[0,T ] 397, 410, 447 – D[0,T ] 216, 218, 234, 397, 410, 468 – D[t,T ] 218, 235 – DV 158 – DZ 158 – R+ 158 k – Rk 157 – V 158 – X 157 – Z 157 [s]+ 174 ∆ (first-type modulus of moment compactness): – ∆00 β (Yε,i (·), c, T ) 299 – ∆00 βi (Yε,i (·), c, T ) 355 – ∆0β (Yi (·), c, T ) 207 – ∆β (Yi (·), c, T ) 181 – ∆β (Yε,i (t· , Π) 310 ∆J (modulus of J-compactness): – ∆J (Xε (·), h, c, T ) 200 – ∆J (Yi (·), h, c, T ) 208 – ∆J (Yε,i (·), h, c, T ) 195 E~z,t 173, 181, 189 P~z,t 173, 181, 189 Φ (optimal expected reward): – Φ 251, 497 – Φ0 446 – Φ00 446 (ε) – Φ0 (Mmax,0,T ) 384 (ε)

– Φ0ε (MΠε ,0,Nε ) 389 – Φ0Πε ,ε 389, 390 –

˜ (ε) ) Φ(M Π,0,T

274

(0)

– Φ(Mmax,0,T ) 442, 472 (ε)

– Φ(M0,T ) 271 (ε)

– Φ(MΠ,0,T ) 457 (ε)

– Φ(Mmax,0,T ) 473 – Φ(Mmax,0,T ) 172, 472 (ε)

– ΦΠ,ε˙ 321, 339 – Φ0 5, 14, 26, 36, 48, 79, 92, 105, 117, 498 – Φε 8, 18, 30, 40, 54, 70, 83, 97, 110, 124, 139, 194, 393, 406, 450, 486, 498, 505, 514 (ε) – Φε (MΠ,0,N ) 319, 338, 345 (ε)

– Φε (MΠε ,0,Nε ) 390, 402, 423, 430, 434, 438 (ε)

– Φε (Mmax,0,N ) 8, 18, 30, 40, 54, 70, 83, 97, 110, 124, 139 (ε) – Φε (Mmax,0,T ) 393, 406 – ΦΠ,ε 320, 338, 344, 364 – ΦΠε ,ε 402, 423, 430, 434, 438, 482, 514 (ε) ˆ – Φ(M ) 461 Π,0,T

ˆ ε 460, 487 –Φ ˜ 251 –Φ (ε) ˜ – Φ(M Π,0,T ) 458 ˜ ε 458, 466 –Φ (ε)

– H Φ(ε) (Mmax,0,T ) 464 – H Φε 464 (ε) – ΦΠ,ε˙ 366 (ε)

– Φε (MΠ,0,N ) 364 Π (partition): – Π 271, 288 [t] – Πε 379, 430 – Πε 205, 214, 443, 444 – Πm 374 – Πtm 312 ˜ m 279 –Π Υα (Yε,i (·), c, T ) 195 Υα (Xε (·), c, T ) 200 Ξ (second-type modulus of moment compactness): – Ξ0± (Yε,i (·), c, T ) 211 β – Ξ0± (Y0,i (t· , Π) 348 β – Ξ0± (Yε,i (t· , Π) 330 β – Ξ± (Y0,i (t· , Π) 323 β – Ξ± (Yε,i (t· , Π) 310 β



– Ξ± (Yε,i (·), c, T ) 197 β Ξα (Xε (·), T ) 201 Ξα (Yε,i (·), T ) 198 ΞΠ (space-skeleton structure): – ΞΠ,ε˙ 320 ˆ0 –Ξ Π,ε˙ 345 ˆ Π,ε˙ 339 –Ξ ˆ Π,ε 337 –Ξ ˜ Π,ε˙ 366 –Ξ ln ~s 158 φ (reward function): (ε) (ε) – φˆ0 (MΠ,0,T , ~ y , x) 459 (ε) – φˆn (y) 487 – φ00 t (y) 446 0(ε)

– φt (~ y ) 384 – φ0t (y) 446 – φ0Πε ,ε,n (~ y ) 389 (0)

– φn (y) 497 (0) – φt (y) 442 (ε)

(ε)

(ε)

(ε)

– φ0 (MΠ,0,T , ~ y , x) 457 – φT (MΠ,0,T , ~ y , x) 457 – – – – – –

(ε) φn (y) 498, 505 (ε) φt (~ y ) 406, 480 (ε) φt (y) 393, 450 (ε) (ε) φt (Mt,T , ~ y , x) (ε) φΠ,ε,n y ) 339 ˙ (~ (ε) φΠ,ε,n (~ y , x) 321 ˙ (ε) φt (~ z ) 192

287

– – φn (y) 497 (ε) – φn (y) 486 – φt (~ y , x) 173 – φt (~ z ) 173 – φt (y) 251 – φ0,n (~ y ) 4, 13, 25, 77, 91 – φ0,n (~ y, ~ x) 47, 116, 131, 147 – φ0,n (~ y , x) 35 – φΠ,ε,n (~ y ) 338, 344, 364 – φΠ,ε,n (~ y , x) 319 – φΠε ,ε,n (~ y ) 423, 430, 434, 438, 482, 514 – φΠε ,ε,n (y) 402 – φε,n (~ y ) 7, 17, 29, 82, 97, 109 – φε,n (~ y, ~ x) 122, 153 – φε,n (~ y , x) 40, 53 – φn (~ y ) 104 (ε) (ε) – φ˜0 (MΠ,0,T , ~ y , x) 458 (ε) (ε) ˜ – φ (M ,~ y , x) 457 T

Π,0,T

(ε) – φ˜n (y) 491 (ε) ˜ – φt (~ y , x, e) 466 – φ˜t (y) 251 (ε) – H φt (~ y , x) 464 (ε)

– φΠ,ε,n y ) 366 ˙ (~ ρ 415 ¯ 4, 13, 25, 35, 47, 63, 78, 91, 104, 116, ~ β) A( 131, 147 dX (x0 , x00 ) 157 dZ (~ z 0, ~ z 00 ) 158 ey~ 158 h (skeleton function): ¯ ε,t (~ –h z ) 318 ˆ – h0ε,t (~ y ) 343 ˆ ε,t (~ –h y ) 337 ˆ ε (~ –h y ) 383 ˜ ε,t (~ –h y ) 363 x/y/z (skeleton point): –~ yε,¯l 383 –~ yε,t,¯l 337, 342, 363 –~ zε,t,¯l 318 – xε,t,l0 317 – yε,i,li 383 – yε,t,i,li 317, 336, 342, 363 Ak 4 B (σ-algebra): – B 322, 340, 346, 367 – Bk+ 158 – Bk 157 – BV 158 – BX 157 – BZ 157, 158 – Btn ,~y,tn+1 367 – Btn ,~z,tn+1 321 – Btn ,tn+1 340, 346 – BD[0,T ] 468 – Borel 157 F (filtration): – Ft00 447 – Ft0 447 0 – Fε,n 217 – FtS 159 – FtV 159 – FtY 159 – FtZ 159 – Ft 168, 171, 216, 231, 240, 442, 449, 452, 453, 471 Gκ 87


M (class of stopping times): ˆ (ε) –M Π,0,T 271 ˆ (ε) –M 288 Π,t,T

– M00 max,0,T 447 – M0max,0,T 447 (0)

– Mmax,0,T 471 (0)

– Mmax,k,n,N 77 (0) – Mmax,k,r,n,N 116

– M(0) max,n,N 3, 24, 35, 77, 497 (0)

– Mmax,p,n,N 13, 91 (0)

– Mmax,p,q,n,N 24, 104 (0)

– Mmax,p,q,r,n,N 62, 147 (0)

– Mmax,p,r,n,N 47, 131 (0)

– Mmax,t,T 442 (ε)

– MΠ,0,T 271 (ε)

– MΠ,n,N 319, 338, 343, 364 (ε)

– MΠ,tn ,T 272 (ε)

– MΠε ,n,Nε 389, 402, 423, 430, 434, 438, 481, 514 (ε) – Mmax,0,T 270, 457, 473 (ε)

– Mmax,n,N 7, 17, 29, 39, 53, 69, 82, 96, 109, 122, 153, 485, 498, 505 (ε) – Mmax,t,T 191, 393, 406, 464, 480, 514 (ε)

– Mt,T 287 – Mmax,0,T 172, 470, 471 – Mmax,n,N 497 – Mmax,t,T 173 A (skeleton set): ˆ0 –A 342 ε,t,¯ l ˆ ¯ 336 –A ε,t,l ˜ ¯ 363 –A ε,t,l

– Aε,¯l 383 – Aε,t,¯l 318 I (skeleton interval): – Iε,i,li 383 – Iε,t,i,li 317, 363 – Iε,t,i,l 336, 342 L (set of indices): 0 – ˆLε,t 342 ˜ – Lε,t 362 – ˆLε,t 336 – Lε,r 383 A (global moment condition): – A1 172

– A2 173 B (Lipschitz-type condition for pay-off functions): – B13 257 – B14 258 – B15 258 – B16 261 – B17 261 – B18 262 – B21 267 – B22 267 – B23 295 – B24 295 – B25 296 – B28 297 – B29 304 – B30 304 – B31 305 – B33 306 – B40 [¯ γ ] 380 – B41 [¯ γ ] 380 – B44 [¯ γ ] 382 – B45 [¯ γ ] 392 – B46 [¯ γ ] 392 – B49 [¯ γ ] 398 B (pay-off rate of growth condition): – B050 [¯ γ ] 460 – B1 [¯ γ] 3 – B2 13 – B3 24 – B4 [¯ γ ] 35 – B5 [¯ γ ] 47 – B6 [¯ γ ] 62 – B7 [¯ γ ] 116 – B8 174 – B9 191 – B10 212 – B11 237 – B12 251 – B19 264 – B20 264 – B26 296 – B27 296 – B32 305 – B34 310 ¯ 324 – B35 [β] – B36 [¯ γ ] 327 – B37 [¯ γ ] 330 ¯ 349 – B38 [β]




– B39 [¯ γ ] 353 – B42 [¯ γ ] 380 – B43 [¯ γ ] 380 – B47 [¯ γ ] 392 – B48 392 – B50 [¯ γ ] 458 – B51 [γ] 485 – B52 [γ] 486 – B53 [γ] 493 ˙ 8 174 –B C (condition of J-compactness): – C3 195 – C5 200 – C7 205 – C8 206 – C10 208 – C11 211 – C12 213 – C13 235 C (first-type condition of moment compactness): – C1 181 – C2 189 – C4 196 – C6 200 – C9 207 – C14 275 – C15 294 – C16 299 – C17 301 – C18 304 – C19 310 ¯ 323 – C20 [β] ¯ 327 – C21 [β] ¯ 330 – C22 [β] ¯ 348 – C23 [β] ¯ 353 – C24 [β] ¯ 355 – C25 [β] ¯ 369 – C26 [β] – C27 446 – C28 [β] 450 ¯ 458 – C29 [β] – C30 [β] 485 D (first-type condition of moment boundedness): ¯ 331 – D025 [β] ¯ 350 – D026 [β] ¯ 354 – D027 [β] ¯ 5 – D1 [β]

– D2 [β̄] 14 – D3 [β̄] 26 – D4 [β̄] 36 – D5 [β̄] 48 – D6 [β̄] 64

– D7 79 – D8 92 – D9 105 – D10 117 – D11 133 – D12 148 – D13 187 – D14 193 – D15 212 – D16 224 – D17 230 – D18 238 – D19 243 – D20 249 – D21 275 – D22 296 – D23 299 – D24 302 – D25 311 ¯ 325 – D26 [β] ¯ 328 – D27 [β] – D28 [¯ γ ] 381 – D29 [β] 392 – D30 [β] 393 – D31 405 – D32 439 – D33 446 – D34 [β] 450 ¯ 459 – D35 [β] – D36 [β] 486 E (second-type condition of moment boundedness): – E2 198 – E3 201 – E4 205 – E5 208 – E6 (t, i, αi ) 209 – E7 209 – E8 211 – E9 213 – E10 214 – E11 295 – E16 [α] 379 – E17 505


E (second-type condition of moment compactness): – E1 197 ¯ 310 – E12 [β] ¯ 324 – E13 [β] ¯ 330 – E14 [β] ¯ 348 – E15 [β] G (Lipschitz-type condition for drift and volatility): – G020 452 – G8 215 – G10 216 – G15 231 – G17 231 – G20 239 – G22 240 – G23 240 – G24 240 – G26 240 – G27 240 G (condition of drift and volatility boundedness): – G021 452 – G1 34 – G2 46 – G3 62 – G4 76 – G5 115 – G6 130 – G7 144 – G9 215 – G11 217 – G12 219 – G13 225 – G16 231 – G18 231 – G19 233 – G21 239 – G25 240 G (mean-reverse condition): – G028 452 – G28 242 G (model structural condition): – G14 227 – G29 449 I (continuity condition for pay-off functions): – I1 9 – I2 20 – I3 31


– I4 41 – I5 57 – I6 71 – I7 125 – I11 [Π] 324 – I12 327 – I16 [Π] 349 – I17 353 I (convergence condition for pay-off functions): – I8 [Π] 311 – I9 314 – I10 [Π] 321 – I13 [Π] 331 – I14 333 – I15 [Π] 340 – I18 360 – I19 372 – I20 377 – I21 437 – I22 461 J (continuity condition for transition distributions): – J9 [Π] 349 – J10 353 – J11 356 – J12 361 – J13 369 – J20 429 – J21 433 – J22 437 J (convergence condition for transition distributions): – J1 [Π] 311 – J2 314 – J3 [Π] 321 – J4 [Π] 324 – J5 327 – J6 [Π] 331 – J7 333 – J8 [Π] 340 – J12 [Π] 366 – J14 373 – J15 378 – J16 [β] 394 – J17 [β] 396 – J18 407 – J19 409 – J23 462 – J24 488



K (continuity condition for initial distribution): – K1 10 – K2 21 – K3 32 – K4 42 – K5 58 – K6 73 – K7 86 – K8 99 – K9 112 – K10 127 – K11 141 – K12 156 – K19 325 – K20 328 – K25 351 – K26 354 – K27 356 – K28 359 – K29 361 – K31 370 – K36 429 K (convergence condition for initial distributions): – K13 218 – K14 218 – K15 234 – K16 312 – K17 315 – K18 [Π] 322 – K21 332 – K22 334 – K23 [Π] 340 – K24 [Π] 346 – K30 [Π] 367 – K32 374 – K33 378 – K34 394 – K35 406 – K37 433 – K38 437 – K39 462 – K40 479 – K41 490 L (fitting condition for transition probabilities): – L1 318 – L2 [ε] ˙ 321 – L3 326 – L4 337

– L5 [ε] ˙ 340 – L6 342 – L7 [ε] ˙ 346 – L8 351 – L9 352 – L10 363 – L11 [ε] ˙ 366 – L12 368 M (fitting condition for initial distributions): – M1 318 – M2 [ε] ˙ 321 – M3 326 – M4 337 – M5 [ε] ˙ 340 – M6 343 – M7 [ε] ˙ 346 – M8 352 – M9 352 – M10 363 – M11 [ε] ˙ 366 – M12 368 N (structural skeleton condition): – N1 9 – N2 20 – N3 31 – N4 41 – N5 56 – N6 71 – N7 84 – N8 125 – N9 [Π] 321 – N10 [Π] 323 – N11 326 – N12 [Π] 340 – N13 [Π] 346 – N14 [Π] 348 – N15 [Π] 348 – N16 352 – N17 352 – N18 [Π] 367 – N19 378 – N20 [α] 384 – N21 396 – N22 403 – N23 408 – N24 419 – N25 419 – N26 424 – N27 439


O (continuity condition for Gaussian transition distributions): – O1 9 – O2 20 – O3 31 – O7 85 – O8 98 – O9 111 O (convergence condition for Gaussian transition distributions): – O4 41 – O5 57 – O6 72 – O10 126 – O11 140 – O12 155 O (convergence condition for diffusion approximations): – O13 217 – O14 218 Q (structural independence condition): – Q1 487 R (Lipschitz-type condition for variance function): – R2 419 R (max-correlation condition): – R1 415 – R3 421 – R4 481 – R5 484 S (structural pay-off condition): – S1 487 – S2 491 A Approximation – martingale-type 217 – space-skeleton 1 – time-skeleton 217 – time-skeleton reward 308 – time-space-skeleton reward 308 B Buyer – of option 171 C Coefficient – drift 2


– drift functional 451 – volatility 2 – volatility functional 451 – weighting 176–178 Condition – calibration 264 – Lipschitz-type 257 Cube – skeleton 6, 16, 27, 37, 50, 65, 66, 80, 95, 107, 119, 150, 151 D Distribution – compound Poisson-normal 345 – initial 7, 17, 29, 39, 53, 69, 82, 96, 109, 122, 153, 160, 319, 338, 343, 362, 364, 465 – jump 341, 343 – multivariate normal 2 – of increment 162, 207, 377, 388 – regular conditional 162 Domain – optimal stopping 484 – time-space knockout 464 E Equation – autoregressive stochastic difference 11 – modulated stochastic difference 43, 58, 113, 127 – stochastic difference 22, 75, 87, 100 – stochastic differential 215, 239 – stochastic integral 244 – vector modulated stochastic difference 33, 44 – vector stochastic difference 2, 12, 23, 60, 76 Expectation – conditional 173, 181, 189 F Filtration 216, 217, 231, 240 – natural 159, 442 Formula – Black–Scholes 470 – Itô 169, 241 Function – càdlàg 158 – characteristic 163 – knockout pay-off 466 – knockout reward 464



– nonlinear pay-off 177 – pay-off 3, 171 – reward 4, 7, 13, 17, 24, 35, 40, 47, 62, 77, 91, 97, 104, 109, 116, 122, 131, 147, 153, 173, 192, 251, 287, 319, 338, 344, 364, 384, 389, 393, 423, 430, 438, 442, 446, 450, 457, 459, 466, 480, 481, 486, 487, 498 – skeleton 6, 16, 28, 38, 51, 67, 80, 95, 107, 120, 151, 318, 337, 343, 363, 383, 402, 424, 439, 504 – transformation 177 – transformed pay-off 177, 251, 472 – vector skeleton 6, 16, 28, 38, 39, 51, 67, 80, 95, 107, 120, 151, 152, 318, 337, 343, 363, 383, 424 G Greeks 473 H Hyper-subspace – Euclidean 9, 20, 31, 85, 98, 111 I i.i.d. (independent identically distributed) 2 Index – stochastic 158 Inequality – Hölder 277 – Kolmogorov 183, 203 Interval – skeleton 6, 16, 27, 37, 49, 65, 80, 95, 107, 119, 150, 317, 336, 342, 363, 383, 504 M Markov chain – atomic autoregressive 1 – nonlinear autoregressive 137, 153 – skeleton atomic 6, 17, 29, 39, 52, 68, 81, 96, 108, 121, 153 – space-skeleton atomic 318, 337, 343, 363 Matrix – covariance 164 – stochastic 166 – symmetric 168 – unit 168 Maturity 171, 175–179, 470 Measure – Lebesgue 9, 20, 31, 85, 98, 111

Model
– autoregressive 11
– autoregressive moving average 22
– Schwartz 517
Modulator 161
Modulus
– of J-compactness 195
– of exponential moment compactness 181, 197, 207, 211, 299, 310, 323, 330, 348, 355
– of power moment compactness 195, 200
Moment
– Markov 172
– of jump 163
Motion
– bivariate Brownian 470
– Brownian 164
– exponential (geometric) Brownian 165
– geometric Brownian 469
– standard Brownian 164, 168, 469

N
Noise
– Gaussian 2, 11

O
Option
– American-type 171
– call 174
– digital American-type 511
– European call 470
– exchange of assets American-type 514
– knockout American-type 464
– put 174
– standard call 175
– standard put 175

P
Partition 182, 205, 214, 216, 271, 279, 288, 312, 374, 379, 430, 443, 444
– uniform 473
Pay-off (reward) 171, 172
Point
– maximal skeleton 8, 56, 71, 84, 125, 347
– minimal skeleton 8, 56, 71, 84, 125, 347
– skeleton 6, 16, 27, 37, 38, 50, 65, 66, 80, 95, 107, 119, 120, 150, 151, 317, 336, 337, 342, 363, 383
– vector skeleton 6, 16, 27, 50, 65, 66, 95, 107, 119, 150, 151, 318, 363, 383


Portfolio
– of options 176
Price
– market 470
– option 171
– strike 175–177, 470, 511
– theoretical 470
Probabilities
– conditional marginal transition 160, 161
– Gaussian transition 5, 15, 79, 94, 106, 118, 148
– one-step transition 7, 17, 29, 39, 52, 68, 81, 96, 108, 121, 153, 165
– transition 37, 121, 153, 160, 165, 316, 319, 336, 337, 362, 364, 465
Probability
– conditional 173, 181, 189
Process
– almost sure càdlàg 159
– autoregressive 1, 11
– autoregressive conditional heteroskedastic 87
– autoregressive moving average 1
– binomial-tree with independent increments 393
– binomial-trinomial-tree with independent increments 473
– bivariate exponential Gaussian 471
– compound Poisson-gamma 510
– diffusion 167, 169, 216, 231
– diffusion modulated by semi-Markov index 170
– exponential Lévy 165
– extended log-price 158
– extended price 158
– Gaussian with independent increments 164, 391, 405, 442
– generalized autoregressive conditional heteroskedastic 100
– Lévy 162, 163
– log-price 158
– Markov 160
– Markov Gaussian 2
– Markov renewal 165
– mean-reverse diffusion 449, 453–455
– mean-reverse Ornstein-Uhlenbeck 470
– modulated autoregressive 1, 43
– modulated autoregressive conditional heteroskedastic 127
– modulated autoregressive moving average 1
– modulated generalized autoregressive conditional heteroskedastic 142
– modulated Markov Gaussian log-price 33
– modulated nonlinear autoregressive stochastic volatility 113
– multivariate Markov Gaussian 1
– multivariate modulated Markov Gaussian 1
– nonlinear autoregressive stochastic volatility 75
– price 158
– semi-Markov 165
– step-wise 163
– time-skeleton approximation 205, 216, 217, 232, 355
– time-space-skeleton approximation 383, 429
– transformed diffusion 241
– transformed mean-reverse diffusion 454, 455
– trinomial-tree with independent increments 405
– Wiener 164, 231, 399, 412
– with independent increments 162
– with independent increments modulated by semi-Markov index 166

R
Random variable
– Gaussian (normal) 11
Random vector
– Gaussian (normal) 11, 22
Random walk 504
– compound Gaussian 508
– Gaussian 497
– multivariate modulated Gaussian 2
– trinomial 498
Rate
– accumulated risk-free interest 175–178
– risk-free interest 175–178, 469, 511
Relation
– consistency 412, 418
– stochastic transition dynamic 6, 28, 29, 39, 51, 67, 497, 504
– vector stochastic transition dynamic 16, 81
Representation
– Lévy–Khintchine 163, 208
Reselling
– of European option 471
Reward
– knockout optimal expected 464
– optimal expected 5, 14, 26, 36, 48, 64, 78, 92, 105, 117, 172, 194, 251, 270, 319, 338, 344, 384, 389, 442, 446, 450, 457, 458, 486, 487, 498
– random 464

S
Seller
– of option 171
Set
– cylindric 161
– index 6, 15, 27, 37, 49, 65, 80, 95, 107, 118, 119, 149, 150
– of continuity 321, 322, 367
– of vector indices 317, 336, 342, 362, 383
– skeleton 38, 51, 66, 120, 151, 317, 318, 336, 342, 363, 383
Space
– complete, separable, metric 37, 157
– Euclidean 157
– measurable 157
– of càdlàg functions 158, 216, 218, 397, 410
– of continuous functions 397, 410, 447
– phase 158
– Polish 37, 49, 157
– probability 158, 487

Structure
– additive space-skeleton 341
– fixed space-skeleton 335, 347
– space-skeleton 318, 320, 337, 339, 343, 345, 366
System
– of stochastic differential equations 167
– of stochastic difference equations 76

T
Theorem
– Ascoli–Arzelà 477
– Lebesgue 280, 282
– Radon–Nikodym 161
Time
– knockout stopping 464
– optimal stopping 173, 272
– random 159
– stopping 172
Triplet
– of characteristics 209

V
Volatility 470
– stochastic implied 470
