Markov Random Flights 2020039025, 2020039026, 9780367564940, 9781003098133


English | 407 pages | 2021


Table of contents:
Cover
Half Title
Series Page
Title Page
Copyright Page
Dedication
Contents
Preface
Introduction
1. Preliminaries
1.1. Markov processes
1.1.1. Brownian motion
1.1.2. Diffusion process
1.1.3. Poisson process
1.2. Random evolutions
1.3. Determinant theorem
1.4. Kurtz’s diffusion approximation theorem
1.5. Special functions
1.5.1. Bessel functions
1.5.2. Struve functions
1.5.3. Chebyshev polynomials
1.5.4. Chebyshev polynomials of two variables on Banach algebra
1.6. Hypergeometric functions
1.6.1. Euler gamma-function and Pochhammer symbol
1.6.2. Gauss hypergeometric function
1.6.3. Powers of Gauss hypergeometric function
1.6.4. General hypergeometric functions
1.7. Generalized functions
1.8. Integral transforms
1.8.1. Fourier transform
1.8.2. Laplace transform
1.9. Auxiliary lemmas
2. Telegraph Processes
2.1. Definition of the process and structure of distribution
2.2. Kolmogorov equation
2.3. Telegraph equation
2.4. Characteristic function
2.5. Transition density
2.6. Probability distribution function
2.7. Convergence to the Wiener process
2.8. Laplace transform of transition density
2.9. Moment analysis
2.9.1. Moments of the telegraph process
2.9.2. Asymptotic behaviour
2.9.3. Carleman condition
2.9.4. Generating function
2.9.5. Semi-invariants
2.10. Group symmetries of telegraph equation
2.11. Telegraph-type processes with several velocities
2.11.1. Uniform choice of velocities
2.11.2. Cyclic choice of velocities
2.12. Euclidean distance between two telegraph processes
2.12.1. Probability distribution function
2.12.2. Numerical example
2.13. Sum of two telegraph processes
2.13.1. Density of the sum of telegraph processes
2.13.2. Partial differential equation
2.13.3. Probability distribution function
2.13.4. Some remarks on the general case
2.14. Linear combinations of telegraph processes
2.14.1. Structure of distribution and system of equations
2.14.2. Governing equation
2.14.3. Sum and difference of two telegraph processes
3. Planar Random Motion with a Finite Number of Directions
3.1. Description of the model and the main result
3.2. Proof of the Main Theorem
3.2.1. System of equations and basic notations
3.2.2. Characters of a finite cyclic group and spectral decomposition of the unit matrix
3.2.3. Equivalent system of equations
3.2.4. Partial differential equation
3.3. Diffusion area
3.4. Polynomial representations of the generator
3.5. Limiting differential operator
3.6. Weak convergence to the Wiener process
4. Integral Transforms of the Distributions of Markov Random Flights
4.1. Description of process and structure of distribution
4.2. Recurrent integral relations
4.3. Laplace transforms of conditional characteristic functions
4.4. Conditional characteristic functions
4.4.1. Conditional characteristic functions in the plane R2
4.4.2. Conditional characteristic functions in the space R4
4.4.3. Conditional characteristic functions in the space R3
4.4.4. Conditional characteristic functions in arbitrary dimension
4.5. Integral equation for characteristic function
4.6. Laplace transform of characteristic function
4.7. Initial conditions
4.8. Limit theorem
4.9. Random flight with rare switchings
4.10. Hyperparabolic operators
4.10.1. Description of the problem
4.10.2. Governing equation
4.10.3. Random flights in low dimensions
4.10.4. Convergence to the generator of Brownian motion
4.11. Random flight with arbitrary dissipation function
4.12. Integral equation for transition density
4.12.1. Description of process and the structure of distribution
4.12.2. Recurrent relations
4.12.3. Integral equation
4.12.4. Some particular cases
5. Markov Random Flight in the Plane R2
5.1. Conditional densities
5.2. Distribution of the process
5.3. Characteristic function
5.4. Telegraph equation
5.5. Limit theorem
5.6. Alternative derivation of transition density
5.7. Moments
5.8. Random flight with Gaussian starting point
5.9. Euclidean distance between two random flights
5.9.1. Auxiliary lemmas
5.9.2. Main results
5.9.3. Asymptotics and numerical example
5.9.4. Proofs of theorems
6. Markov Random Flight in the Space R3
6.1. Characteristic function
6.2. Discontinuous term of distribution
6.3. Limit theorem
6.4. Asymptotic relation for the transition density
6.4.1. Auxiliary lemmas
6.4.2. Conditional characteristic functions
6.4.3. Asymptotic formula for characteristic function
6.4.4. Asymptotic formula for the density
6.4.5. Estimate of the accuracy
6.5. Fundamental solution to Kolmogorov equation
7. Markov Random Flight in the Space R4
7.1. Conditional densities
7.2. Distribution of the process
7.3. Characteristic function
7.4. Limit theorem
7.5. Moments
8. Markov Random Flight in the Space R6
8.1. Conditional densities
8.2. Distribution of the process
9. Applied Models
9.1. Slow diffusion
9.1.1. Preliminaries
9.1.2. Slow diffusion condition
9.1.3. Stationary densities in low dimensions
9.2. Fluctuations of water level in reservoir
9.3. Pollution model
9.4. Physical applications
9.4.1. Transport processes
9.4.2. Relativity effects
9.4.3. Cosmic microwave background radiation
9.5. Option pricing
Bibliography
Index


Markov Random Flights

Monographs and Research Notes in Mathematics

Series Editors: John A. Burns, Thomas J. Tucker, Miklos Bona, Michael Ruzhansky

About the Series
This series is designed to capture new developments and summarize what is known over the entire field of mathematics, both pure and applied. It will include a broad range of monographs and research notes on current and developing topics that will appeal to academics, graduate students, and practitioners. Interdisciplinary books appealing not only to the mathematical community, but also to engineers, physicists, and computer scientists are encouraged. This series will maintain the highest editorial standards, publishing well-developed monographs as well as research notes on new topics that are final, but not yet refined into a formal monograph. The notes are meant to be a rapid means of publication for current material where the style of exposition reflects a developing topic.

Spectral Geometry of Partial Differential Operators (Open Access)
Michael Ruzhansky, Makhmud Sadybekov, Durvudkhan Suragan

Linear Groups: The Accent on Infinite Dimensionality
Martyn Russel Dixon, Leonard A. Kurdachenko, Igor Yakov Subbotin

Morrey Spaces: Introduction and Applications to Integral Operators and PDE's, Volume I
Yoshihiro Sawano, Giuseppe Di Fazio, Denny Ivanal Hakim

Morrey Spaces: Introduction and Applications to Integral Operators and PDE's, Volume II
Yoshihiro Sawano, Giuseppe Di Fazio, Denny Ivanal Hakim

Tools for Infinite Dimensional Analysis
Jeremy J. Becnel

Semigroups of Bounded Operators and Second-Order Elliptic and Parabolic Partial Differential Equations
Luca Lorenzi, Abdelaziz Rhandi

Markov Random Flights
Alexander D. Kolesnik

For more information about this series please visit: https://www.crcpress.com/Chapman--HallCRCMonographs-and-Research-Notes-in-Mathematics/book-series/CRCMONRESNOT

Markov Random Flights

Alexander D. Kolesnik

Institute of Mathematics and Computer Science

First edition published 2021
by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742
and by CRC Press
2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

© 2021 Alexander D. Kolesnik

CRC Press is an imprint of Taylor & Francis Group, LLC

The right of Alexander D. Kolesnik to be identified as author of this work has been asserted by him in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact [email protected]

Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
Names: Kolesnik, Alexander D., author.
Title: Markov random flights / Alexander D. Kolesnik.
Description: First edition. | Boca Raton : Chapman & Hall/CRC Press, 2021. | Includes bibliographical references and index.
Identifiers: LCCN 2020039025 (print) | LCCN 2020039026 (ebook) | ISBN 9780367564940 (hardback) | ISBN 9781003098133 (ebook)
Subjects: LCSH: Random walks (Mathematics) | Markov processes.
Classification: LCC QA274.73 .K65 2021 (print) | LCC QA274.73 (ebook) | DDC 519.2/33--dc23
LC record available at https://lccn.loc.gov/2020039025
LC ebook record available at https://lccn.loc.gov/2020039026

ISBN: 9780367564940 (hbk)
ISBN: 9781003098133 (ebk)

Typeset in Computer Modern font by Cenveo Publisher Services

Dedicated to my parents


Preface

This book is the first systematic presentation of the theory of Markov random flights in Euclidean spaces of different dimensions. A Markov random flight is a stochastic dynamic system subject to the control of an external Poisson process and represented by the stochastic motion of a particle that moves at constant finite speed and changes its direction at random Poisson time instants. The initial and each new direction are taken at random according to some probability distribution on the unit sphere. The sample paths of Markov random flights are continuous broken lines composed of a finite number of randomly oriented segments of exponentially distributed lengths. The continuity of the trajectories of a Markov random flight is the main difference compared to the Lévy random flight, whose trajectories are discontinuous. The stochastic motions studied in the book are the basic model for describing many real finite-velocity transport phenomena arising in statistical physics, chemistry, biology, environmental science and financial markets.

The one-dimensional Markov random flight is represented by the classical Goldstein-Kac telegraph process, and its main characteristics (distribution, characteristic function, telegraph equation, limiting behaviour, moment function, etc.) are given in a separate chapter, whose first sections can also serve as a gradual and friendly introduction to the theory. For this purpose, a number of exercises are included in the chapter that will help the reader better master the material. Other sections of this chapter deal with some functionals of several telegraph processes, namely the sum of, and the Euclidean distance between, two independent telegraph processes, as well as linear combinations of several telegraph processes driven by a partial differential equation. The other chapters are devoted to studying Markov random flights in Euclidean spaces of higher dimensions.
The planar stochastic motions with a finite number and with a continuum of directions are thoroughly examined. Markov random flights in the Euclidean spaces R3, R4 and R6 are also studied in separate chapters. Surprisingly, the distributions of the symmetric Markov random flights in the even-dimensional spaces R2, R4 and R6 are obtained in explicit form and, moreover, in the spaces R2 and R4 the distributions have a very simple form in terms of elementary functions.

A unified general approach based on the powerful methods of mathematical analysis, such as integral transforms and generalized, hypergeometric and special functions, is developed. This approach enables us to effectively study such stochastic processes in any dimension, while in some low dimensions it leads to closed-form expressions for the distributions. Aside from their pure mathematical interest, the importance of such processes is determined by their numerous applications in various fields of science and technology.

An extremely important peculiarity of random flights is that they generate diffusion processes with a finite speed of propagation. The slow and super-slow diffusions are of particular interest because they play very important roles in modeling many slow transport processes, such as soil pollution, protein folding, etc. In the last chapter we give a slow diffusion condition under which Markov random flights generate a slow diffusion. Their stationary distributions in some spaces of low dimensions are presented, which enables a forecast of the behaviour of a slow diffusion process over a long time interval. Some applications of Markov random flights for modeling the hydrological balance of a reservoir, the


process of soil surface pollution from a stationary source, finite-velocity transport, relativistic properties of random flights, cosmic microwave background (CMB) radiation and option pricing are outlined.

We hope that this book will be of interest to a wide audience of specialists in the area of diffusion processes with finite speed of propagation and their applications, random walk models and transport processes in multidimensional spaces. We expect that the book will also be useful to students and postgraduates who are making their first steps in these intriguing and attractive fields.

Finally, I am thankful to my family for their ongoing support.

Alexander D. Kolesnik

Introduction

Description of the heat transfer process was one of the most urgent problems in mathematics and physics at the turn of the 19th and 20th centuries. In general terms, it can be described as follows. Let, at the initial time t = 0, there be a point source (unit charge) of heat concentrated at the origin 0 of the space R^m, m ≥ 1, and let the initial heat distribution ϕ(x), x ∈ R^m, in the space be known. At time t = 0, heat begins to spread from the source. Let f(x, t), x ∈ R^m, denote the temperature at the point x at time t > 0. The task is to find the function f(x, t).

The solution to this problem of heat transfer on the line R^1 was given in 1905 by A. Einstein [38] and, independently, one year later in 1906, by M. Smoluchowski [189]. They showed that the function f = f(x, t), x ∈ R^1, t > 0, satisfies the partial differential equation

    ∂f/∂t = a² ∂²f/∂x²,    (0.0.1)

and can be found by solving this equation with the initial condition f(x, t)|_{t=0} = ϕ(x). The constant a is an aggregated parameter determined by the properties of the medium. Equation (0.0.1) was called the heat equation.

As it turned out, this important result can easily be generalized to the case of heat transfer in the space R^m of arbitrary dimension m ≥ 2. In this case, the function f(x, t), x ∈ R^m, t > 0, satisfies an equation similar to (0.0.1), with the differential operator on the right-hand side replaced by the m-dimensional Laplacian.

Besides the solution of such an important problem itself, the method by which equation (0.0.1) was derived is of great interest. One can assert that the idea leading to the heat equation (0.0.1) is truly fundamental and universal. This original idea is to interpret heat transfer as a random walk process. It is based on the physical model of chaotic motion described by the naturalist R. Brown in 1827. Observing the smallest particles of flower pollen in water, R. Brown discovered an extremely irregular movement of these particles caused by their collisions with water molecules. The derivation of the heat equation (0.0.1) given by A. Einstein was based on just these experiments.

Suppose that heat transfer is carried out by some imaginary particles, each of which can carry an elementary 'piece' of heat. The particles are indistinguishable, their number N is large but finite, and the mass and size of each particle are zero. At the initial time instant t = 0, all the particles are concentrated at the source point (that is, at the origin 0). The particles begin moving simultaneously and independently undergo chaotic Brownian motion. The particles do not interact in any way. Changes in the direction of motion are interpreted as the result of a particle's collisions with some random obstacles. Besides the above, we make the following two important assumptions:

(i) Each particle's speed is infinite;

(ii) Per unit of time, the particle undergoes an infinite number of collisions that cause instantaneous changes in the direction of its motion.

Under such an interpretation of the heat transfer process, it is obvious that the temperature at some point x ∈ R^1 (more precisely, in a small neighbourhood dx of this point) at arbitrary time t > 0 is directly proportional to the number n_x of particles that are at time moment


t in this neighbourhood. The ratio n_x/N determines some probability distribution of the particles on the line that characterizes the distribution of heat at time t > 0. If the number of particles N is sufficiently large (that is, for N → ∞), this distribution fairly adequately reflects the real heat distribution on the line.

Another probabilistic interpretation of the heat transfer process can be given that does not explicitly take into account the number of particles. It is based on the obvious fact that, if the particles are indistinguishable and do not interact, then emitting many particles at once is stochastically equivalent to repeatedly emitting one single particle. Then the distribution of the number of particles that are in the neighbourhood dx at time t is equivalent to the distribution of the location of a single particle on the line and is the measure of all its trajectories ending in dx at time t.

Under the above assumptions, A. Einstein proved that the density p(x, t), x ∈ R^1, t > 0, of the Brownian particle's position on the line satisfies equation (0.0.1) and can be found by solving it with the initial condition p(x, t)|_{t=0} = δ(x), where δ(x) is the Dirac delta-function. The initial condition expresses that, at the initial time t = 0, the distribution is entirely concentrated at the origin 0 (the starting point). In the terminology of mathematical physics, this means that the density of the Brownian particle's position is the fundamental solution (the Green's function) of the heat equation (0.0.1).

It is difficult to overestimate the importance of this result from the point of view of its influence on the further development of mathematics and, especially, probability theory. While this profound idea enabled the reduction of a purely physical problem to a probabilistic one, it also served as a mathematical description of the Brownian motion, which laid a solid basis for intensive research in this direction.
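Einstein's particle-counting picture is easy to reproduce numerically. The sketch below is illustrative rather than anything from the book: the function names, the parameter values (N, a, t, the number of steps per path) and the observation window (x0, dx) are all my own choices. It emits N independent particles, each a sum of small Gaussian displacements with total variance 2a²t, counts the fraction n_x/N that land near a point x0, and compares the resulting density estimate with the fundamental solution of (0.0.1), the centred Gaussian with variance 2a²t.

```python
import math
import random

def brownian_positions(n_particles, a, t, n_steps=200, seed=3):
    """Final positions of independent 'heat-carrying' particles.

    Each particle makes n_steps independent Gaussian displacements whose
    total variance is 2*a**2*t, the variance of the fundamental solution
    of the heat equation (0.0.1)."""
    rng = random.Random(seed)
    step_sd = math.sqrt(2 * a * a * t / n_steps)
    return [sum(rng.gauss(0.0, step_sd) for _ in range(n_steps))
            for _ in range(n_particles)]

def fundamental_solution(x, a, t):
    """Green's function of (0.0.1): centred Gaussian with variance 2*a**2*t."""
    var = 2 * a * a * t
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

a, t, N = 1.0, 1.0, 4000
xs = brownian_positions(N, a, t)

# Einstein's ratio n_x / N over a small window around x0 approximates the
# probability mass there; dividing by the window width estimates p(x0, t).
x0, dx = 0.5, 0.2
n_x = sum(1 for x in xs if abs(x - x0) < dx / 2)
print(n_x / (N * dx), fundamental_solution(x0, a, t))
```

Shrinking dx and letting N grow reproduces the limiting argument in the text; with these modest values the two printed numbers already agree up to statistical noise.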
The turning point in the study of Brownian motion was the work by N. Wiener [214], in which, based on the previous works by R. Gâteaux and P. Lévy on functional analysis, Wiener managed to introduce a Gaussian measure in the space of continuous functions. This led to a rigorous axiomatic construction of an extremely important stochastic process, which is now called the Wiener process. Subsequent works by N. Wiener, P. Lévy, A.N. Kolmogorov, J. Doob, W. Feller, A.V. Skorohod, K. Itô and many other mathematicians developed this direction into a deep and substantial theory (see, for example, monograph [73] and the bibliography therein).

After the appearance of the Einstein-Smoluchowski heat equation (0.0.1), it and its multidimensional counterpart began to be actively used by mathematicians and physicists for describing various transport processes. It was noted that the theoretical calculations carried out using this equation are in good agreement with experimental data if the propagation velocity is sufficiently high. However, it was quickly discovered that the discrepancy between theoretical and experimental data is very significant if the process speed is low, and the lower the propagation speed, the greater the discrepancy. This showed that at low transfer speeds the heat equation (0.0.1) is ineffective.

This, however, is not unexpected if we recall that the heat equation (0.0.1) was derived under conditions (i) and (ii), assuming an infinite speed of propagation and an infinite intensity of collisions. In reality, of course, this cannot be, and both conditions (i) and (ii) can be considered fulfilled only approximately, at large speeds and intensities of collisions. Accordingly, the well-studied Wiener process in its pure form also does not exist in nature. The axiomatically constructed Wiener process is only a mathematical idealization of the real physical process observed by R. Brown.
The artificiality of conditions (i) and (ii) determines a number of very specific, even paradoxical properties of the Wiener process:
• Its sample paths are continuous, but nowhere differentiable (that is, the Wiener trajectory is a fractal object of Hausdorff-Besicovitch dimension 3/2);
• The length of any arbitrarily small portion of the path is infinite;
• The process instantly fills the entire line. This means that, after an arbitrarily small
time after the start, the particle with a positive probability can be in any arbitrarily small interval of the line located arbitrarily far from the start point. The same property is inherent in the Wiener process in the plane.
The first of these properties is determined by condition (ii). If the intensity of collisions is infinite, then the particle cannot have any free run and, therefore, its trajectory is nondifferentiable, although continuous. This also implies the second property, the infinite length of the trajectory. The third property follows from condition (i) concerning the infinite speed of propagation. Condition (i) also manifests itself in the type of equation (0.0.1). It is a parabolic equation and, as is well known, processes with an infinite speed of propagation are driven by parabolic equations.
The low efficiency of the parabolic heat equation (0.0.1) for describing transport processes at low speeds has encouraged numerous attempts to suggest other models in which the finite speed of propagation would be taken into account. The first such attempt was made by Pearson [167, 168], who considered the Rayleigh problem of random movements in a plane with a constant step and uniform choice of new directions. An important milestone was the work by V.A. Fock [43] on the one-dimensional diffusion of a light ray, in which he justified a new equation that differs from the heat equation (0.0.1) by the presence of a second time derivative. We note the main idea of this work. Adding the second time derivative to the left side of equation (0.0.1) was not just a formal action: it changed the type of the governing equation and thus led to a new class of transport equations. Such a modified equation was no longer parabolic but hyperbolic and, as is well known, hyperbolic equations describe processes with a finite speed of propagation.
In the following decades, a series of works [20, 22, 27, 28, 79, 135, 149–151, 205] appeared in which the modified equation was used to describe various heat and mass transfer processes and turbulent diffusion arising in physics, hydrology, meteorology, geophysics, gas dynamics and some other fields.
A true breakthrough in this area occurred in the middle of the 20th century, when the pioneering works by S. Goldstein [59] and M. Kac [84] appeared. In these two fundamental works, devoted to a very interesting model of random motion with finite speed on the line, Einstein's idea of interpreting transport as a process generated by random walks was again used and the mathematical justification of the hyperbolic transport equation was first given.
The Goldstein-Kac stochastic model is represented by a particle that, at the initial time t = 0, starts from the origin 0 ∈ R1 of the real line R1, taking the initial direction (positive or negative) at random with probability 1/2, and moves in the chosen direction with some constant finite speed c. The particle's motion is controlled by an external homogeneous Poisson process of rate λ > 0 as follows. When a Poisson event occurs, the particle instantaneously changes its direction to the opposite one and keeps moving at the same speed c until the next Poisson event occurs, then it again changes to the opposite direction, and so on.
Let X(t) denote the particle's position on the line at an arbitrary time instant t > 0 and let f(x, t) be the transition density of X(t). Since the speed c is finite, the process X(t), at arbitrary time t > 0, is entirely concentrated in the closed interval [−ct, ct]. Therefore, the interval [−ct, ct] is the support of the distribution of X(t).
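The dynamics just described is straightforward to simulate. The following minimal sketch (the values of c, λ and t are arbitrary illustrative choices, not taken from the text) draws samples of X(t) and checks two facts stated in this Introduction: the distribution lives on [−ct, ct], and the probability of never switching direction equals e^(−λt).

```python
import math
import random

random.seed(42)

def telegraph_sample(t, c=1.0, lam=2.0):
    """One sample of the Goldstein-Kac telegraph process X(t):
    start at 0, initial direction +/-1 with probability 1/2,
    reverse direction at each event of a Poisson process of rate lam,
    always moving at constant speed c."""
    x, elapsed = 0.0, 0.0
    direction = random.choice((-1.0, 1.0))
    while True:
        tau = random.expovariate(lam)        # exponential waiting time
        if elapsed + tau >= t:               # no more switches before t
            return x + direction * c * (t - elapsed)
        x += direction * c * tau
        direction = -direction
        elapsed += tau

c, lam, t = 1.0, 2.0, 1.5
samples = [telegraph_sample(t, c, lam) for _ in range(100_000)]

# The process is confined to the closed interval [-ct, ct] ...
assert all(abs(x) <= c * t + 1e-12 for x in samples)
# ... and the fraction of paths that never switched (|X(t)| = ct)
# matches P(no Poisson event in [0, t]) = exp(-lam * t):
at_edges = sum(abs(abs(x) - c * t) < 1e-9 for x in samples) / len(samples)
assert abs(at_edges - math.exp(-lam * t)) < 0.01
```

The fraction of samples sitting exactly at ±ct anticipates the singular component of the distribution discussed next.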
Structurally, the density f(x, t) has the form f(x, t) = f^(s)(x, t) + f^(ac)(x, t), where f^(s)(x, t) and f^(ac)(x, t) are the densities of the singular (with respect to the Lebesgue measure on the line) and absolutely continuous components of the distribution of X(t), respectively. The singular component of the distribution is concentrated at the two terminal points ±ct of the interval [−ct, ct]. It corresponds to the case when no Poisson events occur up to time t and, therefore, the particle does not change its initial direction (the probability of this event is e^(−λt)). Thus, the density of
the singular component of the distribution of X(t) (as a generalized function) has the form
$$f^{(s)}(x,t) = \frac{e^{-\lambda t}}{2}\,\bigl[\delta(ct+x)+\delta(ct-x)\bigr], \qquad x\in\mathbb{R}^1,\ t>0,$$

where δ(x) is the Dirac delta function.
The density f^(ac)(x, t) of the absolutely continuous component of the distribution of X(t) is of much more interest. This part of the distribution corresponds to the case when at least one Poisson event occurs up to time t and, therefore, the particle changes its initial direction. The support of this part of the distribution is the open interval (−ct, ct). Then the density f^(ac)(x, t) (as a generalized function), structurally, has the form f^(ac)(x, t) = p(x, t) Θ(ct − |x|), where p(x, t) is some positive function absolutely continuous in the interval (−ct, ct) and Θ(x) is the Heaviside unit-step function. It was shown by S. Goldstein [59] and M. Kac [84] that the function p = p(x, t), x ∈ (−ct, ct), t > 0, satisfies the hyperbolic partial differential equation with constant coefficients
$$\frac{\partial^2 p}{\partial t^2} + 2\lambda\frac{\partial p}{\partial t} - c^2\frac{\partial^2 p}{\partial x^2} = 0. \tag{0.0.2}$$
Equation (0.0.2) is referred to as the telegraph or damped wave equation. Sometimes it is also called the hyperbolic heat equation. By comparing equations (0.0.1) and (0.0.2), we see that they are distinguished by the presence in (0.0.2) of the second time derivative (besides the parameters c and λ). This confirms the correctness of, and gives the mathematical justification for, the equation suggested by V.A. Fock for describing processes with a finite speed of propagation. As noted above, the presence of the second time derivative changes the type of the transport equation (from parabolic (0.0.1) to hyperbolic (0.0.2)). This result is also remarkable for the fact that, for the first time, a connection was established between stochastic processes and hyperbolic partial differential equations (prior to this, such a connection was known only for parabolic and elliptic equations). This result can be reformulated in terms of generalized functions as follows.
The complete density f = f(x, t), x ∈ R1, t > 0, as a generalized function, satisfies the inhomogeneous telegraph equation
$$\frac{\partial^2 f}{\partial t^2} + 2\lambda\frac{\partial f}{\partial t} - c^2\frac{\partial^2 f}{\partial x^2} = \delta(x)\,\delta(t), \tag{0.0.3}$$
where the generalized function on the right-hand side of (0.0.3) represents an instant point-like source concentrated, at the initial time moment t = 0, at the origin 0 ∈ R1.
We note two of the most important differences of the Goldstein-Kac model from the Einstein-Smoluchowski heat transfer model (which, remember, is based on the artificial assumptions (i) and (ii)):
(i') The particle's speed is finite;
(ii') Per unit of time, the particle undergoes a finite number of collisions that cause instantaneous changes of the direction of its motion.
From (i') and (ii'), the properties of the Goldstein-Kac telegraph process X(t) that contrast sharply with the properties of the Wiener process obviously follow:
• The sample paths of X(t) are continuous and differentiable almost everywhere (i.e., they are piecewise broken lines);
• For any fixed t > 0, the length of any trajectory is finite and equal to ct;
• For any fixed t > 0, the distribution of X(t) is concentrated in the closed interval [−ct, ct] (that is, in a bounded subset of the line R1) and is zero outside this interval.
All this shows that the Goldstein-Kac telegraph process X(t) is a much more natural and adequate model for describing real heat and mass transport processes than the model with an infinite speed of propagation.
It was also noted by M. Kac that if both the speed c and the intensity of switchings λ tend simultaneously to infinity in such a way that the following condition holds:
$$c \to \infty, \qquad \lambda \to \infty, \qquad \frac{c^2}{\lambda} \to \rho^2, \tag{0.0.4}$$

(now called Kac's condition), then the telegraph process X(t) turns (in the weak sense) into the Wiener process. Indeed, dividing the telegraph equation (0.0.2) by 2λ and passing to the limit under the scaling condition (0.0.4), we get the heat equation (0.0.1) with parameter a² = ρ²/2.
This surprising phenomenon has a very simple and clear physical interpretation. If the intensity of switchings λ tends to infinity, then the particle's free run between collisions gets shorter and shorter (from the physical point of view, this means that the environment becomes increasingly saturated with obstacles). To compensate for such a decrease of the free run, the particle's speed should also increase. Kac's scaling condition fixes the precise balance: the speed c must grow like the square root of the intensity λ, so that the ratio c²/λ stays finite. Then, in the limit, we get a process whose sample paths are continuous but do not have any smooth pieces (that is, the particle does not have any free runs). But this is exactly the property of the sample paths of the Wiener process. Thus, under the scaling condition (0.0.4), the thermodynamic limit of the Goldstein-Kac telegraph process X(t) is the homogeneous Wiener process with zero drift and diffusion coefficient ρ².
The solution to the telegraph equation (0.0.2) (that is, the density of the absolutely continuous component of the distribution of X(t)), for |x| < ct, has the form
$$p(x,t) = \frac{\lambda e^{-\lambda t}}{2c}\left[I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right) + \frac{ct}{\sqrt{c^2t^2-x^2}}\,I_1\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)\right], \tag{0.0.5}$$
where I_0(z) and I_1(z) are the modified Bessel functions of zero and first orders, respectively. Therefore, the complete density of the Goldstein-Kac telegraph process X(t), which is the solution to the inhomogeneous telegraph equation (0.0.3), is given by the formula
$$f(x,t) = \frac{e^{-\lambda t}}{2}\bigl[\delta(ct+x)+\delta(ct-x)\bigr] + \frac{\lambda e^{-\lambda t}}{2c}\left[I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right) + \frac{ct}{\sqrt{c^2t^2-x^2}}\,I_1\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)\right]\Theta(ct-|x|),$$
$$x\in(-\infty,\infty),\ t>0. \tag{0.0.6}$$
This remarkable and surprising result shows that the transition density (0.0.6) of the Goldstein-Kac telegraph process X(t) is the fundamental solution (the Green's function) of the hyperbolic telegraph equation (0.0.2) and, therefore, there is a perfect analogy with the fact, mentioned above, that the transition density of the Wiener process is the fundamental solution of the parabolic heat equation (0.0.1). One can easily check that, under Kac's scaling condition (0.0.4), density (0.0.6) (or (0.0.5)) turns into the transition density of the one-dimensional homogeneous Wiener process.
M. Kac also found an original stochastic form of writing a solution of the telegraph equation (0.0.2) with the initial conditions
$$p(x,t)\big|_{t=0} = \varphi(x), \qquad \left.\frac{\partial p(x,t)}{\partial t}\right|_{t=0} = 0,$$
with an arbitrary (classical) function ϕ(x), x ∈ R1, having a Fourier transform on the whole line. The stochastic solution of such an initial-value problem has the very elegant form
$$p(x,t) = \frac{1}{2}\,\mathrm{E}\left\{\varphi\!\left(x + c\int_0^t (-1)^{N(\tau)}\,d\tau\right) + \varphi\!\left(x - c\int_0^t (-1)^{N(\tau)}\,d\tau\right)\right\}, \tag{0.0.7}$$

where N(t) denotes the number of Poisson events that occurred up to time t and E means the expectation. The elegance of formula (0.0.7) lies in its structural resemblance to the classical d'Alembert solution of the one-dimensional wave equation. The only difference is that the time t in the d'Alembert solution is replaced by the 'randomized time' $\int_0^t (-1)^{N(\tau)}\,d\tau$ in (0.0.7), after which the expectation of the resulting expression is taken.
It was also noted by M. Kac that such a method of obtaining stochastic solutions is also valid for the telegraph equation in any higher dimension (that is, when the differential operator with respect to the spatial variable is replaced by the Laplace operator of the respective dimension). To do this, one needs to omit the first time derivative in the telegraph equation and to consider the resulting wave equation of the respective dimension. Then one writes down any of its solutions, which are well known in any dimension. Replacing everywhere in this solution the time variable t by the above 'randomized time' and then taking the expectation of the resulting expression, we obtain a stochastic solution of the telegraph equation of the respective dimension.
The fundamental works by S. Goldstein [59] and M. Kac [84] gave a powerful impetus to further research in this field, which led to the creation of a vast area in the theory of stochastic processes, now called the theory of random evolutions. A random evolution (RE) is a dynamical system subject to the control of some external stochastic process x(t) with known characteristics; it is also called a random dynamical system [2]. From the mathematical point of view, an RE is a product of a random number of evolutionary operators A_ξ possessing the semigroup property and acting in some functional (usually Banach) space B.
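Returning to Kac's formula (0.0.7): it admits a direct numerical check. The sketch below (all parameter values are illustrative choices, not from the text) estimates the right-hand side of (0.0.7) by Monte Carlo for a single Fourier mode ϕ(x) = cos kx, for which the initial-value problem for the telegraph equation (0.0.2) reduces to the elementary ODE A'' + 2λA' + c²k²A = 0, A(0) = 1, A'(0) = 0 for the amplitude A(t), solvable in closed form.

```python
import math
import random

random.seed(7)
c, lam, k, t, x = 1.0, 2.0, 1.0, 1.5, 0.3

def randomized_time(t, lam):
    """One sample of the 'randomized time' int_0^t (-1)^{N(tau)} d tau,
    where N is a Poisson process of rate lam."""
    T, s, sign = 0.0, 0.0, 1.0
    while True:
        tau = random.expovariate(lam)
        if s + tau >= t:
            return T + sign * (t - s)
        T += sign * tau
        sign = -sign
        s += tau

phi = lambda y: math.cos(k * y)

# Monte Carlo estimate of the right-hand side of (0.0.7);
# note that the SAME randomized time enters both phi terms.
n = 200_000
total = 0.0
for _ in range(n):
    T = randomized_time(t, lam)
    total += 0.5 * (phi(x + c * T) + phi(x - c * T))
estimate = total / n

# Exact solution for phi(x) = cos(kx): p(x, t) = cos(kx) * A(t), where
# A'' + 2*lam*A' + (c*k)**2 * A = 0, A(0) = 1, A'(0) = 0.
# Here lam > c*k, so the roots are real: -lam +/- w, w = sqrt(lam^2 - (c*k)^2).
w = math.sqrt(lam**2 - (c * k)**2)
exact = phi(x) * math.exp(-lam * t) * (math.cosh(w * t) + (lam / w) * math.sinh(w * t))

assert abs(estimate - exact) < 0.01   # agreement within Monte Carlo error
```

Taking the same integral in both ϕ terms mirrors the structure of (0.0.7); by linearity of expectation the mean would be unchanged with independent draws, but the single-draw form is the one the formula states.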
The transport processes form the most important class of REs, when B = C_0(Rm) and the evolutionary operators have the form A_ξ = a_ξ · ∇, where a_ξ is a measurable mapping from a domain D to functions in Rm. The Goldstein-Kac telegraph process is just a transport process on the Euclidean line R1.
REs are also classified according to the type of the governing stochastic process x(t). If x(t) is a Markov process, then we speak of a Markov random evolution (for transport processes, this means that the time interval τ between two consecutive changes of direction is an exponentially distributed random variable). If x(t) is not a Markov process, then the RE is referred to as a semi-Markov random evolution (for transport processes, this means that τ has an arbitrary distribution). One should especially emphasize that even if an RE is driven by a Markov process x(t), it is not itself Markovian.
Since, as noted above, an RE is a product of a random number of operators, it is natural to consider the limiting behaviour, in one sense or another, of such a product when the number of factors tends to infinity or under some perturbations. This approach led to the predominant development of asymptotic methods in the RE theory, based on the well-developed apparatus of the theory of random products of operators [47, 125, 201] and the perturbation theory of operators [87, 126, 127]. With this approach, the main goal was to prove various limit and asymptotic theorems. Various limit theorems for REs were obtained in [39, 60–62, 72, 88, 90, 91, 126, 127, 170–172, 210, 211]. A detailed asymptotic analysis of isotropic transport processes was done in [161]. Discontinuous REs were studied in [89]. REs driven by homogeneous Markov chains with a finite number of states were examined in [64, 65, 72, 172]. An original approach to studying REs as multiplicative operator functionals was developed in [170, 173–175].
Stochastic integral representations for REs were obtained in [71, 164, 165, 170, 174, 209]. Applications of REs to describing the behaviour of a particle and wave propagation in random media were considered in [162, 163, 166]. A comprehensive survey of these and some other works on REs was presented by R. Hersh [70]. The martingale methods of studying REs were developed in [122, 123, 170, 176, 194–196, 210, 211]. In the monograph [123], the authors, based on their method of analysing semi-Markov REs and on the apparatus of phase merging of complex systems developed in [124], proved a number of important averaging and diffusion approximation
theorems in the ergodic and asymptotic phase merging schemes, and also obtained a double approximation of REs in the merging scheme. This approach was further developed in [119, 120], where the behaviour of stochastic dynamic systems in merging and splitting phase spaces was studied.
For all the importance of this field of research, it should be noted that asymptotic and limit theorems give only an approximate description of the behaviour of REs. More accurate results can be obtained in another extremely important direction in the theory of REs, called random flights and represented by a finite-velocity random walk of a particle subject to the control of an external stochastic process with a given distribution. This refers primarily to equations and systems of equations (differential, integral or integro-differential) for the basic characteristics of random flights, the most important being the distribution. The importance of this field of research is determined, first of all, by the fact that random flights generate transport processes and finite-velocity diffusion in Euclidean spaces. Note also that sometimes, mostly in the physical literature, random flights are referred to as persistent random walks [21, 48, 138–140, 212].
The one-dimensional Markov random flight is represented by the Goldstein-Kac telegraph process. A multidimensional continuous random flight in the Euclidean space Rm, m ≥ 2, is performed by the stochastic motion of a particle that moves with some finite speed and changes, at random time instants, the direction of motion by choosing it on the unit (m − 1)-dimensional sphere according to some probability distribution.
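As an illustration, here is a minimal simulation sketch of such a random flight in the planar case m = 2 with the uniform choice of directions on the unit circle; the parameter values are arbitrary illustrative choices. It exhibits the finite-velocity property: every position at time t lies in the disc of radius ct.

```python
import math
import random

random.seed(1)

def planar_flight(t, c=1.0, lam=2.0):
    """One sample of the position of a planar Markov random flight:
    the particle moves at speed c and, at each event of a Poisson
    process of rate lam, picks a new direction uniformly on the circle."""
    x = y = elapsed = 0.0
    theta = random.uniform(0.0, 2.0 * math.pi)
    while True:
        tau = random.expovariate(lam)
        step = min(tau, t - elapsed)       # do not move past the horizon t
        x += c * step * math.cos(theta)
        y += c * step * math.sin(theta)
        elapsed += step
        if elapsed >= t:
            return x, y
        theta = random.uniform(0.0, 2.0 * math.pi)

c, lam, t = 1.0, 2.0, 2.0
points = [planar_flight(t, c, lam) for _ in range(20_000)]

# Finite speed: the whole distribution is supported in the disc of radius ct.
assert all(math.hypot(px, py) <= c * t + 1e-9 for px, py in points)
# Isotropy: the sample mean position is near the origin.
mx = sum(p[0] for p in points) / len(points)
my = sum(p[1] for p in points) / len(points)
```

The same loop with a direction drawn uniformly on the (m − 1)-dimensional sphere gives the general m-dimensional model.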
Such a highly rich stochastic model can generate a lot of particular random walks that may be distinguished according to their main features, such as the velocity of motion, the stochastic flow of the random time instants at which the particle changes its direction, the dissipation function related to the choice of the initial and new directions, the number of possible directions, the presence or absence of jumps at renewal moments, the dimension of the phase space Rm, m ≥ 2, etc. Clearly, various combinations of these items can generate a seemingly infinite number of different stochastic motions. The majority of the works on such stochastic motions are devoted to studying random motions at constant speed without jumps, driven by a homogeneous Poisson process (that is, the Markovian case), with the uniform choice of directions in Euclidean spaces of different dimensions.
The further development of the theory of random flights was focused on two main directions. The first one is related to generalizations of the Goldstein-Kac telegraph process on the real line R1, obtaining their basic characteristics, as well as various applications of the model. A telegraph-type stochastic motion with a time-dependent rate of switchings was studied in [86]. Probabilistic representations of the solutions of the telegraph equation, which are the counterparts of formula (0.0.7), were obtained in [83, 92, 203]. Analytical properties of the solution space of the telegraph equation were examined in [4, 5]. The relativity effects arising in stochastic motions driven by the telegraph equations were considered in [9, 17, 18]. The distributions of the first-passage times and maximum displacement of the telegraph process were obtained in [44, 141, 142, 213] (see also [115, Section 3.3]). Properties of the telegraph random process with or without a trap were studied in [45].
Telegraph processes with absorbing and reflecting barriers were examined in [156] and [115, Section 3.1]. Occupation time distributions for the telegraph process were obtained in [12] (see also [115, Section 3.2]). Moment and statistical analysis of the telegraph random process was given in [76, 77, 100]. The asymmetric telegraph processes were examined in [134] (see also [115, Chapter 4]). Partial differential equations governing a one-dimensional telegraph-type stochastic motion with several speeds and rates of switchings, taken at random, were derived in [112]. Telegraph processes with random velocities were examined in [192]. The distance between the Goldstein-Kac process and the Wiener process with some applications to generalized telegraph equations was studied in [80], while the distribution of the Euclidean distance between two independent Goldstein-Kac telegraph processes was obtained in [98]. The explicit probability distribution of the sum of two independent Goldstein-Kac telegraph processes was presented in [97]. Linear combinations of an arbitrary number of independent Goldstein-Kac telegraph processes were studied in [94]. An analytical study of the generalized telegraph equations based on the apparatus of convolutions and semigroups was carried out in [81]. The behaviour of the travelling waves generated by finite-velocity random walks was examined in [67, 68]. A linear reaction-hyperbolic system with constant coefficients describing finite-velocity random motions was studied in [16]. A finite-velocity stochastic motion on the line with an Erlang-distributed time between two consecutive turns was examined in [31]. A study of a damped telegraph random process with logistic stationary distributions was carried out in [33]. A telegraph process with an elastic boundary at the origin was considered in [34]. The one-dimensional stochastic motion of a run-and-tumble particle was studied in [1, 30]. A generalization of the Goldstein-Kac stochastic motion for the case of a jump-telegraph process driven by an alternating fractional Poisson process was presented in [35]. Some properties of the generalized telegraph processes in inhomogeneous media were studied in [181, 182]. Applications of the telegraph processes for constructing an option pricing model, different from the classic Black-Scholes one, were developed in a series of works [179, 180, 183] (see also [115, Chapter 5] for more details).
The second direction is related to the multidimensional counterparts of the Goldstein-Kac telegraph process, that is, the multidimensional finite-velocity random walks. Great interest in this problem arose immediately after the appearance of the Goldstein-Kac stochastic model.
The question arises: can multidimensional finite-velocity random motions be described by multidimensional telegraph equations, similarly to the one-dimensional case? This question, first formulated by M. Kac, has stimulated intense discussion among researchers on whether such a motion can be described by the multidimensional counterpart of the Goldstein-Kac telegraph equation (0.0.2). Some researchers have simply replaced the operator ∂²/∂x² in (0.0.2) with the multidimensional Laplacian ∆. Their reasoning was based on the analogy, mentioned above, between the Goldstein-Kac telegraph process and the one-dimensional Brownian motion. They considered that, since the transition density of the multidimensional Brownian motion is the fundamental solution to the multidimensional heat equation (which differs from its one-dimensional counterpart only by the presence of the Laplacian ∆), the same should also be true for the multidimensional random flights. Other researchers quite rightly believed that such a formal replacement was highly doubtful and unjustified from the mathematical point of view. M. Bartlett [5, p. 705] wrote that 'such equivalence is more doubtful in the multidimensional case'. E. Tolubinsky [199, p. 49] described such attempts as 'unjustified'. The final solution of this problem was given in [114], where it was shown that the multidimensional random flights are driven by much more complicated operators than the telegraph one, namely, by an operator series called the hyperparabolic operator, composed of the integer powers of the telegraph and Laplace operators of the respective dimensions.
The main difference between the Goldstein-Kac telegraph process on the line and a multidimensional random flight is the number of possible directions of motion. While there are only two directions on the line (positive and negative), in higher dimensions the number of directions may be arbitrary (finite, countable or continuum).
Planar random flights with three and four directions were studied in [32] and [155], respectively. A general model of the planar random flight with an arbitrary finite number of uniformly taken directions was thoroughly examined in [116]. In this work a high-order hyperbolic partial differential equation for the transition density of the motion was derived. It was also shown that under Kac's scaling condition (0.0.4), the equation turns into a planar heat equation. Weak convergence of this planar random flight, under Kac's condition (0.0.4), to a homogeneous Wiener process in the plane with zero drift and a diffusion coefficient depending on the number of directions was proved in [110]. A random flight with a finite number of cyclically taken directions in the Euclidean space of arbitrary dimension was studied in [128].
Although random flights with a finite number of directions are of certain interest, the models of finite-velocity stochastic motions in the Euclidean space Rm, m ≥ 2, with a continuum number of directions are, undoubtedly, much more natural and attractive both from the theoretical and practical points of view. In such models, the particle takes on the initial and each new direction at random according to some probability distribution on the surface of the (m − 1)-dimensional unit sphere. Such stochastic motions can serve as good and adequate models for describing many real phenomena and processes in physics [17, 18, 22, 36, 37, 53–57, 162, 186], hydro- and thermodynamics [153, 188, 199], biology [26, 36, 66, 136, 159] and environmental science [85, 193].
One of the most significant properties is that a random flight generates a finite-velocity diffusion. This feature is extremely important for describing diffusion processes whose speed of propagation is small. Such slow and super-slow diffusion processes play a special role in modern biotechnology, chemical physics, environmental science and some other fields (see, for example, [3, 25, 85, 160, 169, 193, 204, 208] and the bibliographies therein). A condition under which Markov random flights generate slow diffusion processes was given in [93]. This condition, called the slow diffusion condition, combines the convergence to zero of the speed of motion and the intensity of switchings with the convergence of the time to infinity.
Under this slow diffusion condition, the stationary distributions of the slow diffusion processes in the Euclidean spaces of different dimensions were derived.
The importance of random flights for modeling real processes aroused great interest in their study. The main goal of these studies was to obtain the basic probabilistic characteristics of such processes and, mainly, their distributions. However, as it turned out, the problem of describing random flights in Euclidean spaces of different dimensions belongs to the class of mathematical problems for which the transition to another dimension is associated with great difficulties. By different methods, one managed to obtain the explicit distributions of the symmetric Markov random flights with constant speed in the spaces R2 [106, 113, 140, 190, 191], R4 [107, 158] and R6 [101]. Besides the distributions, other important characteristics, such as moments, conditional densities, conditional and unconditional characteristic functions, limiting behaviour, etc., were obtained as well. The probability distribution function for the Euclidean distance between two independent planar random flights was given in [99].
But the most difficult is the three-dimensional case. The Markov random flight in the space R3 was examined in several works [95, 108, 109, 140, 158, 190]; however, no probability distribution of the motion has been obtained so far. Only the conditional density of the three-dimensional Markov random flight, corresponding to a single change of direction, was obtained in explicit form [108, 190]. Besides, the Laplace transform of the characteristic function of the three-dimensional symmetric Markov random flight was given in [140, 190]. The difficulty in analyzing this process implies the need to study its other properties, in particular, the limiting and asymptotic behaviour. Weak convergence of the three-dimensional symmetric Markov random flight to a homogeneous Wiener process was proved in [109].
An asymptotic formula for the transition density of the three-dimensional symmetric Markov random flight on small time intervals was obtained in [95].
A general method of studying Markov random flights in the Euclidean space Rm of arbitrary dimension m ≥ 2, based on the analysis of the integral transforms of their distributions, was developed in [104]. It was shown that, for arbitrary time t > 0, the characteristic function of the m-dimensional Markov random flight with a constant speed c and arbitrarily distributed change of directions satisfies a Volterra integral equation of the second kind with a continuous kernel, whose solution is given by a uniformly converging series composed of the multiple convolutions, with respect to time t, of the characteristic function of the respective distribution on the surface of the (m − 1)-dimensional sphere of radius ct centred at the origin. This equation enabled us to obtain an explicit formula for the Laplace transform of the characteristic function of the m-dimensional symmetric Markov random flight in terms of the Gauss hypergeometric function. Based on this explicit formula, one managed to prove a limit theorem stating that, under Kac's scaling condition (0.0.4), the m-dimensional symmetric Markov random flight weakly converges to the m-dimensional homogeneous Wiener process with zero drift and diffusion coefficient σ² = 2ρ²/m. A space-time convolutional representation of the transition density of the multidimensional symmetric Markov random flight was obtained in [96].
The overwhelming majority of works and results obtained in this field are related to the case when the time interval between any two consecutive turns is an exponentially distributed random variable (that is, the Markov random flights) [29, 46, 48, 49, 51, 53–57, 75, 95, 96, 99, 101–111, 113, 114, 116, 117, 136, 138–140, 155, 158, 170, 185, 186, 190, 191, 202, 216–218]. However, in recent years a number of works have appeared in which non-Markov random flights were studied. The finite-velocity multidimensional stochastic motions whose trajectories consist of uniformly oriented segments of Pearson-Dirichlet, Pearson and Dirichlet-distributed random lengths were examined in [19, 130–133]. Finite-velocity random motions with jumps, called Lévy random flights, generating anomalous transport, were studied in [24, 146]. Note also that random flights can also be treated in the framework of the continuous-time random walk [8, 144, 145].
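The series-of-convolutions solution mentioned above is an instance of the classical successive-approximation (Neumann series) method for Volterra equations of the second kind. The toy sketch below uses a deliberately simple free term and kernel (not the characteristic-function kernel of [104]) to illustrate how such a series is computed and why it converges: for f(t) = 1 + ∫₀ᵗ f(s) ds the iterates converge to f(t) = eᵗ.

```python
import math

# Successive approximations for a Volterra equation of the second kind,
#     f(t) = g(t) + int_0^t K(t, s) f(s) ds,
# discretized on a uniform grid with a left-endpoint quadrature rule.
# Toy data: g = 1, K = 1, whose exact solution is f(t) = exp(t).
n, T = 400, 1.0
h = T / n
ts = [i * h for i in range(n + 1)]
g = [1.0] * (n + 1)
K = lambda t, s: 1.0

f = g[:]                                   # zeroth approximation: f_0 = g
for _ in range(30):                        # f_{k+1} = g + integral of K * f_k
    f = [g[i] + h * sum(K(ts[i], ts[j]) * f[j] for j in range(i))
         for i in range(n + 1)]

# The iterates converge (uniformly on [0, T]) to the exact solution;
# the residual error here is just the O(h) quadrature error.
print(abs(f[n] - math.e) < 0.01)           # True
```

Because the kernel is continuous and the integration interval is [0, t], the k-th correction is bounded by a term of order tᵏ/k!, which is what makes the Neumann series converge uniformly, as in the equation for the characteristic function described above.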
This book is the first systematic presentation of the theory of Markov random flights in Euclidean spaces of different dimensions, that is, of continuous-time random walks at constant speed, without jumps, with exponentially distributed random displacements. The trajectories of such stochastic motions are continuous and almost everywhere differentiable broken lines whose segments of exponentially distributed random lengths have random orientation in the Euclidean space Rm, m ≥ 2, according to a probability distribution on the unit (m − 1)-dimensional sphere (mostly the uniform one). For this important class of stochastic processes, one managed to construct a unified approach based on the powerful methods of mathematical analysis, such as integral transforms, hypergeometric and special functions. This approach enables one to effectively study such stochastic processes and to obtain closed-form expressions for their distributions in some low dimensions. Besides pure mathematical interest, the importance of such processes is determined by their numerous applications in physics [53–57, 93, 186], astrophysics [14, 15, 186], chemical physics [169], biological systems [25, 26, 36, 66, 159, 160, 204, 208], environmental science [3, 85, 193] and financial modeling [115, 179, 180, 183]. The most important peculiarity of random flights is that they generate diffusion processes with a finite speed of propagation.
The book is organized as follows. In Chapter 1 we give some general mathematical notions and results that are used in the forthcoming chapters. In particular, the basic properties of the most important types of Markov processes, namely Brownian motion, diffusion and Poisson processes, are presented. Four sections of the chapter are devoted to generalized functions, integral transforms, as well as special and hypergeometric functions, which form an effective mathematical apparatus for developing the general theory in this book.
Some more specific mathematical notions and results, such as the elements of the random evolution theory, the determinant theorem and Kurtz’s diffusion approximation theorem, are also presented. We introduce the Chebyshev polynomials of two variables on a commutative Banach algebra over the field of complex numbers and study their basic properties. The last section of the chapter is a collection of auxiliary lemmas that are used in the proofs of

results in later chapters. This enables us not to be distracted from proving the main results and makes reading more convenient. In Chapter 2 the elements of the theory of the one-dimensional Goldstein-Kac telegraph processes are given. Telegraph processes and their properties are very well studied in the literature (see, for instance, the recent textbook [115] and references therein). That is why the purpose of this chapter is not to present the whole modern theory of the telegraph processes (this would require a separate capacious book), but to give a gradual and friendly introduction to the theory with an exposition of some basic results (such as distribution, telegraph equation and its group symmetries, characteristic function, convergence to a Wiener process, moments, sum of two and distance between two telegraph processes, and linear combinations of several independent telegraph processes). Since, as noted above, the Goldstein-Kac telegraph process is a one-dimensional Markov random flight, this would prepare the reader for the perception of its multidimensional counterparts studied in subsequent chapters and for tracing the arising analogies. Bearing in mind that this chapter can serve as a good introduction to the telegraph processes for students and postgraduates, the first sections are equipped with a number of exercises that can help them better understand the material. For the same reason, the presentation in the first sections of this chapter is given at a simpler level and is based on an infinitesimal approach. 
Many interesting results concerning various generalizations of the Goldstein-Kac telegraph process, such as the first exit and first passage times, maximum displacement, occupation time and other functionals of the telegraph process, motions with barriers and in inhomogeneous environments etc., are not included in the monograph (this would make it really immense), but those interested can easily find them in other sources (see, for example, the textbook [115] and references therein). As noted above, the cardinal difference between multidimensional random flights and the one-dimensional telegraph process is the number of possible directions of motion. While for a telegraph process on the real line there are only two possible directions (positive and negative), random flights in higher dimensions can have many (any finite number or a continuum of) directions.

Chapter 3 is devoted to the study of a planar Markov random flight with n, n ≥ 2, possible directions taken randomly according to the uniform probability law. Such a model has interesting applications in relativistic analysis of stochastic kinematics [53]. The principal result of this chapter states that the transition probability density of the motion is the solution to an n-th order hyperbolic partial differential equation with constant coefficients whose operator is composed of the finite sums of the products of the time-shifted and Laplace differential operators. Note that for n = 2, these hyperbolic equations turn into the Goldstein-Kac telegraph equation. The derivation of the equation is based on the properties of the characters of a finite cyclic group and the spectral decomposition of the unit matrix with respect to these characters. Under Kac's scaling condition (0.0.4), the governing equation transforms into a two-dimensional heat equation.
Moreover, using Kurtz’s diffusion approximation theorem, we give two different proofs that the distributions of this stochastic motion converge to the distribution of a planar homogeneous Wiener process with zero drift and diffusion coefficient depending on n, as should be expected. The diffusion area of the process is also studied. The remarkable fact is that the generator of the planar Markov random flight with n directions has nice representations in terms of Chebyshev polynomials of two variables on the commutative Banach algebra of closed differential operators with constant coefficients acting in an appropriate Banach space. Chapter 4 is the core of the book. In this chapter we develop a general unified method of studying the Markov random flight X(t), t > 0, with a continuum number of directions in the Euclidean space Rm of arbitrary dimension m ≥ 2, based on the analysis of the integral transforms of its distributions. It is shown that the joint characteristic functions of X(t) are connected with each other by a convolution-type recurrent relation. This enables us to prove

that the characteristic function (Fourier transform) of X(t) in any dimension m ≥ 2 satisfies a convolution-type Volterra integral equation of the second kind. We give its solution and obtain the characteristic function of X(t) in terms of the multiple convolutions of the kernel of the equation with itself. An explicit form of the Laplace transform of the characteristic function in any dimension is given. The complete solution of the problem of finding the initial conditions for the governing partial differential equations is given. We also show that, under the Kac scaling condition (0.0.4), the transition density of X(t) converges to the transition density of the m-dimensional homogeneous Brownian motion with zero drift and diffusion coefficient depending on the dimension m. We give the conditional characteristic functions of X(t) in terms of the inverse Laplace transform of the powers of the Gauss hypergeometric function. In some low dimensions, the conditional characteristic functions are obtained in explicit forms. The integral relations are also given for the non-isotropic case, that is, when the initial direction and each new direction are chosen according to an arbitrary probability distribution on the unit sphere. An asymptotic relation for the transition density of the Markov random flight X(t) with rare switchings is obtained. A closed-form expression for the conditional density corresponding to the single change of direction is derived. An integral equation for the transition probability density of the m-dimensional Markov random flight with an arbitrary dissipation function is presented. Its solution, in the form of a functional series composed of the multiple double convolutions with respect to the time and space variables, is given. This series is uniformly convergent in any closed sub-ball of the diffusion area and, therefore, it uniquely determines a continuous density.
Two particular cases of the uniform and von Mises dissipation functions are considered separately. Finally, a complete solution of the long-standing problem of describing the multidimensional finite-velocity stochastic motions by means of the telegraph equations is given. This problem, first noted by M. Kac, concerns the possibility of describing the Markov random flights in the Euclidean space Rm, m ≥ 2, by means of the multidimensional telegraph equations. We give a negative answer to this question and show that the multidimensional Markov random flights are driven by much more complicated operators than the telegraph one, namely, the hyperparabolic operators having the form of a series composed of the products of the powers of the telegraph and Laplace operators of the respective dimension. As noted above, the formulas obtained in this chapter are universal and applicable in the Euclidean spaces of arbitrary dimensions. There are, however, a few important low dimensions in which these general formulas can be evaluated in explicit forms. This gives a unique opportunity to obtain the exact distributions of such stochastic motions in these low dimensions. One such unique case is the dimension m = 2.

In Chapter 5 we thoroughly study the symmetric Markov random flight X(t) = (X1(t), X2(t)), t > 0, at constant speed c with a continuum number of directions in the Euclidean plane R2. Surprisingly, the most important characteristics of this planar stochastic motion can be obtained explicitly. The characteristic functions conditioned on the number n, n ≥ 1, of changes of direction that occur until time t > 0 are derived by successive integration of the products of Bessel functions. Inverting them yields a closed-form expression for the conditional densities of the process that, surprisingly, are simple polynomials of order (n − 2)/2.
The remarkable fact is that, for n = 1, the conditional density takes the form of the fundamental solution (the Green's function) to the two-dimensional wave equation, while, for n = 2, it transforms into the uniform density in the circle of radius ct. The transition density of X(t), which is expressed in terms of elementary (exponential) functions, is absolutely continuous in the interior of the circle and has an infinite discontinuity on its boundary. The distributions of the projections X1(t) and X2(t) of the process X(t) onto the coordinate axes are also given; they are expressed in terms of the modified Bessel and Struve functions. The series and integral representations of the unconditional characteristic function of X(t) are presented. The most surprising fact is that the absolutely continuous part of the transition density of X(t) is the fundamental

solution to the two-dimensional telegraph equation. This peculiarity is analogous to that of the one-dimensional Goldstein-Kac telegraph process. Two limit theorems state that, under Kac's scaling condition (0.0.4), the transition density of X(t) converges to the transition density of a two-dimensional homogeneous Wiener process, as should be expected. The proof of the first theorem involves passing to the limit in the obtained density, while the second one is based on Kurtz's diffusion approximation theorem. A simple alternative derivation of the transition density of X(t) that uses some specific properties of wave propagation in a plane is given. An explicit formula for the moment function of X(t) is obtained. We also study a planar Markov random flight starting from a random point with a Gaussian distribution in the plane. We give the series and integral representations of the transition density of such a motion in terms of modified Bessel functions. In the last section of this chapter we study the Euclidean distance between two independent planar Markov random flights. The main result of this section is an explicit (but rather complicated) expression for the probability distribution function of the Euclidean distance between them at an arbitrary time instant t > 0.

Chapter 6 is devoted to a very important, but at the same time least studied, case of the Markov random flight X(t), t > 0, at constant speed c in the three-dimensional Euclidean space R3. While the basic characteristics of the planar Markov random flight studied in the previous chapter were obtained in an explicit form, in the three-dimensional case they are presented only in the form of the integral transforms that can hardly be calculated explicitly. We give a series representation of the characteristic function of X(t) composed of the inverse Laplace transform of the powers of the inverse tangent function.
Characteristic functions conditioned on the number n, n ≥ 1, of changes of direction that occurred until time t are presented in the form of a multiple integral of a product of normalized sine functions. The particular case n = 1 is the only one for which the conditional characteristic function corresponding to the single change of direction can be evaluated explicitly, and it is expressed in terms of incomplete integral sine and cosine functions. By inverting this conditional characteristic function, we obtain an explicit formula for the conditional density of X(t) corresponding to the single change of direction that has the form of a logarithm (alternatively, inverse hyperbolic tangent) function. This conditional density has an infinite discontinuity on the boundary of the diffusion area (i.e., a closed three-dimensional ball of radius ct) and, therefore, one can assert that the complete transition density of X(t) is also discontinuous on the boundary. This peculiarity is quite similar to that of the planar random flight studied in the previous chapter. By means of Kurtz's diffusion approximation theorem we prove that, under Kac's scaling condition (0.0.4), the distribution of X(t) converges to the distribution of a three-dimensional homogeneous Wiener process. Asymptotic relations with respect to the time variable t for the characteristic function and transition density of X(t) are obtained. The error in these formulas has the order o(t3) and, therefore, they give a fairly good approximation for small values of t. This conclusion is confirmed by a respective example and graphs. In the last section of the chapter we give a constructive method of obtaining the fundamental solution to the Kolmogorov (or Fokker-Planck) equation representing a hyperbolic system of a continuum number of first-order integro-differential equations for the joint densities of the particle's position and its direction at arbitrary time t > 0.
The solution has the form of a uniformly converging functional series whose terms are determined by certain recurrent relations.

Surprisingly, the Markov random flight X(t) in the four-dimensional Euclidean space R4 studied in Chapter 7 admits a quite detailed and complete analysis, as in the planar case. Despite this fairly high dimension, the main characteristics of X(t) can be obtained in an explicit form. A closed-form expression for the conditional densities of X(t), which, surprisingly, has a very simple form, is obtained. Based on these conditional densities, an exact formula for the distribution of X(t) is derived, which is expressed in terms of

elementary functions. The relations for the characteristic function of X(t) in the forms of an integral and a functional series are obtained. We also examine the limiting behaviour of X(t) under the standard Kac scaling condition (0.0.4) and prove its weak convergence to the four-dimensional homogeneous Wiener process. In the final section of the chapter, we derive exact formulas for the mixed moments of the distribution of X(t) in terms of Bessel and Struve functions, as well as a relation for the moments of the Euclidean distance between X(t) and the origin 0 ∈ R4 at arbitrary time t > 0 in terms of the incomplete gamma-function and the degenerate hypergeometric function.

The method of integral transforms developed in Chapter 4 gives us the unique opportunity to thoroughly study the Markov random flight X(t) in the six-dimensional Euclidean space R6. Even in this high dimension, this method enables us to obtain the basic characteristics of X(t), mainly its distribution, in an explicit form. In Chapter 8 we give a closed-form expression for the conditional densities of X(t) in the form of the finite sums of Gauss hypergeometric functions whose first parameter is always a non-positive integer. This means that the hypergeometric functions in the conditional densities are, in fact, some finite-order polynomials. Based on these conditional densities, we derive an explicit formula for the transition density of X(t). This density is represented in the form of a functional series composed of the finite sums of hypergeometric functions that are some finite-order polynomials, due to the specific form of the conditional densities noted above.

Finally, in Chapter 9 some applications of the Markov random flights studied in the previous chapters are considered. The peculiar property of such stochastic motions is that they generate finite-velocity diffusion processes having numerous applications in various fields of science and technology.
A wide and extremely important class is represented by the slow and super-slow diffusion processes. For example, such a diffusion arises in the processes of spreading and growing sandpiles [3], in protein folding [25] and the diffusion of proteins in the yeast plasma membrane [204], in extremely slow intramolecular diffusion in unfolded proteins [208], in environmental pollution [85, 193] and in some other fields. A new concept of slow diffusion processes generated by randomly moving particles, when both the speed and the intensity of changes of direction are small, is developed. We present a slow diffusion condition linking these parameters through time and providing a non-degenerate diffusion. Based on this slow diffusion condition, we derive the stationary distributions for large times of the Markov random flights in the Euclidean spaces of low dimensions. An approach for modeling the fluctuations of the water level in a reservoir based on the telegraph processes is presented. The peculiarity of such an approach consists in the interpretation of the water level as a particle moving on a (vertical) line at constant speed and alternating between two possible directions (up and down) at random time instants. The fluctuations of the water level can, therefore, be described by a telegraph equation whose parameters (more precisely, their statistical estimates) can be determined from long-term statistical observations. A model of soil pollution from a stationary source is also considered. It is imagined that the pollution process is carried out by randomly moving particles with a random lifetime. Based on the results of previous sections, we present the density of the pollution distribution for the case when the lifetime is an exponentially distributed random variable. Some physical applications of Markov random flights in the finite-velocity transport and cosmic microwave background radiation, as well as some of their relativistic properties, are outlined.
A sketch of the finite-velocity counterpart of the classical Black-Scholes option pricing model based on the telegraph processes is also given.

Chapter 1 Preliminaries

In this chapter we give some general mathematical notions and results that are used in forthcoming chapters. In particular, the basic properties of the most important types of Markov processes, namely, Brownian motion, diffusion and Poisson processes, are presented. Four sections of the chapter are devoted to the generalized functions, integral transforms, as well as special and hypergeometric functions that form an effective mathematical apparatus for developing a general theory in this book. Some more specific mathematical notions and results, such as the elements of the random evolution theory, the determinant theorem and Kurtz’s diffusion approximation theorem, are also presented. The last section of the chapter is a collection of auxiliary lemmas that are used in the proofs of the results in later chapters.

1.1 Markov processes

Let B be the σ-algebra of the Borel subsets of the real line R1 and let T > 0 be an arbitrary positive number. A function P(s, x, t, Γ), 0 ≤ s < t ≤ T, x ∈ R1, Γ ∈ B, is referred to as the transition probability function if the following conditions are fulfilled:

1. P(s, x, t, Γ) is a B-measurable function with respect to x for fixed s, t, Γ.

2. P(s, x, t, Γ) is a probability measure on B for fixed s, t, x (so that P(s, x, t, R1) = 1).

3. For all 0 ≤ s < t1 < t2, x ∈ R1, Γ ∈ B, the following relation holds:

$$P(s, x, t_2, \Gamma) = \int_{-\infty}^{\infty} P(s, x, t_1, dy)\, P(t_1, y, t_2, \Gamma). \qquad (1.1.1)$$

Relation (1.1.1) is referred to as the Kolmogorov-Chapman equation. A Markov process in R1 is said to be determined if the transition probability function P(s, x, t, Γ) is given. It is treated as the probability that the process, being located at point x at time instant s, will be located at time instant t (s < t) in the Borel subset Γ ∈ B. In other words, P(s, x, t, Γ) is the probability of passing in time t − s from a point x to the set Γ. This definition of the one-dimensional Markov process can be extended to a Markov process in an abstract measurable phase space (X, B). Its transition probability function satisfies a Kolmogorov-Chapman equation similar to (1.1.1), where the integral is taken over the space X. For more details on the definition and basic properties of Markov processes in abstract measurable spaces see, for instance, [40] or [121, Section 4.2]. In the following subsections we give three examples of the most important Markov stochastic processes, namely, the Wiener process (also called the Brownian motion), the diffusion process and the Poisson process, that play an important role in further analysis.
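As a quick independent illustration (not from the book; all names are ours), the Gaussian transition function of standard Brownian motion, introduced in the next subsection, satisfies the Kolmogorov-Chapman equation (1.1.1), and this can be checked by direct numerical integration:

```python
import numpy as np

def p(s, x, t, y):
    """Transition density of standard Brownian motion: w(t) - w(s) ~ N(0, t - s)."""
    return np.exp(-(y - x) ** 2 / (2.0 * (t - s))) / np.sqrt(2.0 * np.pi * (t - s))

s, t1, t2 = 0.0, 0.5, 1.2
x, z = 0.3, -0.7

# Right-hand side of (1.1.1), with the measure P(t1, y, t2, dz)
# written through its density and the integral taken as a Riemann sum.
y = np.linspace(-12.0, 12.0, 200_001)
dy = y[1] - y[0]
rhs = np.sum(p(s, x, t1, y) * p(t1, y, t2, z)) * dy

lhs = p(s, x, t2, z)
assert abs(lhs - rhs) < 1e-6
```

The agreement reflects the fact that the convolution of the N(0, 0.5) and N(0, 0.7) kernels is exactly the N(0, 1.2) kernel.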

1.1.1 Brownian motion

Let (Ω, F, P) be a probability space. The Brownian motion w(t) = w(t, ω), t ≥ 0, ω ∈ Ω (also called the Wiener process), on the real line R1, starting from the origin x = 0 ∈ R1, with zero drift and unit diffusion coefficient is the homogeneous stochastic process with independent increments possessing the following properties:

1. w(0, ω) = 0 for almost all ω ∈ Ω.

2. The system {w(t, ω), t ≥ 0} is Gaussian on (Ω, F, P) and, for any t and h such that t + h > 0, the random variable w(t + h, ω) − w(t, ω) has mean value 0 and variance |h|.

The first and second moments, that is, the expectation and the variance of the Wiener process w(t), are Ew(t) = 0, E(w(t))2 = t. The covariance function of w(t) is given by E{w(t)w(s)} = min{t, s}.

If w(t) is a Brownian motion, then, for arbitrary s > 0 and λ ≠ 0, the stochastic processes {w(t + s) − w(t), t ≥ 0} and {λ−1 w(λ2 t), t ≥ 0} are Brownian motions too. In particular, the process {−w(t), t ≥ 0} is a Brownian motion. The stochastic processes {w(t), t > 0} and {tw(1/t), t > 0} have the same distribution.

The process w(t) has the Gaussian density given by the formula

$$p(x,t) = \frac{1}{\sqrt{2\pi t}}\,\exp\left(-\frac{x^2}{2t}\right), \quad x \in R^1, \ t > 0, \qquad (1.1.2)$$

and the density (1.1.2) is the fundamental solution to the one-dimensional heat equation

$$\frac{\partial p(x,t)}{\partial t} = \frac{1}{2}\,\frac{\partial^2 p(x,t)}{\partial x^2}. \qquad (1.1.3)$$

The characteristic function of w(t), that is, the Fourier transform F_x of density (1.1.2) with respect to the spatial variable x ∈ R1, has the form

$$E\{\exp(i\xi w(t))\} = \mathcal{F}_x[p(x,t)](\xi) = \exp\left(-\frac{\xi^2 t}{2}\right), \quad \xi \in R^1. \qquad (1.1.4)$$

The Laplace transform L_t of density (1.1.2) with respect to the time variable t ≥ 0 is given by the formula

$$\mathcal{L}_t[p(x,t)](s) = \frac{1}{\sqrt{2s}}\,\exp\left(-|x|\sqrt{2s}\right), \quad \operatorname{Re} s > 0. \qquad (1.1.5)$$

For more detailed definitions of the integral transforms F_x and L_t of generalized functions f(x, t), x = (x1, ..., xm) ∈ Rm, m ≥ 1, t ≥ 0, in the Euclidean space Rm of arbitrary dimension m ≥ 1 and their main properties, see Sections 1.7 and 1.8 below.

Similarly to (1.1.2), the homogeneous Brownian motion wσ(t) with zero drift and arbitrary diffusion coefficient σ2 > 0 has the density

$$p_\sigma(x,t) = \frac{1}{\sigma\sqrt{2\pi t}}\,\exp\left(-\frac{x^2}{2\sigma^2 t}\right), \quad x \in R^1, \ t > 0, \qquad (1.1.6)$$

which is the fundamental solution to the heat equation

$$\frac{\partial p_\sigma(x,t)}{\partial t} = \frac{\sigma^2}{2}\,\frac{\partial^2 p_\sigma(x,t)}{\partial x^2}. \qquad (1.1.7)$$

The first and second moments, that is, the expectation and the variance of the Wiener process wσ(t), are Ewσ(t) = 0, E(wσ(t))2 = σ2 t. The integral transforms similar to (1.1.4) and (1.1.5) have the form

$$E\{\exp(i\xi w_\sigma(t))\} = \mathcal{F}_x[p_\sigma(x,t)](\xi) = \exp\left(-\frac{\xi^2\sigma^2 t}{2}\right), \quad \xi \in R^1, \qquad (1.1.8)$$

$$\mathcal{L}_t[p_\sigma(x,t)](s) = \frac{1}{\sigma\sqrt{2s}}\,\exp\left(-\frac{|x|\sqrt{2s}}{\sigma}\right), \quad \operatorname{Re} s > 0, \qquad (1.1.9)$$

respectively.

Almost all the sample paths of Brownian motion {w(t, ω), t ≥ 0} are nowhere differentiable and, for almost all ω ∈ Ω, the sample paths of the Brownian motion have unbounded variation in any subinterval. Moreover, the length of any piece of a Brownian trajectory is infinite. These exotic properties of the sample paths can be explained by the peculiarities of the Wiener process, which represents a stochastic motion of a mass-less particle moving at infinite speed and subject to an infinite number of changes of direction per unit of time. Such changes of direction can be treated as caused by the particle's collisions with random obstacles.

For the Brownian motion w(t) the following limiting relations hold:

$$P\left\{\limsup_{t\to 0+} \frac{w(t)}{\sqrt{2t\ln\ln(1/t)}} = 1\right\} = P\left\{\liminf_{t\to 0+} \frac{w(t)}{\sqrt{2t\ln\ln(1/t)}} = -1\right\} = 1, \qquad (1.1.10)$$

$$P\left\{\limsup_{t\to\infty} \frac{w(t)}{\sqrt{2t\ln\ln t}} = 1\right\} = P\left\{\liminf_{t\to\infty} \frac{w(t)}{\sqrt{2t\ln\ln t}} = -1\right\} = 1. \qquad (1.1.11)$$
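The claim that the Gaussian density (1.1.2) solves the heat equation (1.1.3) can be verified numerically with central finite differences. A minimal Python sketch (not from the book; names are ours):

```python
import numpy as np

def p(x, t):
    """Gaussian density (1.1.2) of the standard Wiener process."""
    return np.exp(-x**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

x0, t0, h = 0.4, 1.0, 1e-3

# central finite differences for dp/dt and d^2p/dx^2 at (x0, t0)
dp_dt = (p(x0, t0 + h) - p(x0, t0 - h)) / (2.0 * h)
d2p_dx2 = (p(x0 + h, t0) - 2.0 * p(x0, t0) + p(x0 - h, t0)) / h**2

# heat equation (1.1.3): dp/dt = (1/2) d^2p/dx^2
assert abs(dp_dt - 0.5 * d2p_dx2) < 1e-4
```

The residual is of order h², so shrinking h (up to round-off) makes the agreement sharper.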

Relations (1.1.10) and (1.1.11) are referred to as the iterated logarithm laws.

The distributions of some important functionals of the one-dimensional Wiener process w(t) are given by the following items.

1. Distribution of the maximum of Brownian motion. For x > 0,

$$P\left\{\sup_{0\le s\le t} w(s) < x\right\} = \sqrt{\frac{2}{\pi t}} \int_0^x e^{-z^2/(2t)}\, dz, \quad x > 0. \qquad (1.1.12)$$
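Formula (1.1.12) can be checked by simulation (this sketch is ours, not the book's): the right-hand side equals erf(x/√(2t)), and a discretized Brownian path slightly underestimates the continuous maximum, so only rough agreement is expected.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
t, x = 1.0, 1.0
n_paths, n_steps = 100_000, 1_000
dt = t / n_steps

w = np.zeros(n_paths)   # current position of each path
m = np.zeros(n_paths)   # running maximum of each path
for _ in range(n_steps):
    w += rng.normal(0.0, sqrt(dt), n_paths)
    np.maximum(m, w, out=m)

empirical = (m < x).mean()
# (1.1.12) in closed form: sqrt(2/(pi t)) * int_0^x e^{-z^2/(2t)} dz = erf(x / sqrt(2t))
theory = erf(x / sqrt(2.0 * t))
assert abs(empirical - theory) < 0.02
```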

2. Distribution of the first passage time. Let a > 0 be an arbitrary point on the right half of the real line R1 and let τa = inf{t : w(t) > a} be the moment of the first passage of the Wiener process w(t) through the point a. Then the random variable τa has the density

$$\frac{d}{dx}\, P\{\tau_a < x\} = \frac{a}{\sqrt{2\pi x^3}}\; e^{-a^2/(2x)}, \quad x > 0. \qquad (1.1.13)$$
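A consistency check of density (1.1.13) (our sketch, not from the book): integrating it over (0, T] must give P{τa < T}, which by the reflection principle equals erfc(a/√(2T)).

```python
import numpy as np
from math import erfc, sqrt

a, T = 1.0, 4.0
n = 400_000
dx = T / n
x = (np.arange(n) + 0.5) * dx   # midpoint grid on (0, T]

# first passage density (1.1.13) through the level a
dens = a / np.sqrt(2.0 * np.pi * x**3) * np.exp(-a**2 / (2.0 * x))

integral = dens.sum() * dx
theory = erfc(a / sqrt(2.0 * T))   # P{tau_a < T} via the reflection principle
assert abs(integral - theory) < 1e-4
```

The integrand vanishes rapidly as x → 0+, so the midpoint rule handles the apparent x^{−3/2} singularity without trouble.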

3. Joint distribution of the maximum and of the value of Brownian motion. Let a > 0 be an arbitrary point on the right half of the real line R1. Then, for x < a, the following relation holds:

$$P\left\{\sup_{0\le s\le t} w(s) < a,\ w(t) < x\right\} = \frac{1}{\sqrt{2\pi t}} \int_{x-2a}^{x} e^{-z^2/(2t)}\, dz, \quad x < a, \ a > 0. \qquad (1.1.14)$$

4. Arcsine law. Let Θ(x) be the Heaviside unit-step function, that is,

$$\Theta(x) = \begin{cases} 1, & x > 0, \\ 0, & x \le 0, \end{cases} \qquad x \in R^1.$$

Then, for 0 < τ < t, the following relation holds:

$$P\left\{\int_0^t \Theta(w(s))\, ds < \tau\right\} = \frac{2}{\pi}\arcsin\sqrt{\frac{\tau}{t}}, \quad 0 < \tau < t. \qquad (1.1.15)$$


This relation shows that the time spent by the Wiener process w(t) on the positive half-axis of the real line R1 up to the time moment t has the distribution function (1.1.15).

The m-dimensional homogeneous Wiener process w(t) = (w1(t), ..., wm(t)) with zero drift and diffusion coefficient σ2, also called the m-dimensional Brownian motion, is a stochastic process with independent increments having a Gaussian distribution with the density

$$p(x,t) = \frac{1}{(2\pi\sigma^2 t)^{m/2}}\,\exp\left(-\frac{\|x\|^2}{2\sigma^2 t}\right), \qquad (1.1.16)$$
$$x = (x_1,\dots,x_m) \in R^m, \quad \|x\|^2 = x_1^2 + \dots + x_m^2, \quad m \ge 1, \ t > 0.$$

For m = 1, (1.1.16) turns into the one-dimensional density (1.1.6). The components w1(t), ..., wm(t) of the m-dimensional Wiener process w(t) are independent one-dimensional Wiener processes.

The characteristic function of w(t), that is, the Fourier transform F_x of density (1.1.16) with respect to the spatial variable x ∈ Rm, has the form

$$E\{\exp(i\langle\xi, w(t)\rangle)\} = \mathcal{F}_x[p(x,t)](\xi) = \exp\left(-\frac{\sigma^2\|\xi\|^2 t}{2}\right), \quad \xi = (\xi_1,\dots,\xi_m) \in R^m, \ t > 0, \qquad (1.1.17)$$

where ⟨ξ, w(t)⟩ = ξ1 w1(t) + ··· + ξm wm(t) is the inner product of the m-dimensional vectors ξ and w(t).

Density (1.1.16) is the fundamental solution (the Green's function) of the m-dimensional heat equation

$$\frac{\partial p(x,t)}{\partial t} = \frac{\sigma^2}{2}\,\Delta p(x,t), \qquad (1.1.18)$$

where

$$\Delta = \frac{\partial^2}{\partial x_1^2} + \dots + \frac{\partial^2}{\partial x_m^2}$$

is the m-dimensional Laplace operator. The iterated logarithm laws (1.1.10) and (1.1.11) are also valid for the m-dimensional Wiener process w(t). If w(t) is separable then, with probability 1, it is continuous.
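The characteristic function (1.1.17) is easy to verify by Monte Carlo, since w(t) has independent N(0, σ²t) components by (1.1.16). A hedged Python sketch (names and parameter values are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
m, t, sigma = 3, 2.0, 1.5
n = 400_000
xi = np.array([0.4, -0.2, 0.3])

# w(t) has independent N(0, sigma^2 t) components, as stated for (1.1.16)
w = rng.normal(0.0, sigma * np.sqrt(t), size=(n, m))

empirical = np.exp(1j * (w @ xi)).mean()          # E exp(i <xi, w(t)>)
theory = np.exp(-sigma**2 * (xi @ xi) * t / 2.0)  # right-hand side of (1.1.17)
assert abs(empirical - theory) < 0.01
```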

1.1.2 Diffusion process

The diffusion process is a somewhat more general object than Brownian motion. Such processes describe fairly well the phenomena of diffusion in Euclidean spaces. The sample path of a diffusion process possesses the property that each of its pieces behaves like a Brownian trajectory. In other words, every sample path of a diffusion process may be imagined as being pasted together from pieces of Brownian trajectories.

Let B be the σ-algebra of the Borel subsets of the Euclidean line R1. An R1-valued Markov process D(t) in the time interval [0, T], T > 0, with the transition probability function P(s, x, t, Γ), 0 ≤ s < t ≤ T, x ∈ R1, Γ ∈ B, is referred to as the diffusion process on the real line R1 if the following conditions are fulfilled:

1. For all ε > 0, t ∈ [0, T], x ∈ R1,

$$\lim_{\Delta t\to 0} \frac{1}{\Delta t} \int_{|y-x|>\varepsilon} P(t, x, t+\Delta t, dy) = 0. \qquad (1.1.19)$$

2. There exist the R1-valued functions a(s, x) and b(s, x) such that, for all ε > 0, t ∈ [0, T], x ∈ R1,

$$\lim_{\Delta t\to 0} \frac{1}{\Delta t} \int_{|y-x|\le\varepsilon} (y-x)\, P(t, x, t+\Delta t, dy) = a(t, x), \qquad (1.1.20)$$

$$\lim_{\Delta t\to 0} \frac{1}{\Delta t} \int_{|y-x|\le\varepsilon} (y-x)^2\, P(t, x, t+\Delta t, dy) = b(t, x). \qquad (1.1.21)$$

The functions a(t, x) and b(t, x) are called the drift and diffusion coefficients of the process D(t), respectively.

1.1.3 Poisson process

A stochastic flow of events is referred to as the Poisson flow ξ(t), t ≥ 0, with intensity λ > 0 if its counting process N(t) is a stochastic process with independent increments such that

$$P\{N(\tau) - N(s) = k\} = \frac{[\lambda(\tau - s)]^k}{k!}\; e^{-\lambda(\tau - s)}, \quad k \ge 0, \qquad (1.1.25)$$

for arbitrary time instants τ and s (s ≤ τ). If λ is constant and does not depend on time t, the Poisson process is called homogeneous. Since the Poisson stochastic flow ξ(t) and its properties are completely determined through the counting process N(t), these processes are usually considered identical.

The Poisson process ξ(t) is a continuous-time stochastic process with independent increments whose sample paths are monotone step functions of time t having, in any finite time interval, a finite number of jumps of unit length. The increments take non-negative integer values with the expectation E[ξ(τ) − ξ(s)] = λ(τ − s) for any τ and s (s ≤ τ). The increments ξ(τ) − ξ(s) have the Poisson distribution (1.1.25) with parameter λ(τ − s). The time interval η between the occurrences of any two successive Poisson events is an exponentially distributed random variable with the distribution function

$$P\{\eta < x\} = 1 - e^{-\lambda x}, \quad x \ge 0.$$

If n Poisson events occur in the time interval (0, t), that is, N(t) = n, n ≥ 1, and τ1, ..., τn are the random time instants of the occurrences of these events, then the joint distribution of the random variables τ1, ..., τn is given by the formula

$$P\{\tau_1 \in d\tau_1, \dots, \tau_n \in d\tau_n\} = \frac{n!}{t^n}\, d\tau_1 \dots d\tau_n.$$

In particular, for n = 1, we have

$$P\{\tau_1 \in d\tau_1\} = \frac{1}{t}\, d\tau_1,$$

that is, the instant of occurrence of a single Poisson event in the time interval (0, t) is a random variable uniformly distributed in this interval. For more details related to the properties of Markov processes see, for instance, [40, 42, 78].
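Both the Poisson law (1.1.25) and the conditional uniformity of a single event instant can be checked by building N(t) from exponential inter-event gaps. A minimal sketch (ours, not the book's; parameter values are arbitrary):

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(11)
lam, t = 1.0, 2.0
n = 200_000

# Inter-event times are exponential(lam); 40 gaps is far beyond lam*t = 2 on average.
gaps = rng.exponential(1.0 / lam, size=(n, 40))
arrivals = np.cumsum(gaps, axis=1)
counts = (arrivals <= t).sum(axis=1)   # N(t) for each simulated flow

# Check the Poisson law (1.1.25) with s = 0 and k = 4.
k = 4
empirical = (counts == k).mean()
theory = (lam * t) ** k / factorial(k) * exp(-lam * t)
assert abs(empirical - theory) < 0.01

# Conditional on exactly one event in (0, t), its instant should be uniform on (0, t).
cond = arrivals[:, 0][counts == 1]
assert abs(cond.mean() - t / 2.0) < 0.02
assert abs(cond.var() - t**2 / 12.0) < 0.05
```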

1.2 Random evolutions

A random evolution is a dynamical system subject to the control of some external stochastic process. Such situations arise in many branches of science. Let us give only three examples of random evolutions.

• A particle moves at constant speed in some phase space in a random direction until it suffers a collision at a random time instant; then it takes on a new random direction and keeps moving at constant speed in this new direction, and so on.

• A radio signal propagates through a turbulent medium whose refraction index is changing at random.

• A population of bacteria evolves in an environment that is subject to random fluctuations.

In all these examples, the evolving system changes its mode of evolution because of random changes in the environment. In the first example, the mode of evolution is prescribed by the velocity and direction of the particle; in the second, by the refraction index of the medium; in the third, by random fluctuations of the environment that may influence the evolution of the population, promoting or hindering its growth.

We now give the rigorous definition of a random evolution. Let (xn, θn; n ≥ 0) be a renewal process in the measurable phase state space (X, 𝒳) with the semi-Markov kernel

$$K(x, A, t) = P\{x_{n+1} \in A,\ \theta_{n+1} \le t \mid x_n = x\}, \quad A \in \mathcal{X}, \ t > 0, \ n \ge 0.$$

Here xk ∈ X is a state of the renewal process and θk is the random time spent by the process in the state xk. Let Sx(t), x ∈ X, t ≥ 0, be a family of strongly continuous semigroups of contraction operators acting in a separable normed (Banach) space B of functions that are strongly measurable in x with respect to 𝒳|𝔅, where 𝔅 is the σ-algebra of the Borel subsets of B. In other words, the mapping Sx(t)f : X → B is 𝒳|𝔅-measurable for any f ∈ B and t > 0. Let D(x), x ∈ X, be a family of linear bounded contraction operators acting in the same Banach space B such that the mapping D(x)f : X → B is 𝒳|𝔅-strongly measurable. Define the semi-Markov stochastic process x(t) = x_{ν(t)}, where ν(t) = max{n : τn ≤ t} is the counting process, τn = θ1 + ··· + θn, n ≥ 1, τ0 = 0, are the renewal moments, and τ(t) = τ_{ν(t)} is the point process.

A general semi-Markov random evolution is defined by the relation

\[ E(t) = S_{x(t)}(t - \tau(t))\, D(x(t)) \cdots D(x_2)\, S_{x_1}(\theta_2)\, D(x_1)\, S_{x_0}(\theta_1), \tag{1.2.1} \]

representing, at an arbitrary time instant t > 0, a product of a random number of alternating random operators from the semigroups S_x(t) and the family D(x). Since x(t) is a regular semi-Markov process, that is, for any t > 0 the inequality ν(t) < ∞ holds with probability 1, the product in (1.2.1) contains a finite number of operators for any t > 0. For arbitrary f ∈ B, the process E(t)f, t > 0, is F_t^+|ℬ-strongly measurable and strongly continuous for t ≠ τ_n, n ≥ 1, where F_t^+ is the σ-algebra generated by {x(s), θ(s) : 0 ≤ s ≤ t}. Note that in the particular case when the renewal intervals θ_n are exponentially distributed random variables, the governing stochastic process x(t) becomes Markovian, and in this case (1.2.1) is referred to as a Markov random evolution. We emphasize that the terms Markov and semi-Markov random evolution are determined by the type of the governing stochastic process x(t), namely, a Markov or a semi-Markov one, respectively. Note also that, even in the case when x(t) is a Markov process, the random evolution E(t) itself is not a Markov process. From (1.2.1), we see that a general semi-Markov random evolution is a product of alternating continuous and jump random operators that belong to the semigroups S_x(t) and the family D(x), respectively. A continuous semi-Markov random evolution is given by the equality

\[ E^c(t) = S_{x(t)}(t - \tau(t))\, S_{x_{\nu(t)-1}}(\theta_{\nu(t)}) \cdots S_{x_1}(\theta_2)\, S_{x_0}(\theta_1) \tag{1.2.2} \]

and is defined only by the family of semigroups S_x(t). Relation (1.2.2) follows from (1.2.1) for D(x) ≡ I, where I is the identity operator. A jump semi-Markov random evolution is defined by the equality

\[ E^d(t) = D(x(t))\, D(x_{\nu(t)-1}) \cdots D(x_2)\, D(x_1) = \prod_{k=1}^{\nu(t)} D(x_k). \tag{1.2.3} \]

Obviously,

\[ E^d(t) = \prod_{k=1}^{n} D(x_k), \qquad \text{for } \tau_n \le t < \tau_{n+1}. \]

Such a semi-Markov random evolution changes by jumps at the renewal moments τ_n, n ≥ 1. Introduce the generators S(x), x ∈ X, of the semigroups S_x(t) by the relation

\[ S(x)f = \lim_{t \to 0} t^{-1} \big[ S_x(t) - I \big] f, \qquad f \in B_0 \subseteq B, \tag{1.2.4} \]

where B_0 is a common domain of the operators S(x), x ∈ X.

Theorem 1.2.1. The continuous random evolution (1.2.2) satisfies the relation

\[ E^c(t) - I = \int_0^t S(x(s))\, E^c(s)\, ds, \tag{1.2.5} \]

which is equivalent to the Cauchy problem

\[ \frac{d}{dt} E^c(t) = S(x(t))\, E^c(t), \qquad E^c(0) = I. \tag{1.2.6} \]


Proof. According to the semigroup equation [74], we have

\[ S_x(t) - I = \int_0^t S(x)\, S_x(s)\, ds. \tag{1.2.7} \]

From the definition of the continuous random evolution (1.2.2), it follows that

\[ E^c(t) = S_{x_n}(t - \tau_n)\, E^c(\tau_n), \qquad \tau_n \le t < \tau_{n+1}. \tag{1.2.8} \]

We prove the statement of the theorem by induction. First, we note that, for t = τ_1, equation (1.2.5) coincides with the semigroup equation (1.2.7) for x = x_0. Suppose now that (1.2.5) is also true for t = τ_n, that is,

\[ E^c(\tau_n) - I = \int_0^{\tau_n} S(x(s))\, E^c(s)\, ds. \tag{1.2.9} \]

Then, using (1.2.7), (1.2.8) and (1.2.9), we have:

\[
\begin{aligned}
E^c(\tau_{n+1}) - I &= S_{x_n}(\theta_{n+1})\, E^c(\tau_n) - I \\
&= E^c(\tau_n) - I + \int_{\tau_n}^{\tau_{n+1}} S(x_n)\, S_{x_n}(s - \tau_n)\, E^c(\tau_n)\, ds \\
&= \int_0^{\tau_n} S(x(s))\, E^c(s)\, ds + \int_{\tau_n}^{\tau_{n+1}} S(x(s))\, E^c(s)\, ds \\
&= \int_0^{\tau_{n+1}} S(x(s))\, E^c(s)\, ds,
\end{aligned}
\]

where, in the third step, we have used (1.2.8) and the fact that S(x_n) = S(x(t)) for τ_n ≤ t < τ_{n+1}. Now, for τ_n ≤ t < τ_{n+1}, taking into account (1.2.7) and (1.2.8), we obtain:

\[
\begin{aligned}
E^c(t) - I &= S_{x_n}(t - \tau_n)\, E^c(\tau_n) - I \\
&= E^c(\tau_n) - I + \int_{\tau_n}^{t} S(x_n)\, S_{x_n}(s - \tau_n)\, E^c(\tau_n)\, ds \\
&= \int_0^{\tau_n} S(x(s))\, E^c(s)\, ds + \int_{\tau_n}^{t} S(x(s))\, E^c(s)\, ds \\
&= \int_0^{t} S(x(s))\, E^c(s)\, ds,
\end{aligned}
\]

proving (1.2.5). By differentiating (1.2.5) in t, we arrive at (1.2.6). The theorem is proved.

Since we are interested only in continuous Markov random evolutions without jumps, we hereafter omit the superscript in the notation of the random evolution and set E^c(t) ≡ E(t). From Theorem 1.2.1 it follows that S(x(t)) can be considered as a random coefficient of the evolutionary equation (1.2.6), parametrized by a Markov process x(t). Let Q denote the infinitesimal operator (generator) of x(t). For example, if x(t) is a Markov chain with n states, then Q is a scalar (n × n)-matrix. Since the evolution of a dynamical system is determined not only by its current state at time t, but also by its preceding history, it is natural to consider the random evolution as a stochastic process depending on two time parameters, that is, E = E(s, t), 0 ≤ s ≤ t, t > 0.

Applying Theorem 1.2.1, one can show that the random evolution E(s, t) satisfies the linear differential equation

\[ \frac{\partial}{\partial s} E(s, t) = -S(x(s))\, E(s, t), \tag{1.2.10} \]

or, equivalently,

\[ \frac{\partial}{\partial t} E(s, t) = E(s, t)\, S(x(t)), \tag{1.2.11} \]

where E(t, t) = I, 0 ≤ s ≤ t, t > 0. Sometimes a random evolution is defined as the solution (assumed to exist) of equations (1.2.10) and (1.2.11). Relations (1.2.10) and (1.2.11) are referred to as the backward and forward evolutionary equations, respectively. Denote by u(x, t) the expected value of the solution of (1.2.10), conditioned on the initial value x = x(0), that is,

\[ u(x, t) = \mathrm{E}_x\, [E(0, t)]. \tag{1.2.12} \]

Then one can show that u = u(x, t) satisfies the differential equation

\[ \frac{\partial u}{\partial t} = S(x)u + Qu. \tag{1.2.13} \]

Given a function f(x) with values in the Banach space B, we define

\[ \tilde{u}(x, t) = \mathrm{E}_x\, [E(0, t)\, f(x(t))]. \tag{1.2.14} \]

Then \(\tilde{u}(x, t)\) is the solution of the initial-value problem

\[ \frac{\partial \tilde{u}}{\partial t} = S(x)\tilde{u} + Q\tilde{u}, \qquad \tilde{u}(x, 0) = f(x). \tag{1.2.15} \]

Relation (1.2.15) is an operator version of the classical Feynman-Kac formula of potential theory. For more details and results on the general theory of random evolutions and its applications, see [64, 65, 70, 72, 88, 89, 122, 123, 126, 127, 161–163, 166, 170–172, 194–196, 209–211] and the bibliographies therein. This book focuses on the particular, but most important and attractive, case of random evolution in which each S(x) is a single first-order linear differential operator with random coefficients. Then (1.2.10) is a transport equation associated with the trajectory of a particle moving in the Euclidean space R^m, m ≥ 1, whose speed and direction change at random (this corresponds to the first example of random evolution given above). Such stochastic motions are referred to as random flights. The one-dimensional Markov random flight, represented by the stochastic motion at constant speed of a particle on the line R^1 that, at Poissonian time instants, alternates between two possible directions (forward or backward), is referred to as the Goldstein-Kac telegraph process; it will be studied thoroughly in the next chapter. A multidimensional continuous random flight in the Euclidean space R^m, m ≥ 2, is performed by the stochastic motion of a particle that moves with some finite speed and changes, at random time instants, its direction of motion by choosing it on the unit (m − 1)-dimensional sphere according to some probability distribution. Such a rich stochastic model can generate a great many particular random walks, which may be distinguished by their main features:
- by the velocity (i.e. the speed of motion is constant, or it is a deterministic function of the space and time variables, or it is a random variable with a given distribution);


- by the stochastic flow of the random time instants at which the particle changes its direction (in other words, by the distribution of the time interval between two successive random instants of the flow);
- by the probability law of choosing the initial and all subsequent random directions (the so-called dissipation function);
- by the number of possible directions (finite or continuum);
- by the presence or absence of jumps at the renewal moments (with a deterministic or random amplitude);
- by the dimension of the phase space R^m, m ≥ 2.
Clearly, various combinations of these items can generate a great number of different stochastic motions. The majority of works on such stochastic motions are devoted to random motions at constant speed without jumps driven by a homogeneous Poisson process (that is, the Markovian case) with the uniform choice of directions in Euclidean spaces of different dimensions. In recent years, a series of works has appeared dealing with random flights whose time interval between two successive changes of direction is not an exponentially distributed random variable (that is, the non-Markovian case). In particular, in [19, 31, 130–133] random flights whose time intervals between two successive turns are Erlang-, Pearson-, Pearson-Dirichlet- or Dirichlet-distributed random variables have been thoroughly examined. In this book, we concentrate mostly on Markov random flights at constant speed without jumps, with the uniform choice of directions, in Euclidean spaces of different dimensions. Besides its purely mathematical interest, just such a stochastic motion is extremely important for modelling a great many real phenomena arising in various branches of science and technology. In particular, the random flight-based approach enables one to gain new insight into a number of physical theories and yields many fruitful interpretations of some intriguing facts of relativistic analysis, quantum mechanics, hydro- and thermodynamics, and interacting processes (see [17, 18, 37, 53–57, 146, 153, 188] and the bibliographies therein).
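The Goldstein-Kac telegraph process mentioned above is simple enough to simulate directly. The sketch below is ours (not the book's construction, which comes in the next chapter): a particle on the line with speed c reverses its direction at the epochs of a Poisson process of rate λ; the variance formula used in the sanity check is the standard one for the symmetric telegraph process.

```python
import math
import random

def telegraph_sample(c, lam, t, rng):
    """One sample of the Goldstein-Kac telegraph process X(t):
    motion at speed c on the line, direction +/-1 reversed at
    Poisson(lam) time instants."""
    x, direction, s = 0.0, rng.choice([-1, 1]), 0.0
    while True:
        tau = rng.expovariate(lam)            # time until next reversal
        if s + tau >= t:                       # no reversal before time t
            return x + direction * c * (t - s)
        x += direction * c * tau
        direction = -direction
        s += tau

rng = random.Random(0)
c, lam, t = 2.0, 3.0, 1.5
samples = [telegraph_sample(c, lam, t, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum(v * v for v in samples) / len(samples) - mean ** 2
# finite speed of propagation: |X(t)| <= c*t always;
# Var X(t) = (c^2/lam) t - (c^2/(2 lam^2)) (1 - exp(-2 lam t))
```

Every sample stays inside the interval [−ct, ct], which is exactly the finite-velocity feature that distinguishes random flights from Brownian motion.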

1.3

Determinant theorem

Consider the Kolmogorov equation for a random flight in R^m governed by a finite Markov chain with n states (the inequality n ≥ m is a necessary condition of nondegeneracy), representing the hyperbolic system of first-order partial differential equations

\[ \frac{\partial}{\partial t} u(x, t) = D u(x, t) + \Lambda u(x, t), \qquad u(x, t) = (u_1(x, t), \dots, u_n(x, t))^T, \tag{1.3.1} \]

x = (x_1, …, x_m) ∈ R^m, t > 0, where D = diag{V_1, …, V_n} is a diagonal matrix differential operator whose elements V_k, k = 1, …, n, are the vector fields of the form

\[ V_k = \sum_{j=1}^{m} x_{kj}\, \frac{\partial}{\partial x_j}, \qquad k = 1, \dots, n, \]

and Λ = ‖λ_{ij}‖_{i,j=1}^{n}, λ_{ij} being constant, is the infinitesimal (scalar) matrix of the governing Markov chain, with the initial condition

\[ u(x, 0) = f(x) = (f_1(x), \dots, f_n(x))^T. \tag{1.3.2} \]

Consider the linear combination

\[ u_0(x, t) = \sum_{k=1}^{n} \alpha_k\, u_k(x, t), \qquad \alpha_k \text{ arbitrary constants}, \]

of the components u_k(x, t) of the solution of the Cauchy problem (1.3.1)–(1.3.2). It is known that in some spaces this linear combination is the solution of the Cauchy problem for the n-th order hyperbolic partial differential equation

\[ H_n u_0(x, t) = 0, \tag{1.3.3} \]

and the respective initial conditions are determined by the properties of the functions {u_k(x, t)} as the solutions of the Cauchy problem (1.3.1)–(1.3.2). In the general case, the passage from the system of Kolmogorov equations (1.3.1) to the hyperbolic equation (1.3.3) is a fairly difficult analytical problem. The theorem formulated below enables us to make such a passage in a simple way, without any restrictions on n, on the properties of the governing Markov chain, or on the dimension m of the evolution space. From the coefficients (that is, operators) of system (1.3.1) we compose the matrix differential operator

\[
S_n(D, \Lambda, t) =
\begin{pmatrix}
\frac{\partial}{\partial t} - V_1 - \lambda_{11} & -\lambda_{12} & \dots & -\lambda_{1n} \\
-\lambda_{21} & \frac{\partial}{\partial t} - V_2 - \lambda_{22} & \dots & -\lambda_{2n} \\
\hdotsfor{4} \\
-\lambda_{n1} & -\lambda_{n2} & \dots & \frac{\partial}{\partial t} - V_n - \lambda_{nn}
\end{pmatrix}.
\]

Then system (1.3.1) can be represented in the matrix-operator form

\[ S_n(D, \Lambda, t)\, u(x, t) = 0. \tag{1.3.4} \]

Since all the operators in the matrix S_n = S_n(D, Λ, t) commute with each other, the following differential operator is uniquely determined:

\[ \mathrm{Det}\, S_n = \sum_{\{i_1 \dots i_n\}} \sigma^{i_1 \dots i_n}_{1 \dots n}\, A_{1 i_1} \cdots A_{n i_n}, \tag{1.3.5} \]

where σ^{i_1…i_n}_{1…n} is the alternator and the summation is over all permutations of the numbers 1, …, n. The elements A_{kj} in (1.3.5) are the differential operators of the form

\[
A_{kj} =
\begin{cases}
\dfrac{\partial}{\partial t} - V_j - \lambda_{jj}, & \text{if } k = j, \\[6pt]
-\lambda_{kj}, & \text{if } k \neq j,
\end{cases}
\qquad k, j = 1, \dots, n.
\]

From the formal-algebraic point of view, Det S_n is the result of applying to the operators in S_n the set of operations that yields the determinant of the n-th order matrix S_n.

Theorem 1.3.1. For any n ≥ 2, the following relation holds:

\[ [\mathrm{Det}\, S_n]\, u_0(x, t) = 0. \]

Proof. From the hyperbolicity of system (1.3.1) it follows [148] that the solution of the Cauchy problem (1.3.1)–(1.3.2) exists, is unique, and its smoothness is entirely determined by the smoothness of the initial conditions. That is why one can assume that the initial conditions (1.3.2) are such that the solution of the Cauchy problem is differentiable at least n times in each of the variables. Therefore, the operator Det S_n is applicable to any component u_j(x, t) of the solution of the Cauchy problem (1.3.1)–(1.3.2). It is known [129] that the functions u_j = u_j(x, t) of system (1.3.4) and the operator Det S_n are connected with each other by the formal relation [Det S_n] u_j(x, t) = 0 for any j = 1, …, n. Therefore, in view of the linearity of the operator Det S_n, for a sufficiently smooth solution u(x, t) = (u_1(x, t), …, u_n(x, t))^T of the Cauchy problem (1.3.1)–(1.3.2) the following relation holds:

\[ [\mathrm{Det}\, S_n]\, u_0(x, t) = 0, \qquad \text{where } u_0(x, t) = \sum_{k=1}^{n} \alpha_k u_k(x, t), \quad \alpha_k \in \mathbb{R}^1 \text{ arbitrary constants}. \]

Since the operator H_n in (1.3.3) is introduced as an operator of n-th order, one can set Det S_n = H_n. The initial conditions in the Cauchy problem for equation (1.3.3) can be recalculated from the initial conditions (1.3.2). The analyticity of the initial conditions provides the existence, uniqueness and analyticity of the solution of the Cauchy problem for equation (1.3.3). The theorem is proved.

Theorem 1.3.1 is referred to as the Determinant Theorem. It states that, in order to pass from the system of Kolmogorov equations (1.3.1) to a hyperbolic n-th order equation, it suffices to evaluate the formal 'determinant' of system (1.3.4), whose elements are the respective differential operators. Note also that, although Theorem 1.3.1 is proved for the case when the elements of the determinant Det S_n are differential operators with constant coefficients, it is also valid in the more general case when the determinant is composed of elements of an arbitrary commutative ring over the field of complex numbers.
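Because all entries of S_n commute, the formal determinant (1.3.5) can be checked with commuting scalar stand-ins for the operators. The sketch below (our illustration, not from the book) takes the two-state telegraph case Λ = ((−λ, λ), (λ, −λ)), V_1 = v, V_2 = −v, where p, v, λ are numeric stand-ins for ∂/∂t, the drift operator and the switching rate; the determinant should reduce to the telegraph operator symbol p² + 2λp − v².

```python
from itertools import permutations

def parity(perm):
    """Sign of a permutation: (-1)^(number of inversions)."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det(m):
    """Leibniz expansion, a scalar analogue of Det S_n in (1.3.5);
    valid here because all entries commute."""
    n = len(m)
    total = 0.0
    for perm in permutations(range(n)):
        prod = 1.0
        for i, j in enumerate(perm):
            prod *= m[i][j]
        total += parity(perm) * prod
    return total

# commuting scalar stand-ins for d/dt, the drift operator and lambda
p, v, lam = 1.3, 0.7, 2.1
S2 = [[p - v + lam, -lam],
      [-lam, p + v + lam]]
# symbol of the telegraph operator u_tt + 2*lam*u_t - v^2 u
telegraph_symbol = p ** 2 + 2 * lam * p - v ** 2
```

Expanding by hand, (p − v + λ)(p + v + λ) − λ² = (p + λ)² − v² − λ² = p² + 2λp − v², i.e. the Determinant Theorem recovers the telegraph equation of Chapter 2 from the 2 × 2 Kolmogorov system.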

1.4

Kurtz’s diffusion approximation theorem

Let U(t) and S(t) be strongly continuous semigroups of linear contractions on a Banach space L with infinitesimal operators A and B, respectively. Let D(A) and D(B) be the domains of A and B. Suppose that for each sufficiently large α the closure of A + αB is the infinitesimal operator of a strongly continuous semigroup T_α(t) on L. Assume that B is the closure of B restricted to D(A) ∩ D(B). Define the operator P acting in L as follows:

\[ P f = \lim_{\gamma \to 0} \gamma \int_0^{\infty} e^{-\gamma t}\, S(t) f\, dt, \tag{1.4.1} \]

assuming that this limit exists for any f ∈ L. The operator P possesses a number of important

properties (see [74, Theorem 18.6.1]). In particular, P² = P, that is, the operator P is a bounded linear projection. Denote by R(P) the range of the operator P. Let D = {f ∈ R(P) : f ∈ D(A)} and, for arbitrary f ∈ D, define the operator C by the equality Cf = PAf. The following diffusion approximation theorem of Kurtz holds (see [127, Theorem 2.2]).

Theorem 1.4.1. Let U(t), S(t), T_α(t), D and C be defined as above. Suppose that for all f ∈ D the following equality holds:

\[ Cf = 0. \tag{1.4.2} \]

Let

\[ D_0 = \{ f \in D : \exists\, h \in D(A) \cap D(B) \text{ such that } Bh = -Af \}. \tag{1.4.3} \]

For f ∈ D_0, define the operator C_0 by the equality

\[ C_0 f = P A h. \tag{1.4.4} \]

Suppose that, for some μ > 0,

\[ R(\mu - C_0) \supset D_0. \tag{1.4.5} \]

Then the closure of C_0, restricted so that C_0 f ∈ D_0, is the infinitesimal operator of a strongly continuous contraction semigroup T(t) defined on D_0 and, for all f ∈ D_0, the limiting relation holds:

\[ T(t) f = \lim_{\alpha \to \infty} T_\alpha(\alpha t) f. \tag{1.4.6} \]

This theorem yields an effective and constructive algorithm for obtaining approximation results for a wide class of stochastic processes. There are two main points in this method. The first concerns finding a solution h ∈ D(A) ∩ D(B) of the equation

\[ Bh = -Af \tag{1.4.7} \]

for an arbitrary element f ∈ D_0. The second important point concerns the possibility of evaluating the projector P defined by (1.4.1); the crucial point here is the existence of the limit on the right-hand side of (1.4.1). For Markov random evolutions, the projector P can be evaluated by means of a more explicit formula. Let V(t) be a time-homogeneous Markov process with measurable phase space (E, 𝓔) and transition function P(t, x, Γ). Then the semigroup S(t) in the Banach space of bounded, strongly measurable functions f : E → L with the sup-norm is defined as

\[ S(t) f(x) = \int_E f(y)\, P(t, x, dy), \]

and the explicit form of the projector P is given by the formula

\[ P f(x) = \int_E f(y)\, P(x, dy), \tag{1.4.8} \]

where P(x, Γ) is the limiting distribution (assumed to exist) of the process V(t) starting from the point x or, in other words, the weak limit, as t → ∞, of the transition function P(t, x, Γ). If h and P are found and conditions (1.4.2) and (1.4.5) of Kurtz's Theorem 1.4.1 are fulfilled, then one can assert that the transition functions of the random evolution converge weakly to the transition function of a process whose generator is given by the closure of the operator C_0. Consider now a particular case of Theorem 1.4.1 (see [127, page 64, Example 3.4]).
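Formula (1.4.8) is easy to see at work on a finite state space, where P f(x) = Σ_y π_y f(y) with π the stationary law. The sketch below (our illustration; the two-state generator is hypothetical) computes P(t, x, ·) = exp(tQ) by uniformization, a numerically stable series for generator matrices, and checks that for large t both rows collapse to π, so that P f no longer depends on x.

```python
import math

def transition_matrix(Q, t, terms=400):
    """exp(tQ) for a 2x2 generator via uniformization:
    exp(tQ) = sum_n e^{-qt} (qt)^n / n! * K^n, where K = I + Q/q
    is a stochastic matrix and q = max exit rate."""
    q = max(-Q[i][i] for i in range(2))
    K = [[(1.0 if i == j else 0.0) + Q[i][j] / q for j in range(2)]
         for i in range(2)]
    P = [[0.0, 0.0], [0.0, 0.0]]
    Kn = [[1.0, 0.0], [0.0, 1.0]]          # K^0
    w = math.exp(-q * t)                    # Poisson(qt) weight for n = 0
    for n in range(terms):
        for i in range(2):
            for j in range(2):
                P[i][j] += w * Kn[i][j]
        Kn = [[sum(Kn[i][r] * K[r][j] for r in range(2)) for j in range(2)]
              for i in range(2)]
        w *= q * t / (n + 1)
    return P

# hypothetical two-state generator; its stationary law is pi = (b, a)/(a+b)
a, b = 1.5, 0.5
Q = [[-a, a], [b, -b]]
P_long = transition_matrix(Q, 30.0)        # transition function at large t
pi = (b / (a + b), a / (a + b))
f = (2.0, -1.0)
proj = [sum(P_long[x][y] * f[y] for y in range(2)) for x in range(2)]
```

Both components of `proj` agree and equal π·f, which is exactly the projector (1.4.8): the limiting distribution here does not even depend on the starting point x because the chain is ergodic.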

Example 1.4.1. Let E = {1, 2, 3, …} and let V(t) be a positive recurrent pure jump Markov chain with transition matrix P(t) = ‖p_{ij}(t)‖_{i,j=0}^{∞}. Define

\[ p_j = \lim_{t \to \infty} p_{ij}(t) \]

and assume that

\[ \int_0^{\infty} |p_{ij}(t) - p_j|\, dt < \infty. \]

Denote

\[ \nu_{ij} = \int_0^{\infty} \big( p_{ij}(t) - p_j \big)\, dt. \]

Let

\[ D = \Big\{ f : f \in \bigcap_i D(A_i),\ \sup_i \| A_i f \| < \infty \Big\}, \]

and assume that

\[ \sum_i p_i A_i f = 0. \]

If

\[ \sup_i \sum_j \| \nu_{ij} A_j f \| < \infty, \qquad \sum_j \nu_{ij} A_j f \in D(A_i), \]

and

\[ \sup_i \Big\| A_i \sum_j \nu_{ij} A_j f \Big\| < \infty, \]

then the generator of the limiting process is given by the closure of the operator

\[ C_0 f = \sum_i \sum_j p_i A_i\, \nu_{ij} A_j f. \]

1.5

Special functions

This section presents the most important properties of the special functions that will be used in the forthcoming analysis.

1.5.1

Bessel functions

The Bessel function of real argument J_ν(z) and the Bessel function of imaginary argument I_ν(z) (also called the modified Bessel function) of order ν are particular cases of the cylindrical functions of the first kind Z_ν = Z_ν(z) that satisfy the Bessel differential equation

\[ \frac{d^2 Z_\nu}{dz^2} + \frac{1}{z}\, \frac{dZ_\nu}{dz} + \Big( 1 - \frac{\nu^2}{z^2} \Big) Z_\nu = 0. \tag{1.5.1} \]

The series representations of these functions are given by the formulas:

\[ J_\nu(z) = \frac{z^\nu}{2^\nu} \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\, \Gamma(k + \nu + 1)} \Big( \frac{z}{2} \Big)^{2k}, \qquad |\arg z| < \pi, \quad \nu \in (-\infty, +\infty), \tag{1.5.2} \]

\[ I_\nu(z) = \frac{z^\nu}{2^\nu} \sum_{k=0}^{\infty} \frac{1}{k!\, \Gamma(k + \nu + 1)} \Big( \frac{z}{2} \Big)^{2k}, \qquad \nu \in (-\infty, +\infty). \tag{1.5.3} \]

The Bessel function of real argument J_ν(z) for ν = 0, ±1, ±2, … is a single-valued holomorphic (analytic) function. If ν ≠ 0, ±1, ±2, …, then J_ν(z) is a multivalued holomorphic function; in this case the branch of J_ν(z) is taken such that z^ν > 0 for real positive argument z > 0. If ν > 0 and ν ≠ 1, 2, …, then the functions J_ν(z) and J_{−ν}(z) are linearly independent. If ν = n is an integer, then J_{−n}(z) = (−1)^n J_n(z) and, therefore, the functions J_n(z) and J_{−n}(z) are linearly dependent. The functions (1.5.2) and (1.5.3) are connected with each other by the relation

\[ I_\nu(z) = e^{-i\nu\pi/2}\, J_\nu\big( e^{i\pi/2} z \big), \qquad -\pi < \arg z \le \frac{\pi}{2}. \tag{1.5.4} \]

For integer ν, formula (1.5.4) takes the form

\[ I_n(z) = i^{-n} J_n(iz), \qquad n = 0, 1, 2, \dots. \tag{1.5.5} \]

The integral representations of these Bessel functions have the forms:

\[ J_\nu(z) = \frac{z^\nu}{2^\nu\, \Gamma(\nu + 1/2)\, \sqrt{\pi}} \int_{-1}^{1} (1 - \xi^2)^{\nu - 1/2} \cos(\xi z)\, d\xi, \qquad \nu > -\frac{1}{2}, \tag{1.5.6} \]

\[ I_\nu(z) = \frac{z^\nu}{2^\nu\, \Gamma(\nu + 1/2)\, \sqrt{\pi}} \int_{-1}^{1} (1 - \xi^2)^{\nu - 1/2} \cosh(\xi z)\, d\xi, \qquad \nu > -\frac{1}{2}. \tag{1.5.7} \]

There are also other integral representations of Bessel functions. In particular, for arbitrary real a, b ∈ R^1, the following integral representation of the Bessel function J_0(z) holds:

\[ J_0\big( x \sqrt{a^2 + b^2} \big) = \frac{1}{2\pi} \int_0^{2\pi} \exp\big( ix(a \cos\theta + b \sin\theta) \big)\, d\theta, \qquad a, b \in \mathbb{R}^1. \tag{1.5.8} \]

Indeed, by expanding the exponential function in the integrand, we have:

\[ \int_0^{2\pi} \exp\big( ix(a \cos\theta + b \sin\theta) \big)\, d\theta = \sum_{n=0}^{\infty} \frac{(ix)^n}{n!} \int_0^{2\pi} (a \cos\theta + b \sin\theta)^n\, d\theta. \]

Taking into account that (see [63, Formulas 3.661(1) and 3.661(2)])

\[ \int_0^{2\pi} (a \cos\theta + b \sin\theta)^n\, d\theta = \begin{cases} 2\pi\, \dfrac{(2k-1)!!}{(2k)!!}\, (a^2 + b^2)^k, & \text{if } n = 2k, \\[4pt] 0, & \text{if } n = 2k + 1, \end{cases} \qquad k = 0, 1, 2, \dots, \]

we obtain:

\[
\begin{aligned}
\int_0^{2\pi} \exp\big( ix(a \cos\theta + b \sin\theta) \big)\, d\theta
&= 2\pi \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k}}{(2k)!}\, \frac{(2k-1)!!}{(2k)!!}\, (a^2 + b^2)^k \\
&= 2\pi \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k}}{((2k)!!)^2}\, \big( \sqrt{a^2 + b^2} \big)^{2k} \\
&= 2\pi \sum_{k=0}^{\infty} \frac{(-1)^k}{(k!)^2} \bigg( \frac{x \sqrt{a^2 + b^2}}{2} \bigg)^{2k} \\
&= 2\pi\, J_0\big( x \sqrt{a^2 + b^2} \big),
\end{aligned}
\]

proving (1.5.8). In the second step we have used the formula (2k)!! = 2^k k!, k = 0, 1, 2, …. The important particular cases of Bessel function (1.5.2) are given by the formulas:

\[ J_0(z) = \sum_{k=0}^{\infty} \frac{(-1)^k}{(k!)^2} \Big( \frac{z}{2} \Big)^{2k}, \qquad J_1(z) = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,(k+1)!} \Big( \frac{z}{2} \Big)^{2k+1}. \tag{1.5.9} \]

Similar formulas hold for the modified Bessel function (1.5.3):

\[ I_0(z) = \sum_{k=0}^{\infty} \frac{1}{(k!)^2} \Big( \frac{z}{2} \Big)^{2k}, \qquad I_1(z) = \sum_{k=0}^{\infty} \frac{1}{k!\,(k+1)!} \Big( \frac{z}{2} \Big)^{2k+1}. \tag{1.5.10} \]

One can easily check that, for ν = 1/2, formulas (1.5.2) and (1.5.3) yield:

\[ J_{1/2}(z) = \sqrt{\frac{2}{\pi z}}\, \sin z, \qquad I_{1/2}(z) = \sqrt{\frac{2}{\pi z}}\, \sinh z. \tag{1.5.11} \]
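The series (1.5.2), the integral representation (1.5.8) and the half-integer case (1.5.11) are all cheap to verify numerically. A minimal sketch (ours; the function names are hypothetical) using only partial sums and the trapezoid rule, which is spectrally accurate on periodic integrands:

```python
import cmath
import math

def J(nu, z, terms=40):
    """Partial sum of the series (1.5.2) for the Bessel function J_nu(z)."""
    return sum((-1) ** k / (math.factorial(k) * math.gamma(k + nu + 1))
               * (z / 2) ** (2 * k + nu)
               for k in range(terms))

def circle_average(x, a, b, m=2000):
    """Right-hand side of (1.5.8): the average of exp(ix(a cos t + b sin t))
    over the circle, computed with the trapezoid rule."""
    s = sum(cmath.exp(1j * x * (a * math.cos(2 * math.pi * k / m)
                                + b * math.sin(2 * math.pi * k / m)))
            for k in range(m))
    return (s / m).real
```

With forty terms the series is accurate to machine precision for moderate z, so it can also be used to spot-check the recurrence (1.5.12), J_{ν−1}(z) + J_{ν+1}(z) = (2ν/z) J_ν(z).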

The cylindrical functions of the first kind Z_ν = Z_ν(z) satisfy the following recurrence relations:

\[ Z_{\nu-1}(z) + Z_{\nu+1}(z) = \frac{2\nu}{z}\, Z_\nu(z), \tag{1.5.12} \]

\[ Z_{\nu-1}(z) - Z_{\nu+1}(z) = 2\, \frac{d}{dz} Z_\nu(z), \tag{1.5.13} \]

\[ \Big( \frac{1}{z}\frac{d}{dz} \Big)^m \big( z^\nu Z_\nu(z) \big) = z^{\nu - m} Z_{\nu - m}(z), \qquad m = 0, 1, 2, \dots, \tag{1.5.14} \]

\[ \Big( \frac{1}{z}\frac{d}{dz} \Big)^m \big( z^{-\nu} Z_\nu(z) \big) = (-1)^m z^{-\nu - m} Z_{\nu + m}(z), \qquad m = 0, 1, 2, \dots. \tag{1.5.15} \]

The important particular cases of these formulas for the Bessel functions are

\[ J_2(z) = \frac{2}{z} J_1(z) - J_0(z), \qquad I_2(z) = -\frac{2}{z} I_1(z) + I_0(z), \tag{1.5.16} \]

and

\[ \frac{d}{dz} J_0(z) = -J_1(z), \qquad \frac{d}{dz} I_0(z) = I_1(z). \tag{1.5.17} \]

The asymptotic behaviour of the Bessel functions at infinity is given by the formulas:

\[ J_\nu(z) = \sqrt{\frac{2}{\pi z}}\, \cos\Big( z - \frac{\pi\nu}{2} - \frac{\pi}{4} \Big) + O(z^{-3/2}), \qquad z \to +\infty, \tag{1.5.18} \]

\[ I_\nu(z) = \frac{e^z}{\sqrt{2\pi z}}\, \big( 1 + O(z^{-1}) \big), \qquad z \to +\infty. \tag{1.5.19} \]

Asymptotic formula (1.5.18) shows that the Bessel function of real argument J_ν(z) tends to zero, as z → +∞, like z^{−1/2}. Formula (1.5.19) expresses the fact that the modified Bessel function I_ν(z) tends to infinity, as z → +∞, like z^{−1/2} e^z, and the first term in the asymptotic expansion of this function does not depend on the index ν. If z > 0 is a positive number, then the Bessel function J_ν(z) is a single-valued function and the following estimate holds:

\[ \Big| \frac{J_\nu(z)}{z^\nu} \Big| \le \frac{1}{2^\nu\, \Gamma(\nu + 1)}, \qquad \nu \ge 0, \tag{1.5.20} \]

as well as the limiting relation

\[ \lim_{z \to 0^+} \frac{J_\nu(z)}{z^\nu} = \frac{1}{2^\nu\, \Gamma(\nu + 1)}, \qquad \nu \ge 0. \tag{1.5.21} \]

Therefore, J_ν(z)/z^ν is a single-valued continuous function, uniformly bounded on the right half-axis z ∈ [0, ∞) and tending to 0, as z → +∞, for arbitrary ν ≥ 0. The modified Bessel function I_ν(z), for z ≥ 0, ν ≥ 0, is a single-valued positive continuous function on the right half-axis z ∈ [0, ∞) tending to ∞ as z → +∞. From the series representation (1.5.3) it follows that

\[ I_0(0) = 1, \qquad I_\nu(0) = 0, \quad \nu > 0. \]

From the integral representation (1.5.7) we can obtain the estimate

\[ I_\nu(z) \le \frac{z^{\nu - 1}}{2^{\nu - 1}\, \Gamma(\nu + 1/2)\, \sqrt{\pi}}\, \sinh z, \qquad \nu \ge \frac{1}{2}, \quad z \ge 0. \tag{1.5.22} \]

Indeed, for ν ≥ 1/2, we have:

\[
\begin{aligned}
I_\nu(z) &= \frac{z^\nu}{2^\nu\, \Gamma(\nu + 1/2)\, \sqrt{\pi}} \int_{-1}^{1} (1 - \xi^2)^{\nu - 1/2} \cosh(\xi z)\, d\xi \\
&\le \frac{z^\nu}{2^\nu\, \Gamma(\nu + 1/2)\, \sqrt{\pi}} \int_{-1}^{1} \cosh(\xi z)\, d\xi \\
&= \frac{z^{\nu - 1}}{2^{\nu - 1}\, \Gamma(\nu + 1/2)\, \sqrt{\pi}} \int_0^1 \cosh(\xi z)\, d(\xi z) \\
&= \frac{z^{\nu - 1}}{2^{\nu - 1}\, \Gamma(\nu + 1/2)\, \sqrt{\pi}}\, \sinh z,
\end{aligned}
\]

proving (1.5.22). Note also the following useful estimates:

\[ I_0(z) \le e^z, \qquad \frac{I_1(z)}{z} \le \frac{1}{2}\, e^z, \qquad z \ge 0. \tag{1.5.23} \]

The first inequality in (1.5.23) follows from the first series representation in (1.5.10) and the chain of inequalities

\[ I_0(z) = \sum_{k=0}^{\infty} \bigg( \frac{(z/2)^k}{k!} \bigg)^2 \le \Bigg( \sum_{k=0}^{\infty} \frac{(z/2)^k}{k!} \Bigg)^2 = e^z. \]

The second inequality in (1.5.23) can be derived by applying the first one:

\[ \frac{I_1(z)}{z} = \frac{1}{2} \sum_{k=0}^{\infty} \frac{1}{k!\,(k+1)!} \Big( \frac{z}{2} \Big)^{2k} \le \frac{1}{2} \sum_{k=0}^{\infty} \frac{1}{(k!)^2} \Big( \frac{z}{2} \Big)^{2k} = \frac{1}{2}\, I_0(z) \le \frac{1}{2}\, e^z. \]
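The bounds (1.5.22) and (1.5.23) can be spot-checked on a grid using partial sums of the series (1.5.3). A short sketch (ours; the grid and the choice ν = 3/2 are arbitrary):

```python
import math

def I(nu, z, terms=60):
    """Partial sum of the series (1.5.3) for the modified Bessel function I_nu(z)."""
    return sum((z / 2) ** (2 * k + nu) / (math.factorial(k) * math.gamma(k + nu + 1))
               for k in range(terms))

grid = [0.1 * n for n in range(1, 80)]
# the two bounds in (1.5.23)
bound_exp = all(I(0, z) <= math.exp(z) for z in grid)
bound_half = all(I(1, z) / z <= math.exp(z) / 2 for z in grid)
# (1.5.22) with nu = 3/2: I_nu(z) <= z^(nu-1) sinh(z) / (2^(nu-1) Gamma(nu+1/2) sqrt(pi))
nu = 1.5
bound_sinh = all(I(nu, z) <= z ** (nu - 1) * math.sinh(z)
                 / (2 ** (nu - 1) * math.gamma(nu + 0.5) * math.sqrt(math.pi))
                 for z in grid)
```

Truncating the series slightly underestimates I_ν, so a failed bound here would indicate a genuine problem rather than rounding noise.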

The cylindrical function of imaginary argument K_ν(z), called the Macdonald function, has the following integral representations:

\[ K_\nu(z) = \int_0^\infty e^{-z \cosh\xi}\, \cosh(\nu\xi)\, d\xi, \qquad |\arg z| < \frac{\pi}{2}, \]

\[ K_\nu(z) = \frac{\sqrt{\pi}\, z^\nu}{2^\nu\, \Gamma\big(\nu + \frac{1}{2}\big)} \int_0^\infty e^{-z \cosh\xi}\, \sinh^{2\nu}\xi\, d\xi, \qquad \mathrm{Re}\,\nu > -\frac{1}{2}, \quad \mathrm{Re}\,z > 0, \]

\[ K_\nu(z) = \frac{\sqrt{\pi}\, z^\nu}{2^\nu\, \Gamma\big(\nu + \frac{1}{2}\big)} \int_1^\infty e^{-z\xi}\, (\xi^2 - 1)^{\nu - 1/2}\, d\xi, \qquad \mathrm{Re}\,\nu + \frac{1}{2} > 0, \quad |\arg z| < \frac{\pi}{2}. \]

The Macdonald function is connected with the Bessel function of imaginary argument by the relations:

\[ I_\nu(z) K_{\nu+1}(z) + I_{\nu+1}(z) K_\nu(z) = \frac{1}{z}, \]

\[ K_\nu(z) = \frac{\pi}{2}\, \frac{I_{-\nu}(z) - I_\nu(z)}{\sin \pi\nu}, \qquad \nu \neq 0, \pm 1, \pm 2, \dots. \]

The following functional relations, quite similar to those of the Bessel functions (1.5.12)–(1.5.17), also hold:

\[ K_{\nu-1}(z) - K_{\nu+1}(z) = -\frac{2\nu}{z}\, K_\nu(z), \]

\[ K_{\nu-1}(z) + K_{\nu+1}(z) = -2\, \frac{d}{dz} K_\nu(z), \]

\[ \Big( \frac{1}{z}\frac{d}{dz} \Big)^m \big( z^\nu K_\nu(z) \big) = (-1)^m z^{\nu - m} K_{\nu - m}(z), \qquad m = 0, 1, 2, \dots, \]

\[ \Big( \frac{1}{z}\frac{d}{dz} \Big)^m \big( z^{-\nu} K_\nu(z) \big) = (-1)^m z^{-\nu - m} K_{\nu + m}(z), \qquad m = 0, 1, 2, \dots, \]

\[ K_{-\nu}(z) = K_\nu(z), \qquad K_2(z) = \frac{2}{z} K_1(z) + K_0(z), \qquad \frac{d}{dz} K_0(z) = -K_1(z). \]

1.5.2

Struve functions

The Struve function and the modified Struve function are defined by the series representations

\[ \mathbf{H}_\nu(z) = \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma\big(k + \frac{3}{2}\big)\, \Gamma\big(\nu + k + \frac{3}{2}\big)} \Big( \frac{z}{2} \Big)^{2k + \nu + 1}, \tag{1.5.24} \]

\[ \mathbf{L}_\nu(z) = \sum_{k=0}^{\infty} \frac{1}{\Gamma\big(k + \frac{3}{2}\big)\, \Gamma\big(\nu + k + \frac{3}{2}\big)} \Big( \frac{z}{2} \Big)^{2k + \nu + 1}, \tag{1.5.25} \]

respectively. In particular,

\[ \mathbf{H}_0(z) = \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma\big(k + \frac{3}{2}\big)^2} \Big( \frac{z}{2} \Big)^{2k+1}, \qquad \mathbf{L}_0(z) = \sum_{k=0}^{\infty} \frac{1}{\Gamma\big(k + \frac{3}{2}\big)^2} \Big( \frac{z}{2} \Big)^{2k+1}. \tag{1.5.26} \]

The integral representations of the Struve functions (1.5.24) and (1.5.25) are:

\[
\begin{aligned}
\mathbf{H}_\nu(z) &= \frac{2\, (z/2)^\nu}{\sqrt{\pi}\, \Gamma\big(\nu + \frac{1}{2}\big)} \int_0^1 (1 - \xi^2)^{\nu - 1/2} \sin(z\xi)\, d\xi \\
&= \frac{2\, (z/2)^\nu}{\sqrt{\pi}\, \Gamma\big(\nu + \frac{1}{2}\big)} \int_0^{\pi/2} \sin(z \cos\theta)\, (\sin\theta)^{2\nu}\, d\theta, \qquad \mathrm{Re}\,\nu > -\frac{1}{2},
\end{aligned} \tag{1.5.27}
\]

\[ \mathbf{L}_\nu(z) = \frac{2\, (z/2)^\nu}{\sqrt{\pi}\, \Gamma\big(\nu + \frac{1}{2}\big)} \int_0^{\pi/2} \sinh(z \cos\theta)\, (\sin\theta)^{2\nu}\, d\theta, \qquad \mathrm{Re}\,\nu > -\frac{1}{2}. \tag{1.5.28} \]

The Struve functions are connected with the Bessel and modified Bessel functions by the relations

\[ \mathbf{H}_{-(n + \frac{1}{2})}(z) = (-1)^n J_{n + \frac{1}{2}}(z), \qquad \mathbf{L}_{-(n + \frac{1}{2})}(z) = I_{n + \frac{1}{2}}(z), \qquad n = 0, 1, 2, \dots. \]

The following functional relation holds:

\[ \frac{d}{dz} \big[ z^\nu \mathbf{H}_\nu(z) \big] = z^\nu \mathbf{H}_{\nu - 1}(z). \]

The differential equation for the Struve functions has the form

\[ z^2 Y'' + z Y' + (z^2 - \nu^2) Y = \frac{4\, (z/2)^{\nu + 1}}{\sqrt{\pi}\, \Gamma\big(\nu + \frac{1}{2}\big)}, \]

where Y stands for the Struve function (1.5.24) or (1.5.25).

1.5.3

Chebyshev polynomials

Chebyshev polynomials of the first kind are defined by T_0(x) ≡ 1 and

\[
\begin{aligned}
T_n(x) &= \cos(n \arccos x) \\
&= \frac{1}{2} \Big[ \big( x + \sqrt{x^2 - 1} \big)^n + \big( x - \sqrt{x^2 - 1} \big)^n \Big] \\
&= \frac{n}{2} \sum_{k=0}^{[n/2]} (-1)^k\, \frac{(n - k - 1)!}{k!\,(n - 2k)!}\, (2x)^{n - 2k}, \qquad n \ge 1,
\end{aligned} \tag{1.5.29}
\]

where [n/2] means the integer part of a number. Chebyshev polynomials of the second kind are defined by U_0(x) ≡ 1 and

\[
\begin{aligned}
U_n(x) &= \frac{\sin[(n+1) \arccos x]}{\sin(\arccos x)} \\
&= \frac{1}{2\sqrt{x^2 - 1}} \Big[ \big( x + \sqrt{x^2 - 1} \big)^{n+1} - \big( x - \sqrt{x^2 - 1} \big)^{n+1} \Big] \\
&= \sum_{k=0}^{[n/2]} (-1)^k\, \frac{(n - k)!}{k!\,(n - 2k)!}\, (2x)^{n - 2k}, \qquad n \ge 1.
\end{aligned} \tag{1.5.30}
\]

The Chebyshev polynomials T_n(x) and U_n(x) are connected with each other by the following recurrence relations:

\[ T_{n+1}(x) = 2x\, T_n(x) - T_{n-1}(x), \qquad U_{n+1}(x) = 2x\, U_n(x) - U_{n-1}(x), \]

\[ T_n(x) = U_n(x) - x\, U_{n-1}(x), \qquad T_{n+1}(x) = x\, T_n(x) - (1 - x^2)\, U_{n-1}(x). \]

The Chebyshev polynomials can be expressed in terms of other functions as follows:

\[ T_n(x) = F\Big( n, -n;\ \frac{1}{2};\ \frac{1 - x}{2} \Big), \]

\[ T_n(x) = \frac{(-1)^n \sqrt{1 - x^2}}{(2n - 1)!!}\, \frac{d^n}{dx^n} (1 - x^2)^{n - 1/2}, \]

\[ U_n(x) = \frac{(-1)^n (n + 1)}{(2n + 1)!!\, \sqrt{1 - x^2}}\, \frac{d^n}{dx^n} (1 - x^2)^{n + 1/2}, \]

where F(α, β; γ; z) is the Gauss hypergeometric function.

The generating functions of the polynomials T_n(x) and U_n(x) are:

\[ 1 + 2 \sum_{k=1}^{\infty} z^k T_k(x) = \frac{1 - z^2}{1 - 2xz + z^2}, \qquad |z| < 1, \]

\[ \sum_{k=0}^{\infty} z^k U_k(x) = \frac{1}{1 - 2xz + z^2}, \qquad |z| < 1. \]

The functions T_n(x) and \(\sqrt{1 - x^2}\, U_{n-1}(x)\) are the two linearly independent solutions of the differential equation

\[ (1 - x^2)\, \frac{d^2 Y}{dx^2} - x\, \frac{dY}{dx} + n^2 Y = 0. \]

The Chebyshev polynomials T_n(x) and U_n(x) are orthogonal on the interval [−1, 1] and the following relations hold:

\[ \int_{-1}^{1} T_n(x)\, T_k(x)\, \frac{dx}{\sqrt{1 - x^2}} = \begin{cases} 0, & \text{if } n \neq k, \\ \pi/2, & \text{if } n = k \neq 0, \\ \pi, & \text{if } n = k = 0, \end{cases} \]

\[ \int_{-1}^{1} \sqrt{1 - x^2}\; U_n(x)\, U_k(x)\, dx = \begin{cases} 0, & \text{if } n \neq k, \\ \pi/2, & \text{if } n = k. \end{cases} \]

1.5.4

Chebyshev polynomials of two variables on Banach algebra

Let B be a commutative (continuous) Banach algebra (also called a normed ring [154]) over the field of complex numbers and let x, y ∈ B be arbitrary elements. Define the functions T_n : B × B → B by the formulas T_0(x, y) ≡ 1 and

\[ T_n(x, y) = \frac{n}{2} \sum_{k=0}^{[n/2]} (-1)^k\, \frac{(n - k - 1)!}{k!\,(n - 2k)!}\, (2x)^{n - 2k} (2y)^k, \qquad n \ge 1, \tag{1.5.31} \]

and the functions U_n : B × B → B by the formulas U_0(x, y) ≡ 1 and

\[ U_n(x, y) = \sum_{k=0}^{[n/2]} (-1)^k\, \frac{(n - k)!}{k!\,(n - 2k)!}\, (2x)^{n - 2k} (2y)^k, \qquad n \ge 1, \tag{1.5.32} \]

where 1 denotes the unit element (with respect to multiplication) in B and [n/2] means the integer part of a number. The functions T_n(x, y) and U_n(x, y) are referred to as the Chebyshev polynomials of two variables of the first and second kind, respectively, on the Banach algebra B. Obviously, if we set B = R^1, that is, the real line R^1 over the field of real numbers, then for y = 1/2 the functions (1.5.31) and (1.5.32) turn into the classical Chebyshev polynomials of the first and second kind (1.5.29) and (1.5.30), respectively.

For the sake of clarity, let us write down the first five polynomials (1.5.31) and (1.5.32):

T_0(x, y) = 1,                  U_0(x, y) = 1,
T_1(x, y) = x,                  U_1(x, y) = 2x,
T_2(x, y) = 2x² − 2y,           U_2(x, y) = 4x² − 2y,
T_3(x, y) = 4x³ − 6xy,          U_3(x, y) = 8x³ − 8xy,
T_4(x, y) = 8x⁴ − 16x²y + 4y²,  U_4(x, y) = 16x⁴ − 24x²y + 4y².
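The table above is easy to cross-check against the defining sums for numeric x, y. A minimal sketch (ours; real numbers stand in for elements of the commutative algebra B):

```python
import math

def T2v(n, x, y):
    """Explicit sum (1.5.31) for T_n(x, y), evaluated for numeric x, y."""
    if n == 0:
        return 1.0
    return (n / 2) * sum((-1) ** k * math.factorial(n - k - 1)
                         / (math.factorial(k) * math.factorial(n - 2 * k))
                         * (2 * x) ** (n - 2 * k) * (2 * y) ** k
                         for k in range(n // 2 + 1))

def U2v(n, x, y):
    """Explicit sum (1.5.32) for U_n(x, y), evaluated for numeric x, y."""
    if n == 0:
        return 1.0
    return sum((-1) ** k * math.factorial(n - k)
               / (math.factorial(k) * math.factorial(n - 2 * k))
               * (2 * x) ** (n - 2 * k) * (2 * y) ** k
               for k in range(n // 2 + 1))
```

Besides reproducing the table, setting y = 1/2 recovers the classical polynomials: T_3(x, 1/2) = 4x³ − 3x = cos(3 arccos x).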

The functions T_n(x, y) and U_n(x, y) possess many properties resembling those of the classical Chebyshev polynomials, which explains their name and the definition given above. We now establish some of the most important properties of the polynomials T_n(x, y) and U_n(x, y).

Theorem 1.5.1. For any x, y ∈ B and any n ≥ 1, the functions T_n(x, y) and U_n(x, y) satisfy the following recurrence relations:

\[ T_{n+1}(x, y) = 2x\, T_n(x, y) - 2y\, T_{n-1}(x, y), \qquad n \ge 1, \tag{1.5.33} \]

\[ U_{n+1}(x, y) = 2x\, U_n(x, y) - 2y\, U_{n-1}(x, y), \qquad n \ge 1. \tag{1.5.34} \]

Proof. Let us prove (1.5.33) for even n. We have:

\[
\begin{aligned}
2x\, T_n(x, y) - 2y\, T_{n-1}(x, y)
&= \frac{n}{2} \sum_{k=0}^{n/2} (-1)^k\, \frac{(n-k-1)!}{k!\,(n-2k)!}\, (2x)^{n-2k+1} (2y)^k \\
&\quad - \frac{n-1}{2} \sum_{k=0}^{(n-2)/2} (-1)^k\, \frac{(n-k-2)!}{k!\,(n-2k-1)!}\, (2x)^{n-2k-1} (2y)^{k+1} \\
&= \frac{n}{2} \sum_{k=0}^{n/2} (-1)^k\, \frac{(n-k-1)!}{k!\,(n-2k)!}\, (2x)^{n-2k+1} (2y)^k \\
&\quad + \frac{n-1}{2} \sum_{k=1}^{n/2} (-1)^k\, \frac{(n-k-1)!}{(k-1)!\,(n-2k+1)!}\, (2x)^{n-2k+1} (2y)^k \\
&= \frac{(2x)^{n+1}}{2} + \frac{1}{2} \sum_{k=1}^{n/2} (-1)^k \Big[ \frac{n}{k} + \frac{n-1}{n-2k+1} \Big] \frac{(n-k-1)!}{(k-1)!\,(n-2k)!}\, (2x)^{n-2k+1} (2y)^k \\
&= \frac{(2x)^{n+1}}{2} + \frac{n+1}{2} \sum_{k=1}^{n/2} (-1)^k\, \frac{(n-k)!}{k!\,(n-2k+1)!}\, (2x)^{n-2k+1} (2y)^k \\
&= \frac{n+1}{2} \sum_{k=0}^{[(n+1)/2]} (-1)^k\, \frac{((n+1)-k-1)!}{k!\,((n+1)-2k)!}\, (2x)^{(n+1)-2k} (2y)^k \\
&= T_{n+1}(x, y),
\end{aligned}
\]

proving (1.5.33) for even n. Let us now prove (1.5.33) for odd n. We have:

\[
\begin{aligned}
2x\, T_n(x, y) - 2y\, T_{n-1}(x, y)
&= \frac{n}{2} \sum_{k=0}^{(n-1)/2} (-1)^k\, \frac{(n-k-1)!}{k!\,(n-2k)!}\, (2x)^{n-2k+1} (2y)^k \\
&\quad - \frac{n-1}{2} \sum_{k=0}^{(n-1)/2} (-1)^k\, \frac{(n-k-2)!}{k!\,(n-2k-1)!}\, (2x)^{n-2k-1} (2y)^{k+1} \\
&= \frac{(2x)^{n+1}}{2} + \frac{1}{2} \sum_{k=1}^{(n-1)/2} (-1)^k \Big[ \frac{n}{k} + \frac{n-1}{n-2k+1} \Big] \frac{(n-k-1)!}{(k-1)!\,(n-2k)!}\, (2x)^{n-2k+1} (2y)^k \\
&\quad + (-1)^{(n+1)/2}\, (2y)^{(n+1)/2} \\
&= \frac{(2x)^{n+1}}{2} + \frac{n+1}{2} \sum_{k=1}^{(n-1)/2} (-1)^k\, \frac{(n-k)!}{k!\,(n-2k+1)!}\, (2x)^{n-2k+1} (2y)^k + (-1)^{(n+1)/2}\, (2y)^{(n+1)/2} \\
&= \frac{n+1}{2} \sum_{k=0}^{[(n+1)/2]} (-1)^k\, \frac{((n+1)-k-1)!}{k!\,((n+1)-2k)!}\, (2x)^{(n+1)-2k} (2y)^k \\
&= T_{n+1}(x, y),
\end{aligned}
\]

proving (1.5.33) for odd n. Thus, (1.5.33) is proved. The proof of the recurrence relation (1.5.34) can be carried out in the same manner and is left to the reader. The theorem is completely proved.

Based on the recurrence relations (1.5.33) and (1.5.34), we can prove other interesting relations for the generalized Chebyshev polynomials (1.5.31) and (1.5.32).

Theorem 1.5.2. The polynomials (1.5.31) and (1.5.32) are connected with each other by the following recurrence relations:

\[ 2\, T_n(x, y) = U_n(x, y) - 2y\, U_{n-2}(x, y), \qquad n \ge 2, \tag{1.5.35} \]

$$T_n(x,y) = U_n(x,y) - xU_{n-1}(x,y), \qquad n \ge 1. \tag{1.5.36}$$
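Although the polynomials take values in a Banach algebra B, both the defining sums and the relations above can be sanity-checked numerically in the simplest commutative case B = R. The following Python sketch is illustrative only; the helper names `T` and `U` are ours, implementing the sums (1.5.31) and (1.5.32) for scalar arguments:

```python
from math import factorial

def T(n, x, y):
    # Generalized Chebyshev polynomial (1.5.31); T_0 = 1 by convention.
    if n == 0:
        return 1.0
    return (n / 2) * sum(
        (-1) ** k * factorial(n - k - 1) / (factorial(k) * factorial(n - 2 * k))
        * (2 * x) ** (n - 2 * k) * (2 * y) ** k
        for k in range(n // 2 + 1)
    )

def U(n, x, y):
    # Generalized Chebyshev polynomial (1.5.32); U_0 = 1, U_1 = 2x.
    return sum(
        (-1) ** k * factorial(n - k) / (factorial(k) * factorial(n - 2 * k))
        * (2 * x) ** (n - 2 * k) * (2 * y) ** k
        for k in range(n // 2 + 1)
    )

x, y = 0.7, 0.3
for n in range(2, 12):
    # recurrences (1.5.33) and (1.5.34)
    assert abs(T(n + 1, x, y) - (2 * x * T(n, x, y) - 2 * y * T(n - 1, x, y))) < 1e-12
    assert abs(U(n + 1, x, y) - (2 * x * U(n, x, y) - 2 * y * U(n - 1, x, y))) < 1e-12
    # relations (1.5.35) and (1.5.36)
    assert abs(2 * T(n, x, y) - (U(n, x, y) - 2 * y * U(n - 2, x, y))) < 1e-12
    assert abs(T(n, x, y) - (U(n, x, y) - x * U(n - 1, x, y))) < 1e-12
```

Note that for y = 1/2 these reduce to the classical Chebyshev polynomials of one variable.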

Proof. First we prove equality (1.5.35). For any x, y ∈ B, we have:

$$
\begin{aligned}
U_n(x,y) - 2yU_{n-2}(x,y)
&= \sum_{k=0}^{[n/2]}(-1)^k\frac{(n-k)!}{k!\,(n-2k)!}\,(2x)^{n-2k}(2y)^k - \sum_{k=0}^{[(n-2)/2]}(-1)^k\frac{(n-k-2)!}{k!\,(n-2k-2)!}\,(2x)^{n-2k-2}(2y)^{k+1} \\
&= (2x)^n + \sum_{k=1}^{[n/2]}(-1)^k\frac{(n-k)!}{k!\,(n-2k)!}\,(2x)^{n-2k}(2y)^k + \sum_{k=1}^{[n/2]}(-1)^k\frac{(n-k-1)!}{(k-1)!\,(n-2k)!}\,(2x)^{n-2k}(2y)^k \\
&= (2x)^n + \sum_{k=1}^{[n/2]}(-1)^k\left(\frac{(n-k)!}{k!\,(n-2k)!}+\frac{(n-k-1)!}{(k-1)!\,(n-2k)!}\right)(2x)^{n-2k}(2y)^k \\
&= (2x)^n + n\sum_{k=1}^{[n/2]}(-1)^k\frac{(n-k-1)!}{k!\,(n-2k)!}\,(2x)^{n-2k}(2y)^k \\
&= 2\,\frac{n}{2}\sum_{k=0}^{[n/2]}(-1)^k\frac{(n-k-1)!}{k!\,(n-2k)!}\,(2x)^{n-2k}(2y)^k
\end{aligned}
$$

$$= 2T_n(x,y),$$

proving (1.5.35). Applying recurrence relation (1.5.34) and the just-proved equality (1.5.35), we get:

$$2T_n(x,y) = U_n(x,y) - 2yU_{n-2}(x,y) = U_n(x,y) + \bigl(U_n(x,y) - 2xU_{n-1}(x,y)\bigr) = 2U_n(x,y) - 2xU_{n-1}(x,y),$$

and (1.5.36) is also proved.

If in the Banach algebra B the operation of differentiation with respect to its elements (strong Fréchet differentiation) is defined, then the following theorem is true.

Theorem 1.5.3. The following relations hold:

$$\frac{\partial}{\partial x}T_n(x,y) = n\,U_{n-1}(x,y), \qquad n \ge 1, \tag{1.5.37}$$
$$\frac{\partial}{\partial y}T_n(x,y) = -n\,U_{n-2}(x,y), \qquad n \ge 2, \tag{1.5.38}$$
$$\left(x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}\right)T_n(x,y) = \frac{n}{2}\,U_n(x,y), \qquad n \ge 0, \tag{1.5.39}$$

where the operators ∂/∂x, ∂/∂y denote strong differentiation in the (continuous) Banach algebra B.


Proof. We prove (1.5.37) by induction. It is easy to see that (1.5.37) holds for n = 1 and n = 2. Suppose that (1.5.37) holds for all k ≤ n. Then, differentiating (1.5.33) in x and taking into account the induction assumption and formulas (1.5.34) and (1.5.35), we obtain:

$$
\begin{aligned}
\frac{\partial}{\partial x}T_{n+1}(x,y) &= 2T_n(x,y) + 2x\frac{\partial}{\partial x}T_n(x,y) - 2y\frac{\partial}{\partial x}T_{n-1}(x,y) \\
&= 2T_n(x,y) + 2nxU_{n-1}(x,y) - 2y(n-1)U_{n-2}(x,y) \\
&= 2T_n(x,y) + n\left[2xU_{n-1}(x,y) - 2yU_{n-2}(x,y)\right] + 2yU_{n-2}(x,y) \\
&= U_n(x,y) - 2yU_{n-2}(x,y) + nU_n(x,y) + 2yU_{n-2}(x,y) \\
&= (n+1)U_n(x,y),
\end{aligned}
$$

proving (1.5.37).

The proof of formula (1.5.38) is also carried out by induction. Obviously, (1.5.38) is true for n = 1, n = 2 and n = 3. Suppose that (1.5.38) holds for all k ≤ n. Then, differentiating (1.5.33) in y and taking into account the induction assumption and formulas (1.5.34) and (1.5.36), we get:

$$
\begin{aligned}
\frac{\partial}{\partial y}T_{n+1}(x,y) &= 2x\frac{\partial}{\partial y}T_n(x,y) - 2T_{n-1}(x,y) - 2y\frac{\partial}{\partial y}T_{n-1}(x,y) \\
&= -2nxU_{n-2}(x,y) - 2T_{n-1}(x,y) + 2y(n-1)U_{n-3}(x,y) \\
&= -2nxU_{n-2}(x,y) - 2\left[U_{n-1}(x,y) - xU_{n-2}(x,y)\right] + 2y(n-1)U_{n-3}(x,y) \\
&= -2(n-1)xU_{n-2}(x,y) - 2U_{n-1}(x,y) + 2y(n-1)U_{n-3}(x,y) \\
&= -(n-1)\left[2xU_{n-2}(x,y) - 2yU_{n-3}(x,y)\right] - 2U_{n-1}(x,y) \\
&= -(n-1)U_{n-1}(x,y) - 2U_{n-1}(x,y) = -(n+1)U_{n-1}(x,y),
\end{aligned}
$$

proving (1.5.38). Applying the just-proved formulas (1.5.37) and (1.5.38) and relation (1.5.34), we obtain:

$$\left(x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}\right)T_n(x,y) = n\left[xU_{n-1}(x,y) - yU_{n-2}(x,y)\right] = \frac{n}{2}\,U_n(x,y),$$

and equality (1.5.39) is also true. The theorem is completely proved.

The next theorem gives some equivalent representations of the polynomials T_n(x,y).

Theorem 1.5.4. The following relations hold:

$$T_n(x,y) = \frac{1}{2}\left\{\left(x+\sqrt{x^2-2y}\right)^n + \left(x-\sqrt{x^2-2y}\right)^n\right\}, \qquad n \ge 0, \tag{1.5.40}$$
$$T_n(x,y) = \sum_{k=0}^{[n/2]}\binom{n}{2k}\,x^{n-2k}\left(x^2-2y\right)^k, \qquad n \ge 0, \tag{1.5.41}$$

where in formula (1.5.40) it is supposed that B also contains the square roots of its elements and that the same branch of the root is taken in both terms.

Proof. Applying the well-known combinatorial identity (see [187])

$$x^n + y^n = \sum_{k=0}^{[n/2]}(-1)^k\frac{n}{n-k}\binom{n-k}{k}(x+y)^{n-2k}(xy)^k, \qquad n \ge 1, \tag{1.5.42}$$


we obtain, applying (1.5.42) to the elements x ± √(x² − 2y), whose sum is 2x and whose product is 2y:

$$
\begin{aligned}
\frac{1}{2}\left\{\left(x+\sqrt{x^2-2y}\right)^n + \left(x-\sqrt{x^2-2y}\right)^n\right\}
&= \frac{1}{2}\sum_{k=0}^{[n/2]}(-1)^k\frac{n}{n-k}\binom{n-k}{k}(2x)^{n-2k}(2y)^k \\
&= \frac{n}{2}\sum_{k=0}^{[n/2]}(-1)^k\frac{(n-k-1)!}{k!\,(n-2k)!}\,(2x)^{n-2k}(2y)^k \\
&= T_n(x,y),
\end{aligned}
$$

and (1.5.40) is thus true. Equality (1.5.41) can be obtained by applying Newton's binomial theorem to (1.5.40).

We note also the following elegant form of equality (1.5.40):

$$x^n + y^n = 2\,T_n\!\left(\frac{x+y}{2},\,\frac{xy}{2}\right), \tag{1.5.43}$$

which is valid for any elements x, y ∈ B. This formula relates the sum of the powers of any two elements of the Banach algebra B to their sum and product.

If the elements x, y ∈ B and a complex number z are such that the inverse element (1 − 2xz + 2yz²)⁻¹ ∈ B exists, then the following formulas for the generating functions of the polynomials T_n(x,y) and U_n(x,y) hold.

Theorem 1.5.5. The generating functions of the polynomials T_n(x,y) and U_n(x,y) have the form:

$$\sum_{k=0}^{\infty} z^k T_k(x,y) = (1-xz)(1-2xz+2yz^2)^{-1}, \tag{1.5.44}$$
$$\sum_{k=0}^{\infty} z^k U_k(x,y) = (1-2xz+2yz^2)^{-1}. \tag{1.5.45}$$

The series on the left-hand sides of these equalities converge (in the norm of B) uniformly in x and y in a sufficiently small neighbourhood of the null element of the Banach algebra B, for an arbitrary complex number z such that |z| < 1/2.

Proof. Denote

$$\varphi(z) = \sum_{k=0}^{\infty} z^k T_k(x,y).$$

Then, applying (1.5.33), we have:

$$
\begin{aligned}
\varphi(z) &= T_0(x,y) + zT_1(x,y) + \sum_{k=2}^{\infty} z^k T_k(x,y) \\
&= T_0(x,y) + zT_1(x,y) + \sum_{k=2}^{\infty} z^k\bigl(2xT_{k-1}(x,y) - 2yT_{k-2}(x,y)\bigr) \\
&= T_0(x,y) + zT_1(x,y) + 2xz\sum_{k=1}^{\infty} z^k T_k(x,y) - 2yz^2\sum_{k=0}^{\infty} z^k T_k(x,y) \\
&= T_0(x,y) + zT_1(x,y) + 2xz\bigl(\varphi(z) - T_0(x,y)\bigr) - 2yz^2\varphi(z).
\end{aligned}
$$


Taking into account that T₀(x,y) = 1 and T₁(x,y) = x, we get

$$\varphi(z) = 1 + xz + 2xz(\varphi(z)-1) - 2yz^2\varphi(z)$$

and, therefore,

$$\varphi(z) = (1-xz)(1-2xz+2yz^2)^{-1},$$

proving (1.5.44). Let us now prove (1.5.45). Introduce the function

$$\psi(z) = \sum_{k=0}^{\infty} z^k U_k(x,y).$$

Then, in view of (1.5.34), we have:

$$
\begin{aligned}
\psi(z) &= U_0(x,y) + zU_1(x,y) + \sum_{k=2}^{\infty} z^k U_k(x,y) \\
&= U_0(x,y) + zU_1(x,y) + \sum_{k=2}^{\infty} z^k\bigl(2xU_{k-1}(x,y) - 2yU_{k-2}(x,y)\bigr) \\
&= U_0(x,y) + zU_1(x,y) + 2xz\sum_{k=1}^{\infty} z^k U_k(x,y) - 2yz^2\sum_{k=0}^{\infty} z^k U_k(x,y) \\
&= U_0(x,y) + zU_1(x,y) + 2xz\bigl(\psi(z) - U_0(x,y)\bigr) - 2yz^2\psi(z).
\end{aligned}
$$

Taking into account that U₀(x,y) = 1 and U₁(x,y) = 2x, we get

$$\psi(z) = 1 + 2xz + 2xz(\psi(z)-1) - 2yz^2\psi(z)$$

and, therefore, we arrive at the equality

$$\psi(z) = (1-2xz+2yz^2)^{-1},$$

proving (1.5.45). Let us now establish the uniform convergence of the series on the left-hand sides of formulas (1.5.44) and (1.5.45). Take arbitrary elements x, y ∈ B from a sufficiently small neighbourhood of the null element (that is, of the neutral element with respect to addition) of the Banach algebra B, such that ‖x‖
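For scalar x, y and |z| < 1/2, the generating functions (1.5.44)–(1.5.45) can be checked numerically. The illustrative sketch below builds T_k and U_k from the recurrences (1.5.33)–(1.5.34) and compares the summed series with the closed forms (all names are ours):

```python
# Scalar sanity check of (1.5.44)-(1.5.45): T_k, U_k via the recurrences
# T_0 = 1, T_1 = x, U_0 = 1, U_1 = 2x, P_{k+1} = 2x P_k - 2y P_{k-1}.
x, y, z = 0.2, 0.1, 0.4

T = [1.0, x]
U = [1.0, 2 * x]
for k in range(1, 200):
    T.append(2 * x * T[k] - 2 * y * T[k - 1])
    U.append(2 * x * U[k] - 2 * y * U[k - 1])

lhs_T = sum(z ** k * T[k] for k in range(len(T)))
lhs_U = sum(z ** k * U[k] for k in range(len(U)))

rhs_T = (1 - x * z) / (1 - 2 * x * z + 2 * y * z ** 2)
rhs_U = 1 / (1 - 2 * x * z + 2 * y * z ** 2)

assert abs(lhs_T - rhs_T) < 1e-10
assert abs(lhs_U - rhs_U) < 1e-10
```

Here |T_k| and |U_k| grow at most like √(2y)^k (the modulus of the roots of λ² = 2xλ − 2y for x² < 2y), so 200 terms are far more than enough for this choice of parameters.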
1. Hypergeometric series (1.6.14) determines a holomorphic (analytic) function that, generally speaking, has the singular points z = 0, 1, ∞ (branch points, in the general case). Let us cut the z-plane along the real axis from the point z = 1 to the point z = ∞. In this cut plane the hypergeometric series F(α, β; γ; z) admits a single-valued analytic continuation into the domain |z| > 1, where F(α, β; γ; z) takes complex values.

Integral representation of the Gauss hypergeometric function:

$$F(\alpha,\beta;\gamma;z) = \frac{1}{B(\beta,\gamma-\beta)}\int_0^1 \xi^{\beta-1}(1-\xi)^{\gamma-\beta-1}(1-z\xi)^{-\alpha}\,d\xi, \qquad \operatorname{Re}\gamma > \operatorname{Re}\beta > 0, \tag{1.6.15}$$

where B(x, y) is the beta-function defined by (1.6.6) and (1.6.7).

The following transformation formulas hold:

$$
\begin{aligned}
F(\alpha,\beta;\gamma;z) &= (1-z)^{\gamma-\alpha-\beta}F(\gamma-\alpha,\gamma-\beta;\gamma;z) \\
&= (1-z)^{-\alpha}F\!\left(\alpha,\gamma-\beta;\gamma;\frac{z}{z-1}\right) \\
&= (1-z)^{-\beta}F\!\left(\gamma-\alpha,\beta;\gamma;\frac{z}{z-1}\right).
\end{aligned}
\tag{1.6.16}
$$

The Gauss hypergeometric function, for various combinations of its coefficients, generates a lot of particular functions (see, for example, [63] or [178]). We give only a few formulas that we need in later chapters:

$$F(-n,\beta;\beta;z) = (1-z)^n \quad\text{for arbitrary } \beta, \tag{1.6.17}$$
$$F\!\left(\frac12,\,1;\,\frac32;\,z^2\right) = \frac{1}{2z}\,\ln\!\left(\frac{1+z}{1-z}\right), \tag{1.6.18}$$
$$F\!\left(\frac12,\,\frac12;\,\frac32;\,z^2\right) = \frac{\arcsin z}{z}. \tag{1.6.19}$$

Note also the formula for the value of the Gauss hypergeometric function at the point z = 1:

$$F(\alpha,\beta;\gamma;1) = \frac{\Gamma(\gamma)\,\Gamma(\gamma-\alpha-\beta)}{\Gamma(\gamma-\alpha)\,\Gamma(\gamma-\beta)}, \qquad \operatorname{Re}(\gamma-\alpha-\beta) > 0. \tag{1.6.20}$$
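Formulas (1.6.17)–(1.6.20) are easy to verify numerically with a truncated version of series (1.6.14). The following illustrative sketch (the helper `hyp2f1` is ours) assumes |z| < 1, and at z = 1 relies on the slower convergence guaranteed by Re(γ − α − β) > 0:

```python
from math import gamma, log, asin

def hyp2f1(a, b, c, z, terms=4000):
    # Truncated Gauss hypergeometric series (1.6.14); valid for |z| < 1,
    # and at z = 1 when Re(c - a - b) > 0 (slower convergence).
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
    return s

z = 0.5
# (1.6.17): F(-n, b; b; z) = (1 - z)^n
assert abs(hyp2f1(-3, 2.5, 2.5, z) - (1 - z) ** 3) < 1e-12
# (1.6.18): F(1/2, 1; 3/2; z^2) = ln((1+z)/(1-z)) / (2z)
assert abs(hyp2f1(0.5, 1, 1.5, z * z) - log((1 + z) / (1 - z)) / (2 * z)) < 1e-12
# (1.6.19): F(1/2, 1/2; 3/2; z^2) = arcsin(z) / z
assert abs(hyp2f1(0.5, 0.5, 1.5, z * z) - asin(z) / z) < 1e-12
# (1.6.20) at z = 1 with a = b = 1/2, c = 3 (c - a - b = 2 > 0)
gauss = gamma(3) * gamma(2) / (gamma(2.5) * gamma(2.5))
assert abs(hyp2f1(0.5, 0.5, 3, 1.0) - gauss) < 1e-5
```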

The hypergeometric function F(α, β; γ; z) satisfies a number of Gauss recurrence relations (see, for instance, [63, Formulas 9.137(1–18)]). In particular,

$$\gamma F(\alpha,\beta-1;\gamma;z) - \gamma F(\alpha-1,\beta;\gamma;z) + (\alpha-\beta)\,z\,F(\alpha,\beta;\gamma+1;z) = 0, \tag{1.6.21}$$
$$\gamma F(\alpha,\beta;\gamma;z) - \gamma F(\alpha,\beta+1;\gamma;z) + \alpha z\,F(\alpha+1,\beta+1;\gamma+1;z) = 0, \tag{1.6.22}$$
$$\gamma F(\alpha,\beta;\gamma;z) - \gamma F(\alpha+1,\beta;\gamma;z) + \beta z\,F(\alpha+1,\beta+1;\gamma+1;z) = 0. \tag{1.6.23}$$

Hypergeometric series (1.6.14) is a solution to the hypergeometric differential equation:

$$z(1-z)\frac{d^2u}{dz^2} + \bigl[\gamma - (\alpha+\beta+1)z\bigr]\frac{du}{dz} - \alpha\beta u = 0. \tag{1.6.24}$$

Equation (1.6.24) has two linearly independent solutions that can be analytically continued to the whole z-plane except, possibly, the three singular points z = 0, 1, ∞. Note also the following differentiation formula:

$$\frac{d^n}{dz^n}F(\alpha,\beta;\gamma;z) = \frac{(\alpha)_n(\beta)_n}{(\gamma)_n}\,F(\alpha+n,\beta+n;\gamma+n;z), \qquad n \ge 1. \tag{1.6.25}$$

1.6.3 Powers of Gauss hypergeometric function

Here we derive series representations for some powers of the following particular case of the Gauss hypergeometric function (1.6.14):

$$F\!\left(\frac12,\,\frac{m-2}{2};\,\frac m2;\,z\right) = \sum_{k=0}^{\infty}\frac{(\frac12)_k\,(\frac{m-2}{2})_k}{(\frac m2)_k}\,\frac{z^k}{k!} = \frac{m-2}{\sqrt{\pi}}\sum_{k=0}^{\infty}\frac{\Gamma(k+\frac12)}{k!\,(2k+m-2)}\,z^k, \qquad |z|\le 1,\quad m\ge 3, \tag{1.6.26}$$

where m ≥ 3 is an arbitrary integer. Note that the series on the right-hand side of (1.6.26) converges absolutely and uniformly in the closed unit disc K = {z ∈ C : |z| ≤ 1} of the complex plane C.

Lemma 1.6.1. For arbitrary z ∈ C, |z| ≤ 1, and arbitrary integer m ≥ 3, the following series representation holds:

$$\left[F\!\left(\frac12,\,\frac{m-2}{2};\,\frac m2;\,z\right)\right]^2 = \frac{(m-2)\,\Gamma(\frac m2)}{\Gamma(\frac{m-1}{2})}\sum_{k=0}^{\infty}\frac{\Gamma(k+\frac{m-1}{2})}{\Gamma(k+\frac m2)\,(k+m-2)}\,z^k, \qquad |z|\le 1,\quad m\ge 3. \tag{1.6.27}$$

The series in (1.6.27) converges uniformly in K.

Proof. It is known (see [215, Chapter 14, Example 11]) that if the coefficients of the Gauss hypergeometric function F(a, b; c; z) satisfy the condition a + b + 1/2 = c, then the following formula holds:

$$\bigl[F(a,b;c;z)\bigr]^2 = \frac{\Gamma(c)\,\Gamma(2c-1)}{\Gamma(2a)\,\Gamma(2b)\,\Gamma(a+b)}\sum_{k=0}^{\infty}\frac{\Gamma(k+2a)\,\Gamma(k+2b)\,\Gamma(k+a+b)}{\Gamma(k+c)\,\Gamma(k+2c-1)}\,\frac{z^k}{k!}. \tag{1.6.28}$$

Since this condition is fulfilled for the coefficients of the Gauss hypergeometric function (1.6.26), that is, 1/2 + (m−2)/2 + 1/2 = m/2, applying (1.6.28) to this function we arrive at (1.6.27). For |z| ≤ 1, the series on the right-hand side of (1.6.27) admits the bound

$$\left|\sum_{k=0}^{\infty}\frac{\Gamma(k+\frac{m-1}{2})}{\Gamma(k+\frac m2)\,(k+m-2)}\,z^k\right| \le \sum_{k=0}^{\infty}\frac{\Gamma(k+\frac{m-1}{2})}{\Gamma(k+\frac m2)\,(k+m-2)} = \frac{\pi\,\Gamma(\frac m2)}{(m-2)\,\Gamma(\frac{m-1}{2})} < \infty,$$

proving its uniform convergence in K. The lemma is proved.

Lemma 1.6.2. For arbitrary z ∈ C, |z| ≤ 1, and arbitrary integer m ≥ 3, the following series representation holds:

$$\left[F\!\left(\frac12,\,\frac{m-2}{2};\,\frac m2;\,z\right)\right]^3 = \frac{3(m-2)^2\,\Gamma(\frac m2)}{\sqrt{\pi}\,\Gamma(\frac{m-1}{2})}\sum_{k=0}^{\infty}\frac{\xi_k}{2k+3(m-2)}\,z^k, \tag{1.6.29}$$

where the coefficients ξ_k, k ≥ 0, are given by the formula

$$\xi_k = \sum_{l=0}^{k}\frac{\Gamma\!\left(k-l+\frac12\right)\Gamma\!\left(l+\frac{m-1}{2}\right)}{(k-l)!\,\Gamma\!\left(l+\frac m2\right)(l+m-2)}, \qquad k \ge 0,\quad m \ge 3,\quad |z| \le 1. \tag{1.6.30}$$

The series in (1.6.29) converges uniformly in K.

Proof. Multiplying the functions given by (1.6.26) and (1.6.27) and using the standard formula for the product of two convergent series, we obtain

$$\left[F\!\left(\frac12,\,\frac{m-2}{2};\,\frac m2;\,z\right)\right]^3 = \frac{(m-2)^2\,\Gamma(\frac m2)}{\sqrt{\pi}\,\Gamma(\frac{m-1}{2})}\sum_{k=0}^{\infty}\gamma_k\,z^k, \tag{1.6.31}$$


where the coefficients γ_k have the form:

$$\gamma_k = \sum_{l=0}^{k}\frac{\Gamma\!\left(k-l+\frac12\right)\Gamma\!\left(l+\frac{m-1}{2}\right)}{(2k-2l+m-2)\,(l+m-2)\,(k-l)!\,\Gamma\!\left(l+\frac m2\right)}, \qquad k \ge 0.$$

Taking into account that

$$\frac{1}{(2k-2l+m-2)(l+m-2)} = \frac{2}{2k+3(m-2)}\left(\frac{1}{2k-2l+m-2} + \frac{1}{2(l+m-2)}\right),$$

we obtain:

$$
\begin{aligned}
\gamma_k &= \frac{2}{2k+3(m-2)}\sum_{l=0}^{k}\left(\frac{1}{2k-2l+m-2}+\frac{1}{2(l+m-2)}\right)\frac{\Gamma\!\left(k-l+\frac12\right)\Gamma\!\left(l+\frac{m-1}{2}\right)}{(k-l)!\,\Gamma\!\left(l+\frac m2\right)} \\
&= \frac{2}{2k+3(m-2)}\sum_{l=0}^{k}\frac{\Gamma\!\left(k-l+\frac12\right)\Gamma\!\left(l+\frac{m-1}{2}\right)}{(k-l)!\,\Gamma\!\left(l+\frac m2\right)(2k-2l+m-2)} + \frac{1}{2k+3(m-2)}\sum_{l=0}^{k}\frac{\Gamma\!\left(k-l+\frac12\right)\Gamma\!\left(l+\frac{m-1}{2}\right)}{(k-l)!\,\Gamma\!\left(l+\frac m2\right)(l+m-2)} \\
&= \frac{3}{2k+3(m-2)}\sum_{l=0}^{k}\frac{\Gamma\!\left(k-l+\frac12\right)\Gamma\!\left(l+\frac{m-1}{2}\right)}{(k-l)!\,\Gamma\!\left(l+\frac m2\right)(l+m-2)}.
\end{aligned}
$$

Substituting these coefficients into (1.6.31), we obtain (1.6.29). The uniform convergence of series (1.6.29) in K follows from that of the series in (1.6.26) and (1.6.27).

Lemma 1.6.3. For arbitrary z ∈ C, |z| ≤ 1, and arbitrary integer m ≥ 3, the following series representation holds:

$$\left[F\!\left(\frac12,\,\frac{m-2}{2};\,\frac m2;\,z\right)\right]^4 = 2\left[\frac{(m-2)\,\Gamma(\frac m2)}{\Gamma(\frac{m-1}{2})}\right]^2\sum_{k=0}^{\infty}\frac{\eta_k}{k+2(m-2)}\,z^k, \tag{1.6.32}$$

where the coefficients η_k, k ≥ 0, are given by the formula

$$\eta_k = \sum_{l=0}^{k}\frac{\Gamma\!\left(l+\frac{m-1}{2}\right)\Gamma\!\left(k-l+\frac{m-1}{2}\right)}{\Gamma\!\left(l+\frac m2\right)(l+m-2)\,\Gamma\!\left(k-l+\frac m2\right)}, \qquad k \ge 0,\quad m \ge 3,\quad |z| \le 1. \tag{1.6.33}$$

The series in (1.6.32) converges uniformly in K.

Proof. The proof is similar to that of Lemma 1.6.2 and follows immediately from (1.6.27) by using the relation

$$\frac{1}{(k-l+m-2)(l+m-2)} = \frac{1}{k+2(m-2)}\left(\frac{1}{k-l+m-2} + \frac{1}{l+m-2}\right).$$

In the same manner one can obtain similar series representations for higher powers of the Gauss hypergeometric function (1.6.26); however, the coefficients of such series take a very complicated and cumbersome form.
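The squaring formula of Lemma 1.6.1 lends itself to a quick numerical check. For m = 3, formula (1.6.19) gives F(1/2, 1/2; 3/2; z) = arcsin(√z)/√z, so the left-hand side of (1.6.27) is arcsin(√z)²/z in closed form. The sketch below (illustrative only) compares it with the series:

```python
from math import gamma, sqrt, asin

# Check of (1.6.27) for m = 3, z = 0.4; the left-hand side uses the
# closed form arcsin(sqrt(z))^2 / z obtained from (1.6.19).
m, z = 3, 0.4
lhs = asin(sqrt(z)) ** 2 / z

# Right-hand side of (1.6.27); the gamma ratio c is updated iteratively
# to avoid overflow of gamma() at large arguments.
rhs, zk = 0.0, 1.0
c = gamma((m - 1) / 2) / gamma(m / 2)   # Gamma(k+(m-1)/2)/Gamma(k+m/2) at k = 0
for k in range(200):
    rhs += c / (k + m - 2) * zk
    c *= (k + (m - 1) / 2) / (k + m / 2)
    zk *= z
rhs *= (m - 2) * gamma(m / 2) / gamma((m - 1) / 2)

assert abs(lhs - rhs) < 1e-10
```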


1.6.4 General hypergeometric functions

The general hypergeometric function is defined by the following general hypergeometric series:

$$\,_pF_q(\alpha_1,\alpha_2,\ldots,\alpha_p;\,\beta_1,\beta_2,\ldots,\beta_q;\,z) = \sum_{k=0}^{\infty}\frac{(\alpha_1)_k(\alpha_2)_k\cdots(\alpha_p)_k}{(\beta_1)_k(\beta_2)_k\cdots(\beta_q)_k}\,\frac{z^k}{k!}, \tag{1.6.34}$$

where (α_i)_k, (β_j)_k, i = 1, 2, …, p, j = 1, 2, …, q, are the Pochhammer symbols. In particular, in later chapters we will need the general hypergeometric function

$$\,_3F_2(\alpha_1,\alpha_2,\alpha_3;\,\beta_1,\beta_2;\,z) = \sum_{k=0}^{\infty}\frac{(\alpha_1)_k(\alpha_2)_k(\alpha_3)_k}{(\beta_1)_k(\beta_2)_k}\,\frac{z^k}{k!}, \tag{1.6.35}$$

which, for a special combination of its coefficients, is connected with the Gauss hypergeometric function (1.6.14) by the following relation (see [178, item 7.4.1, Formula 5]):

$$\,_3F_2(\alpha_1,\alpha_2,\alpha_3;\,\alpha_1+1,\alpha_2+1;\,z) = \frac{1}{\alpha_2-\alpha_1}\Bigl[\alpha_2\,F(\alpha_1,\alpha_3;\alpha_1+1;z) - \alpha_1\,F(\alpha_2,\alpha_3;\alpha_2+1;z)\Bigr]. \tag{1.6.36}$$
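Relation (1.6.36) can be checked numerically by truncating the series (1.6.14) and (1.6.35). The helpers below are illustrative sketches (term ratios are used so that no large factorials are formed), valid for |z| < 1:

```python
def hyp2f1(a, b, c, z, N=200):
    # truncated Gauss series (1.6.14)
    s, t = 0.0, 1.0
    for k in range(N):
        s += t
        t *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
    return s

def hyp3f2(a1, a2, a3, b1, b2, z, N=200):
    # truncated series (1.6.35)
    s, t = 0.0, 1.0
    for k in range(N):
        s += t
        t *= (a1 + k) * (a2 + k) * (a3 + k) / ((b1 + k) * (b2 + k) * (k + 1)) * z
    return s

a1, a2, a3, z = 0.5, 1.5, 0.25, 0.3
lhs = hyp3f2(a1, a2, a3, a1 + 1, a2 + 1, z)
rhs = (a2 * hyp2f1(a1, a3, a1 + 1, z)
       - a1 * hyp2f1(a2, a3, a2 + 1, z)) / (a2 - a1)
assert abs(lhs - rhs) < 1e-12
```

The identity in fact holds term by term, since α₁α₂/((α₁+k)(α₂+k)) splits into partial fractions over α₁+k and α₂+k.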

The series

$$\Phi(\alpha;\gamma;z) \equiv \,_1F_1(\alpha;\gamma;z) = \sum_{k=0}^{\infty}\frac{(\alpha)_k}{(\gamma)_k}\,\frac{z^k}{k!} \tag{1.6.37}$$

is referred to as the degenerate (confluent) hypergeometric function. It has the following integral representation:

$$\Phi(\alpha;\gamma;z) = \frac{2^{1-\gamma}e^{z/2}}{B(\alpha,\gamma-\alpha)}\int_{-1}^{1} e^{z\xi/2}(1-\xi)^{\gamma-\alpha-1}(1+\xi)^{\alpha-1}\,d\xi, \qquad \operatorname{Re}\gamma > \operatorname{Re}\alpha > 0. \tag{1.6.38}$$

Function (1.6.37) satisfies the following relations:

$$
\begin{aligned}
\Phi(\alpha;\gamma;z) &= e^z\,\Phi(\gamma-\alpha;\gamma;-z), \\
\frac{z}{\gamma}\,\Phi(\alpha+1;\gamma+1;z) &= \Phi(\alpha+1;\gamma;z) - \Phi(\alpha;\gamma;z), \\
\alpha\,\Phi(\alpha+1;\gamma+1;z) &= (\alpha-\gamma)\,\Phi(\alpha;\gamma+1;z) + \gamma\,\Phi(\alpha;\gamma;z), \\
\alpha\,\Phi(\alpha+1;\gamma;z) &= (z+2\alpha-\gamma)\,\Phi(\alpha;\gamma;z) + (\gamma-\alpha)\,\Phi(\alpha-1;\gamma;z), \\
\frac{d}{dz}\,\Phi(\alpha;\gamma;z) &= \frac{\alpha}{\gamma}\,\Phi(\alpha+1;\gamma+1;z).
\end{aligned}
\tag{1.6.39}
$$

In particular,

$$\Phi(\alpha;\alpha;z) = e^z. \tag{1.6.40}$$

The function Φ(α; γ; z) is a solution to the differential equation

$$z\frac{d^2u}{dz^2} + (\gamma - z)\frac{du}{dz} - \alpha u = 0. \tag{1.6.41}$$

Equation (1.6.41) has the two linearly independent solutions

$$\Phi(\alpha;\gamma;z), \qquad z^{1-\gamma}\,\Phi(\alpha-\gamma+1;\,2-\gamma;\,z).$$


The degenerate hypergeometric function Φ(α; γ; z) is connected with the Whittaker functions defined by

$$
\begin{aligned}
M_{\lambda,\mu}(z) &= z^{\mu+1/2}e^{-z/2}\,\Phi\!\left(\mu-\lambda+\frac12;\,2\mu+1;\,z\right), \\
M_{\lambda,-\mu}(z) &= z^{-\mu+1/2}e^{-z/2}\,\Phi\!\left(-\mu-\lambda+\frac12;\,-2\mu+1;\,z\right),
\end{aligned}
\tag{1.6.42}
$$

which are the two linearly independent solutions to the differential equation

$$\frac{d^2u}{dz^2} + \left(-\frac14 + \frac{\lambda}{z} + \left(\frac14 - \mu^2\right)\frac{1}{z^2}\right)u = 0. \tag{1.6.43}$$

1.7 Generalized functions

Let R^m be the m-dimensional Euclidean space. A continuous function φ(x), x = (x₁, …, x_m) ∈ R^m, is said to be compactly supported if it is concentrated inside some closed ball B ⊂ R^m and vanishes outside of it. In other words, each compactly supported function is concentrated on a compact set (a ball) B ⊂ R^m of the Euclidean space. Clearly, in the one-dimensional case (m = 1) a compactly supported function is a continuous function concentrated on a closed interval that vanishes outside of it.

Let D(R^m) denote the set of all compactly supported functions φ = φ(x), x ∈ R^m, having continuous derivatives (with respect to all the variables x_i, i = 1, …, m) of all orders. The functions belonging to D(R^m) form a linear space under the standard operations of addition and multiplication by numbers. In this linear space D(R^m) one can define a notion of convergence in the following way. A sequence {φ_n} of elements of D(R^m) is called convergent to an element φ ∈ D(R^m) if the following two conditions are fulfilled:

1. There exists a closed ball B outside of which all the φ_n vanish;
2. In this ball B, the derivatives converge uniformly:

$$\frac{\partial^r \varphi_k}{\partial x_1^{\xi_1}\cdots\partial x_m^{\xi_m}} \longrightarrow \frac{\partial^r \varphi}{\partial x_1^{\xi_1}\cdots\partial x_m^{\xi_m}}, \qquad x = (x_1,\ldots,x_m) \in B, \qquad \sum_{i=1}^m \xi_i = r,$$

for any r, ξ₁, …, ξ_m.

The linear space D(R^m) with this type of convergence is called the basic space and its elements are called basic functions. The topology of this convergence is generated by a system of neighbourhoods of the zero element. Every neighbourhood is given by a finite set {γ₀, …, γ_l} of continuous positive functions and consists of those functions of D(R^m) which, for all x, satisfy the inequalities

$$|\varphi(x)| < \gamma_0(x), \quad \ldots, \quad |\varphi^{(l)}(x)| < \gamma_l(x).$$

Every continuous linear functional G(φ) on the basic space D(R^m) is referred to as a generalized function on the space R^m. The continuity of the functional is understood in the sense that G(φ_n) → G(φ) whenever φ_n → φ in the basic space D(R^m).


Every function f = f(x) that is locally integrable in R^m (that is, integrable over any ball B ⊂ R^m) generates a generalized function. Indeed, the expression

$$G_f(\varphi) := (f,\varphi) = \int_{\mathbb{R}^m} f(x)\,\varphi(x)\,\mu(dx) \tag{1.7.1}$$

is a continuous linear functional on D(R^m). In this integral, µ(·) denotes the Lebesgue measure on the Borel subsets of R^m. Such generalized functions are called regular. Note that, since the function φ(x) is compactly supported, the integration in (1.7.1) is in fact performed over a compact set (some ball in R^m). Formula (1.7.1) shows that there is a one-to-one correspondence between locally integrable functions in R^m and regular generalized functions.

The set of generalized functions forms a linear space, namely the conjugate space to D(R^m). Therefore, the operations of addition and multiplication by numbers are defined on this set. Obviously, for regular generalized functions, addition as generalized functions (i.e. as linear functionals) coincides with the usual addition of functions; the same concerns multiplication by numbers. In the space of generalized functions one can introduce the operation of passing to the limit. A sequence of generalized functions {f_n} is said to converge to f if, for any φ ∈ D(R^m), the relation (f_n, φ) → (f, φ) holds. If α is an infinitely differentiable function, then multiplication of α by a generalized function f is defined by the formula (αf, φ) = (f, αφ). All these operations (addition, multiplication by numbers, and multiplication by infinitely differentiable functions) are continuous. One can also show that it is impossible to define a multiplication of two generalized functions in such a way that this operation would be continuous in the sense of the convergence given above.

The r-th derivative of a generalized function f is defined by the relation

$$\left(\frac{\partial^r f(x)}{\partial x_1^{\xi_1}\cdots\partial x_m^{\xi_m}},\,\varphi(x)\right) = (-1)^r\left(f(x),\,\frac{\partial^r \varphi(x)}{\partial x_1^{\xi_1}\cdots\partial x_m^{\xi_m}}\right).$$

We denote by D′(R^m) the space of generalized functions defined on the basic space D(R^m) with all the operations defined above. From this definition of generalized functions and their derivatives, some important properties follow:

1. Any generalized function has derivatives of all orders; in other words, any generalized function is differentiable infinitely many times (in the generalized sense).
2. If a sequence of generalized functions {f_n} converges to a generalized function f (in the sense of the definition given above), then the sequence of derivatives {f_n′} converges to the derivative f′ of the limiting function. Equivalently, any convergent series of generalized functions can be differentiated term by term any number of times.
3. The space D′(R^m) is complete (in the sense of the convergence given above). This follows from the fact that D′(R^m) is the conjugate space of the basic space D(R^m).

As we have noted above, it is impossible to define a multiplication of two generalized functions in such a way that this operation would be continuous in the sense


of convergence given above. However, one can define the continuous operation of direct multiplication of two generalized functions. Let f(x) ∈ D′(R^m) and g(y) ∈ D′(R^n) be two arbitrary generalized functions. The direct product f(x)·g(y) is defined by the formula

$$(f(x)\cdot g(y),\,\varphi) = \bigl(f(x),\,(g(y),\,\varphi(x,y))\bigr), \qquad \varphi \in D(\mathbb{R}^m\times\mathbb{R}^n).$$

The right-hand side of this equality determines a continuous linear functional on D(R^m × R^n) and, therefore, is a generalized function, that is, an element of D′(R^m × R^n). The operation of direct multiplication is commutative: f(x)·g(y) = g(y)·f(x).

Now we can define the very important operation of convolution of generalized functions. First, we define this operation for locally integrable functions. Let f(x) and g(x) be arbitrary locally integrable functions in R^m and suppose that the function

$$h(x) = \int_{\mathbb{R}^m} |f(x-y)\,g(y)|\,dy$$

is also locally integrable in R^m. The convolution of two locally integrable functions f(x) and g(x) is defined by the formula

$$(f*g)(x) = \int_{\mathbb{R}^m} f(x-y)\,g(y)\,dy = \int_{\mathbb{R}^m} g(x-y)\,f(y)\,dy = (g*f)(x). \tag{1.7.2}$$

The convolutions f∗g and |f|∗|g| exist simultaneously and satisfy the inequality |(f∗g)(x)| ≤ h(x) for almost all x; therefore, the convolution f∗g is also a locally integrable function in R^m. Thus, it defines a regular generalized function acting on basic functions φ ∈ D(R^m) as follows:

$$(f*g,\,\varphi) = \int f(x)\,g(y)\,\varphi(x+y)\,dx\,dy, \qquad \varphi \in D(\mathbb{R}^m). \tag{1.7.3}$$

Note that the local integrability of the function h(x) is guaranteed — and, therefore, the convolution (f∗g, φ) of two locally integrable functions f and g exists and is given by (1.7.3) — if at least one of these functions is compactly supported, or if both of them are integrable in R^m.

The convolution of two generalized functions f and g is the linear functional defined by the formula

$$(f*g,\,\varphi) = (f(x)\cdot g(y),\,\varphi(x+y)), \qquad \varphi \in D(\mathbb{R}^m), \tag{1.7.4}$$

where f(x)·g(y) stands for the direct product of these functions. Note that the convolution (1.7.4) may not exist for all pairs of generalized functions. Moreover, the convolution f∗g is, generally speaking, not a continuous operation from D′(R^m) into D′(R^m) (with respect to f or g). The reason for this lies outside our needs and is therefore omitted.

The operation of convolution has the following important properties:

1. Linearity. For arbitrary numbers λ and µ,

$$(\lambda f + \mu f_1)*g = \lambda(f*g) + \mu(f_1*g), \qquad f, f_1, g \in D'(\mathbb{R}^m),$$

provided that the convolutions on the right-hand side of this equality exist.


2. Commutativity. If the convolution f∗g exists, then the convolution g∗f also exists, and f∗g = g∗f.

3. Differentiability. If the convolution f∗g exists, then the convolutions (D^α f)∗g and f∗(D^α g) also exist, and

$$(D^\alpha f)*g = D^\alpha(f*g) = f*(D^\alpha g),$$

where D^α is the generalized differentiation of order α (see formula (1.8.5) below).

4. Shift. If the convolution f∗g exists, then the convolution f(x+h)∗g(x) also exists, and

$$f(x+h)*g(x) = (f*g)(x+h), \qquad h \in \mathbb{R}^m.$$

In other words, the operations of shift and convolution commute.

As we have noted above, the convolution of two arbitrary generalized functions may not always exist. There is, however, a very important particular case in which the convolution of two generalized functions does exist, and it is precisely this case that is of special interest for us. Let f be an arbitrary generalized function and let g be a compactly supported (generalized) function. Then the convolution f∗g exists in D′(R^m) and is defined by the formula

$$(f*g,\,\varphi) = (f(x)\cdot g(y),\,\eta(y)\,\varphi(x+y)), \qquad \varphi \in D(\mathbb{R}^m),$$

where η is an arbitrary basic function equal to 1 in a neighbourhood of the support of the compactly supported function g. Moreover, under some natural conditions this operation of convolution is continuous with respect to f and g separately.

The space D′ = D′(R^m) of generalized functions defined by (1.7.1) can be extended to a wider space S′ = S′(R^m) of generalized functions defined on the whole Euclidean space R^m. These are the generalized functions in R^m that grow slowly together with all their derivatives; such functions are also called tempered distributions. The space S′ can be constructed following a general scheme quite similar to that used in constructing the space D′; in this scheme the space S of basic functions consists of all continuous functions in R^m that decrease, together with all their derivatives, faster than any power of ‖x‖⁻¹ as ‖x‖ → ∞. For our purposes, however, it is sufficient to consider only generalized functions from the space D′.
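As a simple numerical illustration of (1.7.2) in the one-dimensional scalar case, the sketch below (all names are ours) approximates the convolution of a Gaussian with a compactly supported smooth bump — the kind of function that plays the role of a basic function — by Riemann sums on a common grid, and checks commutativity:

```python
import math

# Discrete illustration of the convolution (1.7.2) on R^1: f is a Gaussian,
# g is a smooth bump supported on [-1, 1]; each integral is approximated by
# a Riemann sum on the grid y_i = -L + i*h.
def f(x):
    return math.exp(-x * x)

def g(x):
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def conv(u, v, x, L=8.0, n=4000):
    h = 2.0 * L / n
    return sum(u(x - (-L + i * h)) * v(-L + i * h) for i in range(n)) * h

# commutativity f*g = g*f, checked at several grid-aligned points
for x0 in (0.0, 0.7, 1.5):
    assert abs(conv(f, g, x0) - conv(g, f, x0)) < 1e-6
```

The test points are chosen as multiples of the grid step, so the change of variable y → x − y maps grid points to grid points and the two sums agree up to floating-point rounding.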

1.8 Integral transforms

In this section we consider the two most important integral transformations of generalized functions, namely the Fourier and the Laplace transforms, which provide a powerful analytical method for studying Markov random flights and their properties in multidimensional Euclidean spaces.

1.8.1 Fourier transform

For an arbitrary generalized function f(x) ∈ D′, define its Fourier transform by the formula

$$\mathcal{F}_x[f(x)](\xi) = \hat{f}(\xi) = \int_{\mathbb{R}^m} e^{i\langle\xi,x\rangle} f(x)\,\mu(dx), \tag{1.8.1}$$


where ⟨ξ, x⟩ denotes the inner product of the m-dimensional real-valued vectors ξ = (ξ₁, …, ξ_m) ∈ R^m and x = (x₁, …, x_m) ∈ R^m, and µ(·) is the Lebesgue measure on the Borel subsets of the space R^m, m ≥ 1. Since f(x) is a continuous function on some compact set (ball) of R^m, integral (1.8.1) always exists. The inverse Fourier transform of the function f̂(ξ) is given by the formula

$$\mathcal{F}_\xi^{-1}[\hat{f}(\xi)](x) = f(x) = \frac{1}{(2\pi)^m}\int_{\mathbb{R}^m} e^{-i\langle\xi,x\rangle}\,\hat{f}(\xi)\,\mu(d\xi). \tag{1.8.2}$$

Note that there is a one-to-one correspondence f(x) ⟷ f̂(ξ) between a generalized function f(x) ∈ D′ and its Fourier transform f̂(ξ), and both mappings (direct and inverse) are continuous.

For spherically symmetric (radial) functions, that is, functions depending only on the Euclidean distance ‖x‖ = √(x₁² + ⋯ + x_m²) from the origin of the Euclidean space R^m, m ≥ 2, the general formulas (1.8.1) and (1.8.2) take the form of the so-called direct and inverse Hankel formulas, in which the multidimensional integrals in (1.8.1) and (1.8.2) reduce to ordinary one-dimensional Riemann integrals. Let the function f(‖x‖) be absolutely integrable in R^m, m ≥ 2, and let the inverse Fourier transform formula (1.8.2) be valid for it. Then its Fourier transform is given by the formula

$$\hat{f}(\|\xi\|) = \mathcal{F}_x[f(\|x\|)](\xi) = (2\pi)^{m/2}\,\|\xi\|^{-(m-2)/2}\int_0^\infty J_{(m-2)/2}(\|\xi\| r)\,r^{m/2} f(r)\,dr, \tag{1.8.3}$$

and the following inversion formula holds:

$$f(\|x\|) = \mathcal{F}_\xi^{-1}[\hat{f}(\|\xi\|)](\|x\|) = (2\pi)^{-m/2}\,\|x\|^{-(m-2)/2}\int_0^\infty J_{(m-2)/2}(\|x\|\rho)\,\rho^{m/2}\hat{f}(\rho)\,d\rho, \tag{1.8.4}$$

where J_{(m−2)/2}(x) is the Bessel function of order (m−2)/2. Formulas (1.8.3) and (1.8.4) are referred to as the direct and inverse Hankel transforms of order (m−2)/2, respectively. Note that relations (1.8.3) and (1.8.4) are valid for functions possessing spherical symmetry in the space R^m, m ≥ 2, that is, depending only on the length of the vector, ‖x‖ = √(x₁² + ⋯ + x_m²).

One of the most remarkable facts is that the Fourier transform of any compactly supported function f(x) ∈ D′ is a holomorphic (analytic) function. Due to this fact, it is very convenient to use Fourier transforms to analyse functions concentrated on compact sets of the space R^m.

Let us now give some important properties of the Fourier transforms of generalized functions.

1. Fourier transform of derivatives. For an arbitrary generalized function f(x) ∈ D′,

$$\mathcal{F}_x[D^\alpha f(x)](\xi) = (-i\xi)^\alpha\,\mathcal{F}[f(x)](\xi).$$

Here D = (D₁, …, D_m) denotes the m-dimensional differential operator composed of the operators D_j = ∂/∂x_j, j = 1, …, m, the order of differentiation α = (α₁, …, α_m) is a multi-index, x^α = x₁^{α₁}⋯x_m^{α_m}, and

$$D^\alpha f(x) = \frac{\partial^{|\alpha|} f(x_1,\ldots,x_m)}{\partial x_1^{\alpha_1}\cdots\partial x_m^{\alpha_m}}, \qquad |\alpha| = \alpha_1+\cdots+\alpha_m. \tag{1.8.5}$$

2. Shift of the Fourier transform. For an arbitrary generalized function f(x) ∈ D′,

$$\mathcal{F}_x[f(x)](\xi+\xi_0) = \mathcal{F}_x[e^{i\langle\xi_0,x\rangle}f(x)](\xi).$$

3. Fourier transform of similarity. For an arbitrary generalized function f(x) ∈ D′,

$$\mathcal{F}_x[f(cx)](\xi) = \frac{1}{|c|^m}\,\mathcal{F}_x[f(x)]\!\left(\frac{\xi}{c}\right).$$

4. Fourier transform of convolution. Let f(x) ∈ D′ be an arbitrary generalized function and g(x) an arbitrary compactly supported generalized function. Then

$$\mathcal{F}_x[f(x)*g(x)](\xi) = \mathcal{F}_x[f(x)](\xi)\,\mathcal{F}[g(x)](\xi). \tag{1.8.6}$$

One can easily check that if δ(x), x = (x₁, …, x_m) ∈ R^m, is the m-dimensional delta-function, then F_x[δ(x)](ξ) = 1. Formula (1.8.6) is of special importance, since it reduces the Fourier transform of the convolution of two generalized functions to the ordinary product of their Fourier transforms. Note that the Fourier transform of generalized functions from the wider space S′ can also be defined, in a more complicated way; however, we do not need the space S′ for our purposes, and we therefore omit its definition.
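As a numerical illustration of the Hankel formula (1.8.3): for m = 3 the Bessel function of order 1/2 has the elementary form J_{1/2}(u) = √(2/(πu)) sin u, and (1.8.3) reduces to the classical radial Fourier transform f̂(r) = (4π/r) ∫₀^∞ s sin(rs) f(s) ds. The sketch below (illustrative midpoint quadrature, names ours) checks this on a Gaussian, whose m = 3 Fourier transform is known to be (2π)^{3/2} e^{−‖ξ‖²/2}:

```python
import math

# Radial Fourier transform in R^3 via the Hankel formula (1.8.3) with
# J_{1/2}(u) = sqrt(2/(pi*u)) * sin(u), i.e.
#   fhat(r) = (4*pi/r) * int_0^oo s*sin(r*s)*f(s) ds.
def fhat_radial3(f, r, R=12.0, n=40000):
    h = R / n
    total = sum((h * (i + 0.5)) * math.sin(r * h * (i + 0.5)) * f(h * (i + 0.5))
                for i in range(n))          # midpoint rule on [0, R]
    return 4.0 * math.pi / r * total * h

f = lambda s: math.exp(-s * s / 2.0)
for r in (0.5, 1.0, 2.0):
    exact = (2.0 * math.pi) ** 1.5 * math.exp(-r * r / 2.0)
    assert abs(fhat_radial3(f, r) - exact) < 1e-4
```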

1.8.2 Laplace transform

We will also need another very important integral transformation of a one-dimensional generalized function, namely the Laplace transformation. It is a continuous integral transformation in the complex plane C, defined as follows. Consider the set D′₊ of all generalized functions f(t) ∈ D′(R¹) on the real line R¹ such that f(t) = 0 for t < 0. Let f(t) ∈ D′₊ be a locally integrable function such that |f(t)| < Ae^{at}, t → ∞, for some positive constants A > 0 and a > 0. In other words, the absolute value of f(t) must not grow at infinity faster than some exponential function. Under this condition, the integral

$$\mathcal{L}[f(t)](s) = \tilde{f}(s) = \int_0^\infty e^{-st} f(t)\,dt, \qquad s = \sigma + i\omega, \tag{1.8.7}$$

exists for Re s = σ > a. Relation (1.8.7) determines a continuous transformation of the real-valued function f(t), defined on the real line R¹, into a complex function f̃(s) = L[f(t)](s) defined in the right half-plane C₊ of the complex plane C. Formula (1.8.7) is referred to as the Laplace transform of the generalized function f(t) ∈ D′₊. The function f̃(s) is holomorphic (analytic) in the right half-plane Re s = σ > a > 0 and such that f̃(s) → 0, as σ → ∞, uniformly in ω. In terms of the Fourier transform, relation (1.8.7) takes the form:

$$\mathcal{L}[f(t)](s) = \mathcal{F}[e^{-\sigma t} f(t)](-\omega), \qquad \sigma > a. \tag{1.8.8}$$

Note that there is a one-to-one correspondence f(t) ⟷ f̃(s) between a generalized function f(t) ∈ D′₊ and its Laplace transform f̃(s). Denote by D′₊(a) the set of all functions f(t) ∈ D′₊ such that e^{−σt}f(t) ∈ D′₊ for all σ > a. Formula (1.8.8) defines the Laplace transform of the generalized function e^{−σt}f(t) ∈ D′₊(a). Obviously, the Laplace transformation (1.8.7) (or (1.8.8)) is a linear operation; that is, if f₁(t) ⟷ f̃₁(s), σ > a₁, and f₂(t) ⟷ f̃₂(s), σ > a₂, then

$$\lambda f_1(t) + \mu f_2(t) \longleftrightarrow \lambda\tilde{f}_1(s) + \mu\tilde{f}_2(s), \qquad \sigma > \max(a_1, a_2).$$


Let us now give some important properties of the Laplace transform.

1. Laplace transform of derivatives. For an arbitrary generalized function $f(t)\in\mathcal{D}'_+(a)$
\[
L[f^{(n)}(t)](s) = s^n\, L[f(t)](s), \qquad \sigma > a, \quad n = 0, 1, \dots .
\]

2. Shift of the Laplace transform. For an arbitrary generalized function $f(t)\in\mathcal{D}'_+(a)$
\[
L[e^{\lambda t} f(t)](s) = L[f(t)](s-\lambda), \qquad \sigma > a + \operatorname{Re}\lambda.
\]

3. Laplace transform of similarity. For an arbitrary generalized function $f(t)\in\mathcal{D}'_+(a)$ and $k>0$
\[
L[f(kt)](s) = \frac{1}{k}\, L[f(t)]\!\left(\frac{s}{k}\right), \qquad \sigma > ka.
\]

4. Laplace transform of convolution. Let $f(t), g(t)\in\mathcal{D}'_+(a)$ be two arbitrary generalized functions. Then
\[
L[f(t) * g(t)](s) = L[f(t)](s)\, L[g(t)](s), \qquad \sigma > a. \tag{1.8.9}
\]
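The convolution property (1.8.9) is easy to verify numerically for ordinary functions. The following sketch (using scipy; the helper names are ours, not the book's) takes $f(t)=e^{-t}$, $g(t)=t e^{-t}$, whose convolution is $(f*g)(t)=\tfrac{t^2}{2}e^{-t}$ with Laplace transform $1/(s+1)^3$:

```python
import numpy as np
from scipy.integrate import quad

# f(t) = e^{-t}, g(t) = t e^{-t}; their convolution is (f*g)(t) = (t^2/2) e^{-t}.
f = lambda t: np.exp(-t)
g = lambda t: t * np.exp(-t)

def laplace(h, s, T=60.0):
    # numerical Laplace transform: integral of e^{-s t} h(t) over [0, T];
    # the tail beyond T is negligible for these exponentially decaying functions
    return quad(lambda t: np.exp(-s * t) * h(t), 0.0, T)[0]

def conv(t):
    # (f*g)(t) = integral_0^t f(tau) g(t - tau) d tau
    return quad(lambda tau: f(tau) * g(t - tau), 0.0, t)[0]

s = 0.7
lhs = laplace(conv, s)               # L[f*g](s)
rhs = laplace(f, s) * laplace(g, s)  # L[f](s) L[g](s) = 1/(s+1)^3
print(lhs, rhs)
```

Both quantities agree with the closed form $1/(s+1)^3$ to quadrature accuracy.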

Formula (1.8.9) shows that the Laplace transformation, similarly to the Fourier one, is a multiplicative operation. It is easy to check that if $\delta(t)$ is the delta-function concentrated at the point $t=0$, then $L[\delta(t)](s)=1$. If a complex function $\tilde f(s)$, $s=\sigma+i\omega$, is absolutely integrable on $\mathbb{R}^1$ with respect to $\omega$ for some $\sigma>a$, then the following inversion formula holds:
\[
f(t) = \frac{1}{2\pi i} \int_{\sigma-i\infty}^{\sigma+i\infty} e^{st}\, \tilde f(s)\, ds, \tag{1.8.10}
\]
where the integration is taken over the vertical contour $\operatorname{Re} s = \sigma$ in the right half-plane $\mathbb{C}_+$ of the complex plane $\mathbb{C}$, from the infinite point $\sigma-i\infty$ to the infinite point $\sigma+i\infty$.

Suppose now that we have a function $f(x,t)$ of two variables, where $x\in\mathbb{R}^m$ is the spatial variable and $t>0$ is the time variable. Suppose that this function is such that $f(x,t)\in\mathcal{D}'$ for any fixed $t>0$ and $f(x,t)\in\mathcal{D}'_+(a)$ for any fixed $x\in\mathbb{R}^m$. Then, similarly to (1.8.1), one can define its Fourier transformation with respect to the spatial variable $x$ by the formula
\[
F_x[f(x,t)](\xi) = \hat f(\xi, t) = \int_{\mathbb{R}^m} e^{i\langle\xi,x\rangle} f(x,t)\, \mu(dx), \qquad \xi\in\mathbb{R}^m, \quad t>0, \tag{1.8.11}
\]
for arbitrary fixed $t>0$, as well as the Laplace transformation with respect to the time variable $t$ by the formula
\[
L_t[f(x,t)](s) = \tilde f(x, s) = \int_0^\infty e^{-st} f(x,t)\, dt, \qquad x\in\mathbb{R}^m, \quad \operatorname{Re} s = \sigma > a, \tag{1.8.12}
\]
for arbitrary fixed $x\in\mathbb{R}^m$. Therefore, one can also define the double Laplace-Fourier transformation
\[
L_t F_x[f(x,t)](\xi, s) = \tilde{\hat f}(\xi, s) = \int_0^\infty e^{-st}\, \hat f(\xi, t)\, dt, \qquad \xi\in\mathbb{R}^m, \quad \operatorname{Re} s = \sigma > a. \tag{1.8.13}
\]
Such double integral transformations with respect to the spatial and time variables play an important role in describing the distributions of multidimensional Markov random flights.


1.9 Auxiliary lemmas

In this section we prove a series of auxiliary lemmas that will be used in later chapters.

Lemma 1.9.1. For arbitrary $q\ge 0$, $p>0$, the following formulas hold:
\[
\int x^n\, I_0\bigl(q\sqrt{p^2-x^2}\bigr)\, dx = \frac{x^{n+1}}{n+1} \sum_{k=0}^{\infty} \frac{1}{(k!)^2} \left(\frac{pq}{2}\right)^{2k} F\!\left(-k, \frac{n+1}{2}; \frac{n+3}{2}; \frac{x^2}{p^2}\right) + \psi_1, \qquad n\ge 0, \quad |x|\le p, \tag{1.9.1}
\]
\[
\int x^n\, \frac{I_1\bigl(q\sqrt{p^2-x^2}\bigr)}{\sqrt{p^2-x^2}}\, dx = \frac{x^{n+1}}{p(n+1)} \sum_{k=0}^{\infty} \frac{1}{k!\,(k+1)!} \left(\frac{pq}{2}\right)^{2k+1} F\!\left(-k, \frac{n+1}{2}; \frac{n+3}{2}; \frac{x^2}{p^2}\right) + \psi_2, \qquad n\ge 0, \quad |x|\le p, \tag{1.9.2}
\]
where $\psi_1$ and $\psi_2$ are arbitrary functions not depending on $x$.

Proof. Let us check formula (1.9.1). First, we show that the series on the right-hand side of (1.9.1) converges uniformly with respect to $x\in[-p,p]$. To prove this, we need the following uniform (in $z$) estimate:
\[
\left| F\!\left(-k, \frac{n+1}{2}; \frac{n+3}{2}; z\right) \right| \le 2^k, \qquad |z|\le 1, \quad n\ge 0, \quad k\ge 0. \tag{1.9.3}
\]
Using the well-known formulas for the Pochhammer symbol:
\[
(-k)_s = \frac{(-1)^s\, k!}{(k-s)!}, \qquad k\ge 0, \quad 0\le s\le k, \qquad\qquad \frac{(a)_s}{(a+1)_s} = \frac{a}{a+s}, \qquad a>0, \quad s\ge 0,
\]
we obtain (for $|z|\le 1$, $n\ge 0$, $k\ge 0$):
\[
\begin{aligned}
\left| F\!\left(-k, \frac{n+1}{2}; \frac{n+3}{2}; z\right) \right|
&= \left| \sum_{s=0}^{k} \frac{(-k)_s}{s!}\, \frac{\left(\frac{n+1}{2}\right)_s}{\left(\frac{n+1}{2}+1\right)_s}\, z^s \right| \\
&= \left| \sum_{s=0}^{k} (-1)^s\, \frac{k!}{s!\,(k-s)!}\, \frac{n+1}{n+2s+1}\, z^s \right| \\
&\le \sum_{s=0}^{k} \frac{k!}{s!\,(k-s)!} = 2^k,
\end{aligned}
\]
proving (1.9.3). Applying now estimate (1.9.3), we obtain the inequality
\[
\sum_{k=0}^{\infty} \frac{1}{(k!)^2} \left(\frac{pq}{2}\right)^{2k} \left| F\!\left(-k, \frac{n+1}{2}; \frac{n+3}{2}; \frac{x^2}{p^2}\right) \right|
\le \sum_{k=0}^{\infty} \frac{1}{(k!)^2} \left(\frac{pq}{2}\right)^{2k} 2^k = I_0\bigl(pq\sqrt{2}\bigr) < \infty,
\]
proving the uniform convergence in $x\in[-p,p]$ of the series in (1.9.1). From this fact it


follows that one may differentiate the series on the right-hand side of (1.9.1) term by term. Thus, differentiating in $x$ the expression on the right-hand side of (1.9.1), we obtain:
\[
\begin{aligned}
\frac{d}{dx} &\left[ \frac{x^{n+1}}{n+1} \sum_{k=0}^{\infty} \frac{1}{(k!)^2} \left(\frac{pq}{2}\right)^{2k} F\!\left(-k, \frac{n+1}{2}; \frac{n+3}{2}; \frac{x^2}{p^2}\right) \right] \\
&= \frac{1}{n+1} \sum_{k=0}^{\infty} \frac{1}{(k!)^2} \left(\frac{pq}{2}\right)^{2k} \frac{d}{dx}\left[ x^{n+1}\, F\!\left(-k, \frac{n+1}{2}; \frac{n+3}{2}; \frac{x^2}{p^2}\right) \right] \\
&= \frac{1}{n+1} \sum_{k=0}^{\infty} \frac{1}{(k!)^2} \left(\frac{pq}{2}\right)^{2k} \frac{d}{dx}\left[ \sum_{s=0}^{k} \frac{(-1)^s\, k!}{s!\,(k-s)!}\, \frac{(n+1)\, x^{n+2s+1}}{(n+2s+1)\, p^{2s}} \right] \\
&= x^n \sum_{k=0}^{\infty} \frac{1}{(k!)^2} \left(\frac{pq}{2}\right)^{2k} \left[ \sum_{s=0}^{k} (-1)^s \binom{k}{s} \left(\frac{x^2}{p^2}\right)^{s} \right] \\
&= x^n \sum_{k=0}^{\infty} \frac{1}{(k!)^2} \left(\frac{pq}{2}\right)^{2k} \left(1 - \frac{x^2}{p^2}\right)^{k} \\
&= x^n \sum_{k=0}^{\infty} \frac{1}{(k!)^2} \left(\frac{q}{2}\right)^{2k} (p^2 - x^2)^{k} \\
&= x^n \sum_{k=0}^{\infty} \frac{1}{(k!)^2} \left(\frac{q\sqrt{p^2-x^2}}{2}\right)^{2k} \\
&= x^n\, I_0\bigl(q\sqrt{p^2-x^2}\bigr),
\end{aligned}
\]
yielding the integrand on the left-hand side of (1.9.1). Formula (1.9.2) can be checked in the same manner. The lemma is proved.

In particular, by setting $n=0$ in (1.9.1) and (1.9.2), we arrive at the formulas:
\[
\int I_0\bigl(q\sqrt{p^2-x^2}\bigr)\, dx = x \sum_{k=0}^{\infty} \frac{1}{(k!)^2} \left(\frac{pq}{2}\right)^{2k} F\!\left(-k, \frac{1}{2}; \frac{3}{2}; \frac{x^2}{p^2}\right) + \psi_1, \qquad |x|\le p, \tag{1.9.4}
\]

\[
\int \frac{I_1\bigl(q\sqrt{p^2-x^2}\bigr)}{\sqrt{p^2-x^2}}\, dx = \frac{x}{p} \sum_{k=0}^{\infty} \frac{1}{k!\,(k+1)!} \left(\frac{pq}{2}\right)^{2k+1} F\!\left(-k, \frac{1}{2}; \frac{3}{2}; \frac{x^2}{p^2}\right) + \psi_2, \qquad |x|\le p. \tag{1.9.5}
\]

Applying Lemma 1.9.1 we obtain, for arbitrary real $a$, the formulas:
\[
\int (a \pm x)^n\, I_0\bigl(q\sqrt{p^2-x^2}\bigr)\, dx
= \sum_{m=0}^{n} (\pm 1)^m \binom{n}{m} a^{n-m}\, \frac{x^{m+1}}{m+1} \sum_{k=0}^{\infty} \frac{1}{(k!)^2} \left(\frac{pq}{2}\right)^{2k} F\!\left(-k, \frac{m+1}{2}; \frac{m+3}{2}; \frac{x^2}{p^2}\right) + \psi_1, \tag{1.9.6}
\]
\[
\begin{aligned}
\int (a \pm x)^n\, \frac{I_1\bigl(q\sqrt{p^2-x^2}\bigr)}{\sqrt{p^2-x^2}}\, dx
&= \frac{1}{p} \sum_{m=0}^{n} (\pm 1)^m \binom{n}{m} a^{n-m}\, \frac{x^{m+1}}{m+1} \sum_{k=0}^{\infty} \frac{1}{k!\,(k+1)!} \left(\frac{pq}{2}\right)^{2k+1} \\
&\qquad \times F\!\left(-k, \frac{m+1}{2}; \frac{m+3}{2}; \frac{x^2}{p^2}\right) + \psi_2,
\end{aligned} \tag{1.9.7}
\]
\[
n\ge 0, \qquad |x|\le p.
\]
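Formula (1.9.4) is easy to spot-check numerically: with $\psi_1=0$ both sides vanish at $x=0$, so the antiderivative on the right must equal the definite integral from $0$ to $x$. A sketch using scipy (parameter values chosen arbitrarily for illustration):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0, hyp2f1

# Check (1.9.4) (Lemma 1.9.1 with n = 0, psi_1 = 0):
#   integral_0^x I0(q*sqrt(p^2 - t^2)) dt
#     == x * sum_k (1/(k!)^2) (p q / 2)^{2k} F(-k, 1/2; 3/2; x^2/p^2)
p, q, x = 1.3, 0.9, 0.7

lhs = quad(lambda t: i0(q * np.sqrt(p * p - t * t)), 0.0, x)[0]

rhs, fact = 0.0, 1.0
for k in range(40):          # the series converges fast: terms ~ (pq/2)^{2k} / (k!)^2
    if k > 0:
        fact *= k            # fact = k!
    rhs += (q * p / 2.0) ** (2 * k) / fact**2 * hyp2f1(-k, 0.5, 1.5, (x / p) ** 2)
rhs *= x
print(lhs, rhs)
```

Note that $F(-k,\cdot;\cdot;\cdot)$ is a polynomial of degree $k$, so `hyp2f1` evaluates it exactly up to rounding.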

Lemma 1.9.2. Let $(\Omega, \mathcal{F}, \mathbf{P})$ be a probability space and let $A, B, C, D \in \mathcal{F}$ be random events such that $B$ is independent of $C$ and $D$, $C\cap D=\varnothing$, $\mathbf{P}(C) = \mathbf{P}(D) \neq 0$, $\mathbf{P}(B)\neq 0$. Then
\[
\mathbf{P}(A \mid B(C+D)) = \frac{1}{2}\bigl[ \mathbf{P}(A \mid BC) + \mathbf{P}(A \mid BD) \bigr]. \tag{1.9.8}
\]

Proof. Under the lemma's conditions, we have:
\[
\begin{aligned}
\mathbf{P}(A \mid B(C+D)) &= \frac{\mathbf{P}(AB(C+D))}{\mathbf{P}(B(C+D))} \\
&= \frac{\mathbf{P}(ABC) + \mathbf{P}(ABD)}{\mathbf{P}(BC) + \mathbf{P}(BD)} \\
&= \frac{\mathbf{P}(A \mid BC)\,\mathbf{P}(B)\,\mathbf{P}(C) + \mathbf{P}(A \mid BD)\,\mathbf{P}(B)\,\mathbf{P}(D)}{\mathbf{P}(B)\,[\mathbf{P}(C) + \mathbf{P}(D)]} \\
&= \frac{\mathbf{P}(A \mid BC)\,\mathbf{P}(B)\,\mathbf{P}(C) + \mathbf{P}(A \mid BD)\,\mathbf{P}(B)\,\mathbf{P}(C)}{2\,\mathbf{P}(B)\,\mathbf{P}(C)} \\
&= \frac{1}{2}\bigl[ \mathbf{P}(A \mid BC) + \mathbf{P}(A \mid BD) \bigr].
\end{aligned}
\]
The lemma is proved.

Lemma 1.9.3. For arbitrary positive $a>0$, $b>0$ the following formula holds:
\[
\int_{-a}^{a} I_0\bigl(b\sqrt{a^2-x^2}\bigr)\, dx = \frac{2}{b}\,\sinh(ab), \qquad a>0, \quad b>0. \tag{1.9.9}
\]

(1.9.9)

Proof. Using series representation (2.5.2) of the modified Bessel function I0 (z), we get: Z a  p Z a  p   I0 b a2 − x2 dx = 2 I0 b a2 − x2 dx −a

0 ∞ X

 2k Z a b (a2 − x2 )k dx 2 0 k=0  2k Z 1 ∞ X 1 b 2k+1 a (1 − z 2 )k dz =2 (k!)2 2 0 k=0   √ ∞ 2k X 1 b π k! 2k+1  a =2 2 (k!) 2 (2k + 1) Γ k + 21 k=0  2k √ ∞ X π 2k 1 b √ a2k+1 =2 k! 2 (2k + 1) π (2k − 1)!! =2

=2

k=0 ∞ X k=0 ∞

1 (k!)2

a2k+1 b2k (2k)!! (2k + 1)!!

2 X (ab)2k+1 = b (2k + 1)! k=0

2 = sinh(ab), b

Preliminaries where we have used the well-known formulas  √  π 1 Γ(z + 1) = zΓ(z), Γ k + = k (2k − 1)!!, 2 2

45

(2k)!! = 2k k!,

k ≥ 0, (−1)!! = 1.

The lemma is proved. In particular, for a = 2ct, b = λ/c formula (1.9.9) yields: 2ct

 p  λ 2c 2 2 2 I0 4c t − x dx = sinh(2λt). c λ −2ct

Z

The next two lemmas deal with the Fourier transform Z ∞ Fx [f (x)](ξ) ≡ fˆ(ξ) = eiξx f (x) dx,

(1.9.10)

ξ ∈ R1 ,

−∞

and the inverse Fourier transform 1 Fξ−1 [fˆ(ξ)](x) = 2π

Z



e−iξx fˆ(ξ) dξ,

x ∈ R1 ,

−∞

of the modified Bessel function I0 (z). Lemma 1.9.4. For arbitrary positive a > 0, b > 0 the following formula holds: i h p Fx I0 (b a2 − x2 ) Θ(a − |x|) (ξ) " # p p sinh(a b2 − ξ 2 ) sin(a ξ 2 − b2 ) p p =2 1{|ξ|≤b} + 1{|ξ|>b} , b2 − ξ 2 ξ 2 − b2 where Θ(x) is the Heaviside function and 1{·} is the indicator function. Proof. We have i h p Fx I0 (b a2 − x2 ) Θ(a − |x|) (ξ) Z a p = eiξx I0 (b a2 − x2 ) dx −a Z a p =2 cos (ξx) I0 (b a2 − x2 ) dx 0 p (substitution z = a2 − x2 ) √ Z a z cos (ξ a2 − z 2 ) √ =2 I0 (bz) dz a2 − z 2 0 (see [177, item 2.15.10, formula 8]) p sin(a ξ 2 − b2 ) p =2 ξ 2 − b2 " # p p sinh(a b2 − ξ 2 ) sin(a ξ 2 − b2 ) p p 1{|ξ|≤b} + 1{|ξ|>b} . =2 b2 − ξ 2 ξ 2 − b2 The lemma is proved.

(1.9.11)
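Formula (1.9.11) can be checked numerically in both regimes $|\xi|\le b$ and $|\xi|>b$; at $\xi=0$ it reduces to (1.9.9). A sketch using scipy (parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0

# Check (1.9.11): Fourier transform of I0(b*sqrt(a^2 - x^2)) on [-a, a].
a, b = 1.5, 2.0

def ft(xi):
    # integral_{-a}^{a} cos(xi*x) I0(b*sqrt(a^2-x^2)) dx
    # (the imaginary part vanishes because the integrand is even in x)
    return quad(lambda x: np.cos(xi * x) * i0(b * np.sqrt(a * a - x * x)), -a, a)[0]

def rhs(xi):
    if abs(xi) <= b:
        r = np.sqrt(b * b - xi * xi)
        return 2.0 * np.sinh(a * r) / r if r > 0 else 2.0 * a   # limit at |xi| = b
    r = np.sqrt(xi * xi - b * b)
    return 2.0 * np.sin(a * r) / r

for xi in (0.0, 1.0, 3.5):
    print(xi, ft(xi), rhs(xi))
```

At $\xi=0$ the printed value coincides with $\tfrac{2}{b}\sinh(ab)$ from Lemma 1.9.3, as it must.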


In particular, by setting $a=2ct$ and $b=\lambda/c$ in (1.9.11) (for arbitrary $c>0$, $\lambda>0$, $t>0$), we obtain the following equality:
\[
\begin{aligned}
F_x&\left[ I_0\!\left(\frac{\lambda}{c}\sqrt{4c^2t^2-x^2}\right) \Theta(2ct-|x|) \right](\xi) \\
&= 2c\left[ \frac{\sinh\bigl(2t\sqrt{\lambda^2-c^2\xi^2}\bigr)}{\sqrt{\lambda^2-c^2\xi^2}}\, \mathbf{1}_{\{|\xi|\le \lambda/c\}} + \frac{\sin\bigl(2t\sqrt{c^2\xi^2-\lambda^2}\bigr)}{\sqrt{c^2\xi^2-\lambda^2}}\, \mathbf{1}_{\{|\xi|>\lambda/c\}} \right].
\end{aligned} \tag{1.9.12}
\]
Differentiating (1.9.12) in $t$ we get
\[
\begin{aligned}
F_x&\left[ \frac{\partial}{\partial t} \left\{ I_0\!\left(\frac{\lambda}{c}\sqrt{4c^2t^2-x^2}\right) \Theta(2ct-|x|) \right\} \right](\xi) + 4c\,\cos(2ct\xi) \\
&= 4c\left[ \cosh\bigl(2t\sqrt{\lambda^2-c^2\xi^2}\bigr)\, \mathbf{1}_{\{|\xi|\le\lambda/c\}} + \cos\bigl(2t\sqrt{c^2\xi^2-\lambda^2}\bigr)\, \mathbf{1}_{\{|\xi|>\lambda/c\}} \right].
\end{aligned} \tag{1.9.13}
\]
Applying the inverse Fourier transformation to (1.9.12) and (1.9.13), we arrive at the formulas:
\[
\begin{aligned}
F_\xi^{-1}&\left[ \frac{\sinh\bigl(2t\sqrt{\lambda^2-c^2\xi^2}\bigr)}{\sqrt{\lambda^2-c^2\xi^2}}\, \mathbf{1}_{\{|\xi|\le\lambda/c\}} + \frac{\sin\bigl(2t\sqrt{c^2\xi^2-\lambda^2}\bigr)}{\sqrt{c^2\xi^2-\lambda^2}}\, \mathbf{1}_{\{|\xi|>\lambda/c\}} \right](x) \\
&= \frac{1}{2c}\, I_0\!\left(\frac{\lambda}{c}\sqrt{4c^2t^2-x^2}\right) \Theta(2ct-|x|),
\end{aligned} \tag{1.9.14}
\]
\[
\begin{aligned}
F_\xi^{-1}&\left[ \cosh\bigl(2t\sqrt{\lambda^2-c^2\xi^2}\bigr)\, \mathbf{1}_{\{|\xi|\le\lambda/c\}} + \cos\bigl(2t\sqrt{c^2\xi^2-\lambda^2}\bigr)\, \mathbf{1}_{\{|\xi|>\lambda/c\}} \right](x) \\
&= \frac{1}{4c}\, \frac{\partial}{\partial t}\left\{ I_0\!\left(\frac{\lambda}{c}\sqrt{4c^2t^2-x^2}\right) \Theta(2ct-|x|) \right\} + \frac{1}{2}\bigl[ \delta(2ct-x) + \delta(2ct+x) \bigr],
\end{aligned} \tag{1.9.15}
\]
where $\delta(x)$ is the Dirac delta-function.

Lemma 1.9.5. For arbitrary positive $p>0$, $q>0$, the following formula holds:
\[
F_\xi^{-1}\left[ \frac{\sinh^2\bigl(q\sqrt{p^2-\xi^2}\bigr)}{p^2-\xi^2}\, \mathbf{1}_{\{|\xi|\le p\}} + \frac{\sin^2\bigl(q\sqrt{\xi^2-p^2}\bigr)}{\xi^2-p^2}\, \mathbf{1}_{\{|\xi|>p\}} \right](x)
= \frac{1}{4}\left\{ \int_{|x|}^{2q} I_0\bigl(p\sqrt{\tau^2-x^2}\bigr)\, d\tau \right\} \Theta(2q-|x|). \tag{1.9.16}
\]

Proof. Applying the Fourier transformation to the right-hand side of (1.9.16) and using formula (1.9.11), we have:
\[
\begin{aligned}
F_x&\left[ \frac{1}{4}\left\{ \int_{|x|}^{2q} I_0\bigl(p\sqrt{\tau^2-x^2}\bigr)\, d\tau \right\} \Theta(2q-|x|) \right](\xi) \\
&= \frac{1}{4}\, F_x\left[ \left\{ \int_{0}^{2q} I_0\bigl(p\sqrt{\tau^2-x^2}\bigr)\, \Theta(\tau-|x|)\, d\tau \right\} \Theta(2q-|x|) \right](\xi) \\
&= \frac{1}{4} \int_{-2q}^{2q} e^{i\xi x} \left\{ \int_{0}^{2q} I_0\bigl(p\sqrt{\tau^2-x^2}\bigr)\, \Theta(\tau-|x|)\, d\tau \right\} dx \\
&= \frac{1}{4} \int_{0}^{2q} \left\{ \int_{-2q}^{2q} e^{i\xi x}\, I_0\bigl(p\sqrt{\tau^2-x^2}\bigr)\, \Theta(\tau-|x|)\, dx \right\} d\tau \\
&= \frac{1}{4} \int_{0}^{2q} \left\{ \int_{-\tau}^{\tau} e^{i\xi x}\, I_0\bigl(p\sqrt{\tau^2-x^2}\bigr)\, dx \right\} d\tau \\
&= \frac{1}{4} \int_{0}^{2q} F_x\bigl[ I_0\bigl(p\sqrt{\tau^2-x^2}\bigr)\, \Theta(\tau-|x|) \bigr](\xi)\, d\tau \\
&= \frac{1}{2} \int_{0}^{2q} \left[ \frac{\sinh\bigl(\tau\sqrt{p^2-\xi^2}\bigr)}{\sqrt{p^2-\xi^2}}\, \mathbf{1}_{\{|\xi|\le p\}} + \frac{\sin\bigl(\tau\sqrt{\xi^2-p^2}\bigr)}{\sqrt{\xi^2-p^2}}\, \mathbf{1}_{\{|\xi|>p\}} \right] d\tau \\
&= \frac{1}{2\sqrt{p^2-\xi^2}} \left\{ \int_{0}^{2q} \sinh\bigl(\tau\sqrt{p^2-\xi^2}\bigr)\, d\tau \right\} \mathbf{1}_{\{|\xi|\le p\}}
+ \frac{1}{2\sqrt{\xi^2-p^2}} \left\{ \int_{0}^{2q} \sin\bigl(\tau\sqrt{\xi^2-p^2}\bigr)\, d\tau \right\} \mathbf{1}_{\{|\xi|>p\}} \\
&= \frac{1}{2}\, \frac{\cosh\bigl(2q\sqrt{p^2-\xi^2}\bigr)-1}{p^2-\xi^2}\, \mathbf{1}_{\{|\xi|\le p\}} + \frac{1}{2}\, \frac{1-\cos\bigl(2q\sqrt{\xi^2-p^2}\bigr)}{\xi^2-p^2}\, \mathbf{1}_{\{|\xi|>p\}} \\
&= \frac{\sinh^2\bigl(q\sqrt{p^2-\xi^2}\bigr)}{p^2-\xi^2}\, \mathbf{1}_{\{|\xi|\le p\}} + \frac{\sin^2\bigl(q\sqrt{\xi^2-p^2}\bigr)}{\xi^2-p^2}\, \mathbf{1}_{\{|\xi|>p\}}.
\end{aligned}
\]
The lemma is proved.

In particular, by setting $q=ct$, $p=\lambda/c$ in (1.9.16) we get the formula
\[
\begin{aligned}
F_\xi^{-1}&\left[ \frac{\sinh^2\bigl(t\sqrt{\lambda^2-c^2\xi^2}\bigr)}{\lambda^2-c^2\xi^2}\, \mathbf{1}_{\{|\xi|\le\lambda/c\}} + \frac{\sin^2\bigl(t\sqrt{c^2\xi^2-\lambda^2}\bigr)}{c^2\xi^2-\lambda^2}\, \mathbf{1}_{\{|\xi|>\lambda/c\}} \right](x) \\
&= \frac{1}{4c^2} \left\{ \int_{|x|}^{2ct} I_0\!\left(\frac{\lambda}{c}\sqrt{\tau^2-x^2}\right) d\tau \right\} \Theta(2ct-|x|).
\end{aligned} \tag{1.9.17}
\]
Note that by differentiating (1.9.17) in $t$, we obtain (1.9.14) again.

Lemma 1.9.6. For arbitrary integers $n\ge 0$, $k\ge 0$, such that $n\ge 2k$, and for arbitrary real $x\in\mathbb{R}^1$ the following formula holds:
\[
\int z^n\, F\!\left(-k, \frac{1}{2}; \frac{3}{2}; \frac{x^2}{z^2}\right) dz
= \frac{z^{n+1}}{n+1}\, {}_3F_2\!\left(-k,\, -\frac{n}{2}-\frac{1}{2},\, \frac{1}{2};\; -\frac{n}{2}+\frac{1}{2},\, \frac{3}{2};\; \frac{x^2}{z^2}\right) + \psi, \tag{1.9.18}
\]
where the hypergeometric function ${}_3F_2$ on the right-hand side of (1.9.18) is defined by (1.6.35) and $\psi$ is an arbitrary function not depending on $z$.

Proof. Differentiating in $z$ the function on the right-hand side of (1.9.18) and using the formula for the Pochhammer symbol
\[
\frac{(a)_s}{(a+1)_s} = \frac{a}{a+s}, \qquad s\ge 0, \quad a\in\mathbb{R}^1,
\]
we obtain:
\[
\begin{aligned}
\frac{1}{n+1}\, \frac{d}{dz} &\left[ z^{n+1}\, {}_3F_2\!\left(-k,\, -\frac{n}{2}-\frac{1}{2},\, \frac{1}{2};\; -\frac{n}{2}+\frac{1}{2},\, \frac{3}{2};\; \frac{x^2}{z^2}\right) \right] \\
&= \frac{1}{n+1}\, \frac{d}{dz} \sum_{s=0}^{k} \frac{(-k)_s \left(-\frac{n}{2}-\frac{1}{2}\right)_s \left(\frac{1}{2}\right)_s}{\left(-\frac{n}{2}-\frac{1}{2}+1\right)_s \left(\frac{3}{2}\right)_s}\, \frac{x^{2s}}{s!}\, z^{n-2s+1} \\
&= \frac{1}{n+1}\, \frac{d}{dz} \sum_{s=0}^{k} \frac{(-k)_s \left(\frac{1}{2}\right)_s}{\left(\frac{3}{2}\right)_s}\, \frac{n+1}{n-2s+1}\, \frac{x^{2s}}{s!}\, z^{n-2s+1} \\
&= z^n \sum_{s=0}^{k} \frac{(-k)_s \left(\frac{1}{2}\right)_s}{\left(\frac{3}{2}\right)_s\, s!} \left(\frac{x^2}{z^2}\right)^s \\
&= z^n\, F\!\left(-k, \frac{1}{2}; \frac{3}{2}; \frac{x^2}{z^2}\right),
\end{aligned}
\]
coinciding with the integrand on the left-hand side of (1.9.18). The lemma is proved.

Consider the $(m-1)$-dimensional unit sphere in the Euclidean space $\mathbb{R}^m$, $m\ge 2$:
\[
S_1^m = \bigl\{ x=(x_1,\dots,x_m)\in\mathbb{R}^m : \|x\|^2 = x_1^2+\dots+x_m^2 = 1 \bigr\}.
\]

Lemma 1.9.7. For any dimension $m\ge 2$ and for arbitrary real constant $C$ the following equality holds:
\[
\int_{S_1^m} e^{iC\langle\alpha,x\rangle}\, \sigma(dx) = (2\pi)^{m/2}\, \frac{J_{(m-2)/2}(C\|\alpha\|)}{(C\|\alpha\|)^{(m-2)/2}}, \tag{1.9.19}
\]
where $\langle\alpha,x\rangle$ is the inner product of the real-valued $m$-dimensional vectors $\alpha=(\alpha_1,\dots,\alpha_m)$ and $x=(x_1,\dots,x_m)$, $\|\alpha\| = \sqrt{\alpha_1^2+\dots+\alpha_m^2}$, $J_{(m-2)/2}(x)$ is the Bessel function of order $(m-2)/2$ and $\sigma(\cdot)$ is the Lebesgue measure on the surface of the sphere $S_1^m$.

Proof. According to [63, Formula 4.644], for any dimension $m\ge 2$, we have:
\[
\begin{aligned}
\int_{S_1^m} e^{iC\langle\alpha,x\rangle}\, \sigma(dx)
&= \idotsint\limits_{x_1^2+\dots+x_m^2=1} e^{iC\langle\alpha,x\rangle}\, \sigma(dx) \\
&= \frac{2\pi^{(m-1)/2}}{\Gamma\!\left(\frac{m-1}{2}\right)} \int_0^{\pi} e^{iC\|\alpha\|\cos\theta}\, (\sin\theta)^{m-2}\, d\theta \\
&= \frac{2\pi^{(m-1)/2}}{\Gamma\!\left(\frac{m-1}{2}\right)}\; \Gamma\!\left(\frac{m-1}{2}\right) \Gamma\!\left(\frac{1}{2}\right) 2^{(m-2)/2}\, \frac{J_{(m-2)/2}(C\|\alpha\|)}{(C\|\alpha\|)^{(m-2)/2}}\, \frac{1}{\sqrt{\pi}} \cdot \sqrt{\pi} \\
&= (2\pi)^{m/2}\, \frac{J_{(m-2)/2}(C\|\alpha\|)}{(C\|\alpha\|)^{(m-2)/2}},
\end{aligned}
\]
where in the last step we have used the well-known Poisson integral representation of the Bessel function (see [63, Formula 8.411(7)]) and the equality $\Gamma\!\left(\frac{1}{2}\right)=\sqrt{\pi}$. The lemma is proved.

Note that, for $m=2$, formula (1.9.19) yields the well-known integral representation:
\[
\int_{S_1^2} e^{iC\langle\alpha,x\rangle}\, \sigma(dx) = \int_0^{2\pi} e^{iC(\alpha_1\cos\theta + \alpha_2\sin\theta)}\, d\theta = 2\pi\, J_0(C\|\alpha\|).
\]
For $m=3$, equality (1.9.19) turns into the formula:
\[
\int_{S_1^3} e^{iC\langle\alpha,x\rangle}\, \sigma(dx) = \iiint\limits_{x_1^2+x_2^2+x_3^2=1} e^{iC\langle\alpha,x\rangle}\, \sigma(dx) = (2\pi)^{3/2}\, \frac{J_{1/2}(C\|\alpha\|)}{(C\|\alpha\|)^{1/2}} = 4\pi\, \frac{\sin(C\|\alpha\|)}{C\|\alpha\|}.
\]
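The $m=3$ case is easy to confirm numerically: by rotational symmetry one may take $\alpha$ along the $z$-axis, and in spherical coordinates the surface integral over the unit sphere reduces to a one-dimensional integral. A sketch using scipy (parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import quad

# Check the m = 3 case of (1.9.19). With alpha along the z-axis, r = ||alpha||:
#   integral over S_1^3 of exp(i*C*<alpha,x>) sigma(dx)
#     = 2*pi * integral_0^pi exp(i*C*r*cos(theta)) sin(theta) d(theta)
C, r = 1.7, 0.8

# the imaginary part integrates to zero by symmetry about theta = pi/2
lhs = 2 * np.pi * quad(lambda th: np.cos(C * r * np.cos(th)) * np.sin(th), 0, np.pi)[0]
rhs = 4 * np.pi * np.sin(C * r) / (C * r)
print(lhs, rhs)
```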

One should note that formula (1.9.19) can also be obtained by applying the mean-value theorem for harmonic functions.

Lemma 1.9.8. For arbitrary real constant $k$ the following relation holds:
\[
L\left[ \frac{\sin(kt)}{t}\, \mathrm{Si}(2kt) + \frac{\cos(kt)}{t}\, \mathrm{Ci}(2kt) \right](s) = \left( \operatorname{arctg} \frac{k}{s} \right)^2, \qquad \operatorname{Re} s > 0, \tag{1.9.20}
\]
where $L$ is the Laplace transform and the functions $\mathrm{Si}(x)$ and $\mathrm{Ci}(x)$ are the incomplete integral sine and cosine, respectively, given by the formulas:
\[
\mathrm{Si}(x) = \int_0^x \frac{\sin\xi}{\xi}\, d\xi, \qquad \mathrm{Ci}(x) = \int_0^x \frac{\cos\xi - 1}{\xi}\, d\xi. \tag{1.9.21}
\]
The inverse Laplace transformation of equality (1.9.20) yields:
\[
L^{-1}\left[ \left( \operatorname{arctg} \frac{k}{s} \right)^2 \right](t) = \frac{1}{t} \bigl[ \sin(kt)\, \mathrm{Si}(2kt) + \cos(kt)\, \mathrm{Ci}(2kt) \bigr]. \tag{1.9.22}
\]

Proof. Consider the convolution:
\[
\begin{aligned}
\frac{\sin(kt)}{t} * \frac{\sin(kt)}{t}
&= \int_0^t \frac{\sin(k\tau)}{\tau}\, \frac{\sin(k(t-\tau))}{t-\tau}\, d\tau \\
&= \frac{1}{t} \int_0^t \sin(k\tau)\, \sin(k(t-\tau)) \left( \frac{1}{\tau} + \frac{1}{t-\tau} \right) d\tau \\
&= \frac{2}{t} \int_0^t \frac{\sin(k\tau)\, \sin(k(t-\tau))}{\tau}\, d\tau \\
&= \frac{2}{t} \int_0^t \frac{\sin(k\tau)}{\tau} \bigl[ \sin(kt)\cos(k\tau) - \sin(k\tau)\cos(kt) \bigr]\, d\tau \\
&= \frac{\sin(kt)}{t} \int_0^t \frac{2\sin(k\tau)\cos(k\tau)}{\tau}\, d\tau - \frac{\cos(kt)}{t} \int_0^t \frac{2\sin^2(k\tau)}{\tau}\, d\tau \\
&= \frac{\sin(kt)}{t} \int_0^t \frac{\sin(2k\tau)}{\tau}\, d\tau - \frac{\cos(kt)}{t} \int_0^t \frac{1-\cos(2k\tau)}{\tau}\, d\tau \\
&= \frac{\sin(kt)}{t}\, \mathrm{Si}(2kt) + \frac{\cos(kt)}{t}\, \mathrm{Ci}(2kt).
\end{aligned} \tag{1.9.23}
\]
Applying the Laplace transformation to both sides of (1.9.23) and using the formula
\[
L\left[ \frac{\sin(kt)}{t} \right](s) = \operatorname{arctg} \frac{k}{s}
\]
(see, for instance, [118, Table 8.4-1, Formula 107]), we arrive at (1.9.20). The lemma is proved.

Lemma 1.9.9. For arbitrary $a>0$ the following equality holds:
\[
\int_0^1 \sin(ax)\, \ln\!\left( \frac{1+x}{1-x} \right) dx = \frac{1}{a} \bigl[ \sin a\; \mathrm{Si}(2a) + \cos a\; \mathrm{Ci}(2a) \bigr], \tag{1.9.24}
\]

where the integral is treated in the improper sense and the functions $\mathrm{Si}(x)$ and $\mathrm{Ci}(x)$ are given by formulas (1.9.21).

Proof. We have:
\[
\int_0^1 \sin(ax)\, \ln\!\left( \frac{1+x}{1-x} \right) dx = \int_0^1 \sin(ax)\, \ln(1+x)\, dx - \int_0^1 \sin(ax)\, \ln(1-x)\, dx. \tag{1.9.25}
\]
Let us evaluate the integrals on the right-hand side of (1.9.25) separately. Integrating by parts and applying [63, Formula 2.641(2)], we obtain for the first integral in (1.9.25):
\[
\begin{aligned}
\int_0^1 \sin(ax)\, \ln(1+x)\, dx
&= -\frac{1}{a} \left[ \cos a\, \ln 2 - \int_0^1 \frac{\cos(ax)}{1+x}\, dx \right] \\
&= -\frac{\cos a}{a}\, \ln 2 + \frac{1}{a} \Bigl[ \cos a\, \bigl( \mathrm{ci}(2a) - \mathrm{ci}(a) \bigr) + \sin a\, \bigl( \mathrm{si}(2a) - \mathrm{si}(a) \bigr) \Bigr].
\end{aligned} \tag{1.9.26}
\]
Here $\mathrm{si}(x)$ and $\mathrm{ci}(x)$ are the complete integral sine and cosine, respectively, given by (see, for instance, [63, Formulas 8.230]):
\[
\mathrm{si}(x) = -\frac{\pi}{2} + \mathrm{Si}(x), \qquad \mathrm{ci}(x) = C + \ln x + \mathrm{Ci}(x), \tag{1.9.27}
\]
where $C = 0.5772\ldots$ is the Euler constant. From (1.9.27) we easily obtain:
\[
\mathrm{ci}(2a) - \mathrm{ci}(a) = \ln 2 + \mathrm{Ci}(2a) - \mathrm{Ci}(a), \qquad \mathrm{si}(2a) - \mathrm{si}(a) = \mathrm{Si}(2a) - \mathrm{Si}(a).
\]
Substituting these relations into (1.9.26), we obtain the first integral in (1.9.25):
\[
\int_0^1 \sin(ax)\, \ln(1+x)\, dx = \frac{\cos a}{a} \bigl[ \mathrm{Ci}(2a) - \mathrm{Ci}(a) \bigr] + \frac{\sin a}{a} \bigl[ \mathrm{Si}(2a) - \mathrm{Si}(a) \bigr]. \tag{1.9.28}
\]

The integrand of the second integral in (1.9.25) is obviously unbounded at the point $x=1$; therefore, this integral can be evaluated in the improper sense only. Similarly to the previous one, integrating by parts and applying [63, Formula 2.641(2)], we get:
\[
\begin{aligned}
\int_0^1 \sin(ax)\, \ln(1-x)\, dx
&= \lim_{\varepsilon\to 0^+} \int_0^{1-\varepsilon} \sin(ax)\, \ln(1-x)\, dx \\
&= -\frac{1}{a} \lim_{\varepsilon\to 0^+} \left[ \cos\bigl(a(1-\varepsilon)\bigr)\, \ln\varepsilon + \int_0^{1-\varepsilon} \frac{\cos(ax)}{1-x}\, dx \right] \\
&= -\frac{1}{a} \lim_{\varepsilon\to 0^+} \Bigl\{ \cos\bigl(a(1-\varepsilon)\bigr)\, \ln\varepsilon \\
&\qquad\qquad - \bigl[ \cos a\, \bigl( \mathrm{ci}(-a\varepsilon) - \mathrm{ci}(-a) \bigr) - \sin a\, \bigl( \mathrm{si}(-a\varepsilon) - \mathrm{si}(-a) \bigr) \bigr] \Bigr\}.
\end{aligned} \tag{1.9.29}
\]
Using now (1.9.27) and the obvious equalities
\[
\mathrm{si}(-x) = -\mathrm{si}(x) - \pi, \qquad \mathrm{Ci}(-x) = \mathrm{Ci}(x),
\]
we have:
\[
\mathrm{ci}(-a\varepsilon) - \mathrm{ci}(-a) = \ln\varepsilon + \mathrm{Ci}(a\varepsilon) - \mathrm{Ci}(a), \qquad \mathrm{si}(-a\varepsilon) - \mathrm{si}(-a) = \mathrm{Si}(a) - \mathrm{Si}(a\varepsilon).
\]
Substituting these relations into (1.9.29), we obtain the expression for the second integral on the right-hand side of (1.9.25):
\[
\begin{aligned}
\int_0^1 \sin(ax)\, \ln(1-x)\, dx
&= -\frac{1}{a} \lim_{\varepsilon\to 0^+} \Bigl\{ \bigl( \cos\bigl(a(1-\varepsilon)\bigr) - \cos a \bigr) \ln\varepsilon \\
&\qquad\qquad - \cos a\, \bigl( \mathrm{Ci}(a\varepsilon) - \mathrm{Ci}(a) \bigr) + \sin a\, \bigl( \mathrm{Si}(a) - \mathrm{Si}(a\varepsilon) \bigr) \Bigr\} \\
&= -\frac{1}{a} \bigl[ \sin a\; \mathrm{Si}(a) + \cos a\; \mathrm{Ci}(a) \bigr].
\end{aligned} \tag{1.9.30}
\]


Substituting (1.9.28) and (1.9.30) into (1.9.25), we finally arrive at (1.9.24). The lemma is completely proved.

The next lemma concerns the value of the Gauss hypergeometric function at the point $z=1$ for some special combination of its coefficients.

Lemma 1.9.10. For any integer $k$ such that $1\le k\le n+1$, $n\ge 2$, the following relation holds:
\[
F(-(n+k-2),\, k+3;\, 4;\, 1) = 0, \tag{1.9.31}
\]
where $F(\xi,\eta;\zeta;z)$ is the Gauss hypergeometric function.

Proof. The proof will be given by induction. For $k=1$, in view of (1.6.17), we have:
\[
F(-(n-1),\, 4;\, 4;\, 1) = (1-1)^{n-1} = 0,
\]
and (1.9.31) is fulfilled. Suppose that (1.9.31) is also valid for some $k=q$, that is,
\[
F(-(n+q-2),\, q+3;\, 4;\, 1) = 0. \tag{1.9.32}
\]
Let us show that (1.9.31) then holds for $k=q+1$ as well. We note the recurrence relations
\[
F(\xi-1,\eta;\zeta;1) = \frac{\zeta-\eta-\xi}{\zeta-\xi}\, F(\xi,\eta;\zeta;1), \qquad
F(\xi,\eta;\zeta;1) = \frac{\zeta-\eta}{\zeta-\eta-\xi}\, F(\xi,\eta-1;\zeta;1), \tag{1.9.33}
\]
which are the particular cases, for $z=1$, of the well-known Gauss recurrence relations (see, for instance, [63, Formulas 9.137(9) and 9.137(10)], respectively). The left-hand side of (1.9.31), for $k=q+1$, is
\[
\begin{aligned}
F(-(n+(q+1)-2),\, (q+1)+3;\, 4;\, 1)
&= F(-(n+q-2)-1,\, q+4;\, 4;\, 1) \\
&\quad \text{(by the first relation in (1.9.33))} \\
&= \frac{n-2}{n+q+2}\, F(-(n+q-2),\, q+4;\, 4;\, 1) \\
&\quad \text{(by the second relation in (1.9.33))} \\
&= -\frac{q}{n+q+2}\, F(-(n+q-2),\, q+3;\, 4;\, 1) = 0,
\end{aligned}
\]
in view of induction assumption (1.9.32). The lemma is proved.

The next two lemmas concern the calculation of some definite integrals of the modified Bessel function.

Lemma 1.9.11. For arbitrary $q>0$ and for any integer $n\ge 0$ the following formula holds:
\[
\int_0^1 x^n\, I_0\bigl(q\sqrt{1-x^2}\bigr)\, dx = 2^{(n-1)/2}\, \Gamma\!\left(\frac{n+1}{2}\right) \frac{I_{(n+1)/2}(q)}{q^{(n+1)/2}}, \qquad q>0, \quad n\ge 0. \tag{1.9.34}
\]

Proof. By introducing the new variable $z = \sqrt{1-x^2}$ in the integral on the left-hand side of (1.9.34), we obtain:

\[
\begin{aligned}
\int_0^1 x^n\, I_0\bigl(q\sqrt{1-x^2}\bigr)\, dx
&= \int_0^1 z\,(1-z^2)^{(n-1)/2}\, I_0(qz)\, dz \\
&= \frac{1}{2} \int_0^1 (1-\xi)^{(n-1)/2}\, I_0\bigl(q\sqrt{\xi}\bigr)\, d\xi \\
&= \frac{1}{2} \int_0^1 (1-\xi)^{(n-1)/2} \sum_{k=0}^{\infty} \frac{1}{(k!)^2} \left(\frac{q\sqrt{\xi}}{2}\right)^{2k} d\xi \\
&= \frac{1}{2} \sum_{k=0}^{\infty} \frac{1}{(k!)^2} \left(\frac{q}{2}\right)^{2k} \int_0^1 \xi^k\,(1-\xi)^{(n-1)/2}\, d\xi \\
&= \frac{1}{2} \sum_{k=0}^{\infty} \frac{1}{(k!)^2} \left(\frac{q}{2}\right)^{2k} B\!\left(\frac{n+1}{2},\, k+1\right) \\
&= \frac{1}{2} \sum_{k=0}^{\infty} \frac{1}{(k!)^2} \left(\frac{q}{2}\right)^{2k} \frac{\Gamma\!\left(\frac{n+1}{2}\right)\Gamma(k+1)}{\Gamma\!\left(\frac{n+1}{2}+k+1\right)} \\
&= \frac{1}{2}\, \Gamma\!\left(\frac{n+1}{2}\right) \sum_{k=0}^{\infty} \frac{1}{k!\; \Gamma\!\left(\frac{n+1}{2}+k+1\right)} \left(\frac{q}{2}\right)^{2k} \\
&= \frac{1}{2}\, \Gamma\!\left(\frac{n+1}{2}\right) \left(\frac{2}{q}\right)^{(n+1)/2} \sum_{k=0}^{\infty} \frac{1}{k!\; \Gamma\!\left(\frac{n+1}{2}+k+1\right)} \left(\frac{q}{2}\right)^{2k+(n+1)/2} \\
&= \frac{1}{2}\, \Gamma\!\left(\frac{n+1}{2}\right) \left(\frac{2}{q}\right)^{(n+1)/2} I_{(n+1)/2}(q) \\
&= 2^{(n-1)/2}\, \Gamma\!\left(\frac{n+1}{2}\right) \frac{I_{(n+1)/2}(q)}{q^{(n+1)/2}},
\end{aligned}
\]
proving (1.9.34).

Lemma 1.9.12. For arbitrary $a>0$, $b>0$ and $q>0$ the following formula holds:

\[
\int_0^1 e^{ax^2+bx}\, I_0\bigl(q\sqrt{1-x^2}\bigr)\, dx
= \sum_{n=0}^{\infty} \sum_{k=0}^{n} \frac{a^{n-k}\, b^{k}}{k!\,(n-k)!}\; 2^{(2n-k-1)/2}\, \Gamma\!\left(\frac{2n-k+1}{2}\right) \frac{I_{(2n-k+1)/2}(q)}{q^{(2n-k+1)/2}}, \qquad a>0, \quad b>0, \quad q>0. \tag{1.9.35}
\]

Proof. By expanding the exponential function into a series and applying (1.9.34), we get:
\[
\begin{aligned}
\int_0^1 e^{ax^2+bx}\, I_0\bigl(q\sqrt{1-x^2}\bigr)\, dx
&= \sum_{n=0}^{\infty} \frac{1}{n!} \int_0^1 (ax^2+bx)^n\, I_0\bigl(q\sqrt{1-x^2}\bigr)\, dx \\
&= \sum_{n=0}^{\infty} \frac{1}{n!} \sum_{k=0}^{n} \binom{n}{k} a^{n-k}\, b^{k} \int_0^1 x^{2n-k}\, I_0\bigl(q\sqrt{1-x^2}\bigr)\, dx \\
&= \sum_{n=0}^{\infty} \sum_{k=0}^{n} \frac{a^{n-k}\, b^{k}}{k!\,(n-k)!}\; 2^{(2n-k-1)/2}\, \Gamma\!\left(\frac{2n-k+1}{2}\right) \frac{I_{(2n-k+1)/2}(q)}{q^{(2n-k+1)/2}},
\end{aligned}
\]
proving (1.9.35).
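Formula (1.9.34) can be verified numerically for several values of $n$ at once. A sketch using scipy's Bessel and Gamma functions (the parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, i0, iv

# Check (1.9.34): int_0^1 x^n I0(q*sqrt(1-x^2)) dx
#   = 2^{(n-1)/2} Gamma((n+1)/2) I_{(n+1)/2}(q) / q^{(n+1)/2}
q = 2.5
pairs = []
for n in (0, 1, 4, 7):
    lhs = quad(lambda x: x**n * i0(q * np.sqrt(1.0 - x * x)), 0.0, 1.0)[0]
    nu = (n + 1) / 2.0
    rhs = 2.0 ** ((n - 1) / 2.0) * gamma(nu) * iv(nu, q) / q**nu
    pairs.append((lhs, rhs))
    print(n, lhs, rhs)
```

For $n=0$ the right-hand side reduces to $\sinh(q)/q$, consistent with Lemma 1.9.3 taken over the half-interval.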

Chapter 2

Telegraph Processes

In this chapter the elements of the theory of the one-dimensional Goldstein-Kac telegraph process are given. Telegraph processes and their properties are very well studied in the literature (see, for instance, the recent textbook [115] and references therein). That is why the purpose of this chapter is not to present the whole modern theory of the telegraph processes (this would require a separate capacious book), but to give a gradual and friendly introduction to the theory with an exposition of some basic results (such as the distribution, the telegraph equation and its group symmetries, the characteristic function, convergence to a Wiener process, moments, the sum of two and the distance between two telegraph processes, linear combinations of several independent telegraph processes). Since, as noted above, the Goldstein-Kac telegraph process is a one-dimensional Markov random flight, this prepares the reader for the perception of its multidimensional counterparts studied in subsequent chapters and for tracing the arising analogies.

Bearing in mind that this chapter can serve as a good introduction to telegraph processes for students and postgraduates, the first sections are equipped with a number of exercises that can help them better understand the material. For the same reason, the presentation in the first sections of this chapter is given at a simpler level and is based on an infinitesimal approach. Many interesting results concerning various generalizations of the Goldstein-Kac telegraph process, such as the first exit and first passage times, maximum displacement, occupation time and other functionals of the telegraph process, motions with barriers and in inhomogeneous environments etc., are not included in the book (this would make it really immense), but the interested reader can easily find them in other sources (see, for example, the textbook [115] and references therein).

2.1 Definition of the process and structure of distribution

We start our consideration with a general definition of a stochastic process governed by a two-state Markov process. Let $(\Omega, \mathcal{F}, \mathbf{P})$ be a probability space. Consider the set of independent random variables $\{\varepsilon_n\}$, $n=1,2,\dots$, with exponential probability distribution functions
\[
\mathbf{P}\{\varepsilon_n < t\} = 1 - e^{-\lambda t}, \qquad t>0, \quad n=1,2,\dots,
\]
where $\lambda>0$ is the parameter of the distribution. By setting $\tau_0 = 0$ and
\[
\tau_n = \varepsilon_1 + \dots + \varepsilon_n, \qquad n=1,2,\dots,
\]
we define the sequence of random time instants $\{\tau_n\}$, $n=1,2,\dots$, whose distribution functions have the form:
\[
\mathbf{P}\{\tau_n \in (t, t+dt)\} = \frac{(\lambda t)^{n-1}}{(n-1)!}\, \lambda e^{-\lambda t}\, dt, \qquad n=1,2,\dots. \tag{2.1.1}
\]
This is the so-called gamma-distribution with parameters $(n, \lambda)$.


Consider the counting process $N(t) = \max\{k : \tau_k \le t\}$. One can check that the process $N(t)$ has the Poisson distribution, that is,
\[
\mathbf{P}\{N(t) = k\} = \frac{(\lambda t)^k}{k!}\, e^{-\lambda t}, \qquad k=0,1,2,\dots. \tag{2.1.2}
\]
Therefore, the process $N(t)$ can be treated as the number of Poissonian events that have occurred by time $t$. From (2.1.2) the asymptotic formulas follow:
\[
\mathbf{P}\{N(t) = 0\} = e^{-\lambda t} = 1 - \lambda t + o(t), \qquad
\mathbf{P}\{N(t) = 1\} = \lambda t\, e^{-\lambda t} = \lambda t + o(t), \qquad
\mathbf{P}\{N(t) \ge 2\} = o(t).
\]
Consider now a random variable $\alpha$ that takes only two values $\{0, 1\}$ (with some probabilities) and suppose that $\alpha$ is independent of $\varepsilon_n$, $n=1,2,\dots$. The continuous-time process $\xi(t)$, $t>0$, called the velocity process and taking only two values $\{\xi_0, \xi_1\}$, where $\xi_0$ and $\xi_1$ are two different real numbers, is defined as follows:
\[
\xi(t) =
\begin{cases}
\xi_{\alpha}, & \text{if } \tau_{2k} \le t < \tau_{2k+1}, \\
\xi_{1-\alpha}, & \text{if } \tau_{2k+1} \le t < \tau_{2k+2},
\end{cases}
\qquad \alpha = 0, 1, \quad k = 0, 1, 2, \dots. \tag{2.1.3}
\]
One can easily check that $\xi(t)$, $t>0$, is a Markov process and its transition probabilities have the form:
\[
\mathbf{P}\{\xi(t) = \xi_\alpha \mid \xi(0) = \xi_\alpha\} = e^{-\lambda t}\cosh(\lambda t), \qquad
\mathbf{P}\{\xi(t) = \xi_\alpha \mid \xi(0) = \xi_{1-\alpha}\} = e^{-\lambda t}\sinh(\lambda t), \qquad \alpha = 0, 1. \tag{2.1.4}
\]
Define now a random walk $\zeta(t)$, $t>0$, by the formula
\[
\zeta(t) = x_0 + \int_0^t \xi(s)\, ds, \qquad x_0 \in \mathbb{R}^1, \quad t>0. \tag{2.1.5}
\]
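The transition probabilities (2.1.4) can be collected into a $2\times 2$ transition matrix, whose semigroup property is the subject of Exercise 2.1.4 below. A quick numerical check (a sketch with numpy; the parameter values are illustrative):

```python
import numpy as np

# Transition matrix of the two-state velocity process, read off from (2.1.4):
# P(t) = e^{-lam*t} * [[cosh(lam*t), sinh(lam*t)],
#                      [sinh(lam*t), cosh(lam*t)]]
lam = 1.5

def P(t):
    return np.exp(-lam * t) * np.array(
        [[np.cosh(lam * t), np.sinh(lam * t)],
         [np.sinh(lam * t), np.cosh(lam * t)]]
    )

t, s = 0.4, 1.1
print(np.allclose(P(t + s), P(t) @ P(s)))   # semigroup property P(t+s) = P(t) P(s)
print(P(t).sum(axis=1))                     # rows sum to 1 (stochastic matrix)
```

The semigroup property holds because $P(t) = e^{t\Lambda}$ with the infinitesimal matrix $\Lambda = \bigl(\begin{smallmatrix} -\lambda & \lambda \\ \lambda & -\lambda \end{smallmatrix}\bigr)$.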

This is a stochastic process starting from some initial point $x_0\in\mathbb{R}^1$ and representing a motion at the random velocity $\xi_\alpha$ during the random time $\tau_1$. Then the velocity changes to $\xi_{1-\alpha}$ and the motion continues with this new velocity until the random time instant $\tau_2$, when the velocity switches back to $\xi_\alpha$, and so on. The integral term in (2.1.5) represents the displacement of the process from the initial point $x_0$ at time instant $t$.

The Goldstein-Kac telegraph process on the line is an important particular case of the general two-state stochastic process $\zeta(t)$, $t>0$, defined by (2.1.5). It represents the stochastic motion of a particle with a constant finite speed $c$ which, at the initial time instant $t=0$, starts from the origin $x_0=0$ of the real line $\mathbb{R}^1$, choosing the initial direction (positive or negative) at random with equal probabilities $1/2$. The particle's motion is controlled by a homogeneous Poisson process with rate $\lambda>0$ as follows. When a Poisson event occurs, the particle instantaneously changes its direction to the opposite one and keeps moving at the same speed $c$ until the next Poisson event occurs; then it changes its direction again, and so on.

Let $X(t)$ be the particle's position on the line $\mathbb{R}^1$ and let $D(t)$, $t\ge 0$, denote the particle's direction at time $t>0$. This is a two-state stochastic process that takes the value $D(t)=+1$ if the particle moves in the positive direction at time $t$, and $D(t)=-1$ if it moves in the negative direction at this moment. The initial direction $D(0)$ is a random variable such that
\[
\mathbf{P}\{D(0) = +1\} = \mathbf{P}\{D(0) = -1\} = \frac{1}{2}.
\]

Then the particle's position $X(t)$ on the line at arbitrary time instant $t>0$ is given by the formula:
\[
X(t) = c\, D(0) \int_0^t (-1)^{N(s)}\, ds, \tag{2.1.6}
\]
where $N(t)$ is the number of the Poisson events that have occurred by time instant $t$.

The distribution function $F(x,t) = \mathbf{P}\{X(t) < x\}$, $x\in\mathbb{R}^1$, $t>0$, of the process $X(t)$ consists of two components and has the structure
\[
F(x,t) = F_s(x,t) + F_{ac}(x,t), \qquad x\in\mathbb{R}^1, \quad t>0, \tag{2.1.7}
\]
where $F_s(x,t)$ and $F_{ac}(x,t)$ are the singular (with respect to the Lebesgue measure on $\mathbb{R}^1$) and the absolutely continuous components of the distribution, respectively. Since the speed $c$ is finite, at arbitrary time moment $t>0$ the process $X(t)$ is, with probability 1, concentrated in the closed interval $[-ct, ct]$, which is the support of the distribution of $X(t)$. The density $f(x,t)$, $x\in\mathbb{R}^1$, $t>0$, of distribution (2.1.7), which exists in the sense of generalized functions and is bounded, is also concentrated in the interval $[-ct, ct]$ and, similarly to (2.1.7), has the structure
\[
f(x,t) = f_s(x,t) + f_{ac}(x,t), \qquad x\in\mathbb{R}^1, \quad t>0, \tag{2.1.8}
\]
where $f_s(x,t)$ and $f_{ac}(x,t)$ are the densities of the singular and of the absolutely continuous components of the distribution of the telegraph process $X(t)$, respectively. We emphasize that the term 'density' is treated in the sense of generalized functions, which allows us to use it not only with respect to the absolutely continuous part of density (2.1.8), but also with respect to its singular part.

The singular component of the distribution is concentrated at the two terminal points $\pm ct$ of the interval $[-ct, ct]$, which corresponds to the case when no Poisson events occur until time moment $t$ and, hence, the particle does not change its initial direction. The probability of this event is obviously equal to $e^{-\lambda t}$ and, therefore,
\[
\mathbf{P}\{X(t) = -ct\} = \mathbf{P}\{X(t) = ct\} = \frac{1}{2}\, e^{-\lambda t}. \tag{2.1.9}
\]
Thus, the singular part of density (2.1.8) has the form:
\[
f_s(x,t) = \frac{e^{-\lambda t}}{2} \bigl[ \delta(ct+x) + \delta(ct-x) \bigr], \qquad x\in\mathbb{R}^1, \quad t>0, \tag{2.1.10}
\]
where $\delta(x)$ is the Dirac delta-function.

The absolutely continuous part of the distribution of the telegraph process $X(t)$ corresponds to the case when at least one Poisson event occurs until time moment $t$ and, therefore, the particle changes its initial direction at least once. In such a case, at time instant $t$, the particle is located in the open interval $(-ct, ct)$ and the probability of this event is
\[
\mathbf{P}\{X(t) \in (-ct, ct)\} = 1 - e^{-\lambda t}. \tag{2.1.11}
\]
The support of this part of the distribution is the open interval $(-ct, ct)$. Therefore, the absolutely continuous part of density (2.1.8) has the structure
\[
f_{ac}(x,t) = p(x,t)\, \Theta(ct - |x|), \qquad x\in\mathbb{R}^1, \quad t>0, \tag{2.1.12}
\]
where $p(x,t)$ is some positive function, which is absolutely continuous in $(-ct, ct)$, and $\Theta(x)$ is the Heaviside unit-step function
\[
\Theta(x) =
\begin{cases}
1, & x > 0, \\
0, & x \le 0.
\end{cases} \tag{2.1.13}
\]


Function $p(x,t)$ in (2.1.12) is the main aim of the forthcoming analysis.

Exercise 2.1.1. Prove formula (2.1.1). (Hint: use convolutions and induction in $n$.)

Exercise 2.1.2. Prove formula (2.1.2). (Hint: use formula (2.1.1).)

Exercise 2.1.3. Prove formula (2.1.4).

Exercise 2.1.4. Evaluate the transition matrix $P(t)$ of the process $\xi(t)$ and show that it possesses the semigroup property $P(t+s) = P(t)P(s)$.

Exercise 2.1.5. Prove that for any $t>0$
\[
\mathbf{P}\{D(t) = +1\} = \mathbf{P}\{D(t) = -1\} = \frac{1}{2}.
\]

Exercise 2.1.6. Prove that the Goldstein-Kac telegraph process $X(t)$, $t>0$, is not Markovian.

Exercise 2.1.7. Prove that the two-component stochastic process $(X(t), D(t))$, $t>0$, is Markovian.
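The structure of the distribution described above is easy to see in simulation. The following Monte Carlo sketch (numpy, fixed seed; all names are ours) samples telegraph-process paths and compares the empirical mass at the endpoints $\pm ct$ with the exact value $e^{-\lambda t}$ implied by (2.1.9):

```python
import numpy as np

# Monte Carlo sketch of the Goldstein-Kac telegraph process: sum the displacements
# between Poisson switching epochs and compare the probability of the event
# {no direction change by time t}, i.e. {|X(t)| = ct}, with e^{-lam*t} from (2.1.9).
rng = np.random.default_rng(0)
lam, c, t, n_paths = 1.0, 1.0, 1.0, 100_000

def sample_X():
    d = rng.choice([-1.0, 1.0])           # initial direction D(0), +-1 with prob 1/2
    x, s = 0.0, 0.0
    while True:
        eps = rng.exponential(1.0 / lam)  # waiting time until the next Poisson event
        if s + eps >= t:
            return x + c * d * (t - s)
        x += c * d * eps
        s += eps
        d = -d                            # reverse direction at the Poisson event

X = np.array([sample_X() for _ in range(n_paths)])
frac = np.mean(np.isclose(np.abs(X), c * t))   # empirical P{|X(t)| = ct}
print(frac, np.exp(-lam * t))                  # should be close to e^{-1} ~ 0.368
print(np.abs(X).max() <= c * t)                # support is [-ct, ct]
```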

2.2 Kolmogorov equation

Let $x\in(-ct, ct)$ be an arbitrary interior point of the interval $(-ct, ct)$ and let $dx>0$ be some increment such that $x+dx\in(-ct, ct)$. Consider the joint densities $f_+(x,t)$ and $f_-(x,t)$ of the particle's position and its direction at time moment $t$, defined by the formulas:
\[
\begin{aligned}
f_+(x,t)\, dx &= \mathbf{P}\{x < X(t) < x+dx,\; D(t) = +1\}, \\
f_-(x,t)\, dx &= \mathbf{P}\{x < X(t) < x+dx,\; D(t) = -1\},
\end{aligned}
\qquad x\in(-ct, ct), \quad t>0. \tag{2.2.1}
\]

Theorem 2.2.1. The joint densities $f_+ = f_+(x,t)$ and $f_- = f_-(x,t)$ satisfy the following hyperbolic system of first-order partial differential equations:
\[
\begin{aligned}
\frac{\partial f_+}{\partial t} &= -c\, \frac{\partial f_+}{\partial x} - \lambda f_+ + \lambda f_-, \\
\frac{\partial f_-}{\partial t} &= c\, \frac{\partial f_-}{\partial x} - \lambda f_- + \lambda f_+,
\end{aligned}
\qquad x\in(-ct, ct), \quad t>0. \tag{2.2.2}
\]

Proof. We derive only the first equation of system (2.2.2). Let $\Delta t>0$ be some infinitesimal increment in time and let $N(s,t)$, $s<t$, denote the number of Poisson events that have occurred in the time interval $(s,t)$. Then, according to the total probability formula, we have:
\[
\begin{aligned}
\mathbf{P}&\{X(t+\Delta t) < x,\; D(t+\Delta t) = +1\} \\
&= \mathbf{P}\{X(t) + c\Delta t < x,\; D(t) = +1,\; N(t, t+\Delta t) = 0\} \\
&\quad + \frac{1}{\Delta t} \int_t^{t+\Delta t} \mathbf{P}\{X(t) - c(\tau - t) + c(t+\Delta t-\tau) < x,\; D(t) = -1,\; N(t, t+\Delta t) = 1\}\, d\tau + o(\Delta t) \\
&= (1 - \lambda\Delta t)\, \mathbf{P}\{X(t) < x - c\Delta t,\; D(t) = +1\} \\
&\quad + \lambda \int_t^{t+\Delta t} \mathbf{P}\{X(t) < x + c(\tau-t) - c(t+\Delta t-\tau),\; D(t) = -1\}\, d\tau + o(\Delta t),
\end{aligned}
\]


where in the first step we have used the well-known fact that the moment of occurrence of a single Poisson event in a time interval is a random variable uniformly distributed in this interval. Since the probability is a continuous function, in view of the mean-value theorem of classical analysis there exists an interior point $\tau^*\in(t, t+\Delta t)$ such that
\[
\int_t^{t+\Delta t} \mathbf{P}\{X(t) < x + c(\tau-t) - c(t+\Delta t-\tau),\; D(t) = -1\}\, d\tau
= \Delta t\, \mathbf{P}\{X(t) < x + c(\tau^*-t) - c(t+\Delta t-\tau^*),\; D(t) = -1\}.
\]
Therefore,
\[
\begin{aligned}
\mathbf{P}\{X(t+\Delta t) < x,\; D(t+\Delta t) = +1\}
&= (1-\lambda\Delta t)\, \mathbf{P}\{X(t) < x - c\Delta t,\; D(t) = +1\} \\
&\quad + \lambda\Delta t\, \mathbf{P}\{X(t) < x + c(\tau^*-t) - c(t+\Delta t-\tau^*),\; D(t) = -1\} + o(\Delta t).
\end{aligned}
\]
This equality can be rewritten in terms of densities (2.2.1) as follows:
\[
\int_{-\infty}^{x} f_+(\xi, t+\Delta t)\, d\xi
= (1-\lambda\Delta t) \int_{-\infty}^{x-c\Delta t} f_+(\xi, t)\, d\xi
+ \lambda\Delta t \int_{-\infty}^{x+c(\tau^*-t)-c(t+\Delta t-\tau^*)} f_-(\xi, t)\, d\xi + o(\Delta t).
\]
This formula can be represented in the form:
\[
\begin{aligned}
\int_{-\infty}^{x} \bigl[ f_+(\xi, t+\Delta t) - f_+(\xi, t) \bigr]\, d\xi
&= -\left[ \int_{-\infty}^{x} f_+(\xi, t)\, d\xi - \int_{-\infty}^{x-c\Delta t} f_+(\xi, t)\, d\xi \right]
- \lambda\Delta t \int_{-\infty}^{x-c\Delta t} f_+(\xi, t)\, d\xi \\
&\quad + \lambda\Delta t \int_{-\infty}^{x+c(\tau^*-t)-c(t+\Delta t-\tau^*)} f_-(\xi, t)\, d\xi + o(\Delta t).
\end{aligned}
\]
Dividing this expression by $\Delta t$, we rewrite it as follows:
\[
\begin{aligned}
\int_{-\infty}^{x} \frac{f_+(\xi, t+\Delta t) - f_+(\xi, t)}{\Delta t}\, d\xi
&= -c\, \frac{1}{c\Delta t} \left[ \int_{-\infty}^{x} f_+(\xi, t)\, d\xi - \int_{-\infty}^{x-c\Delta t} f_+(\xi, t)\, d\xi \right]
- \lambda \int_{-\infty}^{x-c\Delta t} f_+(\xi, t)\, d\xi \\
&\quad + \lambda \int_{-\infty}^{x+c(\tau^*-t)-c(t+\Delta t-\tau^*)} f_-(\xi, t)\, d\xi + \frac{o(\Delta t)}{\Delta t}.
\end{aligned}
\]
By passing to the limit, as $\Delta t\to 0$, and taking into account that $\tau^*\to t$ in this case, we


get:
\[
\int_{-\infty}^{x} \frac{\partial f_+(\xi, t)}{\partial t}\, d\xi
= -c f_+(x,t) - \lambda \int_{-\infty}^{x} f_+(\xi, t)\, d\xi + \lambda \int_{-\infty}^{x} f_-(\xi, t)\, d\xi.
\]
By differentiating this equality in $x$, we obtain the first equation of system (2.2.2). The second equation can be derived in the same manner. The theorem is proved.

Introducing the notations
\[
D = \begin{pmatrix} -c\, \dfrac{\partial}{\partial x} & 0 \\[2mm] 0 & c\, \dfrac{\partial}{\partial x} \end{pmatrix}, \qquad
\mathbf{f} = \begin{pmatrix} f_+ \\ f_- \end{pmatrix}, \qquad
\Lambda = \begin{pmatrix} -\lambda & \lambda \\ \lambda & -\lambda \end{pmatrix}, \tag{2.2.3}
\]
we can rewrite system (2.2.2) in the general form:
\[
\frac{\partial \mathbf{f}}{\partial t} = D\mathbf{f} + \Lambda\mathbf{f}. \tag{2.2.4}
\]
Equation (2.2.4), as well as system (2.2.2), represents the backward Kolmogorov equation written for the joint densities $f_\pm(x,t)$ of the Goldstein-Kac telegraph process $X(t)$. The matrix differential operator $D+\Lambda$ here is the generator (i.e. the infinitesimal operator) of the telegraph process $X(t)$, while the matrix $\Lambda$ is the infinitesimal matrix of the embedded two-state Markov chain controlling the process $X(t)$.

Exercise 2.2.1. Derive the second equation of system (2.2.2).

Exercise 2.2.2. Justify the passage to the limit in the derivation of system (2.2.2). (Hint: use the properties of the integral of a density with respect to the upper limit.)

2.3 Telegraph equation

Introduce the functions
\[
p(x,t) = f_+(x,t) + f_-(x,t), \qquad q(x,t) = f_+(x,t) - f_-(x,t). \tag{2.3.1}
\]
The function $p(x,t)$ is of special interest because it is exactly the probability density of the absolutely continuous component of the distribution of the telegraph process $X(t)$.

Theorem 2.3.1. The function $p = p(x,t)$, $x\in(-ct, ct)$, $t>0$, satisfies the second-order hyperbolic partial differential equation with constant coefficients
\[
\frac{\partial^2 p}{\partial t^2} + 2\lambda\, \frac{\partial p}{\partial t} - c^2\, \frac{\partial^2 p}{\partial x^2} = 0. \tag{2.3.2}
\]

Proof. We can give two different proofs of this theorem. Proof 1. We use functions (2.3.1) for eliminating the auxiliary function q = q(x, t). By adding and subtracting the equations of system (2.2.2), we get: ∂p ∂q = −c , ∂t ∂x

∂q ∂p = −c − 2λq. ∂t ∂x

(2.3.3)

Differentiating the first equation of system (2.3.3) in \(t\) and the second one in \(x\), we have:
\[
\frac{\partial^2 p}{\partial t^2} = -c\,\frac{\partial^2 q}{\partial t\,\partial x}, \qquad
\frac{\partial^2 q}{\partial x\,\partial t} = -c\,\frac{\partial^2 p}{\partial x^2} - 2\lambda\,\frac{\partial q}{\partial x}.
\]
By substituting the expression for the mixed derivative from the second equation of this system into the first one, we obtain:
\[
\frac{\partial^2 p}{\partial t^2} = c^2\,\frac{\partial^2 p}{\partial x^2} + 2\lambda c\,\frac{\partial q}{\partial x}. \tag{2.3.4}
\]
From the first equation of system (2.3.3) we have:
\[
\frac{\partial q}{\partial x} = -\frac{1}{c}\,\frac{\partial p}{\partial t}.
\]
Substituting this expression into (2.3.4), we finally arrive at (2.3.2).

Proof 2. Equation (2.3.2) can be derived in a simpler way by applying the Determinant Theorem (see Section 1.3). According to this theorem, in order to derive a differential equation from an arbitrary system of first-order differential equations with commuting differential operators, it suffices to evaluate the determinant of the matrix differential operator of this system. In our case, system (2.2.2) can be represented in the matrix form:
\[
\begin{pmatrix}
\dfrac{\partial}{\partial t} + c\,\dfrac{\partial}{\partial x} + \lambda & -\lambda \\[1ex]
-\lambda & \dfrac{\partial}{\partial t} - c\,\dfrac{\partial}{\partial x} + \lambda
\end{pmatrix}
\begin{pmatrix} f_+ \\ f_- \end{pmatrix} = 0. \tag{2.3.5}
\]
By computing the determinant of the matrix differential operator in (2.3.5), we get:
\[
\mathrm{Det}\begin{pmatrix}
\dfrac{\partial}{\partial t} + c\,\dfrac{\partial}{\partial x} + \lambda & -\lambda \\[1ex]
-\lambda & \dfrac{\partial}{\partial t} - c\,\dfrac{\partial}{\partial x} + \lambda
\end{pmatrix}
= \left(\frac{\partial}{\partial t} + \lambda\right)^2 - c^2\,\frac{\partial^2}{\partial x^2} - \lambda^2
= \frac{\partial^2}{\partial t^2} + 2\lambda\,\frac{\partial}{\partial t} - c^2\,\frac{\partial^2}{\partial x^2},
\]
and this is exactly the differential operator in (2.3.2). The theorem is proved.

Partial differential equation (2.3.2) is the classical equation of mathematical physics called the telegraph (or damped wave) equation. It is also a particular case of a more general transport equation (see, for example, [207, Section 2, item 4]). The function \(p(x,t)\) is exactly the function presented in formula (2.1.12) for the density of the absolutely continuous component of the Goldstein-Kac telegraph process \(X(t)\). Therefore, in order to find the complete density \(f(x,t)\) given by (2.1.8), one needs to solve the following Cauchy problem for the telegraph equation:
\[
\frac{\partial^2 f(x,t)}{\partial t^2} + 2\lambda\,\frac{\partial f(x,t)}{\partial t} - c^2\,\frac{\partial^2 f(x,t)}{\partial x^2} = 0,
\qquad f(x,t)\big|_{t=0} = \delta(x), \qquad \frac{\partial f(x,t)}{\partial t}\bigg|_{t=0} = 0. \tag{2.3.6}
\]

The first initial condition in (2.3.6) expresses the obvious fact that, at the initial time instant \(t=0\), the density of the process \(X(t)\) is entirely concentrated at the origin \(x=0\), while the second one means that the speed of spreading of the density from the origin outwards is constant in time and the environment is isotropic. Note that all the operations of differentiation in (2.3.6) are treated in the generalized sense, that is, as differentiation of generalized functions. Since the telegraph equation is hyperbolic, the Cauchy problem for it is well-posed, that is, the solution of problem (2.3.6) exists and is unique in the class of generalized functions.

Solving the Cauchy problem (2.3.6) is equivalent to solving the inhomogeneous telegraph equation
\[
\frac{\partial^2 f(x,t)}{\partial t^2} + 2\lambda\,\frac{\partial f(x,t)}{\partial t} - c^2\,\frac{\partial^2 f(x,t)}{\partial x^2} = \delta(x)\,\delta(t), \tag{2.3.7}
\]
where the generalized function on the right-hand side of this equation represents the instant unit point source concentrated, at the initial time instant \(t=0\), at the origin \(x=0\). From (2.3.7) it follows that the transition density \(f(x,t)\) of the telegraph process \(X(t)\) is the fundamental solution (Green's function) of the telegraph equation (2.3.2). This fact demonstrates the remarkable analogy between the Goldstein-Kac telegraph process \(X(t)\) and the Wiener process on the line \(\mathbb{R}^1\), whose transition density is likewise the fundamental solution of the one-dimensional heat equation.

Exercise 2.3.1. Prove that the function \(p(x,t)\), defined by (2.3.1), is the density of the absolutely continuous component of the distribution of the telegraph process \(X(t)\). (Hint: use Exercise 2.1.5.)

Exercise 2.3.2. Prove that each of the densities \(f_+(x,t)\) and \(f_-(x,t)\) satisfies the telegraph equation (2.3.2).
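A quick numerical cross-check of (2.3.2): substituting a plane-wave ansatz p = e^{st} cos(kx) into the telegraph equation yields the dispersion relation s² + 2λs + c²k² = 0, so for a root s of this relation the finite-difference residual of (2.3.2) should vanish. The parameter values and helper names below are illustrative assumptions, not the book's:

```python
import math

lam, c = 1.0, 2.0   # switching intensity and speed (illustrative values)
k = 0.3             # spatial frequency; here c^2 k^2 < lam^2, so the root is real

# Root of the dispersion relation s^2 + 2*lam*s + c^2*k^2 = 0
s = -lam + math.sqrt(lam**2 - c**2 * k**2)

def p(x, t):
    """Plane-wave solution e^{s t} cos(k x) of the telegraph equation."""
    return math.exp(s * t) * math.cos(k * x)

def residual(x, t, h=1e-3):
    """Central-difference evaluation of p_tt + 2*lam*p_t - c^2 * p_xx."""
    p_tt = (p(x, t + h) - 2 * p(x, t) + p(x, t - h)) / h**2
    p_t  = (p(x, t + h) - p(x, t - h)) / (2 * h)
    p_xx = (p(x + h, t) - 2 * p(x, t) + p(x - h, t)) / h**2
    return p_tt + 2 * lam * p_t - c**2 * p_xx

print(residual(0.7, 1.3))   # should be numerically zero
```

The residual is at the level of the finite-difference discretization error, far below the size of the individual terms.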

2.4  Characteristic function

In this section we study the characteristic function (the Fourier transform of the transition density) of the telegraph process \(X(t)\), defined by
\[
H(\alpha,t) = \mathcal{F}_x[f(x,t)](\alpha) = \int_{-ct}^{ct} e^{i\alpha x} f(x,t)\,dx, \qquad \alpha\in\mathbb{R}^1, \quad t>0. \tag{2.4.1}
\]
The explicit form of the function \(H(\alpha,t)\) is given by the following theorem.

Theorem 2.4.1. The characteristic function \(H(\alpha,t)\), \(\alpha\in\mathbb{R}^1\), \(t>0\), of the Goldstein-Kac telegraph process \(X(t)\) is given by the formula:
\[
H(\alpha,t) = e^{-\lambda t}\left\{\left[\cosh\!\left(t\sqrt{\lambda^2-c^2\alpha^2}\right)+\frac{\lambda}{\sqrt{\lambda^2-c^2\alpha^2}}\,\sinh\!\left(t\sqrt{\lambda^2-c^2\alpha^2}\right)\right]\mathbf{1}_{\{|\alpha|\le\lambda/c\}}
+\left[\cos\!\left(t\sqrt{c^2\alpha^2-\lambda^2}\right)+\frac{\lambda}{\sqrt{c^2\alpha^2-\lambda^2}}\,\sin\!\left(t\sqrt{c^2\alpha^2-\lambda^2}\right)\right]\mathbf{1}_{\{|\alpha|>\lambda/c\}}\right\}, \tag{2.4.2}
\]
where \(\mathbf{1}_{\{A\}}\) is the indicator function.

Proof. In view of (2.3.6), the characteristic function \(H(\alpha,t)\) is the solution of the Cauchy problem
\[
\frac{d^2H(\alpha,t)}{dt^2} + 2\lambda\,\frac{dH(\alpha,t)}{dt} + c^2\alpha^2\, H(\alpha,t) = 0, \qquad
H(\alpha,t)\big|_{t=0}=1, \qquad \frac{dH(\alpha,t)}{dt}\bigg|_{t=0}=0. \tag{2.4.3}
\]

The characteristic equation of the ordinary differential equation in (2.4.3) is \(z^2 + 2\lambda z + c^2\alpha^2 = 0\), with the roots
\[
z_1 = -\lambda - \sqrt{\lambda^2-c^2\alpha^2}, \qquad z_2 = -\lambda + \sqrt{\lambda^2-c^2\alpha^2}.
\]
Therefore, the general solution of the ordinary differential equation in (2.4.3) has the form:
\[
H(\alpha,t) = C_1\, e^{t\left(-\lambda-\sqrt{\lambda^2-c^2\alpha^2}\right)} + C_2\, e^{t\left(-\lambda+\sqrt{\lambda^2-c^2\alpha^2}\right)}, \tag{2.4.4}
\]
where \(C_1, C_2\) are coefficients not depending on \(t\). By using the initial conditions in (2.4.3), we obtain the system of linear equations for the coefficients:
\[
C_1 + C_2 = 1, \qquad
\left(-\lambda-\sqrt{\lambda^2-c^2\alpha^2}\right)C_1 + \left(-\lambda+\sqrt{\lambda^2-c^2\alpha^2}\right)C_2 = 0,
\]
whose solution is:
\[
C_1 = \frac12\left(1 - \frac{\lambda}{\sqrt{\lambda^2-c^2\alpha^2}}\right), \qquad
C_2 = \frac12\left(1 + \frac{\lambda}{\sqrt{\lambda^2-c^2\alpha^2}}\right).
\]
By substituting these coefficients into (2.4.4) we obtain (for \(|\alpha|\le\lambda/c\)):
\[
H(\alpha,t)
= e^{-\lambda t}\left[\frac{e^{t\sqrt{\lambda^2-c^2\alpha^2}}+e^{-t\sqrt{\lambda^2-c^2\alpha^2}}}{2}
+ \frac{\lambda}{\sqrt{\lambda^2-c^2\alpha^2}}\cdot\frac{e^{t\sqrt{\lambda^2-c^2\alpha^2}}-e^{-t\sqrt{\lambda^2-c^2\alpha^2}}}{2}\right]
= e^{-\lambda t}\left[\cosh\!\left(t\sqrt{\lambda^2-c^2\alpha^2}\right)+\frac{\lambda}{\sqrt{\lambda^2-c^2\alpha^2}}\,\sinh\!\left(t\sqrt{\lambda^2-c^2\alpha^2}\right)\right]. \tag{2.4.5}
\]
For \(|\alpha|>\lambda/c\), in a similar manner we obtain:
\[
H(\alpha,t) = e^{-\lambda t}\left[\cos\!\left(t\sqrt{c^2\alpha^2-\lambda^2}\right)+\frac{\lambda}{\sqrt{c^2\alpha^2-\lambda^2}}\,\sin\!\left(t\sqrt{c^2\alpha^2-\lambda^2}\right)\right]. \tag{2.4.6}
\]
Collecting together (2.4.5) and (2.4.6), we arrive at (2.4.2). Note that the function (2.4.2) is continuous in \(\alpha\) because at the points \(\alpha=\pm\lambda/c\) the functions (2.4.5) and (2.4.6) coincide. The theorem is proved.

Exercise 2.4.1. Prove the existence of the continuous bounded density of the Goldstein-Kac telegraph process X(t). (Hint: show that, for any t > 0, the characteristic function (2.4.2) is absolutely integrable with respect to α in R¹.)
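Formula (2.4.2) can be verified numerically against the Cauchy problem (2.4.3): both branches should satisfy the ordinary differential equation, meet the initial condition H(α,0) = 1, and agree at the branch point |α| = λ/c. A sketch with illustrative parameters (the helper names are ours, not the book's):

```python
import math

lam, c = 1.0, 2.0   # illustrative parameters; the branch point is |alpha| = lam/c = 0.5

def H(alpha, t):
    """Characteristic function (2.4.2) of the telegraph process."""
    d = lam**2 - c**2 * alpha**2
    r = math.sqrt(abs(d))
    if d == 0:                       # common limiting value of both branches
        return math.exp(-lam * t) * (1 + lam * t)
    if d > 0:                        # hyperbolic branch, |alpha| < lam/c
        return math.exp(-lam * t) * (math.cosh(r * t) + lam / r * math.sinh(r * t))
    return math.exp(-lam * t) * (math.cos(r * t) + lam / r * math.sin(r * t))

def ode_residual(alpha, t, h=1e-3):
    """Finite-difference check of H'' + 2*lam*H' + c^2*alpha^2*H = 0, cf. (2.4.3)."""
    H_tt = (H(alpha, t + h) - 2 * H(alpha, t) + H(alpha, t - h)) / h**2
    H_t  = (H(alpha, t + h) - H(alpha, t - h)) / (2 * h)
    return H_tt + 2 * lam * H_t + c**2 * alpha**2 * H(alpha, t)

print(ode_residual(0.3, 1.5), ode_residual(1.2, 1.5))   # both branches
print(H(0.5 - 1e-8, 2.0), H(0.5 + 1e-8, 2.0))           # continuity at lam/c
```

Both residuals are at discretization-error level, and the two branches match across the branch point, illustrating the continuity noted at the end of the proof of Theorem 2.4.1.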

2.5  Transition density

In this section we obtain a closed-form expression for the transition density \(f(x,t)\) of the Goldstein-Kac telegraph process \(X(t)\). According to Theorem 2.3.1, this is equivalent to finding the fundamental solution (Green's function) of equation (2.3.2), that is, to solving the Cauchy problem (2.3.6) or (what is the same) the inhomogeneous equation (2.3.7). This result is given by the following theorem.

Theorem 2.5.1. For any \(t>0\), the transition density \(f(x,t)\) of the telegraph process \(X(t)\) has the form:
\[
f(x,t) = \frac{e^{-\lambda t}}{2}\left[\delta(ct+x)+\delta(ct-x)\right]
+ \frac{e^{-\lambda t}}{2c}\left[\lambda\, I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)+\frac{\partial}{\partial t}\, I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)\right]\Theta(ct-|x|),
\qquad x\in(-\infty,\infty), \quad t>0, \tag{2.5.1}
\]
where
\[
I_0(z) = \sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{z}{2}\right)^{2k} \tag{2.5.2}
\]
is the modified Bessel function (that is, the Bessel function of imaginary argument) of order zero, \(\delta(x)\) is the Dirac delta-function and \(\Theta(x)\) is the Heaviside function (2.1.13).

Proof. We prove that the generalized function (2.5.1) is the solution of the Cauchy problem (2.3.6). The first initial condition \(f(x,t)|_{t=0}=\delta(x)\) is obviously fulfilled. Checking the second initial condition in (2.3.6) directly is difficult because evaluating the derivative of the generalized function \(f(x,t)\) is a fairly complicated problem. Instead, we check that the time derivative of the characteristic function (2.4.2) is equal to zero at \(t=0\). By differentiating (2.4.2) in time, we have (for \(|\alpha|\le\lambda/c\)):
\[
\frac{dH(\alpha,t)}{dt}
= -\lambda e^{-\lambda t}\left[\cosh\!\left(t\sqrt{\lambda^2-c^2\alpha^2}\right)+\frac{\lambda}{\sqrt{\lambda^2-c^2\alpha^2}}\,\sinh\!\left(t\sqrt{\lambda^2-c^2\alpha^2}\right)\right]
+ e^{-\lambda t}\left[\sqrt{\lambda^2-c^2\alpha^2}\,\sinh\!\left(t\sqrt{\lambda^2-c^2\alpha^2}\right)+\lambda\cosh\!\left(t\sqrt{\lambda^2-c^2\alpha^2}\right)\right].
\]
From this we get, for \(|\alpha|\le\lambda/c\), that
\[
\frac{dH(\alpha,t)}{dt}\bigg|_{t=0} = -\lambda+\lambda = 0.
\]
The same is easily proved for \(|\alpha|>\lambda/c\) as well. Therefore,
\[
\frac{\partial f(x,t)}{\partial t}\bigg|_{t=0} = 0
\]
and the second initial condition in (2.3.6) is also fulfilled.

It remains to check that the absolutely continuous part of density (2.5.1) solves the telegraph equation (2.3.2). To avoid evaluating the derivative of the generalized function \(f_{ac}(x,t)=p(x,t)\,\Theta(ct-|x|)\), we prove that the positive function, absolutely continuous in the interval \((-ct,ct)\),
\[
p(x,t) = \frac{e^{-\lambda t}}{2c}\left[\lambda\, I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)+\frac{\partial}{\partial t}\, I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)\right], \qquad |x|<ct, \tag{2.5.3}
\]
solves the telegraph equation (2.3.2). In this case, the derivatives mean the usual differentiation of the classical function \(p(x,t)\) defined in the open interval \((-ct,ct)\).

The substitution
\[
p(x,t) = e^{-\lambda t}\, w(x,t), \qquad |x|<ct, \tag{2.5.4}
\]
yields:
\[
\frac{\partial p}{\partial t} = e^{-\lambda t}\left(\frac{\partial}{\partial t}-\lambda\right)w, \qquad
\frac{\partial^2 p}{\partial t^2} = e^{-\lambda t}\left(\frac{\partial}{\partial t}-\lambda\right)^2 w, \qquad
\frac{\partial^2 p}{\partial x^2} = e^{-\lambda t}\,\frac{\partial^2 w}{\partial x^2},
\]
and, therefore, the telegraph equation (2.3.2) for the function \(p(x,t)\) transforms into the equation
\[
\left(\frac{\partial^2}{\partial t^2}-c^2\frac{\partial^2}{\partial x^2}\right)w(x,t) = \lambda^2\, w(x,t), \qquad |x|<ct, \tag{2.5.5}
\]
for the function \(w(x,t)\) given by the formula
\[
w(x,t) = \frac{1}{2c}\left[\lambda\, I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)+\frac{\partial}{\partial t}\, I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)\right]
= \frac{1}{2c}\left(\frac{\partial}{\partial t}+\lambda\right) I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right), \qquad |x|<ct. \tag{2.5.6}
\]
Thus, one needs to show that function (2.5.6) solves equation (2.5.5).

First, we prove that the function \(I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)\) itself solves equation (2.5.5), that is,
\[
\left(\frac{\partial^2}{\partial t^2}-c^2\frac{\partial^2}{\partial x^2}\right) I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)
= \lambda^2\, I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right), \qquad |x|<ct, \quad t>0. \tag{2.5.7}
\]
Since the series defining the modified Bessel function,
\[
I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right) = \sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda}{2c}\right)^{2k}(c^2t^2-x^2)^k,
\]
converges uniformly in \(x\) for any fixed \(t>0\), we have
\[
\left(\frac{\partial^2}{\partial t^2}-c^2\frac{\partial^2}{\partial x^2}\right) I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)
= \sum_{k=1}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda}{2c}\right)^{2k}\left(\frac{\partial^2}{\partial t^2}-c^2\frac{\partial^2}{\partial x^2}\right)(c^2t^2-x^2)^k. \tag{2.5.8}
\]
It is easy to see that
\[
\frac{\partial^2}{\partial t^2}(c^2t^2-x^2)^k = 2kc^2(c^2t^2-x^2)^{k-1} + 4c^4t^2k(k-1)(c^2t^2-x^2)^{k-2},
\]
\[
\frac{\partial^2}{\partial x^2}(c^2t^2-x^2)^k = -2k(c^2t^2-x^2)^{k-1} + 4x^2k(k-1)(c^2t^2-x^2)^{k-2}.
\]
Hence
\[
\left(\frac{\partial^2}{\partial t^2}-c^2\frac{\partial^2}{\partial x^2}\right)(c^2t^2-x^2)^k
= 2c^2k(c^2t^2-x^2)^{k-1} + 4c^4t^2k(k-1)(c^2t^2-x^2)^{k-2}
+ 2c^2k(c^2t^2-x^2)^{k-1} - 4c^2x^2k(k-1)(c^2t^2-x^2)^{k-2}
= 4c^2k(c^2t^2-x^2)^{k-1} + 4c^2k(k-1)(c^2t^2-x^2)^{k-1}
= 4c^2k^2(c^2t^2-x^2)^{k-1}.
\]
Substituting this expression into (2.5.8), we obtain:
\[
\left(\frac{\partial^2}{\partial t^2}-c^2\frac{\partial^2}{\partial x^2}\right) I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)
= \sum_{k=1}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda}{2c}\right)^{2k} 4c^2k^2(c^2t^2-x^2)^{k-1}
= \lambda^2\sum_{k=1}^{\infty}\frac{1}{((k-1)!)^2}\left(\frac{\lambda}{2c}\right)^{2k-2}(c^2t^2-x^2)^{k-1}
= \lambda^2\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda}{2c}\right)^{2k}(c^2t^2-x^2)^{k}
= \lambda^2\, I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right),
\]
proving (2.5.7). Therefore, applying formula (2.5.7) to function (2.5.6), we have, for \(|x|<ct\):
\[
\left(\frac{\partial^2}{\partial t^2}-c^2\frac{\partial^2}{\partial x^2}\right)w(x,t)
= \frac{1}{2c}\left(\frac{\partial^2}{\partial t^2}-c^2\frac{\partial^2}{\partial x^2}\right)\left(\frac{\partial}{\partial t}+\lambda\right) I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)
= \frac{1}{2c}\left(\frac{\partial}{\partial t}+\lambda\right)\left(\frac{\partial^2}{\partial t^2}-c^2\frac{\partial^2}{\partial x^2}\right) I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)
= \frac{\lambda^2}{2c}\left(\frac{\partial}{\partial t}+\lambda\right) I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)
= \lambda^2\, w(x,t),
\]

and formula (2.5.5) is thus proved.

One can check that the positive function \(f(x,t)\) is indeed a density, that is, for arbitrary \(t>0\), the following relation holds:
\[
\int_{-ct}^{ct} f(x,t)\,dx = 1. \tag{2.5.9}
\]
For the singular part of function (2.5.1) we have:
\[
\int_{-ct}^{ct} f_s(x,t)\,dx
= \frac{e^{-\lambda t}}{2}\left[\int_{-ct}^{ct}\delta(ct-x)\,dx+\int_{-ct}^{ct}\delta(ct+x)\,dx\right]
= e^{-\lambda t}, \tag{2.5.10}
\]
and this corresponds to (2.1.9). For the absolutely continuous part of function (2.5.1) we have:
\[
\int_{-ct}^{ct} p(x,t)\,dx
= \frac{e^{-\lambda t}}{2c}\left[\lambda\int_{-ct}^{ct} I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)dx
+ \int_{-ct}^{ct}\frac{\partial}{\partial t}\, I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)dx\right]. \tag{2.5.11}
\]
Let us evaluate separately the interior integrals in this equality. For the first integral we get:

\[
\int_{-ct}^{ct} I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)dx
= \sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda}{2c}\right)^{2k}\int_{-ct}^{ct}(c^2t^2-x^2)^k\,dx
= 2ct\sum_{k=0}^{\infty}\frac{(\lambda t)^{2k}}{(k!)^2\,2^{2k}}\int_0^1(1-z^2)^k\,dz
\]
(see [63, Formula 3.249(5)])
\[
= 2ct\sum_{k=0}^{\infty}\frac{(\lambda t)^{2k}}{(k!)^2\,2^{2k}}\cdot\frac{2^{2k}(k!)^2}{(2k+1)!}
= \frac{2c}{\lambda}\sum_{k=0}^{\infty}\frac{(\lambda t)^{2k+1}}{(2k+1)!}
= \frac{2c}{\lambda}\,\sinh(\lambda t). \tag{2.5.12}
\]
Using the just-proved formula (2.5.12) and taking into account that \(I_0(0)=1\), we obtain for the second integral in (2.5.11):
\[
\int_{-ct}^{ct}\frac{\partial}{\partial t}\, I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)dx
= \frac{\partial}{\partial t}\int_{-ct}^{ct} I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)dx - 2c
= 2c\cosh(\lambda t) - 2c. \tag{2.5.13}
\]
Substituting (2.5.12) and (2.5.13) into (2.5.11), we obtain:
\[
\int_{-ct}^{ct} p(x,t)\,dx
= \frac{e^{-\lambda t}}{2c}\left[\lambda\,\frac{2c}{\lambda}\sinh(\lambda t)+2c\cosh(\lambda t)-2c\right]
= e^{-\lambda t}\left[\sinh(\lambda t)+\cosh(\lambda t)-1\right]
= e^{-\lambda t}\left(e^{\lambda t}-1\right)
= 1-e^{-\lambda t}, \tag{2.5.14}
\]
and this coincides with (2.1.11). By summing up (2.5.10) and (2.5.14), we finally arrive at (2.5.9). The theorem is completely proved.

Remark 2.5.1. Taking into account that \(I_0'(z)=I_1(z)\), where
\[
I_1(z) = \sum_{k=0}^{\infty}\frac{1}{k!\,(k+1)!}\left(\frac{z}{2}\right)^{2k+1} \tag{2.5.15}
\]
is the modified Bessel function of order one, we get
\[
\frac{\partial}{\partial t}\, I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)
= \frac{\lambda c t}{\sqrt{c^2t^2-x^2}}\, I_1\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right),
\]
and, therefore, transition density (2.5.1) takes the alternative form:
\[
f(x,t) = \frac{e^{-\lambda t}}{2}\left[\delta(ct+x)+\delta(ct-x)\right]
+ \frac{\lambda e^{-\lambda t}}{2c}\left[I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)+\frac{ct}{\sqrt{c^2t^2-x^2}}\, I_1\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)\right]\Theta(ct-|x|),
\qquad x\in(-\infty,\infty), \quad t>0, \tag{2.5.16}
\]
where
\[
f_{ac}(x,t) = \frac{\lambda e^{-\lambda t}}{2c}\left[I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)+\frac{ct}{\sqrt{c^2t^2-x^2}}\, I_1\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)\right]\Theta(ct-|x|),
\qquad x\in(-\infty,\infty), \quad t>0, \tag{2.5.17}
\]
represents the absolutely continuous part of the density \(f(x,t)\).

Figure 2.1: The shape of density (2.5.17) at time t = 6 (for c = 5, λ = 1)

The shape of the probability density (2.5.17) at the time instant t = 6, for the parameter values c = 5, λ = 1, is plotted in Fig. 2.1.
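The normalization argument (2.5.9)-(2.5.14) can be reproduced numerically from the alternative form (2.5.17): integrating the absolutely continuous part over (-ct, ct) should give 1 - e^{-λt}, and adding the two boundary atoms of mass e^{-λt}/2 each restores total mass 1. A sketch using the series definitions (2.5.2) and (2.5.15) of I₀ and I₁, with the parameters of Fig. 2.1 (the helper names are ours):

```python
import math

def bessel_i(nu, z, terms=40):
    """I_nu(z), integer nu >= 0, via the power series (2.5.2)/(2.5.15)."""
    return sum((z / 2) ** (2 * k + nu) / (math.factorial(k) * math.factorial(k + nu))
               for k in range(terms))

def f_ac(x, t, c, lam):
    """Absolutely continuous part (2.5.17) of the transition density."""
    if abs(x) >= c * t:
        return 0.0
    root = math.sqrt(c * c * t * t - x * x)
    z = lam * root / c
    return (lam * math.exp(-lam * t) / (2 * c)) * (bessel_i(0, z) + c * t / root * bessel_i(1, z))

def simpson(f, a, b, n=2000):
    """Composite Simpson quadrature on [a, b]."""
    h = (b - a) / n
    return h / 3 * sum((1 if i in (0, n) else 4 if i % 2 else 2) * f(a + i * h)
                       for i in range(n + 1))

c, lam, t = 5.0, 1.0, 6.0   # the parameters of Fig. 2.1
eps = 1e-9                  # stay strictly inside (-ct, ct); f_ac extends continuously
ac_mass = simpson(lambda x: f_ac(x, t, c, lam), -c * t + eps, c * t - eps)
print(ac_mass, 1 - math.exp(-lam * t))      # a.c. mass, cf. (2.5.14)
print(ac_mass + math.exp(-lam * t))         # plus the two boundary atoms: total 1
```

Note that, although (2.5.17) involves square roots, both bracketed terms are in fact power series in c²t² - x², so the integrand is smooth up to the endpoints and the quadrature converges quickly.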

2.6  Probability distribution function

As noted above, at an arbitrary time instant \(t>0\) the process \(X(t)\) is concentrated in the interval \([-ct,ct]\). Let \(a,b\in\mathbb{R}^1\), \(a<b\), be arbitrary points of \(\mathbb{R}^1\) such that the intervals \((a,b)\) and \((-ct,ct)\) have a non-empty intersection, that is, \((a,b)\cap(-ct,ct)\ne\emptyset\). We are interested in the probability \(\Pr\{X(t)\in(a,b)\cap(-ct,ct)\}\) that the process \(X(t)\), at time instant \(t>0\), is located in the subinterval \((a,b)\cap(-ct,ct)\subseteq(-ct,ct)\). This result is presented by the following proposition.

Proposition 2.6.1. For an arbitrary time instant \(t>0\) and an arbitrary open interval \((a,b)\subset\mathbb{R}^1\), \(a,b\in\mathbb{R}^1\), \(a<b\), such that \((a,b)\cap(-ct,ct)\ne\emptyset\), the following formula holds:
\[
\Pr\{X(t)\in(a,b)\cap(-ct,ct)\}
= \frac{\lambda e^{-\lambda t}}{2c}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda t}{2}\right)^{2k}\left(1+\frac{\lambda t}{2k+2}\right)
\left[\beta\, F\!\left(-k,\frac12;\frac32;\frac{\beta^2}{c^2t^2}\right)-\alpha\, F\!\left(-k,\frac12;\frac32;\frac{\alpha^2}{c^2t^2}\right)\right], \tag{2.6.1}
\]
where
\[
\alpha = \max\{-ct,\,a\}, \qquad \beta = \min\{ct,\,b\} \tag{2.6.2}
\]
and
\[
F(\xi,\eta;\zeta;z) = {}_2F_1(\xi,\eta;\zeta;z) = \sum_{k=0}^{\infty}\frac{(\xi)_k(\eta)_k}{(\zeta)_k}\,\frac{z^k}{k!}
\]
is the Gauss hypergeometric function.

Proof. By integrating the absolutely continuous part of density (2.5.16) and applying formulas (1.9.4) and (1.9.5) (see below), we obtain:
\[
\Pr\{X(t)\in(a,b)\cap(-ct,ct)\}
= \frac{\lambda e^{-\lambda t}}{2c}\left[\int_\alpha^\beta I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)dx
+ ct\int_\alpha^\beta\frac{I_1\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)}{\sqrt{c^2t^2-x^2}}\,dx\right]
\]
\[
= \frac{\lambda e^{-\lambda t}}{2c}\left[\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda t}{2}\right)^{2k} x\, F\!\left(-k,\frac12;\frac32;\frac{x^2}{c^2t^2}\right)
+ \sum_{k=0}^{\infty}\frac{1}{k!\,(k+1)!}\left(\frac{\lambda t}{2}\right)^{2k+1} x\, F\!\left(-k,\frac12;\frac32;\frac{x^2}{c^2t^2}\right)\right]_{x=\alpha}^{x=\beta}
\]
\[
= \frac{\lambda e^{-\lambda t}}{2c}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda t}{2}\right)^{2k}\left(1+\frac{\lambda t}{2k+2}\right)
\left[\beta\, F\!\left(-k,\frac12;\frac32;\frac{\beta^2}{c^2t^2}\right)-\alpha\, F\!\left(-k,\frac12;\frac32;\frac{\alpha^2}{c^2t^2}\right)\right],
\]
proving (2.6.1).

Let \(x\in(-ct,ct)\) be an arbitrary interior point of the open interval \((-ct,ct)\) and let \(r>0\) be an arbitrary positive number such that \((x-r,x+r)\subseteq(-ct,ct)\). Then, according to (2.6.1), we obtain the following formula for the probability of being in the subinterval \((x-r,x+r)\subseteq(-ct,ct)\) of radius \(r\) centered at the point \(x\):

\[
\Pr\{X(t)\in(x-r,x+r)\}
= \frac{\lambda e^{-\lambda t}}{2c}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda t}{2}\right)^{2k}\left(1+\frac{\lambda t}{2k+2}\right)
\left[(x+r)\, F\!\left(-k,\frac12;\frac32;\frac{(x+r)^2}{c^2t^2}\right)-(x-r)\, F\!\left(-k,\frac12;\frac32;\frac{(x-r)^2}{c^2t^2}\right)\right], \tag{2.6.3}
\]
\[
-ct \le x-r < x+r \le ct.
\]
By setting \(x=0\) in (2.6.3) we obtain the formula:
\[
\Pr\{X(t)\in(-r,r)\}
= \frac{\lambda r e^{-\lambda t}}{c}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda t}{2}\right)^{2k}\left(1+\frac{\lambda t}{2k+2}\right) F\!\left(-k,\frac12;\frac32;\frac{r^2}{c^2t^2}\right), \tag{2.6.4}
\]
yielding the probability of being in the symmetric (with respect to the starting point \(x=0\)) subinterval \((-r,r)\subseteq(-ct,ct)\).

For further analysis we need the following useful formula
\[
F\!\left(-k,\frac12;\frac32;1\right) = \frac{(2k)!!}{(2k+1)!!} = \frac{2^k\,k!}{(2k+1)!!}, \qquad k\ge 0, \tag{2.6.5}
\]
which is a particular case of the more general relation (see [177, page 465, Formula 163])
\[
F\!\left(-k,\frac12;\frac32;z\right) = -\frac{2^k\,k!}{(2k+1)!!\,\sqrt{z}}\, C_{2k+1}^{-k-1/2}(\sqrt{z}), \qquad k\ge 0, \tag{2.6.6}
\]
where \(C_n^\nu(z)\) are the Gegenbauer polynomials with negative non-integer upper indices.

Setting \(r=ct\) in (2.6.4) and applying (2.6.5), we obtain the expected result:
\[
\Pr\{X(t)\in(-ct,ct)\}
= \lambda t\, e^{-\lambda t}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda t}{2}\right)^{2k}\left(1+\frac{\lambda t}{2k+2}\right) F\!\left(-k,\frac12;\frac32;1\right)
= \lambda t\, e^{-\lambda t}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda t}{2}\right)^{2k}\left(1+\frac{\lambda t}{2k+2}\right)\frac{2^k\,k!}{(2k+1)!!}
\]
\[
= e^{-\lambda t}\sum_{k=0}^{\infty}\frac{(\lambda t)^{2k+1}}{2^k\,k!\,(2k+1)!!}\left(1+\frac{\lambda t}{2k+2}\right)
= e^{-\lambda t}\sum_{k=0}^{\infty}\frac{(\lambda t)^{2k+1}}{(2k+1)!}\left(1+\frac{\lambda t}{2k+2}\right)
= e^{-\lambda t}\left[\sum_{k=0}^{\infty}\frac{(\lambda t)^{2k+1}}{(2k+1)!}+\sum_{k=0}^{\infty}\frac{(\lambda t)^{2k+2}}{(2k+2)!}\right]
\]
\[
= e^{-\lambda t}\left[\sinh(\lambda t)+\cosh(\lambda t)-1\right]
= e^{-\lambda t}\left(e^{\lambda t}-1\right)
= 1-e^{-\lambda t},
\]
exactly coinciding with (2.1.11).

From Proposition 2.6.1 we can extract the explicit form of the probability distribution function of \(X(t)\).

Theorem 2.6.2. The probability distribution function of the telegraph process \(X(t)\) has the form:
\[
\Pr\{X(t)<x\} =
\begin{cases}
0, & x\in(-\infty,-ct],\\[1ex]
\dfrac12+\dfrac{\lambda x e^{-\lambda t}}{2c}\displaystyle\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda t}{2}\right)^{2k}\left(1+\frac{\lambda t}{2k+2}\right) F\!\left(-k,\frac12;\frac32;\frac{x^2}{c^2t^2}\right), & x\in(-ct,ct],\\[1ex]
1, & x\in(ct,+\infty).
\end{cases} \tag{2.6.7}
\]

Proof. According to Proposition 2.6.1, for arbitrary \(x\in(-ct,ct)\) we have:
\[
\Pr\{X(t)\in(-ct,x)\}
= \frac{\lambda e^{-\lambda t}}{2c}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda t}{2}\right)^{2k}\left(1+\frac{\lambda t}{2k+2}\right)
\left[x\, F\!\left(-k,\frac12;\frac32;\frac{x^2}{c^2t^2}\right)+ct\, F\!\left(-k,\frac12;\frac32;1\right)\right]
\]
\[
= \frac{\lambda x e^{-\lambda t}}{2c}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda t}{2}\right)^{2k}\left(1+\frac{\lambda t}{2k+2}\right) F\!\left(-k,\frac12;\frac32;\frac{x^2}{c^2t^2}\right)
+ e^{-\lambda t}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda t}{2}\right)^{2k+1}\left(1+\frac{\lambda t}{2k+2}\right) F\!\left(-k,\frac12;\frac32;1\right).
\]
Consider separately the second term of this expression. Applying formula (2.6.5), we get:
\[
e^{-\lambda t}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda t}{2}\right)^{2k+1}\left(1+\frac{\lambda t}{2k+2}\right) F\!\left(-k,\frac12;\frac32;1\right)
= e^{-\lambda t}\sum_{k=0}^{\infty}\frac{2^k}{k!\,(2k+1)!!}\left(\frac{\lambda t}{2}\right)^{2k+1}\left(1+\frac{\lambda t}{2k+2}\right)
\]
\[
= \frac{e^{-\lambda t}}{2}\sum_{k=0}^{\infty}\frac{(\lambda t)^{2k+1}}{(2k+1)!}\left(1+\frac{\lambda t}{2k+2}\right)
= \frac{e^{-\lambda t}}{2}\left[\sum_{k=0}^{\infty}\frac{(\lambda t)^{2k+1}}{(2k+1)!}+\sum_{k=0}^{\infty}\frac{(\lambda t)^{2k+2}}{(2k+2)!}\right]
= \frac{e^{-\lambda t}}{2}\left[\sinh(\lambda t)+\cosh(\lambda t)-1\right]
= \frac12-\frac{e^{-\lambda t}}{2}.
\]
Therefore, for arbitrary \(x\in(-ct,ct]\) we obtain:
\[
\Pr\{X(t)<x\} = \Pr\{X(t)=-ct\}+\Pr\{X(t)\in(-ct,x)\}
= \frac{e^{-\lambda t}}{2}+\frac{\lambda x e^{-\lambda t}}{2c}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda t}{2}\right)^{2k}\left(1+\frac{\lambda t}{2k+2}\right) F\!\left(-k,\frac12;\frac32;\frac{x^2}{c^2t^2}\right)+\frac12-\frac{e^{-\lambda t}}{2}
\]
\[
= \frac12+\frac{\lambda x e^{-\lambda t}}{2c}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda t}{2}\right)^{2k}\left(1+\frac{\lambda t}{2k+2}\right) F\!\left(-k,\frac12;\frac32;\frac{x^2}{c^2t^2}\right).
\]
The theorem is proved.

The shape of the probability distribution function (2.6.7) at the time instant t = 3 on the interval (−6, 6] (for parameters c = 2, λ = 1) is presented in Fig. 2.2.

Figure 2.2: The shape of p.d.f. (2.6.7) at instant t = 3 (for c = 2, λ = 1)

Remark 2.6.1. In view of (2.6.6), formulas (2.6.1) and (2.6.7) can also be represented in terms of Gegenbauer polynomials:
\[
\Pr\{X(t)\in(a,b)\cap(-ct,ct)\}
= \frac{e^{-\lambda t}}{2}\sum_{k=0}^{\infty}\frac{(\lambda t)^{2k+1}}{(2k+1)!}\left(1+\frac{\lambda t}{2k+2}\right)
\left[\mathrm{sgn}(\alpha)\, C_{2k+1}^{-k-1/2}\!\left(\frac{|\alpha|}{ct}\right)-\mathrm{sgn}(\beta)\, C_{2k+1}^{-k-1/2}\!\left(\frac{|\beta|}{ct}\right)\right], \tag{2.6.8}
\]
where \(\alpha\) and \(\beta\) are given by (2.6.2) and, respectively,
\[
\Pr\{X(t)<x\} =
\begin{cases}
0, & x\in(-\infty,-ct],\\[1ex]
\dfrac12-\mathrm{sgn}(x)\,\dfrac{e^{-\lambda t}}{2}\displaystyle\sum_{k=0}^{\infty}\frac{(\lambda t)^{2k+1}}{(2k+1)!}\left(1+\frac{\lambda t}{2k+2}\right) C_{2k+1}^{-k-1/2}\!\left(\frac{|x|}{ct}\right), & x\in(-ct,ct],\\[1ex]
1, & x\in(ct,+\infty).
\end{cases} \tag{2.6.9}
\]

Remark 2.6.2. We see that function (2.6.7) has discontinuities at the points \(\pm ct\), determined by the singularities concentrated at these two points. It is easy to check that distribution function (2.6.7) produces the expected equalities:
\[
\lim_{\varepsilon\to 0+}\Pr\{X(t)<-ct+\varepsilon\} = \frac{e^{-\lambda t}}{2}, \qquad
\Pr\{X(t)<ct\} = 1-\frac{e^{-\lambda t}}{2}.
\]
This means that the probability distribution function (2.6.7) is left-continuous and has jumps of the same amplitude \(e^{-\lambda t}/2\) at the terminal points \(\pm ct\).
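Since -k is a nonpositive integer, the hypergeometric series in (2.6.7) terminates: F(-k, 1/2; 3/2; z) = Σ_{j=0}^{k} (-1)^j C(k,j) z^j/(2j+1). This makes the distribution function easy to evaluate in practice and lets one check the jump values of Remark 2.6.2 numerically. A sketch with the parameters of Fig. 2.2 (the helper names are ours, not the book's):

```python
import math

def F(k, z):
    """Terminating Gauss series F(-k, 1/2; 3/2; z)."""
    return sum((-1) ** j * math.comb(k, j) * z ** j / (2 * j + 1) for j in range(k + 1))

def cdf(x, t, c, lam, terms=40):
    """Probability distribution function (2.6.7) of the telegraph process X(t)."""
    if x <= -c * t:
        return 0.0
    if x > c * t:
        return 1.0
    s = sum((lam * t / 2) ** (2 * k) / math.factorial(k) ** 2
            * (1 + lam * t / (2 * k + 2)) * F(k, x * x / (c * t) ** 2)
            for k in range(terms))
    return 0.5 + lam * x * math.exp(-lam * t) / (2 * c) * s

c, lam, t = 2.0, 1.0, 3.0      # the parameters of Fig. 2.2
jump = math.exp(-lam * t) / 2  # predicted jump amplitude at x = -ct and x = ct
print(cdf(0.0, t, c, lam))                  # symmetry: 1/2
print(cdf(-c * t + 1e-9, t, c, lam), jump)  # right limit at -ct, cf. Remark 2.6.2
print(cdf(c * t, t, c, lam), 1 - jump)      # left-continuity at x = ct
```

The numerical values reproduce the two limits of Remark 2.6.2 and the median at the origin.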

2.7  Convergence to the Wiener process

In this section we study the limiting behaviour of the telegraph process \(X(t)\) as the speed of motion \(c\) and the intensity of switchings \(\lambda\) simultaneously tend to infinity in such a way that the following condition is fulfilled:
\[
c\to\infty, \qquad \lambda\to\infty, \qquad \frac{c^2}{\lambda}\to\rho, \qquad \rho>0. \tag{2.7.1}
\]
Relations (2.7.1) are called the Kac condition. The following theorem states that, under condition (2.7.1), the Goldstein-Kac telegraph process \(X(t)\) is asymptotically a Wiener process.

Theorem 2.7.1. Under the Kac condition (2.7.1), the transition density \(f(x,t)\) of the telegraph process \(X(t)\) converges to the transition density of the homogeneous Wiener process with zero drift and diffusion coefficient \(\sigma^2=\rho\), that is,
\[
\lim_{\substack{c,\lambda\to\infty\\ (c^2/\lambda)\to\rho}} f(x,t) = \frac{1}{\sqrt{2\pi\rho t}}\,\exp\!\left(-\frac{x^2}{2\rho t}\right), \qquad x\in(-ct,ct). \tag{2.7.2}
\]

Proof. We use the alternative form (2.5.16) of the density. Obviously, under the Kac condition (2.7.1) the singular part of density (2.5.16) vanishes, while the Heaviside function \(\Theta(ct-|x|)\) transforms into the function identically equal to 1 everywhere on the line \(\mathbb{R}^1\). Therefore, one needs to study only the limiting behaviour of the absolutely continuous part of the density, given by the formula:
\[
p(x,t) = \frac{\lambda e^{-\lambda t}}{2c}\left[I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)+\frac{ct}{\sqrt{c^2t^2-x^2}}\, I_1\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)\right], \qquad |x|<ct.
\]

In view of the well-known asymptotic formula for the modified Bessel function (see, for instance, [63, Formula 8.451(5)])
\[
I_\nu(z) \sim \frac{e^z}{\sqrt{2\pi z}}, \qquad z\to+\infty, \tag{2.7.3}
\]
we have (as \(c,\lambda\to\infty\)):
\[
p(x,t) = \frac{\lambda e^{-\lambda t}}{2c}\left[I_0\!\left(\lambda t\sqrt{1-\left(\frac{x}{ct}\right)^2}\right)
+ \frac{1}{\sqrt{1-\left(\frac{x}{ct}\right)^2}}\, I_1\!\left(\lambda t\sqrt{1-\left(\frac{x}{ct}\right)^2}\right)\right]
\]
\[
\sim \frac{\lambda e^{-\lambda t}}{2c}\cdot\frac{\exp\!\left(\lambda t\sqrt{1-\left(\frac{x}{ct}\right)^2}\right)}{\sqrt{2\pi\lambda t\sqrt{1-\left(\frac{x}{ct}\right)^2}}}\left[1+\frac{1}{\sqrt{1-\left(\frac{x}{ct}\right)^2}}\right]
\sim \frac{\sqrt{\lambda}}{c}\cdot\frac{1}{\sqrt{2\pi t}}\,\exp\!\left(-\lambda t+\lambda t\sqrt{1-\left(\frac{x}{ct}\right)^2}\right).
\]
Consider separately the argument of the exponential function in this formula. Since \(|x|<ct\) and, therefore, \(|x/(ct)|<1\), the radical can be expanded into an absolutely and uniformly convergent series, and we get:
\[
-\lambda t+\lambda t\sqrt{1-\left(\frac{x}{ct}\right)^2}
= -\lambda t+\lambda t\left(1-\frac12\frac{x^2}{c^2t^2}-\frac{1\cdot 1}{2\cdot 4}\frac{x^4}{c^4t^4}-\frac{1\cdot 1\cdot 3}{2\cdot 4\cdot 6}\frac{x^6}{c^6t^6}-\dots\right)
= -\frac12\frac{\lambda}{c^2}\frac{x^2}{t}-\frac{1\cdot 1}{2\cdot 4}\frac{\lambda}{c^4}\frac{x^4}{t^3}-\frac{1\cdot 1\cdot 3}{2\cdot 4\cdot 6}\frac{\lambda}{c^6}\frac{x^6}{t^5}-\dots.
\]
From the Kac condition (2.7.1) it follows that
\[
\frac{\lambda}{c^2}\to\frac{1}{\rho}, \qquad \frac{\lambda}{c^k}\to 0, \quad k\ge 3.
\]
Then, under the Kac condition, we have:
\[
-\lambda t+\lambda t\sqrt{1-\left(\frac{x}{ct}\right)^2}\to-\frac{x^2}{2\rho t},
\]
and, hence, taking into account that \(\sqrt{\lambda}/c\to 1/\sqrt{\rho}\), we obtain:
\[
\lim_{\substack{c,\lambda\to\infty\\ (c^2/\lambda)\to\rho}} f(x,t)
= \lim_{\substack{c,\lambda\to\infty\\ (c^2/\lambda)\to\rho}}\frac{\sqrt{\lambda}}{c}\cdot\frac{1}{\sqrt{2\pi t}}\,\exp\!\left(-\lambda t+\lambda t\sqrt{1-\left(\frac{x}{ct}\right)^2}\right)
= \frac{1}{\sqrt{2\pi\rho t}}\,\exp\!\left(-\frac{x^2}{2\rho t}\right).
\]
The theorem is proved.

Remark 2.7.1. The limiting result (2.7.2) is treated in the following sense. Since the telegraph process \(X(t)\) depends on two positive parameters, namely the speed of motion \(c>0\) and the intensity of switchings \(\lambda>0\), it can be considered as the two-parameter family of stochastic processes \(X(t)=\{X_{c,\lambda}(t),\ c>0,\ \lambda>0\}\). Theorem 2.7.1 states that, under the Kac condition (2.7.1), the respective two-parameter family of transition densities converges, for arbitrary fixed \(t>0\), to the transition density of the homogeneous Wiener process at every point \(x\in(-ct,ct)\).

Remark 2.7.2. One can check that, under the Kac condition (2.7.1), the characteristic function (2.4.2) of the telegraph process \(X(t)\) converges to the characteristic function of homogeneous Brownian motion. First, we note that from the Kac condition (2.7.1) it follows that \(\lambda/c\to\infty\) and, therefore, \(\mathbf{1}_{\{|\alpha|\le\lambda/c\}}\to 1\), \(\mathbf{1}_{\{|\alpha|>\lambda/c\}}\to 0\). Hence, for the characteristic function (2.4.2), we have the asymptotic formula (as \(c,\lambda\to\infty\)):
\[
H(\alpha,t) \sim e^{-\lambda t}\left[\cosh\!\left(t\sqrt{\lambda^2-c^2\alpha^2}\right)+\frac{\lambda}{\sqrt{\lambda^2-c^2\alpha^2}}\,\sinh\!\left(t\sqrt{\lambda^2-c^2\alpha^2}\right)\right]
\]
\[
= e^{-\lambda t}\left[\cosh\!\left(\lambda t\sqrt{1-\frac{c^2}{\lambda^2}\alpha^2}\right)+\frac{1}{\sqrt{1-\frac{c^2}{\lambda^2}\alpha^2}}\,\sinh\!\left(\lambda t\sqrt{1-\frac{c^2}{\lambda^2}\alpha^2}\right)\right]
\sim e^{-\lambda t}\left[\cosh\!\left(\lambda t\sqrt{1-\frac{c^2}{\lambda^2}\alpha^2}\right)+\sinh\!\left(\lambda t\sqrt{1-\frac{c^2}{\lambda^2}\alpha^2}\right)\right]
\]
\[
= \exp\!\left(-\lambda t+\lambda t\sqrt{1-\frac{c^2}{\lambda^2}\alpha^2}\right)
= \exp\!\left(-\lambda t+\lambda t\left(1-\frac12\frac{c^2}{\lambda^2}\alpha^2-\frac{1\cdot 1}{2\cdot 4}\frac{c^4}{\lambda^4}\alpha^4-\frac{1\cdot 1\cdot 3}{2\cdot 4\cdot 6}\frac{c^6}{\lambda^6}\alpha^6-\dots\right)\right)
\]
\[
= \exp\!\left(-\frac12\frac{c^2}{\lambda}t\alpha^2-\frac{1\cdot 1}{2\cdot 4}\frac{c^4}{\lambda^3}t\alpha^4-\frac{1\cdot 1\cdot 3}{2\cdot 4\cdot 6}\frac{c^6}{\lambda^5}t\alpha^6-\dots\right).
\]
Taking into account that, under the Kac condition (2.7.1),
\[
\frac{c^k}{\lambda^{k-1}}\to 0 \quad\text{for any } k\ge 3,
\]
we get:
\[
\lim_{\substack{c,\lambda\to\infty\\ (c^2/\lambda)\to\rho}} H(\alpha,t) = \exp\!\left(-\frac12\rho t\alpha^2\right), \tag{2.7.4}
\]
and this is the characteristic function of the homogeneous Brownian motion on the line with zero drift and diffusion coefficient \(\sigma^2=\rho\).

Remark 2.7.3. The limiting relation (2.7.4) is a weaker result than (2.7.2). While (2.7.2) states the pointwise convergence of the family of transition densities of the telegraph process \(X(t)\) to the transition density of the homogeneous Wiener process, the limiting relation (2.7.4) states only the convergence of their characteristic functions to the characteristic function of Brownian motion (that is, weak convergence). It is well known that the convergence of characteristic functions does not, generally speaking, imply the convergence of densities. However, for telegraph processes these types of convergence are equivalent (see Exercise 2.7.1 below).

Exercise 2.7.1. Prove that the pointwise convergence of the respective densities (2.7.2) follows from the convergence of characteristic functions (2.7.4). (Hint: use Gnedenko's theorem establishing the conditions under which the weak convergence of a family of stochastic processes implies the pointwise convergence of their densities.)
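The convergence (2.7.4) can be observed numerically. For large λ the hyperbolic branch of (2.4.2) must be rewritten to avoid overflow: e^{-λt} cosh(rt) = (e^{(r-λ)t} + e^{-(r+λ)t})/2, with r - λ = -c²α²/(r+λ) computed without cancellation. Setting c = √(ρλ), so that c²/λ = ρ exactly, and letting λ grow, the distance to the Gaussian characteristic function shrinks. A sketch (the scaling choice and helper names are ours, not the book's):

```python
import math

def H_kac(alpha, t, rho, lam):
    """Characteristic function (2.4.2) with c = sqrt(rho*lam), in an overflow-safe
    form; valid on the hyperbolic branch |alpha| <= lam/c, i.e. alpha^2 <= lam/rho."""
    c2a2 = rho * lam * alpha ** 2            # c^2 * alpha^2
    r = math.sqrt(lam ** 2 - c2a2)
    diff = -c2a2 / (r + lam)                 # r - lam, computed without cancellation
    ep = math.exp(diff * t)                  # e^{(r - lam) t}, stays of order 1
    em = math.exp(-(r + lam) * t)            # e^{-(r + lam) t}, harmlessly underflows
    return 0.5 * (ep + em) + lam / (2 * r) * (ep - em)

rho, alpha, t = 1.0, 1.5, 2.0
gauss = math.exp(-0.5 * rho * t * alpha ** 2)   # limiting Brownian cf, cf. (2.7.4)
errors = [abs(H_kac(alpha, t, rho, lam) - gauss) for lam in (1e3, 1e5)]
print(errors)
```

Increasing λ by two orders of magnitude reduces the discrepancy by roughly the same factor, consistent with an O(1/λ) rate of convergence under this scaling.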

2.8  Laplace transform of transition density

In this section we derive formulas for the Laplace transforms of the transition density of the Goldstein-Kac telegraph process \(X(t)\) and of its characteristic function.

Theorem 2.8.1. The Laplace transform with respect to time of the transition density (2.5.1) of the telegraph process \(X(t)\) is given by the formula:
\[
\mathcal{L}_t[f(x,t)](s) =
\begin{cases}
\dfrac{1}{2c}\sqrt{\dfrac{s+2\lambda}{s}}\; e^{-\frac{|x|}{c}\sqrt{s(s+2\lambda)}}, & \text{if } x\ne 0,\\[2ex]
\dfrac{1}{2c}\left(\sqrt{\dfrac{s+2\lambda}{s}}+1\right), & \text{if } x=0,
\end{cases}
\qquad x\in(-\infty,\infty), \quad \operatorname{Re} s>0. \tag{2.8.1}
\]

Proof. Let us compute the Laplace transform of the absolutely continuous part of the transition density (2.5.1) for \(x\ne 0\). Taking into account that the Laplace transform of the function
\[
g_0(t) = \begin{cases} I_0\!\left(a\sqrt{t^2-b^2}\right), & t>b,\\ 0, & 0<t<b, \end{cases}
\]
is given by the formula (see, for instance, [7, Table 4.17, Formula 5])
\[
\mathcal{L}_t[g_0(t)](s) = \frac{e^{-b\sqrt{s^2-a^2}}}{\sqrt{s^2-a^2}}, \qquad \operatorname{Re} s>|\operatorname{Re} a|,
\]
we obtain for the first term of the absolutely continuous part of density (2.5.16) (for \(t>|x|/c\)):
\[
\mathcal{L}_t\!\left[I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)\Theta(ct-|x|)\right](s)
= \mathcal{L}_t\!\left[I_0\!\left(\lambda\sqrt{t^2-\left(\frac{x}{c}\right)^2}\right)\Theta\!\left(t-\frac{|x|}{c}\right)\right](s)
= \frac{e^{-\frac{|x|}{c}\sqrt{s^2-\lambda^2}}}{\sqrt{s^2-\lambda^2}}, \qquad \operatorname{Re} s>\lambda. \tag{2.8.2}
\]
For the second term of the absolutely continuous part of density (2.5.16) we have (for \(t>|x|/c\)):
\[
\mathcal{L}_t\!\left[\frac{ct}{\sqrt{c^2t^2-x^2}}\, I_1\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)\Theta(ct-|x|)\right](s)
= \int_{|x|/c}^{\infty} e^{-st}\,\frac{t}{\sqrt{t^2-(x/c)^2}}\, I_1\!\left(\lambda\sqrt{t^2-\left(\frac{x}{c}\right)^2}\right)dt
\]
\[
= -\frac{d}{ds}\int_{|x|/c}^{\infty} e^{-st}\,\frac{1}{\sqrt{t^2-(x/c)^2}}\, I_1\!\left(\lambda\sqrt{t^2-\left(\frac{x}{c}\right)^2}\right)dt
= -\frac{d}{ds}\,\mathcal{L}_t\!\left[\frac{1}{\sqrt{t^2-(x/c)^2}}\, I_1\!\left(\lambda\sqrt{t^2-\left(\frac{x}{c}\right)^2}\right)\Theta\!\left(t-\frac{|x|}{c}\right)\right](s).
\]
Taking into account that the Laplace transform of the function
\[
g_1(t) = \begin{cases} \dfrac{1}{\sqrt{t^2-b^2}}\, I_1\!\left(a\sqrt{t^2-b^2}\right), & t>b,\\[1ex] 0, & 0<t<b, \end{cases}
\]
is given by the formula (see, for instance, [7, Table 4.17, Formula 8])
\[
\mathcal{L}_t[g_1(t)](s) = a^{-1}b^{-1}\left(e^{-b\sqrt{s^2-a^2}}-e^{-bs}\right), \qquad \operatorname{Re} s>|\operatorname{Re} a|,
\]
we get (for \(t>|x|/c\)):
\[
\mathcal{L}_t\!\left[\frac{ct}{\sqrt{c^2t^2-x^2}}\, I_1\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)\Theta(ct-|x|)\right](s)
= -\frac{c}{\lambda|x|}\,\frac{d}{ds}\left(e^{-\frac{|x|}{c}\sqrt{s^2-\lambda^2}}-e^{-\frac{|x|}{c}s}\right)
= \frac{1}{\lambda}\left(\frac{s}{\sqrt{s^2-\lambda^2}}\, e^{-\frac{|x|}{c}\sqrt{s^2-\lambda^2}}-e^{-\frac{|x|}{c}s}\right), \qquad \operatorname{Re} s>\lambda. \tag{2.8.3}
\]
Hence, in view of (2.8.2) and (2.8.3), we obtain:
\[
\mathcal{L}_t\!\left[\frac{\lambda}{2c}\left(I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)+\frac{ct}{\sqrt{c^2t^2-x^2}}\, I_1\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)\right)\Theta(ct-|x|)\right](s)
= \frac{\lambda}{2c}\left\{\frac{e^{-\frac{|x|}{c}\sqrt{s^2-\lambda^2}}}{\sqrt{s^2-\lambda^2}}+\frac{1}{\lambda}\left(\frac{s}{\sqrt{s^2-\lambda^2}}\, e^{-\frac{|x|}{c}\sqrt{s^2-\lambda^2}}-e^{-\frac{|x|}{c}s}\right)\right\}
= \frac{s+\lambda}{2c}\,\frac{e^{-\frac{|x|}{c}\sqrt{s^2-\lambda^2}}}{\sqrt{s^2-\lambda^2}}-\frac{1}{2c}\, e^{-\frac{|x|}{c}s}, \qquad \operatorname{Re} s>\lambda.
\]
Then the Laplace transform of the absolutely continuous part of density (2.5.1), for \(x\ne 0\), follows by the shift \(s\to s+\lambda\), which accounts for the factor \(e^{-\lambda t}\):
\[
\mathcal{L}_t\!\left[\frac{\lambda e^{-\lambda t}}{2c}\left(I_0\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)+\frac{ct}{\sqrt{c^2t^2-x^2}}\, I_1\!\left(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\right)\right)\Theta(ct-|x|)\right](s)
= \frac{s+2\lambda}{2c}\,\frac{e^{-\frac{|x|}{c}\sqrt{s(s+2\lambda)}}}{\sqrt{s(s+2\lambda)}}-\frac{1}{2c}\, e^{-\frac{|x|}{c}(s+\lambda)}
= \frac{1}{2c}\left\{\sqrt{\frac{s+2\lambda}{s}}\, e^{-\frac{|x|}{c}\sqrt{s(s+2\lambda)}}-e^{-\frac{|x|}{c}(s+\lambda)}\right\}, \qquad x\ne 0, \quad \operatorname{Re} s>0. \tag{2.8.4}
\]
One can easily check that the Laplace transform of the singular part of density (2.5.1), for \(x\ne 0\), is given by the formula:
\[
\mathcal{L}_t\!\left[\frac{e^{-\lambda t}}{2}\left(\delta(ct+x)+\delta(ct-x)\right)\right](s)
= \frac{1}{2c}\, e^{-\frac{|x|}{c}(s+\lambda)}, \qquad x\ne 0, \quad \operatorname{Re} s>0. \tag{2.8.5}
\]

By summing up (2.8.4) and (2.8.5), we obtain the first part of formula (2.8.1), for \(x\ne 0\). The second part of (2.8.1), for \(x=0\), can be proved by applying the formulas for the Laplace transforms of the modified Bessel functions of orders zero and one. The theorem is proved.

Remark 2.8.1. One can check that, under the Kac condition (2.7.1), function (2.8.1) turns into the Laplace transform of the transition density of one-dimensional Brownian motion. Indeed, from the Kac condition (2.7.1) it follows that
\[
\frac{\sqrt{\lambda}}{c}\to\frac{1}{\sqrt{\rho}}, \qquad \frac{\lambda}{c}\to\infty,
\]
and, therefore, for \(x\ne 0\) we have:
\[
\frac{1}{2c}\sqrt{\frac{s+2\lambda}{s}}\; e^{-\frac{|x|}{c}\sqrt{s(s+2\lambda)}}
= \frac12\,\frac{\sqrt{\lambda}}{c}\sqrt{\frac{\frac{s}{\lambda}+2}{s}}\; e^{-|x|\frac{\sqrt{\lambda}}{c}\sqrt{s\left(\frac{s}{\lambda}+2\right)}}
\;\to\; \frac12\,\frac{1}{\sqrt{\rho}}\sqrt{\frac{2}{s}}\; e^{-|x|\frac{1}{\sqrt{\rho}}\sqrt{2s}}
= \frac{1}{\sqrt{2\rho s}}\; e^{-|x|\sqrt{\frac{2s}{\rho}}}, \tag{2.8.6}
\]
and this is exactly the Laplace transform of the transition density of the homogeneous Brownian motion on the line with zero drift and diffusion coefficient \(\sigma^2=\rho\). Similarly, for \(x=0\), we have:
\[
\frac{1}{2c}\left(\sqrt{\frac{s+2\lambda}{s}}+1\right)
= \frac12\,\frac{\sqrt{\lambda}}{c}\sqrt{\frac{\frac{s}{\lambda}+2}{s}}+\frac{1}{2c}
\;\to\; \frac{1}{\sqrt{2\rho s}},
\]
and this coincides with (2.8.6) for \(x=0\).

Theorem 2.8.2. The Laplace transform of the characteristic function \(H(\alpha,t)\), given by (2.4.2), of the Goldstein-Kac telegraph process \(X(t)\) has the form:
\[
\mathcal{L}_t[H(\alpha,t)](s) = \frac{s+2\lambda}{s^2+2\lambda s+c^2\alpha^2}, \qquad \operatorname{Re} s>0. \tag{2.8.7}
\]

Proof. In view of the formulas (see, for instance, [7, Table 4.9, Formulas 1 and 2, respectively])
\[
\mathcal{L}_t[\sinh(at)](s) = \frac{a}{s^2-a^2}, \qquad
\mathcal{L}_t[\cosh(at)](s) = \frac{s}{s^2-a^2}, \qquad \operatorname{Re} s>|\operatorname{Re} a|,
\]
we have (for \(|\alpha|\le\lambda/c\)):
\[
\mathcal{L}_t\!\left[\sinh\!\left(t\sqrt{\lambda^2-c^2\alpha^2}\right)\right](s) = \frac{\sqrt{\lambda^2-c^2\alpha^2}}{s^2-(\lambda^2-c^2\alpha^2)}, \qquad
\mathcal{L}_t\!\left[\cosh\!\left(t\sqrt{\lambda^2-c^2\alpha^2}\right)\right](s) = \frac{s}{s^2-(\lambda^2-c^2\alpha^2)}.
\]
Therefore, for the first term in curly brackets of the characteristic function (2.4.2), we get:
\[
\mathcal{L}_t\!\left[\left(\cosh\!\left(t\sqrt{\lambda^2-c^2\alpha^2}\right)+\frac{\lambda}{\sqrt{\lambda^2-c^2\alpha^2}}\,\sinh\!\left(t\sqrt{\lambda^2-c^2\alpha^2}\right)\right)\mathbf{1}_{\{|\alpha|\le\lambda/c\}}\right](s)
= \frac{s+\lambda}{s^2-(\lambda^2-c^2\alpha^2)}\,\mathbf{1}_{\{|\alpha|\le\lambda/c\}}. \tag{2.8.8}
\]
Similarly, in view of the formulas (see, for instance, [7, Table 4.7, Formulas 1 and 43, respectively])
\[
\mathcal{L}_t[\sin(at)](s) = \frac{a}{s^2+a^2}, \qquad
\mathcal{L}_t[\cos(at)](s) = \frac{s}{s^2+a^2}, \qquad \operatorname{Re} s>|\operatorname{Im} a|,
\]
we have (for \(|\alpha|>\lambda/c\)):
\[
\mathcal{L}_t\!\left[\sin\!\left(t\sqrt{c^2\alpha^2-\lambda^2}\right)\right](s) = \frac{\sqrt{c^2\alpha^2-\lambda^2}}{s^2+(c^2\alpha^2-\lambda^2)}, \qquad
\mathcal{L}_t\!\left[\cos\!\left(t\sqrt{c^2\alpha^2-\lambda^2}\right)\right](s) = \frac{s}{s^2+(c^2\alpha^2-\lambda^2)},
\]
and, therefore, for the second term in curly brackets of the characteristic function (2.4.2), we get:
\[
\mathcal{L}_t\!\left[\left(\cos\!\left(t\sqrt{c^2\alpha^2-\lambda^2}\right)+\frac{\lambda}{\sqrt{c^2\alpha^2-\lambda^2}}\,\sin\!\left(t\sqrt{c^2\alpha^2-\lambda^2}\right)\right)\mathbf{1}_{\{|\alpha|>\lambda/c\}}\right](s)
= \frac{s+\lambda}{s^2-(\lambda^2-c^2\alpha^2)}\,\mathbf{1}_{\{|\alpha|>\lambda/c\}}. \tag{2.8.9}
\]
By summing up (2.8.8) and (2.8.9), we obtain for arbitrary \(\alpha\) that the Laplace transform of the expression in curly brackets in (2.4.2) equals
\[
\frac{s+\lambda}{s^2-\lambda^2+c^2\alpha^2}.
\]
Thus, for the characteristic function (2.4.2), we finally obtain (by the shift \(s\to s+\lambda\), which accounts for the factor \(e^{-\lambda t}\)):
\[
\mathcal{L}_t[H(\alpha,t)](s) = \frac{(s+\lambda)+\lambda}{(s+\lambda)^2-\lambda^2+c^2\alpha^2} = \frac{s+2\lambda}{s^2+2\lambda s+c^2\alpha^2}.
\]
The theorem is proved.

Exercise 2.8.1. Evaluate the Laplace transform of the transition density of Brownian motion given by (2.7.2) and show that it coincides with (2.8.6).

Exercise 2.8.2. Prove formula (2.8.5).

Exercise 2.8.3. Prove the second part of formula (2.8.1) (for x = 0).
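Formula (2.8.7) is easy to confirm by direct quadrature of L_t[H(α,·)](s) = ∫₀^∞ e^{-st} H(α,t) dt, truncated at a point T beyond which the integrand is negligible. A sketch covering both branches of (2.4.2) (the truncation point, parameters, and helper names are our choices, not the book's):

```python
import math

lam, c = 1.0, 2.0

def H(alpha, t):
    """Characteristic function (2.4.2)."""
    d = lam ** 2 - c ** 2 * alpha ** 2
    r = math.sqrt(abs(d))
    if d == 0:
        return math.exp(-lam * t) * (1 + lam * t)
    if d > 0:
        return math.exp(-lam * t) * (math.cosh(r * t) + lam / r * math.sinh(r * t))
    return math.exp(-lam * t) * (math.cos(r * t) + lam / r * math.sin(r * t))

def laplace_numeric(alpha, s, T=40.0, n=8000):
    """Simpson quadrature of int_0^T e^{-s t} H(alpha, t) dt (tail is negligible)."""
    h = T / n
    total = H(alpha, 0.0) + math.exp(-s * T) * H(alpha, T)
    for i in range(1, n):
        t = i * h
        total += (4 if i % 2 else 2) * math.exp(-s * t) * H(alpha, t)
    return total * h / 3

def laplace_closed(alpha, s):
    """Right-hand side of (2.8.7)."""
    return (s + 2 * lam) / (s ** 2 + 2 * lam * s + c ** 2 * alpha ** 2)

for alpha, s in [(0.3, 1.0), (1.2, 0.8)]:
    print(laplace_numeric(alpha, s), laplace_closed(alpha, s))
```

The quadrature reproduces the rational expression (2.8.7) on both the hyperbolic branch (α = 0.3 < λ/c) and the trigonometric branch (α = 1.2 > λ/c).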

78

2.9

Markov Random Flights

Moment analysis

In this section we give a detailed moment analysis of the Goldstein-Kac telegraph process. Consider the moment function of the telegraph process X(t) defined, for arbitrary integer n ≥ 0, as µn (t) = E[X(t)]n , n ≥ 0, where E is the expectation.

2.9.1 Moments of the telegraph process

The explicit form of the moment function µn(t) of the Goldstein-Kac telegraph process X(t) is given by the following theorem.

Theorem 2.9.1. For any t > 0, the moments of the telegraph process X(t) are given by the formula:
\[
\mu_n(t)=\begin{cases}
e^{-\lambda t}\, c^{2k}\, 2^{k-1/2}\,\lambda^{-k+1/2}\, t^{k+1/2}\,\Gamma\!\left(k+\tfrac12\right)\left[I_{k+1/2}(\lambda t)+I_{k-1/2}(\lambda t)\right], & \text{if } n=2k,\\[4pt]
0, & \text{if } n=2k+1,
\end{cases}
\qquad k=0,1,2,\dots,
\tag{2.9.1}
\]
where
\[
I_\nu(z)=\sum_{k=0}^{\infty}\frac{1}{k!\,\Gamma(k+\nu+1)}\left(\frac z2\right)^{2k+\nu}
\]
is the modified Bessel function of order ν.

Proof. The structure of density (2.1.8) implies that
\[
\mu_n(t)=\mu_n^{s}(t)+\mu_n^{ac}(t),\qquad n=0,1,2,\dots,
\tag{2.9.2}
\]
where \(\mu_n^{s}(t)\) and \(\mu_n^{ac}(t)\) are the moments of the singular and absolutely continuous parts of the distribution of the process X(t), respectively. For the singular part (2.1.10) of density (2.5.1) we have:
\[
\mu_n^{s}(t)=\int_{-ct}^{ct} x^n f_s(x,t)\,dx
=\frac{e^{-\lambda t}}{2}\left(\int_{-ct}^{ct} x^n\,\delta(ct-x)\,dx+\int_{-ct}^{ct} x^n\,\delta(ct+x)\,dx\right)
=\frac{e^{-\lambda t}}{2}\left[(ct)^n+(-ct)^n\right]
=\begin{cases} e^{-\lambda t}(ct)^{2k}, & \text{if } n=2k,\\ 0, & \text{if } n=2k+1,\end{cases}
\qquad k=0,1,2,\dots.
\tag{2.9.3}
\]
Let us evaluate the moments of the absolutely continuous part of the density of the process X(t). First, we note that, since the density p(x,t) given by (2.5.3) is an even function of the spatial variable x (that is, p(x,t) = p(−x,t)), all the moments of odd orders are equal to zero, that is,
\[
\mu_{2k+1}^{ac}(t)=\int_{-ct}^{ct} x^{2k+1}\, p(x,t)\,dx=0,\qquad k=0,1,2,\dots.
\tag{2.9.4}
\]

Hence, one needs to calculate only the moments of even orders:
\[
\mu_{2k}^{ac}(t)=\int_{-ct}^{ct} x^{2k}\, p(x,t)\,dx
=\frac{e^{-\lambda t}}{c}\left\{\lambda\int_0^{ct} x^{2k}\, I_0\!\left(\frac\lambda c\sqrt{c^2t^2-x^2}\right)dx
+\int_0^{ct} x^{2k}\,\frac{\partial}{\partial t}\, I_0\!\left(\frac\lambda c\sqrt{c^2t^2-x^2}\right)dx\right\}
\]
\[
=\frac{e^{-\lambda t}}{c}\left\{\lambda\int_0^{ct} x^{2k}\, I_0\!\left(\frac\lambda c\sqrt{c^2t^2-x^2}\right)dx
+\frac{\partial}{\partial t}\int_0^{ct} x^{2k}\, I_0\!\left(\frac\lambda c\sqrt{c^2t^2-x^2}\right)dx
-c\,(ct)^{2k}\right\}.
\tag{2.9.5}
\]
Let us evaluate separately the integrals in (2.9.5). For the first integral, we have:
\[
\int_0^{ct} x^{2k}\, I_0\!\left(\frac\lambda c\sqrt{c^2t^2-x^2}\right)dx
=(ct)^{2k+1}\int_0^1 z^{2k}\, I_0\!\left(\lambda t\sqrt{1-z^2}\right)dz
=(ct)^{2k+1}\int_0^1 \xi\,(1-\xi^2)^{k-1/2}\, I_0(\lambda t\,\xi)\,d\xi
=c^{2k+1}\, 2^{k-1/2}\,\lambda^{-k-1/2}\,\Gamma\!\left(k+\tfrac12\right) t^{k+1/2}\, I_{k+1/2}(\lambda t),
\tag{2.9.6}
\]
where we have used the formula (see [177, item 2.15.2, Formula 6])
\[
\int_0^a x^{\nu+1}\,(a^2-x^2)^{\beta-1}\, I_\nu(cx)\,dx
=2^{\beta-1}\, a^{\nu+\beta}\, c^{-\beta}\,\Gamma(\beta)\, I_{\nu+\beta}(ac),
\qquad a>0,\ \operatorname{Re}\beta>0,\ \operatorname{Re}\nu>-1.
\]
Then, by differentiating (2.9.6) in t, for the second integral in (2.9.5) we get:
\[
\begin{aligned}
\frac{\partial}{\partial t}\int_0^{ct} x^{2k}\, I_0\!\left(\frac\lambda c\sqrt{c^2t^2-x^2}\right)dx
&=c^{2k+1}\, 2^{k-1/2}\,\lambda^{-k-1/2}\,\Gamma\!\left(k+\tfrac12\right)\left[\left(k+\tfrac12\right) t^{k-1/2}\, I_{k+1/2}(\lambda t)+t^{k+1/2}\,\frac{\partial}{\partial t}\, I_{k+1/2}(\lambda t)\right]\\
&=c^{2k+1}\, 2^{k-1/2}\,\lambda^{-k-1/2}\,\Gamma\!\left(k+\tfrac12\right) t^{k-1/2}\left[2\left(k+\tfrac12\right) I_{k+1/2}(\lambda t)+\lambda t\, I_{k+3/2}(\lambda t)\right]\\
&=c^{2k+1}\, 2^{k-1/2}\,\lambda^{-k+1/2}\,\Gamma\!\left(k+\tfrac12\right) t^{k+1/2}\left[\frac{2(k+\tfrac12)}{\lambda t}\, I_{k+1/2}(\lambda t)+I_{(k+1/2)+1}(\lambda t)\right]\\
&=c^{2k+1}\, 2^{k-1/2}\,\lambda^{-k+1/2}\,\Gamma\!\left(k+\tfrac12\right) t^{k+1/2}\, I_{k-1/2}(\lambda t),
\end{aligned}
\tag{2.9.7}
\]
where we have used \(\frac{\partial}{\partial t} I_{k+1/2}(\lambda t)=\lambda I_{k+3/2}(\lambda t)+\left(k+\tfrac12\right) t^{-1} I_{k+1/2}(\lambda t)\) and, in the last step, the well-known recurrence (see, for example, [63, Formula 8.471(1)])
\[
\frac{2\nu}{z}\, I_\nu(z)+I_{\nu+1}(z)=I_{\nu-1}(z).
\]
Substituting (2.9.6) and (2.9.7) into (2.9.5), we obtain:
\[
\mu_{2k}^{ac}(t)=e^{-\lambda t}\, c^{2k}\, 2^{k-1/2}\,\lambda^{-k+1/2}\, t^{k+1/2}\,\Gamma\!\left(k+\tfrac12\right)\left[I_{k+1/2}(\lambda t)+I_{k-1/2}(\lambda t)\right]-e^{-\lambda t}\,(ct)^{2k}.
\]
By substituting this expression and (2.9.3) into (2.9.2), we finally arrive at (2.9.1). The theorem is proved.

Remark 2.9.1. In view of the just proved formula (2.9.1), the first and the second moments of the telegraph process X(t) have the form µ1(t) = 0 and
\[
\begin{aligned}
\mu_2(t)&=e^{-\lambda t}\, c^2\, 2^{1/2}\,\lambda^{-1/2}\, t^{3/2}\,\Gamma\!\left(\tfrac32\right)\left[I_{3/2}(\lambda t)+I_{1/2}(\lambda t)\right]\\
&=e^{-\lambda t}\,\frac{c^2 t}{\lambda}\left(e^{\lambda t}-\frac{\sinh(\lambda t)}{\lambda t}\right)
=\frac{c^2 t}{\lambda}-\frac{c^2}{\lambda^2}\, e^{-\lambda t}\sinh(\lambda t)
=\frac{c^2 t}{\lambda}-\frac{c^2}{2\lambda^2}\left(1-e^{-2\lambda t}\right),
\end{aligned}
\tag{2.9.8}
\]
where we have used the explicit expressions \(I_{1/2}(z)=\sqrt{2/(\pi z)}\,\sinh z\) and \(I_{3/2}(z)=\sqrt{2/(\pi z)}\left(\cosh z-\frac{\sinh z}{z}\right)\).
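The closed form (2.9.8) is easy to test by simulation. The following sketch (our own illustration; names and parameters are not from the book) samples the telegraph process by drawing exponential switching times and compares a Monte Carlo estimate of E[X(t)²] with (2.9.8):

```python
import math
import random

def telegraph_sample(t, c, lam, rng):
    """One sample of X(t): start at 0 with speed +c or -c (probability 1/2 each),
    reversing direction at the events of a Poisson process of rate lam."""
    x, s = 0.0, 0.0
    v = c if rng.random() < 0.5 else -c
    while True:
        tau = rng.expovariate(lam)          # time until the next reversal
        if s + tau >= t:
            return x + v * (t - s)
        x += v * tau
        s += tau
        v = -v

def second_moment(t, c, lam):
    """Second moment of the telegraph process, formula (2.9.8)."""
    return c**2 * t / lam - c**2 / (2 * lam**2) * (1 - math.exp(-2 * lam * t))

rng = random.Random(1)
t, c, lam, n = 1.0, 1.0, 1.0, 200_000
mc = sum(telegraph_sample(t, c, lam, rng)**2 for _ in range(n)) / n
print(mc, second_moment(t, c, lam))   # the two values should be close
```

With these parameters the Monte Carlo estimate typically agrees with the exact value to two decimal places.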

2.9.2 Asymptotic behaviour

Let us now study several types of the asymptotic behaviour of the moment function µ2k(t) of the telegraph process X(t) given by (2.9.1).

A1. Asymptotic behaviour as c → ∞, λ → ∞ (t and k fixed). Consider the case when, for fixed t and k, the speed of motion c and the intensity of switchings λ simultaneously tend to infinity in such a way that Kac's condition (2.7.1) is fulfilled. In view of the asymptotic formula (2.7.3) for the modified Bessel function, as well as the formula (see, for example, [63, Formula 8.339(2)])
\[
\Gamma\!\left(k+\tfrac12\right)=\frac{\sqrt\pi}{2^k}\,(2k-1)!!,\qquad k\ge0,\quad (-1)!!=1,
\tag{2.9.9}
\]
we get:
\[
\begin{aligned}
\lim_{\substack{c,\lambda\to\infty\\ c^2/\lambda\to\rho}}\mu_{2k}(t)
&=2^{k-1/2}\, t^{k+1/2}\,\Gamma\!\left(k+\tfrac12\right)\lim_{\substack{c,\lambda\to\infty\\ c^2/\lambda\to\rho}} e^{-\lambda t}\, c^{2k}\,\lambda^{-k+1/2}\left[I_{k+1/2}(\lambda t)+I_{k-1/2}(\lambda t)\right]\\
&\sim 2^{k-1/2}\, t^{k+1/2}\,\Gamma\!\left(k+\tfrac12\right)\lim_{\substack{c,\lambda\to\infty\\ c^2/\lambda\to\rho}} e^{-\lambda t}\, c^{2k}\,\lambda^{-k+1/2}\,\frac{2\, e^{\lambda t}}{\sqrt{2\pi\lambda t}}\\
&=2^{k}\, t^{k}\,\frac{1}{\sqrt\pi}\,\Gamma\!\left(k+\tfrac12\right)\lim_{\substack{c,\lambda\to\infty\\ c^2/\lambda\to\rho}}\frac{c^{2k}}{\lambda^{k}}
=2^{k}\, t^{k}\,\frac{1}{\sqrt\pi}\,\frac{\sqrt\pi}{2^{k}}\,(2k-1)!!\;\rho^{k}
=\rho^{k}\, t^{k}\,(2k-1)!!,\qquad k\ge0,
\end{aligned}
\]
and this coincides with the moment function of the homogeneous Brownian motion on the line with zero drift and diffusion coefficient σ² = ρ.

A2. Asymptotic behaviour as t → ∞ and/or λ → ∞ (c and k fixed). Similarly to item A1, applying (2.7.3) and (2.9.9), one can easily show that, if t → ∞ and/or λ → ∞ for fixed c and k, then the following asymptotic formula holds:
\[
\mu_{2k}(t)\sim\left(\frac{c^2 t}{\lambda}\right)^{k}(2k-1)!!.
\tag{2.9.10}
\]
From (2.9.10) we see that the moments µ2k(t) increase like t^k, as t → ∞ (for fixed c, λ and k). Conversely, the moments µ2k(t) decrease like λ^{−k}, as λ → ∞ (for fixed c, t and k).

A3. Asymptotic behaviour as k → ∞ (c, t and λ fixed). Asymptotic analysis as k → ∞ is much more complicated, due to the lack of general asymptotic formulas for the modified Bessel function Iν(z) with respect to the index ν (except for the very particular case when the argument z depends on the index ν in a special way). Nevertheless, in our case asymptotic analysis of the moment function µ2k(t), as k → ∞, is possible due to the special form of the indices of the Bessel functions in (2.9.1). This result is presented by the following theorem.

Theorem 2.9.2. For arbitrary fixed c, λ and t, the following asymptotic formula holds:
\[
\mu_{2k}(t)\sim e^{-\lambda t}\,(ct)^{2k}\left(1+\frac{\lambda t}{2k+1}\right),\qquad k\to\infty.
\tag{2.9.11}
\]
The refined asymptotic formula is:
\[
\mu_{2k}(t)\sim e^{-\lambda t}\,(ct)^{2k}\left(1+\frac{\lambda t}{2k+1}+\frac{(\lambda t)^2}{4k+2}+\frac{(\lambda t)^3}{(4k+2)(2k+3)}\right),\qquad k\to\infty.
\tag{2.9.12}
\]


Proof. First, we prove the following asymptotic formulas for the modified Bessel functions with indices of the form k ± 1/2:
\[
I_{k+1/2}(z)\sim\sqrt{\frac2\pi}\,\frac{z^{k+1/2}}{(2k+1)!!},\qquad k\to\infty,
\tag{2.9.13}
\]
\[
I_{k-1/2}(z)\sim\sqrt{\frac2\pi}\,\frac{z^{k-1/2}}{(2k-1)!!},\qquad k\to\infty.
\tag{2.9.14}
\]
Let us prove (2.9.13). Using the series representation of the Bessel function, we have:
\[
\begin{aligned}
I_{k+1/2}(z)&=\sum_{l=0}^{\infty}\frac{1}{l!\,\Gamma\!\left((l+k+\tfrac12)+1\right)}\left(\frac z2\right)^{2l+k+1/2}
=\left(\frac z2\right)^{k+1/2}\sum_{l=0}^{\infty}\frac{1}{l!\,\left(l+k+\tfrac12\right)\Gamma\!\left(l+k+\tfrac12\right)}\,\frac{z^{2l}}{2^{2l}}\\
&\overset{\text{(2.9.9)}}{=}\left(\frac z2\right)^{k+1/2}\sum_{l=0}^{\infty}\frac{2^{l+k}}{\sqrt\pi\; l!\,\left(l+k+\tfrac12\right)(2l+2k-1)!!}\,\frac{z^{2l}}{2^{2l}}
=\sqrt{\frac2\pi}\, z^{k+1/2}\sum_{l=0}^{\infty}\frac{z^{2l}}{2^{l}\, l!\,(2l+2k+1)\,(2l+2k-1)!!}\\
&=\sqrt{\frac2\pi}\, z^{k+1/2}\sum_{l=0}^{\infty}\frac{z^{2l}}{(2l)!!\,(2l+2k+1)!!}
\;\sim\;\sqrt{\frac2\pi}\,\frac{z^{k+1/2}}{(2k+1)!!},\qquad k\to\infty,
\end{aligned}
\]
proving (2.9.13). Similarly,
\[
I_{k-1/2}(z)=\sum_{l=0}^{\infty}\frac{1}{l!\,\Gamma\!\left(l+k+\tfrac12\right)}\left(\frac z2\right)^{2l+k-1/2}
=\sqrt{\frac2\pi}\, z^{k-1/2}\sum_{l=0}^{\infty}\frac{z^{2l}}{(2l)!!\,(2l+2k-1)!!}
\;\sim\;\sqrt{\frac2\pi}\,\frac{z^{k-1/2}}{(2k-1)!!},\qquad k\to\infty,
\]
and (2.9.14) is also proved.

By applying the just proved formulas (2.9.13) and (2.9.14), we get:
\[
\begin{aligned}
\mu_{2k}(t)&=e^{-\lambda t}\, c^{2k}\, 2^{k-1/2}\,\lambda^{-k+1/2}\, t^{k+1/2}\,\Gamma\!\left(k+\tfrac12\right)\left[I_{k+1/2}(\lambda t)+I_{k-1/2}(\lambda t)\right]\\
&\sim e^{-\lambda t}\, c^{2k}\, 2^{k-1/2}\,\lambda^{-k+1/2}\, t^{k+1/2}\,\frac{\sqrt\pi}{2^{k}}\,(2k-1)!!\,\sqrt{\frac2\pi}\left[\frac{(\lambda t)^{k+1/2}}{(2k+1)!!}+\frac{(\lambda t)^{k-1/2}}{(2k-1)!!}\right]\\
&=e^{-\lambda t}\, c^{2k}\,\lambda^{-k+1/2}\, t^{k+1/2}\,(\lambda t)^{k-1/2}\left(1+\frac{\lambda t}{2k+1}\right)
=e^{-\lambda t}\,(ct)^{2k}\left(1+\frac{\lambda t}{2k+1}\right),\qquad k\to\infty,
\end{aligned}
\]
yielding (2.9.11).


Formula (2.9.12) can be proved in the same manner by applying, instead of (2.9.13) and (2.9.14), the refined asymptotic formulas for the modified Bessel function (see also Remark 2.9.2 below):
\[
I_{k+1/2}(z)\sim\frac{z^{k+5/2}+(4k+6)\, z^{k+1/2}}{\sqrt{2\pi}\,(2k+3)!!},\qquad k\to\infty,
\tag{2.9.15}
\]
\[
I_{k-1/2}(z)\sim\frac{z^{k+3/2}+(4k+2)\, z^{k-1/2}}{\sqrt{2\pi}\,(2k+1)!!},\qquad k\to\infty.
\tag{2.9.16}
\]
The theorem is thus proved.

Remark 2.9.2. One can obtain more refined asymptotic formulas by taking an arbitrary finite number of terms in the series representations of the Bessel functions \(I_{k+1/2}(z)\) and \(I_{k-1/2}(z)\):
\[
I_{k+1/2}(z)=\sqrt{\frac2\pi}\, z^{k+1/2}\sum_{l=0}^{\infty}\frac{z^{2l}}{(2l)!!\,(2l+2k+1)!!},\qquad
I_{k-1/2}(z)=\sqrt{\frac2\pi}\, z^{k-1/2}\sum_{l=0}^{\infty}\frac{z^{2l}}{(2l)!!\,(2l+2k-1)!!}.
\tag{2.9.17}
\]
Since the index k appears in the denominators of the terms of series (2.9.17) and, therefore, each term of the series tends to zero as k → ∞, for arbitrary integer n ≥ 0 the following asymptotic formulas hold:
\[
I_{k+1/2}(z)=\sqrt{\frac2\pi}\, z^{k+1/2}\sum_{l=0}^{n}\frac{z^{2l}}{(2l)!!\,(2l+2k+1)!!}+R_{k,n}^{+}(z),\qquad
I_{k-1/2}(z)=\sqrt{\frac2\pi}\, z^{k-1/2}\sum_{l=0}^{n}\frac{z^{2l}}{(2l)!!\,(2l+2k-1)!!}+R_{k,n}^{-}(z),
\tag{2.9.18}
\]
where the remainders \(R_{k,n}^{\pm}(z)\to0\), as k → ∞, for arbitrary fixed z and n ≥ 0. Note that formulas (2.9.13) and (2.9.14) arise from (2.9.18) for n = 0, while (2.9.15) and (2.9.16) arise from (2.9.18) for n = 1, respectively. One can also obtain upper bounds for the remainders \(R_{k,n}^{\pm}(z)\) and thereby estimate their rate of convergence to zero, as k → ∞.
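The series (2.9.17) also give a practical way to evaluate the exact moments (2.9.1) and to observe the asymptotics (2.9.11) numerically. The sketch below is our own illustration (function names and the truncation level are ours, not the book's):

```python
import math

def dfact(n):
    """Double factorial n!!, with (-1)!! = 0!! = 1 as in (2.9.9)."""
    r = 1
    while n > 1:
        r *= n
        n -= 2
    return r

def I_half(k, z, terms=60):
    """I_{k+1/2}(z) via the rapidly convergent series (2.9.17)."""
    s = sum(z**(2*l) / (dfact(2*l) * dfact(2*l + 2*k + 1)) for l in range(terms))
    return math.sqrt(2 / math.pi) * z**(k + 0.5) * s

def mu2k(k, c, lam, t):
    """Exact even-order moment (2.9.1) of the telegraph process."""
    gamma_half = math.sqrt(math.pi) * dfact(2*k - 1) / 2**k       # Gamma(k + 1/2)
    bessel = I_half(k, lam*t) + I_half(k - 1, lam*t)              # I_{k+1/2} + I_{k-1/2}
    return (math.exp(-lam*t) * c**(2*k) * 2**(k - 0.5)
            * lam**(-k + 0.5) * t**(k + 0.5) * gamma_half * bessel)

c, lam, t, k = 1.0, 2.0, 1.0, 25
exact = mu2k(k, c, lam, t)
approx = math.exp(-lam*t) * (c*t)**(2*k) * (1 + lam*t / (2*k + 1))   # (2.9.11)
print(exact, approx, exact / approx)
```

For k = 25 the ratio of the exact moment to the leading asymptotics (2.9.11) is already close to 1, and for k = 1 the function reproduces the second moment (2.9.8) exactly.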

Remark 2.9.3. Asymptotic formulas (2.9.11) and (2.9.12) show that the behaviour of the moment function µ2k(t), as k → ∞, depends on the factor ct as follows: if ct < 1, then µ2k(t) → 0; if ct = 1, then µ2k(t) → e^{−λt}; if ct > 1, then µ2k(t) → ∞.

2.9.3 Carleman condition

We now give the solution of the moment problem for the Goldstein-Kac telegraph process X(t). We show that, for arbitrary fixed t > 0, the moments of the process X(t) satisfy the Carleman condition and, therefore, the distribution of X(t) is entirely determined by its moments. This result is given by the following theorem.

Theorem 2.9.3. For any fixed t > 0, the moments µ2k(t) of the telegraph process X(t), given by (2.9.1), satisfy the Carleman condition:
\[
\sum_{k=1}^{\infty}\left[\mu_{2k}(t)\right]^{-1/(2k)}=\infty.
\tag{2.9.19}
\]


Proof. To prove the theorem, it is sufficient to show that the general term of the series on the left-hand side of (2.9.19) does not tend to zero as k → ∞. First, we prove that, for arbitrary k ≥ 1, the following inequality holds:
\[
\mu_{2k}(t)<(ct)^{2k}\,(1+\lambda t)\, e^{\lambda^2 t^2/2},\qquad k\ge1.
\tag{2.9.20}
\]
Using formulas (2.9.9) and (2.9.17), we have:
\[
\begin{aligned}
\mu_{2k}(t)&=e^{-\lambda t}\, c^{2k}\, 2^{k-1/2}\,\lambda^{-k+1/2}\, t^{k+1/2}\,\Gamma\!\left(k+\tfrac12\right)\left[I_{k+1/2}(\lambda t)+I_{k-1/2}(\lambda t)\right]\\
&=e^{-\lambda t}\, c^{2k}\,\lambda^{-k+1/2}\, t^{k+1/2}\,\sqrt{\frac\pi2}\,(2k-1)!!\left[I_{k+1/2}(\lambda t)+I_{k-1/2}(\lambda t)\right]\\
&=e^{-\lambda t}\,(ct)^{2k}\,(2k-1)!!\left[\lambda t\sum_{l=0}^{\infty}\frac{(\lambda t)^{2l}}{(2l)!!\,(2l+2k+1)!!}+\sum_{l=0}^{\infty}\frac{(\lambda t)^{2l}}{(2l)!!\,(2l+2k-1)!!}\right].
\end{aligned}
\]
For k ≥ 1 and any l ≥ 0,
\[
\frac{(2l+2k-1)!!}{(2k-1)!!}=\prod_{j=1}^{l}(2k+2j-1)\ge\prod_{j=1}^{l}2j=(2l)!!,
\]
and a fortiori \((2l+2k+1)!!\ge(2k-1)!!\,(2l)!!\). Hence
\[
(2k-1)!!\sum_{l=0}^{\infty}\frac{(\lambda t)^{2l}}{(2l)!!\,(2l+2k\pm1)!!}
\le\sum_{l=0}^{\infty}\frac{(\lambda t)^{2l}}{\left[(2l)!!\right]^2}
\le\sum_{l=0}^{\infty}\frac{1}{l!}\left(\frac{\lambda^2 t^2}{2}\right)^{l}=e^{\lambda^2 t^2/2},
\]
because \([(2l)!!]^2=(2^l\, l!)^2\ge 2^l\, l!\). Therefore
\[
\mu_{2k}(t)\le e^{-\lambda t}\,(ct)^{2k}\,(1+\lambda t)\, e^{\lambda^2 t^2/2}<(ct)^{2k}\,(1+\lambda t)\, e^{\lambda^2 t^2/2},
\]
proving (2.9.20). From (2.9.20) it follows that
\[
\left[\mu_{2k}(t)\right]^{-1/(2k)}>(ct)^{-1}\,(1+\lambda t)^{-1/(2k)}\, e^{-\lambda^2 t^2/(4k)},\qquad k\ge1.
\]
By passing in this inequality to the limit, as k → ∞, we obtain:
\[
\liminf_{k\to\infty}\left[\mu_{2k}(t)\right]^{-1/(2k)}\ge(ct)^{-1}>0
\]
for any c and t > 0. Thus, we have proved that the sequence \([\mu_{2k}(t)]^{-1/(2k)}\) does not tend to zero as k → ∞ and, therefore, the series (2.9.19) is divergent. The theorem is proved.

2.9.4 Generating function

For arbitrary complex number z such that |z| < λ²/c², consider the generating function of the moments of the telegraph process X(t), defined by the formula:
\[
\psi(z,t)=\sum_{k=0}^{\infty} z^{k}\,\frac{\mu_{2k}(t)}{(2k)!},\qquad |z|<\frac{\lambda^2}{c^2}.
\]
t > 0, satisfies the n-th order hyperbolic partial differential equation
\[
(\operatorname{Det} S_n)\, f=0.
\tag{2.11.8}
\]

Proof. Applying the Determinant Theorem to system (2.11.6), we get the statement of the theorem.

Equation (2.11.8) is the basic equation for our stochastic motion. It describes the general model with n velocities and arbitrary rates of the switching Poisson process. Thus, the problem of deriving the governing equation for such a general model reduces to the calculation of operator (2.11.7), that is, of the formal determinant of system (2.11.2) (equivalently, of (2.11.6)). Of course, this is a nontrivial problem, and it is impracticable in the general case for large n. Nevertheless, the value of formula (2.11.8) consists in its universality and compactness for any evolutionary model.


Remark 2.11.1. In this remark we discuss the applicability of the operator Det Sn to the function f in (2.11.8). From the form of operator (2.11.7) it follows that the function f = f(x,t) must be differentiable at least n times with respect to each of the variables. Since we usually deal with concrete initial-value problems, the fulfillment of this condition is ensured by setting the respective initial data. It is known [148] that the smoothness of the solution of a hyperbolic equation of any order with constant coefficients is entirely determined by the smoothness of the respective initial conditions. Therefore, by choosing smooth initial conditions we obtain a smooth solution of this initial-value problem. This is a very important peculiarity of hyperbolic equations of any order with constant coefficients. If the initial conditions are given by Dirac-type functions (meaning that at time t = 0 the density is concentrated at a single point), then the solution must be of the same type too. Since the motion has continuous trajectories and the speed of the process is finite, it must contain an absolutely continuous component, and therefore we can apply the operator Det Sn in this case as well.

Example 2.11.1. In our general model, set n = 2 and v1 = C + c, v2 = C − c, where C can be interpreted as a drift on the line. Let the transition probabilities have the form
\[
p_{jk}=\begin{cases}0, & \text{if } j=k,\\ 1, & \text{if } j\ne k,\end{cases}\qquad j,k=1,2.
\]
Then system (2.11.2) transforms into a system whose determinant is:
\[
\operatorname{Det} S_2=
\begin{vmatrix}
\dfrac{\partial}{\partial t}+(C+c)\dfrac{\partial}{\partial x}+\lambda_1 & -\lambda_2\\[8pt]
-\lambda_1 & \dfrac{\partial}{\partial t}+(C-c)\dfrac{\partial}{\partial x}+\lambda_2
\end{vmatrix}
=\frac{\partial^2}{\partial t^2}+2C\,\frac{\partial^2}{\partial x\,\partial t}+(C^2-c^2)\,\frac{\partial^2}{\partial x^2}
+\left[C(\lambda_1+\lambda_2)-c(\lambda_1-\lambda_2)\right]\frac{\partial}{\partial x}+(\lambda_1+\lambda_2)\,\frac{\partial}{\partial t},
\]
and, therefore, the probability density f = f(x,t) of this stochastic motion with drift satisfies the hyperbolic equation:
\[
\left\{\frac{\partial^2}{\partial t^2}+2C\,\frac{\partial^2}{\partial x\,\partial t}+(C^2-c^2)\,\frac{\partial^2}{\partial x^2}
+\left[C(\lambda_1+\lambda_2)-c(\lambda_1-\lambda_2)\right]\frac{\partial}{\partial x}+(\lambda_1+\lambda_2)\,\frac{\partial}{\partial t}\right\} f=0.
\]
In particular, for C = 0, λ1 = λ2 = λ, this equation turns into the Goldstein-Kac telegraph equation (2.3.2).
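Because the coefficients are constant, the operators ∂/∂t and ∂/∂x commute, so the expansion of Det S2 can be verified on symbols: replacing ∂/∂t by s and ∂/∂x by ξ turns both sides into ordinary polynomials that must agree everywhere. The check below is our own sketch (names are ours, not the book's):

```python
import random

def det_symbol(s, xi, C, c, l1, l2):
    """Symbol of Det S2: product of the diagonal entries minus l1*l2."""
    return (s + (C + c)*xi + l1) * (s + (C - c)*xi + l2) - l1*l2

def expanded_symbol(s, xi, C, c, l1, l2):
    """Symbol of the expanded second-order operator from Example 2.11.1."""
    return (s**2 + 2*C*s*xi + (C**2 - c**2)*xi**2
            + (C*(l1 + l2) - c*(l1 - l2))*xi + (l1 + l2)*s)

rng = random.Random(0)
for _ in range(100):
    args = [rng.uniform(-3.0, 3.0) for _ in range(6)]
    assert abs(det_symbol(*args) - expanded_symbol(*args)) < 1e-8
print("Det S2 expansion agrees at 100 random points")
```

Since both sides are polynomials of total degree two, agreement at sufficiently many points already implies identity of the operators.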

2.11.1 Uniform choice of velocities

There are two important particular cases of our general model for which the governing equation (2.11.8) can be obtained in explicit form. Consider the case when the intensity of switchings is constant, i.e. λ1 = ··· = λn = λ, and the choice of velocities is made in accordance with the uniform law
\[
p_{jk}=\frac1n\qquad\text{for any } j,k=1,\dots,n,
\tag{2.11.9}
\]
which means that the particle can preserve its current velocity.


Theorem 2.11.3. In the model with a constant rate λ and a uniform choice of velocities (2.11.9), the probability density f = f(x,t), x ∈ R¹, t > 0, satisfies the hyperbolic equation
\[
\left\{\prod_{k=1}^{n}\left(\frac{\partial}{\partial t}+v_k\,\frac{\partial}{\partial x}+\lambda\right)
-\frac{\lambda}{n}\sum_{k=1}^{n}\;\prod_{\substack{j=1\\ j\ne k}}^{n}\left(\frac{\partial}{\partial t}+v_j\,\frac{\partial}{\partial x}+\lambda\right)\right\} f=0.
\tag{2.11.10}
\]

Proof. It is easy to see that for this model the general system (2.11.2) transforms into a system whose determinant is
\[
\operatorname{Det} S_n=
\begin{vmatrix}
\frac{\partial}{\partial t}+v_1\frac{\partial}{\partial x}+\frac{\lambda(n-1)}{n} & -\frac{\lambda}{n} & \dots & -\frac{\lambda}{n}\\[4pt]
-\frac{\lambda}{n} & \frac{\partial}{\partial t}+v_2\frac{\partial}{\partial x}+\frac{\lambda(n-1)}{n} & \dots & -\frac{\lambda}{n}\\
\vdots & \vdots & \ddots & \vdots\\
-\frac{\lambda}{n} & -\frac{\lambda}{n} & \dots & \frac{\partial}{\partial t}+v_n\frac{\partial}{\partial x}+\frac{\lambda(n-1)}{n}
\end{vmatrix}.
\]
The value of such a determinant is known (see [41, Formula 225]); its explicit form in our case is given by the differential operator in (2.11.10). Applying now Theorem 2.11.2, we obtain the statement of the theorem.

We can also consider the model with a constant rate λ and the uniform choice of velocities given by the law
\[
p_{jk}=\begin{cases}0, & \text{if } j=k,\\[2pt] \dfrac{1}{n-1}, & \text{if } j\ne k,\end{cases}\qquad j,k=1,\dots,n,
\tag{2.11.11}
\]
which means that the particle cannot preserve its current velocity.

Theorem 2.11.4. In the model with a constant rate λ and a uniform choice of velocities (2.11.11), the probability density f = f(x,t), x ∈ R¹, t > 0, satisfies the hyperbolic equation
\[
\left\{\prod_{k=1}^{n}\left(\frac{\partial}{\partial t}+v_k\,\frac{\partial}{\partial x}+\frac{\lambda n}{n-1}\right)
-\frac{\lambda}{n-1}\sum_{k=1}^{n}\;\prod_{\substack{j=1\\ j\ne k}}^{n}\left(\frac{\partial}{\partial t}+v_j\,\frac{\partial}{\partial x}+\frac{\lambda n}{n-1}\right)\right\} f=0.
\tag{2.11.12}
\]

Proof. One can check that for this model the general system (2.11.2) transforms into a system whose determinant is
\[
\operatorname{Det} S_n=
\begin{vmatrix}
\frac{\partial}{\partial t}+v_1\frac{\partial}{\partial x}+\lambda & -\frac{\lambda}{n-1} & \dots & -\frac{\lambda}{n-1}\\[4pt]
-\frac{\lambda}{n-1} & \frac{\partial}{\partial t}+v_2\frac{\partial}{\partial x}+\lambda & \dots & -\frac{\lambda}{n-1}\\
\vdots & \vdots & \ddots & \vdots\\
-\frac{\lambda}{n-1} & -\frac{\lambda}{n-1} & \dots & \frac{\partial}{\partial t}+v_n\frac{\partial}{\partial x}+\lambda
\end{vmatrix}.
\]
This is a determinant of the same type as in Theorem 2.11.3; its explicit form in our case is given by the differential operator in (2.11.12). Applying now Theorem 2.11.2, we get the statement of the theorem.


Example 2.11.2. If in (2.11.11) we set n = 2, then, as is easy to see, equation (2.11.12) turns into the second-order hyperbolic equation
\[
\left\{\frac{\partial^2}{\partial t^2}+(v_1+v_2)\,\frac{\partial^2}{\partial x\,\partial t}+v_1 v_2\,\frac{\partial^2}{\partial x^2}
+\lambda(v_1+v_2)\,\frac{\partial}{\partial x}+2\lambda\,\frac{\partial}{\partial t}\right\} f=0,
\tag{2.11.13}
\]
which, for v1 = c, v2 = −c, transforms into the Goldstein-Kac telegraph equation (2.3.2) again.

2.11.2 Cyclic choice of velocities

Consider now the particular case of our general model when the choice of velocities is made in accordance with the cyclic scheme
\[
\dots\to v_1\to\dots\to v_{n-1}\to v_n\to v_1\to\dots.
\]
In this case the transition probabilities have the form
\[
p_{jk}=\begin{cases}
1, & \text{if } k=j+1 \text{ for some } j=1,\dots,n-1,\\
1, & \text{if } j=n \text{ and } k=1,\\
0, & \text{otherwise}.
\end{cases}
\tag{2.11.14}
\]

Theorem 2.11.5. In the model with a cyclic choice of velocities (2.11.14), the probability density f = f(x,t), x ∈ R¹, t > 0, satisfies the hyperbolic equation
\[
\left\{\prod_{k=1}^{n}\left(\frac{\partial}{\partial t}+v_k\,\frac{\partial}{\partial x}+\lambda_k\right)-\prod_{k=1}^{n}\lambda_k\right\} f=0.
\tag{2.11.15}
\]

Proof. One can see that for this model the general system (2.11.2) transforms into a system whose determinant is
\[
\operatorname{Det} S_n=
\begin{vmatrix}
\frac{\partial}{\partial t}+v_1\frac{\partial}{\partial x}+\lambda_1 & 0 & \dots & -\lambda_n\\[4pt]
-\lambda_1 & \frac{\partial}{\partial t}+v_2\frac{\partial}{\partial x}+\lambda_2 & \dots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \dots & \frac{\partial}{\partial t}+v_n\frac{\partial}{\partial x}+\lambda_n
\end{vmatrix}.
\]
The value of such a determinant is also known (see [41, Formula 213]); its explicit form in this case is given by the differential operator in (2.11.15). Applying now Theorem 2.11.2, we get the statement of the theorem.

Example 2.11.3. If in the cyclic model we set n = 2 and λ1 = λ2 = λ, then, as is easy to verify, (2.11.15) turns into (2.11.13) again.

Example 2.11.4. Let n = 3 and λ1 = λ2 = λ3 = λ in the cyclic model. Then, as is easy to see, (2.11.15) transforms into the equation
\[
\biggl\{\frac{\partial^3}{\partial t^3}+(v_1+v_2+v_3)\,\frac{\partial^3}{\partial x\,\partial t^2}+(v_1v_2+v_1v_3+v_2v_3)\,\frac{\partial^3}{\partial x^2\,\partial t}+v_1v_2v_3\,\frac{\partial^3}{\partial x^3}
+3\lambda\,\frac{\partial^2}{\partial t^2}+2\lambda(v_1+v_2+v_3)\,\frac{\partial^2}{\partial x\,\partial t}+\lambda(v_1v_2+v_1v_3+v_2v_3)\,\frac{\partial^2}{\partial x^2}
+3\lambda^2\,\frac{\partial}{\partial t}+\lambda^2(v_1+v_2+v_3)\,\frac{\partial}{\partial x}\biggr\} f=0.
\]
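The same symbol trick used for Example 2.11.1 verifies the cyclic equation of Example 2.11.4: replacing ∂/∂t by s and ∂/∂x by ξ (constant coefficients, commuting operators), the operator in (2.11.15) for n = 3 must coincide with the expanded third-order polynomial above. A small sketch (our own illustration; names are ours):

```python
import random

def cyclic_symbol(s, xi, v, lam):
    """Symbol of the operator in (2.11.15) with n = 3 and equal rates lam."""
    p = 1.0
    for vk in v:
        p *= s + vk*xi + lam
    return p - lam**3

def expanded_symbol(s, xi, v, lam):
    """Symbol of the expanded third-order operator of Example 2.11.4."""
    v1, v2, v3 = v
    e1 = v1 + v2 + v3
    e2 = v1*v2 + v1*v3 + v2*v3
    e3 = v1*v2*v3
    return (s**3 + e1*xi*s**2 + e2*xi**2*s + e3*xi**3
            + 3*lam*s**2 + 2*lam*e1*xi*s + lam*e2*xi**2
            + 3*lam**2*s + lam**2*e1*xi)

rng = random.Random(7)
for _ in range(100):
    s, xi, lam = (rng.uniform(-2.0, 2.0) for _ in range(3))
    v = [rng.uniform(-2.0, 2.0) for _ in range(3)]
    assert abs(cyclic_symbol(s, xi, v, lam) - expanded_symbol(s, xi, v, lam)) < 1e-8
print("Example 2.11.4 expansion agrees at 100 random points")
```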

2.12 Euclidean distance between two telegraph processes

The Euclidean distance between two stochastic processes is an important characteristic, in particular, in various interaction models. The distance between the Goldstein-Kac telegraph process and the Wiener process was studied in [80]. In this section we examine the Euclidean distance between two independent Goldstein-Kac telegraph processes.

2.12.1 Probability distribution function

Consider two independent telegraph processes X1(t) and X2(t) performed by the stochastic motions of two particles with finite speeds c1 > 0, c2 > 0 and controlled by two independent Poisson processes N1(t) and N2(t) of rates λ1 > 0, λ2 > 0, respectively. Suppose that, at the initial time instant t = 0, both processes X1(t) and X2(t) simultaneously start from the origin x = 0 of the real line R¹. For the sake of definiteness, we also suppose that c1 ≥ c2 (otherwise, one can merely interchange the numbering of the processes). The subject of our interest is the Euclidean distance
\[
\rho(t)=\left|X_1(t)-X_2(t)\right|,\qquad t>0,
\tag{2.12.1}
\]
between these processes at arbitrary time t > 0. It is clear that 0 ≤ ρ(t) ≤ (c1+c2)t, that is, the interval [0, (c1+c2)t] is the support of the distribution Pr{ρ(t) < r} of process (2.12.1).

The distribution of ρ(t), t > 0, consists of two components. Its singular part is concentrated at the two points (c1−c2)t and (c1+c2)t of the support. For arbitrary t > 0, the process ρ(t) is located at the point (c1−c2)t if and only if both particles take the same initial direction (the probability of this event is 1/2) and no Poisson events occur up to time t (the probability of this event is e^{−(λ1+λ2)t}). Similarly, ρ(t) is located at the point (c1+c2)t if and only if the particles take different initial directions (probability 1/2) and no Poisson events occur up to time t (probability e^{−(λ1+λ2)t}). Thus, we have:
\[
\Pr\left\{\rho(t)=(c_1-c_2)t\right\}=\Pr\left\{\rho(t)=(c_1+c_2)t\right\}=\frac12\, e^{-(\lambda_1+\lambda_2)t},\qquad t>0.
\tag{2.12.2}
\]
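The two point masses in (2.12.2) can be confirmed by simulation. The following sketch (our own illustration; names and parameters are ours) simulates two independent telegraph processes and estimates the probabilities that ρ(t) sits exactly at the endpoints (c1 ± c2)t; with c1 = c2 these are the points 0 and 2ct:

```python
import math
import random

def telegraph(t, c, lam, rng):
    """Sample X(t) of a Goldstein-Kac telegraph process started at the origin."""
    x, s = 0.0, 0.0
    v = c if rng.random() < 0.5 else -c
    while True:
        tau = rng.expovariate(lam)
        if s + tau >= t:
            return x + v * (t - s)
        x += v * tau
        s += tau
        v = -v

rng = random.Random(3)
c1 = c2 = 1.0
lam1 = lam2 = 0.5
t, n = 1.0, 100_000
far = near = 0
for _ in range(n):
    rho = abs(telegraph(t, c1, lam1, rng) - telegraph(t, c2, lam2, rng))
    far += abs(rho - (c1 + c2)*t) < 1e-12     # rho = (c1 + c2) t
    near += abs(rho - (c1 - c2)*t) < 1e-12    # rho = (c1 - c2) t
target = 0.5 * math.exp(-(lam1 + lam2) * t)
print(far/n, near/n, target)
```

Both empirical frequencies should be close to e^{−(λ1+λ2)t}/2 ≈ 0.1839 for these parameters.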

Therefore, the singular part ϕs(r,t) of the density ϕ(r,t) of the distribution Pr{ρ(t) < r} is the generalized function:
\[
\varphi_s(r,t)=\frac{e^{-(\lambda_1+\lambda_2)t}}{2}\left[\delta(r-(c_1-c_2)t)+\delta(r-(c_1+c_2)t)\right],\qquad r\in\mathbb R^1,\quad t>0,
\tag{2.12.3}
\]
where δ(x) is the Dirac delta-function. The remaining part of the distribution is concentrated in the area
\[
M_t=(0,(c_1-c_2)t)\cup((c_1-c_2)t,(c_1+c_2)t),\qquad t>0
\]
(note that if c1 = c2 = c, then Mt transforms into the interval (0, 2ct)). This is the support of the absolutely continuous part of the distribution Pr{ρ(t) < r}, corresponding to the case when at least one Poisson event occurs up to time instant t > 0. Our aim is to obtain an explicit formula for the probability distribution function
\[
\Phi(r,t)=\Pr\left\{\rho(t)<r\right\},\qquad r\in\mathbb R^1,\quad t>0,
\tag{2.12.4}
\]


of the Euclidean distance ρ(t). The form of this distribution function is somewhat different in the cases c1 = c2 and c1 > c2 because, for c1 = c2, the singularity point (c1−c2)t = 0 is a terminal point of the support, while in the case c1 > c2 it is an interior point of the support. That is why in the following theorem we obtain the probability distribution function in the more difficult case c1 > c2. Similar results concerning the simpler case c1 = c2 will be given separately at the end of this subsection.

Theorem 2.12.1. Under the condition c1 > c2, probability distribution function (2.12.4) has the form:
\[
\Phi(r,t)=\begin{cases}
0, & \text{if } r\in(-\infty,0],\\
G(r,t), & \text{if } r\in(0,(c_1-c_2)t],\\
Q(r,t), & \text{if } r\in((c_1-c_2)t,(c_1+c_2)t],\\
1, & \text{if } r\in((c_1+c_2)t,+\infty),
\end{cases}
\qquad r\in\mathbb R,\quad t>0,\quad c_1>c_2,
\tag{2.12.5}
\]

where the functions G(r,t) and Q(r,t) are given by the formulas:
\[
\begin{aligned}
G(r,t)&=\frac{\lambda_1\, e^{-(\lambda_1+\lambda_2)t}}{2c_1}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda_1 t}{2}\right)^{2k}\left(1+\frac{\lambda_1 t}{2k+2}\right)\\
&\qquad\times\left[(c_2t+r)\, F\!\left(-k,\tfrac12;\tfrac32;\frac{(c_2t+r)^2}{c_1^2t^2}\right)-(c_2t-r)\, F\!\left(-k,\tfrac12;\tfrac32;\frac{(c_2t-r)^2}{c_1^2t^2}\right)\right]\\
&\quad+\frac{\lambda_1\lambda_2\, e^{-(\lambda_1+\lambda_2)t}}{4c_1c_2}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda_1 t}{2}\right)^{2k}\left(1+\frac{\lambda_1 t}{2k+2}\right) I_k(r,t),
\end{aligned}
\tag{2.12.6}
\]
\[
\begin{aligned}
Q(r,t)&=\frac12\left[\left(1-e^{-\lambda_1 t}\right)e^{-\lambda_2 t}+\left(1-e^{-\lambda_2 t}\right)e^{-\lambda_1 t}+e^{-(\lambda_1+\lambda_2)t}\right]\\
&\quad-\frac{\lambda_1(c_2t-r)\, e^{-(\lambda_1+\lambda_2)t}}{2c_1}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda_1 t}{2}\right)^{2k}\left(1+\frac{\lambda_1 t}{2k+2}\right) F\!\left(-k,\tfrac12;\tfrac32;\frac{(c_2t-r)^2}{c_1^2t^2}\right)\\
&\quad-\frac{\lambda_2(c_1t-r)\, e^{-(\lambda_1+\lambda_2)t}}{2c_2}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda_2 t}{2}\right)^{2k}\left(1+\frac{\lambda_2 t}{2k+2}\right) F\!\left(-k,\tfrac12;\tfrac32;\frac{(c_1t-r)^2}{c_2^2t^2}\right)\\
&\quad+\frac{\lambda_1\lambda_2\, e^{-(\lambda_1+\lambda_2)t}}{4c_1c_2}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda_1 t}{2}\right)^{2k}\left(1+\frac{\lambda_1 t}{2k+2}\right) I_k(r,t),
\end{aligned}
\tag{2.12.7}
\]
with the integral factor
\[
I_k(r,t)=\int_{-c_2t}^{c_2t}\left[\beta(x,r)\, F\!\left(-k,\tfrac12;\tfrac32;\frac{\beta(x,r)^2}{c_1^2t^2}\right)-\alpha(x,r)\, F\!\left(-k,\tfrac12;\tfrac32;\frac{\alpha(x,r)^2}{c_1^2t^2}\right)\right]
\left[I_0\!\left(\frac{\lambda_2}{c_2}\sqrt{c_2^2t^2-x^2}\right)+\frac{c_2t}{\sqrt{c_2^2t^2-x^2}}\, I_1\!\left(\frac{\lambda_2}{c_2}\sqrt{c_2^2t^2-x^2}\right)\right] dx,
\tag{2.12.8}
\]
where the variables α(x,r) and β(x,r) are defined by the formulas:
\[
\alpha(x,r)=\max\{-c_1t,\; x-r\},\qquad \beta(x,r)=\min\{c_1t,\; x+r\},\qquad x\in(-c_2t,c_2t),\quad r\in M_t.
\tag{2.12.9}
\]

Proof. For probability distribution function (2.12.4) we have
\[
\begin{aligned}
\Phi(r,t)&=e^{-(\lambda_1+\lambda_2)t}\,\Pr\left\{\rho(t)<r \mid N_1(t)=0,\, N_2(t)=0\right\}\\
&\quad+\left(1-e^{-\lambda_1 t}\right)e^{-\lambda_2 t}\,\Pr\left\{\rho(t)<r \mid N_1(t)\ge1,\, N_2(t)=0\right\}\\
&\quad+e^{-\lambda_1 t}\left(1-e^{-\lambda_2 t}\right)\Pr\left\{\rho(t)<r \mid N_1(t)=0,\, N_2(t)\ge1\right\}\\
&\quad+\left(1-e^{-\lambda_1 t}\right)\left(1-e^{-\lambda_2 t}\right)\Pr\left\{\rho(t)<r \mid N_1(t)\ge1,\, N_2(t)\ge1\right\}.
\end{aligned}
\tag{2.12.10}
\]

Let us evaluate separately the conditional probabilities on the right-hand side of (2.12.10). Obviously, the first conditional probability is:
\[
\Pr\left\{\rho(t)<r \mid N_1(t)=0,\, N_2(t)=0\right\}=
\begin{cases}
0, & \text{if } r\in(-\infty,(c_1-c_2)t],\\
\tfrac12, & \text{if } r\in((c_1-c_2)t,(c_1+c_2)t],\\
1, & \text{if } r\in((c_1+c_2)t,+\infty).
\end{cases}
\tag{2.12.11}
\]

• Evaluation of Pr{ρ(t) < r | N1(t) ≥ 1, N2(t) = 0}. We note that the following equalities for random events hold:
\[
\{N_1(t)\ge1\}=\{X_1(t)\in(-c_1t,c_1t)\},\qquad \{N_2(t)=0\}=\{X_2(t)=-c_2t\}+\{X_2(t)=c_2t\}.
\]
Then, according to formula (1.9.8) of Lemma 1.9.2, conditioning on the two equiprobable positions of X2(t) we have
\[
\begin{aligned}
&\Pr\left\{\rho(t)<r \mid N_1(t)\ge1,\, N_2(t)=0\right\}\\
&\quad=\frac{1}{2(1-e^{-\lambda_1 t})}\left[\Pr\left\{X_1(t)\in(-c_2t-r,-c_2t+r)\cap(-c_1t,c_1t)\right\}
+\Pr\left\{X_1(t)\in(c_2t-r,c_2t+r)\cap(-c_1t,c_1t)\right\}\right]\\
&\quad=\frac{1}{2(1-e^{-\lambda_1 t})}\left[\Pr\left\{X_1(t)\in(\alpha,-c_2t+r)\right\}+\Pr\left\{X_1(t)\in(c_2t-r,\beta)\right\}\right],
\end{aligned}
\]
where α = max{−c1t, −c2t−r}, β = min{c1t, c2t+r}.

Applying formula (2.6.1) of Proposition 2.6.1, we get:
\[
\begin{aligned}
&\Pr\left\{\rho(t)<r \mid N_1(t)\ge1,\, N_2(t)=0\right\}\\
&\quad=\frac{1}{2(1-e^{-\lambda_1 t})}\,\frac{\lambda_1 e^{-\lambda_1 t}}{2c_1}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda_1 t}{2}\right)^{2k}\left(1+\frac{\lambda_1 t}{2k+2}\right)\\
&\qquad\times\left[\beta\, F\!\left(-k,\tfrac12;\tfrac32;\frac{\beta^2}{c_1^2t^2}\right)-(c_2t-r)\, F\!\left(-k,\tfrac12;\tfrac32;\frac{(c_2t-r)^2}{c_1^2t^2}\right)\right.\\
&\qquad\quad\left.{}+(-c_2t+r)\, F\!\left(-k,\tfrac12;\tfrac32;\frac{(-c_2t+r)^2}{c_1^2t^2}\right)-\alpha\, F\!\left(-k,\tfrac12;\tfrac32;\frac{\alpha^2}{c_1^2t^2}\right)\right].
\end{aligned}
\tag{2.12.12}
\]

It is easy to check that
\[
\beta=\begin{cases}c_2t+r, & \text{if } r\in(0,(c_1-c_2)t],\\ c_1t, & \text{if } r\in((c_1-c_2)t,(c_1+c_2)t],\end{cases}
\qquad
\alpha=\begin{cases}-c_2t-r, & \text{if } r\in(0,(c_1-c_2)t],\\ -c_1t, & \text{if } r\in((c_1-c_2)t,(c_1+c_2)t].\end{cases}
\]
From these formulas we see that α = −β independently of r. Therefore, (2.12.12) becomes:
\[
\Pr\left\{\rho(t)<r \mid N_1(t)\ge1,\, N_2(t)=0\right\}
=\frac{\lambda_1 e^{-\lambda_1 t}}{2c_1(1-e^{-\lambda_1 t})}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda_1 t}{2}\right)^{2k}\left(1+\frac{\lambda_1 t}{2k+2}\right)
\left[\beta\, F\!\left(-k,\tfrac12;\tfrac32;\frac{\beta^2}{c_1^2t^2}\right)-(c_2t-r)\, F\!\left(-k,\tfrac12;\tfrac32;\frac{(c_2t-r)^2}{c_1^2t^2}\right)\right].
\tag{2.12.13}
\]
If r ∈ (0,(c1−c2)t], then β = c2t+r and, therefore, (2.12.13) becomes:
\[
\Pr\left\{\rho(t)<r \mid N_1(t)\ge1,\, N_2(t)=0\right\}
=\frac{\lambda_1 e^{-\lambda_1 t}}{2c_1(1-e^{-\lambda_1 t})}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda_1 t}{2}\right)^{2k}\left(1+\frac{\lambda_1 t}{2k+2}\right)
\left[(c_2t+r)\, F\!\left(-k,\tfrac12;\tfrac32;\frac{(c_2t+r)^2}{c_1^2t^2}\right)-(c_2t-r)\, F\!\left(-k,\tfrac12;\tfrac32;\frac{(c_2t-r)^2}{c_1^2t^2}\right)\right].
\tag{2.12.14}
\]
If r ∈ ((c1−c2)t,(c1+c2)t], then β = c1t and formula (2.12.13) becomes:
\[
\Pr\left\{\rho(t)<r \mid N_1(t)\ge1,\, N_2(t)=0\right\}
=\frac{\lambda_1 e^{-\lambda_1 t}}{2c_1(1-e^{-\lambda_1 t})}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda_1 t}{2}\right)^{2k}\left(1+\frac{\lambda_1 t}{2k+2}\right)
\left[c_1t\, F\!\left(-k,\tfrac12;\tfrac32;1\right)-(c_2t-r)\, F\!\left(-k,\tfrac12;\tfrac32;\frac{(c_2t-r)^2}{c_1^2t^2}\right)\right],
\tag{2.12.15}
\]

if r ∈ ((c1 − c2)t, (c1 + c2)t].

Formula (2.12.15) can be simplified. In view of (2.6.5), one can easily show that
\[
\frac{\lambda_1 e^{-\lambda_1 t}}{2c_1}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda_1 t}{2}\right)^{2k}\left(1+\frac{\lambda_1 t}{2k+2}\right) c_1t\, F\!\left(-k,\tfrac12;\tfrac32;1\right)=\frac{1-e^{-\lambda_1 t}}{2}
\tag{2.12.16}
\]
and, therefore, (2.12.15) takes the form:
\[
\Pr\left\{\rho(t)<r \mid N_1(t)\ge1,\, N_2(t)=0\right\}
=\frac12-\frac{\lambda_1(c_2t-r)\, e^{-\lambda_1 t}}{2c_1(1-e^{-\lambda_1 t})}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda_1 t}{2}\right)^{2k}\left(1+\frac{\lambda_1 t}{2k+2}\right) F\!\left(-k,\tfrac12;\tfrac32;\frac{(c_2t-r)^2}{c_1^2t^2}\right),
\tag{2.12.17}
\]
if r ∈ ((c1−c2)t,(c1+c2)t].

• Evaluation of Pr{ρ(t) < r | N1(t) = 0, N2(t) ≥ 1}. It is obvious that for arbitrary r ∈ (0,(c1−c2)t] the following relation holds:
\[
\Pr\left\{\rho(t)<r \mid N_1(t)=0,\, N_2(t)\ge1\right\}=0,\qquad \text{if } r\in(0,(c_1-c_2)t].
\tag{2.12.18}
\]
Let now r ∈ ((c1−c2)t,(c1+c2)t]. Since
\[
\{N_1(t)=0\}=\{X_1(t)=-c_1t\}+\{X_1(t)=c_1t\},\qquad \{N_2(t)\ge1\}=\{X_2(t)\in(-c_2t,c_2t)\},
\]
then, similarly as above, one can show that
\[
\Pr\left\{\rho(t)<r \mid N_1(t)=0,\, N_2(t)\ge1\right\}
=\frac{1}{2(1-e^{-\lambda_2 t})}\left[\Pr\left\{X_2(t)\in(-c_2t,-c_1t+r)\right\}+\Pr\left\{X_2(t)\in(c_1t-r,c_2t)\right\}\right].
\]
Applying formula (2.6.1) of Proposition 2.6.1 and taking into account that
\[
\frac{\lambda_2 e^{-\lambda_2 t}}{2c_2}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda_2 t}{2}\right)^{2k}\left(1+\frac{\lambda_2 t}{2k+2}\right) c_2t\, F\!\left(-k,\tfrac12;\tfrac32;1\right)=\frac{1-e^{-\lambda_2 t}}{2},
\]


we finally obtain:
\[
\Pr\left\{\rho(t)<r \mid N_1(t)=0,\, N_2(t)\ge1\right\}
=\frac12-\frac{\lambda_2(c_1t-r)\, e^{-\lambda_2 t}}{2c_2(1-e^{-\lambda_2 t})}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda_2 t}{2}\right)^{2k}\left(1+\frac{\lambda_2 t}{2k+2}\right) F\!\left(-k,\tfrac12;\tfrac32;\frac{(c_1t-r)^2}{c_2^2t^2}\right),
\tag{2.12.19}
\]
if r ∈ ((c1−c2)t,(c1+c2)t].

• Evaluation of Pr{ρ(t) < r | N1(t) ≥ 1, N2(t) ≥ 1}. Since
\[
\{N_1(t)\ge1\}=\{X_1(t)\in(-c_1t,c_1t)\},\qquad \{N_2(t)\ge1\}=\{X_2(t)\in(-c_2t,c_2t)\},
\]
then, for the fourth conditional probability on the right-hand side of (2.12.10), we have:
\[
\begin{aligned}
\Pr\left\{\rho(t)<r \mid N_1(t)\ge1,\, N_2(t)\ge1\right\}
&=\frac{\Pr\left\{\{\rho(t)<r\}\cap\{X_1(t)\in(-c_1t,c_1t)\}\cap\{X_2(t)\in(-c_2t,c_2t)\}\right\}}{\Pr\{X_1(t)\in(-c_1t,c_1t)\}\,\Pr\{X_2(t)\in(-c_2t,c_2t)\}}\\
&=\frac{1}{(1-e^{-\lambda_1 t})(1-e^{-\lambda_2 t})}\int_{-c_2t}^{c_2t}\Pr\left\{X_1(t)\in(\alpha(x,r),\beta(x,r)) \mid X_2(t)=x\right\}\Pr\left\{X_2(t)\in dx\right\},
\end{aligned}
\]
where α(x,r) = max{x−r, −c1t}, β(x,r) = min{x+r, c1t}.

In view of (2.6.1) and (2.5.17), we obtain:
\[
\Pr\left\{\rho(t)<r \mid N_1(t)\ge1,\, N_2(t)\ge1\right\}
=\frac{\lambda_1\lambda_2\, e^{-(\lambda_1+\lambda_2)t}}{4c_1c_2\,(1-e^{-\lambda_1 t})(1-e^{-\lambda_2 t})}\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{\lambda_1 t}{2}\right)^{2k}\left(1+\frac{\lambda_1 t}{2k+2}\right) I_k(r,t),
\tag{2.12.20}
\]
where the integral factor Ik(r,t) is defined by (2.12.8) and f2ac(x,t) is the density of the absolutely continuous part of the distribution of the telegraph process X2(t) given by (2.5.17). Substituting now (2.12.11), (2.12.14), (2.12.18) and (2.12.20) into (2.12.10), we obtain the term G(r,t) of distribution function (2.12.5), defined on the interval r ∈ (0,(c1−c2)t] and given by formula (2.12.6). Similarly, by substituting (2.12.11), (2.12.17), (2.12.19) and (2.12.20) into (2.12.10), we get the term Q(r,t), defined on the interval r ∈ ((c1−c2)t,(c1+c2)t] and given by formula (2.12.7). The theorem is thus completely proved.
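The closed-form identity (2.12.16) used in the proof, which collapses the full-interval probability to (1 − e^{−λ1t})/2, is easy to confirm numerically, since F(−k, 1/2; 3/2; ·) is a terminating (k+1)-term series. The sketch below is our own illustration (names are ours):

```python
import math

def F_unit(k):
    """Terminating Gauss hypergeometric F(-k, 1/2; 3/2; 1),
    computed from its (k+1)-term series."""
    s, term = 0.0, 1.0
    for m in range(k + 1):
        s += term
        term *= -(k - m) * (0.5 + m) / ((1.5 + m) * (m + 1))
    return s

def lhs_2_12_16(lam, c, t, kmax=60):
    """Left-hand side of identity (2.12.16)."""
    total = 0.0
    for k in range(kmax):
        total += (1 / math.factorial(k)**2 * (lam*t/2)**(2*k)
                  * (1 + lam*t/(2*k + 2)) * c*t * F_unit(k))
    return lam * math.exp(-lam*t) / (2*c) * total

lam, c, t = 1.7, 2.0, 1.3
print(lhs_2_12_16(lam, c, t), (1 - math.exp(-lam*t)) / 2)
```

The two printed values coincide to high precision for any positive λ, c, t.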


Remark 2.12.1. One can easily see that if r ∈ (0,(c1−c2)t], then the variables take the values α(x,r) = x − r and β(x,r) = x + r independently of x ∈ (−c2t, c2t); in this case the integral factor Ik(r,t) can, therefore, be rewritten in a more explicit form. In contrast, if r ∈ ((c1−c2)t,(c1+c2)t], then each of these variables can take both of its possible values.

Remark 2.12.2. Taking into account that, for any x ∈ (−c2t, c2t),

α(x, (c1 + c2 )t) = −c1 t,

α(x, (c1 − c2 )t) = x − (c1 − c2 )t,

β(x, (c1 + c2 )t) = c1 t,

β(x, (c1 − c2 )t) = x + (c1 − c2 )t,

one can easily prove the following limiting relations: 1 Q(r, t) = 1 − e−(λ1 +λ2 )t , r→0+0 2 r→(c1 +c2 )t−0 1 −(λ1 +λ2 )t . lim Q(r, t) − lim G(r, t) = e 2 r→(c1 −c2 )t+0 r→(c1 −c2 )t−0 lim G(r, t) = 0,

lim

(2.12.21)

Formulas (2.12.21) show that probability distribution function (2.12.5) is left-continuous with jumps of the same amplitude e−(λ1 +λ2 )t /2 at the singularity points (c1 ± c2 )t. This entirely accords with the structure of the distribution of the process ρ(t) described above. Remark 2.12.3. When using probability distribution function (2.12.5), the crucial point is the possibility of evaluating the integral term Ik (r, t) given in (2.12.8). By means of tedious computations and by applying relations (1.9.6) and (1.9.7), we can obtain a series representation of the integral Ik (r, t). However it has an extremely complicated and cumbersome form and is therefore omitted here. That is why for practical purposes it is more convenient to use just the integral form of factor Ik (r, t). We end this subsection by presenting a result related to the more simple case of equal velocities. Suppose that both the telegraph processes X1 (t) and X2 (t) have the same speed c1 = c2 = c. In this case the support of distribution (2.12.4) is the closed interval [0, 2ct]. The singular component of the distribution has the density (as a generalized function) ϕs (r, t) =

 e−(λ1 +λ2 )t  δ(r) + δ(r − 2ct) , 2

r ∈ R,

t > 0,

(2.12.22)

concentrated at the two terminal points 0 and 2ct, while the open interval (0, 2ct) is the support of the absolutely continuous part of distribution (2.12.4). For the case of equal velocities, probability distribution function (2.12.4) is given by the following theorem.

Theorem 2.12.2. Under the condition c1 = c2 = c, probability distribution function (2.12.4) has the form:

Φ(r, t) = 0,         if r ∈ (−∞, 0],
          H(r, t),   if r ∈ (0, 2ct],          t > 0,     (2.12.23)
          1,         if r ∈ (2ct, +∞),

where function H(r, t) is given by the formula:

Telegraph Processes

107

H(r, t) i   1h 1 − e−λ1 t e−λ2 t + e−λ1 t 1 − e−λ2 t + e−(λ1 +λ2 )t = 2 " 2k+1    2k+1  # ∞  λ1 t λ1 t λ2 t λ2 t r X 1 −(λ1 +λ2 )t 1+ + 1+ 1− −e ct (k!)2 2 2k + 2 2 2k + 2 k=0     1 3 r 2 × F −k, ; ; 1 − 2 2 ct  2k   ∞ X λ1 t 1 λ1 t λ1 λ2 −(λ1 +λ2 )t e 1+ Jk (r, t), + 4c2 (k!)2 2 2k + 2 k=0 (2.12.24) with the integral factor     Zct  1 3 (β(x, r))2 1 3 (α(x, r))2 Jk (r, t) = β(x, r)F −k, ; ; − α(x, r)F −k, ; ; 2 2 c2 t2 2 2 c2 t2 −ct   p  p   λ2 λ2 ct × I0 I1 c2 t 2 − x 2 + √ c2 t2 − x2 dx, c c c2 t 2 − x 2 (2.12.25) where the variables α(x, r) and β(x, r) are defined by the formulas: α(x, r) = max{−ct, x − r}, x ∈ (−ct, ct),

β(x, r) = min{ct, x + r},

(2.12.26)

r ∈ (0, 2ct).

Proof. By setting c1 = c + ε, c2 = c, the condition c1 > c2 is fulfilled. Therefore, by applying Theorem 2.12.1 for such c1, c2, then passing to the limit as ε → 0 and taking into account the uniform convergence of the series in (2.12.6) and (2.12.7), we arrive at the statement of the theorem.

Remark 2.12.4. The results obtained in Theorems 2.12.1 and 2.12.2 may be useful for analysing the distribution of the difference between two independent telegraph processes X1(t) and X2(t). While the distribution of the difference (as well as of the sum) is given by a respective convolution, the evaluation of such a convolution is a very difficult (and maybe impracticable) problem due to the fairly complicated form of the probability law of the telegraph process (see formulas (2.5.1) or (2.5.16) for its density, or Theorem 2.6.2, formula (2.6.7), for its distribution function). Let FR(r, t) = Pr{R(t) < r} denote the probability distribution function of the difference R(t) = X1(t) − X2(t). The interval [−(c1 + c2)t, (c1 + c2)t] is the support of the distribution of R(t), with the two singularity points ±(c1 + c2)t in the case c1 ≠ c2 and the three singularity points 0, ±2ct in the case of equal velocities c1 = c2 = c. Then the distribution function FR(r, t) of the difference R(t) and the distribution function Φ(r, t) of the Euclidean distance ρ(t) between X1(t) and X2(t) are connected by the functional relation

FR(r, t) − FR(−r, t) − Pr{R(t) = −r} = Φ(r, t),   r ∈ R¹,  t > 0.

Note that the term Pr{R(t) = −r} takes a non-zero value if and only if r is a singular point of the distribution of process R(t). For regular r, this term vanishes.

2.12.2 Numerical example

Although probability distribution functions (2.12.5) and (2.12.23) have fairly complicated analytical forms, they can, nevertheless, be approximately evaluated with good accuracy using standard computer mathematics packages. As was noted in Remark 2.12.3 above, the crucial point is the evaluation of the integral factors Ik(r, t) given in (2.12.8) (for c1 > c2) or Jk(r, t) given in (2.12.25) (for c1 = c2 = c), respectively.

To approximately evaluate the series in (2.12.6) and (2.12.7), we do not need to compute integral term (2.12.8) for all k ≥ 0. We notice that each series contains the factor 1/(k!)², which provides very fast convergence. In fact, one can see that, if we take only five terms of each series in the functions G(r, t) and Q(r, t), their approximate values stabilize at the fourth digit. Let us set, in our model,

λ1 = 2,  λ2 = 1,  c1 = 2,  c2 = 1,  t = 4.     (2.12.27)

In this case, the support of the distribution is the interval [0, 12] with the two singularity points r = 4 (an interior point of the support) and r = 12 (the terminal point). The results of the numerical analysis of probability distribution function (2.12.5) with parameters (2.12.27), for the function G(r, 4) defined on the subinterval r ∈ (0, 4] and for the function Q(r, 4) on the subinterval r ∈ (4, 8], are presented in Table 2.1. Note that in evaluating these functions we take only eight terms in the series. Also note that, although function Q(r, 4) is defined on the whole interval (4, 12], we consider it only over (4, 8] because it has very small increments over (8, 12].

We now examine the behaviour of the probability distribution function Φ(r, 4) in the neighbourhoods of the singularity points. As noted above, for the values of the parameters given by (2.12.27), function Φ(r, 4) has two singularity points, namely, r = 4 and r = 12. At the first (interior) point r = 4, formulas (2.12.6) and (2.12.7) yield the values

G(4, 4) ≈ 0.76444570,   lim_{r→4+0} Q(r, 4) ≈ 0.76444877,

and, therefore, their difference is

lim_{r→4+0} Q(r, 4) − G(4, 4) ≈ 0.76444877 − 0.76444570 = 0.00000307.

We see that this difference is equal to the value of the jump amplitude at this singularity point: e^{−12}/2 ≈ 0.00000307. Similarly, at the second (terminal) singularity point r = 12, formula (2.12.7) yields the value Q(12, 4) ≈ 0.99999693 and, therefore, the difference is

1 − Q(12, 4) ≈ 1 − 0.99999693 = 0.00000307.

This is again equal to the value of the jump amplitude at this singularity point: e^{−12}/2 ≈ 0.00000307. Note that, when evaluating G(r, 4) and Q(r, 4) at the singularity points r = 4 and r = 12, we take twenty terms in each series, because more accuracy is needed in this case.

Suppose that every time the particles come within distance r = 0.4 of each other, they can start interacting with probability 0.2. Then the probability that the interaction begins at time instant t = 4 is

Pr{ρ(4) < 0.4} · 0.2 = G(0.4, 4) · 0.2 = 0.0930 · 0.2 = 0.0186.

Here we have used the value of G(r, 4) at r = 0.4 given in Table 2.1.
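These numerical consistency checks are easy to reproduce. The short Python sketch below recomputes the jump amplitude e^{−(λ1+λ2)t}/2 for the parameters (2.12.27) and the interaction probability; the tabulated values of G and Q are simply copied from the text, not recomputed:

```python
import math

# jump amplitude e^{-(lambda1+lambda2)t}/2 for lambda1 = 2, lambda2 = 1, t = 4
jump = math.exp(-(2 + 1) * 4) / 2
print(round(jump, 8))                           # 3.07e-06, i.e. 0.00000307

# difference of the tabulated values at the interior singularity r = 4
print(round(0.76444877 - 0.76444570, 8))

# probability that the interaction begins at t = 4: G(0.4, 4) * 0.2
print(round(0.0930 * 0.2, 4))                   # 0.0186
```

Both differences reproduce the theoretical jump amplitude e^{−12}/2 to the printed precision.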

r     G(r, 4)  |  r     G(r, 4)  |  r     Q(r, 4)  |  r     Q(r, 4)
0.2   0.0466   |  2.2   0.4812   |  4.2   0.7872   |  6.2   0.9393
0.4   0.0930   |  2.4   0.5189   |  4.4   0.8063   |  6.4   0.9448
0.6   0.1392   |  2.6   0.5550   |  4.6   0.8261   |  6.6   0.9523
0.8   0.1849   |  2.8   0.5897   |  4.8   0.8447   |  6.8   0.9618
1.0   0.2299   |  3.0   0.6228   |  5.0   0.8617   |  7.0   0.9646
1.2   0.2744   |  3.2   0.6543   |  5.2   0.8803   |  7.2   0.9696
1.4   0.3179   |  3.4   0.6842   |  5.4   0.8916   |  7.4   0.9741
1.6   0.3605   |  3.6   0.7124   |  5.6   0.9075   |  7.6   0.9778
1.8   0.4020   |  3.8   0.7390   |  5.8   0.9192   |  7.8   0.9815
2.0   0.4423   |  4.0   0.7639   |  6.0   0.9268   |  8.0   0.9839

Table 2.1: Values of functions G(r, 4) on the subinterval r ∈ (0, 4] (left two columns) and Q(r, 4) on the subinterval r ∈ (4, 8] (right two columns)

Remark 2.12.5. The model considered in this section can generate a number of interesting problems. The results obtained enable us to compute the probability that the interaction starts at an arbitrary time instant t > 0. However, for practical needs it is more important to evaluate the probability that the interaction will begin before some fixed time moment. Let T > 0 be an arbitrary time instant and let kTr denote the random variable counting how many times during the time interval (0, T) the distance between the particles was less than some given r > 0. The distribution of the nonnegative integer-valued random variable kTr is of special importance, because it would enable us to evaluate the probability that the interaction starts before time T.

2.13 Sum of two telegraph processes

In this section we study an important functional, namely, the sum of two independent Goldstein-Kac telegraph processes. While the distribution of the sum of any two independent stochastic processes is given by the convolution of their distributions, the explicit evaluation of such a convolution is, in general, an impracticable problem. Surprisingly, despite the fairly complicated form of the distribution of the Goldstein-Kac telegraph process (see (2.5.1) and (2.5.16)), one can obtain a closed-form expression for the distribution of the sum of two independent Goldstein-Kac telegraph processes driven by two independent Poisson processes.

2.13.1 Density of the sum of telegraph processes

Consider two independent Goldstein-Kac telegraph processes X1(t) and X2(t) on the real line R¹. Both X1(t) and X2(t) start simultaneously from the origin x = 0 at the initial time instant t = 0 and have the same constant speed c. The motions are controlled by two independent Poisson processes of the same rate λ > 0. Consider the sum S(t) = X1(t) + X2(t), t > 0, of these telegraph processes.

The support of the distribution Pr{S(t) < x}, x ∈ R¹, t > 0, of the process S(t) is the closed interval [−2ct, 2ct]. This distribution consists of two components. The singular component is concentrated at the three points 0, ±2ct and corresponds to the case when no Poisson events occur till time t. If both processes X1(t) and X2(t) initially take the same direction (the probability of this event is 1/2) and no Poisson event occurs till time t, then, at moment t, the process S(t) is located at one of the terminal points ±2ct. Thus,

Pr{S(t) = 2ct} = Pr{S(t) = −2ct} = (1/4) e^{−2λt},   t > 0.     (2.13.1)

If the processes X1(t) and X2(t) initially take opposite directions (the probability of this event is 1/2) and no Poisson event occurs till time t, then, at moment t, the process S(t) is located at the origin and, therefore,

Pr{S(t) = 0} = (1/2) e^{−2λt},   t > 0.     (2.13.2)

The remaining part Mt = (−2ct, 0) ∪ (0, 2ct) of the interval [−2ct, 2ct] is the support of the absolutely continuous component of the distribution Pr{S(t) < x}, x ∈ R¹, t > 0, corresponding to the case when at least one Poisson event occurs by time t; therefore,

Pr{S(t) ∈ Mt} = 1 − e^{−2λt},   t > 0.     (2.13.3)
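The singular masses (2.13.1) and (2.13.2) can be checked by direct simulation. The following Monte Carlo sketch (the helper `telegraph` and all parameter values are illustrative, not taken from the text) simulates the two processes and counts the hits at the singular points:

```python
import math
import random

def telegraph(c, lam, t, rng):
    """One sample of the Goldstein-Kac telegraph position at time t:
    move at speed c, reversing direction at the events of a Poisson(lam) process."""
    d = rng.choice([-1, 1])          # initial direction, +1 or -1 with probability 1/2
    x, s = 0.0, 0.0
    while True:
        gap = rng.expovariate(lam)   # waiting time to the next Poisson event
        if s + gap >= t:
            return x + d * c * (t - s)
        x += d * c * gap
        s += gap
        d = -d                       # reverse direction at the event

rng = random.Random(7)
c, lam, t, n = 1.0, 1.0, 1.0, 200_000
hits0 = hits_end = 0
for _ in range(n):
    s_val = telegraph(c, lam, t, rng) + telegraph(c, lam, t, rng)
    if abs(s_val) < 1e-12:                       # S(t) = 0
        hits0 += 1
    elif abs(abs(s_val) - 2 * c * t) < 1e-12:    # S(t) = +2ct or -2ct
        hits_end += 1

print(hits0 / n)      # should be close to exp(-2*lam*t)/2
print(hits_end / n)   # both endpoints together also carry mass exp(-2*lam*t)/2
```

With λt = 1, both empirical frequencies come out near e^{−2}/2 ≈ 0.0677, in agreement with (2.13.1) and (2.13.2).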

Let ϕ(x, t), x ∈ R¹, t > 0, be the transition density of process S(t), treated as a generalized function. Since X1(t) and X2(t) are independent, at arbitrary time instant t > 0 the density of S(t) is given by the convolution

ϕ(x, t) = f(x, t) ∗ f(x, t) = ∫ f(z, t) f(x − z, t) dz,   x ∈ R¹,  t > 0,

where f(x, t) is the transition density of the telegraph processes X1(t) and X2(t) given by (2.5.1) or (2.5.16). However, it seems impracticable to compute this convolution explicitly due to the fairly complicated form of the density f(x, t), which contains modified Bessel functions. Instead, we develop another way of finding ϕ(x, t), based on the characteristic-functions approach and on the properties of special functions. The explicit form of the transition density of process S(t) is given by the following theorem.

Theorem 2.13.1. The transition density ϕ(x, t) of process S(t) has the form:

ϕ(x, t) = (e^{−2λt}/2) δ(x) + (e^{−2λt}/4) [δ(2ct + x) + δ(2ct − x)]
    + (e^{−2λt}/(2c)) [ λ I0((λ/c)√(4c²t² − x²)) + (1/4) (∂/∂t) I0((λ/c)√(4c²t² − x²))
    + (λ²/(2c)) ∫_{|x|}^{2ct} I0((λ/c)√(τ² − x²)) dτ ] Θ(2ct − |x|) 1_{x≠0},
        x ∈ R¹,  t ≥ 0,     (2.13.4)

where 1_{·} is the indicator function.

Remark 2.13.1. In (2.13.4), the term

ϕs(x, t) = (e^{−2λt}/2) δ(x) + (e^{−2λt}/4) [δ(2ct + x) + δ(2ct − x)]

represents the singular part of the density, concentrated at the points 0 and ±2ct. The second term of (2.13.4),

ϕac(x, t) = (e^{−2λt}/(2c)) [ λ I0((λ/c)√(4c²t² − x²)) + (1/4) (∂/∂t) I0((λ/c)√(4c²t² − x²))
    + (λ²/(2c)) ∫_{|x|}^{2ct} I0((λ/c)√(τ² − x²)) dτ ] Θ(2ct − |x|) 1_{x≠0},     (2.13.5)


represents the absolutely continuous part of the density concentrated in Mt . Proof. Since the processes X1 (t) and X2 (t) are independent, then the characteristic function of their sum S(t) is Ψ(ξ, t) = H 2 (ξ, t)  p  2 p λ −2λt 2 2 2 2 2 2 sinh t λ − c ξ 1{|ξ|≤ λ } =e cosh t λ − c ξ + p c λ2 − c2 ξ 2    p   2 p λ sin t c2 ξ 2 − λ2 1{|ξ|> λ } , + cos t c2 ξ 2 − λ2 + p c c2 ξ 2 − λ2 (2.13.6) where H(ξ, t) is the characteristic function of the telegraph process given by (2.4.2). Equality (2.13.6) can be represented as follows:   p p   2 2 −2λt 2 2 2 2 2 2 Ψ(ξ, t) = e cosh t λ − c ξ 1{|ξ|≤ λ } + cos t c ξ − λ 1{|ξ|> λ } c c    p  p  sinh 2t λ2 − c2 ξ 2  sin 2t c2 ξ 2 − λ2 p p +λ 1{|ξ|≤ λ } + 1{|ξ|> λ } c c λ2 − c2 ξ 2 c2 ξ 2 − λ2 p   p   sinh2 t λ2 − c2 ξ 2 sin2 t c2 ξ 2 − λ2 2 +λ 1{|ξ|≤ λ } + 1{|ξ|> λ } . c c λ 2 − c2 ξ 2 c2 ξ 2 − λ2 Therefore, the inverse Fourier transformation of this expression yields    p  p  ϕ(x, t) = e−2λt Fξ−1 cosh2 t λ2 − c2 ξ 2 1{|ξ|≤ λ } + cos2 t c2 ξ 2 − λ2 1{|ξ|> λ } (x) c c    p  p   sinh 2t λ2 − c2 ξ 2 sin 2t c2 ξ 2 − λ2 p p 1{|ξ|≤ λ } + 1{|ξ|> λ } (x) + λFξ−1 c c λ2 − c2 ξ 2 c2 ξ 2 − λ2 p p    sinh2 t λ2 − c2 ξ 2   sin2 t c2 ξ 2 − λ2 + λ2 Fξ−1 + 1 1 {|ξ|≤ λc } {|ξ|> λc } (x) . λ 2 − c2 ξ 2 c2 ξ 2 − λ2 (2.13.7) Our aim now is to explicitly compute inverse Fourier transforms on the right-hand side of (2.13.7). For the first term in curly brackets of (2.13.7) we have: h p  p  i Fξ−1 cosh2 t λ2 − c2 ξ 2 1{|ξ|≤ λ } + cos2 t c2 ξ 2 − λ2 1{|ξ|> λ } (x) c c  p  o n  p  o i 1 −1 hn cosh 2t λ2 − c2 ξ 2 + 1 1{|ξ|≤ λ } + cos 2t λ2 − c2 ξ 2 + 1 1{|ξ|> λ } (x) = Fξ c c 2  p   p  i 1 1 −1 h = δ(x) + Fξ cosh 2t λ2 − c2 ξ 2 1{|ξ|≤ λ } + cos 2t λ2 − c2 ξ 2 1{|ξ|> λ } (x) c c 2 2 (see formula (1.9.15))  p   1 1 1 ∂ λ 2 2 2 = δ(x) + δ(2ct − x) + δ(2ct + x) + I0 4c t − x Θ(2ct − |x|). 2 4 8c ∂t c (2.13.8) According to formula (1.9.14), for the second term in curly brackets of (2.13.7) we have:


F_ξ^{−1}[ (sinh(2t√(λ² − c²ξ²))/√(λ² − c²ξ²)) 1_{|ξ|≤λ/c} + (sin(2t√(c²ξ² − λ²))/√(c²ξ² − λ²)) 1_{|ξ|>λ/c} ](x)
    = (1/(2c)) I0((λ/c)√(4c²t² − x²)) Θ(2ct − |x|).     (2.13.9)

Finally, according to formula (1.9.17), we have for the third term of (2.13.7):

F_ξ^{−1}[ (sinh²(t√(λ² − c²ξ²))/(λ² − c²ξ²)) 1_{|ξ|≤λ/c} + (sin²(t√(c²ξ² − λ²))/(c²ξ² − λ²)) 1_{|ξ|>λ/c} ](x)
    = (1/(4c²)) { ∫_{|x|}^{2ct} I0((λ/c)√(τ² − x²)) dτ } Θ(2ct − |x|).     (2.13.10)

Substituting now (2.13.8), (2.13.9) and (2.13.10) into (2.13.7), we obtain (2.13.4).

To ensure that positive function (2.13.4) is the transition density of process S(t), one needs to show that its integral over the support [−2ct, 2ct] equals 1. Since, as is easy to see, for arbitrary t > 0,

∫_{−2ct}^{2ct} ϕs(x, t) dx = e^{−2λt},

then, according to (2.13.3), we need to verify that the absolutely continuous part ϕac(x, t) of function (2.13.4) satisfies the equality

∫_{−2ct}^{2ct} ϕac(x, t) dx = 1 − e^{−2λt},   t > 0.     (2.13.11)

We have

∫_{−2ct}^{2ct} ϕac(x, t) dx = (e^{−2λt}/(2c)) { λ ∫_{−2ct}^{2ct} I0((λ/c)√(4c²t² − x²)) dx
    + (1/4) ∫_{−2ct}^{2ct} (∂/∂t) I0((λ/c)√(4c²t² − x²)) dx
    + (λ²/(2c)) ∫_{−2ct}^{2ct} { ∫_{|x|}^{2ct} I0((λ/c)√(τ² − x²)) dτ } dx }.     (2.13.12)

According to (1.9.10), the first integral in (2.13.12) is:

∫_{−2ct}^{2ct} I0((λ/c)√(4c²t² − x²)) dx = (2c/λ) sinh(2λt).     (2.13.13)

Using (2.13.13), we have for the second integral in (2.13.12):

∫_{−2ct}^{2ct} (∂/∂t) I0((λ/c)√(4c²t² − x²)) dx = (∂/∂t) ∫_{−2ct}^{2ct} I0((λ/c)√(4c²t² − x²)) dx − 4c
    = (∂/∂t) [(2c/λ) sinh(2λt)] − 4c
    = 4c (cosh(2λt) − 1).     (2.13.14)

Finally, applying formula (1.9.9), we obtain for the third integral in (2.13.12):

∫_{−2ct}^{2ct} { ∫_{|x|}^{2ct} I0((λ/c)√(τ² − x²)) dτ } dx
    = ∫_{−2ct}^{2ct} { ∫_{0}^{2ct} I0((λ/c)√(τ² − x²)) Θ(τ − |x|) dτ } dx
    = ∫_{0}^{2ct} { ∫_{−2ct}^{2ct} I0((λ/c)√(τ² − x²)) Θ(τ − |x|) dx } dτ
    = ∫_{0}^{2ct} { ∫_{−τ}^{τ} I0((λ/c)√(τ² − x²)) dx } dτ
    = ∫_{0}^{2ct} (2c/λ) sinh(λτ/c) dτ
    = (2c²/λ²) (cosh(2λt) − 1).     (2.13.15)

Here the change of integration order is justified because the interior integral in curly brackets on the left-hand side of (2.13.15) converges uniformly in x ∈ (−2ct, 2ct). This fact can easily be proved by applying the mean value theorem and taking into account that I0(z) is a strictly positive, monotonically increasing continuous function.

Substituting now (2.13.13), (2.13.14) and (2.13.15) into (2.13.12), we obtain

∫_{−2ct}^{2ct} ϕac(x, t) dx = (e^{−2λt}/(2c)) [ λ (2c/λ) sinh(2λt) + (1/4) · 4c (cosh(2λt) − 1) + (λ²/(2c)) · (2c²/λ²) (cosh(2λt) − 1) ]
    = e^{−2λt} (e^{2λt} − 1)
    = 1 − e^{−2λt},

proving (2.13.11). Therefore, one can conclude that positive function (2.13.4) is indeed the transition density of process S(t). The theorem is thus completely proved.

The shape of the absolutely continuous part ϕac(x, t) of the density of S(t), given by (2.13.5), is presented in Fig. 2.3.

Remark 2.13.2. Taking into account that I0′(z) = I1(z), one can easily check that

(∂/∂t) I0((λ/c)√(4c²t² − x²)) = (4λct/√(4c²t² − x²)) I1((λ/c)√(4c²t² − x²)),     (2.13.16)

where I1(z) is the modified Bessel function of first order (see (2.5.15)). Therefore, density (2.13.4) can be rewritten in the following alternative form:

ϕ(x, t) = (e^{−2λt}/2) δ(x) + (e^{−2λt}/4) [δ(2ct + x) + δ(2ct − x)]
    + (λe^{−2λt}/(2c)) [ I0((λ/c)√(4c²t² − x²)) + (ct/√(4c²t² − x²)) I1((λ/c)√(4c²t² − x²))
    + (λ/(2c)) ∫_{|x|}^{2ct} I0((λ/c)√(τ² − x²)) dτ ] Θ(2ct − |x|) 1_{x≠0},
        x ∈ R¹,  t ≥ 0.     (2.13.17)
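Equality (2.13.11) can also be verified numerically from the alternative form (2.13.17). The following SciPy sketch (the parameter values λ = c = t = 1 are an arbitrary illustrative choice) integrates ϕac over the support:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0, i1

lam, c, t = 1.0, 1.0, 1.0   # illustrative parameter values

def phi_ac(x):
    """Absolutely continuous part of the density of S(t), form (2.13.17)."""
    root = np.sqrt(4.0 * c**2 * t**2 - x**2)
    a = lam / c * root
    # inner tau-integral of I0((lam/c) sqrt(tau^2 - x^2)) over (|x|, 2ct)
    tail, _ = quad(lambda tau: i0(lam / c * np.sqrt(tau**2 - x**2)),
                   abs(x), 2.0 * c * t)
    return (lam * np.exp(-2.0 * lam * t) / (2.0 * c)
            * (i0(a) + c * t / root * i1(a) + lam / (2.0 * c) * tail))

total, err = quad(phi_ac, -2.0 * c * t, 2.0 * c * t, limit=200)
print(total, 1.0 - np.exp(-2.0 * lam * t))  # both ≈ 0.8646647
```

Note that the apparent singularity of the I1-term at |x| = 2ct is removable, since I1(a)/root stays bounded as root → 0, so the quadrature behaves well over the whole support.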


Figure 2.3: The shape of density ϕac (x, t) at instant t = 3 (for c = 2, λ = 1)

2.13.2 Partial differential equation

Consider the function

g(x, t) = (∂/∂t + 2λ) ϕ(x, t).     (2.13.18)

Here ∂/∂t means differentiation in t of the generalized function ϕ(x, t). The unexpected and amazing fact is that this function satisfies the telegraph equation with the doubled parameters 2c and 2λ.

Theorem 2.13.2. Function g(x, t) defined by (2.13.18) satisfies the telegraph equation

( ∂²/∂t² + 4λ ∂/∂t − 4c² ∂²/∂x² ) g(x, t) = 0.     (2.13.19)

Proof. Introduce a new function w(x, t) by the equality w(x, t) = e^{2λt} g(x, t). Therefore, in order to prove the theorem, we should demonstrate that function w(x, t) satisfies the equation

( ∂²/∂t² − 4c² ∂²/∂x² − 4λ² ) w(x, t) = 0.     (2.13.20)

According to (2.13.18), we have

w(x, t) = e^{2λt} (∂/∂t + 2λ) ϕ(x, t) = (∂/∂t) ( e^{2λt} ϕ(x, t) ).     (2.13.21)

To avoid differentiation of generalized function w(x, t), instead we use the characteristic


functions approach. In view of (2.13.20), we should check that the characteristic function (Fourier transform) ŵ(ξ, t) satisfies the equation

∂²ŵ(ξ, t)/∂t² − 4(λ² − c²ξ²) ŵ(ξ, t) = 0.     (2.13.22)

According to (2.13.21) and (2.13.6), the characteristic function w(ξ, ˆ t) has the form  ∂ 2λt e Ψ(ξ, t) ∂t   2 p p λ ∂ 1{|ξ|≤ λ } = cosh t λ2 − c2 ξ 2 + p sinh t λ2 − c2 ξ 2 c ∂t λ2 − c2 ξ 2    2    p p λ sin t c2 ξ 2 − λ2 1{|ξ|> λ } + cos t c2 ξ 2 − λ2 + p c c2 ξ 2 − λ2 h  i    p p ∂ = cosh2 t λ2 − c2 ξ 2 1{|ξ|≤ λ } + cos2 t c2 ξ 2 − λ2 1{|ξ|> λ } c c ∂t  p  p    sinh 2t λ2 − c2 ξ 2  sin 2t c2 ξ 2 − λ2 p p 1{|ξ|≤ λ } + 1{|ξ|> λ } +λ c c λ 2 − c2 ξ 2 c2 ξ 2 − λ2   p p  sinh2 t λ2 − c2 ξ 2  sin2 t c2 ξ 2 − λ2 2 1{|ξ|≤ λ } + 1{|ξ|> λ } . +λ c c λ2 − c2 ξ 2 c2 ξ 2 − λ2

w(ξ, ˆ t) =

where Ψ(ξ, t) is the characteristic function of process S(t) given by (2.13.6). By evaluating this expression, after some simple computations we arrive at the formula p  p   p  λ2 − c2 ξ 2 sinh 2t λ2 − c2 ξ 2 + 2λ cosh 2t λ2 − c2 ξ 2 w(ξ, ˆ t) =  p  sinh 2t λ2 − c2 ξ 2  p 1{|ξ|≤ λ } + λ2 c λ2 − c2 ξ 2  p (2.13.23)  p   p  2 2 2 2 2 2 2 2 2 + − c ξ − λ sin 2t c ξ − λ + 2λ cos 2t c ξ − λ  p  sin 2t c2 ξ 2 − λ2  p 1{|ξ|> λ } . + λ2 c c2 ξ 2 − λ2 Thus, we should prove that function (2.13.23) satisfies equation (2.13.22). For the first term of (2.13.23) we have   p   p  ∂2 p 2 2 ξ 2 sinh 2t λ2 − c2 ξ 2 + 2λ cosh 2t λ2 − c2 ξ 2 λ − c ∂t2  p  sinh 2t λ2 − c2 ξ 2  p + λ2 1{|ξ|≤ λ } c λ2 − c2 ξ 2   p   p  = 4(λ2 − c2 ξ 2 )3/2 sinh 2t λ2 − c2 ξ 2 + 8λ(λ2 − c2 ξ 2 ) cosh 2t λ2 − c2 ξ 2  p  + 4λ2 (λ2 − c2 ξ 2 )1/2 sinh 2t λ2 − c2 ξ 2 1{|ξ|≤ λ } c


and therefore, for |ξ| ≤ λ/c, we obtain

∂²ŵ(ξ, t)/∂t² − 4(λ² − c²ξ²) ŵ(ξ, t)
    = [ 4(λ² − c²ξ²)^{3/2} sinh(2t√(λ² − c²ξ²)) + 8λ(λ² − c²ξ²) cosh(2t√(λ² − c²ξ²))
        + 4λ²(λ² − c²ξ²)^{1/2} sinh(2t√(λ² − c²ξ²)) ] 1_{|ξ|≤λ/c}
    − 4(λ² − c²ξ²) [ √(λ² − c²ξ²) sinh(2t√(λ² − c²ξ²)) + 2λ cosh(2t√(λ² − c²ξ²))
        + λ² sinh(2t√(λ² − c²ξ²))/√(λ² − c²ξ²) ] 1_{|ξ|≤λ/c}
    = 0,

proving (2.13.22). The proof for the second term of (2.13.23), for |ξ| > λ/c, is similar. The theorem is proved.
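Equation (2.13.22) can also be spot-checked numerically on the hyperbolic branch of (2.13.23), comparing a central second difference of ŵ in t with 4(λ² − c²ξ²)ŵ (a quick sketch; all numerical values are illustrative and chosen so that |ξ| < λ/c):

```python
import math

# illustrative values with |xi| < lam/c, so mu = lam^2 - c^2*xi^2 > 0
lam, c, xi, t, h = 1.5, 1.0, 0.7, 0.9, 1e-4
mu = lam**2 - c**2 * xi**2
r = math.sqrt(mu)

def w_hat(s):
    # hyperbolic branch of (2.13.23) at time s
    return (r * math.sinh(2 * s * r) + 2 * lam * math.cosh(2 * s * r)
            + lam**2 * math.sinh(2 * s * r) / r)

# central second difference approximates the t-derivative in (2.13.22)
second = (w_hat(t + h) - 2 * w_hat(t) + w_hat(t - h)) / h**2
print(second / (4 * mu * w_hat(t)))  # ≈ 1, i.e. w'' = 4(lam^2 - c^2 xi^2) w
```

The printed ratio equals 1 up to the O(h²) discretization error of the finite difference.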

Remark 2.13.3. From (2.13.18) and (2.13.19) it follows that the transition density ϕ(x, t) of process S(t) solves the third-order hyperbolic partial differential equation

( ∂/∂t + 2λ ) ( ∂²/∂t² + 4λ ∂/∂t − 4c² ∂²/∂x² ) ϕ(x, t) = 0.     (2.13.24)

Note that the differential operator in (2.13.24) is the product of the telegraph operator with the doubled parameters 2c, 2λ and the shifted time-differentiation operator. This interesting fact means that, while the densities of the two independent telegraph processes X1(t) and X2(t) satisfy the second-order telegraph equation (2.3.2), their convolution (that is, the density ϕ(x, t) of the sum S(t) = X1(t) + X2(t)) satisfies the third-order equation (2.13.24). By differentiating in t the characteristic function Ψ(ξ, t) given by (2.13.6), one can easily show that

Ψ(ξ, t)|_{t=0} = 1,   ∂Ψ(ξ, t)/∂t |_{t=0} = 0,   ∂²Ψ(ξ, t)/∂t² |_{t=0} = −2c²ξ²,

and, therefore, in contrast to (2.5.1), the transition density ϕ(x, t) of process S(t) is not the fundamental solution of equation (2.13.24). One can also check that, under Kac's scaling condition (2.7.1)

c → ∞,   λ → ∞,   c²/λ → ρ,   ρ > 0,

equation (2.13.24) transforms into the heat equation

( ∂/∂t − ρ ∂²/∂x² ) u(x, t) = 0.

This means that S(t) is asymptotically a homogeneous Wiener process with zero drift and diffusion coefficient σ² = 2ρ.
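The diffusion limit can be observed in simulation: taking λ large and c = √(ρλ), so that c²/λ = ρ, the sample variance of S(t) approaches 2ρt. A Monte Carlo sketch (the helper `telegraph` and all parameter values are illustrative):

```python
import math
import random

def telegraph(c, lam, t, rng):
    """One sample of the Goldstein-Kac telegraph position at time t."""
    d = rng.choice([-1, 1])
    x, s = 0.0, 0.0
    while True:
        gap = rng.expovariate(lam)
        if s + gap >= t:
            return x + d * c * (t - s)
        x += d * c * gap
        s += gap
        d = -d

# Kac scaling: lam large, c = sqrt(rho*lam), so c**2/lam = rho (illustrative values)
rho, lam, t, n = 1.0, 100.0, 1.0, 10_000
c = math.sqrt(rho * lam)
rng = random.Random(1)
samples = [telegraph(c, lam, t, rng) + telegraph(c, lam, t, rng) for _ in range(n)]
mean = sum(samples) / n
var = sum((v - mean) ** 2 for v in samples) / n
print(mean)   # close to 0 (zero drift)
print(var)    # close to sigma^2 * t = 2*rho*t = 2
```

The sample mean stays near 0 and the sample variance near 2ρt = 2, consistent with the limiting Wiener process with σ² = 2ρ.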

2.13.3 Probability distribution function

Now we concentrate our efforts on deriving a closed-form expression for the probability distribution function

Φ(x, t) = Pr{S(t) < x},   x ∈ R¹,  t > 0,

of the process S(t). This result is given by the following theorem.

Theorem 2.13.3. The probability distribution function Φ(x, t) has the form:

Φ(x, t) = 0,          if x ∈ (−∞, −2ct],
          G−(x, t),   if x ∈ (−2ct, 0],
          G+(x, t),   if x ∈ (0, 2ct],          t > 0,     (2.13.25)
          1,          if x ∈ (2ct, +∞),

where functions G± (x, t) are given by the formula: G± (x, t) ∞     λxe−2λt X (λt)2k λt 1 3 x2 + 1+ F −k, ; ; 2 2 2c (k!)2 2k + 2 2 2 4c t k=0   ∞ X (λt)2k+1 1 3 x2 1 1 F , ; −k + , ; . −k, −k − + 3 2 (k!)2 (2k + 1) 2 2 2 2 4c2 t2 k=0 (2.13.26) Here F (α, β; γ; z) is the Gauss hypergeometric function and 1 e−2λt cos = ± 2 4



λx c



₃F₂(α, β, γ; ξ, ζ; z) = Σ_{k=0}^{∞} [ (α)k (β)k (γ)k / ((ξ)k (ζ)k) ] z^k / k!     (2.13.27)

is the general hypergeometric function defined by (1.6.35). Proof. Formula (2.13.25) in the intervals x ∈ (−∞, −2ct] and x ∈ (2ct, +∞) is obvious. Therefore, it remains to prove (2.13.25) for x ∈ (−2ct, 2ct]. Since x = 0 is the singularity point, then for arbitrary x ∈ (−2ct, 2ct] we have Φ(x, t) = Pr {S(t) = −2ct} + Pr {S(t) = 0} Θ(x) + Pr {S(t) ∈ Rx } , where

Rx = (−2ct, x),           if x ∈ (−2ct, 0],
     (−2ct, x) − {0},     if x ∈ (0, 2ct],

and Θ(x) is the Heaviside unit-step function. Taking into account (2.13.1) and (2.13.2), we get

Φ(x, t) = e^{−2λt}/4 + (e^{−2λt}/2) Θ(x) + Pr{S(t) ∈ Rx}.     (2.13.28)

Thus, our aim is to evaluate the term Pr{S(t) ∈ Rx} for x ∈ (−2ct, 2ct]. Integrating the absolutely continuous part of density (2.13.17), we have for arbitrary x ∈ (−2ct, 2ct]:

Pr{S(t) ∈ Rx} = (λe^{−2λt}/(2c)) [ ∫_{−2ct}^{x} I0((λ/c)√(4c²t² − z²)) dz
    + ct ∫_{−2ct}^{x} I1((λ/c)√(4c²t² − z²)) / √(4c²t² − z²) dz
    + (λ/(2c)) ∫_{−2ct}^{x} { ∫_{|z|}^{2ct} I0((λ/c)√(τ² − z²)) dτ } dz ].     (2.13.29)


To evaluate the integrals on the right-hand side of (2.13.29), we need the following relations (see formulas (1.9.4) and (1.9.5)):

∫ I0(b√(a² − z²)) dz = z Σ_{k=0}^{∞} (1/(k!)²) (ab/2)^{2k} F(−k, 1/2; 3/2; z²/a²) + ψ1,     (2.13.30)

∫ I1(b√(a² − z²)) / √(a² − z²) dz = (z/a) Σ_{k=0}^{∞} (1/(k! (k+1)!)) (ab/2)^{2k+1} F(−k, 1/2; 3/2; z²/a²) + ψ2,

a > 0,

(2.13.31)

b ≥ 0,

where F (α, β; γ; z) is the Gauss hypergeometric function and ψ1 , ψ2 are arbitrary functions not depending on z. Applying formula (2.13.30) to the first integral in (2.13.29), we get Zx

 p  λ 2 2 2 I0 4c t − z dz c

−2ct

=x

    ∞ X 1 3 x2 1 3 (λt)2k F −k, ; ; 2 2 + 2ct F −k, ; ; 1 . (k!)2 2 2 4c t (k!)2 2 2

∞ X (λt)2k k=0

k=0

In view of the formula

F(−k, 1/2; 3/2; 1) = (2k)!! / (2k + 1)!! = 2^k k! / (2k + 1)!!,   k ≥ 0,     (2.13.32)

the second term is found to be

2ct Σ_{k=0}^{∞} ((λt)^{2k}/(k!)²) F(−k, 1/2; 3/2; 1) = (c/λ) sinh(2λt),

and we obtain for arbitrary x ∈ (−2ct, 2ct]:

∫_{−2ct}^{x} I0((λ/c)√(4c²t² − z²)) dz = x Σ_{k=0}^{∞} ((λt)^{2k}/(k!)²) F(−k, 1/2; 3/2; x²/(4c²t²)) + (c/λ) sinh(2λt).     (2.13.33)

According to (2.13.31), the second integral in (2.13.29) is

∫_{−2ct}^{x} I1((λ/c)√(4c²t² − z²)) / √(4c²t² − z²) dz
    = (x/(2ct)) Σ_{k=0}^{∞} ((λt)^{2k+1}/(k! (k+1)!)) F(−k, 1/2; 3/2; x²/(4c²t²))
    + Σ_{k=0}^{∞} ((λt)^{2k+1}/(k! (k+1)!)) F(−k, 1/2; 3/2; 1).

Applying (2.13.32), one can easily show that the second term is

Σ_{k=0}^{∞} ((λt)^{2k+1}/(k! (k+1)!)) F(−k, 1/2; 3/2; 1) = (1/(2λt)) (cosh(2λt) − 1),

and, therefore, we obtain for arbitrary x ∈ (−2ct, 2ct]:

∫_{−2ct}^{x} I1((λ/c)√(4c²t² − z²)) / √(4c²t² − z²) dz
    = (x/(2ct)) Σ_{k=0}^{∞} ((λt)^{2k+1}/(k! (k+1)!)) F(−k, 1/2; 3/2; x²/(4c²t²)) + (1/(2λt)) (cosh(2λt) − 1).     (2.13.34)
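The two closed-form sums used in (2.13.33) and (2.13.34) are easy to confirm numerically. The sketch below evaluates the truncated series, computing F(−k, 1/2; 3/2; 1) from the double-factorial formula (2.13.32); the value λt = 0.8 is an arbitrary illustrative choice:

```python
import math

def F_at_1(k):
    """F(-k, 1/2; 3/2; 1) = (2k)!! / (2k+1)!!, formula (2.13.32)."""
    num = den = 1.0
    for j in range(1, k + 1):
        num *= 2 * j          # (2k)!!
    for j in range(0, k + 1):
        den *= 2 * j + 1      # (2k+1)!!
    return num / den

lt = 0.8  # lambda*t, illustrative
s1 = sum(lt**(2 * k) / math.factorial(k)**2 * F_at_1(k) for k in range(25))
s2 = sum(lt**(2 * k + 1) / (math.factorial(k) * math.factorial(k + 1)) * F_at_1(k)
         for k in range(25))
print(s1, math.sinh(2 * lt) / (2 * lt))          # sum appearing in (2.13.33)
print(s2, (math.cosh(2 * lt) - 1) / (2 * lt))    # sum appearing in (2.13.34)
```

Thanks to the 1/(k!)² and 1/(k!(k+1)!) factors, 25 terms already reproduce the hyperbolic closed forms far beyond double precision needs.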


For the third (double) integral in (2.13.29) we have for arbitrary x ∈ (−2ct, 2ct]:   Z x Z 2ct  p λ τ 2 − z 2 dτ dz I0 c −2ct |z|   Z x Z 2ct  p λ 2 2 τ − z 1{τ >|z|} dτ dz I0 = c 0 −2ct (2.13.35)  p   Z 2ct Z x λ I0 = τ 2 − z 2 1{|z| 0.

The density (in the sense of generalized functions) of the singular part of the distribution of S̃(t) has the form

ϕ̃s(x, t) = (e^{−(λ1+λ2)t}/4) [ δ(x − ((x01 + x02) + (c1 + c2)t)) + δ(x − ((x01 + x02) − (c1 + c2)t)) ],

where δ(x) is the Dirac delta-function. The absolutely continuous part of the distribution of S̃(t) is concentrated in the open interval ((x01 + x02) − (c1 + c2)t, (x01 + x02) + (c1 + c2)t) and

Pr{S̃(t) ∈ ((x01 + x02) − (c1 + c2)t, (x01 + x02) + (c1 + c2)t)} = 1 − (1/2) e^{−(λ1+λ2)t},   t > 0.

If c1 = c2 = c, then the closed interval [(x01 + x02) − 2ct, (x01 + x02) + 2ct] is the support of the distribution of S̃(t). The singular part of the distribution is concentrated at the three points x01 + x02, (x01 + x02) ± 2ct of this interval and

Pr{S̃(t) = (x01 + x02) − 2ct} = Pr{S̃(t) = (x01 + x02) + 2ct} = (1/4) e^{−(λ1+λ2)t},   t > 0,


Pr{S̃(t) = x01 + x02} = (1/2) e^{−(λ1+λ2)t},   t > 0.

The density (in the sense of generalized functions) of the singular part of the distribution of S̃(t) in this case has the form

ϕ̃s(x, t) = (e^{−(λ1+λ2)t}/2) δ(x − (x01 + x02))
    + (e^{−(λ1+λ2)t}/4) [ δ(x − ((x01 + x02) + 2ct)) + δ(x − ((x01 + x02) − 2ct)) ].

The absolutely continuous part of the distribution of S̃(t) is concentrated in the set M̃t = ((x01 + x02) − 2ct, x01 + x02) ∪ (x01 + x02, (x01 + x02) + 2ct) and

Pr{S̃(t) ∈ M̃t} = 1 − e^{−(λ1+λ2)t},   t > 0.

The characteristic function of process S̃(t) is given by

Ψ̃(ξ, t) = e^{iξ(x01 + x02)} H1(ξ, t) H2(ξ, t),   ξ ∈ R¹,  t ≥ 0.

If the starting points x01, x02 are symmetric with respect to the origin x = 0, then x01 + x02 = 0 and in this case Ψ̃(ξ, t) is a real-valued function; otherwise it is a complex function. Clearly, Ψ̃(ξ, t) has a much more complicated form (in comparison with the characteristic function Ψ(ξ, t) given by (2.13.6)) that substantially depends on the numbers λ1/c1 and λ2/c2.

To obtain the distribution of process S̃(t) one needs to calculate the inverse Fourier transform of the characteristic function Ψ̃(ξ, t); however, this is a very difficult problem that, apparently, can only be solved numerically.

2.14 Linear combinations of telegraph processes

In this section we study linear combinations of independent Goldstein-Kac telegraph processes. The structure of the distribution and the singularity points are described. The governing high-order hyperbolic partial differential equation for the transition density of the process is given in a determinant form. The particular case of two telegraph processes is examined in detail.

2.14.1 Structure of distribution and system of equations

Let X1^{x01}(t), . . . , Xn^{x0n}(t), n ≥ 2, t ≥ 0, be independent Goldstein-Kac telegraph processes on the real line R¹ that, at the initial time instant t = 0, simultaneously start from the initial points x01, . . . , x0n ∈ R¹, respectively. For the sake of simplicity, we omit hereafter the upper indices, identifying Xk(t) ≡ Xk^{x0k}(t), k = 1, . . . , n, bearing in mind, however, the fact that the process Xk(t) starts from the initial point x0k. Each process Xk(t), k = 1, . . . , n, has some constant finite speed ck > 0 and is controlled by a homogeneous Poisson process Nk(t) of rate λk > 0, as described above. All these Poisson processes Nk(t), k = 1, . . . , n, are supposed to be independent as well.

Consider the linear combination of the processes X1(t), . . . , Xn(t), n ≥ 2, defined by the equality

L(t) = Σ_{k=1}^{n} ak Xk(t),   ak ∈ R¹,  ak ≠ 0,  k = 1, . . . , n,  t ≥ 0,     (2.14.1)

where ak, k = 1, . . . , n, are arbitrary non-zero real constant coefficients.


To describe the structure of the distribution of L(t), consider the following partition of the set {1, 2, . . . , n} of indices:

I+ = {i1, . . . , ik} such that ais > 0 for all is ∈ I+, 1 ≤ s ≤ k,
I− = {i1, . . . , im} such that ail < 0 for all il ∈ I−, 1 ≤ l ≤ m,   k + m = n.

The support of the distribution Φ(x, t) = Pr{L(t) < x} of the process L(t) is the closed interval depending on the coefficients ak, the speeds ck and the starting points x0k, having the form:

supp L(t) = [ Σ_{k=1}^{n} ak x0k − t ( Σ_{is∈I+} ais cis − Σ_{il∈I−} ail cil ),
              Σ_{k=1}^{n} ak x0k + t ( Σ_{is∈I+} ais cis − Σ_{il∈I−} ail cil ) ].     (2.14.2)

Taking into account that

− Σ_{il∈I−} ail cil = Σ_{il∈I−} |ail| cil,

support (2.14.2) can be represented as follows:

supp L(t) = [ Σ_{k=1}^{n} ak x0k − t Σ_{k=1}^{n} |ak| ck,  Σ_{k=1}^{n} ak x0k + t Σ_{k=1}^{n} |ak| ck ].     (2.14.3)

In particular, if all ak > 0, k = 1, . . . , n, then support (2.14.3) takes the form

supp L(t) = [ Σ_{k=1}^{n} ak (x0k − ck t),  Σ_{k=1}^{n} ak (x0k + ck t) ].     (2.14.4)

At arbitrary time t > 0, the distribution Φ(x, t) contains singular and absolutely continuous components. The singular part of the distribution corresponds to the case when no Poisson events (of any Poisson process Nk(t), k = 1, . . . , n) occur by time t. It is concentrated in the finite point set Ms = {q1, . . . , q_{2^n}} ⊂ supp L(t) that contains 2^n singularity points (each qj is counted according to its multiplicity):

qj = Σ_{k=1}^{n} ak x0k + t Σ_{k=1}^{n} ak i^j_k ck,   j = 1, . . . , 2^n,     (2.14.5)

where i^j_k = ±1, k = 1, . . . , n, are the elements of the ordered sequence σ^j = {i^j_1, . . . , i^j_n}, j = 1, . . . , 2^n, of length n. The sign of each i^j_k (that is, either +1 or −1) is determined by the initial direction (either positive or negative, respectively) taken by the telegraph process Xk(t). We emphasize that some qj may coincide, depending on the particular values of the starting points x0k, the coefficients ak and the speeds ck. Note that both terminal points of support (2.14.3) are singular and, therefore, belong to Ms, that is,

Σ_{k=1}^{n} ak x0k ± t Σ_{k=1}^{n} |ak| ck ∈ Ms.

The other singular points are interior points of support (2.14.3). It is easy to see that the probability of being at an arbitrary singularity point qj at time t is

Pr{L(t) = qj} = e^{−λt}/2^n,   j = 1, . . . , 2^n,     (2.14.6)


where

λ = Σ_{k=1}^{n} λk.     (2.14.7)

From (2.14.6) it obviously follows that, for arbitrary t > 0,

Pr{L(t) ∈ Ms} = e^{−λt}.     (2.14.8)

If at least one Poisson event occurs by time t, then the process L(t) is located in the set Mac = supp L(t) − Ms, which is the support of the absolutely continuous part of the distribution, and the probability of being in this set at time t > 0 is:

Pr{L(t) ∈ Mac} = 1 − e^{−λt}.     (2.14.9)
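The point set Ms of (2.14.5) and the masses (2.14.6)–(2.14.8) are easy to enumerate programmatically over the 2^n sign sequences σ^j. In the sketch below all numerical data (coefficients, speeds, rates, starting points) are purely illustrative:

```python
import math
from itertools import product

# Illustrative data for n = 3 telegraph processes
a   = [1.0, -2.0, 0.5]     # coefficients a_k (non-zero)
c   = [2.0, 1.0, 3.0]      # speeds c_k
x0  = [0.0, 1.0, -1.0]     # starting points x0_k
lam = [2.0, 1.0, 0.5]      # Poisson rates lambda_k
t = 1.0

base = sum(ak * xk for ak, xk in zip(a, x0))

# Singularity points (2.14.5): one point per sign sequence sigma^j in {-1, +1}^n
qs = [base + t * sum(ak * ik * ck for ak, ik, ck in zip(a, sgn, c))
      for sgn in product([-1, 1], repeat=3)]

# Both terminal points of the support (2.14.3) belong to Ms
half_width = t * sum(abs(ak) * ck for ak, ck in zip(a, c))
assert min(qs) == base - half_width and max(qs) == base + half_width

# Each singular point carries mass e^{-lambda*t}/2^n with lambda = sum of lambda_k,
# so the total singular mass is e^{-lambda*t}, in line with (2.14.6)-(2.14.8)
total_rate = sum(lam)
mass = len(qs) * math.exp(-total_rate * t) / 2 ** 3
print(sorted(qs))
print(mass, math.exp(-total_rate * t))
```

Note that, exactly as the text says, some of the 2^n points qj may coincide for special parameter values; the enumeration above keeps them with multiplicity.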

Define now the two-state direction processes D1(t), . . . , Dn(t), n ≥ 2, t > 0, where Dk(t), k = 1, . . . , n, denotes the direction of the telegraph process Xk(t) at time t > 0. This means that Dk(t) = +1 if at instant t the process Xk(t) has positive direction, and Dk(t) = −1 otherwise. Let x ∈ Mac be an arbitrary point and let dx > 0 be some increment such that the interval (x, x + dx) does not contain any singular points. For such x and dx, introduce the joint probability densities of the process L(t) and of the set of directions {D1(t), . . . , Dn(t)} at arbitrary time t > 0 by the relation

fσ(x, t) dx ≡ f{i1,...,in}(x, t) dx = Pr{x < L(t) < x + dx, D1(t) = i1, . . . , Dn(t) = in}.     (2.14.10)

The set of functions (2.14.10) contains 2^n densities indexed by all ordered sequences of the form σ = {i1, . . . , in} of length n whose elements ik, k = 1, . . . , n, are either +1 or −1.

Theorem 2.14.1. The joint probability densities (2.14.10) satisfy the following hyperbolic system of 2^n first-order partial differential equations with constant coefficients:

∂fσ(x, t)/∂t = −cσ ∂fσ(x, t)/∂x − λ fσ(x, t) + Σ_{k=1}^{n} λk f_{σ̄^(k)}(x, t),   x ∈ Mac,  t > 0,     (2.14.11)

where

σ = {i1, . . . , ik−1, ik, ik+1, . . . , in},   σ̄^(k) = {i1, . . . , ik−1, −ik, ik+1, . . . , in},
cσ ≡ c{i1,...,in} = Σ_{k=1}^{n} ak ik ck,     (2.14.12)

and $\lambda$ is given by (2.14.7).

Proof. Let $\Delta t>0$ be some time increment. Let $N_k(t,t+\Delta t)$, $k=1,\dots,n$, denote the number of events of the $k$-th Poisson process $N_k(t)$ that occur in the time interval $(t,t+\Delta t)$. Then, according to the total probability formula, we have:
$$\begin{aligned}
&\Pr\{L(t+\Delta t)<x,\; D_1(t+\Delta t)=i_1,\dots,D_n(t+\Delta t)=i_n\}\\
&\quad=\prod_{k=1}^{n}(1-\lambda_k\Delta t)\,\Pr\Bigl\{L(t)+\Delta t\sum_{k=1}^{n}a_ki_kc_k<x,\; D_1(t)=i_1,\dots,D_n(t)=i_n\Bigr\}\\
&\qquad+\sum_{k=1}^{n}\lambda_k\Delta t\prod_{\substack{j=1\\ j\ne k}}^{n}(1-\lambda_j\Delta t)\,\frac{1}{\Delta t}\int_{t}^{t+\Delta t}\Pr\bigl\{L(t)+a_kc_k\bigl(-i_k(\tau_k-t)+i_k(t+\Delta t-\tau_k)\bigr)<x,\\
&\qquad\qquad D_1(t)=i_1,\dots,D_{k-1}(t)=i_{k-1},\; D_k(t)=-i_k,\; D_{k+1}(t)=i_{k+1},\dots,D_n(t)=i_n\bigr\}\,d\tau_k+o(\Delta t)\\
&\quad=\prod_{k=1}^{n}(1-\lambda_k\Delta t)\,\Pr\{L(t)<x-c_\sigma\Delta t,\; D_1(t)=i_1,\dots,D_n(t)=i_n\}\\
&\qquad+\sum_{k=1}^{n}\lambda_k\prod_{\substack{j=1\\ j\ne k}}^{n}(1-\lambda_j\Delta t)\int_{t}^{t+\Delta t}\Pr\bigl\{L(t)<x-a_ki_kc_k(2(t-\tau_k)+\Delta t),\\
&\qquad\qquad D_1(t)=i_1,\dots,D_{k-1}(t)=i_{k-1},\; D_k(t)=-i_k,\; D_{k+1}(t)=i_{k+1},\dots,D_n(t)=i_n\bigr\}\,d\tau_k+o(\Delta t).
\end{aligned}$$
The first term on the right-hand side of this expression is related to the case when no Poisson event occurs in the time interval $(t,t+\Delta t)$, that is, when $\sum_{k=1}^{n}N_k(t,t+\Delta t)=0$. The second (integral) term concerns the case when exactly one Poisson event occurs in this interval, that is, when $\sum_{k=1}^{n}N_k(t,t+\Delta t)=1$. Finally, the term $o(\Delta t)$ is related to the case when more than one Poisson event occurs in the interval $(t,t+\Delta t)$, that is, when $\sum_{k=1}^{n}N_k(t,t+\Delta t)\ge 2$ (one can easily check that all such probabilities are of order $o(\Delta t)$).

Since the probability is a continuous function, then, according to the mean-value theorem, for any $k$ there exists some $\tau_k^*\in(t,t+\Delta t)$ such that
$$\begin{aligned}
&\Pr\{L(t+\Delta t)<x,\; D_1(t+\Delta t)=i_1,\dots,D_n(t+\Delta t)=i_n\}\\
&\quad=\prod_{k=1}^{n}(1-\lambda_k\Delta t)\,\Pr\{L(t)<x-c_\sigma\Delta t,\; D_1(t)=i_1,\dots,D_n(t)=i_n\}\\
&\qquad+\Delta t\sum_{k=1}^{n}\lambda_k\prod_{\substack{j=1\\ j\ne k}}^{n}(1-\lambda_j\Delta t)\,\Pr\bigl\{L(t)<x-a_ki_kc_k(2(t-\tau_k^*)+\Delta t),\\
&\qquad\qquad D_1(t)=i_1,\dots,D_{k-1}(t)=i_{k-1},\; D_k(t)=-i_k,\; D_{k+1}(t)=i_{k+1},\dots,D_n(t)=i_n\bigr\}+o(\Delta t).
\end{aligned}$$
In view of the asymptotic formulas
$$\prod_{j=1}^{n}(1-\lambda_j\Delta t)=1-\lambda\Delta t+o(\Delta t), \qquad \Delta t\prod_{\substack{j=1\\ j\ne k}}^{n}(1-\lambda_j\Delta t)=\Delta t+o(\Delta t),$$
the latter relation can be rewritten as follows:
$$\begin{aligned}
&\Pr\{L(t+\Delta t)<x,\; D_1(t+\Delta t)=i_1,\dots,D_n(t+\Delta t)=i_n\}\\
&\quad=\Pr\{L(t)<x-c_\sigma\Delta t,\; D_1(t)=i_1,\dots,D_n(t)=i_n\}\\
&\qquad-\lambda\Delta t\,\Pr\{L(t)<x-c_\sigma\Delta t,\; D_1(t)=i_1,\dots,D_n(t)=i_n\}\\
&\qquad+\Delta t\sum_{k=1}^{n}\lambda_k\Pr\bigl\{L(t)<x-a_ki_kc_k(2(t-\tau_k^*)+\Delta t),\\
&\qquad\qquad D_1(t)=i_1,\dots,D_{k-1}(t)=i_{k-1},\; D_k(t)=-i_k,\; D_{k+1}(t)=i_{k+1},\dots,D_n(t)=i_n\bigr\}+o(\Delta t).
\end{aligned}$$
In terms of the densities (2.14.10), this equality can be represented in the form
$$\int_{-\infty}^{x}f_\sigma(\xi,t+\Delta t)\,d\xi=\int_{-\infty}^{x-c_\sigma\Delta t}f_\sigma(\xi,t)\,d\xi-\lambda\Delta t\int_{-\infty}^{x-c_\sigma\Delta t}f_\sigma(\xi,t)\,d\xi+\Delta t\sum_{k=1}^{n}\lambda_k\int_{-\infty}^{x-a_ki_kc_k(2(t-\tau_k^*)+\Delta t)}f_{\bar\sigma^{(k)}}(\xi,t)\,d\xi+o(\Delta t),$$
which can be rewritten as follows:
$$\int_{-\infty}^{x}\bigl(f_\sigma(\xi,t+\Delta t)-f_\sigma(\xi,t)\bigr)\,d\xi=-\Biggl(\int_{-\infty}^{x}f_\sigma(\xi,t)\,d\xi-\int_{-\infty}^{x-c_\sigma\Delta t}f_\sigma(\xi,t)\,d\xi\Biggr)-\lambda\Delta t\int_{-\infty}^{x-c_\sigma\Delta t}f_\sigma(\xi,t)\,d\xi+\Delta t\sum_{k=1}^{n}\lambda_k\int_{-\infty}^{x-a_ki_kc_k(2(t-\tau_k^*)+\Delta t)}f_{\bar\sigma^{(k)}}(\xi,t)\,d\xi+o(\Delta t).$$
Dividing this equality by $\Delta t$, we can represent it in the form
$$\int_{-\infty}^{x}\frac{f_\sigma(\xi,t+\Delta t)-f_\sigma(\xi,t)}{\Delta t}\,d\xi=-c_\sigma\,\frac{1}{c_\sigma\Delta t}\int_{x-c_\sigma\Delta t}^{x}f_\sigma(\xi,t)\,d\xi-\lambda\int_{-\infty}^{x-c_\sigma\Delta t}f_\sigma(\xi,t)\,d\xi+\sum_{k=1}^{n}\lambda_k\int_{-\infty}^{x-a_ki_kc_k(2(t-\tau_k^*)+\Delta t)}f_{\bar\sigma^{(k)}}(\xi,t)\,d\xi+\frac{o(\Delta t)}{\Delta t}.$$
Passing now to the limit as $\Delta t\to 0$, and taking into account that $\tau_k^*\to t$ in this case, we obtain
$$\int_{-\infty}^{x}\frac{\partial f_\sigma(\xi,t)}{\partial t}\,d\xi=-c_\sigma f_\sigma(x,t)-\lambda\int_{-\infty}^{x}f_\sigma(\xi,t)\,d\xi+\sum_{k=1}^{n}\lambda_k\int_{-\infty}^{x}f_{\bar\sigma^{(k)}}(\xi,t)\,d\xi.$$
Differentiating this equality in $x$, we finally arrive at (2.14.11). Since the principal part of system (2.14.11) is strictly hyperbolic, system (2.14.11) itself is hyperbolic. The theorem is thus completely proved.

Note that system (2.14.11) is the backward Kolmogorov equation written for the joint densities of the process $L(t)$.

Remark 2.14.1. System (2.14.11) consists of $2^n$ first-order partial differential equations; however, the equation for each density $f_\sigma(x,t)$ contains (besides this function itself) only $n$ other densities $f_{\bar\sigma^{(k)}}(x,t)$, $k=1,\dots,n$. This means that each density $f_\sigma(x,t)$ indexed by some ordered sequence $\sigma=\{i_1,\dots,i_n\}$ is expressed in terms of the $n$ densities $f_{\bar\sigma^{(k)}}(x,t)$, $k=1,\dots,n$, whose indices $\bar\sigma^{(k)}=\{i_1,\dots,i_{k-1},-i_k,i_{k+1},\dots,i_n\}$ differ from the index $\sigma=\{i_1,\dots,i_{k-1},i_k,i_{k+1},\dots,i_n\}$ in a single element only. In other words, the equation for an arbitrary density $f_\sigma(x,t)$ in (2.14.11) with index $\sigma=\{i_1,\dots,i_n\}$ links it only with those densities whose indices lie at distance 1 from $\sigma$ in the Hamming metric.

2.14.2 Governing equation

Let $\Xi_n=\{\sigma_1,\dots,\sigma_{2^n}\}$ denote the ordered set consisting of $2^n$ sequences, each of length $n$ and having the form $\sigma_k=\{i_1^{(k)},i_2^{(k)},\dots,i_n^{(k)}\}$, $i_j^{(k)}=\pm 1$, $j=1,\dots,n$, $k=1,\dots,2^n$, $n\ge 2$. The order in $\Xi_n$ may be arbitrary, but fixed. For our purposes it is convenient to choose and fix the lexicographical order of the sequences in the set $\Xi_n$, that is, the order
$$\begin{aligned}
\sigma_1&=\{-1,-1,\dots,-1,-1\},\\
\sigma_2&=\{-1,-1,\dots,-1,+1\},\\
\sigma_3&=\{-1,-1,\dots,+1,-1\},\\
\sigma_4&=\{-1,-1,\dots,+1,+1\},\\
&\;\;\vdots\\
\sigma_{2^n}&=\{+1,+1,\dots,+1,+1\}.
\end{aligned}$$
Note that this lexicographical order is isomorphic to the binary one under the identification $-1\mapsto 0$ and $+1\mapsto 1$. Under this natural isomorphism, the $(-1,+1)$-sequence $\sigma_k$ corresponds to the $(0,1)$-sequence $\sigma'_k$ yielding the binary representation of the number $k-1$.

Let $\rho(\cdot,\cdot):\Xi_n\times\Xi_n\to\{0,1,\dots,n\}$ be the Hamming metric. For an arbitrary element $\sigma_k\in\Xi_n$, $k=1,\dots,2^n$, define the subset $M_k\subset\Xi_n$ by the formula
$$M_k=\{\sigma_s\in\Xi_n:\rho(\sigma_k,\sigma_s)=1\}, \qquad k=1,\dots,2^n.$$
Identifying the notations $f_k(x,t)\equiv f_{\sigma_k}(x,t)$, $c_k\equiv c_{\sigma_k}$, $k=1,\dots,2^n$, system (2.14.11) can be represented in the following ordered form:
$$\frac{\partial f_k(x,t)}{\partial t}=-c_k\frac{\partial f_k(x,t)}{\partial x}-\lambda f_k(x,t)+\sum_{\{m:\,\sigma_m\in M_k\}}\lambda_m f_m(x,t), \qquad k=1,\dots,2^n, \qquad (2.14.13)$$
where, for $\sigma_m\in M_k$, the coefficient $\lambda_m$ is understood as $\lambda_j$ with $j$ the unique position in which $\sigma_m$ differs from $\sigma_k$.

The main subject of our interest is the sum of the functions (2.14.10)

$$p(x,t)=\sum_{k=1}^{2^n} f_k(x,t), \qquad (2.14.14)$$

which is the transition probability density of the process $L(t)$ defined by (2.14.1).

Introduce the column-vector of dimension $2^n$
$$f=f(x,t)=\bigl(f_1(x,t),f_2(x,t),\dots,f_{2^n}(x,t)\bigr)^{T}$$
and the diagonal $(2^n\times 2^n)$-matrix differential operator
$$D_n=\operatorname{diag}\{A_k,\; k=1,\dots,2^n\}, \qquad (2.14.15)$$
where $A_k$, $k=1,\dots,2^n$, are the differential operators
$$A_k=\frac{\partial}{\partial t}+c_k\frac{\partial}{\partial x}, \qquad k=1,\dots,2^n.$$
Define the scalar $(2^n\times 2^n)$-matrix $\Lambda_n=\|\xi_{sm}\|$, $s,m=1,\dots,2^n$, with the elements
$$\xi_{sm}=\begin{cases}\lambda, & \text{if } s=m,\\[2pt] -\lambda_k, & \text{if } \rho(\sigma_s,\sigma_m)=1 \text{ and } i_k^{(s)}\ne i_k^{(m)},\\[2pt] 0, & \text{otherwise},\end{cases} \qquad s,m=1,\dots,2^n. \qquad (2.14.16)$$
In other words, the matrix $\Lambda_n$ has the following structure. All diagonal elements are equal to $\lambda$. At the intersection of the $s$-th row and the $m$-th column (corresponding to sequences $\sigma_s=\{i_1^{(s)},i_2^{(s)},\dots,i_n^{(s)}\}$ and $\sigma_m=\{i_1^{(m)},i_2^{(m)},\dots,i_n^{(m)}\}$ at Hamming distance 1 from each other), the element $-\lambda_k$ is located, where $k$ is the position number of the non-coinciding elements of the sequences $\sigma_s$ and $\sigma_m$. Note that, since the Hamming distance between these sequences is 1, such a position number $k$ is unique. All other elements of the matrix are zeros. From this definition it follows that each row or column of the matrix $\Lambda_n$ contains $n+1$ non-zero elements and $2^n-(n+1)$ zeros. The sum of all the elements of every row or column of $\Lambda_n$ is zero, in view of the definition of $\lambda$ given by (2.14.7). From definition (2.14.16), and due to the lexicographical order of the sequences $\sigma_k$, $k=1,\dots,2^n$, in the set $\Xi_n$, the scalar matrix $\Lambda_n$ has a block structure, and this fact plays an important role in our analysis (see Remarks 2.14.2 and 2.14.3).

In the notations (2.14.15) and (2.14.16), system (2.14.13) can be represented in the matrix form
$$[D_n+\Lambda_n]\,f=0, \qquad (2.14.17)$$
where $0=(0,0,\dots,0)^{T}$ is the zero column-vector of dimension $2^n$.

Theorem 2.14.2. The transition probability density $p(x,t)$ of the process $L(t)$ given by (2.14.14) satisfies the following hyperbolic partial differential equation of order $2^n$ with constant coefficients:
$$\{\operatorname{Det}[D_n+\Lambda_n]\}\,p(x,t)=0, \qquad (2.14.18)$$
where $\operatorname{Det}[D_n+\Lambda_n]$ is the determinant of the matrix differential operator $[D_n+\Lambda_n]$.

Proof. Since the differential operators $A_k$, $k=1,\dots,2^n$, commute with each other, applying the Determinant Theorem we obtain the statement of the theorem. The hyperbolicity of equation (2.14.18) follows from the hyperbolicity of system (2.14.11) or (2.14.17).

Remark 2.14.2. Derivation of a general analytical formula for the determinant $\operatorname{Det}[D_n+\Lambda_n]$ is a fairly difficult algebraic problem. Nevertheless, this problem can be simplified considerably if we note that, from the block structure of the matrix $\Lambda_n$ noted above and the diagonal form of the matrix $D_n$ defined by (2.14.15), their sum $D_n+\Lambda_n$ has the block structure
$$D_n+\Lambda_n=\begin{pmatrix} D_{n-1}^{(1)}+\Lambda_{n-1} & E_{n-1}\\[2pt] E_{n-1} & D_{n-1}^{(2)}+\Lambda_{n-1}\end{pmatrix}, \qquad (2.14.19)$$
where the blocks in (2.14.19) are composed of the following $(2^{n-1}\times 2^{n-1})$-matrices:
$$D_{n-1}^{(1)}=\operatorname{diag}\{A_k,\; k=1,\dots,2^{n-1}\}, \qquad D_{n-1}^{(2)}=\operatorname{diag}\{A_k,\; k=2^{n-1}+1,\dots,2^n\}, \qquad (2.14.20)$$
the $(2^{n-1}\times 2^{n-1})$-matrix $\Lambda_{n-1}$ is defined similarly to (2.14.16) (but taking into account its dimension), and $E_{n-1}=-\lambda_1\mathbb{E}_{n-1}$, where $\mathbb{E}_{n-1}$ is the unit matrix of dimension $(2^{n-1}\times 2^{n-1})$. Since the matrix $E_{n-1}$ commutes with $[D_{n-1}^{(2)}+\Lambda_{n-1}]$ (and, of course, with $[D_{n-1}^{(1)}+\Lambda_{n-1}]$), applying the well-known Schur formula for the determinants of even-order block matrices to (2.14.19), we obtain:
$$\operatorname{Det}[D_n+\Lambda_n]=\operatorname{Det}\bigl[(D_{n-1}^{(1)}+\Lambda_{n-1})(D_{n-1}^{(2)}+\Lambda_{n-1})-\lambda_1^2\mathbb{E}_{n-1}\bigr]. \qquad (2.14.21)$$
Formula (2.14.21) reduces the computation of a determinant of dimension $(2^n\times 2^n)$ to the computation of a determinant of dimension $(2^{n-1}\times 2^{n-1})$. Note also that, in view of definition (2.14.16), for arbitrary $n\ge 2$ the relation $D_n+\Lambda_n=[D_n+\Lambda_n]^{T}$ holds; in other words, the matrix $D_n+\Lambda_n$ coincides with its transpose. Clearly, this approach can be applied recursively to the determinants of lower dimensions, which could enable us to obtain an explicit (but complicated) formula for the determinant $\operatorname{Det}[D_n+\Lambda_n]$.
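The combinatorial structure of $\Lambda_n$ described above (binary indexing, $n+1$ non-zero entries per row, zero row sums, symmetry, and the off-diagonal blocks $-\lambda_1\mathbb{E}_{n-1}$) is easy to verify programmatically. A minimal sketch, assuming nothing beyond definition (2.14.16) (function names and the sample rates are ours):

```python
def sigma(k, n):
    """k-th (1-based) sequence of the lexicographic order on {-1,+1}^n:
    the binary digits of k-1 with 0 -> -1 and 1 -> +1."""
    return [1 if b == "1" else -1 for b in format(k - 1, f"0{n}b")]

def hamming(a, b):
    """Hamming distance between two sequences."""
    return sum(x != y for x, y in zip(a, b))

def Lambda(n, lams):
    """The scalar 2^n x 2^n matrix (2.14.16): lambda = sum(lams) on the
    diagonal, -lams[k] where the row and column sequences differ exactly
    in position k+1, zeros elsewhere."""
    lam = sum(lams)
    N = 2 ** n
    M = [[0] * N for _ in range(N)]
    for s in range(N):
        for m in range(N):
            a, b = sigma(s + 1, n), sigma(m + 1, n)
            if s == m:
                M[s][m] = lam
            elif hamming(a, b) == 1:
                k = next(j for j in range(n) if a[j] != b[j])
                M[s][m] = -lams[k]
    return M

L3 = Lambda(3, [2, 3, 5])   # lambda_1, lambda_2, lambda_3 (example values)
```

For `L3`, every row sums to zero, the matrix equals its transpose, each row has $n+1=4$ non-zero entries, and the two off-diagonal $4\times 4$ blocks equal $-\lambda_1$ times the unit matrix, in agreement with (2.14.19).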

Remark 2.14.3. In the particular case $n=2$, we obtain the $(4\times 4)$-determinant
$$\operatorname{Det}[D_2+\Lambda_2]=\begin{vmatrix} A_1+\lambda & -\lambda_2 & -\lambda_1 & 0\\ -\lambda_2 & A_2+\lambda & 0 & -\lambda_1\\ -\lambda_1 & 0 & A_3+\lambda & -\lambda_2\\ 0 & -\lambda_1 & -\lambda_2 & A_4+\lambda \end{vmatrix}. \qquad (2.14.22)$$
In the case $n=3$, the following $(8\times 8)$-determinant emerges:
$$\operatorname{Det}[D_3+\Lambda_3]=\begin{vmatrix}
A_1+\lambda & -\lambda_3 & -\lambda_2 & 0 & -\lambda_1 & 0 & 0 & 0\\
-\lambda_3 & A_2+\lambda & 0 & -\lambda_2 & 0 & -\lambda_1 & 0 & 0\\
-\lambda_2 & 0 & A_3+\lambda & -\lambda_3 & 0 & 0 & -\lambda_1 & 0\\
0 & -\lambda_2 & -\lambda_3 & A_4+\lambda & 0 & 0 & 0 & -\lambda_1\\
-\lambda_1 & 0 & 0 & 0 & A_5+\lambda & -\lambda_3 & -\lambda_2 & 0\\
0 & -\lambda_1 & 0 & 0 & -\lambda_3 & A_6+\lambda & 0 & -\lambda_2\\
0 & 0 & -\lambda_1 & 0 & -\lambda_2 & 0 & A_7+\lambda & -\lambda_3\\
0 & 0 & 0 & -\lambda_1 & 0 & -\lambda_2 & -\lambda_3 & A_8+\lambda
\end{vmatrix}. \qquad (2.14.23)$$
From (2.14.22) and (2.14.23) we clearly see the block structure of the matrix $D_n+\Lambda_n$ noted in (2.14.19). It is also seen that the diagonal blocks of determinant (2.14.23) are structurally similar to (2.14.22). Such determinants can, therefore, be evaluated by applying the recursive formula (2.14.21). Note that if we take some other (different from the lexicographical) order of the sequences $\{\sigma_k,\;k=1,\dots,2^n\}$ (which corresponds to pairwise permutations of the rows and columns of the matrix $D_n+\Lambda_n$), the determinant $\operatorname{Det}[D_n+\Lambda_n]$ changes its form but nevertheless keeps the same value.

Remark 2.14.4. To obtain the fundamental solution $f(x,t)$ of partial differential equation (2.14.18), we should solve it with the initial conditions
$$f(x,t)\big|_{t=0}=\delta\Bigl(x-\sum_{k=1}^{n}a_kx_k^0\Bigr), \qquad \frac{\partial^k f(x,t)}{\partial t^k}\bigg|_{t=0}=0, \qquad k=1,\dots,2^n-1, \qquad (2.14.24)$$
where $\delta(\cdot)$ is the Dirac delta-function. The first condition in (2.14.24) expresses the obvious fact that, at the initial time moment $t=0$, the density of the process $L(t)$ is entirely concentrated at the starting point $\sum_{k=1}^{n}a_kx_k^0$.

To pose the initial-value problem for the transition density $p(x,t)$ of the process $L(t)$, we need to find the respective initial conditions. To do this, we may use formula (2.4.2) for the characteristic function of the telegraph process. Since the telegraph processes $X_k(t)$, $k=1,\dots,n$, are independent, then, in view of (2.4.2) and (2.13.56), the characteristic function of their linear form $L(t)$ is given by the formula
$$H_L(\alpha,t)=\exp\Bigl(-\lambda t+i\alpha\sum_{k=1}^{n}a_kx_k^0\Bigr)\prod_{k=1}^{n}\tilde H_k(a_k\alpha,t), \qquad \alpha\in\mathbb{R}^1,\; t\ge 0, \qquad (2.14.25)$$


where
$$\tilde H_k(\xi,t)=\left[\cosh\Bigl(t\sqrt{\lambda_k^2-c_k^2\xi^2}\Bigr)+\frac{\lambda_k}{\sqrt{\lambda_k^2-c_k^2\xi^2}}\sinh\Bigl(t\sqrt{\lambda_k^2-c_k^2\xi^2}\Bigr)\right]\mathbf{1}_{\{|\xi|\le\lambda_k/c_k\}}$$
$$+\left[\cos\Bigl(t\sqrt{c_k^2\xi^2-\lambda_k^2}\Bigr)+\frac{\lambda_k}{\sqrt{c_k^2\xi^2-\lambda_k^2}}\sin\Bigl(t\sqrt{c_k^2\xi^2-\lambda_k^2}\Bigr)\right]\mathbf{1}_{\{|\xi|>\lambda_k/c_k\}}. \qquad (2.14.26)$$
In particular, by setting $t=0$ in (2.14.25) we get the formula
$$H_L(\alpha,0)=\exp\Bigl(i\alpha\sum_{k=1}^{n}a_kx_k^0\Bigr),$$
and its inversion yields the first initial condition in (2.14.24). To obtain the other initial conditions, we should differentiate characteristic function (2.14.25) in $t$ the respective number of times, invert the result of this differentiation in $\alpha$, and then set $t=0$.

Remark 2.14.5. From the hyperbolicity of equation (2.14.18) and from initial conditions (2.14.24) (more precisely, from the first initial condition of (2.14.24)), it follows that the fundamental solution $f(x,t)$ of equation (2.14.18) is a generalized function and, therefore, the differential operator $\operatorname{Det}[D_n+\Lambda_n]$ in (2.14.18) is treated, for any fixed $t>0$, as a differential operator acting in the space $S'$ of generalized functions, whose elements are called tempered distributions. This interpretation becomes more transparent if we note that solving the initial-value problem (2.14.18)–(2.14.24) is equivalent to solving the inhomogeneous equation
$$\{\operatorname{Det}[D_n+\Lambda_n]\}\,f(x,t)=\delta(t)\,\delta\Bigl(x-\sum_{k=1}^{n}a_kx_k^0\Bigr), \qquad (2.14.27)$$
where the generalized function on the right-hand side of (2.14.27) represents an instant point-like source concentrated, at the initial time moment $t=0$, at the point $\sum_{k=1}^{n}a_kx_k^0$. In this writing of the initial-value problem (2.14.27), the operator $\operatorname{Det}[D_n+\Lambda_n]:S'\to S'$ is a differential operator acting from $S'$ into itself. From this point of view, solving differential equation (2.14.18) with initial conditions (2.14.24) means finding a generalized function $f(x,t)\in S'$ that the operator $\operatorname{Det}[D_n+\Lambda_n]$ transforms into the generalized function $\delta(t)\,\delta\bigl(x-\sum_{k=1}^{n}a_kx_k^0\bigr)\in S'$.

Since the initial-value problem (2.14.18)–(2.14.24) is well-posed (due to the hyperbolicity of equation (2.14.18)), such a generalized function $f(x,t)$ exists and is unique in $S'$ for any fixed $t>0$. Therefore, the fundamental solution $f(x,t)$ of the linear form $L(t)$ defined by (2.14.1) is the Green's function of the initial-value problem (2.14.18)–(2.14.24). The same concerns the initial-value problem for the transition probability density $p(x,t)$, whose initial conditions are determined as described in Remark 2.14.4. That initial-value problem can also be represented in the form of an inhomogeneous partial differential equation similar to (2.14.27), but with another generalized function on its right-hand side, determined by the initial conditions for the transition density $p(x,t)$.

Remark 2.14.6. Suppose that the parameters of the telegraph processes tend to infinity in such a way that the following Kac scaling conditions hold (see (2.7.1)):
$$\lambda_k\to+\infty, \qquad c_k\to+\infty, \qquad \frac{c_k^2}{\lambda_k}\to\varrho_k, \qquad \varrho_k>0, \qquad k=1,\dots,n. \qquad (2.14.28)$$


According to Theorem 2.7.1, under conditions (2.14.28) each telegraph process $X_k(t)$ converges to the homogeneous Wiener process $W_k(t)$ starting from the initial point $x_k^0\in\mathbb{R}^1$ with zero drift and diffusion coefficient $\sigma_k^2=\varrho_k$. Therefore, the process $L(t)$ converges to the linear form
$$W(t)=\sum_{k=1}^{n}a_kW_k(t)$$
of the independent Wiener processes $W_k(t)$, $k=1,\dots,n$. By using characteristic function (2.14.25), one can easily show that, under Kac's scaling conditions (2.14.28), the linear combination $L(t)$ converges (for any fixed $t>0$) to the homogeneous Wiener process $W(t)$ with expectation and diffusion coefficient given, respectively, by
$$\mathrm{E}\,W(t)=\sum_{k=1}^{n}a_kx_k^0, \qquad \sigma_W^2=\sum_{k=1}^{n}\varrho_ka_k^2.$$
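The convergence claimed in Remark 2.14.6 can be observed numerically directly from (2.14.26): under the scaling $c^2/\lambda=\varrho$, the telegraph characteristic function approaches the Gaussian one $e^{-\varrho\xi^2 t/2}$. The sketch below (function names and parameter values are ours) evaluates the branch $|\xi|\le\lambda/c$ of (2.14.26) in the log domain, so that the hyperbolic functions do not overflow for large $\lambda$:

```python
import math

def log_H(xi, c, lam, t):
    """Log of e^{-lam t} * H~(xi, t) for the branch |xi| <= lam/c of
    (2.14.26), rewritten to avoid overflow of cosh/sinh:
    e^{-lam t}[cosh(wt) + (lam/w) sinh(wt)]
      = e^{(w-lam)t} [ (1 + lam/w)/2 + (1 - lam/w)/2 * e^{-2wt} ]."""
    w = math.sqrt(lam ** 2 - c ** 2 * xi ** 2)
    return (w - lam) * t + math.log(0.5 * (1 + lam / w)
                                    + 0.5 * (1 - lam / w) * math.exp(-2 * w * t))

rho, t, xi = 1.0, 1.0, 1.0
diffs = []
for lam in (1e2, 1e4, 1e6):
    c = math.sqrt(rho * lam)            # Kac scaling: c^2 / lambda = rho
    gauss = -0.5 * rho * xi ** 2 * t    # log char. function of N(0, rho t)
    diffs.append(abs(log_H(xi, c, lam, t) - gauss))
```

The discrepancies `diffs` shrink roughly like $1/\lambda$, illustrating the diffusion limit for a single factor of (2.14.25).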

2.14.3 Sum and difference of two telegraph processes

We apply the results obtained above to the study of the sum and difference
$$S^{\pm}(t)=X_1(t)\pm X_2(t) \qquad (2.14.29)$$
of two independent telegraph processes $X_1(t)$ and $X_2(t)$. The sum of two independent telegraph processes on the real line $\mathbb{R}^1$, both with the same parameters $c_1=c_2=c$, $\lambda_1=\lambda_2=\lambda$, that simultaneously start from the origin $0\in\mathbb{R}^1$, was studied in Section 2.13, where the explicit probability distribution of this sum was obtained. It was also proved that the shifted time derivative of the transition density solves a telegraph equation with the doubled parameters $2c$ and $2\lambda$. A functional relation connecting the distributions of the difference of two independent telegraph processes with arbitrary parameters and of the Euclidean distance between them was given in Remark 2.12.4.

Let us now consider the generalization of this model and study the behaviour of the sum and difference of two independent telegraph processes $X_1(t)$, $X_2(t)$ representing two particles that, at the initial time moment $t=0$, simultaneously start from the two initial points $x_1^0,x_2^0\in\mathbb{R}^1$ and move with constant speeds $c_1$ and $c_2$, respectively. The motions are controlled by two independent Poisson processes of arbitrary rates $\lambda_1$ and $\lambda_2$, respectively, as described above.

The coefficients of linear form (2.14.29) are $a_1=1$, $a_2=1$ for the sum $S^{+}(t)$ and $a_1=1$, $a_2=-1$ for the difference $S^{-}(t)$. Therefore, according to (2.14.3), the supports of the distributions of $S^{\pm}(t)$ are the intervals
$$\operatorname{supp}S^{\pm}(t)=\bigl[(x_1^0\pm x_2^0)-(c_1+c_2)t,\;(x_1^0\pm x_2^0)+(c_1+c_2)t\bigr]. \qquad (2.14.30)$$

The lexicographically ordered set of sequences in this case is
$$\sigma_1=(-1,-1), \qquad \sigma_2=(-1,+1), \qquad \sigma_3=(+1,-1), \qquad \sigma_4=(+1,+1),$$
and, according to (2.14.5), the support of the sum $S^{+}(t)$ has, therefore, the following singularity points:
$$\begin{aligned}
q_1^{+}&=(x_1^0+x_2^0)-(c_1+c_2)t &&\text{(terminal point of the support)},\\
q_2^{+}&=(x_1^0+x_2^0)-(c_1-c_2)t &&\text{(interior point of the support)},\\
q_3^{+}&=(x_1^0+x_2^0)+(c_1-c_2)t &&\text{(interior point of the support)},\\
q_4^{+}&=(x_1^0+x_2^0)+(c_1+c_2)t &&\text{(terminal point of the support)}.
\end{aligned} \qquad (2.14.31)$$


By setting $x_1^0=x_2^0=0$, $c_1=c_2=c$, $\lambda_1=\lambda_2=\lambda$, we arrive at the model studied in Section 2.13 with the support $\operatorname{supp}S^{+}(t)=[-2ct,2ct]$. In this case formulas (2.14.31) yield the three singularity points $\pm 2ct$ (the terminal points of the support) and $0$ (an interior point of multiplicity 2).

Similarly, the support of the difference $S^{-}(t)$ has the following singularity points:
$$\begin{aligned}
q_1^{-}&=(x_1^0-x_2^0)-(c_1-c_2)t &&\text{(interior point of the support)},\\
q_2^{-}&=(x_1^0-x_2^0)-(c_1+c_2)t &&\text{(terminal point of the support)},\\
q_3^{-}&=(x_1^0-x_2^0)+(c_1+c_2)t &&\text{(terminal point of the support)},\\
q_4^{-}&=(x_1^0-x_2^0)+(c_1-c_2)t &&\text{(interior point of the support)}.
\end{aligned} \qquad (2.14.32)$$
Note that if both processes $X_1(t)$ and $X_2(t)$ start from the same initial point $x_1^0=x_2^0=x^0\in\mathbb{R}^1$ and have the same speed $c_1=c_2=c$, then the support of the difference $S^{-}(t)$ takes the form $\operatorname{supp}S^{-}(t)=[-2ct,2ct]$ with the three singularity points $0$, $\pm 2ct$ (the interior singularity point $0$ has multiplicity 2). We see that in this case the difference $S^{-}(t)$ has the same support and the same singular points as the sum $S^{+}(t)$ of two telegraph processes with the same speed $c_1=c_2=c$ that simultaneously start from the origin $0\in\mathbb{R}^1$.

In view of (2.14.6),
$$\Pr\{S^{\pm}(t)=q_j^{\pm}\}=\frac{e^{-\lambda t}}{4}, \qquad j=1,2,3,4,$$
where $\lambda=\lambda_1+\lambda_2$.

According to (2.14.12), for the sum $S^{+}(t)$ the coefficients $c_k^{+}\equiv c_{\sigma_k}^{+}$, $k=1,2,3,4$, are:
$$c_1^{+}=-(c_1+c_2), \qquad c_2^{+}=-(c_1-c_2), \qquad c_3^{+}=c_1-c_2, \qquad c_4^{+}=c_1+c_2.$$
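The singularity points (2.14.31) and (2.14.32) are just the $2^n$ values of formula (2.14.5) for $n=2$ and $a=(1,\pm 1)$. A short sketch computing them (the helper name and the sample parameters are ours):

```python
from itertools import product

def singular_points(x0, a, c, t):
    """All 2^n singular points (2.14.5) of L(t) = sum_k a_k X_k(t):
    q = sum_k a_k x0_k + t * sum_k i_k a_k c_k over sign sequences (i_1,...,i_n)."""
    base = sum(ak * xk for ak, xk in zip(a, x0))
    return sorted(base + t * sum(i * ak * ck for i, ak, ck in zip(signs, a, c))
                  for signs in product((-1, 1), repeat=len(a)))

# example values (ours): x0 = (1, 2), c = (3, 1), t = 0.5
x0, c, t = (1.0, 2.0), (3.0, 1.0), 0.5
q_plus = singular_points(x0, (1, 1), c, t)     # sum S+(t)
q_minus = singular_points(x0, (1, -1), c, t)   # difference S-(t)
```

Here `q_plus` reproduces (2.14.31): $3\pm 2$ (terminal) and $3\pm 1$ (interior); `q_minus` reproduces (2.14.32) with base point $x_1^0-x_2^0=-1$.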

Then the operators $A_k^{+}$, $k=1,2,3,4$, take the form:
$$A_1^{+}=\frac{\partial}{\partial t}-(c_1+c_2)\frac{\partial}{\partial x}, \qquad A_2^{+}=\frac{\partial}{\partial t}-(c_1-c_2)\frac{\partial}{\partial x},$$
$$A_3^{+}=\frac{\partial}{\partial t}+(c_1-c_2)\frac{\partial}{\partial x}, \qquad A_4^{+}=\frac{\partial}{\partial t}+(c_1+c_2)\frac{\partial}{\partial x}. \qquad (2.14.33)$$
Similarly, for the difference $S^{-}(t)$ the coefficients $c_k^{-}\equiv c_{\sigma_k}^{-}$, $k=1,2,3,4$, are:
$$c_1^{-}=-(c_1-c_2), \qquad c_2^{-}=-(c_1+c_2), \qquad c_3^{-}=c_1+c_2, \qquad c_4^{-}=c_1-c_2,$$
and, therefore, the operators $A_k^{-}$, $k=1,2,3,4$, become
$$A_1^{-}=\frac{\partial}{\partial t}-(c_1-c_2)\frac{\partial}{\partial x}, \qquad A_2^{-}=\frac{\partial}{\partial t}-(c_1+c_2)\frac{\partial}{\partial x},$$
$$A_3^{-}=\frac{\partial}{\partial t}+(c_1+c_2)\frac{\partial}{\partial x}, \qquad A_4^{-}=\frac{\partial}{\partial t}+(c_1-c_2)\frac{\partial}{\partial x}. \qquad (2.14.34)$$
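Before turning to the governing equation, the product formula (2.14.25)-(2.14.26) for the characteristic function of $S^{+}(t)$ can be sanity-checked by simulation. A hedged sketch (the simulator, its symmetric random initial direction, and all parameter values are our choices for illustration):

```python
import math
import random

def telegraph(x0, c, lam, t, rng):
    """One sample of a telegraph process at time t, started at x0 with an
    equiprobable random initial direction."""
    x, s, d = x0, 0.0, rng.choice((-1.0, 1.0))
    while True:
        tau = rng.expovariate(lam)
        if s + tau >= t:
            return x + d * c * (t - s)
        x, s, d = x + d * c * tau, s + tau, -d

def H_tilde(xi, c, lam, t):
    """The factor (2.14.26) of the characteristic function."""
    if c * abs(xi) < lam:
        w = math.sqrt(lam ** 2 - c ** 2 * xi ** 2)
        return math.cosh(t * w) + (lam / w) * math.sinh(t * w)
    w = math.sqrt(c ** 2 * xi ** 2 - lam ** 2)
    return math.cos(t * w) + (lam / w) * math.sin(t * w)

# example parameters (ours); a is the argument of the characteristic function
x1, x2, c1, c2, l1, l2, t, a = 0.5, -0.2, 1.0, 2.0, 1.0, 1.5, 1.0, 0.7
rng = random.Random(5)
samples = [telegraph(x1, c1, l1, t, rng) + telegraph(x2, c2, l2, t, rng)
           for _ in range(40000)]
# empirical Re E exp(i a S+(t)) versus formula (2.14.25) (H~ factors are real)
emp = sum(math.cos(a * s) for s in samples) / len(samples)
exact = (math.exp(-(l1 + l2) * t) * math.cos(a * (x1 + x2))
         * H_tilde(a, c1, l1, t) * H_tilde(a, c2, l2, t))
```

The empirical and analytic values agree to within Monte Carlo error, confirming the starting point of the proof below.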

The initial-value problems for the transition densities of processes (2.14.29) are given by the following theorem.

Theorem 2.14.3. The transition probability densities $p^{\pm}(x,t)$ of processes (2.14.29) are the solutions of the initial-value problems
$$\Biggl\{\Bigl(\frac{\partial}{\partial t}+(\lambda_1+\lambda_2)\Bigr)^2\Bigl[\frac{\partial^2}{\partial t^2}+2(\lambda_1+\lambda_2)\frac{\partial}{\partial t}-2(c_1^2+c_2^2)\frac{\partial^2}{\partial x^2}-(\lambda_1-\lambda_2)^2\Bigr]$$
$$+\Bigl[(c_1^2-c_2^2)\frac{\partial^2}{\partial x^2}+(\lambda_1^2-\lambda_2^2)\Bigr]^2\Biggr\}\,p^{\pm}(x,t)=0, \qquad (2.14.35)$$
$$p^{\pm}(x,t)\big|_{t=0}=\delta\bigl(x-(x_1^0\pm x_2^0)\bigr), \qquad \frac{\partial p^{\pm}(x,t)}{\partial t}\bigg|_{t=0}=0,$$
$$\frac{\partial^2 p^{\pm}(x,t)}{\partial t^2}\bigg|_{t=0}=(c_1^2+c_2^2)\,\delta''\bigl(x-(x_1^0\pm x_2^0)\bigr),$$
$$\frac{\partial^3 p^{\pm}(x,t)}{\partial t^3}\bigg|_{t=0}=-2(\lambda_1c_1^2+\lambda_2c_2^2)\,\delta''\bigl(x-(x_1^0\pm x_2^0)\bigr), \qquad (2.14.36)$$
where $\delta''(x)$ is the second generalized derivative of the Dirac delta-function. Since equation (2.14.35) is hyperbolic, then, for arbitrary $t>0$, the solutions $p^{\pm}(x,t)$ of the initial-value problems (2.14.35)–(2.14.36) exist and are unique in the class of generalized functions $S'$.

Proof. To begin with, we find initial conditions (2.14.36). According to (2.14.25), the characteristic functions of the processes $S^{\pm}(t)$ are
$$H^{\pm}(\alpha,t)=\exp\bigl(-(\lambda_1+\lambda_2)t+i\alpha(x_1^0\pm x_2^0)\bigr)\,\tilde H_1(\alpha,t)\tilde H_2(\alpha,t), \qquad \alpha\in\mathbb{R}^1,\; t\ge 0,$$
where the functions $\tilde H_1(\alpha,t)$, $\tilde H_2(\alpha,t)$ are given by (2.14.26). Differentiating $H^{\pm}=H^{\pm}(\alpha,t)$ in $t$, after some calculations we obtain:
$$H^{\pm}(\alpha,t)\big|_{t=0}=\exp\bigl(i\alpha(x_1^0\pm x_2^0)\bigr), \qquad \frac{\partial H^{\pm}}{\partial t}\bigg|_{t=0}=0,$$
$$\frac{\partial^2 H^{\pm}}{\partial t^2}\bigg|_{t=0}=-(c_1^2+c_2^2)\alpha^2e^{i\alpha(x_1^0\pm x_2^0)}, \qquad \frac{\partial^3 H^{\pm}}{\partial t^3}\bigg|_{t=0}=2(\lambda_1c_1^2+\lambda_2c_2^2)\alpha^2e^{i\alpha(x_1^0\pm x_2^0)}.$$
Inverting these functions in $\alpha$ yields initial conditions (2.14.36).

Let us now derive the governing equation for the transition density of the sum $S^{+}(t)$. To simplify the notation, we write $A_k\equiv A_k^{+}$, $k=1,2,3,4$, omitting the upper index but bearing in mind that we deal with the operators $A_k^{+}$ given by (2.14.33). Thus, according to Theorem 2.14.2, we should evaluate determinant (2.14.22) with the operators $A_k$ given by (2.14.33). To do this, we apply Schur's formula (2.14.21) to determinant (2.14.22). We have:
$$\operatorname{Det}[D_2+\Lambda_2]=\begin{vmatrix} A_1+\lambda & -\lambda_2 & -\lambda_1 & 0\\ -\lambda_2 & A_2+\lambda & 0 & -\lambda_1\\ -\lambda_1 & 0 & A_3+\lambda & -\lambda_2\\ 0 & -\lambda_1 & -\lambda_2 & A_4+\lambda \end{vmatrix}$$
$$=\operatorname{Det}\left[\begin{pmatrix} A_1+\lambda & -\lambda_2\\ -\lambda_2 & A_2+\lambda \end{pmatrix}\begin{pmatrix} A_3+\lambda & -\lambda_2\\ -\lambda_2 & A_4+\lambda \end{pmatrix}-\lambda_1^2\begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}\right]$$
$$=\begin{vmatrix} (A_1+\lambda)(A_3+\lambda)-(\lambda_1^2-\lambda_2^2) & -\lambda_2(A_1+A_4+2\lambda)\\ -\lambda_2(A_2+A_3+2\lambda) & (A_2+\lambda)(A_4+\lambda)-(\lambda_1^2-\lambda_2^2) \end{vmatrix}, \qquad (2.14.37)$$

where, recall, $\lambda=\lambda_1+\lambda_2$. In view of (2.14.33),
$$A_1+A_4=2\frac{\partial}{\partial t}, \qquad A_2+A_3=2\frac{\partial}{\partial t}$$
and, therefore, we have:
$$\operatorname{Det}[D_2+\Lambda_2]=\begin{vmatrix} (A_1+\lambda)(A_3+\lambda)-(\lambda_1^2-\lambda_2^2) & -2\lambda_2\bigl(\frac{\partial}{\partial t}+\lambda\bigr)\\ -2\lambda_2\bigl(\frac{\partial}{\partial t}+\lambda\bigr) & (A_2+\lambda)(A_4+\lambda)-(\lambda_1^2-\lambda_2^2) \end{vmatrix}$$
$$=\bigl[(A_1+\lambda)(A_3+\lambda)-(\lambda_1^2-\lambda_2^2)\bigr]\bigl[(A_2+\lambda)(A_4+\lambda)-(\lambda_1^2-\lambda_2^2)\bigr]-4\lambda_2^2\Bigl(\frac{\partial}{\partial t}+\lambda\Bigr)^2$$
$$=(A_1+\lambda)(A_2+\lambda)(A_3+\lambda)(A_4+\lambda)-(\lambda_1^2-\lambda_2^2)\bigl[(A_1+\lambda)(A_3+\lambda)+(A_2+\lambda)(A_4+\lambda)\bigr]$$
$$+(\lambda_1^2-\lambda_2^2)^2-4\lambda_2^2\Bigl(\frac{\partial}{\partial t}+\lambda\Bigr)^2. \qquad (2.14.38)$$
According to (2.14.33), we have
$$(A_1+\lambda)(A_3+\lambda)=\Bigl(\frac{\partial}{\partial t}+\lambda\Bigr)^2-(c_1^2-c_2^2)\frac{\partial^2}{\partial x^2}-2c_2\frac{\partial}{\partial x}\Bigl(\frac{\partial}{\partial t}+\lambda\Bigr),$$
$$(A_2+\lambda)(A_4+\lambda)=\Bigl(\frac{\partial}{\partial t}+\lambda\Bigr)^2-(c_1^2-c_2^2)\frac{\partial^2}{\partial x^2}+2c_2\frac{\partial}{\partial x}\Bigl(\frac{\partial}{\partial t}+\lambda\Bigr).$$
Therefore,
$$(A_1+\lambda)(A_2+\lambda)(A_3+\lambda)(A_4+\lambda)=\Bigl[\Bigl(\frac{\partial}{\partial t}+\lambda\Bigr)^2-(c_1^2-c_2^2)\frac{\partial^2}{\partial x^2}\Bigr]^2-4c_2^2\frac{\partial^2}{\partial x^2}\Bigl(\frac{\partial}{\partial t}+\lambda\Bigr)^2$$
$$=\Bigl[\Bigl(\frac{\partial}{\partial t}+\lambda\Bigr)^2-(c_1^2+c_2^2)\frac{\partial^2}{\partial x^2}\Bigr]^2-4c_1^2c_2^2\frac{\partial^4}{\partial x^4}, \qquad (2.14.39)$$
and
$$(A_1+\lambda)(A_3+\lambda)+(A_2+\lambda)(A_4+\lambda)=2\Bigl[\Bigl(\frac{\partial}{\partial t}+\lambda\Bigr)^2-(c_1^2-c_2^2)\frac{\partial^2}{\partial x^2}\Bigr]. \qquad (2.14.40)$$
Substituting (2.14.39) and (2.14.40) into (2.14.38), we obtain:
$$\operatorname{Det}[D_2+\Lambda_2]=\Bigl[\Bigl(\frac{\partial}{\partial t}+\lambda\Bigr)^2-(c_1^2+c_2^2)\frac{\partial^2}{\partial x^2}\Bigr]^2-4c_1^2c_2^2\frac{\partial^4}{\partial x^4}$$
$$-2(\lambda_1^2-\lambda_2^2)\Bigl[\Bigl(\frac{\partial}{\partial t}+\lambda\Bigr)^2-(c_1^2-c_2^2)\frac{\partial^2}{\partial x^2}\Bigr]+(\lambda_1^2-\lambda_2^2)^2-4\lambda_2^2\Bigl(\frac{\partial}{\partial t}+\lambda\Bigr)^2.$$
Using the elementary identities $(c_1^2+c_2^2)^2-4c_1^2c_2^2=(c_1^2-c_2^2)^2$ and $2(\lambda_1^2-\lambda_2^2)+4\lambda_2^2=(\lambda_1+\lambda_2)^2+(\lambda_1-\lambda_2)^2$, this expression regroups into
$$\operatorname{Det}[D_2+\Lambda_2]=\Bigl(\frac{\partial}{\partial t}+\lambda\Bigr)^2\Bigl[\frac{\partial^2}{\partial t^2}+2\lambda\frac{\partial}{\partial t}-2(c_1^2+c_2^2)\frac{\partial^2}{\partial x^2}-(\lambda_1-\lambda_2)^2\Bigr]+\Bigl[(c_1^2-c_2^2)\frac{\partial^2}{\partial x^2}+(\lambda_1^2-\lambda_2^2)\Bigr]^2,$$
proving equation (2.14.35) for the transition density $p^{+}(x,t)$ of the sum $S^{+}(t)$.

Comparing now the operators (2.14.33) and (2.14.34), we see that $A_1^{-}=A_2^{+}$, $A_2^{-}=A_1^{+}$, $A_3^{-}=A_4^{+}$, $A_4^{-}=A_3^{+}$. Therefore, as is easy to see, determinant (2.14.22) written for the operators $A_k^{-}$, $k=1,2,3,4$, takes the same value as determinant (2.14.37) for the operators $A_k^{+}$, $k=1,2,3,4$. Thus, the transition density $p^{-}(x,t)$ of the difference $S^{-}(t)$ solves equation (2.14.35) too. This completes the proof of the theorem.

Remark 2.14.7. From Theorem 2.14.3 it follows that if both telegraph processes start from the origin $x_1^0=x_2^0=0$, then their sum $S^{+}(t)$ and difference $S^{-}(t)$ have the same density. This follows from the fact that in this case $x_1^0\pm x_2^0=0$, so both densities $p^{+}(x,t)$ and $p^{-}(x,t)$ are solutions of the same initial-value problem, which is well-posed due to the hyperbolicity of the governing partial differential equation (2.14.35); the well-posedness implies the uniqueness of the solution.

Remark 2.14.8. By setting $c_1=c_2=c$ and $\lambda_1=\lambda_2=\lambda$ in (2.14.35), we arrive at the fourth-order equation

$$\Bigl(\frac{\partial}{\partial t}+2\lambda\Bigr)^2\Bigl[\frac{\partial^2}{\partial t^2}+4\lambda\frac{\partial}{\partial t}-4c^2\frac{\partial^2}{\partial x^2}\Bigr]p^{\pm}(x,t)=0,$$
and this result is weaker than the third-order equation (2.13.24) for the transition density of the sum $S^{+}(t)$ of two independent telegraph processes with the same parameters $(c,\lambda)$. It is interesting to note that, in this product of operators, the second factor is the telegraph operator with the doubled parameters $(2c,2\lambda)$.

Remark 2.14.9. The form of equation (2.14.35) enables us to make some interesting probabilistic observations. We see that the factor of equation (2.14.35) in square brackets is a telegraph-type operator (containing also the free term $-(\lambda_1-\lambda_2)^2$) which is invariant with respect to the parameters $\lambda_1,\lambda_2$ and $c_1,c_2$: if we interchange $\lambda_1$ and $\lambda_2$, and/or $c_1$ and $c_2$, this telegraph-type factor preserves its form, while the second term of (2.14.35) can change its interior signs. From equation (2.14.35) we can conclude that the sum and difference $S^{\pm}(t)$ of two independent telegraph processes are not telegraph processes; however, they still contain a telegraph-type component. To show this, we represent equation (2.14.35) in the following form:


$$\Biggl\{\Bigl(\frac{\partial}{\partial t}+(\lambda_1+\lambda_2)\Bigr)^2\Bigl[\frac{\partial^2}{\partial t^2}+2(\lambda_1+\lambda_2)\frac{\partial}{\partial t}-2(c_1^2+c_2^2)\frac{\partial^2}{\partial x^2}\Bigr]$$
$$-\Bigl[(\lambda_1-\lambda_2)\frac{\partial}{\partial t}-(c_1^2-c_2^2)\frac{\partial^2}{\partial x^2}\Bigr]\Bigl[(\lambda_1-\lambda_2)\frac{\partial}{\partial t}+(c_1^2-c_2^2)\frac{\partial^2}{\partial x^2}+2(\lambda_1^2-\lambda_2^2)\Bigr]\Biggr\}\,p^{\pm}(x,t)=0. \qquad (2.14.41)$$
We see that the first term in (2.14.41) contains exactly the telegraph operator, quite similar to the classical Goldstein-Kac operator in (2.3.2) with the replacements $\lambda\mapsto\lambda_1+\lambda_2$ and $c^2\mapsto 2(c_1^2+c_2^2)$. The second term of (2.14.41) is the product of two heat operators, and this fact implies the presence of a Brownian-type component in the processes $S^{\pm}(t)$. Notice also that in this product the first operator in square brackets is exactly the standard heat operator (for $\lambda_1>\lambda_2$ and $c_1>c_2$), while the second one is an inverse-time heat operator (that is, with the time reversal $t\mapsto -t$) containing also the free term $2(\lambda_1^2-\lambda_2^2)$.
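Since the operators $A_k$ commute, the operator identities (2.14.35) and (2.14.41) for $\operatorname{Det}[D_2+\Lambda_2]$ reduce to polynomial identities in commuting symbols $T\sim\partial/\partial t$ and $D\sim\partial/\partial x$. They can therefore be verified exactly at random integer points, with no symbolic-algebra dependency; a minimal sketch (all names are ours):

```python
import random
from itertools import permutations

def det(M):
    """Exact determinant by the Leibniz formula (fine for a 4x4 integer matrix)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        term = (-1) ** inv
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def check(T, D, c1, c2, l1, l2):
    """Determinant (2.14.22) with operators (2.14.33) versus the closed
    forms (2.14.35) and (2.14.41), all evaluated at scalar arguments."""
    l = l1 + l2
    A = [T - (c1 + c2) * D, T - (c1 - c2) * D, T + (c1 - c2) * D, T + (c1 + c2) * D]
    M = [[A[0] + l, -l2,      -l1,      0       ],
         [-l2,      A[1] + l, 0,        -l1     ],
         [-l1,      0,        A[2] + l, -l2     ],
         [0,        -l1,      -l2,      A[3] + l]]
    eq35 = ((T + l) ** 2 * (T ** 2 + 2 * l * T
                            - 2 * (c1 ** 2 + c2 ** 2) * D ** 2 - (l1 - l2) ** 2)
            + ((c1 ** 2 - c2 ** 2) * D ** 2 + (l1 ** 2 - l2 ** 2)) ** 2)
    eq41 = ((T + l) ** 2 * (T ** 2 + 2 * l * T - 2 * (c1 ** 2 + c2 ** 2) * D ** 2)
            - ((l1 - l2) * T - (c1 ** 2 - c2 ** 2) * D ** 2)
              * ((l1 - l2) * T + (c1 ** 2 - c2 ** 2) * D ** 2
                 + 2 * (l1 ** 2 - l2 ** 2)))
    return det(M) == eq35 == eq41

rng = random.Random(0)
ok = all(check(*[rng.randint(-5, 5) for _ in range(6)]) for _ in range(100))
```

Agreement at 100 random integer points is strong evidence that the two polynomial identities hold, since all three expressions are polynomials of bounded degree in the six variables.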



Chapter 3

Planar Random Motion with a Finite Number of Directions

Attempts at multidimensional generalization of the Goldstein-Kac telegraph process studied in the previous chapter have a long history, as described in the Introduction. The difficulty of such a generalization stems from the fact that there is a continuum of directions in the Euclidean space $\mathbb{R}^m$ of any higher dimension $m\ge 2$. Therefore, in this case the Kolmogorov (Fokker-Planck) equation for the joint densities of the multidimensional motion is a hyperbolic system of a continuum of first-order partial differential equations, and the problem of extracting a governing equation from this system is impracticable. Hence, other methods of analysis, not relying on the Kolmogorov equation, must be invented. Such a method, based on the integral transforms of distributions, will be developed in the next section. However, the planar finite-velocity stochastic motion with a finite number of directions is of special interest, and it is the subject of this chapter.

3.1 Description of the model and the main result

A particle moves in a plane with a constant finite speed $c$. At every time instant it has one of $n$ possible directions of motion $E_0,\dots,E_{n-1}$, $n\ge 2$, where the direction $E_k$, $k=0,\dots,n-1$, forms the angle $2\pi k/n$ with the $x$-axis. The particle's motion is controlled by a homogeneous Poisson process of rate $\lambda>0$. This means that the particle moves in some direction until a Poisson event occurs; at that moment it instantly chooses a new direction with probability $1/(n-1)$ and keeps moving in this new direction for a random amount of time until the next Poisson event occurs, then it takes on a new direction, and so on.

This model is a discretization (with respect to the number of directions) of the planar random evolution with a continuum of directions that will be thoroughly examined in Chapter 5. Obviously, in the case of two directions ($n=2$) this planar model turns into the one-dimensional Goldstein-Kac telegraph process studied in the previous Chapter 2.

Denote by $\Theta(t)=(X(t),Y(t))$ the particle's position in the plane $\mathbb{R}^2$ at time $t$. Consider the distribution
$$F(x,y,t)=\Pr\{X(t)<x,\;Y(t)<y\}, \qquad (3.1.1)$$
which is the transition function of the process $\Theta(t)$. In analysing the structure of this distribution, one should note at once that it must contain an absolutely continuous component, because the trajectories of the motion are continuous and differentiable almost everywhere and the speed is finite. Therefore, there exists a density of the absolutely continuous component of distribution (3.1.1). Clearly, for $n=2$ the two-dimensional density degenerates into the one-dimensional one.
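The model just described is straightforward to simulate, and the simulation illustrates the two facts established below: the motion stays inside the regular $n$-gon $M_t$ of circumradius $ct$, and the probability of being at a vertex of $M_t$ equals $e^{-\lambda t}$ (no Poisson events up to time $t$). A minimal sketch, with our own helper names and parameter values:

```python
import math
import random

def planar_flight(n, c, lam, t, rng):
    """Simulate the planar motion with n directions E_k at angles 2*pi*k/n;
    at each Poisson event a new direction is chosen uniformly among the
    other n - 1 directions. The uniform initial direction is our assumption."""
    k = rng.randrange(n)
    x = y = s = 0.0
    while True:
        tau = rng.expovariate(lam)
        dt = min(tau, t - s)
        x += c * dt * math.cos(2 * math.pi * k / n)
        y += c * dt * math.sin(2 * math.pi * k / n)
        s += dt
        if s >= t:
            return x, y
        k = (k + rng.randrange(1, n)) % n   # uniform over the other n-1 directions

def in_ngon(x, y, n, r, eps=1e-9):
    """Membership in the regular n-gon with vertices (r cos(2*pi*k/n), r sin(2*pi*k/n)):
    the point must lie on the inner side of every edge, whose outward normal has
    angle pi*(2k+1)/n and whose distance from the origin is the apothem r*cos(pi/n)."""
    apothem = r * math.cos(math.pi / n)
    return all(x * math.cos(a) + y * math.sin(a) <= apothem + eps
               for a in (math.pi * (2 * k + 1) / n for k in range(n)))

rng = random.Random(11)
n, c, lam, t = 4, 1.0, 1.5, 2.0
pts = [planar_flight(n, c, lam, t, rng) for _ in range(20000)]
inside = all(in_ngon(x, y, n, c * t) for x, y in pts)
at_vertex = sum(1 for x, y in pts if math.hypot(x, y) > c * t - 1e-9) / len(pts)
```

For $n=4$, $\lambda t=3$, the fraction `at_vertex` is close to $e^{-3}\approx 0.0498$, and every sampled endpoint lies in $M_t$.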


Let us suppose that, at the initial time $t=0$, the particle starts from the origin $0=(0,0)\in\mathbb{R}^2$. This means that the density is initially concentrated at the origin and represents a Dirac delta-function. Since the speed $c$ is finite, at an arbitrary time instant $t>0$ the diffusion area is the regular $n$-gon $M_t$ with symmetry center at the origin $0$ and vertex coordinates $\bigl(ct\cos\frac{2\pi k}{n},\;ct\sin\frac{2\pi k}{n}\bigr)$, $k=0,1,\dots,n-1$, $n\ge 3$. A rigorous proof of this fact, as well as some geometric properties of the diffusion area $M_t$, will be given in Section 3.3. Note also that for $n=2$ the $n$-gon $M_t$ degenerates into the interval $[-ct,ct]$.

For any $t>0$, there exists a singular part of distribution (3.1.1), determined by the initial concentration of the density at the origin $0$ and the finite velocity of propagation. This singular component represents the measure of all the sample paths of the process originating at the starting point $0$ and finishing, at the time moment $t$, on the boundary of the $n$-gon $M_t$. In other words, the presence of the singular component is determined by the positive probability of being on the boundary of $M_t$ at time $t$. Indeed, the particle is located at one of the vertices of $M_t$ at the moment $t$ if no jumps of the governing Poisson process occur until time $t$, and the probability of this event is equal to $e^{-\lambda t}$. Other boundary points are attainable in time $t$ by an appropriate choice of the respective directions if one or more jumps of the Poisson process occur up to time $t$, and the probabilities of these events are also positive. For example, if $n=4$, then the diffusion area $M_t$ represents a square with the vertex coordinates $\bigl(ct\cos\frac{\pi k}{2},\;ct\sin\frac{\pi k}{2}\bigr)$, $k=0,1,2,3$.
In this case the particle can attain the boundary of the square (without the vertices) in one Poisson jump if it does not take on the opposite direction at the jump moment; therefore, the probability of attaining the boundary in a single jump is equal to (2/3)(λt)e^{-λt}. Similarly, in order to attain the boundary of the square in two jumps of the governing Poisson process, the particle must not take on the opposite direction at the moment of the first jump, and it must take on the initial direction again at the second jump. Thus, the probability of attaining the boundary of the square in two jumps is equal to (2/3)(1/3)((λt)²/2!)e^{-λt}. In the same manner one can evaluate the probabilities of attaining the boundary of the square in more than two Poisson jumps. The sum of all such probabilities (a functional series in t) is the probability of attaining the boundary by time t, and it determines the singular component of the distribution.

The remaining part of the distribution (3.1.1) is absolutely continuous, and its density is completely concentrated in the interior int M_t of the n-gon M_t. If the initial position of the particle is not concentrated at a single point or in some bounded planar area, but is smoothly distributed over the whole plane (that is, the initial distribution is a smooth function on the plane), then the singular component is absent and the distribution (3.1.1) is absolutely continuous.

The principal result of this chapter is given by the following Main Theorem.

Theorem 3.1.1. The transition probability density f(x,y,t), (x,y) ∈ int M_t, t > 0, of the absolutely continuous component of distribution (3.1.1) satisfies the hyperbolic partial differential equation
\[ H_n f(x,y,t) = 0, \tag{3.1.2} \]
where the operator H_n has the form
\[
H_n = \frac{\partial}{\partial t} \sum_{k=1}^{[(n+1)/2]} (-1)^{k-1} \binom{n-k}{k-1} Z^{n-2k+1} Q^{k-1}
- 2 \sum_{k=1}^{[n/2]} (-1)^{k-1} \binom{n-k-1}{k-1} Z^{n-2k} Q^{k}
- 2 \left( -\frac{c}{2} \right)^{\!n} \sum_{k=0}^{[n/2]} (-1)^{k} \binom{n}{2k} \left( \frac{\partial}{\partial x} \right)^{\!n-2k} \left( \frac{\partial}{\partial y} \right)^{\!2k},
\tag{3.1.3}
\]
\[
Z = \frac{\partial}{\partial t} + \frac{\lambda n}{n-1}, \qquad Q = \frac{c^{2}}{4}\,\Delta, \qquad \binom{n}{k} \equiv C_n^k = \frac{n!}{k!\,(n-k)!}, \qquad n \ge 2,
\]
and Δ is the two-dimensional Laplace operator, [·] means the integer part of a number.

The operator H_n is a linear hyperbolic differential operator of nth order with constant coefficients which is the generator (or infinitesimal operator) of the stochastic motion Θ(t). It is known from the general theory of PDEs that an initial-value problem is well-posed for any hyperbolic equation of finite order with constant coefficients, and the smoothness of its solution is completely determined by the smoothness of the respective initial conditions (see, for instance, [148]). Therefore, the solution of equation (3.1.2) exists and is unique for any choice of the initial data. Moreover, if we choose the first initial condition as a Dirac-type function (corresponding to the initial concentration of the density at a single point), then the solution of such an initial-value problem takes the form of a Schwartz-Sobolev distribution. By taking smooth initial data, we obtain a smooth solution. This is another confirmation (from the point of view of PDEs) of the structure of distribution (3.1.1) outlined above on probabilistic grounds.

Remark 3.1.1. The reader can easily check that for n = 2 the main equation (3.1.2) with differential operator (3.1.3) turns into the Goldstein-Kac telegraph equation (2.3.2). For n = 3 equation (3.1.2) becomes the third-order hyperbolic equation
\[
\left[ \frac{\partial^{3}}{\partial t^{3}} + 3\lambda \frac{\partial^{2}}{\partial t^{2}} + \frac{9\lambda^{2}}{4} \frac{\partial}{\partial t}
- \frac{3c^{2}}{4} \frac{\partial}{\partial t}\Delta - \frac{3\lambda c^{2}}{4}\Delta
+ \frac{c^{3}}{4} \left( \frac{\partial^{3}}{\partial x^{3}} - 3\,\frac{\partial^{3}}{\partial x\, \partial y^{2}} \right) \right] f(x,y,t) = 0,
\]
which describes a finite-velocity planar random motion with three directions.

In the next section we give the complete proof of this theorem based on some specific properties of the characters of a finite cyclic group.
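As a quick sanity check of Remark 3.1.1, one can replace the derivatives ∂/∂t, ∂/∂x, ∂/∂y in (3.1.3) by commuting scalars and compare the resulting polynomial with the telegraph operator for n = 2 and with the displayed third-order operator for n = 3 (an illustrative sketch, not part of the original text):

```python
import math
import random

def H(n, T, X, Y, lam, c):
    """Value of the symbol of H_n from (3.1.3), with the derivatives
    d/dt, d/dx, d/dy replaced by the commuting scalars T, X, Y."""
    Z = T + lam * n / (n - 1)
    Q = c**2 * (X**2 + Y**2) / 4
    s1 = sum((-1)**(k - 1) * math.comb(n - k, k - 1)
             * Z**(n - 2*k + 1) * Q**(k - 1)
             for k in range(1, (n + 1)//2 + 1))
    s2 = sum((-1)**(k - 1) * math.comb(n - k - 1, k - 1)
             * Z**(n - 2*k) * Q**k
             for k in range(1, n//2 + 1))
    s3 = sum((-1)**k * math.comb(n, 2*k) * X**(n - 2*k) * Y**(2*k)
             for k in range(n//2 + 1))
    return T * s1 - 2 * s2 - 2 * (-c/2)**n * s3

rng = random.Random(0)
for _ in range(100):
    T, X, Y = (rng.uniform(-2, 2) for _ in range(3))
    lam, c = rng.uniform(0.1, 2), rng.uniform(0.1, 2)
    # n = 2: the Goldstein-Kac telegraph operator (the y-derivatives drop out)
    h2 = T**2 + 2*lam*T - c**2 * X**2
    assert math.isclose(H(2, T, X, Y, lam, c), h2, rel_tol=1e-9, abs_tol=1e-9)
    # n = 3: the third-order operator displayed in Remark 3.1.1
    D = X**2 + Y**2
    h3 = (T**3 + 3*lam*T**2 + 9*lam**2/4*T - 3*c**2/4*T*D
          - 3*lam*c**2/4*D + c**3/4*(X**3 - 3*X*Y**2))
    assert math.isclose(H(3, T, X, Y, lam, c), h3, rel_tol=1e-9, abs_tol=1e-9)
```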

3.2 Proof of the Main Theorem

The proof of the Main Theorem 3.1.1 and the derivation of the governing nth-order hyperbolic equation (3.1.2) consist of the series of consecutive steps presented below.

3.2.1 System of equations and basic notations

Let ζ(t) be the direction of the particle's motion at time moment t. Introduce the joint probability densities f_k(x,y,t), k = 0, 1, ..., n-1, of the absolutely continuous component of the distribution (3.1.1) by the equalities
\[
f_k(x,y,t)\,dx\,dy = \Pr\{ x \le X(t) < x+dx, \; y \le Y(t) < y+dy, \; \zeta(t) = E_k \},
\tag{3.2.1}
\]
(x,y) ∈ int M_t, t > 0, k = 0, 1, ..., n-1. These densities exist and are smooth, as noted above.

Proposition 3.2.1. The joint densities f_k = f_k(x,y,t), k = 0, 1, ..., n-1, satisfy the following hyperbolic system of first-order PDEs:
\[
\begin{aligned}
\frac{\partial f_0}{\partial t} &= -c \frac{\partial f_0}{\partial x} - \lambda f_0 + \frac{\lambda}{n-1} \sum_{j=1}^{n-1} f_j, \\
\frac{\partial f_k}{\partial t} &= -c \cos\frac{2\pi k}{n} \frac{\partial f_k}{\partial x} - c \sin\frac{2\pi k}{n} \frac{\partial f_k}{\partial y} - \lambda f_k + \frac{\lambda}{n-1} \sum_{\substack{j=0 \\ j \ne k}}^{n-1} f_j,
\end{aligned}
\tag{3.2.2}
\]
(x,y) ∈ int M_t, t > 0, k = 0, 1, ..., n-1.

Proof. In order to obtain system (3.2.2) one needs simply to write down the backward Kolmogorov equation for the joint densities f_k, which in this case represents a system of n first-order PDEs with constant coefficients. System (3.2.2) is hyperbolic because the matrices of its main part are diagonal and real (see [148, Theorem 4.14]).

Our main concern is the reduction of the hyperbolic system (3.2.2) to a single high-order hyperbolic equation. Clearly, such a reduction is not always possible for general systems. Nevertheless, for a system with constant coefficients this reduction is possible at least in the determinant form, see Section 1.3.1 (see also [11, Part 1, paragraph 4.6] or [148, Ch. 4]). We will carry out the reduction of (3.2.2) to a single nth-order hyperbolic PDE in an explicit form using the highly symmetric structure of the model.

Denote by ω a primitive nth root of unity, that is,
\[ \omega^{n} = 1, \qquad \omega^{s} \ne 1, \quad 0 < s < n. \tag{3.2.3} \]
It is known that the kth power of ω is given by the formula
\[ \omega^{k} = \cos\frac{2\pi k}{n} + i \sin\frac{2\pi k}{n}, \qquad k = 0, 1, \ldots, n-1, \tag{3.2.4} \]
where i = √(-1). From (3.2.4) we have
\[ \cos\frac{2\pi k}{n} = \frac{\omega^{k} + \omega^{-k}}{2}, \qquad \sin\frac{2\pi k}{n} = \frac{\omega^{k} - \omega^{-k}}{2i}, \tag{3.2.5} \]
where, according to (3.2.3), \( \omega^{-k} = \omega^{n-k} = \overline{\omega^{k}} \), k = 0, 1, ..., n-1, and the bar means complex conjugation.

Taking into account (3.2.5), system (3.2.2) can be rewritten in the form
\[
\frac{\partial f_k}{\partial t} = -c\, \frac{\omega^{k} + \omega^{-k}}{2} \frac{\partial f_k}{\partial x} - c\, \frac{\omega^{k} - \omega^{-k}}{2i} \frac{\partial f_k}{\partial y} - \lambda f_k + \frac{\lambda}{n-1} \sum_{\substack{j=0 \\ j \ne k}}^{n-1} f_j,
\qquad k = 0, 1, \ldots, n-1.
\tag{3.2.6}
\]
Denoting by D_k the operator
\[
D_k = -c\, \frac{\omega^{k} + \omega^{-k}}{2} \frac{\partial}{\partial x} - c\, \frac{\omega^{k} - \omega^{-k}}{2i} \frac{\partial}{\partial y}, \qquad k = 0, 1, \ldots, n-1,
\tag{3.2.7}
\]
system (3.2.6) becomes
\[
\frac{\partial f_k}{\partial t} = D_k f_k - \lambda f_k + \frac{\lambda}{n-1} \sum_{\substack{j=0 \\ j \ne k}}^{n-1} f_j, \qquad k = 0, 1, \ldots, n-1.
\tag{3.2.8}
\]
Introduce the following notations:
\[
\mathbf{f} = (f_0, f_1, \ldots, f_{n-1})^{T}, \qquad
\Lambda = \begin{pmatrix}
-\lambda & \frac{\lambda}{n-1} & \cdots & \frac{\lambda}{n-1} \\
\frac{\lambda}{n-1} & -\lambda & \cdots & \frac{\lambda}{n-1} \\
\cdots & \cdots & \cdots & \cdots \\
\frac{\lambda}{n-1} & \frac{\lambda}{n-1} & \cdots & -\lambda
\end{pmatrix}, \qquad
D = \operatorname{diag}\{ D_k, \; k = 0, 1, \ldots, n-1 \}.
\tag{3.2.9}
\]
Then system (3.2.8) can be written in the vector form
\[ \frac{\partial \mathbf{f}}{\partial t} = D \mathbf{f} + \Lambda \mathbf{f}. \tag{3.2.10} \]

Note that Λ is the infinitesimal matrix of the embedded Markov chain. System (3.2.2), as well as its equivalent forms (3.2.8) and (3.2.10), is basic for proving the Main Theorem 3.1.1.

3.2.2 Characters of a finite cyclic group and spectral decomposition of the unit matrix

In this subsection we use some properties of the characters of an nth-order cyclic group in order to construct a decomposition of the unit matrix into a sum of projective (n × n)-matrices having these characters as their elements. First of all we recall a lemma which plays an important role in the further analysis.

Lemma 3.2.1. Let ω be a primitive nth root of unity. Then for any integer k the following relation holds:
\[
\sum_{j=0}^{n-1} \omega^{jk} =
\begin{cases}
n, & \text{if } k \text{ is a multiple of } n, \\
0, & \text{otherwise}.
\end{cases}
\tag{3.2.11}
\]
Proof. Let k be a multiple of n, that is, k = sn, s = 0, ±1, ±2, .... Then
\[ \sum_{j=0}^{n-1} \omega^{jk} = \sum_{j=0}^{n-1} \omega^{jsn} = \sum_{j=0}^{n-1} (\omega^{n})^{js} = \sum_{j=0}^{n-1} 1 = n. \]
Let now k not be a multiple of n, so that ω^k ≠ 1. Hence,
\[
0 = 1 - 1 = \omega^{n} - 1 = (\omega^{n})^{k} - 1 = (\omega^{k})^{n} - 1
= (\omega^{k} - 1)\left( \omega^{k(n-1)} + \omega^{k(n-2)} + \cdots + \omega^{k} + 1 \right)
= (\omega^{k} - 1) \sum_{j=0}^{n-1} \omega^{jk},
\]
and, since ω^k − 1 ≠ 0, the latter sum is 0. The lemma is proved.
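Relation (3.2.11) is easy to confirm numerically (an illustrative check, not part of the original text):

```python
import cmath

def root_sum(n, k):
    """Sum over j of omega^(jk) for omega = e^(2*pi*i/n), as in (3.2.11)."""
    omega = cmath.exp(2j * cmath.pi / n)
    return sum(omega ** (j * k) for j in range(n))

for n in range(2, 9):
    for k in range(-2 * n, 2 * n + 1):
        expected = n if k % n == 0 else 0   # multiple of n, or not
        assert abs(root_sum(n, k) - expected) < 1e-9
```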


Consider an arbitrary cyclic group C_n of order n,
\[ C_n = \{ e, a, a^{2}, \ldots, a^{n-1} \}, \qquad a^{n} = e, \]
where a is the generating element and e is the unit of the group. It is known that the table of characters {φ_k}, k = 0, ..., n-1, of this group has the form (see [69]):
\[
\begin{array}{c|ccccccc}
 & e & a & a^{2} & \cdots & a^{k} & \cdots & a^{n-1} \\
\hline
\varphi_0 & 1 & 1 & 1 & \cdots & 1 & \cdots & 1 \\
\varphi_1 & 1 & \omega & \omega^{2} & \cdots & \omega^{k} & \cdots & \omega^{n-1} \\
\varphi_2 & 1 & \omega^{2} & \omega^{4} & \cdots & \omega^{2k} & \cdots & \omega^{2(n-1)} \\
\vdots & \vdots & \vdots & \vdots & & \vdots & & \vdots \\
\varphi_k & 1 & \omega^{k} & \omega^{2k} & \cdots & \omega^{kk} & \cdots & \omega^{k(n-1)} \\
\vdots & \vdots & \vdots & \vdots & & \vdots & & \vdots \\
\varphi_{n-1} & 1 & \omega^{n-1} & \omega^{2(n-1)} & \cdots & \omega^{k(n-1)} & \cdots & \omega^{(n-1)(n-1)}
\end{array}
\tag{3.2.12}
\]
where ω is a primitive nth root of unity. The elements of Table (3.2.12) are given by the formula
\[ \varphi_k(a^{m}) = \omega^{km}, \qquad k, m = 0, 1, \ldots, n-1. \tag{3.2.13} \]

Lemma 3.2.1 expresses the fact that the sum of the elements of any row or column of Table (3.2.12), except the first ones, is equal to 0.

Denote by χ_k, k = 0, 1, ..., n-1, the column-vectors of Table (3.2.12), that is,
\[ \chi_k = \left( 1, \omega^{k}, \omega^{2k}, \ldots, \omega^{(n-1)k} \right)^{T}, \qquad k = 0, 1, \ldots, n-1. \tag{3.2.14} \]
Introduce also the following notations:
\[ \tilde{f}_k = \chi_k^{*} \mathbf{f} = \sum_{j=0}^{n-1} \omega^{-jk} f_j, \tag{3.2.15} \]
\[ \tilde{D}_k = D \chi_k = \left( D_0, \; \omega^{k} D_1, \; \omega^{2k} D_2, \; \ldots, \; \omega^{(n-1)k} D_{n-1} \right)^{T}, \tag{3.2.16} \]
\[ \tilde{Q}_{mk} = \chi_m^{*} \tilde{D}_k = \sum_{j=0}^{n-1} \omega^{(k-m)j} D_j, \qquad k, m = 0, 1, \ldots, n-1, \tag{3.2.17} \]

where the sign ∗ means Hermitian conjugation, i.e. \( \chi_k^{*} = \overline{\chi_k}^{T} \), and the operators D_k are given by (3.2.7).

Let us begin the computation of the operators Q̃_{mk}, k, m = 0, 1, ..., n-1. Taking into account (3.2.7), we have
\[
\begin{aligned}
\tilde{Q}_{mk} &= -\frac{c}{2} \sum_{j=0}^{n-1} \omega^{(k-m)j} \left[ \left( \omega^{j} + \omega^{-j} \right) \frac{\partial}{\partial x} + \frac{1}{i} \left( \omega^{j} - \omega^{-j} \right) \frac{\partial}{\partial y} \right] \\
&= -\frac{c}{2} \left\{ \left( \sum_{j=0}^{n-1} \omega^{(k-m+1)j} + \sum_{j=0}^{n-1} \omega^{(k-m-1)j} \right) \frac{\partial}{\partial x}
+ \frac{1}{i} \left( \sum_{j=0}^{n-1} \omega^{(k-m+1)j} - \sum_{j=0}^{n-1} \omega^{(k-m-1)j} \right) \frac{\partial}{\partial y} \right\}.
\end{aligned}
\tag{3.2.18}
\]


Evaluating the sums on the right-hand side of (3.2.18) by applying Lemma 3.2.1, we get
\[
\sum_{j=0}^{n-1} \omega^{(k-m+1)j} = \begin{cases} n, & \text{if } k-m+1 \text{ is a multiple of } n, \\ 0, & \text{otherwise}. \end{cases}
\tag{3.2.19}
\]
Analogously,
\[
\sum_{j=0}^{n-1} \omega^{(k-m-1)j} = \begin{cases} n, & \text{if } k-m-1 \text{ is a multiple of } n, \\ 0, & \text{otherwise}. \end{cases}
\tag{3.2.20}
\]
Since k, m = 0, 1, ..., n-1, we have
\[ -n+2 \le k-m+1 \le n, \qquad -n \le k-m-1 \le n-2, \tag{3.2.21} \]
and therefore there exist only two values of these variables for which the sums (3.2.19) and (3.2.20) are not equal to 0, namely the pairs of values 0, n and -n, 0, respectively. Taking into account (3.2.21), we can rewrite (3.2.19) and (3.2.20) in the following form:
\[
\sum_{j=0}^{n-1} \omega^{(k-m+1)j} = \begin{cases} n, & \text{if } k-m = -1, \\ n, & \text{if } k-m = n-1, \\ 0, & \text{otherwise}, \end{cases}
\tag{3.2.22}
\]
\[
\sum_{j=0}^{n-1} \omega^{(k-m-1)j} = \begin{cases} n, & \text{if } k-m = -n+1, \\ n, & \text{if } k-m = 1, \\ 0, & \text{otherwise}. \end{cases}
\tag{3.2.23}
\]
Substituting (3.2.22) and (3.2.23) into (3.2.18), we obtain the general form of the operators Q̃_{mk}:
\[
\tilde{Q}_{mk} = \begin{cases}
nA, & \text{if } k-m = -n+1, \\
nB, & \text{if } k-m = -1, \\
nA, & \text{if } k-m = 1, \\
nB, & \text{if } k-m = n-1, \\
0, & \text{otherwise},
\end{cases}
\qquad m, k = 0, 1, \ldots, n-1,
\tag{3.2.24}
\]
where A and B are the differential operators
\[
A = -\frac{c}{2} \left( \frac{\partial}{\partial x} + i \frac{\partial}{\partial y} \right), \qquad
B = -\frac{c}{2} \left( \frac{\partial}{\partial x} - i \frac{\partial}{\partial y} \right).
\tag{3.2.25}
\]

Introduce the values
\[ \gamma_{mk} = \chi_m^{*} \chi_k = \sum_{j=0}^{n-1} \omega^{(k-m)j}, \qquad m, k = 0, 1, \ldots, n-1. \tag{3.2.26} \]
According to Lemma 3.2.1,
\[ \gamma_{mk} = \begin{cases} n, & \text{if } k-m \text{ is a multiple of } n, \\ 0, & \text{otherwise}. \end{cases} \]
Since 0 ≤ m, k ≤ n-1, there exists only one value of the variable k − m for which γ_{mk} is not equal to 0, namely the value k − m = 0, or k = m. Thus,
\[ \gamma_{mk} = \begin{cases} n, & \text{if } k = m, \\ 0, & \text{otherwise}. \end{cases} \tag{3.2.27} \]


It is easy to check that the matrix Λ and the vectors χ_k, k = 0, 1, ..., n-1, are connected with each other by the relation
\[
\Lambda \chi_k = \begin{cases} 0, & \text{if } k = 0, \\[4pt] -\dfrac{\lambda n}{n-1}\, \chi_k, & \text{if } 1 \le k \le n-1. \end{cases}
\tag{3.2.28}
\]
This equality means that the vectors χ_k, k = 0, 1, ..., n-1, are eigenvectors of the matrix Λ, and the numbers 0 and -λn/(n-1) are the respective eigenvalues.

Define now the following (n × n)-matrices:
\[
\Pi_k = \frac{1}{n}\, \chi_k \chi_k^{*} = \frac{1}{n}
\begin{pmatrix}
1 & \omega^{-k} & \omega^{-2k} & \cdots & \omega^{-(n-1)k} \\
\omega^{k} & 1 & \omega^{-k} & \cdots & \omega^{-(n-2)k} \\
\omega^{2k} & \omega^{k} & 1 & \cdots & \omega^{-(n-3)k} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\omega^{(n-1)k} & \omega^{(n-2)k} & \omega^{(n-3)k} & \cdots & 1
\end{pmatrix}.
\tag{3.2.29}
\]
The matrices Π_k, k = 0, 1, ..., n-1, have many important properties, some of which are given by the following lemma.

Lemma 3.2.2. For the matrices \( \Pi_k = \| \pi_{ij}^{k} \| \), i, j, k = 0, 1, ..., n-1, the following relations hold:
\[ \sum_{k=0}^{n-1} \Pi_k = E, \tag{3.2.30} \]
\[ \Pi_k \Pi_l = \Pi_l \Pi_k = 0, \qquad k \ne l, \quad k, l = 0, 1, \ldots, n-1, \tag{3.2.31} \]
\[ \Pi_k^{2} = \Pi_k, \tag{3.2.32} \]
where E is the unit (n × n)-matrix.

Proof. We first prove equalities (3.2.31) and (3.2.32). According to (3.2.27) and (3.2.29), for any k and l, 0 ≤ k, l ≤ n-1, we have
\[
\Pi_k \Pi_l = \frac{1}{n^{2}} \left( \chi_k \chi_k^{*} \right) \left( \chi_l \chi_l^{*} \right)
= \frac{1}{n^{2}}\, \chi_k \left( \chi_k^{*} \chi_l \right) \chi_l^{*}
= \frac{1}{n^{2}}\, \gamma_{kl}\, \chi_k \chi_l^{*}
= \begin{cases} \Pi_k, & \text{if } k = l, \\ 0, & \text{otherwise}, \end{cases}
\]
and, therefore, equalities (3.2.31) and (3.2.32) are true.

Let us now prove (3.2.30). First, we note that, according to (3.2.29), the elements π_{ij}^{k}, i, j, k = 0, 1, ..., n-1, of the matrices Π_k are given by the formula
\[ \pi_{ij}^{k} = \frac{1}{n}\, \omega^{ik} \omega^{-jk} = \frac{1}{n}\, \omega^{(i-j)k}. \]
Hence for any i, k, 0 ≤ i, k ≤ n-1,
\[ \pi_{ii}^{k} = \frac{1}{n}, \qquad \sum_{k=0}^{n-1} \pi_{ii}^{k} = 1. \]
If i ≠ j, 0 ≤ i, j ≤ n-1, then, according to Lemma 3.2.1,
\[ \sum_{k=0}^{n-1} \pi_{ij}^{k} = \frac{1}{n} \sum_{k=0}^{n-1} \omega^{(i-j)k} = 0, \]
because in this case i − j is not a multiple of n. These last equalities prove (3.2.30).

Equality (3.2.30) yields a decomposition of the unit matrix E into a sum of the projective (n × n)-matrices Π_k, k = 0, 1, ..., n-1, that are tensor products of the vectors formed from the characters of the finite cyclic group C_n of order n.
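The three relations of Lemma 3.2.2 can be verified numerically for a particular n (an illustrative sketch, not part of the original text; n = 5 is arbitrary):

```python
import cmath

n = 5
omega = cmath.exp(2j * cmath.pi / n)

def Pi(k):
    # (3.2.29): the (i, j) entry of Pi_k is omega^((i-j)k) / n
    return [[omega ** ((i - j) * k) / n for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(n)]
            for i in range(n)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(n) for j in range(n))

Ps = [Pi(k) for k in range(n)]
identity = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
zero = [[0.0] * n for _ in range(n)]

# (3.2.30): the projectors sum to the unit matrix
S = [[sum(P[i][j] for P in Ps) for j in range(n)] for i in range(n)]
assert close(S, identity)
# (3.2.31)-(3.2.32): mutually orthogonal idempotents
for k in range(n):
    for l in range(n):
        assert close(matmul(Ps[k], Ps[l]), Ps[k] if k == l else zero)
```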

3.2.3 Equivalent system of equations

Based on the results of the previous subsections, we now construct a system of equations for the functions f̃_k, k = 0, 1, ..., n-1, defined by (3.2.15). We will show that this system is equivalent to system (3.2.2) in the sense that if the functions f_k, k = 0, 1, ..., n-1, satisfy system (3.2.2), then the functions f̃_k, k = 0, 1, ..., n-1, satisfy the newly constructed system, and vice versa. More precisely, the following statement is true.

Proposition 3.2.2. The functions f̃_k = f̃_k(x,y,t), k = 0, 1, ..., n-1, satisfy the following system of PDEs:
\[
\begin{aligned}
\frac{\partial \tilde{f}_0}{\partial t} &= A \tilde{f}_1 + B \tilde{f}_{n-1}, \\
\frac{\partial \tilde{f}_m}{\partial t} &= A \tilde{f}_{m+1} + B \tilde{f}_{m-1} - \frac{\lambda n}{n-1} \tilde{f}_m, \qquad 1 \le m \le n-2, \\
\frac{\partial \tilde{f}_{n-1}}{\partial t} &= A \tilde{f}_0 + B \tilde{f}_{n-2} - \frac{\lambda n}{n-1} \tilde{f}_{n-1},
\end{aligned}
\tag{3.2.33}
\]

where A and B are given by (3.2.25).

Proof. The system obtained in the vector form (3.2.10) can be represented in the form
\[ \frac{\partial}{\partial t} [E \mathbf{f}] = D [E \mathbf{f}] + \Lambda [E \mathbf{f}], \]
where E is the unit (n × n)-matrix. According to (3.2.30), this equality can be rewritten as follows:
\[
\frac{\partial}{\partial t} \left[ \sum_{k=0}^{n-1} \Pi_k \mathbf{f} \right] = D \left[ \sum_{k=0}^{n-1} \Pi_k \mathbf{f} \right] + \Lambda \left[ \sum_{k=0}^{n-1} \Pi_k \mathbf{f} \right],
\]
or, taking into account (3.2.29),
\[
\frac{\partial}{\partial t} \left[ \sum_{k=0}^{n-1} \chi_k \left( \chi_k^{*} \mathbf{f} \right) \right]
= \sum_{k=0}^{n-1} \left( D \chi_k \right) \left( \chi_k^{*} \mathbf{f} \right)
+ \sum_{k=0}^{n-1} \left( \Lambda \chi_k \right) \left( \chi_k^{*} \mathbf{f} \right).
\]
In view of the notations (3.2.15), (3.2.16) introduced above and relation (3.2.28), we get
\[
\frac{\partial}{\partial t} \left[ \sum_{k=0}^{n-1} \chi_k \tilde{f}_k \right]
= \sum_{k=0}^{n-1} \tilde{D}_k \tilde{f}_k - \frac{\lambda n}{n-1} \sum_{k=1}^{n-1} \chi_k \tilde{f}_k.
\tag{3.2.34}
\]
Multiplying equality (3.2.34) from the left by the row-vector χ_m^*, m = 0, 1, ..., n-1, we arrive at the system
\[
\frac{\partial}{\partial t} \left[ \sum_{k=0}^{n-1} \left( \chi_m^{*} \chi_k \right) \tilde{f}_k \right]
= \sum_{k=0}^{n-1} \left( \chi_m^{*} \tilde{D}_k \right) \tilde{f}_k - \frac{\lambda n}{n-1} \sum_{k=1}^{n-1} \left( \chi_m^{*} \chi_k \right) \tilde{f}_k,
\qquad m = 0, 1, \ldots, n-1,
\]
or, taking into account the notations (3.2.17) and (3.2.26),
\[
\frac{\partial}{\partial t} \left[ \sum_{k=0}^{n-1} \gamma_{mk} \tilde{f}_k \right]
= \sum_{k=0}^{n-1} \tilde{Q}_{mk} \tilde{f}_k - \frac{\lambda n}{n-1} \sum_{k=1}^{n-1} \gamma_{mk} \tilde{f}_k,
\qquad m = 0, 1, \ldots, n-1.
\tag{3.2.35}
\]
In view of (3.2.27),
\[ \sum_{k=0}^{n-1} \gamma_{mk} \tilde{f}_k = n \tilde{f}_m, \qquad m = 0, 1, \ldots, n-1. \tag{3.2.36} \]
On the other hand, taking into account (3.2.24), we have
\[
\sum_{k=0}^{n-1} \tilde{Q}_{mk} \tilde{f}_k =
\begin{cases}
nA \tilde{f}_1 + nB \tilde{f}_{n-1}, & \text{if } m = 0, \\
nA \tilde{f}_{m+1} + nB \tilde{f}_{m-1}, & \text{if } 1 \le m \le n-2, \\
nA \tilde{f}_0 + nB \tilde{f}_{n-2}, & \text{if } m = n-1.
\end{cases}
\tag{3.2.37}
\]
Substituting (3.2.36) and (3.2.37) into (3.2.35), we obtain system (3.2.33).
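The eigenvalue relation (3.2.28), which the proof above relies upon, can be confirmed numerically (an illustrative check, not from the original text; the value of λ is arbitrary):

```python
import cmath

def eigen_error(n, lam):
    """Max |Lambda chi_k - mu_k chi_k| over k, per (3.2.28)."""
    omega = cmath.exp(2j * cmath.pi / n)
    # (3.2.9): -lam on the diagonal, lam/(n-1) elsewhere
    Lam = [[-lam if i == j else lam / (n - 1) for j in range(n)]
           for i in range(n)]
    err = 0.0
    for k in range(n):
        chi = [omega ** (j * k) for j in range(n)]            # (3.2.14)
        image = [sum(Lam[i][j] * chi[j] for j in range(n)) for i in range(n)]
        mu = 0.0 if k == 0 else -lam * n / (n - 1)            # eigenvalues
        err = max(err, max(abs(image[i] - mu * chi[i]) for i in range(n)))
    return err

assert all(eigen_error(n, 0.7) < 1e-12 for n in range(2, 10))
```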

3.2.4 Partial differential equation

In this subsection we derive a partial differential equation for the function f̃_0 = f̃_0(x,y,t), which will complete the proof of the Main Theorem. First of all, we point out that the function f̃_0, being, in accordance with (3.2.15), the sum of the functions (3.2.1), is the only one among the functions f̃_k(x,y,t), k = 0, 1, ..., n-1, that has a definite probabilistic meaning. That is why it is of special interest for us.

Rewrite the system (3.2.33) from Proposition 3.2.2 in detail:
\[
\begin{aligned}
\frac{\partial \tilde{f}_0}{\partial t} &= A \tilde{f}_1 + B \tilde{f}_{n-1}, \\
\frac{\partial \tilde{f}_1}{\partial t} &= A \tilde{f}_2 + B \tilde{f}_0 - \frac{\lambda n}{n-1} \tilde{f}_1, \\
&\;\;\vdots \\
\frac{\partial \tilde{f}_{n-1}}{\partial t} &= A \tilde{f}_0 + B \tilde{f}_{n-2} - \frac{\lambda n}{n-1} \tilde{f}_{n-1}.
\end{aligned}
\tag{3.2.38}
\]
We notice that the operators A and B given by (3.2.25) commute with each other and with the operator ∂/∂t, and the following relation holds:
\[ Q = AB = BA = \frac{c^{2}}{4}\,\Delta, \tag{3.2.39} \]
where
\[ \Delta = \frac{\partial^{2}}{\partial x^{2}} + \frac{\partial^{2}}{\partial y^{2}} \]
is the two-dimensional Laplace operator.

Let us write down the elementary identity
\[ A^{0} \tilde{f}_0 + B^{0} \tilde{f}_0 = 2 I \tilde{f}_0, \tag{3.2.40} \]
where A⁰ = B⁰ = I and I is the identity operator. Let us also rewrite the first equation of (3.2.38) in the following manner:
\[ A^{1} \tilde{f}_1 + B^{1} \tilde{f}_{n-1} = \frac{\partial \tilde{f}_0}{\partial t}. \tag{3.2.41} \]
If we denote by R_0 and R_1 the operators on the right-hand sides of (3.2.40) and (3.2.41), respectively, that is,
\[ R_0 = 2I, \qquad R_1 = \frac{\partial}{\partial t}, \tag{3.2.42} \]
then (3.2.40) and (3.2.41) can be rewritten in the form
\[ A^{0} \tilde{f}_0 + B^{0} \tilde{f}_0 = R_0 \tilde{f}_0, \qquad A^{1} \tilde{f}_1 + B^{1} \tilde{f}_{n-1} = R_1 \tilde{f}_0. \tag{3.2.43} \]
Introduce the operator
\[ Z = \frac{\partial}{\partial t} + \frac{\lambda n}{n-1}. \tag{3.2.44} \]

The following result is true.

Proposition 3.2.3. For any m, 0 ≤ m ≤ n-1, the following relation holds:
\[ A^{m} \tilde{f}_m + B^{m} \tilde{f}_{n-m} = R_m \tilde{f}_0, \qquad \tilde{f}_n \stackrel{\text{def}}{=} \tilde{f}_0, \tag{3.2.45} \]
where the operators R_m, m ≥ 2, are given by the recurrence relation
\[ R_m = Z R_{m-1} - Q R_{m-2}, \qquad m \ge 2, \tag{3.2.46} \]
and the operators R_0, R_1 are defined by (3.2.42).

Proof. We prove this proposition by induction in m. Clearly, for m = 0 equality (3.2.45) turns into identity (3.2.40). Let now m = 1. In this case (3.2.45) becomes (3.2.41), and therefore (3.2.45) holds for m = 1 as well.

Let us prove formula (3.2.45) for m = 2. Differentiating the first equation in (3.2.38) with respect to t and using the second and the last equations of (3.2.38), then taking into account (3.2.39), we get
\[
\begin{aligned}
\frac{\partial^{2} \tilde{f}_0}{\partial t^{2}} &= A \frac{\partial \tilde{f}_1}{\partial t} + B \frac{\partial \tilde{f}_{n-1}}{\partial t} \\
&= A \left( A \tilde{f}_2 + B \tilde{f}_0 - \frac{\lambda n}{n-1} \tilde{f}_1 \right) + B \left( A \tilde{f}_0 + B \tilde{f}_{n-2} - \frac{\lambda n}{n-1} \tilde{f}_{n-1} \right) \\
&= A^{2} \tilde{f}_2 + B^{2} \tilde{f}_{n-2} + 2 Q \tilde{f}_0 - \frac{\lambda n}{n-1} \left[ A \tilde{f}_1 + B \tilde{f}_{n-1} \right].
\end{aligned}
\]
Then, taking into account (3.2.42), (3.2.43) and (3.2.44), we obtain
\[
\begin{aligned}
A^{2} \tilde{f}_2 + B^{2} \tilde{f}_{n-2}
&= \frac{\partial^{2} \tilde{f}_0}{\partial t^{2}} + \frac{\lambda n}{n-1} \frac{\partial \tilde{f}_0}{\partial t} - 2 Q \tilde{f}_0
= \left( \frac{\partial}{\partial t} + \frac{\lambda n}{n-1} \right) \frac{\partial}{\partial t} \tilde{f}_0 - Q (2I) \tilde{f}_0 \\
&= [Z R_1 - Q R_0] \tilde{f}_0 = R_2 \tilde{f}_0.
\end{aligned}
\]
Thus, equality (3.2.45) and recurrence relation (3.2.46) hold for m = 2 as well.

Suppose now that relations (3.2.45) and (3.2.46) are true for all indices up to some l − 1 and l, where l ≥ 1, that is,
\[ A^{l-1} \tilde{f}_{l-1} + B^{l-1} \tilde{f}_{n-(l-1)} = R_{l-1} \tilde{f}_0, \qquad A^{l} \tilde{f}_l + B^{l} \tilde{f}_{n-l} = R_l \tilde{f}_0. \tag{3.2.47} \]
Differentiating the second equality in (3.2.47) in t and using the equations of system (3.2.33), we have
\[
A^{l} \left( A \tilde{f}_{l+1} + B \tilde{f}_{l-1} - \frac{\lambda n}{n-1} \tilde{f}_l \right)
+ B^{l} \left( A \tilde{f}_{n-l+1} + B \tilde{f}_{n-l-1} - \frac{\lambda n}{n-1} \tilde{f}_{n-l} \right)
= \frac{\partial}{\partial t} R_l \tilde{f}_0,
\]
or
\[
\left[ A^{l+1} \tilde{f}_{l+1} + B^{l+1} \tilde{f}_{n-(l+1)} \right]
+ AB \left[ A^{l-1} \tilde{f}_{l-1} + B^{l-1} \tilde{f}_{n-(l-1)} \right]
- \frac{\lambda n}{n-1} \left[ A^{l} \tilde{f}_l + B^{l} \tilde{f}_{n-l} \right]
= \frac{\partial}{\partial t} R_l \tilde{f}_0.
\]
From this equality, in accordance with the induction assumption (3.2.47), we obtain the relation
\[
\begin{aligned}
A^{l+1} \tilde{f}_{l+1} + B^{l+1} \tilde{f}_{n-(l+1)}
&= \frac{\partial}{\partial t} R_l \tilde{f}_0 + \frac{\lambda n}{n-1} R_l \tilde{f}_0 - Q R_{l-1} \tilde{f}_0
= \left( \frac{\partial}{\partial t} + \frac{\lambda n}{n-1} \right) R_l \tilde{f}_0 - Q R_{l-1} \tilde{f}_0 \\
&= [Z R_l - Q R_{l-1}] \tilde{f}_0 = R_{l+1} \tilde{f}_0,
\end{aligned}
\]
proving the proposition.

The following proposition yields an equation for the function f̃_0(x,y,t).

Proposition 3.2.4. The function f̃_0 = f̃_0(x,y,t) satisfies the equation
\[ [A^{n} + B^{n}] \tilde{f}_0 = R_n \tilde{f}_0, \tag{3.2.48} \]
where the operator R_n is given by the recurrence formula (3.2.46).

Proof. Write down equality (3.2.45) for m = n-1:
\[ A^{n-1} \tilde{f}_{n-1} + B^{n-1} \tilde{f}_1 = R_{n-1} \tilde{f}_0. \]
Let us apply the operator Z to this equality. Since Z commutes with the powers of the operators A and B, then taking into account (3.2.38) and (3.2.39), we get
\[
\begin{aligned}
Z R_{n-1} \tilde{f}_0 &= A^{n-1} Z \tilde{f}_{n-1} + B^{n-1} Z \tilde{f}_1
= A^{n-1} \left[ A \tilde{f}_0 + B \tilde{f}_{n-2} \right] + B^{n-1} \left[ A \tilde{f}_2 + B \tilde{f}_0 \right] \\
&= [A^{n} + B^{n}] \tilde{f}_0 + Q \left[ A^{n-2} \tilde{f}_{n-2} + B^{n-2} \tilde{f}_2 \right].
\end{aligned}
\]
According to (3.2.45), the expression in the last square brackets on the right-hand side is R_{n-2} f̃_0. Transferring it to the left-hand side and taking into account (3.2.46), we obtain
\[
[A^{n} + B^{n}] \tilde{f}_0 = Z R_{n-1} \tilde{f}_0 - Q R_{n-2} \tilde{f}_0 = [Z R_{n-1} - Q R_{n-2}] \tilde{f}_0 = R_n \tilde{f}_0,
\]
proving (3.2.48).

Equality (3.2.48) is precisely the equation satisfied by the function f̃_0. It remains only to find the explicit forms of the operators A^n + B^n and R_n.


Proposition 3.2.5. For any n ≥ 0, the following equality holds:
\[
A^{n} + B^{n} = 2 \left( -\frac{c}{2} \right)^{\!n} \sum_{k=0}^{[n/2]} (-1)^{k} \binom{n}{2k} \left( \frac{\partial}{\partial x} \right)^{\!n-2k} \left( \frac{\partial}{\partial y} \right)^{\!2k},
\tag{3.2.49}
\]
where [·] means the integer part of a number.

Proof. Since the operators ∂/∂x and ∂/∂y commute with each other, the binomial theorem yields
\[
\begin{aligned}
A^{n} + B^{n} &= \left( -\frac{c}{2} \right)^{\!n} \sum_{k=0}^{n} \binom{n}{k} \left( 1 + (-1)^{k} \right) i^{k} \left( \frac{\partial}{\partial x} \right)^{\!n-k} \left( \frac{\partial}{\partial y} \right)^{\!k} \\
&= 2 \left( -\frac{c}{2} \right)^{\!n} \sum_{\substack{k=0 \\ k \text{ even}}}^{n} \binom{n}{k} i^{k} \left( \frac{\partial}{\partial x} \right)^{\!n-k} \left( \frac{\partial}{\partial y} \right)^{\!k} \\
&= 2 \left( -\frac{c}{2} \right)^{\!n} \sum_{k=0}^{[n/2]} (-1)^{k} \binom{n}{2k} \left( \frac{\partial}{\partial x} \right)^{\!n-2k} \left( \frac{\partial}{\partial y} \right)^{\!2k}.
\end{aligned}
\]

Proposition 3.2.6. For any m ≥ 2, the following relation holds:
\[ R_m = L_m R_1 - N_m R_0, \tag{3.2.50} \]
where the operators L_m and N_m are given by the formulas
\[ L_m = \sum_{k=1}^{[(m+1)/2]} (-1)^{k-1} \binom{m-k}{k-1} Z^{m-2k+1} Q^{k-1}, \tag{3.2.51} \]
\[ N_m = \sum_{k=1}^{[m/2]} (-1)^{k-1} \binom{m-k-1}{k-1} Z^{m-2k} Q^{k}. \tag{3.2.52} \]

Proof. Clearly, for m = 2 we obtain L_2 = Z and N_2 = Q. Hence (3.2.50) yields the equality R_2 = Z R_1 − Q R_0, which is true because it coincides with (3.2.46) for m = 2. Suppose that (3.2.50) is true for all numbers up to some m − 1 and m, m ≥ 2. We will prove (3.2.50) for the number m + 1. According to Proposition 3.2.3 and the induction assumption, we have
\[
\begin{aligned}
R_{m+1} &= Z R_m - Q R_{m-1}
= Z \left( L_m R_1 - N_m R_0 \right) - Q \left( L_{m-1} R_1 - N_{m-1} R_0 \right) \\
&= \left( Z L_m - Q L_{m-1} \right) R_1 - \left( Z N_m - Q N_{m-1} \right) R_0.
\end{aligned}
\]
Therefore, in order to prove the proposition, one needs to establish that
\[ L_{m+1} = Z L_m - Q L_{m-1}, \qquad N_{m+1} = Z N_m - Q N_{m-1}. \tag{3.2.53} \]
We will prove the first relation in (3.2.53) for even m only. The other equalities in (3.2.53) can be proved in the same manner and are left to the reader.


So, let m be even, and therefore m − 1 and m + 1 are odd. Then, according to the induction assumption and well-known combinatorial identities, we have
\[
\begin{aligned}
Z L_m - Q L_{m-1}
&= \sum_{k=1}^{m/2} (-1)^{k-1} \binom{m-k}{k-1} Z^{(m+1)-2k+1} Q^{k-1}
+ \sum_{k=2}^{(m/2)+1} (-1)^{k-1} \binom{m-k}{k-2} Z^{(m+1)-2k+1} Q^{k-1} \\
&= (-1)^{1-1} \binom{(m+1)-1}{1-1} Z^{(m+1)-2 \cdot 1+1} Q^{1-1}
+ \sum_{k=2}^{m/2} (-1)^{k-1} \binom{(m+1)-k}{k-1} Z^{(m+1)-2k+1} Q^{k-1} \\
&\qquad + (-1)^{(\frac{m}{2}+1)-1} \binom{(m+1)-(\frac{m}{2}+1)}{(\frac{m}{2}+1)-1} Z^{(m+1)-2(\frac{m}{2}+1)+1} Q^{(\frac{m}{2}+1)-1} \\
&= \sum_{k=1}^{((m+1)+1)/2} (-1)^{k-1} \binom{(m+1)-k}{k-1} Z^{(m+1)-2k+1} Q^{k-1}
= L_{m+1},
\end{aligned}
\]
proving the first relation in (3.2.53) for even m.

To end the proof of the Main Theorem it remains only to substitute into equation (3.2.48) the explicit forms of the operators A^n + B^n and R_n given by Propositions 3.2.5 and 3.2.6, respectively. The hyperbolicity of the main equation (3.1.2) follows from the hyperbolicity of the initial system (3.2.2). The Main Theorem 3.1.1 is completely proved.

Remark 3.2.1. Since the random events {ζ(t) = E_k}, k = 0, 1, ..., n-1, are disjoint and form a complete group of events, then, according to (3.2.1) and (3.2.15),
\[ \tilde{f}_0(x,y,t)\,dx\,dy = \Pr\{ x \le X(t) < x+dx, \; y \le Y(t) < y+dy \}. \]
From the physical meaning of the model it follows that f̃_0(x,y,t) ≥ 0 and
\[ \iint_{M_t} \tilde{f}_0(x,y,t)\,dx\,dy = 1 \]

for any fixed t > 0. Therefore, fe0 (x, y, t) = f (x, y, t) is the density of the distribution of the particle’s position in a plane. The function fe0 (x, y, t) ≡ f (x, y, t) describes the behaviour of this density spreading from the starting point outwards with finite velocity.
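Since Z and Q commute, the recurrence (3.2.46) together with the closed forms (3.2.50)-(3.2.52), as well as the identity (3.2.49), can be checked by replacing the operators with commuting scalars (an illustrative sketch, not part of the original text):

```python
import math
import random

def L_poly(m, Z, Q):
    # closed form (3.2.51)
    return sum((-1)**(k - 1) * math.comb(m - k, k - 1)
               * Z**(m - 2*k + 1) * Q**(k - 1)
               for k in range(1, (m + 1)//2 + 1))

def N_poly(m, Z, Q):
    # closed form (3.2.52)
    return sum((-1)**(k - 1) * math.comb(m - k - 1, k - 1)
               * Z**(m - 2*k) * Q**k
               for k in range(1, m//2 + 1))

rng = random.Random(2)
for _ in range(50):
    Z, Q, R0, R1 = (rng.uniform(-1.5, 1.5) for _ in range(4))
    R_prev, R_cur = R0, R1
    for m in range(2, 13):
        R_prev, R_cur = R_cur, Z * R_cur - Q * R_prev         # recurrence (3.2.46)
        closed = L_poly(m, Z, Q) * R1 - N_poly(m, Z, Q) * R0  # formula (3.2.50)
        assert math.isclose(R_cur, closed, rel_tol=1e-9, abs_tol=1e-9)
    # (3.2.49): A^n + B^n with scalars x, y in place of the derivatives
    c, x, y = (rng.uniform(0.1, 1.5) for _ in range(3))
    for n in range(2, 9):
        A, B = -c/2 * complex(x, y), -c/2 * complex(x, -y)
        rhs = 2 * (-c/2)**n * sum((-1)**k * math.comb(n, 2*k)
                                  * x**(n - 2*k) * y**(2*k)
                                  for k in range(n//2 + 1))
        assert abs(A**n + B**n - rhs) < 1e-9
```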

3.3 Diffusion area

In this section we study the behaviour of the diffusion area of the wave whose spreading is described by the function f̃_0(x,y,t). We are interested in the minimal part M_t of the plane R² where the particle is located, with probability 1, at time instant t after starting from some initial point. Without loss of generality, one can consider that the particle starts from the origin 0 = (0,0) ∈ R². Moreover, in order to avoid the degenerate case of an evolution on the line, we suppose that n ≥ 3.

Consider the finite set
\[ \mathcal{E} = \{ \mathbf{e}_0, \mathbf{e}_1, \ldots, \mathbf{e}_{n-1} \}, \qquad n \ge 3, \]
where the e_k are the unit column-vectors
\[ \mathbf{e}_k = \begin{pmatrix} \cos\frac{2\pi k}{n} \\[2pt] \sin\frac{2\pi k}{n} \end{pmatrix}, \qquad k = 0, 1, \ldots, n-1. \]
Each e_k is the vector of unit length originating at the origin 0 and directed at the angle 2πk/n to the x-axis. In other words, e_k is the vector of length 1 whose direction coincides with that of the direction E_k. For arbitrary fixed t > 0 we define the set
\[
S_t = \bigcup_{N \ge 0} \left\{ \mathbf{x} \in \mathbb{R}^{2} : \; \mathbf{x} = \sum_{k=0}^{N} c\, t_k\, \mathbf{e}_{i_k}, \quad \sum_{k=0}^{N} t_k \le t, \quad t_k \ge 0, \quad \mathbf{e}_{i_k} \in \mathcal{E} \right\}.
\]
The set S_t represents the part of the plane R² covered by all the broken lines of length at most ct that originate at 0 and have a finite number of break points. Each such broken line consists of a finite number of segments of random length whose orientations are multiples of the angle 2π/n. From the probabilistic point of view, S_t is the set of points of R² that can be visited by the particle during time t after the start. Obviously, S_t is exactly the minimal part of the plane where the particle, with probability 1, is located by time t after starting from the origin 0. The following theorem states that S_t is the regular n-gon M_t with symmetry centre at the origin 0 and with distance ct to each of its vertices.

Theorem 3.3.1. For any t > 0, the following relation holds:
\[ S_t = M_t. \tag{3.3.1} \]

Proof. First, we note that the set S_t contains all the vertices of the n-gon M_t, namely, the points of the plane with the coordinates
\[ \left( ct \cos\frac{2\pi k}{n}, \; ct \sin\frac{2\pi k}{n} \right), \qquad k = 0, 1, \ldots, n-1. \]
Indeed, the vertex with number k is attained if, at the start, the particle chooses the direction E_k (the probability of this event is 1/n for any k) and then does not change this initial direction during time t (the probability of this event is e^{-λt}). Hence, any vertex of the n-gon M_t is attainable in time t with the probability (1/n)e^{-λt}.

Let us show that the set S_t is convex. Let x, y ∈ S_t, that is,
\[
\mathbf{x} = \sum_{k=0}^{N_1} c\, t_k'\, \mathbf{e}_{i_k}, \qquad \sum_{k=0}^{N_1} t_k' \le t, \quad N_1 \ge 0,
\]
\[
\mathbf{y} = \sum_{j=0}^{N_2} c\, t_j''\, \mathbf{e}_{i_j}, \qquad \sum_{j=0}^{N_2} t_j'' \le t, \quad N_2 \ge 0.
\]
Then, for arbitrary τ ∈ [0,1], we have
\[
\tau \mathbf{x} + (1-\tau) \mathbf{y} = \sum_{k=0}^{N_1} c\, (\tau t_k')\, \mathbf{e}_{i_k} + \sum_{j=0}^{N_2} c \left[ (1-\tau) t_j'' \right] \mathbf{e}_{i_j} = \sum_{s=0}^{N_1+N_2} c\, \gamma_s\, \mathbf{e}_{i_s},
\]
where the γ_s are some nonnegative numbers. Thus, the point τx + (1-τ)y of the plane is representable in the form of a finite linear combination of vectors from the set above. It remains to prove that
\[ \sum_{s=0}^{N_1+N_2} \gamma_s \le t. \]
We have
\[
\sum_{s=0}^{N_1+N_2} \gamma_s = \sum_{k=0}^{N_1} \tau t_k' + \sum_{j=0}^{N_2} (1-\tau) t_j''
= \tau \left[ \sum_{k=0}^{N_1} t_k' - \sum_{j=0}^{N_2} t_j'' \right] + \sum_{j=0}^{N_2} t_j''.
\]
If
\[ \sum_{k=0}^{N_1} t_k' \ge \sum_{j=0}^{N_2} t_j'', \]
then, since 0 ≤ τ ≤ 1, we get
\[
\sum_{s=0}^{N_1+N_2} \gamma_s = \tau \left[ \sum_{k=0}^{N_1} t_k' - \sum_{j=0}^{N_2} t_j'' \right] + \sum_{j=0}^{N_2} t_j''
\le \left[ \sum_{k=0}^{N_1} t_k' - \sum_{j=0}^{N_2} t_j'' \right] + \sum_{j=0}^{N_2} t_j''
= \sum_{k=0}^{N_1} t_k' \le t.
\]
If
\[ \sum_{k=0}^{N_1} t_k' \le \sum_{j=0}^{N_2} t_j'', \]
then
\[
\sum_{s=0}^{N_1+N_2} \gamma_s = \tau \left[ \sum_{k=0}^{N_1} t_k' - \sum_{j=0}^{N_2} t_j'' \right] + \sum_{j=0}^{N_2} t_j'' \le \sum_{j=0}^{N_2} t_j'' \le t.
\]
Hence, we have shown that
\[ \sum_{s=0}^{N_1+N_2} \gamma_s \le t, \qquad \gamma_s \ge 0, \]
and this means that τx + (1-τ)y ∈ S_t; the set S_t is, therefore, convex.

Since S_t is convex and contains all the vertices of the regular n-gon M_t, the following inclusion holds for any t > 0:
\[ M_t \subseteq S_t. \]
We will prove that in this inclusion equality takes place. To do this, one needs to show that any point ξ ∉ M_t is unreachable in time t. For the sake of definiteness, let us take the zeroth and the first vertices of the n-gon M_t, with the coordinates (ct, 0) and (ct cos(2π/n), ct sin(2π/n)), respectively. The equation of the straight line passing through these two points is
\[ x \cos\frac{\pi}{n} + y \sin\frac{\pi}{n} - ct \cos\frac{\pi}{n} = 0. \]


Let us take an arbitrary point
\[ \xi = \begin{pmatrix} x_0 \\ y_0 \end{pmatrix} \notin M_t. \]
Our aim is to prove that ξ ∉ S_t. Suppose the opposite, that is, ξ ∈ S_t. In this case, according to the definition of S_t, there exists an integer N_0 ≥ 0 such that
\[
\xi = \begin{pmatrix} x_0 \\ y_0 \end{pmatrix} = \sum_{k=0}^{N_0} c\, t_k\, \mathbf{e}_{i_k}
= \begin{pmatrix} \displaystyle \sum_{k=0}^{N_0} c\, t_k \cos\frac{2\pi i_k}{n} \\[10pt] \displaystyle \sum_{k=0}^{N_0} c\, t_k \sin\frac{2\pi i_k}{n} \end{pmatrix},
\qquad \sum_{k=0}^{N_0} t_k \le t, \quad t_k \ge 0.
\tag{3.3.2}
\]
Since ξ ∉ M_t, the following strict inequality must hold:
\[ x_0 \cos\frac{\pi}{n} + y_0 \sin\frac{\pi}{n} - ct \cos\frac{\pi}{n} > 0, \]
or
\[
\sum_{k=0}^{N_0} t_k \cos\frac{2\pi i_k}{n} \cos\frac{\pi}{n} + \sum_{k=0}^{N_0} t_k \sin\frac{2\pi i_k}{n} \sin\frac{\pi}{n} - t \cos\frac{\pi}{n} > 0.
\]
By transforming this inequality, we get
\[
\sum_{k=0}^{N_0} t_k \left( \cos\frac{2\pi i_k}{n} \cos\frac{\pi}{n} + \sin\frac{2\pi i_k}{n} \sin\frac{\pi}{n} \right) - t \cos\frac{\pi}{n} > 0,
\]
or
\[ \sum_{k=0}^{N_0} t_k \cos\frac{\pi(2 i_k - 1)}{n} > t \cos\frac{\pi}{n}. \]
Since n ≥ 3, we have cos(π/n) > 0, and this inequality can therefore be rewritten as follows:
\[ \sum_{k=0}^{N_0} t_k\, \frac{\cos\frac{\pi(2 i_k - 1)}{n}}{\cos\frac{\pi}{n}} > t. \]
We notice now that, since the numbers π(2i_k − 1)/n, i_k = 0, 1, ..., n-1, cannot be multiples of 2π,
\[ \kappa \le \frac{\cos\frac{\pi(2 i_k - 1)}{n}}{\cos\frac{\pi}{n}} \le 1 \qquad \text{for any } i_k = 0, 1, \ldots, n-1, \]
where κ ≤ 0 is some nonpositive constant. Therefore,
\[ \sum_{k=0}^{N_0} t_k \ge \sum_{k=0}^{N_0} t_k\, \frac{\cos\frac{\pi(2 i_k - 1)}{n}}{\cos\frac{\pi}{n}} > t, \]
and we obtain the strict inequality
\[ \sum_{k=0}^{N_0} t_k > t, \]
which contradicts (3.3.2). This contradiction proves that ξ ∉ S_t. From this fact it immediately follows that M_t = S_t. The theorem is proved.

Markov Random Flights

Thus, we have shown that the diffusion area of the wave, whose propagation is described by function fe0 (x, y, t), represents the right n-gon Mt with the vertex coordinates   2πk 2πk , ct sin , k = 0, 1, . . . , n − 1, n ≥ 3. ct cos n n Clearly, this n-gon can also be given by a system of n straight lines (inequalities) in the plane or to give an explicit analytical expression for the coordinates of an arbitrary point of the perimeter in terms of the angle between the x-axis and the ray originating at 0 and passing through this point.

3.4

Polynomial representations of the generator

In this section we demonstrate a very interesting and fairly unexpected connection between Chebyshev polynomials of two variables introduced in subsection 1.5.4 and the governing operator Hn given by (3.1.3) of the stochastic motion Θ(t) with n, n ≥ 2, directions in the plane R2 . The Main Theorem 3.1.1 states that the absolutely continuous part of the probability density f (x, y, t), (x, y) ∈ R2 , t > 0, of the process Θ(t) exists, is smooth and satisfies the hyperbolic nth-order equation (3.1.2) with the operator ∂ Hn = ∂t

[(n+1)/2]

X

(−1)

k−1

k=1 [n/2]

−2

X

k−1

(−1)

k=1



 n−k Z n−2k+1 Qk−1 k−1

  n − k − 1 n−2k k Z Q k−1

(3.4.1)

   n−2k  2k  c n [n/2] X n ∂ ∂ (−1)k , −2 − 2 2k ∂x ∂y k=0

λn ∂ + , Z= ∂t n − 1

c2 Q = ∆, 4

  n n! , = k k! (n − k)!

n ≥ 2,

(3.4.2)

where Δ is the two-dimensional Laplace operator and [·] means the integer part of a number.

First of all, following the results of subsection 1.5.4, we should construct an appropriate Banach algebra. Denote D = R¹ × R¹ × (0, ∞) and let μ(·) be the Lebesgue measure in D. Let L₁(D) be the Banach space of the functions in D whose moduli are integrable with respect to the measure μ, with the norm
\[ \| f \| = \int_{D} |f| \, \mu(ds). \tag{3.4.3} \]
Denote by C^∞(D) the space of the functions in D that are smooth with respect to each of the variables. Consider the intersection
\[ M(D) = C^{\infty}(D) \cap L_1(D). \]
Obviously, M(D) is not empty, and every smooth function possessing the properties of a probability density is contained in M(D).

Denote by \( \overline{M}(D) \) the closure of the subspace M(D) in the norm (3.4.3). Then \( \overline{M}(D) \) becomes a Banach space as a closed subspace of the Banach space L₁(D). Clearly, all the smooth solutions of equation (3.1.2) are elements of the Banach space \( \overline{M}(D) \). Since the differential operators acting in \( \overline{M}(D) \) are bounded, they are elements of the two-sided ideal (subalgebra) of bounded operators of the commutative algebra of closed operators acting in \( \overline{M}(D) \) [74, Chapter V]. Since \( \overline{M}(D) \) is a Banach space, the set of elements of the commutative algebra of bounded operators acting in \( \overline{M}(D) \) forms a Banach space too, and therefore it is a commutative Banach algebra. Let us denote it by \( R(\overline{M}(D)) \). Thus, the differential operator (3.4.1), as well as its components, are elements of the Banach algebra \( R(\overline{M}(D)) \).

With this in hand, we can now transform operator (3.4.1) by means of the Chebyshev polynomials of two variables introduced above over the commutative Banach algebra \( R(\overline{M}(D)) \). Consider the first term of operator (3.4.1). We have
\[
\begin{aligned}
\frac{\partial}{\partial t} \sum_{k=1}^{[(n+1)/2]} (-1)^{k-1} \binom{n-k}{k-1} Z^{n-2k+1} Q^{k-1}
&= \frac{\partial}{\partial t} \sum_{k=0}^{[(n-1)/2]} (-1)^{k} \binom{n-k-1}{k} Z^{n-2k-1} Q^{k} \\
&= \frac{\partial}{\partial t} \sum_{k=0}^{[(n-1)/2]} (-1)^{k} \binom{(n-1)-k}{k} Z^{(n-1)-2k} Q^{k} \\
&= \frac{\partial}{\partial t} \sum_{k=0}^{[(n-1)/2]} (-1)^{k} \frac{((n-1)-k)!}{k! \, ((n-1)-2k)!} Z^{(n-1)-2k} Q^{k} \\
&= \frac{\partial}{\partial t} \, U_{n-1}\!\left( \tfrac{1}{2} Z, \tfrac{1}{2} Q \right),
\end{aligned}
\tag{3.4.4}
\]

∂ ∂t

X

(−1)k

where U_{n−1}(½Z, ½Q) are the generalized Chebyshev polynomials of the second kind over R(M̄(D)) defined by (1.5.32). Note that the right-hand side of (3.4.4) is treated as the product of the elements ∂/∂t and U_{n−1}(½Z, ½Q) of the Banach algebra R(M̄(D)). Analogously, for the second term in (3.4.1) we obtain:

\[
\begin{aligned}
\sum_{k=1}^{[n/2]} (-1)^{k-1}\binom{n-k-1}{k-1} Z^{n-2k}Q^{k}
&= Q\sum_{k=1}^{[n/2]} (-1)^{k-1}\binom{n-k-1}{k-1} Z^{n-2k}Q^{k-1}\\
&= Q\sum_{k=0}^{[(n-2)/2]} (-1)^{k}\binom{n-k-2}{k} Z^{n-2k-2}Q^{k}\\
&= Q\sum_{k=0}^{[(n-2)/2]} (-1)^{k}\,\frac{((n-2)-k)!}{k!\,((n-2)-2k)!}\, Z^{(n-2)-2k}Q^{k}\\
&= Q\,U_{n-2}\!\left(\frac{1}{2}Z, \frac{1}{2}Q\right).
\end{aligned}
\]

Finally, according to Proposition 3.2.5, for the third term of operator (3.4.1) we have:

\[
2\left(-\frac{c}{2}\right)^{n}\sum_{k=0}^{[n/2]} (-1)^{k}\binom{n}{2k}\left(\frac{\partial}{\partial x}\right)^{n-2k}\left(\frac{\partial}{\partial y}\right)^{2k} = A^{n} + B^{n},
\]


Markov Random Flights

where (see (3.2.25))

\[
A = -\frac{c}{2}\left(\frac{\partial}{\partial x} + i\,\frac{\partial}{\partial y}\right), \qquad
B = -\frac{c}{2}\left(\frac{\partial}{\partial x} - i\,\frac{\partial}{\partial y}\right), \qquad i = \sqrt{-1}.
\]

Using now identity (1.5.42) and taking into account that

\[
A + B = -c\,\frac{\partial}{\partial x}, \qquad AB = \frac{c^2}{4}\,\Delta = Q,
\]

we obtain

\[
\begin{aligned}
A^{n} + B^{n} &= \sum_{k=0}^{[n/2]} (-1)^{k}\,\frac{n}{n-k}\binom{n-k}{k}\,(A+B)^{n-2k}(AB)^{k}\\
&= n\sum_{k=0}^{[n/2]} (-1)^{k}\,\frac{(n-k-1)!}{k!\,(n-2k)!}\left(-c\,\frac{\partial}{\partial x}\right)^{n-2k} Q^{k}\\
&= 2T_n\!\left(-\frac{c}{2}\frac{\partial}{\partial x},\,\frac{1}{2}Q\right).
\end{aligned}
\]

Thus, differential operator (3.4.1) can be rewritten in the following form:

\[
H_n = \frac{\partial}{\partial t}\,U_{n-1}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - 2Q\,U_{n-2}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - 2T_n\!\left(-\frac{c}{2}\frac{\partial}{\partial x},\frac{1}{2}Q\right), \qquad n \ge 2. \tag{3.4.5}
\]

Using the recurrence relations given by Theorems 1.5.1 and 1.5.2, we can transform operator H_n as follows:

\[
\begin{aligned}
H_n &= \left(Z - \frac{\lambda n}{n-1}\right) U_{n-1}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - 2Q\,U_{n-2}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - 2T_n\!\left(-\frac{c}{2}\frac{\partial}{\partial x},\frac{1}{2}Q\right)\\
&= Z\,U_{n-1}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - Q\,U_{n-2}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - \frac{\lambda n}{n-1}\,U_{n-1}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right)\\
&\qquad - Q\,U_{n-2}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - 2T_n\!\left(-\frac{c}{2}\frac{\partial}{\partial x},\frac{1}{2}Q\right)\\
&= 2\left(\frac{1}{2}Z\right) U_{n-1}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - 2\left(\frac{1}{2}Q\right) U_{n-2}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - Q\,U_{n-2}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right)\\
&\qquad - \frac{\lambda n}{n-1}\,U_{n-1}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - 2T_n\!\left(-\frac{c}{2}\frac{\partial}{\partial x},\frac{1}{2}Q\right)\\
&= U_{n}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - 2\left(\frac{1}{2}Q\right) U_{n-2}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - \frac{\lambda n}{n-1}\,U_{n-1}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - 2T_n\!\left(-\frac{c}{2}\frac{\partial}{\partial x},\frac{1}{2}Q\right)\\
&= 2T_n\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - \frac{\lambda n}{n-1}\,U_{n-1}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - 2T_n\!\left(-\frac{c}{2}\frac{\partial}{\partial x},\frac{1}{2}Q\right).
\end{aligned} \tag{3.4.6}
\]

Thus, we can summarize the obtained results (3.4.5) and (3.4.6) in the following theorem.

Theorem 3.4.1. In the Banach algebra R(M̄(D)) the infinitesimal operator (generator) H_n of the stochastic motion Θ(t) in the plane R² with n, n ≥ 2, directions has the following equivalent polynomial representations:

\[
\begin{aligned}
H_n &= 2T_n\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - \frac{\lambda n}{n-1}\,U_{n-1}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - 2T_n\!\left(-\frac{c}{2}\frac{\partial}{\partial x},\frac{1}{2}Q\right),\\[1mm]
H_n &= \frac{\partial}{\partial t}\,U_{n-1}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - 2Q\,U_{n-2}\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - 2T_n\!\left(-\frac{c}{2}\frac{\partial}{\partial x},\frac{1}{2}Q\right),
\end{aligned} \tag{3.4.7}
\]

where T_n(x, y) and U_n(x, y) are the Chebyshev polynomials of two variables of the first and second kind, respectively, and the operators Z and Q are given by (3.4.2).

Example 3.4.1. Let n = 2. Then, taking into account that

\[
T_2(x, y) = 2x^2 - 2y, \qquad U_1(x, y) = 2x, \qquad Z = \frac{\partial}{\partial t} + 2\lambda,
\]

we obtain from the first formula of (3.4.7):

\[
\begin{aligned}
H_2 &= 2T_2\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - 2\lambda\,U_1\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - 2T_2\!\left(-\frac{c}{2}\frac{\partial}{\partial x},\frac{1}{2}Q\right)\\
&= 2\left[2\left(\frac{1}{2}Z\right)^{2} - 2\left(\frac{1}{2}Q\right)\right] - 2\lambda\left[2\left(\frac{1}{2}Z\right)\right] - 2\left[2\left(-\frac{c}{2}\frac{\partial}{\partial x}\right)^{2} - 2\left(\frac{1}{2}Q\right)\right]\\
&= Z^{2} - 2Q - 2\lambda Z - c^{2}\frac{\partial^{2}}{\partial x^{2}} + 2Q\\
&= \left(\frac{\partial}{\partial t} + 2\lambda\right)^{2} - 2\lambda\left(\frac{\partial}{\partial t} + 2\lambda\right) - c^{2}\frac{\partial^{2}}{\partial x^{2}}\\
&= \frac{\partial^{2}}{\partial t^{2}} + 4\lambda\frac{\partial}{\partial t} + 4\lambda^{2} - 2\lambda\frac{\partial}{\partial t} - 4\lambda^{2} - c^{2}\frac{\partial^{2}}{\partial x^{2}}\\
&= \frac{\partial^{2}}{\partial t^{2}} + 2\lambda\frac{\partial}{\partial t} - c^{2}\frac{\partial^{2}}{\partial x^{2}},
\end{aligned}
\]

and this is exactly the generator of the classical Goldstein-Kac telegraph process on the line.

Example 3.4.2. Let n = 3. Since in this case

\[
T_3(x, y) = 4x^3 - 6xy, \qquad U_2(x, y) = 4x^2 - 2y, \qquad U_1(x, y) = 2x, \qquad Z = \frac{\partial}{\partial t} + \frac{3\lambda}{2},
\]

the first formula in (3.4.7) yields the differential operator:

\[
\begin{aligned}
H_3 &= 2T_3\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - \frac{3\lambda}{2}\,U_2\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - 2T_3\!\left(-\frac{c}{2}\frac{\partial}{\partial x},\frac{1}{2}Q\right)\\
&= 2\left[4\left(\frac{1}{2}Z\right)^{3} - 6\left(\frac{1}{2}Z\right)\left(\frac{1}{2}Q\right)\right] - \frac{3\lambda}{2}\left[4\left(\frac{1}{2}Z\right)^{2} - 2\left(\frac{1}{2}Q\right)\right]
- 2\left[4\left(-\frac{c}{2}\frac{\partial}{\partial x}\right)^{3} - 6\left(-\frac{c}{2}\frac{\partial}{\partial x}\right)\left(\frac{1}{2}Q\right)\right]\\
&= Z^{3} - 3ZQ - \frac{3\lambda}{2}Z^{2} + \frac{3\lambda}{2}Q + c^{3}\frac{\partial^{3}}{\partial x^{3}} - 3cQ\frac{\partial}{\partial x}.
\end{aligned}
\]

Substituting here Z = ∂/∂t + 3λ/2 and Q = (c²/4)∆ and collecting terms (the λ³-terms cancel), we get

\[
H_3 = \frac{\partial^{3}}{\partial t^{3}} + 3\lambda\frac{\partial^{2}}{\partial t^{2}} + \frac{9\lambda^{2}}{4}\frac{\partial}{\partial t}
- \frac{3c^{2}}{4}\frac{\partial}{\partial t}\Delta - \frac{3\lambda c^{2}}{4}\Delta
+ \frac{c^{3}}{4}\left(\frac{\partial^{3}}{\partial x^{3}} - 3\frac{\partial^{3}}{\partial x\,\partial y^{2}}\right),
\]

which is the generator of the stochastic motion Θ(t) with three directions (see also Remark 3.1.1).

Example 3.4.3. Let n = 4. In this case

\[
T_4(x, y) = 8x^4 - 16x^2 y + 4y^2, \qquad U_3(x, y) = 8x^3 - 8xy, \qquad U_2(x, y) = 4x^2 - 2y, \qquad Z = \frac{\partial}{\partial t} + \frac{4\lambda}{3},
\]

and the first formula in (3.4.7) yields the differential operator:

\[
\begin{aligned}
H_4 &= 2T_4\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - \frac{4\lambda}{3}\,U_3\!\left(\frac{1}{2}Z,\frac{1}{2}Q\right) - 2T_4\!\left(-\frac{c}{2}\frac{\partial}{\partial x},\frac{1}{2}Q\right)\\
&= Z^{4} - 4Z^{2}Q + 2Q^{2} - \frac{4\lambda}{3}Z^{3} + \frac{8\lambda}{3}ZQ - c^{4}\frac{\partial^{4}}{\partial x^{4}} + 4c^{2}\frac{\partial^{2}}{\partial x^{2}}Q - 2Q^{2}.
\end{aligned}
\]

Substituting here Z = ∂/∂t + 4λ/3 and Q = (c²/4)∆ and collecting terms, we get

\[
H_4 = \frac{\partial^{4}}{\partial t^{4}} + 4\lambda\frac{\partial^{3}}{\partial t^{3}} + \frac{16\lambda^{2}}{3}\frac{\partial^{2}}{\partial t^{2}} + \frac{64\lambda^{3}}{27}\frac{\partial}{\partial t}
- c^{2}\frac{\partial^{2}}{\partial t^{2}}\Delta - 2\lambda c^{2}\frac{\partial}{\partial t}\Delta - \frac{8\lambda^{2}c^{2}}{9}\Delta
+ c^{4}\frac{\partial^{4}}{\partial x^{2}\,\partial y^{2}},
\]

which is the generator of the stochastic motion Θ(t) with four directions. One can easily check that the second formula in (3.4.7) produces the same operators H₂, H₃ and H₄.
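The equivalence of the two representations in (3.4.7) can also be checked symbolically: since ∂/∂t, ∂/∂x and ∂/∂y commute on smooth functions, Z, Q and the derivatives may be treated as ordinary commuting symbols. The following SymPy sketch is ours, not part of the text; the helper `cheb` is a hypothetical name implementing the recurrence P_k = 2xP_{k−1} − 2yP_{k−2} (with T₀ = 1, T₁ = x, U₀ = 1, U₁ = 2x), which is consistent with the polynomials listed in the examples above.

```python
# Symbolic sanity check (illustrative sketch): compare the two polynomial
# representations of H_n from (3.4.7) as polynomials in commuting symbols.
import sympy as sp

dt, dx, dy, c, lam = sp.symbols('dt dx dy c lam')

def cheb(kind, n, x, y):
    """Two-variable Chebyshev polynomials via P_k = 2x P_{k-1} - 2y P_{k-2},
    with T_0 = 1, T_1 = x and U_0 = 1, U_1 = 2x."""
    p0 = sp.Integer(1)
    p1 = x if kind == 'T' else 2*x
    if n == 0:
        return p0
    for _ in range(n - 1):
        p0, p1 = p1, sp.expand(2*x*p1 - 2*y*p0)
    return p1

def H(n, formula):
    """The two representations of H_n given in (3.4.7)."""
    Z = dt + sp.Rational(n, n - 1)*lam            # Z from (3.4.2)
    Q = sp.Rational(1, 4)*c**2*(dx**2 + dy**2)    # Q = (c^2/4)Δ from (3.4.2)
    third = 2*cheb('T', n, -c*dx/2, Q/2)
    if formula == 1:
        return sp.expand(2*cheb('T', n, Z/2, Q/2)
                         - sp.Rational(n, n - 1)*lam*cheb('U', n - 1, Z/2, Q/2)
                         - third)
    return sp.expand(dt*cheb('U', n - 1, Z/2, Q/2)
                     - 2*Q*cheb('U', n - 2, Z/2, Q/2)
                     - third)

# The two formulas agree for every n >= 2 ...
assert all(sp.expand(H(n, 1) - H(n, 2)) == 0 for n in range(2, 8))
# ... and for n = 2 both give the Goldstein-Kac telegraph generator.
assert H(2, 1) == sp.expand(dt**2 + 2*lam*dt - c**2*dx**2)
```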

3.5 Limiting differential operator

In this section we establish a limit theorem describing the asymptotic behaviour, under the Kac scaling condition (2.7.1), of the differential operator H_n given by (3.1.3) of the main equation (3.1.2). We will show that in this case the main equation (3.1.2) turns into the classical parabolic diffusion equation (as should be expected). This implies that the stochastic process Θ(t) is asymptotically a Wiener process. One should emphasize, however, that the convergence of the differential operators governing a family of stochastic processes does not imply the convergence of their distributions. The convergence of the distributions is related to the central limit theorem and the theory of weak convergence, and it will be studied in the next section.

Theorem 3.5.1. Under the Kac scaling condition

\[
c \to \infty, \qquad \lambda \to \infty, \qquad \frac{c^2}{\lambda} \to \rho, \qquad \rho > 0, \tag{3.5.1}
\]

the differential equation (3.1.2) with differential operator (3.1.3) transforms into the parabolic diffusion equation

\[
\frac{\partial f}{\partial t} = \frac{\rho}{2}\,\frac{\partial^2 f}{\partial x^2}, \qquad \text{if } n = 2, \tag{3.5.2}
\]
\[
\frac{\partial f}{\partial t} = \frac{\rho(n-1)}{2n}\,\Delta f, \qquad \text{if } n \ge 3, \tag{3.5.3}
\]

where ∆ is the two-dimensional Laplace operator.

Proof. The proof is based on the analysis of the differential operator H_n given by (3.1.3). As was noted in Remark 3.1.1, for n = 2 the main equation (3.1.2) transforms into the Goldstein-Kac telegraph equation (2.3.2), and for this case equation (3.5.2) follows from Theorem 2.7.1. Let now n ≥ 3. We note that the Kac condition (3.5.1) implies

\[
\frac{c^n}{\lambda^{n-1}} \to 0, \qquad n \ge 3. \tag{3.5.4}
\]

Indeed,

\[
\frac{c^n}{\lambda^{n-1}} = \frac{c^{2(n-1)}}{\lambda^{n-1}}\cdot\frac{1}{c^{n-2}} = \left(\frac{c^2}{\lambda}\right)^{n-1}\cdot\frac{1}{c^{n-2}} \to \rho^{n-1}\cdot 0 = 0.
\]

Suppose for definiteness that n is odd. The case of even n can be treated in the same manner and is left to the reader. Divide the differential operator H_n given by (3.1.3) by λ^{n−1}. According to (3.5.4), the third term of the divided operator (1/λ^{n−1})H_n vanishes after passage to the limit under condition (3.5.1). Consider the second term of the divided operator (1/λ^{n−1})H_n. We have:


\[
\begin{aligned}
&\frac{2}{\lambda^{n-1}}\sum_{k=1}^{(n-1)/2} (-1)^{k-1}\binom{n-k-1}{k-1} Z^{n-2k}Q^{k}\\
&\qquad= \frac{2}{\lambda^{n-1}}\left[\sum_{s=0}^{n-2}\binom{n-2}{s}\left(\frac{\partial}{\partial t}\right)^{n-s-2}\left(\frac{\lambda n}{n-1}\right)^{s}\right]\frac{c^2}{4}\,\Delta\\
&\qquad\quad+ \frac{2}{\lambda^{n-1}}\sum_{k=2}^{(n-1)/2} (-1)^{k-1}\binom{n-k-1}{k-1}\left[\sum_{s=0}^{n-2k}\binom{n-2k}{s}\left(\frac{\partial}{\partial t}\right)^{n-2k-s}\left(\frac{\lambda n}{n-1}\right)^{s}\right]\left(\frac{c^2}{4}\right)^{k}\Delta^{k}\\
&\qquad= \frac{1}{2}\sum_{s=0}^{n-2}\binom{n-2}{s}\left(\frac{\partial}{\partial t}\right)^{n-s-2}\left(\frac{n}{n-1}\right)^{s}\frac{c^2}{\lambda^{n-s-1}}\,\Delta\\
&\qquad\quad+ 2\sum_{k=2}^{(n-1)/2} (-1)^{k-1}\left(\frac{1}{4}\right)^{k}\binom{n-k-1}{k-1}\sum_{s=0}^{n-2k}\binom{n-2k}{s}\left(\frac{\partial}{\partial t}\right)^{n-2k-s}\left(\frac{n}{n-1}\right)^{s}\frac{c^{2k}}{\lambda^{n-s-1}}\,\Delta^{k}.
\end{aligned}
\]

According to (3.5.1), we have

\[
\frac{c^2}{\lambda^{n-s-1}} \to \begin{cases} \rho, & \text{if } s = n-2,\\ 0, & \text{if } 0 \le s \le n-3, \end{cases}
\]

and

\[
\frac{c^{2k}}{\lambda^{n-s-1}} = \frac{c^{2k}}{\lambda^{k}\cdot\lambda^{n-k-s-1}} = \left(\frac{c^2}{\lambda}\right)^{k}\frac{1}{\lambda^{n-k-s-1}} \to \rho^{k}\cdot 0 = 0
\]

for any k and s such that 2 ≤ k ≤ (n−1)/2, 0 ≤ s ≤ n − 2k, and for any odd n ≥ 5. Hence, after passage to the limit (3.5.1), the second term of the divided operator (1/λ^{n−1})H_n turns into

\[
\left(\frac{n}{n-1}\right)^{n-2}\frac{\rho}{2}\,\Delta.
\]

Similarly, after passage to the limit (3.5.1), the first term of the divided operator (1/λ^{n−1})H_n turns into

\[
\left(\frac{n}{n-1}\right)^{n-1}\frac{\partial}{\partial t}.
\]

Hence, after passage to the limit under condition (3.5.1), the main equation (3.1.2) with differential operator (3.1.3) transforms into the parabolic diffusion equation (3.5.3). Thus, one can conclude that the constructed stochastic motion Θ(t), under the Kac scaling condition (3.5.1), is asymptotically a Wiener process in R² with zero drift and explicitly calculated variance depending on the number of directions n.
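The limiting variance can be illustrated by simulation (an illustrative sketch of ours, not part of the original argument): with c²/λ = ρ and λ large, each coordinate of Θ(t) should have variance close to ρ(n−1)t/n. The switching rule assumed below — a new direction chosen uniformly among the other n − 1 — matches the transition mechanism used in the next section.

```python
import numpy as np

rng = np.random.default_rng(0)

def planar_motion_x(n, c, lam, t, n_paths):
    """x-coordinates at time t of the planar motion with n directions:
    at each Poisson(lam) event the particle switches to one of the
    other n - 1 directions, chosen uniformly."""
    cos_a = np.cos(2*np.pi*np.arange(n)/n)
    out = np.empty(n_paths)
    for p in range(n_paths):
        x, s = 0.0, 0.0
        k = rng.integers(n)                    # initial direction, uniform
        while s < t:
            h = min(rng.exponential(1/lam), t - s)
            x += c*h*cos_a[k]
            s += h
            k = (k + rng.integers(1, n)) % n   # uniform over the other n - 1
        out[p] = x
    return out

rho, n, t = 1.0, 4, 1.0
lam = 100.0
c = np.sqrt(rho*lam)                           # Kac scaling: c^2/lam = rho
xs = planar_motion_x(n, c, lam, t, 5000)
print(np.var(xs), rho*(n - 1)/n*t)             # empirical vs limiting variance
```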

3.6 Weak convergence to the Wiener process

Theorem 3.5.1 states that, under the Kac scaling condition (3.5.1), the hyperbolic equation (3.1.2) turns into the classical parabolic equation (3.5.3) describing the homogeneous diffusion process in the plane with zero drift and diffusion coefficient σ² = ρ(n−1)/n, n ≥ 3. This means that the stochastic motion Θ(t) is asymptotically a Wiener process in R². However, as noted above, the convergence of the respective distributions does not follow from this fact. In this section we give a rigorous proof that, under the Kac scaling condition (3.5.1), the transition density of Θ(t) converges to the transition density of a homogeneous Wiener process in the plane.

First of all, we emphasize that the stochastic process Θ(t) = (X(t), Y(t)), t > 0, depends on the parameters c and λ. This means that, in fact, we deal with the two-parameter family of stochastic processes {Θ_{c,λ}(t), c > 0, λ > 0}. For the sake of simplicity, we will hereafter omit the indices c and λ, bearing in mind, however, that we deal with a two-parameter family of stochastic processes. We are interested in the behaviour of the distributions of Θ(t) under the Kac condition (3.5.1).

Let f_k = f_k(x, y, t), (x, y) ∈ M_t, t > 0, k = 0, 1, …, n−1, be the joint densities of the particle's position and its direction at time t. As noted above, these densities exist and satisfy the following hyperbolic system of n first-order partial differential equations with constant coefficients:

\[
\frac{\partial \mathbf{f}}{\partial t} = A\mathbf{f} + \Lambda\mathbf{f}, \tag{3.6.1}
\]

where f = (f₀, f₁, …, f_{n−1})ᵀ is the column-vector of dimension n, A is the diagonal (n × n) matrix differential operator

\[
A = \begin{pmatrix} A_0 & 0 & \dots & 0\\ 0 & A_1 & \dots & 0\\ \dots & \dots & \dots & \dots\\ 0 & 0 & \dots & A_{n-1} \end{pmatrix}
\]

with the diagonal operator elements

\[
A_k = -c\cos\frac{2\pi k}{n}\,\frac{\partial}{\partial x} - c\sin\frac{2\pi k}{n}\,\frac{\partial}{\partial y}, \qquad k = 0, 1, \dots, n-1,
\]

and Λ is the infinitesimal (n × n) matrix of the embedded Markov chain with n states:

\[
\Lambda = \begin{pmatrix} -\lambda & \frac{\lambda}{n-1} & \dots & \frac{\lambda}{n-1}\\[1mm] \frac{\lambda}{n-1} & -\lambda & \dots & \frac{\lambda}{n-1}\\ \dots & \dots & \dots & \dots\\ \frac{\lambda}{n-1} & \frac{\lambda}{n-1} & \dots & -\lambda \end{pmatrix}.
\]

Equality (3.6.1) represents the backward Kolmogorov equation for the process Θ(t). It is well known that systems of the form (3.6.1) describe processes of random evolution in various phase spaces. We will prove the convergence of the distributions of the stochastic motion Θ(t) to the distribution of the Wiener process by means of the Kurtz diffusion approximation


Theorem 1.4.1. To do this, we introduce some necessary notation and establish some auxiliary relations. First, we note that the process Θ(t) is governed by a homogeneous continuous-time Markov chain V(t) with n, n ≥ 2, states and infinitesimal matrix Λ, and the following equality holds:

\[
\Lambda^{k} = \left(-\frac{\lambda n}{n-1}\right)^{k-1}\Lambda, \qquad k \ge 1. \tag{3.6.2}
\]

We prove this equality by induction. For k = 1, it is obviously valid. Suppose that it is also true for some arbitrary number k, k ≥ 2, and all the previous numbers. Then, according to the induction assumption, we have:

\[
\Lambda^{k+1} = \Lambda\,\Lambda^{k} = \Lambda\left(-\frac{\lambda n}{n-1}\right)^{k-1}\Lambda = \left(-\frac{\lambda n}{n-1}\right)^{k-1}\Lambda^{2}
= \left(-\frac{\lambda n}{n-1}\right)^{k-1}\left(-\frac{\lambda n}{n-1}\right)\Lambda = \left(-\frac{\lambda n}{n-1}\right)^{k}\Lambda,
\]

proving (3.6.2). Then the matrix of transition probabilities P(t) of the chain V(t) has the form:

\[
\begin{aligned}
P(t) &= \exp(\Lambda t) = I + \sum_{k=1}^{\infty}\frac{1}{k!}(\Lambda t)^{k}
= I + \sum_{k=1}^{\infty}\frac{t^{k}}{k!}\left(-\frac{\lambda n}{n-1}\right)^{k-1}\Lambda\\
&= I - \frac{n-1}{\lambda n}\left[\sum_{k=1}^{\infty}\frac{1}{k!}\left(-\frac{\lambda n}{n-1}\,t\right)^{k}\right]\Lambda
= I - \frac{n-1}{\lambda n}\left(e^{-(\lambda n/(n-1))t} - 1\right)\Lambda,
\end{aligned} \tag{3.6.3}
\]

where I is the unit (n × n) matrix. The elements of matrix P(t) are given by the formula:

\[
p_{ij}(t) = \begin{cases} \dfrac{1}{n} + \dfrac{n-1}{n}\,e^{-(\lambda n/(n-1))t}, & \text{if } i = j,\\[2mm] \dfrac{1}{n}\left(1 - e^{-(\lambda n/(n-1))t}\right), & \text{if } i \ne j, \end{cases} \qquad i, j = 0, 1, \dots, n-1. \tag{3.6.4}
\]

As was noted above, for n = 2 the planar random motion Θ(t) degenerates into the Goldstein-Kac telegraph process on the line, and the convergence of its distributions, under the Kac scaling condition (3.5.1), to the distribution of a homogeneous one-dimensional Brownian motion was already proved (see Theorem 2.7.1). That is why we hereafter consider n ≥ 3. In this case, we deal with a purely two-dimensional random motion Θ(t) with n, n ≥ 3, directions. The following theorem holds.

Theorem 3.6.1. Let the Kac condition (3.5.1) be fulfilled. Then, in the Banach space of twice continuously differentiable functions with compact support M_t, the semigroups generated by the transition functions of the planar stochastic motion Θ(t) converge to the semigroup generated by the transition function of the Wiener process in R² with the generator

\[
G_n = \frac{\rho(n-1)}{2n}\,\Delta, \qquad n \ge 3, \tag{3.6.5}
\]

where ∆ is the two-dimensional Laplace operator.
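Formulas (3.6.2)–(3.6.4) are straightforward to confirm numerically; the following sketch (ours, not part of the text) compares the closed-form entries with SciPy's matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

n, lam, t = 5, 2.0, 0.7
L = np.full((n, n), lam/(n - 1))     # off-diagonal entries lam/(n-1)
np.fill_diagonal(L, -lam)            # diagonal -lam: the matrix Lambda

# (3.6.2): Lambda^k = (-lam*n/(n-1))^(k-1) * Lambda
mu = lam*n/(n - 1)
assert np.allclose(np.linalg.matrix_power(L, 3), (-mu)**2 * L)

# (3.6.3)-(3.6.4): entries of P(t) = exp(Lambda t)
P = expm(L*t)
assert np.allclose(np.diag(P), 1/n + (n - 1)/n*np.exp(-mu*t))
assert np.allclose(P[0, 1], (1 - np.exp(-mu*t))/n)
```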


Remark 3.6.1. Note that generator (3.6.5) exactly coincides with the operator obtained by passage to the limit, under the Kac condition (3.5.1), from the hyperbolic operator H_n given by (3.1.3) (see formula (3.5.3)).

Proof. We can give two proofs of this theorem, both based on the Kurtz diffusion approximation Theorem 1.4.1.

Proof 1. We assume that all the matrix operators A, Λ and 𝒫 act on vector-functions of the form f = (f, f, …, f)ᵀ, f ∈ C₀²(M_t), where each such column-vector of dimension n is composed of twice continuously differentiable functions on the compact set M_t. Matrix Λ is a scalar (n × n) matrix and, therefore, it is applicable to any such vector-function. Since the process Θ(t) is governed by a homogeneous continuous-time Markov chain V(t) with n, n ≥ 3, states and the transitions are made according to the uniform law, formula (1.4.8) simplifies, and the projector 𝒫 becomes the following (n × n) matrix:

\[
\mathcal{P} = \frac{1}{n}\begin{pmatrix} 1 & 1 & \dots & 1\\ 1 & 1 & \dots & 1\\ \dots & \dots & \dots & \dots\\ 1 & 1 & \dots & 1 \end{pmatrix}. \tag{3.6.6}
\]

From the trigonometric equalities

\[
\sum_{k=0}^{n-1}\cos\frac{2\pi k}{n} = 0, \qquad \sum_{k=0}^{n-1}\sin\frac{2\pi k}{n} = 0
\]

it follows that the operators A_k, k = 0, 1, …, n−1, composing the diagonal matrix operator A in (3.6.1), satisfy the equality

\[
\sum_{k=0}^{n-1} A_k = 0, \tag{3.6.7}
\]

and, therefore, condition (1.4.2) of Kurtz's Theorem 1.4.1 is fulfilled. Our next step is to find a solution to equation (1.4.7) for our case. We show that, for any differentiable vector-function f = (f, f, …, f)ᵀ of dimension n, the equation

\[
\Lambda \mathbf{h} = -A\mathbf{f} \tag{3.6.8}
\]

has the solution

\[
\mathbf{h} = \frac{n-1}{\lambda n}\,(A_0 f,\, A_1 f,\, \dots,\, A_{n-1} f)^{T}. \tag{3.6.9}
\]

Indeed, taking into account the form of the infinitesimal matrix Λ, we have:

\[
\Lambda \mathbf{h} = \frac{n-1}{\lambda n}\,\Lambda\,(A_0 f, A_1 f, \dots, A_{n-1} f)^{T}
= \begin{pmatrix} -\frac{n-1}{n} & \frac{1}{n} & \dots & \frac{1}{n}\\[1mm] \frac{1}{n} & -\frac{n-1}{n} & \dots & \frac{1}{n}\\ \dots & \dots & \dots & \dots\\ \frac{1}{n} & \frac{1}{n} & \dots & -\frac{n-1}{n} \end{pmatrix}
\begin{pmatrix} A_0 f\\ A_1 f\\ \vdots\\ A_{n-1} f \end{pmatrix}
\]

\[
= \begin{pmatrix} -\frac{n-1}{n}A_0 f + \frac{1}{n}\sum_{k\ne 0} A_k f\\[1mm] -\frac{n-1}{n}A_1 f + \frac{1}{n}\sum_{k\ne 1} A_k f\\ \vdots\\ -\frac{n-1}{n}A_{n-1} f + \frac{1}{n}\sum_{k\ne n-1} A_k f \end{pmatrix}
= \begin{pmatrix} -A_0 f + \frac{1}{n}\sum_{k=0}^{n-1} A_k f\\[1mm] -A_1 f + \frac{1}{n}\sum_{k=0}^{n-1} A_k f\\ \vdots\\ -A_{n-1} f + \frac{1}{n}\sum_{k=0}^{n-1} A_k f \end{pmatrix}.
\]

In view of (3.6.7),

\[
\sum_{k=0}^{n-1} A_k f = \left(\sum_{k=0}^{n-1} A_k\right) f = 0,
\]

and we obtain

\[
\Lambda \mathbf{h} = \begin{pmatrix} -A_0 f\\ -A_1 f\\ \vdots\\ -A_{n-1} f \end{pmatrix}
= -\begin{pmatrix} A_0 & 0 & \dots & 0\\ 0 & A_1 & \dots & 0\\ \dots & \dots & \dots & \dots\\ 0 & 0 & \dots & A_{n-1} \end{pmatrix}\begin{pmatrix} f\\ f\\ \vdots\\ f \end{pmatrix} = -A\mathbf{f},
\]

proving that the column-vector (3.6.9) is indeed a solution to the matrix equation (3.6.8). Then, for the twice continuously differentiable vector-function f, we have:

\[
\mathcal{P}A\mathbf{h} = \frac{n-1}{\lambda n^{2}}\begin{pmatrix} 1 & 1 & \dots & 1\\ \dots & \dots & \dots & \dots\\ 1 & 1 & \dots & 1 \end{pmatrix}\begin{pmatrix} A_0^{2} f\\ \vdots\\ A_{n-1}^{2} f \end{pmatrix}
= \frac{n-1}{\lambda n^{2}}\left[\sum_{k=0}^{n-1} A_k^{2}\right]\mathbf{f}. \tag{3.6.10}
\]


Evaluating the operator in square brackets on the right-hand side of (3.6.10) and using the trigonometric equalities

\[
\sum_{k=0}^{n-1}\left(\cos\frac{2\pi k}{n}\right)^{2} = \frac{n}{2}, \qquad
\sum_{k=0}^{n-1}\left(\sin\frac{2\pi k}{n}\right)^{2} = \frac{n}{2}, \qquad
\sum_{k=0}^{n-1}\sin\frac{4\pi k}{n} = 0,
\]

which are valid for any n ≥ 3, we obtain:

\[
\begin{aligned}
\sum_{k=0}^{n-1} A_k^{2} &= \sum_{k=0}^{n-1}\left(-c\cos\frac{2\pi k}{n}\frac{\partial}{\partial x} - c\sin\frac{2\pi k}{n}\frac{\partial}{\partial y}\right)^{2}\\
&= c^{2}\sum_{k=0}^{n-1}\left[\cos^{2}\frac{2\pi k}{n}\,\frac{\partial^{2}}{\partial x^{2}} + 2\sin\frac{2\pi k}{n}\cos\frac{2\pi k}{n}\,\frac{\partial^{2}}{\partial x\,\partial y} + \sin^{2}\frac{2\pi k}{n}\,\frac{\partial^{2}}{\partial y^{2}}\right]\\
&= c^{2}\left\{\left[\sum_{k=0}^{n-1}\cos^{2}\frac{2\pi k}{n}\right]\frac{\partial^{2}}{\partial x^{2}} + \left[\sum_{k=0}^{n-1}\sin\frac{4\pi k}{n}\right]\frac{\partial^{2}}{\partial x\,\partial y} + \left[\sum_{k=0}^{n-1}\sin^{2}\frac{2\pi k}{n}\right]\frac{\partial^{2}}{\partial y^{2}}\right\}\\
&= \frac{nc^{2}}{2}\,\Delta,
\end{aligned} \tag{3.6.11}
\]

and, therefore,

\[
\mathcal{P}A\mathbf{h} = \frac{c^{2}}{\lambda}\,\frac{n-1}{2n}\,\Delta\mathbf{f}, \tag{3.6.12}
\]

where ∆ is the two-dimensional Laplacian. From (3.6.12) we obtain the operator C₀ defined by formula (1.4.4):

\[
C_0 = \frac{c^{2}}{\lambda}\,\frac{n-1}{2n}\,\Delta. \tag{3.6.13}
\]

The last step of the proof is to check the fulfilment of condition (1.4.5) of Kurtz's Theorem 1.4.1. To do this, it is sufficient to show that, for any twice continuously differentiable function f with compact support, there exists a solution g of the equation

\[
(\mu - C_0)g = f \tag{3.6.14}
\]

for some µ > 0. From the form of operator (3.6.13), we see that, for any µ > 0, equation (3.6.14) is the inhomogeneous Helmholtz equation with a sufficiently smooth right-hand side. As is well known from the general theory of partial differential equations, the solution of such an equation exists for any µ > 0 (see, for instance, [207, Section 30]). Hence, condition (1.4.5) of Theorem 1.4.1 is also fulfilled. Therefore, according to Kurtz's diffusion approximation Theorem 1.4.1, one can conclude that, under the Kac scaling condition (3.5.1), the semigroups generated by the transition functions of the process Θ(t) converge to the semigroup generated by the transition function of the homogeneous Wiener process in R² with generator (3.6.5). This means the weak convergence of the distributions.

Proof 2. Another proof of the theorem can be given by applying Example 1.4.1 of Kurtz's Theorem 1.4.1, which treats the case when the evolution is driven by a Markov chain with a countable number of states and transition matrix P(t) = ‖p_{ij}(t)‖_{i,j=0}^{∞}. It is clear that all the formulas of Example 1.4.1 are applicable to our case with a few obvious changes.


From (3.6.3) and (3.6.4) we find

\[
p_j = \lim_{t\to\infty} p_{ij}(t) = \frac{1}{n}, \qquad i, j = 0, 1, \dots, n-1.
\]

In view of (3.6.4), we have:

\[
\nu_{ij} := \int_0^{\infty} \left(p_{ij}(t) - p_j\right) dt =
\begin{cases} \dfrac{(n-1)^{2}}{\lambda n^{2}}, & \text{if } i = j,\\[2mm] -\dfrac{n-1}{\lambda n^{2}}, & \text{if } i \ne j, \end{cases}
\qquad i, j = 0, 1, \dots, n-1.
\]

Equality (3.6.7) implies fulfilment of the condition

\[
\sum_{i=0}^{n-1} p_i A_i f = \frac{1}{n}\left(\sum_{i=0}^{n-1} A_i\right) f = 0,
\]

and, for any twice continuously differentiable function f with compact support, the conditions

\[
\sup_{0\le i\le n-1}\sum_{j=0}^{n-1}\left\|\nu_{ij} A_j f\right\| < \infty, \qquad
\sup_{0\le i\le n-1}\left\|A_i\sum_{j=0}^{n-1}\nu_{ij} A_j f\right\| < \infty
\]

are also fulfilled, according to the Weierstrass theorem. Then, according to Example 1.4.1 of Kurtz's Theorem 1.4.1, the operator C₀ takes the form:

\[
C_0 = \sum_{i=0}^{n-1} p_i A_i \sum_{j=0}^{n-1}\nu_{ij} A_j
= \frac{1}{n}\left[\frac{(n-1)^{2}}{\lambda n^{2}}\sum_{i=0}^{n-1} A_i^{2} - \frac{n-1}{\lambda n^{2}}\sum_{i=0}^{n-1} A_i \sum_{\substack{j=0\\ j\ne i}}^{n-1} A_j\right].
\]

From (3.6.7) it follows that

\[
\sum_{\substack{j=0\\ j\ne i}}^{n-1} A_j = -A_i.
\]

Therefore, using (3.6.11), we get:

\[
C_0 = \frac{1}{n}\left[\frac{(n-1)^{2}}{\lambda n^{2}}\sum_{i=0}^{n-1} A_i^{2} + \frac{n-1}{\lambda n^{2}}\sum_{i=0}^{n-1} A_i^{2}\right]
= \frac{1}{n}\,\frac{(n-1)^{2} + (n-1)}{\lambda n^{2}}\sum_{i=0}^{n-1} A_i^{2}
= \frac{n-1}{\lambda n^{2}}\sum_{i=0}^{n-1} A_i^{2}
= \frac{c^{2}}{\lambda}\,\frac{n-1}{2n}\,\Delta,
\]

and we arrive again at the same operator (3.6.13).
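The deviation integrals ν_ij can be confirmed by direct numerical quadrature of the entries (3.6.4) (an illustrative check of ours):

```python
import numpy as np
from scipy.integrate import quad

n, lam = 5, 2.0
mu = lam*n/(n - 1)
# transition probabilities (3.6.4) and their limits p_j = 1/n
p_same = lambda t: 1/n + (n - 1)/n*np.exp(-mu*t)
p_diff = lambda t: (1 - np.exp(-mu*t))/n

nu_same, _ = quad(lambda t: p_same(t) - 1/n, 0, np.inf)
nu_diff, _ = quad(lambda t: p_diff(t) - 1/n, 0, np.inf)

assert abs(nu_same - (n - 1)**2/(lam*n**2)) < 1e-8   # nu_ii
assert abs(nu_diff + (n - 1)/(lam*n**2)) < 1e-8      # nu_ij, i != j
```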

Hence, according to Kurtz's diffusion approximation Theorem 1.4.1, one can conclude that, under the Kac scaling condition (3.5.1), the transition densities of the planar stochastic motion Θ(t) weakly converge to the transition density of the two-dimensional Wiener process with generator (3.6.5). The theorem is thus completely proved.

Chapter 4

Integral Transforms of the Distributions of Markov Random Flights

Although the random motions with a finite number of directions studied in the previous chapter are of certain interest as mathematical models of some specific stochastic dynamic processes, it is obvious that the condition of choosing new directions from some finite set looks fairly artificial and, generally speaking, somewhat unnatural. For example, in gas dynamics the attempt to describe diffusion by such a model, in which a particle chooses new directions at strictly fixed angles, seems fairly doubtful. It is absolutely obvious that motions with a continuum of directions are much more natural and adequate to real processes. In this chapter we present a general approach to the study of such finite-velocity random motions with a continuum of directions in the Euclidean space Rᵐ, m ≥ 2, based on the analysis of the integral transforms of their distributions. This method works in a space of any dimension m ≥ 2, being especially effective in the spaces of low dimensions 2, 4 and 6, where it leads to explicit distributions (see the next chapters).

4.1 Description of process and structure of distribution

The continuous multidimensional counterpart of the stochastic motion with a finite number of directions studied in the previous chapter is the following stochastic process. Consider a particle that, at the initial time instant t = 0, starts from the origin 0 = (0, …, 0) of the m-dimensional Euclidean space Rᵐ, m ≥ 2. The particle moves with a constant finite speed c (note that c is treated as the constant norm of the velocity). The initial direction is a random m-dimensional vector uniformly distributed (with respect to the Lebesgue probability measure) on the unit sphere

\[
S_1^{m} = \left\{\mathbf{x} = (x_1, \dots, x_m) \in \mathbb{R}^{m} : \|\mathbf{x}\|^{2} = x_1^{2} + \dots + x_m^{2} = 1\right\} \tag{4.1.1}
\]

(for a rigorous definition of the distribution on the surface of a multidimensional sphere see, for instance, [42]). We emphasize that here and hereafter the upper index m refers to the dimension of the space in which the sphere S_1^m is considered, not its own dimension, which, clearly, is m − 1.

The particle changes its direction at random time instants that form a homogeneous Poisson flow of rate λ > 0. At each of these Poissonian moments the particle instantly takes on a new random direction uniformly distributed on S_1^m, independently of its previous direction. Each sample path of this motion is a broken line of total length ct composed of segments of λ-exponentially distributed random lengths that are uniformly oriented in Rᵐ. The trajectories of the process are continuous and almost everywhere differentiable.

Let X(t) = (X₁(t), …, X_m(t)) be the particle's position at an arbitrary time instant t > 0. The stochastic process X(t) is referred to as the Markov random flight. Let N(t) denote


the number of Poisson events that have occurred in the time interval (0, t), and let dx be an infinitesimal element of the space Rᵐ with Lebesgue measure µ(dx) = dx₁ … dx_m. At an arbitrary time moment t > 0 the particle, with probability 1, is located in the m-dimensional ball of radius ct:

\[
\mathbf{B}_{ct}^{m} = \left\{\mathbf{x} = (x_1, \dots, x_m) \in \mathbb{R}^{m} : \|\mathbf{x}\|^{2} = x_1^{2} + \dots + x_m^{2} \le c^{2}t^{2}\right\}. \tag{4.1.2}
\]

The distribution

\[
\Pr\{\mathbf{X}(t) \in d\mathbf{x}\} = \Pr\{X_1(t) \in dx_1, \dots, X_m(t) \in dx_m\}, \qquad \mathbf{x} \in \mathbf{B}_{ct}^{m}, \quad t > 0, \tag{4.1.3}
\]

consists of two components.

The singular component of distribution (4.1.3) is related to the case when no Poisson events occur in the time interval (0, t) and, therefore, the particle does not change its initial direction. The singular component is concentrated on the sphere of radius ct:

\[
S_{ct}^{m} = \partial\mathbf{B}_{ct}^{m} = \left\{\mathbf{x} = (x_1, \dots, x_m) \in \mathbb{R}^{m} : \|\mathbf{x}\|^{2} = x_1^{2} + \dots + x_m^{2} = c^{2}t^{2}\right\}. \tag{4.1.4}
\]

The probability of being on S_{ct}^m at time moment t is the probability that no Poisson events occur until time t, and it is equal to

\[
\Pr\{\mathbf{X}(t) \in S_{ct}^{m}\} = e^{-\lambda t}.
\]

From this fact it follows that the density (in the sense of generalized functions) of the singular part of distribution (4.1.3) has the form:

\[
p^{(s)}(\mathbf{x}, t) = \frac{e^{-\lambda t}}{\operatorname{mes}(S_{ct}^{m})}\,\delta(c^{2}t^{2} - \|\mathbf{x}\|^{2})
= \frac{e^{-\lambda t}\,\Gamma\!\left(\frac{m}{2}\right)}{2\pi^{m/2}(ct)^{m-1}}\,\delta(c^{2}t^{2} - \|\mathbf{x}\|^{2}), \qquad m \ge 2, \tag{4.1.5}
\]

where δ(x) is the Dirac delta-function and mes(S_{ct}^m) is the surface measure of the sphere S_{ct}^m.

If at least one Poisson event occurs in the time interval (0, t) and, therefore, the particle changes its direction at least once, then, at time t, it is located strictly inside the ball B_{ct}^m, and the probability of this event is equal to

\[
\Pr\{\mathbf{X}(t) \in \operatorname{int}\mathbf{B}_{ct}^{m}\} = 1 - e^{-\lambda t}. \tag{4.1.6}
\]

The part of distribution (4.1.3) corresponding to this case is concentrated in the interior of the ball

\[
\operatorname{int}\mathbf{B}_{ct}^{m} = \left\{\mathbf{x} = (x_1, \dots, x_m) \in \mathbb{R}^{m} : \|\mathbf{x}\|^{2} = x_1^{2} + \dots + x_m^{2} < c^{2}t^{2}\right\} \tag{4.1.7}
\]

and forms its absolutely continuous component. Therefore, there exists the density of the absolutely continuous component of distribution (4.1.3):

\[
p^{(ac)}(\mathbf{x}, t) = f(\mathbf{x}, t)\,\Theta(ct - \|\mathbf{x}\|), \qquad \mathbf{x} \in \operatorname{int}\mathbf{B}_{ct}^{m}, \quad t > 0, \tag{4.1.8}
\]

where f(x, t) is some positive function absolutely continuous in int B_{ct}^m and Θ(x) is the Heaviside unit-step function. The existence of density (4.1.8) follows from the fact that, since the sample paths of the process X(t) are continuous and almost everywhere differentiable, the distribution (4.1.3) must contain an absolutely continuous component. The existence of this density also follows from the fact that it can be represented as a Poissonian sum of convolutions.

Hence, the density of distribution (4.1.3) has the structure

\[
p(\mathbf{x}, t) = p^{(s)}(\mathbf{x}, t) + p^{(ac)}(\mathbf{x}, t), \qquad \mathbf{x} \in \mathbf{B}_{ct}^{m}, \quad t > 0,
\]

where p^{(s)}(x, t) and p^{(ac)}(x, t) are the densities (in the sense of generalized functions) of the singular and absolutely continuous components of distribution (4.1.3) given by (4.1.5) and (4.1.8), respectively.
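The two-component structure of the distribution is easy to observe by simulation (an illustrative sketch of ours; `random_flight` is a hypothetical helper): the fraction of sample paths lying on the sphere S_ct^m estimates the singular mass e^{−λt}, while all remaining paths fall strictly inside the ball B_ct^m.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_flight(m, c, lam, t):
    """One sample of X(t) for the Markov random flight in R^m."""
    pos = np.zeros(m)
    s = 0.0
    while s < t:
        u = rng.normal(size=m)
        u /= np.linalg.norm(u)               # uniform direction on S_1^m
        h = min(rng.exponential(1/lam), t - s)
        pos += c*h*u
        s += h
    return pos

m, c, lam, t, N = 3, 2.0, 1.5, 1.0, 20000
r = np.array([np.linalg.norm(random_flight(m, c, lam, t)) for _ in range(N)])
assert np.all(r <= c*t + 1e-9)               # support is the ball B_ct^m (4.1.2)
frac_on_sphere = np.mean(r > c*t*(1 - 1e-9))
print(frac_on_sphere, np.exp(-lam*t))        # singular mass ≈ e^{-λt}
```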

4.2 Recurrent integral relations

Consider the conditional distributions

\[
\Pr\{\mathbf{X}(t) \in d\mathbf{x} \mid N(t) = n\} = \Pr\{X_1(t) \in dx_1, \dots, X_m(t) \in dx_m \mid N(t) = n\}, \qquad n \ge 1. \tag{4.2.1}
\]

If N(t) = n, then the particle's position X(t) in the space Rᵐ at an arbitrary time moment t > 0 is determined by the coordinates

\[
X_k(t) = c\sum_{j=1}^{n+1}(s_j - s_{j-1})\,x_k^{j}, \qquad k = 1, \dots, m, \tag{4.2.2}
\]

where x_k^j are the components of the independent m-dimensional random vectors xʲ = (x₁ʲ, …, x_mʲ), j = 1, …, n+1, uniformly distributed on the unit sphere S_1^m. The random variables s_j, j = 1, …, n, are the moments of occurrence of the Poisson events (by definition we set s₀ = 0, s_{n+1} = t).

Consider the conditional characteristic functions of the conditional distributions (4.2.1):

\[
H_n(\boldsymbol{\alpha}, t) = \mathrm{E}\left\{e^{i\langle\boldsymbol{\alpha},\,\mathbf{X}(t)\rangle} \mid N(t) = n\right\}, \qquad n \ge 1, \tag{4.2.3}
\]

where α = (α₁, …, α_m) ∈ Rᵐ is the real-valued m-dimensional vector of inversion parameters and ⟨α, X(t)⟩ is the inner product of the vectors α and X(t). For the sake of simplicity, we will hereafter omit the inversion parameter α in the characteristic functions, bearing in mind, however, that H_n(t) ≡ H_n(α, t). Substituting (4.2.2) into (4.2.3), we have:

\[
H_n(t) = \mathrm{E}\exp\left(ic\sum_{j=1}^{n+1}\sum_{k=1}^{m}\alpha_k(s_j - s_{j-1})x_k^{j}\right)
= \mathrm{E}\exp\left(ic\sum_{j=1}^{n+1}(s_j - s_{j-1})\langle\boldsymbol{\alpha}, \mathbf{x}^{j}\rangle\right), \qquad n \ge 1,
\]

where ⟨α, xʲ⟩ is the inner product of the vectors α and xʲ. Evaluating the expectation in this last equality, we get:

\[
H_n(t) = \frac{n!}{t^{n}}\int_0^{t} d\tau_1\int_{\tau_1}^{t} d\tau_2 \cdots \int_{\tau_{n-1}}^{t} d\tau_n
\prod_{j=1}^{n+1}\left[\frac{1}{\operatorname{mes}(S_1^{m})}\int_{S_1^{m}} e^{ic(\tau_j - \tau_{j-1})\langle\boldsymbol{\alpha},\,\mathbf{x}^{j}\rangle}\,\sigma(d\mathbf{x}^{j})\right], \tag{4.2.4}
\]

where σ(·) is the Lebesgue measure on the surface of the sphere S_1^m (here τ₀ = 0, τ_{n+1} = t). This formula emerges by applying the theorem on the expectation of a function of random variables (see, for instance, [121, Section 1.6, item 1.6.3, page 28]), using the properties of the exponential function and taking into account the well-known fact that the joint distribution of the random variables s_j, j = 1, …, n, (0 ≤ s₁ ≤ s₂ ≤ ⋯ ≤ s_n ≤ t), has the form

\[
\Pr\{s_1 \in ds_1, \dots, s_n \in ds_n\} = \frac{n!}{t^{n}}\,ds_1 \dots ds_n.
\]


and is found to be

\[
\int_{S_1^{m}} e^{ic(\tau_j - \tau_{j-1})\langle\boldsymbol{\alpha},\,\mathbf{x}^{j}\rangle}\,\sigma(d\mathbf{x}^{j})
= (2\pi)^{m/2}\,\frac{J_{(m-2)/2}\bigl(c(\tau_j - \tau_{j-1})\|\boldsymbol{\alpha}\|\bigr)}{\bigl(c(\tau_j - \tau_{j-1})\|\boldsymbol{\alpha}\|\bigr)^{(m-2)/2}},
\]

where ‖α‖ = (α₁² + ⋯ + α_m²)^{1/2} and J_{(m−2)/2}(x) is the Bessel function of order (m−2)/2. Taking into account that

\[
\operatorname{mes}(S_1^{m}) = \frac{2\pi^{m/2}}{\Gamma\!\left(\frac{m}{2}\right)}, \qquad m \ge 2,
\]

we get:

\[
H_n(t) = \frac{n!}{t^{n}}\int_0^{t} d\tau_1\int_{\tau_1}^{t} d\tau_2 \cdots \int_{\tau_{n-1}}^{t} d\tau_n
\prod_{j=1}^{n+1}\left[2^{(m-2)/2}\,\Gamma\!\left(\frac{m}{2}\right)\frac{J_{(m-2)/2}\bigl(c(\tau_j - \tau_{j-1})\|\boldsymbol{\alpha}\|\bigr)}{\bigl(c(\tau_j - \tau_{j-1})\|\boldsymbol{\alpha}\|\bigr)^{(m-2)/2}}\right], \qquad n \ge 1. \tag{4.2.5}
\]

For the particular cases m = 2 (planar motion), m = 4 (four-dimensional motion) and m = 6 (six-dimensional motion), the conditional characteristic functions (4.2.5) will be calculated explicitly in later chapters. However, in the general case of arbitrary dimension m ≥ 2, the expression on the right-hand side of (4.2.5) apparently cannot be evaluated in explicit form.

Introduce the function

\[
\varphi(t) = 2^{(m-2)/2}\,\Gamma\!\left(\frac{m}{2}\right)\frac{J_{(m-2)/2}(ct\|\boldsymbol{\alpha}\|)}{(ct\|\boldsymbol{\alpha}\|)^{(m-2)/2}}, \qquad m \ge 2, \tag{4.2.6}
\]

which is referred to as the normed Bessel function (see [82, Ch. IV, Formula (4.4)]). It is the characteristic function (Fourier transform) of the uniform distribution on the surface of the sphere S_{ct}^m. Then formula (4.2.5) can be written as follows:

\[
H_n(t) = \frac{n!}{t^{n}}\int_0^{t} d\tau_1\int_{\tau_1}^{t} d\tau_2 \cdots \int_{\tau_{n-1}}^{t} d\tau_n \prod_{j=1}^{n+1}\varphi(\tau_j - \tau_{j-1}). \tag{4.2.7}
\]

For the particular cases m = 2 (planar motion), m = 4 (four-dimensional motion) and m = 6 (six-dimensional motion), the conditional characteristic functions (4.2.5) will explicitly be calculated in the later chapters. However, in the general case of arbitrary dimension m ≥ 2 the expression on the right-hand side of (4.2.5), cannot apparently be evaluated in an explicit form. Introduce the function m J (m−2)/2 (ctkαk) , m ≥ 2, (4.2.6) ϕ(t) = 2(m−2)/2 Γ 2 (ctkαk)(m−2)/2 which is referred to as the normed Bessel function (see [82, Ch. IV, Formula (4.4)]). It is the characteristic function (Fourier transform) of the uniform distribution on the surface of m . Then formula (4.2.5) can be written as follows: sphere Sct   Z t Z t Z t n+1   Y n! Hn (t) = n dτ1 dτ2 · · · dτn ϕ(τj − τj−1 ) . (4.2.7)   t 0 τ1 τn−1 j=1

Consider separately the integral factor in (4.2.7): t

Z In (t) :=

Z

t

Z dτ2 · · ·

dτ1 τ1

0

t

dτn τn−1

 n+1 Y 

×

(

t

ϕ(τ3 − τ2 ) . . .

dτ3 τ2

.



j=1

It can be represented in the following form:   Z t Z t In (t) = dτ1 ϕ(τ1 ) dτ2 ϕ(τ2 − τ1 ) 0 τ1 ( Z Z t

ϕ(τj − τj−1 )

 

dτn−1

ϕ(τn−1 − τn−2 )

(4.2.8)

τn−2

Z

(

t

×

dτn

)) ϕ(τn − τn−1 ) ϕ(t − τn )

))) ...

.

τn−1

Note that the functions e−λt λn In (t), n ≥ 1, are the joint characteristic functions of the


particle's position at time moment t and the number of Poisson events (that is, the changes of direction) that have occurred before this moment. The following theorem states that the functions (4.2.8) are connected with each other by a convolution-type recurrent relation.

Theorem 4.2.1. For any n ≥ 1, the following recurrent relation holds:

\[
I_n(t) = \int_0^{t} \varphi(t-\tau)\,I_{n-1}(\tau)\,d\tau = \int_0^{t} \varphi(\tau)\,I_{n-1}(t-\tau)\,d\tau, \qquad n \ge 1, \tag{4.2.9}
\]

where, by definition, I₀(x) = φ(x).

Proof. We prove (4.2.9) by induction. From (4.2.8), for n = 1, we have:

\[
I_1(t) = \int_0^{t} \varphi(\tau)\,\varphi(t-\tau)\,d\tau = \int_0^{t} \varphi(\tau)\,I_0(t-\tau)\,d\tau \tag{4.2.10}
\]

and, therefore, (4.2.9) is valid for n = 1. Suppose that (4.2.9) holds also for all numbers k ≤ n − 1, n ≥ 2. Consider the interior integral (with respect to τ_n) in (4.2.8). Changing the variable ξ = τ_n − τ_{n−1} in this integral, we get:

\[
\int_{\tau_{n-1}}^{t} \varphi(\tau_n - \tau_{n-1})\,\varphi(t - \tau_n)\,d\tau_n
= \int_0^{t-\tau_{n-1}} \varphi(\xi)\,\varphi\bigl((t - \tau_{n-1}) - \xi\bigr)\,d\xi = I_1(t - \tau_{n-1}) \tag{4.2.11}
\]

in view of (4.2.10) and the induction assumption. The next interior integral (with respect to τ_{n−1}) in (4.2.8), by means of the change of variable ξ = τ_{n−1} − τ_{n−2}, yields:

\[
\int_{\tau_{n-2}}^{t} \varphi(\tau_{n-1} - \tau_{n-2})\,I_1(t - \tau_{n-1})\,d\tau_{n-1}
= \int_0^{t-\tau_{n-2}} \varphi(\xi)\,I_1\bigl((t - \tau_{n-2}) - \xi\bigr)\,d\xi = I_2(t - \tau_{n-2})
\]

in view of (4.2.11) and the induction assumption. Continuing this integration process in the same manner, after the (n−1)-th step we finally arrive at the equality

\[
I_n(t) = \int_0^{t} \varphi(\tau_1)\,I_{n-1}(t - \tau_1)\,d\tau_1,
\]

proving (4.2.9).

Note that formula (4.2.9) can be rewritten in the convolution form:

\[
I_n(t) = \varphi(t) * I_{n-1}(t), \qquad n \ge 1. \tag{4.2.12}
\]

(4.2.12)

Corollary 4.2.1. For any n ≥ 1 the following relation holds: ∗(n+1)

In (t) = [ϕ(t)]

,

n ≥ 1,

(4.2.13)

where the symbol ∗(n + 1) means the (n + 1)-multiple convolution. Proof. Formula (4.2.13) immediately follows from (4.2.12) by means of the chain of equalities ∗(n+1)

In (t) = ϕ(t) ∗ In−1 (t) = ϕ(t) ∗ ϕ(t) ∗ In−2 (t) = · · · = [ϕ(t)]

.

178

Markov Random Flights

Application of the Laplace transformation Z ∞ e−st f (t) dt, L [f (t)] (s) =

Re s > 0,

0

to (4.2.13) yields the following important result. Corollary 4.2.2. For any n ≥ 1 the Laplace transform of functions (4.2.8) has the form: n+1

L [In (t)] (s) = (L [ϕ(t)] (s))

,

n ≥ 1,

Re s > 0.

(4.2.14)

Proof. The statement immediately follows from the main property of the Laplace transform of convolution. From Theorem 4.2.1 it also follows that the conditional characteristic functions (4.2.7) are connected with each other by an integral recurrent relation. Corollary 4.2.3. For any n ≥ 1 the conditional characteristic functions (4.2.7) satisfy the recurrent relation: Z n t n−1 Hn (t) = n τ ϕ(t − τ )Hn−1 (τ ) dτ, n ≥ 1, (4.2.15) t 0 where H0 (t) = ϕ(t). Proof. Multiplying (4.2.9) by (n!/tn ), n ≥ 1, and taking into account (4.2.7), we get   Z (n − 1)! n t n−1 τ ϕ(t − τ ) In−1 (τ ) dτ Hn (t) = n t 0 τ n−1 Z t n = n τ n−1 ϕ(t − τ )Hn−1 (τ ) dτ. t 0

4.3

Laplace transforms of conditional characteristic functions

The results of the previous section show a very important role of the function ϕ(t) defined by (4.2.6). The reason is that, as was noted above, ϕ(t) is the characteristic function (Fourier m of radius ct. transform) of the uniform distribution on the surface of sphere Sct From Theorem 4.2.1 and its corollaries we observe that the conditional characteristic functions Hn (t), n ≥ 1, and their Laplace transforms are, in fact, expressed in terms of function ϕ(t). Formulas (4.2.13) and (4.2.14) demonstrate that the possibility of obtaining an explicit form of conditional characteristic functions (4.2.7) entirely depends on whether the multiple convolutions of function ϕ(t) with itself or inverse Laplace transforms of its powers, can explicitly be evaluated. In the following theorem we present a general formula for conditional characteristic functions Hn (t), n ≥ 1, in terms of inverse Laplace transforms. Theorem 4.3.1. For any n ≥ 1 and arbitrary t > 0, the conditional characteristic functions (4.2.7) are given by the formula:    !n+1 n! −1  1 1 m−2 m (ckαk)2  (t), (4.3.1) p F , ; ; 2 Hn (t) = n L t 2 2 2 s + (ckαk)2 s2 + (ckαk)2

Integral Transforms of the Distributions of Markov Random Flights

179

for arbitrary complex number s such that Re s > 0, where L−1 means the inverse Laplace transform and ∞ X (ξ)k (η)k z k F (ξ, η; ζ; z) ≡ 2 F1 (ξ, η; ζ; z) = (4.3.2) (ζ)k k! k=0

is the Gauss hypergeometric function. Proof. According to [7, Table 5.19, Formula 6] or [63, Formula 6.621(1)], the Laplace transform of function (4.2.6) is  m  J (m−2)/2 (ctkαk) (m−2)/2 L (s) L [ϕ(t)] (s) = 2 Γ 2 (ctkαk)(m−2)/2   (4.3.3) 1 1 m−2 m (ckαk)2 =p F , Re s > 0, , ; ; 2 2 2 2 s + (ckαk)2 s2 + (ckαk)2 for any m ≥ 2. Hence, in view of (4.2.14), we get    !n+1 2 1 m − 2 m (ckαk) 1  (t). In (t) = L−1  p , ; ; 2 F 2 2 2 s + (ckαk)2 s2 + (ckαk)2 Substituting this expression into (4.2.7), we arrive at (4.3.1). Remark 4.3.1. It is important to note that the parameter ω of hypergeometric series (4.3.2) determining the hypergeometric function in (4.3.1), is real and negative for any m ≥ 2, that is, 1 m−2 m 1 ω =ξ+η−ζ = + − = − < 0, 2 2 2 2 and, therefore, this series is absolutely convergent in the area (ckαk)2 s2 + (ckαk)2 ≤ 1, (see [63, item 9.102(2)]). Remark 4.3.2. Although formula (4.3.1) was proved for any n ≥ 1 (and this case corresponds to the absolutely continuous component of the distribution of process X(t)), one can easily check that (4.3.1) is also valid for n = 0 (and this case corresponds to the singular component of the distribution). Really, for n = 0, relation (4.3.1) formally yields: "  # 1 1 m−2 m (ckαk)2 −1 p H0 (t) = L F , ; ; 2 (t) 2 2 2 s + (ckαk)2 s2 + (ckαk)2 (4.3.4) m J (m−2)/2 (ctkαk) (m−2)/2 =2 Γ , 2 (ctkαk)(m−2)/2 where we have used a formula of the inverse Laplace transform of the Gauss hypergeometric function (see [7, Table 5.19, Formula 6], see also (4.3.3)).

180

Markov Random Flights

On the other hand, applying Lemma 1.9.7, we have: n o H0 (t) = E eihα,X(t)i | N (t) = 0  Z Γ m 2 = eihα,xi σ(dx) m 2π m/2 (ct)m−1 Sct Z Γ m 2 eicthα,xi σ(dx) = 2π m/2 S1m m J (m−2)/2 (ctkαk) = 2(m−2)/2 Γ , 2 (ctkαk)(m−2)/2 and this coincides with (4.3.4). As was noted above, formula (4.3.4) yields the characteristic m function of the uniform distribution on the surface of sphere Sct of radius ct. Remark 4.3.3. Applying [63, Formula 9.131(1)], the hypergeometric function can be represented as follows:     1 m−2 m (ckαk)2 1 m (ckαk)2 1 1 p F , ; ; 2 F , 1; ; − = . 2 2 2 s + (ckαk)2 s 2 2 s2 s2 + (ckαk)2 Therefore, formula (4.3.1) has the following alternative form: "  n+1 # 1 1 m (ckαk)2 n! −1 F , 1; ; − (t). Hn (t) = n L t s 2 2 s2

(4.3.5)

Corollary 4.3.1. The characteristic function of the Markov random flight X(t), t ≥ 0, has the following series representation: "  n+1 # ∞ X 1 1 m (ckαk)2 −λt n −1 F , 1; ; − H(t) = e λ L (t). (4.3.6) s 2 2 s2 n=0 Proof. The staement follows from formula (4.3.1) and Remarks 4.3.2 and 4.3.3.

4.4

Conditional characteristic functions

Let us demonstrate how the integral transforms method developed above works in some important particular cases of Markov random flights in low dimensions.

4.4.1

Conditional characteristic functions in the plane R2

Consider the symmetric Markov random flight X(t) = (X1 (t), X2 (t)) in the plane R2 . In this case m = 2 and, therefore, function (4.2.6) takes the form: q ϕ(t) = J0 (ctkαk), kαk = α12 + α22 . Then, in view of (4.2.14), we have n+1

L [In (t)] (s) = (L [J0 (ctkαk)] (s))

.

Integral Transforms of the Distributions of Markov Random Flights

181

Taking into account (see [118, Table 8.4-1, Formula 55]), that 1 , L [J0 (ctkαk)] (s) = p 2 s + (ckαk)2 we get L [In (t)] (s) =

(s2

1 . + (ckαk)2 )(n+1)/2

According to [118, Table 8.4-1, Formula 57], the inverse Laplace transform of this function is   1 −1 In (t) =L (t) (s2 + (ckαk)2 )(n+1)/2  n/2 √ π t  = n+1 Jn/2 (ctkαk). 2ckαk Γ 2 Then conditional characteristic functions (4.2.7) take the form √ J (ctkαk) n! π  n/2 Hn (t) = n/2 , n ≥ 1. n+1 (ctkαk)n/2 2 Γ 2

(4.4.1)

In view of duplication formula for gamma-function, we have       n+1 n+1 2n n n! = Γ 2 · +1 . =√ Γ Γ 2 2 2 π Substituting this expression into (4.4.1), we finally obtain Hn (t) = 2n/2 Γ

n 2

+1

 J (ctkαk) n/2 , (ctkαk)n/2

n ≥ 1.

(4.4.2)

One can obtain the same result by applying Theorem 4.3.1. According to formula (4.3.1) and taking into account [118, Table 8.4-1, Formula 57], we immediately obtain   n! −1 1 (t) Hn (t) = n L t (s2 + (ckαk)2 )(n+1)/2 √ J (ctkαk) n! π  n/2 = n/2 , n+1 (ctkαk)n/2 2 Γ 2 and, thus, we again arrive at (4.4.1). To find conditional densities, one needs to evaluate the inverse Fourier transform of conditional characteristic functions (4.4.2). However, we postpone these calculations until the next Chapter 5 where we will derive again the conditional characteristic functions (4.4.2) by an alternative method of direct integration in formula (4.2.5).

4.4.2

Conditional characteristic functions in the space R4

Consider now the Markov random flight X(t) = (X1 (t), X2 (t), X3 (t), X4 (t)) in the fourdimensional Euclidean space R4 . In this case m = 4 and, therefore, function (4.2.6) takes the form q J1 (ctkαk) ϕ(t) = 2 , kαk = α12 + α22 + α32 + α42 . ctkαk

182

Markov Random Flights

According to (4.2.14), we have n+1

L [In (t)] (s) = 2

 n+1   J1 (ctkαk) . (s) L ctkαk

Taking into account (see [118, Table 8.4-1, Formula 58]) that    p J1 (ctkαk) 1 2 + (ckαk)2 − s , L s (s) = ctkαk (ckαk)2 we get L [In (t)] (s) =

p n+1 2n+1 2 + (ckαk)2 − s s . (ckαk)2n+2

By the same formula, the inverse Laplace transformation of this function yields:  n+1  p 2n+1 −1 2 2 In (t) = L s + (ckαk) − s (t) (ckαk)2n+2 2n+1 (n + 1) Jn+1 (ctkαk) . = (ckαk)n+1 t Then conditional characteristic functions (4.2.7) take the form n! 2n+1 (n + 1) Jn+1 (ctkαk) tn (ckαk)n+1 t Jn+1 (ctkαk) =2n+1 (n + 1)! , (ctkαk)n+1

Hn (t) =

(4.4.3) n ≥ 1.

The same result can be obtained from Theorem 4.3.1. Really, formula (4.3.1) yields    !n+1 n! −1  1 1 (ckαk)2  (t). p Hn (t) = n L F , 1; 2; 2 (4.4.4) t 2 s + (ckαk)2 s2 + (ckαk)2 Consider separately the hypergeometric function on the right-hand side of (4.4.4). We show that the following equality holds: p   2 s2 + (ckαk)2 (ckαk)2 1 p , 1; 2; 2 = , Re s > 0. (4.4.5) F 2 s + (ckαk)2 s + s2 + (ckαk)2 Introduce the variable z by the equality 4z(1 − z) =

(ckαk)2 . s2 + (ckαk)2

Since Re s > 0, then, obviously, |4z(1 − z)| < 1. By solving the simple equation 4z 2 − 4z +

(ckαk)2 = 0, s2 + (ckαk)2

we get 1 z= 2

s 1−

(ckαk)2 1− 2 s + (ckαk)2

!

p s2 + (ckαk)2 − s = p . 2 s2 + (ckαk)2

Integral Transforms of the Distributions of Markov Random Flights

183

Note that, when solving this equation, we take the sign minus before the discriminant in order to the inequality |z| < 1/2 be fulfilled. Then, as is easy to see, p 2 s2 + (ckαk)2 1 p = , 1−z s + s2 + (ckαk)2 and by applying [63, Formula 9.121(4)], we obtain (4.4.5). By substituting now (4.4.5) into (4.4.4), we get  −(n+1)  p 2n+1 n! −1  2 2 L Hn (t) = s + s + (ckαk) (t). tn According to [7, Table 5.3, Formula 43 or Table 5.4, Formula 21], we have  −(n+1)  p Jn+1 (ctkαk) (t) = (ckαk)−(n+1) (n + 1) . L−1 s + s2 + (ckαk)2 t Hence, Jn+1 (ctkαk) 2n+1 n! (ckαk)−(n+1) (n + 1) n t t Jn+1 (ctkαk) n+1 =2 (n + 1)! , (ctkαk)n+1

Hn (t) =

and we again obtain (4.4.3). To find conditional densities, one needs to evaluate the inverse Fourier transform of conditional characteristic functions (4.4.3). However, we postpone these calculations until Chapter 7 where we will derive again the conditional characteristic functions (4.4.3) by an alternative method of direct integration in formula (4.2.5).

4.4.3

Conditional characteristic functions in the space R3

We have seen in the previous subsections that in the Euclidean spaces R2 and R4 the conditional characteristic functions can be evaluated in an explicit form. Due to this fact, the respective conditional densities can also be calculated explicitly (see later chapters). The situation drastically changes in the three-dimensional space R3 and our analysis becomes much more complicated. In the three-dimensional Euclidean space m = 3 and in this case function (4.2.6) takes the form: √ q √ π J1/2 (ctkαk) sin(ctkαk) ϕ(t) = 2 = , kαk = α12 + α22 + α32 . 2 (ctkαk)1/2 ctkαk According to (4.2.14), we have L [In (t)] (s) = (ckαk)−(n+1)

   n+1 sin(ctkαk) L (s) . t

Taking into account (see [118, Table 8.4-1, Formula 107]) that   sin(ctkαk) ckαk L (s) = arctg , t s we get L [In (t)] (s) = (ckαk)−(n+1)

 arctg

ckαk s

n+1 .

184

Markov Random Flights

Thus, −(n+1)

In (t) = (ckαk)

L

−1

"

ckαk arctg s

n+1 # (t).

Therefore, according to (4.2.7), the conditional characteristic functions Hn (t) take the form: " n+1 # n! ckαk −(n+1) −1 Hn (t) = n (ckαk) L arctg (t), n ≥ 1. (4.4.6) t s Let us show how this formula (4.4.6) can be derived directly from Theorem 4.3.1. According to (4.3.1), we have   !n+1  1 1 1 3 (ckαk)2 n! −1   (t). p (4.4.7) F , ; ; Hn (t) = n L t 2 2 2 s2 + (ckαk)2 s2 + (ckαk)2 Applying [63, Formula 9.121(26)], we get  F

1 1 3 (ckαk)2 , ; ; 2 2 2 2 s + (ckαk)2



! p s2 + (ckαk)2 ckαk arcsin p = ckαk s2 + (ckαk)2 p s2 + (ckαk)2 ckαk = arctg . ckαk s

(4.4.8)

Substituting this expression into (4.4.7), we again arrive at (4.4.6). The inverse Laplace transform on the right-hand side of formula (4.4.6) cannot, apparently, be explicitly calculated for arbitrary n ≥ 1. However, for the important particular case n = 1 corresponding to the single change of direction, expression (4.4.6) can be obtained in an explicit form. From (4.4.6) and taking into account equality (1.9.20) of Lemma 1.9.8, we get " 2 # 1 ckαk −2 −1 H1 (t) = (ckαk) L arctg (t) t s (4.4.9)   1 sin(ctkαk)Si(2ctkαk) + cos(ctkαk)Ci(2ctkαk) , = (ctkαk)2 where the functions Si(x) and Ci(x) are the incomplete integral sine and cosine, respectively, given by formulas (1.9.21). In order to obtain the conditional density corresponding to the single change of direction, one needs to calculate the inverse Fourier transform of function (4.4.9). However, we postpone these calculations until Chapter 6 where the three-dimensional case will be studied in more details. Moreover, we will obtain a general formula for the conditional density, corresponding to the single change of direction, in the Euclidean space Rm of arbitrary dimension m ≥ 2.

4.4.4

Conditional characteristic functions in arbitrary dimension

In this subsection we obtain series representations for the conditional characteristic functions of the process X(t) in arbitrary dimension m ≥ 3 corresponding to one, two and three changes of directions.

Integral Transforms of the Distributions of Markov Random Flights

185

Theorem 4.4.1. For arbitrary dimension m ≥ 3, the conditional characteristic functions H1 (α, t), H2 (α, t), H3 (α, t) of the symmetric Markov random flight X(t) corresponding to one, two and three changes of directions, respectively, are given by the formulas:  ∞  r X Γ k + m−1 (ctkαk)k−1/2 π (m − 2) Γ m 2 2   Jk+1/2 (ctkαk), H1 (α, t) = m 2 (k + m − 2) Γ m−1 (2k)!! Γ k + 2 2 k=0 (4.4.10)  ∞ k−1 X 3(m − 2)2 Γ m ξ (ctkαk) k  2  Jk+1 (ctkαk), (4.4.11) H2 (α, t) = k (2k + 3(m − 2)) Γ k + 3 Γ m−1 2 2 2 k=0  !2 ∞ r X (m − 2) Γ m ηk (ctkαk)k−3/2 π 2  Jk+3/2 (ctkαk), H3 (α, t) = 12 m−1 2 (2k + 2)!! (k + 2(m − 2)) Γ 2 k=0 (4.4.12) q α = (α1 , . . . , αm ) ∈ Rm ,

kαk =

2 , α12 + · · · + αm

m ≥ 3,

t > 0,

where Jν (z) are the Bessel functions and the coefficients ξn , ηn have the form:   k X Γ k − l + 12 Γ l + m−1 2  ξk = , k ≥ 0, (k − l)! Γ l + m (l + m − 2) 2 l=0 ηk =

k X l=0

  Γ l + m−1 Γ k − l + m−1 2 2   , Γ k−l+ m Γ l+ m (l + m − 2) 2 2

k ≥ 0.

(4.4.13)

(4.4.14)

Proof. According to Theorem 4.3.1, the conditional characteristic functions Hn (α, t), n ≥ 1, of the m-dimensional symmetric Markov random flight X(t) corresponding to n changes of direction are given by the formula:    !n+1 n! −1  1 m−2 m 1 (ckαk)2  (t), p Hn (α, t) = n Ls F , ; ; 2 t 2 2 2 s + (ckαk)2 s2 + (ckαk)2 (4.4.15) n ≥ 1,

Re s > 0,

α ∈ Rm ,

m ≥ 3,

means the inverse Laplace transformation with respect to complex variable s where L−1 s and ∞ X (ξ)k (η)k z k F (ξ, η; ζ; z) = 2 F1 (ξ, η; ζ; z) = (ζ)k k! k=0

is the Gauss hypergeometric function. Let us prove relation (4.4.10). By setting n = 1 in (4.4.15) and applying formula (1.6.27) of Lemma 1.6.1, we have: H1 (α, t) 



2

!2



1 m−2 m 1 −1  1 (ckαk)  (t) p F L , ; ; 2 t s 2 2 2 s + (ckαk)2 s2 + (ckαk)2 "  ∞   k # m−1 2 X (m − 2) Γ m Γ k + 1 −1 1 (ckαk) 2 2  = Ls (t) t s2 + (ckαk)2 Γ m−1 Γ k+ m (k + m − 2) s2 + (ckαk)2 2 2 k=0  ∞    X Γ k + m−1 (ckαk)2k (m − 2) Γ m 1 −1 2 2   L (t). = (s2 + (ckαk)2 )k+1 t Γ m−1 Γ k+ m (k + m − 2) s 2 2 k=0 (4.4.16)

=

186

Markov Random Flights

Note that evaluating the inverse Laplace transformation of each term of the series separately is justified because it converges uniformly in s everywhere in the set   (ckαk)2 + ≤ 1 ⊂ C+ s∈C : 2 s + (ckαk)2 −(k+1) and the complex functions s2 + (ckαk)2 , k ≥ 0, are holomorphic and do not have any singular points in the right half-plane C+ of the complex plane C. Moreover, each of these functions contains the inversion complex variable s ∈ C+ in a negative power and behaves like s−(2k+2) , as |s| → +∞, and, therefore, all these complex functions rapidly tend to zero at the infinity. According to [118, Table 8.4-1, formula 57], we have L−1 s



 k+1/2 √  π t Jk+1/2 (ctkαk). (t) = k+1 k! 2ckαk (s2 + (ckαk)2 ) 1

Substituting this into (4.4.16), after some simple calculations we obtain (4.4.10). For n = 2, formula (4.4.15) yields: H2 (α, t)    !3 2! −1  1 1 m−2 m (ckαk)2  (t) p = 2 Ls F , ; ; 2 t 2 2 2 s + (ckαk)2 s2 + (ckαk)2 "  ∞  k # X 3(m − 2)2 Γ m 1 ξk (ckαk)2 2 −1 2  (t) = 2 Ls √ t 2k + 3(m − 2) s2 + (ckαk)2 (s2 + (ckαk)2 )3/2 π Γ m−1 2 k=0  ∞   X ξk (ckαk)2k 6(m − 2)2 Γ m 1 −1 2 = √ 2 L (t), 2k + 3(m − 2) s (s2 + (ckαk)2 )k+3/2 π t Γ m−1 2 k=0 (4.4.17) where we have used relation (1.6.29) of Lemma 1.6.2 and the coefficients ξk , k ≥ 0, are given by (1.6.30). According to [118, Table 8.4-1, formula 57], we have L−1 s



1 (s2 + (ckαk)2 )

 (t) = k+3/2

 k+1 √ t π  Jk+1 (ctkαk). 2ckαk Γ k + 32

Substituting this into (4.4.17), after some simple calculations we arrive at (4.4.11). Finally, for n = 3, formula (4.4.15) and Lemma 1.6.3 yield: H3 (α, t)    !4 3! −1  1 1 m−2 m (ckαk)2  (t) p = 3 Ls F , ; ; 2 t 2 2 2 s + (ckαk)2 s2 + (ckαk)2    !2 ∞  k 2 X (m − 2) Γ m 2 (ckαk) 3! −1  η k  (t) 2 = 3 Ls t (s2 + (ckαk)2 )2 k + 2(m − 2) s2 + (ckαk)2 Γ m−1 2 k=0  !2 ∞   X ηk (ckαk)2k 12 (m − 2) Γ m 1 −1 2  = 3 L (t), t k + 2(m − 2) s (s2 + (ckαk)2 )k+2 Γ m−1 2 k=0 (4.4.18)

Integral Transforms of the Distributions of Markov Random Flights

187

where we have used relation (1.6.32) of Lemma 1.6.3 and the coefficients ηk , k ≥ 0, are given by (1.6.33). According to [118, Table 8.4-1, formula 57], we have L−1 s



  k+3/2 √ π t (t) = Jk+3/2 (ctkαk). k+2 (k + 1)! 2ckαk (s2 + (ckαk)2 ) 1

Substituting this into (4.4.18), after some simple calculations we obtain (4.4.12). The theorem is proved. Corollary 4.4.1. For arbitrary dimension m ≥ 3, the inverse Fourier transformation of the conditional characteristic function H1 (α, t) yields the conditional density p1 (x, t) corresponding to the single change of direction that has the form: "r #  ∞  X Γ k + m−1 (ctkαk)k−1/2 π (m − 2) Γ m −1 2 2   p1 (x, t) = Fα Jk+1/2 (ctkαk) (x) 2 Γ m−1 (2k)!! Γ k + m (k + m − 2) 2 2 k=0   2m−3 Γ( m m−1 m m kxk2 2) F , − + 2; ; 2 2 Θ(ct − kxk), = m/2 2 2 2 c t π (ct)m (4.4.19) x = (x1 , . . . , xm ) ∈ Rm , m ≥ 3, t > 0, where Θ(x) is the Heaviside unit-step function. Proof. The proof immediately follows from (4.4.10) and the explicit formula for the conditional density p1 (x, t) that will be obtained by other method in Section 4.9 and taking into account the continuity of the inverse Fourier transformation as well as the uniqueness of conditional density p1 (x, t). Remark 4.4.1. One can check that the series in formulas (4.4.10), (4.4.11) and (4.4.12) are convergent for any fixed t > 0. However, inverting these functions in α is a very difficult problem and, therefore, we cannot obtain closed-form expressions for the respective conditional densities p2 (x, t) and p3 (x, t) (a closed-form expression for the conditional density p1 (x, t) given by (4.4.19) will be obtained by other method in Section 4.9. Remark 4.4.2. In the same manner as above, one can obtain series representations for other conditional characteristic functions using Theorem 4.3.1 and evaluating the powers of Gauss hypergeometric function, as it was demonstrated in Subsection 1.6.3. Another way is to use the recurrent relation (4.2.15) for conditional characteristic functions given in Corollary 4.2.3. However, in both these cases the resulting expressions will certainly have an extremely complicated and cumbersome form.

4.5

Integral equation for characteristic function

According to (4.2.7) and the total probability formula, the characteristic function of the Markov random flight X(t), t ≥ 0, is given by the formal series ∞ ∞ o n X X (λt)n H(t) = E ei(α,X(t)) = e−λt Hn (t) = e−λt λn In (t). n! n=0 n=0

(4.5.1)

188

Markov Random Flights

Our first step is to prove that the series on the right-hand side of (4.5.1) is convergent uniformly in the variable kαk for any fixed t ≥ 0. We need the two auxiliary lemmas. Lemma 4.5.1. The following inequality holds: Jν (x) 1 xν ≤ 2ν Γ(ν + 1) ,

ν ≥ 0.

(4.5.2)

Proof. Using the well-known integral representation of the Bessel function (see [63, Formula 8.411(4)]), we obtain: Z π/2 Jν (x) 2 2ν ≤   | (sin θ) cos (x cos θ)| dθ xν 2ν Γ ν + 1 Γ 1 0 2 2 Z π/2 2 2ν  √ ≤ ν (sin θ) dθ 2 Γ ν + 12 π 0 (see [63, Formula 3.621(1)])   2 1 1 2ν−1  √ 2 = ν B ν + ,ν + 2 2 2 Γ ν + 12 π   Γ ν + 12 Γ ν + 12 22ν  = ν 1 √ Γ (2ν + 1) 2 Γ ν+2 π  1 ν Γ ν+2 2  =√ π Γ 2 ν + 12 (duplication formula for gamma-function)  √ π Γ ν + 21 2ν  =√ π 22ν Γ ν + 12 Γ (ν + 1) 1 . = ν 2 Γ(ν + 1) The lemma is proved. Lemma 4.5.2. For any n ≥ 1, the following equality holds: Z t Z t Z t tn , n ≥ 1. dτ1 dτ2 · · · dτn = n! 0 τ1 τn−1

(4.5.3)

Proof. Formula (4.5.3) can be obtained from a more general [63, Formula 4.631]. The particular case of this general formula has the form: Z 0 Z τn−1 Z τ1 Z 0 1 dτn−1 dτn−2 · · · dτ0 = (−ξ)n−1 dξ. (n − 1)! t t t t Changing the upper and lower indices in all the integrals of this equality and redenoting the integration variables, we get: Z t Z t Z t Z t 1 tn tn dτ1 dτ2 · · · dτn = ξ n−1 dξ = = . (n − 1)! 0 n(n − 1)! n! 0 τ1 τn−1 The lemma is proved. With this in hand, we can prove the uniform convergence of the series on the right-hand side of (4.5.1).

Integral Transforms of the Distributions of Markov Random Flights

189

Lemma 4.5.3. The functional series on the right-hand side of (4.5.1) is uniformly convergent in the variable kαk for any fixed t ≥ 0. Proof. In view of formula (4.5.2) of Lemma 4.5.1, the following inequality is true:  m  J (m−2)/2 (ctkαk) ≤ 1, |ϕ(t)| = 2(m−2)/2 Γ (m−2)/2 2 (ctkαk) for any m ≥ 2. Then, taking into account (4.5.3) and recalling that I0 (t) = ϕ(t), we have:   Z t Z t Z t ∞ ∞ n+1 X  X Y λn In (t) ≤ |ϕ(t)| + λn dτ1 dτ2 · · · dτn |ϕ(τj − τj−1 )|   0 τ1 τn−1 n=0

n=1

≤1+ =1+

∞ X

λn

n=1 ∞ X

j=1

Z

t

Z

t

Z dτ2 · · ·

dτ1 τ1

0

t

dτn τn−1

λ n tn n! n=1

= eλt < ∞ for any fixed t ≥ 0 and all kαk. This means that the functional series on the right-hand side of (4.5.1) is convergent uniformly in kαk for any fixed t ≥ 0 and, therefore, it uniquely determines some smooth function which is the characteristic function H(t) of the Markov random flight X(t), t ≥ 0. The lemma is proved. In the following theorem we present an integral equation for the characteristic function H(t) and give its solution in terms of function ϕ(t). Theorem 4.5.1. The characteristic function H(t), t ≥ 0, of the Markov random flight X(t) satisfies the Volterra integral equation of second kind with continuous kernel: H(t) = e−λt ϕ(t) + λ

Z

t

e−λ(t−τ ) ϕ(t − τ )H(τ ) dτ,

t ≥ 0.

(4.5.4)

0

In the class of continuous functions, integral equation (4.5.4) has the unique solution given by the uniformly converging series H(t) = e−λt

∞ X

∗(n+1)

λn [ϕ(t)]

.

n=0

Proof. In view of Theorem 4.2.1, Lemma 4.5.3 and formula (4.5.1), we have:

(4.5.5)

190

Markov Random Flights

H(t) = e−λt

∞ X

λn In (t)

n=0 ∞ X

( =e

−λt

ϕ(t) +

λ

n

Z

)

t

ϕ(t − τ )In−1 (τ ) dτ 0

n=1

(uniform convergence of the series) ! ) ( Z t ∞ X ϕ(t − τ ) λn In−1 (τ ) dτ = e−λt ϕ(t) + 0

( =e

−λt

n=1 ∞ X

t

Z

ϕ(t − τ )

ϕ(t) + 0

( = e−λt

t

Z

ϕ(t − τ )

ϕ(t) + λ

=e

λ

n=0 ∞ X

0 −λt

! n+1

In (τ )

) dτ

! λn In (τ )

) dτ

n=0

  Z t λτ ϕ(t) + λ ϕ(t − τ ) e H(τ ) dτ 0

= e−λt ϕ(t) + λ

Z

t

e−λ(t−τ ) ϕ(t − τ )H(τ ) dτ

0

proving (4.5.4). Integral equation (4.5.4) can be represented in the convolution form:    H(t) = e−λt ϕ(t) + λ e−λt ϕ(t) ∗ H(t) , t ≥ 0.

(4.5.6)

Let us now check that function (4.5.5) is the solution of the convolutional equation (4.5.6). Substituting (4.5.5) into the right-hand side of (4.5.6), we have:    e−λt ϕ(t)+λ e−λt ϕ(t) ∗ H(t) " !# ∞ X  ∗(n+1) −λt −λt −λt n = e ϕ(t) + λ e ϕ(t) ∗ e λ [ϕ(t)] n=0

=e

−λt

ϕ(t) + λe

−λt

ϕ(t) ∗

∞ X

! ∗(n+1)

n

λ [ϕ(t)]

n=0

= e−λt ϕ(t) + e−λt

∞ X

∗(n+2)

λn+1 [ϕ(t)]

n=0

=e

−λt

ϕ(t) +

∞ X

! n

∗(n+1)

λ [ϕ(t)]

n=1

= e−λt

∞ X

∗(n+1)

λn [ϕ(t)]

n=0

= H(t), and (4.5.5) is, thus, the solution of (4.5.6) indeed. The uniqueness of the solution (4.5.5) of equation (4.5.4) follows from the well-known fact that any Volterra integral equation of second kind with continuous kernel has the unique solution in the class of continuous functions for any λ and arbitrary continuous free term (which, in our case, coincides with the kernel) (see [200]). The theorem is thus completely proved.

Integral Transforms of the Distributions of Markov Random Flights

191

One should note that integral equation (4.5.4) can also be obtained by the methods of renewal theory and using the Markov property. Similarly to Theorem 4.5.1, one can show that the characteristic function of the absolutely continuous component of the distribution of process X(t), t > 0, determined by the equality ∞ X e H(t) = e−λt λn In (t), n=1

satisfies the Volterra integral equation Z t e ) dτ, e e−λ(t−τ ) ϕ(t − τ )H(τ H(t) = λe−λt I1 (t) + λ

t > 0,

(4.5.7)

t > 0.

(4.5.8)

0

where I1 (t) is given by (4.2.10). Equation (4.5.7) can be represented in the convolution form: h i  e e H(t) = λe−λt [ϕ(t) ∗ ϕ(t)] + λ e−λt ϕ(t) ∗ H(t) ,

It is easy to check, similarly to the proof of Theorem 4.5.1, that the function e H(t) = e−λt

∞ X

∗(n+1)

λn [ϕ(t)]

.

(4.5.9)

n=1

is the solution to equation (4.5.8) and, in the class of continuous functions, this solution is unique. From (4.5.7), it also follows that the function e H(t) = eλt H(t) =

∞ X

λn In (t)

n=1

satisfies the more simple Volterra integral equation Z t H(t) = λI1 (t) + λ ϕ(t − τ )H(τ ) dτ,

t > 0,

(4.5.10)

t > 0.

(4.5.11)

0

or, in the convolutional form,   H(t) = λ[ϕ(t) ∗ ϕ(t)] + λ ϕ(t) ∗ H(t) , The function H(t) =

∞ X

∗(n+1)

λn [ϕ(t)]

(4.5.12)

n=1

is the unique continuous solution to equation (4.5.11).

4.6

Laplace transform of characteristic function

The convolutional structure of the Volterra integral equation (4.5.6) enables us to obtain, in an explicit form, the Laplace transform of the characteristic function of process X(t), t ≥ 0. Due to this fact, in the next section we will study its limiting behaviour under the standard Kac’s scaling condition. In this section we give an explicit formula for the Laplace transform of the characteristic function H(t) in terms of the Gauss hypergeometric function, which is valid in the Euclidean space Rm of any dimension m ≥ 2.

192

Markov Random Flights

Theorem 4.6.1. For any dimension m ≥ 2, the Laplace transform of the characteristic function of the m-dimensional Markov random flight X(t), t ≥ 0, has the form:   (ckαk)2 1 m−2 m , ; ; F 2 2 2 (s + λ)2 + (ckαk)2  , (4.6.1)  L [H(t)] (s) = p (ckαk)2 1 m−2 m , ; ; (s + λ)2 + (ckαk)2 − λ F 2 2 2 (s + λ)2 + (ckαk)2 for Re s > 0, where F (ξ, η; ζ; z) is the Gauss hypergeometric function (4.3.2). The principal branch of the radical is taken in complex function (4.6.1). Proof. Applying Laplace transformation to equation (4.5.6), we obtain: L [H(t)] (s) = L [ϕ(t)] (s + λ) + λ L [ϕ(t)] (s + λ) L [H(t)] (s), and, therefore, L [ϕ(t)] (s + λ) , Re s > 0. 1 − λ L [ϕ(t)] (s + λ) Using (4.3.3), we can rewrite general formula (4.6.2) in the explicit form:   (ckαk)2 1 m−2 m 1 √ F , ; ; 2 2 2 2 2 (s+λ) +(ckαk) (s+λ)2 +(ckαk)2   L [H(t)] (s) = (ckαk)2 1 m−2 m λ F 1− √ , ; ; 2 2 2 2 2 (s+λ) +(ckαk) 2 2 L [H(t)] (s) =

(4.6.2)

(s+λ) +(ckαk)

F =p



(ckαk)2 1 m−2 m 2 , 2 ; 2 ; (s+λ)2 +(ckαk)2

(s + λ)2 + (ckαk)2 − λ F





(ckαk)2 1 m−2 m 2 , 2 ; 2 ; (s+λ)2 +(ckαk)2

.

The theorem is proved. Let us consider formula (4.6.1) for the important particular cases of low dimensions. In the two-dimensional case (m = 2), the second coefficient of the hypergeometric function is zero. In this case, the hypergeometric function is identically equal to 1 and formula (4.6.1) takes the simple form: 1 L [H(t)] (s) = p . 2 (s + λ) + (ckαk)2 − λ

(4.6.3)

In the three-dimensional case (m = 3) and taking into account (4.4.8), formula (4.6.1) yields:   (ckαk)2 F 21 , 12 ; 32 ; (s+λ) 2 +(ckαk)2   L [H(t)] (s) = p (ckαk)2 (s + λ)2 + (ckαk)2 − λ F 21 , 21 ; 23 ; (s+λ) 2 +(ckαk)2 √ =p



(s + λ)2 + (ckαk)2 − λ

arctg =

(s+λ)2 +(ckαk)2 ckαk



ckαk s+λ

ckαk − λ arctg





ckαk s+λ

.

arctg



ckαk s+λ



(s+λ)2 +(ckαk)2 ckαk

arctg



ckαk s+λ



(4.6.4)

Integral Transforms of the Distributions of Markov Random Flights

193

Finally, in the four-dimensional case (m = 4) and taking into account (4.4.5), formula (4.6.1) becomes:   (ckαk)2 F 21 , 1; 2; (s+λ) 2 +(ckαk)2   L [H(t)] (s) = p (ckαk)2 (s + λ)2 + (ckαk)2 − λ F 21 , 1; 2; (s+λ) 2 +(ckαk)2 √ 2 =p

=

(s+λ)2 +(ckαk)2



(s+λ)+

(s + λ)2 + (ckαk)2 − 2λ



(s+λ)2 +(ckαk)2



(s+λ)+

2 s+

(4.6.5)

(s+λ)2 +(ckαk)2

p

(s + λ)2 + (ckαk)2 − λ

(s+λ)2 +(ckαk)2

.

Applying Laplace transformation to convolutional equation (4.5.8), we see that the e Laplace transform of the characteristic function H(t) of the absolutely continuous component of the distribution has the form: 2 h i λ (L [ϕ(t)] (s + λ)) e , L H(t) (s) = 1 − λ L [ϕ(t)] (s + λ)

Re s > 0.

(4.6.6)

Similarly, applying Laplace transformation to convolutional equation (4.5.11), we see that the Laplace transform of function H(t), has the form: 2   λ (L [ϕ(t)] (s)) , L H(t) (s) = 1 − λ L [ϕ(t)] (s)

Re s > 0.

(4.6.7)

The following lemma states some analytical properties of function (4.6.1). We remind that the principal branch of the radical is taken in (4.6.1). Under this condition the following lemma holds. Lemma 4.6.1. The complex function (4.6.1) is holomorphic and single-valued in the right half-plane C+ = {s ∈ C : Re s > 0} of the complex plane C. Proof. In order to prove the statement of the lemma, we need to establish the singlevaluedness of the hypergeometric function in (4.6.1). Since the Gauss hypergeometric function F (ξ, η; ζ; z) is holomorphic (analytical) and single-valued in the right half-plane C+ slitted along real axis from the point +1 to +∞, we should exclude all the points s ∈ C+ satisfying the relations     (ckαk)2 (ckαk)2 Re ≥ 1, Im = 0. (s + λ)2 + (ckαk)2 (s + λ)2 + (ckαk)2 If s = a + ib, then these relations are equivalent to the system  (ckαk)2 a2 − b2 + 2λa + λ2 + (ckαk)2 2

2

≥ 1,

2

2

= 0.

(a2 − b2 + 2λa + λ2 + (ckαk)2 ) − (2ab + 2λb) (ckαk)2 (2ab + 2λb) (a2 − b2 + 2λa + λ2 + (ckαk)2 ) − (2ab + 2λb)

The second equality holds only for a = −λ, that is, in the left half-plane of C. Therefore, function (4.6.1) is holomorphic and single-valued everywhere in the right half-plane C+ . The lemma is proved.

194

4.7

Markov Random Flights

Initial conditions

Theorem 4.5.1 enables us to give a complete and exhaustive solution of the problem of finding the initial conditions for the symmetric Markov random flight in the Euclidean space of arbitrary dimension. This theorem enables us to write down the initial conditions in arbitrary dimension without knowing any special differential relations.

First of all, we note that, for real nonnegative x, the limiting relation holds:

    lim_{x→0} Jν(x)/x^ν = 1/(2^ν Γ(ν + 1)),    x ≥ 0,  ν ≥ 0.    (4.7.1)

This immediately follows from the well-known representation of the Bessel function (see, for instance, [63, Formula 8.440]):

    Jν(x)/x^ν = (1/2^ν) Σ_{k=0}^∞ [(−1)^k / (k! Γ(ν + k + 1))] (x/2)^{2k},    x ≥ 0,  ν ≥ 0.    (4.7.2)

From (4.7.1) it follows that

    φ(t)|_{t=0} = 2^{(m−2)/2} Γ(m/2) [ J_{(m−2)/2}(ct‖α‖) / (ct‖α‖)^{(m−2)/2} ]|_{t=0} = 1,    m ≥ 2.    (4.7.3)

Then, from Volterra integral equation (4.5.4), we get

    H(t)|_{t=0} = 1.    (4.7.4)

Therefore, the transition density f(x, t), x ∈ R^m, t ≥ 0, of the symmetric Markov random flight X(t) satisfies the first initial condition

    f(x, t)|_{t=0} = δ(x),    (4.7.5)

where δ(x) is the m-dimensional Dirac delta-function. Initial condition (4.7.5) expresses the obvious fact that, at the initial time moment t = 0, the distribution of X(t) is entirely concentrated at the origin 0 ∈ R^m.

While the first initial condition (4.7.5) has a quite obvious physical sense, finding the second initial condition is a much more difficult problem even in the one-dimensional case. This difficulty was noted by many researchers. For example, Mark Kac called the procedure of finding the second initial condition 'cumbersome' (see [84, pages 500-501]). That is why Theorem 4.5.1 is of special value: it enables us to easily find the initial conditions in any dimension without additional information about a differential equation satisfied by the transition density of the process (if such an equation exists). Indeed, by differentiating (4.5.4) in t, we get

    ∂H(t)/∂t = −λe^{−λt} φ(t) + e^{−λt} ∂φ(t)/∂t + λH(t) + λ ∫₀ᵗ (∂/∂t)[ e^{−λ(t−τ)} φ(t − τ) ] H(τ) dτ.

From (4.7.2) it follows that

    ∂φ(t)/∂t |_{t=0} = 0.    (4.7.6)

Then, taking into account (4.7.3), (4.7.4) and (4.7.6), we immediately obtain

    ∂H(t)/∂t |_{t=0} = 0.    (4.7.7)

Therefore, the second initial condition for the transition density of X(t) in any dimension has the form:

    ∂f(x, t)/∂t |_{t=0} = 0.    (4.7.8)

The second initial condition (4.7.8) can be physically interpreted as an initially vanishing diffusive flux.

If necessary, one can continue this procedure and find the next initial conditions for the characteristic function H(t). For example, by differentiating (4.5.4) twice in t, we have

    ∂²H(t)/∂t² = λ²e^{−λt} φ(t) − 2λe^{−λt} ∂φ(t)/∂t + e^{−λt} ∂²φ(t)/∂t² − λ²H(t) + λ ∂H(t)/∂t + λ ∫₀ᵗ (∂²/∂t²)[ e^{−λ(t−τ)} φ(t − τ) ] H(τ) dτ.

By means of representation (4.7.2), one can easily show that

    ∂²φ(t)/∂t² |_{t=0} = −(c‖α‖)²/m.    (4.7.9)

Then, taking into account (4.7.3), (4.7.4), (4.7.6), (4.7.7) and (4.7.9), we obtain

    ∂²H(t)/∂t² |_{t=0} = −(c‖α‖)²/m.

The inverse Fourier transformation of this formula yields the third initial condition for the transition density, which contains the second derivative of the Dirac delta-function (in the sense of generalized functions). It is interesting to note that, unlike the first and second initial conditions (4.7.4) and (4.7.7), this formula explicitly depends on the dimension m of the space.
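These initial values are easy to confirm numerically. The sketch below (sample values c = ‖α‖ = 1, m = 3; the helper name phi is our own) evaluates φ(t) through scipy's Bessel function and checks (4.7.3), (4.7.6) and (4.7.9) by finite differences.

```python
import numpy as np
from scipy.special import jv, gamma

c, na, m = 1.0, 1.0, 3        # na stands for ‖α‖; arbitrary sample values
nu = (m - 2) / 2

def phi(t):
    # φ(t) = 2^((m-2)/2) Γ(m/2) J_ν(ct‖α‖)/(ct‖α‖)^ν, with the limit value at t = 0
    x = c * na * t
    if x == 0.0:
        return 1.0            # limiting value by (4.7.1)
    return 2 ** nu * gamma(m / 2) * jv(nu, x) / x ** nu

h = 1e-3
d1 = (phi(h) - phi(0.0)) / h             # ≈ φ'(0) = 0
d2 = 2.0 * (phi(h) - phi(0.0)) / h ** 2  # ≈ φ''(0), valid since φ'(0) = 0
assert abs(phi(0.0) - 1.0) < 1e-12
assert abs(d1) < 1e-3
assert abs(d2 + (c * na) ** 2 / m) < 1e-4
```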

4.8 Limit theorem

The explicit formula (4.6.1) for the Laplace transform of the characteristic function H(t) enables us to thoroughly study the limiting behaviour of the symmetric Markov random flight X(t), t > 0, in the Euclidean space R^m of arbitrary dimension m ≥ 2 under the standard Kac scaling condition. This result is given by the following limit theorem.

Theorem 4.8.1. Under the Kac condition

    c → ∞,    λ → ∞,    c²/λ → ρ,    ρ > 0,    (4.8.1)

for arbitrary dimension m ≥ 2, the transition density of the symmetric Markov random flight X(t), t > 0, converges to the transition density of the homogeneous m-dimensional Wiener process with zero drift and diffusion coefficient σ² = 2ρ/m. In other words, if p(x, t), x ∈ B^m_{ct}, t > 0, is the transition density of the Markov random flight X(t), then the following limiting relation holds:

    lim_{c,λ→∞, (c²/λ)→ρ} p(x, t) = (m/(4ρπt))^{m/2} exp(−m‖x‖²/(4ρt)),    ‖x‖² = x₁² + ⋯ + x_m²,    t > 0.    (4.8.2)


Proof. Under the Kac condition (4.8.1), we have:

    lim_{c,λ→∞, (c²/λ)→ρ} (c‖α‖)² / ((s + λ)² + (c‖α‖)²) = lim_{c,λ→∞, (c²/λ)→ρ} (c²/λ²)‖α‖² / ((s/λ + 1)² + (c²/λ²)‖α‖²) = 0,

because (c²/λ²) → 0 in this case. Therefore, taking into account the continuity of the Gauss hypergeometric function, we get

    lim_{c,λ→∞, (c²/λ)→ρ} F(1/2, (m−2)/2; m/2; (c‖α‖)²/((s + λ)² + (c‖α‖)²)) = F(1/2, (m−2)/2; m/2; 0) = 1.

Then, by passing to the limit in (4.6.1) under the Kac scaling condition (4.8.1), we obtain:

    lim_{c,λ→∞, (c²/λ)→ρ} L[H(t)](s)
      = lim_{c,λ→∞, (c²/λ)→ρ} [ √((s + λ)² + (c‖α‖)²) (F(1/2, (m−2)/2; m/2; (c‖α‖)²/((s + λ)² + (c‖α‖)²)))^{−1} − λ ]^{−1}
      = lim_{c,λ→∞, (c²/λ)→ρ} [ (s + λ) √(1 + (c‖α‖/(s + λ))²) − λ Σ_{k=0}^∞ ((1/2)_k ((m−2)/2)_k / ((m/2)_k k!)) ((c‖α‖)²/((s + λ)² + (c‖α‖)²))^k ]^{−1}    (4.8.3)

(in the last step we multiplied the numerator and the denominator by the hypergeometric series, whose limit, as shown above, equals 1).

Note that, from the Kac condition (4.8.1), it follows that, for sufficiently large c and λ, the following inequalities hold:

    |c‖α‖/(s + λ)| < 1,    |(c‖α‖)²/((s + λ)² + (c‖α‖)²)| < 1,    (4.8.4)

for any fixed s and ‖α‖. Therefore, the radical on the right-hand side of (4.8.3) can be represented in the form of the absolutely and uniformly converging series

    √(1 + (c‖α‖/(s + λ))²) = 1 + (1/2)(c‖α‖/(s + λ))² − ((1·1)/(2·4))(c‖α‖/(s + λ))⁴ + ((1·1·3)/(2·4·6))(c‖α‖/(s + λ))⁶ − ⋯    (4.8.5)

Let us prove that this series is uniformly convergent. From the first inequality in (4.8.4) it follows that the series on the right-hand side of (4.8.5) is dominated by the numerical series

    A = 1 + 1/2 + (1·1)/(2·4) + (1·1·3)/(2·4·6) + ⋯ = 3/2 + Σ_{k=2}^∞ (2k − 3)!!/(2k)!!,

and one only needs to prove the convergence of this series. Since the general term of numerical series A has the form

    a_k = (2k − 3)!!/(2k)!!,    k ≥ 2,

then, as is easy to see,

    a_k/a_{k+1} = (2k + 2)/(2k − 1),


and, therefore,

    lim_{k→∞} k (a_k/a_{k+1} − 1) = lim_{k→∞} 3k/(2k − 1) = 3/2 > 1.

Hence, according to the Raabe criterion, the numerical series A is convergent and, therefore, series (4.8.5) is uniformly convergent.

Analogously, in view of the second inequality in (4.8.4), the hypergeometric series in (4.8.3) is dominated by the numerical series

    B = Σ_{k=0}^∞ ((1/2)_k ((m−2)/2)_k) / ((m/2)_k k!).

Let us prove the convergence of this series. The general term of series B has the form:

    b_k = ((1/2)_k ((m−2)/2)_k) / ((m/2)_k k!),    k ≥ 0,

and, therefore,

    b_k/b_{k+1} = ((2k + 2)(k + m/2)) / ((2k + 1)(k + m/2 − 1)).

Then, as is easy to see,

    lim_{k→∞} k (b_k/b_{k+1} − 1) = lim_{k→∞} (3k² + (m/2 + 1)k) / (2k² + (m − 1)k + m/2 − 1) = 3/2 > 1.

According to the Raabe criterion, numerical series B is convergent and, therefore, the hypergeometric series in (4.8.3) is uniformly convergent. Note that the convergence of numerical series B follows also from Remark 4.3.1, because this series determines the value of the Gauss hypergeometric function at the point 1.

Substituting now series (4.8.5) into (4.8.3), we can rewrite this expression as follows:

    lim_{c,λ→∞, (c²/λ)→ρ} L[H(t)](s)
      = lim_{c,λ→∞, (c²/λ)→ρ} [ (s + λ)(1 + (1/2)(c‖α‖/(s + λ))² − ((1·1)/(2·4))(c‖α‖/(s + λ))⁴ + ⋯)
          − λ(1 + ((1/2)₁((m−2)/2)₁/((m/2)₁ 1!)) (c‖α‖)²/((s + λ)² + (c‖α‖)²)
          + ((1/2)₂((m−2)/2)₂/((m/2)₂ 2!)) ((c‖α‖)²/((s + λ)² + (c‖α‖)²))² + ⋯) ]^{−1}
      = lim_{c,λ→∞, (c²/λ)→ρ} [ s + λ + (1/2)(c‖α‖)²/(s + λ) − ((1·1)/(2·4))(c‖α‖)⁴/(s + λ)³ + ⋯
          − λ − ((1/2)₁((m−2)/2)₁/((m/2)₁ 1!)) λ(c‖α‖)²/((s + λ)² + (c‖α‖)²)
          − ((1/2)₂((m−2)/2)₂/((m/2)₂ 2!)) λ(c‖α‖)⁴/((s + λ)² + (c‖α‖)²)² − ⋯ ]^{−1}
      = lim_{c,λ→∞, (c²/λ)→ρ} [ s + (1/2)((c²/λ)‖α‖²)/(s/λ + 1) − ((1·1)/(2·4))((c⁴/λ³)‖α‖⁴)/(s/λ + 1)³ + ⋯
          − ((1/2)₁((m−2)/2)₁/((m/2)₁ 1!)) ((c²/λ)‖α‖²)/((s/λ + 1)² + (c²/λ²)‖α‖²)
          − ((1/2)₂((m−2)/2)₂/((m/2)₂ 2!)) ((c⁴/λ³)‖α‖⁴)/((s/λ + 1)² + (c²/λ²)‖α‖²)² − ⋯ ]^{−1}.

Since, as we have shown above, both the series on the right-hand side of this equality are uniformly convergent, we can pass to the limit in each of their terms separately. Then, taking into account that, under the Kac condition (4.8.1),

    cⁿ/λ^{n−1} → 0    for any n ≥ 3,

we get

    lim_{c,λ→∞, (c²/λ)→ρ} L[H(t)](s) = [ s + (1/2) ρ‖α‖² − ((1/2)₁((m−2)/2)₁/((m/2)₁ 1!)) ρ‖α‖² ]^{−1}.

Using the definition of the Pochhammer symbol, one can easily show that

    ((1/2)₁ ((m−2)/2)₁) / ((m/2)₁ 1!) = (m − 2)/(2m),    m ≥ 2.

Thus, we finally obtain

    lim_{c,λ→∞, (c²/λ)→ρ} L[H(t)](s) = (s + ρ‖α‖²/m)^{−1}.    (4.8.6)

The inverse Laplace transformation of function (4.8.6) yields:

    L^{−1}[ (s + ρ‖α‖²/m)^{−1} ](t) = exp(−(ρ‖α‖²/m) t),    (4.8.7)

where we have used [7, Table 5.2, Formula 1]. The exponential function on the right-hand side of (4.8.7) is the characteristic function of the m-dimensional homogeneous Wiener process with zero drift and diffusion coefficient σ² = 2ρ/m. Applying now the Hankel inversion formula and using [63, Formula 6.631(4)], we can evaluate the inverse Fourier transform F⁻¹ of the exponential function on the right-hand side of (4.8.7):

    w(x, t) := F⁻¹[ e^{−(ρ‖α‖²t)/m} ](x)
      = (1/((2π)^{m/2} ‖x‖^{(m−2)/2})) ∫₀^∞ e^{−(ρt/m)ξ²} ξ^{m/2} J_{(m−2)/2}(‖x‖ξ) dξ
      = (1/((2π)^{m/2} ‖x‖^{(m−2)/2})) (‖x‖^{(m−2)/2}/(2ρt/m)^{m/2}) exp(−‖x‖²/(4ρt/m))
      = (m/(4ρπt))^{m/2} exp(−m‖x‖²/(4ρt)),

proving (4.8.2). The theorem is thus completely proved.
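Raabe's test used in the proof can be illustrated numerically. The snippet below (dimension m = 5 chosen arbitrarily) computes the quantities k(a_k/a_{k+1} − 1) and k(b_k/b_{k+1} − 1), both of which tend to 3/2 > 1, and checks that the partial sums of the dominating series B approach F(1/2, (m−2)/2; m/2; 1).

```python
import numpy as np
from scipy.special import hyp2f1

m = 5
k = np.arange(2.0, 200.0)
# Raabe quantities for series A and B; both tend to 3/2 > 1
raabe_A = k * ((2 * k + 2) / (2 * k - 1) - 1.0)
raabe_B = k * ((2 * k + 2) * (k + m / 2) / ((2 * k + 1) * (k + m / 2 - 1)) - 1.0)

# partial sums of B approach F(1/2, (m-2)/2; m/2; 1), finite since c - a - b = 1/2 > 0
b, S = 1.0, 0.0
for j in range(5000):
    S += b
    b *= (j + 0.5) * (j + (m - 2) / 2) / ((j + m / 2) * (j + 1.0))
target = hyp2f1(0.5, (m - 2) / 2, m / 2, 1.0)
assert abs(raabe_A[-1] - 1.5) < 0.01
assert abs(raabe_B[-1] - 1.5) < 0.01
assert abs(S - target) < 0.05        # the terms decay only like k^(-3/2)
```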


We note that the density obtained,

    w(x, t) = (m/(4ρπt))^{m/2} exp(−m‖x‖²/(4ρt)),    (4.8.8)

is the fundamental solution (the Green's function) of the m-dimensional heat equation

    ∂w(x, t)/∂t = (ρ/m) Δw(x, t),    (4.8.9)

where Δ is the m-dimensional Laplace operator.
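Theorem 4.8.1 can also be checked at the level of Laplace transforms: keeping c²/λ = ρ fixed and letting c grow, the value of L[H](s) computed from (4.6.1) should approach (s + ρ‖α‖²/m)^{−1} of (4.8.6). A minimal sketch with arbitrary sample values ρ = ‖α‖ = s = 1, m = 3:

```python
from math import sqrt
from scipy.special import hyp2f1

rho, m, s, na = 1.0, 3, 1.0, 1.0      # na stands for ‖α‖; sample values
for c in (10.0, 100.0, 1000.0):
    lam = c ** 2 / rho                # Kac scaling: c²/λ = ρ
    X = (s + lam) ** 2 + (c * na) ** 2
    # L[H](s) from (4.6.1)
    LH = 1.0 / (sqrt(X) / hyp2f1(0.5, (m - 2) / 2, m / 2, (c * na) ** 2 / X) - lam)
limit = 1.0 / (s + rho * na ** 2 / m)
assert abs(LH - limit) < 1e-3         # LH from the largest c above
```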

4.9 Random flight with rare switchings

The Kac scaling condition (4.8.1) provides the existence of the limit which, in view of Theorem 4.8.1 and equation (4.8.9), can also be referred to as the thermodynamic limit of the random flight X(t). This condition characterizes the particle's evolution in a Poisson environment saturated with random obstacles, collisions with which cause changes in direction. Per unit of time, the particle is subjected to a huge and ever-increasing number of collisions, and this fact is expressed by the condition λ → ∞. In this case, the free run of the particle between collisions becomes shorter and, in order to compensate for this, the particle's speed must also increase. The Kac condition (4.8.1) shows that the speed should increase by an order of magnitude faster than the intensity of the collisions. In this case, as was proved in Theorem 4.8.1, the thermodynamic limit of the symmetric Markov random flight X(t) is the homogeneous Wiener process with zero drift and a diffusion coefficient depending on the dimension of the space. However, the opposite (in a certain sense) limiting case λ → 0, describing the transport process in a rarefied environment, is also of great interest. This leads to the need to study the behaviour of the random flight at small values of λ. We emphasize at once that this is not about the usual passage to the limit as λ → 0 (in this sense the problem becomes meaningless because the process becomes degenerate), but about the asymptotic behaviour of the random flight X(t) at small values of the parameter λ. In this section we obtain an explicit asymptotic formula, as λ → 0, for the transition density of the process X(t) in the Euclidean space R^m of arbitrary dimension m ≥ 2. The derivation of this asymptotic relation is based on the evaluation of the exact inverse Fourier transform of the conditional characteristic function corresponding to a single change of direction.
The formula obtained yields the first term of the asymptotic expansion of the transition density of the process with respect to the powers of λ. This also enables us to give an exhaustive description of the behaviour of the transition density near the boundary of the diffusion area.

By the total probability formula, the density p(x, t) ≡ p^{(ac)}(x, t) of the absolutely continuous component of the distribution of X(t) can be represented in the following form:

    p(x, t) = e^{−λt} Σ_{n=1}^∞ ((λt)ⁿ/n!) pₙ(x, t),    x ∈ int B^m_{ct},  t > 0,    (4.9.1)

where pn (x, t), n ≥ 1, are the conditional densities of the conditional distributions Pr{X(t) ∈ dx | N (t) = n}, corresponding to n changes of direction (that is, n jumps of the governing Poisson process).


Since the conditional densities pₙ(x, t), n ≥ 1, do not depend on λ, representation (4.9.1) yields the simple asymptotic formula p(x, t) = λte^{−λt} p₁(x, t) + o(λ), or

    p(x, t) ∼ λte^{−λt} p₁(x, t),    λ → 0.    (4.9.2)

This means that, for small values of λ, the density p(x, t) behaves approximately like the function on the right-hand side of (4.9.2), and this approximation is the more accurate, the smaller the value of λ. Obviously, the crucial point here is to find an explicit form of the conditional density p₁(x, t) corresponding to a single change of direction. The main result of this section is given by the following theorem.

Theorem 4.9.1. For any t > 0 and arbitrary dimension m ≥ 2, the following asymptotic relation holds:

    p(x, t) ∼ λte^{−λt} (2^{2ν−1} Γ(ν + 1))/(π^{ν+1} (ct)^{2ν+2}) F(ν + 1/2, −ν + 1; ν + 1; ‖x‖²/(c²t²)),    λ → 0,    (4.9.3)

    x = (x₁, …, x_m) ∈ int B^m_{ct},    ‖x‖² = x₁² + ⋯ + x_m²,

where the parameter ν is

    ν = (m − 2)/2,    m ≥ 2,    (4.9.4)

and the Gauss hypergeometric function F(ξ, η; ζ; z) is given by (4.3.2).
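Before turning to the proof, one can check numerically that the right-hand side of (4.9.3), without the factor λte^{−λt}, is indeed a probability density: integrated over the ball of radius ct in R^m it gives 1 for every dimension. A sketch with arbitrary sample values c = 2, t = 1.5:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1, gamma

c, t = 2.0, 1.5
totals = []
for m in (2, 3, 4, 5):
    nu = (m - 2) / 2
    pref = 2 ** (2 * nu - 1) * gamma(nu + 1) / (np.pi ** (nu + 1) * (c * t) ** (2 * nu + 2))
    # conditional density p1 of (4.9.3) as a function of r = ‖x‖
    p1 = lambda r, nu=nu, pref=pref: pref * hyp2f1(nu + 0.5, 1 - nu, nu + 1, r ** 2 / (c * t) ** 2)
    area = 2 * np.pi ** (m / 2) / gamma(m / 2)        # surface area of S^{m-1}
    total, _ = quad(lambda r: area * p1(r) * r ** (m - 1), 0.0, c * t, limit=200)
    totals.append(total)
assert all(abs(v - 1.0) < 1e-5 for v in totals)
```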

Proof. Comparing (4.9.3) and (4.9.2), we see that, in order to prove the theorem, we need to show that for any t > 0 the following equality holds:

    p₁(x, t) = (2^{2ν−1} Γ(ν + 1))/(π^{ν+1} (ct)^{2ν+2}) F(ν + 1/2, −ν + 1; ν + 1; ‖x‖²/(c²t²)).    (4.9.5)

We will prove formula (4.9.5) only for the dimensions m ≥ 4 (that is, for the values of parameter ν ≥ 1). The proof of this formula for the lower dimensions m = 2 and m = 3 will be given separately, by other methods, in the respective next chapters. Thus, we consider ν ≥ 1 everywhere in this theorem.

The particular case of formula (4.2.5), for n = 1, yields the conditional characteristic function (Fourier transform) H₁(α, t) of the conditional density p₁(x, t) in arbitrary dimension m ≥ 2:

    H₁(α, t) = ([2^ν Γ(ν + 1)]²/t) ∫₀ᵗ (Jν(cτ‖α‖)/(cτ‖α‖)^ν) (Jν(c(t − τ)‖α‖)/(c(t − τ)‖α‖)^ν) dτ
             = ([2^ν Γ(ν + 1)]²/(ct)^{2ν}) ∫₀¹ (Jν(ct‖α‖ξ) Jν(ct‖α‖(1 − ξ)))/((‖α‖ξ)^ν (‖α‖(1 − ξ))^ν) dξ,    (4.9.6)

where, recall, Jν(x) is the Bessel function of order ν, α = (α₁, …, α_m) ∈ R^m is the real m-dimensional vector of inversion parameters, ‖α‖ = √(α₁² + ⋯ + α_m²), and ν is given by (4.9.4).

The standard approach is to evaluate the conditional characteristic function H₁(α, t) and then to invert it. However, the difficulty is that the integral in (4.9.6), generally speaking, cannot be calculated in an explicit form in arbitrary dimension (more precisely, as we will show in the next chapters, it can be explicitly evaluated only for the values of parameter ν = 0 and ν = 1, corresponding to dimensions m = 2 and m = 4, respectively). We will overcome this difficulty by evaluating the inverse Fourier transform of function H₁(α, t) without knowing its explicit form.

So, let us begin evaluating the inverse Fourier transform F_α⁻¹ of function H₁(α, t) given by (4.9.6). According to the Hankel inversion formula, we have:

    p₁(x, t) = F_α⁻¹[H₁(α, t)](x)
      = (‖x‖^{−ν} [2^ν Γ(ν + 1)]²)/((2π)^{ν+1} (ct)^{2ν}) ∫₀^∞ Jν(‖x‖r) r^{ν+1} [ ∫₀¹ (Jν(ctrξ) Jν(ctr(1 − ξ)))/((rξ)^ν (r(1 − ξ))^ν) dξ ] dr    (4.9.7)
      = (‖x‖^{−ν} [2^ν Γ(ν + 1)]²)/((2π)^{ν+1} (ct)^{2ν}) ∫₀^∞ r^{1−ν} Jν(‖x‖r) [ ∫₀¹ (Jν(ctrξ) Jν(ctr(1 − ξ)))/(ξ^ν (1 − ξ)^ν) dξ ] dr.

Let us prove that the integral in r on the right-hand side of (4.9.7) converges uniformly and absolutely for any ξ ∈ [ε, 1 − ε], ε ∈ (0, 1/2). We have:

    | ∫₀^∞ r^{1−ν} Jν(‖x‖r) [ ∫_ε^{1−ε} (Jν(ctrξ) Jν(ctr(1 − ξ)))/(ξ^ν (1 − ξ)^ν) dξ ] dr |
      ≤ (ct)^{2ν} ∫₀^∞ r^{ν+1} |Jν(‖x‖r)| [ ∫_ε^{1−ε} |Jν(ctrξ)/(ctrξ)^ν| |Jν(ctr(1 − ξ))/(ctr(1 − ξ))^ν| dξ ] dr.    (4.9.8)

The integrand on the right-hand side of (4.9.8) is continuous with respect to both variables r and ξ. Let us prove that it is integrable in r in the interval [0, ∞) for any ξ ∈ [ε, 1 − ε], ε ∈ (0, 1/2). To do this, we need to show that it tends to zero sufficiently quickly, as r → ∞. Using the well-known asymptotic expansion of the Bessel function (see, for instance, [63, Formula 8.451(1)] or [207, Section 23, Formula 23])

    Jν(x) = √(2/(πx)) cos(x − νπ/2 − π/4) + O(x^{−3/2}),    x → ∞,

and taking into account that

    √(2/π) < 1,    |cos(x − νπ/2 − π/4)| ≤ 1,

we obtain the asymptotic estimate

    |Jν(‖x‖r)| < g₁(r) ∼ (‖x‖r)^{−1/2} + O(r^{−3/2}),    r → ∞.

In other words, the modulus of the Bessel function is dominated by some function g₁(r) which, as r → ∞, tends to zero like r^{−1/2}. Analogously, we get the uniform in ξ asymptotic estimates

    |Jν(ctrξ)/(ctrξ)^ν| < g₂(r) ∼ (ctεr)^{−(ν+1/2)} + O(r^{−(ν+3/2)}),    r → ∞,
    |Jν(ctr(1 − ξ))/(ctr(1 − ξ))^ν| < g₃(r) ∼ (ctεr)^{−(ν+1/2)} + O(r^{−(ν+3/2)}),    r → ∞.

Therefore, the integrand on the right-hand side of (4.9.8) is dominated by some function, not depending on ξ, that tends to zero, as r → ∞, like

    r^{ν+1} r^{−1/2} r^{−(ν+1/2)} r^{−(ν+1/2)} = r^{−(ν+1/2)}.


Since, as we have noted above, ν ≥ 1, this dominating function tends to zero, as r → ∞, at a rate not less than r^{−3/2}. Therefore, it is integrable with respect to r in the interval [0, ∞) for any ξ ∈ [ε, 1 − ε], ε ∈ (0, 1/2). Thus, we have shown that the integrand on the right-hand side of (4.9.8), for any ξ ∈ [ε, 1 − ε], ε ∈ (0, 1/2), is dominated by some function, not depending on ξ, that is integrable with respect to r in [0, ∞). Hence, according to the Weierstrass criterion, the integral in r on the right-hand side of (4.9.8) converges uniformly for any ξ ∈ [ε, 1 − ε], ε ∈ (0, 1/2), and, therefore, the integral in r on the right-hand side of (4.9.7) converges uniformly and absolutely for any ξ ∈ [ε, 1 − ε], ε ∈ (0, 1/2), as required.

Due to the just proved uniform convergence, we may change the order of integration in (4.9.7) and rewrite this formula as the improper integral:

    p₁(x, t) = ([2^ν Γ(ν + 1)]²)/((2π)^{ν+1} ‖x‖^ν (ct)^{2ν}) lim_{ε→0} ∫_ε^{1−ε} (1/(ξ^ν (1 − ξ)^ν)) [ ∫₀^∞ r^{1−ν} Jν(‖x‖r) Jν(ctξr) Jν(ct(1 − ξ)r) dr ] dξ.    (4.9.9)

Let us consider separately the interior integral in (4.9.9):

    K := ∫₀^∞ r^{1−ν} Jν(‖x‖r) Jν(ctξr) Jν(ct(1 − ξ)r) dr.

According to [63, Formula 6.578(9)], we have:

    K = (2^{ν−1} Δ^{2ν−1}) / ((ctξ · ct(1 − ξ) · ‖x‖)^ν Γ(ν + 1/2) Γ(1/2)),    (4.9.10)

where Δ is the area of a triangle with the sides a₁ = ‖x‖, a₂ = ctξ and a₃ = ct(1 − ξ). A triangle with these sides exists if and only if the following system of inequalities is fulfilled:

    ‖x‖ + ctξ > ct(1 − ξ),    ‖x‖ + ct(1 − ξ) > ctξ,    ctξ + ct(1 − ξ) > ‖x‖.

Solving this system yields the interval of integration in ξ:

    (ct − ‖x‖)/(2ct) < ξ < (ct + ‖x‖)/(2ct),    0 < ‖x‖ < ct.

    L_t[H(α, t)](s) = L_t F_x[p(x, t)](α, s),    Re s > 0.    (4.10.11)

Therefore, according to (4.10.10), (4.10.11) and taking into account the single-valuedness of the complex-valued function in curly brackets on the left-hand side of (4.10.9) (see Lemma 4.6.1), it can be treated as the Laplace-Fourier transform of some operator A_{x,t} whose fundamental solution is the transition density p(x, t) of the process X(t), that is,

    √((s + λ)² + (c‖α‖)²) / F(1/2, (m−2)/2; m/2; (c‖α‖)²/((s + λ)² + (c‖α‖)²)) − λ = L_t F_x[A_{x,t}](α, s),    (4.10.12)

where A_{x,t} is the operator (composed of some differential operators) such that

    A_{x,t} p(x, t) = δ(x) δ(t),    x ∈ R^m,  t ≥ 0.    (4.10.13)

To study the structure of operator A_{x,t}, let us consider the function on the left-hand side of (4.10.12) separately:

    f(‖α‖, s) = √((s + λ)² + (c‖α‖)²) / F(1/2, (m−2)/2; m/2; (c‖α‖)²/((s + λ)² + (c‖α‖)²)) − λ,    Re s > 0.    (4.10.14)

According to Lemma 4.6.1, for any fixed ‖α‖ this function is single-valued in the right half-plane C⁺ of the complex plane C. For s ∈ C⁺ such that

    |(c‖α‖)²/((s + λ)² + (c‖α‖)²)| ≤ 1,

one can write down the series expansion (see (4.10.7)):

    [ F(1/2, (m−2)/2; m/2; (c‖α‖)²/((s + λ)² + (c‖α‖)²)) ]^{−1} = Σ_{n=0}^∞ aₙ ((c‖α‖)²/((s + λ)² + (c‖α‖)²))ⁿ,    (4.10.15)

where aₙ are some real coefficients depending on the dimension m of the space. Note also that always a₀ = 1. If these coefficients are such that

    Σ_{n=0}^∞ |aₙ| < ∞,

then the series on the right-hand side of (4.10.15) is absolutely and uniformly convergent with respect to ‖α‖ and s. Finding aₙ and, therefore, a general form of expansion (4.10.15) in arbitrary dimension m ≥ 2 is a fairly difficult analytical problem; however, we can give, for instance, a few terms of series (4.10.15):

    [ F(1/2, (m−2)/2; m/2; z) ]^{−1}
      = 1 − (1/2)((m − 2)/m) z − (1/8)((m − 2)(m² + 8)/(m²(m + 2))) z²
        − (1/16)((m − 2)(m⁴ + 2m³ + 24m² − 16m + 64)/(m³(m + 2)(m + 4))) z³ + O(z⁴),
    z = (c‖α‖)²/((s + λ)² + (c‖α‖)²).    (4.10.16)

By substituting (4.10.15) into (4.10.14), we get

    f(‖α‖, s) = √((s + λ)² + (c‖α‖)²) [ Σ_{n=0}^∞ aₙ ((c‖α‖)²/((s + λ)² + (c‖α‖)²))ⁿ ] − λ
              = Σ_{n=0}^∞ aₙ (c‖α‖)^{2n} ( √((s + λ)² + (c‖α‖)²) )^{−(2n−1)} − λ.    (4.10.17)
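The coefficients aₙ are easy to generate numerically by inverting the hypergeometric power series; the sketch below cross-checks the three closed-form coefficients printed in (4.10.16) for several dimensions.

```python
from math import factorial
from scipy.special import poch

# a_n of [F(1/2, (m-2)/2; m/2; z)]^(-1) via power-series inversion:
# with F = sum f_n z^n, the recursion a_n = -sum_{j>=1} f_j a_{n-j} holds.
def a_coeffs(m, N):
    f = [poch(0.5, n) * poch((m - 2) / 2, n) / (poch(m / 2, n) * factorial(n))
         for n in range(N + 1)]
    a = [1.0]
    for n in range(1, N + 1):
        a.append(-sum(f[j] * a[n - j] for j in range(1, n + 1)))
    return a

for m in (3, 4, 5, 7):
    a = a_coeffs(m, 3)
    assert abs(a[1] + (m - 2) / (2 * m)) < 1e-12
    assert abs(a[2] + (m - 2) * (m ** 2 + 8) / (8 * m ** 2 * (m + 2))) < 1e-12
    assert abs(a[3] + (m - 2) * (m ** 4 + 2 * m ** 3 + 24 * m ** 2 - 16 * m + 64)
               / (16 * m ** 3 * (m + 2) * (m + 4))) < 1e-12
```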

Consider the radical in (4.10.17):

    √((s + λ)² + (c‖α‖)²) = λ √(1 + (1/λ²)(s² + 2λs + (c‖α‖)²)) = λ √(1 + (1/λ²) T̃_{α,s}),

where it is denoted T̃_{α,s} = s² + 2λs + (c‖α‖)². Then we can rewrite (4.10.17) as follows:

    f(‖α‖, s) = Σ_{n=0}^∞ aₙ (c‖α‖)^{2n} λ^{−(2n−1)} (1 + (1/λ²) T̃_{α,s})^{−n+1/2} − λ.    (4.10.18)

Since the numbers −n + 1/2 are neither integer nor zero, in the domain

    D_λ = { s ∈ C⁺ : |T̃_{α,s}| = |s² + 2λs + (c‖α‖)²| < λ² }

one can write down the expansion

    (1 + (1/λ²) T̃_{α,s})^{−n+1/2} = F(n − 1/2, η; η; −(1/λ²) T̃_{α,s}) = 1 + Σ_{k=1}^∞ ((−1)^k (n − 1/2)_k / k!) ((1/λ²) T̃_{α,s})^k,    (4.10.19)

and this series converges absolutely and uniformly (with respect to ‖α‖ and s) in D_λ (η being an arbitrary real number).


Substituting (4.10.19) into (4.10.18), we obtain:

    f(‖α‖, s) = Σ_{n=0}^∞ aₙ (c‖α‖)^{2n} λ^{−(2n−1)} [ 1 + Σ_{k=1}^∞ ((−1)^k (n − 1/2)_k / k!) ((1/λ²) T̃_{α,s})^k ] − λ
      = Σ_{n=0}^∞ aₙ (c‖α‖)^{2n} λ^{−(2n−1)} + Σ_{n=0}^∞ aₙ (c‖α‖)^{2n} λ^{−(2n−1)} [ Σ_{k=1}^∞ ((−1)^k (n − 1/2)_k / k!) ((1/λ²) T̃_{α,s})^k ] − λ
      = Σ_{n=1}^∞ aₙ (c‖α‖)^{2n} λ^{−(2n−1)} + Σ_{n=0}^∞ aₙ (c‖α‖)^{2n} [ Σ_{k=1}^∞ ((−1)^k (n − 1/2)_k / k!) λ^{−(2n+2k−1)} (T̃_{α,s})^k ],    (4.10.20)

where the n = 0 term of the first sum (equal to λ, since a₀ = 1) has cancelled with the subtracted λ.

Taking into account the correspondence between the parameters ‖α‖, s and the respective differential operators

    ‖α‖² ←→ −Δ,    s² + 2λs + (c‖α‖)² = T̃_{α,s} ←→ T_{x,t} = ∂²/∂t² + 2λ ∂/∂t − c²Δ,

as well as the uniform convergence of the series in domain D_λ, we obtain the correspondence f(‖α‖, s) ←→ A_{x,t}, where the operator A_{x,t} is given by the formula

    A_{x,t} = Σ_{n=1}^∞ (−1)ⁿ aₙ λ^{−(2n−1)} (c²Δ)ⁿ + Σ_{n=0}^∞ (−1)ⁿ aₙ (c²Δ)ⁿ [ Σ_{k=1}^∞ ((−1)^k (n − 1/2)_k / k!) λ^{−(2n+2k−1)} (T_{x,t})^k ].    (4.10.21)

The operator Ax,t given by (4.10.21) is a hyperparabolic operator acting in the space D of generalized functions. The uniqueness of Ax,t follows from the single-valuedness of the function f (kαk, s) given by (4.10.14) in C+ and the continuity theorems for the inverse Laplace and Fourier transformations. The theorem is thus completely proved.
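The representation underlying (4.10.20) and (4.10.21) can be verified at the symbol level: for numeric (s, ‖α‖) inside D_λ, the truncated double series must reproduce f(‖α‖, s) of (4.10.14). A sketch with arbitrary sample values:

```python
from math import sqrt, factorial
from scipy.special import hyp2f1, poch

m, lam, s, w = 5, 10.0, 0.5, 1.0           # w stands for c‖α‖; sample values
X = (s + lam) ** 2 + w ** 2
f_direct = sqrt(X) / hyp2f1(0.5, (m - 2) / 2, m / 2, w ** 2 / X) - lam

# a_n of [F]^(-1) by power-series inversion
N = 25
f = [poch(0.5, n) * poch((m - 2) / 2, n) / (poch(m / 2, n) * factorial(n))
     for n in range(N + 1)]
a = [1.0]
for n in range(1, N + 1):
    a.append(-sum(f[j] * a[n - j] for j in range(1, n + 1)))

T = s ** 2 + 2 * lam * s + w ** 2          # the symbol T̃_{α,s}; here |T| < λ², so s ∈ D_λ
f_series = -lam
for n in range(N + 1):
    # (1 + T/λ²)^(1/2 - n) via the binomial series of (4.10.19)
    binom = sum((-1) ** k * poch(n - 0.5, k) / factorial(k) * (T / lam ** 2) ** k
                for k in range(60))
    f_series += a[n] * w ** (2 * n) * lam ** (1 - 2 * n) * binom
assert abs(f_direct - f_series) < 1e-8
```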

4.10.3 Random flights in low dimensions

Consider two particular cases of the formulas obtained.

2D Random Flight. In the two-dimensional case (m = 2) formula (4.10.14) yields:

    f(‖α‖, s) = √((s + λ)² + (c‖α‖)²) − λ
      = λ √(1 + (1/λ²) T̃_{α,s}) − λ
      = λ [ 1 + (1/2)(1/λ²) T̃_{α,s} − ((1·1)/(2·4))((1/λ²) T̃_{α,s})² + ((1·1·3)/(2·4·6))((1/λ²) T̃_{α,s})³ − ⋯ ] − λ
      = (1/λ) [ (1/2) T̃_{α,s} − ((1·1)/(2·4))(1/λ²)(T̃_{α,s})² + ((1·1·3)/(2·4·6))(1/λ⁴)(T̃_{α,s})³ − ⋯ ]


and, therefore, the transition density p(x, t), x ∈ R², t ≥ 0, of the two-dimensional Markov random flight X(t) is the solution of the hyperparabolic equation

    (1/λ) T_{x,t} [ 1/2 − ((1·1)/(2·4))(1/λ²) T_{x,t} + ((1·1·3)/(2·4·6))(1/λ⁴)(T_{x,t})² − ⋯ ] p(x, t) = δ(x) δ(t),    (4.10.22)

where

    T_{x,t} = ∂²/∂t² + 2λ ∂/∂t − c²Δ,    Δ = ∂²/∂x₁² + ∂²/∂x₂².

The 2D case is exceptional. The operator on the left-hand side of (4.10.22) is the product of two commuting operators, namely the differential telegraph operator T_{x,t} and the hyperparabolic operator in square brackets. As we have noted above, the transition density has the structure p(x, t) = p_sing(x, t) + p_cont(x, t), where p_sing(x, t) and p_cont(x, t) are the densities of the singular (with respect to the Lebesgue measure in R²) and absolutely continuous components of the distribution of X(t), respectively. In the next chapter we will show that

    T_{x,t} p_cont(x, t) = 0,    (4.10.23)

that is, the absolutely continuous part of the density solves the two-dimensional telegraph equation. Then (4.10.22) degenerates into the equation

    (1/λ) T_{x,t} [ 1/2 − ((1·1)/(2·4))(1/λ²) T_{x,t} + ((1·1·3)/(2·4·6))(1/λ⁴)(T_{x,t})² − ⋯ ] p_sing(x, t) = δ(x) δ(t).

Relation (4.10.23) can be re-obtained by means of the formulas derived above. From (4.10.9) it follows that the Laplace-Fourier transformation of an equation for the density p_cont(x, t) produces the equality

    ( √((s + λ)² + (c‖α‖)²) − λ ) L_t F_x[p_cont(x, t)](α, s) = 0,    Re s > 0.

Multiplying this equality by ( √((s + λ)² + (c‖α‖)²) + λ ), we arrive at the expression

    ( s² + 2λs + (c‖α‖)² ) L_t F_x[p_cont(x, t)](α, s) = 0,

corresponding to the telegraph equation (4.10.23):

    ( ∂²/∂t² + 2λ ∂/∂t − c²Δ ) p_cont(x, t) = 0.

Note that the reason for the exceptionality of the 2D case is clearly seen from expansion (4.10.16), every coefficient of which contains the factor (m − 2) that kills the expansion terms for m = 2. In this case the coefficients of the series expansion (4.10.7) are a₀ = 1 and aₙ = 0, n ≥ 1. Then the second parameter of the hypergeometric function on the left-hand side of (4.10.7) is zero and, therefore, the function itself is identically equal to 1.

4D Random Flight. In the four-dimensional case (m = 4), in view of (4.6.5), formula (4.10.14) yields:


    f(‖α‖, s) = (1/2)[ s + √((s + λ)² + (c‖α‖)²) − λ ]
      = s/2 + (1/2)[ λ √(1 + (1/λ²) T̃_{α,s}) − λ ]
      = s/2 + (1/(2λ))[ (1/2) T̃_{α,s} − ((1·1)/(2·4))(1/λ²)(T̃_{α,s})² + ((1·1·3)/(2·4·6))(1/λ⁴)(T̃_{α,s})³ − ⋯ ]

and, therefore, the transition density p(x, t), x ∈ R⁴, t ≥ 0, of the four-dimensional Markov random flight X(t) is the solution of the hyperparabolic equation

    { (1/2) ∂/∂t + (1/(2λ)) [ (1/2) T_{x,t} − ((1·1)/(2·4))(1/λ²)(T_{x,t})² + ((1·1·3)/(2·4·6))(1/λ⁴)(T_{x,t})³ − ⋯ ] } p(x, t) = δ(x) δ(t),    (4.10.24)

where

    T_{x,t} = ∂²/∂t² + 2λ ∂/∂t − c²Δ,    Δ = ∂²/∂x₁² + ∂²/∂x₂² + ∂²/∂x₃² + ∂²/∂x₄².

We see that (4.10.24) differs from (4.10.22) by the presence of the time derivative. This means that the hyperparabolic term in (4.10.24) cannot be eliminated and, therefore, the density of the four-dimensional Markov random flight is not the fundamental solution of a four-dimensional telegraph equation.
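The closed form used here for m = 4 rests on a known reduction of the hypergeometric factor, F(1/2, 1; 2; z) = 2(1 − √(1 − z))/z. A quick numeric confirmation with arbitrary sample values:

```python
from math import sqrt
from scipy.special import hyp2f1

# Check that for m = 4 formula (4.10.14) collapses to
# f = (1/2)(s + sqrt((s+λ)² + (c‖α‖)²) − λ).
lam, s, w = 3.0, 0.7, 2.0                 # w stands for c‖α‖; sample values
X = (s + lam) ** 2 + w ** 2
f_hyp = sqrt(X) / hyp2f1(0.5, 1.0, 2.0, w ** 2 / X) - lam
f_closed = 0.5 * (s + sqrt(X) - lam)
assert abs(f_hyp - f_closed) < 1e-10
```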

4.10.4 Convergence to the generator of Brownian motion

In this section we prove that, under the Kac condition (4.10.3), the hyperparabolic operator A_{x,t} converges to the generator of the multidimensional Brownian motion.

Theorem 4.10.2. Under the Kac condition (4.10.3) the following limiting relation holds:

    lim_{c,λ→∞, (c²/λ)→ρ} A_{x,t} = ∂/∂t − (ρ/m) Δ,    (4.10.25)

    Δ = ∂²/∂x₁² + ⋯ + ∂²/∂x_m²,    x = (x₁, …, x_m) ∈ R^m,    m ≥ 2.

Proof. The proof is based on evaluating the limit of the hyperparabolic operator A_{x,t} explicitly given by (4.10.6). Taking into account that

    c^{2n}/λ^{2n−1} → ρ, if n = 1;    c^{2n}/λ^{2n−1} → 0, if n ≥ 2,

we have for the first term of (4.10.6):

    lim_{c,λ→∞, (c²/λ)→ρ} [ Σ_{n=1}^∞ (−1)ⁿ aₙ λ^{−(2n−1)} (c²Δ)ⁿ ] = −a₁ ρ Δ.


From the series expansion (4.10.16) we have

    a₁ = −(m − 2)/(2m),    m ≥ 2.

Then we obtain

    lim_{c,λ→∞, (c²/λ)→ρ} [ Σ_{n=1}^∞ (−1)ⁿ aₙ λ^{−(2n−1)} (c²Δ)ⁿ ] = ((m − 2)/(2m)) ρ Δ.    (4.10.26)

for any n ≥ 1,

while the series in square brackets tends to a finite limit (in the topology of D), the second term on the right-hand side of (4.10.27) vanishes. Hence, (∞ "∞ #)  X X (−1)k n − 1  n k 2 k lim (−1)n an c2 ∆ λ−(2n+2k−1) (Tx,t ) c, λ→∞ k! n=0 k=1

(c2 /λ)→ρ

" =

lim

c, λ→∞ (c2 /λ)→ρ

∞ X

k=1

(−1)k − 12 k!

k

(4.10.28)

#

 k

λ−(2k−1) (Tx,t )

.

To evaluate the limit on the right-hand side of (4.10.28), we note that " k #  h i 2 2 ∂ ∂ c 1 k +2 − ∆ lim λ−(2k−1) (Tx,t ) = lim λ−k+1 c, λ→∞ c, λ→∞ λ ∂t2 ∂t λ (c2 /λ)→ρ

(c2 /λ)→ρ

(4.10.29)

 2 ∂ − ρ ∆, ∂t =  0,

if k = 1, if k ≥ 2.

Hence, the limit on the right-hand side of (4.10.28) is given by the formula: "∞ #      X (−1)k − 1 1 ∂ k 2 k −(2k−1) lim λ (Tx,t ) = − − 2 −ρ∆ . c, λ→∞ k! 2 1 ∂t (c2 /λ)→ρ

k=1

Since 

1 − 2

 1

Γ =− Γ

3 2 1 2



1 =− , 2

216

Markov Random Flights

we get " lim

c, λ→∞ (c2 /λ)→ρ

∞ X (−1)k − 21 k!

#

 k

λ

−(2k−1)

k

(Tx,t )

=

k=1

∂ ρ − ∆. ∂t 2

(4.10.30)

    (4.10.30)

Therefore, taking into account (4.10.26) and (4.10.30), we finally obtain

    lim_{c,λ→∞, (c²/λ)→ρ} A_{x,t} = ((m − 2)/(2m)) ρΔ + ∂/∂t − (ρ/2) Δ = ∂/∂t − (ρ/m) Δ,

yielding (4.10.25). The theorem is thus completely proved.

The operator on the right-hand side of (4.10.25) is the generator of the m-dimensional homogeneous Brownian motion with zero drift and diffusion coefficient 2ρ/m.

Remark 4.10.1. The analysis above enables us to draw some important conclusions. The telegraph equation governs a random walk model only in one dimension and, almost by chance, in two dimensions. In higher dimensions the propagator can be represented as an infinite series involving the powers of the telegraph operator (the hyperparabolic operator). This means that, for dimensions greater than two, the telegraph equation is valid only in an asymptotic sense. Therefore, a negative answer is given to the question (Q) concerning the possibility of describing the multidimensional Markov random flight by the telegraph equation. In the Kac limit, the governing hyperparabolic operator transforms into the generator of the multidimensional Brownian motion, and this limiting behaviour is valid in any dimension.

4.11 Random flight with arbitrary dissipation function

The uniform choice of the initial and all new directions at each random Poissonian time instant is a very important feature of the symmetric Markov random flight X(t) studied in the previous sections. This key property provides the absolute spatial symmetry of the process X(t). The symmetric structure of X(t) is clearly seen from the form of its characteristic functions Hₙ(t) and H(t), which contain the inversion multiparameter α = (α₁, …, α_m) only through the symmetric functional ‖α‖ = √(α₁² + ⋯ + α_m²). The surprising and amazing fact is that most of the results obtained for the symmetric random flight are also valid for the Markov random flight with an arbitrary dissipation function.

Suppose that both the initial and every new direction are taken according to some arbitrary probability distribution on the unit sphere S₁^m. Let χ(x), x ∈ S₁^m, denote the density of this distribution, assumed to exist. Let Z(t) = (Z₁(t), …, Z_m(t)) be the particle's position in the space R^m, m ≥ 2, at arbitrary time instant t > 0. Consider the conditional characteristic functions (Fourier transforms)

    G̃ₙ(t) ≡ G̃ₙ(α, t) = E{ e^{i⟨α,Z(t)⟩} | N(t) = n },    n ≥ 1,    (4.11.1)

where α = (α₁, …, α_m) ∈ R^m is the real m-dimensional vector of inversion parameters.


Similarly to the case of the symmetric random flight, functions (4.11.1) can be represented in the form:

    G̃ₙ(t) = (n!/tⁿ) ∫₀ᵗ dτ₁ ∫_{τ₁}ᵗ dτ₂ ⋯ ∫_{τ_{n−1}}ᵗ dτₙ Π_{j=1}^{n+1} [ ∫_{S₁^m} e^{ic(τⱼ−τ_{j−1})⟨α,xʲ⟩} χ(xʲ) ν(dxʲ) ],    (4.11.2)

where ν(·) is the Lebesgue measure on the surface of the sphere S₁^m. Introducing the function

    ψ(t) = ∫_{S₁^m} e^{ict⟨α,x⟩} χ(x) ν(dx),    (4.11.3)

we can rewrite (4.11.2) as follows:

    G̃ₙ(t) = (n!/tⁿ) ∫₀ᵗ dτ₁ ∫_{τ₁}ᵗ dτ₂ ⋯ ∫_{τ_{n−1}}ᵗ dτₙ Π_{j=1}^{n+1} ψ(τⱼ − τ_{j−1}),    n ≥ 1.    (4.11.4)

Denote the integral factor in (4.11.4) by

    Jₙ(t) = ∫₀ᵗ dτ₁ ∫_{τ₁}ᵗ dτ₂ ⋯ ∫_{τ_{n−1}}ᵗ dτₙ Π_{j=1}^{n+1} ψ(τⱼ − τ_{j−1}),    n ≥ 1.    (4.11.5)

Function (4.11.4) has a quite definite probabilistic sense, namely,

    Gₙ(t) = F_x[pₙ(x, t)](α) = ((λt)ⁿ e^{−λt}/n!) G̃ₙ(t) = λⁿ e^{−λt} Jₙ(t),    α ∈ R^m,  n ≥ 1,  t > 0,    (4.11.6)

is the characteristic function (Fourier transform) of the joint probability density pₙ(x, t) of the particle's position at time instant t and of the number N(t) = n of the Poisson events that have occurred by this time moment t. Note also that function ψ(t) given by (4.11.3) is the characteristic function of the density χ(x) on the surface of the sphere S^m_{ct} of radius ct.

The following counterparts of Theorem 4.2.1 and its corollaries take place.

Theorem 4.11.1. For any n ≥ 1, the following recurrent relation holds:

    Jₙ(t) = ∫₀ᵗ ψ(t − τ) J_{n−1}(τ) dτ = ∫₀ᵗ ψ(τ) J_{n−1}(t − τ) dτ,    n ≥ 1,    (4.11.7)

where, by definition, J0 (x) = ψ(x). Formula (4.11.7) can be represented in the form of convolution: Jn (t) = ψ(t) ∗ Jn−1 (t),

n ≥ 1.

(4.11.8)

Corollary 4.11.1. For any n ≥ 1, the following relation holds:

$$J_n(t) = \bigl[\psi(t)\bigr]^{*(n+1)}, \qquad n \ge 1, \tag{4.11.9}$$

where the symbol ∗(n+1) means the (n+1)-multiple convolution in the time variable.

Corollary 4.11.2. For any n ≥ 1, the Laplace transforms of functions (4.11.5) are given by the formula

$$\mathcal{L}\bigl[J_n(t)\bigr](s) = \Bigl(\mathcal{L}\bigl[\psi(t)\bigr](s)\Bigr)^{n+1}, \qquad n \ge 1, \quad \operatorname{Re} s > 0. \tag{4.11.10}$$
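Relation (4.11.9) can be checked numerically by iterating the convolution on a time grid. The toy kernel ψ(t) = e^{−at} below is an assumption chosen purely because its convolution powers have the closed form tⁿe^{−at}/n!; it is not the ψ arising from any actual dissipation density.

```python
import math
import numpy as np

# Toy kernel psi(t) = exp(-a*t) (hypothetical, for illustration only): its
# (n+1)-fold self-convolution is t^n * exp(-a*t)/n!, which lets us check the
# recursion J_n = psi * J_{n-1} of Theorem 4.11.1 and formula (4.11.9).
a, T, N = 2.0, 3.0, 3000
dt = T / N
t = np.linspace(0.0, T, N, endpoint=False)
psi = np.exp(-a * t)

errors = []
J = psi.copy()                        # J_0 = psi
for n in range(1, 4):                 # J_1, J_2, J_3 via J_n = psi * J_{n-1}
    J = np.convolve(psi, J)[:N] * dt  # discrete convolution on the grid
    exact = t**n * np.exp(-a * t) / math.factorial(n)
    errors.append(np.max(np.abs(J - exact)))
print(errors)  # O(dt) discretization error
```

The discrete convolution converges to the exact convolution power as the grid is refined, mirroring the analytic statement of Corollary 4.11.1.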


Corollary 4.11.3. For any n ≥ 1, the conditional characteristic functions (4.11.4) satisfy the following recurrent relation:

$$\tilde{G}_n(t) = \frac{n}{t^n}\int_0^t \tau^{n-1}\,\psi(t-\tau)\,\tilde{G}_{n-1}(\tau)\,d\tau, \qquad n \ge 1, \tag{4.11.11}$$

where G̃_0(t) = ψ(t).

The proofs of Theorem 4.11.1 and Corollaries 4.11.1, 4.11.2 and 4.11.3 are simple recompilations of the proofs of Theorem 4.2.1 and Corollaries 4.2.1, 4.2.2 and 4.2.3, respectively, in which the function φ(t) is everywhere replaced by the function ψ(t); they are therefore omitted.

The characteristic function of the process Z(t) is given by the uniformly converging series

$$G(t) = \mathrm{E}\left\{ e^{i\langle \alpha, Z(t)\rangle}\right\} = e^{-\lambda t}\sum_{n=0}^{\infty}\frac{(\lambda t)^n}{n!}\,\tilde{G}_n(t) = e^{-\lambda t}\sum_{n=0}^{\infty}\lambda^n J_n(t) \tag{4.11.12}$$

and, similarly to the symmetric case, it also satisfies a Volterra integral equation. This result is given by the following theorem.

Theorem 4.11.2. The characteristic function G(t), t ≥ 0, satisfies the Volterra integral equation of the second kind with continuous kernel:

$$G(t) = e^{-\lambda t}\psi(t) + \lambda\int_0^t e^{-\lambda(t-\tau)}\,\psi(t-\tau)\,G(\tau)\,d\tau, \qquad t \ge 0. \tag{4.11.13}$$

In the class of continuous functions, integral equation (4.11.13) has the unique solution given by the uniformly converging series

$$G(t) = e^{-\lambda t}\sum_{n=0}^{\infty}\lambda^n\bigl[\psi(t)\bigr]^{*(n+1)}. \tag{4.11.14}$$

The proof of Theorem 4.11.2 is similar to that of Theorem 4.5.1, with the formal replacement of the function φ(t) by the function ψ(t), and is left to the reader.

Integral equation (4.11.13) can be rewritten in the form of a convolution:

$$G(t) = e^{-\lambda t}\psi(t) + \lambda\left[\bigl(e^{-\lambda t}\psi(t)\bigr) * G(t)\right], \qquad t \ge 0. \tag{4.11.15}$$

From (4.11.15), we obtain the general formula for the Laplace transform of the function G(t):

$$\mathcal{L}\bigl[G(t)\bigr](s) = \frac{\mathcal{L}\bigl[\psi(t)\bigr](s+\lambda)}{1 - \lambda\,\mathcal{L}\bigl[\psi(t)\bigr](s+\lambda)}, \qquad \operatorname{Re} s > 0. \tag{4.11.16}$$
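For the same toy kernel ψ(t) = e^{−at} (a hypothetical choice, not arising from a dissipation density), L[ψ](s) = 1/(s+a) and the series (4.11.14) sums in closed form to G(t) = e^{−at}, so formula (4.11.16) can be verified exactly:

```python
# Toy kernel psi(t) = exp(-a*t) (an assumption for illustration): then
# L[psi](s) = 1/(s + a), and the series (4.11.14) gives
# G(t) = exp(-lam*t) * sum_n lam^n t^n exp(-a*t)/n! = exp(-a*t),
# so L[G](s) must equal 1/(s + a).  Formula (4.11.16) reproduces this.
a, lam = 1.3, 0.8

def L_psi(s):
    return 1.0 / (s + a)

diffs = []
for s in (0.5, 1.0, 2.0):
    rhs = L_psi(s + lam) / (1.0 - lam * L_psi(s + lam))  # formula (4.11.16)
    lhs = 1.0 / (s + a)                                  # L[exp(-a*t)](s)
    diffs.append(abs(lhs - rhs))
print(diffs)  # ~ 0: the identity is exact for this kernel
```

The agreement is exact (not merely numerical), since (4.11.16) here reduces to the algebraic identity (s+λ+a)⁻¹ / (1 − λ(s+λ+a)⁻¹) = (s+a)⁻¹.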

One can show that the characteristic function of the absolutely continuous component of the distribution of the Markov random flight Z(t), t > 0, determined as

$$\hat{G}(t) = e^{-\lambda t}\sum_{n=1}^{\infty}\lambda^n J_n(t),$$

satisfies the Volterra integral equation

$$\hat{G}(t) = \lambda e^{-\lambda t} J_1(t) + \lambda\int_0^t e^{-\lambda(t-\tau)}\,\psi(t-\tau)\,\hat{G}(\tau)\,d\tau, \qquad t > 0, \tag{4.11.17}$$

where

$$J_1(t) = \psi(t) * \psi(t) = \int_0^t \psi(\tau)\,\psi(t-\tau)\,d\tau.$$

Equation (4.11.17) can be rewritten in the convolutional form:

$$\hat{G}(t) = \lambda e^{-\lambda t}\bigl[\psi(t)*\psi(t)\bigr] + \lambda\left[\bigl(e^{-\lambda t}\psi(t)\bigr) * \hat{G}(t)\right], \qquad t > 0. \tag{4.11.18}$$

Therefore, the Laplace transform of the function Ĝ(t) is given by

$$\mathcal{L}\bigl[\hat{G}(t)\bigr](s) = \frac{\lambda\Bigl(\mathcal{L}\bigl[\psi(t)\bigr](s+\lambda)\Bigr)^2}{1 - \lambda\,\mathcal{L}\bigl[\psi(t)\bigr](s+\lambda)}, \qquad \operatorname{Re} s > 0. \tag{4.11.19}$$

One can easily check that the function

$$\hat{G}(t) = e^{-\lambda t}\sum_{n=1}^{\infty}\lambda^n\bigl[\psi(t)\bigr]^{*(n+1)} \tag{4.11.20}$$

is the unique continuous solution to equation (4.11.18).

4.12 Integral equation for transition density

The general formulas presented in the previous section, related to the Markov random flight with an arbitrary dissipation function, enable us to study a generalization of the symmetric motion and to obtain general relations for the transition density of the process.

4.12.1 Description of process and the structure of distribution

The Markov random flight Z(t) = (Z_1(t), ..., Z_m(t)) with an arbitrary dissipation function differs from its symmetric counterpart only slightly. It is performed by a particle that, at the initial time instant t = 0, starts from the origin 0 = (0, ..., 0) of the Euclidean space R^m, m ≥ 2, and moves with some constant speed c. The initial direction is a random m-dimensional vector with an arbitrary distribution (which, we remind the reader, is called the dissipation function) on the unit sphere

$$S^m(0,1) = \left\{ x = (x_1, \dots, x_m) \in \mathbb{R}^m : \|x\|^2 = x_1^2 + \dots + x_m^2 = 1 \right\}$$

having the absolutely continuous bounded density χ(x), x ∈ S^m(0,1). The motion is controlled by a homogeneous Poisson process N(t) of rate λ > 0, as in the symmetric case, but the new random directions are taken on the sphere S^m(0,1) according to the probability law with the same density χ(x), x ∈ S^m(0,1).

Similarly to the symmetric case, at an arbitrary time instant t > 0 the Markov random flight Z(t) is, with probability 1, concentrated in the closed m-dimensional ball of radius ct centred at the origin 0:

$$\mathbf{B}^m(0,ct) = \left\{ x = (x_1, \dots, x_m) \in \mathbb{R}^m : \|x\|^2 = x_1^2 + \dots + x_m^2 \le c^2 t^2 \right\}.$$

Consider the probability distribution function

$$\Phi(x,t) = \Pr\{Z(t) \in dx\}, \qquad x \in \mathbf{B}^m(0,ct), \quad t \ge 0,$$


where dx is the infinitesimal element of the space R^m with Lebesgue measure μ(dx) = dx_1 ... dx_m.

For arbitrary fixed t > 0, the distribution Φ(x,t) consists of a singular and an absolutely continuous component, concentrated on the surface of the sphere

$$S^m(0,ct) = \partial\mathbf{B}^m(0,ct) = \left\{ x = (x_1, \dots, x_m) \in \mathbb{R}^m : \|x\|^2 = c^2 t^2 \right\}$$

and in the interior

$$\operatorname{int}\mathbf{B}^m(0,ct) = \left\{ x = (x_1, \dots, x_m) \in \mathbb{R}^m : \|x\|^2 < c^2 t^2 \right\}$$

of the ball B^m(0,ct), respectively.

Let p(x,t) = p(x_1, ..., x_m, t), x ∈ B^m(0,ct), t > 0, be the density of the distribution Φ(x,t). Similarly to the symmetric case, it has the form

$$p(x,t) = p^{(s)}(x,t) + p^{(ac)}(x,t), \qquad x \in \mathbf{B}^m(0,ct), \quad t > 0, \tag{4.12.1}$$

where p^{(s)}(x,t) is the density (in the sense of generalized functions) of the singular component of Φ(x,t) concentrated on S^m(0,ct), and p^{(ac)}(x,t) is the density of the absolutely continuous component of Φ(x,t) concentrated in int B^m(0,ct).

The density χ(x), x ∈ S^m(0,1), on the unit sphere S^m(0,1) generates the absolutely continuous and bounded (in x, for any fixed t) density ϱ(x,t), x ∈ S^m(0,ct), on the sphere S^m(0,ct) of radius ct according to the formula

$$\varrho(x,t) = \frac{1}{(ct)^{m-1}}\,\chi\!\left(\frac{x}{ct}\right), \qquad x \in S^m(0,ct), \quad t > 0.$$

Therefore, the singular part of density (4.12.1) has the form

$$p^{(s)}(x,t) = e^{-\lambda t}\,\varrho(x,t)\,\delta(c^2t^2 - \|x\|^2), \qquad t > 0, \tag{4.12.2}$$

where δ(x) is the Dirac delta-function. The absolutely continuous part of density (4.12.1) has the form

$$p^{(ac)}(x,t) = f^{(ac)}(x,t)\,\Theta(ct - \|x\|), \qquad t > 0, \tag{4.12.3}$$

where f^{(ac)}(x,t) is some positive function absolutely continuous in int B^m(0,ct) and Θ(x) is the Heaviside unit-step function.
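The singular/absolutely continuous decomposition (4.12.1)–(4.12.3) is visible directly in simulation. The sketch below simulates the planar flight (m = 2) with a von Mises dissipation density; the rate, speed and concentration parameter are illustrative assumptions. The fraction of sample paths still on the circumference of radius ct should be close to e^{−λt}, the weight of the singular component.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_Z(t, lam, c, kappa, n_paths):
    """Simulate the planar random flight whose directions are drawn from a
    von Mises dissipation density (kappa = 0 would give the symmetric case).
    Returns the positions Z(t), shape (n_paths, 2)."""
    Z = np.zeros((n_paths, 2))
    for i in range(n_paths):
        s, theta = 0.0, rng.vonmises(0.0, kappa)
        while True:
            tau = rng.exponential(1.0 / lam)      # time to next Poisson event
            step = min(tau, t - s)
            Z[i] += c * step * np.array([np.cos(theta), np.sin(theta)])
            s += step
            if s >= t:
                break
            theta = rng.vonmises(0.0, kappa)      # new random direction
    return Z

t, lam, c = 1.0, 1.5, 1.0
Z = sample_Z(t, lam, c, kappa=2.0, n_paths=20_000)
r = np.linalg.norm(Z, axis=1)
frac_boundary = np.mean(r > c * t - 1e-9)
print(r.max() <= c * t + 1e-9)            # support: closed ball B(0, ct)
print(frac_boundary, np.exp(-lam * t))    # singular mass ~ exp(-lam*t)
```

Paths with no Poisson event stay on the sphere S(0, ct) (the singular part), while every path with at least one change of direction lands strictly inside the ball.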

4.12.2 Recurrent relations

Consider the joint probability densities p_n(x,t), n ≥ 0, x ∈ B^m(0,ct), t > 0, of the particle's position Z(t) at time instant t > 0 and of the number of Poisson events {N(t) = n} that have occurred by this instant t. For n = 0 we have

$$p_0(x,t) \equiv p^{(s)}(x,t) = e^{-\lambda t}\,\varrho(x,t)\,\delta(c^2t^2 - \|x\|^2), \qquad t > 0, \tag{4.12.4}$$

where, we remind, p^{(s)}(x,t) is the singular part of density (4.12.1) concentrated on the surface of the sphere S^m(0,ct) = ∂B^m(0,ct) and given by (4.12.2). If n ≥ 1 then, according to (4.12.3), the joint densities p_n(x,t) have the form

$$p_n(x,t) = f_n(x,t)\,\Theta(ct - \|x\|), \qquad n \ge 1, \quad t > 0, \tag{4.12.5}$$

where f_n(x,t), n ≥ 1, are some positive functions absolutely continuous in int B^m(0,ct) and Θ(x) is the Heaviside unit-step function.

The joint density p_{n+1}(x,t) can be expressed through the previous one, p_n(x,t), by means of a recurrent relation. This result is given by the following theorem.


Theorem 4.12.1. The joint densities p_n(x,t), n ≥ 1, are connected with each other by the following recurrent relation:

$$p_{n+1}(x,t) = \lambda\int_0^t \left[ p_0(x,t-\tau) \overset{x}{*} p_n(x,\tau)\right] d\tau, \qquad n \ge 1, \quad x \in \operatorname{int}\mathbf{B}^m(0,ct), \quad t > 0. \tag{4.12.6}$$

Proof. Applying the Fourier transformation to the right-hand side of (4.12.6), we have:

$$\begin{aligned}
\mathcal{F}_x&\left[\lambda\int_0^t \left[ p_0(x,t-\tau) \overset{x}{*} p_n(x,\tau)\right] d\tau\right](\alpha)\\
&= \lambda\int_0^t \mathcal{F}_x\left[ p_0(x,t-\tau) \overset{x}{*} p_n(x,\tau)\right](\alpha)\,d\tau\\
&= \lambda\int_0^t \mathcal{F}_x\bigl[ p_0(x,t-\tau)\bigr](\alpha)\;\mathcal{F}_x\bigl[ p_n(x,\tau)\bigr](\alpha)\,d\tau\\
&= \lambda\int_0^t e^{-\lambda(t-\tau)}\,\mathcal{F}_x\bigl[\varrho(x,t-\tau)\,\delta(c^2(t-\tau)^2 - \|x\|^2)\bigr](\alpha)\;\mathcal{F}_x\bigl[ p_n(x,\tau)\bigr](\alpha)\,d\tau\\
&= \lambda\int_0^t e^{-\lambda(t-\tau)}\,\psi(\alpha,t-\tau)\;\lambda^n e^{-\lambda\tau} J_n(\alpha,\tau)\,d\tau\\
&= \lambda^{n+1} e^{-\lambda t}\int_0^t \psi(\alpha,t-\tau)\,J_n(\alpha,\tau)\,d\tau\\
&= \lambda^{n+1} e^{-\lambda t} J_{n+1}(\alpha,t)\\
&= \mathcal{F}_x\bigl[ p_{n+1}(x,t)\bigr](\alpha),
\end{aligned} \tag{4.12.7}$$

where we have used relations (4.11.3), (4.11.6) and (4.11.7). Thus, the functions on the left- and right-hand sides of (4.12.6) have the same Fourier transform and, therefore, they coincide.

The change of the order of integration in the first step of (4.12.7) is justified because the convolution of the singular part p_0(x,t−τ) of the density with the absolutely continuous one p_n(x,τ), n ≥ 1, is an absolutely continuous (and, therefore, uniformly bounded in x) function. From this fact it follows that, for any n ≥ 1, the integral in square brackets on the left-hand side of (4.12.7) converges uniformly in x for any fixed t. The theorem is proved.

Remark 4.12.1. In view of (4.12.2) and (4.12.3), formula (4.12.6) can be represented in the following expanded form:

$$p_{n+1}(x,t) = \lambda\int_0^t e^{-\lambda(t-\tau)}\left[\int \varrho(x-\xi,t-\tau)\,\delta\bigl(c^2(t-\tau)^2 - \|x-\xi\|^2\bigr)\, f_n(\xi,\tau)\,\Theta(c\tau - \|\xi\|)\,\nu(d\xi)\right] d\tau, \tag{4.12.8}$$

for n ≥ 1, x ∈ int B^m(0,ct), t > 0, where the function f_n(ξ,τ) is absolutely continuous in the variable ξ = (ξ_1, ..., ξ_m) ∈ R^m and ν(dξ) is the Lebesgue measure. The integration area of the interior integral on the right-hand side of (4.12.8) is given by the system

$$\xi \in \mathbb{R}^m: \qquad \begin{cases} \|x-\xi\|^2 = c^2(t-\tau)^2, \\ \|\xi\| < c\tau. \end{cases}$$

The first relation of this system determines a sphere S^m(x, c(t−τ)) of radius c(t−τ) centred at the point x, while the second one represents an open ball int B^m(0, cτ) of radius cτ centred at the origin 0. Their intersection

$$M(x,\tau) = S^m(x, c(t-\tau)) \cap \operatorname{int}\mathbf{B}^m(0, c\tau), \tag{4.12.9}$$

which is a part of (or the whole) surface of the sphere S^m(x, c(t−τ)) located inside the ball B^m(0, cτ), represents the integration area of dimension m − 1 in the interior integral of (4.12.8).

Note that the sum of the radii of S^m(x, c(t−τ)) and int B^m(0, cτ) is c(t−τ) + cτ = ct > ‖x‖, that is, greater than the distance ‖x‖ between their centres 0 and x. This fact, as well as some simple geometric reasoning, shows that intersection (4.12.9) depends on τ ∈ (0,t) as follows:

• If τ ∈ (0, t/2 − ‖x‖/(2c)], then intersection (4.12.9) is empty, that is, M(x,τ) = ∅.

• If τ ∈ (t/2 − ‖x‖/(2c), t/2 + ‖x‖/(2c)], then the intersection M(x,τ) is not empty and represents some hypersurface of dimension m − 1.

• If τ ∈ (t/2 + ‖x‖/(2c), t), then S^m(x, c(t−τ)) ⊂ int B^m(0, cτ) and, therefore, in this case M(x,τ) = S^m(x, c(t−τ)).

Thus, formula (4.12.8), as well as (4.12.6), can be rewritten in the expanded form:

$$\begin{aligned}
p_{n+1}(x,t) ={}& \lambda\int_{\frac t2 - \frac{\|x\|}{2c}}^{\frac t2 + \frac{\|x\|}{2c}} e^{-\lambda(t-\tau)}\left\{\int_{M(x,\tau)} \varrho(x-\xi,t-\tau)\,f_n(\xi,\tau)\,\nu(d\xi)\right\} d\tau\\
&+ \lambda\int_{\frac t2 + \frac{\|x\|}{2c}}^{t} e^{-\lambda(t-\tau)}\left\{\int_{S^m(x,c(t-\tau))} \varrho(x-\xi,t-\tau)\,f_n(\xi,\tau)\,\nu(d\xi)\right\} d\tau
\end{aligned} \tag{4.12.10}$$
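The three cases for the intersection M(x, τ) listed above are easy to confirm numerically in the plane. The sketch below classifies M(x, τ) by sampling points of the circle S(x, c(t−τ)) and testing whether they fall inside the open disc of radius cτ; the point x and the parameter values are arbitrary illustrative choices.

```python
import numpy as np

def intersection_case(x, tau, t, c):
    """Classify M(x, tau) = S(x, c(t-tau)) ∩ int B(0, c*tau) in the plane
    by sampling the circle S(x, c(t-tau)) -- a numerical sketch of the
    three geometric cases described in the text."""
    phi = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
    pts = x[None, :] + c * (t - tau) * np.stack([np.cos(phi), np.sin(phi)], axis=1)
    inside = np.linalg.norm(pts, axis=1) < c * tau
    if not inside.any():
        return "empty"
    if inside.all():
        return "whole sphere"
    return "partial"

t, c = 1.0, 1.0
x = np.array([0.3, 0.2])
r = np.linalg.norm(x)
lo, hi = t / 2 - r / (2 * c), t / 2 + r / (2 * c)  # case thresholds
print(intersection_case(x, 0.9 * lo, t, c))        # -> empty
print(intersection_case(x, (lo + hi) / 2, t, c))   # -> partial
print(intersection_case(x, (hi + t) / 2, t, c))    # -> whole sphere
```

The thresholds t/2 ± ‖x‖/(2c) computed in the code are exactly the endpoints of the three τ-intervals in the case analysis above.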

and the expressions in curly brackets of (4.12.10) represent surface integrals over M(x,τ) and S^m(x, c(t−τ)), respectively.

Remark 4.12.2. By means of the double convolution of two arbitrary generalized functions g_1(x,t), g_2(x,t) ∈ S', x ∈ R^m, t > 0,

$$g_1(x,t) \overset{x\,t}{*\,*}\, g_2(x,t) = \int_0^t\!\!\int_{\mathbb{R}^m} g_1(\xi,\tau)\,g_2(x-\xi, t-\tau)\,d\xi\,d\tau, \tag{4.12.11}$$

formula (4.12.6) can be represented in the succinct convolutional form

$$p_{n+1}(x,t) = \lambda\left[ p_0(x,t) \overset{x\,t}{*\,*}\, p_n(x,t)\right], \qquad n \ge 1. \tag{4.12.12}$$

Taking into account the well-known connections between joint and conditional densities, we can extract from Theorem 4.12.1 a convolution-type recurrent relation for the conditional probability densities p̃_n(x,t), n ≥ 1.

Corollary 4.12.1. The conditional densities p̃_n(x,t), n ≥ 1, are connected with each other by the following recurrent relation:

$$\tilde{p}_{n+1}(x,t) = \frac{n+1}{t^{n+1}}\int_0^t \tau^n\left[\tilde{p}_0(x,t-\tau) \overset{x}{*} \tilde{p}_n(x,\tau)\right] d\tau, \qquad n \ge 1, \quad x \in \operatorname{int}\mathbf{B}^m(0,ct), \quad t > 0, \tag{4.12.13}$$

where p̃_0(x,t) = ϱ(x,t) δ(c²t² − ‖x‖²) is the conditional density corresponding to the case when no Poisson events occur before time instant t.


Proof. The proof immediately follows from Theorem 4.12.1 and the recurrent formula (4.11.11).

Remark 4.12.3. Formulas (4.12.6) and (4.12.13) are also valid for n = 0. In this case, for arbitrary t > 0, they take the form:

$$p_1(x,t) = \lambda\int_0^t \left[ p_0(x,t-\tau) \overset{x}{*} p_0(x,\tau)\right] d\tau, \tag{4.12.14}$$

$$\tilde{p}_1(x,t) = \frac1t\int_0^t \left[\tilde{p}_0(x,t-\tau) \overset{x}{*} \tilde{p}_0(x,\tau)\right] d\tau, \tag{4.12.15}$$

where, we remind, the function p_0(x,t) defined by (4.12.4) is the singular part of the density concentrated on the surface of the sphere S^m(0,ct). The derivation of (4.12.14) is a simple recompilation of the proof of Theorem 4.12.1, in which one should take into account the boundedness of the convolution in (4.12.14), which justifies the change of the order of integration in (4.12.7). Formula (4.12.15) follows from (4.12.14).

4.12.3 Integral equation

The transition probability density p(x,t) of the multidimensional Markov random flight Z(t) is defined by the formula

$$p(x,t) = \sum_{n=0}^{\infty} p_n(x,t), \qquad x \in \mathbf{B}^m(0,ct), \quad t > 0, \tag{4.12.16}$$

where the joint densities p_n(x,t), n ≥ 0, are given by (4.12.4) and (4.12.5). Density (4.12.16) is defined everywhere in the ball B^m(0,ct), while the function

$$p^{(ac)}(x,t) = \sum_{n=1}^{\infty} p_n(x,t) \tag{4.12.17}$$

forms its absolutely continuous part, concentrated in the interior int B^m(0,ct) of the ball. Series (4.12.17) converges uniformly everywhere in the closed ball B^m(0, ct−ε) for arbitrary small ε > 0.

In the following theorem we present an integral equation for density (4.12.16).

Theorem 4.12.2. The transition probability density p(x,t) of the Markov random flight Z(t) satisfies the integral equation

$$p(x,t) = p_0(x,t) + \lambda\int_0^t \left[ p_0(x,t-\tau) \overset{x}{*} p(x,\tau)\right] d\tau, \qquad x \in \mathbf{B}^m(0,ct), \quad t > 0. \tag{4.12.18}$$

In the class of functions with compact support, integral equation (4.12.18) has the unique solution given by the series

$$p(x,t) = \sum_{n=0}^{\infty}\lambda^n\bigl[p_0(x,t)\bigr]^{\overset{x\,t}{*\,*}(n+1)}, \tag{4.12.19}$$

where the symbol ∗∗(n+1) means the (n+1)-multiple double convolution with respect to the spatial and time variables defined by (4.12.11), that is,

$$\bigl[p_0(x,t)\bigr]^{\overset{x\,t}{*\,*}(n+1)} = \underbrace{p_0(x,t) \overset{x\,t}{*\,*}\, p_0(x,t) \overset{x\,t}{*\,*} \cdots \overset{x\,t}{*\,*}\, p_0(x,t)}_{(n+1)\ \text{terms}}.$$


Series (4.12.19) converges everywhere in the open ball int B^m(0,ct). For any small ε > 0, series (4.12.19) converges uniformly (in x, for any fixed t > 0) in the closed ball B^m(0, ct−ε) and, therefore, it uniquely determines the density p(x,t), which is continuous and bounded in this ball.

Proof. Applying Theorem 4.12.1 and taking into account the uniform convergence of series (4.12.17) and of the integral in formula (4.12.6), we have:

$$\begin{aligned}
p(x,t) &= \sum_{n=0}^{\infty} p_n(x,t)\\
&= p_0(x,t) + \sum_{n=1}^{\infty} p_n(x,t)\\
&= p_0(x,t) + \lambda\sum_{n=1}^{\infty}\int_0^t \left[ p_0(x,t-\tau) \overset{x}{*} p_{n-1}(x,\tau)\right] d\tau\\
&= p_0(x,t) + \lambda\int_0^t \sum_{n=1}^{\infty}\left[ p_0(x,t-\tau) \overset{x}{*} p_{n-1}(x,\tau)\right] d\tau\\
&= p_0(x,t) + \lambda\int_0^t \left[ p_0(x,t-\tau) \overset{x}{*} \left\{\sum_{n=1}^{\infty} p_{n-1}(x,\tau)\right\}\right] d\tau\\
&= p_0(x,t) + \lambda\int_0^t \left[ p_0(x,t-\tau) \overset{x}{*} \left\{\sum_{n=0}^{\infty} p_n(x,\tau)\right\}\right] d\tau\\
&= p_0(x,t) + \lambda\int_0^t \left[ p_0(x,t-\tau) \overset{x}{*} p(x,\tau)\right] d\tau,
\end{aligned}$$

proving (4.12.18). Another way of proving the theorem is to apply the Fourier transformation to both sides of (4.12.18); justifying the change of the order of integration as was done in (4.12.7), we arrive at the Volterra integral equation (4.11.13) for the Fourier transforms.

Using notation (4.12.11), equation (4.12.18) can be represented in the convolutional form

$$p(x,t) = p_0(x,t) + \lambda\left[ p_0(x,t) \overset{x\,t}{*\,*}\, p(x,t)\right], \qquad x \in \mathbf{B}^m(0,ct), \quad t > 0. \tag{4.12.20}$$

Let us check that series (4.12.19) satisfies equation (4.12.20). Substituting (4.12.19) into the right-hand side of (4.12.20), we have:

$$\begin{aligned}
p_0(x,t) + \lambda\left[ p_0(x,t) \overset{x\,t}{*\,*}\sum_{n=0}^{\infty}\lambda^n\bigl[p_0(x,t)\bigr]^{\overset{x\,t}{*\,*}(n+1)}\right]
&= p_0(x,t) + \sum_{n=0}^{\infty}\lambda^{n+1}\bigl[p_0(x,t)\bigr]^{\overset{x\,t}{*\,*}(n+2)}\\
&= p_0(x,t) + \sum_{n=1}^{\infty}\lambda^{n}\bigl[p_0(x,t)\bigr]^{\overset{x\,t}{*\,*}(n+1)}\\
&= \sum_{n=0}^{\infty}\lambda^{n}\bigl[p_0(x,t)\bigr]^{\overset{x\,t}{*\,*}(n+1)}\\
&= p(x,t),
\end{aligned}$$

and, therefore, series (4.12.19) is indeed the solution to equation (4.12.20).

Note that, by applying the Fourier transformation to (4.12.18) and (4.12.19) and taking into account (4.12.4), we arrive at the known results (4.11.15) and (4.11.14), respectively. The uniqueness of solution (4.12.19) in the class of functions with compact support follows


from the uniqueness of the solution of the Volterra integral equation (4.11.13) for its Fourier transform (4.11.6) (i.e. the characteristic function) in the class of continuous functions. Since the transition density p(x,t) is absolutely continuous in the open ball int B^m(0,ct), for any ε > 0 it is continuous and uniformly bounded in the closed ball B^m(0, ct−ε). From this fact, and taking into account the uniqueness of the solution of integral equation (4.12.18) in the class of functions with compact support, we conclude that series (4.12.19) converges uniformly in B^m(0, ct−ε) for any small ε > 0. This completes the proof.

4.12.4 Some particular cases

In this subsection we consider two important particular cases of the general Markov random flight studied in the previous subsections: when the dissipation function is the uniform distribution on the unit sphere S^m(0,1), and when it is the circular Gaussian law on the unit circumference S²(0,1).

Symmetric random flight. Suppose that the initial and every new direction are chosen according to the uniform distribution on the unit sphere S^m(0,1). In this symmetric case the function ϱ(x,t) is the density of the uniform distribution on the surface of the sphere S^m(0,ct) and, therefore, it does not depend on the spatial variable x. Then, according to (4.1.5), the singular part of the transition density of the symmetric Markov random flight X(t) takes the form:

$$p^{(s)}(x,t) = e^{-\lambda t}\,\frac{\Gamma\!\left(\frac m2\right)}{2\pi^{m/2}(ct)^{m-1}}\,\delta(c^2t^2 - \|x\|^2), \qquad m \ge 2, \quad t > 0. \tag{4.12.21}$$

Therefore, according to Theorem 4.12.1, in arbitrary dimension m ≥ 2 the joint probability densities p_n(x,t), n ≥ 1, of X(t) are connected with each other by the following recurrent relation:

$$p_{n+1}(x,t) = \frac{\lambda\,\Gamma\!\left(\frac m2\right)}{2\pi^{m/2}c^{m-1}}\int_0^t \frac{e^{-\lambda(t-\tau)}}{(t-\tau)^{m-1}}\left\{\int_{M(x,\tau)} p_n(\xi,\tau)\,d\xi\right\} d\tau, \tag{4.12.22}$$

for x = (x_1, ..., x_m) ∈ int B^m(0,ct), m ≥ 2, n ≥ 1, t > 0, where the integration area M(x,τ) is given by (4.12.9).

According to Theorem 4.9.1 (see formulas (4.9.3) and (4.9.4)), in arbitrary dimension m ≥ 2 the joint density of the symmetric Markov random flight X(t) and of the single change of direction is given by the formula

$$p_1(x,t) = \lambda e^{-\lambda t}\,\frac{2^{m-3}\,\Gamma\!\left(\frac m2\right)}{\pi^{m/2}c^m t^{m-1}}\;F\!\left(\frac{m-1}{2},\,-\frac m2+2;\;\frac m2;\;\frac{\|x\|^2}{c^2t^2}\right), \tag{4.12.23}$$

for x = (x_1, ..., x_m) ∈ int B^m(0,ct), m ≥ 2, t > 0, where F(α, β; γ; z) is the Gauss hypergeometric function. Then, substituting (4.12.23) into (4.12.22) (for n = 1), we obtain the following formula for the joint density of the process X(t) and of two changes of direction:

$$p_2(x,t) = \lambda^2 e^{-\lambda t}\,\frac{2^{m-4}\left(\Gamma\!\left(\frac m2\right)\right)^2}{\pi^m c^{2m-1}} \int_0^t \left\{\int_{M(x,\tau)} F\!\left(\frac{m-1}{2},\,-\frac m2+2;\;\frac m2;\;\frac{\|\xi\|^2}{c^2\tau^2}\right) d\xi\right\}\frac{d\tau}{(\tau(t-\tau))^{m-1}}, \tag{4.12.24}$$

for x = (x_1, ..., x_m) ∈ int B^m(0,ct), m ≥ 2, t > 0.

In particular, in the three-dimensional Euclidean space R³, in view of (4.9.23), the joint density (4.12.23) has the form

$$p_1(x,t) = \frac{\lambda e^{-\lambda t}}{4\pi c^2 t\,\|x\|}\,\ln\!\left(\frac{ct+\|x\|}{ct-\|x\|}\right), \qquad x = (x_1,x_2,x_3) \in \operatorname{int}\mathbf{B}^3(0,ct), \quad t > 0, \tag{4.12.25}$$

where ‖x‖ = (x_1² + x_2² + x_3²)^{1/2}. By substituting this joint density into (4.12.22) (for n = 1, m = 3), we arrive at the formula

$$p_2(x,t) = \frac{\lambda^2 e^{-\lambda t}}{16\pi^2 c^4}\int_0^t \left\{\int_{M(x,\tau)} \ln\!\left(\frac{c\tau+\|\xi\|}{c\tau-\|\xi\|}\right)\frac{d\xi}{\|\xi\|}\right\}\frac{d\tau}{\tau(t-\tau)^2}, \qquad x = (x_1,x_2,x_3) \in \operatorname{int}\mathbf{B}^3(0,ct), \quad t > 0. \tag{4.12.26}$$

Formula (4.12.26) can also be obtained by setting m = 3 in (4.12.24).

According to Theorem 4.12.2 and (4.12.21), the transition density of the m-dimensional symmetric Markov random flight X(t) solves the integral equation

$$\begin{aligned}
p(x,t) ={}& \frac{e^{-\lambda t}\,\Gamma\!\left(\frac m2\right)}{2\pi^{m/2}c^{m-1}t^{m-1}}\,\delta(c^2t^2 - \|x\|^2)\\
&+ \frac{\lambda\,\Gamma\!\left(\frac m2\right)}{2\pi^{m/2}c^{m-1}}\int_0^t \frac{e^{-\lambda(t-\tau)}}{(t-\tau)^{m-1}}\left[\delta\bigl(c^2(t-\tau)^2 - \|x\|^2\bigr) \overset{x}{*} p(x,\tau)\right] d\tau,
\end{aligned} \tag{4.12.27}$$

for x = (x_1, ..., x_m) ∈ B^m(0,ct), t > 0.
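Formula (4.12.25) can be cross-checked by simulation. Conditioned on exactly one change of direction, the change time is uniform on (0, t) and the two directions are independent and uniform on the unit sphere, so X(t) = cτu₁ + c(t−τ)u₂. The implied radial density of R = ‖X(t)‖ is r ln((ct+r)/(ct−r))/(ct)², whose mean ct(1/3 + (2/3)ln 2) follows by elementary integration (this closed-form mean is our own check computation, not a formula from the text).

```python
import numpy as np

rng = np.random.default_rng(3)

def unit_vectors(n):
    """n independent uniformly distributed directions on the unit sphere S^2."""
    u = rng.normal(size=(n, 3))
    return u / np.linalg.norm(u, axis=1, keepdims=True)

# Position after exactly one change of direction:
# tau ~ Uniform(0, t), both directions uniform on the sphere.
t, c, n = 1.0, 1.0, 200_000
tau = rng.uniform(0.0, t, n)
X = c * tau[:, None] * unit_vectors(n) + c * (t - tau)[:, None] * unit_vectors(n)
R = np.linalg.norm(X, axis=1)

print(R.max() < c * t)                              # support is the open ball
print(R.mean(), (1/3 + 2 * np.log(2) / 3) * c * t)  # Monte Carlo vs exact mean
```

The empirical mean of R agrees with the value obtained by integrating r times the radial density derived from (4.12.25), which is a reassuring (though not exhaustive) consistency check of the formula.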

In the class of functions with compact support, equation (4.12.27) has the unique solution given by the series

$$p(x,t) = \sum_{n=0}^{\infty}\lambda^n\left(\frac{\Gamma\!\left(\frac m2\right)}{2\pi^{m/2}c^{m-1}}\right)^{n+1}\left[\frac{e^{-\lambda t}}{t^{m-1}}\,\delta(c^2t^2 - \|x\|^2)\right]^{\overset{x\,t}{*\,*}(n+1)}. \tag{4.12.28}$$

Circular Gaussian law on the circumference. Consider now the case of the non-symmetric planar random flight in which the initial and each new direction are taken according to the distribution on the unit circumference S²(0,1) with the two-dimensional density

$$\chi_k(x) = \frac{1}{2\pi I_0(k)}\,\exp\!\left(\frac{k x_1}{\|x\|}\right)\delta(1 - \|x\|^2), \qquad k \in \mathbb{R}^1, \quad x = (x_1,x_2) \in \mathbb{R}^2, \quad \|x\| = \sqrt{x_1^2 + x_2^2}, \tag{4.12.29}$$

where I_0(z) is the modified Bessel function of order zero. Formula (4.12.29) determines the one-parametric family of densities {χ_k(x), k ∈ R¹}, and for any fixed real k ∈ R¹ the density χ_k(x) is absolutely continuous and uniformly bounded on S²(0,1). If k = 0, then formula (4.12.29) yields the density of the uniform distribution on the unit circumference S²(0,1), while for k ≠ 0 it produces non-uniform densities.

In the unit polar coordinates x_1 = cos θ, x_2 = sin θ, the two-dimensional density (4.12.29) takes the form of the circular Gaussian law (also called the von Mises distribution):

$$\chi_k(\theta) = \frac{\exp(k\cos\theta)}{2\pi I_0(k)}, \qquad \theta \in [-\pi, \pi), \quad k \in \mathbb{R}^1. \tag{4.12.30}$$
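Directions with density (4.12.30) are easy to sample (NumPy ships a von Mises generator), and the classical identity E[cos θ] = I₁(k)/I₀(k) for this law gives a quick consistency check of the normalizing constant 2πI₀(k); the value of k below is illustrative.

```python
import numpy as np
from scipy.special import i0, i1

rng = np.random.default_rng(4)

# Sample directions from the circular Gaussian (von Mises) density (4.12.30)
# and compare the empirical mean resultant length E[cos(theta)] with the
# classical closed form I_1(k)/I_0(k).
k = 1.7
theta = rng.vonmises(0.0, k, size=500_000)

mean_cos = np.mean(np.cos(theta))
mean_sin = np.mean(np.sin(theta))
print(mean_cos, i1(k) / i0(k))  # should be close
print(mean_sin)                 # ~ 0 by the symmetry of (4.12.30) about 0
```

For k → 0 the samples become uniform on the circle, recovering the symmetric random flight as a degenerate member of the family.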


For arbitrary real k ∈ R¹, density (4.12.29) on the unit circumference S²(0,1) generates the density

$$p^{(s)}(x,t) = \frac{e^{-\lambda t}}{2\pi ct\, I_0(k)}\,\exp\!\left(\frac{k x_1}{\|x\|}\right)\delta(c^2t^2 - \|x\|^2), \qquad x = (x_1,x_2) \in \mathbb{R}^2, \quad t > 0, \quad k \in \mathbb{R}^1, \tag{4.12.31}$$

concentrated on the circumference S²(0,ct) of radius ct, where ‖x‖ = (x_1² + x_2²)^{1/2}. Then, according to Theorem 4.12.1, the joint densities are connected with each other by the recurrent relation

$$p_{n+1}(x,t) = \frac{\lambda}{2\pi c\, I_0(k)}\int_0^t \frac{e^{-\lambda(t-\tau)}}{t-\tau}\left\{\int_{M(x,\tau)} \exp\!\left(\frac{k(x_1-\xi_1)}{\sqrt{(x_1-\xi_1)^2 + (x_2-\xi_2)^2}}\right) p_n(\xi_1,\xi_2,\tau)\,d\xi_1\,d\xi_2\right\} d\tau, \tag{4.12.32}$$

for x = (x_1,x_2) ∈ int B²(0,ct), n ≥ 1, t > 0, k ∈ R¹.

According to Theorem 4.12.2 and (4.12.31), the transition density of the planar Markov random flight with dissipation function (4.12.29) satisfies the integral equation

$$\begin{aligned}
p(x,t) ={}& \frac{e^{-\lambda t}}{2\pi ct\, I_0(k)}\,\exp\!\left(\frac{k x_1}{\|x\|}\right)\delta(c^2t^2 - \|x\|^2)\\
&+ \frac{\lambda}{2\pi c\, I_0(k)}\int_0^t \frac{e^{-\lambda(t-\tau)}}{t-\tau}\left[\exp\!\left(\frac{k x_1}{\|x\|}\right)\delta\bigl(c^2(t-\tau)^2 - \|x\|^2\bigr) \overset{x}{*} p(x,\tau)\right] d\tau,
\end{aligned} \tag{4.12.33}$$

for x = (x_1,x_2) ∈ B²(0,ct), ‖x‖ = (x_1² + x_2²)^{1/2}, t > 0, k ∈ R¹. In the class of functions with compact support, equation (4.12.33) has the unique solution given by the series

$$p(x,t) = \sum_{n=0}^{\infty}\lambda^n\left(\frac{1}{2\pi c\, I_0(k)}\right)^{n+1}\left[\frac{e^{-\lambda t}}{t}\,\exp\!\left(\frac{k x_1}{\|x\|}\right)\delta(c^2t^2 - \|x\|^2)\right]^{\overset{x\,t}{*\,*}(n+1)}. \tag{4.12.34}$$

Chapter 5

Markov Random Flight in the Plane R²

In the previous chapter, a general method of studying Markov random flights in the Euclidean space R^m of arbitrary dimension m ≥ 2 was developed, based on the analysis of the integral transforms of their distributions. The formulas obtained are universal and applicable in spaces of arbitrary dimension. There are, however, a few important low dimensions in which these general formulas can be evaluated in explicit form. This gives a unique opportunity to obtain the exact distributions of the symmetric Markov random flights in these low dimensions. One such unique case is the dimension m = 2. In this chapter we thoroughly study the symmetric Markov random flight in the Euclidean plane R².

5.1 Conditional densities

The general description of the symmetric Markov random flight in the Euclidean space R^m of arbitrary dimension m ≥ 2 was given in Section 4.1 of the previous chapter. For the reader's convenience, however, we give here a succinct description of the process in the Euclidean plane R².

A particle, at the initial time moment t = 0, starts from the origin 0 = (0,0) of the Euclidean plane R² and moves with constant finite speed c. The initial direction is a random vector uniformly distributed on the unit circumference

$$S_1^2 = \left\{ x = (x_1,x_2) \in \mathbb{R}^2 : \|x\|^2 = x_1^2 + x_2^2 = 1 \right\}.$$

The particle changes the direction of its motion at random time instants that form a Poisson flow of rate λ > 0. At such time moments, the particle instantaneously takes a new random direction uniformly distributed on S_1², independently of its previous direction.

Let, as above, X(t) = (X_1(t), X_2(t)), t > 0, be the particle's position at time t. In this section we focus on the conditional distributions

$$\Pr\{X(t) \in dx \mid N(t) = n\} = \Pr\{X_1(t) \in dx_1,\; X_2(t) \in dx_2 \mid N(t) = n\}, \qquad n \ge 1, \tag{5.1.1}$$

where, we remind, N(t) is the number of Poisson events that have occurred in the time interval (0,t) and dx is the infinitesimal element of the plane R² with Lebesgue measure μ(dx) = dx_1 dx_2.

At an arbitrary time moment t > 0 the particle, with probability 1, is located in the disc of radius ct:

$$\mathbf{B}_{ct}^2 = \left\{ x = (x_1,x_2) \in \mathbb{R}^2 : \|x\|^2 = x_1^2 + x_2^2 \le c^2 t^2 \right\}.$$

If no Poisson events occur until time t and, therefore, the particle does not change its initial direction, then at time t it is located on the boundary ∂B²_{ct} of this disc, that is, on the circumference of radius ct:

$$S_{ct}^2 = \partial\mathbf{B}_{ct}^2 = \left\{ x = (x_1,x_2) \in \mathbb{R}^2 : \|x\|^2 = x_1^2 + x_2^2 = c^2 t^2 \right\},$$


and the probability of this event is equal to

$$\Pr\left\{X(t) \in S_{ct}^2\right\} = e^{-\lambda t}.$$

If at least one Poisson event occurs until time t and, therefore, the particle changes its direction at least once, then it is located strictly in the interior of the disc B²_{ct}:

$$\operatorname{int}\mathbf{B}_{ct}^2 = \left\{ x = (x_1,x_2) \in \mathbb{R}^2 : \|x\|^2 = x_1^2 + x_2^2 < c^2 t^2 \right\},$$

and the probability of this event is

$$\Pr\left\{X(t) \in \operatorname{int}\mathbf{B}_{ct}^2\right\} = 1 - e^{-\lambda t}. \tag{5.1.2}$$

The set of all sample paths corresponding to this case forms the absolutely continuous component of the distribution

$$\Pr\{X(t) \in dx\}, \qquad x \in \operatorname{int}\mathbf{B}_{ct}^2. \tag{5.1.3}$$

Therefore, there exists the density p(x,t) = p(x_1,x_2,t), x = (x_1,x_2) ∈ int B²_{ct}, t > 0, of the absolutely continuous component of distribution (5.1.3), which is the main subject of this chapter.

Our first result concerns the explicit form of the conditional distributions (5.1.1).

Theorem 5.1.1. For arbitrary n ≥ 1 and any t > 0, conditional distributions (5.1.1) are given by the formula

$$\Pr\{X(t) \in dx \mid N(t) = n\} = \frac{n}{2\pi(ct)^2}\left(1 - \frac{\|x\|^2}{c^2t^2}\right)^{(n-2)/2}\mu(dx), \tag{5.1.4}$$

for x = (x_1,x_2) ∈ int B²_{ct}, ‖x‖² = x_1² + x_2², μ(dx) = dx_1 dx_2.
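Theorem 5.1.1 implies that, given N(t) = n, the radius R = ‖X(t)‖ has distribution function Pr{R ≤ r} = 1 − (1 − r²/(ct)²)^{n/2}, so the statistic U = (1 − (R/(ct))²)^{n/2} must be uniform on (0,1). The sketch below (Python; parameter values are illustrative) simulates the planar flight conditioned on n changes of direction and tests this.

```python
import numpy as np

rng = np.random.default_rng(5)

def planar_flight_radius(t, c, n, n_paths):
    """Simulate ||X(t)|| conditioned on exactly n direction changes: the
    change times are order statistics of n uniforms on (0, t), and all
    n + 1 directions are independent uniform angles."""
    taus = np.sort(rng.uniform(0.0, t, (n_paths, n)), axis=1)
    times = np.hstack([np.zeros((n_paths, 1)), taus, np.full((n_paths, 1), t)])
    steps = np.diff(times, axis=1)                     # sojourn times
    phi = rng.uniform(0.0, 2.0 * np.pi, (n_paths, n + 1))
    x = c * np.sum(steps * np.cos(phi), axis=1)
    y = c * np.sum(steps * np.sin(phi), axis=1)
    return np.hypot(x, y)

t, c, n = 1.0, 1.0, 3
R = planar_flight_radius(t, c, n, 200_000)
U = (1.0 - (R / (c * t))**2) ** (n / 2)
print(U.mean(), U.var())  # ~ 1/2 and ~ 1/12 if U is Uniform(0, 1)
```

The uniformity of U is exactly the probability-integral transform of the radial law implied by (5.1.4), so this single statistic tests the whole radial distribution at once.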

Proof. According to (4.2.5), the conditional characteristic functions of X(t) are given by

$$H_n(t) = \frac{n!}{t^n}\int_0^t d\tau_1 \int_{\tau_1}^t d\tau_2 \cdots \int_{\tau_{n-1}}^t d\tau_n \prod_{j=1}^{n+1} J_0\bigl(c(\tau_j-\tau_{j-1})\|\alpha\|\bigr), \qquad n \ge 1, \tag{5.1.5}$$

where α = (α_1,α_2) ∈ R² is the two-dimensional real vector of inversion parameters, ‖α‖ = (α_1² + α_2²)^{1/2}, and J_0(x) is the Bessel function of zero order. Surprisingly, this fairly complicated n-fold integral in (5.1.5) can be evaluated in explicit form. The calculations are based on the following relation (see [63, Formula 6.581(3)]):

$$\int_0^a x^{\mu}(a-x)^{\nu}J_{\mu}(x)\,J_{\nu}(a-x)\,dx = \frac{\Gamma\!\left(\mu+\frac12\right)\Gamma\!\left(\nu+\frac12\right)}{\sqrt{2\pi}\,\Gamma(\mu+\nu+1)}\,a^{\mu+\nu+\frac12}\,J_{\mu+\nu+\frac12}(a), \qquad \operatorname{Re}\mu > -\tfrac12, \quad \operatorname{Re}\nu > -\tfrac12. \tag{5.1.6}$$

Formula (5.1.5) can be represented as follows:

$$\begin{aligned}
H_n(t) = \frac{n!}{t^n}\int_0^t d\tau_1\, J_0(c\tau_1\|\alpha\|)&\int_{\tau_1}^t d\tau_2\, J_0(c(\tau_2-\tau_1)\|\alpha\|)\int_{\tau_2}^t d\tau_3\, J_0(c(\tau_3-\tau_2)\|\alpha\|)\cdots\\
&\cdots\int_{\tau_{n-2}}^t d\tau_{n-1}\, J_0(c(\tau_{n-1}-\tau_{n-2})\|\alpha\|)\int_{\tau_{n-1}}^t d\tau_n\, J_0(c(\tau_n-\tau_{n-1})\|\alpha\|)\,J_0(c(t-\tau_n)\|\alpha\|).
\end{aligned} \tag{5.1.7}$$


Consider the first (interior) integral, with respect to τ_n, on the right-hand side of (5.1.7). Changing the variable ξ = c(τ_n − τ_{n−1})‖α‖ and applying (5.1.6), we get:

$$\begin{aligned}
\int_{\tau_{n-1}}^t J_0(c(\tau_n-\tau_{n-1})\|\alpha\|)\,J_0(c(t-\tau_n)\|\alpha\|)\,d\tau_n
&= \frac{1}{c\|\alpha\|}\int_0^{c(t-\tau_{n-1})\|\alpha\|} J_0(\xi)\,J_0\bigl(c(t-\tau_{n-1})\|\alpha\| - \xi\bigr)\,d\xi\\
&= \frac{\Gamma\!\left(\frac12\right)\Gamma\!\left(\frac12\right)}{\sqrt{2\pi}\,\Gamma(1)}\left(\frac{t-\tau_{n-1}}{c\|\alpha\|}\right)^{1/2} J_{1/2}\bigl(c(t-\tau_{n-1})\|\alpha\|\bigr).
\end{aligned} \tag{5.1.8}$$

Taking into account (5.1.8), changing the variable ξ = c(τ_{n−1} − τ_{n−2})‖α‖ in the next integral (with respect to τ_{n−1}) on the right-hand side of (5.1.7), and applying (5.1.6) again, we arrive at the formula:

$$\begin{aligned}
\frac{\Gamma\!\left(\frac12\right)\Gamma\!\left(\frac12\right)}{\sqrt{2\pi}\,\Gamma(1)}
&\int_{\tau_{n-2}}^t \left(\frac{t-\tau_{n-1}}{c\|\alpha\|}\right)^{1/2} J_{1/2}\bigl(c(t-\tau_{n-1})\|\alpha\|\bigr)\,J_0\bigl(c(\tau_{n-1}-\tau_{n-2})\|\alpha\|\bigr)\,d\tau_{n-1}\\
&= \frac{\Gamma\!\left(\frac12\right)\Gamma\!\left(\frac12\right)}{\sqrt{2\pi}\,\Gamma(1)}\,\frac{1}{(c\|\alpha\|)^2}\int_0^{c(t-\tau_{n-2})\|\alpha\|} \bigl(c(t-\tau_{n-2})\|\alpha\| - \xi\bigr)^{1/2}\,J_0(\xi)\,J_{1/2}\bigl(c(t-\tau_{n-2})\|\alpha\| - \xi\bigr)\,d\xi\\
&= \frac{\Gamma\!\left(\frac12\right)\Gamma\!\left(\frac12\right)}{\sqrt{2\pi}\,\Gamma(1)}\,\frac{\Gamma(1)\,\Gamma\!\left(\frac12\right)}{\sqrt{2\pi}\,\Gamma\!\left(\frac32\right)}\,\frac{1}{(c\|\alpha\|)^2}\,\bigl(c(t-\tau_{n-2})\|\alpha\|\bigr)\,J_1\bigl(c(t-\tau_{n-2})\|\alpha\|\bigr)\\
&= \frac{\Gamma\!\left(\frac12\right)\Gamma\!\left(\frac12\right)}{\sqrt{2\pi}\,\Gamma(1)}\,\frac{\Gamma(1)\,\Gamma\!\left(\frac12\right)}{\sqrt{2\pi}\,\Gamma\!\left(\frac32\right)}\left(\frac{t-\tau_{n-2}}{c\|\alpha\|}\right) J_1\bigl(c(t-\tau_{n-2})\|\alpha\|\bigr).
\end{aligned} \tag{5.1.9}$$

Continuing this integration procedure in the same manner, after the k-th step, k ≤ n−1, we obtain the characteristic function in the form:

$$\begin{aligned}
H_n(t) = \frac{n!}{t^n}&\left\{\prod_{q=1}^{k}\frac{\Gamma\!\left(\frac12\right)\Gamma\!\left(\frac q2\right)}{\sqrt{2\pi}\,\Gamma\!\left(\frac{q+1}{2}\right)}\right\}\int_0^t d\tau_1\int_{\tau_1}^t d\tau_2\cdots\int_{\tau_{n-k-2}}^t d\tau_{n-k-1}\\
&\times\left[\int_{\tau_{n-k-1}}^t \left(\frac{t-\tau_{n-k}}{c\|\alpha\|}\right)^{k/2} J_{k/2}\bigl(c(t-\tau_{n-k})\|\alpha\|\bigr)\,J_0\bigl(c(\tau_{n-k}-\tau_{n-k-1})\|\alpha\|\bigr)\,d\tau_{n-k}\right]\\
&\times J_0\bigl(c(\tau_{n-k-1}-\tau_{n-k-2})\|\alpha\|\bigr)\cdots J_0\bigl(c(\tau_2-\tau_1)\|\alpha\|\bigr)\,J_0(c\tau_1\|\alpha\|). 
\end{aligned} \tag{5.1.10}$$

At the (n−1)-th step of the integration procedure, we get:

$$H_n(t) = \frac{n!}{t^n}\left\{\prod_{q=1}^{n-1}\frac{\Gamma\!\left(\frac12\right)\Gamma\!\left(\frac q2\right)}{\sqrt{2\pi}\,\Gamma\!\left(\frac{q+1}{2}\right)}\right\}\int_0^t \left(\frac{t-\tau_1}{c\|\alpha\|}\right)^{(n-1)/2} J_{(n-1)/2}\bigl(c(t-\tau_1)\|\alpha\|\bigr)\,J_0(c\tau_1\|\alpha\|)\,d\tau_1.$$

Finally, changing the variable ξ = cτ_1‖α‖ in this integral and applying (5.1.6) once more, we obtain:


$$\begin{aligned}
H_n(t) &= \frac{n!}{t^n}\left\{\prod_{q=1}^{n-1}\frac{\Gamma\!\left(\frac12\right)\Gamma\!\left(\frac q2\right)}{\sqrt{2\pi}\,\Gamma\!\left(\frac{q+1}{2}\right)}\right\}\frac{1}{(c\|\alpha\|)^n}\int_0^{ct\|\alpha\|} (ct\|\alpha\| - \xi)^{(n-1)/2}\,J_{(n-1)/2}(ct\|\alpha\| - \xi)\,J_0(\xi)\,d\xi\\
&= \frac{n!}{t^n}\left\{\prod_{q=1}^{n-1}\frac{\Gamma\!\left(\frac12\right)\Gamma\!\left(\frac q2\right)}{\sqrt{2\pi}\,\Gamma\!\left(\frac{q+1}{2}\right)}\right\}\frac{1}{(c\|\alpha\|)^n}\,\frac{\Gamma\!\left(\frac12\right)\Gamma\!\left(\frac n2\right)}{\sqrt{2\pi}\,\Gamma\!\left(\frac{n+1}{2}\right)}\,(ct\|\alpha\|)^{n/2}\,J_{n/2}(ct\|\alpha\|)\\
&= n!\left\{\prod_{q=1}^{n}\frac{\Gamma\!\left(\frac12\right)\Gamma\!\left(\frac q2\right)}{\sqrt{2\pi}\,\Gamma\!\left(\frac{q+1}{2}\right)}\right\}\frac{J_{n/2}(ct\|\alpha\|)}{(ct\|\alpha\|)^{n/2}}.
\end{aligned} \tag{5.1.11}$$

Using now the well-known relations for the Euler gamma-function (see Subsection 1.6.1)

$$\Gamma(2x) = \frac{2^{2x-1}}{\sqrt{\pi}}\,\Gamma(x)\,\Gamma\!\left(x+\frac12\right), \qquad \Gamma(x+1) = x\,\Gamma(x), \qquad \Gamma\!\left(\frac12\right) = \sqrt{\pi},$$

we see that

$$\begin{aligned}
n!\left\{\prod_{q=1}^{n}\frac{\Gamma\!\left(\frac12\right)\Gamma\!\left(\frac q2\right)}{\sqrt{2\pi}\,\Gamma\!\left(\frac{q+1}{2}\right)}\right\}
&= n!\left(\frac{\Gamma\!\left(\frac12\right)}{\sqrt{2\pi}}\right)^n\left\{\prod_{q=1}^{n}\frac{\Gamma\!\left(\frac q2\right)}{\Gamma\!\left(\frac{q+1}{2}\right)}\right\}\\
&= n!\,2^{-n/2}\left\{\frac{\Gamma\!\left(\frac12\right)}{\Gamma(1)}\cdot\frac{\Gamma(1)}{\Gamma\!\left(\frac32\right)}\cdot\frac{\Gamma\!\left(\frac32\right)}{\Gamma(2)}\cdots\frac{\Gamma\!\left(\frac{n-1}{2}\right)}{\Gamma\!\left(\frac n2\right)}\cdot\frac{\Gamma\!\left(\frac n2\right)}{\Gamma\!\left(\frac{n+1}{2}\right)}\right\}\\
&= n!\,2^{-n/2}\,\frac{\Gamma\!\left(\frac12\right)}{\Gamma\!\left(\frac{n+1}{2}\right)}\\
&= n!\,2^{-n/2}\,\frac{\sqrt{\pi}\,2^{n-1}\,\Gamma\!\left(\frac n2\right)}{\Gamma(n)\,\sqrt{\pi}}\\
&= n\,2^{n/2-1}\,\Gamma\!\left(\frac n2\right)\\
&= 2^{n/2}\,\frac n2\,\Gamma\!\left(\frac n2\right)\\
&= 2^{n/2}\,\Gamma\!\left(\frac n2 + 1\right).
\end{aligned}$$

Substituting this expression into (5.1.11), we finally obtain the conditional characteristic functions:

$$H_n(t) = 2^{n/2}\,\Gamma\!\left(\frac n2 + 1\right)\frac{J_{n/2}(ct\|\alpha\|)}{(ct\|\alpha\|)^{n/2}}, \qquad n \ge 1. \tag{5.1.12}$$

Note that formula (5.1.12) for the conditional characteristic functions H_n(t) exactly coincides with formula (4.4.2) of Section 4.4, which was obtained by the simpler method of integral transforms developed in Chapter 4.

To prove the theorem, one needs to show that the Fourier transform of distribution (5.1.4) in the disc B²_{ct} coincides with (5.1.12) for any n ≥ 1. Passing to polar coordinates and using the well-known integral representation of the Bessel function (see Subsection 1.5.1, Formula (1.5.8))

$$J_0\!\left(x\sqrt{a^2+b^2}\right) = \frac{1}{2\pi}\int_0^{2\pi} e^{ix(a\cos\theta + b\sin\theta)}\,d\theta,$$

we have:

$$\begin{aligned}
\int_{\mathbf{B}_{ct}^2} e^{i\langle\alpha,x\rangle}\,\Pr\{X(t)\in dx \mid N(t)=n\}
&= \iint_{x_1^2+x_2^2 \le c^2t^2} e^{i(\alpha_1 x_1 + \alpha_2 x_2)}\,\Pr\{X_1(t)\in dx_1,\, X_2(t)\in dx_2 \mid N(t)=n\}\\
&= \iint_{x_1^2+x_2^2 \le c^2t^2} e^{i(\alpha_1 x_1 + \alpha_2 x_2)}\,\frac{n}{2\pi(ct)^2}\left(1-\frac{x_1^2+x_2^2}{c^2t^2}\right)^{(n-2)/2} dx_1\,dx_2\\
&= \frac{n}{2\pi(ct)^2}\int_0^{ct}\rho\left(1-\frac{\rho^2}{c^2t^2}\right)^{(n-2)/2}\left[\int_0^{2\pi} e^{i\rho(\alpha_1\cos\theta + \alpha_2\sin\theta)}\,d\theta\right] d\rho\\
&= \frac{n}{(ct)^2}\int_0^{ct}\rho\left(1-\frac{\rho^2}{c^2t^2}\right)^{(n-2)/2} J_0(\rho\|\alpha\|)\,d\rho\\
&= n\int_0^1 \xi\,(1-\xi^2)^{(n-2)/2}\,J_0(ct\|\alpha\|\,\xi)\,d\xi
\end{aligned}$$

(see [63, Formula 6.567(1)])

$$\begin{aligned}
&= n\,2^{(n-2)/2}\,\Gamma\!\left(\frac n2\right)(ct\|\alpha\|)^{-n/2}\,J_{n/2}(ct\|\alpha\|)\\
&= 2^{n/2}\,\frac n2\,\Gamma\!\left(\frac n2\right)\frac{J_{n/2}(ct\|\alpha\|)}{(ct\|\alpha\|)^{n/2}}\\
&= 2^{n/2}\,\Gamma\!\left(\frac n2 + 1\right)\frac{J_{n/2}(ct\|\alpha\|)}{(ct\|\alpha\|)^{n/2}},
\end{aligned}$$

and this exactly coincides with (5.1.12).

We can also evaluate the inverse Fourier transform F_α^{−1} of the conditional characteristic functions (5.1.12) by means of the Hankel inversion formula (1.8.4) and show that it coincides with conditional distribution (5.1.4). Indeed, applying the Hankel inversion formula (1.8.4), we get:

$$\mathcal{F}_\alpha^{-1}\bigl[H_n(t)\bigr] = 2^{n/2}\,\Gamma\!\left(\frac n2 + 1\right)(ct)^{-n/2}\,\mathcal{F}_\alpha^{-1}\left[\frac{J_{n/2}(ct\|\alpha\|)}{\|\alpha\|^{n/2}}\right] = \frac{2^{n/2}\,\Gamma\!\left(\frac n2 + 1\right)}{2\pi\,(ct)^{n/2}}\int_0^{\infty} r^{-(n-2)/2}\,J_0(\|x\|r)\,J_{n/2}(ctr)\,dr$$

(see [63, Formula 6.574(1)])

$$\begin{aligned}
&= \frac{2^{n/2}\,\Gamma\!\left(\frac n2 + 1\right)}{2\pi\,(ct)^{n/2}}\cdot\frac{\Gamma(1)}{2^{(n-2)/2}\,(ct)^{-n/2+2}\,\Gamma\!\left(\frac n2\right)\Gamma(1)}\;F\!\left(1,\,-\frac n2 + 1;\;1;\;\frac{\|x\|^2}{c^2t^2}\right)\\
&= \frac{\Gamma\!\left(\frac n2 + 1\right)}{\pi\,(ct)^2\,\Gamma\!\left(\frac n2\right)}\;F\!\left(-\frac{n-2}{2},\,1;\;1;\;\frac{\|x\|^2}{c^2t^2}\right)\\
&= \frac{n}{2\pi(ct)^2}\left(1-\frac{\|x\|^2}{c^2t^2}\right)^{(n-2)/2},
\end{aligned}$$

and this exactly coincides with (5.1.4). The theorem is thus completely proved.

Remark 5.1.1. Let us consider a few particular cases of the conditional densities (5.1.4):
\[
p_n(x,t) = \frac{n}{2\pi(ct)^2}\Bigl(1-\frac{\|x\|^2}{c^2t^2}\Bigr)^{(n-2)/2}, \qquad n\ge 1. \tag{5.1.13}
\]
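The tabulated integral [63, Formula 6.567(1)] that converts the radial integral into the closed form of (5.1.12)–(5.1.13) can be verified numerically with the standard library alone, using the ascending series of the Bessel function. The following sketch (parameter values ours, purely illustrative) checks the identity n∫₀¹ ξ(1−ξ²)^{(n−2)/2} J₀(zξ) dξ = 2^{n/2} Γ(n/2+1) J_{n/2}(z)/z^{n/2} for several n:

```python
import math

def bessel_j(nu, z, terms=30):
    """Ascending series: J_nu(z) = sum_m (-1)^m (z/2)^(2m+nu) / (m! * Gamma(m+nu+1))."""
    return sum((-1.0) ** m * (z / 2.0) ** (2 * m + nu)
               / (math.factorial(m) * math.gamma(m + nu + 1)) for m in range(terms))

def lhs(n, z, steps=2000):
    """n * Integral_0^1 xi (1 - xi^2)^((n-2)/2) J_0(z xi) d xi,
    computed with the substitution xi = sin(u) so the integrand stays bounded for n = 1."""
    h = (math.pi / 2.0) / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        total += math.sin(u) * math.cos(u) ** (n - 1) * bessel_j(0, z * math.sin(u))
    return n * total * h

def rhs(n, z):
    """Closed form 2^(n/2) Gamma(n/2 + 1) J_{n/2}(z) / z^(n/2) from (5.1.12)."""
    return 2.0 ** (n / 2) * math.gamma(n / 2 + 1) * bessel_j(n / 2, z) / z ** (n / 2)

z = 2.0
errors = [abs(lhs(n, z) - rhs(n, z)) for n in (1, 2, 3, 4)]
print(errors)
```

The midpoint rule and the series truncation both contribute errors far below the tolerance used here; the check covers both integer and half-integer Bessel orders.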

For n = 1, conditional density (5.1.13) takes the form:
\[
p_1(x,t) = \frac{1}{2\pi(ct)^2}\Bigl(1-\frac{\|x\|^2}{c^2t^2}\Bigr)^{-1/2}
= \frac{1}{t}\cdot\frac{1}{2\pi c\sqrt{c^2t^2-(x_1^2+x_2^2)}}, \qquad x=(x_1,x_2)\in \operatorname{int} B_{ct}^2, \quad t>0, \tag{5.1.14}
\]
and this exactly coincides with (4.9.22). It is interesting to notice that the second factor in (5.1.14),
\[
u(x_1,x_2,t) = \frac{1}{2\pi c\sqrt{c^2t^2-(x_1^2+x_2^2)}},
\]
is the fundamental solution (the Green's function) to the two-dimensional wave equation
\[
\frac{\partial^2 u}{\partial t^2} = c^2\Bigl(\frac{\partial^2 u}{\partial x_1^2}+\frac{\partial^2 u}{\partial x_2^2}\Bigr).
\]
This fact shows the wave nature of the process and how this wave propagates from the initial point outwards. Its front is unbounded near the boundary ∂B_{ct}^2 and possesses a non-vanishing 'tail'. However, despite the fact that function (5.1.14) has an infinite discontinuity on the boundary ∂B_{ct}^2, one can easily check that it is integrable in the whole disc B_{ct}^2. Really, by integrating (5.1.14) over the disc B_{ct}^2 and passing to polar coordinates, we have:
\[
\begin{aligned}
\int_{B_{ct}^2} p_1(x,t)\,\mu(dx)
&= \frac{1}{2\pi ct}\iint_{x_1^2+x_2^2\le c^2t^2}\frac{dx_1\,dx_2}{\sqrt{c^2t^2-(x_1^2+x_2^2)}} \\
&= \frac{1}{2\pi ct}\int_0^{2\pi}d\theta\int_0^{ct}\frac{\rho\,d\rho}{\sqrt{c^2t^2-\rho^2}}
= \frac{1}{ct}\int_0^{ct}\frac{\rho\,d\rho}{\sqrt{c^2t^2-\rho^2}}
= \int_0^1\frac{z\,dz}{\sqrt{1-z^2}} = 1,
\end{aligned}
\]
and, therefore, function (5.1.14) is integrable in B_{ct}^2. Hence, function (5.1.14) possesses all


the properties of a probability density. From our analysis it also follows that this important function can be obtained as the distribution of the sum of the two random vectors (cτ cos θ₁, cτ sin θ₁) and (c(t−τ) cos θ₂, c(t−τ) sin θ₂), where the random variables θ₁, θ₂ are distributed uniformly in the interval [0, 2π) and the random variable τ is distributed uniformly in the interval (0, t). This also means that, if a single Poisson event has occurred and the initial direction has been changed, then the density of the particle's position takes large values near the boundary ∂B_{ct}^2. Nevertheless, the probability of being in the infinitesimal ring R_ε = {x ∈ B_{ct}^2 : ct−ε ≤ ‖x‖ ≤ ct} is less than 1 and proportional only to the thickness ε > 0 of this ring.

For n = 2, formula (5.1.13), unexpectedly, yields the density of the uniform distribution in the circle B_{ct}^2:
\[
p_2(x,t) = \frac{1}{\pi(ct)^2}, \qquad x=(x_1,x_2)\in \operatorname{int} B_{ct}^2. \tag{5.1.15}
\]
This means that, after two changes of direction, the particle is located in the infinitesimal element dx of the disc B_{ct}^2 with a probability that does not depend on the position of this element in B_{ct}^2, but only on its Lebesgue measure µ(dx). In Chapter 7, we will see that this extremely interesting fact also takes place for the Markov random flight in the four-dimensional space; in that case, however, the uniform distribution emerges just after the first change of direction. In a certain sense, the second change of direction somehow cancels the 'outward push' noted above.

For n ≥ 3, the form of the conditional distribution no longer changes in such a drastic manner and takes on a bell-shaped structure. For example, for three changes of direction (n = 3), formula (5.1.13) produces the conditional density
\[
p_3(x,t) = \frac{3}{2\pi(ct)^3}\sqrt{c^2t^2-\|x\|^2}, \qquad x=(x_1,x_2)\in \operatorname{int} B_{ct}^2.
\]
The reason for the appearance of the bell shape of conditional densities (5.1.13) for n ≥ 3 is that, when a sufficient number of changes of direction has occurred (the minimal number clearly being three), the trajectories of the process become so fragmented that the particle can hardly leave the neighbourhood of the origin. From this fact it follows that the density p(x,t) of the absolutely continuous component of the distribution of the Markov random flight X(t) must also have a bell-shaped structure centred at the origin. In the next section we will see that the density p(x,t) has just such a bell-shaped structure.
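Remark 5.1.1 is easy to probe by simulation. The sketch below (standard-library Python, with parameter values chosen purely for illustration) draws from the two-random-vector representation above: for n = 1 it compares the empirical characteristic function with H₁(t) = sin(ct‖α‖)/(ct‖α‖) (the n = 1 case of (5.1.12), using J_{1/2}(z) = √(2/(πz)) sin z), and for n = 2 it compares the empirical second moment with the value (ct)²/2 of the uniform law (5.1.15):

```python
import math
import random

rng = random.Random(7)
c, t = 1.0, 1.0
n_samples = 200_000

def segment(length, rng):
    """A segment of given length in a uniformly random direction."""
    phi = rng.uniform(0.0, 2.0 * math.pi)
    return length * math.cos(phi), length * math.sin(phi)

# n = 1: position = sum of two segments, switch time tau uniform on (0, t).
a = 2.0                                   # test frequency, alpha = (a, 0)
acc = 0.0
for _ in range(n_samples):
    tau = rng.uniform(0.0, t)
    x1, y1 = segment(c * tau, rng)
    x2, y2 = segment(c * (t - tau), rng)
    acc += math.cos(a * (x1 + x2))        # imaginary part vanishes by symmetry
h1_mc = acc / n_samples
h1_exact = math.sin(c * t * a) / (c * t * a)

# n = 2: the two switch times are order statistics of two uniforms on (0, t).
acc2 = 0.0
for _ in range(n_samples):
    u, v = sorted((rng.uniform(0.0, t), rng.uniform(0.0, t)))
    parts = [segment(c * u, rng), segment(c * (v - u), rng), segment(c * (t - v), rng)]
    x = sum(p[0] for p in parts)
    y = sum(p[1] for p in parts)
    acc2 += x * x + y * y
m2_mc = acc2 / n_samples
m2_exact = (c * t) ** 2 / 2.0             # second moment of the uniform law on the disc
print(h1_mc, h1_exact, m2_mc, m2_exact)
```

With 200 000 samples the Monte Carlo error is of order 10⁻³, well inside the tolerances below.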

5.2

Distribution of the process

From Theorem 5.1.1 we can immediately obtain the unconditional distribution of the planar Markov random flight X(t), which is one of the main results of this chapter.

Theorem 5.2.1. The absolutely continuous part of the distribution of the process X(t) has the form:
\[
\Pr\{X(t)\in dx\} = \frac{\lambda}{2\pi c}\,\frac{\exp\bigl(-\lambda t + \frac{\lambda}{c}\sqrt{c^2t^2-\|x\|^2}\bigr)}{\sqrt{c^2t^2-\|x\|^2}}\;\mu(dx), \tag{5.2.1}
\]
x = (x₁, x₂) ∈ int B_{ct}^2, ‖x‖² = x₁² + x₂², t > 0.


Proof. According to the total probability formula and in view of Theorem 5.1.1, we have:
\[
\begin{aligned}
\Pr\{X(t)\in dx\}
&= \sum_{n=1}^{\infty} \Pr\{X(t)\in dx \mid N(t)=n\}\,\Pr\{N(t)=n\} \\
&= e^{-\lambda t}\sum_{n=1}^{\infty}\frac{(\lambda t)^n}{n!}\,\frac{n}{2\pi(ct)^2}\Bigl(1-\frac{\|x\|^2}{c^2t^2}\Bigr)^{(n-2)/2}\mu(dx) \\
&= \frac{e^{-\lambda t}}{2\pi(ct)^2}\sum_{n=1}^{\infty}\frac{(\lambda t)^n}{(n-1)!}\,\frac{(c^2t^2-\|x\|^2)^{(n-2)/2}}{(ct)^{n-2}}\,\mu(dx) \\
&= \frac{e^{-\lambda t}}{2\pi}\sum_{n=1}^{\infty}\frac{1}{(n-1)!}\Bigl(\frac{\lambda}{c}\Bigr)^{n}\bigl(\sqrt{c^2t^2-\|x\|^2}\bigr)^{n-2}\,\mu(dx) \\
&= \frac{\lambda\,e^{-\lambda t}}{2\pi c\sqrt{c^2t^2-\|x\|^2}}\sum_{n=0}^{\infty}\frac{1}{n!}\Bigl(\frac{\lambda}{c}\sqrt{c^2t^2-\|x\|^2}\Bigr)^{n}\mu(dx) \\
&= \frac{\lambda\,e^{-\lambda t}}{2\pi c\sqrt{c^2t^2-\|x\|^2}}\,\exp\Bigl(\frac{\lambda}{c}\sqrt{c^2t^2-\|x\|^2}\Bigr)\mu(dx) \\
&= \frac{\lambda}{2\pi c}\,\frac{\exp\bigl(-\lambda t+\frac{\lambda}{c}\sqrt{c^2t^2-\|x\|^2}\bigr)}{\sqrt{c^2t^2-\|x\|^2}}\,\mu(dx),
\end{aligned}
\]

proving (5.2.1). It remains to check that, for any t > 0, equality (5.1.2) is fulfilled. Passing to polar coordinates, we get:
\[
\begin{aligned}
\int_{B_{ct}^2}\Pr\{X(t)\in dx\}
&= \frac{\lambda}{2\pi c}\iint_{x_1^2+x_2^2\le c^2t^2}\frac{\exp\bigl(-\lambda t+\frac{\lambda}{c}\sqrt{c^2t^2-(x_1^2+x_2^2)}\bigr)}{\sqrt{c^2t^2-(x_1^2+x_2^2)}}\,dx_1\,dx_2 \\
&= \frac{\lambda e^{-\lambda t}}{2\pi c}\int_0^{2\pi}d\theta\int_0^{ct}\frac{\exp\bigl(\frac{\lambda}{c}\sqrt{c^2t^2-\rho^2}\bigr)}{\sqrt{c^2t^2-\rho^2}}\,\rho\,d\rho \\
&= \frac{\lambda e^{-\lambda t}}{2c}\int_0^{ct}\frac{\exp\bigl(\frac{\lambda}{c}\sqrt{c^2t^2-\rho^2}\bigr)}{\sqrt{c^2t^2-\rho^2}}\,d(\rho^2) \\
&= 1-e^{-\lambda t},
\end{aligned}
\]
as required. The missing part of the probability, namely e^{−λt}, pertains to the singular part of the distribution and is concentrated on the boundary ∂B_{ct}^2 of the disc. The theorem is proved.

The density of distribution (5.2.1) has the form:
\[
p(x,t) = \frac{\lambda}{2\pi c}\,\frac{\exp\bigl(-\lambda t + \frac{\lambda}{c}\sqrt{c^2t^2-\|x\|^2}\bigr)}{\sqrt{c^2t^2-\|x\|^2}}, \qquad \|x\|<ct, \quad t>0. \tag{5.2.2}
\]

The shape of density (5.2.2) at the time instant t = 3 for the values of the parameters c = 2 and λ = 1 is plotted in Fig. 5.1. We see that the density has a local maximum at the origin (that is, at the starting point), then monotonically decreases and reaches an absolute minimum at some distance from the origin. After that the density begins to increase monotonically and tends to infinity as it approaches the boundary of the circle.

[Figure 5.1: The shape of density p(x, y, t) at time t = 3 (for c = 2, λ = 1)]

This behaviour of density (5.2.2) is, obviously, due to the presence of the term given by the discontinuous conditional density (5.1.14) corresponding to the single change of direction (see Remark 5.1.1 above).

The following theorem yields a formula for the probability of being in a circle B_r^2 of arbitrary radius r < ct centred at the origin.

Theorem 5.2.2. For any t > 0, the following relation holds:
\[
\Pr\{X(t)\in B_r^2\} = 1-\exp\Bigl(-\lambda t+\frac{\lambda}{c}\sqrt{c^2t^2-r^2}\Bigr), \qquad 0\le r<ct. \tag{5.2.3}
\]

Proof. In view of (5.2.1) and passing to polar coordinates, we get:
\[
\begin{aligned}
\Pr\{X(t)\in B_r^2\}
&= \int_{B_r^2}\Pr\{X(t)\in dx\}
= \frac{\lambda}{2\pi c}\iint_{x_1^2+x_2^2\le r^2}\frac{\exp\bigl(-\lambda t+\frac{\lambda}{c}\sqrt{c^2t^2-(x_1^2+x_2^2)}\bigr)}{\sqrt{c^2t^2-(x_1^2+x_2^2)}}\,dx_1\,dx_2 \\
&= \frac{\lambda e^{-\lambda t}}{2\pi c}\int_0^{2\pi}d\theta\int_0^{r}\frac{\exp\bigl(\frac{\lambda}{c}\sqrt{c^2t^2-\rho^2}\bigr)}{\sqrt{c^2t^2-\rho^2}}\,\rho\,d\rho
= \frac{\lambda e^{-\lambda t}}{2c}\int_0^{r}\frac{\exp\bigl(\frac{\lambda}{c}\sqrt{c^2t^2-\rho^2}\bigr)}{\sqrt{c^2t^2-\rho^2}}\,d(\rho^2) \\
&= -\frac{\lambda e^{-\lambda t}}{2c}\int_{ct}^{\sqrt{c^2t^2-r^2}}\frac{\exp\bigl(\frac{\lambda}{c}z\bigr)}{z}\,d(z^2)
= 1-\exp\Bigl(-\lambda t+\frac{\lambda}{c}\sqrt{c^2t^2-r^2}\Bigr),
\end{aligned}
\]
where the substitution z = \sqrt{c^2t^2-\rho^2} has been made. The theorem is proved.
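Formula (5.2.3) can be confronted with a direct simulation of the motion itself: exponential holding times between switches and a fresh uniform direction after each switch. A sketch with illustrative parameter values (standard library only):

```python
import math
import random

def flight(t, c, lam, rng):
    """One sample of the planar Markov random flight X(t)."""
    x = y = s = 0.0
    theta = rng.uniform(0.0, 2.0 * math.pi)      # initial direction, uniform on [0, 2pi)
    while s < t:
        dt = min(rng.expovariate(lam), t - s)    # time to next switch, truncated at t
        x += c * dt * math.cos(theta)
        y += c * dt * math.sin(theta)
        s += dt
        theta = rng.uniform(0.0, 2.0 * math.pi)  # new direction after a switch
    return x, y

def ball_prob(t, c, lam, r):
    """Formula (5.2.3)."""
    return 1.0 - math.exp(-lam * t + (lam / c) * math.sqrt(c * c * t * t - r * r))

rng = random.Random(1)
t, c, lam, r = 1.0, 1.0, 2.0, 0.7
n = 200_000
emp = sum(math.hypot(*flight(t, c, lam, rng)) <= r for _ in range(n)) / n
print(emp, ball_prob(t, c, lam, r))
```

The empirical frequency agrees with the closed form to within the Monte Carlo error (about 10⁻³ for this sample size); the singular mass e^{−λt} on the boundary is automatically excluded because r < ct.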


Remark 5.2.1. Density (5.2.2) takes an especially interesting form in polar coordinates:
\[
\tilde p(\rho,\theta,t) = \frac{\lambda}{2\pi c}\,\frac{\rho\,\exp\bigl(-\lambda t+\frac{\lambda}{c}\sqrt{c^2t^2-\rho^2}\bigr)}{\sqrt{c^2t^2-\rho^2}}, \qquad 0<\rho<ct,\quad 0\le\theta<2\pi,\quad t>0, \tag{5.2.4}
\]
where we see that the radial component does not depend on the angular one. This property of the symmetric planar Markov random flight X(t) is quite similar to the analogous property of the two-dimensional Wiener process, whose transition density in polar coordinates also has independent radial and angular components.

Remark 5.2.2. The complete density f(x,t), x ∈ B_{ct}^2, t ≥ 0, of the distribution of process X(t) in terms of generalized functions has the form:
\[
f(x,t) = \frac{e^{-\lambda t}}{2\pi ct}\,\delta(c^2t^2-\|x\|^2) + \frac{\lambda}{2\pi c}\,\frac{\exp\bigl(-\lambda t+\frac{\lambda}{c}\sqrt{c^2t^2-\|x\|^2}\bigr)}{\sqrt{c^2t^2-\|x\|^2}}\,\Theta(ct-\|x\|), \tag{5.2.5}
\]
where δ(x) is the Dirac delta-function and Θ(x) is the Heaviside unit-step function.

Remark 5.2.3. Density (5.2.2) is the product of the Green's function of planar waves (related to a single change of direction) and an exponential factor yielding the damping effect produced by many changes of direction (see Remark 5.1.1). Our analysis gives some insight into the mechanism of damped wave propagation in the plane. The forces acting on a vibrating membrane can be imagined as the superposition of random motions developing as zigzag lines, in contrast to the outward motion of the wave. The effect of these random motions is to hinder the energy propagation. The more changes of direction there are, the lower is the energy that spreads outward and, therefore, the smaller its contribution to the wave propagation. The same damping effect can be observed in the case of one-dimensional waves where, surprisingly, the form of the transition density is much more complicated than (5.2.2).
This profound relationship between the finite-velocity planar random motions and the wave processes will again be demonstrated in Section 5.6, where an alternative and much simpler derivation of density (5.2.2), based on the physical principle of the superposition of planar waves, will be given.

In the following theorem we present the marginal distributions of the random flight X(t) = (X₁(t), X₂(t)), that is, the distributions of the projections of X(t) onto the coordinate axes.

Theorem 5.2.3. The density of the distribution of the projections of the symmetric planar Markov random flight X(t) = (X₁(t), X₂(t)) onto the coordinate axes has the form:
\[
\begin{aligned}
\frac{\partial}{\partial x}\Pr\{X_1(t)<x\} &= \frac{\partial}{\partial x}\Pr\{X_2(t)<x\} \\
&= \frac{e^{-\lambda t}}{\pi\sqrt{c^2t^2-x^2}} + \frac{\lambda e^{-\lambda t}}{2c}\Bigl[I_0\Bigl(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\Bigr) + L_0\Bigl(\frac{\lambda}{c}\sqrt{c^2t^2-x^2}\Bigr)\Bigr], \tag{5.2.6}
\end{aligned}
\]
x ∈ (−ct, ct), t > 0, where I₀(z) and L₀(z) are the modified Bessel and Struve functions of order zero, respectively, given by the series
\[
I_0(z) = \sum_{k=0}^{\infty}\frac{1}{(k!)^2}\Bigl(\frac{z}{2}\Bigr)^{2k}, \qquad
L_0(z) = \sum_{k=0}^{\infty}\frac{1}{\bigl[\Gamma\bigl(k+\frac{3}{2}\bigr)\bigr]^2}\Bigl(\frac{z}{2}\Bigr)^{2k+1}. \tag{5.2.7}
\]

Proof. We prove the statement of the theorem for the x₁-axis. Let x ∈ (−ct, ct) be an arbitrary point. Consider the planar set M_x = {x = (x₁, x₂) ∈ B_{ct}^2 : x₁ < x}. In other words, M_x is the set of all the points of the disc B_{ct}^2 whose x₁-coordinate is less than x. Then, obviously,
\[
\Pr\{X_1(t)<x\} = \Pr\{X(t)\in M_x\} = \Pr\{X(t)\in\partial M_x\} + \Pr\{X(t)\in \operatorname{int} M_x\}. \tag{5.2.8}
\]
Some simple geometric reasonings show that
\[
\Pr\{X(t)\in\partial M_x\} = e^{-\lambda t}\Bigl(1-\frac{1}{\pi}\arccos\frac{x}{ct}\Bigr). \tag{5.2.9}
\]
According to (5.2.1), we have:
\[
\Pr\{X(t)\in \operatorname{int} M_x\}
= \frac{\lambda}{2\pi c}\int_{M_x}\frac{\exp\bigl(-\lambda t+\frac{\lambda}{c}\sqrt{c^2t^2-\|x\|^2}\bigr)}{\sqrt{c^2t^2-\|x\|^2}}\,\mu(dx)
= \frac{\lambda e^{-\lambda t}}{2\pi c}\iint_{\substack{x_1^2+x_2^2\le c^2t^2\\ x_1<x}}\frac{\exp\bigl(\frac{\lambda}{c}\sqrt{c^2t^2-(x_1^2+x_2^2)}\bigr)}{\sqrt{c^2t^2-(x_1^2+x_2^2)}}\,dx_1\,dx_2.
\]
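Since the series (5.2.7) are explicit, the marginal density (5.2.6) can be checked numerically with the standard library alone: integrated over (−ct, ct) it must have total mass 1 (the projection of the singular part of the planar distribution is already absorbed into the arcsine-type first term). A sketch, with illustrative parameter values:

```python
import math

def i0(z, terms=40):
    """Modified Bessel function of order zero, series (5.2.7)."""
    return sum((z / 2.0) ** (2 * k) / math.factorial(k) ** 2 for k in range(terms))

def struve_l0(z, terms=40):
    """Modified Struve function of order zero, series (5.2.7)."""
    return sum((z / 2.0) ** (2 * k + 1) / math.gamma(k + 1.5) ** 2 for k in range(terms))

def marginal_mass(c, lam, t, steps=4000):
    """Integrate (5.2.6) over (-ct, ct) with x = ct*sin(u) to tame the endpoints."""
    h = math.pi / steps                       # u runs over (-pi/2, pi/2)
    total = 0.0
    for i in range(steps):
        u = -math.pi / 2 + (i + 0.5) * h
        w = c * t * math.cos(u)               # w = sqrt(c^2 t^2 - x^2)
        dens = (math.exp(-lam * t) / (math.pi * w)
                + lam * math.exp(-lam * t) / (2 * c)
                * (i0(lam * w / c) + struve_l0(lam * w / c)))
        total += dens * w * h                 # dx = ct*cos(u) du = w du
    return total

mass = marginal_mass(c=2.0, lam=1.5, t=1.0)
print(mass)
```

The substitution removes the square-root singularity at x = ±ct, so a plain midpoint rule converges rapidly.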
\[
\begin{aligned}
&\frac{\lambda e^{-\lambda t}}{c}\int_0^{ct}\frac{\exp\bigl(\frac{\lambda}{c}\sqrt{c^2t^2-\rho^2}\bigr)}{\sqrt{c^2t^2-\rho^2}}\,\rho\,J_0(\rho\|\alpha\|)\,d\rho \\
&\qquad= \frac{\lambda e^{-\lambda t}}{c}\sum_{k=0}^{\infty}\frac{1}{k!}\Bigl(\frac{\lambda}{c}\Bigr)^{k}\int_0^{ct}\rho\,\bigl(\sqrt{c^2t^2-\rho^2}\bigr)^{k-1}\, J_0(\rho\|\alpha\|)\,d\rho \\
&\qquad= e^{-\lambda t}\sum_{k=0}^{\infty}\frac{(ct)^{k+1}}{k!}\Bigl(\frac{\lambda}{c}\Bigr)^{k+1}\int_0^1 z\,(1-z^2)^{(k-1)/2}\, J_0(ct\|\alpha\|z)\,dz
\end{aligned}
\]
(see [63, Formula 6.567(1)])
\[
\begin{aligned}
&\qquad= e^{-\lambda t}\sum_{k=0}^{\infty}\frac{(\lambda t)^{k+1}}{k!}\, 2^{(k-1)/2}\,\Gamma\Bigl(\frac{k+1}{2}\Bigr)\,(ct\|\alpha\|)^{-(k+1)/2}\, J_{(k+1)/2}(ct\|\alpha\|) \\
&\qquad= e^{-\lambda t}\sum_{n=1}^{\infty}\frac{(\lambda t)^{n}}{(n-1)!}\, 2^{(n-2)/2}\,\Gamma\Bigl(\frac{n}{2}\Bigr)\,\frac{J_{n/2}(ct\|\alpha\|)}{(ct\|\alpha\|)^{n/2}} \\
&\qquad= e^{-\lambda t}\sum_{n=1}^{\infty}\frac{(\lambda t)^{n}}{n!}\, 2^{n/2}\Bigl[\frac{n}{2}\,\Gamma\Bigl(\frac{n}{2}\Bigr)\Bigr]\frac{J_{n/2}(ct\|\alpha\|)}{(ct\|\alpha\|)^{n/2}}
= e^{-\lambda t}\sum_{n=1}^{\infty}\frac{(\lambda t)^{n}}{n!}\, 2^{n/2}\,\Gamma\Bigl(\frac{n}{2}+1\Bigr)\frac{J_{n/2}(ct\|\alpha\|)}{(ct\|\alpha\|)^{n/2}}.
\end{aligned}
\]
Adding to this expression the first term of (5.3.3), we obtain series representation (5.3.2). The theorem is completely proved.

Let us evaluate the Laplace transform of the characteristic function H(t). Applying the Laplace transformation L to (5.3.2) and taking into account the uniform convergence of the series (see Lemma 4.5.3), we get:
\[
\begin{aligned}
\mathcal L[H(t)](s)
&= \mathcal L\Biggl[e^{-\lambda t}\sum_{n=0}^{\infty}\frac{(\lambda t)^n}{n!}\,2^{n/2}\,\Gamma\Bigl(\frac{n}{2}+1\Bigr)\frac{J_{n/2}(ct\|\alpha\|)}{(ct\|\alpha\|)^{n/2}}\Biggr](s) \\
&= \sum_{n=0}^{\infty}\frac{\lambda^n}{n!}\,2^{n/2}\,\Gamma\Bigl(\frac{n}{2}+1\Bigr)(c\|\alpha\|)^{-n/2}\,\mathcal L\bigl[t^{n/2}J_{n/2}(ct\|\alpha\|)\bigr](s+\lambda)
\end{aligned}
\]
(see [7, Table 4.14, Formula 7])
\[
\begin{aligned}
&= \sum_{n=0}^{\infty}\frac{\lambda^n}{n!}\,2^{n/2}\,\Gamma\Bigl(\frac{n}{2}+1\Bigr)(c\|\alpha\|)^{-n/2}\cdot 2^{n/2}\,\pi^{-1/2}\,\Gamma\Bigl(\frac{n+1}{2}\Bigr)(c\|\alpha\|)^{n/2}\Bigl(\sqrt{(s+\lambda)^2+(c\|\alpha\|)^2}\Bigr)^{-(n+1)} \\
&= \sum_{n=0}^{\infty}\frac{\lambda^n}{n!}\,\frac{2^n}{\sqrt\pi}\,\Gamma\Bigl(\frac{n}{2}+1\Bigr)\Gamma\Bigl(\frac{n+1}{2}\Bigr)\Bigl(\sqrt{(s+\lambda)^2+(c\|\alpha\|)^2}\Bigr)^{-(n+1)}
\end{aligned}
\]
(duplication formula for the gamma-function)
\[
\begin{aligned}
&= \sum_{n=0}^{\infty}\frac{\lambda^n}{n!}\,\Gamma(n+1)\Bigl(\sqrt{(s+\lambda)^2+(c\|\alpha\|)^2}\Bigr)^{-(n+1)}
= \sum_{n=0}^{\infty}\lambda^n\Bigl(\sqrt{(s+\lambda)^2+(c\|\alpha\|)^2}\Bigr)^{-(n+1)} \\
&= \frac{1}{\sqrt{(s+\lambda)^2+(c\|\alpha\|)^2}}\sum_{n=0}^{\infty}\Biggl(\frac{\lambda}{\sqrt{(s+\lambda)^2+(c\|\alpha\|)^2}}\Biggr)^{n}.
\end{aligned}
\]
Since, as is easy to see, for any s such that Re s > 0 the following inequality is fulfilled:
\[
\frac{\lambda}{\sqrt{(s+\lambda)^2+(c\|\alpha\|)^2}} = \frac{1}{\sqrt{\bigl(\frac{s}{\lambda}+1\bigr)^2+\frac{(c\|\alpha\|)^2}{\lambda^2}}} < 1,
\]
then, applying the formula for the sum of an infinitely decreasing geometric progression to the last series, we get:
\[
\mathcal L[H(t)](s) = \frac{1}{\sqrt{(s+\lambda)^2+(c\|\alpha\|)^2}}\cdot\frac{1}{1-\dfrac{\lambda}{\sqrt{(s+\lambda)^2+(c\|\alpha\|)^2}}}
= \frac{1}{\sqrt{(s+\lambda)^2+(c\|\alpha\|)^2}-\lambda},
\]
and this exactly coincides with formula (4.6.3) previously obtained from the general relation (4.6.1).

From the representations (5.3.2) and (5.3.3) (or, more easily, from (5.3.4)), we immediately obtain the initial conditions for the characteristic function H(t):
\[
H(t)\big|_{t=0} = 1, \qquad \frac{\partial H(t)}{\partial t}\bigg|_{t=0} = 0, \tag{5.3.5}
\]
and this exactly coincides with (4.7.4) and (4.7.7).
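The two purely algebraic steps of this computation — the duplication formula for the gamma-function and the summation of the geometric progression — can be verified numerically (illustrative values of s, λ and of b = c‖α‖):

```python
import math

lam, b, s = 2.0, 3.0, 0.5   # b stands for c*||alpha||; all values illustrative

# Duplication step: Gamma(n/2 + 1) * Gamma((n+1)/2) * 2^n / sqrt(pi) == n!
for n in range(10):
    left = math.gamma(n / 2 + 1) * math.gamma((n + 1) / 2) * 2 ** n / math.sqrt(math.pi)
    assert abs(left - math.factorial(n)) < 1e-6 * math.factorial(n)

# Geometric summation: sum_n lambda^n * R^-(n+1) == 1/(R - lambda),
# where R = sqrt((s + lambda)^2 + b^2) > lambda for Re s > 0.
R = math.sqrt((s + lam) ** 2 + b ** 2)
series = sum(lam ** n / R ** (n + 1) for n in range(200))
closed = 1.0 / (R - lam)
print(series, closed)
```

With the chosen values the ratio λ/R ≈ 0.51, so 200 terms of the series are far more than enough for full floating-point accuracy.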

5.4

Telegraph equation

One of the most remarkable results related to the Goldstein–Kac telegraph process on the line is the fact that its transition density is the fundamental solution (the Green's function) to the one-dimensional hyperbolic telegraph equation (see Section 2.5). In this section we show that this remarkable property is also valid for the two-dimensional planar Markov random flight. However, as was proved in Section 4.10, in the Euclidean spaces R^m of higher dimensions m ≥ 3 this property no longer holds.

Theorem 5.4.1. The density (5.2.2) of the absolutely continuous component of the distribution of process X(t) is the fundamental solution to the two-dimensional telegraph equation
\[
\frac{\partial^2 p}{\partial t^2} + 2\lambda\frac{\partial p}{\partial t} = c^2\Bigl(\frac{\partial^2 p}{\partial x_1^2}+\frac{\partial^2 p}{\partial x_2^2}\Bigr). \tag{5.4.1}
\]

Proof. The change
\[
p(x_1,x_2,t) = e^{-\lambda t}\, w(x_1,x_2,t) \tag{5.4.2}
\]
transforms (5.4.1) into the following equation for the function w = w(x₁,x₂,t):
\[
\frac{\partial^2 w}{\partial t^2} - c^2\Bigl(\frac{\partial^2 w}{\partial x_1^2}+\frac{\partial^2 w}{\partial x_2^2}\Bigr) = \lambda^2 w. \tag{5.4.3}
\]
Making in (5.4.3) the self-similar substitution
\[
\xi = \sqrt{c^2t^2-(x_1^2+x_2^2)}, \tag{5.4.4}
\]
we see that, for the new function ϕ = ϕ(ξ) of the variable ξ connected with the function w by the equality w(x₁,x₂,t) = ϕ(√(c²t²−(x₁²+x₂²))), we have:
\[
\frac{\partial^2 w}{\partial t^2} = (\xi_t')^2\frac{d^2\varphi}{d\xi^2}+\xi_{tt}''\frac{d\varphi}{d\xi}, \qquad
\frac{\partial^2 w}{\partial x_1^2} = (\xi_{x_1}')^2\frac{d^2\varphi}{d\xi^2}+\xi_{x_1x_1}''\frac{d\varphi}{d\xi}, \qquad
\frac{\partial^2 w}{\partial x_2^2} = (\xi_{x_2}')^2\frac{d^2\varphi}{d\xi^2}+\xi_{x_2x_2}''\frac{d\varphi}{d\xi}.
\]
Then equation (5.4.3) takes the form:
\[
\bigl[(\xi_t')^2 - c^2\bigl((\xi_{x_1}')^2+(\xi_{x_2}')^2\bigr)\bigr]\frac{d^2\varphi}{d\xi^2}
+ \bigl[\xi_{tt}'' - c^2\bigl(\xi_{x_1x_1}''+\xi_{x_2x_2}''\bigr)\bigr]\frac{d\varphi}{d\xi} - \lambda^2\varphi = 0. \tag{5.4.5}
\]
One can easily see that
\[
\xi_t' = c^2t\,\xi^{-1}, \qquad \xi_{x_1}' = -x_1\xi^{-1}, \qquad \xi_{x_2}' = -x_2\xi^{-1},
\]
and, therefore,
\[
\xi_{tt}'' = c^2\bigl(\xi^{-1} - c^2t^2\xi^{-3}\bigr), \qquad \xi_{x_1x_1}'' = -\bigl(\xi^{-1}+x_1^2\xi^{-3}\bigr), \qquad \xi_{x_2x_2}'' = -\bigl(\xi^{-1}+x_2^2\xi^{-3}\bigr).
\]
Substituting these expressions into (5.4.5) and taking into account (5.4.4), we find the functions in the square brackets:
\[
(\xi_t')^2 - c^2\bigl((\xi_{x_1}')^2+(\xi_{x_2}')^2\bigr)
= c^4t^2\xi^{-2} - c^2x_1^2\xi^{-2} - c^2x_2^2\xi^{-2}
= c^2\bigl(c^2t^2-(x_1^2+x_2^2)\bigr)\xi^{-2} = c^2\xi^2\xi^{-2} = c^2,
\]
and, similarly,
\[
\begin{aligned}
\xi_{tt}'' - c^2\bigl(\xi_{x_1x_1}''+\xi_{x_2x_2}''\bigr)
&= c^2\bigl(\xi^{-1}-c^2t^2\xi^{-3}\bigr) + c^2\bigl(\xi^{-1}+x_1^2\xi^{-3}+\xi^{-1}+x_2^2\xi^{-3}\bigr) \\
&= c^2\bigl(3\xi^{-1} - \bigl(c^2t^2-(x_1^2+x_2^2)\bigr)\xi^{-3}\bigr)
= c^2\bigl(3\xi^{-1}-\xi^{-1}\bigr) = 2c^2\xi^{-1}.
\end{aligned}
\]
Substituting these values into (5.4.5) and dividing it by c², we obtain the ordinary differential equation for the function ϕ:
\[
\frac{d^2\varphi}{d\xi^2} + \frac{2}{\xi}\frac{d\varphi}{d\xi} - \frac{\lambda^2}{c^2}\varphi = 0. \tag{5.4.6}
\]
Introduce the new function ψ = ψ(ξ) by the equality:
\[
\psi(\xi) = \xi\,\varphi(\xi). \tag{5.4.7}
\]


Then
\[
\frac{d\varphi}{d\xi} = -\xi^{-2}\psi + \xi^{-1}\frac{d\psi}{d\xi}, \qquad
\frac{d^2\varphi}{d\xi^2} = 2\xi^{-3}\psi - 2\xi^{-2}\frac{d\psi}{d\xi} + \xi^{-1}\frac{d^2\psi}{d\xi^2},
\]
and (5.4.6) transforms into the following equation for the function ψ:
\[
\frac{d^2\psi}{d\xi^2} - \frac{\lambda^2}{c^2}\psi = 0. \tag{5.4.8}
\]
Equation (5.4.8) is an ordinary differential equation of second order with constant coefficients and its general solution has the form:
\[
\psi(\xi) = A e^{\frac{\lambda}{c}\xi} + B e^{-\frac{\lambda}{c}\xi}, \tag{5.4.9}
\]
where A and B are arbitrary constants. Returning now to the initial variables and taking into account all the changes and substitutions made, namely (5.4.7), (5.4.4) and (5.4.2), we obtain a solution to equation (5.4.1) of the form:
\[
u(x_1,x_2,t) = \frac{e^{-\lambda t}}{\sqrt{c^2t^2-(x_1^2+x_2^2)}}\Bigl\{A\, e^{\frac{\lambda}{c}\sqrt{c^2t^2-(x_1^2+x_2^2)}} + B\, e^{-\frac{\lambda}{c}\sqrt{c^2t^2-(x_1^2+x_2^2)}}\Bigr\}, \tag{5.4.10}
\]
which, by varying the coefficients A and B, generates a subspace in the space of solutions of the telegraph equation (5.4.1). Obviously, the density (5.2.2) emerges from (5.4.10) for the values of the constants A = λ/(2πc), B = 0.

In view of (5.3.5), one can conclude that density (5.2.5) is the solution of the Cauchy problem
\[
\frac{\partial^2 f}{\partial t^2} + 2\lambda\frac{\partial f}{\partial t} = c^2\Bigl(\frac{\partial^2 f}{\partial x_1^2}+\frac{\partial^2 f}{\partial x_2^2}\Bigr), \qquad
f\big|_{t=0} = \delta(x_1,x_2), \qquad \frac{\partial f}{\partial t}\bigg|_{t=0} = 0,
\]
or, which is the same, the solution to the generalized telegraph equation
\[
\frac{\partial^2 f}{\partial t^2} + 2\lambda\frac{\partial f}{\partial t} - c^2\Bigl(\frac{\partial^2 f}{\partial x_1^2}+\frac{\partial^2 f}{\partial x_2^2}\Bigr) = \delta(x_1,x_2)\,\delta(t).
\]
Therefore, density (5.2.2) is the Green's function of the telegraph equation (5.4.1). The theorem is thus completely proved.

Remark 5.4.1. Dividing the telegraph equation (5.4.1) by 2λ and passing to the limit under the Kac's condition (4.8.1), we see that it transforms into the two-dimensional heat equation
\[
\frac{\partial u}{\partial t} = \frac{\rho}{2}\Bigl(\frac{\partial^2 u}{\partial x_1^2}+\frac{\partial^2 u}{\partial x_2^2}\Bigr). \tag{5.4.11}
\]
This is fully consistent with the analogous result (4.8.9) for dimension m = 2.

Remark 5.4.2. From (5.4.3) it follows that the function w(x₁,x₂,t) = e^{λt} p(x₁,x₂,t) is an eigenfunction of the two-dimensional wave operator
\[
\Box_2 = \frac{\partial^2}{\partial t^2} - c^2\Bigl(\frac{\partial^2}{\partial x_1^2}+\frac{\partial^2}{\partial x_2^2}\Bigr),
\]
and the number λ² is the respective eigenvalue. This emphasizes again the wave nature of random flights, already noted in Remark 5.2.3. This is also evidenced by the fact that the transition density of the process is the fundamental solution (the Green's function) of the damped wave equation (5.4.1).
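Theorem 5.4.1 admits a quick numerical check: at an interior point of the disc, the explicit density (5.2.2) should annihilate the telegraph operator of (5.4.1) up to finite-difference error. A sketch with central differences and illustrative parameter values:

```python
import math

c, lam = 1.0, 1.0

def p(x1, x2, t):
    """Density (5.2.2) of the planar Markov random flight."""
    xi = math.sqrt(c * c * t * t - x1 * x1 - x2 * x2)
    return lam / (2 * math.pi * c) * math.exp(-lam * t + lam * xi / c) / xi

def second(f, h):
    """Central second difference of a one-argument perturbation function."""
    return (f(h) - 2 * f(0.0) + f(-h)) / (h * h)

x1, x2, t, h = 0.3, 0.2, 1.0, 1e-3        # interior point, well away from the boundary
p_tt = second(lambda e: p(x1, x2, t + e), h)
p_t = (p(x1, x2, t + h) - p(x1, x2, t - h)) / (2 * h)
lap = second(lambda e: p(x1 + e, x2, t), h) + second(lambda e: p(x1, x2 + e, t), h)

residual = p_tt + 2 * lam * p_t - c * c * lap   # telegraph operator applied to p
print(residual)
```

The individual terms are of order one, so a residual at the 10⁻⁵ level confirms the identity to finite-difference accuracy.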


5.5

Limit theorem

A limit theorem for the Markov random flight in the Euclidean space R^m of arbitrary dimension m ≥ 2 was proved in Section 4.8. Using the fact that the distribution of the planar random flight X(t) was obtained in an explicit form, we can now check once more the validity of Theorem 4.8.1. To do this, one needs to show that the transition density (5.2.5), under the Kac's condition (4.8.1), converges to the transition density of the two-dimensional Wiener process. It is easy to see that, under the Kac's scaling condition, the singular part of density (5.2.5) vanishes. Therefore, we just need to study the limiting behaviour of the absolutely continuous part of the transition density of X(t) given by (5.2.2).

Theorem 5.5.1. Under the Kac's condition (4.8.1), the following limiting relation holds:
\[
\lim_{\substack{c,\lambda\to\infty\\ (c^2/\lambda)\to\rho}}
\Biggl[\frac{\lambda}{2\pi c}\,\frac{\exp\bigl(-\lambda t+\frac{\lambda}{c}\sqrt{c^2t^2-\|x\|^2}\bigr)}{\sqrt{c^2t^2-\|x\|^2}}\Biggr]
= \frac{1}{2\pi\rho t}\exp\Bigl(-\frac{\|x\|^2}{2\rho t}\Bigr). \tag{5.5.1}
\]

Proof. Let us represent the density (5.2.2) in the form:
\[
p(x,t) = \frac{\lambda}{2\pi c^2 t}\,\frac{\exp\Bigl(-\lambda t+\lambda t\sqrt{1-\frac{\|x\|^2}{c^2t^2}}\Bigr)}{\sqrt{1-\frac{\|x\|^2}{c^2t^2}}}, \qquad \|x\|<ct. \tag{5.5.2}
\]
Since ‖x‖ < ct, that is, ‖x‖/(ct) < 1, we can represent the radical in the exponential factor in (5.5.2) as the absolutely and uniformly converging series
\[
\sqrt{1-\frac{\|x\|^2}{c^2t^2}} = 1 - \frac{1}{2}\,\frac{\|x\|^2}{c^2t^2} - \frac{1\cdot 1}{2\cdot 4}\Bigl(\frac{\|x\|^2}{c^2t^2}\Bigr)^{2} - \frac{1\cdot 1\cdot 3}{2\cdot 4\cdot 6}\Bigl(\frac{\|x\|^2}{c^2t^2}\Bigr)^{3} - \dots
\]
Substituting this series expansion into (5.5.2) and passing to the limit in each term separately, we get:
\[
\begin{aligned}
\lim_{\substack{c,\lambda\to\infty\\ (c^2/\lambda)\to\rho}} p(x,t)
&= \lim_{\substack{c,\lambda\to\infty\\ (c^2/\lambda)\to\rho}}
\frac{\lambda}{2\pi c^2 t}\,\frac{\exp\Bigl(-\lambda t+\lambda t\sqrt{1-\frac{\|x\|^2}{c^2t^2}}\Bigr)}{\sqrt{1-\frac{\|x\|^2}{c^2t^2}}} \\
&= \frac{1}{2\pi\rho t}\lim_{\substack{c,\lambda\to\infty\\ (c^2/\lambda)\to\rho}}
\exp\Bigl(-\frac{1}{2}\,\frac{\lambda\|x\|^2}{c^2 t} - \frac{1\cdot 1}{2\cdot 4}\,\frac{\lambda\|x\|^4}{c^4 t^3} - \frac{1\cdot 1\cdot 3}{2\cdot 4\cdot 6}\,\frac{\lambda\|x\|^6}{c^6 t^5} - \dots\Bigr) \\
&= \frac{1}{2\pi\rho t}\exp\Bigl(-\frac{\|x\|^2}{2\rho t}\Bigr).
\end{aligned}
\]

The theorem is proved.
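How quickly the limit (5.5.1) sets in can be seen numerically: fixing ρ and letting λ grow with c = √(ρλ), as prescribed by the Kac's condition, the density (5.2.2) is already very close to its Gaussian limit for moderately large λ. Illustrative values:

```python
import math

def p_flight(r2, t, c, lam):
    """Density (5.2.2) as a function of the squared distance r2 = ||x||^2."""
    xi = math.sqrt(c * c * t * t - r2)
    return lam / (2 * math.pi * c) * math.exp(-lam * t + lam * xi / c) / xi

def p_wiener(r2, t, rho):
    """Gaussian limit on the right-hand side of (5.5.1)."""
    return math.exp(-r2 / (2 * rho * t)) / (2 * math.pi * rho * t)

rho, t, r2 = 1.0, 1.0, 0.25
lam = 400.0
c = math.sqrt(rho * lam)        # Kac scaling: c^2 / lambda = rho
rel_err = abs(p_flight(r2, t, c, lam) - p_wiener(r2, t, rho)) / p_wiener(r2, t, rho)
print(rel_err)
```

For λ = 400 the relative discrepancy at this point is already of order 10⁻⁴, consistent with the term-by-term passage to the limit in the proof.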


The function on the right-hand side of (5.5.1) is the transition density of the two-dimensional homogeneous Wiener process with zero drift and diffusion coefficient σ² = ρ. Obviously, this density (5.5.1) exactly coincides with the density on the right-hand side of (4.8.2) for m = 2.

When proving Theorem 5.5.1, we used the fact that the distribution of the random flight X(t) is already known (see (5.2.2)), and the study of the limiting behaviour of a stochastic process with known distribution is, generally speaking, not a difficult problem. But the same result can be obtained without using the known distribution (5.2.1), and this way is a much more difficult one. We will use Kurtz's diffusion approximation Theorem 1.4.1 in order to prove that, under the Kac's scaling condition, the planar random flight X(t) weakly converges to a Wiener process in the plane, and this implies the convergence of distributions.

Since the random flight X(t) depends on two parameters, namely the speed of motion c and the intensity of switchings λ, it can be considered as the two-parameter family of stochastic processes X(t) = {X_{c,λ}(t), c > 0, λ > 0}. For the sake of simplicity, we omit these indices, bearing in mind, however, that we are now operating with a two-parameter family of stochastic processes. Hence, our aim is to study the limiting behaviour of this family of stochastic processes when its parameters are connected with each other by the Kac's condition (4.8.1).

Let Φ(t) denote the particle's direction at the time instant t > 0. Consider the joint densities of the particle's position in the plane R² and of its direction, f_θ = f_θ(x,t), x = (x₁,x₂) ∈ int B_{ct}^2, θ ∈ [0,2π), t > 0, defined by the equality
\[
f_\theta(x,t)\,\mu(dx)\,d\theta = \Pr\{X(t)\in dx,\ \Phi(t)\in d\theta\},
\]
where µ(dx) is the Lebesgue measure in R² and dθ is the infinitesimal angle.
The backward Kolmogorov equation written for these densities is represented by the following system of a continuum number of first-order integro-differential equations:
\[
\frac{\partial f_\theta}{\partial t} = -c\cos\theta\,\frac{\partial f_\theta}{\partial x_1} - c\sin\theta\,\frac{\partial f_\theta}{\partial x_2} - \lambda f_\theta + \frac{\lambda}{2\pi}\int_0^{2\pi} f_\eta\, d\eta, \qquad \theta\in[0,2\pi). \tag{5.5.3}
\]
Equation (5.5.3) is a particular case (for the uniform dissipation function identically equal to 1/(2π)) of a more general equation with an arbitrary bounded dissipation function (see, for example, [199, page 40]). One should note that the integral term in (5.5.3) appears due to the continuum number of directions.

Consider the Banach space B of twice continuously differentiable functions on R² × (0,∞) with the sup-norm, vanishing at infinity. Generally speaking, since, in virtue of the finite speed of propagation, at an arbitrary time instant t > 0 the process X(t) is concentrated in the closed and bounded (that is, compact) set B_{ct}^2, for our purposes it would be sufficient to consider functions on a compact set. However, this condition can be weakened and we can prove the convergence for functions from the Banach space B. The densities f_θ can be considered as the one-parameter family f = {f_θ, θ ∈ [0,2π)} of functions belonging to B. Consider the one-parameter family A = {A_θ, θ ∈ [0,2π)} of operators acting in B, where
\[
A_\theta = -c\cos\theta\,\frac{\partial}{\partial x_1} - c\sin\theta\,\frac{\partial}{\partial x_2}.
\]
Define the action of A on f as follows:
\[
Af = \{\delta(\theta,\eta)\,A_\theta f_\eta,\ \theta,\eta\in[0,2\pi)\}, \tag{5.5.4}
\]


where
\[
\delta(\theta,\eta) = \begin{cases} 1, & \text{if } \theta=\eta,\\ 0, & \text{if } \theta\neq\eta, \end{cases}
\]
is the generalized Kronecker δ-symbol of rank 2. Introduce the operator Λ acting on f as follows:
\[
\Lambda f = -\lambda f + \frac{\lambda}{2\pi}\int_0^{2\pi} f_\theta\, d\theta, \tag{5.5.5}
\]
where λf = {λf_θ, θ ∈ [0,2π)}. Then equation (5.5.3) can be written in the form:
\[
\frac{\partial f}{\partial t} = Af + \Lambda f, \tag{5.5.6}
\]
and we obtain the classical form of the evolutionary equation. With this in hand, we can now prove the weak convergence theorem for the Markov random flight X(t).

Theorem 5.5.2. Let the Kac's scaling condition (4.8.1) be fulfilled. Then, in the Banach space B, the semigroups generated by the transition functions of the Markov random flight X(t) converge to the semigroup generated by the transition function of the homogeneous Wiener process in R² with the generator
\[
G = \frac{\rho}{2}\,\Delta, \tag{5.5.7}
\]
where Δ is the two-dimensional Laplace operator.

Proof. According to the formulas (1.4.3) and (1.4.7) of the Kurtz's diffusion Theorem 1.4.1, one needs to find a solution h to the equation
\[
\Lambda h = -Af \tag{5.5.8}
\]

for an arbitrary function (family) f ∈ D₀. It is easy to check that, for any differentiable function f, the solution of equation (5.5.8) is given by the formula:
\[
h = \frac{1}{\lambda}\,Af + \frac{1}{2\pi}\int_0^{2\pi} f_\theta\, d\theta. \tag{5.5.9}
\]
Really, for any f ∈ B, we have:
\[
\int_0^{2\pi} Af\, d\theta = \Bigl(\int_0^{2\pi} A_\theta\, d\theta\Bigr) f
= \Bigl[-c\Bigl(\int_0^{2\pi}\cos\theta\, d\theta\Bigr)\frac{\partial}{\partial x_1} - c\Bigl(\int_0^{2\pi}\sin\theta\, d\theta\Bigr)\frac{\partial}{\partial x_2}\Bigr] f = 0. \tag{5.5.10}
\]
Then, taking into account (5.5.5) and (5.5.10), we get:
\[
\begin{aligned}
\Lambda h &= -\lambda\Bigl(\frac{1}{\lambda}Af + \frac{1}{2\pi}\int_0^{2\pi} f_\varphi\, d\varphi\Bigr)
+ \frac{\lambda}{2\pi}\int_0^{2\pi}\Bigl(\frac{1}{\lambda}Af + \frac{1}{2\pi}\int_0^{2\pi} f_\varphi\, d\varphi\Bigr) d\theta \\
&= -Af - \frac{\lambda}{2\pi}\int_0^{2\pi} f_\varphi\, d\varphi + \frac{1}{2\pi}\int_0^{2\pi} Af\, d\theta + \frac{\lambda}{4\pi^2}\int_0^{2\pi} d\theta\int_0^{2\pi} f_\varphi\, d\varphi \\
&= -Af - \frac{\lambda}{2\pi}\int_0^{2\pi} f_\varphi\, d\varphi + \frac{1}{2\pi}\Bigl(\int_0^{2\pi} A_\theta\, d\theta\Bigr) f + \frac{\lambda}{2\pi}\int_0^{2\pi} f_\varphi\, d\varphi \\
&= -Af,
\end{aligned}
\]


and (5.5.8) is fulfilled. Therefore, function (5.5.9) is indeed the solution of equation (5.5.8).

Our next step is to evaluate the projector given by (1.4.8). Since the limiting distribution of the governing Markov process on the sphere (circumference) S₁² is the uniform distribution with the density 1/(2π), formula (1.4.8) simplifies and the projector is, obviously, given by the formula:
\[
Pf = \frac{1}{2\pi}\int_0^{2\pi} f_\theta\, d\theta. \tag{5.5.11}
\]
Then, according to (1.4.4), (5.5.9) and (5.5.11), we obtain:
\[
C_0 f = PAh = \frac{1}{2\pi}\int_0^{2\pi} A\Bigl(\frac{1}{\lambda}Af + \frac{1}{2\pi}\int_0^{2\pi} f_\varphi\, d\varphi\Bigr) d\theta
= \frac{1}{2\pi\lambda}\int_0^{2\pi} A^2 f\, d\theta + \frac{1}{4\pi^2}\int_0^{2\pi} A\, d\theta\int_0^{2\pi} f_\varphi\, d\varphi.
\]
In view of the trigonometric equalities
\[
\int_0^{2\pi}\sin^2\theta\, d\theta = \pi, \qquad \int_0^{2\pi}\cos^2\theta\, d\theta = \pi, \qquad \int_0^{2\pi}\sin\theta\cos\theta\, d\theta = 0,
\]
we get:
\[
\begin{aligned}
\int_0^{2\pi} A^2 f\, d\theta &= \Bigl(\int_0^{2\pi} A_\theta^2\, d\theta\Bigr) f \\
&= \Bigl[c^2\Bigl(\int_0^{2\pi}\cos^2\theta\, d\theta\Bigr)\frac{\partial^2}{\partial x_1^2}
+ 2c^2\Bigl(\int_0^{2\pi}\sin\theta\cos\theta\, d\theta\Bigr)\frac{\partial^2}{\partial x_1\partial x_2}
+ c^2\Bigl(\int_0^{2\pi}\sin^2\theta\, d\theta\Bigr)\frac{\partial^2}{\partial x_2^2}\Bigr] f \\
&= \pi c^2\,\Delta f,
\end{aligned} \tag{5.5.12}
\]
where Δ is the two-dimensional Laplace operator. Hence, taking into account (5.5.10) and (5.5.12), for any f ∈ B we have:
\[
C_0 f = \frac{1}{2\pi\lambda}\,\pi c^2\,\Delta f = \frac{c^2}{2\lambda}\,\Delta f.
\]
Thus, we have obtained the operator
\[
C_0 = \frac{c^2}{2\lambda}\,\Delta \tag{5.5.13}
\]

and, therefore, the generator (5.5.7), under the Kac's condition (4.8.1), is the limiting operator of the Markov random flight X(t).

It remains to check that conditions (1.4.2) and (1.4.5) of the Kurtz's Theorem 1.4.1 are also fulfilled. In view of (5.5.10), for any continuously differentiable function f we have:
\[
Cf = PAf = \frac{1}{2\pi}\Bigl(\int_0^{2\pi} A_\theta\, d\theta\Bigr) f = 0,
\]
and, therefore, condition (1.4.2) is fulfilled.


In order to check the fulfilment of condition (1.4.5), it suffices to show that, for any twice continuously differentiable function f, there exists a solution g to the equation
\[
(\mu - C_0)\,g = f \tag{5.5.14}
\]
for some µ > 0. It is easy to see that, for any µ > 0, equation (5.5.14) with operator (5.5.13) is the inhomogeneous Klein–Gordon equation (or the Helmholtz equation with a purely imaginary constant) with a sufficiently smooth right-hand side. The existence of a solution of this equation for any µ > 0 is a well-known fact from the general theory of partial differential equations (see, for example, [207, Chapter V, Section 30] or [23, Chapter IV]). Thus, condition (1.4.5) is also fulfilled.

Therefore, according to Kurtz's Theorem 1.4.1, one can conclude that, under the Kac's scaling condition (4.8.1), the semigroups generated by the distributions of the Markov random flight X(t) converge in B to the semigroup generated by the distribution of a homogeneous Wiener process in R² with generator (5.5.7). The theorem is thus completely proved.

5.6

Alternative derivation of transition density

The derivation of the distribution of the planar Markov random flight X(t) given in Theorems 5.1.1 and 5.2.1 is based on obtaining the conditional characteristic functions by means of integrating the products of Bessel functions and on evaluating their inverse Fourier transforms, which yields the conditional distributions of the process. By then applying the total probability formula, we arrive at a closed-form expression for the distribution of X(t). This multi-step scheme, rather complicated from the computational point of view, can be bypassed if we use the deep relationship between random flights and wave processes noted above (see Remarks 5.2.3 and 5.4.2). In this section we give a simple and short derivation of density (5.2.2) based on specific properties of wave propagation in a plane. Our goal is to demonstrate how easily this important result (5.2.2) can be obtained with an appropriate physical interpretation of the model.

The relationship between finite-velocity random motions and wave processes consists in the fact that the distribution density is a damped spatial wave that propagates outwards from the starting point with a finite velocity. This qualitatively quite understandable phenomenon, which behaves similarly in a space of any dimension, is nevertheless very difficult for analytical study. The complexity of the analysis essentially depends both on the dimension of the space and on the parity of the dimension itself. The reason for this, obviously, lies in the specifics of the behaviour of wave processes in different Euclidean spaces. It is well known that wave processes behave differently in spaces of even and odd dimensions. In particular, wave diffusion takes place in spaces of even dimension, while the Huygens principle does not hold (see, for instance, [207, Chapter 3, Section 14]).
Wave propagation in the plane with speed c is described by the two-dimensional wave equation
\[
\frac{\partial^2 u}{\partial t^2} = c^2\Bigl(\frac{\partial^2 u}{\partial x_1^2}+\frac{\partial^2 u}{\partial x_2^2}\Bigr), \qquad u = u(x,t) = u(x_1,x_2,t), \quad t>0. \tag{5.6.1}
\]
The fundamental solution (the Green's function) of this wave equation (5.6.1) has the form:
\[
u(x,t) = \frac{\Theta(ct-\|x\|)}{2\pi c\sqrt{c^2t^2-\|x\|^2}}, \tag{5.6.2}
\]


where Θ(x) is the Heaviside unit-step function. From this fact it follows that a perturbation from the instant point-like source δ(x)δ(t) at time t > 0 is concentrated in the closed circle of radius ct centred at the origin. Thus, we can observe the forward front of the wave ‖x‖ = ct moving at speed c in the plane. The incoming perturbation persists behind the forward front of the wave at all other time moments, while a backward front is absent (for more details, see [207, Chapter 3, Section 14]).

The wave nature of random flight X(t) suggests looking for its density p(x,t) in the form of the uniformly converging series
\[
p(x,t) = e^{-\lambda t}\sum_{k=-1}^{\infty} a_k\bigl(\sqrt{c^2t^2-\|x\|^2}\bigr)^{k}, \tag{5.6.3}
\]

where, due to the absolute spatial symmetry of the motion, the unknown coefficients ak , k ≥ −1, do not depend on the spatial variable x and this representation (5.6.3) is unique. Let us now justify series representation (5.6.3). First of all, we notice that the exponential factor e−λt must be presented in the density of the planar Markov random flight X(t) and this is true in any dimension. This follows from the fact that, since X(t) is driven by a Poisson process of rate λ, the transition density p(x, t) can be represented in the form: p(x, t) = e−λt

∞ X (λt)n pn (x, t), n! n=1

where pn (x, t) are the conditional densities of X(t) conditioned by the random events {N (t) = n} (remind that N (t) is the number of Poisson events occurred in the time interval (0, t)). The presence of exponential factor in (5.6.3) introduces a damping effect in density propagation and this fact was already noted above. Let us now justify the form of series in (5.6.3). Since the process X(t) propagates with finite velocity, it should be described by some hyperbolic partial differential equation. Since the speed is finite and the phase space is isotropic, the random flight X(t) generates some isotropic transport process in the plane. That is why this equation must be a secondorder hyperbolic equation with constant coefficients. The transition density p(x, t) of X(t) is the fundamental solution (the Green’s function) to this equation. But it is well known (see [23, Chapter 3, Section 3, item 2]) that fundamental solutions of such a type of equations are expressed in terms of the ‘hyperbolic distance’ p g(x, t) = c2 t2 − kxk2 . (5.6.4) Function (5.6.4), as well as its integer powers, play a very important role in studying the isotropic processes. For example, the squared function g 2 (x, t) is the characteristic cone of the two-dimensional wave equation (5.6.1) (see [23, Chapter 6, Section 1, item 2]). The fundamental solution (5.6.2) to wave equation (5.6.1) is also expressed as a power of function (5.6.4), namely, u(x, t) = (1/(2πc))g −1 (x, t). The reason is that the function g(x, t), as well as its integer powers, are nothing but traveling plane waves. The classical plane wave method is one of the most powerful methods in mathematical physics (see, for instance, [82, Chapters 1,2,4] or [23, Chapter 6, items 12,13,14])). It is based on the fundamental wave superposition principle (see, for instance [207, Chapter 3, Section 14]). 
According to this principle, the perturbation at an arbitrary point of the phase space is a result of the superposition (overlaying) of the perturbations coming from all other points at which they are non-zero at a given time. From this principle it follows that a spatial wave can be decomposed into a linear combination of elementary plane waves and, conversely, any linear combination of plane waves generates some spatial wave. Therefore, according
to this principle, the transition density $p(x,t)$ of the random flight $X(t)$ (which, recall, is a plane wave and, on the other hand, is a fundamental solution of some second-order hyperbolic partial differential equation with constant coefficients) can be represented in the form of a uniformly converging series in the integer powers of function (5.6.4). However, since the function $p(x,t)$ must possess the properties of a probability density and, therefore, must be integrable in the disc $\mathbf{B}^2_{ct}$, we should consider not all, but only the integrable powers of function (5.6.4). Passing to polar coordinates, we see that all the integrable powers of function (5.6.4) are given by the formula
\[
\iint_{\mathbf{B}^2_{ct}} \left(\sqrt{c^2t^2-\|x\|^2}\right)^k \mu(dx)
= \iint_{x_1^2+x_2^2 \le c^2t^2} \left(c^2t^2-(x_1^2+x_2^2)\right)^{k/2} dx_1\, dx_2
= 2\pi (ct)^{k+2} \int_0^1 z\,(1-z^2)^{k/2}\, dz
= \frac{2\pi (ct)^{k+2}}{k+2}, \qquad k \ge -1. \tag{5.6.5}
\]
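The chain of equalities in (5.6.5) is easy to confirm numerically. The sketch below (parameter values $c$, $t$ are arbitrary illustrative choices; the substitutions $r = ct\,z$, $z = \sin u$ merely remove the integrable endpoint singularity at $k=-1$) evaluates the left-hand side by the midpoint rule and compares it with $2\pi(ct)^{k+2}/(k+2)$:

```python
import math

def disc_power_integral(k, c=2.0, t=1.5, n=20000):
    """Evaluate the integral of (c^2 t^2 - |x|^2)^(k/2) over the disc B^2_ct.
    In polar coordinates with r = ct*z and z = sin(u), the integrand becomes
    the smooth function sin(u) * cos(u)^(k+1) on [0, pi/2], so the midpoint
    rule converges quickly even for k = -1."""
    h = (math.pi / 2) / n
    s = sum(math.sin(u) * math.cos(u) ** (k + 1)
            for u in (h * (i + 0.5) for i in range(n)))
    return 2 * math.pi * (c * t) ** (k + 2) * s * h

c, t = 2.0, 1.5
for k in (-1, 0, 1, 2, 3):
    exact = 2 * math.pi * (c * t) ** (k + 2) / (k + 2)   # right-hand side of (5.6.5)
    assert abs(disc_power_integral(k, c, t) - exact) < 1e-4 * exact
```

The same computation with $k \le -2$ diverges, in line with the remark that only powers $k \ge -1$ are integrable.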

Note that, for $k \le -2$, the integral on the left-hand side of (5.6.5) does not exist. Hence, one should consider only the powers $k \ge -1$ in (5.6.3). Thus, we have completely justified the form of density (5.6.3), based on the wave nature of random flights and the wave superposition principle.

It remains to find the unknown coefficients $a_k$, $k \ge -1$, of series (5.6.3). To do so, let us integrate equality (5.6.3) over the disc $\mathbf{B}^2_{ct}$. Then, taking into account (5.1.2) and (5.6.5), we get
\[
1 - e^{-\lambda t} = e^{-\lambda t} \sum_{k=-1}^{\infty} a_k\, \frac{2\pi (ct)^{k+2}}{k+2}.
\]
Multiplying this equality by $e^{\lambda t}$, we can rewrite it as follows:
\[
e^{\lambda t} - 1 = \sum_{k=1}^{\infty} a_{k-2}\, \frac{2\pi (ct)^k}{k}.
\]
Expanding the function on the left-hand side of this equality into a Taylor series, we can represent it in the form
\[
\sum_{k=1}^{\infty} \frac{\lambda^k}{k!}\, t^k = \sum_{k=1}^{\infty} a_{k-2}\, \frac{2\pi c^k}{k}\, t^k. \tag{5.6.6}
\]
Equality (5.6.6) must be fulfilled for any $t>0$; therefore, it must be fulfilled identically. This means that the series on both sides of (5.6.6) are two Taylor expansions of the same smooth function of the variable $t$. But the Taylor expansion of an arbitrary smooth function is unique. Hence, the coefficients of the same powers of $t$ must coincide, and we obtain
\[
\frac{\lambda^k}{k!} = a_{k-2}\, \frac{2\pi c^k}{k}, \qquad k \ge 1.
\]
From this, we get the desired coefficients:
\[
a_k = \frac{\lambda^{k+2}}{2\pi (k+1)!\, c^{k+2}}, \qquad k \ge -1. \tag{5.6.7}
\]
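As a quick numerical sanity check of (5.6.7), the partial sums of series (5.6.3) with these coefficients converge very rapidly to the closed-form density (5.2.2). A minimal sketch (the values of $\lambda$, $c$, $t$ and the sample point $x$ inside the disc are illustrative assumptions of the example):

```python
import math

lam, c, t = 1.3, 2.0, 1.5
x = (0.7, -0.4)                                   # a point inside the disc B^2_ct
g = math.sqrt((c * t) ** 2 - (x[0] ** 2 + x[1] ** 2))   # 'hyperbolic distance' (5.6.4)

# Partial sum of series (5.6.3) with coefficients a_k from (5.6.7)
series = math.exp(-lam * t) * sum(
    lam ** (k + 2) / (2 * math.pi * math.factorial(k + 1) * c ** (k + 2)) * g ** k
    for k in range(-1, 40))

# Closed-form density (5.2.2)
closed = lam / (2 * math.pi * c) * math.exp(-lam * t + (lam / c) * g) / g

assert abs(series - closed) < 1e-12
```

Shifting the summation index ($j = k+1$) turns the series into the exponential series for $e^{(\lambda/c)g}$, which is exactly why the agreement is to machine precision.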


Substituting now the coefficients (5.6.7) into (5.6.3), we obtain
\[
\begin{aligned}
p(x,t) &= e^{-\lambda t} \sum_{k=-1}^{\infty} \frac{\lambda^{k+2}}{2\pi (k+1)!\, c^{k+2}} \left(\sqrt{c^2t^2-\|x\|^2}\right)^k \\
&= \frac{\lambda}{2\pi c}\, \frac{e^{-\lambda t}}{\sqrt{c^2t^2-\|x\|^2}} \sum_{k=0}^{\infty} \frac{1}{k!} \left(\frac{\lambda}{c}\sqrt{c^2t^2-\|x\|^2}\right)^k \\
&= \frac{\lambda}{2\pi c}\, \frac{\exp\left(-\lambda t + \frac{\lambda}{c}\sqrt{c^2t^2-\|x\|^2}\right)}{\sqrt{c^2t^2-\|x\|^2}},
\end{aligned}
\]
and this exactly coincides with the already known density (5.2.2).

To prove the uniqueness of (5.6.3), suppose that there exist two representations of the density:
\[
p(x,t) = e^{-\lambda t} \sum_{k=-1}^{\infty} a_k \left(\sqrt{c^2t^2-\|x\|^2}\right)^k, \qquad
p(x,t) = e^{-\lambda t} \sum_{k=-1}^{\infty} b_k \left(\sqrt{c^2t^2-\|x\|^2}\right)^k.
\]
Then
\[
\sum_{k=-1}^{\infty} a_k \left(\sqrt{c^2t^2-\|x\|^2}\right)^k = \sum_{k=-1}^{\infty} b_k \left(\sqrt{c^2t^2-\|x\|^2}\right)^k.
\]
Integrating this equality over the disc $\mathbf{B}^2_{ct}$ and taking into account (5.6.5), we get
\[
\sum_{k=-1}^{\infty} a_k\, \frac{2\pi (ct)^{k+2}}{k+2} = \sum_{k=-1}^{\infty} b_k\, \frac{2\pi (ct)^{k+2}}{k+2}.
\]

By the same reasoning as above, we conclude that the coefficients of the same powers of the variable $t$ must coincide and, therefore, $a_k = b_k$ for any $k \ge -1$. From this, the uniqueness of representation (5.6.3) follows.

Remark 5.6.1. The analysis above was substantially based on the specific properties of the process of wave propagation in the plane $\mathbb{R}^2$. In particular, the well-known idea of mathematical physics of representing a spatial wave as a linear combination of elementary plane waves was applied in writing the transition density in the form of series (5.6.3). However, in other dimensions this idea, apparently, cannot be used straightforwardly. In particular, in odd-dimensional spaces the Huygens principle holds and wave diffusion is absent. Mathematically, this fact is expressed in the peculiarity that the Green's function of the wave equation in any odd dimension is concentrated on the boundary of the diffusion area. Therefore, it seems unlikely that the transition density of the Markov random flight in odd-dimensional Euclidean spaces can be represented in the form of series (5.6.3). In even-dimensional spaces this idea, apparently, might be used; however, series expansions with respect to much more complicated functions (spherical or hypergeometric) should then be considered.

Remark 5.6.2. One should emphasize the important role of the property that the random flight $X(t)$ is governed by a homogeneous Poisson process. In this case, as is well known, the transition density $p(x,t)$ behaves like a plane wave, and this fact was taken as the basis for the alternative derivation of its explicit form in this section. However, if the governing process is not a Poisson one, we cannot assert that the density behaves like a wave. Therefore, a similar analysis is not applicable in that case.

5.7 Moments

In this section we give results related to the mixed moments of the planar Markov random flight $X(t) = (X_1(t), X_2(t))$, $t>0$. Since the distribution of $X(t)$ is known, we can evaluate the moments of the process explicitly. Let $q = (q_1, q_2)$ be a 2-multi-index. We are interested in the mixed moments
\[
\mathrm{E}\, X^q(t) = \mathrm{E}\, X_1^{q_1}(t)\, X_2^{q_2}(t), \qquad q_1 \ge 1, \quad q_2 \ge 1.
\]

The explicit form of the moments of the planar Markov random flight $X(t)$ is given by the following theorem.

Theorem 5.7.1. For any integers $q_1, q_2 \ge 1$ and any $t>0$, the following relation holds:
\[
\mathrm{E}\, X^q(t) =
\begin{cases}
\dfrac{e^{-\lambda t}}{\pi}\, (ct)^{q_1+q_2}\, B\!\left(\dfrac{q_1+1}{2}, \dfrac{q_2+1}{2}\right)
+ \dfrac{e^{-\lambda t}}{\sqrt{\pi}} \left(\dfrac{2}{\lambda t}\right)^{(q_1+q_2-1)/2} (ct)^{q_1+q_2}\, \Gamma\!\left(\dfrac{q_1+1}{2}\right) \Gamma\!\left(\dfrac{q_2+1}{2}\right) & \\
\qquad \times \left[ I_{(q_1+q_2+1)/2}(\lambda t) + L_{(q_1+q_2+1)/2}(\lambda t) \right], & \text{if } q_1 \text{ and } q_2 \text{ are even}, \\[2mm]
0, & \text{otherwise},
\end{cases} \tag{5.7.1}
\]
where
\[
I_\nu(z) = \sum_{k=0}^{\infty} \frac{1}{k!\, \Gamma(\nu+k+1)} \left(\frac{z}{2}\right)^{2k+\nu} \tag{5.7.2}
\]
is the modified Bessel function of order $\nu$,
\[
L_\nu(z) = \sum_{k=0}^{\infty} \frac{1}{\Gamma\!\left(k+\frac{3}{2}\right)\, \Gamma\!\left(\nu+k+\frac{3}{2}\right)} \left(\frac{z}{2}\right)^{2k+\nu+1} \tag{5.7.3}
\]
is the Struve function of order $\nu$, and $B(x,y)$ is the beta-function.

Proof. According to (5.2.5), we have
\[
\begin{aligned}
\mathrm{E}\, X^q(t) &= \frac{e^{-\lambda t}}{2\pi ct} \iint_{x_1^2+x_2^2 = c^2t^2} x_1^{q_1} x_2^{q_2}\, \sigma(dx) \\
&\quad + \frac{\lambda e^{-\lambda t}}{2\pi c} \iint_{x_1^2+x_2^2 \le c^2t^2} x_1^{q_1} x_2^{q_2}\, \frac{\exp\left(\frac{\lambda}{c}\sqrt{c^2t^2-(x_1^2+x_2^2)}\right)}{\sqrt{c^2t^2-(x_1^2+x_2^2)}}\, dx_1\, dx_2 \\
&= \frac{e^{-\lambda t}}{2\pi}\, (ct)^{q_1+q_2} \int_0^{2\pi} (\cos\theta)^{q_1} (\sin\theta)^{q_2}\, d\theta \\
&\quad + \frac{\lambda e^{-\lambda t}}{2\pi c} \int_0^{ct} \frac{\exp\left(\frac{\lambda}{c}\sqrt{c^2t^2-r^2}\right)}{\sqrt{c^2t^2-r^2}}\, r^{q_1+q_2+1}\, dr \int_0^{2\pi} (\cos\theta)^{q_1} (\sin\theta)^{q_2}\, d\theta,
\end{aligned}
\]
where we have passed to the polar coordinates $x_1 = r\cos\theta$, $x_2 = r\sin\theta$.


Taking into account that
\[
\int_0^{2\pi} (\cos\theta)^{q_1} (\sin\theta)^{q_2}\, d\theta =
\begin{cases}
2\, B\!\left(\dfrac{q_1+1}{2}, \dfrac{q_2+1}{2}\right), & \text{if } q_1 \text{ and } q_2 \text{ are even}, \\[1mm]
0, & \text{otherwise},
\end{cases}
\]
we obtain for even $q_1$ and $q_2$:
\[
\begin{aligned}
\mathrm{E}\, X^q(t) &= \frac{e^{-\lambda t}}{\pi}\, (ct)^{q_1+q_2}\, B\!\left(\frac{q_1+1}{2}, \frac{q_2+1}{2}\right) \\
&\quad + \frac{\lambda e^{-\lambda t}}{\pi c}\, B\!\left(\frac{q_1+1}{2}, \frac{q_2+1}{2}\right) (ct)^{q_1+q_2+1} \int_0^1 \xi^{q_1+q_2+1}\, \frac{e^{\lambda t\sqrt{1-\xi^2}}}{\sqrt{1-\xi^2}}\, d\xi.
\end{aligned}
\]
The change of variable $z = \sqrt{1-\xi^2}$ in the last integral reduces this expression to the formula
\[
\mathrm{E}\, X^q(t) = \frac{e^{-\lambda t}}{\pi}\, (ct)^{q_1+q_2}\, B\!\left(\frac{q_1+1}{2}, \frac{q_2+1}{2}\right)
+ \frac{\lambda e^{-\lambda t}}{\pi c}\, B\!\left(\frac{q_1+1}{2}, \frac{q_2+1}{2}\right) (ct)^{q_1+q_2+1} \int_0^1 (1-z^2)^{(q_1+q_2)/2}\, e^{\lambda t z}\, dz.
\]
Applying now [63, Formula 3.387(5)] to the integral on the right-hand side of this equality, we get
\[
\begin{aligned}
\mathrm{E}\, X^q(t) &= \frac{e^{-\lambda t}}{\pi}\, (ct)^{q_1+q_2}\, B\!\left(\frac{q_1+1}{2}, \frac{q_2+1}{2}\right) \\
&\quad + \frac{\lambda e^{-\lambda t}}{\pi c}\, B\!\left(\frac{q_1+1}{2}, \frac{q_2+1}{2}\right) (ct)^{q_1+q_2+1}\, \frac{\sqrt{\pi}}{2} \left(\frac{2}{\lambda t}\right)^{(q_1+q_2+1)/2} \Gamma\!\left(\frac{q_1+q_2}{2}+1\right) \left[ I_{(q_1+q_2+1)/2}(\lambda t) + L_{(q_1+q_2+1)/2}(\lambda t) \right] \\
&= \frac{e^{-\lambda t}}{\pi}\, (ct)^{q_1+q_2}\, B\!\left(\frac{q_1+1}{2}, \frac{q_2+1}{2}\right) \\
&\quad + \frac{e^{-\lambda t}}{\sqrt{\pi}} \left(\frac{2}{\lambda t}\right)^{(q_1+q_2-1)/2} (ct)^{q_1+q_2}\, \Gamma\!\left(\frac{q_1+1}{2}\right) \Gamma\!\left(\frac{q_2+1}{2}\right) \left[ I_{(q_1+q_2+1)/2}(\lambda t) + L_{(q_1+q_2+1)/2}(\lambda t) \right].
\end{aligned}
\]
The theorem is proved.

Consider the one-dimensional stochastic process
\[
R(t) = \|X(t)\| = \sqrt{X_1^2(t) + X_2^2(t)},
\]
representing the Euclidean distance between the moving particle and the origin $0$ at time


instant $t>0$. Obviously, $0 \le R(t) \le ct$ and, according to (5.2.3), the absolutely continuous part of the distribution of the process $R(t)$ has the form
\[
\Pr\{R(t) < r\} = \Pr\{X(t) \in \mathbf{B}^2_r\} = 1 - \exp\left(-\lambda t + \frac{\lambda}{c}\sqrt{c^2t^2-r^2}\right), \qquad 0 \le r < ct.
\]
Therefore, the complete density of $R(t)$ in the interval $0 \le r \le ct$ is given by the formula
\[
f(r,t) = \frac{r e^{-\lambda t}}{ct}\, \delta(ct-r) + \frac{\lambda}{c}\, \frac{r}{\sqrt{c^2t^2-r^2}}\, \exp\left(-\lambda t + \frac{\lambda}{c}\sqrt{c^2t^2-r^2}\right) \Theta(ct-r). \tag{5.7.4}
\]
In the following theorem we present an exact formula for the moments $\mathrm{E}\, R^q(t)$, $q \ge 1$, of the process $R(t)$.

Theorem 5.7.2. For any integer $q \ge 1$ and any $t>0$, the following relation holds:
\[
\mathrm{E}\, R^q(t) = (ct)^q e^{-\lambda t} + e^{-\lambda t} \sqrt{\pi} \left(\frac{2}{\lambda t}\right)^{(q-1)/2} (ct)^q\, \Gamma\!\left(\frac{q}{2}+1\right) \left[ I_{(q+1)/2}(\lambda t) + L_{(q+1)/2}(\lambda t) \right], \tag{5.7.5}
\]

where $I_\nu(x)$ and $L_\nu(x)$ are given by (5.7.2) and (5.7.3), respectively.

Proof. According to (5.7.4), we have
\[
\begin{aligned}
\mathrm{E}\, R^q(t) &= (ct)^q e^{-\lambda t} + \frac{\lambda e^{-\lambda t}}{c} \int_0^{ct} \frac{r^{q+1}}{\sqrt{c^2t^2-r^2}}\, e^{\frac{\lambda}{c}\sqrt{c^2t^2-r^2}}\, dr \\
&= (ct)^q e^{-\lambda t} + \frac{\lambda e^{-\lambda t}}{c}\, (ct)^{q+1} \int_0^1 \xi^{q+1} (1-\xi^2)^{-1/2}\, e^{\lambda t\sqrt{1-\xi^2}}\, d\xi.
\end{aligned}
\]
Making the change of variable $z = \sqrt{1-\xi^2}$ in the last integral, we get
\[
\begin{aligned}
\mathrm{E}\, R^q(t) &= (ct)^q e^{-\lambda t} + \frac{\lambda e^{-\lambda t}}{c}\, (ct)^{q+1} \int_0^1 (1-z^2)^{q/2}\, e^{\lambda t z}\, dz \\
&= (ct)^q e^{-\lambda t} + \frac{\lambda e^{-\lambda t}}{2c}\, (ct)^{q+1} \sqrt{\pi} \left(\frac{2}{\lambda t}\right)^{(q+1)/2} \Gamma\!\left(\frac{q}{2}+1\right) \left[ I_{(q+1)/2}(\lambda t) + L_{(q+1)/2}(\lambda t) \right] \\
&= (ct)^q e^{-\lambda t} + e^{-\lambda t} \sqrt{\pi} \left(\frac{2}{\lambda t}\right)^{(q-1)/2} (ct)^q\, \Gamma\!\left(\frac{q}{2}+1\right) \left[ I_{(q+1)/2}(\lambda t) + L_{(q+1)/2}(\lambda t) \right],
\end{aligned}
\]
where we have again used [63, Formula 3.387(5)]. The theorem is proved.

Remark 5.7.1. From (5.7.5) we can extract relations for the first two moments of $R(t)$, that is, for the expectation and the second moment:
\[
\mathrm{E}\, R(t) = ct\, e^{-\lambda t} \left\{ 1 + \frac{\pi}{2} \left[ I_1(\lambda t) + L_1(\lambda t) \right] \right\},
\qquad
\mathrm{E}\, R^2(t) = (ct)^2 e^{-\lambda t} \left\{ 1 + \sqrt{\frac{2\pi}{\lambda t}} \left[ I_{3/2}(\lambda t) + L_{3/2}(\lambda t) \right] \right\}.
\]
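Formula (5.7.5) lends itself to a direct numerical check: the same moment can be obtained by integrating $r^q$ against density (5.7.4). The sketch below (parameter values are illustrative; SciPy's `iv` and `modstruve` compute $I_\nu$ and $L_\nu$, and the substitution $r = ct\sin u$ removes the square-root singularity from the quadrature):

```python
import math
from scipy.special import iv, modstruve, gamma

def ERq(q, lam, c, t):
    """Moment E[R^q(t)] from the closed-form expression (5.7.5)."""
    nu = (q + 1) / 2
    return ((c * t) ** q * math.exp(-lam * t)
            + math.exp(-lam * t) * math.sqrt(math.pi)
            * (2 / (lam * t)) ** ((q - 1) / 2) * (c * t) ** q * gamma(q / 2 + 1)
            * (iv(nu, lam * t) + modstruve(nu, lam * t)))

def ERq_quad(q, lam, c, t, n=20000):
    """Same moment by direct integration of density (5.7.4).
    With r = ct*sin(u), the absolutely continuous part contributes
    (lam/c) * (ct*sin u)^(q+1) * exp(-lam*t + lam*t*cos u) du on [0, pi/2]."""
    h = (math.pi / 2) / n
    s = sum((lam / c) * (c * t * math.sin(u)) ** (q + 1)
            * math.exp(-lam * t + lam * t * math.cos(u))
            for u in (h * (i + 0.5) for i in range(n)))
    return (c * t) ** q * math.exp(-lam * t) + s * h   # singular part + a.c. part

lam, c, t = 1.5, 2.0, 0.8
for q in (1, 2, 3):
    assert abs(ERq(q, lam, c, t) - ERq_quad(q, lam, c, t)) < 1e-5
```

Setting $q=1$ and $q=2$ reproduces the two closed forms of Remark 5.7.1.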

5.8 Random flight with Gaussian starting point

In the previous sections of this chapter, we considered the symmetric Markov random flight $X(t)$ in the Euclidean plane $\mathbb{R}^2$ when the process starts from the origin $0 = (0,0)$. In this case, the explicit form of the transition density of $X(t)$ is given by formula (5.2.5). If $X^{x^0}(t)$ is a Markov random flight that starts from some arbitrary fixed point $x^0 = (x_1^0, x_2^0) \in \mathbb{R}^2$, then, under the condition that the phase space $\mathbb{R}^2$ is homogeneous and isotropic, the density of $X^{x^0}(t)$ has the form
\[
f^{x^0}(x,t) = f(x-x^0, t) = \frac{e^{-\lambda t}}{2\pi ct}\, \delta\!\left(c^2t^2 - \|x-x^0\|^2\right)
+ \frac{\lambda}{2\pi c}\, \frac{\exp\left(-\lambda t + \frac{\lambda}{c}\sqrt{c^2t^2 - \|x-x^0\|^2}\right)}{\sqrt{c^2t^2 - \|x-x^0\|^2}}\, \Theta(ct - \|x-x^0\|), \tag{5.8.1}
\]
\[
x = (x_1, x_2) \in \mathbf{B}(x^0, ct), \qquad \|x-x^0\| = \sqrt{(x_1-x_1^0)^2 + (x_2-x_2^0)^2}, \qquad t \ge 0,
\]
where $f(x,t)$ is the transition density of $X(t)$ given by (5.2.5).

Suppose that the starting point $x^0 = (x_1^0, x_2^0)$ of the process $X^{x^0}(t)$ is a two-dimensional random variable with a given density $p^{x^0}(x)$ in the plane $\mathbb{R}^2$. If the random vectors $X^{x^0}(t)$ and $x^0$ are independent for any $t>0$, then the density of $X^{x^0}(t)$ is given by the convolution
\[
\varphi^{x^0}(x,t) = f(x,t) * p^{x^0}(x) = \int_{\mathbb{R}^2} f(x-\xi, t)\, p^{x^0}(\xi)\, \mu(d\xi). \tag{5.8.2}
\]
In this section we derive a formula for density (5.8.2) in the case when the starting point $x^0 = (x_1^0, x_2^0)$ is a standard Gaussian vector with independent coordinates. Then the density $p^{x^0}(x)$ has the form
\[
p^{x^0}(x) = p^{x^0}(x_1, x_2) = \frac{1}{2\pi} \exp\left(-\frac{x_1^2+x_2^2}{2}\right). \tag{5.8.3}
\]
Since function (5.8.3) has a fairly simple form, we can evaluate convolution (5.8.2) and obtain two equivalent representations, as an integral and as a series, of the density of the process $X^{x^0}(t)$.

Theorem 5.8.1. The transition probability density of the Markov random flight $X^{x^0}(t)$ starting from a random point $x^0$ with Gaussian density (5.8.3) is given by the formula
\[
\varphi^{x^0}(x,t) = \frac{e^{-\lambda t}}{2\pi}\, e^{-(\|x\|^2+c^2t^2)/2}\, I_0(ct\|x\|)
+ \frac{\lambda t\, e^{-\lambda t}}{2\pi}\, e^{-(\|x\|^2+c^2t^2)/2} \int_0^1 e^{(c^2t^2/2)\xi^2 + \lambda t \xi}\, I_0\!\left(ct\|x\|\sqrt{1-\xi^2}\right) d\xi, \tag{5.8.4}
\]
where $I_0(z)$ is the modified Bessel function of zero order. Density (5.8.4) has the following series representation:
\[
\varphi^{x^0}(x,t) = \frac{e^{-\lambda t}}{2\pi}\, e^{-(\|x\|^2+c^2t^2)/2}\, I_0(ct\|x\|) + \frac{\lambda t\, e^{-\lambda t}}{2\pi}\, e^{-(\|x\|^2+c^2t^2)/2}
\sum_{n=0}^{\infty} \sum_{k=0}^{n} \frac{(\lambda t)^k\, 2^{(k+1)/2}}{k!\,(n-k)!}\, (c^2t^2)^{n-k}\, \Gamma\!\left(\frac{2n-k+1}{2}\right) \frac{I_{(2n-k+1)/2}(ct\|x\|)}{(ct\|x\|)^{(2n-k+1)/2}}. \tag{5.8.5}
\]


Proof. According to (5.8.2) and taking into account (5.8.1) and (5.8.3), and using the symmetry of the convolution under the change of variable $\xi \mapsto x - \xi$, we have
\[
\begin{aligned}
\varphi^{x^0}(x,t) = \varphi^{x^0}(x_1, x_2, t)
&= \frac{e^{-\lambda t}}{4\pi^2 ct} \iint_{\mathbb{R}^2} \exp\left(-\frac{(x_1-\xi_1)^2+(x_2-\xi_2)^2}{2}\right) \delta\!\left(c^2t^2 - (\xi_1^2+\xi_2^2)\right) d\xi_1\, d\xi_2 \\
&\quad + \frac{\lambda e^{-\lambda t}}{4\pi^2 c} \iint_{\mathbb{R}^2} \exp\left(-\frac{(x_1-\xi_1)^2+(x_2-\xi_2)^2}{2}\right) \frac{\exp\left(\frac{\lambda}{c}\sqrt{c^2t^2-(\xi_1^2+\xi_2^2)}\right)}{\sqrt{c^2t^2-(\xi_1^2+\xi_2^2)}}\, \Theta\!\left(ct - \sqrt{\xi_1^2+\xi_2^2}\right) d\xi_1\, d\xi_2.
\end{aligned}
\]
Passing to the polar coordinates $\xi_1 = \rho\cos\alpha$, $\xi_2 = \rho\sin\alpha$ in both integrals, we get
\[
\begin{aligned}
\varphi^{x^0}(x,t) &= \frac{e^{-\lambda t}}{4\pi^2 ct} \int_0^{\infty} \rho\, \delta(c^2t^2-\rho^2) \left[ \int_0^{2\pi} \exp\left(-\frac{(x_1-\rho\cos\alpha)^2+(x_2-\rho\sin\alpha)^2}{2}\right) d\alpha \right] d\rho \\
&\quad + \frac{\lambda e^{-\lambda t}}{4\pi^2 c} \int_0^{\infty} \frac{\rho \exp\left(\frac{\lambda}{c}\sqrt{c^2t^2-\rho^2}\right)}{\sqrt{c^2t^2-\rho^2}}\, \Theta(ct-\rho) \left[ \int_0^{2\pi} \exp\left(-\frac{(x_1-\rho\cos\alpha)^2+(x_2-\rho\sin\alpha)^2}{2}\right) d\alpha \right] d\rho.
\end{aligned} \tag{5.8.6}
\]
Let us calculate separately the interior integral in (5.8.6):
\[
\begin{aligned}
\int_0^{2\pi} \exp\left(-\frac{(x_1-\rho\cos\alpha)^2+(x_2-\rho\sin\alpha)^2}{2}\right) d\alpha
&= \int_0^{2\pi} \exp\left(-\frac{1}{2}\left(x_1^2+x_2^2+\rho^2 - 2\rho(x_1\cos\alpha + x_2\sin\alpha)\right)\right) d\alpha \\
&= e^{-(x_1^2+x_2^2+\rho^2)/2} \int_0^{2\pi} e^{\rho(x_1\cos\alpha + x_2\sin\alpha)}\, d\alpha
= 2\pi\, e^{-(\|x\|^2+\rho^2)/2}\, I_0(\rho\|x\|),
\end{aligned}
\]
where we have used the well-known integral representation of the modified Bessel function of zero order. Substituting this expression into (5.8.6), we get:

\[
\begin{aligned}
\varphi^{x^0}(x,t) &= \frac{e^{-\lambda t}}{2\pi ct} \int_0^{\infty} \rho\, \delta(c^2t^2-\rho^2)\, e^{-(\|x\|^2+\rho^2)/2}\, I_0(\rho\|x\|)\, d\rho \\
&\quad + \frac{\lambda e^{-\lambda t}}{2\pi c} \int_0^{\infty} \frac{\rho \exp\left(\frac{\lambda}{c}\sqrt{c^2t^2-\rho^2}\right)}{\sqrt{c^2t^2-\rho^2}}\, \Theta(ct-\rho)\, e^{-(\|x\|^2+\rho^2)/2}\, I_0(\rho\|x\|)\, d\rho \\
&= \frac{e^{-\lambda t}}{2\pi}\, e^{-(\|x\|^2+c^2t^2)/2}\, I_0(ct\|x\|)
+ \frac{\lambda e^{-\lambda t}}{2\pi c}\, e^{-\|x\|^2/2} \int_0^{ct} \rho\, \frac{\exp\left(\frac{\lambda}{c}\sqrt{c^2t^2-\rho^2}\right)}{\sqrt{c^2t^2-\rho^2}}\, e^{-\rho^2/2}\, I_0(\rho\|x\|)\, d\rho.
\end{aligned}
\]
Changing the variable $z = \sqrt{c^2t^2-\rho^2}$ in the last integral, we obtain
\[
\begin{aligned}
\varphi^{x^0}(x,t) &= \frac{e^{-\lambda t}}{2\pi}\, e^{-(\|x\|^2+c^2t^2)/2}\, I_0(ct\|x\|)
+ \frac{\lambda e^{-\lambda t}}{2\pi c}\, e^{-(\|x\|^2+c^2t^2)/2} \int_0^{ct} e^{(\lambda/c)z}\, e^{z^2/2}\, I_0\!\left(\|x\|\sqrt{c^2t^2-z^2}\right) dz \\
&= \frac{e^{-\lambda t}}{2\pi}\, e^{-(\|x\|^2+c^2t^2)/2}\, I_0(ct\|x\|)
+ \frac{\lambda t\, e^{-\lambda t}}{2\pi}\, e^{-(\|x\|^2+c^2t^2)/2} \int_0^1 e^{(c^2t^2/2)\xi^2 + \lambda t\xi}\, I_0\!\left(ct\|x\|\sqrt{1-\xi^2}\right) d\xi,
\end{aligned} \tag{5.8.7}
\]
proving (5.8.4). According to Lemma 1.9.12, the last integral in (5.8.7) is
\[
\int_0^1 e^{(c^2t^2/2)\xi^2 + \lambda t\xi}\, I_0\!\left(ct\|x\|\sqrt{1-\xi^2}\right) d\xi
= \sum_{n=0}^{\infty} \sum_{k=0}^{n} \frac{(\lambda t)^k\, 2^{(k+1)/2}}{k!\,(n-k)!}\, (c^2t^2)^{n-k}\, \Gamma\!\left(\frac{2n-k+1}{2}\right) \frac{I_{(2n-k+1)/2}(ct\|x\|)}{(ct\|x\|)^{(2n-k+1)/2}}.
\]
Substituting this expression into (5.8.7), we arrive at (5.8.5).

It remains to check that the positive function $\varphi^{x^0}(x,t)$ given by (5.8.4) (or (5.8.7)) is indeed a density. To do this, we need to show that, for any $t>0$, the following equality holds:
\[
\int_{\mathbb{R}^2} \varphi^{x^0}(x,t)\, \mu(dx) = 1. \tag{5.8.8}
\]

We have
\[
\begin{aligned}
\int_{\mathbb{R}^2} e^{-\|x\|^2/2}\, I_0(ct\|x\|)\, \mu(dx)
&= \iint_{\mathbb{R}^2} e^{-(x_1^2+x_2^2)/2}\, I_0\!\left(ct\sqrt{x_1^2+x_2^2}\right) dx_1\, dx_2 \\
&= \int_0^{\infty} \left[ \int_0^{2\pi} d\theta \right] r\, e^{-r^2/2}\, I_0(ctr)\, dr
= 2\pi \int_0^{\infty} r\, e^{-r^2/2}\, I_0(ctr)\, dr \\
&= \pi \int_0^{\infty} e^{-z/2}\, I_0(ct\sqrt{z})\, dz
= \frac{2\sqrt{2}\,\pi}{ct}\, e^{c^2t^2/4}\, M_{-1/2,\,0}\!\left(\frac{c^2t^2}{2}\right),
\end{aligned}
\]
where $M_{\xi,\eta}(z)$ is the Whittaker function. In the last step we have used [63, Formula 6.643(2)]. By applying [63, Formula 9.220(2)], the Whittaker function on the right-hand side of this equality reduces to the degenerate hypergeometric function, and we obtain
\[
\int_{\mathbb{R}^2} e^{-\|x\|^2/2}\, I_0(ct\|x\|)\, \mu(dx) = 2\pi\, \Phi\!\left(1;\, 1;\, \frac{c^2t^2}{2}\right) = 2\pi\, e^{c^2t^2/2}. \tag{5.8.9}
\]
From (5.8.9) it also follows that
\[
\int_{\mathbb{R}^2} e^{-\|x\|^2/2}\, I_0\!\left(ct\sqrt{1-\xi^2}\,\|x\|\right) \mu(dx) = 2\pi\, e^{c^2t^2(1-\xi^2)/2}. \tag{5.8.10}
\]

Therefore, in view of (5.8.9) and (5.8.10), we have
\[
\begin{aligned}
\int_{\mathbb{R}^2} \varphi^{x^0}(x,t)\, \mu(dx)
&= \frac{e^{-\lambda t}}{2\pi}\, e^{-c^2t^2/2} \int_{\mathbb{R}^2} e^{-\|x\|^2/2}\, I_0(ct\|x\|)\, \mu(dx) \\
&\quad + \frac{\lambda t\, e^{-\lambda t}}{2\pi}\, e^{-c^2t^2/2} \int_0^1 e^{(c^2t^2/2)\xi^2+\lambda t\xi} \left[ \int_{\mathbb{R}^2} e^{-\|x\|^2/2}\, I_0\!\left(ct\sqrt{1-\xi^2}\,\|x\|\right) \mu(dx) \right] d\xi \\
&= \frac{e^{-\lambda t}}{2\pi}\, e^{-c^2t^2/2}\, 2\pi\, e^{c^2t^2/2}
+ \frac{\lambda t\, e^{-\lambda t}}{2\pi}\, e^{-c^2t^2/2} \int_0^1 e^{(c^2t^2/2)\xi^2+\lambda t\xi}\, 2\pi\, e^{c^2t^2(1-\xi^2)/2}\, d\xi \\
&= e^{-\lambda t} + \lambda t\, e^{-\lambda t} \int_0^1 e^{\lambda t\xi}\, d\xi
= e^{-\lambda t} + e^{-\lambda t}\left(e^{\lambda t}-1\right) = 1,
\end{aligned}
\]
proving (5.8.8). The theorem is completely proved.

Remark 5.8.1. We supposed that the starting point $x^0$ is a two-dimensional random vector whose coordinates are independent standard Gaussian random variables with density (5.8.3). Similarly, one can also consider the case when the coordinates of the starting point $x^0$ are dependent Gaussian random variables with given parameters $(a_1, \sigma_1)$ and $(a_2, \sigma_2)$, respectively. In this case, the density of the two-dimensional random variable $x^0$ has the form
\[
p^{x^0}(x) = p^{x^0}(x_1, x_2) = \frac{1}{2\pi \sigma_1 \sigma_2 \sqrt{1-r^2}}
\exp\left\{ -\frac{1}{2(1-r^2)} \left[ \frac{(x_1-a_1)^2}{\sigma_1^2} - 2r\, \frac{(x_1-a_1)(x_2-a_2)}{\sigma_1\sigma_2} + \frac{(x_2-a_2)^2}{\sigma_2^2} \right] \right\}, \tag{5.8.11}
\]


$-1 < r < 1$. One can carry out similar calculations in order to evaluate the convolution (5.8.2) of density (5.8.1) with Gaussian density (5.8.11); however, in this case the calculations will obviously be much more complicated and cumbersome.
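The normalization (5.8.8) can also be confirmed numerically from representation (5.8.4), integrating in polar coordinates over $\rho = \|x\|$. A sketch (parameter values are illustrative; `i0e` is SciPy's exponentially scaled $I_0$, used only to avoid overflow — the exponents are added back explicitly):

```python
import math
from scipy.integrate import quad
from scipy.special import i0e   # i0e(z) = exp(-|z|) * I0(z)

def phi(rho, lam, c, t):
    """Density (5.8.4) as a function of rho = ||x|| (it is radially symmetric)."""
    pref = math.exp(-lam * t) / (2 * math.pi)
    # first term: e^{-lam t}/(2 pi) * e^{-(rho^2+c^2 t^2)/2} * I0(ct*rho)
    term1 = pref * math.exp(-(rho**2 + (c*t)**2) / 2 + c*t*rho) * i0e(c*t*rho)

    def g(xi):
        a = c * t * rho * math.sqrt(1 - xi**2)
        return math.exp(((c*t)**2 / 2) * xi**2 + lam*t*xi + a) * i0e(a)

    inner, _ = quad(g, 0.0, 1.0)
    term2 = lam * t * pref * math.exp(-(rho**2 + (c*t)**2) / 2) * inner
    return term1 + term2

lam, c, t = 1.0, 1.0, 0.7
# Total mass: integral over R^2 equals 2*pi * int_0^inf rho * phi(rho) d(rho)
total, _ = quad(lambda rho: 2 * math.pi * rho * phi(rho, lam, c, t), 0.0, 30.0)
assert abs(total - 1.0) < 1e-6
```

The Gaussian factor $e^{-(\rho-ct)^2/2}$ hidden in the first term makes the tail negligible, so a finite upper limit suffices.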

5.9 Euclidean distance between two random flights

The distance between two independent Goldstein-Kac telegraph processes on the line was thoroughly examined in Section 2.12. The two-dimensional counterpart of this problem is more important from both the theoretical and applied points of view. In this section we study the Euclidean distance between two independent symmetric planar Markov random flights. The problem of finding the probability law of this Euclidean distance is of special interest. This is due to the importance of such a characteristic for describing various kinds of interactions between two particles moving randomly in $\mathbb{R}^2$. Such stochastic motions with interaction can serve as very good and adequate mathematical models for describing various real phenomena in physics, chemistry, biology, financial markets and some other fields. For example, in physics and chemistry the particles are the atoms or molecules of the substance, and their interaction can provoke a physical or chemical reaction. In biology the particles represent biological objects such as cells, bacteria, animals, etc., and their 'interaction' can mean creating a new cell (or, conversely, killing a cell), launching an infection mechanism or founding a new animal population, respectively. In financial markets the moving particles can be interpreted as oscillating exchange rates or stock prices, and their 'interaction' can mean gaining or losing money. Let $X_1(t)$ and $X_2(t)$ denote the positions of two particles in the plane $\mathbb{R}^2$ at an arbitrary time instant $t>0$. In describing processes of interaction, the Euclidean distance between the particles, $\rho(t) = \|X_1(t) - X_2(t)\|$, $t>0$, is of special importance. It is quite natural to consider that the particles do not 'feel' each other if $\rho(t)$ is large. In other words, the forces acting between the particles are negligible if the distance $\rho(t)$ is sufficiently large.
However, as soon as the distance between the particles becomes less than some given $r>0$, the particles can start interacting with some positive probability. This means that the occurrence of the random event $\{\rho(t) < r\}$ is a necessary (but maybe not sufficient) condition for launching the process of interaction at time instant $t>0$. Therefore, the distribution $\Pr\{\rho(t) < r\}$ plays a crucial role in analyzing such processes, and it is thus the main focus of this section.

5.9.1 Auxiliary lemmas

The transition probability density (5.2.5) of the symmetric planar Markov random flight $X(t) = (X_1(t), X_2(t))$, $t>0$, in the polar coordinates $x_1 = r\cos\alpha$, $x_2 = r\sin\alpha$, has the form
\[
\tilde{p}(r,\alpha,t) = \frac{r e^{-\lambda t}}{2\pi ct}\, \delta(c^2t^2-r^2) + \frac{\lambda r}{2\pi c}\, \frac{\exp\left(-\lambda t + \frac{\lambda}{c}\sqrt{c^2t^2-r^2}\right)}{\sqrt{c^2t^2-r^2}}\, \Theta(ct-r), \tag{5.9.1}
\]
\[
0 < r \le ct, \qquad 0 \le \alpha < 2\pi, \qquad t > 0.
\]


The radial component $R(t)$, representing the Euclidean distance from the origin $0$ of the random point $X(t)$ at an arbitrary time instant $t>0$,
\[
R(t) = \left\| \overrightarrow{(0, X(t))} \right\| = \sqrt{X_1^2(t) + X_2^2(t)},
\]
is independent of its angular component $\alpha(t)$, representing the polar angle at time $t$ between the random vector $\overrightarrow{(0, X(t))}$ and the positive half of the $x$-axis (measured counter-clockwise). It is obvious that, with probability 1, $0 < R(t) \le ct$ and $0 \le \alpha(t) < 2\pi$ for any $t>0$. From (5.9.1) it follows that the density of $R(t)$ is given by the formula
\[
f(r,t) = \frac{r e^{-\lambda t}}{ct}\, \delta(ct-r) + \frac{\lambda}{c}\, \frac{r}{\sqrt{c^2t^2-r^2}}\, \exp\left(-\lambda t + \frac{\lambda}{c}\sqrt{c^2t^2-r^2}\right) \Theta(ct-r), \qquad r \in (0, ct], \quad t > 0, \tag{5.9.2}
\]
and the probability distribution function of $R(t)$ has the form
\[
F(r,t) = \Pr\{R(t) < r\} =
\begin{cases}
0, & \text{if } r \in (-\infty, 0], \\[1mm]
1 - \exp\left(-\lambda t + \dfrac{\lambda}{c}\sqrt{c^2t^2-r^2}\right), & \text{if } r \in (0, ct], \\[1mm]
1, & \text{if } r \in (ct, +\infty).
\end{cases} \tag{5.9.3}
\]
In (5.9.2) the first term is the singular part of the density $f(r,t)$, concentrated at the single singularity point $r = ct$, while the second term
\[
f^{(ac)}(r,t) = \frac{\lambda}{c}\, \frac{r}{\sqrt{c^2t^2-r^2}}\, \exp\left(-\lambda t + \frac{\lambda}{c}\sqrt{c^2t^2-r^2}\right) \Theta(ct-r) \tag{5.9.4}
\]
represents the density of the absolutely continuous part of distribution function (5.9.3), concentrated in the open interval $(0, ct)$.

The angular component $\alpha(t)$ of the random vector $\overrightarrow{(0, X(t))}$ has the uniform distribution in the interval $[0, 2\pi)$ with the density
\[
f_{\alpha(t)}(\gamma, t) = \frac{1}{2\pi}, \qquad \gamma \in [0, 2\pi), \quad t > 0, \tag{5.9.5}
\]
which does not depend on time $t$.

Consider two independent symmetric planar Markov random flights $X_1(t)$ and $X_2(t)$, representing stochastic motions with constant velocities $c_1>0$ and $c_2>0$, respectively. The evolutions of these random flights are driven by two independent Poisson processes $N_1(t)$ and $N_2(t)$ of rates $\lambda_1>0$ and $\lambda_2>0$, respectively, as described above.

Let $\alpha_1(t)$ and $\alpha_2(t)$ denote the polar angles of the random vectors $\overrightarrow{(0, X_1(t))}$ and $\overrightarrow{(0, X_2(t))}$ with the $x$-axis, respectively, at time $t>0$. As noted above (see (5.9.5)), both $\alpha_1(t)$ and $\alpha_2(t)$ have the same uniform distribution in the interval $[0, 2\pi)$, not depending on time $t$. Obviously, $\alpha_1(t)$ and $\alpha_2(t)$ are independent for any fixed $t>0$. Introduce the one-dimensional stochastic process
\[
\varphi(t) = |\alpha_1(t) - \alpha_2(t)|, \qquad t > 0.
\]
It is clear that, with probability 1, $0 \le \varphi(t) < 2\pi$ for arbitrary $t>0$. In the following lemma we present the distribution of $\varphi(t)$.
Lemma 5.9.1. The probability distribution function of the process $\varphi(t)$ has the form
\[
\Pr\{\varphi(t) < z\} =
\begin{cases}
0, & \text{if } z \in (-\infty, 0], \\[1mm]
\dfrac{4\pi z - z^2}{4\pi^2}, & \text{if } z \in (0, 2\pi], \\[1mm]
1, & \text{if } z \in (2\pi, +\infty),
\end{cases} \tag{5.9.6}
\]
with the density
\[
p_{\varphi(t)}(z,t) = \frac{\partial}{\partial z} \Pr\{\varphi(t) < z\} = \frac{1}{\pi} - \frac{z}{2\pi^2}, \qquad z \in [0, 2\pi), \quad t > 0. \tag{5.9.7}
\]
Proof. Since, for any fixed $t>0$, the polar angles $\alpha_1(t)$ and $\alpha_2(t)$ are independent random variables, each uniformly distributed in the interval $[0, 2\pi)$, the statement of the lemma immediately emerges by evaluating the probability of the inequality $|x-y| < z$ in the square $[0, 2\pi) \times [0, 2\pi)$ by means of simple geometric reasoning, as in the well-known 'two friends meeting problem' of elementary probability theory.

Note that both the probability distribution function (5.9.6) and the density (5.9.7) do not depend on the time variable $t$.

Consider now the following stochastic process:
\[
\eta(t) = \cos(\varphi(t)) = \cos(|\alpha_1(t) - \alpha_2(t)|), \qquad t > 0.
\]

Clearly, $-1 \le \eta(t) \le 1$ with probability 1 for any $t>0$. The next lemma yields the distribution of the process $\eta(t)$.

Lemma 5.9.2. The probability distribution function of the process $\eta(t)$ has the form
\[
\Pr\{\eta(t) < z\} =
\begin{cases}
0, & \text{if } z \in (-\infty, -1], \\[1mm]
1 - \dfrac{1}{\pi} \arccos z, & \text{if } z \in (-1, 1], \\[1mm]
1, & \text{if } z \in (1, +\infty),
\end{cases} \tag{5.9.8}
\]
with the density
\[
p_{\eta(t)}(z,t) = \frac{\partial}{\partial z} \Pr\{\eta(t) < z\} = \frac{1}{\pi\sqrt{1-z^2}}, \qquad z \in [-1, 1], \quad t > 0. \tag{5.9.9}
\]
Proof. Taking into account that $0 \le \varphi(t) < 2\pi$ and according to (5.9.7), we have for arbitrary $z \in (-1, 1]$:
\[
\begin{aligned}
\Pr\{\eta(t) < z\} &= \Pr\{\cos(\varphi(t)) < z\} = \Pr\{\arccos z < \varphi(t) < 2\pi - \arccos z\} \\
&= \int_{\arccos z}^{2\pi - \arccos z} \left( \frac{1}{\pi} - \frac{u}{2\pi^2} \right) du
= 1 - \frac{1}{\pi}\arccos z.
\end{aligned}
\]
The lemma is proved.
From (5.9.8) we also obtain the tail of the probability distribution function:
\[
\Pr\{\eta(t) > z\} =
\begin{cases}
1, & \text{if } z \in (-\infty, -1], \\[1mm]
\dfrac{1}{\pi} \arccos z, & \text{if } z \in (-1, 1], \\[1mm]
0, & \text{if } z \in (1, +\infty).
\end{cases} \tag{5.9.10}
\]
Define now the following stochastic process:
\[
\theta(t) =
\begin{cases}
\varphi(t), & \text{if } \varphi(t) \in (0, \pi), \\
2\pi - \varphi(t), & \text{if } \varphi(t) \in (\pi, 2\pi),
\end{cases} \qquad t > 0. \tag{5.9.11}
\]
For $\varphi(t) = 0$ and $\varphi(t) = \pi$, the process $\theta(t)$ is undefined. The process $\theta(t)$ represents the lesser angle between the random vectors $\overrightarrow{(0, X_1(t))}$ and $\overrightarrow{(0, X_2(t))}$ at time instant $t>0$. Obviously, $0 < \theta(t) < \pi$ with probability 1 for any $t>0$. Process (5.9.11) can also be rewritten in the following form:
\[
\theta(t) = \arccos(\cos(\varphi(t))), \qquad t > 0. \tag{5.9.12}
\]
We are interested in the distribution of $\theta(t)$. One can intuitively expect that $\theta(t)$ has the uniform distribution in the interval $(0, \pi)$, although this is not an obvious statement. In the following lemma we give a rigorous proof of this fact.

Lemma 5.9.3. The process $\theta(t)$ has the uniform distribution in the interval $(0, \pi)$, that is,
\[
\Pr\{\theta(t) < z\} =
\begin{cases}
0, & \text{if } z \in (-\infty, 0], \\[1mm]
\dfrac{z}{\pi}, & \text{if } z \in (0, \pi], \\[1mm]
1, & \text{if } z \in (\pi, +\infty),
\end{cases} \qquad t > 0, \tag{5.9.13}
\]
and the density of distribution (5.9.13) is
\[
p_{\theta(t)}(z,t) = \frac{\partial}{\partial z} \Pr\{\theta(t) < z\} = \frac{1}{\pi}, \qquad z \in (0, \pi), \quad t > 0. \tag{5.9.14}
\]
Proof. If $z \in (0, \pi]$, then, according to (5.9.12) and (5.9.8), we have
\[
\begin{aligned}
\Pr\{\theta(t) < z\} &= \Pr\{\arccos(\cos(\varphi(t))) < z\} = \Pr\{\cos z < \cos(\varphi(t)) \le 1\} \\
&= 1 - \Pr\{\cos(\varphi(t)) \le \cos z\} = 1 - \left(1 - \frac{1}{\pi}\arccos(\cos z)\right) = \frac{z}{\pi},
\end{aligned}
\]
and (5.9.13) is proved.
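Lemmas 5.9.1 and 5.9.3 can be illustrated by direct Monte Carlo sampling of the two independent uniform polar angles; a short sketch (sample size and seed are arbitrary choices of the example):

```python
import math, random

rng = random.Random(7)
n = 200000
phis, thetas = [], []
for _ in range(n):
    a1 = rng.uniform(0.0, 2 * math.pi)
    a2 = rng.uniform(0.0, 2 * math.pi)
    phi = abs(a1 - a2)                       # the process phi(t)
    phis.append(phi)
    thetas.append(math.acos(math.cos(phi)))  # theta(t) = arccos(cos(phi(t))), cf. (5.9.12)

# Lemma 5.9.1: Pr{phi < z} = (4*pi*z - z^2)/(4*pi^2)
z = 2.0
emp_phi = sum(p < z for p in phis) / n
assert abs(emp_phi - (4 * math.pi * z - z * z) / (4 * math.pi ** 2)) < 0.01

# Lemma 5.9.3: theta is uniform on (0, pi), so Pr{theta < z} = z/pi
z = 1.0
emp_theta = sum(th < z for th in thetas) / n
assert abs(emp_theta - z / math.pi) < 0.01
```

The uniformity of $\theta(t)$, which Lemma 5.9.3 proves rigorously, is clearly visible in the empirical frequencies.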

5.9.2 Main results

Let $X_1(t)$ and $X_2(t)$ be two independent symmetric Markov random flights performed by two particles that simultaneously start from the origin $0 = (0,0)$ of the plane $\mathbb{R}^2$ and move with constant velocities $c_1>0$, $c_2>0$, respectively. The choice of the initial direction, and of each new random direction, is made according to the uniform law on the unit circumference $S^1$. The


evolutions of $X_1(t)$ and $X_2(t)$ are driven by two independent homogeneous Poisson processes $N_1(t)$, $N_2(t)$ of rates $\lambda_1>0$, $\lambda_2>0$, as described above. For the sake of definiteness, we suppose that $c_1 \ge c_2$ (otherwise, one can merely interchange the numeration of the processes). We are interested in the distribution of the Euclidean distance
\[
\rho(t) = \|X_1(t) - X_2(t)\|, \qquad t > 0, \tag{5.9.15}
\]
between $X_1(t)$ and $X_2(t)$ at an arbitrary fixed time moment $t>0$. Typical trajectories of the planar Markov random flights $X_1(t)$ and $X_2(t)$ at some time instant $t>0$, under the conditions $c_1 > c_2$, $N_1(t) = 3$, $N_2(t) = 2$, are presented in Fig. 5.2 (dotted broken lines).

Figure 5.2: Sample paths of $X_1(t)$, $X_2(t)$ for $N_1(t) = 3$, $N_2(t) = 2$ and the random triangle

Our goal is to obtain the probability distribution function
\[
\Phi(r,t) = \Pr\{\rho(t) < r\}, \qquad t > 0, \tag{5.9.16}
\]
of the Euclidean distance (5.9.15). It is clear that $0 < \rho(t) < (c_1+c_2)t$ holds with probability 1 for any $t>0$; that is, the open interval $(0, (c_1+c_2)t)$ is the support of the distribution of the process $\rho(t)$. Note that, in contrast to the one-dimensional case (see Subsection 2.12.1), the distribution of the Euclidean distance (5.9.15) is absolutely continuous in the interval $(0, (c_1+c_2)t)$ and does not contain any singular component. This means that the probability distribution function (5.9.16) is continuous for $r \in \mathbb{R}^1$ and does not have any jumps.

The form of the probability distribution function $\Phi(r,t)$ is somewhat different in the cases $c_1 > c_2$ and $c_1 = c_2$. First, we derive a formula for $\Phi(r,t)$ in the more difficult case $c_1 > c_2$. The simpler case $c_1 = c_2$ will then be presented as a separate theorem.

The method of obtaining a formula for $\Phi(r,t)$ differs from that used in the one-dimensional case (see Subsection 2.12.1). While in the one-dimensional case of two telegraph processes the derivation is based on evaluating the probability that one particle is located in an $r$-neighbourhood of the other, in the multidimensional case such an approach is impracticable. Instead, we consider a random triangle with the vertices $0$, $X_1(t)$, $X_2(t)$ (see Fig. 5.2). Two sides of this triangle are the random vectors $\overrightarrow{(0, X_i(t))}$, $i=1,2$, of lengths $R_i(t) = \|\overrightarrow{(0, X_i(t))}\|$, $i=1,2$, with known densities $f_i(r,t)$, $i=1,2$, given by (5.9.2) (or distribution functions $F_i(r,t)$, $i=1,2$, given by (5.9.3)). The random (lesser) angle $\theta(t)$ between these vectors has the uniform distribution in the interval $(0,\pi)$ (see Lemma 5.9.3). Therefore, our aim is to find the distribution of the third side $\rho(t)$ of this random triangle. This result is presented by the following theorem.

Theorem 5.9.1. Under the condition $c_1 > c_2$, the probability distribution function $\Phi(r,t)$ of the Euclidean distance $\rho(t)$ between two independent planar Markov random flights $X_1(t)$ and $X_2(t)$ has the form
\[
\Phi(r,t) =
\begin{cases}
0, & \text{if } r \in (-\infty, 0], \\
G(r,t), & \text{if } r \in (0, m(t)], \\
H_k(r,t), & \text{if } r \in (m(t), M(t)], \\
Q(r,t), & \text{if } r \in (M(t), c_1 t], \\
U(r,t), & \text{if } r \in (c_1 t, (c_1+c_2)t], \\
1, & \text{if } r \in ((c_1+c_2)t, +\infty),
\end{cases} \tag{5.9.17}
\]
\[
c_1 > c_2, \qquad c_1 \ne 2c_2, \qquad t > 0, \qquad k = 1, 2,
\]
where
\[
m(t) = \min\{(c_1-c_2)t,\; c_2 t\}, \qquad M(t) = \max\{(c_1-c_2)t,\; c_2 t\}, \tag{5.9.18}
\]
and the functions $G(r,t)$, $H_k(r,t)$, $Q(r,t)$, $U(r,t)$ are given by the formulas:
\[
\begin{aligned}
G(r,t) &= 1 - \exp\left(-\lambda_1 t + \frac{\lambda_1}{c_1}\sqrt{c_1^2t^2-r^2}\right) \\
&\quad + \frac{\lambda_1 e^{-(\lambda_1+\lambda_2)t}}{\pi c_1} \int_{c_2t-r}^{c_2t+r} \arccos\left(\frac{\xi^2+(c_2t)^2-r^2}{2c_2t\,\xi}\right) \frac{\xi}{\sqrt{c_1^2t^2-\xi^2}}\, \exp\left(\frac{\lambda_1}{c_1}\sqrt{c_1^2t^2-\xi^2}\right) d\xi \\
&\quad - \frac{\lambda_1 e^{-(\lambda_1+\lambda_2)t}}{c_1} \int_0^r \frac{\xi}{\sqrt{c_1^2t^2-\xi^2}}\, \exp\left(\frac{\lambda_1}{c_1}\sqrt{c_1^2t^2-\xi^2}\right) \exp\left(\frac{\lambda_2}{c_2}\sqrt{c_2^2t^2-(r-\xi)^2}\right) d\xi \\
&\quad + \frac{1}{\pi} \int_0^r \left[ \int_{r-\zeta}^{r+\zeta} \arccos\left(\frac{\xi^2+\zeta^2-r^2}{2\xi\zeta}\right) f_1^{(ac)}(\xi,t)\, d\xi \right] f_2^{(ac)}(\zeta,t)\, d\zeta \\
&\quad + \frac{1}{\pi} \int_r^{c_2t} \left[ \int_{\zeta-r}^{\zeta+r} \arccos\left(\frac{\xi^2+\zeta^2-r^2}{2\xi\zeta}\right) f_1^{(ac)}(\xi,t)\, d\xi \right] f_2^{(ac)}(\zeta,t)\, d\zeta,
\end{aligned} \tag{5.9.19}
\]


\[
\begin{aligned}
H_1(r,t) &= 1 - e^{-(\lambda_1+\lambda_2)t}\, \exp\left(\frac{\lambda_1}{c_1}\sqrt{c_1^2t^2-(r-c_2t)^2}\right) \\
&\quad + \frac{\lambda_1 e^{-(\lambda_1+\lambda_2)t}}{\pi c_1} \int_{r-c_2t}^{r+c_2t} \arccos\left(\frac{\xi^2+(c_2t)^2-r^2}{2c_2t\,\xi}\right) \frac{\xi}{\sqrt{c_1^2t^2-\xi^2}}\, \exp\left(\frac{\lambda_1}{c_1}\sqrt{c_1^2t^2-\xi^2}\right) d\xi \\
&\quad - \frac{\lambda_2 e^{-(\lambda_1+\lambda_2)t}}{c_2} \int_0^{c_2t} \frac{\zeta}{\sqrt{c_2^2t^2-\zeta^2}}\, \exp\left(\frac{\lambda_2}{c_2}\sqrt{c_2^2t^2-\zeta^2}\right) \exp\left(\frac{\lambda_1}{c_1}\sqrt{c_1^2t^2-(r-\zeta)^2}\right) d\zeta \\
&\quad + \frac{1}{\pi} \int_0^{c_2t} \left[ \int_{r-\zeta}^{r+\zeta} \arccos\left(\frac{\xi^2+\zeta^2-r^2}{2\xi\zeta}\right) f_1^{(ac)}(\xi,t)\, d\xi \right] f_2^{(ac)}(\zeta,t)\, d\zeta, \qquad \text{if } c_1 > 2c_2,
\end{aligned} \tag{5.9.20}
\]

\[
\begin{aligned}
H_2(r,t) &= 1 - \exp\left(-\lambda_1 t + \frac{\lambda_1}{c_1}\sqrt{c_1^2t^2-r^2}\right) + \frac{e^{-(\lambda_1+\lambda_2)t}}{\pi}\, \arccos\left(\frac{(c_1t)^2+(c_2t)^2-r^2}{2c_1c_2t^2}\right) \\
&\quad + \frac{\lambda_1 e^{-(\lambda_1+\lambda_2)t}}{\pi c_1} \int_{c_2t-r}^{c_1t} \arccos\left(\frac{\xi^2+(c_2t)^2-r^2}{2c_2t\,\xi}\right) \frac{\xi}{\sqrt{c_1^2t^2-\xi^2}}\, \exp\left(\frac{\lambda_1}{c_1}\sqrt{c_1^2t^2-\xi^2}\right) d\xi \\
&\quad + \frac{\lambda_2 e^{-(\lambda_1+\lambda_2)t}}{\pi c_2} \int_{c_1t-r}^{c_2t} \arccos\left(\frac{\xi^2+(c_1t)^2-r^2}{2c_1t\,\xi}\right) \frac{\xi}{\sqrt{c_2^2t^2-\xi^2}}\, \exp\left(\frac{\lambda_2}{c_2}\sqrt{c_2^2t^2-\xi^2}\right) d\xi \\
&\quad - \frac{\lambda_1 e^{-(\lambda_1+\lambda_2)t}}{c_1} \int_0^r \frac{\xi}{\sqrt{c_1^2t^2-\xi^2}}\, \exp\left(\frac{\lambda_1}{c_1}\sqrt{c_1^2t^2-\xi^2}\right) \exp\left(\frac{\lambda_2}{c_2}\sqrt{c_2^2t^2-(r-\xi)^2}\right) d\xi \\
&\quad + \frac{1}{\pi} \int_0^r \left[ \int_{r-\zeta}^{r+\zeta} \arccos\left(\frac{\xi^2+\zeta^2-r^2}{2\xi\zeta}\right) f_1^{(ac)}(\xi,t)\, d\xi \right] f_2^{(ac)}(\zeta,t)\, d\zeta \\
&\quad + \frac{1}{\pi} \int_r^{c_1t-r} \left[ \int_{\zeta-r}^{\zeta+r} \arccos\left(\frac{\xi^2+\zeta^2-r^2}{2\xi\zeta}\right) f_1^{(ac)}(\xi,t)\, d\xi \right] f_2^{(ac)}(\zeta,t)\, d\zeta \\
&\quad + \frac{1}{\pi} \int_{c_1t-r}^{c_2t} \left[ \int_{\zeta-r}^{c_1t} \arccos\left(\frac{\xi^2+\zeta^2-r^2}{2\xi\zeta}\right) f_1^{(ac)}(\xi,t)\, d\xi \right] f_2^{(ac)}(\zeta,t)\, d\zeta, \qquad \text{if } c_1 < 2c_2,
\end{aligned} \tag{5.9.21}
\]


\[
\begin{aligned}
Q(r,t) &= 1 - e^{-(\lambda_1+\lambda_2)t} \left\{ \exp\left(\frac{\lambda_1}{c_1}\sqrt{c_1^2t^2-(r-c_2t)^2}\right) - \frac{1}{\pi}\, \arccos\left(\frac{(c_1t)^2+(c_2t)^2-r^2}{2c_1c_2t^2}\right) \right\} \\
&\quad + \frac{\lambda_1 e^{-(\lambda_1+\lambda_2)t}}{\pi c_1} \int_{r-c_2t}^{c_1t} \arccos\left(\frac{\xi^2+(c_2t)^2-r^2}{2c_2t\,\xi}\right) \frac{\xi}{\sqrt{c_1^2t^2-\xi^2}}\, \exp\left(\frac{\lambda_1}{c_1}\sqrt{c_1^2t^2-\xi^2}\right) d\xi \\
&\quad + \frac{\lambda_2 e^{-(\lambda_1+\lambda_2)t}}{\pi c_2} \int_{c_1t-r}^{c_2t} \arccos\left(\frac{\xi^2+(c_1t)^2-r^2}{2c_1t\,\xi}\right) \frac{\xi}{\sqrt{c_2^2t^2-\xi^2}}\, \exp\left(\frac{\lambda_2}{c_2}\sqrt{c_2^2t^2-\xi^2}\right) d\xi \\
&\quad - \frac{\lambda_2 e^{-(\lambda_1+\lambda_2)t}}{c_2} \int_0^{c_2t} \frac{\zeta}{\sqrt{c_2^2t^2-\zeta^2}}\, \exp\left(\frac{\lambda_2}{c_2}\sqrt{c_2^2t^2-\zeta^2}\right) \exp\left(\frac{\lambda_1}{c_1}\sqrt{c_1^2t^2-(r-\zeta)^2}\right) d\zeta \\
&\quad + \frac{1}{\pi} \int_0^{c_1t-r} \left[ \int_{r-\zeta}^{r+\zeta} \arccos\left(\frac{\xi^2+\zeta^2-r^2}{2\xi\zeta}\right) f_1^{(ac)}(\xi,t)\, d\xi \right] f_2^{(ac)}(\zeta,t)\, d\zeta \\
&\quad + \frac{1}{\pi} \int_{c_1t-r}^{c_2t} \left[ \int_{r-\zeta}^{c_1t} \arccos\left(\frac{\xi^2+\zeta^2-r^2}{2\xi\zeta}\right) f_1^{(ac)}(\xi,t)\, d\xi \right] f_2^{(ac)}(\zeta,t)\, d\zeta,
\end{aligned} \tag{5.9.22}
\]

\[
\begin{aligned}
U(r,t) &= 1 - e^{-(\lambda_1+\lambda_2)t} \left\{ \exp\left(\frac{\lambda_2}{c_2}\sqrt{c_2^2t^2-(r-c_1t)^2}\right) - \frac{1}{\pi}\, \arccos\left(\frac{(c_1t)^2+(c_2t)^2-r^2}{2c_1c_2t^2}\right) \right\} \\
&\quad + \frac{\lambda_1 e^{-(\lambda_1+\lambda_2)t}}{\pi c_1} \int_{r-c_2t}^{c_1t} \arccos\left(\frac{\xi^2+(c_2t)^2-r^2}{2c_2t\,\xi}\right) \frac{\xi}{\sqrt{c_1^2t^2-\xi^2}}\, \exp\left(\frac{\lambda_1}{c_1}\sqrt{c_1^2t^2-\xi^2}\right) d\xi \\
&\quad + \frac{\lambda_2 e^{-(\lambda_1+\lambda_2)t}}{\pi c_2} \int_{r-c_1t}^{c_2t} \arccos\left(\frac{\xi^2+(c_1t)^2-r^2}{2c_1t\,\xi}\right) \frac{\xi}{\sqrt{c_2^2t^2-\xi^2}}\, \exp\left(\frac{\lambda_2}{c_2}\sqrt{c_2^2t^2-\xi^2}\right) d\xi \\
&\quad - \frac{\lambda_1 e^{-(\lambda_1+\lambda_2)t}}{c_1} \int_{r-c_2t}^{c_1t} \frac{\xi}{\sqrt{c_1^2t^2-\xi^2}}\, \exp\left(\frac{\lambda_1}{c_1}\sqrt{c_1^2t^2-\xi^2}\right) \exp\left(\frac{\lambda_2}{c_2}\sqrt{c_2^2t^2-(r-\xi)^2}\right) d\xi \\
&\quad + \frac{1}{\pi} \int_{r-c_2t}^{c_1t} f_1^{(ac)}(\xi,t) \left[ \int_{r-\xi}^{c_2t} \arccos\left(\frac{\xi^2+\zeta^2-r^2}{2\xi\zeta}\right) f_2^{(ac)}(\zeta,t)\, d\zeta \right] d\xi,
\end{aligned} \tag{5.9.23}
\]

(5.9.23) where i = 1, 2, is the absolutely continuous part of the density of process Ri (t) given by (5.9.4), that is,   q λi z λi (ac) 2 2 2 p fi (z, t) = exp −λi t + ci t − z , 0 < z < ci t, i = 1, 2. ci ci c2i t2 − z 2 (5.9.24) (ac) fi (z, t),
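As a quick numerical sanity check of (5.9.24) (our own illustration, not part of the text: the function name `ac_mass` and the Simpson-rule discretization are assumptions), one can verify that the absolutely continuous part carries total mass 1 − e^{−λi t}, so that together with the atom e^{−λi t} of Ri(t) at z = ci t the distribution sums to 1. The substitution z = ci t sin φ removes the square-root singularity at z = ci t:

```python
import math

def ac_mass(lam, t, n=20000):
    """Mass of f_i^{(ac)}(z, t) over (0, c_i t).  After the substitution
    z = c_i t sin(phi) the integrand becomes lam*t*sin(phi)*exp(-lam*t*(1-cos(phi))),
    which no longer depends on c_i and is smooth on [0, pi/2]."""
    h = (math.pi / 2) / n
    s = 0.0
    for k in range(n + 1):
        phi = k * h
        w = 1 if k in (0, n) else (4 if k % 2 else 2)  # composite Simpson weights
        s += w * lam * t * math.sin(phi) * math.exp(-lam * t * (1.0 - math.cos(phi)))
    return s * h / 3

lam, t = 1.0, 5.0
print(ac_mass(lam, t), 1.0 - math.exp(-lam * t))  # the two values agree
```

The independence of the mass from ci, visible after the substitution, reflects the fact that the atom's weight e^{−λi t} depends only on the switching intensity.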


If c1 = 2c2, then the probability distribution function Φ(r, t) has the form:

Φ(r, t) = 0,        if r ∈ (−∞, 0];
        = G(r, t),  if r ∈ (0, c2t];
        = Q(r, t),  if r ∈ (c2t, c1t];
        = U(r, t),  if r ∈ (c1t, (c1+c2)t];
        = 1,        if r ∈ ((c1+c2)t, +∞);      c1 = 2c2, t > 0,   (5.9.25)

where the functions G(r, t), Q(r, t), U(r, t) are given by formulas (5.9.19), (5.9.22) and (5.9.23), respectively.

Consider now the case of equal velocities. Suppose that both random flights X1(t) and X2(t) have the same speed c1 = c2 = c. Then 0 < ρ(t) < 2ct with probability 1 for any t > 0 and, therefore, the open interval (0, 2ct) is the support of the distribution of the process ρ(t). In this case the distribution function Φ(r, t) is given by the following theorem.

Theorem 5.9.2. Under the condition c1 = c2 = c, the probability distribution function Φ(r, t) of the Euclidean distance ρ(t) between two independent planar Markov random flights X1(t) and X2(t) has the form:

Φ(r, t) = 0,        if r ∈ (−∞, 0];
        = V(r, t),  if r ∈ (0, ct];
        = W(r, t),  if r ∈ (ct, 2ct];
        = 1,        if r ∈ (2ct, +∞);      c1 = c2 = c, t > 0,   (5.9.26)

where the functions V(r, t), W(r, t) are given by the formulas:

V(r, t) = 1 − exp( −λ1 t + (λ1/c) √(c²t² − r²) ) + (e^{−(λ1+λ2)t}/π) arccos( 1 − r²/(2c²t²) )
  + (λ1 e^{−(λ1+λ2)t}/(π c)) ∫_{ct−r}^{ct} arccos( (ξ² + c²t² − r²)/(2ctξ) ) (ξ/√(c²t² − ξ²)) exp( (λ1/c) √(c²t² − ξ²) ) dξ
  + (λ2 e^{−(λ1+λ2)t}/(π c)) ∫_{ct−r}^{ct} arccos( (ξ² + c²t² − r²)/(2ctξ) ) (ξ/√(c²t² − ξ²)) exp( (λ2/c) √(c²t² − ξ²) ) dξ
  − (λ1/c) e^{−(λ1+λ2)t} ∫_0^r (ξ/√(c²t² − ξ²)) exp( (λ1/c) √(c²t² − ξ²) ) exp( (λ2/c) √(c²t² − (r−ξ)²) ) dξ
  + (1/π) ∬_{ |ξ−ζ| ≤ r, ξ+ζ > r, 0 < ξ, ζ < ct } arccos( (ξ² + ζ² − r²)/(2ξζ) ) f1^{(ac)}(ξ, t) f2^{(ac)}(ζ, t) dξ dζ,   (5.9.27)

W(r, t) = 1 − e^{−(λ1+λ2)t} [ exp( (λ2/c) √(c²t² − (r − ct)²) ) − (1/π) arccos( 1 − r²/(2c²t²) ) ]
  + (λ1 e^{−(λ1+λ2)t}/(π c)) ∫_{r−ct}^{ct} arccos( (ξ² + c²t² − r²)/(2ctξ) ) (ξ/√(c²t² − ξ²)) exp( (λ1/c) √(c²t² − ξ²) ) dξ
  + (λ2 e^{−(λ1+λ2)t}/(π c)) ∫_{r−ct}^{ct} arccos( (ξ² + c²t² − r²)/(2ctξ) ) (ξ/√(c²t² − ξ²)) exp( (λ2/c) √(c²t² − ξ²) ) dξ
  − (λ1/c) e^{−(λ1+λ2)t} ∫_{r−ct}^{ct} (ξ/√(c²t² − ξ²)) exp( (λ1/c) √(c²t² − ξ²) ) exp( (λ2/c) √(c²t² − (r−ξ)²) ) dξ
  + (1/π) ∫_{r−ct}^{ct} dξ f1^{(ac)}(ξ, t) ∫_{r−ξ}^{ct} arccos( (ξ² + ζ² − r²)/(2ξζ) ) f2^{(ac)}(ζ, t) dζ,   (5.9.28)

and the functions f_i^{(ac)}(z, t), i = 1, 2, are given by (5.9.24) for c1 = c2 = c.

The proofs of Theorems 5.9.1 and 5.9.2 are given in a separate subsection below.

Remark 5.9.1. By means of tedious but simple computations one can check that, for any t > 0 and arbitrary speeds c1, c2 such that c1 > c2, the following limiting relations hold:

lim_{r→0+} G(r, t) = 0,
lim_{r→m(t)−0} G(r, t) = G(m(t), t) = lim_{r→m(t)+0} Hk(r, t),   k = 1, 2,
lim_{r→M(t)−0} Hk(r, t) = Hk(M(t), t) = lim_{r→M(t)+0} Q(r, t),   k = 1, 2,
lim_{r→c1t−0} Q(r, t) = Q(c1t, t) = lim_{r→c1t+0} U(r, t),
lim_{r→(c1+c2)t−0} U(r, t) = U((c1+c2)t, t) = 1.   (5.9.29)

Similarly, for any t > 0 and arbitrary speed c1 = c2 = c,

lim_{r→0+} V(r, t) = 0,
lim_{r→ct−0} V(r, t) = V(ct, t) = lim_{r→ct+0} W(r, t),
lim_{r→2ct−0} W(r, t) = W(2ct, t) = 1.   (5.9.30)

Formulas (5.9.29) show that, for arbitrary speeds c1, c2 such that c1 > c2, the probability distribution function Φ(r, t) is continuous at the points 0, m(t), M(t), c1t, (c1+c2)t and, therefore, it is continuous in the whole interval (i.e. the support) [0, (c1+c2)t]. Similarly, relations (5.9.30) prove the continuity of Φ(r, t) in the interval [0, 2ct] in the case of equal speeds c1 = c2 = c. This entirely accords with the structure of the distribution described above.

5.9.3 Asymptotics and numerical example

The functions G(r, t), Hk(r, t), Q(r, t), U(r, t), V(r, t) and W(r, t) composing the probability distribution functions Φ(r, t) in formulas (5.9.17), (5.9.25) and (5.9.26) have fairly complicated analytical forms and obviously cannot be evaluated in closed form; they can therefore only be computed numerically. One can see that each of them contains terms of two kinds. The terms of the first kind contain a single integral, and such terms are easily computed numerically (for given parameters λi, ci, i = 1, 2, and time parameter t) by means of any standard package of mathematical programs. The terms of the second kind contain double integrals that cannot be evaluated directly. To overcome this difficulty, one may expand the interior integral into a series, take some finite number of its terms and integrate them until the required accuracy is reached. However, obtaining such an expansion is a fairly difficult problem too. Instead, one may derive asymptotic formulas for these double integrals that can then be applied to obtain approximate values of the distribution function Φ(r, t).

We demonstrate this approach by considering Φ(r, t) for small r and arbitrary fixed time t > 0 under the condition c1 > c2. From the general scheme of interaction described above it follows that the behaviour of Φ(r, t) for small r is of special importance. Clearly, this behaviour is determined by the function G(r, t) given by (5.9.19). We see that G(r, t) consists of five terms, of which the first is an elementary (exponential) function, while the second and third terms contain a single integral. As noted above, all these terms are easily computed numerically for arbitrary r ∈ (0, m(t)]. The difficulty arises when evaluating the fourth and fifth terms of G(r, t), which contain the double integrals:

g4(r, t) = (1/π) ∫_0^r dζ f2^{(ac)}(ζ, t) ∫_{r−ζ}^{r+ζ} arccos( (ξ² + ζ² − r²)/(2ξζ) ) f1^{(ac)}(ξ, t) dξ,

g5(r, t) = (1/π) ∫_r^{c2t} dζ f2^{(ac)}(ζ, t) ∫_{ζ−r}^{ζ+r} arccos( (ξ² + ζ² − r²)/(2ξζ) ) f1^{(ac)}(ξ, t) dξ,

where, recall, the functions f_i^{(ac)}(z, t), i = 1, 2, are given by (5.9.24). We notice that

lim_{r→0+} g4(r, t) = 0,   lim_{r→0+} g5(r, t) = 0,   (5.9.31)

and, therefore, the smaller r is, the smaller the contributions of these functions to G(r, t). To estimate their contributions, we should derive asymptotic formulas for g4(r, t) and g5(r, t) as r → 0. Differentiating these functions in r, we can easily show that

lim_{r→0+} ∂g4(r, t)/∂r = 0,   lim_{r→0+} ∂²g4(r, t)/∂r² = 0,   (5.9.32)

and

lim_{r→0+} ∂g5(r, t)/∂r = 0,   lim_{r→0+} ∂²g5(r, t)/∂r² = 0.   (5.9.33)

From the first relation in (5.9.31) and (5.9.32) it follows that, for any fixed t > 0, the


asymptotic formula g4(r, t) = o(r^k), r → 0, holds, where k is some positive integer with k ≥ 2. Similarly, from the second relation in (5.9.31) and (5.9.33) it follows that, for any fixed t > 0, the asymptotic formula g5(r, t) = o(r^l), r → 0, holds for some l ≥ 2. This means that both terms g4(r, t) and g5(r, t) tend to 0 very rapidly as r → 0 (namely, like r³ or faster). Therefore, the contributions of these terms to the function G(r, t) are negligible for small r. Thus, we get the asymptotic formula G(r, t) = G̃(r, t) + o(r^n), r → 0, n ≥ 2, where the approximating function G̃(r, t) has the form:

G̃(r, t) = 1 − exp( −λ1 t + (λ1/c1) √(c1²t² − r²) )
  + (λ1 e^{−(λ1+λ2)t}/(π c1)) ∫_{c2t−r}^{c2t+r} arccos( (ξ² + (c2t)² − r²)/(2c2tξ) ) (ξ/√(c1²t² − ξ²)) exp( (λ1/c1) √(c1²t² − ξ²) ) dξ
  − (λ1/c1) e^{−(λ1+λ2)t} ∫_0^r (ξ/√(c1²t² − ξ²)) exp( (λ1/c1) √(c1²t² − ξ²) ) exp( (λ2/c2) √(c2²t² − (r−ξ)²) ) dξ.   (5.9.34)

Consider now the following numerical example. For the particular values of the parameters

c1 = 4,   c2 = 3,   λ1 = 1,   λ2 = 2,   t = 5,   (5.9.35)

in view of (5.9.18), we have m(5) = 5, M(5) = 15. Therefore, the probability distribution function Φ(r, 5) has the form (see (5.9.17)):

Φ(r, 5) = 0,         if r ∈ (−∞, 0];
        = G(r, 5),   if r ∈ (0, 5];
        = H2(r, 5),  if r ∈ (5, 15];
        = Q(r, 5),   if r ∈ (15, 20];
        = U(r, 5),   if r ∈ (20, 35];
        = 1,         if r ∈ (35, +∞),   (5.9.36)

where the functions G(r, 5), H2(r, 5), Q(r, 5) and U(r, 5) are given by formulas (5.9.19), (5.9.21), (5.9.22) and (5.9.23), respectively, with the parameters (5.9.35).

The shape of the function G̃(r, 5) in the interval r ∈ (0, 5] is plotted in Fig. 5.3. For small r this curve shows fairly well the behaviour of the function G(r, t). If we need to evaluate the probabilities of interacting for other r, we should consider the respective function in (5.9.36) and apply special methods for computing double integrals. Suppose that every time the particles come within distance 0.1 of each other, they can begin to interact with probability 0.2. Let us evaluate the probability of launching the interaction at time instant t = 5. First, we need to compute the probability Pr{ρ(5) < 0.1} = Φ(0.1, 5). Since 0.1 ∈ (0, 5], this probability is determined by the function G(r, 5) for r = 0.1. The distance r = 0.1 is sufficiently small and, therefore, one can evaluate this probability by means of the approximating function (5.9.34). For the parameters (5.9.35), formula (5.9.34) yields:

Pr{ρ(5) < 0.1} ≈ G̃(0.1, 5) = 0.26665 · 10⁻⁸.


Figure 5.3: The shape of the function G̃(r, 5) in the interval (0, 5] for the parameters (5.9.35)

The error in this probability does not exceed 10⁻³ multiplied by some constant. Then the probability of launching the interaction at time instant t = 5 is approximately equal to 0.26665 · 10⁻⁸ · 0.2 = 0.5333 · 10⁻⁹.

Remark 5.9.2. The model considered in this section can generate some other interesting problems. Let T > 0 be an arbitrary time instant and let k_T^r denote the random variable counting how many times during the time interval (0, T) the distance between the particles was less than some given r > 0. The distribution of this nonnegative integer-valued random variable k_T^r is of special importance because it would enable us to evaluate the probability of the interaction starting before time T.
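A direct Monte Carlo simulation gives an independent, if crude, handle on such quantities. The sketch below is entirely our own illustration (the helper name `flight`, the seed and all sample sizes are assumptions): it samples the positions of the two flights at t = 5 for the parameters (5.9.35), checks that the simulated distances respect the support [0, (c1 + c2)t] = [0, 35], and estimates Φ(r, 5) as the fraction of simulated distances below r.

```python
import math, random

def flight(c, lam, t, rng):
    """Planar Markov random flight: constant speed c, direction resampled
    uniformly on [0, 2*pi) at the events of a Poisson(lam) process."""
    x = y = 0.0
    s = 0.0
    while s < t:
        dt = min(rng.expovariate(lam), t - s)  # current segment duration
        phi = rng.uniform(0.0, 2.0 * math.pi)
        x += c * dt * math.cos(phi)
        y += c * dt * math.sin(phi)
        s += dt
    return x, y

rng = random.Random(1)
c1, c2, lam1, lam2, t = 4.0, 3.0, 1.0, 2.0, 5.0
dists = []
for _ in range(5000):
    x1, y1 = flight(c1, lam1, t, rng)
    x2, y2 = flight(c2, lam2, t, rng)
    dists.append(math.hypot(x1 - x2, y1 - y2))
print(max(dists) <= (c1 + c2) * t)               # support bound holds
print(sum(d < 5.0 for d in dists) / len(dists))  # crude estimate of Phi(5, 5)
```

Events as rare as ρ(5) < 0.1 are of course far beyond the reach of plain Monte Carlo, which is precisely why the asymptotic formula (5.9.34) is useful.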

5.9.4 Proofs of theorems

In this subsection we give rigorous proofs of Theorems 5.9.1 and 5.9.2 formulated above.

Proof of Theorem 5.9.1. First of all, we note that since 0 < ρ(t) < (c1+c2)t with probability 1 for any t > 0,

Pr{ρ(t) < r} = 0,   if r ∈ (−∞, 0],
Pr{ρ(t) < r} = 1,   if r ∈ ((c1+c2)t, +∞).   (5.9.37)

Let now r ∈ (0, (c1+c2)t]. Passing to joint distributions, we can write:

Φ(r, t) = Pr{ρ(t) < r, N1(t) = 0, N2(t) = 0} + Pr{ρ(t) < r, N1(t) ≥ 1, N2(t) = 0}
        + Pr{ρ(t) < r, N1(t) = 0, N2(t) ≥ 1} + Pr{ρ(t) < r, N1(t) ≥ 1, N2(t) ≥ 1}.   (5.9.38)


Let us evaluate separately the joint probabilities on the right-hand side of (5.9.38).

• Evaluation of Pr{ρ(t) < r, N1(t) = 0, N2(t) = 0}. We note that the following equalities for random events hold:

{N1(t) = 0} = {X1(t) ∈ S_{c1t}} = {R1(t) = c1t},
{N2(t) = 0} = {X2(t) ∈ S_{c2t}} = {R2(t) = c2t}.

Then, taking into account that cos(θ(t)) = cos(ϕ(t)) = η(t) (in distribution) and using (5.9.10), we have for the first joint distribution in (5.9.38):

Pr{ρ(t) < r, N1(t) = 0, N2(t) = 0}
= e^{−(λ1+λ2)t} Pr{ R1²(t) + R2²(t) − 2R1(t)R2(t) cos(θ(t)) < r² | R1(t) = c1t, R2(t) = c2t }
= e^{−(λ1+λ2)t} Pr{ (c1t)² + (c2t)² − 2c1c2t² cos(θ(t)) < r² }
= e^{−(λ1+λ2)t} Pr{ η(t) > ((c1t)² + (c2t)² − r²)/(2c1c2t²) }

= e^{−(λ1+λ2)t},   if ((c1t)² + (c2t)² − r²)/(2c1c2t²) ≤ −1,
= (e^{−(λ1+λ2)t}/π) arccos( ((c1t)² + (c2t)² − r²)/(2c1c2t²) ),   if −1 < ((c1t)² + (c2t)² − r²)/(2c1c2t²) ≤ 1,
= 0,   if ((c1t)² + (c2t)² − r²)/(2c1c2t²) > 1;

that is,

= 0,   if r ∈ (−∞, (c1−c2)t],
= (e^{−(λ1+λ2)t}/π) arccos( ((c1t)² + (c2t)² − r²)/(2c1c2t²) ),   if r ∈ ((c1−c2)t, (c1+c2)t],
= e^{−(λ1+λ2)t},   if r ∈ ((c1+c2)t, +∞).   (5.9.39)

Formula (5.9.39) yields the first joint distribution in (5.9.38), related to the case when no Poisson events occur up to time instant t > 0 and, therefore, the random points X1(t), X2(t) are located on the spheres S_{c1t}, S_{c2t}, respectively.

• Evaluation of Pr{ρ(t) < r, N1(t) ≥ 1, N2(t) = 0}. Note that

{N1(t) ≥ 1} = {X1(t) ∈ int B_{c1t}} = {R1(t) ∈ (0, c1t)},
{N2(t) = 0} = {X2(t) ∈ S_{c2t}} = {R2(t) = c2t}.

Taking into account that cos(θ(t)) = cos(ϕ(t)) = η(t) (in distribution), we have for the second joint distribution in (5.9.38):


Pr{ρ(t) < r, N1(t) ≥ 1, N2(t) = 0}
= e^{−λ2t} Pr{ R1²(t) + R2²(t) − 2R1(t)R2(t) cos(θ(t)) < r², R1(t) ∈ (0, c1t) | R2(t) = c2t }
= e^{−λ2t} Pr{ R1²(t) + (c2t)² − 2c2tR1(t) cos(θ(t)) < r², R1(t) ∈ (0, c1t) }
= e^{−λ2t} Pr{ η(t) > (R1²(t) + (c2t)² − r²)/(2c2tR1(t)), R1(t) ∈ (0, c1t) }
= e^{−λ2t} ∫_0^{c1t} Pr{ η(t) > (ξ² + (c2t)² − r²)/(2c2tξ) | R1(t) = ξ } Pr{R1(t) ∈ dξ}.   (5.9.40)

According to (5.9.10),

Pr{ η(t) > (ξ² + (c2t)² − r²)/(2c2tξ) | R1(t) = ξ }
= 1,   if (ξ² + (c2t)² − r²)/(2c2tξ) ∈ (−∞, −1] and ξ ∈ (0, c1t),
= (1/π) arccos( (ξ² + (c2t)² − r²)/(2c2tξ) ),   if (ξ² + (c2t)² − r²)/(2c2tξ) ∈ (−1, 1] and ξ ∈ (0, c1t),
= 0,   if (ξ² + (c2t)² − r²)/(2c2tξ) ∈ (1, +∞) and ξ ∈ (0, c1t).

This probability depends on the value of r. One can check that, for ξ ∈ (0, c1t), we get the equalities:

Pr{ η(t) > (ξ² + (c2t)² − r²)/(2c2tξ) | R1(t) = ξ }
= (1/π) arccos( (ξ² + (c2t)² − r²)/(2c2tξ) ),   if ξ ∈ (c2t − r, β(r)),
= 0,   otherwise,   (5.9.41)

for r ∈ (0, c2t], and

Pr{ η(t) > (ξ² + (c2t)² − r²)/(2c2tξ) | R1(t) = ξ }
= 1,   if ξ ∈ (0, r − c2t],
= (1/π) arccos( (ξ² + (c2t)² − r²)/(2c2tξ) ),   if ξ ∈ (r − c2t, β(r)),
= 0,   otherwise,   (5.9.42)

for r ∈ (c2t, (c1+c2)t], where β(r) = min( c2t + r, c1t ).


Substituting (5.9.41) into (5.9.40) and using (5.9.4), we obtain for r ∈ (0, c2t]:

Pr{ρ(t) < r, N1(t) ≥ 1, N2(t) = 0}
= (λ1 e^{−(λ1+λ2)t}/(π c1)) ∫_{c2t−r}^{β(r)} arccos( (ξ² + (c2t)² − r²)/(2c2tξ) ) (ξ/√(c1²t² − ξ²)) exp( (λ1/c1) √(c1²t² − ξ²) ) dξ.   (5.9.43)

Similarly, substituting (5.9.42) into (5.9.40) and using (5.9.4), we have for r ∈ (c2t, (c1+c2)t]:

Pr{ρ(t) < r, N1(t) ≥ 1, N2(t) = 0}
= (λ1 e^{−(λ1+λ2)t}/c1) ∫_0^{r−c2t} (ξ/√(c1²t² − ξ²)) exp( (λ1/c1) √(c1²t² − ξ²) ) dξ
+ (λ1 e^{−(λ1+λ2)t}/(π c1)) ∫_{r−c2t}^{β(r)} arccos( (ξ² + (c2t)² − r²)/(2c2tξ) ) (ξ/√(c1²t² − ξ²)) exp( (λ1/c1) √(c1²t² − ξ²) ) dξ.

Taking into account that

∫ (x/√(p² − x²)) e^{q√(p² − x²)} dx = −(1/q) e^{q√(p² − x²)},   q ≠ 0,   |x| ≤ p,   (5.9.44)
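Identity (5.9.44) is immediate to confirm by differentiation, or numerically; the following sketch is our own check (the names `antideriv`, `integrand` and the chosen values of p, q and the integration limits are arbitrary assumptions), comparing a Simpson-rule value of the integral with the difference of the stated antiderivative:

```python
import math

def antideriv(x, p, q):
    # right-hand side of (5.9.44)
    return -math.exp(q * math.sqrt(p * p - x * x)) / q

def integrand(x, p, q):
    # left-hand side integrand of (5.9.44)
    return x / math.sqrt(p * p - x * x) * math.exp(q * math.sqrt(p * p - x * x))

p, q, a, b, n = 2.0, 1.5, 0.0, 1.0, 2000
h = (b - a) / n
simpson = sum((1 if k in (0, n) else 4 if k % 2 else 2) * integrand(a + k * h, p, q)
              for k in range(n + 1)) * h / 3
print(simpson, antideriv(b, p, q) - antideriv(a, p, q))  # the two values agree
```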

we finally obtain:

Pr{ρ(t) < r, N1(t) ≥ 1, N2(t) = 0}
= e^{−λ2t} [ 1 − exp( −λ1t + (λ1/c1) √(c1²t² − (r − c2t)²) ) ]
+ (λ1 e^{−(λ1+λ2)t}/(π c1)) ∫_{r−c2t}^{β(r)} arccos( (ξ² + (c2t)² − r²)/(2c2tξ) ) (ξ/√(c1²t² − ξ²)) exp( (λ1/c1) √(c1²t² − ξ²) ) dξ,   (5.9.45)

for r ∈ (c2t, (c1+c2)t].

• Evaluation of Pr{ρ(t) < r, N1(t) = 0, N2(t) ≥ 1}. Since

{N1(t) = 0} = {X1(t) ∈ S_{c1t}} = {R1(t) = c1t},
{N2(t) ≥ 1} = {X2(t) ∈ int B_{c2t}} = {R2(t) ∈ (0, c2t)},

then, similarly as above, we get

Pr{ρ(t) < r, N1(t) = 0, N2(t) ≥ 1} = e^{−λ1t} ∫_0^{c2t} Pr{ η(t) > (ξ² + (c1t)² − r²)/(2c1tξ) | R2(t) = ξ } Pr{R2(t) ∈ dξ}.


According to (5.9.10), the conditional probability in the integrand is:

Pr{ η(t) > (ξ² + (c1t)² − r²)/(2c1tξ) | R2(t) = ξ }
= 1,   if (ξ² + (c1t)² − r²)/(2c1tξ) ∈ (−∞, −1] and ξ ∈ (0, c2t),
= (1/π) arccos( (ξ² + (c1t)² − r²)/(2c1tξ) ),   if (ξ² + (c1t)² − r²)/(2c1tξ) ∈ (−1, 1] and ξ ∈ (0, c2t),
= 0,   if (ξ² + (c1t)² − r²)/(2c1tξ) ∈ (1, +∞) and ξ ∈ (0, c2t).

This formula splits into the following three cases:

Pr{ η(t) > (ξ² + (c1t)² − r²)/(2c1tξ) | R2(t) = ξ } = 0,   for r ∈ (0, (c1−c2)t];

= (1/π) arccos( (ξ² + (c1t)² − r²)/(2c1tξ) ),   if ξ ∈ (c1t − r, c2t),
= 0,   otherwise,
for r ∈ ((c1−c2)t, c1t]; and

= 1,   if ξ ∈ (0, r − c1t],
= (1/π) arccos( (ξ² + (c1t)² − r²)/(2c1tξ) ),   if ξ ∈ (r − c1t, c2t],
= 0,   otherwise,
for r ∈ (c1t, (c1+c2)t].

Taking into account (5.9.4), we therefore obtain:

Pr{ρ(t) < r, N1(t) = 0, N2(t) ≥ 1} = 0,   for r ∈ (0, (c1−c2)t],   (5.9.46)

Pr{ρ(t) < r, N1(t) = 0, N2(t) ≥ 1}
= (λ2 e^{−(λ1+λ2)t}/(π c2)) ∫_{c1t−r}^{c2t} arccos( (ξ² + (c1t)² − r²)/(2c1tξ) ) (ξ/√(c2²t² − ξ²)) exp( (λ2/c2) √(c2²t² − ξ²) ) dξ,   (5.9.47)

for r ∈ ((c1−c2)t, c1t],


and

Pr{ρ(t) < r, N1(t) = 0, N2(t) ≥ 1}
= e^{−λ1t} [ 1 − exp( −λ2t + (λ2/c2) √(c2²t² − (r − c1t)²) ) ]
+ (λ2 e^{−(λ1+λ2)t}/(π c2)) ∫_{r−c1t}^{c2t} arccos( (ξ² + (c1t)² − r²)/(2c1tξ) ) (ξ/√(c2²t² − ξ²)) exp( (λ2/c2) √(c2²t² − ξ²) ) dξ,   (5.9.48)

for r ∈ (c1t, (c1+c2)t].

• Evaluation of Pr{ρ(t) < r, N1(t) ≥ 1, N2(t) ≥ 1}. Since

{N1(t) ≥ 1} = {X1(t) ∈ int B_{c1t}} = {R1(t) ∈ (0, c1t)},
{N2(t) ≥ 1} = {X2(t) ∈ int B_{c2t}} = {R2(t) ∈ (0, c2t)},

then, taking into account that R1(t) and R2(t) are independent, we have:

Pr{ρ(t) < r, N1(t) ≥ 1, N2(t) ≥ 1}
= Pr{ R1²(t) + R2²(t) − 2R1(t)R2(t) cos(θ(t)) < r², R1(t) ∈ (0, c1t), R2(t) ∈ (0, c2t) }
= Pr{ η(t) > (R1²(t) + R2²(t) − r²)/(2R1(t)R2(t)), R1(t) ∈ (0, c1t), R2(t) ∈ (0, c2t) }
= ∫_0^{c1t} ∫_0^{c2t} Pr{ η(t) > (ξ² + ζ² − r²)/(2ξζ) | R1(t) = ξ, R2(t) = ζ } Pr{R1(t) ∈ dξ} Pr{R2(t) ∈ dζ}.   (5.9.49)

According to (5.9.10), the conditional probability in the integrand is:

Pr{ η(t) > (ξ² + ζ² − r²)/(2ξζ) | R1(t) = ξ, R2(t) = ζ }
= 1,   if (ξ² + ζ² − r²)/(2ξζ) ∈ (−∞, −1],
= (1/π) arccos( (ξ² + ζ² − r²)/(2ξζ) ),   if (ξ² + ζ² − r²)/(2ξζ) ∈ (−1, 1],
= 0,   if (ξ² + ζ² − r²)/(2ξζ) ∈ (1, +∞);

in other words,

= 1,   if ξ + ζ ≤ r,
= (1/π) arccos( (ξ² + ζ² − r²)/(2ξζ) ),   if ξ + ζ > r and |ξ − ζ| ≤ r,
= 0,   if |ξ − ζ| > r.

Therefore, (5.9.49) becomes

Pr{ρ(t) < r, N1(t) ≥ 1, N2(t) ≥ 1} = I1(r, t) + (1/π) I2(r, t),   (5.9.50)
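The three cases above are exactly the triangle-inequality constraints on the sides ξ, ζ, r, and since Pr{η(t) > y} = (1/π) arccos y means that η(t) is distributed as the cosine of an angle uniform on (0, π), the conditional probability is easy to confirm by simulation. The sketch below is our own illustration (the name `p_closer`, the seed and the test point are arbitrary assumptions):

```python
import math, random

def p_closer(xi, zeta, r):
    """Pr{ xi^2 + zeta^2 - 2*xi*zeta*cos(theta) < r^2 } for theta uniform on
    (0, pi), i.e. the conditional probability in the integrand of (5.9.49)."""
    if xi + zeta <= r:
        return 1.0
    if abs(xi - zeta) > r:
        return 0.0
    return math.acos((xi * xi + zeta * zeta - r * r) / (2.0 * xi * zeta)) / math.pi

rng = random.Random(2)
xi, zeta, r, n = 1.3, 0.9, 0.7, 200000
hits = sum(xi * xi + zeta * zeta
           - 2.0 * xi * zeta * math.cos(rng.uniform(0.0, math.pi)) < r * r
           for _ in range(n))
print(hits / n, p_closer(xi, zeta, r))  # empirical vs exact
```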

where it is denoted

I1(r, t) = ∬_{ξ+ζ≤r} Pr{R1(t) ∈ dξ} Pr{R2(t) ∈ dζ},

For any t > 0, the process X(t) is concentrated in the three-dimensional ball of radius ct:

B³_{ct} = { x = (x1, x2, x3) ∈ R³ : ‖x‖² = x1² + x2² + x3² ≤ c²t² }.

Consider the conditional characteristic functions of the process X(t):

Hn(t) = E{ e^{i⟨α,X(t)⟩} | N(t) = n },   n ≥ 1,

where, recall, N(t) is the number of Poisson events that have occurred in the time interval (0, t), α = (α1, α2, α3) ∈ R³ is the real three-dimensional vector of inversion parameters and ⟨α, X(t)⟩ denotes the inner product of the vectors α and X(t). According to (4.2.5), the conditional characteristic functions have the form:

Hn(t) = (n!/tⁿ) ∫_0^t dτ1 ∫_{τ1}^t dτ2 ⋯ ∫_{τ_{n−1}}^t dτn ∏_{j=1}^{n+1} [ √2 Γ(3/2) J_{1/2}(c(τj − τ_{j−1})‖α‖) / (c(τj − τ_{j−1})‖α‖)^{1/2} ],

(6.1.1)

n ≥ 1, where ‖α‖ = √(α1² + α2² + α3²). Taking into account that

Γ(3/2) = √π/2,   J_{1/2}(x) = √(2/(πx)) sin x,

formula (6.1.1) becomes:

Hn(t) = (n!/tⁿ) ∫_0^t dτ1 ∫_{τ1}^t dτ2 ⋯ ∫_{τ_{n−1}}^t dτn ∏_{j=1}^{n+1} [ sin(c(τj − τ_{j−1})‖α‖) / (c(τj − τ_{j−1})‖α‖) ],   n ≥ 1.   (6.1.2)

The expression on the right-hand side of (6.1.2) cannot, apparently, be evaluated for arbitrary n and, therefore, the conditional characteristic functions Hn(t) cannot, in general, be calculated explicitly. We remind the reader that in Section 4.4.3 we have already noted such noncomputability of the conditional characteristic functions of the three-dimensional Markov random flight for arbitrary n, where a respective formula was given in terms of an inverse Laplace transform (see (4.4.6)). Nevertheless, as we will see later, in the particular case n = 1 the conditional characteristic function H1(t) obtained from (6.1.2) can be explicitly inverted, which yields a closed-form expression for the conditional density p1(x, t) corresponding to the single change of direction (see Theorem 6.2.1). For the particular cases n = 2 and n = 3 one can also obtain series representations of the respective conditional characteristic functions H2(t) and H3(t) (see Theorem 6.4.1); however, inverting them seems to be an impracticable problem.

From (6.1.2) it follows that the characteristic function H(t) of the process X(t) can be represented in the form of the uniformly converging series

H(t) = e^{−λt} Σ_{n=0}^∞ λⁿ ∫_0^t dτ1 ∫_{τ1}^t dτ2 ⋯ ∫_{τ_{n−1}}^t dτn ∏_{j=1}^{n+1} [ sin(c(τj − τ_{j−1})‖α‖) / (c(τj − τ_{j−1})‖α‖) ],   (6.1.3)

where, in view of (4.3.4),

H0(t) = sin(ct‖α‖) / (ct‖α‖).


According to (4.3.6) and in view of [63, Formula 9.121(27)], the characteristic function H(t) has the following alternative form in terms of inverse Laplace transforms:

H(t) = e^{−λt} Σ_{n=0}^∞ λⁿ L⁻¹[ ( (1/s) F(1/2, 1; 3/2; −(c‖α‖)²/s²) )^{n+1} ](t)
     = e^{−λt} Σ_{n=0}^∞ (λⁿ/(c‖α‖)^{n+1}) L⁻¹[ ( arctg(c‖α‖/s) )^{n+1} ](t).   (6.1.4)

This formula also follows from (4.4.6).
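The term H0(t) = sin(ct‖α‖)/(ct‖α‖) is simply the characteristic function of the uniform distribution on the sphere of radius ct, which can be confirmed by direct simulation. The sketch below is our own check, with arbitrary illustrative values of c, t and α:

```python
import math, random

rng = random.Random(3)
c, t = 1.0, 2.0
alpha = (0.4, -0.3, 0.5)
norm_a = math.sqrt(sum(a * a for a in alpha))
n, acc = 200000, 0.0
for _ in range(n):
    g = [rng.gauss(0.0, 1.0) for _ in range(3)]  # isotropic direction via Gaussians
    ng = math.sqrt(sum(v * v for v in g))
    # real part of e^{i<alpha, X>} for X uniform on the sphere of radius c*t
    acc += math.cos(sum(a * (c * t * v / ng) for a, v in zip(alpha, g)))
mc = acc / n
exact = math.sin(c * t * norm_a) / (c * t * norm_a)
print(mc, exact)  # agree to Monte Carlo accuracy
```

The imaginary part averages to zero by the symmetry of the sphere, so only the cosine term needs to be sampled.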

6.2 Discontinuous term of distribution

As noted above, the expression on the right-hand side of (6.1.2) cannot be evaluated for arbitrary n. However, for the important particular case n = 1, corresponding to the single change of direction, the conditional characteristic function H1(t) can be evaluated in explicit form. Due to this fact we can, by inverting H1(t), obtain an exact formula for the conditional density p1(x, t), which enables us to give an expression for the discontinuous term of the distribution of X(t).

Theorem 6.2.1. For any t > 0, the conditional density p1(x, t) corresponding to the single change of direction has the form:

p1(x, t) = (1/(4π(ct)²‖x‖)) ln( (ct + ‖x‖)/(ct − ‖x‖) ),   (6.2.1)

x = (x1, x2, x3) ∈ int B³_{ct}, ‖x‖ = √(x1² + x2² + x3²).

Proof. From (6.1.2), for n = 1, we have:

H1(t) = (1/t) ∫_0^t [ sin(cτ‖α‖)/(cτ‖α‖) ] [ sin(c(t−τ)‖α‖)/(c(t−τ)‖α‖) ] dτ
= (1/((c‖α‖)² t)) [ (sin(ct‖α‖)/t) ∗ (sin(ct‖α‖)/t) ](t)
= (1/(ct‖α‖)²) [ sin(ct‖α‖) Si(2ct‖α‖) + cos(ct‖α‖) Ci(2ct‖α‖) ],

(6.2.2)

where Si(x) and Ci(x) are the incomplete integral sine and cosine, respectively, given by (1.9.21). Note that in the last step we have used formula (1.9.23). Relation (6.2.2) for the conditional characteristic function H1(t) exactly coincides with the earlier obtained formula (4.4.9).

To prove the theorem, we need to show that the Fourier transform of the conditional density (6.2.1) in the ball B³_{ct} coincides with function (6.2.2). Passing to three-dimensional polar coordinates, we have:

∫_{B³_{ct}} e^{i⟨α,x⟩} p1(x, t) μ(dx)
= (1/(4π(ct)²)) ∫_0^{ct} dr { r ln( (ct + r)/(ct − r) ) ∫_0^π ∫_0^{2π} e^{ir(α1 sinθ1 sinθ2 + α2 sinθ1 cosθ2 + α3 cosθ1)} sinθ1 dθ1 dθ2 }.


According to [63, Formula 4.624],

∫_0^π ∫_0^{2π} e^{ir(α1 sinθ1 sinθ2 + α2 sinθ1 cosθ2 + α3 cosθ1)} sinθ1 dθ1 dθ2 = 4π sin(r‖α‖)/(r‖α‖).

Therefore, by applying Lemma 1.9.9 (see formula (1.9.24)), we get:

∫_{B³_{ct}} e^{i⟨α,x⟩} p1(x, t) μ(dx) = (1/((ct)²‖α‖)) ∫_0^{ct} sin(r‖α‖) ln( (ct + r)/(ct − r) ) dr
= (1/(ct‖α‖)) ∫_0^1 sin(ct‖α‖z) ln( (1 + z)/(1 − z) ) dz
= (1/(ct‖α‖)²) [ sin(ct‖α‖) Si(2ct‖α‖) + cos(ct‖α‖) Ci(2ct‖α‖) ],

and this coincides with (6.2.2). The theorem is proved.

Note that the conditional density (6.2.1) has exactly the same form as the conditional density (4.9.23) obtained earlier from a general formula.

Remark 6.2.1. In view of the well-known equality connecting the logarithm and the hyperbolic arctangent,

Arth(z) = (1/2) ln( (1 + z)/(1 − z) ),

one can represent the conditional density (6.2.1) as follows:

p1(x, t) = (1/(2π(ct)²‖x‖)) Arth( ‖x‖/(ct) ).

Remark 6.2.2. The function

λt e^{−λt} p1(x, t) = (λt e^{−λt}/(4π(ct)²‖x‖)) ln( (ct + ‖x‖)/(ct − ‖x‖) )

represents the discontinuous term of the density of the three-dimensional symmetric Markov random flight X(t). This interesting fact, that the three-dimensional Markov random flight (like the two-dimensional one) has an infinite discontinuity on the boundary of the diffusion area, was already noted in Remark 4.9.2, where an exhaustive explanation of this phenomenon, based on the properties of hypergeometric series, was given.
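It is a useful check that (6.2.1) is indeed a probability density: integrating over the ball in spherical coordinates reduces the normalization to ∫_0^1 z ln((1+z)/(1−z)) dz = 1. The numerical sketch below is our own (the substitution z = 1 − e^{−u}, used to tame the logarithmic endpoint singularity, is an implementation choice):

```python
import math

def integrand(u):
    """z*ln((1+z)/(1-z)) dz rewritten with z = 1 - e^{-u}, dz = e^{-u} du,
    so that ln(1/(1-z)) = u and the endpoint singularity at z = 1 disappears."""
    z = 1.0 - math.exp(-u)
    return z * (math.log(1.0 + z) + u) * math.exp(-u)

a, b, n = 0.0, 40.0, 4000  # the tail beyond u = 40 is negligible
h = (b - a) / n
total = sum((1 if k in (0, n) else 4 if k % 2 else 2) * integrand(a + k * h)
            for k in range(n + 1)) * h / 3
print(total)  # close to 1, so p1 integrates to 1 over the ball
```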

6.3 Limit theorem

A general limit theorem for the symmetric Markov random flight in the Euclidean space R^m of arbitrary dimension m ≥ 2 was proved in Section 4.8. Its proof was based on passing to the limit, under the Kac scaling condition (4.8.1), in the Laplace transform of the characteristic function of the process, followed by a double inversion of the resulting function. In this section we give another proof of the weak convergence, under Kac's condition (4.8.1), of the three-dimensional Markov random flight X(t) to the Wiener process, based on Kurtz's diffusion Theorem 1.4.1.


Since the random flight X(t) depends on two parameters, namely the speed of motion c and the intensity of switchings λ, it can be considered as the two-parameter family of stochastic processes X(t) = {X_c^λ(t), c > 0, λ > 0}. For the sake of simplicity we omit these indices hereafter, bearing in mind, however, that we are operating with the two-parameter family of stochastic processes. Thus, we should study the limiting behaviour of this family of processes when its parameters are connected with each other by Kac's condition (4.8.1).

Consider the joint densities fω = fω(x, t), x = (x1, x2, x3) ∈ int B³_{ct}, ω ∈ S1³, t > 0, of the particle's position X(t) in the space R³ and its direction Φ(t) at arbitrary time moment t > 0, defined by the equality:

fω(x, t) μ(dx) ν(dω) = Pr{X(t) ∈ dx, Φ(t) ∈ dω},

where ν(dω) is the measure of the infinitesimal solid angle dω. It is well known that in the space R³ any direction ω is determined by the ordered pair of planar angles ω = (θ, η), θ ∈ [0, π), η ∈ [0, 2π), and the measure ν(dω) of the infinitesimal solid angle dω is

ν(dω) = sin θ dθ dη.   (6.3.1)

The Kolmogorov equation for the densities fθ,η is represented by the continuum system of integro-differential equations

∂fθ,η/∂t = −c sinθ cosη ∂fθ,η/∂x1 − c sinθ sinη ∂fθ,η/∂x2 − c cosθ ∂fθ,η/∂x3 − λ fθ,η + (λ/4π) ∫_{S1³} fω ν(dω),   (6.3.2)

θ ∈ [0, π), η ∈ [0, 2π).

Consider the Banach space B of twice continuously differentiable functions on R³ × (0, ∞) vanishing at infinity. The densities fθ,η can be considered as the two-parameter family of functions f = {fθ,η, θ ∈ [0, π), η ∈ [0, 2π)} belonging to the Banach space B. Introduce the two-parameter family of operators A = {Aω, ω ∈ S1³} = {Aθ,η, θ ∈ [0, π), η ∈ [0, 2π)} acting in B, where

Aω = Aθ,η = −c sinθ cosη ∂/∂x1 − c sinθ sinη ∂/∂x2 − c cosθ ∂/∂x3.

Define the action of A on f as follows:

Af = { δ(θ, ϕ) δ(η, ψ) Aθ,η fϕ,ψ,   θ, ϕ ∈ [0, π), η, ψ ∈ [0, 2π) },   (6.3.3)

where δ(x, y) is the generalized Kronecker delta-symbol of rank 2. Introduce now the operator Λ acting on f by the rule:

Λf = −λf + (λ/4π) ∫_{S1³} fω ν(dω),   (6.3.4)

where, as usual, λf = {λ fθ,η, θ ∈ [0, π), η ∈ [0, 2π)}. Then equation (6.3.2) can be written as follows:

∂f/∂t = Af + Λf,

and this is the standard form of writing an abstract equation of random evolution (see Section 1.2). Now we can formulate the diffusion approximation theorem for the process X(t).


Theorem 6.3.1. Let the Kac condition (4.8.1) be fulfilled. Then in the Banach space B the semigroups generated by the transition functions of the three-dimensional symmetric Markov random flight X(t) converge to the semigroup generated by the transition function of the homogeneous Wiener process in R³ with the generator

G = (ρ/3) Δ,   (6.3.5)

where Δ is the three-dimensional Laplace operator.

Proof. According to the conditions of Kurtz's Theorem 1.4.1, we should find a solution h to the equation

Λh = −Af.   (6.3.6)

Let us show that, for any differentiable function (family) f, a solution to equation (6.3.6) is given by the formula:

h = (1/λ) Af + (1/4π) ∫_{S1³} fω ν(dω).   (6.3.7)

Really, in view of the trigonometric equalities

∫_0^{2π} sinη dη = 0,   ∫_0^{2π} cosη dη = 0,   ∫_0^π sinθ cosθ dθ = 0,

we have:

∫_{S1³} Af ν(dω) = ( ∫_{S1³} Aω ν(dω) ) f = ( ∫_0^π dθ ∫_0^{2π} Aθ,η sinθ dη ) f
= [ −c ( ∫_0^π sin²θ dθ ) ( ∫_0^{2π} cosη dη ) ∂/∂x1
  − c ( ∫_0^π sin²θ dθ ) ( ∫_0^{2π} sinη dη ) ∂/∂x2
  − c ( ∫_0^π sinθ cosθ dθ ) ( ∫_0^{2π} dη ) ∂/∂x3 ] f = 0.   (6.3.8)

Then, taking into account (6.3.3) and (6.3.4), we get:

Λh = −λ ( (1/λ) Af + (1/4π) ∫_{S1³} fω ν(dω) ) + (λ/4π) ∫_{S1³} ( (1/λ) Af + (1/4π) ∫_{S1³} fω ν(dω) ) ν(dχ)
= −Af − (λ/4π) ∫_{S1³} fω ν(dω) + (1/4π) ∫_{S1³} Af ν(dχ) + (λ/16π²) ∫_{S1³} ν(dχ) ∫_{S1³} fω ν(dω)
= −Af − (λ/4π) ∫_{S1³} fω ν(dω) + (1/4π) ( ∫_{S1³} Aχ ν(dχ) ) f + (λ/4π) ∫_{S1³} fω ν(dω)
= −Af,

and, therefore, function (6.3.7) is really a solution to equation (6.3.6). Since the limiting distribution of the governing Markov process on the sphere S1³ is uniform with the density 1/(4π), in view of (1.4.8), the projector P is given by the formula:

Pf = (1/4π) ∫_{S1³} fω ν(dω).   (6.3.9)


Then, according to (1.4.4), (6.3.7), (6.3.8) and (6.3.9), we have:

C0 f = P A h = (1/4π) ∫_{S1³} ( (1/λ) A²f + A (1/4π) ∫_{S1³} fω ν(dω) ) ν(dχ)
= (1/(4πλ)) ∫_{S1³} A²f ν(dω) + (1/16π²) ( ∫_{S1³} A ν(dχ) ) ∫_{S1³} fω ν(dω)
= (1/(4πλ)) ∫_{S1³} A²f ν(dω).

Let us show that, for any f ∈ B, the following equality holds:

∫_{S1³} A²f ν(dω) = (4πc²/3) Δf,   (6.3.10)

where Δ is the three-dimensional Laplacian. Really, taking into account (6.3.1), we have:

∫_{S1³} A²f ν(dω) = ( ∫_0^π dθ ∫_0^{2π} A²θ,η sinθ dη ) f
= [ ∫_0^π dθ ∫_0^{2π} ( −c sinθ cosη ∂/∂x1 − c sinθ sinη ∂/∂x2 − c cosθ ∂/∂x3 )² sinθ dη ] f.

By squaring the differential operator on the right-hand side of this equality, after some simple groupings we arrive at:

∫_{S1³} A²f ν(dω) = [ c² (∫_0^π sin³θ dθ)(∫_0^{2π} cos²η dη) ∂²/∂x1²
+ c² (∫_0^π sin³θ dθ)(∫_0^{2π} sin²η dη) ∂²/∂x2²
+ c² (∫_0^π sinθ cos²θ dθ)(∫_0^{2π} dη) ∂²/∂x3²
+ 2c² (∫_0^π sin³θ dθ)(∫_0^{2π} sinη cosη dη) ∂²/∂x1∂x2
+ 2c² (∫_0^π sin²θ cosθ dθ)(∫_0^{2π} cosη dη) ∂²/∂x1∂x3
+ 2c² (∫_0^π sin²θ cosθ dθ)(∫_0^{2π} sinη dη) ∂²/∂x2∂x3 ] f.

In view of the trigonometric equalities

∫_0^{2π} sinη dη = 0,   ∫_0^{2π} cosη dη = 0,   ∫_0^{2π} sinη cosη dη = 0,   ∫_0^π sin²θ cosθ dθ = 0,

the terms containing mixed derivatives vanish, and we get:

∫_{S1³} A²f ν(dω) = [ c² (∫_0^π sin³θ dθ)(∫_0^{2π} cos²η dη) ∂²/∂x1² + c² (∫_0^π sin³θ dθ)(∫_0^{2π} sin²η dη) ∂²/∂x2² + c² (∫_0^π sinθ cos²θ dθ)(∫_0^{2π} dη) ∂²/∂x3² ] f.

Taking into account the trigonometric equalities

∫_0^{2π} cos²η dη = π,   ∫_0^{2π} sin²η dη = π,   ∫_0^π sin³θ dθ = 4/3,   ∫_0^π sinθ cos²θ dθ = 2/3,

we obtain:

∫_{S1³} A²f ν(dω) = (4πc²/3) Δf,

proving (6.3.10). Thus,

C0 f = (1/(4πλ)) ∫_{S1³} A²f ν(dω) = (1/(4πλ)) (4πc²/3) Δf = (c²/(3λ)) Δf.

Hence, operator C0 has the form

C0 = (c²/(3λ)) Δ   (6.3.11)

and, therefore, the generator (6.3.5), under the Kac condition (4.8.1), is the limiting operator of the Markov random flight X(t). Condition (1.4.5) is also fulfilled because from the form of operator (6.3.11) it follows that, for any twice continuously differentiable function f, there exists a solution g to the equation

(μ − C0) g = f   (6.3.12)

for arbitrary μ > 0. This follows from the fact that, for any μ > 0, equation (6.3.12) with operator (6.3.11) is the inhomogeneous Klein-Gordon equation (or the Helmholtz equation with a purely imaginary constant) with a sufficiently smooth right-hand side. The existence of a solution of such an equation for any μ > 0 is a well-known fact of the general theory of partial differential equations (see, for example, [207, Chapter V, Section 30] or [23, Chapter IV]). Thus, condition (1.4.5) is also fulfilled. Therefore, according to Kurtz's Theorem 1.4.1, we can conclude that, under the Kac condition (4.8.1), the three-dimensional Markov random flight X(t) weakly converges to the Wiener process in R³ with generator (6.3.5). The theorem is thus completely proved.
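The diffusive behaviour can also be seen numerically. For the flight with uniformly chosen directions the velocity autocorrelation is c² e^{−λs}, which gives the elementary closed form E‖X(t)‖² = 2c²(λt − 1 + e^{−λt})/λ²; for λt large this approaches 2c²t/λ, the mean squared displacement of the Wiener process with generator (c²/(3λ))Δ. The simulation below is our own sketch (the helper name `flight3d`, the seed and the sample sizes are arbitrary assumptions, and the closed form is our own elementary computation, not a formula from this section):

```python
import math, random

def flight3d(c, lam, t, rng):
    """3D symmetric Markov random flight: speed c, direction resampled
    uniformly on the unit sphere at Poisson(lam) switching times."""
    pos = [0.0, 0.0, 0.0]
    s = 0.0
    while s < t:
        dt = min(rng.expovariate(lam), t - s)
        g = [rng.gauss(0.0, 1.0) for _ in range(3)]  # isotropic direction
        ng = math.sqrt(sum(v * v for v in g))
        for i in range(3):
            pos[i] += c * dt * g[i] / ng
        s += dt
    return pos

rng = random.Random(4)
c, lam, t, n = 1.0, 2.0, 1.0, 20000
msd = sum(sum(v * v for v in flight3d(c, lam, t, rng)) for _ in range(n)) / n
exact = 2.0 * c * c * (lam * t - 1.0 + math.exp(-lam * t)) / (lam * lam)
print(msd, exact)  # agree to Monte Carlo accuracy
```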

6.4 Asymptotic relation for the transition density

As was noted above, the conditional characteristic functions (6.1.2) cannot be explicitly evaluated and inverted for arbitrary n. This means that we cannot obtain closed-form expressions for the respective conditional densities. Nevertheless, for some small n, namely for n = 2 and n = 3, one can obtain series representations of the respective conditional characteristic functions $H_2(t)$ and $H_3(t)$. This enables us to derive asymptotic relations for the unconditional characteristic function and for the transition density of the three-dimensional symmetric Markov random flight X(t) that give a fairly good approximation on small time intervals. The derivation of such asymptotic formulas is the main subject of this section.

6.4.1 Auxiliary lemmas

In this subsection we establish a series of auxiliary lemmas that will be used in the proofs of theorems related to the asymptotic behaviour of the transition density of the three-dimensional Markov random flight X(t).

Lemma 6.4.1. For arbitrary integer $n \ge 0$ and for arbitrary real $a \ne 0, -1, -2, \dots$, the following formula holds:
$$\sum_{k=0}^{n} \frac{\Gamma\left(k+\frac12\right)\Gamma\left(n-k+\frac12\right)}{k!\,(n-k)!\,(2k+a)} = \frac{\pi\,\Gamma\left(\frac{a}{2}\right)\Gamma\left(n+\frac{a+1}{2}\right)}{(2n+a)\,\Gamma\left(\frac{a+1}{2}\right)\Gamma\left(n+\frac{a}{2}\right)}, \qquad n \ge 0, \quad a \ne 0, -1, -2, \dots. \tag{6.4.1}$$

Proof. Using the well-known relations for the Pochhammer symbol
$$(-n)_k = \frac{(-1)^k\,n!}{(n-k)!}, \quad 0 \le k \le n,\ n \ge 0, \qquad \frac{(x)_s}{(x+1)_s} = \frac{x}{x+s}, \quad s > 0, \tag{6.4.2}$$
and the formula for the Euler gamma-function
$$\Gamma\left(k+\frac12\right) = \frac{\sqrt{\pi}}{2^k}\,(2k-1)!!, \qquad k = 0, 1, 2, \dots, \quad (-1)!! = 1, \tag{6.4.3}$$
we can easily check that the sum on the left-hand side of (6.4.1) is
$$\sum_{k=0}^{n} \frac{\Gamma\left(k+\frac12\right)\Gamma\left(n-k+\frac12\right)}{k!\,(n-k)!\,(2k+a)} = \frac{\sqrt{\pi}\,\Gamma\left(n+\frac12\right)}{n!\,a}\; {}_3F_2\!\left(-n, \tfrac12, \tfrac{a}{2};\, -n+\tfrac12,\, \tfrac{a}{2}+1;\, 1\right), \tag{6.4.4}$$
where
$${}_3F_2(\alpha_1, \alpha_2, \alpha_3; \beta_1, \beta_2; z) = \sum_{k=0}^{\infty} \frac{(\alpha_1)_k\,(\alpha_2)_k\,(\alpha_3)_k}{(\beta_1)_k\,(\beta_2)_k}\, \frac{z^k}{k!}$$
is the general hypergeometric function defined by (1.6.35). According to [177, item 7.4.4, page 539, Formula 88], we have:
$${}_3F_2\!\left(-n, \tfrac12, \tfrac{a}{2};\, -n+\tfrac12,\, \tfrac{a}{2}+1;\, 1\right) = \frac{(1)_n \left(\frac{a+1}{2}\right)_n}{\left(\frac12\right)_n \left(\frac{a}{2}+1\right)_n} = \frac{n!\, a\,\sqrt{\pi}\;\Gamma\left(\frac{a}{2}\right)\Gamma\left(n+\frac{a+1}{2}\right)}{\Gamma\left(n+\frac12\right)(2n+a)\,\Gamma\left(\frac{a+1}{2}\right)\Gamma\left(n+\frac{a}{2}\right)}.$$
Substituting this into (6.4.4), we obtain (6.4.1). The lemma is proved.

Now we derive series representations for some powers of the inverse tangent function that will be used in the proofs of asymptotic theorems.


Lemma 6.4.2. For arbitrary $z \in \mathbb{C}$, $|z| < \infty$, $z \ne \pm i$, the following series representation holds:
$$\operatorname{arctg} z = \frac{1}{\sqrt{\pi}}\, \frac{z}{\sqrt{1+z^2}} \sum_{k=0}^{\infty} \frac{\Gamma\left(k+\frac12\right)}{k!\,(2k+1)} \left(\frac{z^2}{1+z^2}\right)^k, \qquad |z| < \infty, \quad z \ne \pm i. \tag{6.4.5}$$
The series in (6.4.5) is convergent uniformly in z.

Proof. Using the well-known series representation of the inverse tangent function, see [63, formula 1.644(1)], as well as the formulas $(2k)!! = 2^k k!$, $k \ge 0$, and (6.4.3), we have (for $|z| < \infty$, $z \ne \pm i$):
$$\begin{aligned}
\operatorname{arctg} z &= \frac{z}{\sqrt{1+z^2}} \sum_{k=0}^{\infty} \frac{(2k)!}{2^{2k}\,(k!)^2\,(2k+1)} \left(\frac{z^2}{1+z^2}\right)^k \\
&= \frac{z}{\sqrt{1+z^2}} \sum_{k=0}^{\infty} \frac{(2k)!!\,(2k-1)!!}{(2^k k!)^2\,(2k+1)} \left(\frac{z^2}{1+z^2}\right)^k \\
&= \frac{z}{\sqrt{1+z^2}} \sum_{k=0}^{\infty} \frac{(2k-1)!!}{2^k k!\,(2k+1)} \left(\frac{z^2}{1+z^2}\right)^k \\
&= \frac{1}{\sqrt{\pi}}\, \frac{z}{\sqrt{1+z^2}} \sum_{k=0}^{\infty} \frac{\sqrt{\pi}\,(2k-1)!!}{2^k}\, \frac{1}{k!\,(2k+1)} \left(\frac{z^2}{1+z^2}\right)^k \\
&= \frac{1}{\sqrt{\pi}}\, \frac{z}{\sqrt{1+z^2}} \sum_{k=0}^{\infty} \frac{\Gamma\left(k+\frac12\right)}{k!\,(2k+1)} \left(\frac{z^2}{1+z^2}\right)^k,
\end{aligned}$$
proving (6.4.5).

Since $\left|\frac{z^2}{1+z^2}\right| < 1$ for arbitrary $z \in \mathbb{C}$, $|z| < \infty$, $z \ne \pm i$, we get the inequality
$$\left|\sum_{k=0}^{\infty} \frac{\Gamma\left(k+\frac12\right)}{k!\,(2k+1)} \left(\frac{z^2}{1+z^2}\right)^k\right| < \sum_{k=0}^{\infty} \frac{\Gamma\left(k+\frac12\right)}{k!\,(2k+1)} = \frac{\pi^{3/2}}{2},$$
proving the uniform convergence of the series in (6.4.5). The lemma is proved.

Lemma 6.4.3. For arbitrary $z \in \mathbb{C}$, $|z| < \infty$, $z \ne \pm i$, the following series representation holds:
$$\bigl(\operatorname{arctg} z\bigr)^2 = \frac{\sqrt{\pi}}{2} \left(\frac{z}{\sqrt{1+z^2}}\right)^2 \sum_{k=0}^{\infty} \frac{k!}{(k+1)\,\Gamma\left(k+\frac32\right)} \left(\frac{z^2}{1+z^2}\right)^k, \qquad |z| < \infty, \quad z \ne \pm i. \tag{6.4.6}$$
The series in (6.4.6) is convergent uniformly in z.

Proof. From (6.4.5) it follows that
$$\bigl(\operatorname{arctg} z\bigr)^2 = \frac{1}{\pi} \left(\frac{z}{\sqrt{1+z^2}}\right)^2 \sum_{k=0}^{\infty} \gamma_k \left(\frac{z^2}{1+z^2}\right)^k, \tag{6.4.7}$$
where the coefficients $\gamma_k$ are given by
$$\gamma_k = \sum_{l=0}^{k} \frac{\Gamma\left(l+\frac12\right)\Gamma\left(k-l+\frac12\right)}{l!\,(k-l)!\,(2l+1)(2k-2l+1)}, \qquad k \ge 0.$$

Since
$$\frac{1}{(2l+1)(2k-2l+1)} = \frac{1}{2(k+1)} \left(\frac{1}{2l+1} + \frac{1}{2k-2l+1}\right),$$
then, taking into account the well-known formulas $z\Gamma(z) = \Gamma(z+1)$, $\Gamma\left(\frac12\right) = \sqrt{\pi}$, we have:
$$\begin{aligned}
\gamma_k &= \frac{1}{k+1} \sum_{l=0}^{k} \frac{\Gamma\left(l+\frac12\right)\Gamma\left(k-l+\frac12\right)}{l!\,(k-l)!\,(2l+1)} \\
&= \frac{1}{k+1}\, \frac{\pi\,\Gamma\left(\frac12\right)\Gamma(k+1)}{(2k+1)\,\Gamma\left(k+\frac12\right)} \qquad \text{(see Lemma 6.4.1 with } a = 1\text{)} \\
&= \frac{\pi^{3/2}\, k!}{2(k+1)\left(k+\frac12\right)\Gamma\left(k+\frac12\right)} = \frac{\pi^{3/2}\, k!}{2(k+1)\,\Gamma\left(k+\frac32\right)}.
\end{aligned}$$
Substituting these coefficients into (6.4.7), we obtain (6.4.6). The uniform convergence of the series in formula (6.4.6) can be established similarly to that of Lemma 6.4.2. This completes the proof of the lemma.

Lemma 6.4.4. For arbitrary $z \in \mathbb{C}$, $|z| < \infty$, $z \ne \pm i$, the following series representation holds:
$$\bigl(\operatorname{arctg} z\bigr)^3 = \frac{1}{\sqrt{\pi}} \left(\frac{z}{\sqrt{1+z^2}}\right)^3 \sum_{k=0}^{\infty} \frac{\Gamma\left(k+\frac12\right)}{k!\,(2k+1)}\; {}_5F_4\!\left(1, 1, 1, -k, -k-\tfrac12;\, -k+\tfrac12, -k+\tfrac12, \tfrac32, 2;\, 1\right) \left(\frac{z^2}{1+z^2}\right)^k, \tag{6.4.8}$$

where
$${}_5F_4(a_1, a_2, a_3, a_4, a_5; b_1, b_2, b_3, b_4; z) = \sum_{k=0}^{\infty} \frac{(a_1)_k\,(a_2)_k\,(a_3)_k\,(a_4)_k\,(a_5)_k}{(b_1)_k\,(b_2)_k\,(b_3)_k\,(b_4)_k}\, \frac{z^k}{k!} \tag{6.4.9}$$
is the general hypergeometric function defined by (1.6.34). The series in (6.4.8) is convergent uniformly in z.

Proof. From (6.4.5) and (6.4.6) it follows that
$$\bigl(\operatorname{arctg} z\bigr)^3 = \frac{1}{2} \left(\frac{z}{\sqrt{1+z^2}}\right)^3 \sum_{k=0}^{\infty} \gamma_k \left(\frac{z^2}{1+z^2}\right)^k, \tag{6.4.10}$$
where the coefficients $\gamma_k$ are given by
$$\gamma_k = \sum_{l=0}^{k} \frac{l!\,\Gamma\left(k-l+\frac12\right)}{(l+1)\,(k-l)!\,(2k-2l+1)\,\Gamma\left(l+\frac32\right)}, \qquad k \ge 0.$$
Applying (6.4.2), (6.4.3) and the formula
$$\Gamma\left(-k+\frac12\right) = \frac{(-1)^k\,\sqrt{\pi}\; 2^k}{(2k-1)!!}, \qquad k \ge 0,$$


after some simple computations, we arrive at the relation
$$\gamma_k = \frac{2\,\Gamma\left(k+\frac12\right)}{\sqrt{\pi}\, k!\,(2k+1)}\; {}_5F_4\!\left(1, 1, 1, -k, -k-\tfrac12;\, -k+\tfrac12, -k+\tfrac12, \tfrac32, 2;\, 1\right), \qquad k \ge 0.$$
Substituting these coefficients into (6.4.10), we obtain (6.4.8). The lemma is proved.

Lemma 6.4.5. For arbitrary $z \in \mathbb{C}$, $|z| < \infty$, $z \ne \pm i$, the following series representation holds:
$$\bigl(\operatorname{arctg} z\bigr)^4 = \frac{\pi}{2} \left(\frac{z}{\sqrt{1+z^2}}\right)^4 \sum_{k=0}^{\infty} \gamma_k \left(\frac{z^2}{1+z^2}\right)^k, \qquad |z| < \infty, \quad z \ne \pm i, \tag{6.4.11}$$
where the coefficients $\gamma_k$ are given by the formula:
$$\gamma_k = \frac{1}{k+2} \sum_{l=0}^{k} \frac{l!\,(k-l)!}{(l+1)\,\Gamma\left(l+\frac32\right)\Gamma\left(k-l+\frac32\right)}, \qquad k \ge 0.$$
The series in (6.4.11) is convergent uniformly in z.

Proof. According to Lemma 6.4.3, we have:
$$\bigl(\operatorname{arctg} z\bigr)^4 = \frac{\pi}{4} \left(\frac{z}{\sqrt{1+z^2}}\right)^4 \sum_{k=0}^{\infty} \xi_k \left(\frac{z^2}{1+z^2}\right)^k, \tag{6.4.12}$$
where the coefficients $\xi_k$ are:
$$\begin{aligned}
\xi_k &= \sum_{l=0}^{k} \frac{l!\,(k-l)!}{(l+1)(k-l+1)\,\Gamma\left(l+\frac32\right)\Gamma\left(k-l+\frac32\right)} \\
&= \frac{1}{k+2} \sum_{l=0}^{k} \frac{l!\,(k-l)!}{\Gamma\left(l+\frac32\right)\Gamma\left(k-l+\frac32\right)} \left(\frac{1}{l+1} + \frac{1}{k-l+1}\right) \\
&= \frac{2}{k+2} \sum_{l=0}^{k} \frac{l!\,(k-l)!}{(l+1)\,\Gamma\left(l+\frac32\right)\Gamma\left(k-l+\frac32\right)}.
\end{aligned}$$
Substituting this into (6.4.12), we get the statement of the lemma.
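Lemmas 6.4.2–6.4.5 can all be verified numerically at a real point. The sketch below (not from the book) implements the four series, including the terminating ${}_5F_4$ of Lemma 6.4.4 via plain Pochhammer products, and compares them with direct powers of the arctangent:

```python
# Numerical checks of the series (6.4.5), (6.4.6), (6.4.8), (6.4.11) at z = 0.5.
from math import gamma, factorial, sqrt, pi, atan

def poch(x, j):
    """Pochhammer symbol (x)_j as a float product."""
    p = 1.0
    for i in range(j):
        p *= x + i
    return p

def F54(k):
    """Terminating 5F4(1,1,1,-k,-k-1/2; -k+1/2,-k+1/2,3/2,2; 1); all terms positive."""
    return sum(poch(1, j) ** 3 * poch(-k, j) * poch(-k - 0.5, j)
               / (poch(-k + 0.5, j) ** 2 * poch(1.5, j) * poch(2, j) * factorial(j))
               for j in range(k + 1))

def atan_series(z, K=60):      # formula (6.4.5)
    w = z * z / (1 + z * z)
    s = sum(gamma(k + 0.5) / (factorial(k) * (2 * k + 1)) * w ** k for k in range(K))
    return z / sqrt(1 + z * z) / sqrt(pi) * s

def atan_sq(z, K=60):          # formula (6.4.6)
    w = z * z / (1 + z * z)
    s = sum(factorial(k) / ((k + 1) * gamma(k + 1.5)) * w ** k for k in range(K))
    return sqrt(pi) / 2 * w * s

def atan_cube(z, K=25):        # formula (6.4.8)
    w = z * z / (1 + z * z)
    s = sum(gamma(k + 0.5) / (factorial(k) * (2 * k + 1)) * F54(k) * w ** k
            for k in range(K))
    return (z / sqrt(1 + z * z)) ** 3 / sqrt(pi) * s

def atan_4th(z, K=60):         # formula (6.4.11) with its coefficients gamma_k
    w = z * z / (1 + z * z)
    def gam(k):
        return sum(factorial(l) * factorial(k - l)
                   / ((l + 1) * gamma(l + 1.5) * gamma(k - l + 1.5))
                   for l in range(k + 1)) / (k + 2)
    s = sum(gam(k) * w ** k for k in range(K))
    return pi / 2 * (z / sqrt(1 + z * z)) ** 4 * s

z = 0.5
e1 = abs(atan_series(z) - atan(z))
e2 = abs(atan_sq(z) - atan(z) ** 2)
e3 = abs(atan_cube(z) - atan(z) ** 3)
e4 = abs(atan_4th(z) - atan(z) ** 4)
```

At z = 0.5 the expansion variable is $w = z^2/(1+z^2) = 0.2$, so a few dozen terms already give machine-precision agreement; the spot value ${}_5F_4$ at $k=1$ equals 3 exactly.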

6.4.2 Conditional characteristic functions

In this subsection we obtain the series representations of the conditional characteristic functions corresponding to two and three changes of direction. These formulas are the basis for our further analysis leading to asymptotic relations for the unconditional characteristic function and the transition density of the three-dimensional symmetric Markov random flight X(t) on small time intervals.

Theorem 6.4.1. The conditional characteristic functions $H_2(\alpha, t)$ and $H_3(\alpha, t)$ corresponding to two and three changes of direction are given, respectively, by the formulas:
$$H_2(\alpha, t) = \sum_{k=0}^{\infty} \frac{(ct\|\alpha\|)^{k-1}}{2^{k-1}\, k!\,(2k+1)^2}\; {}_5F_4\!\left(1, 1, 1, -k, -k-\tfrac12;\, -k+\tfrac12, -k+\tfrac12, \tfrac32, 2;\, 1\right) J_{k+1}(ct\|\alpha\|), \tag{6.4.13}$$
$$H_3(\alpha, t) = 3\pi^{3/2} \sum_{k=0}^{\infty} \frac{\gamma_k\,(ct\|\alpha\|)^{k-3/2}}{2^{k+3/2}\,(k+1)!}\, J_{k+3/2}(ct\|\alpha\|), \tag{6.4.14}$$
$$t > 0, \qquad \alpha = (\alpha_1, \alpha_2, \alpha_3) \in \mathbb{R}^3, \qquad \|\alpha\| = \sqrt{\alpha_1^2 + \alpha_2^2 + \alpha_3^2},$$
where $J_\nu(z)$ is the Bessel function, ${}_5F_4(a_1, a_2, a_3, a_4, a_5; b_1, b_2, b_3, b_4; z)$ is the general hypergeometric function given by (6.4.9) and the coefficients $\gamma_k$ are given by the formula:
$$\gamma_k = \frac{1}{k+2} \sum_{l=0}^{k} \frac{l!\,(k-l)!}{(l+1)\,\Gamma\left(l+\frac32\right)\Gamma\left(k-l+\frac32\right)}, \qquad k \ge 0. \tag{6.4.15}$$

Proof. According to (4.4.6), the conditional characteristic functions $H_n(\alpha, t)$ of the three-dimensional Markov random flight X(t) corresponding to n changes of directions are given by the formula:
$$H_n(\alpha, t) = \frac{n!}{t^n}\,(c\|\alpha\|)^{-(n+1)}\; \mathcal{L}_s^{-1}\!\left[\left(\operatorname{arctg}\frac{c\|\alpha\|}{s}\right)^{n+1}\right](t), \qquad n \ge 1, \quad \alpha \in \mathbb{R}^3, \quad s \in \mathbb{C}_+, \tag{6.4.16}$$
where $\mathcal{L}_s^{-1}$ is the inverse Laplace transformation with respect to complex variable s and $\mathbb{C}_+ = \{s \in \mathbb{C} : \operatorname{Re} s > 0\}$ is the right half-plane of the complex plane $\mathbb{C}$. In particular, in the case of two changes of directions n = 2, formula (6.4.16) yields:
$$H_2(\alpha, t) = \frac{2!}{t^2}\,(c\|\alpha\|)^{-3}\; \mathcal{L}_s^{-1}\!\left[\left(\operatorname{arctg}\frac{c\|\alpha\|}{s}\right)^3\right](t), \qquad \alpha \in \mathbb{R}^3, \quad s \in \mathbb{C}_+. \tag{6.4.17}$$
Applying Lemma 6.4.4 to the power of the inverse tangent function in (6.4.17), we obtain:
$$\begin{aligned}
H_2(\alpha, t) &= \frac{2}{t^2}\,(c\|\alpha\|)^{-3}\; \mathcal{L}_s^{-1}\!\Biggl[\frac{1}{\sqrt{\pi}} \left(\frac{c\|\alpha\|}{\sqrt{s^2 + (c\|\alpha\|)^2}}\right)^3 \sum_{k=0}^{\infty} \frac{\Gamma\left(k+\frac12\right)}{k!\,(2k+1)} \\
&\qquad\qquad \times\; {}_5F_4\!\left(1, 1, 1, -k, -k-\tfrac12;\, -k+\tfrac12, -k+\tfrac12, \tfrac32, 2;\, 1\right) \left(\frac{(c\|\alpha\|)^2}{s^2 + (c\|\alpha\|)^2}\right)^k\Biggr](t) \\
&= \frac{2}{\sqrt{\pi}\, t^2} \sum_{k=0}^{\infty} \frac{\Gamma\left(k+\frac12\right)}{k!\,(2k+1)}\,(c\|\alpha\|)^{2k}\; {}_5F_4\!\left(1, 1, 1, -k, -k-\tfrac12;\, -k+\tfrac12, -k+\tfrac12, \tfrac32, 2;\, 1\right) \\
&\qquad\qquad \times\; \mathcal{L}_s^{-1}\!\left[\frac{1}{\left(s^2 + (c\|\alpha\|)^2\right)^{k+3/2}}\right](t).
\end{aligned} \tag{6.4.18}$$
Note that evaluating the inverse Laplace transformation of each term of the series separately is justified because the series converges uniformly in s everywhere in $\mathbb{C}_+$ and the complex functions $\left(s^2 + (c\|\alpha\|)^2\right)^{-(k+3/2)}$, $k \ge 0$, are holomorphic and do not have any singular points in this half-plane. Moreover, each of these functions contains the inversion complex variable $s \in \mathbb{C}_+$ in a negative power and behaves like $s^{-(2k+3)}$, as $|s| \to +\infty$, and, therefore, all these complex functions rapidly tend to zero at infinity. According to [118, Table 8.4-1, formula 57], we have
$$\mathcal{L}_s^{-1}\!\left[\frac{1}{\left(s^2 + (c\|\alpha\|)^2\right)^{k+3/2}}\right](t) = \frac{\sqrt{\pi}}{\Gamma\left(k+\frac32\right)} \left(\frac{t}{2c\|\alpha\|}\right)^{k+1} J_{k+1}(ct\|\alpha\|).$$


Substituting this into (6.4.18), after some simple calculations we obtain (6.4.13).

For n = 3, formula (6.4.16) yields:
$$H_3(\alpha, t) = \frac{3!}{t^3}\,(c\|\alpha\|)^{-4}\; \mathcal{L}_s^{-1}\!\left[\left(\operatorname{arctg}\frac{c\|\alpha\|}{s}\right)^4\right](t), \qquad \alpha \in \mathbb{R}^3, \quad s \in \mathbb{C}_+. \tag{6.4.19}$$
Applying Lemma 6.4.5 to the power of the inverse tangent function in (6.4.19) and taking into account that
$$\mathcal{L}_s^{-1}\!\left[\frac{1}{\left(s^2 + (c\|\alpha\|)^2\right)^{k+2}}\right](t) = \frac{\sqrt{\pi}}{(k+1)!} \left(\frac{t}{2c\|\alpha\|}\right)^{k+3/2} J_{k+3/2}(ct\|\alpha\|),$$
we obtain:
$$H_3(\alpha, t) = \frac{3\pi}{t^3} \sum_{k=0}^{\infty} \gamma_k\,(c\|\alpha\|)^{2k}\; \mathcal{L}_s^{-1}\!\left[\frac{1}{\left(s^2 + (c\|\alpha\|)^2\right)^{k+2}}\right](t) = 3\pi^{3/2} \sum_{k=0}^{\infty} \frac{\gamma_k\,(ct\|\alpha\|)^{k-3/2}}{2^{k+3/2}\,(k+1)!}\, J_{k+3/2}(ct\|\alpha\|), \tag{6.4.20}$$

where the coefficients $\gamma_k$ are given by (6.4.15). The theorem is proved.

Remark 6.4.1. The series in formulas (6.4.13) and (6.4.14) are convergent for any fixed t > 0; however, we cannot invert their terms separately because, as is easy to see, the inverse Fourier transforms of the terms of the series do not exist for $k \ge 2$. Thus, while the inverse Fourier transforms of the whole series (6.4.13) and (6.4.14) exist, it is impossible to invert their terms separately and, therefore, we cannot obtain closed-form expressions for the respective conditional densities. These formulas can, nevertheless, be used for obtaining the important asymptotic relations and this is the main subject of the next subsections.
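Although the terms of (6.4.13) cannot be inverted, the series itself is easy to evaluate numerically. The sketch below (not from the book) checks two properties that any conditional characteristic function must have: $H_2 \to 1$ as $\|\alpha\| \to 0$ and $|H_2| \le 1$. Here $x = ct\|\alpha\|$, and the Bessel function is computed from its power series, which is adequate for small arguments:

```python
# Sanity check of the series (6.4.13) as a function of x = c*t*||alpha||.
from math import gamma, factorial

def bessel_j(nu, x, M=40):
    """Power-series evaluation of J_nu(x), adequate for small |x|."""
    return sum((-1) ** m * (x / 2.0) ** (nu + 2 * m)
               / (factorial(m) * gamma(nu + m + 1)) for m in range(M))

def poch(x, j):
    p = 1.0
    for i in range(j):
        p *= x + i
    return p

def F54(k):
    """Terminating 5F4(1,1,1,-k,-k-1/2; -k+1/2,-k+1/2,3/2,2; 1)."""
    return sum(poch(1, j) ** 3 * poch(-k, j) * poch(-k - 0.5, j)
               / (poch(-k + 0.5, j) ** 2 * poch(1.5, j) * poch(2, j) * factorial(j))
               for j in range(k + 1))

def H2(x, K=20):
    """Truncated series (6.4.13)."""
    return sum(x ** (k - 1) / (2.0 ** (k - 1) * factorial(k) * (2 * k + 1) ** 2)
               * F54(k) * bessel_j(k + 1, x) for k in range(K))
```

The k = 0 term alone is $2J_1(x)/x \to 1$ as $x \to 0$, which is the dominant contribution for small x.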

6.4.3 Asymptotic formula for characteristic function

Using the results of the previous subsection, we can now present an asymptotic relation on small time intervals for the characteristic function
$$H(\alpha, t) = e^{-\lambda t} \sum_{k=0}^{\infty} \frac{(\lambda t)^k}{k!}\, H_k(\alpha, t)$$
of the three-dimensional symmetric Markov random flight, where $H_k(\alpha, t)$, $k \ge 0$, are the conditional characteristic functions corresponding to k changes of direction. This result is given by the following theorem.

Theorem 6.4.2. For the characteristic function $H(\alpha, t)$, $t > 0$, of the three-dimensional Markov random flight X(t) the following asymptotic relation holds:
$$\begin{aligned}
H(\alpha, t) = e^{-\lambda t} \biggl[\frac{\sin(ct\|\alpha\|)}{ct\|\alpha\|} &+ \frac{\lambda}{c^2 t \|\alpha\|^2} \Bigl(\sin(ct\|\alpha\|)\,\mathrm{Si}(2ct\|\alpha\|) + \cos(ct\|\alpha\|)\,\mathrm{Ci}(2ct\|\alpha\|)\Bigr) \\
&+ \frac{\lambda^2 t}{c\|\alpha\|}\, J_1(ct\|\alpha\|) + \frac{\lambda^3 \sqrt{\pi}\, t^{3/2}}{(2c\|\alpha\|)^{3/2}}\, J_{3/2}(ct\|\alpha\|)\biggr] + o(t^3),
\end{aligned} \tag{6.4.21}$$
$$\alpha = (\alpha_1, \alpha_2, \alpha_3) \in \mathbb{R}^3, \qquad \|\alpha\| = \sqrt{\alpha_1^2 + \alpha_2^2 + \alpha_3^2}, \qquad t > 0,$$
where $\mathrm{Si}(z)$ and $\mathrm{Ci}(z)$ are the incomplete integral sine and cosine, respectively, given by (1.9.21).


Proof. We have:
$$H(\alpha, t) = e^{-\lambda t} \left[H_0(\alpha, t) + \lambda t H_1(\alpha, t) + \frac{(\lambda t)^2}{2!}\, H_2(\alpha, t) + \frac{(\lambda t)^3}{3!}\, H_3(\alpha, t) + \sum_{k=4}^{\infty} \frac{(\lambda t)^k}{k!}\, H_k(\alpha, t)\right].$$
Since all the conditional characteristic functions are uniformly bounded in both variables, that is, $|H_k(\alpha, t)| \le 1$, $\alpha \in \mathbb{R}^3$, $t \ge 0$, $k \ge 0$, then
$$\sum_{k=4}^{\infty} \frac{(\lambda t)^k}{k!}\, H_k(\alpha, t) = o(t^3)$$
and, therefore,
$$H(\alpha, t) = e^{-\lambda t} \left[H_0(\alpha, t) + \lambda t H_1(\alpha, t) + \frac{(\lambda t)^2}{2!}\, H_2(\alpha, t) + \frac{(\lambda t)^3}{3!}\, H_3(\alpha, t) + o(t^3)\right]. \tag{6.4.22}$$
In view of (6.4.13), we have:
$$\frac{(\lambda t)^2}{2!}\, H_2(\alpha, t) = \lambda^2 \left[\frac{t}{c\|\alpha\|}\, J_1(ct\|\alpha\|) + \sum_{k=1}^{\infty} \frac{(c\|\alpha\|)^{k-1}\, t^{k+1}}{2^k\, k!\,(2k+1)^2}\; {}_5F_4\!\left(1, 1, 1, -k, -k-\tfrac12;\, -k+\tfrac12, -k+\tfrac12, \tfrac32, 2;\, 1\right) J_{k+1}(ct\|\alpha\|)\right].$$
From the asymptotic formula
$$J_\nu(z) = \frac{z^\nu}{2^\nu\,\Gamma(\nu+1)} + o(z^{\nu+1}), \qquad \nu \ge 0, \tag{6.4.23}$$
we get
$$J_{k+1}(ct\|\alpha\|) = \frac{(ct\|\alpha\|)^{k+1}}{2^{k+1}\,(k+1)!} + o(t^{k+2})$$
and, therefore,
$$\sum_{k=1}^{\infty} \frac{(c\|\alpha\|)^{k-1}\, t^{k+1}}{2^k\, k!\,(2k+1)^2}\; {}_5F_4\!\left(1, 1, 1, -k, -k-\tfrac12;\, -k+\tfrac12, -k+\tfrac12, \tfrac32, 2;\, 1\right) J_{k+1}(ct\|\alpha\|) = o(t^3).$$
Thus, we obtain the following asymptotic relation:
$$\frac{(\lambda t)^2}{2!}\, H_2(\alpha, t) = \frac{\lambda^2 t}{c\|\alpha\|}\, J_1(ct\|\alpha\|) + o(t^3). \tag{6.4.24}$$
Similarly, according to (6.4.14), we have:
$$\frac{(\lambda t)^3}{3!}\, H_3(\alpha, t) = \lambda^3 \pi^{3/2} \left[\frac{\gamma_0\,(c\|\alpha\|)^{-3/2}\, t^{3/2}}{2^{5/2}}\, J_{3/2}(ct\|\alpha\|) + \sum_{k=1}^{\infty} \frac{\gamma_k\,(c\|\alpha\|)^{k-3/2}\, t^{k+3/2}}{2^{k+5/2}\,(k+1)!}\, J_{k+3/2}(ct\|\alpha\|)\right].$$
In view of (6.4.23), we have
$$J_{k+3/2}(ct\|\alpha\|) = \frac{(ct\|\alpha\|)^{k+3/2}}{2^{k+3/2}\,\Gamma\left(k+\frac52\right)} + o(t^{k+5/2})$$
and, therefore,
$$\sum_{k=1}^{\infty} \frac{\gamma_k\,(c\|\alpha\|)^{k-3/2}\, t^{k+3/2}}{2^{k+5/2}\,(k+1)!}\, J_{k+3/2}(ct\|\alpha\|) = o(t^4).$$
Thus, taking into account that $\gamma_0 = 2/\pi$ (see (6.4.15)), we arrive at the formula:
$$\frac{(\lambda t)^3}{3!}\, H_3(\alpha, t) = \frac{\lambda^3 \sqrt{\pi}\, t^{3/2}}{(2c\|\alpha\|)^{3/2}}\, J_{3/2}(ct\|\alpha\|) + o(t^4). \tag{6.4.25}$$
Since, in view of (6.2.2),
$$\lambda t H_1(\alpha, t) = \frac{\lambda}{c^2 t \|\alpha\|^2} \Bigl(\sin(ct\|\alpha\|)\,\mathrm{Si}(2ct\|\alpha\|) + \cos(ct\|\alpha\|)\,\mathrm{Ci}(2ct\|\alpha\|)\Bigr)$$
and
$$H_0(\alpha, t) = \frac{\sin(ct\|\alpha\|)}{ct\|\alpha\|}$$
(this is the characteristic function of the uniform distribution on the surface of the three-dimensional sphere of radius ct), then by substituting these formulas, as well as (6.4.24) and (6.4.25), into (6.4.22), we finally obtain asymptotic relation (6.4.21). The theorem is completely proved.
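A useful consistency check of (6.4.21) (not from the book): as $\|\alpha\| \to 0$ every $H_k \to 1$, so the right-hand side must approach $e^{-\lambda t}\bigl(1 + \lambda t + (\lambda t)^2/2 + (\lambda t)^3/6\bigr)$. The sketch below assumes the incomplete integral sine and cosine in the sense $\mathrm{Si}(z) = \int_0^z \frac{\sin u}{u}\,du$ and $\mathrm{Ci}(z) = \int_0^z \frac{\cos u - 1}{u}\,du$, which is consistent with the finiteness of the $H_1$ term at the origin:

```python
# Small-||alpha|| consistency check of the asymptotic relation (6.4.21).
from math import sin, cos, sqrt, pi, gamma, factorial, exp

def Si(z, M=20):
    """Incomplete integral sine: int_0^z sin(u)/u du (power series)."""
    return sum((-1) ** m * z ** (2 * m + 1) / ((2 * m + 1) * factorial(2 * m + 1))
               for m in range(M))

def Ci(z, M=20):
    """Incomplete integral cosine: int_0^z (cos(u)-1)/u du (power series)."""
    return sum((-1) ** m * z ** (2 * m) / ((2 * m) * factorial(2 * m))
               for m in range(1, M))

def bessel_j(nu, x, M=30):
    return sum((-1) ** m * (x / 2.0) ** (nu + 2 * m)
               / (factorial(m) * gamma(nu + m + 1)) for m in range(M))

def H_asym(anorm, t, lam, c):
    """Right-hand side of (6.4.21) without the o(t^3) remainder."""
    x = c * t * anorm
    b = (sin(x) / x
         + lam / (c ** 2 * t * anorm ** 2)
           * (sin(x) * Si(2 * x) + cos(x) * Ci(2 * x))
         + lam ** 2 * t / (c * anorm) * bessel_j(1, x)
         + lam ** 3 * sqrt(pi) * t ** 1.5 / (2 * c * anorm) ** 1.5
           * bessel_j(1.5, x))
    return exp(-lam * t) * b

t, lam, c = 0.5, 1.0, 1.0
approx = H_asym(1e-4, t, lam, c)
target = exp(-lam * t) * (1 + lam * t + (lam * t) ** 2 / 2 + (lam * t) ** 3 / 6)
```

The four bracketed terms converge to 1, $\lambda t$, $(\lambda t)^2/2$ and $(\lambda t)^3/6$ respectively, the truncated Poisson expansion of the unconditional characteristic function at $\alpha = 0$.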

6.4.4 Asymptotic formula for the density

Asymptotic formula (6.4.21) for the unconditional characteristic function enables us to obtain the respective asymptotic relation for the transition density of the process X(t). Our principal result is given by the following theorem.

Theorem 6.4.3. For the transition density $p(x, t)$, $t > 0$, of the three-dimensional Markov random flight X(t) the following asymptotic relation holds:
$$p(x, t) = \frac{e^{-\lambda t}}{4\pi (ct)^2}\, \delta(c^2 t^2 - \|x\|^2) + e^{-\lambda t} \left[\frac{\lambda}{4\pi c^2 t \|x\|} \ln\!\left(\frac{ct + \|x\|}{ct - \|x\|}\right) + \frac{\lambda^2}{2\pi^2 c^2 \sqrt{c^2 t^2 - \|x\|^2}} + \frac{\lambda^3}{8\pi c^3}\right] \Theta(ct - \|x\|) + o(t^3), \tag{6.4.26}$$
$$x = (x_1, x_2, x_3) \in \mathbb{R}^3, \qquad \|x\| = \sqrt{x_1^2 + x_2^2 + x_3^2}, \qquad t > 0.$$

Proof. Applying the inverse Fourier transformation $\mathcal{F}_\alpha^{-1}$ to both sides of (6.4.21), we have:
$$\begin{aligned}
p(x, t) = e^{-\lambda t} \biggl\{ & \mathcal{F}_\alpha^{-1}\!\left[\frac{\sin(ct\|\alpha\|)}{ct\|\alpha\|}\right](x) + \mathcal{F}_\alpha^{-1}\!\left[\frac{\lambda}{c^2 t \|\alpha\|^2} \Bigl(\sin(ct\|\alpha\|)\,\mathrm{Si}(2ct\|\alpha\|) + \cos(ct\|\alpha\|)\,\mathrm{Ci}(2ct\|\alpha\|)\Bigr)\right](x) \\
& + \mathcal{F}_\alpha^{-1}\!\left[\frac{\lambda^2 t}{c\|\alpha\|}\, J_1(ct\|\alpha\|)\right](x) + \mathcal{F}_\alpha^{-1}\!\left[\frac{\lambda^3 \sqrt{\pi}\, t^{3/2}}{(2c\|\alpha\|)^{3/2}}\, J_{3/2}(ct\|\alpha\|)\right](x)\biggr\} + o(t^3).
\end{aligned} \tag{6.4.27}$$
Note that here we have used the fact that, due to the continuity of the inverse Fourier transformation, the asymptotic formula $\mathcal{F}_\alpha^{-1}\bigl[o(t^3)\bigr](x) = o(t^3)$ holds.


Let us evaluate separately the inverse Fourier transforms on the right-hand side of (6.4.27). The first one represents the singular part of the density and has the form (see formula (4.1.5) for m = 3):
$$\mathcal{F}_\alpha^{-1}\!\left[\frac{\sin(ct\|\alpha\|)}{ct\|\alpha\|}\right](x) = \frac{1}{4\pi (ct)^2}\, \delta(c^2 t^2 - \|x\|^2), \tag{6.4.28}$$
that is, the uniform density on the surface of the sphere $S_{ct}^3 \subset \mathbb{R}^3$ of radius ct centred at the origin $0 \in \mathbb{R}^3$. The second Fourier transform on the right-hand side of (6.4.27) is also already known (see formula (6.2.1)):
$$\mathcal{F}_\alpha^{-1}\!\left[\frac{\lambda}{c^2 t \|\alpha\|^2} \Bigl(\sin(ct\|\alpha\|)\,\mathrm{Si}(2ct\|\alpha\|) + \cos(ct\|\alpha\|)\,\mathrm{Ci}(2ct\|\alpha\|)\Bigr)\right](x) = \frac{\lambda}{4\pi c^2 t \|x\|} \ln\!\left(\frac{ct + \|x\|}{ct - \|x\|}\right) \Theta(ct - \|x\|). \tag{6.4.29}$$
According to the Hankel inversion formula (1.8.4), we have for the third Fourier transform on the right-hand side of (6.4.27):
$$\mathcal{F}_\alpha^{-1}\!\left[\frac{\lambda^2 t}{c}\, \|\alpha\|^{-1} J_1(ct\|\alpha\|)\right](x) = \frac{\lambda^2 t}{c}\,(2\pi)^{-3/2}\, \|x\|^{-1/2} \int_0^{\infty} J_{1/2}(\|x\|\xi)\, \xi^{3/2}\, \xi^{-1} J_1(ct\xi)\, d\xi.$$
Taking into account that
$$J_{1/2}(z) = \sqrt{\frac{2}{\pi z}}\, \sin z \tag{6.4.30}$$
and applying [177, Formula 2.12.15(2)], we obtain:
$$\begin{aligned}
\mathcal{F}_\alpha^{-1}\!\left[\frac{\lambda^2 t}{c}\, \|\alpha\|^{-1} J_1(ct\|\alpha\|)\right](x) &= \frac{\lambda^2 t}{2\pi^2 c \|x\|} \int_0^{\infty} \sin(\|x\|\xi)\, J_1(ct\xi)\, d\xi \\
&= \frac{\lambda^2 t}{2\pi^2 c \|x\|}\, \frac{\|x\|}{ct}\, (c^2 t^2 - \|x\|^2)^{-1/2}\, \Theta(ct - \|x\|) \\
&= \frac{\lambda^2}{2\pi^2 c^2 \sqrt{c^2 t^2 - \|x\|^2}}\, \Theta(ct - \|x\|). \tag{6.4.31}
\end{aligned}$$
This is a fairly unexpected result showing that the conditional density $p_2(x, t)$ corresponding to two changes of direction has an infinite discontinuity on the boundary of the three-dimensional ball $\mathbf{B}_{ct}^3$. This property is similar to that of the conditional density $p_1(x, t)$ corresponding to the single change of direction (for the respective joint density see (6.4.29)).

Applying the Hankel inversion formula (1.8.4), taking into account (6.4.30) and using [63, Formula 6.699(1)], we obtain for the fourth term on the right-hand side of (6.4.27):
$$\begin{aligned}
\mathcal{F}_\alpha^{-1}\!\left[\frac{\lambda^3 \sqrt{\pi}\, t^{3/2}}{(2c)^{3/2}}\, \|\alpha\|^{-3/2} J_{3/2}(ct\|\alpha\|)\right](x) &= \frac{\lambda^3 \sqrt{\pi}\, t^{3/2}}{(2c)^{3/2}}\,(2\pi)^{-3/2}\, \|x\|^{-1/2} \int_0^{\infty} J_{1/2}(\|x\|\xi)\, \xi^{3/2}\, \xi^{-3/2} J_{3/2}(ct\xi)\, d\xi \\
&= \frac{\lambda^3 \sqrt{2}\, t^{3/2}}{8 c^{3/2}\, \pi\sqrt{\pi}\, \|x\|} \int_0^{\infty} \xi^{-1/2} \sin(\|x\|\xi)\, J_{3/2}(ct\xi)\, d\xi \\
&= \frac{\lambda^3 \sqrt{2}\, t^{3/2}}{8 c^{3/2}\, \pi\sqrt{\pi}\, \|x\|}\; \frac{2^{-1/2}\sqrt{\pi}\, \|x\|\,(ct)^{-3/2}}{\Gamma(1)}\, \Theta(ct - \|x\|) \\
&= \frac{\lambda^3}{8\pi c^3}\, \Theta(ct - \|x\|). \tag{6.4.32}
\end{aligned}$$
Substituting now (6.4.28), (6.4.29), (6.4.31) and (6.4.32) into (6.4.27), we arrive at (6.4.26). The theorem is completely proved.


Figure 6.1: The shape of the absolutely continuous part of density (6.4.26) at instant t = 0.1 (for c = 4, λ = 1) on the interval $\|x\| \in [0, 0.4)$
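The curve of Fig. 6.1 is easy to recreate from the bracketed part of (6.4.26); a sketch (not from the book) with the same parameters t = 0.1, c = 4, λ = 1:

```python
# Absolutely continuous part of (6.4.26) as a function of r = ||x|| < ct.
from math import log, sqrt, pi, exp

def p_ac(r, t=0.1, c=4.0, lam=1.0):
    """Bracketed part of (6.4.26) times exp(-lam*t), valid for 0 < r < c*t."""
    ct = c * t
    return exp(-lam * t) * (
        lam / (4 * pi * c ** 2 * t * r) * log((ct + r) / (ct - r))
        + lam ** 2 / (2 * pi ** 2 * c ** 2 * sqrt(ct ** 2 - r ** 2))
        + lam ** 3 / (8 * pi * c ** 3))

radii = (0.05, 0.15, 0.25, 0.35, 0.399)
values = [p_ac(r) for r in radii]
```

The values grow slowly away from the origin and blow up as $\|x\| \to ct = 0.4$, matching the qualitative behaviour discussed in the text.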

The shape of the absolutely continuous part of density (6.4.26) at time instant t = 0.1 (for c = 4, λ = 1) on the interval $\|x\| \in [0, 0.4)$ is plotted in Fig. 6.1. The error in these calculations does not exceed 0.001. We see that the density increases slowly as the distance $\|x\|$ from the origin $0 \in \mathbb{R}^3$ grows, while near the boundary this growth becomes explosive. From this fact it follows that, for small time t, the greater part of the density is concentrated outside the neighbourhood of the origin $0 \in \mathbb{R}^3$, and this feature of the three-dimensional Markov random flight is quite similar to that of its two-dimensional counterpart. The infinite discontinuity of the density on the boundary $\|x\| = ct$ is also similar to the analogous property of the two-dimensional Markov random flight (see, for comparison, the two-dimensional density (5.2.2) and its graph in Fig. 5.1). Note that density (6.4.26) is continuous at the origin, as it must be.

Remark 6.4.2. Using (6.4.26), we can derive an asymptotic formula, as $t \to 0$, for the probability of being in the ball $\mathbf{B}_r^3$ of some radius $r < ct$ centred at the origin $0 \in \mathbb{R}^3$. Applying [63, Formula 4.642] and [63, Formula 1.513(1)], we have:
$$\begin{aligned}
\int_{\mathbf{B}_r^3} \frac{1}{\|x\|} \ln\!\left(\frac{ct + \|x\|}{ct - \|x\|}\right) \mu(dx) &= \frac{2\pi^{3/2}}{\Gamma\left(\frac32\right)} \int_0^r \xi^2\, \frac{1}{\xi} \ln\!\left(\frac{ct + \xi}{ct - \xi}\right) d\xi \\
&= 4\pi (ct)^2 \int_0^{r/(ct)} z \ln\!\left(\frac{1+z}{1-z}\right) dz \\
&= 8\pi (ct)^2 \sum_{k=1}^{\infty} \frac{1}{2k-1} \int_0^{r/(ct)} z^{2k}\, dz \\
&= 8\pi r c t \sum_{k=1}^{\infty} \frac{1}{4k^2 - 1} \left(\frac{r^2}{c^2 t^2}\right)^k.
\end{aligned} \tag{6.4.33}$$


Note that this series can be expressed in terms of the special Lerch ψ-function. Applying again [63, Formula 4.642], we get:
$$\begin{aligned}
\int_{\mathbf{B}_r^3} \frac{\mu(dx)}{\sqrt{c^2 t^2 - \|x\|^2}} &= \frac{2\pi^{3/2}}{\Gamma\left(\frac32\right)} \int_0^r \frac{\xi^2}{\sqrt{c^2 t^2 - \xi^2}}\, d\xi = 4\pi (ct)^2 \int_0^{r/(ct)} z^2 (1 - z^2)^{-1/2}\, dz \\
&= 2\pi (ct)^2 \left(\arcsin\frac{r}{ct} - \frac{r}{ct}\sqrt{1 - \frac{r^2}{c^2 t^2}}\right),
\end{aligned} \tag{6.4.34}$$
where we have used the easily checked equality:
$$\int \frac{x^2}{\sqrt{1 - x^2}}\, dx = \frac12 \left(\arcsin x - x\sqrt{1 - x^2}\right) + C, \qquad C = \mathrm{const}.$$
Then, by integrating the absolutely continuous part of density (6.4.26) over the ball $\mathbf{B}_r^3$ and taking into account (6.4.33) and (6.4.34), we obtain the following asymptotic formula (for $r < ct$):
$$\Pr\bigl\{X(t) \in \mathbf{B}_r^3\bigr\} \sim e^{-\lambda t} \left[\frac{2\lambda r}{c} \sum_{k=1}^{\infty} \frac{1}{4k^2 - 1} \left(\frac{r^2}{c^2 t^2}\right)^k + \frac{\lambda^2 t^2}{\pi} \left(\arcsin\frac{r}{ct} - \frac{r}{ct}\sqrt{1 - \frac{r^2}{c^2 t^2}}\right) + \frac{\lambda^3 r^3}{6 c^3}\right], \qquad t \to 0. \tag{6.4.35}$$
For example, the probability of being at time instant t = 0.1 in the ball $\mathbf{B}_{0.07}^3$ of radius r = 0.07 (for the values of parameters λ = 1, c = 2), calculated by means of formula (6.4.35), is approximately equal to: $\Pr\bigl\{X(0.1) \in \mathbf{B}_{0.07}^3\bigr\} \approx 0.002745$.
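Formula (6.4.35) is straightforward to evaluate; the following sketch (not from the book) reproduces the numerical example above:

```python
# Evaluation of the asymptotic probability (6.4.35) for the book's example.
from math import asin, sqrt, pi, exp

def prob_ball(r, t, lam, c, K=60):
    """Asymptotic probability (6.4.35) of being in the ball of radius r < c*t."""
    q = r * r / (c * c * t * t)
    series = sum(q ** k / (4 * k * k - 1) for k in range(1, K))
    u = r / (c * t)
    return exp(-lam * t) * (2 * lam * r / c * series
                            + lam ** 2 * t ** 2 / pi * (asin(u) - u * sqrt(1 - u * u))
                            + lam ** 3 * r ** 3 / (6 * c ** 3))

p = prob_ball(0.07, 0.1, 1.0, 2.0)   # lambda = 1, c = 2, t = 0.1, r = 0.07
```

The series term dominates here; the arcsin and cubic terms contribute only a few percent of the total.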

6.4.5 Estimate of the accuracy

The error in asymptotic formula (6.4.26) has the order $o(t^3)$. This means that, for small t, this formula yields a fairly good accuracy. To estimate it, let us integrate the function in square brackets of (6.4.26) over the ball $\mathbf{B}_{ct}^3$. For the first term in square brackets of (6.4.26) we have:
$$\iiint\limits_{x_1^2 + x_2^2 + x_3^2 \le c^2 t^2} \frac{\lambda}{4\pi c^2 t \|x\|} \ln\!\left(\frac{ct + \|x\|}{ct - \|x\|}\right) dx_1\, dx_2\, dx_3 = \lambda t \iiint\limits_{x_1^2 + x_2^2 + x_3^2 \le c^2 t^2} \frac{1}{4\pi c^2 t^2 \|x\|} \ln\!\left(\frac{ct + \|x\|}{ct - \|x\|}\right) dx_1\, dx_2\, dx_3 = \lambda t, \tag{6.4.36}$$
because the second integrand is the conditional density corresponding to the single change of direction (see Theorem 6.2.1) and, therefore, the second integral is equal to 1.


Applying [63, Formula 4.642], we have for the second term in square brackets of (6.4.26):
$$\iiint\limits_{x_1^2 + x_2^2 + x_3^2 \le c^2 t^2} \frac{\lambda^2}{2\pi^2 c^2 \sqrt{c^2 t^2 - \|x\|^2}}\, dx_1\, dx_2\, dx_3 = \frac{\lambda^2}{2\pi^2 c^2}\, \frac{2\pi^{3/2}}{\Gamma\left(\frac32\right)} \int_0^{ct} \frac{\xi^2}{\sqrt{c^2 t^2 - \xi^2}}\, d\xi = \frac{2\lambda^2 t^2}{\pi} \int_0^1 \frac{z^2}{\sqrt{1 - z^2}}\, dz = \frac{\lambda^2 t^2}{2}. \tag{6.4.37}$$
For the third term in square brackets of (6.4.26) we get:
$$\iiint\limits_{x_1^2 + x_2^2 + x_3^2 \le c^2 t^2} \frac{\lambda^3}{8\pi c^3}\, dx_1\, dx_2\, dx_3 = \frac{\lambda^3}{8\pi c^3}\cdot \frac43\, \pi c^3 t^3 = \frac{\lambda^3 t^3}{6}. \tag{6.4.38}$$
Hence, in view of (6.4.36), (6.4.37) and (6.4.38), the integral of the absolutely continuous part in asymptotic formula (6.4.26) is:
$$\tilde{G}(t) = \iiint\limits_{x_1^2 + x_2^2 + x_3^2 \le c^2 t^2} e^{-\lambda t} \left[\frac{\lambda}{4\pi c^2 t \|x\|} \ln\!\left(\frac{ct + \|x\|}{ct - \|x\|}\right) + \frac{\lambda^2}{2\pi^2 c^2 \sqrt{c^2 t^2 - \|x\|^2}} + \frac{\lambda^3}{8\pi c^3}\right] dx_1\, dx_2\, dx_3 = e^{-\lambda t} \left(\lambda t + \frac{\lambda^2 t^2}{2} + \frac{\lambda^3 t^3}{6}\right). \tag{6.4.39}$$
Note that (6.4.39) can also be obtained by passing to the limit, as $r \to ct$, in asymptotic formula (6.4.35). On the other hand, according to (4.1.6), the integral of the absolutely continuous part of the transition density of the three-dimensional Markov random flight X(t) is
$$G(t) = \int_{\mathbf{B}_{ct}^3} p^{(ac)}(x, t)\, \mu(dx) = 1 - e^{-\lambda t}. \tag{6.4.40}$$

The difference between the approximating function $\tilde{G}(t)$ and the exact function $G(t)$ given by (6.4.39) and (6.4.40) enables us to estimate the value of the probability generated by all the terms of the density aggregated in the term $o(t^3)$ of asymptotic relation (6.4.26).

The shapes of functions $G(t)$ and $\tilde{G}(t)$ on the time interval $t \in (0, 1)$ for the values of the intensity of switchings λ = 0.5, λ = 1, λ = 1.5, λ = 2 are presented in Figs. 6.2 and 6.3.

Figure 6.2: The shapes of functions $G(t)$ and $\tilde{G}(t)$ (dotted line) on the time interval $t \in (0, 1)$ for the intensities λ = 0.5 (left) and λ = 1 (right)

Figure 6.3: The shapes of functions $G(t)$ and $\tilde{G}(t)$ (dotted line) on the time interval $t \in (0, 1)$ for the intensities λ = 1.5 (left) and λ = 2 (right)

We see that, for λ = 0.5, function $\tilde{G}(t)$ yields very good coincidence with function $G(t)$ on the whole subinterval $t \in (0, 1)$ (Fig. 6.2 (left)), while for λ = 1 such coincidence is good only on the subinterval $t \in (0, 0.8)$ (Fig. 6.2 (right)). The same phenomenon is also clearly seen in Fig. 6.3 where, for λ = 1.5, function $\tilde{G}(t)$ yields very good coincidence with function $G(t)$ on the subinterval $t \in (0, 0.5)$ (Fig. 6.3 (left)), while for λ = 2 such good coincidence takes place only on the subinterval $t \in (0, 0.4)$ (Fig. 6.3 (right)). Thus, we can conclude that the greater the intensity of switchings λ, the shorter the subinterval of coincidence. This fact can easily be explained. Indeed, the greater the intensity of switchings λ, the shorter the time interval on which no more than three changes of direction can occur with big probability. This means that, for increasing λ, the asymptotic formula (6.4.26) yields a good accuracy on increasingly smaller time intervals. However, for arbitrary fixed λ, there exists some $t_\lambda > 0$ such that formula (6.4.26) yields good accuracy on the time interval $t \in (0, t_\lambda)$ and the error of this approximation does not exceed $o(t_\lambda^3)$.

The difference of functions $G(t)$ and $\tilde{G}(t)$ is:
$$G(t) - \tilde{G}(t) = 1 - e^{-\lambda t} - e^{-\lambda t}\left(\lambda t + \frac{\lambda^2 t^2}{2!} + \frac{\lambda^3 t^3}{3!}\right) = e^{-\lambda t} \sum_{k=4}^{\infty} \frac{(\lambda t)^k}{k!}$$
and, therefore, the accuracy of asymptotic relation (6.4.26) depends not on t and λ separately, but on their product λt. From this last formula it follows that the error of approximation has the order $o((\lambda t)^3)$ and, therefore, the smaller the product λt, the smaller the error in asymptotic relation (6.4.26). This means that asymptotic relation (6.4.26) yields good accuracy when $\lambda t \ll 1$, as should be expected.

The obtained asymptotic formula (6.4.26) has a fairly simple form and can be applied for deriving various useful approximate relations like (6.4.35) for the probability of being in a three-dimensional ball of small radius. Our analysis shows that the error in this asymptotic formula has the order $o((\lambda t)^3)$ and, therefore, under the condition $\lambda t \ll 1$, it has very good accuracy. From this fact it follows that asymptotic formula (6.4.26) is effective not only when $t \to 0$, but also under the more general condition $t \ll \frac{1}{\lambda}$. This means that one may effectively apply it for describing the processes of slow and super-slow diffusion that can be modeled by a three-dimensional Markov random flight with slow speed and rare Poisson switchings. In this case λ is small, while $\frac{1}{\lambda}$ is big and, therefore, asymptotic formula (6.4.26) yields good accuracy on a fairly long time interval $t \in (0, t_\lambda)$, $t_\lambda \ll \frac{1}{\lambda}$.

Another extremely interesting and important problem is to study the asymptotic behaviour of the three-dimensional Markov random flight under the opposite limiting condition $t \to \infty$. Such results would enable us to describe the evolution of the system on long time periods and to obtain its approximate stationary characteristics. This problem can be solved either by deriving series representations of the density with respect to negative powers of t, or by obtaining an asymptotic relation for the characteristic function of the process from the known formula for its Laplace transform. An alternative approach to the problem of obtaining similar asymptotic results in both the above limiting cases is to use the hyperparabolic operators governing the random flights in higher dimensions studied in Section 4.10. Analysis of such operators can give some hints concerning the asymptotic behaviour of the process similarly to that under the Kac's limiting condition (see Theorem 4.10.2).

6.5 Fundamental solution to Kolmogorov equation

In this section we give a constructive method of obtaining the fundamental solution to the Kolmogorov equation for the three-dimensional symmetric Markov random flight $X(t) = (X_1(t), X_2(t), X_3(t))$, $t > 0$, representing the following system of a continuum number of integro-differential equations
$$\frac{\partial f_\omega}{\partial t} = -c \sin\theta\cos\varphi\, \frac{\partial f_\omega}{\partial x_1} - c \sin\theta\sin\varphi\, \frac{\partial f_\omega}{\partial x_2} - c \cos\theta\, \frac{\partial f_\omega}{\partial x_3} - \lambda f_\omega + \frac{\lambda}{4\pi} \int_{S_1^3} f_\eta\, \nu(d\eta), \tag{6.5.1}$$
$$\omega = (\theta, \varphi), \qquad \theta \in [0, \pi), \qquad \varphi \in [0, 2\pi),$$
where $f_\omega = f_\omega(x, t) = f_{\theta,\varphi}(x_1, x_2, x_3, t)$ is the joint density of the particle's position and its direction (solid angle) $\omega = (\theta, \varphi)$ at time t. Here $\nu(d\eta)$ is the Lebesgue measure on the surface of the unit sphere $S_1^3$ with $\nu(d\eta) = \sin\tilde\theta\, d\tilde\theta\, d\tilde\varphi$, $\eta = (\tilde\theta, \tilde\varphi)$, $\tilde\theta \in [0, \pi)$, $\tilde\varphi \in [0, 2\pi)$. Note that the differential operator on the right-hand side of (6.5.1) is the derivative with respect to direction ω determined by the ordered pair of angles (θ, φ).

Denoting
$$U_\omega = U_\omega(x, t) = -\lambda f_\omega + \frac{\lambda}{4\pi} \int_{S_1^3} f_\eta\, \nu(d\eta), \tag{6.5.2}$$
we represent (6.5.1) as follows:
$$\frac{\partial f_\omega}{\partial t} + c \sin\theta\cos\varphi\, \frac{\partial f_\omega}{\partial x_1} + c \sin\theta\sin\varphi\, \frac{\partial f_\omega}{\partial x_2} + c \cos\theta\, \frac{\partial f_\omega}{\partial x_3} = U_\omega. \tag{6.5.3}$$

Let us solve the Cauchy problem for the inhomogeneous equation (6.5.3) with the initial condition
$$f_\omega|_{t=0} = g(x)\, h_\omega(x), \tag{6.5.4}$$
where $g = g(x)$ and $h = h_\omega(x)$ are some non-negative uniformly bounded functions, that is, $0 \le g, h < A < \infty$. The solution of the Cauchy problem (6.5.3)–(6.5.4) is given by the sum $f_\omega = f_\omega^{(1)} + f_\omega^{(2)}$, where $f_\omega^{(1)}$ is the solution of the Cauchy problem
$$\frac{\partial f_\omega^{(1)}}{\partial t} + c \sin\theta\cos\varphi\, \frac{\partial f_\omega^{(1)}}{\partial x_1} + c \sin\theta\sin\varphi\, \frac{\partial f_\omega^{(1)}}{\partial x_2} + c \cos\theta\, \frac{\partial f_\omega^{(1)}}{\partial x_3} = 0, \qquad f_\omega^{(1)}\big|_{t=0} = g\, h_\omega, \tag{6.5.5}$$
and $f_\omega^{(2)}$ is the solution of the Cauchy problem
$$\frac{\partial f_\omega^{(2)}}{\partial t} + c \sin\theta\cos\varphi\, \frac{\partial f_\omega^{(2)}}{\partial x_1} + c \sin\theta\sin\varphi\, \frac{\partial f_\omega^{(2)}}{\partial x_2} + c \cos\theta\, \frac{\partial f_\omega^{(2)}}{\partial x_3} = U_\omega, \qquad f_\omega^{(2)}\big|_{t=0} = 0. \tag{6.5.6}$$
By integrating the respective characteristic systems, one can show that the solutions of the Cauchy problems (6.5.5) and (6.5.6) are given by the formulas:
$$f_\omega^{(1)} = g(x_1 - ct\sin\theta\cos\varphi,\; x_2 - ct\sin\theta\sin\varphi,\; x_3 - ct\cos\theta)\; h_\omega(x_1 - ct\sin\theta\cos\varphi,\; x_2 - ct\sin\theta\sin\varphi,\; x_3 - ct\cos\theta), \tag{6.5.7}$$
$$f_\omega^{(2)} = \int_0^t U_\omega(x_1 - c(t-\tau)\sin\theta\cos\varphi,\; x_2 - c(t-\tau)\sin\theta\sin\varphi,\; x_3 - c(t-\tau)\cos\theta,\; \tau)\, d\tau. \tag{6.5.8}$$

Thus, we arrive at the integral equation:
$$\begin{aligned}
f_\omega(x_1, x_2, x_3, t) = {} & g(x_1 - ct\sin\theta\cos\varphi,\; x_2 - ct\sin\theta\sin\varphi,\; x_3 - ct\cos\theta)\; h_\omega(x_1 - ct\sin\theta\cos\varphi,\; x_2 - ct\sin\theta\sin\varphi,\; x_3 - ct\cos\theta) \\
& - \lambda \int_0^t f_\omega(x_1 - c(t-\tau)\sin\theta\cos\varphi,\; x_2 - c(t-\tau)\sin\theta\sin\varphi,\; x_3 - c(t-\tau)\cos\theta,\; \tau)\, d\tau \\
& + \frac{\lambda}{4\pi} \int_0^t d\tau \int_{S_1^3} f_\eta(x_1 - c(t-\tau)\sin\theta\cos\varphi,\; x_2 - c(t-\tau)\sin\theta\sin\varphi,\; x_3 - c(t-\tau)\cos\theta,\; \tau)\, \nu(d\eta).
\end{aligned} \tag{6.5.9}$$
Integral equation (6.5.9) is equivalent to the integro-differential equation (6.5.1) with initial condition (6.5.4). We will solve integral equation (6.5.9) by the successive approximation method. We seek a solution of (6.5.9) in the form
$$f_\omega = \Psi_0^{(\omega)} + c\,\Psi_1^{(\omega)} + c^2\,\Psi_2^{(\omega)} + \cdots = \sum_{n=0}^{\infty} c^n\, \Psi_n^{(\omega)}, \tag{6.5.10}$$
where the terms $\Psi_n^{(\omega)} = \Psi_n^{(\omega)}(x_1, x_2, x_3, t)$ are defined by the recurrent relations:
$$\begin{aligned}
\Psi_0^{(\omega)}(x_1, x_2, x_3, t) = {} & g(x_1 - ct\sin\theta\cos\varphi,\; x_2 - ct\sin\theta\sin\varphi,\; x_3 - ct\cos\theta)\; h_\omega(x_1 - ct\sin\theta\cos\varphi,\; x_2 - ct\sin\theta\sin\varphi,\; x_3 - ct\cos\theta), \\
& \cdots\cdots\cdots \\
\Psi_n^{(\omega)}(x_1, x_2, x_3, t) = {} & -\lambda \int_0^t \Psi_{n-1}^{(\omega)}(x_1 - c(t-\tau)\sin\theta\cos\varphi,\; x_2 - c(t-\tau)\sin\theta\sin\varphi,\; x_3 - c(t-\tau)\cos\theta,\; \tau)\, d\tau \\
& + \frac{\lambda}{4\pi} \int_0^t d\tau \int_{S_1^3} \Psi_{n-1}^{(\eta)}(x_1 - c(t-\tau)\sin\theta\cos\varphi,\; x_2 - c(t-\tau)\sin\theta\sin\varphi,\; x_3 - c(t-\tau)\cos\theta,\; \tau)\, \nu(d\eta).
\end{aligned} \tag{6.5.11}$$

Let us now prove that if the series (6.5.10) uniformly converges to some function $Y^{(\omega)}$, then it satisfies equation (6.5.1) and initial condition (6.5.4). Denoting by $L$ the differential operator in (6.5.1), that is,
$$L = \frac{\partial}{\partial t} + c \sin\theta\cos\varphi\, \frac{\partial}{\partial x_1} + c \sin\theta\sin\varphi\, \frac{\partial}{\partial x_2} + c \cos\theta\, \frac{\partial}{\partial x_3},$$
we define the action of $L$ on $\Psi_n^{(\omega)}$. According to the above relations and taking into account that $L\Psi_0^{(\omega)} \equiv 0$, we have:
$$L\Psi_n^{(\omega)} = -\lambda \Psi_{n-1}^{(\omega)} + \frac{\lambda}{4\pi} \int_{S_1^3} \Psi_{n-1}^{(\eta)}\, \nu(d\eta), \qquad n \ge 1.$$
Therefore,
$$LY^{(\omega)} = L\left[\sum_{n=0}^{\infty} c^n\, \Psi_n^{(\omega)}\right] = \sum_{n=0}^{\infty} c^n \left[-\lambda \Psi_n^{(\omega)} + \frac{\lambda}{4\pi} \int_{S_1^3} \Psi_n^{(\eta)}\, \nu(d\eta)\right] = -\lambda Y^{(\omega)} + \frac{\lambda}{4\pi} \int_{S_1^3} Y^{(\eta)}\, \nu(d\eta).$$
Thus, we have obtained the relation
$$LY^{(\omega)} = -\lambda Y^{(\omega)} + \frac{\lambda}{4\pi} \int_{S_1^3} Y^{(\eta)}\, \nu(d\eta),$$
proving that the series (6.5.10) is indeed a solution to equation (6.5.1). By passing to the limit, as $t \to 0$, we can easily check that function $Y^{(\omega)}$ satisfies initial condition (6.5.4). Thus, taking into account the hyperbolicity of the Kolmogorov equation (6.5.1), we can conclude that the sum $Y^{(\omega)}$ of the uniformly converging series (6.5.10) is the solution of the Cauchy problem (6.5.1)–(6.5.4), which is unique in the class of continuous functions.

It remains to prove the uniform convergence of the functional series (6.5.10). Since $g, h < A < \infty$, the following estimates hold:
$$\begin{aligned}
|\Psi_0^{(\omega)}| &< A^2, \\
|\Psi_1^{(\omega)}| &< \lambda t A^2 + \frac{\lambda}{4\pi}\, A^2 t\, 4\pi = A^2\, (2\lambda t), \\
|\Psi_2^{(\omega)}| &< 2\lambda^2 A^2\, \frac{t^2}{2} + 2\lambda^2 A^2\, \frac{t^2}{2} = A^2\, \frac{(2\lambda t)^2}{2!}, \\
|\Psi_3^{(\omega)}| &< \lambda\, \frac{(2\lambda)^2}{2!}\, A^2\, \frac{t^3}{3} + \lambda\, \frac{(2\lambda)^2}{2!}\, A^2\, \frac{t^3}{3} = A^2\, \frac{(2\lambda t)^3}{3!}, \\
& \cdots\cdots\cdots \\
|\Psi_n^{(\omega)}| &< A^2\, \frac{(2\lambda t)^n}{n!}, \\
& \cdots\cdots\cdots
\end{aligned}$$
Hence,
$$\sum_{n=0}^{\infty} c^n\, |\Psi_n^{(\omega)}| < A^2 \sum_{n=0}^{\infty} \frac{(2\lambda c t)^n}{n!} = A^2\, e^{2\lambda c t} < \infty,$$
proving the uniform convergence of series (6.5.10) for any $t > 0$. We can summarize the above results in the following theorem.

Theorem 6.5.1. The fundamental solution $f_\omega(x, t)$ to the Kolmogorov equation (6.5.1) in the space $\mathbb{R}^3$ is given by the uniformly converging series (6.5.10), where the terms $\Psi_n^{(\omega)}$ are defined by recurrent relations (6.5.11) with the functions $g = \delta(x)$ and $h_\omega = \delta(\tilde\omega - \omega)$. The transition probability density $f(x, t)$ of the three-dimensional symmetric Markov random flight $X(t) = (X_1(t), X_2(t), X_3(t))$, $t > 0$, is then given by
$$f(x, t) = \int_{S_1^3} f_\omega(x, t)\, \nu(d\omega).$$

Remark 6.5.1. The method of obtaining the fundamental solution to the Kolmogorov equation developed in this section can also be applied to studying the three-dimensional Markov random flight with an arbitrary dissipation function having an absolutely continuous and bounded density (that is, the non-symmetrical case); however, the calculations in this case are much more complicated and cumbersome.
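The estimates above reduce to the majorant recursion $b_n(t) = 2\lambda\int_0^t b_{n-1}(\tau)\,d\tau$ with $b_0 \equiv A^2$, whose solution is $A^2(2\lambda t)^n/n!$. A numerical sketch (not from the book; A = 1 is assumed) confirming this by trapezoid-rule integration:

```python
# Iterating the majorant recursion behind the bounds |Psi_n| < (2*lam*t)^n / n!.
from math import factorial

def majorants(lam, t, N, steps=20000):
    """Return [b_0(t), ..., b_N(t)] for b_n = 2*lam * int_0^t b_{n-1}, b_0 = 1."""
    h = t / steps
    b = [1.0] * (steps + 1)     # b_0 on a uniform grid over [0, t]
    out = [1.0]
    for _ in range(N):
        acc, nxt = 0.0, [0.0]
        for i in range(1, steps + 1):
            acc += 0.5 * (b[i - 1] + b[i]) * h   # trapezoid rule
            nxt.append(2.0 * lam * acc)
        b = nxt
        out.append(b[-1])
    return out

lam, t = 1.5, 0.8
vals = majorants(lam, t, 5)
expected = [(2 * lam * t) ** n / factorial(n) for n in range(6)]
```

Each iteration integrates a polynomial of one degree higher, so the trapezoid rule reproduces $(2\lambda t)^n/n!$ essentially exactly; the factorial decay is what makes the successive-approximation series converge for every t.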

Chapter 7 Markov Random Flight in the Space R4

The analyses carried out in Chapters 5 and 6 for the Markov random flights in the Euclidean spaces R2 and R3 differ greatly in their degree of completeness. While the random flight in the Euclidean plane R2 has been studied exhaustively and all of its most important characteristics, including the exact distribution, were obtained in explicit form, a similar analysis in the three-dimensional space R3 proved impracticable. As shown in Chapter 6, almost all the important characteristics of the Markov random flight in the space R3 can be obtained only in implicit form, as various integral transforms. This leads to the conclusion (already noted above) that random flights in odd-dimensional Euclidean spaces are much more difficult to study, whereas random flights in spaces of low even dimensions m ≤ 6 are amenable to almost complete analysis, with exact formulas for their principal characteristics, including the most important one, the distribution. In this chapter we give a detailed analysis of the symmetric Markov random flight X(t), t > 0, in the four-dimensional Euclidean space R4. Surprisingly, despite the fairly high dimension, an explicit distribution of X(t) can be obtained in terms of elementary functions, which is in itself a very rare result. In Section 7.1, we obtain closed-form expressions for the conditional densities of X(t) that have remarkably simple forms. Based on these conditional densities, in Section 7.2 an exact formula for the distribution of the process X(t) is derived, expressed in terms of elementary functions. In Section 7.3, relations for the characteristic function of X(t) are obtained in the form of an integral and of a functional series. In Section 7.4, we examine the limiting behaviour of X(t) under the standard Kac scaling condition and prove its weak convergence to the four-dimensional homogeneous Brownian motion.
In the final Section 7.5 of this chapter, we derive exact formulas for the mixed moments of the distribution of X(t) in terms of Bessel and Struve functions, as well as a relation for the moments of the Euclidean distance between X(t) and the origin 0 ∈ R4 at arbitrary time t > 0 in terms of the incomplete gamma-function and the degenerate hypergeometric function.

7.1

Conditional densities

The general model of the symmetric Markov random flight described in Section 4.1 is represented, in the four-dimensional case, by the stochastic motion of a particle that, at the initial time moment t = 0, starts from the origin 0 = (0, 0, 0, 0) ∈ R4 of the Euclidean space R4 and moves with a constant speed c. The initial direction and each new direction, taken at Poissonian random instants of rate λ > 0, are uniformly distributed on the surface of the unit sphere

$$S_1^4 = \{ x = (x_1, x_2, x_3, x_4) \in \mathbb{R}^4 : \|x\|^2 = x_1^2 + x_2^2 + x_3^2 + x_4^2 = 1 \}.$$


Let X(t) = (X₁(t), X₂(t), X₃(t), X₄(t)) denote the particle's position at time t > 0. Consider the conditional distributions

$$\Pr\{X(t) \in dx \mid N(t) = n\} = \Pr\{X_1(t) \in dx_1, X_2(t) \in dx_2, X_3(t) \in dx_3, X_4(t) \in dx_4 \mid N(t) = n\}, \qquad n \ge 1, \tag{7.1.1}$$

where, recall, N(t) is the number of Poisson events that occurred in the time interval (0, t) and dx is the infinitesimal element in the space R4 with the Lebesgue measure µ(dx) = dx₁ dx₂ dx₃ dx₄. At an arbitrary time instant t > 0, the process X(t) is located in the four-dimensional ball of radius ct:

$$B_{ct}^4 = \{ x = (x_1, x_2, x_3, x_4) \in \mathbb{R}^4 : \|x\|^2 = x_1^2 + x_2^2 + x_3^2 + x_4^2 \le c^2 t^2 \}.$$

The singular component of the distribution Pr{X(t) ∈ dx}, x ∈ B⁴ct, t > 0, is concentrated on the surface of the sphere

$$S_{ct}^4 = \partial B_{ct}^4 = \{ x \in \mathbb{R}^4 : \|x\|^2 = c^2 t^2 \},$$

while the absolutely continuous part of the distribution is concentrated in the interior of the ball B⁴ct:

$$\operatorname{int} B_{ct}^4 = \{ x \in \mathbb{R}^4 : \|x\|^2 < c^2 t^2 \}.$$

Let p(x, t) = p(x₁, x₂, x₃, x₄, t), x ∈ int B⁴ct, t > 0, denote the density of the absolutely continuous component of the distribution Pr{X(t) ∈ dx}. This chapter is focused on finding the explicit form of the density p(x, t) and studying its properties.

To start with, we study the conditional distributions (7.1.1) for arbitrary n ≥ 1. The main result of this section, the explicit form of the conditional distributions (7.1.1), is given by the following theorem.

Theorem 7.1.1. For any n ≥ 1 and arbitrary t > 0, the conditional distributions (7.1.1) are given by the formula:

$$\Pr\{X(t) \in dx \mid N(t) = n\} = \frac{n(n+1)}{\pi^2 (ct)^4} \left( 1 - \frac{\|x\|^2}{c^2 t^2} \right)^{n-1} \mu(dx), \tag{7.1.2}$$

$x = (x_1, x_2, x_3, x_4) \in \operatorname{int} B_{ct}^4$, $\|x\|^2 = x_1^2 + x_2^2 + x_3^2 + x_4^2$, $\mu(dx) = dx_1\,dx_2\,dx_3\,dx_4$.

Proof. Consider the conditional characteristic functions

$$H_n(t) = \mathbb{E}\left\{ e^{i\langle\alpha, X(t)\rangle} \mid N(t) = n \right\}, \qquad n \ge 1,$$

where α = (α₁, α₂, α₃, α₄) ∈ R4 is the real four-dimensional vector of inversion parameters and ⟨α, X(t)⟩ is the inner product of the vectors α and X(t). According to formula (4.2.5), the conditional characteristic functions $H_n(t)$, $n \ge 1$, in the four-dimensional case have the form (with τ₀ := 0, τ_{n+1} := t):

$$H_n(t) = \frac{n!}{t^n} \int_0^t d\tau_1 \int_{\tau_1}^t d\tau_2 \cdots \int_{\tau_{n-1}}^t d\tau_n \prod_{j=1}^{n+1} \left[ 2\, \frac{J_1(c(\tau_j - \tau_{j-1})\|\alpha\|)}{c(\tau_j - \tau_{j-1})\|\alpha\|} \right]. \tag{7.1.3}$$


Surprisingly, this fairly complicated expression on the right-hand side of (7.1.3) can nevertheless be evaluated explicitly, which enables us to obtain an exact formula for the conditional characteristic functions $H_n(t)$ for arbitrary n ≥ 1. By direct integration on the right-hand side of (7.1.3), we now prove that the conditional characteristic functions $H_n(t)$ are given by the formula:

$$H_n(t) = 2^{n+1} (n+1)!\, \frac{J_{n+1}(ct\|\alpha\|)}{(ct\|\alpha\|)^{n+1}}, \qquad n \ge 1. \tag{7.1.4}$$

By comparing (7.1.3) and (7.1.4), we see that it suffices to show that the following equality holds:

$$I_n := \int_0^t \frac{J_1(c\tau_1\|\alpha\|)}{\tau_1}\,d\tau_1 \int_{\tau_1}^t \frac{J_1(c(\tau_2-\tau_1)\|\alpha\|)}{\tau_2-\tau_1}\,d\tau_2 \cdots \int_{\tau_{n-1}}^t \frac{J_1(c(\tau_n-\tau_{n-1})\|\alpha\|)}{\tau_n-\tau_{n-1}}\,\frac{J_1(c(t-\tau_n)\|\alpha\|)}{t-\tau_n}\,d\tau_n = \frac{n+1}{t}\,J_{n+1}(ct\|\alpha\|). \tag{7.1.5}$$

The proof is based on successive integration on the left-hand side of (7.1.5) and application of the relation (see [63, Formula 6.533(2)]):

$$\int_0^z \frac{J_p(x)}{x}\,\frac{J_q(z-x)}{z-x}\,dx = \left(\frac{1}{p}+\frac{1}{q}\right)\frac{J_{p+q}(z)}{z}, \qquad \operatorname{Re} p > 0,\ \operatorname{Re} q > 0. \tag{7.1.6}$$

Consider the first (interior) integral on the left-hand side of (7.1.5). By changing the variable $\xi = c(\tau_n - \tau_{n-1})\|\alpha\|$, we get:

$$\int_{\tau_{n-1}}^t \frac{J_1(c(\tau_n-\tau_{n-1})\|\alpha\|)}{\tau_n-\tau_{n-1}}\,\frac{J_1(c(t-\tau_n)\|\alpha\|)}{t-\tau_n}\,d\tau_n = c\|\alpha\| \int_0^{c(t-\tau_{n-1})\|\alpha\|} \frac{J_1(\xi)}{\xi}\,\frac{J_1(c(t-\tau_{n-1})\|\alpha\|-\xi)}{c(t-\tau_{n-1})\|\alpha\|-\xi}\,d\xi. \tag{7.1.7}$$

Applying (7.1.6) to the integral on the right-hand side of (7.1.7), we obtain:

$$\int_{\tau_{n-1}}^t \frac{J_1(c(\tau_n-\tau_{n-1})\|\alpha\|)}{\tau_n-\tau_{n-1}}\,\frac{J_1(c(t-\tau_n)\|\alpha\|)}{t-\tau_n}\,d\tau_n = 2\,\frac{J_2(c(t-\tau_{n-1})\|\alpha\|)}{t-\tau_{n-1}}.$$
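The convolution identity (7.1.6) that drives each integration step can be checked numerically. The sketch below (not from the book; it assumes NumPy and SciPy are available) compares both sides of (7.1.6) for a few arbitrarily chosen orders p, q and arguments z.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def lhs(p, q, z):
    # Left-hand side of (7.1.6): int_0^z J_p(x)/x * J_q(z-x)/(z-x) dx
    val, _ = quad(lambda x: jv(p, x) / x * jv(q, z - x) / (z - x), 0.0, z)
    return val

def rhs(p, q, z):
    # Right-hand side of (7.1.6): (1/p + 1/q) * J_{p+q}(z) / z
    return (1.0 / p + 1.0 / q) * jv(p + q, z) / z

for p, q, z in [(1, 1, 2.0), (1, 2, 3.5), (2, 3, 1.7)]:
    assert abs(lhs(p, q, z) - rhs(p, q, z)) < 1e-8
```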

Integrating in the same manner with respect to $\tau_{n-1}$ (the second interior integral on the left-hand side of (7.1.5)) yields:

$$2 \int_{\tau_{n-2}}^t \frac{J_1(c(\tau_{n-1}-\tau_{n-2})\|\alpha\|)}{\tau_{n-1}-\tau_{n-2}}\,\frac{J_2(c(t-\tau_{n-1})\|\alpha\|)}{t-\tau_{n-1}}\,d\tau_{n-1} = 3\,\frac{J_3(c(t-\tau_{n-2})\|\alpha\|)}{t-\tau_{n-2}}.$$

Continuing this integration process, after the (n − 1)-th step we get:

$$I_n = n \int_0^t \frac{J_1(c\tau_1\|\alpha\|)}{\tau_1}\,\frac{J_n(c(t-\tau_1)\|\alpha\|)}{t-\tau_1}\,d\tau_1.$$


Changing in this integral the variable ξ = cτ₁‖α‖ and applying (7.1.6) again, we finally arrive at

$$I_n = \frac{n+1}{t}\,J_{n+1}(ct\|\alpha\|),$$

proving (7.1.5). Thus, we have shown that the conditional characteristic functions are given by formula (7.1.4). Note that relation (7.1.4), just proved by direct integration, exactly coincides with formula (4.4.3) obtained previously by the method of integral transforms.

To prove the statement of the theorem, we need to show that the inverse Fourier transformation of the conditional characteristic functions $H_n(t)$ leads to the conditional distributions (7.1.2). Applying the Hankel inversion formula (1.8.4), we have:

$$\begin{aligned}
\mathcal F_\alpha^{-1}[H_n(t)] &= 2^{n+1}(n+1)!\,(ct)^{-(n+1)}\,\mathcal F_\alpha^{-1}\!\left[\frac{J_{n+1}(ct\|\alpha\|)}{\|\alpha\|^{n+1}}\right] \\
&= 2^{n+1}(n+1)!\,(ct)^{-(n+1)}\,(2\pi)^{-2}\,\|x\|^{-1} \int_0^\infty \frac{J_{n+1}(ctr)}{r^{n+1}}\,J_1(\|x\|r)\,r^2\,dr \\
&= \frac{2^{n-1}(n+1)!}{\pi^2 (ct)^{n+1}\,\|x\|} \int_0^\infty r^{-(n-1)}\,J_1(\|x\|r)\,J_{n+1}(ctr)\,dr
\end{aligned}$$

(see [63, Formula 6.574(1)])

$$\begin{aligned}
&= \frac{2^{n-1}(n+1)!}{\pi^2 (ct)^{n+1}\,\|x\|}\; \frac{\|x\|\,\Gamma(2)}{2^{n-1}\,(ct)^{-n+3}\,\Gamma(2)\,\Gamma(n)}\; F\!\left(2,\,-n+1;\,2;\,\frac{\|x\|^2}{c^2t^2}\right) \\
&= \frac{(n+1)!}{\pi^2 (ct)^4\,(n-1)!}\; F\!\left(2,\,-n+1;\,2;\,\frac{\|x\|^2}{c^2t^2}\right) \\
&= \frac{n(n+1)}{\pi^2 (ct)^4}\left(1-\frac{\|x\|^2}{c^2t^2}\right)^{n-1},
\end{aligned}$$

and this exactly coincides with the density in (7.1.2).

The converse is somewhat more complicated to prove, namely, that the Fourier transformation of the conditional distributions (7.1.2) over the ball $B^4_{ct}$ yields the conditional characteristic functions (7.1.4) for any n ≥ 1. By passing to four-dimensional polar coordinates, we can easily prove the following formula for the Fourier transform of the function identically equal to 1 in the ball $B^4_r$ of radius r > 0:

$$\int_{B^4_r} e^{i\langle\alpha,x\rangle}\,\mu(dx) = (2\pi r)^2\,\frac{J_2(r\|\alpha\|)}{\|\alpha\|^2}. \tag{7.1.8}$$

Then, in view of (7.1.8), we immediately obtain for n = 1:

$$\int_{B^4_{ct}} e^{i\langle\alpha,x\rangle}\,\Pr\{X(t)\in dx \mid N(t)=1\} = \frac{2}{\pi^2 (ct)^4} \int_{B^4_{ct}} e^{i\langle\alpha,x\rangle}\,\mu(dx) = \frac{2}{\pi^2 (ct)^4}\; 4\pi^2 (ct)^2\,\frac{J_2(ct\|\alpha\|)}{\|\alpha\|^2} = 8\,\frac{J_2(ct\|\alpha\|)}{(ct\|\alpha\|)^2},$$

and this coincides with formula (7.1.4) for n = 1.


Let now n ≥ 2. Then the Fourier transform of the conditional distributions (7.1.2) is:

$$\int_{B^4_{ct}} e^{i\langle\alpha,x\rangle}\,\Pr\{X(t)\in dx \mid N(t)=n\} = \frac{n(n+1)}{\pi^2 (ct)^4} \iiiint_{x_1^2+x_2^2+x_3^2+x_4^2 \le c^2t^2} e^{i(\alpha_1 x_1+\alpha_2 x_2+\alpha_3 x_3+\alpha_4 x_4)} \left(1-\frac{x_1^2+x_2^2+x_3^2+x_4^2}{c^2t^2}\right)^{n-1} dx_1\,dx_2\,dx_3\,dx_4. \tag{7.1.9}$$

The integral on the right-hand side of (7.1.9) can be evaluated by passing to four-dimensional polar coordinates; however, this leads to cumbersome calculations. Instead, we use another, simpler way of evaluating it, based on the multidimensional Catalan theorem of classical analysis (see, for example, [63, Theorem 4.645]). Applying the Catalan theorem to our case and taking into account (7.1.8), we can reduce the four-dimensional integral in (7.1.9) to a Stieltjes integral, and formula (7.1.9) takes the form:

$$\int_{B^4_{ct}} e^{i\langle\alpha,x\rangle}\,\Pr\{X(t)\in dx \mid N(t)=n\} = \frac{n(n+1)}{\pi^2 (ct)^4} \int_0^{ct} \left(1-\frac{r^2}{c^2t^2}\right)^{n-1} d\!\left((2\pi r)^2\,\frac{J_2(r\|\alpha\|)}{\|\alpha\|^2}\right).$$

Integrating by parts and taking into account that, for n ≥ 2, the free term vanishes, we get:

$$\begin{aligned}
\int_{B^4_{ct}} e^{i\langle\alpha,x\rangle}\,\Pr\{X(t)\in dx \mid N(t)=n\}
&= \frac{n(n+1)}{\pi^2 (ct)^4}\left\{\left[\left(1-\frac{r^2}{c^2t^2}\right)^{n-1}(2\pi r)^2\,\frac{J_2(r\|\alpha\|)}{\|\alpha\|^2}\right]_0^{ct} - \int_0^{ct}(2\pi r)^2\,\frac{J_2(r\|\alpha\|)}{\|\alpha\|^2}\,d\!\left[\left(1-\frac{r^2}{c^2t^2}\right)^{n-1}\right]\right\} \\
&= -\frac{n(n+1)}{\pi^2 (ct)^4}\int_0^{ct}(2\pi r)^2\,\frac{J_2(r\|\alpha\|)}{\|\alpha\|^2}\,d\!\left[\left(1-\frac{r^2}{c^2t^2}\right)^{n-1}\right] \\
&= \frac{8n(n-1)(n+1)}{(ct)^6\,\|\alpha\|^2}\int_0^{ct} r^3\left(1-\frac{r^2}{c^2t^2}\right)^{n-2} J_2(r\|\alpha\|)\,dr \\
&= \frac{8n(n-1)(n+1)}{(ct\|\alpha\|)^2}\int_0^{1} z^3\left(1-z^2\right)^{n-2} J_2(ct\|\alpha\|z)\,dz.
\end{aligned}$$

Applying now [63, Formula 6.567(1)] to the integral on the right-hand side of this equality, we finally obtain for n ≥ 2:

$$\int_{B^4_{ct}} e^{i\langle\alpha,x\rangle}\,\Pr\{X(t)\in dx \mid N(t)=n\} = \frac{8n(n-1)(n+1)}{(ct\|\alpha\|)^2}\; 2^{n-2}\,\Gamma(n-1)\,(ct\|\alpha\|)^{-n+1}\,J_{n+1}(ct\|\alpha\|) = 2^{n+1}(n+1)!\,\frac{J_{n+1}(ct\|\alpha\|)}{(ct\|\alpha\|)^{n+1}},$$

and this exactly coincides with (7.1.4). The theorem is thus completely proved.
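As an independent sanity check on (7.1.2), each conditional density must integrate to 1 over the ball B⁴ct: in four-dimensional polar coordinates the angular variables contribute the surface area 2π² of the unit sphere, leaving a one-dimensional radial integral. The following sketch (not from the book; NumPy/SciPy assumed, parameter values arbitrary) verifies this for several n.

```python
import numpy as np
from scipy.integrate import quad

c, t = 2.0, 1.5          # arbitrary speed and time

def total_mass(n):
    # Integrate p_n over the ball B^4_{ct}: angular factor 2*pi^2 (area of S^3),
    # polar Jacobian r^3, conditional density from (7.1.2).
    coef = n * (n + 1) / (np.pi**2 * (c * t)**4)
    integrand = lambda r: coef * (1 - r**2 / (c * t)**2)**(n - 1) * r**3
    val, _ = quad(integrand, 0.0, c * t)
    return 2 * np.pi**2 * val

for n in range(1, 6):
    assert abs(total_mass(n) - 1.0) < 1e-10
```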


Remark 7.1.1. For n = 1, the conditional distribution (7.1.2) takes the form:

$$\Pr\{X(t)\in dx \mid N(t)=1\} = \frac{2}{\pi^2 (ct)^4}\,\mu(dx), \qquad x \in \operatorname{int} B^4_{ct}, \quad t > 0, \tag{7.1.10}$$

and this is exactly the uniform distribution in the four-dimensional ball $B^4_{ct}$. Recall that the density of this distribution,

$$p_1(x,t) = \frac{2}{\pi^2 (ct)^4}, \qquad x \in \operatorname{int} B^4_{ct}, \quad t > 0,$$

has already been obtained in Section 4.9 (see formula (4.9.24)) in another way, from a more general formula. This extremely interesting fact shows that, after the first change of direction, the particle can be found in a given region of the ball $B^4_{ct}$ with a probability that does not depend on the location of this region, but only on its Lebesgue measure (that is, its four-dimensional volume). Recall that we have already met a similar phenomenon when studying the symmetric Markov random flight in the Euclidean plane (see Remark 5.1.1, formula (5.1.15)); in that case, however, the uniform distribution arose only after the second change of direction. This interesting phenomenon of the uniform distribution arising after a few changes of direction is apparently peculiar to Markov random flights in the spaces R2 and R4 only, since it is not observed in other dimensions.

Formula (7.1.2) shows that, when the number of changes of direction is n ≥ 2, the conditional densities $p_n(x,t)$ take a bell-shaped form, that is, they have an absolute maximum at the origin (the starting point) and decrease monotonically on approaching the boundary of the ball $B^4_{ct}$. This can be explained by the fact that, after a large number of changes of direction, the trajectories of the motion become so fragmented that the particle can hardly leave the neighbourhood of the starting point. Note also that, unlike the two- and three-dimensional cases, all the conditional densities $p_n(x,t)$, $n \ge 1$, of the conditional distributions (7.1.2) are continuous on the boundary of the ball $B^4_{ct}$, and from this fact the continuity of the whole transition density p(x, t) follows. This issue has already been discussed in Remark 4.9.2.
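The uniformity stated in Remark 7.1.1 can be probed by simulation. Conditioned on N(t) = 1, the switch time τ is uniform on (0, t), so the position is cτU₁ + c(t − τ)U₂ with U₁, U₂ independent and uniform on the unit sphere S₁⁴. If the resulting law is uniform in the ball of radius ct, then z = ‖X‖/(ct) has density 4z³ on (0, 1), so Ez² = 2/3 and Ez⁴ = 1/2. A rough Monte Carlo sketch (not from the book; NumPy assumed, tolerances deliberately loose):

```python
import numpy as np

rng = np.random.default_rng(0)
c, t, n_samples = 1.0, 1.0, 200_000

def unit_sphere_4d(n):
    # Uniform points on S^3 via normalized Gaussian vectors
    g = rng.normal(size=(n, 4))
    return g / np.linalg.norm(g, axis=1, keepdims=True)

tau = rng.uniform(0.0, t, size=(n_samples, 1))      # single switch time
x = c * tau * unit_sphere_4d(n_samples) + c * (t - tau) * unit_sphere_4d(n_samples)
z = np.linalg.norm(x, axis=1) / (c * t)             # should have density 4 z^3 on (0, 1)

assert abs(np.mean(z**2) - 2 / 3) < 0.01
assert abs(np.mean(z**4) - 1 / 2) < 0.01
```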

7.2

Distribution of the process

The conditional distributions obtained in Theorem 7.1.1 enable us to immediately derive the absolutely continuous part of the distribution of the process X(t). The main result of this chapter is given by the following theorem.

Theorem 7.2.1. The absolutely continuous component of the distribution of the four-dimensional symmetric Markov random flight X(t), t > 0, has the form:

$$\Pr\{X(t)\in dx\} = \frac{\lambda t}{\pi^2 (ct)^4}\left[2+\lambda t\left(1-\frac{\|x\|^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}\,\|x\|^2\right)\mu(dx), \tag{7.2.1}$$

$x = (x_1, x_2, x_3, x_4) \in \operatorname{int} B^4_{ct}$, $\|x\|^2 = x_1^2+x_2^2+x_3^2+x_4^2$, $\mu(dx) = dx_1\,dx_2\,dx_3\,dx_4$.


Proof. By the total probability formula and in view of Theorem 7.1.1, we have:

$$\begin{aligned}
\Pr\{X(t)\in dx\} &= \sum_{n=1}^{\infty} \Pr\{X(t)\in dx \mid N(t)=n\}\,\Pr\{N(t)=n\} \\
&= e^{-\lambda t}\sum_{n=1}^{\infty} \frac{(\lambda t)^n}{n!}\,\frac{n(n+1)}{\pi^2 (ct)^4}\left(1-\frac{\|x\|^2}{c^2t^2}\right)^{n-1}\mu(dx) \\
&= \frac{\lambda t\,e^{-\lambda t}}{\pi^2 (ct)^4}\sum_{n=0}^{\infty} \frac{(\lambda t)^n}{n!}\,(n+2)\left(1-\frac{\|x\|^2}{c^2t^2}\right)^{n}\mu(dx) \\
&= \frac{\lambda t\,e^{-\lambda t}}{\pi^2 (ct)^4}\left[2+\sum_{n=1}^{\infty}\frac{(\lambda t)^n}{(n-1)!}\left(1-\frac{\|x\|^2}{c^2t^2}\right)^{n}+2\sum_{n=1}^{\infty}\frac{(\lambda t)^n}{n!}\left(1-\frac{\|x\|^2}{c^2t^2}\right)^{n}\right]\mu(dx) \\
&= \frac{\lambda t\,e^{-\lambda t}}{\pi^2 (ct)^4}\left[2+\lambda t\left(1-\frac{\|x\|^2}{c^2t^2}\right)\exp\!\left(\lambda t\left(1-\frac{\|x\|^2}{c^2t^2}\right)\right)+2\left(\exp\!\left(\lambda t\left(1-\frac{\|x\|^2}{c^2t^2}\right)\right)-1\right)\right]\mu(dx) \\
&= \frac{\lambda t\,e^{-\lambda t}}{\pi^2 (ct)^4}\left[2+\lambda t\left(1-\frac{\|x\|^2}{c^2t^2}\right)\right]\exp\!\left(\lambda t\left(1-\frac{\|x\|^2}{c^2t^2}\right)\right)\mu(dx) \\
&= \frac{\lambda t}{\pi^2 (ct)^4}\left[2+\lambda t\left(1-\frac{\|x\|^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}\,\|x\|^2\right)\mu(dx),
\end{aligned}$$

proving (7.2.1). It remains to check that, for any t > 0, equality (4.1.6) holds, that is,

$$\int_{B^4_{ct}} \Pr\{X(t)\in dx\} = 1 - e^{-\lambda t}. \tag{7.2.2}$$

Passing to four-dimensional polar coordinates, we have:

$$\begin{aligned}
\int_{B^4_{ct}} \Pr\{X(t)\in dx\}
&= \frac{\lambda t}{\pi^2 (ct)^4}\int_{B^4_{ct}}\left[2+\lambda t\left(1-\frac{\|x\|^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}\|x\|^2\right)\mu(dx) \\
&= \frac{\lambda t}{\pi^2 (ct)^4}\int_0^{ct}\!\!\int_0^{\pi}\!\!\int_0^{\pi}\!\!\int_0^{2\pi}\left[2+\lambda t\left(1-\frac{r^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right) r^3(\sin\theta_1)^2\sin\theta_2\,d\theta_3\,d\theta_2\,d\theta_1\,dr \\
&= \frac{\lambda t}{\pi^2 (ct)^4}\int_0^{ct} r^3\left[2+\lambda t\left(1-\frac{r^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)dr\;\int_0^{\pi}(\sin\theta_1)^2 d\theta_1\int_0^{\pi}\sin\theta_2\,d\theta_2\int_0^{2\pi}d\theta_3 \\
&= \frac{2\lambda t}{(ct)^4}\int_0^{ct} r^3\left[2+\lambda t\left(1-\frac{r^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)dr \\
&= \frac{\lambda t}{(ct)^4}\left[2\int_0^{c^2t^2} z\exp\!\left(-\frac{\lambda}{c^2 t}z\right)dz+\lambda t\int_0^{c^2t^2} z\left(1-\frac{z}{c^2t^2}\right)\exp\!\left(-\frac{\lambda}{c^2 t}z\right)dz\right] \\
&= 2\lambda t\left[\int_0^{1}\xi e^{-\lambda t\xi}\,d\xi+\frac{\lambda t}{2}\int_0^{1}\xi(1-\xi)\,e^{-\lambda t\xi}\,d\xi\right] \\
&= 2\lambda t\left[\left(1+\frac{\lambda t}{2}\right)\int_0^{1}\xi e^{-\lambda t\xi}\,d\xi-\frac{\lambda t}{2}\int_0^{1}\xi^2 e^{-\lambda t\xi}\,d\xi\right],
\end{aligned}$$

where we substituted z = r² and then ξ = z/(c²t²). The integrals on the right-hand side of this equality can be evaluated by integration by parts. For the first integral, we have:

$$\int_0^{1}\xi e^{-\lambda t\xi}\,d\xi = -\frac{1}{\lambda t}\left[e^{-\lambda t}+\frac{1}{\lambda t}\left(e^{-\lambda t}-1\right)\right].$$

With this in hand, for the second integral we get:

$$\int_0^{1}\xi^2 e^{-\lambda t\xi}\,d\xi = -\frac{1}{\lambda t}\left[e^{-\lambda t}+\frac{2}{\lambda t}\left(e^{-\lambda t}+\frac{1}{\lambda t}\left(e^{-\lambda t}-1\right)\right)\right].$$

Substituting these values and simplifying, we obtain:

$$\int_{B^4_{ct}} \Pr\{X(t)\in dx\} = 1 - e^{-\lambda t},$$

proving (7.2.2). Thus, (7.2.2) does hold. The missing part of the probability in (7.2.2), namely $e^{-\lambda t}$, pertains to the singular component of the distribution and is concentrated on the boundary $S^4_{ct} = \partial B^4_{ct}$. The theorem is completely proved.

Remark 7.2.1. The density of distribution (7.2.1) has the form:

$$p(x,t) = \frac{\lambda t}{\pi^2 (ct)^4}\left[2+\lambda t\left(1-\frac{\|x\|^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}\|x\|^2\right), \qquad \|x\| < ct. \tag{7.2.3}$$
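Density (7.2.3) can be checked directly against (7.2.2): integrating it over the ball in polar coordinates (angular factor 2π²) must return the total absolutely continuous mass 1 − e^{−λt}. A numerical sketch (not from the book; NumPy/SciPy assumed, parameter values arbitrary):

```python
import numpy as np
from scipy.integrate import quad

c, t, lam = 3.0, 2.0, 1.7          # arbitrary parameters

def p_radial(r):
    # Density (7.2.3) multiplied by the polar Jacobian factor r^3
    w = 2 + lam * t * (1 - r**2 / (c * t)**2)
    return lam * t / (np.pi**2 * (c * t)**4) * w * np.exp(-lam * r**2 / (c**2 * t)) * r**3

mass, _ = quad(p_radial, 0.0, c * t)
mass *= 2 * np.pi**2               # surface area of the unit sphere S^3
assert abs(mass - (1 - np.exp(-lam * t))) < 1e-10
```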

It is easy to see that the four-dimensional density (7.2.3) is structurally similar to the density of the two-dimensional Markov random flight (compare with formula (5.2.2)). Indeed, in both cases the density is composed of two factors. The first represents a wave: in the planar case it is the Green's function of the two-dimensional wave equation, while in the four-dimensional density (7.2.3) the wave is represented by the function in square brackets. The second factor, in both cases, is an exponential damping term. This structural resemblance of the distributions of the two- and four-dimensional Markov random flights is also clearly visible in the form of the Laplace transforms of their characteristic functions (compare formulas (4.6.3) and (4.6.5)). However, as direct verification shows, in contrast to the two-dimensional case, density (7.2.3) is not a fundamental solution to the standard four-dimensional telegraph equation.

Note also that, similarly to the conditional densities (7.1.2), the transition density p(x, t) has a bell-shaped form. This is not surprising, because by the total probability formula the transition density is composed of the conditional densities, which are all bell-shaped. From (7.2.3) we see that the transition density p(x, t) has an absolute maximum at the origin (that is, for ‖x‖ = 0) and decreases monotonically on approaching the boundary. Moreover, as noted in Remark 4.9.2, the four-dimensional density p(x, t) is continuous everywhere in the ball $B^4_{ct}$, including its boundary. This distinguishes the four-dimensional Markov random flight from its two- and three-dimensional counterparts, whose distributions have infinite discontinuities on the boundary of the diffusion area. The reason for this phenomenon was discussed in Remark 4.9.2.

Remark 7.2.2. Density (7.2.3) in polar coordinates has the form:

$$\tilde p(\rho,\theta_1,\theta_2,\theta_3,t) = \frac{\lambda t}{\pi^2 (ct)^4}\left[2+\lambda t\left(1-\frac{\rho^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}\rho^2\right)\rho^3\sin^2\theta_1\sin\theta_2,$$

$$0 < \rho < ct, \qquad 0 \le \theta_1, \theta_2 < \pi, \qquad 0 \le \theta_3 < 2\pi.$$

From this formula we see that the radial and angular components of the density are independent. This is similar to the respective property of the Wiener process, whose density in polar coordinates also has independent radial and angular components.

Remark 7.2.3. In view of (4.1.5), the complete transition density f(x, t), x ∈ B⁴ct, t ≥ 0, of the distribution of the four-dimensional symmetric Markov random flight X(t), in terms of generalized functions, has the form:

$$f(x,t) = \frac{e^{-\lambda t}}{2\pi^2 (ct)^3}\,\delta(c^2t^2-\|x\|^2) + \frac{\lambda t}{\pi^2 (ct)^4}\left[2+\lambda t\left(1-\frac{\|x\|^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}\|x\|^2\right)\Theta(ct-\|x\|), \tag{7.2.4}$$

where δ(x) is the Dirac delta-function and Θ(x) is the Heaviside unit-step function. The first term in (7.2.4) represents the singular part of the transition density of the process X(t), while the second one represents its absolutely continuous part.

Distribution (7.2.1) enables us to obtain an exact formula for the probability of being, at any time moment t > 0, in a four-dimensional ball $B^4_r$ of arbitrary radius 0 < r < ct centred at the origin. This result is given by the following theorem.

Theorem 7.2.2. For any t > 0, the following formula holds:

$$\Pr\{X(t)\in B^4_r\} = 1 - \left(1+\frac{\lambda}{c^2 t}\,r^2-\frac{\lambda}{c^4 t^3}\,r^4\right)\exp\!\left(-\frac{\lambda}{c^2 t}\,r^2\right), \qquad 0 < r < ct. \tag{7.2.5}$$

Proof. Derivation of formula (7.2.5) is similar to that of (7.2.2), although somewhat more complicated from the computational point of view.


Passing to four-dimensional polar coordinates in (7.2.1), we have:

$$\begin{aligned}
\Pr\{X(t)\in B^4_r\} &= \int_{B^4_r}\Pr\{X(t)\in dx\} \\
&= \frac{2\lambda t}{(ct)^4}\int_0^{r}\rho^3\left[2+\lambda t\left(1-\frac{\rho^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}\rho^2\right)d\rho \\
&= \frac{2\lambda t}{(ct)^4}\left[\left(1+\frac{\lambda t}{2}\right)\int_0^{r^2} z\exp\!\left(-\frac{\lambda}{c^2 t}z\right)dz-\frac{\lambda}{2c^2 t}\int_0^{r^2} z^2\exp\!\left(-\frac{\lambda}{c^2 t}z\right)dz\right],
\end{aligned} \tag{7.2.6}$$

where the angular integrals again contribute the factor 2π² and we substituted z = ρ². Consider separately the integrals in square brackets on the right-hand side of this equality. Integrating by parts, we have for the first integral:

$$\int_0^{r^2} z\exp\!\left(-\frac{\lambda}{c^2 t}z\right)dz = -\frac{c^2 t}{\lambda}\left[r^2\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)+\frac{c^2 t}{\lambda}\left(\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)-1\right)\right]. \tag{7.2.7}$$

With this in hand and integrating by parts again, we have for the second integral in (7.2.6):

$$\int_0^{r^2} z^2\exp\!\left(-\frac{\lambda}{c^2 t}z\right)dz = -\frac{c^2 t}{\lambda}\left[r^4\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)+\frac{2c^2 t}{\lambda}\left(r^2\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)+\frac{c^2 t}{\lambda}\left(\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)-1\right)\right)\right]. \tag{7.2.8}$$

Substituting (7.2.7) and (7.2.8) into (7.2.6) and simplifying, we obtain:

$$\Pr\{X(t)\in B^4_r\} = \frac{\lambda}{c^4 t^3}\,r^4\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)-\frac{\lambda}{c^2 t}\,r^2\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)-\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)+1 = 1-\left(1+\frac{\lambda}{c^2 t}\,r^2-\frac{\lambda}{c^4 t^3}\,r^4\right)\exp\!\left(-\frac{\lambda}{c^2 t}\,r^2\right),$$

proving (7.2.5). The theorem is proved.

It is easy to check that, for r = ct, formula (7.2.5) turns into equality (7.2.2), yielding the probability of being in the interior of the ball $B^4_{ct}$.
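Formula (7.2.5) admits the same kind of numerical verification: for several radii r < ct, the radial integral of density (7.2.3) must match the closed form. A sketch (not from the book; NumPy/SciPy assumed, parameters arbitrary):

```python
import numpy as np
from scipy.integrate import quad

c, t, lam = 2.0, 1.0, 3.0

def p_radial(rho):
    # Density (7.2.3) times the polar Jacobian factor rho^3
    w = 2 + lam * t * (1 - rho**2 / (c * t)**2)
    return lam * t / (np.pi**2 * (c * t)**4) * w * np.exp(-lam * rho**2 / (c**2 * t)) * rho**3

def prob_ball(r):
    # Closed form (7.2.5)
    a = lam * r**2 / (c**2 * t)
    return 1 - (1 + a - lam * r**4 / (c**4 * t**3)) * np.exp(-a)

for r in [0.3 * c * t, 0.6 * c * t, 0.9 * c * t]:
    val, _ = quad(p_radial, 0.0, r)
    assert abs(2 * np.pi**2 * val - prob_ball(r)) < 1e-10
```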

7.3

Characteristic function

In this section we give two equivalent representations of the characteristic function H(t) of the four-dimensional symmetric Markov random flight X(t). Recall that, although the general formula (7.1.4) (or (4.4.2)) for the conditional characteristic functions $H_n(t)$ was proved for arbitrary n ≥ 1, it is also valid (as was shown in Remark 4.3.1) for n = 0 as well, and this is true for any dimension m ≥ 2. Therefore, the conditional characteristic function $H_0(t)$ in the four-dimensional case has the form:

$$H_0(t) = 2\,\frac{J_1(ct\|\alpha\|)}{ct\|\alpha\|}. \tag{7.3.1}$$

Note that $H_0(t)$ is the characteristic function of the singular part of density (7.2.4).

Theorem 7.3.1. The characteristic function H(t) of the four-dimensional symmetric Markov random flight X(t) has the following equivalent representations:

$$H(t) = \frac{2e^{-\lambda t}}{ct\|\alpha\|}\sum_{n=0}^{\infty}\left(\frac{2\lambda}{c\|\alpha\|}\right)^{n}(n+1)\,J_{n+1}(ct\|\alpha\|), \tag{7.3.2}$$

$$H(t) = 2e^{-\lambda t}\,\frac{J_1(ct\|\alpha\|)}{ct\|\alpha\|} + \frac{4\lambda t}{(ct)^4\|\alpha\|}\int_0^{ct} r^2\left[2+\lambda t\left(1-\frac{r^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)J_1(r\|\alpha\|)\,dr. \tag{7.3.3}$$

Proof. The proof of relation (7.3.2) is simple. In view of (7.1.4), and taking into account (7.3.1), the characteristic function H(t) of the process X(t) has the following series representation:

$$H(t) = \mathbb{E}\left\{e^{i\langle\alpha,X(t)\rangle}\right\} = e^{-\lambda t}\sum_{n=0}^{\infty}\frac{(\lambda t)^n}{n!}\,H_n(t) = e^{-\lambda t}\sum_{n=0}^{\infty}\frac{(\lambda t)^n}{n!}\,2^{n+1}(n+1)!\,\frac{J_{n+1}(ct\|\alpha\|)}{(ct\|\alpha\|)^{n+1}} = \frac{2e^{-\lambda t}}{ct\|\alpha\|}\sum_{n=0}^{\infty}\left(\frac{2\lambda}{c\|\alpha\|}\right)^{n}(n+1)\,J_{n+1}(ct\|\alpha\|),$$

proving (7.3.2).

Let us now prove relation (7.3.3). Applying the multidimensional Catalan theorem (see [63, Theorem 4.645]), we obtain the characteristic function (Fourier transform) of distribution (7.2.1) in the ball $B^4_{ct}$:

$$\begin{aligned}
\int_{B^4_{ct}} e^{i\langle\alpha,x\rangle}\,\Pr\{X(t)\in dx\}
&= \frac{\lambda t}{\pi^2 (ct)^4}\int_{B^4_{ct}} e^{i\langle\alpha,x\rangle}\left[2+\lambda t\left(1-\frac{\|x\|^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}\|x\|^2\right)\mu(dx) \\
&= \frac{\lambda t}{\pi^2 (ct)^4}\int_0^{ct}\left[2+\lambda t\left(1-\frac{r^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)d\!\left(\int_{B^4_r} e^{i\langle\alpha,x\rangle}\mu(dx)\right)
\end{aligned}$$

(see formula (7.1.8))

$$\begin{aligned}
&= \frac{\lambda t}{\pi^2 (ct)^4}\int_0^{ct}\left[2+\lambda t\left(1-\frac{r^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)d\!\left((2\pi r)^2\,\frac{J_2(r\|\alpha\|)}{\|\alpha\|^2}\right) \\
&= \frac{4\lambda t}{(ct)^4\|\alpha\|^2}\int_0^{ct}\left[2+\lambda t\left(1-\frac{r^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)d\!\left(r^2 J_2(r\|\alpha\|)\right) \\
&= \frac{4\lambda t}{(ct)^4\|\alpha\|^2}\int_0^{ct}\left[2+\lambda t\left(1-\frac{r^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)r\left(r\|\alpha\|\,J_2'(r\|\alpha\|)+2J_2(r\|\alpha\|)\right)dr.
\end{aligned}$$

Applying the recurrence relation for Bessel functions (see [63, Formula 8.472(1)])

$$z\,J_\nu'(z)+\nu\,J_\nu(z) = z\,J_{\nu-1}(z),$$

we get:

$$\int_{B^4_{ct}} e^{i\langle\alpha,x\rangle}\,\Pr\{X(t)\in dx\} = \frac{4\lambda t}{(ct)^4\|\alpha\|}\int_0^{ct} r^2\left[2+\lambda t\left(1-\frac{r^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)J_1(r\|\alpha\|)\,dr,$$

yielding the second term in (7.3.3), which is the characteristic function of the absolutely continuous part of the distribution of the process X(t). Adding to it the term (7.3.1), which is the characteristic function of the singular part of the distribution, we arrive at (7.3.3). Note that the integral on the right-hand side of (7.3.3) obviously cannot be expressed in terms of elementary functions.

Let us now prove the equivalence of representations (7.3.2) and (7.3.3) of the characteristic function H(t). For the integral term in (7.3.3), we have:

$$\begin{aligned}
&\frac{4\lambda t}{(ct)^4\|\alpha\|}\int_0^{ct} r^2\left[2+\lambda t\left(1-\frac{r^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)J_1(r\|\alpha\|)\,dr \\
&\qquad= \frac{4\lambda t\,e^{-\lambda t}}{(ct)^4\|\alpha\|}\int_0^{ct} r^2\left[2+\lambda t\left(1-\frac{r^2}{c^2t^2}\right)\right]\exp\!\left(\lambda t\left(1-\frac{r^2}{c^2t^2}\right)\right)J_1(r\|\alpha\|)\,dr \\
&\qquad= \frac{4\lambda\,e^{-\lambda t}}{c\|\alpha\|}\int_0^{1} z^2\left[2+\lambda t(1-z^2)\right]e^{\lambda t(1-z^2)}\,J_1(ct\|\alpha\|z)\,dz \\
&\qquad= \frac{4\lambda\,e^{-\lambda t}}{c\|\alpha\|}\sum_{n=0}^{\infty}\frac{(\lambda t)^n}{n!}\int_0^{1} z^2\left[2+\lambda t(1-z^2)\right](1-z^2)^n\,J_1(ct\|\alpha\|z)\,dz \\
&\qquad= \frac{4\lambda\,e^{-\lambda t}}{c\|\alpha\|}\sum_{n=0}^{\infty}\frac{(\lambda t)^n}{n!}\left[2\int_0^{1} z^2(1-z^2)^n J_1(ct\|\alpha\|z)\,dz+\lambda t\int_0^{1} z^2(1-z^2)^{n+1} J_1(ct\|\alpha\|z)\,dz\right].
\end{aligned}$$

According to [63, Formula 6.567(1)], the integrals on the right-hand side of this expression are equal to:

$$\int_0^{1} z^2(1-z^2)^n\,J_1(ct\|\alpha\|z)\,dz = 2^n\,n!\,(ct\|\alpha\|)^{-(n+1)}\,J_{n+2}(ct\|\alpha\|),$$

$$\int_0^{1} z^2(1-z^2)^{n+1}\,J_1(ct\|\alpha\|z)\,dz = 2^{n+1}\,(n+1)!\,(ct\|\alpha\|)^{-(n+2)}\,J_{n+3}(ct\|\alpha\|).$$

Then we get:

$$\begin{aligned}
&\frac{4\lambda t}{(ct)^4\|\alpha\|}\int_0^{ct} r^2\left[2+\lambda t\left(1-\frac{r^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}r^2\right)J_1(r\|\alpha\|)\,dr \\
&\qquad= \frac{4\lambda\,e^{-\lambda t}}{c\|\alpha\|}\sum_{n=0}^{\infty}\frac{(\lambda t)^n}{n!}\left[2^{n+1}\,n!\,(ct\|\alpha\|)^{-(n+1)}\,J_{n+2}(ct\|\alpha\|)+\lambda t\,2^{n+1}\,(n+1)!\,(ct\|\alpha\|)^{-(n+2)}\,J_{n+3}(ct\|\alpha\|)\right] \\
&\qquad= \frac{4\lambda\,e^{-\lambda t}}{c\|\alpha\|}\left\{\sum_{n=0}^{\infty}(\lambda t)^n\,2^{n+1}\,(ct\|\alpha\|)^{-(n+1)}\,J_{n+2}(ct\|\alpha\|)+\sum_{n=0}^{\infty}(\lambda t)^{n+1}\,2^{n+1}\,(n+1)\,(ct\|\alpha\|)^{-(n+2)}\,J_{n+3}(ct\|\alpha\|)\right\} \\
&\qquad= \frac{4\lambda\,e^{-\lambda t}}{c\|\alpha\|}\left\{\frac{2}{ct\|\alpha\|}\,J_2(ct\|\alpha\|)+\sum_{n=1}^{\infty}(\lambda t)^n\,2^{n+1}\,(ct\|\alpha\|)^{-(n+1)}\,J_{n+2}(ct\|\alpha\|)+\sum_{n=1}^{\infty}(\lambda t)^{n}\,2^{n}\,n\,(ct\|\alpha\|)^{-(n+1)}\,J_{n+2}(ct\|\alpha\|)\right\} \\
&\qquad= \frac{4\lambda\,e^{-\lambda t}}{c\|\alpha\|}\left\{\frac{2}{ct\|\alpha\|}\,J_2(ct\|\alpha\|)+\sum_{n=1}^{\infty}(\lambda t)^n\,2^{n}\,(ct\|\alpha\|)^{-(n+1)}\,(n+2)\,J_{n+2}(ct\|\alpha\|)\right\} \\
&\qquad= \frac{2e^{-\lambda t}}{ct\|\alpha\|}\sum_{n=1}^{\infty}\left(\frac{2\lambda}{c\|\alpha\|}\right)^{n}(n+1)\,J_{n+1}(ct\|\alpha\|).
\end{aligned}$$

By adding to this expression the first term in (7.3.3), we finally obtain:

$$H(t) = \frac{2e^{-\lambda t}}{ct\|\alpha\|}\,J_1(ct\|\alpha\|)+\frac{2e^{-\lambda t}}{ct\|\alpha\|}\sum_{n=1}^{\infty}\left(\frac{2\lambda}{c\|\alpha\|}\right)^{n}(n+1)\,J_{n+1}(ct\|\alpha\|) = \frac{2e^{-\lambda t}}{ct\|\alpha\|}\sum_{n=0}^{\infty}\left(\frac{2\lambda}{c\|\alpha\|}\right)^{n}(n+1)\,J_{n+1}(ct\|\alpha\|),$$

exactly coinciding with (7.3.2). The equivalence of representations (7.3.2) and (7.3.3) is thus established. The theorem is completely proved.

Let us now evaluate the Laplace transform of the characteristic function H(t). Applying the Laplace transformation $\mathcal L_t$ to (7.3.2) and taking into account the uniform convergence of the series (see Lemma 4.5.3), we get:


$$\begin{aligned}
\mathcal L_t[H(t)](s) &= \frac{2}{c\|\alpha\|}\sum_{n=0}^{\infty}\left(\frac{2\lambda}{c\|\alpha\|}\right)^{n}(n+1)\,\mathcal L_t\!\left[e^{-\lambda t}\,\frac{J_{n+1}(ct\|\alpha\|)}{t}\right]\!(s) \\
&= \frac{2}{c\|\alpha\|}\sum_{n=0}^{\infty}\left(\frac{2\lambda}{c\|\alpha\|}\right)^{n}(n+1)\,\mathcal L_t\!\left[\frac{J_{n+1}(ct\|\alpha\|)}{t}\right]\!(s+\lambda)
\end{aligned}$$

(see [7, Table 4.14, Formula 5])

$$= \frac{2}{c\|\alpha\|}\sum_{n=0}^{\infty}\left(\frac{2\lambda}{c\|\alpha\|}\right)^{n}(n+1)\,(n+1)^{-1}\left(\frac{c\|\alpha\|}{s+\lambda+\sqrt{(s+\lambda)^2+(c\|\alpha\|)^2}}\right)^{n+1} = \frac{1}{\lambda}\sum_{n=0}^{\infty}\left(\frac{2\lambda}{s+\lambda+\sqrt{(s+\lambda)^2+(c\|\alpha\|)^2}}\right)^{n+1}.$$

Since, as is easy to see, for any s such that Re s > 0 the following inequality holds:

$$\frac{2\lambda}{s+\lambda+\sqrt{(s+\lambda)^2+(c\|\alpha\|)^2}} = \frac{1}{\dfrac{s}{2\lambda}+\dfrac{1}{2}+\dfrac{1}{2}\sqrt{\left(\dfrac{s}{\lambda}+1\right)^2+\dfrac{c^2\|\alpha\|^2}{\lambda^2}}} < 1,$$

applying the formula for the sum of an infinitely decreasing geometric progression to the series above, we get:

$$\mathcal L_t[H(t)](s) = \frac{1}{\lambda}\,\frac{\dfrac{2\lambda}{s+\lambda+\sqrt{(s+\lambda)^2+(c\|\alpha\|)^2}}}{1-\dfrac{2\lambda}{s+\lambda+\sqrt{(s+\lambda)^2+(c\|\alpha\|)^2}}} = \frac{2}{s+\lambda+\sqrt{(s+\lambda)^2+(c\|\alpha\|)^2}-2\lambda} = \frac{2}{s+\sqrt{(s+\lambda)^2+(c\|\alpha\|)^2}-\lambda},$$

and this exactly coincides with formula (4.6.5) obtained in Section 4.6 from the general formula (4.6.1).

7.4

Limit theorem

The distribution (7.2.1) obtained above enables us to easily study the limiting behaviour of the four-dimensional symmetric Markov random flight X(t) under the standard Kac condition and to verify once again the validity of Theorem 4.8.1 in the four-dimensional case.

Theorem 7.4.1. Under the Kac scaling condition (4.8.1), the following limiting relation holds:

$$\lim_{\substack{c,\lambda\to\infty \\ (c^2/\lambda)\to\rho}} \frac{\lambda t}{\pi^2 (ct)^4}\left[2+\lambda t\left(1-\frac{\|x\|^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2 t}\|x\|^2\right) = \frac{1}{\rho^2\pi^2 t^2}\exp\!\left(-\frac{\|x\|^2}{\rho t}\right), \tag{7.4.1}$$

$x = (x_1, x_2, x_3, x_4) \in \operatorname{int} B^4_{ct}$, $t > 0$.


Proof. Let us represent density (7.2.3) in the form:

$$p(x,t) = \frac{1}{\pi^2 t^3}\,\frac{\lambda}{c^2}\left[\frac{2}{c^2}+\frac{\lambda}{c^2}\,t\left(1-\frac{\|x\|^2}{c^2t^2}\right)\right]\exp\!\left(-\frac{\lambda}{c^2}\,\frac{\|x\|^2}{t}\right).$$

Then, as is easy to see, under Kac's condition (4.8.1) the factor in square brackets tends to t/ρ. Hence,

$$\lim_{\substack{c,\lambda\to\infty \\ (c^2/\lambda)\to\rho}} p(x,t) = \frac{1}{\rho^2\pi^2 t^2}\exp\!\left(-\frac{\|x\|^2}{\rho t}\right), \qquad x \in \operatorname{int} B^4_{ct}, \quad t > 0,$$

Q.E.D. It is easy to see that the function on the right-hand side of (7.4.1),

$$u(x,t) = \frac{1}{\rho^2\pi^2 t^2}\exp\!\left(-\frac{\|x\|^2}{\rho t}\right), \tag{7.4.2}$$

which is the density of the four-dimensional homogeneous Wiener process with zero drift and diffusion coefficient ρ/2, exactly coincides with the function on the right-hand side of formula (4.8.2) for m = 4. The theorem is proved.

Function (7.4.2) is the fundamental solution to the four-dimensional heat equation

$$\frac{\partial u}{\partial t} = \frac{\rho}{4}\,\Delta u, \tag{7.4.3}$$

where ∆ is the four-dimensional Laplacian

$$\Delta = \frac{\partial^2}{\partial x_1^2}+\frac{\partial^2}{\partial x_2^2}+\frac{\partial^2}{\partial x_3^2}+\frac{\partial^2}{\partial x_4^2}.$$

This also means that the limiting process is the four-dimensional Wiener process with generator (ρ/4)∆. Note that equation (7.4.3) exactly coincides with equation (4.8.9) for m = 4.

Remark 7.4.1. By direct checking, one can show that the density of the process X(t) given by (7.2.3) is not a fundamental (or even a particular) solution to the four-dimensional telegraph equation. This confirms the results of Section 4.10, which state that Markov random flights in Euclidean spaces of higher dimensions are driven by hyperparabolic equations that are much more complicated than the telegraph ones.
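The convergence in Theorem 7.4.1 is easy to observe numerically: fixing ρ and letting c grow with λ = c²/ρ (the Kac condition), density (7.2.3) approaches the Gaussian density (7.4.2). A sketch (not from the book; NumPy assumed, parameter values arbitrary):

```python
import numpy as np

rho, t = 1.0, 1.0

def p_flight(norm_x, c):
    lam = c**2 / rho                  # Kac condition: c^2 / lambda = rho
    w = 2 + lam * t * (1 - norm_x**2 / (c * t)**2)
    return lam * t / (np.pi**2 * (c * t)**4) * w * np.exp(-lam * norm_x**2 / (c**2 * t))

def u_gauss(norm_x):
    # Limiting density (7.4.2) of the 4D Wiener process
    return np.exp(-norm_x**2 / (rho * t)) / (rho**2 * np.pi**2 * t**2)

x = 0.7
err_small = abs(p_flight(x, c=10.0) - u_gauss(x))
err_large = abs(p_flight(x, c=1000.0) - u_gauss(x))
assert err_large < err_small          # error shrinks as c grows
assert err_large < 1e-3 * u_gauss(x)  # and is already relatively tiny
```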

7.5

Moments

In the final section of this chapter we give the results concerning the mixed moments of the four-dimensional symmetric Markov random flight X(t). Since the distribution of X(t) is known, we are able to evaluate its moments explicitly. Let q = (q₁, q₂, q₃, q₄) denote a 4-multi-index. We are interested in the mixed moments of the process X(t):

$$\mathbb{E}X^q(t) = \mathbb{E}\,X_1^{q_1}(t)\,X_2^{q_2}(t)\,X_3^{q_3}(t)\,X_4^{q_4}(t), \qquad q_1 \ge 1,\ q_2 \ge 1,\ q_3 \ge 1,\ q_4 \ge 1.$$

The explicit form of the moment function of X(t) is given by the following theorem.


Theorem 7.5.1. For any integers q1 , q2 , q3 , q4 ≥ 1, the moment function of X(t) is given by the formula:  −λt   q1 + 1 q2 + q3 + q4 + 1 e  q1 +q2 +q3 +q4  B (ct) ,   π2 2 2          q2 + 1 q3 + q4 + 1 q3 + 1 q4 + 1    ×B , B ,   2 2 2 2         q +1 q +1 1 2  Γ 2 Γ q32+1 Γ q42+1 2λt  q1 +q2 +q3 +q4 Γ 2    + 2 (ct)   π Γ q1 +q2 +q23 +q4 +4           × (λt)−(q1 +q2 +q3 +q4 +4)/2 γ q1 + q2 + q3 + q4 + 4 , λt 2 EXq (t) =   q +q +q +q +4  1 2 3 4  λt Γ  2   +    2 Γ q1 +q2 +q23 +q4 +8        q1 + q2 + q3 + q4 + 4 q1 + q2 + q3 + q4 + 8   ; ; −λt ,  × 1 F1   2 2      if all q1 , q2 , q3 , q4 are even,         0, otherwise, where γ(α, x) =

∞ X k=0

(−1)k xα+k k!(α + k)

(7.5.1)

(7.5.2)

is the incomplete gamma-function and 1 F1 (ξ; η; z) = Φ(ξ, η; z) =

∞ X (ξ)k z k (η)k k!

(7.5.3)

k=0

is the degenerate hypergeometric function. Proof. Let us consider separately the singular and the absolutely continuous parts of the distribution of X(t). According to (7.2.4), for the singular part of the distribution, we have: EXqs (t) =

e−λt 2π 2 (ct)3

ZZZZ

xq11 xq22 xq33 xq44 σ(dx)

x21 +x22 +x23 +x24 =c2 t2

Z π e−λt q1 +q2 +q3 +q4 = (ct) (cos θ1 )q1 (sin θ1 )q2 +q3 +q4 dθ1 2π 2 0 Z π Z 2π q2 q3 +q4 × (cos θ2 ) (sin θ2 ) dθ2 (cos θ3 )q3 (sin θ3 )q4 dθ3 . 0

Evaluating these integrals, we get:

0

(7.5.4)

330

Markov Random Flights

∫₀^{2π} (cos θ₃)^{q₃} (sin θ₃)^{q₄} dθ₃ = 2B((q₃+1)/2, (q₄+1)/2) if q₃ and q₄ are even, and 0 otherwise;

∫₀^π (cos θ₂)^{q₂} (sin θ₂)^{q₃+q₄+1} dθ₂ = B((q₂+1)/2, (q₃+q₄+2)/2) if q₂ is even, and 0 otherwise;

∫₀^π (cos θ₁)^{q₁} (sin θ₁)^{q₂+q₃+q₄+2} dθ₁ = B((q₁+1)/2, (q₂+q₃+q₄+3)/2) if q₁ is even, and 0 otherwise.    (7.5.5)

Substituting these values of the integrals (7.5.5) into (7.5.4), we get, for even q₁, q₂, q₃, q₄:

EX_s^q(t) = (e^{−λt}/π²) (ct)^{q₁+q₂+q₃+q₄} B((q₁+1)/2, (q₂+q₃+q₄+3)/2) B((q₂+1)/2, (q₃+q₄+2)/2) B((q₃+1)/2, (q₄+1)/2).    (7.5.6)
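The trigonometric Beta-integral identities (7.5.5) are easy to confirm numerically; the sketch below (plain Python with a hand-rolled Simpson rule; helper names are ours) checks several even-exponent cases and one vanishing odd-exponent case:

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson rule (n must be even).
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def beta(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

# Even exponents p, q: int_0^pi cos^p(x) sin^q(x) dx = B((p+1)/2, (q+1)/2),
# and over [0, 2*pi] the value doubles.
for p, q in [(2, 4), (4, 2), (2, 2), (6, 2)]:
    f = lambda x: math.cos(x) ** p * math.sin(x) ** q
    assert abs(simpson(f, 0, math.pi) - beta((p + 1) / 2, (q + 1) / 2)) < 1e-10
    assert abs(simpson(f, 0, 2 * math.pi) - 2 * beta((p + 1) / 2, (q + 1) / 2)) < 1e-10

# An odd cosine power makes the integral over [0, pi] vanish:
assert abs(simpson(lambda x: math.cos(x) ** 3 * math.sin(x) ** 2, 0, math.pi)) < 1e-10
print("Beta-integral identities of the type (7.5.5) confirmed numerically")
```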



Let us now evaluate the moments of the absolutely continuous part of the distribution. According to (7.2.4) and passing to four-dimensional polar coordinates, we have:

EX_c^q(t) = (λt/(π²(ct)⁴)) ∫∫∫∫_{x₁²+x₂²+x₃²+x₄²≤c²t²} (∏_{i=1}^4 x_i^{q_i}) exp(−λ‖x‖²/(c²t)) [2 + λt(1 − ‖x‖²/(c²t²))] ∏_{i=1}^4 dx_i

= (λt/(π²(ct)⁴)) ∫₀^{ct} dr ∫₀^π dθ₁ ∫₀^π dθ₂ ∫₀^{2π} dθ₃ (r cos θ₁)^{q₁} (r sin θ₁ cos θ₂)^{q₂} (r sin θ₁ sin θ₂ cos θ₃)^{q₃} (r sin θ₁ sin θ₂ sin θ₃)^{q₄}
  × r³ (sin θ₁)² sin θ₂ exp(−λr²/(c²t)) [2 + λt(1 − r²/(c²t²))]

= (λt/(π²(ct)⁴)) ∫₀^{ct} r^{q₁+q₂+q₃+q₄+3} exp(−λr²/(c²t)) [2 + λt(1 − r²/(c²t²))] dr
  × ∫₀^π (cos θ₁)^{q₁} (sin θ₁)^{q₂+q₃+q₄+2} dθ₁ ∫₀^π (cos θ₂)^{q₂} (sin θ₂)^{q₃+q₄+1} dθ₂ ∫₀^{2π} (cos θ₃)^{q₃} (sin θ₃)^{q₄} dθ₃.

Taking into account (7.5.5), we can rewrite this equality for even q₁, q₂, q₃, q₄ as follows:


EX_c^q(t) = (λt/(π²(ct)⁴)) ∫₀^{ct} r^{q₁+q₂+q₃+q₄+3} exp(−λr²/(c²t)) [2 + λt(1 − r²/(c²t²))] dr
  × 2B((q₁+1)/2, (q₂+q₃+q₄+3)/2) B((q₂+1)/2, (q₃+q₄+2)/2) B((q₃+1)/2, (q₄+1)/2)

= (λt/(π²(ct)⁴)) [Γ((q₁+1)/2) Γ((q₂+1)/2) Γ((q₃+1)/2) Γ((q₄+1)/2) / Γ((q₁+q₂+q₃+q₄+4)/2)]
  × ∫₀^{ct} (r²)^{(q₁+q₂+q₃+q₄+2)/2} exp(−λr²/(c²t)) [2 + λt(1 − r²/(c²t²))] d(r²)

= (λt/π²) (ct)^{q₁+q₂+q₃+q₄} [Γ((q₁+1)/2) Γ((q₂+1)/2) Γ((q₃+1)/2) Γ((q₄+1)/2) / Γ((q₁+q₂+q₃+q₄+4)/2)]
  × ∫₀^1 z^{(q₁+q₂+q₃+q₄+2)/2} (2 + λt(1 − z)) e^{−λtz} dz

= (λt/π²) (ct)^{q₁+q₂+q₃+q₄} [Γ((q₁+1)/2) Γ((q₂+1)/2) Γ((q₃+1)/2) Γ((q₄+1)/2) / Γ((q₁+q₂+q₃+q₄+4)/2)]
  × [ 2 ∫₀^1 z^{(q₁+q₂+q₃+q₄+2)/2} e^{−λtz} dz + λt ∫₀^1 z^{(q₁+q₂+q₃+q₄+2)/2} (1 − z) e^{−λtz} dz ].

Applying now [63, Formulas 3.381(1) and 3.383(1)] to the first and second integrals in this equality, respectively, we get:

EX_c^q(t) = (2λt/π²) (ct)^{q₁+q₂+q₃+q₄} [Γ((q₁+1)/2) Γ((q₂+1)/2) Γ((q₃+1)/2) Γ((q₄+1)/2) / Γ((q₁+q₂+q₃+q₄+4)/2)]
  × { (λt)^{−(q₁+q₂+q₃+q₄+4)/2} γ((q₁+q₂+q₃+q₄+4)/2, λt)
      + (λt/2) [Γ((q₁+q₂+q₃+q₄+4)/2)/Γ((q₁+q₂+q₃+q₄+8)/2)] ₁F₁((q₁+q₂+q₃+q₄+4)/2; (q₁+q₂+q₃+q₄+8)/2; −λt) }.

Adding to this equality the moments (7.5.6) of the singular part of the distribution, we finally arrive at (7.5.1). The theorem is proved.

Consider now the one-dimensional stochastic process

R(t) = ‖X(t)‖ = √(X₁²(t) + X₂²(t) + X₃²(t) + X₄²(t)),

representing the Euclidean distance between the four-dimensional symmetric Markov random flight X(t) and the origin 0 ∈ R⁴ at time t. Obviously, 0 ≤ R(t) ≤ ct and, according to Theorem 7.2.2 (formula (7.2.5)), the absolutely continuous part of the distribution of R(t) has the form:

Pr{R(t) < r} = Pr{X(t) ∈ B⁴_r} = 1 − (1 + (λ/(c²t)) r² − (λ/(c⁴t³)) r⁴) exp(−λr²/(c²t)),    0 ≤ r < ct.


Therefore, the complete density of the distribution of R(t) on the interval 0 ≤ r ≤ ct is given by the formula:

f(r, t) = (r³/(ct)³) e^{−λt} δ(ct − r)
  + [ (4λ/(c⁴t³) + 2λ²/(c⁴t²)) r³ − (2λ²/(c⁶t⁴)) r⁵ ] exp(−λr²/(c²t)) Θ(ct − r).    (7.5.7)
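Since the distribution of R(t) consists of an atom of mass e^{−λt} at r = ct plus the absolutely continuous part in (7.5.7), its total mass must equal 1. A small numerical check (an illustration, not part of the book's argument; helper names are ours):

```python
import math

def f_ac(r, lam, c, t):
    # Absolutely continuous part of the radial density (7.5.7), 0 <= r < ct.
    a = (4 * lam / (c**4 * t**3) + 2 * lam**2 / (c**4 * t**2)) * r**3
    b = (2 * lam**2 / (c**6 * t**4)) * r**5
    return (a - b) * math.exp(-lam * r**2 / (c**2 * t))

def total_mass(lam, c, t, n=4000):
    # Simpson integral of the a.c. part plus the atom e^{-lam*t} at r = ct.
    h = c * t / n
    s = f_ac(0.0, lam, c, t) + f_ac(c * t, lam, c, t)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f_ac(i * h, lam, c, t)
    return s * h / 3 + math.exp(-lam * t)

print(total_mass(2.0, 1.0, 1.5))  # should be very close to 1
```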

In the following theorem we present an explicit formula for the moments of the process R(t).

Theorem 7.5.2. For any integer q ≥ 1, the following relation holds:

ER^q(t) = (ct)^q { e^{−λt} + (λt)^{−(q+2)/2} [ (2 + λt) γ(q/2 + 2, λt) − γ(q/2 + 3, λt) ] },    (7.5.8)

where γ(α, x) is the incomplete gamma-function given by (7.5.2).

Proof. According to (7.5.7), we have:

ER^q(t) = (ct)^q e^{−λt} + ∫₀^{ct} (4λ/(c⁴t³) + 2λ²/(c⁴t²)) r^{q+3} exp(−λr²/(c²t)) dr − (2λ²/(c⁶t⁴)) ∫₀^{ct} r^{q+5} exp(−λr²/(c²t)) dr.

Changing the variable ξ = r² in both integrals, we get:

ER^q(t) = (ct)^q e^{−λt} + (2λ/(c⁴t³) + λ²/(c⁴t²)) ∫₀^{c²t²} ξ^{(q+2)/2} exp(−λξ/(c²t)) dξ − (λ²/(c⁶t⁴)) ∫₀^{c²t²} ξ^{(q+4)/2} exp(−λξ/(c²t)) dξ

(see [63, Formula 3.381(1)])

= (ct)^q e^{−λt} + (2λ/(c⁴t³) + λ²/(c⁴t²)) (λ/(c²t))^{−(q+4)/2} γ(q/2 + 2, λt) − (λ²/(c⁶t⁴)) (λ/(c²t))^{−(q+6)/2} γ(q/2 + 3, λt)

= (ct)^q e^{−λt} + (ct)^q (λt)^{−(q+2)/2} (2 + λt) γ(q/2 + 2, λt) − (ct)^q (λt)^{−(q+2)/2} γ(q/2 + 3, λt)

= (ct)^q { e^{−λt} + (λt)^{−(q+2)/2} [ (2 + λt) γ(q/2 + 2, λt) − γ(q/2 + 3, λt) ] }.

The theorem is proved.

Remark 7.5.1. From (7.5.8) we can obtain formulas for the most important first and second moments of the process R(t):

ER(t) = ct { e^{−λt} + (λt)^{−3/2} [ (2 + λt) γ(5/2, λt) − γ(7/2, λt) ] },
ER²(t) = (2c²/λ²) ( e^{−λt} + λt − 1 ).    (7.5.9)


The first formula in (7.5.9) immediately follows from (7.5.8) for q = 1. Let us prove the second relation in (7.5.9). For q = 2, formula (7.5.8) yields:

ER²(t) = (ct)² { e^{−λt} + (λt)^{−2} [ (2 + λt) γ(3, λt) − γ(4, λt) ] }

(see [63, Formula 8.356(1)])

= (ct)² { e^{−λt} + (λt)^{−2} [ (2 + λt) γ(3, λt) − 3γ(3, λt) + (λt)³ e^{−λt} ] }
= (ct)² { e^{−λt} + (λt)^{−2} [ (λt − 1) γ(3, λt) + (λt)³ e^{−λt} ] }

(see [63, Formula 8.352(1)])

= (ct)² { e^{−λt} + (λt)^{−2} [ 2(λt − 1)(1 − e^{−λt}(1 + λt + (λt)²/2!)) + (λt)³ e^{−λt} ] }
= (ct)² { e^{−λt} + (λt)^{−2} [ 2λt − 2 − e^{−λt}((λt)² − 2) ] }
= (ct)² { 2/(λt) − (2/(λt)²)(1 − e^{−λt}) }
= (2c²/λ²) ( e^{−λt} + λt − 1 ),

proving the second relation in (7.5.9).
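A short numerical cross-check of (7.5.8) against the closed form for ER²(t), with γ(α, x) computed directly from its series (7.5.2) (a sketch; the helper names are ours):

```python
import math

def lower_gamma(alpha, x, terms=80):
    # Incomplete gamma function via its series (7.5.2).
    return sum((-1) ** k * x ** (alpha + k) / (math.factorial(k) * (alpha + k))
               for k in range(terms))

def moment_R(q, lam, c, t):
    # E R^q(t) by formula (7.5.8).
    lt = lam * t
    return (c * t) ** q * (math.exp(-lt) + lt ** (-(q + 2) / 2) *
            ((2 + lt) * lower_gamma(q / 2 + 2, lt) - lower_gamma(q / 2 + 3, lt)))

lam, c, t = 1.5, 2.0, 2.0
second = moment_R(2, lam, c, t)
closed = 2 * c**2 / lam**2 * (math.exp(-lam * t) + lam * t - 1)  # second relation in (7.5.9)
print(second, closed)  # the two values should agree
```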

Chapter 8 Markov Random Flight in the Space R6

The symmetric Markov random flights in the Euclidean spaces R² and R⁴ studied in the previous Chapters 5 and 7 admit an almost exhaustive analysis yielding their exact characteristics, including the most important one, namely, the distribution. However, as noted above, in the space R³ a similar exhaustive analysis is apparently not possible, and the main characteristics of the three-dimensional Markov random flight can be given only implicitly, in terms of their integral transforms. This suggests that random flights in even-dimensional spaces are much easier to study, at least if the dimension of the space is not too high. This chapter is a clear confirmation of this thesis. We consider the symmetric Markov random flight X(t) in the Euclidean space R⁶. Surprisingly, despite such a fairly high dimension, the explicit distribution of X(t) can be obtained. Although this distribution has a much more complicated structure than those in the spaces R² and R⁴, it can nevertheless be found in an explicit form by means of the method of integral transforms developed in Chapter 4.

In Section 8.1, we give a closed-form expression for the conditional distributions of the six-dimensional symmetric Markov random flight X(t) in the form of finite sums of Gauss hypergeometric functions whose first argument is always a non-positive integer. This means that the hypergeometric functions in the conditional distributions are, in fact, finite-order polynomials. Based on these conditional distributions, in Section 8.2, we derive an explicit formula for the distribution of X(t). This distribution is represented in the form of a series composed of finite sums of hypergeometric functions that are finite-order polynomials, due to the above-noted specific form of the conditional distributions.

8.1

Conditional densities

In the six-dimensional Euclidean space R⁶, a particle moving at constant speed c takes, at Poisson moments of intensity λ, random directions uniformly distributed on the unit sphere

S₁⁶ = { x = (x₁, x₂, x₃, x₄, x₅, x₆) ∈ R⁶ : ‖x‖² = Σ_{i=1}^6 x_i² = 1 }.

As above, we denote by X(t) = (X₁(t), X₂(t), X₃(t), X₄(t), X₅(t), X₆(t)) the particle's position in the space R⁶ at time instant t > 0. In this section we are interested in the conditional distributions

Pr{X(t) ∈ dx | N(t) = n} = Pr{ ∩_{i=1}^6 (X_i(t) ∈ dx_i) | N(t) = n },    n ≥ 1,    (8.1.1)


where, recall, N(t) is the number of Poisson events that occurred in the time interval (0, t), and dx is an infinitesimal element in R⁶ with Lebesgue measure μ(dx) = dx₁ dx₂ dx₃ dx₄ dx₅ dx₆.

At an arbitrary time moment t > 0, the particle, with probability 1, is located in the six-dimensional ball of radius ct:

B⁶_ct = { x = (x₁, ..., x₆) ∈ R⁶ : ‖x‖² = Σ_{i=1}^6 x_i² ≤ c²t² }.

The singular component of the distribution Pr{X(t) ∈ dx}, x ∈ B⁶_ct, t > 0, is concentrated on the sphere

S⁶_ct = ∂B⁶_ct = { x ∈ R⁶ : ‖x‖² = Σ_{i=1}^6 x_i² = c²t² },

while the remaining part of the distribution is concentrated in the interior

int B⁶_ct = { x ∈ R⁶ : ‖x‖² = Σ_{i=1}^6 x_i² < c²t² }

of the ball B⁶_ct and forms its absolutely continuous component. Let, as above, p(x, t) = p(x₁, x₂, x₃, x₄, x₅, x₆; t), x ∈ int B⁶_ct, t > 0, denote the density of the absolutely continuous component of the distribution Pr{X(t) ∈ dx}, which is the main subject of this chapter.

Our first result concerns the explicit form of the conditional distributions (8.1.1) and is given by the following theorem.

Theorem 8.1.1. For any n ≥ 1 and t > 0, the conditional distributions (8.1.1) are given by the formula:

Pr{X(t) ∈ dx | N(t) = n}
  = (16/(π³(ct)⁶)) (1 − (5/6) ‖x‖²/(c²t²)) μ(dx),    if n = 1,
  = (n!(n+1)!/(2π³(ct)⁶)) Σ_{k=0}^{n+1} [(k+1)(k+2)(n+2k+1)/(3^k (n−k+1)!(n+k−2)!)] F(−(n+k−2), k+3; 3; ‖x‖²/(c²t²)) μ(dx),    if n ≥ 2,    (8.1.2)

where x = (x₁, ..., x₆) ∈ int B⁶_ct, ‖x‖² = Σ_{i=1}^6 x_i² < c²t², and F(ξ, η; ζ; z) is the

Gauss hypergeometric function.

Proof. Let pₙ(x, t), n ≥ 1, be the conditional densities of the conditional distributions (8.1.1). According to formula (4.2.6), the characteristic function (Fourier transform) of the uniform distribution on the sphere S⁶_ct has the form:

φ(t) = 8 J₂(ct‖α‖)/(ct‖α‖)²,    (8.1.3)

where α = (α₁, α₂, α₃, α₄, α₅, α₆) ∈ R⁶ is the six-dimensional real vector of inversion parameters, ‖α‖ = √(α₁² + α₂² + α₃² + α₄² + α₅² + α₆²), and J₂(z) is the second-order Bessel function.


According to (4.2.7), the conditional characteristic functions Hn (t) (Fourier transforms in the ball B6ct of conditional densities pn (x, t), n ≥ 1), are given by the formula: n! In (t), tn

Hn (t) = where t

Z In (t) =

Z

t

Z

t

dτ2 · · ·

dτ1

dτn τn−1

τ1

0

 n+1 Y 

n ≥ 1,

(8.1.4)

ϕ(τj − τj−1 )

 

,

n ≥ 1.



j=1

Note that, for n = 0 (this corresponds to the case when no Poisson events occur in the time interval (0, t)), formula for characteristic function H0 (t) has the form (see (4.3.4): H0 (t) = I0 (t) = ϕ(t) = 8

J2 (ctkαk) . (ctkαk)2

(8.1.5)

According to (4.2.14) (Corollary 4.2.2), the Laplace transform L of function In (t) is given by the formula: n+1

L[In (t)](s) = (L[ϕ(t)](s))

,

n ≥ 1,

Re s > 0,

(8.1.6)

where the function φ(t) is given by (8.1.3). In view of [7, Table 4.14, Formula 6], the Laplace transform of function (8.1.3) is:

L[φ(t)](s) = (8/(c‖α‖)²) L[J₂(ct‖α‖)/t²](s)
= (8/(c‖α‖)²) · (c‖α‖/4) [ c‖α‖/(s + √(s² + (c‖α‖)²)) + (1/3)( c‖α‖/(s + √(s² + (c‖α‖)²)) )³ ]
= 2 [ (s + √(s² + (c‖α‖)²))^{−1} + ((c‖α‖)²/3)(s + √(s² + (c‖α‖)²))^{−3} ].

Substituting this expression into (8.1.6) and applying the Newton binomial theorem, we get:

L[Iₙ(t)](s) = 2^{n+1} (s + √(s² + (c‖α‖)²))^{−(n+1)} [ 1 + ((c‖α‖)²/3)(s + √(s² + (c‖α‖)²))^{−2} ]^{n+1}
= 2^{n+1} Σ_{k=0}^{n+1} C_{n+1}^k ((c‖α‖)^{2k}/3^k) (s + √(s² + (c‖α‖)²))^{−(n+2k+1)},

where C_n^k = n!/(k!(n−k)!) are the binomial coefficients. The inverse Laplace transformation of this expression yields:


Iₙ(t) = 2^{n+1} Σ_{k=0}^{n+1} C_{n+1}^k ((c‖α‖)^{2k}/3^k) L^{−1}[ (s + √(s² + (c‖α‖)²))^{−(n+2k+1)} ](t)

(see [7, Table 5.3, Formula 43])

= 2^{n+1} Σ_{k=0}^{n+1} C_{n+1}^k ((c‖α‖)^{2k}/3^k) ((n+2k+1)/t) (c‖α‖)^{−(n+2k+1)} J_{n+2k+1}(ct‖α‖)

= (2^{n+1}/((c‖α‖)^{n+1} t)) Σ_{k=0}^{n+1} C_{n+1}^k ((n+2k+1)/3^k) J_{n+2k+1}(ct‖α‖).

Then, according to (8.1.4), the conditional characteristic functions Hₙ(t) have the form:

Hₙ(t) = (2^{n+1} n!/(ct‖α‖)^{n+1}) Σ_{k=0}^{n+1} C_{n+1}^k ((n+2k+1)/3^k) J_{n+2k+1}(ct‖α‖),    n ≥ 1.    (8.1.7)

Formula (8.1.7) is also valid for n = 0. Indeed, for n = 0, formula (8.1.7) yields:

H₀(t) = (2/(ct‖α‖)) [J₁(ct‖α‖) + J₃(ct‖α‖)]
= (2/(ct‖α‖)) (4/(ct‖α‖)) J₂(ct‖α‖)
= 8 J₂(ct‖α‖)/(ct‖α‖)²,

coinciding with (8.1.5). Note that here we have used the well-known recurrence relation for Bessel functions (see [63, Formula 8.471(1)]):

J_{ν−1}(z) + J_{ν+1}(z) = (2ν/z) J_ν(z).    (8.1.8)
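The recurrence (8.1.8) is easy to verify numerically for integer orders, using the power series J_n(z) = Σ_k (−1)^k (z/2)^{n+2k}/(k!(n+k)!) (a quick illustration, not part of the proof; helper names are ours):

```python
import math

def bessel_j(n, z, terms=60):
    # Integer-order Bessel function of the first kind via its power series.
    return sum((-1) ** k / (math.factorial(k) * math.factorial(n + k))
               * (z / 2) ** (n + 2 * k) for k in range(terms))

# Recurrence (8.1.8): J_{nu-1}(z) + J_{nu+1}(z) = (2*nu/z) * J_nu(z)
for nu in (2, 3, 5):
    for z in (0.5, 1.7, 4.0):
        lhs = bessel_j(nu - 1, z) + bessel_j(nu + 1, z)
        rhs = 2 * nu / z * bessel_j(nu, z)
        assert abs(lhs - rhs) < 1e-12
print("recurrence (8.1.8) verified")
```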

To obtain the conditional densities pₙ(x, t), n ≥ 1, one needs to evaluate the inverse Fourier transforms F_α^{−1} of the conditional characteristic functions (8.1.7):

pₙ(x, t) = F_α^{−1}[Hₙ(t)](x) = (2^{n+1} n!/(ct)^{n+1}) Σ_{k=0}^{n+1} C_{n+1}^k ((n+2k+1)/3^k) F_α^{−1}[ J_{n+2k+1}(ct‖α‖)/‖α‖^{n+1} ](x),    n ≥ 1.    (8.1.9)

Evaluation of (8.1.9) is quite different for n = 1 and for n ≥ 2, so we consider these two cases separately.

Let n = 1. Then, according to (8.1.7) and applying again (8.1.8), we get the conditional characteristic function H₁(t):

H₁(t) = (8/(ct‖α‖)²) [ J₂(ct‖α‖) + (4/3) J₄(ct‖α‖) + (1/3) J₆(ct‖α‖) ]
= (8/(ct‖α‖)²) [ (J₂(ct‖α‖) + J₄(ct‖α‖)) + (1/3)(J₄(ct‖α‖) + J₆(ct‖α‖)) ]
= (16/(ct‖α‖)³) [ 3 J₃(ct‖α‖) + (5/3) J₅(ct‖α‖) ].


Inverting this expression by means of the Hankel inversion formula (1.8.4), we obtain the conditional density p₁(x, t) corresponding to a single change of direction:

p₁(x, t) = F_α^{−1}[H₁(t)](x)
= (16/(ct)³) { 3 F_α^{−1}[J₃(ct‖α‖)/‖α‖³](x) + (5/3) F_α^{−1}[J₅(ct‖α‖)/‖α‖³](x) }
= (16/(ct)³) { 3 (2π)^{−3}‖x‖^{−2} ∫₀^∞ r³ J₂(‖x‖r) (J₃(ctr)/r³) dr + (5/3)(2π)^{−3}‖x‖^{−2} ∫₀^∞ r³ J₂(‖x‖r) (J₅(ctr)/r³) dr }
= (16/((2πct)³‖x‖²)) { 3 ∫₀^∞ J₂(‖x‖r) J₃(ctr) dr + (5/3) ∫₀^∞ J₂(‖x‖r) J₅(ctr) dr }

(see [63, Formulas 6.512(3) and 6.512(1)])

= (16/((2πct)³‖x‖²)) { 3 ‖x‖²/(ct)³ + (5/3)(‖x‖²/(ct)³)(Γ(4)/(Γ(3)Γ(2))) F(4, −1; 3; ‖x‖²/(c²t²)) }
= (2/(π³(ct)⁶)) { 3 + 5 (1 − (4/3) ‖x‖²/(c²t²)) }
= (16/(π³(ct)⁶)) (1 − (5/6) ‖x‖²/(c²t²)).    (8.1.10)

Let now n ≥ 2. Applying again the Hankel inversion formula (1.8.4), we can evaluate the inverse Fourier transforms in (8.1.9):

F_α^{−1}[ J_{n+2k+1}(ct‖α‖)/‖α‖^{n+1} ](x) = (2π)^{−3}‖x‖^{−2} ∫₀^∞ r³ J₂(‖x‖r) (J_{n+2k+1}(ctr)/r^{n+1}) dr
= (1/((2π)³‖x‖²)) ∫₀^∞ r^{−(n−2)} J₂(‖x‖r) J_{n+2k+1}(ctr) dr

(see [63, Formula 6.574(1)])

= (1/((2π)³‖x‖²)) · (‖x‖² Γ(k+3)/(2^{n−2}(ct)^{5−n} Γ(n+k−1) Γ(3))) F(k+3, −(n+k−2); 3; ‖x‖²/(c²t²))
= ((k+2)!/(π³ 2^{n+2} (ct)^{5−n} (n+k−2)!)) F(−(n+k−2), k+3; 3; ‖x‖²/(c²t²)).

Substituting this expression into (8.1.9), we obtain the conditional densities for arbitrary n ≥ 2:

pₙ(x, t) = (2^{n+1} n!/(ct)^{n+1}) Σ_{k=0}^{n+1} C_{n+1}^k ((n+2k+1)/3^k) ((k+2)!/(π³ 2^{n+2} (ct)^{5−n} (n+k−2)!)) F(−(n+k−2), k+3; 3; ‖x‖²/(c²t²))

= (n!(n+1)!/(2π³(ct)⁶)) Σ_{k=0}^{n+1} ((n+2k+1)(k+2)!/(k!(n−k+1)! 3^k (n+k−2)!)) F(−(n+k−2), k+3; 3; ‖x‖²/(c²t²))


= (n!(n+1)!/(2π³(ct)⁶)) Σ_{k=0}^{n+1} ((k+1)(k+2)(n+2k+1)/(3^k (n−k+1)!(n+k−2)!)) F(−(n+k−2), k+3; 3; ‖x‖²/(c²t²)),    n ≥ 2.    (8.1.11)

From (8.1.10) and (8.1.11), formula (8.1.2) follows. The theorem is proved.

Remark 8.1.1. The conditional density p₁(x, t) corresponding to a single change of direction can be obtained in a much simpler way. Applying the general formula (4.9.5) obtained above in the six-dimensional case m = 6, we immediately get:

p₁(x, t) = (16/(π³(ct)⁶)) F(5/2, −1; 3; ‖x‖²/(c²t²)) = (16/(π³(ct)⁶)) (1 − (5/6) ‖x‖²/(c²t²)),

and this exactly coincides with (8.1.10).

Remark 8.1.2. Formula (8.1.11) shows that the conditional densities pₙ(x, t), for n ≥ 2, have a fairly complicated form in spaces of higher dimensions. This sharply contrasts with the two- and four-dimensional cases, in which the conditional densities have a very simple form for any n ≥ 1 (see, for comparison, formula (5.1.13) in dimension m = 2 and formula (7.1.2) in dimension m = 4, respectively). For example, in the six-dimensional case m = 6, the conditional density p₂(x, t), corresponding to two changes of direction, is given by the relation:

p₂(x, t) = (4/(π³(ct)⁶)) ( 53/3 − (130/3) ‖x‖²/(ct)² + 35 ‖x‖⁴/(ct)⁴ − (28/3) ‖x‖⁶/(ct)⁶ ).    (8.1.12)

Let us prove (8.1.12). For n = 2, formula (8.1.11) yields (writing w = ‖x‖²/(c²t²)):

p₂(x, t) = (6/(π³(ct)⁶)) Σ_{k=0}^{3} ((k+1)(k+2)(2k+3)/(3^k (3−k)! k!)) F(−k, k+3; 3; w)
= (6/(π³(ct)⁶)) { F(0, 3; 3; w) + 5 F(−1, 4; 3; w) + (14/3) F(−2, 5; 3; w) + (10/9) F(−3, 6; 3; w) }
= (6/(π³(ct)⁶)) { 1 + 5(1 − (4/3)w) + (14/3)(1 − (10/3)w + (5/2)w²) + (10/9)(1 − 6w + (21/2)w² − (28/5)w³) }
= (6/(π³(ct)⁶)) { 106/9 − (260/9)w + (70/3)w² − (56/9)w³ }
= (4/(π³(ct)⁶)) { 53/3 − (130/3)w + 35w² − (28/3)w³ },

proving (8.1.12). Obviously, for n ≥ 3, the conditional densities pₙ(x, t) have a much more complicated form than (8.1.12).
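The equality of the hypergeometric sum (8.1.11) for n = 2 and the polynomial (8.1.12) can also be confirmed numerically; since the first argument of each F is a non-positive integer, the series terminates after finitely many terms (a sketch with helper names of our own; the common factor 1/(π³(ct)⁶) is dropped on both sides):

```python
import math

def poch(a, s):
    # Pochhammer symbol (a)_s.
    r = 1.0
    for i in range(s):
        r *= a + i
    return r

def F(a, b, c, z):
    # Terminating Gauss hypergeometric series (a = -N with N a non-negative integer).
    N = int(round(-a))
    return sum(poch(a, s) * poch(b, s) / (poch(c, s) * math.factorial(s)) * z ** s
               for s in range(N + 1))

def p2_sum(w):
    # n = 2 case of (8.1.11), as a function of w = ||x||^2/(ct)^2.
    return 6 * sum((k + 1) * (k + 2) * (2 * k + 3)
                   / (3 ** k * math.factorial(3 - k) * math.factorial(k))
                   * F(-k, k + 3, 3, w) for k in range(4))

def p2_poly(w):
    # Polynomial form (8.1.12).
    return 4 * (53 / 3 - 130 / 3 * w + 35 * w ** 2 - 28 / 3 * w ** 3)

for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert abs(p2_sum(w) - p2_poly(w)) < 1e-10
print("(8.1.11) with n = 2 matches polynomial (8.1.12)")
```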


Remark 8.1.3. Since in the conditional densities (8.1.11) the first argument of the hypergeometric functions is always a negative integer or zero for any n and k, the conditional densities pₙ(x, t) are, in fact, finite-order polynomials of the variable ‖x‖²/(ct)². For example, this is clearly seen from formula (8.1.12). Such a structure is a peculiarity of conditional densities in even-dimensional spaces (see, for comparison, formulas (5.1.13) and (7.1.2)).

8.2

Distribution of the process

Conditional densities pₙ(x, t) obtained in the previous section enable us to easily derive the explicit distribution of the six-dimensional symmetric Markov random flight X(t).

Theorem 8.2.1. For any t > 0, the absolutely continuous component of the distribution of the process X(t) has the form:

Pr{X(t) ∈ dx} = [ (16λte^{−λt}/(π³(ct)⁶)) (1 − (5/6) ‖x‖²/(c²t²))
  + (e^{−λt}/(2π³(ct)⁶)) Σ_{n=2}^∞ (λt)ⁿ (n+1)! Σ_{k=0}^{n+1} ((k+1)(k+2)(n+2k+1)/(3^k (n−k+1)!(n+k−2)!)) F(−(n+k−2), k+3; 3; ‖x‖²/(c²t²)) ] μ(dx),    (8.2.1)

x = (x₁, ..., x₆) ∈ int B⁶_ct,    ‖x‖² = Σ_{i=1}^6 x_i² < c²t²,    μ(dx) = ∏_{i=1}^6 dx_i.

Proof. In view of the total probability formula, we have:

Pr{X(t) ∈ dx} = e^{−λt} Σ_{n=1}^∞ ((λt)ⁿ/n!) pₙ(x, t) μ(dx).    (8.2.2)

Substituting into this formula the explicit expressions for the conditional densities pₙ(x, t) given by (8.1.2), we immediately obtain (8.2.1).

It remains to check that equality (4.1.6) holds. In view of (8.2.2), it suffices to show that the conditional densities (8.1.2) satisfy the equality:

∫_{B⁶_ct} pₙ(x, t) μ(dx) = 1,    n ≥ 1    (8.2.3)

(note that (8.2.3) must be fulfilled in the space of any dimension m ≥ 2). Passing to six-dimensional polar coordinates, we have:

∫_{B⁶_ct} (‖x‖²/(c²t²))^k μ(dx) = (π³/(ct)^{2k}) ∫₀^{ct} r^{2k+5} dr = π³(ct)⁶/(2k+6),    k ≥ 0.


Then, for the conditional density p₁(x, t) given by (8.1.10), we get:

∫_{B⁶_ct} p₁(x, t) μ(dx) = (16/(π³(ct)⁶)) ∫_{B⁶_ct} (1 − (5/6) ‖x‖²/(c²t²)) μ(dx)
= (16/(π³(ct)⁶)) ( π³(ct)⁶/6 − (5/6) · π³(ct)⁶/8 )
= 16 (1/6 − 5/48) = 1,

and, thus, (8.2.3) is fulfilled for n = 1.

Let now n ≥ 2. Then, by integrating (8.1.11), we have:

∫_{B⁶_ct} pₙ(x, t) μ(dx) = (n!(n+1)!/(2π³(ct)⁶)) [ (2/(n!(n−2)!)) ∫_{B⁶_ct} F(−(n−2), 3; 3; ‖x‖²/(c²t²)) μ(dx)
  + Σ_{k=1}^{n+1} ((k+1)(k+2)(n+2k+1)/(3^k (n−k+1)!(n+k−2)!)) ∫_{B⁶_ct} F(−(n+k−2), k+3; 3; ‖x‖²/(c²t²)) μ(dx) ].    (8.2.4)

Let us now prove that the second term (the sum) on the right-hand side of (8.2.4) is zero. To do this, we show that the integral of the hypergeometric function in the sum in (8.2.4) is equal to zero for any integer k such that 1 ≤ k ≤ n + 1, n ≥ 2. Indeed, passing to six-dimensional polar coordinates in this integral, we get:

∫_{B⁶_ct} F(−(n+k−2), k+3; 3; ‖x‖²/(c²t²)) μ(dx) = π³ ∫₀^{ct} r⁵ F(−(n+k−2), k+3; 3; r²/(c²t²)) dr
= π³(ct)⁶ ∫₀^1 z⁵ F(−(n+k−2), k+3; 3; z²) dz

(see [63, Formula 7.513])

= π³(ct)⁶ (1/2) B(1, 3) ₃F₂(−(n+k−2), k+3, 3; 3, 4; 1)
= (π³(ct)⁶/6) Σ_{s=0}^∞ [(−(n+k−2))_s (k+3)_s (3)_s / ((3)_s (4)_s)] (1/s!)
= (π³(ct)⁶/6) Σ_{s=0}^∞ [(−(n+k−2))_s (k+3)_s / (4)_s] (1/s!)
= (π³(ct)⁶/6) F(−(n+k−2), k+3; 4; 1) = 0,

in view of equality (1.9.31) of Lemma 1.9.10.


Thus, we have shown that the second term (the sum) in (8.2.4) is zero. Then (8.2.4) takes the form:

∫_{B⁶_ct} pₙ(x, t) μ(dx) = (n(n−1)(n+1)/(π³(ct)⁶)) ∫_{B⁶_ct} F(−(n−2), 3; 3; ‖x‖²/(c²t²)) μ(dx)
= (n(n−1)(n+1)/(π³(ct)⁶)) ∫_{B⁶_ct} (1 − ‖x‖²/(c²t²))^{n−2} μ(dx)
= (n(n−1)(n+1)/(π³(ct)⁶)) π³ ∫₀^{ct} r⁵ (1 − r²/(c²t²))^{n−2} dr
= n(n−1)(n+1) ∫₀^1 z⁵ (1 − z²)^{n−2} dz
= n(n−1)(n+1) (1/2) B(3, n−1)
= (n(n−1)(n+1)/2) Γ(3)Γ(n−1)/Γ(n+2)
= (n(n−1)(n+1)/2) · 2(n−2)!/(n+1)! = 1,

proving (8.2.3). The theorem is thus completely proved.

Remark 8.2.1. The density of the distribution (8.2.1) has the form:

p(x, t) = (16λte^{−λt}/(π³(ct)⁶)) (1 − (5/6) ‖x‖²/(c²t²))
  + (e^{−λt}/(2π³(ct)⁶)) Σ_{n=2}^∞ (λt)ⁿ (n+1)! Σ_{k=0}^{n+1} ((k+1)(k+2)(n+2k+1)/(3^k (n−k+1)!(n+k−2)!)) F(−(n+k−2), k+3; 3; ‖x‖²/(c²t²)).    (8.2.5)

The expression on the right-hand side of (8.2.5) apparently cannot be simplified. From the form of density (8.2.5) we see that, structurally, it is very different from the form of the densities in the two- and four-dimensional cases (see Remark 7.2.1). Recall that in those dimensions the densities have the form of the product of a wave-type function and an exponential function introducing the damping effect. In density (8.2.5) the exponential factor e^{−λt} is also present and, as noted above, this trait is peculiar to the Markov random flight in any dimension. However, the other factor has the extremely complicated form of a functional series composed of finite sums of hypergeometric functions (in fact, polynomials), and its connection with wave processes is not visible. This emphasizes again how strongly the behaviour of the Markov random flight depends on the dimension of the space, a fact that was repeatedly noted above. One can expect that the distribution of the process in higher dimensions has a much more complicated form than (8.2.5).

Remark 8.2.2. Taking into account (4.1.5), the complete transition density f(x, t), x ∈ B⁶_ct, t ≥ 0, of the distribution of the six-dimensional symmetric Markov random flight X(t), in terms of generalized functions, has the form:

f(x, t) = (e^{−λt}/(π³(ct)⁵)) δ(c²t² − ‖x‖²)
  + [ (16λte^{−λt}/(π³(ct)⁶)) (1 − (5/6) ‖x‖²/(c²t²))
    + (e^{−λt}/(2π³(ct)⁶)) Σ_{n=2}^∞ (λt)ⁿ (n+1)! Σ_{k=0}^{n+1} ((k+1)(k+2)(n+2k+1)/(3^k (n−k+1)!(n+k−2)!)) F(−(n+k−2), k+3; 3; ‖x‖²/(c²t²)) ] Θ(ct − ‖x‖),    (8.2.6)

where δ(x) is the Dirac delta-function and Θ(x) is the Heaviside unit-step function. The first term in (8.2.6) represents the singular part of the density of the distribution of the process X(t), while the second one represents its absolutely continuous part.
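The normalization (8.2.3) of the conditional densities is easy to confirm numerically by radial integration over the ball (the surface area of the unit sphere S⁵ is π³, so the integral reduces to π³(ct)⁶ ∫₀¹ z⁵ pₙ(z) dz with z = ‖x‖/(ct)). A sketch for p₁ and p₂, with the common 1/(π³(ct)⁶) prefactor factored out (helper names are ours):

```python
def simpson(f, a, b, n=4000):
    # Composite Simpson rule (n must be even).
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Radial profiles of p1 (8.1.10) and p2 (8.1.12) in the variable z = ||x||/(ct):
p1 = lambda z: 16 * (1 - 5 / 6 * z ** 2)
p2 = lambda z: 4 * (53 / 3 - 130 / 3 * z ** 2 + 35 * z ** 4 - 28 / 3 * z ** 6)

for p in (p1, p2):
    mass = simpson(lambda z: z ** 5 * p(z), 0.0, 1.0)
    assert abs(mass - 1.0) < 1e-10
print("conditional densities p1, p2 integrate to 1 over the ball")
```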

Chapter 9 Applied Models

This chapter is devoted to some possible applications of random flights. Since a lot of theoretical and applied models in physics, chemistry, biology, financial markets, etc., are based on various random walk processes, the potential of their practical applications is huge. This is especially true for models based on Brownian motion. In such models, the replacement of Brownian motion by finite-velocity random walks gives a new insight and a more profound understanding of the most essential features of the process and often yields qualitatively new, sometimes unexpected, results (see, for example, [53–57, 115, 153, 159, 160, 180, 183, 188, 199] and references therein). An extremely important property of random motions is that they generate a diffusion determined by the probabilistic characteristics of the randomly moving particles. While Brownian particles generate a diffusion with an infinite speed of propagation, particles moving at a finite speed generate a finite-velocity diffusion. Various diffusion processes often arise in nature as well as in many fields of science and technology. These processes are characterized by the presence of some source of substance (or heat, mass, energy, electric charge, etc.) concentrated at an initial point or in a compact set and by the spreading of the substance outwards. In this chapter we consider finite-velocity diffusion processes generated by the Markov random flights studied in previous chapters. In Section 9.1, we develop a conception of slow diffusion processes generated by randomly moving particles when both the speed and the intensity of changes of direction are small. We present a slow diffusion condition linking these parameters through time and providing a non-degenerate diffusion. Based on this slow diffusion condition, we derive the stationary distribution, for large time, of the Markov random flights in low-dimensional Euclidean spaces.
In Section 9.2, we present an approach for modelling the fluctuations of water level in a reservoir based on the telegraph processes studied in Chapter 2. The peculiarity of such an approach consists in the interpretation of water level as a particle moving on a (vertical) line at constant speed and alternating two possible directions (up and down) at random time instants. Section 9.3 deals with a model of soil pollution from a stationary source. It is imagined that the pollution process is carried out by randomly moving particles with a random lifetime. Based on the results of Chapter 5, we obtain the density of the pollution distribution for the case when the lifetime is an exponentially distributed random variable. In Section 9.4, we outline some physical applications of the finite-velocity random motions arising in transport phenomena, discuss some relativistic properties of such stochastic motions and sketch a model of cosmic microwave background radiation (CMB) based on a three-dimensional telegraph equation on the surface of a unit sphere with random initial conditions whose solution represents a random field on the surface of the sphere. Finally, in Section 9.5 a finite-velocity counterpart of the classical Black-Scholes model of option pricing and optimal strategies on financial markets is sketched.


9.1

Slow diffusion

9.1.1

Preliminaries

As was shown in Chapter 2, the transition density of the Goldstein–Kac telegraph process X(t) is the solution of the Cauchy problem

∂²p(x,t)/∂t² + 2λ ∂p(x,t)/∂t = c² ∂²p(x,t)/∂x²,    (9.1.1)

p(x,t)|_{t=0} = δ(x),    ∂p(x,t)/∂t|_{t=0} = 0.    (9.1.2)

This means that the transition density p(x,t) is the fundamental solution (the Green's function) of the telegraph equation (9.1.1), having the form:

p(x,t) = (e^{−λt}/2) [δ(ct + x) + δ(ct − x)]
  + (λe^{−λt}/(2c)) [ I₀((λ/c)√(c²t² − x²)) + (ct/√(c²t² − x²)) I₁((λ/c)√(c²t² − x²)) ] Θ(ct − |x|),
  x ∈ (−∞, ∞),  t > 0,    (9.1.3)

where I₀(z) and I₁(z) are the modified Bessel functions of zero and first orders, respectively, and Θ(x) is the Heaviside unit-step function. The telegraph equation can also be considered in a more general context as a particular case of the Maxwell equation (see [207, Section 2, Subsection 6]).

The first term of (9.1.3) represents the density of the singular component of the distribution of X(t) (in the sense of generalized functions) concentrated at the two terminal points ±ct of the interval [−ct, ct], while the second term is the density of the absolutely continuous component of the distribution concentrated in the open interval (−ct, ct). The first initial condition in (9.1.2) means that, at the initial time instant t = 0, the density is entirely concentrated at the origin, while the second one corresponds to an initially vanishing diffusive flux. Under the Kac scaling condition

c → ∞,    λ → ∞,    c²/λ → ρ²,    (9.1.4)

the telegraph equation (9.1.1) turns into the heat equation

∂u(x,t)/∂t = (ρ²/2) ∂²u(x,t)/∂x²    (9.1.5)

and the transition density (9.1.3) transforms into the fundamental solution of the parabolic heat equation (9.1.5), that is, into the transition density of the one-dimensional homogeneous Brownian motion with zero drift and diffusion coefficient ρ². Therefore, Kac's condition (9.1.4) can be interpreted as the fast diffusion condition. The Kac scaling condition (9.1.4) remains valid in the multidimensional case as well, providing the transformation of the hyperparabolic operator into the heat operator of the respective dimension (see Theorem 4.10.2). Moreover, under Kac's condition (9.1.4), the transition density of the symmetric Markov random flight in the Euclidean space R^m of arbitrary dimension m ≥ 2 converges to the transition density of the m-dimensional homogeneous

Applied Models

347

Brownian motion with zero drift and diffusion coefficient 2ρ²/m (see Theorem 4.8.1). All these facts lead us to the conclusion that the scaling condition (9.1.4) is universal in the space of arbitrary dimension and, therefore, it can indeed be treated as the fast diffusion condition.

While fast diffusion processes are of a certain interest in studying many real phenomena, there are numerous important dynamic processes that can be interpreted as slow diffusion ones. Such processes are characterized first of all by a slow or superslow speed of propagation, and their evolution can last a very long time (months, years and even decades). They are of special importance due to their numerous applications in physics, chemistry, biology, environmental science and some other fields (see, for instance, [3, 25, 53, 55, 56, 66, 159, 169, 204, 208, 212] and the bibliographies therein).

We present a conception of slow diffusion processes based on the theory of Markov random flights in Euclidean spaces developed in the previous chapters. This approach relies on the fact that the main probabilistic characteristics of such processes (transition densities, for instance) in some spaces of low dimensions are obtained in explicit forms and can, therefore, be used for deriving the stationary distributions of slow diffusion processes. The crucial point is to determine the conditions on the parameters of the motion under which the random flight generates a slow diffusion process (just as the Kac condition (9.1.4) generates a fast diffusion one). However, in contrast to the fast diffusion condition (9.1.4), we should determine the appropriate conditions not only for the speed of motion and the intensity of switchings, but also for the time variable. The slow speed of propagation implies that one should consider the process on large time intervals, and this leads to the respective stationary distributions.
Note also that a special time rescaling was used in [66, 159, 212] to interpret the finite-velocity random walks as a diffusion model in physics, ecology and biology.
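A Monte Carlo sketch of the Goldstein–Kac telegraph process can illustrate these preliminaries (it assumes the standard second-moment formula E X²(t) = (c²/λ)(t − (1 − e^{−2λt})/(2λ)) from the moment analysis of Chapter 2; helper names are ours):

```python
import math, random

def telegraph(t, lam, c, rng):
    # One sample path of the Goldstein-Kac telegraph process at time t:
    # speed c, direction +-1 flipped at the epochs of a Poisson(lam) process.
    pos, s, d = 0.0, 0.0, rng.choice((-1.0, 1.0))
    while True:
        tau = rng.expovariate(lam)
        if s + tau >= t:
            return pos + d * c * (t - s)
        pos += d * c * tau
        s += tau
        d = -d

rng = random.Random(12345)
lam, c, t, n = 1.0, 1.0, 2.0, 40000
emp = sum(telegraph(t, lam, c, rng) ** 2 for _ in range(n)) / n
theo = c ** 2 / lam * (t - (1 - math.exp(-2 * lam * t)) / (2 * lam))
print(emp, theo)  # close for large n
```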

9.1.2

Slow diffusion condition

The multidimensional counterpart of the Goldstein–Kac telegraph process is the symmetric Markov random flight X(t) = (X₁(t), ..., X_m(t)), m ≥ 2, described in Section 4.1. We are interested in the conditions under which the m-dimensional Markov random flight X(t) can serve as an appropriate mathematical model for a slow diffusion process D(t) in the Euclidean space R^m, m ≥ 2.

First, since D(t) has a slow speed of propagation, the random flight X(t) must have a slow speed too. Therefore, we should assume that the speed c of the process X(t) tends to zero. Simple reasoning shows that the condition λ → 0 follows from the condition c → 0. Indeed, there are only three logical possibilities: λ tends to infinity, λ tends to some finite limit, or λ tends to zero. If λ → ∞ then, in view of the condition c → 0, the particle cannot leave the origin (i.e., the starting point) and, therefore, there is no diffusion of any kind in this case. If λ tends to some finite limit then, for the same reason, the particle cannot leave some restricted local neighbourhood of the origin, and this is also a degenerate case of diffusion. Thus, we can conclude that the only suitable condition providing an appropriate diffusion is λ → 0.

It is clear that, in order to obtain important characteristics of the slow diffusion process D(t), we should consider the random flight X(t) for large time t, that is, for t → ∞. This implies that we are interested in the stationary distribution of D(t). Since the parameters λ and c of the Markov random flight X(t) are connected with each other through time, namely λ is the mean number of changes of direction per unit of time and c is the distance passed per unit of time, we should assume that the products λt and ct tend to some finite limits. Note that λt is the mean number of changes of direction that have occurred up to time instant t, while ct is the distance passed by time instant t.

348

Markov Random Flights

All these reasonings lead us to the following slow diffusion condition (SDC) (in fact, a set of conditions):
\[
\lambda \to 0, \qquad c \to 0, \qquad t \to \infty, \qquad \lambda t \to a > 0, \qquad ct \to \varrho > 0. \tag{9.1.6}
\]

Condition (9.1.6) has a very clear physical sense. A slow diffusion process D(t) can be simulated by the symmetric Markov random flight X(t) with small speed and small intensity of switchings. This smallness of the parameters c and λ of X(t) implies that we should consider the process on very long time intervals, on which its probabilistic characteristics tend to the stationary ones. Condition (9.1.6) also implies that there should be a balance between the mean number of switchings occurred up to time t and the distance passed by this instant, and that this balance should be independent of time. This conclusion follows from the equality λt/(ct) = λ/c → a/ϱ. Note also that, by defining the quantities a = λt, κ = λ/c, ξ = x/(ct), one could introduce an invariant rescaling of the transition density (9.1.3) of the telegraph process that can be interpreted physically as a 'convective rescaling' via the lumped space-time variable ξ, accounting for the finite propagation velocity. In the next subsection we will show that the SDC (9.1.6) indeed leads to stationary distributions and, therefore, such random flights can be applied for modeling slow diffusion processes in Euclidean spaces of arbitrary dimension.
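The balance expressed by the SDC can be checked empirically. The following sketch (an illustration only, not part of the book's derivation; all parameter values are chosen arbitrarily) simulates the Goldstein-Kac telegraph process with small λ and c over a long horizon, with λt = a and ct = ϱ held fixed, and verifies two features of the limit: the whole distribution sits in [−ϱ, ϱ], and the atoms at ±ϱ carry the probability e^{−a} of making no reversal at all:

```python
import math
import random

def telegraph_position(c, lam, t, rng):
    """Goldstein-Kac telegraph process at time t: the particle moves at
    speed c and reverses direction at the events of a Poisson(lam) process."""
    x, v, s = 0.0, rng.choice([-1.0, 1.0]), 0.0
    while True:
        w = rng.expovariate(lam)          # waiting time to the next reversal
        if s + w >= t:
            return x + v * c * (t - s)
        x += v * c * w
        v = -v
        s += w

# Slow diffusion condition: lambda -> 0, c -> 0, t -> infinity
# with lambda*t = a and c*t = rho (here a = 1, rho = 4).
a, rho, t = 1.0, 4.0, 1.0e4
lam, c = a / t, rho / t

rng = random.Random(1)
sample = [telegraph_position(c, lam, t, rng) for _ in range(20000)]

# All mass lies in [-rho, rho]; the atoms at +/-rho (no reversal at all)
# together carry probability exp(-a).
assert all(abs(x) <= rho + 1e-9 for x in sample)
atom = sum(1 for x in sample if abs(x) > rho - 1e-9) / len(sample)
assert abs(atom - math.exp(-a)) < 0.02
```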

9.1.3 Stationary densities in low dimensions

Let, as above, p(x, t), x ∈ R^m, t > 0, denote the transition density of the symmetric Markov random flight X(t). In this subsection we derive, under the SDC (9.1.6), the densities
\[
q(x) = \lim_{\substack{c,\,\lambda\to 0,\; t\to\infty \\ \lambda t\to a,\; ct\to\varrho}} p(x,t)
\]
of the stationary distributions of X(t) in some Euclidean spaces of low dimensions; these can be treated as the stationary densities of slow diffusion processes. The derivation is based on the closed-form expressions for the transition densities of X(t) obtained in previous chapters. In the space R³ the stationary density will be presented in an asymptotic form.

Stationary Density on the Line. The one-dimensional Markov random flight is represented by the Goldstein-Kac telegraph process whose transition density is given by (9.1.3). Under the SDC (9.1.6), density (9.1.3) transforms into the stationary density:
\[
\begin{aligned}
q(x) &= \frac{e^{-a}}{2}\,\bigl[\delta(\varrho+x)+\delta(\varrho-x)\bigr] \\
&\quad + \frac{a e^{-a}}{2\varrho}\left[ I_0\!\left(\frac{a}{\varrho}\sqrt{\varrho^2-x^2}\right)
+ \frac{\varrho}{\sqrt{\varrho^2-x^2}}\, I_1\!\left(\frac{a}{\varrho}\sqrt{\varrho^2-x^2}\right)\right]\Theta(\varrho-|x|),
\end{aligned}\tag{9.1.7}
\]
x ∈ (−∞, ∞),  a > 0,  ϱ > 0.

The shape of the absolutely continuous part of the stationary density q(x) given by (9.1.7) is presented in Fig. 9.1. One can see that, for increasing a and fixed ϱ, this curve becomes more and more peaked. On the other hand, for increasing ϱ and fixed a, the curve flattens and its peak decreases.
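A numerical sanity check of (9.1.7) can be sketched as follows (assuming SciPy; the parameter values match Fig. 9.1): the absolutely continuous part must carry mass 1 − e^{−a}, complementing the two atoms of weight e^{−a}/2 at x = ±ϱ:

```python
import math
from scipy.special import i0, i1
from scipy.integrate import quad

def q_ac(x, a, rho):
    """Absolutely continuous part of the stationary density (9.1.7)."""
    if abs(x) >= rho:
        return 0.0
    s = math.sqrt(rho**2 - x**2)
    return (a * math.exp(-a) / (2 * rho)) * (i0(a * s / rho)
                                             + (rho / s) * i1(a * s / rho))

a, rho = 6.0, 4.0

# Substituting x = rho*sin(theta) removes the inverse-square-root
# endpoint singularity; the AC mass must equal 1 - exp(-a).
mass, _ = quad(lambda th: q_ac(rho * math.sin(th), a, rho) * rho * math.cos(th),
               -math.pi / 2, math.pi / 2)
assert abs(mass + math.exp(-a) - 1.0) < 1e-6
```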

Applied Models

349

Figure 9.1: The shape of stationary density (9.1.7) (for ϱ = 4, a = 6, |x| < 4)

Stationary Density in the Plane. The transition density of the symmetric Markov random flight X(t) in the Euclidean plane R² is given by the formula (see (5.2.5)):
\[
p(x,t) = \frac{e^{-\lambda t}}{2\pi ct}\,\delta(c^2t^2-\|x\|^2)
+ \frac{\lambda}{2\pi c}\,\frac{\exp\!\left(-\lambda t + \frac{\lambda}{c}\sqrt{c^2t^2-\|x\|^2}\right)}{\sqrt{c^2t^2-\|x\|^2}}\,\Theta(ct-\|x\|), \tag{9.1.8}
\]
x = (x₁, x₂) ∈ R²,  ‖x‖ = (x₁² + x₂²)^{1/2},  t > 0.

Under the SDC (9.1.6), density (9.1.8) transforms into the stationary density:
\[
q(x) = \frac{e^{-a}}{2\pi\varrho}\,\delta(\varrho^2-\|x\|^2)
+ \frac{a}{2\pi\varrho}\,\frac{\exp\!\left(-a + \frac{a}{\varrho}\sqrt{\varrho^2-\|x\|^2}\right)}{\sqrt{\varrho^2-\|x\|^2}}\,\Theta(\varrho-\|x\|), \tag{9.1.9}
\]
x ∈ R²,  a > 0,  ϱ > 0.

The shape of the section of the absolutely continuous part of stationary density q(x) given by (9.1.9) is plotted in Fig. 9.2. It shows the behaviour of q(x) as the distance ‖x‖ from the origin 0 ∈ R² grows. We see that q(x) has a local maximum at the origin 0 and decreases as ‖x‖ grows. Note also that density q(x) becomes infinite near the boundary ‖x‖ = ϱ, that is, lim_{‖x‖→ϱ−0} q(x) = ∞. This follows from the form of stationary density (9.1.9).
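Both features of (9.1.9) — the mass balance and the blow-up on the boundary — can be verified numerically; a sketch assuming SciPy, with the parameters of Fig. 9.2:

```python
import math
from scipy.integrate import quad

def q_ac(r, a, rho):
    """AC part of the planar stationary density (9.1.9) at distance r = ||x||."""
    if r >= rho:
        return 0.0
    s = math.sqrt(rho**2 - r**2)
    return (a / (2 * math.pi * rho)) * math.exp(-a + a * s / rho) / s

a, rho = 6.0, 4.0

# Radial substitution r = rho*sin(theta): total AC mass over R^2 is
# 2*pi * int_0^rho q_ac(r) r dr and must equal 1 - exp(-a).
mass, _ = quad(lambda th: 2 * math.pi * q_ac(rho * math.sin(th), a, rho)
               * rho * math.sin(th) * rho * math.cos(th), 0.0, math.pi / 2)
assert abs(mass - (1 - math.exp(-a))) < 1e-7

# Decreasing away from the origin, yet infinite on the boundary.
assert q_ac(0.0, a, rho) > q_ac(2.0, a, rho) > q_ac(3.5, a, rho)
assert q_ac(rho - 1e-12, a, rho) > q_ac(0.0, a, rho)
```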

Stationary Density in the Space R⁴. The transition density of the symmetric Markov random flight X(t) in the four-dimensional space R⁴ has the form (see (7.2.4)):
\[
p(x,t) = \frac{e^{-\lambda t}}{2\pi^2 (ct)^3}\,\delta(c^2t^2-\|x\|^2)
+ \frac{\lambda t}{\pi^2 (ct)^4}\left[2 + \lambda t\left(1-\frac{\|x\|^2}{c^2t^2}\right)\right]
\exp\!\left(-\frac{\lambda}{c^2 t}\,\|x\|^2\right)\Theta(ct-\|x\|), \tag{9.1.10}
\]


Figure 9.2: The shape of the section of stationary density (9.1.9) (for ϱ = 4, a = 6, ‖x‖ < 4)

x = (x₁, x₂, x₃, x₄) ∈ R⁴,  ‖x‖ = (x₁² + x₂² + x₃² + x₄²)^{1/2},  t > 0.

Under the SDC (9.1.6), density (9.1.10) transforms into the stationary density:
\[
q(x) = \frac{e^{-a}}{2\pi^2 \varrho^3}\,\delta(\varrho^2-\|x\|^2)
+ \frac{a}{\pi^2 \varrho^4}\left[2 + a\left(1-\frac{\|x\|^2}{\varrho^2}\right)\right]
\exp\!\left(-\frac{a}{\varrho^2}\,\|x\|^2\right)\Theta(\varrho-\|x\|), \tag{9.1.11}
\]
x ∈ R⁴,  a > 0,  ϱ > 0.

The shape of the section of the absolutely continuous part of stationary density q(x) given by formula (9.1.11) is presented in Fig. 9.3.
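As a numerical check on (9.1.11) (a sketch assuming SciPy, with the parameters of Fig. 9.3): the absolutely continuous part must carry mass 1 − e^{−a}, and its radial profile should decrease monotonically from the origin to the boundary:

```python
import math
from scipy.integrate import quad

def q_ac(r, a, rho):
    """AC part of the stationary density (9.1.11) in R^4 at ||x|| = r."""
    if r >= rho:
        return 0.0
    return (a / (math.pi**2 * rho**4)) * (2 + a * (1 - r**2 / rho**2)) \
           * math.exp(-a * r**2 / rho**2)

a, rho = 6.0, 4.0

# Surface area of the sphere of radius r in R^4 is 2*pi^2*r^3, so the
# AC mass is int_0^rho q_ac(r) * 2*pi^2*r^3 dr; it must equal 1 - exp(-a).
mass, _ = quad(lambda r: q_ac(r, a, rho) * 2 * math.pi**2 * r**3, 0.0, rho)
assert abs(mass - (1 - math.exp(-a))) < 1e-8

# Maximum at the origin, smooth decrease, minimum on the boundary.
vals = [q_ac(r, a, rho) for r in (0.0, 1.0, 2.0, 3.0, 3.999)]
assert all(v1 > v2 for v1, v2 in zip(vals, vals[1:]))
```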

We see that stationary density (9.1.11) takes its maximal value at the origin and decreases smoothly up to the boundary. It is continuous and takes its minimal value on the boundary of the diffusion area.

Stationary Density in the Space R⁶. The transition density of the symmetric Markov random flight X(t) in the six-dimensional space R⁶ has the form (see (8.2.6)):
\[
\begin{aligned}
p(x,t) &= \frac{e^{-\lambda t}}{\pi^3 (ct)^5}\,\delta(c^2t^2-\|x\|^2)
+ \Biggl[\frac{16\lambda t\, e^{-\lambda t}}{\pi^3 (ct)^6}\left(1-\frac{5}{6}\,\frac{\|x\|^2}{c^2t^2}\right) \\
&\quad + \frac{e^{-\lambda t}}{2\pi^3 (ct)^6}\sum_{n=2}^{\infty} (\lambda t)^n (n+1)!
\sum_{k=0}^{n+1}\frac{(k+1)(k+2)(n+2k+1)}{3^k\,(n-k+1)!\,(n+k-2)!} \\
&\qquad\times F\!\left(-(n+k-2),\,k+3;\,3;\,\frac{\|x\|^2}{c^2t^2}\right)\Biggr]\Theta(ct-\|x\|),
\end{aligned}\tag{9.1.12}
\]
x = (x₁, x₂, x₃, x₄, x₅, x₆) ∈ R⁶,  ‖x‖ = (x₁² + x₂² + x₃² + x₄² + x₅² + x₆²)^{1/2},  t > 0,


Figure 9.3: The shape of the section of stationary density (9.1.11) (for ϱ = 4, a = 6, ‖x‖ < 4)

where
\[
F(\alpha,\beta;\gamma;z) \equiv {}_2F_1(\alpha,\beta;\gamma;z)
= \sum_{k=0}^{\infty}\frac{(\alpha)_k(\beta)_k}{(\gamma)_k}\,\frac{z^k}{k!}
\]
is the Gauss hypergeometric function. Under the SDC (9.1.6), density (9.1.12) transforms into the stationary density:
\[
\begin{aligned}
q(x) &= \frac{e^{-a}}{\pi^3 \varrho^5}\,\delta(\varrho^2-\|x\|^2)
+ \Biggl[\frac{16a\, e^{-a}}{\pi^3 \varrho^6}\left(1-\frac{5}{6}\,\frac{\|x\|^2}{\varrho^2}\right) \\
&\quad + \frac{e^{-a}}{2\pi^3 \varrho^6}\sum_{n=2}^{\infty} a^n (n+1)!
\sum_{k=0}^{n+1}\frac{(k+1)(k+2)(n+2k+1)}{3^k\,(n-k+1)!\,(n+k-2)!} \\
&\qquad\times F\!\left(-(n+k-2),\,k+3;\,3;\,\frac{\|x\|^2}{\varrho^2}\right)\Biggr]\Theta(\varrho-\|x\|),
\end{aligned}\tag{9.1.13}
\]
x ∈ R⁶,  a > 0,  ϱ > 0.

The shape of the section of the absolutely continuous part of stationary density q(x) given by (9.1.13) is plotted in Fig. 9.4.
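Note that in the series (9.1.13) the first argument −(n + k − 2) of each F is a non-positive integer, so every hypergeometric factor is a terminating polynomial in ‖x‖²/ϱ² and can be evaluated exactly. A sketch assuming SciPy (the parameter choice m + 3 below is only an illustrative pairing, not the exact index pattern of the series):

```python
from scipy.special import hyp2f1

def F_series(alpha, beta, gamma, z, max_terms=200):
    """Partial sum of the defining Gauss hypergeometric series
    F(alpha, beta; gamma; z) = sum_k (alpha)_k (beta)_k/(gamma)_k * z^k/k!,
    computed with the term-to-term recurrence."""
    total, term = 0.0, 1.0
    for k in range(max_terms):
        total += term
        term *= (alpha + k) * (beta + k) / ((gamma + k) * (k + 1)) * z
        if term == 0.0:                    # series terminated
            break
    return total

# With a non-positive integer first argument the series terminates and
# agrees with SciPy's hyp2f1 to rounding error, even at z = 1.
for m in range(6):                         # m plays the role of n + k - 2
    for z in (0.0, 0.3, 0.7, 1.0):
        assert abs(hyp2f1(-m, m + 3, 3, z) - F_series(-m, m + 3, 3, z)) < 1e-8
```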

This figure shows that the six-dimensional stationary density (9.1.13) behaves very similarly to its four-dimensional counterpart (see Fig. 9.3). In other words, it takes its maximal value at the origin and decreases smoothly up to the boundary. On the boundary it is continuous and takes its minimal value.

Asymptotic Stationary Density in the Space R³. The above stationary densities of the Markov random flights in the spaces R¹, R², R⁴ and R⁶ were derived from the explicit forms of their transition densities obtained in previous chapters. As far as the three-dimensional Markov random flight is concerned, the situation is more complicated


because its explicit distribution has not been obtained so far. Only the Laplace-Fourier transform of the transition density of the symmetric Markov random flight in the space R³ is known (see formula (4.6.4)). However, the problem of inverting this Laplace-Fourier transform and obtaining a closed-form expression for the density seems impracticable. That is why we suggest another approach, based on an asymptotic formula for the transition density of the three-dimensional symmetric Markov random flight, that enables us to obtain an asymptotic stationary density of the process.

Figure 9.4: The shape of the section of stationary density (9.1.13) (for ϱ = 4, a = 6, ‖x‖ < 4)

Let X(t) = (X₁(t), X₂(t), X₃(t)), t > 0, be the symmetric Markov random flight in the Euclidean space R³ with constant speed c > 0 and intensity of switchings λ > 0. Let p(x, t), x ∈ R³, t > 0, denote the transition density of X(t). According to Theorem 6.4.3, for arbitrary t > 0, the following asymptotic relation holds:
\[
\begin{aligned}
p(x,t) &= \frac{e^{-\lambda t}}{4\pi (ct)^2}\,\delta(c^2t^2-\|x\|^2)
+ e^{-\lambda t}\left[\frac{\lambda t}{4\pi (ct)^2\,\|x\|}\,
\ln\!\left(\frac{ct+\|x\|}{ct-\|x\|}\right)\right. \\
&\quad\left. + \frac{(\lambda t)^2}{8\pi (ct)^3}
+ \frac{(\lambda t)^3}{2\pi^2 (ct)^2\sqrt{c^2t^2-\|x\|^2}}\right]\Theta(ct-\|x\|) + o((\lambda t)^3),
\end{aligned}\tag{9.1.14}
\]
x = (x₁, x₂, x₃) ∈ R³,  ‖x‖ = (x₁² + x₂² + x₃²)^{1/2},  t > 0.

Therefore, under the SDC (9.1.6) and the additional condition 0 < a ≪ 1, density (9.1.14) transforms into the asymptotic stationary density:
\[
\begin{aligned}
q(x) &= \frac{e^{-a}}{4\pi \varrho^2}\,\delta(\varrho^2-\|x\|^2)
+ e^{-a}\left[\frac{a}{4\pi \varrho^2\,\|x\|}\,
\ln\!\left(\frac{\varrho+\|x\|}{\varrho-\|x\|}\right)\right. \\
&\quad\left. + \frac{a^2}{8\pi \varrho^3}
+ \frac{a^3}{2\pi^2 \varrho^2\sqrt{\varrho^2-\|x\|^2}}\right]\Theta(\varrho-\|x\|) + o(a^3),
\end{aligned}\tag{9.1.15}
\]
x ∈ R³,  0 < a ≪ 1,  ϱ > 0.

The shape of the section of the absolutely continuous part of asymptotic stationary


density q(x) given by (9.1.15) (for ϱ = 4, a = 0.01, ‖x‖ < 4) is presented in Fig. 9.5. The error of the calculations in this graph does not exceed 10⁻⁶.
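The radial profile of the absolutely continuous part of (9.1.15) can be sketched numerically (parameters as in the graph; only the Python standard library is needed). Each of the three terms in the bracket is nondecreasing in ‖x‖, so the profile is minimal at the origin and blows up at the boundary:

```python
import math

def q_ac(r, a, rho):
    """AC part of the asymptotic stationary density (9.1.15) in R^3."""
    if not 0.0 < r < rho:
        raise ValueError("valid for 0 < r < rho")
    return math.exp(-a) * (a / (4 * math.pi * rho**2 * r)
                           * math.log((rho + r) / (rho - r))
                           + a**2 / (8 * math.pi * rho**3)
                           + a**3 / (2 * math.pi**2 * rho**2
                                     * math.sqrt(rho**2 - r**2)))

a, rho = 0.01, 4.0

# In contrast with R^4 and R^6: minimal near the origin, monotonically
# growing, and blowing up as ||x|| approaches the boundary rho.
grid = [0.001, 1.0, 2.0, 3.0, 3.9, 3.999999]
vals = [q_ac(r, a, rho) for r in grid]
assert all(v1 < v2 for v1, v2 in zip(vals, vals[1:]))
```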

Figure 9.5: The shape of the section of stationary density (9.1.15) (for ϱ = 5, a = 0.01, ‖x‖ < 4)

We see an essential difference between the behaviour of the three-dimensional density and those in the even-dimensional spaces R⁴ and R⁶. Both densities (9.1.11) and (9.1.13) in these even-dimensional spaces behave very similarly, namely, they are mostly concentrated near the origin and decrease as the distance from the origin grows. Near the boundary ‖x‖ = ϱ these densities take minimal values. One can also demonstrate that the stationary densities (9.1.11) and (9.1.13) in the spaces R⁴ and R⁶ keep the same behaviour for an arbitrary value of the parameter a, including a ≪ 1. By varying the parameter a, we can change the steepness of the shape of the densities, but not the general behaviour just described.

In contrast to these even-dimensional spaces R⁴ and R⁶, the asymptotic stationary density (9.1.15) in the space R³ takes its minimal value at the origin and grows slowly as the distance from the origin grows. Thus, one can say that there is a rarefaction area near the origin (at least for a ≪ 1). When approaching the boundary of the diffusion area, the density begins to rise sharply, and it becomes infinite on the boundary. Therefore, one can conclude that the overwhelming part of density (9.1.15) is concentrated near the boundary. This behaviour is similar to that of the fundamental solution (the Green's function) of the three-dimensional wave equation, which is concentrated on the surface of the diffusion sphere (see, for instance, [207, section 11, subsection 7]).

It is interesting to compare the behaviour of the two-dimensional stationary density (9.1.9) in the plane R² with those in the other spaces. We see that, similarly to the densities in the even-dimensional spaces R⁴ and R⁶, it has a local maximum at the origin and decreases as the distance from the origin grows (see Fig. 9.2). However, in contrast to these even-dimensional spaces, density (9.1.9) becomes infinite on the boundary, and this feature is similar to that of the three-dimensional density (9.1.15) (see Fig. 9.5). Moreover, the asymptotic behaviour of the two-dimensional density (9.1.9) is quite similar to that of


the three-dimensional density (9.1.15). This means that, for a ≪ 1, the shape of density (9.1.9) is drastically different from that presented in Fig. 9.2 and looks very similar to the shape of the asymptotic density (9.1.15) plotted in Fig. 9.5. This interesting and somewhat unexpected fact can be explained by the presence of a discontinuous term in the asymptotic decomposition of density (9.1.9) with respect to the powers of the parameter a, which is determined by the discontinuous term in the series expansion of density (9.1.8) corresponding to the single change of direction (see (5.1.14)). This property is peculiar to the densities in the spaces R² and R³ only, because in all other Euclidean spaces R^m of arbitrary dimension m ≥ 4 the transition densities of the isotropic Markov random flights are continuous on the boundary of the diffusion area (see Remark 4.9.2).

Remark 9.1.1. One can see that the formal replacements λt =: a and ct =: ϱ in the transition densities (9.1.3), (9.1.8), (9.1.10), (9.1.12) and (9.1.14) yield the stationary densities (9.1.7), (9.1.9), (9.1.11), (9.1.13) and (9.1.15), respectively. However, the slow diffusion condition (9.1.6) has a deeper mathematical sense than these formal replacements. In the SDC (9.1.6), regarding, for example, the conditions λ → 0, t → ∞, λt → a, we deal with two sequences {λ} and {t}, the first of which tends to zero and the second of which goes to infinity in such a way that their product tends to a finite limit. The same concerns the second pair of sequences {c} and {t}. The SDC (9.1.6) is, therefore, treated as a set of limiting conditions connected with each other through time, and each stationary density q(x) appears as the result of such a passage to the limit from the respective transition density. This implies the necessity of justifying such a passage to the limit in these densities. To justify this important operation, we notice that all the transition densities (9.1.3), (9.1.8), (9.1.10), (9.1.12) and (9.1.14) are absolutely continuous in the interior of their diffusion balls B^m_{ct} and, therefore, they are uniformly continuous and uniformly bounded in any closed subball B^m_{ct−ε} ⊂ B^m_{ct} for arbitrarily small ε > 0. From this fact the justification of the passage to the limit in these transition densities under the SDC (9.1.6) immediately follows. Thus, the SDC (9.1.6) means the passage to the limit defining the stationary density q(x) above, not a simple change of notation, as it might seem.

Remark 9.1.2. The presented conception of slow diffusion processes based on Markov random flights gives a tool for describing their distributions on long time intervals in the Euclidean spaces of low, most important, dimensions. The core of this conception is the slow diffusion condition (9.1.6) connecting the speed of propagation and the intensity of switchings of the process with the time interval of its evolution. On long time intervals, such distributions tend to the respective stationary ones, and this is true in any dimension (asymptotically in the space R³). The slow diffusion condition (9.1.6) can, therefore, be considered as the slow-velocity counterpart of the classical Kac's fast diffusion condition (9.1.4).

9.2 Fluctuations of water level in reservoir

Consider the well-known hydrology problem of describing the process of fluctuation of the water level in a reservoir. The water level fluctuates due to many factors (precipitation, underground sources, inflowing and outflowing rivers, evaporation, industrial and agricultural water withdrawal, etc.). The traditional approach consists in the attempt to take into account all possible factors and the degree of impact of each of them. This leads to a nonlinear system of several differential equations with variable coefficients and a number of


initial and boundary conditions. The more factors are taken into account, the more equations this system contains. Analytical solution of the respective initial-boundary problem for such a system of equations is impossible, so numerical methods are used. Moreover, a number of empirical assumptions, not always sufficiently justified, are made to derive the equations. For example, the assumption of the determinism of all these factors seems highly doubtful. Besides, solving such a large system of equations with initial and boundary conditions, even by numerical methods, is a very difficult computational task.

A good alternative to the traditional approach can be given in the framework of the theory of Markov random flights, namely, the Goldstein-Kac telegraph process studied in Chapter 2 and its numerous generalizations. The ideological basis of such an alternative is the rejection of the attempt to explicitly take into account the impact of all possible factors separately, as well as of the assumption of their determinism. Instead, it is proposed that these factors be considered random and that only the result of their combined impact, namely, the water level, be taken into account.

Under a stationary regime of functioning of the reservoir, its water level fluctuates randomly around some mean value. By interpreting the water level as a point that moves randomly along a (vertical) line and alternates between two possible directions (up and down), we arrive at the classical stochastic Goldstein-Kac model. This motion is driven by the telegraph equation in (9.1.1) or its numerous generalizations (for example, for the motion with drift). The density of the distribution of the water level is given by the absolutely continuous part of (9.1.3) (the singular part of the density can, obviously, be omitted in this case). Such an approach also allows one to pose and solve the respective initial and boundary problems describing the process of fluctuation of the water level.

The parameters c and λ (more precisely, their statistical estimates) can be obtained by analysing the results of long-term observations (which usually last many years and even decades) by the methods of mathematical statistics. If the up and down movements occur at different speeds and intensities of switchings, then one can apply formulas for the non-symmetric counterpart of the telegraph process (see Section 2.11, [9] or [115, Section 4.1, Formula 4.1.15]). Moreover, explicit formulas for the distributions of the maximum and the first-passage time of the telegraph process are known (see [44, 141, 142]). This enables one to evaluate the probabilities of the most important events, such as the probability of attaining the emergency levels ('drought-inundation'), the probability of flooding a given land area at least once during a certain period of time, etc. This approach was applied for modeling the process of water level fluctuation in the Dubossary reservoir of the Dniester River in the framework of the project 'Development and analysis of mathematical models of some environmental systems', 1993 (Chapter 4: 'An evolutionary model of the hydrological balance of the reservoir').
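The statistical estimation of c and λ from observations can be sketched as follows (an illustration with made-up parameter values, not the procedure used in the cited project): from a discretely sampled telegraph-type path, the mean absolute increment per unit of time estimates the speed c, and the rate of sign changes of the increments estimates the switching intensity λ:

```python
import random

def telegraph_path(c, lam, t_max, dt, rng):
    """Sampled path of a telegraph-type fluctuation: speed c, direction
    reversals at the epochs of a Poisson process of rate lam."""
    xs, x, v = [0.0], 0.0, rng.choice([-1.0, 1.0])
    to_switch = rng.expovariate(lam)
    for _ in range(int(t_max / dt)):
        rem = dt
        while to_switch < rem:          # reversals inside this sampling step
            x += v * c * to_switch
            rem -= to_switch
            v = -v
            to_switch = rng.expovariate(lam)
        x += v * c * rem
        to_switch -= rem
        xs.append(x)
    return xs

# "Long-term observations": recover c and lam from the sampled path.
c, lam, t_max, dt = 2.0, 0.5, 5000.0, 0.01
rng = random.Random(7)
xs = telegraph_path(c, lam, t_max, dt, rng)
incr = [b - a for a, b in zip(xs, xs[1:])]

c_hat = sum(abs(d) for d in incr) / t_max                  # mean speed
switches = sum(1 for d1, d2 in zip(incr, incr[1:]) if d1 * d2 < 0)
lam_hat = switches / t_max                                 # reversal rate

assert abs(c_hat - c) / c < 0.02
assert abs(lam_hat - lam) / lam < 0.1
```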

9.3 Pollution model

In this section we present an approach to modeling the process of soil pollution from a stationary source. This approach is based on the explicit distribution of the planar Markov random flight obtained in Chapter 5.

The soil surface pollution can be imagined as follows. A stationary source (a pipe of an industrial enterprise, for example) emits polluting particles that drop onto the soil. The particles have random masses and, at the moment each particle is emitted, the strength and direction of the wind are also random. While flying, the particle makes chaotic random movements. All these random factors determine the place where the particle falls on the


soil. The time of active evolution of the particle, that is, the time from the moment of its emission to the moment it falls onto the soil, we call the particle's lifetime; it is a positive random variable. The degree of pollution of a certain part of the territory around the source at some given time moment is directly proportional to the number of particles that have fallen onto this site by this time. This interpretation allows us to consider the pollution process as a planar Markov random flight with a µ-exponentially distributed random lifetime. If the wind rose near the source is supposed to be uniform then, in view of 5.2.1, the density of the pollution at distance ‖x‖ from the source is given by the formula:
\[
\begin{aligned}
p(x) &= \int_0^{\infty} \frac{\lambda}{2\pi c}\,
\frac{\exp\!\left(-\lambda\tau + \frac{\lambda}{c}\sqrt{c^2\tau^2-\|x\|^2}\right)}{\sqrt{c^2\tau^2-\|x\|^2}}\,
\Theta(c\tau-\|x\|)\,\mu e^{-\mu\tau}\,d\tau \\
&= \frac{\lambda\mu}{2\pi c}\int_{\|x\|/c}^{\infty} e^{-(\lambda+\mu)\tau}\,
\frac{\exp\!\left(\frac{\lambda}{c}\sqrt{c^2\tau^2-\|x\|^2}\right)}{\sqrt{c^2\tau^2-\|x\|^2}}\,d\tau,
\end{aligned}\tag{9.3.1}
\]
where λ and µ are the parameters of the exponential distributions of the times between changes of direction and of the particle's lifetime, respectively. Evaluation of (9.3.1) is somewhat cumbersome and is left to the reader. We give the final result:
\[
\begin{aligned}
p(x) &= \frac{\lambda\mu}{2\pi c^2}\Biggl[\frac{\lambda}{\lambda+\mu}\,
e^{-(\lambda+\mu)\frac{\|x\|}{c}} + K_0\!\left((\lambda+\mu)\frac{\|x\|}{c}\right) \\
&\quad + \frac{1}{\sqrt{\pi}}\sum_{k=2}^{\infty}\frac{\lambda^k}{k!}\,
\Gamma\!\left(\frac{k+1}{2}\right)\left(\frac{2\|x\|}{c(\lambda+\mu)}\right)^{k/2}
K_{k/2}\!\left((\lambda+\mu)\frac{\|x\|}{c}\right)\Biggr],
\end{aligned}\tag{9.3.2}
\]
where K_ν(z) is the Macdonald function of order ν. The shape of density (9.3.2) is plotted in Fig. 9.6.

We see that, as should be expected, density (9.3.2) takes its maximum value at the origin (that is, the level of pollution is maximal in the neighbourhood of the source), while it decreases nonlinearly as the distance from the source increases. For example, on the unit circumference ‖x‖ = 1, density (9.3.2) takes the value p(x)|_{‖x‖=1} ≈ 0.016345....

These results should be treated as follows. Function (9.3.2) shows how the density of pollution behaves as the distance ‖x‖ from the source grows. To find the concentration of pollution in some planar area, say, in the circle C_r = {x ∈ R² : ‖x‖ < r} of some radius r > 0, one needs to integrate density (9.3.2) over C_r. The obtained value k_r is less than 1 and yields the share of the total mass M(t) of polluting substance emitted by a given time t that has settled in this circle. The product k_r M(t) is the total concentration of polluting substance in C_r. Since the total mass M(t) increases as time t grows, the concentration of polluting substance in C_r increases too. This interpretation allows us to predict the level of pollution in a given planar area on long time intervals.

Moreover, knowledge of the distribution of polluting substance on the surface of a planar area enables us to pose a respective initial-value and/or boundary problem for modeling the process of percolation of the polluting substance into lower soil strata, as described in [193]. In this case, the distribution of polluting substance yields the first initial and/or boundary condition. The same approach can be used in the more general case when the wind rose is not uniform and has another distribution, for example, the von Mises (or circular Gaussian) probability


law determined by formulas (4.12.29) and (4.12.30). One can also consider the case when the particles' lifetime has a distribution different from the exponential one. Clearly, in these cases the analysis becomes much more complicated, and the respective calculations lead to more cumbersome expressions that, apparently, can be evaluated only numerically.

Figure 9.6: The shape of density (9.3.2) (for c = 4, λ = 1, µ = 2)
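Formula (9.3.1) is also easy to evaluate numerically; a sketch assuming SciPy. The substitution τ = (‖x‖/c) cosh s removes the square-root singularity at the lower limit; the code then reproduces the value quoted above at ‖x‖ = 1 and checks that the total mass ∫_{R²} p(x) dx equals λ/(λ+µ), i.e. the probability that a particle changes direction at least once during its lifetime (the singular part of the planar density was omitted in (9.3.1)):

```python
import math
from scipy.integrate import quad

# After tau = (r/c)*cosh(s), formula (9.3.1) becomes
#   p(r) = (lam*mu/(2*pi*c^2)) *
#          int_0^inf exp(-(r/c)*((lam+mu)*cosh(s) - lam*sinh(s))) ds.
def p(r, c, lam, mu):
    def f(s):
        if s > 50.0:          # integrand underflows to 0 long before this
            return 0.0
        return math.exp(-(r / c) * ((lam + mu) * math.cosh(s)
                                    - lam * math.sinh(s)))
    val, _ = quad(f, 0.0, math.inf)
    return lam * mu / (2 * math.pi * c**2) * val

c, lam, mu = 4.0, 1.0, 2.0    # the values used for Fig. 9.6

# Value on the unit circumference quoted in the text.
assert abs(p(1.0, c, lam, mu) - 0.016345) < 5e-4

# Total mass over R^2 equals lam/(lam + mu).
mass, _ = quad(lambda r: 2 * math.pi * r * p(r, c, lam, mu),
               0.0, math.inf, limit=200)
assert abs(mass - lam / (lam + mu)) < 2e-3
```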

9.4 Physical applications

9.4.1 Transport processes

Stochastic motions at finite speed generate finite-velocity transport processes that often arise in various fields of science and technology. This is one of the most important and practically useful features of the Markov random flights studied in this book. The literature devoted to finite-velocity transport has been growing steadily in recent decades. An application of the transport equations in chromatography with a finite speed of signal propagation was presented in [143]. The role of boundary conditions for the transport equation in a finite domain, considered through the case of sorption curves, was discussed in [13]. The occurrence of multiple stationary invariant measures in the presence of potentials in transport models was examined in [58]. A number of other physical examples related to finite-velocity transport can be found in the review article [10].

9.4.2 Relativity effects

The relativistic properties of the Brownian motion were thoroughly examined in [37]. It is natural to study the relativistic properties of the finite-velocity counterpart of Brownian motion, that is, of the Markov random flights in the Euclidean spaces of different dimensions.


Various relativity effects of finite-velocity random motions were considered in many works. The connections between one-dimensional random walks and some physical processes were studied in [17]. Finite-velocity diffusion models with relativity effects were examined in [18]. A probabilistic analysis of the telegraph process with drift by means of relativistic transformations was done in [9]. The connection between telegraph processes and the 1+1 Dirac equation (just as Wiener processes are the stochastic counterpart of the Schrödinger equation) was demonstrated in [50]. The connection of the relativistic finite-velocity random motion with the Jüttner distribution (i.e. the relativistic velocity probability density function generalizing the Maxwellian) was established in [52]. A thorough relativistic analysis of stochastic kinematics, aimed at determining the transformation of the effective diffusivity tensor in inertial frames, was developed in [53]. For the one-dimensional spatial models, it was shown that the effective diffusion coefficient, measured in a frame moving with some velocity relative to the rest frame of the stochastic process, is inversely proportional to the third power of the Lorentz factor. Higher-dimensional processes, connected with the symmetric finite-velocity stochastic motion with a finite number of directions studied in Chapter 3 of the book, were also analyzed, and it was shown that the diffusivity tensor in a moving frame becomes nonisotropic. Relativistic properties of the Markov random flights and their connections with the relativistic properties of Brownian motion are an extremely interesting and still insufficiently studied area of stochastic analysis.

9.4.3 Cosmic microwave background radiation

The one-dimensional telegraph equations examined in Chapter 2 admit various multidimensional generalizations. The natural one is to consider the multidimensional counterpart of the Goldstein-Kac telegraph equation whose differential operator is given by (4.10.4) with the Laplacian of the respective dimension. Although, as was proved in Section 4.10, the multidimensional telegraph equations do not describe the Markov random flights in the Euclidean spaces of higher dimensions, they nevertheless might sometimes be used for modeling hyperbolic diffusion (see, for instance, [137, 138]). An interesting and fruitful generalization is to consider the telegraph equations on manifolds, for example, on the surface of a multidimensional sphere. In [15] the telegraph equation
\[
\frac{1}{c^2}\frac{\partial^2 p(x,t)}{\partial t^2} + \frac{1}{D}\frac{\partial p(x,t)}{\partial t}
= k^2 \Delta_S\, p(x,t), \qquad x = (x_1, x_2, x_3) \in S, \quad t \ge 0, \tag{9.4.1}
\]
with random initial conditions was applied for modeling the process of CMB radiation. The operator Δ_S on the right-hand side of (9.4.1) is the Laplace operator on the surface of a sphere S in the three-dimensional Euclidean space R³, and c > 0, D > 0 and k are constants having a quite definite sense of special astrophysical characteristics. In the unit spherical coordinates, equation (9.4.1) takes the form
\[
\frac{1}{c^2}\frac{\partial^2 p(\theta,\varphi,t)}{\partial t^2}
+ \frac{1}{D}\frac{\partial p(\theta,\varphi,t)}{\partial t}
= k^2 \Delta_{(\theta,\varphi)}\, p(\theta,\varphi,t),
\qquad \theta \in [0,\pi), \quad \varphi \in [0,2\pi), \quad t > 0, \tag{9.4.2}
\]
where
\[
\Delta_{(\theta,\varphi)} = \frac{1}{\sin\theta}\,\frac{\partial}{\partial\theta}
\!\left(\sin\theta\,\frac{\partial}{\partial\theta}\right)
+ \frac{1}{\sin^2\theta}\,\frac{\partial^2}{\partial\varphi^2}
\]
is the Laplace-Beltrami operator on the sphere.
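The structure of equation (9.4.2) can be illustrated by expanding p in spherical harmonics: since Δ_S Y_l^m = −l(l+1) Y_l^m, each harmonic amplitude satisfies a damped-oscillator ODE. A sketch with illustrative (not astrophysical) constants, showing that every non-constant mode relaxes:

```python
import cmath

def mode_roots(l, c, D, k):
    """Characteristic roots of the l-th spherical-harmonic mode of (9.4.2):
    substituting p = A(t) * Y_l^m gives A''/c^2 + A'/D + k^2*l*(l+1)*A = 0,
    i.e. the quadratic s^2/c^2 + s/D + k^2*l*(l+1) = 0."""
    a2, a1, a0 = 1.0 / c**2, 1.0 / D, k**2 * l * (l + 1)
    disc = cmath.sqrt(a1 * a1 - 4 * a2 * a0)
    return ((-a1 + disc) / (2 * a2), (-a1 - disc) / (2 * a2))

c, D, k = 1.0, 0.5, 1.0       # illustrative constants only

for l in range(20):
    r1, r2 = mode_roots(l, c, D, k)
    if l > 0:
        # Every non-constant mode decays (Re(s) < 0); high-l modes
        # oscillate while decaying.
        assert r1.real < 0 and r2.real < 0
    else:
        # l = 0: a conserved mean plus a decaying component.
        assert r1 == 0 and r2.real < 0
```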


The solution to equation (9.4.2) with random initial conditions represents a random field on the sphere S that can be interpreted as a random CMB radiation. For more details, see [15].

Another approach to this problem is to examine three-dimensional telegraph equations whose solutions are restricted to the surface of a sphere. Such equations describe processes that can be referred to as restricted hyperbolic diffusion. This approach was recently developed in [14] for modeling the CMB radiation process. The idea is to consider the three-dimensional telegraph, or hyperbolic diffusion, equation
\[
\frac{1}{c^2}\frac{\partial^2 q(x,t)}{\partial t^2} + \frac{1}{D}\frac{\partial q(x,t)}{\partial t}
= \Delta q(x,t), \qquad x = (x_1, x_2, x_3) \in R^3, \quad t \ge 0, \tag{9.4.3}
\]

where Δ is the three-dimensional Laplace operator and c > 0, D > 0 are some constants. The solution to equation (9.4.3) with random initial conditions represents a spatial-temporal random field (for more details on random fields and their properties see, for instance, [78]). The restriction of this random field to the surface of the unit sphere yields a random field on this sphere that can be considered as the random CMB radiation. For more details and numerical calculations based on these models, as well as their astrophysical interpretations, see [14, 15].

9.5 Option pricing

The fundamental basis of financial modeling is the concept of an arbitrage-free and complete market. For space- and time-continuous stochastic models, the unique underlying process satisfying this concept is the geometric Brownian motion. The classical Black-Scholes option pricing model is based on just this stochastic process.

In recent decades a number of works have appeared in which a new option pricing model was developed (see, for instance, [115, 179, 180, 183] and the bibliographies therein). The core idea of this approach is the construction of an alternative option pricing model based on telegraph processes instead of the geometric Brownian motion of the Black-Scholes model. This alternative model can be considered as a finite-velocity counterpart of the Black-Scholes one and implies that random oscillations on stock markets occur at finite velocity. However, applying in this model the Goldstein-Kac telegraph process studied in Chapter 2 would lead to the appearance of arbitrage, because the sample paths of the Goldstein-Kac telegraph process are continuous. To avoid this, the model should be based on the telegraph process with jumps, whose sample paths are discontinuous. This implies that, at each turn instant, the particle makes a jump of a finite magnitude. Such jump-telegraph processes are not presented in this monograph, but the reader interested in their basic properties may turn to the recent book [115, Chapter 4]. Based on such jump-telegraph processes, an interesting and relatively simple option pricing model can be constructed which is arbitrage-free and complete. Its construction and main properties, as well as some connections with the classical Black-Scholes model, can be found in [115, Chapter 5].
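The sample-path structure of such a jump-telegraph process can be sketched as follows (this is only an illustration of the ingredient — alternating drift plus a jump at every turn — not the pricing model of [115]; the jump rule and all parameter values are made up):

```python
import random

def jump_telegraph(t_max, c, lam, h, rng):
    """Sketch of a jump-telegraph path: velocity alternates between +c and
    -c at Poisson(lam) epochs, and at every reversal the path additionally
    jumps by -h*v, i.e. opposite to the pre-switch direction."""
    x, v, t, n_jumps = 0.0, rng.choice([-1.0, 1.0]), 0.0, 0
    while True:
        w = rng.expovariate(lam)
        if t + w >= t_max:
            return x + v * c * (t_max - t), n_jumps
        x += v * c * w - h * v      # drift segment, then the jump
        v = -v
        t += w
        n_jumps += 1

rng = random.Random(3)
paths = [jump_telegraph(10.0, 1.0, 2.0, 0.1, rng) for _ in range(5000)]
xs = [x for x, _ in paths]
njs = [n for _, n in paths]

# With symmetric parameters the terminal value has mean 0, and the number
# of jumps is Poisson(lam * t_max), i.e. mean 20 here.
assert abs(sum(xs) / len(xs)) < 0.2
assert abs(sum(njs) / len(njs) - 20.0) < 0.5
```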

Bibliography

[1] Angelani L., Garra R. Run-and-tumble motion in one dimension with space-dependent speed. Phys. Rev. E, 100 (2019), 5, 052147.
[2] Arnold L. Random Dynamical Systems. Springer, 1998, Berlin-Heidelberg.
[3] Aronsson G., Evans L.C., Wu Y. Fast/slow diffusion and growing sandpiles. J. Diff. Equat., 131 (1996), 304-335.
[4] Bartlett M. Some problems associated with random velocity. Publ. Inst. Stat. Univ. Paris, 6 (1957), 261-270.
[5] Bartlett M. A note on random walks at constant speed. Adv. Appl. Probab., 10 (1978), 704-707.
[6] Barut A., Raczka R. Theory of Group Representations and Applications. World Sci., 1986, Singapore.
[7] Bateman H., Erdélyi A. Tables of Integral Transforms. McGraw-Hill, 1954, NY.
[8] Becker-Kern P., Meerschaert M.M., Scheffler H.-P. Limit theorems for coupled continuous time random walks. Ann. Probab., 32 (2004), 730-756.
[9] Beghin L., Nieddu L., Orsingher E. Probabilistic analysis of the telegrapher's process with drift by means of relativistic transformations. J. Appl. Math. Stoch. Anal., 14 (2001), 11-25.
[10] Bena I. Dichotomous Markov noise: exact results for out-of-equilibrium systems. Intern. J. Modern Phys. B, 20 (2006), 20, 2825-2888.
[11] Bers L., John F., Schechter M. Partial Differential Equations. Amer. Math. Soc. Publ., 1964, Providence, RI.
[12] Bogachev L., Ratanov N. Occupation time distributions for the telegraph process. Stoch. Process. Appl., 121(8) (2011), 1816-1844.
[13] Brasiello A., Crescitelli S., Giona M. One-dimensional hyperbolic transport: positivity and admissible boundary conditions derived from the wave formulation. Phys. A, 449 (2016), 176-191.
[14] Broadbridge P., Kolesnik A.D., Leonenko N., Olenko A., Omari D. Spherically restricted random hyperbolic diffusion. Entropy, 22 (2020), 217-248.
[15] Broadbridge P., Kolesnik A.D., Leonenko N., Olenko A. Random spherical hyperbolic diffusion. J. Statist. Phys., 177 (2019), 889-916.
[16] Brooks E. Probabilistic methods for a linear reaction-hyperbolic system with constant coefficients. Ann. Appl. Probab., 9 (1999), 719-731.


[17] Cane V. Random walks and physical processes. Bull. Int. Statist. Inst., 42 (1967), 622-640.
[18] Cane V. Diffusion models with relativity effects. In: Perspectives in Probability and Statistics, Applied Probability Trust, 1975, Sheffield, pp. 263-273.
[19] Casini E., Le Caër G., Martinelli A. Short hyperuniform random walk. J. Statist. Phys., 160 (2015), 254-273.
[20] Cattaneo C.R. Sur une forme de l'équation de la chaleur éliminant le paradoxe d'une propagation instantanée. Comptes Rendus, 247(4) (1958), 431.
[21] Cénac P., Le Ny A., De Loynes B., Offret Y. Persistent random walks. I. Recurrence versus transience. J. Theoret. Probab., 31 (2018), 232-243.
[22] Chandrasekhar S. Stochastic problems in physics and astronomy. Rev. Mod. Phys., 15 (1943), 1, 1-89.
[23] Courant R., Gilbert D. Methods of Mathematical Physics. Vol. 2, Partial Differential Equations. Interscience, 1962, NY.
[24] Chechkin A.V., Metzler R., Klafter J., Gonchar V.Yu. Introduction to the theory of Lévy flights. In: Anomalous Transport: Foundations and Applications, Wiley-VCH Verlag, 2008, Weinheim.
[25] Chung H.S., Piana-Agostinetti S., Shaw D.E., Eaton W.A. Structural origin of slow diffusion in protein folding. Science, 349 (2015), 6255, 1504-1510.
[26] Codling E.A., Plank M.J., Benhamou S. Random walk models in biology. J. Royal Soc. Interface, 5 (2008), 813-834.
[27] Davies R.W. The connection between the Smoluchowski equation and the Kramers-Chandrasekhar equation. Phys. Rev., 93 (1954), 6, 1169.
[28] Davydov B.I. Doklady Acad. Nauk USSR, 2 (1935), 7. (In Russian)
[29] Detcheverry F. Unimodal and bimodal random motions of independent exponential steps. Eur. Phys. J. E, 37 (2014), 11, 114.
[30] Dhar A., Kundu A., Majumdar S.N., Sabhapandit S., Schehr G. Run-and-tumble particle in one-dimensional confining potentials: Steady-state, relaxation and first-passage properties. Phys. Rev. E, 99 (2019), 3, 032132.
[31] Di Crescenzo A. On random motion with velocities alternating at Erlang-distributed random times. Adv. Appl. Probab., 33 (2001), 690-701.
[32] Di Crescenzo A. Exact transient analysis of a planar motion with three directions. Stoch. Stoch. Rep., 72 (2002), 175-189.
[33] Di Crescenzo A., Martinucci B. A damped telegraph random process with logistic stationary distributions. J. Appl. Probab., 47 (2010), 84-96.
[34] Di Crescenzo A., Martinucci B., Zacks S. Telegraph process with elastic boundary at the origin. Methodol. Comput. Appl. Probab., 20 (2018), 333-352.
[35] Di Crescenzo A., Meoli A. On a jump-telegraph process driven by an alternating fractional Poisson process. J. Appl. Probab., 55 (2018), 94-111.


[36] Dunbar S., Othmer H. On a nonlinear hyperbolic equation describing transmission lines, cell movement, and branching random walks. In: Nonlinear Oscillations in Biology and Chemistry, Lecture Notes in Biomath., 66, Springer-Verlag, 1986.
[37] Dunkel J., Hänggi P. Relativistic Brownian motion. Phys. Rep., 471 (2009), 1-73.
[38] Einstein A. On the movement of small particles suspended in stationary liquids required by molecular-kinetic theory of heat. Ann. Phys., 17 (1905), 549-560.
[39] Ellis R. Limit theorems for random evolutions with explicit error estimates. Z. Wahrscheinlichkeitstheorie, 28 (1974), 249-256.
[40] Ethier S.N., Kurtz T.G. Markov Processes: Characterization and Convergence. Wiley, 2009, NY.
[41] Faddeev D.K., Sominsky I.S. Problems in Algebra. Nauka, 1972, Moscow. (In Russian)
[42] Feller W. An Introduction to Probability Theory and Its Applications. Vol. 2. Wiley, 1966, NY.
[43] Fock V.A. The solution of a problem of diffusion theory by the finite-difference method and its application to light diffusion. In: Proceedings of the State Optics Institute, Leningrad, 1926, 4(34), p. 32. (In Russian)
[44] Foong S.K. First-passage time, maximum displacement and Kac's solution of the telegrapher's equation. Phys. Rev. A., 46 (1992), 707-710.
[45] Foong S.K., Kanno S. Properties of the telegrapher's random process with or without a trap. Stoch. Process. Appl., 53 (2002), 147-173.
[46] Franceschetti M. When a random walk of fixed length can lead uniformly anywhere inside a hypersphere. J. Statist. Phys., 127 (2007), 813-823.
[47] Furstenberg H. Noncommuting random products. Trans. Amer. Math. Soc., 108 (1963), 377-428.
[48] Garcia-Pelayo R. The random flight and the persistent random walk. In: Statistical Mechanics and Random Walks: Principles, Processes and Applications, Chapter 19, Nova Science Publ., 2012.
[49] Garra R., Orsingher E., Ratanov N. Planar piecewise linear random motions with jumps. Math. Methods Appl. Sci., 40 (2017), no. 18, 7673-7685.
[50] Gaveau B., Jacobson T., Kac M., Schulman L.S. Relativistic extension of the analogy between quantum mechanics and Brownian motion. Phys. Rev. Lett., 53 (1984), 5, 419.
[51] Ghosh A., Rastegar R., Roitershtein A. On a directionally reinforced random walk. Proc. Amer. Math. Soc., 142 (2014), 3269-3283.
[52] Giona M. Relativistic Poisson-Kac and equilibrium Jüttner distribution. Europhys. Lett., 126 (2019), 5, 50001.
[53] Giona M. Relativistic analysis of stochastic kinematics. Phys. Rev. E, 96 (2017), 4, 042133.


[54] Giona M., Brasiello A., Crescitelli S. Markovian nature, completeness, regularity and correlation properties of generalized Poisson-Kac processes. J. Stat. Mech.: Theory and Experiment, 2 (2017), 2, 023205.
[55] Giona M., Brasiello A., Crescitelli S. Stochastic foundations of undulatory transport phenomena: generalized Poisson-Kac processes — Part I: Basic theory. J. Phys. A: Mathematical and Theoretical, 50 (2017), 33, 335002.
[56] Giona M., Brasiello A., Crescitelli S. Stochastic foundations of undulatory transport phenomena: generalized Poisson-Kac processes — Part II: Irreversibility, norms and entropies. J. Phys. A: Mathematical and Theoretical, 50 (2017), 33, 335003.
[57] Giona M., Brasiello A., Crescitelli S. Stochastic foundations of undulatory transport phenomena: generalized Poisson-Kac processes — Part III: Extensions and applications to kinetic theory and transport. J. Phys. A: Mathematical and Theoretical, 50 (2017), 33, 335004.
[58] Giona M., Brasiello A., Crescitelli S. Ergodicity-breaking bifurcations and tunneling in hyperbolic transport models. Europhys. Lett., 112 (2015), 3, 30001.
[59] Goldstein S. On diffusion by discontinuous movements and on the telegraph equation. Quart. J. Mech. Appl. Math., 4 (1951), 129-156.
[60] Gorostiza L. A central limit theorem for a class of d-dimensional random motions with constant speed. Bull. Amer. Math. Soc., 78 (1972), 575-577.
[61] Gorostiza L. The central limit theorem for random motions of d-dimensional Euclidean space. Ann. Probab., 1 (1973), 603-612.
[62] Gorostiza L. An invariance principle for a class of d-dimensional polygonal random functions. Trans. Amer. Math. Soc., 177 (1973), 413-445.
[63] Gradshteyn I.S., Ryzhik I.M. Tables of Integrals, Series and Products. Academic Press, 1980, NY.
[64] Griego R.J., Hersh R. Random evolutions, Markov chains and systems of partial differential equations. Proc. Nat. Acad. Sci., 62 (1969), 305-308.
[65] Griego R.J., Hersh R. Theory of random evolutions with applications to partial differential equations. Trans. Amer. Math. Soc., 156 (1971), 405-418.
[66] Hadeler K.P. Reaction transport systems in biological modelling. In: Mathematics Inspired by Biology, Lecture Notes in Mathematics, 1999, vol. 1714, Springer, Berlin, pp. 95-150.
[67] Hadeler K.P. Travelling fronts for correlated random walks. Canad. Appl. Math. Quart., 2 (1994), 27-43.
[68] Hadeler K.P. Hyperbolic travelling fronts. Proc. Edinburgh Math. Soc., 31 (1988), 89-97.
[69] Hall M. The Theory of Groups. Amer. Math. Soc. Publ., 1999, Providence, RI.
[70] Hersh R. Random evolutions: a survey of results and problems. Rocky Mount. J. Math., 4 (1974), 443-477.


[71] Hersh R., Papanicolaou G. Non-commuting random evolutions and an operator-valued Feynman-Kac formula. Comm. Pure Appl. Math., 25 (1972), 337-366.
[72] Hersh R., Pinsky M. Random evolutions are asymptotically Gaussian. Comm. Pure Appl. Math., 25 (1972), 33-44.
[73] Hida T. Brownian Motion. Springer, 1980, Heidelberg.
[74] Hille E., Phillips R.S. Functional Analysis and Semigroups. Amer. Math. Soc. Publ., 1996.
[75] Hughes B. Random Walks and Random Environments. Vol. I. Random Walks. Oxford Univ. Press, 1995, NY.
[76] Iacus S.M. Statistical analysis of the inhomogeneous telegrapher's process. Statist. Probab. Lett., 55 (2001), 83-88.
[77] Iacus S.M., Yoshida N. Estimation for the discretely observed telegraph process. Theory Probab. Math. Stat., 78 (2009), 37-47.
[78] Ivanov A.V., Leonenko N.N. Statistical Analysis of Random Fields. Kluwer Publ., 1989, Dordrecht.
[79] Jaffé G. On a paradox in the theory of heat conduction. Phys. Rev., 61 (1942), 9-10, 643-647.
[80] Janssen A. The distance between the Kac process and the Wiener process with applications to generalized telegraph equations. J. Theoret. Probab., 3 (1990), 349-360.
[81] Janssen A., Siebert E. Convolution, semigroups and generalized telegraph equations. Math. Z., 177 (1981), 519-532.
[82] John F. Plane Waves and Spherical Means Applied to Partial Differential Equations. Interscience, 1955, NY.
[83] Kabanov Yu.M. Probabilistic representation of a solution of the telegraph equation. Theory Probab. Appl., 37 (1992), 379-380.
[84] Kac M. A stochastic model related to the telegrapher's equation. Rocky Mount. J. Math., 4 (1974), 497-509. (Reprinted from: Kac M. Some stochastic problems in physics and mathematics. In: Magnolia Petroleum Company Colloquium Lectures in the Pure and Applied Sciences, no. 2, October 1956).
[85] Kallianpur G., Xiong J. Stochastic models of environmental pollution. Adv. Appl. Probab., 26 (1994), 377-403.
[86] Kaplan S. Differential equations in which the Poisson process plays a role. Bull. Amer. Math. Soc., 70 (1964), 264-267.
[87] Kato T. Perturbation Theory for Linear Operators. Springer, 1980, NY.
[88] Kertz R. Limit theorems for discontinuous random evolutions with applications to initial-value problems and to Markov chains on N-lines. Ann. Probab., 2 (1974), 1045-1064.
[89] Kertz R. Discontinuous random evolutions. Ann. Probab., 2 (1974), 416-448.


[90] Kertz R. Perturbed semigroup limit theorems with applications to discontinuous random evolutions. Trans. Amer. Math. Soc., 199 (1974), 29-53.
[91] Kertz R. Limit theorems for semigroups with perturbed generators with applications to multi-scaled random evolutions. J. Func. Anal., 27 (1978), 215-233.
[92] Kisynski J. On M. Kac's probabilistic formula for the solution of the telegraphist's equation. Ann. Polon. Math., 29 (1974), 259-272.
[93] Kolesnik A.D. Slow diffusion by Markov random flights. Phys. A, 499 (2018), 186-197.
[94] Kolesnik A.D. Linear combinations of the telegraph random processes driven by partial differential equations. Stoch. Dynam., 18 (2018), no. 4, ID 1850020, 24 pp.
[95] Kolesnik A.D. Asymptotic relation for the transition density of the three-dimensional Markov random flight on small time intervals. J. Statist. Phys., 166 (2017), 434-452.
[96] Kolesnik A.D. Integral equation for the transition density of the multidimensional Markov random flight. Theory Stoch. Process., 20(36) (2015), no. 2, 42-53.
[97] Kolesnik A.D. The explicit probability distribution of the sum of two telegraph processes. Stoch. Dynam., 15 (2015), no. 2, ID 1550013, 32 pp.
[98] Kolesnik A.D. Probability distribution function for the Euclidean distance between two telegraph processes. Adv. Appl. Probab., 46 (2014), 1172-1193.
[99] Kolesnik A.D. Probability law for the Euclidean distance between two planar random flights. J. Statist. Phys., 154 (2014), 1124-1152.
[100] Kolesnik A.D. Moment analysis of the telegraph random process. Bull. Acad. Sci. Moldova, Ser. Math., 1(68) (2012), 90-107.
[101] Kolesnik A.D. The explicit probability distribution of a six-dimensional random flight. Theory Stoch. Process., 15(31) (2009), no. 1, 33-39.
[102] Kolesnik A.D. The distribution of a planar random evolution with random start point. Bull. Acad. Sci. Moldova, Ser. Math., 1(59) (2009), 79-86.
[103] Kolesnik A.D. An asymptotic relation for the density of a multidimensional random evolution with rare Poisson switchings. Ukrain. Math. J., 60 (2008), 1915-1926.
[104] Kolesnik A.D. Random motions at finite speed in higher dimensions. J. Statist. Phys., 131 (2008), 1039-1065.
[105] Kolesnik A.D. Moments of the Markovian random evolutions in two and four dimensions. Bull. Acad. Sci. Moldova, Ser. Math., 2(53) (2008), 68-80.
[106] Kolesnik A.D. A note on planar random motion at finite speed. J. Appl. Probab., 44 (2007), 838-842.
[107] Kolesnik A.D. A four-dimensional random motion at finite speed. J. Appl. Probab., 43 (2006), 1107-1118.
[108] Kolesnik A.D. Discontinuous term of the distribution for Markovian random evolution in R3. Bull. Acad. Sci. Moldova, Ser. Math., 2(51) (2006), 62-68.


[109] Kolesnik A.D. Weak convergence of the distributions of Markovian random evolutions in two and three dimensions. Bull. Acad. Sci. Moldova, Ser. Math., 3(43) (2003), 41-52.
[110] Kolesnik A.D. Weak convergence of a planar random evolution to the Wiener process. J. Theoret. Probab., 14 (2001), 485-494.
[111] Kolesnik A.D. A polynomial representation of the infinitesimal operators of Markovian random evolutions in a plane. Bull. Acad. Sci. Moldova, Ser. Math., 1(32) (2000), 67-75.
[112] Kolesnik A.D. The equations of Markovian random evolution on the line. J. Appl. Probab., 35 (1998), 27-35.
[113] Kolesnik A.D., Orsingher E. A planar random motion with an infinite number of directions controlled by the damped wave equation. J. Appl. Probab., 42 (2005), 1168-1182.
[114] Kolesnik A.D., Pinsky M.A. Random evolutions are driven by the hyperparabolic operators. J. Statist. Phys., 142 (2011), 828-846.
[115] Kolesnik A.D., Ratanov N. Telegraph Processes and Option Pricing. Springer, 2013, Heidelberg.
[116] Kolesnik A.D., Turbin A.F. The equation of symmetric Markovian random evolution in a plane. Stoch. Process. Appl., 75 (1998), 67-87.
[117] Kolesnik A.D., Turbin A.F. An infinitesimal hyperbolic operator of Markov random evolutions in Rn. Dokl. Akad. Nauk Ukrain. SSR., 1 (1991), 11-14. (In Russian)
[118] Korn G.A., Korn T.M. Mathematical Handbook. McGraw-Hill, 1968, NY.
[119] Korolyuk V.S., Limnios N. Average and diffusion approximation of stochastic evolutionary systems in an asymptotic split state space. Ann. Appl. Probab., 14 (2004), 489-516.
[120] Korolyuk V.S., Limnios N. Stochastic Systems in Merging Phase Space. World Sci., 2005, River Edge, NJ.
[121] Korolyuk V.S., Portenko N.I., Skorokhod A.V., Turbin A.F. Handbook on Probability Theory and Mathematical Statistics. Nauka, 1985, Moscow. (In Russian)
[122] Korolyuk V.S., Swishchuk A.V. Evolution of Systems in Random Media. CRC Press, 1995, Boca Raton, FL.
[123] Korolyuk V.S., Swishchuk A.V. Semi-Markov Random Evolutions. Kluwer Publ., 1994, Amsterdam.
[124] Korolyuk V.S., Turbin A.F. Mathematical Foundations of Phase Merging of Complex Systems. Kluwer Publ., 1994, Amsterdam.
[125] Kurtz T. Extensions of Trotter's operator semigroup approximation theorems. J. Func. Anal., 3 (1969), 111-132.
[126] Kurtz T. A general theorem on the convergence of operator semigroups. Trans. Amer. Math. Soc., 148 (1970), 23-32.


[127] Kurtz T. A limit theorem for perturbed operator semigroups with applications to random evolutions. J. Func. Anal., 12 (1973), 55-67.
[128] Lachal A. Cyclic random motions in Rd-space with n directions. ESAIM: Probab. & Stat., 10 (2006), 277-316.
[129] Lang S. Algebra. Addison-Wesley, 1965, Reading, MA.
[130] Le Caër G. Two-step Dirichlet random walks. Phys. A, 430 (2015), 201-215.
[131] Le Caër G. A new family of solvable Pearson-Dirichlet random walks. J. Statist. Phys., 144 (2011), 23-45.
[132] Le Caër G. A Pearson random walk with steps of uniform orientation and Dirichlet distributed lengths. J. Statist. Phys., 140 (2010), 728-751.
[133] Letac G., Piccioni M. Dirichlet random walks. J. Appl. Probab., 51 (2014), 1081-1099.
[134] López O., Ratanov N. On the asymmetric telegraph processes. J. Appl. Probab., 51 (2014), 569-589.
[135] Lyapin E.S. Meteorology and Gidrology. In: Proceedings of the Main Geophysical Observatory, 19, Gidrometeoizdat Publ., 1950, Leningrad. (In Russian)
[136] Martens K., Angelani L., Di Leonardo R., Bosquet L. Probability distributions for the run-and-tumble bacterial dynamics: An analogy to the Lorentz model. Eur. Phys. J. E, 35 (2012), 9, 84.
[137] Masoliver J. Three-dimensional telegrapher's equation and its fractional generalization. Phys. Rev. E, 96 (2017), 022101.
[138] Masoliver J., Lindenberg K. Continuous time persistent random walk: a review and some generalizations. Eur. Phys. J. B, 90 (2017), 107-119.
[139] Masoliver J., Lindenberg K., Weiss G.H. A continuous-time generalization of the persistent random walk. Phys. A, 157 (1989), 891-898.
[140] Masoliver J., Porrà J.M., Weiss G.H. Some two and three-dimensional persistent random walks. Phys. A, 193 (1993), 469-482.
[141] Masoliver J., Weiss G.H. First-passage times for a generalized telegrapher's equation. Phys. A, 183 (1992), 537-548.
[142] Masoliver J., Weiss G.H. On the maximum displacement of a one-dimensional diffusion process described by the telegrapher's equation. Phys. A, 195 (1993), 93-100.
[143] Masoliver J., Weiss G.H. Transport equations in chromatography with a finite speed of signal propagation. Separ. Sci. Tech., 26 (1991), 2, 279-289.
[144] Meerschaert M.M., Straka P. Semi-Markov approach to continuous-time random walk limit processes. Ann. Probab., 42 (2014), 1699-1723.
[145] Meerschaert M.M., Scheffler H.-P. Limit theorems for continuous time random walks with infinite mean waiting times. J. Appl. Probab., 41 (2004), 623-638.
[146] Metzler R., Klafter J. The random walks guide to anomalous diffusion: A fractional dynamics approach. Phys. Rep., 339 (2000), 1-77.


[147] Miller W. Symmetry Groups and Their Applications. Academic Press, 1972, NY.
[148] Mizohata S. The Theory of Partial Differential Equations. Cambridge Univ. Press, 1979.
[149] Monin A.S. Atmospheric diffusion. Soviet Phys. Uspekhi, 1 (1959), 119-130. (In Russian)
[150] Monin A.S. On diffusion at finite speed. Proc. Acad. Sci. USSR, Ser. Geophys., 3 (1955), 234. (In Russian)
[151] Monin A.S. Equation of turbulent diffusion. Doklady Acad. Nauk USSR, 105 (1955), 256. (In Russian)
[152] Morse P.M., Feshbach H. Methods of Theoretical Physics. McGraw-Hill, 1953, NY.
[153] Müller I., Ruggeri T. Extended Thermodynamics. Springer, 1993, NY.
[154] Naimark M.A. Normed Rings. Wiley-VCH Verlag, 1961, Groningen.
[155] Orsingher E. Exact joint distribution in a model of planar random motion. Stoch. Stoch. Rep., 69 (2000), 1-10.
[156] Orsingher E. Motions with reflecting and absorbing barriers driven by the telegraph equation. Rand. Operat. Stoch. Equat., 3 (1995), 9-21.
[157] Orsingher E. Probability law, flow function, maximum distribution of wave-governed random motions and their connections with Kirchhoff's laws. Stoch. Process. Appl., 34 (1990), 49-66.
[158] Orsingher E., De Gregorio A. Random flights in higher spaces. J. Theoret. Probab., 20 (2007), 769-806.
[159] Othmer H.G., Dunbar S.R., Alt W. Models of dispersal in biological systems. J. Math. Biol., 26 (1988), 263-298.
[160] Othmer H.G., Hillen T. The diffusion limit of transport equations. II. Chemotaxis equations. SIAM J. Appl. Math., 62 (2002), 1222-1250.
[161] Papanicolaou G. Asymptotic analysis of transport processes. Bull. Amer. Math. Soc., 81 (1975), 330-392.
[162] Papanicolaou G. Motion of a particle in a random field. J. Math. Phys., 12 (1971), 1494-1496.
[163] Papanicolaou G. Wave propagation in a one-dimensional random medium. SIAM J. Appl. Math., 21 (1971), 13-18.
[164] Papanicolaou G. Stochastic equations and their applications. Amer. Math. Monthly, 80 (1973), 526-544.
[165] Papanicolaou G., Hersh R. Some limit theorems for stochastic equations and applications. Indiana Univ. Math. J., 21 (1972), 815-840.
[166] Papanicolaou G., Keller J. Stochastic differential equations with applications to random harmonic oscillators and wave propagation in random media. SIAM J. Appl. Math., 21 (1971), 287-305.


[167] Pearson K. The problem of the random walk. Nature, 72 (1905), 294.
[168] Pearson K. The problem of the random walk. Nature, 72 (1905), 342.
[169] Peters B. On the coupling between slow diffusion transport and barrier crossing in nucleation. J. Chemic. Phys., 135 (2011), 044107.
[170] Pinsky M.A. Lectures on Random Evolution. World Sci., 1991, River Edge, NJ.
[171] Pinsky M.A. Isotropic transport process on a Riemannian manifold. Trans. Amer. Math. Soc., 218 (1976), 353-360.
[172] Pinsky M.A. Differential equations with a small parameter and the central limit theorem for functions defined on a finite Markov chain. Z. Wahrsch., 9 (1968), 101-111.
[173] Pinsky M. Multiplicative operator functionals of a Markov process. Bull. Amer. Math. Soc., 77 (1971), 377-380.
[174] Pinsky M. Stochastic integral representations of multiplicative operator functionals of a Wiener process. Trans. Amer. Math. Soc., 167 (1972), 89-104.
[175] Pinsky M. Multiplicative operator functionals and their asymptotic properties. Adv. Probab., 3 (1974), 1-100.
[176] Pinsky M. Random evolutions. In: Probabilistic Methods in Differential Equations, Lect. Notes Math., Springer, 1975, Amsterdam, 451, 89-99.
[177] Prudnikov A.P., Brychkov Yu.A., Marichev O.I. Integrals and Series. Special Functions. Nauka, 1983, Moscow.
[178] Prudnikov A.P., Brychkov Yu.A., Marichev O.I. Integrals and Series. Supplementary Chapters. Nauka, 1986, Moscow.
[179] Ratanov N. Option pricing model based on a Markov-modulated diffusion with jumps. Brazil. J. Probab. Stat., 24 (2010), 413-431.
[180] Ratanov N. A jump telegraph model for option pricing. Quant. Finance, 7 (2007), 575-583.
[181] Ratanov N. Reaction-advection random motions in inhomogeneous media. Phys. D, 189 (2004), 130-140.
[182] Ratanov N.E. Telegraph evolutions in inhomogeneous media. Markov Process. Relat. Fields, 5 (1999), 53-68.
[183] Ratanov N., Melnikov A. On financial markets based on telegraph processes. Stochastics, 80 (2008), 247-268.
[184] Rayleigh J.W.S. The problem of the random walk. Nature, 72 (1905), 318.
[185] Reimberg P.H.F., Abramo L.R. Random flights through spaces of different dimensions. J. Math. Phys., 56 (2015), 013512.
[186] Reimberg P.H.F., Abramo L.R. CMB and random flights: temperature and polarization in position space. J. Cosmol. Astropart. Phys., 2013 (2013), 6, 043.
[187] Riordan J. Combinatorial Identities. R.E. Krieger Publ., 1979.


[188] Rosenau P. Random walker and the telegrapher's equation: A paradigm of a generalized hydrodynamics. Phys. Rev. E, 48 (1993), R655.
[189] Smoluchowski M. Zur kinetischen Theorie der Brownschen Molekularbewegung und der Suspensionen. Ann. Phys., 21 (1906), 756-780.
[190] Stadje W. Exact probability distributions for non-correlated random walk models. J. Statist. Phys., 56 (1989), 415-435.
[191] Stadje W. The exact probability distribution of a two-dimensional random walk. J. Statist. Phys., 46 (1987), 207-216.
[192] Stadje W., Zacks S. Telegraph processes with random velocities. J. Appl. Probab., 41 (2004), 665-678.
[193] Stagnitti F., Parlange J.-Y., Steenhuis T.S., Barry D.A., Li L., Lockington D.A., Sander G.C. Mathematical equations of the spread of pollution in soils. In: Hydrological Systems Modeling, Vol. II, EOLSS Publ., 2008, 31 pp.
[194] Swishchuk A.V. Random Evolutions and Their Applications. Kluwer Publ., 1997, Amsterdam.
[195] Swishchuk A.V. The martingale problem and stochastic integral equations in a Banach space for limit semi-Markov random evolutions. I. Rand. Operat. Stoch. Equat., 2 (1994), no. 3, 277-301.
[196] Swishchuk A.V. The martingale problem and stochastic integral equations in a Banach space for limit semi-Markov random evolutions. II. Rand. Operat. Stoch. Equat., 2 (1994), no. 4, 303-330.
[197] Taylor G.I. Diffusion by continuous movements. Proc. Lond. Math. Soc., 20(2) (1922), 196-212.
[198] Thomson W. On the theory of the electric telegraph. Proc. Roy. Soc. Lond., 7 (1854), 382-399. (Reprinted in: Mathematical and Physical Papers by Sir William Thomson, vol. II, The University Press, Cambridge, 1884, article LXXIII, pp. 61-76.) http://www.archive.org/details/mathematicaland02kelvgoog
[199] Tolubinsky E.V. Theory of Transport Processes. Naukova Dumka, 1969, Kiev. (In Russian)
[200] Tricomi F.G. Integral Equations. Dover Publ., 1985, NY.
[201] Trotter H.F. On the product of semigroups of operators. Proc. Amer. Math. Soc., 10 (1959), 545-551.
[202] Turbin A.F., Kolesnik A.D. Hyperbolic equations of the random evolutions in Rm. In: Probability Theory and Mathematical Statistics. World Sci., 1992, Singapore, pp. 397-402.
[203] Turbin A.F., Samoilenko I.V. A probabilistic method for solving the telegraph equation with real-analytic initial conditions. Ukrain. Math. J., 52 (2000), 1292-1299.
[204] Valdes-Taubas J., Pelham H.R. Slow diffusion of proteins in the yeast plasma membrane allows polarity to be maintained by endocytic cycling. Curr. Biol., 13(18) (2003), 1636-1640.


[205] Vernotte P. Les paradoxes de la théorie continue de l'équation de la chaleur. Comptes Rendus., 246(22) (1958), 3154.
[206] Vilenkin N.Ja. Special Functions and the Theory of Group Representations. Amer. Math. Soc. Publ., 1968, Providence, RI.
[207] Vladimirov V.S. Equations of Mathematical Physics. Dekker Publ., 1971.
[208] Waldauer S.A., Bakajin O., Lapidus L.J. Extremely slow intramolecular diffusion in unfolded protein L. Proc. Nat. Acad. Sci. USA, 107 (2010), 31, 13713-13717.
[209] Watkins J.C. A stochastic integral representation for random evolutions. Ann. Probab., 13 (1985), 531-557.
[210] Watkins J.C. Limit theorems for stationary random evolutions. Stoch. Process. Appl., 19 (1985), 189-224.
[211] Watkins J.C. A central limit problem in random evolutions. Ann. Probab., 12 (1984), 480-513.
[212] Weiss G.H. Some applications of persistent random walks and the telegrapher's equation. Phys. A, 311 (2002), 381-410.
[213] Weiss G.H. First passage time problems for one-dimensional random walks. J. Statist. Phys., 24 (1981), 587-594.
[214] Wiener N. Differential space. J. Math. Phys., 2 (1923), 132-174.
[215] Whittaker E.T., Watson G.N. A Course of Modern Analysis. Part II: Transcendental Functions. Cambridge Univ. Press, 1996.
[216] Zoia A., Dumonteil E., Mazzolo A. Collision densities and mean residence time for d-dimensional exponential flights. Phys. Rev. E, 83 (2011), 4, 041137.
[217] Zoia A., Dumonteil E., Mazzolo A. Residence time and collision statistics for exponential random flights: The rod problem revisited. Phys. Rev. E, 84 (2011), 2, 021139.
[218] Zoia A., Dumonteil E., Mazzolo A. Collision statistics for random flights with anisotropic scattering and absorption. Phys. Rev. E, 84 (2011), 6, 061130.

Index

Abelian subalgebra, 90
absolutely continuous component, xiv, xv, 55, 59, 141, 142
absolutely continuous component of density, 78, 111, 125, 126, 158, 206, 213, 220, 262, 268, 281, 305, 321, 336, 344
absolutely continuous component of distribution, 174, 179, 193, 199, 205, 208, 213, 218, 230, 235, 256, 314, 318, 331, 336, 341
absolutely converging series, 206
accuracy of asymptotic relation, 306, 307
adjoint mapping, 90
Ado's theorem, 89
alternator, 12, 96
angular component, 262
angular component of density, 238, 321
arbitrage-free market, 359
arcsine law, 3
asymptotic behaviour, 199
asymptotic behaviour of moment function, 80, 81
asymptotic estimates, 201
asymptotic expansion of Bessel function, 201
asymptotic formula, 72, 81, 83, 129
asymptotic formula for Bessel function, 17
asymptotic relation, 200, 305, 352
asymptotic relation for Bessel function, 301
asymptotic relation for characteristic function, 300
asymptotic relation for transition density, 302
asymptotic stationary density, 351, 352
automodel substitution, 244
backward evolutionary equation, 10
backward front of wave, 251
backward Kolmogorov equation, 5, 58, 130, 144, 165, 207, 247
Banach algebra, 160
Banach space, 7, 158, 159, 166, 247, 291
Banach space of twice continuously differentiable functions, 247
basic functions, 35
basic space, 35
basis, 91
basis elements, 88, 91, 93
Bessel differential equation, 15
Bessel function, 15, 17, 19, 39, 176, 185, 200, 230, 299, 336
beta-function, 29, 254
Black-Scholes model, 359
block structure, 132, 133
bounded linear projection, 14
Brownian motion, xviii, 2, 73, 76, 166, 207, 216, 346, 347
Carleman condition, 83
Catalan theorem, 317, 324
Cauchy problem, 8, 12, 59, 60, 62, 245, 308–310, 346
characteristic cone, 251
characteristic function, 2, 4, 60, 73, 76, 87, 111, 115, 125, 126, 133, 137, 176, 198, 216–218, 288, 300, 323, 324, 336, 337
characteristic function of absolutely continuous component of distribution, 191, 325
characteristic function of Markov random flight, 187, 189
characteristic function of planar Markov random flight, 240
characteristic function of singular component of distribution, 325
characteristic function of uniform distribution, 302
characteristic function of uniform distribution on sphere, 180

characters of finite cyclic group, 146
Chebyshev polynomials of first kind, 20
Chebyshev polynomials of second kind, 20
Chebyshev polynomials on Banach algebra, 21, 158
circular Gaussian law, 226, 356
circumference, 229
column-vector, 131, 132, 146, 165, 167
commutational relations, 89, 91
commutative Banach algebra, 21
commutativity, 38
commutator, 88, 89
compact set, 167, 247
compactly supported function, 39, 40
compactly supported generalized function, 38
complete market, 359
compositional series, 90
conditional characteristic function, 200, 338
conditional characteristic functions, 175, 178, 181–185, 216, 218, 230, 232, 288, 298, 299, 301, 314, 315, 337, 338
conditional densities, 199, 200, 222, 235, 251, 289, 290, 303, 318, 336–342
conditional density, 187, 205, 206
conditional distributions, 175, 229, 230, 314, 335, 336
conditional probabilities, 102
conditional probability, 278
continuous kernel, 189, 218
continuous random flight, xvii
continuous semi-Markov random evolution, 8
continuous-time Markov chain, 166
contraction operator, 7
convergent sequence of generalized functions, 36
convolution, 124, 177, 190, 217, 218, 221, 257
convolution of generalized functions, 37
convolution-type recurrent relation, 177
convolutional equation, 190
cosmic microwave background radiation, 359
counting process, 6, 7, 54
covariance function, 2
cyclic choice of velocities, 99
cyclic group, 146
cylindrical function, 17, 18
D'Alambert solution, xvi
damped spatial wave, 250
damped wave equation, 59, 245
damped wave propagation, 238
damping effect, 238, 251, 343
decomposition of unit matrix, 145, 148
degenerate hypergeometric function, 329
degenerated hypergeometric function, 34, 260
density of absolutely continuous component, 243
density of distribution, 343
density of planar Markov random flight, 236
density of pollution, 356
derivative of generalized function, 36
determinant, 90, 96, 98, 99, 132, 133
determinant theorem, 12, 13, 59, 96, 98, 99, 132
diagonal matrix, 95
diagonal matrix operator, 167
diagonal operator, 165
difference between two Goldstein-Kac telegraph processes, 107
difference of two telegraph processes, 135
differentiability, 38
differential equation for degenerated hypergeometric function, 34
differential equation for Chebyshev polynomials, 21
differential equation for Struve function, 20
differential equation for Whittaker function, 35
differentiation formula for Gauss hypergeometric function, 31
diffusion area, 154, 205, 206, 253, 321, 350, 354
diffusion coefficient, 2, 5, 71, 73, 76, 116, 135, 165, 195, 198, 199, 216, 247, 328, 346, 347, 358
diffusion process, 4, 5
diffusive flux, 195, 346
diffusivity tensor, 358
Dirac delta-function, xiv, 62, 100, 125, 133, 137, 142, 174, 194, 195, 207, 209, 210, 220, 238, 321, 344
direct Hankel transform, 39
direct multiplication of generalized functions, 37
direction process, 128
disc, 234
discontinuity, 71
discontinuous conditional density, 237
discontinuous term of density, 289, 290
disk, 229
dissipation function, 216, 219, 227, 247, 311
double convolution, 222
double Laplace-Fourier transform, 41
drift, 2, 5, 71, 73, 76, 97, 116, 135, 164, 165, 195, 198, 199, 216, 247, 328, 346, 347, 355
duplication formula for gamma-function, 29, 181
eigenfunction, 245
eigenvalue, 148, 245
eigenvectors, 148
Einstein-Smoluchowski's model, xiv
elementary plane wave, 251
embedded Markov chain, 145, 165
error in asymptotic formula, 305, 307
error of approximation, 307
Euclidean distance, 255, 261, 262, 265, 266, 269, 331
Euclidean distance between two Goldstein-Kac telegraph processes, 100
Euler gamma-function, 29, 232, 295
Euler integral of first kind, 29
Euler integral of second kind, 29
evolutionary equation, 248, 291
expectation, xvi, 2, 6, 78, 135, 175, 256, 332
exponential mapping of basis elements, 91
fast diffusion condition, 346, 347, 354
fast diffusion process, 347
Feynman-Kac formula, 10
field of complex numbers, 21
finite-velocity diffusion, xvii, xix
first and second moments of the telegraph process, 80
first passage time distribution, 3
first-order symmetry operator, 89
Fokker-Planck equation, 5, 207
forward evolutionary equation, 10

forward front of wave, 251
forward Kolmogorov equation, 5
four-dimensional ball, 314, 318
four-dimensional density, 320
four-dimensional heat equation, 328
four-dimensional Markov random flight, 214, 318, 321, 331, 349
four-dimensional telegraph equation, 214, 328
four-dimensional Wiener process, 328
Fourier transform, 2, 38, 115, 176, 200, 216, 221, 224, 232, 289, 303, 316, 317, 324, 336, 337
Fourier transform of convolution, 40
Fourier transform of derivative, 39
Fourier transform of similarity, 40
fourth power of Gauss hypergeometric function, 33
fourth-order equation, 139
fractional-analytical function, 29
Frechet strong differentiation, 24
function with compact support, 223–227
functional relation for Struve function, 20
functional relations for degenerated hypergeometric function, 34
functional relations for Euler gamma-function, 29
functional relations for Pochhammer symbol, 30
fundamental solution, xv, 2, 4, 60, 62, 116, 133, 134, 199, 207, 209, 210, 214, 234, 243, 245, 250, 251, 308, 311, 328, 346, 353
gamma-distribution, 53
Gauss hypergeometric function, 20, 30, 31, 67, 117, 118, 123, 124, 179, 185, 193, 200, 206, 225, 336, 351
Gauss recurrent relations, 31
Gaussian density, 2, 4, 257
Gegenbauer polynomials, 68, 70
general hypergeometric function, 34, 117, 295, 297, 299
general hypergeometric series, 34
generalized differentiation, 38
generalized function, 2, 35, 60, 100, 134, 207, 209, 222, 238, 321, 343
generating function for Chebyshev polynomials, 21
generating function for Chebyshev polynomials on Banach algebra, 26
generator, 8, 14, 58, 143, 160, 166, 216, 248–250, 292, 294, 328
geometric Brownian motion, 359
Goldstein-Kac model, xiii, xiv
Goldstein-Kac telegraph equation, 93, 97, 99, 143
Goldstein-Kac telegraph process, xvii, 10, 59, 60, 71, 76, 78, 123, 126, 161, 166, 207, 240, 346–348, 355, 359
Green's function, xv, 4, 60, 134, 199, 207, 209, 234, 238, 245, 250, 251, 253, 320, 346, 353
group element, 93, 94
group of motions of pseudo-Euclidean plane, 93
group operation, 93
group properties of telegraph equation, 88
group symmetry of Goldstein-Kac telegraph equation, 93
groups of motions, 90
Hamming metric, 130, 131
Hankel inversion formula, 198, 201, 233, 303, 316, 339
heat equation, xi, xv, 2, 4, 60, 116, 199, 207, 346
heat operator, 346
Heaviside function, xiv, 3, 55, 62, 117, 174, 187, 220, 238, 251, 321, 344, 346
Helmholtz equation, 250, 294
holomorphic, 186
holomorphic function, 16, 39, 40, 88, 193, 208, 299
Huygens principle, 250, 253
hydrological balance of reservoir, 355
hyperbolic arctangent function, 290
hyperbolic diffusion, 358
hyperbolic diffusion equation, 359
hyperbolic distance, 251
hyperbolic equation, 12, 58, 97–99, 132, 137, 142, 144, 158, 165
hyperbolic function, 93
hyperbolic operator, 167
hyperbolic system, 11, 56, 94, 128, 144, 165
hyperbolicity, 95, 132, 134, 139, 154, 310
hypergeometric differential equation, 31
hypergeometric function, 341–343
hypergeometric series, 31, 179, 197, 206
hyperparabolic equation, 209, 213, 214, 328
hyperparabolic operator, 208, 209, 212, 214, 216, 308, 346
hypersurface, 222
ideal, 90, 159
improper integral, 202
incomplete gamma-function, 329, 332
incomplete integral cosine function, 184, 289, 300
incomplete integral sine function, 184, 289, 300
increment, 4, 6, 56, 128
indicator function, 60, 110
infinite discontinuity, 206, 234, 303, 304, 321
infinitesimal angle, 247
infinitesimal element, 220, 235, 314, 336
infinitesimal matrix, 11, 145, 166
infinitesimal operator, 9, 13, 143, 160
infinitesimal ring, 235
infinitesimal solid angle, 291
inhomogeneous equation, 134
inhomogeneous Helmholtz equation, 169
inhomogeneous Klein-Gordon equation, 250, 294
inhomogeneous telegraph equation, xiv, 60, 207
initial condition, xv, 5, 11, 59, 62, 133, 134, 137, 143, 194, 195, 309, 310
initial conditions, 308
initial conditions for characteristic function, 243
initial-value problem, 10, 97, 133, 134, 136
inner product, 175
integral equation, 223, 226, 227, 309
integral representation of Bessel function, 16, 188, 232
integral representation of characteristic function, 241
integral representation of degenerated hypergeometric function, 34
integral representation of Gauss hypergeometric function, 30
integral representation of Macdonald function, 18

hypergeometric differential equation, 31
hypergeometric function, 341–343
hypergeometric series, 31, 179, 197, 206
hyperparabolic equation, 209, 213, 214, 328
hyperparabolic operator, 208, 209, 212, 214, 216, 308, 346
hypersurface, 222
ideal, 90, 159
improper integral, 202
incomplete gamma-function, 329, 332
incomplete integral cosine function, 184, 289, 300
incomplete integral sine function, 184, 289, 300
increment, 4, 6, 56, 128
indicator function, 60, 110
infinite discontinuity, 206, 234, 303, 304, 321
infinitesimal angle, 247
infinitesimal element, 220, 235, 314, 336
infinitesimal matrix, 11, 145, 166
infinitesimal operator, 9, 13, 143, 160
infinitesimal ring, 235
infinitesimal solid angle, 291
inhomogeneous equation, 134
inhomogeneous Helmholtz equation, 169
inhomogeneous Klein-Gordon equation, 250, 294
inhomogeneous telegraph equation, xiv, 60, 207
initial condition, xv, 5, 11, 59, 62, 133, 134, 137, 143, 194, 195, 309, 310
initial conditions, 308
initial conditions for characteristic function, 243
initial-value problem, 10, 97, 133, 134, 136
inner product, 175
integral equation, 223, 226, 227, 309
integral representation of Bessel function, 16, 188, 232
integral representation of characteristic function, 241
integral representation of degenerated hypergeometric function, 34
integral representation of Gauss hypergeometric function, 30
integral representation of Macdonald function, 18

integral representation of Struve function, 19
integrand, 201, 202, 204, 278, 305
integration area, 225, 280
integro-differential equations, 247, 308, 309
invariant rescaling, 348
inverse element, 93
inverse Fourier transform, 39, 111, 198, 201, 233, 302, 303, 316, 338, 339
inverse Fourier transformation, 187
inverse Hankel transform, 39
inverse Laplace transform, 179, 181, 184, 198, 289, 299, 337
inverse Laplace transformation, 185, 186
isomorphic, 91, 93, 131
isomorphism, 131
isotropic transport process, xvi
iterated logarithm law, 3
joint characteristic functions, 176
joint densities, 165, 220, 221, 225, 227, 247, 291, 308
joint density, 56
joint distribution, 3, 7
joint distributions, 273, 274, 282, 285
joint probabilities, 282
joint probability densities, 128, 143, 217, 220, 225
joint probability density, 311
jump amplitude, 108, 123
jump Markov chain, 15
jump semi-Markov random evolution, 8
Kac's condition, xv, 71–73, 76, 116, 134, 163, 165, 166, 169, 171, 195, 196, 198, 199, 207, 214, 215, 245, 246, 248, 250, 290, 292, 294, 308, 327, 328, 346
Kolmogorov equation, 11, 291, 308, 310, 311
Kolmogorov-Chapman equation, 1
Kronecker delta-symbol, 248, 291
Kurtz's theorem, 14, 166, 167, 169–171, 248, 250, 292, 294
lack of memory, 6
Laplace operator, xviii, 4, 143, 150, 158, 163, 166, 199, 208, 209, 248, 249, 292, 358, 359
Laplace transform, 2, 40, 74, 75, 178, 217–219, 337

Laplace transform of characteristic function, 192, 193, 208, 209, 242, 326
Laplace transform of convolution, 41
Laplace transform of derivative, 41
Laplace transform of similarity, 41
Laplace transforms of conditional characteristic functions, 178
Laplace-Beltrami operator, 358
Laplace-Fourier transform, 210, 213, 352
Lebesgue measure, 5, 36, 158, 173–175, 213, 217, 220, 221, 229, 235, 247, 308, 314, 318, 336
left-continuous, 71, 106, 123
Lerch ψ-function, 305
lexicographical order, 130–132
Lie algebra, 88, 89
Lie group, 90
lifetime, 356, 357
limit theorem, 195, 246, 290
limiting condition, 308
limiting distribution, 14, 249, 292
limiting operator, 249, 294
limiting process, 15, 328
limiting relation, 270, 327
line-vector, 149
linear combination of telegraph processes, 126, 135
linearity, 37
linearly independent solutions, 34, 35
local Lie group, 91, 93, 94
local matrix Lie group, 93
logarithm function, 290
Lorentz factor, 358
Macdonald function, 18, 356
mapping of Lie algebra, 90
marginal distributions of planar Markov random flight, 238
Markov chain, 11, 169
Markov process, 1, 4, 9, 54, 249, 292
Markov random evolution, xvi, 14
Markov random flight, xvii, xx, 10, 173, 180, 181, 185, 192, 195, 199, 206, 208, 209, 216, 218, 219, 223, 226, 246, 248, 257, 264, 343, 346–348, 355
matrix commutator, 91
matrix differential operator, 12, 58, 59, 96, 131, 132, 165

matrix equation, 168
matrix Lie algebra, 89, 91, 93
maximum distribution, 3
measurable phase space, 1, 7, 14
mixed moments, 254, 328
modified Bessel function, xv, 15, 17, 18, 62, 63, 65, 72, 78, 82, 113, 226, 238, 254, 257, 346
modified Struve function, 19
moment function, 78, 79, 329
moment generating function, 85, 87
moment problem for Goldstein-Kac telegraph process, 83
moments, 256, 332
moments of absolutely continuous component of distribution, 330
moments of singular component of distribution, 329
multidimensional ball, 174, 219
multidimensional Brownian motion, 4
multidimensional Goldstein-Kac telegraph operator, 208
multidimensional sphere, 173
multidimensional telegraph equation, 207
multidimensional telegraph operator, 208
multidimensional Wiener process, 4
multiindex, 254
multiparameter, 216
multiple convolution, 177, 217
multiple double convolution, 223
normed Bessel function, 176
normed ring, 21
null-element, 26–28
one-parameter family of functions, 247
one-parameter family of operators, 247
one-parameter subgroups, 91
one-parametric family of densities, 226
operator Lie algebra, 91
ordered sequence, 127, 130
ordered set, 130
ordinarity, 6
orthogonal relations for Chebyshev polynomials, 21
parabolic equation, 163, 165
partition, 282
persistent random walk, xvii
planar Markov random flight, 227, 229, 238, 240, 250, 251, 254, 261, 262, 266, 269, 349, 355, 356

planar random evolution, 141
planar random flight, xviii
planar random motion, 143
plane wave, 253
Pochhammer symbol, 29, 120, 198, 295
Poincaré group, 93, 94
point-like source, 60, 134, 207, 210, 251
point-wise convergence, 73
Poisson distribution, 6, 54
Poisson flow, 173, 229
Poisson process, xiii, xvii, 6, 94, 100, 126, 128, 135, 141, 199, 219, 251, 253, 262, 265, 335
polar angle, 262
polluting substance, 356
polynomials representations, 161
primitive root, 144–146
probability density, 96–99, 227
probability distribution function, 69, 71, 100, 101, 106, 108, 117, 123, 124, 219, 262, 263, 265, 266, 269, 272
probability of being in ball, 304, 321
probability of being in circle, 237
probability of interaction, 109
projective matrices, 148
projector, 14, 167, 249, 292
properties of convolution operation, 37
pseudo-Euclidean plane, 90
Raabe criterion, 197
radial component, 262
radial component of density, 238, 321
radial function, 39
random evolution, xvi, 7
random field, 359
random flight, xvii, xx, 6, 10
random speed, 240
random triangle, 266
randomized time, xvi
rank of Lie algebra, 90
rarefied environment, 199
recurrent relation, 177, 178, 218, 221, 222, 225, 227
recurrent relation for Bessel functions, 324, 338
recurrent relations, 309, 311
recurrent relations for Chebyshev polynomials, 20
refined asymptotic formula, 83
regular generalized function, 36

remainder, 83, 88
renewal process, 7
restricted hyperbolic diffusion, 359
right n-gon, 155, 158
sample path, 3
Schur's formula, 132
Schwartz-Sobolev function, 143
semi-invariants of Goldstein-Kac telegraph process, 87
semi-Markov kernel, 7
semi-Markov process, 7, 8
semi-Markov random evolution, xvi, 8
semi-simple Lie algebra, 89
semigroup, 166, 169
semigroups generated by distributions, 248, 250
semigroups generated by transition functions, 292
series representation, 32, 33, 180, 251, 296–298
series representation of Bessel function, 15, 82
series representation of characteristic function, 241
series representation of inverse tangent function, 296
series representation of Struve function, 19
shift, 38
shift of Fourier transform, 39
shift of Laplace transform, 41
shifted time differential operator, 116
single-valuedness, 212
singular component, xiii, 55
singular component of density, 78, 110, 125–127, 142, 220, 262, 303, 321, 344
singular component of distribution, 100, 174, 179, 314, 320, 336
singular component of transition density, 225
singular components of distribution, 213
singularity, 71, 123
singularity point, 101, 262
singularity points, 106–108, 127, 135, 136
six-dimensional ball, 336
six-dimensional Markov random flight, 335, 341, 343, 350
slow diffusion, xix, 307

slow diffusion condition, xix, 348–352, 354
slow diffusion process, 347, 348
soil pollution process, 355, 356
solid angle, 308
solvable Lie algebra, 89, 90
solvable Lie group, 90
spatial wave, 251
spherically symmetric function, 39
squared Gauss hypergeometric function, 32
stationarity, 6
stationary density, 348–351
stationary distribution, 347
Stieltjes integral, 317
stochastic flow, 6
stochastic kinematics, 358
stochastic motion, xvii
stochastic motion with several velocities, 94
stochastic solution, xv
strictly hyperbolic, 130
strictly hyperbolic system, 95
strongly continuous semigroup, 7, 13, 14
Struve function, 19, 238, 254
subgroup of rotations, 93
sum of two Goldstein-Kac telegraph processes, 109
sum of two telegraph processes, 135
superposition of planar waves, 238
support of absolutely continuous component of distribution, 110
support of absolutely continuous component of density, 128
support of distribution, 100, 108, 125, 127, 135, 265, 269
support of the absolutely continuous component of distribution, 100
surface integral, 175, 222
symmetry operator, 88
system of integro-differential equations, 291
telegraph equation, xiv, 59, 114, 206, 216
telegraph operator, 116, 140, 216
telegraph process, xiv, 69
tempered distributions, 38, 134
tensor product, 148
terminal points, 101, 123, 125, 127
the Gauss hypergeometric function, 192
thermodynamic limit, 199

third power of Gauss hypergeometric function, 32
third-order hyperbolic equation, 143
third-order hyperbolic partial differential equation, 116
three-dimensional ball, 288
three-dimensional Markov random flight, 287, 290, 292, 294, 295, 300, 302, 311, 351, 352
three-dimensional wave equation, 353
time-continuous Markov chain, 167
transformation formulas for Gauss hypergeometric function, 31
transition density, xv, 5, 60–62, 71, 72, 110, 116, 124, 133, 139, 165, 171, 194, 195, 205–207, 209, 210, 213, 226, 227, 247, 251–253, 257, 302, 321, 343, 346, 348–350, 352
transition density of planar Markov random flight, 238
transition function, 14, 141, 166, 169
transition matrix, 15, 169
transition probabilities, 97, 166
transition probability density, 131, 132, 136, 142, 223, 257, 261, 311
transition probability function, 1, 4, 5
transport equation, 10, 59
transport process, xiii, xiv, xvi, 199, 251, 357
traveling plane wave, 251
twice continuously differentiable function, 250
twice continuously differentiable function with compact support, 166, 167, 169, 170
twice continuously differentiable functions, 291, 294
twice continuously differentiable vector-function, 168
two-dimensional density, 226
two-dimensional heat equation, 245
two-dimensional Markov random flight, 213
two-dimensional telegraph equation, 208, 213, 243
two-dimensional wave equation, 234, 250, 251, 320
two-dimensional wave operator, 245

two-dimensional Wiener process, 238
two-parameter family of functions, 291
two-parameter family of operators, 291
two-parameter family of stochastic processes, 72, 165, 247, 291
two-parameter family of transition densities, 72
uniform choice of velocities, 97, 98
uniform convergence of series, 188, 189, 212, 326
uniform density on surface of sphere, 303
uniform distribution, 176, 206, 225, 226, 249, 262, 264, 266, 318
uniform distribution in circle, 235
uniform distribution on sphere, 336
uniform law, 97, 167, 264
uniformly convergent series, 32, 33
uniformly converging integral, 201, 202
uniformly converging series, 196–198, 208, 218, 224, 246, 251, 252, 288, 296–298, 310, 311
unit circumference, 226, 227, 229, 264, 356
unit element, 93
unit sphere, 225, 287, 308, 313, 335
variance, 2, 164, 256, 332
vector-function, 167
velocity process, 54
vertices, 266, 280
Volterra integral equation, 189–191, 218, 225
von Mises distribution, 226, 356
water level, 355
wave diffusion, 250, 253
wave equation, xvi
wave propagation, 238, 250
wave superposition principle, 251, 252
weak convergence, 73, 169, 294
weak convergence theorem, 248
Weierstrass criterion, 202
Weierstrass theorem, 170
well-posedness, 134, 139, 143
Whittaker function, 35, 260
Wiener process, xii, xv, 2, 60, 71, 72, 116, 135, 164–166, 171, 195, 198, 199, 247, 248, 250, 292, 294, 321