Preface
Contents
1 Introduction
1.1 Getting Started
1.1.1 Conventions and Notations
1.1.2 Dealing with Noise
1.1.3 A Few Useful Equalities
1.1.4 A Few Useful Inequalities
1.2 Some Sources of SPDEs
1.2.1 Biology
1.2.2 Classical Probability Theory
1.2.3 Economics and Finance
1.2.4 Engineering
1.2.5 Physics
1.2.6 Literature
1.2.7 The Structure of This Book
2 Basic Ideas
2.1 Some Useful Facts
2.1.1 Continuity of Random Functions
2.1.2 Connection Between the Itô and Stratonovich Integrals
2.1.3 Random Change of Variables in Random Functions
2.1.4 Problems
2.2 Classification of SPDEs
2.2.1 SPDEs as Stochastic Equations
2.2.2 SPDEs as Partial Differential Equations
2.2.3 Various Notions of a Solution
2.2.4 Problems
2.3 Closed-Form Solutions
2.3.1 Heat Equation
2.3.1.1 Further Directions
2.3.2 Wave Equation
2.3.3 Poisson Equation
2.3.4 Nonlinear Equations
2.3.5 Problems
3 Stochastic Analysis in Infinite Dimensions
3.1 An Overview of Functional Analysis
3.1.1 Spaces
3.1.2 Linear Operators
3.1.3 Problems
3.2 Random Processes and Fields
3.2.1 Fields (No Time Variable)
3.2.2 Processes
3.2.3 Martingales
3.2.4 Problems
3.3 Stochastic Integration
3.3.1 Construction of the Integral
3.3.2 Itô Formula
3.3.3 Problems
4 Linear Equations: Square-Integrable Solutions
4.1 A Summary of SODEs and Deterministic PDEs
4.1.1 Why Square-Integrable Solutions?
4.1.2 Classification of PDEs
4.1.2.1 Second-Order PDEs in Two Variables and Conic Sections
4.1.3 Proving Well-Posedness of Linear PDEs
4.1.4 Well-Posedness of Abstract Equations
4.1.5 Problems
4.2 Stochastic Elliptic Equations
4.2.1 Existence and Uniqueness of Solution
4.2.2 An Example and Further Directions
4.2.3 Problems
4.3 Stochastic Hyperbolic Equations
4.3.1 Existence and Uniqueness of Solution
4.3.2 Further Directions
4.3.3 Problems
4.4 Stochastic Parabolic Equations
4.4.1 Existence and Uniqueness of Solution
4.4.2 A Change of Variables Formula
4.4.3 Probabilistic Representation of the Solution, Part I: Method of Characteristics
4.4.4 Probabilistic Representation of the Solution, Part II: Measure-Valued Solutions and the Filtering Problem
4.4.5 Further Directions
4.4.6 Problems
5 The Polynomial Chaos Method
5.1 Stationary Wiener Chaos
5.1.1 Cameron-Martin Basis
5.1.2 Elementary Operators on Cameron-Martin Basis
5.1.3 Elements of Stationary Malliavin Calculus
5.1.4 Problems
5.2 Stationary SPDEs
5.2.1 Definitions and Basic Examples
5.2.2 Solving Stationary SPDEs by Weighted Wiener Chaos
5.2.2.1 Existence and Uniqueness of Solutions
5.3 Elements of Malliavin Calculus for Brownian Motion
5.3.1 Cameron-Martin Basis for Scalar Brownian Motion
5.3.1.1 Cameron-Martin Basis for Brownian Motion in a Hilbert Space
5.3.2 The Malliavin Derivative and Its Adjoint
5.4 Wiener Chaos Solutions for Parabolic SPDEs
5.4.1 The Propagator
5.4.2 Special Weights and The S-Transform
5.5 Further Properties of the Wiener Chaos Solutions
5.5.1 White Noise and Square-Integrable Solutions
5.5.2 Additional Regularity
5.5.3 Probabilistic Representation
5.6 Examples
5.6.1 Wiener Chaos and Nonlinear Filtering
5.6.2 Passive Scalar in a Gaussian Field
5.6.3 Stochastic Navier-Stokes Equations
5.6.4 First-Order Itô Equations
5.6.5 Problems
5.7 Distribution Free Stochastic Analysis
5.7.1 Distribution Free Polynomial Chaos
5.7.2 Distribution Free Malliavin Calculus
5.7.2.1 The Malliavin Derivative
5.7.3 Adapted Stochastic Processes
5.7.3.1 Itô-Skorokhod Isometry
5.7.4 Stochastic Differential Equations
5.7.4.1 Wick Exponential
5.7.4.2 Linear SDEs
5.7.4.3 Linear Parabolic SPDEs
5.7.4.4 Stationary SPDEs
5.7.4.5 Weighted Norms
5.7.4.6 Wick-Nonlinear SPDEs
6 Parameter Estimation for Diagonal SPDEs
6.1 Examples and General Ideas
6.1.1 An Oceanographic Model and Its Simplifications
6.1.2 Long Time vs Large Space
6.1.3 Problems
6.2 Maximum Likelihood Estimator (MLE): One Unknown Parameter
6.2.1 The Forward Problem
6.2.2 Simplified MLE (sMLE) and MLE
6.2.3 Consistency and Asymptotic Normality of sMLE
6.2.4 Asymptotic Efficiency of the MLE
6.2.5 Problems
6.3 Several Parameters and Further Directions
6.3.1 The Heat Balance Equation
6.3.2 One Parameter: Beyond an SPDE
6.3.3 Several Parameters
6.3.4 Problems
Problems of Chap. 2
Problems of Chap. 3
Problems of Chap. 4
Problems of Chap. 5
Problems of Chap. 6
References
List of Notations
Index


Universitext

Sergey V. Lototsky Boris L. Rozovsky

Stochastic Partial Differential Equations


Universitext Series editors: Sheldon Axler, San Francisco State University; Carles Casacuberta, Universitat de Barcelona; Angus MacIntyre, Queen Mary, University of London; Kenneth Ribet, University of California, Berkeley; Claude Sabbah, École polytechnique, CNRS, Université Paris-Saclay, Palaiseau; Endre Süli, University of Oxford; Wojbor A. Woyczyński, Case Western Reserve University

Universitext is a series of textbooks that presents material from a wide variety of mathematical disciplines at master's level and beyond. The books, often well class-tested by their authors, may have an informal, personal, even experimental approach to their subject matter. Some of the most successful and established books in the series have evolved through several editions, always following the evolution of teaching curricula, into very polished texts. Thus, as research topics trickle down into graduate-level teaching, first textbooks written for new, cutting-edge courses may make their way into Universitext.


Boris L. Rozovsky Division of Applied Mathematics Brown University Providence Rhode Island, USA

Sergey V. Lototsky Department of Mathematics University of Southern California Los Angeles California, USA

ISSN 0172-5939 Universitext ISBN 978-3-319-58645-8 DOI 10.1007/978-3-319-58647-2

ISSN 2191-6675 (electronic) ISBN 978-3-319-58647-2 (eBook)

Library of Congress Control Number: 2017942781 Mathematics Subject Classification (2010): 60H15, 35R60 © Springer International Publishing AG 2017 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Printed on acid-free paper This Springer imprint is published by Springer Nature The registered company is Springer International Publishing AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

To our families

Preface


As far as teaching a class goes, both authors have done so on several occasions, more or less following the order of the material in the book. It is indeed possible to cover most of the material in about 40 hours of lectures, as long as not too much time is spent on the general discussion of stochastic analysis in infinite dimensions. A more detailed description, both of the structure of this book and of the corresponding class, is in Sect. 1.2.7.

To summarize, the objective of this book is not to present all the results about stochastic partial differential equations, but instead to provide the reader with the necessary tools to understand those results (by reading other sources) and maybe even to discover a few new ones.

The authors gratefully acknowledge the support of several grants during the period the book was written: ARO (DAAD19-02-1-0374) and NSF (DMS-0237724 (CAREER) and DMS-0803378) [SL]; AFOSR (5-21024 (inter), FA9550-09-1-0613), ARO (DAAD19-02-1-0374, W911NF-07-1-0044, W911NF-13-1-0012, W911-16-1-0103), NSF (DMS 0604863, DMS 1148284), ONR (N0014-03-1-0027, N0014-07-1-0044, OSD/AFOSR 9550-05-1-0613, and SD Grant 5-21024 (inter.)) [BR]. SL gratefully acknowledges the hospitality of the Division of Applied Mathematics at Brown University on several occasions that were crucial to the success of the project.

Los Angeles, CA, USA
Providence, RI, USA
March 2017

Sergey V. Lototsky Boris L. Rozovsky


Chapter 1

Introduction

1.1 Getting Started

1.1.1 Conventions and Notations

We use the same notation $x$ for a point in the real line $\mathbb{R}$ or in a $d$-dimensional Euclidean space $\mathbb{R}^d$. For $x=(x_1,\ldots,x_d)\in\mathbb{R}^d$, $|x|=\sqrt{x_1^2+\cdots+x_d^2}$; for $x,y\in\mathbb{R}^d$, $xy=x_1y_1+\cdots+x_dy_d$. The integral over the real line can be written either as $\int_{\mathbb{R}}$ or as $\int_{-\infty}^{+\infty}$. Sometimes, when there is no danger of confusion, the domain of integration, in any number of dimensions, is omitted altogether.

The space of continuous mappings from a metric space $A$ to a metric space $B$ is denoted by $C(A;B)$. For example, given a Banach space $X$, $C\big((0,T);X\big)$ is the collection of continuous mappings from $(0,T)$ to $X$. When $B=\mathbb{R}$, we write $C(A)$. For a positive integer $n$, $C^n(A)$ is the collection of functions with $n$ continuous derivatives; for $\gamma\in(0,1)$ and $n=0,1,2,\ldots$, $C^{n+\gamma}(A)$ is the collection of functions with $n$ continuous derivatives such that the derivatives of order $n$ are Hölder continuous of order $\gamma$. Similarly, $C^{\infty}(A)$ is the collection of infinitely differentiable functions, and $C_0^{\infty}(A)$ is the collection of infinitely differentiable functions with compact support in $A$.

We will encounter the space $\mathcal{S}(\mathbb{R}^d)$ of smooth rapidly decreasing functions and its dual $\mathcal{S}'(\mathbb{R}^d)$, the space of generalized functions. Recall that $f\in\mathcal{S}(\mathbb{R}^d)$ if and only if $f\in C^{\infty}(\mathbb{R}^d)$ and
$$\lim_{|x|\to\infty}|x|^N\,|D^n f(x)|=0$$
for all non-negative integers $N$ and all partial derivatives $D^n f$ of every order $n$. When necessary (for example, here), we use the convention $D^0 f=f$. For specific partial



derivatives, we use the standard notations
$$u_t=\frac{\partial u}{\partial t},\qquad u_{x_ix_j}=\frac{\partial^2 u}{\partial x_i\,\partial x_j};$$
also $\dot v=dv/dt$. The Laplace operator is denoted by $\Delta$:
$$\Delta f=\sum_{i=1}^{d} f_{x_ix_i}.$$
The symbol $\mathrm{i}$ denotes the imaginary unit: $\mathrm{i}=\sqrt{-1}$.

The notation $a_k\sim b_k$ means $\lim_{k\to\infty}a_k/b_k=c\in(0,\infty)$, and if $c=1$, we emphasize it by writing $a_k\simeq b_k$. The notation $a_k\asymp b_k$ means $0<c_1\le a_k/b_k\le c_2<\infty$ for all sufficiently large $k$. The same notations $\sim$, $\simeq$, and $\asymp$ can be used for functions. For example, as $x\to\infty$, we have
$$2x^2+x\sim x^2,\qquad x+5\simeq x,\qquad x^2(2+\sin x)/(1+x)\asymp x.$$
Following a different set of conventions, $\xi\sim\mathcal{N}(m,\sigma^2)$ means that $\xi$ is a Gaussian (or normal) random variable with mean $m$ and variance $\sigma^2$; recall that $\mathcal{N}(0,1)$ is called a standard Gaussian (or normal) random variable.

Here are several important simplifying conventions we use in this book:
• We do not distinguish various modifications of either deterministic or random functions. Thus, in this book, all functions from the Sobolev space $H^1(\mathbb{R})$ are continuous, and so are all trajectories of the standard Brownian motion.
• We write equations driven by a Wiener process either as $du=\ldots\,dw$ or as $\dot u=\ldots\,\dot w$ (or $u_t=\ldots\,\dot w$, if it is a PDE).

With apologies to the set theory experts, we often use the words "set" and "collection" interchangeably.

We fix the stochastic basis $\mathbb{F}=\big(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},\mathbb{P}\big)$ with the usual assumptions ($\mathcal{F}_t$ is right-continuous: $\mathcal{F}_t=\bigcap_{\varepsilon>0}\mathcal{F}_{t+\varepsilon}$, and $\mathcal{F}_0$ contains all $\mathbb{P}$-negligible sets, that is, $\mathcal{F}_0$ contains every subset of $\Omega$ that is a subset of an element of $\mathcal{F}$ with $\mathbb{P}$-measure zero).

1.1.2 Dealing with Noise

A stochastic ordinary differential equation defines a function of time and is driven by a noise process in time. A stochastic partial differential equation describes a function of time and space and is driven by a noise process in time and space. In what follows, we outline a construction of such space-time noise processes.


Let k ; k  1; be independent standard Gaussian random variables; see Krylov [118, Lemma II.2.3] for the proof that countable many standard Gaussian random variables can exist on a suitable stochastic basis. If fmk .t/; k  1g is an orthonormal basis in L2 ..0; T//, then  X Z t mk .s/ds k (1.1.1) w.t/ D 0

k1

isa standard Brownian motion: a Gaussian process with zero mean and covariance E w.t/w.s/ D min.t; s/. Exercise 1.1.1 (C) Verify that Ew.t/w.s/ D min.t; s/. Hint. Note that

Rt

0 mk .s/ds is the Fourier coefficient of the indicator function of the interval Œ0; t and use the Parseval identity connecting the inner product of two functions to their Fourier coefficients.
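The computation behind the hint can also be run numerically: by Parseval's identity, $\sum_k c_k(t)c_k(s)=\big(\mathbf{1}_{[0,t]},\mathbf{1}_{[0,s]}\big)=\min(t,s)$, where $c_k(t)=\int_0^t m_k(s)\,ds$. A minimal sketch, assuming one concrete (and otherwise arbitrary) choice of orthonormal basis, $m_k(s)=\sqrt{2/T}\,\cos\big((k-\tfrac12)\pi s/T\big)$:

```python
import numpy as np

def c(k, t, T=1.0):
    # c_k(t) = int_0^t m_k(s) ds for m_k(s) = sqrt(2/T) cos((k - 1/2) pi s / T),
    # an orthonormal basis of L2((0, T))
    a = (k - 0.5) * np.pi / T
    return np.sqrt(2.0 / T) * np.sin(a * t) / a

def covariance(t, s, K=20000, T=1.0):
    # Parseval: sum_k c_k(t) c_k(s) = <1_[0,t], 1_[0,s]> = min(t, s),
    # which is exactly E[w(t) w(s)] for w defined by (1.1.1); truncated at K terms
    k = np.arange(1, K + 1)
    return float(np.sum(c(k, t, T) * c(k, s, T)))

print(covariance(0.3, 0.7))   # close to min(0.3, 0.7) = 0.3
```

The truncation error of the series is of order $1/K$, so the printed value agrees with $\min(t,s)$ to three or four digits.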

We now take this construction to the next level by considering a collection $\{w_k=w_k(t),\,k\ge1\}$ of independent standard Brownian motions, $t\in[0,T]$, and an orthonormal basis $\{h_k(x),\,k\ge1,\ x\in G\}$ in the space $L_2(G)$, with $G=(0,L)^d=(0,L)\times\cdots\times(0,L)$, a $d$-dimensional hyper-cube. For $x=(x_1,x_2,\ldots,x_d)$, define
$$H_k(x)=\int_0^{x_1}\int_0^{x_2}\cdots\int_0^{x_d}h_k(r_1,\ldots,r_d)\,dr_d\cdots dr_1.$$
Then the process
$$W(t,x)=\sum_{k\ge1}H_k(x)\,w_k(t) \qquad(1.1.2)$$
is Gaussian, with
$$\mathbb{E}\,W(t,x)=0,\qquad \mathbb{E}\big(W(t,x)W(s,y)\big)=\min(t,s)\prod_{k=1}^{d}\min(x_k,y_k). \qquad(1.1.3)$$
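For $d=1$ and $G=(0,1)$, the covariance formula (1.1.3) can be probed by direct simulation. A minimal sketch, assuming the cosine basis $h_k(r)=\sqrt2\,\cos\big((k-\tfrac12)\pi r\big)$ of $L_2((0,1))$, so that $H_k$ has a closed form; the truncation level $K$ and sample size $N$ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

K, N = 100, 50_000            # series truncation, Monte Carlo sample size
t, s, x, y = 0.8, 0.5, 0.4, 0.9

k = np.arange(1, K + 1)
a = (k - 0.5) * np.pi
H = lambda z: np.sqrt(2.0) * np.sin(a * z) / a   # H_k(z) = int_0^z h_k(r) dr

# w_k(s) and w_k(t) for independent Brownian motions, simulated exactly
# via Gaussian increments on the grid {0, s, t} (here s < t)
ws = np.sqrt(s) * rng.standard_normal((N, K))
wt = ws + np.sqrt(t - s) * rng.standard_normal((N, K))

W_tx = wt @ H(x)              # truncated series (1.1.2) for W(t, x)
W_sy = ws @ H(y)

print(np.mean(W_tx * W_sy))   # near min(t, s) * min(x, y) = 0.2
```

Both the series truncation and the Monte Carlo error contribute of the order $10^{-3}$ each, so the estimate matches $\min(t,s)\min(x,y)$ to about two digits.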

Exercise 1.1.2 (C) Verify (1.1.3).

We call the process $W$ from (1.1.2) the Brownian sheet.

The derivative of the standard Brownian motion, while it does not exist in the usual sense, is the standard model of the Gaussian white noise $\dot w$. The formal term-by-term differentiation of the series in (1.1.1) suggests a representation
$$\dot w(t)=\sum_{k\ge1}m_k(t)\,\xi_k. \qquad(1.1.4)$$
While the series certainly diverges, it does define a random generalized function on $L_2((0,T))$ according to the rule
$$\dot w(f)=\sum_{k\ge1}f_k\,\xi_k,\qquad f_k=\int_0^T f(t)m_k(t)\,dt. \qquad(1.1.5)$$


Exercise 1.1.3 (C)
(a) Verify that the series in (1.1.5) converges with probability one to a Gaussian random variable with zero mean and variance $\sum_k f_k^2=\int_0^T f^2(t)\,dt$.
(b) Let $w$ be a standard Brownian motion and $f\in L_2((0,T))$ a non-random function. Show that the random variables $\xi_k=\int_0^T m_k(t)\,dw(t)$, $k\ge1$, are iid standard normal. Then use these random variables to define $\dot w$ according to (1.1.4) and verify that $\int_0^T f(s)\,dw(s)=\dot w(f)$.

Let us now take (1.1.4) to the next level and write
$$\dot W(t,x)=\sum_{k\ge1}h_k(x)\,\dot w_k(t), \qquad(1.1.6)$$
where $\{h_k,\,k\ge1\}$ is an orthonormal basis in $L_2(G)$ and $G\subseteq\mathbb{R}^d$ is an open set. We call the process $\dot W$ the (Gaussian) space-time white noise. It is a random generalized function on $L_2\big((0,T)\times G\big)$:
$$\dot W(f)=\sum_{k\ge1}\int_0^T\Big(\int_G f(t,x)h_k(x)\,dx\Big)\,dw_k(t). \qquad(1.1.7)$$
Sometimes, an alternative notation is used for $\dot W(f)$:
$$\dot W(f)=\int_0^T\!\!\int_G f(t,x)\,dW(t,x). \qquad(1.1.8)$$
Even more confusing, $(\dot W,f)$ and $\dot W f$ can also be used to denote $\dot W(f)$, although we use these notations for (slightly) different purposes.

Exercise 1.1.4 (C)
(a) Verify that
$$\mathbb{E}\big(\dot W(f)\big)^2=\int_0^T\!\!\int_G f^2(t,x)\,dx\,dt. \qquad(1.1.9)$$
(b) Verify that
$$\dot W(t,x)=\sum_{k,n}m_k(t)h_n(x)\,\xi_{k,n},$$
where $\xi_{k,n}=\dot W(m_k h_n)$.

Unlike the Brownian sheet, the space-time white noise $\dot W$ is defined on every domain $G\subseteq\mathbb{R}^d$, and not just on hyper-cubes $(0,L)^d$, as long as we can find an orthonormal basis $\{h_k,\,k\ge1\}$ in $L_2(G)$.


An alternative description of the Gaussian white noise is as a zero-mean Gaussian process $\dot w=\dot w(t)$ such that $\mathbb{E}\,\dot w(t)\dot w(s)=\delta(t-s)$, where $\delta$ is the Dirac delta function. Similarly, we have
$$\mathbb{E}\,\dot W(t,x)\dot W(s,y)=\delta(t-s)\,\delta(x-y).$$
To construct noise that is white in time and colored in space, take a sequence of non-negative numbers $\{q_k,\,k\ge1\}$ and define
$$\dot W_Q(t,x)=\sum_{k\ge1}q_k h_k(x)\,\dot w_k(t), \qquad(1.1.10)$$
where $\{h_k,\,k\ge1\}$ is an orthonormal basis in $L_2(G)$, $G\subseteq\mathbb{R}^d$. The noise is called finite-dimensional if $q_k=0$ for all sufficiently large $k$.

Exercise 1.1.5 (C) Verify that if
$$\sum_{k\ge1}q_k^2\,\sup_{x\in G}h_k^2(x)<\infty,$$
then the function $q(x,y)=\sum_{k\ge1}q_k^2\,h_k(x)h_k(y)$ is well defined and
$$\mathbb{E}\,\dot W_Q(t,x)\dot W_Q(s,y)=\delta(t-s)\,q(x,y).$$
Similarly, the expressions
$$\dot W(x)=\sum_{k\ge1}h_k(x)\,\xi_k \qquad(1.1.11)$$
and
$$\dot W_Q(x)=\sum_{k\ge1}q_k h_k(x)\,\xi_k$$
define, respectively, the Gaussian white noise and the Gaussian colored noise in space. Similar to (1.1.8), we write $\dot W(f)$ or $\int_G f(x)\,dW(x)$ to denote $\sum_{k\ge1}f_k\xi_k$, with $f_k=\int_G f(x)h_k(x)\,dx$. According to (1.1.11), if $\dot W$ is a Gaussian white noise in space and $\{h_k,\,k\ge1\}$ is an orthonormal basis, then $\xi_k=\dot W(h_k)$, $k\ge1$, are iid standard Gaussian random variables.

Throughout this book we will mostly be working with the Gaussian noises $\dot W$ and $\dot W_Q$, although two other types of noise are becoming increasingly popular in the study of SPDEs: fractional Gaussian noise and Lévy noise.


1.1.3 A Few Useful Equalities

Itô formula: if $F=F(x)$ is a smooth function and $w=w(t)$ is a standard Brownian motion, then
$$F\big(w(t)\big)=F(0)+\int_0^t F'\big(w(s)\big)\,dw(s)+\frac12\int_0^t F''\big(w(s)\big)\,ds.$$

Itô isometry: if $w$ is a standard Brownian motion and $f$ an adapted process, then
$$\mathbb{E}\Big(\int_0^T f(t)\,dw(t)\Big)^2=\int_0^T\mathbb{E}f^2(t)\,dt.$$

Fourier transform:
$$\widehat f(y)=\frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d}f(x)\,e^{-\mathrm{i}xy}\,dx,\qquad \mathrm{i}=\sqrt{-1}.$$
Recall that $\widehat f$ is defined on the generalized functions from $\mathcal{S}'(\mathbb{R}^d)$ by $(\widehat f,\varphi)=(f,\widehat\varphi)$, $f\in\mathcal{S}'(\mathbb{R}^d)$, $\varphi\in\mathcal{S}(\mathbb{R}^d)$ (for a quick review, see Rauch [193, Sect. 2.4]).

Inverse Fourier transform:
$$\check f(y)=\frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d}f(x)\,e^{\mathrm{i}xy}\,dx=\widehat f(-y).$$

The standard normal density is unchanged under the Fourier transform:
$$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}\frac{e^{-x^2/2}}{\sqrt{2\pi}}\,e^{-\mathrm{i}xy}\,dx=\frac{e^{-y^2/2}}{\sqrt{2\pi}}. \qquad(1.1.12)$$
For three different ways to establish this, see Andrews et al. [2, Exercise 1 for Chap. 6].

Parseval's identity: if $\{h_k,\,k\ge1\}$ is an orthonormal basis in a Hilbert space $X$ and $f\in X$, then
$$\sum_{k\ge1}(f,h_k)_X^2=\|f\|_X^2.$$
The result is essentially an infinite-dimensional Pythagorean theorem and is named after the French mathematician Marc-Antoine Parseval des Chênes (1755–1836).
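The two Itô identities can be sanity-checked on a simulated path. For $F(x)=x^2$ the Itô formula reduces to $w(T)^2=2\int_0^T w\,dw+T$; on a discrete grid the correction term appears as the sum of squared increments. A minimal sketch (the grid size $n$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)

T, n = 1.0, 1_000_000
dw = np.sqrt(T / n) * rng.standard_normal(n)   # Brownian increments
w = np.concatenate(([0.0], np.cumsum(dw)))     # path on the grid

ito_integral = np.sum(w[:-1] * dw)             # left-point Riemann sums -> Ito integral
quad_var = np.sum(dw**2)                       # quadratic variation -> T

# Ito formula with F(x) = x^2: w(T)^2 = 2 * int_0^T w dw + T
print(w[-1]**2 - 2 * ito_integral, quad_var)   # both close to T = 1
```

The discrete identity $w(T)^2-2\sum_i w_i\,\Delta w_i=\sum_i(\Delta w_i)^2$ holds exactly (it telescopes), and the right-hand side concentrates around $T$ as $n\to\infty$, which is precisely the $\tfrac12\int F''\,ds$ term.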


Exercise 1.1.6 (C) Let $\{m_k(t),\ k\ge1\}$, $t\in[0,T]$, be an orthonormal basis in $L_2((0,T))$. Show that

\[
\sum_{k=1}^{\infty}\Big(\int_0^t m_k(s)\,ds\Big)^2 = t. \tag{1.1.13}
\]

Hint. $\int_0^t m_k(s)\,ds$ is the Fourier coefficient of what function?
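Equality (1.1.13) can also be checked numerically for a concrete basis. The sketch below assumes, for illustration, the cosine basis of $L_2((0,T))$: $m_1(s)=1/\sqrt T$ and $m_k(s)=\sqrt{2/T}\cos\big((k-1)\pi s/T\big)$ for $k\ge2$, and truncates the sum.

```python
import math

def lhs_of_1113(t, T=1.0, K=100_000):
    """Truncated left-hand side of (1.1.13) for the cosine basis of L2((0,T))."""
    total = (t / math.sqrt(T)) ** 2          # k = 1: integral of 1/sqrt(T) over (0, t)
    for k in range(1, K):
        # integral over (0, t) of sqrt(2/T) * cos(k*pi*s/T)
        c = math.sqrt(2.0 / T) * (T / (k * math.pi)) * math.sin(k * math.pi * t / T)
        total += c * c
    return total
```

`lhs_of_1113(0.3)` returns a value close to `0.3`, and the hint is visible in the code: each summand is the squared Fourier coefficient of the indicator function of $(0,t)$, so by Parseval's identity the sum equals $\|1_{(0,t)}\|^2_{L_2((0,T))}=t$.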

Plancherel's identity, or the isometry of the Fourier transform: if $f$ is a smooth function with compact support in $\mathbb{R}^d$ and

\[
\hat f(y) = \frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d} f(x)\,e^{-ixy}\,dx,
\]

then

\[
\int_{\mathbb{R}^d}|f(x)|^2\,dx = \int_{\mathbb{R}^d}|\hat f(y)|^2\,dy.
\]

This result is essentially a continuum version of Parseval's identity and is named after the Swiss mathematician MICHEL PLANCHEREL (1885–1967).

Stirling's formula for the Gamma function:

\[
\Gamma(x+1) = \sqrt{2\pi x}\,\Big(\frac{x}{e}\Big)^{x}\Big(1+\frac{1}{12x}+\frac{1}{288x^2}+O\big(x^{-3}\big)\Big),\qquad x\to+\infty,
\]

named after the Scottish mathematician JAMES STIRLING (1692–1770). Recall that

\[
\Gamma(x) = \int_0^{\infty} t^{x-1}e^{-t}\,dt,\ x>0,\qquad\text{and}\qquad \Gamma(n+1)=n!,\ n=0,1,2,\dots
\]
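A quick numerical sanity check of the three-term Stirling approximation against Python's `math.gamma`; the test points are arbitrary.

```python
import math

def stirling_gamma(x):
    """Three-term Stirling approximation of Gamma(x + 1)."""
    return (math.sqrt(2 * math.pi * x) * (x / math.e) ** x
            * (1 + 1 / (12 * x) + 1 / (288 * x ** 2)))

for x in (5.0, 10.0, 50.0):
    exact = math.gamma(x + 1)
    rel_err = abs(stirling_gamma(x) - exact) / exact
    print(x, rel_err)   # the relative error decreases roughly like 1/x^3
```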

1.1.4 A Few Useful Inequalities

Below, we summarize several inequalities that are always good to know: Burkholder-Davis-Gundy, epsilon, Gronwall, Hölder, and Jensen, and we recall one particular embedding theorem for Sobolev spaces.

To state the Burkholder-Davis-Gundy inequality, recall that we are always working on the stochastic basis $\mathbb{F}=(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},\mathbb{P})$. A square-integrable martingale on $\mathbb{F}$ is a process $M=M(t)$ with values in $\mathbb{R}^d$ such that $M(0)=0$, $\mathbb{E}|M(t)|^2<\infty$, and $\mathbb{E}\big(M(t)\,|\,\mathcal{F}_s\big)=M(s)$ for all $t\ge s\ge0$. The quadratic variation of $M$ is the continuous non-decreasing real-valued process $\langle M\rangle$ such that $|M|^2-\langle M\rangle$ is a martingale. A stopping (or Markov) time $\tau$ on $\mathbb{F}$ is a nonnegative random variable such that $\{\omega:\tau(\omega)>t\}\in\mathcal{F}_t$ for all $t\ge0$. Recall that $a\wedge b$ means $\min(a,b)$.

Theorem 1.1.7 (Burkholder-Davis-Gundy (BDG) Inequality) For every $p\in(0,+\infty)$, there exist two positive real numbers $c_p$, $C_p$ such that, for every continuous square-integrable martingale $M=M(t)$ with values in $\mathbb{R}^d$ and $M(0)=0$, and for every stopping time $\tau$,

\[
c_p\,\mathbb{E}\langle M\rangle^{p/2}(\tau) \le \mathbb{E}\,\sup_{t}|M(t\wedge\tau)|^p \le C_p\,\mathbb{E}\langle M\rangle^{p/2}(\tau). \tag{1.1.14}
\]

In particular, if $w$ is a standard Brownian motion and $f$, an adapted process, then

\[
\mathbb{E}\,\sup_{t\le T}\Big|\int_0^t f(s)\,dw(s)\Big|^p \le C_p\,\mathbb{E}\Big(\int_0^T f^2(s)\,ds\Big)^{p/2}.
\]

Let $T>0$ be fixed and non-random, and let $A$, $B$, $P$, $Q$ be adapted processes such that

\[
\mathbb{E}\int_0^T\big(|A(t)|+|P(t)|+B^2(t)+Q^2(t)\big)\,dt<\infty.
\]

Consider two processes $X$ and $Y$ defined for $0\le t\le T$ by

\[
X(t) = X_0 + \int_0^t A(s)\,ds + \int_0^t B(s)\,dw(s),\qquad
Y(t) = Y_0 + \int_0^t P(s)\,ds + \int_0^t Q(s)\,dw(s), \tag{2.1.10}
\]

and let

\[
\langle X,Y\rangle(t) = \int_0^t B(s)\,Q(s)\,ds.
\]

Also, recall the notation $a\wedge b=\min(a,b)$. Given a non-random partition $0=t_1<t_2<\dots<t_N=T$ of the interval $[0,T]$, let $\triangle_N=\max_k\big(t_{k+1}-t_k\big)$. Recall that the Itô integral $\int_0^t X(s)\,dY(s)$ is defined by

\[
\int_0^t X(s)\,dY(s) = \lim_{\triangle_N\to0}\sum_k X(t_k)\big(Y(t_{k+1}\wedge t)-Y(t_k\wedge t)\big),
\]

where the limit is in probability [192, Theorem II.21]. This integral is named after the Japanese mathematician KIYOSI ITÔ (1915–2008), who introduced it in 1944.

It is possible to show that each of the following can be taken as the definition of the Stratonovich integral $\int_0^t X(s)\circ dY(s)$, $0\le t\le T$:

\[
\int_0^t X(s)\circ dY(s) = \lim_{\triangle_N\to0}\sum_k \frac{X(t_{k+1})+X(t_k)}{2}\,\big(Y(t_{k+1}\wedge t)-Y(t_k\wedge t)\big);
\]
\[
\int_0^t X(s)\circ dY(s) = \lim_{\triangle_N\to0}\sum_k X\Big(\frac{t_{k+1}+t_k}{2}\Big)\big(Y(t_{k+1}\wedge t)-Y(t_k\wedge t)\big);
\]
\[
\int_0^t X(s)\circ dY(s) = \int_0^t X(s)\,dY(s) + \frac12\,\langle X,Y\rangle(t), \tag{2.1.11}
\]

where the limits are in probability. For details, see Protter [192, Theorems V.26 and V.30]. The integral is named after the Soviet scientist RUSLAN LEONT'EVICH STRATONOVICH (1930–1997), who described it in 1964. It was also discovered independently by D.L. FISK, but only appeared as a part of his Ph.D. dissertation, written in 1964 under HERMAN RUBIN at the Department of Statistics, Michigan State University. For more details about the history of stochastic calculus, see the paper by Jarrow and Protter.

Note that $\int_0^t X(s)\circ dY(s)=\int_0^t X(s)\,dY(s)$ when $\langle X,Y\rangle=0$, which is the case, for example, if either $B=0$ or $Q=0$, that is, if either $X$ or $Y$ has no martingale component.

The Stratonovich integral appears naturally in the following situation. Suppose that the trajectories of the Brownian motion are approximated by piece-wise continuously differentiable functions $w_n=w_n(t)$ so that

\[
\lim_{n\to\infty}\ \sup_{0\le t\le T}\,|w_n(t)-w(t)| = 0.
\]

… > 0, and look at $f(0)-g(0)$.
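The difference between the Riemann sums in (2.1.11) and the Itô sum can be seen in a simulation; the sketch below (the step count and seed are arbitrary choices) computes left-point and midpoint sums for $\int_0^1 w\,dw$. By the Itô formula the Itô integral equals $(w^2(1)-1)/2$, while the Stratonovich integral equals $w^2(1)/2$; the two differ by $\tfrac12\langle w,w\rangle(1)=\tfrac12$.

```python
import math
import random

random.seed(42)

# One Brownian path on a fine grid of 2N + 1 points covering [0, 1].
N = 100_000
h = 1.0 / (2 * N)
w = [0.0]
for _ in range(2 * N):
    w.append(w[-1] + random.gauss(0.0, math.sqrt(h)))

# Riemann sums over the coarse partition t_k = k/N, with the fine grid
# supplying the midpoint values.
ito = sum(w[2 * k] * (w[2 * k + 2] - w[2 * k]) for k in range(N))        # left point
strat = sum(w[2 * k + 1] * (w[2 * k + 2] - w[2 * k]) for k in range(N))  # midpoint

w1 = w[-1]
print(ito - (w1 ** 2 - 1) / 2, strat - w1 ** 2 / 2)  # both close to 0
```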

(b) Show that the Sobolev space $H^1(\mathbb{R})$ (see Example 3.1.9 on page 79) is a reproducing kernel Hilbert space. Hint. For a smooth compactly supported $f$, $f(x)=(2\pi)^{-1/2}\int_{\mathbb{R}} e^{ixy}\hat f(y)\,dy$. By the Cauchy-Schwarz inequality, $|f(x)-g(x)|\le(1/\sqrt2)\,\|f-g\|_1$.

To understand the origin of the name reproducing kernel Hilbert space, we need one more definition.

Definition 3.1.48 Let $S$ be a set. A real-valued function $K$ defined on $S\times S$ is called a positive-definite kernel on $S$ if $K(t,s)=K(s,t)$, $s,t\in S$, and, for every integer $N\ge1$, every collection of points $s_1,\dots,s_N$ from $S$, and every collection of real numbers $y_1,\dots,y_N$, the following inequality holds:

\[
\sum_{i,j=1}^{N} K(s_i,s_j)\,y_i\,y_j \ge 0. \tag{3.1.31}
\]

In other words, every matrix of the form $\big(K(s_i,s_j),\ i,j=1,\dots,N\big)$ is symmetric and non-negative definite.
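A randomized sanity check of (3.1.31) for the kernel $K(s,t)=\min(s,t)$, which will reappear in Sect. 3.2 as the covariance of Brownian motion; the number of points and trials is an arbitrary choice.

```python
import random

random.seed(0)

def K(s, t):
    """A positive-definite kernel on [0, 1]: the Brownian-motion covariance."""
    return min(s, t)

def quadratic_form(pts, y):
    """The double sum in (3.1.31) for points pts and weights y."""
    return sum(K(s, t) * a * b for s, a in zip(pts, y) for t, b in zip(pts, y))

for _ in range(1000):
    pts = [random.random() for _ in range(8)]
    y = [random.uniform(-1.0, 1.0) for _ in range(8)]
    assert quadratic_form(pts, y) >= -1e-12   # non-negative up to rounding
```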


Exercise 3.1.49 (B)
(a) Verify that a positive-definite kernel $K$ has the following properties: (i) $K(t,t)\ge0$; (ii) $|K(s,t)|^2\le K(s,s)\,K(t,t)$.
(b) Verify that a real continuous symmetric function $K$ on $[0,1]\times[0,1]$ is a positive-definite kernel on $[0,1]$ if and only if

\[
\int_0^1\!\!\int_0^1 K(s,t)\,f(s)\,f(t)\,ds\,dt \ge 0
\]

for every $f\in L_2((0,1))$.

The following theorem connects the reproducing kernel Hilbert spaces and the positive-definite kernels.

Theorem 3.1.50 Let $S$ be a set.
(1) For every reproducing kernel Hilbert space $H$ of functions on $S$, there exists a unique positive-definite kernel $K_H$ on $S$, called the reproducing kernel of $H$, such that, for every $s\in S$ and $f\in H$,

\[
f(s) = \big(f,\ K_H(s,\cdot)\big)_H
\]

(note that, for every fixed $s$, $K_H(s,\cdot)$ is a function on $S$ and therefore can be an element of $H$).
(2) For every positive-definite kernel $K$ on $S$, there exists a reproducing kernel Hilbert space $H$ of real-valued functions on $S$ such that $K$ is the reproducing kernel of $H$: $K_H=K$.

Proof For the main idea, see parts (a) and (b) of the following exercise. For details, see Aronszajn [4, Sect. I.2].

Exercise 3.1.51 (C)
(a) Derive the existence and uniqueness of $K_H$ from the Riesz representation theorem. Hint. The continuity of $f\mapsto f(s)$ implies $f(s)=(f,g_s)_H$ for some $g_s\in H$; then $K_H(s,t)=g_s(t)$.

(b) Show that, given a kernel $K$, the corresponding reproducing kernel Hilbert space $H$ can be constructed as the closure of the set of finite sums of the form $\sum_k a_k K(x_k,\cdot)$. The closure is taken with respect to the norm generated by the inner product

\[
\Big(\sum_k a_k K(x_k,\cdot),\ \sum_n b_n K(y_n,\cdot)\Big)_H = \sum_{k,n} a_k\,b_n\,K(x_k,y_n).
\]

(c) Let $H$ be a reproducing kernel Hilbert space. Show that

\[
\big(K(s,\cdot),\ K(t,\cdot)\big)_H = K(s,t).
\]


(d) Let $H$ be a reproducing kernel Hilbert space with an orthonormal basis $\{m_k,\ k\ge1\}$. Show that

\[
K_H(s,t) = \sum_{k\ge1} m_k(s)\,m_k(t).
\]

3.1.3 Problems

Together with the conclusion of Problem 3.1.1, the reader should keep in mind that while every two separable Hilbert spaces are isomorphic, they can be very different. Problem 3.1.2 shows how, given an arbitrary separable Hilbert space $H$, one can construct a Hilbert space $\tilde H$ such that the inclusion operator $j\colon H\to\tilde H$ is Hilbert-Schmidt; this construction is commonly used in the study of random elements with values in Hilbert spaces. Problem 3.1.3 provides two useful technical results: (a) a way to prove strong convergence in a Hilbert space, and (b) an analogue of the fundamental theorem of calculus in a normal triple of Hilbert spaces. Problem 3.1.4 introduces a class of Hilbert spaces involving time, useful in the study of evolution equations. Problem 3.1.5 summarizes some facts about linear operators that were not mentioned in the text. Problem 3.1.6 discusses linear operators in Hilbert space tensor products. Problems 3.1.7 and 3.1.8 illustrate the importance of the underlying set $S$ in the study of reproducing kernel Hilbert spaces. Problems 3.1.9 and 3.1.10 present examples of constructing a reproducing kernel Hilbert space given the kernel $K$. Problem 3.1.11 demonstrates a connection between the Hilbert space tensor product and the Hilbert-Schmidt operators.

Problem 3.1.1 Show that every two separable Hilbert spaces are isomorphic.

Problem 3.1.2 Let $H$ be a separable Hilbert space with an orthonormal basis $\{m_k,\ k\ge1\}$ and let $\{q_k,\ k\ge1\}$ be a sequence of positive real numbers such that $\sum_{k\ge1}q_k^2<\infty$. Define the space $\tilde H$ as the closure of $H$ with respect to the norm

\[
\|f\|_{\tilde H} = \Big(\sum_{k\ge1} q_k^2\,(f,m_k)_H^2\Big)^{1/2}.
\]

Show that the inclusion $j\colon H\to\tilde H$ is a Hilbert-Schmidt operator and $\mathrm{tr}(jj^*)=\sum_{k\ge1}q_k^2=\mathrm{tr}(j^*j)$.

Problem 3.1.3
(a) Let $H$ be a Hilbert space and $h,h_1,h_2,\dots\in H$. Show that $\lim_n\|h-h_n\|_H=0$ (strong convergence) if and only if $\lim_n(h_n,x)_H=(h,x)_H$ for every $x$ from a dense subset of $H$ (weak convergence on a dense subset) and $\lim_n\|h_n\|_H=\|h\|_H$ (convergence of norms).


(b) Let $(V,H,V')$ be a normal triple of Hilbert spaces and let $u=u(t)$ be an element of $L_2((0,T);V)$ such that, for all $t\in[0,T]$,

\[
u(t) = u_0 + \int_0^t f(s)\,ds \tag{3.1.32}
\]

for some $u_0\in H$ and $f\in L_2((0,T);V')$ (equality (3.1.32) is in $V'$). Show that $u\in C((0,T);H)$ and

\[
\sup_{0\le t\le T}\|u(t)\|_H^2 \le \|u_0\|_H^2 + \cdots
\]

\[
\int_{\mathbb{R}^d} e^{i\,y^{\top}x}\,\mu(dx) = e^{\,i\,m^{\top}y-\frac12\,y^{\top}Ry} \tag{3.2.1}
\]

for some $d$-dimensional vector $m$ and a $d\times d$-dimensional symmetric non-negative definite matrix $R$; $y^{\top}$ means the transpose of the column-vector $y$. Hint. $R_{ij}=\int_{\mathbb{R}^d} x_i\,x_j\,\mu(dx)$.

(b) Denote by $V'$ the set of linear functionals on $V$. Verify that a centered Gaussian measure $\mu$ on $V$ defines a positive-definite kernel $C_\mu$ on $V'$ by

\[
C_\mu(\ell,h) = \int_V \ell(v)\,h(v)\,\mu(dv). \tag{3.2.2}
\]

For example, let $V=\{f\in C([0,1]),\ f(0)=0\}$ with the sup norm. For every $s\in[0,1]$, the point mass at $s$ (Dirac delta-function) $\delta_s$ is a continuous linear functional on $V$. If $\mu$ is the Wiener measure on $V$, then, since the canonical process on $(V,\mathcal{B}(V),\mu)$ is the standard Brownian motion, we find

\[
C_\mu(\delta_s,\delta_t) = \int_V f(s)\,f(t)\,\mu(df) = \mathbb{E}\big(w(s)w(t)\big) = \min(t,s). \tag{3.2.3}
\]

Computation of the general expression for $C_\mu$ in this example is the subject of Problem 3.2.5 on page 126.

Definition 3.2.4 The reproducing kernel Hilbert space $H_\mu$ with the reproducing kernel $C_\mu$ is called the reproducing kernel Hilbert space of the Gaussian measure $\mu$. If $X$ is a $V$-valued Gaussian random element with distribution $\mu$, then $H_\mu$ is also called the reproducing kernel Hilbert space of $X$.

There are two ways to look at the above definition. Recall that one of the starting points in the construction of a reproducing kernel Hilbert space is a collection of functions on a set $S$. In the setting of Definition 3.2.4, we take $S=V'$, and then, for fixed $\ell\in V'$, the mapping

\[
h \mapsto C_\mu(\ell,h) = \int_V \ell(v)\,h(v)\,\mu(dv) =: K(\ell,h) \tag{3.2.4}
\]

becomes a real-valued function on $V'$. Following the procedure outlined in Exercise 3.1.51(b) on page 95, the space $H_\mu$ becomes the closure of the set of finite linear combinations of the type

\[
\sum_k a_k \int_V h_k(v)\,v\,\mu(dv)
\]

and is, in particular, a sub-space of $V$. For an example illustrating this construction, see Problem 3.2.6 on page 126 below.
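Formula (3.2.3) is easy to confirm by simulation. The sketch below (sample size and seed are arbitrary choices) estimates $\mathbb{E}\big(w(s)w(t)\big)$ from independent Brownian increments.

```python
import math
import random

def bm_covariance_estimate(s=0.3, t=0.7, M=200_000, seed=1):
    """Monte Carlo estimate of E[w(s) w(t)] for standard Brownian motion."""
    random.seed(seed)
    acc = 0.0
    for _ in range(M):
        ws = math.sqrt(s) * random.gauss(0.0, 1.0)          # w(s)
        wt = ws + math.sqrt(t - s) * random.gauss(0.0, 1.0)  # w(t), independent increment
        acc += ws * wt
    return acc / M

print(bm_covariance_estimate())  # close to min(0.3, 0.7) = 0.3
```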


Alternatively, one can look at (3.2.4) as a definition of an operator $\ell\mapsto C_\mu(\ell,\cdot)$ from $V'$ to $(V')'$, which could explain why $C_\mu$ is called the covariance operator of the Gaussian measure $\mu$. If $V=H$ is a Hilbert space, identified with its dual, then $C_\mu$ in this interpretation becomes an operator from $H$ to itself: for every $h\in H$, $C_\mu(h,\cdot)$ is the unique element $h^*$ of $H$ such that, according to the Riesz representation theorem, $C_\mu(h,x)=(h^*,x)_H$.

Theorem 3.2.5 Let $H$ be a separable Hilbert space, identified with its dual: $H'=H$.
(a) If $\mu$ is a Gaussian measure on $H$, then its covariance operator is nuclear.
(b) Conversely, if $K\colon H\to H$ is a self-adjoint non-negative nuclear operator, then there exists a Gaussian measure $\mu$ on $H$ such that $C_\mu(f,g)=(Kf,g)_H$ for all $f,g\in H$.

Proof (a) Take an orthonormal basis $\{h_k,\ k\ge1\}$ in $H$. Then

\[
\sum_k C_\mu(h_k,h_k) = \sum_k\int_H (h_k,x)_H^2\,\mu(dx) = \int_H \|x\|_H^2\,\mu(dx) < \infty.
\]

The last inequality follows from a result due to Fernique: for every Gaussian measure $\mu$ on a Hilbert space $H$, there exists a positive number $a$ such that $\int_H e^{a\|x\|_H^2}\,\mu(dx)<\infty$; see Bogachev [13, Theorem 2.8.5].
(b) Let $Km_k=\lambda_k m_k$ and assume that $\{m_k,\ k\ge1\}$ is an orthonormal basis in $H$. Given iid standard Gaussian random variables $\xi_k$, we get the measure $\mu$ as the probability distribution of the $H$-valued random element $X=\sum_{k\ge1}\sqrt{\lambda_k}\,\xi_k\,m_k$.

The operator $K$ can also be called the covariance operator of $\mu$. Given a Gaussian measure on a linear topological space $V$, we immediately get a corresponding canonical Gaussian random element with values in $V$. We will now discuss other ways of defining Gaussian random elements.

Definition 3.2.6 A Gaussian field on $G\subseteq\mathbb{R}^d$ is a collection of random variables $X=X(P)$, $P\in G$, such that, for every $\{P_1,\dots,P_n\}\subset G$, the random variables $X(P_i)$, $i=1,\dots,n$, form a Gaussian vector.

In what follows, we consider zero-mean fields: $\mathbb{E}X(P)=0$ for all $P\in G$. A zero-mean Gaussian field is characterized by its covariance function $q(P,Q)=\mathbb{E}\big(X(P)X(Q)\big)$; the field is called homogeneous if there exists a function $\varphi=\varphi(t)$, $t\ge0$, such that $q(P,Q)=\varphi(|P-Q|)$, where $|P-Q|$ is the Euclidean distance between the points $P$ and $Q$. We write $P,Q$ instead of $x,y$ to stress that the objects are not tied to any particular coordinate system in $\mathbb{R}^d$.


As an example and a motivation for the discussion to follow, let $\ell$ be the Lebesgue measure on $\mathbb{R}^d$ and let $W$ be a random set function on $\{A:A\in\mathcal{B}(\mathbb{R}^d),\ \ell(A)<\infty\}$. We assume that $W$ has the following properties:

1. $W(A)$ is a Gaussian random variable with mean zero and variance $\ell(A)$;
2. if $A\cap B=\varnothing$, then $W(A\cup B)=W(A)+W(B)$ and the random variables $W(A)$, $W(B)$ are independent.

To construct $W$, take a collection $\{\xi_k,\ k\ge1\}$ of independent standard Gaussian random variables and an orthonormal basis $\{m_k,\ k\ge1\}$ in $L_2(\mathbb{R}^d)$. Then

\[
W(A) = \sum_{k\ge1}\xi_k\int_A m_k(x)\,dx. \tag{3.2.5}
\]

Indeed, if $1_A$ is the indicator function of the set $A\subset\mathbb{R}^d$ ($1_A(x)=1$ if $x\in A$ and $1_A(x)=0$ if $x\notin A$), then, by Parseval's identity,

\[
\mathbb{E}\big(W(A)W(B)\big) = \int_{\mathbb{R}^d} 1_A(x)\,1_B(x)\,dx. \tag{3.2.6}
\]
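A discretized sketch of this set function on $[0,1]$ (cell count, sample size, and seed are arbitrary choices): each cell of width $h$ carries an independent $N(0,h)$ mass, and $W(A)$ sums the cells inside $A$. The estimated covariance of $W(A)$ and $W(B)$ should match $\ell(A\cap B)$, in agreement with (3.2.6).

```python
import math
import random

def covariance_estimate(M=50_000, seed=2):
    """Cells 0..49 of width 0.02 tile [0, 1]; A = [0, 0.6] is cells 0..29 and
    B = [0.4, 1] is cells 20..49, so the overlap A ∩ B has length 0.2."""
    random.seed(seed)
    n, h = 50, 0.02
    acc = 0.0
    for _ in range(M):
        xi = [random.gauss(0.0, math.sqrt(h)) for _ in range(n)]
        wa = sum(xi[0:30])    # W(A)
        wb = sum(xi[20:50])   # W(B)
        acc += wa * wb
    return acc / M

print(covariance_estimate())  # close to 0.2
```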

For $r>0$ and $P\in\mathbb{R}^d$, denote by $B_r(P)$ the ball with center at $P$ and radius $r$. Then, for every fixed $r>0$, the function

\[
W_r(P) = \frac{W\big(B_r(P)\big)}{\ell\big(B_r(P)\big)} \tag{3.2.7}
\]

is a zero-mean homogeneous Gaussian field on $\mathbb{R}^d$, and $\mathbb{E}\big(W_r(P)W_r(Q)\big)=0$ if $|P-Q|>2r$. Once again, note that, up to this point, we have not been tied to any particular coordinate system in $\mathbb{R}^d$.

Let us now fix a Cartesian coordinate system in $\mathbb{R}^d$ and, given a point $x=(x_1,\dots,x_d)$ with $x_i>0$ for all $i=1,\dots,d$, consider the rectangular box

\[
A_x = [0,x_1]\times[0,x_2]\times\dots\times[0,x_d].
\]

Then the random field

\[
W(x) = W(A_x) \tag{3.2.8}
\]

is called the Brownian sheet; cf. Walsh [223, Chap. 1].


Exercise 3.2.7 (B)
(a) Show that $\lim_{r\to0}W_r(P)$ does not exist in distribution for any $P$. Hint. Consider the characteristic function of $W_r(P)$.
(b) Show that, for the Brownian sheet (3.2.8),

\[
\mathbb{E}\big(W(x)W(y)\big) = \prod_{i=1}^{d}\min(x_i,y_i). \tag{3.2.9}
\]

Hint. Use (3.2.6).

Looking at (3.2.5), we realize that both $\lim_{r\to0}W_r(x)$ and $\partial^d W(x)/\partial x_1\cdots\partial x_d$, if they existed, would have to be equal to a divergent series $\sum_{k\ge1}\xi_k\,m_k(x)$. On the other hand, this divergent series can be interpreted as a generalized function $\dot W$, acting on the functions from the Schwartz space $S(\mathbb{R}^d)$ of rapidly decreasing functions according to the rule

\[
\dot W(f) = \sum_{k\ge1}\xi_k\,f_k,\quad\text{where}\quad f_k = \int_{\mathbb{R}^d} f(x)\,m_k(x)\,dx. \tag{3.2.10}
\]

Recall that $S(\mathbb{R}^d)$ is the collection of infinitely differentiable functions $f$ on $\mathbb{R}^d$ such that $\sup_{x\in\mathbb{R}^d}(1+|x|^2)^m|D^n f(x)|<\infty$ for all positive integers $m,n$ and all partial derivatives $D^n f$ of order $n$; $|x|^2=x_1^2+\dots+x_d^2$. Denote by $\mathbb{R}^d_+$ the set $\{x\in\mathbb{R}^d:x_1>0,\dots,x_d>0\}$. Direct computations show that

\[
\dot W(f) \sim \mathcal{N}\big(0,\ \|f\|^2_{L_2(\mathbb{R}^d)}\big), \tag{3.2.11}
\]
\[
\mathbb{E}\big(\dot W(f)\dot W(g)\big) = (f,g)_{L_2(\mathbb{R}^d)}, \tag{3.2.12}
\]
\[
\dot W(f) = (-1)^d\int_{\mathbb{R}^d_+} W(x)\,\frac{\partial^d f(x)}{\partial x_1\cdots\partial x_d}\,dx,\qquad f\in C_0^{\infty}(\mathbb{R}^d_+). \tag{3.2.13}
\]

Equality (3.2.13) shows that $\dot W$ is a generalized derivative of $W$. Equality (3.2.12) shows that we can extend $\dot W$ from $S(\mathbb{R}^d)$ to $L_2(\mathbb{R}^d)$. We call this extension the Gaussian white noise on $L_2(\mathbb{R}^d)$ and continue to denote it by $\dot W$.

Exercise 3.2.8 (C) Verify (3.2.11)–(3.2.13).

Sometimes, we use an alternative notation for $\dot W(f)$:

\[
\dot W(f) = \int_{\mathbb{R}^d} f(x)\,dW(x).
\]

The following definition combines the ideas from Gelfand and Vilenkin [61, Sects. III.1.2 and III.5.1] and from Métivier and Pellaumail [159, Sect. 15.1].


Definition 3.2.9 A generalized random field $X$ over a linear topological space $V$ is a collection of random variables $\{X(v),\ v\in V\}$ with the properties:

1. $X(au+bv)=aX(u)+bX(v)$, $a,b\in\mathbb{R}$, $u,v\in V$;
2. if $\lim_{n\to\infty}v_n=v$ in the topology of $V$, then $\lim_{n\to\infty}X(v_n)=X(v)$ in probability.

In other words, $X$ is a continuous linear mapping from $V$ to the space of random variables. Alternative names for such an object are a cylindrical random element and a generalized random element.

For a generalized Gaussian random field over a Hilbert space, an alternative definition is often used (e.g. Nualart [175, Sect. 1.1.1]).

Definition 3.2.10 A zero-mean generalized Gaussian field $B$ over a Hilbert space $H$ is a collection of Gaussian random variables $\{B(f),\ f\in H\}$ with the properties:

1. $\mathbb{E}B(f)=0$ for all $f\in H$;
2. there exists a bounded, linear, self-adjoint, non-negative operator $K$ on $H$ (called the covariance operator of $B$) such that $\mathbb{E}\big(B(f)B(g)\big)=(Kf,g)_H$ for all $f,g\in H$, where $(\cdot,\cdot)_H$ is the inner product in $H$.

In the special case when $K$ is the identity operator, alternative names for $B$ are Gaussian white noise on (or over) $H$ and isonormal Gaussian process on (or over) $H$.

If $B$ is a Gaussian white noise on $H$, then $\mathbb{E}\big(B(f)B(g)\big)=(f,g)_H$. In the particular case $H=L_2(G)$, $G\subseteq\mathbb{R}^d$, we usually write $\dot W=\dot W(x)$ to denote the Gaussian white noise on $H$ and also use an alternative notation for $\dot W(f)$:

\[
\dot W(f) = \int_G f(x)\,dW(x).
\]

In fact, as the following exercise shows, $B$ generalizes the construction of $\dot W$ from (3.2.10) to an abstract Hilbert space.

Exercise 3.2.11 (C) Let $H$ be a separable Hilbert space with an orthonormal basis $\{m_k,\ k\ge1\}$.
(a) Let $B$ be a Gaussian white noise on $H$. Verify that $\{B(m_k),\ k\ge1\}$ is a collection of iid standard Gaussian random variables.


(b) Let fk ; k  1g be a collection of iid standard Gaussian random variables, and, for f 2 H, define B. f / D

X . f ; mk /H k :

(3.2.14)

k1

Verify that B is a Gaussian white noise on H. The next exercise establishes a connection between Definitions 3.2.9 and 3.2.10. Exercise 3.2.12 (B) (a) Verify that a generalized Gaussian field is a generalized random  field in the sense of Definition 3.2.9. Hint. Verify by direct computation that E B.af C bg/  2 aB. f /  bB.g/ D 0 and E.B. f /  B. fn //2  Ck f  fn k2H .

(b) Verify that if X is a generalized random field over a Hilbert space and every random variable X.v/ is Gaussian, then X is a generalized Gaussian field in the sense of Definition 3.2.10 and the correlation operator K is uniquely defined. Hint. Use the Riesz representation theorem.

Definition 3.2.13 A generalized field $X$ over a Hilbert space $H$ is called regular if there exists an $H$-valued random element $\mathbf{X}$ such that $\mathbf{X}\in L_2(\Omega;H)$ (that is, $\mathbb{E}\|\mathbf{X}\|_H^2<\infty$) and $X(f)=(\mathbf{X},f)_H$ for all $f\in H$.

Exercise 3.2.14 Let $G\subseteq\mathbb{R}^d$, $H=L_2(G)$, and consider a regular zero-mean Gaussian random field $X(f)=(\mathbf{X},f)_H$. Show that the covariance operator $K$ of $X$ is an integral operator on $H$ with kernel $K(x,y)=\mathbb{E}\big(\mathbf{X}(x)\mathbf{X}(y)\big)$, that is,

\[
Kf(x) = \int_G K(x,y)\,f(y)\,dy.
\]

Hint. Exchange integration and expectation.

There is a close connection between regular Gaussian fields and nuclear operators.

Theorem 3.2.15 A generalized Gaussian field $X$ over a separable Hilbert space $H$ is regular if and only if the covariance operator $K$ of $X$ is nuclear.

Proof (a) Assume that $X(f)=(\mathbf{X},f)_H$ and let $\{h_k,\ k\ge1\}$ be an orthonormal basis in $H$. Then

\[
\sum_{k\ge1}(Kh_k,h_k)_H = \sum_{k\ge1}\mathbb{E}(\mathbf{X},h_k)^2 = \mathbb{E}\|\mathbf{X}\|_H^2 < \infty,
\]

which, by Theorem 3.1.41(b), implies that $K$ is nuclear.
(b) Assume that $K$ is nuclear and let $\{m_k,\ k\ge1\}$ be the orthonormal basis in $H$ consisting of the eigenfunctions of $K$; such a basis exists because $K$ is


compact: see Exercise 3.1.20 and Theorem 3.1.28(b). Denote by $\lambda_k$, $k\ge1$, the eigenvalues of $K$ and define $\xi_k=\lambda_k^{-1/2}X(m_k)$, so that $\{\xi_k,\ k\ge1\}$ are iid standard Gaussian random variables. Then $X(f)=(\mathbf{X},f)_H$, where

\[
\mathbf{X} = \sum_{k\ge1}\sqrt{\lambda_k}\,\xi_k\,m_k.
\]

Indeed, $\mathbb{E}\|\mathbf{X}\|_H^2=\sum_{k\ge1}\lambda_k=\|K\|_1<\infty$ (see (3.1.24)) and, with $f_k=(f,m_k)_H$,

\[
\mathbb{E}\big((\mathbf{X},f)_H(\mathbf{X},g)_H\big) = \sum_k\lambda_k\,f_k\,g_k = (Kf,g)_H.
\]

This completes the proof of Theorem 3.2.15.

Recall that there is a one-to-one correspondence between a regular Gaussian field $X$ and a Gaussian measure $\mu$. The corresponding reproducing kernel Hilbert space $H_\mu$ of $\mu$ is often called the reproducing kernel Hilbert space of $X$ and is denoted by $H_X$. Below is the precise construction.

Exercise 3.2.16 (C) Assume that $X(f)=(\mathbf{X},f)_H$ is a regular Gaussian field with the covariance operator $K$ and define the Gaussian measure $\mu$ on $H$ by $\mu(A)=\mathbb{P}(\mathbf{X}\in A)$, $A\in\mathcal{B}(H)$. Let $C_\mu$ be the covariance operator of the measure $\mu$.
(a) Show that $C_\mu(f,g)=(Kf,g)_H$. Hint. It is obvious.
(b) Conclude that $H_\mu=\sqrt{K}(H)$. Hint. Use the result of Problem 3.1.9 on page 99.
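In coordinates, the construction in the proof of Theorem 3.2.15(b) says that $\mathbb{E}\|\mathbf{X}\|_H^2$ equals the trace $\sum_k\lambda_k$ of $K$. A Monte Carlo sketch of this fact; the spectrum $\lambda_k=k^{-2}$, the truncation, the sample size, and the seed are all arbitrary assumptions made for the illustration.

```python
import random

def mean_square_norm(K_terms=100, M=20_000, seed=3):
    """Estimate E||X||^2 for X = sum_k sqrt(lam_k) xi_k m_k with lam_k = 1/k^2;
    in coordinates, ||X||^2 = sum_k lam_k * xi_k**2."""
    random.seed(seed)
    lam = [1.0 / k ** 2 for k in range(1, K_terms + 1)]
    acc = 0.0
    for _ in range(M):
        acc += sum(l * random.gauss(0.0, 1.0) ** 2 for l in lam)
    return acc / M, sum(lam)

estimate, trace = mean_square_norm()
print(estimate, trace)  # the two numbers should be close
```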

Let us have another look at the white noise. If $B$ is a Gaussian white noise on a separable Hilbert space $H$, then, by Definition 3.2.10, the covariance operator of $B$ is the identity:

\[
\mathbb{E}\big(B(f)B(g)\big) = (f,g)_H, \tag{3.2.15}
\]

and therefore, by Theorem 3.2.15, there is no $H$-valued random element $\mathbf{B}$ such that $B(f)=(\mathbf{B},f)_H$. On the other hand, if $\{m_k,\ k\ge1\}$ is an orthonormal basis in $H$, then (3.2.14) suggests a representation

\[
B = \sum_{k\ge1} B(m_k)\,m_k; \tag{3.2.16}
\]

we call (3.2.16) the chaos expansion of $B$. The following exercise justifies the representation (3.2.16).


Exercise 3.2.17 (B)
(a) Let $H$ be a separable Hilbert space with an orthonormal basis $\{m_k,\ k\ge1\}$, and let $\tilde H$ be a Hilbert space such that $H\subset\tilde H$ and the inclusion operator $j\colon H\to\tilde H$ is Hilbert-Schmidt. Show that (3.2.16) defines a regular generalized Gaussian field over $\tilde H$ with the covariance operator $K=jj^*$. For a systematic way to construct the space $\tilde H$, see Problem 3.1.2 on page 96.
(b) Show that every generalized Gaussian random field becomes regular when considered on a suitably chosen extension of the original space.

Next, we will see how Theorem 3.2.15 works when $H=L_2(G)$.

Example 3.2.18 Consider a zero-mean Gaussian field $W=W(x)$, $x\in G\subseteq\mathbb{R}^d$ (in the sense of Definition 3.2.6), with the covariance function $q(x,y)=\mathbb{E}\big(W(x)W(y)\big)$. Then $\mathbb{E}W^2(x)=q(x,x)$, and, under the assumption

\[
\int_G q(x,x)\,dx < \infty, \tag{3.2.17}
\]

$W\in L_2(\Omega;L_2(G))$, so that $W$ defines a regular generalized Gaussian field over $L_2(G)$ by

\[
W(f) = \int_G W(x)\,f(x)\,dx.
\]

Indeed, since $\mathbb{E}\big(W(f)W(g)\big)=\iint_{G\times G}q(x,y)\,f(x)\,g(y)\,dx\,dy$, the covariance operator $Q$ of this field is an integral operator with kernel $q$: $Qf(x)=\int_G q(x,y)f(y)\,dy$. Condition (3.2.17) ensures that the operator $Q$ is nuclear: see (3.1.29) on page 93; note also that, by the Cauchy-Schwarz inequality, $q^2(x,y)\le q(x,x)\,q(y,y)$. If $\mathcal{M}=\{m_k,\ k\ge1\}$ is an orthonormal basis in $L_2(G)$ such that $Qm_k=\lambda_k m_k$, $k\ge1$, then the Fourier series expansion of $W$ in the basis $\mathcal{M}$ is

\[
W(x) = \sum_{k\ge1}\xi_k\,m_k(x),\qquad \xi_k=\int_G W(x)\,m_k(x)\,dx. \tag{3.2.18}
\]

Note that the zero-mean Gaussian random variables $\xi_k$ have variance $\lambda_k$ and are independent for different $k$. Representation (3.2.18) is known as the Karhunen-Loève expansion of $W$, after the Finnish mathematician KARI KARHUNEN (1915–1992) and the French-American mathematician MICHEL LOÈVE (1907–1979), who established the result independently in the mid-1940s [105, 106, 143, 144].

If the function $q(x,x)$ is Lebesgue-integrable on every compact subset of $G$, then we can define a generalized random field $\dot W$ over the space $C_0^{\infty}(G)$ of smooth functions with compact support in $G$:

\[
\dot W(f) = (-1)^d\int_G W(x)\,D_x^{(d)}f(x)\,dx,\qquad D_x^{(d)}=\frac{\partial^d}{\partial x_1\cdots\partial x_d}. \tag{3.2.19}
\]
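Returning to the Karhunen-Loève expansion (3.2.18): for $q(x,y)=\min(x,y)$ on $G=(0,1)$ (Brownian motion), the eigenpairs of $Q$ are known in closed form, $\lambda_k=\big((k-\tfrac12)\pi\big)^{-2}$ and $m_k(x)=\sqrt2\sin\big((k-\tfrac12)\pi x\big)$ (a classical fact, used here as an assumption). The sketch below simulates paths and checks that the coefficients $\xi_k=\int_0^1 W(x)m_k(x)\,dx$ are uncorrelated with variances $\lambda_k$; grid size, sample count, and seed are arbitrary.

```python
import math
import random

random.seed(4)

n, M = 500, 2000
h = 1.0 / n
grid = [(i + 1) * h for i in range(n)]
m1 = [math.sqrt(2.0) * math.sin(0.5 * math.pi * x) for x in grid]
m2 = [math.sqrt(2.0) * math.sin(1.5 * math.pi * x) for x in grid]

v1 = v2 = c12 = 0.0
for _ in range(M):
    w, path = 0.0, []
    for _ in range(n):                      # Brownian path from iid increments
        w += random.gauss(0.0, math.sqrt(h))
        path.append(w)
    xi1 = h * sum(p * b for p, b in zip(path, m1))   # discretized integrals
    xi2 = h * sum(p * b for p, b in zip(path, m2))
    v1 += xi1 * xi1
    v2 += xi2 * xi2
    c12 += xi1 * xi2
v1, v2, c12 = v1 / M, v2 / M, c12 / M
print(v1, v2, c12)  # roughly 4/pi^2 ~ 0.405, 4/(9 pi^2) ~ 0.045, and 0
```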


Since $\mathbb{E}\big(\dot W(f)\dot W(g)\big)=\iint_{G\times G}q(x,y)\big(D_x^{(d)}f(x)\big)\big(D_y^{(d)}g(y)\big)\,dx\,dy$, we see that the covariance operator of $\dot W$ is a generalized function on $C_0^{\infty}(G\times G)$ and is given by $D_x^{(d)}D_y^{(d)}q(x,y)$. If there exists a $C>0$ such that, for all $f,g\in C_0^{\infty}(G)$,

\[
\Big|\iint_{G\times G} q(x,y)\,\big(D_x^{(d)}f(x)\big)\big(D_y^{(d)}g(y)\big)\,dx\,dy\Big| \le C\,\|f\|_{L_2(G)}\,\|g\|_{L_2(G)},
\]

then $\dot W$ extends to a zero-mean generalized Gaussian field over $L_2(G)$. This concludes Example 3.2.18.

Next, we define the action of a linear operator on a generalized random field.

Definition 3.2.19 Let $H$, $Y$ be Hilbert spaces, $A\colon H\to Y$, a bounded linear operator, and $B$, a generalized Gaussian field over $H$. Then $AB$ is a generalized Gaussian field over $Y$ defined by $(AB)(f)=B(A^*f)$.

Exercise 3.2.20 (C) Verify that if $K$ is the covariance operator of $B$, then $AKA^*$ is the covariance operator of $AB$.

Example 3.2.21 Just as the dual of a Hilbert space can be identified with different spaces, a generalized field can be considered over different spaces. For example, let $G$ be a smooth bounded domain and consider the Sobolev spaces $H^r(G)$ on $G$ (see Example 3.1.29, page 87). Let $\dot W$ be a Gaussian white noise over $H=L_2(G)=H^0(G)$ and let $\Lambda=\sqrt{-\Delta}$, where $\Delta$ is the Laplacian with zero boundary conditions. Consider the field $X=\Lambda\dot W$. By Definition 3.2.19, $X$ is a Gaussian white noise over $Y=H^{-1}(G)$. Indeed, in this setting $\Lambda^*=\Lambda^{-1}$, because $(\Lambda f,g)_{-1}=(f,\Lambda^*g)_0$, $f\in H^0(G)$, $g\in H^{-1}(G)$, while the definition of the inner product in $H^{-1}(G)$ implies

\[
(\Lambda f,g)_{-1} = (\Lambda^{-1}\Lambda f,\ \Lambda^{-1}g)_0 = (f,\Lambda^{-1}g)_0.
\]

Therefore, for $f,h\in H^{-1}(G)$,

\[
\mathbb{E}\big(X(f)X(h)\big) = (\Lambda^{-1}f,\ \Lambda^{-1}h)_0 = (f,h)_{-1}.
\]

On the other hand, let $\{m_k,\ k\ge1\}$ be the orthonormal basis in $H$ consisting of the eigenfunctions of $\Lambda$, with $\Lambda m_k=\lambda_k m_k$ and eigenvalues $\lambda_k$, $k\ge1$. Then

\[
\dot W = \sum_{k\ge1} m_k\,\xi_k,\qquad X = \sum_{k\ge1}\lambda_k\,m_k\,\xi_k;
\]

note that $\{\lambda_k^{-1}m_k,\ k\ge1\}$ is an orthonormal basis in $H^{1}(G)$. Given $f=\sum_k f_k m_k\in H^{1}(G)$, we can define

\[
X(f) = \sum_{k\ge1} f_k\,\lambda_k\,\xi_k.
\]


Then, for $f,g\in H^{1}(G)$,

\[
\mathbb{E}\big(X(f)X(g)\big) = \sum_{k\ge1}\lambda_k^{2}\,f_k\,g_k = (f,g)_1,
\]

that is, $X$ can also be considered as a Gaussian white noise over $H^{1}(G)$. Thus, $X$ is a Gaussian white noise over two different spaces. This concludes Example 3.2.21.

We conclude this section with a discussion of the Markov property for random fields. The definition of the Markov property for random processes, namely, that the past and the future are independent given the present, essentially relies on the linear ordering of the index set of the process, the real line. Since a random field is indexed by a set without a linear ordering, there is no clear analogue of the past and future, and thus no natural way to define the Markov property. There are three main versions of the Markov property for random fields: the sharp Markov property, the germ Markov property, and the global Markov property. As in the case of processes, all definitions rely on conditional independence, which we will now review.

Let $G_i$, $i=1,2,3$, be three sigma-algebras of events on the probability space $(\Omega,\mathcal{F},\mathbb{P})$. Recall that $G_1$ and $G_2$ are called conditionally independent given $G_3$ if, for every $A\in G_1$ and $B\in G_2$,

\[
\mathbb{E}\big(1_A 1_B\,|\,G_3\big) = \mathbb{E}\big(1_A\,|\,G_3\big)\,\mathbb{E}\big(1_B\,|\,G_3\big).
\]

In this case, $G_3$ is called the splitting sigma-algebra or the splitting field for $G_1$ and $G_2$. If $G_3$ is the trivial sigma-algebra, then conditional independence becomes the usual independence. Dependent events can become conditionally independent given a non-trivial sigma-algebra. For example, the values of a Markov process at two different moments are usually dependent, but they become independent given the value at an intermediate moment. As another example, consider events $A,B,C$ such that $C\subseteq A\cap B$ and $\mathbb{P}(C)>0$. By the Bayes formula, $A$ and $B$ are conditionally independent given $C$ (all conditional probabilities are equal to one).

Exercise 3.2.22 (C)
(a) Give an example of independent events that become dependent after a conditioning. Hint. Roll a die twice and condition on the sum.
(b) Show that if $G_1$ and $G_2$ are conditionally independent given $G_3$, then $\sigma(G_1\cup G_3)$ and $\sigma(G_2\cup G_3)$ are conditionally independent given $G_3$. Hint. Consider the events of the type $A_1\cap C_1$ and $B_1\cap C_2$, where $A_1\in G_1$, $B_1\in G_2$, and $C_1,C_2\in G_3$. Taking such events helps because, for example, $1_{A_1\cap C_1}=1_{A_1}1_{C_1}$ and $1_{C_1}$ is $G_3$-measurable.


(c) Show that $G_1$ and $G_2$ are conditionally independent given $G_3$ if and only if, for every $A\in G_1$,

\[
\mathbb{E}\big(1_A\,|\,\sigma(G_2\cup G_3)\big) = \mathbb{E}\big(1_A\,|\,G_3\big).
\]

We are now ready to define the first version of the Markov property for (regular) random fields. Recall that, for two sets $A,B$, the notation $A\setminus B$ means the part of $A$ that is not in $B$.

Definition 3.2.23 (Sharp Markov Property) A random field $X=X(x)$, $x\in S$, on a metric space $S$ has the sharp Markov property relative to a bounded open set $B\subset S$ (with boundary $\partial B$ and closure $\bar B$) if the sigma-algebras

\[
\sigma\big(X(x),\ x\in B\big)\quad\text{and}\quad \sigma\big(X(x),\ x\in S\setminus\bar B\big)
\]

are conditionally independent given the sigma-algebra $\sigma\big(X(x),\ x\in\partial B\big)$.

In other words, a random field has the sharp Markov property relative to a set if the values of the field inside and outside of the set are independent given the values of the field on the boundary of the set. We take $S$ a metric space rather than a more general topological space because we want to have an easy notion of a bounded set.

Exercise 3.2.24 (C) Verify that $\sigma\big(X(x),\ x\in S\setminus\bar B\big)$ can be replaced with $\sigma\big(X(x),\ x\in S\setminus B\big)$. Hint. See Exercise 3.2.22(b).
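The hint to Exercise 3.2.22(a) can be checked by exact enumeration: for two fair dice, the events "first roll shows 1" and "second roll shows 1" are independent, but conditioning on the sum being 7 destroys the independence.

```python
from fractions import Fraction
from itertools import product

omega = list(product(range(1, 7), repeat=2))  # 36 equally likely outcomes

def prob(event):
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

def cond_prob(event, given):
    return prob(lambda w: event(w) and given(w)) / prob(given)

A = lambda w: w[0] == 1          # first roll shows 1
B = lambda w: w[1] == 1          # second roll shows 1
C = lambda w: w[0] + w[1] == 7   # the conditioning event: sum equals 7

# Independent without conditioning:
assert prob(lambda w: A(w) and B(w)) == prob(A) * prob(B)

# Dependent given C: P(A and B | C) = 0, while P(A|C) * P(B|C) = 1/36.
print(cond_prob(lambda w: A(w) and B(w), C), cond_prob(A, C) * cond_prob(B, C))
```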

Definition 3.2.23, first suggested by Lévy, is both the most natural and the most restrictive extension of the Markov property from one-parameter processes to random fields. The definition is natural because the values of a field inside and outside of a set are clear analogues of the past and the future of a process. The definition is restrictive because the boundary sigma-algebra $\sigma\big(X(x),\ x\in\partial B\big)$ is usually not big enough to ensure the conditional independence required by the definition. As a result, to have the sharp Markov property, one has to consider either very special random fields or special sets. Here are some examples:

• If a homogeneous Gaussian field on $\mathbb{R}^d$, $d>1$, has the sharp Markov property with respect to all bounded open sets with sufficiently regular boundary, then the field is degenerate (essentially, a single random variable; for details, see Wong [227, Theorem 1]).
• The Brownian sheet on $\mathbb{R}^2$ has the sharp Markov property relative to a finite union of rectangles with sides parallel to the axes, but not relative to the triangle with vertices at $(0,0)$, $(1,0)$, $(0,1)$ (Walsh [223, Sect. 1]).
• The Brownian sheet on $\mathbb{R}^2$ has the sharp Markov property relative to bounded open sets with sufficiently irregular ("thick") boundary, such as a fractal curve.
• Certain two-parameter pure-jump processes have the sharp Markov property relative to every bounded open set in $\mathbb{R}^2$.


Exercise 3.2.25 (C) Verify that the Brownian sheet on $\mathbb{R}^2$ has the sharp Markov property relative to the square $(0,1)\times(0,1)$.

An extension of Definition 3.2.23 to generalized fields is possible in two directions:

1. by considering fields that, although generalized, still allow the definition of some form of a boundary value (Walsh [223, Chap. 9], Wong [227, Sect. 4]);
2. by considering the values of the field on the functions supported in an open neighborhood of the boundary; this consideration leads to the germ Markov property.

Let $X$ be a generalized random field over a linear topological space $V$, and assume that $V$ is a collection of functions on a metric space $S$. Given a (measurable) set $C\subset S$, define the germ sigma-algebra (or the germ field) $\mathcal{B}_+(C)$ of $X$ on $C$ by

\[
\mathcal{B}_+(C) = \bigcap_{A\supset\bar C}\sigma\big(X(f),\ f\ \text{supported in}\ A\big),
\]

where the intersection is over all open sets $A$ containing the closure $\bar C$ of $C$.

Definition 3.2.26 (Germ Markov Property) A generalized random field $X$ over functions on $S$ has the germ Markov property relative to a bounded open set $B$ (with boundary $\partial B$ and closure $\bar B$) if the sigma-algebras

\[
\sigma\big(X(f),\ f\ \text{supported in}\ B\big)\quad\text{and}\quad \sigma\big(X(f),\ f\ \text{supported in}\ S\setminus\bar B\big)
\]

are conditionally independent given $\mathcal{B}_+(\partial B)$.

Exercise 3.2.27 (C) Verify that $X$ has the germ Markov property relative to a bounded open set $B$ if and only if the sigma-algebras $\mathcal{B}_+(B)$ and $\mathcal{B}_+(S\setminus B)$ are conditionally independent given $\mathcal{B}_+(\partial B)$.

Definition 3.2.26 is usually attributed to McKean. Nualart showed that the Brownian sheet on $\mathbb{R}^2$ has the germ Markov property relative to every bounded open set.

For a regular zero-mean Gaussian field $X=X(x)$, $x\in G\subseteq\mathbb{R}^d$, the germ Markov property is equivalent to the reproducing kernel Hilbert space $H_X$ corresponding to the kernel $K(x,y)=\mathbb{E}\big(X(x)X(y)\big)$ being, in a certain sense, local:

Theorem 3.2.28 Let $X=X(x)$, $x\in G\subseteq\mathbb{R}^d$, be a continuous zero-mean Gaussian random field. The field $X$ has the germ Markov property relative to bounded open subsets of $G$ if and only if the reproducing kernel Hilbert space $H_X$ of $X$ has the following properties:

1. if $v_1,v_2$ from $H_X$ have disjoint supports, then $(v_1,v_2)_{H_X}=0$;
2. if $v=v_1+v_2\in H_X$ and $v_1,v_2$ have disjoint supports, then $v_1\in H_X$ and $v_2\in H_X$.

3.2 Random Processes and Fields


Proof See Künsch [126, Theorem 5.1] or Pitt [188, Theorem 3.3].

A slight modification of the germ Markov property is the global Markov property.

Definition 3.2.29 (Global Markov Property) A generalized random field X over functions on S has the global Markov property if, for every two open sets A, B with A ∪ B = S, the sigma-algebras σ(X(f) : f supported in A) and σ(X(f) : f supported in B) are conditionally independent given σ(X(f) : f supported in A ∩ B).

Intuitively, by taking a sequence of sets A and B so that their intersection A ∩ B becomes smaller and smaller, one could get the germ Markov property from the global Markov property. This is indeed true: the global Markov property implies the germ Markov property relative to bounded open sets [92, Theorem 1.19]. Moreover, for regular fields, the germ and global Markov properties are equivalent [92, Remark 1.25].

Exercise 3.2.30 (C) Verify that the Gaussian white noise Ẇ on R^d has the global Markov property.

For another discussion of various Markov properties for random fields see Balan and Ivanoff and Iwata.

Let us summarize the main facts about zero-mean Gaussian fields on R^d:
1. A regular field W on R^d is a mapping from R^d to the space of zero-mean Gaussian random variables, and is characterized by the covariance function q(x,y) = E[W(x)W(y)].
2. A generalized field B over L²(R^d) is a linear mapping from L²(R^d) to the space of zero-mean Gaussian random variables and is characterized by the covariance operator Q: E[B(f)B(g)] = (Qf, g)_{L²(R^d)}.
3. A regular field W on R^d defines a generalized field Ẇ over S(R^d) by

    Ẇ(f) = (−1)^d ∫_{R^d} W(x) D^{(d)}_x f(x) dx,

where D^{(d)} = ∂^d / ∂x₁⋯∂x_d. The kernel of the covariance operator of the corresponding field Ẇ is D^{(d)}_x D^{(d)}_y q(x,y), and usually must be interpreted as a generalized function.
4. If the covariance operator of a generalized field B is nuclear, then there exists a Gaussian field B = B(x), x ∈ R^d, such that

    B(f) = ∫_{R^d} B(x) f(x) dx.


3.2.2 Processes

Let V be a Banach space.

Definition 3.2.31
(a) A V-valued stochastic (or random) process X = X(t), t ∈ [0,T], is a collection of V-valued random elements X(t), t ∈ [0,T]. The sample trajectory of X is the function X(t,ω), t ∈ [0,T], for fixed ω ∈ Ω. Given a property of a V-valued function of time (continuity, differentiability, etc.), the process is said to have this property if every sample trajectory has this property.
(b) A V-valued stochastic process X is called Gaussian if, for every n ≥ 1, t₁, …, t_n ∈ [0,T], and every collection (ℓ₁, …, ℓ_n) of bounded linear functionals on V, the random variable ℓ₁(X(t₁)) + ⋯ + ℓ_n(X(t_n)) is Gaussian.
(c) A generalized random process, also known as a cylindrical process, is a collection of generalized random elements indexed by points in [0,T]. If X = X(t) is a cylindrical process over V, then we write X_f(t) to denote the value of the generalized random element X(t) on f ∈ V. A cylindrical process X is called Gaussian if {X_{f_i}(t_j), i = 1, …, N, j = 1, …, M} is a Gaussian system for every finite collection of f_i ∈ V and t_j ∈ [0,T].

Recall that a real-valued Gaussian process X = X(t), t ≥ 0, is completely specified by the mean-value function m(t) = EX(t) and the covariance function r(t,s) = E[(X(t) − m(t))(X(s) − m(s))].

Definition 3.2.32 The standard Brownian motion, also known as the Wiener process, w = w(t), t ∈ [0,T], is a real-valued Gaussian process such that Ew(t) = 0 and E[w(t)w(s)] = min(t,s).

Below are some properties of the standard Brownian motion:
1. w(0) = 0 with probability one;
2. w has independent increments: if t₁ < t₂ ≤ t₃ < t₄, then the random variables w(t₂) − w(t₁) and w(t₄) − w(t₃) are independent;
3. with probability one, the trajectories of w are Hölder continuous of every order less than 1/2 and are nowhere differentiable.

Exercise 3.2.33 (C) Verify the above properties of the standard Brownian motion.
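The defining covariance E w(t)w(s) = min(t,s) and the independence of increments can be observed numerically. The following sketch (an added illustration, not part of the text; grid sizes and sample counts are arbitrary choices) simulates many paths of w and checks both facts by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 20000, 256, 1.0
dt = T / n_steps
# sample paths of w on the grid t_k = (k+1)*dt, via cumulative N(0, dt) increments
w = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)

s_idx, t_idx = 63, 191                        # s = 0.25, t = 0.75
s, t = (s_idx + 1) * dt, (t_idx + 1) * dt
cov_ts = np.mean(w[:, t_idx] * w[:, s_idx])
assert abs(cov_ts - min(s, t)) < 0.02         # E w(t)w(s) = min(t, s)

inc1 = w[:, 127] - w[:, 63]                   # w(0.5) - w(0.25)
inc2 = w[:, 255] - w[:, 191]                  # w(1.0) - w(0.75)
assert abs(np.mean(inc1 * inc2)) < 0.02       # disjoint increments are uncorrelated
```

For jointly Gaussian variables, zero correlation of the two increments is equivalent to their independence, which is why the second check suffices.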
Next, we outline the connections of the standard Brownian motion w = w(t) with other objects in probability and stochastic analysis.

1. Gaussian measures. Let V be the Banach space of functions continuous on [0,T] with v(0) = 0 and norm ‖v‖_V = sup_{0<t≤T} |v(t)|. […]

[…]
• submartingale, if E(M(t) | F_s) ≥ M(s), t > s;
• martingale, if E(M(t) | F_s) = M(s);
• supermartingale, if E(M(t) | F_s) ≤ M(s), t > s.
If M is a martingale and φ = φ(x) is a convex function (for example, φ(x) = x²) such that E|φ(M(t))| < ∞, t ≥ 0, then, by Jensen's inequality, the process X(t) = φ(M(t)) is a submartingale. A martingale M is called square-integrable if E|M(t)|² < ∞ for all t ≥ 0. If M is a continuous square-integrable martingale with M(0) = 0, then, by the Doob-Meyer decomposition, there exists a unique continuous non-decreasing process ⟨M⟩, called the quadratic variation of M, such that


⟨M⟩(0) = 0 and the process M² − ⟨M⟩ is a martingale. For two continuous square-integrable martingales M, N with M(0) = N(0) = 0, the cross variation process ⟨M,N⟩ is defined by

    ⟨M,N⟩ = (1/4) ( ⟨M+N⟩ − ⟨M−N⟩ );        (3.2.36)

see, for example, Karatzas and Shreve [103, Sect. 1.5]. Note that ⟨M,M⟩(t) = ⟨M⟩(t) and ⟨cM⟩(t) = c²⟨M⟩(t), c ∈ R. If M₁, …, Mₙ are continuous square-integrable martingales, Mᵢ(0) = 0, then

    ⟨ ∑_{k=1}^n M_k ⟩(t) = ∑_{k,ℓ=1}^n ⟨M_k, M_ℓ⟩(t).        (3.2.37)

Exercise 3.2.40 (C) (a) Verify that ⟨M,N⟩ is the unique process in the class of processes with bounded variation such that MN − ⟨M,N⟩ is a martingale. Hint. A square-integrable martingale with bounded variation is constant.

(b) Verify that if the sigma-algebras generated by the random variables M(t), t ≥ 0, and N(t), t ≥ 0, are independent, then ⟨M,N⟩ = 0. Hint. Use the uniqueness statement in part (a).
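The quadratic and cross variations introduced above are easy to observe numerically: for a standard Brownian motion ⟨w⟩(t) = t, for two independent Brownian motions the cross variation vanishes, and the polarization identity (3.2.36) holds exactly for the realized sums over any partition. A minimal sketch (an added illustration; the partition size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, T = 2**16, 1.0
dt = T / n_steps
dw1 = rng.normal(0.0, np.sqrt(dt), n_steps)   # increments of w1
dw2 = rng.normal(0.0, np.sqrt(dt), n_steps)   # increments of an independent w2

qv = np.sum(dw1**2)          # realized <w1>(T): converges to T as dt -> 0
cross = np.sum(dw1 * dw2)    # realized <w1, w2>(T): converges to 0
assert abs(qv - T) < 0.03
assert abs(cross) < 0.03

# polarization identity (3.2.36): <M, N> = (<M+N> - <M-N>)/4, exact for realized sums
cross_pol = (np.sum((dw1 + dw2)**2) - np.sum((dw1 - dw2)**2)) / 4.0
assert abs(cross_pol - cross) < 1e-9
```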

Definition 3.2.41 (1) A cylindrical process X over a linear topological space V is called a martingale (submartingale, supermartingale) if the process X_f(t), t ≥ 0, is a martingale (submartingale, supermartingale) for every f ∈ V.
(2) A process M = M(t) is called a continuous square-integrable martingale with values in a separable Hilbert space H (or an H-valued continuous square-integrable martingale) if M has the following properties:
• M(t) ∈ H and E‖M(t)‖²_H < ∞ for every t ≥ 0;
• lim_{s→0} ‖M(t+s) − M(t)‖_H = 0 for all t ≥ 0 and ω ∈ Ω, where, by convention, M(t) = M(0) for t < 0;
• for every f ∈ H, the process M_f(t) = (f, M(t))_H is a martingale.

Exercise 3.2.42 (C) Let H be a Hilbert space and let f = f(t), t ∈ R, be an H-valued function. Verify that the following two conditions are equivalent:
1. lim_{s→0} ‖f(t+s) − f(t)‖_H = 0 for all t ∈ R;
2. the norm ‖f(·)‖_H is a continuous function and, for every h ∈ H, the function f_h(t) = (f(t), h)_H, t ∈ R, is continuous.
Hint. See Problem 3.1.3(a), page 96.


Our next objective is to define and study the quadratic variation of a continuous H-valued martingale. As a motivation, consider an R^d-valued continuous square-integrable martingale M. Fixing an orthonormal basis in R^d, we get a representation of M as a vector martingale (M₁, …, M_d). If f = (f₁, …, f_d) is a vector in R^d, then M_f = ∑_{k=1}^d f_k M_k is a real-valued continuous square-integrable martingale and, by (3.2.37),

    ⟨M_f⟩(t) = ∑_{k,n=1}^d f_k f_n ⟨M_k, M_n⟩(t).        (3.2.38)

We will now derive an alternative representation for ⟨M_f⟩. The process |M|² = ∑_{i=1}^d M_i² is a submartingale, and |M|² − ∑_{i=1}^d ⟨M_i⟩ is a martingale. It is therefore natural to define

    ⟨M⟩ = ∑_{i=1}^d ⟨M_i⟩.        (3.2.39)

This definition seems to depend on the basis in R^d, but, on the other hand, ⟨M⟩ is such that |M|² − ⟨M⟩ is a martingale; therefore, by the uniqueness part of the Doob-Meyer decomposition, ⟨M⟩ must be the same in every basis.

Proposition 3.2.43 If N₁ and N₂ are two real-valued continuous square-integrable martingales, then the function ⟨N₁,N₂⟩ is absolutely continuous with respect to ⟨N₁⟩ + ⟨N₂⟩:

    ⟨N₁,N₂⟩(t) = ∫₀ᵗ [ d⟨N₁,N₂⟩(s) / d(⟨N₁⟩ + ⟨N₂⟩)(s) ] d(⟨N₁⟩ + ⟨N₂⟩)(s).

Proof It is known that the function ⟨N₁,N₂⟩ is absolutely continuous with respect to both ⟨N₁⟩ and ⟨N₂⟩ (see, for example, Liptser and Shiryaev [138, Theorem 2.2.8]). Since both ⟨N₁⟩ and ⟨N₂⟩ are absolutely continuous with respect to ⟨N₁⟩ + ⟨N₂⟩, the result follows.

For a continuous square-integrable martingale M with values in R^d, we use Proposition 3.2.43 to define the d × d symmetric matrix

    Q_M(s) = ( d⟨M_i, M_j⟩(s) / d⟨M⟩(s) ; i, j = 1, …, d ).

Treating f as a column vector and writing fᵀ for the corresponding row vector, we can then re-write (3.2.38) as

    ⟨M_f⟩(t) = ∫₀ᵗ fᵀ Q_M(s) f d⟨M⟩(s).


Once we think of Q_M(s) as a matrix representation of some linear operator Q_M(s) on R^d, and interpret ⟨M⟩ as the process that compensates |M|² to a martingale, we have all the ingredients for an intrinsic (coordinate-free) interpretation of (3.2.38). The following theorem carries out this plan, and also extends (3.2.38) to an infinite-dimensional setting.

Theorem 3.2.44 Let M be a continuous square-integrable martingale with values in a separable Hilbert space H and M(0) = 0. Then
(a) the process ‖M‖²_H is a non-negative submartingale and there exists a unique continuous non-decreasing process ⟨M⟩ such that ‖M‖²_H − ⟨M⟩ is a martingale;
(b) there exists a unique process Q_M = Q_M(t), called the correlation operator of the martingale M, with the following properties:
• for every t and ω, Q_M is a non-negative definite self-adjoint nuclear operator on H;
• for every f, g ∈ H,

    ⟨(M,f)_H, (M,g)_H⟩(t) = ∫₀ᵗ ( Q_M(s)f, g )_H d⟨M⟩(s).        (3.2.40)

Proof Let {m_k, k ≥ 1} be an orthonormal basis in H.
(a) We have M(t) = ∑_{k≥1} M_k(t) m_k and ‖M(t)‖²_H = ∑_{k≥1} M_k²(t), where M_k = (M, m_k)_H is a continuous square-integrable martingale. Then we can take ⟨M⟩(t) = ∑_{k≥1} ⟨M_k⟩(t). For a more detailed proof, see Rozovskii [199, Sect. 2.1.8].
(b) For fixed t,

    F̃_{f,g}(t) = ⟨(M,f)_H, (M,g)_H⟩(t)

is a bi-linear form on H, and so F̃_{f,g}(t) = (Q̃(t)f, g)_H. The operator Q̃(t) must be nuclear:

    tr(Q̃(t)) = ∑_{k≥1} ⟨(M, m_k)_H⟩(t) = ∑_{k≥1} ⟨M_k⟩(t) = ⟨M⟩(t).

Then, for fixed f, g, look at F̃_{f,g}(t) as a real-valued function of time and argue that this function is absolutely continuous with respect to ⟨M⟩. For fixed t, the corresponding Radon-Nikodym derivative defines a bi-linear form on H, and we get Q_M(t) from the relation

    ( Q_M(t)f, g )_H = dF̃_{f,g}(t) / d⟨M⟩(t).

For details, see Métivier and Pellaumail [159, Theorem 14.3]. This completes the proof of Theorem 3.2.44.


Exercise 3.2.45 (B)
(a) Verify that (3.2.40) implies that, for every t ≥ 0 and ω ∈ Ω, the operator Q_M(t) is self-adjoint, non-negative definite, and tr(Q_M(t)) = 1.
(b) Let M be a continuous square-integrable H-valued martingale, X a separable Hilbert space, and A : H → X a bounded linear operator. Show that the process N(t) = AM(t) is a continuous square-integrable X-valued martingale and

    Q_N(t) = A Q_M(t) A* / tr( A Q_M(t) A* ).

Hint. See Exercise 3.2.20 on page 109.
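The transformation rule in Exercise 3.2.45(b) can be checked directly in finite dimensions. In the hypothetical example below (my choice of martingale, not from the text), M(t) = (w₁(t), 2w₂(t)) in H = R², so ⟨M_i, M_j⟩(t) = t D_ij with D = diag(1,4), ⟨M⟩(t) = t tr(D), and Q_M = D/tr(D); for N = AM one computes ⟨N_i, N_j⟩(t) = t (A D Aᵀ)_ij, and the common factor tr(D) cancels in the formula:

```python
import numpy as np

# Hypothetical example: M(t) = (w1(t), 2*w2(t)) in H = R^2.
# Then <M_i, M_j>(t) = t * D_ij with D = diag(1, 4), and Q_M = D / tr(D).
D = np.diag([1.0, 4.0])
Q_M = D / np.trace(D)
assert abs(np.trace(Q_M) - 1.0) < 1e-12      # Exercise 3.2.45(a): tr(Q_M) = 1

A = np.array([[1.0, 2.0], [0.0, 3.0]])       # a bounded linear operator H -> X = R^2
# For N = A M: <N_i, N_j>(t) = t * (A D A^T)_ij, so directly
Q_N_direct = A @ D @ A.T / np.trace(A @ D @ A.T)
# Formula from Exercise 3.2.45(b): Q_N = A Q_M A^* / tr(A Q_M A^*)
Q_N_formula = A @ Q_M @ A.T / np.trace(A @ Q_M @ A.T)
assert np.allclose(Q_N_direct, Q_N_formula)
```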

To summarize: if H is a separable Hilbert space with an orthonormal basis {m_k, k ≥ 1} and M is an H-valued continuous square-integrable martingale, then, for each k, the process M_k(t) = (M(t), m_k)_H is a real-valued continuous square-integrable martingale and

    M(t) = ∑_{k≥1} M_k(t) m_k,   ⟨M⟩(t) = ∑_{k≥1} ⟨M_k⟩(t),
    ⟨M_k, M_ℓ⟩(t) = ∫₀ᵗ ( Q_M(s) m_k, m_ℓ )_H d⟨M⟩(s).        (3.2.41)

Exercise 3.2.46 (C) Let W_Q be a Q-cylindrical Wiener process on a Hilbert space H, and assume that the operator Q is nuclear. Show that W_Q is an H-valued martingale with ⟨W_Q⟩(t) = t tr(Q) and Q_{W_Q}(t) = Q / tr(Q).

Theorem 3.2.47 (Burkholder-Davis-Gundy (BDG) Inequality) If H is a separable Hilbert space and M is an H-valued continuous square-integrable martingale with M(0) = 0, then, for every p > 0, there exists a positive number C, depending only on p, such that

    E sup_{t≤T} ‖M(t)‖ᵖ_H ≤ C E ( ⟨M⟩(T) )^{p/2}.        (3.2.42)

Proof We derive the result from the finite-dimensional version. Fix an integer N ≥ 1 and use the notations from (3.2.41) to define

    M̄^{(N)}(t) = ( M₁(t), …, M_N(t) ).

Then M̄^{(N)} is a martingale with values in R^N and, as N ↗ ∞,

    |M̄^{(N)}(t)| ↗ ‖M(t)‖_H,   ⟨M̄^{(N)}⟩(t) ↗ ⟨M⟩(t).        (3.2.43)

124

3 Stochastic Analysis in Infinite Dimensions

By the finite-dimensional version of the BDG inequality applied to M̄^{(N)},

    E sup_{t≤T} |M̄^{(N)}(t)|ᵖ ≤ C E ( ⟨M̄^{(N)}⟩(T) )^{p/2};        (3.2.44)

see, for example, Krylov [118, Theorem IV.4.1]. The key feature of the result is that the number C in (3.2.44) depends only on p; in particular, C does not depend on N. Then (3.2.42) follows after passing to the limit N → ∞ in (3.2.44) and using the monotone convergence theorem together with (3.2.41) and (3.2.43). Note that |M̄^{(N)}(t)| ≤ ‖M(t)‖_H, so that sup_t |M̄^{(N)}(t)| ≤ sup_t ‖M(t)‖_H; on the other hand, ‖M(t)‖_H = lim_N |M̄^{(N)}(t)|, and so sup_t ‖M(t)‖_H ≤ lim_N sup_t |M̄^{(N)}(t)|. That is,

    sup_t ‖M(t)‖_H = lim_N sup_t |M̄^{(N)}(t)|.

This completes the proof of Theorem 3.2.47.

Remark 3.2.48 It is not just a lucky coincidence that the constant in the BDG inequality does not depend on the dimension of the space. A general result of Kallenberg and Sztencel shows that, for every continuous martingale M, there exists a continuous two-dimensional martingale M′ such that |M| = |M′| and ⟨M⟩ = ⟨M′⟩. As a result, every inequality for a continuous two-dimensional martingale M involving only |M| and ⟨M⟩ extends to any number of dimensions, including infinite, with the same constants.
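For M = w a standard Brownian motion, ⟨w⟩(T) = T, so (3.2.42) with p = 2 says E sup_{t≤T} w(t)² ≤ C T; by Doob's maximal inequality one can take C = 4. A Monte Carlo sketch (an added illustration; path and grid counts are arbitrary) confirms that the ratio stays between 1 and 4 uniformly over several horizons T, reflecting the scale invariance behind the dimension-free constant:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 2.0
for T in (0.5, 1.0, 2.0):
    n_paths, n_steps = 20000, 512
    dt = T / n_steps
    w = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
    sup_p = np.mean(np.max(np.abs(w), axis=1) ** p)   # E sup_{t<=T} |w(t)|^p
    ratio = sup_p / T ** (p / 2)                      # <w>(T) = T, so E<w>(T)^{p/2} = T^{p/2}
    assert 1.0 < ratio < 4.0    # Doob's L^2 inequality gives the constant 4 for p = 2
```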

3.2.4 Problems

Problem 3.2.1 outlines some computations related to Gaussian white noise on L²(R^d) and introduces regularization of white noise. Problem 3.2.2 is yet another version of the Kolmogorov continuity criterion, this time in terms of Gaussian measures. Problem 3.2.3 is an exercise on finding the reproducing kernel Hilbert space of a Gaussian measure. Problem 3.2.4 is yet another look at the standard Brownian motion, via the Karhunen-Loève expansion and the limiting distribution of a certain SPDE. Problem 3.2.6 is an example of computing the covariance operator of a Gaussian measure. Problem 3.2.5 leads to the notion of the Cameron-Martin space. Problem 3.2.7 introduces and investigates homogeneous random fields. Problem 3.2.8 provides an example of a homogeneous field: the Euclidean free field. Problem 3.2.9 investigates some properties of the cross variation of two martingales. Problem 3.2.10 establishes a useful technical result: completeness of the space of continuous square-integrable martingales with values in a Hilbert space.


Problem 3.2.1 Let Ẇ = Ẇ(x) be Gaussian white noise on L²(R^d).
(a) For f ∈ S(R^d), define the regular field Ẇ[f](x) (also known as a regularization of white noise) by

    Ẇ[f](x) = Ẇ( f(x − ·) ) = ∫_{R^d} f(x − y) dW(y).        (3.2.45)

Show that, for every g ∈ S(R^d),

    Ẇ[f](g) = Ẇ( f ⋆ g ),        (3.2.46)

where (f ⋆ g)(y) = ∫_{R^d} f(x − y) g(x) dx.
(b) Let φ be an element of S(R^d) such that φ(x) ≥ 0, φ(0) > 0, and ∫_{R^d} φ(x) dx = 1. For ε > 0, define φ_ε(x) = ε^{−d} φ(x/ε). Show that, for every f ∈ S(R^d),

    lim_{ε→0} E( Ẇ[φ_ε](f) − Ẇ(f) )² = 0.        (3.2.47)

(c) Let W_r(x) be the Gaussian field defined by (3.2.6) on page 103. Show that, for every f ∈ S(R^d),

    lim_{r→0} E( W_r(f) − Ẇ(f) )² = 0.        (3.2.48)

(d) How would you define a regularization of Gaussian white noise over L²(G) for G ≠ R^d?

Problem 3.2.2 Using Theorem 2.1.7 on page 28, prove the following version of Kolmogorov's continuity criterion: if K = K(x,y) is a positive-definite kernel on [0,1]^d × [0,1]^d and there exist real numbers α < 1 and C > 0 such that

    K(x,x) + K(y,y) − 2K(x,y) ≤ C |x − y|^{2α},

then, for every β < α, there exists a centered Gaussian measure μ on the space V = C^β([0,1]^d) (functions on [0,1]^d that are Hölder continuous of order β) such that

    ∫_V f(x) f(y) μ(df) = K(x,y).
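The defining covariance of white noise, E Ẇ(f)Ẇ(g) = (f,g)_{L²}, which underlies all the identities in Problem 3.2.1, can be verified on a grid. In the sketch below (an added illustration; the grid, the sample count, and the two Gaussian test functions are hypothetical stand-ins for elements of S(R)), white noise is discretized as √h ∑ᵢ f(xᵢ) ηᵢ with i.i.d. standard Gaussian ηᵢ:

```python
import numpy as np

rng = np.random.default_rng(3)
# Discretized Gaussian white noise on [-4, 4]: W(f) ~ sqrt(h) * sum_i f(x_i) eta_i
x = np.linspace(-4.0, 4.0, 401)
h = x[1] - x[0]
f = np.exp(-x**2)                       # test function (stand-in for S(R))
g = np.exp(-((x - 0.5)**2) / 2)         # another test function

eta = rng.normal(size=(20000, x.size))  # 20000 independent noise realizations
Wf = np.sqrt(h) * (eta @ f)
Wg = np.sqrt(h) * (eta @ g)
# covariance of white noise: E W(f)W(g) = (f, g) in L2(R)
assert abs(np.mean(Wf * Wg) - h * np.sum(f * g)) < 0.07
```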

Problem 3.2.3 Describe the reproducing kernel Hilbert space of a finite-dimensional Gaussian vector.
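One candidate answer to Problem 3.2.3 can be tested numerically: for a zero-mean Gaussian vector with covariance matrix K (viewed as a field on {1, …, n} with kernel K(i,j)), the reproducing kernel Hilbert space is the range of K with inner product (u,v)_H = uᵀK⁺v, where K⁺ is the Moore-Penrose pseudoinverse. A sketch (the specific matrices are hypothetical; this is an illustration, not the book's solution):

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.normal(size=(4, 2))
K = B @ B.T                      # a rank-2 covariance matrix on R^4
K_pinv = np.linalg.pinv(K)       # Moore-Penrose pseudoinverse

h = K @ rng.normal(size=4)       # an element of the candidate RKHS (range of K)
for i in range(4):
    k_i = K[:, i]                # "kernel section" K(., i)
    # reproducing property: (h, K(., i))_H = h(i)
    assert abs(h @ K_pinv @ k_i - h[i]) < 1e-8
```

The reproducing property holds because h = Kc gives hᵀK⁺Ke_i = cᵀKK⁺Ke_i = cᵀKe_i = h_i.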


Problem 3.2.4
(a) Show that the Karhunen-Loève expansion of the standard Brownian motion w = w(t) on [0,T] is

    w(t) = √(2T) ∑_{k≥1} [ sin( (k − 1/2)πt/T ) / ( (k − 1/2)π ) ] ξ_k,        (3.2.49)

where ξ_k are independent standard Gaussian random variables. Reconcile this result with representation (3.2.24) on page 115.
(b) Show that the right-hand side of (3.2.49) appears as the limit in distribution, as t → ∞, of the solution u = u(t,x) of the stochastic heat equation

    u_t(t,x) = u_xx(t,x) + √2 Ẇ(t,x),   t > 0, x ∈ (0,1),

with zero initial condition u(0,x) = 0 and boundary conditions u(t,0) = 0, u_x(t,1) = 0.

Problem 3.2.5 Let μ be the Wiener measure on the space V of functions continuous on [0,1] and equal to zero at t = 0. Find the corresponding covariance operator C_μ. Recall that the dual space of V is the collection of regular measures on [0,1]; see Dunford and Schwartz [42, Theorem IV.6.3].

Problem 3.2.6 Let B be a Gaussian white noise over a separable Hilbert space H, and let H̃ be a bigger Hilbert space such that the embedding operator j : H → H̃ is Hilbert-Schmidt.
(a) Verify that B extends to a regular Gaussian field over H̃, with the nuclear covariance operator K = j j*.
(b) Verify that the corresponding Gaussian measure μ on H̃ has the covariance operator C_μ such that C_μ(f,g) = (Kf, g)_{H̃} = (j* f, j* g)_H.
(c) Verify that H is a reproducing kernel Hilbert space with the reproducing kernel K_H = C_μ.

Problem 3.2.7 Let B be a zero-mean generalized Gaussian field over the space S(R^d). For a function f ∈ S(R^d) and h ∈ R^d, define f^h by f^h(x) = f(x + h). The field B is called homogeneous if, for every N ≥ 1, (f₁, …, f_N) ⊂ S(R^d) and h ∈ R^d, the vectors (B(f₁), …, B(f_N)) and (B(f₁^h), …, B(f_N^h)) have the same distribution.
(a) Show that B is homogeneous if and only if there exists a positive measure μ on (R^d, B(R^d)) with the following properties:
a. There exists a non-negative number p such that

    ∫_{R^d} μ(dx) / (1 + |x|²)^p < ∞;        (3.2.50)

b. For every f, g ∈ S(R^d),

    E[ B(f) B(g) ] = ∫_{R^d} f̂(y) conj(ĝ(y)) μ(dy),        (3.2.51)

where f̂, ĝ are the Fourier transforms of f, g, and conj(ĝ(y)) is the complex conjugate of ĝ(y). The measure μ is called the spectral measure of B.
(b) Show that Ẇ, the Gaussian white noise on L²(R^d) (see (3.2.10) on page 104), is homogeneous and, up to a constant, its spectral measure is the Lebesgue measure.
(c) We say that B is regular if there exists a zero-mean Gaussian field B = B(x) such that ∫_G E B²(x) dx < ∞ for every compact set G ⊂ R^d and B(f) = ∫_{R^d} f(x) B(x) dx for every f ∈ S(R^d).
a. Show that a homogeneous field over S(R^d) is regular if and only if its spectral measure is finite: μ(R^d) < ∞; equivalently, a finite μ means we can put p = 0 in (3.2.50).
b. Show that a non-trivial regular homogeneous field over S(R^d) CANNOT be extended to a regular field over L²(R^d).
A Comment Strictly speaking, the random fields discussed in this problem are translation invariant; a truly homogeneous field would require a modification of the definition to include an arbitrary isometry of R^d instead of only translations f^h(x) = f(x + h) (see Wong [227, Sect. 4]).

Problem 3.2.8 Let X be a homogeneous field over S(R^d) with the spectral measure μ(dy) = dy/(1 + |y|²) (see Problem 3.2.7). Such a field is called the Euclidean free field.
(a) For what d is X regular? Find the corresponding representation of X.
(b) Verify that X can be considered a Gaussian white noise over both H¹(R^d) and H^{−1}(R^d).

Problem 3.2.9 Let M, N be continuous square-integrable martingales with values in a separable Hilbert space H, and, with {m_k, k ≥ 1} denoting an orthonormal basis in H,

    M(t) = ∑_{k≥1} M_k(t) m_k,   N(t) = ∑_{k≥1} N_k(t) m_k.

Show that

    ⟨M,N⟩ = (1/4) ( ⟨M+N⟩ − ⟨M−N⟩ ),   ⟨M,N⟩ = ∑_{k≥1} ⟨M_k, N_k⟩,
    (M,N)_H − ⟨M,N⟩ is a martingale,   ⟨M,N⟩ = 0 if M, N are independent.


Problem 3.2.10 Show that the space of continuous square-integrable martingales with values in a separable Hilbert space and starting at zero is complete. In other words, let M_n = M_n(t), n ≥ 1, t ∈ [0,T], be continuous square-integrable martingales with values in a separable Hilbert space H, M_n(0) = 0, such that

    lim_{m,n→∞} E ⟨M_m − M_n⟩(T) = 0.

Show that there exists an H-valued continuous square-integrable martingale M such that M(0) = 0 and

    lim_{n→∞} E sup_{0≤t≤T} ‖M_n(t) − M(t)‖²_H = 0.

… and L = L(x) is the Lévy Brownian motion on R³ (see Exercise 2.1.10 on page 29).
(c) Investigate the properties of the solution when d > 3.

Problem 4.2.5 For each of the following equations on R^d, d ≥ 1,
(a) interpret the solution u as a generalized Gaussian field on a suitable Hilbert space and compute E[u(f)u(g)];
(b) determine whether the solution is a regular field:

    √(m² − Δ) u = Ẇ,   m ≥ 0;        (4.2.29)
    Δu = ∇ · Ẇ;        (4.2.30)
    (1 − Δ) u = Ḃ,        (4.2.31)


4 Linear Equations: Square-Integrable Solutions

where Ẇ = (Ẇ₁, …, Ẇ_d), with Ẇᵢ independent Gaussian white noises on L²(R^d), and Ḃ is a Gaussian white noise on H^{−1}(R^d).
A Comment The Euclidean free field can be defined as the solution of (4.2.29) with m > 0 and d ≥ 1, or the solution of (4.2.30) for d ≤ 3 [223, Proposition 9.4]. It also turns out that (4.2.30) is essentially the same as (4.2.29) with m = 0.

Problem 4.2.6 Show that the solution u of the equation √(1 − Δ) u = Ẇ in R^d is a Gaussian white noise on H¹(R^d).
A Comment In Example 4.2.3 on page 171 we interpreted u as a Gaussian white noise on H^{−1}(R^d). The reader is encouraged to think about these two different interpretations of the same object. To some extent, something similar happens in a normal triple (V, H, V′) of Hilbert spaces: an element of the dual space of V can be an element of V (when the duality is relative to the inner product in V), or an element of V′ (when the duality is relative to the inner product in H).

Problem 4.2.7 Find as many different constructions of the Euclidean free field as possible (in this text and other references), and investigate applications of this object in physics.

Problem 4.2.8 For d = 2, 3, let A be a partial differential operator that is the generator of the family of diffusion processes X^y = X^y(t), y ∈ R^d, X^y(0) = y, and let c = c(x) be a continuous non-negative function. Denote by K_A = K_A(x,y) the fundamental solution of the operator A − c in R^d, that is, for every smooth compactly supported f, the function

    U(x) = − ∫_{R^d} K_A(x,y) f(y) dy

is a classical solution of

    AU(x) − c(x)U(x) = f(x),   x ∈ R^d.

Let G be a smooth bounded domain in R^d, and, for x ∈ G,

    τ_x = inf{ t > 0 : X^x(t) ∉ G }.

Let Ẇ be a Gaussian white noise on L²(G), and assume that Ẇ is independent of all the processes X^y. Define

    U_A(x) = ∫_G K_A(x,y) dW(y)


and let u be the solution of

    Au − cu = Ẇ in G

with zero boundary conditions. Verify the following probabilistic representation of u:

    u(x) = U_A(x) − E[ U_A( X^x(τ_x) ) exp( −∫₀^{τ_x} c(X^x(t)) dt ) | F^W ],

where F^W is the sigma-algebra generated by Ẇ.

Problem 4.2.9 Let Ẇ be a Gaussian white noise on L²(R^d) and Δ the Laplace operator. Consider the equation

    (m² − Δ)^{γ/2} u = Ẇ

for m ≥ 0 and γ ∈ R. Is the solution u ever a regular field? Can it have any Hölder space regularity? Then answer the same questions for the solution of the equation

    (m² + c²|x|² − Δ)^{γ/2} u = Ẇ,   c ≥ 0.
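A quick numerical heuristic for the first equation in Problem 4.2.9 (an added sketch, assuming the criterion from Problem 3.2.7(c): the solution is a regular field exactly when its spectral density (m² + |y|²)^{−γ} is integrable over R^d, that is, when 2γ > d). The code compares truncated radial integrals ∫ r^{d−1}(m² + r²)^{−γ} dr at two cutoffs, so convergent cases stabilize while borderline cases keep growing:

```python
import numpy as np

# Radial part of the integral of the spectral density (m^2 + |y|^2)^(-gamma) over R^d.
def radial_tail(d, gamma, m=1.0, R1=1e3, R2=1e6, n=200000):
    r = np.geomspace(1.0, R2, n)
    integrand = r**(d - 1) * (m**2 + r**2)**(-gamma)
    # cumulative trapezoid rule on the logarithmic grid
    I = np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
    i1 = np.searchsorted(r, R1)
    return I[i1], I[-1]          # integral up to R1, integral up to R2

I1c, I2c = radial_tail(d=2, gamma=1.5)   # 2*gamma = 3 > d = 2: convergent
assert abs(I2c - I1c) / I2c < 1e-2       # tail beyond R1 is negligible
I1, I2 = radial_tail(d=2, gamma=1.0)     # 2*gamma = 2 = d: logarithmic divergence
assert I2 > 1.5 * I1                     # the integral keeps growing with the cutoff
```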

4.3 Stochastic Hyperbolic Equations

4.3.1 Existence and Uniqueness of Solution

Given two Banach spaces X, Y, recall the following notions for time-dependent operators and functions:
1. (X,Y)-measurable, (X,Y)-uniformly bounded, and differentiable families of operators: Definition 4.1.19, page 161.
2. The space H¹((0,T); X): page 162.

Now that we consider stochastic equations, we need to modify the corresponding definitions to include possible dependence on the elementary outcome. We fix the stochastic basis (Ω, F, {F_t}_{t≥0}, P) with the usual assumptions.

Definition 4.3.1 Let X, Y be separable Banach spaces and let A = {A(ω,t), ω ∈ Ω, 0 ≤ t ≤ T} be a family of mappings from X to Y. The family is called


• (X,Y)-adapted if, for every x ∈ X and ℓ ∈ Y′, the real-valued process t ↦ ℓ(A(t)x) is F_t-adapted;
• L_∞(X,Y)-uniformly bounded if there exists a positive (non-random) number C such that, for all x ∈ X, all t ∈ [0,T], and all ω ∈ Ω, ‖A(t)x‖_Y ≤ C‖x‖_X.

Take a collection {w_k, k ≥ 1} of independent standard Brownian motions and consider the following stochastic equation:

    ü = A(t)u + A₁(t)u + B(t)u̇ + f(t)
        + ∑_{k≥1} ( M_k(t)u + N_k(t)u̇ + g_k(t) ) ẇ_k(t),   0 < t ≤ T;        (4.3.1)
    u(0) = u₀,   u̇(0) = v₀,

in the normal triple (V, H, V′) of Hilbert spaces. This equation is constructed by taking the deterministic version (4.1.36) on page 162 and adding the corresponding stochastic term. Similar to the deterministic case, the reason for considering both A and A₁ becomes clear during the derivation of the a priori bound. To keep formulas from getting too congested, we will often omit the time dependence in all operators. Recall that [v,u], v ∈ V′, u ∈ V, denotes the duality between V and V′ relative to the inner product in H.

To define a square-integrable variational solution of equation (4.3.1), we make the following assumptions:
[HP1] The families of operators A = {A(t), 0 ≤ t ≤ T}, A₁ = {A₁(t), 0 ≤ t ≤ T}, and M_k = {M_k(t), 0 ≤ t ≤ T} are (V,V′)-adapted and L_∞(V,V′)-uniformly bounded.
[HP2] The families of operators B = {B(t), 0 ≤ t ≤ T} and N_k = {N_k(t), 0 ≤ t ≤ T} are (H,V′)-adapted and L_∞(H,V′)-uniformly bounded.
[HP3] The initial conditions are F₀-measurable (equivalently, independent of all w_k), u₀ ∈ L₂(Ω; V), v₀ ∈ L₂(Ω; H); the process f and each of the processes g_k are F_t-adapted and are elements of L₂(Ω × (0,T); V′).
[HP4] ∑_{k≥1} E ∫₀ᵀ ‖g_k(t)‖²_{V′} dt < ∞.
[HP5] For every v ∈ L₂(Ω × (0,T); V) and h ∈ L₂(Ω × (0,T); H),

    ∑_{k≥1} E ∫₀ᵀ ‖M_k(t)v‖²_{V′} dt < ∞,   ∑_{k≥1} E ∫₀ᵀ ‖N_k(t)h‖²_{V′} dt < ∞.        (4.3.2)

Definition 4.3.2 (Solution of Stochastic Hyperbolic Equations) A variational solution of (4.3.1) is a pair of F_t-adapted processes u = u(t), v = v(t) such that u ∈ L₂(Ω × (0,T); V), v ∈ L₂(Ω × (0,T); H),

    u(t) = u₀ + ∫₀ᵗ v(s) ds   in L₂(Ω × (0,T); H),

and

    v(t) = v₀ + ∫₀ᵗ Au(s) ds + ∫₀ᵗ A₁u(s) ds + ∫₀ᵗ Bv(s) ds + ∫₀ᵗ f(s) ds
           + ∑_{k≥1} ∫₀ᵗ ( M_k u(s) + N_k v(s) + g_k(s) ) dw_k(s)        (4.3.3)

in L₂(Ω × (0,T); V′).

To construct a solution of (4.3.1), we have to modify the assumptions as follows (the precise conditions on the input are in the statement of the theorem below):
[HP1′] The family of operators A = {A(t), 0 ≤ t ≤ T} is (V,V′)-adapted and L_∞(V,V′)-uniformly bounded.
[HP2′] The families of operators A₁ = {A₁(t), 0 ≤ t ≤ T} and M_k = {M_k(t), 0 ≤ t ≤ T} are (V,H)-adapted and L_∞(V,H)-uniformly bounded.
[HP3′] The families of operators B = {B(t), 0 ≤ t ≤ T} and N_k = {N_k(t), 0 ≤ t ≤ T} are (H,H)-adapted and L_∞(H,H)-uniformly bounded.
[HP4′] There exists a positive number C₀ such that, for every v ∈ L₂(Ω × (0,T); V) and h ∈ L₂(Ω × (0,T); H),

    ∑_{k≥1} E ∫₀ᵀ ‖M_k(t)v‖²_H dt ≤ C₀ E ∫₀ᵀ ‖v(s)‖²_V ds,
    ∑_{k≥1} E ∫₀ᵀ ‖N_k(t)h‖²_H dt ≤ C₀ E ∫₀ᵀ ‖h(s)‖²_H ds.
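The simplest instance of (4.3.1) can be explored numerically by the same Galerkin truncation that drives the existence proof below. In the added sketch that follows (all discretization choices are illustrative assumptions, not from the text), the stochastic wave equation u_tt = u_xx + Ẇ(t,x) on (0,π) with zero Dirichlet boundary conditions decouples in the sine basis into oscillators ü_k = −k²u_k + ẇ_k, and the Itô formula gives the expected energy E ∑_k (v_k² + k²u_k²)(t) = Nt for N modes:

```python
import numpy as np

rng = np.random.default_rng(5)
# Galerkin sketch for u_tt = u_xx + W'(t,x) on (0, pi), zero Dirichlet BCs.
# Sine modes decouple: u_k'' = -k^2 u_k + w_k'(t), with independent w_k.
# Ito's formula: E sum_k (v_k^2 + k^2 u_k^2)(t) = N*t for N modes.
N_modes, n_paths, n_steps, T = 8, 2000, 1000, 1.0
dt = T / n_steps
k2 = np.arange(1, N_modes + 1) ** 2
u = np.zeros((n_paths, N_modes))
v = np.zeros((n_paths, N_modes))
for _ in range(n_steps):
    # symplectic Euler step with additive noise injected into the velocity
    v += -k2 * u * dt + rng.normal(0.0, np.sqrt(dt), (n_paths, N_modes))
    u += v * dt
energy = np.mean(np.sum(v**2 + k2 * u**2, axis=1))
assert abs(energy - N_modes * T) < 0.6    # linear-in-time energy growth
```

The symplectic (semi-implicit) Euler step is chosen because the plain Euler scheme artificially inflates the oscillator energy.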

Before proceeding, the reader is encouraged to think about the differences between the two sets of conditions. After that, the reader can try to state and even prove the following theorem about existence and uniqueness of the solution, because all the necessary technical tools have already been presented. On the other hand, the reader who ignored Problems 4.1.4 and 4.1.5 might find the following a bit overwhelming.

Theorem 4.3.3 (Solvability of Hyperbolic Equations) In addition to [HP1′]–[HP5′], assume that
(i) [Au, v] = [u, Av] for all u, v ∈ V, all t ∈ [0,T], and all ω ∈ Ω;
(ii) there exist a positive number c_A and a real number M such that, for all u ∈ V, all t ∈ [0,T], and all ω ∈ Ω,

    [Au, u] + c_A ‖u‖²_V ≤ M ‖u‖²_H;        (4.3.4)

and


(iii) the family of operators A is differentiable for all ω ∈ Ω and the derivative of A is (V,V′)-adapted and L_∞(V,V′)-uniformly bounded.

Then, for every F₀-measurable initial conditions u(0) = u₀ ∈ L₂(Ω; V), u̇(0) = v₀ ∈ L₂(Ω; H), and F_t-adapted free terms f ∈ L₂(Ω × (0,T); H) and g_k satisfying ∑_k ‖g_k‖²_{L₂(Ω×(0,T);H)} < ∞, Eq. (4.3.1) has a solution. The solution is unique in L₂(Ω; C((0,T); V)) × L₂(Ω; C((0,T); H)) and

    ‖u‖²_{L₂(Ω;C((0,T);V))} + ‖v‖²_{L₂(Ω;C((0,T);H))}
      ≤ C(T) ( ‖u₀‖²_{L₂(Ω;V)} + ‖v₀‖²_{L₂(Ω;H)} + ‖f‖²_{L₂(Ω×(0,T);H)} + ∑_{k≥1} ‖g_k‖²_{L₂(Ω×(0,T);H)} ).        (4.3.5)

Proof Below, we present three steps: (1) derivation of the a priori bound (4.3.5); (2) construction of the solution using the Galerkin approximation; (3) proof of continuity of the solution. The proof of uniqueness is similar to the deterministic case (see Problem 4.1.6 on page 169) and is left to the reader. Step 1: The a priori Estimate (pages 186–188) Inequality (4.3.5) is equivalent to  E sup ku.t/k2V C E sup kv.t/k2H  C.T/ Eku0 k2V C Ekv0 k2H 0 0, we find the characteristic curve through the point .t0 ; x0 /: t.s/ D t0 C s, x.s/ D x0 C cs. The curve intersects the line t D 0 when s D t0 , and then x D x0  ct0 . This means u.t0 ; x0 / D '.x / D '.x0  ct0 /:

(4.4.50)

The result is the familiar formula for the solution: $u(t,x)=u(0,x-ct)$. The important feature of this derivation is that, to find the solution of the initial-value problem, we run the characteristic curve backward in time: from the current time $t=t_0$ to the initial time $t=0$. This time-reversal, that is, running the characteristic curve backward in time to solve an initial-value problem, will be essential for our analysis of equation (4.4.49).

As a further illustration of time reversal in the method of characteristics, let us review the probabilistic representation of the solution for deterministic parabolic equations. We start with the time-homogeneous case:

$$v_t=\frac12\sum_{i,j=1}^d a_{ij}(x)v_{x_ix_j}+\sum_{i=1}^d b_i(x)v_{x_i}+c(x)v+f(t,x),\quad t>0,\ x\in\mathbb{R}^d;\qquad v(0,x)=v_0(x). \tag{4.4.51}$$
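The backward-characteristic recipe (4.4.50) is easy to check numerically. The following sketch traces the characteristic through $(t_0,x_0)$ back to $t=0$ and then confirms, by central finite differences, that the resulting $u$ satisfies $u_t+cu_x=0$; the choices $\varphi(x)=\sin x$ and $c=2$ are illustrative assumptions, not from the text.

```python
import numpy as np

# Backward characteristics for the transport equation u_t + c*u_x = 0,
# u(0, x) = phi(x), as in (4.4.50).  phi and c are illustrative choices.
def solve_transport(t0, x0, c, phi):
    # characteristic through (t0, x0): t(s) = t0 + s, x(s) = x0 + c*s;
    # it hits t = 0 at s = -t0, where x = x0 - c*t0
    return phi(x0 - c * t0)

phi, c = np.sin, 2.0
u = lambda t, x: solve_transport(t, x, c, phi)

# finite-difference check that u_t + c*u_x vanishes (up to O(h^2))
h, t, x = 1e-5, 0.7, 0.3
u_t = (u(t + h, x) - u(t - h, x)) / (2 * h)
u_x = (u(t, x + h) - u(t, x - h)) / (2 * h)
residual = u_t + c * u_x
```

The residual is of order $h^2$, in line with $u(t,x)=\varphi(x-ct)$ being an exact solution.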

4.4 Stochastic Parabolic Equations

217

Theorem 4.4.19 Assume that

• the functions $a_{ij}, b_i, c$ are non-random, bounded, and uniformly Lipschitz continuous in $x$;
• the functions $f, v_0$ are non-random, continuous, and of polynomial growth in $x$;
• the function $v$ is a classical solution of (4.4.51) and is of polynomial growth in $x$.

Then

(a) there exists a square matrix $\sigma=(\sigma_{ik}(x)),\ i,k=1,\ldots,d$, with Lipschitz continuous entries such that $\sum_{k=1}^d\sigma_{ik}\sigma_{jk}=a_{ij}$;
(b) for every $x\in\mathbb{R}^d$, the system of stochastic ordinary differential equations

$$X_i(t,x)=x_i+\int_0^t b_i\big(X(s,x)\big)\,ds+\sum_{k=1}^d\int_0^t\sigma_{ik}\big(X(s,x)\big)\,dw_k(s) \tag{4.4.52}$$

has a unique strong solution [as usual, $(w_1,\ldots,w_d)$ are independent standard Brownian motions], and $X$ can serve as a stochastic characteristic for equation (4.4.51):

$$v(t,x)=\mathbb{E}\Big[v_0\big(X(t,x)\big)e^{\int_0^t c(X(s,x))\,ds}+\int_0^t e^{\int_0^s c(X(r,x))\,dr}f\big(t-s,X(s,x)\big)\,ds\Big]. \tag{4.4.53}$$

Proof Part (a) is a multi-dimensional analogue of the Lipschitz continuity of $f(t)=\sqrt{t}$ on every interval away from zero; for details, see, for example, Stroock and Varadhan [214, Theorem 5.2.2]. In Part (b), solvability of (4.4.52) follows from the global Lipschitz continuity of the coefficients. To establish (4.4.53), fix $T>0$, use $t$ to denote the time variable, apply the Itô formula to the function

$$F(t,x)=v\big(T-t,X(t,x)\big)e^{\int_0^t c(X(s,x))\,ds},\quad 0\le t\le T,$$

take the expectation on both sides, and notice that $F(0,x)=v(T,x)$ and $F(T,x)=v_0\big(X(T,x)\big)e^{\int_0^T c(X(s,x))\,ds}$. The result is equality (4.4.53). We need the polynomial growth assumption to be sure that all expectations are finite, and we need continuity of $f$ and $v_0$ to make sense out of expressions such as $f\big(t-s,X(s,x)\big)$. This concludes the proof of Theorem 4.4.19.

Note that

(i) Representation (4.4.53) is not canonical in the sense that the solution is represented in terms of $f(t-s,x)$ rather than $f(s,x)$;
(ii) The proof cannot be carried out if the functions $a_{ij}$, $b_i$ depend on time;

218

4 Linear Equations: Square-Integrable Solutions

(iii) The same characteristic equation represents the solution $v$ at the point $x$ for all $t\ge0$: if we got $v(t_1,x)$ from (4.4.53) and now want $v(t_2,x)$ for some $t_2>t_1$, we only need to solve (4.4.52) on $[t_1,t_2]$.

Items (i) and (ii) can be corrected by reversing the time and considering a backward equation for the characteristic:

$$Y_i(s,x;t)=x_i+\int_s^t b_i\big(Y(r,x;t)\big)\,dr+\sum_{k=1}^d\int_s^t\sigma_{ik}\big(Y(r,x;t)\big)\,\overleftarrow{d}w_k(r),\quad s\le t, \tag{4.4.54}$$

where, for fixed $t>0$ and $0\le s\le t$,

$$\int_s^t F(r)\,\overleftarrow{d}w(r)=\int_0^{t-s}F(t-r)\,d\big(w(t)-w(t-r)\big). \tag{4.4.55}$$

Equation (4.4.54) is a backward-in-time stochastic equation, which is reduced to the usual forward-in-time equation via a time change. [This is very different from backward stochastic differential equations, which are forward in time and seek an adapted solution given a terminal condition.] Note that, for fixed $t>0$ and $r\in[0,t]$, the process $\bar w(r)=w(t)-w(t-r)$ is a standard Brownian motion. Then

$$v(t,x)=\mathbb{E}\Big[v_0\big(Y(0,x;t)\big)e^{\int_0^t c(Y(s,x;t))\,ds}+\int_0^t e^{\int_s^t c(Y(r,x;t))\,dr}f\big(s,Y(s,x;t)\big)\,ds\Big]. \tag{4.4.56}$$

The resulting representation is canonical and easily extends to time-dependent coefficients $a,b,c$, but there is a price to pay: now, a new equation (4.4.54) must be solved for every new $t$ at which the solution of (4.4.51) is represented. Indeed, changing $t$ to $t_1$ changes the terminal condition in (4.4.54), meaning that the equation must be solved from scratch.

Exercise 4.4.20 (C) (a) Verify (4.4.56) by reversing time in (4.4.53). (b) Write an analogue of (4.4.56) when the coefficients $a,b,c$ in (4.4.51) depend on $t$.

If $f=0$ and $c=0$, then there is a complete analogy between (4.4.56) and (4.4.50): to find the solution of the initial value problem at a point $(t_0,x_0)$, $t_0>0$, we run the corresponding characteristic backward in time from the point $(t_0,x_0)$ and evaluate the initial condition at the point where the characteristic hits the hyperplane $t=0$ in the $(t,x)$ space.
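Representations like (4.4.53) translate directly into Monte-Carlo schemes: simulate many copies of the characteristic, weight by the exponential of the integrated zero-order coefficient, and average. The following sketch does this for a scalar heat equation with constant $c$; all concrete coefficients, the test point, and the sample sizes are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Monte-Carlo evaluation of a Feynman-Kac representation in the spirit of
# (4.4.53) with f = 0:  v(t,x) = E[ v0(X(t,x)) * exp(int_0^t c(X(s,x)) ds) ],
# where dX = b(X) dt + sigma(X) dw.  Coefficients below are illustrative.
def feynman_kac_mc(x, t, b, sigma, c, v0, n_paths=200_000, n_steps=200, seed=0):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    X = np.full(n_paths, float(x))
    log_w = np.zeros(n_paths)               # accumulates int_0^t c(X(s)) ds
    for _ in range(n_steps):
        log_w += c(X) * dt                  # weight update
        X += b(X) * dt + sigma(X) * rng.normal(0.0, np.sqrt(dt), n_paths)
    return float(np.mean(v0(X) * np.exp(log_w)))

# v_t = v_xx/2 - v/2, v(0,x) = x^2 has exact solution v(t,x) = e^{-t/2}(x^2 + t)
est = feynman_kac_mc(1.5, 0.25,
                     b=lambda x: 0.0 * x, sigma=lambda x: 1.0 + 0.0 * x,
                     c=lambda x: -0.5 + 0.0 * x, v0=lambda x: x ** 2)
exact = float(np.exp(-0.125) * 2.5)
```

With the fixed seed, the estimate agrees with the exact value to a few parts in a thousand, reflecting the $O(1/\sqrt{N})$ Monte-Carlo error.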


If we insist on keeping (4.4.52) as a characteristic, then define

$$U(t,x)=\mathbb{E}\Big[v_0\big(X(T-t)\big)e^{\int_t^T c(X(s))\,ds}+\int_t^T e^{\int_t^s c(X(r))\,dr}f\big(s,X(s)\big)\,ds\Big]. \tag{4.4.57}$$

The function $U$ satisfies

$$-U_t=\frac12\sum_{i,j=1}^d a_{ij}(x)U_{x_ix_j}+\sum_{i=1}^d b_i(x)U_{x_i}+c(x)U+f(t,x), \tag{4.4.58}$$

$0\le t<T$, $x\in\mathbb{R}^d$, $U(T,x)=v_0(x)$ (a backward parabolic equation): see Karatzas and Shreve [103, Theorem 5.7.6].

Exercise 4.4.21 (B) Find a backward in time deterministic PDE whose solution can be represented using (4.4.54), and write the corresponding representation.

Some of the problems that can be addressed using probabilistic representations are

1. Maximum principle for the PDE. For example, if the initial condition $v_0$ and the free term $f$ are non-negative, then (4.4.53) implies that the solution of (4.4.51) remains non-negative for all $t>0$;
2. Relaxing conditions on the coefficients in the equations. For example, the right-hand side of (4.4.53) makes sense even if the stochastic equation (4.4.52) has a unique weak, rather than strong, solution.
3. Numerical solution of the PDE by Monte-Carlo simulations.

Let us summarize what we discussed so far:

(a) The canonical representation by the method of characteristics is of the forward-backward or backward-forward type: it requires one of the equations (either the PDE or the ODE) to be backward in time;
(b) In this canonical representation, a different characteristic is necessary for every point at which the solution is represented.
(c) Forward-forward and backward-backward representations are not canonical, but can have a computational advantage because one characteristic equation works for all $t$.

Now we are ready to represent the solution of (4.4.49) using the method of characteristics.

Theorem 4.4.22 Assume that

• the functions $a_{ij}, b_i, c, \sigma_{ik}, \nu_k$ are non-random, bounded, continuous, and uniformly Lipschitz continuous in $x$;
• the functions $f, v_0, g_k$ are non-random, continuous, and of polynomial growth in $x$;
• the function $u$ is a classical solution of (4.4.49) and is of polynomial growth in $x$;


• the following technical conditions hold:

$$\sum_{k\ge1}\sup_{t,x}|\nu_k(t,x)|<\infty, \tag{4.4.59}$$

$$\sum_{k\ge1}\sup_{t,x}|g_k(t,x)|<\infty, \tag{4.4.60}$$

$$2a_{ij}(t,x)=\sum_{k\ge1}\sigma_{ik}(t,x)\sigma_{jk}(t,x)+\sum_{\ell=1}^d\widetilde\sigma_{i\ell}(t,x)\widetilde\sigma_{j\ell}(t,x), \tag{4.4.61}$$

and the functions $\widetilde\sigma_{i\ell}(t,x)$ are non-random, bounded, continuous, and uniformly Lipschitz continuous in $x$.

For fixed $t>0$ and $x\in\mathbb{R}^d$, consider the following stochastic ordinary differential equation in $\mathbb{R}^d$:

$$X_i(s,x;t)=x_i+\int_s^t B_i\big(r,X(r,x;t)\big)\,dr+\sum_{k\ge1}\int_s^t\sigma_{ik}\big(r,X(r,x;t)\big)\,\overleftarrow{d}w_k(r)+\sum_{\ell=1}^d\int_s^t\widetilde\sigma_{i\ell}\big(r,X(r,x;t)\big)\,\overleftarrow{d}\widetilde w_\ell(r), \tag{4.4.62}$$

$0<s\le t$, $i=1,\ldots,d$, where $B_i=b_i-\sum_{k\ge1}\sigma_{ik}\nu_k$; $\widetilde w_k,\ k\ge1$, are independent standard Wiener processes independent of $w_k,\ k\ge1$; and $\overleftarrow{d}w$ is defined in (4.4.55). Then (4.4.62) has a unique strong solution and

$$u(t,x)=\mathbb{E}\bigg(\int_0^t f\big(s,X(s,x;t)\big)\eta(s,x;t)\,ds+\sum_{k\ge1}\int_0^t g_k\big(s,X(s,x;t)\big)\eta(s,x;t)\,\overleftarrow{d}w_k(s)+u_0\big(X(0,x;t)\big)\eta(0,x;t)\,\Big|\,\mathcal{F}_t^w\bigg), \tag{4.4.63}$$

where $\mathcal{F}_t^w$ is the sigma-algebra generated by $w_k(s),\ k\ge1,\ 0<s<t$, and

$$\eta(s,x;t)=\exp\bigg(\int_s^t c\big(r,X(r,x;t)\big)\,dr+\sum_{k\ge1}\int_s^t\nu_k\big(r,X(r,x;t)\big)\,\overleftarrow{d}w_k(r)-\frac12\int_s^t\sum_{k\ge1}\nu_k^2\big(r,X(r,x;t)\big)\,dr\bigg).$$


Proof We begin with some general comments. The result is of the forward-backward type: we have a representation of the solution of a forward in time SPDE using a backward in time SODE. Representation (4.4.63) holds for all $t\ge0$ and $x\in\mathbb{R}^d$, but to get it, we have to fix $t$ and $x$ and then solve (4.4.62) from $s=t$ to $s=0$. Having the matrix $2a_{ij}-\sum_k\sigma_{ik}\sigma_{jk}$ uniformly positive definite is sufficient for (4.4.61) to hold: see Stroock and Varadhan [214, Theorem 5.2.2]. Conditions (4.4.59) and (4.4.60) ensure convergence of all infinite sums. Existence and uniqueness of the strong solution of (4.4.62) follows from boundedness and uniform Lipschitz continuity of the coefficients. Since a weak solution will also work, conditions on the coefficients can be relaxed. Polynomial growth of $u_0,u,f,g_k$ is necessary to take expectations after applying the Itô formula. The proof of (4.4.63) is carried out by reducing the SPDE to a deterministic equation and applying (4.4.56). Here is an outline:

• Since $t$ is fixed in (4.4.63), it will be convenient to set $t=T$ in (4.4.63), and use $t$ as the time variable.
• Define

$$Y(T,x)=\int_0^T f\big(s,X(s,x;T)\big)\eta(s,x;T)\,ds+\sum_{k\ge1}\int_0^T g_k\big(s,X(s,x;T)\big)\eta(s,x;T)\,\overleftarrow{d}w_k(s)+u_0\big(X(0,x;T)\big)\eta(0,x;T). \tag{4.4.64}$$

• Let $h=(h_1(s),\ldots,h_N(s)),\ 0<s<T$, be a collection of smooth functions; $N$ is arbitrary but finite if there are infinitely many Wiener processes in the SPDE (4.4.49); otherwise, $N$ is the number of those Wiener processes.
• Define

$$\mathcal{E}_h(t)=\exp\bigg(\sum_{k=1}^N\int_0^t h_k(s)\,dw_k(s)-\frac12\sum_{k=1}^N\int_0^t h_k^2(s)\,ds\bigg)$$

and write $\mathcal{E}_h=\mathcal{E}_h(T)$. Note that $\mathcal{E}_h(t)=\mathbb{E}(\mathcal{E}_h|\mathcal{F}_t^w)$ and $d\mathcal{E}_h(t)=\mathcal{E}_h(t)\sum_{k=1}^N h_k(t)\,dw_k(t)$.
• The following result will be necessary: if $\zeta\in L_2(\Omega)$, $\zeta$ is $\mathcal{F}_T^w$-measurable, and $\mathbb{E}(\zeta\mathcal{E}_h)=0$ for every $h$ as above, then $\mathbb{E}\zeta^2=0$ [181, Lemma 4.3.2]. In other words, any $\mathcal{F}_T^w$-measurable square-integrable random variable $\zeta$ is completely characterized by the collection of numbers $\zeta_h=\mathbb{E}(\zeta\mathcal{E}_h)$:

$$\mathbb{E}(\zeta\mathcal{E}_h)=\mathbb{E}(\vartheta\mathcal{E}_h)\ \text{for all}\ h\quad\Longleftrightarrow\quad\zeta=\vartheta\ \text{with probability one}. \tag{4.4.65}$$


We refer to this result as completeness of the system $\{\mathcal{E}_h\}$. To some extent, this is an analogue of saying that equality of the Fourier transforms implies equality of functions.

• By (4.4.65), equality (4.4.63) is equivalent to

$$\mathbb{E}\big(u(T,x)\mathcal{E}_h\big)=\mathbb{E}\big(\mathcal{E}_h\,\mathbb{E}(Y(T,x)|\mathcal{F}_T^w)\big). \tag{4.4.66}$$

So, all we need is to establish (4.4.66).
• To prove (4.4.66), define $U_h(s,x)=\mathbb{E}\big(u(s,x)\mathcal{E}_h\big)$. Since $u$ is $\mathcal{F}_t^w$-adapted, we have

$$U_h(t,x)=\mathbb{E}\big(u(t,x)\mathcal{E}_h\big)=\mathbb{E}\big(u(t,x)\mathbb{E}(\mathcal{E}_h|\mathcal{F}_t^w)\big)=\mathbb{E}\big(u(t,x)\mathcal{E}_h(t)\big).$$

By the Itô formula,

$$\frac{\partial U_h}{\partial t}=\sum_{i,j=1}^d a_{ij}\frac{\partial^2U_h}{\partial x_i\partial x_j}+\sum_{i=1}^d b_i\frac{\partial U_h}{\partial x_i}+c\,U_h+f+\sum_{k=1}^N\bigg(\sum_{i=1}^d\sigma_{ik}\frac{\partial U_h}{\partial x_i}+\nu_kU_h+g_k\bigg)h_k,$$

with initial condition $U_h|_{t=0}=u_0$.
• Introduce a new probability measure $d\mathbb{P}_T^0=\mathcal{E}_h\,d\mathbb{P}_T$, where $\mathbb{P}_T$ is the restriction of $\mathbb{P}$ to $\mathcal{F}_T^w$. Then, with $\mathbb{E}^0$ denoting the corresponding expectation,

$$U_h(T,x)=\mathbb{E}^0Y(T,x), \tag{4.4.67}$$

where $Y(t,x)$ is defined in (4.4.64) and where we use the Girsanov theorem [103, Theorem 3.5.1] and the probabilistic representation (4.4.56) for deterministic PDEs. In particular, it is the Girsanov theorem that produces the modified drift in (4.4.49).
• To complete the proof, we note that

$$\mathbb{E}^0Y(T,x)=\mathbb{E}\big(\mathcal{E}_hY(T,x)\big)=\mathbb{E}\big(\mathcal{E}_h\,\mathbb{E}(Y(T,x)|\mathcal{F}_T^w)\big),$$

recall that $U_h(T,x)=\mathbb{E}\big(u(T,x)\mathcal{E}_h\big)$, and get (4.4.66) from (4.4.67).

This concludes the proof of Theorem 4.4.22.

Exercise 4.4.23 (C) (a) In (4.4.49), assume that $f\ge0$, $u_0\ge0$, and $g_k=0$. Show that $u\ge0$ (this is a version of the maximum principle for (4.4.49)). (b) Verify that setting $\sigma$, $\nu$, and $g$ to zero in (4.4.63) results in (4.4.56).


4.4.4 Probabilistic Representation of the Solution, Part II: Measure-Valued Solutions and the Filtering Problem

Measure-valued solutions were first introduced on page 47. These solutions are considered for a special type of linear parabolic equations involving formal adjoint operators, and they lead to a new type of existence/uniqueness result (because the collection of measures is not a Hilbert space). In this section, we construct measure-valued solutions for stochastic parabolic equations and apply the result to the problem of optimal nonlinear filtering of diffusion processes.

If $A$ is a partial differential operator in $\mathbb{R}^d$, then its formal adjoint $A^\top$ is defined by the equality $(Af,g)_{L_2(\mathbb{R}^d)}=(f,A^\top g)_{L_2(\mathbb{R}^d)}$ for all smooth compactly supported functions $f,g$. For example, if, in $\mathbb{R}$, $Af(x)=a(x)f''(x)$, then $A^\top g(x)=(a(x)g(x))''$.

Exercise 4.4.24 (C) Verify that if

$$Af=\frac12\sum_{i,j=1}^d a_{ij}(\omega,t,x)\frac{\partial^2f}{\partial x_i\partial x_j}+\sum_{i=1}^d b_i(\omega,t,x)\frac{\partial f}{\partial x_i}+c(\omega,t,x)f, \tag{4.4.68}$$

then

$$A^\top g=\frac12\sum_{i,j=1}^d\frac{\partial^2\big(a_{ij}(\omega,t,x)g\big)}{\partial x_i\partial x_j}-\sum_{i=1}^d\frac{\partial\big(b_i(\omega,t,x)g\big)}{\partial x_i}+c(\omega,t,x)g. \tag{4.4.69}$$

Our objective is to study measure-valued solutions for stochastic partial differential equations of the form

$$u_t=A^\top u+\sum_{k\ge1}M_k^\top u\,\dot w_k(t),\quad 0<t\le T, \tag{4.4.70}$$

where $A$ is from (4.4.68) and

$$M_ku=\sum_{i=1}^d\sigma_{ik}(\omega,t,x)\frac{\partial u}{\partial x_i}+\nu_k(\omega,t,x)u,\qquad M_k^\top u=-\sum_{i=1}^d\frac{\partial\big(\sigma_{ik}(\omega,t,x)u\big)}{\partial x_i}+\nu_k(\omega,t,x)u. \tag{4.4.71}$$


To define the measure-valued solution of (4.4.70) we assume that

• all coefficients are measurable and $\mathcal{F}_t$-adapted;
• the initial condition $u(0,\cdot)=\mu_0$ is a finite positive measure on $\mathbb{R}^d$;
• $\sup_{\omega,t,x}\sum_k\sigma_{ik}^2(\omega,t,x)<\infty$ for all $i=1,\ldots,d$, and $\sup_{\omega,t,x}\sum_k\nu_k^2(\omega,t,x)<\infty$.

Definition 4.4.25 A collection of random measures $\mu_t=\mu_t(\omega,dx)$ on $\mathbb{R}^d$ with the Borel sigma-algebra is called a measure-valued solution of (4.4.70) if

1. $\mu_t$ is a positive measure for every $(\omega,t)\in\Omega\times[0,T]$: $\mu_t[A]\ge0$ for all Borel subsets $A$ of $\mathbb{R}^d$;
2. $\mu_t$ is a finite measure on $\mathbb{R}^d$ for every $(\omega,t)\in\Omega\times[0,T]$:

$$\mu_t[\mathbb{R}^d]:=\int_{\mathbb{R}^d}\mu_t(dx)<\infty;$$

3. For every $T>0$, $\int_0^T\mathbb{E}\big|\mu_t[\mathbb{R}^d]\big|^2\,dt<\infty$;
4. For every bounded measurable function $f$, $\mu_t[f]=\int_{\mathbb{R}^d}f(x)\,\mu_t(dx)$ is an $\mathcal{F}_t$-adapted continuous process;
5. For every smooth compactly supported function $f=f(x)$,

$$\mu_t[f]=\mu_0[f]+\int_0^t\mu_s\big[(Af)(s,\cdot)\big]\,ds+\sum_{k\ge1}\int_0^t\mu_s\big[(M_kf)(s,\cdot)\big]\,dw_k(s). \tag{4.4.72}$$

The following theorem is the main result about existence, uniqueness, and representation of the measure-valued solution for equation (4.4.70).

Theorem 4.4.26 Assume that

• the functions $a_{ij}, b_i, \sigma_{ik}, \nu_k$ are adapted, bounded, continuous in $(t,x)$, and uniformly Lipschitz continuous in $x$;
• the initial condition $u_0$ is a probability measure $\mu_0$ on $\mathbb{R}^d$;
• the following technical conditions hold:

$$\sum_{k\ge1}\sup_{\omega,t,x}|\nu_k(\omega,t,x)|<\infty, \tag{4.4.73}$$

$$a_{ij}(\omega,t,x)=\sum_{k\ge1}\sigma_{ik}(\omega,t,x)\sigma_{jk}(\omega,t,x)+\sum_{\ell=1}^d\widetilde\sigma_{i\ell}(\omega,t,x)\widetilde\sigma_{j\ell}(\omega,t,x), \tag{4.4.74}$$

and the functions $\widetilde\sigma_{i\ell}(\omega,t,x)$ are adapted, bounded, continuous, and uniformly Lipschitz continuous in $x$.


Define the process $X=X(t)\in\mathbb{R}^d$ by

$$X_i(t)=X_i(0)+\int_0^tB_i\big(s,X(s)\big)\,ds+\sum_{k\ge1}\int_0^t\sigma_{ik}\big(s,X(s)\big)\,dw_k(s)+\sum_{\ell=1}^d\int_0^t\widetilde\sigma_{i\ell}\big(s,X(s)\big)\,d\widetilde w_\ell(s), \tag{4.4.75}$$

where $B_i=b_i-\sum_{k\ge1}\sigma_{ik}\nu_k$ and $\widetilde w_k,\ k\ge1$, are independent standard Wiener processes, independent of $\{\mathcal{F}_t\}_{0\le t\le T}$.

$$du=A^\top u\,dt+\sum_{k=1}^mM_k^\top u\,dY_k, \tag{4.4.83}$$

with initial condition $\mu_0$ equal to the probability distribution of $X(0)$.

Proof Define the function

$$Z(t)=\exp\bigg(\sum_{k=1}^m\int_0^th_k\big(s,X(s)\big)\,dY_k(s)-\frac12\sum_{k=1}^m\int_0^th_k^2\big(s,X(s)\big)\,ds\bigg) \tag{4.4.84}$$

and a new probability measure $\bar{\mathbb{P}}$ by

$$Z(T)\,d\bar{\mathbb{P}}=d\mathbb{P}. \tag{4.4.85}$$

Under the new measure $\bar{\mathbb{P}}$,

• The process $Y$ is an $m$-dimensional standard Brownian motion, independent of $\widetilde w_\ell$ (this is Girsanov's theorem);
• The process $X$ satisfies

$$dX_i=\Big(b_i-\sum_{k=1}^n\sigma_{ik}h_k\Big)dt+\sum_{k=1}^n\sigma_{ik}\,dY_k+\sum_{\ell=1}^m\widetilde\sigma_{i\ell}\,d\widetilde w_\ell \tag{4.4.86}$$

(in the original equation for $X$, replace $dw_k$ with $dY_k-h_k\,dt$);
• The distribution of $X(0)$ does not change.

If $\bar{\mathbb{E}}$ denotes the expectation with respect to $\bar{\mathbb{P}}$, then direct computations show that, for every bounded measurable function $f=f(x)$, $x\in\mathbb{R}^d$,

$$\mathbb{E}\big(f(X(t))\,\big|\,Y(s),0\le s\le t\big)=\frac{\bar{\mathbb{E}}\big(Z(t)f(X(t))\,\big|\,Y(s),0\le s\le t\big)}{\bar{\mathbb{E}}\big(Z(t)\,\big|\,Y(s),0\le s\le t\big)}. \tag{4.4.87}$$

Indeed, the martingale property of $Z$ under $\bar{\mathbb{P}}$ allows us to replace $Z(t)$ with $Z(T)$ on the right-hand side of (4.4.87) [details are in Exercise 4.4.29 below]. Writing $Z=Z(T)$, $f=f\big(X(t)\big)$, and $\mathcal{Y}$ for the sigma-algebra generated by $Y(s),\ 0\le s\le t$, we now need to show that

$$\bar{\mathbb{E}}(Z|\mathcal{Y})\,\mathbb{E}(f|\mathcal{Y})=\bar{\mathbb{E}}(fZ|\mathcal{Y}). \tag{4.4.88}$$


Note that

$$\bar{\mathbb{E}}(Z|\mathcal{Y})\,\mathbb{E}(f|\mathcal{Y})=\bar{\mathbb{E}}\big(Z\,\mathbb{E}(f|\mathcal{Y})\,\big|\,\mathcal{Y}\big).$$

Then, for every bounded $\mathcal{Y}$-measurable random variable $\zeta$,

$$\bar{\mathbb{E}}\big(\zeta Z\,\mathbb{E}(f|\mathcal{Y})\big)=\mathbb{E}\big(\zeta\,\mathbb{E}(f|\mathcal{Y})\big)=\mathbb{E}(\zeta f)$$

and

$$\bar{\mathbb{E}}\big(\zeta\,\bar{\mathbb{E}}(fZ|\mathcal{Y})\big)=\bar{\mathbb{E}}(\zeta Zf)=\mathbb{E}(\zeta f),$$

which implies (4.4.88).

Applying (4.4.76) with $\bar{\mathbb{E}}$ instead of $\mathbb{E}$, with the sigma-algebra generated by $Y(s)$, $0\le s\le t$, in place of $\mathcal{F}_t$, and with $w=Y$, $\nu=h$, and $\eta=Z$, we conclude that

$$\bar{\mathbb{E}}\big(Z(t)f(X(t))\,\big|\,Y(s),0\le s\le t\big)=\mu_t[f],$$

where $\mu_t$ is the measure-valued solution of (4.4.83). This concludes the proof of Theorem 4.4.27.

Exercise 4.4.28 (B) (a) Verify that, in general, $\mu_t[\mathbb{R}^d]\ne1$ for $t>0$. (b) Verify that $\mu_t[\mathbb{R}^d]=1$ for all $t\ge0$ if $h_k=0$.

Hint. In both cases, integrate (4.4.83) over $\mathbb{R}^d$ and note that $A\mathbf{1}=0$ [$A\mathbf{1}$ is the operator $A$ applied to the constant function $f(x)=1$], and $M_k\mathbf{1}=0$ if $h_k=0$.

Exercise 4.4.29 (B) Verify that

$$\bar{\mathbb{E}}\big(f(X(t))Z(t)\,\big|\,Y(s),0\le s\le t\big)=\bar{\mathbb{E}}\big(f(X(t))Z(T)\,\big|\,Y(s),0\le s\le t\big).$$

Hint. Condition on the sigma-algebra $\mathcal{F}_t^{X,Y}$ generated by both $Y(s)$ and $X(s)$, $0\le s\le t$, and argue that $Z(t)$ is a martingale with respect to $\bar{\mathbb{P}}$ and $\mathcal{F}_t^{X,Y}$.
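The ratio (4.4.87) underlies particle methods for nonlinear filtering: propagate many copies of the signal and weight each by the exponential functional (4.4.84) evaluated along the observed path. The following sketch is a minimal weighted-particle scheme for a scalar model; the functions b, sigma, h, the synthetic observations, and all parameters are illustrative assumptions, not from the text.

```python
import numpy as np

# Weighted-particle approximation of (4.4.87): each particle follows the
# signal dynamics and carries the weight Z from (4.4.84).
def particle_estimate(dY, dt, x0, b, sigma, h, n_particles=5000, seed=1):
    rng = np.random.default_rng(seed)
    X = np.full(n_particles, float(x0))
    logZ = np.zeros(n_particles)
    for inc in dY:                          # observation increments dY_k
        hX = h(X)
        logZ += hX * inc - 0.5 * hX ** 2 * dt
        X += b(X) * dt + sigma(X) * rng.normal(0.0, np.sqrt(dt), n_particles)
    Z = np.exp(logZ - logZ.max())           # stabilized unnormalized weights
    return float(np.sum(Z * X) / np.sum(Z)) # estimate of E[X(t) | Y]

# sanity check: with h = 0 the observations carry no information, so the
# estimate reduces to the prior mean of X(t), here x0 = 2.0
rng = np.random.default_rng(7)
dt, n = 0.01, 10
dY = rng.normal(0.0, np.sqrt(dt), n)
est = particle_estimate(dY, dt, x0=2.0,
                        b=lambda x: 0.0 * x, sigma=lambda x: 1.0 + 0.0 * x,
                        h=lambda x: 0.0 * x)
```

Subtracting the maximum of the log-weights before exponentiating is a standard device to avoid overflow; the normalization in the returned ratio cancels the shift.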

4.4.5 Further Directions

Of the three types (elliptic, hyperbolic, parabolic), parabolic equations have been studied the most. This could be because of the applications of such equations to nonlinear filtering of diffusion processes and to the study of particle systems. This could also be because the heat semigroup is analytically much better behaved than the wave semigroup, and, unlike their elliptic counterparts, parabolic equations can use the


full power of the Itô calculus. Given the vast literature on the subject of stochastic parabolic equations, we will only mention a few selected topics.

Our Theorems 4.4.3 and 4.4.12 give a more-or-less complete description of the Hilbert space theory for linear stochastic parabolic equations. The main drawback of this theory is its inability to provide optimal regularity of the solution. For example, consider the equation driven by space-time white noise,

$$u_t=u_{xx}+\dot W(t,x),\quad x\in(0,\pi),$$

with zero initial and boundary conditions. Our study of the closed-form solutions showed that $u$ is almost $1/2$ Hölder continuous in space and almost $1/4$ Hölder continuous in time; see (2.3.19), page 56. On the other hand, applying Theorem 4.4.3 in the normal triple $\big(H^{\gamma+1}((0,\pi)),H^{\gamma}((0,\pi)),H^{\gamma-1}((0,\pi))\big)$ with $\gamma<-1/2$ does not even yield continuity of $u$: with $g_k(x)=\sqrt{2/\pi}\,\sin(kx)$, we cannot take $\gamma\ge-1/2$. It would be nice if we could extend Theorem 4.4.3 to the Sobolev spaces $H^{\gamma}_p((0,\pi))$ for $p>2$: then the Sobolev embedding theorem will give the expected continuity in $x$. This extension is possible, but requires a lot of additional analytical developments: see Krylov. It is also possible to study stochastic parabolic equations directly in Hölder spaces. For the semigroup approach, see Da Prato and Zabczyk.

Beside the usual Gaussian white noise, popular random perturbations for parabolic equations are fractional Brownian motion and Lévy-type noise. In this connection, we mention several papers dealing with fractional Brownian motion [41, 156, 179, 217] and the book by Peszat and Zabczyk dealing with Lévy-type noise.

There are several books dedicated exclusively to nonlinear filtering of diffusion processes: Bain and Crisan, Kallianpur, and Xiong. The basic SPDE aspects of the problem are also discussed in Krylov [120, Sect. 8.1] and Rozovskii [199, Chap. 6].
With all these references in mind, our discussion of the filtering problem does not go beyond the very basic result of Theorem 4.4.27.

4.4.6 Problems

Problem 4.4.1 investigates one particular aspect of Theorem 4.4.3: a possibility to relax the regularity condition for the stochastic part of the equation. Problem 4.4.2 suggests an alternative conclusion for Theorem 4.4.3. Problem 4.4.3 investigates possible modifications of the stochastic parabolicity condition. Problem 4.4.4 is about a specific degenerate parabolic equation. Problem 4.4.5 connects stochastic parabolic equations with an infinite system of independent Ornstein-Uhlenbeck processes. Problem 4.4.6 lists several specific parabolic equations in one space variable.


Problem 4.4.1 Consider the equation

$$du=u_{xx}\,dt+g(x)\,dw(t),\quad 0<t\le T,\ x\in\mathbb{R}.$$

Show that if $g\in H^{-1}(\mathbb{R})$, then we cannot guarantee that the solution $u$ will be an element of $L_2\big(\Omega\times(0,T);H^1(\mathbb{R})\big)$.

Problem 4.4.2 Recall the space $\mathcal{H}$ ((4.4.15), page 203). Show that, for Eq. (4.4.1), the solution operator $(u_0,f,\{g_k\})\mapsto u$ is a homeomorphism from $L_2(\Omega;H)\times L_2\big(\Omega\times(0,T);V'\big)\times L_2\big(\Omega\times(0,T);\ell_2(H)\big)$ to $\mathcal{H}$. With $\ell_2$ denoting the space of square-integrable sequences,

$$\|\{g_k\}\|^2_{L_2(\Omega\times(0,T);\ell_2(H))}=\sum_{k\ge1}\mathbb{E}\int_0^T\|g_k(t)\|^2_H\,dt.$$

Problem 4.4.3 Investigate the possibility of replacing condition (4.4.4) with one of the following:

$$2\int_0^t[A(s)v(s),v(s)]\,ds+\sum_{k\ge1}\int_0^t\|M_kv(s)\|^2_H\,ds\le-c_A\int_0^t\|v(s)\|^2_V\,ds+C_M\int_0^t\|v(s)\|^2_H\,ds;$$

$$2\int_0^t\mathbb{E}[A(s)v(s),v(s)]\,ds+\sum_{k\ge1}\int_0^t\mathbb{E}\|M_k(s)v(s)\|^2_H\,ds\le-c_A\int_0^t\mathbb{E}\|v(s)\|^2_V\,ds+C_M\int_0^t\mathbb{E}\|v(s)\|^2_H\,ds.$$

Of course, the continuity conditions on $A$ and $M_k$ should also be modified accordingly.

Problem 4.4.4 Create an analogue of the table on page 210 for the equation

$$u_t=\big(a(t,x)u_x\big)_x+b(t,x)u_x+c(t,x)u+f(t,x)+\big(\sigma(t,x)u_x+\nu(t,x)u+g(t,x)\big)\dot W^Q(t,x),\quad 0<t\le T,\ x\in\mathbb{R},$$

in the normal triple $\big(H^{r+1}(\mathbb{R}),H^r(\mathbb{R}),H^{r-1}(\mathbb{R})\big)$, $r\in\mathbb{R}$, where $W^Q$ is the $Q$-cylindrical Brownian motion, written in the basis of Hermite functions $\mathfrak{w}_k$ (see (4.1.50) on page 167) as follows:

$$W^Q(t,x)=\sum_{k\ge1}\sqrt{q_k}\,w_k(t)\,\mathfrak{w}_k(x).$$


Problem 4.4.5 Let $w_k,\ k\ge1$, be independent standard Brownian motions and let $a_k,b_k,\sigma_k,\ k\ge1$, be real numbers. Consider a collection of processes $u_k,\ k\ge1$, defined by

$$a_k\dot u_k(t)=b_ku_k(t)+\sigma_k\dot w_k(t),\quad 0<t\le T. \tag{4.4.89}$$

Find conditions on the numbers $a_k,b_k,\sigma_k$ and the initial data $u_k(0)$ to have

$$\mathbb{E}\sup_{0<t<T}\sum_{k\ge1}u_k^2(t)<\infty.$$

If $\alpha_{k_1}>1$, then it is followed by $\max(0,\alpha_{k_1}-1)$ entries with the same value. The next entry after that is the index of the second non-zero element of $\alpha$, followed by $\max(0,\alpha_{k_2}-1)$ entries with the same value, and so on. For example, if $n=7$ and $\alpha=(1,0,2,0,0,1,0,3,0,\ldots)$, then the nonzero elements of $\alpha$ are $\alpha_1=1$, $\alpha_3=2$, $\alpha_6=1$, $\alpha_8=3$. As a result, $K_\alpha=\{1,3,3,6,8,8,8\}$, that is, $k_1=1$, $k_2=k_3=3$, $k_4=6$, $k_5=k_6=k_7=8$.

Exercise 5.1.22 (A) Let $\alpha\in\mathcal{J}_1$ and let $K_\alpha=\{k_1,\ldots,k_n\}$ be its characteristic set. Prove that

$$\xi_\alpha=\delta_{k_n}\Big(\cdots\delta_{k_2}\Big(\delta_{k_1}\big(1/\sqrt{\alpha!}\big)\Big)\cdots\Big). \tag{5.1.27}$$

Formula (5.1.27) shows that the elements of the Cameron-Martin basis $\xi_\alpha$ can be obtained by multiple sequential applications of creation operators to the constant $1/\sqrt{\alpha!}$.

In conclusion of this section, let us discuss the action of the Wick product on the elements of the Cameron-Martin basis. An extension of formula (2.3.40) on page 62 to the elements of the Cameron-Martin basis yields the following relation:

$$\xi_\alpha\diamond\xi_\beta=\sqrt{\frac{(\alpha+\beta)!}{\alpha!\,\beta!}}\,\xi_{\alpha+\beta}. \tag{5.1.28}$$

Let us denote

$$H_\alpha=\sqrt{\alpha!}\,\xi_\alpha. \tag{5.1.29}$$

In the future we will refer to $\big\{H_\alpha,\ \alpha\in\mathcal{J}_1\big\}$ as the unnormalised Cameron-Martin basis.

Exercise 5.1.23 (C) Prove that

$$H_\alpha\diamond H_\beta=H_{\alpha+\beta}. \tag{5.1.30}$$

Clearly,

$$\delta_k(\xi_\alpha)=\sqrt{\alpha_k+1}\,\xi_{\alpha+(k)}=\xi_\alpha\diamond\xi_{(k)}. \tag{5.1.31}$$
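The characteristic set $K_\alpha$ from the example above is mechanical to compute: each nonzero entry $\alpha_k$ contributes the index $k$ repeated $\alpha_k$ times, in increasing order. A minimal sketch (the multi-index is the one from the example):

```python
# Characteristic set K_alpha of a multi-index alpha: each nonzero alpha_k
# contributes the index k repeated alpha_k times, in increasing order.
def characteristic_set(alpha):
    K = []
    for k, a in enumerate(alpha, start=1):   # indices are 1-based in the text
        K.extend([k] * a)
    return K

alpha = [1, 0, 2, 0, 0, 1, 0, 3]
K = characteristic_set(alpha)   # [1, 3, 3, 6, 8, 8, 8], with |K| = |alpha| = 7
```

By construction the length of $K_\alpha$ equals $|\alpha|=\sum_k\alpha_k$, which is the check performed below.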

242

5 The Polynomial Chaos Method

The Wick product extends by linearity to the elements of $L_2(\mathfrak{F})$: for $f,g\in L_2(\mathfrak{F})$,

$$f\diamond g=\sum_{\alpha,\beta\in\mathcal{J}_1}f_\alpha g_\beta\sqrt{\frac{(\alpha+\beta)!}{\alpha!\,\beta!}}\,\xi_{\alpha+\beta}. \tag{5.1.32}$$

Exercise 5.1.24 (B) Rewrite expansion (5.1.32) in terms of the unnormalised Cameron-Martin basis.

An important and now obvious property of the Wick product follows from (5.1.32): for $f,g\in L_2(\mathfrak{F})$,

$$\mathbb{E}(f\diamond g)=(\mathbb{E}f)(\mathbb{E}g)<\infty. \tag{5.1.33}$$

Exercise 5.1.25 (A) Let $\alpha\in\mathcal{J}_1$ be a multi-index with $|\alpha|=n\ge1$ and characteristic set $K_\alpha=\{k_1,\ldots,k_n\}$. Verify that

$$\xi_\alpha=\frac{\xi_{k_1}\diamond\xi_{k_2}\diamond\cdots\diamond\xi_{k_n}}{\sqrt{\alpha!}}. \tag{5.1.34}$$

Formula (5.1.34) is a simple analog of the well-known result of Itô.
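Formula (5.1.32) reduces the Wick product of finite chaos expansions to arithmetic on their coefficients. A minimal sketch (the two-variable expansions f and g are illustrative choices), which also exhibits property (5.1.33) on the zero multi-index:

```python
import math
from collections import defaultdict

# Wick product (5.1.32) on finite chaos expansions.  Coefficients are stored
# as {multi-index tuple: coefficient}; the zero multi-index carries the mean.
def wick(f, g):
    def fact(m):
        return math.prod(math.factorial(i) for i in m)
    out = defaultdict(float)
    for a, fa in f.items():
        for b, gb in g.items():
            ab = tuple(x + y for x, y in zip(a, b))
            out[ab] += fa * gb * math.sqrt(fact(ab) / (fact(a) * fact(b)))
    return dict(out)

f = {(0, 0): 2.0, (1, 0): 0.5}    # f = 2 + 0.5*xi_1
g = {(0, 0): 3.0, (0, 1): -1.0}   # g = 3 - xi_2
h = wick(f, g)
# the zero-index coefficient of h is E(f <> g) = (Ef)(Eg) = 6, cf. (5.1.33)
```

The dictionary representation makes the combinatorial factor $\sqrt{(\alpha+\beta)!/(\alpha!\beta!)}$ explicit for every pair of multi-indices.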

5.1.3 Elements of Stationary Malliavin Calculus

In this section we will discuss the basics of general stochastic calculus, which is often referred to as Malliavin calculus (see, for example, Nualart). In particular, Malliavin calculus covers Itô's calculus and its extensions to non-adapted (anticipating) processes. The latter feature makes it possible to apply Malliavin calculus to spatial random fields, in particular to solutions of elliptic SPDEs.

As before, let $B$ be the Gaussian white noise on a separable Hilbert space $H$. Everywhere below, $B$ will be identified with its chaos expansion

$$B=\sum_{k\ge1}u_k\,\xi_k, \tag{5.1.35}$$

where $\{\xi_k,\ k\ge1\}$ is a sequence of independent standard Gaussian random variables and $(u_k,\ k\ge1)$ is an orthonormal basis in the Hilbert space $H$.

5.1 Stationary Wiener Chaos

243

An important example of the white noise $B$ is the spatial white noise $\dot W(x)$ in a Hilbert space $L_2(G)$, where $G$ is a domain in $\mathbb{R}^d$. If $u_k=u_k(x)$ is an orthonormal basis in $L_2(G)$, then the spatial white noise on $G$ is given by

$$\dot W(x)=\sum_{k\ge1}u_k(x)\,\xi_k. \tag{5.1.36}$$

Now, let us define the Malliavin derivative $D_B$ on the elements of the Cameron-Martin basis $\big\{\xi_\alpha,\ \alpha\in\mathcal{J}_1\big\}$ as follows:

$$D_B\xi_\alpha=\sum_{k\ge1}\sqrt{\alpha_k}\,\xi_{\alpha-(k)}\,u_k, \tag{5.1.37}$$

where .k/ is the multi-index of length 1 with all coordinates being zero except for the entry #k equal to 1: Remark 5.1.26 Note that the Malliavin derivative DB is a “linear combination” of annihilation operators Dk given by DB ˛ D

X

Dk .˛ / uk 2 H;

(5.1.38)

k1

and, by (5.1.23), E kDB ˛ k2H D j˛j :

(5.1.39)

P .x/, then In particular, if B D W DW.x/ ˛ D P

Xp ˛k ˛.k/ uk .x/ 2 L2 .G/: k1

Exercise 5.1.27 (B) Consider the stochastic exponent

$$\mathcal{E}(z)=e^{\sum_k\left(z_k\xi_k-(1/2)z_k^2\right)}=\prod_{k\ge1}\sum_{n=0}^\infty H_n(\xi_k)\frac{z_k^n}{n!}=\sum_{\alpha\in\mathcal{J}_1}\frac{z^\alpha}{\sqrt{\alpha!}}\,\xi_\alpha,$$

and specify the requirements on $z$ to ensure that $D_B\mathcal{E}(z)\in L_2(\mathfrak{F};H)$.

Our next goal is to extend the Malliavin derivative $D_B$ from the elements of the Cameron-Martin basis to the space $L_2(\mathfrak{F})$. In view of (5.1.39), if $v\in L_2(\mathfrak{F})$, then its Malliavin derivative $D_Bv$ does not necessarily belong to $L_2(\mathfrak{F};H)$. Indeed, by (5.1.39), for $v\in L_2(\mathfrak{F})$,

$$\mathbb{E}\|D_Bv\|^2_H=\sum_{\alpha\in\mathcal{J}_1}|\alpha|\,v_\alpha^2. \tag{5.1.40}$$
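The middle equality in the stochastic exponent rests on the Hermite generating function $\sum_n H_n(x)z^n/n!=e^{zx-z^2/2}$, with $H_n$ in the probabilists' normalization. A quick numerical check of this identity (the values of x and z are arbitrary test points, an assumption of this sketch):

```python
import math

# Probabilists' Hermite polynomials via He_{n+1}(x) = x*He_n(x) - n*He_{n-1}(x);
# check of the generating-function identity sum_n H_n(x) z^n / n! = exp(z*x - z^2/2).
def hermite(n, x):
    h0, h1 = 1.0, x
    if n == 0:
        return h0
    for m in range(1, n):
        h0, h1 = h1, x * h1 - m * h0
    return h1

x, z = 0.7, 0.4
series = sum(hermite(n, x) * z ** n / math.factorial(n) for n in range(30))
exact = math.exp(z * x - z ** 2 / 2)
```

For fixed $x$ the terms decay faster than geometrically, so 30 terms match the closed form far below the chosen tolerance.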


With this in view, let us introduce the following subspace of $L_2(\mathfrak{F})$:

$$L_2^1(\mathfrak{F})=\bigg\{v\in L_2(\mathfrak{F}):\ \sum_{\alpha\in\mathcal{J}_1}|\alpha|\,v_\alpha^2<\infty\bigg\}. \tag{5.1.41}$$

Definition 5.1.28 For $v\in L_2^1(\mathfrak{F})$, the Malliavin derivative $D_Bv$ is

$$D_Bv=\sum_{\alpha\in\mathcal{J}_1}\sum_{k\ge1}\sqrt{\alpha_k}\,\xi_{\alpha-(k)}\,v_\alpha\,u_k. \tag{5.1.42}$$

Exercise 5.1.29 (C) Prove that if $v\in L_2(\mathfrak{F})$, then

$$\mathbb{E}\big(D_B(v)\,\xi_\alpha\big)=\sum_{k\ge1}\sqrt{\alpha_k+1}\,v_{\alpha+(k)}\,u_k \tag{5.1.43}$$

and

$$\big\|\mathbb{E}\big(D_B(v)\,\xi_\alpha\big)\big\|^2_H<\infty. \tag{5.1.44}$$

Exercise 5.1.30 (C) Prove that the Malliavin derivative $D_B$ is a continuous linear operator from $L_2^1(\mathfrak{F})$ to $L_2(\mathfrak{F})$.

The above construction can easily be extended to Gaussian random variables taking values in a Hilbert space $X$. Firstly, let us "upgrade" the domain of $D_B$ from $L_2^1(\mathfrak{F})$ to

$$L_2^1(\mathfrak{F};X)=\bigg\{v\in L_2(\mathfrak{F};X):\ \sum_{\alpha\in\mathcal{J}_1}|\alpha|\,\|v_\alpha\|^2_X<\infty\bigg\}, \tag{5.1.45}$$

where $X$ is a Hilbert space. Secondly, for $v\in L_2^1(\mathfrak{F};X)$, let us define the Malliavin derivative as follows:

$$D_B(v)=\sum_{\alpha\in\mathcal{J}_1}\sum_{k\ge1}\sqrt{\alpha_k}\,\xi_{\alpha-(k)}\,v_\alpha\otimes u_k. \tag{5.1.46}$$

Note that the "extended" version of the Malliavin derivative takes values in the tensor product Hilbert space $X\otimes H$.

Theorem 5.1.31 The Malliavin derivative $D_B$ is a continuous linear operator from $L_2^1(\mathfrak{F};X)$ to $L_2(\mathfrak{F};X\otimes H)$. Moreover, if $v\in L_2^1(\mathfrak{F};X)$, then

$$\|D_B(v)\|^2_{L_2(\mathfrak{F};X\otimes H)}=\|v\|^2_{L_2^1(\mathfrak{F};X)}.$$


Proof Similarly to (5.1.43),

$$\mathbb{E}\big(D_B(v)\,\xi_\alpha\big)=\sum_{k\ge1}\sqrt{\alpha_k+1}\,v_{\alpha+(k)}\otimes u_k.$$

Therefore,

$$\|D_B(v)\|^2_{L_2(\mathfrak{F};X\otimes H)}=\sum_{\alpha\in\mathcal{J}_1}|\alpha|\,\|v_\alpha\|^2_X=\|v\|^2_{L_2^1(\mathfrak{F};X)}<\infty.$$

The following exercise should persuade a sceptical reader that calling the operator $D_B$ a derivative is not far-fetched or misleading.

Exercise 5.1.32 (B) If $F(x_1,\ldots,x_n)$ is a polynomial, $h_i\in X$, and $v=F(h_1\xi_1,\ldots,h_n\xi_n)$, then

$$D_B(v)=\sum_{k=1}^n\frac{\partial F}{\partial x_k}(h_1\xi_1,\ldots,h_n\xi_n)\,h_k\otimes u_k. \tag{5.1.47}$$

The statement of Exercise 5.1.32 is a rather important fact. It has been used in the literature as a starting point for the development of Malliavin calculus (see e.g. Nualart [175, 176]).

Next, we would like to define the "anti-derivative" associated with the Malliavin derivative $D_B$. In other words, we want to find an operator that is dual to $D_B$ in an appropriate sense. This operator is usually referred to as the Malliavin divergence operator and is denoted by $\delta_B$. Getting a little bit ahead of the story, we should say that, in the context of the Itô calculus, the divergence operator is simply the familiar Itô integral.

Firstly, let us try to build an anti-derivative (inverse) to $D_B$ using the creation operators $\delta_k$ (see Definition 5.1.15) as building blocks. With this in mind, let us introduce the following simple but useful extension of the creation operator with values in the Hilbert space $H$. For $u\in H$, define

$$\delta_k^u(\xi_\alpha)=u\,\delta_k(\xi_\alpha)=\sqrt{\alpha_k+1}\,u\,\xi_{\alpha+(k)}. \tag{5.1.48}$$

If $\{u_k,\ k\ge1\}$ is a basis in $H$, then

$$\delta_k^{u_k}(\xi_\alpha)=\sqrt{\alpha_k+1}\,u_k\,\xi_{\alpha+(k)}=u_k\,\delta_k(\xi_\alpha) \tag{5.1.49}$$

can be interpreted as the creation operator "in the direction" of the vector $u_k$.

Let us now consider the class of functions $f$ such that

$$f=\sum_{k\ge1}f_k\otimes u_k\in L_2(\mathfrak{F};X\otimes H). \tag{5.1.50}$$


Write $f_{k,\alpha}=\mathbb{E}(f_k\xi_\alpha)$. Then, by the Cameron-Martin theorem, $f$ can be expanded as follows:

$$f=\sum_{\alpha\in\mathcal{J}_1}\sum_{k\ge1}f_{k,\alpha}\otimes u_k\,\xi_\alpha. \tag{5.1.51}$$

Now, it is intuitively clear that the divergence operator $\delta_B$ (the anti-derivative of $D_B$) should be defined as follows:

$$\delta_B(f)=\sum_{\alpha\in\mathcal{J}_1}\sum_{k\ge1}f_{k,\alpha}\,\delta_k(\xi_\alpha)=\sum_{\alpha\in\mathcal{J}_1}\sum_{k\ge1}\sqrt{\alpha_k+1}\,f_{k,\alpha}\,\xi_{\alpha+(k)}=\sum_{\beta\in\mathcal{J}_1}\sum_{k\ge1}\sqrt{\beta_k}\,f_{k,\beta-(k)}\,\xi_\beta. \tag{5.1.52}$$

ˇ2J 1 k1

Of course, the manipulations we performed above were formal. An important question now is the assumptions on f to guarantee ıB . f / 2 L2 .I X ˝ H/. An explicit answer to this question is provided next. Theorem 5.1.33 If f 2 L2 .I X ˝ H/ and X X ˛2J 1

j˛j k fk;˛ k2X < 1;

(5.1.53)

k1

then ıB . f / 2 L2 .I X/ and 

ıB . f /

 ˛

D

Xp ˛k fk;˛.k/ :

(5.1.54)

k1

Proof Since XXp X Xp ˛k C 1 fk;˛ ˛C.k/ D ˛k fk;˛.k/ ˛ ; ˛

˛2J 1 k1

k1

we have 0 E .ıB . f / ˛ / D E @

XX ˇ

1 Xp p fk;ˇ.k/ ˇk ˇ ˛ A D ˛k fk;˛.k/ :

k1

(5.1.55)

k1

Therefore, E kıB . f /k2 D

XX ˛

k1

.˛k C 1/ k f˛ k2X < 1:

(5.1.56)

5.1 Stationary Wiener Chaos

247

Example 5.1.34 Suppose that uk D uk .x/ is an orthonormal basis in L2 .G/ and B P is spatial white noise W.x/ (see (5.1.36)). Consider f 2 L2 .; G/. Denote fk D

Z L2 .G/

f .x/ uk .x/ dx;

fk;˛ D E. fk ˛ /, and assume that X X

j˛j k fk;˛ k2X < 1:

˛2J 1 k1

Then, one can expand f .x/ as follows: f .x/ D

X X

fk;˛ uk .x/ ˛

k1 ˛2J 1

and, by (5.1.52), .f/ D ıW.x/ P

X Xp ˛k C 1fk;˛ ˛C.k/ :

(5.1.57)

˛2J 1 k1

Recall now that, by Exercise 5.1.19, for any k , the elements of the CameronMartin basis are eigenfunctions of the operator ık Dk and ık Dk .˛ / D j˛j˛ :

(5.1.58)

This important equality extends to Malliavin derivative DB and divergence operator ıB : The operator LB D ıB DB

(5.1.59)

is often called Ornstein-Uhlenbeck operator. The following simple result provides an explicit description of the action of Ornstein-Uhlenbeck operator on elements of the space X

L22 .I X/ D fv W

j˛j2 kv˛ k2X < 1:g

˛2J 1

Corollary 5.1.35 If v 2 L22 .I X/, then LB .v/ 2 L2 .I X/ and LB .v/ D

X ˛2J 1

j˛j v˛ ˛ :

248

5 The Polynomial Chaos Method

Proof For v 2 L22 . I X/ ; the Malliavin derivative DB .v/ D

X Xp ˛k ˛.k/ v˛ ˝ uk ˛2J 1 k1

D

X Xp ˛k C 1v˛C.k/ ˝ uk ˛ :

˛2J 1 k1

Recall that for f 2 L2 .I X ˝ H/, f D

P

˛2J 1

P

k1 fk;˛

˝ uk ˛ and

X Xp ˛k C 1 fk;˛ ˛C.k/:

ıB . f / D

˛2J 1 k1

Now, let us take fk;˛ D

p ˛k C 1v˛C.k/ : Then 0

1 X Xp LB .v/ D ıB @ ˛k C 1v˛C.k/ ˝ uk ˛ A ˛2J 1 k1

D

X X

X

.˛k C 1/ v˛C.k/ ˛C.k/ D

˛2J 1 k1

j˛j v˛ ˛

˛2J 1

and we are done. It is, of course, a standard fact that integrals and derivatives are dual operators. We have also noticed that creation and annihilation operators ık and Dk are dual. Next we will show that in some sense it holds also for Malliavin derivative DB and divergence operator ıB : P Theorem 5.1.36 Let f D k1 fk uk be an element of L2 .I X ˝ H/ and XX ˛

˛k k fk;˛ k2X < 1:

k1

Suppose also that ' 2 L12 ./ : Then DB . f / 2 L2 .I X ˝ H/ and DB is dual to ıB in that E .'ıB . f // D E .DB .'/ ; f /H : Proof To begin with, let us assume that ' D ˛ . Let f 2 L2 .I X ˝ H/. It follows from the definition of the divergence operator that E .˛ ıB . f // D

Xp ˛k fk;˛.k/ : k1

(5.1.60)


On the other hand,
$$D_B(\xi_\alpha) = \sum_{k\ge1}\sqrt{\alpha_k}\,\xi_{\alpha-\varepsilon(k)}\,u_k.$$
Thus,
$$\mathbb{E}\big(D_B(\xi_\alpha), f\big)_H = \sum_{k\ge1}\sqrt{\alpha_k}\,\mathbb{E}\big(\xi_{\alpha-\varepsilon(k)}\,f_k\big) = \sum_{k\ge1}\sqrt{\alpha_k}\,f_{k,\alpha-\varepsilon(k)},$$
and therefore $\mathbb{E}(\xi_\alpha\,\delta_B(f)) = \mathbb{E}(D_B(\xi_\alpha), f)_H$; the general case follows by linearity. $\square$

An immediate consequence of (5.1.28) and (5.1.52) is the following identity:
$$\delta_B(\xi_\alpha\,h\otimes u_k) = h\,\xi_\alpha\diamond\xi_k, \qquad h\in X. \qquad(5.1.61)$$
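In the one-variable case $B = \xi$, the operators $D$ and $\delta$ act on the coefficients of $f = \sum_n f_n\,\xi_n$, $\xi_n = H_n(\xi)/\sqrt{n!}$, as an annihilation shift $(Df)_n = \sqrt{n+1}\,f_{n+1}$ and a creation shift $(\delta f)_n = \sqrt{n}\,f_{n-1}$, so the duality of the theorem above becomes an identity between shifted coefficient sequences. A minimal numerical sketch (the helper names are ours, and only a single Gaussian variable is treated):

```python
import numpy as np

def annihilate(c):
    """(Dc)_n = sqrt(n+1) * c_{n+1}: lowers the chaos order by one."""
    n = np.arange(1, len(c))
    out = np.zeros_like(c)
    out[:-1] = np.sqrt(n) * c[1:]
    return out

def create(c):
    """(delta c)_n = sqrt(n) * c_{n-1}: raises the chaos order by one."""
    out = np.zeros(len(c) + 1)
    n = np.arange(1, len(c) + 1)
    out[1:] = np.sqrt(n) * c
    return out

rng = np.random.default_rng(0)
f = rng.standard_normal(6)      # coefficients of f in the basis xi_n
phi = rng.standard_normal(7)    # coefficients of phi (one order higher)

# Duality E(phi * delta(f)) = E(D(phi) * f): since the xi_n are orthonormal,
# expectations of products become Euclidean inner products of coefficients.
lhs = np.dot(phi, create(f))
rhs = np.dot(annihilate(phi)[:len(f)], f)
print(abs(lhs - rhs))  # zero up to rounding
```

Both sides compute $\sum_m \sqrt{m+1}\,\varphi_{m+1} f_m$, which is the coefficient form of the duality.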

5.1.4 Problems

Problem 5.1.1 is an invitation to investigate the Wick product as a bilinear operator on $L_2(B)$. Problem 5.1.2 addresses a basic question about completeness of polynomials. Problem 5.1.3 illustrates a challenge related to extending the chaos approach to a non-Gaussian setting.

Problem 5.1.1 Starting with the one-dimensional case $B = \xi \sim \mathcal N(0,1)$, and then proceeding to infinite dimensions:
(a) Give an example showing that the Wick product $\diamond$ is not a bounded operator from $L_2(\Omega)\times L_2(\Omega)$ to $L_2(\Omega)$.
(b) Derive a sufficient condition on $\eta$ and $\zeta$ so that $\eta\diamond\zeta \in L_2(\Omega)$.
(c) Is there a sufficient condition on $\eta$ so that $\eta\diamond\zeta \in L_2(\Omega)$ for all $\zeta \in L_2(\Omega)$?

Problem 5.1.2 Consider the Stieltjes–Wigert weight function
$$\varphi(x) = \frac{1}{\sqrt{\pi}}\,e^{-(\ln x)^2}, \qquad x > 0.$$
(a) Confirm that
$$s_n = \int_0^{+\infty} x^n\,\varphi(x)\,dx = e^{(n+1)^2/4}, \qquad n = 0, 1, 2, \dots$$


(b) Confirm that
$$\int_0^{+\infty} x^n\,\sin(2\pi\ln x)\,\varphi(x)\,dx = 0$$
for every $n = 0, 1, 2, \dots$, and so the polynomials are not complete in $L_2((0,+\infty);\varphi(x)dx)$.
(c) Confirm that the integral
$$\int_0^{\infty} e^{ax}\,\varphi(x)\,dx$$
diverges for every $a > 0$, but the integral
$$\int_0^{\infty} \frac{-\ln\varphi(x)}{1+x^2}\,dx$$
and the series
$$\sum_{k\ge1}\frac{1}{\sqrt[2k]{s_{2k}}}$$
both converge. The book […] can provide further information about this and similar problems.

Problem 5.1.3 Let $\xi_1, \xi_2, \dots$ be iid random variables with zero mean and unit variance, and let $\{m_k(t),\ k\ge1,\ t\in[0,T]\}$ be an orthonormal basis in $L_2((0,T))$. Define the process
$$W(t) = \sum_{k\ge1}\xi_k\,M_k(t), \qquad\text{where } M_k(t) = \int_0^t m_k(s)\,ds.$$

(a) Verify that $W = W(t)$ is a wide-sense Wiener process, that is,
$$W(0) = 0, \quad \mathbb E W(t) = 0, \quad \mathbb E\big(W(t)W(s)\big) = \min(t,s);$$
cf. [140, Definition 15.1.2].
(b) Give a sufficient condition on the distribution of $\xi_k$ for the sample trajectories of $W$ to be continuous with probability one. [The Kolmogorov continuity criterion implies that the sample trajectories of $W$ are continuous if the distribution of $\xi_k$ has finite moments of sufficiently high order, and this includes all standard distributions (exponential, uniform, Poisson, binomial). As a result, the generalized polynomial chaos framework requires further modification to study evolution equations driven by processes with jumps; cf. [38, Chap. 13].]
(c) Find an example of a distribution of $\xi_k$ resulting in discontinuous trajectories of $W$.
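The construction in Problem 5.1.3 is easy to probe numerically. The sketch below makes illustrative choices — rescaled uniform random variables for $\xi_k$ and the cosine basis of $L_2((0,T))$ for $m_k$ — and checks the wide-sense Wiener property $\mathbb E\,W(t)W(s) \approx \min(t,s)$ for a truncated sum by Monte Carlo:

```python
import numpy as np

# Truncated W(t) = sum_k xi_k M_k(t) from Problem 5.1.3, with iid zero-mean,
# unit-variance xi_k (rescaled uniforms, deliberately non-Gaussian) and the
# cosine orthonormal basis m_k of L_2((0,T)).
T, K, N = 1.0, 200, 20_000
rng = np.random.default_rng(1)

def M(k, t):
    """M_k(t) = int_0^t m_k(s) ds for the cosine basis."""
    if k == 1:
        return t / np.sqrt(T)
    w = (k - 1) * np.pi / T
    return np.sqrt(2.0 / T) * np.sin(w * t) / w

xi = rng.uniform(-1.0, 1.0, size=(N, K)) * np.sqrt(3.0)

s, t = 0.3, 0.7
Ms = np.array([M(k, s) for k in range(1, K + 1)])
Mt = np.array([M(k, t) for k in range(1, K + 1)])
Ws, Wt = xi @ Ms, xi @ Mt

print(np.mean(Ws * Wt))   # close to min(s, t) = 0.3
print(np.var(Wt))         # close to t = 0.7
```

The covariance identity holds because $\sum_k M_k(t)M_k(s)$ is the expansion of $\min(t,s)$ over the complete basis $\{m_k\}$; the Gaussianity of $\xi_k$ is never used.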

5.2 Stationary SPDEs

5.2.1 Definitions and Basic Examples

The objective of this section is to introduce stationary stochastic PDEs. Roughly speaking, by stationary SPDEs we understand SPDEs that do not involve the time variable; for example, elliptic SPDEs can be characterized as stationary SPDEs. In this section we discuss a large class of stationary SPDEs of the form
$$Au + \delta_B(Mu) = f, \qquad(5.2.1)$$
which includes equations with additive and/or multiplicative noise. Important examples of equations from this class include:
1. The Poisson equation with a simple random potential,
$$\Delta v(x) + v(x)\diamond\xi = f(x),\ x\in(0,1); \qquad v(0) = v(1) = 0, \qquad(5.2.2)$$
where $\xi$ is a standard Gaussian random variable and $\diamond$ is the Wick product.
2. The Poisson equation with a more complicated random potential,
$$\Delta v(x) + \delta_{\dot W(x)}\big(v(x)\big) = f(x), \qquad(5.2.3)$$
where $\delta_{\dot W(x)}$ denotes the divergence operator in the sense of Malliavin calculus.
3. The Poisson equation in a random medium,
$$\nabla\cdot\big(A(x)\diamond\nabla u(x)\big) = f(x), \qquad(5.2.4)$$
where $A(x) := a(x) + \varepsilon\,\dot W(x)$, $a(x)$ is a deterministic positive-definite matrix, and $\varepsilon$ is a real number. (Of course, since $a(x)$ is deterministic, $a(x)\diamond\nabla u(x) = a(x)\nabla u(x)$.)

Equations (5.2.2)–(5.2.4) are random perturbations of the corresponding deterministic equations. An important feature of these perturbations, which is a consequence of the property (5.1.32) of the Wick product, is that the stochastic equations are unbiased: they preserve the mean dynamics.


Exercise 5.2.1 (C) For Eq. (5.2.4), confirm that the function $u_0(x) := \mathbb E u(x)$ solves the deterministic Poisson equation
$$\nabla\cdot\big(a(x)\nabla u_0(x)\big) = \mathbb E f(x).$$

The main approach to solving the equations discussed above, and further generalizations of these equations, is based on the Cameron–Martin expansion of the solution. For example, let us construct a solution of Eq. (5.2.2) using the Cameron–Martin expansion
$$u(x) = \sum_{n=0}^{\infty} u_n(x)\,h_n(\xi),$$
where $u_n(x) = \mathbb E\big(u(x)h_n(\xi)\big)$ (see (5.1.7) and related notation). Then, proceeding as if all the series converge, (5.1.28) implies
$$u(x)\diamond\xi = \sum_{n=0}^{\infty} \sqrt{n+1}\,u_n(x)\,h_{n+1}(\xi).$$
As a result,
$$\mathbb E\big((u(x)\diamond\xi)\,h_0(\xi)\big) = \mathbb E\big(u(x)\diamond\xi\big) = 0,$$
and, for $k\ge1$,
$$\mathbb E\big((u(x)\diamond\xi)\,h_k(\xi)\big) = \sum_{n=0}^{\infty}\sqrt{n+1}\,u_n(x)\,\mathbb E\big(h_{n+1}(\xi)h_k(\xi)\big) = \sqrt{k}\,u_{k-1}(x). \qquad(5.2.5)$$
Therefore,
$$\Delta u_0(x) = f(x),\ x\in(0,1); \qquad u_0(0) = u_0(1) = 0, \qquad(5.2.6)$$
and, for $k\ge1$,
$$\Delta u_k(x) + \sqrt{k}\,u_{k-1}(x) = 0,\ x\in(0,1); \qquad u_k(0) = u_k(1) = 0. \qquad(5.2.7)$$
System (5.2.6)–(5.2.7) is often referred to as the (deterministic) propagator of Eq. (5.2.2). The propagator of Eq. (5.2.2) is a lower-triangular system and can therefore be solved sequentially, starting with Eq. (5.2.6). As soon as the


propagator is solved, the solution of the equation is obtained from the Cameron–Martin expansion
$$u(x) = \sum_{k=0}^{\infty} u_k(x)\,h_k(\xi). \qquad(5.2.8)$$
To summarize, the solution of our SPDE is given by (a) the Cameron–Martin expansion (5.2.8) and (b) the (deterministic) propagator (5.2.6)–(5.2.7). A solution with this structure is usually referred to as a polynomial chaos solution. If, as in our example, the underlying random variables are Gaussian, the solution is often called a Wiener chaos solution.

In this section we develop a systematic approach to the construction of Wiener chaos solutions of bilinear SPDEs driven by purely spatial Gaussian noise. In particular, we investigate the bilinear elliptic equations
$$Au(x) + \delta_{\dot W(x)}\big(Mu(x)\big) = f(x) \qquad(5.2.9)$$
for a wide range of operators $A$ and $M$; the corresponding parabolic equations
$$\frac{\partial v(t,x)}{\partial t} = Av(t,x) + \delta_{\dot W(x)}\big(Mv(t,x)\big) + f(x) \qquad(5.2.10)$$
are the subject of the following section. Purely spatial white noise is an important type of stationary perturbation. So far we have discussed elliptic equations only with additive random forcing (Sect. 4.2); however, the methodology developed in Sect. 4.2 is not suitable for elliptic SPDEs with multiplicative random forcing, as in the examples above. We saw in Sect. 4.4 that, in the case of parabolic equations, the crucial assumption on the operators $A$ and $M$ was:
$$\text{the operator } A - \tfrac12 MM^\star \text{ is elliptic.} \qquad(5.2.11)$$
This assumption ensured square integrability of the solution. In this and the following sections we study Wiener chaos solutions for important classes of SPDEs that are not covered by assumption (5.2.11).

Let us start with a simple example illustrating that the solution of a stationary stochastic equation need not be square integrable.

Example 5.2.2 Consider the algebraic equation $v = 1 + v\diamond\xi$. It is easy to see that the coefficients $\{v_n = \mathbb E(v\,h_n(\xi)),\ n\ge0\}$ solve the system
$$v_0 = 1; \qquad v_n = \sqrt{n}\,v_{n-1},\ n\ge1. \qquad(5.2.12)$$

Then $v_n = \sqrt{n!}$ and $v = 1 + \sum_{n\ge1}\sqrt{n!}\,h_n(\xi)$. Therefore,
$$\mathbb E v^2 = 1 + \sum_{n\ge1} n! = \infty.$$

Exercise 5.2.3 (B) Consider
$$u = 1 + u\diamond\xi + u\diamond(\xi^2 - 1) \qquad(5.2.13)$$
and recall that $\xi^2 - 1 = H_2(\xi)$.
(a) Find the equations for $u_n$.
(b) Confirm that $u_n$ is the $n$-th Fibonacci number.
(c) Conclude that $\mathbb E u^2 = \infty$. Hint: the asymptotics of the Fibonacci numbers is given by
$$u_n \sim \big((1+\sqrt5)/2\big)^n, \qquad n\to\infty. \qquad(5.2.14)$$

Example 5.2.2 and Exercise 5.2.3 indicate that one should not expect a solution of a stationary equation to have finite variance. Nevertheless, Wiener chaos solutions can be defined in spaces larger than those we have considered before, and they do not require assumption (5.2.11). In the next section we establish existence and uniqueness of Wiener chaos solutions for stationary (elliptic) equations of the type (5.2.9).
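The propagator construction described above can be sketched numerically for Eq. (5.2.2). The code below is only a sketch: finite differences on an illustrative grid, a truncation order chosen arbitrarily, and the $\sqrt k$ factor taken from the normalized-Hermite form (5.2.7) — it solves the lower-triangular system sequentially:

```python
import numpy as np

# Finite-difference sketch of the propagator (5.2.6)-(5.2.7) for
# Delta u + u ◊ xi = f on (0,1) with zero boundary conditions.
m = 199                     # interior grid points (illustrative)
h = 1.0 / (m + 1)
x = np.linspace(h, 1.0 - h, m)
# Tridiagonal discrete Laplacian with Dirichlet boundary conditions
A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1)) / h**2

f = np.sin(np.pi * x)       # sample right-hand side
K = 12                      # truncation order of the chaos expansion
u = [np.linalg.solve(A, f)]                           # Delta u_0 = f
for k in range(1, K + 1):
    u.append(np.linalg.solve(A, -np.sqrt(k) * u[-1]))  # Delta u_k = -sqrt(k) u_{k-1}

# Discrete L_2(0,1) norms of the coefficients; they decay at first because
# the inverse Laplacian contracts by roughly 1/pi^2 per step
norms = [np.linalg.norm(uk) * np.sqrt(h) for uk in u]
print(norms[0], norms[5])
```

Once the coefficients $u_k(x)$ are computed, the truncated solution is $\sum_{k\le K} u_k(x)h_k(\xi)$, in the spirit of (5.2.8); the eventual growth of $\sqrt{k!}$ against the fixed contraction rate is precisely why the solution fails to be square integrable at high order.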

5.2.2 Solving Stationary SPDEs by Weighted Wiener Chaos

In Sect. 4.2 we discussed a general elliptic SPDE of the form
$$Au(x) = \dot W(x), \qquad x\in G, \qquad(5.2.15)$$
where $A$ is an elliptic operator, $\dot W(x)$ is Gaussian white noise, and $G$ is a bounded domain in $\mathbb R^d$. Popular examples of elliptic SPDEs discussed below include a random Poisson equation,
$$\Delta u(x) = \dot W(x), \qquad x\in G,$$
and the Euclidean free field equation,
$$\sqrt{1-\Delta}\,u(x) = \dot W(x), \qquad x\in G,$$
etc. (see Sect. 4.2.1).


Now we will demonstrate that the Wiener chaos methodology is a powerful tool for the analysis of stochastic elliptic PDEs with multiplicative noise. In this section we deal with the equation
$$Au + \delta_B(Mu) = f \qquad(5.2.16)$$
in the normal triple $(V, H, V')$ of Hilbert spaces (see Definition 3.1.5 on page 78). Everywhere in this section, $A: V\to V'$ and $M: V\to V'\otimes H$ are bounded linear operators. The white noise $B$ will be identified with its chaos expansion (5.1.7),
$$B = \sum_{k\ge1} u_k\,\xi_k.$$

Exercise 5.2.4 (C) Show that Eq. (5.2.16) can be rewritten in the form
$$Au + \sum_{k\ge1} M_k u\diamond\xi_k = f, \qquad(5.2.17)$$
where the operators $M_k$ are defined by
$$Mu = \sum_{k\ge1} M_k u\otimes u_k. \qquad(5.2.18)$$

Example 5.2.2 and Exercise 5.2.3 indicate that one should not expect a solution of Eq. (5.2.16) to have a finite second moment. However, returning to Example 5.2.2, one can see that the "weighted" norm
$$\mathbb E\,\|v\|_R^2 = \sum_{n\ge1} r_n^2\,v_n^2,$$
with a sequence of weights $r_n$ such that $\sum_{n\ge1} r_n^2\,n! < \infty$, is appropriate for the solution of Eq. (5.2.12). Therefore, one can expect a solution of Eq. (5.2.16) to have a finite "weighted" second moment. With this in mind, we now introduce a version of the Wiener chaos expansion suitable for functions of Gaussian random variables with infinite variance.

Let $R$ be a linear operator on $L_2(\Omega)$ defined by $R\xi_\alpha = r_\alpha\,\xi_\alpha$ for every $\alpha\in\mathcal J_1$, where the weights $\{r_\alpha,\ \alpha\in\mathcal J_1\}$ are positive numbers. By Theorem 5.1.12, $R$ is bounded if the weights $r_\alpha$ are uniformly bounded from above: $r_\alpha < C$ for all $\alpha\in\mathcal J_1$, with $C$ independent of $\alpha$.

Exercise 5.2.5 (C) Prove that if $R$ is a bounded linear operator on $L_2(\Omega)$, then the $r_\alpha$ are uniformly bounded from above and the inverse operator $R^{-1}$ is defined by $R^{-1}\xi_\alpha = r_\alpha^{-1}\xi_\alpha$.


Let $H$ be a Hilbert space, with $(\cdot,\cdot)_H$ and $\|\cdot\|_H$ denoting the inner product and the norm in $H$.

Exercise 5.2.6 (C) Prove that the operator $R$ can be extended to an operator on $L_2(\Omega;H)$ by defining $Rf$ as the unique element of $L_2(\Omega;H)$ such that, for all $g\in L_2(\Omega;H)$,
$$\mathbb E(Rf, g)_H = \sum_{\alpha\in\mathcal J_1} r_\alpha\,\mathbb E\big((f,g)_H\,\xi_\alpha\big).$$

Denote by $RL_2(\Omega;H)$ the closure of $L_2(\Omega;H)$ with respect to the norm
$$\|f\|_{RL_2(\Omega;H)}^2 := \|Rf\|_{L_2(\Omega;H)}^2.$$
It is readily seen that the elements of $RL_2(\Omega;H)$ can be identified with formal series $\sum_{\alpha\in\mathcal J_1} f_\alpha\,\xi_\alpha$, where $f_\alpha\in H$ and $\sum_{\alpha\in\mathcal J_1}\|f_\alpha\|_H^2\,r_\alpha^2 < \infty$. In what follows, the argument $H$ will be omitted if $H = \mathbb R$.

Next, we define the space $R^{-1}L_2(\Omega;H)$ as the dual of $RL_2(\Omega;H)$ relative to the inner product in the space $L_2(\Omega;H)$:
$$R^{-1}L_2(\Omega;H) = \big\{g\in L_2(\Omega;H):\ R^{-1}g\in L_2(\Omega;H)\big\}.$$
The duality for $f\in RL_2(\Omega;H)$ and $g\in R^{-1}L_2(\Omega)$ is defined by
$$\langle\langle f, g\rangle\rangle := \mathbb E\big((Rf)(R^{-1}g)\big) \in H. \qquad(5.2.19)$$
In what follows, the operator $R$ will often be identified with the corresponding collection $(r_\alpha,\ \alpha\in\mathcal J_1)$. Note that if $u\in R_1L_2(\Omega;H)$ and $v\in R_2L_2(\Omega;H)$, then both $u$ and $v$ belong to $RL_2(\Omega;H)$, where $r_\alpha = \min(r_{1,\alpha}, r_{2,\alpha})$.

Exercise 5.2.7 (C) Let $\{q_k,\ k\ge1\}$ be a sequence of real numbers. What else needs to be assumed about this sequence to ensure that the numbers
$$r_\alpha^2 = \prod_{k=1}^{\infty} q_k^{\alpha_k}$$
define a proper $RL_2(\Omega;H)$ space?

The following notation will be used often in this section:
$$(2\mathbb N)^\alpha = \prod_{k\ge1}(2k)^{\alpha_k}. \qquad(5.2.20)$$
The following lemma, due to Zhang (see [78, 234]), is an important technical fact.


Lemma 5.2.8 The sum
$$\sum_{\alpha\in\mathcal J}(2\mathbb N)^{-\gamma\alpha} < \infty$$
if and only if $\gamma > 1$.

Proof By the formula for the sum of a geometric progression,
$$\sum_{\alpha\in\mathcal J}\prod_{i\ge1}(2i)^{-\gamma\alpha_i} = \prod_{i\ge1}\sum_{n\ge0}\big((2i)^{-\gamma}\big)^n = \prod_{i\ge1}\big(1-(2i)^{-\gamma}\big)^{-1}. \qquad(5.2.21)$$
The infinite product on the right-hand side of (5.2.21) converges if and only if
$$\sum_{i=1}^{\infty}(2i)^{-\gamma} < \infty,$$
that is, if and only if $\gamma > 1$. $\square$

Let
$$r_\alpha^2 = (\alpha!)^{\rho}\,(2\mathbb N)^{\ell\alpha}, \qquad \rho\le0,\ \ell\le0. \qquad(5.2.22)$$
The weights given by (5.2.22) define a class of spaces $RL_2(\Omega;H)$ often called the Hida–Kondratiev spaces and denoted by $(S)_{\rho,\ell}(H)$.

Exercise 5.2.9 (B) Prove that the solution of Eq. (5.2.12) belongs to the Hida–Kondratiev space $(S)_{-1,\ell}(\mathbb R)$ for any $\ell < 0$. [Keep in mind that here $\alpha = (n, 0, 0, \dots)$.]

Exercise 5.2.10 (A) Specify $\rho, \ell$ such that the solution $u$ of Eq. (5.2.13) belongs to $(S)_{\rho,\ell}(\mathbb R)$.

Next, we extend the Wick product to the weighted spaces $RL_2$. Suppose that $f\in RL_2(\Omega;H)$, where $H$ is a Hilbert space, and $\eta\in RL_2(\Omega;\mathbb R)$. Then the Wick product $f\diamond\eta$ is defined by
$$f\diamond\eta = \sum_{\alpha,\beta} f_\alpha\,\eta_\beta\,\xi_\alpha\diamond\xi_\beta. \qquad(5.2.23)$$

Proposition 5.2.11 If $f\in RL_2(\Omega;X)$ and $\eta\in RL_2(\Omega;\mathbb R)$, then $f\diamond\eta$ is an element of $\bar RL_2(\Omega;X)$ for a suitable operator $\bar R$.


Proof It follows from (5.2.23) that $f\diamond\eta = \sum_{\alpha\in\mathcal J_1} F_\alpha\,\xi_\alpha$, where
$$F_\alpha = \sum_{\beta,\gamma\in\mathcal J_1:\ \beta+\gamma=\alpha}\sqrt{\frac{\alpha!}{\beta!\,\gamma!}}\,f_\beta\,\eta_\gamma.$$
Each $F_\alpha$ is an element of $X$, because, for every $\alpha\in\mathcal J_1$, there are only finitely many multi-indices $\beta, \gamma$ satisfying $\beta+\gamma = \alpha$. By Lemma 5.2.8,
$$\sum_{\alpha\in\mathcal J_1}(2\mathbb N)^{q\alpha} < \infty \quad\text{if and only if}\quad q < -1. \qquad(5.2.24)$$
Therefore, $f\diamond\eta\in\bar RL_2(\Omega;X)$, where the operator $\bar R$ can be defined using the weights
$$\bar r_\alpha^2 = \frac{(2\mathbb N)^{-2\alpha}}{1+\|F_\alpha\|_X^2}. \qquad\square$$

Next, we extend the divergence operator $\delta_B$ to weighted spaces.

Definition 5.2.12 For $f\in RL_2(\Omega;X\otimes H)$, we define $\delta_B(f)$ as the element of $RL_2(\Omega;X)$ such that
$$\langle\langle\delta_B(f), \varphi\rangle\rangle = \mathbb E\big(Rf, R^{-1}D_B\varphi\big)_H \qquad(5.2.25)$$
for every $\varphi$ satisfying $\varphi\in R^{-1}L_2(\Omega)$ and $D_B\varphi\in R^{-1}L_2(\Omega;H)$.

We can now generalize the relation (5.1.61) connecting the Wick product with the divergence operator.

Theorem 5.2.13 If $f$ is an element of $RL_2(\Omega;X\otimes H)$ such that $f = \sum_{k\ge1} f_k\otimes u_k$, with $f_k = \sum_{\alpha\in\mathcal J_1} f_{k,\alpha}\,\xi_\alpha\in RL_2(\Omega;X)$, then
$$\delta_B(f) = \sum_{k\ge1} f_k\diamond\xi_k \qquad(5.2.26)$$
and
$$\big(\delta_B(f)\big)_\alpha = \sum_{k\ge1}\sqrt{\alpha_k}\,f_{k,\alpha-\varepsilon(k)}. \qquad(5.2.27)$$

Proof By linearity, (5.1.61), and (5.2.23),
$$\delta_B(f) = \sum_{k\ge1}\sum_{\alpha\in\mathcal J_1}\delta_B\big(\xi_\alpha\,f_{k,\alpha}\otimes u_k\big) = \sum_{k\ge1}\sum_{\alpha\in\mathcal J_1} f_{k,\alpha}\,\xi_\alpha\diamond\xi_k = \sum_{k\ge1} f_k\diamond\xi_k,$$
which is (5.2.26). On the other hand, by (5.1.21),
$$\delta_B(f) = \sum_{k\ge1}\sum_{\alpha\in\mathcal J_1} f_{k,\alpha}\,\sqrt{\alpha_k+1}\,\xi_{\alpha+\varepsilon(k)} = \sum_{k\ge1}\sum_{\alpha\in\mathcal J_1} f_{k,\alpha-\varepsilon(k)}\,\sqrt{\alpha_k}\,\xi_\alpha,$$
and (5.2.27) follows. $\square$

Now we are in a position to construct a solution of Eq. (5.2.1).
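Formulas (5.2.26)–(5.2.27) translate directly into operations on chaos coefficients. The following sketch (our own helper names; multi-indices stored as tuples, with the mode index $k$ counted from 0) computes the divergence of a finitely supported $f$:

```python
import math
from collections import defaultdict

# Sketch of (5.2.27): coefficients of delta_B(f) for f = sum_k f_k ⊗ u_k,
# with chaos coefficients f[(k, alpha)], alpha a tuple multi-index.

def raise_index(alpha, k):
    """Return alpha + eps(k) as a tuple, padding with zeros if needed."""
    a = list(alpha) + [0] * max(0, k + 1 - len(alpha))
    a[k] += 1
    return tuple(a)

def divergence(f):
    """(delta_B f)_alpha = sum_k sqrt(alpha_k) f_{k, alpha - eps(k)},
    accumulated in the equivalent 'raising' form over the given coefficients."""
    out = defaultdict(float)
    for (k, alpha), c in f.items():
        beta = raise_index(alpha, k)
        out[beta] += math.sqrt(beta[k]) * c     # beta_k = alpha_k + 1
    return dict(out)

# Example: f_{k,alpha} nonzero at three (k, alpha) pairs
f = {(0, (1, 0)): 2.0, (1, (0, 2)): -1.0, (1, (1, 0)): 0.5}
d = divergence(f)
print(d)
# coefficient sqrt(2)*2.0 at (2,0), sqrt(3)*(-1.0) at (0,3), 1.0*0.5 at (1,1)
```

Each term lands at the raised multi-index $\alpha+\varepsilon(k)$ with factor $\sqrt{\alpha_k+1}$, which is exactly the coefficient $\sqrt{\beta_k}\,f_{k,\beta-\varepsilon(k)}$ at $\beta = \alpha+\varepsilon(k)$ in (5.2.27).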

5.2.2.1 Existence and Uniqueness of Solutions

We start with a rigorous definition of the solution of Eq. (5.2.1).

Definition 5.2.14 The solution of Eq. (5.2.1) with $f\in RL_2(\Omega;V')$ is a random element $u\in RL_2(\Omega;V)$ such that the equality
$$\langle\langle Au, \varphi\rangle\rangle + \langle\langle\delta_B(Mu), \varphi\rangle\rangle = \langle\langle f, \varphi\rangle\rangle \qquad(5.2.28)$$
holds in $V'$ for every $\varphi$ satisfying $\varphi\in R^{-1}L_2(\Omega)$ and $D_B\varphi\in R^{-1}L_2(\Omega;H)$.

Taking $\varphi = \xi_\alpha$ in (5.2.28) and using relation (5.2.25), we conclude that Eq. (5.2.28) leads to the following system of equations for $u_\alpha = \mathbb E(u\,\xi_\alpha)$:
$$Au_{(0)} = \mathbb E f; \qquad Au_\alpha + \sum_{k\ge1}\sqrt{\alpha_k}\,M_k u_{\alpha-\varepsilon(k)} = f_\alpha \ \text{ for } |\alpha| > 0. \qquad(5.2.29)$$
In what follows we refer to system (5.2.29) as the propagator.

Exercise 5.2.15 (B) Derive the propagators for Eqs. (5.2.2)–(5.2.4).

The following theorem establishes the equivalence of Eq. (5.2.1) and the propagator.

Theorem 5.2.16 Let $u = \sum_{\alpha\in\mathcal J_1} u_\alpha\,\xi_\alpha$ be an element of $RL_2(\Omega;V)$. Then $u$ is a solution of Eq. (5.2.1) if and only if the non-random coefficients $u_\alpha$ have the following properties:
1. every $u_\alpha$ is an element of $H$;
2. the system of equalities (5.2.29) holds in $V'$ for all $\alpha\in\mathcal J_1$.

Proof Let $u$ be a solution of $Au + \delta_B(Mu) = f$ in $RL_2(\Omega;V)$. Taking $\varphi = \xi_\alpha$ in (5.2.28) and using relation (5.2.27),
$$\big(\delta_B(f)\big)_\alpha = \sum_{k\ge1}\sqrt{\alpha_k}\,f_{k,\alpha-\varepsilon(k)},$$


we obtain Eq. (5.2.29). By Theorem 4.1.18 on page 161, $u_\alpha\in V$ for all $\alpha$. Conversely, let $\{u_\alpha,\ \alpha\in\mathcal J_1\}$ be a collection of functions from $V$ satisfying (5.2.29), and set $u = \sum_{\alpha\in\mathcal J_1} u_\alpha\,\xi_\alpha$. Then, by Theorem 5.2.13, the equality
$$\langle\langle Au + \delta_B(Mu), \xi_\alpha\rangle\rangle = \langle\langle f, \xi_\alpha\rangle\rangle$$
holds in $V'$. By linearity we conclude that, for any $\varphi\in R^{-1}L_2(\Omega)$ such that $D_B\varphi\in R^{-1}L_2(\Omega;H)$, the equality
$$\langle\langle Au + \delta_B(Mu), \varphi\rangle\rangle = \langle\langle f, \varphi\rangle\rangle$$
holds in $V'$ as well. $\square$

The system of equations (5.2.29) is lower triangular and can be solved by induction on $|\alpha|$. Together with Theorem 5.2.16, this leads to the main result about existence and uniqueness of the solution of (5.2.1).

Theorem 5.2.17 Consider Eq. (5.2.1), in which $f\in\bar RL_2(\Omega;V')$ for some operator $\bar R$. Assume that the deterministic equation $AU = F$ is uniquely solvable in the normal triple $(V, H, V')$; that is, for every $F\in V'$ there exists a unique solution $U = A^{-1}F\in V$ with $\|U\|_V \le C_A\|F\|_{V'}$. Assume also that each $M_k$ is a bounded linear operator from $V$ to $V'$ such that, for all $v\in V$,
$$\|A^{-1}M_k v\|_V \le C_k\,\|v\|_V, \qquad(5.2.30)$$
with $C_k$ independent of $v$. Then there exist an operator $R$ and a unique solution $u\in RL_2(\Omega;V)$ of (5.2.1).

Proof By assumption,
$$u_\alpha = A^{-1}f_\alpha - \sum_{k\ge1}\sqrt{\alpha_k}\,A^{-1}M_k\,u_{\alpha-\varepsilon(k)}$$
is the unique solution of (5.2.29), and $u_\alpha\in V$. It remains to take
$$r_\alpha^2 = \frac{(2\mathbb N)^{-2\alpha}}{1+\|u_\alpha\|_V^2}. \qquad\square$$

.n/

.n1/

ıB ./ D ; ıB ./ D ıB .BıB

.//;  2 RL2 .FI V/;

where B is a bounded linear operator from V to V ˝ H.

5.2 Stationary SPDEs

261

Theorem 5.2.19 Under the assumptions of Theorem 5.2.17, if f is non-random, then the following holds: 1. the coefficient u˛ , corresponding to the multi-index ˛ with j˛j D n  1 and the characteristic set K˛ D fk1 ; : : : ; kn g, is given by 1 X u˛ D p Bk .n/    Bk .1/ u.0/ ; ˛Š  2Pn

(5.2.31)

where • Pn is the permutation group of the set .1; : : : ; n/; • Bk D A1 Mk ; • u.0/ D A1 f . 2. the operator R can be defined by the weights 1

Y ˛ q˛ qk k ; r˛ D p ; where q˛ D j˛jŠ kD1 where the numbers qk ; k  1; are chosen so that defined in (5.2.30). 3. With r˛ and qk defined by (5.2.32), X j˛jDn

P

2 2 k1 qk Ck

.n/

q˛ u˛ ˛ D ıB .A1 f /;

(5.2.32) < 1, and Ck are

(5.2.33)

where B D .q1 A1 M1 ; q2 A1 M2 ; : : :/, and Ru D A1 f C

X 1 .n/ p ıB .A1 f /: nŠ n1

(5.2.34)

p Proof Define e u˛ D ˛Š u˛ . If f is non-random [so that f.0/ D f and f˛ D 0; j˛j > 0], then e u.0/ D A1 f and, for j˛j  1, Ae u˛ C

X

˛k Mke u˛k D 0;

k1

or e u˛ D

X k1

˛k Bke u˛k D

X k2K˛

Bke u˛k ;

262

5 The Polynomial Chaos Method

where K˛ D fk1 ; : : : ; kn g is the characteristic set of ˛ and n D j˛j. By induction on n, X Bk .n/    Bk .1/ u.0/ ; e u˛ D  2Pn

and (5.2.31) follows. Next, define Un D

X

q˛ u˛ ˛ ; n  0:

j˛jDn

Let us first show that, for each n  1, Un 2 L2 .I V/. By (5.2.31) we have ku˛ k2V  CA2

Y a .j˛jŠ/2 k f k2V 0 Ck k : ˛Š k1

(5.2.35)

By direct computation, X

q2a ku˛ k2V  CA2 k f k2V 0 nŠ

j˛jDn

X j˛jDn

0

1 Y nŠ @ .Ck qk /2˛k A ˛Š k1

1n 0 X Ck2 q2k A < 1; D CA2 k f k2V 0 nŠ @ k1

because of the selection of qk , and so Un 2 L2 .I V/. If the weights r˛ are defined by (5.2.32), then X

r˛2 kuk2V D

˛2J 1

XX

0 1n X X @ r˛2 kuk2V  CA2 k f k2V 0 Ck2 q2k A < 1;

n0 j˛jDn

n0

k1

P because of the assumption k1 Ck2 q2k < 1. Since (5.2.34) follows directly from (5.2.33), it remains to establish (5.2.33), that is, Un D ıB .Un1 /; n  1: For n D 1 we have U1 D

X k1

qk uk k D

X k1

Bk u.0/ k D ıB .U0 /;

(5.2.36)

5.2 Stationary SPDEs

263

where the last equality follows from (5.2.26). More generally, for n > 1 we have by definition of Un that ( .Un /˛ D

q˛ u˛ ;

if j˛j D n;

0;

otherwise:

From the equation q˛ Au˛ C

X

p qk ˛k Mk q˛k u˛k D 0

k1

we find .Un /˛ D

8X p ˆ ˛k qk Bk q˛k u˛k ;
Theorem 5.2.21 If […] $> 1$, then there exists a unique solution $u\in(S)_{-1,\ell-4}(V)$ of (5.2.17), and
$$\|u\|_{(S)_{-1,\ell-4}(V)} \le C(\ell)\,\|f\|_{(S)_{-1,\ell}(V')}. \qquad(5.2.40)$$

Proof Denote by u.gI /,  2 J 1 , g 2 V 0 , the solution of (5.2.17) with f˛ D gI.˛D/ , and define uN ˛ D .˛Š/1=2 u˛ . Clearly, u˛ .g; / D 0 if j˛j < jj and so X

ku˛ . f I /k2V r˛2 D

˛2J 1

X

2 kuaC . f I /k2V raC :

(5.2.41)

˛2J 1

It follows from (5.2.29) that   uN ˛C . f I / D uN ˛ f .Š/1=2 I .0/ :

(5.2.42)

Now we use (5.2.35) to conclude that j˛jŠ kNuaC . f I /kV  p k f kV 0 : ˛ŠŠ

(5.2.43)

Coming back to (5.2.41) with r˛2 D .aŠ/1 .2N/.`4/a and using (5.2.39), we find: ku. f I /k.S/1;`4 .V/  C.`/.2N/2

k f kV 0 p ; .2N/.`=2/ Š

where 0

11=2 X  j˛jŠ 2 .2N/.`4/a A I C.`/ D @ ˛Š 1 ˛2J

(5.2.24) and (5.2.39) imply C.`/ < 1. Then (5.2.40) follows by the triangle inequality after summing over all  and using the Cauchy-Schwartz inequality. Remark 5.2.22 Example 5.2.2, in which f 2 .S/0;0 .R/ and u 2 .S/1;q .R/, q < 0, shows that, while the results of Theorem 5.2.21 are not sharp, a bound of the type kuk.S/;q .V/  Ck f k.S/;` .V 0 / is, in general, impossible if  > 1 or q  `.
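Lemma 5.2.8, used repeatedly in this section through (5.2.24), can be probed numerically via the product representation (5.2.21); the cutoffs below are arbitrary illustrative choices:

```python
import math

# Partial products of prod_i (1 - (2i)^{-gamma})^{-1} from (5.2.21):
# bounded for gamma > 1, growing without bound for gamma <= 1.
def partial_product(gamma, terms):
    p = 1.0
    for i in range(1, terms + 1):
        p /= 1.0 - (2 * i) ** (-gamma)
    return p

stable = [partial_product(2.0, n) for n in (10, 100, 1000)]
diverging = [partial_product(1.0, n) for n in (10, 100, 1000)]
print(stable)      # settles quickly for gamma = 2 > 1
print(diverging)   # keeps growing for gamma = 1 (harmonic-type tail)
```

For $\gamma = 2$ the limit is the classical Wallis product value $\pi/2$, which the partial products approach; for $\gamma = 1$ the tail behaves like a harmonic series inside the logarithm, so the product diverges, in agreement with the lemma.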


5.3 Elements of Malliavin Calculus for Brownian Motion

5.3.1 Cameron–Martin Basis for Scalar Brownian Motion

To begin, let us limit the discussion to a one-dimensional Brownian motion $w(t)$ on the probability space $\mathbb F = (\Omega, \mathcal F, \mathbb P)$. Recall (see Definition 3.2.32 on page 114) that the standard (one-dimensional) Brownian motion $w = w(t)$, $t\in[0,T]$, is a real-valued Gaussian process such that $\mathbb E w(t) = 0$ and $\mathbb E w(t)w(s) = \min(t,s)$. Let $\mathcal F_t^w$ be the $\mathbb P$-completed $\sigma$-algebra generated by $\{w(s),\ s\le t\}$.

Definition 5.3.1 The space $L_2(w) = L_2\big(\Omega, (\mathcal F_t^w)_{t\le T}, \mathbb P\big)$ will be referred to as the Wiener chaos space for the Brownian motion $w$.

Let $T > 0$ be non-random and let $\{m_i,\ i\ge1\}$ be an orthonormal basis in $L_2((0,T))$. Define
$$\xi_i = \int_0^T m_i(s)\,dw(s).$$

For simplicity, and with little loss of generality, it will be assumed in what follows that, for each $i$, $\sup_{0\le s\le T}|m_i(s)| < \infty$. It follows that
• each $\xi_i$ is standard Gaussian and $\mathcal F_T^w$-measurable;
• for $i\ne j$,
$$\mathbb E\,\xi_i(w)\,\xi_j(w) = \int_0^T m_i(s)\,m_j(s)\,ds = 0,$$
so that $\{\xi_i,\ i\ge1\}$ are independent;
• the Brownian motion $w(t)$ admits the Wiener chaos expansion (cf. (3.2.27) on page 116)
$$w(t) = \sum_{k\ge1} M_k(t)\,\xi_k, \qquad(5.3.1)$$
where $M_k(t) = \int_0^t m_k(s)\,ds$.

Definition 5.3.2 Denote by $L_2(w)$ the collection of $\mathcal F_T^w$-measurable square-integrable random variables.

Next, we apply the Cameron–Martin theorem (Theorem 5.1.12) to functions of the Brownian motion $w(t)$. The difference between the construction here and that of the previous section is that the Cameron–Martin basis $\{\xi_\alpha(w),\ \alpha\in\mathcal J_1\}$ associated with the Brownian motion $w(t)$ is, in a sense, more "focused" than the general version based on an arbitrary system of independent standard Gaussian random variables $\{\xi_k,\ k\ge1\}$ (as in (5.1.12) and Lemma 5.1.9) and, as a consequence, has a number of additional interesting and useful properties. As before, the elements of the Cameron–Martin basis generated by the sequence $\{\xi_k,\ k\ge1\}$ are
$$\xi_\alpha = \prod_{k\ge1} H_{\alpha_k}\big(\xi_k(w)\big)/\sqrt{\alpha_k!}. \qquad(5.3.2)$$

Let $z = \{z_k,\ k\ge1\}$ be a sequence of real numbers such that
$$\sum_{k\ge1} z_k^2 < \infty. \qquad(5.3.3)$$
Write
$$\mathcal E(z,t,w) = \exp\Big(\sum_{k=1}^{\infty} z_k\int_0^t m_k(s)\,dw(s) - \frac12\sum_{k=1}^{\infty} z_k^2\int_0^t m_k^2(s)\,ds\Big), \quad 0\le t\le T. \qquad(5.3.4)$$
In particular, $\mathbb E\,\mathcal E(z,t,w) = 1$, $0\le t\le T$, and
$$\mathcal E(z,T,w) = \exp\Big(\sum_{k=1}^{\infty} z_k\,\xi_k - \frac12\sum_{k=1}^{\infty} z_k^2\Big) \qquad(5.3.5)$$
is a special version of the stochastic exponent (5.1.16). By the Itô formula, the function $\mathcal E(z,t,w)$ solves the stochastic differential equation
$$d\mathcal E(z,t,w) = \mathcal E(z,t,w)\sum_{k=1}^{\infty} z_k\,m_k(t)\,dw(t), \qquad(5.3.6)$$
which, in particular, implies that the process $\mathcal E(z,t,w)$ is a martingale:
$$\mathbb E\big(\mathcal E(z,T,w)\,\big|\,\mathcal F_t^w\big) = \mathcal E(z,t,w). \qquad(5.3.7)$$

At this point, it might appear to the reader that there is very little new substance in the Cameron–Martin basis associated with Brownian motion, as compared to the original basis generated by an arbitrary system of independent standard Gaussian random variables. Below, we will see that the "Brownian" Cameron–Martin basis is more nuanced than the time-independent one. For $\alpha\in\mathcal J_1$, define
$$\xi_\alpha(t) = \frac{1}{\sqrt{\alpha!}}\,\frac{\partial^{|\alpha|}\mathcal E(z,t,w)}{\partial z^\alpha}\Big|_{z=0}. \qquad(5.3.8)$$
In particular, $\xi_\alpha(T) = \xi_\alpha$. It follows from (5.3.7) that, with probability 1,
$$\xi_\alpha(t) = \mathbb E\big(\xi_\alpha\,\big|\,\mathcal F_t^w\big). \qquad(5.3.9)$$

Theorem 5.3.3 Let $f = f(t)$ be a process such that $f(t)\in L_2(w)$ and $f(t)$ is $\mathcal F_t^w$-measurable for every $t$. Then, with probability 1,
$$f(t) = \sum_{\alpha\in\mathcal J_1} f_\alpha(t)\,\xi_\alpha = \sum_{\alpha\in\mathcal J_1} f_\alpha(t)\,\xi_\alpha(t).$$

Proof By Theorem 5.1.12, with $\xi_\alpha = \xi_\alpha(T)$,
$$f(t) = \sum_{\alpha\in\mathcal J_1} f_\alpha(t)\,\xi_\alpha(T),$$
where $f_\alpha(t) = \mathbb E\big(f(t)\,\xi_\alpha\big)$ is non-random. Since $f(t)$ is $\mathcal F_t^w$-measurable, we find, using (5.3.9), that
$$f(t) = \mathbb E\big(f(t)\,\big|\,\mathcal F_t^w\big) = \sum_{\alpha\in\mathcal J_1} f_\alpha(t)\,\mathbb E\big(\xi_\alpha\,\big|\,\mathcal F_t^w\big) = \sum_{\alpha\in\mathcal J_1} f_\alpha(t)\,\xi_\alpha(t). \qquad\square$$

Exercise 5.3.4 (A) Consider the equation
$$du(t,x) = a^2 u_{xx}(t,x)\,dt + \sigma u_x(t,x)\,dw(t), \qquad u(0,x) = \varphi(x)\in L_2(\mathbb R). \qquad(5.3.10)$$
Assume that $2a^2 - \sigma^2 \ge 0$ and $\|\varphi\|_{L_2(\mathbb R)}^2 < \infty$. We know (cf. page 53) that the solution of this equation is square integrable and $\mathcal F_t^w$-adapted. Find the system of equations for the deterministic coefficients $u_\alpha(t,x) = \mathbb E\big[u(t,x)\,\xi_\alpha(t)\big]$ in the Cameron–Martin expansion
$$u(t,x) = \sum_{\alpha\in\mathcal J_1} u_\alpha(t,x)\,\xi_\alpha(t) \qquad(5.3.11)$$
of the solution of Eq. (5.3.10).

Our next objective is to represent $\xi_\alpha$ as multiple stochastic integrals with respect to the Brownian motion $w$.


Exercise 5.3.5 (C) Let $\mathbb T_n = \{(t_1,\dots,t_n):\ 0\le t_1\le\dots\le t_n\le T\}$ and let $g_i(t_1,\dots,t_n)\in L_2(\mathbb T_n)$, $i = 1, 2$. Define
$$I_n(g_i) = \int_0^T\int_0^{t_{n-1}}\cdots\int_0^{t_1} g_i(t_0,\dots,t_{n-1})\,dw(t_0)\cdots dw(t_{n-1}).$$
Prove that
$$\mathbb E\big(I_m(g_i)\,I_k(g_j)\big) = \begin{cases}(g_i, g_j)_{L_2(\mathbb T_m)}, & m = k,\\ 0, & m\ne k.\end{cases}$$

Definition 5.3.6 (a) A function $f(x_1,\dots,x_n)$ is called symmetric if, for every permutation $\sigma$ of $(1,\dots,n)$,
$$f(x_1,\dots,x_n) = f(x_{\sigma(1)},\dots,x_{\sigma(n)}). \qquad(5.3.12)$$
(b) The symmetrization of a function $f = f(x_1,\dots,x_n)$ is
$$\tilde f(x_1,\dots,x_n) = \frac{1}{n!}\sum_{\sigma} f(x_{\sigma(1)},\dots,x_{\sigma(n)}), \qquad(5.3.13)$$
where $\sigma$ runs over all permutations of the set $(1,\dots,n)$.

Exercise 5.3.7 (C) Verify that, if a function $f(t_1,\dots,t_n)\in L_2(\mathbb T_n)$ is symmetric, then
$$\int_{\mathbb T_n} f^2(t_1,\dots,t_n)\,dt_1\cdots dt_n = \frac{1}{n!}\,\|f\|_{L_2([0,T]^n)}^2.$$

Now we can establish the multiple-integral representation of $\xi_\alpha$ when $|\alpha| > 0$ [recall that $\xi_{(0)} = 1$].

Theorem 5.3.8 (a) If $|\alpha| > 0$, then
$$\xi_\alpha(t) = \int_0^t\sum_{k=1}^{\infty}\sqrt{\alpha_k}\,\xi_{\alpha-\varepsilon(k)}(s)\,m_k(s)\,dw(s). \qquad(5.3.14)$$
(b) If $|\alpha| = n\ge1$, then
$$\xi_\alpha(T) = \frac{\sqrt{n!}}{\sqrt{\alpha!}}\int_0^T\cdots\int_0^{t_2}\tilde m_\alpha(t_1,\dots,t_n)\,dw(t_1)\cdots dw(t_n), \qquad(5.3.15)$$


where
$$m_\alpha(t_1,\dots,t_n) = \prod_{i=1}^{n} m_{j_i}(t_i),$$
$\{j_1,\dots,j_n\}$ is the characteristic set of $\alpha$, and $\tilde m_\alpha$ is the symmetrization of $m_\alpha$.

Proof By (5.3.4),
$$\frac{\partial}{\partial z_{(i)}}\mathcal E(z,t,w)\Big|_{z=0} = \int_0^t m_i(s)\,dw(s).$$
By (5.3.6),
$$d\Big(\frac{\partial^{|\alpha|}\mathcal E(z,t,w)}{\partial z^\alpha}\Big) = \Big(\frac{\partial^{|\alpha|}\mathcal E(z,t,w)}{\partial z^\alpha}\Big)\Big(\sum_{k=1}^{\infty} z_k\,m_k(t)\Big)dw(t) + \sum_{k=1}^{\infty}\alpha_k\,m_k(t)\,\frac{\partial^{|\alpha|-1}\mathcal E(z,t,w)}{\partial z^{\alpha-\varepsilon(k)}}\,dw(t). \qquad(5.3.16)$$
Integrating in time and setting $z = 0$ yields
$$\frac{\partial^{|\alpha|}\mathcal E(z,t,w)}{\partial z^\alpha}\Big|_{z=0} = \int_0^t\sum_{k=1}^{\infty}\alpha_k\,\frac{\partial^{|\alpha|-1}\mathcal E(z,s,w)}{\partial z^{\alpha-\varepsilon(k)}}\Big|_{z=0}\,m_k(s)\,dw(s), \qquad(5.3.17)$$
leading to (5.3.14) after dividing by $\sqrt{\alpha!}$. After that, (5.3.15) follows by repeated application of (5.3.14). $\square$

The linear subspace of $L_2(w)$ generated by $\{\xi_\alpha,\ |\alpha| = n\}$ is called the $n$-th Wiener chaos. The multiple stochastic integral on the right-hand side of (5.3.15) is usually referred to as a multiple Wiener–Itô integral; integrals of this type were introduced and studied extensively by Itô.
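The statements of this subsection are convenient to check by simulation: with the cosine basis for $m_i$, the integrals $\xi_i = \int_0^T m_i\,dw$ are iid standard Gaussian, and the Cameron–Martin elements $\xi_\alpha$ are orthonormal. A Monte Carlo sketch (discretization step and sample size are arbitrary choices):

```python
import numpy as np

# xi_1, xi_2 from the first two cosine-basis functions; Cameron-Martin
# elements xi_alpha for alpha = (2,0) and (1,1); orthonormality by Monte Carlo.
T, n, N = 1.0, 400, 20_000
rng = np.random.default_rng(3)
dt = T / n
t = (np.arange(n) + 0.5) * dt

m1 = np.full(n, 1.0 / np.sqrt(T))
m2 = np.sqrt(2.0 / T) * np.cos(np.pi * t / T)

dw = rng.standard_normal((N, n)) * np.sqrt(dt)   # Brownian increments
xi1, xi2 = dw @ m1, dw @ m2                      # Ito integrals of m1, m2

H2 = lambda x: x * x - 1.0                       # Hermite polynomial H_2
xi_20 = H2(xi1) / np.sqrt(2.0)                   # xi_alpha, alpha = (2,0)
xi_11 = xi1 * xi2                                # xi_alpha, alpha = (1,1)

print(np.mean(xi_20 ** 2), np.mean(xi_11 ** 2), np.mean(xi_20 * xi_11))
```

The first two averages are close to 1 and the cross term close to 0, reflecting $\mathbb E\,\xi_\alpha\xi_\beta = I(\alpha=\beta)$ up to discretization and sampling error.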

5.3.1.1 Cameron–Martin Basis for Brownian Motion in a Hilbert Space

Let $\mathbb F = (\Omega, \mathcal F, (\mathcal F_t)_{0\le t\le T}, \mathbb P)$ be a stochastic basis with the usual assumptions, and let $Y$ be a separable Hilbert space with inner product $(\cdot,\cdot)_Y$ and an orthonormal basis $\{y_k,\ k\ge1\}$. Also, fix an orthonormal basis $\{m_i,\ i\ge1\}$ in $L_2((0,T);\mathbb R)$ such that each $m_i$ belongs to $L_\infty((0,T))$.

Exercise 5.3.9 (C) Prove that $\{u_{i,k} = m_i\,y_k,\ i,k\ge1\}$ is an orthonormal basis in the Hilbert space $L_2((0,T);Y)$.

On $\mathbb F$ and $Y$, consider a cylindrical Brownian motion $W$, that is, a family of continuous $\mathcal F_t$-adapted Gaussian martingales $W_y(t)$, $y\in Y$, such that $W_y(0) = 0 and


$\mathbb E\big(W_{y_1}(t)W_{y_2}(s)\big) = \min(t,s)\,(y_1,y_2)_Y$. In particular,
$$w_k(t) = W_{y_k}(t), \qquad k\ge1,\ t\ge0, \qquad(5.3.18)$$
are independent standard Brownian motions on $\mathbb F$. The $\sigma$-algebra generated by $\{w_k(s),\ s\le t\}$ will be denoted by $\mathcal F_t^{w_k}$; the $\sigma$-algebra generated by the full collection $\{w_k(s),\ k\ge1,\ 0\le s\le t\}$ will be denoted by $\mathcal F_t^W$. We write
$$W(t) = \sum_{k\ge1} y_k\,w_k(t) \qquad(5.3.19)$$
in the sense that
$$W_y(t) = \sum_{k\ge1}(y, y_k)_Y\,w_k(t); \qquad(5.3.20)$$
cf. (3.2.31) on page 117. Similarly,
$$\dot W(t) = \sum_{k\ge1} y_k\,\dot w_k(t) \qquad(5.3.21)$$

is the white noise associated with $W$.

Next, we construct the Cameron–Martin basis associated with the cylindrical Brownian motion (5.3.20). Define
$$\xi_{i,k} = \int_0^T m_i(s)\,dw_k(s). \qquad(5.3.22)$$

Exercise 5.3.10 (C)
(a) Prove that $\{\xi_{i,k},\ i,k\ge1\}$ are independent standard Gaussian random variables.
(b) Confirm that
$$\mathbb E\big(\xi_{i,k}\,\big|\,\mathcal F_t^W\big) = \int_0^t m_i(s)\,dw_k(s). \qquad(5.3.23)$$

Definition 5.3.11 Let $\mathcal J_2$ be the collection of multi-indices $\alpha$ with $\alpha = (\alpha_{i,k})_{i,k\ge1}$ such that each $\alpha_{i,k}$ is a non-negative integer and $|\alpha| := \sum_{i,k\ge1}\alpha_{i,k} < \infty$. For $\alpha, \beta\in\mathcal J_2$, define
$$\alpha + \beta = (\alpha_{i,k} + \beta_{i,k})_{i,k\ge1} \qquad\text{and}\qquad \alpha! = \prod_{i,k\ge1}\alpha_{i,k}!.$$
The multi-index consisting of all zeroes will be denoted by $(0)$.
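Definition 5.3.11 and the characteristic set (5.3.26) below are straightforward to implement; the sketch stores a matrix multi-index as a dictionary over pairs $(i,k)$ (the helper names are ours) and reproduces a worked example from the text:

```python
from math import factorial

# Matrix multi-indices alpha = (alpha_{i,k}) as dicts {(i, k): alpha_{i,k}};
# |alpha|, alpha!, and the characteristic set K_alpha, which lists each pair
# (i, k) with multiplicity alpha_{i,k}, in row-then-column order.

def length(alpha):
    return sum(alpha.values())

def fact(alpha):
    p = 1
    for v in alpha.values():
        p *= factorial(v)
    return p

def characteristic_set(alpha):
    pairs = []
    for (i, k) in sorted(alpha):
        pairs.extend([(i, k)] * alpha[(i, k)])
    return pairs

# nonzero entries of the second example discussed in the text
alpha = {(1, 2): 1, (2, 1): 1, (2, 6): 1, (2, 2): 2, (1, 4): 2, (1, 5): 3}
print(length(alpha), fact(alpha))
print(characteristic_set(alpha))
```

Lexicographic sorting of the pairs $(i,k)$ implements the ordering convention of (5.3.26): rows first, then columns within a row.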


The entry $\alpha_{i,k}$ of a multi-index $\alpha\in\mathcal J_2$ is indexed like a matrix entry, with $i$ representing the row and $k$ representing the column. For $\alpha\in\mathcal J_2$, define
$$\xi_\alpha = \frac{1}{\sqrt{\alpha!}}\prod_{i,k} H_{\alpha_{i,k}}(\xi_{i,k}), \qquad(5.3.24)$$
where $H_n$ is the $n$-th Hermite polynomial and $\xi_{i,k}$ is defined by (5.3.22).

Example 5.3.12 Let
$$\alpha = \begin{pmatrix} 0 & 1 & 0 & 3 & 0 & \cdots\\ 2 & 0 & 0 & 0 & 4 & \cdots\\ 0 & 0 & 0 & 0 & 0 & \cdots\\ \vdots & & & & & \ddots\end{pmatrix} \qquad(5.3.25)$$
be the multi-index with only four non-zero entries, $\alpha_{1,2} = 1$, $\alpha_{1,4} = 3$, $\alpha_{2,1} = 2$, $\alpha_{2,5} = 4$; then
$$\xi_\alpha = \frac{H_1(\xi_{1,2})}{\sqrt{1!}}\cdot\frac{H_3(\xi_{1,4})}{\sqrt{3!}}\cdot\frac{H_2(\xi_{2,1})}{\sqrt{2!}}\cdot\frac{H_4(\xi_{2,5})}{\sqrt{4!}}.$$

For every multi-index $\alpha\in\mathcal J_2$ with $|\alpha| = n$, we also define the characteristic set $K_\alpha$ of $\alpha$, in a way similar to the case $\alpha\in\mathcal J_1$:
$$K_\alpha = \big\{(i_1^\alpha, k_1^\alpha),\dots,(i_n^\alpha, k_n^\alpha)\big\}, \qquad(5.3.26)$$
where $i_1^\alpha\le i_2^\alpha\le\dots\le i_n^\alpha$ and, if $i_j^\alpha = i_{j+1}^\alpha$, then $k_j^\alpha\le k_{j+1}^\alpha$. The first pair $(i_1^\alpha, k_1^\alpha)$ in $K_\alpha$ gives the position of the first nonzero entry of $\alpha$. The second pair is the same as the first if that entry is greater than one; otherwise, the second pair gives the position of the second nonzero entry of $\alpha$, and so on. As a result, if $\alpha_{i,k} > 0$, then exactly $\alpha_{i,k}$ pairs in $K_\alpha$ are equal to $(i,k)$. For example, if
$$\alpha = \begin{pmatrix} 0 & 1 & 0 & 2 & 3 & 0 & \cdots\\ 1 & 2 & 0 & 0 & 0 & 1 & \cdots\\ 0 & 0 & 0 & 0 & 0 & 0 & \cdots\\ \vdots & & & & & & \ddots\end{pmatrix}$$


with nonzero elements $\alpha_{1,2} = \alpha_{2,1} = \alpha_{2,6} = 1$, $\alpha_{2,2} = \alpha_{1,4} = 2$, $\alpha_{1,5} = 3$, then the characteristic set is
$$K_\alpha = \{(1,2), (1,4), (1,4), (1,5), (1,5), (1,5), (2,1), (2,2), (2,2), (2,6)\}.$$

Remark 5.3.13 Here and below we use the same notation $\alpha, \beta, \gamma$, etc. [that is, a bold-face lower-case letter from the beginning of the Greek alphabet] both for matrix-valued multi-indices, as in (5.3.25), and for vector-valued multi-indices, as in Definition 5.1.8. The meaning of the notation will always be clear from the context.

Let $z = \{z_{i,k},\ i\ge1,\ k\ge1\}$ be a sequence of real numbers such that $\sum_{i,k}|z_{i,k}|^2 < \infty$. The stochastic exponent $\mathcal E(z,t,W)$ associated with $W(t)$ is
$$\mathcal E(z,t,W) = \exp\Big(\sum_{i,k=1}^{\infty} z_{i,k}\int_0^t m_i(s)\,dw_k(s) - \frac12\sum_{i,k=1}^{\infty}|z_{i,k}|^2\int_0^t m_i^2(s)\,ds\Big)$$
$$= \prod_{k=1}^{\infty}\exp\Big(\sum_{i=1}^{\infty} z_{i,k}\int_0^t m_i(s)\,dw_k(s) - \frac12\sum_{i=1}^{\infty}|z_{i,k}|^2\int_0^t m_i^2(s)\,ds\Big). \qquad(5.3.27)$$

Remark 5.3.14 If h 2 L2 ..0; T/I Y/ and Et .h/ D exp

Z

t 0

.h.s/; dW.s//Y 

1 2

Z

t 0

 kh.t/k2Y dt ;

(5.3.28)

then, by the Itô formula, dEt .h/ D Et .h/.h.t/; dW.t//Y :

(5.3.29)

This is a alternative representation of the stochastic exponential, with the correspondence given by h.t/ D

X

zi;k mi .t/yk :

i;k

Similarly to (5.3.8), for ˛ 2 J 2 we have 1 @j˛j E .z; t; W/ ˇˇ ˛ .t/ D p ˇ ; zD0 @z˛ ˛Š

(5.3.30)

5.3 Elements of Malliavin Calculus for Brownian Motion

273

where Y @˛i;k @j˛j D ˛ ; @˛ z @zi;ki;k i;k1 and then ˛ .t/ D

Z

1 t X 0 i;kD1

p ˛i;k ˛.i;k/ .s/ mi .s/ dwk .s/ :

(5.3.31)

Due to independence of the Brownian motions wk .t/, the elements ˛ of the Cameron-Martin basis associated with W; are the products of the elements of the Cameron-Martin bases associated with each Brownian motion wk .t/ : More specifically, ˛ D

Y H˛i;k .i;k / Y p ˛Œk D ˛i;k Š k;i1 k1

(5.3.32)

where ˛Œk D .˛1;k ; ˛2;k ; : : :/ 2 J 1 and ˛Œk

p Z T Z t2 nŠ Q ˛Œk .t1 ; : : : ; tn /dwk .t1 / : : : dwk .tn /; Dp ::: m ˛ŒkŠ 0 0

(5.3.33)

where Q ˛Œk .t1 ; : : : ; tn / D m

1 X mj1 .t .1/ /    mjn .t .n/ / nŠ 

and f j1 ; : : : ; jn g is the characteristic set of ˛Œk. In particular, for every k and ˛, ˚˛Œk .t/ is a continuous square integrable martingale with respect to the filtration Ftwk t0 generated by the Brownian motion wk .t/).

5.3.2 The Malliavin Derivative and Its Adjoint In this section, we construct the Malliavin derivative and an analog of the Itô stochastic integral for generalized random processes.

274

5 The Polynomial Chaos Method

By Definition 5.1.15 the action of Malliavin derivative Dk on the element Cameron-Martin basis f˛ ; ˛ 2 J 1 g is defined by Dk .˛ / D

p ˛k ˛.k/ ;

(5.3.34)

where  .k/ is a multi-index of length 1 with the only positive component at the kth entry. Recall that the definition of J 1 and J 2 were introduced in Definitions 5.1.8 and 5.3.11, respectively. A natural extension of the Malliavin derivative to the Cameron-Martin basis f˛ ; ˛ 2 J 2 g is given by the formula Dk .˛ / D

p ˛i;k ˛.i;k/ ;

(5.3.35)

where 

 .i; k/

 j;`

D

1 if i D j; k D ` 0 otherwise

(5.3.36)

is the J 2 -analog of .k/. By linearity, we extend (5.3.35) to the definition of the Malliavin derivative DB ˛ D

X

Dk .˛ / uk D

k1

Xp ˛i;k ˛.i;k/ ui;k ;

(5.3.37)

i;k1

P where B D i;k1 ui;k i;k ; and .ui;k D i; k  1/ is a complete orthonormal basis in the Hilbert space U 2 WD U1 ˝ U2 : Exercise 5.3.15 (C) Prove that E kDB ˛ k2U 2 D

X

˛i;k D j˛j :

i;k1

The following simple example will be instrumental in what follows. Example 5.3.16 Let Y be a separable Hilbert space with a fixed orthonormal bases fyk ; k  1g and fmi ; i  1g be an orthonormal basis in L2 Œ0; T: In this setting, the Malliavin derivative DB ˛ is .DB ˛ /.t/ D

Xp ˛i;k ˛.i;k/ .t/ mi .t/yk :

(5.3.38)

i;k

Lemma 5.3.17 In the setting of Example 5.3.16, E kDB ˛ .t/k2Y D

XX k

i

˛i;k m2i .t/ :

(5.3.39)

5.3 Elements of Malliavin Calculus for Brownian Motion

275

Proof It follows from (5.3.38) that for t  T  ! 2  X X p   D E ˛i;k ˛.i;k/ .t/ mi .t/ yk   

E kDB ˛ .t/k2Y

k

i

Y

X Xp DE ˛i;k ˛.i;k/ .t/ mi .t/ k

DE

!2

i

Xp Xp ˛i;k ˛.i;k/ .t/ mi .t/ ˛j;k ˛.j;k/ .t/ mj .t/ i

(5.3.40)

j

0 1 Xp X Xp @ ˛i;k mi .t/ı .i; j/ ˛j;k .t/ mj .t/A D k

i

j

D

XX k

˛i;k m2i .t/ :

i

Exercise 5.3.18 Confirm the equality

1 if i D j; k D r E ˛.i;k/ ˛.j;r/ .t/ 0 otherwise;

(5.3.41)

used in (5.3.40). An immediate consequence of (5.3.39) is Z 0

T

E kDB ˛ .t/k2Y dt D

X

˛i;k D j˛j :

(5.3.42)

i;k

Our next task is to extend the definition of Malliavin derivative to a more general random variable X vD v˛ ˛ ˛2J 2

P 2 under the standard assumption P p ˛2J 2 v˛ < 1. Since .DB ˛ /.t/ D i;k ˛i;k ˛.i;k/ .t/ mi .t/yk ; we have that 0 DB @

X

˛2J 2

1 v˛ ˛ A .t/ D

X ˛2J 2

Xp ˛i;k ˛.i;k/ .t/ mi .t/yk : i;k

276

5 The Polynomial Chaos Method

Then Z

 2   X    E  DB v˛ ˛   dt D 0   ˛2J 2 X v˛ vˇ T

(5.3.43)

˛;ˇ2J 2

Z

T 0

1 Xp Xq E@ ˛i;k ˛.i;k/ .t/ mi .t/yk ; ˇj;n ˇ.j;n/ .t/ mj .t/yn A dt: 0

i;k

j;n

Y

By orthogonality, Z

T 0

1 Xq Xp E@ ˛i;k ˛.i;k/ .t/ mi .t/yk ; ˇj;n ˇ.j;n/ .t/ mj .t/yn A dt 0

i;k

j;n

D

Y

j˛j if ˛ D ˇ; 0 otherwise; (5.3.44)

Therefore, it follows from (5.3.43) and (5.3.44) Z

T 0

 0 1 2   X X    D @ A .t/ E v  j˛jv˛2 ; D ˛ ˛   B   ˛2J 2 ˛2J 2

(5.3.45)

which can be infinite. Formula P (5.3.45) implies that the Malliavin derivativePDB of2 the process v .t/ D ˛2J 2 v˛ .t/ ˛ .t/ with the standard assumption ˛2J 2 v˛ .t/ < 1 is not necessarily square integrable. The formula also implies that if we assume that X

j˛jv˛2 < 1;

˛2J

then E

Z

T 0

ˇ ˇ2 ˇ ˇ X ˇ ˇ ˇD B v˛ .t/ ˛ .t/ˇˇ dt < 1: ˇ ˇ ˇ ˛2J 2

5.3 Elements of Malliavin Calculus for Brownian Motion

277

Let us now introduce the Hilbert space n o X L12 .W/ D u 2 L2 .W/ W j˛ju2˛ < 1 :

(5.3.46)

˛2J

Exercise 5.3.19 (C) Prove that the Malliavin derivative D extends to a continuous linear operator from L12 .W/ to L2 .W/. For the sake of completeness and to justify further definitions, let us establish the standard connection between the Malliavin derivative and the stochastic Itô integral. W P If u is an Ft -adapted process from L2 .WIWL2 ..0; T/I Y//, then u.t/ D k1 uk .t/yk , where the random variable uk .t/ is Ft -measurable for each t and k, and XZ k1

T

0

Ejuk .t/j2 dt < 1:

We define the stochastic Itô integral U.t/ D

Z

t 0

.u.s/; dW.s//Y D

XZ k1

t 0

uk .s/dwk .s/:

(5.3.47)

Note that U.t/ (as opposed to Y-valued) and is FtW -measurable, and Rt P is real-valued 2 2 EjU.t/j D k1 0 Ejuk .s/j ds. The next result establishes a connection between the Malliavin derivative and the stochastic Itô integral. Lemma 5.3.20 Suppose that u is an FtW -adapted process from L2 .WI L2 ..0; T/I Y//, and define the process U according to (5.3.47). Then, for every 0 < t  T and ˛ 2 J, E.U.t/˛ / D E

Z

t 0

.u.s/; .DB ˛ /.s//Y ds:

(5.3.48)

Proof Define ˛ .t/ D E.˛ jFtW /. By (5.3.31), d˛ .t/ D

Xp ˛i;k ˛.i;k/ .t/mi .t/dwk .t/:

(5.3.49)

i;k

Due to FtW -measurability of uk .t/, we have   uk;˛ .t/ D E uk .t/E.˛ jFtW / D E.uk .t/˛ .t//:

(5.3.50)

278

5 The Polynomial Chaos Method

The definition of U implies dU.t/ D (5.3.50), and the Itô formula, U˛ .t/ D E.U.t/˛ / D

Z tX p 0

P

k1 uk .t/dwk .t/,

so that, by (5.3.49),

˛i;k uk;˛.i;k/ .s/mi .s/ds:

(5.3.51)

i;k

Together with (5.3.38), the last equality implies (5.3.48). Lemma 5.3.20 is proved. Note that the coefficients uk;˛ of u 2 L2 .WI L2 ..0; T/I H// belong to L2 ..0; T//. RT We therefore define uk;˛;i D 0 uk;˛ .t/mi .t/dt. Then, by (5.3.51), U˛ .T/ D

Xp ˛i;k uk;˛.i;k/;i :

(5.3.52)

i;k

Since U.T/ D conclude that

P

˛2J

U˛ .T/˛ , we shift the summation index in (5.3.52) and

U.T/ D

X Xp ˛i;k C 1uk;˛;i ˛C.i;k/ : ˛2J 2

(5.3.53)

i;k

As a result, U.T/ D ıB .u/, where ıB is the adjoint of the Malliavin derivative, also known as the Skorokhod integral; see  for details. Lemma 5.3.20 suggests the following definition. For an FtW -adapted process v from L2 .WI L2 ..0; T///, let Dk u be the FtW -adapted process from L2 .WI L2 ..0; T/// so that .Dk u/˛ .t/ D

Z tX p ˛i;k v˛.i;k/ .s/mi .s/ds: 0

(5.3.54)

i

P is FtW -adapted, then u is in the domain of If u D k uk yk 2 L2 .WI L2 ..0; P T/I Y//  the operator ıB and ıB.u/ D k1 .Dk uk /.t/. Next, we introduce generalized random processes and extend the operators Dk to such processes. Let X be a Banach space with norm k  kX . Definition 5.3.21 (1) The space D.L2 .W// of test functions is the collection of elements from L2 .W/ that can be written in the form X vD v˛ ˛ ˛2Jv

for some v˛ 2 R and a finite subset Jv of J 2 . (2) A sequence vn converges to v in D.L2 .W// if and only if Jvn Jv for all n and lim jvn;˛  v˛ j D 0 for all ˛. n!1

5.3 Elements of Malliavin Calculus for Brownian Motion

279

Definition 5.3.22 For a linear topological space X define the space D0 .L2 .W/I X/ of X-valued generalized random elements as the collection of continuous linear maps from the linear topological space D.L2 .W// to X. The elements of D0 .L2 .W/I L1 ..0; T/I X// are called X-valued generalized random processes. The element u of D0 .L2 .W/I X/ can be identified with a formal Cameron-MartinFourier series X uD u˛ ˛ ; ˛2J 2

and u˛ 2 X are then called generalized Fourier coefficients of u. For such a series and for v 2 D.L2 .W//, we have u.v/ D

X

v˛ u˛ :

˛2Jv

Conversely, for u 2 D0 .L2 .W/I X/, we define the formal Cameron-Martin-Fourier series of u by setting u˛ D u.˛ /. If u 2 L2 .W/, then u 2 D0 .L2 .W// and u.v/ D E.uv/. By Definition 5.3.22, a sequence fun ; n  1g converges to u in D0 .L2 .W/I X/ if and only if un .v/ converges to u.v/ in the topology of X for every v 2 D.L2 .W//. In terms of generalized Fourier coefficients, this is equivalent to lim un;˛ D u˛ in n!1 the topology of X for every ˛ 2 J. The construction of the space D0 .L2 .W/I X/ can be extended to Hilbert spaces other than L2 .W/. Let H be a real separable Hilbert space with an orthonormal basis fek ; k  1g. Define the space o n X vk ek ; vk 2 R; Jv  a finite subset of f1; 2; : : :g : D.H/ D v 2 H W v D k2Jv

By definition, vn converges to v in D.H/ as n ! 1 if and only if Jvn Jv for all n and lim jvn;k  vk j D 0 for all k. n!1

For a linear topological space X, D0 .HI X/ is the space of continuous linear maps from D.H/ to X. An element g of D0 .HI X/ can be identified with a P formal series P g ˝ e so that g D g.e / 2 X and, for v 2 D.H/, g.v/ D k k k k1 k k2J 2 v gk vk . P P 2 If X D R and k1 gk < 1, then g D k1 gk ek 2 H and g.v/ D .g; v/H , the inner The space X is naturally imbedded into D0 .HI X/: if u 2 X, then P product in H. 0 X/. k1 u ˝ ek 2 D .HI P P A sequence gn D k1 gn;k ˝ ek ; n  1; converges to g D k1 gk ˝ ek in D0 .HI X/ if and only if, for every k  1; lim gn;k D gk in the topology of X. n!1

280

5 The Polynomial Chaos Method

A collection fLk ; k  1g of linear operators from X1 to X2 naturally defines a linear operator L from D0 .HI X1 / to D0 .HI X2 /: 0 1 X X L@ g k ˝ ek A D Lk .gk / ˝ ek : k1

k1

Similarly, a linear operator L W D0 .HI X1 / ! D0 .HI X2 / can be identified with a collection fLk ; k  1g of linear operators from X1 to X2 by setting Lk .u/ D L.u ˝ ek /. We will see later that introduction of spaces D0 .HI X/ and the corresponding operators makes it possible to study stochastic equations without worrying about square integrability of the solution. Definition 5.3.23 (a) If u is an X-valued generalized random process, then Dk u is the X-valued generalized random process so that .Dk u/˛ .t/ D

XZ i

0

t

p u˛.i;k/ .s/ ˛i;k mi .s/ds:

(5.3.55)

  P (b) If g 2 D0 YI D0 .L2 .W/I L1 ..0; T/I X// , with g D k1 gk ˝ yk ; gk 2 D0 .L2 .W/I L1 ..0; T/I X//, then D g is the X-valued generalized random process with X XZ t p .Dk gk /˛ .t/ D gk;˛.i;k/ .s/ ˛i;k mi .s/ds: (5.3.56) .D g/˛ .t/ D k

0

i;k

Using (5.3.38), we get a generalization of equality (5.3.48): .D g/˛ .t/ D

Z 0

t

  g D˛ .s/ds:

(5.3.57)

Indeed, by linearity, gk

p  p ˛i;k mi .s/˛.i;k/ .s/ D ˛i;k mi .s/gk;˛.i;k/ .s/:

Theorem 5.3.24 If T < 1, then Dk and D are continuous linear operators.   Proof It is enough to show that, if u; un 2 D0 L2 .FTW /I L1 ..0; T/I X/ and limn!1 ku˛  un;˛ kL1 ..0;T/IX/ D 0 for every ˛ 2 J, then, for every k  1 and ˛ 2 J; lim k.Dk u/˛  .Dk un /˛ kL1 ..0;T/IX/ D 0:

n!1

5.4 Wiener Chaos Solutions for Parabolic SPDEs

281

Using (5.3.55), we find k.Dk u/˛  .Dk un /˛ kX .t/ XZ T p ˛i;k ku˛.i;k/  un;˛.i;k/ kX .s/jmi .s/jds:  0

i

Note that the sum contains finitely many terms. By assumption, jmi .t/j  Ci , and so k.Dk u/˛  .Dk un /˛ kL1 ..0;T/IX/

 C.˛/

Xp ˛i;k ku˛.i;k/  un;˛.i;k/ kL1 ..0;T/IX/ :

(5.3.58)

i

Theorem 5.3.24 is proved.

5.4 Wiener Chaos Solutions for Parabolic SPDEs 5.4.1 The Propagator In this section we build on the ideas from  to introduce the Wiener Chaos solution and the corresponding propagator for a general stochastic evolution equation. The notations from Sects. 5.1 and 5.3.2 will remain in force. It will be convenient to interpret the cylindrical Brownian motion W as a collection fwk ; k  1g of independent standard Wiener processes. As before, T 2 .0; 1/ is fixed and nonrandom. Introduce the following objects: • The Banach spaces A, X, and U so that U X. • Linear operators A W L1 ..0; T/I A/ ! L1 ..0; T/I X/ and Mk W L1 ..0; T/I A/ ! L1 ..0; T/I X/: • Generalized random processes f 2 D0 .L2 .W/I L1 ..0; T/I X// and gk 2 D0 .L2 .W/I L1 ..0; T/I X// : • The initial condition u0 2 D0 .L2 .W/I U/. Consider the deterministic equation v.t/ D v0 C

Z

t 0

where v0 2 U and ' 2 L1 ..0; T/I X/.

.Av/.s/ds C

Z

t 0

'.s/ds;

(5.4.1)

282

5 The Polynomial Chaos Method

Definition 5.4.1 A function v is called a w.A; X/ solution of (5.4.1) if and only if v 2 L1 ..0; T/I A/ and equality (5.4.1) holds in the space L1 ..0; T/I A/. Definition 5.4.2 An A-valued generalized random process u is called a w.A; X/ Wiener Chaos solution of the stochastic differential equation du.t/ D .Au.t/ C f .t//dt C .Mk u.t/ C gk .t//dwk .t/; 0 < t  T; ujtD0 D u0 ; (5.4.2) if and only if the equality u.t/ D u0 C

Z

t 0

.Au C f /.s/ds C

X

.Dk .Mk u C gk //.t/

(5.4.3)

k1

holds in D0 .L2 .W/I L1 ..0; T/I X//. Sometimes, to stress the dependence of the Wiener Chaos solution on the terminal time T, the notation wT .A; X/ will be used. By (5.3.56), equality (5.4.3) means that, for every ˛ 2 J 2 , the generalized Fourier coefficient u˛ of u satisfies u˛ .t/ D u0;˛ C

Z

t 0

Z tX p .Au C f /˛ .s/ds C ˛i;k .Mk u C gk /˛.i;k/ .s/mi .s/ds: 0

i;k

(5.4.4) Definition 5.4.3 System (5.4.4) is called the propagator for Eq. (5.4.2). The propagator is a lower triangular system. Indeed, if ˛ D .0/, that is, j˛j D 0, then the corresponding equation in (5.4.4) becomes u.0/ .t/ D u0;.0/ C

Z 0

t

.Au.0/ .s/ C f.0/ .s//ds:

(5.4.5)

If ˛ D . j; `/, that is, ˛j;` D 1 for some fixed j and ` and ˛i;k D 0 for all other i; k  1, then the corresponding equation in (5.4.4) becomes u. j;`/ .t/ D u0;. j;`/ C C

Z

Z

t 0

.Au. j;`/ .s/ C f. j;`/ .s//ds

t 0

(5.4.6)

.Mk u.0/ .s/ C g`;.0/ .s//mj .s/ds:

Continuing in this way, we conclude that (5.4.4) can be solved by induction on j˛j as long as the corresponding deterministic equation (5.4.1) is solvable. The precise result is as follows. Theorem 5.4.4 If, for every v0 2 U and ' 2 L1 ..0; T/I X/, Eq. (5.4.1) has a unique w.A; X/ solution v.t/ D V.t; v0 ; '/, then Eq. (5.4.2) has a unique w.A; X/ Wiener

5.4 Wiener Chaos Solutions for Parabolic SPDEs

283

Chaos solution so that u˛ .t/ D V.t; u0;˛ ; f˛ / C C

Xp

Xp ˛i;k V.t; 0; mi Mk u˛.i;k/ / i;k

˛i;k V.t; 0; mi gk;˛.i;k/ /:

(5.4.7)

i;k

Proof Using the assumptions of the theorem and linearity, we conclude that (5.4.7) is the unique solution of (5.4.4). If the functions f ; g; u0 are non-random, we can derive a more explicit formula for u˛ using the characteristic set K˛ of the multi-index ˛; see (5.3.26) on page 271. Theorem 5.4.5 Assume that 1. for every v0 2 U and ' 2 L1 ..0; T/I X/, Eq. (5.4.1) has a unique w.A; X/ solution v.t/ D V.t; v0 ; '/, 2. the input data in (5.4.4) satisfy gk D 0 and f˛ D u0;˛ D 0 if j˛j > 0. Let u.0/ .t/ D V.t; u0 ; 0/ be the solution of (5.4.4) for j˛j D 0. For ˛ 2 J 2 with j˛j D n  1 and the characteristic set K˛ , define functions F n D F n .tI ˛/ by induction as follows: F 1 .tI ˛/ D V.t; 0; mi Mk u.0/ / if K˛ D f.i; k/gI F n .tI ˛/ D

n X

V.t; 0; mij Mkj F n1 .I ˛  .ij ; kj ///

(5.4.8)

jD1

if K˛ D f.i1 ; k1 /; : : : ; .in ; kn /g: Then 1 u˛ .t/ D p F n .tI ˛/: ˛Š

(5.4.9)

Proof If j˛j D 1, then representation (5.4.9) follows from (5.4.6). For j˛j > 1, observe that p • If uN ˛ .t/ D ˛Š u˛ and j˛j  1, then (5.4.4) implies uN .t/ D

Z

t 0

ANu˛ .s/ds C

XZ i;k

t 0

˛i;k mi .s/Mk uN ˛.i;k/ .s/ds:

• If K˛ D f.i1 ; k1 /; : : : ; .in ; kn /g, then, for every j D 1; : : : ; n, the characteristic set K˛.ij ;kj / of ˛  .ij ; kj / is obtained from K˛ by removing the pair .ij ; kj /.

284

5 The Polynomial Chaos Method

• By the definition of the characteristic set, X

˛i;k mi .s/Mk uN ˛.i;k/ .s/ D

i;k

n X

mij .s/Mkj uN ˛.ij ;kj / .s/:

jD1

As a result, representation (5.4.9) follows by induction on j˛j using (5.4.7): if j˛j D n > 1, then uN ˛ .t/ D

n X

V.t; 0; mij Mkj uN ˛.ij ;kj / /

jD1

D

n X

(5.4.10) V.t; 0; mij Mkj F

.n1/

.I ˛  .ij ; kj // D F .tI ˛/: n

jD1

Theorem 5.4.5 is proved. Corollary 5.4.6 Assume that the operator A is a generator of a strongly continuous semi-group ˚ D ˚t;s ; t  s  0; in some Hilbert space H so that A H, each Mk is a bounded operator from A to H, and the solution V.t; 0; '/ of Eq. (5.4.1) is written as V.t; 0; '/ D

Z

T 0

˚t;s '.s/ds; ' 2 L2 ..0; T/I H//:

(5.4.11)

Denote by P n the permutation group of f1; : : : ; ng: If u.0/ 2 L2 ..0; T/I H//, then, for ˛ with j˛j D n > 1 and the characteristic set K˛ D f.i1 ; k1 /; : : : ; .in ; kn /g; representation (5.4.9) becomes Z s2 Z Z 1 X t sn ::: u˛ .t/ D p ˛Š  2P n 0 0 0 ˚t;sn Mk .n/    ˚s2 ;s1 Mk .1/ u.0/ .s1 /mi .n/ .sn /    mi .1/ .s1 /ds1 : : : dsn : (5.4.12) Also, X j˛jDn

u˛ .t/˛ D

X

Z tZ

k1 ;:::;kn 1 0

sn 0

:::

Z

s2 0

  ˚t;sn Mkn    ˚s2 ;s1 Mk1 u.0/ C gk1 .s1 / dwk1 .s1 /    dwkn .sn /; n  1; (5.4.13)

5.4 Wiener Chaos Solutions for Parabolic SPDEs

285

and, for every Hilbert space X, the following energy equality holds: X

ku˛ .t/k2X

j˛jDn

D

Z tZ

1 X

sn 0

k1 ;:::;kn D1 0

:::

Z

s2 0

k˚t;sn Mkn    ˚s2 ;s1 Mk1 u.0/ .s1 /k2X ds1

(5.4.14) : : : dsn I

both sides in the last equality can be infinite. For n D 1, formulas (5.4.12) and (5.4.14) become u.ik/ .t/ D X j˛jD1

ku˛ .t/k2X D

Z

t

0

˚t;s Mk u.0/ .s/ mi .s/dsI

1 Z X kD1

t 0

k˚t;s Mk u.0/ .s/k2X ds:

(5.4.15)

(5.4.16)

Proof Using the semi-group representation (5.4.11), we conclude that (5.4.12) is just an expanded version of (5.4.9). Since fmi ; i  1g is an orthonormal basis in L2 .0; T/, equality (5.4.16) follows from (5.4.15) and the Parseval identity. Similarly, equality (5.4.14) will follow from (5.4.12) after an application of an appropriate Parseval’s identity. To carry out the necessary arguments when j˛j > 1, denote by J 1 the collection of one-dimensionalP multi-indices ˇ D .ˇ1 ; ˇ2 ; : : :/ so that each ˇi is a non-negative integer and jˇj D i1 ˇi < 1. Given a ˇ 2 J 1 with jˇj D n, we define Kˇ D fi1 ; : : : ; in g, the characteristic set of ˇ and the function X 1 Eˇ .s1 ; : : : ; sn / D p mi1 .s .1/ /    min .s .n/ /: ˇŠ nŠ  2P n

(5.4.17)

By construction, the collection fEˇ ; ˇ 2 J 1 ; jˇj D ng is an orthonormal basis in the sub-space of symmetric functions in L2 ..0; T/n I X/. Next, we re-write (5.4.12) in a symmetrized form. To make the notations shorter, denote by s.n/ the ordered set .s1 ; : : : ; sn / and write dsn D ds1 : : : dsn . Fix t 2 .0; T and the set k.n/ D fk1 ; : : : ; kn g of the second components of the characteristic set K˛ . Define the symmetric function G.t; k.n/ I s.n/ / 1 X Dp ˚t;s .n/ Mkn    ˚s .2/ ;s .1/ Mk1 u.0/ .s .1/ /1s .1/  2 , then the square-integrable solution of (5.4.20) coincides with the Wiener Chaos solution. On the other hand, the heat equation v.t; x/ D v0 .x/ C

Z

t 0

vxx .s; x/ds C

Z 0

t

'.s; x/ds; v0 2 L2 .R/

with ' 2 L2 ..0; T/I H21 .R// has a unique w.H21 .R/; H21 .R// solution. Therefore, by Theorem 5.4.4, the unique w.H21 .R/; H21 .R// Wiener Chaos solution of (5.4.20) exists for all  2 R. In the next example, the equation, although not parabolic, can be solved explicitly. Example 5.4.8 Consider the following equation: du.t; x/ D ux .t; x/dw.t/; t > 0; x 2 RI u.0; x/ D x:

(5.4.21)

Clearly, u.t; x/ D x C w.t/ satisfies (5.4.21). To find the Wiener Chaos solution of (5.4.21), note that, with one-dimensional Wiener process, ˛i;k D ˛i , and the propagator in this case becomes Z tX p u˛ .t; x/ D x1j˛jD0 C ˛i .u˛.i/ .s; x//x mi .s/ds: 0

i

Then u˛ D 0 if j˛j > 1, and u.t; x/ D x C

X i1

i

Z 0

t

mi .s/ds D x C w.t/:

(5.4.22)

Even though Theorem 5.4.4 does not apply, the above arguments show that u.t; x/ D x C w.t/ is the unique w.A; X/ Wiener Chaos solution of (5.4.21) for suitable spaces A and X, for example,

Z X D f W .1 C x2 /2 f 2 .x/dx < 1 and A D f f W f ; f 0 2 Xg: R

Section 5.6.4 provides a more detailed analysis of Eq. (5.4.21). If Eq. (5.4.2) is anticipating, that is, the initial condition is not deterministic/F0 measurable and/or the free terms f ; g are not FtW -adapted, then the Wiener Chaos solution generalizes the Skorokhod integral interpretation of the equation.

288

5 The Polynomial Chaos Method

Example 5.4.9 Consider the equation 1 uxx .t; x/dt C ux .t; x/dw.t/; t 2 .0; T; x 2 R; 2 p with initial condition u.0; x/ D x2 w.T/. Since w.T/ D T1 , we find du.t; x/ D

.u˛ /t .t; x/ D

Xp 1 .u˛ /xx .t; x/ C ˛i mi .t/.u˛.i/ /x .t; x/ 2 i

(5.4.23)

(5.4.24)

p with initial condition u˛ .0; x/ D Tx2 1j˛jD1;˛1 D1 . By Theorem 5.4.4, there exists a unique w.A; X/ Wiener Chaos solution of (5.4.23) for suitable spaces A and X. For example, we can take

XD f W

Z

2 8 2

R

.1 C x / f .x/dx < 1

and A D f f W f ; f 0 ; f 00 2 Xg:

System (5.4.24) can be solved explicitly. R t Indeed, u˛  0 if j˛j D 0 or j˛j > 3 or if ˛1 D 0. Otherwise, writing Mi .t/ D 0 mi .s/ds, we find: p u˛ .t; x/ D .t C x2 / T; if j˛j D 1; ˛1 D 1I u˛ .t; x/ D 4 xt; if j˛j D 2; ˛1 D 2I p u˛ .t; x/ D 2 T xMi .t/; if j˛j D 2; ˛1 D ˛i D 1; 1 < iI 6 u˛ .t; x/ D p t2 ; if j˛j D 3; ˛1 D 3I T p u˛ .t; x/ D 4 T M1 .t/Mi .t/; if j˛j D 3; ˛1 D 2; ˛i D 1; 1 < iI p u˛ .t; x/ D 2 T Mi2 .t/; if j˛j D 3; ˛1 D 1; ˛i D 2; 1 < iI p u˛ .t; x/ D 2 T Mi .t/Mj .t/; if j˛j D 3; ˛1 D ˛i D ˛j D 1; 1 < i < j: Then, after straightforward but long calculations, we conclude that u.t; x/ D

X

u˛ ˛ D w.T/w2 .t/2tw.t/C2.w.T/w.t/t/x Cx2 w.T/

(5.4.25)

˛2J

is the Wiener Chaos solution of (5.4.23). It can be verified using the properties of the Skorokhod integral  that the function u defined by (5.4.25) satisfies u.t; x/ D x2 w.T/ C

1 2

Z

t 0

uxx .s; x/ds C

Z

t 0

ux .s; x/dw.s/; t 2 Œ0; T; x 2 R;

where the stochastic integral is in the sense of Skorokhod.

5.4 Wiener Chaos Solutions for Parabolic SPDEs

289

5.4.2 Special Weights and The S-Transform The space D0 .L2 .W/I X/ is too big to provide any reasonably useful information about the Wiener Chaos solution. Introduction of Wiener chaos spaces with special weights makes it possible to resolve this difficulty. As before, let  D f˛ ; ˛ 2 J 2 g be the Cameron-Martin basis in L2 .W/, and D.L2 .W/I X/, the collection of finite linear combinations of ˛ with coefficients in a Banach space X. Definition 5.4.10 Given a collection fr˛ ; ˛ 2 J 2 g of positive numbers, the space RL2 .WI X/ is the closure of D.L2 .W/I X/ with respect to the norm kvk2RL2 .WIX/ WD

X

r˛2 kv˛ k2X :

˛2J 2

The operator R defined by .Rv/˛ WD r˛ v˛ is a linear homeomorphism from RL2 .WI X/ to L2 .WI X/. There are several special choices of the weight sequence R D fr˛ ; ˛ 2 J 2 g and special notations for the corresponding weighted Wiener chaos spaces. • If Q D fq1 ; q2 ; : : :g is a sequence of positive numbers, define q˛ D

Y

˛

qk i;k :

i;k

The operator R, corresponding to r˛ D q˛ , is denotes by Q. The space QL2 .WI X/ is denoted by L2;Q .WI X/ and is called a Q-weighted Wiener chaos space. The significance of this choice of weights will be explained shortly (see, in particular, Proposition 5.4.13). • If Y r˛2 D .˛Š/ .2ik/ ˛i;k ; ;  2 R; i;k

then the corresponding space RL2 .WI X/ is denoted by .S/; .X/. As always, the argument X will be omitted if X D R. Note the analogy with (5.2.22) on page 257. The structure of weights in the spaces L2;Q and .S/; is different, and in general these two classes of spaces are not related. There exist generalized random elements that belong to some L2;Q .WI X/, but do not belong to any .S/; .X/. For example, P 2 2 u D k1 ek 1;k belongs to L2;Q .W/ with qk D e2k , but to no .S/; , because P 2 the series k1 e2k .kŠ/ .2k/ diverges for every ;  2 R. Similarly, there exist generalized random elements that belong to some .S/; .X/, but to no L2;Q .WI X/. P p For example, u D n1 nŠ.n/ , where .n/ is the multi-index with ˛1;1 D n and

290

5 The Polynomial Chaos Method

˛i;k D 0 elsewhere, P belongs to .S/1;1 ; but does not belong to any L2;Q .W/, because the series n1 qn nŠ diverges for every q > 0. The next result is the space-time analog of Proposition 2.3.3 in . Proposition 5.4.11 The sum X Y

.2ik/ ˛i;k

˛2J i;k1

converges if and only if  > 1. Proof Note that X Y

.2ik/

 ˛i;k

D

˛2J i;k1

Y

0

1 X Y  n @ ..2ik/ / A D

i;k1

n0

i;k

1 ;  >0 .1  .2ik/ / (5.4.26)

The on the right of (5.4.26) converges if and only if each of the sums P infinite Pproduct   i , k converges, that is, if an only if  > 1. i1 k1 Corollary 5.4.12 For every u 2 D0 .WI X/, there exists an operator R so that Ru 2 L2 .WI X/. Proof Define r˛2 D

Y 1 .2ik/2˛i;k : 2 1 C ku˛ kX i;k1

Then kRuk2L2 .WIX/ D

X ˛2J 2



ku˛ k2X Y .2ik/2˛i;k 1 C ku˛ k2X i;k1

X Y

.2ik/2˛i;k < 1:

˛2J 2 i;k1

The importance of the operator Q in the study of stochastic equations is due to the fact that the operator R maps a Wiener Chaos solution to a Wiener Chaos solution if and only R D Q for some sequence Q. Indeed, direct calculations show that the functions u˛ ; ˛ 2 J 2 ; satisfy the propagator (5.4.4) if and only if v˛ D .Ru/˛ satisfy v˛ .t/ D .Ru0 /˛ C

Z

t 0

Z tX p C ˛i;k 0

i;k

.Av C Rf /˛ .s/ds r˛ r˛.i;k/

(5.4.27) .Mk Ru C Rgk /˛.i;k/ .s/mi .s/ds:

5.4 Wiener Chaos Solutions for Parabolic SPDEs

291

Therefore, the operator R preserves the structure of the propagator if and only if r˛ D qk ; r˛.i;k/ that is, r˛ D q˛ for some sequence Q. Below is the summary of the main properties of the operator Q. Proposition 5.4.13 1. If qk  q < 1 for all k  1, then L2;Q .W/ .S/0; for some  > 0. 2. If qk  q > 1 for all k, then L2;Q .W/ Ln2 .W/ for all n  1, that is, the elements of L2;Q .W/ are infinitely differentiable in the Malliavin sense. 3. If u 2 L2;Q .WI X/ with generalized Fourier coefficients u˛ satisfying the propagator (5.4.4), and v D Qu, then the corresponding system for the generalized Fourier coefficients of v is v˛ .t/ D .Qu0;˛ C

Z

t 0

.Av C Qf /˛ .s/ds

Z tX p C ˛i;k .Mk v C Qgk /˛.i;k/ .s/ds: 0

(5.4.28)

i;k

4. The function u is a Wiener Chaos solution of u.t/ D u0 C

Z

t 0

.Au.s/ C f .s//dt C

Z

t 0

.Mu.s/ C g.s/; dW.s//Y

(5.4.29)

if and only if v D Qu is a Wiener Chaos solution of v.t/ D .Qu/0 C

Z

t 0

.Av.s/ C Qf .s//dt C

where, for h 2 Y, WhQ .t/ D

P

Z

t 0

.Mv.s/ C Qg.s/; dW Q .s//Y ; (5.4.30)

k1 .h; yk /Y qk wk .t/.

The following examples demonstrate how the operator Q helps with the analysis of various stochastic evolution equations. Example 5.4.14 Consider the w.H21 .R/; H21 .R// Wiener Chaos solution u of the equation du.t; x/ D .auxx .t; x/ C f .t; x//dt C ux .t; x/dw.t/; 0 < t  T; x 2 R;

(5.4.31)

with f 2 L2 .˝  .0; T/I H21 .R//, g 2 L2 .˝  .0; T/I L2 .R//, and ujtD0 D u0 2 L2 .R/. Assume that  > 0 and define the sequence Q so that qk D q for all k  1

292

and q
 2 , then the weight q can be taken bigger than one, and, according to the first statement of Proposition 5.4.13, regularity of the solution is better than the one guaranteed by Theorem 4.4.3. Example 5.4.15 The Wiener Chaos solutions can be constructed for stochastic ordinary differential equations. Consider, for example, u.t/ D 1 C

Z tX 0 k1

u.s/dwk .s/;

(5.4.32)

which clearly does not have a square-integrable solution. On the other hand, the unique w.R; R/ Wiener Chaos solution of this P 2equation belongs to L2;Q .WI L2 ..0; T// for every sequence Q satisfying k qk < 1. Indeed, for (5.4.32), Eq. (5.4.30) becomes v.t/ D 1 C

Z tX 0

v.s/qk dwk .s/:

k

P If k q2k < 1, then the square-integrable solution of this equation exists and belongs to L2 .WI L2 ..0; T///. There exist equations for which the Wiener Chaos solution does not belong to any weighted Wiener chaos space L2;Q . An example is given below in Sect. 5.6.4. The S-transform, which is yet another variation on the theme of the Fourier transform, is necessary to introduce yet another notion of the solution (white noise solution). To define the S-transform, consider yet another version of the stochastic exponential: E.h/ D exp

Z 0

T

.h.t/; dW.t//Y 

1 2

Z

T 0

 kh.t/k2Y dt :

5.4 Wiener Chaos Solutions for Parabolic SPDEs

293

Lemma 5.4.16 If h 2 D .L2 ..0; T/I Y//, then • E.h/ 2 L2;Q .W/ for every sequence Q. • E.h/ 2 .S/; for 0   < 1 and   0. • E.h/ 2 .S/1; ,   0, as long as khk2L2 ..0;T/IY/ is sufficiently small. P Proof Recall that, if h 2 D.L2 ..0; T/I Y//, then h.t/ D i;k2Ih hi;k mi .t/yk , where Ih is a finite set. Direct computations show that 0 1 Y X Hn .i;k / X h˛ @ .hi;k /n A D E.h/ D p ˛ ; nŠ ˛Š i;k n0 ˛2J 2 where h˛ D

Q

i;k

˛

hi;ki;k . In particular, h˛ .E.h//˛ D p : ˛Š

(5.4.33)

Consequently, for every sequence Q of positive numbers, 0 kE.h/k2L2;Q .W/ D exp @

X

1 h2i;k q2k A < 1:

(5.4.34)

i;k2Ih

Similarly, for 0   < 1 and   0, kE.h/k2.S/; D

X Y ..2ik/ hi;k /2˛i;k ˛2J i;k

.˛i;k Š/1

D

Y i;k2Ih

0 @

X ..2ik/ hi;k /2n n0

.nŠ/1

1 A < 1; (5.4.35)

and, for  D 1, kE.h/k2.S/1;

0 1 XY Y X @ ..2ik/ hi;k /2n A < 1; D ..2ik/ hi;k /2˛i;k D ˛2J i;k

i;k2Ih

n0

(5.4.36) P 2  if 2 max.m;n/2Jh / .mn/ i;k hi;k < 1. Lemma 5.4.16 is proved. Remark 5.4.17 It is well-known (see, for example, [139, Proof of Theorem 5.5]) that the family fE.h/; h 2 D .L2 ..0; T/I Y//g is dense in L2 .W/ and consequently in every L2;Q .W/ and every .S/; , 1 <   1,  2 R.

294

5 The Polynomial Chaos Method

Definition 5.4.18 If u 2 L2;Q .WI X/ for some Q, or if u 2   1, then the deterministic function SŒu.h/ D

S

q0 .S/; .X/,

X u˛ h˛ p 2X ˛Š ˛2J

0

(5.4.37)

  is called the S-transform of u. Similarly, for g 2 D0 YI L2;Q .WI X/ the S-transform SŒg.h/ 2 D0 .YI X/ is defined by setting .SŒg.h//k D SŒgk .h/. Note that if uS2 L2 .WI X/, then SŒu.h/ D E.uE.h//. If u belongs to L2;Q .WI X/ or to q0 .S/; S .X/, 0   < 1, then SŒu.h/ is defined for all h 2 D .L2 ..0; T/I Y// : If u 2  0 .S/1; .X/, then SŒu.h/ is defined only for h sufficiently close to zero. S By Remark 5.4.17, an element u from L2;Q .WI X/ or  0 .S/; .X/, 0   < 1, is uniquely determined by the collection of deterministic functions SŒu.h/; h 2 D .L2 ..0; T/I Y// : Since E.h/ > 0 for all h 2 D .L2 ..0; T/I Y//, Remark 5.4.17 also suggests the following definition. S Definition 5.4.19 An element u from L2;Q .W/ or  0 .S/; , 0   < 1 is called non-negative (u  0) if and only if SŒu.h/  0 for all h 2 D .L2 ..0; T/I Y//. The definition of the operator Q and Definition 5.4.19 imply the following result. Proposition 5.4.20 A generalized random element u from L2;Q .W/ is non-negative if and only if Qu  0. For example, the solution of equation (5.4.32) is non-negative because 0

1  X Qu.t/ D exp @ qk wk .t/  .q2k =2/ A : k1

We conclude this section with one technical remark. Definition 5.4.18 expresses the S-transform in terms of the generalized Fourier coefficients. The following results makes it possible to recover generalized Fourier coefficients from the corresponding S-transform. S Proposition 5.4.21 If u belongs to some L2;Q .WI X/ or  0 .S/; .X/, 0    1, then !ˇ Y @˛i;k SŒu.h/ ˇˇ 1 : (5.4.38) u˛ D p ˇ ˛ ˇ @hi;ki;k ˛Š i;k

hD0

Proof For each ˛ 2 J 2 with K non-zero entries, equality (5.4.37) and Lemma 5.4.16 imply that the function SŒu.h/, as a function of K variables hi;k , is analytic in some neighborhood of zero. Then (5.4.38) follows after differentiation of the series (5.4.37).

5.5 Further Properties of the Wiener Chaos Solutions
5.5.1 White Noise and Square-Integrable Solutions

Using notations and assumptions from Sect. 5.4.1, consider the linear evolution equation

$$du(t) = \big(A(t)u(t) + f(t)\big)\,dt + \big(\mathcal{M}(t)u(t) + g(t),\,dW(t)\big)_Y,\ \ 0<t\le T;\quad u|_{t=0} = u_0. \qquad (5.5.1)$$

The objective of this section is to study how the Wiener Chaos solution compares with the square-integrable and white noise solutions. To make the presentation shorter, call an $X$-valued generalized random element S-admissible if and only if it belongs to $L_{2,Q}(\mathbb{F}^W;X)$ for some $Q$ or to $(\mathcal{S})_{\rho,q}(X)$ for some $\rho\in[-1,1]$ and $q\in\mathbb{R}$. It was shown in Sect. 5.4.2 that, for every S-admissible $u$, the S-transform $S[u](h)$ is defined when $h = \sum_{i,k}h_{i,k}m_iy_k \in \mathcal{D}\big(L_2((0,T);Y)\big)$ and is an analytic function of $h_{i,k}$ in some neighborhood of $h=0$. The next result describes the S-transform of the Wiener Chaos solution.

Theorem 5.5.1 Assume that:

1. There exists a unique $w(\mathcal{A},X)$ Wiener Chaos solution $u$ of (5.5.1) and $u$ is S-admissible;
2. For each $t\in[0,T]$, the linear operators $A(t)$, $M_k(t)$ are bounded from $\mathcal{A}$ to $X$;
3. The generalized random elements $u_0$, $f$, $g_k$ are S-admissible.

Then, for every $h\in\mathcal{D}\big(L_2((0,T);Y)\big)$ with $\|h\|^2_{L_2((0,T);Y)}$ sufficiently small, the function $v = S[u](h)$ is a $w(\mathcal{A},X)$ solution of the deterministic equation

$$v(t) = S[u_0](h) + \int_0^t\Big(Av + S[f](h) + \big(M_kv + S[g_k](h)\big)h_k\Big)(s)\,ds. \qquad (5.5.2)$$

Proof By assumption, $S[u](h)$ exists for suitable functions $h$. Then the S-transformed equation (5.5.2) follows from the definition of the S-transform (5.4.37) and the propagator equation (5.4.4) satisfied by the generalized Fourier coefficients of $u$. Indeed, continuity of the operator $A(t)$ implies

$$S[Au](h) = \sum_\alpha\frac{h^\alpha}{\sqrt{\alpha!}}\,Au_\alpha = A\sum_\alpha\frac{h^\alpha}{\sqrt{\alpha!}}\,u_\alpha = A\big(S[u](h)\big).$$

5 The Polynomial Chaos Method

Similarly, X X h˛.i;k/ X h˛ X p p ˛i;k Mk u˛.i;k/ mi D p Mk u˛.i;k/ mi hi;k ˛Š i;k ˛  .i; k/Š ˛ ˛ i;k ! X X h˛ D p Mk u˛ mi hi;k ˛ ˛ i;k D Mk .SŒu.h//hk : Computations for the other terms are similar. Theorem 5.5.1 is proved. Remark 5.5.2 If h 2 D.L2 ..0; T/I Y// and Et .h/ D exp

Z

t 0

.h.s/; dW.s//Y 

1 2

Z

t 0

 kh.t/k2Y dt ;

(5.5.3)

then, by the Itô formula, dEt .h/ D Et .h/.h.t/; dW.t//Y :

(5.5.4)

If u0 is deterministic, f and gk are FtW -adapted, and u is a square-integrable solution of (5.5.1), then equality (5.5.2) is obtained by multiplying Eqs. (5.5.4) and (5.5.1) according to the Itô formula and taking the expectation. A partial converse of Theorem 5.5.1 is that, under some regularity conditions, the Wiener Chaos solution can be recovered from the solution of the S-transformed equation (5.5.2). Theorem 5.5.3 Assume that the linear operators A.t/; Mk .t/, t 2 Œ0; T, are bounded from A to X, the input data u0 , f , gk are S-admissible, and, for every h 2 D.L2 ..0; T/I Y// with khk2L2 ..0;T/IY/ sufficiently small, there exists a w.A; X/ solution v D v.tI h/ of Eq. (5.5.2). Writing hD

X

hi;k mi yk ;

i;k

we consider v as a function of the variables hi;k . Assume that all the derivatives of v at the point h D 0 exist, and, for ˛ 2 J 2 , define 1 u˛ .t/ D p ˛Š

!ˇ Y @˛i;k v.tI h/ ˇˇ ˇ ˛ ˇ @hi;ki;k i;k

Then the generalized random process u.t/ D Chaos solution of (5.5.1).

P

˛2J 2

:

(5.5.5)

hD0

u˛ .t/˛ is a w.A; X/ Wiener


Proof Differentiation of (5.5.2) and application of Proposition 5.4.21 show that the functions $u_\alpha$ satisfy the propagator (5.4.4).

Definition 5.5.4 A white noise solution of equation (5.5.1) is an S-admissible process $u$ such that $S[u]$ satisfies (5.5.2).

Remark 5.5.5 The central part in the construction of the white noise solution of (5.5.1) is proving that the solution of (5.5.2) is an S-transform of a suitable generalized random process. For many particular cases of Eq. (5.5.1), the corresponding analysis is carried out in [76, 78, 162, 189].

The consequence of Theorems 5.5.1 and 5.5.3 is that a white noise solution of (5.5.1), if it exists, must coincide with the Wiener Chaos solution.

The next theorem establishes the connection between the Wiener Chaos solution and the square-integrable solution. Recall that the square-integrable solution of (5.5.1) was introduced in Definition 4.4.1 on page 199. Accordingly, the notations from Sect. 4.4 will be used.

Theorem 5.5.6 Let $(V,H,V')$ be a normal triple of Hilbert spaces. Take deterministic functions $u_0$, $f$, and $g_k$ such that

$$u_0\in H,\quad f\in L_2\big((0,T);V'\big),\quad \sum_k\|g_k\|^2_{L_2((0,T);H)}<\infty.$$

Then:

1. An $\mathcal{F}_t^W$-adapted square-integrable solution of (5.5.1) is also a Wiener Chaos solution.
2. If $u$ is a $w(V,V')$ Wiener Chaos solution of (5.5.1) and

$$\sum_{\alpha\in\mathcal{J}^2}\left(\int_0^T\|u_\alpha(t)\|_V^2\,dt + \sup_{0\le t\le T}\|u_\alpha(t)\|_H^2\right)<\infty, \qquad (5.5.6)$$

then $u$ is an $\mathcal{F}_t^W$-adapted square-integrable solution of (5.5.1).

Proof (1) If $u = u(t)$ is an $\mathcal{F}_t^W$-adapted square-integrable solution, then

$$u_\alpha(t) = \mathbb{E}\big(u(t)\xi_\alpha\big) = \mathbb{E}\big(u(t)\,\mathbb{E}(\xi_\alpha|\mathcal{F}_t^W)\big) = \mathbb{E}\big(u(t)\xi_\alpha(t)\big).$$

Then the propagator (5.4.4) for $u_\alpha$ follows after applying the Itô formula to the product $u(t)\xi_\alpha(t)$ and using (5.3.49).

(2) Assumption (5.5.6) implies

$$u\in L_2\big(\Omega\times(0,T);V\big)\cap L_2\big(\Omega;C((0,T);H)\big).$$


Then, by Theorem 5.5.1, for every $\varphi\in V$ and $h\in\mathcal{D}\big(L_2((0,T);Y)\big)$, the S-transform $u^h$ of $u$ satisfies

$$(u^h(t),\varphi)_H = (u_0,\varphi)_H + \int_0^t\langle Au^h(s),\varphi\rangle\,ds + \int_0^t\langle f(s),\varphi\rangle\,ds + \sum_{\alpha\in\mathcal{J}^2}\frac{h^\alpha}{\sqrt{\alpha!}}\sum_{i,k}\int_0^t\sqrt{\alpha_{i,k}}\,m_i(s)\Big(\big(M_ku_{\alpha-(i,k)}(s),\varphi\big)_H + \big(g_k(s),\varphi\big)_H\,1_{|\alpha|=1}\Big)\,ds.$$

If $I(t) = \int_0^t\big(M_ku(s),\varphi\big)_H\,dw_k(s)$, then

$$\mathbb{E}\big(I(t)\xi_\alpha(t)\big) = \int_0^t\sum_{i,k}\sqrt{\alpha_{i,k}}\,m_i(s)\big(M_ku_{\alpha-(i,k)}(s),\varphi\big)_H\,ds. \qquad (5.5.7)$$

Similarly,

$$\mathbb{E}\left(\xi_\alpha(t)\int_0^t\big(g_k(s),\varphi\big)_H\,dw_k(s)\right) = \sum_{i,k}\int_0^t\sqrt{\alpha_{i,k}}\,m_i(s)\big(g_k(s),\varphi\big)_H\,1_{|\alpha|=1}\,ds.$$

Therefore,

$$\sum_{\alpha\in\mathcal{J}^2}\frac{h^\alpha}{\sqrt{\alpha!}}\sum_{i,k}\int_0^t\sqrt{\alpha_{i,k}}\,m_i(s)\Big(\big(M_ku_{\alpha-(i,k)}(s),\varphi\big)_H + \big(g_k(s),\varphi\big)_H\,1_{|\alpha|=1}\Big)\,ds = \mathbb{E}\left(\mathcal{E}(h)\int_0^t\Big(\big(M_ku(s),\varphi\big)_H + \big(g_k(s),\varphi\big)_H\Big)\,dw_k(s)\right).$$

As a result,

$$\mathbb{E}\big(\mathcal{E}(h)(u(t),\varphi)_H\big) = \mathbb{E}\big(\mathcal{E}(h)(u_0,\varphi)_H\big) + \mathbb{E}\left(\mathcal{E}(h)\int_0^t\langle Au(s),\varphi\rangle\,ds\right) + \mathbb{E}\left(\mathcal{E}(h)\int_0^t\langle f(s),\varphi\rangle\,ds\right) + \mathbb{E}\left(\mathcal{E}(h)\int_0^t\Big(\big(M_ku(s),\varphi\big)_H + \big(g_k(s),\varphi\big)_H\Big)\,dw_k(s)\right). \qquad (5.5.8)$$

Equality (5.5.8) and Remark 5.4.17 imply that (4.4.3) on page 199 holds. Theorem 5.5.6 is proved.
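The first step of the proof, $u_\alpha(t) = \mathbb{E}\big(u(t)\xi_\alpha\big)$, can be illustrated numerically in the simplest case $du = u\,dw$, $u_0 = 1$, with a single Wiener process: here $u(t) = e^{w(t)-t/2}$ and, with $\xi_n$ the normalized Hermite polynomial of $w(t)/\sqrt t$, the coefficients are $u_n(t) = t^{n/2}/\sqrt{n!}$. The Monte Carlo sketch below checks this (the scalar example, the closed form, and the choice $t = 1$ are assumptions of the sketch, not statements from the text):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial, sqrt

rng = np.random.default_rng(0)
t, n_samples = 1.0, 1_000_000
w = np.sqrt(t) * rng.standard_normal(n_samples)   # w(t) ~ N(0, t)
u = np.exp(w - t / 2)                             # solution of du = u dw, u0 = 1

errors = []
for n in range(5):
    # xi_n: probabilists' Hermite polynomial He_n of w(t)/sqrt(t), normalized
    # so that E xi_n^2 = 1.
    xi_n = hermeval(w / np.sqrt(t), [0] * n + [1]) / sqrt(factorial(n))
    mc = float(np.mean(u * xi_n))                 # Monte Carlo E(u(t) xi_n)
    exact = t ** (n / 2) / sqrt(factorial(n))
    errors.append(abs(mc - exact))
```

Note also that $\sum_{|\alpha|=n}\|u_\alpha\|^2 = t^n/n!$ in this example, the factorial decay that Theorem 5.5.10 below guarantees whenever the operators $M_k$ are bounded on $H$.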


5.5.2 Additional Regularity

Let $\mathbb{F} = (\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},\mathbb{P})$ be a stochastic basis with the usual assumptions and $w_k = w_k(t)$, $k\ge1$, $t\ge0$, a collection of standard Wiener processes on $\mathbb{F}$. Let $(V,H,V')$ be a normal triple of Hilbert spaces and $A(t): V\to V'$, $M_k(t): V\to H$, linear bounded operators, $t\in[0,T]$. In this section we study the linear equation

$$u(t) = u_0 + \int_0^t\big(Au(s)+f(s)\big)\,ds + \int_0^t\big(M_ku(s)+g_k(s)\big)\,dw_k(s),\quad 0\le t\le T, \qquad (5.5.9)$$

under the following assumptions:

A1 There exist positive numbers $C_1$ and $\delta$ so that

$$\langle A(t)v,v\rangle + \delta\|v\|_V^2 \le C_1\|v\|_H^2,\quad v\in V,\ t\in[0,T]. \qquad (5.5.10)$$

A2 There exists a real number $C_2$ so that

$$2\langle A(t)v,v\rangle + \sum_{k\ge1}\|M_k(t)v\|_H^2 \le C_2\|v\|_H^2,\quad v\in V,\ t\in[0,T]. \qquad (5.5.11)$$

A3 The initial condition $u_0$ is non-random and belongs to $H$; the process $f = f(t)$ is deterministic and $\int_0^T\|f(t)\|_{V'}^2\,dt<\infty$; each $g_k = g_k(t)$ is a deterministic process and $\sum_{k\ge1}\int_0^T\|g_k(t)\|_H^2\,dt<\infty$.

Note that condition (5.5.11) is weaker than the standard stochastic parabolicity condition (4.4.4) on page 199. Traditional analysis of Eq. (5.5.9) under (5.5.11) requires additional regularity assumptions on the input data and additional Hilbert space constructions beyond the normal triple: see Theorem 4.4.12 on page 208. The Wiener chaos approach provides new existence and regularity results for Eq. (5.5.9). A different version of the following theorem is in .

Theorem 5.5.7 Under assumptions A1–A3, for every $T>0$, Eq. (5.5.9) has a unique $w(V,V')$ Wiener Chaos solution. This solution $u = u(t)$ has the following properties:

1. There exists a weight sequence $Q$ so that

$$u\in L_{2,Q}\big(\mathbb{W};L_2((0,T);V)\big)\cap L_{2,Q}\big(\mathbb{W};C((0,T);H)\big).$$

2. For every $0\le t\le T$, $u(t)\in L_2(\Omega;H)$ and

$$\mathbb{E}\|u(t)\|_H^2 \le 3e^{C_2t}\left(\|u_0\|_H^2 + C_f\int_0^t\|f(s)\|_{V'}^2\,ds + \sum_{k\ge1}\int_0^t\|g_k(s)\|_H^2\,ds\right), \qquad (5.5.12)$$


where the number $C_2$ is from (5.5.11) and the positive number $C_f$ depends only on $\delta$ and $C_1$ from (5.5.10).

3. For every $0\le t\le T$,

$$u(t) = \Phi_{t,0}u(0) + \sum_{n\ge1}\sum_{k_1,\dots,k_n\ge1}\int_0^t\!\!\int_0^{s_n}\!\!\cdots\int_0^{s_2}\Phi_{t,s_n}M_{k_n}\cdots\Phi_{s_2,s_1}\big(M_{k_1}\Phi_{s_1,0}u(0) + g_{k_1}(s_1)\big)\,dw_{k_1}(s_1)\cdots dw_{k_n}(s_n), \qquad (5.5.13)$$

where $\Phi_{t,s}$ is the semi-group of the operator $A$.

Proof Assumption A2 and the properties of the normal triple imply that there exists a positive number $C_{\mathcal{M}}$ so that

$$\sum_{k\ge1}\|M_k(t)v\|_H^2 \le C_{\mathcal{M}}\|v\|_V^2,\quad v\in V,\ t\in[0,T]. \qquad (5.5.14)$$

Define the sequence $Q$ by

$$q_k = \left(\frac{\varepsilon\delta}{C_{\mathcal{M}}}\right)^{1/2} =: q,\quad k\ge1, \qquad (5.5.15)$$

where $\varepsilon\in(0,2)$ and $\delta$ is from Assumption A1. Then, by Assumption A2,

$$2\langle Av,v\rangle + \sum_{k\ge1}q^2\|M_kv\|_H^2 \le -(2-\varepsilon)\delta\|v\|_V^2 + 2C_1\|v\|_H^2. \qquad (5.5.16)$$

It follows from Theorem 4.4.3 that the equation

$$v(t) = u_0 + \int_0^t(Av+f)(s)\,ds + \sum_{k\ge1}\int_0^tq\,(M_kv+g_k)(s)\,dw_k(s) \qquad (5.5.17)$$

has a unique solution

$$v\in L_2\big(\mathbb{W};L_2((0,T);V)\big)\cap L_2\big(\mathbb{W};C((0,T);H)\big).$$

Comparison of the propagators for Eqs. (5.5.9) and (5.5.17) shows that $u = Q^{-1}v$ is the unique $w(V,V')$ solution of (5.5.9) and

$$u\in L_{2,Q}\big(\mathbb{W};L_2((0,T);V)\big)\cap L_{2,Q}\big(\mathbb{W};C((0,T);H)\big). \qquad (5.5.18)$$

If $C_{\mathcal{M}}<2\delta$, then Eq. (5.5.9) is strongly parabolic and $q>1$ is an admissible choice of the weight. As a result, for strongly parabolic equations, the result (5.5.18) is stronger than the conclusion of Theorem 4.4.3.


The proof of (5.5.12) is based on the analysis of the propagator

$$u_\alpha(t) = u_01_{|\alpha|=0} + \int_0^t\big(Au_\alpha(s) + f(s)1_{|\alpha|=0}\big)\,ds + \int_0^t\sum_{i,k}\sqrt{\alpha_{i,k}}\big(M_ku_{\alpha-(i,k)}(s) + g_k(s)1_{|\alpha|=1}\big)m_i(s)\,ds. \qquad (5.5.19)$$

We consider three particular cases: (1) $f = g_k = 0$ (the homogeneous equation); (2) $u_0 = g_k = 0$; (3) $u_0 = f = 0$. The general case will then follow by linearity and the triangle inequality.

Denote by $(\Phi_{t,s},\ t\ge s\ge0)$ the (two-parameter) semi-group generated by the operator $A(t)$; $\Phi_t := \Phi_{t,0}$. One of the consequences of Theorem 4.4.3 is that, under Assumption A1, this semi-group exists and is strongly continuous in $H$.

Consider the homogeneous equation: $f = g_k = 0$. By Corollary 5.4.6,

$$\sum_{|\alpha|=n}\|u_\alpha(t)\|_H^2 = \sum_{k_1,\dots,k_n\ge1}\int_0^t\!\!\int_0^{s_n}\!\!\cdots\int_0^{s_2}\|\Phi_{t,s_n}M_{k_n}\cdots\Phi_{s_2,s_1}M_{k_1}\Phi_{s_1}u_0\|_H^2\,ds^n, \qquad (5.5.20)$$

where $ds^n = ds_1\dots ds_n$. Define $F_n(t) = \sum_{|\alpha|=n}\|u_\alpha(t)\|_H^2$, $n\ge0$. Direct application of (5.5.11) shows that

$$\frac{d}{dt}F_0(t) \le C_2F_0(t) - \sum_{k\ge1}\|M_k\Phi_tu_0\|_H^2. \qquad (5.5.21)$$

For $n\ge1$, equality (5.5.20) implies

$$\frac{d}{dt}F_n(t) = \sum_{k_1,\dots,k_n\ge1}\int_0^t\!\!\int_0^{s_{n-1}}\!\!\cdots\int_0^{s_2}\|M_{k_n}\Phi_{t,s_{n-1}}\cdots M_{k_1}\Phi_{s_1}u_0\|_H^2\,ds^{n-1} + 2\sum_{k_1,\dots,k_n\ge1}\int_0^t\!\!\int_0^{s_n}\!\!\cdots\int_0^{s_2}\big\langle A\Phi_{t,s_n}M_{k_n}\cdots\Phi_{s_1}u_0,\ \Phi_{t,s_n}M_{k_n}\cdots\Phi_{s_1}u_0\big\rangle\,ds^n. \qquad (5.5.22)$$

By (5.5.11),

$$2\sum_{k_1,\dots,k_n\ge1}\int_0^t\!\!\int_0^{s_n}\!\!\cdots\int_0^{s_2}\big\langle A\Phi_{t,s_n}M_{k_n}\cdots\Phi_{s_1}u_0,\ \Phi_{t,s_n}M_{k_n}\cdots\Phi_{s_1}u_0\big\rangle\,ds^n \le -\sum_{k_1,\dots,k_{n+1}\ge1}\int_0^t\!\!\int_0^{s_n}\!\!\cdots\int_0^{s_2}\|M_{k_{n+1}}\Phi_{t,s_n}M_{k_n}\cdots M_{k_1}\Phi_{s_1}u_0\|_H^2\,ds^n + C_2\sum_{k_1,\dots,k_n\ge1}\int_0^t\!\!\int_0^{s_n}\!\!\cdots\int_0^{s_2}\|\Phi_{t,s_n}M_{k_n}\cdots M_{k_1}\Phi_{s_1}u_0\|_H^2\,ds^n. \qquad (5.5.23)$$


As a result, for n  1, d Fn .t/  C2 Fn .t/ dt Z X Z t Z sn1 C ::: k1 ;:::;kn 1 0

Z tZ

X



0

k1 ;:::;knC1 1 0

sn 0

:::

s2

kMkn ˚t;sn1 Mkn1 : : : Mk1 ˚s1 u0 k2H dsn1

0

Z

s2

kMknC1 ˚t;sn Mkn : : : Mk1 ˚s1 u0 k2H dsn :

0

(5.5.24) Consequently, N N X X d XX ku˛ .t/k2H  C2 ku˛ .t/k2H ; dt nD0 nD0 j˛jDn

(5.5.25)

j˛jDn

so that, by the Gronwall inequality, N X X

ku˛ .t/k2H  eC2 t ku0 k2H

(5.5.26)

nD0 j˛jDn

or Eku.t/k2H  eC2 t ku0 k2H :

(5.5.27)

Next, let us assume that u0 D gk D 0. Then the propagator (5.5.19) becomes u˛ .t/ D

Z

t 0

Z tX p .Au˛ .s/ C f .s/1j˛jD0 /ds C ˛i;k Mk u˛.i;k/ .s/mi .s/ds: 0

i;k

(5.5.28) If ˛ D .0/, then ku.0/ .t/k2H D 2  C2

Z

t 0

Z 0

t

hAu.0/ .s/; u.0/ .s/ids C 2

ku.0/ .s/k2H ds 

Z tX 0 k1

Z 0

t

h f .s/; u.0/ .s/ids

kMk u.0/ .s/k2H ds C Cf

Z 0

t

k f .s/k2V 0 ds:

By Corollary 5.4.6, X j˛jDn

ku˛ .t/k2H

D

X

Z tZ

k1 ;:::;kn 1 0

sn 0

:::

Z

s2 0

k˚t;sn Mkn : : : Mk1 u.0/ .s1 /k2H dsn (5.5.29)


for n  1. Then, repeating the calculations (5.5.22)–(5.5.24), we conclude that N X X

ku˛ .t/k2H  Cf

Z

t 0

nD1 j˛jDn

Z tX N X

k f .s/k2V 0 ds C C2

0 nD1 j˛jDn

ku˛ .s/k2H ds;

(5.5.30)

and, by the Gronwall inequality, Z

Eku.t/k2H  Cf eC2 t

t

0

k f .s/k2V 0 ds:

(5.5.31)

Finally, let us assume that u0 D f D 0. Then the propagator (5.5.19) becomes u˛ .t/ D C

Z Z

t

0

Au˛ .s/ds

t 0

! Xp ˛i;k Mk u˛.i;k/ .s/ C gk .s/1j˛jD1 mi .s/ds:

(5.5.32)

i;k

Even though u.0/ .t/ D 0, we have u.i;k/ D

Z

t 0

˚t;s gk .s/mi .s/ds;

(5.5.33)

and then the arguments from the proof of Corollary 5.4.6 apply, resulting in X

ku˛ .t/k2H D

j˛jDn

Z tZ

X

k1 ;:::;kn 1 0

sn 0

:::

Z

s2

k˚t;sn Mkn : : : ˚s2 ;s1 gk1 .s1 /k2H dsn

0

for n  1. Note that X

ku˛ .t/k2H

D

j˛jD1

XZ k1

t 0

kgk .s/k2H ds

C2

XZ k1

t 0

hA˚t;s gk .s/; ˚t;s gk .s/ids:

Then, repeating the calculations (5.5.22)–(5.5.24), we conclude that N X X nD1 j˛jDn

ku˛ .t/k2H



XZ k1

t 0

kgk .s/k2H dsCC2

Z tX N X 0 nD1 j˛jDn

ku˛ .s/k2H ds;

(5.5.34)

and, by the Gronwall inequality, Eku.t/k2H  eC2 t

XZ k1

t 0

kgk .s/k2H ds:

(5.5.35)


To derive (5.5.12), it remains to combine (5.5.27), (5.5.31), and (5.5.35) with the inequality $(a+b+c)^2\le3(a^2+b^2+c^2)$. Representation (5.5.13) of the Wiener chaos solution as a sum of iterated Itô integrals now follows from Corollary 5.4.6. Theorem 5.5.7 is proved.

Corollary 5.5.8 If $\sum_{\alpha\in\mathcal{J}^2}\int_0^T\|u_\alpha(s)\|_V^2\,ds<\infty$, then $\sum_{\alpha\in\mathcal{J}^2}\sup_{0\le t\le T}\|u_\alpha(t)\|_H^2<\infty$.

Proof The proof of Theorem 5.5.7 shows that it is enough to consider the homogeneous equation. Then, by inequalities (5.5.23)–(5.5.24),

$$\sum_{\ell\ge n+1}\sup_{0\le t\le T}F_\ell(t) \le e^{C_2T}\sum_{k_1,\dots,k_{n+1}\ge1}\int_0^T\!\!\int_0^t\int_0^{s_n}\!\!\cdots\int_0^{s_2}\|M_{k_{n+1}}\Phi_{t,s_n}M_{k_n}\cdots\Phi_{s_1}u_0\|_H^2\,ds^n\,dt. \qquad (5.5.36)$$

By Corollary 5.4.6,

$$\sum_{\alpha\in\mathcal{J}^2}\int_0^T\|u_\alpha(s)\|_V^2\,ds = \sum_{n\ge1}\sum_{k_1,\dots,k_n\ge1}\int_0^T\!\!\int_0^t\int_0^{s_n}\!\!\cdots\int_0^{s_2}\|\Phi_{t,s_n}M_{k_n}\cdots\Phi_{s_1}u_0\|_V^2\,ds^n\,dt<\infty. \qquad (5.5.37)$$

As a result, (5.5.14) and (5.5.37) imply

$$\lim_{n\to\infty}\sum_{k_1,\dots,k_{n+1}\ge1}\int_0^T\!\!\int_0^t\int_0^{s_n}\!\!\cdots\int_0^{s_2}\|M_{k_{n+1}}\Phi_{t,s_n}M_{k_n}\cdots M_{k_1}\Phi_{s_1}u_0\|_H^2\,ds^n\,dt = 0,$$

which, by (5.5.36), implies uniform, with respect to $t$, convergence of the series $\sum_{\alpha\in\mathcal{J}^2}\|u_\alpha(t)\|_H^2$. Corollary 5.5.8 is proved.

Corollary 5.5.9 Let $a_{ij}$, $b_i$, $c$, $\sigma_{ik}$, $\nu_k$ be deterministic measurable functions of $(t,x)$ so that

$$|a_{ij}(t,x)| + |b_i(t,x)| + |c(t,x)| + |\sigma_{ik}(t,x)| + |\nu_k(t,x)| \le K,\quad i,j = 1,\dots,d,\ k\ge1,\ x\in\mathbb{R}^d,\ 0\le t\le T;$$

$$\left(a_{ij}(t,x) - \frac12\,\sigma_{ik}(t,x)\sigma_{jk}(t,x)\right)y_iy_j \ge 0,$$


x; y 2 Rd ; 0  t  TI and X

j k .t; x/j2  C < 1;

k1

x 2 Rd ; 0  t  T: Consider the equation du D .Di .aij Dj u/ C bi Di u C c u C f /dt C .ik Di u C k u C gk /dwk :

(5.5.38)

In (5.5.38), we use the summation convention (summation over the repeated indices is not shown but is assumed). Assume that the input data satisfy u0 2 L2 .Rd /, f 2 P 1 d L2 ..0; T/I H2 .R //; k1 kgk k2L ..0;T/Rd / < 1, and there exists an " > 0 so that 2

aij .t; x/yi yj  "jyj2 ; x; y 2 Rd ; 0  t  T: Then there exists a unique Wiener Chaos solution u D u.t; x/ of (5.5.38). The solution has the following regularity: u.t; / 2 L2 .WI L2 .Rd //; 0  t  T;

(5.5.39)

and  Ekuk2L2 .Rd / .t/  C ku0 k2L2 .Rd / C k f k2L ..0;T/IH 1 .Rd // 2 2  X 2 C kgk kL2 ..0;T/Rd / ;

(5.5.40)

k1

where the positive number C depends only on C ; K; T; and ". 0 and the equation is fully degenerate, that is, 2hA.t/v; vi C P If f D gk D 2 kM .t/vk D 0, t 2 Œ0; T, then it is natural to expect conservation of energy. k k1 H Once again, analysis of (5.5.23)–(5.5.24) shows that equality Eku.t/k2H D ku0 k2H holds if and only if Z

T

Z tZ

lim

n!1 0

0

0

sn

:::

Z

s2 0

kMknC1 ˚t;sn Mkn : : : Mk1 ˚s1 u0 k2H dsn dt D 0:

The proof of Corollary 5.5.8 shows that a sufficient condition for the conservation RT of energy in a fully degenerate homogeneous equation is E 0 ku.t/k2V dt < 1. One of applications of the Wiener Chaos solution is new numerical methods for solving the evolution equations. P Indeed, an approximation of the solution is obtained by truncating the sum ˛2J 2 u˛ .t/˛ . For the Zakai filtering equation,


these numerical methods were studied in [147, 148, 153]; see also Sect. 5.6.1 below. The main question in the analysis is the rate of convergence, in $n$, of the series $\sum_{n\ge1}\sum_{|\alpha|=n}\|u_\alpha(t)\|_H^2$. In general, this convergence can be arbitrarily slow. For example, consider the equation

$$du = \frac12u_{xx}\,dt + u_x\,dw(t),\quad t>0,\ x\in\mathbb{R},$$

in the normal triple $\big(H_2^1(\mathbb{R}),L_2(\mathbb{R}),H_2^{-1}(\mathbb{R})\big)$, with initial condition $u|_{t=0} = u_0\in L_2(\mathbb{R})$. It follows from (5.5.20) that

$$F_n(t) = \sum_{|\alpha|=n}\|u_\alpha\|^2_{L_2(\mathbb{R})}(t) = \frac{t^n}{n!}\int_{\mathbb{R}}|y|^{2n}e^{-y^2t}\,|\hat u_0(y)|^2\,dy,$$

where $\hat u_0$ is the Fourier transform of $u_0$. If

$$|\hat u_0(y)|^2 = \frac{1}{(1+|y|^2)^\gamma},\quad \gamma>1/2,$$

then the rate of decay of $F_n(t)$ is close to $n^{-(1+2\gamma)/2}$. Note that, in this example, $\mathbb{E}\|u\|^2_{L_2(\mathbb{R})}(t) = \|u_0\|^2_{L_2(\mathbb{R})}$.

An exponential convergence rate, uniform in $\|u_0\|_H^2$, is achieved under the strong parabolicity condition (4.4.4). An even faster, factorial, rate is achieved when the operators $M_k$ are bounded on $H$.

Theorem 5.5.10 Assume that there exist a positive number $\varepsilon$ and a real number $C_0$ so that

$$2\langle A(t)v,v\rangle + \sum_{k\ge1}\|M_k(t)v\|_H^2 + \varepsilon\|v\|_V^2 \le C_0\|v\|_H^2,\quad t\in[0,T],\ v\in V.$$

Then there exists a positive number $b$ so that, for all $t\in[0,T]$,

$$\sum_{|\alpha|=n}\|u_\alpha(t)\|_H^2 \le \frac{\|u_0\|_H^2}{(1+b)^n}. \qquad (5.5.41)$$

If, in addition, $\sum_{k\ge1}\|M_k(t)\varphi\|_H^2 \le C_3\|\varphi\|_H^2$, then

$$\sum_{|\alpha|=n}\|u_\alpha(t)\|_H^2 \le \frac{(C_3t)^n}{n!}\,e^{C_1t}\,\|u_0\|_H^2. \qquad (5.5.42)$$

Proof If $C_{\mathcal{M}}$ is from (5.5.14) and $b = \varepsilon/C_{\mathcal{M}}$, then the operators $A(t)$ and $\sqrt{1+b}\,M_k(t)$ satisfy

$$2\langle A(t)v,v\rangle + (1+b)\sum_{k\ge1}\|M_k(t)v\|_H^2 \le C_0\|v\|_H^2.$$


By Theorem 5.5.7, .1 C b/n

X

Z tZ

k1 ;:::;kn 1 0

sn 0

:::

Z 0

s2

k˚t;sn Mkn : : : Mk1 ˚s1 u0 k2H dsn  ku0 k2H ;

and (5.5.41) follows. To establish (5.5.42), note that, by (5.5.10), k˚t f k2H  eC1 t k f k2H ; and therefore the result follows from (5.5.20). Theorem 5.5.10 is proved. The Wiener Chaos solution of (5.5.9) is not, in general, a solution of the equation in the sense of Definition 4.4.1. Indeed, if u 62 L2 .˝  .0; T/I V/, then the expressions hAu.s/; 'i and .Mk u.s/; '/H are not defined. On the other hand, if there is a possibility to move the operators A and M from the solution process u to the test function ', then Eq. (5.5.9) admits a natural analog of the standard variations formulation. Theorem 5.5.11 In addition to A1–A3, assume that there exist operators A .t/, Mk .t/ and a dense subset V0 of the space V so that 1. A .t/.V0 / H, Mk .t/.V0 / H, t 2 Œ0; T; 2. for every v 2 V, ' 2 V0 , and t 2 Œ0; T, hA.t/v; 'i D .v; A .t/'/H ; .Mk .t/v; '/H D .v; Mk .t/'/H . If u D u.t/ is the Wiener Chaos solution of (5.5.9), then, for every ' 2 V0 and every t 2 Œ0; T, the equality .u.t/; '/H D .u0 ; '/H C C

XZ k1

t 0

Z

t 0



.u.s/; A .s/'/H ds C

.u.s/; Mk .s/'/H dwk .s/ C

Z

t 0

h f .s/; 'ids

XZ k1

t 0

.gk .s/; '/H dwk .s/ (5.5.43)

holds in L2 .W/. Proof The arguments are identical to the proof of Theorem 5.5.6(2). As was mentioned earlier, the Wiener Chaos solution can be constructed for anticipating equations, that is, equations with FTW -measurable (rather than FtW adapted) input data. With obvious modifications, inequality (5.5.12) holds if each of the input functions u0 ; f , and gk in (5.5.9) is a finite linear combination of the basis elements ˛ . The following example demonstrates that inequality (5.5.12) is impossible for general anticipating equation.


Example 5.5.12 Let $u = u(t)$ be a Wiener Chaos solution of the ordinary differential equation

$$du = u\,dw(t),\quad 0<t\le1, \qquad (5.5.44)$$

with $u_0 = \sum_{\alpha\in\mathcal{J}}a_\alpha\xi_\alpha$. For $n\ge0$, denote by $(n)$ the multi-index with $\alpha_1 = n$ and $\alpha_i = 0$, $i\ge2$, and assume that $a_{(n)}>0$, $n\ge0$. Then

$$\mathbb{E}u^2(1) \ge C\sum_{n\ge0}e^{\sqrt n}\,a_{(n)}^2. \qquad (5.5.45)$$

Indeed, the first column of the propagator for $\alpha = (n)$ is $u_{(0)}(t) = a_{(0)}$ and

$$u_{(n)}(t) = a_{(n)} + \sqrt n\int_0^tu_{(n-1)}(s)\,ds,$$

so that

$$u_{(n)}(t) = \sum_{k=0}^n\frac{\sqrt{n!}}{\sqrt{(n-k)!}\,k!}\,a_{(n-k)}\,t^k.$$

Then

$$u_{(n)}^2(1) \ge \sum_{k=0}^n\binom nk\frac{a_{(n-k)}^2}{k!}$$

and

$$\sum_{n\ge0}u_{(n)}^2(1) \ge \sum_{n\ge0}\left(\sum_{k\ge0}\frac1{k!}\binom{n+k}{n}\right)a_{(n)}^2.$$

Since

$$\sum_{k\ge0}\frac1{k!}\binom{n+k}{n} \ge \sum_{k\ge0}\frac{n^k}{(k!)^2} \ge Ce^{\sqrt n},$$

the result follows.

The consequence of Example 5.5.12 is that it is possible, in (5.5.9), to have $u_0\in L_2^n(\mathbb{W};H)$ for every $n$ and still get $\mathbb{E}\|u(t)\|_H^2 = +\infty$ for all $t>0$. More generally, the solution operator for (5.5.9) is not bounded on any $L_{2,Q}$ or $(\mathcal{S})_{\rho,q}$. On the other hand, the following result holds.

Theorem 5.5.13 In addition to Assumptions A1, A2, let $u_0$ be an element of $\mathcal{D}'(\mathbb{W};H)$, $f$ an element of $\mathcal{D}'\big(\mathbb{W};L_2((0,T);V')\big)$, and each $g_k$ an element of


$\mathcal{D}'\big(\mathbb{W};L_2((0,T);H)\big)$. Then the Wiener Chaos solution of equation (5.5.9) satisfies

$$\left(\sum_{\alpha\in\mathcal{J}^2}\frac{\|u_\alpha(t)\|_H^2}{\alpha!}\right)^{1/2} \le C\sum_{\alpha\in\mathcal{J}^2}\frac{1}{\sqrt{\alpha!}}\left(\|u_{0,\alpha}\|_H + \left(\int_0^t\|f_\alpha(s)\|_{V'}^2\,ds\right)^{1/2} + \left(\sum_{k\ge1}\int_0^t\|g_{k,\alpha}(s)\|_H^2\,ds\right)^{1/2}\right), \qquad (5.5.46)$$

where $C>0$ depends only on $T$ and the numbers $\delta$, $C_1$, and $C_2$ from (5.5.10) and (5.5.11).

Proof To simplify the presentation, assume that $f = g_k = 0$. For fixed $\gamma\in\mathcal{J}^2$, denote by $u(t;\varphi;\gamma)$ the Wiener Chaos solution of Eq. (5.5.9) with initial condition $u(0;\varphi;\gamma) = \varphi\xi_\gamma$. The structure of the propagator implies the following relation:

$$\frac{u_{\alpha+\gamma}(t;\varphi;\gamma)}{\sqrt{(\alpha+\gamma)!}} = \frac{u_\alpha\big(t;\varphi/\sqrt{\gamma!};(0)\big)}{\sqrt{\alpha!}}. \qquad (5.5.47)$$

Clearly, $u_\alpha(t;\varphi;\gamma) = 0$ if $|\alpha|<|\gamma|$. By definition,

$$\|v(t)\|^2_{(\mathcal{S})_{-1,0}(H)} = \sum_{\alpha\in\mathcal{J}^2}\frac{\|v_\alpha(t)\|_H^2}{\alpha!},$$

and then, by linearity and the triangle inequality,

$$\|u(t)\|_{(\mathcal{S})_{-1,0}(H)} \le \sum_{\gamma\in\mathcal{J}^2}\|u(t;u_{0,\gamma};\gamma)\|_{(\mathcal{S})_{-1,0}(H)}.$$

We also have, by (5.5.47) and Theorem 5.5.7,

$$\|u(t;u_{0,\gamma};\gamma)\|^2_{(\mathcal{S})_{-1,0}(H)} \le \left\|u\big(t;u_{0,\gamma}/\sqrt{\gamma!};(0)\big)\right\|^2_{(\mathcal{S})_{-1,0}(H)} \le \mathbb{E}\left\|u\big(t;u_{0,\gamma}/\sqrt{\gamma!};(0)\big)\right\|_H^2 \le e^{C_2t}\,\frac{\|u_{0,\gamma}\|_H^2}{\gamma!}.$$

Inequality (5.5.46) then follows. Theorem 5.5.13 is proved.

Remark 5.5.14 Using Proposition 5.4.11 and the Cauchy–Schwarz inequality, (5.5.46) can be re-written in a slightly weaker form to reveal continuity of the


solution operator for Eq. (5.5.9) from $(\mathcal{S})_{-1,-q}$ to $(\mathcal{S})_{-1,0}$ for every $q>1$:

$$\|u(t)\|^2_{(\mathcal{S})_{-1,0}(H)} \le C\left(\|u_0\|^2_{(\mathcal{S})_{-1,-q}(H)} + \int_0^t\|f(s)\|^2_{(\mathcal{S})_{-1,-q}(V')}\,ds + \sum_{k\ge1}\int_0^t\|g_k(s)\|^2_{(\mathcal{S})_{-1,-q}(H)}\,ds\right).$$
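The closed-form solution of the propagator recursion in Example 5.5.12 above is easy to verify symbolically. The sketch below (SymPy; the truncation at $n<6$ is an arbitrary choice) checks that $u_{(n)}(t) = \sum_{k=0}^n\frac{\sqrt{n!}}{\sqrt{(n-k)!}\,k!}a_{(n-k)}t^k$ satisfies $u_{(n)}(t) = a_{(n)} + \sqrt n\int_0^tu_{(n-1)}(s)\,ds$:

```python
import sympy as sp

t, s = sp.symbols('t s', nonnegative=True)
N = 6
a = sp.symbols('a0:6', positive=True)   # coefficients a_(0), ..., a_(5)

def u_closed(n, var):
    # Closed form of u_(n) from Example 5.5.12, as a polynomial in `var`.
    return sum(sp.sqrt(sp.factorial(n)) /
               (sp.sqrt(sp.factorial(n - k)) * sp.factorial(k))
               * a[n - k] * var**k for k in range(n + 1))

# Check the propagator recursion u_(n)(t) = a_(n) + sqrt(n) int_0^t u_(n-1)(s) ds.
checks = []
for n in range(1, N):
    lhs = u_closed(n, t)
    rhs = a[n] + sp.sqrt(n) * sp.integrate(u_closed(n - 1, s), (s, 0, t))
    checks.append(sp.simplify(lhs - rhs) == 0)
```

Every difference simplifies to zero, confirming the closed form used in the lower bound (5.5.45).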

5.5.3 Probabilistic Representation

The general discussion so far has been dealing with the abstract evolution equation

$$du = (Au+f)\,dt + \sum_{k\ge1}(M_ku+g_k)\,dw_k.$$

By further specifying the operators $A$ and $M_k$, as well as the input data $u_0$, $f$, and $g_k$, it is possible to get additional information about the Wiener Chaos solution of the equation.

Definition 5.5.15 For $r\in\mathbb{R}$, the space $L_{2,(r)} = L_{2,(r)}(\mathbb{R}^d)$ is the collection of real-valued measurable functions so that $f\in L_{2,(r)}$ if and only if $\int_{\mathbb{R}^d}|f(x)|^2(1+|x|^2)^r\,dx<\infty$. The space $H^1_{2,(r)} = H^1_{2,(r)}(\mathbb{R}^d)$ is the collection of real-valued measurable functions so that $f\in H^1_{2,(r)}$ if and only if $f$ and all the first-order generalized derivatives $D_if$ of $f$ belong to $L_{2,(r)}$.

It is known, for example, from Theorem 3.4.7 in , that $L_{2,(r)}$ is a Hilbert space with norm

$$\|f\|^2_{0,(r)} = \int_{\mathbb{R}^d}|f(x)|^2(1+|x|^2)^r\,dx,$$

and $H^1_{2,(r)}$ is a Hilbert space with norm

$$\|f\|_{1,(r)} = \|f\|_{0,(r)} + \sum_{i=1}^d\|D_if\|_{0,(r)}.$$

Denote by $H^{-1}_{2,(r)}$ the dual of $H^1_{2,(r)}$ with respect to the inner product in $L_{2,(r)}$. Then $\big(H^1_{2,(r)},L_{2,(r)},H^{-1}_{2,(r)}\big)$ is a normal triple of Hilbert spaces.

Let $\mathbb{F} = (\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},\mathbb{P})$ be a stochastic basis with the usual assumptions and $w_k = w_k(t)$, $k\ge1$, $t\ge0$, a collection of standard Wiener processes on $\mathbb{F}$. Using the summation convention, consider the linear equation

$$du = \big(a_{ij}D_iD_ju + b_iD_iu + cu + f\big)\,dt + \big(\sigma_{ik}D_iu + \nu_ku + g_k\big)\,dw_k \qquad (5.5.48)$$


under the following assumptions:

B0 All coefficients, free terms, and the initial condition are non-random.
B1 The functions $a_{ij} = a_{ij}(t,x)$ and their first-order derivatives with respect to $x$ are uniformly bounded in $(t,x)$, and the matrix $(a_{ij})$ is uniformly positive definite, that is, there exists a $\delta>0$ so that, for all vectors $y\in\mathbb{R}^d$ and all $(t,x)$, $a_{ij}y_iy_j\ge\delta|y|^2$.
B2 The functions $b_i = b_i(t,x)$, $c = c(t,x)$, and $\nu_k = \nu_k(t,x)$ are measurable and bounded in $(t,x)$.
B3 The functions $\sigma_{ik} = \sigma_{ik}(t,x)$ are continuous and bounded in $(t,x)$.
B4 The functions $f = f(t,x)$ and $g_k = g_k(t,x)$ belong to $L_2\big((0,T);L_{2,(r)}\big)$ for some $r\in\mathbb{R}$.
B5 The initial condition $u_0 = u_0(x)$ belongs to $L_{2,(r)}$.

Under Assumptions B2–B4, there exists a sequence $Q = \{q_k,\ k\ge1\}$ of positive numbers with the following properties:

P1 The matrix $\mathcal{A}$ with $\mathcal{A}_{ij} = a_{ij} - (1/2)\sum_{k\ge1}q_k^2\sigma_{ik}\sigma_{jk}$ satisfies $\mathcal{A}_{ij}(t,x)y_iy_j\ge0$, $x,y\in\mathbb{R}^d$, $0\le t\le T$.
P2 There exists a number $C>0$ so that

$$\sup_{t,x}\sum_{k\ge1}|q_k\nu_k(t,x)|^2 + \int_0^T\Big(\sum_{k\ge1}\|q_kg_k\|^2_{0,(r)}(t)\Big)^{1/2}\,dt \le C.$$

For the matrix $\mathcal{A}$ and each $t,x$, we have $\mathcal{A}_{ij}(t,x) = \tilde\sigma_{ik}(t,x)\tilde\sigma_{jk}(t,x)$, where the functions $\tilde\sigma_{ik}$ are bounded. This representation might not be unique; see, for example, [54, Theorem III.2.2] or [214, Lemma 5.2.1]. Given any such representation of $\mathcal{A}$, consider the following backward Itô equation:

$$X_{t,x,i}(s) = x_i + \int_s^tB_i\big(\tau,X_{t,x}(\tau)\big)\,d\tau + \int_s^t\sum_{k\ge1}q_k\sigma_{ik}\big(\tau,X_{t,x}(\tau)\big)\,dw_k(\tau) + \sqrt2\int_s^t\tilde\sigma_{ik}\big(\tau,X_{t,x}(\tau)\big)\,d\tilde w_k(\tau),\quad s\in(0,t),\ t\in(0,T],\ t\ \text{fixed}, \qquad (5.5.49)$$

where

$$B_i = b_i - \sum_{k\ge1}q_k^2\sigma_{ik}\nu_k$$

and $\tilde w_k$, $k\ge1$, are independent standard Wiener processes on $\mathbb{F}$ that are independent of $w_k$, $k\ge1$. This equation might not have a strong solution, but it does have weak, or martingale, solutions due to Assumptions B1–B3 and properties P1 and P2 of


the sequence $Q$; this weak solution is unique in the sense of probability law [214, Theorem 7.2.1].

The following result is a variation of Theorem 4.1 in .

Theorem 5.5.16 Under assumptions B0–B5, Eq. (5.5.48) has a unique $w\big(H^1_{2,(r)},H^{-1}_{2,(r)}\big)$ Wiener Chaos solution. If $Q$ is a sequence with properties P1 and P2, then the solution of (5.5.48) belongs to

$$L_{2,Q}\big(\mathbb{W};L_2((0,T);H^1_{2,(r)})\big)\cap L_{2,Q}\big(\mathbb{W};C((0,T);L_{2,(r)})\big)$$

and has the following representation:

$$u(t,x) = Q^{-1}\,\mathbb{E}\left(\int_0^tf\big(s,X_{t,x}(s)\big)\gamma(t,s,x)\,ds + \sum_{k\ge1}\int_0^tq_kg_k\big(s,X_{t,x}(s)\big)\gamma(t,s,x)\,dw_k(s) + u_0\big(X_{t,x}(0)\big)\gamma(t,0,x)\,\bigg|\,\mathcal{F}_t^W\right),\quad t\le T, \qquad (5.5.50)$$

where $X_{t,x}(s)$ is a weak solution of (5.5.49), and

$$\gamma(t,s,x) = \exp\left(\int_s^tc\big(\tau,X_{t,x}(\tau)\big)\,d\tau + \sum_{k\ge1}\int_s^tq_k\nu_k\big(\tau,X_{t,x}(\tau)\big)\,dw_k(\tau) - \frac12\int_s^t\sum_{k\ge1}q_k^2\big|\nu_k\big(\tau,X_{t,x}(\tau)\big)\big|^2\,d\tau\right). \qquad (5.5.51)$$

Proof It is enough to establish (5.5.50) when $t = T$. Keeping in mind the summation convention, consider the equation

$$dU = \big(a_{ij}D_iD_jU + b_iD_iU + cU + f\big)\,dt + \sum_{k\ge1}\big(\sigma_{ik}D_iU + \nu_kU + g_k\big)q_k\,dw_k \qquad (5.5.52)$$

with initial condition $U(0,x) = u_0(x)$. Applying Theorem 4.4.3 in the normal triple $\big(H^1_{2,(r)},L_{2,(r)},H^{-1}_{2,(r)}\big)$, we conclude that there is a unique solution

$$U\in L_2\big(\mathbb{W};L_2((0,T);H^1_{2,(r)})\big)\cap L_2\big(\mathbb{W};C((0,T);L_{2,(r)})\big)$$

of this equation. By Proposition 5.4.13, the process $u = Q^{-1}U$ is the corresponding Wiener Chaos solution of (5.5.48). To establish representation (5.5.50), consider the S-transform $U^h$ of $U$. According to Theorem 5.5.1, the function $U^h$ is the unique


$w\big(H^1_{2,(r)},H^{-1}_{2,(r)}\big)$ solution of the equation

$$dU^h = \big(a_{ij}D_iD_jU^h + b_iD_iU^h + cU^h + f\big)\,dt + \sum_{k\ge1}\big(\sigma_{ik}D_iU^h + \nu_kU^h + g_k\big)q_kh_k\,dt \qquad (5.5.53)$$

with initial condition $U^h|_{t=0} = u_0$. We also define

$$Y(T,x) = \int_0^Tf\big(s,X_{T,x}(s)\big)\gamma(T,s,x)\,ds + \sum_{k\ge1}\int_0^Tg_k\big(s,X_{T,x}(s)\big)\gamma(T,s,x)\,q_k\,dw_k(s) + u_0\big(X_{T,x}(0)\big)\gamma(T,0,x). \qquad (5.5.54)$$

By direct computation,

$$\mathbb{E}\Big(\mathbb{E}\big(\mathcal{E}(h)Y(T,x)\,\big|\,\mathcal{F}_T^W\big)\Big) = \mathbb{E}\big(\mathcal{E}(h)Y(T,x)\big) = \mathbb{E}^0\,Y(T,x),$$

where $\mathbb{E}^0$ is the expectation with respect to the measure $d\mathbb{P}_T^0 = \mathcal{E}(h)\,d\mathbb{P}_T$ and $\mathbb{P}_T$ is the restriction of $\mathbb{P}$ to $\mathcal{F}_T^W$.

To proceed, let us first assume that the input data $u_0$, $f$, and $g_k$ are all smooth functions with compact support. Then, applying the Feynman–Kac formula to the solution of equation (5.5.53) and using the Girsanov theorem (see, for example, Theorems 3.5.1 and 5.7.6 in ), we conclude that $U^h(T,x) = \mathbb{E}^0\,Y(T,x)$, or

$$\mathbb{E}\Big(\mathcal{E}(h)\,\mathbb{E}\big(Y(T,x)\,\big|\,\mathcal{F}_T^W\big)\Big) = \mathbb{E}\big(\mathcal{E}(h)\,U(T,x)\big).$$

By Remark 5.4.17, the last equality implies $U(T,\cdot) = \mathbb{E}\big(Y(T,\cdot)\,\big|\,\mathcal{F}_T^W\big)$ as elements of $L_2\big(\Omega;L_{2,(r)}(\mathbb{R}^d)\big)$.

To remove the additional smoothness assumption on the input data, let $u_0^n$, $f^n$, and $g_k^n$ be sequences of smooth compactly supported functions so that

$$\lim_{n\to\infty}\left(\|u_0-u_0^n\|^2_{L_{2,(r)}(\mathbb{R}^d)} + \int_0^T\|f-f^n\|^2_{L_{2,(r)}(\mathbb{R}^d)}(t)\,dt + \sum_{k\ge1}\int_0^Tq_k^2\|g_k-g_k^n\|^2_{L_{2,(r)}(\mathbb{R}^d)}(t)\,dt\right) = 0. \qquad (5.5.55)$$

Denote by $U^n$ and $Y^n$ the corresponding objects defined by (5.5.52) and (5.5.54) respectively. By Theorem 5.5.7, we have

$$\lim_{n\to\infty}\mathbb{E}\|U-U^n\|^2_{L_{2,(r)}(\mathbb{R}^d)}(T) = 0. \qquad (5.5.56)$$


To complete the proof, it remains to show that

$$\lim_{n\to\infty}\mathbb{E}\left\|\mathbb{E}\big(Y(T,\cdot)-Y^n(T,\cdot)\,\big|\,\mathcal{F}_T^W\big)\right\|^2_{L_{2,(r)}(\mathbb{R}^d)} = 0. \qquad (5.5.57)$$

To this end, introduce a new probability measure $\mathbb{P}_T''$ by $d\mathbb{P}_T'' = \tilde\gamma(T,s,x)\,d\mathbb{P}_T$, where

$$\tilde\gamma(T,s,x) = \exp\left(2\sum_{k\ge1}\int_s^T\nu_k\big(\tau,X_{T,x}(\tau)\big)q_k\,dw_k(\tau) - 2\int_s^T\sum_{k\ge1}q_k^2\big|\nu_k\big(\tau,X_{T,x}(\tau)\big)\big|^2\,d\tau\right).$$

By Girsanov's theorem, Eq. (5.5.49) can be rewritten as

$$X_{T,x,i}(s) = x_i + \int_s^T\Big(b_i + \sum_{k\ge1}q_k^2\sigma_{ik}\nu_k\Big)\big(\tau,X_{T,x}(\tau)\big)\,d\tau + \int_s^T\sum_{k\ge1}\sigma_{ik}\big(\tau,X_{T,x}(\tau)\big)h_k(\tau)\,q_k\,d\tau + \int_s^T\sum_{k\ge1}q_k\sigma_{ik}\big(\tau,X_{T,x}(\tau)\big)\,dw_k''(\tau) + \sqrt2\int_s^T\tilde\sigma_{ik}\big(\tau,X_{T,x}(\tau)\big)\,d\tilde w_k''(\tau), \qquad (5.5.58)$$

where $w_k''$ and $\tilde w_k''$ are independent Wiener processes with respect to the measure $\mathbb{P}_T''$. Denote by $p(s,y|x)$ the probability density function of $X_{T,x}(s)$ and write $\ell(x) = (1+|x|^2)^r$. It then follows by the Hölder and Jensen inequalities that

$$\mathbb{E}\left\|\mathbb{E}\Big(\int_0^T\gamma(T,s,\cdot)\,(f-f^n)\big(s,X_{T,\cdot}(s)\big)\,ds\,\Big|\,\mathcal{F}_T^W\Big)\right\|^2_{L_{2,(r)}(\mathbb{R}^d)} \le K_1\int_{\mathbb{R}^d}\int_0^T\mathbb{E}\Big(\gamma^2(T,s,x)\,(f-f^n)^2\big(s,X_{T,x}(s)\big)\Big)\,ds\,\ell(x)\,dx \le K_2\int_{\mathbb{R}^d}\int_0^T\mathbb{E}''\Big((f-f^n)^2\big(s,X_{T,x}(s)\big)\Big)\,ds\,\ell(x)\,dx = K_2\int_{\mathbb{R}^d}\int_0^T\int_{\mathbb{R}^d}\big(f(s,y)-f^n(s,y)\big)^2\,p(s,y|x)\,dy\,ds\,\ell(x)\,dx, \qquad (5.5.59)$$


where the number $K_1$ depends only on $T$, and the number $K_2$ depends only on $T$ and $\sup_{(t,x)}|c(t,x)| + \sum_{k\ge1}q_k^2\sup_{(t,x)}|\nu_k(t,x)|^2$. Assumptions B0–B3 imply that there exist positive numbers $K_3$ and $K_4$ so that

$$p(s,y|x) \le \frac{K_3}{(T-s)^{d/2}}\exp\left(-K_4\frac{|x-y|^2}{T-s}\right); \qquad (5.5.60)$$

see, for example, . As a result,

$$\int_{\mathbb{R}^d}p(s,y|x)\,\ell(x)\,dx \le K_5\,\ell(y),$$

and

$$\int_{\mathbb{R}^d}\int_0^T\int_{\mathbb{R}^d}\big(f(s,y)-f^n(s,y)\big)^2\,p(s,y|x)\,dy\,ds\,\ell(x)\,dx \le K_5\int_0^T\|f-f^n\|^2_{L_{2,(r)}(\mathbb{R}^d)}(s)\,ds \to 0,\quad n\to\infty, \qquad (5.5.61)$$

where the number $K_5$ depends only on $K_3$, $K_4$, $T$, and $r$. Calculations similar to (5.5.59)–(5.5.61) show that

$$\mathbb{E}\left\|\mathbb{E}\Big(\gamma(T,0,\cdot)\big(u_0-u_0^n\big)\big(X_{T,\cdot}(0)\big)\,\Big|\,\mathcal{F}_T^W\Big)\right\|^2_{L_{2,(r)}(\mathbb{R}^d)} + \mathbb{E}\left\|\mathbb{E}\Big(\sum_{k\ge1}\int_0^T\big(g_k-g_k^n\big)\big(s,X_{T,\cdot}(s)\big)\gamma(T,s,\cdot)\,q_k\,dw_k(s)\,\Big|\,\mathcal{F}_T^W\Big)\right\|^2_{L_{2,(r)}(\mathbb{R}^d)} \to 0 \qquad (5.5.62)$$

as $n\to\infty$. Then convergence (5.5.57) follows, which, together with (5.5.56), implies that $U(T,\cdot) = \mathbb{E}\big(Y(T,\cdot)\,\big|\,\mathcal{F}_T^W\big)$ as elements of $L_2\big(\Omega;L_{2,(r)}(\mathbb{R}^d)\big)$. It remains to note that $u = Q^{-1}U$. Theorem 5.5.16 is proved.

Given $f\in L_{2,(r)}$, we say that $f\ge0$ if and only if

$$\int_{\mathbb{R}^d}f(x)\varphi(x)\,dx \ge 0$$

for every non-negative $\varphi\in C_0^\infty(\mathbb{R}^d)$. Then Theorem 5.5.16 implies the following result.

Corollary 5.5.17 In addition to Assumptions B0–B5, let $u_0\ge0$, $f\ge0$, and $g_k = 0$ for all $k\ge1$. Then $u\ge0$.


Proof This follows from (5.5.50) and Proposition 5.4.20. $\square$

Example 5.5.18 (Krylov–Veretennikov Formula) Consider the equation

\[
du=\bigl(a_{ij}D_iD_ju+b_iD_iu\bigr)\,dt+\sum_{k=1}^{d}\sigma_{ik}D_iu\,dw_k,\qquad u(0,x)=u_0(x).
\tag{5.5.63}
\]

Assume B0–B5 and suppose that $a_{ij}(t,x)=\frac12\,\sigma_{ik}(t,x)\sigma_{jk}(t,x)$. By Theorem 5.5.7, Eq. (5.5.63) has a unique Wiener chaos solution so that $\mathbb{E}\|u\|^2_{L_2(\mathbb{R}^d)}(t)\le C\|u_0\|^2_{L_2(\mathbb{R}^d)}$ and

\[
u(t,x)=\sum_{n=0}^{\infty}\sum_{|\alpha|=n}u_\alpha(t,x)\,\xi_\alpha
=u_0(x)+\sum_{n=1}^{\infty}\sum_{k_1,\dots,k_n=1}^{d}\int_0^t\!\int_0^{s_n}\!\!\cdots\!\int_0^{s_2}
\Phi_{t,s_n}\sigma_{jk_n}D_j\cdots\Phi_{s_2,s_1}\sigma_{ik_1}D_i\,\Phi_{s_1,0}\,u_0(x)\,dw_{k_1}(s_1)\cdots dw_{k_n}(s_n),
\tag{5.5.64}
\]

where $\Phi_{t,s}$ is the semigroup generated by the operator $\mathcal{A}=a_{ij}D_iD_j+b_iD_i$. On the other hand, in this case, Theorem 5.5.16 yields

\[
u(t,x)=\mathbb{E}\bigl(u_0(X_{t,x}(0))\,\big|\,\mathcal{F}_t^W\bigr),
\]

where $W=(w_1,\dots,w_d)$ and

\[
X_{t,x,i}(s)=x_i+\int_s^t b_i(\tau,X_{t,x}(\tau))\,d\tau+\sum_{k=1}^{d}\int_s^t\sigma_{ik}(\tau,X_{t,x}(\tau))\,dw_k(\tau),
\tag{5.5.65}
\]

$s\in(0,t)$, $t\in(0,T]$, $t$ fixed. Thus, we have arrived at the Krylov–Veretennikov formula [123, Theorem 4]:

\[
\mathbb{E}\bigl(u_0(X_{t,x}(0))\,\big|\,\mathcal{F}_t^W\bigr)
=u_0(x)+\sum_{n=1}^{\infty}\sum_{k_1,\dots,k_n=1}^{d}\int_0^t\!\int_0^{s_n}\!\!\cdots\!\int_0^{s_2}
\Phi_{t,s_n}\sigma_{jk_n}D_j\cdots\Phi_{s_2,s_1}\sigma_{ik_1}D_i\,\Phi_{s_1,0}\,u_0(x)\,dw_{k_1}(s_1)\cdots dw_{k_n}(s_n).
\tag{5.5.66}
\]


5.6 Examples

5.6.1 Wiener Chaos and Nonlinear Filtering

In this section, we discuss some applications of the Wiener Chaos expansion to the numerical solution of the nonlinear filtering problem for diffusion processes; the presentation essentially follows the original publications on the subject. Let $(\Omega,\mathcal{F},\mathbb{P})$ be a complete probability space with independent standard Wiener processes $W=W(t)$ and $V=V(t)$ of dimensions $d_1$ and $r$, respectively. Let $X_0$ be a random variable independent of $W$ and $V$. In the diffusion filtering model, the unobserved $d$-dimensional state (or signal) process $X=X(t)$ and the $r$-dimensional observation process $Y=Y(t)$ are defined by the stochastic ordinary differential equations

\[
\begin{aligned}
dX(t)&=b(X(t))\,dt+\sigma(X(t))\,dW(t)+\rho(X(t))\,dV(t),\\
dY(t)&=h(X(t))\,dt+dV(t),\quad 0<t\le T;\qquad X(0)=X_0,\ Y(0)=0,
\end{aligned}
\tag{5.6.1}
\]

where $b(x)\in\mathbb{R}^d$, $\sigma(x)\in\mathbb{R}^{d\times d_1}$, $\rho(x)\in\mathbb{R}^{d\times r}$, $h(x)\in\mathbb{R}^r$. Denote by $\mathcal{C}^n(\mathbb{R}^d)$ the Banach space of bounded, $n$ times continuously differentiable functions on $\mathbb{R}^d$ with finite norm

\[
\|f\|_{\mathcal{C}^n(\mathbb{R}^d)}=\sup_{x\in\mathbb{R}^d}|f(x)|+\max_{1\le k\le n}\sup_{x\in\mathbb{R}^d}|D^kf(x)|.
\]

Assumption R1 The components of the functions $\sigma$ and $\rho$ are in $\mathcal{C}^2(\mathbb{R}^d)$, the components of the function $b$ are in $\mathcal{C}^1(\mathbb{R}^d)$, the components of the function $h$ are bounded and measurable, and the random variable $X_0$ has a density $u_0$.

Assumption R2 The matrix $\sigma\sigma^{*}$ is uniformly positive definite: there exists an $\varepsilon>0$ so that

\[
\sum_{i,j=1}^{d}\sum_{k=1}^{d_1}\sigma_{ik}(x)\sigma_{jk}(x)\,y_iy_j\ge\varepsilon|y|^2,\quad x,y\in\mathbb{R}^d.
\]

Under Assumption R1, system (5.6.1) has a unique strong solution [103, Theorems 5.2.5 and 5.2.9]. The extra smoothness of the coefficients in Assumption R1 ensures the existence of a convenient representation of the optimal filter. If $f=f(x)$ is a scalar measurable function on $\mathbb{R}^d$ and $\sup_{0\le t\le T}\mathbb{E}|f(X(t))|^2<\infty$, then the filtering problem for (5.6.1) is to find the best mean square estimate $\hat f_t$ of $f(X(t))$, $t\le T$, given the observations $Y(s)$, $0<s\le t$.
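A quick way to get a feel for the model (5.6.1) is to simulate a scalar instance. The sketch below uses hypothetical coefficients $b(x)=-x$, $\sigma=1$, $\rho=0.5$, $h(x)=x$ (chosen only so that R1–R2 hold; they are not from the text) and a plain Euler–Maruyama discretization.

```python
import math
import random

def simulate_filter_model(T=1.0, n=1000, x0=0.0, seed=0):
    """Euler-Maruyama sketch of the filtering model (5.6.1), scalar case d = d1 = r = 1,
    with hypothetical coefficients b(x) = -x, sigma = 1, rho = 0.5, h(x) = x."""
    rng = random.Random(seed)
    dt = T / n
    X, Y = x0, 0.0
    xs, ys = [X], [Y]
    for _ in range(n):
        dW = math.sqrt(dt) * rng.gauss(0.0, 1.0)   # state noise increment
        dV = math.sqrt(dt) * rng.gauss(0.0, 1.0)   # observation noise increment
        Y = Y + X * dt + dV                        # dY = h(X) dt + dV, using the current X
        X = X + (-X) * dt + 1.0 * dW + 0.5 * dV    # dX = b dt + sigma dW + rho dV
        xs.append(X)
        ys.append(Y)
    return xs, ys

xs, ys = simulate_filter_model()
assert len(xs) == len(ys) == 1001
```

Note that the same increment $dV$ drives both equations: the correlation between signal and observation noise (through $\rho$) is exactly what makes the general filtering problem non-trivial.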


Denote by $\mathcal{F}_t^Y$ the $\sigma$-algebra generated by $Y(s)$, $0\le s\le t$. Then the properties of the conditional expectation imply that the solution of the filtering problem is

\[
\hat f_t=\mathbb{E}\bigl(f(X(t))\,\big|\,\mathcal{F}_t^Y\bigr).
\]

To derive an alternative representation of $\hat f_t$, some additional constructions will be necessary. Define a new probability measure $\tilde{\mathbb{P}}$ on $(\Omega,\mathcal{F})$ as follows: for $A\in\mathcal{F}$,

\[
\tilde{\mathbb{P}}(A)=\int_A Z_T^{-1}\,d\mathbb{P},
\qquad\text{where}\qquad
Z_t=\exp\Bigl(\int_0^t h^{*}(X(s))\,dY(s)-\frac12\int_0^t|h(X(s))|^2\,ds\Bigr)
\]

(here and below, if $\zeta\in\mathbb{R}^k$, then $\zeta$ is a column vector, $\zeta^{*}=(\zeta_1,\dots,\zeta_k)$, and $|\zeta|^2=\zeta^{*}\zeta$). If the function $h$ is bounded, then the measures $\mathbb{P}$ and $\tilde{\mathbb{P}}$ are equivalent. The expectation with respect to the measure $\tilde{\mathbb{P}}$ will be denoted by $\tilde{\mathbb{E}}$. The following properties of the measure $\tilde{\mathbb{P}}$ are well known [101, 199]:

P1. Under the measure $\tilde{\mathbb{P}}$, the distributions of the Wiener process $W$ and of the random variable $X_0$ are unchanged, the observation process $Y$ is a standard Wiener process, and, for $0<t\le T$, the state process $X$ satisfies

\[
dX(t)=b(X(t))\,dt+\sigma(X(t))\,dW(t)+\rho(X(t))\bigl(dY(t)-h(X(t))\,dt\bigr),\quad X(0)=X_0;
\]

P2. Under the measure $\tilde{\mathbb{P}}$, the Wiener processes $W$ and $Y$ and the random variable $X_0$ are jointly independent;

P3. The optimal filter $\hat f_t$ satisfies

\[
\hat f_t=\frac{\tilde{\mathbb{E}}\bigl(f(X(t))Z_t\,|\,\mathcal{F}_t^Y\bigr)}{\tilde{\mathbb{E}}\bigl(Z_t\,|\,\mathcal{F}_t^Y\bigr)}.
\tag{5.6.2}
\]

Because of property P2 of the measure $\tilde{\mathbb{P}}$, the filtering problem will be studied on the probability space $(\Omega,\mathcal{F},\tilde{\mathbb{P}})$. In particular, we will consider the stochastic basis $\tilde{\mathbb{F}}=(\Omega,\mathcal{F},\{\mathcal{F}_t^Y\}_{0\le t\le T},\tilde{\mathbb{P}})$ and the Wiener Chaos space $\tilde L_2(Y)$ of $\mathcal{F}_T^Y$-measurable random variables $\eta$ with $\tilde{\mathbb{E}}|\eta|^2<\infty$. If the function $h$ is bounded, then, by the Cauchy–Schwarz inequality,

\[
\mathbb{E}|\eta|\le C(h,T)\sqrt{\tilde{\mathbb{E}}|\eta|^2},\quad \eta\in\tilde L_2(Y).
\tag{5.6.3}
\]


Next, consider the partial differential operators

\[
\mathcal{L}g(x)=\frac12\sum_{i,j=1}^{d}\Bigl((\sigma(x)\sigma^{*}(x))_{ij}+(\rho(x)\rho^{*}(x))_{ij}\Bigr)\frac{\partial^2g(x)}{\partial x_i\partial x_j}+\sum_{i=1}^{d}b_i(x)\frac{\partial g(x)}{\partial x_i};
\]

\[
\mathcal{M}_lg(x)=h_l(x)g(x)+\sum_{i=1}^{d}\rho_{il}(x)\frac{\partial g(x)}{\partial x_i},\quad l=1,\dots,r;
\]

and their adjoints

\[
\mathcal{L}^{*}g(x)=\frac12\sum_{i,j=1}^{d}\frac{\partial^2}{\partial x_i\partial x_j}\Bigl[\bigl((\sigma(x)\sigma^{*}(x))_{ij}+(\rho(x)\rho^{*}(x))_{ij}\bigr)g(x)\Bigr]-\sum_{i=1}^{d}\frac{\partial}{\partial x_i}\bigl(b_i(x)g(x)\bigr);
\]

\[
\mathcal{M}_l^{*}g(x)=h_l(x)g(x)-\sum_{i=1}^{d}\frac{\partial}{\partial x_i}\bigl(\rho_{il}(x)g(x)\bigr),\quad l=1,\dots,r.
\]

Note that, under Assumptions R1 and R2, the operators $\mathcal{L},\mathcal{L}^{*}$ are bounded from $H_2^1(\mathbb{R}^d)$ to $H_2^{-1}(\mathbb{R}^d)$, the operators $\mathcal{M}_l,\mathcal{M}_l^{*}$ are bounded from $H_2^1(\mathbb{R}^d)$ to $L_2(\mathbb{R}^d)$, and

\[
2\langle\mathcal{L}^{*}v,v\rangle+\sum_{l=1}^{r}\|\mathcal{M}_l^{*}v\|^2_{L_2(\mathbb{R}^d)}+\varepsilon\|v\|^2_{H_2^1(\mathbb{R}^d)}\le C\|v\|^2_{L_2(\mathbb{R}^d)},\quad v\in H_2^1(\mathbb{R}^d),
\tag{5.6.4}
\]

where $\langle\cdot,\cdot\rangle$ is the duality between $H_2^{-1}(\mathbb{R}^d)$ and $H_2^1(\mathbb{R}^d)$.

The following result is well known: see, for example, Theorem 4.4.27 on page 227 or [199, Theorem 6.2.1].

Proposition 5.6.1 In addition to Assumptions R1 and R2, suppose that the initial density $u_0$ belongs to $L_2(\mathbb{R}^d)$. Then there exists a random field $u=u(t,x)$, $t\in[0,T]$, $x\in\mathbb{R}^d$, with the following properties:

1. $u\in\tilde L_2\bigl(Y;L_2((0,T);H_2^1(\mathbb{R}^d))\bigr)\cap\tilde L_2\bigl(Y;C([0,T];L_2(\mathbb{R}^d))\bigr)$.
2. The function $u(t,x)$ is a square-integrable solution of the stochastic partial differential equation

\[
du(t,x)=\mathcal{L}^{*}u(t,x)\,dt+\sum_{l=1}^{r}\mathcal{M}_l^{*}u(t,x)\,dY_l(t),\quad 0<t\le T,\ x\in\mathbb{R}^d;\qquad u(0,x)=u_0(x).
\tag{5.6.5}
\]


3. The equality

\[
\tilde{\mathbb{E}}\bigl(f(X(t))Z_t\,\big|\,\mathcal{F}_t^Y\bigr)=\int_{\mathbb{R}^d}f(x)\,u(t,x)\,dx
\tag{5.6.6}
\]

holds for all bounded measurable functions $f$.

The random field $u=u(t,x)$ is called the unnormalized filtering density (UFD), and the random variable $\phi_t[f]=\tilde{\mathbb{E}}\bigl(f(X(t))Z_t\,|\,\mathcal{F}_t^Y\bigr)$ the unnormalized optimal filter.

A number of authors have studied the nonlinear filtering problem using the multiple Itô integral version of the Wiener chaos [17, 124, 180, 228, etc.]. In what follows, we construct approximations of $u$ and $\phi_t[f]$ using the Cameron–Martin version. By Theorem 5.5.6,

\[
u(t,x)=\sum_{\alpha\in\mathcal{J}^2}u_\alpha(t,x)\,\xi_\alpha,
\tag{5.6.7}
\]

where

\[
\xi_\alpha=\frac{1}{\sqrt{\alpha!}}\prod_{i,k}H_{\alpha_{i,k}}(\xi_{i,k}),\qquad
\xi_{i,k}=\int_0^T m_i(t)\,dY_k(t),\quad k=1,\dots,r;
\tag{5.6.8}
\]

as before, $H_n(\cdot)$ is the Hermite polynomial (2.3.37) and $\{m_i,\ i\ge1\}\subset L_\infty((0,T))$ is an orthonormal basis in $L_2((0,T))$. The functions $u_\alpha$ satisfy the corresponding propagator

\[
\frac{\partial}{\partial t}u_\alpha(t,x)=\mathcal{L}^{*}u_\alpha(t,x)+\sum_{i,k}\sqrt{\alpha_{i,k}}\,\mathcal{M}_k^{*}u_{\alpha-(i,k)}(t,x)\,m_i(t),\quad 0<t\le T,\ x\in\mathbb{R}^d;
\qquad u_\alpha(0,x)=u_0(x)\,1_{|\alpha|=0}.
\tag{5.6.9}
\]

Writing

\[
f_\alpha(t)=\int_{\mathbb{R}^d}f(x)\,u_\alpha(t,x)\,dx,
\]

we also get a Wiener chaos expansion for the unnormalized optimal filter:

\[
\phi_t[f]=\sum_{\alpha\in\mathcal{J}^2}f_\alpha(t)\,\xi_\alpha,\quad t\in[0,T].
\tag{5.6.10}
\]
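The basis (5.6.8) is built from normalized Hermite polynomials of independent standard Gaussian variables. In the simplest single-variable case, the orthonormality $\mathbb{E}\,\xi_m\xi_n=\delta_{mn}$ can be checked with Gauss–Hermite quadrature; the sketch below uses NumPy's probabilists' Hermite module, whose normalization matches the one used here.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def xi(n, x):
    """Normalized probabilists' Hermite polynomial xi_n = He_n / sqrt(n!),
    orthonormal under the standard Gaussian measure."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return He.hermeval(x, c) / math.sqrt(math.factorial(n))

nodes, weights = He.hermegauss(40)              # quadrature for the weight e^{-x^2/2}
weights = weights / math.sqrt(2.0 * math.pi)    # renormalize: sums become N(0,1) expectations

gram = np.array([[float(np.sum(weights * xi(m, nodes) * xi(n, nodes)))
                  for n in range(5)] for m in range(5)])
assert np.allclose(gram, np.eye(5), atol=1e-10)
```

With 40 quadrature nodes the rule is exact for polynomials up to degree 79, far more than the degree-8 products appearing in the Gram matrix.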


For a positive integer $N$, define

\[
u_N(t,x)=\sum_{|\alpha|\le N}u_\alpha(t,x)\,\xi_\alpha.
\tag{5.6.11}
\]

Theorem 5.6.2 Under Assumptions R1 and R2, there exists a positive number $\nu$, depending only on the functions $h$ and $\rho$, so that

\[
\tilde{\mathbb{E}}\|u-u_N\|^2_{L_2(\mathbb{R}^d)}(t)\le\frac{\|u_0\|^2_{L_2(\mathbb{R}^d)}}{(1+\nu)^N},\quad t\in[0,T].
\tag{5.6.12}
\]

If, in addition, $\rho=0$, then there exists a real number $C$, depending only on the functions $b$ and $\sigma$, so that

\[
\tilde{\mathbb{E}}\|u-u_N\|^2_{L_2(\mathbb{R}^d)}(t)\le\frac{(4h_1t)^{N+1}}{(N+1)!}\,e^{Ct}\,\|u_0\|^2_{L_2(\mathbb{R}^d)},\quad t\in[0,T],
\tag{5.6.13}
\]

where $h_1=\max_{k=1,\dots,r}\sup_x|h_k(x)|$.

For positive integers $N$, $n$, define the set of multi-indices

\[
\mathcal{J}_N^n=\bigl\{\alpha=(\alpha_{i,k},\ k=1,\dots,r,\ i=1,\dots,n):|\alpha|\le N\bigr\},
\]

and let

\[
u_N^n(t,x)=\sum_{\alpha\in\mathcal{J}_N^n}u_\alpha(t,x)\,\xi_\alpha.
\tag{5.6.14}
\]
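The factorial decay in the bound (5.6.13) kicks in once $N+1$ exceeds $4h_1t$; a two-line numeric illustration (with the hypothetical values $h_1=t=1$):

```python
import math

def bound_factor(N, h1=1.0, t=1.0):
    """The N-dependent factor (4 h1 t)**(N+1) / (N+1)! in the bound (5.6.13);
    h1 = t = 1 are illustration values, not from the text."""
    return (4.0 * h1 * t) ** (N + 1) / math.factorial(N + 1)

vals = [bound_factor(N) for N in range(20)]
# the factor grows while N + 1 < 4*h1*t and then decays super-exponentially
assert all(vals[N + 1] < vals[N] for N in range(4, 19))
assert vals[19] < 1e-6
```

So, for fixed $t$, a moderate truncation order $N$ already gives a very small right-hand side in (5.6.13), though the constant grows with the time horizon through $e^{Ct}$.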

Unlike Theorem 5.6.2, to control the approximation error in this case we need to choose special basis functions $m_k$, so that the error of the Fourier approximation in time can be analyzed. We also need extra regularity of the coefficients in the state and observation equations, to make the semigroup generated by the operator $\mathcal{L}^{*}$ continuous not only in $L_2(\mathbb{R}^d)$ but also in $H_2^2(\mathbb{R}^d)$. The resulting error bound is presented below.

Theorem 5.6.3 Assume that

1. The basis $m$ is the Fourier cosine basis

\[
m_1(t)=\frac{1}{\sqrt{T}};\qquad m_k(t)=\sqrt{\frac{2}{T}}\,\cos\Bigl(\frac{\pi(k-1)t}{T}\Bigr),\ k>1;\quad 0\le t\le T;
\tag{5.6.15}
\]

2. The components of the function $\sigma$ are in $\mathcal{C}^4(\mathbb{R}^d)$, the components of the function $b$ are in $\mathcal{C}^3(\mathbb{R}^d)$, the components of the function $h$ are in $\mathcal{C}^2(\mathbb{R}^d)$; $\rho=0$; $u_0\in H_2^2(\mathbb{R}^d)$.

Then there exist a positive number $B_1$ and a real number $B_2$, both depending only on the functions $b$ and $\sigma$, so that

\[
\tilde{\mathbb{E}}\|u-u_N^n\|^2_{L_2(\mathbb{R}^d)}(T)\le B_1e^{B_2T}\Bigl(\frac{(4h_1T)^{N+1}}{(N+1)!}\,\|u_0\|^2_{L_2(\mathbb{R}^d)}+\frac{T^3}{n^2}\,\|u_0\|^2_{H_2^2(\mathbb{R}^d)}\Bigr),
\tag{5.6.16}
\]

where $h_1=\max_{k=1,\dots,r}\sup_x|h_k(x)|$.
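The special basis (5.6.15) is the standard Fourier cosine system, and its orthonormality on $(0,T)$ is easy to verify numerically; a sketch with $T=2$ (an arbitrary choice) and a composite trapezoidal rule:

```python
import numpy as np

T = 2.0
t = np.linspace(0.0, T, 100001)
dt = t[1] - t[0]

def m(k):
    """Fourier cosine basis (5.6.15): m_1 = 1/sqrt(T), m_k = sqrt(2/T) cos(pi (k-1) t / T)."""
    if k == 1:
        return np.full_like(t, 1.0 / np.sqrt(T))
    return np.sqrt(2.0 / T) * np.cos(np.pi * (k - 1) * t / T)

def inner(f, g):
    # composite trapezoidal rule for int_0^T f(t) g(t) dt
    h = f * g
    return dt * (np.sum(h) - 0.5 * (h[0] + h[-1]))

gram = np.array([[inner(m(i), m(j)) for j in range(1, 6)] for i in range(1, 6)])
assert np.allclose(gram, np.eye(5), atol=1e-6)
```

The cosine system (rather than, say, a sine or complex exponential basis) is what makes the $T^3/n^2$ term in (5.6.16) achievable, since its partial sums approximate smooth functions on $[0,T]$ without boundary artifacts.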

5.6.2 Passive Scalar in a Gaussian Field

This section presents results from the literature about the stochastic transport equation. The following viscous transport equation describes the time evolution of a scalar quantity $\theta$ in a given velocity field $v$:

\[
\dot\theta(t,x)=\nu\Delta\theta(t,x)-v(t,x)\cdot\nabla\theta(t,x)+f(t,x);\quad x\in\mathbb{R}^d,\ d>1.
\tag{5.6.17}
\]

The scalar $\theta$ is called passive because it does not affect the velocity field $v$. We assume that $v=v(t,x)\in\mathbb{R}^d$, with components $v^1,\dots,v^d$, is an isotropic Gaussian vector field with zero mean and covariance

\[
\mathbb{E}\bigl(v^i(t,x)\,v^j(s,y)\bigr)=\delta(t-s)\,C^{ij}(x-y),
\]

where $C=(C^{ij}(x),\ i,j=1,\dots,d)$ is a matrix-valued function such that $C(0)$ is a scalar matrix; with no loss of generality we will assume that $C(0)=I$, the identity matrix. It is known from [130, Sect. 10.1] that, for an isotropic Gaussian vector field, the Fourier transform $\hat C=\hat C(y)$ of the function $C=C(x)$ is

\[
\hat C(y)=\frac{A_0}{(1+|y|^2)^{(d+\gamma)/2}}\Bigl(a\,\frac{y^{*}y}{|y|^2}+\frac{b}{d-1}\Bigl(I-\frac{y^{*}y}{|y|^2}\Bigr)\Bigr),
\tag{5.6.18}
\]

where $y$ is the row vector $(y_1,\dots,y_d)$, $y^{*}$ is the corresponding column vector, $|y|^2=yy^{*}$; $\gamma>0$, $a\ge0$, $b\ge0$, $A_0>0$ are real numbers, and $I$ is the identity matrix. Following the literature, we assume that $0<\gamma<2$. This range of values of $\gamma$ corresponds to a turbulent velocity field $v$, also known as the generalized Kraichnan model; the original Kraichnan model corresponds to $a=0$. For small $x$, the asymptotics of $C^{ij}(x)$ is $(\delta_{ij}-c_{ij}|x|^\gamma)$ [130, Sect. 10.2]. By direct computation, the vector field $v=(v^1,\dots,v^d)$ can be written as

\[
v^i(t,x)=\sum_k\sigma_k^i(x)\,\dot w_k(t),
\tag{5.6.19}
\]


where fk ; k  1g is an orthonormal basis in the space HC , the reproducing kernel Hilbert space corresponding to the kernel function C. It is known from  that HC is all or part of the Sobolev space H .dC /=2 .Rd I Rd /. If a > 0 and b > 0, then the matrix CO is invertible and

Z HC D f 2 Rd W fO  .y/CO 1 .y/fO .y/dy < 1 D H .dC /=2 .Rd I Rd /; Rd

O because kC.y/k  .1 C jyj2 /.dC /=2 . If a > 0 and b D 0, then

Z HC D f 2 Rd W j Of .y/j2 .1 C jyj2 /.dC /=2 dy < 1I yy fO .y/ D jyj2 fO .y/ ; Rd

the subset of gradient fields in H .dC /=2 .Rd I Rd /, that is, vector fields f for which O fO .y/ D yF.y/ for some scalar F 2 H .dC C2/=2 .Rd /. If a D 0 and b > 0, then

Z HC D f 2 Rd W j Of .y/j2 .1 C jyj2 /.dC /=2 dy < 1I y fO .y/ D 0 ; Rd

the subset of divergence-free fields in H .dC /=2 .Rd I Rd /. By the embedding theorems, each ki is a bounded continuous function on Rd ; in fact, every ki is Hölder continuous of order =2. In addition, being an element of the corresponding space HC , each k is a gradient field if b D 0 and is divergence free if a D 0. Equation (5.6.17) becomes d.t; x/ D . .t; x/ C f .t; x//dt 

X

k .x/  r.t; x/dwk .t/:

(5.6.20)

k

We summarize the above constructions in the following assumptions: S1 There is a fixed stochastic basis F D .˝; F ; fFt gt0 ; P/ with the usual assumptions and .wk .t/; k  1; t  0/ is a collection of independent standard Wiener processes on F. S2 For each k, the vector field k is an element of the Sobolev space .dC /=2 H2 .Rd I Rd /, 0 <  < 2, d  2. P j S3 For all x; y in Rd , k ki .x/k .y/ D Cij .x  y/ so that the matrix-valued function C D C.x/ satisfies (5.6.18) and C.0/ D I. S4 The input data 0 ; f are deterministic and satisfy 0 2 L2 .Rd /; f 2 L2 ..0; T/I H21 .Rd //I > 0 is a real number.


Theorem 5.6.4 Let $Q$ be the sequence with $q_k=q$ for all $k\ge1$, where $0<q<\sqrt{2\nu}$. Then, for every $T>0$, Eq. (5.6.20) has a unique $w(H_2^1(\mathbb{R}^d),H_2^{-1}(\mathbb{R}^d))$ Wiener Chaos solution $\theta$, and $\theta\in L_{2,Q}\bigl(\mathbb{W};L_2((0,T);H_2^1(\mathbb{R}^d))\bigr)$.

This covers every $\nu>0$. Indeed, if $2\nu>1$, then $q>1$ is an admissible choice of the weights, and, by Proposition 5.4.13(1), the solution $\theta$ has Malliavin derivatives of every order. If $2\nu\le1$, then Eq. (5.6.20) does not have a square-integrable solution. Note that if $q=\sqrt{2\nu}$, then Eq. (5.6.17) can still be analyzed using Theorem 5.5.7 in the normal triple $(H_2^1(\mathbb{R}^d),L_2(\mathbb{R}^d),H_2^{-1}(\mathbb{R}^d))$.

If $\nu=0$, Eq. (5.6.20) must be interpreted in the sense of Stratonovich:

\[
d\theta(t,x)=f(t,x)\,dt-\sigma_k(x)\cdot\nabla\theta(t,x)\circ dw_k(t).
\tag{5.6.21}
\]

To simplify the presentation, we assume that $f=0$. If (5.6.18) holds with $a=0$, then each $\sigma_k$ is divergence-free and (5.6.21) has the equivalent Itô form

\[
d\theta(t,x)=\frac12\,\Delta\theta(t,x)\,dt-\sum_{i,k}\sigma_k^i(x)\,D_i\theta(t,x)\,dw_k(t).
\tag{5.6.22}
\]

Equation (5.6.22) is a model of non-viscous turbulent transport. The propagator for (5.6.22) is

\[
\frac{\partial}{\partial t}\theta_\alpha(t,x)=\frac12\,\Delta\theta_\alpha(t,x)-\sum_{i,k}\sqrt{\alpha_{i,k}}\,\sigma_k^jD_j\theta_{\alpha-(i,k)}(t,x)\,m_i(t),\quad 0<t\le T,
\tag{5.6.23}
\]

with initial condition $\theta_\alpha(0,x)=\theta_0(x)\,1_{|\alpha|=0}$. The following result about the solvability of (5.6.22) is proved in the literature (also in a slightly weaker form).

Theorem 5.6.5 In addition to S1–S4, assume that each $\sigma_k$ is divergence-free. Then there exists a unique $w(H_2^1(\mathbb{R}^d),H_2^{-1}(\mathbb{R}^d))$ Wiener Chaos solution $\theta=\theta(t,x)$ of (5.6.22). This solution has the following properties:

(A) For every $\varphi\in C_0^\infty(\mathbb{R}^d)$ and all $t\in[0,T]$, the equality

\[
(\theta,\varphi)(t)=(\theta_0,\varphi)+\frac12\int_0^t(\theta,\Delta\varphi)(s)\,ds+\sum_{i,k}\int_0^t(\theta,\sigma_k^iD_i\varphi)\,dw_k(s)
\tag{5.6.24}
\]

holds in $L_2(\mathcal{F}_t^W)$, where $(\cdot,\cdot)$ is the inner product in $L_2(\mathbb{R}^d)$.

(B) If $X=X_{t,x}$ is a weak solution of

\[
X_{t,x}=x+\int_0^t\sigma_k(X_{s,x})\,dw_k(s),
\tag{5.6.25}
\]

then, for each $t\in[0,T]$,

\[
\theta(t,x)=\mathbb{E}\bigl(\theta_0(X_{t,x})\,\big|\,\mathcal{F}_t^W\bigr).
\tag{5.6.26}
\]

(C) For $1\le p<\infty$ and $r\in\mathbb{R}$, define $L_{p,(r)}(\mathbb{R}^d)$ as the Banach space of measurable functions for which the norm

\[
\|f\|^p_{L_{p,(r)}(\mathbb{R}^d)}=\int_{\mathbb{R}^d}|f(x)|^p(1+|x|^2)^{pr/2}\,dx
\]

is finite. Then there exists a number $K$ depending only on $p$, $r$ so that, for each $t>0$,

\[
\mathbb{E}\|\theta\|^p_{L_{p,(r)}(\mathbb{R}^d)}(t)\le e^{Kt}\,\|\theta_0\|^p_{L_{p,(r)}(\mathbb{R}^d)}.
\tag{5.6.27}
\]

In particular, if $r=0$, then $K=0$.

It follows that, for all $s,t$ and almost all $x,y$,

\[
\mathbb{E}\theta(t,x)=\theta_{(0)}(t,x)\quad\text{and}\quad
\mathbb{E}\theta(t,x)\theta(s,y)=\sum_{\alpha\in\mathcal{J}^2}\theta_\alpha(t,x)\,\theta_\alpha(s,y).
\]

If the initial condition 0 belongs to L2 .Rd / \ Lp .Rd / for p  3, then, by (5.6.27), higher order moments of  exist. To obtain the expressions of the higher-order moments in terms of the coefficients ˛ , we need some auxiliary constructions. For ˛; ˇ 2 J 2 , define ˛ C ˇ as the multi-index with components ˛i;k C ˇi;k . Similarly, we define the multi-indices j˛  ˇj and ˛ ^ ˇ D min.˛; ˇ/. We write

326

5 The Polynomial Chaos Method

ˇ  ˛ if and only if ˇi;k  ˛i;k for all i; k  1. If ˇ  ˛, we define ! Y ˛ ˛i;k Š : WD ˇ ˇi;k Š.˛i;k  ˇi;k /Š i;k Definition 5.6.6 We say that a triple of multi-indices .˛; ˇ; / is complete and write .˛; ˇ; / 2 4 if all the entries of the multi-index ˛ C ˇ C  are even numbers and j˛  ˇj    ˛ C ˇ: For fixed ˛; ˇ 2 J 2 ; we write ˚ 4 .˛/ WD ;  2 J 2 W .˛; ; / 2 4 and 4.˛; ˇ/ WD f 2 J 2 W .˛; ˇ; // 2 4g: For .˛; ˇ; / 2 4; we define

.˛; ˇ; / WD 

˛ˇC 2



!

p ˛ŠˇŠŠ   ˇ˛C 2

!



˛Cˇ 2



!

:

(5.6.28)

Note that the triple .˛; ˇ; / is complete if and only if every permutation of the triple .˛; ˇ; / is complete. Similarly, the value of .˛; ˇ; / is invariant under permutation of the arguments. We also define " C .; ˇ; / WD

! ! !#1=2  C ˇ  2  ˇ ;    ^ ˇ:    

(5.6.29)

It is readily checked that if f is a function on J 2 ; then, for ; ˇ 2 J 2 ; X

C .; ˇ; p/ f . C ˇ  2/ D

^ˇ

X

f ./ ˚ .; ˇ; /

(5.6.30)

2.;ˇ/

The next theorem presents the formulas for the third and fourth moments of the solution of equation (5.6.22) in terms of the coefficients ˛ . Theorem 5.6.7 In addition to S1–S4, assume that each k is divergence free and the initial condition 0 belongs to L4 .Rd /. Then   E.t; x/ t0 ; x0 .s; y/ D

X

.˛; ˇ; / ˛ .t; x/ˇ .t0 ; x0 / .s; y/

.˛;ˇ;/24

(5.6.31)
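For multi-indices supported on a single entry, the product identity behind these formulas is the classical product formula for normalized Hermite polynomials, so the coefficients (5.6.29) can be verified directly (a sketch, single indices only):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def xi(n, x):
    """xi_n = He_n / sqrt(n!): normalized probabilists' Hermite polynomial."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return He.hermeval(x, c) / math.sqrt(math.factorial(n))

def C(g, b, p):
    """The coefficient (5.6.29), specialized to single indices."""
    return math.sqrt(math.comb(g + b - 2 * p, g - p)
                     * math.comb(g, p) * math.comb(b, p))

# check xi_g * xi_b = sum_p C(g, b, p) xi_{g+b-2p} pointwise on a grid
x = np.linspace(-3.0, 3.0, 7)
for g in range(5):
    for b in range(5):
        lhs = xi(g, x) * xi(b, x)
        rhs = sum(C(g, b, p) * xi(g + b - 2 * p, x) for p in range(min(g, b) + 1))
        assert np.allclose(lhs, rhs)
```

Since both sides are polynomials of degree at most 8, agreement on any 9 or more points already proves the identity for these orders.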

and

\[
\mathbb{E}\,\theta(t,x)\,\theta(t',x')\,\theta(s,y)\,\theta(s',y')
=\sum_{\alpha,\beta,\gamma,\kappa\in\mathcal{J}^2}\ \sum_{\mu\in\triangle(\alpha,\beta)\cap\triangle(\gamma,\kappa)}
\Phi(\alpha,\beta,\mu)\,\Phi(\gamma,\kappa,\mu)\,\theta_\alpha(t,x)\,\theta_\beta(t',x')\,\theta_\gamma(s,y)\,\theta_\kappa(s',y').
\tag{5.6.32}
\]

Proof It is known that

\[
\xi_\gamma\,\xi_\beta=\sum_{\pi\le\gamma\wedge\beta}C(\gamma,\beta,\pi)\,\xi_{\gamma+\beta-2\pi}.
\tag{5.6.33}
\]

Consider the triple product $\xi_\alpha\xi_\beta\xi_\gamma$. By (5.6.33) and the orthonormality of the basis $\{\xi_\alpha\}$,

\[
\mathbb{E}\,\xi_\alpha\xi_\beta\xi_\gamma=
\begin{cases}
\Phi(\alpha,\beta,\gamma), & (\alpha,\beta,\gamma)\in\triangle;\\
0, & \text{otherwise}.
\end{cases}
\tag{5.6.34}
\]

Equality (5.6.31) now follows. To compute the fourth moment, note that

\[
\xi_\alpha\xi_\beta\xi_\gamma
=\sum_{\pi\le\alpha\wedge\beta}C(\alpha,\beta,\pi)\,\xi_{\alpha+\beta-2\pi}\,\xi_\gamma
=\sum_{\pi\le\alpha\wedge\beta}C(\alpha,\beta,\pi)\sum_{\rho\le(\alpha+\beta-2\pi)\wedge\gamma}C(\alpha+\beta-2\pi,\gamma,\rho)\,\xi_{\alpha+\beta-2\pi+\gamma-2\rho}.
\tag{5.6.35}
\]

Repeated applications of (5.6.30) yield

\[
\xi_\alpha\xi_\beta=\sum_{\mu\in\triangle(\alpha,\beta)}\Phi(\alpha,\beta,\mu)\,\xi_\mu,
\qquad
\xi_\gamma\xi_\kappa=\sum_{\mu\in\triangle(\gamma,\kappa)}\Phi(\gamma,\kappa,\mu)\,\xi_\mu.
\]

Thus, by orthonormality,

\[
\mathbb{E}\,\xi_\alpha\xi_\beta\xi_\gamma\xi_\kappa
=\sum_{\mu\in\triangle(\alpha,\beta)}\sum_{\mu'\in\triangle(\gamma,\kappa)}\Phi(\alpha,\beta,\mu)\,\Phi(\gamma,\kappa,\mu')\,1_{\mu=\mu'}
=\sum_{\mu\in\triangle(\alpha,\beta)\cap\triangle(\gamma,\kappa)}\Phi(\alpha,\beta,\mu)\,\Phi(\gamma,\kappa,\mu).
\]

Equality (5.6.32) now follows. $\square$


In the same way, one can get formulas for the fifth- and higher-order moments.

Remark 5.6.8 Expressions (5.6.31) and (5.6.32) do not depend on the structure of Eq. (5.6.22) and can be used to compute the third and fourth moments of any random field with a known Cameron–Martin expansion. The interested reader should keep in mind that the formulas for the moments of order higher than two must be interpreted with care: in fact, they represent the pseudo-moments.

We now return to the analysis of the passive scalar equation (5.6.20). By relaxing the smoothness assumptions on $\sigma_k$, it is possible to consider velocity fields $v$ that are more turbulent than in the Kraichnan model, for example,

\[
v^i(t,x)=\sum_{k\ge1}\sigma_k^i(x)\,\dot w_k(t),
\tag{5.6.36}
\]

where $\{\sigma_k,\ k\ge1\}$ is an orthonormal basis in $L_2(\mathbb{R}^d;\mathbb{R}^d)$. With $v$ as in (5.6.36), the passive scalar equation (5.6.20) becomes

\[
\dot\theta(t,x)=\nu\Delta\theta(t,x)+f(t,x)-\nabla\theta(t,x)\cdot\dot W(t,x),
\tag{5.6.37}
\]

where $\dot W=\dot W(t,x)$ is a $d$-dimensional space-time white noise and the Itô stochastic differential is used. Previously, such equations have been studied using the white noise approach in the space of Hida distributions [37, 189]; a summary of the related results can be found in [78, Sect. 4.3]. The $Q$-weighted Wiener chaos spaces allow us to state a result that is fully analogous to Theorem 5.6.4; the proof is derived from Theorem 5.5.7.

Theorem 5.6.9 Suppose that $\nu>0$ is a real number, each $|\sigma_k^i(x)|$ is a bounded measurable function, and the input data are deterministic and satisfy $\theta_0\in L_2(\mathbb{R}^d)$, $f\in L_2\bigl((0,T);H_2^{-1}(\mathbb{R}^d)\bigr)$. Fix $\varepsilon>0$ and let $Q=\{q_k,\ k\ge1\}$ be a sequence such that

\[
2\nu|y|^2-\sum_{k\ge1}q_k^2\,\sigma_k^i(x)\sigma_k^j(x)\,y_iy_j\ge\varepsilon|y|^2,\quad x,y\in\mathbb{R}^d.
\]

Then, for every $T>0$, there exists a unique $w(H_2^1(\mathbb{R}^d),H_2^{-1}(\mathbb{R}^d))$ Wiener Chaos solution $\theta$ of the equation

\[
d\theta(t,x)=\bigl(\nu\Delta\theta(t,x)+f(t,x)\bigr)\,dt-\sigma_k(x)\cdot\nabla\theta(t,x)\,dw_k(t).
\tag{5.6.38}
\]

The solution is an $\mathcal{F}_t$-adapted process and satisfies

\[
\|\theta\|^2_{L_{2,Q}(\mathbb{W};L_2((0,T);H_2^1(\mathbb{R}^d)))}+\|\theta\|^2_{L_{2,Q}(\mathbb{W};C((0,T);L_2(\mathbb{R}^d)))}
\le C(\nu,q,T)\Bigl(\|\theta_0\|^2_{L_2(\mathbb{R}^d)}+\|f\|^2_{L_2((0,T);H_2^{-1}(\mathbb{R}^d))}\Bigr).
\]

If $\max_i\sup_x|\sigma_k^i(x)|\le C_k$, $k\ge1$, then a possible choice of $Q$ is

\[
q_k=\frac{(\delta\nu)^{1/2}}{\sqrt{d}\,2^kC_k},\quad 0<\delta<2.
\]

If $\sigma_k^i(x)\sigma_k^j(x)\le C<+\infty$, $i,j=1,\dots,d$, $x\in\mathbb{R}^d$, then a possible choice of $Q$ is

\[
q_k=q=\tilde\varepsilon\bigl(2\nu/(Cd)\bigr)^{1/2},\quad 0<\tilde\varepsilon<1.
\]

5.6.3 Stochastic Navier–Stokes Equations

In this section, we review the main facts about the stochastic Navier–Stokes equations and indicate how the Wiener Chaos approach can be used in the study of nonlinear equations.

A priori, it is not clear in what sense the motion described by Kraichnan's velocity (see Sect. 5.6.2) might fit into the paradigm of Newtonian mechanics. Accordingly, relating the Kraichnan velocity field $v$ to classical fluid mechanics naturally leads to the question of whether we can compensate $v(t,x)$ by a field $u(t,x)$ that is more regular with respect to the time variable, so that there is a balance of momentum for the resulting field $U(t,x)=u(t,x)+v(t,x)$ or, equivalently, so that the motion of a fluid particle in the velocity field $U(t,x)$ satisfies Newton's Second Law. The answer turns out to be positive, and the equation for the smooth component $u=(u^1,\dots,u^d)$ of the velocity is

\[
\begin{cases}
du^i=\bigl[\nu\Delta u^i-u^jD_ju^i-D_iP+f^i\bigr]\,dt+\bigl[g_k^i-D_i\tilde P_k-\sigma_k^jD_ju^i\bigr]\,dw_k,& i=1,\dots,d,\ 0<t\le T;\\[2pt]
\operatorname{div}u=0,\quad u(0,x)=u_0(x).
\end{cases}
\tag{5.6.39}
\]

In (5.6.39), $w_k$, $k\ge1$, are independent standard Wiener processes on a stochastic basis $\mathbb{F}$, the functions $\sigma_k^j$ are given by (5.6.19), the known functions $f=(f^1,\dots,f^d)$ and $g_k=(g_k^i)$, $i=1,\dots,d$, $k\ge1$, are, respectively, the drift and the diffusion components of the free force, and the unknown functions $P$, $\tilde P_k$ are the drift and diffusion components of the pressure.

Remark 5.6.10 It is useful to study Eq. (5.6.39) for more general coefficients $\sigma_k^j$; so, in what follows, $\sigma_k^j$ are not necessarily the same as in Sect. 5.6.2.


We make the following assumptions:

NS1 The functions $\sigma_k^i=\sigma_k^i(t,x)$ are deterministic and measurable,

\[
\sum_{k\ge1}\sum_{i=1}^{d}\Bigl(|\sigma_k^i(t,x)|^2+|D_i\sigma_k^i(t,x)|^2\Bigr)\le K,
\]

and there exists $\varepsilon>0$ so that, for all $y\in\mathbb{R}^d$,

\[
\nu|y|^2-\frac12\,\sigma_k^i(t,x)\sigma_k^j(t,x)\,y_iy_j\ge\varepsilon|y|^2,\quad t\in[0,T],\ x\in\mathbb{R}^d.
\]

NS2 The functions $f^i$, $g_k^i$ are non-random and

\[
\sum_{i=1}^{d}\Bigl(\|f^i\|^2_{L_2((0,T);H_2^{-1}(\mathbb{R}^d))}+\sum_{k\ge1}\|g_k^i\|^2_{L_2((0,T);L_2(\mathbb{R}^d))}\Bigr)<\infty.
\]

Remark 5.6.11 In NS1, the derivatives $D_i\sigma_k^i$ are understood as Schwartz distributions, but it is assumed that $\operatorname{div}(\sigma):=\sum_{i=1}^{d}D_i\sigma^i$ is a bounded $\ell_2$-valued function. Obviously, the latter assumption holds in the important case when $\sum_{i=1}^{d}D_i\sigma^i=0$.

Our next step is to use the divergence-free property of $u$ to eliminate the pressure terms $P$ and $\tilde P$ from Eq. (5.6.39). For that, we need the decomposition of $L_2(\mathbb{R}^d;\mathbb{R}^d)$ into potential and solenoidal components. Write

\[
\mathbf{S}\bigl(L_2(\mathbb{R}^d;\mathbb{R}^d)\bigr)=\{v\in L_2(\mathbb{R}^d;\mathbb{R}^d):\operatorname{div}(v)=0\}.
\]

It is known that

\[
L_2(\mathbb{R}^d;\mathbb{R}^d)=\mathbf{G}\bigl(L_2(\mathbb{R}^d;\mathbb{R}^d)\bigr)\oplus\mathbf{S}\bigl(L_2(\mathbb{R}^d;\mathbb{R}^d)\bigr),
\]

where $\mathbf{G}(L_2(\mathbb{R}^d;\mathbb{R}^d))$ is the orthogonal complement of $\mathbf{S}(L_2(\mathbb{R}^d;\mathbb{R}^d))$. The functions $\mathbf{G}(v)$ and $\mathbf{S}(v)$ can be defined for $v$ from any Sobolev space $H_2^\gamma(\mathbb{R}^d;\mathbb{R}^d)$ and are usually referred to as the potential and the divergence-free (or solenoidal) projections, respectively, of the vector field $v$.

Now let $u$ be a solution of Eq. (5.6.39). Since $\operatorname{div}(u)=0$, we have

\[
D_i\bigl(\nu\Delta u^i-u^jD_ju^i-D_iP+f^i\bigr)=0;\qquad
D_i\bigl(g_k^i-D_i\tilde P_k-\sigma_k^jD_ju^i\bigr)=0,\quad k\ge1.
\]

As a result,

\[
\nabla P=\mathbf{G}\bigl(\nu\Delta u-u^jD_ju+f\bigr);\qquad
\nabla\tilde P_k=\mathbf{G}\bigl(g_k-\sigma_k^jD_ju\bigr),\quad k\ge1.
\]

So, instead of Eq. (5.6.39), we can and will consider its equivalent form for the unknown vector $u=(u^1,\dots,u^d)$:

\[
du=\mathbf{S}\bigl(\nu\Delta u-u^jD_ju+f\bigr)\,dt+\mathbf{S}\bigl(g_k-\sigma_k^jD_ju\bigr)\,dw_k,\quad 0<t\le T,
\tag{5.6.40}
\]

with initial condition $u|_{t=0}=u_0$.

Definition 5.6.12 An $\mathcal{F}_t$-adapted random process $u$ from the space $L_2\bigl(\Omega\times[0,T];H_2^1(\mathbb{R}^d;\mathbb{R}^d)\bigr)$ is called a solution of Eq. (5.6.40) if

1. With probability one, the process $u$ is weakly continuous in $L_2(\mathbb{R}^d;\mathbb{R}^d)$.
2. For every $\varphi\in C_0^\infty(\mathbb{R}^d;\mathbb{R}^d)$ with $\operatorname{div}\varphi=0$, there exists a measurable set $\Omega'\subseteq\Omega$ with $\mathbb{P}(\Omega')=1$ so that, for all $t\in[0,T]$, the equality

\[
(u^i,\varphi^i)(t)=(u_0^i,\varphi^i)
+\int_0^t\Bigl(-\nu(D_ju^i,D_j\varphi^i)(s)+(u^ju^i,D_j\varphi^i)(s)+\langle f^i,\varphi^i\rangle(s)\Bigr)\,ds
+\int_0^t\bigl(g_k^i-\sigma_k^jD_ju^i,\varphi^i\bigr)\,dw_k(s)
\tag{5.6.41}
\]

holds on $\Omega'$. In (5.6.41), $(\cdot,\cdot)$ is the inner product in $L_2(\mathbb{R}^d)$ and $\langle\cdot,\cdot\rangle$ is the duality between $H_2^{-1}(\mathbb{R}^d)$ and $H_2^1(\mathbb{R}^d)$.

The following existence and uniqueness result is known.

Theorem 5.6.13 In addition to NS1 and NS2, assume that the initial condition $u_0$ is non-random and belongs to $L_2(\mathbb{R}^d;\mathbb{R}^d)$. Then there exist a stochastic basis $\mathbb{F}=(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},\mathbb{P})$ with the usual assumptions, a collection $\{w_k,\ k\ge1\}$ of independent standard Wiener processes on $\mathbb{F}$, and a process $u$, so that $u$ is a solution of (5.6.40) and

\[
\mathbb{E}\Bigl(\sup_{s\le T}\|u(s)\|^2_{L_2(\mathbb{R}^d;\mathbb{R}^d)}+\int_0^T\|\nabla u(s)\|^2_{L_2(\mathbb{R}^d;\mathbb{R}^d)}\,ds\Bigr)<\infty.
\]

If, in addition, $d=2$, then the solution of (5.6.40) exists on any prescribed stochastic basis, is strongly continuous in $t$, is $\mathcal{F}_t^W$-adapted, and is unique, both path-wise and in distribution. When $d\ge3$, existence of a strong solution as well as uniqueness (strong or weak) for Eq. (5.6.40) are important open problems.

By the Cameron–Martin theorem,

\[
u(t,x)=\sum_{\alpha\in\mathcal{J}^2}u_\alpha(t,x)\,\xi_\alpha.
\]

If the solution of (5.6.40) is $\mathcal{F}_t^W$-adapted, then, using the Itô formula together with relation (5.3.49) for the time evolution of $\mathbb{E}(\xi_\alpha|\mathcal{F}_t^W)$ and relation (5.6.33) for the

product of two elements of the Cameron–Martin basis, we can derive the propagator system for the coefficients $u_\alpha$ [165, Theorem 3.2]:

Theorem 5.6.14 In addition to NS1 and NS2, assume that $u_0\in L_2(\mathbb{R}^d;\mathbb{R}^d)$ and Eq. (5.6.40) has an $\mathcal{F}_t^W$-adapted solution $u$ so that

\[
\sup_{t\le T}\mathbb{E}\|u\|^2_{L_2(\mathbb{R}^d;\mathbb{R}^d)}(t)<\infty.
\tag{5.6.42}
\]

Then

\[
u(t,x)=\sum_{\alpha\in\mathcal{J}^2}u_\alpha(t,x)\,\xi_\alpha,
\tag{5.6.43}
\]

and the Fourier coefficients $u_\alpha(t,x)$ are $L_2(\mathbb{R}^d;\mathbb{R}^d)$-valued weakly continuous functions so that

\[
\sup_{t\le T}\sum_{\alpha\in\mathcal{J}^2}\|u_\alpha\|^2_{L_2(\mathbb{R}^d;\mathbb{R}^d)}(t)
+\int_0^T\sum_{\alpha\in\mathcal{J}^2}\|\nabla u_\alpha\|^2_{L_2(\mathbb{R}^d;\mathbb{R}^{d\times d})}(t)\,dt<\infty.
\tag{5.6.44}
\]

The functions $u_\alpha(t,x)$, $\alpha\in\mathcal{J}^2$, satisfy the (nonlinear) propagator

\[
\begin{aligned}
\frac{\partial}{\partial t}u_\alpha&=\mathbf{S}\Bigl(\nu\Delta u_\alpha-\sum_{(\gamma,\beta)\in\triangle(\alpha)}\Phi(\alpha,\beta,\gamma)\,(u_\gamma,\nabla)u_\beta+1_{|\alpha|=0}\,f\Bigr)\\
&\quad+\sum_{j,k}\sqrt{\alpha_{j,k}}\,\mathbf{S}\Bigl(-(\sigma_k,\nabla)u_{\alpha-(j,k)}+1_{|\alpha|=1}\,g_k\Bigr)m_j(t),\quad 0<t\le T;\\
u_\alpha|_{t=0}&=u_0\,1_{|\alpha|=0};
\end{aligned}
\tag{5.6.45}
\]

recall that the numbers $\Phi(\alpha,\beta,\gamma)$ are defined in (5.6.28).

One of the questions in the theory of the Navier–Stokes equations is the computation of the mean value $\bar u=\mathbb{E}u$ of the solution. The traditional approach relies on the Reynolds equation for the mean,

\[
\partial_t\bar u-\nu\Delta\bar u+\overline{(u,\nabla)u}=0,
\tag{5.6.46}
\]

which is not really an equation with respect to $\bar u$. Decoupling (5.6.46) has been an area of active research: Reynolds approximations, coupled equations for the moments, Gaussian closures, and so on (see, e.g., [169, 221] and the references therein). Another way to compute $\bar u(t,x)$ is to find the probability distribution of $u(t,x)$ using the infinite-dimensional Kolmogorov equation associated with (5.6.40). The complexity of this Kolmogorov equation is prohibitive for any realistic application, at least for now.

The propagator provides a third way: expressing the mean and other statistical moments of $u$ in terms of the coefficients $u_\alpha$. Indeed, by the Cameron–Martin theorem,

\[
\mathbb{E}u(t,x)=u_{(0)}(t,x),\qquad
\mathbb{E}u^i(t,x)\,u^j(s,y)=\sum_{\alpha\in\mathcal{J}^2}u^i_\alpha(t,x)\,u^j_\alpha(s,y).
\]

If they exist, the third- and fourth-order moments can be computed using (5.6.31) and (5.6.32).

The next theorem shows that the existence of a solution of the propagator (5.6.45) is not only necessary but, to some extent, sufficient for the global existence of a probabilistically strong solution of the stochastic Navier–Stokes equation (5.6.40).

Theorem 5.6.15 Let NS1 and NS2 hold and $u_0\in L_2(\mathbb{R}^d;\mathbb{R}^d)$. Assume that the propagator (5.6.45) has a solution $\{u_\alpha(t,x),\ \alpha\in\mathcal{J}^2\}$ on the interval $(0,T]$ so that, for every $\alpha$, the process $u_\alpha$ is weakly continuous in $L_2(\mathbb{R}^d;\mathbb{R}^d)$ and the inequality

\[
\sup_{t\le T}\sum_{\alpha\in\mathcal{J}^2}\|u_\alpha\|^2_{L_2(\mathbb{R}^d;\mathbb{R}^d)}(t)
+\int_0^T\sum_{\alpha\in\mathcal{J}^2}\|\nabla u_\alpha\|^2_{L_2(\mathbb{R}^d;\mathbb{R}^{d\times d})}(t)\,dt<\infty
\tag{5.6.47}
\]

holds. If the process

\[
\bar U(t,x):=\sum_{\alpha\in\mathcal{J}^2}u_\alpha(t,x)\,\xi_\alpha
\tag{5.6.48}
\]

is $\mathcal{F}_t^W$-adapted, then it is a solution of (5.6.40). The process $\bar U$ satisfies

\[
\mathbb{E}\Bigl(\sup_{s\le T}\|\bar U(s)\|^2_{L_2(\mathbb{R}^d;\mathbb{R}^d)}+\int_0^T\|\nabla\bar U(s)\|^2_{L_2(\mathbb{R}^d;\mathbb{R}^d)}\,ds\Bigr)<\infty.
\]

5.6.4 First-Order Itô Equations

In this section we study the equation

\[
du(t,x)=u_x(t,x)\,dw(t),\quad t>0,\ x\in\mathbb{R},
\tag{5.6.49}
\]

and its analog for $x\in\mathbb{R}^d$. Equation (5.6.49) was first encountered in Example 5.4.8 on page 287. With a non-random initial condition $u(0,x)=\varphi(x)$, direct computations show that, if it exists, the Fourier transform $\hat u=\hat u(t,y)$ of the solution must satisfy

\[
d\hat u(t,y)=\sqrt{-1}\,y\,\hat u(t,y)\,dw(t),
\quad\text{or}\quad
\hat u(t,y)=\hat\varphi(y)\,e^{\sqrt{-1}\,y\,w(t)+\frac12y^2t}.
\tag{5.6.50}
\]
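The second equality in (5.6.50) is a one-line Itô computation. For fixed $y$, set $M(t)=e^{\sqrt{-1}\,y\,w(t)+\frac12y^2t}$; then, by the Itô formula,

\[
dM(t)=M(t)\Bigl(\sqrt{-1}\,y\,dw(t)+\tfrac12y^2\,dt\Bigr)+\tfrac12\bigl(\sqrt{-1}\,y\bigr)^2M(t)\,dt
=\sqrt{-1}\,y\,M(t)\,dw(t),
\]

so $\hat u(t,y)=\hat\varphi(y)M(t)$ indeed solves $d\hat u=\sqrt{-1}\,y\,\hat u\,dw$: the Itô correction $\frac12(\sqrt{-1}\,y)^2\,dt$ exactly cancels the deterministic drift $\frac12y^2\,dt$. At the same time $|M(t)|^2=e^{y^2t}$, so $\mathbb{E}|\hat u(t,y)|^2=|\hat\varphi(y)|^2e^{y^2t}$, which is why square integrability fails unless $\hat\varphi$ decays extremely fast.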

The last equality shows that the properties of the solution essentially depend on the initial condition, and, in general, the solution is not in $L_2(\mathbb{W})$. The $S$-transformed equation, $v_t=h(t)v_x$, has a unique solution

\[
v(t,x)=\varphi\Bigl(x+\int_0^t h(s)\,ds\Bigr),\qquad h(t)=\sum_{i=1}^{N}h_im_i(t).
\]

The results of Sect. 5.4.2 imply that a white noise solution of the equation can exist only if $\varphi$ is a real analytic function. On the other hand, if $\varphi$ is infinitely differentiable, then, by Theorem 5.5.3, the Wiener Chaos solution exists and can be recovered from $v$.

Theorem 5.6.16 Assume that the initial condition $\varphi$ belongs to the Schwartz space $\mathcal{S}=\mathcal{S}(\mathbb{R})$ of rapidly decreasing smooth functions. Then there exists a generalized random process $u=u(t,x)$, $t\ge0$, $x\in\mathbb{R}$, so that, for every $\gamma\in\mathbb{R}$ and $T>0$, the process $u$ is the unique $w(H_2^\gamma(\mathbb{R}),H_2^{\gamma-1}(\mathbb{R}))$ Wiener Chaos solution of equation (5.6.49).

Proof The propagator for (5.6.49) is

\[
u_\alpha(t,x)=\varphi(x)\,1_{|\alpha|=0}+\int_0^t\sum_i\sqrt{\alpha_i}\,\bigl(u_{\alpha-(i)}(s,x)\bigr)_x\,m_i(s)\,ds.
\tag{5.6.51}
\]

Even though Theorem 5.4.4 is not applicable, the system can be solved by induction if $\varphi$ is sufficiently smooth. Denote by $C_\varphi(k)$, $k\ge0$, the square of the $L_2(\mathbb{R})$ norm of the $k$th derivative of $\varphi$:

\[
C_\varphi(k)=\int_{-\infty}^{+\infty}|\varphi^{(k)}(x)|^2\,dx.
\tag{5.6.52}
\]


By Corollary 5.4.6, for every $k\ge0$ and $n\ge0$,

\[
\sum_{|\alpha|=k}\bigl\|u_\alpha^{(n)}\bigr\|^2_{L_2(\mathbb{R})}(t)=\frac{t^k\,C_\varphi(n+k)}{k!}.
\tag{5.6.53}
\]

The statement of the theorem now follows. $\square$

Remark 5.6.17 Once interpreted in a suitable sense, the Wiener Chaos solution of (5.6.49) is $\mathcal{F}_t^W$-adapted and does not depend on the choice of the Cameron–Martin basis in $L_2(\mathbb{W})$. Indeed, choose the weight sequence so that

\[
r_\alpha^2=\frac{1}{1+C_\varphi(|\alpha|)}.
\]

By (5.6.53), we have $u\in\mathcal{R}L_2(\mathbb{W};L_2(\mathbb{R}))$. Next, define

\[
\zeta_N(x)=\frac{1}{\pi}\,\frac{\sin(Nx)}{x}.
\]

Direct computations show that the Fourier transform of $\zeta_N$ is supported in $[-N,N]$ and $\int_{\mathbb{R}}\zeta_N(x)\,dx=1$. Consider Eq. (5.6.49) with initial condition

\[
\varphi_N(x)=\int_{\mathbb{R}}\varphi(x-y)\,\zeta_N(y)\,dy.
\]

By (5.6.50), this equation has a unique solution $u_N$ so that $u_N(t,\cdot)\in L_2(\mathbb{W};H_2^\gamma(\mathbb{R}))$ for all $t\ge0$, $\gamma\in\mathbb{R}$. Relation (5.6.53) and the definition of $u_N$ imply

\[
\lim_{N\to\infty}\sum_{|\alpha|=k}\|u_\alpha-u_{N,\alpha}\|^2_{L_2(\mathbb{R})}(t)=0,\quad t\ge0,\ k\ge0,
\]

so that, by the Lebesgue dominated convergence theorem,

\[
\lim_{N\to\infty}\|u-u_N\|^2_{\mathcal{R}L_2(\mathbb{W};L_2(\mathbb{R}))}(t)=0,\quad t\ge0.
\]

In other words, the solution of the propagator (5.6.51) corresponding to any basis $m$ in $L_2((0,T))$ is a limit in $\mathcal{R}L_2(\mathbb{W};L_2(\mathbb{R}))$ of the sequence $\{u_N,\ N\ge1\}$ of $\mathcal{F}_t^W$-adapted processes.

The properties of the Wiener Chaos solution of (5.6.49) depend on the growth rate of the numbers $C_\varphi(n)$. In particular:

• If $C_\varphi(n)\le C^n(n!)^\gamma$, $C>0$, $0\le\gamma<1$, then $u\in L_2\bigl(\mathbb{W};L_2((0,T);H_2^n(\mathbb{R}))\bigr)$ for all $T>0$ and every $n\ge0$.
• If $C_\varphi(n)\le C^n\,n!$, $C>0$, then
  – for every $n\ge0$, there is a $T>0$ so that $u\in L_2\bigl(\mathbb{W};L_2((0,T);H_2^n(\mathbb{R}))\bigr)$; in other words, the square-integrable solution exists only for sufficiently small $T$;
  – for every $n\ge0$ and every $T>0$, there exists a number $\delta\in(0,1)$ so that $u\in L_{2,Q}\bigl(\mathbb{W};L_2((0,T);H_2^n(\mathbb{R}))\bigr)$ with $Q=(\delta,\delta,\delta,\dots)$.
• If the numbers $C_\varphi(n)$ grow as $C^n(n!)^{1+\gamma}$, $\gamma\ge0$, then, for every $T>0$ and $n\ge0$, there exists a number $q>0$ so that $u\in(\mathcal{S})_{-\gamma,-q}\bigl(L_2(\mathbb{W});L_2((0,T);H_2^n(\mathbb{R}))\bigr)$. If $\gamma>0$, then this solution does not belong to any $L_{2,Q}\bigl(\mathbb{W};L_2((0,T);H_2^n(\mathbb{R}))\bigr)$. If $\gamma>1$, then this solution does not have an $S$-transform.
• If the numbers $C_\varphi(n)$ grow faster than $C^n(n!)^b$ for every $b,C>0$, then the Wiener Chaos solution of (5.6.49) does not belong to any $(\mathcal{S})_{-\rho,-q}\bigl(L_2((0,T);H_2^n(\mathbb{R}))\bigr)$, $\rho,q>0$, or $L_{2,Q}\bigl(\mathbb{W};L_2((0,T);H_2^n(\mathbb{R}))\bigr)$.

To construct a function $\varphi$ with the required rate of growth of $C_\varphi(n)$, consider

\[
\varphi(x)=\int_0^\infty\cos(xy)\,e^{-g(y)}\,dy,
\]

where $g$ is a suitable positive, unbounded, even function. Note that, up to a multiplicative constant, the Fourier transform of $\varphi$ is $e^{-g(y)}$, and so $C_\varphi(n)$ grows with $n$ as $\int_0^{+\infty}|y|^{2n}e^{-2g(y)}\,dy$.

A more general first-order equation can also be considered:

\[
du(t,x)=\sum_{i,k}\sigma_{ik}(t,x)\,D_iu(t,x)\,dw_k(t),\quad t>0,\ x\in\mathbb{R}^d.
\tag{5.6.54}
\]

Theorem 5.6.18 Assume that, in Eq. (5.6.54), the initial condition $u(0,x)$ belongs to $\mathcal{S}(\mathbb{R}^d)$ and each $\sigma_{ik}$ is infinitely differentiable with respect to $x$, with $\sup_{(t,x)}|D^n\sigma_{ik}(t,x)|\le C_{ik}(n)$, $n\ge0$. Then there exists a generalized random process $u=u(t,x)$, $t\ge0$, $x\in\mathbb{R}^d$, so that, for every $\gamma\in\mathbb{R}$ and $T>0$, the process $u$ is the unique $w(H_2^\gamma(\mathbb{R}^d),H_2^{\gamma-1}(\mathbb{R}^d))$ Wiener Chaos solution of equation (5.6.54).

Proof The arguments are identical to the proof of Theorem 5.6.16. Note that the $S$-transformed equation corresponding to (5.6.54) is $v_t=h_k\sigma_{ik}D_iv$, and it has a unique solution if each $\sigma_{ik}$ is a Lipschitz continuous function of $x$. Still, without additional smoothness, it is impossible to relate this solution to any generalized random process. $\square$
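Two concrete choices of $g$ illustrate the growth regimes above (a numerical sketch; the specific constants are assumptions for illustration). For $g(y)=y^2$ the integral $\int_0^\infty y^{2n}e^{-2y^2}\,dy=\Gamma(n+\tfrac12)/2^{n+3/2}$ grows like $C^n\,n!$ (the borderline case), while for $g(y)=y$ it equals $(2n)!/2^{2n+1}\asymp(n!)^2$ up to geometric factors, i.e. the $(n!)^{1+\gamma}$ regime with $\gamma=1$.

```python
import math

def C_gauss(n):
    """C_phi(n) for g(y) = y^2:  int_0^inf y^(2n) e^(-2y^2) dy = Gamma(n + 1/2) / 2**(n + 3/2)."""
    return math.gamma(n + 0.5) / 2.0 ** (n + 1.5)

def C_exp(n):
    """C_phi(n) for g(y) = y:  int_0^inf y^(2n) e^(-2y) dy = (2n)! / 2**(2n + 1)."""
    return math.factorial(2 * n) / 2.0 ** (2 * n + 1)

# g(y) = y^2: bounded by C**n * n! with C = 1/2 -- the borderline C_phi(n) <= C**n n! case
ratios_gauss = [C_gauss(n) / (0.5 ** n * math.factorial(n)) for n in range(1, 30)]
assert max(ratios_gauss) < 1.0

# g(y) = y: C_phi(n) = (2n)!/2**(2n+1), which is (n!)**2 / (2 sqrt(pi n)) up to lower order --
# the (n!)^(1+gamma) regime with gamma = 1, where the solution leaves all L_{2,Q} spaces
assert all(0.1 * math.factorial(n) ** 2 / math.sqrt(n)
           < C_exp(n) < math.factorial(n) ** 2 for n in range(1, 30))
```

Flatter functions $g$ (for example $g(y)=\log^2(1+y)$-type growth) push $C_\varphi(n)$ beyond every $C^n(n!)^b$ and land in the last bullet above.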

5.6.5 Problems

Problems 5.6.1 and 5.6.2 suggest further investigation of the transport equation $u_t = u_x\,\dot w(t)$ and its higher-dimensional version. Problem 5.6.3 extends the same ideas to parabolic equations. Problem 5.6.4 encourages the reader to learn more about the Skorokhod integral. Problem 5.6.5 presents an easy example of getting additional regularity for parabolic equations using chaos expansion.

Problem 5.6.1 A classical example of an infinitely differentiable non-analytic function is
$$\varphi(x) = \begin{cases} e^{-1/x^2}, & \text{if } x \neq 0,\\ 0, & \text{if } x = 0.\end{cases}$$
In this case, $\varphi^{(n)}(0) = 0$ for all $n$, so $\varphi(x) \neq \sum_{n\ge 0} \varphi^{(n)}(0)\, x^n/n!$, $x \neq 0$.

(a) What happens to the derivatives of $\varphi$ near $x = 0$? [Apparently they must grow pretty fast, to prevent the remainder in the Taylor formula from going to zero.]
(b) Investigate the chaos solution of $u_t = u_x\,\dot w(t)$ with initial conditions $u(0,x) = 1 - \varphi(x)$ and $u(0,x) = \varphi(x)$.

Problem 5.6.2
(a) Construct a closed-form solution for the equation
$$du = u_x\, dw(t),\quad u(0,x) = x^n,\ n > 1.$$
(b) Construct a chaos solution for the equation
$$du = \sum_{i,k} \sigma_{i,k}(t,x)\, u_{x_k}\, dw_i(t),\quad x \in \mathbb{R}^d,$$
when $u(0,x)$ is an element of $\mathcal{S}(\mathbb{R}^d)$, each $\sigma_{i,k}(t,x)$ is infinitely differentiable in $x$, and $\sup_{t,x} |D_x^n \sigma_{i,k}(t,x)| = C_{i,k}(n) < \infty$.

Problem 5.6.3 By analogy with the equation $u_t = u_x\,\dot w$, construct the chaos solution of
$$\dot u = u_{xx} + u^{(n)}(x)\,\dot w(t),\quad t > 0,\ x \in \mathbb{R},$$
given a smooth, but not necessarily analytic, initial condition $u(0,x) = \varphi(x)$.

Problem 5.6.4 Given real numbers $a \neq 0$, $\sigma \neq 0$ and a positive integer $n$, construct a closed-form solution of the Skorokhod integral equation
$$u(t,x) = x^n\, w(T) + \int_0^t a\, u_{xx}(s,x)\, ds + \sigma \int_0^t u_x(s,x)\,\delta w(s),\quad 0 < t < T,\ x \in \mathbb{R}.$$
What conditions do you need to impose on $a$ and $\sigma$ (i) to be able to find the solution; (ii) to make the computations easier?

338

5 The Polynomial Chaos Method

Problem 5.6.5 Consider a normal triple $(V,H,V')$ and the stochastic parabolic equation $\dot u = \mathrm{A}u + \mathrm{M}u\,\dot w$ with a standard Brownian motion $w = w(t)$, bounded linear operators $\mathrm{A}: V \to V'$ and $\mathrm{M}: V \to H$, and a non-random initial condition $u_0 \in H$, and assume that the stochastic parabolicity condition is satisfied:
$$2[\mathrm{A}v,v] + \|\mathrm{M}v\|_H^2 + \delta\|v\|_V^2 \le C_0\|v\|_H^2.$$
Denote by $\mathbf{e}$ the sequence with $e_k = 1$, $k \ge 1$. Show that $u \in (\mathcal{S})_{0,q\mathbf{e}}\big(\mathbb{W}; L_2((0,T); V)\big)$ for some $q > 1$. [For example, you can take $q = 1+\varepsilon$ if $\varepsilon\|\mathrm{M}v\|_H^2 < \delta\|v\|_V^2$.]

5.7 Distribution Free Stochastic Analysis

5.7.1 Distribution Free Polynomial Chaos

So far in this book we have studied linear stochastic PDEs driven by a sequence of independent standard Gaussian random variables
$$\Xi := \{\xi_k,\ 1 \le k \le N\}. \qquad (5.7.1)$$
The $\sigma$-algebra generated by $\Xi$ and completed by the sets of probability zero will be denoted $\sigma(\Xi)$. In this section we will introduce and investigate an extension of this setting to an arbitrary sequence
$$\Phi := \{\xi_k,\ 1 \le k \le M\} \qquad (5.7.2)$$
of square-integrable and uncorrelated random variables on the probability space $(\Omega,\mathcal{F},\mathbb{P})$. Similarly to $\sigma(\Xi)$, the $\sigma$-algebra generated by $\Phi$ will be denoted by $\sigma(\Phi)$; as usual, $\sigma(\Phi)$ is assumed to be completed by the sets of probability zero. We will assume that the upper bounds $N$ in (5.7.1) and $M$ in (5.7.2) can be either finite or infinite.

The most important feature of the sequence (5.7.2) is that the distribution functions of the random variables $\xi_k$ are not specified beyond some general assumptions. This property of the system $\Phi$ is often called distribution free (and abbreviated as DF).


Definition 5.7.1 Let $\sigma(\Phi)$ be the $\sigma$-algebra generated by $\{\xi_k,\ k \ge 1\}$ and completed by sets of probability zero. Suppose that $g(\Phi) = g(\xi_1,\xi_2,\dots)$ is a $\sigma(\Phi)$-measurable function such that $\mathbb{E}|g(\Phi)|^2 < \infty$.

Clearly, the information available about $g(\Phi)$ is very limited. In contrast, in the Gaussian setting ($\Phi = \Xi$) one can construct the Cameron-Martin expansion for a random function $f(\Xi)$ with finite variance. For example, it follows from the Cameron-Martin Theorem (see (5.1.12)) that
$$f(\Xi) = \sum_{\alpha\in\mathcal{J}^1} f_\alpha\, \xi_\alpha, \qquad (5.7.3)$$
where
$$\xi_\alpha = \prod_{k\ge 1} H_{\alpha_k}(\xi_k)/\sqrt{\alpha!} \quad\text{and}\quad f_\alpha = \mathbb{E}(f\,\xi_\alpha).$$
As we have already seen in this book, the application of the Cameron-Martin Theorem to SPDEs driven by Gaussian noise/forcing (e.g. SPDE (5.2.10)) leads to a lower-triangular system of deterministic equations. In particular, a solution of the linear parabolic SPDE
$$du(t,x) = \big(a u_{xx}(t,x) + f(t,x)\big)dt + \big(\sigma u_x(t,x) + g(t,x)\big)dw(t),\quad t > 0,\ x\in\mathbb{R} \qquad (5.7.4)$$
is given by the Cameron-Martin formula
$$u(t,x) = \sum_{\alpha\in\mathcal{J}^1} u_\alpha(t,x)\,\xi_\alpha, \qquad (5.7.5)$$
where
$$du_\alpha(t,x) = a\,\big(u_\alpha(t,x)\big)_{xx}\,dt + \sum_{i\ge 1}\sqrt{\alpha_i}\,\sigma\,\big(u_{\alpha-(i)}(t,x)\big)_x\, m_i(t)\,dt, \qquad (5.7.6)$$
and $\{m_i(t),\ i\ge 1\}$ is an orthonormal basis in $L_2[0,T]$. Obviously, (5.7.6) is a lower-triangular system of deterministic PDEs (often referred to as the propagator).

An important question is whether one can generalize the Cameron-Martin formula and the related lower-triangular propagator to distribution free ($\sigma(\Phi)$-measurable) linear SPDEs. The answer is positive and will be established below. To demonstrate this, we shall introduce some fundamental notions and concepts of stochastic calculus in the distribution free setting.

It is convenient to start with a $\sigma$-finite complete measure space $(U,\mathcal{U},\mu)$ and the related Hilbert space $\mathrm{H} = L_2(U,\mathcal{U},\mu)$. In addition we shall introduce a complete probability space $(\Omega,\mathcal{F},\mathbb{P})$. Now we shall introduce a general definition of a driving cylindrical noise.
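For a single Gaussian variable, (5.7.3) reduces to the one-dimensional Hermite expansion $f(\xi) = \sum_n f_n\,\mathrm{He}_n(\xi)/\sqrt{n!}$ with $f_n = \mathbb{E}[f\,\xi_n]$. A minimal numerical sketch of this expansion (the test function, quadrature size, and truncation level are illustrative choices, not from the text):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, HermiteE

# Gauss-Hermite(e) nodes for the weight exp(-x^2/2); dividing the weights by
# sqrt(2*pi) turns the quadrature sum into an expectation over xi ~ N(0,1).
nodes, weights = hermegauss(60)
weights = weights / np.sqrt(2 * np.pi)

f = lambda x: np.exp(0.3 * x)   # a test functional f(xi) with finite variance

# f_n = E[f(xi) * xi_n], with xi_n = He_n(xi)/sqrt(n!)
coeffs = []
for n in range(8):
    xi_n = HermiteE([0] * n + [1])(nodes) / math.sqrt(math.factorial(n))
    coeffs.append(float(np.sum(weights * f(nodes) * xi_n)))

# reconstruct f from the truncated expansion at a sample point
x0 = 0.5
approx = sum(c * HermiteE([0] * n + [1])(x0) / math.sqrt(math.factorial(n))
             for n, c in enumerate(coeffs))
```

For $f(\xi)=e^{0.3\xi}$ the exact coefficients are $e^{0.045}\,(0.3)^n/\sqrt{n!}$, so eight terms already reconstruct $f(0.5) = e^{0.15}$ to high accuracy.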


Definition 5.7.2 A continuous linear functional $\Phi$ from $\mathrm{H}$ to $L_2(\Omega,\mathcal{F},\mathbb{P})$ such that
$$\mathbb{E}\big[\Phi(f)^2\big] = \|f\|_{\mathrm{H}}^2 \ \text{ for all } f\in\mathrm{H} \qquad (5.7.7)$$
is called a driving cylindrical random field on $U$. Without loss of generality we assume that
$$\mathbb{E}\,\Phi(f) = 0 \ \text{ for all } f\in\mathrm{H}. \qquad (5.7.8)$$

Example 5.7.3 Let $f\in L_2(0,T)$, let $w(t)$ be a standard Brownian motion, and let $\Phi(f) = \int_0^T f(s)\,dw(s)$. Then $\mathbb{E}\,\Phi(f) = 0$ and
$$\mathbb{E}\big[\Phi(f)^2\big] = \int_0^T |f(s)|^2\,ds < \infty$$

for all $f\in L_2(0,T)$.

Exercise 5.7.4 (B) Prove that $\Phi$ is an isometric embedding of $\mathrm{H}$ into $L_2(\Omega,\mathcal{F},\mathbb{P})$.

Let us fix an integer $N_0$ such that $N_0\ge 2$ and (possibly) $N_0\to\infty$. If $f\in\mathrm{H}$ and $\{m_k,\ k < N_0\}$ is a complete orthonormal system in $\mathrm{H}$, then
$$\Phi(f) = \sum_{1\le k<N_0} f_k\,\xi_k \quad\text{in } L_2(\Omega,\mathcal{F},\mathbb{P}). \qquad (5.7.9)$$

[...]

Since $\mathbb{E}\,\xi_\alpha = 0$ for $|\alpha| > 0$, we have
$$\mathbb{E}\big(I_n(v)\big) = n!\sum_{|\alpha|=n} v_\alpha\,\mathbb{E}(\xi_\alpha) = 0. \qquad (5.7.21)$$

Exercise 5.7.28 (C) Confirm that
$$\mathbb{E}\big(I_n(v)\big)^2 = (n!)^2 \sum_{|\alpha|=n} \alpha!\, v_\alpha^2.$$

Now we shall introduce an extension of $\mathrm{H}^{\hat\otimes n}$ to a general Hilbert space $Y$.

Definition 5.7.29 Let $\mathrm{H}^{\hat\otimes n}(Y)$ be the space of all $Y$-valued symmetric functions $v = \sum_{|\alpha|=n} v_\alpha E_\alpha$ on $U^n$ such that
$$|v|^2_{\mathrm{H}^{\hat\otimes n}(Y)} = \int_{U^n} |v(r)|_Y^2\, d\mu^n = \sum_\alpha |v_\alpha|_Y^2\,|\alpha|!\,\alpha! < \infty. \qquad (5.7.22)$$
Let $\mathrm{H}^{\hat\otimes n}_\nu(Y) \subseteq \mathrm{H}^{\hat\otimes n}(Y)$ be the linear subspace of $\mathrm{H}^{\hat\otimes n}(Y)$ spanned by $\{E_\alpha : \alpha\in\mathcal{J}_n\}$; set $\mathrm{H}^{\hat\otimes n}_\nu(Y) = \{0\}$ if $\mathcal{J}_n = \emptyset$. For $v = \sum_{|\alpha|=n} v_\alpha E_\alpha \in \mathrm{H}^{\hat\otimes n}(Y)$, let $v^{(\nu_n)}$ be its projection onto $\mathrm{H}^{\hat\otimes n}_\nu(Y)$. We define the $n$-th multiple integral of $v$ as
$$I_n(v) = I_n\big(v^{(\nu_n)}\big) = \sum_{|\alpha|=n} v_\alpha\, I_n(E_\alpha) = n!\sum_{|\alpha|=n} v_\alpha\, \xi_\alpha,$$

and $I_0(c) = c$, $c\in\mathbb{R}$.

Exercise 5.7.30 (B) By induction on $n$, show that
$$\int_{U^n}\Big(\sum_{|\alpha|=n} v_\alpha E_\alpha\Big)^2 d\mu^n = n!\sum_{|\alpha|=n}\|v_\alpha\|_Y^2\,\alpha!$$
and
$$\mathbb{E}\,\big|I_n(v)\big|_Y^2 = (n!)^2\sum_{\alpha\in\mathcal{J}_n}\|v_\alpha\|_Y^2\,\alpha! = n!\,\big|v^{(\nu_n)}\big|^2_{\mathrm{H}^{\hat\otimes n}(Y)}.$$

Let $\mathcal{S}(Y)$ be the space of all finite linear combinations $\sum_k \frac{I_k(F_k)}{k!}$ with $F_k \in \mathrm{H}^{\hat\otimes k}_\nu(Y)$.
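In the Gaussian special case, the isometry in Exercise 5.7.30 rests on the orthogonality $\mathbb{E}[\mathrm{He}_n(\xi)\mathrm{He}_m(\xi)] = \delta_{nm}\, n!$ of the Hermite polynomials. A quick numerical check (the quadrature setup is our own illustrative choice):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, HermiteE

nodes, weights = hermegauss(80)
weights = weights / np.sqrt(2 * np.pi)   # E[g(xi)] ~ sum w_i g(x_i), xi ~ N(0,1)

def inner(n, m):
    """E[He_n(xi) * He_m(xi)] by Gauss-Hermite quadrature."""
    Hn = HermiteE([0] * n + [1])(nodes)
    Hm = HermiteE([0] * m + [1])(nodes)
    return float(np.sum(weights * Hn * Hm))

# Gram matrix: should be diag(0!, 1!, ..., 5!)
gram = [[inner(n, m) for m in range(6)] for n in range(6)]
```

With 80 nodes the quadrature is exact for polynomials of this degree, so the off-diagonal entries vanish to machine precision.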

Definition 5.7.31 A generalized $\mathcal{S}$-random variable is a formal sum
$$u = \sum_k \frac{I_k(F_k)}{k!} \quad\text{with } F_k \in \mathrm{H}^{\hat\otimes k}_\nu(Y).$$
We denote the set of all generalized $\mathcal{S}$-random variables by $\mathcal{S}'(Y)$. The action of $u\in\mathcal{S}'(Y)$ on $v\in\mathcal{S}(Y)$ is defined as
$$\langle u,v\rangle = \sum_k \int_{U^k} (u_k,v_k)_Y\, d\mu^k,$$
where $u = \sum_k I_k(u_k)/k!$ and $v = \sum_k I_k(v_k)/k!$ with $u_k, v_k \in \mathrm{H}^{\hat\otimes k}_\nu(Y)$.

Definition 5.7.32 A generalized $\mathcal{S}$-field on a measurable space $(B,\mathcal{B})$ is an $\mathcal{S}'(Y)$-valued function on $B$ such that, for each $x\in B$,
$$u(x) = \sum_n I_n\big(F_n(x)\big) \in \mathcal{S}'(Y),$$
where $x\mapsto F_n(x) = F_n(x;\varsigma_1,\dots,\varsigma_n)$ are deterministic measurable $\mathrm{H}^{\hat\otimes n}(Y)$-valued functions on $B$. We denote the linear space of all such fields by $\mathcal{S}'(B;Y)$. In particular, if $B = [0,T]$, we say $u(t)$ is a generalized $\mathcal{S}$-process. If a generalized $\mathcal{S}$-field $u(x)$ is continuous on $B$, then we write $u\in C\mathcal{S}'(B;Y)$; note that $u(x)$ is continuous if and only if every function $F_n$ is continuous.

Exercise 5.7.33 (B) Verify the following statements:
(a) $\mathcal{D}(Y)\subseteq\mathcal{S}(Y)$ and $\mathcal{S}'(Y)\subseteq\mathcal{D}'(Y)$.
(b)
$$\mathcal{S}'(Y) = \Big\{u = \sum_\alpha u_\alpha\,\xi_\alpha \in \mathcal{D}'(Y) : \sum_{|\alpha|=n} |u_\alpha|_Y^2\,\alpha! < \infty\ \ \forall\, n\ge 1\Big\}.$$

Here are some potentially useful equalities. If $u = \sum_\alpha u_\alpha\,\xi_\alpha$ with $\sum_{|\alpha|=n}|u_\alpha|_Y^2\,\alpha! < \infty$, $n\ge 1$, then
$$u_n = \sum_{|\alpha|=n} u_\alpha\, E_\alpha \in \mathrm{H}^{\hat\otimes n}_\nu(Y)$$
and
$$u = \sum_\alpha u_\alpha\,\xi_\alpha = \sum_{n=0}^\infty \sum_{|\alpha|=n} u_\alpha\,\xi_\alpha = \sum_{n=0}^\infty \frac{I_n(u_n)}{n!} \in \mathcal{S}'(Y).$$

Exercise 5.7.34 (C) Show that, for $|\alpha| = n$,
$$u_\alpha = \frac{1}{\alpha!\,n!}\int_{U^n} u_n(\varsigma)\,E_\alpha(\varsigma)\, d\mu^n,\quad n\ge 1.$$

For $n\ge 0$, let $\mathcal{E}\mathrm{H}^{\hat\otimes n}$ be the space of all finite linear combinations of $E_\alpha$, $|\alpha| = n$. The following statement provides some insight into the transition from the $n$-th multiple integral to an integral on $U^{n+1}$.

Proposition 5.7.35 Let $f\in\mathcal{E}\mathrm{H}^{\hat\otimes n}$, $g\in\mathcal{E}\mathrm{H}$, $(f\otimes g)(z,\varsigma) = f(z)\,g(\varsigma)$, $z\in U^n$, $\varsigma\in U$, and let $\widetilde{f\otimes g}$ be the standard symmetrization of $f\otimes g$. Then
$$I_{n+1}\big(\widetilde{f\otimes g}\big) = I_n(f)\,I_1(g) - \mathrm{projection}_{\mathcal{H}_n}\big[I_n(f)\,I_1(g)\big].$$

Proof Let $f = \sum_{|\alpha|=n} f_\alpha E_\alpha$, $g = \sum_{|\alpha'|=1} g_{\alpha'}E_{\alpha'}$. Then
$$f\otimes g = \sum_{\alpha,\alpha'} f_\alpha g_{\alpha'}\, E_\alpha E_{\alpha'},\qquad \widetilde{f\otimes g} = \sum_{\alpha,\alpha'} f_\alpha g_{\alpha'}\,\widetilde{E_\alpha E_{\alpha'}} = \frac{1}{n+1}\sum_{\alpha,\alpha'} f_\alpha g_{\alpha'}\, E_{\alpha+\alpha'}$$
and
$$I_{n+1}\big(\widetilde{f\otimes g}\big) = \frac{1}{n+1}\sum_{\alpha,\alpha'} f_\alpha g_{\alpha'}\, I_{n+1}(E_{\alpha+\alpha'}) = n!\sum_{\alpha,\alpha'} f_\alpha g_{\alpha'}\,\xi_{\alpha+\alpha'},$$
$$I_n(f) = n!\sum_\alpha f_\alpha\,\xi_\alpha,\qquad I_1(g) = \sum_{\alpha'} g_{\alpha'}\,\xi_{\alpha'}.$$
Since $\xi_{\alpha+\alpha'} = \xi_\alpha\xi_{\alpha'} - \mathrm{projection}_{\mathcal{H}_n}[\xi_\alpha\xi_{\alpha'}]$, it follows that
$$I_{n+1}\big(\widetilde{f\otimes g}\big) = n!\sum_{\alpha,\alpha'} f_\alpha g_{\alpha'}\big[\xi_\alpha\xi_{\alpha'} - \mathrm{projection}_{\mathcal{H}_n}(\xi_\alpha\xi_{\alpha'})\big]$$

$$= I_n(f)\,I_1(g) - \mathrm{projection}_{\mathcal{H}_n}\big[I_n(f)\,I_1(g)\big].$$

Exercise 5.7.36 (C) Show that, if $U = [0,T]$, $d\mu = dt$, and $m_1 = 1_{(a,b)}$, then, according to Proposition 5.7.35, the "measure" of the square $(a,b)^2$ is
$$I_2\big(1^{\hat\otimes 2}_{(a,b)}\big) = \big[I_1\big(1_{(a,b)}\big)\big]^2 - \mathrm{projection}_{\mathcal{H}_1}\big[I_1\big(1_{(a,b)}\big)\big]^2.$$

Wick Product and Skorokhod Integral

We define the Wick product of $\xi_\alpha$ and $\xi_\beta$ by
$$\xi_\alpha\diamond\xi_\beta = \xi_{\alpha+\beta},\qquad 1\diamond\xi_\alpha = \xi_\alpha,\qquad \alpha,\beta\in\mathcal{J}^1.$$
For a Hilbert space $Y$ and $u = \sum_\alpha u_\alpha\,\xi_\alpha$, $v = \sum_\alpha v_\alpha\,\xi_\alpha \in \mathcal{D}'(Y)$,
$$u\diamond v = \sum_\alpha \sum_{\beta\le\alpha} (u_\beta, v_{\alpha-\beta})_Y\,\xi_\alpha.$$


For a generalized random field $u = \sum_\alpha u_\alpha\,\xi_\alpha\in\mathcal{D}'(\mathrm{H}(Y))$, we define the Skorokhod integrals
$$\delta_{(k)}(u) = \sum_\alpha \xi_{\alpha+\epsilon(k)}\int_U u_\alpha(\varsigma)\,E_k(\varsigma)\, d\mu,$$
and
$$\delta(u) = \sum_k\delta_{(k)}(u) = \sum_\alpha\sum_k \xi_{\alpha+\epsilon(k)}\int_U u_\alpha(\varsigma)\,E_k(\varsigma)\, d\mu(\varsigma) = \sum_{|\alpha|\ge 1}\ \sum_{k:\,\epsilon(k)\le\alpha}\xi_\alpha\int_U u_{\alpha-\epsilon(k)}(\varsigma)\,E_k(\varsigma)\, d\mu(\varsigma).$$
For a deterministic $u\in\mathrm{H}(Y)$,
$$\delta_{(k)}(u) = \xi_k\int_U u(\varsigma)\,m_k(\varsigma)\, d\mu(\varsigma),\qquad \delta(u) = \sum_k\xi_k\int_U u(\varsigma)\,m_k(\varsigma)\, d\mu(\varsigma) = \Phi(u).$$

Now we describe the Skorokhod integral $\delta$ in terms of the multiple integrals $I_n$, and show that $\delta$ maps $\mathcal{S}$ to $\mathcal{S}$ and $\mathcal{S}'$ to $\mathcal{S}'$.

Proposition 5.7.37 Let
$$u = \sum_{n=0}^\infty \frac{1}{n!}\,I_n(u_n) \in \mathcal{D}'\big(L_2(U,d\mu)\big),$$
where
$$u_n = u_n(\varsigma;\varsigma_1,\dots,\varsigma_n) = \sum_{|\alpha|=n} u_\alpha(\varsigma)\,E_\alpha(\varsigma_1,\dots,\varsigma_n)$$
and
$$\sum_{|\alpha|=n}\alpha!\int_U |u_\alpha(\varsigma)|^2\, d\mu(\varsigma) < \infty\quad\forall\, n.$$
Then
$$\delta(u) = \sum_{n=0}^\infty \frac{1}{n!}\,I_{n+1}(\tilde u_n),$$
where $\tilde u_n$ is the standard symmetrization of $u_n$ on $U^{n+1}$.
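At the coefficient level, the Skorokhod integral raises one multi-index entry per basis coefficient of the integrand. A minimal sketch (the data layout, with $E_k$ playing the role of $m_k$ for first-order coefficients, is our own simplification):

```python
def skorokhod(u_hat):
    """Skorokhod integral at the level of chaos coefficients.

    u_hat: dict {(alpha, k): float} where alpha is a multi-index (tuple) and
    u_hat[(alpha, k)] is the k-th basis coefficient of the integrand's
    alpha-th chaos coefficient, i.e. the integral of u_alpha * E_k d(mu).
    Returns dict {beta: float} with delta(u) = sum_beta c_beta * xi_beta,
    following delta(u) = sum_alpha sum_k u_hat[alpha, k] * xi_{alpha+(k)}."""
    out = {}
    for (alpha, k), c in u_hat.items():
        beta = tuple(a + (1 if i == k else 0) for i, a in enumerate(alpha))
        out[beta] = out.get(beta, 0.0) + c
    return out

# deterministic integrand u = m_0 (only the alpha = 0 term, coefficient 1):
# delta(u) = xi_{(1,0)} = Phi(m_0), matching the identity delta(u) = Phi(u)
res = skorokhod({((0, 0), 0): 1.0})
```

The operator only shifts indices upward, which is why, combined with the symmetrization, $\delta$ maps the $n$-th chaos into the $(n{+}1)$-st.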


Proof According to Exercise 5.7.33, for j˛j D n; u˛ ./ D

1 ˛ŠnŠ

Z Un

    un .I  0 /E˛  0 n d 0

and X

u./ D

u˛ ./˛ :

˛

Note that Z UU n

Z

1 u˛ ./Ek ./d D ˛ŠnŠ U Dp

  jun I  0 j2 d nC1 < 1; Z

Z Un

U

Z

1 ˛ŠnŠ

    un .rI  0 /Ek .r/ .dr/ E˛  0 n d 0 Z

Un

    un .rI  0 /Ek .r/ .dr/ e˛  0 n d 0 ;

U

and X

un .;  0 / D

E˛ . 0 /Ek ./

Z

u˛ .r/Ek .r/d .r/:

U

j˛jDn;k1

Therefore, the standard symmetrization of un .;  0 /; with  .1 ; : : : ; n / 2 U n , is X

uQ n .;  0 / D

j˛jDnC1;k1

nŠ E˛ .;  0 / .n C 1/Š

Z

2 U;  0 D

u˛.k/ .r/Ep .r/d .r/:

U

By the definition of the Skorokhod integral, ı .u/ D

X X Z ˛

D

k1

 u˛ .r/Ep .r/d ˛C.k/

U

1 X X Z X nD0 j˛jDn k1

u˛ .r/Ek .r/d .r/ U



InC1 .E˛C.k/ / .n C 1/Š


D

1 X X X Z nD0 j˛jDnC1 k1

D

u˛.k/ .r/Ek .r/d .r/ U



InC1 .E˛ / .n C 1/Š

1 X 1 InC1 .Qun /; nŠ nD0

and the statement follows.

Multiple Skorokhod Integrals

For symmetric $u = \sum_\alpha u_\alpha\,\xi_\alpha\in\mathcal{D}'\big(\mathrm{H}^{\hat\otimes n}\big)$, with $u_\alpha\in\mathrm{H}^{\hat\otimes n}$, and $\kappa\in\mathcal{J}^1$, $|\kappa| = n$, we define
$$\delta_\kappa(u) = \sum_\alpha \xi_{\alpha+\kappa}\int_{U^n} u_\alpha\,\frac{E_\kappa}{\kappa!}\, d\mu^n,$$
and then
$$\delta^n(u) = \sum_{|\kappa|=n}\delta_\kappa(u) = \sum_{|\kappa|=n}\sum_\alpha \xi_{\alpha+\kappa}\int_{U^n} u_\alpha\,\frac{E_\kappa}{\kappa!}\, d\mu^n.$$
Let $\delta^0(u) = u_0$.

For a deterministic $u(x_1,\dots,x_n)\in\mathrm{H}^{\hat\otimes n}$, $\delta^n(u) = I_n(u)$. Indeed,
$$\delta^n(u) = \sum_{|\kappa|=n}\xi_\kappa\int_{U^n} u\,\frac{E_\kappa}{\kappa!}\, d\mu^n = \sum_{|\kappa|=n}\frac{I_n(E_\kappa)}{n!}\int_{U^n} u\,\frac{E_\kappa}{\kappa!}\, d\mu^n = I_n(u),$$
because
$$\delta^n(E_\kappa) = \xi_\kappa\int_{U^n} E_\kappa\,\frac{E_\kappa}{\kappa!}\, d\mu^n = n!\,\xi_\kappa = I_n(E_\kappa).$$

Exercise 5.7.38 (B) Confirm that, for $u = \sum_\alpha u_\alpha\,\xi_\alpha\in\mathcal{S}'\big(\mathrm{H}^{\hat\otimes n}\big)$, $n\ge 1$ (with $u_\alpha\in\mathrm{H}^{\hat\otimes n}$), $\delta^n(u) = \delta\big(\delta^{n-1}(\tilde u(\cdot))\big)$, where
$$\tilde u(\varsigma) = \tilde u(\varsigma;\varsigma_1,\dots,\varsigma_{n-1}) = u(\varsigma,\varsigma_1,\dots,\varsigma_{n-1}),\quad \varsigma,\varsigma_i\in U.$$

Exercise 5.7.39 (C) Confirm that, in the framework of a single random variable $\xi$ (see Example 5.7.12), $\delta^k(\xi_n) = \xi_{n+k}$, $k\ge 1$.


For $n\ge 0$, define $\mathcal{J}_n = \{\alpha\in\mathcal{J}^1 : \xi_\alpha\neq 0,\ |\alpha| = n\}$. Let $\mathrm{H}^{\hat\otimes n}_\nu$ be the subspace of $\mathrm{H}^{\hat\otimes n}$ spanned by $E_\alpha$, $\alpha\in\mathcal{J}_n$. For $u\in\mathrm{H}^{\hat\otimes n}$, we denote by $u^{(\nu_n)}$ the orthogonal projection of $u$ onto $\mathrm{H}^{\hat\otimes n}_\nu$.

Lemma 5.7.40 For a deterministic $u\in\mathrm{H}^{\hat\otimes n}$,
$$\mathbb{E}\big[\delta^n(u)^2\big] = n!\int_{U^n}\big|u^{(\nu_n)}\big|^2\, d\mu^n,$$
where $u^{(\nu_n)}$ is the projection of $u$ onto $\mathrm{H}^{\hat\otimes n}_\nu$.

Proof By definition,
$$\delta^n(u) = \sum_{|\kappa|=n}\xi_\kappa\int_{U^n} u\,\frac{E_\kappa}{\kappa!}\, d\mu^n,$$
$$\mathbb{E}\big[\delta^n(u)^2\big] = \sum_{\kappa\in\mathcal{J}_n}\kappa!\Big(\int_{U^n} u\,\frac{E_\kappa}{\kappa!}\, d\mu^n\Big)^2 = n!\int_{U^n}\big|u^{(\nu_n)}\big|^2\, d\mu^n,$$

which completes the proof.

Exercise 5.7.41 (B) Show that it is possible to rewrite Proposition 5.7.11 using multiple integrals as follows: for each $\eta\in L_2(\Omega,\mathcal{F}^0,\mathbb{P})$,
$$\eta = \sum_\alpha \eta_\alpha\,\xi_\alpha = \sum_{n=0}^\infty\frac{1}{n!}\sum_{|\alpha|=n}\eta_\alpha\, I_n(E_\alpha) = \sum_{n=0}^\infty\frac{1}{n!}\,I_n(\eta_n) = \sum_{n=0}^\infty\frac{1}{n!}\,\delta^n(\eta_n)$$
with
$$\eta_\alpha = \frac{\mathbb{E}[\eta\,\xi_\alpha]}{\alpha!} = \frac{1}{\alpha!\,n!}\int_{U^n}\eta_n(\varsigma)\,E_\alpha(\varsigma)\, d\mu^n \quad\text{and}\quad \eta_n = \eta_n(\varsigma) = \sum_{|\alpha|=n}\eta_\alpha\, E_\alpha(\varsigma).$$

5.7.2.1 The Malliavin Derivative

If $\alpha\neq 0$, we define
$$\mathrm{D}\xi_\alpha = \sum_{|\gamma|=1,\ \gamma\le\alpha}\frac{\alpha!}{(\alpha-\gamma)!}\,\xi_{\alpha-\gamma}\,E_\gamma(\varsigma) = \sum_{|\gamma|=1,\ \rho+\gamma=\alpha}\frac{(\rho+\gamma)!}{\rho!}\,E_\gamma(\varsigma)\,\xi_\rho,$$
and $\mathrm{D}\xi_\alpha = 0$ if $\alpha = 0$.


For $u = \sum_\alpha u_\alpha\,\xi_\alpha\in\mathcal{D}$, with the convention $\xi_{\alpha-\epsilon(k)} = 0$ if $\alpha_k = 0$, we get
$$\mathrm{D}_k u = \sum_{|\alpha|\ge 1}\alpha_k\, u_\alpha\,\xi_{\alpha-\epsilon(k)}\, m_k(\varsigma),$$
$$\mathrm{D}_\gamma u = \sum_\alpha\frac{\alpha!}{(\alpha-\gamma)!}\,u_\alpha\,\xi_{\alpha-\gamma}\,E_\gamma(\varsigma),\quad |\gamma| = 1,$$
$$\mathrm{D}u = \sum_{|\gamma|=1}\mathrm{D}_\gamma u = \sum_{|\gamma|=1}\sum_\alpha\frac{\alpha!}{(\alpha-\gamma)!}\,u_\alpha\,\xi_{\alpha-\gamma}\,E_\gamma(\varsigma) = \sum_{|\gamma|=1}\sum_\alpha\frac{(\alpha+\gamma)!}{\alpha!}\,u_{\alpha+\gamma}\,E_\gamma(\varsigma)\,\xi_\alpha.$$
In a standard way we define the higher order Malliavin derivatives: for $u = \sum_\alpha u_\alpha\,\xi_\alpha\in\mathcal{D}$,
$$\mathrm{D}^n_\gamma u = \sum_\alpha\frac{\alpha!}{(\alpha-\gamma)!\,\gamma!}\,u_\alpha\,\xi_{\alpha-\gamma}\,E_\gamma(\varsigma_1,\dots,\varsigma_n),\quad |\gamma| = n,$$
$$\mathrm{D}^n u = \sum_{|\gamma|=n}\mathrm{D}^n_\gamma u = \sum_{|\gamma|=n}\sum_\alpha\frac{(\alpha+\gamma)!}{\alpha!\,\gamma!}\,u_{\alpha+\gamma}\,E_\gamma(\varsigma_1,\dots,\varsigma_n)\,\xi_\alpha.$$

We will now compute the Malliavin derivative of multiple integrals.

Proposition 5.7.42 Let $u = \sum_{|\alpha|=n} u_\alpha E_\alpha\in\mathrm{H}^{\hat\otimes n}(Y)$ have only finitely many $u_\alpha\neq 0$. Then
$$\mathrm{D}I_n(u) = n\,I_{n-1}\big(u(\cdot,t)\big),\quad t\in U.$$

Proof By definition,
$$I_n(u) = \sum_{|\alpha|=n} u_\alpha\, I_n(E_\alpha) = n!\sum_{|\alpha|=n} u_\alpha\,\xi_\alpha\in\mathcal{D}.$$

Since 1 u˛ D ˛ŠnŠ

Z Un

uE˛ d n ;

and, for t; 1 ; : : : ; n 2 U; u.t; 1 ; : : : ; n1 / D

X k

Ek .t/

Z U

  u.t0 ; 1 ; : : : ; n1 /Ek t0 d


is a finite sum, Du D

X X .˛ C .k//Š ˛

˛Š

k

u˛C.k/ Ek ./ ˛ :

Next, we use the equality n

o n o .˛; .k// W j˛ C .k/j D n; k  1 D .˛; .k// W j˛j D n  1; k  1

to find X

DIn .u/ D

k1;j˛C.k/jDn

D

X X .˛ C /Š ˛Š

j˛jDn1 k1

.˛ C /Š u˛C.k/ nŠEk .t/ ˛ ˛Š u˛C.k/ nŠEk .t/ ˛

 X X 1 Z uE˛C.k/ d n E .t/ ˛ ˛Š j˛jDn1 k1 Z X X E˛ D Ek .t/ ˛ n u Ek d n ˛Š k1 D

j˛jDn1

D

X

n.n  1/Š

Z

j˛jDn1

D

X

u.t; /

 E˛ d n1 ˛ ˛Š.n  1/Š

n.n  1/Šu˛ .t; /˛ D nIn1 .u .; t// :

j˛jDn1

We used here that Eg ˛ Ek D

1 j˛jŠ E˛C.k/ D E˛C.k/ ; n j˛ C .k/jŠ

where e f is the symmetrization of f . If j˛j D n  1 and ˛ D 0, then, for each k  1; 0 D u˛C.k/

1 D .˛ C .k//Š.n  1/Š

Z Un

uE˛ Ek d n

and u .t; /˛ D

Z U n1

  u.t;  0 /E˛  0 d n1 D 0:

Exercise 5.7.43 (C) Confirm that if $u\in\mathrm{H}^{\hat\otimes n}_\nu(Y)$, then $u(\cdot,t)\in\mathrm{H}^{\hat\otimes(n-1)}(Y)$ for $\mu$-almost all $t\in U$.

As suggested by Proposition 5.7.42, for an arbitrary $v = \sum_{|\alpha|=n} v_\alpha E_\alpha\in\mathrm{H}^{\hat\otimes n}_\nu(Y)$, we define
$$\mathrm{D}I_n(v) = n\,I_{n-1}\big(v(y,\cdot)\big),\ y\in U;\qquad I_0(c) = c,\ c\in\mathbb{R}.$$
For $u = \sum_n\frac{I_n(u_n)}{n!}\in\mathcal{S}'(Y)$ we define
$$\mathrm{D}u(y) = \sum_n\frac{\mathrm{D}I_n(u_n)}{n!} = \sum_n\frac{I_{n-1}\big(u_n(y,\cdot)\big)}{(n-1)!}.$$

Exercise 5.7.44 (C) Confirm that $\mathrm{D}$ maps $\mathcal{S}(\mathbb{R})$ to $\mathcal{S}(Y)$.

Exercise 5.7.45 (B) Show that, in the framework of a single random variable $\xi$ (see Example 5.7.12),
$$\mathrm{D}^k\big(\xi^{\diamond n}\big) = \frac{n!}{(n-k)!}\,\xi^{\diamond(n-k)},\ k\ge 1,\ \text{if } \xi^{\diamond n}\neq 0,\quad\text{and}\quad \mathrm{D}^k\big(\xi^{\diamond n}\big) = 0\ \text{if } \xi^{\diamond n} = 0.$$

5.7.3 Adapted Stochastic Processes

In this subsection we assume that $U = [0,T]\times V$, $\mathcal{U} = \mathcal{B}([0,T])\times\mathcal{V}$, $d\mu = dt\,d\nu$. Let
$$u = \sum_n\frac{I_n(u_n)}{n!}\in\mathcal{S}'(Y), \qquad (5.7.23)$$
with $u_n\in\mathrm{H}^{\hat\otimes n}_\nu(Y)$, $n\ge 0$, and $u_n = u_n(t_1,\lambda_1,\dots,t_n,\lambda_n)$, $(t_i,\lambda_i)\in U$, $i = 1,\dots,n$. For $t\in[0,T]$, let $Q^n_t = ([0,t]\times V)^n$.

Definition 5.7.46 Let $t_0\in[0,T]$. A random variable $u$ defined by (5.7.23) is called $\mathcal{F}_{t_0}$-measurable if, for each $n$, $\mathrm{supp}(u_n)\subseteq Q^n_{t_0}$, that is, $u_n(t_1,\lambda_1,\dots,t_n,\lambda_n) = 0$ $\mu^n$-a.e. if $t_i > t_0$ for some $i$.

Proposition 5.7.47 A random variable $u\in\mathcal{S}'(Y)$ defined by (5.7.23) is $\mathcal{F}_{t_0}$-measurable if and only if $\mathrm{D}u(t,\lambda) = 0$ for all $t > t_0$.

Proof For $u\in\mathcal{S}'(Y)$ defined by (5.7.23),
$$\mathrm{D}u = \sum_{n\ge 1}\frac{1}{(n-1)!}\,I_{n-1}\big(u_n(t,\lambda,\cdot)\big),\quad (t,\lambda)\in U, \qquad (5.7.24)$$
and
$$\mathbb{E}\big[\mathrm{D}u(t,\lambda)^2\big] = \sum_n\frac{1}{(n-1)!}\,\mathbb{E}\big[I_{n-1}\big(u_n(t,\lambda,\cdot)\big)^2\big] = \sum_n\int_{U^{n-1}}u_n(t,\lambda,\cdot)^2\, d\mu^{n-1} = 0$$
for $t > t_0$ if and only if $u_n(t,\lambda,\cdot) = 0$ $\mu^{n-1}$-a.e. for $t > t_0$.

Next, we introduce the notion of an adapted random process. Consider $u\in\mathcal{S}'(U;Y)$, i.e.,
$$u(t,\lambda) = \sum_n I_n\big(u_n(t,\lambda,\cdot)\big) \qquad (5.7.25)$$
with $u_n(t,\lambda,\cdot)\in\mathrm{H}^{\hat\otimes n}_\nu(Y)$ for all $(t,\lambda)\in U$:
$$u_n(t,\lambda,\varsigma) = \sum_{|\alpha|=n}u_\alpha(t,\lambda)\,E_\alpha(\varsigma),\quad (t,\lambda)\in U. \qquad (5.7.26)$$

Definition 5.7.48 A random field $u(t,\lambda)$ on $U$ defined by (5.7.25) is called adapted if $\mathrm{supp}\big(u_n(t,\lambda,\cdot)\big)\subseteq Q^n_t$, $\lambda\in V$, for every $t\in[0,T]$.

A straightforward consequence of Proposition 5.7.47 is the following result.

Corollary 5.7.49 A random field $u\in\mathcal{S}'(U;Y)$ is adapted if and only if, for each $t\in[0,T]$, the Malliavin derivative $\mathrm{D}u(t,\lambda;s_1,\lambda_1,\dots,s_n,\lambda_n) = 0$, $\lambda\in V$, if $s_i > t$ for some $i$.

Given a random field $u(t,\lambda)$ on $U = [0,T]\times V$, consider its Skorokhod integral
$$\delta(u)_t = \delta\big(1_{[0,t]}u\big),\quad 0\le t\le T.$$

Proposition 5.7.50 Consider a random field $u\in\mathcal{S}'(\mathrm{H}(Y);Y)$, that is, (5.7.25) holds and
$$\int_{U^{n+1}}\|u_n\|_Y^2\, d\mu^{n+1} < \infty\quad\forall\, n.$$
If $u$ is adapted, then $\delta(u)_t$, $0\le t\le T$, is adapted as well.


Proof Since
$$u(t,\lambda) = \sum_n\frac{I_n\big(u_n(t,\lambda,\cdot)\big)}{n!}$$
is adapted, with $u_n$ satisfying (5.7.26), we have $\mathrm{supp}\big(u_n(t,\lambda,\cdot)\big)\subseteq Q^n_t$, $\lambda\in V$, for all $n$ and $t\in[0,T]$, that is, $u_n(t,\cdot) = u_n(t,\cdot)\,1_{Q^n_t}$. By Proposition 5.7.37,
$$\delta(u)_t = \delta\big(u\,1_{[0,t]}\big) = \sum_{n=0}^\infty\frac{1}{n!}\,I_{n+1}\big(\widetilde{u_n 1_{[0,t]}}\big),$$
where $\widetilde{u_n 1_{[0,t]}}$ is the standard symmetrization of $u_n 1_{[0,t]} = u_n 1_{Q^{n+1}_t}$. Since its support is obviously a subset of $Q^{n+1}_t$, the statement follows.

5.7.3.1 Itô-Skorokhod Isometry

Now we estimate the $L_2$-norm of the Skorokhod integral.

Proposition 5.7.51 Let $u = u(\varsigma) = \sum_\alpha u_\alpha(\varsigma)\,\xi_\alpha\in L_2\big(\mathcal{D}(U;Y),d\mu\big)$, i.e. $u_\alpha\in\mathrm{H}(Y)$:
$$\int_U |u_\alpha(\varsigma)|_Y^2\, d\mu < \infty,\quad \alpha\in\mathcal{J}^1,$$
and only finitely many of the $u_\alpha$ are not zero. Then
$$\mathbb{E}|\delta(u)|_Y^2 \le \mathbb{E}\int_U |u(\varsigma)|_Y^2\, d\mu + \mathbb{E}\int_{U^2}\big(\mathrm{D}u(\varsigma;\varsigma'),\mathrm{D}u(\varsigma';\varsigma)\big)_Y\,\mu(d\varsigma)\,\mu(d\varsigma'),$$
where
$$\mathrm{D}u(\varsigma;\varsigma') = \sum_\alpha\sum_{k\ge 1}\frac{(\alpha+\epsilon(k))!}{\alpha!}\,u_{\alpha+\epsilon(k)}(\varsigma)\,E_k(\varsigma')\,\xi_\alpha.$$
Equality holds if $\xi_\alpha\neq 0$ for all $\alpha$.

Proof By definition,

X j˛j1

XZ k1

u˛p ./Ek ./d D U

XX ˛

k1

˛C.k/

Z

u˛ ./Ek ./d : U


Hence  2   Z X   ˛Š  u˛.k/ .x/Ek .x/d    k1 U  j˛j1;'˛ 6D0 Y Z   X X u˛.k/ .x/; u˛.k0 / .x0 /  ˛Š

X

E kı.u/k2Y D

U2

k;k0 1 j˛j1

Y

   Ek .x/Ek0 .x0 / .dx/ dx0 X 

D

 X   C    / WD A C B:

kDk0 1

k6Dk0 1

X X k1 ˛.k/

Z 2    ˛Š  u .x/E .x/d ˛.k/ k   U

Y

Z 2 XX    D . C .k//Š  u .x/Ek .x/d   : 

U

k1

Y

Also, BD

X

X

k6Dk0 1 ˛.k/C.k0 /

˛Š

Z U2

  u˛.k/ .x/; u˛.k0 / .x0 /

Y

   Ek .x/Ek0 .x0 / .dx/ dx0 D

X X

.ˇ C .k/ C .k0 //Š

k¤k0 1 ˇ

Z U2

.uˇC.k0 / .x/; uˇC.k/ .x0 //Y

   Ek .x/Ek0 .x0 / .dx/ dx0 : On the other hand,   E Du.xI x0 /; Du.x0 I x/ D

X ˛

Y

 X .˛ C .k//Š  u˛C.k/ .x/; u˛C.k0 / .x0 / ˛Š Y ˛Š 0 k;k 1


.˛ C .k0 //Š  0  Ek x Ek0 .x/ ˛Š   X X   X X  C    WD C C D; D



˛ kDk0 1

˛ k6Dk0 1

and CD

X X

˛Š

k1 ˛.k/

DD

X

˛Š

˛

  ˛Š u˛ .x/; u˛ .x0 / Ek .x/ Ek .x0 /; Y .˛  .k//Š

X k6Dk0 1

 .˛ C .k0 //Š   .˛ C .k//Š  u˛C.k/ .x/Ek x0 ; u˛C.k0 / .x0 /Ek0 .x/ : Y ˛Š ˛Š R R The statement of the theorem follows after comparing U2 Cd 2 ; U2 Dd 2 and A; B, because Z  Z Z 2 Cd 2 C E Dd 2 : (5.7.27) AD ju./jY d ; B D U2

U2

U

Exercise 5.7.52 (C) Verify (5.7.27).

Corollary 5.7.53 Let $u\in\mathcal{S}'(\mathrm{H}(Y);Y)$, i.e. (5.7.25) holds with
$$\int_{U^{n+1}}|u_n|_Y^2\, d\mu^{n+1} < \infty\quad\forall\, n.$$
Then the statement of Proposition 5.7.51 holds for $\delta(u)$.

Proof It is enough to prove the statement for $u(\varsigma) = I_n\big(u_n(\varsigma)\big)$, where
$$u_n(\varsigma,\cdot) = \sum_{|\alpha|=n}u_\alpha(\varsigma)\,E_\alpha(\cdot)$$
with a finite number of nonzero $u_\alpha\in\mathrm{H}(Y)$. In this case,
$$u = n!\sum_{|\alpha|=n}u_\alpha(\varsigma)\,\xi_\alpha\in L_2\big(\mathcal{D}(U;Y),d\mu\big)$$
and Proposition 5.7.51 applies. We obtain the general case by linearity.

Exercise 5.7.54 (B) Confirm that, in the framework of a single r.v. $\xi$ (see Example 5.7.12) with all $\xi_n\neq 0$, for $u = \sum_n u_n\,\xi_n$ we have
$$\mathbb{E}\big[\delta(u)^2\big] = \mathbb{E}|u|^2 + \mathbb{E}\big[(\mathrm{D}u)^2\big].$$

For an adapted random field on $U = [0,T]\times V$, $d\mu = dt\,d\nu$, the Itô-type inequality holds as a direct consequence of Corollary 5.7.53.

Corollary 5.7.55 Let $\mathrm{H} = L_2\big([0,T]\times V,\ dt\,d\nu\big)$. Assume $u\in\mathcal{S}(\mathrm{H}(Y);Y)$ is an adapted random field on $U = [0,T]\times V$. Then
$$\mathbb{E}|\delta(u)|_Y^2\le\mathbb{E}\int_U|u(t,\lambda)|_Y^2\, d\mu.$$

Duality Between $\delta$ and $\mathrm{D}$

Proposition 5.7.56 Let $u = u(\varsigma) = \sum_\alpha u_\alpha(\varsigma)\,\xi_\alpha\in L_2\big(\mathcal{D}(U),d\mu\big)$, i.e.
$$\int_U|u_\alpha(\varsigma)|^2\, d\mu < \infty,\quad\alpha\in\mathcal{J}^1,$$
with a finite number of $u_\alpha\neq 0$. Let $\eta = \sum_\alpha\eta_\alpha\,\xi_\alpha\in\mathcal{D}$. Then
$$\mathbb{E}\big[\delta(u)\,\eta\big] = \mathbb{E}\int_U u(\varsigma)\,\mathrm{D}\eta(\varsigma)\, d\mu.$$

Proof By direct computation,
$$\mathbb{E}\big[\delta(u)\,\eta\big] = \sum_\alpha\alpha!\,\eta_\alpha\sum_{k\ge 1}\int_U u_{\alpha-\epsilon(k)}(\varsigma)\,E_k(\varsigma)\, d\mu = \sum_\gamma\sum_{k\ge 1}(\gamma+\epsilon(k))!\,\eta_{\gamma+\epsilon(k)}\int_U u_\gamma(\varsigma)\,E_k(\varsigma)\, d\mu = \mathbb{E}\int_U u(\varsigma)\,\mathrm{D}\eta(\varsigma)\, d\mu.$$
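In the Gaussian single-variable case, where $\delta(u) = \xi u - \mathrm{D}u$ and $\mathrm{D}$ is differentiation, the duality of Proposition 5.7.56 can be checked symbolically. A minimal sketch (the polynomial test pair is our own choice):

```python
import sympy as sp

x = sp.symbols('x')
s_rho = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)   # standard Gaussian density

def E(expr):
    """Expectation under the standard Gaussian measure."""
    return sp.integrate(expr * s_rho, (x, -sp.oo, sp.oo))

u, eta = x**2, x**3
delta_u = x * u - sp.diff(u, x)     # Gaussian case: delta(u) = xi*u - Du
lhs = E(delta_u * eta)              # E[delta(u) * eta]
rhs = E(u * sp.diff(eta, x))        # E[u * D eta]
```

Both sides evaluate to $\mathbb{E}[x^6] - 2\mathbb{E}[x^4] = 15 - 6 = 9$, i.e. $3\,\mathbb{E}[x^4] = 9$, confirming the duality for this pair.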

5.7.4 Stochastic Differential Equations To simplify the presentation, we assume from now on that ˛ 6D 0 for all ˛ 2 J 1 .


Let $f = \sum_k f_k m_k\in L_2(U,d\mu)$. Writing $f^\alpha = \prod_k f_k^{\alpha_k}$ and $\Phi(f) = \sum_k f_k\,\xi_k$, the definition of the Wick product implies that, for $n\ge 1$,
$$\Phi(f)^{\diamond n} := \underbrace{\Phi(f)\diamond\cdots\diamond\Phi(f)}_{n\ \text{times}} = \sum_{|\alpha|=n}\frac{n!}{\alpha!}\,f^\alpha\,\xi_\alpha.$$
Note that
$$\mathbb{E}\Big[\Big(\frac{1}{n!}\,\Phi(f)^{\diamond n}\Big)^2\Big] = \sum_{|\alpha|=n}\frac{f^{2\alpha}}{\alpha!} = \frac{1}{n!}\sum_{|\alpha|=n}\frac{n!\,f^{2\alpha}}{\alpha!} = \frac{1}{n!}\Big(\sum_i f_i^2\Big)^{\!n} = \frac{1}{n!}\,\|f\|^{2n}_{L_2(\mu)} < \infty, \qquad (5.7.28)$$
that is, $\Phi(f)^{\diamond n}\in L_2(\Omega)$.

Let $\mathcal{Z}$ be the set of all real sequences $z = (z_k,\ k\ge 1)$ such that every sequence in $\mathcal{Z}$ has only finitely many non-zero terms. The following result holds.

Proposition 5.7.57
a) Let $f = \sum_k f_k m_k\in L_2(\mu)$. Then $\Phi(f)^{\diamond n} = I_n\big(f^{\otimes n}\big)$ and
$$\exp^\diamond\{\Phi(f)\} := \sum_{n=0}^\infty\frac{\Phi(f)^{\diamond n}}{n!} = \sum_{n=0}^\infty\sum_{|\alpha|=n}\frac{f^\alpha}{\alpha!}\,\xi_\alpha = \sum_\alpha\frac{f^\alpha}{\alpha!}\,\xi_\alpha\in L_2(\Omega)$$
with
$$f_\alpha = \frac{1}{n!}\int f^{\otimes n}E_\alpha\, d\mu^n.$$
Moreover,
$$\mathbb{E}\Big[\big(\exp^\diamond\{\Phi(f)\}\big)^2\Big] = \exp\big\{|f|^2_{L_2(\mu)}\big\}. \qquad (5.7.29)$$
b) Let $z = (z_k)\in\mathcal{Z}$. Then, $\mathbb{P}$-a.s.,
$$p(z) = \exp^\diamond\Big\{\Phi\Big(\sum_k z_k m_k\Big)\Big\} = \sum_\alpha\frac{z^\alpha}{\alpha!}\,\xi_\alpha,\quad z = (z_k)\in\mathcal{Z},$$
is an analytic function of $z$ and
$$\frac{\partial^{|\alpha|}p(z)}{\partial z^\alpha}\Big|_{z=0} = \xi_\alpha.$$

Proof
a) In terms of multiple integrals we have
$$\Phi(f)^{\diamond n} = \sum_{|\alpha|=n}\frac{n!\,f^\alpha}{\alpha!}\,\xi_\alpha = \sum_{|\alpha|=n}\frac{f^\alpha}{\alpha!}\,I_n(E_\alpha) = I_n\Big(\sum_{|\alpha|=n}\frac{f^\alpha}{\alpha!}\,E_\alpha\Big) = I_n\big(f^{\otimes n}\big),$$
with
$$f^{\otimes n} = \sum_{|\alpha|=n}\frac{f^\alpha}{\alpha!}\,E_\alpha,\qquad f_\alpha = \frac{1}{n!}\int f^{\otimes n}E_\alpha\, d\mu^n.$$
Moreover (see (5.7.28)),
$$\mathbb{E}\Big(\sum_n\Big[\frac{\Phi(f)^{\diamond n}}{n!}\Big]^2\Big) = \sum_n\frac{1}{n!}\,\|f\|^{2n}_{L_2(\mu)} = \exp\big\{\|f\|^2_{L_2(\mu)}\big\}.$$
b) Let $z = (z_k)\in\mathcal{Z}$. Then
$$\mathbb{E}|p(z)|^2 = \sum_\alpha\frac{|z^{2\alpha}|}{\alpha!} = \prod_k\sum_n\frac{|z_k|^{2n}}{n!} = \exp\Big(\sum_k|z_k|^2\Big),$$
that is, $p(z)$ is represented by a power series that, with probability one, converges for all $z\in\mathcal{Z}$.

In the time-dependent case the following statement holds.
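In the Gaussian case the Wick exponential is the familiar stochastic exponential $\exp^\diamond\{\Phi(f)\} = \exp\{\Phi(f) - \|f\|^2/2\}$, and (5.7.29) can be verified symbolically. A minimal sketch with a single Gaussian variable and $\|f\| = \sigma$ (an assumption of this reduction, not a statement from the text):

```python
import sympy as sp

x = sp.symbols('x')
s = sp.symbols('sigma', positive=True)
rho = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)   # density of xi ~ N(0,1)

# Gaussian reduction: exp^<>{Phi(f)} = exp(sigma*xi - sigma^2/2) with ||f|| = sigma
wick_exp = sp.exp(s * x - s**2 / 2)

# (5.7.29): E[(exp^<>{Phi(f)})^2] should equal exp(||f||^2) = exp(sigma^2)
second_moment = sp.simplify(sp.integrate(wick_exp**2 * rho, (x, -sp.oo, sp.oo)))
```

The integral evaluates to $e^{2\sigma^2}\cdot e^{-\sigma^2} = e^{\sigma^2}$, matching (5.7.29).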


Corollary 5.7.58 Let $U = [0,T]\times V$, $d\mu = dt\,d\nu$, and let $G\in L_2\big([0,T]\times V,\ d\mu\big)$. Consider $M_t = \exp^\diamond\big\{\Phi\big(1_{[s,t]}G\big)\big\}$, $0\le s\le t\le T$. Then
$$M_t = \sum_\alpha\frac{H^\alpha(s,t)}{\alpha!}\,\xi_\alpha = \sum_{n=0}^\infty\frac{I_n\big(H^{(n)}(s,t)\big)}{n!},\quad s\le t\le T,$$
with
$$H^\alpha(s,t) = \prod_k H_k^{\alpha_k}(s,t),\qquad H_k(s,t) = \int_s^t\Big(\int_V G(r,\lambda)\,m_k(r,\lambda)\, d\nu\Big)dr,$$
$$H^{(n)}(s,t) = \big(1_{[s,t]}G\big)^{\otimes n}.$$
Also,
$$\frac{H^\alpha(s,t)}{\alpha!} = \frac{1}{|\alpha|!\,\alpha!}\int H^{(|\alpha|)}E_\alpha\, d\mu^{|\alpha|},$$
and the process $M$ is adapted.

Exercise 5.7.59 (B) Prove Corollary 5.7.58.

5.7.4.2 Linear SDEs

Let $U = [0,T]\times V$, $d\mu = dt\,d\nu$. Let $w = \sum_\alpha w_\alpha\,\xi_\alpha\in\mathcal{D}'$, $f = \sum_\alpha f_\alpha(t,\lambda)\,\xi_\alpha\in L_2\big(\mathcal{D}'(U),d\mu\big)$, and
$$\dot\Phi(t,\lambda) = \sum_k m_k(t,\lambda)\,\xi_k.$$
For $G\in L_2(\mu)$, consider the non-homogeneous linear equation
$$\dot u(t) = \int_V\big[u(t)\,G(t,\lambda) + f(t,\lambda)\big]\diamond\dot\Phi(t,\lambda)\,\nu(d\lambda),\quad u(0) = w. \qquad (5.7.30)$$
Using the notation
$$\Phi(dt,d\lambda) = \dot\Phi(t,\lambda)\, d\mu(t,\lambda),$$
we write (5.7.30) in the more compact form
$$u(t) = w + \int_0^t\int_V\big[u(s)\,G(s,\lambda) + f(s,\lambda)\big]\diamond\Phi(ds,d\lambda),\quad 0\le t\le T. \qquad (5.7.31)$$


We seek a solution to (5.7.31) in the form
$$u(t) = \sum_\alpha u_\alpha(t)\,\xi_\alpha,\quad 0\le t\le T. \qquad (5.7.32)$$

Lemma 5.7.60 Let $w = \sum_\alpha w_\alpha\,\xi_\alpha\in\mathcal{D}'$ and $f = \sum_\alpha f_\alpha(t,\lambda)\,\xi_\alpha\in L_2\big(\mathcal{D}'(U),d\mu\big)$. Then there is a unique solution to (5.7.31) in $C\mathcal{D}'([0,T])$. (Recall that $C\mathcal{D}'([0,T])$ is the class of all real-valued generalized processes $u = \sum_\alpha u_\alpha(t)\,\xi_\alpha$ on $[0,T]$ such that each $u_\alpha$ is continuous on $[0,T]$.) The solution $u$ given by (5.7.32) has the following coefficients: $u_{(0)}(t) = w_{(0)}$,
$$u_\alpha(t) = w_\alpha + \sum_{k\ge 1}\int_0^t\int_V\big[u_{\alpha-\epsilon(k)}(r)\,G(r,\lambda) + f_{\alpha-\epsilon(k)}(r,\lambda)\big]E_k(r,\lambda)\, d\nu\, dr,\quad 0\le t\le T. \qquad (5.7.33)$$

Proof Plugging the series (5.7.32) into (5.7.31), we immediately get the system (5.7.33). The system is lower triangular, and so we start with $u_{(0)}(t) = w_{(0)}$ and find a unique continuous $u_\alpha(t)$ for every $\alpha$ with $|\alpha|\ge 1$.

Let
$$H_k(t) = \int_0^t\int_V G(s,\lambda)\,m_k(s,\lambda)\, ds\, d\nu,\qquad H^\alpha(t) = \prod_k H_k^{\alpha_k}(t),\quad \alpha\in\mathcal{J}^1.$$

For $w = \sum_\alpha w_\alpha\,\xi_\alpha\in\mathcal{D}'$, let
$$\|w\|^2 = \sum_\alpha|w_\alpha|^2\,\alpha! + \sup_t\sum_\alpha\alpha!\Big(\sum_{\beta\le\alpha}w_{\alpha-\beta}\,\frac{H^\beta(t)}{\beta!}\Big)^2.$$

Lemma 5.7.61 Let $f = 0$ and $w = \sum_\alpha w_\alpha\,\xi_\alpha\in\mathcal{D}'$.

(i) The solution to (5.7.31) is
$$u(t) = w\diamond\exp^\diamond\big\{\Phi\big(1_{[0,t]}G\big)\big\} = \sum_\alpha\Big(\sum_{\beta\le\alpha}w_{\alpha-\beta}\,\frac{H^\beta(t)}{\beta!}\Big)\xi_\alpha,\quad 0\le t\le T. \qquad (5.7.34)$$
(ii) If $w\in\mathcal{S}'(\mathbb{R})$, then $u\in C\mathcal{S}'([0,T];\mathbb{R})$.
(iii) $\sup_t\mathbb{E}\big[u^2(t)\big]\le\|w\|^2$; in particular, $u\in L_2(\Omega,\mathbb{P})$ if $\|w\| < \infty$ (see Example 5.7.62 below).
(iv) If $w = w_{(0)}$ is non-random, then $u(t)$ is adapted and
$$\mathbb{E}\big[u^2(t)\big] = w_0^2\exp\Big(\int_0^t\int_V|G|^2\, d\nu\, ds\Big).$$

Proof

˚   (i) Let Mt D exp˘ ˚ 1Œ0;t G . By Corollary 5.7.58, .r/ WD w ˘ Mr D

XX

H ˇ .r/ ˛ ; 0  r  T: ˇŠ

w˛ˇ

˛ ˇ˛

We will show that

solves (5.7.31). Indeed,

  ı 1Œ0;t G D

X XZ t Z

j˛j1 k1

D

0

X XZ

j˛j1 k1

X

w˛.k/ˇ

V ˇ˛.k/

H ˇ .r/ G .r; / Ek .r; /d ˛ ˇŠ

X U ˇ˛.k/

R

˝jˇj 1Œ0;r G Eˇ d jˇj w˛.k/ˇ 1Œ0;t .r/G .r; / Ep .r; /d ˛ jˇjŠˇŠ X XZ X D j˛j1 jpjD1

U j j

˛;jj1

 ˝jj1 w˛ 1Œ0;r G 1Œ0;t .r/G .r; / D

X

X

j˛j1 ˛;jj1

Z

E d jj ˛ .jj  1/ŠŠ

 ˝jj w˛ 1Œ0;t G

E d jj ˛ D jjŠŠ

.t/  w;

and (5.7.31) holds. (ii) follows from (5.7.33). The part (iii) is a direct consequence of (5.7.34) and the definition of kwk. Finally, (iv) follows from (5.7.34), Proposition 5.7.57 and Corollary 5.7.58. Example 5.7.62 Let F 2 L2 .U; d /. Taking w D exp˘ f˚ .F/g in (5.7.34), we see that the solution to (5.7.31) ˚   u.t/ D exp˘ f .F/g ˘ exp˘  1Œ0;t G ˚   D exp˘  .F/ C  1Œ0;t G

5.7 Distribution Free Stochastic Analysis

369

is, in general, non-adapted, 5.7.57, supt E u.t/2 < 1. P but, by Proposition For s  t and f D ˛ f˛ .t; / ˛ 2 L2 .D0 .U/ ; d /, define Hk .s; t/ D

Z

jj f jj20;T D E jj f jj2T D 0 @

X

˛Š

Z

1Œs;t Gmk d ; H ˛ .s; t/ D j f j2 d ; U

j f˛ j2 d C sup

0

X U

X

t

XZ t Z k1

Hk˛k k.s; t/; ˛ 2 J 1 ;

k

Z

U

˛

Y

ˇC.k/˛;jˇjn

Proposition 5.7.63 Let f D D0 : Then

P

˛ f˛

˛Š

˛

12 H ˇ .s; t/ A d : f˛.k/ˇ .s; /Ek .s; / ˇŠ

.t; / ˛ 2 L2 .D0 .U/ ; d / ; w D

P

˛ w˛ ˛

2

(i) The unique solution to (5.7.31) in CD0 .Œ0; TI R/ is u .t/ ˚   D w ˘ exp˘ ˚ Œ0;t G C D

X

j˛j1

C

X ˛

XZ t Z k1

X

0

ˇ˛

0

X U

w˛ˇ

Z tZ

ˇC.k/˛;jˇjn

(5.7.35) ˚   exp˘ ˚ 1Œs;t G ˘ f .s; / ˘ ˚.ds; d/ f˛.k/ˇ .s; /Ek .s; /

H ˇ .s; t/ d ˇŠ

H.r/ˇ : ˇŠ

(ii) RTheR solution is the limit of the Picard iterations un .t/: u0 .t/ D w C t f .s; / ˘ ˚ .ds; d/ ; 0 unC1 .t/ D w C

Z tZ 0

Œun .s/G .s; / C f .s; / ˘ ˚ .ds; d/ ; 0  t  T: (5.7.36)

That is,  ˘k Z t Z n   n X X  1Œs;t G ˘k  1Œ0;t G C ˘ f .s; / ˘ ˚ .ds; d/ : u .t/ D w ˘ kŠ kŠ 0 kD0 kD0 (5.7.37) n

370

5 The Polynomial Chaos Method

If f 2 S 0 .H/ and w 2 S 0 , then un ; u 2 CS 0 .Œ0; TI R/ I (iii)  

sup E u2 .t/  2 jjwjj2 C jj f jj2T : t

(iv) If w D w0 is deterministic and f is adapted with kf k20;T < 1, then u .t/ is adapted and   Z

sup E u2 .t/  C w20 C E j f j2 d : t

U

Proof Because of Lemma 5.7.61 we assume w D 0. (i) Let ˚   l .s; / D f .s; / ˘ exp˘ ˚ 1Œs;r G D

X

X

f˛ˇ .s; /

˛ ˇ˛;jˇjn

D

X

X

˛

H ˇ .s; r/ ; 0srT ˇŠ

f˛ˇ .s; /

ˇ˛;jˇjn

1 jˇjŠˇŠ

Z



1Œs;t G

˝jˇj

Eˇ d jˇj :

and, for 0  r  T, set Z rZ .r/ D l.s; / ˘  .ds; d/ 0

X

D

Z

X k1; .k/˛

j˛j1

X

f˛..k/Cˇ/ .s; /Ek .s; /d

ˇC.k/˛;jˇjn

For 0  r  T and  D .s1 ; 1 ; : : : ; sk ; k / 2 U k ; k  1, define

.r; k; G; f / D .r; k; G; f / .s1 ; 1 ; : : : ; sk ; k / D

k k X  Y  f sO; j ŒOs;r .si / G .si ; i / ; jD1

iD1;i6Dj

where sO D min fsi ; 1  i  kg : By Corollary 5.7.58, .r/ D

X j˛j1;k1; .k/˛

Z rZ 0

X ˇC.k/˛

f˛..k/Cˇ/ .s; /

Ep .s; / jˇjŠˇŠ

H ˇ .s; t/ : ˇŠ


Z



1Œs;r G

X

D

Eˇ d jˇj d Z

X

˛

˛ˇ 0 ;jˇ 0 j1

X

D

˝jˇj

X

˛

ˇ 0 ˛;1jˇ 0 j

We will show that Z

371

 U jˇjC1

Z U

jˇ 0 j

1Œs;r G

˝.jˇ0 j1/

Eˇ 0 f˛ˇ 0 .s; / ˇ 0 ˇ d 0 ˇ ˇ . ˇ  1/Šˇ 0 Š jˇ j

 Eˇ 0  ˇ ˇ

r; ˇˇ 0 ˇ ; G; f˛ˇ0 ˇ 0 ˇ 0 d jˇ0 j : ˇˇ ˇŠˇ Š

solves (5.7.31). Indeed,

t

0

D

.r/G.r; / ˘ ˚ .dr; d/ X j˛j2

Z

X

Z

X

Œ0;t .r/G.r; /Ek .r; / 

k1 ˇ 0 C.k/˛;1jˇ 0 j

 Eˇ 0  d jˇ 0 j d

r; jˇ 0 j; G; f˛..k/Cˇ0 / 0 jˇ 0 jŠˇ 0 Š U jˇ j X X X Z   ˛ 1Œ0;t .r/G.r; / r; jj  1; G; f˛ D



j˛j2

jj k1 ˛;2jj U

E d 0 d .jj  1/ŠŠ jˇ j Z X X D ˛



j˛j2

D˛;2jj

 E  d 0 d  t; jj; G; f˛ jjŠŠ jˇ j U jj

and, by construction, Z

t

.r/G.r; / ˘ ˚ .dr; d/ D

0

.t/ 

Z tZ 0

f .r/ ˘ G .r; / ˚ .dr; d/ : V

(ii) Consider un .t/ defined by (5.7.37) with w D 0. Then un .t/ D

X

j˛j1

H ˇ .s; t/ ˇŠ

X

X

k1;.k/˛ ˇC.k/˛;jˇjn

Z

(5.7.38)

f˛..k/Cˇ/ .s; /Ek .s; /d ; V

and, after repeating corresponding arguments from the proof of part (i), we see that for 0  t ; T Z tZ 0

un .r/ G.r; / ˘ ˚ .dr; d/ D unC1 .t/  V

Z tZ 0

f .s; / ˘ ˚ .ds; d/ : V


If $f\in\mathcal{S}'(\mathrm{H})$, then $u^0\in C\mathcal{S}'([0,T])$. If $u^n\in C\mathcal{S}'([0,T])$, then $u^nG\in\mathcal{S}'(\mathrm{H})$. By Proposition 5.7.37, $u^{n+1}\in C\mathcal{S}'([0,T])$, and the statement follows by comparing (5.7.38) and (5.7.35). Part (iii) is a direct consequence of (5.7.35).
(iv) Since $\mathbb{E}\int_U|f|^2\, d\mu < \infty$, it follows that $f\in\mathcal{S}'(\mathrm{H};\mathbb{R})$ and, according to part (ii) and Proposition 5.7.50, all the iterations are adapted. Therefore the Itô isometry holds. By definition,
$$\sup_t\mathbb{E}\big[u^0(t)\big]^2\le\mathbb{E}\int_U|f|^2\, d\mu < \infty.$$
Assume $\sup_t\mathbb{E}\big[u^n(t)\big]^2 < \infty$. Using (5.7.36) and the Itô isometry,
$$\mathbb{E}\big[u^{n+1}(t)\big]^2\le C\Big(\int_0^t\int_V\mathbb{E}\big[u^n(s)\big]^2\,G(s,\lambda)^2\, d\nu\, ds + \mathbb{E}\int_0^t\int_V|f|^2\, d\nu\, ds\Big),$$
and, by Gronwall's lemma, there is a constant $C$ independent of $n$ such that
$$\sup_{n,t}\mathbb{E}\big[u^n(t)\big]^2\le C\,\mathbb{E}\int_0^T\int_V|f|^2\, d\nu\, ds.$$
Similarly, using Gronwall's lemma, we show that
$$\sum_n\sup_t\mathbb{E}\big[\big(u^{n+1}(t)-u^n(t)\big)^2\big] < \infty.$$
This concludes the proof of Proposition 5.7.63.

In this section we extend the results on the linear SDE to a simple parabolic SPDE. As in the previous section, let $U = [0,T]\times V$, $d\mu = dt\,d\nu$, and $\Phi(dt,d\lambda) = \dot\Phi(t,\lambda)\, d\mu(t,\lambda)$. We denote $\mathbb{R}^d_T = \mathbb{R}^d\times[0,T]$ and suppose that the following measurable functions are given:
$$a:\mathbb{R}^d\to\mathbb{R}^{d\times d},\qquad b:\mathbb{R}^d\to\mathbb{R}^d.$$
The following is assumed.

A1. The functions $a, b$ are infinitely differentiable and bounded with all derivatives, and the matrix $a = \big(a_{ij}(x)\big)$ is symmetric and non-degenerate: for all $x$,
$$a_{ij}(x)\,y_iy_j\ge\delta|y|^2,\quad y\in\mathbb{R}^d,\ \text{for some }\delta > 0.$$

5.7 Distribution Free Stochastic Analysis

373

Let H₂ˢ = H₂ˢ(ℝ^d), s = 1, 2, ..., be the Sobolev class of square-integrable functions on ℝ^d having generalized space derivatives up to order s, with the finite norm
$$
|\,\cdot\,|_{s,2}=|\,\cdot\,|_2+|D_x^s\,\cdot\,|_2,
$$
where |·|₂ = (∫_{ℝ^d} |·|² dx)^{1/2}. Let G ∈ L₂([0,T] × V, dμ) with dμ = dt dy. Define L_{2,1} = L_{2,1}(ℝ^d × [0,T] × V, dx dt dy) as the space of all measurable functions g on ℝ^d × [0,T] × V such that
$$
\|g\|^2_{1,2}=\int_0^T\!\!\int_V\!\!\int_{\mathbb R^d}\big[|g(s,x,y)|^2+|D_xg(s,x,y)|^2\big]\,dx\,dy\,ds<\infty.
$$

Let w = Σ_α w_α(x) ξ_α ∈ D′(H₂³(ℝ^d)) and g = Σ_α g_α(x,s,y) ξ_α ∈ D′(L_{2,1}). The main objective of this section is to study the equation for u(t) = u(t,x):
$$
du(t,x)=Lu(t,x)\,dt+\int_V\big(u(t,x)\,G(t,y)+g(t,x,y)\big)\diamond\Phi(dt,dy),
\qquad u(0,x)=w(x),\qquad(5.7.39)
$$
where
$$
Lu=a^{ij}(x)\,u_{x_ix_j}+b^i(x)\,u_{x_i}.
$$
An equivalent form of (5.7.39) is
$$
u(t,x)=w(x)+\int_0^t Lu(s,x)\,ds+\int_0^t\!\!\int_V\big[u(s,x)\,G(s,y)+g(s,x,y)\big]\diamond\Phi(ds,dy),
\quad 0\le t\le T.\qquad(5.7.40)
$$
We will seek a solution to (5.7.40) in the form
$$
u(t,x)=\sum_\alpha u_\alpha(t,x)\,\xi_\alpha\in\mathcal{CD}'\big([0,T];H_2^2\big).\qquad(5.7.41)
$$
We start our analysis of Eq. (5.7.40) by introducing the definition of a solution in the "weak sense". Recall that CD′([0,T]; H₂²) is the class of all generalized processes u = Σ_α u_α(t) ξ_α on [0,T] such that each u_α is continuous on [0,T] with values in H₂².


Definition 5.7.64 We say that a generalized D-process
$$
u(t)=\sum_\alpha u_\alpha(t)\,\xi_\alpha\in\mathcal{CD}'\big([0,T];H_2^2\big)
$$
is a D-H₂² solution of Eq. (5.7.40) in [0,T] if equality (5.7.40) holds in D(L₂(ℝ^d)) for every 0 ≤ t ≤ T.

5.7.4.3 Linear Parabolic SPDEs

Lemma 5.7.65 Assume A1 holds and
$$
w=\sum_\alpha w_\alpha\,\xi_\alpha\in\mathcal D'(H_2^3),\qquad
g=\sum_\alpha g_\alpha\,\xi_\alpha\in\mathcal D'(L_{2,1}).
$$
Then there is a unique solution to (5.7.40) in CD′([0,T]; H₂²). The coefficients u_α of the solution u given by (5.7.41) satisfy
$$
\partial_t u_\alpha=Lu_\alpha+\sum_k\int_V m_k\big(u_{\alpha-(k)}G+g_{\alpha-(k)}\big)\,dy,
\qquad u_\alpha\big|_{t=0}=w_\alpha,\qquad(5.7.42)
$$
with the convention that the sum on the right-hand side is absent when |α| = 0.

Proof Plugging the series (5.7.41) into (5.7.40), we get (5.7.42): by definition, for t ∈ [0,T],
$$
\sum_\alpha u_\alpha\,\xi_\alpha=\sum_\alpha w_\alpha\,\xi_\alpha
+\sum_\alpha\Big(\int_0^t Lu_\alpha(s,x)\,ds\Big)\xi_\alpha
+\sum_\alpha\sum_k\Big(\int_0^t\!\!\int_V m_k\big[u_{\alpha-(k)}G+g_{\alpha-(k)}\big]\,dy\,ds\Big)\xi_\alpha.
$$

Denote by T_t h the solution of the problem
$$
\partial_t u=Lu,\ 0\le t\le T;\qquad u(0,x)=h(x),\ x\in\mathbb R^d.
$$
If A1 holds, then
$$
|T_th|^2_{L_2(\mathbb R^d)}\le e^{Ct}\,|h|^2_{L_2(\mathbb R^d)}.\qquad(5.7.43)
$$


Since (5.7.42) is a lower-triangular system, starting with u_{(0)}(t) = T_t w_{(0)} we find unique continuous u_α(t) for every α with |α| ≥ 1:
$$
u_\alpha(t)=T_tw_\alpha+\sum_k\int_0^t\!\!\int_V m_k(s,y)\big(T_{t-s}u_{\alpha-(k)}(s)\,G(s,y)
+T_{t-s}g_{\alpha-(k)}(s,y)\big)\,dy\,ds.\qquad(5.7.44)
$$
Next, we establish an equivalent mild formulation of the equation.

Lemma 5.7.66 Assume A1 holds and
$$
w=\sum_\alpha w_\alpha\,\xi_\alpha\in\mathcal D'(H_2^3),\qquad
g=\sum_\alpha g_\alpha\,\xi_\alpha\in\mathcal D'(L_{2,1}).
$$
Then u is the unique solution to (5.7.40) in CD′([0,T]; H₂²) if and only if it satisfies
$$
u(t)=T_tw+\int_0^t\!\!\int_V\big[T_{t-s}u(s)\,G(s,y)+T_{t-s}g(s,y)\big]\diamond\Phi(ds,dy),
\quad 0\le t\le T.\qquad(5.7.45)
$$
Proof Since (5.7.44) holds, the statement is an immediate consequence of Lemma 5.7.65.

Next, we derive a closed-form expression and norm bounds for the solution.

Proposition 5.7.67 Let A1 hold and
$$
w=\sum_\alpha w_\alpha(x)\,\xi_\alpha\in\mathcal D'\big(H_2^3(\mathbb R^d)\big),\qquad
g=\sum_\alpha g_\alpha(x,s,y)\,\xi_\alpha\in\mathcal D'(L_{2,1}).
$$
(i) The unique solution to (5.7.40) is given by
$$
u(t)=T_tw(x)\diamond\exp^\diamond\!\big(\Phi(\mathbf 1_{[0,t]}G)\big)
+\int_0^t\!\!\int_V\exp^\diamond\!\big(\Phi(\mathbf 1_{[s,t]}G)\big)\diamond T_{t-s}g(s,x,y)\diamond\Phi(ds,dy)
$$
and has the chaos expansion
$$
u(t)=\sum_{|\alpha|\ge1}\Big(\sum_{k\ge1}\int_0^t\!\!\int_V\sum_{\beta+(k)\le\alpha}
\frac{H_\beta(s,t)}{\beta!}\,T_{t-s}g_{\alpha-(k)-\beta}(s,y)\,E_k(s,y)\,dy\,ds\Big)\xi_\alpha
+\sum_\alpha\Big(\sum_{\beta\le\alpha}\frac{H_\beta(t)}{\beta!}\,T_tw_{\alpha-\beta}\Big)\xi_\alpha.
$$


(ii) The solution is the limit of Picard iterations uⁿ(t) with
$$
u^0(t)=T_tw+\int_0^t\!\!\int_V T_{t-s}g(s,y)\diamond\Phi(ds,dy),
$$
$$
u^{n+1}(t)=T_tw+\int_0^t\!\!\int_V\big[T_{t-s}u^n(s)\,G(s,y)+T_{t-s}g(s,y)\big]\diamond\Phi(ds,dy),
\quad 0\le t\le T.
$$
In particular, for 0 ≤ t ≤ T,
$$
u^n(t)=T_tw\diamond\sum_{k=0}^n\frac{\big(\Phi(\mathbf 1_{[0,t]}G)\big)^{\diamond k}}{k!}
+\int_0^t\!\!\int_V\sum_{k=0}^n\frac{\big(\Phi(\mathbf 1_{[s,t]}G)\big)^{\diamond k}}{k!}
\diamond T_{t-s}g(s,y)\diamond\Phi(ds,dy).
$$
If g ∈ S′(H) and w ∈ S′, then uⁿ, u ∈ CS′([0,T]).
(iii) If w is deterministic and g is adapted, then the solution u is adapted and
$$
\sup_t\,\mathbf E\big[\|u(t)\|^2_{L_2(\mathbb R^d)}\big]
\le C\,\mathbf E\Big[\|w\|^2_{L_2(\mathbb R^d)}
+\int_0^T\!\!\int_V\!\!\int_{\mathbb R^d}|g(x,s,y)|^2\,dx\,dy\,ds\Big].
$$

Proof We repeat the main arguments of Proposition 5.7.63 (as in the case of the linear SDE). The changes in the proof of (i) are obvious. The proof of (ii)–(iii) is identical to the proof of (ii), (iv) in Proposition 5.7.63, with the use of (5.7.43) to estimate the L₂(Ω,P)-norm of the iterations.

5.7.4.4 Stationary SPDEs

Let us consider a stationary (time-independent) equation
$$
Au+\delta_\Phi(\mathcal Mu)=g,\qquad(5.7.46)
$$
where, as previously, the Φ-noise is a formal series Φ̇ = Σ_k m_k ξ_k, {m_k} is a CONS in a Hilbert space H, the ξ_k are independent random variables with zero mean and variance 1, and g = Σ_α g_α ξ_α is a free term. We will consider Eq. (5.7.46) in a normal triple of Hilbert spaces (V, H, V′):

• V ⊂ H ⊂ V′, and the embeddings V ⊂ H and H ⊂ V′ are dense and continuous;
• the space V′ is dual to V relative to the inner product in H;
• there exists a constant C &gt; 0 such that |(u,v)_H| ≤ C‖u‖_V ‖v‖_{V′} for all u and v.

A typical example of a normal triple is the Sobolev spaces
$$
\big(H^{r+\gamma}(\mathbb R^d),\,H^{r}(\mathbb R^d),\,H^{r-\gamma}(\mathbb R^d)\big),\quad \gamma>0.
$$


Everywhere in this section it is assumed that A: V → V′ and M: V → V′ ⊗ ℓ₂ are bounded linear operators. As we already know, Eq. (5.7.46) can be rewritten in the form
$$
Au+\sum_{n\ge1}M_nu\diamond\xi_n=g,\qquad(5.7.47)
$$
where u = Σ_α u_α ξ_α. Since
$$
M_nu=\sum_{\alpha\in\mathcal J_1}M_nu_\alpha\,\xi_\alpha,\qquad(5.7.48)
$$
we get
$$
\sum_{n\ge1}M_nu\diamond\xi_n
=\sum_{\alpha\in\mathcal J_1}\sum_{n\ge1}M_nu_\alpha\,\xi_\alpha\diamond\xi_n
=\sum_{\alpha\in\mathcal J_1}\sum_{n\ge1}M_nu_\alpha\,\xi_{\alpha+(n)}
=\sum_{n\ge1}\sum_{\beta\in\mathcal J_1:\,|\beta|\ge1}M_nu_{\beta-(n)}\,\xi_\beta.
$$
Therefore, for α ∈ J₁ such that |α| &gt; 0, we have
$$
\Big(\sum_{n\ge1}M_nu\diamond\xi_n\Big)_\alpha=\sum_{n\ge1}M_nu_{\alpha-(n)}.
$$
The propagator describing the coefficients (u_α, α ∈ J₁) for the solution of (5.7.47),
$$
u=\sum_{\alpha\in\mathcal J_1}u_\alpha\,\xi_\alpha,\qquad(5.7.49)
$$
is
$$
Au_\alpha=\mathbf Eg\ \text{ if }|\alpha|=0;\qquad
Au_\alpha+\sum_{n\ge1}M_nu_{\alpha-(n)}=g_\alpha\ \text{ if }|\alpha|>0.\qquad(5.7.50)
$$
The propagator (5.7.50) is a lower-triangular system. Therefore, if A has an inverse A⁻¹, then the propagator can be solved sequentially.
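Assuming A is invertible, the forward-substitution structure of (5.7.50) can be sketched numerically. The toy setting below is entirely hypothetical: a single noise term with matrix M, one-dimensional indices α = (n), and a diagonal matrix standing in for the operator A.

```python
import numpy as np

# Hypothetical finite-dimensional sketch of the propagator (5.7.50):
# one noise term M, one-dimensional indices alpha = (n), so
#   A u_0 = E g,   A u_n + M u_{n-1} = g_n,  n >= 1,
# solved by forward substitution (the system is lower triangular).
rng = np.random.default_rng(0)
d = 4                          # dimension of the discretized state space
A = np.eye(d) * 3.0            # invertible stand-in for the operator A
M = rng.standard_normal((d, d)) * 0.1
g = [rng.standard_normal(d) for _ in range(6)]   # coefficients g_n

u = [np.linalg.solve(A, g[0])]                   # level |alpha| = 0
for n in range(1, len(g)):
    u.append(np.linalg.solve(A, g[n] - M @ u[n - 1]))

# each u_n satisfies its equation exactly:
residual = A @ u[3] + M @ u[2] - g[3]
print(np.max(np.abs(residual)))
```

Each level of the chaos hierarchy only requires coefficients from the previous level, which is exactly why the system can be solved one level at a time.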


The next question to ask is whether the solution (5.7.49) of Eq. (5.7.47) has finite variance. The following example demonstrates that, in general, the answer is negative.

Example 5.7.68 Consider the following simple version of the equation:
$$
u=1+u\diamond\xi,
$$
with Eξ = 0, Eξ² = 1. In this setting, J₁ consists of one-dimensional indices α = (0), (1), (2), .... Recall that, by our general convention, Eξ²_{(n)} = n!.

System (5.7.50) in this case becomes
$$
u_{(0)}=1;\qquad u_{(n)}=I_{n=0}+u_{(n-1)},\ n\ge1,
$$
that is, u_{(n)} = 1 and u = Σ_n ξ_{(n)}, so that
$$
\mathbf Eu^2=\sum_{n\ge1}n!=\infty.
$$
As a result, we are essentially forced to define the solution to Eq. (5.7.47) as a generalized D-random variable with values in V, such that (5.7.47) holds in D(V′).
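A quick numerical illustration of Example 5.7.68 (a sketch, with the factorials computed directly): the chaos coefficients are all equal to 1, so the second moment is the series Σ_n n!, whose partial sums blow up.

```python
import math

# Example 5.7.68: u_(n) = 1 for every n, so E u^2 = sum over n of n!.
# The partial sums grow without bound, i.e. the solution has no variance.
partial_sums = []
s = 0
for n in range(1, 16):
    s += math.factorial(n)
    partial_sums.append(s)

print(partial_sums[:4])   # [1, 3, 9, 33]
```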

5.7.4.5 Weighted Norms

Since the space D′ is often too large to provide any useful information about the solution, a popular definition of solutions in the Gaussian and Poisson cases is based on rescaling/weighting of the coefficients u_α; cf. page 255, as well as [38, 151, 166, 177, etc.]. This technique also works in the distribution-free setting.

Given a separable Hilbert space X and a sequence of positive numbers R = {r_α, α ∈ J₁}, we define the space RL₂(X) as the collection of formal series f = Σ_α f_α ξ_α, f_α ∈ X, such that
$$
\|f\|^2_{RL_2(X)}=\sum_\alpha\|f_\alpha\|^2_X\,r_\alpha^2<\infty.\qquad(5.7.51)
$$
If (5.7.51) holds, then Σ_α r_α f_α ξ_α ∈ L₂(X). Similarly, the sequence R⁻¹ = {r_α⁻¹, α ∈ J₁} defines the space R⁻¹L₂(X), consisting of the formal series f = Σ_α f_α ξ_α, f_α ∈ X, such that
$$
\sum_\alpha\|f_\alpha\|^2_X\,r_\alpha^{-2}<\infty.
$$


Important and popular examples of the space RL₂(X) correspond to the following weights:

(a) r_α² = Π_{k≥1} q_k^{α_k}, where {q_k, k ≥ 1} is a non-increasing sequence of positive numbers;
(b) Kondratiev's spaces (S)_{−ρ,−ℓ}: r_α² = (α!)^{−ρ}(2ℕ)^{−ℓα}, ρ ≥ 0, ℓ ≥ 0.

In particular, in the setting of Example 5.7.68, Eu² = ‖u‖²_{(S)_{0,0}} = ∞, but ‖u‖²_{(S)_{−ρ,−ℓ}} &lt; ∞ for suitable ρ and ℓ.

Exercise 5.7.69 (A) Determine a range of values of ρ and ℓ such that the generalized process u = Σ_n ξ_{(n)} in Example 5.7.68 belongs to (S)_{−ρ,−ℓ}. [Recall a similar exercise in the Gaussian case on page 257.]

5.7.4.6 Wick-Nonlinear SPDEs

To illustrate the general idea, consider the equation
$$
Au-u^{\diamond3}+\sum_{n\ge1}M_nu\diamond\xi_n=f,\qquad(5.7.52)
$$
where u^{◇3} = u ◇ u ◇ u and f = Σ_α f_α ξ_α. As in the previous sections, we will look for a chaos solution of the form
$$
u=\sum_{\alpha\in\mathcal J_1}u_\alpha\,\xi_\alpha.
$$
The definition of the Wick product implies
$$
u^{\diamond3}=\sum_{\alpha,\beta,\gamma\in\mathcal J_1}u_\alpha u_\beta u_\gamma\,\xi_{\alpha+\beta+\gamma},
$$
that is,
$$
\big(u^{\diamond3}\big)_\alpha=\sum_{\beta,\beta',\beta'':\,\beta+\beta'+\beta''=\alpha}u_\beta u_{\beta'}u_{\beta''}.
$$
Therefore, the propagator for Eq. (5.7.52) is
$$
Au_\alpha-\sum_{\beta,\beta',\beta'':\,\beta+\beta'+\beta''=\alpha}u_\beta u_{\beta'}u_{\beta''}
+\sum_{n\ge1}M_nu_{\alpha-(n)}=f_\alpha.\qquad(5.7.53)
$$


Similar to (5.7.50), system (5.7.53) is also lower triangular and can be solved sequentially, assuming that the operator A has an appropriate inverse. If the Wick cubic u^{◇3} is replaced by any Wick power or Wick polynomial, then the related propagator remains lower triangular. The same ideas also apply to evolution equations.

Exercise 5.7.70 (B) Write the propagator for the following equations:
$$
u_{xx}-u\diamond u_x+\sum_{n\ge1}u\diamond\xi_n=f;
$$
$$
u_t=u_{xx}-u\diamond u_x+\sum_{n\ge1}u_x\diamond\xi_n+f.
$$

Chapter 6

Parameter Estimation for Diagonal SPDEs

6.1 Examples and General Ideas

6.1.1 An Oceanographic Model and Its Simplifications

Let U = U(t,x) be the temperature of the top layer of a body of water, such as a lake, sea, or ocean. Various historical data provide information about the long-time average value Ū of U. The quantity of interest then becomes the fluctuation u = U − Ū, and the time evolution of u can be modeled by the following heat balance equation (see Frankignoul or Piterbarg and Rozovskii):
$$
u_t=\theta\Delta u+v\cdot\nabla u-\nu u+\dot W_Q.\qquad(6.1.1)
$$
Here, θ &gt; 0 and ν &gt; 0 are physical parameters and v = v(t,x) is the velocity of the water on the surface. In the most complete model, v is the solution of the Navier-Stokes equations (see (1.2.18) on page 19). Note that both x and v are two-dimensional.

Practical applications of (6.1.1) for modeling and prediction require knowledge of the parameters θ and ν, and these parameters can only be estimated using measurements of the temperature. In other words, we have an inverse problem: determine θ and ν given the solution of (6.1.1). We will simplify the problem by making the following assumptions:

1. the parameters θ and ν are constant, that is, non-random and independent of t and x;
2. the surface is not moving, that is, v = 0;
3. the shape G of the body of water is a bounded domain in ℝ² that is smooth or otherwise nice, e.g. a rectangle, and u(t,x) = 0 on the boundary of G;

© Springer International Publishing AG 2017 S.V. Lototsky, B.L. Rozovsky, Stochastic Partial Differential Equations, Universitext, DOI 10.1007/978-3-319-58647-2_6


4. Eq. (6.1.1) with v = 0 is diagonal (or diagonalizable), that is, the covariance operator Q of the noise has pure point spectrum, and the eigenfunctions of Q are the same as the eigenfunctions of the Laplace operator Δ in G with zero boundary conditions;
5. there is no initial fluctuation: u(0,x) = 0;
6. the solution u = u(t,x) of (6.1.1) is a continuous function of (t,x) and can be measured without errors for all t ∈ [0,T] and x ∈ G.

Equation (6.1.1) becomes
$$
u_t=\theta Au-\nu u+\dot W_Q,\ 0<t\le T,\ x\in G;\qquad u(0,x)=0,\qquad(6.1.2)
$$
where A is the Laplace operator in G with zero boundary conditions.

Exercise 6.1.1 (B) Find sufficient conditions on the operator Q so that the variational solution of (6.1.2) is a continuous function of (t,x).

Denote by h_k = h_k(x), k ≥ 1, the eigenfunctions of A. Then
$$
Ah_k=-\lambda_kh_k,\ \lambda_k>0,\qquad(6.1.3)
$$
$$
Qh_k=q_kh_k,\ q_k>0,\qquad(6.1.4)
$$
$$
\dot W_Q(t,x)=\sum_{k\ge1}q_k\,\dot w_k(t)\,h_k(x),\qquad(6.1.5)
$$
and, if a reasonable solution of (6.1.2) exists, it has a representation as a Fourier series in space:
$$
u(t,x)=\sum_{k\ge1}u_k(t)\,h_k(x).\qquad(6.1.6)
$$
Substituting (6.1.3)–(6.1.6) into (6.1.2) leads to the equation for the corresponding Fourier coefficient u_k:
$$
du_k(t)=-(\theta\lambda_k+\nu)\,u_k(t)\,dt+q_k\,dw_k(t),\qquad u_k(0)=0.\qquad(6.1.7)
$$
In (6.1.4), we assume that q_k &gt; 0 for all k. If q_n = 0 for some n, then (6.1.7) implies that u_n(t) = 0 for all t ≥ 0.

The corresponding inverse problem can now be stated as follows: estimate the numbers θ and ν given the observations u_k(t), k = 1, ..., N, t ∈ [0,T]. In this setting, we can actually forget about the original SPDE (6.1.2) and work directly with (6.1.7). Estimation of the number q_k from the continuous-time observations of u_k is (mathematically) easy.


Exercise 6.1.2 (B) Take a positive integer M and consider a uniform partition 0 = t₀ &lt; t_{1,M} &lt; ... &lt; t_{M,M} = T of [0,T] with step Δt_M = T/M: t_{m,M} = mΔt_M. Verify that
$$
q_k^2=\frac1T\lim_{M\to\infty}\sum_{m=1}^{M}\Big(u_k\big(m\Delta t_M\big)-u_k\big((m-1)\Delta t_M\big)\Big)^2.\qquad(6.1.8)
$$
The result also shows that the sign of q_k is not observable: we cannot distinguish between du = −θu dt + dw(t) and du = −θu dt − dw(t) based on the observations u(t), t ≥ 0.
Hint. The martingale component of u_k is q_k w_k.
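The quadratic-variation identity (6.1.8) is easy to check by simulation. The sketch below uses an Euler scheme for (6.1.7); the values θλ_k + ν = 2 and q_k = 0.7 are illustrative assumptions, not numbers from the text.

```python
import numpy as np

# Euler scheme for (6.1.7) and the quadratic-variation estimator (6.1.8).
# Illustrative parameters (assumptions): theta*lambda_k + nu = 2, q_k = 0.7.
rng = np.random.default_rng(42)
rate, q, T, M = 2.0, 0.7, 1.0, 100_000
dt = T / M

u = np.empty(M + 1)
u[0] = 0.0
dw = rng.standard_normal(M) * np.sqrt(dt)
for m in range(M):
    u[m + 1] = u[m] - rate * u[m] * dt + q * dw[m]

q2_hat = np.sum(np.diff(u) ** 2) / T   # right-hand side of (6.1.8)
print(q2_hat)                          # close to q**2 = 0.49
```

With 10⁵ steps, the relative fluctuation of the sum of squared increments is of order √(2/M), well under one percent, so the drift contributes only a negligible O(Δt) correction.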

While it is still not immediately clear how to estimate the numbers θ and ν, we did make some progress by reducing an SPDE setting to an SODE one: from (6.1.2) to (6.1.7). To proceed, we simplify the problem even further and consider an SPDE in one space variable, with one unknown parameter θ and with space-time white noise Ẇ(t,x) as the driving force:
$$
u_t(t,x)=\theta\,u_{xx}(t,x)+q\,\dot W(t,x),\quad 0<t\le T,\ x\in(0,\pi),\qquad(6.1.9)
$$
with zero initial and boundary conditions. When θ = 1 and q = 1, we investigated this equation on page 55. In particular, we know that the solution of (6.1.9) is a continuous function of t and x. The reason for introducing q will become clear later, when we study the asymptotic behavior of the estimators of θ and use dimensional analysis to check some of the results.

Even though, by gradually moving from (6.1.2) to (6.1.9), we seemingly lost all connections to the original application, we will eventually learn how the ideas and methods we develop while working with the easy equation (6.1.9) can be applied to more complicated equations, including (6.1.2). Similar to (6.1.2), Eq. (6.1.9) is diagonal, and the solution of (6.1.9) has the Fourier series representation
$$
u(t,x)=\sum_{k\ge1}u_k(t)\,h_k(x),
$$
where
$$
du_k(t)=-\theta k^2u_k(t)\,dt+q\,dw_k(t),\quad 0<t\le T,\qquad(6.1.10)
$$
with initial condition u_k(0) = 0; see (2.3.16) on page 55. By direct computation,
$$
u_k(t)=q\int_0^te^{-\theta k^2(t-s)}\,dw_k(s),\qquad(6.1.11)
$$
$$
\mathbf Eu_k^2(t)=q^2\int_0^te^{-2\theta k^2(t-s)}\,ds=\frac{q^2\big(1-e^{-2\theta k^2t}\big)}{2\theta k^2},
$$


$$
\int_0^T\mathbf Eu_k^2(t)\,dt=\frac{q^2}{2\theta k^2}\int_0^T\big(1-e^{-2\theta k^2t}\big)\,dt
=\frac{q^2}{2\theta k^2}\Big(T-\frac{1-e^{-2\theta k^2T}}{2\theta k^2}\Big).\qquad(6.1.12)
$$

Fix k ≥ 1 and consider the process u_k = u_k(t), 0 ≤ t ≤ T. Multiplying both sides of (6.1.10) by u_k(t) and integrating from 0 to T, we get
$$
\int_0^Tu_k(t)\,du_k(t)=-\theta k^2\int_0^Tu_k^2(t)\,dt+q\int_0^Tu_k(t)\,dw_k(t).\qquad(6.1.13)
$$

Note that ∫₀ᵀ u_k²(t) dt &gt; 0 with probability one. Indeed, ∫₀ᵀ u_k²(t) dt = 0 implies u_k²(t) = 0 for almost all t, which, by (6.1.11), is a probability-zero event. Then equality (6.1.13) suggests the following estimator θ̂^(k)(T) of θ:
$$
\hat\theta^{(k)}(T)=-\frac{\int_0^Tu_k(t)\,du_k(t)}{k^2\int_0^Tu_k^2(t)\,dt},\qquad(6.1.14)
$$
or, using the Itô formula,
$$
\hat\theta^{(k)}(T)=-\frac{u_k^2(T)-q^2T}{2k^2\int_0^Tu_k^2(t)\,dt}.
$$
For now, we ignore that, while expression (6.1.14) can be both positive and negative, the number θ must be positive if Eq. (6.1.9) is to be well posed in L₂((0,π)). Substituting (6.1.10) into (6.1.14), we get the expression for the estimation error:
$$
\hat\theta^{(k)}(T)-\theta=-\frac{q\int_0^Tu_k(t)\,dw_k(t)}{k^2\int_0^Tu_k^2(t)\,dt}.\qquad(6.1.15)
$$
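Before turning to asymptotics, here is a simulation sketch of the estimator (6.1.14) computed from one discretized path of (6.1.10); the values θ = 2, k = 1, q = 0.7, T = 200 are illustrative assumptions.

```python
import numpy as np

# Estimator (6.1.14) on a single simulated Fourier mode of (6.1.10).
# Illustrative parameters (assumptions): theta = 2, k = 1, q = 0.7, T = 200.
rng = np.random.default_rng(1)
theta, k, q, T = 2.0, 1, 0.7, 200.0
M = 200_000
dt = T / M

u = np.zeros(M + 1)
dw = rng.standard_normal(M) * np.sqrt(dt)
for m in range(M):                       # Euler scheme for (6.1.10)
    u[m + 1] = u[m] - theta * k**2 * u[m] * dt + q * dw[m]

du = np.diff(u)
# discrete versions of the Ito integral and the time integral in (6.1.14)
theta_hat = -np.sum(u[:-1] * du) / (k**2 * np.sum(u[:-1] ** 2) * dt)
print(theta_hat)                         # close to theta = 2
```

The Itô integral must be approximated by the left-point sum Σ u(t_m)Δu(t_m); using midpoints would instead approximate the Stratonovich integral and bias the estimator.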

(6.1.15)

6.1.2 Long Time vs Large Space In what follows, we will look at the estimation error (6.1.15) in two asymptotic regimes, T ! C1 for fixed k (long time) and k ! C1 for fixed T (large space). We will use the following notations: 1. N .m;  2 /; to denote a Gaussian random variable with mean m and variance  2 ; d 2.  D ; to indicate that random variables  and  have the same distribution; L 3. X D Y; to indicate that random processes X and Y have the same distribution in the space of continuous functions.


For example, if w = w(t) is a standard Brownian motion, c &gt; 0, and w_c(t) = w(c²t), then
$$
w(t)\overset{d}{=}N(0,t)\overset{d}{=}\sqrt t\,N(0,1)\ \text{for fixed }t,\qquad(6.1.16)
$$
$$
w(c^2t)\overset{L}{=}c\,w(t),\ \text{that is, }w_c(\cdot)\overset{L}{=}c\,w(\cdot)\ \text{as processes}.\qquad(6.1.17)
$$
It is important to distinguish equality X(t) =ᵈ Y(t) for fixed t from equality X(t) =ᴸ Y(t) as processes, because X(t) =ᴸ Y(t) implies
$$
\int_0^TX(t)\,dt\overset{d}{=}\int_0^TY(t)\,dt\qquad(6.1.18)
$$
and, more generally, F(X) =ᵈ F(Y) for a continuous functional F on the space of continuous functions; as (6.1.16) suggests, having X(t) =ᵈ Y(t) for every t is not enough to claim equalities like (6.1.18).

Exercise 6.1.3 (C) Verify that
$$
\int_0^1w(t)\,dt\overset{d}{=}N(0,1/3).
$$
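A Monte Carlo sketch of Exercise 6.1.3: sample many Brownian paths, approximate the time integral by a Riemann sum, and compare the empirical mean and variance with 0 and 1/3 (the path counts and grid size are arbitrary choices).

```python
import numpy as np

# Monte Carlo check that int_0^1 w(t) dt is N(0, 1/3).
rng = np.random.default_rng(7)
n_paths, n_steps = 10_000, 400
dt = 1.0 / n_steps

# Brownian paths as cumulative sums of independent increments
w = np.cumsum(rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt), axis=1)
integrals = w.sum(axis=1) * dt           # Riemann sums for the time integral

print(integrals.mean(), integrals.var()) # mean near 0, variance near 1/3
```

The variance 1/3 comes from Var(∫₀¹ w dt) = ∫₀¹∫₀¹ min(s,t) ds dt = 1/3, which the simulation reproduces up to O(Δt) discretization and sampling error.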

Let us pass to the limit T → +∞ in (6.1.15). It is well known that u_k(t) is an ergodic process with stationary distribution N(0, q²/(2θk²)): a time average converges to the corresponding population average with respect to the stationary distribution,
$$
\lim_{T\to\infty}\frac1T\int_0^TF\big(u_k(t)\big)\,dt=\mathbf EF(\eta_k),\qquad
\eta_k\overset{d}{=}N\big(0,\,q^2/(2\theta k^2)\big),
$$
and the convergence is with probability one. In particular,
$$
\lim_{T\to\infty}\frac1T\int_0^Tu_k^2(t)\,dt=\mathbf E\eta_k^2=\frac{q^2}{2\theta k^2}\qquad(6.1.19)
$$

with probability one. On the other hand, by (6.1.12),
$$
\mathbf E\Big(\int_0^Tu_k(t)\,dw_k(t)\Big)^2=\int_0^T\mathbf Eu_k^2(t)\,dt
\simeq\frac{q^2T}{2\theta k^2},\quad T\to+\infty;
$$
recall that f(T) ≃ g(T) as T → +∞ means that lim_{T→+∞} f(T)/g(T) = 1. Therefore, by the Chebyshev inequality,
$$
\lim_{T\to\infty}\frac1T\int_0^Tu_k(t)\,dw_k(t)=0\qquad(6.1.20)
$$


in probability, and, after combining (6.1.15), (6.1.19), and (6.1.20),
$$
\lim_{T\to\infty}\hat\theta^{(k)}(T)=\theta\qquad(6.1.21)
$$
in probability¹ for every k ≥ 1. In other words, estimator (6.1.14) is consistent in the long-time asymptotic T → +∞.

The next question is the rate of convergence of θ̂^(k)(T): we want to find an increasing non-random function v = v(T) such that lim_{T→+∞} v(T) = +∞ and the limit lim_{T→+∞} v(T)(θ̂^(k)(T) − θ) exists in distribution and is a non-degenerate random variable. To answer this question, we need some basic facts related to convergence in distribution. The first is a version of the martingale central limit theorem.

Theorem 6.1.4 If, for t ≥ 0 and ε &gt; 0, X_ε = X_ε(t) and X = X(t) are real-valued, continuous square-integrable martingales such that X is a Gaussian process, X_ε(0) = X(0) = 0, and, for some t₀ &gt; 0,
$$
\lim_{\varepsilon\to0}\langle X_\varepsilon\rangle(t_0)=\langle X\rangle(t_0)\qquad(6.1.22)
$$
in probability, then lim_{ε→0} X_ε(t₀) =ᵈ X(t₀).

Proof Here is an outline; for complete technical details, see Jacod and Shiryaev [93, Theorem VIII.4.17] or Liptser and Shiryaev [138, Theorem 5.5.4(II)].

First of all, recall that if ξ =ᵈ N(0,σ²), then the characteristic function of ξ is Ee^{iλξ} = e^{−λ²σ²/2}. On the other hand, if M is a continuous square-integrable martingale, then, by the Itô formula, the process
$$
G_M(t)=\exp\Big(i\lambda M(t)+\frac{\lambda^2}{2}\langle M\rangle(t)\Big)
$$
is a (local) martingale for every real λ. If X is a Gaussian martingale with X(0) = 0, then X(t) =ᵈ N(0, E⟨X⟩(t)), and the equalities
$$
1=\mathbf EG_X(t),\qquad
1=\mathbf E\exp\Big(i\lambda X(t)+\frac{\lambda^2}{2}\mathbf E\langle X\rangle(t)\Big),
$$
being true for all λ and t, suggest that ⟨X⟩ is non-random: ⟨X⟩(t) = E⟨X⟩(t).

¹With some extra work [140, Theorem 17.4], one can show that the convergence in (6.1.20) and (6.1.21) is with probability one.


This is indeed the case; see [93, Sect. II.4(d)] for details. To complete the proof of the theorem, it remains to pass to the limit ε → 0 in the equality
$$
1=\mathbf E\exp\Big(i\lambda X_\varepsilon(t)+\frac{\lambda^2}{2}\langle X_\varepsilon\rangle(t)\Big)
$$
and to conclude that
$$
\lim_{\varepsilon\to0}\mathbf E\exp\big(i\lambda X_\varepsilon(t)\big)
=e^{-(\lambda^2/2)\langle X\rangle(t)}=\mathbf E\exp\big(i\lambda X(t)\big).
$$
Note that passing to the limit in the expectation is not a problem because, for every λ ∈ ℝ, the function f(x) = e^{iλx} is uniformly bounded: |f(x)| ≤ 1. This concludes the proof of Theorem 6.1.4.

The idea behind the proof of Theorem 6.1.4 is similar to the idea behind the proof of the Lévy characterization of the Brownian motion.

Theorem 6.1.5 (Lévy's Characterization of the Brownian Motion) If X = X(t) is a continuous square-integrable martingale, X(0) = 0, and ⟨X⟩(t) = t, then X is a standard Brownian motion.

Exercise 6.1.6 (A)
(a) Prove Theorem 6.1.5.
Hint. Use the Itô formula to show that
$$
\mathbf E\big(e^{i\lambda(X(t)-X(s))}\,\big|\,\mathcal F_s\big)=e^{-\lambda^2(t-s)/2}.
$$
(b) Why is continuity of X a necessary assumption?
Hint. Let X be a Poisson process with unit intensity. What compensates (X(t) − t)² to a martingale? At the very least, confirm that E(X(t) − t)² = t.

Next, we recall the following classical fact.

Theorem 6.1.7 If ξ_ε, η_ε, ε &gt; 0, are random variables such that
$$
\lim_{\varepsilon\to0}\xi_\varepsilon\overset{d}{=}\xi
$$
and lim_{ε→0} η_ε = c in probability, where ξ is a random variable and c is a non-random number, then
$$
\lim_{\varepsilon\to0}(\xi_\varepsilon+\eta_\varepsilon)\overset{d}{=}\xi+c,\qquad
\lim_{\varepsilon\to0}\xi_\varepsilon\eta_\varepsilon\overset{d}{=}\xi c.
$$
Theorem 6.1.7 is known as the Slutsky theorem, after the Ukrainian statistician EVGENY "EUGEN" EVGENIEVICH SLUTSKY (1880–1948), who published it in 1925. The proof is by the usual ε–δ argument.


Now we will establish the rate of convergence of θ̂^(k)(T) to θ as T → +∞ and also identify the limit distribution.

Proposition 6.1.8 For every k ≥ 1, the estimator θ̂^(k)(T) is asymptotically normal with rate √T:
$$
\lim_{T\to+\infty}\sqrt T\,\big(\hat\theta^{(k)}(T)-\theta\big)\overset{d}{=}N\big(0,\,2\theta/k^2\big).\qquad(6.1.23)
$$
Proof To stay in line with the notations from Theorems 6.1.4 and 6.1.7, write ε = 1/T and define
$$
X_\varepsilon(t)=-\frac{1}{q\sqrt T}\int_0^{Tt}u_k(s)\,dw_k(s),\qquad
\eta_\varepsilon=\frac{1}{q^2T}\int_0^Tu_k^2(s)\,ds.
$$
Then (6.1.15) implies
$$
\sqrt T\,\big(\hat\theta^{(k)}(T)-\theta\big)
=-\frac{q\,T^{1/2}\int_0^Tu_k(t)\,dw_k(t)}{k^2\int_0^Tu_k^2(t)\,dt}
=\frac{X_\varepsilon(1)}{k^2\,\eta_\varepsilon}.\qquad(6.1.24)
$$
Note that, for each ε &gt; 0, X_ε is a square-integrable martingale and ⟨X_ε⟩(1) = η_ε. Also take a standard Brownian motion w = w(t) and define the Gaussian martingale X = X(t) by
$$
X(t)=w(t)\big/\sqrt{2\theta k^2}.
$$
We know from (6.1.19) that, with probability one,
$$
\lim_{\varepsilon\to0}\eta_\varepsilon=\frac{1}{2\theta k^2}=\langle X\rangle(1).
$$
Therefore, by Theorem 6.1.4,
$$
\lim_{\varepsilon\to0}X_\varepsilon(1)\overset{d}{=}X(1)\overset{d}{=}N\big(0,\,1/(2\theta k^2)\big).
$$
Now (6.1.23) follows from (6.1.24) and Theorem 6.1.7:
$$
\lim_{T\to+\infty}\sqrt T\,\big(\hat\theta^{(k)}(T)-\theta\big)
=\lim_{\varepsilon\to0}\frac{X_\varepsilon(1)}{k^2\,\eta_\varepsilon}
=\frac{w(1)/\sqrt{2\theta k^2}}{k^2/(2\theta k^2)}
=\frac{\sqrt{2\theta}\,w(1)}{k}\overset{d}{=}N(0,\,2\theta/k^2).
$$
This completes the proof of Proposition 6.1.8.
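Proposition 6.1.8 can be checked by Monte Carlo: simulate many independent paths, compute √T(θ̂^(k)(T) − θ) for each, and compare the empirical variance with 2θ/k². All parameters below (θ = 2, k = 1, q = 1, T = 50, 400 replications) are illustrative assumptions.

```python
import numpy as np

# Monte Carlo sketch of Proposition 6.1.8:
# sqrt(T)*(theta_hat - theta) should be roughly N(0, 2*theta/k^2).
rng = np.random.default_rng(3)
theta, k, q = 2.0, 1, 1.0
T, M, n_rep = 50.0, 5_000, 400
dt = T / M

u = np.zeros(n_rep)
num = np.zeros(n_rep)          # running Ito sum  int u du
den = np.zeros(n_rep)          # running integral int u^2 dt
for m in range(M):
    dw = rng.standard_normal(n_rep) * np.sqrt(dt)
    du = -theta * k**2 * u * dt + q * dw
    num += u * du
    den += u * u * dt
    u += du

errors = np.sqrt(T) * (-num / (k**2 * den) - theta)
print(errors.var())            # near 2*theta/k**2 = 4
```

The sampling error of the empirical variance with 400 replications is about 4·√(2/400) ≈ 0.3, so agreement within a few tenths is all one should expect from a run of this size.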


Let us now comment on the usefulness of q. In the course of the proof of Proposition 6.1.8, there is a potential danger of manipulating various fractions the wrong way and getting, for example, 1/(2θk²) as the limit variance in (6.1.23). It is therefore convenient to have an independent way to check whether a certain equality makes sense, and one such way is dimensional analysis. Whenever there is time evolution present, it is natural to measure the time variable t in units of time [t], such as seconds, minutes, etc. This forces the Brownian motion w_k to have the units of [t]^{1/2}: [w_k(t)] = [t]^{1/2}. If we want to have the process u_k dimensionless, then, to ensure consistent dimensions in (6.1.10), we need to measure θ in units of inverse time, [θ] = [t]⁻¹, and we need to introduce q with [q] = [t]^{-1/2}. Then
$$
[\hat\theta^{(k)}(T)]=[t]^{-1},\qquad
\big[\sqrt T\,(\hat\theta^{(k)}(T)-\theta)\big]=[t]^{-1/2},
$$
indicating that the limit variance in (6.1.23) should be measured in [t]⁻¹, which is indeed the case. Note that this limit variance does not depend on q.

Because the quantity 2θ/k² in (6.1.23) is a decreasing function of k, it is natural to try the following: instead of keeping k fixed and increasing T, keep T fixed and let k go to infinity. The result is a different kind of asymptotic normality.

Proposition 6.1.9 For every T &gt; 0,
$$
\lim_{k\to+\infty}k\,\big(\hat\theta^{(k)}(T)-\theta\big)\overset{d}{=}N(0,\,2\theta/T).\qquad(6.1.25)
$$

Proof To begin, note that (6.1.25) makes sense as far as dimensional analysis: the units of the limit variance should indeed be [t]⁻². Recall that, by (6.1.15),
$$
k\,\big(\hat\theta^{(k)}(T)-\theta\big)=-\frac{qk\int_0^Tu_k(t)\,dw_k(t)}{k^2\int_0^Tu_k^2(t)\,dt}.
$$
The main objective is to understand what happens to the process u_k(·) as k → ∞. By (6.1.17),
$$
u_k(t)=q\int_0^te^{-\theta k^2(t-s)}\,dw_k(s)
=q\int_0^{k^2t}e^{-\theta k^2t+\theta r}\,dw_k(r/k^2)
\overset{L}{=}\frac qk\int_0^{k^2t}e^{-\theta k^2t+\theta r}\,dw_1(r)
=\frac1k\,u_1(k^2t),
$$
that is,
$$
u_k(t)\overset{L}{=}\frac1k\,u_1(k^2t).\qquad(6.1.26)
$$


Then
$$
\int_0^Tu_k^2(t)\,dt\overset{d}{=}\frac1{k^2}\int_0^Tu_1^2(k^2t)\,dt
=\frac1{k^4}\int_0^{k^2T}u_1^2(s)\,ds,
$$
and, by (6.1.19),
$$
\lim_{k\to+\infty}\frac1{k^2}\int_0^{k^2T}u_1^2(s)\,ds=\frac{Tq^2}{2\theta}
$$
with probability one. Thus, for every T ≥ 0,
$$
\lim_{k\to+\infty}k^2\int_0^Tu_k^2(t)\,dt=\frac{Tq^2}{2\theta}\qquad(6.1.27)
$$
in distribution, hence in probability, because the limit Tq²/(2θ) is non-random. Next, define

$$
X_k(t)=kq\int_0^tu_k(s)\,dw_k(s);
$$

as in the previous proof, it is convenient to have X_k dimensionless. The process X_k is a square-integrable martingale with
$$
\langle X_k\rangle(t)=q^2k^2\int_0^tu_k^2(s)\,ds.
$$
If w = w(t) is a standard Brownian motion and X(t) = q²w(t)/√(2θ), then (6.1.27) and Theorem 6.1.4 with ε = 1/k imply
$$
\lim_{k\to\infty}kq\int_0^Tu_k(t)\,dw_k(t)\overset{d}{=}N\big(0,\,Tq^4/(2\theta)\big).
$$
Since
$$
k\,\big(\hat\theta^{(k)}(T)-\theta\big)=-\frac{kq\int_0^Tu_k(t)\,dw_k(t)}{k^2\int_0^Tu_k^2(t)\,dt},
$$
equality (6.1.25) follows from Theorem 6.1.7. This completes the proof of Proposition 6.1.9.

Exercise 6.1.10 (B) Show that (6.1.25) implies
$$
\lim_{k\to+\infty}\hat\theta^{(k)}(T)=\theta
$$
in probability, for every T &gt; 0.


Hint. Let ξ =ᵈ N(0, 2θ/T). Given ε &gt; 0, find C_ε so that P(|ξ| &gt; C_ε) &lt; ε. Then, given δ &gt; 0, take k so that kδ &gt; C_ε.

Propositions 6.1.8 and 6.1.9 quantify the intuitive idea: the more information we have, the closer the estimator is to the true value. In Proposition 6.1.8 extra information comes from an increasing time interval; in Proposition 6.1.9 extra information comes from additional Fourier coefficients. It is also reasonable to expect that there are more efficient ways to incorporate the information provided by the Fourier coefficients u₁, ..., u_N than simply using the last one. Accordingly, let us now assume that the trajectories of u_k(t) are observed for all 0 &lt; t &lt; T and all k = 1, ..., N, and let us combine the estimators (6.1.14) for different k as follows:
$$
\hat\theta_N=-\frac{\sum_{k=1}^N\int_0^Tk^2\,u_k(t)\,du_k(t)}{\sum_{k=1}^N\int_0^Tk^4\,u_k^2(t)\,dt}.\qquad(6.1.28)
$$
First suggested by Huebner et al., (6.1.28) is consistent and asymptotically normal in the limit N → +∞, and is closely related to the maximum likelihood estimator of θ based on the observations u_k(t), k = 1, ..., N, 0 &lt; t &lt; T. The actual MLE θ̃_N of θ takes into account that θ must be positive, whereas θ̂_N can be either positive or negative:
$$
\tilde\theta_N=\begin{cases}\hat\theta_N,&\text{if }\hat\theta_N>0,\\[2pt] 0,&\text{if }\hat\theta_N\le0.\end{cases}\qquad(6.1.29)
$$
If θ̂_N is consistent, then getting θ̂_N ≤ 0 is rather unlikely for large N. We postpone the derivation of (6.1.28) and (6.1.29) until Sect. 6.2.2.

Let us try to see that θ̂_N is indeed better than θ̂^(N) (now that we have fixed T, we will no longer write T as an argument in the estimators). It follows from (6.1.10) and (6.1.28) that
$$
\hat\theta_N-\theta=-\frac{q\sum_{k=1}^N\int_0^Tk^2\,u_k(t)\,dw_k(t)}{\sum_{k=1}^N\int_0^Tk^4\,u_k^2(t)\,dt}.\qquad(6.1.30)
$$
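The combined estimator (6.1.28) is easy to try out numerically. In the sketch below, the first N Fourier modes of (6.1.9) are simulated with an Euler scheme and pooled into a single estimate; N = 50, T = 1, θ = 2, q = 1, and the step count are all illustrative assumptions.

```python
import numpy as np

# Combined estimator (6.1.28) from the first N Fourier modes of (6.1.9),
# where lambda_k = k^2. Illustrative parameters: theta = 2, q = 1, T = 1.
rng = np.random.default_rng(5)
theta, q, T = 2.0, 1.0, 1.0
N, M = 50, 20_000
dt = T / M

u = np.zeros(N)
k2 = (np.arange(1, N + 1) ** 2).astype(float)    # k^2 for k = 1..N
num = 0.0                                        # sum_k int k^2 u_k du_k
den = 0.0                                        # sum_k int k^4 u_k^2 dt
for m in range(M):
    dw = rng.standard_normal(N) * np.sqrt(dt)
    du = -theta * k2 * u * dt + q * dw
    num += np.sum(k2 * u * du)
    den += np.sum(k2**2 * u * u) * dt
    u += du

theta_hat_N = -num / den
print(theta_hat_N)        # close to theta = 2
```

Even with T = 1, pooling N = 50 modes already pins θ down to within about one percent, consistent with the N^{3/2} rate conjectured below.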

Note that both the numerator and the denominator of the fraction on the right-hand side of (6.1.30) are sums of independent random variables, and the analysis of the properties of the estimator θ̂_N is reduced to the study of these sums. By (6.1.12), as N → ∞,
$$
\sum_{k=1}^N\int_0^Tk^4\,\mathbf Eu_k^2(t)\,dt\simeq\frac{q^2T}{2\theta}\sum_{k=1}^Nk^2
\simeq\frac{q^2N^3T}{6\theta},\qquad(6.1.31)
$$


where the notation a_N ≃ b_N means lim_{N→∞}(a_N/b_N) = 1. Since
$$
\mathbf E\int_0^Tk^2\,u_k(t)\,dw_k(t)=0,
$$
it is reasonable to conjecture that

• by the law of large numbers, lim_{N→+∞}(θ̂_N − θ) = 0 with probability one;
• by the central limit theorem, the sequence of random variables {N^{3/2}(θ̂_N − θ), N ≥ 1} converges in distribution to a zero-mean Gaussian random variable.

In other words, we expect the rate of convergence for θ̂_N to be N^{3/2}, which is indeed better than the rate N for θ̂^(N), and the proof is indeed a straightforward application of the strong law of large numbers and the central limit theorem for independent but not identically distributed random variables. We carry out the proof in a more general setting in Sect. 6.2.3 below. What is much less obvious is that there are no "reasonable" ways to improve θ̂_N besides replacing it with θ̃_N according to (6.1.29); the proof of this, in a more general setting, is in Sect. 6.2.4. There are certainly "unreasonable" improvements, for example, defining an estimator to be a fixed number θ₀ &gt; 0: on the one hand, it is not a very useful estimator but, on the other hand, if indeed θ = θ₀, then nothing can beat it.

Now that we have some understanding of the estimation problem for (6.1.9), let us try to imagine what to expect from the estimation problem for (6.1.2). First of all, the analysis of (6.1.9) suggests that a similar analysis of (6.1.2) will require the asymptotic formula for λ_k. This turns out to be a classical result, cf. [174]:
$$
\lambda_k\asymp k;\qquad(6.1.32)
$$
recall that a_k ≍ b_k means that a finite positive limit of the ratio a_k/b_k exists as k → ∞.

Exercise 6.1.11 (A) Verify (6.1.32) (or an easier version λ_k ≍ k) when G is the square [0,π] × [0,π].
Hint. The eigenvalues are of the form k² + m², k, m ≥ 1. Start by enumerating them.
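Following the hint, one can check (6.1.32) numerically: enumerate the eigenvalues k² + m² of the Dirichlet Laplacian on the square, sort them, and compare λ_K with K. By Weyl's law the ratio approaches 4/π; the truncation at k, m ≤ 200 below is an assumption chosen so that the sorted list is complete well past K = 5000.

```python
import numpy as np

# Eigenvalues of the Dirichlet Laplacian on [0,pi]^2 are k^2 + m^2, k,m >= 1.
# Sorted in increasing order they grow linearly: lambda_K ~ (4/pi) K.
n = 200
k = np.arange(1, n + 1)
eigs = np.sort((k[:, None] ** 2 + k[None, :] ** 2).ravel())

K = 5_000
print(eigs[K - 1] / K)    # near 4/pi = 1.2732...
```

The constant 4/π is the reciprocal of the area density of lattice points in the quarter disk {k, m ≥ 1 : k² + m² ≤ λ}; boundary effects of order √λ explain why the finite-K ratio sits slightly above the limit.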

Next, there are two parameters to estimate rather than one. To get some idea about the resulting estimators, we pretend that estimating two parameters at once and estimating each parameter individually, while treating the other one as known, lead to similar asymptotic results (which turns out to be true in this case). Then estimating θ in (6.1.2) should be very similar to estimating θ in (6.1.9). The only difference is that, instead of λ_k = k², we now have λ_k ≍ k, and (assuming for simplicity that q_k = 1) this should change (6.1.12) to
$$
\int_0^T\mathbf Eu_k^2(t)\,dt\asymp\frac T{\theta k}\qquad(6.1.33)
$$


and (6.1.31) to
$$
\sum_{k=1}^N\int_0^Tk^2\,\mathbf Eu_k^2(t)\,dt\asymp N.
$$
Even without looking at the estimator θ̂_N, we can conjecture that {N(θ̂_N − θ), N ≥ 1} converges in distribution to a zero-mean normal random variable. Similarly, estimation of ν in (6.1.2) will lead to
$$
\sum_{k=1}^N\int_0^T\mathbf Eu_k^2(t)\,dt\asymp\ln N,\qquad(6.1.34)
$$
and the corresponding rate of convergence √(ln N).

Exercise 6.1.12 (C) Verify (6.1.33) if we have q_k = 1 in (6.1.7).

Exercise 6.1.13 (B) Convince yourself that no consistent estimator of θ is possible in the model
$$
u_t=u_{xx}+\theta u+\dot W,\quad x\in(0,\pi),
$$
with zero boundary conditions, if t ∈ [0,T] and T is fixed.
Hint. By analogy with (6.1.34), the rate of convergence should be
$$
\Big(\sum_{k=1}^N\int_0^T\mathbf Eu_k^2(t)\,dt\Big)^{1/2},
$$
which, with ∫₀ᵀ Eu_k²(t) dt ≍ 1/k², is a convergent series. In other words, θ̂_N − θ stays a non-degenerate random variable in the limit N → +∞.

6.1.3 Problems

All problems below are related to the estimator θ̂^(k)(T) defined in (6.1.14) on page 384. Problem 6.1.1 offers one way to incorporate several processes u_k into a single estimator. Problem 6.1.2 suggests yet another asymptotic regime: small noise. Problem 6.1.3 investigates the influence of the initial condition u_k(0) in the large-space asymptotic. Problems 6.1.4 and 6.1.5 investigate θ̂^(k)(T) in the large-time asymptotic when the process u_k is not ergodic.

Problem 6.1.1 Investigate the weighted average of the estimators θ̂^(k)(T). That is, consider expressions of the form
$$
\sum_{k=1}^N\alpha_{k,N}(T)\,\hat\theta^{(k)}(T),\qquad(6.1.35)
$$


with non-random numbers (weights) ˛k;N .T/ satisfying ˛k;N .T/  0;

N X

˛k;N .T/ D 1:

kD1

The weights can depend on the observation time interval T and on the total number N of the available Fourier coefficients. Is it possible to select the weights so that the p convergence rate as T ! C1 is better than T? Is it possible to select the weights so that the convergence rate as N ! C1 is better than N? d

Problem 6.1.2 Consider (6.1.10) on page 383 with initial condition uk .0/ D N .m;  2 / where m 2 R and   0 are known. With k and T fixed, investigate the corresponding estimator O .k/ .T/ from (6.1.14) on page 384 in the small noise asymptotic, that is, in the limit q ! 0. d

Problem 6.1.3 Consider Eq. (6.1.10) on page 383 with initial condition uk .0/ D N .mk ; k2 / where mk 2 R and k  0 are known. Assume that the collection fuk .0/; k  1g is independent of fwk ; k  1g. Investigate the estimator O .k/ .T/ from (6.1.14) in the limit k ! C1, T fixed. Is it possible to choose mk and/or k so that the rate of convergence of O .k/ .T/ is better than k? Problem 6.1.4 Consider the collection of random variables RT X.t/dX.t/ O ; T > 0; .T/ D 0R T 2 0 X .t/dt where dX D Xdt C dw; and the initial condition X.0/ is a Gaussian random variable independent of the   standard Brownian motion w. Determine the limit in distribution of eT O .T/  1 . Problem 6.1.5 Consider the collection of random variables RT

X.t/dX.t/ O .T/ D 0R T ; T > 0; 2 0 X .t/dt where X.t/ D X0 C dw; and the initial condition X0 is a Gaussian random variable independent of the standard Brownian motion w. Find a positive increasing function v D v.T/ such that, as T ! C1, a non-degenerate limit of v.T/O .T/ exists in distribution, and determine the limit.

6.2 Maximum Likelihood Estimator (MLE): One Unknown Parameter

6.2.1 The Forward Problem

Introduce the following objects:

1. $G$, a sufficiently regular bounded domain in $\mathbb R^d$ or a smooth compact $d$-dimensional manifold;
2. $H=L_2(G)$, a separable Hilbert space with an orthonormal basis $\{h_k,\ k\ge1\}$;
3. $W^Q=W^Q(t)$, a $Q$-cylindrical Brownian motion on $H$;
4. $A_0$, $A_1$, linear differential or pseudo-differential operators on $H$.

For $\theta\in\mathbb R$, consider the equation
$$\dot u(t)+(A_0+\theta A_1)u(t)=\dot W^Q(t) \qquad(6.2.1)$$
with initial condition $u(0)$. Even though the solution $u=u_\theta(t,x)$ depends not only on $t$ but also on the parameter $\theta$ and the spatial variable $x\in G$, not to mention the elementary outcome $\omega$, most of the time we will not indicate the dependence on $\theta$, $x$, and $\omega$ explicitly.

Definition 6.2.1 Equation (6.2.1) is called diagonal, or diagonalizable, if the operators $A_0$, $A_1$, and $Q$ have pure point spectrum and a common system of eigenfunctions $\{h_k,\ k\ge1\}$, and
$$u(0)=\sum_{k\ge1}u_k(0)\,h_k, \qquad(6.2.2)$$
where $u_k(0)$ are independent Gaussian random variables with mean $m_k$ and variance $\sigma_k^2$, and the collection $\{u_k(0),\ k\ge1\}$ is independent of $W^Q$. We will introduce conditions on the convergence of (6.2.2) when we discuss existence and uniqueness of solution of (6.2.1).

Equation (6.2.1) is an attempt to strike a reasonable balance between the concrete examples from Sect. 6.1 and a completely abstract framework that has little or no connection with SPDEs. Although abstract, Eq. (6.2.1) is a stochastic evolution equation (in fact, an SPDE if the operators $A_0$, $A_1$ are differential). There are sufficient conditions for a (pseudo-)differential operator to have pure point spectrum and for the corresponding eigenfunctions to form an orthonormal basis in $L_2(G)$. It is even more important for our purposes that the eigenvalues of the operators in this setting have a particular growth rate in $k$:
$$k\text{-th eigenvalue}\asymp k^{\,\text{order of the operator}/d}; \qquad(6.2.3)$$

see, for example, [201, Theorem 1.2.1]. In particular, for the Laplacian $\Delta$, a second-order operator, we get
$$k\text{-th eigenvalue of }\Delta\asymp k^{2/d};$$
in fact, we already used this result before with $d=2$: see pages 174 and 392.

We will not discuss various technical questions, such as: how regular should $G$ be, what is the meaning of $L_2(G)$ when $G$ is a manifold, what is the order of a pseudo-differential operator (and what is a pseudo-differential operator to begin with), etc. Nonetheless, these questions can arise naturally even when not originally present. For example, an equation with periodic boundary conditions leads to a manifold (circle if $d=1$, torus if $d=2$, etc.). Pseudo-differential operators can appear if, for example, in Eq. (6.2.1) we decide to switch from $W^Q$ to $W$. After making this change, only two operators rather than three will have to have a common system of eigenfunctions, but in the resulting equation for $v=Q^{-1/2}u$,
$$dv(t)+(\bar A_0+\theta\bar A_1)v(t)\,dt=dW(t),$$
the operators $\bar A_i=Q^{-1/2}A_iQ^{1/2}$, $i=0,1$, will be pseudo-differential rather than differential. In the end, though, all we need is to know that all these technical questions have been worked out in special books on the spectral theory of differential operators.

Denote by $\kappa_k$, $\nu_k$, $q_k^2$, and $\mu_k(\theta)$ the eigenvalues of the operators $A_0$, $A_1$, $Q$, and $A_0+\theta A_1$:
$$A_0h_k=\kappa_kh_k,\qquad A_1h_k=\nu_kh_k,\qquad Qh_k=q_k^2h_k,\qquad \mu_k(\theta)=\kappa_k+\theta\nu_k.$$
The reason to write the operators on the left-hand side of (6.2.1) is that it is more convenient to have $\mu_k(\theta)>0$.

Our objective is an inverse problem for Eq. (6.2.1), that is, estimation of $\theta$ from the observations of $u$. Rigorous analysis of an inverse problem is impossible without first addressing the forward problem: existence, uniqueness, and regularity of the solution of (6.2.1). While we have a general result (Theorem 4.4.3 on page 199), there is some work to be done: we need to specify the corresponding normal triple and then state the corresponding conditions in the same terms as in Definition 6.2.1.

There is a more philosophical question in the background: what kinds of functions can be observed? Since we are working with Fourier series in $L_2(G)$, we will assume that every function from $L_2\big(\Omega;C((0,T);L_2(G))\big)$ is observable. In the normal triple $(V,H,V')$, we therefore take $H=L_2(G)$ and then make sure that Theorem 4.4.3 applies to Eq. (6.2.1). To define $V$, denote by $2m$ the order of the operator $A_0+\theta A_1$ (which naturally leads to the assumption that the order of $A_0+\theta A_1$ is the same for all $\theta\in\Theta$). Then we can take $V=H^{m}(G)$, where $H^{\gamma}(G)$, $\gamma\in\mathbb R$, are the Sobolev spaces in $G$, as constructed in Example 3.1.29 on page 87 with $\lambda_k=k^{1/d}$:
$$\|f\|_{\gamma}^2=\sum_{k\ge1}k^{2\gamma/d}f_k^2. \qquad(6.2.4)$$

Note that, by (6.2.3), the eigenvalues of the operator $\sqrt{-\Delta}$ on $L_2(G)$ have the same asymptotic as $\lambda_k$.

Definition 6.2.2 A diagonal equation (6.2.1) is called parabolic of order $2m$ if there exist numbers $c_1>0$, $c_2\ge0$, $c_3>0$, possibly depending on $\theta$, such that, for all $k\ge1$,
$$c_1k^{2m/d}\ \le\ \mu_k(\theta)+c_2\ \le\ c_3k^{2m/d}. \qquad(6.2.5)$$
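Condition (6.2.5) is easy to check numerically for a concrete diagonal model. The following small sketch is my illustration, with an assumed model: for the 1-d stochastic heat equation $u_t=\theta u_{xx}+\dot W$ on $(0,\pi)$ with zero boundary conditions, $\mu_k(\theta)=\theta k^2$, so (6.2.5) holds with $c_1=c_3=\theta$ and $c_2=0$ exactly when $\theta>0$ (here $2m=2$, $d=1$):

```python
# Check the two-sided bound (6.2.5) for mu_k(theta) = theta * k^2
# (1-d stochastic heat equation; 2m/d = 2).  Illustration only.
def mu(theta, k):
    return theta * k ** 2

def check_parabolic(theta, c1, c2, c3, two_m_over_d=2, K=1000):
    return all(
        c1 * k ** two_m_over_d <= mu(theta, k) + c2 <= c3 * k ** two_m_over_d
        for k in range(1, K + 1)
    )
```

For $\theta\le0$ no admissible constants exist, which is the numerical shadow of $\Theta_0=(0,+\infty)$ for this equation.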

If condition (6.2.5) holds for some $\theta\in\mathbb R$, then, by continuity, it holds in some neighborhood of $\theta$. Accordingly, we define the set $\Theta_0\subseteq\mathbb R$ as the largest open set such that (6.2.5) holds for every $\theta\in\Theta_0$ (recall that any union, countable or uncountable, of open sets is open).

Example 6.2.3 Let $G$ be a smooth bounded domain in $\mathbb R^d$ or a smooth compact $d$-dimensional manifold with a smooth measure, $H=L_2(G)$, and let $\Delta$ be the Laplace operator on $G$ (with zero boundary conditions if $G$ is a domain). It is known (see, for example, Safarov and Vassiliev [201] or Shubin) that

1. $\Delta$ has a complete orthonormal system of eigenfunctions in $H$;
2. the corresponding eigenvalues $\lambda_k$ are negative, can be arranged in decreasing order, and there is a positive number $c_\circ$ such that $|\lambda_k|\simeq c_\circ k^{2/d}$;

see (6.2.10) below for more details, including an explicit formula for $c_\circ$. Then each of the following equations is diagonalizable and parabolic:
$$u_t-\theta\Delta u=\dot W,\qquad \Theta_0=(0,+\infty);$$
$$u_t-\Delta u+\theta u=\dot W,\qquad \Theta_0=\mathbb R; \qquad(6.2.6)$$
$$u_t+\Delta^2u+\theta\Delta u=\dot W,\qquad \Theta_0=\mathbb R.$$

We can now address the forward problem for Eq. (6.2.1).

Theorem 6.2.4 Assume that Eq. (6.2.1) is diagonal and parabolic of order $2m$,
$$\sum_{k\ge1}q_k^2<\infty, \qquad(6.2.7)$$
and the initial condition satisfies
$$\sum_{k\ge1}\big(m_k^2+\sigma_k^2\big)<\infty. \qquad(6.2.8)$$

Then, for every $\theta\in\Theta_0$, Eq. (6.2.1) has a unique variational solution in the sense of Definition 4.4.1 on page 199, and, with $H=L_2(G)$ and $V=H^{m}(G)$,
$$\mathbb E\sup_{0<t<T}\|u(t)\|^2_{L_2(G)}<\infty.$$

$B_m(x,y)>0$, $y\ne0$, which in turn implies that the order $m$ of the operator is even. Finally, define
$$C_{B,G}=\frac{1}{(2\pi)^d}\int_{\{(x,y):\,B_m(x,y)<1\}}dx\,dy. \qquad(6.2.65)$$

$$\mu_k(\theta)>0,\qquad \mu_k(\theta)\simeq\bar\mu(\theta)\,k^{2m/d}, \qquad(6.2.66)$$
where $d$ is the dimension of the (missing) spatial variable in Eq. (6.2.63). The objective is to estimate the parameter $\theta\in\mathbb R$ from the observations $U^{N,\theta}=\{u_{k,\theta}(t),\ k=1,\dots,N,\ t\in[0,T]\}$. The observations generate the measure $P_T^{N,\theta}$ on the space of continuous on $[0,T]$ functions with values in $\mathbb R^N$. The likelihood ratio is

$$L_{N,T}(\theta)=\frac{dP_T^{N,\theta}}{dP_T^{N,\theta_0}}\big(U^{N,\theta_0}\big) =\exp\Bigg(-\sum_{k=1}^N\frac{\mu_k(\theta)-\mu_k(\theta_0)}{q_k^2}\int_0^T u_{k,\theta_0}(t)\,du_{k,\theta_0}(t) -\sum_{k=1}^N\frac{\mu_k^2(\theta)-\mu_k^2(\theta_0)}{2q_k^2}\int_0^T u_{k,\theta_0}^2(t)\,dt\Bigg). \qquad(6.2.67)$$

While (6.2.67) does not suggest any restrictions on the possible values of $\theta$, such restrictions can come from the underlying SPDE (6.2.63). To solve Eq. (6.2.63), it was convenient to introduce condition (6.2.5), while analysis of the estimators is more convenient under the equivalent condition (6.2.66). Recall that $\Theta_0$ denotes the largest open set in $\mathbb R$ such that (6.2.66) (equivalently, (6.2.5)) holds for every $\theta\in\Theta_0$. For example, if $A_0=0$ and $A_1=-\Delta$, then $\Theta_0=(0,+\infty)$.

Recall the notations

$$\varpi=\frac{2(m_1-m)}{d},\ \text{with the standing assumption }\varpi\ge-1;$$
$$\sigma^2(\theta)=\begin{cases}\dfrac{2(\varpi+1)\,\bar\mu(\theta)}{\bar\nu^2},&\text{if }\varpi>-1,\\[6pt] \dfrac{2\,\bar\mu(\theta)}{\bar\nu^2},&\text{if }\varpi=-1;\end{cases}\qquad I(N)=\begin{cases}N^{\varpi+1},&\text{if }\varpi>-1,\\ \ln N,&\text{if }\varpi=-1\end{cases} \qquad(6.2.68)$$
(here $2m$ is the order of $A_0+\theta A_1$, $m_1$ is the order of $A_1$, and $\nu_k\simeq\bar\nu\,k^{m_1/d}$).

The key technical result of Sect. 6.2.3 is that, with probability one,
$$\sum_{k=1}^N\frac{\nu_k^2}{q_k^2}\int_0^Tu_{k,\theta}^2(t)\,dt \ \simeq\ \sum_{k=1}^N\frac{\nu_k^2}{q_k^2}\int_0^T\mathbb Eu_{k,\theta}^2(t)\,dt \ \simeq\ \frac{I(N)\,T}{\sigma^2(\theta)}; \qquad(6.2.69)$$
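The deterministic part of (6.2.69) can be checked numerically in a concrete case. The sketch below is my illustration, not the book's computation, for the 1-d stochastic heat equation $u_t=\theta u_{xx}+\dot W$ on $(0,\pi)$: there $\mu_k=\theta k^2$, $\nu_k=k^2$, $q_k=1$, so $\varpi=2$, $I(N)=N^3$, and $\sigma^2(\theta)=6\theta$; with $u_k(0)=0$ the exact second moments give $\int_0^T\mathbb Eu_k^2\,dt=T/(2\mu_k)-(1-e^{-2\mu_kT})/(4\mu_k^2)$.

```python
# Compare sum_k (nu_k^2/q_k^2) int_0^T E u_k^2 dt with I(N)T/sigma^2(theta)
# for the 1-d heat equation.  Illustration only; u_k(0) = 0 assumed.
import math

def fisher_sum(theta, T, N):
    total = 0.0
    for k in range(1, N + 1):
        mu = theta * k ** 2
        int_Eu2 = T / (2 * mu) - (1 - math.exp(-2 * mu * T)) / (4 * mu ** 2)
        total += k ** 4 * int_Eu2          # nu_k^2 / q_k^2 = k^4
    return total
```

The ratio of the two sides approaches $1$ at rate $O(1/N)$, consistent with the $\simeq$ in (6.2.69).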

assumptions (6.2.46) and (6.2.47) in Theorem 6.2.9 ensure that the initial conditions $u_k(0)$ do not affect the asymptotic relations in (6.2.69).

Theorem 6.2.9 establishes strong consistency and asymptotic normality of the simplified maximum likelihood estimator $\hat\theta_N$, the unrestricted maximizer of $L_{N,T}(\theta)$. The true MLE $\check\theta_N$ is the maximizer of $L_{N,T}(\theta)$ over the closure of $\Theta_0$, and, while strong consistency of $\check\theta_N$ immediately follows from that of $\hat\theta_N$, the asymptotic normality does not. We will eventually establish this asymptotic normality using rather sophisticated and general results in asymptotic statistics; Problem 6.2.2 invites the reader to investigate a direct proof for a specific equation.

The question of asymptotic normality of $\check\theta_N$ aside, there are deeper reasons why the result of Theorem 6.2.9 should not be taken as the end of the investigation of the estimation problem. The main reason is that the question about optimality of the estimator, either $\check\theta_N$ or $\hat\theta_N$, remains unanswered, and the objective of the current section is to answer this question.

It is natural to suspect that optimality of a particular estimator, once we have a suitable definition of optimality, is a property not only of the estimator but of the underlying model: there should be some intrinsic features of the model ensuring a particular kind of behavior for a large class of possible estimators. Since a statistical model is usually characterized by the likelihood ratio, we can expect that suitable properties of the likelihood ratio will imply necessary properties of the model, including, with some luck, optimality of the MLE. Given the asymptotic nature of the problem, it is natural to study asymptotic properties of the likelihood ratio.

An immediate observation is that, according to (6.2.69), nothing interesting happens to the function $L_{N,T}(\theta)$ as $N\to+\infty$ for fixed $\theta$ and $T$. It turns out that non-trivial asymptotic behavior of $L_{N,T}(\theta)$ in the limit $N\to+\infty$ happens if, as $N\to+\infty$, $\theta$ approaches the reference value $\theta_0$, and does so at a certain rate. These considerations lead to the local likelihood ratio

ZN; .x/ D

dPT

dPN; T

D exp

N  X kD1

 .N; /  U



k . C xN; /  k ./ q2k

2 . C xN; /  2k .//  k 2q2k

Z

T 0

u2k; .t/dt



Z

T

0

uk; .t/duk; .t/

(6.2.70)

!

;

where x 2 R and s N; D

2 ./ : I.N/T

(6.2.71)

Comparing (6.2.70) and (6.2.67), we see that the parameter $\theta_0$ corresponding to the reference measure is no longer fixed but becomes another variable; we will put major effort later into establishing various properties of $Z_{N,\theta}(x)$ uniformly in $\theta$ over compact sets. For the measure on top, the parameter is $\theta+x\delta_{N,\theta}$. By (6.2.68), $\delta_{N,\theta}\searrow0$ as $N\to+\infty$, and this turns out to be the correct rate at which the top and bottom parameters approach each other. The additional (dimensionless) variable $x$ controls the relative distance between the parameters. Relation (6.2.50) on page 412 makes the choice of $\delta_{N,\theta}$ very natural; even the appearance of $\sigma^2(\theta)$ and $T$ in (6.2.71) makes sense, both by looking at (6.2.50) and by doing dimensional analysis.

Exercise 6.2.14 (C) Confirm that $\delta_{N,\theta}$ and $\theta$ have the same units $[t^{-1}]$.

Substituting (6.2.64) into (6.2.70) and using that $\mu_k(\theta)=\kappa_k+\theta\nu_k$, we get a more convenient formula for $Z_{N,\theta}$:
$$Z_{N,\theta}(x)=\exp\Bigg(\sum_{k=1}^N\bigg(-\frac{x\,\delta_{N,\theta}\,\nu_k}{q_k}\int_0^Tu_{k,\theta}(t)\,dw_k(t) -\frac{x^2\delta_{N,\theta}^2\,\nu_k^2}{2q_k^2}\int_0^Tu_{k,\theta}^2(t)\,dt\bigg)\Bigg). \qquad(6.2.72)$$

We will now summarize the main results from [84, Sects. II.12 and III.1], reformulated for our setting. To begin, here are some of the desirable properties of $Z_{N,\theta}(x)$.

[LAN] (Local asymptotic normality) The local likelihood ratio $Z_{N,\theta}$ is called locally asymptotically normal (LAN), as $N\to+\infty$, at a point $\theta_0\in\Theta_0$ if there exist random variables $\zeta_N$, $N\ge1$, and $\varepsilon_N(x)$, $N\ge1$, $x\in\mathbb R$, such that

a) the sequence $\zeta_N$ converges in distribution to a standard normal random variable: $\lim_{N\to+\infty}\zeta_N\overset{d}{=}\mathcal N(0,1)$;
b) for every $x\in\mathbb R$, the sequence $\varepsilon_N(x)$ converges to zero in probability;
c) for every $x\in\mathbb R$,
$$Z_{N,\theta_0}(x)=\exp\Big(x\zeta_N-\frac{x^2}{2}+\varepsilon_N(x)\Big).$$

[UAN] (Uniform asymptotic normality) The local likelihood ratio $Z_{N,\theta}$ is called uniformly asymptotically normal, as $N\to+\infty$, in a set $\Theta\subseteq\Theta_0$ if there exist random variables $\zeta_N(\theta)$ and $\varepsilon_N(x,\theta)$ such that, for every sequence $\{\theta_N,\ N\ge1\}\subset\Theta$ and every converging sequence $\{x_N,\ N\ge1\}\subset\mathbb R$, with $\lim_{N\to\infty}x_N=x$ and $\theta_N+x_N\delta_{N,\theta_N}\in\Theta$,

a) the sequence $\zeta_N(\theta_N)$ converges in distribution to a standard normal random variable: for every $y\in\mathbb R$,
$$\lim_{N\to+\infty}P\big(\zeta_N(\theta_N)\le y\big)=P(\xi\le y),\qquad \xi\overset{d}{=}\mathcal N(0,1),$$
even though the sequence $\theta_N$ does not need to converge;
b) the sequence $\varepsilon_N(x_N,\theta_N)$ converges to zero in probability, that is, for every $\delta>0$,
$$\lim_{N\to+\infty}P\big(|\varepsilon_N(x_N,\theta_N)|>\delta\big)=0;$$
again, note that the sequence $\theta_N$ does not need to converge;
c) the following equality holds:
$$Z_{N,\theta_N}(x_N)=\exp\Big(x_N\zeta_N(\theta_N)-\frac{x_N^2}{2}+\varepsilon_N(x_N,\theta_N)\Big). \qquad(6.2.73)$$

[HC] (Local Hölder continuity) Local Hölder continuity of $Z_{N,\theta}$ in an open set $\Theta\subseteq\Theta_0$ means that, for every compact set $K\subset\Theta$, there exist positive numbers $a$ and $B$ such that, for every $R>0$ and $N\ge1$,
$$\sup_{\theta\in K}\ \sup_{|x|\le R,\,|y|\le R}\ |x-y|^{-2}\, \mathbb E\Big|\sqrt{Z_{N,\theta}(x)}-\sqrt{Z_{N,\theta}(y)}\Big|^2\ \le\ B\,(1+R^a).$$

[UPI] (Uniform polynomial integrability) Uniform polynomial integrability of $Z_{N,\theta}$ in an open set $\Theta\subseteq\Theta_0$ means that, for every compact set $K\subset\Theta$ and every $p>0$, there exists an $N_0=N_0(K,p)$ such that
$$\sup_{\theta\in K}\ \sup_{N>N_0}\ \sup_{x\in\mathbb R}\ |x|^p\,\mathbb E\sqrt{Z_{N,\theta}(x)}<\infty. \qquad(6.2.74)$$

Under the conditions of Theorem 6.2.9, (6.2.62) and (6.2.69) immediately imply that $Z_{N,\theta}$ is locally asymptotically normal at every point $\theta\in\Theta_0$; recall that $\Theta_0$ is the largest open set on which the underlying SPDE is well-posed (equivalently, (6.2.66) holds). Indeed, since $\delta_{N,\theta}=\sqrt{\sigma^2(\theta)/(I(N)T)}$, (6.2.62) implies that
$$-\sum_{k=1}^N\frac{\delta_{N,\theta}\,\nu_k}{q_k}\int_0^Tu_{k,\theta}(t)\,dw_k(t)$$
converges in distribution to a standard normal random variable, and (6.2.69) implies that
$$\frac{x^2}{2}\sum_{k=1}^N\frac{\delta_{N,\theta}^2\,\nu_k^2}{q_k^2}\int_0^Tu_{k,\theta}^2(t)\,dt$$
converges with probability one to $x^2/2$.

Exercise 6.2.15 (C) Confirm local asymptotic normality of $Z_{N,\theta}$.

It turns out that local asymptotic normality of $Z_{N,\theta}$ at a point $\theta=\theta_0$ implies a certain property of every sequence of estimators $\tilde\theta_N$, $N\ge1$, in a neighborhood of $\theta_0$.

Definition 6.2.16 An estimator $\tilde\theta_N$ is a random variable of the form $\tilde\theta_N=F_N(U^{N,\theta})$, where $F_N:C((0,T);\mathbb R^N)\to\mathbb R$ is a measurable mapping.

To characterize the behavior of an estimator, we need the notion of a loss function.

Definition 6.2.17 A function $w=w(x)$, $x\in\mathbb R$, is called a loss function if $w(0)=0$, $w(x)$ is continuous, $w(-x)=w(x)$, $w(x)$ is non-decreasing for $x>0$, and $w(x_0)>0$ for at least one $x_0\in\mathbb R$. A loss function $w$ is said to have polynomial growth if there exists a $p>0$ such that $w(x)/(1+|x|^p)$ is bounded for all $x\in\mathbb R$.

A typical loss function is $w(x)=|x|^p$, $p>0$, or $w(x)=\min(|x|^p,1)$. Given a loss function $w=w(x)$ and an estimator $\tilde\theta_N$, the number $\mathbb Ew(\tilde\theta_N-\theta)$ characterizes the quality of the estimator at the point $\theta$. Local asymptotic normality at a point implies a lower bound on $\mathbb Ew(\tilde\theta_N-\theta)$ in a neighborhood of that point.

Theorem 6.2.18 If $Z_{N,\theta}$ is LAN at a point $\theta=\theta_0$ and if $w=w(x)$ is a loss function such that $\lim_{|x|\to\infty}w(x)e^{-a|x|^2}=0$ for every $a>0$, then, for every $\delta>0$ and every sequence of estimators $\tilde\theta_N$, $N\ge1$,
$$\liminf_{N\to+\infty}\ \sup_{|\theta-\theta_0|<\delta}\ \mathbb Ew\big((\tilde\theta_N-\theta)/\delta_{N,\theta_0}\big)\ \ge\ \mathbb Ew(\xi),\qquad \xi\overset{d}{=}\mathcal N(0,1).$$

$$\mu_k(\theta)=\kappa_k+\sum_{i=1}^{\ell}\theta_i\nu_{i,k},$$
and the random processes
$$du_k=-\mu_k(\theta)u_k\,dt+q_k\,dw_k,\qquad 0<t<T,\ k\ge1,$$
with initial condition $u_k(0)$ from (6.3.30).

Let us address the forward problem for Eq. (6.3.28).

Theorem 6.3.16 If $\mu_k(\theta)>0$ for all but finitely many $k\ge1$ and
$$\sum_k\frac{q_k^2}{\mu_k(\theta)}<\infty,$$

q2k < 1; k ./

then u.t/ D

X

uk .t/hk

k1

is a mild solution of (6.3.28). The solution is unique in the class L2 .˝  Œ0; TI H/. If X k

q2k ı k ./

< 1;

for some 0 < ı < 1, then u 2 L2 .˝I C..0; T/I H//.

(6.3.31)

6.3 Several Parameters and Further Directions


The proof is identical to that of Theorem 6.3.12 with one parameter.

Let us now discuss the problem of estimating $\theta$ from the observations $U^{N,T}=\{u_1(t),\dots,u_N(t),\ t\in[0,T]\}$. Assume that $q_k>0$ for all $k\ge1$ and the observations correspond to the value $\theta_0$ of the parameter, so that $\mu_k(\theta_0)>0$ for all $k\ge1$. By (6.2.18) on page 402, the likelihood function is
$$L_{N,T}(\theta)=\exp\Bigg(-\sum_{k=1}^N\frac{\mu_k(\theta)-\mu_k(\theta_0)}{q_k^2}\int_0^Tu_k(t)\,du_k(t) -\sum_{k=1}^N\frac{\mu_k^2(\theta)-\mu_k^2(\theta_0)}{2q_k^2}\int_0^Tu_k^2(t)\,dt\Bigg).$$

We will now compute the simplified (unrestricted) MLE of $\theta$ by maximizing $\ln L_{N,T}(\theta)$ over $\mathbb R^\ell$. Recall that $\mu_k(\theta)=\kappa_k+\theta^\top\nu_k$. Therefore the corresponding gradients are
$$\nabla\mu_k(\theta)=\nu_k,\qquad \nabla\mu_k^2(\theta)=2\kappa_k\nu_k+2\big(\nu_k\nu_k^\top\big)\theta.$$
Similar to our analysis of the heat balance equation with two unknown parameters, it is convenient to introduce the matrix $B_N\in\mathbb R^{\ell\times\ell}$ and the vectors $A_N\in\mathbb R^{\ell}$ and $\bar A_N\in\mathbb R^{\ell}$ with components
$$B_{ij,N}=\sum_{k=1}^N\frac{\nu_{i,k}\nu_{j,k}}{q_k^2}\int_0^Tu_k^2(t)\,dt,\qquad \bar A_{i,N}=-\sum_{k=1}^N\frac{\nu_{i,k}}{q_k}\int_0^Tu_k(t)\,dw_k(t), \qquad(6.3.32)$$
$$A_{i,N}=-\sum_{k=1}^N\frac{\nu_{i,k}}{q_k^2}\int_0^Tu_k(t)\big(du_k(t)+\kappa_ku_k(t)\,dt\big).$$
Then
$$\nabla_\theta\ln L_{N,T}(\theta)=A_N-B_N\theta, \qquad(6.3.33)$$

and to proceed, we need to invert the matrix $B_N$.

Proposition 6.3.17 The matrix $B_N$ is invertible with probability one if and only if the vectors $\nu_1,\dots,\nu_N$ span $\mathbb R^\ell$.

Proof The result is immediate from the equalities
$$B_N=\sum_{k=1}^N\frac{\nu_k\nu_k^\top}{q_k^2}\int_0^Tu_k^2(t)\,dt$$
and
$$y^\top B_Ny=\sum_{k=1}^N\frac{\big(y^\top\nu_k\big)^2}{q_k^2}\int_0^Tu_k^2(t)\,dt,\qquad y\in\mathbb R^\ell.$$

Since we expect $N$ to be much larger than $\ell$, the condition that $\nu_1,\dots,\nu_N$ span $\mathbb R^\ell$ is equivalent to identifiability of the original model (6.3.28), in the sense that the operators $A_1,\dots,A_\ell$ are all different. It is therefore natural to assume that the vectors $\nu_1,\dots,\nu_N$ span $\mathbb R^\ell$ for all sufficiently large $N$. Then (6.3.33) implies
$$\hat\theta_N=B_N^{-1}A_N.$$

Exercise 6.3.18 (C) Verify that
$$\hat\theta_N-\theta_0=B_N^{-1}\bar A_N. \qquad(6.3.34)$$
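The estimator $\hat\theta_N=B_N^{-1}A_N$ is easy to try on simulated Fourier modes. The sketch below is my illustration, not the book's computation: it assumes a hypothetical two-parameter diagonal model with $\kappa_k=0$, $\nu_{1,k}=k^2$, $\nu_{2,k}=k$, $q_k=1$, so $\mu_k(\theta)=\theta_1k^2+\theta_2k$; the modes are sampled with their exact Ornstein-Uhlenbeck transition, and the integrals in (6.3.32) are replaced by Itô sums.

```python
# Simulated two-parameter MLE theta_hat = B^{-1} A; illustration only.
import numpy as np

def simulate_mle(theta=(1.0, 1.0), N=15, T=10.0, dt=2e-4, seed=1):
    rng = np.random.default_rng(seed)
    k = np.arange(1, N + 1, dtype=float)
    nu = np.vstack([k ** 2, k])               # nu_{1,k}, nu_{2,k}
    mu = theta[0] * k ** 2 + theta[1] * k     # mu_k(theta), all positive
    a = np.exp(-mu * dt)                      # exact OU transition over dt
    s = np.sqrt((1.0 - a ** 2) / (2.0 * mu))
    u = np.zeros(N)
    int_u2 = np.zeros(N)                      # int_0^T u_k^2 dt
    int_udu = np.zeros(N)                     # int_0^T u_k du_k (Ito sum)
    for _ in range(int(T / dt)):
        u_new = a * u + s * rng.standard_normal(N)
        int_u2 += u ** 2 * dt
        int_udu += u * (u_new - u)
        u = u_new
    B = (nu * int_u2) @ nu.T                  # B_{ij,N} with q_k = 1
    A = -nu @ int_udu                         # A_{i,N} with kappa_k = 0
    return np.linalg.solve(B, A)
```

Note the asymmetry predicted by the theory: $\theta_1$ (attached to the higher-order eigenvalues $k^2$) is recovered far more accurately than $\theta_2$.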

The matrix-vector structure of the problem makes the analysis more complicated compared to the one-parameter case. Accordingly, before stating and proving the corresponding theorem we will try to come up with suitable assumptions. As much as possible, we will try to keep (6.3.28) an abstract equation; for a more concrete SPDE setting, see the references cited in this chapter.

Define the diagonal matrix $S_N=\operatorname{diag}(S_{1,N},\dots,S_{\ell,N})$ with
$$S_{i,N}=\Bigg(\sum_{k=1}^N\frac{\nu_{i,k}^2}{\mu_k(\theta_0)}\Bigg)^{1/2}. \qquad(6.3.35)$$

By the Cauchy-Schwarz inequality,
$$\frac{\sum_{k=1}^N|\nu_{i,k}\nu_{j,k}|/\mu_k(\theta_0)}{S_{i,N}S_{j,N}}\le1 \qquad(6.3.36)$$
for all $i$ and $j$. Therefore, to simplify the analysis, assume that, for every $i,j=1,\dots,\ell$, the limits
$$G_{ij}=\lim_{N\to+\infty}\frac{\sum_{k=1}^N\nu_{i,k}\nu_{j,k}/\mu_k(\theta_0)}{S_{i,N}S_{j,N}} \qquad(6.3.37)$$
exist, and define the matrix $G=(G_{ij},\ i,j=1,\dots,\ell)$.

By (6.3.36), there is always a possibility to define $G$ along a converging subsequence; our simplifying assumption simply ensures uniqueness of $G$. For example, the matrix $G$ is well-defined if all the eigenvalues have the algebraic asymptotic
$$\kappa_k\simeq c_0k^{\alpha_0},\qquad \nu_{i,k}\simeq c_ik^{\alpha_i}. \qquad(6.3.38)$$

Exercise 6.3.19 (A) (a) Compute $G_{ij}$ under the assumption (6.3.38). (b) Give an example when the limit in (6.3.37) does not exist.

Exercise 6.3.20 (C) Let $\bar A_N$ be the vector defined in (6.3.32). Using Theorem 6.3.7, verify that if
$$\lim_{N\to+\infty}S_{i,N}=+\infty,\qquad i=1,\dots,\ell, \qquad(6.3.39)$$
and $\sup_k|\nu_{i,k}|/\mu_k(\theta_0)<\infty$ for all $i$, then
$$\lim_{N\to+\infty}S_N^{-1}B_NS_N^{-1}=TG/2\ \text{ with probability one} \qquad(6.3.40)$$
and
$$\lim_{N\to+\infty}S_N^{-1}\bar A_N\overset{d}{=}\xi, \qquad(6.3.41)$$
where $\xi$ is a Gaussian vector with mean zero and covariance matrix $TG/2$.

Hint. See the proof of Theorem 6.3.13.

As (6.3.40) suggests $B_N\approx S_NGS_N$, the key question becomes non-degeneracy of the matrix $G$. The question is non-trivial indeed, because $G$ can be singular even if every $B_N$ is not. As an example, consider $\ell=2$, $A_0=0$, $\nu_{1,k}=k$ and $\nu_{2,k}=k+1$, $\theta_1>0$, $\theta_2>0$. Then direct computations show that any two of the vectors $\nu_k$, $k\ge1$, are linearly independent, but
$$G=\begin{pmatrix}1&1\\1&1\end{pmatrix}$$
is singular.

If the operators $A_i$ are (pseudo-)differential, then, assuming the power asymptotic (6.3.38) for the corresponding eigenvalues, one can show that $G$ is non-degenerate if no two of the operators $A_i$, $A_j$ have the same order.

Exercise 6.3.21 (A) Assume that (6.3.38) holds and $\alpha_i\ne\alpha_j$ for $i\ne j$. Show that $G$ is invertible.
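The singular-$G$ example can be confirmed numerically. In the sketch below (my illustration, not from the book), the matrix of (6.3.37) is evaluated at a large finite $N$ for $\nu_{1,k}=k$, $\nu_{2,k}=k+1$, $\mu_k=\theta_1k+\theta_2(k+1)$; its off-diagonal entry approaches $1$ and its determinant approaches $0$:

```python
# Finite-N approximation of the normalized matrix G from (6.3.37)
# for nu_{1,k} = k, nu_{2,k} = k+1.  Illustration only.
import numpy as np

def normalized_G(theta1, theta2, N):
    k = np.arange(1, N + 1, dtype=float)
    nu = np.vstack([k, k + 1.0])            # rows: nu_1, nu_2
    mu = theta1 * k + theta2 * (k + 1.0)    # mu_k(theta_0)
    M = (nu / mu) @ nu.T                    # M_ij = sum_k nu_i nu_j / mu_k
    S = np.sqrt(np.diag(M))                 # S_{i,N}
    return M / np.outer(S, S)
```

The degeneracy reflects the fact that the two sequences of eigenvalues have the same growth rate, in line with Exercise 6.3.21.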

Theorem 6.3.22 Assume that the vectors $\nu_1,\dots,\nu_N$ span $\mathbb R^\ell$ for all sufficiently large $N$, and assume that the vector $\theta_0\in\mathbb R^\ell$ is such that

1. $\mu_k(\theta_0)>0$ for all $k\ge1$ and $\lim_{k\to+\infty}\mu_k(\theta_0)=+\infty$;
2. $\sup_k|\nu_{i,k}|/\mu_k(\theta_0)<\infty$, $i=1,\dots,\ell$;
3. the series $\sum_k\nu_{i,k}^2/\mu_k(\theta_0)$ diverges for every $i=1,\dots,\ell$;
4. the matrix $G$ is well-defined according to (6.3.37) and is invertible.

Then
$$\lim_{N\to+\infty}|\hat\theta_N-\theta_0|=0\ \text{ with probability one}$$
and
$$\lim_{N\to+\infty}S_N\big(\hat\theta_N-\theta_0\big)\overset{d}{=}\xi,$$
where the matrix $S_N$ is from (6.3.35) and $\xi$ is a Gaussian random vector with mean zero and covariance matrix $(2/T)G^{-1}$.

Proof Since $\hat\theta_N-\theta_0=B_N^{-1}\bar A_N$, we have, with a suitable matrix norm $\|\cdot\|$,
$$|\hat\theta_N-\theta_0|\ \le\ \big\|S_NB_N^{-1}S_N\big\|\,\frac{|\bar A_N|}{\min_iS_{i,N}^2}.$$
By assumption and using the strong law of large numbers, the matrix norm on the right-hand side stays bounded, and the rest converges to zero with probability one. Similarly,
$$S_N\big(\hat\theta_N-\theta_0\big)=\big(S_NB_N^{-1}S_N\big)\,S_N^{-1}\bar A_N,$$
and the result follows from (6.3.41).

6.3.4 Problems

The objective of the problems below is to bring the presentation in this section closer to the more detailed analysis of the SPDE with one parameter in Sect. 6.2. Problem 6.3.1 is about extending Theorem 6.3.4 to non-zero initial conditions. Problem 6.3.2 is about alternative ways to define the solution of the abstract evolution equation (6.3.15). Problem 6.3.3 is an easy example of extending Theorem 6.3.13 beyond an evolution equation. Problem 6.3.4 is a technical result about the asymptotic behavior of the integral of the square of an Ornstein-Uhlenbeck process; this result is the key for the proof of consistency and asymptotic normality in Theorem 6.3.13. Problem 6.3.5 is an invitation to extend the results of Theorem 6.2.22 to some of the models in this section.

Problem 6.3.1 Suppose that the initial condition in (6.3.2) is Gaussian with mean $m_k$ and variance $\sigma_k^2$. Find sufficient conditions on $m_k$ and $\sigma_k$ for (6.3.9) to hold.

Problem 6.3.2 (a) Discuss the possibility of defining the variational solution for the abstract evolution equation (6.3.15). (b) Discuss the possibility of defining a mild solution for the abstract evolution equation (6.3.15) when $q_k=1$ for all $k$.

Problem 6.3.3 State and prove an analogue of Theorem 6.3.13 when the processes (6.3.18) are iid.

Problem 6.3.4 Let $X=X(t;a,\sigma)$ be the solution of
$$dX=-aX\,dt+\sigma\,dw(t),\qquad t>0,\ a>0,$$

with initial condition $X(0;a,\sigma)\overset{d}{=}\mathcal N(x_0,\sigma_0^2)$ independent of the standard Brownian motion $w$. Verify that
$$\lim_{a\to+\infty}a\,\mathbb E\int_0^TX^2(t;a,\sigma)\,dt=\frac{\sigma^2T+\sigma_0^2+x_0^2}{2};$$
$$\mathbb E\Bigg(\int_0^TX^2(t;a,\sigma)\,dt\Bigg)^{\!n}\le C(n)\,\frac{x_0^{2n}+\sigma_0^{2n}+\sigma^{2n}T^n}{a^n},\qquad n=1,2,\dots,\ a>0;$$
$$\operatorname{Var}\int_0^TX^2(t;a,\sigma)\,dt=\frac{\operatorname{Var}X_0^2}{4a^2}+\frac{\sigma^4T+4\big(x_0^2+\sigma_0^2\big)\sigma^2}{2a^3}+o\big(a^{-3}\big),\qquad a\to+\infty.$$

Problem 6.3.5 Discuss potential difficulties in extending Theorem 6.2.22 to (a) the two-parameter SPDE model (6.3.1); (b) the one-parameter diagonal abstract evolution equation (6.3.15) under the conditions of Theorem 6.3.13 (that is, without assuming any particular asymptotic for $\kappa_k$ and $\nu_k$).
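The first limit in Problem 6.3.4 can be checked numerically from the exact second moment of the Ornstein-Uhlenbeck process, $\mathbb EX^2(t)=(x_0^2+\sigma_0^2)e^{-2at}+\sigma^2(1-e^{-2at})/(2a)$. The sketch below (my illustration, not a solution to the problem) evaluates $a\int_0^T\mathbb EX^2\,dt$ in closed form and compares it with $(\sigma^2T+\sigma_0^2+x_0^2)/2$ for large $a$:

```python
# Closed-form a * int_0^T E X^2(t;a,sigma) dt for the OU process.
import math

def a_times_mean_energy(a, T, x0, s0, s):
    decay = 1.0 - math.exp(-2.0 * a * T)
    integral = (x0 ** 2 + s0 ** 2) * decay / (2.0 * a) \
        + s ** 2 * (T - decay / (2.0 * a)) / (2.0 * a)
    return a * integral
```

For $a=10^4$ the value already agrees with the limit to about four decimal places.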

Problems of Chap. 2

2.1.1 (page 35) For every $t>0$, the function $B^H(t,\cdot)$ is Hölder continuous of any order less than $1/2$; for every $x\in(0,\pi)$, the function $B^H(\cdot,x)$ is Hölder continuous of any order less than $H$. Indeed, $B^H$ is Gaussian;
$$\mathbb E\big|B^H(t,x)-B^H(t,y)\big|^2=t^{2H}\sum_{k\ge1}\frac{|\cos(kx)-\cos(ky)|^2}{k^2} \le t^{2H}|x-y|^{1-\delta}\sum_{k\ge1}k^{-1-\delta},\quad \delta\in(0,1),$$
because, by direct computation, $\cos(a)-\cos(b)=2\sin\big((a+b)/2\big)\sin\big((b-a)/2\big)$ and $|\sin x|\le|x|^{\varepsilon}$ for every $\varepsilon\in(0,1]$;
$$\mathbb E\big|B^H(t,x)-B^H(s,x)\big|^2=|t-s|^{2H}\sum_{k\ge1}\frac{\big(1-\cos(kx)\big)^2}{k^2}.$$

2.1.2 (page 35) If $\inf_kH_k=H_0>0$, then the analysis of the previous solution shows that the upper bound on the order of the Hölder continuity in time will be $H_0$. The case $H_0=0$ is left as a challenge to the reader.

2.1.3 (page 35) The covariance function has to be a positive-definite kernel (see page 94). An easy way to prove that a function is a positive-definite kernel is to interpret the

© Springer International Publishing AG 2017 S.V. Lototsky, B.L. Rozovsky, Stochastic Partial Differential Equations, Universitext, DOI 10.1007/978-3-319-58647-2


function as the covariance of some Gaussian field, but this is unlikely to work in this problem. The details are left to the reader.

2.2.1 (page 50) (a) Direct computation. (b) Note that the Fourier transform $\hat v=\hat v(t,y)$ of the solution of the homogeneous equation satisfies
$$\hat v_t=-a(t)\,y^2\,\hat v.$$
We need the solution for $t\ge s$ satisfying $\hat v(s,y)=1$. If $A(t)=\int_0^ta(s)\,ds$, then we conclude that $\Phi(t,s)$ is the integral operator
$$\big(\Phi(t,s)h\big)(x)=\frac{1}{\sqrt{4\pi\big(A(t)-A(s)\big)}}\int_{-\infty}^{\infty}\exp\Bigg(-\frac{(x-y)^2}{4\big(A(t)-A(s)\big)}\Bigg)h(y)\,dy.$$
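The kernel above is a Gaussian density with variance $2\big(A(t)-A(s)\big)$, so applying $\Phi(t,s)$ to $h(y)=y^2$ must return $x^2+2\big(A(t)-A(s)\big)$. The quadrature sketch below (my illustration, not part of the book) confirms this:

```python
# Apply the reconstructed kernel to h(y) = y^2 by brute-force quadrature;
# v stands for A(t) - A(s) > 0.  Illustration only.
import numpy as np

def apply_kernel_to_square(x, v):
    h = np.sqrt(2.0 * v)                       # std. dev. of the Gaussian
    dy = 1e-4
    y = np.arange(x - 12.0 * h, x + 12.0 * h, dy)
    w = np.exp(-(x - y) ** 2 / (4.0 * v))
    return np.sum(w * y ** 2) * dy / np.sqrt(4.0 * np.pi * v)
```

This is a convenient way to double-check the normalization $\sqrt{4\pi(A(t)-A(s))}$.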

2.2.2 (page 50) (a), (b) Direct computations. (c) There is a difficulty interpreting the integral
$$\int_0^t\frac{X(t)}{X(s)}\,f(s)\,dw(s),$$
because $X(t)$ is not $\mathcal F^w_s$-measurable for $s<t$. Note that the integral is NOT the same as
$$X(t)\int_0^t\frac{f(s)}{X(s)}\,dw(s).$$
While the difficulty can be resolved using anticipating stochastic calculus (see Nualart [175, Sect. 3.2]), it is much easier to study the equation without using the closed-form solution. (d) An $\mathcal F^w_t$-adapted locally square-integrable $a$ will work. In this case, the Itô formula shows that
$$\Phi(t,s)=\exp\Bigg(\int_s^ta(r)\,dw(r)-\frac12\int_s^ta^2(r)\,dr\Bigg).$$

2.2.3 (page 51) (a) Here is one possible definition. Assume that $A$ is a linear, and possibly unbounded, operator on a Banach space $V$. Then $u$ is a classical solution of $u(t)=u_0+\int_0^tAu(s)\,ds$ if $u:[0,T]\to V$ is a continuous function for every $T>0$, $u(t)$ is in the domain of $A$ for all $t\ge0$, $u(0)=u_0$, and $u(t)=u_0+\int_0^tAu(s)\,ds$ in $V$ for all $t\ge0$. For the same equation in the

differential form, the mapping $u:[0,T]\to V$ would have to be continuously differentiable and $Au:[0,T]\to V$, continuous. (b) The operator $A$ should be acting on functions from $L_2(G)$, where $G\subseteq\mathbb R^d$, and the formal adjoint $A^*$ of $A$ must be defined using integration by parts, so that $(Af,g)_{L_2(G)}=(f,A^*g)_{L_2(G)}$ for all smooth compactly supported $f,g$. Then the equality in [W5] becomes
$$\mu_t[\varphi]=\mu_0[\varphi]+\int_0^t\mu_s[A^*\varphi]\,ds.$$
For $A_3$, the function $a$ can be just bounded and measurable; for $A_2$ we would need one bounded generalized derivative in $x$, and for $A_1$, two. Note that positivity of $a$ is not necessary to define the solution, but is usually required to find one.

2.2.4 (page 51) Substitution of $v_N$ produces the term $AN(t)$ and changes $F(t,u)$ to $F\big(t,u-N(t)\big)$. When the operator $A$ is unbounded, the process $N_A$ typically has better regularity properties than either $N(t)$ or $AN(t)$ (think of $A=\Delta$ and $\Phi(t)$, the heat semigroup), meaning that $v(t)=u(t)-N_A(t)$ is usually the better choice. A trivial example when $v_N$ is easier is a linear equation such that $AN(t)=0$.

2.2.5 (page 51) Note that, by the Kolmogorov criterion, $W(t,x)$ as a function of $x$ is Hölder continuous of every order less than $1/2$. Thus, the equation cannot have a classical solution or solutions [W1], [W2]. The class of the test functions should reflect the boundary conditions (for example, for [W3], the test functions can be infinitely differentiable on $[0,1]$ with derivatives equal to zero at $0$ and $1$). Solutions [W3] and [W4] are equivalent for this equation and the resulting function $u$ is actually continuous in $t$ and $x$ (see Walsh [223, Theorem 3.2]). A measure-valued solution also exists and has a density for $t>0$ [120, Sect. 8.3].

2.2.6 (page 51) This is an open-ended question. Note that, in the case of stochastic equations, one can take random test functions $\varphi(x)$ or $\varphi(t,x)$, and then take expectation along with integration in space and/or time. To take the expectation, one would have to impose a priori integrability conditions on the solution.

2.3.1 (page 68) (a,b) $X_i(t)=\sum_{k=1}^{\infty}\sigma_{ik}w_k(t)$;
$$v_t=\sum_{i,j=1}^d\Bigg(a_{ij}-\frac12\sum_{k=1}^{\infty}\sigma_{ik}\sigma_{jk}\Bigg)D_iD_jv;$$
the matrix $\big(a_{ij}-\frac12\sum_{k=1}^{\infty}\sigma_{ik}\sigma_{jk}\big)$ must be non-negative definite.

(c) The same equation for $v$, but the equation is now considered in the domain $\{(t,x+X(t)):t>0,\ x\in G\}$ with the boundary condition $v(t,x+X(t))=0$ for $x$ on the boundary of $G$.

2.3.2 (page 68) (a) Use (2.2.7) on page 43. The integral in space is, up to a constant, a convolution of two normal densities and can be evaluated. The result is
$$u(t,x)=\int_0^t\Bigg(\frac{f(s)}{t-s+f(s)}\Bigg)^{d/2}\exp\Bigg(-\frac{x^2}{2\big(t-s+f(s)\big)}\Bigg)\,dw(s).$$
Then
$$\mathbb Eu^2(t,x)=\int_0^t\Bigg(\frac{f(s)}{t-s+f(s)}\Bigg)^{d}\exp\Bigg(-\frac{x^2}{t-s+f(s)}\Bigg)\,ds.$$
A limiting distribution can exist (for example, if $f(t)=1$ and $d>1$).

(b)
$$u(t,x)=\frac{8}{\pi}\sum_{k=0}^{\infty}\frac{\sin\big((2k+1)x\big)}{(2k+1)^3}\exp\Bigg(-\Big(k+\frac12\Big)^2t-\frac t2+w(t)\Bigg);$$
$$\mathbb Eu^2(t,x)=e^{t}\Bigg(\frac{8}{\pi}\sum_{k=0}^{\infty}\frac{\sin\big((2k+1)x\big)}{(2k+1)^3}\,e^{-(k+\frac12)^2t}\Bigg)^2;$$
no limiting distribution.

(c)
$$u(t,x)=\sqrt{\frac2\pi}\,\sum_{k=1}^{\infty}\int_0^ty_k(t-s)f(s)\,dw_k(s)\,\sin(kx),$$
where $y_k''(t)+by_k'(t)+k^2y_k(t)=0$, $y_k(0)=0$, $y_k'(0)=1$ (see (2.3.26) on page 58);
$$\mathbb Eu^2(t,x)=\frac2\pi\sum_{k=1}^{\infty}\int_0^ty_k^2(t-s)f^2(s)\,ds\,\sin^2(kx).$$
A limiting distribution exists, for example, if $b>0$ and $f=1$.

2.3.3 (page 69) Note that, for each $k\ge1$,
$$u(t,x)=k^{-2}\,e^{k^2t+2ikw(t)+ikx}$$
is a solution.

2.3.4 (page 69) Similar to (2.3.18) on page 56, we have
$$\mathbb E\|u(t,\cdot)\|^2_{L_2(G)}=\sum_{k\ge1}\frac{e^{2\lambda_kt}-1}{2\lambda_k},$$
where $\{\lambda_k,\ k\ge1\}$ are the eigenvalues of the Laplace operator in $G$ with zero boundary conditions. The key observation is that $\lim_{k\to\infty}|\lambda_k|/k=c$ for some positive number $c$, which is true for every smooth bounded domain $G\subset\mathbb R^2$, with $c$ depending only on the domain: see, for example, Safarov and Vassiliev [201, Sect. 1.2].

2.3.5 (page 69) Taking $u$ to be the closed-form solution

$$u(t,x)=\int_0^t\int_{\mathbb R^d}K(t-s,x-y)\,dW(s,y),\qquad K(t,x)=\frac{1}{(4\pi t)^{d/2}}\,e^{-|x|^2/(4t)},$$
use (1.1.9) on page 4 to conclude that
$$\mathbb Eu^2(t,x)<\infty\ \Longleftrightarrow\ \int_0^t\frac{ds}{(t-s)^{d/2}}<\infty\ \Longleftrightarrow\ d=1.$$

2.3.6 (page 70) (a)
$$u(x,y)=\frac2\pi\sum_{n,k=1}^{\infty}\frac{\xi_{nk}}{k^2+n^2}\,\sin(nx)\sin(ky);$$
$$\mathbb E\iint_Gu^2(x,y)\,dx\,dy=\sum_{n,k=1}^{\infty}\frac{1}{(n^2+k^2)^2}<\frac\pi4\sum_{k=1}^{\infty}k^{-3}<\frac{3\pi}{8}.$$

(b) If d D 1; 2; 3. 2.3.7 (page 70) We have 1 u.x/ D 2

Z

C1

e 1

jxyj

1 dW.y/ D 2

Z

C1

e x

xy

1 W.y/dy  2

Z

x

eyx W.y/dy; 1

where the first equality follows by the Fourier transform and the second, after integration by parts. Uniqueness follows because the general solution of u00  u D 0


is c1 ex C c2 ex ; it is unbounded unless c1 D c2 D 0. Next, Eu2 .x/ D

1 4

Z

C1

e2jxyj dy D

1

1 : 4

As a result, Ekuk2H  .R/ D C1 for all  2 R, just like the non-zero constant 2

function is not in any Sobolev space H  Rd /. If you need to be more rigorous, see Problem 4.2.3 on page 181. 2.3.8 (page 70) (a) The solution is 0 u.t; x/ D p C 

r

2 X k2 t e k cos.kx/;  k1

where k are iid standard normal random variables. This function is infinitely differentiable in the region f.t; x/ W t > 0; x 2 .0; /g. (b) The solution is r u.t; x/ D

2 X k sin.kx/ sin.kt/:  k1 k

By the Kolmogorov criterion, this function is Hölder continuous of any order less than 1=2 in the region f.t; x/ W 0  t  T; x 2 Œ0; g for every T > 0. (c) Let x D r cos ; y D r sin . Note that the functions rk cos k; rk sin k, k D 0; 1; 2; : : : are harmonic. Therefore  1  X 0 k k C u.r; / D p p cos.k/ C p sin.k/ rk ;   2 kD1

(S.1)

where 0 ; 1 ; 1 ; 2 ; 2 ; : : : are iid standard Gaussian random variables; recall p k ; sin pk ; k  1g: This that an orthonormal basis in L2 ..0; 2// is f p1 ; cos   2

function is analytic in the region f.x; y/ W x2 C y2 < 1g: the series in (S.1) converges uniformly in .x; y; !/ in the region f.x; y/ W x2 C y2 < Rg for every R < 1.
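A short argument for the uniform convergence (a sketch; it uses $\mathsf{E}|\xi|=\sqrt{2/\pi}$ for a standard Gaussian $\xi$):

```latex
% For x^2 + y^2 <= R < 1, i.e. r <= sqrt(R), the series is dominated by
\[
\sum_{k\ge 1}\big|\xi_k\cos(k\theta)+\eta_k\sin(k\theta)\big|\,r^{k}
\le \sum_{k\ge 1}\big(|\xi_k|+|\eta_k|\big)R^{k/2},
\]
% and the majorant is finite with probability one, since
\[
\mathsf{E}\sum_{k\ge 1}\big(|\xi_k|+|\eta_k|\big)R^{k/2}
=2\sqrt{\frac{2}{\pi}}\,\frac{R^{1/2}}{1-R^{1/2}}<\infty.
\]
% The Weierstrass M-test then gives uniform convergence on the disk of radius sqrt(R).
```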

2.3.9 (page 70) (a) Note that $\mathsf{E}B^2(0)=\mathsf{E}B^2(T)=0$. (b) For the first three, verify (2.3.70). For the last, show that the limit is the same as the third representation (see also page 14 and [71, Theorem 3.3]). As far as simulations are concerned, note that the first representation is not adapted and the second has a (removable) singularity at the end point.
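For part (a), one standard representation of the Brownian bridge on $[0,T]$ (used here as an assumption, since the representations compared in the problem are not reproduced in this chunk) makes the claim immediate:

```latex
% With B(t) = w(t) - (t/T) w(T) and E[w(s)w(t)] = min(s,t):
\[
\mathsf{E}B^2(t)
= t - 2\,\frac{t}{T}\,\mathsf{E}\big[w(t)w(T)\big] + \frac{t^2}{T^2}\,T
= t - \frac{2t^2}{T} + \frac{t^2}{T}
= \frac{t(T-t)}{T},
\]
% which vanishes at t = 0 and t = T, so E B^2(0) = E B^2(T) = 0.
```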


2.3.10 (page 71) (b) Note that
\[
\frac{\partial^n}{\partial t^n}\,e^{xt-(t^2/2)}\Big|_{t=0}
= e^{x^2/2}\,\frac{\partial^n}{\partial t^n}\,e^{-(x-t)^2/2}\Big|_{t=0}.
\]
(c) Compute
\[
\mathsf{E}\Big(e^{s\xi-(s^2/2)}\,e^{t\xi-(t^2/2)}\Big)
\]
directly and using (2.3.37), and compare the results.

2.3.13 (page 72) Since
\[
\lim_{\varepsilon\to 0}\int_{\varepsilon}\cdots
\]

, 88
$|A|$ (for operators), 91

$\|A\|_1$, 92, 97
tr, 92
$L_0(X,Y)$, 97
$L_1(X,Y)$, 97
$L_2(X,Y)$, 97
$K(X,Y)$, 97
$1_A(s)$, 103
$\langle M\rangle(t)$ (scalar $M$), 120
$\langle M,N\rangle(t)$, 120
$\langle M\rangle(t)$ (vector $M$), 121
$\langle M\rangle(t)$ ($H$-valued $M$), 122
$Q_M$, 122
$H_0^1(G)$, 165
$E_h$, 221
, 234
$J^1$, 236
(k), 239
$u_k$, 243
$D_B$, 243
$R$, 255
$RL_2(\;;H)$, 256
$F_t^w$, 265
$J^2$, 270
$\stackrel{d}{=}$, 384
$X\stackrel{\mathcal{L}}{=}Y$, 384
$[a]$ (units of measurement of $a$), 389
, 397, 411, 418
U N; , 400

© Springer International Publishing AG 2017 S.V. Lototsky, B.L. Rozovsky, Stochastic Partial Differential Equations, Universitext, DOI 10.1007/978-3-319-58647-2


Index

a priori bound (estimate), 150
abstract evolution equation, 41
adapted process, 36
algebraic tensor product, 81
almost $C^{\varepsilon/2}(G)$, 29
annihilation operator, 71
asymptotically efficient, 422

Banach space, 77
BDG inequality, 8, 123
BELLMAN, R. E., 9
Berry-Esseen bound, 425
bi-linear equation, 38
Bichteler-Dellacherie theorem, 38
bijection, 78
bilinear equation, 40
Borel sigma-algebra, 76
Brownian bridge, 70
Brownian motion, 114
Brownian sheet, 3, 103
Burkholder-Davis-Gundy inequality, 8, 123

Cameron-Martin expansions, 239
Cameron-Martin formula, 432, 439
Cameron-Martin space, 470, 472
Cameron-Martin theorem, 438
canonical process, 100
canonical random element, 100
centered Gaussian measure, 100
chaos expansion, 107
chaos solution, 49
characteristic set, 271

classical solution, 42
closed set, 76
closed-form exact estimator, 452
closed-form solution, 43
closure of a set, 76
COLE, J. D., 63
compact set, 76
conditional independence, 110
conic section, 146
continuous embedding, 78
continuous mapping, 76
contraction mapping theorem, 77
convergent sequence of points, 76
correlation operator of a martingale, 122
covariance function, 102
covariance operator, 102, 105
creation operator, 71
cross variation, 120
cylindrical x-process, 116
cylindrical Brownian motion, 117
cylindrical Gaussian process, 114
cylindrical process, 114
cylindrical random element, 105
cylindrical Wiener process, 117

dense subset, 76
derivative of a family of operators, 161
deterministic part, 37
diagonal equation, 382, 395, 444, 453
dimensional analysis, 389
Doob-Meyer decomposition, 119
dual operator, 88
dual space, 84


elliptic equation
  deterministic, 148, 161
  stochastic, 170
elliptic operator, 148
embedding operator, 78
epsilon inequality, 8
equation
  fully non-linear, 39
  linear, 39
  quasilinear, 39
  semi-linear, 39
equivalent norms, 77
Euclidean free field, 127, 171, 182, 484
evolution equations, 37

Feynman-Kac formula, 52
filtering problem, 16, 227
finite-dimensional noise, 5
FISK, D. L., 31
fixed point iterations, 77
formal adjoint, 88
forward problem, 36
Fourier transform, 6, 80
  standard normal density, 6
Fréchet derivative, 137
fractional Brownian motion, 27
FRIEDRICHS, K. O., 165
fully nonlinear equation, 39
fundamental solution, 44, 50
fundamental theorem of calculus, 96

GALERKIN, B. G., 154
Galerkin approximation, 154
Gamma function, 7
Gaussian chaos space, 234
Gaussian field, 102, 105
  different representations, 171
Gaussian measure, 100
Gaussian process, 114
Gaussian white noise, 104, 105
  same noise on different spaces, 110
  Sobolev space regularity, 173
generalized field, 105
generalized Itô formula, 32
generalized process, 114
generalized random element, 105
generalized random processes, 279
generator of a process, 164
generator of the semigroup, 155
geometric Brownian motion, 50
germ field, 112

germ Markov property, 112
germ sigma algebra, 112
GIRSANOV, I. V., 403
Girsanov theorem, 403
global Markov property, 113
GRONWALL, T. H., 9
Gronwall's inequality, 9

Hölder inequality, 9
heat balance equation, 381
Hermite polynomials $H_n$, 61, 71
Hermite polynomials $\bar{H}_n$, 166
Hermite functions, 167
Hilbert scale, 79, 81, 208
Hilbert space, 77
Hilbert space tensor product, 82
Hille-Yosida theorem, 156
HÖLDER, O. L., 10
homogeneous Gaussian field, 102, 126
HOPF, E. F. F., 63
Hopf-Cole transformation, 63
Hurst parameter, 27
hyperbolic equation
  deterministic, 148, 162
  stochastic, 185

indicator function, 103, 129
input data, 36
integral operator, 93
inverse flow, 214
inverse Fourier transform, 6
inverse operator, 90
inverse problems, 36
isometry of the Fourier transform, 7
isonormal Gaussian process, 105
Itô integral, 30
ITÔ, K., 30
Itô-Wentzell formula, 32
Itô formula, 6, 136, 137, 139
Itô isometry, 6

JENSEN, J. L. W. V., 10

KARHUNEN, K., 108
Karhunen-Loève expansion, 108, 126
kernel, 93
KOLMOGOROV, A. N., 25
Kolmogorov's continuity criterion, 25
Krylov-Veretennikov formula, 316

Lévy Brownian motion, 29, 181
Lévy characterization of the Brownian motion, 387
Laplacian
  asymptotics of eigenvalues, 174
Lax-Milgram theorem, 85
linear equation, 39
linear functional, 83
linear topological space, 77
local asymptotic normality, 420
local likelihood ratio, 419
LOÈVE, M., 108
loss function, 422

M-admissible integrand, 130, 134
Markov free field, 171
Markov property
  sharp, 111
martingale, 119
martingale problem, 41
matrix trace, 92
measure space, 76
measure-valued solution, 47
MERCER, J., 94
Mercer's theorem, 94
method of continuity, 153
mild solution, 43, 400
modification, 25, 26
multi-channel system, 445

neighborhood of a point, 76
normal triple, 78
normalized Hermite polynomials, 236
Novikov condition, 405

open set, 76
operator
  adjoint, 86
  bounded, 84
  closed, 84
  compact, 84
  formal adjoint, 88
  Hilbert-Schmidt, 91
  non-negative, 86
  nuclear, 84, 91
  polar decomposition, 90
  self-adjoint, 86
  trace-class, 91
operator norms, 97
Ornstein-Uhlenbeck bridge, 14

Ornstein-Uhlenbeck process, 14
orthonormal collection, 90

parabolic equation
  degenerate, 205
  deterministic, 148, 163
  stochastic, 199
Parseval's identity, 6
partial isometry, 91
Plancherel's identity, 7
Poincaré constant, 165
Poincaré inequality, 165
POINCARÉ, J. H., 165
polar decomposition, 90
Polish metric space, 77
polynomial chaos solution, 253
positive measure, 76
positive measure space, 76
positive-definite kernel, 94, 125
potential theory, 196
probability distribution, 100
propagator, 252, 259

Q-cylindrical Brownian motion, 116
Q-cylindrical Wiener process, 116
Q-cylindrical x-process, 116
quadratic variation, 119
quasilinear equation, 39

random field, 25
random function, 25
random process, 25
rate of convergence, 386
reflexive space, 86
regular Q-cylindrical process, 117
regular generalized field, 106, 127
regular measure, 76
regularization of white noise, 125
reproducing kernel, 95
reproducing kernel Hilbert space, 94
  and Markov property, 112
  of a Gaussian field, 107
  of a Gaussian measure, 101
  of a Gaussian random element, 101
RIESZ, F., 85
Riesz representation theorem, 84