Uncertainty Quantification using R 3031177843, 9783031177842

This book is a rigorous but practical presentation of the techniques of uncertainty quantification, with applications in


English, 767 pages, 2023


Table of contents :
Introduction
Contents
Chapter 1: Some Tips to Use R and RStudio
1.1 How to Install R and RStudio
1.2 How to Include a Third-Party Add-In
1.3 How to Create a Document with RStudio
1.4 How to Create a Script with RStudio
1.5 How to Manipulate Numeric Variables, Vectors, and Factors
1.6 How to Manipulate Matrices and Arrays
1.7 How to Use Lists
1.8 Using data.frames
1.9 Plotting with RStudio
1.10 Programming with R
1.11 Classes in R
1.12 How to Solve Differential Equations with R
1.12.1 Initial Value Problems for Ordinary Differential Equations
1.12.2 Boundary Value Problems for Ordinary Differential Equations
1.13 Optimization with R
1.13.1 Linear Programming
1.13.2 Nonlinear Programming
1.13.3 Duality Methods
1.13.4 Multiobjective Optimization
1.14 Solving Equations with R
1.14.1 Systems of Linear Equations
1.14.2 Systems of Nonlinear Equations
1.14.3 Optimization and Systems of Equations
1.15 Interpolation and Approximation with R
1.15.1 Variational Approximation
1.15.2 Smoothed Particle Approximation
1.16 Integrals and Derivatives with R
1.16.1 Variational Approximation of the Derivatives
1.16.2 Smoothed Particle Approximation of the Derivative
Chapter 2: Probabilities and Random Variables
2.1 Notation
2.2 Probability
2.2.1 Mass Functions and Mass Densities
2.2.2 Combinatorial Probabilities
2.3 Independent Events
2.4 Numerical Variables on Finite Populations
2.4.1 Couples of Numerical Variables
2.4.2 Independent Numerical Variables
2.5 Numerical Variables as Elements of Hilbert Spaces
2.5.1 Conditional Probabilities as Orthogonal Projections
2.5.2 Means as Orthogonal Projections
2.5.3 Affine Approximations and Correlations
2.5.4 Conditional Mean
2.6 Random Variables
2.6.1 Numerical Evaluation of Statistics
2.7 Random Vectors
2.8 Discrete and Continuous Random Variables
2.8.1 Discrete Variables
2.8.2 Continuous Variables Having a PDF
2.9 Sequences of Random Variables
2.10 Samples
2.10.1 Maximum-Likelihood Estimators
2.10.2 Samples from Random Vectors
2.10.3 Empirical CDF and Empirical PDF
2.10.4 Testing Adequacy of a Sample to a Distribution
2.10.5 Testing the Independence of a Couple of Variables
2.11 Generating Triangular Random Numbers
2.12 Generating Random Numbers by Inversion
2.13 Generating Random Vectors with a Given Covariance Matrix
2.14 Generating Regular Random Functions
2.15 Generating Regular Random Curves
Chapter 3: Representation of Random Variables
3.1 The UQ Approach for the Representation of Random Variables
3.2 Collocation
3.3 Variational Approximation
3.4 Moments Matching Method
3.4.1 The Standard Formulation of M3
3.4.2 Alternative Formulations of M3
3.5 Multidimensional Expansions
3.5.1 Case Where U Is Multidimensional
3.5.2 Case Where X Is Multidimensional
3.6 Random Functions
3.7 Random Curves
3.8 Mean, Variance, and Confidence Intervals for Random Functions or Random Curves
Chapter 4: Stochastic Processes
4.1 Ergodicity
4.2 Determination of the Distribution of a Stationary Process
4.3 White Noise
4.4 Moving Average Processes
4.5 Autoregressive Processes
4.6 ARMA Processes
4.7 Markov Processes
4.8 Diffusion Processes
4.8.1 Time Integral and Derivative of a Process
4.8.2 Simulation of the Time Integral of a White Noise
4.8.3 Brownian Motion
4.8.4 Random Walks
4.8.5 Itô's Integrals
4.8.6 Itô's Calculus
4.8.7 Numerical Simulation of Stochastic Differential Equations
Chapter 5: Uncertain Algebraic Equations
5.1 Uncertain Linear Systems
5.1.1 Very Small Linear Systems
5.2 Nonlinear Equations and Adaptation of an Iterative Code
5.3 Iterative Evaluation of Eigenvalues
5.3.1 Very Small Matrices
5.4 The Variational Approach for Uncertain Algebraic Equations
Chapter 6: Random Differential Equations
6.1 Linear Differential Equations
6.2 Nonlinear Differential Equations
6.3 Adaptation of ODE Solvers
6.4 Uncertainties on Curves Connected to Differential Equations
Chapter 7: UQ in Game Theory
7.1 The Language from Game Theory
7.2 A Simplified Odds and Evens Game
7.2.1 GT Strategies When p = (p1,p2) Is Known
7.2.2 Strategies When p Is Unknown
7.2.3 Strategies for the Stochastic Game
7.2.4 Replicator Dynamics
7.3 The Prisoner's Dilemma
7.3.1 Replicator Dynamics
7.4 The Goalie's Anxiety at the Penalty Kick
7.5 Hawks and Doves
Chapter 8: Optimization Under Uncertainty
8.1 Using the Methods of Representation
8.2 Using the Adaptation of a Descent Method
8.3 Combining Statistics of the Objective, the Constraints, and Expansions
Chapter 9: Reliability
9.1 Limit State Curves
9.2 Design Point
9.3 Multiple Failure Conditions
9.4 Reliability Analysis
9.5 Hasofer-Lind Reliability Index
9.5.1 The General Situation
9.5.2 The Case of Affine Limit State Equations
9.5.3 Convex Failure Regions
9.6 Using the Reliability Index to Estimate the Probability of Failure
9.6.1 The Case of Affine Limit State Equations
9.6.2 The Case of a Convex Failure Region
9.6.3 General Failure Regions
9.7 The Transformations of Rosenblatt and Nataf
9.8 FORM and SORM
9.8.1 First-Order Reliability Method (FORM)
9.8.2 Second Order Reliability Method (SORM)
9.9 Reliability Based Design Optimization
9.9.1 The Bilevel or Double Loop Approach for a Desired β
9.9.2 The Bilevel or Double Loop Approach for a Desired Objective
Bibliography
Index


International Series in Operations Research & Management Science

Founding Editor Frederick S. Hillier, Stanford University, Stanford, CA, USA

Volume 335 Series Editor Camille C. Price, Department of Computer Science, Stephen F. Austin State University, Nacogdoches, TX, USA Associate Editor Joe Zhu, Foisie Business School, Worcester Polytechnic Institute, Worcester, MA, USA Editorial Board Members Emanuele Borgonovo, Department of Decision Sciences, Bocconi University, Milan, Italy Barry L. Nelson, Department of Industrial Engineering & Management Sciences, Northwestern University, Evanston, IL, USA Bruce W. Patty, Veritec Solutions, Mill Valley, CA, USA Michael Pinedo, Stern School of Business, New York University, New York, NY, USA Robert J. Vanderbei, Princeton University, Princeton, NJ, USA

The book series International Series in Operations Research and Management Science encompasses the various areas of operations research and management science. Both theoretical and applied books are included. It describes current advances anywhere in the world that are at the cutting edge of the field. The series is aimed especially at researchers, advanced graduate students, and sophisticated practitioners. The series features three types of books: • Advanced expository books that extend and unify our understanding of particular areas. • Research monographs that make substantial contributions to knowledge. • Handbooks that define the new state of the art in particular areas. Each handbook will be edited by a leading authority in the area who will organize a team of experts on various aspects of the topic to write individual chapters. A handbook may emphasize expository surveys or completely new advances (either research or applications) or a combination of both. The series emphasizes the following four areas: Mathematical Programming: Including linear programming, integer programming, nonlinear programming, interior point methods, game theory, network optimization models, combinatorics, equilibrium programming, complementarity theory, multiobjective optimization, dynamic programming, stochastic programming, complexity theory, etc. Applied Probability: Including queuing theory, simulation, renewal theory, Brownian motion and diffusion processes, decision analysis, Markov decision processes, reliability theory, forecasting, other stochastic processes motivated by applications, etc. Production and Operations Management: Including inventory theory, production scheduling, capacity planning, facility location, supply chain management, distribution systems, materials requirements planning, just-in-time systems, flexible manufacturing systems, design of production lines, logistical planning, strategic issues, etc. Applications of Operations Research and Management Science: Including telecommunications, health care, capital budgeting and finance, economics, marketing, public policy, military operations research, humanitarian relief and disaster mitigation, service operations, transportation systems, etc. This book series is indexed in Scopus.

Eduardo Souza de Cursi

Uncertainty Quantification using R

Eduardo Souza de Cursi Department Mechanics/Civil Engineering INSA Rouen Normandie Saint-Etienne du Rouvray, France

ISSN 0884-8289 ISSN 2214-7934 (electronic) International Series in Operations Research & Management Science ISBN 978-3-031-17784-2 ISBN 978-3-031-17785-9 (eBook) https://doi.org/10.1007/978-3-031-17785-9 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Introduction

This book presents a collection of methods of uncertainty quantification (UQ) – id est, a collection of methods for the analysis of numerical data, namely when uncertainty or variability is involved. The general aim of UQ is to characterize the observed variability in a quantity X by using a random variable U. In the ideal situation, the connection between X and U is perfectly known and the random variable U has a known distribution. Unfortunately, such a situation may be unrealistic in practice and we must also consider situations where this knowledge is imperfect or even non-existent – for example, situations where U is simply unknown: variability is observed without precise knowledge of the cause. UQ tries to use all the available information about (X, U) to construct an explanation of X by U, in a form which will be useful in numerical calculations involving X. The information may be, for instance, an equation, a numerical problem involving both variables, or samples. The methods of UQ are general and may be applied to a wide range of situations. They generally belong to the large and well-supplied family of methods based on functional representations, id est, on expansions of the unknowns in series of functions – we find these approaches in particular in Fourier analysis, spectral methods, finite elements, Bayesian optimization, quantum algorithms, etc. It is a large family with numerous and very diversified applications.

Our objective is to present the practical use of UQ techniques under R. We assume that you are an average user of this kind of software: if you are a user of Scilab, Octave, or Matlab®, you will recognize the instructions, and the codes presented will appear familiar. Evidently, if you are an expert in R, you will find a large number of possible improvements in our codes and programs: do not hesitate to make your own enhancements and, possibly, to share them.

R is a GNU project that develops a language and environment for statistical computing and graphics. An IDE is proposed by RStudio. The popularity of R and RStudio no longer needs to be demonstrated: you will find on the web many sites and much information about R. A wide literature can be found about this software. The community of R users proposes a wide choice of packages to extend the possibilities of R. You will find repositories containing them, for instance, https://search.r-project.org/R/doc/html/packages.html#lib-1


Chapter 1

Some Tips to Use R and RStudio

Abstract This chapter presents the essentials of R, with a focus on the manipulation of variables, plotting, and the use of data frames and classes. We present also some useful packages for standard numerical methods, which will be used in the sequel. Examples of use are given.

1.1 How to Install R and RStudio

If you do not have R installed on your computer, you must download and install it. Installers can be found on Internet, namely at https://cran.r-project.org/ and https://www.r-project.org/. These websites also propose manuals for the installation and the use of R. A large part of this chapter was inspired from these manuals, so that you can download and use them to learn the basic uses of R, which are summarized in the sequel. In this book, we use mainly the versions 4.1.0 to 4.1.3, downloaded from https://cran.r-project.org/. RStudio is a useful IDE (integrated development environment), which will make the use of R easier. RStudio exists in commercial or free versions. You will find both at the website https://www.rstudio.com/. In this book, we use the free version of RStudio Desktop 1.4.1717.

After installing R and RStudio, you will have on your desktop an icon analogous to this one, which allows launching RStudio by a double click. If it is not created, you can create it by generating a shortcut for RStudio. A double click on the icon opens the window of RStudio, shown in Fig. 1.1.



Fig. 1.1 The main window of RStudio

1.2 How to Include a Third-Party Add-In

R can be extended by add-ins usually called packages, created by the community, and offered to the users of R. You will find many packages at the website https:// cran.r-project.org/web/packages/. RStudio automatically installs the most used packages. In addition, RStudio possesses a package manager that helps you in adding new packages. Let us illustrate the use of the package manager by installing the add-in pracma, which emulates many MATLAB functions: launch RStudio and click on Packages (at right, in the middle of the window)


Then, you can see a list of the installed packages. A button Update offers the possibility of updating your installed packages. To add a new one, click on Install.

Enter the name of the package (a complete list is available at https://cran.r-project. org/web/packages/). If the name is correct, RStudio will recognize it:

Then, click on the button Install.

The installation begins. Some warnings may appear, notably if you do not have development tools installed: it does not matter, RStudio will run the installation anyway. At the end, you will get a message on the console indicating that the installation is complete and that the package is available:


The newly installed package appears in the list:

1.3 How to Create a Document with RStudio

RStudio offers the possibility of creating many types of documents: go to File>New File and RStudio will show a list of the types of documents that can be created. For instance, you can create a Notebook, where the code is executed in the document itself (analogously to other notebooks, such as Jupyter, Mathematica®, Maple®, and Matlab®). You can integrate Latex in some of these documents.


If you choose a R Notebook, you can also choose the type of output: HTML, Microsoft Word, PDF, etc. You may generate the desired output of a notebook by using the button Preview and selecting the desired output.

For a R Markdown document, the corresponding button is Knit.

In this book, we mainly use R Script files, which are sequences of instructions. They are created with the first entry of the File > New File menu and correspond to the shortcut CTRL+SHIFT+N.

1.4 How to Create a Script with RStudio

A script is a document containing a sequence of instructions. As indicated in the previous section, go to File>New File>R Script to open a blank script and enter the instructions. To execute a script stored in a file, use the menu entry Source File (shortcut CTRL+ALT+G): the file manager opens, and you must navigate to the file to be executed.

Alternatively, you can define the working directory (by default, it is username>Documents) and set it to the one containing your source file.

The option Choose Directory opens the file manager and allows you to navigate to the desired directory: you do not need the entire path to run the files in the working directory. Alternatively, you can use the command setwd in the console. For instance,


setwd("C:/Users/username/Desktop/Rfiles")

sets the working directory to C:/Users/username/Desktop/Rfiles. Then, the command source("test.R", echo=TRUE)

will produce the result at right:

Remark The command source(file) brings the variables, functions, and objects created in file into your environment (see page 60). Namely, the classes contained in your source file are created in the environment and you can use them without further action. ∎ You can save the output in a file by using the command sink(file = "filename"). To stop it, use the command sink(). The option append = TRUE allows appending the results to an existing file without erasing the previous content. For instance, add a line print(paste("x = ", x, ", y = ", y)) to the script and save it in a file test.R. Choose it as source file: you will get the results below:

Create a second file test2.R as follows:
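The book shows test2.R as a screenshot; a minimal sketch of what such a file could contain is given below. The file name results.txt and the data are assumptions for illustration.

# Hypothetical content for test2.R: redirect printed output to a file
sink(file = "results.txt")            # start redirecting the output
x <- 1:10
y <- x^2
print(paste("x = ", x, ", y = ", y))  # these lines go to results.txt
sink()                                # stop redirecting the output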


Choose this file as source and execute it. Then, a results file is created in your working directory:

1.5 How to Manipulate Numeric Variables, Vectors, and Factors

The assignment of values to variables is made by the symbols "->" and "<-".

• Quartic kernel:

$$\psi(r) = \begin{cases} \left(\tfrac{5}{2} - r\right)^4 - 5\left(\tfrac{3}{2} - r\right)^4 + 10\left(\tfrac{1}{2} - r\right)^4, & \text{if } 0 \le r \le \tfrac{1}{2} \\ \left(\tfrac{5}{2} - r\right)^4 - 5\left(\tfrac{3}{2} - r\right)^4, & \text{if } \tfrac{1}{2} \le r \le \tfrac{3}{2} \\ \left(\tfrac{5}{2} - r\right)^4, & \text{if } \tfrac{3}{2} \le r \le \tfrac{5}{2} \\ 0, & \text{otherwise} \end{cases}$$

• Quintic kernel: $\alpha(h) = \dfrac{1}{120h},\ \dfrac{7}{478\pi h^2},\ \dfrac{1}{120\pi h^3}$ (dimension 1, 2, or 3, respectively)

$$\psi(r) = \begin{cases} (3 - r)^5 - 6(2 - r)^5 + 15(1 - r)^5, & \text{if } 0 \le r \le 1 \\ (3 - r)^5 - 6(2 - r)^5, & \text{if } 1 \le r \le 2 \\ (3 - r)^5, & \text{if } 2 \le r \le 3 \\ 0, & \text{otherwise} \end{cases}$$

• Gaussian kernel: $\alpha(h) = \dfrac{1}{h\sqrt{2\pi}},\ \dfrac{1}{2\pi h^2},\ \dfrac{1}{h^3(2\pi)^{3/2}}$ (dimension 1, 2, or 3, respectively)

$$\psi(r) = e^{-r^2/2}$$

As an example, let us consider again the data in Table 1.2 and the SP approximation with h = x2 - x1 and a Gaussian kernel. The results are exhibited in Fig. 1.32: the approximation is close to the exact curve. Notice that parameter h controls the smoothness of the approximation. For instance, Fig. 1.33 shows the approximations furnished by a quintic kernel with h = x2 - x1, h = 2(x2 - x1), h = 3(x2 - x1)

Fig. 1.32 Interpolation of noisy data using the SP approach with Gaussian kernel


Fig. 1.33 Interpolation of noisy data using the SP approach with Quintic kernel. Smoothness is controlled by parameter h
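The code producing these figures is shown in the book as screenshots. The following minimal sketch uses assumed data standing in for Tables 1.1–1.2 and a normalized (Shepard-type) variant of the SP estimate with a Gaussian kernel; it is an illustration, not the book's exact program.

# Sketch: smoothed-particle (kernel) approximation of noisy data, Gaussian kernel
set.seed(1)
x <- seq(0, 2*pi, length.out = 21)            # nodes p_j (assumed data)
y <- sin(x) + rnorm(length(x), sd = 0.05)     # noisy observations y_j
h <- x[2] - x[1]                              # smoothing length h = x2 - x1
kern <- function(p, t) exp(-((t - p)/h)^2/2)/(h*sqrt(2*pi))   # alpha(h)*psi(r)
SP <- function(t) { w <- kern(x, t); sum(w*y)/sum(w) }        # normalized estimate
tt <- seq(0, 2*pi, length.out = 200)
plot(tt, sapply(tt, SP), type = "l", xlab = "x", ylab = "SP approximation")
points(x, y); lines(tt, sin(tt), lty = 2)     # data and exact curve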

1.16 Integrals and Derivatives with R

R proposes a function integrate for numerical integration of one-dimensional functions. Package pracma proposes a function integral for the same purpose. It may be useful to vectorize the function to be integrated.


As an example, consider f(x) = sin(x)/x, to be integrated over (0, 1):
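The printed code is a screenshot; a minimal sketch of the computation with base R and pracma is:

# Sketch: integrating f(x) = sin(x)/x over (0, 1)
f <- Vectorize(function(x) if (x == 0) 1 else sin(x)/x)  # vectorized integrand
integrate(f, lower = 0, upper = 1)   # base R
library(pracma)
integral(f, 0, 1)                    # pracma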

Multiple integrals can be evaluated by integrate using functions and nested integration. As an example, let us consider the evaluation of

$$\int_0^1 e^{-y^2}\, dy \int_0^y e^{x} \sin(x + y)\, dx$$

We may use the code
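The book's code is shown as a screenshot; a possible sketch with the same inner/outer structure as the formula above is:

# Sketch of the nested integration
inner <- function(y) integrate(function(x) exp(x)*sin(x + y), 0, y)$value
outer <- integrate(Vectorize(function(y) exp(-y^2)*inner(y)), 0, 1)
outer$value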

pracma proposes functions integral2 and integral3 for 2D and 3D integration. For instance:
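The printed example is an image; the following assumed example (the integrand x*y over the unit square is only an illustration) shows the call pattern of integral2:

# Sketch: 2D integration with pracma; exact value is 1/4
library(pracma)
integral2(function(x, y) x*y, 0, 1, 0, 1)$Q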

R repositories propose also packages for the numerical evaluation of derivatives, such as numDeriv:
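A brief, assumed illustration of numDeriv (not the book's own example):

# Sketch: numerical gradient and Hessian with numDeriv
library(numDeriv)
grad(sin, pi/4)                         # approx. cos(pi/4)
hessian(function(x) sum(x^2), c(1, 2))  # approx. 2*diag(2)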


Symbolic integrals may be evaluated by using packages mosaic and mosaicCalc:
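The printed code is an image; a sketch consistent with the discussion below (antiD applied to f(x) = x sin(x), whose antiderivative is sin(x) − x cos(x) up to a constant) is:

# Sketch: antiderivative as a function with mosaicCalc
library(mosaicCalc)
FF <- antiD(x*sin(x) ~ x)   # FF evaluates the antiderivative numerically
FF(1)                       # compare with sin(1) - cos(1)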

Notice that R was not capable of furnishing the analytic expression FF(x) = sin (x) - x cos (x), but generated a function which evaluates numerically the result. Symbolic integrals or derivatives of FF can be calculated, as shown at left – we determine the derivative of FF and we compare its value at x = 1 with the value of f(1).

The manipulation of sequences of antiderivatives requires some precaution, due to the existence of constants: by default, R sets the value at x = 0 to 0, as shown at left – to compare the results, we must use an integral taking the value 0 at x = 0. As shown above, symbolic derivatives are also available with R, using packages Deriv, mosaic, and mosaicCalc:


As we can observe, R was not able to determine an explicit expression for f ′(x), while it furnished an explicit formula for the derivative of each part of the function f. However, the function generated is a function and we can evaluate its derivative symbolically, as shown at left. The results furnished by package Deriv are analogous:

Notice that Deriv works on strings, and we must use eval(parse (text=)) to create a function which can be manipulated.
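A short sketch of that workflow, with an assumed expression (not the book's exact code):

# Sketch: Deriv on a character string, then converted back into a function
library(Deriv)
d <- Deriv("x*sin(x)", "x")                        # derivative returned as a string
fprime <- eval(parse(text = paste("function(x)", d)))
fprime(1)                                          # compare with sin(1) + cos(1)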

1.16.1 Variational Approximation of the Derivatives

When dealing with noisy data in limited quantity, the numerical evaluation of the derivatives may be performed by using the variational approach shown in Sect. 1.15. For instance, we can determine an approximation

$$y \approx Py = \sum_{i=0}^{k} c_i\, \varphi_i(x)$$

and evaluate

$$y' \approx (Py)' = \sum_{i=0}^{k} c_i\, \varphi_i'(x).$$

Let us apply this approach to the unnoisy data in Table 1.1: for k = 6 and φ_i(x) = (x/2π)^i, we obtain the results exhibited in Fig. 1.34. Now, let us consider the noisy data in Table 1.2 and φ_i(x) = (x/2π)^i. For k = 6, we obtain the results in Fig. 1.35.

Fig. 1.34 Derivation of unnoisy data using the variational definition of derivative

Fig. 1.35 Derivative of noisy data using the variational approximation of the data


As an alternative, we can use the variational definition of the derivative – the notion of weak derivative. Indeed, the variational derivative Dy of y verifies

$$\int_a^b v(x)\, Dy(x)\, dx = y(b)v(b) - y(a)v(a) - \int_a^b y(x)\, v'(x)\, dx, \qquad \forall v.$$

Let

$$Dy \approx \sum_{i=0}^{k} c_i\, \varphi_i(x).$$

Then, we can determine the coefficients by solving the linear system AC = B, with

$$A_{i+1,\,j+1} = \int_a^b \varphi_i(x)\, \varphi_j(x)\, dx, \qquad B_{i+1} = y(b)\varphi_i(b) - y(a)\varphi_i(a) - \int_a^b y(x)\, \varphi_i'(x)\, dx.$$

Let us apply this approach to the unnoisy data in Table 1.1: for k = 5 and φ_i(x) = (x/2π)^i, we obtain the results exhibited in Fig. 1.36. Applying this approach to the noisy data in Table 1.2, with φ_i(x) = (x/2π)^i and k = 4, produces the results in Fig. 1.37.

Fig. 1.36 Derivation of unnoisy data using the variational definition of derivative


Fig. 1.37 Derivative of noisy data using the variational definition of derivative
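A minimal sketch of the weak-derivative computation, with assumed data standing in for Table 1.1 (which is not reproduced here):

# Sketch: weak derivative via the linear system A C = B, phi_i(x) = (x/(2*pi))^i
x <- seq(0, 2*pi, length.out = 21); y <- sin(x)    # assumed "unnoisy" data
k <- 5; a <- min(x); b <- max(x)
phi  <- function(i, t) (t/(2*pi))^i
dphi <- function(i, t) if (i == 0) rep(0, length(t)) else i*t^(i - 1)/(2*pi)^i
yfun <- approxfun(x, y)                            # y(x) interpolated from the data
A <- outer(0:k, 0:k, Vectorize(function(i, j)
        integrate(function(t) phi(i, t)*phi(j, t), a, b)$value))
B <- sapply(0:k, function(i) yfun(b)*phi(i, b) - yfun(a)*phi(i, a) -
        integrate(function(t) yfun(t)*dphi(i, t), a, b)$value)
C <- solve(A, B)                                   # coefficients of Dy
Dy <- function(t) colSums(C * sapply(t, function(s) phi(0:k, s)))
plot(x, cos(x), type = "l"); lines(x, Dy(x), lty = 2)   # compare with exact derivative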

1.16.2 Smoothed Particle Approximation of the Derivative

Since

$$y(x) = \int_{-\infty}^{+\infty} y(p)\, k(p, x)\, dp,$$

we have

$$y'(x) = \int_{-\infty}^{+\infty} y(p)\, \frac{d}{dx} k(p, x)\, dp = \int_{x - A\delta}^{x + A\delta} y(p)\, \frac{d}{dx} k(p, x)\, dp,$$

which leads to the approximation

$$y'(x) \approx \sum_{j=1}^{n} \frac{m_j}{\rho_j}\, y_j\, \frac{d}{dx} k\!\left(p_j, x\right).$$

A second form of approximation is based on the equation


Fig. 1.38 Derivation of unnoisy data using the SPH approach

Fig. 1.39 Derivation of noisy data using the SP approach

$$y'(x) = \frac{\displaystyle\int \delta'(p - x)\, \big(y(p) - y(x)\big)\, dp}{\displaystyle\int (p - x)\, \delta'(p - x)\, dp},$$

which leads to the approximation

$$y'(x) \approx \frac{\displaystyle\sum_{j=1}^{n} \frac{m_j}{\rho_j}\, \big(y_j - y(x)\big)\, \frac{d}{dx} k\!\left(p_j, x\right)}{\displaystyle\sum_{j=1}^{n} \frac{m_j}{\rho_j}\, \big(p_j - x\big)\, \frac{d}{dx} k\!\left(p_j, x\right)}.$$

Let us apply this approach to the unnoisy data in Table 1.1: with h = x₂ − x₁ and a Gaussian kernel, we obtain the results exhibited in Fig. 1.38. Now, let us consider the noisy data in Table 1.2 and the SP approximation with h = 1.75(x₂ − x₁) and a Gaussian kernel. The results are exhibited in Fig. 1.39: the approximation is close to the exact curve.
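A minimal sketch of the second SP form with a Gaussian kernel; the data are assumed and the book's screenshots are not reproduced here:

# Sketch: second SP form of the derivative, Gaussian kernel k(p,x) = alpha(h)*psi(|x-p|/h)
x <- seq(0, 2*pi, length.out = 21); y <- sin(x)    # assumed data
h <- 1.75*(x[2] - x[1]); vol <- x[2] - x[1]        # m_j/rho_j taken as the node spacing
dk <- function(p, t) { r <- (t - p)/h; -(r/h)*exp(-r^2/2)/(h*sqrt(2*pi)) }  # d/dx of k(p,x)
dy <- function(t, yt) sum(vol*(y - yt)*dk(x, t)) / sum(vol*(x - t)*dk(x, t))
tt <- seq(0.3, 2*pi - 0.3, length.out = 100)
plot(tt, mapply(dy, tt, sin(tt)), type = "l")      # y(x) is known here for the test
lines(tt, cos(tt), lty = 2)                        # exact derivative for comparison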


Fig. 1.40 Derivation of noisy data using the SP approach with Quintic kernel. Smoothness is controlled by parameter h

Analogously to interpolation (Sect. 1.15.2), the smoothness of SP derivatives is controlled by parameter h. Examples are shown in Fig. 1.40.

Chapter 2

Probabilities and Random Variables

Abstract In this chapter, we present the fundamental elements of probability and statistics that are used in the book, namely the elements about random variables and random vectors, with particular attention to the use of R for the calculations. The classical topics of a standard course are considered, such as statistics, regression, correlations, confidence intervals, and hypothesis testing. We consider also random functions and random curves, as well as their generation, evaluation of means, and confidence intervals for such objects.

2.1 Notation

In the sequel, ℕ denotes the set of the natural numbers; ℕ* = ℕ − {0} denotes the set of the strictly positive natural numbers. Analogously, ℝ denotes the set of the real numbers and ℝ* = ℝ − {0} denotes the set of the non-zero real numbers. ℝ_e = ℝ ∪ {−∞, +∞} is the set of the extended real numbers. The notation (a, b) = {x ∈ ℝ : a < x < b} defines an open interval of real numbers. ℝⁿ = {x = (x₁, . . ., x_n) : x_i ∈ ℝ, 1 ≤ i ≤ n} is the set of the n-tuples formed by real numbers. Analogous notation is used for ℝⁿ_e. The standard scalar product of x and y = (y₁, . . ., y_n) ∈ ℝⁿ is

$$(x, y)_{\mathbb{R}^n} = \sum_{i=1}^{n} x_i y_i. \tag{2.1}$$

If no confusion is possible, we drop the indexing expression and use the lighter notation (x, y). In some contexts, we also use the notations:


$$x^t y = (x, y) \quad \text{or} \quad x \cdot y = (x, y). \tag{2.2}$$

We denote by |•|_p the standard p-norm of ℝⁿ:

$$|x|_p = \left(\sum_{i=1}^{n} |x_i|^p\right)^{1/p}. \tag{2.3}$$

We have

$$|x|_2 = \sqrt{(x, x)}. \tag{2.4}$$

An m × n matrix A of real numbers is A = (A_ij : 1 ≤ i ≤ m, 1 ≤ j ≤ n). M(m, n) denotes the space of the m × n matrices of real numbers. The elements of ℝⁿ may be identified with elements of M(n, 1) (column vectors): ℝⁿ ≅ M(n, 1). Thus, we can set

$$y = A \cdot x \in M(m, 1) \cong \mathbb{R}^m, \qquad y_i = \sum_{j=1}^{n} A_{ij} x_j \quad (1 \le i \le m). \tag{2.5}$$

If no confusion is possible, we shall alleviate the notation by using the convention of an implicit sum on repeated indexes (also called Einstein's convention), writing simply y_i = A_ij x_j.

2.2 Probability

We use the formal definition of probability introduced by Kolmogorov (1950):

Definition 2.1 Let Ω be a non-empty set and 𝒫(Ω) = {E : E ⊂ Ω} be the set of the parts of Ω. A probability defined on Ω is an application P : 𝒫(Ω) ⟶ ℝ such that

1. P(E) ≥ 0, ∀E ∈ 𝒫(Ω);
2. P(Ω) = 1;
3. P(⋃_{n∈ℕ} E_n) = Σ_{n∈ℕ} P(E_n), ∀{E_n : n ∈ ℕ} such that E_i ∩ E_j = ∅ if i ≠ j.

Ω is the universe. The subsets E ⊂ Ω are usually called events. (Ω, P) is the probability space. This definition leads to many known properties of probabilities. For instance:


Proposition 2.1 Let (Ω, P) be a probability space. Then:

1. P(∅) = 0;
2. P(E − F) = P(E) − P(E ∩ F);
3. P(E ∪ F) = P(E) + P(F) − P(E ∩ F);
4. P(E^c) = P(Ω − E) = 1 − P(E);
5. E ⊂ F ⟹ P(E) ≤ P(F);
6. 0 ≤ P(E) ≤ 1, ∀E ⊂ Ω;
7. P(⋃_{n∈ℕ} E_n) ≤ Σ_{n∈ℕ} P(E_n), ∀{E_n : n ∈ ℕ} ⊂ 𝒫(Ω);
8. Let {E_n : n ∈ ℕ} ⊂ 𝒫(Ω). If Σ_{n∈ℕ} P(E_n) < ∞, then P(⋃_{i=n}^{+∞} E_i) ⟶ 0 for n ⟶ +∞;
9. Let {E_n : n ∈ ℕ} ⊂ 𝒫(Ω) be such that E_n ⊂ E_{n+1}, ∀n ∈ ℕ. Then P(⋃_{n=0}^{+∞} E_n) = lim_{n→+∞} P(E_n);
10. Let {E_n : n ∈ ℕ} ⊂ 𝒫(Ω) be such that E_{n+1} ⊂ E_n, ∀n ∈ ℕ. Then P(⋂_{n=0}^{+∞} E_n) = lim_{n→+∞} P(E_n).

Proof We have:

1. Let P(∅) = a ∈ ℝ, a ≥ 0, and consider {E_n : n ∈ ℕ} such that E_n = ∅, ∀n ∈ ℕ. Since E_i ∩ E_j = ∅ for i ≠ j, P(⋃_{n∈ℕ} E_n) = Σ_{n∈ℕ} P(E_n), so that a = Σ_{n∈ℕ} a, with a ≥ 0. This equality is impossible if a > 0 (the series becomes divergent), so that a = 0.
2. Consider {E_n : n ∈ ℕ} such that E_0 = E − F, E_1 = E ∩ F, E_n = ∅, ∀n ≥ 2. Then E_i ∩ E_j = ∅ for i ≠ j, so that P(⋃_{n∈ℕ} E_n) = Σ_{n∈ℕ} P(E_n). On the one hand, ⋃_{n∈ℕ} E_n = (E − F) ∪ (E ∩ F) = E; on the other hand, P(E_0) = P(E − F), P(E_1) = P(E ∩ F), P(E_n) = 0, ∀n ≥ 2. Thus, P(E) = P(E − F) + P(E ∩ F).
3. P(E ∪ F − F) = P(E ∪ F) − P((E ∪ F) ∩ F) = P(E ∪ F) − P(F). In addition, P(E ∪ F − F) = P(E − F) = P(E) − P(E ∩ F). Thus, P(E ∪ F) − P(F) = P(E) − P(E ∩ F).
4. P(E^c) = P(Ω − E) = P(Ω) − P(E) = 1 − P(E).
5. P(F) = P(F − E) + P(E ∩ F) = P(F − E) + P(E) ≥ P(E).
6. Since E ⊂ Ω: 0 ≤ P(E) ≤ P(Ω) = 1.
7. Consider A_0 = E_0 and A_n = E_n − ⋃_{i=0}^{n−1} A_i for n > 0. We have A_i ∩ A_j = ∅ for i ≠ j; consequently, P(⋃_{n∈ℕ} A_n) = Σ_{n∈ℕ} P(A_n). In addition, on the one hand, ⋃_{n∈ℕ} A_n = ⋃_{n∈ℕ} E_n and, on the other hand, A_n ⊂ E_n ⟹ P(A_n) ≤ P(E_n), hence the result.
8. We have P(⋃_{i=n}^{+∞} E_i) ≤ Σ_{i=n}^{+∞} P(E_i) ⟶ 0, since the series is convergent.
9. Let F_0 = E_0 and F_n = E_n − E_{n−1}, n ≥ 1. Then P(F_n) = P(E_n) − P(E_{n−1} ∩ E_n) = P(E_n) − P(E_{n−1}). In addition, on the one hand, F_i ∩ F_j = ∅ if i ≠ j and, on the other hand, ⋃_{n=0}^{+∞} F_n = ⋃_{n=0}^{+∞} E_n. Thus, P(⋃_{n=0}^{+∞} E_n) = P(E_0) + Σ_{n=1}^{+∞} (P(E_n) − P(E_{n−1})) = lim_{n→+∞} P(E_n).
10. Apply the preceding result to {E_n^c : n ∈ ℕ} ⊂ 𝒫(Ω), which satisfies E_n^c ⊂ E_{n+1}^c: P(⋃_{n=0}^{+∞} E_n^c) = lim_{n→+∞} P(E_n^c). Thus, P(⋂_{n=0}^{+∞} E_n) = 1 − lim_{n→+∞} P(E_n^c), hence the result.

2.2

Probability

113

Example 2.1 (continued) P([1, 1]) = (0 - 0) + (1 - 1) = 0. Thus, the event F is almost impossible – but it is not impossible. In fact, the events E = fx 2 Ω : x 2  = ℝ - ℚg; F = fx 2 Ω : x 2 ℚg are complementary: E = Fc. Since the Lebesgue measure of the set of rational numbers is zero, F is negligible and E is almost sure, but any subinterval will contain both rational and irrational numbers. We have: Proposition 2.2 Let (Ω, P) be a probability space. Let E ⊂ Ω be almost sure and F ⊂ Ω be negligible. Then, for any A ⊂ Ω, P(A \ E) = P(A); P(A [ F) = P(A), P(A [ E) = 1; P(A \ F) = 0.

Proof We have 1. 2. 3. 4.

E ⊂ A [ E ⟹ P(A [ E) ≥ P(E) = 1; A \ F ⊂ F ⟹ 0 ≤ P(A \ F) ≤ P(F) = 0 ⟹ P(A \ F) = 0; P(A [ F) = P(A) + P(F) - P(A \ F) = P(A) + 0 + 0 = P(A); (A \ E)c = Ac [ Ec and P(Ec) = 1 - P(E) = 0. Thus, P(Ac [ Ec) = P(Ac) ⟹ P( (A \ E)c) = P(Ac) ⟹ P(A \ E) = P(A).

Proposition 2.3 A finite or enumerable Reunion of negligible events is negligible; a finite or enumerable intersection of almost sure events is almost sure.

Proof Let {Fn: n 2 ℕ!} be a family of negligible events. Proposition 2.1 (item 7) n S S S F n ≤ 0, so that F n is negligible. Let An = F i : Since shows that P i=0 n2ℕ n2ℕ ! S S F n , we have PðAn Þ ≤ P F n = 0, 8n 2 ℕ: An ⊂ n2ℕ

n2ℕ

(continued)

114

2

Probabilities and Random Variables

  Let {En: n 2 ℕ } be a family of almost sure events and F n = E cn: n 2 ℕ .  c S c S T T En = En = F n is negligible, so that En is almost sure. Then, n2ℕ n2ℕ n2ℕ n2ℕ     n n T T T T T En ⊂ E i ⟹1 = P En ≤ P Ei , so that E n is In addition, n2ℕ

i=0

n2ℕ

i=0

n2ℕ

almost sure.

2.2.1

Mass Functions and Mass Densities

Mass functions can be used to define probabilities on finite or enumerable universes, while mass densities are used on subsets of real numbers: Definition 2.3 1. Let Ω be finite or enumerable. A mass function μ on Ω is an application μ: Ω ⟶ ℝ such that 8ω 2 Ω: μ(ω) = P({ω}). We have μ(ω) ≥ 0 on Ω and = Ω. ∑ω 2 Ωμ(ω) = 1. If useful, μ may be extended by zero for ω 2 2. Let Ω ⊂ ℝ. A mass density μ on Ω is an application μ: Ω ⟶ ℝ such Ðb that 8ða, bÞ ⊂ Ω : Pðða, bÞÞ = μðωÞdω . We have μ(ω) ≥ 0 on Ω and a R = Ω in both the Ωμ(ω)dω = 1. If useful, μ may be extended by zero for ω 2 cases.

R offers intrinsic functions that allows us to manipulate mass functions and mass densities. For instance, sum and integrate (see examples) Example 2.2 An usual situation concerns Ω = (α, β) and the mass density μðωÞ =

1 , α < ω < β: β-α

In this case, the probability of any subinterval reads as ðb μðωÞdω =

Pðða, bÞÞ =

b-a , α < a < b < β: β-α

a

(continued)

2.2

Probability

115

Example 2.2 (continued) Consider the code below (notice the capital “V” in “Vectorize”):

Running the code produces the results below:

Example 2.3 Consider Ω = {ω1, . . ., ωn} and the mass function μðωi Þ = pi ≥ 0,

n X i=1

pi = 1:

  P Then, Pðfωi1 , ::, ωik gÞ = kj= 1 μ ωij : For instance, consider an urn containing balls: The quantities are (5, 3, 2, 4, 6), corresponding to colors (R, B, G, Y, M). Their probabilities are

μðRÞ =

5 3 2 4 6 , μðBÞ = , μ ðG Þ = , μ ðY Þ = , μ ðM Þ = : 20 20 20 20 20

Thus, PðfR, G, BgÞ = μðRÞ þ μðGÞ þ μðBÞ =

1 : 2

(continued)

116

2

Probabilities and Random Variables

Example 2.3 (continued) Consider the code below:

Running the code produces the result at left.

Example 2.4 Consider Ω = {ω1, . . ., ωn} and the mass function μðωi Þ = α ln ð1 þ iÞ þ β: Let us determine the possible values of the parameters α, β. Indeed, we have μðωi Þ ≥ 0 ⟺ α ln ð1 þ iÞ þ β ≥ 0: This inequality is satisfied for i = 1, . . .n if and only if β ≥ max f - α ln 2, - α ln ð1 þ nÞg In addition, n X i=1

μðωi Þ = 1 ⟺ αS þ nβ = 1, S =

n X

ln ð1 þ iÞ:

i=1

(continued)

2.2

Probability

117

Example 2.4 (continued) We can visualize the possible values using the code below:

Example 2.5 Consider Ω = (0, 1) and the mass density μðωÞ = α ln ð2 þ ωÞ þ β: Let us determine the possible values of the parameters α, β. Indeed, we have μðωÞ ≥ 0 ⟺ α ln ð2 þ ωÞ þ β ≥ 0:

(continued)

118

2

Probabilities and Random Variables

Example 2.5 (continued) This inequality is satisfied for ω 2 (0, 1) if and only if β ≥ max f - α ln 2, - α ln 3g In addition, ð1 μðωÞdω = 1 ⟺ αS þ β = 1, S = log

27 - 1 ≈ 0:90954: 4

0

We can visualize the possible values using the code

2.2

Probability

119

Exercises 1. Consider Ω = (0, 1) and the mass density μ(ω) = βe-αω. What are the possible values of α and β? Plot the admissible region. 2. Consider Ω = {ω1, ω2, ω3} and the mass function μ(ωi) = αi2 + β. What are the possible values of α and β? Plot the admissible region. 3. Consider Ω = (0, 1) and the mass density μ(ω) = αω + β. What are the possible values of α and β? Plot the admissible region. 4. Consider Ω = {ω1, ω2, ω3} and the mass function μ(ωi) = αi + β. What are the possible values of α and β? Plot the admissible region 5. Consider Ω = (0, 1) and the probability P((a, b)) = b - a.(mass density μ(ω) = 1). Find the probabilities of the following events   a) A = x 2 Ω : x < 13 ;   b) B = x 2 Ω : x2 < 12 ;   c) C = x 2 Ω : x2 > 14 ; d) D = {x 2 Ω: 9x2 - 5x + 1 ≤ 0}. e) Write programs in R to find these probabilities. 6. Consider Ω = {ω1, ω2, ω3} and μðωi Þ = 6i . a) Find P({ω2, ω3}); b) Find P({ω1, ω3}). c) Write programs in R to solve these questions. 7. Consider Ω = {ω1, ω2, ω3, ω4} and the events A = {ω1}, B = {ω1, ω2}, 9 7 C = {ω1, ω2, ω4}. Let PðAÞ = 14, PðBÞ = 20 , PðC Þ = 15 . a) Find P({ω2}), P({ω3}), P({ω4});. b) Find P({ω1, ω2, ω3}). 8. Consider Ω = {ω1, ω2, ω3} and the events A = {ω1}, B = {ω1, ω2}. Let PðAÞ = 13 and PðBÞ = 12. a) Find P({ω2}); b) Find P({ω3}) c) Find P({ω1, ω3}).

2.2.2

Combinatorial Probabilities

Combinatorial probabilities arise when considering finite universes with a constant mass function, id est, when Ω is finite and all its elements are equiprobable, id est,Ω = {ω1, . . ., ωn} and pi = μðωi Þ = Pðfωi gÞ = 1n . Then, probabilities of the events are determined by counting their number of elements: for instance, PðE = fωi1 , . . . , ωik gÞ = nk . Many classical situations involving coins, balls,

120

2

Probabilities and Random Variables

cards, dices, etc., fall in this category. These problems are mainly combinatorial; id est, they use combinatorics to evaluate the number of elements of E and Ω. R proposes some functions to determine these numbers. For instance, combn(x,n) generates all the combinations of n elements from a vector x. In addition, the community offers packages able to deal with combinatorial problems. For instance, the package arrangements furnishes functions that evaluate permutations and combinations. For instance, • npermutations(n,k): number of permutations of k elements among n, without repetition. Returns ðn -n! kÞ! • ncombinations(n,k): number of combinations of k elements among n, without repetition. Returns k!ðnn!- kÞ! • npermutations(n,k,replace = TRUE): number of permutations of k elements among n, with repetition. Returns nk • ncombinations(n,k, replace = TRUE): number of combinations of k - 1Þ! elements among n, with repetition. Returns ðk!nþk ðn - 1Þ! In addition, the function factorial(n) returns n!. Example 2.6 Two cards are simultaneously draw at random from a 52-card deck. What is the probability that are number cards? To evaluate the probability, we start by evaluating the global number of possibilities n: There are 52 possibilities for the first card and 51 for the second one, so that n = 52 × 51 = npermutations(52, 2). The deck has 40 number cards, so that there are 40 possibilities for the first face and 39 for the second: We have k = 40 × 39 = npermutations(40, 2).

The code for the evaluations is shown at right. The result is

Now, assume that you take a card at random, put it back, reshuffle, and take another card at random. In this situation, repetition is possible: On the one hand, n = 52 × 52 = npermutations(52, 2, replace = TRUE), on the other hand, k = 12 × 12 = npermutations(40, 2, replace = true). The code for the evaluations is shown at right. The result is

2.2

Probability

121

Example 2.7 In a 52-card deck of playing cards, you draw simultaneously two at random. What is the probability that you get one ace? Here, it is convenient to consider the complementary event: No ace is drawn. For this event, k = 48 × 47 = ncombinations(48; 2). Thus, its probability is 0.8506787. So, the probability of getting one ace is 0.1493213.

With repetition, these probabilities are, respectively, 0.852071 and 0,147,929.

Exercises 1. In a 52-card deck of playing cards, you draw simultaneously 5 at random. What is the probability of getting a) b) c) d) e)

5 hearts; A full (5 cards of the same color); Four aces; 4 cards of same numbers (or faces); Evaluate these probabilities using R.

2. In a 52-card deck of playing cards, you draw sequentially 5 at random. At each draw, the card is put back in the deck, which is reshuffled. What is the probability of getting a) b) c) d) e)

5 hearts; 5 cards of the same color; Exactly four aces; At least 4 aces; Evaluate these probabilities using R.

3. We roll three dice numbered from 1 to 6. All the faces are equiprobable. What is the probability of getting a) b) c) d)

Three aces A sum of points equal to 9; A sum equal to 10; Evaluate these probabilities using R.

122

2

Probabilities and Random Variables

4. An urn contains 20 numbered balls of three colors: 6 red, 12 green, 2 blue. We draw at random 2 balls simultaneously. What is the probability of getting a) Two green balls; b) Two balls of same color; two balls of different colors c) Evaluate these probabilities using R.

2.3

Independent Events

Definition 2.4 The probability of an event E conditionally to an event F such that P(F) > 0 is PðE j F Þ =

PðE \ F Þ : PðF Þ

The reader can find in the literature an extension of the definition of conditional probability to include negligible events – see, for instance, Coletti & Scozzafava, (2000). We have Proposition 2.4 (Bayes’ Formulae) Let (Ω, P) be a probability space, E, F ⊂ Ω. 1. If P(E)P(F) > 0 then P(E \ F) = P(F) × P(E j F) = P(E) × P(F j E) 2. If P(E)P(F) > 0 then PðF j E Þ =

PðF Þ × PðE j F Þ P ðE Þ 

3. Let verify fE1 , . . . , En g ⊂ P ðΩÞ !  n  iT n Q T -1 P E i = PðE 1 Þ Ei E j , id Est, i=1

P

n \

i=2

P

n T

i=1

 E i > 0.

Then

j=1

!

      E i = PðE1 ÞP E 2 │E1 P E3 │E 1 \ E 2 . . . P E n │E1 \ . . . \ En - 1

i=1

(continued)

2.3

Independent Events

123

Proposition 2.4 (continued) 4. Let Ω =

k S i=1

Ei , P(Ei) > 0, 8 i; Ei \ Ej = ∅, if i ≠ j. Then PðF Þ =

k X

PðE i Þ PðF j Ei Þ;

i=1

          P Ej P F j Ej P Ej P F j Ej = k : P Ej j F = PðF Þ P PðEi Þ PðF j E i Þ i=1

Proof 1. We have P(EjF) = P(E \ F)/P(F) and P(FjE) = P(E \ F)/P(E), hence the result. 2. P(F) × P(EjF) = P(E) × P(FjE), hence the result.  k  n S T Ei ; Ei = 3. The proof is made by recurrence on the assumption H n: P i=1 i=1 ! iT n Q -1 P Ei E j . On the one hand, P(E1 \ E2) = P(E1)P(E2j E1) PðE 1 Þ i=2

j=1

and H2 is!verified. On the other hand, for n ≥ 2, Hn ⟹ Hn+1. Indeed,   n   n    nþ1 n T T T T =P E j = P Enþ1 \ Ei Ei P E nþ1 j E i . Thus, P j=1

i=1

i=1

i=1

using Hn, we obtain Hn+1.  k  k S S Ei = ðF \ E i Þ , we have PðF Þ = 4. Since F = F \ Ω = F \ k P i=1

PðF \ E i Þ =

k P i=1

i=1

i=1

PðE i Þ PðF j E i Þ . But P(Ejj F) = P(Ej \ F)/P(F),

so that P(Ejj F) = P(Ej) P(F j Ej)/P(F), hence the result.

Definition 2.5 E and F are independent if and only if PðE \ F Þ = PðEÞ × PðF Þ:

124

2

Probabilities and Random Variables

Independence means that the events do not affect each other: Proposition 2.5 Let (Ω, P) be a probability space, E, F ⊂ Ω, E and F independent. Then: 1. If P(E) P(F) > 0, then P(Ej F) = P(E) and P(Fj E) = P(F); 2. Ec and Fc are independent; 3. Let {En: n2 ℕ }be a family such that Ei, Ej are independent if i ≠ j. Then  þ1 þ1 T Q P En = PðE n Þ; n=0

n=0

4. Let {En: n 2 ℕ } be a family  þ1 such  that Ei, Ej are independent if i ≠ j. If P S PðEn Þ = þ 1, then P E i = 1, 8n 2 ℕ. n 2ℕ

i=n

n2ℕ

i=n

5. Let {En: n 2 ℕ } be a family  þ1 such  that Ei, Ej are independent if i ≠ j. If P S PðEn Þ < þ 1, then P E i ⟶0, for n ⟶ + 1.

Proof P(E \ F) = P(F) × P(Ej F) = P(E) × P(Fj E) = P(E) × P(F). Thus, P(F) × P(Ej F) = P(E) × P(F) and P(E) × P(Fj E) = P(E) × P(F), hence the first result. P(Ec \ Fc) = P(Ec) + P(Fc) - P(Ec [ Fc). In addition, P(Ec [ Fc) = P((E \ F)c) = 1 - P(E \ F) = 1 - P(E)P(F). Thus, P(Ec [ Fc) = 1 - (1 - P(Ec)) (1 - P(Fc)) = P(Ec) + P(Fc) - P(Ec)P(Fc). Hence the second result, since (Ec \ Fc) = P(Ec)P(Fc). n T E i . Then, An+1 ⊂ An and The third result is obtained by taking An = i=0  þ1   þ1  n T Q T P En = P An = lim PðAn Þ = lim PðE i Þ, hence the n=0

n=0

result. Consider F n = verges, we have

þ1 S

i=n þ 1 P

i=n

n → þ1

E i : We have PðF n Þ ≤

n → þ1 i = 0

þ 1 P i=n

PðE i Þ. Since the series con-

PðEi Þ⟶0 for n ⟶ + 1, hence the last result.

Example 2.8 A fair coin has head and tail results equiprobable. It is tossed twice and the result is written. We have the information that one of the results is a head. What is the probability that the other is a tail? Let us denote by H a head and by T a tail. Here, Ω = {HH, HT, TH, TT}. Let us consider the event E = {HH, HT, TH} (one of them is a head) and the event F = {HT, TH, TT} (one of them is a tail). We have E ∩ F = {HT, TH} and
$$P(F \mid E) = \frac{P(E \cap F)}{P(E)} = \frac{2/4}{3/4} = \frac{2}{3}.$$

Example 2.9 A sample contains 50% of green pins. Globally, 20% of the pins are damaged, but there is a difference between the colors: 30% of the green pins are damaged. What is the probability that a damaged pin is green? Let E_1 = the pin is green; E_2 = the pin is not green. We have P(E_1) = 0.5, P(E_2) = 0.5. Let F = the pin is damaged. We have P(F | E_1) = 0.3, P(F) = 0.2. Then
$$P(E_1 \mid F) = \frac{P(E_1) \times P(F \mid E_1)}{P(F)} = \frac{0.5 \times 0.3}{0.2} = 0.75.$$

Exercises
1. Overall, it is estimated that 1% of the parts manufactured in a factory are defective. The quality control makes mistakes by accepting 1% of non-compliant parts and rejecting 2% of compliant parts. What is the probability that an accepted part is compliant?
2. Two boxes contain the same number of pins, divided between two colors: red and green. In box 1, 30% are red and 70% are green; in box 2, 40% are red and 60% are green. The boxes are equiprobable.
   a) We take a red pin: what is the probability that it came from box 1?
   b) We take a green pin: what is the probability that it came from box 2?
   c) How are these values modified if box 1 is much larger than box 2, so that its probability is 3/4?
3. An urn contains 20 numbered balls of three colors: 1 to 6 in red, 7 to 18 in green, 19 to 20 in blue. We draw a ball at random.
   a) The number is a multiple of 3: what is the probability that the color is green?
   b) The number has two digits: what is the probability that the color is blue?
   c) The number is even: what is the probability that the color is red?
4. Three urns contain 10 numbered balls in red and blue each, but the number of red balls is different in each one: 5 in urn 1, 4 in urn 2, 3 in urn 3. The urns are equiprobable. A red ball is drawn.
   a) What is the probability that it came from urn 1?
   b) What is the probability that it did not come from urn 2?
   c) What is the probability that it came from urn 3?
5. Determine if the events A and B are independent or not, when
   a) P(A) = 1/3, P(B) = 2/3, P(A ∩ B) = 1/2.
   b) P(A) = 1/2, P(B) = 2/3, P(A ∩ B) = 1/3.
   c) P(A) = 3/5, P(B) = 5/6, P(A ∩ B) = 1/2.
   d) P(A) = 3/5, P(B) = 5/6, P(A ∩ B) = 2/5.

2.4 Numerical Variables on Finite Populations

Let us consider a finite universe Ω = {ω_1, ..., ω_np} – Ω is also referred to as a population. np is the size of the population. A member of the population ω_i ∈ Ω is an individual: the population contains np individuals. An event E = {ω_{i1}, ..., ω_{ik}} ⊂ Ω is a subpopulation of Ω.
Let X: Ω ⟶ ℝ be a numerical variable defined on Ω. Since Ω is finite, the image of X is a finite set of m distinct values: X(Ω) = {X_1, ..., X_m}, X_i ≠ X_j for i ≠ j. We have 0 < m ≤ np. These values can be arranged in strictly increasing order: in the sequel, we assume that X_i < X_j for i < j, without loss of generality. Let us consider the subpopulation H_i = X⁻¹({X_i}) = {ω ∈ Ω: X(ω) = X_i}. Let μ be the mass function on Ω. The probability of the value X_i is
$$P(X = X_i) = p_i = \sum_{\omega \in H_i} \mu(\omega). \qquad (2.6)$$
Assume that X is defined for any ω ∈ Ω (id est, Ω ⊂ dom(X)). Then, it is immediate that
$$\bigcup_{i=1}^{m} H_i = \Omega; \qquad \sum_{i=1}^{m} p_i = 1; \quad 0 \le p_i \le 1, \ \forall i. \qquad (2.7)$$
More generally, for I ⊂ ℝ,
$$P(X \in I) = P\left(X^{-1}(I)\right) = \sum_{\omega \in X^{-1}(I)} \mu(\omega). \qquad (2.8)$$

Table 2.1 Frequency table of X

Value       X_1   ...   X_m
Frequency   p_1   ...   p_m

Remark 2.1 In many practical situations, the individuals are equiprobable, so that μ(ω) = 1/np, ∀ ω ∈ Ω. Then, p_i = c_i/np, where c_i = card(H_i) is the number of elements belonging to H_i. We have
$$\sum_{i=1}^{m} c_i = np; \quad 0 \le c_i \le np, \ \forall i.$$
The information about X can be summarized in a frequency table (Table 2.1).

Remark 2.2 In all generality, it is necessary to consider situations where some elements of X(Ω) may be negligible, i.e., a value p_i can be null: such a generality leads to the use of the expressions "almost sure" and "almost surely" in most of the results. Of course, the values corresponding to null probabilities can be eliminated from the possible values of X, so that p_i > 0, ∀ i; then the expressions "almost sure" and "almost surely" can be eliminated from the results below.

Definition 2.6 Let f: ℝ ⟶ ℝ be a function. The mean or expectation of f(X) is
$$E(f(X)) = \sum_{i=1}^{np} \mu(\omega_i)\, f(X(\omega_i)).$$

We have

Proposition 2.6 Let X: Ω ⟶ ℝ be a numerical variable and f, g: ℝ ⟶ ℝ be two functions. Then
1. E(αf(X) + βg(X)) = αE(f(X)) + βE(g(X));
2. $E(f(X)) = \sum_{i=1}^{m} p_i f(X_i)$;
3. If f(X) ≥ 0, then E(f(X)) ≥ 0;
4. If f(X) ≥ 0, then λP(f(X) ≥ λ) ≤ E(f(X)), ∀ λ ∈ ℝ (Markov's inequality);
5. Let 1_A(ω) = 1, if ω ∈ A; 1_A(ω) = 0, if ω ∉ A. Then E(1_A) = P(A);
6. If f: ℝ ⟶ ℝ is convex, then f(E(X)) ≤ E(f(X)) (Jensen's inequality).

Proof We have
$$\sum_{j=1}^{np} \mu(\omega_j)\left(\alpha f(X(\omega_j)) + \beta g(X(\omega_j))\right) = \alpha \sum_{j=1}^{np} \mu(\omega_j) f(X(\omega_j)) + \beta \sum_{j=1}^{np} \mu(\omega_j) g(X(\omega_j)),$$
so that E(αf(X) + βg(X)) = αE(f(X)) + βE(g(X)).
Since $\bigcup_{i=1}^{m} H_i = \Omega$, we have
$$\sum_{j=1}^{np} \mu(\omega_j) f(X(\omega_j)) = \sum_{i=1}^{m} \left(\sum_{\omega \in H_i} \mu(\omega) \underbrace{f(X(\omega))}_{= f(X_i) \text{ on } H_i}\right),$$
so that
$$E(f(X)) = \sum_{i=1}^{m} f(X_i) \left(\sum_{\omega \in H_i} \mu(\omega)\right) = \sum_{i=1}^{m} p_i f(X_i).$$
If f(X) ≥ 0, we have f(X(ω)) ≥ 0, ∀ ω ∈ Ω. Thus E(f(X)) is a sum of non-negative terms and we have E(f(X)) ≥ 0.
Let A = {ω ∈ Ω: f(X(ω)) ≥ λ} and B = {ω ∈ Ω: f(X(ω)) < λ}. We have A ∪ B = Ω, A ∩ B = ∅. Thus,
$$\sum_{j=1}^{np} \mu(\omega_j) f(X(\omega_j)) = \sum_{\omega \in A} \mu(\omega) f(X(\omega)) + \sum_{\omega \in B} \mu(\omega) f(X(\omega)).$$
Since f ≥ 0,
$$\sum_{j=1}^{np} \mu(\omega_j) f(X(\omega_j)) \ge \sum_{\omega \in A} \mu(\omega) f(X(\omega)) \ge \lambda \sum_{\omega \in A} \mu(\omega) = \lambda P(A),$$
hence the fourth assertion. In addition,
$$E(1_A) = \sum_{j=1}^{np} \mu(\omega_j)\, 1_A(\omega_j) = \sum_{\omega \in A} \mu(\omega) = P(A).$$
The last assertion results from the convexity condition:
$$f(E(X)) = f\left(\sum_{i=1}^{m} p_i X_i\right) \underbrace{\le}_{\text{convexity}} \sum_{i=1}^{m} p_i f(X_i) = E(f(X)).$$

Definition 2.7
1. The moment of order k of X is M_k(X) = E(X^k).
2. The central moment of order k of X is CM_k(X) = E((X − E(X))^k).
3. The variance of X is V(X) = CM_2(X) = E((X − E(X))²).
4. The standard deviation of X is σ(X) = √V(X).
5. A median of X is a number med(X) = m such that P(X ≤ m) ≥ 1/2 and P(X ≥ m) ≥ 1/2.

We have

Proposition 2.7 Let X: Ω ⟶ ℝ be a numerical variable. Then
1. V(X) ≥ 0;
2. V(X) = E(X²) − E(X)²;
3. V(X) = 0 if and only if X is constant almost surely;
4. E(X²) = 0 if and only if X = 0 almost surely.

Proof Since f(X) = (X − E(X))² ≥ 0, the preceding proposition yields that V(X) ≥ 0. We have E((X − E(X))²) = E(X² − 2E(X)X + E(X)²), so that V(X) = E(X²) − 2E(X)E(X) + E(X)² = E(X²) − E(X)².
Let V(X) = 0. Then $\sum_{i=1}^{m} p_i (X_i - E(X))^2 = 0$. Thus, X_i ≠ E(X) ⟹ p_i = 0. Consider Λ = {i: X_i ≠ E(X), 1 ≤ i ≤ m}. Let $F = \bigcup_{i \in \Lambda} X^{-1}(\{X_i\})$. Since P(X⁻¹({X_i})) = p_i = 0 for i ∈ Λ, F is a finite union of negligible events; consequently, P(F) = 0 and F = {ω ∈ Ω: X(ω) ≠ E(X)} is negligible. Thus, E = {ω ∈ Ω: X(ω) = E(X)} is almost sure.
Analogously, if E(X²) = 0, then $\sum_{i=1}^{m} p_i X_i^2 = 0$. Thus, X_i ≠ 0 ⟹ p_i = 0. In the same way, we obtain that E = {ω ∈ Ω: X(ω) = 0} is almost sure.

Proposition 2.8 (Chebyshev's Inequality) Let X: Ω ⟶ ℝ be a numerical variable. Then
$$P(|X - E(X)| \ge \varepsilon) \le \frac{V(X)}{\varepsilon^2}, \quad \forall \varepsilon > 0.$$

Proof Let f(X) = (X − E(X))² ≥ 0. Then, from Proposition 2.6:
$$\varepsilon^2 P\left(f(X) \ge \varepsilon^2\right) \le E(f(X)) = V(X).$$
Since P(|X − E(X)| ≥ ε) = P(f(X) ≥ ε²), we obtain the result.

Definition 2.8 The cumulative distribution function (CDF) of X is
$$F(x) = P(X \le x) = P(X \in\, ]-\infty, x]) = \sum_{i \,:\, X_i \le x} p_i = \sum_{\omega \,:\, X(\omega) \le x} \mu(\omega).$$

The CDF may be represented by Table 2.2.

Table 2.2 CDF of X

Value                  X_1   X_2         ...   X_{m−1}                  X_m
Cumulative frequency   p_1   p_1 + p_2   ...   p_1 + ... + p_{m−1}      1

Proposition 2.9 Let X: Ω ⟶ ℝ be a numerical variable and F: ℝ ⟶ ℝ be its CDF. Then
1. F is monotonically increasing: x ≤ y ⟹ F(x) ≤ F(y);
2. F(x) = 0 if x < X_1;
3. $F(x) = \sum_{j=1}^{i-1} p_j$, if X_{i−1} ≤ x < X_i, 1 < i ≤ m;
4. F(x) = 1 if x ≥ X_m;
5. P(a < X ≤ b) = F(b) − F(a);
6. P(X = a) = F(a+) − F(a−);
7. F is right continuous: F(x+) = F(x), ∀ x;
8. m is a median of X if and only if F(m+) ≥ 1/2 and F(m−) ≤ 1/2;
9. Let F be continuous at m. Then, m is a median of X if and only if F(m) = 1/2;
10. F has a variational derivative: $F' = f = \sum_{i=1}^{m} p_i \delta_{X_i}$, where $\delta_{X_i}$ is the Dirac's mass supported by X_i. f is the probability density (PDF) of X.

Proof
1. If x ≤ y then ]−∞, x] ⊂ ]−∞, y], hence the result.
2. If x < X_1 then X⁻¹(]−∞, x]) = ∅, hence the result.
3. If X_{i−1} ≤ x < X_i, then $X^{-1}(]-\infty, x]) = \bigcup_{j=1}^{i-1} X^{-1}(\{X_j\})$, hence the result.
4. If x ≥ X_m then X⁻¹(]−∞, x]) = Ω, hence the result.
5. X⁻¹(]a, b]) = X⁻¹(]−∞, b]) − X⁻¹(]−∞, a]), hence the result.
6. Let a = X_i ∈ X(Ω). Then p_i = P(X = X_i) = F(X_i+) − F(X_i−). If a ∉ X(Ω), then P(X = a) = 0 = F(a+) − F(a−).
7. Result 7 follows from results 2, 3 and 4.
8. Notice that F(m+) = F(m) = P(X ≤ m), so that P(X ≤ m) ≥ 1/2 if and only if F(m+) ≥ 1/2. Moreover, P(X ≥ m) = 1 − P(X ≤ m) + P(X = m), so that P(X ≥ m) = 1 − F(m) + F(m+) − F(m−) = 1 − F(m−). Consequently, P(X ≥ m) ≥ 1/2 if and only if F(m−) ≤ 1/2.
9. Result 9 follows from result 8.
10. It follows directly from the definition of variational derivative (see, for instance, (Souza de Cursi, 2015)).

You can find R packages and intrinsic commands to create and deal with frequency tables. For instance, consider the dataset chickwts, containing the data about the effects of feed supplements on the evolution of the weight of chickens. Enter the commands:
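The commands appear as a screenshot in the book; a minimal sketch reproducing them, assuming the variable names aux, t1, t2, c2 used in the description below, is:

    aux <- table(chickwts$feed)      # numbers of occurrences of each feed type
    t1  <- aux / sum(aux)            # relative frequencies
    t2  <- as.vector(t1)             # frequencies as an ordinary numeric vector
    c2  <- cumsum(t2)                # cumulative sums, as in Table 2.2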

Indeed, chickwts contains the measurements of six feed supplements (casein, horsebean, linseed, meat meal, soybean, and sunflower). There are 12, 10, 12, 11, 14, and 12 measurements of each, respectively. The first command stores in a table aux the numbers of occurrences of each feed type. The second one generates a table t1 containing the relative frequencies: they are equal to the number of measurements divided by the total number of observations (71). The table t1 is transformed into a vector t2 which can be manipulated as usual: for instance, c2 contains the cumulative sums corresponding to Table 2.2.

Notice that R can determine the CDF of a sample of values directly, using the intrinsic function ecdf:
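A minimal sketch of this step (the screenshot is not reproduced here; the explicit conversion of the factor to integers is an assumption consistent with the description below):

    F <- ecdf(as.integer(chickwts$feed))   # feed levels coded 1, 2, ... in alphabetical order
    F(3)                                   # P(X <= 3), i.e., casein, horsebean or linseed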

ecdf returns a function – in the example above, a function called F. Since the feed types (casein, horsebean, linseed, meat meal, soybean, sunflower) are not numeric but levels (id est, factors), they are transformed into integers 1, 2, 3, ... by ecdf. In this transformation, R uses the alphabetical order of the factors, so that 1 corresponds to casein, 2 to horsebean, 3 to linseed, and so on. If you desire to assign specific numbers to each feed type, you must do it manually (or use a package such as dplyr).

For instance, if you want to use the order (linseed, soybean, sunflower, horsebean, casein, meat meal), you must start with the code below:
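A plausible sketch of that code (the level numbering is an assumption matching the order indicated above):

    cwtcopy <- chickwts
    # current (alphabetical) levels: casein, horsebean, linseed, meatmeal, soybean, sunflower
    levels(cwtcopy$feed) <- c("5", "4", "1", "6", "2", "3")   # linseed=1, soybean=2, sunflower=3,
                                                              # horsebean=4, casein=5, meatmeal=6
    cwtvec1 <- as.character(cwtcopy$feed)                     # feed types converted to characters
    table(cwtvec1) / length(cwtvec1)                          # frequency table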

Here, we create a copy of the data.frame chickwts named cwtcopy and we replace the levels as indicated. cwtcopy can be manipulated as any data.frame: we extract a vector cwtvec1 containing the feed types, converted to characters. Then, we create a frequency table. As an alternative, you can start by creating a vector of characters containing the levels and replace the values in the vector:
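A sketch of this alternative, under the same assumed numbering:

    cwtvec2 <- as.character(chickwts$feed)
    cwtvec2[cwtvec2 == "linseed"]   <- "1"
    cwtvec2[cwtvec2 == "soybean"]   <- "2"
    cwtvec2[cwtvec2 == "sunflower"] <- "3"
    cwtvec2[cwtvec2 == "horsebean"] <- "4"
    cwtvec2[cwtvec2 == "casein"]    <- "5"
    cwtvec2[cwtvec2 == "meatmeal"]  <- "6"
    table(cwtvec2) / length(cwtvec2)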

In this case, we create a vector cwtvec2 containing the feed types, converted to characters. Then, we replace them by the desired values. Notice that you can also use gsub to make the replacements:
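For instance, a sketch using gsub on a fresh character vector:

    cwtvec3 <- as.character(chickwts$feed)
    cwtvec3 <- gsub("linseed",   "1", cwtvec3)
    cwtvec3 <- gsub("soybean",   "2", cwtvec3)
    cwtvec3 <- gsub("sunflower", "3", cwtvec3)
    # ... and so on for the remaining feed types
    table(cwtvec3)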

If your data are contained in a bidimensional table, you must start by converting it to a vector: the instruction as.vector(t(data)) generates a vector containing the rows of a 2D table data. As an example, import the file colors200.xlsx to R. This file contains a sample of 200 variates from the colors {red, green, blue, yellow, black}, in the form of a 20 × 10 table. Enter the commands below:
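A sketch of these commands, assuming colors200 has already been imported as a data.frame (the readxl call below is a hypothetical import):

    # colors200 <- readxl::read_excel("colors200.xlsx", col_names = FALSE)
    colorvec <- as.vector(t(colors200))          # rows of the 20 x 10 table, as a single vector
    table(colorvec) / length(colorvec)           # frequencies, in alphabetical order
    colorvec <- gsub("red",    "1", colorvec)    # impose the order red, green, blue, yellow, black
    colorvec <- gsub("green",  "2", colorvec)
    colorvec <- gsub("blue",   "3", colorvec)
    colorvec <- gsub("yellow", "4", colorvec)
    colorvec <- gsub("black",  "5", colorvec)
    table(colorvec) / length(colorvec)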

As you can observe, the original table colors200 is transformed into a vector colorvec. If R is asked to evaluate the frequencies directly on the initial colorvec, the colors are ordered alphabetically. To keep the desired order, we must perform the replacements: then, the frequencies appear in the order red, green, blue, yellow, and black, as desired. If the data are numeric, you can determine statistics such as the mean, variance, median, and standard deviation by using intrinsic functions of R or by writing simple instructions. Let us illustrate both methods using the sample data: 3, 1, 2, 4, 2, 5, 3, 2, 4, 1, 1, 5, 3, 2, 3, 2, 1, 2, 1, 5, 4, 3, 5, 4, 1, 2, 1

The frequency table corresponding to the data is

Value       1          2          3          4          5
Frequency   0.2592593  0.2592593  0.1851852  0.1481481  0.1481481

It can be generated using the class fstats.R, which manipulates frequency tables, defined as lists containing fields "X" (the values of X), "p" (the p_i), "Xnames" (labels of the values of X), and "cumul" (a numeric vector containing the values P(X ≤ X_i)). $fromTable creates such an object from a table, $mean evaluates the mean of X, $var evaluates its variance, $med determines its median, $meanf evaluates E(f(X)), $mom evaluates E(X^k), $centmom evaluates E((X − E(X))^k), and $cdf generates a function F(x) = P(X ≤ x). Below, we generate a table t1 as previously shown. Then, we generate a frequency table ft1 and we evaluate the statistics:

The results appear in the environment:


You can also evaluate the mean, moments, variance, and standard deviation as follows:
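The book shows these evaluations as a screenshot; a minimal base-R sketch of the same computations (not the fstats.R class itself) is:

    x   <- c(3,1,2,4,2,5,3,2,4,1,1,5,3,2,3,2,1,2,1,5,4,3,5,4,1,2,1)
    t1  <- table(x) / length(x)
    X   <- as.numeric(names(t1)); p <- as.vector(t1)
    mX  <- sum(p * X)          # mean
    mX2 <- sum(p * X^2)        # moment of order 2
    vX  <- mX2 - mX^2          # variance
    sX  <- sqrt(vX)            # standard deviation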

Exercises
1. Import the file apples.Xlsx. Generate a table corresponding to its contents. Modify the table by changing the names into numerical values: "Pink" 1, "Gala" 2, "Canada" 3, "Delicious" 4, "Golden" 5, "Granny" 6. Generate a frequency table using these numerical values. Find the mean, the variance, and the CDF of the numerical values.
2. Import the file continents.Xlsx. Generate a table corresponding to its contents. Modify the table by changing the names into numerical values: "Antarctica" 1, "Europe" 2, "Oceania" 3, "Asia" 4, "America" 5, "Africa" 6. Generate a frequency table using these numerical values. Find the mean, the standard deviation, and the CDF of the numerical values.
3. Import the file fruits.Xlsx. Generate a table corresponding to its contents. Modify the table by changing the names into numerical values: "Strawberry" 1, "Banana" 2, "Kiwi" 3, "Pineapple" 4, "Orange" 5, "Apple" 6. Generate a frequency table using these numerical values. Find the mean, the standard deviation, and the CDF of the numerical values.
4. When a fair coin is tossed n times, the number of heads X has a mean n/2 and a variance n/4.
   a) Determine an upper bound for the probability of obtaining a deviation of 10% of the mean. TIP: Consider P(|X − n/2| > α·n/2).
   b) Find a lower bound for the probability of obtaining at least 95% of the expected heads. TIP: P(|X − m| > a) = P(X < m − a) + P(X > m + a).
   c) Determine a number of tosses for which the observed frequency of heads deviates from 1/2 by at most 1% with a probability 99%. TIP: Find n such that P(|X − n/2| > 0.01·n/2) ≤ 0.01.
5. A company has a mean daily production of 1000 units of a given product.
   a) Use Markov's inequality to find an upper bound for the probabilities of productions of 1010, 1100, 2000 units.
   b) Assume that the standard deviation of the daily production is 10. Find an upper bound for these probabilities. TIP: Consider P(|X − 1000| ≥ x − 1000).

2.4.1

Couples of Numerical Variables

Let Y: Ω ⟶ ℝ be a second numerical variable defined on Ω, such that Y(Ω) = {Y_1, ..., Y_n}. The couple (X, Y) forms a 2-dimensional vector of numerical variables defined on Ω. Let us consider the subpopulation H_ij = X⁻¹({X_i}) ∩ Y⁻¹({Y_j}) = {ω ∈ Ω: X(ω) = X_i and Y(ω) = Y_j}. Let μ be the mass function on Ω. The probability of the pair (X_i, Y_j) is
$$P(X = X_i, Y = Y_j) = p_{ij} = \sum_{\omega \in H_{ij}} \mu(\omega). \qquad (2.9)$$
If both X, Y are defined for any ω ∈ Ω (id est, Ω ⊂ dom(X), Ω ⊂ dom(Y)), then
$$\bigcup_{\substack{i = 1, \ldots, m \\ j = 1, \ldots, n}} H_{ij} = \Omega; \qquad \sum_{\substack{i = 1, \ldots, m \\ j = 1, \ldots, n}} p_{ij} = 1; \quad 0 \le p_{ij} \le 1, \ \forall i, j. \qquad (2.10)$$

Remark 2.3 In many practical situations, the individuals are equiprobable, so that μ(ω) = 1/np, ∀ ω ∈ Ω. Then, p_ij = c_ij/np, where c_ij = card(H_ij) is the number of elements belonging to H_ij. We have
$$\sum_{\substack{i = 1, \ldots, m \\ j = 1, \ldots, n}} c_{ij} = np; \quad 0 \le c_{ij} \le np, \ \forall i, j.$$

Table 2.3 Contingency table of (X, Y)

X \ Y   Y_1    ...   Y_n
X_1     p_11   ...   p_1n
...     ...    ⋱     ...
X_m     p_m1   ...   p_mn

The information about the couple (X, Y) may be synthesized in a contingency table (Table 2.3), also called a cross-tabulation table.

Definition 2.9 Let f: ℝ² ⟶ ℝ be a function. The mean or expectation of f(X, Y) is
$$E(f(X, Y)) = \sum_{i=1}^{np} \mu(\omega_i)\, f(X(\omega_i), Y(\omega_i)).$$

We have

Proposition 2.10 Let (X, Y): Ω ⟶ ℝ² be a couple of numerical variables and f, g: ℝ² ⟶ ℝ be two functions. Then
1. E(αf(X, Y) + βg(X, Y)) = αE(f(X, Y)) + βE(g(X, Y));
2. $E(f(X, Y)) = \sum_{\substack{i = 1, \ldots, m \\ j = 1, \ldots, n}} p_{ij}\, f(X_i, Y_j)$;
3. If f(X, Y) ≥ 0, then E(f(X, Y)) ≥ 0;
4. If f(X, Y) ≥ 0, then λP(f(X, Y) ≥ λ) ≤ E(f(X, Y)), ∀ λ ∈ ℝ.

Proof Analogous to the proof of Proposition 2.6.

Definition 2.10
1. The marginal distribution of X is $P(X = X_i) = p_{i\blacksquare} = \sum_{j=1}^{n} p_{ij}$.
2. The marginal distribution of Y is $P(Y = Y_j) = p_{\blacksquare j} = \sum_{i=1}^{m} p_{ij}$.

3. The covariance of (X, Y) is cov(X, Y) = E((X − E(X))(Y − E(Y))).
4. The covariance matrix of (X, Y) is
$$C(X, Y) = \begin{pmatrix} \mathrm{cov}(X, X) & \mathrm{cov}(X, Y) \\ \mathrm{cov}(Y, X) & \mathrm{cov}(Y, Y) \end{pmatrix}.$$

Proposition 2.11 Let (X, Y): Ω ⟶ ℝ² be a couple of numerical variables. Then
$$\mathrm{cov}(X, Y) = E(XY) - E(X)E(Y); \quad V(\alpha X + \beta Y) = \alpha^2 V(X) + \beta^2 V(Y) + 2\alpha\beta\, \mathrm{cov}(X, Y); \quad |\mathrm{cov}(X, Y)| \le \sqrt{V(X)}\sqrt{V(Y)}.$$

Proof We have
$$(X - E(X))(Y - E(Y)) = XY - X E(Y) - Y E(X) + E(X)E(Y),$$
so that
$$\mathrm{cov}(X, Y) = E(XY) - E(X)E(Y) - E(Y)E(X) + E(X)E(Y) = E(XY) - E(X)E(Y),$$
hence the first result. Analogously, E(αX + βY) = αE(X) + βE(Y) and
$$(\alpha X + \beta Y - E(\alpha X + \beta Y))^2 = (\alpha(X - E(X)) + \beta(Y - E(Y)))^2 = \alpha^2 (X - E(X))^2 + \beta^2 (Y - E(Y))^2 + 2\alpha\beta (X - E(X))(Y - E(Y)),$$
so that
$$V(\alpha X + \beta Y) = \alpha^2 V(X) + \beta^2 V(Y) + 2\alpha\beta\, \mathrm{cov}(X, Y).$$
Let f(λ) = λ²V(X) + 2λ cov(X, Y) + V(Y) = V(λX + Y) ≥ 0, ∀ λ ∈ ℝ. Then, the discriminant of the polynomial verifies
$$\Delta = (2\, \mathrm{cov}(X, Y))^2 - 4 V(X) V(Y) \le 0,$$
and we have the last result.

Definition 2.11 The cumulative distribution function (CDF) of (X, Y) is
$$F(x, y) = P(X \le x, Y \le y) = P((X, Y) \in\, ]-\infty, x] \times ]-\infty, y]) = \sum_{(i, j)\,:\, X_i \le x,\, Y_j \le y} p_{ij} = \sum_{\omega\,:\, X(\omega) \le x,\, Y(\omega) \le y} \mu(\omega).$$

Proposition 2.12 Let (X, Y): Ω ⟶ ℝ² be a couple of numerical variables and F: ℝ² ⟶ ℝ be its CDF. Then
1. x_1 ≤ x_2 and y_1 ≤ y_2 ⟹ F(x_1, y_1) ≤ F(x_2, y_2);
2. F(x, y) = 0 if x < X_1 or y < Y_1;
3. $F(x, y) = \sum_{r=1}^{i-1} \sum_{s=1}^{j-1} p_{rs}$, if X_{i−1} ≤ x < X_i and Y_{j−1} ≤ y < Y_j;
4. $F(x, y) = \sum_{r=1}^{m} \sum_{s=1}^{j-1} p_{rs}$ if x ≥ X_m and Y_{j−1} ≤ y < Y_j;
5. $F(x, y) = \sum_{r=1}^{i-1} \sum_{s=1}^{n} p_{rs}$ if y ≥ Y_n and X_{i−1} ≤ x < X_i;
6. F(x, y) = 1 if x ≥ X_m and y ≥ Y_n;
7. P(x_1 < X ≤ x_2, y_1 < Y ≤ y_2) = F(x_2, y_2) − F(x_2, y_1) − F(x_1, y_2) + F(x_1, y_1);
8. F has a variational derivative: $\partial^2 F / \partial x \partial y = f = \sum_{i=1}^{m} \sum_{j=1}^{n} p_{ij}\, \delta_{X_i Y_j}$, where $\delta_{X_i Y_j}$ is the Dirac's mass supported by (X_i, Y_j). f is the probability density (PDF) of (X, Y).

Proof Analogous to the proof of Proposition 2.9.

R can deal with contingency tables. To create a contingency table, you may use a data.frame. As an example, let us consider the dataset esoph, provided by R, containing data about Smoking, Alcohol, and (O)esophageal Cancer. It contains information about age group (agegp), alcohol (alcgp) and tobacco (tobgp) consumption, number of cases (ncases), and number of controls (ncontrols). For instance, enter the commands below to get a contingency table ct1:
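A plausible sketch of such commands (the book's exact construction is in a screenshot; weighting the table by the number of cases is an assumption):

    ct1 <- xtabs(ncases ~ alcgp + tobgp, data = esoph)   # first variable: alcohol; second: tobacco
    ct1 <- ct1 / sum(ct1)                                # relative frequencies p_ij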


Here, variable X is the alcohol consumption, while Y is the tobacco consumption. Let us generate a second contingency table with X = age group, Y = alcohol consumption.

You can determine the marginal distributions with the instruction marginSums and add the marginal distributions to the table using the command addmargins:
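For instance, a sketch using the table ct1 built above:

    pX <- marginSums(ct1, 1)    # marginal distribution of the first variable (rows)
    pY <- marginSums(ct1, 2)    # marginal distribution of the second variable (columns)
    addmargins(ct1)             # table augmented with its marginal sums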


You can replace the names of the rows and of the columns using rownames and colnames:
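For instance (the labels below are purely illustrative, not the ones used in the book):

    rownames(ct1) <- c("A1", "A2", "A3", "A4")    # hypothetical row labels
    colnames(ct1) <- c("T1", "T2", "T3", "T4")    # hypothetical column labels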

To evaluate the statistics, you need to transform the labels – which are strings of characters – into numeric variables. Once the transformation is made, you can evaluate the statistics such as covariances or means by using the formulae previously introduced. These expressions are included in the class ctstats:

Class ctstats manipulates contingency tables, defined as a list containing fields named “X” (values of X), “Y” (values of Y ), “p” (values of pij), “Xnames” (names of the values in X), “Ynames” (names of values in Y ), “pX” ( pi∎), “pY” ( p∎j). $fromTable creates such an object from a table. $meanXY evaluates the means of X and Y, $varXY evaluates their variances, $sdXY evaluates their standard deviations, $cov calculates their covariance, $meanf finds E( f(X, Y )), and $medXY determines the medians of X and Y. Analogously:


As a final example, let us import the data contained in a file EXCEL® testdata.Xlsx:
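A minimal import sketch, assuming the file sits in the working directory and that the readxl package is used:

    library(readxl)
    testdata <- read_excel("testdata.Xlsx")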

The data.frame testdata appears in the environment:

We generate a contingency table as follows:


$create generates a contingency table as previously described. The values of X and Y are missing in the call: default values are provided – integers 1, 2, .... The contingency table ct2 can be manipulated as in the previous examples.

Exercises
1. Import the file pets.Xlsx. Generate a contingency table corresponding to its contents as in the example above (ct2). Find the means, the standard deviations, and the covariances.
2. Import the file sport.Xlsx. Generate a contingency table corresponding to its contents as in the example above (ct2). Find the means, the standard deviations, and the covariances.
3. Import the file colors.Xlsx. Generate a contingency table corresponding to its contents as in the example above (ct2). Find the means, the standard deviations, and the covariances.

2.4.2

Independent Numerical Variables

Definition 2.12 Let X, Y be two numerical variables defined on (Ω, P). X and Y are independent if and only if
$$P(X \le x, Y \le y) = P(X \le x)\, P(Y \le y), \quad \forall (x, y) \in \mathbb{R}^2.$$

We have

Proposition 2.13 Let X, Y be two numerical variables defined on (Ω, P). Then
1. Let F_XY be the CDF of the pair (X, Y), F_X be the CDF of X, F_Y be the CDF of Y. X and Y are independent if and only if F_XY(x, y) = F_X(x) F_Y(y), ∀ (x, y) ∈ ℝ².
2. X and Y are independent if and only if P(X = X_i, Y = Y_j) = P(X = X_i) P(Y = Y_j), ∀ i, j.
3. X and Y are independent if and only if p_ij = p_i∎ p_∎j, ∀ i, j.
4. Let f_XY be the PDF of the pair (X, Y), f_X be the PDF of X, f_Y be the PDF of Y. X and Y are independent if and only if f_XY(x, y) = f_X(x) f_Y(y), ∀ (x, y) ∈ ℝ².
5. If X and Y are independent, then cov(X, Y) = 0.

Proof The first result follows directly from Definition 2.12. Results 2 to 4 follow from Proposition 2.12 (notice that $\delta_{X_i Y_j} = \delta_{X_i} \delta_{Y_j}$). Finally, p_ij = p_i∎ p_∎j, ∀ i, j ⟹ E(XY) = E(X)E(Y), hence the last result.

Definition 2.13 Let X, Y be two numerical variables defined on (Ω, P). The distribution of X conditional to Y is
$$P(X = X_i \mid Y = Y_j) = \frac{P(X = X_i, Y = Y_j)}{P(Y = Y_j)}.$$
The distribution of Y conditional to X is
$$P(Y = Y_j \mid X = X_i) = \frac{P(X = X_i, Y = Y_j)}{P(X = X_i)}.$$

Proposition 2.14 Let X, Y be two numerical variables defined on (Ω, P).
1. X and Y are independent if and only if P(X = X_i | Y = Y_j) = P(X = X_i), ∀ i, j.
2. X and Y are independent if and only if P(Y = Y_j | X = X_i) = P(Y = Y_j), ∀ i, j.

Proof If X and Y are independent, then P(X = X_i, Y = Y_j) = P(X = X_i) P(Y = Y_j), ∀ i, j, so that, ∀ i, j: P(X = X_i | Y = Y_j) = P(X = X_i) and P(Y = Y_j | X = X_i) = P(Y = Y_j). Conversely, let either P(X = X_i | Y = Y_j) = P(X = X_i), ∀ i, j, or P(Y = Y_j | X = X_i) = P(Y = Y_j), ∀ i, j. In both cases, P(X = X_i, Y = Y_j) = P(X = X_i) P(Y = Y_j), ∀ i, j, so that X and Y are independent.
The class ctstats contains a method condXY which determines the conditional distributions. It returns a list containing "X2Y" (conditional distribution of X to Y) and "Y2X" (conditional distribution of Y to X). An example is given below, using the contingency table ct1, previously defined.

Exercises
1. Consider Ω = {1, 2, ..., 12} and the variables X(ω) = ω mod 2, Y(ω) = ω mod 3. Assume that μ(ω) = 1/12.
   (a) Verify that X(Ω) = {0, 1} and P(X = 0) = P(X = 1) = 1/2.
   (b) Verify that Y(Ω) = {0, 1, 2} and P(Y = 0) = P(Y = 1) = P(Y = 2) = 1/3.
   (c) Determine P(X = i, Y = j) for all the values of (i, j).
   (d) Are these variables independent?
2. Consider Ω = {1, 2, ..., 12} and the variables X(ω) = ω mod 2, Y(ω) = ω mod 4. Assume that μ(ω) = 1/12.
   (a) Verify that X(Ω) = {0, 1} and P(X = 0) = P(X = 1) = 1/2.
   (b) Verify that Y(Ω) = {0, 1, 2, 3} and P(Y = 0) = P(Y = 1) = P(Y = 2) = P(Y = 3) = 1/4.
   (c) Determine P(X = i, Y = j) for all the values of (i, j).
   (d) Are these variables independent?
3. Consider Ω = {1, 2, ..., 12} and the variables X(ω) = ω mod 3, Y(ω) = |sin(ωπ/2)|. Assume that μ(ω) = 1/12.
   (a) Verify that X(Ω) = {0, 1, 2} and P(X = 0) = P(X = 1) = P(X = 2) = 1/3.
   (b) Verify that Y(Ω) = {0, 1} and P(Y = 0) = P(Y = 1) = 1/2.
   (c) Determine P(X = i, Y = j) for all the values of (i, j).
   (d) Are these variables independent?

2.5 Numerical Variables as Elements of Hilbert Spaces

Let us consider the set of numerical variables defined on Ω:
$$\mathcal{H} = \{Z : \Omega \longrightarrow \mathbb{R},\ \Omega \subset \mathrm{dom}(Z)\}. \qquad (2.11)$$

For X, Y ∈ 𝓗, we can consider
$$(X, Y)_{\mathcal{H}} = E(XY). \qquad (2.12)$$

Proposition 2.15 $(\blacksquare, \blacksquare)_{\mathcal{H}}$ is a symmetric, positive, bilinear form on 𝓗. In addition, $(X, X)_{\mathcal{H}} = 0$ if and only if X = 0 almost surely.

Proof We have E(XY) = E(YX), so that $(\blacksquare, \blacksquare)_{\mathcal{H}}$ is symmetric. In addition,
$$E((\alpha X + \beta Y)Z) = E(\alpha XZ + \beta YZ) = \alpha E(XZ) + \beta E(YZ),$$
so that $(\blacksquare, \blacksquare)_{\mathcal{H}}$ is bilinear. We have also E(XX) = E(X²) ≥ 0, hence $(\blacksquare, \blacksquare)_{\mathcal{H}}$ is positive. The last assertion follows from Proposition 2.7.

This result shows that $(\blacksquare, \blacksquare)_{\mathcal{H}}$ satisfies the conditions to be used as a scalar product. Indeed, let us consider the Hilbert space
$$H = L^2(\Omega, P) = \{X \in \mathcal{H} : E(X^2) < \infty\}, \qquad (2.13)$$
having scalar product and norm defined by
$$(X, Y)_H = E(XY), \qquad \|X\|_H = \sqrt{(X, X)_H}. \qquad (2.14)$$
To alleviate the notation, we drop the index H in the sequel, except in the situations where it is necessary to clear some ambiguity.

Remark 2.4 As previously observed, in all generality, it is necessary to consider situations where some elements of X(Ω) may be negligible, i.e., a value p_i can be null. In such a situation, we must consider a binary equivalence relation
$$W \sim Z \iff W - Z = 0 \ \text{almost surely}$$
and the classes
$$[Z]_{\mathcal{H}} = \{W \in \mathcal{H} : W \sim Z\}; \qquad H = \{[Z]_{\mathcal{H}} : Z \in \mathcal{H}\}, \quad ([X]_{\mathcal{H}}, [Y]_{\mathcal{H}})_H = E(XY).$$
$(\blacksquare, \blacksquare)_H$ is a scalar product on H, which can be identified to 𝓗. The completion of H generates a Hilbert space referred to as H = L²(Ω, P), having as scalar product (X, Y)_H = E(XY). Such a formal construction is detailed in (Souza de Cursi & Sampaio, 2010).

The Hilbertian structure allows the use of the standard results concerning Hilbert spaces, namely those concerning orthogonal projections:

Definition 2.14 Let S be a non-empty linear subspace of L²(Ω, P) and X ∈ L²(Ω, P). The orthogonal projection of X onto S is the element PX that minimizes the distance between X and S, id est,
$$PX \in S \quad \text{and} \quad \|X - PX\| = \min\{\|X - v\| : v \in S\}.$$

We have

Theorem 2.1 Let S be a closed linear subspace of L²(Ω, P) and X ∈ L²(Ω, P). Then, the orthogonal projection PX of X onto S exists and is uniquely determined.

Proof See (Souza de Cursi & Sampaio, 2010).

Theorem 2.2 Let S ⊂ L²(Ω, P) be a linear subspace and X ∈ L²(Ω, P). PX is the orthogonal projection of X onto S if and only if PX ∈ S and (X − PX, v) = 0, ∀ v ∈ S.

Proof See (Souza de Cursi & Sampaio, 2010).

Theorem 2.3 Let S ⊂ L²(Ω, P) be a linear subspace and X ∈ L²(Ω, P). Then X = PX if and only if X ∈ S.

Proof It is immediate that X = PX ∈ S ⟹ X ∈ S. If X ∈ S, then ‖X − PX‖ ≤ ‖X − X‖ = 0, so that X = PX.

Remark 2.5 The results extend directly to n-dimensional vectors X = (X_1, ..., X_n), Y = (Y_1, ..., Y_n), with (X, Y) = E(X·Y).

2.5.1

Conditional Probabilities as Orthogonal Projections

Conditional probabilities can be interpreted as orthogonal projections:

Definition 2.15 Let A ⊂ Ω. The indicator function of A is 1_A(ω) = 1, if ω ∈ A; 1_A(ω) = 0, if ω ∉ A.

Proposition 2.16 Let A, B ∈ 𝒫(Ω) be such that P(A) > 0. Let X = 1_B and S = {α1_A, α ∈ ℝ}. Then, the orthogonal projection PX of X onto S is PX = P(B | A) 1_A.

Proof Indeed, P(B | A) 1_A ∈ S. In addition,
$$(1_B - P(B \mid A) 1_A, 1_A) = (1_B, 1_A) - P(B \mid A)(1_A, 1_A).$$
Since 1_B 1_A = 1_{A ∩ B} and, as shown in Proposition 2.6, E(1_C) = P(C) for C ⊂ Ω, we have
$$(1_B - P(B \mid A) 1_A, 1_A) = P(A \cap B) - P(B \mid A) P(A) = 0.$$
Thus, PX = P(B | A) 1_A.

2.5.2 Means as Orthogonal Projections

Means can also be interpreted as orthogonal projections:

Proposition 2.17 Let X ∈ L²(Ω, P). Let S = {Z ∈ L²(Ω, P): Z is constant}. Then, the orthogonal projection PX of X onto S is PX = E(X). In addition, ‖X − PX‖² = V(X), ‖X − PX‖ = σ(X).

Proof Let Z ∈ S. Then Z(ω) = α ∈ ℝ, ∀ ω ∈ Ω, so that Z = α 1_Ω ∈ S. In addition, E(Z) = E(Z 1_Ω) = (Z, 1_Ω). Then
$$(X - E(X) 1_\Omega, 1_\Omega) = (X, 1_\Omega) - E(X)(1_\Omega, 1_\Omega) = E(X) - E(X) = 0.$$
Since E(X) 1_Ω ∈ S, we have PX = E(X). Then,
$$\|X - PX\|^2 = E\left((X - E(X))^2\right) = V(X) \quad \text{and} \quad \|X - PX\| = \sqrt{V(X)} = \sigma(X).$$

Remark 2.6 The medians can be interpreted as orthogonal projections for the norm ‖X‖₁ = E(|X|). Indeed, m = med(X) if and only if m minimizes ‖X − v‖₁ on the subspace S of the constants, id est, if and only if
$$\|X - m\|_1 = \min\{\|X - v\|_1 : v \in S\}.$$

2.5.3

Affine Approximations and Correlations

In this section, we are interested in the approximation of one of the elements of a couple of numerical variables by an affine function of the other one, id est, in the determination of coefficients (a, b) ∈ ℝ² such that PX = aU + b is the best approximation of X by an affine function of U.

Definition 2.16 Let X, U ∈ L²(Ω, P). The coefficient of linear correlation between X and U is
$$\rho(X, U) = \frac{\mathrm{cov}(X, U)}{\sqrt{V(X)}\sqrt{V(U)}}.$$

Then

Proposition 2.18 Let X, U ∈ L²(Ω, P). Let S = {Z ∈ L²(Ω, P): Z = αU + β; α, β ∈ ℝ}. Then, the orthogonal projection PX of X onto S is PX = aU + b, with
$$a = \frac{\mathrm{cov}(X, U)}{V(U)}, \qquad b = E(X) - a E(U).$$
In addition,
$$\|X - PX\|^2 = V(X)\left(1 - |\rho(X, U)|^2\right).$$

Proof Let PX = aU + b. Then (X − PX, v) = 0, ∀ v ∈ S, id est,
$$(X, v) = a(U, v) + b(1_\Omega, v), \quad \forall v \in S.$$
Taking successively v = 1_Ω and v = U, we obtain
$$a\underbrace{(U, 1_\Omega)}_{E(U)} + b\underbrace{(1_\Omega, 1_\Omega)}_{P(\Omega) = 1} = \underbrace{(X, 1_\Omega)}_{E(X)}; \qquad a\underbrace{(U, U)}_{E(U^2)} + b\underbrace{(1_\Omega, U)}_{E(U)} = \underbrace{(X, U)}_{E(XU)}.$$
These equations form a linear system for the unknowns (a, b), having as solution
$$a = \frac{\mathrm{cov}(X, U)}{V(U)}, \qquad b = E(X) - a E(U).$$
Since
$$\|X - aU - b\|^2 = \|(X - E(X)) - a(U - E(U))\|^2,$$
we have
$$\|X - aU - b\|^2 = V(X) + a^2 V(U) - 2a\, \mathrm{cov}(X, U) = V(X) - \frac{(\mathrm{cov}(X, U))^2}{V(U)}.$$
Hence,
$$\|X - PX\|^2 = V(X)\left(1 - |\rho(X, U)|^2\right).$$

Proposition 2.19 We have |ρ(X, U)| ≤ 1 and |ρ(X, U)| = 1 ⟺ X is an affine function of U almost surely.

Proof Proposition 2.18 shows that |ρ(X, U)| ≤ 1. We have |ρ(X, U)| = 1 ⟺ ‖X − PX‖ = 0. Theorem 2.3 shows that ‖X − PX‖ = 0 ⟺ X ∈ S. Hence the result.

Remark 2.7 For multidimensional vectors X = (X_1, ..., X_n), U = (U_1, ..., U_m), we consider S = {Z ∈ [L²(Ω, P)]ⁿ: Z = αU + β; α ∈ M(n, m), β ∈ M(n, 1)}. Then, PX = AU + B, A ∈ M(n, m), B ∈ M(n, 1). Matrix A = (A_ij: 1 ≤ i ≤ n, 1 ≤ j ≤ m) verifies
$$\sum_{j=1}^{m} A_{ij}\, \mathrm{cov}(U_j, U_k) = \mathrm{cov}(X_i, U_k).$$
These equations show that each line of A is the solution of a linear system: line i is the solution R_i of M R_i = N, with M_kj = cov(U_j, U_k) and N_k = cov(X_i, U_k). B is given by B = E(X) − A E(U).

The class ctstats contains methods corr and linmod for the determination of ρ(U, X) and the approximation X ≈ aU + b. It is assumed that the contingency table presents X as second variable and U as first variable. To get an approximation U ≈ aX + b, you must use the method transpose to exchange the places of U and X. As an example, let us consider Ω = {1, 2, 3}, μ(1) = μ(3) = 1/4, μ(2) = 1/2, X(ω) = ω + 1, U(ω) = 2ω. We have X(Ω) = {2, 3, 4}, U(Ω) = {2, 4, 6}, X = U/2 + 1, U = 2X − 2. As shown at right, we obtain the exact results.

linmod also returns a vector of the fitted data F_i = aX_i + b ("fitted"), a table of residuals R_ij = F_i − Y_j ("residuals"), the norm ‖X − PX‖ of the error ("error"), and a coefficient η(X | U) = exp(−‖X − PX‖²) ("eta"). Notice that ‖X − PX‖² = E((X − PX)²), so that the values having null probability have a null contribution. η(X | U) is an indicator of the distance between X and the function of U determined. We have 0 ≤ η ≤ 1, with η = 1 ⟺ X = PX (see Definition 2.17).

R proposes an intrinsic function lm (linear model) for the determination of the coefficients when the values are equiprobable. The instruction lm(X ~ U) generates a list containing the results of the approximation of X by a linear model aU + b. Coefficient b is referred to as Intercept. The list contains the coefficients, the residuals, the fitted values, and other information. The results can be plotted.

As an example, let us consider the universe Ω = {1, 2, 3, 4, 5, 6} and the variables X(ω) = 3ω + 1, U(ω) = 2ω − 1. We have X = (3U + 5)/2. We can determine the coefficients as shown at right. The results are a = 1.5, b = 2.5, as expected. You can plot the data and the results as follows:
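A sketch of these steps (the model name m0 is ours; the book's screenshot is not reproduced):

    omega <- 1:6
    X <- 3 * omega + 1
    U <- 2 * omega - 1
    m0 <- lm(X ~ U)
    coef(m0)          # slope a = 1.5 and intercept b = 2.5, as expected
    plot(U, X)        # data
    abline(m0)        # fitted line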

Multilinear approximation can be generated by lm (for equiprobable data). For instance, the instruction lm(X ~ U1 + U2) generates a list containing the results of the approximation of X by a linear model a1·U1 + a2·U2 + b. As an example, let us consider the variables X(ω) = 3U1(ω) + U2(ω), U1(ω) = 2ω − 1, U2(ω) = ω² + 1. We can determine the coefficients as shown at right. The results are close to the exact values, as expected. When dealing with noisy data, the command confint can also furnish confidence intervals for the coefficients. For instance:
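A sketch of this computation (the model is called m1, consistently with the discussion that follows):

    omega <- 1:6
    U1 <- 2 * omega - 1
    U2 <- omega^2 + 1
    X  <- 3 * U1 + U2
    m1 <- lm(X ~ U1 + U2)
    coef(m1)       # intercept close to 0, coefficients close to 3 and 1
    confint(m1)    # confidence intervals (degenerate here, since the fit is exact)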

Here, the fitting is evaluated by R as “essentially perfect”, so that the confidence interval for a risk α = 0.05 contains a single value. Indeed, m1 contains more information than the coefficients:
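For instance, the full report can be displayed with:

    summary(m1)    # residuals, standard errors, t-values, p-values, R-squared, F-statistic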

Thus, R also furnishes the residuals, the standard errors, the t-values (which are equal to the coefficients divided by the standard errors), and the probability associated to each t-value: a small t-value corresponds to a large probability of error – in such a situation, the results must be considered as not significant. R also furnishes the coefficient of linear correlation and the F statistics. You will find in the R repositories packages that complement the basic information and methods provided by lm. For instance, MASS, relaimpo, bootstrap, DAAG, leaps.

2.5.4

Conditional Mean

It is interesting to notice that the problem of the approximation of a numerical variable by a general function of another one has a solution – id est, there is a function PX = f(U) which is the best approximation of X by a generic function of U. We have

Proposition 2.20 Let X, U ∈ L²(Ω, P). Let S = {Z ∈ L²(Ω, P): Z = g(U), g: ℝ ⟶ ℝ}. Then, the orthogonal projection PX of X onto S is PX = E(X | U), with
$$E(X \mid U = U_i) = \frac{1}{p_{i\blacksquare}} \sum_{j=1}^{m} p_{ij} X_j = \sum_{j=1}^{m} P(X = X_j \mid U = U_i)\, X_j.$$

Proof Let Z ∈ S and Z_k = g(U_k). The orthogonal projection minimizes
$$\|X - Z\|^2 = \sum_{j=1}^{m} \sum_{k=1}^{n} p_{jk} (X_j - Z_k)^2.$$
Thus, Z_i nullifies the derivative of this sum of squares. Since
$$\frac{\partial}{\partial Z_i} \|X - Z\|^2 = -2 \sum_{j=1}^{m} p_{ji} (X_j - Z_i) = -2\left(\sum_{j=1}^{m} p_{ji} X_j - Z_i\, p_{\blacksquare i}\right),$$
we have
$$\frac{\partial}{\partial Z_i} \|X - Z\|^2 = 0 \iff Z_i = \frac{1}{p_{\blacksquare i}} \sum_{j=1}^{m} p_{ji} X_j,$$
hence the result.

Definition 2.17 Let PX = E(X | U). Then, η(X | U) = exp(−‖X − PX‖²) measures the distance between X and the subspace S. We can use
$$R(X, U) = \frac{1}{2}\left(\eta(X \mid U) + \eta(U \mid X)\right)$$
as generalized coefficient of correlation between X and U.

Proposition 2.21 Let X, U ∈ L²(Ω, P). Let S_U = {Z ∈ L²(Ω, P): Z = g(U), g: ℝ ⟶ ℝ}, S_X = {Z ∈ L²(Ω, P): Z = h(X), h: ℝ ⟶ ℝ}. Let us denote PX = E(X | U) and PU = E(U | X). Then
1. η(X | U) = 1 if and only if X = PX ∈ S_U;
2. η(U | X) = 1 if and only if U = PU ∈ S_X;
3. X = PX ∈ S_U ⟺ U = PU ∈ S_X;
4. η(X | U) = 1 ⟺ η(U | X) = 1;
5. R(X, U) = 1 if and only if X = PX ∈ S_U and U = PU ∈ S_X.

Proof η(X | U) = 1 if and only if ‖X − PX‖ = 0, hence the first result. Analogously, η(U | X) = 1 if and only if ‖U − PU‖ = 0, hence the second result.
Let X = PX ∈ S_U. Then, X = g(U), so that, for each i, there is j(i) such that X_i = g(U_{j(i)}) and we have p_ik = 0 for k ≠ j(i). Indeed,
$$\|X - g(U)\|^2 = \sum_{i=1}^{m} \sum_{j=1}^{n} p_{ij} \left(X_i - g(U_j)\right)^2 = 0.$$
Thus, X_i − g(U_j) ≠ 0 ⟹ p_ij = 0. Thus,
$$\sum_{i=1}^{m} \sum_{j=1}^{n} p_{ij} \left(A_i - B_j\right)^2 = \sum_{i=1}^{m} p_{i, j(i)} \left(A_i - B_{j(i)}\right)^2.$$
Let h(X_i) = U_{j(i)}. Taking A_i = h(X_i), B_j = U_j, we have
$$\|h(X) - U\|^2 = \sum_{i=1}^{m} p_{i, j(i)} \left(h(X_i) - U_{j(i)}\right)^2 = 0,$$
so that U ∈ S_X and we have U = PU ∈ S_X. The converse is obtained in the same way, hence the third result. The fourth result follows from the third one. Finally, notice that 0 ≤ R(X, U) ≤ 1 and R(X, U) = 1 ⟺ η(X | U) = η(U | X) = 1, hence the last result.

The class ctstats contains methods condmean and gencorr for the determination of the conditional means and of the generalized coefficient of correlation. Both return lists containing elements "X2Y" and "Y2X" corresponding to E(X | Y) and E(Y | X). gencorr also returns the value of R ("R").

As an example, let us consider the universe Ω = {0, 1, 2, 3}, μ(0) = 0.25, μ(1) = 0.4, μ(2) = 0.1, μ(3) = 0.25 and the variables X(ω) = 2ω − 1, U(ω) = 1 + ω². We have X(Ω) = {−1, 1, 3, 5}, U(Ω) = {1, 2, 5, 10}. As a second example, let us consider the universe Ω = {−2, −1, 1, 2}, μ(−2) = 0.25, μ(−1) = 0.4, μ(1) = 0.1, μ(2) = 0.25 and the variables X(ω) = 2ω − 1, U(ω) = 1 + ω². We have X(Ω) = {−5, −3, 1, 3}, U(Ω) = {5, 2, 2, 5} = {2, 5}.

Exercises
1. Consider the data below:

   X   4.9   7.2   9.4   11.2   12.4   14.4   17.7   19.3   20.5   23.7
   U   1.0   2.0   3.1    3.9    5.0    5.8    7.1    7.9    8.6    9.7

   Assume that the universe is equiprobable: μ = constant = 0.1.
   (a) Determine the covariance between X and U.
   (b) Determine the best linear approximation of X as a function of U.
   (c) Plot the approximation and the data.
   (d) Determine the generalized correlation between X and U.

2. Consider the data below:

   X   1.0   1.4   1.8   2.0   2.3   2.4   2.6   2.8   3.0    3.2
   U   1.0   1.9   3.1   4.0   5.2   5.8   7.0   7.9   9.2   10.3

   Assume that the universe is equiprobable: μ = constant = 0.1.
   (a) Determine the covariance between X and U.
   (b) Determine the best linear approximation of X as a function of U.
   (c) Plot the approximation and the data.
   (d) Determine the generalized correlation between X and U.
   (e) Determine the best approximation of X as a function of U.
   (f) Plot the approximation and the data.

3. Consider the data below:

   X   0.9   0.8   0.7   0.7   0.6   0.6   0.5   0.5   0.4   0.4
   U   1.0   1.9   2.9   4.1   5.0   5.9   7.0   7.8   8.9   9.7

   Assume that the universe is equiprobable: μ = constant = 0.1.
   (a) Determine the covariance between X and U.
   (b) Determine the best linear approximation of X as a function of U.
   (c) Plot the approximation and the data.
   (d) Determine the generalized correlation between X and U.
   (e) Determine the best approximation of X as a function of U.
   (f) Plot the approximation and the data.

2.6 Random Variables

Random variables generalize the notion of numerical variables. Now, we consider a general probability space (Ω, P), where the universe Ω can be uncountable, for instance, a real interval or a subset of ℝⁿ. Analogously to a numerical variable on a finite population, a random variable X is an application X: Ω → ℝ. For a given subset A ⊂ ℝ, we may consider X⁻¹(A) = {ω ∈ Ω: X(ω) ∈ A}. Then, P(X ∈ A) = P(X⁻¹(A)). Analogously to the case of numerical variables,

Definition 2.18 Let
$$I_X(x) = X^{-1}(]-\infty, x]) = \{\omega \in \Omega : X(\omega) \le x\}.$$
The cumulative distribution function (CDF) of X is
$$F(x) = P(X \le x) = P(X \in\, ]-\infty, x]) = P(I_X(x)).$$
F is usually called the distribution of X.

Remark 2.8 If the probability P on Ω is defined by a mass function μ, then
$$F(x) = \sum_{\omega \in I_X(x)} \mu(\omega) = \sum_{\omega \in \Omega} \mu(\omega)\, 1_{I_X(x)}(\omega). \qquad (2.15)$$
If the probability P on Ω is defined by a mass density μ, then
$$F(x) = \int_{I_X(x)} \mu(\omega)\, d\omega = \int_{\Omega} \mu(\omega)\, 1_{I_X(x)}(\omega)\, d\omega. \qquad (2.16)$$

Remark 2.9 Different variables can have the same distribution. For instance, consider Ω = (0, 1), μ(ω) = 1, X(ω) = ω, Y(ω) = 1 − ω. We have, for 0 < x, y < 1,
$$P(X \le x) = P((0, x]) = x; \qquad P(Y \le y) = P([1 - y, 1)) = y.$$
Thus, F_X(s) = F_Y(s), ∀ s, but X ≠ Y.

We have

Proposition 2.22 Let X: Ω ⟶ ℝ be a random variable and F: ℝ ⟶ ℝ be its CDF. Then
1. F is monotonically increasing: x ≤ y ⟹ F(x) ≤ F(y);
2. F(x) ⟶ 0 if x ⟶ −∞;
3. F(x) ⟶ 1 if x ⟶ +∞;
4. P(a < X ≤ b) = F(b) − F(a);
5. P(X = a) = F(a+) − F(a−);
6. P(X > x) = 1 − F(x);
7. If the probability P on Ω is defined by a mass function μ, then
$$F(x) = \sum_{\omega \in I_X(x)} \mu(\omega) = \sum_{\omega \in \Omega} \mu(\omega)\, 1_{I_X(x)}(\omega);$$
8. If the probability P on Ω is defined by a mass density μ, then
$$F(x) = \int_{I_X(x)} \mu(\omega)\, d\omega = \int_{\Omega} \mu(\omega)\, 1_{I_X(x)}(\omega)\, d\omega.$$

Proof The proof of assertions 1–6 is analogous to the proof of Proposition 2.9. Details can be found in (Souza de Cursi, 1992) or (Souza de Cursi & Sampaio, 2015). Results 7 and 8 follow from the definitions of mass functions and mass densities (Sect. 2.2.1).

Definition 2.19 Let X: Ω ⟶ ℝ be a random variable on the probability space (Ω, P). Let F_X be the CDF of X and μ_X((a, b)) = F_X(b) − F_X(a). The probability density function (PDF) f_X of X is the density associated to μ_X, if it exists. Then
$$P(X \in A) = \mu_X(A) = \int_A \mu_X(dx) = \int_A f_X\, dx.$$

Notice that μ_X(dx) = dF_X(x) and
$$F_X(x) - F_X(a) = \mu_X((a, x)) = \int_a^x f_X\, dx \implies F_X' = f_X. \qquad (2.17)$$
If P is defined by a mass density μ, then
$$\mu_X((a, x)) = \int_{I_X(x) - I_X(a)} \mu(\omega)\, d\omega. \qquad (2.18)$$

If P is defined by a mass function μ, then
$$\mu_X((a, x)) = \sum_{\omega \in I_X(x) - I_X(a)} \mu(\omega). \qquad (2.19)$$

Definition 2.20 Let f: ℝ ⟶ ℝ be a function. The mean or expectation of f(X) is
$$E(f(X)) = \int_{\mathbb{R}} f(x)\, \mu_X(dx) = \int_{\mathbb{R}} f(x)\, f_X(x)\, dx.$$

We have

Proposition 2.23 Let X: Ω ⟶ ℝ be a random variable and f, g: ℝ ⟶ ℝ be two functions. Then
1. E(αf(X) + βg(X)) = αE(f(X)) + βE(g(X));
2. $E(f(X)) = \sum_{i=1}^{m} p_i f(X_i)$, if X is discrete, taking the values X_1, ..., X_m with probabilities p_1, ..., p_m;
3. If f(X) ≥ 0, then E(f(X)) ≥ 0;
4. If f(X) ≥ 0, then λP(f(X) ≥ λ) ≤ E(f(X)), ∀ λ ∈ ℝ;
5. Let 1_A(ω) = 1, if ω ∈ A; 1_A(ω) = 0, if ω ∉ A. Then E(1_A) = P(A);
6. Let F be the CDF of X. Then F(x) = E(1_{I_X(x)}).

Proof The proof of assertions 1–5 is analogous to the proof of Proposition 2.6. Result 6 follows from result 5. Details can be found in (Souza de Cursi & Sampaio, 2015).

If P is defined by a mass density μ, then
$$E(f(X)) = \int_{\Omega} \mu(\omega)\, f(X(\omega))\, d\omega; \qquad (2.20)$$
If P is defined by a mass function μ, then
$$E(f(X)) = \sum_{\omega \in \Omega} \mu(\omega)\, f(X(\omega)). \qquad (2.21)$$

Definition 2.21
1. The moment of order k of X is M_k(X) = E(X^k).
2. The central moment of order k of X is CM_k(X) = E((X − E(X))^k).
3. The variance of X is V(X) = CM_2(X) = E((X − E(X))²).
4. The standard deviation of X is σ(X) = √V(X).
5. The characteristic function of X is φ_X(t) = E(e^{itX}).

We have

Proposition 2.24 Let X: Ω ⟶ ℝ be a random variable. Then
1. V(X) ≥ 0;
2. V(X) = E(X²) − E(X)²;
3. V(X) = 0 if and only if X is constant almost surely;
4. E(X²) = 0 if and only if X = 0 almost surely;
5. P(|X − E(X)| ≥ ε) ≤ V(X)/ε², ∀ ε > 0;
6. If M_k(X) < ∞, then $\varphi_X^{(k)}(0) = i^k M_k(X)$;
7. If M_k(X) < ∞, ∀ k ∈ ℕ, then $\varphi_X(t) = \sum_{n \in \mathbb{N}} \frac{(it)^n}{n!} M_n(X)$;
8. If Y = aX + b, then φ_Y(t) = e^{ibt} φ_X(at).

Proof For assertions 1–5, the proof is analogous to those of Propositions 2.20.6 and 2.20.7. For the others, $\varphi_X(t) = E(e^{itX}) \implies \varphi_X^{(k)}(t) = E\left(i^k X^k e^{itX}\right)$, hence assertion 6. Result 7 follows from the Taylor series of φ_X(t). Result 8 is immediate. For details, see (Souza de Cursi & Sampaio, 2015).

Example 2.10 Let us consider Ω = (0, 1) and the probability P((a, b)) = b − a (μ(ω) = 1). Let X(ω) = log(1 + ω). Then X(ω) ≤ x ⟺ ω ≤ e^x − 1. Thus
$$I_X(x) = \begin{cases} \emptyset, & \text{if } x < 0; \\ (0, e^x - 1), & \text{if } 0 \le x < \log(2); \\ \Omega, & \text{if } x \ge \log(2). \end{cases}$$
Thus,
$$F_X(x) = \begin{cases} 0, & \text{if } x < 0; \\ e^x - 1, & \text{if } 0 \le x < \log(2); \\ 1, & \text{if } x \ge \log(2). \end{cases}$$

The PDF is
$$f_X(x) = \begin{cases} 0, & \text{if } x < 0; \\ e^x, & \text{if } 0 < x < \log(2); \\ 0, & \text{if } x > \log(2). \end{cases}$$
The mean of X is
$$E(X) = \int_0^{\log(2)} x e^x\, dx = 2\log(2) - 1.$$
In addition,
$$E(X^2) = \int_0^{\log(2)} x^2 e^x\, dx = 2\log^2(2) - 4\log(2) + 2,$$
so that V(X) = 1 − 2log²(2). We have also
$$E(X) = \int_0^1 \log(1 + \omega)\, \mu(\omega)\, d\omega = 2\log(2) - 1,$$
$$E(X^2) = \int_0^1 \log^2(1 + \omega)\, \mu(\omega)\, d\omega = 2\log^2(2) - 4\log(2) + 2,$$
and, again, V(X) = 1 − 2log²(2). The median m verifies
$$F(m) = \frac{1}{2} \implies m = \log\frac{3}{2}.$$
We can find a numerical approximation of the distribution in two ways. The first one consists in discretizing Ω and generating a sample from X:
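A minimal sketch of this first approach (the grid size n is arbitrary; the book's code is a screenshot):

    n     <- 10000
    omega <- (1:n - 0.5) / n          # regular discretization of Omega = (0, 1)
    Xs    <- log(1 + omega)           # corresponding sample of X
    Femp  <- ecdf(Xs)                 # empirical CDF approximating F_X
    Femp(log(1.5))                    # about 0.5: the median is log(3/2)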


The second one consists in integrating the mass density:
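A minimal sketch of this second approach, approximating F(x) = ∫_Ω μ(ω) 1_{I_X(x)}(ω) dω by a Riemann sum:

    Fnum <- function(x, n = 10000) {
      w  <- (1:n - 0.5) / n                                  # discretization of Omega
      mu <- rep(1, n)                                        # mass density mu(omega) = 1
      sapply(x, function(s) sum(mu * (log(1 + w) <= s)) / n)
    }
    Fnum(c(0.2, log(1.5), 0.8))   # compare with exp(x) - 1 on (0, log 2)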


Exercises
1. Consider Ω = (0, 1) and the mass density μ(ω) = αω.
   a) Show that α = 2.
   b) Find the distribution of X(ω) = e^ω.
   c) Find the expression of M_k(X).
   d) Determine the mean and the variance of X.
2. Consider Ω = (0, 1) and the mass density μ(ω) = α.
   a) Show that α = 1.
   b) Find the distribution of X(ω) = e^ω.
   c) Find the expression of M_k(X).
   d) Determine the mean and the variance of X.
3. Consider Ω = (0, 1) and the mass density μ(ω) = αω².
   a) Show that α = 3.
   b) Find the distribution of X(ω) = ω.
   c) Find the expression of M_k(X).
   d) Determine the mean and the variance of X.
4. Consider Ω = (0, 1) and the mass density μ(ω) = 1.
   a) Find the distribution of X(ω) = ω.
   b) Find the expression of M_k(X).
   c) Determine the mean and the variance of X.
5. Consider Ω = {−1, 0, 1} and the mass function μ(1) = μ(−1) = 1/3.
   a) Determine μ(0).
   b) Find the distribution of X(ω) = ω².
   c) Find the expression of M_k(X).
   d) Determine the mean and the variance of X.
6. Consider Ω = {−1, 0, 1} and the mass function μ(1) = μ(−1) = 1/6.
   a) Determine μ(0).
   b) Find the distribution of X(ω) = ω².
   c) Find the expression of M_k(X).
   d) Determine the mean and the variance of X.
7. Consider Ω = {−1, 0, 1} and the mass function μ(1) = μ(−1) = 1/6.
   a) Determine μ(0).
   b) Find the distribution of X(ω) = ω².
   c) Find the expression of M_k(X).
   d) Determine the mean and the variance of X.
8. Let Ω = {0, 1, ..., n} and X(ω) = ω mod 2 (remainder after division by 2). Assume the elements of Ω as equiprobable.
   a) Show that X(Ω) = {0, 1}.
   b) Determine p = P(X = 0) when n = 2k.
   c) Determine p = P(X = 0) when n = 2k + 1.
   d) Determine the CDF F as a function of p.
9. Let X be a random variable having the CDF F_X(x) = x³ for 0 < x < 1.
   a) Show that P(X ≤ 0) = 0.
   b) Show that P(X > 1) = 0.
   c) Find its PDF f_X.
   d) Determine the mean and the variance of X.
10. Let Ω = {1, 2, 3, 4, 5} and X(ω) = 1/(ω + 1). Assume the elements of Ω as equiprobable.
   a) Determine X(Ω).
   b) Determine the distribution of X.
   c) Determine the mean and the variance of X.
11. Let Ω = (−1, 1) and X(ω) = ω³, with P((a, b)) = (b − a)/2 (μ(ω) = 1/2).
   a) Determine X(Ω).
   b) Determine the distribution of X.
   c) Determine the mean and the variance of X.
12. Determine the characteristic functions of the following random variables:
   a) X is such that P(X = 0) = 1.
   b) X is such that P(X = 1) = p, P(X = 0) = 1 − p.
   c) X is Poisson distributed with parameter λ: P(X = k) = e^{−λ} λ^k / k!.
   d) X is uniformly distributed on (a, b) – its PDF is f(x) = 1/(b − a) on (a, b).
   e) X is exponentially distributed – its PDF is f(x) = λe^{−λx} on (0, +∞).

2.6.1

Numerical Evaluation of Statistics

In Sect. 2.10, we consider samples from random variables and their use for the evaluation of statistics. Here, we consider the numerical evaluation of statistics using numerical integration. The use of this approach requires the knowledge of the distribution of the variable or of the mass density. In such a situation, the mean, the moments, and the variance can be obtained by numerical integration. A first approximation consists in evaluating
$$E(\phi(X)) \approx \int_{-A}^{A} \phi(x)\, dF(x) \qquad (2.22)$$
and using numerical integration to evaluate the left-hand side. For instance, we can consider −A = x_0 < x_1 < ... < x_n = A and
$$E(\phi(X)) \approx \sum_{i=0}^{n-1} \phi\left(x_{i+\frac{1}{2}}\right)\left(F(x_{i+1}) - F(x_i)\right), \quad x_{i+\frac{1}{2}} = \frac{x_{i+1} + x_i}{2}. \qquad (2.23)$$
If the PDF f is known, we can use the approximation
$$E(\phi(X)) \approx \int_{-A}^{A} \phi(x)\, f(x)\, dx. \qquad (2.24)$$
Again, the left-hand side can be evaluated by numerical integration. In this case, the methods presented in Sect. 1.16 can be used. Of course, a simple estimate may be generated by considering a partition −A = x_0 < x_1 < ... < x_n = A and
$$E(\phi(X)) \approx \sum_{i=0}^{n-1} p_{i+\frac{1}{2}}\, \phi\left(x_{i+\frac{1}{2}}\right), \quad p_{i+\frac{1}{2}} = f\left(x_{i+\frac{1}{2}}\right)(x_{i+1} - x_i). \qquad (2.25)$$
A mass density μ can be used in an analogous way, with a partition of the domain of the values of ω:
$$E(\phi(X)) \approx \sum_{i=0}^{n-1} p_{i+\frac{1}{2}}\, \phi\left(X\left(\omega_{i+\frac{1}{2}}\right)\right), \quad p_{i+\frac{1}{2}} = \mu\left(\omega_{i+\frac{1}{2}}\right)(\omega_{i+1} - \omega_i). \qquad (2.26)$$
The evaluation of these expressions reduces to sums to be calculated by using loops. For instance,
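a minimal sketch of Eq. (2.26) for the setting of Example 2.11 below (Ω = (0, 1), μ(ω) = 1, X(ω) = ω²), consistent with the n = 100 column of Table 2.4, is:

    n     <- 100
    omega <- (0:n) / n                           # partition points omega_i = i/n
    wmid  <- (omega[-1] + omega[-(n + 1)]) / 2   # midpoints omega_{i+1/2}
    p     <- 1 * diff(omega)                     # p_{i+1/2} = mu(omega_{i+1/2}) * (omega_{i+1} - omega_i)
    EX    <- sum(p * wmid^2)                     # E(X), with X(omega) = omega^2
    EX2   <- sum(p * wmid^4)                     # E(X^2)
    VX    <- EX2 - EX^2                          # V(X)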

Example 2.11 Let us consider Ω = (0, 1) and the density μ(ω) = 1. Let X(ω) = ω². Consider a partition of Ω in n equally spaced subintervals: ω_i = i/n. The results furnished by Eq. (2.26) for three values of n are shown in Table 2.4.

Table 2.4 Numerical evaluation of the mean and variance using the mass function

        n = 100       n = 500       n = 1000      exact
E(X)    0.333325      0.333333      0.33333325    1/3
E(X²)   0.1999833     0.1999993     0.1999998     1/5
V(X)    0.088877778   0.088888444   0.088888778   4/45

We can also use the PDF $f(x) = \frac{1}{2\sqrt{x}}$ and a partition of (0, 1) to evaluate $\int_0^1 x^k f(x)\, dx$. The results furnished by Eq. (2.25) are shown in Table 2.5.

Table 2.5 Numerical evaluation of the mean and variance using the PDF

        n = 100       n = 500       n = 1000      exact
E(X)    0.333362736   0.333336015   0.333334286   1/3
E(X²)   0.199996957   0.199999876   0.199999969   1/5
V(X)    0.088866244   0.088886978   0.088888223   4/45

Of course, we can use integrate to determine these values:
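For instance, a sketch for the PDF of Example 2.11:

    f   <- function(x) 1 / (2 * sqrt(x))                    # PDF of X = omega^2
    EX  <- integrate(function(x) x   * f(x), 0, 1)$value    # 1/3
    EX2 <- integrate(function(x) x^2 * f(x), 0, 1)$value    # 1/5
    VX  <- EX2 - EX^2                                       # 4/45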

Exercises
1. Let X be a continuous random variable having the CDF F. Assume that F(x) = x³ for 0 < x < 1.
   a) Determine the PDF f(x).
   b) Determine the exact values of E(X), E(X²), E(X³), V(X).
   c) Consider a partition of (0, 1) in n = 100 equally spaced subintervals: x_i = i/n. Use this partition and f(x) to estimate E(X), E(X²), E(X³), V(X).
   d) Determine the same values using integrate.
   e) Determine the mean and the variance of X.
2. Let Ω = (0, 1), X(ω) = 1/(ω + 1), μ(ω) = 1.
   a) Determine its CDF F(x).
   b) Determine its PDF f(x).
   c) Determine the exact values of E(X), E(X²), E(X³), V(X).
   d) Consider a partition of (0, 1) in n = 250 equally spaced subintervals: x_i = i/n. Use this partition and f(x) to estimate E(X), E(X²), E(X³), V(X).
   e) Determine the same values using integrate.
   f) Determine the mean and the variance of X.
3. Let Ω = (−1, 1), X(ω) = ω³, μ(ω) = 1/2.
   a) Determine its CDF F(x).
   b) Determine its PDF f(x).
   c) Determine the exact values of E(X), E(X²), E(X³), V(X).
   d) Consider a partition of (0, 1) in n = 1000 equally spaced subintervals: x_i = i/n. Use this partition and f(x) to estimate E(X), E(X²), E(X³), V(X).
   e) Determine the same values using integrate.
   f) Determine the mean and the variance of X.

2.7

Random Vectors

A random vector X = (X_1, ..., X_n) of dimension n is an application X: Ω → ℝⁿ; id est, X is a vector of numerical characteristics of the elements of Ω. Analogously to random variables, probabilities on the values of X are generated by the probabilities on Ω: for a given subset A ⊂ ℝⁿ, we may consider X⁻¹(A) = {ω ∈ Ω: X(ω) ∈ A}. Then, P(X ∈ A) = P(X⁻¹(A)). The cumulative function is defined analogously to the one-dimensional situation: for X = (X_1, ..., X_n) and x = (x_1, ..., x_n), we denote by "X ≤ x" the region "X_1 ≤ x_1, X_2 ≤ x_2, ..., X_n ≤ x_n", id est, "X ≤ x" = A(x), where $A(x) = \prod_{i=1}^{n} A_i(x_i)$, $A_i(x_i) = (-\infty, x_i]$. Let $I_i(x_i) = X_i^{-1}(A_i(x_i))$. Then, $X^{-1}(A(x)) = I_X(x) = \bigcap_{i=1}^{n} I_i(x_i)$ and the CDF of X is F(x) = P(I_X(x)). The PDF is defined as
$$f = \frac{\partial^n F}{\partial x_1 \partial x_2 \ldots \partial x_n}.$$
For a couple X = (X_1, X_2) of random variables, we have

Definition 2.22 Let X = (X_1, X_2) be a couple of random variables defined on a probability space (Ω, P).
The cumulative distribution function (CDF) of X is
$$F(x_1, x_2) = P(X_1 \le x_1, X_2 \le x_2).$$
F is the distribution of X. The probability density function (PDF) is $f = \frac{\partial^2 F}{\partial x_1 \partial x_2}$ (if it exists).
The marginal distribution of X_i is F_i(x_i) = P(X_i ≤ x_i). We have
$$F_1(x_1) = F(x_1, +\infty), \qquad F_2(x_2) = F(+\infty, x_2).$$
The marginal density of X_i is f_i = ∂F_i/∂x_i; for instance, $f_1(x_1) = \int_{-\infty}^{+\infty} f(x_1, x_2)\, dx_2$.
The copula associated to X = (X_1, X_2) is, for 0 < c_1, c_2 < 1:
$$C(c_1, c_2) = P(F_1 \le c_1, F_2 \le c_2).$$
The covariance between X_1 and X_2 is
$$\mathrm{cov}(X_1, X_2) = E((X_1 - E(X_1))(X_2 - E(X_2))) = E(X_1 X_2) - E(X_1)E(X_2).$$
The linear correlation coefficient between X_1 and X_2 is
$$\rho(X_1, X_2) = \frac{\mathrm{cov}(X_1, X_2)}{\sqrt{V(X_1) V(X_2)}}.$$
The best approximation of X_i by an affine function of X_j in the least squares sense is ℓ(X_j) = αX_j + β, with
$$\alpha = \frac{\mathrm{cov}(X_i, X_j)}{V(X_j)}, \qquad \beta = E(X_i) - \alpha E(X_j).$$
We have E((X_i − ℓ(X_j))²) = V(X_i)(1 − |ρ(X_i, X_j)|²). Thus,
$$X_i = \ell(X_j) \ \text{a.s.} \iff |\rho(X_i, X_j)| = 1.$$
The covariance matrix C(X) and the correlation matrix ρ(X) of X are given by
$$C_{ij}(X) = \mathrm{cov}(X_i, X_j), \qquad \rho_{ij}(X) = \rho(X_i, X_j).$$
The conditional mean of X_i with respect to X_j is the best approximation of X_i by a function ψ_j(X_j) in the least squares sense. We have
$$E(X_2 \mid X_1) = \psi_1(x_1) = \int x_2\, f(x_2 \mid x_1)\, dx_2, \quad f(x_2 \mid x_1) = \frac{f(x_1, x_2)}{f_1(x_1)};$$
$$E(X_1 \mid X_2) = \psi_2(x_2) = \int x_1\, f(x_1 \mid x_2)\, dx_1, \quad f(x_1 \mid x_2) = \frac{f(x_1, x_2)}{f_2(x_2)}.$$
The generalized correlation coefficient is
$$R(X_1, X_2) = \frac{1}{2}\left(\eta(X_1 \mid X_2) + \eta(X_2 \mid X_1)\right),$$
$$\eta(X_1 \mid X_2) = \exp\left(-\|X_1 - E(X_1 \mid X_2)\|^2\right), \qquad \eta(X_2 \mid X_1) = \exp\left(-\|X_2 - E(X_2 \mid X_1)\|^2\right).$$

Remark 2.10 If the probability P on Ω is defined by a mass function μ, then
$$F(x) = \sum_{\omega \in I_X(x)} \mu(\omega) = \sum_{\omega \in \Omega} \mu(\omega)\, 1_{I_X(x)}(\omega), \qquad (2.27)$$
$$1_{I_X(x)}(\omega) = \prod_{i=1}^{n} 1_{I_i(x_i)}(\omega). \qquad (2.28)$$
If the probability P on Ω is defined by a mass density μ, then
$$F(x) = \int_{I_X(x)} \mu(\omega)\, d\omega = \int_{\Omega} \mu(\omega)\, 1_{I_X(x)}(\omega)\, d\omega. \qquad (2.29)$$

Remark 2.11 If X = (X_1, X_2) has a PDF f, then
$$E(\phi(X_1, X_2)) = \iint_{-\infty}^{+\infty} \phi(x_1, x_2)\, f(x_1, x_2)\, dx_1 dx_2.$$
If Ω has a mass function μ, then
$$E(\phi(X_1, X_2)) = \sum_{\omega \in \Omega} \mu(\omega)\, \phi(X_1(\omega), X_2(\omega)).$$
If Ω has a mass density μ, then
$$E(\phi(X_1, X_2)) = \int_{\Omega} \mu(\omega)\, \phi(X_1(\omega), X_2(\omega))\, d\omega.$$
If the probability P on Ω is defined by a mass density μ, then
$$F(x) = \int_{I_X(x)} \mu(\omega)\, d\omega = \int_{\Omega} \mu(\omega)\, 1_{I_X(x)}(\omega)\, d\omega.$$

Example 2.12 Let us consider Ω = (0, 1) and the density μ(ω) = 1. Let X1(ω) = ω², X2(ω) = ω. We have, for 0 < x1, x2 < 1:

X1 < x1, X2 < x2 ⟺ ω < min{√x1, x2}.

Thus, for 0 < x1, x2 < 1:

F(x1, x2) = min{√x1, x2} = x2, if x2 < √x1; √x1, if x2 > √x1.

This function cannot be differentiated in the usual sense, due to the change of definition across the curve x2 = √x1. The density is null in the subregions x2 < √x1 and x2 > √x1, but corresponds to a Dirac mass – see, for instance (Souza de Cursi, 2015) – on the curve separating these regions. The marginal distributions are, for 0 < x1, x2 < 1:

F1(x1) = √x1, F2(x2) = x2.

Although we cannot differentiate F, the marginal densities may be determined:

f1(x1) = 1/(2√x1), f2(x2) = 1.

The copula associated to X is C(c1, c2) = min{c1, c2}. We have

E(X1^k) = (1/2) ∫₀¹ x1^{k - 1/2} dx1 = 1/(2k + 1), E(X2^k) = ∫₀¹ x2^k dx2 = 1/(k + 1).

Thus,

E(X1) = 1/3, V(X1) = 4/45; E(X2) = 1/2, V(X2) = 1/12.

In addition,

E(X1X2) = ∫₀¹ X1(ω)X2(ω) dω = ∫₀¹ ω³ dω = 1/4,

cov(X1, X2) = 1/12, ρ(X1, X2) = √15/4.

The best least squares approximations by affine functions are

X1 ≈ X2 - 1/6; X2 ≈ (15/16)X1 + 3/16.

The conditional mean E(Xi|Xj) is determined by finding ψ(Xj) that minimizes

J(ψ) = E((Xi - ψ(Xj))²).

Then

E(X1|X2) = X2², E(X2|X1) = √X1.

Thus,

‖X1 - E(X1|X2)‖ = ‖X2 - E(X2|X1)‖ = 0,

so that

η(X1|X2) = η(X2|X1) = 1 ⟹ R(X1, X2) = 1.

Example 2.13 Let us consider Ω = {-2, -1, 0, 1, 2}. Assume that all the elements are equiprobable. Let X1(ω) = ω²; X2(ω) = |ω|. We have

X(Ω) = {(4, 2), (1, 1), (0, 0), (1, 1), (4, 2)}.

Thus,

P(X = (4, 2)) = P(X = (1, 1)) = 2/5; P(X = (0, 0)) = 1/5.

The marginal distributions are

P(X1 = 4) = P(X1 = 1) = 2/5; P(X1 = 0) = 1/5;
P(X2 = 2) = P(X2 = 1) = 2/5; P(X2 = 0) = 1/5.

We have

E(X1) = 2, V(X1) = 14/5; E(X2) = 6/5, V(X2) = 14/25.

In addition, E(X1X2) = 18/5, so that

cov(X1, X2) = 6/5, ρ(X1, X2) = 3√5/7.

The best least squares approximations by affine functions are

X1 ≈ (15/7)X2 - 4/7; X2 ≈ (3/7)X1 + 12/35.

The conditional mean E(Xi|Xj) is determined by finding ψ(Xj) that minimizes

J(ψ) = E((Xi - ψ(Xj))²).

For instance, we evaluate E(X1|X2) by minimizing

J(ψ) = (2/5)(4 - ψ(2))² + (2/5)(1 - ψ(1))² + (1/5)(0 - ψ(0))²,

so that ψ(2) = 4, ψ(1) = 1, ψ(0) = 0: we may take ψ(X2) = X2². In addition,

P(X1 = i|X2 = j) = P(X = (i, j))/P(X2 = j).

Then

P(X1 = 4|X2 = 2) = P(X1 = 1|X2 = 1) = P(X1 = 0|X2 = 0) = 1.

The other probabilities are null. To evaluate E(X2|X1), we minimize

J(ψ) = (2/5)(2 - ψ(4))² + (2/5)(1 - ψ(1))² + (1/5)(0 - ψ(0))²,

so that ψ(4) = 2, ψ(1) = 1, ψ(0) = 0: we may take ψ(X1) = √X1. Here

P(X2 = 2|X1 = 4) = P(X2 = 1|X1 = 1) = P(X2 = 0|X1 = 0) = 1

and the other probabilities are null. Again, we have

‖X1 - E(X1|X2)‖ = ‖X2 - E(X2|X1)‖ = 0,
η(X1|X2) = η(X2|X1) = 1 ⟹ R(X1, X2) = 1.

Let us introduce the independence between variables: Definition 2.23 Let X = (X1, X2) be a couple of random variables defined on a probability space (Ω, P). We say that X = (X1, X2) is a couple of independent variables if and only if its CDF is the product of the marginal CDFs of the components: F(x1, x2) = F1(x1)F2(x2). In such a case, we say that X1 and X2 are independent.

Proposition 2.25 Let X = (X1, X2) be a couple of random variables defined on a probability space (Ω, P). The following assertions are equivalent:
1. X1 and X2 are independent;
2. The density of the pair is the product of the marginal densities of the components: f(x1, x2) = f1(x1)f2(x2);
3. The copula associated to X = (X1, X2) satisfies, for 0 < c1, c2 < 1: C(c1, c2) = P(F1 ≤ c1)P(F2 ≤ c2) = c1c2;
4. The characteristic function of the pair is the product of the characteristic functions of the components: ϕ(t1, t2) = E(e^{it1X1 + it2X2}) = ϕ1(t1)ϕ2(t2), ϕi(s) = E(e^{isXi}).

Proof
1. It is immediate that assertions 1 and 2 are equivalent.
2. We have

P(Fi ≤ ci) = P(Xi ≤ Fi⁻¹(ci)) = Fi(Fi⁻¹(ci)) = ci.

If the variables are independent, then

P(F1 ≤ c1, F2 ≤ c2) = P(X1 ≤ F1⁻¹(c1)) P(X2 ≤ F2⁻¹(c2)) = c1c2.

Conversely, assume that C(c1, c2) = c1c2. Then,

P(X1 ≤ F1⁻¹(c1), X2 ≤ F2⁻¹(c2)) = P(X1 ≤ F1⁻¹(c1)) P(X2 ≤ F2⁻¹(c2)).

Taking c1 = F1(x1), c2 = F2(x2), we have

P(X1 ≤ x1, X2 ≤ x2) = P(X1 ≤ x1)P(X2 ≤ x2),

so that the variables are independent. Thus, the independence of (X1, X2) is equivalent to C(c1, c2) = c1c2.
3. If X1 and X2 are independent, then E(e^{it1X1 + it2X2}) = E(e^{it1X1})E(e^{it2X2}), so that ϕ(t1, t2) = ϕ1(t1)ϕ2(t2). Conversely, if the characteristic function of the pair is the product of the characteristic functions of the components, then an inverse Fourier transform shows that f(x1, x2) = f1(x1)f2(x2), so that the variables are independent.

Proposition 2.26 If X1 and X2 are independent, then cov(X1, X2) = ρ(X1, X2) = 0, E(X1|X2) = E(X1), E(X2|X1) = E(X2).


Proof Let f be the density of the couple and fi be the marginal density of Xi. We have f(x1, x2) = f1(x1)f2(x2), so that E(X1X2) = E(X1)E(X2). Thus, cov(X1, X2) = 0. Consequently, we also have ρ(X1, X2) = 0. In addition, f(x1|x2) = f1(x1) and f(x2|x1) = f2(x2), so that E(X1|X2) = E(X1) and E(X2|X1) = E(X2).

Remark 2.12 The converse of proposition 2.26 is generally false, except for gaussian vectors.

Example 2.14 Let us consider a couple X = (X1, X2) having as PDF

f(x1, x2) = 1, if 0 < x1, x2 < 1; 0, otherwise.

The marginal densities verify

fi(xi) = 1, if 0 < xi < 1; 0, otherwise.

Thus, f(x1, x2) = f1(x1)f2(x2) and the variables are independent. Notice that, for 0 < xi < 1: Fi(xi) = xi and F(x1, x2) = x1x2 = F1(x1)F2(x2). Analogously, P(F1 < c1, F2 < c2) = P(X1 < c1, X2 < c2) = c1c2. Notice that f(x1|x2) = f(x2|x1) = 1, for 0 < x1, x2 < 1. Thus, E(X1|X2) = E(X2|X1) = 1/2 = E(X1) = E(X2).

Example 2.15 Let us consider a couple X = (X1, X2) having as distribution

P(X = (0, 0)) = P(X = (0, 1)) = P(X = (1, 0)) = P(X = (1, 1)) = 1/4.

The marginal distributions are

P(Xi = 0) = P(Xi = 1) = 1/2.

Thus,

P(X = (i, j)) = P(X1 = i, X2 = j) = P(X1 = i)P(X2 = j)

and the variables are independent. Notice that P(X1 = i|X2 = j) = P(X1 = i) and P(X2 = j|X1 = i) = P(X2 = j), so that E(X1|X2) = E(X2|X1) = 1/2 = E(X1) = E(X2).

Exercises
1. Let Ω = (0, 1), with P((a, b)) = b - a (μ(ω) = 1) and X = (X1, X2), X1(ω) = ω², X2(ω) = ω⁴.
   a) Find the CDF F of X.
   b) Determine the PDF f of X.
   c) Determine the marginal distributions and densities.
   d) Are the variables independent?
   e) Find the distributions of E(X2|X1) and E(X1|X2).
2. Let X = (X1, X2) be a pair of random variables having as density
   f(x1, x2) = x1 + x2, if 0 < x1, x2 < 1; 0, otherwise.
   a) Determine the marginal distributions and densities.
   b) Are the variables independent?
   c) Find the distributions of E(X2|X1) and E(X1|X2).
3. Let X = (X1, X2) be a pair of random variables having as density f(x1, x2) =

2.8 Discrete and Continuous Random Variables

2.8.1 Discrete Variables

For a discrete variable X taking the values {Xi} with probabilities pi = P(X = Xi), the CDF is a step function equal to 1 for x > sup{Xi}. In addition,

E(ϕ(X)) = Σi pi ϕ(Xi).   (2.31)

Thus,

Mk(X) = Σi pi Xi^k, E(X) = Σi pi Xi, E(X²) = Σi pi Xi², φ(t) = Σi pi e^{itXi}.


Examples of discrete variables are:
• The uniform distribution on {1, . . ., n}: P(X = i) = 1/n, for 1 ≤ i ≤ n. We have

E(X) = (n + 1)/2, V(X) = (n² - 1)/12, σ(X) = √((n² - 1)/12).

You can generate a sample of k variates from the uniform distribution on {1, . . ., n} using the instruction sample(1:n, k, replace=TRUE). Taking replace=FALSE generates a sample without repetition – it can be used to generate random permutations. For instance:
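A minimal sketch of such calls is given below; the choice n = 6, k = 10 and the seed are illustrative assumptions, not taken from the book:

set.seed(1)                       # for reproducibility (illustrative)
sample(1:6, 10, replace = TRUE)   # 10 variates from the uniform distribution on {1, ..., 6}
sample(1:6, 6, replace = FALSE)   # a random permutation of 1:6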

• Bernoulli B(p): describes success (value 1) or failure (value 0) in a trial with probability of success p. X(Ω) = {0, 1}, P(X = 1) = p, P(X = 0) = q = 1 - p. We have

E(X) = p, V(X) = pq, σ(X) = √(pq).

• Binomial B(n, p): gives the number of successful trials among n when the probability of success is p. It may be interpreted as the law of a sum of n independent Bernoulli laws. X(Ω) = {0, 1, . . ., n}, P(X = k) = C(n, k) p^k q^{n-k}, q = 1 - p. We have

E(X) = np, V(X) = npq, σ(X) = √(npq).

Notice that B(1, p) = B(p). R offers built-in functions to deal with the Bernoulli and binomial distributions, namely dbinom, pbinom, qbinom, and rbinom. For instance, dbinom(k,n,p) furnishes the value of P(X = k) for B(n, p) and pbinom(k,n,p) returns the value of P(X ≤ k) for B(n, p) – setting the flag lower.tail to FALSE returns P(X > k). qbinom(a,n,p) returns the smallest value of k such that P(X ≤ k) ≥ a for B(n, p) – with the flag lower.tail=FALSE, it returns the smallest value of k such that P(X > k) ≤ a. rbinom(k,n,p) generates a sample of k variates from B(n, p). For instance, let us consider n = 5, p = 2/10, q = 8/10. We have:
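A minimal sketch of the corresponding calls is given below (the commented values are what base R returns for these arguments):

n <- 5; p <- 0.2                                  # q = 0.8
dbinom(2, size = n, prob = p)                     # P(X = 2)  = 0.2048
pbinom(2, size = n, prob = p)                     # P(X <= 2) = 0.94208
pbinom(2, size = n, prob = p, lower.tail = FALSE) # P(X > 2)  = 0.05792
qbinom(0.5, size = n, prob = p)                   # smallest k with P(X <= k) >= 0.5, here 1
rbinom(10, size = n, prob = p)                    # 10 variates from B(5, 0.2)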

As observed above, Bernoulli distributions correspond to B(1, p). As an alternative, you can use the package Rlab, which proposes the analogous instructions dbern, pbern, qbern, and rbern.

• Negative binomial NB(n, p): gives the number of failures necessary to get n successes when the probability of success is p. X(Ω) = {0, 1, . . .}, P(X = k) = C(n + k - 1, n - 1) p^n q^k, q = 1 - p. We have

E(X) = nq/p, V(X) = nq/p², σ(X) = √(nq)/p.

Under R, the instructions dnbinom, pnbinom, qnbinom, and rnbinom deal with the negative binomial distribution. Analogously to the binomial, dnbinom(k,n,p) corresponds to P(X = k) for NB(n, p) and pnbinom(k,n,p) to P(X ≤ k) for NB(n, p). qnbinom(a,n,p) returns the smallest value of k such that P(X ≤ k) ≥ a for NB(n, p). The flag lower.tail has the same function as in the preceding. rnbinom(k,n,p) generates a sample of k variates from NB(n, p). For instance, let us consider n = 5, p = 2/10, q = 8/10. We have:
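A minimal sketch, under the same illustrative choice n = 5, p = 0.2:

dnbinom(3, size = 5, prob = 0.2)    # P(X = 3): 3 failures before the 5th success
pnbinom(3, size = 5, prob = 0.2)    # P(X <= 3)
qnbinom(0.5, size = 5, prob = 0.2)  # smallest k with P(X <= k) >= 0.5
rnbinom(10, size = 5, prob = 0.2)   # 10 variates from NB(5, 0.2)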

• Poisson P(λ): X(Ω) = ℕ = {0, 1, . . ., n, . . .}, P(X = k) = e^{-λ} λ^k / k!. We have

E(X) = λ, V(X) = λ, σ(X) = √λ.

A Poisson law approximates the sum of a large number of Bernoulli laws having a small parameter p (λ = np). The R built-in functions to deal with Poisson variables are dpois and ppois, which return the values of P(X = k) and P(X ≤ k), respectively. qpois returns the smallest value of k such that P(X ≤ k) ≥ a and rpois generates a sample from the Poisson distribution. The flag lower.tail produces the same effects as in the preceding. For instance:
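A minimal sketch, with λ = 3 as an illustrative value:

dpois(2, lambda = 3)                     # P(X = 2)
ppois(2, lambda = 3)                     # P(X <= 2)
ppois(2, lambda = 3, lower.tail = FALSE) # P(X > 2)
qpois(0.95, lambda = 3)                  # smallest k with P(X <= k) >= 0.95
rpois(10, lambda = 3)                    # 10 variates from P(3)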

• Multinomial distribution M(n, p): it is a generalization of the binomial distribution to the situation where we consider a population subdivided into k distinct subpopulations of probabilities p1, . . ., pk, respectively – we have p1 + . . . + pk = 1. A trial takes simultaneously n elements from the population and we count the number of elements Xi of each subpopulation i. For n1 + . . . + nk = n, we have

P(X1 = n1, . . ., Xk = nk) = n!/(n1! . . . nk!) p1^{n1} p2^{n2} . . . pk^{nk}.

Notice that each Xi is a binomial variable B(n, pi). The variables X1, . . ., Xk are not independent: cov(Xi, Xj) = -npipj, for i ≠ j; cov(Xi, Xi) = npiqi. The function dmultinom(c(n1, . . ., nk), prob = c(p1, . . ., pk)) furnishes P(X1 = n1, . . ., Xk = nk). rmultinom generates variates from the multinomial distribution. For instance:
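A minimal sketch with an illustrative probability vector p = (0.2, 0.3, 0.5) and n = 6:

p <- c(0.2, 0.3, 0.5)
dmultinom(c(1, 2, 3), prob = p)   # P(X1 = 1, X2 = 2, X3 = 3), with n = 1 + 2 + 3 = 6
rmultinom(4, size = 6, prob = p)  # 4 variates (one per column) from M(6, p)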

2.8.2 Continuous Variables Having a PDF

Let X be a continuous random variable having f as PDF. Then,

E(ϕ(X)) = ∫_{-∞}^{+∞} ϕ(x) f(x) dx.   (2.32)

Thus,

Mk(X) = ∫_{-∞}^{+∞} x^k f(x) dx, E(X) = ∫_{-∞}^{+∞} x f(x) dx,   (2.33)

E(X²) = ∫_{-∞}^{+∞} x² f(x) dx, φ(t) = ∫_{-∞}^{+∞} e^{itx} f(x) dx.   (2.34)

Examples of continuous variables are as follows:

• Uniform distribution on (a, b):

f(x) = 1/(b - a), if x ∈ (a, b); 0, if x ∉ (a, b).

We have

E(X) = (a + b)/2, V(X) = (b - a)²/12, σ(X) = (b - a)/√12.

The functions for the uniform distribution are dunif, punif, qunif, and runif, illustrated below:
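A minimal sketch, for the illustrative choice (a, b) = (0, 2):

dunif(0.5, min = 0, max = 2)   # PDF at 0.5, equal to 1/(b - a) = 0.5
punif(0.5, min = 0, max = 2)   # P(X <= 0.5) = 0.25
qunif(0.9, min = 0, max = 2)   # 90% quantile, equal to 1.8
runif(10, min = 0, max = 2)    # 10 variates from the uniform distribution on (0, 2)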

• Standard Gaussian N(0, 1):

f(x) = (1/√(2π)) e^{-x²/2}.

We have

E(X) = 0, V(X) = 1, σ(X) = 1.

• Normal N(m, σ):

f(x) = (1/(σ√(2π))) e^{-½((x - m)/σ)²}.

We have

E(X) = m, V(X) = σ², σ(X) = σ.


The built-in functions for normal variables are dnorm, pnorm, qnorm, and rnorm:
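A minimal sketch, for the illustrative choice m = 2, σ = 1:

dnorm(2, mean = 2, sd = 1)     # PDF at x = 2
pnorm(3, mean = 2, sd = 1)     # P(X <= 3)
qnorm(0.975, mean = 2, sd = 1) # 97.5% quantile
rnorm(10, mean = 2, sd = 1)    # 10 variates from N(2, 1)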

• Chi-squared (Pearson's distribution) χ²: the distribution of Z², where Z is standard Gaussian.

f(x) = (1/√(2πx)) e^{-x/2}.

We have E(X) = 1, V(X) = 2.

• Chi-squared with n degrees of freedom χ²(n): the distribution of Z1² + . . . + Zn², where Z1, . . ., Zn are independent standard Gaussian variables.

f(x) = (1/(2^{n/2} Γ(n/2))) e^{-x/2} x^{n/2 - 1}.

We have E(X) = n, V(X) = 2n. The functions for the chi-squared distribution are dchisq, pchisq, qchisq, and rchisq:


• Student–Fisher with n degrees of freedom SF(n): the distribution of Y/√(Q/n), where Y, Q are independent, Y is standard Gaussian, Q is χ²(n).

f(x) = Γ((n + 1)/2)/(√(nπ) Γ(n/2)) (1 + x²/n)^{-(n + 1)/2}.

We have E(X) = 0, V(X) = n/(n - 2) (for n > 2). The built-in functions for Student–Fisher variables are dt, pt, qt, and rt:

• Behrens–Fisher–Snedecor (BFS, or simply Fisher) with (n1, n2) degrees of freedom: the distribution of (Q1/n1)/(Q2/n2), where Qi is χ²(ni), Q1 and Q2 independent. We have (x > 0)

f(x) = Γ((n1 + n2)/2)/(Γ(n1/2)Γ(n2/2)) (n1/n2)^{n1/2} x^{n1/2 - 1} (1 + (n1/n2)x)^{-(n1 + n2)/2}.

The built-in functions for BFS variables are df, pf, qf, and rf:

• Log-normal: the distribution of X such that log(X) is Normal N(m, σ). We have

f(x) = (1/(σx√(2π))) e^{-½((log(x) - m)/σ)²}.

The built-in functions for log-normal variables are dlnorm, plnorm, qlnorm, and rlnorm :

Exercises
1. Let X1 and X2 be two independent variables having Bernoulli distributions of the same parameter p > 0. Determine the range and the probabilities of Z1 = X1 - X2, Z2 = X1X2.
2. Let X be a continuous random variable uniformly distributed on (-1, 1). Use R to evaluate
   a) P(-1/2 < X < 1/3);
   b) P(X > 1/3);
   c) M3(X).
3. An urn contains 10 red balls and 30 blue balls. 5 balls are drawn sequentially, with replacement (the ball is put back in the urn after drawing). Use R to determine the probabilities of getting
   a) exactly three red balls;
   b) at least two red balls;
   c) less than three red balls.


4. An urn contains 10 red balls and 30 blue balls. A ball is drawn. If it is blue, it is put back in the urn and we make another draw. The game stops when a red ball is drawn. Let n be the number of draws before stopping. Use R to determine the probabilities of
   a) n = 5;
   b) n ≥ 5;
   c) n ≤ 10.
5. Let X be a random variable which is normally distributed N(1, 2). Use R to find the probabilities
   a) P(-1 < X < 1);
   b) P(X < 3);
   c) P(X > 1).
6. Let X be a random variable which is normally distributed N(0, 2) and Y = e^{-X+1}. Use R to find the probabilities
   a) P(1 < Y < 2);
   b) P(Y < 3);
   c) P(Y > 2).
7. Let X be a random variable which is normally distributed N(0, 2) and Y = X². Use R to find the probabilities
   a) P(1 < Y < 2);
   b) P(Y < 3);
   c) P(Y > 2).
8. Let X1, X2 be two independent random variables normally distributed N(0, 2) and Y = X1/√(X2²). Find the probabilities
   a) P(1 < Y < 2);
   b) P(Y < 3);
   c) P(Y > 2).
9. Let X1, X2 be two independent random variables normally distributed N(0, 2) and Y = X1²/X2². Use R to find the probabilities
   a) P(1 < Y < 2);
   b) P(Y < 3);
   c) P(Y > 2).


Supplementary Exercises
1. The statistics of car crashes show the following connection with the local maximum speed regulation (source NHTSA, data for 2017):

Max speed (mph):        30   40   50   55   ≥60   No limit
Accidents (thousands):  418  311  248  327  233   60

Given a crash, what is the probability that the max speed was 40 mph? Inferior or equal to 40? At least 55?
2. A candidate for a game must choose between 10 boxes containing prizes whose value is respectively 0, 10, 20, 25, 50, 100, 200, 500, 1000, 10,000 dollars. What is the probability of him making more than a thousand dollars? Less than a hundred dollars? What is the mean value of the winnings?
3. A sphinx waits for travelers to pass by and asks them a question, which they answer with a probability of hitting the right answer equal to p > 0. In case of error, the traveler is devoured by the sphinx. The number of travelers passing each week near the sphinx follows a Poisson law of parameter λ > 0. What is the probability that the sphinx will not eat anyone in a week?
4. A model is intended to predict an event. When the prediction is positive, the success rate is 85%. When the answer is negative, the success rate is 60%. The global probability of the event is 10%.
   a) The model furnishes a positive answer. What is the probability that the event arises in reality?
   b) The model furnishes a negative answer. What is the probability that the event does not occur in reality?
5. You have in your pocket two dice. One of the dice is regular, numbered from 1 to 6, but the other one is unfair: the face numbered 1 was replaced by a second face numbered 6. On both dice, all the faces are equiprobable. You take a die at random, equiprobably, and roll it. The face up is 6.
   a) What is the probability of this result?
   b) What is the probability that the die is not the unfair one?
6. Two models M1 and M2 are intended to predict an event. When the prediction is positive, the success rate is 85% for M1 and 70% for M2. When the answer is negative, the success rate is 60% for M1 and 80% for M2. The global probability of the event is 10%.


   a) Both the models furnish a positive answer. What is the probability that the event arises in reality?
   b) Both the models furnish a negative answer. What is the probability that the event does not occur in reality?
   c) M1 furnishes a positive answer and M2 furnishes a negative answer. What is the probability that the event arises in reality?
   d) M1 furnishes a negative answer and M2 furnishes a positive answer. What is the probability that the event does not occur in reality?
7. Let X = (X1, . . ., Xn)^t be a vector of independent variables. Let A be an m × n matrix of real numbers and Y = AX.
   a) Verify that Y^tY = X^tA^tAX.
   b) Verify that E(Y) = AE(X).
   c) Verify that C(Y) = E(X^tA^tAX) - E(X)^tA^tAE(X).
8. Let X = (X1, . . ., Xn) be a vector of independent standard gaussian variables. Let A be an m × n matrix of real numbers and Y = AX. Show that E(Y) = 0.

2.9 Sequences of Random Variables

One of the central points in uncertainty quantification is the approximation of random variables: we look for representations of the observed variability of a system as a function of the known or assumed variability of certain parameters and inputs. Such representations are generally approximations based on the construction of a sequence {Xn: n ≥ 0} of random variables that converges to the exact variable X: Xn ⟶ X. In probability, it is usual to manipulate different definitions of convergence:

1. Convergence in the quadratic mean: Xn ⟶ X in quadratic mean if and only if E((Xn - X)²) ⟶ 0, for n ⟶ +∞;
2. Almost sure convergence: the event E = {Xn ⟶ X} is almost sure, id est, P(E) = 1;
3. Convergence in probability: for any ε > 0, the event En(ε) = {‖Xn - X‖ ≥ ε} verifies P(En(ε)) ⟶ 0, for n ⟶ +∞;
4. Convergence in distribution: let Fn be the CDF of Xn and F be the CDF of X. Xn ⟶ X in distribution if and only if Fn(x) ⟶ F(x) at any point x where F is continuous.

One of the fundamental results concerning the convergence of sequences of random variables is Lévy's theorem (Lévy, 1922):

Theorem 2.4 (Lévy) Let {Xn: n ∈ ℕ} be a sequence of random variables such that the characteristic function of Xn is φn. Let X be a random variable having as characteristic function φ. Then Xn ⟶ X in distribution if and only if φn(t) ⟶ φ(t) almost everywhere.

Lévy's theorem furnishes a practical method for the approximation of a random variable X by approximating its characteristic function φ; as previously observed, φ(t) = E(e^{itX}) expands as

φ(t) = Σ_{k=0}^{+∞} (it)^k/k! Mk(X) = 1 + it M1(X) + (it)²/2! M2(X) + (it)³/3! M3(X) + . . .

We may consider Xn as the variable having as characteristic function

φn(t) = Σ_{k=0}^{n} (it)^k/k! Mk(X) ≈ E(e^{itXn}).


Then, from Lévy's theorem, Xn ⟶ X in distribution. In practice, we look for a variable verifying the equations:

Mk(Xn) = Mk(X), for 1 ≤ k ≤ n.   (2.35)

A second theorem concerns gaussian variables:

Theorem 2.5 Let {Xn: n ∈ ℕ} be a sequence of normally distributed random variables. If Xn ⟶ X in distribution, then X is normally distributed. In addition, Xn ⟶ X in distribution if and only if E(Xn) ⟶ E(X), V(Xn) ⟶ V(X).

Example 2.16 Let us consider a gaussian variable W ~ N(0, 1). Since

M1(W) = 0, M2(W) = 1, M3(W) = 0,

we may consider a variable W3 such that

Mk(W3) = Mk(W), 1 ≤ k ≤ 3.

For instance, a discrete variable such that

P(W3 = -1) = P(W3 = 1) = 1/2.

Such an approximation is useful to generate random walks, such as approximated Brownian motions (see Sect. 4.8.3): for instance, we may consider X0 = (X3)0 = 0, Xn+1 = Xn + Wn ≈ (X3)n+1 = (X3)n + (W3)n. Assume that we are interested in the statistics of X100: we generate 1e5 variates from both the exact and the approximated variables and we analyze the statistics of the samples:
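A minimal sketch of this experiment is given below (the seed and the vectorized layout are illustrative assumptions):

set.seed(1)
ns <- 1e5                                                              # number of variates of X_100
W  <- matrix(rnorm(100 * ns), nrow = 100)                              # exact gaussian increments
W3 <- matrix(sample(c(-1, 1), 100 * ns, replace = TRUE), nrow = 100)   # increments matching the first 3 moments
X   <- colSums(W)                                                      # 1e5 variates of the exact X_100
X3  <- colSums(W3)                                                     # 1e5 variates of the approximated X_100
c(mean(X), var(X)); c(mean(X3), var(X3))                               # both pairs are close to (0, 100)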


As we can observe, the results are close, namely for the mean and the variance: the approximation furnishes good estimates of the statistics. Repeating the experiment with 1e6 variates leads to the same conclusion.

Exercises
1. Let W be a gaussian variable W ~ N(0, 1). Find a discrete variable W5 such that

Mk(W5) = Mk(W), 1 ≤ k ≤ 5.


Analogously to Example 2.16, consider X0 = (X5)0 = 0, Xn+1 = Xn + Wn ≈ (X5)n+1 = (X5)n + (W5)n. Generate a sample of X100 and of the approximated (X5)100 and compare their statistics.
TIP: M4(W) = 3, M5(W) = 0. Look for a discrete variable taking the values -√a, -1, 0, 1, √a.
2. Let W be a gaussian variable W ~ N(0, 1). Find a discrete variable W7 such that Mk(W7) = Mk(W), 1 ≤ k ≤ 7. Analogously to Example 2.16, consider X0 = (X7)0 = 0, Xn+1 = Xn + Wn ≈ (X7)n+1 = (X7)n + (W7)n. Generate a sample of X100 and of the approximated (X7)100 and compare their statistics.
TIP: M6(W) = 15, M7(W) = 0. Look for a discrete variable taking the values -√b, -√a, -1, 0, 1, √a, √b.
3. Consider Ω = (0, 1) and the mass density μ(ω) = 1. Let Xn, X: Ω → ℝ be random variables such that Xn(ω) = ω^n and X(ω) = 0.
   a) Determine the CDF Fn of Xn.
   b) Find the CDF F of X.
   c) Show that Xn ⟶ X in distribution.
   d) Show that Xn ⟶ X in probability.
   e) Show that Xn ⟶ X almost surely.
   f) Show that Xn ⟶ X in the quadratic mean.

4. Let Sn = Xnn , where {Xn: n 2 ℕ } is a sequence of independent discrete random variables such that (0 < p < 1): PðX n = kÞ =

p p k , k ≥ 0: 1n n

a) Let t 2 ℝ and n(t) 2 ℕ verify n(t) ≤ nt < n(t) + 1. Verify that. p nðtÞþ1 PðX n ≤ nt Þ = PðX n ≤ nðt ÞÞ = 1 - 1 n b) Verify that.

1-

p n

ntþ2

p nðtÞþ1 p ntþ1 ≤ 1≤ 1n n

c) Conclude that P(Sn ≤ t) ⟶ 1 - e-pt. d) Show that Sn converges in distribution to an exponential law.

198

2.10

2

Probabilities and Random Variables

Samples

A sample from X is a set of independent observations of X, id est, a set X = fX 1 , . . . , X n g, where each Xi has the same distribution as X and is independent from Xj, 8 j ≠ i. The empirical mean of the sample is Xn =

n 1X X: n i=1 i

ð2:36Þ

A median of the sample is a middle value that cuts the sample in two equal parts. If the sample is ordered in an increasing order, the median is X n2 for n pair; while for n odd, the median is the arithmetic mean of X n -2 1 , X nþ1 : 2 Population’s variance of the sample is Vp =

n n 2 1 X  2 1 X Xi - Xn = X 2i - X n : n i=1 n i=1

ð2:37Þ

n  2 D2n 2 X Xi - Xn : , Dn = n i=1

ð2:38Þ

We have Vp =

pffiffiffiffiffiffi Population’s standard deviation is sp = V p . This Pearson’s approach considers the values X as being a population of equiprobable individuals and apply the formulae. Since the elements of the sample are variates from X, all these quantities are random variables – thus, we may evaluate their means and variances. For instance,     1   σ ðX Þ E X n = E ðX Þ, V X n = V ðX Þ, σ X n = pffiffiffi : n n

ð2:39Þ

For normal variables, E(Vp) ≠ V(X), so that it may be preferable to use the sample’s variance and the associated sample’s standard deviation: Vn =

pffiffiffiffiffiffi D2n , sn = V n : n-1

Observe that, from Chebyshev’s inequality (Proposition 2.8): V ðX Þ PðjXn - E ðX Þj ≥ εÞ ≤ , nε2

ð2:40Þ

2.10

Samples

199

so that Xn → EðX Þ in probability – this result is the weak law of large numbers. The reader may find in the literature extensions of this basic result, such as the strong law of large numbers : for X regular enough, X n → EðX Þ almost surely. In addition, the values of these quantities change with the sample: a confidence interval must be associated to the values of the mean and the variance of the sample, to take into account the error margins. To define a confidence interval to the estimation b ξ of quantity ξ, we must choose a confidence level 1 - α (α is the risk, id est, the probability of rejection of the real value) and determine an interval ξ 2 ðξ min , ξ max Þ = 1 - α. Confidence intervals are often based (ξmin, ξmax)that P b on the Central Limit Theorem Theorem 2.6 (Central Limit) Let {Xn: n 2 ℕ } be a sequence of random variables having the same distribution of finite mean m and variance σ 2. Let Zn =

Xn - m : pσffiffi n

Then Zn converges in distribution to a standard Gaussian N(0, 1). b = X n as estimation of the mean: by determining from N(0, 1) a Indeed, we use m number zα such that P(|Z| ≤ zα) = 1 - α, we evaluate an error margin Δα = zpα σffiffin and   we generate a confidence interval X n - Δα , Xn þ Δα . When σ is unknown, we may use the Cochran’s theorem. Theorem 2.7 (Cochran) Let (X1, . . ., Xn) be a vector of independent random variables having the same distribution N(m, σ). Let Q2n =

D2n X - m , T n = npsnffiffi : 2 σ n

Then Q2n is chi-squared χ 2(n - 1) and Tn is student-fisher SF(n - 1). Using this result, we may determine from SF(n - 1) a number tα such that P(jTnj ≤ tα) = 1 - α and evaluate an error margin Δα = tpα sffiffinn , what generates a   confidence interval X n - Δα , Xn þ Δα . Cochran’s theorem furnishes also a confidence interval for the variance: we determine from the distribution χ 2(n - 1) two     D2 D2 numbers A1, A2 such that P Q2n ≤ A1 = P Q2n ≥ A2 = α2 . Then, A2n , A1n is a confidence interval for σb2 .

200

2

Table 2.6 Non-rejection intervals for tests on the mean

H0 m = m0 m < m0 m > m0

Probabilities and Random Variables

Non-rejection interval Xn 2 ðm0 - Δα , m0 þ Δα Þ Xn 2 ð - 1, m0 þ Δα Þ Xn 2 ðm0 - Δα , þ1Þ

tα P(|Tn| ≤ tα) = 1 - α P(Tn ≤ tα) = 1 - α P(Tn ≤ tα) = α

Table 2.7 Non-rejection intervals for tests on the variance σ 2 H0 σ

2

= σ 20

σ 2 < σ 20

σ 2 > σ 20

Non-rejection interval   D2n 2 A1 σ 20 , A2 σ 20   A1 σ 20 A2 σ 20 s2n 2 , n-1 n-1   D2n 2 0, A2 σ 20   A2 σ 20 sn2 2 0, n-1   D2n 2 A1 σ 20 , þ1   2 A1 σ 0 s2n 2 , þ1 n-1

A1, A2  α   α  P χ 2n - 1 ≤ A1 = ,P χ 2n - 1 ≥ A2 = 2 2   P χ 2n - 1 ≥ A2 = α   P χ 2n - 1 ≤ A1 = α

The data may also be used to test hypothesis made about the mean, such as H0: m = m0, H0: m > m0, or H0: m < m0. A test verifies if the data are incompatible with the hypothesis at a risk level α. If there is an incompatibility, the hypothesis must be rejected. Otherwise, the data cannot reject the hypothesis: the user may consider that the hypothesis remains tentatively valid. Tests are based on non-rejection intervals, analogous to confidence intervals: for instance, we may use the non-rejection regions given in Table 2.6. Analogously, hypothesis on the variance may be tested, such as H0: σ = σ 0, H0: σ > σ 0, or H0: σ < σ 0. Examples of non-rejection regions are given in Table 2.7. In practice, you may also be interested in the comparison of statistics of two samples from variables X and Y. For instance, two samples X = fX 1 , . . . , X nX g and Y = fY 1 , . . . , Y nY g are available and these data are intended to be used to verify the relations between the means mX and mY of X and Y, or between their standard deviations σ X and σ Y: we may test the hypothesis H0: mX = mY, H0: mX < mY, H0: mX > mY, H 0 : σ 2X = λσ 2Y , H 0 : σ 2X < λσ 2Y , H 0 : σ 2X > λσ 2Y . These tests are based in the same principles as the preceding ones: a region of non-rejection is defined for a risk α and the hypothesis is rejected if the observed values are not in the region. Otherwise, the hypothesis is considered as compatible with the data. The tests for the comparison of the means are reduced to those for a single variable Z = XnX - YnY , which is generally assumed to be normally distributed N ðmZ , σ Z Þ. We have mZ = mX - mY, but a supplementary assumption is needed for the normality of the variable Z – for instance, one among the following:

2.10

Samples

201

A large amount of data in each sample: sizes nX, nY of the samples large enough to use gaussian approximations for each empirical mean. Let sX, sY be the in this case is qffiffiffiffiffiffiffiffiffiffiffiffiffi

approximately σ Z ≈ sZ =

s2X nX

s2

þ nYY ;

• Coupled observations: the data are formed by couples (Xi, Yi): in this situation, nX = nY and we have a sample Zi = Xi - Yi from Z = X - Y, which has as mean pffiffiffiffiffi mZ = mX - mY and tests are performed on mZ using σ Z ≈ sZ = nX , where sZ is evaluated by the formula for the sample’s standard deviation; • The variance is the same for both the samples: σ X = σ Y = σ. In this case, qffiffiffiffiffiffiffiffiffiffiffiffiffi 2 ðn - 1Þs2X þðnY - 1Þs2Y σ Z = σ n1X þ n1Y and Dσ2 = X is approximately χ 2(nX + nY - 2). σ2 Then, the tests are performed using these variables. The first approach is often used even for small datasets, but such a use cannot be rigorously justified in such a situation. Under one of these assumptions, mX = mY þ a ⟺ mZ = a, mX < mY þ a ⟺ mZ < a, mX > mY þ a ⟺ mZ > a: Thus, the comparison reduces to a test on the value of Z . Examples of non-rejection intervals for the first assumption are given in Table 2.8. For the comparison of variances, we observe that F=

σ 2Y s2X s2 =σ 2 = X2 X2  BFSðnX - 1, nY - 1Þ: 2 2 σ X sY sY =σ Y

The tests are based on this parameter. Examples of non-rejection regions are given in Table 2.9. Table 2.8 Non-rejection intervals for comparisons of means, assuming normality of Z = X nX - Y nY : ℤ is a standard gaussian variable N(0, 1) H0 mX = mY + a

Non-rejection interval Z 2 ða - zα σ Z , a þ α σ Z Þ

mX < mY + a mX > mY + a

Z 2 ð - 1, a þ α σ Z Þ Z 2 ða þ α σ Z , þ1Þ

α

α 2 Pðℤ ≤ α Þ = 1 - α Pðℤ ≤ α Þ = α Pðℤ ≤ α Þ = 1 -

Table 2.9 Non-rejection intervals for comparisons of variances H0

Non-rejection interval

F1, F2

σ 2X = λσ 2Y

s2X 2 ðλF 1 , λF 2 Þ s2Y s2X 2 ð0, λF 2 Þ s2Y s2X 2 ðλF 1 , þ1Þ s2Y

PðF ≤ F 1 Þ =

σ 2X < λσ 2Y σ 2X > λσ 2Y

α α ,PðF ≥ F 2 Þ = 2 2

P(F ≥ F2) = α P(F ≤ F1) = α


R has built-in functions to evaluate means, variances, standard deviations, and medians and to perform tests: mean evaluates the mean, var determines the variance, sd calculates the standard deviation, median determines the median, t.test compares the mean of the data to a given value or the means of two samples, and var.test compares the variances of two samples. To compare the variance of a sample to a given value, you must write some code or use a package such as OneTwoSamples. By default, t.test and var.test use α = 5%. A different value may be defined by setting the parameter conf.level to the value of 1 - α.

Example 2.17 Let us generate a sample of 10 variates from N(2, 1):

The data is

Let us generate a confidence interval with α = 5% for the mean:

A confidence interval with α = 5% for the variance is determined as follows:
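A minimal sketch of these computations (the seed is an illustrative assumption, so the numbers differ from sample to sample):

set.seed(123)
x <- rnorm(10, mean = 2, sd = 1)          # sample of 10 variates from N(2, 1)
t.test(x)$conf.int                        # 95% confidence interval for the mean
n <- length(x)
A <- qchisq(c(0.975, 0.025), df = n - 1)  # chi-squared bounds A2, A1
(n - 1) * var(x) / A                      # 95% confidence interval for the variance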

Let us test the hypothesis H0: mX = 2.2 with α = 5%:

t.test generates a list containing many fields: conf.int contains the interval of non-rejection, and estimate contains the mean of the sample. If estimate belongs to conf.int, the hypothesis is not rejected. The default value of the confidence level is 1 - α = 0.95; it can be modified by setting conf.level = 1 - α. For instance, let us test the hypothesis H0: mX < 1.9 with α = 10%:

Let us test the hypothesis H0: mX > 2.3 with α = 20%:

Let us test the hypothesis H0: σ X = 1.2 with α = 5%:

The hypothesis H0: σX < 0.8 with α = 5% is tested as follows:

The hypothesis H0: σ X > 1.3 with α = 5% is tested as follows:

Notice that the data are consistent with many hypotheses, which may be in contradiction between themselves.

Example 2.18 Let us generate a supplementary sample of 10 variates from N(1.5, 1.5):

The data are:

Let us test the hypothesis H0: mX = mY with α = 5%:

The result is independent of the order between x and y:

Let us test the hypothesis H0: mX < mY with α = 5%:

An alternative is:

Let us test the hypothesis H0: σ X = σ Y
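A minimal sketch of these comparisons, reusing the sample x of Example 2.17 (the seed and values are illustrative assumptions):

set.seed(456)
y <- rnorm(10, mean = 1.5, sd = 1.5)                # supplementary sample from N(1.5, 1.5)
t.test(x, y)                                        # H0: mX = mY
t.test(x, y, alternative = "less")                  # one-sided comparison of the means
var.test(x, y)                                      # H0: sigmaX = sigmaY (ratio of variances = 1)
var.test(x, y, ratio = 4, alternative = "greater")  # one-sided test taking sigmaX^2 = 4 sigmaY^2 as reference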

Let us test the hypothesis H0: σX > 2σY:

Let us test the hypothesis H0: σ X < 0.8σ Y

Again, the data are consistent with various hypotheses, which may be in contradiction among themselves.

Remark 2.13 When dealing with tests, it is usual to refer to t-values and p-values. A t-value is the absolute value of the variable used in the test and a p-value is the probability of getting an absolute value higher than the t-value. A p-value gives the odds of observing the value found in the test: It can be used to estimate the reliability of the result. The tests available in R return more information than t-values, namely they return p-values. For instance, let us test H0: m = 2.2, using a sample of 10 values from N(2, 1):


As we can see, R returns the values of t, the degrees of freedom (df), the p-value, a confidence interval, and X̄n. Analogously, let us make a chi-squared test:

Again, we obtain the values of Q2n , degrees of freedom and the associated p-value

Exercises
1. In a poll, a sample of 1000 voters gives 52 percent to a candidate. Give a confidence interval for his or her score, with a 5% risk. What is the interval for a 1% risk? What is the size of the sample needed to get a confidence interval of length ±1%?
2. At the end of each day, an agency compiles statistics on the average waiting time (AWT) of customers. Over 30 days, we have the following results (in minutes):

AWT (min):       0–5  5–10  10–15  15–20  20–25  25–30
Number of days:  2    7     11     6      3      1

Use these data to estimate the mean and the variance of the mean of the AWT. Give confidence intervals for these quantities with α = 5%.
3. A quality department must check whether the impurity content of a product complies with the legislation, which requires a mean of 5 and a maximum variance of 2. Assume that the distribution of X is Normal.
   a) A sample of 20 elements is analyzed and leads to an empirical mean X̄20 = 3 and an empirical standard deviation s20 = 1.1. Determine confidence intervals with risk 5% for the mean and the variance.
   b) A new sample of 400 elements is analyzed and leads to an empirical mean X̄400 = 3.2 and an empirical standard deviation s400 = 1.3. Determine confidence intervals with risk 5% for the mean and the variance.


c) Can the quality department confirm that the variance is inferior to 2 with a risk of 5%? d) Can it confirm that the mean is inferior to 5, with a risk of 5%? 4. Two methods of measurement of a quantity furnish the results below: Method 1 44,45,45,48,48,46,46,46,46,47,47,47,47,47,47,47,49,49,49,49,50,50,51,51,52 Method 2 45,45,48,46,46,46,46,46,46,47,47,47, 47,47,47, 47,47,47,49,49,49,50,50,50

a) Determine the empirical mean and the standard variance of each method. Give confidence intervals for these quantities, with a risk of 5%. b) May we consider the means of the samples as equal, with a risk of 5%? c) May we consider that the variances of both the methods are equal, with a risk of 5%? 5. The results of the students of two groups are given below: Group 1 Group 2

11,8,4,12,7,8,6,6,3,5 8,12,9,14,10,11,3,5,9,9

a) Determine the empirical mean and the standard variance of each method. Give confidence intervals for these quantities, with a risk of 5%. b) May we consider group 2 as better, with a risk of 5%? c) May we consider that the variances of both the methods are equal, with a risk of 5%? 6. The results of the students of two groups are given below: Group 1 Group 2

13,14,19,7,9,13,11,1,12,7,17,17,11,6,16,7,11,20,13 7,17,4,12,15,9,7,12,8,14,14,9,9,9,10,8,11,17,14

a) Determine the empirical mean and the standard variance of each method. Give confidence intervals for these quantities, with a risk of 5%. b) May we consider group 2 as better, with a risk of 5%? c) May we consider that the variances of both the methods are equal, with a risk of 5%?

2.10.1 Maximum-Likelihood Estimators

Samples are often used to estimate parameters from distributions. For instance, to make a prediction about the results of a poll, we need to estimate the parameter p = ( p1, . . ., pk) of a multinomial distribution. Analogously, to determine if the proportion of non-compliant elements in a product satisfies a given condition, we must estimate the parameter p of a Bernoulli distribution.


In general, we have a model distribution f(x, θ), where θ = (θ1, . . ., θr) is a vector of unknown parameters, to be determined from the sample. For a discrete distribution, f is a probability: f(x, θ) = P(X = x). For a continuous distribution, f is the probability density.

Example 2.19 Consider a binomial distribution B(n, p): the distribution is driven by the parameter θ = p and the model is f(k, p) = C(n, k) p^k (1 - p)^{n-k}. For a Poisson distribution, θ = λ and f(k, λ) = λ^k e^{-λ}/k!. For a uniform distribution on (-a, a), the parameter is θ = a and the model is f(x, a) = 1/(2a), if |x| < a; f(x, a) = 0, otherwise. For a Normal distribution, the parameters are θ = (m, σ) and the model is f(x, θ) = exp(-½((x - m)/σ)²)/(σ√(2π)).

The objective is to find an estimator θ̂ of θ. A classical method to estimate the values of θ is the maximum of the likelihood: let 𝒳 = {X1, . . ., Xns} be a sample from X. The likelihood is defined as

L(𝒳, θ) = ∏_{i=1}^{ns} f(Xi, θ).   (2.41)

It is often useful to consider the log-likelihood:

log L(𝒳, θ) = Σ_{i=1}^{ns} log(f(Xi, θ)).   (2.42)

A popular way to generate an estimator is the use of the maximum of likelihood:

Definition 2.24 The maximum-likelihood estimator (MLE) of θ is the element θ̂ which maximizes L or, equivalently, log L.

Example 2.20 Consider a binomial distribution B ðn, pÞ : The model is f ðx, pÞ = nx px ð1 - pÞn - x , so that log ðf ðx, pÞÞ = x log p þ ðn - xÞ log ð1 - pÞ þ log

n x

:

Thus, (continued)


Example 2.20 (continued) ! ns ns ns X X X log LðX , pÞ = ð log pÞ X i þ n × ns X i log ð1 - pÞ þ log Xn : i=1

i=1

i=1

i

To determine the maximum of log LðX , pÞ, we find the solutions of ∂ log LðX , pÞ = 0, ∂p bp id est, ns ns X 1X X i - n × ns Xi b p i=1 i=1

! 1 = 0, 1-b p

so that b p=

ns 1 X X: n × ns i = 1 i

Example 2.21 Consider an uniform distribution on (-a, a), f ðx, aÞ = 1=2a, if jxj < a; f ðx, aÞ = 0, otherwise: Thus, LðX , aÞ =

1 , a ≥ max fjX i j : 1 ≤ i ≤ ng: 2 n an

The maximum of LðX , aÞ is attained at the minimal value of a. Thus, b a = max fjX i j : 1 ≤ i ≤ ng:

The estimators can be numerically determined by optimization: we can look for the minimum of the negative likelihood -L or the negative log-likelihood - log L. We can use the methods introduced in Sect. 1.13 or use a package, for instance, EstimationTools – this package proposes maximum-likelihood estimation for the distributions predefined in R.


Example 2.22 Consider a sample from a binomial distribution B(10, p): 𝒳 = {1, 1, 1, 3, 3}. As shown above, the MLE estimation for p is p̂ = 0.18. We can determine the estimation with EstimationTools as follows:

As an alternative, we can define a function nlogl evaluating - log L and use the function optimize:
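A minimal sketch of this alternative, in base R (the search interval (0, 1) is the natural range for p):

x <- c(1, 1, 1, 3, 3)                                                  # the sample of Example 2.22
nlogl <- function(p) -sum(dbinom(x, size = 10, prob = p, log = TRUE))  # negative log-likelihood
optimize(nlogl, interval = c(0, 1))$minimum                            # close to 0.18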

We can also use a method from Sect. 1.13:

Exercises 1. Let X be a discrete variable Poisson distributed: P(X = k) = λke-λ/k!. Let X = fX 1 , . . . , X ns g be a sample from X. a) b) c) d)

Determine the likelihood function. Find the log-likelihood. Determine the MLE for λ. Generate a sample of 10 variates of X for λ = 3. Use R and the sample to determine the MLE of λ. (continued)


2. Let X be a discrete variable, geometrically distributed: P(X = k) = p(1 - p)^k. Let 𝒳 = {X1, . . ., Xns} be a sample from X. a) b) c) d)

Determine the likelihood function. Find the log-likelihood. Determine the MLE for p. Generate a sample of 10 variates of X for p = 0.3. Use R and the sample to determine the MLE of p.

3. Let X be a continuous variable exponentially distributed: Its density is f(x) = λe-λx (x > 0). Let X = fX 1 , . . . , X ns g be a sample from X. a) b) c) d)

Determine the likelihood function. Find the log-likelihood. Determine the MLE for λ. Generate a sample of 10 variates of X for λ = 2. Use R and the sample to determine the MLE of λ.

4. Let X be a continuous variable normally distributed: its density is f(x) = (1/(σ√(2π))) e^{-½((x - m)/σ)²}. Let 𝒳 = {X1, . . ., Xns} be a sample from X.
   a) Determine the likelihood function.
   b) Find the log-likelihood.
   c) Determine the MLE estimators for θ = (m, σ).
   d) Generate a sample of 20 variates of X for m = 2, σ = 1. Use R and the sample to determine the MLE of (m, σ).

2.10.2 Samples from Random Vectors

When X = (X1, . . ., Xk) is a random vector of dimension k, a sample is a set of vectors. Then, the empirical mean and the variance may be determined for each component Xi, 1 ≤ i ≤ k. Analogously, tests on the mean and on the variance may be performed for each component. However, other analyses can be performed, such as the calculation of covariances, correlations, affine approximations, and conditional expectations. For instance, we evaluate the population's covariance as

covP(Xk, Xℓ) = (1/n) Σ_{i=1}^{n} (Xki - X̄k)(Xℓi - X̄ℓ) = (1/n) Σ_{i=1}^{n} XkiXℓi - X̄k X̄ℓ.

The standard covariance of the sample is

covn(Xk, Xℓ) = (1/(n - 1)) Σ_{i=1}^{n} (Xki - X̄k)(Xℓi - X̄ℓ).

The empirical correlation is

r(Xk, Xℓ) = covP(Xk, Xℓ)/√(VP(Xk)VP(Xℓ)) = covn(Xk, Xℓ)/√(Vn(Xk)Vn(Xℓ));

r is independent of the choice between the population's and the sample's definitions. Analogously to Sect. 2.5, we can determine linear approximations and conditional means.
R proposes built-in functions for the analysis of samples of vectors: lm(Y~X) determines the linear approximation Y ≈ aX + b, cov evaluates the covariances, and cor furnishes their correlation. In addition, the class ctstats can be used to evaluate approximations and correlations.

Example 2.23 Consider the situation where X = (X1, X2) and the available data are given in Table 2.10.

Table 2.10 An example of data
X1:  1    2    3    4     5
X2:  4.5  6.8  9.6  10.8  12.5

We can evaluate the covariance, coefficient of linear correlation, and a linear approximation X2 ≈ aX1 + b as follows (Fig. 2.1):
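A minimal sketch with the data of Table 2.10:

x1 <- 1:5
x2 <- c(4.5, 6.8, 9.6, 10.8, 12.5)
cov(x1, x2)                 # covariance
cor(x1, x2)                 # linear correlation coefficient
fit <- lm(x2 ~ x1)          # least squares affine approximation X2 ≈ a X1 + b
coef(fit)                   # estimated intercept b and slope a
plot(x1, x2); abline(fit)   # data and fitted line, as in Fig. 2.1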

Fig. 2.1 Linear approximation of the data

As indicated, we can use class ctstats:

2.10.3 Empirical CDF and Empirical PDF

If the sample contains a large number of elements, we may determine approximations of the cumulative function and the probability density of the variable. To fix the ideas, let us consider a sample 𝒳 = {X1, . . ., Xn} of a random variable X, with n large enough. To determine the CDF, we can use the function ecdf, which generates a function F associating to an abscissa x the value F(x) = P(X ≤ x). Of course, it is also possible to use a user-defined function, approximating the CDF at a subset of evaluation points x1, . . ., xne. For instance, we can fix a number of subintervals nsub and determine a step δx = (max 𝒳 - min 𝒳)/nsub. Then, we set xi = min 𝒳 + (i - 1)δx. Once the evaluation points are determined, we approximate the CDF as

F(xi) ≈ Fe(xi) = card(𝒳i)/n, 𝒳i = {Xj ∈ 𝒳: Xj ≤ xi},

id est,

Fe(xi) = (number of elements of 𝒳 that are inferior or equal to xi)/n.

This approach is implemented in the method empcdf in the class sampleX. Once the CDF was generated, the PDF is approximated as its numerical derivative, generated by one of the methods introduced in Sect. 1.16. Example 2.24 Consider X~χ 2(9). We generate a sample of 1e4 variates from X and determine the CDF by using ecdf and empcdf as follows.
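A minimal sketch is given below; empcdf and the class sampleX are the author's own tools and are not reproduced here, so the manual sapply line plays the role of empcdf:

set.seed(789)
xs <- rchisq(1e4, df = 9)                             # sample of 1e4 variates from chi-squared(9)
Fe <- ecdf(xs)                                        # empirical CDF as a function
xg <- seq(min(xs), max(xs), length.out = 200)         # evaluation points
Fm <- sapply(xg, function(t) mean(xs <= t))           # manual empirical CDF at the evaluation points
plot(xg, Fe(xg), type = "l"); lines(xg, Fm, lty = 2)  # the two curves coincide, as in Fig. 2.2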

Fig. 2.2 Empirical CDF from a sample of 1e4 variates from χ 2(9). Both the methods furnish the same result

The CDF is shown in Fig. 2.2. The PDF is determined by a particle derivative using the Gaussian kernel with h = x2 - x1. The result is shown in Fig. 2.3: the estimated PDF is close to the exact one.

Fig. 2.3 Empirical PDF furnished by the particle derivative of the empirical CDF (Gaussian kernel with h = x2 - x1)

Notice that you can generate the histogram of the data using the function hist, which will furnish an approximation of the PDF. For instance, you can use the code:
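A minimal sketch (the number of breaks and the labels are illustrative choices):

hist(xs, breaks = 50, freq = FALSE,      # freq = FALSE rescales the histogram to a density
     main = "Empirical PDF", xlab = "x")
curve(dchisq(x, df = 9), add = TRUE)     # exact PDF superimposed for comparison, as in Fig. 2.4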

The result is shown in Fig. 2.4.

Fig. 2.4 Empirical PDF furnished by hist


Table 2.11 A Frequency Table generated from a sample {X1, . . ., Xn}: ni is the number of occurrences of elements from class number i and pi = ni/n

Class:         X ≤ x1   X ∈ (x1, x2)   . . .   X ∈ (xk-1, xk)   X > xk
Observations:  n1       n2             . . .   nk               nk+1
Frequency:     p1       p2             . . .   pk               pk+1

2.10.4 Testing Adequacy of a Sample to a Distribution

Samples can be used to generate frequency tables analogous to Tables of relative frequencies, analogous to Table 2.1. For instance, a sample X = fX 1 , . . . , X n g can be used to generate a Frequency Table analogous to Table 2.11. The data in Table 2.11 can be used to verify if prior assumptions on the distribution are coherent with the observations. To do this, one of the popular tests is the χ 2 - test, based on the following theorem: Theorem 2.8 (Fisher-Pearson) Let Y 2 ℝs be a variable having a multinomial b distribution M ðN, pÞ. Let f: ℝr → ℝs and θ 2 ℝr be such that p = f(θ). Let θ be the maximum-likelihood estimation of θ and b p=f b θ . Let us define Y i - nb p ffiffiffiffiffiffi i , 1 ≤ i ≤ s: Zi = p nb pi Then kZk2 is approximatively χ 2(s - r - 1) for large values of n: kZk2 ⟶ χ 2 ðs - r - 1Þ in distribution for n ⟶ þ 1: This theorem allows us to test the hypothesis H0: X has a density f(x, θ). This hypothesis is tested as follows: use the data in Table 2.11 to estimate the r parameters θ and estimate the value pbi = PðClass iÞ. Then, evaluate (s = k + 1, for Table 2.11) Q2 =

s X ðni - T i Þ2 i=1

Ti

ðT i = nb pi Þ:

ð2:43Þ

Ti is the expected value if the hypothesis is true. According to the theorem, Q2 is approximatively χ 2(s - r -1)(i.e., χ2(k - r), for Table 2.11): we may determine  a critical value χ 2α such that P Q2 ≥ χ 2α = α. The non-rejection region is 0, χ 2α . R offers a built-in function chisq.test to perform a test for the comparison between the observed frequencies and a given probability vector p. In the general situation, pchisq can be used to evaluate a = P(χ 2 ≥ Q2), H0 is rejected if a < α


(This was the original point of view adopted y Pearson – see (Pearson, 1900). An alternative consists in using qchisq to determine χ 2α and comparing its value to Q2: the hypothesis is rejected if Q2 ≥ χ 2α . Example 2.25 A coin is tossed 100 times. We observe 60 heads (H) and 40 tails (T). To test the hypothesis that the coin is fair, we consider the hypothesis H0: P(T ) = P(H ) = 1/2. In such a situation, the number of parameters estimated is r = 0, the number of classes is r = 2 (H and T ). The expected number of observations are T H = T T = 100 × 12 = 50, corresponding to a fair coin. Then: Q2 =

ð40 - 50Þ2 ð60 - 50Þ2 þ = 4: 50 50

Since n = 2, we look for χ 2α at the distribution χ 2(1): For α = 0.05, the value is 3.84. Thus, the hypothesis is rejected with a risk of 5%. Analogously, P(χ 2 > Q2) = 0.0455 < 0.05, so that the hypothesis is rejected by this way too. We can use R as follows:

The alternative is implemented as follows:
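A minimal sketch of both computations, in base R:

obs <- c(60, 40); expected <- c(50, 50)
Q2 <- sum((obs - expected)^2 / expected)   # Q2 = 4
pchisq(Q2, df = 1, lower.tail = FALSE)     # p-value about 0.0455 < 0.05: reject H0
qchisq(0.95, df = 1)                       # critical value about 3.84 < Q2: reject H0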

(continued)


In this situation, we can also use chisq.test:

Example 2.26 Three products are proposed to 100 consumers. 32 choose product 1, 40 choose product 2, 28 choose product 3. To test the hypothesis that the distribution is uniform, we evaluate  Q = 2

 100 2 3 100 3

32 -

 þ

 100 2 3 100 3

40 -

 þ

 100 2 3 100 3

60 -

= 2:24

As in the preceding example, no parameter was estimated, so that r = 0. Here, s = 3 and we look for χ 2α at the distribution χ 2(2): The critical value is 5.99. Thus, the hypothesis is not rejected with a risk of 5%. Analogously, the critical probability is 0.33 > 0.05 and the hypothesis is not rejected with a risk 5%. We can use R as follows:

(continued)


Example 2.26 (continued) The alternative is implemented as follows:

In this situation also, chisq.test can be used:

Example 2.27 The number of non-compliant parts in 100 samples of 5 elements is the following. Number Occurrences

0 81

1 10

2 6

3 2

4 1

5 0

To test the hypothesis that the distribution of the non-compliant parts is binomial B ð5, pÞ we must start by estimating p. The MLE estimator for p is (see Example 2.26): b p=

81 × 0 þ 10 × 1 þ 6 × 2 þ 3 × 2 þ 4 × 1 5 × 100

Thus, b p = 0:064: Then, Q2 ≈ 151. One parameter was estimated: r = 1. We look for χ 2α at the distribution χ 2(4): The critical value is 9.5. Thus, the hypothesis is rejected with a risk of 5%. We can use R as follows: (continued)


Example 2.27 (continued)

The alternative is implemented as follows:

Example 2.28 Let us consider the observations of a variable X: class occurrences

(0,5) 61

(5,9) 28

(9,14) 5

(14,18) 4

(18,23) 1

We consider the hypothesis that the distribution of X is exponential f(x, λ) = λe-λx (x > 0). The first step consists in estimating λ: For a sample X = fX 1 , . . . , X ns g ns × log λ - λ

ns X

Xi,

i=1

so that (continued)


Example 2.28 (continued) ns 1 1 X Xi: = b λ ns i = 1

We use the centers of the classes to evaluate b λ: b λ=

61 þ 28 þ 5 þ 4 þ 1 ≈ 0:21 61 × 2:5 þ 28 × 7 þ 5 × 11:5 þ 4 × 16 þ 1 × 20:5

Then, we use this value to determine the probabilities of the classes: For an interval (a, b): P(X 2 (a, b)) = F(b) - F(a), where F is the CDF of X. We have class b p

(0,5) 0.61

(5,9) 0.24

(9,14) 0.09

(14,18) 0.04

(18,23) 0.01

Then, Q2 ≈ 6.4. One parameter was estimated: r = 1. We look for χ 2α at the distribution χ 2(3): The critical value is 7.8. Thus, the hypothesis is accepted with a risk of 5%. We can use R as follows:

(continued)


Example 2.28 (continued) The alternative is implemented as follows:

Example 2.29 Let us consider the observations of a variable X: class occurrences

(-1.1,0.5) 9

(0.5,2.1) 20

(2.1,3.6) 38

(3.6,5.2) 24

(5.2,6.8) 8

We consider the hypothesis that the distribution of X is normal 2 f ðx, m, σ Þ = exp - ððx - mÞ=σÞ . The first step consists in estimating m and σ: For a sample X = fX 1 , . . . , X ns g, the MLE estimators are b= m

ns ns 1 X 1 X b Þ2 Xi, b σ2 = ðX - m ns i = 1 ns i = 1 i

b and b We use the centers of the classes to evaluate m σ: b ≈ 2:9, b m σ ≈ 1:7: As in the previous example, we use this value to determine the probabilities of the classes: For an interval (a, b): P(X 2 (a, b)) = F(b) - F(a), where F is the CDF of X. We have class b p

(-1.1,0.5) 0.07

(0.5,2.1) 0.24

(2.1,3.6) 0.35

(3.6,5.2) 0.25

(5.2,6.8) 0.07

Thus, Q2 ≈ 1.96. Two parameters were estimated: r = 2. We look for χ 2α at the distribution χ 2(3): The critical value is 6.0. Thus, the hypothesis is accepted with a risk of 5%. (continued)


Example 2.29 (continued) We can use R as follows:

The alternative is implemented as follows:

Exercises 1. A dice rolls 100 times. The results are the following: Face Observations

1 17

2 20

3 25

4 14

5 21

6 13

Can we assume that the dice is fair? (continued)


2. A sample of a variable furnishes the results below:

Value         0   1   2   3   4
Observations  25  10  30  30  5

Can we assume that X is uniformly distributed?
3. A sample of a variable furnishes the results below:

Value         0   1   2   3   4   5   6
Observations  11  9   7   9   5   10  8

Can we assume that X is uniformly distributed?
4. A sample of a variable furnishes the results below:

Value         0   1   2   3
Observations  40  42  13  5

Can we assume that X is Poisson distributed?
5. A sample of a variable furnishes the results below:

Value         0   1   2   3   4   5   6
Observations  5   15  42  35  30  10  8

Can we assume that X is Poisson distributed?
6. A sample of a variable furnishes the results below:

Value         0   1   2    3   4
Observations  16  50  120  90  42

Can we assume that X is binomially distributed B(4, p)?
7. A sample of a variable furnishes the results below:

Value         0    1    2   3
Observations  200  100  10  5

Can we assume that X is binomially distributed B(3, p)?
8. A sample of a variable furnishes the results below:

Value         8–10  10–12  12–14  14–16  16–18
Observations  5     67     510    370    60

Can we assume that X is normally distributed N(m, σ)?
9. A sample of a variable furnishes the results below:

Value         0–10  10–20  20–30  30–40  40–50  50–60
Observations  10    210    560    180    20     5

Can we assume that X is normally distributed N(m, σ)?

2.10.5 Testing the Independence of a Couple of Variables

In the preceding section, we used samples from a variable X to generate frequency tables and verify if the data are consistent with a theoretical distribution. Analogously, observations of a couple (X, Y) can be used to generate a contingency table such as Table 2.12. Such a table can be transformed into probabilities by using the transformation p_{ij} = o_{ij}/ns. The marginal probabilities are

p_{i\bullet} = \frac{o_{i\bullet}}{ns}, \quad p_{\bullet j} = \frac{o_{\bullet j}}{ns}, \quad o_{i\bullet} = \sum_{j=1}^{n+1} o_{ij}, \quad o_{\bullet j} = \sum_{i=1}^{m+1} o_{ij}.

Table 2.13 can be used to verify if the hypothesis of independence of the variables is consistent with the observations – we can test the hypothesis H0: X and Y are independent by using the χ² test: if the variables are independent, then P(X = X_i, Y = Y_j) = P(X = X_i) P(Y = Y_j). We start by estimating P(X = X_i) ≈ p_{i\bullet}, P(Y = Y_j) ≈ p_{\bullet j}, so that \hat{p}_{ij} = p_{i\bullet} p_{\bullet j}. Then, we evaluate

Table 2.12 A contingency table generated from a sample {(X1, Y1), ..., (Xns, Yns)}: o_{ij} is the number of occurrences of elements from class (i, j)

Classes              Y < y1      Y ∈ (y1, y2)   ...   Y ∈ (y_{n-1}, y_n)   Y > y_n
X < x1               o_{11}      o_{12}         ...   o_{1,n}              o_{1,n+1}
X ∈ (x1, x2)         o_{21}      o_{22}         ...   o_{2,n}              o_{2,n+1}
⋮                    ⋮           ⋮              ⋱     ⋮                    ⋮
X ∈ (x_{m-1}, x_m)   o_{m1}      o_{m2}         ...   o_{m,n}              o_{m,n+1}
X > x_m              o_{m+1,1}   o_{m+1,2}      ...   o_{m+1,n}            o_{m+1,n+1}

Q^2 = \sum_{i=1}^{m} \sum_{j=1}^{n} \frac{(o_{ij} - T_{ij})^2}{T_{ij}}, \qquad T_{ij} = ns \times \hat{p}_{ij}.     (2.44)

Q² is approximately χ²((m - 1)(n - 1)): we may determine a critical value χ²_α such that P(Q² ≥ χ²_α) = α. The non-rejection region is (0, χ²_α). The function chisq.test determines Q² and P(χ² ≥ Q²).

Example 2.30 Let us consider 100 observations of a couple (X, Y) leading to the contingency table below, at left. To test the independence, we must determine p_{ij} (Table 2.13, at right).

Table 2.13 A contingency table generated by a sample of 100 observations of (X, Y)

Observed counts o_{ij}:            Probabilities p_{ij}:

X \ Y    -1    0    1              X \ Y    -1     0     1
-1       20    0   10              -1       0.2    0     0.1
 0        0   40    0               0       0      0.4   0
 1       10    0   20               1       0.1    0     0.2


Thus, p_{i\bullet} = (0.3, 0.4, 0.3), p_{\bullet j} = (0.3, 0.4, 0.3). Assuming independence, the expected values are (Table 2.14)

Table 2.14 Expected values for the test of independence. At left, the values of T_{ij}. At right, the probabilities \hat{p}_{ij}

T_{ij}:                            \hat{p}_{ij}:

X \ Y    -1    0    1              X \ Y    -1      0      1
-1        9   12    9              -1       0.09    0.12   0.09
 0       12   16   12               0       0.12    0.16   0.12
 1        9   12    9               1       0.09    0.12   0.09

Thus, Q² ≈ 111. The number of degrees of freedom is (3 - 1)(3 - 1) = 4: for α = 0.05, χ²_α = 9.5. Since Q² > χ²_α, the hypothesis is rejected. We can use R as follows:
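A minimal sketch of this test with chisq.test applied to the matrix of counts (the object name O is illustrative):

O <- matrix(c(20,  0, 10,
               0, 40,  0,
              10,  0, 20), nrow = 3, byrow = TRUE)
chisq.test(O)        # X-squared about 111 with df = 4: independence is rejected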

Example 2.31 Let us consider 100 observations of a couple (X, Y) leading to the contingency table below, at left. To test the independence, we must determine p_{ij} (Table 2.15, at right).


Table 2.15 A contingency table generated by a sample of 100 observations of (X, Y). At left, the numbers of observations. At right, the probabilities

Observed counts o_{ij}:            Probabilities p_{ij}:

X \ Y    0    1                    X \ Y    0      1
0        28   22                   0        0.28   0.22
1        21   29                   1        0.21   0.29

Thus, p_{i\bullet} = (0.5, 0.5), p_{\bullet j} = (0.49, 0.51). Assuming independence, the expected values are (Table 2.16)

Table 2.16 Expected values for the test of independence. At left, the values of T_{ij}. At right, the probabilities \hat{p}_{ij}

T_{ij}:                            \hat{p}_{ij}:

X \ Y    0      1                  X \ Y    0       1
0        28.5   26.5               0        0.285   0.265
1        34.2   31.8               1        0.342   0.318

Thus, Q² ≈ 1.58. The number of degrees of freedom is (2 - 1)(2 - 1) = 1: for α = 0.05, χ²_α = 3.8. Since Q² < χ²_α, the hypothesis is accepted. We can use R as follows:
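A minimal sketch, again with chisq.test on the matrix of counts (for a 2 × 2 table, correct = FALSE disables Yates' continuity correction; the exact value returned may differ slightly from the hand computation above):

O <- matrix(c(28, 22,
              21, 29), nrow = 2, byrow = TRUE)
chisq.test(O, correct = FALSE)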


Exercises
1. A sample of two variables furnishes the results below:

X \ Y    1    2    3
1        50   47   35
2        15   13   33
3         6    9   12
4        50   35   34

Can we assume that X and Y are independent?
2. A sample of two variables furnishes the results below:

X \ Y    1     2
1        200   300
2        190   150
3        110    50

Can we assume that X and Y are independent?
3. A sample of two variables furnishes the results below:

X \ Y    1     2     3
1        150    75    40
2        250   110   100
3         80    40   130
4         10    10    10

Can we assume that X and Y are independent?

2.11 Generating Triangular Random Numbers

Triangular distributions are mainly used when the information available on the random variable is limited to a minimal value xmin, a maximal value xmax, and a most probable value xp. A triangular distribution with these parameters is referred to as T(xmin, xp, xmax). The PDF associated to a triangular distribution forms a triangle and its equation is

f(x) = \begin{cases} \dfrac{2(x - x_{min})}{(x_{max} - x_{min})(x_p - x_{min})}, & \text{if } x_{min} \le x \le x_p;\\ \dfrac{2(x_{max} - x)}{(x_{max} - x_{min})(x_{max} - x_p)}, & \text{if } x_p \le x \le x_{max};\\ 0, & \text{otherwise.} \end{cases}

Its CDF is

F(x) = \begin{cases} 0, & \text{if } x \le x_{min};\\ \dfrac{(x - x_{min})^2}{(x_{max} - x_{min})(x_p - x_{min})}, & \text{if } x_{min} \le x \le x_p;\\ 1 - \dfrac{(x_{max} - x)^2}{(x_{max} - x_{min})(x_{max} - x_p)}, & \text{if } x_p \le x \le x_{max};\\ 1, & \text{otherwise.} \end{cases}

The generation of samples from a triangular distribution uses the uniform generator. Indeed, let U be uniformly distributed on (0, 1) and consider

X = \begin{cases} x_{min} + \sqrt{U (x_{max} - x_{min})(x_p - x_{min})}, & \text{if } U \le \dfrac{x_p - x_{min}}{x_{max} - x_{min}};\\ x_{max} - \sqrt{(1 - U)(x_{max} - x_{min})(x_{max} - x_p)}, & \text{otherwise.} \end{cases}
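A minimal sketch of this inversion in R (the function name rtriang is illustrative):

# Generate n variates from T(xmin, xp, xmax) by inversion of the CDF above
rtriang <- function(n, xmin, xp, xmax) {
  U  <- runif(n)
  Fc <- (xp - xmin) / (xmax - xmin)            # value of the CDF at x = xp
  ifelse(U <= Fc,
         xmin + sqrt(U * (xmax - xmin) * (xp - xmin)),
         xmax - sqrt((1 - U) * (xmax - xmin) * (xmax - xp)))
}
x <- rtriang(1e4, 0, 2, 5)                     # example: 10000 variates from T(0, 2, 5)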
2.4) and P(X < 0.7), we get the values 0.2 for both, while the exact values are

P(X < 0.7) = \frac{1}{2}(1 + \log(0.7)) \approx 0.3216625, \qquad P(X > 2.4) = \frac{1}{2}(1 - \log(2.4)) \approx 0.06226563.

Finally, these data appear as insufficient to determine the distribution of X, as shown in Fig. 3.2. Of course, the available data do not contain the information that the variable U is distributed on (-1, 1), since we have only data on (-0.521897, 0.972777). Thus, we may consider that the comparisons must be made with the restriction of the variables to the interval of the observations. In this case, setting a = -0.521897, b = 0.972777, U_R = U restricted to (a, b), X_R = e^{U_R}, we have

Fig. 3.2 Comparison between the CDF generated by the data in Table 3.1 (in black) and the exact CDF of X. The approximation has poor quality


E(X_R) = \frac{e^b - e^a}{b - a} \approx 1.372799,

\sigma(X_R) = \sqrt{\frac{(e^a - e^b)\left((-2 + a - b)e^a + (2 + a - b)e^b\right)}{2(a - b)^2}} \approx 0.581762,

P(X_R < 0.7) = \frac{\log(0.7) - a}{b - a} \approx 0.1105405, \qquad P(X_R > 2.4) = \frac{b - \log(2.4)}{b - a} \approx 0.06510334.

Thus, even in this case, the sample does not furnish good estimates for the probabilities. In addition, as shown in Fig. 3.3, the approximation of the CDF is poor. We shall examine in the sequel some methods to determine the coefficients when the information reduces to a sample S (Eq. (3.1)) of limited size.

Fig. 3.3 Comparison between the CDF generated by the data in Table 3.1 (in black) and the CDF of the restriction XR of X to the interval of the observed data

3.2 Collocation

Collocation is the simplest way to determine a representation of random variables – but this approach confronts obstacles in the situations where U is unknown or the data contain significant errors. For explicitly given data (Problem 3.1.1), collocation can be reduced to the solution of a linear system. Indeed, collocation imposes that

X_i = X(U_i) \approx PX(U_i), \quad 1 \le i \le ns,     (3.7)

id est, that the approximation coincides with the observed value at each point of the sample. Thus,

\sum_{j=0}^{k} x_j \varphi_j(U_i) = X_i.     (3.8)

Let us consider the vectors X = (X_1, ..., X_ns)^t, U = (U_1, ..., U_ns)^t. Then,

A x = B,     (3.9)

with A = (\Phi(U_1), ..., \Phi(U_ns))^t (line i of A contains \Phi(U_i)) and B = X, id est,

A = (A_{ij}: 1 \le i \le ns, 1 \le j \le k + 1), \quad A_{ij} = \varphi_{j-1}(U_i);     (3.10)

B = (B_i: 1 \le i \le ns), \quad B_i = X_i.     (3.11)

Thus, the determination of x reduces to the solution of the linear system (3.9, 3.10, and 3.11). Matrix A is formed by ns rows and k + 1 columns: in general, ns > k + 1, so that the system is overdetermined. We have

(A^t A)\, x = A^t B.     (3.12)

To obtain a more stable result, we can consider the Levenberg–Marquardt approximation

x \approx x_\varepsilon, \quad \text{with} \quad (A^t A + \varepsilon\, Id)\, x_\varepsilon = A^t B.     (3.13)

Here, ε > 0 is a regularization parameter – Tikhonov's regularization – to be chosen small enough (often less than 1E-3). Collocation is implemented in class expansion1D as follows:
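The class itself is not reproduced here; a minimal self-contained sketch of the computation it performs (the helper name colloc_sketch and its arguments are illustrative, not the book's interface) is:

# Build the collocation matrix A and solve (A^t A + eps Id) x = A^t B as in Eq. (3.13)
colloc_sketch <- function(phi, ucs, xcs, eps = 1e-5) {
  A <- sapply(phi, function(f) f(ucs))         # ns x (k+1) matrix, A[i, j] = phi_{j-1}(U_i)
  solve(t(A) %*% A + eps * diag(ncol(A)), t(A) %*% xcs)
}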


To apply this method to the data in Table 3.1, enter the data as follows:
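Assuming Table 3.1 contains the ten pairs (U_i, X_i = e^{U_i}) also used in Tables 3.2 and 3.4, the data entry could read:

ucs <- c(-0.521897, -0.428266, -0.063814, 0.110263, 0.218055,
          0.439192,  0.498947,  0.818171, 0.899228, 0.972777)
xcs <- c( 0.59339,   0.65164,   0.93818,  1.11657,  1.24366,
          1.55145,   1.64699,   2.26635,  2.45770,  2.64528)
a <- min(ucs); b <- max(ucs)                   # interval of the observed data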

Then, run the code below:
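The original call uses the expansion1D class, analogous to the trigonometric example shown later in this section; a sketch using the helper defined above, with a polynomial basis of degree k = 6, is:

k   <- 6
phi <- lapply(0:k, function(j) { force(j); function(u) u^j })   # monomial basis 1, u, ..., u^6
xpx <- colloc_sketch(phi, ucs, xcs, eps = 1e-5)                 # coefficients of the expansion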

Here, xpx contains the coefficients x. The code furnishes

x = (0.5933901, 0.8870419, 0.662488, 0.3289636, 0.1313289, 0.02278473, 0.01928178).

The result is shown in Fig. 3.4. Once the coefficients are determined, we can use them to determine the distribution of X, statistics of this variable, and probabilities of events connected to it. The simplest way consists in generating a large sample from X, by using a large sample from U: we generate 10^5 variates from U(-0.521897, 0.972777) – notice that no information is available outside this interval – and use them to generate a sample from X by using the approximation – the computational cost is negligible. Since the


Fig. 3.4 Collocation results for the data in Table 3.1 (polynomial family, k = 6)

distribution is uniform, we can use equally spaced values. Then, we use these simulated data to approximate the CDF of X by using the empirical CDF from the simulated data. The results are shown in Figs. 3.5 and 3.6: we see that the accordance is excellent. The PDF is determined by evaluating the numerical derivative of the CDF (a method is available in class sampleX.R). The result is exhibited in Fig. 3.7: again, the accordance is good. In this case, the large sample with 1E5 variates furnishes the estimations P̄X = 1.3728, s(PX) = 0.5818, P(X > 2.4) = 0.06511 and P(X < 0.7) = 0.1105 – to be compared with the exact values 1.3728, 0.5818, 0.06510, 0.1105. We observe that the generated values are in good accordance with the real ones. If we do not desire to use a large sample, we can use alternative approaches. Indeed, many alternatives to this simple approach exist. For instance, let fU be the PDF of U and fX be the PDF of X. Assuming that X is injective, we have (see, for instance, (Souza de Cursi E., 1992)):

f_X(x) = \frac{f_U(X^{-1}(x))}{|J(X, U)|}, \qquad J(X, U) = \text{Jacobian of } X(U).

In one-dimensional situations, J(X, U) = X'(U) = dX/dU. Thus, the PDF of X can be determined as

f_X(\Phi(u) x) \approx f_X(X(u)) = \frac{f_U(u)}{|X'(u)|} \approx \frac{f_U(u)}{|\Phi'(u) x|}.     (3.14)


Fig. 3.5 Comparison between the CDF generated by collocation (polynomial family, k = 6) and the CDF of the restriction XR of X to the interval of the observed data. Here, the CDF of PX is generated by ecdf

Fig. 3.6 Comparison between the CDF generated by collocation (polynomial family, k = 6) and the CDF of the restriction XR of X to the interval of the observed data. Here, the CDF of PX is generated by sampleX

Using (3.14), we obtain the results shown in Fig. 3.8 – they are better than those generated by the simple method, but this approach cannot be used if the distribution of U is unknown and supposes that X is injective: to use it effectively, it is necessary to verify that Φ′(u)x is injective on the interval under consideration.


Fig. 3.7 Comparison between the PDF generated by collocation (polynomial family, k = 6) and the PDF of the restriction XR of X to the interval of the observed data. Here, the PDF of PX is generated by sampleX

Fig. 3.8 Comparison between the PDF generated by Eq. (3.14) (polynomial family, k = 6) and the PDF of the restriction XR of X to the interval of the observed data

Under these conditions, this approximation can be used to generate the CDF of X by numerical integration of fX given by (3.14). As an example, a trapezoidal rule involving 51 points generates the results in Fig. 3.9.


Fig. 3.9 Comparison between the CDF generated by integration of the PDF furnished by Eq. (3.14) (polynomial family, k = 6) and the CDF of the restriction XR of X to the interval of the observed data

As in the preceding, the mean and the standard deviation can be evaluated using a large sample generated by the representation found, but we can also evaluate them by numerical integration using the PDF furnished by Eq. (3.14):

E(X^p) = \int x^p f_X(x)\, dx = \int (X(u))^p f_U(u)\, du.     (3.15)

For instance, a trapezoidal integration with 50 subintervals and the first integral produces P̄X = 1.3729, s(PX) = 0.5820; the second integral produces P̄X = 1.3728, s(PX) = 0.5821. In this case, the empirical mean of the sample of 50 variates from X has P̄X = 1.3777, s(PX) = 0.6009. The probabilities are evaluated directly from the CDF found: we have P(X > 2.4) = 0.0651 and P(X < 0.7) = 0.1105 – very close to the exact values. A second alternative consists in determining the CDF of X by the numerical evaluation of the Lebesgue integral of the probability measure P(dU): a numerical method is exposed in (Souza de Cursi, 2015). A simple approximation can be generated by considering

I(x) = \{u: X(u) \le x\}.

An approximation of this set can be generated by considering an interval (u_{min}, u_{max}) containing the range of U and a partition

u_{min} = u_0 < u_1 < u_2 < \ldots < u_n = u_{max}.


Such a partition defines n subintervals I_i = (u_{i-1}, u_i), i = 1, ..., n. Let us introduce

K(I_i, x) = I_i, \ \text{if } \max\{\Phi(u_{i-1}) x, \Phi(u_i) x\} \le x; \qquad K(I_i, x) = \emptyset, \ \text{otherwise.}

We can approximate

I(x) \approx \bigcup_{i=1}^{n} K(I_i, x),

so that

P(X \le x) = P(U \in I(x)) \approx \sum_{i=1}^{n} P(U \in K(I_i, x)),

id est,

P(X \le x) \approx \sum_{\substack{1 \le i \le n\\ \max\{\Phi(u_{i-1}) x,\, \Phi(u_i) x\} \le x}} P(I_i).     (3.16)

In practice, we can consider a second interval (x_{min}, x_{max}) containing the range of X and a partition x_{min} = x_0 < x_1 < x_2 < ... < x_m = x_{max}: the CDF of X is evaluated at these points. For instance, assume that the interval (u_{min}, u_{max}) = (-0.521897, 0.972777) is discretized in n = 250 subintervals, while the interval (x_{min}, x_{max}) is discretized in m = 25 subintervals. This approach furnishes the results shown in Fig. 3.10. The PDF generated by SP derivative of the CDF is shown in Fig. 3.11. The results can be improved by using smaller steps. For instance, using n = 2500, m = 250, we obtain the results shown in Fig. 3.12. In this case, P(X > 2.4) and P(X < 0.7) are evaluated from the calculated CDF. The approximation with n = 2500, m = 250 furnishes P(X > 2.4) = 0.06532 and P(X < 0.7) = 0.1103. P̄X and s(PX) can be evaluated either by using the empirical mean of the sample, by Eq. (3.15), or by Stieltjes integration using the CDF:

E(X^p) = \int x^p\, dF_X(x) = \int X(u)^p\, dF_U(u).     (3.17)

The approach by Riemann integration (3.15) produced P̄X = 1.3704 and s(PX) = 0.5821; the second approach by (3.17) furnished P̄X = 1.3701 and s(PX) = 0.5822 – both values are close to the exact ones. As previously observed, another Hilbert basis can be considered. For instance, we can consider a trigonometrical family:


Fig. 3.10 Comparison between the CDF generated by Eq. (3.16) (polynomial family, k = 6) and the CDF of the restriction XR of X to the interval of the observed data

Fig. 3.11 Comparison between the PDF generated by SP derivative of the CDF furnished by Eq. (3.16) (polynomial family, k = 6, kernel gaussian) and the PDF of the restriction XR of X to the interval of the observed data


Fig. 3.12 Improvement in the evaluation of the PDF. Using smaller steps, both the CDF and the PDF are better evaluated. Here, the PDF generated by SP derivative of the CDF furnished by Eq. (3.16) (polynomial family, k = 6, kernel gaussian) is compared to the PDF of the restriction XR of X to the interval of the observed data

source("trigo.R") z = 1e-3 nd = 12 phi = trigo$new(a,b,z) he1 = expansion1D$new(phi) eps = 1e-5 xpx = he1$colloc(nd,eps,ucs,xcs) In this case, we obtain the results exhibited in Figs. 3.13, 3.14, and 3.15. For this expansion, PX = 1:372671, sðPX Þ = 0:5842 , P(X > 2.4) = 0.06532, and P(X < 0.7) = 0.1100. With k = 14, we obtain PX = 1:3728, sðPX Þ = 0:5814 , P(X > 2.4) = 0.06530, and P(X < 0.7) = 0.1098. With k = 22, PX = 1:3726, sðPX Þ = 0:5819, P(X > 2.4) = 0.06521, and P(X < 0.7) = 0.1111. Again, we can choose the alternative approach furnished by Eq. (3.14). Then, we obtain the results shown in Figs. 3.16 and 3.17. We use k = 12 – for this approximation, a trapezoidal integration with 2E3 subintervals produces PX = 1:3744, sðPX Þ = 0:5809, P(X > 2.4) = 0.06475, and P(X < 0.7) = 0.1106. With k = 14, we obtain PX = 1:3731, sðPX Þ = 0:5810 , P(X > 2.4) = 0.06495, and P(X < 0.7) = 0.1101. For k = 22, we obtain PX = 1:3758, sðPX Þ = 0:5815 , P(X > 2.4) = 0.06512, and P(X < 0.7) = 0.1112.


Fig. 3.13 Collocation results for the data in Table 3.1 (trigonometrical family, k = 12)

Fig. 3.14 Comparison between the CDF generated by collocation (trigonometrical family, k = 12) and the CDF of the restriction XR of X to the interval of the observed data. Here, the CDF of PX is generated by ecdf

The approach furnished by Eq. (3.16) can be used also: with k = 12, we obtain the PDF shown in Fig. 3.18 and P(X > 2.4) = 0.06554, P(X < 0.7) = 0.1098. The approach (3.15) furnishes P̄X = 1.3710 and s(PX) = 0.5820; using (3.17) gives P̄X = 1.3699 and s(PX) = 0.5821 – again, both values are close to the exact ones.


Fig. 3.15 Comparison between the PDF generated by collocation (trigonometrical family, k = 12) and the PDF of the restriction XR of X to the interval of the observed data. Here, the PDF of PX is generated by sampleX

Fig. 3.16 Comparison between the PDF generated by Eq. (3.14) (trigonometrical family, k = 12) and the PDF of the restriction XR of X to the interval of the observed data


Fig. 3.17 Comparison between the CDF generated by integration of the PDF furnished by Eq. (3.14) (trigonometrical family, k = 12) and the CDF of the restriction XR of X to the interval of the observed data

Fig. 3.18 Comparison between the PDF generated by SP derivative of the CDF furnished by Eq. (3.16) (trigonometrical family, k = 12, kernel gaussian) and the PDF of the restriction XR of X to the interval of the observed data


If we obtain, from some other source, the information that U is uniformly distributed on (-1, 1), we can use the coefficients to generate an analogous sample using points on (-1, 1), and thus get the complete distribution of X. The results are shown in Figs. 3.19 and 3.20. Once the CDF is determined, the PDF is found by the numerical derivative of the CDF, using the method of class sampleX.R. The result is exhibited in Fig. 3.21: again, the accordance is good.

Fig. 3.19 Comparison between the CDF generated by collocation (polynomial family, k = 6) and the CDF of X. Here, the CDF is generated by ecdf

Fig. 3.20 Comparison between the CDF generated by collocation (polynomial family, k = 6) and the CDF of X. Here, the CDF is generated by sampleX


Fig. 3.21 Comparison between the PDF generated by collocation (polynomial family, k = 6) and the PDF of X. Here, the PDF is generated by sampleX

Here, P̄X = 1.1752, s(PX) = 0.6575, P(X > 2.4) = 0.06227 and P(X < 0.7) = 0.3217. Again, the accordance is excellent. The trigonometrical family with k = 12 furnishes P̄X = 1.1748, s(PX) = 0.6576, P(X > 2.4) = 0.06243 and P(X < 0.7) = 0.32126. For k = 14, the trigonometrical approximation furnishes P̄X = 1.1739, s(PX) = 0.6591, P(X > 2.4) = 0.06232 and P(X < 0.7) = 0.3210; for k = 14, it furnishes P̄X = 1.1861, s(PX) = 0.6451, P(X > 2.4) = 0.06241 and P(X < 0.7) = 0.3211; for k = 22, it furnishes P̄X = 1.1795, s(PX) = 0.6524, P(X > 2.4) = 0.06235 and P(X < 0.7) = 0.3221. In this case too, we can choose the alternative approach furnished by Eq. (3.14). Using a polynomial family, with k = 6, we obtain the results shown in Figs. 3.22 and 3.23. In this case, a trapezoidal rule with 1E3 subintervals furnishes P̄X = 1.1752, s(PX) = 0.6575, P(X > 2.4) = 0.06226 and P(X < 0.7) = 0.3217. Using a trigonometrical family with k = 12, we obtain P̄X = 1.1773, s(PX) = 0.6571, P(X > 2.4) = 0.061284 and P(X < 0.7) = 0.3224. Choosing the approach by Eq. (3.16) and a trigonometrical family with k = 12, we obtain the PDF shown in Fig. 3.24 and P(X > 2.4) = 0.06248, P(X < 0.7) = 0.3212. Riemann integration of the PDF furnishes P̄X = 1.1740 and s(PX) = 0.6586; Stieltjes integration of the CDF gives P̄X = 1.1745 and s(PX) = 0.6576. For a polynomial family with k = 6, we obtain the PDF in Fig. 3.25, P(X > 2.4) = 0.06231, P(X < 0.7) = 0.3215. Riemann integration of the PDF furnishes P̄X = 1.1750 and s(PX) = 0.6575; Stieltjes integration of the CDF gives P̄X = 1.1745 and s(PX) = 0.6575. All these values are close to the exact ones.


Fig. 3.22 Comparison between the PDF generated by Eq. (3.14) (polynomial family, k = 6) and the PDF of X

Fig. 3.23 Comparison between the CDF generated by integration of the PDF furnished by Eq. (3.14) (polynomial family, k = 6) and the CDF of X

If the distribution of U is unknown, we can choose between two alternatives:

• Either first determining the distribution of U, and then determining the distribution of X by one of the preceding methods: we obtain a representation of X as a function of U and then we generate the CDF and PDF of X.


Fig. 3.24 Comparison between the PDF generated by SP derivative of the CDF furnished by Eq. (3.16) (trigonometrical family, k = 12, kernel gaussian) and the PDF of X

Fig. 3.25 Comparison between the PDF generated by SP derivative of the CDF furnished by Eq. (3.16) (polynomial family, k = 6, kernel gaussian) and the PDF of X

• Or directly determining the distribution of X, without determining the distribution of U. In this case, we do not obtain a representation of X as a function of U – only the CDF and the PDF of X are generated.


Table 3.2 An example of data including an artificial Gaussian variable

A   -1.8596342   -0.8097976   -0.5309607   -0.4362206   -0.2881361
U   -0.521897    -0.428266    -0.063814     0.110263     0.218055

A    0.3361204    0.5652095    1.0876513    1.2547856    2.3231017
U    0.439192     0.498947     0.818171     0.899228     0.972777

In both cases, it is necessary to introduce an artificial variable A having a known distribution: for instance, let us consider an artificial variable A ~ N(0, 1) – notice that the distribution of A is different from the distribution of U. Let us examine the first approach: we look for a representation PU = Φ^t(A) u of U as a function of A. Since the distribution of A is known, this representation determines the distribution of U. Then, we determine the distribution of X by one of the preceding approaches. As an example, let us generate a sample (A_1, ..., A_ns) of ns variates from N(0, 1) and use this sample to find a representation PU = \sum_{i=0}^{k} u_i \varphi_i(A) of U as a function of A. In a first step, it is necessary to create an artificial link between A and U – for instance, we can reorder the data in such a way that both A and U are increasingly ordered: U_1 < U_2 < ... < U_ns and A_1 < A_2 < ... < A_ns – this generates a positive correlation between U and A. For instance, consider the data in Table 3.2. The values of A were generated by rnorm. We consider an artificial sample formed by the pairs (A_i, U_i), and collocation can furnish coefficients u = (u_0, ..., u_k) defining an expansion of U as a function of A:

PU(A) = \sum_{i=0}^{k} u_i \varphi_i(A).     (3.18)

As an example, using a polynomial family with k = 3, we obtain the results exhibited in Fig. 3.26. An alternative to this approach is the Moments Matching Method (M3, see Sect. 3.4) – of course, both can be combined, and the coefficients furnished by collocation can be used as starting point for M3. The approximated distribution was generated using a sample of 1E5 variates from N(0, 1), truncated to the interval (-1.8596342, 2.3231017), corresponding to the minimal and maximal values of the sample from A. The final sample contained 95797 variates from A, which were used to generate the same number of variates from U. As we can see, the approximated distribution is close to the real one. Once the sample from U is generated, we can use the coefficients x and Eq. (3.3) to generate a sample from X: we obtain a sample of 95797 variates from X, which is used to determine the CDF of X. The result is exhibited in Figs. 3.27, 3.28, and 3.29. In this case, we obtain P̄X = 1.4319, s(PX) = 0.6266, P(X > 2.4) = 0.1037 and P(X < 0.7) = 0.1063. There is a degradation when compared to the situation where the distribution of U is known, but the approximations can be considered as good,


Fig. 3.26 Distribution of U generated by collocation, with an artificial variable A  N(0, 1) and a polynomial family with k = 3

Fig. 3.27 Distribution of XR generated by collocation, using the sample of U generated by the artificial variable A  N(0, 1) and coefficients x determined by collocation (polynomial family, k = 6). Here, the CDF was generated by ecdf

since the distribution of U is unknown and we used a wrong artificial variable (gaussian instead of uniform). Analogously to the preceding situations, we can adopt alternative approaches based on Eqs. (3.14) and (3.16). For instance, let us consider (3.14) under the same


Fig. 3.28 Distribution of XR generated by collocation, using the sample of U generated by the artificial variable A  N(0, 1) and coefficients x determined by collocation (polynomial family, k = 6). Here, the CDF was generated by sampleX

Fig. 3.29 PDF of XR generated by collocation, using the sample of U generated by the artificial variable A  N(0, 1) and coefficients x determined by collocation (polynomial family, k = 6). Here, the PDF was generated by sampleX

conditions: U is approximated by a polynomial of degree 3 in A; then X is approximated by a polynomial of degree 6 in U. The result is exhibited in Figs. 3.30, 3.31, 3.32, and 3.33. In this case, we obtain P̄X = 1.3889, s(PX) = 0.6569,


Fig. 3.30 PDF of U generated by Eq. (3.14), with an artificial variable A  N(0, 1) and a polynomial family with k = 3

Fig. 3.31 CDF of U generated by integration of the PDF furnished by Eq. (3.14), with an artificial variable A  N(0, 1) and a polynomial family with k = 3

P(X > 2.4) = 0.1048 and P(X < 0.7) = 0.1674. Again, we observe a degradation when compared to the situation where the distribution of U is known – we used a wrong artificial variable (gaussian instead of uniform). Now, let us consider (3.16) under the same conditions: U is approximated by a polynomial of degree 3 in A; then, X is approximated by a polynomial of degree 6 in U. The result is exhibited in Figs. 3.34, 3.35, 3.36, and 3.37. In this case, we obtain


Fig. 3.32 Distribution of XR generated by collocation, using the sample of U generated by the artificial variable A  N(0, 1) and coefficients x determined by collocation (polynomial family, k = 6)

Fig. 3.33 PDF of XR generated by collocation, using the sample of U generated by the artificial variable A  N(0, 1) and coefficients x determined by collocation (polynomial family, k = 6)

P̄X = 1.3744, s(PX) = 0.5836, P(X > 2.4) = 0.06513, and P(X < 0.7) = 0.1103. Even with a wrong artificial variable, this approach furnishes results not so far from the exact ones. The results can be improved by using A uniform – for instance, a look at the CDF of U furnished by the methods suggests the use of a uniform variable. Let us


Fig. 3.34 CDF of U generated by Eq. (3.16), with an artificial variable A  N(0, 1) and a polynomial family with k = 3

Fig. 3.35 PDF of U generated by SP derivative of the CDF furnished by Eq. (3.16), with an artificial variable A  N(0, 1) and a polynomial family with k = 3

consider a uniform artificial variable corresponding to Table 3.3. The first approach furnishes the results shown in Figs. 3.38, 3.39, and 3.40. Then, P̄X = 1.4068, s(PX) = 0.6244, P(X > 2.4) = 0.0941, and P(X < 0.7) = 0.1231, which are close to the exact values.


Fig. 3.36 Distribution of XR generated by collocation, using the sample of U generated by the artificial variable A  N(0, 1) and coefficients x determined by collocation (polynomial family, k = 6)

Fig. 3.37 PDF of XR generated by collocation, using the sample of U generated by the artificial variable A  N(0, 1) and coefficients x determined by collocation (polynomial family, k = 6). Here, the PDF was generated by sampleX


Table 3.3 An example of data including an artificial Uniform variable

A    0.002062061   0.222307650   0.242540573   0.270328112   0.534471773
U   -0.521897     -0.428266     -0.063814      0.110263      0.218055

A    0.547722789   0.694275970   0.698030683   0.826904916   0.927155268
U    0.439192      0.498947      0.818171      0.899228      0.972777

Fig. 3.38 Distribution of U generated by collocation, with an artificial variable A  U(0, 1) and a polynomial family with k = 3

Fig. 3.39 Distribution of XR generated by collocation, using the sample of U generated by the artificial variable A  U(0, 1) and coefficients x determined by collocation (polynomial family, k = 6)


Fig. 3.40 PDF of XR generated by collocation, using the sample of U generated by the artificial variable A  U(0, 1) and coefficients x determined by collocation (polynomial family, k = 6)

Table 3.4 A second example of data including an artificial Gaussian variable

A   -1.8596342   -0.8097976   -0.5309607   -0.4362206   -0.2881361
X    0.59339      0.65164      0.93818      1.11657      1.24366

A    0.3361204    0.5652095    1.0876513    1.2547856    2.3231017
X    1.55145      1.64699      2.26635      2.45770      2.64528

If we are not interested in the distribution of U, we can determine the distribution of X directly. For instance, let us consider the artificial sample in Table 3.4: we can apply the preceding approach to this data. For instance, collocation produces the results in Figs. 3.41 and 3.42. In this case, we obtain P̄X = 1.3996, s(PX) = 0.5672, P(X > 2.4) = 0.06145, and P(X < 0.7) = 0.09696. When dealing with implicit data (Remark 3.1.2), collocation leads to the solution of algebraic equations. Indeed, if (X, U) verifies ψ(X, U) = 0, we can consider the system of equations

ψ(PX(U_i), U_i) = 0, \quad i = 1, ..., ns.     (3.19)

This system of equations can be solved to furnish the coefficients of the expansion (3.3). As an example, let us consider that the line corresponding to the values of X in Table 3.1 is missing, but we have the information that ψ(X, U) = U - log X = 0. Then, we can consider the system of nonlinear equations


Fig. 3.41 Distribution of XR generated by collocation, using the sample of the artificial variable A  N(0, 1) and coefficients x determined by collocation (polynomial family, k = 4). Here, the distribution of U is not determined

Fig. 3.42 PDF of XRgenerated by the SP derivative of the CDF obtained with the sample of the artificial variable A  N(0, 1) and coefficients x determined by collocation (polynomial family, k = 4). Here, the distribution of U is not determined

log(PX(U_i)) - U_i = 0, \quad i = 1, ..., ns.

This system can be solved by R (see Sect. 1.14.2). For instance, we can use the package pracma and fsolve as follows:
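The original listing uses fsolve; a sketch of the same idea with pracma's lsqnonlin (which minimizes the sum of squared residuals of this overdetermined system; the starting point and names are illustrative, and ucs is the vector of observed U values):

library(pracma)
k   <- 6
Phi <- function(u) outer(u, 0:k, `^`)                 # basis matrix: Phi[r, j+1] = U_r^j
res <- function(x) log(pmax(as.vector(Phi(ucs) %*% x), .Machine$double.eps)) - ucs
x0  <- c(1, rep(0.1, k))                              # starting point (an assumption)
xpx <- lsqnonlin(res, x0)$x                           # coefficients of the expansion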


Then, variable xpx contains the coefficients x = (x_0, ..., x_k) of the expansion. The code furnishes

x = (0.59339371, 0.88693384, 0.66292699, 0.32883757, 0.12948471, 0.02573000, 0.01797236).

The RMS difference with the result by collocation is 1E-3. The comparison with the exact solution is shown in Figs. 3.43, 3.44, and 3.45. Now, P̄X = 1.3728, s(PX) = 0.5818, P(X > 2.4) = 0.06511 and P(X < 0.7) = 0.1105. The exact values are 1.3728, 0.5818, 0.06510, 0.1105. As a second example, let us consider again that the line corresponding to the values of X in Table 3.1 is missing, but we have the information that

dX/dU - X = 0, \quad X(0) - 1 = 0.

Then, we can consider the system of linear equations

dPX/dU(U_i) - PX(U_i) = 0, \quad i = 1, ..., ns; \qquad PX(0) - 1 = 0.

Again, we can use the package pracma and fsolve as follows:
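Since this system is linear in x, a sketch may also solve it directly by least squares (the book's listing uses fsolve; here qr.solve is applied to the overdetermined linear system, and ucs is again the vector of observed U values):

k    <- 6
Phi  <- function(u) outer(u, 0:k, `^`)                                      # phi_j(u) = u^j
dPhi <- function(u) cbind(0, outer(u, 1:k, function(u, j) j * u^(j - 1)))   # derivatives phi_j'(u)
M <- rbind(dPhi(ucs) - Phi(ucs),           # rows: dPX/dU(U_i) - PX(U_i) = 0
           Phi(0))                         # last row: PX(0) = 1
b <- c(rep(0, length(ucs)), 1)
xpx <- qr.solve(M, b)                      # least-squares solution of the linear system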


Fig. 3.43 Collocation results for the implicit data (Table 3.1 without the values of X, polynomial family, k = 6). The curves almost coincide

Fig. 3.44 Distribution of XR generated by collocation, using the implicit data (Table 3.1 without the values of X, polynomial family, k = 6)


Fig. 3.45 PDF of XR generated by collocation, using the implicit data (Table 3.1 without the values of X, polynomial family, k = 6)


Again, variable xpx contains the coefficients x = (x_0, ..., x_k) of the expansion. The code furnishes

x = (0.59339613, 0.88692806, 0.66322705, 0.32669316, 0.13487430, 0.02010552, 0.02005645).

The RMS difference with the result by collocation is 7E-4. The comparison with the exact solution is shown in Figs. 3.46, 3.47, and 3.48. In this case, P̄X = 1.3728,

Fig. 3.46. Collocation results for the implicit data (Table 3.1 without the values of X, polynomial family, k = 6)

Fig. 3.47 Distribution of XR generated by collocation, using the implicit data (Table 3.1 without the values of X, polynomial family, k = 6). Here, the CDF was generated by ecdf


Fig. 3.48 PDF of XR generated by collocation, using the implicit data (Table 3.1 without the values of X, polynomial family, k = 6). Here, the PDF was generated by sampleX

s(PX) = 0.5818, P(X > 2.4) = 0.06511 and P(X < 0.7) = 0.1105. The exact values are 1.3728, 0.5818, 0.06510, 0.1105.

Exercises
1. The following data were observed:

U    1.000    0.819    0.670    0.549    0.449    0.368
X   -1.033   -0.578   -0.296   -0.169   -0.097   -0.049

U    0.301    0.247    0.202    0.165    0.135    0.111
X   -0.029   -0.014   -0.009   -0.004   -0.002   -0.001

(a) Use the data above to determine by collocation a representation of X as a function of U.
(b) Assume that U ~ N(0.4, 0.3). Use the representation found to determine the CDF and the PDF of X.
(c) Assume that the distribution of U is unknown. Introduce an artificial uniformly distributed variable A and use it to determine a representation of U as a function of A by collocation. Use the representation found to determine the CDF and the PDF of X.


2. The following data were observed:

U    0.0580   0.0609   0.2197   0.3883   0.4677   0.5439
X    0.0000   0.0021   0.0060   0.0078   0.0123   0.0126

U    0.7550   0.8709   0.8761   0.9045   0.9133   0.9778
X    0.0050   0.0022   0.0023   0.0019   0.0020   0.0001

(a) Use the data above to determine by collocation a representation of X as a function of U.
(b) Assume that U ~ N(0.5, 0.1). Use the representation found to determine the CDF and the PDF of X.
(c) Assume that the distribution of U is unknown. Introduce an artificial uniformly distributed variable A and use it to determine a representation of U as a function of A by collocation. Use the representation found to determine the CDF and the PDF of X.
3. The following data were observed:

U    0.2138   0.2483   0.2947   0.2951   0.3354   0.3425
X    8.7482   8.7470   7.0831   6.1580   6.8340   5.7317

U    0.3627   0.3654   0.4223   0.5067   0.5422   0.5549
X    5.1731   5.1117   4.3880   4.6680   4.9407   4.6589

(a) Use the data above to determine by collocation a representation of X as a function of U.
(b) Assume that U ~ N(0.5, 0.1). Use the representation found to determine the CDF and the PDF of X.
(c) Assume that the distribution of U is unknown. Introduce an artificial uniformly distributed variable A and use it to determine a representation of U as a function of A by collocation. Use the representation found to determine the CDF and the PDF of X.
4. The following data were observed:

U    0.0866   0.4522   0.7285   0.7902   1.1223   1.3086
X    1.1770   2.0239   2.8004   2.9942   4.1943   5.0098

U    1.3672   1.4038   1.5366   1.5799   1.6650   1.6894
X    5.2917   5.4745   6.1855   6.4346   6.9506   7.1057



(a) Use the data above to determine by collocation a representation of X as a function of U.
(b) Assume that U ~ N(1, 0.5). Use the representation found to determine the CDF and the PDF of X.
(c) Assume that the distribution of U is unknown. Introduce an artificial uniformly distributed variable A and use it to determine a representation of U as a function of A by collocation. Use the representation found to determine the CDF and the PDF of X.
5. The following data were observed:

U    0.0866   0.4522   0.7285   0.7902   1.1223   1.3086
U    1.3672   1.4038   1.5366   1.5799   1.6650   1.6894

Consider a variable X such that

dX/dU = 1 + X, \quad X(0) = 1.

(a) Use the data above to determine by collocation a representation of X as a function of U.
(b) Assume that U ~ N(1, 0.5). Use the representation found to determine the CDF and the PDF of X.
6. The following data were observed:

U    0.3160   0.3572   0.4611   0.4716   0.5060   0.5162
U    0.5336   0.5379   0.5565   0.5846   0.6349   0.6519

Consider a variable X such that

dX/dU = U · X, \quad X(0) = 1.

(a) Use the data above to determine by collocation a representation of X as a function of U.
(b) Assume that U ~ N(0.5, 0.1). Use the representation found to determine the CDF and the PDF of X.

3.3 Variational Approximation

The variational approach is based on the notion of orthogonal projection, introduced in Sects. 1.15 and 2.5: given a random variable X ∈ V = L²(Ω, P) and a linear subspace S, PX – the orthogonal projection of X onto S – is the element of S which minimizes the distance to X; it is characterized by

PX ∈ S and E(w\, PX) = E(w X), \quad \forall w \in S.     (3.20)

In the context of UQ, we consider

F = \{\varphi_0, \varphi_1, ..., \varphi_k\}     (3.21)

and

S = [F] = \left\{ \sum_{j=0}^{k} a_j \varphi_j(U):\ a = (a_0, a_1, ..., a_k) \in \mathbb{R}^{k+1} \right\}.     (3.22)

Then, the condition PX ∈ S is equivalent to Eq. (3.3). We have

E\left(w \sum_{j=0}^{k} x_j \varphi_j(U)\right) = E(w X), \quad \forall w \in S.     (3.23)

Taking w = \varphi_i, we have

\sum_{j=0}^{k} x_j E(\varphi_i(U) \varphi_j(U)) = E(X \varphi_i(U)), \quad 0 \le i \le k.     (3.24)

Thus, the coefficients x = (x_0, ..., x_k) solve the linear system

A x = B,     (3.25)

with A = E(\Phi(U)^t \Phi(U)) and B = E(\Phi(U)^t X), id est,

A = (A_{ij}: 1 \le i \le k + 1, 1 \le j \le k + 1), \quad A_{ij} = E(\varphi_{i-1}(U) \varphi_{j-1}(U));     (3.26)

B = (B_i: 1 \le i \le k + 1), \quad B_i = E(X \varphi_{i-1}(U)).     (3.27)


Again, x is the solution of a linear system, now given by (3.26) and (3.27). Here, the matrix A is formed by k + 1 rows and k + 1 columns, so that the system is, in general, determined. If we use a sample analogous to Problem (3.1) to generate the linear system, then we approximate

A_{ij} \approx \frac{1}{ns} \sum_{r=1}^{ns} \varphi_{i-1}(U_r) \varphi_{j-1}(U_r); \qquad B_i \approx \frac{1}{ns} \sum_{r=1}^{ns} X_r \varphi_{i-1}(U_r).     (3.28)

In this case, the variational approximation corresponds to the least-squares solution of the linear system generated by collocation, with ε = 0 (see Eqs. (3.12) and (3.13)). In the situation where X is defined implicitly by an equation ψ(X, U) = 0, we can look for an expansion such that

PX ∈ S and E(w\, ψ(PX, U)) = 0, \quad \forall w \in S.     (3.29)

This variational equation leads to the equations

E(\varphi_i(U)\, ψ(PX(U), U)) = 0, \quad i = 0, ..., k.     (3.30)

This system of equations can be solved to furnish the coefficients x of the expansion (3.3). If we use a sample – as in Problem (3.1) – to generate the equations, we have

\frac{1}{ns} \sum_{r=1}^{ns} \varphi_i(U_r)\, ψ(PX(U_r), U_r) = 0, \quad i = 0, ..., k.     (3.31)

As an example, let us consider again the situation where the line corresponding to the values of X in Table 3.1 is missing, but we have the information that ψ(X, U) = U - log X = 0. Then, we can consider the system of nonlinear equations

\frac{1}{ns} \sum_{r=1}^{ns} \varphi_i(U_r)\, (\log(PX(U_r)) - U_r) = 0, \quad i = 0, ..., k.

As in the preceding, this system can be solved by R (see Sect. 1.14.2). For instance, we can use the package pracma and lsqnonlin as follows:
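A sketch of this computation (the residual vector implements Eq. (3.31) for ψ(X, U) = U − log X; the starting point is an assumption, here the collocation coefficients xpx, and ucs is the vector of observed U values):

library(pracma)
k   <- 6
Phi <- function(u) outer(u, 0:k, `^`)
res <- function(x) {                         # weighted residuals of Eq. (3.31)
  r <- log(pmax(as.vector(Phi(ucs) %*% x), .Machine$double.eps)) - ucs
  as.vector(t(Phi(ucs)) %*% r) / length(ucs)
}
sol <- lsqnonlin(res, xpx)                   # start from the collocation result
xpx_var <- sol$x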


Again, variable xpx contains the coefficients x = (x_0, ..., x_k) of the expansion. The code furnishes

x = (0.59552418, 0.89171359, 0.61551074, 0.35995845, 0.17374861, 0.04619873, -0.03890627).

The RMS difference with the result by collocation is 7E-4. The comparison with the exact solution is shown in Figs. 3.49, 3.50, and 3.51. In this case, P̄X = 1.373436, s(PX) = 0.5815821, P(X > 2.4) = 0.06539935 and P(X < 0.7) = 0.1084989 – the exact values are 1.372799, 0.581762, 0.06510334, 0.1105405. Let us consider also that the line corresponding to the values of X in Table 3.1 is missing, but we have the information that

dX/dU - X = 0, \quad X(0) - 1 = 0.


Fig. 3.49 Variational results for the implicit data (Table 3.1 without the values of X, polynomial family, k = 6)

Fig. 3.50 Distribution of XR generated by the variational approach, using the implicit data (Table 3.1 without the values of X, polynomial family, k = 6)


Fig. 3.51 PDF of XR generated by the variational approach, using the implicit data (Table 3.1 without the values of X, polynomial family, k = 6). Here, the PDF was generated by sampleX

We can use the package pracma and fsolve as follows:
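These weighted equations are linear in x; a sketch (swapping the book's fsolve for a direct least-squares solve of the k + 2 linear equations; ucs is the vector of observed U values) is:

k    <- 6
Phi  <- function(u) outer(u, 0:k, `^`)
dPhi <- function(u) cbind(0, outer(u, 1:k, function(u, j) j * u^(j - 1)))
W <- t(Phi(ucs)) %*% (dPhi(ucs) - Phi(ucs)) / length(ucs)   # E(phi_i (dPX/dU - PX)) = 0
M <- rbind(W, Phi(0))                                       # plus the condition PX(0) = 1
b <- c(rep(0, k + 1), 1)
xpx_var <- qr.solve(M, b)                                   # least-squares solution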


The code furnishes

x = (0.59336931, 0.88790104, 0.65760592, 0.33901535, 0.12324390, 0.02410768, 0.02003295).

Fig. 3.52 Variational results for the implicit data (Table 3.1 without the values of X, polynomial family, k = 6)

The comparison with the exact solution is shown in Figs. 3.52, 3.53, and 3.54. Now, P̄X = 1.372808, s(PX) = 0.5817713, P(X > 2.4) = 0.06510935 and P(X < 0.7) = 0.1105189 – exact values: 1.372799, 0.581762, 0.06510334, 0.1105405.

Exercises
1. The following data were observed:

U    1.000   0.819   0.670   0.549   0.449   0.368
U    0.301   0.247   0.202   0.165   0.135   0.111

Consider a variable X such that

X² - 3X + U = 0.



Fig. 3.53 Distribution of XR generated by the variational approach, using the implicit data (Table 3.1 without the values of X, polynomial family, k = 6)

Fig. 3.54 PDF of XR generated by the variational approach, using the using the implicit data (Table 3.1 without the values of X, polynomial family, k = 6). Here, the PDF was generated by sampleX


(a) Use the data above to determine a variational approximation of X as a function of U.
(b) Assume that U ~ N(0.4, 0.3). Use the representation found to determine the CDF and the PDF of X.
2. The following data were observed:

U    0.0580   0.0609   0.2197   0.3883   0.4677   0.5439
U    0.7550   0.8709   0.8761   0.9045   0.9133   0.9778

Consider a variable X such that

dX/dU + U² X = 0, \quad X(0) = 1.

(a) Use the data above to determine a variational approximation of X as a function of U.
(b) Assume that U ~ N(0.5, 0.1). Use the representation found to determine the CDF and the PDF of X.
3. The following data were observed:

U    0.2138   0.2483   0.2947   0.2951   0.3354   0.3425
U    0.3627   0.3654   0.4223   0.5067   0.5422   0.5549

Consider a variable X such that

dX/dU + U² + X = 0, \quad X(0) = 1.

(a) Use the data above to determine a variational approximation of X as a function of U.
(b) Assume that U ~ N(0.5, 0.1). Use the representation found to determine the CDF and the PDF of X.
4. The following data were observed:

U    0.0866   0.4522   0.7285   0.7902   1.1223   1.3086
U    1.3672   1.4038   1.5366   1.5799   1.6650   1.6894



Consider a variable X such that

dX/dU = 1 + X, \quad X(0) = 1.

(a) Use the data above to determine a variational approximation of X as a function of U.
(b) Assume that U ~ N(1, 0.5). Use the representation found to determine the CDF and the PDF of X.
5. The following data were observed:

U    0.3160   0.3572   0.4611   0.4716   0.5060   0.5162
U    0.5336   0.5379   0.5565   0.5846   0.6349   0.6519

Consider a variable X such that

dX/dU = U · X, \quad X(0) = 1.

(a) Use the data above to determine a variational approximation of X as a function of U.
(b) Assume that U ~ N(0.5, 0.1). Use the representation found to determine the CDF and the PDF of X.

3.4 Moments Matching Method

In the preceding situations, the connection between the variables X and U was empirically contained in the data or given by an equation. We may also be interested in situations where such information is missing. For instance, when we have only information about the values of X: a variability of the response of a system is observed, but we do not know the explaining variable U. If the information about U is missing, the methods above cannot be used directly: as seen in the preceding sections, we can introduce an artificial variable A and use it to characterize the variability observed. To do this, we must generate an artificial connection between A and X. In the approach by collocation, we arranged both series in increasing order, to generate a positive correlation between them. An alternative approach is the Moments Matching Method (M3). This method usually requests more data (id est, more values of X) than the preceding ones. It is also considered difficult to use, since it involves global optimization problems. In addition, M3 starts by breaking the connection between the observations of X and the values of PX, since it concentrates on the connection of their probability distributions: in principle, PX is not an approximation of X: their distributions may


be close, but the variables may differ significantly. Nevertheless, when M3 is correctly used, it may solve situations where U is unknown. In addition, M3 can be used as a complement of collocation, tending to improve the results. From the mathematical standpoint, M3 is based on a consequence of Levy's theorem (see Theorem 2.9.1), which shows that it is possible to approach the distribution of a random variable X by approximating its moments. Thus, we may find an approximation of the distribution of X by using the distribution of Z such that

M_i(Z) = M_i(X), \quad 1 \le i \le m.     (3.32)

To find such a distribution, an arbitrary regular random variable Z may be considered – the unique limitation is the existence of m moments. Taking Z = PX defined as in the preceding (Eq. 3.3) furnishes a nonlinear system of equations

F_p(x) = μ_p(x) - M_p(X) = 0, \quad 1 \le p \le m,     (3.33)

where

μ_p(x) = M_p(PX) = E((PX)^p) = E\left(\left(\sum_{j=0}^{k} x_j \varphi_j(U)\right)^p\right) = E((\Phi(U) x)^p).     (3.34)

One of the interesting features of this approach is the following one: here, U appears as arbitrary – the unique limitation remains the one formulated above: the existence of the first m moments of Z = Φ(U) x = \sum_{j=0}^{k} x_j \varphi_j(U). By this approach, the connection between U and X is generated by Eqs. (3.29) and Levy's theorem, so that – in principle – we can use a variable that has no connection with X. In practice, the use of a variable which is unconnected to X (so, independent from X) often requests more information on X – id est, a larger sample – to obtain good results. Of course, if the information about the couple (U, X) is available, the data for U may be used for the fitting, with better results for smaller sets of data. Errors in the determination of x may be interpreted as the introduction of noise: indeed, the characteristic function ϕX of X may be interpreted as the Fourier transform of fX – the probability density of X. An error in the evaluation of the coefficients x generates an error in the approximation of ϕX by ϕPX, which is equivalent to adding noise to fX – observe that a small error on the PDF fX may correspond to a large error on the CDF FX(x), which is the integral of fX on (-∞, x). This is one of the main difficulties when using the approach by M3: on the one hand, an excellent precision on x is necessary; on the other hand, we must solve a hard numerical problem, which requests adapted methods. Nevertheless, as previously remarked, when these problems are correctly solved, the approach M3 furnishes results for situations where the other approaches cannot be applied, such as, for instance, situations where only observations of X are given and U is unknown.


Table 3.5 Data for the method M3. Two situations are considered: U known or missing

U    0.110263   0.818171   0.218055    0.972777    0.498947
X    1.11657    2.26635    1.24366     2.64528     1.64699

U   -0.063814   0.948851  -0.752068    0.812861    0.825434
X    0.93818    2.58274    0.471391    2.25435     2.282872

U    0.217750  -0.2440565 -0.158727   -0.823237    0.077486
X    1.24328    0.783443   0.853229    0.439008    1.08057

U   -0.428266  -0.521897   0.439192    0.899228   -0.874752
X    0.65164    0.59339    1.55145     2.45770     0.416965

U    0.472945   0.435408   0.028596   -0.453719   -0.817560
X    1.60471    1.54559    1.02901     0.635261    0.441507

The approach by Moments Matching may be implemented according to different formulations, which are exposed in the sequel. We shall illustrate the method by considering the situation where U is a random variable having a uniform distribution on the interval (-1, 1) and X = e^U. We shall consider three situations: in the first one, the available information is a sample of 25 values of the couple (U, X) (Table 3.5) and the distribution of U is known. In the second one, we have the information in Table 3.5, but the distribution of U is unknown. In the last one, the available information consists only of the values of X – U is missing.

3.4.1 The Standard Formulation of M3

If m = k, Eqs. (3.29) and (3.30) can be solved by methods of resolution of nonlinear equations, such as, for instance, Newton–Raphson or fixed-point methods. When m > k, we must look for a generalized solution, such as, for instance, the minimum of the distance between the vectors of moments

J(x) = d(μ(x), M(X)),     (3.35)

where

μ(x) = (μ_1(x), ..., μ_m(x)), \quad M(X) = (M_1(X), ..., M_m(X)).     (3.36)

For instance, we may use a distance generated by a norm:

d_p(A, B) = \|A - B\|_p.     (3.37)

In practice, it may be interesting to use a pseudo-distance generated by using a vector of relative errors:


d_{p,rel}(A, B) = \left\| \frac{A - B}{B} \right\|_p, \qquad \frac{A - B}{B} = \left( \frac{A_1 - B_1}{B_1}, ..., \frac{A_m - B_m}{B_m} \right).     (3.38)

Classical norms can be used, such as the mean sum of absolute values (MSA), the root mean square (RMS), or the maximal absolute value (MaxA):

\|V\|_1 = \frac{1}{m} \sum_{i=1}^{m} |V_i| \quad (MSA),     (3.39)

\|V\|_2 = \sqrt{\frac{1}{m} \sum_{i=1}^{m} V_i^2} \quad (RMS),     (3.40)

\|V\|_\infty = \max\{|V_i|: 1 \le i \le m\} \quad (MaxA).     (3.41)

Although these norms are equivalent in theory, the numerical results may change significantly according to the starting point used and the norm chosen. Indeed, the main difficulty in this approach is to obtain a good approximation of the minimizers of J defined by (3.31). This function is characterized by its nonconvexity, so that it does not satisfy one of the main assumptions in the convergence results of the standard methods of optimization – the convexity of the objective function – here, of J. In the lack of convexity, the standard methods of deterministic optimization often fail unless the initial points are carefully chosen – in fact, initial points near the solution. To overcome the difficulty of convergence, global optimization methods were proposed by many authors. R proposes a package GA which implements genetic algorithms (see Sect. 1.13), which will be used here. Otherwise, the reader can find in the literature many algorithms to solve global optimization problems, including for constrained optimization (see, for instance, Pogu & Souza de Cursi, 1994; Autrique & Souza de Cursi, 1997; El Mouatasim et al., 2006; Bouhadi et al., 2004). Let us consider the first situation: the whole data in Table 3.5 is known and we look for the solution corresponding to k = 6, m = 10, d = d_{1,rel}. Many strategies can be used to carry out the optimization, namely concerning the choice of the initial point – a good candidate is the result generated by collocation. To illustrate the method, we start by a different procedure: we generate a starting point by a first optimization using d = d_{1,rel} and a starting point (suggestedSol) randomly generated:

x^{(0,1)} = (0.4277091, 0.2848874, 0.9316106, 0.3233701, 0.5220148, 0.1644089, 0.6758046).

The optimization by GA using as objective d_{1,rel} produces as result:

x^{(0,1)} = (-0.6806936, 3.402445, 3.595097, -2.45974, 4.659208, -3.720075, -5.146975).


Then, the optimization by GA using d_{1,rel} as objective and this point as starting point (suggestedSol) produces the result

x^{(1)} = (2.673211, -4.507739, -2.057077, 4.546916, 0.4474495, 4.04641, -3.065383).

If the procedure is iterated, using this point to start an optimization by GA using d = d_{1,rel} and using the result to start an optimization by GA with d_{1,rel}, we obtain at convergence

x^{(∞)} = (2.650768, -3.436297, -2.372657, 0.9982764, 3.07271, 2.5276, -1.442318).

The CDF and the PDF corresponding to this point are shown in Figs. 3.55 and 3.56. Notice that GA includes a stochastic part, so that the results may differ from one run to another. The starting point influences the result too. For instance, the same procedure with the starting point

x^{(0,1)} = (-0.8070135, -0.7510242, 0.7712421, -0.6448800, -0.8149873, 0.5284796, 0.7886524)

Fig. 3.55 CDF of X generated by M3 (Table 3.5, polynomial family, k = 6, m = 10, U and its distribution known)


Fig. 3.56 PDF of X generated by M3 (Table 3.5, polynomial family, k = 6, m = 10, U and its distribution known)

generated

x^{(∞)} = (2.650768, -3.436297, -2.372657, 0.9982764, 3.07271, 2.5276, -1.442318).

The corresponding results are shown in Figs. 3.57 and 3.58. As previously observed, the result can be improved by using as starting point the solution generated by collocation:

x^{(colloc)} = (0.4169667, 0.7702149, 0.7138546, 0.4247128, 0.2411957, 0.01980785, 0.0585251).

In this case, the starting point corresponds to d_{1,rel} = 5E-7, so that GA does not improve the result. The corresponding results are shown in Figs. 3.59 and 3.60. Let us consider the second situation: the distribution of U is unknown. In this case, we can choose the first approach, which starts by determining the distribution of U. Analogously to Sect. 3.2, we consider an artificial variable A ~ N(0, 1). We use rnorm to generate the sample in Table 3.6. We look for an approximation U ≈ Φ(A)u. For k = 5 and a polynomial family, collocation furnishes


Fig. 3.57 CDF of X generated by M3 (Table 3.5, polynomial family, k = 6, m = 10, U and its distribution known)

Fig. 3.58 PDF of X generated by M3 (Table 3.5, polynomial family, k = 6, m = 10, U and its distribution known)


Fig. 3.59 CDF of X generated by M3 (Table 3.5, polynomial family, k = 6, m = 10, U and its distribution known)

Fig. 3.60 PDF of X generated by M3 (Table 3.5, polynomial family, k = 6, m = 10, U and its distribution known)


Table 3.6 Data for the determination of the distribution of U by method M3

U   -0.8747520    -0.8232370    -0.8175600    -0.7520680    -0.5218970
A   -1.69282675   -1.36532620   -1.13764893   -0.90435314   -0.85557539

U   -0.4537190    -0.4282660    -0.2440565    -0.1587270    -0.0638140
A   -0.62413905   -0.55531186   -0.55029124   -0.48780361   -0.29600649

U    0.0285960     0.0774860     0.1102630     0.2177500     0.2180550
A   -0.17379275   -0.07466342    0.27389469    0.27389469    0.64491201

U    0.4354080     0.4391920     0.4729450     0.4989470     0.8128610
A    0.68163795    0.73221107    0.90818361    0.93777431    1.00175775

U    0.8181710     0.8254340     0.8992280     0.9488510     0.9727770
A    1.00923872    1.11946666    1.29903777    1.34054714    1.53373080

Fig. 3.61 CDF of U generated by M3 (Table 3.6, polynomial family, k = 5, m = 10, the distribution of U is unknown and we use an artificial variable A  N(0, 1))

u^{(colloc)} = (-0.8277595, -4.76959, 43.16617, -105.6562, 110.3948, -41.31127).

We use this value as starting point (suggestedSol) and the distance d2, rel, we obtain (Fig. 3.61) ð2Þ

u

=

- 0:8861719, - 0:2191236; 5:858474, - 3:138678, . . . : . . . , - 3:08195,2:50002

3.4

Moments Matching Method

307

Then, we determine an approximation X ≈ Φ(U )x. For k = 6 and a polynomial family, collocation furnishes xðcollocÞ =



0:4169667,0:7702149,0:7138546,0:4247128, . . . : . . . ,0:2411957,0:01980785,0:0585251

To determine the distribution of X, we can use the approximation U ≈ Φ(A)u to generate a large sample of U: rnorm generates 1E5 variates from A, from which we exclude the values outside the extremes of A in Table 3.6 : only the values in (-0.62413905, 1.53373080) are retained – we keep 89241 values). These values generate a sample from U, which is used to generate a sample of 89241 variates from X. The results are shown in Figs. 3.62 and 3.63. Analogously to Sect. 3.2, we can choose the direct determination of the distribution of X, without determining the distribution of U. In this case, we consider the data in Table 3.7. We use the package GA to minimize the distance d2, rel. The solution obtained by collocation is used as initial point (suggestedSol). The results are exposed in Figs. 3.64 and 3.65. For this approximation, PX = 1:2040,sðPX Þ = 0:7472 , P(X > 2.4) = 0.09269, and P(X < 0.7) = 0.3577. The values corresponding to collocation were PX = 1:1418,sðPX Þ = 0:7208 , P(X > 2.4) = 0.12, and P(X < 0.7) = 0.24. Notice that these results are obtained without any other information about X than the data in Table 3.7 – in addition, we use a Gaussian variable instead of an uniform one.

Fig. 3.62 CDF of X generated by collocation, using the data in Table 3.5 and the approximation U ≈ Φ(A)u

308

3

Representation of Random Variables

Fig. 3.63 PDF of X generated by collocation, using the data in Table 3.5 and the approximation U ≈ Φ(A)u Table 3.7 Data for the direct determination of the distribution of X by method M3 X A

0.416965 -1.69282675

0.439008 -1.36532620

0.441507 -1.13764893

0.471391 -0.90435314

0.593390 -0.85557539

X A

0.635261 -0.62413905

0.651640 -0.55531186

0.783443 -0.55029124

0.853229 -0.48780361

0.938180 -0.29600649

X A

1.029010 -0.17379275

X A

1.545590 0.68163795

1.551450 0.73221107

1.604710 0.90818361

1.646990 0.93777431

2.254350 1.00175775

X A

2.266350 1.00923872

2.282872 1.11946666

2.457700 1.29903777

2.582740 1.34054714

2.645280 1.53373080

3.4.2

1.080570 -0.07466342

1.116570 0.27389469

1.243280 0.27389469

1.243660 0.64491201

Alternative Formulations of M3

M3 can also be formulated under other forms, generally less efficient, but often easy to implement and less expensive in terms of computational cost. Alternative forms are • A constrained optimization problem (CO formulation) • A bi-objective optimization problem. (BO formulation)

3.4

Moments Matching Method

309

Fig. 3.64 CDF of X generated by M3 (Table 3.7, polynomial family, k = 4, m = 30. We use an artificial variable A  N(0, 1)). The starting point is furnished by collocation, which corresponds to the red discontinuous line

Fig. 3.65 PDF of X obtained by SP (kernel gaussian) derivation of the CDF generated by M3 (Table 3.7, polynomial family, k = 4, m = 30, artificial variable A  N(0, 1)). The continuous line is the solution by collocation

For instance, we can consider Eqs. (3.31) as constraints for the determination of the coefficients x: let A and B be the collocation matrices given by Eqs. (3.10) and (3.11): a good approximation minimizes the distance between the approximated

310

3

Representation of Random Variables

values Ax and the observed values B. By introducing a second distance (or pseudodistance) δ, we may look for   x = arg min δðAy, BÞ: F p ðyÞ = 0, 1 ≤ p ≤ m, y 2 ℝkþ1 In this formulation, the equality of the moments appears as a set of nonlinear constraints of optimization. As in the standard formulation, an alternative formulation is   x = arg min δðAy, BÞ: d ðμðyÞ, M ðX ÞÞ = 0, y 2 ℝkþ1 In practice, it may be convenient to introduce some flexibility by considering ε > 0 and   x = arg min δðAy, BÞ: dðμðyÞ, M ðX ÞÞ ≤ ε, y 2 ℝkþ1 Such a problem can be solved, for instance, by fmincon, from package pracma. We use this function in the examples below. As an example, let us consider the data in Table 3.6. We look for the distribution of U, using a formulation with δ = d = d2 and ε = 1E - 2. With a polynomial family, k = 3, m = 30, collocation furnishes u = (-0.9706, 1.4852, 0.9683, -0.4645), and M3-CO furnishes u = (-0.9584, 1.3791, 1.2194, -0.6338). For k = 4, m = 30, collocation furnishes u = (-0.8957, -0.5307, 10.3349, -14.8767, 7.0651); M3-CO furnishes u = (-0.9066, - 0.06879, 7.6982, -10.0856, 4.3702). Examples of results appear in Figs. 3.66 and 3.67. Let us perform a supplementary step: we use the representation PU = Φ(A)u of U to generate a large sample of 1E5 variates from U. Using the data in Table 3.5, we find by collocation the coefficients x of an approximation PX = Φ(U )x. An example of result is shown in Fig. 3.68. We have PX = 1:1381, sðPX Þ = 0:5656 , P(X > 2.4) = 0.0352, and P(X < 0.7) = 0.2665. In the bi-objective formulation, we can consider a vector of objectives f ðyÞ = ðf 1 ðyÞ, f 2 ðyÞÞ, f 1 ðyÞ = δðAy, BÞ, f 2 ðyÞ = dðμðyÞ, M ðX ÞÞ: Then, we look for the possible tradeoffs between these objectives. To exemplify the BO formulation, let us consider again the data in Table 3.6 and look for the distribution of U, using a formulation with δ = d = d2. With a polynomial family and k = 4, the variational approach presented in Sect. 1.13.3 furnishes the result in Fig. 3.69 – the minimization of the area is carried by fminunc from package pracma. From the figure, we see that the values – this may be interpreted as the fact that all the points furnish similar results. Indeed, as shown in Figs. 3.70, 3.71, 3.72, and 3.73, the results for t = 0.25 and t = 0.5 are similar and close to those furnished by collocation.

3.4

Moments Matching Method

311

Fig. 3.66 CDF of U generated by M3 in the CO formulation (Table 3.6, polynomial family, k = 3, m = 20. We use an artificial variable A  N(0, 1)). The starting point is furnished by collocation

Fig. 3.67 PDF of U obtained by SP (gaussian kernel) derivation of the CDF furnished by the CO formulation of M3 (Table 3.6, polynomial family, k = 3, m = 20, artificial variable A  N(0, 1))

Analogously to the preceding, we can determine the distribution of X using the approximations found for U: we can use them to generate a large sample from X and, then, use this sample to approximate de CDF of X. The CDF found for t = 0.5 is shown in Fig. 3.74. The result for t = 0.25 appears in Fig. 3.75.

312

3

Representation of Random Variables

Fig. 3.68 CDF of X generated by the second step: the coefficients x of PX = Φ(U )x are generated by collocation (data on Table 3.5) and the representation PU = Φ(A)u previously obtained is used to generate a large sample from U, which is used to determine the CDF

Fig. 3.69 Pareto’s front obtained using δ = d = d2 as distance

3.4

Moments Matching Method

Fig. 3.70 CDF corresponding to t = 0.25

Fig. 3.71 PDF corresponding to t = 0.25

313

314

Fig. 3.72 CDF corresponding to t = 0.5

Fig. 3.73 PDF corresponding to t = 0.5

3

Representation of Random Variables

3.4

Moments Matching Method

Fig. 3.74 CDF corresponding to t = 0.5

Fig. 3.75 CDF corresponding to t = 0.25

315

316

3

Representation of Random Variables

Exercises 1. The following data were observed: U X U X

1.000 -1.033 0.301 -0.029

0.819 -0.578 0.247 -0.014

0.670 -0.296 0.202 -0.009

0.549 -0.169 0.165 -0.004

0.449 -0.097 0.135 -0.002

0.368 -0.049 0.111 -0.001

Assume that the distribution of U is unknown. Introduce an artificial uniformly distributed variable A and use it to determine a representation of U as function of A by M3. Use the representation found to determine the CDF and the PDF of X. 2. The following data were observed: U X U X

0.0580 0.0000 0.7550 0.0050

0.0609 0.0021 0.8709 0.0022

0.2197 0.0060 0.8761 0.0023

0.3883 0.0078 0.9045 0.0019

0.4677 0.0123 0.9133 0.0020

0.5439 0.0126 0.9778 0.0001

Assume that the distribution of U is unknown. Introduce an artificial uniformly distributed variable A and use it to determine a representation of U as function of A by M3. Use the representation found to determine the CDF and the PDF of X. 3. The following data were observed: U X U X

0.2138 8.7482 0.3627 5.1731

0.2483 8.7470 0.3654 5.1117

0.2947 7.0831 0.4223 4.3880

0.2951 6.1580 0.5067 4.6680

0.3354 6.8340 0.5422 4.9407

0.3425 5.7317 0.5549 4.6589

Assume that the distribution of U is unknown. Introduce an artificial uniformly distributed variable A and use it to determine a representation of U as function of A by M3. Use the representation found to determine the CDF and the PDF of X. 4. The following data were observed: U X U X

0.0866 1.1770 1.3672 5.2917

0.4522 2.0239 1.4038 5.4745

0.7285 2.8004 1.5366 6.1855

0.7902 2.9942 1.5799 6.4346

1.1223 4.1943 1.6650 6.9506

1.3086 5.0098 1.6894 7.1057

Assume that the distribution of U is unknown. Introduce an artificial uniformly distributed variable A and use it to determine a representation of U as function of A by M3. Use the representation found to determine the CDF and the PDF of X.

3.5

Multidimensional Expansions

3.5

317

Multidimensional Expansions

The methods presented extend straightly to the situation where X or U are multidimensional.

3.5.1

Case Where U Is Multidimensional

In the situation where U = (U1, . . ., Un) is a multidimensional vector, the family {φi : 0 ≤ i ≤ k} is generated by functions φi, j : ℝn → ℝ. For instance, we may consider a tensor product of basis {φi, j : 0 ≤ i ≤ kj }: φℓn ði1 , ..., in Þ ðU Þ =

n Y j=1

  φi j , j U j :

The map ℓ n defines a renumbering of (i1, . . ., in) which brings the vector to a single integer. For instance, ℓ 2 ði1 , i2 Þ = i1 þ i2 k1 : and ℓ n ði1 , ::, in Þ = ℓ n - 1 ði1 , . . . , in - 1 Þ þ in ℓ n - 1 ðk1 , . . . , kn - 1 Þ: For instance, ℓ 3 ði1 , i2 , i3 Þ = ℓ 2 ði1 , i2 Þ þ i3 ℓ2 ðk1 , k2 Þ: Then, the approximation reads as X ≈ PX = ΦðU Þx =

k X ℓ=0

xℓ φℓ ðU Þ, k = ℓ n ðk1 , . . . , kn Þ;

id est, X ≈ PX =

k1 X k2 X i1 = 0 i2 = 0

...

kn X in = 0

xℓn φ1, i1 ðU 1 Þφ2, i2 ðU 2 Þ . . . φn, in ðU n Þ:

An example of creation of a tensor product of two basis is the following:

ð3:42Þ

318

3

Representation of Random Variables

Notice that the class includes methods lindex and invlindex that correspond to ℓ 2 and ℓ 2- 1 , respectively. Example 3.1 Let us consider the situation where the data is the following one: X U1 U2

4.45 0.95 -0.63

2.32 0.57 0.59

2.36 0.15 -0.18

3.28 1.00 0.56

X U1 U2

3.28 0.65 -0.31

1.74 0.27 0.84

1.93 0.34 0.64

2.79 0.30 -0.36

2.74 0.37 -0.26

4.98 0.92 -0.90

X U1 U2

1.65 0.19 0.82

3.33 0.81 -0.07

3.81 0.93 -0.25

3.73 0.29 -0.87

2.08 0.38 0.49

1.86 0.24 0.52

3.32 0.18 -0.75

2.54 0.38 -0.07

4.45 0.95 -0.63

2.02 0.32 0.44

2.19 0.24 0.09

3.09 0.88 0.40

2.73 0.59 0.08

4.03 0.78 -0.61

Let us consider an approximation by a polynomial family, with k1 = k2 = 3. Then, taking ai = min (Ui), bi = max (Ui). (continued)

3.5

Multidimensional Expansions

319

Example 3.1 (continued)

i

j u1 - a1 u2 - a2 φ1,i ðu1 Þ = , φ2,j ðu2 Þ = : b1 - a1 b2 - a2 and φjþ4i ðu1 , u2 Þ = φ1,i ðu1 Þ φ2,j ðu2 Þ: Here, the numbering is ℓ 2 ði1 , i2 Þ = i2 þ i1 ðk 2 þ 1Þ: The collocation approach furnishes the coefficients ℓ2 i j X ℓ2

0 0 0 3.61

1 0 1 -3.96

ℓ2 i j X ℓ2

8 2 0 0.48

9 2 1 0.56

2 0 2 2.56 10 2 2 -1.84

3 0 3 -0.61 11 2 3 0.98

4 1 0 1.10 12 3 0 -0.01

5 1 1 -1.30

6 1 2 3.11

7 1 3 -1.96

13 3 1 0.25

14 3 2 0.23

15 3 3 -0.17

The RMS error of the approximation is 2.9E-3. We used the code below (the environment contains x,u1,u2):

320

3.5.2

3

Representation of Random Variables

Case Where X Is Multidimensional

Let us assume that X = (X1, . . ., Xn)t. Then, the approximation reads as X ≈ PX =

k X j=1

 t xj φj ðU Þ, xj = x1,j , . . . , xn,j :

ð3:43Þ

Thus, each component is approximated by an expansion analogous to (3.3) or (3.5): X i ≈ PX i =

k X j=0

xi,j φj ðU Þ, i = 1, . . . , n

Consequently, we can apply the methods presented to each component of the variable. The coefficients x form a matrix =(xi, j, 1 ≤ i ≤ n, 0 ≤ j ≤ k ) – notice that each line of x contains the coefficients corresponding to the expansion of a component: column j of x contains the coefficients of the expansion of Xi: 0

x1,0 x=@ ⋮ xn,0

1 ⋯ x1 , k ⋯ ⋮ A: ⋯ xn , k

ð3:44Þ

We may also apply the methods to the whole matrix x. For instance, let us consider the approach by collocation: assume that the data are a sample of values S = fðU 1 , X1 Þ, . . . , ðU ns , Xns Þg

ð3:45Þ

Each Xi is a vector: Xi = (Xi, 1, . . ., Xi, n)t. Then, we may range the data in a matrix =(Xi, j, 1 ≤ i ≤ ns, 1 ≤ j ≤ n ) – notice that each line of X is an element of the sample: line i of X contains the values of Xi: 0

X 1 ,1 X=@ ⋮ X ns,1

1 ⋯ X 1, n ⋯ ⋮ A: ⋯ X ns, n

ð3:46Þ

Let A be the collocation matrix, defined in Eq. (3.10): the coefficients x verify Axt = X :

ð3:47Þ

approach: we consider the matrix The situation is analogous for the variational   = B i,j , 1 ≤ i ≤ k, 1 ≤ j ≤ n) such that B ij = E X j φi ðU Þ , i.e.:

3.5

Multidimensional Expansions

321

0

EðX 1 φ1 ðU ÞÞ ⋯ B=@ ⋮ ⋯ EðX 1 φk ðU ÞÞ ⋯

1 E ð X n φ1 ð U Þ Þ A: ⋮ E ð X n φk ð U Þ Þ

ð3:48Þ

Now, let A be the variational matrix, defined in Eq. (3.24): the coefficients x verify Eq. (3.47) yet. The extension of Moment Matching to random vectors requests some precaution: indeed, we must approach also the covariances, so that, we must consider the n Q i – the vector Xm approximation of generalized moments having the form E i i=1  of moments M(X) is replaced by a tensor of moments MðXÞ = M p1 ...pn =

n Q p E X i i , 0 ≤ pi ≤ mi g and we look for the minimum of i=1

J ðxÞ = dðμðxÞ, MðX ÞÞ,



ð3:49Þ

where μ(x) is the tensor of moments of the approximation: μðxÞ = n

 Q μp1 ...pn = E ðPX i Þpi , 0 ≤ pi ≤ mi . i=1

Example 3.2 Let us consider the situation where the data is the following one: U X1 X2

-0.95 -0.37 -0.16

U X1 X2

-0.64 -0.34 -0.90

-0.92 -0.37 -0.25 -1.59 -0.32 0.96

-0.92 -0.37 -0.25 -0.30 -0.22 -0.82

-0.86 -0.36 -0.42

-0.82 -0.36 -0.55

-0.26 -0.20 -0.72

-0.81 -0.36 -0.57

-0.24 -0.19 -0.69

-0.77 -0.36 -0.65

-0.22 -0.17 -0.63

0.03 0.03 0.10

-0.72 -0.35 -0.77 0.05 0.05 0.15

Let us consider an approximation by a polynomial family, with k = 7. The collocation approach furnishes the coefficients x1, j -0.32 x2, j 0.96

-4.56 11.72

37.90 -103.89

-133.58 315.23

248.33 -547.07

The RMS error of the approximation is 5E-3.

-255.26 556.80

138.77 -293.65

-31.22 60.05

322

3

Representation of Random Variables

Exercises 1. The following values of a variable X were observed. Assume that X is a function of U1 and U2. U1 U2 0,1 0,15 0,2 0,25 0,3

0,05 8,11 6,99 5,96 4,96 3,97

0,075 8,79 7,92 7,18 6,48 5,82

0,1 9,14 8,36 7,74 7,19 6,67

0,15 9,37 8,60 8,04 7,56 7,12

0,175 9,53 8,75 8,21 7,77 7,39

0,2 9,66 8,84 8,31 7,90 7,54

(a) Assume that the distributions of U1 and U2 are uniform. Then, determine a representation of X as a function of U and use it to determine the CDF and the PDF of X. (b) Assume that the distribution of U1 is triangular T(0.05,0.1,0.2) and the distribution of U2 is T(0.1, 0.2, 0.3). How this change impacts the representation determine in (a)? Determine the CDF and the PDF of X in this case and compare the results to those found in (a). 2. The following values of a variable X were observed. Assume that X is a function of U1 and U2. U1 U2 0,1 0,15 0,2 0,25 0,3

0,05 8,11 6,99 5,96 4,96 3,97

0,075 8,79 7,92 7,18 6,48 5,82

0,1 9,14 8,36 7,74 7,19 6,67

0,15 9,37 8,60 8,04 7,56 7,12

0,175 9,53 8,75 8,21 7,77 7,39

0,2 9,66 8,84 8,31 7,90 7,54

(a) Assume that the distributions of U1 and U2 are uniform. Then, determine a representation of X as a function of U and use it to determine the CDF and the PDF of X. (b) Assume that the distribution of U1 is triangular T(0.05,0.1,0.2) and the distribution of U2 is T(0.1, 0.2, 0.3). How this change impacts the representation determine in (a)? Determine the CDF and the PDF of X in this case and compare the results to those found in (a). 3. Consider the function Y = 10 xα1 xβ2 - 2 x1 - 5x2 (a) Assume that the distributions of α and β are uniformly distributed on (0.1, 0.5). Determine a representation of Y using a polynomial basis. (b) Assume α  T(0.1,0.2,0.5) and β  T(0.1, 0.4, 0.5). Determine a representation of Y using a polynomial basis.

3.6

Random Functions

3.6

323

Random Functions

The representation of random functions is an extension of the methods previously presented. To fix the ideas, let us consider a function X = X(t, U ) : t 2 (0, T ), depending upon a random variable U. In such a situation, we look for an expansion having time-dependent coefficients: X ðt, U Þ ≈ PX ðt, U Þ =

k X j=0

xj ðt Þφj ðU Þ:

ð3:50Þ

Equation (3.50) for random functions is the equivalent of Eq. (3.3) for random variables. The unknowns to be determined are the functions xi(t), t 2 (0, T ). For instance, assume that a sample from X is available: {Xi, j = X(ti, Uj): 0 ≤ i ≤ nt, 1 ≤ j ≤ ns}, with 0 = t0 < t1 < . . . < tnt = T. In such a situation, we may consider X = (X0, . . ., Xnt)t, where Xi = X(ti). X is a random vector of dimension nt + 1, so that we can apply the approach previously introduced for multidimensional random vectors: for each ti X ðt i , U Þ ≈ PX ðt i , U Þ =

k X j=0

xi,j φj ðU Þ:

The coefficients xi, j can be gathered in a matrix (3.44) and be determined by solving (3.47). Equivalently, we can consider a sample S i = fðU 1 , X i,1 Þ, . . . , ðU ns , X i, ns Þg and apply to S i the methods presented in the preceding. We have xj ðt i Þ ≈ xi,j , 0 ≤ i ≤ nt, 1 ≤ j ≤ ns:

ð3:51Þ

If necessary, the values of xj(t) for other values may be determined by interpolation. Example 3.3 Let Xt = sin (Ut), where U is a random variable. We can determine x(t) = (x0(t), . . ., xk(t))t by solving the linear system Axðt Þ = Bðt Þ, where, for 1 ≤ i, j ≤ k + 1 (continued)

324

3

Representation of Random Variables

Example 3.3 (continued)   Aij = E φi - 1 ðU Þφj - 1 ðU Þ , Bi ðt Þ = EðX ðt, U Þφi - 1 ðU ÞÞ: In practice, the means can be evaluated by numerical integration or by using a sample from U. It is also useful to discretize le time and evaluate the coefficients at discrete times, as previously indicated. For instance, if t 2 (0, Tmax), we can conside the discrete times ti = iΔt, i = 0, . . ., nt; Δt = Tmax/nt. For instance, assume that the PDF of U is evaluated by a function fde(u), the Hilbert basis is phi, the integrals are evaluated on (UMIN, UMAX). Then, A can be evaluated as follows:

Assume that X(t, U ) is evaluated by a function fonc(t,u) and the coefficients are determined at times given in the vector tc. Then, the coefficients can be determined as follows:

(continued)

3.6

Random Functions

325

Example 3.3 (continued) Let us consider U is uniformly distributed on (-π, π). In this case, we have EðX t Þ = 0, V ðX t Þ =

sin ð2πt Þ 1 2 4πt

For Tmax = 1, nt = 25, k = 6, we obtain the results in Fig. 3.76. The corresponding RMS error is 7.6E-6. Let us consider U is uniformly distributed on (0, 2π). In this case, we have E ðX t Þ =

sin 2 ðπt Þ - 8 sin 4 ðπt Þ þ πt ð4πt - sin ð4πt ÞÞ , V ðX t Þ = πt 8π 2 t 2

For Tmax = 1, nt = 25, k = 6, we obtain the results in Figs. 3.77 and 3.78. The corresponding RMS errors are 6.6E-6 and 7.7E-6, respectively. Assume that U  N(0, π) and that the expression of Xt is unknown: we have observations corresponding to ns = 20, nt = 25, Tmax = 1. We apply the procedure of collocation to determine a representation using a trigonometrical basis with k = 24 – recall that collocation coincides with the variational approach in such a situation, so that it is equivalent to approximate the means by the empirical means on the samples S i :

Fig. 3.76 Variance as function of time (U  U(-π, π)). The approximation furnishes values close to exact ones

(continued)

326

3

Representation of Random Variables

Example 3.3 (continued)

Fig. 3.77 Mean as function of time (U  U(0, 2π)). The approximation furnishes values close to exact ones

Aij ≈

ns ns 1 X 1 X φi - 1 ðU k Þφj - 1 ðU k Þ, Bi ðt Þ ≈ φ ðU ÞX ðU Þ: ns k = 1 ns k = 1 i - 1 k t k

With this approach, the RMS error on the approximation of the data was 6E-5. We generated a sample of 10000 variates from N(0, π) and evaluated the approximation and the exact value on this new sample. The RMS error on this new data was 7E-5. The RMS error in the approximation of the variance was 1.6E-5. A comparison of the variances is shown in Fig. 3.79. The exact solution is (Fig. 3.79) E ðX t Þ = 0, V ðX t Þ =

 2 1 1 - e - 19:73920880217872t : 2

(continued)

3.6

Random Functions

327

Example 3.3 (continued)

Fig. 3.78 Variance as function of time (U  U(0, 2π)). The approximation furnishes values close to exact ones

Fig. 3.79 Variance as function of time (U  N(0, π)). The approximation furnishes values close to exact ones

328

3

Representation of Random Variables

Example 3.4 Let Xt = eUt, where U is uniformly distributed on (-1, 1). We have E ðX t Þ =

et - e - t e2t - e - 2t , V ðX t Þ = - ðE ðX t ÞÞ2 2t 4t

The procedure produces the results shown in Figs. 3.80 and 3.81. The mean was calculated at times defined by nt = 25, Tmax = 1. The RMS error in the mean is 1.1E-6 and the RMS error in the variance is 3.5E-6. Let U  N(0, 1). Then t2

2

2

EðX t Þ = e 2 , V ðX t Þ = e2t - et : Assume that we do not have the information that Xt = etU, id est, that we do not know the expression of Xt. In place of this information, we have observations corresponding to ns = 20, nt = 25, Tmax = 1. We determine by collocation a representation using a polynomial basis with k = 6. The RMS error on the approximation of the data is 1.4E-3. We generated a sample of 10,000 variates from N(0, 1) and evaluated the approximation and the exact value on this new sample. The RMS error on this new data was 8.7E-3. A comparison of the means and variances is shown in Figs. 3.82 and 3.83.

Fig. 3.80 Mean as function of time (U  U(-1, 1)). The approximation is remarkably close to the exact values

(continued)

3.6

Random Functions

329

Example 3.4 (continued)

Fig. 3.81 Variance as function of time(U  U(-1, 1)). The approximation is awfully close to the exact values

Fig. 3.82 Mean as function of time (U  N(0, 1)). The approximation is close to the exact values

(continued)

330

3

Representation of Random Variables

Example 3.4 (continued)

Fig. 3.83 Variance as function of time (U  N(0, 1)). The approximation is close to the exact values

Example 3.5   Let us consider a capital path X t = X 0 - δs e - δt þ δs, where s, δ are uncertain. Assume that s is triangularly distributed T(0.15,0.2,0.3), δ is uniformly distributed on (0.05,0.15) and the variables are independent. Again, assume that we do not know the expression of Xt, but we have observations corresponding to the 25 combinations of the samples below: s δ

0.171553736 0.052162242

0.179011042 0.079594785

0.223251145 0.122753751

0.249395849 0.135923868

0.249458937 0.146172518

Assume that nt = 25, Tmax = 1. We apply the procedure of collocation to determine a representation using a polynomial basis with k1 = k2 = 3. Once the coefficients of the expansion are determined, we generate a large sample of 1E6 variates from U, which is used to generate samples of X ti (each sample has 1E6 variates). Figures 3.84 and 3.85 show the results for the mean and the variance. The corresponding RMS errors are 1.8E-7 and 1.6E-6, respectively. (continued)

3.6

Random Functions

331

Example 3.5 (continued)

Fig. 3.84 Mean as function of time (U1  T(0.15,0.3,0.2), U2  U(0.05,0.15)). The approximation is close to the exact values

Fig. 3.85 Variance as function of time (U1  T(0.15,0.3,0.2), U2  U(0.05,0.15)). The approximation is close to the exact values

332

3

Representation of Random Variables

Exercises   1. Consider X t = X 0 - δs e - δt þ δs, where s, δ are uncertain. (a) Assume that the distributions of s and δ are triangular: s  T(0.1, 0.25,0.5) and δ  T(0.05, 0.15, 0.25). (b) Determine a representation of Xt in the polynomial basis 1, u1 , u2 , u21 , u1 u2 , u22 , . . . (c) Determine a representation of Xt in the tensor basis generated by the trigonometric functions 1, sin(u), cos (u), sin (u1), cos (u), sin (2u), cos (2u), . . . . 2. Consider X t = ð1 - uÞ2 t4 , where u is uncertain. 2

Assume that u is a random variable, uniformly distributed on (0.2, 0.8). Determine a representation of Xt in the basis of trigonometric functions 1, sin(u), cos (u), sin (2u), cos (2u), . . . Consider the family Xt = u, if |t| < u; Xt = (|t| - u)2 + u, otherwise. Assume that the distribution of u is triangular: uT(0,0.5,1). (a) Determine a representation of Xt in the polynomial basis. (b) Determine a representation of Xt in the trigonometric basis 1, sin (u), cos (u), sin (2u), cos (2u), . . .

3.7

Random Curves

The analysis of uncertainties on demand curves reveals unsuspected difficulties at first sight. Let us recall that a curve C can be described either by a parameterization or by an implicit equation. A parameterized curve is described by two functions:   C = X = ðX 1 , X 2 Þ 2 ℝ2 : X 1 = X 1 ðt Þ, X 2 = X 2 ðt Þ, t 2 ð0, T Þ : An implicit curve is defined by the solution of an algebraic equation:   C = X = ðX 1 , X 2 Þ 2 ℝ2 : ψ ðX 1 , X 2 Þ = 0, with X 2 Ω : We must distinguish the equations defining a curve and the set of points forming the curve. Indeed, the set of points is unique, but its representation by a parameterization or an equation is not unique. As an example, let us consider a curve defined by the equation

3.7

Random Curves

333

pffiffiffiffiffiffi pffiffiffiffiffiffi ψ 1 ðX, U Þ = X 1 þ X 2 - U = 0, X 1 ≥ 0, X 2 ≥ 0: We can rewrite this equation under the form   pffiffiffiffiffiffi ψ 2 ðX, U Þ = X 2 - X 1 - 2U X 1 þ U 2 = 0, X 1 ≥ 0: or, equivalently,   pffiffiffiffiffiffi ψ 3 ðX, U Þ = X 1 - X 2 - 2U X 2 þ U 2 = 0, X 2 ≥ 0: We can also generate a parametric representation of the curve: X 1 ðt Þ = t 2 U 2 , x2 ðt Þ = ð1 - t Þ2 U 2 , t 2 ð0, 1Þ The non-uniqueness of the representation introduces difficulties: according to the description used, the statistical properties may differ (see Fig. 3.86). For instance, let us suppose that U is uniformly distributed on (0, 1). As an example, let us try to evaluate the mean of the family of curves. The mean parameterization Xi ðt Þ = EðX i ðt, U ÞÞ furnishes

Fig. 3.86 Example of variability of the mean curve: according to the description used, the mean equation leads to a different curve

334

3

Representation of Random Variables

ð1 - t Þ2 t2 X1 ðt Þ = , X2 ðt Þ = , t 2 ð0, 1Þ: 3 3 The mean equations ψi ðX Þ = E ðψ i ðX, U ÞÞ = 0 furnish: pffiffiffiffiffiffi pffiffiffiffiffiffi ψ1 ðX Þ = X 1 þ X 2 - 1=2 = 0, X 1 ≥ 0, X 2 ≥ 0:   pffiffiffiffiffiffi ψ2 ðXÞ = X 2 - X 1 - X 1 þ 1=3 = 0, X 1 ≥ 0, X 2 ≥ 0:   pffiffiffiffiffiffi ψ3 ðXÞ = X 1 - X 2 - X 2 þ 1=3 = 0, X 1 ≥ 0, X 2 ≥ 0: As shown in Fig. 3.86, each description of the curve furnishes a different mean. The same difficulties appear when looking for expansions. For instance, if we consider an expansion in a polynomial basis, using k = 6, we get  PX = t 2 U 2 , ð1 - t Þ2 U 2 ðparametricÞ   PX = t 2 , t 2 - 2Ut þ U 2 ðimplicit ψ 1 Þ  pffi 1 PX = t, - U t þ ð1 þ 3t Þ ðimplicit ψ 2 Þ 3  pffi 1 PX = - U t þ ð1 þ 3t Þ, t ðimplicit ψ 3 Þ 3 To construct a procedure which is independent of the parameterization or the implicit equation, we may consider the set of points C and build a method based on them: indeed, the set of points C is the same independently of the way the equations defining the curve are written. For instance, we may look for the sets of points generated by the representation PX ðt, U Þ =

k X j=0

xj ðt Þφj ðU Þ

and look for the representation that generates the best set of points, in the sense that they minimize the distance to C. Formally, we consider a curve of the family as a set of points, whatever the choice of form to describe them: either   C ðU Þ = X = ðX 1 , X 2 Þ 2 ℝ2 : X 1 = X 1 ðt, U Þ, X 2 = X 2 ðt, U Þ, t 2 ð0, T Þ ð3:52Þ or   C ðU Þ = X = ð X 1 , X 2 Þ 2 ℝ 2 : ψ ðX 1 , X 2 , U Þ = 0 :

ð3:53Þ

3.7

Random Curves

335

The data will be the sets of points C ðu1 Þ, . . . , C ðuns Þ . Then, we consider the curves generated by the representations: ( D ðx, uÞ = PX = ðPX 1 , PX 2 Þ 2 ℝ : PX i = 2

k X j=0

) xi,j ðsÞφj ðU Þ: s 2 ðs min , s max Þ :

For each element from the sample, we may measure the distance between D ðx, ui Þ and C ðui Þ: di ðxÞ = dist ð D ðx, ui Þ, C ðui ÞÞ:

ð3:54Þ

Here, dist is an adequate function to measure the distance between sets of points, such as, for instance, the distance from the points of D ðx, ui Þ to C ðuÞ: dist ðA, BÞ = dSH ðA, BÞ = sup inf ka - bk a2A b2B

ð3:55Þ

or the Hausdorff distance: dist ðA, BÞ = d H ðA, BÞ = max fd SH ðA, BÞ, d SH ðB, AÞg

ð3:56Þ

The distances are collected in a vector dðxÞ = ðd1 ðxÞ, . . . , dns ðxÞÞt :

ð3:57Þ

Then, we look for the coefficients x that minimize a measure g(d(x)) of the magnitude of d(x), such as, for instance, its RMS norm: sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ns 1 X 2 d ð xÞ : g ð d ð xÞ Þ = ns i = 1 i

ð3:58Þ

Alternatively, we may consider other measures g, such as, for instance, gðdðxÞÞ = max fd i ðxÞ: 1 ≤ i ≤ nsg: Thus, we must consider two situations: • Either the parameterization or implicit equation appears as clear and undoubtful. Then, we may use it to determine the representation: we consider the components as functions, and we apply the methods previously presented for functions (Sect. 3.3). • Or it is dubious, and we may use the approach by the Hausdorff distance.

336

3

Representation of Random Variables

Fig. 3.87 Example of code for the evaluation of the distances

The distances can be evaluated by using the class curve defined in Fig. 3.87. In this class, distp2p evaluates the Euclidean distance between two points; distp2c calculates the distance of a point to a curve (which is a list of points); distc2c evaluates dSH; disthauss evaluates dH; distpairedfam evaluates di = dH(F1i, F2i) for all the elements of two families F1, F2 of paired curves; distfam evaluates the di = min {dH(F1i, F2j) : j} for two families of curves F1, F2; distX evaluates the norm of a vector d(x) of Haussdorff distances between the curves generated by the coefficients given and a family – he is an object of type expansion1D, which defines the Hilbert expansion to be used; u is vector of random numbers and F is the family. As an example, let us consider two families of circles: F1 is a family of 5 circles, centered at the origin and having radius 1,2,3,4,5. F2 is a family of 5 identical circles of radius 5. Each element of each family is discretized with 51 points (angles equally spaced). We have dsh(F1[[i]],F2[[i]]) = dh(F1[[i]],F2[[i]])=5-i, dsh(F1[[i]],F1[[j]]) = dh(F1[[i]],F1[[j]]) = Abs(i-j). The results are the following:

3.7

Random Curves

337

To determine a representation of the family F1, we may consider the parametric representation x1 ðθÞ = U cos ðθÞ, x2 ðt Þ = U sin ðθÞ, θ 2 ð0, 2πÞ We consider k = 3 and determine the coefficients x = (xi, j(θℓ) : 1 ≤ i ≤ 2, 0 ≤ j ≤ k, 0 ≤ ℓ ≤ 50). In the function approach presented in the previous section, the coefficients can be determined by collocation at each angle, using the available data. An example of prediction for r = 3.5 is shown in Fig. 3.88. The corresponding RMS error is 2.7E-14. As an alternative, we can look for the coefficients that minimize the distance (3.58), which corresponds to distX. The prediction for r = 3.5 is shown in Fig. 3.89. The corresponding RMS error is 2.8E-14.

Fig. 3.88 Prediction of the curve for r = 3.5, by the function approach, with a polynomial basis, with k = 3

338

3

Representation of Random Variables

Fig. 3.89 Prediction for r = 3.5 by the Haussdorff approach, using a polynomial basis, with k = 3

Example 3.6 Let us consider the following multiobjective optimization problem (similar to Binh-Korn presented in Sect. 1.13.3. See also (Bassi et al., 2018)): f 1 ðxÞ = 4ðx1 - u1 Þ2 þ 4ðx2 - u2 Þ2  2  u u 2 f 2 ð xÞ = x 1 þ 1 - 5 þ x2 - 2 10 10 ðx1 - 5Þ2 þ x22 ≤ 25, ðx1 - 8Þ2 þ ðx2 þ 3Þ2 ≥ 7:7, 0 ≤ x1 ≤ 5, 0 ≤ x2 ≤ 3 Assume that u1~N(2, 0.1) and u2~T(0,0.5,1) and these variables are independent. In this case, the Pareto front associated to the multiobjective optimization problem is a random curve. To determine a representation, we can consider a sample from the variables. Since they are independent, we may consider independent samples. For instance, let us consider the samples in Table 3.8. Table 3.8 Two small samples for the representation of a Pareto’s front u1 u2

1.793437 0.210664

1.88848 0.407556

1.999165 0.659623

2.010893 0.787493

2.01773 0.823312

(continued)

3.7

Random Curves

339

Example 3.6 (continued) We have 5 values from each variable: independence allows us to generate a sample formed by 25 couples u1i , u2j : we generate a sample of 25 Pareto fronts using these values – the bi-objective optimization problem is solved by the method presented in Sect. 1.13.3, with discretized times tj = 0.01  j, 0 ≤ j ≤ 100. At this stage, we can adopt three different strategies, all based on the fact that the approach presented in Sect. 1.13.3 involves a unique class of parameterization for all the curves: each front is represented by Fi = {Fi, j = ( f1, i, j, f2, i, j), 0 ≤ j ≤ 100}, fk, i, j = fk(xi(tj)), k = 1, 2, with k X   ci,j φ0i ðt Þ, t 2 ð0, 1Þ: xi ðt Þ = x1,i þ x2,i - x1,i t þ j=1

These parameterizations suggest three approaches: • Method 1: We can collect the points {F1, j, . . ., F25, j} in a sample and determine a representation for these points. Repeating the procedure for all the 101 points, we obtain a representation of the curves. • Method 2: We can collect the coefficients {c1, j, . . ., c25, j} in a sample and find a representation for cj(U ). Repeating the procedure for all the indexes, 1 ≤ i ≤ k, we determine a representation of the Pareto fronts. • Method 3: We can collect the points {x1(tj), . . ., x25(tj)} in a sample and find a representation for x(tj, U ). Repeating the procedure for all the 101 values of tj, we determine a representation of the Pareto fronts. For each one of these approaches, we determined the coefficients of an expansion with polynomial families and k1 = k2 = 3. The representations can be used to generate larger samples from the Pareto front. As an example, Figs. 3.90, 3.91, and 3.92 compare the approximation generated by the three methods above with the solution of the multiobjective problem, for the values u1 = 1.5, u2 = 0.1. Now, assume that u1~T(1.5,2, 2.5) and u2~T(0,0.5,1):consider the samples in Table 3.9. Figure 3.93 compares the fronts generated by the expansion and by the solution of the multiobjective problem, for the values u1 = 1.5; u2 = 0.1. (continued)

340

3

Representation of Random Variables

Example 3.6 (continued)

Fig. 3.90 Comparison between the Pareto front generated by the expansion and the solution of the multiobjective optimization problem (method 1)

Fig. 3.91 Comparison between the Pareto front generated by the expansion and the solution of the multiobjective optimization problem (method 2)

(continued)

3.7

Random Curves

341

Example 3.6 (continued)

Fig. 3.92 Comparison between the Pareto front generated by the expansion and the solution of the multiobjective optimization problem (method 3) Table 3.9 A small sample for the representation of Pareto fronts u1 u2

1.755633 0.210664

1.924332 0.407556

2.021606 0.659623

2.204238 0.787493

2.405309 0.823312

Fig. 3.93 Comparison between the Pareto front generated by the expansion and the solution of the multiobjective optimization problem (method 3)

342

3

Representation of Random Variables

Exercises Consider X = 10 xα1 xβ2 - 2 x1 - 5x2 (a) Assume that the distributions of α and β are uniform. Then, determine an expansion of X as a function of U in a polynomial basis. (b) Assume that the distribution of α is triangular T(0.05,0.1,0.2) and the distribution of β is T(0.1, 0.2, 0.3). Determine the new expansion. Compare the results to those found in (a). 1. Consider the multiobjective optimization problem where f 1 ðxÞ = r 1 x1 þ r 2 x2 þ r 3 ð1 - x1 - x2 Þ,  f 2 ðxÞ = σ2 β21 r 21 x21 þ β22 r 22 x22 þ ð1 - x1 - x2 Þ2 r 23 β23 , x1 ≥ 0, x2 ≥ 0, x1 þ x2 ≤ 1: Let r1 = 0.1, r2 = 0.15, r3 = 0.05, σ = 0.2. Assume that the coefficients β1, β2 are uncertain and independent. Consider the small sample β1 β2

0.984164403 1.12733492

0.996937932 1.187675572

1.017274687 1.230599464

1.062638505 1.236082147

1.091857263 1.272463665

(a) Analogously to Example 3.6, determine a representation of the Pareto front of this problem. Use the representation to determine the front for β1 = 1.05, β2 = 1.2.

3.8

Mean, Variance, and Confidence Intervals for Random Functions or Random Curves

In Sect. 2.10, we saw that we may use samples to obtain some statistics of random variables, such as the empirical mean and the variance and to determine confidence intervals that consider the margin of error connected to the fact that the samples do not represent the whole population. In Sects. 3.2, 3.3, and 3.4 of this chapter, we saw that we may use the samples to generate representations of random variables and then use the representations to generate larger samples and use them to obtain the CDF and the PDF of the variables under consideration. Larger samples may also be used to evaluate the means and improve the evaluation of the variance and of the confidence interval. In Sect. 3.5, these approaches were extended to multidimensional situations, so that analogous procedures may be used for vectors of random variables.

3.8

Mean, Variance, and Confidence Intervals for Random Functions or Random Curves

343

However, their extension to random functions and curves may face difficulties: as pointed in the preceding section, the determination of the mean of a family of random curves must consider the non-uniqueness of the parametric and the implicit representations. These difficulties may concern also random functions, except for the situations where the parameterization to be used is undoubtful. Indeed, analogously to Sect. 3.7, we may look for the mean of a family of curves either by considering a parameterization or the Hausdorff approach. Let us recall the definition of the family of curves (Eq. 3.52):   C ðuÞ = X = ðX 1 , X 2 Þ 2 ℝ2 : X 1 = X 1 ðt, uÞ, X 2 = X 2 ðt, uÞ, t 2 ð0, T Þ The mean of the family is a curve    = ðX1 , X2 Þ 2 ℝ2 : X1 = X1 ðt Þ, X2 = X2 ðt Þ, t 2 ð0, T Þ E ðC Þ = X As shown in the preceding section, the solution consisting in taking the mean representation at each time Xi ðt Þ = EðX i ðt, uÞÞ may lead to a result that depends on the parameterization used. As an alternative, we can use the variational formulation of the mean: if X is a random variable taking values in ℝ, then o n  EðX Þ = arg min J ðyÞ = E ðX - yÞ2 : y 2 ℝ , id est, E(X) is the real number that minimizes the mean quadratic deviation from X. Analogously, the median is the value that minimizes the mean absolute deviation: med ðX Þ = arg min fI ðyÞ = Eðjx - yjÞ: y 2 ℝg, The variance is the value of J(E(X)) = E((X - E(X))2). We may use these properties to determine the mean, the median, and the variance of families of curves. Let   D = Y = ðY 1 , Y 2 Þ 2 ℝ2 : Y 1 = Y 1 ðt Þ, Y 2 = Y 2 ðt Þ, t 2 ð0, T Þ Then, we shall look for a curve E ðC Þ such that    EðC Þ = arg min J ðD Þ = E dist 2 ðD, C ðuÞÞ , where dist is a measure of the distance between the curves D and C ðuÞ. Analogously to Sect. 3.3, we may consider either a distance such as

344

3

ZT dist ðD, C ðuÞÞ = 2

Representation of Random Variables

 ðt Þk dt kXðt, uÞ - X 2

ð3:59Þ

0

or one of the set distances (for instance, the Hausdorf distance (3.56)). In practice, only samples from X are available. Let C = fC i = C ðui Þ: 1 ≤ i ≤ nsg be a sample formed by ns variates from C ðuÞ: as an approximation, we may look for     EðC Þ ≈ C = arg min J ðD Þ = E dist 2 ðD, C i Þ : C i 2 C :

ð3:60Þ

For practical purposes, the distance is often not squared, and we look for E ðC Þ ≈ C = arg min fI ðD Þ = Eðdist ðD, C i ÞÞ: C i 2 Cg:

ð3:61Þ

Notice that, formally, this definition corresponds to the median of the family, not to the mean. Then, we may proceed into a way analogous to the one used in Sects. 3.1, 3.2, 3.3, 3.4, and 3.5: consider a family {φj : 0 ≤ j ≤ k} and ( D ðxÞ = PX = ðPX 1 , PX 2 Þ 2 ℝ2 : PX i =

k X j=0

) xi,j φj ðsÞ: s 2 ðs min , s max Þ :

Then, we may determine the coefficients x that minimize a measure g(d(x)) of the magnitude of d(x), given by dðxÞ = ðd 1 ðxÞ, . . . , dns ðxÞÞt ; d i ðxÞ = dist ðC ðui Þ, D ðxÞÞ: For instance, we may take g(d) as the norm of d (as in Eq. (3.58)). If a discretization in s is used, such as smin = s0 < s1 < . . . < snp = smax, we can look for the values of PX(si), 1 ≤ i ≤ np, i. e., for 2 np unknown values defining np points in ℝ2. However, these formulations present some difficulties yet: indeed, the optimization problems (3.60) and (3.61) may have multiple solutions and it may be necessary to impose supplementary conditions to select the convenient ones. A simpler solution may be found, which may be employed if the sample under consideration is larger enough and all its elements have the same probability: we can look for the element of the sample which is the best representative of all the sample’s population, id est:      E ðC Þ ≈ C = arg min J ðC i Þ = E dist C i , C j : C j 2 C :

ð3:62Þ

In this case, we determine one of the curves of the family, corresponding to the minimal mean Hausdorff distance to the other members of the family – it can be considered as the best representative of the family – the element which is minimally

3.8

Mean, Variance, and Confidence Intervals for Random Functions or Random Curves

345

off-centered. One of the interesting features of this approach is that it produces as a result a member of the family, id est, a curve having the same general properties as the other members of the family. In both the cases, the variance is estimated by J ðC Þ. Confidence intervals with a risk α may be generated by considering quantiles: the confidence interval is formed by the curves of the family that generate a subset C conf such that C 2 C conf and Pð C conf Þ = 1 - α. Example 3.7 Let us consider the family of circles C ðuÞ = fXðt, uÞ = r ð sin ð2π ðt þ uÞÞ, cos ð2π ðt þ uÞÞÞ, t 2 ð0, 1Þg with u uniformly distributed on (1, 3). All the values of u generate the same circle, centered at the origin and having a radius r. Thus, we expect that the mean circle is the single curve of the family: o n  = ðX1 , X2 Þ 2 ℝ2 : X1 2 ðt Þ þ X2 2 ðt Þ = r 2 , t 2 ð0, 1Þ E ðC Þ = X We have E(X1(t, u)) = E(X2(t, u)) = 0, so that the approach by the mean equation leads to the incorrect result X1 ðt Þ = X2 ðt Þ = 0, 8t 2 ð0, 1Þ . Using Eq. (3.59) leads to the same result. Indeed, 0 E@

Z1

1  ðt Þk dt A = kX ðt, uÞ - X

0

2

Z1

 ðt Þk dt þ 1, kX 2

0

 ðt Þ = 0, 8t. so that the minimum corresponds to X Let us use the Hausdorff distance with a discretization of the solution: we look for the points PXi = (PX1, i, PX2, i), 1 ≤ i ≤ np that minimize the mean Hausdorff distance to the family. To fix the ideas, let us consider the situation where we consider a single point: np = 1, PX1 = (a, b). We have: d SH ðPX 1 , C ðuÞÞ = jr -

pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi a2 þ b2 j, dSH ð C ðuÞ, PX 1 Þ = r þ a2 þ b2

Thus, the Hausdorff distance between the circle of radius r and the point is pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi r þ a2 þ b2, so that the minimum is attained for a2 þ b2 = 0, id est, when PX1 = (0, 0). When considering np points, PXi = (ai, bi), 1 ≤ i ≤ np, we have (continued)

346

3

Representation of Random Variables

Example 3.7 (continued)

qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 2 2 dSH ðPX i , C ðuÞÞ = jr - ai þ bi j, dSH ð C ðuÞ, PX i Þ = r þ a2i þ b2i ,

so that the result is similar: the minimum is attained when a2i þ b2i = 0,id est, when the points coincide at the origin. C ðuÞÞ leads to a more significant result: the Notice that the SH ðPX i ,  use ofqdffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi distance is max jr - a2i þ b2i j , which is minimal when a2i þ b2i = r, id est, when the points belong to the circle – but they can coincide. Finally, let us consider the approach (3.62): both dSH and the Hausdorff distance dH verify   dist C i , C j = 0, 8i, j: Thus, any member of the family is a solution E ðC Þ = C i , 8i – this result corresponds to the expected one (recall that all the C i generate the same circle). In this case, the variance is equal to zero and the confidence intervals contain the single member of the family (Fig. 3.94).

Fig. 3.94 A situation that frustrates several methods: the family is formed by a single circle of radius r = 3, with several different parameterizations. We expect the single curve of the family as its mean. However, many approaches furnish (0,0) as the mean

3.8

Mean, Variance, and Confidence Intervals for Random Functions or Random Curves

347

Example 3.8 Let us consider the family of circles C ðuÞ = fXðt, uÞ = uð sin ð2πt Þ, cos ð2πt ÞÞ, t 2 ð0, 1Þg with u uniformly distributed on (1, 5). The family is formed by concentric circles centered at the origin and having a radius u 2 (1, 5). Thus, we expect that the mean circle is the circle of mean radius: n o  = ðX1 , X2 Þ 2 ℝ2 : X1 2 ðt Þ þ X2 2 ðt Þ = 32 , t 2 ð0, 1Þ E ðC Þ = X We have EðX 1 ðt, uÞÞ = 3 sin ð2πt Þ, E ðX 2 ðt, uÞÞ = 3 cos ð2πt Þ so that the approach by the mean equation leads to the correct. Using Eq. (3.59) leads to the same result. Indeed, 0 E@

Z1

1  ðt Þk2 dt A = J 1 ðX1 Þ þ J 2 ðX2 Þ, kX ðt, uÞ - X

0

where (c is a convenient constant) J 1 ðX1 Þ =

Z1 

2 X1 ðt Þ - 6X1 ðt Þ sin ð2πt Þ dt þ c,

0

J 2 ðX2 Þ =

Z1  2 X2 ðt Þ - 6X2 ðt Þ cos ð2πt Þ dt þ c: 0

Thus, the minimum is attained at X1 ðt Þ = 3 sin ð2πt Þ, X2 ðt Þ = 3 cos ð2πt Þ. Approach (3.62) also furnishes the exact result. In this situation, the confidence interval with risk α is formed by the circles having radius from 3 - Δ to 3 + Δ, Δ = 2(1 - α). This region contains a proportion 1 - α of the family (Fig. 3.95). (continued)

348

3

Representation of Random Variables

Example 3.8 (continued)

Fig. 3.95 A situation where all the approaches furnish the exact result: the family is formed by concentric circles of radius u 2 (1, 5), with the same parameterization. The blue circles delimit the confidence interval for α = 10%

Example 3.9 Let us consider the Pareto Front associated to the Binh-Korn problem involving two independent random variables u1~N(2, 0.1) and u2~T(0,0.5,1). As shown in Example 3.7.1, we may use a small sample and a discretization ti = 0.01  i, 0 ≤ i ≤ 100 to determine a representation of the family of the (continued)

3.8

Mean, Variance, and Confidence Intervals for Random Functions or Random Curves

349

Example 3.9 (continued) Pareto fronts. Once the representation is determined, with the same data and parameters as in Example 3.6, we can generate a large sample of 10000 Pareto fronts using samples of 100 variates from each variable ui, i = 1, 2. Then, we determine the mean front as previously mentioned: we determine the most representative front of the family by looking for the front that minimizes the mean distance to the other fronts of the family. After that, we generate a confidence interval by considering the 80% members of the family of fronts which are the closer to the mean front. The results are shown in Fig. 3.96 Figure 3.97 compares the mean front generated by three methods. The first method (solutions) consists in solving the multiobjective problem (method of Sect. 1.13.3) for a sample of 2500 variates, generated by two samples of 50 values of each ui, i = 1, 2. Then, we determine the mean of the family of fronts as previously: we evaluate the Hausdorff distance of each member of the family to all the other members of the family and we look for the element that minimizes its own mean distance to the family. The second method (expansion) uses the sample of 1E4 fronts previously generated: we calculate its mean as being the most representative element, by the same way (minimization of the mean distance). The third method (temporal mean) consists in evaluate the mean point for each ti for 0 ≤ i ≤ 100.

Fig. 3.96 Determination of the Mean Pareto Front and a confidence interval from the expansion, which is used to generate a sample of 10000 Pareto fronts – the cost of the generation of a front is the evaluation of a polynomial at each value of ti

(continued)

350

3

Representation of Random Variables

Example 3.9 (continued)

Fig. 3.97 Comparison between two mean Pareto Fronts of a sample of 10000 fronts generated by the expansion and the most representative Pareto front of a sample of 2500 variates (generated by solving the multiobjective problem for The cost for the  each variate).  expansion is the evaluation of a polynomial at each tk and u1i , u2j

When using the expansion, the generation of a front involves the evaluation of a polynomial at each ti: the computational cost corresponds to evaluation of the polynomial for each ti. Then, the cost of determination of the mean is the cost of the evaluation of the mean point at each ti – the computational cost of the evaluation by Hausdorff distance is much higher than it. The results for u1~T(1.5,2, 2.5) and u2~T(0,0.5,1) are shown in Figs. 3.98 and 3.99. (continued)

3.8

Mean, Variance, and Confidence Intervals for Random Functions or Random Curves

351

Example 3.9 (continued)

Fig. 3.98 Determination of the Mean Pareto Front and a confidence interval from the expansion, which is used to generate a sample of 10000 Pareto fronts – the cost of the generation of a front is the evaluation of a polynomial at each value of ti

(continued)

352

3

Representation of Random Variables

Example 3.9 (continued)

Fig. 3.99 Comparison between the mean Pareto Front generated by the expansion and the most representative Pareto front on a sample of 2500 variates

Example 3.10 Let us consider the indifference curves xα1 xβ2 = 1, when α and β are uncertain: we look for the most representative curve and a confidence interval 80%. Let us consider a first model where β = 1 - α and suppose that at α 2 (0.4, 0.6). We start by generating a family of curves: since no information about the distribution of α is given, let us assume a uniform distribution on the intervals. We generate a sample formed by a family of curves corresponding to na equally spaced subintervals for α and we apply the procedure previously introduced. The results obtained for na = 100 are shown in Fig. 3.100: A second possibility when not any information is available about the distribution of α is the use of the triangular distribution – for instance, we can consider T(0.4,0.5,0.6). Again, let us generate na = 100 variates from α and use the associated family of curves to find the most representative element. The result is shown in Fig. 3.101. (continued)

3.8

Mean, Variance, and Confidence Intervals for Random Functions or Random Curves

353

Example 3.10 (continued)

Fig. 3.100 Indifference curves with uncertain exponents β = 1 - α. α is uniformly distributed on (0.4, 0.6)

Fig. 3.101 Indifference curves with uncertain exponents β = 1 - α. α is triangular T(0.4, 0.5,0.6)

(continued)

354

3

Representation of Random Variables

Example 3.10 (continued) Consider a second model where α and β are independent and suppose that the experts agree that α 2 (0.4, 0.6) and β 2 (0.3, 0.5). Again, we desire to determine the mean indifference curve and a confidence interval of level 80%. We start by generating a family of curves: since no information about the distribution of α and β is given, let us assume a uniform distribution on the intervals. We generate a sample formed by a family of curves corresponding to na subintervals for α and nb subintervals for β, all equally spaced. The results for na = nb = 10 are exhibited in Fig. 3.102. If the distributions are supposed triangular, the results correspond to those in Fig. 3.103.

Fig. 3.102 Indifference curves with uncertain independent exponents α, β uniformly distributed

(continued)

3.8

Mean, Variance, and Confidence Intervals for Random Functions or Random Curves

355

Example 3.10 (continued)

Fig. 3.103 Indifference curves with uncertain independent exponents α, β triangularly distributed T(0.4, 0.5,0.6) and T(0.3, 0.4,0.5)

Example 3.11 Let us consider the family of random functions X ðt, uÞ = cos ð2πut Þ, t 2 ð0, 1Þ: with u uniformly distributed on (-1, 1). Here C ðuÞ = fXðt, uÞ = ðt, cos ð2πut ÞÞ, t 2 ð0, 1Þg: We have E ðX 1 ðt, uÞÞ = ðt, EðX 2 ðt, uÞÞÞ =

sin ð2πt Þ 2πt

Analogously to the first example, this result may be considered as unsatisfactory since the curve is not representative of the family. Let us look for the most representative element of the family. The result is shown in Fig. 3.104.∎ (continued)

356

3

Representation of Random Variables

Example 3.11 (continued)

Fig. 3.104 Family of random curves. The mean evaluated by taking the punctual mean at fixed t furnishes a curve that is not representative of the family. We may look for an element that is the most representative in the sense of a distance. The results for two distances are shown in Figure

3.8

Mean, Variance, and Confidence Intervals for Random Functions or Random Curves

357

Exercises 1. Consider X t = ð 1 - uÞ 2

t2 : 4

Assume that u is a random variable, uniformly distributed on (0.2, 0.8). Determine the mean path and a confidence interval 80%.   2. Consider X t = X 0 - δs e - δt þ δs, where s, δ are uncertain. Assume that the distributions of s and δ are triangular: s  T(0.1, 0.25, 0.5) and δ  T(0.05, 0.15, 0.25). Determine the mean path and a confidence interval 80% for X0 = 10.

u1 u1 ðu2 Þt þ , where u1, u2 are uncertain. 3. Consider X t = X 0 1 - u2 1 - u2 Assume that the distributions of u1 and u2 are triangular: u1  T(1, 2, 3) and u2  T(0.1, 0.25, 0.4). Determine the mean path and a confidence interval 80% for X0 = 10. 4. Consider the family Xt = u, if |t| < u; Xt = (|t| - u)2 + u, otherwise. Assume that the distribution of u is triangular: uT(0,0.5,1). Determine the mean path and a confidence interval 80%.

Chapter 4

Stochastic Processes

Abstract In this chapter, we consider stochastic processes, with a focus on MA, AR, ARMA, diffusion processes, Ito’s stochastic integrals, and Ito’s stochastic differential equations. Stochastic processes may be considered as random variables indexed by another variable – usually the time. Thus, a typical stochastic process is a family of random variables depending upon time: X(t) = Xt. We can consider also more general indexing sets: Definition 4.1 Let (Ω, P) be a probability space and T ⊂ ℝ be a non-empty set. A stochastic process indexed by T is family of applications X = fX t : t 2 T g, where Xt is a random variable on (Ω, P). We say that t 2 T is the time.

Remark 4.1 From the definition above, we see that a stochastic process defines a family of applications Ω 3 ω → Xt(ω) 2 ℝ, indexed by t 2 T . Thus, a stochastic process can be seen as a function X : Ω × T → ℝ , id est, an application Ω × T 3 ðω, t Þ → X ðω, t Þ = X t ðωÞ 2 ℝ. It is usual to make a distinction between discrete time (when T is a discrete set) and continuous time (when T = ða, bÞ is an interval). According to the situation, the process is said to be discrete (when T is a discrete set) or continuous (when T = ða, bÞ is an interval). In many applications, T = ℕ or T = ℤ.

Supplementary Information The online version of this chapter (https://doi.org/10.1007/978-3031-17785-9_4) contains supplementary material, which is available to authorized users. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 E. Souza de Cursi, Uncertainty Quantification using R, International Series in Operations Research & Management Science 335, https://doi.org/10.1007/978-3-031-17785-9_4

359

360

4

Stochastic Processes

Since each Xt is a random variable, we can define statistics of random processes: Definition 4.2 The mean of X is mX(t) = E(Xt).   The variance of X is V X ðt Þ = V ðX t Þ = E X 2t - ðE ðX t ÞÞ2 . pffiffiffiffiffiffiffiffiffiffiffi The standard deviation of X is σ X ðt Þ = V X ðt Þ:   The moment of order k of X is M X,k ðt Þ = E X kt . X is a second-order process if and only if Xt 2 L2(Ω, P), 8 t. Second-order processes are often used in practice. We have Proposition 4.1 Let X = fX t : t 2 T g be a second-order process. Then,   8t : EðX t Þ < 1,EðjX t jÞ < 1,E X 2t < 1:

Proof Xt ∈ L²(Ω, P) if and only if ‖Xt‖² = E(Xt²) < ∞ (see Sect. 2.5). In addition, E(|Xt|) = (1, |Xt|) ≤ ‖1‖ ‖Xt‖ = ‖Xt‖ < ∞. Finally, Jensen's inequality (see Proposition 2.6) shows that |E(Xt)| ≤ E(|Xt|) < ∞.
When dealing with second-order processes, we can consider supplementary definitions:
Definition 4.3 Let X = {Xt : t ∈ T} be a second-order process. The autocorrelation of X is RXX(t, s) = E(XtXs). The autocovariance of X is CXX(t, s) = E(XtXs) − E(Xt)E(Xs). The cross-correlation of X and Y is RXY(t, s) = E(XtYs). The cross-covariance of X and Y is CXY(t, s) = E(XtYs) − E(Xt)E(Ys).

Remark 4.2 Notice that RXX(t, s) = RXX(s, t) and CXX(t, s) = CXX(s, t).

4.1 Ergodicity

Definition 4.4
1. X is weakly stationary if and only if VX(t) is finite for any t; mX is independent from t: mX(t) = m0, ∀t; and RXX(s, t) = RX(|s − t|), ∀s, t. RX is extended by symmetry to negative values: RX(a) = RX(−a), so that RXX(s, t) = RX(s − t) = RX(t − s).
2. If X and Y are weakly stationary, mX(t) = m0, mY(t) = n0, ∀t. Then, we can evaluate RXY(s) = E(Xt+s Yt), CXY(s) = E(Xt+s Yt) − m0 n0.
3. X is stationary if and only if the distribution of (X(t1), . . ., X(tn)) coincides with the distribution of (X(t1 + τ), . . ., X(tn + τ)) for any n, any (t1, . . ., tn) and any τ. If X is stationary, then X is weakly stationary.
4. Let X be weakly stationary. X is ergodic in the mean if and only if
m0 = lim_{T→+∞} (1/T) ∫_0^T X(s) ds.
It is ergodic in correlation if and only if
RX(τ) = lim_{T→+∞} (1/T) ∫_0^T X(s) X(s + τ) ds.

Notice that, for a weakly stationary process,
CXX(t, s) = RXX(t, s) − mX² = RX(|s − t|) − m0² = CX(|s − t|).
For ergodic processes, we can evaluate the mean and/or the autocorrelation from a single realization, by approximating the limits by the evaluation at a large T. For discrete processes (for instance, T = ℕ or T = ℤ), the integrals are replaced by sums:
m0 = lim_{n→∞} (1/n) Σ_{i=0}^{n} Xi ;  RX(s) = lim_{n→∞} (1/n) Σ_{i=0}^{n} Xi Xi+s.

In practice, these quantities are estimated by finite sums. Under R, the intrinsic function mean evaluates the mean of a signal, and acf helps you to evaluate RX and CX: acf(X, lag.max = k, type = "correlation") evaluates the normalized autocorrelation RX(s)/RX(0), 0 ≤ s ≤ k; acf(X, lag.max = k, type = "covariance") evaluates CX(s), 0 ≤ s ≤ k. Analogously, ccf helps you to evaluate RXY and CXY: ccf(X, Y, lag.max = k, type = "correlation") evaluates the normalized cross-correlation RXY(s)/√(RX(0)RY(0)), −k ≤ s ≤ k; ccf(X, Y, lag.max = k, type = "covariance") evaluates CXY(s), −k ≤ s ≤ k. A parameter demean can be set to TRUE or FALSE to use or not centered variables – the default value is TRUE. Both functions produce a plot by default – this is controlled by a parameter plot, which takes the value TRUE by default: set it to FALSE if you do not desire a plot. Otherwise, you can use the functions below:


autocorr = function(k,x){
  nx = length(x)
  kk = abs(k)
  aux = sum(x[1:(nx-kk)]*x[(kk+1):nx])/(nx - kk)
  return(aux)
}
vautocorr = function(k,x){
  aux = numeric(length=k+1)
  for (i in 0:k){aux[[i+1]] = autocorr(i,x)}
  return(aux)
}
crosscorr = function(k,y,z){
  ny = length(y)
  nz = length(z)
  nn = min(c(ny,nz))
  if(k < 0){
    kk = abs(k)
    aux = sum(y[1:(nn-kk)]*z[(kk+1):nn])/(nn - kk)
  }else{
    aux = sum(y[(k+1):nn]*z[1:(nn-k)])/(nn - k)
  }
  return(aux)
}
vcrosscorr = function(k,y,z){
  aux = numeric(length=2*k+1)
  for (i in seq(-k,k,by=1)){
    aux[[i+k+1]] = crosscorr(i,y,z)
  }
  return(aux)
}
autocov = function(k,x){
  aux = autocorr(k,x) - mean(x)^2
  return(aux)
}
vautocov = function(k,x){
  aux = numeric(length=k+1)
  for (i in 0:k){aux[[i+1]] = autocov(i,x)}
  return(aux)
}
crosscov = function(k,y,z){
  aux = crosscorr(k,y,z) - mean(y)*mean(z)
  return(aux)
}
vcrosscov = function(k,y,z){
  aux = numeric(length=k+1)
  for (i in 0:k){aux[[i+1]] = crosscov(i,y,z)}
  return(aux)
}

These functions are implemented in the class signal.R.
Remark 4.3 Notice that the formula used by R in the computation of autocorrelations and autocovariances is different from the definition introduced in this text. acf evaluates the covariances using the standard statistical formula
RX(k) ≈ (1/n) Σ_{i=1}^{n−k} (Xi+k − X̄)(Xi − X̄).

For instance, consider the signal
Xi = exp(ti/4), 0 ≤ i ≤ 1000, ti = i/100.

The code

Running this code under RStudio 2021.09.1 Build 372, with R 4.1.3 produces the result in Fig. 4.1. This result is slightly different from the result furnished by the code proposed (function autocov), namely when k increases – autocov uses a division by n - k.
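The verification code itself appears in the book only as a figure; the short script below is a sketch of such a comparison (the lag range and plotting choices are assumptions):

t <- (0:1000)/100
x <- exp(t/4)
kmax <- 50
# acf centers the signal and divides by n
cov_acf <- as.numeric(acf(x, lag.max = kmax, type = "covariance", plot = FALSE)$acf)
# vautocov (defined above) divides by n - k
cov_own <- vautocov(kmax, x)
plot(0:kmax, cov_acf, type = "l", col = "red", xlab = "lag", ylab = "covariance")
lines(0:kmax, cov_own, col = "black")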

Fig. 4.1 Verification of the formula used by acf for covariance and comparison with the result of autocov

Analogously, R evaluates correlations using the standard statistical formula
RX(k) = [ (1/(n−k)) Σ_{i=1}^{n−k} (Xi+k − X̄)(Xi − X̄) ] / [ √((1/(n−k)) Σ_{i=1}^{n−k} (Xi+k − X̄)²) √((1/n) Σ_{i=1}^{n} (Xi − X̄)²) ],
while the usual definition in signal analysis is used in this text. To verify these formulae, we can use the code:

Running this code under RStudio 2021.09.1 Build 372, with R 4.1.3 produces the result in Fig. 4.2. This result is different from the result furnished by the code proposed (function autocorr). We can compare the results by normalizing the results of autocorr to meet the statistical definition of correlation.


Fig. 4.2 Verification of the formula used by acf for covariance and comparison with the result of autocorr

Example 4.1 Let us consider a white noise ε (see Sect. 4.3) and the processes defined by
Xn = εn + εn−1 + 2εn−2 − εn−3 + 2εn−4 ;
Yn = εn + εn−1 + 2εn−2 + εn−3 + 2εn−4 + 3εn−5 + 4εn−6.
We generate a trajectory containing 1000 points as follows (see Sect. 4.4):
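The generation code appears in the book as an image; a minimal sketch producing the two trajectories is the following (the gaussian N(0, 1) white noise is an assumption):

ns  <- 1000
eps <- rnorm(ns + 6)          # white noise, with extra values for the lags
X <- numeric(ns); Y <- numeric(ns)
for (n in 1:ns) {
  m <- n + 6                  # index of eps corresponding to time n
  X[n] <- eps[m] + eps[m-1] + 2*eps[m-2] - eps[m-3] + 2*eps[m-4]
  Y[n] <- eps[m] + eps[m-1] + 2*eps[m-2] + eps[m-3] + 2*eps[m-4] +
          3*eps[m-5] + 4*eps[m-6]
}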

Let us evaluate the autocorrelation RX(s), for 0 ≤ s ≤ 100. We can use the function vautocorr. At right, an example of call using the class signal.


Example 4.1 (continued) Otherwise, we can use acf and correct the result by adding the square of the mean of X:

Finally, we can use parameter demean:
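These calls are shown as images in the book; a sketch of the two possibilities (assuming the trajectory X generated above) is:

k  <- 100
# acf removes the mean: add mean(X)^2 to recover the non-centered autocorrelation
R1 <- as.numeric(acf(X, lag.max = k, type = "covariance", plot = FALSE)$acf) + mean(X)^2
# with demean = FALSE, acf works directly on the non-centered signal
R2 <- as.numeric(acf(X, lag.max = k, type = "covariance", plot = FALSE, demean = FALSE)$acf)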

By default, acf generates a plot as in Fig. 4.3.

Fig. 4.3 Plot produced by acf, by default

If you do not desire the plot, you must use the parameter plot = FALSE. For instance,


Example 4.1 (continued) Here, the parameter plot = FALSE indicates that we do not desire to plot the autocovariance. You can plot it by using the standard graphics’ instructions from R. For instance:
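For instance, a sketch of such a plot (R3 is assumed to contain the values returned by vautocorr, and R1, R2 come from the previous calls):

R3 <- vautocorr(k, X)
plot(0:k, R1, type = "l", col = "red", xlab = "lag", ylab = "autocorrelation")
lines(0:k, R2, col = "blue")
lines(0:k, R3, col = "black")
legend("topright", legend = c("acf + mean^2", "acf demean = FALSE", "autocorr"),
       col = c("red", "blue", "black"), lty = 1)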

produces the result shown in Fig. 4.4.

Fig. 4.4 Autocorrelation of X generated by three different methods. The black curve labeled “autocov” corresponds to the programs previously presented (implemented in class signal)

An example of the plot produced by ccf by default is shown in Fig. 4.5. Analogously to the preceding, we can consider



Fig. 4.5 Plot of the results of ccf, by default

This code produces the result in Fig. 4.6.

Fig. 4.6 Plot of the results of ccf. The black curve labeled “crosscorr” corresponds to the programs previously presented (implemented in class signal)


Example 4.2 Let us consider the times ti = iΔt, Δt = 0.01, 0 ≤ i ≤ 1000 and two signals defined by
Xi = sin(2πti), Yi = cos(2πti).
Notice that both are deterministic non-stationary signals. In this case, we have, for s > 0,
RX(s) = (1/(10 − s)) ∫_0^{10−s} sin(2π(t + s)) sin(2πt) dt,
so that
RX(s) = (1/2) cos(2πs) + sin(2πs)/(4π(10 − s)).
For s < 0,
RX(s) = (1/(10 + s)) ∫_{−s}^{10} sin(2π(t + s)) sin(2πt) dt,
and
RX(s) = (1/(10 − |s|)) ∫_{|s|}^{10} sin(2π(t − |s|)) sin(2πt) dt,
so that (u = t − |s|)
RX(s) = (1/(10 − |s|)) ∫_0^{10−|s|} sin(2πu) sin(2π(u + |s|)) du = RX(|s|).
Thus, RX(s) = RX(−s). Analogously, for s > 0,
RXY(s) = (1/(10 − s)) ∫_0^{10−s} sin(2π(t + s)) cos(2πt) dt,


so that
RXY(s) = (1/2) sin(2πs).
For s < 0,
RXY(s) = (1/(10 + s)) ∫_{−s}^{10} sin(2π(t + s)) cos(2πt) dt.
So (u = −s),
RXY(s) = (1/(10 − u)) ∫_u^{10} sin(2π(t − u)) cos(2πt) dt.
Thus,
RXY(s) = −(1/2) sin(2πu) = (1/2) sin(2πs)  (s < 0).
In addition,
E(X) = (1/10) ∫_0^{10} sin(2πt) dt = 0,
E(Y) = (1/10) ∫_0^{10} cos(2πt) dt = 0.
We generate the signals by the code at right:
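The code itself appears as a figure; a minimal sketch producing the same signals is:

dt <- 0.01
t  <- (0:1000)*dt
X  <- sin(2*pi*t)
Y  <- cos(2*pi*t)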

Let us evaluate the autocorrelation RX(s), for 0 ≤ s ≤ 100. In this case, we obtain the results in Figs. 4.7 and 4.8. (continued)


Fig. 4.7 Autocorrelation of X generated by three different methods. The red curve labeled “autocov” corresponds to the programs previously presented (implemented in class signal)

Fig. 4.8 Plot of the results of ccf. The red curve labeled “crosscorr” corresponds to the programs previously presented (implemented in class signal)


Example 4.3 Let us consider the times ti = iΔt, Δt = 0.01, 0 ≤ i ≤ 1000 and two signals defined by
Xi = exp(ti/4), Yi = exp(−ti/10).
Notice that both are deterministic non-stationary signals. In this case, we have, for s > 0,
RX(s) = (1/(10 − s)) ∫_0^{10−s} exp((t + s)/4) exp(t/4) dt,
so that
RX(s) = (2/(10 − s)) (exp(5 − s/4) − exp(s/4)).
For s < 0, the same transformations used in the preceding example show that RX(s) = RX(−s). Let us evaluate the cross-covariance: for s > 0,
RXY(s) = (1/(10 − s)) ∫_0^{10−s} exp((t + s)/4) exp(−t/10) dt,
so that
RXY(s) = 20 e^{s/10} (e^{3/2} − e^{3s/20}) / (3(10 − s)).
For s < 0,
RXY(s) = (1/(10 + s)) ∫_{−s}^{10} exp((t + s)/4) exp(−t/10) dt,
id est (u = −s),
RXY(s) = (1/(10 − u)) ∫_u^{10} exp((t − u)/4) exp(−t/10) dt,


so that
RXY(s) = 20 e^{−u/4} (e^{3/2} − e^{3u/20}) / (3(10 − u)) = 20 e^{s/4} (e^{3/2} − e^{−3s/20}) / (3(10 + s)).
In addition,
E(X) = (1/10) ∫_0^{10} exp(t/4) dt = (2/5)(e^{5/2} − 1),
E(Y) = (1/10) ∫_0^{10} exp(−t/10) dt = (e − 1)/e.

The results furnished by the programs previously introduced are shown in Figs. 4.9 and 4.10.

Fig. 4.9 Autocorrelation of X generated by the programs previously introduced (available in class signal)


Fig. 4.10 Cross-correlation of X and Y generated by the programs previously introduced (available in class signal)

In the analysis of weakly stationary processes, we may use the partial autocorrelations, defined below:
Definition 4.5 The partial autocorrelation ρ(s) of a discrete process X is
ρ(s) = E((Xt+s − PsXt+s)(Xt − PsXt)),
where PsXt is the best approximation of Xt by an affine function of Xs,t = (Xt+1, . . ., Xt+s−1), id est,
PsXt = arg min { E((Y − Xt)²) : Y is an affine function of Xs,t }.
In addition, ρ(s) coincides with the coefficient of Xt for the best approximation of Xt+s by an affine function of Xt, . . ., Xt+s−1.
Under R, partial autocorrelations are evaluated either by acf with parameter type = "partial" or by pacf. These functions use the approximations


introduced in Remark 4.3. If you desire to evaluate partial autocorrelations by your own program, you can consider the general form of the approximation:
Xt+s ≈ PXt+s = c1 Xt+s−1 + c2 Xt+s−2 + . . . + cs Xt + cs+1.
Let us denote
Zi = Xt+s−i, 1 ≤ i ≤ s; Zs+1 = 1.
The best approximation PXt+s is the orthogonal projection of Xt+s onto the subspace generated by {Z1, Z2, . . ., Zs+1}. Thus,
E((Xt+s − PXt+s)Zi) = 0, 1 ≤ i ≤ s + 1.
Analogously to the methods of collocation and variational approximation introduced in Chap. 3, the coefficients C = (c1, . . ., cs+1)ᵗ are the solution of a linear system:
AC = B; Aij = E(Zi Zj), Bi = E(Xt+s Zi), 1 ≤ i, j ≤ s + 1.    (4.1)

We have ρ(s) = cs. Thus, an alternative to determine the coefficients is to solve the linear systems (4.1). An example of code for the determination of the partial autocorrelation by this way is the following:
vpcorr1 = function(kmax,z,epsi){
  PAC = numeric(length = kmax)
  nz = length(z)
  for (s in 1:kmax){
    ndata = nz - s
    Y = matrix(z[(s+1):nz], nrow = ndata, ncol = 1)
    X = list()
    for (j in 1:s){
      t_ini = s + 1 - j
      t_end = t_ini + ndata - 1
      X[[j]] = matrix(z[t_ini:t_end], nrow = ndata, ncol = 1)
    }
    X[[s+1]] = matrix(1, nrow = ndata, ncol = 1)
    A = matrix(0, nrow = s+1, ncol = s+1)
    B = matrix(0, nrow = s+1, ncol = 1)
    for (i1 in 1:(s+1)){
      B[[i1,1]] = sum(X[[i1]]*Y)/ndata
      for (j1 in 1:(s+1)){
        A[[i1,j1]] = sum(X[[i1]]*X[[j1]])/ndata
      }
    }
    cof = solve(A + epsi*diag(s+1), B)
    PAC[[s]] = cof[[s]]
  }
  return(PAC)
}


You can easily build a variation of this program which eliminates the coefficient cs+1:
vpcorr1a = function(kmax,z,epsi){
  PAC = numeric(length = kmax)
  nz = length(z)
  for (s in 1:kmax){
    ndata = nz - s
    Y = matrix(z[(s+1):nz], nrow = ndata, ncol = 1)
    X = list()
    for (j in 1:s){
      t_ini = s + 1 - j
      t_end = t_ini + ndata - 1
      X[[j]] = matrix(z[t_ini:t_end], nrow = ndata, ncol = 1)
    }
    A = matrix(0, nrow = s, ncol = s)
    B = matrix(0, nrow = s, ncol = 1)
    for (i1 in 1:s){
      B[[i1,1]] = cov(X[[i1]],Y)
      for (j1 in 1:s){
        A[[i1,j1]] = cov(X[[i1]],X[[j1]])
      }
    }
    cof = solve(A + epsi*diag(s), B)
    PAC[[s]] = cof[[s]]
  }
  return(PAC)
}

It is also possible to use lm to fit a linear model to the data and retrieve the coefficients:
vpcorr2 = function(kmax,z){
  PAC = numeric(length = kmax)
  nz = length(z)
  for (s in 1:kmax){
    ndata = nz - s
    Y = matrix(z[(s+1):nz], nrow = ndata, ncol = 1)
    X = matrix(0, nrow = ndata, ncol = s+2)
    for (j in 1:s){
      t_ini = s + 1 - j
      t_end = t_ini + ndata - 1
      X[ ,j+1] = matrix(z[t_ini:t_end], nrow = ndata, ncol = 1)
    }
    X[ ,s+2] = c(rep(1,ndata))
    X[ ,1] = c(Y)
    ccc = character(length = s+2)
    ccc[[1]] = "Y"
    for (j in 1:(s+1)){
      ccc[[j+1]] = paste0("X",j)
    }
    # the remaining steps are a sketch: build a data.frame, fit the linear
    # model and keep the coefficient of Xs, which estimates rho(s)
    ddd = as.data.frame(X)
    colnames(ddd) = ccc
    fit = lm(Y ~ . - 1, data = ddd)   # the constant column X(s+1) plays the role of the intercept
    PAC[[s]] = coef(fit)[[paste0("X",s)]]
  }
  return(PAC)
}


RX(p) = m² + σ² Σ_{j=0}^{k−|p|} aj aj+|p|, if |p| ≤ k; RX(p) = m², otherwise.


This equation is often used to determine the order k of a process MA(k): indeed, RX(p) is constant for p > k. We can use the normalized values given by acf(. . ., type = "correlation") instead of RX: then the autocorrelation is near zero for p > k. The simulation of a MA process for given coefficients m, a1, . . ., ak can be performed by generating the white noise (see Sect. 4.3) and using the expression of Xn. Otherwise, you can use arima.sim.
Example 4.12 Let us consider the process Xn = 1 + εn − 2εn−1 + 0.5εn−2. We can simulate the process as follows:
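The simulation code is shown as an image; a direct sketch (assuming a gaussian N(0, 1) white noise) is:

ns  <- 100
eps <- rnorm(ns + 2)
X   <- 1 + eps[3:(ns+2)] - 2*eps[2:(ns+1)] + 0.5*eps[1:ns]
plot(1:ns, X, type = "l", xlab = "n", ylab = "X")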

An example of result is shown in Fig. 4.30. Notice that R proposes an intrinsic function ts.plot (or plot.ts) to plot time series – the graphics 4.30 is not generated by these functions. To examine the autocorrelation, we generate a larger sample of ns = 1E5 variates and we use acf to find the autocorrelation (notice that we use type = “correlation” to get normalized values).

The result is exhibited in Fig. 4.31 – we observe that the autocorrelation is near zero for k ≥ 3, so that the order of the process is 2. Notice that acf eliminates the mean m. To keep the mean, you must use the parameter demean = FALSE. (continued)


Fig. 4.30 Example of realization of the process Xn = 1 + εn - 2εn - 1 + 0.5εn - 2

We can use arima.sim:
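A sketch of the call (arima.sim simulates a zero-mean ARMA process, so the mean 1 is added afterwards):

ns <- 1e5
X  <- arima.sim(model = list(ma = c(-2, 0.5)), n = ns) + 1
acf(X, lag.max = 10, type = "correlation")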

The autocorrelation is shown in Fig. 4.32 – it is like the preceding one (the mean c is eliminated, since we did not set demean = FALSE)

Fig. 4.31 Autocorrelation of the process Xn = 1 + εn - 2εn - 1 + 0.5εn - 2. The values are near zero for k ≥ 3, what indicates a process of order 2


Fig. 4.32 Autocorrelation of the process Xn = 1 + εn - 2εn - 1 + 0.5εn - 2, generated by arima.sim

In practice, the situation is often the reverse one: we have the data concerning observations of the process, but we do not know the coefficients and the order – it is necessary to identify the process using the available data. To do this, we start by evaluating the autocorrelations, which furnish the order k of the process. Then, the coefficients of the approximation may be determined from a trajectory of Xn: indeed, we can determine mX and RX from the trajectory. Since
E(Xn) = E(m + Σ_{i=0}^{k} ai εn−i) = m,    (4.1)
the mean of the trajectory estimates the constant m. In addition,
E(Xn Xn+p) = E(m²) + E(Σ_{i,j=0}^{k} ai aj εn−i εn+p−j),
so that, for 0 ≤ p ≤ k:
RX(p) = m² + σε² Σ_{i=0}^{k−p} ai ai+p.


Taking a white noise of variance σε² = 1, we have:
Σ_{i=0}^{k−p} ai ai+p + m² − RX(p) = 0.    (4.2)

These equations connect the constant m, the coefficients, and the autocorrelation: they can be used to determine the coefficients: the values of RX are determined from the data and the coefficients m, a1, . . ., ak are the solution of these equations, to be determined by an adequate method. For instance, we can solve the nonlinear system or minimize the norm of the equations. Under R, the instruction arima estimates the coefficients.
Example 4.13 Let us consider again the process Xn = 1 + εn − 2εn−1 + 0.5εn−2. If the coefficients are known, we can use Eq. (4.2) to determine RX. For instance, we can use the code at right. This code creates a function that receives as arguments a numeric vector a = (a1, . . ., ak) and the mean m. It returns a numeric vector containing RX(0), . . ., RX(kmax).
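The code is shown as an image; a possible implementation of the described function is (assuming a white noise of unit variance and a0 = 1):

calcRx <- function(a, m, kmax) {
  aa <- c(1, a)                      # a0 = 1, then a1, ..., ak
  k  <- length(aa) - 1
  Rx <- rep(m^2, kmax + 1)           # RX(p) = m^2 for |p| > k
  for (p in 0:min(k, kmax)) {
    Rx[p + 1] <- m^2 + sum(aa[1:(k - p + 1)] * aa[(p + 1):(k + 1)])
  }
  return(Rx)
}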

With a = (-2, 0.5), m = 1, we obtain RX(0) = 6.25, RX(1) = RX(-1) = 2, RX(2) = RX(-2) = 1.5, RX( p) = 1, if |p| > 2:

Assume that the coefficients are unknown, but we have a trajectory of 1000 values:


Example 4.13 (continued) To determine the coefficients, we start by evaluating the mean and the autocorrelations:

This code generates the graphics shown in Fig. 4.33

Fig. 4.33 Autocorrelation of X. The values are approximately constant and equal to m2X for k ≥ 3, what indicates a process of order 2

Alternatively, you can use the code below, which produces a graphics with normalized values. The results appear in Fig. 4.34. In both the cases, the process appears as having order k = 2


Fig. 4.34 acf of X. The values are near zero for k ≥ 3, what indicates a process of order 2

We must solve the equations
Σ_{i=0}^{k−p} ai ai+p + m² − RX(p) = 0, a0 = 1.
Since the order is k = 2 and a0 = 1, the equations reduce to
1 + a1² + a2² + m² − RX(0) = 0,
a1 + a1 a2 + m² − RX(1) = 0.
m is estimated as the empirical mean of the data, m ≈ mX, and the values of RX are estimated from the data:


Example 4.13 (continued) We make a first estimation using arima:

When calling arima, you must give the orders order = c(p,d,q) for a model AR(p), MA(q), with differentiation order d. Notice that arima also furnishes other information, such as the AIC:

Notice also that you can generate confidence intervals for the coefficients:

We use the solution found as initial point to solve the equations: initially, we create the equations, and we evaluate their norm:

This function evaluates the RMS residual of the equations. We minimize it, using a_ini as starting point:
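These two steps appear as images in the book; the sketch below illustrates them (the names a_ini, Rx_hat and mx are assumptions: a_ini holds the arima estimates, Rx_hat the empirical autocorrelations RX(0), RX(1), . . ., and mx the empirical mean; base R optim is used here in place of the optimizer shown in the book):

calcr <- function(a) {
  r1 <- 1 + a[1]^2 + a[2]^2 + mx^2 - Rx_hat[1]   # equation for RX(0)
  r2 <- a[1] + a[1]*a[2] + mx^2 - Rx_hat[2]      # equation for RX(1)
  sqrt((r1^2 + r2^2)/2)                          # RMS of the residuals
}
sol <- optim(a_ini, calcr)    # minimize the RMS residual, starting from a_ini
sol$par                       # estimated coefficients a1, a2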


Example 4.13 (continued) The method estimates a1 ≈ - 2.0, a2 ≈ 0.49. Of course, you can also use fsolve:

Alternatively, we can consider m as a variable to be determined: we consider a[[3]] =m, to be found by fminunc or fsolve – in this case, we need a supplementary equation for RX(2), to get 3 equations for 3 unknowns:

An important result concerning MA processes is the following one:
Theorem 4.1 Let {Xn : n ∈ ℤ} be a second-order stationary process. Then, Xn corresponds to a process MA(∞): there exists a deterministic process {Dn : n ∈ ℕ} and a sequence of real coefficients {an : n ∈ ℕ} such that
∀n ∈ ℤ : Xn = Dn + Σ_{i=0}^{+∞} ai εn−i
with a0 = 1, Σ_{i=0}^{+∞} ai² < ∞, cov(D, εi) = 0, ∀i.
Wold's theorem suggests that second-order stationary processes may be approximated by considering truncations MA(k) of the infinite series MA(∞):
Xn ≈ Dn + Σ_{i=0}^{k} ai εn−i.    (4.3)
In practice, we can use Dn = E(Xn). The coefficients can be determined by solving Eq. (4.3). Under R, the instruction arima furnishes an estimation of the coefficients. The package forecast proposes an instruction auto.arima which determines automatically a model for the time series.


Example 4.14 Let us consider again the process Xn = 1 + εn − 2εn−1 + 0.5εn−2. Assume that the coefficients are unknown and that we do not know that it is a MA(2) process, but we have a trajectory of 1000 values and we want to determine a MA(k) approximation. We can adopt an approach analogous to the one presented in Example 4.13, but with a varying k: solve the equations
Σ_{i=0}^{k−p} ai ai+p + m² − RX(p) = 0, a0 = 1
for different values of k and choose the best result. Initially, we create a function that evaluates the RMS of the residuals (function calcr is defined in Example 4.13):

Then, we find the solution for several values of k:

We obtain the residuals shown in Fig. 4.35: the residuals are very close for all the values of k: the vertical axis has a maximum value of 2.5E-11. Indeed, the coefficients a1, a2 are approximately the same for all the values of k, while the other coefficients are near zero. In Fig. 4.36, we show the values of a1, a2, and the RMS norm of the other coefficients. (continued)


Fig. 4.35 RMS residuals. Notice that the vertical axis is from 5E-12 to 2.5E-11. Although the curve seems to show differences, all the values are very close

Fig. 4.36 Coefficients found: a1 ≈ - 2, a2 ≈ 0.5 and the RMS norm of the other coefficients is near zero


Example 4.14 (continued) Analogously to Example 4.13, we can consider m as an unknown to be determined. Then, we use the code below, which furnishes the results shown in Figs. 4.37 and 4.38.

Fig. 4.37 RMS residuals. Notice that the residuals go from 2E-11 to 4E-11. Although the curve seems to show differences, all the values are very close


Fig. 4.38 Coefficients found: a1 ≈ - 2, a2 ≈ 0.5, m ≈ 1 and the RMS norm of the other coefficients is near zero

Remark 4.5 The analysis of MA processes is often made using delay (or lag) operators. The basic delay operator is L(Xt) = Xt−1. Thus, Xn−k = Lᵏ Xn and we have
Xn = c + εn + Σ_{i=1}^{k} ai εn−i ⟺ Xn − c = B(L)(εn), B(x) = 1 + Σ_{i=1}^{k} ai xⁱ.
The polynomial B and its roots play an essential role in the theory of MA and AR processes (see Sect. 4.5 for AR). Indeed, let xi, i = 1, . . ., k be the roots of B. Then, B(L) = ak(x1 − L) . . . (xk − L). Recalling that the product of the roots is equal to 1/ak, we have
B(L) = (Id − L/x1) . . . (Id − L/xk),
where Id is the identity operator. Thus, B(L)⁻¹ exists if and only if no root of B has a modulus equal to 1: |xi| ≠ 1, i = 1, . . ., k. If |xi| > 1, we have (see, for instance, Dautray & Lions, 2012)
(xi Id − L)⁻¹ = Σ_{n=0}^{+∞} Lⁿ / xiⁿ⁺¹.
Thus, if |xi| > 1, i = 1, . . ., k, then εn = B(L)⁻¹(Xn − c).

Exercises 1. Let Xn = 1 + εn. (a) Determine mX, VX, RX, CX. (b) Generate a sample of 1000 values of N(0, 1) and use it to simulate the process. (c) Use the data furnished by the simulation to determine approximate values of mX, RX( p), p = 0, 1, 2, 3, 4. (d) Use these approximated values to fit processes MA(0), MA(1), MA (2) to the data. 2. Let X n = 1 þ εn þ 12 εn - 1 : (a) Determine mX, VX, RX, CX. (b) Generate a sample of 1000 values of N(0, 1) and use it to simulate the process. (c) Use the data furnished by the simulation to determine approximate values of mX, RX( p), p = 0, 1, 2, 3, 4. (d) Use these approximated values to fit processes MA(0), MA(1), and MA (2) to the data. 3. Let Xn = μ + εn + αεn - 1. (a) Determine mX, VX, RX, CX. (b) Generate a sample of 1000 values of N(0, 1) and use it to simulate the process. (c) Use the data furnished by the simulation to determine approximate values of mX, RX( p), p = 0, 1, 2, 3, 4. (d) Use these approximated values to fit processes MA(0), MA(1), and MA (2) to the data. 4. Let Xn = εn - εn - αεn - 2. (a) Determine mX, VX, RX, CX. (b) Generate a sample of 1000 values of N(0, 1) and use it to simulate the process. (continued)


(c) Use the data furnished by the simulation to determine approximate values of mX, RX( p), p = 0, 1, 2, 3, 4. (d) Use these approximated values to fit processes MA(0), MA(1), and MA (2) to the data. 5. Let Xn = εn - αεn - 2. (a) Determine mX, VX, RX, CX. (b) Generate a sample of 1000 values of N(0, 1) and use it to simulate the process. (c) Use the data furnished by the simulation to determine approximate values of mX, RX( p), p = 0, 1, 2, 3, 4. (d) Use these approximated values to fit processes MA(0), MA(1), and MA (2) to the data.

4.5

Autoregressive Processes

Definition 4.8 X is autoregressive of order k, denoted AR(k), if and only if X is weakly stationary and there exists a discrete white noise ε = {εn : n ∈ ℤ}, a number m ∈ ℝ, and a vector of coefficients a = (a1, . . ., ak) such that
∀n ∈ ℤ : Xn = m + εn + Σ_{i=1}^{k} ai Xn−i.
A condition to verify if the process is stationary is furnished by the result below:
Theorem 4.2 The process AR(k) given by Xn = m + εn + Σ_{i=1}^{k} ai Xn−i is stationary if and only if the polynomial A(x) = 1 − Σ_{i=1}^{k} ai xⁱ has no root of modulus less than or equal to 1: A(x) = 0 ⟹ |x| > 1.

Remark 4.6 The partial autocorrelation ρX of X verifies ρX(s) = 0, for s > k. Indeed, ρX(s) is the coefficient of Xn−s for the best approximation PXn of Xn by an affine function of Xn−s, . . ., Xn−1, and PXn = Σ_{i=1}^{k} ai Xn−i, so that the coefficient is null for s > k.


Let us denote E(Xn) = μ. We have
μ = m + Σ_{i=1}^{k} ai μ,
so that, for 1 − Σ_{i=1}^{k} ai ≠ 0,
μ = m / (1 − Σ_{i=1}^{k} ai).    (4.4)
Assume that the white noise has a variance σ² = 1. Then,
E(Xn Xn) = mμ + 1 + Σ_{i=1}^{k} ai E(Xn Xn−i)
and, for p > 0,
E(Xn Xn+p) = mμ + Σ_{i=1}^{k} ai E(Xn Xn+p−i).
Thus,
RX(p) = mμ + δp0 + Σ_{i=1}^{k} ai RX(i − p).    (4.5)
Equations (4.6) are usually referred to as the Yule–Walker equations. Taking the values p = 0, 1, . . ., k, we obtain k + 1 linear equations for the determination of the quantities mμ, a1, . . ., ak:
( RX(1)    RX(2)    ⋯  RX(k)    1 ) ( a1 )   ( RX(0) − 1 )
( RX(0)    RX(1)    ⋯  RX(k−1)  1 ) ( a2 )   ( RX(1)     )
( ⋮        ⋮            ⋮       ⋮ ) ( ⋮  ) = ( ⋮         )
( RX(2−k)  RX(3−k)  ⋯  RX(1)    1 ) ( ak )   ( RX(k−1)   )
( RX(1−k)  RX(2−k)  ⋯  RX(0)    1 ) ( mμ )   ( RX(k)     )
Once mμ is known, m is determined by Eq. (4.5). We have
m = √( mμ × (1 − Σ_{i=1}^{k} ai) ).


The values of the coefficients can also be determined by a least-squares procedure. Indeed, we have 0

Xk



X kþ1







X kþn



1

B B 1 B B B⋮ @ 1

X1

10

0

1

m

0

1

X kþ1

εkþ1

CB C B B C B X2 C CB a1 C B X kþ2 CB C = B B C B ⋮ C A@ ⋮ A @ ⋮

C B C C B εkþ2 C C B C C-B C: C B ⋮ C A @ A X kþnþ1 εkþnþ1

ak

X nþ1

1

Thus, the vector of coefficients (m, a1, . . ., ak)t is the solution of a noisy linear system. Since the noise is gaussian, the maximum-likelihood estimator of the coefficients coincides with the least-squares solution of the linear system where the vector of noise is neglected, id est, the solution of 0

1

B B 1 B B B B⋮ @ 1

Xk



X kþ1







10

X1

m

0

X kþ1

CB C B B C B X2 C CB a1 C B X kþ2 CB C = B CB C B B C B ⋮ C A@ ⋮ A @ ⋮

⋯ X nþ1

X kþn

1

ak

1 C C C C: C C A

ð4:6Þ

X kþnþ1

Notice that the linear system is overdetermined for n > k + 1. Since the process is weakly stationary, RX( p - i) = RX(i - p) and we may use the Yule–Walker equations to determine the values of μ, RX( p), when the values of m, a1, . . ., ak are known. Indeed, we start by the determination of μ by using Eq. (4.5). The second step is the use of Eq. (4.6): for a stationary process, so that Eq. (4.6) form a linear system for RX( p), p = 0, . . .k. For instance, for k = 1: 1

- a1

- a1

1

!

RX ð0Þ

!

RX ð1Þ

mμ þ 1

=

!



For k = 2: 0

1

B B - a1 @ - a2

- a1 1 - a2 - a1

- a2

10

RX ð0Þ

1

0

CB C B B C B 0 C A@ RX ð1Þ A = @ 1 RX ð2Þ

mμ þ 1 mμ

1 C C A



For k = 3: 0

1

-a1

-a2 -a3

10

R X ð 0Þ

1 0

mμþ1

1

B CB C B C B -a1 1-a2 -a3 0 CB RX ð1Þ C B mμ C B CB C B C B CB C=B C B -a2 -a1 -a3 1 CB RX ð2Þ C B cm C 0 @ A@ A @ A -a3 -a2 -a1 1 R X ð 3Þ mμ

414

4

Stochastic Processes

For k = 4: 0

1

B B - a1 B B B - a2 B B B - a3 @ - a4

- a1

- a2

- a3

1 - a2

- a3

- a4

- a1 - a3

1 - a4

0

- a2 - a4

- a1

1

- a3

- a2

- a1

- a4

10

CB B 0 C CB CB B 0 C CB CB B 0 C A@ 1

RX ð 0 Þ

1

0

C B B RX ð1Þ C C B C B B RX ð2Þ C C=B C B B RX ð3Þ C A @ RX ð4Þ

mμ þ 1 mμ mμ mμ

1 C C C C C C C C A



Finally, we may determine RX(p), for p > k, by using Eq. (4.6) again:
RX(k + n) = mμ + Σ_{i=n}^{k+n−1} ak+n−i RX(i).    (4.7)
For a process AR(k), ρ(s) is evaluated by using the orthogonal projection property: indeed,
Xt+s = m + εt+s + Σ_{i=1}^{k} ai Xt+s−i.
Thus, only Xt+s−k, . . ., Xt+s−1 appear in the expression of Xt+s as an affine function of Xs,t = (Xt, . . ., Xt+s−1). Therefore, on the one hand, ρ(s) = 0, for s > k (in this case, Xt does not appear in the affine expression of Xt+s); on the other hand, ρ(k) = ak (since t + k − i = t for i = k).
Simulations of a process AR(k) may be generated by two ways:
1. If k values X0, . . ., Xk−1 are given, we may generate a sample from the white noise ε and use the recurrence in the definition of the AR(k) to determine Xk, Xk+1, . . .;
2. Otherwise, it is necessary to determine the inverse A(L)⁻¹ of A(L) = Id − Σ_{i=1}^{k} ai Lⁱ. We have:
A(L)⁻¹ = Σ_{i=0}^{+∞} pi Lⁱ.    (4.8)
The unknown coefficients are determined by using that A(L)⁻¹A(L) = Id: we have
p0 = 1; pn = Σ_{j=1}^{min{k,n}} pn−j aj, n > 0,    (4.9)
which determines the coefficients pi. Then,
Xn = A(L)⁻¹(m + εn).    (4.10)

4.5

Autoregressive Processes

415

Of course, R proposes intrinsic functions for the simulation and the identification of AR processes. For instance (see Example 4.15):
• ar identifies the coefficients of the AR process. Options allow to choose the method (Yule–Walker is the default);
• arima.sim simulates the process;
• arima identifies the coefficients.
Remark 4.7 The elimination of the mean m is possible in the simulation of an AR process. Indeed, let Yn = Xn − μ. Then,
Yn = εn + Σ_{i=1}^{k} ai Yn−i.
Thus, Yn verifies the Yule–Walker equations with m = 0:
RY(p) = δp0 + Σ_{i=1}^{k} ai RY(i − p).
For instance,
( RY(1)    ⋯  RY(k)    ) ( a1 )   ( RY(0) − 1 )
( RY(0)    ⋯  RY(k−1)  ) ( ⋮  ) = ( RY(1)     )
( ⋮            ⋮       ) ( ak )   ( ⋮         )
( RY(2−k)  ⋯  RY(1)    )          ( RY(k−1)   )
Analogously, the least-squares approach leads to the linear system:
( Yk    ⋯  Y1   ) ( a1 )   ( Yk+1   )
( ⋮         ⋮   ) ( ⋮  ) = ( ⋮      )
( Yk+n  ⋯  Yn+1 ) ( ak )   ( Yk+n+1 )

The characteristic polynomial A is the same for X and Y, so that we have Yn = A(L)-1(εn).

Example 4.15 Let us consider the process Xn = εn + (5/6)Xn−1 − (1/6)Xn−2. We have m = 0, k = 2, a1 = 5/6, a2 = −1/6. Thus, μ = 0. Since the roots of the polynomial A(x) = 6 − 5x + x² are x1 = 2, x2 = 3, the process is stationary. We have:

(  1     −5/6   1/6 ) ( RX(0) )   ( 1 )
( −5/6    7/6    0  ) ( RX(1) ) = ( 0 )
(  1/6   −5/6    1  ) ( RX(2) )   ( 0 )

Thus, RX(0) = 21/10, RX(1) = 3/2, RX(2) = 9/10. Now, we use Eq. (4.8) to determine the other values of RX:

RX(3) = RX(2 + 1) = a2 RX(1) + a1 RX(2) = 1/2;
RX(4) = RX(2 + 2) = a2 RX(2) + a1 RX(3) = 4/15;
RX(5) = RX(2 + 3) = a2 RX(3) + a1 RX(4) = 5/36; . . .
The instruction

generates the exact values of RX(i)/RX(0) for 0 ≤ i ≤ 10. We present in Fig. 4.39 a comparison between the Yule–Walker formulae (4.6) (as above).
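The instruction itself appears as an image; one built-in possibility giving the exact normalized autocorrelations of this AR(2) process is (an assumption about the call actually used):

ARMAacf(ar = c(5/6, -1/6), lag.max = 10)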

Fig. 4.39 Values of RX(i)/RX(0). Yule–Walker’s formula and R furnish the same results

(continued)

4.5

Autoregressive Processes

417

Example 4.15 (continued) Now, assume that our data are the values of RX(4), . . ., RX(8). We use the Yule–Walker Eq. (4.6) to determine mμ, a1, a2: 0

R X ð 5Þ

B B RX ð6Þ @ RX ð7Þ

RX ð4Þ RX ð5Þ RX ð6Þ

1

10

a1

1

0

RX ð6Þ

1

CB C B C 1C A@ a2 A = @ RX ð7Þ A mμ RX ð8Þ 1

Of course, we may consider also 0

R X ð - 1Þ R X ð - 2 Þ

B B R X ð 0Þ @ R X ð 1Þ

R X ð - 1Þ R X ð 0Þ

10

1 0 1 R X ð 0Þ - 1 a1 CB C B C 1C A @ a2 A = @ R X ð 1Þ A 1

1



R X ð 2Þ

The results are almost exact:

A simulation of Xn can be made by calling arima.sim:
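A sketch of the call (the process has zero mean, so no constant needs to be added):

ns <- 1000
X  <- arima.sim(model = list(ar = c(5/6, -1/6)), n = ns)
plot(1:ns, X, type = "l", xlab = "n", ylab = "X")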

An example of simulation appears in Fig. 4.40.

Fig. 4.40 Simulation of the process by arima.sim

(continued)

418

4

Stochastic Processes

Example 4.15 (continued)

We can determine the empirical values of RX(i)/RX(0) by the code at right. Here, Rx_normalized contains the normalized empirical values. A comparison is made in Fig. 4.41.

Fig. 4.41 Values of RX(i)/RX(0)

We can verify the order of the process by determining its partial autocorrelations:
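For instance (a sketch):

pacf(X, lag.max = 20)                            # plots the partial autocorrelations
rho <- pacf(X, lag.max = 20, plot = FALSE)$acf   # values without the plot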

The command produces the result in Fig. 4.42: as expected, the values are near zero for s > 2. Analogous results are generated by vpcorr1a (Fig. 4.43) (continued)

4.5

Autoregressive Processes

419

Example 4.15 (continued)

Fig. 4.42 Values of ρX generated by pacf. As expected, the values are near zero for s > 2

Fig. 4.43 Values of ρX generated by vpcorr1a

(continued)

420

4

Stochastic Processes

Example 4.15 (continued) Now, assume that we desire to identify the coefficients from the available data (the trajectory with 1000 values, generated by arima.sim). Since we identified that the order is k = 2 from the partial autocorrelations, we call arima to furnish a first estimation of the coefficients and use it as first estimate for a least-squares estimation of the coefficients:

The function eqs evaluates the residual ‖Xn − a1 Xn−1 − a2 Xn−2 − a3‖; a3 estimates m.

Running this code, an example of result is

Thus, the coefficients are estimated as a1 ≈ 0.85, a2 ≈ −0.18, m ≈ −0.03. The exact values are a1 = 5/6 ≈ 0.83, a2 = −1/6 ≈ −0.17, m = 0.00. Notice that arima furnishes good results, which are slightly improved by fminunc. In addition, arima furnishes the AIC value and you can generate confidence intervals for the coefficients:

4.5

Autoregressive Processes

421

Example 4.15 (continued)

As indicated previously, we can simulate the process by using (4.11) directly. In this case, we must start by finding the values of pn: Eq. (4.9) furnishes the values, which can be determined by the function calc_p at right. The instructions
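The function itself is shown as an image; a possible implementation of calc_p, following Eq. (4.9), is:

calc_p <- function(a, nmax) {
  k <- length(a)
  p <- numeric(nmax + 1)
  p[1] <- 1                                  # p[n+1] stores p_n, so p0 = 1
  for (n in 1:nmax) {
    jmax <- min(k, n)
    p[n + 1] <- sum(p[n + 1 - (1:jmax)] * a[1:jmax])
  }
  return(p)
}
p <- calc_p(c(5/6, -1/6), 20)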

determine p0, . . ., pnmax Taking nmax = 20, we obtain (p[[i+1]] = pi)

(continued)

422

4

Stochastic Processes

Example 4.15 (continued) Then, the simulation is generated by the function sim.ar below:
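sim.ar is shown as an image; the sketch below follows the same idea, truncating the expansion A(L)⁻¹ = Σ pi Lⁱ at order nmax (the gaussian white noise and the argument names are assumptions):

sim.ar <- function(ns, a, m, nmax = 50) {
  p   <- calc_p(a, nmax)
  eps <- rnorm(ns + nmax)                # white noise
  x   <- numeric(ns)
  for (n in 1:ns) {
    idx  <- (n + nmax) - (0:nmax)        # positions of eps_n, ..., eps_{n-nmax}
    x[n] <- sum(p * (m + eps[idx]))      # X_n = A(L)^{-1}(m + eps_n), truncated
  }
  return(x)
}
x1 <- sim.ar(1000, c(5/6, -1/6), 0)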

For instance, we can call this function as shown at right. This call generates a trajectory x1 formed by ns = 1000 points. The trajectory is plotted in Fig. 4.44. The partial autocorrelation is plotted in Fig. 4.45. The normalized autocorrelations appear in Fig. 4.46. If m is estimated from the trajectory, we obtain m ≈ 5E - 3.

Fig. 4.44 A realization of trajectory generated by sim.ar

(continued)

4.5

Autoregressive Processes

423

Example 4.15 (continued)

Fig. 4.45 Values of ρX of the trajectory generated by sim.ar. As expected, they are near zero for a lag > 2

Fig. 4.46 Values of RX(i)/RX(0)for the trajectory generated by sim.ar

(continued)

424

4

Stochastic Processes

Example 4.15 (continued) For this simulated data, the identification procedure produces

Thus, the estimates are a1 ≈ 0.83, a2 ≈ −0.16, m ≈ 0.00. The simulations can be used to determine the distribution of Xn – since the process is stationary, the distribution is independent from n. The determination can be done either from the data or from Eq. (4.11). To use the data, we can apply the methods from Chap. 3 and introduce an artificial gaussian variable to be used to determine the distribution. Otherwise, Eq. (4.11) furnishes a representation of Xn as a sum of terms of a white noise – analogous to an MA process – which can be used to determine the exact distribution of Xn: the distribution is gaussian, with mean m Σ pi and variance Σ pi² – m is estimated. The results are compared in Figs. 4.47 and 4.48. Observe that (4.11) furnishes excellent results.

Fig. 4.47 Cumulative function and probability density of process X n = εn þ 56 X n - 1 1 6 X n - 2 , corresponding to a simulation of 1000 variates. Continuous line is the exact value

(continued)

4.5

Autoregressive Processes

425

Example 4.15 (continued)

Fig. 4.48 PDF of process X n = εn þ 56 X n - 1 - 16 X n - 2 , corresponding to a simulation of 1000 variates. Continuous line is the exact. When using data, the PDF is obtained by SPH derivative of the CDF using a gaussian kernel

Exercises 1. Let Xn = αXn - 1 + εn. Show that the process is stationary if |α| < 1. Assuming that |α| < 1, find the coefficients pn defining A(L )-1. Determine the values of RX( p), p = 0, 1, 2, 3, 4. Determine ρX( p), p = 0, 1, 2, 3, 4. Generate a sample of 1000 values of N(0, 1) and use it to simulate the process (f) Use these data to identify the process.

(a) (b) (c) (d) (e)

2. Let X n = εn þ 56 X n - 1 - 16 X n - 2 : (a) (b) (c) (d)

Show that the process is stationary. Find the coefficients pn defining A(L )-1. Determine the values of RX( p), p = 0, 1, 2, 3, 4. Determine ρX( p), p = 0, 1, 2, 3, 4. (continued)

426

4

Stochastic Processes

(e) Generate a sample of 1000 values of N(0, 1) and use it to simulate the process. (f) Use these data to identify the process. 3. Let X n = εn þ 7X6n - 1 -

Xn - 2 3

:

(a) (b) (c) (d) (e)

Show that the process is stationary. Find the coefficients pn defining A(L )-1. Determine the values of RX( p), p = 0, 1, 2, 3, 4. Determine ρX( p), p = 0, 1, 2, 3, 4. Generate a sample of 1000 values of N(0, 1) and use it to simulate the process. (f) Use these data to identify the process.

4. Let X n = εn þ 13X12n - 1 -

3X n - 2 8

n-3 þ X24 :

(a) (b) (c) (d) (e)

Show that the process is stationary. Find the coefficients pn defining A(L )-1. Determine the values of RX( p), p = 0, 1, 2, 3, 4. Determine ρX( p), p = 0, 1, 2, 3, 4. Generate a sample of 1000 values of N(0, 1) and use it to simulate the process. (f) Use these data to identify the process.

4.6

ARMA Processes

A process ARMA is the combination between an autoregressive and a moving average processes: Definition 4.9 X is ARMA(r, s) if and only if X is weakly stationary and there exists a discrete white noise ε = {εn : n 2 ℤ}, a number m 2 ℝ and two vectors of coefficients a = (a1, . . ., ar) and b = (b1, . . ., bs) such that 8n 2 ℤ : X n = m þ εn þ

r X i=1

ai X n - i þ

s X

bi ε n - i :

i=1

The instruction arima.sim can simulate ARMA processes (see Example 4.16). If you desire to make your own program of simulation, it is necessary to use an approach analogous to the one introduced for AR processes: let L be the delay operator (see Remark 4.5) and let us denote

4.6

ARMA Processes

427

AðLÞ = Id -

r X

ai Li ,BðLÞ = Id þ

i=1

s X

bi Li :

ð4:11Þ

i=1

Then, AðLÞX n = m þ BðLÞεn and X n = AðLÞ - 1 m þ AðLÞ - 1 BðLÞεn

ð4:12Þ

The determination of A(L )-1 is made by Eqs. (4.9) and (4.10). In addition, let us set AðLÞ - 1 BðLÞ =

þ1 X n=0

qn Ln :

Let us take b0 = 1. Then AðxÞ - 1 BðxÞ =

þ1 X i=0

pi x i ×

s X

bj x j :

j=0

Thus, AðxÞ - 1 BðxÞ =

þ1 X s X i=0 j=0

pi bj xiþj ,

id est, AðxÞ - 1 BðxÞ =

þ1 X

xn

minX ðs, nÞ

n=1

j=0

! pn - j bj :

Hence, qn =

min ðs, nÞ X j=0

pn - j bj :

ð4:13Þ

This equation is the equivalent for A(L )-1B(L ) of Eq. (4.10) for A(L )-1. Thus, the simulation of a process ARMA analogous to the simulation of a process AR:

428

4

Stochastic Processes

1. If r values X0, . . ., Xr - 1 are given, you can generate a sample from the white noise ε and use the recurrence in the definition of the ARMA(r, s) to determine Xr, Xr + 1, . . .; 2. Otherwise, it is necessary to determine A(L )-1 and A(L)-1B(L ) by using Eqs. (4.10) and (4.14). Then, you can generate a sample from the white noise and use Eq. (4.13) to generate X. Analogously to AR processes, ARMA ones satisfy a family of Yule-Walker equations : μ 1-

r X

! ai

= m;

ð4:14Þ

i=1

RX ðpÞ = mμ þ

r X

ai RX ði - pÞ,if p > s:

ð4:15Þ

i=1

By the same way, these equations may be used to determine the quantities mμ, a1, . . ., ar . Taking the values =s + 1, . . ., s + r , we obtain k + 1 linear equations for the determination of the quantities mμ, a1, . . ., ak : 0

R X ðsÞ



B B R X ð s þ 1Þ ... B B B ⋮ ⋮ B B B RX ðs þ r - 1Þ . . . @ ⋯ RX ðs þ r Þ

RX ðs - r þ 1Þ RX ðs - r þ 2Þ ⋮ R X ðsÞ R X ð s þ 1Þ

1

1

0

1

0

RX ðs þ 1Þ

1

C a1 B C 1 C RX ðs þ 2Þ C C B B CB C C CB C B⋮C B B C C ⋮ CB ⋮ C=B C C a CB B C @ r A B C R 1 C ð s þ r Þ X A @ A mμ 1 RX ðs þ r þ 1Þ

Once the system is solved, we may determine m, analogously to an AR process. As an alternative, the identification of the AR part of the process may also be made by the least-squares approach, by solving the linear system (4.7). Finally, we observe that Y n = Xn - m -

r X

ai X n - i = ε n þ

i=1

s X

bi ε n - i :

ð4:16Þ

i=1

Thus, Yn is a process MA(s) having a mean equal to zero. Thus, the coefficients bi, i = 1, . . ., s verify Eqs. analogous to Eq. (4.9): s-p X

bi biþp = RY ðpÞ:

ð4:17Þ

i=0

Taking p = 0, . . ., s - 1, we obtain a system of nonlinear equations which determines the coefficients b1, . . ., bs.

4.6

ARMA Processes

429

The main difficulty in the identification of ARMA processes is the determination of the orders (r, s). If these orders are known, the methods for the identification of AR and MA may be applied. But the selection of a convenient couple (r, s) may involve difficulties. One of the methods to choose the values of (r, s) is the use of model selection tools, such as AIC (Akaike, 1974), BIC (Schwarz, 1978), or HQ (Hannan & Quinn, 1979). The reader will find in the literature other methods of identification of ARMA processes (see, for instance, Choi, 1992). When using R, arima returns the values of AIC. In addition, the package forecast proposes a function auto.arima which automatically chooses the best model according to AIC or BIC values. It proposes also a function Arima which returns AIC and BIC values, and a function forecast, which can be used for simulation.

5x x2 16x 4x2 , BðxÞ = 1 þ þ þ 15 6 6 15

The roots of the polynomial A are x1 = 32 ,x2 = 52 , so that the process is stationary. We can simulate the process using arima.sim:
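A sketch of the call:

ns <- 1000
X  <- arima.sim(model = list(ar = c(16/15, -4/15), ma = c(5/6, 1/6)), n = ns)
plot(1:ns, X, type = "l", xlab = "n", ylab = "X")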

An example of result appears in Fig. 4.49. Assuming that we know the order of the process, we can use arima to identify the coefficients:

We obtain

(continued)

430

4

Stochastic Processes

Example 4.16 (continued) Thus, the estimates are a1 ≈ 1.13, a2 ≈ - 0.34, b1 ≈ 0.73, b2 ≈ 0.09, m ≈ 0.17. The exact values are a1 ≈ 1.07, a2 ≈ - 0.27, b1 ≈ 0.83, b2 ≈ 0.17, m ≈ 0.00. Confidence intervals can be generated by confint:

Notice that the standard errors can be used to evaluate t-values and pvalues:

Fig. 4.49 Example of trajectory generated by arima.sim

(continued)

4.6

ARMA Processes

431

Example 4.16 (continued) Let us use auto.arima:

Thus, the process is identified as ARMA(3,1). If you desire to use your own program of simulation, you need to determine the coefficients pn and qn. Equation (4.10) furnish pn:

Then, Eq. (4.16) furnishes qn:

These quantities can be evaluated as follows: (continued)

432

4

Stochastic Processes

Example 4.16 (continued)

and the process is simulated as follows:

(continued)

4.6

ARMA Processes

433

Example 4.16 (continued) An example of simulation is given in Fig. 4.50 It uses the code

Fig. 4.50 Example of trajectory generated by arma.sim

Let us use the data to identify the coefficients. Using arima:

Thus, the estimates are a1 ≈ 1.13, a2 ≈ - 0.33, b1 ≈ 0.79, b2 ≈ 60.07, m ≈ 0.5. Let us use auto.arima: (continued)

434

4

Stochastic Processes

Example 4.16 (continued)

Thus, auto.arima identifies a process ARMA(3,1). The results improve by increasing nmax. For nmax=100, arima furnishes

and auto.arima furnishes

These results are close to the exact values. auto.arima identifies a process AR(2,1), which can be considered as AR(2,2) with b2 = 0 instead of the exact value 0.04. We can use these values to determine the distribution of Xn. (continued)

4.6

ARMA Processes

435

Example 4.16 (continued)

Indeed, we can determine the coefficients: associated to the process identified:

The mean is identified by auto.arima as zero (otherwise it is the value of the P 2 last coefficient) and the variance is σ 2 = qi . Thus, the distribution is  pffiffiffiffiffiffiffiffiffiffiffiffi  ffi P 2 pei . A comparison with the exact one is made in Figs. 4.51 and 4.52. N 0,

Fig. 4.51 CDF identified by auto.arima from data generated by sim.arma

Fig. 4.52 CDF identified by auto.arima from data generated by sim.arma

436

4

Exercises 1. Let X n = (a) (b) (c) (d) (e) (f) (g) (h) (a) (b) (c) (d) (e) (f) (g) (h) (i)

Xn - 1 2

þ εn þ εn3- 1 :

Determine A(x), B(x). Verify that the process is stationary. Determine the coefficients pn of the expansion of A(x)-1 Determine the coefficients qn of the expansion of A(x)-1B(x) Generate a sample of 1000 values of N(0, 1) and use it to simulate X. Use the data furnished by the simulation to determine approximate values of mX, RX( p), p = 0, 1, 2, 3, 4. Fit processes ARMA(1, 1), ARMA(2, 1), ARMA(1, 2) to the data. Assume that the orders are unknown. Use auto.arima to fit an ARMA process to the data. ε X Let X n = n - 1 þ εn þ n - 1 : 3 2 Determine A(x), B(x). Verify that the process is stationary. Determine the coefficients pn of the expansion of A(x)-1 Determine the coefficients qn of the expansion of A(x)-1B(x) Generate a sample of 1000 values of N(0,1) and use it to simulate X. Use the data furnished by the simulation to determine approximate values of mX, RX( p), p = 0, 1, 2, 3, 4. Fit processes ARMA(1, 1), ARMA(2, 1), ARMA(1, 2) to the data. Assume that the orders are unknown. Use auto.arima to fit an ARMA process to the data. 3Y n - 1 Y n - 2 ε þ εn - n - 1 : 4 8 16 16 Show that mX = : 3 3 1 1 Let Xn = Yn - mX. Show that X n = X n - 1 - X n - 2 þ εn ε 4 8 16 n - 1 Determine A(x), B(x) for the process Xn Verify that the process Xn is stationary Determine the coefficients pn of the expansion of A(x)-1 Determine the coefficients qn of the expansion of A(x)-1B(x) Generate a sample of 1000 values of N(0,1) and use it to simulate X. Use the data furnished by the simulation to determine approximate values of mX, RX( p), p = 0, 1, 2, 3, 4. Fit processes ARMA(1, 1), ARMA(2, 1), ARMA(1, 2) to the data. Assume that the orders are unknown. Use auto.arima to fit an ARMA process to the data.

(a) Let Y n = 2 þ (b) (c) (d) (e) (f) (g) (h) (i) (j) (k)

Stochastic Processes

4.7

Markov Processes

4.7

437

Markov Processes

Markov processes – also called Markov chains – are processes without memory: the future value Xn + 1 depends only on the actual state Xn. In other words, the conditional probability of Xn + 1 knowing the past values X0, . . ., Xn depends only upon Xn: PðX nþ1 < xnþ1 jX i < xi , 0 ≤ i ≤ nÞ = PðX nþ1 < xnþ1 jX n < nÞ: Therefore, a Markov process is fully characterized by its probability of transition: Pðt, x, s, yÞ = PðX t < xjX s < yÞ: 2

t, x, s, yÞ . The process is The associated probability density is pðt, x, s, yÞ = ∂ Pð∂x∂y stationary if and only if P(t, x, s, y) = P(t - s, x, y) and p(t, x, s, y) = p(t - s, x, y). As previously explained, it is usual to distinguish between continuous time end discrete time, according to the possible values of t and s. R propose many packages to deal with Markov processes. For instance, markovchain (Discrete time Markov Chains) and msm (Markov chains in continuous time). You will find in the R repositories many other packages for the simulation, analysis, and modeling of Markov processes – for instance, hmm for Hidden Markov Chains – id est, chains where our knowledge is incomplete: values of probabilities or states are unknown. Let us consider a stationary Markov chain where X can take a finite number of values {x1, . . ., xk}. In the language of Markov chains, xi is referred as a possible state of the chain. Then,

    p n, xi , xj = P X nþ1 = xi jX n = xj = π ij ðnÞ: We have π ij ðnÞ ≥ 0,

k X

π ij ðnÞ = 1

i=1

These probabilities are collected into a matrix Π(n) = (π ij(n) : 1 ≤ i, j ≤ k). In the framework of Markov processes, the matrix Π(1) is simply denoted Π and is called transition matrix. The chain may be represented by a graph where the arcs represent the transitions. As an example, we represent the case where k = 4 in Fig. 4.53. We have PðX nþ1 = xi Þ =

X     P Xnþ1 = xi jX n = xj P X n = xj

438

4

Stochastic Processes

Fig. 4.53 Graphical representation of a Markov Chain with 4 states

Let us introduce  t ðsÞ ðsÞ ðsÞ pðsÞ = p1 , . . . , pk , PðX s = xi Þ = pi : Then, pð1Þ = Πpð0Þ : Thus, by recurrence pðnÞ = Πn pð0Þ ⟺ΠðnÞ = Πn : The states of a Markov chain are usually classified into recurrent, transient, and absorbing. A state is recurrent when the probability of visiting at some time is equal to 1: P(∃n 2 ℕ : Xn = xi) = 1. Markov chains with a finite number of states have at least one recurrent state. A state is transient when such a probability is strictly inferior to 1: P(∃n 2 ℕ : Xn = xi) < 1. A state is absorbing when it cannot be leaved, id est, ∃n 2 ℕ : P(Xn + s = xi| Xn = xi) = 1, 8 s ≥ 0. Not all the Markov chains have absorbing states. An absorbing state is characterized by the fact that π ii(n) = 1, so that the transition matrix Π(n) contains columns of an identity matrix. A Markov chain that has an absorbing state is

4.7

Markov Processes

439

said an absorbing chain. For such a chain, it is usual to renumber the states to write the transition matrix under the canonical form  Π=

Id

0

A

B



Under the canonical form, A gives the probability of transition from a non-absorbing state to an absorbing state, while B gives the probability of transitions between non-absorbing states. Matrix M = (Id - B)-1 plays an essential rule in the theory of Markov processes. For instance, Mij gives the expected number of visits to P state i when starting at state j. Analogously, M ∎j = ki= 1 M ij gives the expected number of steps before absorption when starting at state j. The matrix N = MA gives the probability Nij of an absorption by the ith. absorbing state when the initial state is the jth non-absorbing state. We say that p is an stationary probability if and only if p = Πp: A stationary probability is a fixed point of the chain: if the chain reaches such a probability, then the probability distribution remains constant for all the posterior times. If p(n)→p for n → + 1 then p = Πp , so that p is a steady state probability. Notice that p corresponds to an eigenvector of Π, associated to the eigenvalue 1, so that it is a solution of the overdetermined linear system  Cp = D,C =

Id - Π 1

   0 : , D= 1

This linear system may be solved by least squares: Ct Cp = Ct D: To verify if the least-squares solution is a stationary probability, we must evaluate the error e = kCp - D k. If e = 0, the solution found is a stationary probability. Otherwise, not. From Perron–Frobenius theorem, it is expected that Πn converges to a matrix having all the columns equal to p, i. e., Πn → ðp, . . . , pÞ: Thus, a second way to find a stationary probability consists in calculating powers of Π until convergence to a matrix having the form above. To simulate the chain, we must generate a sequence X1, . . ., Xn from an initial value X 0 = xi0 . To do this, we choose an initial vector pð0Þ having a single component equal to 1 at position i0 and the other components null. Then, we evaluate

440

4

Stochastic Processes

pð1Þ = Πpð0Þ and we choose the value xi1 at random, among the values x1, . . ., xk, ð1Þ ðk Þ according to the probabilities p1 , . . . ,p1 . For instance, we may generate a variate u1 from the uniform distribution on (0,1) and find its position in the cumulative ð1Þ distribution associated to p(1): we take the value xi1 such that p1 þ . . . þ ði - 1Þ ð1Þ ði Þ p1 1 < u1 < p1 þ . . . þ p1 1 Then, we set pð1Þ as the vector having a single component equal to 1 at position i1 and the other components null and we make the same operations again. Remark 4.8 Under R, package markovchain proposes a complete set of instructions to deal with Discrete Markov Chains. The formulation used in the package is slightly different, since the probabilities of transition are transposed, id est, p(n, xi, xj) = π ji(n), except if you set parameter byrow = TRUE

Example 4.17 Let us consider a stationary Markov chain such that
     ( 1/2    0   1/2 )
Π =  (  0    1/2  1/2 )
     ( 1/2   1/2   0  )
The matrix Π is symmetric, so that we can work by rows (probabilities of transition are given by a row: parameter byrow = TRUE) or not (probabilities of transition are given by a column: parameter byrow = FALSE). We can create a Markov chain with probabilities of transition in rows as shown at right:
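The creation code is shown as an image; a sketch with package markovchain is (the state names are an assumption):

library(markovchain)
P <- matrix(c(1/2,   0, 1/2,
                0, 1/2, 1/2,
              1/2, 1/2,   0), nrow = 3, byrow = TRUE)
mc1 <- new("markovchain", states = c("st 1", "st 2", "st 3"),
           transitionMatrix = P, byrow = TRUE)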

4.7

Markov Processes

441

To verify the transition matrix, use print (a more complete view is offered by show).

To find the stationary probabilities, use steadyStates:

To find the absorbing states, use absorbingStates – this chain has not any absorbing state. For the recurrent and transient states, there are analogous instructions. For this chain, all the states are recurrent and none is transient.

Fig. 4.54 Plot of the chain

The package offers many functions, allowing you to determine conditional probabilities, hitting probabilities, analyzing first passages, etc. For instance, (continued)

442

4

Stochastic Processes

Example 4.17 (continued) you can plot the chain: the instruction plot(mc1) produces the result in Fig. 4.54. To simulate the chain, you can use rmarkovchain. For instance, let us simulate 20 states, starting from the initial state “st 2”:
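A sketch of the simulation call (assuming the chain mc1 created above):

sim_states <- rmarkovchain(n = 20, object = mc1, t0 = "st 2")
sim_states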

The package offers also functions to fit a Markov chain. For instance, let us create a sample of 100 variates from the uniform distribution on {1, 2, 3}:

Let us consider data1 as a sequence of observed data. We start by testing the Markov property of the data:

Thus, the odds that the data do not correspond to a Markov chain are about 10 %. The instruction createSequenceMatrix counts the transitions:

You can also use markovchainFit:

(continued)

4.7

Markov Processes

443

Example 4.17 (continued) Here, mc2 contains other fields, such as the standard errors and confidence intervals. You can generate predictions from the empirical data. For instance, let us predict the 4 next states knowing that the three first ones are 1, 1, 2:

Example 4.18 Let us consider a stationary Markov chain such that 0

0

B Π=B @0 1

1

0

1

0

C 1C A

0

0

We create a Markov Chain using the columns as probabilities of transition: here, P(Xn + 1 = xi| Xn = xj) = π ji(n)

Analyze the states:

(continued)

444

4

Stochastic Processes

Example 4.18 (continued) plot(mc1) plots the chain: the result appears in Fig. 4.55. Fig. 4.55 Plot of the chain

Let us simulate 1e4 states, starting from the initial state “st 1”, check the Markov property on the simulated data and fit a transition matrix to it

4.7

Markov Processes

445

Example 4.19 Let us consider a stationary Markov chain such that 0

0:2 0 B 0 1 B Π=B @ 0:4 0:2 0 0

1 0 0 C C C 0:1 0:3 A 0 1 0:8 0

This Markov chain has 2 absorbing states: 2 and 4. By renumbering the states: 1 → 3, 2 → 1, 3 → 4, 4 → 2, we obtain the canonical form 0

1 B 0 B Πcan = B @ 0 0:2

0 1

0 0

0

0:2

1 0 0 C C C: 0:8 A

0:3

0:4

0:1

Thus,  A=

0 0:2

  0:2 ,B = 0:3 0:4 0

0:8 0:1

 :

We create the Markov Chain, using the rows as probabilities of transition:

Analyze the states:

(continued)

446

4

Example 4.19 (continued) The chain is plotted in Fig. 4.56. Fig. 4.56 Plot of the chain

Let us determine the canonical form using R:

Determine M. The expected number of steps before absorption are M∎1. = 4.25, M∎3 = 3 You can get the expected number of steps before absorption directly:

Stochastic Processes

4.7

Markov Processes

447

Evaluate N:

You can find the probabilities of absorption directly:

Exercises 1. Let us consider a Markov chain having states {1, 2} and transition matrix (0 ≤ a, b ≤ 1)  Π=

a

b

1-a

1-b



(a) (b) (c) (d) (e)

Draw the graphical representation of the chain. Determine the values of P(Xn + 1 = 1| Xn = 2) and P(Xn + 1 = 2| Xn = 1) Determine the values of a, b such that the chain is absorbing. Determine the values of a, b such that the chain is periodical. Find a general expression of the eigenvector v of Π associated to the eigenvalue 1. (f) Determine the conditions on v such that v = p. (g) Simulate the chain for convenient values of a, b.

2. Let us consider a Markov chain having states {1, 2, 3, 4} and transition matrix 0

0, 02

B 0, 91 B Π=B @ 0 0, 07 (a) (b) (c) (d)

0, 12

0

0, 4

0, 6

0, 48 0

0, 4 0

0, 7

1

0 C C C 0 A

0, 3

Draw a graphical representation of the chain Determine the values of P(Xn + 1 = 2| Xn = 3) and P(Xn + 1 = 3| Xn = 2) Determine the stationary probability. Simulate the chain. (continued)

448

4

Stochastic Processes

3. This exercise comes from (Sinclair, 2005), under Creative Commons Attribution License 1.0 license. Consider a machine that can be in the states idle, busy, waiting, broken , in repair – numbered respectively 1–5. The transition matrix is 0

0, 05

0, 1

0

0

0, 75

1

C B B 0, 93 0, 43 0, 6 0 0 C C B C B B 0, 43 0, 35 0 0 C Π=B 0 C C B C B 0, 02 0, 04 0, 05 0, 8 0 A @ 0 0 0 0, 2 0, 25 (a) (b) (c) (d)

Draw a graphical representation of the chain Determine the values of P(Xn + 1 = 2| Xn = 4) and P(Xn + 1 = 4| Xn = 2) Determine the stationary probability. Simulate the chain.

4. This exercise comes from (Chen, 2008), under Creative Commons Attribution License 1.0 license. Consider a Markov chain to generate music: the states are the notes A, A#, B, C, D, E, F, G, G# – numbered respectively 1–9. The transition matrix is 0

4 B 19 B B B 0 B B B 3 B B 19 B B B B 0 B B B 2 Π=B B 19 B B B 1 B B 19 B B B 0 B B B 6 B B 19 B @ 3 19

1

7 15

0

0

0

0

0

0

1 15 4 0 15

6 15 3 15 6 15

0

4 19 1 19

0

1 5

0 0

0

0

3 11 3 11 5 11

3 19

1 1 5 5 2 0 5

0

0 0

5 19 4 19 1 19 1 19

1 0 5

0

0

0

3 15

0

0

0

0

0

0

0

0

0

0

0

0

0

0 0 3 1 5 5 0 0

1 0

C C C 0C C C 3C C 4C C C C 0C C C C 0C C C C 1C C 4C C C 0C C C C 0C C C A 0

(a) Determine the values of P(Xn + 1 = 2| Xn = 6) and P(Xn + 1 = 6| Xn = 2) (b) Determine the stationary probability. (c) Simulate the chain.

4.8

Diffusion Processes

4.8

449

Diffusion Processes

Stochastic differential equations (SDE) are a common tool in Mathematical Finance, namely the famous model of Fischer Black and Myron Scholes (1973) for the price of options, initially introduced as a partial differential equation and reinterpreted by Robert Cox Merton as a SDE (Merton, 1973). Previous works tending to introduce probability in the analysis of finance were made by (Regnault, 1863; Cowles, 1933) and namely by (Bachelier, 1900), which firstly introduced Brownian motion in finance. Since these works, SDE has become a central tool in finance, but also in other fields such as, for instance, stochastic optimization and reliability. The mathematical foundations are connected to Itô’s Calculus (Itô, 1944, Itô, 1946, 1950, 1951). The solutions of SDE are usually referred as diffusion processes. You can find in the R repositories packages for the simulation of SDE, such as Sim.DiffProc and sde.

4.8.1

Time Integral and Derivative of a Process

Standard integrals are defined as limits of finite sums. For instance, to define Rb f ðt Þdt, we may consider a partition a

T = fa = t 0 < t 1 < . . . < t n - 1 < t n = bg

ð4:18Þ

of the interval (a, b) and the finite sum Sð T , f Þ =

n X

ðt i - t i - 1 Þf ðt i - 1 Þ:

ð4:19Þ

i=1

For a regular function f, Zb Sð T , f Þ →

f ðt Þdt a

when n → + 1 and max{ti - ti - 1} → 0. We may define the integral of a stochastic process with respect to time into an analogous way. For instance, let us consider a stochastic process {X(t), t 2 (a, b)}. We may consider the finite sum I ðT , XÞ =

n X i=1

ðt i - t i - 1 ÞX ðt i - 1 Þ:

ð4:20Þ

450

4

Stochastic Processes

If the process has integrable variances, id est, if Zb

  E X 2 ðt Þ dt < 1 ,

a

then the finite sum converges for n → + 1 and max{ti - ti - 1} → 0. In this case, we have, analogously the standard functions (see, for instance Föllmer, 1981): Zb ð4:21Þ

X ðt Þdt:

I ðT , X Þ → I ðX Þ = a

I(X) is called the integral of the process X with respect to the time. It is linear: I ðαX þ βY Þ = αI ðX Þ þ βI ðY Þ, 8α,β 2 ℝ: and Zc

Zb X t dt þ

a

Zb X t dt =

c

ð4:22Þ

X t dt: a

From Eq. (4.21): Zb E ð I ðX Þ Þ =

  ZZ b mX ðt Þdt,E I ðX Þ2 = RXX ðs, t Þdsdt:

ð4:23Þ

a

a

Let us define Zt dY t = X t dt⟺Y t = Y 0 þ

X s ds,8t > 0:

ð4:24Þ

0

Then, Xt is the derivative of Yt with respect to time. We have Zt mY ðt Þ = mY ð0Þ þ

mX ðsÞds,8t > 0⟹mY = 0

d m : dt X

ð4:25Þ

4.8

Diffusion Processes

451

Assuming that Y0 and X are independent, we have 

0

 2

RYY ðs, t Þ = E Y 0 þ mY ð0Þ@

Zs

Zt mX ðτ1 Þdτ1 þ

0

1 mX ðτ2 Þdτ2 A

0

τZ2 = t

τZ1 = s

ð4:26Þ

RXX ðτ1 , τ2 Þdτ1 dτ2 ,

þ τ1 = 0

τ2 = 0

so that 2

RXX ðs, t Þ =

∂ RYY ðs, t Þ: ∂s∂t

ð4:27Þ

∂ RYY ðs, t Þ: ∂t

ð4:28Þ

Into an analogous way, we have RYX ðs, t Þ =

Example 4.20 Let us consider the process Yt = A1 cos (ωt) + A2 sin (ωt), where ω 2 ℝ, while A1, A2 are independent random variables such that E(Ai) = 0, V(Ai) = σ 2, i = 1, 2. We have mY ðt Þ = 0,V Y ðt Þ = σ 2 ,RYY ðs, t Þ = σ 2 cos ðωðs - t ÞÞ: Let dYt = Xtdt, Y0 = A1. We have 2

RXX ðs, t Þ =

∂ RYY ðs, t Þ = ω2 σ 2 cos ðωðs - t ÞÞ: ∂s∂t

RYX ðs, t Þ =

∂ RYY ðs, t Þ = ωσ 2 sin ðωðs - t ÞÞ: ∂t

Indeed, Zt Y t = A1 - A1 ω

Zt sin ðωsÞds þ A2 ω

0

cos ðωsÞds, 0

(continued)

452

4

Stochastic Processes

Example 4.20 (continued) So that, we have X t = - A1 ω sin ðωt Þ þ A2 ω cos ðωt Þ:

Example 4.21 Let us consider the process Xt = A1 cos (ωt) + A2 sin (ωt), where ω 2 ℝ, while A1, A2 are independent random variables such that E(Ai) = 0, V(Ai) = σ 2, i = 1, 2. We have mX ðt Þ = 0,V X ðt Þ = σ 2 ,RXX ðs, t Þ = σ 2 cos ðωðs - t ÞÞ: Let dYt = Xtdt, Y0 = 0. We have mY ðt Þ = 0 and τZ1 = s

τZ2 = t

RYY ðs, t Þ =

RXX ðτ1 , τ2 Þdτ1 dτ2 , τ1 = 0

τ2 = 0

so that RYY ðs, t Þ = σ 2

cos ðωðs - t ÞÞ - cos ðωsÞ - cos ðωt Þ þ 1 , ω2

Indeed, let Zt Y t = A1

Zt sin ðωsÞds þ A2

0

cos ðωsÞds: 0

(continued)

4.8

Diffusion Processes

453

Example 4.21 (continued) Then, Yt =

A A1 ð1 - cos ðωt ÞÞ þ 2 sin ðωt Þ ω ω

and we have E ðY t Y s Þ = σ 2

cos ðωðs - t ÞÞ - cos ðωsÞ - cos ðωt Þ þ 1 : ω2

Example 4.22 Let us consider the process Xt, such that each Xt is a random variable having the uniform distribution on (a, b) and Xt is independent from Xs for t ≠ s. We have mX ðt Þ =

ðb - aÞ2 aþb a2 þ ab þ b2 ,V X ðt Þ = ,RXX ðs, t Þ = δðt - sÞ: 2 12 3

Rt Let us evaluate Y t = 0 X s ds. We have E(Yt) = tmX, V(Yt) = 0. We generate a sample of 1E4 variates from Yt and we evaluate the empirical mean and the empirical variance. The results appear in Figs. 4.57 and 4.58.

Fig. 4.57 Results from a sample of 1E4 trajectories of the process Yt

(continued)

454

4

Stochastic Processes

Example 4.22 (continued)

Fig. 4.58 Results from a sample of 1E4 trajectories of the process Yt

Exercises 1. Consider the process Xt = A1 + A2t, where A1, A2 are independent random variables such that E(Ai) = 0, V(Ai) = 1, i = 1, 2. (a) Determine RXX(s, t). (b) Let Yt = Xtdt, Y0 = 0 . Determine RYY(s, t). (c) Let dXt = Ytdt. Determine RYY(s, t). 2. Consider the process Xt = A1 + A2 exp (-t), where A1, A2 are independent random variables such that E(Ai) = 0, V(Ai) = 1, i = 1, 2. (a) Determine RXX(s, t). (b) Let Yt = Xtdt, Y0 = 0 . Determine RYY(s, t). (c) Let dXt = Ytdt. Determine RYY(s, t). 3. Consider a process Xt such that RXX(s, t) = exp (-|s - t|). (a) Let Yt = Xtdt, Y0 = 0 . Determine RYY(s, t). (b) Let dXt = Ytdt. Determine RYY(s, t). 4. Consider a process Xt such that RXX(s, t) = 1 - |s - t|, if |s - t| < 1; RXX(s, t) = 0, otherwise. (a) Let Yt = Xtdt, Y0 = 0 . Determine RYY(s, t). (b) Let dXt = Ytdt. Determine RYY(s, t).

4.8

Diffusion Processes

4.8.2

455

Simulation of the Time Integral of a White Noise

The simulation of the process defined by the time integral (4.22) or (4.25) can request, on the one hand, special simulation features and, on the other hand, special quadrature formulae. Indeed, a simulation is made by choosing a final time of simulation T > 0 and a partition of the interval (0, T ) in a maximal number of steps ns > 0: T , 0 = t0 < t1 < . . . < tns - 1 < tns = T. For instance, we may take ti = iΔt, Δt = ns 0 ≤ i ≤ ns. Then, we generate a sequence of values X1, . . ., Xns such that Xi = X(ti) and use them to evaluate Ztnþ1 Y nþ1 = Y n þ

X s ds,0 ≤ n ≤ ns - 1:

ð4:29Þ

tn

By this way, we generate Y1, . . ., Yns where Y(tj) = Yj. Such a procedure requests some precautions. On the one hand, the generation of X1, . . ., Xn, . . . must be consistent with the statistical properties of Xt – namely E(Xn) = mX(tn) and E(XiXj) = RXX(ti, tj). On the other hand, the evaluation of the integral at the righthand side must be consistent with Eqs. (4.24) and (4.27). For instance, let us consider the approximation tZ n þΔt

X s ds ≈ Δt ðαX n þ β Þ:

ð4:30Þ

tn

The conditions (4.24) yield that 1 αmX ðt n Þ þ β = Δt

tZ n þΔt

mX ðsÞds tn

1 α RXX ðt n , t n Þ þ 2αβmX ðt n Þ þ β = 2 Δt 2

tZ n þΔt tZ n þΔt

RXX ðs, t Þdsdt

2

tn

tn

If the process is weakly stationary, these equations read as: αmX þ β = mX 1 α RX ð0Þ þ 2αβmX þ β = 2 Δt 2

tZ n þΔt tZ n þΔt

RX ðjs - t jÞdsdt

2

tn

id est, β = ð1 - αÞmX

tn

456

4

α2 CX ð0Þ þ m2X =

1 Δt 2

Stochastic Processes

tZ n þΔt tZ n þΔt

RX ðs - t Þdsdt tn

tn

If RX is a regular function, we can take as approximation tZ n þΔt tZ n þΔt

RX ðs - t Þdsdt ≈ Δt 2 RX ð0Þ, tn

tn

so that α ≈ 1, β ≈ 0 are convenient values. But these values may be not convenient if RX is not a regular function. For instance, if RX(s) = σ 2δ(s), we obtain rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi m2 1 α= - 2X ,β = ð1 - αÞmX : Δt σ If Xt = εt is a white noise, we have mX = 0, so that 1 α = pffiffiffiffiffi , β = 0 : Δt

ð4:31Þ

In the sequel, we shall consider the time integral of a white noise: Zt Et = E0 þ

εs ds:

ð4:32Þ

0

According to Eq. (4.32), the simulations use the approximation tZ n þΔt

We have mE(t) = 0 and τZ1 = s

REE ðs, t Þ = σ

pffiffiffiffiffi εs ds ≈ εn Δt :

tn

τZ2 = t

δðτ1 - τ2 Þdτ1 dτ2 = σ2 min fs, t g:

2 τ1 = 0

ð4:33Þ

ð4:34Þ

τ2 = 0

Taking into account Eqs. (4.20) and (4.21) and the Theorem 2.5, Et is gaussian. In addition, 0 ≤ r ≤ s ≤ t⟹E ððE t - E s ÞE r Þ = σ2 r - σ2 r = 0:

ð4:35Þ

4.8

Diffusion Processes

457

 pffi  Thus, on the one hand, Et  N 0, σ t and, on the other hand, (Et - Es) is independent from Er if 0 ≤ r ≤ s ≤ t. We have, for 0 ≤ s ≤ t:   E ðE t - E s Þ2 = σ2 ðt - sÞ:  pffiffiffiffiffiffiffiffiffi Thus, E t - E s  N 0, σ t - s for 0 ≤ s ≤ t. Let us consider the process Xt = εt + δt and Zt Yt =

Ztþδt εsþδt ds =

Ztþδt εs ds =

δt

0

Zδt εs ds -

0

εs ds = E tþδt - E δt = δE t : 0

Let us introduce δk E t = Etþδtk - E δtk . Then, Rij(s, t) = E(δiEt × δjEs) verifies 



Rij ðs, t Þ = σ 2 min t þ δt i , s þ δt j - min δt i , s þ δt j

- min t þ δt i , δt j þ min δt i , δt j Þ:

ð4:36Þ

Thus, for a process k X

Xt = Yt =

Rt

ai εtþδti ,

i=0

X t ds verifies

0

Yt =

k X i=0

ai δi E t ,δi E t = E tþδti - E δti

and RYY ðs, t Þ = E ðY t Y s Þ =

k X k X

ai aj Rij ðs, t Þ:

ð4:37Þ

i=0 j=0

A process Xt =

k X

ai εt - δti

i=0

may be analyzed into an analogous way: notice that tZ- δt

Ztþδt εs ds  δt

so that Eqs. (4.36) and (4.37) apply yet.

εs , - δt

ð4:38Þ

458

4

Stochastic Processes

Example 4.23 Let us consider a white noise {εt : t ≥ 0} of variance 1 and the process Rt Xt = εt + εt + 1. Let Y t = X s ds. We have k = 1, a0 = a1 = 1, δ0t = 0, δ1t = 1. Thus,

0

Y t = E t þ Etþ1 - E 1 : R00 ðs, t Þ = min ft, sg - min f0, sg - min ft, 0g þ min f0, 0g = min ft, sg; R01 ðs, t Þ = min ft, s þ 1g - min f0, s þ 1g - min ft, 1g þ min f0, 1g = min ft, s þ 1g - min ft, 1g; R11 ðs, t Þ = min ft þ 1, s þ 1g - min f1, s þ 1g - min ft þ 1, 1g þ min f1, 1g = min ft þ 1, s þ 1g - 1 = min ft, sg; For t = s, R00 ðt, t Þ = t,R01 ðt, t Þ = t - min ft, 1g,R11 ðt Þ = t Thus, V Y ðt Þ = 2t,if t < 1; V Y ðt Þ = 4t - 2,if t > 1: Figure 4.59 shows the results of a numerical evaluation of the variances for t 2 (0, 2) by using 10000 simulations of Yt with a step Δt = 0.01.

Fig. 4.59 Results obtained by simulation of the process Yt

4.8

Diffusion Processes

459

Example 4.24 Let us consider a white noise {εt : t ≥ 0} of variance 1 and the process Xt = εt Rt εt - 1. Let Y t = X s ds. We have k = 1, a0 = 1, a1 = - 1, δ0t = 0, δ1t = - 1. 0

We apply the analogy given in Eq. (4.38) E t - 1 - E - 1  E tþ1 - E 1 ⟹Y t  E t - ðEtþ1 - E 1 Þ: Thus, V Y ðt Þ = 2t,if t < 1; V Y ðt Þ = 2,if t > 1: Figure 4.60 shows the results of a numerical evaluation of the variances for t 2 (0, 2.5) by using 10000 simulations of Yt with a step Δt = 0.01.

Fig. 4.60 Results obtained by simulation of the process Yt

460

4

Stochastic Processes

Exercises 1. Consider a white noise {εt : t ≥ 0} of variance 1 and the process Xt = εt + εt - 2. (a) Identify the coefficients a0, a1, a2. (b) Determine Rij(t, t), 0 ≤ i, j ≤ 2. (c) Evaluate the variance of Y2 by using simulations of the process Yt. 2. Consider a white noise {εt : t ≥ 0} of variance 1 and the process X t = ε t - ε t + 2. (a) Identify the coefficients a0, a1, a2. (b) Determine Rij(t, t), 0 ≤ i, j ≤ 2.

Evaluate the variance of Y2 by using simulations of the process Yt.

4.8.3

Brownian Motion

Brownian motion takes his name from Robert Brown, a botanist which observed chaotic motions of particles in suspension (Brown, 1828). The mathematical theory of Brownian motion was established much later (Paley et al., 1933), after its use by (Bachelier, 1900), the questions by (Einstein, 1905), and (Pearson, The problem of random walk, 1905) and the model by (Langevin, 1908). Due to the formalism introduced by Wiener, Brownian motion is also known as Wiener process. Definition 4.10 {Wt = W(t) : t ≥ 0} is a Wiener process (or Brownian motion) if and only if W(0) = 0 and dWt = εtdt, where {εt = ε(t) : t 2 ℝ} is a gaussian white noise of variance σ 2 = 1. {Wt = W(t) : t > 0} is a n-dimensional Wiener process (or n-dimensional Brownian motion) if and only if W(t) = (W1(t), . . ., Wn(t)), where each Wi is a Wiener process and Wi is independent from Wj for i ≠ j. Thus, a Brownian motion corresponds to an integral of a white noise. As shown in Sect. 4.8, we have   mW ðt Þ = 0,RWW ðs, t Þ = min fs, t g,E W 2t = t: 0 ≤ r ≤ s ≤ t⟹E ððW t - W s ÞW r Þ = r - r = 0:   pffiffiffiffiffiffiffiffiffi  W t - W s  N 0, t - s ,E ðW t - W s Þ2 = t - s,for 0 ≤ s ≤ t:

ð4:39Þ ð4:40Þ ð4:41Þ

4.8

Diffusion Processes

461

Let φ : ℝ → ℝ be a regular function and Zt I t ð φÞ =

φðW s Þds:

ð4:42Þ

E ðφðW s ÞÞds:

ð4:43Þ

0

For a regular φ: Zt E ð I t ð φÞ Þ = 0

The simulation of a Brownian motion can be made by the method previously presented (Sect. 4.8): W nþ1 = W n þ

pffiffiffiffiffi Δt Z n ,0 ≤ n ≤ ns - 1,Z n  N ð0, 1Þ:

ð4:44Þ

Example 4.25 Let us consider T = 4. We desire to simulate a bidimensional Brownian motion with ns = 4000, so that Δt = 1E - 3. For instance, we can use Sim.DiffProc :

An example of result is shown in Fig. 4.61.

Fig. 4.61 Simulation of a Brownian Motion until T = 4, with ns = 4000 steps

(continued)

462

4

Stochastic Processes

Example 4.25 (continued) We evaluate E(kX(T )k2) for different values of ns and different numbers of trajectories. The theoretical value is E(kX(T )k2) = 8. The results are shown in Tables 4.1 and 4.2. Table 4.1 Results furnished by the simulation of a Brownian motion until T = 4, sample of 10000 trajectories generated by Sim.DiffProc ns E(kX(T )k2) σ(kX(T )k2)

400 8.1 8.1

4000 7.9 8.0

40000 8.1 8.1

Table 4.2 Results furnished by the simulation of a Brownian motion until T = 4, with ns = 4000. Trajectories generated by Sim.DiffProc Simulations E(kX(T )k2) σ(kX(T )k2)

100 8.8 10.2

1000 8.0 8.4

10000 8.0 8.0

We can also use the package sde:

The results appear in Fig. 4.64.

Fig. 4.64 Simulation of a Brownian Motion until T = 4, with ns = 4000 steps

(continued)

4.8

Diffusion Processes

463

Example 4.25 (continued) Tables 4.3 and 4.4 show the estimations of E(kX(T )k2) for different values of ns and different numbers of trajectories. Table 4.3 Results furnished by the simulation of a Brownian motion until T = 4, sample of 10000 trajectories generated by sde ns E(kX(T )k2) σ(kX(T )k2)

400 8.0 8.0

4000 7.9 7.9

40000 7.9 8.0

Table 4.4 Results furnished by the simulation of a Brownian motion until T = 4, with ns = 4000. Trajectories generated by sde Simulations E(kX(T )k2) σ(kX(T )k2)

100 6.9 6.8

1000 8.0 7.8

10000 7.8 7.8

If you do not want to use the packages, you can easily build your own program. For example:

An example of trajectory is plotted in Fig. 4.65. (continued)

464

4

Stochastic Processes

Example 4.25 (continued)

Fig. 4.65 Simulation of a Brownian Motion until T = 4, with ns = 4000 steps

Example 4.26 Let us consider φðW s Þ = W 2s .Then, E(It(φ)) = t2/2.

To generate one vartR max φðt Þdt, we iate of t min

use the code at right:

(continued)

4.8

Diffusion Processes

465

Example 4.26 (continued) We consider T = 2, ti = (i - 1)/20 evaluate successively the integrals R tiþ1 t i φðW t Þdt over (ti, ti + 1), i = 1, . . ., 20 – each integral is evaluated using P - 1 R tjþ1 Δt = 0.01. Then I ti ðφÞ ≈ ij = 1 t j φðW t Þdt. We generate a sample of 1000 variates from I ti ðφÞ,1 ≤ i ≤ 21 and we use the sample to estimate the mean EðI t i ðφÞÞ. The results are shown in Fig. 4.66. The results for 1E4 simulations appear in Fig. 4.67.

Fig. 4.66 Empirical mean of 1E3 simulations of the time integral of φðW t Þ = W 2t , with Δt = 1E - 2. Brownian motion generated by Sim.DiffProc

Fig. 4.67 Empirical mean of 1E4 simulations of the time integral of φðW t Þ = W 2t , with Δt = 1E - 2. Brownian motion generated by Sim.DiffProc

(continued)

466

4

Stochastic Processes

Example 4.26 (continued) Of course, you can use sde instead of Sim.DiffProc : the results are shown in Figs. 4.68 and 4.69.

Fig. 4.68 Empirical mean of 1E3 simulations of the time integral of φðW t Þ = W 2t , with Δt = 1E - 2. Brownian motion generated by sde

Fig. 4.69 Empirical mean of 1E4 simulations of the time integral of φðW t Þ = W 2t , with Δt = 1E - 2. Brownian motion generated by sde

4.8

Diffusion Processes

467

Exercises 1. For each function below, generate a sample of 1000 variates from It(φ) for t = 1 and use the sample to estimate the mean value of It(φ). (a) φ(Ws) = exp (Ws). (b) φ(Ws) = sin (2πWs). (c) φðW s Þ = W 4s .

4.8.4

Random Walks

Random walks generalize Brownian motion: a random walk is a Markov process which generates trajectories in the space ℝn. A discrete random walk generates points X1, . . ., Xn, . . . in ℝn and the trajectory connects the points generated. A continuous random walk generates a trajectory {Xt = X(t) : t ≥ 0; X(0) = X0}. In general, a discrete random walk is defined by the distribution of the displacements ΔXn = Xn + 1 - Xn and the starting point X0. The displacements ΔXn are often multiples of a random vector having a known distribution: ΔXn = hZn, h > 0, Zn known. In practice, the distribution of Zn is often taken as independent of n, what is equivalent to take Zn  Z. For instance, a discrete Brownian motion is a random walk where Z is a gaussian pffiffiffiffiffi variable N(0, 1) – parameter h is connected to the time discretization: h = Δt . Let us assume that E(Z ) = 0, V(Z ) = σ 2 and that Zi, Zj are independent for i ≠ j. Noticing that Xn = X0 þ

n X

ΔX i ,

ð4:45Þ

i=1

We have E ðX n Þ = X 0 ,V ðX n Þ = nh2 σ 2 :

ð4:46Þ

To simulate a discrete random walk, we need to know the values of X0, h and the distribution of Z. Then, we choose a finite number of steps ns > 0 and we calculate X1, . . ., Xns by the recurrence (Zn is a variate from Z ). X nþ1 = X n þ hZ n , 0 ≤ n ≤ ns - 1:

ð4:47Þ

468

4

Stochastic Processes

Example 4.27 Let us consider a bidimensional random walk starting at  X0 =(0, 0) defined by ΔXn = hZn, Zn  Z, Z = (cosθ, sinθ), Pðθ = 0Þ = P θ = π2 = Pðθ = π Þ =   1 P θ = 3π 2 = 4 . We generate a sample of ns = 10000 variates of Z and the corresponding random walk as follows:

The resulting path is shown in Fig. 4.70.

Fig. 4.70 Example of random walk: each step has a fixed length and a random angle chosen among 4 possible values

Example 4.28 Let us consider a bidimensional random walk starting at X0 = (0, 0) defined by dXt = Zt, Zt  Z, Z = (cosθ, sinθ), θ uniformly distributed on (0, 2π). We generate a sample of ns = 10000 variates of Z and the corresponding random walk as follows: (continued)

4.8

Diffusion Processes

469

Example 4.28 (continued)

The resulting path is shown in Fig. 4.71.

Fig. 4.71 Example of random walk: each step has a fixed length and a random angle uniformly distributed on (0, 2π)

Remark 4.9 Continuous random walks are generally defined by their initial point X0 and their derivatives: dX t = Z t dt, X ð0Þ = X 0 :

ð4:48Þ

Thus (see Sect. 4.8.1): Zt Xt - X0 =

Z s ds,8t > 0:

ð4:49Þ

0

Analogously to the discrete case, the distribution of Zt is often chosen as independent of t: Zn  Z. For instance, a continuous Brownian motion corresponds to Z  N(0, 1).

470

4

Stochastic Processes

Exercises 1. Consider a bidimensional random walk starting at X0 = (0, 0) defined by   2π 1 = ,1 ≤ i ≤ n , ΔXn = hZn, Zn  Z, Z = (cosθ, sinθ), P θ = ði - 1Þ n n with n = 10. (a) Simulate a trajectory of ns = 1000 steps with h = 0.1. (b) Simulate nsim = 1000 trajectories of ns = 100 steps with h = 1 and collect the data about the final distance dns to the origin. Determine the empirical CDF and the empirical PDF of the data collected. 2. Consider a bidimensional random walk starting at X0 = (0, 0) defined by ΔXn = hZn, Zn  Z, Z = (cosθ, sinθ), where θ is normally distributed with mean 0 and standard deviation π. (a) Simulate a trajectory of ns = 1000 steps with h = 0.1. (b) Simulate nsim = 1000 trajectories of ns = 100 steps with h = 1 and collect the data about the final distance dns to the origin. Determine the empirical CDF and the empirical PDF of the data collected.

4.8.5

Itô’s Integrals

Analogously to

Rb

f ðt Þdt , we may approximate

a

partition given in Eq. (4.19) and the finite sum Sð T , f , gÞ =

n X

Rb

f ðt Þdgðt Þ by considering the

a

ðgðt i Þ - gðt i - 1 ÞÞf ðt i - 1 Þ:

ð4:50Þ

i=1

For regular functions f and g, Zb SSðT , f , gÞ →

f ðt Þdgðt Þ a

when n → + 1 and max{ti - ti - 1} → 0. Let us consider two stochastic processes {X(t), t 2 (a, b)}, {Y(t), t 2 (a, b)} and the finite sum I ðT , X, YÞ =

n X i=1

ðY ðt i Þ - Yðt i - 1 ÞÞX ðt i - 1 Þ:

ð4:51Þ

4.8

Diffusion Processes

471

If both processes have integrable variances, id est, if Zb



Zb



E X ðt Þ dt < 1 and 2

a

  E Y 2 ðt Þ dt < 1,

a

then this finite sum converges for n → + 1 and max{ti - ti - 1} → 0. We have analogously the standard functions (see, for instance, Föllmer, 1981): Zb X ðt ÞdY ðt Þ:

I ðT , X, YÞ → I ðX, Y Þ =

ð4:52Þ

a

I(X, Y ) is referred as Itô’s integral of the process X with respect to the process Y. Itô’s integral is linear: I ðαX þ βY, Z Þ = αI ðX, Z Þ þ βI ðY, Z Þ, I ðX, αY þ βZ Þ = αI ðX, Y Þ þ βI ðX, Z Þ: When Y = W and X = φ(W ), supplementary properties may be established. Let us introduce Zb I W ðφÞ = I ðφðW Þ, W Þ =

φðW ðt ÞÞdW ðt Þ:

ð4:53Þ

a

Then, E ðI W ðφÞÞ = 0,

ð4:54Þ

Zb E ðI W ðφÞI W ðψ ÞÞ =

E ðφðW ðt ÞÞψ ðW ðt ÞÞÞdt,

ð4:55Þ

a



2

E I W ðφÞ



Zb   E φðW ðt ÞÞ2 dt: =

ð4:56Þ

a

The reader will find in the literature methods for the numerical evaluation of IW(φ) – for instance, Chorin (1973) and Blankenship and Baras (1981).

472

4

Stochastic Processes

A supplementary property of IW(φ) is Itô’s formula: for a regular function u : ℝ → ℝ (see, for instance, Föllmer, 1981) I W ðu0 Þ = ðW ðbÞÞ - uðW ðaÞÞ -

1 I ðu00 Þ, 2 t

id est, Zb

1 u ðW ðt ÞÞdW ðt Þ = uðW ðbÞÞ - uðW ðaÞÞ 2 0

Zb

a

u00 ðW ðt ÞÞdt:

a

Itô’s formula extends to multidimensional situations where u : ℝ p → ℝ as Zb

1 —uðW ðt ÞÞdW ðt Þ = uðW ðbÞÞ - uðW ðaÞÞ 2

a

Zb ΔuðW ðt ÞÞdt: a

Ito’s formula extends also to functions that derive from Itô’s integrals: if Zt

Zt aðs, XðsÞÞdW ðsÞ þ

X ð t Þ - X ð 0Þ = 0

bðs, X ðsÞÞdt, 0

then Zb

1 —uðX ðt ÞÞ:dXðt Þ = uðX ðbÞÞ - uðXðaÞÞ 2

a

Zb at D2 u:adt: a

Here, D2u is the Hessian matrix:  D2 u =

2

∂ u ∂xi ∂xj

 ,

so that at D2 u:a =

X

ai ðt, X t ÞD2ij ðXðt ÞÞaj ðt, X t Þ:

i,j

In the next section, these formulae are considered as stochastic differential equations.

4.8

Diffusion Processes

473

Remark 4.10 For standard integrals, we may use different forms of discretization to evaluate Rb f ðt Þdgðt Þ. For instance, we may consider a

S 3 ð T , f , gÞ =

n X

ð gð t i Þ - g ð t i - 1 Þ Þ

i=1

Rb

f ðt iþ1 Þ þ f ðt i Þ 2

For regular functions, such a finite sum converges to the same value f ðt Þdt . However, such a property is not verified for stochastic integrals.

a

Indeed, I 3 ðT , X, Y Þ =

n X i=1

ðY ð t i Þ - Y ðt i - 1 Þ Þ

X ðt iþ1 Þ þ X ðt i Þ 2

leads to the Stratonovich’s integral (Stratonovich, 1966; Fisk, 1963), which is a different stochastic integral, with distinct properties.

Example 4.29 The evaluation of integrals

Rb

φðW ðt ÞÞdW ðt Þ can be made using the package

a

sim.DiffProc. For instance, let us consider the evaluation of I ðt Þ = Rt 0

W 2s dW s . In this case, E(I(t)) = 0 and V(I(t)) = t3. You can simulate

10000 trajectories of the integral (for 0 ≤ t ≤ 1) as follows:

To use Stratonovich’s integration, you must use the option type=“str”. The result resu is a list containing a time series with the values of the integral and a vector of times. The results are shown in Figs. 4.72 and 4.73. (continued)

474

4

Stochastic Processes

Example 4.29 (continued)

Fig. 4.72 Mean of 1E4 variates of

Rt 0

W 2s dW s , generated by sim.DiffProc

Fig. 4.73 Variance of 1E4 variates of

Rt 0

W 2s dW s , generated by sim.DiffProc

Example 4.30 The evaluation of integrals

Rb

φðW ðt ÞÞdW ðt Þ can be made using the package

a

sim.DiffProc. For instance, let us consider the evaluation of I ðt Þ = (continued)

4.8

Diffusion Processes

475

Example 4.30 (continued) Rt 0

W 2s dW s . In this case, E(I(t)) = 0 and V(I(t)) = t3. You can simulate 10000

trajectories of the integral (for 0 ≤ t ≤ 1) as follows:

To use Stratonovich’s integration, you must use the option type=“str”. The result resu is a list containing a time series with the values of the integral and a vector of times. The results are shown in Figs. 4.74 and 4.75.

Fig. 4.74 Mean of 1E4 variates of

Rt 0

W 2s dW s , generated by sim.DiffProc

Fig. 4.75 Variance of 1E4 variates of

Rt 0

W 2s dW s , generated by sim.DiffProc

476

4

Stochastic Processes

Example 4.31 Let us consider the process Xt, such that each Xt is a random variable having the uniform distribution on (a, b) and Xt is independent from Xs for t ≠ s. We have mX ð t Þ =

ðb - aÞ2 aþb a2 þ ab þ b2 , V X ðt Þ = , RXX ðs, t Þ = δðt - sÞ: 2 12 3

Rt 2 2 Let us evaluate Y t = 0 X s dW s . We have EðY t Þ = 0,V ðY t Þ = t a þabþb . We 3 generate a sample of 1E4 variates from Yt and we evaluate the empirical mean and the empirical variance. The results appear in Figs. 4.76 and 4.77.

Fig. 4.76 Results from a sample of 1E4 trajectories of the process Yt

Fig. 4.77 Results from a sample of 1E4 trajectories of the process Yt

(continued)

4.8

Diffusion Processes

477

Example 4.31 (continued) The variates were generated by sim.ito.int given in the code below. We used dt_in = 0.001, dt_out = 0.1, tmax = 1,pphi = phi.

4.8.6

Itô’s Calculus

Itô’s calculus manipulates stochastic differential equations (SDE) having the general form dX t = aðt, X t ÞdW t þ bðt, X t Þdt:

ð4:57Þ

478

4

Stochastic Processes

Equation (4.58) defines a Itô’s diffusion. a is the diffusion coefficient; b is the drift. Notice that the Brownian motion is not differentiable, so that (4.58) must be interpreted as an integral equation: Zt

Zt aðs, X ðsÞÞdW ðsÞ þ

X ðt Þ - X ð0Þ = 0

bðs, X ðsÞÞdt,8s > 0: 0

For instance, Itô’s formula may be rewritten as a SDE: 1 duðW t Þ = u0 ðW t ÞdW t þ u00 ðW t Þdt: 2

ð4:58Þ

For a process given by (4.58), 1 duðX t Þ = u0 ðX t ÞdW t þ u00 ðW t Þa2 ðt, X t Þdt: 2

ð4:59Þ

In the multidimensional situation, Yt = F(t, Xt), F : ℝ × ℝn → ℝm . Then, we have  ∂F i 1 d ðY i t = ðt, Xt Þ þ ðA:dXt Þi þ dXtt Bi dX t , 2 ∂t

ð4:60Þ

with 2

Aij =

∂F i ∂ Fi ,Bijk = : ∂xj ∂xj ∂xk

If dXt = a(t, Xt)dWt + b(t, Xt)dt , then  ∂F i 1 d ðY i t = ðt, Xt Þ þ ðA:dXt Þi þ at Bi a, 2 ∂t

ð4:61Þ

For n = m = 1, Yt = F(t, Xt) and dXt = a(t, Xt)dWt + b(t, Xt)dt. Then 2

dZ t =

∂F ∂F 1∂ F ðt, X t Þdt þ ðt, X t ÞdX t þ ðt, X t Þðaðt, W t ÞÞ2 dt: 2 ∂x2 ∂t ∂x

ð4:62Þ

SDE follow some standard rules. For instance, let dXt = aX (t, Xt)dWt + bX(t, Xt) dt, dYt = aY(t, Yt)dWt + bY(t, Yt)dt and Zt = αXt + βYt. Then, for α, β 2 ℝ: dZ t = ðαaX ðt, X t Þ þ βaY ðt, Y t ÞÞdW t þ ðαbX ðt, X t Þ þ βbY ðt, Y t ÞÞdt:

4.8

Diffusion Processes

479

Indeed, consider St = (Xt, Yt) and F(S) = αS1 + βS2. Then, 2

∂F ∂F ∂F ∂ F = α, = β, = 0, = 0, ∂t ∂S1 ∂Sj ∂Sk ∂S2 so that (4.61) yields the result. If Zt = XtYt, then dZ t = X t dY t þ Y t dX t þ ðaX ðt, X t ÞaY ðt, Y t ÞÞdt:

ð4:63Þ

Indeed, taking St = (Xt, Yt) and F(S) = S1S2, we have 2

∂F ∂F ∂F ∂ F = S2 , = S1 , = = 0, ∂t ∂S1 ∂Sj ∂Sk ∂S2



0

1

1

0

 ,

and (4.61) establishes (4.64). Let Xt = x(Wt), Yt = y(Wt) and Zt = XtYt. Then, (4.59) and (4.64) yield that dZ t = X t dY t þ Y t dX t þ x0 ðW t Þy0 ðW t Þdt:

ð4:64Þ

1 2 dZ t = y0 ðX t ÞdX t þ y00 ðX t Þðx0 ðW t ÞÞ dt: 2

ð4:65Þ

If Zt = y(Xt), then



Indeed, Zt = u(Xt), u(s) = y(x(s)). Thus, u′(s) = y′(x(s))x′(s), u′′(s) = y′(x(s))x′ (s) + y′′(x(s))(x′(s))2, so that   1 1 2 dZ t = y0 ðxðW t ÞÞ x0 ðW t ÞdW t þ x00 ðW t Þdt þ y00 ðxðW t ÞÞðx0 ðW t ÞÞ dt 2 2

and we obtain (4.66). These rules are often used to manipulate SDE to get explicit solutions or to generate representations of the solutions of differential equations, such as Feynman– Kac formulae. The reader will find in the literature many representations of the solutions of differential equations and partial differential equations: see, for instance Elepov and Mikhailov (1969), Booth (1982), Souza de Cursi (1994), Morillon (1997), Hwang and Mascagni (2001), Hwang et al. (2003), Milstein and Tretyakov (2004, 2012, 2013, 2020), Kharroubi and Pham (2015), Zhou and Cai, Numerical Solution of the Robin Problem of Laplace Equations with a Feynman-Kac Formula and Reflecting Brownian Motions (2016).

480

4

Stochastic Processes

Example 4.32 The classical Black-Scholes model for the evolution of the price of an option reads as dSt = σSt dW t þ μSt dt: Let

  σ2 F ðt, xÞ = F 0 exp μt - t þ σx : 2

We have   2 ∂F σ2 ∂F ∂ F F ðt, xÞ , = μ= σF ðt, xÞ, 2 = σ 2 F ðt, xÞ: 2 ∂t ∂x ∂x Thus, if Yt = F(t, Xt) and dXt = a(t, Wt)dWt + b(t, Wt)dt, we have   σ2 1 Y t dt þ σY t dX t þ σ 2 Y t ðaðt, W t ÞÞ2 dt dY t = μ 2 2 Let us choose Xt = Wt. Then, dXt = dWt and a(t, Wt) = 1, b(t, Wt) = 0. Consequently, dY t = μY t dt þ σY t dW t Thus, the solution is   σ2 St = S0 exp μt - t þ σW t : 2   2 Notice that St = AtUt, At = S0 exp μt - σ2 t ,U t = exp ðσW t Þ. At is deter pffi  ministic and Ut is a lognormal variable (recall that W t  N 0, t ). We have E ðSt Þ = At E ðU t Þ,V ðSt Þ = A2t V ðU t Þ, with       E ðU t Þ = exp σ 2 t ,V ðU t Þ = exp 2σ 2 t - exp σ 2 t : Thus,     E ðSt Þ = S0 exp ðμt Þ,V ðSt Þ = S20 exp ð2μt Þ exp σ 2 t - 1 :

4.8

Diffusion Processes

481

Example 4.33 The Vasicek’s model for the evolution of the interest rates reads as dX t = σdW t þ θðμ - X t Þdt: Let F ðt, xÞ = xeθt : We have 2

∂F ∂F ∂ F = θF , = eθt , 2 = 0: ∂x ∂t ∂x Thus, if Yt = F(t, Xt) and dXt = a(t, Wt)dWt + b(t, Wt)dt, we have dY t = θX t eθt dt þ eθt dX t = σeθt dW t þ θμeθt dt Thus, Zt Y t = Y0 þ σ

Zt

θs

e dW s þ θμ 0

eθs ds

0

Thus, the solution is Xt = X0e

- θt



þ μ 1-e

- θt



þ σe

- θt

Zt

eθs dW s :

0

We have    σ2  1 - e - 2θt : EðSt Þ = X 0 e - θt þ μ 1 - e - θt ,VðSt Þ = 2θ Vasicek’s model is an example of Ornstein–Uhlenbeck process.

Example 4.34 Let us consider the partial differential equation ∂u a2 - Δu = f on Q = Ω × ð0, T Þ, 2 ∂t

(continued)

482

4

Stochastic Processes

Example 4.34 (continued) uðx, 0Þ = u0 ðxÞ,x 2 Ω ,uðx, t Þ = u0 ðx, t Þ,x 2 ∂Ω,t 2 ð0, T Þ Let us consider dX t = adW t ,dT t = - dt,X0 = x,T 0 = t,Y t = uðX t , T t Þ: Let us introduce Zt = ( Xt, Tt, ). Then, 0

a 0 B 0 ⋱ B dZt = B @⋮ ⋮ 0 ⋯

⋯ ⋯ a ⋯

1 0 1 0 0 B 0 C ⋮C C C CdW t þ B @ ⋮ Adt: A 0 -1 0

Let F(Zt) = u(Xt, Tt). We have ∂F ∂u ∂F ∂u = , A1,nþ1 = = , ∂Z i ∂xi ∂Z iþ1 ∂t

A1i =

2

B1ij =

2

∂ F ∂ u = , ∂Z i ∂Z j ∂xi ∂xj 2

B1i,nþ1 =

2

∂ F ∂ u = , ∂Z i ∂Z nþ1 ∂xi ∂t 2

B1,nþ1,nþ1 = dY t =

2

∂ F ∂ u = 2: ∂t ∂Z 2nþ1

X ∂u ∂u 1 d ðX i Þt þ a2 × Δu dt, dT t þ 2 ∂t ∂xi

id est, dY t =

 2  X ∂u a ∂u dt þ a dW i : Δu 2 ∂t ∂xi

Thus dY t = - f ðX t , T t ÞdT t þ a—u:dW and (continued)

4.8

Diffusion Processes

483

Example 4.34 (continued) Zt Zt Yt - Y0 = f ðX s , T s Þds þ a —uðX s , T s Þ:dW 0

0

Let τðxÞ = inf ft : Zt 2 = Qg: Then, Yτ(x) = u0( Xτ(x), Tτ(x)), Y0 = u(t, x) and 

Zt

ZτðxÞ



f ðX s , T s Þds - a

uðx, t Þ = u0 X τðxÞ , T τðxÞ þ

0

0

Since E

t R

—uðX s , T s Þ:dW:

 —uðX s , T s Þ:dW = 0, we have

0

0  B  uðt, xÞ = E@u0 X τðxÞ , T τðxÞ þ

ZτðxÞ

1 C f ðXs , T s ÞdsA:

0

This equality is an example of Feynman–Kac formula.

Example 4.35 Let us consider the partial differential equation -

a2 Δu = f on Ω, 2

uðxÞ = u0 ðxÞ,x 2 ∂Ω Let us consider dXt = adW t ,X 0 = x,Y t = uðXt Þ: Let F(Zt) = u(Xt). We have dY t =

X ∂u

∂xi

d ðX i Þt þ

1 2 a × Δu dt, 2

(continued)

484

4

Stochastic Processes

Example 4.35 (continued) id est, dY t = dt

X ∂u a2 a dW i : Δu þ 2 ∂xi

Thus dY t = - f ðX t ÞdX t þ a—u:dW and

Zt Yt - Y0 = -

Zt f ðX s Þds þ a

0

—uðX s Þ:dW 0

Let τðxÞ = inf ft : Zt2 = Ωg: Then, Yτ(x) = u0( Xτ(x)), Y0 = u(x) and 



f ðXs Þds - a

uðxÞ = u0 X τðxÞ þ

Since E

t R



Zt

ZτðxÞ

—uðX s Þ:dW: 0

0

—uðX s Þ:dW = 0, we have

0

0  B  uðxÞ = E @u0 X τðxÞ þ

ZτðxÞ

1 C f ðXs ÞdsA:

0

This equality is another example of Feynman–Kac formula.

4.8.7

Numerical Simulation of Stochastic Differential equations

Let us consider the following SDE; dX t = aðt, X t ÞdW t þ bðt, X t Þdt,X 0 = x0 :

ð4:66Þ

a is usually referred as diffusion coefficient and b as drift coefficient. The reader will find in the literature many works concerning methods for the simulation of SDE

4.8

Diffusion Processes

485

analogous to (4.66). For instance, Milstein (1973), Pardoux and Talay (1985), Milstein and Tretyakov (2004), Graham and Talay (2013), Kloeden and Platen (1989), Talay (2015), Boukhetala and Guidoum (2011) and Iacus (2008). One of the most popular ones is Euler’s method: let us introduce a maximum time of simulation T > 0, a maximum number of time steps ns > 0, Δt = T/ns, ti = iΔt, i ≥ 0, Wi = W(ti), Xi = X(ti). Then, Eq. (4.67) is discretized as X nþ1 - X n = an ΔW n þ bn Δt, : an = aðt n ; X n Þ, bn = bðt n ; X n Þ, ΔW n = ðW nþ1 - W n Þ

ð4:67Þ

A simple way to generate ΔWn consists in using that  pffiffiffiffiffi ΔW n  N 0, Δt :

ð4:68Þ

Thus, pffiffiffiffiffi X nþ1 = X n þpaffiffiffiffiffi n Δt Z n þ bn Δt W nþ1 = W n þ Δt Z n , Z n  N ð0; 1Þ

ð4:69Þ

A second popular method of discretization is Milstein’s method:    X nþ1 - X n = an ΔW n þ bn Δt þ cn ΔW 2n - Δt , 1 ∂an ∂an ∂a , = ðt n ; X n Þ: c n = an ∂x 2 ∂x ∂x

ð4:70Þ

Analogously to Euler’s method, we may generate Wn + 1 - Wn by (4.69), so that pffiffiffiffiffi    X nþ1 - X n = an Δt Z n þ an ΔW n þ Δt bn þ cn Z 2n - 1 pffiffiffiffiffi # W nþ1 = W n þ Δt Z n , Z n  N ð0; 1Þ

ð4:71Þ

Some R packages include instruction for the simulation of SDE. For instance, sde proposes sde.sim and Sim.DiffProc proposes snssde1d, snssde2d, snssde3d. Example 4.36 The classical Black-Scholes model for the evolution of the price of an option reads as dSt = σSt dW t þ μSt dt: Let us simulate this SDE with μ = 0.1, σ = 0.3, S0 = 1. We use package Sim.DiffProc as follows: (continued)

486

4

Stochastic Processes

Example 4.36 (continued)

This code generates 1E4 trajectories for 0 ≤ t ≤ 1, using 3 different methods of discretization (Euler, Milstein, Runge-Kutte RK3), with Δt = 0.001. The result is a list containing the results in a field $X (for instance, mod1$X), where each column is a simulation. We evaluate the mean for each time ti = (i - 1)Δt. The results are exhibited in Fig. 4.78 and Table 4.5. The variances for each time are compared in Fig. 4.79 and Table 4.6.

Fig. 4.78 Simulation of Black-Scholes SDE by Sim.DiffProc, with Δt = 0.001. Mean of a sample of 10000 realizations

(continued)

4.8

Diffusion Processes

487

Example 4.36 (continued)

Fig. 4.79 Simulation of Black-Scholes SDE by Sim.DiffProc, with Δt = 0.001. Variance of a sample of 10000 variates

Table 4.5 Estimation of E(ST) by Sim.DiffProc ns Euler Milstein RK3 Exact

Δt = 0.01 1000 1.1150 1.1119 1.1022 1.1052

5000 1.1021 1.1116 1.1074 1.1052

10000 1.1056 1.1105 1.0994 1.1052

Δt = 0.001 1000 1.1091 1.1063 1.0956 1.1052

5000 1.1057 1.1072 1.1023 1.1052

10000 1.1117 1.1054 1.1065 1.1052

Δt = 0.001 1000 0.1115 0.1127 0.1164 0.1150

5000 0.1147 0.1131 0.1205 0.1150

10000 0.1168 0.1116 0.1164 0.1150

Table 4.6 Estimation of V(ST) by Sim.DiffProc ns Euler Milstein RK3 Exact

Δt = 0.01 1000 0.1170 0.1191 0.1192 0.1150

5000 0.1112 0.1146 0.1140 0.1150

10000 0.1182 0.1190 0.1110 0.1150

(continued)

488

4

Stochastic Processes

Example 4.36 (continued) We use package sde as follows (sde proposes an option to simulate Black-Scholes directly, but we do not use it).

This code generates ns trajectories for 0 ≤ t ≤ tmax, using 3 different methods of discretization (Euler, Milstein, KPS), with Δt = dt. The result is a matrix where each column is a simulation. We evaluate the mean for each time ti = (i - 1)Δt. The results are exhibited in Fig. 4.80 and Table 4.7. The variances for each time are compared in Fig. 4.81 and Table 4.8.

(continued)

4.8

Diffusion Processes

489

Example 4.36 (continued)

Fig. 4.80 Simulation of Black-Scholes SDE by sde, with Δt = 0.001. Mean of a sample of 10000 realizations

Fig. 4.81 Simulation of Black-Scholes SDE by sde, with Δt = 0.001. Variance of a sample of 10000 variates

(continued)

490

4

Stochastic Processes

Example 4.36 (continued) Table 4.7 Estimation of E(ST) by sde ns Euler Milstein KPS Exact

Δt = 0.01 1000 1.1101 1.1098 1.1001 1.1052

5000 1.1144 1.0993 1.1072 1.1052

10000 1.1062 1.1088 1.1015 1.1052

Δt = 0.001 1000 1.1040 1.1087 1.1048 1.1052

5000 1.1002 1.0999 1.1097 1.1052

10000 1.1064 1.1105 1.1037 1.1052

10000 0.1155 0.1173 0.1151 0.1150

Δt = 0.001 1000 0.1205 0.1305 0.1304 0.1150

5000 0.1117 0.1123 0.1182 0.1150

10000 0.1150 0.1140 0.1175 0.1150

Table 4.8 Estimation of V(ST) by sde ns Euler Milstein KPS Exact

Δt = 0.01 1000 0.1114 0.1268 0.1063 0.1150

5000 0.1198 0.1137 0.1154 0.1150

We observe that all the results are close to the exact ones.

Example 4.37 The Vasicek’s model (Vasicek, 1977) for the evolution of the interest rates reads as dX t = σdW t þ θðμ - X t Þdt: We can use Sim.DiffProc to simulate this SDE with μ = 0.05, σ = 0.3, θ = 2, X0 = 0.1:

(continued)

4.8

Diffusion Processes

491

Example 4.37 (continued) Analogously to the preceding example, we generate a sample of ns variates from XT to evaluate the mean E(XT) and V(XT) for several times T. Figures 4.82 and 4.83, Tables 4.9 and 4.10 show examples of results.

Fig. 4.82 Simulation of Vasicek SDE by Sim.DiffProc, with Δt = 0.001. Mean of a sample of 10000 realizations

Fig. 4.83 Simulation of Vasicek SDE by Sim.DiffProc, with Δt = 0.001. Variance of a sample of 10000 variates

(continued)

492

4

Stochastic Processes

Example 4.37 (continued) Table 4.9 Estimation of E(XT) by Sim.DiffProc ns Euler Milstein 2 Heun Exact

Δt = 0.01 1000 0.0531 0.0509 0.0508 0.0592

5000 0.0533 0.0503 0.0496 0.0592

10000 0.0530 0.0551 0.0501 0.0592

Δt = 0.001 1000 0.0473 0.0442 0.0530 0.0592

5000 0.0495 0.0536 0.0475 0.0592

10000 0.0513 0.0486 0.0505 0.0592

Δt = 0.001 1000 0.0214 0.0231 0.0217 0.0249

5000 0.0226 0.0219 0.0223 0.0249

10000 0.0227 0.0225 0.0224 0.0249

Table 4.10 Estimation of V(XT) by Sim.DiffProc ns Euler Milstein 2 Heun Exact

Δt = 0.01 1000 0.0235 0.0238 0.0243 0.0249

5000 0.0225 0.0230 0.0238 0.0249

10000 0.0225 0.0226 0.0232 0.0249

Example 4.38 The Cox-Ingersoll-Ross model (Cox et al., 1985) for the instantaneous interest rates is the solution of the SDE: pffiffiffiffiffi dX t = σ X t dW t þ θðμ - X t Þdt: Then (Jafari & Abbasian, 2017)   E ðX t Þ = X 0 e - θt þ μ 1 - e - θt , V ðX t Þ =

 μσ 2  2 σ 2  - θt - e - 2θt þ X0 e 1 - e - θt : θ 2θ

We use Sim.DiffProc to simulate this SDE, with μ = 0.1, σ = 0.3, θ = 2, X0 = 1: (continued)

4.8

Diffusion Processes

493

Example 4.38 (continued)

Analogously to the preceding examples, we generate a sample of ns variates from XT to evaluate the mean E(XT) and V(XT) for several times T. Figures 4.84 and 4.85, Tables 4.11 and 4.12 show examples of results.

Fig. 4.84 Simulation of Cox-Ingersoll-Ross SDE by Sim.DiffProc, with Δt = 0.001. Mean of a sample of 10000 realizations

(continued)

494

4

Stochastic Processes

Example 4.38 (continued)

Fig. 4.85 Simulation of Cox-Ingersoll-Ross SDE by Sim.DiffProc, with Δt = 0.001. Variance of a sample of 10000 variates Table 4.11 Estimation of E(XT) by Sim.DiffProc ns Euler Taylor RK2 Exact

Δt = 0.01 1000 0.1159 0.1178 0.1152 0.1164

5000 0.1161 0.1162 0.1163 0.1164

10000 0.1155 0.1177 0.1153 0.1164

Δt = 0.001 1000 0.1177 0.1173 0.1166 0.1164

5000 0.1170 0.1169 0.1155 0.1164

10000 0.1173 0.1162 0.1173 0.1164

Δt = 0.001 1000 3.0E-3 3.1E-3 2.7E-3 3.0E-3

5000 3.0E-3 3.1E-3 2.8E-3 3.0E-3

10000 3.0E-3 3.0E-3 3.0E-3 3.0E-3

Table 4.12 Estimation of V(XT) by Sim.DiffProc ns Euler Taylor RK2 Exact

Δt = 0.01 1000 3.0E-3 2.9E-3 3.0E-3 3.0E-3

5000 3.0E-3 2.8E-3 2.9E-3 3.0E-3

10000 3.0E-3 3.1E-3 3.0E-3 3.0E-3

4.8

Diffusion Processes

495

Example 4.39 Consider the Partial Differential Equation - Δu = f on Ω,uðxÞ = u0 ðxÞ on ∂Ω, with Ω = fðx1 , x2 , x3 Þ : x1 þ x2 þ x3 ≤ 1, x1 ≥ 0, x2 0, x3 ≥ 0g, f ðxÞ = - 6,u0 ðxÞ = x21 þ x22 þ x23 : Let us consider   pffiffiffi 1 1 : dXt = 2dW t ,X 0 = x = , 0, 3 2 We evaluate Zτi ðxÞ nsim   1 X uð x Þ ≈ Y ,Y = u0 Xτi ðxÞ þ f ðX s Þds nsim i = 1 i i 0

The exact value is u(X0) = 13/36 ≈ 0, 361111. We can generate a sample from Y as follows:

Firstly, define f and u0:

Then define Ω: the function region takes the value TRUE when the point is in Ω and FALSE otherwise.

496

4

Stochastic Processes

Create a function that generates one variate from Y:

Use it to create a function that generates the mean value of nw variates from Y:

Give a point xc for the evaluation, the number nw of variates of Y, the time step dt and generate the estimation of u at the point xc: We obtain

In Tables 4.13, 4.14, 4.15, 4.16, 4.17, and 4.18, we present the results for different values of nw and dt. u is the mean value of uest and e is the mean absolute error e = |uest - u(xc)|. s(u), s(e) are the standard deviations of the estimation uest and of the absolute error, respectively Table 4.13 Statistics of 100 estimations of u by simulation of the SDE (nw = 1000) Δt  u s(u)

1E-2 0.362 7E-3

1E-3 0.361 2E-3

1E-4 0.361 1E-3

1E-5 0.361 7E-4

1E-6 0.361 5E-4

1E-7 0.361 3E-4

1E-8 0.361 1E-4

(continued)

4.8

Diffusion Processes

497

Example 4.39 (continued) Table 4.14 Statistics of the absolute errors in 100 estimations of u by simulation of the SDE (nw = 1000) Δt e s(e)

1E-2 5E-3 4E-3

1E-3 2E-3 2E-3

1E-4 1E-3 1E-3

1E-5 6E-4 4E-4

1E-6 4E-4 3E-4

1E-7 2E-4 2E-4

1E-8 9E-5 9E-5

Table 4.15 Statistics of 100 estimations of u by simulation of the SDE (Δt = 1E - 3) nw  u s(u)

1000 0.361 3E-3

2500 0.361 2E-3

5000 0.361 1E-3

10000 0.361 1E-3

25000 0.361 6E-4

50000 0.361 4E-4

100000 0.361 3E-4

Table 4.16 Statistics of the absolute errors in 100 estimations of u by simulation of the SDE (Δt = 1E - 3) nw e s(e)

1000 2E-3 2E-3

2500 1E-3 8E-4

5000 9E-4 9E-4

10000 8E-4 6E-4

25000 5E-4 4E-4

50000 3E-4 2E-4

100000 2E-4 2E-4

Table 4.17 Statistics of 100 estimations of u by simulation of the SDE (Δt = 1E - 4) nw  u s(u)

1000 0.361 1E-3

2500 0.361 9E-4

5000 0.361 7E-4

10000 0.361 5E-4

25000 0.361 3E-4

50000 0.361 2E-4

100000 0.361 1E-4

Table 4.18 Statistics of the absolute errors in 100 estimations of u by simulation of the SDE (Δt = 1E - 4) nw e s(e)

1000 1E-3 8E-4

2500 7E-4 5E-4

5000 6E-4 5E-4

10000 4E-4 3E-4

25000 2E-4 2E-4

50000 2E-4 1E-4

100000 1E-4 8E-5

Example 4.40 Consider the Partial Differential Equation ∂u - Δu = f on Ω × ð0, TÞ, ∂t

(continued)

498

4

Stochastic Processes

Example 4.40 (continued) uðx, t Þ = u0 ðx, t Þ on ∂Ω × ð0, TÞ,uðx, 0Þ = u0 ðx, 0Þ on Ω With Ω = fðx1 , x2 , x3 Þ : x1 þ x2 þ x3 ≤ 1, x1 ≥ 0, x2 0, x3 ≥ 0g,   f ðxÞ = x21 þ x22 þ x23 - 6 et , u0 ðxÞ = x21 þ x22 þ x23 : Let us consider   pffiffiffi 1 1 ,T = 1: dX t = 2dW t ,dT t = - dt,X 0 = x = , 0, 3 2 0 We evaluate Zτi ðxÞ nsim   1 X Y ,Y = u0 X τi ðxÞ , T τi ðxÞ þ f ðXs , T s Þds uðx, t Þ ≈ nsim i = 1 i i 0

The exact value is u(X0, T0) = 13e/36 ≈ 0.9816018. We can generate a sample from Y as follows:

Firstly, define f and u0:

Then adapt the function region to take the value TRUE when the point is in Ω and t > 0. It will take the value FALSE otherwise.

4.8

Diffusion Processes

499

Create a function that generates one variate from Y:

Use it to create a function that generates the mean value of nw variates from Y:

Give a point (xc,tc) for the evaluation, the number nw of variates of Y, the time step dt and generate the estimation of u at the point (xc,tc): We obtain

In Tables 4.19, 4.20, 4.21, 4.22, 4.23, and 4.24, we present the results for different values of nw and dt. u is the mean value of uest and e is the mean absolute error e = |uest - u(xc)|. s(u), s(e) are the standard deviations of the estimation uest and of the absolute error, respectively (continued)

500

4

Stochastic Processes

Example 4.40 (continued) Table 4.19 Statistics of 100 estimations of u by simulation of the SDE (nw = 1000) Δt  u s(u)

1E-2 0.974 2E-2

1E-3 0.982 8E-3

1E-4 0.982 4E-3

1E-5 0.981 2E-3

1E-6 0.981 1E-3

1E-7 0.982 7E-4

1E-8 0.982 4E-4

Table 4.20 Statistics of the absolute errors in 100 estimations of u by simulation of the SDE (nw = 1000) Δt e s(e)

1E-2 2E-2 1E-2

1E-3 7E-3 4E-3

1E-4 3E-3 2E-3

1E-5 2E-3 1E-3

1E-6 1E-3 9E-4

1E-7 6E-4 4E-4

1E-8 3E-4 3E-4

Table 4.21 Statistics of 100 estimations of u by simulation of the SDE (Δt = 1E - 3) nw  u s(u)

1000 0.982 9E-3

2500 0.981 5E-3

5000 0.982 3E-3

10000 0.981 2E-3

25000 0.982 2E-3

50000 0.982 1E-3

100000 0.982 9E-4

Table 4.22 Statistics of the absolute errors in 100 estimations of u by simulation of the SDE (Δt = 1E - 3) nw e s(e)

1000 7E-3 5E-3

2500 4E-3 3E-3

5000 3E-3 2E-3

10000 2E-3 2E-3

25000 1E-3 9E-4

50000 9E-4 7E-4

100000 7E-4 5E-4

Table 4.23 Statistics of 100 estimations of u by simulation of the SDE (Δt = 1E - 4) nw  u s(u)

1000 0.982 4E-3

2500 0.982 2E-3

5000 0.981 2E-3

10000 0.982 1E-3

25000 0.982 9E-4

50000 0.982 6E-4

100000 0.982 4E-4

Table 4.24 Statistics of the absolute errors in 100 estimations of u by simulation of the SDE (Δt = 1E - 4) nw e s(e)

1000 3E-3 3E-3

2500 2E-3 2E-3

5000 1E-3 1E-3

10000 1E-3 8E-4

25000 7E-4 5E-4

50000 4E-4 3E-4

100000 4E-4 2E-4

4.8

Diffusion Processes

501

Remark 4.11 The reader will find in the literature other methods for the solution of partial differential equations by means of SDE. For instance, the method usually referred as “Walk on Spheres”: see, for instance, Elepov and Mikhailov (1969), Booth (1982), Hwang and Mascagni, Efficient modified “walk on spheres” algorithm for the linearized Poisson–Bolzmann (2001), Hwang et al. (2003), and Zhou and Cai (2016).

Chapter 5

Uncertain Algebraic Equations

Abstract In this chapter, we consider the situation where an unknown n-dimensional vector X has to be determined by solving a system of equations having the form F(X, U) = 0, where F is a mapping from the n-dimensional Euclidean space on itself and U is a random k-dimensional vector. We focus on the numerical determination of the distribution of solution X, which is also a random variable. We consider linear and nonlinear situations. We consider also the determination of the distribution for eigenvalues of matrices. Solving equations is a basic activity in most fields of knowledge. As an example, let us consider the modeling of travel distributions among different locations. The travels are described by a matrix T = (Tij, 1 ≤ i, j ≤ n) connecting inputs and outputs of the different locations: Tij is the number of travels from location i to location j–i is the origin and j is the destination. The matrix T is usually referred as Origin/ Destination matrix, or simply O/D matrix. In general, it takes the form presented in Table 5.1. Such a matrix synthetizes the empirical data about the travels between the locations: the sum of the row i gives the number Oi of travels leaving the location i, while the sum of the column j gives the number of travels arriving at the location j. We have: Oi = T i∎ =

n X j=1

T ij , Dj = T ∎j =

n X

T ij :

ð5:1Þ

i=1

One of the main uses of these data is the prediction of the future numbers of travels if some modification intervenes in the transportation network (for instance, infrastructure evolution, such as the opening of a new road), in the intervening opportunities (for instance, the installation of a new facility – school, hospital, and

Supplementary Information The online version of this chapter (https://doi.org/10.1007/978-3031-17785-9_5) contains supplementary material, which is available to authorized users. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 E. Souza de Cursi, Uncertainty Quantification using R, International Series in Operations Research & Management Science 335, https://doi.org/10.1007/978-3-031-17785-9_5

503

504

5

Table 5.1 Example of O/D matrix

Table 5.2 An example of flux matrix T

... ... ... ...

Location 1 From 1 to 1 ... From n to 1

Location 1 ... Location n

Location n From 1 to n ... From n to n

Destination O R I G I N

Table 5.3 An example of distance matrix D

Uncertain Algebraic Equations

1 2 3 4 5 6

1 0 244 189 399 193 53

2 211 0 72 46 41 20

1 0 1 0.9 0.8 1.1 1.9

2 1 0 1.2 1.8 1.6 1.7

3 95 42 0 63 15 6

4 382 51 119 0 74 18

5 236 58 36 94 0 49

6 5 2 1 2 4 0

Destination O R I G I N

1 2 3 4 5 6

3 0.9 1.2 0 1.2 2.0 2.6

4 0.8 1.8 1.2 0 1.4 2.5

5 1.1 1.6 2.0 1.4 0 1.4

6 1.9 1.7 2.6 2.5 1.4 0

commerce) or in the distribution of the population (relocations and moves of people). To make such a prediction, we need to use the data to calibrate a model describing it, id est, a spatial interaction model describing the flux of population. The literature proposes many models of this type, for instance, (Ravenstein, 1885; Reilly, 1929, 1931; Hoyt, 1939; Stouffer, 1940; Zipf, 1946; Stewart, 1947; Stewart, 1948; Converse, 1949; Zipf, 1949; Clark, 1951; Isard, 1956; Hansen, 1959; Huff, 1964; Wilson, 1967, 1970; Tobler, 1976; Wilson, 2010). In general, all the models include parameters to be determined from the available data. As an example, let us consider the fluxes given in Table 5.2. We can consider the model   T ij = Ai Oi Bj Dj exp - αd ij - βcij , i ≠ j:

ð5:2Þ

where D is a distance matrix and C is an opportunity matrix – examples are given in Tables 5.3 and 5.4. The unknowns to be determined are α, β, Ai, Bi, i = 1, . . .6. The reader can find in the literature many works considering the determination of these unknowns from observed data – id est, about the calibration of the model. A popular method is the method of Furness (1965), which was adapted by many authors – including the author of this book (Gonçalves & Souza de Cursi, 2001). However, our purpose here is not to study calibration methods, but those of

5

Uncertain Algebraic Equations

505

Table 5.4 An example of opportunity matrix C O R I G I N

Destination 1 1 0 2 1.5 3 2.5 4 1 5 2 6 3

2 1.5 0 2.5 4.5 4 4

3 2.5 2.5 0 2 5 5

4 1 4.5 2 0 3.5 4.5

5 2 4 5 3.5 0 2

6 3 4 5 4.5 2 0

uncertainty quantification, namely for algebraic equations. Thus, we shall adopt a simplified method of calibration, more useful for our purposes: we shall look for the coefficients α, β, ai, bi, i = 1, . . .6 such that   T ij = ai bj exp - αdij - βcij , i ≠ j

ð5:3Þ

Of course, Eqs. (5.2) and (5.3) coincide if Ai = ai/Oi, Bj = bj/Dj, but (5.3) leads to a simpler procedure. Notice that     - dij α - cij β þ log ðai Þ þ log bj = log T ij , i ≠ j :

ð5:4Þ

so that we can take as unknowns α, β, log(Ai), log (Bi), i = 1, . . .6. In addition, we can set X = ðα, β, a1 , . . . , a6 , b1 , . . . , b6 Þt and we rewrite Eq. (5.4) as a linear system MX = N:

ð5:5Þ

where for a convenient k 2 {1, . . ., 30}:   M k1 = - dij , M k2 = - cij , M k,iþ2 = 1, M k,iþ8 = 1, N k = log T ij : The other elements from M are null. Notice that M is a 30 × 14 – matrix, so that the linear system (5.5) is overdetermined: we have 14 unknowns and 30 values of Tij, i ≠ j, so that there are more equations than unknowns. Consequently, it must be solved by an adapted approach. For instance, by linear least squares with a regularizing parameter ε > 0: ðM t M þ εIdÞX = M t N:

ð5:6Þ

In general, the values determined by (5.6) do not verify the conditions (5.1) and it is necessary to make a second step involving constrained nonlinear optimization to get results that satisfy (5.1) and α, β ≥ 0. Notice that this second step involves a hard optimization problem: the results can vary significantly according to the method and parameters of optimization chosen by

506

5

Uncertain Algebraic Equations

the user. In the sequel, we call fmincon with the restrictions 0 ≤ α, β ≤ 1. On error, two other calls to fmincon are made, with a different starting point (a random perturbation of the bsolution of the solution of (5.6)). If all these calls produce an error, we call fminsearch (package pracma) followed by a call to GA, with a penalized objective function (quadratic penalty) – the penalty coefficient is chosen increasingly as 1E4, 1E5, . . ., 1E10, using as initial point the preceding solution. If this procedure is applied to the data in Tables 5.2, 5.3, and 5.4, we obtain in the first step (linear least squares solution with regularizing parameter ε = 1E - 8):

which corresponds to α = 3:609157, β = - 0:4397933, a = ð49:89239, 40:48984, 48:99367, 55:53921, 41:20979, 80:28867Þ b = ð49:89241,40:48984,48:99366,55:53921,41:20980, 80:28867Þ The second step furnishes

which corresponds to α = 0:45,

β = 0:25,

a = ð32:210238, 16:233560, 15:424393, 21:404289, 15:184532, 7:633345Þ b = ð34:324890, 14:978429, 8:286350, 21:811274, 19:804841, 0:794216Þ With these values, we have (T(x) is the matrix of fluxes generated by x):    max T ij - T ij ðxÞ, 1 ≤ i, j ≤ 6 ≤ 0:5, so that the relative error between T(x) and the data T of Table 5.2 is about 0.1%. As previously indicated, other approaches exist in the literature. Whatever the method adopted, matrices T,C, and D are generally determined from empirical data and observations – thus, they contain errors, and their values are uncertain, due to the error margins. As an example, assume that the fluxes in T result from measurements and are affected by some variability and errors: for instance, the number of travels can vary

5

Uncertain Algebraic Equations

507

Table 5.5 An example of variability of T T12 T21 T12 T21

214 267 194 245

219 258 195 235

220 261 193 245

209 267 224 266

229 266 203 243

210 233 202 250

194 255 224 245

205 226 232 256

220 236 216 238

231 225 213 264

from a day to another, from a week to another and so on – a new measurement probably will lead to different values. Thus, each observed value Tij can be considered as a fluctuation around a mean value T ij , id est, a realization of a random  variable: T ij = T ij þ uij (additive model of error/variability) or T ij = 1 þ uij T ij (multiplicative model of error/variability), where uij is a random variable. Analogously, the values of C can be erroneous or badly evaluated. To consider the variability of the data and analyze its impact on the determination of the coefficients, we can use the approaches previously presented in Chap. 3. Let us exemplify such an analysis by considering the (very!) simple situation where only the values of T12 and T21 are affected by uncertainty: T 12 = ð1 þ u1 ÞT 12 and T 21 = ð1 þ u2 ÞT 21 . Assume that we get the following data (Table 5.5): We start by estimating T 12 , T 21 by the empirical mean of the data:

Then, we determine the values of u1, u2:
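A minimal sketch of this step, using the multiplicative model T = (1 + u) T̄ introduced above:

```r
# Multiplicative model: T = (1 + u) * Tbar, hence u = T / Tbar - 1
u1 <- T12obs / T12bar - 1
u2 <- T21obs / T21bar - 1
```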

Then, we determine the values of α and β by the simplified procedure described:


Now, we have a sample from (α, β) and the corresponding values of u = (u1, u2): we can apply the methods presented in Chap. 3, namely in Sect. 3.5.1. For instance, we can look for 2D-expansions with k1 = k2 = 3: we obtain the coefficients c1 for the expansion of α and the coefficients c2 for the expansion of β:

If the distribution of u is known, the knowledge of the coefficients allows the determination of the distribution of (α, β). For instance, assume that u1 and u2 are independent and uniformly distributed. Then, we can generate a large sample from u and use it to estimate the CDF and the PDF of the variables. For instance, we can generate 400 variates from each ui on the interval (min ui, max ui), forming a sample of 1.6E5 variates (u1,i, u2,j) from (u1, u2). Calculating the 2D expansion on this sample furnishes a sample of 1.6E5 variates from (α, β). Notice that the expansion can generate non-compliant values, such as negative values for the coefficients. In Fig. 5.1a–d, we present the crude results, where the non-compliant values are kept. In Fig. 5.2a–d, the non-compliant values were eliminated – observe that the results are similar to those generated by the whole set of values, including the non-compliant ones. If supplementary information on the support of the variables is available, we can generate variates on the whole support. For instance, assume that the variables have as support (-0.1, 0.1). In this case, we generate a large sample of (u1, u2) on this interval. Since the data are limited to (-0.0911, 0.0925) for u1 and (-0.0966, 0.0721) for u2, this procedure implies extrapolation, which introduces errors, such as negative values for the coefficients. The results presented in Fig. 5.3 are crude results, where we kept the non-compliant values. If the non-compliant values are eliminated, we obtain the results in Fig. 5.4.
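A minimal sketch of this resampling step, assuming a helper eval2Dexp(c, u1, u2) that evaluates a 2D expansion with coefficients c (such a helper is hypothetical and stands for the evaluation method of Chap. 3; c1 and c2 are the coefficient sets obtained above):

```r
# Generate 400 variates of each u_i on its observed range and evaluate the
# 2D expansions of alpha and beta on the full 400 x 400 grid (1.6E5 points).
n  <- 400
v1 <- runif(n, min(u1), max(u1))
v2 <- runif(n, min(u2), max(u2))
grid <- expand.grid(u1 = v1, u2 = v2)
alpha_samp <- eval2Dexp(c1, grid$u1, grid$u2)
beta_samp  <- eval2Dexp(c2, grid$u1, grid$u2)
# Empirical CDF of alpha (non-compliant values may be filtered beforehand)
Falpha <- ecdf(alpha_samp)
```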


Fig. 5.1 Crude results for the CDF and PDF, generated by a polynomial expansion with k1 = k2 = 3, with the sample in Table 5.5. The CDF is generated by a sample of 401 variates from each ui, forming a sample of 1.6E5 variates from (u1, u2). The PDF is the SPH derivative of the empirical CDF. (a) CDF of α. (b) PDF of α. (c) CDF of β. (d) PDF of β

As a second example, let us consider the case where matrix C is affected by uncertainty, in the simple situation where only C12 and C13 are uncertain. For instance, assume that different evaluations produced the values in Table 5.6. We adopt an analogous procedure:

Then, we generate the sample from (α, β):


Fig. 5.2 Results for the CDF and PDF, generated by the compliant values furnished by the polynomial expansion with k1 = k2 = 3 used to generate Fig. 5.1. The PDF is the SPH derivative of the empirical CDF. (a) CDF of α. (b) PDF of α. (c) CDF of β. (d) PDF of β

We determine the 2D – expansion:

The distributions obtained are shown in Fig. 5.5a–d.


Fig. 5.3 Crude results for the CDF and PDF, generated by a polynomial expansion with k1 = k2 = 3 and ui ∈ (-0.1, 0.1). (a) CDF of α. (b) PDF of α. (c) CDF of β. (d) PDF of β

5.1 Uncertain Linear Systems

The previous examples show that we can use the approaches introduced in Chap. 3 to determine the distributions of the solutions of algebraic equations, but there are also alternative approaches. Let us examine a second method: we consider a linear system
$$A(U)\,X = B(U), \qquad (5.7)$$
where U is a random vector, A is a n × n matrix, and B is a n × 1 vector. Let us consider an expansion analogous to Eq. (3.3) of Chap. 3:
$$X \approx PX = \sum_{j=1}^{k} x_j\,\varphi_j(U), \quad x_j = \left(x_{1,j}, \ldots, x_{n,j}\right)^t. \qquad (5.8)$$


Fig. 5.4 Results for the CDF and PDF, generated by a polynomial expansion with k1 = k2 = 3 and ui ∈ (-0.1, 0.1), after elimination of non-compliant values. (a) CDF of α. (b) PDF of α. (c) CDF of β. (d) PDF of β

Table 5.6 An example of uncertainty on C

C12   C13      C12   C13
1.8   2.2      1.4   3.0
1.5   2.9      1.0   2.3
1.8   2.8      1.3   2.8
1.3   2.0      1.6   2.2
2.0   2.1      1.9   2.3
1.5   2.1      1.2   2.5
1.9   2.6      1.8   2.8
1.0   2.1      1.1   2.7
1.3   2.7      1.3   2.4
1.6   2.7      1.3   2.3

Then, using Eq. (5.7): A(U)PX ≈ B(U), id est,
$$\sum_{j=1}^{k} A(U)\,x_j\,\varphi_j(U) \approx B(U).$$


Fig. 5.5 Empirical CDF and PDF of the compliant values generated by a polynomial expansion with k1 = k2 = 3 and ui ∈ (-1, 1) (variability of C12, C13). The PDF is the SPH derivative of the CDF. (a) CDF of α. (b) PDF of α. (c) CDF of β. (d) PDF of β

Thus, for 1 ≤ r ≤ n,
$$\sum_{j=1}^{k}\sum_{s=1}^{n} A_{rs}(U)\,x_{s,j}\,\varphi_j(U) \approx B_r(U),$$
so that, for 1 ≤ i ≤ k,
$$\sum_{j=1}^{k}\sum_{s=1}^{n} \varphi_i(U)\,A_{rs}(U)\,x_{s,j}\,\varphi_j(U) \approx \varphi_i(U)\,B_r(U).$$
Let us denote
$$\mathcal{A}_{risj} = E\left(\varphi_i(U)A_{rs}(U)\varphi_j(U)\right), \quad \mathcal{B}_{ri} = E\left(\varphi_i(U)B_r(U)\right). \qquad (5.9)$$
Then, for 1 ≤ r ≤ n, 1 ≤ i ≤ k:
$$\sum_{j=1}^{k}\sum_{s=1}^{n} \mathcal{A}_{risj}\,x_{s,j} \approx \mathcal{B}_{ri}. \qquad (5.10)$$
Equations (5.10) form a linear system for the unknown coefficients (x_{s,j} : 1 ≤ s ≤ n, 1 ≤ j ≤ k). We may transform it into a classical linear system by using the map ℓ2 introduced in Sect. 3.5.1. For instance, set
$$p = i + (r-1)k, \quad q = j + (s-1)k. \qquad (5.11)$$
Then, taking
$$\mathbb{A}_{pq} = \mathcal{A}_{risj}, \quad \mathbb{B}_p = \mathcal{B}_{ri}, \quad \mathbb{X}_q = x_{s,j}, \qquad (5.12)$$
we have
$$\mathbb{A}\,\mathbb{X} = \mathbb{B}. \qquad (5.13)$$
Thus, we may determine the coefficients of the expansion by solving the linear system (5.13). In practice, the information on U is frequently furnished by a sample (U_m : 1 ≤ m ≤ ns). Then,
$$\mathcal{A}_{risj} \approx \frac{1}{ns}\sum_{m=1}^{ns}\varphi_i(U_m)A_{rs}(U_m)\varphi_j(U_m), \quad \mathcal{B}_{ri} \approx \frac{1}{ns}\sum_{m=1}^{ns}\varphi_i(U_m)B_r(U_m). \qquad (5.14)$$

An example of code for the generation of matrices $\mathbb{A}$ and $\mathbb{B}$ is given in class uls:
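The book's class uls is not reproduced here; a minimal stand-alone sketch of such a construction (the helper names AA, BB, phi and build_system are assumptions) could read:

```r
# Sketch: assemble the matrices of system (5.13) from a sample us (columns = variates).
# AA(u) returns the n x n matrix A(u), BB(u) the n x 1 vector B(u),
# phi(j, u) the value of the basis function phi_j(u); k is the number of basis functions.
build_system <- function(AA, BB, phi, k, n, us) {
  ns <- ncol(us)
  Abig <- matrix(0, n * k, n * k)
  Bbig <- numeric(n * k)
  for (m in 1:ns) {
    u  <- us[, m]
    Au <- AA(u); Bu <- BB(u)
    ph <- sapply(1:k, function(j) phi(j, u))
    for (r in 1:n) for (i in 1:k) {
      p <- i + (r - 1) * k
      Bbig[p] <- Bbig[p] + ph[i] * Bu[r] / ns
      for (s in 1:n) for (j in 1:k) {
        q <- j + (s - 1) * k
        Abig[p, q] <- Abig[p, q] + ph[i] * Au[r, s] * ph[j] / ns
      }
    }
  }
  list(A = Abig, B = Bbig)
}
```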


Class uls has as properties functions calculating A(u), B(u), the dimension n and the basis φ.

Example 5.1 Let us consider the input/output analysis of economy (Leontieff, 1936, 1937, 1986; Arrous, 2000), which is described by the linear system
$$A X = B, \quad A = Id - T, \quad W = \operatorname{diag}(L)\,X \qquad (5.15)$$
T is the structural matrix: T_ij gives the fraction of the production of the economic sector i consumed by the economic sector j per unit of production of sector j; L is the Labor vector: L_i is the quantity of Labor requested in field i for the production of one unit; B is the demand vector: B_i is the quantity of goods available to consumers and exportation; W is the quantity of Labor necessary to the production; diag(L) denotes a diagonal matrix having L as diagonal. In practice, T and L result from empirical data and observations, while B is a forecast. Consequently, the model involves uncertainty. Let us consider a model situation in dimension n = 2, with B1 = 125 + u1, u1 uniformly distributed on (-10, 10), B2 = 110 + u2, u2 uniformly distributed on (-5, 15), and
$$T = \begin{pmatrix} 0.3 & 0.3125 \\ 0.128 & 0.075 \end{pmatrix}$$
We use the data furnished by a sample of 36 values from U = (u1, u2), formed by the pairs (u1,i, u2,j), 1 ≤ i, j ≤ 6, given in Table 5.7.

Table 5.7 Data used for the generation of the sample

u1: -9.36334   -9.07657   -8.05736   -6.57627   -4.46154    4.120922
u2: -4.31108    1.341990   3.774887   8.89657   11.46916   14.00444

In this case, X verifies Eq. (5.15), which is analogous to Eq. (5.7). Since U = (u1, u2), we use the expansion (5.8), with two polynomial families of degree k1 = k2 = 2 (see Chap. 3, Sect. 3.5.1). To determine the coefficients, we start by the creation of the basis:
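A minimal sketch of this step, assuming a tensorized polynomial basis indexed as in Sect. 3.5.1 and reusing the build_system sketch given above (all names here are assumptions, not the book's code):

```r
# 2D polynomial basis phi_j(u) = u1^a * u2^b, 0 <= a, b <= 2, j = 1 + a + 3*b (9 functions)
phi <- function(j, u) {
  a <- (j - 1) %% 3
  b <- (j - 1) %/% 3
  u[1]^a * u[2]^b
}
AA <- function(u) diag(2) - matrix(c(0.3, 0.128, 0.3125, 0.075), 2, 2)  # A = Id - T
BB <- function(u) c(125 + u[1], 110 + u[2])                              # B(U)
sys <- build_system(AA, BB, phi, k = 9, n = 2, us = us)   # us: 2 x 36 sample matrix
xcoef <- matrix(solve(sys$A, sys$B), nrow = 2, byrow = TRUE)  # rows: coefficients of x1, x2
```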


Then, we determine the coefficients of the expansion: we obtained the coefficients shown in Table 5.8. Figures 5.6 and 5.7 show the marginal distribution of each component of the solution: the accordance is particularly good.

Table 5.8 Coefficients obtained

x1,j: 229.12   10.29   0.00   15.23   0.00   0.00   0.00   0.00   0.00
x2,j: 145.22   23.05   0.00    2.11   0.00   0.00   0.00   0.00   0.00

Fig. 5.6 CDF of the first component of the solution (marginal CDF of x1)

Fig. 5.7 CDF of the second component of the solution (marginal CDF of x2)


Example 5.2 Let us consider a second situation corresponding to Eq. (5.15), where B1 = 125, B2 = 110, and
$$T = \begin{pmatrix} 0.3 - u_1 & 0.3125 \\ 0.128 + u_2 & 0.075 \end{pmatrix}$$
where u1 is uniformly distributed on (0, 0.1) and u2 is uniformly distributed on (0, 0.05). We use the data furnished by a sample of 36 values from U = (u1, u2), formed by the pairs (u1,i, u2,j), 1 ≤ i, j ≤ 6, given in Table 5.9.

Table 5.9 Data used for the generation of the sample

u1: 0.00086    0.040390   0.049284   0.072618   0.074695   0.096776
u2: 0.000932   0.030252   0.033205   0.038672   0.043191   0.046179

Again, X verifies Eq. (5.15), which is analogous to Eq. (5.7). As in the preceding example, we use the expansion (5.8), with two polynomial families of degree k1 = k2 = 2. We obtained the coefficients shown in Table 5.10. Figures 5.8 and 5.9 show the marginal distribution of each component of the solution: again, the accordance is extremely good.

Table 5.10 Coefficients obtained

x1,j: 246.90    6.35   0.17   -37.22   -1.90   -0.08   4.62   0.33   0.02
x2,j: 153.09   14.22   0.38    -5.15   -2.27   -0.12   0.64   0.30   0.02

Fig. 5.8 CDF of the first component of the solution (marginal CDF of x1)


Fig. 5.9 CDF of the second component of the solution (marginal CDF of x2)

5.1.1 Very Small Linear Systems

If your linear system has just a few variables, you may adopt a simplified approach. Indeed, Cramer's rule may be used if you have just 2 or 3 variables. In his book (Cramer, 1750), Cramer gives the explicit solution of a linear system for 2 or 3 variables. Using Cramer's notation, a linear system
$$\begin{pmatrix} X_1 & Y_1 \\ X_2 & Y_2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} B_1 \\ B_2 \end{pmatrix}$$
has as solution
$$x = \frac{-B_2 Y_1 + B_1 Y_2}{-X_2 Y_1 + X_1 Y_2}, \qquad y = \frac{B_2 X_1 - B_1 X_2}{-X_2 Y_1 + X_1 Y_2}.$$

A linear system
$$\begin{pmatrix} X_1 & Y_1 & Z_1 \\ X_2 & Y_2 & Z_2 \\ X_3 & Y_3 & Z_3 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} B_1 \\ B_2 \\ B_3 \end{pmatrix}$$
has as solution
$$x = \frac{B_3 Y_2 Z_1 - B_2 Y_3 Z_1 - B_3 Y_1 Z_2 + B_1 Y_3 Z_2 + B_2 Y_1 Z_3 - B_1 Y_2 Z_3}{X_3 Y_2 Z_1 - X_2 Y_3 Z_1 - X_3 Y_1 Z_2 + X_1 Y_3 Z_2 + X_2 Y_1 Z_3 - X_1 Y_2 Z_3},$$


$$y = \frac{B_3 X_2 Z_1 - B_2 X_3 Z_1 - B_3 X_1 Z_2 + B_1 X_3 Z_2 + B_2 X_1 Z_3 - B_1 X_2 Z_3}{-X_3 Y_2 Z_1 + X_2 Y_3 Z_1 + X_3 Y_1 Z_2 - X_1 Y_3 Z_2 - X_2 Y_1 Z_3 + X_1 Y_2 Z_3},$$
$$z = \frac{B_3 X_2 Y_1 - B_2 X_3 Y_1 - B_3 X_1 Y_2 + B_1 X_3 Y_2 + B_2 X_1 Y_3 - B_1 X_2 Y_3}{X_3 Y_2 Z_1 - X_2 Y_3 Z_1 - X_3 Y_1 Z_2 + X_1 Y_3 Z_2 + X_2 Y_1 Z_3 - X_1 Y_2 Z_3}.$$

Thus, for very small linear systems involving 2 or 3 unknowns, the explicit solution above may be used to generate samples. If the distributions of the terms X_i, Y_j, Z_k, B_ℓ are known, then large samples of the solutions may be generated. If small samples of these variables are given, then small samples of the solutions can be generated and a method of representation (Chap. 3) must be employed.

Example 5.3 Consider again the situation corresponding to Example 5.1, where B1 = 125 + u1, u1 ~ U(-10, 10), B2 = 110 + u2, u2 ~ U(-5, 15), and
$$T = \begin{pmatrix} 0.3 & 0.3125 \\ 0.128 & 0.075 \end{pmatrix}$$
In this case, A = Id - T, so that
$$X_1 = 0.7, \quad X_2 = -0.128, \quad Y_1 = -0.3125, \quad Y_2 = 0.925.$$
Thus,
$$-X_2 Y_1 + X_1 Y_2 = 0.6075.$$
Since B1 = 125 + u1, B2 = 110 + u2, we have
$$-B_2 Y_1 + B_1 Y_2 = 150 + 0.925\,u_1 + 0.3125\,u_2, \qquad B_2 X_1 - B_1 X_2 = 93 + 0.128\,u_1 + 0.7\,u_2.$$
So, we have
$$x_1 = \frac{150 + 0.925\,u_1 + 0.3125\,u_2}{0.6075}, \qquad x_2 = \frac{93 + 0.128\,u_1 + 0.7\,u_2}{0.6075}.$$
Finally,
$$x_1 \approx 246.9136 + 1.522634\,u_1 + 0.514403\,u_2, \qquad x_2 \approx 153.0864 + 0.210700\,u_1 + 1.152263\,u_2.$$


Thus, Cramer's rule furnishes the same result as Table 5.8. If data in Table 5.7 are given, we can generate a sample from x1:

u1 \ u2     -4.31108    1.34199    3.774887    8.89657    11.46916
-9.36334    230.4390   233.3470   234.5985    237.2331   238.5564
-9.07657    230.8757   233.7836   235.0351    237.6697   238.9931
-8.05736    232.4275   235.3355   236.5870    239.2216   240.5449
-6.57627    234.6827   237.5907   238.8421    241.4768   242.8001
-4.46154    237.9027   240.8106   242.0621    244.6967   246.0201
 4.120922   250.9706   253.8786   255.1300    257.7647   259.0880

This sample can be used to determine an expansion representing x1 as a function of U = (u1, u2). An analogous table can be generated for x2. Since the data are the same as in Example 5.1, we shall obtain the same results as in that example.
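A minimal sketch of how such a table could be generated from the closed-form solution above (variable names are assumptions):

```r
# Cramer's rule for the 2 x 2 system of Example 5.3
u1 <- c(-9.36334, -9.07657, -8.05736, -6.57627, -4.46154, 4.120922)
u2 <- c(-4.31108, 1.341990, 3.774887, 8.89657, 11.46916, 14.00444)
x1tab <- outer(u1, u2, function(a, b) (150 + 0.925 * a + 0.3125 * b) / 0.6075)
x2tab <- outer(u1, u2, function(a, b) (93 + 0.128 * a + 0.7 * b) / 0.6075)
round(x1tab, 4)   # rows: u1 values, columns: u2 values
```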

Example 5.4 Consider again the situation corresponding to Example 5.2, where B1 = 125, B2 = 110, u1 ~ U(0, 0.1), u2 ~ U(0, 0.05), and
$$T = \begin{pmatrix} 0.3 - u_1 & 0.3125 \\ 0.128 + u_2 & 0.075 \end{pmatrix}$$
In this case, A = Id - T and
$$X_1 = 0.7 + u_1, \quad X_2 = -0.128 - u_2, \quad Y_1 = -0.3125, \quad Y_2 = 0.925.$$
Thus,
$$-X_2 Y_1 + X_1 Y_2 = 0.6075 + 0.925\,u_1 - 0.3125\,u_2.$$
Since B1 = 125, B2 = 110, we have
$$-B_2 Y_1 + B_1 Y_2 = 150, \qquad B_2 X_1 - B_1 X_2 = 93 + 110\,u_1 + 125\,u_2.$$
So, we have


$$x_1 = \frac{150}{0.6075 + 0.925\,u_1 - 0.3125\,u_2}, \qquad x_2 = \frac{93 + 110\,u_1 + 125\,u_2}{0.6075 + 0.925\,u_1 - 0.3125\,u_2}.$$

If data in Table 5.9 are given, we can generate a sample from x2:

u1 \ u2     0.000932   0.030252   0.033205   0.038672   0.043191
0.00086     153.289    152.673    152.614    152.505    152.417
0.04039     163.767    162.693    162.590    162.401    162.247
0.049284    166.185    165.002    164.889    164.681    164.512
0.072618    172.637    171.161    171.020    170.761    170.550
0.074695    173.219    171.717    171.573    171.309    171.094
0.096776    179.488    177.694    177.522    177.208    176.952

As indicated in the preceding example, this sample can be used to determine an expansion representing x2 as a function of U = (u1, u2). An analogous table can be generated for x1. Again, the results obtained in this way will be analogous to those obtained in Example 5.2.

Exercises
1. Consider an input-output model where the structural matrix T and the Labor vector L are
$$T = \begin{pmatrix} 0.3 + u_1 & 0.5 - u_1 & 0.3 \\ 0.2 & 0.2 & 0.3 \\ 0.4 & 0.2 & 0.3 \end{pmatrix}, \qquad L = \begin{pmatrix} 300 \\ 150 + u_2 \\ 400 \end{pmatrix}$$
Assume that all the variables ui are independent and triangularly distributed T(-0.1, 0, 0.1).
(a) Generate a sample of 6 variates from each random variable ui.
(b) Use the values generated to generate a sample of 36 variates from x1.
(c) Use the sample from x1 to determine a representation of x1 as function of U = (u1, u2).
(d) Use varlinsys to determine a representation of x1 as function of U = (u1, u2).
(e) Compare the results obtained in (c) and (d).
(f) Determine the marginal distribution of x1.


2. Consider a model for the equilibrium between two markets, characterized by two variables (x1, x2) such that
$$\begin{pmatrix} 1 & 4 \\ 1 & -10 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 200 - u_1 + u_2 \\ 300 + u_1 \end{pmatrix}$$
Assume that the distributions of u1 and u2 are triangular: u1 ~ T(0, 25, 50) and u2 ~ T(-50, 0, 50).
(a) Generate a sample of 6 variates from each random variable ui.
(b) Use the values generated to generate a sample of 36 variates from x1.
(c) Use the sample from x1 to determine a representation of x1 as function of U = (u1, u2).
(d) Use varlinsys to determine a representation of x1 as function of U = (u1, u2).
(e) Compare the results obtained in (c) and (d).
(f) Determine the marginal distribution of x1.
3. Consider a model for the equilibrium between two markets, characterized by two variables (x1, x2) such that

$$\begin{pmatrix} 0.2 & 1 \\ 1 & -10 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 50 + u_1 \\ 300 + u_2 \end{pmatrix}$$
Assume that the distributions of u1 and u2 are triangular: u1 ~ T(0, 20, 40) and u2 ~ T(-50, 0, 50).
(a) Generate a sample of 6 variates from each random variable ui.
(b) Use the values generated to generate a sample of 36 variates from x1.
(c) Use the sample from x1 to determine a representation of x1 as function of U = (u1, u2).
(d) Use varlinsys to determine a representation of x1 as function of U = (u1, u2).
(e) Compare the results obtained in (c) and (d).
(f) Determine the marginal distribution of x1.
4. Consider the trip distribution model (5.3) with

$$T = \begin{pmatrix} 0 & 169 & 173 \\ 195 & 0 & 31 \\ 146 & 53 & 0 \end{pmatrix}, \quad D = \begin{pmatrix} 0 & 1 & 0.9 \\ 1 & 0 & 1.2 \\ 0.9 & 1.2 & 0 \end{pmatrix}, \quad C = \begin{pmatrix} 0 & 1.5 & 2.5 \\ 1.5 & 0 & 2.5 \\ 2.5 & 2.5 & 0 \end{pmatrix}$$


(a) Solve Eq. (5.4) to determine the coefficients.
(b) Assume that T12 is triangularly distributed T(160, 170, 180). Determine the distribution of the coefficients α, β.
(c) Assume that T21 is triangularly distributed T(170, 180, 210). Determine the distribution of the coefficients α, β.
5. Consider the trip distribution model (5.3) with

$$T = \begin{pmatrix} 0 & 278 & 89 \\ 321 & 0 & 41 \\ 176 & 70 & 0 \end{pmatrix}, \quad D = \begin{pmatrix} 0 & 0.5 & 1 \\ 0.5 & 0 & 1.2 \\ 1 & 1.2 & 0 \end{pmatrix}, \quad C = \begin{pmatrix} 0 & 1 & 2 \\ 1 & 0 & 2 \\ 2 & 2 & 0 \end{pmatrix}$$

(a) Solve Eq. (5.4) to determine the coefficients.
(b) Assume that T12 is triangularly distributed T(250, 270, 290). Determine the distribution of the coefficients α, β.
(c) Assume that T21 is triangularly distributed T(300, 320, 340). Determine the distribution of the coefficients α, β.
6. Consider the trip distribution model (5.3) with

$$T = \begin{pmatrix} 0 & 278 & 89 \\ 321 & 0 & 42 \\ 176 & 72 & 0 \end{pmatrix}, \quad D = \begin{pmatrix} 0 & 0.5 & 1 \\ 0.5 & 0 & 1.2 \\ 1 & 1.2 & 0 \end{pmatrix}, \quad C = \begin{pmatrix} 0 & 1 & 2 \\ 1 & 0 & 2 \\ 2 & 2 & 0 \end{pmatrix}$$

(a) Solve Eq. (5.4) to determine the coefficients.
(b) Assume that T23 is triangularly distributed T(30, 40, 50). Determine the distribution of the coefficients α, β.
(c) Assume that T32 is triangularly distributed T(60, 70, 80). Determine the distribution of the coefficients α, β.

5.2 Nonlinear Equations and Adaptation of an Iterative Code

The approach presented in the preceding section extends to nonlinear systems of equations:
$$f(x) = 0 \iff \begin{cases} f_1(x) = 0 \\ \;\;\vdots \\ f_n(x) = 0 \end{cases} \qquad (5.16)$$
Here, f : ℝ^n ⟶ ℝ^n defines the equations to be solved for the unknowns x = (x1, . . ., xn)^t. Equations (5.16) are generally solved by iterative methods, such as Newton–Raphson's method (see Sect. 1.14.2). Codes for the iterative solution of algebraic equations are often available, and one of the interesting possibilities offered by UQ is that of being able to study


uncertainties using existing codes, without modifying them and without rewriting any of their parts. As in the preceding situation, such an analysis may be made by generating a sample and then using one of the preceding approaches to determine the probability distribution of the results furnished by the code. Nevertheless, there is an alternative approach – called the method of adaptation – which allows one to obtain directly the representation of the variability of the code output as a function of the parameters affected by the variability. As an example, let us consider an iterative method which reads as
$$X^{(r+1)} = \Psi\left(X^{(r)}, U\right) \qquad (5.17)$$
Ψ is referred to as the iteration function. Notice that a direct method may be considered as an iterative method with a single iteration. If X = (X_1, . . ., X_n), then Ψ = (ψ_1, . . ., ψ_n) and we can describe the iterations component by component:
$$X_i^{(r+1)} = \psi_i\left(X^{(r)}, U\right), \quad 1 \le i \le n. \qquad (5.18)$$

To adapt this iterative method, we can consider approximations analogous to the preceding ones:
$$X^{(p)} \approx PX^{(p)} = \sum_{j=1}^{k} x_j^{(p)}\,\varphi_j(U), \qquad (5.19)$$
i.e.,
$$X_\ell^{(p)} \approx PX_\ell^{(p)} = \sum_{j=1}^{k} x_{j,\ell}^{(p)}\,\varphi_j(U), \quad \ell = 1, \ldots, n. \qquad (5.20)$$
Then, we can write
$$PX^{(r+1)} \approx \Psi\left(PX^{(r)}, U\right). \qquad (5.21)$$
Thus,
$$\varphi_i(U)\,PX^{(r+1)} \approx \varphi_i(U)\,\Psi\left(PX^{(r)}, U\right), \quad 1 \le i \le k$$
and
$$E\left(\varphi_i(U)\,PX^{(r+1)}\right) \approx E\left(\varphi_i(U)\,\Psi\left(PX^{(r)}, U\right)\right), \quad 1 \le i \le k.$$
We have
$$E\left(\varphi_i(U)\,PX^{(r+1)}\right) = \sum_{j=1}^{k} A_{i,j}\,x_j^{(r+1)}, \quad A_{i,j} = E\left(\varphi_i(U)\varphi_j(U)\right), \qquad (5.22)$$


id est,
$$E\left(\varphi_i(U)\,PX_\ell^{(r+1)}\right) = \sum_{j=1}^{k} A_{i,j}\,x_{j,\ell}^{(r+1)}, \quad A_{i,j} = E\left(\varphi_i(U)\varphi_j(U)\right). \qquad (5.23)$$
Let
$$B_{i,\ell}^{(r)} = E\left(\varphi_i(U)\,\psi_\ell\left(PX^{(r)}, U\right)\right). \qquad (5.24)$$
Then:
$$\sum_{j=1}^{k} A_{i,j}\,x_{j,\ell}^{(r+1)} = B_{i,\ell}^{(r)}, \quad 1 \le i \le k \text{ and } 1 \le \ell \le n. \qquad (5.25)$$
This is a linear system for the variables $x_{j,\ell}^{(r+1)}$. It can be seen as a sequence of n linear systems with the same matrix A and second members $B_\ell^{(r)} = \left(B_{1,\ell}^{(r)}, \ldots, B_{k,\ell}^{(r)}\right)^t$: the solution of $A\,Y_\ell^{(r)} = B_\ell^{(r)}$ furnishes $Y_\ell^{(r)} = \left(x_{1,\ell}^{(r+1)}, \ldots, x_{k,\ell}^{(r+1)}\right)^t$.

This approach does not require any modification to the code evaluating Ψ: indeed, the calculation of $B_{i,\ell}^{(r)}$ requires only calls to the code that evaluates Ψ – it can be used "as it is", without any modification. Such a method is non-intrusive, since it does not request the user to plunge into the meanders of the code to modify it – it requests only calls to the code. An example of algorithm for the adaptation is given below:

Adaptation of an Iterative Method
Input: a sample (U_1, . . ., U_ns) from U is given, as well as a program which determines the value of Ψ(X, U). The basis φ_i and the degree of approximation k are given. Give the stopping conditions (for instance, a maximum iteration number and a minimal step at each iteration).
Output: the coefficients of the representation PX of X.
Initialization: Determine the matrix A and give an initial guess $\left(x_{j,\ell}^{(0)} : 1 \le j \le k,\; 1 \le \ell \le n\right)$. Set the iteration number to zero: r ← 0.
1. Determine Z_m = PX^{(r)}(U_m), for 1 ≤ m ≤ ns.
2. Evaluate T_m = Ψ(Z_m, U_m), for 1 ≤ m ≤ ns. Let T_m = (T_{1,m}, . . ., T_{n,m}).
3. Determine V_{m,i} = φ_i(U_m), 1 ≤ m ≤ ns, 1 ≤ i ≤ k.
4. For 1 ≤ ℓ ≤ n:
   (a) Evaluate $B_\ell^{(r)}$: $B_{i,\ell}^{(r)} \approx \frac{1}{ns}\sum_{m=1}^{ns} V_{m,i}\,T_{\ell,m}$.
   (b) Solve the linear system $A\,Y_\ell^{(r)} = B_\ell^{(r)}$.
   (c) Set $x_{i,\ell}^{(r+1)} = Y_{i,\ell}^{(r)}$, 1 ≤ i ≤ k.
5. Increment the iteration number r ⟵ r + 1.
6. Test for the stopping conditions. If the iterations continue, then go to 1, else estimate x ≈ x^{(r)}.

The method iteration_adaptation performs one iteration:

5.2

Nonlinear Equations and Adaptation of an Iterative Code

527

In this code, self$proj is a method that evaluates the expansion having coefficients Xold and the basis phi, with degree nb-1 (nb is the number of coefficients).

The expansion is evaluated in the values of u (each column of u is a variate) and ndim is the dimension of x. For instance, we can use the code at right.

Matrices A and V are calculated by other methods of the class

Here, moyenne_echantillon determines the mean of ff of the sample us. The solution is generated by the method adaptation below:

528

5

Uncertain Algebraic Equations

Remark 5.1 In many practical situations, Ψ(x, u) = x - Φ(x, u), with Φ = (ϕ1, . . ., ϕn). In such a situation, we may solve the linear systems k X j=1

ðr Þ

ðr Þ

Ai,j δxj,ℓ = δBi,ℓ , 1 ≤ i ≤ k and 1 ≤ ℓ ≤ n,

with    ðk Þ δBi,ℓ = E φi ðU Þϕℓ PX ðkÞ , U : Then, ðrþ1Þ

xj,ℓ

ðr Þ

ðr Þ

= xj,ℓ - δxj,ℓ ,1 ≤ j ≤ k and 1 ≤ ℓ ≤ n:

Example 5.5 Let us consider relaxed Newton’s iterations for the resolution of the equation x2 - 2x + u = 0, where u is uniformly distributed on (0, 1). pffiffiffiffiffiffiffiffiffiffiffi The equation has two solutions for each u: xþ = 1 þ 1 - u and pffiffiffiffiffiffiffiffiffiffiffi x - = 1 - 1 - u. We have Pðx - < aÞ = 1 - ða - 1Þ2 ,0 < a < 1; Pðxþ < aÞ = ða - 1Þ2 ,1 < a < 2; Newton’s iterations correspond to the iteration function

Ψðx, uÞ = x - Φðx, uÞ, Φðx, uÞ =

x2 - 2x þ u , 2x - 2

Let us illustrate the procedure using a sample formed by 10 variates from u and the adaptation as shown in the algorithm above described, with a polynomial family and k = 4. The sample from u is (continued)

5.2

Nonlinear Equations and Adaptation of an Iterative Code

529

Example 5.5 (continued)

Starting from the initial guess Xini (randomly selected) below

The final result is Xsol:

If the distribution of u is known, we can generate the CDF of the solution. For instance, knowing that the distribution of u is uniform, we obtain the CDF shown in Fig. 5.10.

Fig. 5.10 Comparison between the exact CDF and the CDF generated by adaptation

If, in addition, we know that the support of u is (0, 1), we can generate data on this interval. In this case, the result is shown in Fig. 5.11. (continued)

530

5

Uncertain Algebraic Equations

Example 5.5 (continued)

Fig. 5.11 Comparison between the exact CDF and the CDF generated by adaptation – here, the data concern u uniformly distributed on (0, 1)

Example 5.6 Let us consider the solution of F(x) = 0 by Newton–Raphson’s iterations, with

F ð xÞ =

 ð1þu2 Þx1 - x2 , DF ðxÞ = 1þu2 x21 þx22 - ð1þu2 Þ 2x1

-1 2x2



:

The solutions are sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi   1 þ u2 x1 = ± , x2 = 1 þ u2 x1 2 2 1 þ ð1 þ u Þ (continued)

5.2

Nonlinear Equations and Adaptation of an Iterative Code

531

Example 5.6 (continued) Assume that u is uniformly distributed on (1,3) and we have the sample

We use as starting point (randomly chosen):

We obtain as result after 500 iterations:

The distributions of X1 and X2 are shown in Figs. 5.12 and 5.13.

Fig. 5.12 Comparison between the exact CDF of x1 and its CDF generated by adaptation

(continued)

532

5

Uncertain Algebraic Equations

Example 5.6 (continued)

Fig. 5.13 Comparison between the exact CDF of x2 and the CDF generated by adaptation

Example 5.7 Let us consider the solution of F(x) = 0 by Newton-Raphson’s iterations, with

F ð xÞ =

1 , DF ð x Þ = 2 2 x1 þ x2 - 1 2x1 x1 - ux2

-u 2x2

:

The solutions are rffiffiffiffiffiffiffiffiffiffiffiffiffi 1 x2 = ± , x = ux2 1 þ u2 1 Assume that u is uniformly distributed on (1,3) and we have the sample

(continued)

5.2

Nonlinear Equations and Adaptation of an Iterative Code

533

Example 5.7 (continued) We use as starting point (randomly chosen):

We obtain as result:

The distributions of X1 and X2 are shown in Figs. 5.14 and 5.15.

Fig. 5.14 Comparison between the exact CDF of x1 and its CDF generated by adaptation

(continued)

534

5

Uncertain Algebraic Equations

Example 5.7 (continued)

Fig. 5.15 Comparison between the exact CDF of x2 and the CDF generated by adaptation

It is interesting to notice that practically any iterative code may be adapted by this approach. For instance, iterations from a descent method in optimization may be adapted. As an example, let us consider the iterations of the gradient descent with a fixed step μ > 0.   Xðpþ1Þ = XðpÞ - μ—F X ðpÞ , U

ð5:26Þ

These iterations correspond to Ψðx, uÞ = x - μ—Fðx, uÞ

ð5:27Þ

and may be adapted analogously to the Newton’s iterations. As an example, let us consider the simple situation where F(x, u) = (x - e-u)2, for which the exact solution is x = e-u. Let us take u as uniformly distributed on (-1, 1) and use the sample below:

5.2

Nonlinear Equations and Adaptation of an Iterative Code

535

We use the polynomial family with k = 6 and we run 500 iterations with μ = 0.01. The initial point (randomly selected) is

The iterations of adaptation converge and furnish

As we see in Figs. 5.16 and 5.17, the result is close to the exact solution. Now, let us take u as normally distributed N(0, 1). We run the iterations with the same parameters: μ = 0.01, and same polynomial family used in the preceding example. The sample contains 20 variates from N(0, 1):

Fig. 5.16 Comparison between the exact X and its approximation PX (adaptation of gradient descent with U uniformly distributed on (-1,1)

536

5

Uncertain Algebraic Equations

Fig. 5.17 Comparison between the exact CDF of X and the CDF of its approximation PX (adaptation of gradient descent with U uniformly distributed on (-1,1)

Adaptation iterations run 1000 iteration from a random initial point and furnish

As we see in Figs. 5.18 and 5.19, the result is close to the exact solution. Let us consider a final illustration of the adaptation method, to underline once again that practically any method based on recurrences can be studied by this approach: let us consider a differential equation:

x01 x02





x1 0 =A , A= x2 -U

U , x1 ð0Þ = 0, x2 ð0Þ = 1, t 2 ð0, T Þ: 0

ð5:28Þ

The solution is x1(t) = sin (Ut), x2(t) = cos (Ut). Assume that the differential equation is discretized by Euler’s method, using time steps ti = iΔt, 0 ≤ i ≤ n, Δt = T/n. Let Xi = (x1(ti), x2(ti)). Then, we have Xiþ1 = Ψi ðXi , U Þ,

ð5:29Þ

where Ψi is the recurrence function associated to the method: Ψi ðX i , U Þ = Xi þ Δt AðU ÞX i ,

ð5:30Þ

5.2

Nonlinear Equations and Adaptation of an Iterative Code

Fig. 5.18 Comparison between the exact X and its approximation PX. (Adaptation of gradient descent with U gaussian N(0,1))

Fig. 5.19 Comparison between the exact CDF of X and the CDF of its approximation PX. (Adaptation of gradient descent with U gaussian N(0,1))

For instance, we can define the iteration function as shown at right:

537

538

5

Uncertain Algebraic Equations

Here, F is the right side member of the differential equation:

For other Runge–Kutta methods, the expression of Ψn is more complex, but we do not need to know the exact expression: as previously observed, the availability of a code performing the recurrence is enough to use the adaptation (see Chap. 6). Here, we do not look for the convergence of the iterations, but we generate a sequence of values X0(U),X1(U), . . .,Xn(U). By the UQ approach, each Xi(U ) is approximated by an expansion PXi(U ). The expansion PXi(U ) has coefficients xi to be determined – they can be generated by adaptation of the recurrence (5.29). We can determine the values Xi(U ), 1 ≤ i ≤ n using the method progression in class uae:

For instance, let us assume that U is normally distributed, with a mean 2π and a standard deviation π/6. Assume that a sample formed by ns = 20 variates from U is available. Let us use Euler’s method with different time steps and measure the error between the prevision generated by adaptation and the exact solution for all the values of U in the sample and all the values of Xi, 0 ≤ i ≤ n: ns X n X     1 PX i U j - Xi U j 2 RMS = ns × ðn þ 1Þ j = 1 i = 0

!12

ns X n ns X n   X     X PX i U j - X i U j 2 = X i U j 2 REL = j=1 i=0

ð5:31Þ

!12

ð5:32Þ

j=1 i=0

The results are shown in Table 5.11: we observe that, for a convenient time step, the adaptation furnishes results that are close to the exact ones. Figures 5.20 and 5.21 show comparisons between the components of Xnt+1(U ) and PXnt+1(U ).

5.2

Nonlinear Equations and Adaptation of an Iterative Code

Table 5.11 RMS error in the adaptation of Euler’s method with n time steps

n RMS REL

1E2 4.8E-2 15%

539

1E3 4.1E-3 1.3%

1E4 4.7E-4 0.1%

1E5 4.2E-5 0.01%

Fig. 5.20 Comparison between X1 and its approximation PX1 for t = 1. (Adaptation of Euler’s method with n = 1E4, U gaussian N(2π, π/6))

Fig. 5.21 Comparison between X2 and its approximation PX2 for t = 1. (Adaptation of Euler’s method with n = 1E4, U gaussian N(2π, π/6))

Exercises 1. Consider the adaptation of gradient descent iterations for the minimization of     f ðxÞ = 100 + u21 x21 + 50 + u22 x22 - ð200 - u1 Þx1 + ð50 + u2 Þx2 :

(continued)

540

5

Uncertain Algebraic Equations

Assume that u1  N(0, 0.1), u2  N(0,0.2). (a) Generate a sample of 10 variates from each random variable ui. (b) Use these values to generate a sample of 100 variates from U = (u1, u2). (c) Use the sample to determine a representation of X = arg min f as function of U = (u1, u2). (d) Determine the CDF and the PDF of X. 2. Consider the adaptation of the solution by RK of the differential equations d dt



x1 x2



=

- u1

2

4

- u2





x 1 ð 0Þ 2 , = x2 ð 0Þ x2 5

x1

Assume that the distributions of u1 and u2 are triangular: u1  T(0, 1, 2) and u2  T(0, 4, 8). (a) Generate a sample of 6 variates from each random variable ui. (b) Use these values to generate a sample of 36 variates from U = (u1, u2). (c) Use the sample to determine a representation of X(1) = (x1(1), x2(1)) as function of U = (u1, u2). (d) Determine the CDF and the PDF of X. 3. Consider the adaptation of gradient descent iterations for the minimization of     f ðxÞ = 1 + u21 x21 + 1 + u22 x22 + ð10 - u1 Þx1 + ð15 + u2 Þx2 : Assume that u1  N(0, 0.2), u2  T(0, 1, 2). (a) Generate a sample of 10 variates from each random variable ui. (b) Use these values to generate a sample of 100 variates from U = (u1, u2). (c) Use the sample to determine a representation of X = arg min f as function of U = (u1, u2). (d) Determine the CDF and the PDF of X. 4. Consider the adaptation of the solution by RK of the differential equations d dt



x1 x2



=

0:2 - 5u1 1

- 2:5 - 5u1 0



x1 x2







2 8 x 1 ð 0Þ = + , 5 3 x 2 ð 0Þ

(continued)

5.3

Iterative Evaluation of Eigenvalues

541

Assume that the distributions u1  T(-0.1,0, 0.1) and u2  N(0, 0.1). (a) Generate a sample of 10 variates from each random variable ui. (b) Use these values to generate a sample of 100 variates from U = (u1, u2). (c) Use the sample to determine a representation of X(1) = (x1(1), x2(1)) as function of U = (u1, u2). (d) Determine the CDF and the PDF of X. (e) Determine the CDF and the PDF of X. 5. Consider the adaptation of the solution by Newton–Raphson’s method of the nonlinear equations

F

x1

=

x2

x21 + x22 - u x2 - u  x1

! =

0 0

Assume that u  T(1, 2, 3) (a) Generate a sample of 20 variates from u. (b) Use the sample to determine a representation of X = (x1, x2) as function of U = u (c) Determine the Marginal CDF and the Marginal PDF of X1. (d) Determine the Marginal CDF and the Marginal PDF of X2. 6. Consider the adaptation of the solution by Newton–Raphson’s method of the nonlinear equations

x1 F x2

=

x22 -1 u x2 - u  x 1

x21 +

!

0 = 0

Assume that u  T(1, 2, 3) (a) Generate a sample of 40 variates from u. (b) Use the sample to determine a representation of X = (x1, x2) as function of U = u. (c) Determine the Marginal CDF and the Marginal PDF of X1. (d) Determine the Marginal CDF and the Marginal PDF of X2.

5.3

Iterative Evaluation of Eigenvalues

Eigenvalues and eigenvectors of matrices may be numerically determined by iterative methods. For instance, we can use Power Iterations which read as

542

5

Uncertain Algebraic Equations

Y ðpÞ Xðp + 1Þ = ðpÞ ,Y ðpÞ = AðU ÞXðpÞ ; Xð0Þ given: Y

ð5:33Þ

For convenient X(0), the iterations converge to the eigenvalue of A having the largest modulus – which is said to be the dominant eigenvalue. To obtain such a convergence, the initial guess X(0) must have a significant projection in the direction supported by the associated eigenvector. Analogously, inverse iteration produces the eigenvalue of A having the smallest modulus: Y ðpÞ Xðp + 1Þ = ðpÞ ,AðU ÞY ðpÞ = XðpÞ ; Xð0Þ given: Y

ð5:34Þ

Since these methods are iterative, they can be adapted as shown in the preceding section. For instance, the code at right defines the iteration function corresponding to A(U) evaluated by a function AA(U).

Let us consider U uniformly distributed on (0,1) and

AðU Þ =

cos ðU Þ

sin ðU Þ

sin ðU Þ

- cos ðU Þ



1-u

0

0

u



cos ðU Þ

sin ðU Þ

- sin ðU Þ

cos ðU Þ

A is evaluated as follows:

We consider a sample formed by 50 variates from U:



5.3

Iterative Evaluation of Eigenvalues

543

A good starting point is furnished by the eigenvectors of the mean matrix A:

The preceding code (algorithm of Sect. 5.2), with k = 5, furnishes the results shown in Figs. 5.22 and 5.23.

Fig. 5.22 Dominating eigenvalue and its approximation

544

5

Uncertain Algebraic Equations

Fig. 5.23 CDF of the dominating eigenvalue and its approximation

For inverse power iterations, we can use the code at right to define the iteration function. A(U) is evaluated by a function AA(U). With k = 5, the preceding code furnishes the results shown in Figs. 5.24 and 5.25. As a second example, let us determine the distribution of the dominant eigenvalue of AðU Þ =

cos ðU Þ

sin ðU Þ

sin ðU Þ

- cos ðU Þ

!

j sin ðUÞj

0

0

j cos ðUÞj

!

cos ðU Þ

sin ðU Þ

- sin ðU Þ cos ðU Þ

with U  N(0, π/10). We consider as given a sample of 50 variates from U:

!

5.3

Iterative Evaluation of Eigenvalues

545

Fig. 5.24 Smallest eigenvalue and its approximation

Fig. 5.25 CDF of the smallest eigenvalue and its approximation

Using k = 5 and a polynomial family, we got the results shown in Figs. 5.26, 5.27, and 5.28. The approximated CDF was evaluated from a large sample of 1E4 variates from N(0, π/10), using the polynomial expansion with the coefficients determined by the adapted power iteration method. The PDF was determined by SPH derivation of the CDF. The coefficients obtained were:

Inverse Power Iterations produced the results shown in Figs. 5.29, 5.30, and 5.31.

546 Fig. 5.26 Dominant eigenvalue and its approximation

Fig. 5.27 CDF of the dominant eigenvalue and its approximation

Fig. 5.28 PDF of the dominant eigenvalue and its approximation

5

Uncertain Algebraic Equations

5.3

Iterative Evaluation of Eigenvalues

Fig. 5.29 Smallest eigenvalue and its approximation

Fig. 5.30 CDF of the smallest eigenvalue and its approximation

Fig. 5.31 CDF of the smallest eigenvalue and its approximation

547

548

5.3.1

5

Uncertain Algebraic Equations

Very Small Matrices

If your matrix has dimension 2, you can use a closed formula for the eigenvectors and eigenvalues: let A=

X1

Y1

X2

Y2

! :

Then, the eigenvalues are

qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi  1 2  2 2ffi 1 1 2 2 1 1 2 , X + 4X Y - 2X Y + Y λ1 = X +Y 2

qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi  1 2  2 2ffi 1 1 2 2 1 1 2 λ2 = : X +Y + X + 4X Y - 2X Y + Y 2 The corresponding eigenvectors are qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi  1 2  2ffi 1 X + 4X 2 Y 1 - 2X 1 Y 2 + Y 2 C B v1 = @ A, 2X 2 1 qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 0  1 2  2ffi 1 - X1 + Y 2 X + 4X 2 Y 1 - 2X 1 Y 2 + Y 2 B C v2 = @ A: 2X 2 1 0

- X1 + Y 2 +

It is possible to give analogous formulae for matrices of order 3, but they are more complex: let X1 B A = @ X2

Y1 Y2

1 Z1 C Z 2 A:

X3

Y3

Z3

0

Set a = - X1 - Y 2 - Z3, b = - X2Y 1 + X1Y 2 - X3Z1 - Y 3Z2 + X1Z3 + Y 2Z3, c = X3Y 2Z1 - X2Y 3Z1 - X3Y 1Z2 + X1Y 3Z2 + X2Y 1Z3 - X1Y 2Z3, p=b-

2a3 ab a3 ,q = c + , 3 27 3

5.3

Iterative Evaluation of Eigenvalues

Δ=

549

  pffiffiffiffi pffiffiffiffi 4 3 1 1 p + q2 , D + = - q + Δ , D- = -q- Δ , 27 2 2 pffiffiffi 3 1 z = - + i : 2 2

Then, the eigenvalues are 1

1

1

1

1

1

λ1 = D3+ + D3- , λ2 = zD3+ + z2 D3- , λ3 = z2 D3+ + zD3- : The eigenvector associated with the eigenvalue λ is 0 B v=@

1

- λZ 1 + Y 2 Z 1 - Y 1 Z 2

C X 2 Z 1 + λZ 2 - X 1 Z 2 A:   λ2 - λ X 1 + X 2 - X 2 Y 1 + X 1 Y 2

This expression assumes that at least one of the lines of v is non-null. Alternative expressions exist for the situation where this assumption is not verified: for instance, we can replace ( “→” means “replaced by”) X1 → X1, Y1 → Z1, X2 → X3, Y2 → Z3, Z1 → Y1, Z2 → Y3 or X1 → Y2, Y1 → Z2, X2 → Y3, Y2 → Z3, Z1 → X2, Z2 → X3. Exercises 1. Consider the matrix 0

0:7 + u1 B A = @ - 0:2

- 0:5 - u1 0:8 + u2

- 0:4

- 0:2

1 - 0:3 C - 0:3 - u2 A 0:7

Assume that all the variables ui are independent and triangularly distributed T(0,0.2,0.5). (a) Generate a sample of 10 variates from each random variable ui. (b) Use the values generate to generate a sample of 100 variates from U = (u1, u2). (c) Use the sample from to determine a representation of the dominant eigenvalue of A as function of U = (u1, u2). (d) Determine the Marginal CDF and PDF of the dominant eigenvalue. 2. Consider

A=

u1 1

4 u2



Assume that the distributions of u1 and u2 are triangular: u1  T(1, 2, 3) and u2  T(0, 5, 10). (continued)

550

5

Uncertain Algebraic Equations

(a) Generate a sample of 10 variates from each random variable ui. (b) Use these values to generate a sample of 100 variates from U = (u1, u2). (c) Use the sample from to determine a representation of the eigenvalues of A as function of U = (u1, u2). (d) Determine the Marginal CDF and PDF of the dominant eigenvalue. 3. Consider

A=

0:2 + u1 u2

u1 - 10



Assume that the distributions of u1 and u2 are triangular: u1  T(1, 2, 3) and u2  T(0, 1, 2). (a) Generate a sample of 10 variates from each random variable ui. (b) Use these values to generate a sample of 100 variates from U = (u1, u2). (c) Use the sample from to determine a representation of the eigenvalues of A as function of U = (u1, u2). (d) Determine the Marginal CDF and PDF of the eigenvalues. 4. Consider

A=

u1

u2

u2

u1



Assume that the distributions of u1 and u2 are triangular: u1  T(2, 3, 4) and u2  T(0, 1, 2). (a) Generate a sample of 10 variates from each random variable ui. (b) Use these values to generate a sample of 100 variates from U = (u1, u2). (c) Use the sample from to determine a representation of the eigenvalues of A as function of U = (u1, u2). (d) Determine the Marginal CDF and PDF of the eigenvalues.

5.4

The Variational Approach for Uncertain Algebraic Equations

The variational approach was introduced in Sect. 3.3. We observed that a nonlinear relation between X and U can be exploited to determine the expansion PX. The method presented extends straightly to nonlinear equations

5.4

The Variational Approach for Uncertain Algebraic Equations

551

Let us consider again Eq. (5.16) involving a random variable U: f ðX, U Þ = 0 Analogously to Sect. 5.1, we can consider the approximation introduced in Eq. (5.8): X ≈ PX =

k X j=1

 t xj φj ðU Þ,xj = x1,j , . . . , xn,j :

Then, f ðPX ðU Þ, U Þ ≈ 0, so that Eðφi ðU Þf ðPX ðU Þ, U ÞÞ ≈ 0,1 ≤ i ≤ k:

ð5:35Þ

Equation (5.35) can be solved to determine the coefficients xj, 1 ≤ j ≤ k. For instance, we can use fzero, fminunc, or lsqnonlin from pracma to find a solution. To do this, we need to generate Eq. (5.35) as shown at right. he is a Hilbert Expansion object using basis {φj : 1 ≤ j ≤ nb}, cx a is a n × nb matrix containing the coefficients xj (cx[[i,j]] = xi, j).

We must solve eqs = 0 for the unknown cx. However, the functions of pracma receive vectors as arguments: we need to transform a matrix into a vector and conversely: function torow transforms a matrix into a row vector and function tomat performs the inverse transformation:

552

5

Uncertain Algebraic Equations

Then, we shall solve fvec = 0 for the unknown vecx, with fvec at right :

Example 5.8 Let us consider the equation x2 - 2x + u = 0, where u is uniformly distributed on (, 1), solved in Example 5.5. The equation has two solutions for each u: pffiffiffiffiffiffiffiffiffiffiffi pffiffiffiffiffiffiffiffiffiffiffi x + = 1 + 1 - u and x - = 1 - 1 - u and Pðx - < aÞ = 1 - ða - 1Þ2 ,0 < a < 1; Pðx + < aÞ = ða - 1Þ2 ,1 < a < 2; Let us illustrate the procedure using a sample formed by 10 variates from an uniformly distributed variable u

We use the algorithm above described, with a polynomial family and k = 5:

We must solve fobj(vx) = 0. For instance, use lsqnonlin with a starting point (1, 1, . . ., 1):

(continued)

5.4

The Variational Approach for Uncertain Algebraic Equations

553

Example 5.8 (continued) The iterations converge to the coefficients of the expansion of x+, as shown in Fig. 5.32.

Fig. 5.32 Comparison between the expansion and the data

Since the variable is uniform, we can generate a large sample on the interval (minu1, maxu1) and use the expansion to determine the approximated CDF of X. The result is shown in Fig. 5.33.

Fig. 5.33 Comparison between the exact CDF and the CDF generated by the expansion

554

5

Uncertain Algebraic Equations

Example 5.9 Let us consider the solution of F(x) = 0, with 0 

1  1 + u 2 x1 - x 2 A F ð xÞ = @   ,DF ðxÞ = 2 2 2 x1 + x2 - 1 + u

1 + u2 2x1

! -1 : 2x2

The solutions are sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi   1 + u2 ,x = 1 + u2 x1 x1 = ± 2 2 2 1 + ð1 + u Þ Assume that u is uniformly distributed on (1,3) and we have the sample 2.878982 2.119797 2.263830 2.080316 2.524269 2.587010 2.611016 2.770770 2.831148 2.881682

We use a starting point randomly chosen:

Examples of results are shown in Figs. 5.34 and 5.35.

Fig. 5.34 Comparison between the expansion and the data (values of x1)

(continued)

5.4

The Variational Approach for Uncertain Algebraic Equations

555

Example 5.9 (continued)

Fig. 5.35 Comparison between the exact CDF of x1 and the CDF generated by the expansion

Example 5.10 Let us consider the solution of F(x) = 0 by Newton–Raphson’s iterations, with

F ð xÞ =

x1 - ux2 x21 + x22 - 1

:

The solutions are rffiffiffiffiffiffiffiffiffiffiffiffiffi 1 ,x = ux2 x2 = ± 1 + u2 1 Assume that u is uniformly distributed on (1,3) and we have the sample 2.343613 2.938806 2.592832 2.757182 2.384299 2.661308 2.811384 2.040231 2.825743 2.764893

We use a starting point randomly chosen:

(continued)

556

5

Uncertain Algebraic Equations

Example 5.10 (continued) Examples of results are shown in Figs. 5.36 and 5.37.

Fig. 5.36 Comparison between the expansion and the data (values of x2)

Fig. 5.37 Comparison between the exact CDF of x2 and the CDF generated by the expansion

5.4

The Variational Approach for Uncertain Algebraic Equations

557

Exercises 1. Consider the variational solution of the nonlinear equations

x1 F x2

=

x21 + x22 - u x2 - u  x1

! =

0 0

Assume that u  T(1, 2, 3) (e) Generate a sample of 20 variates from u. (f) Use the sample to determine a representation of X = (x1, x2) as function of U = u (g) Determine the Marginal CDF and the Marginal PDF of X1. (h) Determine the Marginal CDF and the Marginal PDF of X2. (i) Consider the variational solution of the nonlinear equations

F

x1 x2

=

x22 -1 u x2 - u  x 1

x21 +

! =

0 0

Assume that u  T(1, 2, 3) (c) Generate a sample of 40 variates from u. (d) Use the sample to determine a representation of X = (x1, x2) as function of U = u. (e) Determine the Marginal CDF and the Marginal PDF of X1. (f) Determine the Marginal CDF and the Marginal PDF of X2.

Chapter 6

Random Differential Equations

Abstract In this chapter, we examine methods for the determination of the probability distributions of random differential equations. We present also methods for the analysis of orbits and trajectories under uncertainty. Differential are equations, so that the methods used for equations may be used – at least in principle. Nevertheless, differential equations have an essential characteristic: the unknowns are fields – i.e., functions, which are elements belonging to infinitely dimensional vector spaces, contrarily to the equations manipulated in the preceding chapters, whose unknowns are vectors from ℝn, which is finite dimensional. We shall analyze the consequences of passing from finite dimension to infinite dimension in the sequel (Sect. 6.4). As seen in Sect. 1.12, differential equations are generally solved by discretization, using packages where the classical methods of discretization are available. Notice that the discretization of a differential equations brings the unknowns to a finite dimensional space. Consequently, as shown in Sect. 5.2, we can apply the UQ approach of adaptation: the recurrence equations defining successive values generated may be adapted to generate the coefficients of the expansions of the discretized values of solution of the differential equation (see Eqs. (5.29) and (5.30)). For instance, we can use the method progression in class uae:

Supplementary Information The online version of this chapter (https://doi.org/10.1007/978-3031-17785-9_6) contains supplementary material, which is available to authorized users. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 E. Souza de Cursi, Uncertainty Quantification using R, International Series in Operations Research & Management Science 335, https://doi.org/10.1007/978-3-031-17785-9_6

559

560

6

Random Differential Equations

When considering an iteration function psi, this method generates a list Xsol such that Xsol[[i]] contains the coefficients the expansion of PXi(U ), for i = 1, . . ., nt, where Xi(U ) = X(ti, U ). The initial coefficients are X0, corresponding to PX0(U ). The Hilbert basis used is phi and nb = k + 1 is the number of coefficients for each component of PXi. ndim is the dimension of Xi and us is a matrix having as columns the variates from U. As observed in Chap. 5, the discretization of an ordinary differential equation dX = f ðt, X Þ,X ð0Þ = X0 dt corresponds to iterations X iþ1 = Ψi ðX i ; U Þ,

ð6:1Þ

which can be adapted to furnish the coefficients of PXi, for i = 1, . . ., nt. For instance, let us consider the example (5.28): 

x1′ x2′





  x1 0 =A , A= x2 -U

We start by defining the second member, as shown at right:

Then, we define a function that performs one step of a method of solution. Here, we use the method RK4. Finally, we define the iteration function psi:

 U , x1 ð0Þ = 0, x2 ð0Þ = 1, t 2 ð0; T Þ: 0

6

Random Differential Equations

561

Table 6.1 RMS error in the adaptation of RK4 with n time steps

n RMS REL

1E2 3E-7 0.01%

1E3 1E-7 0.001%

1E4 3E-8 0.001%

1E5 1E-8 0.001%

Let us consider a sample of 20 variates from U  N(2π, π/6) and a polynomial basis with k = 6. The initial data X0 = (0, 1)t correspond to  X0 =

0 1

0 0 0 0

0 0

0 0

0 0 0 0

 0 : 0

The method progression furnishes the results in Table 6.1 (to be compared with Table 5.11). As a second example, let us consider the following ordinary differential equation, with U  N(0, 1): d dt



x1 x2

 =

 pffiffiffiffiffi  U x2 , x1 ð0Þ = 1, x2 ð0Þ = 1, t 2 ð0; T Þ: 2Ux21

ð6:2Þ

The solution is x1(t) = eUt, x2(t) = e2Ut. Let us use the adaptation of the Dormand– Prince’s method, with n = 1000, T = 1. In this case,

and dt = 1E - 2, nt = 1E2. We use a sample of 20 variates from U. The RMS error was 5E-3, corresponding to a relative RMS error of 0.1%. An example of result is shown in Figs. 6.1 and 6.2. In addition to adaptation, there are also specific methods for differential equations. We shall examine some of them in the next sections.

562

6

Random Differential Equations

Fig. 6.1 Comparison between X1 and its approximation PX1 for t = 1. (Adaptation of Dormand– Prince’s method with n = 1E2, U gaussian N(0, 1))

Fig. 6.2 Comparison between X2 and its approximation PX2 for t = 1. (Adaptation of Dormand– Prince’s method with n = 1E2, U gaussian N(0, 1))

6.1

Linear Differential Equations

6.1

563

Linear Differential Equations

Let us consider the ordinary differential equations (ODE): dX = Aðt, U ÞX þ Bðt, U Þ, X ð0Þ = X0 ðU Þ: dt

ð6:3Þ

Here, A is a n × n matrix, B is a n × 1 vector, X0 is a n × 1 vector, X is a n × 1 vector. Analogously to the approach adopted for the analysis of linear systems in Sect. 5.1, we consider an expansion analogous to Eq. (5.9): X ≈ PX =

k X j=1

  xj ðt Þφj ðU Þ, xj = x1,j , . . . , xn,j

ð6:4Þ

Thus, we look for an expansion such that d PX = Aðt, U ÞPX þ Bðt, U Þ, PX ð0Þ = X 0 ðU Þ: dt

ð6:5Þ

id est, k X j=1

φ j ðU Þ

k X d φj ðUÞAðt, U Þxj ðt Þ þ Bðt, UÞ, xj ð t Þ = dt j=1 k X j=1

ð6:6Þ

xj ð0Þφj ðU Þ = X0 ðUÞ

Thus, for 1 ≤ r ≤ n k X j=1

φ j ðU Þ

k X d φj ðUÞArs ðt, UÞxs,j ðt Þ þ Br ðt, U Þ, xr,j ðt Þ = dt j=1 k X j=1

ð6:7Þ

xr,j ð0Þφj ðU Þ = X 0r ðU Þ

and k X j=1

M ij

k X n X d A risj ðt Þxs,j ðt Þ þ B ri ðt Þ, xr,j ðt Þ = dt j=1 s=1 k X

ð6:8Þ

M ij xr,j ð0Þ = N 0ri

j=1

Here,

  A risj = E φi ðU ÞArs ðt, U Þφj ðU Þ , B ri = E ðφi ðUÞBr ðt, U ÞÞ   M ij = E φi ðU Þφj ðU Þ , N 0ri = E ðφi ðUÞX 0r ðU ÞÞ

ð6:9Þ ð6:10Þ

564

6

Random Differential Equations

By setting M risj = M ij δrs , we have k X n X

M risj

j=1 s=1

k X n X d A risj ðt Þxs,j ðt Þ þ B ri ðt Þ, xs,j ðt Þ = dt j=1 s=1

k X n X

ð6:11Þ

M risj ðt Þxs,j ð0Þ = N 0ri

j=1 s=1

Analogously to algebraical linear systems, we have 

d = ðt Þ þ ðt Þ, ð0Þ = ℕ0 , dt

ð6:12Þ

pq = M risj , q = xs,j , p = B ri , ℕ0p = N 0ri , p = i þ ðr - 1Þk, q = j þ ðs - 1Þk: These equations for a linear ODE for the k × n components of . Notice that – in the general situation – the matrices  and  depend on time t. If the system is autonomous, id est, if A and B are independent from t, then  and  are also independent from t. In such a situation, you can calculate  and  once. Otherwise, an evaluation at each time step is requested. An example of code for the evaluation of matrices , ℕ0 is given below:

6.1

Linear Differential Equations

565

An example of code for the evaluation of matrices ðt Þ, ðt Þ is the following one:

In the autonomous case, you can use:

566

6

Random Differential Equations

These methods are implemented in the class ode.R. This class contains also a method zeros(m,n) to generate a m × n matrix of zeros. Let us exemplify the use of these subprograms with the equations 

x1′ x2′



 =A

  x1 0 , A= x2 -U

 U , x1 ð0Þ = 0, x2 ð0Þ = 1, t 2 ð0; T Þ: 0

The solution is x1(t) = sin (Ut), x2(t) = cos (Ut). U is normally distributed, with a mean π and a standard deviation π/6. We use a sample formed by ns = 20 variates from U:

Start by creating the functions that evaluate the matrices A and B.

Then, define the basis to be used: here, a polynomial basis with k = 5.

6.1

Linear Differential Equations

Define X0 and determine , , , ℕ0 , 0 : XX0 contains 0

Create a function evaluating the right side of the equation, adapted to the package deSolve. tocol(X) transforms X into a column vector.

Notice that the class ode proposes a function for the evaluation of the right side of the differential equation:

Then, call ode in package deSolve:

To use fsm_auto, the sequence of instructions is:

567

568

6

Random Differential Equations

In both the cases, resu contains the coefficients xs,j(t). Line i of resu corresponds to tc[[i]]=(i - 1)Δt, Δt = tmax/nt. resu[[i,1]] is the value of tc [[i]]; resu[i, 2 : (nb+1)] contains x1, 0(ti), . . ., x1, k(ti) (the coefficients of the expansion of the first component x1).; ]]; resu[i, (nb+2) : (2*nb+1)] contains x2, 0(ti), . . ., x2, k(ti) (the coefficients of the expansion of the second component x2). These coefficients were used to evaluate the solution for the values of U in the sample: for k = 5, the RMS error was 5E-6, corresponding to a relative error of 7E-4%. Examples of results are shown in Figs. 6.3, 6.4, 6.5, and 6.6. As a second example, let us consider a situation where the matrices are timedependent: 0

0 Aðt; U Þ = @ U 2 2

1 -

1

0

U A, Bðt; U Þ = @ U 2 2

2

0 ð1 - tU Þ

1 A, X 0 =

1 0

! :

The exact solution is   x1 ðt Þ = Ut þ e - Ut , x2 ðt Þ = U 1 - e - Ut :

Fig. 6.3 Comparison between X1(1) and its approximation PX1(1) for t = 1. (Solution of Eq. (6.12) by rk4, with n = 1E2, U gaussian N(π, π/6))

6.1

Linear Differential Equations

569

Fig. 6.4 Comparison between X2(1) and its approximation PX2(1) for t = 1. (Solution of Eq. (6.12) by rk4, with n = 1E2, U gaussian N(π, π/6))

Fig. 6.5 Comparison between the CDF of X1(1) and the CDF of PX1(1) for t = 1. (Solution of Eq. (6.12) by rk4, with n = 1E2, U gaussian N(0, 1))

570

6

Random Differential Equations

Fig. 6.6 Comparison between the CDF of X2(1) and the CDF of PX2(1) for t = 1. (Solution of Eq. (6.12) by rk4, with n = 1E2, U gaussian N(0, 1))

We have

In this case, the function evaluating the right side of the equation, reads as shown at right:

6.1

Linear Differential Equations

571

The class ode proposes a function for the evaluation of the right side of the differential equation in this case too:

Now, the code to call ode in package deSolve reads as:

To use fsm, the sequence of instructions is:

We consider U as normally distributed, with mean 0 and standard deviation 1. The sample used contains ns = 20 variates from U:

572

6

Random Differential Equations

Using k = 5, the RMS error was 1E-3, corresponding to a relative error of 0.1%. Figs. 6.7, 6.8, 6.9, and 6.10 exhibit examples of results.

Fig. 6.7 Comparison between X1(1) and its approximation PX1(1) for t = 1. (Solution of Eq. (6.12) by rk4, with n = 1E2, U gaussian N(0, 1))

Fig. 6.8 Comparison between X2(1) and its approximation PX2(1) for t = 1. (Solution of Eq. (6.12) by rk4, with n = 1E2, U gaussian N(0, 1))

6.1

Linear Differential Equations

573

Fig. 6.9 Comparison between the CDF of X1(1) and the CDF of PX1(1) for t = 1. (Solution of Eq. (6.12) by rk4, with n = 1E2, U gaussian N(0, 1))

Fig. 6.10 Comparison between the CDF of X2(1) and the CDF of PX2(1) for t = 1. (Solution of Eq. (6.12) by rk4, with n = 1E2, U gaussian N(0, 1))

574

6

Random Differential Equations

Exercises 1. Consider the differential equations d dt



x1 x2



 =

- u1 4

2 - u2



     2 x1 x 1 ð 0Þ = , x2 5 x2 ð 0Þ

Assume that the distributions of u1 and u2 are triangular: u1  T(0, 1, 2) and u2  T(0, 4, 8). (a) Generate a sample of 6 variates from each random variable ui. (b) Use these values to generate a sample of 36 variates from U = (u1, u2). (c) Use the sample to determine a representation of X(t) = (x1(t), x2(t)) as function of U = (u1, u2) for t 2 (0, 1). (d) Determine the CDF and the PDF of x1 (0.5). 2. Consider d dt



x1 x2



 =

0:2 - 5u1 1

- 2:5 - 5u1 0



x1 x2



      2 8 x1 ð0Þ = þ , 5 x2 ð0Þ 3

Assume that the distributions u1  T(-0.1, 0,0.1) and u2  N(0, 0.1). (a) Generate a sample of 10 variates from each random variable ui. (b) Use these values to generate a sample of 100 variates from U = (u1, u2). (c) Use the sample to determine a representation of X(t) = (x1(t), x2(t)) as function of U = (u1, u2) for t 2 (0, 1). (d) Determine the CDF and the PDF of x1 (0.5). 3. Consider the differential equation d dt



x1 x2



 =

2 þ u1 1

- 3 þ u2 0



x1 x2



 þ

     x 1 ð 0Þ 2 10 þ 5u1 , = -3 5 x2 ð 0Þ

Assume that the distributions of u1 and u2 are triangular: u1  T(1, 2, 3) and u2  T(0, 1, 2). (a) Generate a sample of 10 variates from each random variable ui. (b) Use these values to generate a sample of 100 variates from U = (u1, u2). (c) Use the sample to determine a representation of X(t) = (x1(t), x2(t)) as function of U = (u1, u2) for t 2 (0, 1). (d) Determine the CDF and the PDF of x1 (0.5). (continued)

6.2

Nonlinear Differential Equations

575

4. Consider the differential equation d dt



x1 x2



 =

- 3 - u2 - 2 - u2

- u1 0:5 þ u2



x1 x2



 þ

     2 0:2 þ u1 x1 ð 0Þ = , 0:1 þ u2 5 x 2 ð 0Þ

Assume that the distributions of u1 and u2 are triangular: u1  T(-0.1,0, 0.2) and u2  T(-0.5,0, 0.5). (a) Generate a sample of 6 variates from each random variable ui. (b) Use these values to generate a sample of 36 variates from U = (u1, u2). (c) Use the sample to determine a representation of X(t) = (x1(t), x2(t)) as function of U = (u1, u2) for t 2 (0, 1). (d) Determine the CDF and the PDF of x1 (0.5).

6.2

Nonlinear Differential Equations

The approach presented in the previous section extends to nonlinear ODE such as dX = f ðt, X, U Þ, Xð0Þ = X 0 ðU Þ: dt

ð6:13Þ

The procedure is analogous and leads to a formulation close to the method of adaptation, presented in Sect. 5.2: we use Eq. (6.13) to get d PX = f ðt, PX, U Þ, PX ð0Þ = X 0 ðU Þ: dt

ð6:14Þ

Thus, for 1 ≤ r ≤ n k X j=1

φ j ðU Þ k X j=1

d x ðt Þ = f r ðt, PX, UÞ, dt r,j

ð6:15Þ

xr,j ð0Þφj ðU Þ = X 0r ðU Þ

Then, k X j=1

M ij k X j=1

d x ðt Þ = F ri ðt, xÞ, dt r,j M ij xr,j ð0Þ = N 0ri

ð6:16Þ

576

6

Random Differential Equations

with M ij , N 0ri previously defined (Eq. 6.10) and F ri ðt, xÞ = Eðφi ðU Þf r ðt, PX, U ÞÞ:

ð6:17Þ

The evaluation of F ri is made as follows: 1. Determine Zm = PX(Um), for 1 ≤ m ≤ ns. 2. Evaluate Tm = f(t, Zm, Um), for 1 ≤ m ≤ ns. Let Tm = (T1, m, . . ., Tn,m) 3. Determine Vm,i = φi(Um), 1 ≤ m ≤ ns, 1 ≤ i ≤ k. ns P V m,i T r,m . 4. Determine F ri ≈ ns1 m=1

Using the same map p = i + (r - 1)k, q = j + (s - 1)k previously introduced, we have 

d = ðt, Þ, ð0Þ = ℕ0 , dt

ð6:18Þ

pq = M risj , q = xs,j , p = F ri , ℕ0p = N 0ri : An example of code for the determination of  is given below (implemented in the class ode.R):

6.2

Nonlinear Differential Equations

577

As an example, let us consider the system defined in Eq. (6.1): f ðt; x; uÞ =

pffiffiffiffiffi ! u x2 2ux21

, X0 =

1 1

! :

The solution is x1(t) = eut, x2(t) = e2ut. We consider a sample of 20 variates from N(0,1):

Introduce the function f(t, x, u):

Define the Hilbert basis:

Define X0 and determine , ℕ0 , 0 : XX0 contains 0 .

Then solve Eq. (6.18):

578

6

Random Differential Equations

Fig. 6.11 Comparison between X2(1) and its approximation PX2(1) for t = 1. (Solution of Eq. (6.18) by rk4, with n = 1E2, U gaussian N(0, 1))

With k = 5, the RMS error was 2E-3, corresponding to a relative error of 0.2%. Figs. 6.11, 6.12, 6.13, and 6.14 show examples of results.

Exercises 1. Consider the differential equation dx = ð1 - t Þxu , xð0Þ = 1 dt Assume that u  T(1, 2, 3) and determine a representation of x(t) as function of u for t 2 (0, 1). Determine the CDF and the PDF of x(0.5). (continued)

6.2

Nonlinear Differential Equations

579

Fig. 6.12 Comparison between X2(1) and its approximation PX2(1) for t = 1. (Solution of Eq. (6.18) by rk4, with n = 1E2, U gaussian N(0, 1))

2. Consider the differential equation dx = ð1 - t Þxu1 , xð0Þ = u2 dt Assume that u1  T(1, 2, 3) and u2  N(2, 0.1). Determine a representation of x(t) as function of U = (u1, u2) for t 2 (0, 1). Determine the CDF and the PDF of x(0.5). 3. Consider the differential equation d dt



x1 x2



 =

     ð2 þ uÞx2 0 x1 ð0Þ = , - ð2 þ uÞx1 - 4x21 2 x2 ð0Þ

(continued)

580

6

Random Differential Equations

Fig. 6.13 Comparison between the CDF of X1(1) and the CDF of PX1(1) for t = 1. (Solution of Eq. (6.18) by rk4, with n = 1E2, U gaussian N(0, 1))

Assume that u  T(-1, 0, 1) and determine a representation of X(t) = (x1(t), x2(t)) as function of u for t 2 (0, 1). Determine the CDF and the PDF of x1(0.5). 4. Consider the differential equation dx = t 1 - u ð 1 þ xÞ u , x ð 0Þ = 1 dt Assume that u  T(0,0.5,1) and determine a representation of x(t) as function of u for t 2 (0, 1). Determine the CDF and the PDF of x(0.5)

6.3

Adaptation of ODE Solvers

581

Fig. 6.14 Comparison between the CDF of X2(1) and the CDF of PX2(1) for t = 1. (Solution of Eq. (6.18) by rk4, with n = 1E2, U gaussian N(0, 1))

6.3

Adaptation of ODE Solvers

In Sect. 5.2, we observed that any iterative method can be used to determine the coefficients of the expansion of the solution. Let us illustrate how to adapt the solver ode from deSolve: recall that ode evaluates the solution in the times defined by the user – thus, it is possible to make a single time step using adapted parameters. For instance, let us create a function odestep, which performs a single step of an available method of ode:

582

6

Random Differential Equations

Here, method is a string containing the method to be applied, t1 is the initial time, t2 is the final time, X1 is the value of the solution at time t1. Now, include in class uae two new methods:

These methods perform a progression in time with ψ = ψ(X, U, t0, t1) (in Sect. 5.2, we used ψ(X, U ), without dependance on time). Let us illustrate their use with the differential equation dx = f ðt; x; uÞ = dt

x2

!

U U U x2 , X 0 = ð1 - tU Þ þ x1 2 2 2 2

2

  1 : 0

We create the function that evaluates f(t, x, u):

Then, we define the function ψ(X, U, t0, t1)

We generate a sample of 20 variates from N(0,1) and we define a polynomial basis:

6.4

Uncertainties on Curves Connected to Differential Equations

583

Finally, we run the method:

Examples of results are shown in Fig. 6.15.

6.4

Uncertainties on Curves Connected to Differential Equations

If an ODE involves random parameters, then its solution becomes random – as seen in the preceding sections. Thus, the trajectories t → X ðt Þ = ðx1 ðt Þ, x2 ðt ÞÞ and the orbits   dx t → xi ðt Þ, pi ðt Þ = i ðt Þ dt are also random variables. For instance, let us consider the simple ODE: d dt



x1 x2

 =

pffiffiffiffiffiffiffiffiffiffiffiffiffi ! 1 þ u2 pffiffiffiffiffiffiffiffiffiffiffiffiffi2 , x1 ð0Þ = x2 ð0Þ = 0, t 2 ð0; 1Þ: u2 = 1 þ u22 u1 =

pffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffiffiffiffiffiffiffiffiffiffiffi The solution is x1 ðt Þ = tu1 = 1 þ u22 , x2 ðt Þ = tu2 = 1 þ u22 . If u = (u1, u2) is a random variable, x1 and x2 are also random. Thus, we may consider the analysis of the variability of these variables and look for statistics associated to them, such as, for instance, their mean and variance.

584

6

Random Differential Equations

Fig. 6.15 Comparison between the prevision furnished by the expansion determined by adaptation and the exact values at t = 1, U gaussian N(0, 1)

6.4

Uncertainties on Curves Connected to Differential Equations

585

Fig. 6.16 When the mean is taken on the equations, the result is the origin, which is not representative of the family of curves: all its elements are segments of unitary length

However – as shown in Sect. 3.8 – the analysis of uncertainties in curves may request the use of approaches that are independent from the representation. For instance, let us assume that P(u1 = 1) = P(u1 = - 1) = 1/2, while u2 is uniformly distributed on (-1, 1) Then, taking the mean on the equations leads to E(x1(t)) = 0, E(x2(t)) = 0: the resulting curve is not representative of the family, since the family is formed by segments emanating from the origin, all of length one (see Fig. 6.16). In such a situation, it appears as more adequate to determine the most representative member of the family (see Eq. (3.62)). In this case, 



C ðuÞ = Xðt, uÞ = t

u1 = u2 =

 pffiffiffiffiffiffiffi2ffi  1þu2 pffiffiffiffiffiffiffi2ffi , t 2 ð0, 1Þ 1þu2

Since !2 u1 pffiffiffiffiffiffiffiffiffiffiffiffiffi 1 þ u22

!2 þ

u2 pffiffiffiffiffiffiffiffiffiffiffiffiffi 1 þ u22

= 1,

586

6

Random Differential Equations

we can consider u1 u2 sin θðuÞ = pffiffiffiffiffiffiffiffiffiffiffiffiffi , cos θðuÞ = pffiffiffiffiffiffiffiffiffiffiffiffiffi : 2 1 þ u2 1 þ u22 Then,     sin θðuÞ CðuÞ = X ðt; uÞ = t ; t 2 ð0; 1Þ cos θðuÞ and the Hausdorff distance between C ðuÞ, C ðvÞ corresponds to t = 1, so that dist ðC ðuÞ, C ðvÞÞ =

pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 2ð 1 - cos ðθðuÞ - θðvÞÞ ,

id est,

1 dist ðC ðuÞ, C ðvÞÞ = sin ðθðuÞ - θðvÞÞ , 2 Thus, J ðC ðuÞÞ = E v ðdist ðC ðuÞ, C ðvÞÞÞ has a constant value for all the members of the family: any of its members may be considered as a good representative of the family. Let us consider a second ODE d dt



x1 x2



 =

 2ut , x1 ð0Þ = 0, x2 ð0Þ = u, t 2 ð0; 1Þ: 2uðt - 1Þ

The solution is x1(t) = ut2, x2(t) = u(t - 1)2. Let us assume that u is uniformly distributed on (0,1). The evaluations of the mean trajectory by two different approaches are shown in Fig. 6.17. In this case, p1(t) = 2ut and the orbits of the first variable are the points (ut2, 2ut). Their mean is shown in Fig. 6.18. The means taken at each time furnish 1 1 Eðx1 ðt ÞÞ = t 2 , Eðx2 ðt ÞÞ = ðt - 1Þ2 , E ðp1 ðt ÞÞ = t, t 2 ð0, 1Þ: 2 2 As a last example, let us consider the ODE d dt



x1 x2



 =

 - ð1 þ jujÞx2 , x1 ð0Þ = 0, x2 ð0Þ = u, t 2 ð0; 2π Þ: ð1 þ jujÞx1

The solution is x1(t) = u cos ((1 + |u|)t), x2(t) = u sin ((1 + |u|)t). Let us assume that u is uniformly distributed on (-1,1). In this case, the means taken at each time furnish

6.4

Uncertainties on Curves Connected to Differential Equations

587

Fig. 6.17 Trajectories of the system. Here, the approach by the Hausdorff distance produces the same result as the mean E((x1(t), x2(t))) taken at each time

Fig. 6.18 Orbits (x1(t), p1(t)) of the system. Again, the approach by the Hausdorff distance produces the same result as the mean E((x1(t), p1(t))) taken at each time

588

6

Random Differential Equations

Fig. 6.19 Trajectories and orbits of the system. Here, the approach by the Hausdorff distance does not produce the same result as the mean E((x1(t), x2(t))) taken at each time: means at fixed time are null, while Hausdorff distance produces a correct result

E ðx1 ðt ÞÞ = 0, E ðx2 ðt ÞÞ = 0, E ðp1 ðt ÞÞ = 0, E ðp2 ðt ÞÞ = 0: The trajectories are concentric circles: for each u, the corresponding trajectory is the circle of radius |u|, centered at the origin (0, 0). Considering that u and -u generate the same circle of radius |u|, the family has radius going from 0 to 1. Thus, we expect as mean trajectory the circle corresponding to |u| = 1/2. In this case, only the Hausdorff approach furnishes the good result. The evaluations of the mean trajectory and the mean orbit by both the approaches are shown in Fig. 6.19. Exercises 1. Consider the differential equation dx = ð1 - t Þxu , xð0Þ = 1, t 2 ð0, 1Þ: dt Assume that u  T(1, 2, 3) and determine  the most representative history (t, x(t)) and the most representative orbit xðtÞ; dx dt ðt Þ : 2. Consider the differential equation dx = ð1 - t Þxu1 , xð0Þ = u2 , t 2 ð0, 1Þ: dt Assume that u1  T(1, 2, 3) and u2  N(2, 0.1). Determine  the most  representative history (t, x(t)) and the most representative orbit xðt Þ, dx dt ðt Þ . (continued)

6.4

Uncertainties on Curves Connected to Differential Equations

589

3. Consider the differential equation 

d dt



x1 x2

 =

     ð2 þ uÞx2 0 x1 ð0Þ = , t 2 ð0; 1Þ: , - ð2 þ uÞx1 - 4x21 2 x2 ð0Þ

Assume that u  T(-1, 0, 1) and determine the most  representative  trajectory (x1(t), x2(t)) and the most representative orbit x1 ðt Þ, dxdt1 ðt Þ . 4. Consider the differential equation dx = t 1 - u ð1 þ xÞu , xð0Þ = 1, t 2 ð0; 1Þ: dt Assume that u  T(0, 0.5, 1) and determine the most representative history (t, x(t)) and the most representative orbit xðtÞ, dx dt ðt Þ . 5. Consider the differential equations d dt



x1 x2



 =

- u1 4

2 - u2



     2 x1 x 1 ð 0Þ = , t 2 ð0; 1Þ: , x2 5 x2 ð 0Þ

Assume that the distributions of u1 and u2 are triangular: u1  T(0, 1, 2) and u2  T(0, 4, 8). Determine the most representative the most  representa tive trajectory (x1(t), x2(t)) and the most representative orbit x1 ðt Þ, dxdt1 ðt Þ . 6. Consider t 2 (0, 1) and d dt



x1 x2



 =

0:2 - 5u1 1

- 2:5 - 5u1 0



x1 x2

 þ

      2 8 x1 ð0Þ = , 5 3 x2 ð0Þ

Assume that the distributions u1  T(-0.1, 0, 0.1) and u2  N(0, 0.1). Determine the most representative  trajectory (x1(t), x2(t)) and the most representative orbit x1 ðt Þ, dxdt1 ðt Þ . 7. Consider t 2 (0, 1) and d dt



x1 x2



 =

2 þ u1 1

- 3 þ u2 0



x1 x2



 þ

     x 1 ð 0Þ 2 10 þ 5u1 , = -3 5 x2 ð 0Þ

Assume that the distributions of u1 and u2 are triangular: u1  T(1, 2, 3) trajectory (x1(t), and u2  T(0, 1, 2). Determine the most  representative  x2(t)) and the most representative orbit x1 ðt Þ, dxdt1 ðt Þ . 8. Consider t 2 (0, 1) and d dt



x1 x2



 =

- u1 0:5 þ u2

- 3 - u2 - 2 - u2



x1 x2



 þ

     2 0:2 þ u1 x1 ð 0Þ = , 0:1 þ u2 5 x 2 ð 0Þ

Assume that the distributions of u1 and u2 are triangular: u1  T(-0.1,0, 0.2) and u2  T(-0.5, 0, 0.5) the most  representative  trajectory (x1(t), x2(t)) and the most representative orbit x1 ðt Þ, dxdt1 ðt Þ .

Chapter 7

UQ in Game Theory

Abstract In this chapter, we examine mathematical games under uncertainty (for instance, in probabilities or payoffs). We examine also evolutionary dynamics associated with games and we present methods for the determination of the probability distribution of variables and the analysis of strategies. For evolutionary dynamics, we consider also trajectories and orbits. As indicated in the title – and in the introduction of their book – Von Neumann and Morgenstern look for a new approach in Economic Sciences. In their book, they establish the basis of GT, namely the assumptions about rational behavior of the players, fixed rules, payoffs, and outcomes. Since Von Neumann’s works, GT was widely developed, including in probabilistic aspects. However, uncertainty quantification (UQ) in game theory (GT) remains rare. In this chapter, we examine some possible uses of UQ in GT: notice that this is not a chapter about GT – although we recall some basic elements – but about the applications of UQ to GT – indeed, the reader can find in the literature many presentations of GT provided by specialists of the domain. Here, we focus on the specific developments concerning the use of UQ in the framework of GT.

7.1

The Language from Game Theory

It is important to recall some terms often used in the sequel. At first, the term “game” itself, which is used as an abbreviation of the complete expression “mathematical game of strategy”. Von Neumann and Morgenstern consider a game as being simply a set of rules (Von Neumann & Morgenstern, 1953, p. 49). They make the difference between this abstract object and one of its concrete achievements – a play. In terms of modern informatics, we may say that a game is an object and a play is an instance of a game. As said, games are defined by rules – id est, the laws governing the game. The rules stipulate what is allowed or forbidden, winnings and losses, the end of the © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 E. Souza de Cursi, Uncertainty Quantification using R, International Series in Operations Research & Management Science 335, https://doi.org/10.1007/978-3-031-17785-9_7

591

592

7

UQ in Game Theory

game, the number of participants, etc. In a mathematical game of strategy, the participants must take decisions. A move is a decision took by or for a player: a personal move depends only on the decision of the player, while a random move involves some randomness. Moves may be simultaneous (when all the players reveal their decision and make their moves at the same time) or sequential (when only one player is allowed to move at a time). Static games allow only a unique simultaneous move for all the players, while dynamic games involve sequential moves or sequences of simultaneous moves. As consequence of the moves, the state of the play (and of the players) changes – the position of the play evolves from its initial state to the actual one by the moves of the players. We say that the players have a complete information when all the players know all the rules. They have a perfect information when all the players know the preceding moves and positions. The game is cooperative when negotiations between players are possible and coalitions of players may be created – otherwise, it is a non-cooperative game. In a game, each player looks for the maximization of a utility function generated by the payoffs – in general, the payoffs themselves: we say that the players have a rational behavior. A zero-sum game is a game where the sum of the payoffs is null: a player may have a positive payoff if and only if other players have negative payoffs corresponding to his gains. In mathematical games of strategy, it is assumed that there is no cheating. Rational players often use a strategy, id est, predefined sequences of moves conditionally determined by the actual position of the game: if the position is Pi then the move is Mi. We say that the strategy is pure when it consists in choosing a fixed move among the possible ones. We say that the strategy is mixed when it uses different moves. In general, mixed strategies randomize among the different moves with a given probability. A player may have a dominant strategy, id est, a strategy that maximizes his utility for any move of the other players; but it may also have a dominated strategy, id est, a strategy that minimizes his utility for any move of the other players. An equilibrium – sometimes called Nash equilibrium -is a set of strategies such that no player may increase his own utility by modifying only his own strategy: if all the other players keep their strategies, he must keep it too – otherwise, he will reduce his own winnings. Analogously to strategies, an equilibrium may be pure or mixed, according to the use of a single fixed move or several moves with a given probability. Games are generally represented in a normal form: assume that, at some stage, k players may choose among n possible moves M = {M1, . . ., Mn}: a move is represented by a k- dimensional vector m = (m1, . . ., mk) 2 Mk, where mi 2 M, 8 i. The result of these move is a payoff for each player, represented by a vector of payoffs u(m) = (u1(m), . . ., uk(m)). We may represent such a game by giving the set M and the function u(m). In simple situations, the normal form of a game may be represented by a matrix: we say that the game is under matrix form. An alternative representation is the extended form – a tree representing the choices of the players. A second alternative form is the multimatrix form, where the payoffs of each player appear in a different matrix.

7.2

A Simplified Odds and Evens Game

593

Uncertainties arrive to Game Theory by different ways – for instance, errors or lacks in the information, randomness involved in the moves (for instance, in mixed strategies), and uncertainty in the payoffs. A game involving some uncertainty is referred in the continuation as an Uncertain Game (UG). Examples of UG are Stochastic Games (SG) and Bayesian Games (BG). A SG is a dynamical game where the new state is randomly determined at each stage. A BG is a game where the information of the players is incomplete. In BG, the assumptions made by one player about the characteristics of the other players are his beliefs. The real set of characteristics of a player – for instance, his real payoffs – is called his type. For instance, if the payoffs are uncertain – then, represented by random variables: the distribution assumed by a player about the payoff of another player is a belief. The real distribution is part of the type of the player. In UG, it is usual introduce a special supplementary player called Nature, destined to choose randomly the uncertain quantities, such as, for instance, the state of the game, the payoffs, or the type of the player. Concerning UG, GT is mainly interested in the expected payoff of each player, id est, in the mean of the payoffs considered as random variables. UQ is mainly interested in the probability distributions of the payoffs – which may be used to evaluate the means but may also be used to implement other decision rules. The equivalent of the Nash equilibrium is the Equal Opportunity situation, where all the players have the same probability of obtaining a given non-negative payoff (often zero) – notice that equal opportunity does not mean that all the players have the same expected payoff: this last situation referred as Egalitarian Solution. In the sequel, we examine some classical games from both the points of view. Evolutionary Games(EG) result from the application of GT to biology and look for models describing the evolution of populations having different strategies of survival (see, for instance, Lewontin, 1961; Maynard-Smith & Price, The logic of animal conflict, 1973; Maynard-Smith, 1982). In EG, payoffs are interpreted as a measure of the adaptation – a measure of fitness to the environment. The main tool in EG is the replicator equation, which describes the evolution of a population formed by groups having different strategies. Replicator equations are generated from Mathematical Games by considering the mean payoffs: groups having mean payoffs superior to the general mean are more adapted, have a higher fitness and, so, tend to increase, while groups having payoffs inferior to the general mean are less adapted, have a lower fitness, and tend to decrease. We shall examine replicator equations connected to the classical games considered. In EG, uncertainty arises by several ways: for instance, initial data are often unknown, and payoffs must be estimated.

7.2

A Simplified Odds and Evens Game

Let us consider a simplified version of the game Odds/Evens (OE): a game for two players, where both the players reveal simultaneously a number of fingers and Player 1 wins if the total number of fingers revealed is even, while Player 2 wins if the result is odd. Assume that the winner receives 1$, paid by the loser. This game is UG, since the choices of the players are uncertain.

594

7

Table 7.1 Matrix representation of the coin game

Player 1 O ( p1) E (q1 = 1 - p1)

Player 2 O ( p2) (1, -1) (-1, 1)

UQ in Game Theory

E (q2 = 1 - p2) (-1, 1) (1, -1)

Basically, each player must decide to reveal an odd or even number of fingers: the result will be defined by the sum of the fingers revealed by each player: Player 1 wins if both the players make the same choice (odd or even) and Player 2 wins if they make different choices (one odd, the other even). Let us denote O an odd result and E an even one. Let Ci 2 {O, E} be the choice of Player i, i = 1, 2. Then, the possible moves are the couples (C1, C2): m 2 {(O, O), (O, E), (E, O), (E, E)}. We have u(O, O) = (1, -1), u(O, E) = (-1, 1), u(E, O) = (-1, 1), u(E, E) = (1, -1). It is a zero-sum static game with random moves. Let us denote by pi the probability of revealing an odd number of fingers for Player i, i = 1, 2: for a completely random strategy, pi = 12 , but, in practice, a deviation can be observed – there is some uncertainty about pi, which is unknown in practice. The game may be represented in normal form as shown in Table 7.1. Let Wi be the winnings of Player i, i = 1, 2. From this Table, we see that: • If Player 1 chooses O, he wins with probability p2, so that P(W1 = 1| C1 = O) = p2, P(W1 = - 1| C1 = O) = q2 = 1 - p2. • If Player 1 chooses E, he loses with probability p2, so that P(W1 = - 1| C1 = E) = p2, P(W1 = 1| C1 = E) = q2 = 1 - p2. • If Player 2 chooses O, he loses with probability p1, so that P(W2 = - 1| C2 = O) = p1, P(W2 = 1| C2 = O) = q1 = 1 - p1. • If player 2 chooses E, he wins with probability p1, so that P(W2 = 1| C2 = E) = p1, P(W2 = - 1| C2 = E) = q1 = 1 - p1. Then, we have PðW 1 = 1Þ = PðW 1 = 1jC 1 = OÞPðC 1 = OÞ þ PðW 1 = 1jC1 = E ÞPðC1 = E Þ, so that PðW 1 = 1Þ = π = p1 p2 þ q1 q2 = ð1 - p1 Þ þ ð2p1 - 1Þp2 : Analogously, PðW 1 = - 1Þ = p1 q2 þ q1 p2 = p1 - ð2p1 - 1Þp2 = 1 - π: Thus, the mean wins of Player 1 are EðW 1 Þ = PðW 1 = 1Þ - PðW 1 = - 1Þ = ð2p1 - 1Þð2p2 - 1Þ = 2π - 1 = μ:

7.2

A Simplified Odds and Evens Game

595

and   E W 21 = PðW 1 = 1Þ þ PðW 1 = - 1Þ = 1, so that the variance of W1 is V ðW 1 Þ = 1 - ð2π - 1Þ2 = β: Analogously, PðW 2 = 1Þ = PðW 1 = - 1Þ = 1 - π; PðW 2 = - 1Þ = PðW 1 = 1Þ = π; E ðW 2 Þ = - EðW 1 Þ = 1 - 2π: V ðW 2 Þ = V ðW 1 Þ = 1 - ð2π - 1Þ2

7.2.1

GT Strategies When p = ( p1, p2) Is Known

Let us consider Player 1: his strategy is defined by the probability p1 of choosing O: we will examine below some possible strategies for its choice. The game may be put into a Matrix Form, as indicated in Table 7.1. The game can also be represented by a tree. From the point of view of a given player, the tree starts by his own decisions and finishes by the payoffs. For instance, the tree corresponding to the point of view of Player 1 is shown in Fig. 7.1. Tree representation is a practical tool for the analysis of some strategies, by performing a reverse analysis on the tree. For instance, let us consider the classical minimax and maximin strategies:

Fig. 7.1 Representation of the game Odds/Evens by a tree. Point of view of Player 1

596

7

UQ in Game Theory

• Minimax: the minimax strategy consists in minimizing the maximum loss. From the tree, we quick obtain the result by reverse analysis (see Fig. 7.2). Since the maximum loss is -1 for any decision, it is indifferent to the choice O or E. • Maximin: the maximin strategy consists in maximizing the minimal payoff. Again, this strategy is indifferent to the choice O or E, since the maximum winning is 1 in each case. Trees are also useful for the evaluation of means (Fig. 7.3) and probabilities. For instance, from Fig. 7.4, we have PðW 1 = - 1Þ = 1 - π, PðW 2 = 1Þ = 1 - PðW 2 = - 1Þ = π: Other classical strategies in Game Theory are

Fig. 7.2 Using the tree to find a Minimax/Maximin strategy for Player 1 in the game Odds/Evens

Fig. 7.3 Using the tree to evaluate the expected payoff of Player 1

7.2

A Simplified Odds and Evens Game

597

Fig. 7.4 Using the tree to evaluate P(W1 = - 1)

• The fixed O strategy: Player 1 always choose O – then, p1 = 1. Then, π = p2 and the expected value of his payoff is μ = 2p2 - 1, with a variance equal to β = 1 - (2p2 - 1)2. For Player 1, the probability of winning 1 $ is π = p2. This strategy is interesting if p2 > 1/2 : in this case, it maximizes P(W1 = 1) and E(W1). • The fixed E strategy: Player 1 always choose E – then, p1 = 0. Then, π = 1 - p2 and the expected value of his payoff is μ = 1 - 2p2, with a variance equal to β = 1 - (2p2 - 1)2. For Player 1, the probability of winning 1$ is π = 1 - p2. Such a strategy is interesting if p2 < 1/2: analogously, it maximizes P(W1 = 1) and E(W1). • The uniformly random strategy: choose O or E at random, with equal probability. Then, p1 = 12 : the expected value of his payoff is μ = 0, with a variance β = 1. For Player 2, the probability of winning 1$ is π = 12. Let us try to build a mixed strategy corresponding to a Nash’s equilibrium. In Game Theory, a mixed Nash equilibrium is determined by making the players indifferent to the choice among the strategies. For instance, let us denote W 1O ,W 1E the expected winnings of Player 1 when Player 2 (and not Player 1 himself) chooses O or E, respectively. We have W 1O = p1 - ð1 - p1 Þ = 2p1 - 1, W 1E = - p1 þ ð1 - p1 Þ = 1 - 2p1 : Player 1 is indifferent to the result when W 1O = W 1E , so that p2 = 1/2 – this corresponds to the uniformly random strategy. In this case, the expected payoff is zero for all the strategies and all the players. We can make the same analysis for Player 2: W 2O = W 2E corresponds to 1 - 2p2 = 2p2 - 1, id est, p2 = 1/2 – a purely random strategy. In this game, equal opportunity corresponds to P(W1 = 1) = P(W2 = 1), so that π = 1 - π ⟹ π = 1/2 ⟹ μ = 0. Analogously, an Egalitarian solution corresponds to E(W1) = E(W2) ⟹ μ = - μ ⟹ μ = 0 ⟹ π = 1/2. Thus, in both the cases,

598

7

UQ in Game Theory

ð2p1 - 1Þð2p2 - 1Þ = μ = 0, so that p1 = 1/2 or p2 = 1/2: equal opportunity and Egalitarianism are achieved if and only if one of the players adopts a purely random strategy.

7.2.2

Strategies When p Is Unknown

The preceding analysis assumes the knowledge of p = ( p1, p2), what is, in practice, unrealistic, since each player does not know the strategy of the other Player. For instance, if Player 2 has not any information about the value of p1, how can he define a strategy? If the game is static, id est, if the play consists in a single move, the definition of a strategy is possible only if Player 2 has some information about p1. Otherwise, the only option is a random choice. If the game is dynamic, id est, a play consists in several moves and p1 is unknown, Player 2 may use the results of the previous moves to define a strategy. Indeed, this game becomes BG when the value of p1 is unknown: it is a lack of information and the value of p1 assumed by Player 2 is his belief. As previously observed, it may be updated from the results of the preceding moves. For instance, Player 2 may use a Bayesian procedure – notice that Bayesian corrections are destined to reduce the uncertainty and are not effective to make any correction if a single value of p1 is given: to use a Bayesian approach, it is necessary to assume a probability distribution for p1 – the prior. For instance, we may assume a uniform  distribution on 12 - δ, 12 þ δ , 12 > δ > 0 and apply the Bayes approach to get a new distribution – the posterior. However, in this simple game, it is possible to define other strategies, simpler to implement. Let us examine some of them. If the game is repeated n times, Player 2 has payoffs W 12 , . . . ,W n2 and his total payoff is Sn2 = W 12 þ . . . þ W n2 . Notice that the possible values of Sn2 are {n, n 2, . . ., n mod 2, -n mod 2, . . ., -n + 2, -n}. For instance: if n = 2, then the possible values are {2, 0, -2}. For n = 3, the possible values are {3, 1, -1,-3}.  In classical n GT, Player 2 looks for the maximum of the expected value E S   2 . In the UQ approach, Player 2 appears as interested in maximizing P Sn2 ≥ s – the probability of a minimal winning s, defined by the Player – in general s = 0 (non-negative payoff). In UQ, the analogous of the Nash equilibrium is equal opportunity – the situation where the probabilities of winning a non-negative  payoff  are equal for all the players. In this simple game, it corresponds to P Sn2 ≥ 0 = 1=2 . The equal opportunity for a single move corresponds to π = 1/2, so that p2 = 1/2 – the uniformly random strategy. If Player 2 keeps the same strategy along all the dynamic play, the distribution of 1þW i Sn2 is analogous to a binomial one. Indeed, let Y i = 2 2 : Yi is a Bernoulli variable such that P(Yi = 1) = 1 - π. Thus, Zn = Y1 + . . . + Yn is a Binomial variable B ðn, π Þ. Since W i2 = 2 Y i - 1, we have Sn2 = 2Z n - n , so that

7.2

A Simplified Odds and Evens Game

P



Sn2

599

    nþk n-k nþk n = nþk π 2 ð1 - π Þ 2 =k =P Z = 2 2  n  n E S2 = nð1 - 2π Þ, V S2 = 4nπ ð1 - π Þ 

The probability of winning a non-negative payoff (at least 0$) is       n n P Sn2 ≥ 0 = P Z n ≥ = 1 - P Zn < 2 2  n n  n n Equal opportunity means that P Z ≥ 2 = P Z ≤ 2 , id est, that n=2 is a median of Zn. Thus, n(1 - π) = n/2, so that π = 1/2 and, again, p2 = 1/2 – as in the preceding, if one of the players adopts a uniformly random strategy, he produces equal opportunity independently of the other player. If Player 2 does not want to offer an equal opportunity to Player 1, he may choose a different strategy. For instance, he may bet on an eventual defect in the implementation of the strategy of Player 1 – a small deviation in the real value of p1 when compared to the value defined by the strategy of Player 1. A first simple way to define such a strategy consists in using the previous choices of Player 1. For instance, Player 2 can choose to reveal the same result as the preceding choice of Player 1 (cyclic strategy) or the contrary of this same choice (anticyclic strategy). In this case, the game becomes a Markov chain describing the choices of the players. The chain has four states: {s1 = (O, O), s2 = (O, E), s3 = (E, O), s4 = (E, E)}. For the cyclic strategy, the transition matrix is 0

p1

B 0 B M cyc = B @ 1 - p1 0

p1

0

0

0

p1

p1

1 - p1 0

0 1 - p1

0 1 - p1

1 C C C: A

For the anticyclic strategy, the transition matrix is 0

0 B p B 1 M anti = B @ 0 1 - p1

0 p1

p1 0

0 1 - p1

1 - p1 0

p1 0

1

C C C: 1 - p1 A 0

These chains have stationary distributions: Pscyc =



 p21 , p1 ð1 - p1 Þ, p1 ð1 - p1 Þ, ð1 - p1 Þ2 ,

600

7

UQ in Game Theory

  Psanti = p1 ð1 - p1 Þ, p21 , ð1 - p1 Þ2 , p1 ð1 - p1 Þ , In addition, 0

p21

B p ð1 - p Þ B 1 8k ≥ 2 : M kcyc = B 1 @ p1 ð 1 - p 1 Þ ð 1 - p1 Þ 2

p21

1

p21

p21

p1 ð 1 - p1 Þ p1 ð 1 - p1 Þ

p 1 ð 1 - p1 Þ p 1 ð 1 - p1 Þ C C C: p 1 ð 1 - p1 Þ p 1 ð 1 - p1 Þ A

ð1 - p1 Þ2

ð 1 - p1 Þ 2

ð1 - p 1 Þ2

0

p1 ð1 - p1 Þ p1 ð1 - p1 Þ B p21 p21 B 8k ≥ 2 : M kanti = B @ ð 1 - p1 Þ 2 ð 1 - p1 Þ 2 p1 ð1 - p1 Þ p1 ð1 - p1 Þ

1 p 1 ð 1 - p1 Þ p 1 ð 1 - p1 Þ C p21 p21 C C: ð 1 - p1 Þ 2 ð 1 - p1 Þ 2 A p 1 ð 1 - p1 Þ p 1 ð 1 - p1 Þ

Assume that the initial choice of Player 2 is O with probability p2. Then, the initial state of the chain is P0 = ( p2p1, (1 - p2)p1, p2(1 - p1), (1 - p2)(1 - p1)). The state at step k ≥ 1 is Pkcyc = M kcyc P0 = Pscyc ,Pkanti = M kanti P0 = Psanti : Thus, the mean winning of Player 2 at each step is μcyc = - ð1 - 2p1 Þ2 , μanti = ð1 - 2p1 Þ2 For Player 2, the probability of winning 1$ is     P W 2cyc = 1 = 2p1 ð1 - p1 Þ, P W 2anti = 1 = p21 þ ð1 - p1 Þ2 Consequently: 2 • The cyclic  strategyleads to μcyc = - (2p1 - 1) ≤ 0, and a probability of winning

1 $ of P W 2cyc = 1 = 2p1 ð1 - p1 Þ ≤ 1=2.

• The anticyclic choice leads to μanti = (2p1 - 1)2 ≥ 0, and a probability of winning   1 $ of P W 2anti = 1 = p21 þ ð1 - p1 Þ2 ≥ 1=2. Based on these results, Player 2 must choose the anticyclic strategy: his expected payoff is non-negative, and the probability of a positive payoff is at least 1/2. The cyclic strategy leads to a non-positive expected payoff and a probability of positive payoff inferior to 1/2.

7.2

A Simplified Odds and Evens Game

601

  Table 7.2 P Sn2 ≥ 0 : Anticyclic probability of winning a non-negative payoff for Player 2 p1 0.4 0.45 0.49 0.5 0.51 0.55 0.6

n 10 0.67 0.64 0.62 0.62 0.62 0.64 0.67

20 0.66 0.61 0.59 0.59 0.59 0.61 0.66

  Table 7.3 E Sn2 : Anticyclic expected payoff for Player 2

30 0.66 0.59 0.57 0.57 0.57 0.59 0.66

40 0.66 0.59 0.56 0.56 0.56 0.59 0.66

n 10 0.4 0.1 0.0 0.0 0.0 0.1 0.4

p1 0.4 0.45 0.49 0.5 0.51 0.55 0.6

20 0.8 0.2 0.0 0.0 0.0 0.2 0.8

50 0.66 0.58 0.56 0.56 0.56 0.58 0.66

30 1.2 0.3 0.0 0.0 0.0 0.3 1.2

100 0.69 0.58 0.54 0.54 0.54 0.58 0.69

40 1.6 0.4 0.0 0.0 0.0 0.4 1.6

50 2.0 0.5 0.0 0.0 0.0 0.5 2.0

1000 0.90 0.64 0.52 0.51 0.52 0.64 0.90

100 4.0 1.0 0.0 0.0 0.0 1.0 4.0

1000 40.0 10.0 0.4 0.0 0.4 10.0 40.0

10,000 1.00 0.84 0.52 0.50 0.52 0.84 1.00

10,000 400.0 100.0 4.0 0.0 4.0 100.0 400.0

We can analyze the effects of deviation in the real value of p1 by considering different values: recall that   P Sn2,anti = k =



 n-k nþk 2 ,p = p2 þ ð1 - p Þ2 2 1 nþk p ð1 - pÞ 1 n 2

    Tables 7.2 and 7.3 show, respectively, the values of P Sn2 ≥ 0 and E Sn2 for some values of p1 and n, when Player 2 uses strategy anticyclic: they increase when p1 deviates from 1=2. A second simple strategy consists in estimating p from the previous results and applying the anticyclic strategy with the estimated value. Let On be the number of times where Player 1 has chosen O in the first n moves. Recall p1,n  that the  estimator b for p1 after n moves is b p1,n = On =n. The probability αn = P b p1,n < 1=2 = PðOn < n=2Þ is given in Table 7.4. Notice that On is binomially distributed B ðn, p1 Þ. If Player 2 uses the estimator b p1,n , then, at the move n + 1, he will choose E if 1 1 b p1,n < =2. Thus, p1,n > =2, and O if b     P W nþ1 = 1 = θn = p1 þ αn ð1 - 2p1 Þ, P W nþ1 = - 1 = 1 - θn , 2 2     E W nþ1 = ð1 - 2p1 Þð2αn - 1Þ, V W nþ1 = 1 - ð2p1 - 1Þ2 ð2αn - 1Þ2 : 2 2

602

7

UQ in Game Theory

  pn < 12 Table 7.4 αn = P b p1 0.4 0.45 0.49 0.5 0.51 0.55 0.6

n 10 0.63 0.50 0.40 0.38 0.35 0.26 0.17

20 0.76 0.59 0.45 0.41 0.38 0.25 0.13

30 0.82 0.64 0.47 0.43 0.38 0.23 0.10

40 0.87 0.68 0.49 0.44 0.39 0.21 0.07

50 0.90 0.72 0.50 0.44 0.39 0.20 0.06

100 0.97 0.82 0.54 0.46 0.38 0.13 0.02

1000 1.00 1.00 0.73 0.49 0.25 0.00 0.00

10,000 1.00 1.00 0.98 0.50 0.02 0.00 0.00

50 0.83 0.63 0.57 0.56 0.56 0.68 0.87

100 0.94 0.70 0.55 0.54 0.55 0.74 0.96

1000 1.00 1.00 0.59 0.51 0.60 1.00 1.00

10,000 1.00 1.00 0.94 0.50 0.94 1.00 1.00

  Table 7.5 P Sn2 ≥ 0 for strategy based on estimator b pn p 0.4 0.45 0.49 0.5 0.51 0.55 0.6

n 10 0.67 0.63 0.62 0.63 0.63 0.66 0.73

20 0.70 0.61 0.59 0.58 0.59 0.66 0.77

30 0.76 0.61 0.57 0.58 0.58 0.66 0.81

40 0.80 0.62 0.56 0.56 0.57 0.67 0.84

Table 7.6 Expected payoff for Player 2 for strategy based on the estimator b pn p 0.4 0.45 0.49 0.5 0.51 0.55 0.6

n 10 0.36 -0.03 0.00 -0.01 0.06 0.34 0.99

20 1.35 0.20 -0.02 -0.07 0.11 0.73 2.28

30 2.70 0.57 -0.06 -0.02 0.08 1.27 3.67

40 4.16 0.94 0.09 0.01 0.11 1.69 5.33

50 5.79 1.37 -0.11 0.13 0.26 2.30 7.09

100 14.72 4.28 0.17 -0.03 0.29 5.44 16.28

1000 194.57 89.68 6.21 -0.48 7.02 91.19 196.22

10,000 1995.30 989.94 152.96 -0.11 155.88 991.10 1995.30

  For this strategy, the exact evaluation of P Sn2 ≥ 0 involves an expensive calculation when n increases. In Table 7.5, we show the values furnished by a Monte Carlo simulation with 104 runs. The values of E Sn2 are shown in Table 7.6. An alternative strategy based on the previous results consists in using statistical hypothesis testing. For example, Player 2 can test both the hypothesis H < : p ≤ 12 and H > : p ≥ 12. If both are accepted or both are rejected, Player 2 chooses uniformly random between O and E. If one is accepted and the other is rejected, he chooses the anticyclic strategy: either E if H> is accepted or O if H< is accepted. Notice that the decision for this kind of test is based on the number of O previously observed. Indeed, On is binomially distributed B ðn, pÞ. Thus, a decision concerning H< with a

7.2

A Simplified Odds and Evens Game

Table 7.7 Typical values of o


603

α 0.01 0.05 0.1 0.25

n 10 9 8 7 6

20 15 14 13 12

30 21 19 19 17

40 27 25 24 22

50 33 31 30 27

100 62 58 56 53

1000 537 526 520 511

10,000 5116 5082 5064 5034

α 0.01 0.05 0.1 0.25

n 10 1 2 3 4

20 5 6 7 8

30 9 11 11 13

40 13 15 16 18

50 17 19 20 23

100 38 42 44 47

1000 463 474 480 489

10,000 4884 4918 4936 4966

significance level α< may be taken by considering the maximum number o< such that P(On > o) = α> and accepted otherwise. Typical values of o< and o> are exhibited in Tables 7.7 and 7.8. These values are furnished by the function qbinom of R. This strategy is not as efficient  as the previous ones. A Monte Carlo simulation of 106 plays has led to a P Sn2 ≥ 0 ≈ 0:5 and an expected payoff of zero, for several α from 0.01 to 0.5 and values of n going from 10 to 104.

7.2.3

Strategies for the Stochastic Game

The game becomes stochastic (SG) if the probabilities change at random. For instance, imagine that the players choose a different value of pi at each move. For instance, they can choose this value at random according to a given distribution. In this case, we say that the special player Nature chooses the probabilities and, thus, defines the types of the Players. Analogously to the preceding situation, pi is a random variable having a distribution – which may be known or unknown. Since a draw is made at each move, Nature furnishes a different value pki for each move. Then,         P W k1 = 1 = π k = 1 - pk1 þ 2pk1 - 1 pk2 , P W k2 = 1 = 1 - π k : For a given pk and a given strategy pk2 , the expected gain of Player 2 is      E W k2 jp1 = pk1 , p2 = pk2 = μk = 1 - 2π k = 2pk1 - 1 1 - 2pk2 :

604

7

UQ in Game Theory

The expected gain depends on the distribution of p = ( p1, p2) and of the strategy of the players. The equivalent of the Nash equilibrium on a single move is determined by setting π k = 1=2 – possible solutions are pk1 = 1=2 or pk2 = 1=2, i.e., the strategy of a random uniform choice of one of the players. Analogously, the equal opportunity and the Egalitarian solution correspond to π k = 1=2 and lead to the same results. In this case, the anticyclic strategy is more difficulty to implement, since pk1 varies with k. A solution is to use p1 = E ðp1 Þ and take pk2 = 1 - p1 . Then,     π k = p1 þ pk1 - 2pk1 p1 , E W k2 jp1 = pk1 = 2pk1 - 1 ð 2p1 - 1Þ: Then,     1 E π k = 2p1 ð1 - p1 Þ ≤ , E W k2 = ð2p1 - 1Þ2 ≥ 0 2 If the game is repeated n times, we may consider again the payoff an analysis similar to the preceding Sn2 = W 12 þ . . . þ W n2 of Player 2 and  make  ones. From the preceding, we have E Sn2 = nð2p - 1Þ2: analogously to the previous situation, the expected value increases with n when p1 ≠ 1=2. The distribution of Sn2 is not binomial-like, since on the distribution of p1, what makes difficulty  it depends  the evaluation of P Sn2 ≥ 0 . To solve the difficulty, we can use a Monte Carlo simulation to estimate such a probability. The results obtained this way are shown in Table 7.9 – the calculations assume that p is uniformly distributed on ðp - δ, p þ δÞ ðδ > 0Þ.

  Table 7.9 P S2n ≥ 0 for the anticyclic strategy in the stochastic game, evaluated by Montecarlo simulation (1e6 realizations) δ p 0.01 n = 10 0.4 0.55 0.45 0.51 0.49 0.50 0.5 0.50 0.51 0.50 0.55 0.51 0.6 0.55 n = 1000 0.4 0.89 0.45 0.62 0.49 0.50 0.5 0.50 0.51 0.50 0.55 0.62 0.6 0.89

0.05

0.1

0.15

0.2

0.55 0.51 0.50 0.50 0.50 0.51 0.55

0.54 0.51 0.50 0.50 0.50 0.51 0.55

0.55 0.51 0.50 0.50 0.50 0.51 0.55

0.55 0.51 0.50 0.50 0.50 0.51 0.54

0.89 0.62 0.50 0.50 0.50 0.62 0.89

0.89 0.62 0.51 0.50 0.51 0.62 0.89

0.89 0.62 0.50 0.50 0.50 0.62 0.89

0.89 0.62 0.51 0.50 0.50 0.62 0.89

δ p 0.01 n = 100 0.4 0.65 0.45 0.54 0.49 0.50 0.5 0.50 0.51 0.50 0.55 0.54 0.6 0.65 n = 10, 000 0.4 1.00 0.45 0.84 0.49 0.52 0.5 0.50 0.51 0.52 0.55 0.84 0.6 1.00

0.05

0.1

0.15

0.2

0.65 0.54 0.50 0.50 0.50 0.54 0.65

0.65 0.54 0.50 0.50 0.50 0.54 0.65

0.65 0.54 0.50 0.50 0.50 0.54 0.65

0.65 0.54 0.50 0.50 0.50 0.54 0.65

1.00 0.84 0.52 0.50 0.52 0.84 1.00

1.00 0.84 0.52 0.50 0.52 0.84 1.00

1.00 0.84 0.52 0.50 0.52 0.84 1.00

1.00 0.84 0.52 0.50 0.52 0.84 1.00

7.2

A Simplified Odds and Evens Game

605

In practice, the value of p1 may be unknown: Player 2 assumes a value which is a belief. Analogously to the preceding, Player 2 can use an estimator b p1,n = On =n to p1,k þ pk1- 2pk1 b p1,k . If move k is take a decision at move n + 1. In this case, π k = b independent from the past moves, we have again E π k = 2p1 ð1 - p1 Þ and   E W k2 = ð2p1 - 1Þ2 .

7.2.4

Replicator Dynamics

As observed in the introduction, Mathematical Games may be used for the analysis of population dynamics: the payoffs give the results of a competition between these populations for the resources and replicator equations describe the evolution of the population (see, for instance, Cressman & Tao, 2014; Nowak, 2006). These equations may be interpreted as generalizations of the classical Lotka–Volterra equations (Lotka, 1910, 1920; Volterra, 1926, 1928). A given game may be interpreted in terms of populations dynamics by different ways. For instance, the game represented in Table 7.1 may be interpreted as follows: • As a first approach, we may consider the population as formed by two subpopulations which adopt the strategies O or E. Such a lecture is possible when the mean payoffs of the strategies are the same for both the players, what is the case of symmetric games. A possible refinement consists in considering that the subpopulations adopt the behavior of each player with a given probability; • A second approach may consider the population as formed by subpopulations having the behavior of the Players: subpopulation i has the behavior of Player i, including the probabilities of each strategy; • Finally, we may consider four subpopulations: subpopulations 1 and 2 are formed by internal subpopulations adopting strategies O or E, with the probabilities under consideration. The replicator equation describes the evolution of a subpopulation by its access to the resources represented by the mean payoffs: the rate of variation of a subpopulation is connected to the difference between its mean payoff and the mean payoff of the whole population. According to this idea, a subpopulation which has a mean payoff larger than the mean payoff of the overall population will increase; in the inverse situation, it will decline. In other words, if the mean payoff of the overall population is f and the mean payoff of the subpopulation i is fi, the rate of variation of the subpopulation i depends on the difference f i - f : it declines if f i < f and increases if f i > f . For instance, let us consider the second approach: a population has a proportion x1 of elements adopting the behavior of Player 1 and a proportion x2 of elements having the behavior of Player 2. We have x1 + x2 = 1, with 0 ≤ x1, x2 ≤ 1 – thus, we may consider x2 as main variable and determine x1 = 1 - x2. Using the payoffs presented in Table 7.1, the mean payoff received by an element of the subpopulation x2 is

606

7

UQ in Game Theory

f 2 = ð1 - 2p1 Þð2p2 - 1Þ Since the game is a zero-sum game, the mean payoff of an element of the subpopulation x1 is f1 = - f2. On the overall population, the mean is f = x1 f 1 þ x2 f 2 = ð2x2 - 1Þf 2 : The replicator equation reads as   dx2 = x2 f 2 - f = 2x2 ð1 - x2 Þf 2 dt Such an equation is a logistic equation (Verhulst, 1845, 1847). It has an explicit solution for 0 < x2(0) < 1: x2 ðt Þ =





1 x2 ð0Þ

1

 - 1 e - 2f 2 t

We observe that x2 grows up if f2 > 0 and decreases if f2 < 0. The population is stable if f2 = 0. Indeed, there are 3 steady states: x2 = 0, x2 = 1, f2(x2) = 0. x2 = 0 corresponds to the extinction of subpopulation x2 : the whole population is formed by x1; x2 = 1 corresponds to the extinction of subpopulation x1 : the whole population is formed by x2; f2 = 0 corresponds to the situation where p = p2 = 12 . When explicit solutions are not available, the stability of these steady states may be analyzed by the standard methods of dynamical systems: for instance, a linearization in a neighborhood of the solution or the determination of the eigenvalues of the matrix of the linearized coefficients. Here, stability depends on the sign of f2. Now, let us consider a different lecture of the game: assume that the whole population is formed of individuals applying a fixed strategy O or E. Then, let us denote xO the proportion of individuals applying the strategy O, while xE = 1 - xO is the proportion of individuals applying the strategy E. When two individuals compete for a resource, they play as Player 1 with a given probability η > 0 and with probability 1 - η as Player 2. Then, the expected payoff of an individual adopting the strategy O is f O = ηðxO - xE Þ þ ð1 - ηÞðxE - xO Þ = ð2η - 1Þð2xO - 1Þ: Since the game is zero-sum, the expected payoff of an individual adopting the strategy T is fE = - f0. The mean payoff of the whole population is f = xO f O þ xE f E = ðxO - xE Þf O = ð2η - 1Þð2xO - 1Þ2 :

7.2

A Simplified Odds and Evens Game

607

In this case, the replicator equation reads as   dxO = xO f O - f = 2ð2η - 1ÞxO ð1 - xO Þð2xO - 1Þ dt For η = 1=2, the population is stable. For η ≠ 1=2, we have three equilibrium states: xO = 0, xO = 1, xO = 1=2. Stability depends on the sign of fO. To simulate this equation, we use a right-side member of the differential equation as follows:

Indeed, the growth of xO depends upon the sign of fO: xO grows up if fO > 0 ðeither η > 1=2, xO > 1=2or η < 1=2, xO < 1=2Þ and decreases if fO < 0 ðeither η < 1=2, xO > 1=2or η > 1=2, xO < 1=2Þ. When η < 12 , the system tends to the equilibrium with xO = 1=2. For η > 1=2, one of the groups goes to extinction, except if there is an exact equilibrium at the start (xO = 1/2). We may simulate this equation under R, using the methods previously shown for differential equations. For instance, we use RK4’s method to generate Figs. 7.5, 7.6, 7.7, 7.8, 7.9, and 7.10.

Fig. 7.5 Trend to Equilibrium when η < 12: for 0 < xO(0) < 1, the proportion of species O tends to 1=2. xO = 0 and xO = 1 are equilibrium points: one species is extinct and the other forms the whole population

608

7

UQ in Game Theory

Fig. 7.6 Trend to Extinction when η > 12: for 0 < xO ð0Þ < 1=2, the proportion of species O tends to 0; for 1=2 < xO ð0Þ < 0, the proportion of species O tends to 1, so that species E tends to extinction. xO = 0 and xO = 1 are equilibrium points: where one species is extinct and the other forms the whole population, xO = 0.5 is an equilibrium where both the species are present, with a half of the total population each

Fig. 7.7 Orbits ðxO , pO = x_ O Þ in the phase space for η = 0.4. Except for xO(0) 2 {0, 1}, all the curves tend to (xO = 0.5, p0= 0), which corresponds to the equilibrium

7.2

A Simplified Odds and Evens Game

609

Fig. 7.8 Orbits (xO, xE) for η = 0.4. Except for xO(0) 2 {0, 1}, all the curves tend to (xO = 0.5, xE= 0.5), which corresponds to the equilibrium

Fig. 7.9 Orbits (xO, xE) for η = 0.7. For xO(0) < 0.5 the curves tend to (xO = 0, xE= 1), i.e., the extinction of species O. For xO(0) > 0.5 the curves tend to (xO = 1, xE= 0), i.e., the extinction of species E

610

7

UQ in Game Theory

Fig. 7.10 Orbits ðxO , pO = x_ O Þ in the phase space for η = 0.7. For xO(0) < 0.5 the curves tend to (xO = 0, pO= 0), i.e., the extinction of species O. For xO(0) > 0.5 the curves tend to (xO = 1, pO= 0), i.e., the extinction of species E

Let us introduce other possible lecture of the game: now, we consider two species corresponding to Players 1 and 2 (proportions x1 and x2 = 1 - x1, respectively). Each subpopulation is formed of individuals applying a fixed strategy O or E. The internal proportions are denoted xiO (species i, strategy O), xiE = 1 - xiO (species i, strategy E). Then, the global population is divided according to proportions xiO xi (species i, strategy O), xiE xi (species i, strategy E). We have x1O x1 þ x1E x1 þ x2O x2 þ x2E x2 = 1. In this case, the mean payoffs are f 1O = x2 ð2x2O - 1Þ, f 1E = x2 ð1 - 2x2O Þ, f 1 = ð2x1O - 1Þð2x2O - 1Þx2 f 2O = x1 ð1 - 2x1O Þ, f 2E = x1 ð2x1O - 1Þ, f 2 = ð1 - 2x1O Þð2x2O - 1Þx1 Then, f = 0 and the replicator dynamics leads to d ðx Þ = x1 f 1 = x1 ð1 - x1 Þð2x1O - 1Þð2x2O - 1Þ, dt 1

7.2

A Simplified Odds and Evens Game

611

  d ðx1O Þ = x1O f 1O - f 1 = 2x1O ð1 - x1 Þð1 - x1O Þð2x2O - 1Þ dt   d ðx Þ = x2O f 2O - f 2 = 2x2O x1 ð1 - x2O Þð1 - 2x1O Þ dt 2O In this case, the right-side member of the differential equation can be evaluated as follows:

Analogous analysis may be performed in this case (see Figs. 7.11 and 7.12). The mean evolution of the system and confidence intervals for orbits or trajectories may be obtained by using the methods presented in Chap. 3.

Fig. 7.11 Trajectories of some variables

612

7

UQ in Game Theory

Fig. 7.12 Orbits ðx1 , p1 = x_ 1 Þ for different values of x1(0) Table 7.10 The normal form of the prisoner’s dilemma

7.3

Player 1 C1 ( p1) D1 (q1 = 1 - p1)

Player 2 C2 ( p2) (L, L ) (H, V )

D2 (q2 = 1 - p2) (V, H ) (M, M )

The Prisoner’s Dilemma

A classical game is prisoner’s dilemma: two robbers are arrested and must choose between confess (C) or deny (D). If both confess, they will be sentenced to a light sentence of L years in prison; if one confesses and the other not, the one which has not confessed will be sentenced to a heavy sentence of H years, while the one who has confessed will be sentenced to a very light sentence of V years; if no one confess, both will have a minimal sentence M. In general, M < V < L < H. Each prisoner must take a decision. In practice, a prisoner has also to evaluate the probabilities of the decision taken by the other prisoner. This game may be put into a Matrix Form as in Table 7.10 If the prisoners can communicate (cooperative game), they will choose the strategy (D1, D2) that ensures to both the minimal sentence. If they do not communicate (non-cooperative game), the situation is more difficulty: they must evaluate the probability pi that prisoner i confess. If a prisoner is confident in his partner, he evaluates this probability as extremely low; while if he is not so confident in his partner, he will evaluate this probably with higher value.

7.3

The Prisoner’s Dilemma

613

(D1, D2) is a Nash equilibrium for this game: any solitary deviation of a player results in a loss for himself – but its practical implementation cannot be made without communication and strong mutual confidence between the players. A second Nash equilibrium is (C1, C2): if one player confesses, the other must confess – otherwise his loss will be larger. The existence of a mixed Nash equilibrium may be investigated as previously, by making decisions indifferent. For instance, let us consider Player 1. The mean sentence for strategy C1 is SC1 = V þ p2 ðL - V Þ. For strategy D1, the mean sentence C D mix VM is SD = ΔΔVHML , 1 = M þ p2 ðH - M Þ. Indifference occurs when S1 = S1 , id est, p2 = p where ΔVM = V - M > 0, ΔHL = H - L > 0, ΔVHML = ΔVM þ ΔHL > 0: mix C Analogously, indifference between SD . If both the 2 and S2 occurs for p1 = p mix mix players follow that strategy, the mean sentence S satisfies V < S < L. Notice that the mixed strategy is not optimal. Indeed, for Player 1: P(S1 = H| D1) = p2, P(S1 = Mj D1) = q2, P(S1 = Lj C1) = p2, P(S1 = V| C1) = q2. So, P(S1 = M) = q1q2, P(S1 = V ) = p1q2, P(S1 = L) = p1p2, P(S1 = H ) = q1p2. Then, the mean sentence for Player 1 is

S1 = M þ p1 ΔVM þ p2 ðH - M Þ - p1 p2 ΔVHML : We observe that the behavior of S1 depends on the relation between p2 and the critical value pmix: for p2 > pmix, S1 decreases when p1 increases, so that its minimum corresponds to p1 = 1 (strategy C1); for p2 < pmix, S1 increases when p1 increases, so that its minimum corresponds to p1 = 0 (strategy D1). If p2 = pmix, it is indifferent to choose between C1 and D1. Analogously, the mean sentence for Player 2 is S2 = M þ p2 ΔVM þ p1 ðH - M Þ - p1 p2 ΔVHML : Thus, the behavior is analogous: for p1 > pmix, the minimum of S2 is attained at p2 = 1 (strategy C2); for p1 < pmix, its minimum is attained at p2 = 0 (strategy D2). If p1 = pmix, it is indifferent to choose between C2 and D2. Then, Player 1 may choose a strategy according to these values: • Confess: this corresponds to p1 = 1 In this case, S1 = SC1 . If p2 < pmix, this strategy maximizes S1. Otherwise, it minimizes S1. mix • Deny: this corresponds to p1 = 0. Then, S1 = SD , this strategy 1 . If p2 < p minimizes S1. Otherwise, it maximizes S1. • Apply the minimax strategy: the maximum loss is L for decision C1 and H for decision D1. The minimum of the maximal loss is L for decision C1. Thus, this strategy leads to choose C1. • Considering as a payoff the difference between the maximal sentence and the sentence received, the maximin strategy leads to the same result: indeed, the minimal payoff is H - L for C1 and 0 for D1. The maximum of the minimal payoff is H - L for decision C1. Thus, this strategy leads also to choose C1.


The situation is analogous for Player 2 (the game is symmetric). Let us adopt the point of view of UQ and analyze the probabilities. Let X1 be the sentence received by Player 1. Then

P(X1 > M) = p1 + p2 - p1p2,  P(X1 > V) = p2,  P(X1 > L) = p2 - p1p2.

Player 1 controls his own decision and may decide the value of p1. The value of p2 is chosen independently by Player 2: Player 1 must estimate p2 and take a decision based on his estimation. The decision will involve a risk of having a sentence superior to a given threshold, considered as acceptable by Player 1. Then, the strategies lead to the following results:
• Confess: this corresponds to p1 = 1. If Player 1 chooses this strategy, he is looking for the sentence V or L. We have P(X1 > M) = p2. Thus, Player 1 must have a strong confidence in Player 2: if he does not estimate p2 as small enough, he will consider the risk as excessive. Then, he may choose the betrayal: confess while expecting that Player 2 does not confess – to deceive Player 2, expecting that Player 2 will not betray.
• Deny: this corresponds to p1 = 0. If Player 1 chooses this strategy, he is looking for the minimal sentence M (the other option is the maximal sentence H). Again, Player 1 must have a strong confidence in Player 2 to estimate p2 as small enough. Otherwise, he will consider the risk as excessive and choose to confess.
• Evaluate the other as himself: the probability of C2 is the same as that of C1, which corresponds to p1 = p2. Then P(X1 > M) = 2p2 - p2², so P(X1 > M) is an increasing function of p2 with its maximum at p2 = 1. If Player 1 accepts a maximal risk of α, the maximal acceptable value of p2 is 1 - √(1 - α). For small values of α, this value is close to α/2. Thus, if Player 1 accepts a maximal risk of 0.1%, the maximal acceptable value of p2 is about 0.05%. For a maximal risk of 0.01%, the value is about 0.005%.
As we may see, either the confidence in the partner is strong, or the best strategy is to confess. If both the players make the same analysis, both will confess.

7.3.1 Replicator Dynamics

Let us consider a population formed by two species having the behavior of Players 1 (proportion x1) and 2 (proportion x2). Their mean payoffs f1, f2 are, respectively, -S1, -S2: notice that S1, S2 are positive, so, to reinterpret them as punishments, we must change their sign. The mean payoff of the whole population is

f̄ = -x1 S1 - (1 - x1)S2 = -S2 + x1(S2 - S1).


Fig. 7.13 Trend to Extinction of Species 2 when p2 - p1 < 0. It survives only if x1(0) = 0, otherwise, it disappears

Thus, the replicator equation reads as

dx1/dt = x1(1 - x1)(S2 - S1).

Since S2 - S1 = (p2 - p1)(V - H), with V - H < 0: if p2 - p1 < 0, then S2 - S1 > 0 and x1(t) → 1, while x2(t) → 0: species 2 goes to extinction (Figs. 7.13 and 7.14). If p2 - p1 > 0, then S2 - S1 < 0 and x1(t) → 0: species 1 goes to extinction (Figs. 7.15, 7.16 and 7.17). The populations are stable when p1 = p2. Again, C is the best strategy: p1 = 1 ensures the survival of species 1. The simulations use deSolve. An example of function for the evaluation of the right-hand side of the replicator differential equation is given below:
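The original listing is not reproduced here; the following is a minimal sketch of such a function, written in the form expected by deSolve (the parameter and variable names are illustrative choices, not the book's):

# Sketch (assumed names): right-hand side of dx1/dt = x1 (1 - x1) (S2 - S1)
replicator_rhs <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    S1 <- M + p1 * (V - M) + p2 * (H - M) - p1 * p2 * ((V - M) + (H - L))
    S2 <- M + p2 * (V - M) + p1 * (H - M) - p1 * p2 * ((V - M) + (H - L))
    list(x1 * (1 - x1) * (S2 - S1))
  })
}

# Example of use with deSolve and the rk4 method (illustrative values):
library(deSolve)
parms <- c(p1 = 0.2, p2 = 0.6, M = 1, V = 2, L = 5, H = 10)
out <- ode(y = c(x1 = 0.5), times = seq(0, 10, by = 0.1),
           func = replicator_rhs, parms = parms, method = "rk4")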


Fig. 7.14 Orbits (x1, p1 = ẋ1) for p2 - p1 < 0. They tend to the equilibrium point (1, 0), corresponding to the extinction of Species 2

Fig. 7.15 Trend to Extinction of Species 1 when p2 - p1 > 0. It survives only if x2(0) = 0, otherwise, it disappears


Fig. 7.16 Orbits (x1, p1 = ẋ1) for p2 - p1 > 0. They tend to the equilibrium point (0, 0), corresponding to the extinction of Species 1

Fig. 7.17 Strategy “C” ensures the survival of Species 1 for any strategy of Species 2


If we consider subspecies applying a fixed strategy C or D, the internal proportions are xiC (species i, strategy C) and xiD = 1 - xiC (species i, strategy D). Then, analogously to the preceding situation, the mean payoffs are

f1C = -V - (L - V)x2C,  f1D = -M - (H - M)x2C,  f1 = f1D + x1C(f1C - f1D),
f2C = -V - (L - V)x1C,  f2D = -M - (H - M)x1C,  f2 = f2D + x2C(f2C - f2D).

The mean payoff is f̄ = f2 + x1(f1 - f2) and the replicator dynamics leads to

dx1/dt = x1(f1 - f̄) = x1(1 - x1)(f1 - f2),
dx1C/dt = x1C(f1C - f1) = x1C(1 - x1C)(f1C - f1D),
dx2C/dt = x2C(f2C - f2) = x2C(1 - x2C)(f2C - f2D).

We may simulate this system of differential equations with R, using the methods formerly introduced. An example of function for the evaluation of the right-hand side of the replicator differential equation is given below:
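Again, the book's original listing is not reproduced here; a minimal sketch (assumed names) in the form expected by deSolve could be:

# Sketch: right-hand side of the three-equation replicator system above
replicator_sub_rhs <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    f1C <- -V - (L - V) * x2C;  f1D <- -M - (H - M) * x2C
    f2C <- -V - (L - V) * x1C;  f2D <- -M - (H - M) * x1C
    f1  <- f1D + x1C * (f1C - f1D)
    f2  <- f2D + x2C * (f2C - f2D)
    list(c(x1  * (1 - x1)  * (f1 - f2),
           x1C * (1 - x1C) * (f1C - f1D),
           x2C * (1 - x2C) * (f2C - f2D)))
  })
}

# Example of use (illustrative initial values and sentences):
library(deSolve)
y0  <- c(x1 = 0.5, x1C = 0.3, x2C = 0.2)
out <- ode(y = y0, times = seq(0, 10, by = 0.1), func = replicator_sub_rhs,
           parms = c(M = 1, V = 2, L = 5, H = 10), method = "rk4")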


Fig. 7.18 Trajectories of x1 ,x1C ,x2C : x1C ,x2C both tend to 1, so that x1D ,x2D both tend to zero. In addition, x1 tends to a stable value

For instance, using deSolve and method rk4, we obtain the results in Figs. 7.18, 7.19, and 7.20. Observe that the strategy C is the best: the subpopulations applying strategy D tend to extinction in all the cases. In addition, the system evolves to a stable equilibrium between the species: the orbits (x1, p1 = ẋ1) tend to a point where p1 = 0.

7.4 The Goalie's Anxiety at the Penalty Kick

Penalties in soccer have interested researchers in GT since (Levitt et al., 2002). In 2003, an article (Palacios-Huerta, Professionals play minimax, 2003) analyzed penalties in soccer games using GT and a large set of data. The work of Palacios-Huerta kicked off a series of research studies about the same topic: (Garicano et al., 2005; Coloma, 2007, 2012; Palacios-Huerta & Volij, 2008; Apesteguia & Palacios-Huerta, 2010; Azar & Bar-Eli, 2011; Prakash & Garg, 2014; Palacios-Huerta, 2014; Sarkar, 2017; Boczon & Wilson, 2018; Arrondel et al., 2019). In his work, Palacios-Huerta formulates the penalty kick as a game involving the goalkeeper and the kicker. Although the paper contains much information, several studies concentrate on the choice of the side to kick for the kicker and the side to plunge for the goalkeeper. In theory, there are many options: left side, center, right side; in addition, the ball can be at ground level, in the middle, or high. The paper furnishes information about the influence of the time when the penalty is kicked, about the foot of the kicker, and so on. In a


Fig. 7.19 Evolution of x1 for different initial values. Convergence to stable values is observed in all the cases

Fig. 7.20 Orbits (x1, p1 = ẋ1) for different values of x1(0). Convergence to a stable value is observed in all the cases


Table 7.11 The penalty game

                      Goalkeeper G
Kicker K              L (gL)                R (1 - gL)
L (kL)                (πLL, 1 - πLL)        (πLR, 1 - πLR)
R (1 - kL)            (πRL, 1 - πRL)        (πRR, 1 - πRR)

simplified model, the study is reduced to the choices L (left-hand side of the goalkeeper) and R (right-hand side of the goalkeeper). The compilation of data about a large number of matches furnishes estimations of the probability of scoring for the kicker, which is his payoff. The payoff of the goalkeeper is the complement of the payoff of the kicker. Let us consider the kicker K as Player 1 and the goalkeeper G as Player 2. Let gL be the probability of the goalkeeper plunging to his left-hand side and kL the probability that the kicker kicks at the same left-hand side of the goalkeeper. Thus, the game may be put into a matrix form as in Table 7.11. Typical values for the scoring probabilities are

πLL = 0.59, πRL = 0.93, πLR = 0.94, πRR = 0.71.

Since these values result from the analysis of a sample, they are uncertain, and a margin of error must be considered. Typical values for the choices are gL = kL = 0.4 – but these values may be decision variables. In the framework of GT, we may determine the Nash mixed strategy by considering the expected gain of each player. Let us denote by EGKL, EGKR the expected payoff of the Goalie when the Kicker chooses L or R, respectively. We have

EGKL = gL(πLR - πLL) + (1 - πLR),  EGKR = (1 - πRR) - gL(πRL - πRR).

To make the choice indifferent, we must take EGKL = EGKR. Thus,

gL = g̅L = (πLR - πRR) / (πLR + πRL - πRR - πLL).

Now, let us denote by EKGL, EKGR the expected payoff of the Kicker when the Goalie chooses L or R, respectively. We have

EKGL = πRL - kL(πRL - πLL),  EKGR = πRR + kL(πLR - πRR).

To make the choice indifferent, we must take EKGL = EKGR. Thus,

kL = k̅L = (πRL - πRR) / (πLR + πRL - πRR - πLL).
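These indifference values are easy to evaluate numerically; a small R check (the variable names below are ours, not the book's):

pi_LL <- 0.59; pi_RL <- 0.93; pi_LR <- 0.94; pi_RR <- 0.71
den    <- pi_LR + pi_RL - pi_RR - pi_LL
gL_bar <- (pi_LR - pi_RR) / den   # Goalie's indifference probability of diving left
kL_bar <- (pi_RL - pi_RR) / den   # Kicker's indifference probability of kicking left
c(gL_bar, kL_bar)                 # about 0.40 and 0.39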

With the values given previously, we have g̅L ≈ 0.40, k̅L ≈ 0.39 – which is close to the observed values – this was the basic conclusion of (Palacios-Huerta, 2003). Still in the framework of GT, we may analyze the minimax strategy. If the losses of a given player are evaluated as being the payoffs of the other player, the maximal losses of the Kicker are 1 - πLL when he chooses L and 1 - πRR when he

Table 7.12 Minimax and Maximin strategies for the penalty kick (πRL ≈ πLR)

            Minimax             Maximin
Kicker      R                   R
Goalie      Uniformly random    Uniformly random

chooses R. Thus, his minimax strategy chooses the minimum of these values, which corresponds to the largest value between πLL and πRR. For the given data, the choice is R. The maximin strategy for the Kicker leads to the same result: his minimal payoffs are πLL and πRR, so that he will maximize his minimal winning by choosing the maximal value between them. Concerning the Goalie, the maximal losses are πRL and πLR. Let us consider that πRL and πLR are approximately equal: then, the maximum losses are equal for both sides. Analogously, the minimal winnings of the Goalie are 1 - πLR and 1 - πRL, which are approximately equal. In this case, we obtain the result shown in Table 7.12. Assuming that the Players choose one of these strategies, the Kicker chooses R and the Goalie randomly chooses between R and L. If we assume that the difference of 0.01 between πRL and πLR is significant, then πRL < πLR and the minimax and maximin strategies for the Goalie are L, instead of uniformly random. Notice that these choices do not correspond to a Nash equilibrium. Now, let us consider the point of view of UQ: the basis for the decision is given by the probabilities. From the Kicker's point of view, the probability of scoring is SL or SR when he chooses to kick L or R, respectively. We have

SL = πLR - gL(πLR - πLL),  SR = πRR + gL(πRL - πRR).

The global probability of scoring is (see Fig. 7.21)

S = SR + kL(SL - SR).

If the Kicker desires to maximize this probability, he will choose either kL = 1, if SR < SL, or kL = 0, if SR > SL. When SL = SR, the Kicker chooses uniformly at random between L and R. The result is logical: if the probability of scoring is larger for one of the sides, the Kicker will have the largest probability of scoring by kicking at that side. For instance, if the Goalie is known to always jump to the left, gL = 1, so that SR = πRL and SL = πLL and the decision is kL = 0: the Kicker strikes at the right to maximize his probability of scoring. Analogously, if the Goalie is known to always jump to the right, the decision will be to kick at the left. Indifference occurs when SL = SR, id est, gL = g̅L. From the Goalie's point of view, the probability of not scoring is NL or NR when he chooses to dive L or R, respectively. We have

NL = (1 - πLR) + kL(πLR - πLL),  NR = (1 - πRR) - kL(πRL - πRR).

The global probability of not scoring is (see Fig. 7.22)

7.4

The Goalie’s Anxiety at the Penalty Kick

Fig. 7.21 Probability of scoring

Fig. 7.22 Probability of not scoring


N = NR + gL(NL - NR).

The Goalie desires to maximize N, so he will choose either gL = 0, if NL < NR, or gL = 1, if NL > NR. When NL = NR, the Goalie chooses uniformly at random between L and R. Again, the result is logical: if the probability of not scoring is higher at the left, the Goalie will choose this side; if the same probability is higher at the right, he logically chooses the right side. As an example, if the Kicker is known to always kick at the left, then kL = 1, so that NR = 1 - πRL and NL = 1 - πLL, and the decision is gL = 1: the Goalie plunges at the left to minimize the probability of scoring. Analogously, if the Kicker is known to always strike at the right, the decision will be to jump to the right. Indifference occurs when NL = NR, id est, kL = k̅L. The indifference values correspond to the Nash equilibrium: in this game, the payoffs are equal to the probabilities. For the data given, SL ≈ 0.8, SR ≈ 0.8, S ≈ 0.8, g̅L ≈ 0.4, NL ≈ 0.2, NR ≈ 0.2, N ≈ 0.2, k̅L ≈ 0.4. In this game, the Kicker can use an anticyclic strategy: he may choose to strike at the left with a probability kL = 1 - gL. In this case (see Fig. 7.23)

Fig. 7.23 Probability of scoring as function of gL in the anticyclic strategy of the Kicker


Fig. 7.24 Probability of not scoring as function of gL in the cyclic strategy of the Goalie

S = S^a = πLR + (πLL - 2πLR + πRR)gL + (πLR + πRL - πRR - πLL)gL².

This polynomial attains its minimal value at

g^a_L = (2πLR - (πLL + πRR)) / (2(πLR + πRL - πRR - πLL)),

so that

S^a ≥ S^a_min = (4πLRπRL - (πLL + πRR)²) / (4(πLR + πRL - πRR - πLL)).

The Goalie may use a cyclic strategy and jump to the left at probability kL. Then (see Fig. 7.24),

N = N^c = 1 - πRR + kL(2πRR - πLR - πRL) + kL²(πLR + πRL - πRR - πLL).

This polynomial attains its minimal value at

k^c_L = (πLR + πRL - 2πRR) / (2(πLR + πRL - πRR - πLL)),

so that

N^c ≥ N^c_min = -[(πLR + πRL)² - 4πLLπRR - 4(πLR + πRL - πRR - πLL)] / (4(πLR + πRL - πRR - πLL)).

For the data given, g^a_L ≈ 0.5, S^a_min ≈ 0.8, k^c_L ≈ 0.4, N^c_min ≈ 0.2. In this game, equal opportunity does not exist for the considered values of πLL, πLR, πRL, πRR: indeed, Equal Opportunity gives the same probability to the Goalie and the Kicker, id est, N = S = 0.5, but the data correspond to a minimal S of 0.59 and a maximal N of 0.41.
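As an illustrative numerical check of the preceding formulas in R (variable names are ours):

pi_LL <- 0.59; pi_RL <- 0.93; pi_LR <- 0.94; pi_RR <- 0.71
den   <- pi_LR + pi_RL - pi_RR - pi_LL
gLa   <- (2 * pi_LR - (pi_LL + pi_RR)) / (2 * den)                      # about 0.51
Samin <- (4 * pi_LR * pi_RL - (pi_LL + pi_RR)^2) / (4 * den)            # about 0.79
kLc   <- (pi_LR + pi_RL - 2 * pi_RR) / (2 * den)                        # about 0.39
Ncmin <- 1 - pi_RR + kLc * (2 * pi_RR - pi_LR - pi_RL) + kLc^2 * den    # about 0.20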

7.5 Hawks and Doves

Let us consider a population of Hawks (H) and Doves (D): when a Hawk (H) faces a Dove (D), the Dove withdraws, and the Hawk wins the reward 2R > 0. When two Doves face each other, the reward is equally divided: each one wins R > 0. If two Hawks face each other, they fight and, in the mean, their reward is R - C, where C = βR > 0 is the cost of the injuries received in the fight. This game may be put into a matrix form as in Table 7.13. In this case, the mean payoffs are

EH1 = R(2 - pH2(1 + β)),  ED1 = (1 - pH2)R,
EH2 = R(2 - pH1(1 + β)),  ED2 = (1 - pH1)R.

The mean payoffs of Players 1 and 2 are, respectively,

E1 = (1 + pH1 - pH2 - βpH1pH2)R,  E2 = (1 + pH2 - pH1 - βpH1pH2)R.

The results of Player 1 are indifferent to the strategy when

EH1 = ED1 ⟺ 2 - pH2(1 + β) = 1 - pH2.

The solution is pH2 = 1/β.

Table 7.13 The normal form of the Hawks and Doves

                   Player 2
Player 1           H (pH2)                  D (1 - pH2)
H (pH1)            ((1 - β)R, (1 - β)R)     (2R, 0)
D (1 - pH1)        (0, 2R)                  (R, R)


Thus, a solution exists if and only if β ≥ 1 (otherwise, pH2 > 1) – id est, a solution exists if and only if the cost of an injury is greater than the reward. The situation is analogous if we make the results of Player 2 indifferent to the strategy: indifference corresponds to pH1 = 1/β. Let us analyze the replicator dynamics: consider a population divided into Hawks and Doves, and denote by xH the proportion of Hawks and xD = 1 - xH the proportion of Doves. Then, the mean payoff for a Hawk is

fH = xH(1 - β)R + 2R(1 - xH) = R(2 - (1 + β)xH).

The mean payoff for a Dove is

fD = (1 - xH)R.

The mean payoff of the population is

f̄ = xHfH + (1 - xH)fD = R(1 - βxH²).

The replicator equation reads as

dxH/dt = xH(fH - f̄) = RxH(1 - xH)(1 - βxH).

Then, there are three stationary solutions: xH ∈ {0, 1, 1/β}. If β < 1, the stationary solutions in [0, 1] reduce to {0, 1}: the population of Doves tends to extinction. If β > 1 and xH(0) ∉ {0, 1}, the population of Hawks converges to 1/β (see Figs. 7.25 and 7.26). Thus, if the cost of an injury is less than the reward, the best strategy is "Hawk". Let us consider the population of Hawks divided into two subpopulations: Dominant Hawks (DH) and Hawks (H): when a Hawk (H) or a Dominant Hawk faces a Dove (D), the Dove withdraws, and the Hawk wins the reward 2R. When two Doves face each other, the reward is equally divided: each one wins R. If two Hawks or two Dominant Hawks face each other, they fight and, in the mean, their reward is R - C, where C is the cost of the injuries received in the fight. When a Hawk faces a Dominant Hawk, the Dominant Hawk receives 2(1 - α)R and the Hawk receives 2αR (0 < α < 1/2). This game may be put into a matrix form as in Table 7.14. Notice that pDH1 + pH1 + pD1 = pDH2 + pH2 + pD2 = 1. The mean payoffs are

EDH1 = R(2 - pDH2(1 + β) - 2αpH2),
EH1 = R(2 - (1 + β)pH2 - 2(1 - α)pDH2),
ED1 = R(1 - pDH2 - pH2).


Fig. 7.25 In this example, 1/β ≈ 0.333, so that the fraction of "Hawks" converges to this value, for any initial value xH(0) ∉ {0, 1}

Fig. 7.26 In this example, 1/β = 2, so that the fraction of "Hawks" converges to 1, for any initial value xH(0) ≠ 0: the species "Doves" tends to extinction


Table 7.14 The normal form of the Dominant Hawks, Hawks, and Doves

                     Player 2
Player 1             DH (pDH2)                 H (pH2)                   D (pD2)
DH (pDH1)            ((1 - β)R, (1 - β)R)      (2(1 - α)R, 2αR)          (2R, 0)
H (pH1)              (2αR, 2(1 - α)R)          ((1 - β)R, (1 - β)R)      (2R, 0)
D (pD1)              (0, 2R)                   (0, 2R)                   (R, R)

Thus, the mean payoff of Player 1 is

E1 = R(1 - pH2 + pH1 - pDH2(1 + (1 - 2α)pH1) - βpH1pH2 + pDH1(1 + pH2(1 - 2α) - βpDH2)).

The results of Player 1 are indifferent to the strategy when EDH1 = ED1 = EH1, id est,

EDH1 = ED1 ⟺ 2 - pDH2(1 + β) - 2αpH2 = 1 - pDH2 - pH2,
EH1 = ED1 ⟺ 2 - (1 + β)pH2 - 2(1 - α)pDH2 = 1 - pDH2 - pH2.

The solution is

pDH2,Nash = (1 - 2α + β) / ((1 - 2α)² + β²),  pH2,Nash = (2α + β - 1) / ((1 - 2α)² + β²).

Then,

pD2,Nash = 1 - 2β / ((1 - 2α)² + β²).

For these values, the mean payoff of Player 1 is

E1,Nash = R(1 - 2β / ((1 - 2α)² + β²)) = pD2,Nash R.


Fig. 7.27 Values pDH2, Nash, pH2, Nash, pD2, Nash do not exist for some pairs (β, α). The region in blue is the region where a solution exists

Solutions do not exist for some values of (β, α). Figure 7.27 shows the possible pairs. Figure 7.28 shows the corresponding values of pDH2,Nash, pH2,Nash, pD2,Nash. Again, solutions with R > C (id est, β < 1) cannot exist. Let us examine the replicator dynamics: denoting by xH the fraction of Hawks and xDH the internal fraction of Dominant Hawks, the fraction of non-dominant Hawks is xNDH = 1 - xDH and the fraction of Doves is xD = 1 - xH. We have

fDH = R(2 + ((2α - β - 1)xDH - 2α)xH),
fNDH = R(2 + ((2α + β - 1)xDH - 1 - β)xH),
fH = R(2 - (2β(xDH - 1)xDH + 1 + β)xH),
fD = R(1 - xH),
f̄ = R(1 - β(1 + 2(xDH - 1)xDH)xH²).

Thus, the replicator equations read as

dxH/dt = xH(fH - f̄) = R(xH - 1)xH(β(1 + 2(xDH - 1)xDH)xH - 1),


Fig. 7.28 Values pDH2,Nash, pH2,Nash, pD2,Nash for the pairs (β, α)


dxDH/dt = xDH(fDH - fH) = R(xDH - 1)xDH(2βxDH - 1 + 2α - β)xH.

The stationary points correspond to the critical values

xDH ∈ {0, 1}, xH ∈ {0, 1, 1/β};
xDH = (1 - 2α + β)/(2β), xH ∈ {0, 1, 2β/((1 - 2α)² + β²)};
xH = 0.

Again, the non-trivial solution

xH = 2β/((1 - 2α)² + β²),  xDH = (1 - 2α + β)/(2β)

does not exist for some pairs (β, α) (Fig. 7.29). Figures 7.30, 7.31, and 7.32 show typical evolutions of the populations. Notice that Doves and non-dominant Hawks go extinct, except if the cost of an injury is high.
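These evolutions can be simulated with deSolve as before; a minimal sketch of the right-hand side of the system (assumed names, not the original listing) is:

hawks_rhs <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    dxH  <- R * (xH - 1) * xH * (beta * (1 + 2 * (xDH - 1) * xDH) * xH - 1)
    dxDH <- R * (xDH - 1) * xDH * (2 * beta * xDH - 1 + 2 * alpha - beta) * xH
    list(c(dxH, dxDH))
  })
}

# Example of use (illustrative parameter values):
library(deSolve)
out <- ode(y = c(xH = 0.5, xDH = 0.5), times = seq(0, 50, by = 0.1),
           func = hawks_rhs, parms = c(R = 1, beta = 2, alpha = 0.3), method = "rk4")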

Fig. 7.29 Non-trivial stationary points do not exist for some pairs (β, α). The region in blue is the region where non-trivial stationary points do exist


Fig. 7.30 Survival of Doves and non-dominant Hawks for a high cost for injuries

Fig. 7.31 Survival of Doves and extinction of non-dominant Hawks for a medium cost for injuries


Fig. 7.32 Extinction of Doves and non-dominant Hawks for a small cost for injuries

In this model, we can get a more complex behavior. For instance, assume that there is uncertainty on the value of α: the real value belongs to (0.2, 0.4) with a uniform probability. We can generate a large number of curves for different α on this interval and determine their most representative element and a confidence interval, as in Sect. 3.8. Examples of results appear in Fig. 7.33. The confidence interval for a risk of 20% is formed by the blue curves – black curves lie outside the confidence interval.

Exercises
1. A company must decide on the investment in a new method proposed by an inventor. There are two options: a risk-based option or a contract option. In the risk-based option, the company and the inventor will share 50/50 the gains – if any – and the company does not pay anything, except if a gain is obtained. In the contract option, the company pays for the method and will keep 100% of the gains – if any. For the inventor, the risk-based option implies supporting the development costs. Assume that these costs represent 1000 money units (but the inventor will ask for a funding F > 1000). The expected gain G is a random variable uniformly distributed on (a, b).
(a) Give the normal form of the game associated with this decision. TIP: Two players (inventor and company) and two choices (contract, risk). The payoff is zero for both if there is no agreement. The payoffs of the inventor must consider the development costs.


Fig. 7.33 Evolution under uncertainty on α, uniformly distributed on (0.2, 0.4): blue curves belong to the confidence interval. Red curve is the most representative element, taken as the mean


(b) Verify that both choices (contract, contract) and (risk, risk) correspond to Nash equilibria when G and F are large enough.
(c) Find the relation between F and G for which the strategy is indifferent for the company.
(d) Find the relation between F and G for which the strategy is indifferent for the inventor.
(e) Let F* = (b + a)/4. The inventor asks for a contract with F = F* + δ (δ > 0). What is the company's best choice when considering the mean payoff of the operation?
(f) The company chooses a contract but considers that the probability of a positive payoff must be 90%. What is the maximum acceptable δ? How is this value modified if the company requests a probability of 99%?
2. Two players must choose between two options. If both choose option 1, both win $1. If both choose option 2, both lose $2. If a player chooses option 1 and the other chooses option 2, the player choosing option 2 wins $α (α > 1) and the player choosing option 1 loses $1.
(a) Give the normal form of the game.
(b) Verify that there is no pure Nash equilibrium.
(c) Denoting by pi the probability of option 1 for Player i, find the mean payoff of each player.
(d) Determine a mixed Nash equilibrium. Determine the mean payoffs of the Players in this case. What is the value of α such that the mixed Nash equilibrium corresponds to a random choice between the two options? What is the mean payoff in this case?
(e) For each Player, find the probability of not losing money. Is there an Equal Opportunity Solution for such a minimal payoff?
(f) Suppose that the table found in (a) represents the payoffs of a population having a proportion xi of individuals choosing strategy i. Determine the mean payoffs and the replicator equations. Find the steady states.
3. Two people must decide about cleaning the house. If no one does the task, the house will be dirty and each one will have a negative utility -di (di > 0). If one cleans the house, the other will pay him a small reward corresponding to the utility r > 0. If both clean the house, the utility is a small positive number ci for each one (0 < ci < r). Assume that d1 < d2 and c1 < c2. Denote by pi the probability of cleaning the house for player i.
(a) Give the normal form of the game.
(b) Has this game a pure Nash equilibrium?
(c) Determine the mean payoffs of each player.
(d) Determine a mixed Nash equilibrium. Is there a Nash equilibrium if one of the Players is little bothered by dirtiness (di < ci)? Assume that

di > ci. What are the mean payoffs of each player in this case? What is the minimal value of r such that the mean payoffs are positive?
(e) Denote by p1*, p2* the probabilities of the mixed Nash equilibrium found. What is the best decision for Player 1 when p2 < p2*? TIP: Determine the mean payoff of each strategy and choose the best.
(f) For each player, find the probability of a positive payoff. Is there an Equal Opportunity Solution for a positive payoff?
(g) For each Player, find the probability of a minimal payoff of -r. Is there an Equal Opportunity Solution for such a minimal payoff?
(h) Player 1 estimates that p2 is a random variable uniformly distributed on (p2* - α, p2* + β). Determine the probability that cleaning the house is the best strategy.

4. Two Players compete to be the first user of a resource. They must choose between facing the opponent and giving up. When facing the opponent, an injury may be inflicted, and the Player may be defeated – these events are synthesized by a negative utility -bi (bi > 0) for Player i. Assume that the utility of the resource is ui > 0 for Player i. When giving up, no injury happens, but the utility is reduced to αui (0 ≤ α < 1/4).
(a) Give the normal form of the associated game. TIP: Use the utilities ui - bi when facing the opponent and (0, 0) in the case (give up, give up).
(b) Find the conditions for (face, give up) to be a Nash equilibrium.
(c) Find the conditions for (give up, face) to be a Nash equilibrium.
(d) Find a mixed Nash equilibrium. What are the probabilities and the mean payoffs of each Player?
(e) Is there an Equal Opportunity Solution with no injury for the Players?
Assume that bi is a random variable, uniformly distributed on (aiui, Aiui), 0 ≤ ai < Ai ≤ 1. Find the mean payoff of each strategy for each Player. Determine the mean payoff of each Player. Find the probability of facing being a better option than giving up for each Player.

Chapter 8

Optimization Under Uncertainty

Abstract This chapter presents methods for the determination of the probability distribution of the solutions of continuous optimization problems: constrained or unconstrained, linear or nonlinear. We analyze also the distributions of Lagrange’s multipliers.

In Sect. 1.13, we introduced the classical mono-objective optimization problem as

x = arg min_C J = arg min {J(y) : y ∈ C}.   (8.1)

According to the terminology previously introduced, C is the admissible set and J is the objective function. The optimal point is x and the optimal value of the objective is J(x). In the same section, we observed that, for most of the practical applications, J : ℝⁿ → ℝ and C ⊂ ℝⁿ such that

C = {y ∈ ℝⁿ : ψi(y) ≤ 0, 1 ≤ i ≤ p; ψi+p(y) = 0, 1 ≤ i ≤ q},   (8.2)

with ψj : ℝⁿ → ℝ, 1 ≤ j ≤ p + q. The formulation (8.2) involves Lagrange's multipliers λ = (λ1, . . ., λp+q)ᵗ. Uncertainties may concern the objective function or the constraints. If any of these elements becomes a random variable, then both x and J(x) become random variables. Optimization problems involving random variables are often referred to as stochastic programming problems and the reader may find in the literature many works adopting different approaches, such as robust optimization, reliability analysis, stochastic optimization methods, etc. For instance, we may look for the solution of

x = arg min {E(J(y)) : y ∈ C},   (8.3)


or

x = arg min {E(J(y)) : y ∈ C, V(J(y)) ≤ Vmax},   (8.4)

or

x = arg min {V(J(y)) : y ∈ C, E(J(y)) ≤ Jmax}.   (8.5)

We may also look for a solution such that

x = arg min {E(J(y)) : P(y ∈ C) ≥ 1 - α}.   (8.6)

In this chapter, we do not adopt these points of view. We consider the problem from a different standpoint: we look for the probability distribution of J(x) and x.

8.1 Using the Methods of Representation

We can use the approaches presented in Chap. 3 to get a full characterization of the probability distributions of optimal points, Lagrange's multipliers, and optimal values. To fix the ideas, let us consider the following example, adapted from (Kim, 1968):

min {J(y) = y1 - 2y2 + 3y3, y ∈ C},
C = {y ∈ ℝ³ : -2y1 + y2 + 3y3 = u1; 2y1 + y2 + 4y3 = u2; yi ≥ 0, i = 1, 2, 3}.

Assume that u1 is uniformly distributed on (1, 3), u2 is uniformly distributed on (3, 5) and these variables are independent. If we look for a solution in the sense of (8.3), we search for the solution of

min {J̄(y) = y1 - 2y2 + 3y3, y ∈ C̄},

C̄ = {y ∈ ℝ³ : -2y1 + y2 + 3y3 = 2; 2y1 + y2 + 4y3 = 4; yi ≥ 0, i = 1, 2, 3}.

The solution is x1 = 1/2, x2 = 3, x3 = 0, J(x) = -5.5. By this method, we do not obtain information about the distribution of J(x). To get the distribution of these parameters, we can use the approaches presented in Chap. 3. For instance, let us apply the collocation approach: we consider a sample from u1 formed by the values u1i = 1 + 0.1i, i = 0, . . ., 20 and a sample from u2 formed by the values u2j = 3 + 0.1j, j = 0, . . ., 20. We generate a sample from U = (u1, u2) by considering the points (u1i, u2j). The sample is used to generate larger samples from x and J(x), which are used to determine a bidimensional expansion with k1 = k2 = 3 (see Sect. 3.5). Here, we used the representation to generate a sample of ns = 40401 variates from x, using u1i = 1 + 0.01i, u2j = 3 + 0.01j, i, j = 0, . . ., 200.
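As an illustration of the first step (solving the LP at each collocation point), a minimal sketch in R could be the following; lpSolve::lp is used here only for convenience, any LP solver (for instance those of Sect. 1.13.1) could be used instead:

library(lpSolve)
solve_lp <- function(u1, u2) {
  res <- lp(direction = "min",
            objective.in = c(1, -2, 3),
            const.mat = rbind(c(-2, 1, 3), c(2, 1, 4)),
            const.dir = c("=", "="),
            const.rhs = c(u1, u2))        # non-negativity of y is implicit in lpSolve
  res$solution                            # optimal point x(u1, u2)
}
u1 <- 1 + 0.1 * (0:20); u2 <- 3 + 0.1 * (0:20)
grid <- expand.grid(u1 = u1, u2 = u2)
xs <- t(mapply(solve_lp, grid$u1, grid$u2))   # 441 optimal points, used for collocation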


In this simple situation, we may determine the exact values:

x = ( (u2 - u1)/4, (u1 + u2)/2, 0 ),  J(x) = -(5/4)u1 - (3/4)u2.

A comparison between the approximated and the exact values of x is shown in Figs. 8.1 and 8.2. The global RMS error was 3E-13, for each component.

Fig. 8.1 Results for the first component x1 of x in the introductory example (U Uniform)


Fig. 8.2 Results for the second component x2 of x in the introductory example (U Uniform)

The large sample from x can be used to generate a sample of ns = 40401 variates from J(x), which provides the distribution of J(x) shown in Fig. 8.3 – the PDF was evaluated by the SPH approach (see Sect. 1.16). The empirical mean generated by such a large sample furnished the estimations E(J(x)) ≈ -5.5, σ(J(x)) ≈ 0.85 – notice that the exact values are E(J(x)) = -11/2 = -5.5, σ(J(x)) = √(17/24) ≈ 0.84. The RMS error between J(x) and its approximation was 3E-12. Now, assume that u1 ~ N(2, 0.25), u2 ~ N(4, 0.5) and these variables are independent. In an analogous way, we consider a sample of 21 variates from each of these variables and we use again the collocation approach to determine a representation of x. The representation was used to generate a large sample analogous to the preceding one, with 201 variates from the Gaussian distribution corresponding to each component of U. The CDF and the PDF of J(x) were determined analogously. Now, the estimations are E(J(x)) ≈ -5.53, σ(J(x)) ≈ 0.49, again identical up to the 5th decimal for both samples (exact and calculated).

Fig. 8.3 Distribution of the optimal value J(x) in the introductory example (U uniform)

Notice that the exact value is σ(J(x)) = √61/16 ≈ 0.49. The RMS error was 1E-4 for J(x) and 4E-5 for x. Although the mean is the same for the two situations, the standard deviations and the distributions differ significantly. A comparison between the approximated and the exact values of x is shown in Figs. 8.4 and 8.5, while the distribution of J(x) appears in Fig. 8.6. The determination of the distribution of Lagrange's multipliers can be made analogously: we can generate a sample of their values and apply one of the methods presented in Chap. 3. For this introductory and simple problem, the objective function and the restrictions are linear, with coefficients independent from U, so that their gradients are constant and are not affected by the randomness introduced by U. In addition, the active constraints are the same for all the solutions found, so that Lagrange's multipliers are constant. But such is not a general situation: for instance, let us modify the coefficients as follows:

min {J(y) = (1 - u1)y1 + (u2 - 2)y2 + (3 - u1 - u2)y3, y ∈ C},
C = {y ∈ ℝ³ : -2y1 + y2 + 3y3 = u1; 2y1 + y2 + 4y3 = u2; yi ≥ 0, i = 1, 2, 3}.

In this case, the gradient of the economic function J depends upon U, and so do the Lagrange's multipliers. Here, the solution in the sense of (8.4) satisfies

min {J̄(y) = -y1 + 2y2 - 3y3, y ∈ C̄},
C̄ = {y ∈ ℝ³ : -2y1 + y2 + 3y3 = 2; 2y1 + y2 + 4y3 = 4; yi ≥ 0, i = 1, 2, 3},

and the solution is now x1 = 2/7, x2 = 0, x3 = 6/7, J(x) = -20/7 ≈ -2.86. The exact solution is


Fig. 8.4 Results for the first component x1 of x in the introductory example (U Gaussian)

x = ( (-4u1 + 3u2)/14, 0, (u1 + u2)/7 ),
J(x) = (1/14)(2u1² + 2u1 - 7u1u2 + 9u2 - 2u2²).

We may obtain the distribution of J(x) by using the same samples which were considered for the preceding situation. For the uniform distribution, the results are shown in Fig. 8.7. The empirical mean generated by the large sample with 40401 variates furnished the estimations E(J(x)) ≈ -2.857, σ(J(x)) ≈ 1.154, to be compared with the exact values -20/7 ≈ -2.85714 and √(2938/2205) ≈ 1.15431. The RMS error for the values of J(x) was 3E-13, while it was of 4E-13 for x.

Fig. 8.5 Results for the second component x2 of x in the introductory example (U Gaussian)

Fig. 8.6 Distribution of the optimal value J(x) in the introductory example (U Gaussian)


Fig. 8.7 Distribution of the optimal value J(x) in the modified example (U uniform)

The values of the Lagrange's multipliers may be easily derived from the KKT equations (Sect. 1.13):

λ1 = (1/7)(-1 - u1 + u2),  λ2 = (1/14)(-9 + 5u1 + 2u2).

We may use the samples to estimate the distribution of the Lagrange's multipliers: indeed, we may evaluate the values of λ1 and λ2 on the small sample u1i = 1 + 0.1i, u2j = 3 + 0.1j, i, j = 0, . . ., 20. The values collected form a sample from the Lagrange's multipliers, so that we may use them to find representations of these variables. Analogously to x, we consider collocation with k1 = k2 = 3. Once the representations are determined, we use the large sample to generate new values of the Lagrange's multipliers, getting a large sample of 40401 values of each multiplier. This sample is used to determine the distribution of the shadow prices. We obtained estimates of E(λ1), σ(λ1), E(λ2), σ(λ2) close to the real values 1/7 ≈ 0.143, √(2/147) ≈ 0.117, 9/14 ≈ 0.643, and √(29/588) ≈ 0.222, respectively. The RMS error was 8E-14 for λ1 and 5E-14 for λ2. The distributions of the multipliers are shown in Fig. 8.8. A comparison between approximated and exact values of λ1, λ2 is exhibited in Fig. 8.9. The same approach can be applied to the case where U is normally distributed. An example of distribution is shown in Fig. 8.9. We obtained the estimates E(λ1) ≈ 0.15, σ(λ1) ≈ 0.08, E(λ2) ≈ 0.65, σ(λ2) ≈ 0.11. Here, the real values are 1/7 ≈ 0.143, √5/28 ≈ 0.08, 9/14 ≈ 0.643, and √41/56 ≈ 0.11: the estimates are remarkably close to the real values. The RMS error was 2E-13 for λ1 and 2E-12 for λ2. Concerning J(x), the error was 2E-11. The results appear in Figs. 8.10, 8.11 and 8.12.


Fig. 8.8 Distributions of the Lagrange multipliers in the second example. Here, U is uniform

The same approach can be applied for nonlinear optimization. In addition, as shown in Sect. 5.2, iterative descent methods may be adapted. For instance, let us consider a regular function η and the modified Rosenbrock's function:

f(x) = Σ_{i=1}^{n-1} [ 100 ( x_{i+1}/η(u, i+1) - (x_i/η(u, i))² )² + ( x_i/η(u, i) - 1 )² ].   (8.7)


Fig. 8.9 Lagrange’s multipliers in the second example (U Uniform)

The exact solution is xi = η(u, i), 1 ≤ i ≤ n, for which f(x) = 0. Let us consider n = 5, η(u, i) = u² + i, u ~ N(0, 1), and a sample of 20 variates from u (given below), generated by rnorm:

We use fminunc from pracma with a random initial point furnished by runif. fminunc furnishes x(u) for the elements in the sample, so that we get a sample (ui, x(ui)) of 20 variates. This sample is used to determine the coefficients of


Fig. 8.10 Lagrange’s multiplier λ2 in the second example (U Gaussian)

an expansion Px = Σ_{i=0}^{k} ci φi(u) by collocation. Using k = 3, we obtain an RMS error ‖x - Px‖ ≈ 2E-10. The expansion is used to generate a large sample of 1E4 variates of x: rnorm furnishes 1E4 variates from u and this sample furnishes 1E4 variates from Px. Then, the RMS error is ‖x - Px‖ ≈ 3E-9. Examples of results are shown in Figs. 8.13 and 8.14. Of course, this approach can also be used in constrained optimization: we can generate a sample and determine an expansion in the same way (see the examples).
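A minimal sketch of the whole procedure (an assumed reconstruction, not the book's listing; in particular, the 20 variates used in the book are not reproduced and are replaced here by a fresh rnorm sample):

library(pracma)
set.seed(123)
n <- 5; k <- 3
fobj <- function(x, u) {                       # modified Rosenbrock function (8.7)
  eta <- u^2 + (1:n)
  sum(100 * (x[2:n] / eta[2:n] - (x[1:(n - 1)] / eta[1:(n - 1)])^2)^2 +
        (x[1:(n - 1)] / eta[1:(n - 1)] - 1)^2)
}
u  <- rnorm(20)                                # sample from u ~ N(0, 1)
xs <- t(sapply(u, function(ui)                 # x(u_i): minimizer for each variate
  fminunc(runif(n), function(y) fobj(y, ui), maxiter = 500)$par))
Phi   <- outer(u, 0:k, `^`)                    # collocation matrix, phi_i(u) = u^i
coeff <- qr.solve(Phi, xs)                     # (k + 1) x n coefficients of Px (least squares)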


Fig. 8.11 Distributions of the Lagrange’s multipliers in the second example. Here, U is Gaussian

Example 8.1 Let us consider x = (x1, x2) and (Cobb-Douglas function)

J(x) = x1^α x2^β.

We are interested in the maximal value of J(x) under the restriction

p1x1 + p2x2 = A.



Fig. 8.12 Distribution of the optimal value J(x) in the modified example (U uniform)

Example 8.1 (continued) If p1, p2 > 0, then the solution is

x1* = (α/(α + β)) (A/p1),  x2* = (β/(α + β)) (A/p2).

The Lagrange's multiplier associated with the restriction is

λ = -((α + β)/A) J(x*).

Let us consider uncertainties affecting the exponents α, β. In a first model, we take β = 1 - α and suppose that α ∈ (0.4, 0.6), uniformly distributed. Then

E(x1*) = 0.5 A/p1,  σ(x1*) = (1/(10√3)) A/p1 ≈ 0.0577 A/p1,
E(x2*) ≈ 0.5 A/p2,  σ(x2*) ≈ (1/(10√3)) A/p2 ≈ 0.0577 A/p2.

For A = 100, p1 = 2, p2 = 4, we have E(x1*) = 25, σ(x1*) ≈ 2.9, E(x2*) = 12.5, σ(x2*) ≈ 1.4.



Fig. 8.13 Approximation of x1 on a sample of 1E4 variates from u, using the expansion determined on the sample of 20 variates from u

Fig. 8.14 Distribution of x1, determined from a sample of 1E4 variates from u, using the expansion determined on the sample of 20 variates from u. Since x1 = 2 + u², we have x1 - 2 ~ χ²(1). The exact values were furnished by pchisq and dchisq


Example 8.1 (continued) Let us apply the method presented: assume that a sample of 10 variates from α is given (Table 8.1). These values are used to determine x(α), J(α) = J(x(α)), corresponding to the sample from α, so that we have a sample (α, x(α), J(α)). Then, we can determine expansions Px, PJ by collocation.

Table 8.1 Sample from α used in the calculations

0.401836967  0.456233218  0.502981397  0.508512  0.511317
0.511581     0.521121935  0.53794      0.539049  0.596772

For k = 5, we obtain the RMS errors in Table 8.2.

Table 8.2 RMS errors for the first model, on the initial sample of 10 variates

x1      x2      λ       J(x)
2E-11   2E-11   4E-10   5E-8

Once the expansions were determined, we can generate a large sample of 1E4 variates from α and determine the distribution and statistics of x and J. We obtain the results in Table 8.3.

Table 8.3 RMS errors for the first model, on the large sample of 1E4 variates

x1      x2      λ      J(x)
2E-10   2E-10   6E-8   6E-6

The empirical means and standard deviations were E(x1*) ≈ 25, σ(x1*) ≈ 2.9, E(x2*) ≈ 12.5, σ(x2*) ≈ 1.4. The distribution of J = J(x*) is shown in Fig. 8.15.

Fig. 8.15 Distribution of J(x) with uncertain exponents β = 1 - α



Example 8.1 (continued) Consider a second model where α and β are independent and suppose that the experts agree that α ∈ (0.4, 0.6) and β ∈ (0.3, 0.5), both uniformly distributed. In this case

E(x1*) ≈ 0.556 A/p1,  σ(x1*) ≈ 0.0459 A/p1,
E(x2*) ≈ 0.444 A/p2,  σ(x2*) ≈ 0.0459 A/p2.

For A = 100, p1 = 2, p2 = 4, we have E(x1*) ≈ 27.8, σ(x1*) ≈ 2.3, E(x2*) ≈ 11.1, σ(x2*) ≈ 1.2.

Let us apply the method presented: we generate a sample of 100 values of the couple x* = (x1*, x2*) corresponding to (αi, βj), from the sample of α previously introduced and the sample from β shown in Table 8.4.

Table 8.4 Sample from β used in the calculations

0.313722041  0.31488304   0.383768292  0.397799173  0.405302848
0.435980503  0.447786821  0.457281824  0.46323338   0.494181001

The expansion uses k1 = k2 = 3. The expansions determined were used to generate samples of 40401 elements from x* and λ, by using the values u1,i = 0.4 + 0.001i, u2,j = 0.3 + 0.001j, 0 ≤ i, j ≤ 200. The RMS errors between the values generated and the exact ones appear in Table 8.5. The empirical means and standard deviations were E(x1*) ≈ 27.8, σ(x1*) ≈ 2.3, E(x2*) ≈ 11.1, σ(x2*) ≈ 1.2.

Table 8.5 RMS errors for the second model

x1     x2     λ      J(x)
7E-4   3E-4   2E-5   1E-3



Example 8.1 (continued) Examples of results are shown in Figs. 8.16, 8.17, and 8.18.

Fig. 8.16 Comparisons between calculated and exact values of x, λ in the second model with uncertain independent exponents α, β. At left, the exact values, at right, the calculated ones. We have u1 = α, u2 = β



Example 8.1 (continued)

Fig. 8.17 Distribution of x1 ,x2 ,J in the second model with uncertain independent exponents α, β



Example 8.1 (continued)

Fig. 8.18 Distribution of J in the second model with uncertain independent exponents α, β

Example 8.2 Let us consider the constant elasticity of substitution function with two variables

J(x) = (αx1^β + (1 - α)x2^β)^(1/β).

We are interested in the maximal value of J(x) under the restriction

p1x1 + p2x2 = A.

Consider again a model where α and β are independent, α ∈ (0.4, 0.6) and β ∈ (0.3, 0.5), both uniformly distributed. We use the same sample as in the preceding example. Expansions are determined with k1 = k2 = 3. Once the expansion is found, it is used to generate a sample formed by a family of curves corresponding to na subintervals for α and nb subintervals for β, all equally spaced. The results for na = nb = 200 are exhibited in Figs. 8.19 and 8.20.


Example 8.2 (continued)

Fig. 8.19 Comparisons between calculated and exact values of x, λ with uncertain independent parameters α, β. At left, the exact values, at right, the calculated ones. We have u1 = α, u2 = β



Example 8.2 (continued)

Fig. 8.20 Distribution of x1 ,x2 ,λ with uncertain independent parameters α, β



Example 8.2 (continued) The distribution of J = J(x) is shown in Fig. 8.21.

Fig. 8.21 Distribution of J with uncertain independent parameters α, β

Exercises
1. Red Brand Canners under uncertainty: a company produces cans of whole tomato, tomato juice, and tomato paste, sold by cases. Each case manufactured is sold at a given price, requires a mass of tomatoes, must


respect a minimal quality score, and has a variable cost. The forecast demand is evaluated previously. The corresponding values are shown in the table below:

                  Whole tomato      Tomato juice       Tomato paste
Selling price     4                 4.5                3.8
Mass requested    T(16,18,20)       T(19,20,21)        T(22,25,28)
Quality score     8                 6                  5
Variable cost     T(2.4,2.5,2.6)    T(3,3.18,3.5)      T(1.8,1.95,2.1)
Demand            Unlimited         U(48000, 52000)    U(78000, 82000)

The quality score results from the quantity and the grade of the tomatoes used in the manufacturing process: there are tomatoes of grade A, which have a quality score of 9, and of grade B, which have a quality score of 5. If a product is prepared using fractions xA of grade A tomatoes and xB of grade B tomatoes, the product has a resulting score 9xA + 5xB. The company has bought the entire harvest of tomatoes of a panel of producers, for a total mass of 3 million mass units paid 180000 money units. The harvest is formed of tomatoes of grade A and of grade B – the fraction of grade A tomatoes is a variable T(0.18, 0.2, 0.22). The aim is to maximize the profit of the company.
(a) Find a deterministic solution where each random variable is taken at its most probable value for triangular distributions and its mean value for the uniform distributions.
(b) Analyze the effects of the variability of the grades in the harvest: consider all the other variables at their most probable (triangular) or mean (uniform) values and determine the distribution of the solution.
(c) Analyze the effects of the fluctuations in the variable costs: consider all the other variables at their most probable (triangular) or mean (uniform) values and determine the distribution of the solution.
(d) Analyze the effects of the manufacturing variabilities: consider all the other variables at their most probable (triangular) or mean (uniform) values and determine the distribution of the solution.
(e) Analyze the effects of the fluctuations in the real demand: consider all the other variables at their most probable values and determine the distribution of the solution.


(f) A producer proposes 80000 mass units of grade A tomatoes for 6800 money units. If the company accepts this proposition, how are the preceding results modified?
(g) The Marketing department says that an advertising campaign can modify the distribution of the demand of tomato juice to U(70000, 74000). Determine the new distribution of the solution. How many money units can be invested in the advertising campaign?
2. Consider f(x) = x1^α + x2^(1-α). We are interested in the maximal value of f(x) under the restriction p1x1 + p2x2 = A.
(a) Assume that A = 100, p1 = 2, p2 = 4 and α ~ T(0.4, 0.5, 0.6). Find the distribution of the solution.
(b) Assume that A = 100, α = 0.5, p1 ~ T(1.5, 2, 2.5), p2 ~ T(3, 4, 5). Find the distribution of the solution.
3. Consider J(x) = (αx1^β + (1 - α)x2^β)^(1/β). We are interested in the maximal value of p1x1 + p2x2 under the restriction J(x) = 1.
(a) Assume that A = 100, p1 = 2, p2 = 4, α ~ T(0.4, 0.5, 0.6), β ~ T(0.3, 0.4, 0.5). Find the distribution of the solution.
(b) Assume that A = 100, α = 0.5, β = 0.4, p1 ~ T(1.5, 2, 2.5), p2 ~ T(3, 4, 5). Find the distribution of the solution.

8.2 Using the Adaptation of a Descent Method

In Sect. 5.2, we introduced the adaptation of iterative methods. Such an approach may be applied to iterative methods of optimization. In fact, any iterative method X^(r+1) = Ψ(X^(r), U) can be adapted according to the approach presented in Sect. 5.2 and implemented according to the algorithm presented in the same section. In Sect. 6.3, we presented the adaptation to the use of ode in the package deSolve.


Here, we shall illustrate the approach by adapting the function fminunc of pracma: indeed, fminunc can be considered as a particular iteration function Ψ which transforms the starting point X(r) into a new point X(r+1). In practice, we can define Ψ as the result of fminunc at a predefined small number of iterations – recall that the number of iterations of fminunc is defined by an option. For instance, the instruction fminunc(X,f,maxiter = 5)

fixes the number of iterations of fminunc to 5. Thus, we can use fminunc as an iteration function, analogously to the gradient descent iteration function presented in Sect. 5.2. To fix the ideas, let us consider again the optimization of the modified Rosenbrock's function given in Eq. (8.7), with n = 5, k = 3. We start by defining the iteration function. Here, fobj evaluates Rosenbrock's function given in Eq. (8.7).
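The original listing is not reproduced here; a minimal sketch (assumed reconstruction) of these two functions could be:

library(pracma)

# Modified Rosenbrock function of Eq. (8.7), with eta(u, i) = u^2 + i
fobj <- function(x, u) {
  n <- length(x)
  eta <- u^2 + (1:n)
  sum(100 * (x[2:n] / eta[2:n] - (x[1:(n - 1)] / eta[1:(n - 1)])^2)^2 +
        (x[1:(n - 1)] / eta[1:(n - 1)] - 1)^2)
}

# Iteration function Psi: a few fminunc iterations starting from X, for a given u
Psi <- function(X, u) fminunc(X, function(y) fobj(y, u), maxiter = 5)$par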

In a second step, generate the sample from U and define the Hilbert basis
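For instance (illustrative choices; the book's exact sample and basis are not reproduced here):

set.seed(1)
usample <- rnorm(20)                 # sample from U ~ N(0, 1)
nb <- 4                              # k + 1 = 4 coefficients per dimension (k = 3)
he <- function(u) u^(0:(nb - 1))     # values of the basis functions at u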

Then, make the iterations:
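One plausible reading of the iterations (an assumed reconstruction of the algorithm of Sect. 5.2, not the original listing) alternates the iteration function at each sample point with a re-projection onto the expansion basis:

ndim  <- 5
Phi   <- t(sapply(usample, he))            # ns x nb matrix of basis values
coeff <- matrix(0, nrow = ndim, ncol = nb) # current coefficients of Px
for (r in 1:50) {
  X    <- coeff %*% t(Phi)                 # current points Px(u_n), ndim x ns
  Xnew <- sapply(seq_along(usample), function(n) Psi(X[, n], usample[n]))
  coeff <- t(qr.solve(Phi, t(Xnew)))       # least-squares re-projection
}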

The global RMS error was ‖x - Px‖ ≈ 1E-6. Using a sample of 1E4 variates from N(0, 1) furnished by rnorm and the expansion, we generated a sample of 1E4 variates from Px. The RMS error was ‖x - Px‖ ≈ 2E-6. An example of result is given in Fig. 8.22.


Fig. 8.22 Examples of results furnished by the adaptation of fminunc


Exercises
1. Red Brand Canners under uncertainty: a company produces cans of whole tomato, tomato juice, and tomato paste, sold by cases. Each case manufactured is sold at a given price, requires a mass of tomatoes, must respect a minimal quality score, and has a variable cost. The forecast demand is evaluated previously. The corresponding values are shown in the table below:

                  Whole tomato      Tomato juice       Tomato paste
Selling price     4                 4.5                3.8
Mass requested    T(16,18,20)       T(19,20,21)        T(22,25,28)
Quality score     8                 6                  5
Variable cost     T(2.4,2.52,2.6)   T(3,3.18,3.5)      T(1.8,1.95,2.1)
Demand            Unlimited         U(48000, 52000)    U(78000, 82000)

The quality score results from the quantity and the grade of the tomatoes used in the manufacturing process: there are tomatoes of grade A, which have a quality score of 9, and of grade B, which have a quality score of 5. If a product is prepared using fractions xA of grade A tomatoes and xB of grade B tomatoes, the product has a resulting score 9xA + 5xB. The company has bought the entire harvest of tomatoes of a panel of producers, for a total mass of 3 million mass units paid 180000 money units. The harvest is formed of tomatoes of grade A and of grade B – the fraction of grade A tomatoes is a variable T(0.18, 0.2, 0.22). The aim is to maximize the profit of the company.
(a) Find a deterministic solution where each random variable is taken at its most probable value for triangular distributions and its mean value for the uniform distributions.
(b) Analyze the effects of the variability of the grades in the harvest: consider all the other variables at their most probable (triangular) or mean (uniform) values and determine the distribution of the solution.
(c) Analyze the effects of the fluctuations in the variable costs: consider all the other variables at their most probable (triangular) or mean (uniform) values and determine the distribution of the solution.
(d) Analyze the effects of the manufacturing variabilities: consider all the other variables at their most probable (triangular) or mean (uniform) values and determine the distribution of the solution.
(e) Analyze the effects of the fluctuations in the real demand: consider all the other variables at their most probable values and determine the distribution of the solution.


(f) A producer proposes 80000 mass units of grade A tomatoes for 6800 money units. If the company accepts this proposition, how are the preceding results modified?
(g) The Marketing department says that an advertising campaign can modify the distribution of the demand of tomato juice to U(70000, 74000). Determine the new distribution of the solution. How many money units can be invested in the advertising campaign?
2. Consider f(x) = x1^α + x2^(1-α). We are interested in the maximal value of f(x) under the restriction p1x1 + p2x2 = A.
(a) Assume that A = 100, p1 = 2, p2 = 4 and α ~ T(0.4, 0.5, 0.6). Find the distribution of the solution.
(b) Assume that A = 100, α = 0.5, p1 ~ T(1.5, 2, 2.5), p2 ~ T(3, 4, 5). Find the distribution of the solution.
3. Consider J(x) = (αx1^β + (1 - α)x2^β)^(1/β). We are interested in the maximal value of p1x1 + p2x2 under the restriction J(x) = 1.
(a) Assume that A = 100, p1 = 2, p2 = 4, α ~ T(0.4, 0.5, 0.6), β ~ T(0.3, 0.4, 0.5). Find the distribution of the solution.
(b) Assume that A = 100, α = 0.5, β = 0.4, p1 ~ T(1.5, 2, 2.5), p2 ~ T(3, 4, 5). Find the distribution of the solution.

8.3 Combining Statistics of the Objective, the Constraints, and Expansions

Analogously to Chapter 5, Sect. 5.1, we can also use expansions to generate a new optimization problem, involving means of the objective and the constraints. For instance, we can look for the expansion

Px = Σ_{i=1}^{k} xi φi(u),  xi ∈ ℝⁿ,

such that x = (x1, . . ., xk) verifies

x = arg min { G(y) = E( F( Σ_{i=1}^{k} yi φi(u), u ) ) : yi ∈ ℝⁿ, y = (y1, . . ., yk) }.


In practice, we may use a sample to estimate the mean:

G(y) ≈ (1/ns) Σ_{n=1}^{ns} F( Σ_{i=1}^{k} yi φi(un), un ).

For instance, let us consider the minimization of the modified Rosenbrock's function (8.7). The minimization of G may be achieved by adapted optimization methods, such as stochastic gradient methods (see, for instance, Holdorf Lopez et al., 2011). Under R, fminunc can handle this minimization. For instance, let us introduce two auxiliary functions:
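A minimal sketch of these two helpers (an assumed implementation consistent with the description below) is:

tovec <- function(A) as.vector(t(A))    # rows of A, concatenated into one vector
tomat <- function(v, nrow, ncol) matrix(v, nrow = nrow, ncol = ncol, byrow = TRUE)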

tovec transforms a matrix into a vector containing its rows sequentially and tomat performs the inverse operation. Then, define a function ftstat that evaluates G(y):
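A sketch of such a function (assumed implementation; the argument names follow the description just after the listing) is:

ftstat <- function(vec_coeff, f, usample, he, ndim, nb) {
  coeff <- tomat(vec_coeff, ndim, nb)   # ndim x nb matrix of coefficients
  g <- 0
  for (n in seq_along(usample)) {
    phi <- he(usample[n])               # values of the nb basis functions at u_n
    x   <- as.vector(coeff %*% phi)     # Px(u_n)
    g   <- g + f(x, usample[n])
  }
  g / length(usample)                   # empirical mean of F(Px(u), u)
}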

Here, vec_coeff corresponds to y, f corresponds to F, ndim is the dimension of x, nb = k + 1 is the number of coefficients per dimension, and he is the Hilbert expansion. Finally, define an initial point and the objective to be minimized, and call fminunc to get a result. Finalize by setting the solution found as a matrix of coefficients:
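An illustrative sketch of this final step (the sample, the basis, and fobj are assumptions consistent with the preceding text, not the book's exact choices):

set.seed(1)
usample <- rnorm(20)                        # sample from U ~ N(0, 1)
nb <- 4                                     # k + 1 coefficients per dimension
he <- function(u) u^(0:(nb - 1))            # basis values at u
# fobj(x, u): the modified Rosenbrock function of Eq. (8.7), as defined in Sect. 8.2
obj <- function(y) ftstat(y, fobj, usample, he, ndim = 5, nb = nb)
y0  <- runif(5 * nb)                        # initial coefficients
res <- pracma::fminunc(y0, obj, maxiter = 500)
coeff <- tomat(res$par, 5, nb)              # solution found, as a matrix of coefficients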

In this case, the RMS error was ‖x - Px‖ ≈ 6E-8 on the initial sample and ‖x - Px‖ ≈ 2E-7 on the large sample of 1E5 variates. Examples of results are shown in Fig. 8.23.


Fig. 8.23 Results obtained by minimizing G, the mean of F on the sample. fminunc was used for the minimization. At left, a comparison between xi and the expansion Pxi. At right, a comparison of the empirical CDF of Pxi on a large sample and the exact CDF of xi

This approach can be extended to constrained optimization. For instance, let us consider a restriction ψ(x, u) = 0. Analogously to nonlinear equations, we can consider k + 1 restrictions E(φi(u) ψ(Px(u), u)) = 0, 0 ≤ i ≤ k. If the restrictions are ψj(x, u) = 0, 1 ≤ j ≤ q, we consider (k + 1)q restrictions

E(φi(u) ψj(Px(u), u)) = 0,  0 ≤ i ≤ k, 1 ≤ j ≤ q.   (8.8)

A vector containing these restrictions can be created by the function nlcstat below (it calls the function eqs; the function tomat was previously introduced in this section):
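The original listing is not reproduced here; a sketch of an implementation (assumed names; eqs is assumed to return the vector (ψ1, . . ., ψq) at (x, u)) is:

nlcstat <- function(vec_coeff, eqs, usample, he, ndim, nb, q) {
  coeff <- tomat(vec_coeff, ndim, nb)
  out <- matrix(0, nrow = nb, ncol = q)
  for (n in seq_along(usample)) {
    phi <- he(usample[n])
    x   <- as.vector(coeff %*% phi)
    out <- out + outer(phi, eqs(x, usample[n]))   # phi_i(u_n) * psi_j(Px(u_n), u_n)
  }
  as.vector(out / length(usample))                # the (k + 1) q empirical means
}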


As an example, let us consider the determination of the point on the line x1 + ux2 = 3 which is the closest to the circle x1² + x2² = 1 (see Sect. 1.13.3). Setting x = (x1, x2, x3, x4), we have

J(x) = √((x1 - x3)² + (x2 - x4)²),
φ1(x) = x1² + x2² - 1,  φ2(x) = x3 + ux4 - 3.

Then we are looking for

x = arg min {J(x) : x ∈ ℝ⁴, φ1(x) = φ2(x) = 0}.

Let us consider u uniformly distributed on (1, 2) and a sample of 10 of its variates:

1.014772  1.039209  1.138042  1.16512  1.177645  1.281744  1.45127  1.50283  1.741549  1.867562


We apply the method with a polynomial family and k = 3. Function fmincon of pracma is called to determine the solution:
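An illustrative sketch of this call (assumed names, assuming pracma's fmincon interface with heq for nonlinear equality constraints, and reusing ftstat, nlcstat, and tomat from above):

library(pracma)
usample <- c(1.014772, 1.039209, 1.138042, 1.16512, 1.177645,
             1.281744, 1.45127, 1.50283, 1.741549, 1.867562)
nb <- 4; he <- function(u) u^(0:(nb - 1))
eqs  <- function(x, u) c(x[1]^2 + x[2]^2 - 1, x[3] + u * x[4] - 3)   # phi1, phi2
Jfun <- function(x, u) sqrt((x[1] - x[3])^2 + (x[2] - x[4])^2)
obj  <- function(y) ftstat(y, Jfun, usample, he, ndim = 4, nb = nb)
cons <- function(y) nlcstat(y, eqs, usample, he, ndim = 4, nb = nb, q = 2)
y0   <- runif(4 * nb)
res  <- fmincon(y0, obj, heq = cons)
coeff <- tomat(res$par, 4, nb)            # coefficients of the expansion Px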

The results obtained are shown in Figs. 8.24, 8.25, 8.26, and 8.27.

Fig. 8.24 Results obtained by minimizing G under the restrictions (8.8). fmincon was used for the minimization


Fig. 8.25 Results obtained by minimizing G under the restrictions (8.8). fmincon was used for the minimization

Inequality constraints can be considered by transforming them into equality constraints. For instance, if x = (x1, . . ., xn), the constraint ψ(x, u) ≤ 0 can be rewritten as ψ(x, u) + x²_{n+1} = 0. Alternatively, the approach introduced in Sect. 1.14.3 can be combined with the approach introduced in Sect. 5.4 to deal with inequality constraints.

Exercises
1. Use the method presented to solve the Red Brand Canners problem under uncertainty.
2. Consider f(x) = x1^α + x2^(1-α).

(continued)

672

8

Optimization Under Uncertainty

Fig. 8.26 Results obtained by minimizing G under the restrictions (8.8).fmincon was used for the minimization

Use the method presented to analyze the maximal value of f(x) under the restriction p1x1 + p1x2 = A in the following situations: (a) Assume that A = 100, p1 = 2, p2 = 4 and α  T(0.4, 0.5,0.6). Find the distribution of the solution. (b) Assume that A = 100, α = 0.5, p1  T(1.5,2,2.5), p2  T(3, 4, 5). Find the distribution of the solution. 3. Consider  β1 cesðxÞ = αxβ1 + ð1 - αÞxβ2 : Use the method presented to analyze the maximal value of p1x1 + p1x2, under the restriction f(x) = 1. Solve the following situations: (continued)

8.3

Combining Statistics of the Objective, the Constraints, and Expansions

673

Fig. 8.27 Results obtained by minimizing G under the restrictions (8.8).fmincon was used for the minimization

(a) Assume that A = 100, p1 = 2, p2 = 4, α  T(0.4, 0.5,0.6), β  T(0.3, 0.4,0.5). Find the distribution of the solution. (b) Assume that A = 100, α = 0.5, β = 0.4, p1  T(1.5,2,2.5), p2  T(3, 4, 5). Find the distribution of the solution.

Chapter 9

Reliability

Abstract In this chapter, we present some tools for reliability analysis and reliability-based design optimization, such as the notions of reliability index and its determination by different methods, transformations of Rosenblatt and nataf, FORM, SORM, and RBDO.

Indeed, civil engineers deal with various essential elements for which variability in time and space may be extremely high: external loads, values of physical parameters, soil, not to mention the future climatic events, etc. The uncertainties and risks require an adapted analysis, especially in certain situations, where the human and economic consequences of a failure can be catastrophic. More recently, other fields of engineering were brought to reliability analysis, by diverse reasons: customer satisfaction, safety, maintenance costs, possible legal liability on failures, . . . . Finance also adopted a culture of risk analysis connected to reliability. Reliability analysis can be performed with the objective of studying and eventually improving an existing system or process. However, it is more efficient to include reliability since the initial studies about system’s design, considering the effects of variability and uncertainties in the design – this is the modern procedure, especially when the correction of errors or the modification of already executed projects can be costly, if not impossible. This is starting point for the modern idea of reliabilitybased design optimization (RBDO), which consists of seeking optimal solutions that satisfy reliability requirements. Indeed, design procedures look for the best solution in terms of cost and quality. Thus, design often makes use of optimization tools. As in any optimization procedure, the designer must define the three fundamental components of any optimization problem: the design variables, the constraints that must be respected and the objective function to be minimized. In general, the design variables are real or integers and denote the parameters for which there is some freedom of choice. The unknowns to be determined define the design. The constraints are the limitations in the freedom of choice, while the objective function gives a numerical evaluation of the design quality, providing a practical method for comparing different solutions,

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 E. Souza de Cursi, Uncertainty Quantification using R, International Series in Operations Research & Management Science 335, https://doi.org/10.1007/978-3-031-17785-9_9

675

676

9

Reliability

the best ones corresponding to their minima. The constraints and the objective function may involve additional parameters that are not design variables but supplied data. If variability and uncertainty are not considered, the design variables remain deterministic: the resulting optimization problem is usually solved by a numerical optimization method, which furnishes one or several approximate solutions. The use of stochastic descent methods or stochastic modifications of descent methods does not change the deterministic character of this approach, which is usually referred as DDO (Deterministic Design Optimization). However, in many practical situations, the framework includes variability of some parameters or stochastic phenomena, which produces a loss of robustness and reliability. In such a situation, uncertainty must be introduced in the design procedure to obtain robust and reliable solutions, able to face implementation errors and variations in the operational conditions. Different approaches were proposed to include random or uncertain aspects in the design, for instance, robust optimization and RBDO – already mentioned. RBDO looks for a control of the probabilities of some key events, such as failure: the maximal acceptable probability of a key event is introduced as a constraint, which corresponds to requesting a minimum reliability for a key event. In practice, reliability can be characterized by a reliability index, usually denoted by β and defined for systems that satisfy two basic conditions: on the one hand, the state of the system is fully defined by a vector of parameters ; on the other hand, the failure is characterized by the sign of a real variable Z – failure and safe operation correspond to opposite signs of Z – for instance, failure corresponds to Z > 0 and safe operation to Z < 0.

9.1

Limit State Curves

A situation often found in practice is the one where the failure regions are defined by inequalities or, equivalently, the limits of the failure region are defined by equations which separates regions of normal operation and regions of failure. For instance, in dimension 2, the safe and failure regions are separated by curves. Let us denote by x 2 ℝn the design variables of the system: the design space is ℝn. Assume that failure occurs when Z = g(x) > 0 and the system operates safely when Z = g(x) < 0. In such a situation, the safe region and the failure region are separated by the curveg(x) = 0, which is called the limit state curve or failure curve. Then, the space of the design variables splits into two disjoint parts (Fig. 9.1): • The failure region F = fx 2 ℝn : gðxÞ > 0g, • The safe region S = fx 2 ℝn : gðxÞ ≤ 0g The frontier between these regions corresponds to the limit state curve C = fx 2 ℝn : gðxÞ = 0g:

9.1

Limit State Curves

677

:

= ( )>

:

:

= ( )=

= ( )
0g, S = fx 2 ℝn : hðx, uÞ < 0g:

ð9:1Þ

Analogously, C = fu 2 ℝn : hðx, uÞ = 0g:

ð9:2Þ

Notice that the expression of h can change with x. In such a general situation, the design space and the centered space may be different. In the sequel, we assume that the design space is ℝn.

678

9

Reliability

Example 9.1 Let us consider the situation where x 2 ℝ2 and gðxÞ = x2 - x21 - x1 - 3 . Consider x = (x1, x2), ϕ(x, u) = u + x. We have hðx, uÞ = gðu þ xÞ = u2 þ x2 - u21 - 2u1 x1 - x21 - u1 - x1 - 3, id est, hðx, uÞ = u2 - u21 - u1 - 2u1 x1 þ gðxÞ: In Fig. 9.2, we show the limit state curve in the design space and 4 points of interest : x(1) = (0, 0), x(2) = (1, 0), x(3) = (-1, 1), x(4) = (1, 2). In Fig. 9.3, we show the limit state curves in the centered space. Notice that: • In the design space, the limit state curve is the same for all the points and the position of each point is different; • In the centered space, the limit state curve is different for each point and all the points are brought to the origin.

Fig. 9.2 Limit state curve in the design space: the curve is unique, and the points occupy different positions

(continued)

9.1

Limit State Curves

679

Example 9.1 (continued)

Fig. 9.3 Limit state curve in the centered space: the points of interest are brought to the origin and the curves are different for each point

Example 9.2 Let us consider the situation where x 2 ℝ2 and gðxÞ = x2 - x31 - x21 - 4 . Consider x = (x1, x2), ϕ(x, u) = u + x. We have hðx, uÞ = gðu þ xÞ = u2 þ x2 - u21 - 2u1 x1 - x21 - u1 - x1 - 3, id est, hðx, uÞ = u2 - u31 - 3u21 x1 - 3u1 x21 - u21 - 2u1 x1 þ gðxÞ: In Fig. 9.4, we show the limit state curve in the design space and 4 points of interest : x(1) = (0, 0), x(2) = (1, 0), x(3) = (-1, 1), x(4) = (1, 2). In Fig. 9.5, we show the limit state curves in the centered space. Again: • In the design space, the limit state curve is the same for all the points and the position of each point is different; • In the centered space, the limit state curve is different for each point and all the points are brought to the origin. (continued)

680

9

Reliability

Example 9.2 (continued)

Fig. 9.4 Limit state curve in the design space: the curve is unique, and the points occupy different positions

Fig. 9.5 Limit state curve in the centered space: the points of interest are brought to the origin and the curves are different for each point

9.2

Design Point

681

Exercises 1. Let x 2 ℝ2 and g(x) = x1x2 - 2. Determine h(x, u) for x(1) = (1, 0), x(2) = (0, 0), x(3) = (1, 1), x(4) = (0, 1). Draw the limit state curves h(x, u) = 0 in the centered space. 2. Let x 2 ℝ2 and gðxÞ = x21 þ x22 - 4 . Determine h(x, u) for x(1) = (1, 0), x(2) = (0, 0), x(3) = (1, 1), x(4) = (0, 1). Draw the limit state curves h(x, u) = 0 in the centered space. 3. Let x 2 ℝ3 and g(x) = x2 + 1 - 3(2x3 + 1)2(3x1 + 1). Determine h(x, u) for x(1) = (1, 0, 0), x(2) = (0, 0, 0), x(3) = (0, 1, 0), x(4) = (0, 0, 1). Draw the limit state surfaces h(x, u) = 0 in the centered space.

9.2

Design Point

In the same way that we can feel as safer as we are far from the edge of a cliff, we can consider the possible solution x as safer as it is far from the failure region – in fact, the greater the distance between x and F , the larger can be the variations of x that will not generate a failure. Thus, the distance βðxÞ = dðx, F Þ may be used as a measurement of the reliability of x. We have βðxÞ = dðx, F Þ = min fky - xk: y 2 F g:

ð9:3Þ

We can recognize in this formula the definition of the orthogonal projection x = Px of x onto F . In regular situations, we may consider the projection of x onto F [ C : βðxÞ = kx - xk = min fky - xk: gðyÞ ≥ 0g:

ð9:4Þ

In the centered space, we have βðxÞ = ku ðxÞk = min fkuk: hðx, uÞ ≥ 0g:

ð9:5Þ

In the language of reliability, x (or u(x)) is the design point, usually denoted DP. x (or u(x)) is also called MPP (most probable point) or MPFP (most probable failure point). These terminologies are traditional, but their use request some precaution: we should not confuse x – the point in the design space under consideration – and x – the design point. Moreover, if x (or u(x)) defines the failure point which is closest to x, this fact does not mean that the failure will occur at that point – the evolution of the values of x may lead to another point. Finally, x is the most probable point only for probability distributions decreasing with distance to the failure region: although such a property seems natural, it requests that the distributions satisfy some mathematical properties. These considerations make that

682

9

Reliability

: ( , )>

: ( , )= ∗

( )

Fig. 9.6 Centered space, Hybrid space and orthogonal projection of a design point x

many authors prefer to use the terminology MCP (most central point) or MCFP (most central failure point) to avoid misinterpretation. We will use the traditional terminology, and we ask the reader to keep in mind these remarks (Fig. 9.6). The determination of x (or u(x)) can be achieved by solving the optimization problem (9.5) with the optimizers available in R. For instance, we can use fmincon from package pracma:

ustar determines u(x) for a failure region g(x) > 0. Then, β(x) = ku(x)k, x = x + u(x). Alternatively, we can determine x directly. 

xstar determines x for a failure region g(x) > 0. Then, u(x) = x - x, β(x) = ku(x)k.

9.2

Design Point

683

Example 9.3 Let us consider the situation where x 2 ℝ2 and gðxÞ = x2 - x21 - x1 - 3.

Start by defining g :

Then, give a point of interest x and call ustar to determine u(x) , x and β(x) We obtain the results in Table 9.1. Figures 9.7 and 9.8 show the results in the design space and in the centered space. Table 9.1 Results funished by ustar x (1,0) (0,0) (–1,1) (2,1)

u(x) (–1.272838, 2.801603) (–0.4232162, 2.7558958) (0.3894887, 1.7622127) (–1, 1)

β(x) 3.077189 2.788203 1.804742 1.414214

x (–0.2728376,2.8016028) (–0.4232162, 2.7558958) (–0.6105113, 2.7622127) (2.3E-9, 3)

Fig. 9.7 Orthogonal projections determined by fmincon. The points on the failure curve are the MCPs associated to the points of interest

(continued)

684

9

Reliability

Example 9.3 (continued)

Fig. 9.8 Orthogonal projections determined by fmincon. The points on the failure curve are the MCPs associated to the points of interest

Example 9.4 Let us consider the situation where x 2 ℝ2 and gðxÞ = x2 - x31 - x21 - 4 . Running the program xstar furnishes the results in Table 9.2. Figures 9.9 and 9.10 show the results in the design space and in the centered space. Table 9.2 Results funished by xstar x (1,0) (0,0) (–1,1) (2,1)

u(x) (–2.9492544, 0.3932201) (–1.9673004, 0.256285) (–0.8440816, 0.1295846) (–0.8351943, 2.0316372)

β(x) 2.975353 1.983924 0.8539707 2.196611

x (–1.9492544, 0.3932201) (–1.9673004, 0.2562853) (–1.844082, 1.129585) (0.1648057, 4.0316372)

(continued)

Example 9.4 (continued)

Fig. 9.9 Orthogonal projections determined by fmincon. The points on the failure curve are the MCPs associated to the points of interest

Fig. 9.10 Orthogonal projections determined by fmincon. The points on the failure curve are the MCPs associated to the points of interest

686

9.3

9

Reliability

Multiple Failure Conditions

In practice, it can be necessary to deal with several failure conditions g1(x), g2(x), . . ., gng(x), so that failure occurs if gi(x) > 0 for at least one of these limit state equations. In the opposite, the safe region verifies gi(x) < 0, 8 i, 1 ≤ i ≤ ng. Let us denote F i = fu 2 ℝn : hi ðx, uÞ > 0g, S i = fx 2 ℝn : hi ðx, uÞ < 0g: Then, F=

ng [ i=1

F i, S =

ng \

Si:

i=1

Thus, dðx, F Þ = min fd ðx, F i Þ, 1 ≤ i ≤ ngg: Consequently, a simple way to extend the preceding ideas to multiple failure conditions consists in using βðxÞ = d ðx, F Þ = min fdðx, F i Þ, 1 ≤ i ≤ ngg: Let βi ðxÞ = dðx, F i Þ = min fky - xk: y 2 F i g: Then βðxÞ = min fβi ðxÞ, 1 ≤ i ≤ ngg: We have also   βi ðxÞ = ui  = min fkuk: hi ðx, uÞ ≥ 0g and    βðxÞ = ku k = min ui , 1 ≤ i ≤ ng : Thus, a simple way to extend the preceding ideas consists in determining individual design points for each limit state curve and taking the one corresponding to the minimal norm. For instance, we can use the functions below:

9.3

Multiple Failure Conditions

687

These functions receive as arguments the point of interest x and a list, whose elements are lists of two elements: a limit state equations and its gradient (see examples) Example 9.5 Let us consider the situation where x 2 ℝ2 and we have three failure conditions, corresponding to g1 ðxÞ = x2 - x21 - x1 - 3 , g2 ðxÞ = x1 - x22 - x2 - 2 , g3(x) = x1 + x1x2 - 1. In this case,

Each pair (gi, gradgi) is formed by a limit state equation and its gradient. For instance,

We determine the values of βi(x) for the points x 2 {(-2, 2), (-1, 0), (1, 1)}. The results appear in Table 9.3. Figure 9.11 shows the results in the design space. Table 9.3 Results funished by xstar x (–2,2) (–1,0) (1,–1)

β1(x) 1.414214 2.788203 4.006089

β2(x) 4.425427 2.788203 0.844989

β3(x) 2.321008 1.732051 0.8182296

β(x) = min βi(x) 1.414214 1.732051 0.8182296

(continued)

688

9

Reliability

Example 9.5 (continued)

Fig. 9.11 Orthogonal projections determined by fmincon. The points on the failure curve are the MCPs associated to the points of interest

We can use the programs previously introduced. For instance:

9.3

Multiple Failure Conditions

689

Example 9.6 Let us consider the situation where x 2 ℝ2 and we have three failure conditions, corresponding to g1 ðxÞ = x2 - x21 - x1 - 3 , g2 ðxÞ = x1 - x22 - x2 - 2 , g3 ðxÞ = x1 þ x2 þ x21 þ x22 - 15 . We determine the values of βi(x) for the points x 2 {(-2, 2), (-1, 0), (1, -1)}. The results appear in Table 9.4. Figure 9.12 shows the results in the design space. Table 9.4 Results funished by xstar x (–2,2) (–1,0) (1,–1)

β1(x) 1.414214 2.788203 4.006089

β2(x) 4.425427 2.788203 0.844989

β3(x) 1.021528 3.229897 2.35586

β(x) = min βi(x) 1.021528 2.788203 0.844989

Fig 9.12 Orthogonal projections determined by fmincon. The points on the failure curve are the MCPs associated to the points of interest. Notice that there are two solutions for (–1,0)

690

9

Reliability

Exercises 1. Let x 2 ℝ2 and gðxÞ = x2 - x21 - x1 - 3 . Determine the design point for x(1) = (2, 2), x(3) = (1, 1), x(4) = (0, 1). 2. Let x 2 ℝ2 and gðxÞ = x2 - x31 - x21 - 4 . Determine the design point for x(1) = (1, 0), x(3) = (1, 1), x(4) = (0, 1). 3. Let x 2 ℝ2 and g(x) = x1x2 - 2. Determine the design point for x(1) = (1, 0), x(2) = (0, 0), x(3) = (1, 1), x(4) = (0, 1). 4. Let x 2 ℝ2 and gðxÞ = x21 þ x22 - 4 . Determine the design point for x(1) = (1, 0), x(2) = (0, 0), x(3) = (1, 1), x(4) = (0, 1). 5. Let x 2 ℝ3 and g(x) = x2 + 1 - 3(2x3 + 1)2(3x1 + 1). Determine the design point for x(1) = (1, 0, 0), x(2) = (0, 0, 0), x(3) = (0, 1, 0), x(4) = (0, 0, 1). 6. Let x 2 ℝ2 and g1 ðxÞ = x1 - x22 - x2 - 2, g2 ðxÞ = x2 - x31 - x21 - 4 , g2 ðxÞ = x1 þ x2 þ x21 þ x22 - 6 . Determine the design point for x(1) = (1, 0), x(2) = (0, 0), x(3) = (1, 1), x(4) = (0, 1) .

9.4

Reliability Analysis

When uncertainties must be considered, x becomes a random variable X. For example, we can consider that x is the vector of the nominal values of the design variables and U and the uncertainty – in such a situation, the mean of U is zero. As in the deterministic case, we can consider the more general case where X = ϕ(x, U) – with a possibly non-linear relationship. In the context of uncertainty, the centered space becomes a space of random variables, while x remains an element of ℝn – the pair (x, U) is formed by two objects of distinct natures: in the context of reliability, we say that this pair is an element of the hybrid space, that is, a space formed by vectors of ℝn and random variables. In this case, Z = H(x, U) = g (ϕ(x, U)) – notice that H is defined on the hybrid space. Z is a random variable and the probability of failure at a point x on the design space is F = P (Z > 0). Similarly, the reliability of the same point is R = P (Z ≤ 0): the determination of F and R involves calculations in the hybrid space, that is, the space of the random variables U. In the context of reliability, the terminology reliability analysis of the point x of the design space refers to the numerical estimation of the probability of failure associated with x. Thus, reliability analysis consists of the numerical determination of F = P(Z > 0) – of course, it is equivalent to determine the reliability R = P(Z ≤ 0). Let fX denotes the PDF of X: we have Z

Z f X ðxÞdx, R = PðZ ≤ 0Þ =

F = P ð Z > 0Þ = F

f X ðxÞdx: S[C

ð9:6Þ

9.4

Reliability Analysis

691

As shown by Eq. (9.6), reliability analysis requests the knowledge of fX – id est, information about the joint distribution of the components of U (or – equivalently – of X) is necessary to carry the evaluation of F or R. In practice, these values can rarely be determined analytically and must be evaluated using numerical approximations – generally Monte Carlo simulation. An additional difficulty arises from the fact that the values of the probabilities of failure are generally small: in general, F ⋘ 1. This difficulty makes that, in practice, it is more convenient to use a reliability index to estimate the probability of failure. Example 9.7 Let us consider the situation where x 2 ℝ2 and gðxÞ = x2 - x21 - x1 - 3 . Consider x = (x1, x2), ϕ(x, u) = u + x. We have hðx, uÞ = gðu þ xÞ = u2 þ x2 - u21 - 2u1 x1 - x21 - u1 - x1 - 3, id est, hðx, uÞ = u2 - u21 - u1 - 2u1 x1 þ gðxÞ: The probability of failure at the point x is F = Pðhðx, uÞ > 0Þ: Assume that u1, u2 are independent random variables having the same distribution N(0, 1). We can evaluate the probability of failure by a Monte Carlo method. For instance, let us generate two samples of ns variates for u1 and u2. The, we consider the sample of ns2 elements formed by uij = (u1,i, u2,j), 1 ≤ i, j ≤ ns. We count the number n+ of pairs in the sample such that h(x(2), uij) > 0 and we evaluate the probability of failure as F ≈ n+/ns2. Since the result depends on the sample used, we can consider nr realizations of F, denote F1, . . ., Fnr , and evaluate F ≈ F nr , the empirical mean of the values obtained. Typical results are shown in Tables 9.5 and 9.6. Table 9.5 Monte Carlo Evaluation of F: values of Fnr ± sn ðF nr Þ with nr = 10 ns (1, 0) (0,0) (–1,1) (1,2)

100 4.6E-4 ± 7.8E-4 1.5E-3 ± 2.6E-3 1.3E-3 ± 7.7E-3 5.7E-2 ±1.4E-2

1000 4.9E-4 ± 2.3E-4 7.5E-4 ± 6.6E-4 1.4E-2 ± 2.5E-3 4.4E-2 ± 3.6E-3

10000 4.0E-4 ± 8.2E-5 9.0E-4 ± 2.2E-4 1.5E-2 ± 7.4E-4 4.5E-2 ± 1.0E-3

(continued)

692

9

Reliability

Example 9.7 (continued) Table 9.6 Monte Carlo Evaluation of F: values of Fnr ± sn ðF nr Þ with nr = 100 ns (1, 0) (0,0) (–1,1) (1,2)

100 4.0E-4 ± 8.3E-4 7.3E-4 ± 1.5E-3 1.6E-2 ± 8.7E-3 4.4E-2 ±1.2E-2

1000 3.7E-4 ± 2.5E-4 1.1E-3 ± 6.7E-4 1.4E-2 ± 2.8E-3 4.6E-2 ± 3.5E-3

10000 4.2E-4 ± 8.0E-5 9.9E-4 ± 1.7E-4 1.5E-2 ± 8.1E-4 4.6E-2 ± 1.1E-3

The main difficulty in this evaluation is the need of the use of exceptionally large samples, due to the smallness of F. Special sampling methods can be used to diminish the computational cost. For instance, importance sampling can be used. (Papaioannou et al., 2016; Li & Wu, 2007).

Exercises 1. Let x 2 ℝ2 and gðxÞ = x2 - x21 - x1 - 3 . Assuming normality N(0, 1), generate two samples of 1000 variates of u1, u2 and use it to evaluate the reliability of each point: x(1) = (-1, 0), x(2) = (1, 1), x(3) = (0, 1). 2. Let x 2 ℝ2 and gðxÞ = x2 - x31 - x21 - 4 . Assuming normality N(0, 1), generate two samples of 1000 variates of u1, u2 and use it to evaluate the } reliability of each point: x(1) = (1, 0), x(2) = (1, 1), xð Þ = ð0, 1Þ.

3. Let x 2 ℝ2 and g(x) = x1x2 - 2. Assuming normality N(0, 1), generate two samples of 1000 variates of u1, u2 and use it to evaluate the reliability of each point: x(1) = (1, 0), x(2) = (0, 0), x(3) = (1, 1), x(4) = (0, 1). 4. Let x 2 ℝ2 and gðxÞ = x21 þ x22 - 4. Assuming normality N(0, 1), generate two samples of 1000 variates of u1, u2 and use it to evaluate the reliability of each point: x(1) = (1, 0), x(2) = (0, 0), x(3) = (1, 1), x(4) = (0, 1). 5. Let x 2 ℝ3 and g(x) = x2 + 1 - 3(2x3 + 1)2(3x1 + 1). Assuming normality N(0, 1), generate three samples of 100 variates of u1, u2, u3 and use it to evaluate the reliability of each point: x(1) = (1, 0, 0), x(2) = (0, 0, 0), x(3) = (0, 1, 0), x(4) = (0, 0, 1).

9.5

Hasofer-Lind Reliability Index

Among the existing reliability indexes, one of the most popular is the Hasofer-Lind reliability index – the distance between a point in the design space and the failure region, calculated through the design point – the MPP or MPFP.

9.5

Hasofer-Lind Reliability Index

9.5.1

693

The General Situation

The Hasofer-Lind reliability index of a design point x is β(x) given by βðxÞ = ku ðxÞk = min fkuk: H ðx, uÞ ≥ 0g:

ð9:7Þ

For x 2 C , an alternative form is βðxÞ = ku ðxÞk = min fkuk: H ðx, uÞ = 0g:

ð9:8Þ

Three interesting properties of the Hasofer-Lind reliability index are (see Fig. 9.13) • The collinearity between the design point u = u(x) and the normal n to the limit state curve at the point u: u(x) = β(x)n; • an upper bound for the failure probability F: F ≤ P(kuk ≥ β(x)); • a nonlinear system of equations defining u and β: u k— u H ðx, u Þk = β— u H ðx, u Þ, H ðx, u Þ = 0

ð9:9Þ

Let us examine the first property: if β(x) > 0 and H is a differentiable function, then u(x) is collinear to the unit normal to the limit state curve at u(x), id est, u ðxÞ = βðxÞn , n =

d , d = — u H ðx, u ðxÞÞ, kd  k





Fig. 9.13 Collinearity between the design point and the normal

694

9

Reliability



 ∂H ∂H —u H ðx, uÞ = ðx, uÞ, . . . , ðx, uÞ : ∂u1 ∂un Indeed, let λ be the Lagrange’s multiplier associated to the condition :H(x, u) ≥ 0. Then, the KKT equations show that (see Sect. 1.13) 2u - λd = 0, λ ≥ 0, H ðx, u Þ ≥ 0, λH ðx, u Þ = 0: Thus, on the one hand, λ > 0 (otherwise, u = 0 ⟹ β(x) = 0) and, on the other hand, u =

λ  λ d ⟹ βðxÞ = ku k = kd k: 2 2

Consequently, u = βðxÞ

d = βðxÞn : kd  k

The second useful property of the Hasofer-Lind reliability index is an upper bound of the probability of failure: let A(r) be the exterior of the open ball of radius r: Aðr Þ = fu: kuk ≥ rg: We have r 1 > r 2 ⟹ Aðr 1 Þ ⊂ Aðr 2 Þ ⟹ PðAðr 1 ÞÞ ≤ PðAðr2 ÞÞ: Thus, r ⟶ P(A(r)) is decreasing. Let us denote A(x) = A(β(x)). We have u 2 F ⟹ kuk ≥ ku k = βðxÞ ⟹ u 2 A ðxÞ: Consequently, F ⊂ A ðxÞ ⟹ F = PðF Þ ≤ PðA ðxÞÞ: The third property yields from the fact that u is a point of the common boundary of S and F , which is C . Thus, H ðx, u Þ = 0 and u = β

— u H ðx, u Þ : k— u H ðx, u Þk

If ϕ(x, u) = u + x, then H(x, u) = g(x + u), and

ð9:10Þ

9.5

Hasofer-Lind Reliability Index

695

— u H ðx, uÞ = ∇x gðx þ uÞ In this case, Eqs. (9.10) can be rewritten as: gðx þ u Þ = 0 and u = β

—x gðx þ u Þ : k—x gðx þ u Þk

ð9:11Þ

These equations can be solved by R (see examples): Start by creating a function that deterÞ mines k—— xx ggððxþu xþuÞk

The, create a function that evaluates the Eqs. (9.11)

To determine the solution, we must solve eqs = 0 for the unknowns unks. This can be achieved by the methods presented in Sect. 1.14.2. For instance, we can use the function lsqnonlin of package pracma as follows (see examples):

Notice that you can reduce the number of unknowns by eliminating β:

696

9

Reliability

In this case, the equations to solve are evaluated as :

They are solved by the function findu:

Example 9.8 Let us consider the situation where x 2 ℝ2, gðxÞ = x2 - x21 - x1 3, ϕðx, uÞ = u þ x: Then, —u H ðx, uÞ = ∇x gðx þ uÞ = ð - 2ðx1 þ u1 Þ - 1, 1Þ, Indeed, ,hðx, uÞ = u2 - u21 - u1 - 2u1 x1 þ gðxÞ and the equality above holds. Then, u is the solution of Eqs. (9.11). We can look for the solution of these equations using findsol defined above. Initially, we define g and gradg:

(continued)

9.5

Hasofer-Lind Reliability Index

697

Example 9.8 (continued) Then, we call findsol:

In Table 9.4, we found the same values. We can use this method to evaluate the values of x, u, β(x) for the same points evaluated in Table 9.4: we obtain the same results. Figures 9.14 and 9.15 show the results.

Fig. 9.14 Orthogonal projections determined by solving Eqs. (9.11) with findsol. The results are the same as in Table 9.4

698

9

Reliability

Fig. 9.15 Orthogonal projections determined by solving Eqs. (9.11) with findsol. The results are the same as in Table 9.4

Multiple failure conditions can be considered, analogously to Sect. 9.3: Define a function that determines the value of βi(x) for a list lists of Limit States Equations. Each limit state equation is a list itself, having the limit state equation at position 1 and its gradient at position 2. The function returns the minimal β and the corresponding u.

9.5

Hasofer-Lind Reliability Index

699

Example 9.9 Let us consider the situation where x 2 ℝ2 and we have three failure conditions, corresponding to g1 ðxÞ = x2 - x21 - x1 - 3 , g2 ðxÞ = x1 - x22 - x2 - 2 , g3 ðxÞ = x1 þ x2 þ x21 þ x22 - 6. We define the list of limit states:

Then, we call findum:

We obtain:

In Table 9.3, we found the same values for β. We can use this method to evaluate the values of x, u, β(x) for the same points evaluated in Table 9.4: we obtain analogous results (notice that the design point for (–1,0) is different, but the value of β is identical). Figure 9.16 shows the results obtained.

700

9

Reliability

Fig. 9.16 Orthogonal projections determined by solving Eqs. (9.11) with findsol. The results are the same as in Table 9.4

Exercises 1. Let x 2 ℝ2 and gðxÞ = x2 - x21 - x1 - 3 . Determine u for x(1) = (2, 2), x(2) = (1, 1), x(3) = (0, 1). 2. Let x 2 ℝ2 and gðxÞ = x2 - x31 - x21 - 4 . Determine u for x(1) = (1, 0), x(2) = (1, 1), x(3) = (0, 1). 3. Let x 2 ℝ2 and g(x) = x1x2 - 2. Determine u for x(1) = (1, 0), x(2) = (0, 0), x(3) = (1, 1), x(4) = (0, 1). 4. Let x 2 ℝ2 and gðxÞ = x21 þ x22 - 4 . Determine u for x(1) = (1, 0), x(2) = (0, 0), x(3) = (1, 1), x(4) = (0, 1). 5. Let x 2 ℝ3 and g(x) = x2 + 1 - 3(2x3 + 1)2(3x1 + 1). Determine u for x(1) = (1, 0, 0), x(2) = (0, 0, 0), x(3) = (0, 1, 0), x(4) = (0, 0, 1).

9.5.2

The Case of Affine Limit State Equations

Let us consider the situation where g(x) = atx + b, a 2 ℝn, b 2 ℝ. In this case, hðx, uÞ = gðx þ uÞ = at ðx þ uÞ þ b = at u þ gðxÞ:

9.5

Hasofer-Lind Reliability Index

701





( )

Fig. 9.17 If the limit state is a hyperplane Π, then β is the distance from Π to the origin

Thus, H(x, U) = atU + g(x): we have g(x) < 0 when x 2 S. In such a case, C is a hyperplane Π of equation H(x, U) = 0. (Fig. 9.17) Let n be the unit normal pointing outwards the safe region: if a ≠ 0, then n = a/k ak. Then, H ðx, u Þ = at u þ gðxÞ = βðxÞ

at a þ gðxÞ = 0: k ak

Thus, β ð xÞ = Notice that β(x) > 0 for x 2 S. Eq. (9.12) can be solved by R as if shown at right. Function solaff returns a list containing β(x), u, x(see example).

g ð xÞ  g ð xÞ a: ,u = k ak k ak 2

ð9:12Þ

702

9

Reliability

Example 9.10 Let us consider the situation where x 2 ℝ2, g(x) = x2 + x1 - 3. Then, a = pffiffiffi ð1, 1Þt , b = - 3, kak = 2. Let us consider the point x = (1, 0). Then, g(x) = - 2. Thus, pffiffiffi βðxÞ = 2, u(x) = (1, 1)t, x = (2, 1)t.

Using R:

Now, consider the point x = (0, 0). Then, g(x) = - 3. Thus, βðxÞ = p3ffiffi2 ,  t u ðxÞ = 32 , 32 ,  t x = 32 , 32 :

Using R:

Let consider the point x = (0, 1). Again, g(x) = - 2. Thus, pffiffiffi βðxÞ = 2, u(x) = (1, 1)t, x = (1, 2)t.

Using R:

(continued)

9.5

Hasofer-Lind Reliability Index

703

Example 9.10 (continued) Let consider the point x = (1, -1). Here, g(x) = - 3. Thus, βðxÞ = p3ffiffi2 ,  t u ðxÞ = 32 , 32 ,  t x = 52 , 12 :

Using R:

Figure 9.16 shows the results obtained (Fig. 9.18).

Fig. 9.18 Orthogonal projections determined by Eq. (9.12)

704

9

Reliability

Analogously to Sect. 9.3, multiple linear failure conditions can be considered by taking the minimal β. For instance, we can use the function at right. It receives a list listls formed by lists, whose elements are ai, bi (see example)

Example 9.11 Let us consider the situation where x 2 ℝ2, g1(x) = x2 + x1 - 3, g2(x) = x2 - x1 - 1, g3(x) = - x2 - 4. Here, a1 = (1, 1)t, b1 = - 3, a2 = (-1, 1)t, pffiffiffi pffiffiffi b2 = - 1, a3 = (0, -1)t, b3 = - 4, ka1 k = 2, ka2 k = 2, ka3 k = 1. Let us consider the point x = (0, 0): g1(x) = - 3, g2(x) = - 1, g3(x) = - 4. Thus, 3 1 β1 ðxÞ = pffiffiffi , β2 ðxÞ = pffiffiffi , β3 ðxÞ = 4: 2 2 Thus, β(x) = β2(x) and



1 1 t  1 1 t u ðxÞ = - , ,x = - , : 2 2 2 2 To use R, we call the function solaffm:

9.5

Hasofer-Lind Reliability Index

705

We obtain the results at right:

We consider the points (2,–2), (1,0), (–2,–2). Figure 9.16 shows the results obtained (Fig. 9.19).

Fig. 9.19 Orthogonal projections determined by solving Eqs. (9.11) with findsol. The results are the same as in Table 9.4

Exercises 1. Let x 2 ℝ3 and g(x) = x2 - 2x1 - x3 - 3. Determine u for x(1) = (1, 0, 0), x(2) = (0, 0, 0), x(3) = (1, 1, 1), x(4) = (0, 1, 0). 2. Let x 2 ℝ3 and g(x) = x3 - x1 - x2 - 4. Determine u for x(1) = (1, 0, 0), x(2) = (0, 0, 0), x(3) = (1, 1, 1), x(4) = (0, 1, 0). (continued)

706

9

Reliability

3. Let x 2 ℝ4 and g(x) = x3 + 2x4 - 3x1 - x2 - 4. Determine u for x(1) = (1, 0, 0, 1), x(2) = (0, 0, 0, 0), x(3) = (1, 0, 1, 0), x(4) = (0, 1, 0, 1). 4. Let x 2 ℝ4 and g(x) = x3 + x4 - x1 - x2 - 4. Determine u for x(1) = (1, 0, 0, 1), x(2) = (0, 0, 0, 0), x(3) = (1, 0, 1, 0), x(4) = (0, 1, 0, 1).

9.5.3

Convex Failure Regions

This property extends to the situation where the failure region is convex: β(x) corresponds to the distance between the origin and the failure region yet (Fig. 9.20). Recalling that a regular convex region is the intersection of all the hyperplanes containing it (see, for instance, (Souza de Cursi & Sampaio, 2010)), the convexity of the failure region can be exploited to characterize the reliability index as

at x þ b : a, b; at y þ b ≥ 0, 8y 2 F , βðxÞ = sup k ak

ð9:13Þ

—gðyÞt ðx - yÞ βðxÞ = sup :y2C : k—gðyÞk

ð9:14Þ

or, equivalently,

Equation (9.14) can be solved as follows:





( )

Fig. 9.20 If the failure region is convex, then β is the distance from F to the origin, which coincides with the maximal distance between the origin and a hyperplane containing F

9.5

Hasofer-Lind Reliability Index

Create the function to be minimized. Here, fhl t

Þ ð x - yÞ (y) evaluates —gkðy—g ðyÞk : nxg was defined in Sect. 9.5.1.

Create the constraints. Here, clh(y) evaluates ( y) , which must be equal to zero. Create a function that determines the minimum of fhl for given x, g, gradg. Here, we call fmincon of pracma. solconv returns the solution: β, u,x.

Example 9.12 Let us consider the situation where x 2 ℝ2, gðxÞ = x2 - x21 - x1 - 3: Let us consider the point x = (0, 0). We run the code at right. The results are identical to those in Table 9.1.

707

708

9

Reliability

Now, consider the point x = (-1, 1). The results are identical to those in Table 9.1.

Figure 9.16 shows the results obtained for the points considered in Sect. 9.2: the results are identical (Fig. 9.21).

Fig. 9.21 Orthogonal projections determined by solconv

Exercises 1. Let x 2 ℝ2 and gðxÞ = x2 - x31 - x21 - 4. Use the method of this section to determine β(x) and x for x(1) = (1, 0), x(2) = (0, 0), x(3) = (1, 1), x(4) = (0, 1). (continued)

9.6

Using the Reliability Index to Estimate the Probability of Failure

709

2. Let x 2 ℝ2 and g(x) = x1x2 - 2. Use the method of this section to determine β(x) and x for x(1) = (1, 0), x(2) = (0, 0), x(3) = (1, 1), x(4) = (0, 1). 3. Let x 2 ℝ2 and gðxÞ = x21 þ x22 - 4 . Use the method of this section to determine β(x) and x for x(1) = (1, 0), x(2) = (0, 0), x(3) = (1, 1), x(4) = (0, 1). 4. Let x 2 ℝ3 and g(x) = x2 + 1 - 3(2x3 + 1)2(3x1 + 1). Use the method of this section to determine β(x) and x for x(1) = (1, 0, 0), x(2) = (0, 0, 0), x(3) = (0, 1, 0), x(4) = (0, 0, 1).

9.6

Using the Reliability Index to Estimate the Probability of Failure

When the random variables U form a gaussian vector, the reliability index may be used to estimate the probability of failure. In general, the determination assumes that the distribution of U is N(0, Id), id est, the components of U are independent and have the same distribution N(0, 1). For a general Gaussian vector having a mean m and a covariance matrix C, id est, a vector U having the distribution N(m, C), a transformation U = T(W) is used to bring the vector U to a N(0, Id) vector W. If U = (u1, . . ., un) is a gaussian vector, id est, if its components are independent, then C is a diagonal matrix such that Cii = σ 2i . Then, each ui is an independent gaussian variable having as distribution N(mi, σ i). In such a situation, the transformation reads as wi =

ui - m i ⟺ ui = mi þ σ i wi : σi

ð9:15Þ

In the context of Reliability, this transformation is referred as the transformation of Hasofer-Lind. If C is not a diagonal matrix, then start by finding a decomposition C = PDPt, where D is a diagonal matrix and PtP = PPt = Id. Such a decomposition can be made by determining the eigenvalues and eigenvectors of C: D is a diagonal matrix containing the eigenvalues of C as diagonal and Pt is formed by the orthonormalized eigenvectors of C, id est, 0

1 P1 0, if i ≠ j , CPit = dii Pit P = @ . . . A, Pi P tj = 1, if i = j Pm Under R, the command eigen furnishes both the values of the eigenvalues and of the eigenvectors, but they can be non-orthonormalized: it is necessary to call a Gram-Schmidt subroutine to get P – such a subroutine is available in packages

710

9

Reliability

pracma (gramSchmidt, with a lowercase g) and matlib (GramSchmidt, with an uppercase G). Notice that all the eigenvalues of C are non-negative, since C is semidefinite: 8a 2 ℝn: taking v = at(u - m), we have atCa = E(vvt) ≥ 0. Assume that the eigenvalues are ranged in a decreasing order: d11 ≥ d22 ≥ . . . ≥ dnn ≥ 0. Let be the matrix formed by the first m columns of P m = max {i: dii > 0}. Let A pffiffiffiffiffi multiplied by the square root d ii , id est, A = ðA1 , . . . , Am Þ, Ai =

pffiffiffiffiffi dii Pi :

Let w = (w1, . . ., wm) be a Gaussian vector having as distribution N(0, Id). Then, U = AW + m has as distribution (m, C) : indeed, E(U) = AE(W) + m = m and  E ðU - mÞðU - mÞt = AEðWW t ÞAt = AAt = PDPt = C As an alternative, we can determine a decomposition C = A At – this can be achieved by Cholesky’s factorization of C: under R, the command chol furnishes the Cholesky factorization of a real symmetric positive-definite square matrix. Whichever the case or the method used, the transformation is affine U = TW þ m, so that the limit state curve can be written as Gðx, W Þ = H ðx, TW þ mÞ: Consequently, the analysis can be brought to the case where the vector of random variables is formed by independent gaussian variables of same distribution N(0, 1). In the sequel of this subsection, we assume that U satisfies such a condition. Example 9.13 Let us consider two independent random variables normally distributed: u1  N(0, 1) , u2  N(0, 1). Let X1 = u1 + u2, X2 = 3u1 + 2u2, X3 = 5u1 - 4 u2. Let X = (X1, X2, X3). The covariance matrix of X is 0

2

B C=@5 1

5 13 7

1

1

C 7 A 41

9.6

Using the Reliability Index to Estimate the Probability of Failure

Let us use pracma and matlib to get the matrix A: with the code at right, A1 is the result furnished by pracma, A2 is the result furnished by matlib. We verify that AAt = C :

Now let us use Cholesky’s decomposition:

711

712

9

Reliability

Exercises 1. Consider a vector of 3 gaussian variables (X1, X2, X3) having as covariance matrix 0

5 7

1

1

B C=@7 8

C 3 A 10

1 3

What is the minimal number of independent N(0, 1) variables requested to describe this distribution? 2. Consider a vector of 4 gaussian variables (X1, X2, X3, X4) having as covariance matrix 0

19 B 19 B C=B @ 9 -4

19 29

9 -1

-1 2

19 - 18

1 -4 2 C C C - 18 A 26

What is the minimal number of independent N(0, 1) variables requested to describe this distribution? 3. Consider a vector of 4 gaussian variables (X1, X2, X3, X4) having as covariance matrix 0

25 B -2 B C=B @ 7 9

-2 5

7 -1

-1 5

2 2

1 9 5 C C C 2 A 10

What is the minimal number of independent N(0, 1) variables requested to describe this distribution? 4. Consider a vector of 5 gaussian variables (X1, X2, X3, X4, X5) having as covariance matrix 0

34

B B 11 B C=B B 28 B @ - 12 7

11

28

- 12

18

19

0

19 0

33 -3

-3 11

- 16

-8

-9

7

1

C - 16 C C -8 C C C -9 A 25

What is the minimal number of independent N(0, 1) variables requested to describe this distribution? (continued)

9.6

Using the Reliability Index to Estimate the Probability of Failure

713

5. Consider a vector of 5 gaussian variables (X1, X2, X3, X4, X5) having as covariance matrix 0

5 B B 2 B C=B B 4 B @ 6 -8

2 17

4 -2

6 - 12

-2 - 12

4 8

8 20

4

-8

- 16

1 -8 C 4 C C -8 C C C - 16 A 16

What is the minimal number of independent N(0, 1) variables requested to describe this distribution?

9.6.1

The Case of Affine Limit State Equations

In this section, we examine the situation where H(x, U) = atU + b(x). As indicated above, we assume that U is a centered Gaussian vector N(0, Id), i.e., all its components are independent and have the same distribution N(0, 1) – otherwise the transformation previously introduced must be used to bring the analysis to such a situation. Under these assumptions, Z = H(x, U) is a Gaussian variable such that E(Z ) = b(x) and V(Z ) = ata = kak2. Thus, S=

Z - b ð xÞ  N ð0, 1Þ: k ak

Consequently,   b ð xÞ F = PðF Þ = PðZ > 0Þ = P S > = PðS > βðxÞÞ k ak Let Φ be the CDF associated to N(0, 1). We have F = 1 - ΦðβðxÞÞ = Φð - βðxÞÞ:

ð9:16Þ

The value of F may be furnished by pnorm(β(x), lower.tail = FALSE) or pnorm(-β(x)). Typical values are:

714

9

Reliability

We see that β = 4 corresponds to F ≈ 3E - 5, while β = 8 corresponds to F ≈ 6E - 16 and β = 12 to F ≈ 2E - 33. Example 9.14 Let us consider the situation where the failure corresponds to the difference between two variables x1 and x2 : failure occurs when x1 > x2. Then, denoting x = (x1, x2): gðxÞ = x1 - x2 : If X1 = x1 + σ 1u1, X2 = x2 + σ 2u2, with ui  N(0, 1), i = 1, 2. Then, with u = (u1, u2): H ðx, uÞ = x1 - x2 þ σ 1 u1 - σ 2 u2 : In this case, a = (σ 1, -σ 2)t. Thus, x1 - x2 x2 - x1 βðxÞ = - pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi = pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi : 2 2 σ1 þ σ2 σ 21 þ σ 22 The probability of failure is x2 - x1 F = Φ - pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi σ 21 þ σ 22

!

x 1 - x2 = Φ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi σ 21 þ σ 22

! :

Exercises 1. Let x 2 ℝ2 and g(x) = x2 - 3x1 - 3. Let Xi = xi + σ iui with ui  N(0, 1), i = 1, 2. Determine the probability of failure for the points (0,0), (1,0), (0,1), when σ 1 = 1, σ 2 = 2. (continued)

9.6

Using the Reliability Index to Estimate the Probability of Failure

715

2. Let x 2 ℝ3 and g(x) = x3 - 2x1 - x2 - 2. Let Xi = xi + σ iui with ui  N(0, 1), i = 1, 2, 3. Determine the probability of failure for the points (0,0,0), (1,0,0), (0,1,0), (0,0,1) when σ 1 = 1, σ 2 = 2, σ 3 = 1. 3. Let x 2 ℝ3 and g(x) = x2 - 2x3 - x1 - 2. Let Xi = xi + σ iui with ui  N(0, 1), i = 1, 2, 3. Determine the probability of failure for the points (0,0,0), (1,0,0), (0,1,0), (0,0,1) when σ 1 = 1, σ 2 = 2, σ 3 = 1. 4. Let x 2 ℝ4 and g(x) = x4 + x3 - 2x2 - x1 - 2. Let Xi = xi + σ iui with ui  N(0, 1), i = 1, 2, 3, 4. Determine the probability of failure for the points (0,0,0,0), (1,0,0,0), (0,1,0,0), (0,0,1,0) and (0,0,0,1) when σ 1 = 1, σ 2 = 2, σ 3 = 1, σ 4 = 2. 5. Let x 2 ℝ4 and g(x) = x4 + 2x3 - 3x2 - 2x1 - 3. Let Xi = xi + σ iui with ui  N(0, 1), i = 1, 2, 3, 4. Determine the probability of failure for the points (0,0,0,0), (1,0,0,0), (0,1,0,0), (0,0,1,0) and (0,0,0,1) when σ 1 = 1, σ 2 = 2, σ 3 = 1, σ 4 = 2.

9.6.2

The Case of a Convex Failure Region

Assume that the region F is convex. In this case (see, for instance, (Souza de Cursi & Sampaio, 2010): u 2 F ⟺ - ðu Þt ðu - u Þ ≤ 0:

ð9:17Þ

Considering that u = βn, we have u 2 F ⟺ ðn Þt ðu - u Þ ≥ 0:

ð9:18Þ

Let Π be the hyperplane having as equation Π : p ðuÞ = ðn Þt u - ðn Þt u = 0:

ð9:19Þ

Π splits the space in two half spaces, according to the sign of p(u). We have u 2 F ⟹ p ðuÞ ≥ 0 ⟹ F = PðF Þ ≤ Pðp ðuÞ ≥ 0Þ: P( p(u) ≥ 0) is the probability of failure associated to the limit state curve defined by Π, for which a = n, g(x) = - (n)tx. Thus, βðxÞ = ku k = ðn Þt u :

ð9:20Þ

716

9

Reliability

Assume yet that U is a centered Gaussian vector N(0, Id), i.e., all its components are independent and have the same distribution N(0, 1). Then, denoting by Φ the CDF of N(0, 1): F = PðF Þ ≤ Φð - βðxÞÞ:

ð9:21Þ

Thus, pnorm(β(x), lower.tail = FALSE) or pnorm(-β(x)) can be used to furnish an upper bound to the probability of failure. Example 9.15 Let us consider the situation where x = (x1, x2) and the failure occurs when x1x2 > 1. gðxÞ = x1 x2 - 1, F = fx: x1 , x2 ≥ 0, gðxÞ > 0g: In this situation, F is convex. Let X1 = x1 + σ 1u1, X2 = x2 + σ 2u2, with ui  N(0, 1), i = 1, 2. Then, with u = (u1, u2): H ðx, uÞ = x1 x2 - 1 þ σ 1 σ 2 u1 u2 þ σ 1 u1 x2 þ σ 2 u2 x1 : Consider the point x = (0, 0)t. In this case, H(x, u) = - 1 + σ 1σ 2u1u2. Then  1 a a , a=  , u = 1 σ1 σ2 σ 21 þ σ 22 4 



and sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 1 1 þ : β= σ 21 σ 22 pffiffiffi Assume that σ 1 = σ 2 = σ. Then β = 2=σ and we have  pffiffiffi 2 : F≤Φ σ Let us consider σ = 1, 1/2, 1/3, . . ., 1/12. We have

(continued)

9.6

Using the Reliability Index to Estimate the Probability of Failure

717

Example 9.15 (continued) The exact value of the probability of failure can be obtained by integration. Indeed, let us denote by Φσ the CDF of xi, i = 1, 2: Φσ (s) = Φ(s/σ), where Φ is the CDF of N(0, 1). Let ϕσ = Φ0σ be the associated PDF: ϕσ(s) = ϕ(s/σ)/σ, where ϕ = Φ′ is the PDF of N(0, 1), 1 ϕðsÞ = pffiffiffiffiffi exp σ 2π



 1 s 2 : 2 σ

Then Zþ1

Zþ1 dx1 ϕσ ðx1 Þ

F= 0

ϕσ ðx2 Þdx2 = 1 x1

  Zþ1  1 ϕσ ðx1 Þ dx1 : 1 - Φσ x1 0

This integral can be evaluated numerically. For instance, let us use R and its command integral :

We obtain

The results present a good accordance with the upper bound.

718

9

Reliability

Exercises 1. Let x 2 ℝ2 and gðxÞ = x2 - x21 - x1 - 3. Let Xi = xi + σ iui with ui  N(0, 1), i = 1, 2. Determine an upper bound for the probability of failure for the points (0,0), (1,0), (0,1), when σ 1 = 1, σ 2 = 2. 2. Let x 2 ℝ2 and gðxÞ = x2 - x31 - x21 - 4. Let Xi = xi + σ iui with ui  N(0, 1), i = 1, 2. Determine an upper bound for the probability of failure for the points (0,0), (1,0), (0,1), when σ 1 = 1, σ 2 = 2. 3. Let x 2 ℝ2 and g(x) = x1x2 - 2. Let Xi = xi + σ iui with ui  N(0, 1), i = 1, 2. Determine an upper bound for the probability of failure for the points (0,0), (1,0), (0,1), when σ 1 = 1, σ 2 = 2. 4. Let x 2 ℝ4 and g(x) = x1x2x3x4 - 2. Let Xi = xi + σ iui with ui  N(0, 1), i = 1, 2, 3, 4. Determine an upper bound for the probability of failure for the points (0,0,0,0), (1,0,0,0), (0,1,0,0), (0,0,1,0) and (0,0,0,1) when σ 1 = 1, σ 2 = 2, σ 3 = 1, σ 4 = 2. 5. Let x 2 ℝ4 and g(x) = x4 - x1x2x3 - 3. Let Xi = xi + σ iui with ui  N(0, 1), i = 1, 2, 3, 4. Determine an upper bound for the probability of failure for the points (0,0,0,0), (1,0,0,0), (0,1,0,0), (0,0,1,0) and (0,0,0,1) when σ 1 = 1, σ 2 = 2, σ 3 = 1, σ 4 = 2.

9.6.3

General Failure Regions

Equations (9.16) and (9.21) furnish the most popular estimations of the probability failure using the reliability index. They may be used in the situations where the failure region F is a subset of the non-negative half space Π þ defined by the hyperplane Π  given by (9.19), which corresponds to the tangent hyperplane to the failure region at the point u. Notice that condition (9.18) is sufficient to guarantee that F ⊂ F ⊂ Π þ . If F 6 Π þ (for instance, in the situation of Fig. 9.22), we need to use a different approach. Recall that we continue under the assumption that U is a centered Gaussian vector of covariance Id (identity matrix), i.e., all its components are N (0, 1) independent variables – otherwise, transformations must be used (see Sect. 9.7). Thus, the probability density of U is ϕU ð u Þ =

1 1 2 ϕðui Þ = pffiffiffiffiffiffiffiffiffi exp u k k 2 2n π n i=1 n Y

ð9:22Þ

Therefore, the probability density depends only on the distance to the origin r = kuk and decreases with r. We will use this property to establish estimates of F = PðF Þ.

9.6

Using the Reliability Index to Estimate the Probability of Failure

Fig. 9.22 If F 6 Π þ , then a different approach must be used

719







( )

Let us consider ε = fe 2 ℝn : kek = 1g, M ðeÞ = fλe: λ ≥ 0g:

ð9:23Þ

Then 

 x 0 2 M ðeÞ, 8e 2 ε, x ≠ 0 ⟹ x 2 M , k xk so that ℝn ⊂

[ [ M ð eÞ ⊂ ℝ n ⟹ ℝ n = M ð eÞ e2ε

e2ε

Let us consider (see Fig. 9.23) F ðeÞ = F \ M ðeÞ, ΛðeÞ = fλ 2 ℝ: λ ≥ 0, λe 2 F g: We have F ðeÞ = fλe: λ ≥ 0, H ðx, λeÞ ≥ 0g, ΛðeÞ = fλ: λ ≥ 0, H ðx, λeÞ ≥ 0g In addition, F=

[ F ðeÞ: e2ε

ð9:24Þ

720

9

Reliability

ℳ( ) = ℎ ℱ( ) =

ℳ( ) ℱ

( )

Fig. 9.23 M ðeÞ is the half line oriented by the vector e. F ðeÞ is the part of the half line belonging to the failure region F. λ(e) is the distance between the first point of F ðeÞ and the origin of the centered space

Let λðeÞ =

- 1, if ΛðeÞ = ∅, infΛðeÞ, if ΛðeÞ ≠ ∅:

Notice that the inf exists for Λ(e) ≠ ∅, since λ ≥ 0, 8 λ 2 Λ(e). If H is a continuous function, then λ(e) is the smallest non-negative solution of the equation H(x, λe) = 0. Thus, λ ≥ 0 and H ðx, λeÞ ≥ 0 ⟹ λ ≥ λðeÞ: Let L ðeÞ = fλe: λ ≥ λðeÞg. We have 8e 2 ε: F ðeÞ ⊂ L ðeÞ ⟹F ⊂

[ L ð eÞ e2ε

and

! [ L ð eÞ : F = PðF Þ ≤ P e2ε

ð9:25Þ

9.6

Using the Reliability Index to Estimate the Probability of Failure

721

Fig. 9.24 Elements of volume used in the integration

This last value can be determined by integration: we have ! [ P L ð eÞ =

Z

e2ε u2

S

ϕU ðuÞdu L ðeÞ

e2ε

The element of volume is du = dSdλ, where dS is the element of surface for a sphere of radius λ (see Fig. 9.24). The surface of a hyperball of radius r is 2π 2 r n - 1  Γ n2 n

surf ðr Þ = measðkuk = r Þ = In addition,

surf ðλÞ dS dε = ⟹ dS = dε surf ðλÞ surf ð1Þ surf ð1Þ and we have ! [ L ð eÞ = P e2ε

1 surf ð1Þ

Z

Zþ1 surf ðλÞϕU ðuÞdu:

dε e2ε

λðeÞ

722

9

Thus, using that 1 ϕU ðλeÞ = pffiffiffiffiffiffiffiffiffi exp 2n π n



 λ2 , 2

we have ! [ P L ð eÞ = e2ε

Z

1 1 n surf ð1Þ 2π 2



Zþ1 surf ðλÞ exp

dε e2ε

-

 λ2 dλ: 2

λðeÞ

Noticing that 

Zþ1 surf ðλÞ exp λðeÞ

   Zþ1 n λ2 λ2 2π 2 n-1  du = n dλ, λ exp 2 2 Γ 2 λðeÞ

We have 

Zþ1 surf ðλÞ exp λð e Þ

þ1  n n Z n λ2 22 π 2 du = n t 2 - 1 e - t dt 2 Γ 2 λðeÞ2 2

and 

Zþ1 surf ðλÞ exp

-

λð e Þ

  n n 2 n λ ð eÞ 22 π 2 λ2 du = n Γ , , 2 2 2 Γ 2

where Zþ1 Γðs, bÞ =

t s - 1 e - t dt

b

is the incomplete Gamma function. Thus, ! [ P L ð eÞ = e2ε

0 λðeÞ2 1 Z Γ n , λðeÞ2 Γ n2 , 2 2 2 1 n dε = E@  A: surf ð1Þ Γ 2 Γ n2 ε

Reliability

9.6

Using the Reliability Index to Estimate the Probability of Failure

and we have

0  1 2 n λðeÞ BΓ 2 , C 2 B C

F ≤ EB C: n @ A Γ 2

723

ð9:26Þ

Since (see, for instance, (Souza de Cursi, 1992))

2

Γ n2 , λð2eÞ  , P χ 2 ðnÞ ≥ λðeÞ2 = Γ n2 we have



F ≤ E P χ 2 ðnÞ ≥ λðeÞ2 :

ð9:27Þ

These values may be estimated by integration or Monte Carlo methods. For instance, let us describe ε using hyperspherical coordinates θ = (θ1, . . ., θn - 1): let e(θ) = (e1(θ), . . ., en(θ)) be given by e1 ð θÞ =

nY -1

 sin θj ,

j=1

ei ðθÞ = cos ðθn - iþ1 Þ

iY -1

 sin θj , 2 ≤ i ≤ n - 1,

j=1

en ðθÞ = cos ðθ1 Þ: Then,

o n ε = eðθÞ: θ 2 ð0, π Þn - 2 × ð0, 2π Þ :

The Jacobian associated to these coordinates is J ðθÞ = ð - 1Þ½n=2

nY -2

nY -2   sin j θn - j - 1 = ð - 1Þ½n=2 sin n - j - 1 θj :

j=1

j=1

Thus, Z ε

P χ 2 ðnÞ ≥ λðeÞ2 dε =

Z θ2ð0, π Þn - 2 × ð0, 2π Þ

 2 n λ ð e ð θÞ Þ Γ , 2 2

J ðθÞdθ: n Γ 2

ð9:28Þ

724

9

Reliability

The evaluation of the estimate in Eq. (9.28) can be performed by considering a discretization of ε. For instance, in dimension 2, we can consider a given number of steps nsp and a step Δθ = 2π/nsp. Then, we generate the angles θi = iΔθ, i = 0, . . ., nsp  + 1 and e(θi) = (cos(θi), sin(θi)). We obtain the discretized values F i = P χ 2 ðnÞ ≥ λ2i , λi = λ(e(θi)) which can be used for a numerical integration. For instance, the trapezoidal rule reads as ! nsp -1



X F þ F Δθ 0 nsp 2 E P χ 2 ðnÞ ≥ λðeÞ Fj : ≈ þ 2π 2 j=1

ð9:29Þ

Example 9.16 Let us consider the situation where x = (0, 0) and the failure occurs when gðxÞ = x2 - x21 - x1 - 3 > 0 . We have g(x + λe(θi)) = aλ2 + bλ + c, with a = - cos2(θi), b = (sin(θi) - cos (θi)), c = - 3. λi is a positive root the equation aλ2 + bλ + c = 0, if such a root exists. If not any positive solution exists, then Fi = 0, what corresponds to λ(θi) = + 1. • Let θi = π/2: a = 0, b = 1 and λi = 3. • Let θi = 3π/2 : a = 0, b = - 1 and λi = - 3. Thus, there is no positive solution. In this case, we take Fi = 0, what corresponds to λ(θi) = + 1. • Let θi ≠ π/2 and θi ≠ 3π/2 : the equation has degree 2. Let Δ = b2 - 4ac. Real roots exist only for Δ ≥ 0. In this case, the roots are pffiffiffiffi -b± Δ : λ± = 2a λi corresponds to the smaller positive root. Again, if no positive root exists, we take Fi = 0. We use Eq. (9.29) with different values of nsp to evaluate F (this probability was evaluated in Example 9.7 by a simple Monte Carlo method). The results for nsp = 101, 102, . . ., 1010 are the following:

The results are in accordance with the Monte Carlo simulation.

9.6

Using the Reliability Index to Estimate the Probability of Failure

725

Example 9.17 Let us consider the situation where x = (x1, x2) and the failure occurs when h(x, u) = (x1 + u1)2 + (x2 + u2)2 > α2(α > 0). Let x = (0, 0)t: then, u is not unique since any point u such that kuk = α is a design point. Then, β(x) = α and the probability of failure is F=

1 2π

Z

1 exp - kuk2 du, 2

k uk ≥ α

id est,

F=

1 2π

Zþ1 α

 

α2 1 : 2πr exp - r 2 dr = exp 2 2

In this case, h(x, λe) = λ2, so that λ(e(θ)) = α, 8 θ. Thus, from Eq. (9.27):    F ≤ E P χ 2 ð2Þ ≥ α2 = P χ 2 ð2Þ ≥ α2 We can compare these values using the program below:

We obtain:

726

9

Reliability

Example 9.18 Let us consider the situation where x = (x1, x2) and the failure occurs when x1x2 > 1, x1, x2 > 0. Let X1 = x1 + σ 1u1, X2 = x2 + σ 2u2, with ui  N(0, 1), i = 1, 2. Let us consider x = (0, 0)t. In this case, H(x, u) = - 1 + σ 1σ 2u1u2 (see Example 9.15). Then, h(x, λe(θ)) = λ2σ 1σ 2 sin θ cos θ - 1, so that 8 rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi

π 2 < , if θ 2 0, 2 : λðeðθÞÞ = σ 1 σ 2 sin ð2θÞ : þ1, otherwise We can evaluate F as follows:

We obtain:

Now assume that x1, x2 can take negative signs. In this case, 8 rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi



π 3π 2 < , if θ 2 [ 0, [ π, 2 2 : λðeðθÞÞ = σ 1 σ 2 sin ð2θÞ : þ1, otherwise In this case,

As expected, the probability of failure doubles.

9.7

The Transformations of Rosenblatt and Nataf

727

Exercises 1. Let x 2 ℝ2 and gðxÞ = x2 - x21 - x1 - 3. Let Xi = xi + ui with ui  N(0, 1), i = 1, 2. Determine an upper bound for the probability of failure at the points (1,0), (0,1), (1,1). 2. Let x 2 ℝ2 and gðxÞ = x2 - x31 - x21 - 4. Let Xi = xi + ui with ui  N(0, 1), i = 1, 2. Determine an upper bound for the probability of failure at the points (0,0), (1,0), (0,1). 3. Let x 2 ℝ2 and g(x) = x1x2 - 2. Let Xi = xi + ui with ui  N(0, 1), i = 1, 2. Determine an upper bound for the probability of failure at the points (0,0), (1,0), (0,1).

9.7

The Transformations of Rosenblatt and Nataf

The preceding analysis was limited to Gaussian variables U. When U is not gaussian, it is necessary to use more complex transformations, such as the Rosenblatt Transformation (Rosenblatt, 1952), which is defined as follows: let F1 be the marginal CDF of U1 and, for i > 1, Fi be the CDF of Ui conditional to (U1, . . ., Ui - 1) : F i ðui jU 1 = u1 , . . . , U i - 1 = ui - 1 Þ = PðU i < ui jU 1 = u1 , . . . , U i - 1 = ui - 1 Þ Let us consider the CDF Φ of a Gaussian variable N(0, 1) and introduce new variables W = (W1, . . ., Wn) such that P(Wi < wi) = Φ(wi) with w1 = Φ - 1 ðF 1 ðu1 ÞÞ, w2 = Φ - 1 ðF 2 ðu2 jU 1 = u1 ÞÞ, w3 = Φ - 1 ðF 3 ðu3 jU 1 = u1 , U 2 = u2 ÞÞ, ⋮ wn = Φ - 1 ðF n ðun jU 1 = u1 , U 2 = u2 , . . . , U n - 1 = un - 1 ÞÞ Let Φi be the CDF of Wi conditional to (W1, . . ., Wi - 1): we have Φi ðwi jW 1 = w1 , . . . , W i - 1 = wi - 1 Þ = F i ðui jU 1 = u1 , . . . , U i - 1 = ui - 1 Þ = Φðwi Þ, so that W is a gaussian vector of independent variables. Notice that Rosenblatt transformation may generate non-gaussian variables if U contains dependent variables (see example below). Under R, packages rvinecopulib and HMMcopula propose function to make Rosenblatt’s transformations.

728

9

Reliability

Example 9.19 Let us consider the situation where U = (U1, f(U1)), where f is a regular bijective function and U1 is uniformly distributed on (-1, 1). Here, F 1 ð u1 Þ =

1 þ u1 1 þ u1 ⟹ w1 = Φ - 1 : 2 2

In addition, F 2 ðu2 jU 1 = u1 Þ = 0, if u2 ≤ f ðU 1 Þ; F 2 ðu2 jU 1 = u1 Þ = 1, otherwise: Thus, w2 = Φ - 1 ð0Þ = - 1, if u2 ≤ f ðU 1 Þ; w2 = Φ - 1 ð1Þ = þ 1, otherwise: In this situation, the variable U2 can be eliminated and the description of the uncertainties can be made by using only U1 and w1: we have U = ð2Φðw1 Þ - 1, f ð2Φðw1 Þ - 1ÞÞ:

The use of the Rosenblatt transformation requests the complete knowledge of the distribution of the vector U, so that its use may result difficulty in practice – in general, only the marginal laws are known. An alternative is to use an approximation: the Nataf transformation (Nataf, 1962), which uses only the marginal distributions of the variables. The Nataf transformation generates a vector W of independent Gaussian variables in two steps: first, we generate a vector V = (v1, . . ., vn) by using the marginal laws: vi = Φ - 1 ðF i ðui ÞÞ, F i ðui Þ = PðU i < ui Þ: In the second step, we look for a vector W = (W1, . . ., Wm), m ≤ n formed by independent Gaussian variables of same distribution N(0, 1), such that V = MW, with M a n × m matrix, which may be determined by the methods presented in Chapter 2, Sect. 2.13 and Sect. 9.6 above. For instance, by solving the equation cov ðV, V Þ = MM t : As previously indicated (Sect. 9.6), M can be determined by Cholesky’s decomposition. In practice, it may be useful to avoid the evaluation of cov (V, V), using an approximation: for instance, cov(V, V) ≈ cov (U, U). As an alternative, the procedure of factorization can be applied to the correlation matrix RV = ρ(V, V), which may be approximated into an analogous way: RV ≈ RU = ρ(U, U).


Example 9.20 Let us consider again the situation where $U = (U_1, f(U_1))$, where f is a regular bijective function and $U_1$ is uniformly distributed on (-1, 1). Then,

$$F_1(u_1) = \frac{1 + u_1}{2}, \quad F_2(u_2) = \frac{1 + f^{-1}(u_2)}{2}.$$

Thus,

$$v_1 = \Phi^{-1}\!\left(\frac{1 + u_1}{2}\right), \quad v_2 = \Phi^{-1}\!\left(\frac{1 + f^{-1}(u_2)}{2}\right).$$

Let $r = \rho(U_1, U_2)$: the correlation matrix is

$$R_U = \begin{pmatrix} 1 & r \\ r & 1 \end{pmatrix}$$

and

$$M = \begin{pmatrix} 1 & 0 \\ r & \sqrt{1 - r^2} \end{pmatrix} \;\Longrightarrow\; M M^t = R_U.$$

Thus, using the approximation $R_V \approx R_U$, we have

$$u_1 = 2\Phi(w_1) - 1, \quad u_2 = f\!\left(2\Phi\!\left(r w_1 + w_2\sqrt{1 - r^2}\right) - 1\right).$$

One of the effects of the transformations of Rosenblatt and Nataf is the modification of the geometry of the failure region and of the limit state curve. For instance, let us consider the situation where n = 2 and the boundary curve is linear: $g(x) = x_1 - x_2$. Consider the point x = (3, 4) and $X_i = x_i + u_i$, where $u_1, u_2$ are independent variables uniformly distributed on (-1, 1). The associated Rosenblatt transformation modifies the geometry as shown in Fig. 9.25. Notice that the original failure region is convex. In this same situation, let us consider the case where $X_1$ is log-normally distributed with mean $x_1$: we can use $u_1 = (\log(X_1) - \mu_1)/\sigma_1$, id est, $X_1 = \exp(\mu_1 + \sigma_1 u_1)$. Since $u_1 = 0$ for $X_1 = x_1$ (centered space), we have $\mu_1 = \log(x_1)$. The Rosenblatt transformation is illustrated in Fig. 9.26, where we consider $\sigma_1 = 1$ – the other data remain identical. In this case, the failure region becomes non-convex.
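A minimal sketch of how the transformed limit state curve can be traced in the $(w_1, w_2)$ space for the first situation above (our own code, assuming the linear limit state $g(x) = x_1 - x_2$ used above; the grid resolution and plotting choices are arbitrary):

```r
# Trace the limit state curve in the transformed (w1, w2) space.
# Setting: x = (3, 4), X_i = x_i + u_i with u_i ~ U(-1, 1) independent.
x <- c(3, 4)
g <- function(X) X[1] - X[2]
u_of_w <- function(w) 2 * pnorm(w) - 1     # inverse of w = qnorm((u + 1)/2)

w1 <- seq(-3, 3, length.out = 201)
w2 <- seq(-3, 3, length.out = 201)
H <- outer(w1, w2, Vectorize(function(a, b) g(x + u_of_w(c(a, b)))))
contour(w1, w2, H, levels = 0, drawlabels = FALSE,
        xlab = "w1", ylab = "w2", main = "Limit state curve in the w-space")
```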


Fig. 9.25 Example of modification of the geometry by the Rosenblatt Transformation. The failure region is convex

Fig. 9.26 Example of modification of the geometry by the Rosenblatt Transformation. The failure region becomes non-convex


Fig. 9.27 A third example of modification of the geometry by the Rosenblatt Transformation. The failure region becomes non-convex

As a last example, let us consider the situation where $X_1, X_2, X_3$ are uniformly distributed and $g(x) = x_3 - x_1^2 x_2$, with $X_1 \sim U(3, 5)$, $X_2 \sim U(250, 350)$, $X_3 \sim U(2E3, 9E3)$. The Rosenblatt transformation is illustrated in Fig. 9.27. In this case, the failure region becomes non-convex.

Exercises
1. Let $x \in \mathbb{R}^2$ and $g(x) = x_2 - x_1^2$. Let $X_i = x_i + u_i$ with $u_i$ lognormally distributed with $\sigma_i = 1$, $i = 1, 2$. Plot the Rosenblatt transformed limit curve.
2. Let $x \in \mathbb{R}^2$ and $g(x) = x_2 - x_1^3 - x_1^2 - 4$. Let $X_i = x_i + u_i$ with $u_i$ lognormally distributed with $\sigma_i = 1$, $i = 1, 2$. Plot the Rosenblatt transformed limit curve.
3. Let $x \in \mathbb{R}^2$ and $g(x) = x_1 x_2 - 2$. Let $X_i = x_i + u_i$ with $u_i$ lognormally distributed with $\sigma_i = 1$, $i = 1, 2$. Plot the Rosenblatt transformed limit curve.
4. Let $x \in \mathbb{R}^2$ and $g(x) = x_2 - x_1^2$. Let $X_i = x_i + u_i$ with $u_i$ uniformly distributed on (-1, 1), $i = 1, 2$. Plot the Rosenblatt transformed limit curve.
5. Let $x \in \mathbb{R}^2$ and $g(x) = x_2 - x_1^3 - x_1^2 - 4$. Let $X_i = x_i + u_i$ with $u_i$ uniformly distributed on (-1, 1), $i = 1, 2$. Plot the Rosenblatt transformed limit curve.
6. Let $x \in \mathbb{R}^2$ and $g(x) = x_1 x_2 - 2$. Let $X_i = x_i + u_i$ with $u_i$ uniformly distributed on (-1, 1), $i = 1, 2$. Plot the Rosenblatt transformed limit curve.
7. Let $x \in \mathbb{R}^3$ and $g(x) = x_3 - x_1^2 x_2$. Let $X_i = x_i + u_i$ with $u_i$ lognormally distributed with $\sigma_i = 1$, $i = 1, 2, 3$. Plot the Rosenblatt transformed limit curve.

9.8 FORM and SORM

As shown in Sect. 9.5, the reliability index of Hasofer-Lind corresponds to the value of an optimization problem:

$$\beta(x) = \|u^*(x)\|, \quad u^*(x) = \arg\min\{\|u\|^2 : h(x, u) \ge 0\}.$$

The solution of the optimization problem above provides $u^*$, which furnishes the value of the reliability index. We presented methods of solution in Sect. 9.5, but the reader will find in the optimization literature many methods, algorithms, and software for its numerical solution – if the safe or the failure region is non-convex, global optimization methods may have to be considered. As shown in the preceding, we may also determine its value by solving a system of algebraic equations:

$$u^* \|\nabla_u h(x, u^*)\| = \beta \nabla_u h(x, u^*), \quad h(x, u^*) = 0.$$

Other than these standard methods, there are methods specific to reliability, which are examined below.

9.8.1 First-Order Reliability Method (FORM)

The first – and extremely popular – method is the first-order reliability method (FORM), which consists in solving a sequence of problems with affine limit state curves: at each step, the limit state curve is approximated by a hyperplane and we use the expression given in Eq. (9.12) to get an approximate value of β (see Fig. 9.28). FORM can be interpreted as an adaptation of the sequential quadratic programming procedure (SQP) to reliability analysis. Assuming that all the variables are independent and N(0, 1) – otherwise, transformations must be considered – the algorithm of the method reads as

Algorithm of the Method FORM
Input: the point x, a starting point $u^{(0)} \in S$, stopping conditions (for instance, maximal iteration number, minimal step at each iteration); programs evaluating $h(x, u)$ and $\nabla_u h(x, u)$; a program for the correction of a tentative design point u by determining a point $\bar{u}(u)$ such that $|h(x, \bar{u}(u))| < tol$, where tol is a predefined tolerance.


Fig. 9.28 Iterative determination of the reliability index by FORM (F: failure region, S: safe region, C: limit state curve)

Output: estimations of $u^*(x)$ and $\beta(x)$.
Initialization: If the initial point does not verify $|h(x, u^{(0)})| < tol$, then correct it: $u^{(0)} \leftarrow \bar{u}(u^{(0)})$. Initialize $\beta^{(0)} = \|u^{(0)}\|$. Set the iteration number to zero: $i \leftarrow 0$.
1. Approximate the limit state curve by a hyperplane $\Pi^{(i)}$: $p^{(i)}(u) = (a^{(i)})^t u + b^{(i)}(x)$. For instance, use a Taylor-MacLaurin expansion: $a^{(i)} = \nabla_u h(x, u^{(i)})$, $b^{(i)}(x) = h(x, u^{(i)}) - (a^{(i)})^t u^{(i)}$.
2. Set $n^{(i)} = a^{(i)}/\|a^{(i)}\|$, $\beta^{(i+1)} = -b^{(i)}(x)/\|a^{(i)}\|$, $u^{(i+1)} = \beta^{(i+1)} n^{(i)}$.
3. If $|h(x, u^{(i+1)})| \ge tol$ then $u^{(i+1)} \leftarrow \bar{u}(u^{(i+1)})$.
4. Set $\beta^{(i+1)} = \|u^{(i+1)}\|$.
5. Increment the iteration number $i \leftarrow i + 1$.
6. Test for the stopping conditions. If the iterations continue, then go to 1, else estimate $\beta(x) \approx \beta^{(i)}$, $u^*(x) \approx u^{(i)}$.

To make this algorithm work correctly, it is necessary to choose the starting point in the safe region. We must also ensure that the generated sequence of points maintains $b^{(i)}(x) < 0$ during the iterations – notice that $0 \in S$, so that $u^{(i)}$ generally satisfies the condition $(a^{(i)})^t u^{(i)} < 0$. When the candidate point $u^{(i+1)}$ does not belong to the safe region S, it is necessary to perform the correction $\bar{u}(u^{(i+1)})$ to bring it to the safe region (with a small enough tolerance tol).


For this purpose, optimization methods can be used, such as, for example, those presented in (Souza de Cursi et al., 2004). In general, FORM converges quickly. The reader will find in the literature many works on FORM, with implementations available on the net. Under R, the package mistral proposes a pre-implemented FORM. Below, we illustrate this method with a simple implementation as follows:
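Since the original listing is not reproduced here, a minimal sketch along the lines of the algorithm above could read as follows (the function names form, h and gradh, as well as the Newton-type correction step, are our own choices):

```r
# Minimal FORM sketch (ours). h(x, u) is the limit state function in the
# centered space and gradh(x, u) its gradient with respect to u.
form <- function(x, h, gradh, u0 = rep(0, length(x)),
                 tol = 1e-6, itmax = 100) {
  # correction of a tentative point: Newton-type steps along the gradient
  # until |h(x, u)| < tol
  correct <- function(u) {
    for (k in 1:100) {
      hu <- h(x, u)
      if (abs(hu) < tol) break
      a <- gradh(x, u)
      u <- u - hu * a / sum(a * a)
    }
    u
  }
  u <- correct(u0)
  beta <- sqrt(sum(u * u))
  for (i in 1:itmax) {
    a <- gradh(x, u)                       # hyperplane coefficients
    b <- h(x, u) - sum(a * u)
    unew <- (-b / sum(a * a)) * a          # = beta * n, with n = a / ||a||
    if (abs(h(x, unew)) >= tol) unew <- correct(unew)
    betanew <- sqrt(sum(unew * unew))
    if (abs(betanew - beta) < tol) { u <- unew; beta <- betanew; break }
    u <- unew; beta <- betanew
  }
  list(ustar = u, beta = beta)
}
```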

Notice that this is not an optimal implementation and that you can find on the Internet sophisticated software implementing FORM, but we focus on other aspects, more pedagogical and coherent with the spirit of this book.


Example 9.21 Let us consider the case where x = (0, 0) and $g(x) = x_2 - x_1^2 - x_1 - 3$. This problem was previously solved (Example 9.30). We run the program as follows:
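With the hypothetical form sketch above, the corresponding definitions and call could read:

```r
# Definitions for Example 9.21: g as in the text, h(x, u) = g(x + u).
g     <- function(X) X[2] - X[1]^2 - X[1] - 3
h     <- function(x, u) g(x + u)
gradh <- function(x, u) c(-2 * (x[1] + u[1]) - 1, 1)   # gradient of h in u

sol <- form(x = c(0, 0), h = h, gradh = gradh)
sol$ustar
sol$beta
```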

The result is

In Example 9.30, we got $u^* = (-0.4232162, 2.7558958)$, $x^* = (0.4232162, 2.7558958)$, $\beta = 2.788203$.


Example 9.22 Multiple failure conditions can be treated by the simplified approach previously introduced in Sect. 9.3 and exploited in the sequel. To do this, we need to run FORM for each failure equation. Let us illustrate the procedure by using the same example considered in the preceding: $x \in \mathbb{R}^2$ and three failure conditions, corresponding to $g_1(x) = x_2 - x_1^2 - x_1 - 3$, $g_2(x) = x_1 - x_2^2 - x_2 - 2$, $g_3(x) = x_1 + x_1 x_2 - 1$. We determine three solutions sol1, sol2, sol3, corresponding to each $g_i$, $i = 1, 2, 3$. We obtain:

Consequently, β = 1.804913. For the point (1, -1), we obtain β = 0.8946934.
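A possible way of organizing this computation with the hypothetical form sketch above (the point x = (0, 0) is assumed; the combination of the three indices according to the simplified approach of Sect. 9.3 is not reproduced here):

```r
# Run the hypothetical FORM sketch for each failure condition at x = (0, 0).
x  <- c(0, 0)
gs <- list(function(X) X[2] - X[1]^2 - X[1] - 3,
           function(X) X[1] - X[2]^2 - X[2] - 2,
           function(X) X[1] + X[1] * X[2] - 1)
grads <- list(function(X) c(-2 * X[1] - 1, 1),
              function(X) c(1, -2 * X[2] - 1),
              function(X) c(1 + X[2], X[1]))
sols <- lapply(1:3, function(i)
  form(x, h = function(x, u) gs[[i]](x + u),
       gradh = function(x, u) grads[[i]](x + u)))
sapply(sols, function(s) s$beta)   # the three individual reliability indices
```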

Exercises
1. Let $x \in \mathbb{R}^2$ and $g(x) = x_2 - x_1^2 - x_1 - 3$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2$. Use FORM to evaluate the reliability index of the points (1,0), (0,1).
2. Let $x \in \mathbb{R}^2$ and $g(x) = x_2 - x_1^3 - x_1^2 - 4$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2$. Use FORM to evaluate the reliability index of the points (0,0), (1,0), (0,1).
3. Let $x \in \mathbb{R}^2$ and $g(x) = x_1 x_2 - 2$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2$. Use FORM to evaluate the reliability index of the points (0,0), (1,0), (0,1).
4. Let $x \in \mathbb{R}^4$ and $g(x) = x_1 x_2 x_3 x_4 - 2$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2, 3, 4$. Use FORM to evaluate the reliability index of the points (0,0,0,0), (1,0,0,0), (0,1,0,0), (0,0,1,0) and (0,0,0,1).
5. Let $x \in \mathbb{R}^4$ and $g(x) = x_4 - x_1 x_2 x_3 - 3$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2, 3, 4$. Use FORM to evaluate the reliability index of the points (0,0,0,0), (1,0,0,0), (0,1,0,0), (0,0,1,0) and (0,0,0,1).

9.8.2 Second-Order Reliability Method (SORM)

The second method – less popular than FORM – is the second-order reliability method (SORM), which is a variant of FORM in which a quadratic approximation is used instead of the affine one. In SORM, the limit state curve is approximated by a quadratic equation and, at each step, we must solve an optimization problem involving a quadratic objective function and quadratic constraints – such a problem is called a quadratically constrained quadratic problem (QCQP). Analogously to FORM, the reader will find many works and implementations of SORM. We give below a simple implementation intended for a pedagogical aim. The algorithm of the method reads as

Algorithm of the Method SORM
Input: the point x, a starting point $u^{(0)} \in S$, stopping conditions (for instance, maximal iteration number, minimal step at each iteration); programs evaluating $h(x, u)$, $\nabla_u h(x, u)$ and the Hessian $\nabla^2_u h(x, u)$; a program for the correction of a tentative design point u by determining a point $\bar{u}(u)$ such that $|h(x, \bar{u}(u))| < tol$, where tol is a predefined tolerance.
Output: estimations of $u^*(x)$ and $\beta(x)$.
Initialization: If the initial point does not verify $|h(x, u^{(0)})| < tol$, then correct it: $u^{(0)} \leftarrow \bar{u}(u^{(0)})$. Initialize $\beta^{(0)} = \|u^{(0)}\|$. Set the iteration number to zero: $i \leftarrow 0$.
1. Approximate the limit state curve by a quadratic form $Q^{(i)}$: $q^{(i)}(u) = u^t A^{(i)} u/2 + u^t B^{(i)} + C^{(i)}(x)$. For instance, use a Taylor-MacLaurin expansion: $A^{(i)} = \nabla^2_u h(x, u^{(i)})$, $B^{(i)} = \nabla_u h(x, u^{(i)}) - A^{(i)} u^{(i)}$, $C^{(i)}(x) = h(x, u^{(i)}) - (B^{(i)})^t u^{(i)} - (u^{(i)})^t A^{(i)} u^{(i)}/2$.
2. Find the solution of the QCQP: $u^{(i+1)} = \arg\min\{u^t u : q^{(i)}(u) \ge 0\}$.
3. If $|h(x, u^{(i+1)})| \ge tol$ then $u^{(i+1)} \leftarrow \bar{u}(u^{(i+1)})$.
4. Set $\beta^{(i+1)} = \|u^{(i+1)}\|$.
5. Increment the iteration number $i \leftarrow i + 1$.
6. Test for the stopping conditions. If the iterations continue, then go to 1, else estimate $\beta(x) \approx \beta^{(i)}$, $u^*(x) \approx u^{(i)}$.


The implementation used is the following:
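Since the original listing is not reproduced here, a minimal sketch could read as follows (ours; the QCQP of step 2 is solved with the general-purpose augmented Lagrangian solver of the package alabama, which is an assumption; the book's own code may proceed differently):

```r
# Minimal SORM sketch (ours). The QCQP of step 2 is solved with auglag()
# from the package alabama (assumed available); hessh(x, u) returns the
# Hessian of h with respect to u.
library(alabama)

sorm <- function(x, h, gradh, hessh, u0 = rep(0, length(x)),
                 tol = 1e-6, itmax = 50) {
  correct <- function(u) {              # bring u back onto the limit state curve
    for (k in 1:100) {
      hu <- h(x, u)
      if (abs(hu) < tol) break
      a <- gradh(x, u)
      u <- u - hu * a / sum(a * a)
    }
    u
  }
  u <- correct(u0)
  beta <- sqrt(sum(u * u))
  for (i in 1:itmax) {
    A <- hessh(x, u)
    B <- gradh(x, u) - as.vector(A %*% u)
    C <- h(x, u) - sum(B * u) - 0.5 * sum(u * as.vector(A %*% u))
    q <- function(v) 0.5 * sum(v * as.vector(A %*% v)) + sum(v * B) + C
    sol <- auglag(par = u, fn = function(v) sum(v * v),
                  hin = function(v) q(v),               # constraint q(v) >= 0
                  control.outer = list(trace = FALSE))
    unew <- sol$par
    if (abs(h(x, unew)) >= tol) unew <- correct(unew)
    betanew <- sqrt(sum(unew * unew))
    if (abs(betanew - beta) < tol) { u <- unew; beta <- betanew; break }
    u <- unew; beta <- betanew
  }
  list(ustar = u, beta = beta)
}
```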

Example 9.23 Let us consider the case where x = (0, 0) and $g(x) = x_2 - x_1^2 - x_1 - 3$. This problem was previously solved (Example 9.30). In this case,
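a minimal sketch of the corresponding definitions and of the call to the hypothetical sorm function above could read as follows (ours; the numerical values obtained with this sketch need not coincide with those of the original listing):

```r
# Definitions for Example 9.23: the Hessian of h with respect to u is constant.
g     <- function(X) X[2] - X[1]^2 - X[1] - 3
h     <- function(x, u) g(x + u)
gradh <- function(x, u) c(-2 * (x[1] + u[1]) - 1, 1)
hessh <- function(x, u) matrix(c(-2, 0, 0, 0), 2, 2)

sol <- sorm(x = c(0, 0), h = h, gradh = gradh, hessh = hessh)
sol$ustar
sol$beta
```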



The result is

Thus, the estimation is β = 2.857062.

Example 9.24 Multiple failure conditions can be treated by the simplified approach previously presented. Analogously to FORM, we need to run SORM with different parameters. Let us illustrate the procedure by using the same example considered in the preceding: $x \in \mathbb{R}^2$ and three failure conditions, corresponding to $g_1(x) = x_2 - x_1^2 - x_1 - 3$, $g_2(x) = x_1 - x_2^2 - x_2 - 2$, $g_3(x) = x_1 + x_1 x_2 - 1$. We determine three solutions sol1, sol2, sol3, corresponding to each $g_i$, $i = 1, 2, 3$. We obtain:

Consequently, the estimation is β = 1.80304. For the point (1, -1), we obtain β = 1.034464.

Exercises
1. Let $x \in \mathbb{R}^2$ and $g(x) = x_2 - x_1^2 - x_1 - 3$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2$. Use SORM to evaluate the reliability index of the points (1,0), (0,1).
2. Let $x \in \mathbb{R}^2$ and $g(x) = x_2 - x_1^3 - x_1^2 - 4$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2$. Use SORM to evaluate the reliability index of the points (0,0), (1,0), (0,1).


3. Let $x \in \mathbb{R}^2$ and $g(x) = x_1 x_2 - 2$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2$. Use SORM to evaluate the reliability index of the points (0,0), (1,0), (0,1).
4. Let $x \in \mathbb{R}^4$ and $g(x) = x_1 x_2 x_3 x_4 - 2$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2, 3, 4$. Use SORM to evaluate the reliability index of the points (0,0,0,0), (1,0,0,0), (0,1,0,0), (0,0,1,0) and (0,0,0,1).
5. Let $x \in \mathbb{R}^4$ and $g(x) = x_4 - x_1 x_2 x_3 - 3$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2, 3, 4$. Use SORM to evaluate the reliability index of the points (0,0,0,0), (1,0,0,0), (0,1,0,0), (0,0,1,0) and (0,0,0,1).

9.9 Reliability Based Design Optimization

As we have already observed, optimization problems may involve uncertainty. For instance, when considering

$$x^* = \arg\min\{f(x) : x \in C \subset \mathbb{R}^n\} \qquad (9.30)$$

both the objective f and the admissible set C may contain uncertainty, arising from different variability sources. Thus, variability and uncertainties should be considered in the optimization procedure to generate a robust result. Indeed,

• The effects of variations of the parameters involved in the objective and constraints must be analyzed, to prevent the situation where small variations of some parameters cause significant variations of the result (loss of robustness);
• The impacts of implementation errors of the solution found must be considered, to avoid situations where small errors in the implementation produce substantial variations in the real result (variability and uncertainty);
• The effects of fluctuations of the external environment must be evaluated to ensure minimal safety conditions (unreliability).

To avoid these difficulties, it is necessary to introduce uncertainties in the design process. A possible aim is to get a complete characterization of the variability of the solutions: as shown in the preceding, UQ furnishes tools for such a characterization. However, in the case where we are interested in the determination of robust solutions without the full determination of the variability of the solution, alternatives exist in the literature. For instance, we can look for solutions having a small sensitivity to the variations of some parameters (Robust Optimization): in such a situation, we must introduce an index of robustness β(x), analogous to the reliability index, which gives to each possible solution x a value measuring its robustness. Assuming that robustness increases with β, we may either look for the solution that minimizes the objective on the admissible set and ensures a minimal safety:


$$x^* = \arg\min\{f(x) : x \in C \subset \mathbb{R}^n,\; \beta(x) \ge \beta_{min}\} \qquad (9.31)$$

or determine the solution that ensures the maximal safety for a maximal value of the objective:

$$x^* = \arg\max\{\beta(x) : x \in C \subset \mathbb{R}^n,\; f(x) \le f_{max}\}. \qquad (9.32)$$

In these formulations, the robustness index may be replaced by the reliability index: in this case, we refer to these optimization problems as Reliability Based Design Optimization (RBDO) problems. More generally, we can look for the solution that controls the probability of failure:

$$x^* = \arg\min\{f(x) : x \in C \subset \mathbb{R}^n,\; P_f = P(\mathcal{F}) \le p_{max}\} \qquad (9.33)$$

or

$$x^* = \arg\min\{P_f = P(\mathcal{F}) : x \in C \subset \mathbb{R}^n,\; f(x) \le f_{max}\}. \qquad (9.34)$$

9.9.1 The Bilevel or Double Loop Approach for a Desired β

The formulation (9.31) leads to a bi-level optimization problem, since it deals with nested optimization problems: on the one hand, the determination of β(x) and, on the other hand, the minimization of f(x). Indeed, typical optimization iterations start from an initial guess $x^{(0)}$ to generate a sequence of points $x^{(1)}, x^{(2)}, \ldots$. At each step, the generation of $x^{(k+1)}$ requires the evaluation of β(x) for the candidate points x – so, it requires the solution of an internal optimization problem. A typical algorithm is given below:

Bi-Level (or Double Loop) Algorithm for RBDO (Desired β)
Input: the starting point $x^{(0)}$, stopping conditions (for instance, maximal iteration number, minimal step at each iteration), a program determining $u^*(x)$, a program for the correction of x for a given $u^*$.
Output: estimations of $x^*$, $u^*(x^*)$ and $\beta(x^*)$.
Initialization: Set the iteration number to zero: $k \leftarrow 0$.
1. Inferior level (interior loop): find $(u^*)^{(k)} = u^*(x^{(k)})$. Set $\beta^{(k)} = \|(u^*)^{(k)}\|$.
2. Superior level (external loop): determine $x^{(k+1)}$ as a correction of $x^{(k)}$ using $(u^*)^{(k)}$.


3. Increment the iteration number $k \leftarrow k + 1$.
4. Test for the stopping conditions. If the iterations continue, then go to 1, else estimate $x^* \approx x^{(k)}$, $u^*(x^*) \approx u^*(x^{(k)})$, $\beta(x^*) \approx \|u^*(x^{(k)})\|$.

Examples of methods for the correction of $x^{(k)}$ can be found in the literature. We presented some methods in Sects. 9.2 and 9.8.1 (see also Souza de Cursi et al., 2004). The determination of $(u^*)^{(k)}$ is usually called the reliability level, while the determination of $x^{(k+1)}$ is called the optimization level. The numerical solution by iterative methods may require a large number of calls to each of these levels. Thus, simplification methods have been proposed in the literature – for example, the use of FORM at the reliability level. A popular approach consists in generating $x^{(k+1)}$ by determining successively: at the internal loop,

$$u^{(k)} = \arg\min\{\|u\| : h(x^{(k)}, u) \ge 0 \text{ and } \|u\| \ge \beta_{min}\}$$

and, at the external loop (correction of $x^{(k)}$):

$$x^{(k+1)} = \arg\min\{f(x) : h(x, u^{(k)}) \le 0\}.$$

The main difficulty in the implementation of the double loop is the fact that two optimization problems are intertwined: the reliability analysis and the optimization of the objective function. Nested calls to an optimizer are a difficult task. The best solution consists in using an optimizer for only one of these problems and another method for the other one. For instance, the reliability analysis can be performed by FORM.

Example 9.25 Let us consider the simple objective function $f(x) = x_1^2 + (x_2 - 1)^2$ and the failure region defined by $g(x) = x_1 + x_2 - 3$. We look for a RBDO solution with $\beta(x) \ge \beta_{min}$. The global minimum of f is $x_g = (0, 1)$, for which $\beta(x_g) = \sqrt{2}$. For $\beta_{min} \ge 2$, $x_g$ is not a solution of the RBDO problem. Define the parameters, the objective function, the limit state equation and the reliability index: since the limit state curve is affine, we can evaluate the reliability index by Eq. (9.12).
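A minimal sketch of these definitions (ours, not the book's listing): for this affine limit state with $u \sim N(0, I)$, Eq. (9.12) reduces to $\beta(x) = -g(x)/\|\nabla g\|$ in the safe region, which is consistent with $\beta(x_g) = \sqrt{2}$ above.

```r
# Definitions for Example 9.25 (our sketch). For this affine limit state,
# beta(x) = -g(x) / ||grad g|| in the safe region g(x) < 0.
fobj    <- function(x) x[1]^2 + (x[2] - 1)^2
g       <- function(x) x[1] + x[2] - 3
a       <- c(1, 1)                          # gradient of the affine g
betafun <- function(x) -g(x) / sqrt(sum(a * a))
betafun(c(0, 1))                            # sqrt(2), as stated above
```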



Minimize the objective function under the restrictions: the solution must belong to the safe region (g(x) ≤ 0) and β(x) ≥ βmin.
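A sketch of this constrained minimization, using the augmented Lagrangian solver of the package alabama (an assumption; the book's own listing may use a different solver):

```r
# Minimize f(x) subject to g(x) <= 0 and beta(x) >= betamin.
library(alabama)

rbdo_beta <- function(betamin, x0 = c(0, 0)) {
  auglag(par = x0, fn = fobj,
         hin = function(x) c(-g(x), betafun(x) - betamin),  # constraints >= 0
         control.outer = list(trace = FALSE))$par
}
rbdo_beta(betamin = 3)
```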

We obtain the solution

Make a loop to find solutions for several values of βmin
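For instance, with the rbdo_beta sketch above and the range seq(1, 5, by = 0.5) mentioned just below:

```r
betas <- seq(1, 5, by = 0.5)
sols  <- sapply(betas, rbdo_beta)          # one column per value of betamin
rbind(betamin = betas, x1 = sols[1, ], x2 = sols[2, ])
```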

We obtain:

Try it in the reverse order – just change seq(1,5,by=0.5) into seq(5,1,by=-0.5) – the results are close to the preceding:


Example 9.26 Let us consider again the simple objective function $f(x) = x_1^2 + (x_2 - 1)^2$ and the failure region defined by $g(x) = x_2 - x_1^2 - x_1 - 3$. We look for a RBDO solution with β(x) ≥ βmin. In this case, the evaluation of β(x) must be made by a different method. For instance, we can use Eq. (9.11) and the function findsol introduced in Sect. 9.5.1. In this case, findsol furnishes $\beta(x_g) \approx 1.8$ for $x_g = (0, 1)$.

To do this, we modify the function hlind as shown at right. Notice that we need to define gradg, which evaluates ∇g

Minimizing the objective function under the same restrictions as in the preceding example furnishes the results at right, for βmin = 5.

Here,

In the reverse order:


Example 9.27 Let us consider again the simple objective function $f(x) = x_1^2 + (x_2 - 1)^2$ and the failure region defined by $g(x) = x_2 - x_1^3 - x_1^2 - 4$. In this case, the global minimum $x_g = (0, 1)$ is admissible and $\beta(x_g) \approx 4.5$ (value generated by findsol). Indeed, all the results are close to $x_g$:

In the reverse order, the results are analogous:

Exercises
1. Let $x \in \mathbb{R}^2$, $f(x) = x_1^2 + x_2^2$ and $g(x) = x_1 x_2 - 1$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2$. Find a RBDO solution with β = 3, 4, 5, 6.
2. Let $x \in \mathbb{R}^2$, $f(x) = x_1^2 + x_2^2$ and $g(x) = x_1 x_2^2 - 1$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2$. Find a RBDO solution with β = 3, 4, 5, 6.
3. Let $x \in \mathbb{R}^2$, $f(x) = (x_1 + x_2 - 6)^2 + (x_1 - x_2 - 10)^2$ and $g(x) = x_1 - 3x_2^2 + 2$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2$. Find a RBDO solution with β = 3, 4, 5, 6.
4. Let $x \in \mathbb{R}^2$, $f(x) = x_1^2 + x_2^2$ and $g(x) = x_1^2 + 8x_2 - 2$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2$. Find a RBDO solution with β = 3, 4, 5, 6.
5. Let $x \in \mathbb{R}^2$, $f(x) = (x_1 + x_2 - 6)^2 + (x_1 - x_2 - 10)^2$ and $g(x) = x_1 + x_2^3 - 2$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2$. Find a RBDO solution with β = 3, 4, 5, 6.

9.9.2 The Bilevel or Double Loop Approach for a Desired Objective

As observed, an alternative formulation is furnished by (9.34): we can exchange the roles of β and f, to look for the maximal β for a prescribed maximal value of f. In this case, we run the solver to maximize β under some restrictions.

Example 9.28 Let us consider the simple objective function $f(x) = x_1^2 + (x_2 - 1)^2$ and the failure region defined by $g(x) = x_1 + x_2 - 3$. We look for a RBDO solution that maximizes β(x) with the maximal value f(x) ≤ fmax. Define the parameters, the objective function, the limit state equation and the reliability index: as previously, since the limit state curve is affine, we can evaluate the reliability index by Eq. (9.12). Maximize the reliability index under the restrictions: the solution must belong to the safe region (g(x) ≤ 0) and f(x) ≤ fmax.
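A sketch of this step (ours), reusing the definitions of Example 9.25 and the assumed alabama solver, and maximizing β by minimizing −β:

```r
# Maximize beta(x) subject to g(x) <= 0 and f(x) <= fmax (we minimize -beta).
rbdo_f <- function(fmax, x0 = c(0, 0)) {
  auglag(par = x0, fn = function(x) -betafun(x),
         hin = function(x) c(-g(x), fmax - fobj(x)),        # constraints >= 0
         control.outer = list(trace = FALSE))$par
}
rbdo_f(fmax = 2)
```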

Make a loop to get several results for different values of fmax.
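For instance (our sketch; the range of values of fmax is an assumption):

```r
fmaxs <- seq(1, 5, by = 0.5)               # assumed range of fmax values
sols  <- sapply(fmaxs, rbdo_f)
rbind(fmax = fmaxs, x1 = sols[1, ], x2 = sols[2, ])
```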


We obtain:

In the reverse order, the results are analogous:

Example 9.29 Let us consider again the simple objective function $f(x) = x_1^2 + (x_2 - 1)^2$ and the failure region defined by $g(x) = x_2 - x_1^2 - x_1 - 3$. We look for a RBDO solution that maximizes β(x) with the maximal value f(x) ≤ fmax. As previously, we use Eq. (9.11) and the function findsol introduced in Sect. 9.5.1.



In the reverse order:

Example 9.30 Let us consider again the simple objective function $f(x) = x_1^2 + (x_2 - 1)^2$ and the failure region defined by $g(x) = x_2 - x_1^3 - x_1^2 - 4$. Now:

In the reverse order:

Notice that there are some inconsistencies in the results – this is due to the difficulty in the optimization, which involves non-convexities.


Exercises
1. Let $x \in \mathbb{R}^2$, $f(x) = x_1^2 + x_2^2$ and $g(x) = x_1 x_2 - 1$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2$. Find a solution that maximizes β for f(x) ≤ 2, f(x) ≤ 3, f(x) ≤ 4, f(x) ≤ 5.
2. Let $x \in \mathbb{R}^2$, $f(x) = x_1^2 + x_2^2$ and $g(x) = x_1 x_2^2 - 1$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2$. Find a solution that maximizes β for f(x) ≤ 2, f(x) ≤ 3, f(x) ≤ 4, f(x) ≤ 5.
3. Let $x \in \mathbb{R}^2$, $f(x) = (x_1 + x_2 - 6)^2 + (x_1 - x_2 - 10)^2$ and $g(x) = x_1 - 3x_2^2 + 2$. Find a solution that maximizes β for f(x) ≤ 2, f(x) ≤ 3, f(x) ≤ 4, f(x) ≤ 5.
4. Let $x \in \mathbb{R}^2$, $f(x) = x_1^2 + x_2^2$ and $g(x) = x_1^2 + 8x_2 - 2$. Let $X_i = x_i + u_i$ with $u_i \sim N(0, 1)$, $i = 1, 2$. Find a solution that maximizes β for f(x) ≤ 2, f(x) ≤ 3, f(x) ≤ 4, f(x) ≤ 5.
5. Let $x \in \mathbb{R}^2$, $f(x) = (x_1 + x_2 - 6)^2 + (x_1 - x_2 - 10)^2$ and $g(x) = x_1 + x_2^3 - 2$. Find a solution that maximizes β for f(x) ≤ 2, f(x) ≤ 3, f(x) ≤ 4, f(x) ≤ 5.

Bibliography

Akaike, H. (1974, December). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716–723. https://doi.org/10.1109/TAC.1974.1100705 Apesteguia, J., & Palacios-Huerta, I. (2010, December). Psychological pressure in competitive environments: Evidence from a randomized natural experiment. American Economic Review, 100(5), 2548–2564. Arrondel, L., Duhautois, R., & Laslier, J.-F. (2019). Decision under psychological pressure: The shooter’s anxiety at the penalty kick. Journal of Economic Psychology, 70(C), 22–35. Arrous, J. (2000). Algèbre linéaire et Science économique, un cas exemplaire. Retrieved 1 13, 2021, from Research Gate page of Jean Arrous: https://www.researchgate.net/publication/24230152 5_ALGEBRE_LINEAIRE_ET_SCIENCE_ECONOMIQUE_UN_CAS_EXEMPLAIRE Autrique, L., & Souza de Cursi, E. (1997). On stochastic modification for global optimization problems: An efficient implementation for the control of the vulcanization process. International Journal of Control, 67(1), 1–22. Azar, O. H., & Bar-Eli, M. (2011). Do soccer players play the mixed-strategy Nash equilibrium? Applied Economics, 43(25), 3591–3601. https://doi.org/10.1080/00036841003670747 Bachelier, L. (1900). Théorie de la spéculation. Annales scientifiques de l’École Normale Supérieure, série, 3(17), 21–86. https://doi.org/10.24033/asens.476 Bachelier, L. (1901). Théorie mathématique du jeu. Annales scientifiques de l’E.N.S., 18(3eme série), pp. 143–209. Bassi, M., Souza de Cursi, E., Pagnacco, E., & Ellaia, R. (2018). Statistics of the Pareto front in Multi-objective Optimization under Uncertainties. Latin American Journal of Solids and Structures, 15(11), e130. https://doi.org/10.1590/1679-78255018 Black, F., & Scholes, M. (1973). The pricing of options and corporate liabilities. Journal of Political Economy, 81(3), 637–654. https://doi.org/10.1086/260062 Blankenship, G., & Baras, J. (1981). Accurate evaluation of stochastic Wiener integrals with applications to scattering in random media and to nonlinear filtering. SIAM Journal of Applied Mathematics, 41(3), 518–552. https://doi.org/10.1137/0141043 Boczon, M., & Wilson, A. J. (2018). Goals, constraints, and public assignment: A field study of the UEFA champions league. University of Pittsburgh. Retrieved June 2, 2020, from https://www. econ.pitt.edu/sites/default/files/working_papers/Working%20Paper.18.16.pdf Booth, T. (1982). Regional Monte Carlo Solution of Elliptic Partial Differential Equations. Journal of Computational Physics, 281–290. https://doi.org/10.1016/0021-9991(82)90079-1


Borel, E. (1921, 7 11). La théorie du jeu et les équations intégrales à noyau symétrique. Comptes rendus hebdomadaires des séances de l’Académie des Sciences de Paris, 173, 1304–1308. Bouhadi, M., Ellaia, R., & Souza de Cursi, E. (2004a). Global optimization under nonlinear restrictions by using stochastic perturbations of the projected gradient. In C. A. Floudas & P. N. Pardalos (Eds.), Frontiers in global optimization (pp. 541–561). Kluwer Academic Press. https://doi.org/10.1007/978-1-4613-0251-3_29 Bouhadi, M., Ellaia, R., & Souza de Cursi, E. (2004b). Stochastic perturbation methods for affine restrictions. In C. A. Floudas & P. Pardalos (Eds.), Advances in convex analysis and global optimization (pp. 487–499). Kluwer Academic Press. Boukhetala, K., & Guidoum, A. (2011). Sim.DiffProc: A package for simulation of diffusion processes in R. HAL. Retrieved 1 04, 2021, from https://hal.archives-ouvertes.fr/hal-0062 9841/en/ Box, G. E. (1958). A note on the generation of random normal deviates. Annals of Mathematical Statistics, 29(2), 610–611. Brown, R. (1828). A brief account of microscopical observations made in the months of June, July, and August, 1827, on the particles contained in the pollen of plants; and on the general existence of active molecules in organic and inorganic bodies. The Philosophical Magazine, 4(21), 161–173. https://doi.org/10.1080/14786442808674769 Chen, F. (2008, December 18). Random music generator. Retrieved from OpenStax CNX: http:// cnx.org/contents/62219fba-96df-418a-b9aa-14fddd7c30fa@2 Choi, B. (1992). ARMA model identification. Springer. https://doi.org/10.1007/978-1-4613-9745-8 Choleski, A.-L. (1910). Sur la résolution numérique des systèmes d’équations linéaires. manuscript. Chorin, A. J. (1973). Accurate evaluation of Wiener integrals. Mathematics of Computation, 27(121), 1–15. https://doi.org/10.2307/2005242 Coloma, G. (2007). Penalty kicks in soccer: An alternative methodology for testing mixed-strategy equilibria. Journal of Sports Economics, 8(5), 530–545. Coloma, G. (2012). The penalty-kick game under incomplete information. Universidad del CEMA. Cournot, A. (1838). Recherches Sur les principes mathématiques de la théorie des richesses. Hachette. Converse, P. D. (1949). New Laws of retail gravitation. Journal of Marketing, 14, 379–384. https:// doi.org/10.1177/002224295001400303 Cowles, A. (1933). Can stock market forecasters forecast? Econometrica, 1(3), 309–324. https:// doi.org/10.2307/1907042 Cox, J., Ingersoll, J., & Ross, S. (1985). A theory of the term structure of interest rates. Econometrica, 53(2), 385–407. https://doi.org/10.2307/1911242 Clark, C. (1951). Urban population densities. Journal of the Royal Statistical Society: Series A (General), 114(4), 490–496. https://doi.org/10.2307/2981088 Cramer, G. (1750). Introduction à l’Analyse des Lignes Courbes Algébriques. https://doi.org/10. 3931/e-rara-4048 Cressman, R., & Tao, Y. (2014). The replicator equation and other game dynamics. Proceedings of the National Academy of Sciences of the United States of America, 111(Supplement 3), 10810–10817. https://doi.org/10.1073/pnas.1400823111 Cui, T., & Li, S. (2020). Space fault tree theory and system reliability analysis. EDP Sciences. Dautray, R., & Lions, J.-L. (2012). Mathematical analysis and numerical methods for science and technology (Vol. 3). Springer. Dickey, D. A., & Fuller, W. A. (1979). Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association, 366a, 427–431. 
https://doi.org/ 10.1080/01621459.1979.10482531 Einstein, A. (1905). Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen. Annalen der Physik, 17, 549–560.


El Mouatasim, A., Ellaia, R., & Souza de Cursi, E. (2006). Random perturbation of the variable metric method for unconstrained nonsmooth nonconvex optimization. Applied Mathematics and Computer Science, 16(4), 463–474. El Mouatasim, A., Ellaia, R., Souza de Cursi, E. (2011). Random perturbation of the projected variable metric method for nonsmooth nonconvex optimization problems with linear constraints. International Journal of Applied Mathematics and Computer Science: 317–329. http://eudml.org/doc/208050 enseignements. (2020, 12 13). Retrieved from page personnelle de V. Lemaire. https://www.lpsm. paris/pageperso/lemaire/docs/stats-ENPC/series-chrono.pdf Elepov, B. S., & Mikhailov, G. A. (1969). Solution of the Dirichlet problem by a model of “walks on Spheres”. USSR Computational Mathematics and Mathematical Physics, 9(3), 194–204. https://doi.org/10.1016/0041-5553(69)90070-6 Ericson, C. A. (2011). Fault tree analysis primer. CreateSpace Incorporated. Fisk, D. L. (1963). Quasi-martingales and stochastic integrals. Michigan State University, Department of Statistics. Office of Naval Research. Retrieved 12 27, 2020, from https://apps.dtic.mil/ dtic/tr/fulltext/u2/414838.pdf Föllmer, H. (1981). Calcul d’Ito sans probabilités. Séminaire de probabilités (Strasbourg), 15, 143–150. Retrieved from http://www.numdam.org/item?id=SPS_1981__15__143_0 Furness, K. P. (1965). Time function iteration. Traffic Engineering and Control, 458–460. Garicano, L., Palacios-Huerta, I., & Prendergast, C. (2005). Favoritism under social pressure. The Review of Economics and Statistics, 87(2), 208–216. Gauss, C. F. (1811). Disquisitio de elementis ellipticis Palladis ex oppositionibus annorum 1803, 1804, 1805, 1807, 1808, 1809. Commentationes societatis regiæ scientiarum Gottingensis recentiores - commentationes mathematicae, 1–26. Graham, C., & Talay, D. (2013). Stochastic simulation and Monte Carlo methods. Springer. https:// doi.org/10.1007/978-3-642-39363-1 Gonçalves, M. B., & Souza de Cursi, E. (2001). Parameter estimation in a trip distribution model by random. Transportation Research part B, 35(2), 137–161. https://doi.org/10.1016/S0191-2615 (99)00043-0 Hannan, E. J., & Quinn, B. G. (1979). The determination of the order of an autoregression. Journal of the Royal Statistical Society Series B (Methodological), 41(2), 190–195. Retrieved from http://www.jstor.org/stable/2985032 Hansen, W. G. (1959). How accessibility shapes Land Use. Journal of the American Institute of Planners, 25(2), 73–76. https://doi.org/10.1080/01944365908978307 Holdorf Lopez, R., Souza de Cursi, J. E., & Lemosse, D. (2011). Approximating the probability density function of the optimal point of an optimization problem. Engineering Optimization, 43(3), 281–303. https://doi.org/10.1080/0305215X.2010.489607 Hoyt, H. (1939). The Structure and Growth of Residential Neighborhoods in American Cities. Washington: U.S. Government Printing Office. Huff, D. L. (1964). Defining and estimating a trading area. Journal of Marketing, 28(3), 34–38. https://doi.org/10.2307/1249154 Hwang, C.-O., & Mascagni, M. (2001). Efficient modified “walk on spheres” algorithm for the linearized Poisson-Bolzmann. Applied Physics Letters 78, 787.https://doi.org/10.1063/1. 1345817 Hwang, C.-O., Mascagni, M., & Given, J. A. (2003). A Feynman–Kac path-integral implementation for Poisson’s equation using an h-conditioned Green’s function. Mathematics and Computers in Simulation, 62(3-6), 347–355. https://doi.org/10.1016/S0378-4754(02)00224-0 Itô, K. (1944). 
Stochastic integral. Proceedings of the Imperial Academy, 20(8), 519–524. https:// doi.org/10.3792/pia/1195572786 Itô, K. (1946). On a stochastic integral equation. Proceedings. Japan Academy, 22(2), 32–35. https://doi.org/10.3792/pja/1195572371 Itô, K. (1950). Stochastic differential equations in a differentiable manifold. Nagoya Mathematical Journal, 1, 35–47. https://doi.org/10.1017/S0027763000022819


Itô, K. (1951). On a formula concerning stochastic differentials. Nagoya Mathematical Journal, 3, 55–65. Retrieved from https://projecteuclid.org/euclid.nmj/1118799221 Jafari, M. A., & Abbasian, S. (2017). The moments for solution of the Cox-Ingersoll-Ross interest rate model. Journal of Finance and Economics, 5(1), 34–37. https://doi.org/10.12691/jfe-5-1-4 Iacus, S. M. (2008). Simulation and Inference for Stochastic Differential Equations. New York, NY, USA: Springer. https://doi.org/10.1007/978-0-387-75839-8 Jordan, W. (1962). Handbook of geodesy (Vol. 1). (M. W. Carta, Trans.) Isard, W. (1956). Location and Space-economy; a General Theory Relating to Industrial Location, Market Areas, Land Use, Trade, and Urban Structure. Cambridge: MIT Press and Wiley. Kharroubi, I., & Pham, H. (2015). Feynman–Kac representation for Hamilton–Jacobi–Bellman IPDE. The Annals of Probability, 43(4), 1823–1865. https://doi.org/10.1214/14-AOP920 Kim, N. J. (1968). Linear programming with random requirements. Utah Water Research Laboratory. Retrieved February 7, 2021, from https://digitalcommons.usu.edu/water_rep/272?utm_ source=digitalcommons.usu.edu%2Fwater_rep%2F272&utm_medium=PDF&utm_cam paign=PDFCoverPages Kloeden, P., & Platen, E. (1989). A survey of numerical methods for stochastic differential equations. Stochastic Hydrology and Hydraulics, 3, 155–178. https://doi.org/10.1007/ BF01543857 Kolmogorov, A. N. (1950). Foundations of the theory of probability. Chelsea Publishing Company. Kress, R. (1998). Numerical analysis. Springer. https://doi.org/10.1007/978-1-4612-0599-9 Kuhn, H. W. (1957). Extensive games and the problem and information. In H. W. Kuhn & A. W. Tucker (Eds.), Contributions to the theory of games, annals of mathematical studies (pp. 193–216). Princeton University Press. Kwiatkowski, D., Phillips, P. C., Schmidt, P., & Shin, Y. (1992). Testing the null hypothesis of stationarity against the alternative of a unit root: How sure are we that economic time series have a unit root? Journal of Econometrics, 54(1–3), 159–178. https://doi.org/10.1016/0304-4076(92) 90104-Y Langevin, P. (1908). Sur la théorie du mouvement brownien. Comptes-Rendus de l’Académie des Sciences, 146, 530–532. Leontieff, W. W. (1936). Quantitative input and output relations in the economic systems of the United States. The Review of Economics and Statistics, 18(3), 105–125. https://doi.org/10.2307/ 1927837 Leontieff, W. W. (1937). Interrelation of prices, output, savings, and investment. The Review of Economics and Statistics, (3), 109–132. https://doi.org/10.2307/1927343 Leontieff, W. W. (1986). Input-output economics. Oxford University Press. Levitt, S., Chiappori, P., & Groseclose, T. (2002). Testing mixed-strategy equilibria when players are heterogeneous: The case of penalty kicks in soccer. American Economic Review, 92, 1138–1151. https://doi.org/10.1257/00028280260344678 Lévy, P. (1922, November 13). Sur la détermination des lois de probabilité par leurs fonctions caractéristiques. Comptes Rendus de l’Académie des Sciences de Paris, pp. 854–856. Lewontin, R. C. (1961, July). Evolution and the theory of games. Journal of Theoretical Biology, 1(3), 382–403. Leybourne, S. J., & McCabe, B. P. (1994). A consistent test for a unit root. Journal of Business and Economic Statistics, 12(2), 157–166. https://doi.org/10.1080/07350015.1994.10510004 Li, F., & Wu, T. (2007). An importance sampling based approach for reliability analysis. 2007 IEEE international conference on automation science and engineering (pp. 956–961). 
IEEE. https:// doi.org/10.1109/COASE.2007.4341815. Limnios, N. (2007). Fault trees. ISTE/Wiley. Lotka, A. J. (1910, March 1). Contribution to the theory of periodic reactions. The Journal of Physical Chemistry, 14(3), 271–274. https://doi.org/10.1021/j150111a004 Lotka, A. J. (1920, July 1). Analytical note on certain rhythmic relations in organic systems. Proceedings of the National Academy of Sciences of the USA, 6(7), 410–415. https://doi.org/ 10.1073/pnas.6.7.410


Maynard-Smith, J. (1982). Evolution and the theory of games. Cambridge University Press. Maynard-Smith, J., & Price, G. R. (1973, November 2). The logic of animal conflict. Nature, 246, 15–18. Merton, R. C. (1973). Theory of rational option pricing. The Bell Journal of Economics and Management Science, 4(1), 141–183. https://doi.org/10.2307/3003143 Milstein, G. N. (1973). Approximate integration of stochastic differential equations. Theory of Probability and Its Applications, 19(3), 557–562. https://doi.org/10.1137/1119062 Milstein, G., & Tretyakov, M. (2004). Stochastic Numerics for mathematical physics. Springer. https://doi.org/10.1007/978-3-662-10063-9 Milstein, G., & Tretyakov, M. (2012). Solving the Dirichlet problem for Navier–stokes equations by probabilistic approach. BIT Numerical Mathematics, 52, 141–153. https://doi.org/10.1007/ s10543-011-0347-z Milstein, G., & Tretyakov, M. (2013). Probabilistic methods for the incompressible Navier–stokes equations with space periodic conditions. Adv. in Appl. Probab., 45(3), 742–772. https://doi. org/10.1239/aap/1377868537 Milstein, G., & Tretyakov, M. (2020). Mean-Square approximation of Navier-stokes equations with additive noise in vorticity-velocity formulation. Numerical Mathematics: Theory, Methods and Applications, 14(1), 1–30. https://doi.org/10.4208/nmtma.OA-2020-0034 Morillon, J.-P. (1997). Numerical solutions of linear mixed boundary value problems using stochastic representations. International Journal for Numerical Methods in Engineering, 40, 387–405. https://doi.org/10.1002/(SICI)1097-0207(19970215)40:3%3C387::AID-NME69% 3E3.0.CO;2-D Muller, M. E. (1958). An inverse method for the generation of random Normal deviates on largescale computers. Mathematical Tables and Aids to Computation, 12, 167–174. Muller, M. E. (1959). A comparison of methods for generating Normal deviates on digital computers. Journal of the Association for Computing Machinery, 6, 376–383. Muller, M. E. (n.d.). Generation of normal deviates (Technical report no. 13. Statistical techniques research group). Princeton University Press. Nash, J. F., Jr. (1950a). Equilibrium points in n-person games. Proceedings of the National Academy of Sciences of the United States of America, 36(1), 48–49. Nash, J. F., Jr. (1950b, April). The bargaining problem. Econometrica, 18(2), 155–162. Nataf, A. (1962). Détermination des distributions de probabilités dont les marges sont données. Comptes rendus de l’Académie des Sciences Paris, 225, 42–43. Nowak, M. (2006). Evolutionary dynamics: Exploring the equations of life. Harvard University Press. Organisation for Economic Co-operation and Development. (2021, 1 13). Input-Output Tables (IOTs). Retrieved from OECD Homepage: https://www.oecd.org/sti/ind/inputoutputtables.htm Palacios-Huerta, I. (2003). Professionals play minimax. Review of Economic Studies, 70, 395–415. Palacios-Huerta, I. (2014). Beautiful game theory: How soccer can help economics. Princeton University Press. Palacios-Huerta, I., & Volij, O. (2008, January). Experientia Docet: Professionals play minimax in laboratory experiments. Econometrica, 76(1), 71–115. Paley, R., Wiener, N., & Zygmund, A. (1933). Notes on random functions. Mathematische Zeitschrift, 37, 647–668. https://doi.org/10.1007/BF01474606 Papaioannou, I., Papadimitriou, C., & Straub, D. (2016). Sequential importance sampling for structural reliability analysis. Structural Safety, 62, 66–75. https://doi.org/10.1016/j.strusafe. 2016.06.002 Pardoux, E., & Talay, D. (1985). 
Discretization and simulation of stochastic differential equations. Acta Applicandae Mathematicae, 3, 23–47. https://doi.org/10.1007/BF01438265 Pearson, K. (1905). The problem of random walk. Nature, 294, 342. Phillips, P. C., & Perron, P. (1988, June). Testing for a unit root in time series regression. Biometrika, 75(2), 335–346. https://doi.org/10.1093/biomet/75.2.335


Pogu, M., & Souza de Cursi, E. (1994). Global optimization by random perturbation of the gradient method with a fixed parameter. Journal of Global Optimization, 5(2), 159–180. Pourahmadi, M. (2001). Foundations of time series analysis and prediction theory. Wiley. Prakash, P., & Garg, R. (2014, September). Preferred side of the penalty kick. International Journal of Science, Technology and Management, 3(9), 7–12. Ravenstein, E. G. (1885, June). The Laws of Migration. Journal of the Statistical Society, 48(2), 167–227. https://doi.org/10.2307/2979181 Regnault, J. (1863). Calcul des chances et philosophie de la bourse. Castel, Mallet-Bachelier. Reilly, W. J. (1929). Methods for the Study of Retail Relationships. Austin: University of Texas at Austin. https://doi.org/10.15781/T2XD0RC8G Reilly, W. J. (1931). The law of retail gravitation. New York: Pilsbury Press. Robert, C. P. (2006). Le choix bayésien. Springer. Romeo, G. (2020). Elements of mathematical economics with EXCEL. Academic Press. https://doi. org/10.1016/C2018-0-02476-7 Rosenblatt, M. (1952). Remarks on a multivariate transformation. Annals of Mathematical Statistics, 23(3), 470–472. https://doi.org/10.1214/aoms/1177729394 Sarkar, A. (2017). The Gambler’s fallacy and hot outcome: Cognitive biases or adaptive thinking for Goalkeepers’ decisions on dive direction during penalty shootouts. Master thesis. Bowling Green State University. Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2), 461–464. https://doi.org/10.1214/aos/1176344136 Shapley, L. S. (1953). Stochastic games. Proceedings of the National Academy of Sciences of the United States of America, 39(10), 1095–1100. Sinclair, B. (2005, June 9). Machine repair model. Retrieved from OpenStax CNX: http://cnx.org/ contents/56f1bed0-bd34-4c28-a2ec-4a3f9ded8e18@3 Soetaert, K., Cash, J., & Mazzia, F. (2012). Solving differential equations in R. Springer. https://doi. org/10.1007/978-3-642-28070-2 Souza de Cursi, E. (1992). Introduction aux Probabilités et Statistiques. Ecole Centrale Nantes. Souza de Cursi, E. (1994). Numerical methods for linear boundary value problems based on Feynman-Kac representations. Mathematics and Computers in Simulation, 36(1), 1–16. Souza de Cursi, E. (2015). Variational methods for engineers with Matlab. ISTE/Wiley. Souza de Cursi, E. (2021). Uncertainty quantification in game theory. Chaos, Solitons & Fractals, Elsevier, 143, 110558. https://doi.org/10.1016/j.chaos.2020.110558 Souza de Cursi, E., & Sampaio, R. (2010). Modelling and convexity.. ISTE-Wiley. Souza de Cursi, E., & Sampaio, R. (2015). Uncertainty quantification and stochastic modelling with Matlab. ISTE/Elsevier. Souza de Cursi, J. E., Ellaia, R., & Bouhadi, M. (2004). Global optimization under nonlinear restrictions by using stochastic perturbations of the projected gradient. In C. A. Floudas & P. Pardalos (Eds.), Frontiers in global optimization (pp. 541–561). Springer. Stewart, J. Q. (1947). Empirical mathematical rules concerning the distribution and equilibrium of population. Geographical Review, 37(3), 461–485. https://doi.org/10.2307/211132 Stewart, J. Q. (1948). Demographic Gravitation: Evidence and Application. Sociometry, XI, 31–58. https://doi.org/10.2307/2785468 Stouffer, S. A. (1940). Intervening opportunities: A theory relating to mobility and distance. American Sociological Review, 5(6), 845–867. https://doi.org/10.2307/2084520 Stratonovich, R. L. (1966). A new representation for stochastic integrals and equations. 
SIAM Journal on Control and Optimization, 4(2), 362–371. https://doi.org/10.1137/0304028 Talay, D. (2015). Simulation of stochastic differential equations. In B. Engquist (Ed.), Encyclopedia of applied and computational mathematics. Springer. https://doi.org/10.1007/978-3-54070529-1_346 Tobler, W. (1976). Spatial interaction patterns. Journal of Environmental Systems, 6(4), 271–301. https://doi.org/10.2190/VAKC-3GRF-3XUG-WY4W


Vasicek, O. (1977). An equilibrium characterization of the term structure. Journal of Financial Economics, 5(2), 177–188. https://doi.org/10.1016/0304-405X(77)90016-2 Verhulst, P. (1845). Recherches mathématiques sur la loi d’accroissement de la population. Nouveaux mémoires de l’Académie Royale des Sciences et Belles-Lettres de Bruxelles, 18, 14–54. Retrieved August 15, 2020, from http://eudml.org/doc/182533 Verhulst, P. (1847). Deuxième Mémoire sur la Loi d’Accroissement de la Population. Mémoires de l’Académie Royale des Sciences, des Lettres et des Beaux-Arts de Belgique, 20, 1–32. Retrieved August 20, 2020, from http://eudml.org/doc/178976 Volterra, V. (1926). Variazioni e fluttuazioni del numero d’individui in specie animali conviventi (Vol. 2). (http://www.liberliber.it, Ed.) Memoria della Reale Accademia Nazionale dei Lincei. Ser. VI. Retrieved August 12, 2020, from https://www.europeana.eu/pt/item/2022117/urn_ axmedis_00000_obj_bd05ae74_d168_4c92_9a65_4f461377f7bd Volterra, V. (1928, April). Variations and fluctuations of the number of individuals in animal species living together. ICES Journal of Marine Science, 3(1), 3–51. https://doi.org/10.1093/ icesjms/3.1.3 Von Neumann, J. (1928). Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 100(1), 295–320. Von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton University Publishing. Von Neumann, J., & Morgenstern, O. (1953). Theory of games and economic behavior. Princeton University Press. Wilson, A. G. (1967). A Statistical theory of spatial distribution models. Transportation Research, 1(3), 253–269. https://doi.org/10.1016/0041-1647(67)90035-4 Wilson, A. G. (1970). Entropy in Urban and Regional Modelling. London: Pion. https://doi.org/10. 4324/9780203142608 Wilson, A. G. (2010). Entropy in Urban and Regional Modelling: Retrospect and Prospect. Geographical Analysis, 42(4), 364–394. https://doi.org/10.1111/j.1538-4632.2010.00799.x Zermelo, E. (1912). Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels. Proceedings of the fifth international congress of mathematicians (pp. 501–504). Cambridge University Press. Zhou, Y., & Cai, W. (2016). Numerical solution of the Robin problem of Laplace equations with a Feynman-Kac formula and reflecting Brownian motions. Journal of Scientific Computing, 69(1), 107. https://doi.org/10.1007/s10915-016-0184-y Zidani, H. (2013). Représentation de solution en optimisation continue, multi-objectif et applications. INSA/EMI. Zipf, G. K. (1946). The P1 P2/D Hypothesis: On the intercity movement of persons. American Sociological Review, 11(6), 677–686. https://doi.org/10.2307/2087063 Zipf, G. K. (1949). Human behavior and the principle of least effort: An introduction to human eoclogy. Cambridge, MA: Addison-Wesley.

Index

A Adaptation of iterative method, 523–541 algorithm, 525 ODE, 581–584 optimization, 662–666 Power Iterations, 541 Almost sure, 112 Approximation particle, 96–100 SPA, 96–100 variational, 94, 96 AR, 411 identification, 412 Yule-Walker, 412 ARMA, 426 identification, 428 simulation, 427 Yule-Walker, 428 Autocorrelation, 360, 361 partial, 374, 377 Autocovariance, 360 Autoregressive process, 411 identification, 412 simulation, 414 Yule-Walker, 412

B Bernoulli, 181 Binomial, 181 negative, 182

C CDF, 171 Chi-squared test adequacy, 217 independency, 226 Choleski transformation, 710 Classes, 51–61 Cochran’s Theorem, 199 Collocation representation by, 257–289 Conditional mean, 172 Confidence interval, 199 Continuous BFS, 188 chi-squared, 187 Gaussian normal, 186 Gaussian standard, 186 log-normal, 189 normal, 186 Student-Fisher, 188 uniform, 185 Convergence almost sure, 194 in distribution, 194 in probability, 194 quadratic mean, 194 Copula, 171, 177 Correlation, 171 Crosscorrelation, 360 Crosscovariance, 360, 361


Curves uncertainties confidence interval, 342–357 functions, 323–332 generation, 243–249 mean, 342–357 orbits, 583–589 representation, 332–342 trajectories, 583–589

D data.frame, 32–37 Deterministic process, 382 Differential equations, 61–67 boundary value, 64–67 initial value, 62–64 Diffusion process, 449–501 Brownian motion, 460–467 random walk, 467–470 time integral, 449–454 white noise, 455–460 Discrete Bernoulli, 181 Binomial, 181 negative binomial, 182 Poisson, 183 uniform, 181

E Eigenvalues uncertainty, 541–550 very small, 548–550 Environment, 49 Ergodic in correlation, 361 in the mean, 361 Event, 110 almost sure, 112 independent, 122–125 negligible, 112

F Failure centered space, 677 failure region, 676 FORM, 732–736 Hasofer-Lind, 692–709 algebraic system, 693 linear case, 700–705 probability of failure, 709–727

Index upper bound, 693 hybrid space, 690–692 limit state, 676–681 MCFP, 682 MCP, 682 MPFP, 681 MPP, 681 probability of failure convex case, 706–709, 715–718 general case, 718–727 linear case, 713–715 reliability analysis, 690–692 reliability index, 692–709 algebraic system, 693 collinearity, 693 linear case, 700–706 probability of failure, 709–727 upper bound, 693 safe region, 676 SORM, 737–740 transformation Nataf, 727–732 Rosenblatt, 727–732 Failure probability limit state, 676–681 FORM multiple conditions, 739 Functions uncertainties generation, 235–241 representation, 323–332

G Game theory cooperative, 592 equal opportunity, 593 equilibrium, 592 evolutionary, 593 game, 591 language, 591–593 move, 592 Nash equilibrium, 592 normal form, 592 play, 591 replicator dynamics uncertainty, 605–611, 614–620 replicator equation, 593 strategy, 592 uncertainty, 612–620 Hawks and Doves, 626–637 Odds and Evens, 593–611 Penalty Kick, 619–626

zero sum, 592 Gaussian normal, 186 standard, 186

H Hasofer-Lind transformation, 709 Haussdorff distance, 335 Hilbert Basis, 235 approximation, 236 typical, 235

I Independent variables, 176 Innovation process, 381 Itô’s calculus, 477–484 integrals, 470–477

K KKT equations, 68

L Lagrange’s multipliers, 68, 639 Large numbers strong law, 199 weak law, 199 Likelihood, 209 Limit state, 676–681 Linear programming Lagrange’s multipliers under uncertainty, 646 shadow prices under uncertainty, 646 under uncertainty, 640, 643 Linear system, 83–85 uncertainty, 511–523 very small, 518–523 List, 29 Log-likelihood, 209

M MA, 397 identification, 401 simulation, 398 Marginal density, 171

761 Marginal distribution, 171 Markov chain, 437–448 absorbing state, 438 canonical form, 439 recurrent state, 438 stationary probability, 439 transient state, 438 transition matrix, 437 process, 437–448 absorbing state, 438 canonical form, 439 recurrent state, 438 stationary probability, 439 transient state, 438 transition matrix, 437 Mass density, 114 Mass function, 114 Maximum Likelihood, 208–210 Maximum Likelihood Estimator, 209 Mean, 360 variational, 343 Median variational, 343 MLE, 208–210 Moment, 360 representation by matching, 298–317 Moments matching, 298–317 Moving average process, 397 identification, 401 simulation, 398

N Nataf transformation, 728 Negative binomial, 182 Negligible, 112

O Objects, 51–61 Optimization, 67–82 adaptation, 662–666 by algebraic equations, 666–673 duality, 74–78 KKT conditions, 68 Lagrange’s multipliers, 68, 639 under uncertainty, 639 linear programming, 69, 71 multiobjective, 79–82 nonlinear programming, 71–74 RBDO, 740–749

762 Optimization (cont.) bilevel, 741–745 reliablity objective, 746–749 representation, 640–662 robust via RBDO, 740–749 under uncertainty, 639–673 using statistics, 666–673 Uzawa, 74–78

P Partial autocorrelation, 374, 377 PDF, 171 Plotting, 37–46 Poisson, 183 Population standard deviation, 198 variance, 198 Probability Bayes, 122 conditional, 122 orthogonal projection, 149 event, 110 mass density, 114 mass function, 114 universe, 110 Programming, 46–50

R
R
  acf, 361
  add-ins
    third-part, 2, 3
  adf, 383
  and, 14
  approx, 92
  approxfun, 92
  approximation, 92–100
    SPH, 96–100
    variational, 94, 96
  arima, 394, 398
  array, 20–27
  as.list, 29
  ccf, 361
  chisq.test, 217
  classes, 51–61
    curve, 336
    expansion1D, 59
    trigo, 58
  data.frame, 32–37
  derivative, 100–108
    particle, 106–108
    SPA, 106–108
    variational, 104–106
  differential equations, 61–67
  document, 4, 5
    create, 4, 5
  eig, 26
  environment, 49
  equations
    algebraic, 83–91
    algebraic, linear, 83–85
    differential, 61–67
    nonlinear, 85
    by optimization, 86–91
  factor, 9–19
  factorial, 120
  fmincon, 69
  fminsearch, 69
  fminunc, 69
  for, 48
  fsolve, 85
  GA, 301
  if, 46
  install, 1
  integral, 100–108
  integral2, 101
  integral3, 101
  integrate, 100
  interpolation, 92–100
  kpss.test, 384
  lapply, 31
  linear system, 83–85
  lists, 27–32
  locator, 46
  logical operators, 14
  lpsolve, 70
  lsqnonlin, 86
  matrix, 20–27
  NA, 15
  ncombinations, 120
  NlcOptim, 74
  nlm, 69
  nonlinear system, 85
  not, 14
  npermutations, 120
  numeric, 9–19
  objects, 51–61
  ode, 62
  ODE
    boundary value, 64–67
    initial value, 62–64
  optim, 69
  optimization, 67–82
    duality, 74–78
    by equations, 86–91
    linear programming, 69, 71
    multiobjective, 79–82
    nonlinear programming, 71–74
    Uzawa, 74–78
  optimize, 69
  optimx, 69
  or, 14
  outer, 12
  pacf, 378
  package, 2, 3
    arrangements, 120
    aTSA, 382
    bvpSolve, 62, 64
    deSolve, 61
    dplyr, 34
    EstimationTools, 210
    forecast, 405, 429
    GA, 72, 307
    Gpareto, 79
    hmm, 437
    HMMCopula, 727
    install, 3
    limSolve, 27
    markovchain, 437, 440
    matlib, 27
    mistral, 729, 734
    mosaic, 102
    mosaicCalc, 102
    msm, 437
    numDeriv, 101
    OneTwoSamples, 202
    optimx, 71
    OptR, 27
    patchwork, 46
    plot3D, 42, 55
    plotly, 42
    pracma, 2, 61, 101
    R6, 51
    ReacTran, 62
    readxl, 35
    rgl, 42, 55
    rmoo, 79
    rvinecopulib, 727
    sde, 449, 485
    Sim.DiffProc, 449, 485
    tidyverse, 34
    tseries, 382
    urca, 382
    xlsx, 35
  plotting, 37–46
  pracma, 69
  programming, 46–50
  read.csv, 35
  repeat, 48
  script, 5–9
    create, 5–9
  sde, 462
  seq, 16
  set operations, 31
  Sim.DiffProc, 461
  solve, 26
  solving equations, 83–91
  source, 6
  split.screen, 41
  switch, 46
  unlist, 29
  variables
    global, 49
  vector, 9–19
  which, 14
  while, 48
  with, 50
Random curves
  confidence interval, 342–357
  differential equations, 583–589
  generation, 243–249
  mean, 342–357
  representation, 332–342
Random eigenvalues
  iterative, 541–550
  variational, 550–557
Random equations
  algebraical, 503–557
  differential, 559–589
    adaptation, 581–584
    linear, 563–575
    nonlinear, 575–581
    orbits, 583–589
    trajectories, 583–589
  eigenvalues
    iterative, 541–550
    variational, 550–557
  Leontieff, 515
  linear, 511–523
    very small, 518–523
  nonlinear systems, 523–541
Random functions
  confidence interval, 342–357
  differential equations, 583–589
  generation, 235–241
  mean, 342–357
  representation, 323–332
Random numbers
  generation
    inversion, 230–232
    triangular, 229–231
Random variable, 159–170
  adaptation, 523–541
    algorithm, 525
    ODE solver, 581–584
    Power Iteration, 541
  adequacy test, 217–225
  affine approximation, 150–155
  best approximation, 172
  best linear approximation, 171
  CDF, 160
  characteristic function, 163
  conditional mean, 155–157, 172
  continuous, 185–190
  convergence almost sure, 194
  convergence in distribution, 194
  convergence in probability, 194
  convergence quadratic mean, 194
  correlation, 150–155, 171
  couple, 137–144
  discrete, 180–184
  distribution, 160
  eigenvalues
    very small, 548–550
  finite population, 126–146
  Hilbert properties, 146–159
  independent, 144–146, 176
  mean, 162
    orthogonal projection, 150
  moment, 163
  moments matching
    alternative, 308–317
    standard, 300–309
  multidimensional expansion, 317–322
  orthogonal projection, 146–159
  PDF, 161
  representation
    artificial variable, 273
    collocation, 257–289
    equation available, 252
    finding the CDF, 258, 259, 262
    model problem, 251
    moments matching, 298–317
    multidimensional U, 317–319
    multidimensional X, 320–322
    optimization, 640–666
    by optimization of statistics, 666–673
    Unknown Variable, 270
    UQ approach, 251–256
    variational, 290–298
  sample, 198
    empirical CDF, 214–216
    empirical PDF, 214–216
  sequence, 194–197
    convergence, 194
  standard deviation, 163
  statistics, 167–169
  variance, 163
Random vector, 170–180
  CDF, 171
  conditional mean, 172
  copula, 171, 177
  correlation, 171
  generation given covariance, 233, 234
  marginal density, 171
  marginal distribution, 171
  PDF, 171
  sample, 212–214
RBDO, 740–749
  bilevel, 741–746
  reliability objective, 746–749
Reliability
  centered space, 677
  DDO, 676
  failure region, 676
  FORM, 732–736
  Hasofer-Lind, 692–709
    algebraic system, 693
    collinearity, 693
    linear case, 700–705
    probability of failure, 709–727
    upper bound, 693
  hybrid space, 690–692
  limit state, 676–681
  MCFP, 682
  MCP, 682
  MPFP, 681
  MPP, 681
  multiple conditions, 736
  probability of failure
    convex case, 706–709, 715–718
    general case, 718–727
    linear case, 713–715
  RBDO, 675, 740–749
    bilevel, 741–745
    reliability objective, 746–749
  reliability analysis, 690–692
  reliability index, 692–709
    algebraic system, 693
    collinearity, 693
    FORM, 732–737
    linear case, 700–706
    probability of failure, 709–727
    SORM, 737–740
    upper bound, 693
  safe region, 676
  SORM, 737–740
  transformation
    Nataf, 727–732
    Rosenblatt, 727–731
Reliability index, 692–709
  algebraic system, 693
  collinearity, 693
  convex case, 706, 715
  general case, 718
  linear case, 701, 713
  numerical evaluation, 732–740
  probability of failure, 709
  upper bound, 693
Representation
  collocation, 257–289
  moments matching, 298–317
  optimization, 640–662
  random curves, 332–342
  random function, 323–332
  variational, 290–298
Rosenblatt transformation, 727
RStudio
  Import Dataset, 35
  Import from Excel, 35
  install, 1

S
Sample, 198–229
  confidence interval, 199
  confidence level, 199
  empirical CDF, 214–216
  empirical mean, 198
  empirical PDF, 214–216
  large numbers
    strong law, 199
    weak law, 199
  mean, 198
  risk, 199
  standard deviation, 198
  standard deviation (Population), 198
  test of hypothesis
    mean, 200
    two means, 201
    two variances, 201
    variance, 200
  variance, 198
  variance (Population), 198
SDE, 484–501
SORM
  multiple conditions, 739
Standard deviation, 360
Stationary process, 361
  in difference, 382
  distribution, 387–392
  integrated, 382
  trend, 382
  seasonal, 382
  weakly, 361
Stochastic differential equation, 449, 477
Stochastic diffusions
  calculus, 477–484
  simulation, 484–501
Stochastic integral, 470–477
  Brownian, 471
  Itô's diffusion, 477
  Itô's formula, 472, 478
  Stratonovich, 473
  Wiener, 471
Stochastic process, 359
  AR, 411
    identification, 412
    Yule-Walker, 412
  ARMA, 426
    identification, 428
    simulation, 427
    Yule-Walker, 428
  autocorrelation, 360, 361
  autocovariance, 360
  autoregressive, 411
    identification, 412
    Yule-Walker, 412
  Brownian motion, 460–467
  continuous, 359, 410, 411, 415, 469, 501
  crosscorrelation, 360
  crosscovariance, 360, 361
  deterministic, 382
  diffusion, 449–501
  discrete, 359, 410, 411, 415, 469, 501
  ergodic
    in correlation, 361
    in the mean, 361
  innovation, 381
  Itô's calculus, 477–484
  Itô's diffusion, 477
  Itô's formula, 472, 478
  Itô's integrals, 470–477
    Brownian, 471
    Wiener, 471
  MA, 397
    identification, 401
    simulation, 398
  Markov, 437–448
    absorbing state, 438
    canonical form, 439
    recurrent state, 438
    stationary probability, 439
    transient state, 438
    transition matrix, 437
  mean, 360
  moment, 360
  moving average, 397
    identification, 401
    simulation, 398
  partial autocorrelation, 374, 377
  random walk, 467–470
  SDE, 449, 477, 484–501
  second order, 360
  simulation, 484–501
  standard deviation, 360
  stationary, 361
  time integral, 449–454
    white noise, 455–460
  variance, 360
  weakly stationary, 361
  white noise, 393

T
Test
  adequacy, 217–225
  chi-squared
    adequacy, 217
    independence, 226
  independence, 226–229
  one mean, 200
  one variance, 200
  two means, 201
  two variances, 201
Theorem
  Central Limit, 199
  Cochran, 199
  Fisher-Pearson, 217
  Lévy, 194
Transformation
  Choleski, 710
  Hasofer-Lind, 709
  Nataf, 728
  Rosenblatt, 727

U
Uncertain Power Iterations, 541
Uncertainties on curves
  confidence interval, 342–357
  functions, 323–332
  generation, 235–241, 243–249
  mean, 342–357
  orbits, 583–589
  representation, 332–342
  trajectories, 583–589
Uncertainties on functions
  representation, 323–332
Uniform
  continuous, 185
  discrete, 181
Universe, 110

V
Variables
  environment, 49
  global, 49
Variance, 360
Variational
  representation by, 290–298

W
White noise, 393
  Gaussian, 393
  uniform, 393