The Analysis of Selected Algorithms for the Stochastic Paradigm

By

Abdo Abou Jaoudé

This book first published 2019

Cambridge Scholars Publishing
Lady Stephenson Library, Newcastle upon Tyne, NE6 2PA, UK

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Copyright © 2019 by Abdo Abou Jaoudé

All rights for this book reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.

ISBN (10): 1-5275-3703-X
ISBN (13): 978-1-5275-3703-3

I am deeply and forever indebted to my parents Miled and Rebecca, my brother Maroun and my sister Zeina, and to all my family for their love, support, and encouragement throughout my entire life and without whom this work would not have been accomplished…

Preface ... xiii

Chapter I ... 1
Introduction

Chapter II ... 7
Fundamental Mathematical Concepts and Methods
I Simulation ... 7
II The Monte Carlo Methods ... 8
III Random Numbers Generators ... 12
IV Matrices ... 13
1 Definition ... 13
2 Matrix Addition and Subtraction ... 14
3 Scalar Multiplication ... 14
4 Matrix Multiplication ... 15
5 Matrix Transposition ... 16
6 Symmetric Matrix; Skew-Symmetric Matrix ... 16
7 Identity Matrix ... 16
8 Null Matrix ... 17
V Numerical Methods ... 17
1 Gauss-Jordan Method of Elimination to Solve a Linear System ... 17
2 Gauss-Jordan Method with Pivoting ... 18
3 Gauss-Jordan Method of Inversion ... 19
4 Overdetermined Systems ... 20
VI Complex Numbers ... 21
1 The Complex Number System ... 21
2 Fundamental Operations with Complex Numbers ... 22
3 Absolute Value ... 22
4 Graphical Representation of Complex Numbers ... 23
5 Vector Interpretation of Complex Numbers ... 24
6 Leonhard Euler’s Formula and Abraham De Moivre’s Theorem ... 25
VII Conclusion ... 25

Chapter III ... 27
Basic Probability
I Introduction ... 27
II The Theory: Definitions 1 to 33 ... 28
III Problems, Applications, and Algorithms ... 34
1 The Cards Problem ... 35
2 The First Box Problem ... 43
3 The First Two Boxes Problem ... 49
4 The First Bayes’ Problem ... 52
5 The Second Bayes’ Problem ... 57
6 The Second Box Problem ... 60
7 The Coin Problem ... 73
8 The Poker Problem ... 81
9 The Fair Die Problem ... 98
10 The Biased Die Problem ... 101
11 The Books Problem ... 105
12 The Chess Problem ... 109
13 The Two Players Problem ... 117
14 The Principle of Inclusion and Exclusion Problem ... 121
15 The Birthday Problem ... 128
16 The Second Two Boxes Problem ... 132
17 The Bose-Einstein Problem ... 135
18 The Fermi-Dirac Problem ... 139
19 The Two Purses Problem ... 144
20 The Bag Problem ... 148
21 The Letters Problem ... 154
22 The Yahtzee Problem ... 171
23 The Strange Dice Problem ... 187
24 The Equation Problem ... 195
25 The De Moivre Problem ... 200
26 The Huyghens Problem ... 208
27 The Bernoulli Problem ... 214
28 The De Meré Problem ... 219
29 The Domino Problem ... 223
IV Conclusion ... 234

Chapter IV ... 236
Random Variables
I The Theory ... 236
1 Random Variables ... 236
2 Discrete Probability Distributions ... 236
3 Distribution Functions for Random Variables ... 237
4 Distribution Functions for Discrete Random Variables ... 237
5 Continuous Random Variables ... 238
6 Graphical Interpretations ... 239
7 Joint Distributions ... 239
7-1 Discrete Case ... 240
7-2 Continuous Case ... 240
8 Independent Random Variables ... 241
9 Conditional Distributions ... 242
10 Applications to Geometric Probability ... 243
II Problems, Applications, and Algorithms ... 244
1 The Coin Algorithm ... 244
2 The Second Coin Algorithm ... 250
3 The Continuous Random Variable Algorithm ... 255
4 The Joint Distribution Algorithm ... 259

Chapter V ... 264
Mathematical Expectation
I The Theory ... 264
1 Definition of Mathematical Expectation ... 264
2 Some Theorems on Expectation ... 265
3 The Variance and Standard Deviation ... 265
4 The Standardized Random Variables ... 267
5 Moments ... 267
6 Variance for Joint Distributions. Covariance ... 268
7 Correlation Coefficient ... 270
8 Chebyshev’s Inequality ... 270
9 Law of Large Numbers ... 270
10 Other Measures of Central Tendency ... 271
10-1 Mode ... 271
10-2 Median ... 272
11 Skewness and Kurtosis ... 272
11-1 Skewness ... 272
11-2 Kurtosis ... 272
II Problems, Applications, and Algorithms ... 273
1 The First Algorithm: Mathematical Expectation ... 273
2 The Second Algorithm: Mathematical Expectation (Joint Distribution) ... 281

Chapter VI ... 289
Special Probability Distributions
I Introduction ... 289
II The Discrete Probability Distributions ... 290
1 The Binomial Distribution ... 290
2 The Geometric Distribution ... 295
3 The Pascal’s or Negative Binomial Distribution ... 301
4 The Hypergeometric Distribution ... 306
5 The Binomial and the Poisson Distributions ... 322
III The Continuous Probability Distributions ... 329
1 The Normal Distribution ... 329
2 The Standard Normal Distribution ... 333
3 The Bivariate Normal Distribution ... 338
4 The Gamma and Exponential Distributions ... 348
5 The Chi-Squared Distribution ... 354
6 The Cauchy Distribution ... 359
7 The Laplace Distribution ... 363
8 The Maxwell Distribution ... 367
9 The Student t-Distribution ... 372
10 The Fisher F-Distribution ... 378
IV Conclusion ... 384

Chapter VII ... 385
Random Processes
I Introduction ... 385
II The Theory ... 385
1 Random Processes ... 385
1-1 Definition ... 385
1-2 Description of a Random Process ... 386
2 Characterization of Random Processes ... 387
2-1 Probabilistic Descriptions ... 387
2-2 Mean, Correlation, and Covariance Functions ... 388
3 Classification of Random Processes ... 389
III Problems, Applications, and Algorithms ... 389
1 The Simple Random Walk Problem ... 389
2 The Random Walk of a Particle Problem ... 397
3 The Random Walk of a Drunkard Problem ... 400

Chapter VIII ... 403
Markov Chains
I Introduction ... 403
II The Theory ... 404
1 Definition of a Markov Chain ... 404
2 The Initial Probability Distribution ... 404
3 The Probability Vector ... 404
4 The Probability of Passing from State i to State j in n Stages ... 405
5 Regular Markov Chain ... 405
6 Long-Term Behavior of a Regular Markov Chain ... 405
7 Absorbing State; Absorbing Markov Chain ... 406
8 The Fundamental Matrix of an Absorbing Markov Chain ... 406
9 The Expected Number of Steps before Absorption ... 407
10 The Probability of Being Absorbed ... 407
11 The Average Time between Visits ... 407
III Problems, Applications, and Algorithms ... 407
1 Markov Chains and Transition Matrices Program ... 407
1-1 Population Movement ... 408
1-2 Mice Maze ... 409
2 Regular Markov Chains Program ... 419
2-1 Population Movement ... 419
2-2 Mice Maze ... 420
2-3 A Specific Transition Matrix ... 421
2-4 Consumer Loyalty Problem ... 422
2-5 Spread of Rumor ... 423
3 Absorbing Markov Chains Program ... 437
4 Absorbing Markov Chains – The Gambler’s Ruin Program ... 449
5 Absorbing Markov Chains – The Rise and Fall of Stock Prices Program ... 464

Chapter IX ... 484
The Complex Probability Paradigm and the Brownian Motion
I Introduction ... 484
II Nomenclature ... 485
III Historical Review ... 486
IV Albert Einstein’s Contribution ... 489
V The Purpose and the Advantages of the Present Work ... 491
VI The Complex Probability Paradigm ... 493
1 The Original Andrey Nikolaevich Kolmogorov System of Axioms ... 493
2 Adding the Imaginary Part M ... 494
3 The Purpose of Extending the Axioms ... 494
VII The New Paradigm and the Diffusion Equation ... 504
VIII The Evolution of Pc, DOK, Chf, and MChf ... 511
IX A Numerical Example ... 513
X Flowchart of the Complex Probability Paradigm ... 518
XI Simulation of the New Paradigm ... 519
1 The Paradigm Functions Analysis For t = 3000 seconds ... 519
1-1 The Complex Probability Cubes ... 524
2 The Paradigm Functions Analysis For t = 1000 seconds ... 526
3 The Paradigm Functions Analysis For t = 100 seconds ... 530
XII The New Paradigm and Entropy ... 535
XIII The Resultant Complex Random Vector Z ... 543
1 The Resultant Complex Random Vector Z of a General Bernoulli Distribution ... 543
2 The General Case: A Discrete Distribution with N Equiprobable Random Vectors ... 547
3 The Resultant Complex Random Vector Z and the Law of Large Numbers ... 550
XIV The Complex Characteristics of the Probability Distributions ... 552
1 The Expectation in C = R + M ... 552
1-1 The General Probability Distribution Case ... 552
1-2 The General Bernoulli Distribution Case ... 554
2 The Variance in C ... 556
3 A Numerical Example of a Bernoulli Distribution ... 557
XV Numerical Simulations ... 559
XVI Conclusion and Perspectives ... 563
XVII The Algorithms ... 564

Chapter X ... 593
Conclusion

Bibliography and References ... 598

PREFACE

This book is entitled The Analysis of Selected Algorithms for the Stochastic Paradigm, and it presents the analysis of, and selected algorithms for, random and stochastic phenomena in the following areas: basic probability, random variables, mathematical expectation, special probability and statistical distributions, random processes, and Markov chains, in addition to the presentation of my “Complex Probability Paradigm” applied to the Brownian motion. Why algorithms and why probability? My background is a Ph.D. in Computer Science and a Ph.D. in Applied Mathematics, so I combined my knowledge of computers and applied mathematics with probability theory and wrote this manuscript. Eager to learn and discover more, as well as to apply my knowledge in mathematics and computer science, I embarked on this new research on stochastic phenomena. Additionally, each time I work in this field I find pleasure in tackling the knowledge, the theorems, the proofs, and the applications of the theory of probability. In fact, each problem in probability is like a riddle to be solved, a conquest to be won, and I become relieved and extremely happy when I reach the end of the solution. This verily proves two important facts: firstly, the power of mathematics and its models to deal with such kinds of problems, and secondly, the power of the human mind, which is able to understand such a class of problems and to tame such wild concepts as randomness, probability, stochasticity, uncertainty, chaos, and chance. Mathematical probability is an attractive, thriving, and respectable part of mathematics; some mathematicians and philosophers of science even call it the gateway to mathematics’ deepest mysteries. Moreover, mathematical statistics denotes an accumulation of mathematical discussions connected with the efforts to most efficiently collect and use numerical data subject to random variation.
In the twentieth century and up to the present time, the concepts of probability and mathematical statistics have become fundamental notions of modern science and the philosophy of nature. This was accomplished after a long history of efforts by prominent and distinguished mathematicians and philosophers like the famous French Blaise Pascal and Pierre de Fermat, the Dutch Christiaan Huyghens, the Swiss Jakob Bernoulli, the German Carl Friedrich Gauss, the French Siméon-Denis Poisson, the English Thomas Bayes, the French Joseph Louis Lagrange and Pierre-Simon Laplace, the English Karl Pearson and Ronald Aylmer Fisher, the Russian Andrey Nikolaevich Kolmogorov, the American John von Neumann, etc. As a matter of fact, each time I read about or meditate on these outstanding giants, I feel respect, admiration, and esteem for these magnificent men of science, most of whom were mathematicians, physicists, astronomers, statisticians, and philosophers at the same time. They were, as we call them today, Universalists.

The fields to which this book belongs are Probability and Computer Science; hence the present work should include, and it certainly does, applications to both fields that encompass a wide set of problems taken from engineering, games of chance (cards, dice, urns, coins, etc.), fundamental mathematics, computer science, and physics. To delve into the mysteries of the stochastic paradigm, ten chapters and sixty-five algorithms were written for this purpose, devoted to illustrating the concepts and theory of probability, random variables, and stochastic processes.

What is original in this work, as the title of the book suggests, is firstly the analysis of a large array of stochastic problems and secondly the sixty-five algorithms written for this array of applications. Both are my creation, and they illustrate how probability theory can be applied to solve and understand random phenomena existing in innumerable branches of science and pertaining to different disciplines of knowledge. However, this would certainly not have been possible without the help of all the books, websites, and encyclopedias that provided me with a very rich source of hints, information, and mathematics.
Hence, in some parts of the manuscript I have used the wording of the initial theorems, of the problems given, or sometimes of the methods of proof, always avoiding plagiarism and with the intention of preserving the integrity and the truth of the data taken from the reference books mentioned in the bibliography. However, the algorithms are entirely the result of a personal effort.

Moreover, the book develops methods for simulating simple or complicated processes and phenomena. If the computer can be made to imitate an experiment or a process, then by repeating the computer simulation with different data we can draw statistical conclusions. Thus, a simulation of a wide spectrum of random processes was carried out on computers. The results and accuracy of all the algorithms are truly amazing and delightful; hence, this confirms two complementary accomplishments: first, the triumph of the theoretical calculations already established using different theorems, and second, the power and success of modern computers in verifying them. The work was done using Microsoft Visual C++ due to its excellent well-definedness, modularity, portability, and efficiency. Moreover, the programs were executed on a workstation computer with parallel microprocessors to acquire the speed and efficiency needed for these numerical and computational methods.

Chapter IX is an additional section that was added to this book, entitled The Complex Probability Paradigm and the Brownian Motion. It is a development of A. N. Kolmogorov’s system of axioms applied to the Brownian motion. This theorem is also my creation, and my new mathematical paradigm, which I called “The Complex Probability Paradigm,” was the subject of twelve personal published research papers from 2010 to 2018. I wrote five algorithms to illustrate it.

To conclude, due to its universality, mathematics is the most positive and certain branch of science. It is aptly called by philosophers the Esperanto of all sciences, since it is the common, logical, and exact language of understanding, capable of expressing accurately all scientific endeavor. Although Probability and Statistics are approximate sciences that deal with rough guesses, hypothesis tests, estimated computations, expected calculations, and uncertain results, they still keep in them the spirit of “exact” sciences through their numbers, proofs, figures, and graphs, since they remain a branch of mathematics. Surely, the pleasure of working and doing mathematics and computer science is everlasting.
I hope that the reader will benefit from both and share the pleasure of examining the present manuscript. As a matter of fact, the combination of both mathematics and computer science leads to “magical” and amazing results and algorithms, and the following work is an illustration of this approach. Sincerely, I am truly astonished by the power of statistics and probability to deal with random data and phenomena, and this feeling and impression never left me from the first time I was introduced to this branch of science and mathematics. I hope that in the present book I will convey and share


this feeling with the reader. I hope also that he will discover and learn about the concepts and applications of this paradigm. Abdo Abou Jaoudé, Ph.D. Notre Dame University-Louaizé, Lebanon. August 17th, 2019.

CHAPTER I INTRODUCTION

“The most incomprehensible thing about the universe is that it is comprehensible…” Albert Einstein. “One thing I have learned in a long life: That all our science, measured against reality, is primitive and childlike – and yet it is the most precious thing we have.” Albert Einstein.

It gives me great pleasure to introduce as well as to discuss, to learn, to solve, to teach, and to work with probability and stochastic theories. So, let us first start the introduction of the present book with a relatively small historical note that will give us an overview on the development of this fascinating paradigm across the ages till modern times, before thoroughly studying its essential theorems and examining its diverse applications. Probability theory is a branch of statistics, a science that employs mathematical methods of collection, organization, and interpretation of data, with applications in practically all scientific areas. When working with probability theory, we analyze random – or stochastic – phenomena and assess the likelihood that an event will occur. In this book we discuss some fundamental aspects of probability theory and explore their use to solve a large array of problems as we will see later. One of the earliest mathematical studies on probability was Liber de ludo aleae (“On Casting the Die”), written by the 16th–century Italian mathematician and physicist Gerolamo Cardano (1501-1576); it was not published until 1663, 87 years after Cardano’s death. Cardano introduced the concepts of combinatorics into the calculations of probability and defined probability as “the number of favorable outcomes divided by the number of possible outcomes”. It is likely that Cardano would be known as “the father of the theory of probability” had the publication not been delayed.


In the 17th century, questions about the probability of events occurring in games of chance were discussed in the correspondence between the French mathematicians Blaise Pascal (1623-1662) and Pierre de Fermat (1601-1665). Building on their results, the Dutch physicist-astronomer-mathematician Christiaan Huyghens (1629-1695) published, in 1656, De ratiociniis in ludo aleae (“On Reasoning in Games of Chance”). Moreover, the Swiss mathematician Jakob Bernoulli (1654-1705) was an early advocate of the use of probability theory in medicine and meteorology in his work Ars conjectandi (“The Art of Conjecture”), published posthumously in 1713. In the late 18th century, it became increasingly evident that analogies exist between games of chance and random phenomena in physical, biological, and social sciences. Contributions of fundamental importance to probability theory were made in the latter half of the 18th century and the beginning of the 19th century by the French mathematicians, astronomers, and physicists Joseph Louis Lagrange (1736-1813) and Pierre-Simon Laplace (1749-1827), the omnipresent mathematician and astronomer Carl Friedrich Gauss (1777-1855), and the French mathematician Siméon-Denis Poisson (1781-1840). The most important publication on probability theory in this era is Laplace’s Théorie analytique des probabilités (1812), which discussed practical applications of the theory and developed the concept of a normal distribution, first discovered by Abraham de Moivre (1667-1754). In the 1837 Recherches sur la probabilité des jugements… (“Researches on the Probability of Opinions…”) Poisson introduced what we now know as the Poisson distribution, or Poisson law of large numbers, an approximate method used to describe the probable occurrence of unlikely events in a large number of unconnected trials. About 1850-1900, the Russian (Petersburg) school of probability theory, emphasizing stringent mathematical methods, dominated its development.
Prominent figures of this school were Pafnuty Chebyshev (1821-1894) and, originally disciples of Chebyshev’s, Andrei Markov (1856-1922) and Alexandr Lyapunov (1857-1918). In the beginning of the 20th century, the need for applications of probability theory increased in physics, economics, insurance, and telephone communication. Important impulses were given by Albert Einstein (1879-1955), the New Zealand-born English physicist Ernest Rutherford (1871-1937), and the Swedish astronomer Carl Vilhelm Ludwig Charlier (1864-1934). Applications often precipitated new probability problems which had


to be tackled within the field of theoretical probability, and thus a fruitful interplay between the sciences was created. Moreover, the English mathematician Karl Pearson (1857-1936) is the founder of modern hypothesis testing – he developed the chi-squared test of statistical significance. Besides making major contributions in mathematics and probability theory, Pearson also practiced law, was active in politics, published literary works, and wrote The Grammar of Science (1892), a classic in the philosophy of science. In addition, one of the most eminent scientists of the 20th century, the English geneticist and statistician Ronald Aylmer Fisher (1890-1962), professor of genetics at Cambridge from 1943 to 1957, developed methods of multivariate analysis – analysis of problems involving more than one variable – and used them in his investigations of the linkage of genes to various traits. He also introduced the idea of likelihood in statistical inference, that is, how to draw conclusions on the basis of the relative probability of events. Fisher’s Statistical Methods for Research Workers (1925) was used extensively as a textbook and a reference book and remained in print for more than 50 years. Along with Egon Pearson (1895-1980) – the son of Karl Pearson – and Fisher, Jerzy Neyman (1894-1981) was one of the principal founders of modern statistical analysis. Neyman lived in Poland until he was forty, then in England, and finally settled in the United States. In 1955, he became professor of statistics at the University of California at Berkeley, where his department became a world center for the development of mathematical statistics. Egon Pearson and Neyman founded what is now known as the school of statistical inference. Furthermore, the Russian mathematicians Alexandr Khinchin (1894-1959) and Andrey Nikolaevich Kolmogorov (1903-1987) are the founders of the Moscow school of probability theory, one of the most influential in the 20th century.
One may say that the present golden age of probability theory started in 1933 with Kolmogorov’s Grundbegriffe der Wahrscheinlichkeitsrechnung (“Foundations of Probability Theory”). Kolmogorov introduced several fundamental postulates in statistics and probability theory; he showed that probability theory may be founded on the concepts of set theory and mathematical measure theory. Also, the theory of games was founded by John von Neumann (1903-1957), pre-eminent 20th century innovator in many fields of pure and applied mathematics. He created a mathematical model for games of chance, such


as poker and bridge, that involve free choices – strategy – for the players. His first paper on this subject was presented in 1926; von Neumann’s theories were further developed in his major work Theory of Games and Economic Behavior (1944), co-authored with the economist Oskar Morgenstern (1902-1977). The theory of games is now a mathematical discipline of its own, with far-reaching applications to economics and social sciences. Additionally, I chose the word paradigm for this branch of mathematical sciences after consulting the influential book of the historian of science Thomas Kuhn, which is The Structure of Scientific Revolutions, where the author used the term to describe a set of theories, standards, and methods that together represent a way of organizing knowledge – that is, a model or a way of viewing the world. Kuhn stated in his thesis that revolutions in science occur when an older paradigm is reexamined, rejected, and replaced by another, just like Einstein’s theories of special and general relativity that dethroned Newtonian mechanistic theory, or quantum mechanics that replaced the classical theories of electromagnetism and thermodynamics when probing the micro-world…What about probability and statistics? We can affirm that their set of theories and methods developed across the centuries have defined for us a way to view the world, techniques and a model to understand and to deal with such concepts as randomness, chance, stochasticity, chaos, probability…Hence, to be brief, the definition of a paradigm suits very well this discipline of knowledge and this methodology of thinking. This justifies my usage of this term. After this historical introduction and the last note that followed it, we show now briefly the structure of the book which is divided into 10 chapters. 
Hence, the structure is as follows: Chapter I is an introduction to the book that starts with a historical note and then states the basic ideas and algorithms that will be developed throughout the whole manuscript. Chapter II is an introductory chapter that defines the fundamental mathematical concepts and methods that will be illustrated in all the 65 algorithms written to solve different and historical probability problems in the following chapters. Chapter III defines the basic theory of probability, and in it we will solve 29 important probability problems like the game of cards, the game of dice, the


game of domino, the game of letters, the game of chess, the game of coins, etc… and some historical problems like the birthday problem, the De Meré problem, the De Moivre problem, the Bayes problem, the Bernoulli problem, the Huyghens problem, the principle of inclusion and exclusion, etc… that I discovered through my research and readings of many books on probability and statistics. Chapter IV deals with random variables. It includes four algorithms that illustrate the concepts defined. Chapter V talks about mathematical expectation. It contains two algorithms as examples of the theory. Chapter VI applies the Monté Carlo technique to some well-known discrete and continuous probability and statistical distributions, which are: the Binomial distribution, the continuous Chi-squared distribution, the Gamma and Exponential distributions, the F-distribution, the Geometric, the Hypergeometric, the Laplace, the Maxwell, and the Negative Binomial distributions, the Standard Normal distribution, the Normal distribution, the Binomial versus the Poisson distribution, the t-distribution, and the Bivariate Normal and Cauchy distributions. Chapter VII studies random processes, which are illustrated in three different algorithms about random walk problems. Chapter VIII is an analysis of Markov chains and includes five algorithms illustrating the theory. Chapter IX is a development of Kolmogorov’s axioms, which are at the foundations of probability theory; hence, it opens the door to a deterministic expression of probabilistic events. It is an interesting chapter indeed that should be read, and I preferred to include it in the present work, as in my previous book The Computer Simulation of Monté Carlo Methods and Random Phenomena, which was published in 2019 with Cambridge Scholars Publishing, since both books deal with random phenomena and probability theory.
Five algorithms illustrate the original idea, but surely a whole dissertation could be written on this chapter alone, since determinism versus nondeterminism is a very deep debate among mathematicians and physicists, as between Albert Einstein and Niels Bohr…


Chapter X, finally, is the conclusion of the book. In this last chapter, we conclude this interesting and exciting topic with a few pages in which we try to summarize the previous chapters developed in the manuscript. I think that the topics chosen in the ten chapters of this book have served the purpose of illustrating probability theory and stochastic variables and processes. As a matter of fact, the subject is very broad and could surely be developed in many manuals and research papers… Thus, ten chapters are merely an introduction to this exciting, profound, and modern field of mathematics and knowledge.

CHAPTER II FUNDAMENTAL MATHEMATICAL CONCEPTS AND METHODS

“The known is finite, the unknown is infinite; intellectually we stand on an islet in the midst of an illimitable ocean of inexplicability. Our business in every generation is to reclaim a little more land.” Thomas Henry Huxley.

It is important, before “probing the depths” of the stochastic paradigm, that we define some fundamental mathematical concepts and tools that will be used extensively in the algorithms of the whole manuscript. In fact, what follows is a list of some definitions and theorems that will be applied in the subsequent chapters.

I- Simulation

Simulation is the process of designing a model of a real or imagined system and conducting experiments with this model to understand the behavior of the system or to evaluate strategies for its operation. Assumptions are made about this system, and mathematical algorithms and relationships are derived to describe these assumptions – this constitutes a “model” that can reveal how the system works. If the system is simple, the model may be represented and solved analytically. A single equation such as DISTANCE = RATE × TIME may be an analytical solution representing the distance traveled by an object at a constant rate for a given period of time. However, problems of interest in the real world are usually much more complex than this. In fact, they may be so complex that a closed analytical model cannot be constructed to represent them. In this case, the behavior of the system must be estimated through a simulation. Exact representation is seldom possible in a model, constraining us to approximations to a degree of fidelity that is acceptable for the purposes of the study. Models have been constructed for almost every system imaginable, including factories, communications and computer networks, integrated circuits, highway


systems, flight dynamics, national economies, social interactions, and imaginary worlds. In each of those environments, experimenting with a model of the system has proved to be more cost-effective, less dangerous, faster, or otherwise more practical than experimenting with a real system. For example, a business may be interested in building a new factory to replace an old one, but is unsure whether the increased productivity will justify the investment. In this case, a simulation could be used to evaluate a model of the new factory. The model could describe the floor space required, the number of machines, the number of employees, the placement of equipment, the production capacity of each machine, and the waiting time between machines. The simulation runs would then evaluate the system and provide an estimate of the production capacity and the costs of the new factory. This type of information is invaluable in making decisions without having to build an actual factory to arrive at an answer. Moreover, one of the pioneers of simulation was John von Neumann. In the mid-1940s, together with the physicist Enrico Fermi and the mathematician Stanislaw Ulam, he conceived of the idea of running multiple repetitions of a model, gathering statistical data, and deriving behaviors of the real system based on these models. This came to be known as the Monté Carlo method because of the use of randomly generated variates to represent behaviors that could not be modeled exactly but could be characterized statistically. Von Neumann used this method to study random actions of neutrons (simulated in Chapter V of my book The Computer Simulation of Monté Carlo Methods and Random Phenomena) and aircraft bombing effectiveness. Early civilian applications of this method were found in representations of factories attempting to determine maximum potential productivity. Simulations derive much of their technique from models of the world found in other disciplines.
Wind tunnels are models that replicate flight by moving the air rather than the aircraft; chess has been used to simulate strategic thinking about warfare; and computer games are intended to generate believable worlds requiring mastery of a specified set of behaviors.

II- The Monté Carlo Methods

(See The Computer Simulation of Monté Carlo Methods and Random Phenomena, pp. 1-2.)


In applied mathematics, the name Monté Carlo is given to the method of solving problems by means of experiments with random numbers. This name, after the casino at Monaco, was first applied around 1944 to the method of solving deterministic problems by reformulating them in terms of a problem with random elements which could then be solved by large-scale sampling. But, by extension, the term has come to mean any simulation that uses random numbers. The development and proliferation of computers have led to the widespread use of Monté Carlo methods in virtually all branches of science, ranging from nuclear physics (where computer-aided Monté Carlo was first applied) to astrophysics, biology, engineering, medicine, operations research, and the social sciences. The Monté Carlo method of solving problems by using random numbers in a computer – either by direct simulation of physical or statistical problems or by reformulating deterministic problems in terms of ones incorporating randomness – has become one of the most important tools of applied mathematics and computer science. A significant proportion of articles in technical journals in such fields as physics, chemistry, and statistics report results of Monté Carlo simulations or suggestions on how they might be applied. Some journals are devoted almost entirely to Monté Carlo problems in their fields. Studies of the formation of the universe or of stars and their planetary systems use Monté Carlo techniques. Genetics, the biochemistry of DNA, and the random configuration and knotting of biological molecules are studied by Monté Carlo methods. In number theory, Monté Carlo methods play an important role in determining primality or factoring of very large integers far beyond the range of deterministic methods. Several important new statistical techniques, such as “bootstrapping” and “jackknifing”, are based on Monté Carlo methods.
Hence, the role of Monté Carlo methods and simulation in all of the sciences has increased in importance during the past several years. These methods play a central role in the rapidly developing subdisciplines of the computational physical sciences, the computational life sciences, and the other computational sciences. Therefore, the growing power of computers and the evolving simulation methodology have led to the recognition of computation as a third approach for advancing the natural sciences, together with theory and traditional experimentation. At the kernel of Monté Carlo simulation is random number generation.


Now we turn to the approximation of a definite integral by the Monté Carlo method. If we select the first n elements x_1, x_2, …, x_n from a random sequence in the interval (0,1), then:

∫_0^1 f(x) dx ≈ ((1 − 0)/n) Σ_{i=1}^n f(x_i) = (1/n) Σ_{i=1}^n f(x_i)

Here the integral is approximated by the average of the n numbers f(x_1), f(x_2), …, f(x_n). When this is actually carried out, the error is of order 1/√n, which is not at all competitive with good algorithms, such as the Romberg method. However, in higher dimensions, the Monté Carlo method can be quite attractive. For example,

∫_0^1 ∫_0^1 ∫_0^1 f(x,y,z) dx dy dz ≈ ([(1 − 0) × (1 − 0) × (1 − 0)]/n) Σ_{i=1}^n f(x_i, y_i, z_i) = (1/n) Σ_{i=1}^n f(x_i, y_i, z_i)

where (x_i, y_i, z_i) is a random sequence of n points in the unit cube 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, and 0 ≤ z ≤ 1. To obtain random points in the cube, we assume that we have a random sequence in (0,1) denoted by ξ_1, ξ_2, ξ_3, ξ_4, ξ_5, ξ_6, … To get our first random point p_1 in the cube, just let p_1 = (ξ_1, ξ_2, ξ_3). The second is, of course, p_2 = (ξ_4, ξ_5, ξ_6), and so on.

If the interval (in a one-dimensional integral) is not of length 1 but, say, is the general case (a,b), then the average of f over n random points in (a,b) is not simply an approximation for the integral but rather for:

(1/(b − a)) ∫_a^b f(x) dx

which agrees with our intuition that the function f(x) = 1 has an average of 1. Similarly, in higher dimensions, the average of f over a region is


obtained by integrating and dividing by the area, volume, or measure of that region. For instance,

(1/[(7 − 4) × (5 − (−2)) × (3 − 0)]) ∫_4^7 ∫_{−2}^5 ∫_0^3 f(x,y,z) dx dy dz = (1/63) ∫_4^7 ∫_{−2}^5 ∫_0^3 f(x,y,z) dx dy dz

is the average of f over the parallelepiped described by the following three inequalities: 0 ≤ x ≤ 3, −2 ≤ y ≤ 5, 4 ≤ z ≤ 7. To keep the limits of integration straight, we recall that:

∫_a^b ∫_c^d f(x,y) dx dy = ∫_a^b [ ∫_c^d f(x,y) dx ] dy

and

∫_{a_1}^{a_2} ∫_{b_1}^{b_2} ∫_{c_1}^{c_2} f(x,y,z) dx dy dz = ∫_{a_1}^{a_2} { ∫_{b_1}^{b_2} [ ∫_{c_1}^{c_2} f(x,y,z) dx ] dy } dz

So, if (x_i, y_i) denote random points with the appropriate uniform distribution, the following examples illustrate Monté Carlo techniques:

∫_1^9 f(x) dx ≈ ((9 − 1)/n) Σ_{i=1}^n f(x_i) = (8/n) Σ_{i=1}^n f(x_i)

∫_4^8 ∫_2^5 f(x,y) dx dy ≈ ([(8 − 4) × (5 − 2)]/n) Σ_{i=1}^n f(x_i, y_i) = (12/n) Σ_{i=1}^n f(x_i, y_i)

In each case, the random points should be uniformly distributed in the regions involved. In general, we have:

∫_A f ≈ (measure of A) × (average of f over n random points in A)


Here we are using the fact that the average of a function on a set is equal to the integral of the function over the set divided by the measure of the set.

III- Random Numbers Generators

The generation of random numbers is also at the heart of many standard statistical methods. The random sampling required in most analyses is usually done by the computer. The computations required in Bayesian analysis have become viable because of Monté Carlo methods. This has led to much wider applications of Bayesian statistics, which, in turn, has led to the development of new Monté Carlo methods and to the refinement of existing procedures for random number generation. Various methods for the generation of random numbers have been used. Sometimes, processes that are considered random are used, but for Monté Carlo methods, which depend on millions of random numbers, a physical process as a source of random numbers is generally cumbersome. Instead of “random” numbers, most applications use “pseudorandom” numbers, which are deterministic but “look” like they were generated randomly. Chapter III of my book The Computer Simulation of Monté Carlo Methods and Random Phenomena discusses methods for the generation of sequences of pseudorandom numbers that simulate a uniform distribution over the interval (a,b). The book includes the basic sequences from which are derived pseudorandom numbers from other distributions, pseudorandom samples, and pseudo-stochastic processes. Moreover, the prefix “pseudo” – from the Greek for false – describes clearly enough, when added to the word random, the process of generating such numbers. Why? This makes us return to the definition and the meaning of the word random that the philosophers have meditated upon. They asked: is there any deterministic mathematical equation that could describe the inherent randomness existent in nature, or is there none?… It is a philosophical debate that has never ended till our times.
But for us, as computer scientists and mathematicians, we agree that we can write deterministic mathematical equations that can yield random numbers closely resembling those existent in nature. Hence, we call the generators of those numbers “pseudorandom generators of random numbers”. In addition, the random number generator that we use in the present work is the C++ built-in generator that is called by the function rand(). The function srand() is also a C++ function that will generate a new random sequence


each time we run the program. In fact, the latter takes its seed value from the computer clock, and hence each time the program is executed a new sequence is produced. This surely helps in the analysis of a specific algorithm by giving us the chance to try it with a totally different and new sequence of random numbers. The two C++ functions mentioned prove to be successful and efficient. Furthermore, as we have said, other random number generators exist in the literature, and they can easily be included in all the book’s algorithms to sometimes improve the accuracy of the programs’ results.

IV- Matrices

1- Definition
A matrix A is a rectangular array of numbers. An m × n matrix has m rows and n columns; m × n is called the shape of the matrix. Such a matrix is denoted by A = [a_ij] and is the following:

A(m,n) = [ a_11  a_12  a_13  …  a_1n ]
         [ a_21  a_22  a_23  …  a_2n ]
         [ a_31  a_32  a_33  …  a_3n ]
         [  ⋮     ⋮     ⋮         ⋮  ]
         [ a_m1  a_m2  a_m3  …  a_mn ]

A matrix is square when m = n. For example, if A is a square matrix of order 3, then it is the following:

A(m,m) = A(3,3) = [ a_11  a_12  a_13 ]
                  [ a_21  a_22  a_23 ]
                  [ a_31  a_32  a_33 ]

It can also be horizontal when m = 1, like:

A(1,n) = A(1,4) = [ a_11  a_12  a_13  a_14 ],


or vertical when n = 1, like:

A(m,1) = A(4,1) = [ a_11 ]
                  [ a_21 ]
                  [ a_31 ]
                  [ a_41 ].

Note that two matrices A and B are equal if they have the same shape and the same corresponding entries. For example:

If A = [ 1  2 ]  and  B = [ 1  2 ]  ⇒  A = B
       [ 3  4 ]           [ 3  4 ]

If A = [ 1  2 ]  and  B = [ 1  −2 ]  ⇒  A ≠ B
       [ 3  4 ]           [ 3   4 ]

2- Matrix Addition and Subtraction
The addition and the subtraction of two matrices are possible if and only if the two matrices are of the same shape. Then A + B is obtained by adding the corresponding entries of A and B, and A − B is obtained by subtracting the corresponding entries of B from those of A. For example:

A(3,3) = [ 1  2  3 ]  and  B(3,3) = [ 4   5   6 ]  ⇒  C(3,3) = A + B = [  5   7   9 ]
         [ 4  5  6 ]                [ 6   7   8 ]                      [ 10  12  14 ]
         [ 7  8  9 ]                [ 9  10  11 ]                      [ 16  18  20 ]

And

A(3,3) = [ 1  2  3 ]  and  B(3,3) = [ 4   5   6 ]  ⇒  C(3,3) = A − B = [ −3  −3  −3 ]
         [ 4  5  6 ]                [ 6   7   8 ]                      [ −2  −2  −2 ]
         [ 7  8  9 ]                [ 9  10  11 ]                      [ −2  −2  −2 ]

3- Scalar Multiplication
The multiplication of a matrix A by a constant λ is λ × A and is obtained from A by multiplying all the entries of A by λ, in this manner:

λ × A(m,n) = [ λa_11  λa_12  λa_13  …  λa_1n ]
             [ λa_21  λa_22  λa_23  …  λa_2n ]
             [ λa_31  λa_32  λa_33  …  λa_3n ]
             [   ⋮      ⋮      ⋮          ⋮  ]
             [ λa_m1  λa_m2  λa_m3  …  λa_mn ]

For example:

(1/2) × [ 4  −2 ] = [  2   −1 ]
        [ 1   6 ]   [ 1/2   3 ].

4- Matrix Multiplication
To perform matrix multiplication, that is, the product A × B, the number of columns in A must be equal to the number of rows in B. Now, if A = [a_ij] = A(m,n) and B = [b_ij] = B(n,p), then:

C = [c_ij] = C(m,p),  where  c_ij = Σ_{x=1}^{n} a_ix × b_xj  for fixed i and j.

That means each entry of C = A × B is obtained by multiplying the entries of a row of A with the corresponding entries of a column of B and summing. For example, A(3,4) × B(4,3) = C(3,3):

[ 3  6  1  6 ]     [  4  2   6 ]     [ 89  45  24 ]
[ 4  7  3  4 ]  ×  [  6  1   8 ]  =  [ 83  44  80 ]
[ 5  2  4  1 ]     [ −1  3  12 ]     [ 35  29  85 ]
                   [  7  5  −9 ]

Note: A × B ≠ B × A in general; hence, we say that matrix multiplication is not, in general, commutative. But addition is always commutative.


Matrix addition, subtraction, and multiplication are easy tasks to accomplish on a computer since they don’t take a lot of computer memory or computer time.

5- Matrix Transposition
If A = [a_ij] is an m × n matrix, then the transpose of A is the n × m matrix A^T = [a_ji] obtained by interchanging the rows and columns of A. For example,

if A = [  1  5  7 ]   then   A^T = [ 1  −2 ]
       [ −2  3  4 ]                [ 5   3 ]
                                   [ 7   4 ].

6- Symmetric Matrix; Skew-Symmetric Matrix
An n × n matrix A such that A = A^T is called symmetric, and an n × n matrix A such that A = −A^T is called skew-symmetric.

7- Identity Matrix
A square matrix I is said to be an identity matrix if its entries on the main diagonal are all 1’s and its entries off the main diagonal are all 0’s; that is, I = [a_ij] is an identity matrix if a_ii = 1 and a_ij = 0 for all i ≠ j. In general:

I_n = [ 1  0  0  …  0 ]
      [ 0  1  0  …  0 ]
      [ 0  0  1  …  0 ]
      [ ⋮  ⋮  ⋮  ⋱  ⋮ ]
      [ 0  0  0  …  1 ]

For example:

I_2 = [ 1  0 ]   and   I_3 = [ 1  0  0 ]
      [ 0  1 ]               [ 0  1  0 ]
                             [ 0  0  1 ]


8- Null Matrix
An m × n matrix O whose entries are all 0’s is called a zero matrix. Note that O doesn’t have to be square. Examples:

O = [ 0  0 ] ,   O = [ 0  0  0 ] ,   and   O = [ 0  0  0 ]
    [ 0  0 ]         [ 0  0  0 ]               [ 0  0  0 ]
                                               [ 0  0  0 ]

V- Numerical Methods

1- Gauss-Jordan Method of Elimination to Solve a Linear System
Consider the n × n linear system AX = b defined by:

a_11 x_1 + a_12 x_2 + a_13 x_3 + … + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + a_23 x_3 + … + a_2n x_n = b_2
… … … … … … … …
a_n1 x_1 + a_n2 x_2 + a_n3 x_3 + … + a_nn x_n = b_n

where A is the matrix of coefficients:

A = [ a_11  a_12  a_13  …  a_1n ]
    [ a_21  a_22  a_23  …  a_2n ]
    [ a_31  a_32  a_33  …  a_3n ]
    [  ⋮     ⋮     ⋮         ⋮  ]
    [ a_n1  a_n2  a_n3  …  a_nn ]

X is the column matrix of variables: X = [x_1, x_2, x_3, …, x_n]^T,


and b is the column matrix of constants: b = [b_1, b_2, b_3, …, b_n]^T.

The augmented matrix [A | b] of the linear system is the following:

[ a_11  a_12  a_13  …  a_1n | b_1 ]
[ a_21  a_22  a_23  …  a_2n | b_2 ]
[ a_31  a_32  a_33  …  a_3n | b_3 ]
[  ⋮     ⋮     ⋮        ⋮   |  ⋮  ]
[ a_n1  a_n2  a_n3  …  a_nn | b_n ]

The Gauss-Jordan method of elimination appeared in 1888 in the popular handbook of geodesy of the German engineer Wilhelm Jordan (1842-1899). It consists of reducing the augmented matrix to its reduced row-echelon form by means of elementary row operations so that the solution set of the system can be directly obtained. That means we reduce [A | b] to:

[ 1  0  0  …  0 | s_1 ]
[ 0  1  0  …  0 | s_2 ]
[ 0  0  1  …  0 | s_3 ]
[ ⋮  ⋮  ⋮  ⋱  ⋮ |  ⋮  ]
[ 0  0  0  …  1 | s_n ]

hence, the solution is: X = [x_1, x_2, x_3, …, x_n]^T = [s_1, s_2, s_3, …, s_n]^T.

2- Gauss-Jordan Method with Pivoting
There are two kinds of pivoting:

• The first one is Partial Pivoting, where we do the row interchanges in the same column to make the pivot equal to the greatest value in absolute value.


• The second one is Total Pivoting, where we do the interchanges in the whole matrix, not just in the same column, to make the pivot equal to the greatest value in absolute value.

Pivoting is sometimes recommended when solving a system by the Gauss-Jordan method of elimination. In fact, partial pivoting strengthens the Gauss-Jordan algorithm by reducing the round-off error significantly and without any substantial increase in the operation count. However, total pivoting is not recommended in practice due to its computer time cost and the high increase in operation count. The latter will lead to a better result but at a higher cost. Moreover, in the book algorithms, we sometimes used partial and total pivoting to solve linear systems, to improve the precision of the result.

3- Gauss-Jordan Method of Inversion
The first step in the Gauss-Jordan method of inversion is to place the matrix to be inverted to the left of an identity matrix of the same order. Together, the two matrices comprise an augmented matrix. For a square matrix A of order n, this means that the augmented matrix is developed as:

[ A_{n×n} | I_n ]

The method requires that elementary row operations be performed on the augmented matrix until the part representing the coefficient matrix A becomes, if possible, an identity matrix of the same order. At this point, the part originally representing the identity matrix I_n is the inverse of the coefficient matrix. In notation form:

[ A_{n×n} | I_n ]   becomes   [ I_n | A^−1_{n×n} ]

Finally, the final solution of the n × n linear system AX = b is:

X = [x_1, x_2, x_3, …, x_n]^T = A^−1 b = A^−1 × [b_1, b_2, b_3, …, b_n]^T

and is obtained by multiplying the matrix A^−1, taken from [ I_n | A^−1_{n×n} ], on the right by the matrix of constants b.


4- Overdetermined Systems

In the previous sections we solved $n \times n$ systems of linear equations, that is, systems with n rows and n columns. Now we will deal with overdetermined systems of linear equations of the form $AX = b$, where the matrix A has more rows than columns. No ordinary method, such as Gauss, Gauss-Jordan, LU decomposition, Jacobi, or Gauss-Seidel, can be used to solve such a system. We say that the system $AX = b$ is inconsistent and over-constrained. The method of solution of overdetermined systems is to find a close solution to the problem; otherwise the systems have no solution at all. The method used here is the one used to solve least-squares problems. For example, consider the following system:

$$\begin{cases} a_{11}x_1 + a_{12}x_2 = b_1 \\ a_{21}x_1 + a_{22}x_2 = b_2 \\ a_{31}x_1 + a_{32}x_2 = b_3 \end{cases}$$

We have here two unknowns $x_1$ and $x_2$ and three equations, so we say that the system is overdetermined. The problem as it stands probably has no solution. So, we will rewrite it as:

$$\begin{cases} a_{11}x_1 + a_{12}x_2 - b_1 = r_1 \\ a_{21}x_1 + a_{22}x_2 - b_2 = r_2 \\ a_{31}x_1 + a_{32}x_2 - b_3 = r_3 \end{cases}$$

so as to be able to solve it. The numbers $r_1, r_2, r_3$ are called the residuals. The idea is to find the values of $x_1, x_2$ which make $r_1^2 + r_2^2 + r_3^2$ minimal. After mathematical analysis, this gives:

$$\begin{cases} (a_1, a_1)x_1 + (a_1, a_2)x_2 = (a_1, b) \\ (a_2, a_1)x_1 + (a_2, a_2)x_2 = (a_2, b) \end{cases}$$

which is equal to:

$$\begin{cases} (a_{11}^2 + a_{21}^2 + a_{31}^2)x_1 + (a_{11}a_{12} + a_{21}a_{22} + a_{31}a_{32})x_2 = a_{11}b_1 + a_{21}b_2 + a_{31}b_3 \\ (a_{12}a_{11} + a_{22}a_{21} + a_{32}a_{31})x_1 + (a_{12}^2 + a_{22}^2 + a_{32}^2)x_2 = a_{12}b_1 + a_{22}b_2 + a_{32}b_3 \end{cases}$$

For the general problem of m equations in n unknowns (m > n):

$$\begin{cases} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \\ \quad\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m \end{cases}$$

An almost identical argument leads to the equations:

$$\begin{cases} (a_1, a_1)x_1 + (a_1, a_2)x_2 + \cdots + (a_1, a_n)x_n = (a_1, b) \\ (a_2, a_1)x_1 + (a_2, a_2)x_2 + \cdots + (a_2, a_n)x_n = (a_2, b) \\ \quad\vdots \\ (a_n, a_1)x_1 + (a_n, a_2)x_2 + \cdots + (a_n, a_n)x_n = (a_n, b) \end{cases}$$

Note that we will use this method when finding the fixed probability vector for the transition matrix in a Markov chain problem in chapter VIII; otherwise, the system obtained cannot be solved on computers by ordinary techniques.

VI- Complex Numbers

We will use complex numbers in chapter IX of the book, when joining probability theory with complex variables and then applying it to the Brownian motion. This work was published in 2015 with Taylor and Francis in my paper entitled “The Paradigm of Complex Probability and the Brownian Motion”. All the definitions below are essential to fully understand the novel “Complex Probability Paradigm”.

1- The Complex Number System

There is no real number x which satisfies the polynomial equation $x^2 + 1 = 0$. To permit solutions of this and similar equations, the set of complex numbers is introduced.


We can consider a complex number as having the form $a + bi$, where a and b are real numbers and i, which is called the imaginary unit, has the property that $i^2 = -1$, or $i = \sqrt{-1}$. If $z = a + bi$, then a is called the real part of z and b is called the imaginary part of z; they are denoted by $\mathrm{Re}(z)$ and $\mathrm{Im}(z)$ respectively. The symbol z, which can stand for any of a set of complex numbers, is called a complex variable. Two complex numbers $a + bi$ and $c + di$ are equal if and only if $a = c$ and $b = d$. We can consider real numbers as a subset of complex numbers with b = 0. Thus, the complex numbers 0 + 0i and -3 + 0i represent the real numbers 0 and -3 respectively. If a = 0, the complex number 0 + bi, or bi, is called a pure imaginary number. The complex conjugate, or briefly conjugate, of a complex number a + bi is a - bi. The complex conjugate of a complex number z is often indicated by $\bar{z}$ or $z^*$.

2- Fundamental Operations with Complex Numbers

In performing operations with complex numbers, we can proceed as in the algebra of real numbers, replacing $i^2$ by -1 when it occurs.
- Addition: $(a + bi) + (c + di) = a + bi + c + di = (a + c) + (b + d)i$
- Subtraction: $(a + bi) - (c + di) = a + bi - c - di = (a - c) + (b - d)i$
- Multiplication: $(a + bi) \times (c + di) = ac + adi + bci + bdi^2 = (ac - bd) + (ad + bc)i$
- Division:

$$\frac{a + bi}{c + di} = \frac{a + bi}{c + di} \times \frac{c - di}{c - di} = \frac{ac - adi + bci - bdi^2}{c^2 - d^2 i^2} = \frac{ac + bd + (bc - ad)i}{c^2 + d^2} = \frac{ac + bd}{c^2 + d^2} + \frac{bc - ad}{c^2 + d^2}\,i$$

3- Absolute Value

The absolute value, or modulus, or norm of a complex number $z = a + bi$ is defined as:

$$|z| = |a + bi| = \sqrt{a^2 + b^2}$$


If $z_1, z_2, z_3, \ldots, z_m$ are complex numbers, the following properties hold:
- $|z_1 z_2| = |z_1|\,|z_2|$, or $|z_1 z_2 \cdots z_m| = |z_1|\,|z_2|\cdots|z_m|$
- $\left|\dfrac{z_1}{z_2}\right| = \dfrac{|z_1|}{|z_2|}$ if $z_2 \neq 0$
- $|z_1 + z_2| \le |z_1| + |z_2|$, or $|z_1 + z_2 + \cdots + z_m| \le |z_1| + |z_2| + \cdots + |z_m|$
- $|z_1 + z_2| \ge |z_1| - |z_2|$, or $|z_1 - z_2| \ge |z_1| - |z_2|$

4- Graphical Representation of Complex Numbers

Since a complex number $x + iy$ can be considered as an ordered pair of real numbers, we can represent such numbers by points in an xy plane called the complex plane or Argand diagram. The complex number represented by P, for example, could then be read as either (3, 4) or 3 + 4i (Figure 2-1). To each complex number there corresponds one and only one point in the plane, and conversely to each point in the plane there corresponds one and only one complex number. Because of this we often refer to the complex number z as the point z. Sometimes we refer to the x and y axes as the real and imaginary axes respectively, and to the complex plane as the z plane. The distance between two points $z_1 = x_1 + iy_1$ and $z_2 = x_2 + iy_2$ in the complex plane is given by:

$$|z_1 - z_2| = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$$

Figure 2-1. The complex plane or Argand plane and the complex number P.


5- Vector Interpretation of Complex Numbers

Figure 2-2. The complex number z as a vector OP.

A complex number $z = x + iy$ can be considered as a vector OP whose initial point is the origin O and whose terminal point is the point P = (x, y) (Figure 2-2). We sometimes call OP = x + iy the position vector of P. Two vectors having the same length, or magnitude, and direction but different initial points, such as OP and AB in Figure 2-2, are considered equal. Hence, we write OP = AB = x + iy. Addition of complex numbers corresponds to the parallelogram law for the addition of vectors (Figure 2-3). Thus, to add the complex numbers $z_1$ and $z_2$, we complete the parallelogram OABC whose sides OA and OC correspond to $z_1$ and $z_2$. The diagonal OB of this parallelogram corresponds to $z_1 + z_2$.

Figure 2-3. The parallelogram law for the addition of two vectors z1 and z2.

6- Leonhard Euler’s Formula and Abraham De Moivre’s Theorem

The following formula:

$$e^{i\theta} = \cos\theta + i\sin\theta, \quad \text{where } e = 2.71828\ldots$$

is called Euler’s formula. And the formula:

$$z^n = \left(re^{i\theta}\right)^n = r^n e^{in\theta} = \left[r(\cos\theta + i\sin\theta)\right]^n = r^n(\cos n\theta + i\sin n\theta)$$

is called De Moivre’s theorem.

VII- Conclusion

To conclude this chapter, we mention that the full explanations, theorems, and detailed proofs can be found in other manuals as well as in the references cited in the bibliography of the present manuscript. In addition, the purpose of this chapter is to give the reader a basic summary of the fundamental mathematical concepts and methods that will be used in the proofs and algorithms of the book to solve the various probability and stochastic problems, without losing space and time on meticulous mathematical details. Thus, while always remaining within the scope of the topic, our work will be as complete as possible.

CHAPTER III

BASIC PROBABILITY

“Thus, joining the rigor of the demonstrations of science to the uncertainty of fate, and reconciling these two seemingly contradictory things, it can, taking its name from both, appropriately arrogate to itself this astonishing title: the geometry of chance.” Blaise Pascal.

“Chance is the pseudonym of God when He did not want to sign.” Anatole France.

I- Introduction Probability theory is a part of mathematics that is useful for discovering and investigating the regular features of random events. Although it is not really possible to give a precise and simple definition of what is meant by the word random and regular, I hope that the definitions and examples given in this chapter and in the whole book will help the reader understand these concepts. Moreover, certain phenomena in the real world may be considered chance phenomena. These phenomena do not always produce the same observed outcome, and the outcome of any given observation of the phenomena may not be predictable. But they have a long-range behavior known as statistical regularity. In some cases, we are familiar enough with the phenomena under investigation to feel justified in making exact predictions with respect to the result of each individual observation. For example, if you want to know the time when the sun will set, you can find the exact time in the weather section of a newspaper. However, in many cases our knowledge is not precise enough to allow exact predictions in particular simulations. Some examples of such cases, called random events, follow. The problems in this chapter demonstrate that in studying a sequence of random experiments it is not possible to forecast individual results. These are subject to irregular, random fluctuations that cannot be exactly

predicted. However, if the number of observations is large – that is, if we deal with a mass phenomenon – some regularity appears. In studying probability, we are concerned with experiments, real or conceptual, and their outcomes. In this study we try to formulate in a precise manner a mathematical theory that closely resembles the experiment in question. The first stage of development of such a mathematical theory is the building of what is termed a probability model. This model is then used to analyze and predict outcomes of the experiment. The purpose of this chapter is to learn how a probability model can be constructed, to state and prove results on the probability of an event using set theory, to look at several examples of probability problems and the related counting problems, to work some examples to illustrate conditional probability, to acquire an understanding of the meaning of independent events, to apply Bayes’ formula, and to simulate some important and historical random experiments on computers to predict their output. In addition, I have chosen a set of problems that represents different classes of probability applications, like the game of dice, the game of cards, the game of coins, the game of dominoes, the bag problems, the urn problem, the two-boxes problem, and some historical problems like the Huyghens problem, the Bernoulli problem, the De Meré problem, the De Moivre problem, the Bose-Einstein problem, the Fermi-Dirac problem, etc. Surely, hundreds could be cited and solved, but we have left room for other chapters dealing with other probability and statistical concepts. Moreover, when writing the mathematics involved to solve the different problems, I have used modern terminology and notation that are currently used in books on probability. I hope you will taste and share with me the pleasure of working on and solving stochastic problems…

II- The Theory

It is important, before extensively using the concept of probability throughout the book, that we define it first. In fact, what follows is a list of definitions that we will use in the present and subsequent chapters:

Definition 1: In some experiments we are not able to ascertain or control the value of certain variables, so that the results will vary from one performance of the experiment to the next even though most of the conditions are the same. These experiments are described as random. As examples of random experiments, we mention: tossing one or many coins, rolling one or many dice, drawing one or many cards from a deck, measuring the lifetimes of electric light bulbs, choosing one or many pieces from 28 domino pieces, or selecting a message signal for transmission from several messages, etc.

Definition 2: The set of all possible outcomes of a statistical experiment is called the sample space and is represented by the symbol S. A sample space that is countably finite or countably infinite is called a discrete sample space, while one that is uncountable is called a continuous sample space.

Definition 3: An event is a subset of a sample space S, i.e., it is a set of possible outcomes. If the outcome of an experiment is an element of A, we say that the event A has occurred. An event consisting of a single point of S is often called a simple or elementary event.

Definition 4: The complement of an event A with respect to S is the subset of all elements of S that are not in A. We denote the complement of A by the symbol $A'$ or $\bar{A}$.

Definition 5: The intersection of two events A and B, denoted by the symbol $A \cap B$, is the event containing all elements that are common to A and B.

Definition 6: Two events A and B are mutually exclusive, or disjoint, if $A \cap B = \varnothing$, that is, if A and B have no elements in common.

Definition 7: The union of the two events A and B, denoted by the symbol $A \cup B$, is the event containing all the elements that belong to A or B or both.

Definition 8: (De Morgan’s laws) For any two events or sets A and B:

$$(A \cup B)' = A' \cap B' \quad \text{and} \quad (A \cap B)' = A' \cup B'$$

In general, for the events or sets $A_1, A_2, A_3, \ldots, A_n$:

$$(A_1 \cup A_2 \cup A_3 \cup \cdots \cup A_n)' = A_1' \cap A_2' \cap A_3' \cap \cdots \cap A_n'$$
$$(A_1 \cap A_2 \cap A_3 \cap \cdots \cap A_n)' = A_1' \cup A_2' \cup A_3' \cup \cdots \cup A_n'$$

So, we can write:

$$\left(\bigcup_{j=1}^{n} A_j\right)' = \bigcap_{j=1}^{n} A_j' \quad \text{and} \quad \left(\bigcap_{j=1}^{n} A_j\right)' = \bigcup_{j=1}^{n} A_j'$$

Definition 9: (Fundamental Principle of Counting) If one thing can be accomplished in $n_1$ different ways, and after this a second thing can be accomplished in $n_2$ different ways, …, and finally a k-th thing can be accomplished in $n_k$ different ways, then all k things can be accomplished in the specified order in $n_1 \times n_2 \times \cdots \times n_k$ different ways.

Definition 10: A permutation is an arrangement of all or part of a set of objects. The number of permutations of n distinct objects is $n!$. The number of permutations of n distinct objects taken r at a time is:

$$_nP_r = \frac{n!}{(n-r)!}$$

If n = r then we have $_nP_n = n!$. Note that $0! = 1$.

Definition 11: The number of distinct permutations of n things of which $n_1$ are of one kind, $n_2$ of a second kind, …, $n_k$ of a k-th kind is:

$$\frac{n!}{n_1!\,n_2!\cdots n_k!}$$

Definition 12: The number of arrangements of a set of n objects into r cells with $n_1$ elements in the first cell, $n_2$ elements in the second, and so forth, is:

$$\binom{n}{n_1, n_2, \ldots, n_r} = \frac{n!}{n_1!\,n_2!\cdots n_r!}, \quad \text{where } n_1 + n_2 + \cdots + n_r = n$$

Definition 13: The number of combinations of n distinct objects taken r at a time is:

$$_nC_r = \frac{n!}{r!\,(n-r)!}$$

It can also be written as:

$$\binom{n}{r} = \frac{n(n-1)(n-2)\cdots(n-r+1)}{r!} = \frac{_nP_r}{r!}$$

Definition 14: We can easily show the following relations:
- $\binom{n}{r} = \binom{n}{n-r}$, or $_nC_r = {}_nC_{n-r}$
- $\binom{n}{r} = \binom{n-1}{r-1} + \binom{n-1}{r}$, or $_nC_r = {}_{n-1}C_{r-1} + {}_{n-1}C_r$
- $\binom{n}{n} = \binom{n}{0} = 1$ and $\binom{n}{1} = n$

Moreover, we assume that $\binom{n}{r} = 0$ if n < 0 or r > n.

Definition 15: (Binomial Theorem) For any real numbers a and b and any positive integer n, we have:

$$(a+b)^n = \sum_{r=0}^{n} \binom{n}{r} a^{n-r} b^r = \binom{n}{0}a^n + \binom{n}{1}a^{n-1}b + \binom{n}{2}a^{n-2}b^2 + \cdots + \binom{n}{n}b^n$$

Definition 16: (Stirling’s Approximation to n!) When n is large, a direct evaluation of n! may be impractical. In such cases use can be made of the approximate formula:

$$n! \approx \sqrt{2\pi n}\;n^n e^{-n}$$

where e = 2.71828…, which is the base of natural logarithms. The symbol ≈ in the relation means that the ratio of the left side to the right side approaches 1 as $n \to \infty$. Computing technology has largely eclipsed the value of Stirling’s formula for numerical computations, but the approximation remains valuable for theoretical estimates.

Definition 17: The probability of an event A is the sum of the weights of all sample points in A. Therefore:

1) $0 \le P(A) \le 1$
2) $P(\varnothing) = 0$
3) $P(S) = 1$

Definition 18: If $A_1, A_2, A_3, \ldots$ is a sequence of mutually exclusive events, then:

$$P(A_1 \cup A_2 \cup A_3 \cup \cdots) = P(A_1) + P(A_2) + P(A_3) + \cdots$$

Definition 19: If an experiment can result in any one of N different equally likely outcomes, and if exactly n of these outcomes correspond to event A, then the probability of event A is:

$$P(A) = \frac{n}{N}$$

Definition 20: If A and B are any two events, then:

$$P(A \cup B) = P(A) + P(B) - P(A \cap B)$$

Definition 21: If A and B are mutually exclusive, then:

$$P(A \cup B) = P(A) + P(B)$$

Definition 22: If $A_1, A_2, \ldots, A_n$ are mutually exclusive events, then:

$$P(A_1 \cup A_2 \cup \cdots \cup A_n) = P(A_1) + P(A_2) + \cdots + P(A_n)$$

Definition 23: If $A_1, A_2, \ldots, A_n$ is a partition of a sample space S, then:

$$P(A_1 \cup A_2 \cup \cdots \cup A_n) = P(A_1) + P(A_2) + \cdots + P(A_n) = P(S) = 1$$


Definition 24: For three events A, B, and C:

$$P(A \cup B \cup C) = P(A) + P(B) + P(C) - P(A \cap B) - P(A \cap C) - P(B \cap C) + P(A \cap B \cap C)$$

Definition 25: If A and A' are complementary events, then:

$$P(A) + P(A') = 1 \quad \text{or} \quad P(A') = 1 - P(A)$$

Definition 26: For any events A and B:

$$P(A) = P(A \cap B) + P(A \cap B')$$

Definition 27: The conditional probability of B, given A, is denoted by:

$$P(B \mid A) = \frac{P(A \cap B)}{P(A)}, \quad \text{if } P(A) > 0$$

Definition 28: Two events A and B are independent if and only if:

$$P(B \mid A) = P(B) \quad \text{or} \quad P(A \mid B) = P(A)$$

Otherwise, A and B are dependent.

Definition 29: If in an experiment the events A and B can both occur, then:

$$P(A \cap B) = P(A)\,P(B \mid A) = P(B)\,P(A \mid B)$$

Definition 30: Two events A and B are independent if and only if:

$$P(A \cap B) = P(A)\,P(B)$$

Definition 31: If, in an experiment, the events $A_1, A_2, \ldots, A_k$ can occur, then:

$$P(A_1 \cap A_2 \cap A_3 \cap \cdots \cap A_k) = P(A_1)\,P(A_2 \mid A_1)\,P(A_3 \mid A_1 \cap A_2)\cdots P(A_k \mid A_1 \cap A_2 \cap \cdots \cap A_{k-1})$$


If the events $A_1, A_2, \ldots, A_k$ are independent, then:

$$P(A_1 \cap A_2 \cap A_3 \cap \cdots \cap A_k) = P(A_1)\,P(A_2)\,P(A_3)\cdots P(A_k)$$

Definition 32: If the events $B_1, B_2, \ldots, B_k$ constitute a partition of the sample space S such that $P(B_i) \neq 0$ for i = 1, 2, …, k, then for an event A of S:

$$P(A) = \sum_{i=1}^{k} P(B_i \cap A) = \sum_{i=1}^{k} P(B_i)\,P(A \mid B_i)$$

Definition 33: (Bayes’ Rule) If the events $B_1, B_2, \ldots, B_k$ constitute a partition of the sample space S, where $P(B_i) \neq 0$ for i = 1, 2, …, k, then for any event A in S such that $P(A) \neq 0$:

$$P(B_r \mid A) = \frac{P(B_r \cap A)}{\sum_{i=1}^{k} P(B_i \cap A)} = \frac{P(B_r)\,P(A \mid B_r)}{\sum_{i=1}^{k} P(B_i)\,P(A \mid B_i)}, \quad \text{for } r = 1, 2, \ldots, k$$

III- Problems, Applications, and Algorithms

Firstly, what follows is a set of 29 algorithms that illustrate the definitions, theorems, and probability concepts stated above. Secondly, each application is divided into three parts: the given data of the application, the theoretical analysis that explains the mathematical solution of the problem, and the simulation included in the computer program. The analysis parts and the programs are entirely my creation; hence, they were written in my own style and using my personal methods of proof. Thirdly, each computer program is divided into a theoretical solution and a simulated solution that computes the answer to the problem and agrees very well with the theoretical answer. Fourthly, the number of iterations inside the simulation part of each program is arbitrary but crucial. As a matter of fact, I chose a limit generally not greater than 1 billion iterations to decrease the run-time of the algorithm, so that the user will not lose patience waiting for the final result, as well as a limit generally not smaller than 1 million iterations, so as to reach a respectable output precision. Therefore, if the user seeks better precision, he can simply increase the number of iterations, which is the variable limit in the algorithms. Finally, 29 algorithms are surely not enough to encompass the whole field, but they have served their purpose in covering the main themes of the probability model. The reader can refer to my book The Computer Simulation of Monté Carlo Methods and Random Phenomena, published in 2019 with Cambridge Scholars Publishing, to discover other interesting stochastic problems. This surely doesn’t mean that the present work is incomplete or insufficient to understand probability theory, but the interested and inquisitive reader can refer to the book already mentioned to discover more simulation problems that will satisfy him.

1- The Cards Problem

A card is drawn at random from an ordinary deck of 52 playing cards. The program finds the probability that the card is:

1) An Ace
2) A Jack of Hearts
3) A Queen of Diamonds
4) A King of Clubs
5) A Seven of Spades
6) A Heart
7) A Three of Clubs or a Six of Diamonds
8) Any Suit except Hearts
9) A Ten or a Spade
10) Neither a Four nor a Club
11) Neither a Heart nor a Spade
12) An Eight of Hearts or a Five of Diamonds or a Nine of Clubs

Theoretical Solution

Let us use for brevity H, S, D, C to indicate heart, spade, diamond, club, respectively, and 1, 2, …, 13 for ace, two, three, …, king. Then $3 \cap H$ means three of hearts, while $3 \cup H$ means three or heart. Let us assign an equal probability of 1/52 to each sample point of the sample space consisting of all the 52 cards. For example, $P(6 \cap C) = 1/52$. The theoretical calculations for each of the 12 outcomes above give:


For 1):

$$P(1 \cap H \text{ or } 1 \cap S \text{ or } 1 \cap D \text{ or } 1 \cap C) = P(1 \cap H) + P(1 \cap S) + P(1 \cap D) + P(1 \cap C) = 1/52 + 1/52 + 1/52 + 1/52 = 4/52 = 1/13 = 0.076923076\ldots$$

This could also have been arrived at by simply reasoning that there are 13 numbers and so each has a probability 1/13 of being drawn.

For 2), 3), 4), and 5):

$$P(11 \cap H) = P(12 \cap D) = P(13 \cap C) = P(7 \cap S) = 1/52 = 0.019230769\ldots$$

For 6):

$$P(H) = P(1 \cap H \text{ or } 2 \cap H \text{ or } \ldots \text{ or } 13 \cap H) = P(1 \cap H) + P(2 \cap H) + \cdots + P(13 \cap H) = 13/52 = 1/4 = 0.25$$

This could also have been arrived at by noting that there are four suits and each has an equal probability 1/4 of being drawn.

For 7):

$$P(3 \cap C \text{ or } 6 \cap D) = P(3 \cap C) + P(6 \cap D) = 1/52 + 1/52 = 1/26 = 0.038461538\ldots$$

For 8):

$$P(H') = 1 - P(H) = 1 - 1/4 = 3/4 = 0.75$$

For 9): Since 10 and S are not mutually exclusive, we have:

$$P(10 \cup S) = P(10) + P(S) - P(10 \cap S) = 1/13 + 1/4 - 1/52 = 4/13 = 0.307692307\ldots$$

For 10): The probability of neither a four nor a club can be denoted by $P(4' \cap C')$. But $4' \cap C' = (4 \cup C)'$ by De Morgan’s law; therefore:

$$P(4' \cap C') = P[(4 \cup C)'] = 1 - P(4 \cup C) = 1 - [P(4) + P(C) - P(4 \cap C)] = 1 - [1/13 + 1/4 - 1/52] = 1 - 4/13 = 9/13 = 0.692307692\ldots$$

For 11): Since H and S are mutually exclusive:

$$P(H' \cap S') = P[(H \cup S)'] = 1 - P(H \cup S) = 1 - P(H) - P(S) = 1 - 13/52 - 13/52 = 26/52 = 1/2 = 0.5$$

For 12):

$$P(8 \cap H \text{ or } 5 \cap D \text{ or } 9 \cap C) = P(8 \cap H) + P(5 \cap D) + P(9 \cap C) = 1/52 + 1/52 + 1/52 = 3/52 = 0.057692307\ldots$$

The simulation agrees very well with the theoretical results already calculated. The algorithm is illustrated in Cards.cpp.

#include <iostream>
#include
#include
#include
#include

using namespace std;

void theoretical(int);
void simulated(int);

int main()
{
    int card;
    char answer;

    answer = 'Y';
    while ((answer